dipy-1.11.0/.codecov.yml

github_checks:
  annotations: false
comment:
  layout: "reach, diff, files"
  behavior: default
  require_changes: false  # if true: only post the comment if coverage changes
  require_base: no        # [yes :: must have a base report to post]
  require_head: yes       # [yes :: must have a head report to post]
  branches: null

ignore:
  - "*/benchmarks/*"
  - "setup.py"
  - "conftest.py"
  - "*/conftest.py"
  - "*/setup.py"
  - "*/tests/*"

coverage:
  status:
    project:
      default:
        # Drops on the order of 0.01% are typical even when no change occurs
        # Having this threshold set a little higher (0.1%) than that makes it
        # a little more tolerant to fluctuations
        target: auto
        threshold: 0.5%
    patch:
      default:
        target: auto
        threshold: 0.5%

dipy-1.11.0/.codespellrc

[codespell]
skip = .git,*.pdf,*.svg,*.bib
# reson -- abbreviation from Resonance frequently used
# nd -- import shortening etc
# te -- the TE
# ue,lod,ans,lastr,numer,lamda,PrIs -- variable names
# Hart,Flagg -- name
ignore-words-list = laf,reson,nd,te,ue,lod,hart,ans,lastr,numer,lamda,flagg,pris

dipy-1.11.0/.coveragerc

[run]
branch = True
source = dipy
include = */dipy/*
omit =
    */setup.py
    */benchmarks/*
    */tests/*
disable_warnings = include-ignored

[report]
show_missing = True
exclude_lines =
    pragma: no cover
    def __repr__
    if self.debug:
    if settings.DEBUG
    raise AssertionError
    raise NotImplementedError
    if 0:
    if __name__ == .__main__.:

dipy-1.11.0/.gitattributes

dipy/COMMIT_INFO.txt export-subst

dipy-1.11.0/.github/CODE_OF_CONDUCT.md

# Community Guidelines

DIPY is a [NIPY](https://nipy.org) project, and we strive to adhere to the
[NIPY Community Code](https://nipy.org/conduct.html), reproduced below.

The NIPY community is a community of practice devoted to the use of the Python
programming language in the analysis of neuroimaging data. The following code
of conduct is a guideline for our behavior as we participate in this
community. It is based on, and heavily inspired by, a reading of the Python
community code of conduct, the Apache foundation code of conduct, the Debian
code of conduct, and the Ten Principles of Burning Man.

## The code of conduct for the NIPY community

The Neuroimaging in Python (NIPY) community is made up of members with a
diverse set of skills, personalities, backgrounds, and experiences. We welcome
these differences because they are the source of diverse ideas, solutions and
decisions about our work. Decisions we make affect users, colleagues, and,
through scientific results, the general public. We take these consequences
seriously when making decisions. When you are working with members of the
community, we encourage you to follow these guidelines, which help steer our
interactions and help keep NIPY a positive, successful, and growing community.
### As members of the NIPY community we try to:

#### Be open

Members of the community are open to collaboration, be it on the reuse of
data, on the implementation of methods, on finding technical solutions, on
establishing best practices, and otherwise. We are accepting of all who wish
to take part in our activities, fostering an environment where anyone can
participate and everyone can make a difference.

#### Be collaborative

Our work will be used by other people, and in turn we will depend on the work
of others. When we make something for the benefit of others, we are willing to
explain to others how it works, so that they can build on the work to make it
even better. We are willing to provide constructive criticism on the work of
others and accept criticism of our own work, as the experiences and skill sets
of other members contribute to the whole of our efforts.

#### Be inquisitive

Nobody knows everything! Asking questions early avoids many problems later, so
questions are encouraged, though they may be directed to the appropriate
forum. Those who are asked should be responsive and helpful, within the
context of our shared goal of improving neuroimaging practice.

#### Be considerate

Members of the community are considerate of their peers. We are thoughtful
when addressing the efforts of others, keeping in mind that oftentimes the
labor was completed simply for the good of the community. We are attentive in
our communications, whether in person or online, and we are tactful when
approaching differing views.

#### Be careful in the words that we choose

We value courtesy, kindness and inclusiveness in all our interactions.
Therefore, we take responsibility for our own speech. In particular, we avoid:

* Personal insults.
* Violent threats or language directed against another person.
* Sexist, racist, or otherwise discriminatory jokes and language.
* Any form of sexual or violent material.
* Sharing private content, such as emails sent privately or non-publicly, or
  unlogged forums such as IRC channel history.
* Excessive or unnecessary profanity.
* Repeated harassment of others. In general, if someone asks you to stop, then
  stop.
* Advocating for, or encouraging, any of the above behavior.

#### Try to be concise in communication

Keep in mind that what you write once will be read by many others. Writing a
short email means people can understand the conversation as efficiently as
possible. Even short emails should always strive to be empathetic, welcoming,
friendly and patient. When a long explanation is necessary, consider adding a
summary.

Try to bring new ideas to a conversation, so that each message adds something
unique to the conversation. Keep in mind that, when using email, the rest of
the thread still contains the other messages with arguments that have already
been made.

Try to stay on topic, especially in discussions that are already fairly long
and complex.

#### Be respectful

Members of the community are respectful. We are respectful of others, their
positions, their skills, their commitments, and their efforts. We are
respectful of the volunteer and professional efforts that permeate the NIPY
community. We are respectful of the processes set forth in the community, and
we work within them. When we disagree, we are courteous and kind in raising
our issues.

## Incident Reporting

We put great value on respectful, friendly and helpful communication.
If you feel that any of our DIPY communications lack respect, or are
unfriendly or unhelpful, please try the following steps:

* If you feel able, please let the person who sent the email or comment know
  that you found it disrespectful / unhelpful / unfriendly, and why;
* If you don't feel able to do that, or that didn't work, please contact
  Eleftherios Garyfallidis directly by email (), and he will do his best to
  resolve it. If you don't feel comfortable contacting Eleftherios, please
  email Ariel Rokem () instead.

## Attribution

The vast majority of the above was taken from the NIPY Code of Conduct.

dipy-1.11.0/.github/CONTRIBUTING.md

# Contributing to DIPY

DIPY is an open-source software project, and we have an open development
process. This means that we welcome contributions from anyone. We do ask that
you first read this document and follow the guidelines we have outlined here,
and that you follow the [NIPY community code of
conduct](https://nipy.org/conduct.html).

## Getting started

If you are looking for places that you could make a meaningful contribution,
please contact us! We respond to queries on the [Nipy mailing
list](https://mail.python.org/mailman/listinfo/neuroimaging), and to questions
on our [gitter channel](https://gitter.im/dipy/dipy). A good place to get an
idea for things that currently need attention is the
[issues](https://github.com/dipy/dipy/issues) page of our Github repository.
This page collects outstanding issues that you can help address. Join the
conversation about the issue by typing into the text box in the issue page.

## The development process

Please refer to the [development
section](https://docs.dipy.org/stable/devel/index.html#development) of the
documentation for the procedures we use in developing the code.

## When writing code, please pay attention to the following:

### Tests and test coverage

We use [pytest](https://docs.pytest.org) to write tests of the code, and
[GitHub Actions](https://github.com/dipy/dipy/actions) for continuous
integration. If you are adding code into a module that already has a 'test'
file (e.g., if you are adding code into ``dipy/tracking/streamline.py``), add
additional tests into the respective file (e.g.,
``dipy/tracking/tests/test_streamline.py``).

New contributions are required to have as close to 100% code coverage as
possible. This means that the tests written cause each and every statement in
the code to be executed, covering corner-cases, error-handling, and logical
branch points. To check how much coverage the tests have, you will need the
``coverage`` package. When running:

    coverage run -m pytest -s --doctest-modules --verbose dipy

You will get the usual output of pytest, but also a table that indicates the
test coverage in each module: the percentage of coverage and also the lines of
code that are not run in the tests. You can also see the test coverage in the
CI run corresponding to the PR (in the log for the machine with
``COVERAGE=1``).

If your contributions are to a single module, you can see test and coverage
results for only that module without running all of the DIPY tests. For
example, if you are adding code to ``dipy/core/geometry.py``, you can use:

    coverage run --source=dipy.core.geometry -m pytest -s --doctest-modules --verbose dipy/core/tests/test_geometry.py

You can then use ``coverage report`` to view the results, or use ``coverage
html`` and open htmlcov/index.html in your browser for a nicely formatted
interactive coverage report.
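To make this concrete, here is a minimal sketch of what a new test can look
like. It uses ``dipy.tracking.streamline.length`` purely as an illustration;
adapt the import and the assertion to the code you are actually contributing:

    import numpy as np
    import numpy.testing as npt

    from dipy.tracking.streamline import length

    def test_length_straight_line():
        # Two segments along z, of lengths 1 and 2, give a total length of 3.
        streamline = np.array([[0, 0, 0], [0, 0, 1], [0, 0, 3]], dtype=float)
        npt.assert_almost_equal(length(streamline), 3.0)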
Contributions to tests that extend test coverage in older modules that are not
fully covered are very welcome!

### Code style

Code contributions should be formatted according to the [DIPY Coding Style
Guideline](../doc/devel/coding_style_guideline.rst). Please read the document
to conform your code contributions to the DIPY standard.

### Documentation

DIPY uses [Sphinx](https://www.sphinx-doc.org/en/master/index.html) to
generate documentation. The [DIPY Coding Style
Guideline](../doc/devel/coding_style_guideline.rst) contains details about
documenting the contributions.

dipy-1.11.0/.github/ISSUE_TEMPLATE.md

## Description

[Please provide a general introduction to the issue/proposal.]

[If reporting a bug, attach the entire traceback from Python and follow the
way to reproduce below.]

[If proposing an enhancement/new feature, provide links to related articles,
reference examples, etc.]

## Way to reproduce

[If reporting a bug, please include the following important information:]

- [ ] Code example
- [ ] Relevant images (if any)
- [ ] Operating system and version (run `python -c "import platform; print(platform.platform())"`)
- [ ] Python version (run `python -c "import sys; print('Python', sys.version)"`)
- [ ] dipy version (run `python -c "import dipy; print(dipy.__version__)"`)
- [ ] dependency versions (numpy, scipy, nibabel, h5py, cvxpy, fury)
  * import numpy; print("NumPy", numpy.__version__)
  * import scipy; print("SciPy", scipy.__version__)
  * import nibabel; print("Nibabel", nibabel.__version__)
  * import h5py; print("H5py", h5py.__version__)
  * import cvxpy; print("Cvxpy", cvxpy.__version__)
  * import fury; print("fury", fury.__version__)

dipy-1.11.0/.github/dependabot.yml

# Set update schedule for GitHub Actions
version: 2
updates:
  - package-ecosystem: "github-actions"
    directory: "/"
    schedule:
      # Check for updates to GitHub Actions every week
      interval: "weekly"
    groups:
      actions:
        patterns:
          - "*"

dipy-1.11.0/.github/workflows/benchmark.yml

name: Benchmarks
on: [push, pull_request]

concurrency:
  group: build-${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
  cancel-in-progress: true

jobs:
  benchmark:
    name: Linux
    runs-on: ubuntu-latest
    defaults:
      run:
        shell: bash
    strategy:
      fail-fast: false
      matrix:
        python-version: [ '3.11' ]
    steps:
      - name: Set up system
        uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install .[dev,test,ml,extra,benchmark]
      - name: Set threading parameters for reliable benchmarking
        run: |
          export OPENBLAS_NUM_THREADS=1
          export MKL_NUM_THREADS=1
          export OMP_NUM_THREADS=1
      - name: Run benchmarks
        run: |
          asv machine --yes --config benchmarks/asv.conf.json
          asv run --config benchmarks/asv.conf.json --show-stderr

dipy-1.11.0/.github/workflows/build_docs.yml

name: Build docs
on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]

defaults:
  run:
    shell: bash

jobs:
  build:
    runs-on: ubuntu-latest
    defaults:
      run:
        shell: bash
    strategy:
      fail-fast: false
      matrix:
        python-version: ['3.11']
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}
      - name: Install Dependencies
        run: |
          python -m pip install --upgrade pip
          pip install .[extra]
          pip install .[doc]
      - name: Build documentation
        run: |
          cd doc
          make -C . html-no-examples SPHINXOPTS="-W --keep-going"

dipy-1.11.0/.github/workflows/check_format.yml

name: code format
on:
  push:
    branches: [master]
  pull_request:
    branches: [master]

jobs:
  pre-commit:
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
        os: [ubuntu-latest]
        python-version: ['3.10']
        requires: ['latest']
    steps:
      - name: Check out repository
        uses: actions/checkout@v4
      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}
      - name: Install and run pre-commit hooks
        uses: pre-commit/action@v3.0.1

dipy-1.11.0/.github/workflows/coverage.yml

name: Coverage
on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]

concurrency:
  group: build-${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
  cancel-in-progress: true

jobs:
  cov:
    uses: ./.github/workflows/test_template.yml
    with:
      runs-on: '["ubuntu-latest", ]'
      coverage: true
      enable-viz-tests: true
      extra-depends: scikit-learn scipy statsmodels pandas tables fury tensorflow torch
    secrets:
      codecov-token: ${{ secrets.CODECOV_TOKEN }}

dipy-1.11.0/.github/workflows/first_interaction.yml

name: First interaction
on:
  - pull_request_target
  - issues

jobs:
  build:
    runs-on: ubuntu-latest
    permissions:
      pull-requests: write
      issues: write
    steps:
      - uses: actions/first-interaction@v1
        with:
          repo-token: "${{ secrets.GITHUB_TOKEN }}"
          issue-message: |
            Thank you for contributing an issue!

            **We are glad that you are finding DIPY useful!**

            This is an automatic message. Allow time for DIPY maintainers to
            read the issue and comment on it.

            If asking for help or advice, please move the issue to the
            [Discussions section](https://github.com/dipy/dipy/discussions):
            issues are intended to request new features or to report bugs. We
            would appreciate it if you took the time to submit a pull request
            to fix this issue should it happen to be one.

            Please read our [CODE OF CONDUCT](https://github.com/dipy/dipy/blob/master/.github/CODE_OF_CONDUCT.md)
            and our [CONTRIBUTING guidelines](https://github.com/dipy/dipy/blob/master/.github/CONTRIBUTING.md)
            if you have not done that already :book:.
          pr-message: |
            Thank you for contributing a pull request!

            **We are glad that you are finding DIPY useful!**

            This is an automatic message. Allow time for DIPY maintainers to
            read this pull request and comment on it.

            Note that we require the **code formatting**, **testing** and
            **documentation builds** to **pass** in order to merge your pull
            request. GitHub will report on the status of each aspect as the
            builds become available. **Please, check their status and make the
            appropriate changes as necessary**. It is **your responsibility**
            to ensure that the above checks pass to have your pull request
            reviewed in a timely manner and merged :mag:.
            Please read our [CODE OF CONDUCT](https://github.com/dipy/dipy/blob/master/.github/CODE_OF_CONDUCT.md)
            and our [CONTRIBUTING guidelines](https://github.com/dipy/dipy/blob/master/.github/CONTRIBUTING.md)
            if you have not done that already :book:.

dipy-1.11.0/.github/workflows/nightly.yml

name: Build and upload nightly wheels
on:
  workflow_dispatch:
    inputs:
      branch_or_tag:
        description: "Branch or Tag to Checkout"  # Description shown in the GitHub UI
        required: true
        default: "master"
  schedule:
    # ┌───────────── minute (0 - 59)
    # │ ┌───────────── hour (0 - 23)
    # │ │ ┌───────────── day of the month (1 - 31)
    # │ │ │ ┌───────────── month (1 - 12 or JAN-DEC)
    # │ │ │ │ ┌───────────── day of the week (0 - 6 or SUN-SAT)
    # │ │ │ │ │
    - cron: "0 0 * * 0,3"  # Every Sunday and Wednesday at midnight

env:
  BUILD_COMMIT: "master"
  CIBW_BUILD_VERBOSITY: 2
  CIBW_TEST_REQUIRES: "-r requirements/build.txt pytest==8.0.0"
  CIBW_TEST_COMMAND: pytest --pyargs dipy
  CIBW_CONFIG_SETTINGS: "compile-args=-v"

permissions:
  contents: read

concurrency:
  group: build-${{ github.workflow }}-${{ github.head_ref || github.run_id }}
  cancel-in-progress: true

jobs:
  build_linux_wheels:
    name: Build python ${{ matrix.cibw_python }} ${{ matrix.cibw_arch }} wheels on ${{ matrix.os }}
    if: github.repository_owner == 'dipy' && github.ref == 'refs/heads/master'
    runs-on: ${{ matrix.os }}
    strategy:
      fail-fast: false
      matrix:
        os: [ubuntu-latest]
        cibw_python: ["cp310-*", "cp311-*", "cp312-*", "cp313-*"]
        cibw_manylinux: [manylinux2014]
        cibw_arch: ["x86_64", "aarch64"]
    steps:
      - name: Setup Environment variables
        shell: bash
        run: |
          if [ "schedule" == "${{ github.event_name }}" ]; then echo "BUILD_COMMIT=master" >> $GITHUB_ENV; else echo "BUILD_COMMIT=${{ github.event.inputs.branch_or_tag }}" >> $GITHUB_ENV; fi
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
          ref: ${{ env.BUILD_COMMIT }}
      - uses: actions/setup-python@v5
        name: Install Python
        with:
          python-version: "3.12"
      - name: Set up QEMU
        if: ${{ matrix.cibw_arch == 'aarch64' }}
        uses: docker/setup-qemu-action@v3
        with:
          platforms: arm64
      - name: Install cibuildwheel
        run: python -m pip install cibuildwheel
      - name: Build the wheel
        run: python -m cibuildwheel --output-dir dist
        env:
          CIBW_BUILD: ${{ matrix.cibw_python }}
          CIBW_ARCHS_LINUX: ${{ matrix.cibw_arch }}
          CIBW_SKIP: "*-musllinux_*"
          CIBW_TEST_SKIP: "*"  # "*_aarch64"
          CIBW_MANYLINUX_X86_64_IMAGE: ${{ matrix.cibw_manylinux }}
          CIBW_MANYLINUX_I686_IMAGE: ${{ matrix.cibw_manylinux }}
          CIBW_BUILD_FRONTEND: 'pip; args: --pre --extra-index-url "https://pypi.anaconda.org/scientific-python-nightly-wheels/simple"'
      - name: Rename Python version
        run: echo "PY_VERSION=$(echo ${{ matrix.cibw_python }} | cut -d- -f1)" >> $GITHUB_ENV
      - uses: actions/upload-artifact@v4
        with:
          name: wheels-${{ env.PY_VERSION }}-${{ matrix.cibw_manylinux }}-${{ matrix.cibw_arch }}
          path: ./dist/*.whl

  build_osx_wheels:
    name: Build python ${{ matrix.cibw_python }} ${{ matrix.cibw_arch }} wheels on ${{ matrix.os }}
    if: github.repository_owner == 'dipy' && github.ref == 'refs/heads/master'
    runs-on: ${{ matrix.os }}
    strategy:
      fail-fast: false
      matrix:
        os: [macos-14, macos-latest]
        cibw_python: ["cp310-*", "cp311-*", "cp312-*", "cp313-*"]
        include:
          - os: macos-latest
            cibw_arch: x86_64
            compiler_env: CC=/usr/local/opt/llvm/bin/clang CXX=/usr/local/opt/llvm/bin/clang++ LIBRARY_PATH=/usr/local/opt/llvm/lib:$LIBRARY_PATH
          - os: macos-14
            cibw_arch: arm64
            compiler_env: CC=/opt/homebrew/opt/llvm/bin/clang CXX=/opt/homebrew/opt/llvm/bin/clang++ LIBRARY_PATH=/opt/homebrew/opt/llvm/lib:$LIBRARY_PATH MACOSX_DEPLOYMENT_TARGET=14.7
    steps:
      - name: Setup Environment variables
        shell: bash
        run: |
          if [ "schedule" == "${{ github.event_name }}" ]; then echo "BUILD_COMMIT=master" >> $GITHUB_ENV; else echo "BUILD_COMMIT=${{ github.event.inputs.branch_or_tag }}" >> $GITHUB_ENV; fi
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
          ref: ${{ env.BUILD_COMMIT }}
      - uses: actions/setup-python@v5
        name: Install Python
        with:
          python-version: "3.12"
      - name: Install cibuildwheel
        run: python -m pip install cibuildwheel
      - name: Build the wheel
        run: python -m cibuildwheel --output-dir dist
        env:
          CIBW_BEFORE_ALL_MACOS: "brew install llvm libomp"
          CIBW_BUILD: ${{ matrix.cibw_python }}
          CIBW_ARCHS_MACOS: ${{ matrix.cibw_arch }}
          CIBW_TEST_SKIP: "*"  # "*_aarch64 *-macosx_arm64"
          CIBW_ENVIRONMENT_MACOS: ${{ matrix.compiler_env }}
          CIBW_BUILD_FRONTEND: 'pip; args: --pre --extra-index-url "https://pypi.anaconda.org/scientific-python-nightly-wheels/simple"'
      - name: Rename Python version
        run: echo "PY_VERSION=$(echo ${{ matrix.cibw_python }} | cut -d- -f1)" >> $GITHUB_ENV
      - uses: actions/upload-artifact@v4
        with:
          name: wheels-${{ env.PY_VERSION }}-${{ matrix.cibw_manylinux }}-${{ matrix.cibw_arch }}
          path: ./dist/*.whl

  build_windows_wheels:
    name: Build python ${{ matrix.cibw_python }} ${{ matrix.cibw_arch }} wheels on ${{ matrix.os }}
    if: github.repository_owner == 'dipy' && github.ref == 'refs/heads/master'
    runs-on: ${{ matrix.os }}
    strategy:
      fail-fast: false
      matrix:
        os: [windows-latest]
        cibw_python: ["cp310-*", "cp311-*", "cp312-*", "cp313-*"]
        cibw_arch: ["AMD64"]
    steps:
      - name: Setup Environment variables
        shell: bash
        run: |
          if [ "schedule" == "${{ github.event_name }}" ]; then echo "BUILD_COMMIT=master" >> $GITHUB_ENV; else echo "BUILD_COMMIT=${{ github.event.inputs.branch_or_tag }}" >> $GITHUB_ENV; fi
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
          ref: ${{ env.BUILD_COMMIT }}
      - uses: actions/setup-python@v5
        name: Install Python
        with:
          python-version: "3.12"
      - name: Install cibuildwheel
        run: python -m pip install cibuildwheel
      - name: Build the wheel
        run: python -m cibuildwheel --output-dir dist
        env:
          CIBW_BUILD: ${{ matrix.cibw_python }}
          CIBW_ARCHS_WINDOWS: ${{ matrix.cibw_arch }}
          CIBW_CONFIG_SETTINGS: "setup-args=--vsenv compile-args=-v"
          CIBW_BUILD_FRONTEND: 'pip; args: --pre --extra-index-url "https://pypi.anaconda.org/scientific-python-nightly-wheels/simple"'
      - name: Rename Python version
        shell: bash
        run: echo "PY_VERSION=$(echo ${{ matrix.cibw_python }} | cut -d- -f1)" >> $GITHUB_ENV
      - uses: actions/upload-artifact@v4
        with:
          name: wheels-${{ env.PY_VERSION }}-${{ matrix.cibw_arch }}
          path: ./dist/*.whl

  test_wheels:
    name: Test wheels
    if: github.repository_owner == 'dipy' && github.ref == 'refs/heads/master'
    needs: [build_linux_wheels]
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
        os: [ubuntu-latest]
        python-version: ["3.12"]
    steps:
      - name: Rename Python version
        shell: bash
        run: echo "PY_VERSION=$(echo ${{ matrix.python-version }} | tr -d '.')" >> $GITHUB_ENV
      - uses: actions/download-artifact@v4
        with:
          name: wheels-cp${{ env.PY_VERSION }}-manylinux2014-x86_64
          path: ./dist
      - uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}
      - name: Test wheel
        run: |
          set -eo pipefail
          ls -al ./dist/*.whl
          python -m pip install --only-binary="numpy,scipy,h5py" --index-url "https://pypi.anaconda.org/scientific-python-nightly-wheels/simple" "numpy>=2.1.0.dev0" "scipy>=1.14.0.dev0" h5py
          python -m pip install ./dist/*.whl
          python -c "import dipy; print(dipy.__version__)"
          python -c "import numpy; assert int(numpy.__version__[0]) >= 2, numpy.__version__"
          python -c "from dipy.align.imaffine import AffineMap"

  upload_anaconda:
    permissions:
      contents: write  # for softprops/action-gh-release to create GitHub release
    name: Upload to Anaconda
    needs: [build_linux_wheels, build_osx_wheels, build_windows_wheels]
    if: ${{ always() }} && github.repository_owner == 'dipy' && github.ref == 'refs/heads/master'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/download-artifact@v4
        id: download
        with:
          pattern: wheels-*
          path: ./dist
          merge-multiple: true
      - name: Upload wheel
        uses: scientific-python/upload-nightly-action@82396a2ed4269ba06c6b2988bb4fd568ef3c3d6b  # 0.6.1
        if: github.event_name != 'schedule'
        with:
          artifacts_path: dist
          anaconda_nightly_upload_token: ${{ secrets.ANACONDA_NIGHTLY_TOKEN }}
          anaconda_nightly_upload_organization: dipy
          anaconda_nightly_upload_labels: dev
      - name: Upload wheel
        uses: scientific-python/upload-nightly-action@82396a2ed4269ba06c6b2988bb4fd568ef3c3d6b  # 0.6.1
        if: env.BUILD_COMMIT == 'master'
        with:
          artifacts_path: dist
          anaconda_nightly_upload_token: ${{ secrets.ANACONDA_SCIENTIFIC_PYTHON_NIGHTLY_TOKEN }}
          anaconda_nightly_upload_organization: scientific-python-nightly-wheels
          anaconda_nightly_upload_labels: main

dipy-1.11.0/.github/workflows/test.yml

# This workflow will install Python dependencies, run tests and lint with a variety of Python versions.
# For more information see:
# https://help.github.com/actions/language-and-framework-guides/using-python-with-github-actions
name: Stable
on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]
  schedule:
    - cron: '0 0 5 * 0'  # 1 per month

concurrency:
  group: build-${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
  cancel-in-progress: true

jobs:
  pip:
    uses: ./.github/workflows/test_template.yml
    with:
      runs-on: '["ubuntu-latest", "macos-latest", "windows-latest", "macos-14"]'
      python-version: '["3.10", "3.11", "3.12", "3.13"]'

dipy-1.11.0/.github/workflows/test_compat.yml

name: Compat
on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]

concurrency:
  group: build-${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
  cancel-in-progress: true

jobs:
  minimal-py310:
    uses: ./.github/workflows/test_template.yml
    with:
      runs-on: '["ubuntu-latest", ]'
      python-version: '["3.10", ]'
      depends: cython==0.29.25 numpy==1.22.4 scipy==1.8.1 nibabel==3.0.0 h5py==3.6.0 tqdm
  minimal-py311:
    uses: ./.github/workflows/test_template.yml
    with:
      runs-on: '["ubuntu-latest", ]'
      python-version: '["3.11", ]'
      depends: cython==0.29.32 numpy==1.23.5 scipy==1.9.3 nibabel==3.0.0 h5py==3.8.0 tqdm

dipy-1.11.0/.github/workflows/test_optional.yml

name: Optional Deps
on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]

concurrency:
  group: build-${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
  cancel-in-progress: true

jobs:
  pip:
    uses: ./.github/workflows/test_template.yml
    with:
      runs-on: '["ubuntu-latest", "macos-latest", "windows-latest"]'
      depends: cython!=0.29.29 numpy==1.24.2 matplotlib h5py==3.11.0 nibabel cvxpy<=1.4.4 tqdm
      extra-depends: scikit_learn pandas statsmodels tables scipy==1.10.1 numexpr
  conda:
    uses: ./.github/workflows/test_template.yml
    with:
      runs-on: '["macos-latest", "windows-latest"]'
      install-type: '["conda"]'
      depends: cython!=0.29.29 numpy==1.25.0 matplotlib h5py==3.11.0 nibabel cvxpy<=1.4.4 tqdm
      extra-depends: scikit-learn pandas statsmodels pytables scipy==1.10.1

dipy-1.11.0/.github/workflows/test_parallel.yml

name: Parallelization
on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]

concurrency:
  group: build-${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
  cancel-in-progress: true

jobs:
  pip:
    uses: ./.github/workflows/test_template.yml
    with:
      runs-on: '["ubuntu-latest", "macos-latest", "windows-latest"]'
      extra-depends: dask joblib ray==2.9.3 protobuf<4.0.0  # More info here https://github.com/ray-project/ray/pull/25211

dipy-1.11.0/.github/workflows/test_pre.yml

name: PRE_WHEELS
on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]

concurrency:
  group: build-${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
  cancel-in-progress: true

jobs:
  PRE-py313:
    uses: ./.github/workflows/test_template.yml
    with:
      runs-on: '["ubuntu-latest", ]'
      python-version: '["3.13",]'
      use-pre: true
      extra-depends: scikit_learn scipy statsmodels pandas tables

dipy-1.11.0/.github/workflows/test_template.yml

name: DIPY Base Workflow Template

on:
  workflow_call:
    inputs:
      runs-on:
        description: "Select on which environment you want to run the workflow"
        type: string
        required: false
        default: '"ubuntu-latest"'
      python-version:
        description: "Select Python version"
        type: string
        required: false
        default: '["3.10", ]'
      use-pre:
        description: "Use pre-release (nightly) dependency wheels"
        type: boolean
        required: false
      coverage:
        description: "coverage"
        type: boolean
        required: false
      enable-viz-tests:
        description: "Activate viz tests and headless"
        type: boolean
        required: false
      install-type:
        description: ""
        type: string
        required: false
        default: '["pip", ]'
      depends:
        description: ""
        type: string
        required: false
        default: cython!=0.29.29 numpy matplotlib h5py nibabel cvxpy tqdm
      extra-depends:
        description: ""
        type: string
        required: false
    secrets:
      codecov-token:
        description: "codecov token to send the report"
        required: false

defaults:
  run:
    shell: bash

jobs:
  build:
    runs-on: ${{ matrix.os }}
    timeout-minutes: 120
    strategy:
      fail-fast: false
      matrix:
        python-version: ${{ fromJSON(inputs.python-version) }}
        os: ${{ fromJSON(inputs.runs-on) }}
        install-type: ${{ fromJSON(inputs.install-type) }}
    env:
      DEPENDS: ${{ inputs.depends }}
      EXTRA_DEPENDS: ${{ inputs.extra-depends }}
      INSTALL_TYPE: ${{ matrix.install-type }}
      PYTHON_VERSION: ${{ matrix.python-version }}
      VENV_ARGS: "--python=python"
      USE_PRE: ${{ inputs.use-pre }}
      COVERAGE: ${{ inputs.coverage }}
      DIPY_WERRORS: 1
      CODECOV_TOKEN: ${{ secrets.codecov-token }}
      TEST_WITH_XVFB: ${{ inputs.enable-viz-tests }}
      PRE_WHEELS: "https://pypi.anaconda.org/scientific-python-nightly-wheels/simple"  # "https://pypi.anaconda.org/scipy-wheels-nightly/simple"
    steps:
      - uses: actions/checkout@v4
      - name: Set up Python ${{ matrix.python-version }}
        if: ${{ matrix.install-type != 'conda' }}
        uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}
      - name: Set up Virtualenv
        if: ${{ matrix.install-type != 'conda' }}
        run: |
          python -m pip install --upgrade pip virtualenv
          virtualenv $VENV_ARGS venv
      - name: Setup Miniconda
        uses: conda-incubator/setup-miniconda@v3
        if: ${{ matrix.install-type == 'conda' }}
        with:
          auto-update-conda: true
          auto-activate-base: false
          activate-environment: venv
          python-version: ${{ matrix.python-version }}
          channels: defaults, conda-forge
          use-only-tar-bz2: true
      - name: Install HDF5 and pytables on macOS
        if: ${{ (runner.os == 'macOS') && (env.EXTRA_DEPENDS != '') && (env.INSTALL_TYPE == 'pip') }}
        run: |
          pip install cython
          brew install hdf5
          brew install c-blosc
          export HDF5_DIR=/opt/homebrew/opt/hdf5
          export BLOSC_DIR=/opt/homebrew/opt/c-blosc
          pip install tables
      - name: Install Dependencies
        run: |
          if [ "${{ inputs.use-pre }}" == "true" ]; then
            tools/ci/install_dependencies.sh || echo "::warning::Experimental Job so Install Dependencies failure ignored!"
          else
            tools/ci/install_dependencies.sh
          fi
      - name: Install OpenMP on macOS
        if: runner.os == 'macOS'
        run: |
          brew install llvm
          brew install libomp
          echo "CC=clang" >> $GITHUB_ENV
      - name: Install DIPY
        run: |
          if [ "${{ inputs.use-pre }}" == "true" ]; then
            tools/ci/install.sh || echo "::warning::Experimental Job so Install DIPY failure ignored!"
          else
            tools/ci/install.sh
          fi
      - name: Setup Headless
        if: ${{ inputs.enable-viz-tests }}
        run: tools/ci/setup_headless.sh
      - name: Run the Tests
        run: |
          if [ "${{ inputs.use-pre }}" == "true" ]; then
            tools/ci/run_tests.sh || echo "::warning::Experimental Job so failure ignored!"
          else
            tools/ci/run_tests.sh
          fi
      - name: Upload coverage to Codecov
        uses: codecov/codecov-action@v5
        if: ${{ fromJSON(env.COVERAGE) }}
        with:
          token: ${{ secrets.CODECOV_TOKEN }}
          directory: for_testing_results
          files: coverage.xml
          fail_ci_if_error: true
          flags: unittests
          name: codecov-umbrella
          verbose: true

dipy-1.11.0/.github/workflows/test_viz.yml

name: Visualization
on:
  push:
    branches: [ master ]
    paths:
      - dipy/viz/**
      - dipy/workflows/**
  pull_request:
    branches: [ master ]
    paths:
      - dipy/viz/**
      - dipy/workflows/**

concurrency:
  group: build-${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
  cancel-in-progress: true

jobs:
  VIZ:
    uses: ./.github/workflows/test_template.yml
    with:
      runs-on: '["ubuntu-latest", "macos-latest", "windows-latest"]'
      extra-depends: scikit_learn vtk fury scipy
      enable-viz-tests: true

dipy-1.11.0/.gitignore

*.pyc
*.pyd
*.so
*.c
*.cpp
*.png
*.vtk
*.gz
*.dpy
*.npy
*.img
*.hdr
*.mat
*.pkl
*.orig
*.trk
*.trx
*.pam5
*.pial
build
*~
doc/_build
doc/examples_revamped
doc/reference_cmd
doc/*-stamp
MANIFEST
dist/
.pytest_cache
.project
.pydevproject
*.sw[po]
dipy.egg-info/
nibabel*egg/
pyx-stamps
__config__.py
.DS_Store
.coverage
.buildbot.patch
.eggs/
dipy/.idea/
.idea
.vscode
.pytest_cache/*
tmp/
venv
build-install
.mesonpy*
benchmarks/results/
benchmarks/env/
sg_execution_times.rst

dipy-1.11.0/.mailmap

Ariel Rokem arokem Ariel Rokem arokem Bago Amirbekian Bago Amirbekian Bago Amirbekian Bago Amirbekian Bago Amirbekian Bago Amirbekian Bago Amirbekian MrBago Bago Amirbekian MrBago Maxime Descoteaux mdesco Maxime Descoteaux mdesco Maxime Descoteaux Maxime Descoteaux Stefan van der Walt Stefan van der Walt Christopher Nguyen Christopher Christopher Nguyen Christopher Nguyen Eleftherios Garyfallidis Dipy Developers Eleftherios Garyfallidis Eleftherios Garyfallidis Eleftherios Garyfallidis Eleftherios Garyfallidis Eleftherios Garyfallidis Eleftherios Garyfallidis Marc-Alexandre Côté Marc-Alexandre Cote Gabriel Girard Gabriel Girard Gabriel Girard Gabriel Girard Gabriel Girard Gabriel
Girard Etienne St-Onge Etienne St-Onge Etienne St-Onge StongeEtienne Etienne St-Onge Etienne St-Onge Emanuele Olivetti Emanuele Olivetti Ian Nimmo-Smith Ian Nimmo-Smith Ian Nimmo-Smith iannimmosmith Ian Nimmo-Smith Ian Nimmo-Smith Matthew Brett Matthew Brett Jon Haitz Legarreta Gorroño Jon Haitz Legarreta Gorroño Jon Haitz Legarreta Gorroño Jon Haitz Legarreta Omar Ocegueda omarocegueda Omar Ocegueda omarocegueda Omar Ocegueda Omar Ocegueda Gonzalez Shahnawaz Ahmed Shahnawaz Ahmed Shahnawaz Ahmed root Shahnawaz Ahmed Your Name Sylvain Merlet smerlet Mauro Zucchelli Mauro Mauro Zucchelli maurozucchelli Mauro Zucchelli maurozucchelli Andrew Lawrence AndrewLawrence Samuel St-Jean samuelstjean Samuel St-Jean samuelstjean Samuel St-Jean samuelstjean Samuel St-Jean Samuel St-Jean Samuel St-Jean Samuel St-Jean Samuel St-Jean Samuel St-Jean Samuel St-Jean Samuel St-Jean <3030760+samuelstjean@users.noreply.github.com> Jean-Christophe Houde Jean-Christophe Houde Riddhish Bhalodia Riddhish Bhalodia Ranveer Aggarwal Ranveer Aggarwal Serge Koudoro skab12 Serge Koudoro skab12 Serge Koudoro skab12 Serge Koudoro skab12 Serge Koudoro skoudoro Serge Koudoro skoudoro Serge Koudoro skoudoro Tingyi Wanyan Tingyi Wanyan Tingyi Wanyan Tingyi Wanyan Tingyi Wanyan Tingyi Wanyan Kesshi Jordan kesshijordan Kesshi Jordan Kesshi jordan Rafael Neto Henriques Rafael Henriques Rafael Neto Henriques Rafael Neto Henriques RafaelNH Alexandre Gauvin algo Alexandre Gauvin Alexandre Gauvin Nil Goyette Nil Goyette Eric Peterson etpeterson Rutger Fick Rutger Fick Rutger Fick Rutger Fick Demian Wassermann Demian Wassermann Sourav Singh Sourav Sven Dorkenwald Manu Tej Sharma manu-tej David Qixiang Chen Oscar Esteban Matthieu Dumont unknown Guillaume Theaud Adam Rybinski Bennet Fauber Aman Arya Ricci Woo RicciWoo Francois Rheault Francois Rheault Francois Rheault frheault Francois Rheault frheault Francois Rheault frheault Antoine Theberge AntoineTheb Antoine Theberge AntoineTheb David Hunt David David Hunt davhunt Parichit Sharma Parichit Sharma Chandan Gangwar Naveen Kumarmarri Jakob Wasserthal Shreyas Fadnavis Daniel Enrico Cahall Bramsh Qamar Bramsh Qamar Bramsh Qamar Yash Sherry endolith ArjitJ <32598699+ArjitJ@users.noreply.github.com> Kevin Sitek Ross Lawrence Lawreros Ross Lawrence Ross Lawrence <52179159+Lawreros@users.noreply.github.com> Derek Pisner dPys Derek Pisner Derek Pisner <16432683+dPys@users.noreply.github.com> Pradeep Reddy Raamana Pradeep Reddy Raamana Shreyas Fadnavis Shreyas Sanjeev Fadnavis Shreyas Fadnavis Shreyas Fadnavis Javier Guaje Javier Guaje Javier Guaje Javier Guaje Bennet Fauber Bennet Fauber Fabio Nery fnery Jirka Borovec Jirka Jirka Borovec Jiri Borovec Jirka Borovec Jirka Borovec Charles Poirier Charles Charles Poirier CHrlS98 Charles Poirier vagrant Charles Poirier CHrlS98 Charles Poirier Charles Poirier <41654474+CHrlS98@users.noreply.github.com> Charles Poirier Charles Poirier Leevi Kerkela Leevi Leevi Kerkela kerkelae Shubham Shaswat ShubhamShaswat Philippe Karan karanphil Philippe Karan Philippe Karan Philippe Karan karanphil <47002808+karanphil@users.noreply.github.com> Philippe Karan karp2601 Jaewon Chung j1c Basile Pinsard basile Siddharth Kapoor <16ec142siddharth@nitk.edu.in> sidkapoor97 <16ec142siddharth@nitk.edu.in> Areesha Tariq areeshatariq David Romero-Bascones dombas David Romero-Bascones drombas Leon Weninger Leon Weninger <18707626+weningerleon@users.noreply.github.com> Leon Weninger weninger Dan Bullock Daniel Bullock Lucas Da Costa lcscosta Lucas Da Costa Lucas da Costa 
<45444800+lcscosta@users.noreply.github.com> Francis Jerome jerryfrancis-97 Kenji Marshall Kenji Marshall <47823200+kenjimarshall@users.noreply.github.com> Tom Dela Haije Tom Deneb Boito DenebBoito Deneb Boito DenebBoito <45635305+DenebBoito@users.noreply.github.com> Julio Villalon Julio Villalon Malinda Dilhara Malinda Emmanuelle Renauld EmmaRenauld Chantal Tax ChantalTax Jong Sung Park Park Jong Sung Park pjsjongsung Jong Sung Park Jong Sung Park Jong Sung Park Jong Sung Park Jong Sung Park Jong Sung Park Rahul Ubale RahulUbale Rahul Ubale Rahul19 <68568128+RahulUbale@users.noreply.github.com> Baran Aydogan baranaydogan Baran Aydogan Dogu Baran Aydogan Shilpi Prasad shilpiprd Shilpi Prasad Shilpi <72646134+shilpiprd@users.noreply.github.com> Shilpi Prasad Shilpi Dimitri Papadopoulos Dimitri Papadopoulos <3234522+DimitriPapadopoulos@users.noreply.github.com> Dimitri Papadopoulos Dimitri Papadopoulos Orfanos <3234522+DimitriPapadopoulos@users.noreply.github.com> Paul Camacho pcamach2 <49655443+pcamach2@users.noreply.github.com> Maharshi Gor maharshigor Vara Lakshmi Bayanagari lakshmi John Kruper 36000 John Kruper John Kruper John Kruper John K John Kruper John Kruper Atharva Shah Atharva Shah <118627741+Atharva-Shah-2298@users.noreply.github.com> Theodore Ni <3806110+tjni@users.noreply.github.com> Nicolas Delinte Nicolas Delinte <70629561+DelinteNicolas@users.noreply.github.com> Praitayini Kanakaraj praitayini <31248960+praitayini@users.noreply.github.com> Praitayini Kanakaraj praitayini John Shen ccshen John Shen John.Shen Martin Kozár MartinKjunior Michael R. Crusoe Michael R. Crusoe <1330696+mr-c@users.noreply.github.com> Michael R. Crusoe Michael R. Crusoe Kaustav Deka deka27 Kaustav Deka Kaustav Deka <116963260+deka27@users.noreply.github.com> Prajwal Reddy iprajwalreddy Prajwal Reddy Sai Prajwal Reddy Inigo Tellaetxe itellaetxe Kaibo Tang kaibo Sandro Turriate Sandro Florent Wijanto FWijanto Vara Lakshmi Bayanagari <> lb-97 <42465606+lb-97@users.noreply.github.com>

dipy-1.11.0/.pre-commit-config.yaml

default_language_version:
  python: python3

repos:
  - repo: https://github.com/astral-sh/ruff-pre-commit
    rev: v0.6.9
    hooks:
      # Run the linter
      - id: ruff
        args: [ --fix ]
      # Run the formatter
      - id: ruff-format
  - repo: https://github.com/codespell-project/codespell
    rev: v2.3.0
    hooks:
      - id: codespell
        additional_dependencies:
          - tomli

dipy-1.11.0/.spin/cmds.py

"""Additional Command-line interface for spin."""

import os
import shutil

import click
from spin import util
from spin.cmds import meson


# From scipy: benchmarks/benchmarks/common.py
def _set_mem_rlimit(max_mem=None):
    """Set address space rlimit."""
    import resource

    import psutil

    mem = psutil.virtual_memory()
    if max_mem is None:
        max_mem = int(mem.total * 0.7)
    cur_limit = resource.getrlimit(resource.RLIMIT_AS)
    if cur_limit[0] > 0:
        max_mem = min(max_mem, cur_limit[0])
    try:
        resource.setrlimit(resource.RLIMIT_AS, (max_mem, cur_limit[1]))
    except ValueError:
        # on macOS may raise: current limit exceeds maximum limit
        pass


def _commit_to_sha(commit):
    p = util.run(["git", "rev-parse", commit], output=False, echo=False)
    if p.returncode != 0:
        raise click.ClickException(f"Could not find SHA matching commit `{commit}`")
    return p.stdout.decode("ascii").strip()


def _dirty_git_working_dir():
    # Changes to the working directory
    p0 = util.run(["git", "diff-files", "--quiet"])

    # Staged changes
    p1 = util.run(["git", "diff-index", "--quiet", "--cached", "HEAD"])

    return p0.returncode != 0 or p1.returncode != 0


def _run_asv(cmd):
    # Always use ccache, if installed
    PATH = os.environ["PATH"]
    # EXTRA_PATH = os.pathsep.join([
    #     '/usr/lib/ccache', '/usr/lib/f90cache',
    #     '/usr/local/lib/ccache', '/usr/local/lib/f90cache'
    # ])
    env = os.environ
    env["PATH"] = f"EXTRA_PATH:{PATH}"

    # Control BLAS/LAPACK threads
    env["OPENBLAS_NUM_THREADS"] = "1"
    env["MKL_NUM_THREADS"] = "1"

    # Limit memory usage
    try:
        _set_mem_rlimit()
    except (ImportError, RuntimeError):
        pass

    util.run(cmd, cwd="benchmarks", env=env)


@click.command()
@click.option(
    "--tests",
    "-t",
    default=None,
    metavar="TESTS",
    multiple=True,
    help="Which tests to run",
)
@click.option(
    "--compare",
    "-c",
    is_flag=True,
    default=False,
    help="Compare benchmarks between the current branch and main "
    "(unless other branches specified). "
    "The benchmarks are each executed in a new isolated "
    "environment.",
)
@click.option("--verbose", "-v", is_flag=True, default=False)
@click.option(
    "--quick",
    "-q",
    is_flag=True,
    default=False,
    help="Run each benchmark only once (timings won't be accurate)",
)
@click.argument("commits", metavar="", required=False, nargs=-1)
@click.pass_context
def bench(ctx, tests, compare, verbose, quick, commits):
    """🏋 Run benchmarks.

    \b
    Examples:

    \b
    $ spin bench -t bench_lib
    $ spin bench -t bench_random.Random
    $ spin bench -t Random -t Shuffle

    Two benchmark runs can be compared. By default, `HEAD` is compared to
    `main`. You can also specify the branches/commits to compare:

    \b
    $ spin bench --compare
    $ spin bench --compare main
    $ spin bench --compare main HEAD

    You can also choose which benchmarks to run in comparison mode:

    $ spin bench -t Random --compare
    """
    if not commits:
        commits = ("main", "HEAD")
    elif len(commits) == 1:
        commits = commits + ("HEAD",)
    elif len(commits) > 2:
        raise click.ClickException("Need a maximum of two revisions to compare")

    bench_args = []
    for t in tests:
        bench_args += ["--bench", t]

    if verbose:
        bench_args = ["-v"] + bench_args

    if quick:
        bench_args = ["--quick"] + bench_args

    if not compare:
        # No comparison requested; we build and benchmark the current version
        click.secho(
            "Invoking `build` prior to running benchmarks:",
            bold=True,
            fg="bright_green",
        )
        ctx.invoke(meson.build)

        meson._set_pythonpath()

        p = util.run(
            ["python", "-c", "import dipy; print(dipy.__version__)"],
            cwd="benchmarks",
            echo=False,
            output=False,
        )
        os.chdir("..")
        dipy_ver = p.stdout.strip().decode("ascii")
        click.secho(
            f"Running benchmarks on DIPY {dipy_ver}", bold=True, fg="bright_green"
        )
        cmd = ["asv", "run", "--dry-run", "--show-stderr", "--python=same"] + bench_args
        _run_asv(cmd)
    else:
        # Ensure that we don't have uncommitted changes
        commit_a, commit_b = [_commit_to_sha(c) for c in commits]

        if commit_b == "HEAD" and _dirty_git_working_dir():
            click.secho(
                "WARNING: you have uncommitted changes --- "
                "these will NOT be benchmarked!",
                fg="red",
            )

        cmd_compare = (
            [
                "asv",
                "continuous",
                "--factor",
                "1.05",
            ]
            + bench_args
            + [commit_a, commit_b]
        )
        _run_asv(cmd_compare)


@click.command()
def clean():
    """🧹 Remove build and install folder."""
    build_dir = "build"
    install_dir = "build-install"
    print(f"Removing `{build_dir}`")
    if os.path.isdir(build_dir):
        shutil.rmtree(build_dir)
    print(f"Removing `{install_dir}`")
    if os.path.isdir(install_dir):
        shutil.rmtree(install_dir)
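A typical session with the commands defined above, invoked through spin's
command-line interface, looks like the following (this assumes the commands
are registered in DIPY's spin configuration; the benchmark name mirrors the
illustrative ones from the ``bench`` docstring):

    spin bench -t bench_lib         # benchmark the current working tree
    spin bench --compare main HEAD  # compare two revisions in isolated envs
    spin clean                      # remove the build/ and build-install/ folders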
dipy-1.11.0/AUTHOR

Eleftherios Garyfallidis Ariel Rokem Serge Koudoro Matthew Brett Rafael Neto Henriques Bago Amirbekian Gabriel Girard Omar Ocegueda Bramsh Qamar Jon Haitz Legarreta Gorroño Francois Rheault Samuel St-Jean Shreyas Fadnavis Marc-Alexandre Côté Philippe Karan Javier Guaje Rutger Fick Shahnawaz Ahmed Ian Nimmo-Smith Mauro Zucchelli Matthieu Dumont Kesshi Jordan Stefan van der Walt Ranveer Aggarwal Maxime Descoteaux Kenji Marshall Jong Sung Park Riddhish Bhalodia Shilpi Prasad Tom Dela Haije Parichit Sharma Bishakh Ghosh Christopher Nguyen Ricci Woo Maharshi Gor Stephan Meesters Leevi Kerkela Eric Peterson David Romero-Bascones Etienne St-Onge Julio Villalon Jean-Christophe Houde Manu Tej Sharma Sourav Singh Nasim Anousheh Charles Poirier Dimitri Papadopoulos Paul Camacho Kumar Ashutosh David Reagan Guillaume Theaud Shubham Shaswat Aman Arya Deneb Boito Dimitris Rozakis Fabio Nery Rahul Ubale kunal mehta Gregory R. Lee Areesha Tariq Saber Sheybani Sam Coveney Chantal Tax Gregory Lee Nil Goyette Eric Larson Rohan Prinja Yaroslav Halchenko Antonio Ossa Jaewon Chung Vara Lakshmi Bayanagari Demian Wassermann John Kruper Michael Paquette Shrishti Hore Tingyi Wanyan Jiri Borovec Siddharth Kapoor Leon Weninger Malinda Dilhara Conor Corbin Dan Bullock Derek Pisner Martijn Nagtegaal ArjitJ Enes Albay Kimberly Chan Erik Ziegler Jacob Roberts Takis Panagopoulos Antoine Theberge Baran Aydogan Ross Lawrence Scott Trinkle David Qixiang Chen Kevin Sitek Pradeep Reddy Raamana Adam Richie-Halford Alexandre Gauvin Basile Pinsard David Hunt Emanuele Olivetti Emmanuelle Renauld Nicolas Delinte Sarath Chandra Clint Greene Giulia Bertò Lucas Da Costa Matthias Ekman Mitesh Gulecha Yash Sherry dependabot[bot] endolith Andrew Lawrence Bennet Fauber Emmanuel Caruyer Felix Liu Matt Cieslak Oscar Esteban Tashrif Billah Tom Wright svabhishek29 ujjwal-shekhar Adam Rybinski Aryansh Omray Chandan Gangwar Clément Zotti Gonzalo Sanguinetti Joshua Newton Naveen Kumarmarri Sylvain Merlet Vatsala Swaroop Vibhatha Abeykoon Étienne Mollier Atharva Shah Chris Filo Gorgolewski Daniel Enrico Cahall Dogu Baran Aydogan Francis Jerome Himanshu Mishra Jakob Wasserthal Jirka Borovec Jon Mendoza Katrin Leinweber Maria Luisa Mandelli Martino Pilia Michael R. Crusoe Qiyuan Tian Sagun Pai Sven Dorkenwald Theodore Ni Yijun Liu

dipy-1.11.0/Changelog

.. -*- mode: rst -*-
.. vim:syntax=rest

.. _changelog:

Dipy Development Changelog
-----------------------------

Dipy is a diffusion MR imaging library written in Python

'Close gh-' statements refer to GitHub issues that are available at::

    http://github.com/dipy/dipy/issues

The full VCS changelog is available here:

    http://github.com/dipy/dipy/commits/master

Releases
~~~~~~~~

Dipy
++++

The code found in Dipy was created by the people found in the AUTHOR file.

* 1.11.0 (Saturday, March 15, 2025)

  - NF: Refactoring of the tracking API.
  - Deprecation of Tensorflow backend in favor of PyTorch.
  - Performance improvements of multiple functionalities.
  - DIPY Horizon improvements and minor features added.
  - Added support for Python 3.13.
  - Drop support for Python 3.9.
  - Multiple Workflows updated and added (15 workflows).
  - Documentation update.
  - Closed 73 issues and merged 47 pull requests.

* 1.10.0 (Friday, December 12, 2024)

  - NF: Patch2Self3 - Large improvements of self-supervised denoising method added.
  - NF: Fiber density and spread from ODF using Bingham distributions method added.
  - NF: Iteratively reweighted least squares for robust fitting of diffusion models added.
  - NF: NDC - Neighboring DWI Correlation quality metric added.
  - NF: DAM - tissue classification method added.
  - NF: New Parallel Backends (Ray, joblib, Dask) for fitting reconstruction methods added.
  - RF: Deprecation of Tensorflow support. PyTorch support is now the default.
  - Transition to Keyword-only arguments (PEP 3102).
  - Zero-warnings policy (CIs, Compilation, doc generation) adopted.
  - Adoption of ruff for automatic style enforcement.
  - Transition to using f-strings.
  - Citation system updated. It is more uniform and robust.
  - Multiple Workflows updated.
  - Multiple DIPY Horizon features updated.
  - Large documentation update.
  - Closed 250 issues and merged 185 pull requests.

* 1.9.0 (Friday, March 8, 2024)

  - Numpy 2.0.0 support.
  - DeepN4 novel DL-based N4 Bias Correction method added.
  - Multiple Workflows added.
  - Large update of DIPY Horizon features.
  - Pytest for Cython files (*.pyx) added.
  - Large documentation update.
  - Support of Python 3.8 removed.
  - Closed 139 issues and merged 58 pull requests.

* 1.8.0 (Wednesday, December 13, 2023)

  - Python 3.12.0 support.
  - Cython 3.0.0 compatibility.
  - Migrated to Meson build system. Setuptools is no more.
  - EVAC+ novel DL-based brain extraction method added.
  - Parallel Transport Tractography (PTT) 10X faster.
  - Many Horizon updates. Fast overlays of many images.
  - New Correlation Tensor Imaging (CTI) method added.
  - Improved warnings for optional dependencies.
  - Large documentation update. New theme/design integration.
  - Closed 197 issues and merged 130 pull requests.

* 1.7.0 (Sunday, April 23, 2023)

  - NF: BundleWarp - Streamline-based nonlinear registration method for bundles added.
  - NF: DKI+ - Diffusion Kurtosis modeling with advanced constraints added.
  - NF: Synb0 - Synthetic b0 creation using deep learning added.
  - NF: New Parallel Transport Tractography (PTT) added.
  - NF: Fast Streamline Search algorithm added.
  - NF: New denoising methods based on 1D CNN added.
  - Handle Asymmetric Spherical Functions.
  - Large update of DIPY Horizon features.
  - Multiple Workflows updated.
  - Large codebase cleaning.
  - Large documentation update. Integration of Sphinx-Gallery.
  - Closed 53 issues and merged 34 pull requests.

* 1.6.0 (Monday, January 16, 2023)

  - NF: Unbiased groupwise linear bundle registration added.
  - NF: MAP+ constraints added.
  - Generalized PCA to less than 3 spatial dims.
  - Add positivity constraints to QTI.
  - Ability to apply Symmetric Diffeomorphic Registration to points/streamlines.
  - New Human Connectome Project (HCP) data fetcher added.
  - New Healthy Brain Network (HBN) data fetcher added.
  - Multiple Workflows updated (DTIFlow, LPCAFlow, MPPCA) and added (RUMBAFlow).
  - Ability to handle VTP files.
  - Large codebase cleaning.
  - Large documentation update.
  - Closed 75 issues and merged 41 pull requests.

* 1.5.0 (Friday, March 11, 2022)

  - New reconstruction model added: Q-space Trajectory Imaging (QTI).
  - New reconstruction model added: Robust and Unbiased Model-BAsed Spherical Deconvolution (RUMBA-SD).
  - New reconstruction model added: Residual block Deep Neural Network (ResDNN).
  - Masking management in Affine Registration added.
  - Multiple Workflows updated (DTIFlow, DKIFlow, ImageRegistrationFlow) and added (MotionCorrectionFlow).
  - Compatibility with Python 3.10 added.
  - Migrations from Azure Pipeline to Github Actions.
  - Large codebase cleaning.
  - New parallelisation module added.
  - ``dipy.io.bvectxt`` module deprecated.
  - New DIPY Horizon features (ROI Visualizer, random colors flag).
  - Large documentation update.
  - Closed 129 issues and merged 72 pull requests.

* 1.4.1 (Thursday, May 6, 2021)

  - Patch2Self and its documentation updated.
  - BUAN and Recobundles documentation updated.
  - Standardization and improvement of the multiprocessing / multithreading rules.
  - Community and governance information added.
  - New surface seeding module for tractography named `mesh`.
  - Large update of the Cython code with respect to the latest standard.
  - Large documentation update.
  - Closed 61 issues and merged 28 pull requests.

* 1.4.0 (Tuesday, March 14, 2021)

  - New self-supervised denoising algorithm Patch2Self added.
  - BUAN and RecoBundles documentation updated.
  - Response function refactored and clarified.
  - B-tensor allowed with response functions.
  - Large Command Line Interface (CLI) documentation updated.
  - Public API for Registration added.
  - Large documentation update.
  - Closed 47 issues and merged 19 pull requests.

* 1.3.0 (Tuesday, November 3, 2020)

  - Gibbs ringing correction 10X faster.
  - Spherical harmonics basis definitions updated.
  - Added SMT2 metrics from mean signal diffusion kurtosis.
  - New interface functions added to the registration module.
  - New linear transform added to the registration module.
  - New tutorials for DIPY command line interfaces.
  - Fixed compatibility issues with different dependencies.
  - Tqdm (multiplatform progress bar for data downloading) dependency added.
  - Large documentation update.
  - Bundle section highlight from BUAN added in Horizon.
  - Closed 134 issues and merged 49 pull requests.

* 1.2.0 (Tuesday, September 9, 2020)

  - New command line interfaces for group analysis: BUAN.
  - Added b-tensor encoding for gradient table.
  - Better support for single shell or multi-shell response functions.
  - Stats module refactored.
  - Numpy minimum version is 1.2.0.
  - Fixed compatibilities with FURY 0.6+, VTK9+, CVXPY 1.1+.
  - Added multiple tutorials for DIPY command line interfaces.
  - Updated SH basis convention.
  - Improved performance of tissue classification.
  - Fixed a memory overlap bug (multi_median).
  - Large documentation update (typography / references).
  - Closed 256 issues and merged 94 pull requests.

* 1.1.1 (Friday, January 10, 2020)

  - New module for deep learning ``dipy.nn`` (uses TensorFlow 2.0).
  - Improved DKI performance and increased utilities.
  - Non-linear and RESTORE fits from DTI now compatible with DKI.
  - Numerical solutions for estimating axial, radial and mean kurtosis.
  - Added Kurtosis Fractional Anisotropy by Glenn et al. 2015.
  - Added Mean Kurtosis Tensor by Hansen et al. 2013.
  - Nibabel minimum version is 3.0.0.
  - Azure CI added and Appveyor CI removed.
  - New command line interfaces for LPCA, MPPCA and Gibbs unringing.
  - New MSMT CSD tutorial added.
  - Horizon refactored and updated to support StatefulTractograms.
  - Speeded up all cython modules by using a smarter configuration setting.
  - All tutorials updated to API changes and 2 new tutorials added.
  - Large documentation update.
  - Closed 126 issues and merged 50 pull requests.

* 1.0.0 (Monday, 5 August 2019)

  - Critical :doc:`API changes `
  - Large refactoring of tracking API.
  - New denoising algorithm: MP-PCA.
  - New Gibbs ringing removal.
  - New interpolation module: ``dipy.core.interpolation``.
  - New reconstruction models: MTMS-CSD, Mean Signal DKI.
  - Increased coordinate systems consistency.
  - New object to safely manage tractography data: StatefulTractogram.
  - New command line interface for downloading datasets: FetchFlow.
  - Horizon updated, medical visualization interface powered by QuickBundlesX.
  - Removed all deprecated functions and parameters.
  - Removed compatibility with Python 2.7.
  - Updated minimum dependencies version (Numpy, Scipy).
  - All tutorials updated to API changes and 3 new added.
  - Large documentation update.
  - Closed 289 issues and merged 98 pull requests.

* 0.16.0 (Sunday, 10 March 2019)

  - Horizon, medical visualization interface powered by QuickBundlesX.
  - New Tractometry tools: Bundle Analysis / Bundle Profiles.
  - New reconstruction model: IVIM MIX (Variable Projection).
  - New command line interface: Affine and Diffeomorphic Registration.
  - New command line interface: Probabilistic, Deterministic and PFT Tracking.
  - Integration of Cython Guidelines for developers.
  - Replacement of Nose by Pytest.
  - Documentation update.
  - Closed 103 issues and merged 41 pull requests.

* 0.15 (Wednesday, 12 December 2018)

  - Updated RecoBundles for automatic anatomical bundle segmentation.
  - New Reconstruction Model: qtau-dMRI.
  - New command line interfaces (e.g. dipy_slr).
  - New continuous integration with AppVeyor CI.
  - Nibabel Streamlines API now used almost everywhere for better memory management.
  - Compatibility with Python 3.7.
  - Many tutorials added or updated (5 new).
  - Large documentation update.
  - Moved visualization module to a new library: FURY.
  - Closed 287 issues and merged 93 pull requests.

* 0.14 (Tuesday, 1 May 2018)

  - RecoBundles: anatomically relevant segmentation of bundles.
  - New super-fast clustering algorithm: QuickBundlesX.
  - New tracking algorithm: Particle Filtering Tracking.
  - New tracking algorithm: Probabilistic Residual Bootstrap Tracking.
  - Integration of the Streamlines API for reading, saving and processing tractograms.
  - Fiber ORientation Estimated using Continuous Axially Symmetric Tensors (Forecast).
  - New command line interfaces.
  - Deprecated fvtk (old visualization framework).
  - A range of new visualization improvements.
  - Large documentation update.

* 0.13 (Monday, 24 October 2017)

  - Faster local PCA implementation.
  - Fixed different issues with OpenMP and Windows / OSX.
  - Replacement of cvxopt by cvxpy.
  - Replacement of Pytables by h5py.
  - Updated API to support the latest numpy version (1.14).
  - New user interfaces for visualization.
  - Large documentation update.

* 0.12 (Tuesday, 26 June 2017)

  - IVIM simultaneous modeling of perfusion and diffusion.
  - MAPL, tissue microstructure estimation using Laplacian-regularized MAP-MRI.
  - DKI-based microstructural modelling.
  - Free water diffusion tensor imaging.
  - Denoising using Local PCA.
  - Streamline-based registration (SLR).
  - Fiber to bundle coherence (FBC) measures.
  - Bayesian MRF-based tissue classification.
  - New API for integrated user interfaces.
  - New hdf5 file (.pam5) for saving reconstruction results.
  - Interactive slicing of images, ODFs, and peaks.
  - Updated API to support the latest numpy versions.
  - New system for automatically generating command line interfaces.
  - Faster computation of the Cross-Correlation metric for registration.

* 0.11 (Sunday, 21 February 2016)

  - New framework for the contextual enhancement of ODFs.
  - Compatibility with numpy (1.11).
  - Compatibility with VTK 7.0 which supports Python 3.x.
  - Faster PIESNO for noise estimation.
  - Reorient gradient directions according to motion correction parameters.
  - Supporting Python 3.3+ but not 3.2.
- Reduced memory usage in DTI. - DSI can now use datasets with multiple b0s. - Fixed different issues with Windows 64bit and Python 3.5. * 0.10 (Wednesday, 2 December 2015) * Compatibility with new versions of scipy (0.16) and numpy (1.10). * New cleaner visualization API, including compatibility with VTK 6, and functions to create your own interactive visualizations. * Diffusion Kurtosis Imaging (DKI): Google Summer of Code work by Rafael Henriques. * Mean Apparent Propagator (MAP) MRI for tissue microstructure estimation. * Anisotropic Power Maps from spherical harmonic coefficients. * New framework for affine registration of images. * 0.9.2 (Wednesday, 18 March 2015) * Anatomically Constrained Tissue Classifiers for Tracking * Massive speedup of Constrained Spherical Deconvolution (CSD) * Recursive calibration of the response function for CSD * New experimental framework for clustering * Improvements and 10X speedup for QuickBundles * Improvements in Linear Fascicle Evaluation (LiFE) * New implementation of Geodesic Anisotropy * New efficient transformation functions for registration * Sparse Fascicle Model supports acquisitions with multiple b-values * 0.8.0 (Tuesday, 6 Jan 2015) * Nonlinear Image-based Registration (SyN) * Streamline-based Linear Registration (SLR) * Linear Fascicle Evaluation (LiFE) * Cross-validation for reconstruction models * Sparse Fascicle Model (SFM) * Non-local means denoising (NLMEANS) * New modular tracking machinery * Closed 388 issues and merged 155 pull requests * A variety of bug-fixes and speed improvements * 0.7.1 (Thursday, 16 Jan 2014) * Made installing Dipy easier and more universal * Fixed automated seeding problems for tracking * Removed default parameter for odf_vertices in EuDX * 0.7.0 (Monday, 23 Dec 2013) * Constrained Spherical Deconvolution (CSD) * Simple Harmonic Oscillator based Reconstruction and Estimation (SHORE) * Sharpening Deconvolution Transform (SDT) * Signal-to-noise ratio estimation * Parallel processing enabled for all reconstruction models using `peaks_from_model` * Simultaneous peak and ODF visualization * Streamtube visualization * Electrostatic repulsion for sphere generation * Connectivity matrices and density maps * Streamline filtering through specific ROIs using `target` * Brain extraction and foreground extraction using `median_otsu` * RESTORE fitting for DTI * Westin's Tensor maps * Access to more publicly available datasets directly through Dipy functions. * 3x more tutorials than the previous release * 0.6.0 (Sunday, 24 Mar 2013) * Cython 0.17+ enforced * New reconstruction models API * Diffusion Spectrum Imaging (DSI) * DSI with deconvolution * Generalized Q-sampling Imaging 2 (GQI2) * Modular fiber tracking * deterministic * probabilistic * Fast volume indexing (a faster ndindex) * Spherical Harmonic Models * Opdt (Tristan-Vega et al.) * CSA odf (Aganj et al.) * Analytical Q-ball (Descoteaux et al.) * Tuch's Q-ball (Tuch et
al.) * Visualization of spherical functions * Peak finding in ODFs * Non-linear peak finding * Sphere Object * Gradients Object * 2D sphere plotting * MultiTensor and Ball & Sticks voxel simulations * Fetch/Download data for examples * Software phantom generation * Angular similarity for comparisons between multiple peaks * SingleVoxelModel to MultiVoxelModel decorator * Mrtrix and fibernavigator SH bases * More Benchmarks * More Tests * Color FA and other Tensor metrics added * Scripts for the ISBI 2013 competition * Fit_tensor script added * Radial basis function interpolation on the sphere * New examples/tutorials * 0.5.0 (Friday, 11 Feb 2011) * Initial release. * Reconstruction algorithms e.g. GQI, DTI * Tractography generation algorithms e.g. EuDX * Intelligent downsampling of tracks * Ultra-fast tractography clustering * Resampling datasets with anisotropic voxels to isotropic * Visualizing multiple brains simultaneously * Finding track correspondence between different brains * Reading many different file formats e.g. Trackvis or Nifti * Dealing with huge tractography without memory restrictions * Playing with datasets interactively without storing * And much more and even more to come in next releases dipy-1.11.0/LICENSE000066400000000000000000000032061476546756600135770ustar00rootroot00000000000000Unless otherwise specified by LICENSE.txt files in individual directories, or within individual files or functions, all code is: Copyright (c) 2008-2025, dipy developers All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. * Neither the name of the dipy developers nor the names of any contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. dipy-1.11.0/Makefile000066400000000000000000000060131476546756600142310ustar00rootroot00000000000000# Simple makefile to quickly access handy build commands for Cython extension # code generation. Note that the actual code to produce the extension lives in # the setup.py file; this Makefile is just meant as a command # convenience/reminder while doing development. PYTHON ?= python3 PKGDIR=dipy DOCSRC_DIR=doc DOCDIR=${PKGDIR}/${DOCSRC_DIR} TESTDIR=${PKGDIR}/tests help: @echo "Numpy/Cython tasks. Available tasks:" @echo "ext -> build the Cython extension module."
@echo "cython-html -> create annotated HTML from the .pyx sources" @echo "test -> run a simple test demo." @echo "all -> Call ext, html and finally test." all: ext cython-html test ext: recspeed.so propspeed.so vox2track.so \ distances.so streamlinespeed.so denspeed.so \ vec_val_sum.so quick_squash.so vector_fields.so \ crosscorr.so sumsqdiff.so expectmax.so bundlemin.so \ cythonutils.so featurespeed.so metricspeed.so \ clusteringspeed.so clustering_algorithms.so \ mrf.so test: ext pytest -s --verbose --doctest-modules . cython-html: ${PKGDIR}/reconst/recspeed.html ${PKGDIR}/tracking/propspeed.html ${PKGDIR}/tracking/vox2track.html ${PKGDIR}/tracking/distances.html ${PKGDIR}/tracking/streamlinespeed.html ${PKGDIR}/segment/cythonutils.html ${PKGDIR}/segment/featurespeed.html ${PKGDIR}/segment/metricspeed.html ${PKGDIR}/segment/clusteringspeed.html ${PKGDIR}/segment/clustering_algorithms.html recspeed.so: ${PKGDIR}/reconst/recspeed.pyx cythonutils.so: ${PKGDIR}/segment/cythonutils.pyx featurespeed.so: ${PKGDIR}/segment/featurespeed.pyx metricspeed.so: ${PKGDIR}/segment/metricspeed.pyx mrf.so: ${PKGDIR}/segment/mrf.pyx clusteringspeed.so: ${PKGDIR}/segment/clusteringspeed.pyx clustering_algorithms.so: ${PKGDIR}/segment/clustering_algorithms.pyx propspeed.so: ${PKGDIR}/tracking/propspeed.pyx vox2track.so: ${PKGDIR}/tracking/vox2track.pyx distances.so: ${PKGDIR}/tracking/distances.pyx streamlinespeed.so: ${PKGDIR}/tracking/streamlinespeed.pyx denspeed.so: ${PKGDIR}/denoise/denspeed.pyx vec_val_sum.so: ${PKGDIR}/reconst/vec_val_sum.pyx quick_squash.so: ${PKGDIR}/reconst/quick_squash.pyx vector_fields.so: ${PKGDIR}/align/vector_fields.pyx crosscorr.so: ${PKGDIR}/align/crosscorr.pyx sumsqdiff.so: ${PKGDIR}/align/sumsqdiff.pyx expectmax.so: ${PKGDIR}/align/expectmax.pyx bundlemin.so: ${PKGDIR}/align/bundlemin.pyx $(PYTHON) setup.py build_ext --inplace # Phony targets for cleanup and similar uses .PHONY: clean clean: - find ${PKGDIR} -name "*.so" -print0 | xargs -0 rm - find ${PKGDIR} -name "*.pyd" -print0 | xargs -0 rm - find ${PKGDIR} -name "*.c" -print0 | xargs -0 rm - find ${PKGDIR} -name "*.html" -print0 | xargs -0 rm rm -rf build rm -rf docs/_build rm -rf docs/dist rm -rf dipy/dipy.egg-info distclean: clean rm -rf dist # Suffix rules %.c : %.pyx cython $< %.html : %.pyx cython -a $< source-release: clean $(PYTHON) -m compileall . $(PYTHON) -m build --sdist --wheel . binary-release: clean $(PYTHON) -m build --wheel . # Checks to see if local files pass formatting rules format: $(PYTHON) -m pycodestyle dipy dipy-1.11.0/README.rst000066400000000000000000000065731476546756600142730ustar00rootroot00000000000000.. image:: doc/_static/images/logos/dipy-logo.png :height: 180px :target: http://dipy.org :alt: DIPY - Diffusion Imaging in Python | .. image:: https://github.com/dipy/dipy/actions/workflows/test.yml/badge.svg?branch=master :target: https://github.com/dipy/dipy/actions/workflows/test.yml .. image:: https://github.com/dipy/dipy/actions/workflows/build_docs.yml/badge.svg?branch=master :target: https://github.com/dipy/dipy/actions/workflows/build_docs.yml .. image:: https://codecov.io/gh/dipy/dipy/branch/master/graph/badge.svg :target: https://codecov.io/gh/dipy/dipy .. image:: https://img.shields.io/pypi/v/dipy.svg :target: https://pypi.python.org/pypi/dipy .. image:: https://anaconda.org/conda-forge/dipy/badges/platforms.svg :target: https://anaconda.org/conda-forge/dipy .. image:: https://anaconda.org/conda-forge/dipy/badges/downloads.svg :target: https://anaconda.org/conda-forge/dipy .. 
image:: https://img.shields.io/badge/License-BSD%203--Clause-blue.svg :target: https://github.com/dipy/dipy/blob/master/LICENSE DIPY [DIPYREF]_ is a Python library for the analysis of MR diffusion imaging. DIPY is for research only; please contact admins@dipy.org if you plan to deploy in clinical settings. Website ======= Current information can always be found on the DIPY website - http://dipy.org Mailing Lists ============= Please see the DIPY community list at https://mail.python.org/mailman3/lists/dipy.python.org/ Please see the users' forum at https://github.com/dipy/dipy/discussions Please join the gitter chatroom `here `_. Code ==== You can find our sources and single-click downloads: * `Main repository`_ on GitHub. * Documentation_ for all releases and current development tree. * Download as a tar/zip file the `current trunk`_. .. _main repository: http://github.com/dipy/dipy .. _Documentation: http://dipy.org .. _current trunk: http://github.com/dipy/dipy/archives/master Installing DIPY =============== DIPY can be installed using `pip`:: pip install dipy or using `conda`:: conda install -c conda-forge dipy For detailed installation instructions, including instructions for installing from source, please read our `installation documentation `_. Python versions and dependencies -------------------------------- DIPY follows the `Scientific Python`_ `SPEC 0 — Minimum Supported Versions`_ recommendation as closely as possible, including the supported Python and dependency versions. Further information can be found in `Toolchain Roadmap `_. License ======= DIPY is licensed under the terms of the BSD license. Please see the `LICENSE file `_. Contributing ============ We welcome contributions from the community. Please read our `Contributing guidelines `_. Reference ========= .. [DIPYREF] E. Garyfallidis, M. Brett, B. Amirbekian, A. Rokem, S. Van Der Walt, M. Descoteaux, I. Nimmo-Smith and DIPY contributors, "DIPY, a library for the analysis of diffusion MRI data", Frontiers in Neuroinformatics, vol. 8, p. 8, Frontiers, 2014. .. _`Scientific Python`: https://scientific-python.org/ .. _`SPEC 0 — Minimum Supported Versions`: https://scientific-python.org/specs/spec-0000/ dipy-1.11.0/bench.ini000066400000000000000000000002341476546756600143500ustar00rootroot00000000000000# content of bench.ini # make pytest look for "bench" instead of "test" [pytest] python_files = bench_*.py python_classes = Bench python_functions = bench_*dipy-1.11.0/benchmarks/000077500000000000000000000000001476546756600147065ustar00rootroot00000000000000dipy-1.11.0/benchmarks/README.rst000066400000000000000000000071651476546756600164050ustar00rootroot00000000000000===================== 🚀 DIPY Benchmarks 📊 ===================== Benchmarking DIPY with Airspeed Velocity (ASV). Measure the speed and performance of DIPY functions easily! Prerequisites ⚙️ --------------------- Before you start, make sure you have ASV and virtualenv installed: .. code-block:: bash pip install asv pip install virtualenv Getting Started 🏃‍♂️ --------------------- DIPY Benchmarking is as easy as a piece of 🍰 with ASV. You don't need to install a development version of DIPY into your current Python environment. ASV manages virtual environments and builds DIPY automatically. Running Benchmarks 📈 --------------------- To run all available benchmarks, navigate to the root DIPY directory at the command line and execute: .. code-block:: bash spin bench This command builds DIPY and runs all available benchmarks defined in the ``benchmarks/`` directory.
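For orientation, the sketch below shows the shape of a benchmark module that this command would discover; the module and class names are hypothetical, but the ``setup``/``time_`` naming follows ASV's conventions:

.. code-block:: python

    """Hypothetical module: benchmarks/benchmarks/bench_example.py."""
    import numpy as np

    from dipy.tracking.streamline import length


    class BenchExample:
        def setup(self):
            # Called by ASV before timing; build inputs here rather than
            # inside the time_* methods, so only the measured call is timed.
            rng = np.random.default_rng(0)
            self.streamlines = [rng.random((50, 3)) for _ in range(1000)]

        def time_length(self):
            # ASV times the body of every time_* method.
            length(self.streamlines)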
Be patient; this could take a while as each benchmark is run multiple times to measure execution time distribution. For local testing without replications, unleash the power of ⚡: .. code-block:: bash cd benchmarks/ export REGEXP="bench.*Ufunc" asv run --dry-run --show-stderr --python=same --quick -b $REGEXP Here, ``$REGEXP`` is a regular expression used to match benchmarks, and ``--quick`` is used to avoid repetitions. To run benchmarks from a particular benchmark module, such as ``bench_segment.py``, simply append the filename without the extension: .. code-block:: bash spin bench -t bench_segment To run a benchmark defined in a class, such as ``BenchQuickbundles`` from ``bench_segment.py``, show your benchmarking ninja skills: .. code-block:: bash spin bench -t bench_segment.BenchQuickbundles Comparing Results 📊 -------------------- To compare benchmark results with another version/commit/branch, use the ``--compare`` option (or ``-c``): .. code-block:: bash spin bench --compare v1.7.0 -t bench_segment spin bench --compare 20d03bcfd -t bench_segment spin bench -c master -t bench_segment These commands display results in the console but don't save them for future comparisons. For greater control and to save results for future comparisons, use ASV commands: .. code-block:: bash cd benchmarks asv run -n -e --python=same asv publish asv preview Benchmarking Versions 💻 ------------------------ To benchmark or visualize releases on different machines locally, generate tags with their commits: .. code-block:: bash cd benchmarks # Get commits for tags # delete tag_commits.txt before re-runs for gtag in $(git tag --list --sort taggerdate | grep "^v"); do git log $gtag --oneline -n1 --decorate=no | awk '{print $1;}' >> tag_commits.txt done # Use the last 20 versions for maximum power 🔥 tail --lines=20 tag_commits.txt > 20_vers.txt asv run HASHFILE:20_vers.txt # Publish and view asv publish asv preview Contributing 🤝 --------------- TBD Writing Benchmarks ✏️ --------------------- See `ASV documentation `__ for basics on how to write benchmarks. Things to consider: - The benchmark suite should be importable with multiple DIPY versions. - Benchmark parameters should not depend on which DIPY version is installed. - Keep the runtime of the benchmark reasonable. - Prefer ASV's ``time_`` methods for benchmarking times. - Prepare arrays in the setup method rather than in the ``time_`` methods. - Be mindful of large arrays being created. Embrace the Speed! ⏩ --------------------- Now you're all set to benchmark DIPY with ASV and watch your code reach for the stars! Happy benchmarking! 🚀 dipy-1.11.0/benchmarks/asv.conf.json000066400000000000000000000065671476546756600173300ustar00rootroot00000000000000{ // The version of the config file format. Do not change, unless // you know what you are doing. "version": 1, // The name of the project being benchmarked "project": "dipy", // The project's homepage "project_url": "https://dipy.org", // The URL or local path of the source code repository for the // project being benchmarked "repo": "..", // List of branches to benchmark. If not provided, defaults to "master" // (for git) or "tip" (for mercurial). "branches": ["HEAD"], "build_command": [ "python -m build --wheel -o {build_cache_dir} {build_dir}" ], // The DVCS being used. If not set, it will be automatically // determined from "repo" by looking at the protocol in the URL // (if remote), or by looking for special directories, such as // ".git" (if local). "dvcs": "git", // The tool to use to create environments.
May be "conda", // "virtualenv" or other value depending on the plugins in use. // If missing or the empty string, the tool will be automatically // determined by looking for tools on the PATH environment // variable. "environment_type": "virtualenv", // the base URL to show a commit for the project. "show_commit_url": "https://github.com/dipy/dipy/commit/", // The Pythons you'd like to test against. If not provided, defaults // to the current version of Python used to run `asv`. // "pythons": ["3.12"], // The matrix of dependencies to test. Each key is the name of a // package (in PyPI) and the values are version numbers. An empty // list indicates to just test against the default (latest) // version. "matrix": { "Cython": [], "build": [], "packaging": [] }, // The directory (relative to the current directory) that benchmarks are // stored in. If not provided, defaults to "benchmarks" "benchmark_dir": "benchmarks", // The directory (relative to the current directory) to cache the Python // environments in. If not provided, defaults to "env" "env_dir": "env", // The directory (relative to the current directory) that raw benchmark // results are stored in. If not provided, defaults to "results". "results_dir": "results", // The directory (relative to the current directory) that the html tree // should be written to. If not provided, defaults to "html". "html_dir": "html", // The number of characters to retain in the commit hashes. // "hash_length": 8, // `asv` will cache wheels of the recent builds in each // environment, making them faster to install next time. This is // number of builds to keep, per environment. "build_cache_size": 8, // The commits after which the regression search in `asv publish` // should start looking for regressions. Dictionary whose keys are // regexps matching to benchmark names, and values corresponding to // the commit (exclusive) after which to start looking for // regressions. The default is to start from the first commit // with results. If the commit is `null`, regression detection is // skipped for the matching benchmark. // // "regressions_first_commits": { // "some_benchmark": "352cdf", // Consider regressions only after this commit // "another_benchmark": null, // Skip regression detection altogether // } }dipy-1.11.0/benchmarks/asv_compare.conf.json.tpl000066400000000000000000000100251476546756600216200ustar00rootroot00000000000000// This config file is almost similar to 'asv.conf.json' except it contains // custom tokens that can be substituted by 'runtests.py' and ASV, // due to the necessity to add custom build options when `--bench-compare` // is used. { // The version of the config file format. Do not change, unless // you know what you are doing. "version": 1, // The name of the project being benchmarked "project": "dipy", // The project's homepage "project_url": "https://www.dipy.org/", // The URL or local path of the source code repository for the // project being benchmarked "repo": "..", // List of branches to benchmark. If not provided, defaults to "master" // (for git) or "tip" (for mercurial). "branches": ["HEAD"], // The DVCS being used. If not set, it will be automatically // determined from "repo" by looking at the protocol in the URL // (if remote), or by looking for special directories, such as // ".git" (if local). "dvcs": "git", // The tool to use to create environments. May be "conda", // "virtualenv" or other value depending on the plugins in use. 
// If missing or the empty string, the tool will be automatically // determined by looking for tools on the PATH environment // variable. "environment_type": "virtualenv", // the base URL to show a commit for the project. "show_commit_url": "https://github.com/dipy/dipy/commit/", // The Pythons you'd like to test against. If not provided, defaults // to the current version of Python used to run `asv`. // "pythons": ["3.12"], // The matrix of dependencies to test. Each key is the name of a // package (in PyPI) and the values are version numbers. An empty // list indicates to just test against the default (latest) // version. "matrix": { "Cython": [], "setuptools": ["59.2.0"], "packaging": [] }, // The directory (relative to the current directory) that benchmarks are // stored in. If not provided, defaults to "benchmarks" "benchmark_dir": "benchmarks", // The directory (relative to the current directory) to cache the Python // environments in. If not provided, defaults to "env" // NOTE: changing the dir name will require updating `generate_asv_config()` in // runtests.py "env_dir": "env", // The directory (relative to the current directory) that raw benchmark // results are stored in. If not provided, defaults to "results". "results_dir": "results", // The directory (relative to the current directory) that the html tree // should be written to. If not provided, defaults to "html". "html_dir": "html", // The number of characters to retain in the commit hashes. // "hash_length": 8, // `asv` will cache wheels of the recent builds in each // environment, making them faster to install next time. This is the // number of builds to keep, per environment. "build_cache_size": 8, "build_command" : [ "python -m build {dipy_build_options}", // pip ignores '--global-option' when pep517 is enabled; we also enable pip's verbose output so that it can // be reached from asv `--verbose` and we can verify the build options. "PIP_NO_BUILD_ISOLATION=false python {build_dir}/benchmarks/asv_pip_nopep517.py -v {dipy_global_options} --no-deps --no-index -w {build_cache_dir} {build_dir}" ], // The commits after which the regression search in `asv publish` // should start looking for regressions. Dictionary whose keys are // regexps matching to benchmark names, and values corresponding to // the commit (exclusive) after which to start looking for // regressions. The default is to start from the first commit // with results. If the commit is `null`, regression detection is // skipped for the matching benchmark. // // "regressions_first_commits": { // "some_benchmark": "352cdf", // Consider regressions only after this commit // "another_benchmark": null, // Skip regression detection altogether // } }dipy-1.11.0/benchmarks/asv_pip_nopep517.py000066400000000000000000000017311476546756600203610ustar00rootroot00000000000000""" This file is used by asv_compare.conf.json.tpl. Note ---- This file is copied (possibly with major modifications) from the sources of the numpy project - https://github.com/numpy/numpy. It remains licensed as the rest of NUMPY (BSD 3-Clause as of November 2023). # ## ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ## # # See COPYING file distributed along with the Numpy package for the # copyright and license terms. # # ## ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ## """ import subprocess import sys # pip ignores '--global-option' when pep517 is enabled; therefore we disable it.
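# The base command below builds a wheel with pip while opting out of PEP 517;
# the extra arguments that ASV's build_command passes to this script (wheel
# output directory, package path) are appended later via sys.argv.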
cmd = [sys.executable, "-mpip", "wheel", "--no-use-pep517"] try: output = subprocess.check_output(cmd, stderr=subprocess.STDOUT, text=True) except Exception as e: output = str(e.output) if "no such option" in output: print("old version of pip, escape '--no-use-pep517'") cmd.pop() subprocess.run(cmd + sys.argv[1:]) dipy-1.11.0/benchmarks/benchmarks/000077500000000000000000000000001476546756600170235ustar00rootroot00000000000000dipy-1.11.0/benchmarks/benchmarks/__init__.py000066400000000000000000000000001476546756600211220ustar00rootroot00000000000000dipy-1.11.0/benchmarks/benchmarks/bench_reconst.py000066400000000000000000000051641476546756600222170ustar00rootroot00000000000000"""Benchmarks for ``dipy.reconst`` module.""" import numpy as np from dipy.core.sphere import unique_edges from dipy.data import default_sphere from dipy.reconst.recspeed import local_maxima from dipy.reconst.vec_val_sum import vec_val_vect class BenchRecSpeed: def setup(self): vertices, faces = default_sphere.vertices, default_sphere.faces self.edges = unique_edges(faces) self.odf = np.zeros(len(vertices)) self.odf[1] = 1.0 self.odf[143] = 143.0 self.odf[305] = 305.0 def time_local_maxima(self): local_maxima(self.odf, self.edges) class BenchVecValSum: def setup(self): def make_vecs_vals(shape): return ( np.random.randn(*shape), np.random.randn(*(shape[:-2] + shape[-1:])), ) shape = (10, 12, 3, 3) self.evecs, self.evals = make_vecs_vals(shape) def time_vec_val_vect(self): vec_val_vect(self.evecs, self.evals) # class BenchCSD: # def setup(self): # img, self.gtab, labels_img = read_stanford_labels() # data = img.get_fdata() # labels = labels_img.get_fdata() # shape = labels.shape # mask = np.isin(labels, [1, 2]) # mask.shape = shape # center = (50, 40, 40) # width = 12 # a, b, c = center # hw = width // 2 # idx = (slice(a - hw, a + hw), slice(b - hw, b + hw), # slice(c - hw, c + hw)) # self.data_small = data[idx].copy() # self.mask_small = mask[idx].copy() # voxels = self.mask_small.sum() # self.small_gtab = GradientTable(self.gtab.gradients[:75]) # def time_csdeconv_basic(self): # # TODO: define response and remove None # sh_order_max = 8 # model = ConstrainedSphericalDeconvModel(self.gtab, None, # sh_order_max=sh_order_max) # model.fit(self.data_small, self.mask_small) # def time_csdeconv_small_dataset(self): # # TODO: define response and remove None # # Smaller data set # # data_small = data_small[..., :75].copy() # sh_order_max = 8 # model = ConstrainedSphericalDeconvModel(self.small_gtab, None, # sh_order_max=sh_order_max) # model.fit(self.data_small, self.mask_small) # def time_csdeconv_super_resolution(self): # # TODO: define response and remove None # # Super resolution # sh_order_max = 12 # model = ConstrainedSphericalDeconvModel(self.gtab, None, # sh_order_max=sh_order_max) # model.fit(self.data_small, self.mask_small) dipy-1.11.0/benchmarks/benchmarks/bench_segment.py000066400000000000000000000060401476546756600221760ustar00rootroot00000000000000"""Benchmarks for ``dipy.segment`` module.""" import numpy as np from dipy.data import get_fnames from dipy.io.streamline import load_tractogram from dipy.segment.clustering import QuickBundles as QB_New from dipy.segment.mask import bounding_box from dipy.segment.metricspeed import Metric from dipy.tracking.streamline import Streamlines, set_number_of_points class BenchMask: def setup(self): self.dense_vol = np.zeros((100, 100, 100)) self.dense_vol[:] = 10 self.sparse_vol = np.zeros((100, 100, 100)) self.sparse_vol[0, 0, 0] = 1 def time_bounding_box_sparse(self):
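# Sparse case: the volume prepared in setup() has a single nonzero voxel.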
bounding_box(self.sparse_vol) def time_bounding_box_dense(self): bounding_box(self.dense_vol) class BenchQuickbundles: def setup(self): dtype = "float32" nb_points = 12 # The expected number of clusters of the fornix using threshold=10 # is 4. self.basic_parameters = { "threshold": 10, "expected_nb_clusters": 4 * 8, } fname = get_fnames(name="fornix") fornix = load_tractogram(fname, "same", bbox_valid_check=False).streamlines fornix_streamlines = Streamlines(fornix) fornix_streamlines = set_number_of_points( fornix_streamlines, nb_points=nb_points ) # Create eight copies of the fornix to be clustered (one in # each octant). self.streamlines = [] self.streamlines += [ s + np.array([100, 100, 100], dtype) for s in fornix_streamlines ] self.streamlines += [ s + np.array([100, -100, 100], dtype) for s in fornix_streamlines ] self.streamlines += [ s + np.array([100, 100, -100], dtype) for s in fornix_streamlines ] self.streamlines += [ s + np.array([100, -100, -100], dtype) for s in fornix_streamlines ] self.streamlines += [ s + np.array([-100, 100, 100], dtype) for s in fornix_streamlines ] self.streamlines += [ s + np.array([-100, -100, 100], dtype) for s in fornix_streamlines ] self.streamlines += [ s + np.array([-100, 100, -100], dtype) for s in fornix_streamlines ] self.streamlines += [ s + np.array([-100, -100, -100], dtype) for s in fornix_streamlines ] class MDFpy(Metric): def are_compatible(self, shape1, shape2): return shape1 == shape2 def dist(self, features1, features2): dist = np.sqrt(np.sum((features1 - features2) ** 2, axis=1)) dist = np.sum(dist / len(features1)) return dist self.custom_metric = MDFpy() def time_quickbundles(self): qb2 = QB_New(self.basic_parameters.get("threshold", 10)) _ = qb2.cluster(self.streamlines) def time_quickbundles_metric(self): qb = QB_New( self.basic_parameters.get("threshold", 10), metric=self.custom_metric ) _ = qb.cluster(self.streamlines) dipy-1.11.0/benchmarks/benchmarks/bench_tracking.py000066400000000000000000000036061476546756600223430ustar00rootroot00000000000000"""Benchmarks for functions related to streamlines in the ``dipy.tracking`` module.""" import numpy as np from dipy.data import get_fnames from dipy.io.streamline import load_tractogram from dipy.tracking import Streamlines from dipy.tracking.streamline import length, set_number_of_points from dipy.tracking.streamlinespeed import compress_streamlines class BenchStreamlines: def setup(self): rng = np.random.default_rng(42) nb_streamlines = 20000 min_nb_points = 2 max_nb_points = 100 def generate_streamlines(nb_streamlines, min_nb_points, max_nb_points, rng): streamlines = [ rng.random((rng.integers(min_nb_points, max_nb_points), 3)) for _ in range(nb_streamlines) ] return streamlines self.data = {} self.data["rng"] = rng self.data["nb_streamlines"] = nb_streamlines self.data["streamlines"] = generate_streamlines( nb_streamlines, min_nb_points, max_nb_points, rng=rng ) self.data["streamlines_arrseq"] = Streamlines(self.data["streamlines"]) fname = get_fnames(name="fornix") fornix = load_tractogram(fname, "same", bbox_valid_check=False).streamlines self.fornix_streamlines = Streamlines(fornix) def time_set_number_of_points(self): streamlines = self.data["streamlines"] set_number_of_points(streamlines, nb_points=50) def time_set_number_of_points_arrseq(self): streamlines = self.data["streamlines_arrseq"] set_number_of_points(streamlines, nb_points=50) def time_length(self): streamlines = self.data["streamlines"] length(streamlines) def time_length_arrseq(self): streamlines =
self.data["streamlines_arrseq"] length(streamlines) def time_compress_streamlines(self): compress_streamlines(self.fornix_streamlines) dipy-1.11.0/dipy/000077500000000000000000000000001476546756600135365ustar00rootroot00000000000000dipy-1.11.0/dipy/__init__.py000066400000000000000000000035121476546756600156500ustar00rootroot00000000000000""" Diffusion Imaging in Python ============================ For more information, please visit https://dipy.org Subpackages ----------- :: align -- Registration, streamline alignment, volume resampling core -- Spheres, gradient tables core.geometry -- Spherical geometry, coordinate and vector manipulation core.meshes -- Point distributions on the sphere data -- Small testing datasets denoise -- Denoising algorithms direction -- Manage peaks and tracking io -- Loading/saving of dpy datasets nn -- Neural networks algorithms reconst -- Signal reconstruction modules (tensor, spherical harmonics, diffusion spectrum, etc.) segment -- Tractography segmentation sims -- MRI phantom signal simulation stats -- Tractometry tracking -- Tractography, metrics for streamlines viz -- Visualization and GUIs workflows -- Predefined Command line for common tasks Utilities --------- :: test -- Run unittests __version__ -- Dipy version """ from dipy.version import version as __version__ def get_info(): from os.path import dirname import sys import numpy import dipy return { "pkg_path": dirname(__file__), "commit_hash": dipy.version.git_revision, "sys_version": sys.version, "sys_executable": sys.executable, "sys_platform": sys.platform, "np_version": numpy.__version__, "dipy_version": dipy.__version__, } submodules = [ "align", "core", "data", "denoise", "direction", "io", "nn", "reconst", "segment", "sims", "stats", "tracking", "utils", "viz", "workflows", "tests", "testing", ] __all__ = submodules + ["__version__", "setup_test", "get_info"] dipy-1.11.0/dipy/align/000077500000000000000000000000001476546756600146305ustar00rootroot00000000000000dipy-1.11.0/dipy/align/__init__.py000066400000000000000000000027611476546756600167470ustar00rootroot00000000000000import numpy as np floating = np.float32 class Bunch: def __init__(self, **kwds): r"""A 'bunch' of values (a replacement of Enum) This is a temporary replacement of Enum, which is not available on all versions of Python 2 """ self.__dict__.update(kwds) VerbosityLevels = Bunch(NONE=0, STATUS=1, DIAGNOSE=2, DEBUG=3) r""" VerbosityLevels This enum defines the four levels of verbosity we use in the align module. NONE : do not print anything STATUS : print information about the current status of the algorithm DIAGNOSE : print high level information of the components involved in the registration that can be used to detect a failing component. DEBUG : print as much information as possible to isolate the cause of a bug. 
""" from dipy.align._public import ( affine, affine_registration, center_of_mass, motion_correction, read_mapping, register_dwi_series, register_dwi_to_template, # noqa register_series, resample, rigid, rigid_isoscaling, rigid_scaling, streamline_registration, syn_registration, translation, write_mapping, ) __all__ = [ "syn_registration", "register_dwi_to_template", "write_mapping", "read_mapping", "resample", "center_of_mass", "translation", "rigid_isoscaling", "rigid_scaling", "rigid", "affine", "motion_correction", "affine_registration", "register_series", "register_dwi_series", "streamline_registration", ] dipy-1.11.0/dipy/align/_public.py000066400000000000000000000732711476546756600166310ustar00rootroot00000000000000""" Registration API: simplified API for registration of MRI data and of streamlines. """ import collections.abc from functools import partial import numbers import re from warnings import warn import nibabel as nib import numpy as np from dipy.align.imaffine import ( AffineMap, AffineRegistration, MutualInformationMetric, transform_centers_of_mass, ) from dipy.align.imwarp import DiffeomorphicMap, SymmetricDiffeomorphicRegistration from dipy.align.metrics import CCMetric, EMMetric, SSDMetric from dipy.align.streamlinear import StreamlineLinearRegistration from dipy.align.transforms import ( AffineTransform3D, RigidIsoScalingTransform3D, RigidScalingTransform3D, RigidTransform3D, TranslationTransform3D, ) import dipy.core.gradients as dpg import dipy.data as dpd from dipy.io.image import load_nifti, save_nifti from dipy.io.streamline import load_trk from dipy.io.utils import read_img_arr_or_path from dipy.testing.decorators import warning_for_keywords from dipy.tracking.streamline import set_number_of_points from dipy.tracking.utils import transform_tracking_output __all__ = [ "syn_registration", "register_dwi_to_template", "write_mapping", "read_mapping", "resample", "center_of_mass", "translation", "rigid_isoscaling", "rigid_scaling", "rigid", "affine", "motion_correction", "affine_registration", "register_series", "register_dwi_series", "streamline_registration", ] # Global dicts for choosing metrics for registration: syn_metric_dict = {"CC": CCMetric, "EM": EMMetric, "SSD": SSDMetric} affine_metric_dict = {"MI": MutualInformationMetric} @warning_for_keywords() def _handle_pipeline_inputs( moving, static, *, static_affine=None, moving_affine=None, starting_affine=None ): """ Helper function to prepare inputs for pipeline functions Parameters ---------- moving, static: Either as a 3D/4D array or as a nifti image object, or as a string containing the full path to a nifti file. static_affine, moving_affine: 2D arrays. The array associated with the static/moving images. starting_affine : 2D array, optional. This is the registration matrix that is inherited from previous steps in the pipeline. Default: 4-by-4 identity matrix. """ static, static_affine = read_img_arr_or_path(static, affine=static_affine) moving, moving_affine = read_img_arr_or_path(moving, affine=moving_affine) if starting_affine is None: starting_affine = np.eye(4) return static, static_affine, moving, moving_affine, starting_affine @warning_for_keywords() def syn_registration( moving, static, *, moving_affine=None, static_affine=None, step_length=0.25, metric="CC", dim=3, level_iters=None, prealign=None, **metric_kwargs, ): """Register a 2D/3D source image (moving) to a 2D/3D target image (static). Parameters ---------- moving, static : array or nib.Nifti1Image or str. 
Either as a 2D/3D array or as a nifti image object, or as a string containing the full path to a nifti file. moving_affine, static_affine : 4x4 array, optional. Must be provided for `data` provided as an array. If provided together with Nifti1Image or str `data`, this input will over-ride the affine that is stored in the `data` input. Default: use the affine stored in `data`. metric : string, optional The metric to be optimized. One of `CC`, `EM`, `SSD`. Default: 'CC' => CCMetric. dim: int (either 2 or 3), optional The dimensions of the image domain. Default: 3 level_iters : list of int, optional the number of iterations at each level of the Gaussian Pyramid (the length of the list defines the number of pyramid levels to be used). Default: [10, 10, 5]. metric_kwargs : dict, optional Parameters for initialization of the metric object. If not provided, uses the default settings of each metric. Returns ------- warped_moving : ndarray The data in `moving`, warped towards the `static` data. mapping : DiffeomorphicMap The mapping object that performs the warping; its `forward` and `backward` attributes hold the vector fields describing the forward (source to target) and backward (target to source) warpings. """ level_iters = level_iters or [10, 10, 5] static, static_affine, moving, moving_affine, _ = _handle_pipeline_inputs( moving, static, moving_affine=moving_affine, static_affine=static_affine, starting_affine=None, ) use_metric = syn_metric_dict[metric.upper()](dim, **metric_kwargs) sdr = SymmetricDiffeomorphicRegistration( use_metric, level_iters=level_iters, step_length=step_length ) mapping = sdr.optimize( static, moving, static_grid2world=static_affine, moving_grid2world=moving_affine, prealign=prealign, ) warped_moving = mapping.transform(moving) return warped_moving, mapping @warning_for_keywords() def register_dwi_to_template( dwi, gtab, *, dwi_affine=None, template=None, template_affine=None, reg_method="syn", **reg_kwargs, ): """Register DWI data to a template through the B0 volumes. Parameters ---------- dwi : 4D array, nifti image or str Containing the DWI data, or full path to a nifti file with DWI. gtab : GradientTable or sequence of strings The gradients associated with the DWI data, or a sequence with (fbval, fbvec), full paths to bvals and bvecs files. dwi_affine : 4x4 array, optional An affine transformation associated with the DWI. Required if data is provided as an array. If provided together with nifti/path, will over-ride the affine that is in the nifti. template : 3D array, nifti image or str Containing the data for the template, or full path to a nifti file with the template data. template_affine : 4x4 array, optional An affine transformation associated with the template. Required if data is provided as an array. If provided together with nifti/path, will over-ride the affine that is in the nifti. reg_method : str, optional One of "syn" or "aff", which designates which registration method is used. Either syn, which uses the :func:`syn_registration` function or :func:`affine_registration` function. Default: "syn". reg_kwargs : key-word arguments for :func:`syn_registration` or :func:`affine_registration` Returns ------- warped_b0 : ndarray b0 volume warped to the template. mapping : DiffeomorphicMap or ndarray If reg_method is "syn", a DiffeomorphicMap class instance that can be used to transform between the two spaces. Otherwise, if reg_method is "aff", a 4x4 matrix encoding the affine transform. Notes ----- This function assumes that the DWI data is already internally registered.
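A typical call (illustrative only; ``dwi_data``, ``gtab`` and ``affine`` are assumed to have been loaded by the caller) is ``warped_b0, mapping = register_dwi_to_template(dwi_data, gtab, dwi_affine=affine)``.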
See :func:`register_dwi_series`. """ dwi_data, dwi_affine = read_img_arr_or_path(dwi, affine=dwi_affine) if template is None: template = dpd.read_mni_template() template_data, template_affine = read_img_arr_or_path( template, affine=template_affine ) if not isinstance(gtab, dpg.GradientTable): if isinstance(gtab, (list, tuple)) and len(gtab) == 2: bvals, bvecs = gtab gtab = dpg.gradient_table(bvals, bvecs=bvecs) else: raise ValueError( "gtab should be a GradientTable object or a tuple of (bvals, bvecs)" ) mean_b0 = np.mean(dwi_data[..., gtab.b0s_mask], -1) if reg_method.lower() == "syn": warped_b0, mapping = syn_registration( mean_b0, template_data, moving_affine=dwi_affine, static_affine=template_affine, **reg_kwargs, ) elif reg_method.lower() == "aff": warped_b0, mapping = affine_registration( mean_b0, template_data, moving_affine=dwi_affine, static_affine=template_affine, **reg_kwargs, ) else: raise ValueError( f"reg_method should be one of 'aff' or 'syn', but you provided {reg_method}" ) return warped_b0, mapping def write_mapping(mapping, fname): """Write out a syn registration mapping to a nifti file. Parameters ---------- mapping : DiffeomorphicMap Registration mapping derived from :func:`syn_registration` fname : str Full path to the nifti file storing the mapping Notes ----- The data in the file is organized with shape (X, Y, Z, 3, 2), such that the forward mapping in each voxel is in `data[i, j, k, :, 0]` and the backward mapping in each voxel is in `data[i, j, k, :, 1]`. """ mapping_data = np.array([mapping.forward.T, mapping.backward.T]).T save_nifti(fname, mapping_data, mapping.codomain_world2grid) @warning_for_keywords() def read_mapping(disp, domain_img, codomain_img, *, prealign=None): """Read a syn registration mapping from a nifti file. Parameters ---------- disp : str or Nifti1Image A file or image containing the mapping displacement field in each voxel Shape (x, y, z, 3, 2) domain_img : str or Nifti1Image codomain_img : str or Nifti1Image Returns ------- A :class:`DiffeomorphicMap` object. Notes ----- See :func:`write_mapping` for the data format expected. """ if isinstance(disp, str): disp_data, disp_affine = load_nifti(disp) if isinstance(domain_img, str): domain_img = nib.load(domain_img) if isinstance(codomain_img, str): codomain_img = nib.load(codomain_img) mapping = DiffeomorphicMap( dim=3, disp_shape=disp_data.shape[:3], disp_grid2world=np.linalg.inv(disp_affine), domain_shape=domain_img.shape[:3], domain_grid2world=domain_img.affine, codomain_shape=codomain_img.shape, codomain_grid2world=codomain_img.affine, prealign=prealign, ) mapping.forward = disp_data[..., 0] mapping.backward = disp_data[..., 1] mapping.is_inverse = True return mapping @warning_for_keywords() def resample( moving, static, *, moving_affine=None, static_affine=None, between_affine=None ): """Resample an image (moving) from one space to another (static). Parameters ---------- moving : array, nifti image or str Containing the data for the moving object, or full path to a nifti file with the moving data. static : array, nifti image or str Containing the data for the static object, or full path to a nifti file with the static data. moving_affine : 4x4 array, optional An affine transformation associated with the moving object. Required if data is provided as an array. If provided together with nifti/path, will over-ride the affine that is in the nifti. static_affine : 4x4 array, optional An affine transformation associated with the static object. Required if data is provided as an array.
If provided together with nifti/path, will over-ride the affine that is in the nifti. between_affine: 4x4 array, optional If an additional affine is needed between the two spaces. Default: identity (no additional registration). Returns ------- A Nifti1Image class instance with the data from the moving object resampled into the space of the static object. """ static, static_affine, moving, moving_affine, between_affine = ( _handle_pipeline_inputs( moving, static, moving_affine=moving_affine, static_affine=static_affine, starting_affine=between_affine, ) ) affine_map = AffineMap( between_affine, domain_grid_shape=static.shape, domain_grid2world=static_affine, codomain_grid_shape=moving.shape, codomain_grid2world=moving_affine, ) resampled = affine_map.transform(moving) return nib.Nifti1Image(resampled, static_affine) @warning_for_keywords() def affine_registration( moving, static, *, moving_affine=None, static_affine=None, pipeline=None, starting_affine=None, metric="MI", level_iters=None, sigmas=None, factors=None, ret_metric=False, moving_mask=None, static_mask=None, **metric_kwargs, ): """ Find the affine transformation between two 3D images. Alternatively, find the combination of several linear transformations. Parameters ---------- moving : array, nifti image or str Containing the data for the moving object, or full path to a nifti file with the moving data. static : array, nifti image or str Containing the data for the static object, or full path to a nifti file with the static data. moving_affine : 4x4 array, optional An affine transformation associated with the moving object. Required if data is provided as an array. If provided together with nifti/path, will over-ride the affine that is in the nifti. static_affine : 4x4 array, optional An affine transformation associated with the static object. Required if data is provided as an array. If provided together with nifti/path, will over-ride the affine that is in the nifti. pipeline : list of str, optional Sequence of transforms to use in the gradual fitting. Default: gradual fit of the full affine (executed from left to right): ``["center_of_mass", "translation", "rigid", "affine"]`` Alternatively, any other combination of the following registration methods might be used: center_of_mass, translation, rigid, rigid_isoscaling, rigid_scaling and affine. starting_affine: 4x4 array, optional Initial guess for the transformation between the spaces. Default: identity. metric : str, optional. Currently only supports 'MI' for MutualInformationMetric. level_iters : sequence, optional AffineRegistration key-word argument: the number of iterations at each scale of the scale space. `level_iters[0]` corresponds to the coarsest scale, `level_iters[-1]` the finest. By default, a 3-level scale space with iterations sequence equal to [10000, 1000, 100] will be used. sigmas : sequence of floats, optional AffineRegistration key-word argument: custom smoothing parameter to build the scale space (one parameter for each scale). By default, the sequence of sigmas will be [3, 1, 0]. factors : sequence of floats, optional AffineRegistration key-word argument: custom scale factors to build the scale space (one factor for each scale). By default, the sequence of factors will be [4, 2, 1]. ret_metric : boolean, optional Set it to True to return the value of the optimized coefficients and the optimization quality metric.
moving_mask : array, shape (S', R', C') or (R', C'), optional moving image mask that defines which pixels in the moving image are used to calculate the mutual information. static_mask : array, shape (S, R, C) or (R, C), optional static image mask that defines which pixels in the static image are used to calculate the mutual information. nbins : int, optional MutualInformationMetric key-word argument: the number of bins to be used for computing the intensity histograms. The default is 32. sampling_proportion : None or float in interval (0, 1], optional MutualInformationMetric key-word argument: There are two types of sampling: dense and sparse. Dense sampling uses all voxels for estimating the (joint and marginal) intensity histograms, while sparse sampling uses a subset of them. If `sampling_proportion` is None, then dense sampling is used. If `sampling_proportion` is a floating point value in (0,1] then sparse sampling is used, where `sampling_proportion` specifies the proportion of voxels to be used. The default is None (dense sampling). Returns ------- resampled : array with moving data resampled to the static space after computing the affine transformation. final_affine : the affine 4x4 associated with the transformation. xopt : the value of the optimized coefficients. fopt : the value of the optimization quality metric. Notes ----- Performs a gradual registration between the two inputs, using a pipeline that gradually approximates the final registration. If the final default step (`affine`) is omitted, the resulting affine may not have all 12 degrees of freedom adjusted. """ pipeline = pipeline or ["center_of_mass", "translation", "rigid", "affine"] level_iters = level_iters or [10000, 1000, 100] sigmas = sigmas or [3, 1, 0.0] factors = factors or [4, 2, 1] starting_was_supplied = starting_affine is not None static, static_affine, moving, moving_affine, starting_affine = ( _handle_pipeline_inputs( moving, static, moving_affine=moving_affine, static_affine=static_affine, starting_affine=starting_affine, ) ) # Define the Affine registration object we'll use with the chosen metric. # For now, there is only one metric (mutual information) use_metric = affine_metric_dict[metric](**metric_kwargs) affreg = AffineRegistration( metric=use_metric, level_iters=level_iters, sigmas=sigmas, factors=factors ) # Convert pipeline to sanitized list of str pipeline = list(pipeline) for fi, func in enumerate(pipeline): if callable(func): for key, val in _METHOD_DICT.items(): if func is val[0]: # if they passed the callable equiv. pipeline[fi] = func = key break if not isinstance(func, str) or func not in _METHOD_DICT: raise ValueError( f"pipeline[{fi}] must be one of " f"{list(_METHOD_DICT)}, got {func!r}" ) if pipeline == ["center_of_mass"] and ret_metric: raise ValueError( "center of mass registration cannot return any quality metric." 
) # Go through the selected transformation: for func in pipeline: if func == "center_of_mass": if starting_affine is not None and starting_was_supplied: wm = "starting_affine overwritten by center_of_mass transform" warn(wm, UserWarning, stacklevel=2) # multiply images by masks for transform_centers_of_mass static_masked, moving_masked = static, moving if static_mask is not None: static_masked = static * static_mask if moving_mask is not None: moving_masked = moving * moving_mask transform = transform_centers_of_mass( static_masked, static_affine, moving_masked, moving_affine ) starting_affine = transform.affine else: transform = _METHOD_DICT[func][1]() xform, xopt, fopt = affreg.optimize( static, moving, transform, None, static_grid2world=static_affine, moving_grid2world=moving_affine, starting_affine=starting_affine, ret_metric=True, static_mask=static_mask, moving_mask=moving_mask, ) starting_affine = xform.affine # Copy the final affine into a final variable final_affine = starting_affine.copy() # After doing all that, resample once at the end: affine_map = AffineMap( final_affine, domain_grid_shape=static.shape, domain_grid2world=static_affine, codomain_grid_shape=moving.shape, codomain_grid2world=moving_affine, ) resampled = affine_map.transform(moving) # Return the optimization metric only if requested if ret_metric: return resampled, final_affine, xopt, fopt return resampled, final_affine center_of_mass = partial(affine_registration, pipeline=["center_of_mass"]) center_of_mass.__doc__ = ( "Implements a center of mass transform. Based on `affine_registration()`." ) translation = partial(affine_registration, pipeline=["translation"]) translation.__doc__ = ( "Implements a translation transform. Based on `affine_registration()`." ) rigid = partial(affine_registration, pipeline=["rigid"]) rigid.__doc__ = "Implements a rigid transform. Based on `affine_registration()`." rigid_isoscaling = partial(affine_registration, pipeline=["rigid_isoscaling"]) rigid_isoscaling.__doc__ = ( "Implements a rigid isoscaling transform. Based on `affine_registration()`." ) rigid_scaling = partial(affine_registration, pipeline=["rigid_scaling"]) rigid_scaling.__doc__ = ( "Implements a rigid scaling transform. Based on `affine_registration()`." ) affine = partial(affine_registration, pipeline=["affine"]) affine.__doc__ = "Implements an affine transform. Based on `affine_registration()`." _METHOD_DICT = { # mapping from str key -> (callable, class) tuple "center_of_mass": (center_of_mass, None), "translation": (translation, TranslationTransform3D), "rigid_isoscaling": (rigid_isoscaling, RigidIsoScalingTransform3D), "rigid_scaling": (rigid_scaling, RigidScalingTransform3D), "rigid": (rigid, RigidTransform3D), "affine": (affine, AffineTransform3D), } @warning_for_keywords() def register_series( series, ref, *, pipeline=None, series_affine=None, ref_affine=None, static_mask=None ): """Register a series to a reference image. Parameters ---------- series : 4D array or nib.Nifti1Image class instance or str The data is 4D with the last dimension separating different 3D volumes ref : int or 3D array or nib.Nifti1Image class instance or str If this is an int, this is the index of the reference image within the series. Otherwise it is an array of data to register with (associated with a `ref_affine` required) or a nifti img or full path to a file containing one. pipeline : sequence, optional Sequence of transforms to do for each volume in the series. 
Default: (executed from left to right): `[center_of_mass, translation, rigid, affine]` series_affine, ref_affine : 4x4 arrays, optional. The affine. If provided, this input will over-ride the affine provided together with the nifti img or file. static_mask : array, shape (S, R, C) or (R, C), optional static image mask that defines which pixels in the static image are used to calculate the mutual information. Returns ------- xformed, affines : 4D array with transformed data and a (4,4,n) array with the 4x4 matrices that were used to bring each of the volumes of the input moving data into register with the static data. """ pipeline = pipeline or ["center_of_mass", "translation", "rigid", "affine"] series, series_affine = read_img_arr_or_path(series, affine=series_affine) if isinstance(ref, numbers.Number): ref_as_idx = ref idxer = np.zeros(series.shape[-1]).astype(bool) idxer[ref] = True ref = series[..., idxer].squeeze() ref_affine = series_affine else: ref_as_idx = False ref, ref_affine = read_img_arr_or_path(ref, affine=ref_affine) if len(ref.shape) != 3: raise ValueError( "The reference image should be a single volume", " or the index of one or more volumes", ) xformed = np.zeros(series.shape) affines = np.zeros((4, 4, series.shape[-1])) for ii in range(series.shape[-1]): this_moving = series[..., ii] if isinstance(ref_as_idx, numbers.Number) and ii == ref_as_idx: # This is the reference! No need to move and the xform is I(4): xformed[..., ii] = this_moving affines[..., ii] = np.eye(4) else: transformed, reg_affine = affine_registration( this_moving, ref, moving_affine=series_affine, static_affine=ref_affine, pipeline=pipeline, static_mask=static_mask, ) xformed[..., ii] = transformed affines[..., ii] = reg_affine return xformed, affines @warning_for_keywords() def register_dwi_series( data, gtab, *, affine=None, b0_ref=0, pipeline=None, static_mask=None ): """Register a DWI series to the mean of the B0 images in that series, all first registered to the first B0 volume. Parameters ---------- data : 4D array or nibabel Nifti1Image class instance or str Diffusion data. Either as a 4D array or as a nifti image object, or as a string containing the full path to a nifti file. gtab : a GradientTable class instance or tuple of strings If provided as a tuple of strings, these are assumed to be full paths to the bvals and bvecs files (in that order). affine : 4x4 array, optional. Must be provided for `data` provided as an array. If provided together with Nifti1Image or str `data`, this input will over-ride the affine that is stored in the `data` input. Default: use the affine stored in `data`. b0_ref : int, optional. Which b0 volume to use as reference. Default: 0 pipeline : list of callables, optional. The transformations to perform in sequence (from left to right): Default: ``[center_of_mass, translation, rigid, affine]`` static_mask : array, shape (S, R, C) or (R, C), optional static image mask that defines which pixels in the static image are used to calculate the mutual information.
Returns ------- xform_img, affine_array: a Nifti1Image containing the registered data and using the affine of the original data and a list containing the affine transforms associated with each of the """ pipeline = pipeline or ["center_of_mass", "translation", "rigid", "affine"] data, affine = read_img_arr_or_path(data, affine=affine) if isinstance(gtab, collections.abc.Sequence): gtab = dpg.gradient_table(*gtab) if np.sum(gtab.b0s_mask) > 1: # First, register the b0s into one image and average: b0_img = nib.Nifti1Image(data[..., gtab.b0s_mask], affine) trans_b0, b0_affines = register_series( b0_img, ref=b0_ref, pipeline=pipeline, static_mask=static_mask ) ref_data = np.mean(trans_b0, -1, keepdims=True) else: # There's only one b0 and we register everything to it trans_b0 = ref_data = data[..., gtab.b0s_mask] b0_affines = np.eye(4)[..., np.newaxis] # Construct a series out of the DWI and the registered mean B0: moving_data = data[..., ~gtab.b0s_mask] series_arr = np.concatenate([ref_data, moving_data], -1) series = nib.Nifti1Image(series_arr, affine) xformed, affines = register_series( series, ref=0, pipeline=pipeline, static_mask=static_mask ) # Cut out the part pertaining to that first volume: affines = affines[..., 1:] xformed = xformed[..., 1:] affine_array = np.zeros((4, 4, data.shape[-1])) affine_array[..., gtab.b0s_mask] = b0_affines affine_array[..., ~gtab.b0s_mask] = affines data_array = np.zeros(data.shape) data_array[..., gtab.b0s_mask] = trans_b0 data_array[..., ~gtab.b0s_mask] = xformed return nib.Nifti1Image(data_array, affine), affine_array motion_correction = partial( register_dwi_series, pipeline=["center_of_mass", "translation", "rigid", "affine"] ) motion_correction.__doc__ = re.sub( "Register.*?volume", "Apply a motion " "correction to a DWI dataset " "(Between-Volumes Motion correction)", register_dwi_series.__doc__, flags=re.DOTALL, ) @warning_for_keywords() def streamline_registration(moving, static, *, n_points=100, native_resampled=False): """Register two collections of streamlines ('bundles') to each other. Parameters ---------- moving, static : lists of 3 by n, or str The two bundles to be registered. Given either as lists of arrays with 3D coordinates, or strings containing full paths to these files. n_points : int, optional How many points to resample to. Default: 100. native_resampled : bool, optional Whether to return the moving bundle in the original space, but resampled in the static space to n_points. Returns ------- aligned : list Streamlines from the moving group, moved to be closely matched to the static group. 
matrix : array (4, 4) The affine transformation that takes us from 'moving' to 'static' """ # Load the streamlines, if you were given a file-name if isinstance(moving, str): moving = load_trk(moving, "same", bbox_valid_check=False).streamlines if isinstance(static, str): static = load_trk(static, "same", bbox_valid_check=False).streamlines srr = StreamlineLinearRegistration() srm = srr.optimize( static=set_number_of_points(static, nb_points=n_points), moving=set_number_of_points(moving, nb_points=n_points), ) aligned = srm.transform(moving) if native_resampled: aligned = set_number_of_points(aligned, nb_points=n_points) aligned = transform_tracking_output(aligned, np.linalg.inv(srm.matrix)) return aligned, srm.matrix dipy-1.11.0/dipy/align/bundlemin.pyx000066400000000000000000000260651476546756600173600ustar00rootroot00000000000000#!python #cython: boundscheck=False #cython: wraparound=False #cython: cdivision=True import numpy as np cimport numpy as cnp cimport safe_openmp as openmp from safe_openmp cimport have_openmp from cython.parallel import prange from libc.stdlib cimport malloc, free from libc.math cimport sqrt from dipy.utils.omp import determine_num_threads from dipy.utils.omp cimport set_num_threads, restore_default_num_threads cdef cnp.dtype f64_dt = np.dtype(np.float64) cdef double min_direct_flip_dist(double *a,double *b, cnp.npy_intp rows) noexcept nogil: r""" Minimum of direct and flip average (MDF) distance between two streamlines. See :footcite:p:`Garyfallidis2012a` for a definition of the distance. Parameters ---------- a : double pointer first streamline b : double pointer second streamline rows : number of points of the streamline both tracks need to have the same number of points Returns ------- out : double minimum of direct and flipped average distances References ---------- .. footbibliography:: """ cdef: cnp.npy_intp i=0, j=0 double sub=0, subf=0, distf=0, dist=0, tmprow=0, tmprowf=0 for i in range(rows): tmprow = 0 tmprowf = 0 for j in range(3): sub = a[i * 3 + j] - b[i * 3 + j] subf = a[i * 3 + j] - b[(rows - 1 - i) * 3 + j] tmprow += sub * sub tmprowf += subf * subf dist += sqrt(tmprow) distf += sqrt(tmprowf) dist = dist / rows distf = distf / rows if dist <= distf: return dist return distf def _bundle_minimum_distance_matrix(double [:, ::1] static, double [:, ::1] moving, cnp.npy_intp static_size, cnp.npy_intp moving_size, cnp.npy_intp rows, double [:, ::1] D, num_threads=None): """ MDF-based pairwise distance optimization function We minimize the distance between moving streamlines of the same number of points as they align with the static streamlines. Parameters ---------- static: array Static streamlines moving: array Moving streamlines static_size : int Number of static streamlines moving_size : int Number of moving streamlines rows : int Number of points per streamline D : 2D array Distance matrix num_threads : int, optional Number of threads to be used for OpenMP parallelization. If None (default) the value of OMP_NUM_THREADS environment variable is used if it is set, otherwise all available threads are used. If < 0 the maximal number of threads minus |num_threads + 1| is used (enter -1 to use as many threads as possible). 0 raises an error. 
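For intuition (and for checking results on small inputs), the same direct-versus-flipped MDF distance can be restated in plain NumPy; this is a sketch, not a replacement for the compiled kernel:

import numpy as np

def mdf_numpy(a, b):
    # a, b: (K, 3) streamlines with the same number of points, as the
    # kernel above requires.
    direct = np.linalg.norm(a - b, axis=1).mean()
    flipped = np.linalg.norm(a - b[::-1], axis=1).mean()
    return min(direct, flipped)

# The second streamline is the first one reversed, so the flipped term
# (and hence the MDF distance) is exactly zero.
s1 = np.column_stack([np.linspace(0, 1, 5)] * 3)
s2 = s1[::-1]
print(mdf_numpy(s1, s2))  # 0.0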
Returns ------- cost : double """ cdef: cnp.npy_intp i=0, j=0, mov_i=0, mov_j=0 int threads_to_use = -1 threads_to_use = determine_num_threads(num_threads) set_num_threads(threads_to_use) with nogil: for i in prange(static_size): for j in prange(moving_size): D[i, j] = min_direct_flip_dist(&static[i * rows, 0], &moving[j * rows, 0], rows) if num_threads is not None: restore_default_num_threads() return np.asarray(D) def _bundle_minimum_distance(double [:, ::1] static, double [:, ::1] moving, cnp.npy_intp static_size, cnp.npy_intp moving_size, cnp.npy_intp rows, num_threads=None): """ MDF-based pairwise distance optimization function We minimize the distance between moving streamlines of the same number of points as they align with the static streamlines. Parameters ---------- static : array Static streamlines moving : array Moving streamlines static_size : int Number of static streamlines moving_size : int Number of moving streamlines rows : int Number of points per streamline num_threads : int, optional Number of threads to be used for OpenMP parallelization. If None (default) the value of OMP_NUM_THREADS environment variable is used if it is set, otherwise all available threads are used. If < 0 the maximal number of threads minus |num_threads + 1| is used (enter -1 to use as many threads as possible). 0 raises an error. Returns ------- cost : double Notes ----- The difference with ``_bundle_minimum_distance_matrix`` is that it does not save the full distance matrix and therefore needs much less memory. """ cdef: cnp.npy_intp i=0, j=0 double sum_i=0, sum_j=0, tmp=0 double inf = np.finfo('f8').max double dist=0 double * min_j double * min_i openmp.omp_lock_t lock int threads_to_use = -1 threads_to_use = determine_num_threads(num_threads) set_num_threads(threads_to_use) with nogil: if have_openmp: openmp.omp_init_lock(&lock) min_j = malloc(static_size * sizeof(double)) min_i = malloc(moving_size * sizeof(double)) for i in range(static_size): min_j[i] = inf for j in range(moving_size): min_i[j] = inf for i in prange(static_size): for j in range(moving_size): tmp = min_direct_flip_dist(&static[i * rows, 0], &moving[j * rows, 0], rows) if have_openmp: openmp.omp_set_lock(&lock) if tmp < min_j[i]: min_j[i] = tmp if tmp < min_i[j]: min_i[j] = tmp if have_openmp: openmp.omp_unset_lock(&lock) if have_openmp: openmp.omp_destroy_lock(&lock) for i in range(static_size): sum_i += min_j[i] for j in range(moving_size): sum_j += min_i[j] free(min_j) free(min_i) dist = (sum_i / static_size + sum_j / moving_size) dist = 0.25 * dist * dist if num_threads is not None: restore_default_num_threads() return dist def _bundle_minimum_distance_asymmetric(double [:, ::1] static, double [:, ::1] moving, cnp.npy_intp static_size, cnp.npy_intp moving_size, cnp.npy_intp rows): """ MDF-based pairwise distance optimization function We minimize the distance between moving streamlines of the same number of points as they align with the static streamlines. Parameters ---------- static : array Static streamlines moving : array Moving streamlines static_size : int Number of static streamlines moving_size : int Number of moving streamlines rows : int Number of points per streamline Returns ------- cost : double Notes ----- The difference with ``_bundle_minimum_distance`` is that we sum the minimum values only for the static. Therefore, this is an asymmetric distance metric. This means that we are weighting only one direction of the registration. Not both directions. 
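The scalar cost that `_bundle_minimum_distance` accumulates can be reproduced from a full distance matrix; a NumPy sketch (memory-hungry, which is precisely what the streaming version above avoids):

import numpy as np

def bmd_cost(D):
    # D[i, j]: MDF distance between static streamline i and moving
    # streamline j; mirrors the final reduction in the Cython code above.
    sum_i = D.min(axis=1).mean()  # best moving match per static line
    sum_j = D.min(axis=0).mean()  # best static match per moving line
    dist = sum_i + sum_j
    return 0.25 * dist * dist

D = np.array([[0.0, 2.0],
              [3.0, 1.0]])
print(bmd_cost(D))  # 0.25 * (0.5 + 0.5) ** 2 = 0.25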
This can be very useful when we want to register a big set of bundles to a small set of bundles. See :footcite:p:`Wanyan2017`. References ---------- .. footbibliography:: """ cdef: cnp.npy_intp i=0, j=0 double sum_i=0, sum_j=0, tmp=0 double inf = np.finfo('f8').max double dist=0 double * min_j openmp.omp_lock_t lock with nogil: if have_openmp: openmp.omp_init_lock(&lock) min_j = malloc(static_size * sizeof(double)) for sz_i in range(static_size): min_j[sz_i] = inf for i in prange(static_size): for j in range(moving_size): tmp = min_direct_flip_dist(&static[i * rows, 0], &moving[j * rows, 0], rows) if have_openmp: openmp.omp_set_lock(&lock) if tmp < min_j[i]: min_j[i] = tmp if have_openmp: openmp.omp_unset_lock(&lock) if have_openmp: openmp.omp_destroy_lock(&lock) for i in range(static_size): sum_i += min_j[i] free(min_j) dist = sum_i / static_size return dist def distance_matrix_mdf(streamlines_a, streamlines_b): r""" Minimum direct flipped distance matrix between two streamline sets All streamlines need to have the same number of points Parameters ---------- streamlines_a : sequence of streamlines as arrays, [(N, 3) .. (N, 3)] streamlines_b : sequence of streamlines as arrays, [(N, 3) .. (N, 3)] Returns ------- DM : array, shape (len(streamlines_a), len(streamlines_b)) distance matrix """ cdef: cnp.npy_intp i, j, lentA, lentB # preprocess tracks cdef: cnp.npy_intp longest_track_len = 0, track_len longest_track_lenA, longest_track_lenB cnp.ndarray[object, ndim=1] tracksA64 cnp.ndarray[object, ndim=1] tracksB64 cnp.ndarray[cnp.double_t, ndim=2] DM lentA = len(streamlines_a) lentB = len(streamlines_b) tracksA64 = np.zeros((lentA,), dtype=object) tracksB64 = np.zeros((lentB,), dtype=object) DM = np.zeros((lentA,lentB), dtype=np.double) if streamlines_a[0].shape[0] != streamlines_b[0].shape[0]: msg = 'Streamlines should have the same number of points as required' msg += 'by the MDF distance' raise ValueError(msg) # process tracks to predictable memory layout for i in range(lentA): tracksA64[i] = np.ascontiguousarray(streamlines_a[i], dtype=f64_dt) for i in range(lentB): tracksB64[i] = np.ascontiguousarray(streamlines_b[i], dtype=f64_dt) # preallocate buffer array for track distance calculations cdef: cnp.float64_t *t1_ptr cnp.float64_t *t2_ptr cnp.float64_t *min_buffer # cycle over tracks cdef: cnp.ndarray [cnp.float64_t, ndim=2] t1, t2 cnp.npy_intp t1_len, t2_len double d[2] t_len = tracksA64[0].shape[0] for i from 0 <= i < lentA: t1 = tracksA64[i] t1_ptr = cnp.PyArray_DATA(t1) for j from 0 <= j < lentB: t2 = tracksB64[j] t2_ptr = cnp.PyArray_DATA(t2) DM[i, j] = min_direct_flip_dist(t1_ptr, t2_ptr,t_len) return DM dipy-1.11.0/dipy/align/cpd.py000066400000000000000000000346051476546756600157600ustar00rootroot00000000000000""" Note ---- This file is copied (possibly with major modifications) from the sources of the pycpd project - https://github.com/siavashk/pycpd. It remains licensed as the rest of PyCPD (MIT license as of October 2010). # ## ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ## # # See COPYING file distributed along with the PyCPD package for the # copyright and license terms. 
# # ## ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ## """ import numbers from warnings import warn import numpy as np from dipy.testing.decorators import warning_for_keywords def gaussian_kernel(X, beta, Y=None): if Y is None: Y = X diff = X[:, None, :] - Y[None, :, :] diff = np.square(diff) diff = np.sum(diff, 2) return np.exp(-diff / (2 * beta**2)) def low_rank_eigen(G, num_eig): """Calculate num_eig eigenvectors and eigenvalues of gaussian matrix G. Enables lower dimensional solving. """ S, Q = np.linalg.eigh(G) eig_indices = list(np.argsort(np.abs(S))[::-1][:num_eig]) Q = Q[:, eig_indices] # eigenvectors S = S[eig_indices] # eigenvalues. return Q, S def initialize_sigma2(X, Y): """Initialize the variance (sigma2). Parameters ---------- X: numpy array NxD array of points for target. Y: numpy array MxD array of points for source. Returns ------- sigma2: float Initial variance. """ (N, D) = X.shape (M, _) = Y.shape diff = X[None, :, :] - Y[:, None, :] err = diff**2 return np.sum(err) / (D * M * N) @warning_for_keywords() def lowrankQS(G, beta, num_eig, *, eig_fgt=False): """Calculate eigenvectors and eigenvalues of gaussian matrix G. !!! This function is a placeholder for implementing the fast gauss transform. It is not yet implemented. !!! Parameters ---------- G: numpy array Gaussian kernel matrix. beta: float Width of the Gaussian kernel. num_eig: int Number of eigenvectors to use in lowrank calculation of G eig_fgt: bool If True, use fast gauss transform method to speed up. """ # if we do not use FGT we construct affinity matrix G and find the # first eigenvectors/values directly if eig_fgt is False: S, Q = np.linalg.eigh(G) eig_indices = list(np.argsort(np.abs(S))[::-1][:num_eig]) Q = Q[:, eig_indices] # eigenvectors S = S[eig_indices] # eigenvalues. return Q, S elif eig_fgt is True: raise Exception("Fast Gauss Transform Not Implemented!") class DeformableRegistration: """ Deformable point cloud registration. Attributes ---------- X: numpy array NxD array of target points. Y: numpy array MxD array of source points. TY: numpy array MxD array of transformed source points. sigma2: float (positive) Initial variance of the Gaussian mixture model. N: int Number of target points. M: int Number of source points. D: int Dimensionality of source and target points iteration: int The current iteration throughout registration. max_iterations: int Registration will terminate once the algorithm has taken this many iterations. tolerance: float (positive) Registration will terminate once the difference between consecutive objective function values falls within this tolerance. w: float (between 0 and 1) Contribution of the uniform distribution to account for outliers. Valid values span 0 (inclusive) and 1 (exclusive). q: float The objective function value that represents the misalignment between source and target point clouds. diff: float (positive) The absolute difference between the current and previous objective function values. P: numpy array MxN array of probabilities. P[m, n] represents the probability that the m-th source point corresponds to the n-th target point. Pt1: numpy array Nx1 column array. Multiplication result between the transpose of P and a column vector of all 1s. P1: numpy array Mx1 column array. Multiplication result between P and a column vector of all 1s. Np: float (positive) The sum of all elements in P. alpha: float (positive) Represents the trade-off between the goodness of maximum likelihoo fit and regularization. 
beta: float(positive) Width of the Gaussian kernel. low_rank: bool Whether to use low rank approximation. num_eig: int Number of eigenvectors to use in lowrank calculation. """ def __init__( self, X, Y, *args, sigma2=None, alpha=None, beta=None, low_rank=False, num_eig=100, max_iterations=None, tolerance=None, w=None, **kwargs, ): if not isinstance(X, np.ndarray) or X.ndim != 2: raise ValueError("The target point cloud (X) must be at a 2D numpy array.") if not isinstance(Y, np.ndarray) or Y.ndim != 2: raise ValueError("The source point cloud (Y) must be a 2D numpy array.") if X.shape[1] != Y.shape[1]: msg = "Both point clouds need to have the same number " msg += "of dimensions." raise ValueError(msg) if sigma2 is not None and ( not isinstance(sigma2, numbers.Number) or sigma2 <= 0 ): msg = f"Expected a positive value for sigma2 instead got: {sigma2}" raise ValueError(msg) if max_iterations is not None and ( not isinstance(max_iterations, numbers.Number) or max_iterations < 0 ): msg = "Expected a positive integer for max_iterations " msg += f"instead got: {max_iterations}" raise ValueError(msg) elif isinstance(max_iterations, numbers.Number) and not isinstance( max_iterations, int ): msg = "Received a non-integer value for max_iterations: " msg += f"{max_iterations}. Casting to integer." warn(msg, stacklevel=2) max_iterations = int(max_iterations) if tolerance is not None and ( not isinstance(tolerance, numbers.Number) or tolerance < 0 ): msg = "Expected a positive float for tolerance " msg += f"instead got: {tolerance}" raise ValueError(msg) if w is not None and (not isinstance(w, numbers.Number) or w < 0 or w >= 1): msg = "Expected a value between 0 (inclusive) and 1 (exclusive) " msg += f"for w instead got: {w}" raise ValueError(msg) self.X = X self.Y = Y self.TY = Y self.sigma2 = initialize_sigma2(X, Y) if sigma2 is None else sigma2 (self.N, self.D) = self.X.shape (self.M, _) = self.Y.shape self.tolerance = 0.001 if tolerance is None else tolerance self.w = 0.0 if w is None else w self.max_iterations = 100 if max_iterations is None else max_iterations self.iteration = 0 self.diff = np.inf self.q = np.inf self.P = np.zeros((self.M, self.N)) self.Pt1 = np.zeros((self.N,)) self.P1 = np.zeros((self.M,)) self.PX = np.zeros((self.M, self.D)) self.Np = 0 if alpha is not None and (not isinstance(alpha, numbers.Number) or alpha <= 0): msg = "Expected a positive value for regularization parameter " msg += f"alpha. Instead got: {alpha}" raise ValueError(msg) if beta is not None and (not isinstance(beta, numbers.Number) or beta <= 0): msg = "Expected a positive value for the width of the coherent " msg += f"Gaussian kernel. Instead got: {beta}" self.alpha = 2 if alpha is None else alpha self.beta = 2 if beta is None else beta self.W = np.zeros((self.M, self.D)) self.G = gaussian_kernel(self.Y, self.beta) self.low_rank = low_rank self.num_eig = num_eig if self.low_rank is True: self.Q, self.S = low_rank_eigen(self.G, self.num_eig) self.inv_S = np.diag(1.0 / self.S) self.S = np.diag(self.S) self.E = 0.0 @warning_for_keywords() def register(self, *, callback=lambda **kwargs: None): """ Perform the EM registration. Parameters ---------- callback: function A function that will be called after each iteration. Can be used to visualize the registration process. Returns ------- self.TY: numpy array MxD array of transformed source points. registration_parameters: Returned params dependent on registration method used. 
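A minimal end-to-end sketch (the toy 2D clouds and parameter values below are illustrative assumptions, not recommended defaults):

import numpy as np
from dipy.align.cpd import DeformableRegistration, initialize_sigma2

rng = np.random.default_rng(3)
X = rng.random((30, 2))             # target cloud
Y = X + np.array([0.1, -0.05])      # source: a translated copy

print("initial sigma2:", initialize_sigma2(X, Y))

def report(iteration, error, X, Y):
    # Invoked once per EM iteration; Y holds the transformed source.
    if iteration % 10 == 0:
        print(f"iter {iteration:3d}  mean |TY - X| = {np.abs(Y - X).mean():.4f}")

reg = DeformableRegistration(X, Y, alpha=2, beta=2, max_iterations=50)
TY, (G, W) = reg.register(callback=report)
print("final residual:", np.abs(TY - X).mean())  # should shrink markedly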
""" self.transform_point_cloud() while self.iteration < self.max_iterations and self.diff > self.tolerance: self.iterate() if callable(callback): kwargs = { "iteration": self.iteration, "error": self.q, "X": self.X, "Y": self.TY, } callback(**kwargs) return self.TY, self.get_registration_parameters() def update_transform(self): """ Calculate a new estimate of the deformable transformation. See Eq. 22 of https://arxiv.org/pdf/0905.2635.pdf. """ if self.low_rank is False: A = np.dot(np.diag(self.P1), self.G) + self.alpha * self.sigma2 * np.eye( self.M ) B = self.PX - np.dot(np.diag(self.P1), self.Y) self.W = np.linalg.solve(A, B) elif self.low_rank is True: # Matlab code equivalent can be found here: # https://github.com/markeroon/matlab-computer-vision-routines/tree/master/third_party/CoherentPointDrift dP = np.diag(self.P1) dPQ = np.matmul(dP, self.Q) F = self.PX - np.matmul(dP, self.Y) self.W = ( 1 / (self.alpha * self.sigma2) * ( F - np.matmul( dPQ, ( np.linalg.solve( ( self.alpha * self.sigma2 * self.inv_S + np.matmul(self.Q.T, dPQ) ), (np.matmul(self.Q.T, F)), ) ), ) ) ) QtW = np.matmul(self.Q.T, self.W) self.E = self.E + self.alpha / 2 * np.trace( np.matmul(QtW.T, np.matmul(self.S, QtW)) ) @warning_for_keywords() def transform_point_cloud(self, *, Y=None): """Update a point cloud using the new estimate of the deformable transformation. Parameters ---------- Y: numpy array, optional Array of points to transform - use to predict on new set of points. Best for predicting on new points not used to run initial registration. If None, self.Y used. Returns ------- If Y is None, returns None. Otherwise, returns the transformed Y. """ if Y is not None: G = gaussian_kernel(X=Y, beta=self.beta, Y=self.Y) return Y + np.dot(G, self.W) else: if self.low_rank is False: self.TY = self.Y + np.dot(self.G, self.W) elif self.low_rank is True: self.TY = self.Y + np.matmul( self.Q, np.matmul(self.S, np.matmul(self.Q.T, self.W)) ) return def update_variance(self): """Update the variance of the mixture model. This is using the new estimate of the deformable transformation. See the update rule for sigma2 in Eq. 23 of of https://arxiv.org/pdf/0905.2635.pdf. """ qprev = self.sigma2 # The original CPD paper does not explicitly calculate the objective # functional. This functional will include terms from both the negative # log-likelihood and the Gaussian kernel used for regularization. self.q = np.inf xPx = np.dot( np.transpose(self.Pt1), np.sum(np.multiply(self.X, self.X), axis=1) ) yPy = np.dot( np.transpose(self.P1), np.sum(np.multiply(self.TY, self.TY), axis=1) ) trPXY = np.sum(np.multiply(self.TY, self.PX)) self.sigma2 = (xPx - 2 * trPXY + yPy) / (self.Np * self.D) if self.sigma2 <= 0: self.sigma2 = self.tolerance / 10 # Here we use the difference between the current and previous # estimate of the variance as a proxy to test for convergence. self.diff = np.abs(self.sigma2 - qprev) def get_registration_parameters(self): """Return the current estimate of the deformable transformation parameters. Returns ------- self.G: numpy array Gaussian kernel matrix. self.W: numpy array Deformable transformation matrix. 
""" return self.G, self.W def iterate(self): """Perform one iteration of the EM algorithm.""" self.expectation() self.maximization() self.iteration += 1 def expectation(self): """Compute the expectation step of the EM algorithm.""" # (M, N) P = np.sum((self.X[None, :, :] - self.TY[:, None, :]) ** 2, axis=2) P = np.exp(-P / (2 * self.sigma2)) c = ( (2 * np.pi * self.sigma2) ** (self.D / 2) * self.w / (1.0 - self.w) * self.M / self.N ) den = np.sum(P, axis=0, keepdims=True) # (1, N) den = np.clip(den, np.finfo(self.X.dtype).eps, None) + c self.P = np.divide(P, den) self.Pt1 = np.sum(self.P, axis=0) self.P1 = np.sum(self.P, axis=1) self.Np = np.sum(self.P1) self.PX = np.matmul(self.P, self.X) def maximization(self): """Compute the maximization step of the EM algorithm.""" self.update_transform() self.transform_point_cloud() self.update_variance() dipy-1.11.0/dipy/align/crosscorr.pyx000066400000000000000000001002411476546756600174070ustar00rootroot00000000000000""" Utility functions used by the Cross Correlation (CC) metric """ import numpy as np from dipy.align.fused_types cimport floating cimport cython cimport numpy as cnp cdef inline int _int_max(int a, int b) noexcept nogil: r""" Returns the maximum of a and b """ return a if a >= b else b cdef inline int _int_min(int a, int b) noexcept nogil: r""" Returns the minimum of a and b """ return a if a <= b else b cdef enum: SI = 0 SI2 = 1 SJ = 2 SJ2 = 3 SIJ = 4 CNT = 5 @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef inline int _wrap(int x, int m) noexcept nogil: r""" Auxiliary function to `wrap` an array around its low-end side. Negative indices are mapped to last coordinates so that no extra memory is required to account for local rectangular windows that exceed the array's low-end boundary. Parameters ---------- x : int the array position to be wrapped m : int array length """ if x < 0: return x + m return x @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef inline void _update_factors(double[:, :, :, :] factors, floating[:, :, :] moving, floating[:, :, :] static, cnp.npy_intp ss, cnp.npy_intp rr, cnp.npy_intp cc, cnp.npy_intp s, cnp.npy_intp r, cnp.npy_intp c, int operation)noexcept nogil: r"""Updates the precomputed CC factors of a rectangular window Updates the precomputed CC factors of the rectangular window centered at (`ss`, `rr`, `cc`) by adding the factors corresponding to voxel (`s`, `r`, `c`) of input images `moving` and `static`. 
Parameters ---------- factors : array, shape (S, R, C, 5) array containing the current precomputed factors to be updated moving : array, shape (S, R, C) the moving volume (notice that both images must already be in a common reference domain, in particular, they must have the same shape) static : array, shape (S, R, C) the static volume, which also defines the reference registration domain ss : int first coordinate of the rectangular window to be updated rr : int second coordinate of the rectangular window to be updated cc : int third coordinate of the rectangular window to be updated s: int first coordinate of the voxel the local window should be updated with r: int second coordinate of the voxel the local window should be updated with c: int third coordinate of the voxel the local window should be updated with operation : int, either -1, 0 or 1 indicates whether the factors of voxel (`s`, `r`, `c`) should be added to (`operation`=1), subtracted from (`operation`=-1), or set as (`operation`=0) the current factors for the rectangular window centered at (`ss`, `rr`, `cc`). """ cdef: double sval double mval if s >= moving.shape[0] or r >= moving.shape[1] or c >= moving.shape[2]: if operation == 0: factors[ss, rr, cc, SI] = 0 factors[ss, rr, cc, SI2] = 0 factors[ss, rr, cc, SJ] = 0 factors[ss, rr, cc, SJ2] = 0 factors[ss, rr, cc, SIJ] = 0 else: sval = static[s, r, c] mval = moving[s, r, c] if operation == 0: factors[ss, rr, cc, SI] = sval factors[ss, rr, cc, SI2] = sval*sval factors[ss, rr, cc, SJ] = mval factors[ss, rr, cc, SJ2] = mval*mval factors[ss, rr, cc, SIJ] = sval*mval elif operation == -1: factors[ss, rr, cc, SI] -= sval factors[ss, rr, cc, SI2] -= sval*sval factors[ss, rr, cc, SJ] -= mval factors[ss, rr, cc, SJ2] -= mval*mval factors[ss, rr, cc, SIJ] -= sval*mval elif operation == 1: factors[ss, rr, cc, SI] += sval factors[ss, rr, cc, SI2] += sval*sval factors[ss, rr, cc, SJ] += mval factors[ss, rr, cc, SJ2] += mval*mval factors[ss, rr, cc, SIJ] += sval*mval @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def precompute_cc_factors_3d(floating[:, :, :] static, floating[:, :, :] moving, cnp.npy_intp radius, num_threads=None): """Precomputations to quickly compute the gradient of the CC Metric. Pre-computes the separate terms of the cross correlation metric and image norms at each voxel considering a neighborhood of the given radius to efficiently compute the gradient of the metric with respect to the deformation field :footcite:p:`Ocegueda2016`, :footcite:p:`Avants2008`, :footcite:p:`Avants2009`. Parameters ---------- static : array, shape (S, R, C) the static volume, which also defines the reference registration domain moving : array, shape (S, R, C) the moving volume (notice that both images must already be in a common reference domain, i.e. the same S, R, C) radius : the radius of the neighborhood (cube of (2 * radius + 1)^3 voxels) Returns ------- factors : array, shape (S, R, C, 5) the precomputed cross correlation terms:: - factors[:,:,:,0] : static minus its mean value along the neighborhood - factors[:,:,:,1] : moving minus its mean value along the neighborhood - factors[:,:,:,2] : sum of the pointwise products of static and moving along the neighborhood - factors[:,:,:,3] : sum of sq. values of static along the neighborhood - factors[:,:,:,4] : sum of sq. values of moving along the neighborhood References ---------- .. 
footbibliography:: """ cdef: cnp.npy_intp ns = static.shape[0] cnp.npy_intp nr = static.shape[1] cnp.npy_intp nc = static.shape[2] cnp.npy_intp side = 2 * radius + 1 cnp.npy_intp firstc, lastc, firstr, lastr, firsts, lasts cnp.npy_intp s, r, c, it, sides, sider, sidec double cnt cnp.npy_intp ssss, sss, ss, rr, cc, prev_ss, prev_rr, prev_cc double Imean, Jmean, IJprods, Isq, Jsq double[:, :, :, :] temp = np.zeros((2, nr, nc, 5), dtype=np.float64) floating[:, :, :, :] factors = np.zeros((ns, nr, nc, 5), dtype=np.asarray(static).dtype) with nogil: sss = 1 for s in range(ns+radius): ss = _wrap(s - radius, ns) sss = 1 - sss firsts = _int_max(0, ss - radius) lasts = _int_min(ns - 1, ss + radius) sides = (lasts - firsts + 1) for r in range(nr+radius): rr = _wrap(r - radius, nr) firstr = _int_max(0, rr - radius) lastr = _int_min(nr - 1, rr + radius) sider = (lastr - firstr + 1) for c in range(nc+radius): cc = _wrap(c - radius, nc) # New corner _update_factors(temp, moving, static, sss, rr, cc, s, r, c, 0) # Add signed sub-volumes if s > 0: prev_ss = 1 - sss for it in range(5): temp[sss, rr, cc, it] += temp[prev_ss, rr, cc, it] if r > 0: prev_rr = _wrap(rr-1, nr) for it in range(5): temp[sss, rr, cc, it] -= \ temp[prev_ss, prev_rr, cc, it] if c > 0: prev_cc = _wrap(cc-1, nc) for it in range(5): temp[sss, rr, cc, it] += \ temp[prev_ss, prev_rr, prev_cc, it] if c > 0: prev_cc = _wrap(cc-1, nc) for it in range(5): temp[sss, rr, cc, it] -= \ temp[prev_ss, rr, prev_cc, it] if r > 0: prev_rr = _wrap(rr-1, nr) for it in range(5): temp[sss, rr, cc, it] += \ temp[sss, prev_rr, cc, it] if c > 0: prev_cc = _wrap(cc-1, nc) for it in range(5): temp[sss, rr, cc, it] -= \ temp[sss, prev_rr, prev_cc, it] if c > 0: prev_cc = _wrap(cc-1, nc) for it in range(5): temp[sss, rr, cc, it] += temp[sss, rr, prev_cc, it] # Add signed corners if s >= side: _update_factors(temp, moving, static, sss, rr, cc, s-side, r, c, -1) if r >= side: _update_factors(temp, moving, static, sss, rr, cc, s-side, r-side, c, 1) if c >= side: _update_factors(temp, moving, static, sss, rr, cc, s-side, r-side, c-side, -1) if c >= side: _update_factors(temp, moving, static, sss, rr, cc, s-side, r, c-side, 1) if r >= side: _update_factors(temp, moving, static, sss, rr, cc, s, r-side, c, -1) if c >= side: _update_factors(temp, moving, static, sss, rr, cc, s, r-side, c-side, 1) if c >= side: _update_factors(temp, moving, static, sss, rr, cc, s, r, c-side, -1) # Compute final factors if s >= radius and r >= radius and c >= radius: firstc = _int_max(0, cc - radius) lastc = _int_min(nc - 1, cc + radius) sidec = (lastc - firstc + 1) cnt = sides*sider*sidec Imean = temp[sss, rr, cc, SI] / cnt Jmean = temp[sss, rr, cc, SJ] / cnt IJprods = (temp[sss, rr, cc, SIJ] - Jmean * temp[sss, rr, cc, SI] - Imean * temp[sss, rr, cc, SJ] + cnt * Jmean * Imean) Isq = (temp[sss, rr, cc, SI2] - Imean * temp[sss, rr, cc, SI] - Imean * temp[sss, rr, cc, SI] + cnt * Imean * Imean) Jsq = (temp[sss, rr, cc, SJ2] - Jmean * temp[sss, rr, cc, SJ] - Jmean * temp[sss, rr, cc, SJ] + cnt * Jmean * Jmean) factors[ss, rr, cc, 0] = static[ss, rr, cc] - Imean factors[ss, rr, cc, 1] = moving[ss, rr, cc] - Jmean factors[ss, rr, cc, 2] = IJprods factors[ss, rr, cc, 3] = Isq factors[ss, rr, cc, 4] = Jsq return factors @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def precompute_cc_factors_3d_test(floating[:, :, :] static, floating[:, :, :] moving, int radius): """Precomputations to quickly compute the gradient of the CC Metric. 
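What the five factors mean is easiest to see brute-force; a NumPy sketch for a single voxel (the sliding-window code above produces the same quantities everywhere at once):

import numpy as np

rng = np.random.default_rng(11)
static = rng.random((7, 7, 7))
moving = 0.5 * static + 0.1 * rng.random((7, 7, 7))  # correlated pair
radius, (s, r, c) = 2, (3, 3, 3)

win = (slice(s - radius, s + radius + 1),
       slice(r - radius, r + radius + 1),
       slice(c - radius, c + radius + 1))
I, J = static[win].ravel(), moving[win].ravel()
Ii = static[s, r, c] - I.mean()                   # factors[..., 0]
Ji = moving[s, r, c] - J.mean()                   # factors[..., 1]
sfm = np.sum((I - I.mean()) * (J - J.mean()))     # factors[..., 2]
sff = np.sum((I - I.mean()) ** 2)                 # factors[..., 3]
smm = np.sum((J - J.mean()) ** 2)                 # factors[..., 4]
print("local CC^2:", sfm ** 2 / (sff * smm))      # high: images correlate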
This version of precompute_cc_factors_3d is for testing purposes, it directly computes the local cross-correlation factors without any optimization, so it is less error-prone than the accelerated version. """ cdef: cnp.npy_intp ns = static.shape[0] cnp.npy_intp nr = static.shape[1] cnp.npy_intp nc = static.shape[2] cnp.npy_intp s, r, c, k, i, j, t cnp.npy_intp firstc, lastc, firstr, lastr, firsts, lasts double Imean, Jmean floating[:, :, :, :] factors = np.zeros((ns, nr, nc, 5), dtype=np.asarray(static).dtype) double[:] sums = np.zeros((6,), dtype=np.float64) with nogil: for s in range(ns): firsts = _int_max(0, s - radius) lasts = _int_min(ns - 1, s + radius) for r in range(nr): firstr = _int_max(0, r - radius) lastr = _int_min(nr - 1, r + radius) for c in range(nc): firstc = _int_max(0, c - radius) lastc = _int_min(nc - 1, c + radius) for t in range(6): sums[t] = 0 for k in range(firsts, 1 + lasts): for i in range(firstr, 1 + lastr): for j in range(firstc, 1 + lastc): sums[SI] += static[k, i, j] sums[SI2] += static[k, i, j]**2 sums[SJ] += moving[k, i, j] sums[SJ2] += moving[k, i, j]**2 sums[SIJ] += static[k, i, j]*moving[k, i, j] sums[CNT] += 1 Imean = sums[SI] / sums[CNT] Jmean = sums[SJ] / sums[CNT] factors[s, r, c, 0] = static[s, r, c] - Imean factors[s, r, c, 1] = moving[s, r, c] - Jmean factors[s, r, c, 2] = (sums[SIJ] - Jmean * sums[SI] - Imean * sums[SJ] + sums[CNT] * Jmean * Imean) factors[s, r, c, 3] = (sums[SI2] - Imean * sums[SI] - Imean * sums[SI] + sums[CNT] * Imean * Imean) factors[s, r, c, 4] = (sums[SJ2] - Jmean * sums[SJ] - Jmean * sums[SJ] + sums[CNT] * Jmean * Jmean) return np.asarray(factors) @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def compute_cc_forward_step_3d(floating[:, :, :, :] grad_static, floating[:, :, :, :] factors, cnp.npy_intp radius): """Gradient of the CC Metric w.r.t. the forward transformation. Computes the gradient of the Cross Correlation metric for symmetric registration (SyN) :footcite:p:`Avants2008` w.r.t. the displacement associated to the moving volume ('forward' step) as in :footcite:t:`Avants2009`. Parameters ---------- grad_static : array, shape (S, R, C, 3) the gradient of the static volume factors : array, shape (S, R, C, 5) the precomputed cross correlation terms obtained via precompute_cc_factors_3d radius : int the radius of the neighborhood used for the CC metric when computing the factors. The returned vector field will be zero along a boundary of width radius voxels. Returns ------- out : array, shape (S, R, C, 3) the gradient of the cross correlation metric with respect to the displacement associated to the moving volume energy : the cross correlation energy (data term) at this iteration References ---------- .. footbibliography:: """ cdef: cnp.npy_intp ns = grad_static.shape[0] cnp.npy_intp nr = grad_static.shape[1] cnp.npy_intp nc = grad_static.shape[2] double energy = 0 cnp.npy_intp s, r, c double Ii, Ji, sfm, sff, smm, localCorrelation, temp floating[:, :, :, :] out =\ np.zeros((ns, nr, nc, 3), dtype=np.asarray(grad_static).dtype) with nogil: for s in range(radius, ns-radius): for r in range(radius, nr-radius): for c in range(radius, nc-radius): Ii = factors[s, r, c, 0] Ji = factors[s, r, c, 1] sfm = factors[s, r, c, 2] sff = factors[s, r, c, 3] smm = factors[s, r, c, 4] if sff == 0.0 or smm == 0.0: continue localCorrelation = 0 if sff * smm > 1e-5: localCorrelation = sfm * sfm / (sff * smm) if localCorrelation < 1: # avoid bad values... 
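# Note: localCorrelation = sfm**2 / (sff * smm) is the squared local
# correlation coefficient, so it lies in [0, 1] up to rounding noise;
# the `< 1` guard above only filters numerically degenerate windows.
# Accumulating -localCorrelation makes `energy` the sign-flipped CC
# objective, and the `temp` factor computed next is the quotient-rule
# derivative of localCorrelation w.r.t. the static intensity, which the
# chain rule then routes through `grad_static` to produce the update.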
energy -= localCorrelation temp = 2.0 * sfm / (sff * smm) * (Ji - sfm / sff * Ii) out[s, r, c, 0] -= temp * grad_static[s, r, c, 0] out[s, r, c, 1] -= temp * grad_static[s, r, c, 1] out[s, r, c, 2] -= temp * grad_static[s, r, c, 2] return np.asarray(out), energy @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def compute_cc_backward_step_3d(floating[:, :, :, :] grad_moving, floating[:, :, :, :] factors, cnp.npy_intp radius): """Gradient of the CC Metric w.r.t. the backward transformation. Computes the gradient of the Cross Correlation metric for symmetric registration (SyN) :footcite:p:`Avants2008`. w.r.t. the displacement associated to the static volume ('backward' step) as in :footcite:t:`Avants2009`. Parameters ---------- grad_moving : array, shape (S, R, C, 3) the gradient of the moving volume factors : array, shape (S, R, C, 5) the precomputed cross correlation terms obtained via precompute_cc_factors_3d radius : int the radius of the neighborhood used for the CC metric when computing the factors. The returned vector field will be zero along a boundary of width radius voxels. Returns ------- out : array, shape (S, R, C, 3) the gradient of the cross correlation metric with respect to the displacement associated to the static volume energy : the cross correlation energy (data term) at this iteration References ---------- .. footbibliography:: """ ftype = np.asarray(grad_moving).dtype cdef: cnp.npy_intp ns = grad_moving.shape[0] cnp.npy_intp nr = grad_moving.shape[1] cnp.npy_intp nc = grad_moving.shape[2] cnp.npy_intp s, r, c double energy = 0 double Ii, Ji, sfm, sff, smm, localCorrelation, temp floating[:, :, :, :] out = np.zeros((ns, nr, nc, 3), dtype=ftype) with nogil: for s in range(radius, ns-radius): for r in range(radius, nr-radius): for c in range(radius, nc-radius): Ii = factors[s, r, c, 0] Ji = factors[s, r, c, 1] sfm = factors[s, r, c, 2] sff = factors[s, r, c, 3] smm = factors[s, r, c, 4] if sff == 0.0 or smm == 0.0: continue localCorrelation = 0 if sff * smm > 1e-5: localCorrelation = sfm * sfm / (sff * smm) if localCorrelation < 1: # avoid bad values... energy -= localCorrelation temp = 2.0 * sfm / (sff * smm) * (Ii - sfm / smm * Ji) out[s, r, c, 0] -= temp * grad_moving[s, r, c, 0] out[s, r, c, 1] -= temp * grad_moving[s, r, c, 1] out[s, r, c, 2] -= temp * grad_moving[s, r, c, 2] return np.asarray(out), energy @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def precompute_cc_factors_2d(floating[:, :] static, floating[:, :] moving, cnp.npy_intp radius): """Precomputations to quickly compute the gradient of the CC Metric. Pre-computes the separate terms of the cross correlation metric :footcite:p:`Avants2008` and image norms at each voxel considering a neighborhood of the given radius to efficiently compute the gradient of the metric with respect to the deformation field :footcite:p:`Avants2009`. Parameters ---------- static : array, shape (R, C) the static volume, which also defines the reference registration domain moving : array, shape (R, C) the moving volume (notice that both images must already be in a common reference domain, i.e. 
the same R, C) radius : the radius of the neighborhood(square of (2*radius + 1)^2 voxels) Returns ------- factors : array, shape (R, C, 5) the precomputed cross correlation terms:: - factors[:,:,0] : static minus its mean value along the neighborhood - factors[:,:,1] : moving minus its mean value along the neighborhood - factors[:,:,2] : sum of the pointwise products of static and moving along the neighborhood - factors[:,:,3] : sum of sq. values of static along the neighborhood - factors[:,:,4] : sum of sq. values of moving along the neighborhood References ---------- .. footbibliography:: """ ftype = np.asarray(static).dtype cdef: cnp.npy_intp side = 2 * radius + 1 cnp.npy_intp nr = static.shape[0] cnp.npy_intp nc = static.shape[1] cnp.npy_intp r, c, i, j, t, q, qq, firstc, lastc double Imean, Jmean floating[:, :, :] factors = np.zeros((nr, nc, 5), dtype=ftype) double[:, :] lines = np.zeros((6, side), dtype=np.float64) double[:] sums = np.zeros((6,), dtype=np.float64) with nogil: for c in range(nc): firstc = _int_max(0, c - radius) lastc = _int_min(nc - 1, c + radius) # compute factors for row [:,c] for t in range(6): for q in range(side): lines[t, q] = 0 # Compute all rows and set the sums on the fly # compute row [i, j = {c-radius, c + radius}] for i in range(nr): q = i % side for t in range(6): lines[t, q] = 0 for j in range(firstc, lastc + 1): lines[SI, q] += static[i, j] lines[SI2, q] += static[i, j] * static[i, j] lines[SJ, q] += moving[i, j] lines[SJ2, q] += moving[i, j] * moving[i, j] lines[SIJ, q] += static[i, j] * moving[i, j] lines[CNT, q] += 1 for t in range(6): sums[t] = 0 for qq in range(side): sums[t] += lines[t, qq] if i >= radius: # r is the pixel that is affected by the cube with slices # [r - radius.. r + radius, :] r = i - radius Imean = sums[SI] / sums[CNT] Jmean = sums[SJ] / sums[CNT] factors[r, c, 0] = static[r, c] - Imean factors[r, c, 1] = moving[r, c] - Jmean factors[r, c, 2] = (sums[SIJ] - Jmean * sums[SI] - Imean * sums[SJ] + sums[CNT] * Jmean * Imean) factors[r, c, 3] = (sums[SI2] - Imean * sums[SI] - Imean * sums[SI] + sums[CNT] * Imean * Imean) factors[r, c, 4] = (sums[SJ2] - Jmean * sums[SJ] - Jmean * sums[SJ] + sums[CNT] * Jmean * Jmean) # Finally set the values at the end of the line for r in range(nr - radius, nr): # this would be the last slice to be processed for pixel # [r, c], if it existed i = r + radius q = i % side for t in range(6): sums[t] -= lines[t, q] Imean = sums[SI] / sums[CNT] Jmean = sums[SJ] / sums[CNT] factors[r, c, 0] = static[r, c] - Imean factors[r, c, 1] = moving[r, c] - Jmean factors[r, c, 2] = (sums[SIJ] - Jmean * sums[SI] - Imean * sums[SJ] + sums[CNT] * Jmean * Imean) factors[r, c, 3] = (sums[SI2] - Imean * sums[SI] - Imean * sums[SI] + sums[CNT] * Imean * Imean) factors[r, c, 4] = (sums[SJ2] - Jmean * sums[SJ] - Jmean * sums[SJ] + sums[CNT] * Jmean * Jmean) return np.asarray(factors) @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def precompute_cc_factors_2d_test(floating[:, :] static, floating[:, :] moving, cnp.npy_intp radius): """Precomputations to quickly compute the gradient of the CC Metric. This version of precompute_cc_factors_2d is for testing purposes, it directly computes the local cross-correlation without any optimization. 
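Assuming a built DIPY (so the compiled `dipy.align.crosscorr` module is importable), the factors can be turned into a local correlation map directly; a sketch:

import numpy as np
from dipy.align.crosscorr import precompute_cc_factors_2d

rng = np.random.default_rng(13)
static = rng.random((32, 32))
moving = 0.7 * static + 0.05 * rng.random((32, 32))

factors = np.asarray(precompute_cc_factors_2d(static, moving, 3))
sfm, sff, smm = factors[..., 2], factors[..., 3], factors[..., 4]
cc = np.zeros_like(sfm)
ok = sff * smm > 1e-5
cc[ok] = sfm[ok] ** 2 / (sff[ok] * smm[ok])
print(cc.mean())  # close to 1 for strongly correlated images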
""" ftype = np.asarray(static).dtype cdef: cnp.npy_intp nr = static.shape[0] cnp.npy_intp nc = static.shape[1] cnp.npy_intp r, c, i, j, t, firstr, lastr, firstc, lastc double Imean, Jmean floating[:, :, :] factors = np.zeros((nr, nc, 5), dtype=ftype) double[:] sums = np.zeros((6,), dtype=np.float64) with nogil: for r in range(nr): firstr = _int_max(0, r - radius) lastr = _int_min(nr - 1, r + radius) for c in range(nc): firstc = _int_max(0, c - radius) lastc = _int_min(nc - 1, c + radius) for t in range(6): sums[t] = 0 for i in range(firstr, 1 + lastr): for j in range(firstc, 1+lastc): sums[SI] += static[i, j] sums[SI2] += static[i, j]**2 sums[SJ] += moving[i, j] sums[SJ2] += moving[i, j]**2 sums[SIJ] += static[i, j]*moving[i, j] sums[CNT] += 1 Imean = sums[SI] / sums[CNT] Jmean = sums[SJ] / sums[CNT] factors[r, c, 0] = static[r, c] - Imean factors[r, c, 1] = moving[r, c] - Jmean factors[r, c, 2] = (sums[SIJ] - Jmean * sums[SI] - Imean * sums[SJ] + sums[CNT] * Jmean * Imean) factors[r, c, 3] = (sums[SI2] - Imean * sums[SI] - Imean * sums[SI] + sums[CNT] * Imean * Imean) factors[r, c, 4] = (sums[SJ2] - Jmean * sums[SJ] - Jmean * sums[SJ] + sums[CNT] * Jmean * Jmean) return np.asarray(factors) @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def compute_cc_forward_step_2d(floating[:, :, :] grad_static, floating[:, :, :] factors, cnp.npy_intp radius): """Gradient of the CC Metric w.r.t. the forward transformation. Computes the gradient of the Cross Correlation metric for symmetric registration (SyN) :footcite:p:`Avants2008` w.r.t. the displacement associated to the moving image ('backward' step) as in :footcite:t:`Avants2009`. Parameters ---------- grad_static : array, shape (R, C, 2) the gradient of the static image factors : array, shape (R, C, 5) the precomputed cross correlation terms obtained via precompute_cc_factors_2d Returns ------- out : array, shape (R, C, 2) the gradient of the cross correlation metric with respect to the displacement associated to the moving image energy : the cross correlation energy (data term) at this iteration Notes ----- Currently, the gradient of the static image is not being used, but some authors suggest that symmetrizing the gradient by including both, the moving and static gradients may improve the registration quality. We are leaving this parameter as a placeholder for future investigation References ---------- .. footbibliography:: """ cdef: cnp.npy_intp nr = grad_static.shape[0] cnp.npy_intp nc = grad_static.shape[1] double energy = 0 cnp.npy_intp r, c double Ii, Ji, sfm, sff, smm, localCorrelation, temp floating[:, :, :] out = np.zeros((nr, nc, 2), dtype=np.asarray(grad_static).dtype) with nogil: for r in range(radius, nr-radius): for c in range(radius, nc-radius): Ii = factors[r, c, 0] Ji = factors[r, c, 1] sfm = factors[r, c, 2] sff = factors[r, c, 3] smm = factors[r, c, 4] if sff == 0.0 or smm == 0.0: continue localCorrelation = 0 if sff * smm > 1e-5: localCorrelation = sfm * sfm / (sff * smm) if localCorrelation < 1: # avoid bad values... energy -= localCorrelation temp = 2.0 * sfm / (sff * smm) * (Ji - sfm / sff * Ii) out[r, c, 0] -= temp * grad_static[r, c, 0] out[r, c, 1] -= temp * grad_static[r, c, 1] return np.asarray(out), energy @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def compute_cc_backward_step_2d(floating[:, :, :] grad_moving, floating[:, :, :] factors, cnp.npy_intp radius): """Gradient of the CC Metric w.r.t. the backward transformation. 
Computes the gradient of the Cross Correlation metric for symmetric registration (SyN) :footcite:p:`Avants2008` w.r.t. the displacement associated to the static image ('forward' step) as in :footcite:t:`Avants2009`. Parameters ---------- grad_moving : array, shape (R, C, 2) the gradient of the moving image factors : array, shape (R, C, 5) the precomputed cross correlation terms obtained via precompute_cc_factors_2d Returns ------- out : array, shape (R, C, 2) the gradient of the cross correlation metric with respect to the displacement associated to the static image energy : the cross correlation energy (data term) at this iteration References ---------- .. footbibliography:: """ ftype = np.asarray(grad_moving).dtype cdef: cnp.npy_intp nr = grad_moving.shape[0] cnp.npy_intp nc = grad_moving.shape[1] cnp.npy_intp r, c double energy = 0 double Ii, Ji, sfm, sff, smm, localCorrelation, temp floating[:, :, :] out = np.zeros((nr, nc, 2), dtype=ftype) with nogil: for r in range(radius, nr-radius): for c in range(radius, nc-radius): Ii = factors[r, c, 0] Ji = factors[r, c, 1] sfm = factors[r, c, 2] sff = factors[r, c, 3] smm = factors[r, c, 4] if sff == 0.0 or smm == 0.0: continue localCorrelation = 0 if sff * smm > 1e-5: localCorrelation = sfm * sfm / (sff * smm) if localCorrelation < 1: # avoid bad values... energy -= localCorrelation temp = 2.0 * sfm / (sff * smm) * (Ii - sfm / smm * Ji) out[r, c, 0] -= temp * grad_moving[r, c, 0] out[r, c, 1] -= temp * grad_moving[r, c, 1] return np.asarray(out), energy dipy-1.11.0/dipy/align/expectmax.pyx000066400000000000000000000531341476546756600173760ustar00rootroot00000000000000#!python #cython: boundscheck=False #cython: wraparound=False #cython: cdivision=True import numpy as np cimport cython cimport numpy as cnp from dipy.align.fused_types cimport floating cdef extern from "dpy_math.h" nogil: int dpy_isinf(double) double floor(double) cdef inline int ifloor(double x) nogil: return int(floor(x)) def quantize_positive_2d(floating[:, :] v, int num_levels): """Quantize a 2D image to num_levels quantization levels. Quantizes the input image at num_levels intensity levels considering <=0 as a special value. Those input pixels <=0, and only those, will be assigned a quantization level of 0. The positive values are divided into the remaining num_levels-1 uniform quantization levels. The following are undefined, and raise a ValueError:: * Quantizing at zero levels because at least one level must be assigned * Quantizing at one level because positive values should be assigned a level different from the special level 0 (at least 2 levels are needed) Parameters ---------- v : array, shape (R, C) the image to be quantized num_levels : int the number of levels Returns ------- out : array, shape (R, C), same shape as v the quantized image levels : array, shape (num_levels,) the quantization values: levels[0]=0, and levels[i] is the mid-point of the interval of intensities that are assigned to quantization level i, i=1, ..., num_levels-1. 
hist : array, shape (num_levels,) histogram: the number of pixels that were assigned to each quantization level """ ftype = np.asarray(v).dtype cdef: cnp.npy_intp nrows = v.shape[0] cnp.npy_intp ncols = v.shape[1] cnp.npy_intp npix = nrows * ncols cnp.npy_intp i, j, l double epsilon, delta double min_val = -1 double max_val = -1 cnp.npy_int32[:] hist = np.zeros(shape=(num_levels,), dtype=np.int32) cnp.npy_int32[:, :] out = np.zeros(shape=(nrows, ncols,), dtype=np.int32) floating[:] levels = np.zeros(shape=(num_levels,), dtype=ftype) #Quantizing at zero levels is undefined #Quantizing at one level is not supported because we want to make sure the #maximum level in the quantization is never greater than num_levels-1 if num_levels < 2: raise ValueError('Quantization levels must be at least 2') if num_levels >= 2**31: raise ValueError('Quantization levels must be < 2**31') num_levels -= 1 # zero is one of the levels with nogil: for i in range(nrows): for j in range(ncols): if v[i, j] > 0: if min_val < 0 or v[i, j] < min_val: min_val = v[i, j] if v[i, j] > max_val: max_val = v[i, j] epsilon = 1e-8 delta = (max_val - min_val + epsilon) / num_levels # notice that we decreased num_levels, so levels[0..num_levels] are well # defined if num_levels < 2 or delta < epsilon: for i in range(nrows): for j in range(ncols): if v[i, j] > 0: out[i, j] = 1 else: out[i, j] = 0 hist[0] += 1 levels[0] = 0 levels[1] = 0.5 * (min_val + max_val) hist[1] = npix - hist[0] with gil: return out, levels, hist levels[0] = 0 levels[1] = min_val + delta * 0.5 for i in range(2, 1 + num_levels): levels[i] = levels[i - 1] + delta for i in range(nrows): for j in range(ncols): if v[i, j] > 0: l = ifloor((v[i, j] - min_val) / delta) out[i, j] = l + 1 hist[l + 1] += 1 else: out[i, j] = 0 hist[0] += 1 return np.asarray(out), np.array(levels), np.array(hist) def quantize_positive_3d(floating[:, :, :] v, int num_levels): """Quantize a 3D volume to num_levels quantization levels. Quantize the input volume at num_levels intensity levels considering <=0 as a special value. Those input voxels <=0, and only those, will be assigned a quantization level of 0. The positive values are divided into the remaining num_levels-1 uniform quantization levels. The following are undefined, and raise a ValueError:: * Quantizing at zero levels because at least one level must be assigned * Quantizing at one level because positive values should be assigned a level different from the special level 0 (at least 2 levels are needed) Parameters ---------- v : array, shape (S, R, C) the volume to be quantized num_levels : int the number of levels Returns ------- out : array, shape (S, R, C), same shape as v the quantized volume levels : array, shape (num_levels,) the quantization values: levels[0]=0, and levels[i] is the mid-point of the interval of intensities that are assigned to quantization level i, i=1, ..., num_levels-1. 
hist : array, shape (num_levels,) histogram: the number of voxels that were assigned to each quantization level """ ftype = np.asarray(v).dtype cdef: cnp.npy_intp nslices = v.shape[0] cnp.npy_intp nrows = v.shape[1] cnp.npy_intp ncols = v.shape[2] cnp.npy_intp nvox = nrows * ncols * nslices cnp.npy_intp i, j, k, l double epsilon, delta double min_val = -1 double max_val = -1 int[:] hist = np.zeros(shape=(num_levels,), dtype=np.int32) int[:, :, :] out = np.zeros(shape=(nslices, nrows, ncols), dtype=np.int32) floating[:] levels = np.zeros(shape=(num_levels,), dtype=ftype) #Quantizing at zero levels is undefined #Quantizing at one level is not supported because we want to make sure the #maximum level in the quantization is never greater than num_levels-1 if num_levels < 2: raise ValueError('Quantization levels must be at least 2') num_levels -= 1 # zero is one of the levels with nogil: for k in range(nslices): for i in range(nrows): for j in range(ncols): if v[k, i, j] > 0: if min_val < 0 or v[k, i, j] < min_val: min_val = v[k, i, j] if v[k, i, j] > max_val: max_val = v[k, i, j] epsilon = 1e-8 delta = (max_val - min_val + epsilon) / num_levels # notice that we decreased num_levels, so levels[0..num_levels] are well # defined if num_levels < 2 or delta < epsilon: for k in range(nslices): for i in range(nrows): for j in range(ncols): if v[k, i, j] > 0: out[k, i, j] = 1 else: out[k, i, j] = 0 hist[0] += 1 levels[0] = 0 levels[1] = 0.5 * (min_val + max_val) hist[1] = nvox - hist[0] with gil: return out, levels, hist levels[0] = 0 levels[1] = min_val + delta * 0.5 for i in range(2, 1 + num_levels): levels[i] = levels[i - 1] + delta for k in range(nslices): for i in range(nrows): for j in range(ncols): if v[k, i, j] > 0: l = ifloor((v[k, i, j] - min_val) / delta) out[k, i, j] = l + 1 hist[l + 1] += 1 else: out[k, i, j] = 0 hist[0] += 1 return np.asarray(out), np.asarray(levels), np.asarray(hist) def compute_masked_class_stats_2d(int[:, :] mask, floating[:, :] v, int num_labels, int[:, :] labels): r"""Computes the mean and std. for each quantization level. Computes the mean and standard deviation of the intensities in 'v' for each corresponding label in 'labels'. In other words, for each label L, it computes the mean and standard deviation of the intensities in 'v' at pixels whose label in 'labels' is L. This is used by the EM metric to compute statistics for each hidden variable represented by the labels. Parameters ---------- mask : array, shape (R, C) the mask of pixels that will be taken into account for computing the statistics. All zero pixels in mask will be ignored v : array, shape (R, C) the image which the statistics will be computed from num_labels : int the number of different labels in 'labels' (equal to the number of hidden variables in the EM metric) labels : array, shape (R, C) the label assigned to each pixel Returns ------- means : array, shape (num_labels,) means[i], 0<=i 0: means[i] /= counts[i] for i in range(nrows): for j in range(ncols): if mask[i, j] != 0: diff = v[i, j] - means[labels[i, j]] variances[labels[i, j]] += diff ** 2 for i in range(num_labels): if counts[i] > 1: variances[i] /= counts[i] else: variances[i] = INF64 return np.asarray(means), np.asarray(variances) def compute_masked_class_stats_3d(int[:, :, :] mask, floating[:, :, :] v, int num_labels, int[:, :, :] labels): r"""Computes the mean and std. for each quantization level. Computes the mean and standard deviation of the intensities in 'v' for each corresponding label in 'labels'. 
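The two helpers are meant to be chained, which is easy to restate in NumPy: quantize a masked image, then collect per-level statistics with `np.bincount` (a sketch; the `count <= 1 -> inf` variance convention of the Cython code is simplified away here):

import numpy as np

rng = np.random.default_rng(17)
v = rng.random((8, 8))
v[0, :] = 0.0                      # background -> special level 0

num_levels = 4
pos = v > 0
mn, mx = v[pos].min(), v[pos].max()
delta = (mx - mn + 1e-8) / (num_levels - 1)
labels = np.zeros(v.shape, dtype=np.int32)
labels[pos] = np.floor((v[pos] - mn) / delta).astype(np.int32) + 1

mask = np.ones_like(labels)        # keep every pixel in this toy case
idx, vals = labels[mask != 0], v[mask != 0]
counts = np.bincount(idx, minlength=num_levels)
means = (np.bincount(idx, weights=vals, minlength=num_levels)
         / np.maximum(counts, 1))
variances = (np.bincount(idx, weights=(vals - means[idx]) ** 2,
                         minlength=num_levels) / np.maximum(counts, 1))
print(counts)
print(means.round(3))
print(variances.round(4))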
In other words, for each label L, it computes the mean and standard deviation of the intensities in 'v' at voxels whose label in 'labels' is L. This is used by the EM metric to compute statistics for each hidden variable represented by the labels. Parameters ---------- mask : array, shape (S, R, C) the mask of voxels that will be taken into account for computing the statistics. All zero voxels in mask will be ignored v : array, shape (S, R, C) the volume which the statistics will be computed from num_labels : int the number of different labels in 'labels' (equal to the number of hidden variables in the EM metric) labels : array, shape (S, R, C) the label assigned to each pixel Returns ------- means : array, shape (num_labels,) means[i], 0<=i 0: means[i] /= counts[i] for k in range(nslices): for i in range(nrows): for j in range(ncols): if mask[k, i, j] != 0: diff = means[labels[k, i, j]] - v[k, i, j] variances[labels[k, i, j]] += diff ** 2 for i in range(num_labels): if counts[i] > 1: variances[i] /= counts[i] else: variances[i] = INF64 return np.asarray(means), np.asarray(variances) @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def compute_em_demons_step_2d(floating[:,:] delta_field, floating[:,:] sigma_sq_field, floating[:,:,:] gradient_moving, double sigma_sq_x, floating[:,:,:] out): r"""Demons step for EM metric in 2D Computes the demons step :footcite:p:`Vercauteren2009` for SSD-driven registration ( eq. 4 in :footcite:p:`Vercauteren2009` ) using the EM algorithm :footcite:p:`ArceSantana2014` to handle multi-modality images. In this case, $\sigma_i$ in eq. 4 of :footcite:p:`Vercauteren2009` is estimated using the EM algorithm, while in the original version of diffeomorphic demons it is estimated by the difference between the image values at each pixel. Parameters ---------- delta_field : array, shape (R, C) contains, at each pixel, the difference between the moving image (warped under the current deformation s(. , .) ) J and the static image I: delta_field[i,j] = J(s(i,j)) - I(i,j). The order is important, changing to delta_field[i,j] = I(i,j) - J(s(i,j)) yields the backward demons step warping the static image towards the moving, which may not be the intended behavior unless the 'gradient_moving' passed corresponds to the gradient of the static image sigma_sq_field : array, shape (R, C) contains, at each pixel (i, j), the estimated variance (not std) of the hidden variable associated to the intensity at static[i,j] (which must have been previously quantized) gradient_moving : array, shape (R, C, 2) the gradient of the moving image sigma_sq_x : float parameter controlling the amount of regularization. It corresponds to $\sigma_x^2$ in algorithm 1 of :footcite:p:`Vercauteren2009` out : array, shape (R, C, 2) the resulting demons step will be written to this array Returns ------- demons_step : array, shape (R, C, 2) the demons step to be applied for updating the current displacement field energy : float the current em energy (before applying the returned demons_step) References ---------- .. 
footbibliography:: """ cdef: cnp.npy_intp nr = delta_field.shape[0] cnp.npy_intp nc = delta_field.shape[1] cnp.npy_intp i, j double delta, sigma_sq_i, nrm2, energy, den, prod if out is None: out = np.zeros((nr, nc, 2), dtype=np.asarray(delta_field).dtype) with nogil: energy = 0 for i in range(nr): for j in range(nc): sigma_sq_i = sigma_sq_field[i,j] delta = delta_field[i,j] energy += (delta**2) if dpy_isinf(sigma_sq_i) != 0: out[i, j, 0], out[i, j, 1] = 0, 0 else: nrm2 = (gradient_moving[i, j, 0]**2 + gradient_moving[i, j, 1]**2) if sigma_sq_i == 0: if nrm2 == 0: out[i, j, 0], out[i, j, 1] = 0, 0 else: out[i, j, 0] = (delta * gradient_moving[i, j, 0] / nrm2) out[i, j, 1] = (delta * gradient_moving[i, j, 1] / nrm2) else: den = (sigma_sq_x * nrm2 + sigma_sq_i) prod = sigma_sq_x * delta out[i, j, 0] = prod * gradient_moving[i, j, 0] / den out[i, j, 1] = prod * gradient_moving[i, j, 1] / den return np.asarray(out), energy @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def compute_em_demons_step_3d(floating[:,:,:] delta_field, floating[:,:,:] sigma_sq_field, floating[:,:,:,:] gradient_moving, double sigma_sq_x, floating[:,:,:,:] out): r"""Demons step for EM metric in 3D Computes the demons step :footcite:p:`Vercauteren2009` for SSD-driven registration ( eq. 4 in :footcite:p:`Vercauteren2009` ) using the EM algorithm :footcite:p:`ArceSantana2014` to handle multi-modality images. In this case, $\sigma_i$ in eq. 4 of :footcite:p:`Vercauteren2009` is estimated using the EM algorithm, while in the original version of diffeomorphic demons it is estimated by the difference between the image values at each pixel. Parameters ---------- delta_field : array, shape (S, R, C) contains, at each pixel, the difference between the moving image (warped under the current deformation s ) J and the static image I: delta_field[k,i,j] = J(s(k,i,j)) - I(k,i,j). The order is important, changing to delta_field[k,i,j] = I(k,i,j) - J(s(k,i,j)) yields the backward demons step warping the static image towards the moving, which may not be the intended behavior unless the 'gradient_moving' passed corresponds to the gradient of the static image sigma_sq_field : array, shape (S, R, C) contains, at each pixel (k, i, j), the estimated variance (not std) of the hidden variable associated to the intensity at static[k,i,j] (which must have been previously quantized) gradient_moving : array, shape (S, R, C, 2) the gradient of the moving image sigma_sq_x : float parameter controlling the amount of regularization. It corresponds to $\sigma_x^2$ in algorithm 1 of footcite:p:`Vercauteren2009`. out : array, shape (S, R, C, 2) the resulting demons step will be written to this array Returns ------- demons_step : array, shape (S, R, C, 3) the demons step to be applied for updating the current displacement field energy : float the current em energy (before applying the returned demons_step) References ---------- .. 
footbibliography:: """ cdef: cnp.npy_intp ns = delta_field.shape[0] cnp.npy_intp nr = delta_field.shape[1] cnp.npy_intp nc = delta_field.shape[2] cnp.npy_intp i, j, k double delta, sigma_sq_i, nrm2, energy, den if out is None: out = np.zeros((ns, nr, nc, 3), dtype=np.asarray(delta_field).dtype) with nogil: energy = 0 for k in range(ns): for i in range(nr): for j in range(nc): sigma_sq_i = sigma_sq_field[k,i,j] delta = delta_field[k,i,j] energy += (delta**2) if dpy_isinf(sigma_sq_i) != 0: out[k, i, j, 0] = 0 out[k, i, j, 1] = 0 out[k, i, j, 2] = 0 else: nrm2 = (gradient_moving[k, i, j, 0]**2 + gradient_moving[k, i, j, 1]**2 + gradient_moving[k, i, j, 2]**2) if sigma_sq_i == 0: if nrm2 == 0: out[k, i, j, 0] = 0 out[k, i, j, 1] = 0 out[k, i, j, 2] = 0 else: out[k, i, j, 0] = (delta * gradient_moving[k, i, j, 0] / nrm2) out[k, i, j, 1] = (delta * gradient_moving[k, i, j, 1] / nrm2) out[k, i, j, 2] = (delta * gradient_moving[k, i, j, 2] / nrm2) else: den = (sigma_sq_x * nrm2 + sigma_sq_i) out[k, i, j, 0] = (sigma_sq_x * delta * gradient_moving[k, i, j, 0] / den) out[k, i, j, 1] = (sigma_sq_x * delta * gradient_moving[k, i, j, 1] / den) out[k, i, j, 2] = (sigma_sq_x * delta * gradient_moving[k, i, j, 2] / den) return np.asarray(out), energy dipy-1.11.0/dipy/align/fused_types.pxd000066400000000000000000000001201476546756600176700ustar00rootroot00000000000000cimport cython ctypedef cython.floating floating ctypedef cython.numeric number dipy-1.11.0/dipy/align/imaffine.py000066400000000000000000001674451476546756600170010ustar00rootroot00000000000000"""Affine image registration module consisting of the following classes: AffineMap: encapsulates the necessary information to perform affine transforms between two domains, defined by a `static` and a `moving` image. The `domain` of the transform is the set of points in the `static` image's grid, and the `codomain` is the set of points in the `moving` image. When we call the `transform` method, `AffineMap` maps each point `x` of the domain (`static` grid) to the codomain (`moving` grid) and interpolates the `moving` image at that point to obtain the intensity value to be placed at `x` in the resulting grid. The `transform_inverse` method performs the opposite operation mapping points in the codomain to points in the domain. ParzenJointHistogram: computes the marginal and joint distributions of intensities of a pair of images, using Parzen windows :footcite:p:`Parzen1962` with a cubic spline kernel, as proposed by :footcite:t:`Mattes2003`. It also computes the gradient of the joint histogram w.r.t. the parameters of a given transform. MutualInformationMetric: computes the value and gradient of the mutual information metric the way `Optimizer` needs them. That is, given a set of transform parameters, it will use `ParzenJointHistogram` to compute the value and gradient of the joint intensity histogram evaluated at the given parameters, and evaluate the value and gradient of the histogram's mutual information. AffineRegistration: it runs the multi-resolution registration, putting all the pieces together. It needs to create the scale space of the images and run the multi-resolution registration by using the Metric and the Optimizer at each level of the Gaussian pyramid. At each level, it will setup the metric to compute value and gradient of the metric with the input images with different levels of smoothing. References ---------- .. 
footbibliography:: """ from warnings import warn import numpy as np import numpy.linalg as npl import scipy.ndimage as ndimage from dipy.align import VerbosityLevels, vector_fields as vf from dipy.align.imwarp import ScaleSpace, get_direction_and_spacings from dipy.align.parzenhist import ( ParzenJointHistogram, compute_parzen_mi, sample_domain_regular, ) from dipy.align.scalespace import IsotropicScaleSpace from dipy.core.interpolation import interpolate_scalar_2d, interpolate_scalar_3d from dipy.core.optimize import Optimizer from dipy.testing.decorators import warning_for_keywords _interp_options = ["nearest", "linear"] _transform_method = {} _transform_method[(2, "nearest")] = vf.transform_2d_affine_nn _transform_method[(3, "nearest")] = vf.transform_3d_affine_nn _transform_method[(2, "linear")] = vf.transform_2d_affine _transform_method[(3, "linear")] = vf.transform_3d_affine _number_dim_affine_matrix = 2 class AffineInversionError(Exception): pass class AffineInvalidValuesError(Exception): pass class AffineMap: @warning_for_keywords() def __init__( self, affine, *, domain_grid_shape=None, domain_grid2world=None, codomain_grid_shape=None, codomain_grid2world=None, ): """AffineMap. Implements an affine transformation whose domain is given by `domain_grid` and `domain_grid2world`, and whose co-domain is given by `codomain_grid` and `codomain_grid2world`. The actual transform is represented by the `affine` matrix, which operate in world coordinates. Therefore, to transform a moving image towards a static image, we first map each voxel (i,j,k) of the static image to world coordinates (x,y,z) by applying `domain_grid2world`. Then we apply the `affine` transform to (x,y,z) obtaining (x', y', z') in moving image's world coordinates. Finally, (x', y', z') is mapped to voxel coordinates (i', j', k') in the moving image by multiplying (x', y', z') by the inverse of `codomain_grid2world`. The `codomain_grid_shape` is used analogously to transform the static image towards the moving image when calling `transform_inverse`. If the domain/co-domain information is not provided (None) then the sampling information needs to be specified each time the `transform` or `transform_inverse` is called to transform images. Note that such sampling information is not necessary to transform points defined in physical space, such as streamlines. Parameters ---------- affine : array, shape (dim + 1, dim + 1) the matrix defining the affine transform, where `dim` is the dimension of the space this map operates in (2 for 2D images, 3 for 3D images). If None, then `self` represents the identity transformation. domain_grid_shape : sequence, shape (dim,), optional the shape of the default domain sampling grid. When `transform` is called to transform an image, the resulting image will have this shape, unless a different sampling information is provided. If None, then the sampling grid shape must be specified each time the `transform` method is called. domain_grid2world : array, shape (dim + 1, dim + 1), optional the grid-to-world transform associated with the domain grid. If None (the default), then the grid-to-world transform is assumed to be the identity. codomain_grid_shape : sequence of integers, shape (dim,) the shape of the default co-domain sampling grid. When `transform_inverse` is called to transform an image, the resulting image will have this shape, unless a different sampling information is provided. 
If None (the default), then the sampling grid shape must be specified each time the `transform_inverse` method is called. codomain_grid2world : array, shape (dim + 1, dim + 1) the grid-to-world transform associated with the co-domain grid. If None (the default), then the grid-to-world transform is assumed to be the identity. """ self.set_affine(affine) self.domain_shape = domain_grid_shape self.domain_grid2world = domain_grid2world self.codomain_shape = codomain_grid_shape self.codomain_grid2world = codomain_grid2world def get_affine(self): """Return the value of the transformation, not a reference. Returns ------- affine : ndarray Copy of the transform, not a reference. """ # returning a copy to insulate it from changes outside object return self.affine.copy() def set_affine(self, affine): """Set the affine transform (operating in physical space). Also sets `self.affine_inv` - the inverse of `affine`, or None if there is no inverse. Parameters ---------- affine : array, shape (dim + 1, dim + 1) the matrix representing the affine transform operating in physical space. The domain and co-domain information remains unchanged. If None, then `self` represents the identity transformation. """ if affine is None: self.affine = None self.affine_inv = None return try: affine = np.array(affine) except Exception as e: raise TypeError( "Input must be type ndarray, or be convertible" " to one." ) from e if len(affine.shape) != _number_dim_affine_matrix: raise AffineInversionError("Affine transform must be 2D") if not affine.shape[0] == affine.shape[1]: raise AffineInversionError("Affine transform must be a square matrix") if not np.all(np.isfinite(affine)): raise AffineInvalidValuesError("Affine transform contains invalid elements") # checking on proper augmentation # First n-1 columns in last row in matrix contain non-zeros if not np.all(affine[-1, :-1] == 0.0): raise AffineInvalidValuesError( f"First {affine.shape[0] - 1} columns in last row" " in matrix contain non-zeros!" ) # Last row, last column in matrix must be 1.0! if affine[-1, -1] != 1.0: raise AffineInvalidValuesError( "Last row, last column in matrix" " is not 1.0!" ) # making a copy to insulate it from changes outside object self.affine = affine.copy() try: self.affine_inv = npl.inv(affine) except npl.LinAlgError as e: raise AffineInversionError("Affine cannot be inverted") from e def __str__(self): """Printable format - relies on ndarray's implementation.""" return str(self.affine) def __repr__(self): """Reloadable representation - relies on ndarray's implementation.""" return self.affine.__repr__() def __format__(self, format_spec): """Implementation various formatting options.""" if format_spec is None or self.affine is None: return str(self.affine) elif isinstance(format_spec, str): format_spec = format_spec.lower() if format_spec in ["", " ", "f", "full"]: return str(self.affine) # rotation part only (initial 3x3) elif format_spec in ["r", "rotation"]: return str(self.affine[:-1, :-1]) # translation part only (4th col) elif format_spec in ["t", "translation"]: # notice unusual indexing to make it a column vector # i.e. 
rows from 0 to n-1, cols from n to n return str(self.affine[:-1, -1:]) else: allowed_formats_print_map = [ "full", "f", "rotation", "r", "translation", "t", ] raise NotImplementedError( f"Format {format_spec} not recognized or implemented.\n" f"Try one of {allowed_formats_print_map}" ) @warning_for_keywords() def _apply_transform( self, image, *, interpolation="linear", image_grid2world=None, sampling_grid_shape=None, sampling_grid2world=None, resample_only=False, apply_inverse=False, ): """Transform the input image applying this affine transform. This is a generic function to transform images using either this (direct) transform or its inverse. If applying the direct transform (`apply_inverse=False`): by default, the transformed image is sampled at a grid defined by `self.domain_shape` and `self.domain_grid2world`. If applying the inverse transform (`apply_inverse=True`): by default, the transformed image is sampled at a grid defined by `self.codomain_shape` and `self.codomain_grid2world`. If the sampling information was not provided at initialization of this transform then `sampling_grid_shape` is mandatory. Parameters ---------- image : 2D or 3D array the image to be transformed interpolation : string, either 'linear' or 'nearest' the type of interpolation to be used, either 'linear' (for k-linear interpolation) or 'nearest' for nearest neighbor image_grid2world : array, shape (dim + 1, dim + 1), optional the grid-to-world transform associated with `image`. If None (the default), then the grid-to-world transform is assumed to be the identity. sampling_grid_shape : sequence, shape (dim,), optional the shape of the grid where the transformed image must be sampled. If None (the default), then `self.domain_shape` is used instead (which must have been set at initialization, otherwise an exception will be raised). sampling_grid2world : array, shape (dim + 1, dim + 1), optional the grid-to-world transform associated with the sampling grid (specified by `sampling_grid_shape`, or by default `self.domain_shape`). If None (the default), then the grid-to-world transform is assumed to be the identity. resample_only : Boolean, optional If False (the default) the affine transform is applied normally. If True, then the affine transform is not applied, and the input image is just re-sampled on the domain grid of this transform. apply_inverse : Boolean, optional If False (the default) the image is transformed from the codomain of this transform to its domain using the (direct) affine transform. Otherwise, the image is transformed from the domain of this transform to its codomain using the (inverse) affine transform. Returns ------- transformed : array, shape `sampling_grid_shape` or `self.domain_shape` the transformed image, sampled at the requested grid """ # Verify valid interpolation requested if interpolation not in _interp_options: msg = f"Unknown interpolation method: {interpolation}" raise ValueError(msg) # Obtain sampling grid if sampling_grid_shape is None: if apply_inverse: sampling_grid_shape = self.codomain_shape else: sampling_grid_shape = self.domain_shape if sampling_grid_shape is None: msg = "Unknown sampling info. 
Provide a valid sampling_grid_shape" raise ValueError(msg) dim = len(sampling_grid_shape) shape = np.array(sampling_grid_shape, dtype=np.int32) # Verify valid image dimension img_dim = len(image.shape) if img_dim < 2 or img_dim > 3: raise ValueError(f"Undefined transform for dim: {img_dim}") # Obtain grid-to-world transform for sampling grid if sampling_grid2world is None: if apply_inverse: sampling_grid2world = self.codomain_grid2world else: sampling_grid2world = self.domain_grid2world if sampling_grid2world is None: sampling_grid2world = np.eye(dim + 1) # Obtain world-to-grid transform for input image if image_grid2world is None: if apply_inverse: image_grid2world = self.domain_grid2world else: image_grid2world = self.codomain_grid2world if image_grid2world is None: image_grid2world = np.eye(dim + 1) image_world2grid = npl.inv(image_grid2world) # Compute the transform from sampling grid to input image grid if apply_inverse: aff = self.affine_inv else: aff = self.affine if (aff is None) or resample_only: comp = image_world2grid.dot(sampling_grid2world) else: comp = image_world2grid.dot(aff.dot(sampling_grid2world)) # Transform the input image if interpolation == "linear": image = image.astype(np.float64) transformed = _transform_method[(dim, interpolation)](image, shape, affine=comp) return transformed @warning_for_keywords() def transform( self, image, *, interpolation="linear", image_grid2world=None, sampling_grid_shape=None, sampling_grid2world=None, resample_only=False, ): """Transform the input image from co-domain to domain space. By default, the transformed image is sampled at a grid defined by `self.domain_shape` and `self.domain_grid2world`. If such information was not provided then `sampling_grid_shape` is mandatory. Parameters ---------- image : 2D or 3D array the image to be transformed interpolation : string, either 'linear' or 'nearest' the type of interpolation to be used, either 'linear' (for k-linear interpolation) or 'nearest' for nearest neighbor image_grid2world : array, shape (dim + 1, dim + 1), optional the grid-to-world transform associated with `image`. If None (the default), then the grid-to-world transform is assumed to be the identity. sampling_grid_shape : sequence, shape (dim,), optional the shape of the grid where the transformed image must be sampled. If None (the default), then `self.codomain_shape` is used instead (which must have been set at initialization, otherwise an exception will be raised). sampling_grid2world : array, shape (dim + 1, dim + 1), optional the grid-to-world transform associated with the sampling grid (specified by `sampling_grid_shape`, or by default `self.codomain_shape`). If None (the default), then the grid-to-world transform is assumed to be the identity. resample_only : Boolean, optional If False (the default) the affine transform is applied normally. If True, then the affine transform is not applied, and the input image is just re-sampled on the domain grid of this transform. Returns ------- transformed : array the transformed image, sampled at the requested grid, with shape `sampling_grid_shape` or `self.codomain_shape`. 
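Examples
--------
A minimal sketch (values illustrative only): resample a small 2D image
under a one-row translation, sampling on the default domain grid.

>>> import numpy as np
>>> from dipy.align.imaffine import AffineMap
>>> moving = np.arange(9, dtype=np.float64).reshape(3, 3)
>>> affine = np.array([[1., 0., -1.],
...                    [0., 1., 0.],
...                    [0., 0., 1.]])
>>> affine_map = AffineMap(affine,
...                        domain_grid_shape=(3, 3),
...                        codomain_grid_shape=(3, 3))
>>> warped = affine_map.transform(moving)  # sampled on the domain grid
>>> warped.shape
(3, 3)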
""" transformed = self._apply_transform( image, interpolation=interpolation, image_grid2world=image_grid2world, sampling_grid_shape=sampling_grid_shape, sampling_grid2world=sampling_grid2world, resample_only=resample_only, apply_inverse=False, ) return np.array(transformed) @warning_for_keywords() def transform_inverse( self, image, *, interpolation="linear", image_grid2world=None, sampling_grid_shape=None, sampling_grid2world=None, resample_only=False, ): """Transform the input image from domain to co-domain space. By default, the transformed image is sampled at a grid defined by `self.codomain_shape` and `self.codomain_grid2world`. If such information was not provided then `sampling_grid_shape` is mandatory. Parameters ---------- image : 2D or 3D array the image to be transformed interpolation : string, either 'linear' or 'nearest' the type of interpolation to be used, either 'linear' (for k-linear interpolation) or 'nearest' for nearest neighbor image_grid2world : array, shape (dim + 1, dim + 1), optional the grid-to-world transform associated with `image`. If None (the default), then the grid-to-world transform is assumed to be the identity. sampling_grid_shape : sequence, shape (dim,), optional the shape of the grid where the transformed image must be sampled. If None (the default), then `self.codomain_shape` is used instead (which must have been set at initialization, otherwise an exception will be raised). sampling_grid2world : array, shape (dim + 1, dim + 1), optional the grid-to-world transform associated with the sampling grid (specified by `sampling_grid_shape`, or by default `self.codomain_shape`). If None (the default), then the grid-to-world transform is assumed to be the identity. resample_only : Boolean, optional If False (the default) the affine transform is applied normally. If True, then the affine transform is not applied, and the input image is just re-sampled on the domain grid of this transform. Returns ------- transformed : array the transformed image, sampled at the requested grid, with shape `sampling_grid_shape` or `self.codomain_shape`. """ transformed = self._apply_transform( image, interpolation=interpolation, image_grid2world=image_grid2world, sampling_grid_shape=sampling_grid_shape, sampling_grid2world=sampling_grid2world, resample_only=resample_only, apply_inverse=True, ) return np.array(transformed) class MutualInformationMetric: @warning_for_keywords() def __init__(self, *, nbins=32, sampling_proportion=None): r"""Initialize an instance of the Mutual Information metric. This class implements the methods required by Optimizer to drive the registration process. Parameters ---------- nbins : int, optional the number of bins to be used for computing the intensity histograms. The default is 32. sampling_proportion : None or float in interval (0, 1], optional There are two types of sampling: dense and sparse. Dense sampling uses all voxels for estimating the (joint and marginal) intensity histograms, while sparse sampling uses a subset of them. If `sampling_proportion` is None, then dense sampling is used. If `sampling_proportion` is a floating point value in (0,1] then sparse sampling is used, where `sampling_proportion` specifies the proportion of voxels to be used. The default is None. Notes ----- Since we use linear interpolation, images are not, in general, differentiable at exact voxel coordinates, but they are differentiable between voxel coordinates. 
When using sparse sampling, selected voxels are slightly moved by adding a small random displacement within one voxel to prevent sampling points from being located exactly at voxel coordinates. When using dense sampling, this random displacement is not applied. """ self.histogram = ParzenJointHistogram(nbins) self.sampling_proportion = sampling_proportion self.metric_val = None self.metric_grad = None @warning_for_keywords() def setup( self, transform, static, moving, *, static_grid2world=None, moving_grid2world=None, starting_affine=None, static_mask=None, moving_mask=None, ): r"""Prepare the metric to compute intensity densities and gradients. The histograms will be setup to compute probability densities of intensities within the minimum and maximum values of `static` and `moving` Parameters ---------- transform: instance of Transform the transformation with respect to whose parameters the gradient must be computed static : array, shape (S, R, C) or (R, C) static image moving : array, shape (S', R', C') or (R', C') moving image. The dimensions of the static (S, R, C) and moving (S', R', C') images do not need to be the same. static_grid2world : array (dim+1, dim+1), optional the grid-to-space transform of the static image. The default is None, implying the transform is the identity. moving_grid2world : array (dim+1, dim+1) the grid-to-space transform of the moving image. The default is None, implying the spacing along all axes is 1. starting_affine : array, shape (dim+1, dim+1), optional the pre-aligning matrix (an affine transform) that roughly aligns the moving image towards the static image. If None, no pre-alignment is performed. If a pre-alignment matrix is available, it is recommended to provide this matrix as `starting_affine` instead of manually transforming the moving image to reduce interpolation artifacts. The default is None, implying no pre-alignment is performed. static_mask : array, shape (S, R, C) or (R, C), optional static image mask that defines which pixels in the static image are used to calculate the mutual information. moving_mask : array, shape (S', R', C') or (R', C'), optional moving image mask that defines which pixels in the moving image are used to calculate the mutual information. 
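Examples
--------
A minimal sketch, assuming identity grid-to-world transforms; the
random arrays below are placeholders, not library data.

>>> import numpy as np
>>> from dipy.align.imaffine import MutualInformationMetric
>>> from dipy.align.transforms import TranslationTransform2D
>>> rng = np.random.default_rng(0)
>>> static = rng.random((32, 32))
>>> moving = rng.random((32, 32))
>>> transform = TranslationTransform2D()
>>> metric = MutualInformationMetric(nbins=32)
>>> metric.setup(transform, static, moving)
>>> neg_mi = metric.distance(transform.get_identity_parameters())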
""" n = transform.get_number_of_parameters() self.metric_grad = np.zeros(n, dtype=np.float64) self.dim = len(static.shape) if moving_grid2world is None: moving_grid2world = np.eye(self.dim + 1) if static_grid2world is None: static_grid2world = np.eye(self.dim + 1) self.transform = transform self.static = np.array(static).astype(np.float64) self.moving = np.array(moving).astype(np.float64) self.static_grid2world = static_grid2world self.static_world2grid = npl.inv(static_grid2world) self.moving_grid2world = moving_grid2world self.moving_world2grid = npl.inv(moving_grid2world) self.static_direction, self.static_spacing = get_direction_and_spacings( static_grid2world, self.dim ) self.moving_direction, self.moving_spacing = get_direction_and_spacings( moving_grid2world, self.dim ) self.starting_affine = starting_affine P = np.eye(self.dim + 1) if self.starting_affine is not None: P = self.starting_affine self.affine_map = AffineMap( P, domain_grid_shape=static.shape, domain_grid2world=static_grid2world, codomain_grid_shape=moving.shape, codomain_grid2world=moving_grid2world, ) # Masks can only be used with dense sampling if self.sampling_proportion in [None, 1.0]: if static_mask is not None: self.static_mask = static_mask.astype(np.int32) else: self.static_mask = None if moving_mask is not None: self.moving_mask = moving_mask.astype(np.int32) else: self.moving_mask = None else: if (static_mask is not None) or (moving_mask is not None): wm = "Masking is not implemented for sampling_proportion < 1, " wm = wm + "setting static_mask = None and moving_mask = None" warn(wm, UserWarning, stacklevel=2) self.static_mask, self.moving_mask = None, None if self.dim == 2: self.interp_method = interpolate_scalar_2d else: self.interp_method = interpolate_scalar_3d if self.sampling_proportion is None: self.samples = None self.ns = 0 else: k = int(np.ceil(1.0 / self.sampling_proportion)) shape = np.array(static.shape, dtype=np.int32) self.samples = sample_domain_regular(k, shape, static_grid2world) self.samples = np.array(self.samples) self.ns = self.samples.shape[0] # Add a column of ones (homogeneous coordinates) self.samples = np.hstack((self.samples, np.ones(self.ns)[:, None])) if self.starting_affine is None: self.samples_prealigned = self.samples else: self.samples_prealigned = self.starting_affine.dot(self.samples.T).T # Sample the static image static_p = self.static_world2grid.dot(self.samples.T).T static_p = static_p[..., : self.dim] self.static_vals, inside = self.interp_method(static, static_p) self.static_vals = np.array(self.static_vals, dtype=np.float64) self.histogram.setup( self.static, self.moving, smask=self.static_mask, mmask=self.moving_mask ) def _update_histogram(self): r"""Update the histogram according to the current affine transform. The current affine transform is given by `self.affine_map`, which must be set before calling this method. Returns ------- static_values: array, shape(n,) if sparse sampling is being used, array, shape(S, R, C) or (R, C) if dense sampling the intensity values corresponding to the static image used to update the histogram. If sparse sampling is being used, then it is simply a sequence of scalars, obtained by sampling the static image at the `n` sampling points. If dense sampling is being used, then the intensities are given directly by the static image, whose shape is (S, R, C) in the 3D case or (R, C) in the 2D case. 
moving_values: array, shape(n,) if sparse sampling is being used, array, shape(S, R, C) or (R, C) if dense sampling the intensity values corresponding to the moving image used to update the histogram. If sparse sampling is being used, then it is simply a sequence of scalars, obtained by sampling the moving image at the `n` sampling points (mapped to the moving space by the current affine transform). If dense sampling is being used, then the intensities are given by the moving imaged linearly transformed towards the static image by the current affine, which results in an image of the same shape as the static image. """ static_mask_values, moving_mask_values = None, None if self.sampling_proportion is None: # Dense case static_values = self.static moving_values = self.affine_map.transform(self.moving) if self.static_mask is not None: static_mask_values = self.static_mask if self.moving_mask is not None: moving_mask_values = self.affine_map.transform( self.moving_mask, interpolation="nearest" ).astype(np.int32) self.histogram.update_pdfs_dense( static_values, moving_values, smask=self.static_mask, mmask=moving_mask_values, ) else: # Sparse case sp_to_moving = self.moving_world2grid.dot(self.affine_map.affine) pts = sp_to_moving.dot(self.samples.T).T # Points on moving grid pts = pts[..., : self.dim] self.moving_vals, inside = self.interp_method(self.moving, pts) self.moving_vals = np.array(self.moving_vals) static_values = self.static_vals moving_values = self.moving_vals self.histogram.update_pdfs_sparse(static_values, moving_values) return static_values, moving_values, static_mask_values, moving_mask_values @warning_for_keywords() def _update_mutual_information(self, params, *, update_gradient=True): r"""Update marginal and joint distributions and the joint gradient. The distributions are updated according to the static and transformed images. The transformed image is precisely the moving image after transforming it by the transform defined by the `params` parameters. The gradient of the joint PDF is computed only if update_gradient is True. Parameters ---------- params : array, shape (n,) the parameter vector of the transform currently used by the metric (the transform name is provided when self.setup is called), n is the number of parameters of the transform update_gradient : Boolean, optional if True, the gradient of the joint PDF will also be computed, otherwise, only the marginal and joint PDFs will be computed. The default is True. """ # Get the matrix associated with the `params` parameter vector current_affine = self.transform.param_to_matrix(params) # Get the static-to-prealigned matrix (only needed for the MI gradient) static2prealigned = self.static_grid2world if self.starting_affine is not None: current_affine = current_affine.dot(self.starting_affine) static2prealigned = self.starting_affine.dot(static2prealigned) self.affine_map.set_affine(current_affine) # Update the histogram with the current joint intensities static_values, moving_values, static_mask_values, moving_mask_values = ( self._update_histogram() ) H = self.histogram # Shortcut to `self.histogram` grad = None # Buffer to write the MI gradient into (if needed) if update_gradient: grad = self.metric_grad # Compute the gradient of the joint PDF w.r.t. parameters if self.sampling_proportion is None: # Dense case # Compute the gradient of moving img. at physical points # associated with the >>static image's grid<< cells # The image gradient must be eval. 
at current moved points grid_to_world = current_affine.dot(self.static_grid2world) mgrad, inside = vf.gradient( self.moving, self.moving_world2grid, self.moving_spacing, self.static.shape, grid_to_world, ) # The Jacobian must be evaluated at the pre-aligned points H.update_gradient_dense( params, self.transform, static_values, moving_values, static2prealigned, mgrad, smask=static_mask_values, mmask=moving_mask_values, ) else: # Sparse case # Compute the gradient of moving at the sampling points # which are already given in physical space coordinates pts = current_affine.dot(self.samples.T).T # Moved points mgrad, inside = vf.sparse_gradient( self.moving, self.moving_world2grid, self.moving_spacing, pts ) # The Jacobian must be evaluated at the pre-aligned points pts = self.samples_prealigned[..., : self.dim] H.update_gradient_sparse( params, self.transform, static_values, moving_values, pts, mgrad ) # Call the cythonized MI computation with self.histogram fields self.metric_val = compute_parzen_mi( H.joint, H.joint_grad, H.smarginal, H.mmarginal, grad ) def distance(self, params): r"""Numeric value of the negative Mutual Information. We need to change the sign so we can use standard minimization algorithms. Parameters ---------- params : array, shape (n,) the parameter vector of the transform currently used by the metric (the transform name is provided when self.setup is called), n is the number of parameters of the transform Returns ------- neg_mi : float the negative mutual information of the input images after transforming the moving image by the currently set transform with `params` parameters """ try: self._update_mutual_information(params, update_gradient=False) except (AffineInversionError, AffineInvalidValuesError): return np.inf return -1 * self.metric_val def gradient(self, params): r"""Numeric value of the metric's gradient at the given parameters. Parameters ---------- params : array, shape (n,) the parameter vector of the transform currently used by the metric (the transform name is provided when self.setup is called), n is the number of parameters of the transform Returns ------- grad : array, shape (n,) the gradient of the negative Mutual Information """ try: self._update_mutual_information(params, update_gradient=True) except (AffineInversionError, AffineInvalidValuesError): return 0 * self.metric_grad return -1 * self.metric_grad def distance_and_gradient(self, params): r"""Numeric value of the metric and its gradient at given parameters. Parameters ---------- params : array, shape (n,) the parameter vector of the transform currently used by the metric (the transform name is provided when self.setup is called), n is the number of parameters of the transform Returns ------- neg_mi : float the negative mutual information of the input images after transforming the moving image by the currently set transform with `params` parameters neg_mi_grad : array, shape (n,) the gradient of the negative Mutual Information """ try: self._update_mutual_information(params, update_gradient=True) except (AffineInversionError, AffineInvalidValuesError): return np.inf, 0 * self.metric_grad return -1 * self.metric_val, -1 * self.metric_grad class AffineRegistration: @warning_for_keywords() def __init__( self, *, metric=None, level_iters=None, sigmas=None, factors=None, method="L-BFGS-B", ss_sigma_factor=None, options=None, verbosity=VerbosityLevels.STATUS, ): """Initialize an instance of the AffineRegistration class. 
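A typical use is to construct the registration object and then call
`optimize` with a pair of images and a transform instance. The sketch
below is illustrative only (synthetic random images and a deliberately
tiny iteration budget, not recommended settings):

>>> import numpy as np
>>> from dipy.align.imaffine import AffineRegistration
>>> from dipy.align.transforms import TranslationTransform2D
>>> rng = np.random.default_rng(0)
>>> static = rng.random((32, 32))
>>> moving = rng.random((32, 32))
>>> affreg = AffineRegistration(level_iters=[5], sigmas=[0.0],
...                             factors=[1], verbosity=0)
>>> affine_map = affreg.optimize(static, moving,
...                              TranslationTransform2D(), None)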
Parameters ---------- metric : None or object, optional an instance of a metric. The default is None, implying the Mutual Information metric with default settings. level_iters : sequence, optional the number of iterations at each scale of the scale space. `level_iters[0]` corresponds to the coarsest scale, `level_iters[-1]` the finest, where n is the length of the sequence. By default, a 3-level scale space with iterations sequence equal to [10000, 1000, 100] will be used. sigmas : sequence of floats, optional custom smoothing parameter to build the scale space (one parameter for each scale). By default, the sequence of sigmas will be [3, 1, 0]. factors : sequence of floats, optional custom scale factors to build the scale space (one factor for each scale). By default, the sequence of factors will be [4, 2, 1]. method : string, optional optimization method to be used. If Scipy version < 0.12, then only L-BFGS-B is available. Otherwise, `method` can be any gradient-based method available in `dipy.core.Optimize`: CG, BFGS, Newton-CG, dogleg or trust-ncg. The default is 'L-BFGS-B'. ss_sigma_factor : float, optional If None, this parameter is not used and an isotropic scale space with the given `factors` and `sigmas` will be built. If not None, an anisotropic scale space will be used by automatically selecting the smoothing sigmas along each axis according to the voxel dimensions of the given image. The `ss_sigma_factor` is used to scale the automatically computed sigmas. For example, in the isotropic case, the sigma of the kernel will be $factor * (2 ^ i)$ where $i = 1, 2, ..., n_{scales} - 1$ is the scale (the finest resolution image $i=0$ is never smoothed). The default is None. options : dict, optional extra optimization options. The default is None, implying no extra options are passed to the optimizer. """ self.metric = metric if self.metric is None: self.metric = MutualInformationMetric() if level_iters is None: level_iters = [10000, 1000, 100] self.level_iters = level_iters self.levels = len(level_iters) if self.levels == 0: raise ValueError("The iterations sequence cannot be empty") self.options = options self.method = method if ss_sigma_factor is not None: self.use_isotropic = False self.ss_sigma_factor = ss_sigma_factor else: self.use_isotropic = True if factors is None: factors = [4, 2, 1] if sigmas is None: sigmas = [3, 1, 0] self.factors = factors self.sigmas = sigmas self.verbosity = verbosity # Separately add a string that tells about the verbosity kwarg. This needs # to be separate, because it is set as a module-wide option in __init__: docstring_addendum = ( """verbosity : int (one of {0, 1, 2, 3}), optional Set the verbosity level of the algorithm: - 0 : do not print anything - 1 : print information about the current status of the algorithm - 2 : print high level information of the components involved in the registration that can be used to detect a failing component. - 3 : print as much information as possible to isolate the cause of a bug. Default: % s """ % VerbosityLevels.STATUS ) __init__.__doc__ = __init__.__doc__ + docstring_addendum def _init_optimizer( self, static, moving, transform, params0, static_grid2world, moving_grid2world, starting_affine, static_mask, moving_mask, ): r"""Initialize the registration optimizer. Initializes the optimizer by computing the scale space of the input images Parameters ---------- static : array, shape (S, R, C) or (R, C) the image to be used as reference during optimization. 
moving : array, shape (S', R', C') or (R', C') the image to be used as "moving" during optimization. The dimensions of the static (S, R, C) and moving (S', R', C') images do not need to be the same. transform : instance of Transform the transformation with respect to whose parameters the gradient must be computed params0 : array, shape (n,) parameters from which to start the optimization. If None, the optimization will start at the identity transform. n is the number of parameters of the specified transformation. static_grid2world : array, shape (dim+1, dim+1) the voxel-to-space transformation associated with the static image moving_grid2world : array, shape (dim+1, dim+1) the voxel-to-space transformation associated with the moving image starting_affine : string, or matrix, or None If string: 'mass': align centers of gravity 'voxel-origin': align physical coordinates of voxel (0,0,0) 'centers': align physical coordinates of central voxels If matrix: array, shape (dim+1, dim+1) If None: Start from identity static_mask : array, shape (S, R, C) or (R, C), optional static image mask that defines which pixels in the static image are used to calculate the mutual information. moving_mask : array, shape (S', R', C') or (R', C'), optional moving image mask that defines which pixels in the moving image are used to calculate the mutual information. """ self.dim = len(static.shape) self.transform = transform n = transform.get_number_of_parameters() self.nparams = n # ensure that masks are not all zeros if np.all(static_mask == 0): warn( "static_mask is all zeros, setting to None (which means \ the entire volume will be used)", UserWarning, stacklevel=2, ) static_mask = None if np.all(moving_mask == 0): warn("moving_mask is all zeros, setting to None", UserWarning, stacklevel=2) moving_mask = None # save masks for use elsewhere self.static_mask, self.moving_mask = static_mask, moving_mask # multiply images by masks for transform_centers_of_mass static_masked, moving_masked = static, moving if static_mask is not None: static_masked = static * static_mask if moving_mask is not None: moving_masked = moving * moving_mask if params0 is None: params0 = self.transform.get_identity_parameters() self.params0 = params0 if starting_affine is None: self.starting_affine = np.eye(self.dim + 1) elif isinstance(starting_affine, str): if starting_affine == "mass": affine_map = transform_centers_of_mass( static_masked, static_grid2world, moving_masked, moving_grid2world ) self.starting_affine = affine_map.affine print("starting_affine in imaffine:", self.starting_affine) elif starting_affine == "voxel-origin": affine_map = transform_origins( static, static_grid2world, moving, moving_grid2world ) self.starting_affine = affine_map.affine elif starting_affine == "centers": affine_map = transform_geometric_centers( static, static_grid2world, moving, moving_grid2world ) self.starting_affine = affine_map.affine else: raise ValueError("Invalid starting_affine strategy") elif isinstance(starting_affine, np.ndarray) and starting_affine.shape >= ( self.dim, self.dim + 1, ): self.starting_affine = starting_affine else: raise ValueError("Invalid starting_affine matrix") # Extract information from affine matrices to create the scale space static_direction, static_spacing = get_direction_and_spacings( static_grid2world, self.dim ) moving_direction, moving_spacing = get_direction_and_spacings( moving_grid2world, self.dim ) # Scale the images by min and max values (where mask == 1) if static_mask is not None: smin = 
np.min(static[static_mask == 1]) smax = np.max(static[static_mask == 1]) else: smin, smax = np.min(static), np.max(static) static = (static.astype(np.float64) - smin) / (smax - smin) if moving_mask is not None: mmin = np.min(moving[moving_mask == 1]) mmax = np.max(moving[moving_mask == 1]) else: mmin, mmax = np.min(moving), np.max(moving) moving = (moving.astype(np.float64) - mmin) / (mmax - mmin) # Build the scale space of the input images if self.use_isotropic: self.moving_ss = IsotropicScaleSpace( moving, self.factors, self.sigmas, image_grid2world=moving_grid2world, input_spacing=moving_spacing, mask0=False, ) self.static_ss = IsotropicScaleSpace( static, self.factors, self.sigmas, image_grid2world=static_grid2world, input_spacing=static_spacing, mask0=False, ) else: self.moving_ss = ScaleSpace( moving, self.levels, image_grid2world=moving_grid2world, input_spacing=moving_spacing, sigma_factor=self.ss_sigma_factor, mask0=False, ) self.static_ss = ScaleSpace( static, self.levels, image_grid2world=static_grid2world, input_spacing=static_spacing, sigma_factor=self.ss_sigma_factor, mask0=False, ) @warning_for_keywords() def optimize( self, static, moving, transform, params0, *, static_grid2world=None, moving_grid2world=None, starting_affine=None, ret_metric=False, static_mask=None, moving_mask=None, ): r"""Start the optimization process. Parameters ---------- static : 2D or 3D array the image to be used as reference during optimization. moving : 2D or 3D array the image to be used as "moving" during optimization. It is necessary to pre-align the moving image to ensure its domain lies inside the domain of the deformation fields. This is assumed to be accomplished by "pre-aligning" the moving image towards the static using an affine transformation given by the 'starting_affine' matrix transform : instance of Transform the transformation with respect to whose parameters the gradient must be computed params0 : array, shape (n,) parameters from which to start the optimization. If None, the optimization will start at the identity transform. n is the number of parameters of the specified transformation. static_grid2world : array, shape (dim+1, dim+1), optional the voxel-to-space transformation associated with the static image. The default is None, implying the transform is the identity. moving_grid2world : array, shape (dim+1, dim+1), optional the voxel-to-space transformation associated with the moving image. The default is None, implying the transform is the identity. starting_affine : string, or matrix, or None, optional If string: 'mass': align centers of gravity 'voxel-origin': align physical coordinates of voxel (0,0,0) 'centers': align physical coordinates of central voxels If matrix: array, shape (dim+1, dim+1). If None: Start from identity. ret_metric : boolean, optional if True, it returns the parameters for measuring the similarity between the images (default 'False'). The metric containing optimal parameters and the distance between the images. static_mask : array, shape (S, R, C) or (R, C), optional static image mask that defines which pixels in the static image are used to calculate the mutual information. moving_mask : array, shape (S', R', C') or (R', C'), optional moving image mask that defines which pixels in the moving image are used to calculate the mutual information. Returns ------- affine_map : instance of AffineMap the affine resulting affine transformation xopt : optimal parameters the optimal parameters (translation, rotation shear etc.) 
fopt : Similarity metric the value of the function at the optimal parameters. """ self._init_optimizer( static, moving, transform, params0, static_grid2world, moving_grid2world, starting_affine, static_mask, moving_mask, ) del starting_affine # Now we must refer to self.starting_affine del static_mask # Now we must refer to self.static_mask del moving_mask # Now we must refer to self.moving_mask # Multi-resolution iterations original_static_shape = self.static_ss.get_image(0).shape original_static_grid2world = self.static_ss.get_affine(0) original_moving_shape = self.moving_ss.get_image(0).shape original_moving_grid2world = self.moving_ss.get_affine(0) affine_map = AffineMap( None, domain_grid_shape=original_static_shape, domain_grid2world=original_static_grid2world, codomain_grid_shape=original_moving_shape, codomain_grid2world=original_moving_grid2world, ) for level in range(self.levels - 1, -1, -1): self.current_level = level max_iter = self.level_iters[-1 - level] if self.verbosity >= VerbosityLevels.STATUS: print(f"Optimizing level {level} [max iter: {max_iter}]") # Resample the smooth static image to the shape of this level smooth_static = self.static_ss.get_image(level) current_static_shape = self.static_ss.get_domain_shape(level) current_static_grid2world = self.static_ss.get_affine(level) current_affine_map = AffineMap( None, domain_grid_shape=current_static_shape, domain_grid2world=current_static_grid2world, codomain_grid_shape=original_static_shape, codomain_grid2world=original_static_grid2world, ) current_static = current_affine_map.transform(smooth_static) current_static_mask = None if self.static_mask is not None: current_static_mask = current_affine_map.transform( self.static_mask, interpolation="nearest" ).astype(np.int32) # The moving image is full resolution current_moving_grid2world = original_moving_grid2world current_moving = self.moving_ss.get_image(level) # Prepare the metric for iterations at this resolution self.metric.setup( transform, current_static, current_moving, static_grid2world=current_static_grid2world, moving_grid2world=current_moving_grid2world, starting_affine=self.starting_affine, static_mask=current_static_mask, moving_mask=self.moving_mask, ) # Optimize this level if self.options is None: self.options = {"gtol": 1e-4, "disp": False} if self.method == "L-BFGS-B": self.options["maxfun"] = max_iter else: self.options["maxiter"] = max_iter opt = Optimizer( self.metric.distance_and_gradient, self.params0, method=self.method, jac=True, options=self.options, ) params = opt.xopt # Update starting_affine matrix with optimal parameters T = self.transform.param_to_matrix(params) self.starting_affine = T.dot(self.starting_affine) # Start next iteration at identity self.params0 = self.transform.get_identity_parameters() affine_map.set_affine(self.starting_affine) if ret_metric: return affine_map, opt.xopt, opt.fopt return affine_map def transform_centers_of_mass(static, static_grid2world, moving, moving_grid2world): r"""Transformation to align the center of mass of the input images. 
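The returned map is a pure translation taking the moving image's center
of mass onto the static image's. A minimal, deterministic sketch with
identity grid-to-world transforms:

>>> import numpy as np
>>> from dipy.align.imaffine import transform_centers_of_mass
>>> static = np.zeros((10, 10)); static[2, 2] = 1.0
>>> moving = np.zeros((10, 10)); moving[7, 7] = 1.0
>>> c_of_mass = transform_centers_of_mass(static, None, moving, None)
>>> c_of_mass.affine[:2, 2]  # translation component
array([5., 5.])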
Parameters ---------- static : array, shape (S, R, C) static image static_grid2world : array, shape (dim+1, dim+1) the voxel-to-space transformation of the static image moving : array, shape (S, R, C) moving image moving_grid2world : array, shape (dim+1, dim+1) the voxel-to-space transformation of the moving image Returns ------- affine_map : instance of AffineMap the affine transformation (translation only, in this case) aligning the center of mass of the moving image towards the one of the static image """ dim = len(static.shape) if static_grid2world is None: static_grid2world = np.eye(dim + 1) if moving_grid2world is None: moving_grid2world = np.eye(dim + 1) c_static = ndimage.center_of_mass(np.array(static)) c_static = static_grid2world.dot(c_static + (1,)) c_moving = ndimage.center_of_mass(np.array(moving)) c_moving = moving_grid2world.dot(c_moving + (1,)) transform = np.eye(dim + 1) transform[:dim, dim] = (c_moving - c_static)[:dim] affine_map = AffineMap( transform, domain_grid_shape=static.shape, domain_grid2world=static_grid2world, codomain_grid_shape=moving.shape, codomain_grid2world=moving_grid2world, ) return affine_map def transform_geometric_centers(static, static_grid2world, moving, moving_grid2world): r"""Transformation to align the geometric center of the input images. With "geometric center" of a volume we mean the physical coordinates of its central voxel Parameters ---------- static : array, shape (S, R, C) static image static_grid2world : array, shape (dim+1, dim+1) the voxel-to-space transformation of the static image moving : array, shape (S, R, C) moving image moving_grid2world : array, shape (dim+1, dim+1) the voxel-to-space transformation of the moving image Returns ------- affine_map : instance of AffineMap the affine transformation (translation only, in this case) aligning the geometric center of the moving image towards the one of the static image """ dim = len(static.shape) if static_grid2world is None: static_grid2world = np.eye(dim + 1) if moving_grid2world is None: moving_grid2world = np.eye(dim + 1) c_static = tuple((np.array(static.shape, dtype=np.float64)) * 0.5) c_static = static_grid2world.dot(c_static + (1,)) c_moving = tuple((np.array(moving.shape, dtype=np.float64)) * 0.5) c_moving = moving_grid2world.dot(c_moving + (1,)) transform = np.eye(dim + 1) transform[:dim, dim] = (c_moving - c_static)[:dim] affine_map = AffineMap( transform, domain_grid_shape=static.shape, domain_grid2world=static_grid2world, codomain_grid_shape=moving.shape, codomain_grid2world=moving_grid2world, ) return affine_map def transform_origins(static, static_grid2world, moving, moving_grid2world): r"""Transformation to align the origins of the input images. 
With "origin" of a volume we mean the physical coordinates of voxel (0,0,0) Parameters ---------- static : array, shape (S, R, C) static image static_grid2world : array, shape (dim+1, dim+1) the voxel-to-space transformation of the static image moving : array, shape (S, R, C) moving image moving_grid2world : array, shape (dim+1, dim+1) the voxel-to-space transformation of the moving image Returns ------- affine_map : instance of AffineMap the affine transformation (translation only, in this case) aligning the origin of the moving image towards the one of the static image """ dim = len(static.shape) if static_grid2world is None: static_grid2world = np.eye(dim + 1) if moving_grid2world is None: moving_grid2world = np.eye(dim + 1) c_static = static_grid2world[:dim, dim] c_moving = moving_grid2world[:dim, dim] transform = np.eye(dim + 1) transform[:dim, dim] = (c_moving - c_static)[:dim] affine_map = AffineMap( transform, domain_grid_shape=static.shape, domain_grid2world=static_grid2world, codomain_grid_shape=moving.shape, codomain_grid2world=moving_grid2world, ) return affine_map dipy-1.11.0/dipy/align/imwarp.py000066400000000000000000002113471476546756600165110ustar00rootroot00000000000000"""Classes and functions for Symmetric Diffeomorphic Registration""" import abc import logging import nibabel as nib from nibabel.streamlines import ArraySequence as Streamlines import numpy as np import numpy.linalg as npl from dipy.align import Bunch, VerbosityLevels, floating, vector_fields as vfu from dipy.align.scalespace import ScaleSpace from dipy.testing.decorators import warning_for_keywords RegistrationStages = Bunch( INIT_START=0, INIT_END=1, OPT_START=2, OPT_END=3, SCALE_START=4, SCALE_END=5, ITER_START=6, ITER_END=7, ) """Registration Stages This enum defines the different stages which the Volumetric Registration may be in. The value of the stage is passed as a parameter to the call-back function so that it can react accordingly. INIT_START: optimizer initialization starts INIT_END: optimizer initialization ends OPT_START: optimization starts OPT_END: optimization ends SCALE_START: optimization at a new scale space resolution starts SCALE_END: optimization at the current scale space resolution ends ITER_START: a new iteration starts ITER_END: the current iteration ends """ logger = logging.getLogger(__name__) def mult_aff(A, B): """Returns the matrix product A.dot(B) considering None as the identity Parameters ---------- A : array, shape (n,k) B : array, shape (k,m) Returns ------- The matrix product A.dot(B). If any of the input matrices is None, it is treated as the identity matrix. If both matrices are None, None is returned """ if A is None: return B elif B is None: return A return A.dot(B) def get_direction_and_spacings(affine, dim): """Extracts the rotational and spacing components from a matrix Extracts the rotational and spacing (voxel dimensions) components from a matrix. An image gradient represents the local variation of the image's gray values per voxel. Since we are iterating on the physical space, we need to compute the gradients as variation per millimeter, so we need to divide each gradient's component by the voxel size along the corresponding axis, that's what the spacings are used for. Since the image's gradients are oriented along the grid axes, we also need to re-orient the gradients to be given in physical space coordinates. Parameters ---------- affine : array, shape (k, k), k = 3, 4 the matrix transforming grid coordinates to physical space. 
Returns ------- direction : array, shape (k-1, k-1) the rotational component of the input matrix spacings : array, shape (k-1,) the scaling component (voxel size) of the matrix """ if affine is None: return np.eye(dim), np.ones(dim) dim = affine.shape[1] - 1 # Temporary hack: get the zooms by building a nifti image affine4x4 = np.eye(4) empty_volume = np.zeros((0, 0, 0)) affine4x4[:dim, :dim] = affine[:dim, :dim] affine4x4[:dim, 3] = affine[:dim, dim - 1] nib_nifti = nib.Nifti1Image(empty_volume, affine4x4) scalings = np.asarray(nib_nifti.header.get_zooms()) scalings = np.asarray(scalings[:dim], dtype=np.float64) A = affine[:dim, :dim] return A.dot(np.diag(1.0 / scalings)), scalings class DiffeomorphicMap: @warning_for_keywords() def __init__( self, dim, disp_shape, *, disp_grid2world=None, domain_shape=None, domain_grid2world=None, codomain_shape=None, codomain_grid2world=None, prealign=None, ): """DiffeomorphicMap Implements a diffeomorphic transformation on the physical space. The deformation fields encoding the direct and inverse transformations share the same domain discretization (both the discretization grid shape and voxel-to-space matrix). The input coordinates (physical coordinates) are first aligned using prealign, and then displaced using the corresponding vector field interpolated at the aligned coordinates. Parameters ---------- dim : int, 2 or 3 the transformation's dimension disp_shape : array, shape (dim,) the number of slices (if 3D), rows and columns of the deformation field's discretization disp_grid2world : the voxel-to-space transform between the def. fields grid and space domain_shape : array, shape (dim,) the number of slices (if 3D), rows and columns of the default discretization of this map's domain domain_grid2world : array, shape (dim+1, dim+1) the default voxel-to-space transformation between this map's discretization and physical space codomain_shape : array, shape (dim,) the number of slices (if 3D), rows and columns of the images that are 'normally' warped using this transformation in the forward direction (this will provide default transformation parameters to warp images under this transformation). By default, we assume that the inverse transformation is 'normally' used to warp images with the same discretization and voxel-to-space transformation as the deformation field grid. codomain_grid2world : array, shape (dim+1, dim+1) the voxel-to-space transformation of images that are 'normally' warped using this transformation (in the forward direction). prealign : array, shape (dim+1, dim+1) the linear transformation to be applied to align input images to the reference space before warping under the deformation field. 
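Examples
--------
A minimal sketch: after `allocate`, both displacement fields are zero,
so warping with this class's `transform` method acts as the identity
(up to interpolation).

>>> import numpy as np
>>> from dipy.align.imwarp import DiffeomorphicMap
>>> diff_map = DiffeomorphicMap(2, (8, 8))
>>> diff_map.allocate()  # zero displacement fields (identity map)
>>> image = np.ones((8, 8), dtype=np.float32)
>>> warped = diff_map.transform(image)
>>> warped.shape
(8, 8)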
""" self.dim = dim if disp_shape is None: raise ValueError("Invalid displacement field discretization") self.disp_shape = np.asarray(disp_shape, dtype=np.int32) # If the discretization affine is None, we assume it's the identity self.disp_grid2world = disp_grid2world if self.disp_grid2world is None: self.disp_world2grid = None else: self.disp_world2grid = npl.inv(self.disp_grid2world) # If domain_shape isn't provided, we use the map's discretization shape if domain_shape is None: self.domain_shape = self.disp_shape else: self.domain_shape = np.asarray(domain_shape, dtype=np.int32) self.domain_grid2world = domain_grid2world if domain_grid2world is None: self.domain_world2grid = None else: self.domain_world2grid = npl.inv(domain_grid2world) # If codomain shape was not provided, we assume it is an endomorphism: # use the same domain_shape and codomain_grid2world as the field domain if codomain_shape is None: self.codomain_shape = self.domain_shape else: self.codomain_shape = np.asarray(codomain_shape, dtype=np.int32) self.codomain_grid2world = codomain_grid2world if codomain_grid2world is None: self.codomain_world2grid = None else: self.codomain_world2grid = npl.inv(codomain_grid2world) self.prealign = prealign if prealign is None: self.prealign_inv = None else: self.prealign_inv = npl.inv(prealign) self.is_inverse = False self.forward = None self.backward = None def interpret_matrix(self, obj): """Try to interpret `obj` as a matrix Some operations are performed faster if we know in advance if a matrix is the identity (so we can skip the actual matrix-vector multiplication). This function returns None if the given object is None or the 'identity' string. It returns the same object if it is a numpy array. It raises an exception otherwise. Parameters ---------- obj : object any object Returns ------- obj : object the same object given as argument if `obj` is None or a numpy array. None if `obj` is the 'identity' string. """ if (obj is None) or isinstance(obj, np.ndarray): return obj if isinstance(obj, str) and (obj == "identity"): return None raise ValueError("Invalid matrix") def get_forward_field(self): """Deformation field to transform an image in the forward direction Returns the deformation field that must be used to warp an image under this transformation in the forward direction (note the 'is_inverse' flag). """ if self.is_inverse: return self.backward else: return self.forward def get_backward_field(self): """Deformation field to transform an image in the backward direction Returns the deformation field that must be used to warp an image under this transformation in the backward direction (note the 'is_inverse' flag). """ if self.is_inverse: return self.forward else: return self.backward def allocate(self): """Creates a zero displacement field Creates a zero displacement field (the identity transformation). """ self.forward = np.zeros(tuple(self.disp_shape) + (self.dim,), dtype=floating) self.backward = np.zeros(tuple(self.disp_shape) + (self.dim,), dtype=floating) @warning_for_keywords() def _get_warping_function(self, interpolation, *, warp_coordinates=False): r"""Appropriate warping function for the given interpolation type Returns the right warping function from vector_fields that must be called for the specified data dimension and interpolation type Parameters ---------- interpolation : string, either 'linear' or 'nearest' specifies the type of interpolation used for image warping. 
It does not have any effect if `warp_coordinates` is True, in which case no interpolation is intended to be performed. warp_coordinates : Boolean, if False, then returns the right image warping function for this DiffeomorphicMap dimension and the specified `interpolation`. If True, then returns the right coordinate warping function. """ if self.dim == 2: if warp_coordinates: return vfu.warp_coordinates_2d if interpolation == "linear": return vfu.warp_2d else: return vfu.warp_2d_nn else: if warp_coordinates: return vfu.warp_coordinates_3d if interpolation == "linear": return vfu.warp_3d else: return vfu.warp_3d_nn @warning_for_keywords() def _warp_coordinates_forward(self, points, *, coord2world=None, world2coord=None): r"""Warps the list of points in the forward direction Applies this diffeomorphic map to the list of points given by `points`. We assume that the points' coordinates are mapped to world coordinates by applying the `coord2world` affine transform. The warped coordinates are given in world coordinates unless `world2coord` affine transform is specified, in which case the warped points will be transformed to the corresponding coordinate system. Parameters ---------- points : coord2world : world2coord : """ warp_f = self._get_warping_function(None, warp_coordinates=True) coord2prealigned = mult_aff(self.prealign, coord2world) out = warp_f( points, self.forward, coord2prealigned, world2coord, self.disp_world2grid ) return out @warning_for_keywords() def _warp_coordinates_backward(self, points, *, coord2world=None, world2coord=None): """Warps the list of points in the backward direction Applies this diffeomorphic map to the list of points given by `points`. We assume that the points' coordinates are mapped to world coordinates by applying the `coord2world` affine transform. The warped coordinates are given in world coordinates unless `world2coord` affine transform is specified, in which case the warped points will be transformed to the corresponding coordinate system. Parameters ---------- points : coord2world : world2coord : """ warp_f = self._get_warping_function(None, warp_coordinates=True) world2invprealigned = mult_aff(world2coord, self.prealign_inv) out = warp_f( points, self.backward, coord2world, world2invprealigned, self.disp_world2grid, ) return out @warning_for_keywords() def _warp_forward( self, image, *, interpolation="linear", image_world2grid=None, out_shape=None, out_grid2world=None, ): """Warps an image in the forward direction Deforms the input image under this diffeomorphic map in the forward direction. Since the mapping is defined in the physical space, the user must specify the sampling grid shape and its space-to-voxel mapping. By default, the transformation will use the discretization information given at initialization. 
        Parameters
        ----------
        image : array, shape (s, r, c) if dim = 3 or (r, c) if dim = 2
            the image to be warped under this transformation in the forward
            direction
        interpolation : string, either 'linear' or 'nearest'
            the type of interpolation to be used for warping, either 'linear'
            (for k-linear interpolation) or 'nearest' for nearest neighbor
        image_world2grid : array, shape (dim+1, dim+1)
            the transformation bringing world (space) coordinates to voxel
            coordinates of the image given as input
        out_shape : array, shape (dim,)
            the number of slices, rows, and columns of the desired warped image
        out_grid2world : the transformation bringing voxel coordinates of the
            warped image to physical space

        Returns
        -------
        warped : array, shape = out_shape or self.codomain_shape if None
            the warped image under this transformation in the forward
            direction

        Notes
        -----
        A diffeomorphic map must be thought of as a mapping between points in
        space. Warping an image J towards an image I means transforming each
        voxel with (discrete) coordinates i in I to (floating-point) voxel
        coordinates j in J. The transformation we consider 'forward' is
        precisely mapping coordinates i from the input image to coordinates j
        from the reference image, which has the effect of warping an image
        with reference discretization (typically, the "static image")
        "towards" an image with input discretization (typically, the "moving
        image"). More precisely, the warped image is produced by the
        following interpolation:

        warped[i] = image[W * forward[Dinv * P * S * i] + W * P * S * i]

        where i denotes the coordinates of a voxel in the input grid, W is
        the world-to-grid transformation of the image given as input, Dinv
        is the world-to-grid transformation of the deformation field
        discretization, P is the pre-aligning matrix (transforming input
        points to reference points), S is the voxel-to-space transformation
        of the sampling grid (see comment below) and forward is the forward
        deformation field.

        If we want to warp an image, we also must specify on what grid we
        want to sample the resulting warped image (the images are considered
        as points in space and their representation on a grid depends on the
        grid-to-space transform telling us, for each grid voxel, what point
        in space we need to bring via interpolation). So, S is the matrix
        that converts the sampling grid (whose shape is given as parameter
        'out_shape') to space coordinates.
        """
        # if no world-to-image transform is provided, we use the codomain info
        if image_world2grid is None:
            image_world2grid = self.codomain_world2grid
        # if no sampling info is provided, we use the domain info
        if out_shape is None:
            if self.domain_shape is None:
                raise ValueError(
                    "Unable to infer sampling info. Provide a valid out_shape."
) out_shape = self.domain_shape else: out_shape = np.asarray(out_shape, dtype=np.int32) if out_grid2world is None: out_grid2world = self.domain_grid2world W = self.interpret_matrix(image_world2grid) Dinv = self.disp_world2grid P = self.prealign S = self.interpret_matrix(out_grid2world) # this is the matrix which we need to multiply the voxel coordinates # to interpolate on the forward displacement field ("in"side the # 'forward' brackets in the expression above) affine_idx_in = mult_aff(Dinv, mult_aff(P, S)) # this is the matrix which we need to multiply the voxel coordinates # to add to the displacement ("out"side the 'forward' brackets in the # expression above) affine_idx_out = mult_aff(W, mult_aff(P, S)) # this is the matrix which we need to multiply the displacement vector # prior to adding to the transformed input point affine_disp = W # Convert the data to required types to use the cythonized functions if interpolation == "nearest": if image.dtype is np.dtype("float64") and floating is np.float32: image = image.astype(floating) elif image.dtype is np.dtype("int64"): image = image.astype(np.int32) else: image = np.asarray(image, dtype=floating) warp_f = self._get_warping_function(interpolation) warped = warp_f( image, self.forward, affine_idx_in=affine_idx_in, affine_idx_out=affine_idx_out, affine_disp=affine_disp, out_shape=out_shape, ) return warped @warning_for_keywords() def _warp_backward( self, image, *, interpolation="linear", image_world2grid=None, out_shape=None, out_grid2world=None, ): """Warps an image in the backward direction Deforms the input image under this diffeomorphic map in the backward direction. Since the mapping is defined in the physical space, the user must specify the sampling grid shape and its space-to-voxel mapping. By default, the transformation will use the discretization information given at initialization. Parameters ---------- image : array, shape (s, r, c) if dim = 3 or (r, c) if dim = 2 the image to be warped under this transformation in the backward direction interpolation : string, either 'linear' or 'nearest' the type of interpolation to be used for warping, either 'linear' (for k-linear interpolation) or 'nearest' for nearest neighbor image_world2grid : array, shape (dim+1, dim+1) the transformation bringing world (space) coordinates to voxel coordinates of the image given as input out_shape : array, shape (dim,) the number of slices, rows and columns of the desired warped image out_grid2world : the transformation bringing voxel coordinates of the warped image to physical space Returns ------- warped : array, shape = out_shape or self.domain_shape if None the warped image under this transformation in the backward direction Notes ----- A diffeomorphic map must be thought as a mapping between points in space. Warping an image J towards an image I means transforming each voxel with (discrete) coordinates i in I to (floating-point) voxel coordinates j in J. The transformation we consider 'backward' is precisely mapping coordinates i from the reference grid to coordinates j from the input image (that's why it's "backward"), which has the effect of warping the input image (moving) "towards" the reference. 
        More precisely, the warped image is produced by the following
        interpolation:

        warped[i] = image[W * Pinv * backward[Dinv * S * i] + W * Pinv * S * i]

        where i denotes the coordinates of a voxel in the input grid, W is
        the world-to-grid transformation of the image given as input, Dinv
        is the world-to-grid transformation of the deformation field
        discretization, Pinv is the pre-aligning matrix's inverse
        (transforming reference points to input points), S is the
        grid-to-space transformation of the sampling grid (see comment
        below) and backward is the backward deformation field.

        If we want to warp an image, we also must specify on what grid we
        want to sample the resulting warped image (the images are considered
        as points in space and their representation on a grid depends on the
        grid-to-space transform telling us, for each grid voxel, what point
        in space we need to bring via interpolation). So, S is the matrix
        that converts the sampling grid (whose shape is given as parameter
        'out_shape') to space coordinates.
        """
        # if no world-to-image transform is provided, we use the domain info
        if image_world2grid is None:
            image_world2grid = self.domain_world2grid
        # if no sampling info is provided, we use the codomain info
        if out_shape is None:
            if self.codomain_shape is None:
                msg = "Unknown sampling info. Provide a valid out_shape."
                raise ValueError(msg)
            out_shape = self.codomain_shape
        if out_grid2world is None:
            out_grid2world = self.codomain_grid2world

        W = self.interpret_matrix(image_world2grid)
        Dinv = self.disp_world2grid
        Pinv = self.prealign_inv
        S = self.interpret_matrix(out_grid2world)

        # this is the matrix which we need to multiply the voxel coordinates
        # to interpolate on the backward displacement field ("in"side the
        # 'backward' brackets in the expression above)
        affine_idx_in = mult_aff(Dinv, S)

        # this is the matrix which we need to multiply the voxel coordinates
        # to add to the displacement ("out"side the 'backward' brackets in the
        # expression above)
        affine_idx_out = mult_aff(W, mult_aff(Pinv, S))

        # this is the matrix which we need to multiply the displacement vector
        # prior to adding to the transformed input point
        affine_disp = mult_aff(W, Pinv)

        if interpolation == "nearest":
            if image.dtype is np.dtype("float64") and floating is np.float32:
                image = image.astype(floating)
            elif image.dtype is np.dtype("int64"):
                image = image.astype(np.int32)
        else:
            image = np.asarray(image, dtype=floating)

        warp_f = self._get_warping_function(interpolation)

        warped = warp_f(
            image,
            self.backward,
            affine_idx_in=affine_idx_in,
            affine_idx_out=affine_idx_out,
            affine_disp=affine_disp,
            out_shape=out_shape,
        )
        return warped

    @warning_for_keywords()
    def transform(
        self,
        image,
        *,
        interpolation="linear",
        image_world2grid=None,
        out_shape=None,
        out_grid2world=None,
    ):
        """Warps an image in the forward direction

        Transforms the input image under this transformation in the forward
        direction. It uses the "is_inverse" flag to switch between "forward"
        and "backward" (if is_inverse is False, then transform(...) warps the
        image forwards, else it warps the image backwards).
Parameters ---------- image : array, shape (s, r, c) if dim = 3 or (r, c) if dim = 2 the image to be warped under this transformation in the forward direction interpolation : string, either 'linear' or 'nearest' the type of interpolation to be used for warping, either 'linear' (for k-linear interpolation) or 'nearest' for nearest neighbor image_world2grid : array, shape (dim+1, dim+1) the transformation bringing world (space) coordinates to voxel coordinates of the image given as input out_shape : array, shape (dim,) the number of slices, rows and columns of the desired warped image out_grid2world : the transformation bringing voxel coordinates of the warped image to physical space Returns ------- warped : array, shape = out_shape or self.codomain_shape if None the warped image under this transformation in the forward direction Notes ----- See _warp_forward and _warp_backward documentation for further information. """ if out_shape is not None: out_shape = np.asarray(out_shape, dtype=np.int32) if self.is_inverse: warped = self._warp_backward( image, interpolation=interpolation, image_world2grid=image_world2grid, out_shape=out_shape, out_grid2world=out_grid2world, ) else: warped = self._warp_forward( image, interpolation=interpolation, image_world2grid=image_world2grid, out_shape=out_shape, out_grid2world=out_grid2world, ) return np.asarray(warped) @warning_for_keywords() def transform_inverse( self, image, *, interpolation="linear", image_world2grid=None, out_shape=None, out_grid2world=None, ): """Warps an image in the backward direction Transforms the input image under this transformation in the backward direction. It uses the "is_inverse" flag to switch between "forward" and "backward" (if is_inverse is False, then transform_inverse(...) warps the image backwards, else it warps the image forwards) Parameters ---------- image : array, shape (s, r, c) if dim = 3 or (r, c) if dim = 2 the image to be warped under this transformation in the forward direction interpolation : string, either 'linear' or 'nearest' the type of interpolation to be used for warping, either 'linear' (for k-linear interpolation) or 'nearest' for nearest neighbor image_world2grid : array, shape (dim+1, dim+1) the transformation bringing world (space) coordinates to voxel coordinates of the image given as input out_shape : array, shape (dim,) the number of slices, rows, and columns of the desired warped image out_grid2world : the transformation bringing voxel coordinates of the warped image to physical space Returns ------- warped : array, shape = out_shape or self.codomain_shape if None warped image under this transformation in the backward direction Notes ----- See _warp_forward and _warp_backward documentation for further information. """ if self.is_inverse: warped = self._warp_forward( image, interpolation=interpolation, image_world2grid=image_world2grid, out_shape=out_shape, out_grid2world=out_grid2world, ) else: warped = self._warp_backward( image, interpolation=interpolation, image_world2grid=image_world2grid, out_shape=out_shape, out_grid2world=out_grid2world, ) return np.asarray(warped) @warning_for_keywords() def transform_points(self, points, *, coord2world=None, world2coord=None): """Warp the list of points in the forward direction. Applies this diffeomorphic map to the list of points (or streamlines) given by `points`. We assume that the points' coordinates are mapped to world coordinates by applying the `coord2world` affine transform. 
The warped coordinates are given in world coordinates unless `world2coord` affine transform is specified, in which case the warped points will be transformed to the corresponding coordinate system. Parameters ---------- points : array, shape (N, dim) or Streamlines object coord2world : array, shape (dim+1, dim+1), optional affine matrix mapping points to world coordinates world2coord : array, shape (dim+1, dim+1), optional affine matrix mapping world coordinates to points """ return self._transform_coordinates( points, coord2world, world2coord, inverse=self.is_inverse ) @warning_for_keywords() def transform_points_inverse(self, points, *, coord2world=None, world2coord=None): """Warp the list of points in the backward direction. Applies this diffeomorphic map to the list of points (or streamlines) given by `points`. We assume that the points' coordinates are mapped to world coordinates by applying the `coord2world` affine transform. The warped coordinates are given in world coordinates unless `world2coord` affine transform is specified, in which case the warped points will be transformed to the corresponding coordinate system. Parameters ---------- points : array, shape (N, dim) or Streamlines object coord2world : array, shape (dim+1, dim+1), optional affine matrix mapping points to world coordinates world2coord : array, shape (dim+1, dim+1), optional affine matrix mapping world coordinates to points """ return self._transform_coordinates( points, coord2world, world2coord, inverse=not self.is_inverse ) @warning_for_keywords() def _transform_coordinates( self, points, coord2world, world2coord, *, inverse=False ): is_streamline_obj = isinstance(points, Streamlines) data = points.get_data() if is_streamline_obj else points if inverse: out = self._warp_coordinates_backward( data, coord2world=coord2world, world2coord=world2coord ) else: out = self._warp_coordinates_forward( data, coord2world=coord2world, world2coord=world2coord ) if is_streamline_obj: old_data_dtype = points._data.dtype old_offsets_dtype = points._offsets.dtype streamlines = points.copy() streamlines._offsets = points._offsets.astype(old_offsets_dtype) streamlines._data = out.astype(old_data_dtype) return streamlines return out def inverse(self): """Inverse of this DiffeomorphicMap instance Returns a diffeomorphic map object representing the inverse of this transformation. The internal arrays are not copied but just referenced. Returns ------- inv : DiffeomorphicMap object the inverse of this diffeomorphic map. """ inv = DiffeomorphicMap( dim=self.dim, disp_shape=self.disp_shape, disp_grid2world=self.disp_grid2world, domain_shape=self.domain_shape, domain_grid2world=self.domain_grid2world, codomain_shape=self.codomain_shape, codomain_grid2world=self.codomain_grid2world, prealign=self.prealign, ) inv.forward = self.forward inv.backward = self.backward inv.is_inverse = True return inv def expand_fields(self, expand_factors, new_shape): """Expands the displacement fields from current shape to new_shape Up-samples the discretization of the displacement fields to be of new_shape shape. Parameters ---------- expand_factors : array, shape (dim,) the factors scaling current spacings (voxel sizes) to spacings in the expanded discretization. 
new_shape : array, shape (dim,) the shape of the arrays holding the up-sampled discretization """ if self.dim == 2: expand_f = vfu.resample_displacement_field_2d else: expand_f = vfu.resample_displacement_field_3d expanded_forward = expand_f(self.forward, expand_factors, new_shape) expanded_backward = expand_f(self.backward, expand_factors, new_shape) expand_factors = np.append(expand_factors, [1]) expanded_grid2world = mult_aff(self.disp_grid2world, np.diag(expand_factors)) expanded_world2grid = npl.inv(expanded_grid2world) self.forward = expanded_forward self.backward = expanded_backward self.disp_shape = new_shape self.disp_grid2world = expanded_grid2world self.disp_world2grid = expanded_world2grid def compute_inversion_error(self): """Inversion error of the displacement fields Estimates the inversion error of the displacement fields by computing statistics of the residual vectors obtained after composing the forward and backward displacement fields. Returns ------- residual : array, shape (R, C) or (S, R, C) the displacement field resulting from composing the forward and backward displacement fields of this transformation (the residual should be zero for a perfect diffeomorphism) stats : array, shape (3,) statistics from the norms of the vectors of the residual displacement field: maximum, mean and standard deviation Notes ----- Since the forward and backward displacement fields have the same discretization, the final composition is given by comp[i] = forward[ i + Dinv * backward[i]] where Dinv is the space-to-grid transformation of the displacement fields """ Dinv = self.disp_world2grid if self.dim == 2: compose_f = vfu.compose_vector_fields_2d else: compose_f = vfu.compose_vector_fields_3d residual, stats = compose_f(self.backward, self.forward, None, Dinv, 1.0, None) return np.asarray(residual), np.asarray(stats) def shallow_copy(self): """Shallow copy of this DiffeomorphicMap instance Creates a shallow copy of this diffeomorphic map (the arrays are not copied but just referenced) Returns ------- new_map : DiffeomorphicMap object the shallow copy of this diffeomorphic map """ new_map = DiffeomorphicMap( dim=self.dim, disp_shape=self.disp_shape, disp_grid2world=self.disp_grid2world, domain_shape=self.domain_shape, domain_grid2world=self.domain_grid2world, codomain_shape=self.codomain_shape, codomain_grid2world=self.codomain_grid2world, prealign=self.prealign, ) new_map.forward = self.forward new_map.backward = self.backward new_map.is_inverse = self.is_inverse return new_map def warp_endomorphism(self, phi): """Composition of this DiffeomorphicMap with a given endomorphism Creates a new DiffeomorphicMap C with the same properties as self and composes its displacement fields with phi's corresponding fields. The resulting diffeomorphism is of the form C(x) = phi(self(x)) with inverse C^{-1}(y) = self^{-1}(phi^{-1}(y)). We assume that phi is an endomorphism with the same discretization and domain affine as self to ensure that the composition inherits self's properties (we also assume that the pre-aligning matrix of phi is None or identity). 
        Parameters
        ----------
        phi : DiffeomorphicMap object
            the endomorphism to be warped by this diffeomorphic map

        Returns
        -------
        composition : the composition of this diffeomorphic map with the
            endomorphism given as input

        Notes
        -----
        The problem with our current representation of a DiffeomorphicMap is
        that the set of diffeomorphisms that can be represented this way (a
        pre-aligning matrix followed by a non-linear endomorphism given as a
        displacement field) is not closed under the composition operation.

        Supporting a general DiffeomorphicMap class, closed under
        composition, may be extremely costly computationally, and the kind
        of transformations we actually need for Avants' mid-point algorithm
        (SyN) are much simpler.
        """
        # Compose the forward deformation fields
        d1 = self.get_forward_field()
        d2 = phi.get_forward_field()
        d1_inv = self.get_backward_field()
        d2_inv = phi.get_backward_field()

        premult_disp = self.disp_world2grid
        if self.dim == 2:
            compose_f = vfu.compose_vector_fields_2d
        else:
            compose_f = vfu.compose_vector_fields_3d

        forward, stats = compose_f(d1, d2, None, premult_disp, 1.0, None)
        (
            backward,
            stats,
        ) = compose_f(d2_inv, d1_inv, None, premult_disp, 1.0, None)

        composition = self.shallow_copy()
        composition.forward = forward
        composition.backward = backward
        return composition

    def get_simplified_transform(self):
        """Constructs a simplified version of this Diffeomorphic Map

        The simplified version incorporates the pre-align transform, as well
        as the domain and codomain affine transforms, into the displacement
        field. The resulting transformation may be regarded as operating on
        the image spaces given by the domain and codomain discretization. As
        a result, self.prealign, self.disp_grid2world, self.domain_grid2world
        and self.codomain_grid2world will be None (denoting Identity) in the
        resulting diffeomorphic map.
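        Examples
        --------
        Illustrative sketch only (hypothetical grid size and affines, not
        from the original documentation): the simplified map keeps the same
        displacement grid but folds every affine into the fields, so the
        remaining matrices are identity.

        >>> import numpy as np
        >>> dmap = DiffeomorphicMap(3, (16, 16, 16), prealign=np.eye(4))
        >>> dmap.allocate()
        >>> simple = dmap.get_simplified_transform()
        >>> simple.prealign is None and simple.domain_grid2world is None
        True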
""" if self.dim == 2: simplify_f = vfu.simplify_warp_function_2d else: simplify_f = vfu.simplify_warp_function_3d # Simplify the forward transform D = self.domain_grid2world P = self.prealign Rinv = self.disp_world2grid Cinv = self.codomain_world2grid # this is the matrix which we need to multiply the voxel coordinates # to interpolate on the forward displacement field ("in"side the # 'forward' brackets in the expression above) affine_idx_in = mult_aff(Rinv, mult_aff(P, D)) # this is the matrix which we need to multiply the voxel coordinates # to add to the displacement ("out"side the 'forward' brackets in the # expression above) affine_idx_out = mult_aff(Cinv, mult_aff(P, D)) # this is the matrix which we need to multiply the displacement vector # prior to adding to the transformed input point affine_disp = Cinv new_forward = simplify_f( self.forward, affine_idx_in, affine_idx_out, affine_disp, self.domain_shape ) # Simplify the backward transform C = self.codomain_world2grid Pinv = self.prealign_inv Dinv = self.domain_world2grid affine_idx_in = mult_aff(Rinv, C) affine_idx_out = mult_aff(Dinv, mult_aff(Pinv, C)) affine_disp = mult_aff(Dinv, Pinv) new_backward = simplify_f( self.backward, affine_idx_in, affine_idx_out, affine_disp, self.codomain_shape, ) simplified = DiffeomorphicMap( dim=self.dim, disp_shape=self.disp_shape, disp_grid2world=None, domain_shape=self.domain_shape, domain_grid2world=None, codomain_shape=self.codomain_shape, codomain_grid2world=None, prealign=None, ) simplified.forward = new_forward simplified.backward = new_backward return simplified class DiffeomorphicRegistration(metaclass=abc.ABCMeta): @warning_for_keywords() def __init__(self, *, metric=None): """Diffeomorphic Registration This abstract class defines the interface to be implemented by any optimization algorithm for diffeomorphic registration. Parameters ---------- metric : SimilarityMetric object the object measuring the similarity of the two images. The registration algorithm will minimize (or maximize) the provided similarity. """ if metric is None: raise ValueError("The metric cannot be None") self.metric = metric self.dim = metric.dim def set_level_iters(self, level_iters): """Sets the number of iterations at each pyramid level Establishes the maximum number of iterations to be performed at each level of the Gaussian pyramid, similar to ANTS. Parameters ---------- level_iters : list the number of iterations at each level of the Gaussian pyramid. level_iters[0] corresponds to the finest level, level_iters[n-1] the coarsest, where n is the length of the list """ self.levels = len(level_iters) if level_iters else 0 self.level_iters = level_iters @abc.abstractmethod def optimize(self): """Starts the metric optimization This is the main function each specialized class derived from this must implement. Upon completion, the deformation field must be available from the forward transformation model. """ @abc.abstractmethod def get_map(self): """ Returns the resulting diffeomorphic map after optimization """ class SymmetricDiffeomorphicRegistration(DiffeomorphicRegistration): @warning_for_keywords() def __init__( self, metric, *, level_iters=None, step_length=0.25, ss_sigma_factor=0.2, opt_tol=1e-5, inv_iter=20, inv_tol=1e-3, callback=None, ): """Symmetric Diffeomorphic Registration (SyN) Algorithm Performs the multi-resolution optimization algorithm for non-linear registration using a given similarity metric. 
Parameters ---------- metric : SimilarityMetric object the metric to be optimized level_iters : list of int the number of iterations at each level of the Gaussian Pyramid (the length of the list defines the number of pyramid levels to be used) opt_tol : float the optimization will stop when the estimated derivative of the energy profile w.r.t. time falls below this threshold inv_iter : int the number of iterations to be performed by the displacement field inversion algorithm step_length : float the length of the maximum displacement vector of the update displacement field at each iteration ss_sigma_factor : float parameter of the scale-space smoothing kernel. For example, the std. dev. of the kernel will be factor*(2^i) in the isotropic case where i = 0, 1, ..., n_scales is the scale inv_tol : float the displacement field inversion algorithm will stop iterating when the inversion error falls below this threshold callback : function(SymmetricDiffeomorphicRegistration) a function receiving a SymmetricDiffeomorphicRegistration object to be called after each iteration (this optimizer will call this function passing self as parameter) """ super(SymmetricDiffeomorphicRegistration, self).__init__(metric=metric) if level_iters is None: level_iters = [100, 100, 25] if len(level_iters) == 0: raise ValueError("The iterations list cannot be empty") self.set_level_iters(level_iters) self.step_length = step_length self.ss_sigma_factor = ss_sigma_factor self.opt_tol = opt_tol self.inv_tol = inv_tol self.inv_iter = inv_iter self.energy_window = 12 self.energy_list = [] self.full_energy_profile = [] self.verbosity = VerbosityLevels.STATUS self.callback = callback self.moving_ss = None self.static_ss = None self.static_direction = None self.moving_direction = None self.mask0 = metric.mask0 def update( self, current_displacement, new_displacement, disp_world2grid, time_scaling ): """Composition of the current displacement field with the given field Interpolates new displacement at the locations defined by current_displacement. Equivalently, computes the composition C of the given displacement fields as C(x) = B(A(x)), where A is current_displacement and B is new_displacement. This function is intended to be used with deformation fields of the same sampling (e.g. to be called by a registration algorithm). Parameters ---------- current_displacement : array, shape (R', C', 2) or (S', R', C', 3) the displacement field defining where to interpolate new_displacement new_displacement : array, shape (R, C, 2) or (S, R, C, 3) the displacement field to be warped by current_displacement disp_world2grid : array, shape (dim+1, dim+1) the space-to-grid transform associated with the displacements' grid (we assume that both displacements are discretized over the same grid) time_scaling : float scaling factor applied to d2. The effect may be interpreted as moving d1 displacements along a factor (`time_scaling`) of d2. 
        Returns
        -------
        updated : array, shape (the same as new_displacement)
            the warped displacement field
        mean_norm : the mean norm of all vectors in current_displacement
        """
        sq_field = np.sum((np.array(current_displacement) ** 2), -1)
        mean_norm = np.sqrt(sq_field).mean()
        # We assume that both displacement fields have the same
        # grid2world transform, which implies premult_index=Identity
        # and premult_disp is the world2grid transform associated with
        # the displacements' grid
        self.compose(
            current_displacement,
            new_displacement,
            None,
            disp_world2grid,
            time_scaling,
            current_displacement,
        )

        return np.array(current_displacement), np.array(mean_norm)

    def get_map(self):
        """Return the resulting diffeomorphic map.

        Returns the DiffeomorphicMap registering the moving image towards the
        static image.
        """
        if not hasattr(self, "static_to_ref"):
            msg = "Diffeomorphic map cannot be obtained without running "
            msg += "the optimizer. Please call "
            msg += "SymmetricDiffeomorphicRegistration.optimize() first."
            raise ValueError(msg)
        return self.static_to_ref

    def _connect_functions(self):
        """Assign the methods to be called according to the image dimension

        Assigns the appropriate functions to be called for displacement field
        inversion, Gaussian pyramid, and affine / dense deformation
        composition according to the dimension of the input images, e.g.
        2D or 3D.
        """
        if self.dim == 2:
            self.invert_vector_field = vfu.invert_vector_field_fixed_point_2d
            self.compose = vfu.compose_vector_fields_2d
        else:
            self.invert_vector_field = vfu.invert_vector_field_fixed_point_3d
            self.compose = vfu.compose_vector_fields_3d

    def _init_optimizer(
        self, static, moving, static_grid2world, moving_grid2world, prealign
    ):
        """Initializes the registration optimizer

        Initializes the optimizer by computing the scale space of the input
        images and allocating the required memory for the transformation
        models at the coarsest scale.

        Parameters
        ----------
        static : array, shape (S, R, C) or (R, C)
            the image to be used as reference during optimization.
            The displacement fields will have the same discretization as
            the static image.
        moving : array, shape (S, R, C) or (R, C)
            the image to be used as "moving" during optimization. Since the
            deformation fields' discretization is the same as the static
            image, it is necessary to pre-align the moving image to ensure
            its domain lies inside the domain of the deformation fields.
This is assumed to be accomplished by "pre-aligning" the moving image towards the static using an affine transformation given by the 'prealign' matrix static_grid2world : array, shape (dim+1, dim+1) the voxel-to-space transformation associated to the static image moving_grid2world : array, shape (dim+1, dim+1) the voxel-to-space transformation associated to the moving image prealign : array, shape (dim+1, dim+1) the affine transformation (operating on the physical space) pre-aligning the moving image towards the static """ self._connect_functions() # Extract information from affine matrices to create the scale space static_direction, static_spacing = get_direction_and_spacings( static_grid2world, self.dim ) moving_direction, moving_spacing = get_direction_and_spacings( moving_grid2world, self.dim ) # the images' directions don't change with scale self.static_direction = np.eye(self.dim + 1) self.moving_direction = np.eye(self.dim + 1) self.static_direction[: self.dim, : self.dim] = static_direction self.moving_direction[: self.dim, : self.dim] = moving_direction # Build the scale space of the input images if self.verbosity >= VerbosityLevels.DIAGNOSE: logger.info(f"Applying zero mask: {self.mask0}") if self.verbosity >= VerbosityLevels.STATUS: logger.info( f"Creating scale space from the moving image. Levels: {self.levels}. " f"Sigma factor: {self.ss_sigma_factor:f}." ) self.moving_ss = ScaleSpace( moving, self.levels, image_grid2world=moving_grid2world, input_spacing=moving_spacing, sigma_factor=self.ss_sigma_factor, mask0=self.mask0, ) if self.verbosity >= VerbosityLevels.STATUS: logger.info( f"Creating scale space from the static image. Levels: {self.levels}. " f"Sigma factor: {self.ss_sigma_factor:f}." ) self.static_ss = ScaleSpace( static, self.levels, image_grid2world=static_grid2world, input_spacing=static_spacing, sigma_factor=self.ss_sigma_factor, mask0=self.mask0, ) if self.verbosity >= VerbosityLevels.DEBUG: logger.info("Moving scale space:") for level in range(self.levels): self.moving_ss.print_level(level) logger.info("Static scale space:") for level in range(self.levels): self.static_ss.print_level(level) # Get the properties of the coarsest level from the static image. These # properties will be taken as the reference discretization. disp_shape = self.static_ss.get_domain_shape(self.levels - 1) disp_grid2world = self.static_ss.get_affine(self.levels - 1) # The codomain discretization of both diffeomorphic maps is # precisely the discretization of the static image codomain_shape = static.shape codomain_grid2world = static_grid2world # The forward model transforms points from the static image # to points on the reference (which is the static as well). So the # domain properties are taken from the static image. Since its the same # as the reference, we don't need to pre-align. domain_shape = static.shape domain_grid2world = static_grid2world self.static_to_ref = DiffeomorphicMap( dim=self.dim, disp_shape=disp_shape, disp_grid2world=disp_grid2world, domain_shape=domain_shape, domain_grid2world=domain_grid2world, codomain_shape=codomain_shape, codomain_grid2world=codomain_grid2world, prealign=None, ) self.static_to_ref.allocate() # The backward model transforms points from the moving image # to points on the reference (which is the static). So the input # properties are taken from the moving image, and we need to pre-align # points on the moving physical space to the reference physical space # by applying the inverse of pre-align. 
This is done this way to make # it clear for the user: the pre-align matrix is usually obtained by # doing affine registration of the moving image towards the static # image, which results in a matrix transforming points in the static # physical space to points in the moving physical space prealign_inv = None if prealign is None else npl.inv(prealign) domain_shape = moving.shape domain_grid2world = moving_grid2world self.moving_to_ref = DiffeomorphicMap( dim=self.dim, disp_shape=disp_shape, disp_grid2world=disp_grid2world, domain_shape=domain_shape, domain_grid2world=domain_grid2world, codomain_shape=codomain_shape, codomain_grid2world=codomain_grid2world, prealign=prealign_inv, ) self.moving_to_ref.allocate() def _end_optimizer(self): """Frees the resources allocated during initialization""" del self.moving_ss del self.static_ss def _iterate(self): """Performs one symmetric iteration Performs one iteration of the SyN algorithm: 1.Compute forward 2.Compute backward 3.Update forward 4.Update backward 5.Compute inverses 6.Invert the inverses Returns ------- der : float the derivative of the energy profile, computed by fitting a quadratic function to the energy values at the latest T iterations, where T = self.energy_window. If the current iteration is less than T then np.inf is returned instead. """ # Acquire current resolution information from scale spaces current_moving = self.moving_ss.get_image(self.current_level) current_static = self.static_ss.get_image(self.current_level) current_disp_shape = self.static_ss.get_domain_shape(self.current_level) current_disp_grid2world = self.static_ss.get_affine(self.current_level) current_disp_world2grid = self.static_ss.get_affine_inv(self.current_level) current_disp_spacing = self.static_ss.get_spacing(self.current_level) # Warp the input images (smoothed to the current scale) to the common # (reference) space at the current resolution wstatic = self.static_to_ref.transform_inverse( current_static, interpolation="linear", image_world2grid=None, out_shape=current_disp_shape, out_grid2world=current_disp_grid2world, ) wmoving = self.moving_to_ref.transform_inverse( current_moving, interpolation="linear", image_world2grid=None, out_shape=current_disp_shape, out_grid2world=current_disp_grid2world, ) # Pass both images to the metric. 
Now both images are sampled on the
        # reference grid (equal to the static image's grid) and the direction
        # doesn't change across scales
        self.metric.set_moving_image(
            wmoving,
            current_disp_grid2world,
            current_disp_spacing,
            self.static_direction,
        )
        self.metric.use_moving_image_dynamics(
            current_moving, self.moving_to_ref.inverse()
        )

        self.metric.set_static_image(
            wstatic,
            current_disp_grid2world,
            current_disp_spacing,
            self.static_direction,
        )
        self.metric.use_static_image_dynamics(
            current_static, self.static_to_ref.inverse()
        )

        # Initialize the metric for a new iteration
        self.metric.initialize_iteration()
        if self.callback is not None:
            self.callback(self, RegistrationStages.ITER_START)

        # Compute the forward step (to be used to update the forward transform)
        fw_step = np.array(self.metric.compute_forward())

        # set zero displacements at the boundary
        fw_step = self.__set_no_boundary_displacement(fw_step)

        # Normalize the forward step
        nrm = np.sqrt(np.sum((fw_step / current_disp_spacing) ** 2, -1)).max()
        if nrm > 0:
            fw_step /= nrm

        # Add to current total field
        self.static_to_ref.forward, md_forward = self.update(
            self.static_to_ref.forward,
            fw_step,
            current_disp_world2grid,
            self.step_length,
        )
        del fw_step

        # Keep track of the forward energy
        fw_energy = self.metric.get_energy()

        # Compute the backward step (to be used to update the backward
        # transform)
        bw_step = np.array(self.metric.compute_backward())

        # set zero displacements at the boundary
        bw_step = self.__set_no_boundary_displacement(bw_step)

        # Normalize the backward step
        nrm = np.sqrt(np.sum((bw_step / current_disp_spacing) ** 2, -1)).max()
        if nrm > 0:
            bw_step /= nrm

        # Add to current total field
        self.moving_to_ref.forward, md_backward = self.update(
            self.moving_to_ref.forward,
            bw_step,
            current_disp_world2grid,
            self.step_length,
        )
        del bw_step

        # Keep track of the backward energy
        bw_energy = self.metric.get_energy()

        der = np.inf
        n_iter = len(self.energy_list)
        if len(self.energy_list) >= self.energy_window:
            der = self._get_energy_derivative()

        if self.verbosity >= VerbosityLevels.DIAGNOSE:
            ch = "-" if np.isnan(der) else der
            logger.info(
                "%d:\t%0.6f\t%0.6f\t%0.6f\t%s"
                % (n_iter, fw_energy, bw_energy, fw_energy + bw_energy, ch)
            )

        self.energy_list.append(fw_energy + bw_energy)

        self.__invert_models(current_disp_world2grid, current_disp_spacing)

        # Free resources no longer needed to compute the forward and backward
        # steps
        if self.callback is not None:
            self.callback(self, RegistrationStages.ITER_END)
        self.metric.free_iteration()

        return der

    def __set_no_boundary_displacement(self, step):
        """Set zero displacements at the boundary.

        Parameters
        ----------
        step : array, ndim 2 or 3
            displacement field

        Returns
        -------
        step : array, ndim 2 or 3
            displacement field
        """
        step[0, ...] = 0
        step[:, 0, ...] = 0
        step[-1, ...] = 0
        step[:, -1, ...] = 0
        if self.dim == 3:
            step[:, :, 0, ...] = 0
            step[:, :, -1, ...] = 0
        return step

    def __invert_models(self, current_disp_world2grid, current_disp_spacing):
        """Invert the static-to-reference and moving-to-reference models in
        both directions.
Parameters ---------- current_disp_world2grid : array, shape (3, 3) or (4, 4) the space-to-grid transformation associated to the displacement field d (transforming physical space coordinates to voxel coordinates of the displacement field grid) current_disp_spacing :array, shape (2,) or (3,) the spacing between voxels (voxel size along each axis) """ # Invert the forward model's forward field self.static_to_ref.backward = np.array( self.invert_vector_field( self.static_to_ref.forward, current_disp_world2grid, current_disp_spacing, self.inv_iter, self.inv_tol, start=self.static_to_ref.backward, ) ) # Invert the backward model's forward field self.moving_to_ref.backward = np.array( self.invert_vector_field( self.moving_to_ref.forward, current_disp_world2grid, current_disp_spacing, self.inv_iter, self.inv_tol, start=self.moving_to_ref.backward, ) ) # Invert the forward model's backward field self.static_to_ref.forward = np.array( self.invert_vector_field( self.static_to_ref.backward, current_disp_world2grid, current_disp_spacing, self.inv_iter, self.inv_tol, start=self.static_to_ref.forward, ) ) # Invert the backward model's backward field self.moving_to_ref.forward = np.array( self.invert_vector_field( self.moving_to_ref.backward, current_disp_world2grid, current_disp_spacing, self.inv_iter, self.inv_tol, start=self.moving_to_ref.forward, ) ) def _approximate_derivative_direct(self, x, y): """Derivative of the degree-2 polynomial fit of the given x, y pairs Directly computes the derivative of the least-squares-fit quadratic function estimated from (x[...],y[...]) pairs. Parameters ---------- x : array, shape (n,) increasing array representing the x-coordinates of the points to be fit y : array, shape (n,) array representing the y-coordinates of the points to be fit Returns ------- y0 : float the estimated derivative at x0 = 0.5*len(x) """ x = np.asarray(x) y = np.asarray(y) X = np.vstack((x**2, x, np.ones_like(x))) XX = X.dot(X.T) b = X.dot(y) beta = npl.solve(XX, b) x0 = 0.5 * len(x) y0 = 2.0 * beta[0] * x0 + beta[1] return y0 def _get_energy_derivative(self): """Approximate derivative of the energy profile Returns the derivative of the estimated energy as a function of "time" (iterations) at the last iteration """ n_iter = len(self.energy_list) if n_iter < self.energy_window: raise ValueError("Not enough data to fit the energy profile") x = range(self.energy_window) y = self.energy_list[(n_iter - self.energy_window) : n_iter] ss = sum(y) if not ss == 0: # avoid division by zero ss = -ss if ss > 0 else ss y = [v / ss for v in y] der = self._approximate_derivative_direct(x, y) return der def _optimize(self): """Starts the optimization The main multi-scale symmetric optimization algorithm """ self.full_energy_profile = [] if self.callback is not None: self.callback(self, RegistrationStages.OPT_START) for level in range(self.levels - 1, -1, -1): if self.verbosity >= VerbosityLevels.STATUS: logger.info(f"Optimizing level {level}") self.current_level = level self.metric.set_levels_below(self.levels - level) self.metric.set_levels_above(level) if level < self.levels - 1: expand_factors = self.static_ss.get_expand_factors(level + 1, level) new_shape = self.static_ss.get_domain_shape(level) self.static_to_ref.expand_fields(expand_factors, new_shape) self.moving_to_ref.expand_fields(expand_factors, new_shape) self.niter = 0 self.energy_list = [] derivative = np.inf if self.callback is not None: self.callback(self, RegistrationStages.SCALE_START) while (self.niter < self.level_iters[self.levels - 1 - 
level]) and ( self.opt_tol < derivative ): derivative = self._iterate() self.niter += 1 self.full_energy_profile.extend(self.energy_list) if self.callback is not None: self.callback(self, RegistrationStages.SCALE_END) # Reporting mean and std in stats[1] and stats[2] residual, stats = self.static_to_ref.compute_inversion_error() if self.verbosity >= VerbosityLevels.DIAGNOSE: logger.info( f"Static-Reference Residual error: {stats[1]:0.6f} ({stats[2]:0.6f})" ) residual, stats = self.moving_to_ref.compute_inversion_error() if self.verbosity >= VerbosityLevels.DIAGNOSE: logger.info( f"Moving-Reference Residual error :{stats[1]:0.6f} ({stats[2]:0.6f})" ) # Compose the two partial transformations self.static_to_ref = self.moving_to_ref.warp_endomorphism( self.static_to_ref.inverse() ).inverse() # Report mean and std for the composed deformation field residual, stats = self.static_to_ref.compute_inversion_error() if self.verbosity >= VerbosityLevels.DIAGNOSE: logger.info(f"Final residual error: {stats[1]:0.6f} ({stats[2]:0.6f})") if self.callback is not None: self.callback(self, RegistrationStages.OPT_END) @warning_for_keywords() def optimize( self, static, moving, *, static_grid2world=None, moving_grid2world=None, prealign=None, ): """ Starts the optimization Parameters ---------- static : array, shape (S, R, C) or (R, C) the image to be used as reference during optimization. The displacement fields will have the same discretization as the static image. moving : array, shape (S, R, C) or (R, C) the image to be used as "moving" during optimization. Since the deformation fields' discretization is the same as the static image, it is necessary to pre-align the moving image to ensure its domain lies inside the domain of the deformation fields. This is assumed to be accomplished by "pre-aligning" the moving image towards the static using an affine transformation given by the 'prealign' matrix static_grid2world : array, shape (dim+1, dim+1) the voxel-to-space transformation associated to the static image moving_grid2world : array, shape (dim+1, dim+1) the voxel-to-space transformation associated to the moving image prealign : array, shape (dim+1, dim+1) the affine transformation (operating on the physical space) pre-aligning the moving image towards the static Returns ------- static_to_ref : DiffeomorphicMap object the diffeomorphic map that brings the moving image towards the static one in the forward direction (i.e. by calling static_to_ref.transform) and the static image towards the moving one in the backward direction (i.e. by calling static_to_ref.transform_inverse). 
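        Examples
        --------
        A minimal sketch, assuming `static` and `moving` are roughly
        pre-aligned 3D arrays (placeholder names, not defined here; the
        iteration counts are arbitrary):

        >>> from dipy.align.metrics import CCMetric
        >>> metric = CCMetric(3)
        >>> sdr = SymmetricDiffeomorphicRegistration(metric, level_iters=[10, 10, 5])
        >>> mapping = sdr.optimize(static, moving)  # doctest: +SKIP
        >>> warped_moving = mapping.transform(moving)  # doctest: +SKIP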
""" if self.verbosity >= VerbosityLevels.DEBUG: if prealign is not None: logger.info(f"Pre-align: {prealign}") self._init_optimizer( static.astype(floating), moving.astype(floating), static_grid2world, moving_grid2world, prealign, ) self._optimize() self._end_optimizer() self.static_to_ref.forward = np.array(self.static_to_ref.forward) self.static_to_ref.backward = np.array(self.static_to_ref.backward) return self.static_to_ref dipy-1.11.0/dipy/align/meson.build000066400000000000000000000021031476546756600167660ustar00rootroot00000000000000cython_sources = ['bundlemin', 'crosscorr', 'expectmax', 'parzenhist', 'sumsqdiff', 'transforms', 'vector_fields',] cython_headers = [ 'fused_types.pxd', 'transforms.pxd', 'vector_fields.pxd', ] foreach ext: cython_sources extra_args = [] # Todo: check why it is failing to compile with transforms.pxd # C attributes cannot be added in implementation part of extension type # defined in a pxd # if fs.exists(ext + '.pxd') # extra_args += ['--depfile', meson.current_source_dir() +'/'+ ext + '.pxd', ] # endif py3.extension_module(ext, cython_gen.process(ext + '.pyx', extra_args: extra_args), c_args: cython_c_args, include_directories: [incdir_numpy, inc_local], dependencies: [omp], install: true, subdir: 'dipy/align' ) endforeach python_sources = ['__init__.py', '_public.py', 'cpd.py', 'imaffine.py', 'imwarp.py', 'metrics.py', 'reslice.py', 'scalespace.py', 'streamlinear.py', 'streamwarp.py', ] py3.install_sources( python_sources + cython_headers, pure: false, subdir: 'dipy/align' ) subdir('tests')dipy-1.11.0/dipy/align/metrics.py000066400000000000000000001255741476546756600166660ustar00rootroot00000000000000"""Metrics for Symmetric Diffeomorphic Registration""" import abc import numpy as np from numpy import gradient from scipy import ndimage from dipy.align import ( crosscorr as cc, expectmax as em, floating, sumsqdiff as ssd, vector_fields as vfu, ) from dipy.testing.decorators import warning_for_keywords class SimilarityMetric: def __init__(self, dim): r"""Similarity Metric abstract class A similarity metric is in charge of keeping track of the numerical value of the similarity (or distance) between the two given images. It also computes the update field for the forward and inverse displacement fields to be used in a gradient-based optimization algorithm. Note that this metric does not depend on any transformation (affine or non-linear) so it assumes the static and moving images are already warped Parameters ---------- dim : int (either 2 or 3) the dimension of the image domain """ self.dim = dim self.levels_above = None self.levels_below = None self.static_image = None self.static_affine = None self.static_spacing = None self.static_direction = None self.moving_image = None self.moving_affine = None self.moving_spacing = None self.moving_direction = None self.mask0 = False def set_levels_below(self, levels): r"""Informs the metric how many pyramid levels are below the current one Informs this metric the number of pyramid levels below the current one. The metric may change its behavior (e.g. number of inner iterations) accordingly Parameters ---------- levels : int the number of levels below the current Gaussian Pyramid level """ self.levels_below = levels def set_levels_above(self, levels): r"""Informs the metric how many pyramid levels are above the current one Informs this metric the number of pyramid levels above the current one. The metric may change its behavior (e.g. 
number of inner iterations) accordingly

        Parameters
        ----------
        levels : int
            the number of levels above the current Gaussian Pyramid level
        """
        self.levels_above = levels

    def set_static_image(
        self, static_image, static_affine, static_spacing, static_direction
    ):
        r"""Sets the static image being compared against the moving one.

        Sets the static image. The default behavior (of this abstract class)
        is simply to assign the reference to an attribute, but
        generalizations of the metric may need to perform other operations.

        Parameters
        ----------
        static_image : array, shape (R, C) or (S, R, C)
            the static image
        """
        self.static_image = static_image
        self.static_affine = static_affine
        self.static_spacing = static_spacing
        self.static_direction = static_direction

    def use_static_image_dynamics(self, original_static_image, transformation):
        r"""This is called by the optimizer just after setting the static image.

        This method allows the metric to compute any useful information from
        knowing how the current static image was generated (as the
        transformation of an original static image). This method is called
        by the optimizer just after it sets the static image. Transformation
        will be an instance of DiffeomorphicMap or None if the
        original_static_image equals self.static_image.

        Parameters
        ----------
        original_static_image : array, shape (R, C) or (S, R, C)
            original image from which the current static image was generated
        transformation : DiffeomorphicMap object
            the transformation that was applied to the original image to
            generate the current static image
        """
        pass

    def set_moving_image(
        self, moving_image, moving_affine, moving_spacing, moving_direction
    ):
        r"""Sets the moving image being compared against the static one.

        Sets the moving image. The default behavior (of this abstract class)
        is simply to assign the reference to an attribute, but
        generalizations of the metric may need to perform other operations.

        Parameters
        ----------
        moving_image : array, shape (R, C) or (S, R, C)
            the moving image
        """
        self.moving_image = moving_image
        self.moving_affine = moving_affine
        self.moving_spacing = moving_spacing
        self.moving_direction = moving_direction

    def use_moving_image_dynamics(self, original_moving_image, transformation):
        r"""This is called by the optimizer just after setting the moving image

        This method allows the metric to compute any useful information from
        knowing how the current moving image was generated (as the
        transformation of an original moving image). This method is called
        by the optimizer just after it sets the moving image. Transformation
        will be an instance of DiffeomorphicMap or None if the
        original_moving_image equals self.moving_image.

        Parameters
        ----------
        original_moving_image : array, shape (R, C) or (S, R, C)
            original image from which the current moving image was generated
        transformation : DiffeomorphicMap object
            the transformation that was applied to the original image to
            generate the current moving image
        """
        pass

    @abc.abstractmethod
    def initialize_iteration(self):
        r"""Prepares the metric to compute one displacement field iteration.

        This method will be called before any compute_forward or
        compute_backward call, which allows the Metric to pre-compute any
        useful information for speeding up the update computations. This
        initialization was needed in ANTS because the updates are called
        once per voxel. In Python this is impractical, though.
        """

    @abc.abstractmethod
    def free_iteration(self):
        r"""Releases the resources no longer needed by the metric

        This method is called by the RegistrationOptimizer after the required
        iterations have been computed (forward and / or backward) so that the
        SimilarityMetric can safely delete any data it computed as part of
        the initialization.
        """

    @abc.abstractmethod
    def compute_forward(self):
        r"""Computes one step bringing the moving image towards the static.

        Computes the forward update field to register the moving image
        towards the static image in a gradient-based optimization algorithm
        """

    @abc.abstractmethod
    def compute_backward(self):
        r"""Computes one step bringing the static image towards the moving.

        Computes the backward update field to register the static image
        towards the moving image in a gradient-based optimization algorithm
        """

    @abc.abstractmethod
    def get_energy(self):
        r"""Numerical value assigned by this metric to the current image pair

        Must return the numeric value of the similarity between the given
        static and moving images
        """


class CCMetric(SimilarityMetric):
    @warning_for_keywords()
    def __init__(self, dim, *, sigma_diff=2.0, radius=4):
        r"""Normalized Cross-Correlation Similarity metric.

        Parameters
        ----------
        dim : int (either 2 or 3)
            the dimension of the image domain
        sigma_diff : the standard deviation of the Gaussian smoothing kernel
            to be applied to the update field at each iteration
        radius : int
            the radius of the squared (cubic) neighborhood at each voxel to
            be considered to compute the cross correlation
        """
        super(CCMetric, self).__init__(dim)
        self.sigma_diff = sigma_diff
        self.radius = radius
        self._connect_functions()

    def _connect_functions(self):
        r"""Assign the methods to be called according to the image dimension

        Assigns the appropriate functions to be called for precomputing the
        cross-correlation factors according to the dimension of the input
        images
        """
        if self.dim == 2:
            self.precompute_factors = cc.precompute_cc_factors_2d
            self.compute_forward_step = cc.compute_cc_forward_step_2d
            self.compute_backward_step = cc.compute_cc_backward_step_2d
            self.reorient_vector_field = vfu.reorient_vector_field_2d
        elif self.dim == 3:
            self.precompute_factors = cc.precompute_cc_factors_3d
            self.compute_forward_step = cc.compute_cc_forward_step_3d
            self.compute_backward_step = cc.compute_cc_backward_step_3d
            self.reorient_vector_field = vfu.reorient_vector_field_3d
        else:
            raise ValueError(f"CC Metric not defined for dim {self.dim}")

    def initialize_iteration(self):
        r"""Prepares the metric to compute one displacement field iteration.

        Pre-computes the cross-correlation factors for efficient computation
        of the gradient of the Cross Correlation w.r.t. the displacement
        field. It also pre-computes the image gradients in the physical
        space by re-orienting the gradients in the voxel space using the
        corresponding affine transformations.
        """
        min_size = self.radius * 2 + 1

        def invalid_image_size(image):
            return any(size < min_size for size in image.shape)

        msg = (
            "Each image dimension should be greater than 2 * radius + 1 "
            f"({min_size}). Decrease CCMetric radius ({self.radius}) or "
            "increase your image size (shape=%(shape)s)."
        )

        if invalid_image_size(self.static_image):
            raise ValueError(
                "Static image size is too small. "
                + msg % {"shape": self.static_image.shape}
            )
        if invalid_image_size(self.moving_image):
            raise ValueError(
                "Moving image size is too small. "
                + msg % {"shape": self.moving_image.shape}
            )

        self.factors = self.precompute_factors(
            self.static_image, self.moving_image, self.radius
        )
        self.factors = np.array(self.factors)

        self.gradient_moving = np.empty(
            shape=self.moving_image.shape + (self.dim,), dtype=floating
        )
        for i, grad in enumerate(gradient(self.moving_image)):
            self.gradient_moving[..., i] = grad

        # Convert moving image's gradient field from voxel to physical space
        if self.moving_spacing is not None:
            self.gradient_moving /= self.moving_spacing
        if self.moving_direction is not None:
            self.reorient_vector_field(self.gradient_moving, self.moving_direction)

        self.gradient_static = np.empty(
            shape=self.static_image.shape + (self.dim,), dtype=floating
        )
        for i, grad in enumerate(gradient(self.static_image)):
            self.gradient_static[..., i] = grad

        # Convert static image's gradient field from voxel to physical space
        if self.static_spacing is not None:
            self.gradient_static /= self.static_spacing
        if self.static_direction is not None:
            self.reorient_vector_field(self.gradient_static, self.static_direction)

    def free_iteration(self):
        r"""Frees the resources allocated during initialization"""
        del self.factors
        del self.gradient_moving
        del self.gradient_static

    def compute_forward(self):
        r"""Computes one step bringing the moving image towards the static.

        Computes the update displacement field to be used for registration of
        the moving image towards the static image
        """
        displacement, self.energy = self.compute_forward_step(
            self.gradient_static, self.factors, self.radius
        )
        displacement = np.array(displacement)
        for i in range(self.dim):
            displacement[..., i] = ndimage.gaussian_filter(
                displacement[..., i], self.sigma_diff
            )
        return displacement

    def compute_backward(self):
        r"""Computes one step bringing the static image towards the moving.

        Computes the update displacement field to be used for registration of
        the static image towards the moving image
        """
        displacement, energy = self.compute_backward_step(
            self.gradient_moving, self.factors, self.radius
        )
        displacement = np.array(displacement)
        for i in range(self.dim):
            displacement[..., i] = ndimage.gaussian_filter(
                displacement[..., i], self.sigma_diff
            )
        return displacement

    def get_energy(self):
        r"""Numerical value assigned by this metric to the current image pair

        Returns the Cross Correlation (data term) energy computed at the
        latest iteration
        """
        return self.energy


class EMMetric(SimilarityMetric):
    @warning_for_keywords()
    def __init__(
        self,
        dim,
        *,
        smooth=1.0,
        inner_iter=5,
        q_levels=256,
        double_gradient=True,
        step_type="gauss_newton",
    ):
        r"""Expectation-Maximization Metric

        Similarity metric based on the Expectation-Maximization algorithm to
        handle multi-modal images. The transfer function is modeled as a set
        of hidden random variables that are estimated at each iteration of
        the algorithm.
Parameters ---------- dim : int (either 2 or 3) the dimension of the image domain smooth : float smoothness parameter, the larger the value the smoother the deformation field inner_iter : int number of iterations to be performed at each level of the multi- resolution Gauss-Seidel optimization algorithm (this is not the number of steps per Gaussian Pyramid level, that parameter must be set for the optimizer, not the metric) q_levels : number of quantization levels (equal to the number of hidden variables in the EM algorithm) double_gradient : boolean if True, the gradient of the expected static image under the moving modality will be added to the gradient of the moving image, similarly, the gradient of the expected moving image under the static modality will be added to the gradient of the static image. step_type : string ('gauss_newton', 'demons') the optimization schedule to be used in the multi-resolution Gauss-Seidel optimization algorithm (not used if Demons Step is selected) """ super(EMMetric, self).__init__(dim) self.smooth = smooth self.inner_iter = inner_iter self.q_levels = q_levels self.use_double_gradient = double_gradient self.step_type = step_type self.static_image_mask = None self.moving_image_mask = None self.staticq_means_field = None self.movingq_means_field = None self.movingq_levels = None self.staticq_levels = None self._connect_functions() def _connect_functions(self): r"""Assign the methods to be called according to the image dimension Assigns the appropriate functions to be called for image quantization, statistics computation and multi-resolution iterations according to the dimension of the input images """ if self.dim == 2: self.quantize = em.quantize_positive_2d self.compute_stats = em.compute_masked_class_stats_2d self.reorient_vector_field = vfu.reorient_vector_field_2d elif self.dim == 3: self.quantize = em.quantize_positive_3d self.compute_stats = em.compute_masked_class_stats_3d self.reorient_vector_field = vfu.reorient_vector_field_3d else: raise ValueError(f"EM Metric not defined for dim. {self.dim}") if self.step_type == "demons": self.compute_step = self.compute_demons_step elif self.step_type == "gauss_newton": self.compute_step = self.compute_gauss_newton_step else: raise ValueError(f"Opt. step {self.step_type} not defined") def initialize_iteration(self): r"""Prepares the metric to compute one displacement field iteration. Pre-computes the transfer functions (hidden random variables) and variances of the estimators. Also pre-computes the gradient of both input images. Note that once the images are transformed to the opposite modality, the gradient of the transformed images can be used with the gradient of the corresponding modality in the same fashion as diff-demons does for mono-modality images. If the flag self.use_double_gradient is True these gradients are averaged. 
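Schematically, the static-to-moving transfer function is estimated and applied as follows (a simplified sketch of the computation below; masking, dtype conversions and the symmetric moving-to-static case are omitted)::

    staticq, levels, hist = self.quantize(self.static_image, self.q_levels)
    means, variances = self.compute_stats(sampling_mask, self.moving_image,
                                          self.q_levels, staticq)
    self.staticq_means_field = means[staticq]          # expected image in the moving modality
    self.staticq_sigma_sq_field = variances[staticq]   # and its per-voxel variance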
""" sampling_mask = self.static_image_mask * self.moving_image_mask self.sampling_mask = sampling_mask staticq, self.staticq_levels, hist = self.quantize( self.static_image, self.q_levels ) staticq = np.array(staticq, dtype=np.int32) self.staticq_levels = np.array(self.staticq_levels) staticq_means, staticq_vars = self.compute_stats( sampling_mask, self.moving_image, self.q_levels, staticq ) staticq_means[0] = 0 self.staticq_means = np.array(staticq_means) self.staticq_variances = np.array(staticq_vars) self.staticq_sigma_sq_field = self.staticq_variances[staticq] self.staticq_means_field = self.staticq_means[staticq] self.gradient_moving = np.empty( shape=self.moving_image.shape + (self.dim,), dtype=floating ) for i, grad in enumerate(gradient(self.moving_image)): self.gradient_moving[..., i] = grad # Convert moving image's gradient field from voxel to physical space if self.moving_spacing is not None: self.gradient_moving /= self.moving_spacing if self.moving_direction is not None: self.reorient_vector_field(self.gradient_moving, self.moving_direction) self.gradient_static = np.empty( shape=self.static_image.shape + (self.dim,), dtype=floating ) for i, grad in enumerate(gradient(self.static_image)): self.gradient_static[..., i] = grad # Convert moving image's gradient field from voxel to physical space if self.static_spacing is not None: self.gradient_static /= self.static_spacing if self.static_direction is not None: self.reorient_vector_field(self.gradient_static, self.static_direction) movingq, self.movingq_levels, hist = self.quantize( self.moving_image, self.q_levels ) movingq = np.array(movingq, dtype=np.int32) self.movingq_levels = np.array(self.movingq_levels) movingq_means, movingq_variances = self.compute_stats( sampling_mask, self.static_image, self.q_levels, movingq ) movingq_means[0] = 0 self.movingq_means = np.array(movingq_means) self.movingq_variances = np.array(movingq_variances) self.movingq_sigma_sq_field = self.movingq_variances[movingq] self.movingq_means_field = self.movingq_means[movingq] if self.use_double_gradient: for i, grad in enumerate(gradient(self.staticq_means_field)): self.gradient_moving[..., i] += grad for i, grad in enumerate(gradient(self.movingq_means_field)): self.gradient_static[..., i] += grad def free_iteration(self): r""" Frees the resources allocated during initialization """ del self.sampling_mask del self.staticq_levels del self.movingq_levels del self.staticq_sigma_sq_field del self.staticq_means_field del self.movingq_sigma_sq_field del self.movingq_means_field del self.gradient_moving del self.gradient_static def compute_forward(self): """Computes one step bringing the reference image towards the static. Computes the forward update field to register the moving image towards the static image in a gradient-based optimization algorithm """ return self.compute_step(forward_step=True) def compute_backward(self): r"""Computes one step bringing the static image towards the moving. Computes the update displacement field to be used for registration of the static image towards the moving image """ return self.compute_step(forward_step=False) @warning_for_keywords() def compute_gauss_newton_step(self, *, forward_step=True): r"""Computes the Gauss-Newton energy minimization step Computes the Newton step to minimize this energy, i.e., minimizes the linearized energy function with respect to the regularized displacement field (this step does not require post-smoothing, as opposed to the demons step, which does not include regularization). 
To accelerate convergence we use the multi-grid Gauss-Seidel algorithm proposed by :footcite:t:`Bruhn2005`. Parameters ---------- forward_step : boolean if True, computes the Newton step in the forward direction (warping the moving towards the static image). If False, computes the backward step (warping the static image to the moving image) Returns ------- displacement : array, shape (R, C, 2) or (S, R, C, 3) the Newton step References ---------- .. footbibliography:: """ reference_shape = self.static_image.shape if forward_step: gradient = self.gradient_static delta = self.staticq_means_field - self.moving_image sigma_sq_field = self.staticq_sigma_sq_field else: gradient = self.gradient_moving delta = self.movingq_means_field - self.static_image sigma_sq_field = self.movingq_sigma_sq_field displacement = np.zeros(shape=reference_shape + (self.dim,), dtype=floating) if self.dim == 2: self.energy = v_cycle_2d( self.levels_below, self.inner_iter, delta, sigma_sq_field, gradient, None, self.smooth, displacement, ) else: self.energy = v_cycle_3d( self.levels_below, self.inner_iter, delta, sigma_sq_field, gradient, None, self.smooth, displacement, ) return displacement @warning_for_keywords() def compute_demons_step(self, *, forward_step=True): r"""Demons step for EM metric Parameters ---------- forward_step : boolean if True, computes the Demons step in the forward direction (warping the moving towards the static image). If False, computes the backward step (warping the static image to the moving image) Returns ------- displacement : array, shape (R, C, 2) or (S, R, C, 3) the Demons step """ sigma_reg_2 = np.sum(self.static_spacing**2) / self.dim if forward_step: gradient = self.gradient_static delta_field = self.static_image - self.movingq_means_field sigma_sq_field = self.movingq_sigma_sq_field else: gradient = self.gradient_moving delta_field = self.moving_image - self.staticq_means_field sigma_sq_field = self.staticq_sigma_sq_field if self.dim == 2: step, self.energy = em.compute_em_demons_step_2d( delta_field, sigma_sq_field, gradient, sigma_reg_2, None ) else: step, self.energy = em.compute_em_demons_step_3d( delta_field, sigma_sq_field, gradient, sigma_reg_2, None ) for i in range(self.dim): step[..., i] = ndimage.gaussian_filter(step[..., i], self.smooth) return step def get_energy(self): r"""The numerical value assigned by this metric to the current image pair Returns the EM (data term) energy computed at the largest iteration """ return self.energy def use_static_image_dynamics(self, original_static_image, transformation): r"""This is called by the optimizer just after setting the static image. EMMetric takes advantage of the image dynamics by computing the current static image mask from the original_static_image mask (warped by nearest neighbor interpolation) Parameters ---------- original_static_image : array, shape (R, C) or (S, R, C) the original static image from which the current static image was generated; the current static image is the one that was provided via 'set_static_image(...)', which may not be the same as the original static image but a warped version of it (even the static image changes during Symmetric Normalization, not only the moving one).
transformation : DiffeomorphicMap object the transformation that was applied to the original_static_image to generate the current static image """ self.static_image_mask = (original_static_image > 0).astype(np.int32) if transformation is None: return shape = np.array(self.static_image.shape, dtype=np.int32) affine = self.static_affine self.static_image_mask = transformation.transform( self.static_image_mask, interpolation="nearest", image_world2grid=None, out_shape=shape, out_grid2world=affine, ) def use_moving_image_dynamics(self, original_moving_image, transformation): r"""This is called by the optimizer just after setting the moving image. EMMetric takes advantage of the image dynamics by computing the current moving image mask from the original_moving_image mask (warped by nearest neighbor interpolation) Parameters ---------- original_moving_image : array, shape (R, C) or (S, R, C) the original moving image from which the current moving image was generated; the current moving image is the one that was provided via 'set_moving_image(...)', which may not be the same as the original moving image but a warped version of it. transformation : DiffeomorphicMap object the transformation that was applied to the original_moving_image to generate the current moving image """ self.moving_image_mask = (original_moving_image > 0).astype(np.int32) if transformation is None: return shape = np.array(self.moving_image.shape, dtype=np.int32) affine = self.moving_affine self.moving_image_mask = transformation.transform( self.moving_image_mask, interpolation="nearest", image_world2grid=None, out_shape=shape, out_grid2world=affine, ) class SSDMetric(SimilarityMetric): @warning_for_keywords() def __init__(self, dim, *, smooth=4, inner_iter=10, step_type="demons"): r"""Sum of Squared Differences (SSD) Metric Similarity metric for (mono-modal) nonlinear image registration defined by the sum of squared differences (SSD) Parameters ---------- dim : int (either 2 or 3) the dimension of the image domain smooth : float smoothness parameter, the larger the value the smoother the deformation field inner_iter : int number of iterations to be performed at each level of the multi- resolution Gauss-Seidel optimization algorithm (this is not the number of steps per Gaussian Pyramid level, that parameter must be set for the optimizer, not the metric) step_type : string the displacement field step to be computed when 'compute_forward' and 'compute_backward' are called. Either 'demons' or 'gauss_newton' """ super(SSDMetric, self).__init__(dim) self.smooth = smooth self.inner_iter = inner_iter self.step_type = step_type self.levels_below = 0 self._connect_functions() def _connect_functions(self): r"""Assign the methods to be called according to the image dimension Assigns the appropriate functions to be called for vector field reorientation and displacement field steps according to the dimension of the input images and the selected type of step (either Demons or Gauss Newton) """ if self.dim == 2: self.reorient_vector_field = vfu.reorient_vector_field_2d elif self.dim == 3: self.reorient_vector_field = vfu.reorient_vector_field_3d else: raise ValueError(f"SSD Metric not defined for dim. {self.dim}") if self.step_type == "gauss_newton": self.compute_step = self.compute_gauss_newton_step elif self.step_type == "demons": self.compute_step = self.compute_demons_step else: raise ValueError(f"Opt. step {self.step_type} not defined") def initialize_iteration(self): r"""Prepares the metric to compute one displacement field iteration.
Pre-computes the gradient of the input images to be used in the computation of the forward and backward steps. """ self.gradient_moving = np.empty( shape=self.moving_image.shape + (self.dim,), dtype=floating ) for i, grad in enumerate(gradient(self.moving_image)): self.gradient_moving[..., i] = grad # Convert moving image's gradient field from voxel to physical space if self.moving_spacing is not None: self.gradient_moving /= self.moving_spacing if self.moving_direction is not None: self.reorient_vector_field(self.gradient_moving, self.moving_direction) self.gradient_static = np.empty( shape=self.static_image.shape + (self.dim,), dtype=floating ) for i, grad in enumerate(gradient(self.static_image)): self.gradient_static[..., i] = grad # Convert static image's gradient field from voxel to physical space if self.static_spacing is not None: self.gradient_static /= self.static_spacing if self.static_direction is not None: self.reorient_vector_field(self.gradient_static, self.static_direction) def compute_forward(self): r"""Computes one step bringing the moving image towards the static. Computes the update displacement field to be used for registration of the moving image towards the static image """ return self.compute_step(forward_step=True) def compute_backward(self): r"""Computes one step bringing the static image towards the moving. Computes the updated displacement field to be used for registration of the static image towards the moving image """ return self.compute_step(forward_step=False) @warning_for_keywords() def compute_gauss_newton_step(self, *, forward_step=True): r"""Computes the Gauss-Newton energy minimization step Minimizes the linearized energy function (Newton step) defined by the sum of squared differences of corresponding pixels of the input images with respect to the displacement field. Parameters ---------- forward_step : boolean if True, computes the Newton step in the forward direction (warping the moving towards the static image). If False, computes the backward step (warping the static image to the moving image) Returns ------- displacement : array, shape = static_image.shape + (3,) if forward_step==True, the forward SSD Gauss-Newton step, else, the backward step """ reference_shape = self.static_image.shape if forward_step: gradient = self.gradient_static delta_field = self.static_image - self.moving_image else: gradient = self.gradient_moving delta_field = self.moving_image - self.static_image displacement = np.zeros(shape=reference_shape + (self.dim,), dtype=floating) if self.dim == 2: self.energy = v_cycle_2d( self.levels_below, self.inner_iter, delta_field, None, gradient, None, self.smooth, displacement, ) else: self.energy = v_cycle_3d( self.levels_below, self.inner_iter, delta_field, None, gradient, None, self.smooth, displacement, ) return displacement @warning_for_keywords() def compute_demons_step(self, *, forward_step=True): r"""Demons step for SSD metric Computes the demons step proposed by :footcite:t:`Vercauteren2009` for the SSD metric. Parameters ---------- forward_step : boolean if True, computes the Demons step in the forward direction (warping the moving towards the static image). If False, computes the backward step (warping the static image to the moving image) Returns ------- displacement : array, shape (R, C, 2) or (S, R, C, 3) the Demons step References ---------- ..
footbibliography:: """ sigma_reg_2 = np.sum(self.static_spacing**2) / self.dim if forward_step: gradient = self.gradient_static delta_field = self.static_image - self.moving_image else: gradient = self.gradient_moving delta_field = self.moving_image - self.static_image if self.dim == 2: step, self.energy = ssd.compute_ssd_demons_step_2d( delta_field, gradient, sigma_reg_2, None ) else: step, self.energy = ssd.compute_ssd_demons_step_3d( delta_field, gradient, sigma_reg_2, None ) for i in range(self.dim): step[..., i] = ndimage.gaussian_filter(step[..., i], self.smooth) return step def get_energy(self): r"""The numerical value assigned by this metric to the current image pair Returns the Sum of Squared Differences (data term) energy computed at the largest iteration """ return self.energy def free_iteration(self): r""" Nothing to free for the SSD metric """ pass @warning_for_keywords() def v_cycle_2d( n, k, delta_field, sigma_sq_field, gradient_field, target, lambda_param, displacement, *, depth=0, ): r"""Multi-resolution Gauss-Seidel solver using V-type cycles Multi-resolution Gauss-Seidel solver: solves the Gauss-Newton linear system by first filtering (GS-iterate) the current level, then solves for the residual at a coarser resolution and finally refines the solution at the current resolution. This scheme corresponds to the V-cycle proposed by :footcite:t:`Bruhn2005`. Parameters ---------- n : int number of levels of the multi-resolution algorithm (it will be called recursively until level n == 0) k : int the number of iterations at each multi-resolution level delta_field : array, shape (R, C) the difference between the static and moving image (the 'derivative w.r.t. time' in the optical flow model) sigma_sq_field : array, shape (R, C) the variance of the gray level value at each voxel, according to the EM model (for SSD, it is 1 for all voxels). Inf and 0 values are processed specially to support infinite and zero variance. gradient_field : array, shape (R, C, 2) the gradient of the moving image target : array, shape (R, C, 2) right-hand side of the linear system to be solved in the Weickert's multi-resolution algorithm lambda_param : float smoothness parameter, the larger its value the smoother the displacement field displacement : array, shape (R, C, 2) the displacement field to start the optimization from Returns ------- energy : the energy of the EM (or SSD if sigmafield[...]==1) metric at this iteration References ---------- .. 
footbibliography:: """ # pre-smoothing for _ in range(k): ssd.iterate_residual_displacement_field_ssd_2d( delta_field, sigma_sq_field, gradient_field, target, lambda_param, displacement, ) if n == 0: energy = ssd.compute_energy_ssd_2d(delta_field) return energy # solve at coarser grid residual = None residual = ssd.compute_residual_displacement_field_ssd_2d( delta_field, sigma_sq_field, gradient_field, target, lambda_param, displacement, residual, ) sub_residual = np.array(vfu.downsample_displacement_field_2d(residual)) del residual subsigma_sq_field = None if sigma_sq_field is not None: subsigma_sq_field = vfu.downsample_scalar_field_2d(sigma_sq_field) subdelta_field = vfu.downsample_scalar_field_2d(delta_field) subgradient_field = np.array(vfu.downsample_displacement_field_2d(gradient_field)) shape = np.array(displacement.shape).astype(np.int32) half_shape = ((shape[0] + 1) // 2, (shape[1] + 1) // 2, 2) sub_displacement = np.zeros(shape=half_shape, dtype=floating) sublambda_param = lambda_param * 0.25 v_cycle_2d( n - 1, k, subdelta_field, subsigma_sq_field, subgradient_field, sub_residual, sublambda_param, sub_displacement, depth=depth + 1, ) # displacement += np.array( # vfu.upsample_displacement_field(sub_displacement, shape)) displacement += vfu.resample_displacement_field_2d( sub_displacement, np.array([0.5, 0.5]), shape ) # post-smoothing for _ in range(k): ssd.iterate_residual_displacement_field_ssd_2d( delta_field, sigma_sq_field, gradient_field, target, lambda_param, displacement, ) energy = ssd.compute_energy_ssd_2d(delta_field) return energy @warning_for_keywords() def v_cycle_3d( n, k, delta_field, sigma_sq_field, gradient_field, target, lambda_param, displacement, *, depth=0, ): r"""Multi-resolution Gauss-Seidel solver using V-type cycles Multi-resolution Gauss-Seidel solver: solves the linear system by first filtering (GS-iterate) the current level, then solves for the residual at a coarser resolution and finally refines the solution at the current resolution. This scheme corresponds to the V-cycle proposed by :footcite:t:`Bruhn2005`. Parameters ---------- n : int number of levels of the multi-resolution algorithm (it will be called recursively until level n == 0) k : int the number of iterations at each multi-resolution level delta_field : array, shape (S, R, C) the difference between the static and moving image (the 'derivative w.r.t. time' in the optical flow model) sigma_sq_field : array, shape (S, R, C) the variance of the gray level value at each voxel, according to the EM model (for SSD, it is 1 for all voxels). Inf and 0 values are processed specially to support infinite and zero variance. gradient_field : array, shape (S, R, C, 3) the gradient of the moving image target : array, shape (S, R, C, 3) right-hand side of the linear system to be solved in the Weickert's multi-resolution algorithm lambda_param : float smoothness parameter, the larger its value the smoother the displacement field displacement : array, shape (S, R, C, 3) the displacement field to start the optimization from Returns ------- energy : the energy of the EM (or SSD if sigmafield[...]==1) metric at this iteration References ---------- .. 
footbibliography:: """ # pre-smoothing for _ in range(k): ssd.iterate_residual_displacement_field_ssd_3d( delta_field, sigma_sq_field, gradient_field, target, lambda_param, displacement, ) if n == 0: energy = ssd.compute_energy_ssd_3d(delta_field) return energy # solve at coarser grid residual = ssd.compute_residual_displacement_field_ssd_3d( delta_field, sigma_sq_field, gradient_field, target, lambda_param, displacement, None, ) sub_residual = np.array(vfu.downsample_displacement_field_3d(residual)) del residual subsigma_sq_field = None if sigma_sq_field is not None: subsigma_sq_field = vfu.downsample_scalar_field_3d(sigma_sq_field) subdelta_field = vfu.downsample_scalar_field_3d(delta_field) subgradient_field = np.array(vfu.downsample_displacement_field_3d(gradient_field)) shape = np.array(displacement.shape).astype(np.int32) sub_displacement = np.zeros( shape=((shape[0] + 1) // 2, (shape[1] + 1) // 2, (shape[2] + 1) // 2, 3), dtype=floating, ) sublambda_param = lambda_param * 0.25 v_cycle_3d( n - 1, k, subdelta_field, subsigma_sq_field, subgradient_field, sub_residual, sublambda_param, sub_displacement, depth=depth + 1, ) del subdelta_field del subsigma_sq_field del subgradient_field del sub_residual displacement += vfu.resample_displacement_field_3d( sub_displacement, 0.5 * np.ones(3), shape ) del sub_displacement # post-smoothing for _ in range(k): ssd.iterate_residual_displacement_field_ssd_3d( delta_field, sigma_sq_field, gradient_field, target, lambda_param, displacement, ) energy = ssd.compute_energy_ssd_3d(delta_field) return energy dipy-1.11.0/dipy/align/parzenhist.pyx000066400000000000000000001533231476546756600175700ustar00rootroot00000000000000#!python #cython: boundscheck=False #cython: wraparound=False #cython: cdivision=True import numpy as np cimport numpy as cnp cimport cython from dipy.align.fused_types cimport floating from dipy.align import vector_fields as vf from dipy.align.vector_fields cimport(_apply_affine_3d_x0, _apply_affine_3d_x1, _apply_affine_3d_x2, _apply_affine_2d_x0, _apply_affine_2d_x1) from dipy.align.transforms cimport (Transform) cdef extern from "dpy_math.h" nogil: double cos(double) double sin(double) double log(double) class ParzenJointHistogram: def __init__(self, nbins): r""" Computes joint histogram and derivatives with Parzen windows :footcite:p:`Parzen1962`. Base class to compute joint and marginal probability density functions and their derivatives with respect to a transform's parameters. The smooth histograms are computed by using Parzen windows :footcite:p:`Parzen1962` with a cubic spline kernel, as proposed by :footcite:t:`Mattes2003`. This implementation is not tied to any optimization (registration) method, the idea is that information-theoretic matching functionals (such as Mutual Information) can inherit from this class to perform the low-level computations of the joint intensity distributions and its gradient w.r.t. the transform parameters. The derived class can then compute the similarity/dissimilarity measure and gradient, and finally communicate the results to the appropriate optimizer. Parameters ---------- nbins : int the number of bins of the joint and marginal probability density functions (the actual number of bins of the joint PDF is nbins**2) References ---------- .. 
footbibliography:: Notes ----- We need this class in cython to allow _joint_pdf_gradient_dense_2d and _joint_pdf_gradient_dense_3d to use a nogil Jacobian function (obtained from an instance of the Transform class), which allows us to evaluate Jacobians at all the sampling points (maybe the full grid) inside a nogil loop. The reason we need a class is to encapsulate all the parameters related to the joint and marginal distributions. """ self.nbins = nbins # Since the kernel used to compute the Parzen histogram covers more # than one bin, we need to add extra bins to both sides of the # histogram to account for the contributions of the minimum and maximum # intensities. Padding is the number of extra bins used at each side # of the histogram (a total of [2 * padding] extra bins). Since the # support of the cubic spline is 5 bins (the center plus 2 bins at each # side) we need a padding of 2, in the case of cubic splines. self.padding = 2 self.setup_called = False def setup(self, static, moving, smask=None, mmask=None): r""" Compute histogram settings to store the PDF of input images Parameters ---------- static : array static image moving : array moving image smask : array mask of static object being registered (a binary array with 1's inside the object of interest and 0's along the background). If None, the behaviour is equivalent to smask=ones_like(static) mmask : array mask of moving object being registered (a binary array with 1's inside the object of interest and 0's along the background). If None, the behaviour is equivalent to mmask=ones_like(static) """ if smask is None: smask = np.ones_like(static) if mmask is None: mmask = np.ones_like(moving) self.smin = np.min(static[smask != 0]) self.smax = np.max(static[smask != 0]) self.mmin = np.min(moving[mmask != 0]) self.mmax = np.max(moving[mmask != 0]) numerator = self.smax - self.smin denominator = self.nbins - 2 * self.padding self.sdelta = np.divide(numerator, denominator, out=np.zeros_like(numerator, dtype=np.float64), where=denominator!=0) numerator = self.mmax - self.mmin self.mdelta = np.divide(numerator, denominator, out=np.zeros_like(numerator, dtype=np.float64), where=denominator!=0) self.smin = np.divide(self.smin, self.sdelta, out=np.zeros_like(self.smin, dtype=np.float64), where=self.sdelta!=0) - self.padding self.mmin = np.divide(self.mmin, self.mdelta, out=np.zeros_like(self.mmin, dtype=np.float64), where=self.mdelta!=0) - self.padding self.joint_grad = None self.metric_grad = None self.metric_val = 0 self.joint = np.zeros(shape=(self.nbins, self.nbins)) self.smarginal = np.zeros(shape=(self.nbins,), dtype=np.float64) self.mmarginal = np.zeros(shape=(self.nbins,), dtype=np.float64) self.setup_called = True def bin_normalize_static(self, x): r""" Maps intensity x to the range covered by the static histogram If the input intensity is in [self.smin, self.smax] then the normalized intensity will be in [self.padding, self.nbins - self.padding] Parameters ---------- x : float the intensity to be normalized Returns ------- xnorm : float normalized intensity to the range covered by the static histogram """ return _bin_normalize(x, self.smin, self.sdelta) def bin_normalize_moving(self, x): r""" Maps intensity x to the range covered by the moving histogram If the input intensity is in [self.mmin, self.mmax] then the normalized intensity will be in [self.padding, self.nbins - self.padding] Parameters ---------- x : float the intensity to be normalized Returns ------- xnorm : float normalized intensity to the range covered by the 
moving histogram """ return _bin_normalize(x, self.mmin, self.mdelta) def bin_index(self, xnorm): r""" Bin index associated with the given normalized intensity The return value is an integer in [padding, nbins - 1 - padding] Parameters ---------- xnorm : float intensity value normalized to the range covered by the histogram Returns ------- bin : int the bin index associated with the given normalized intensity """ return _bin_index(xnorm, self.nbins, self.padding) def update_pdfs_dense(self, static, moving, smask=None, mmask=None): r""" Computes the Probability Density Functions of two images The joint PDF is stored in self.joint. The marginal distributions corresponding to the static and moving images are computed and stored in self.smarginal and self.mmarginal, respectively. Parameters ---------- static : array, shape (S, R, C) static image moving : array, shape (S, R, C) moving image smask : array, shape (S, R, C) mask of static object being registered (a binary array with 1's inside the object of interest and 0's along the background). If None, ones_like(static) is used as mask. mmask : array, shape (S, R, C) mask of moving object being registered (a binary array with 1's inside the object of interest and 0's along the background). If None, ones_like(moving) is used as mask. """ if static.shape != moving.shape: raise ValueError("Images must have the same shape") dim = len(static.shape) if not dim in [2, 3]: msg = 'Only dimensions 2 and 3 are supported. ' +\ str(dim) + ' received' raise ValueError(msg) if not self.setup_called: self.setup(static, moving, smask=None, mmask=None) if dim == 2: _compute_pdfs_dense_2d(static, moving, smask, mmask, self.smin, self.sdelta, self.mmin, self.mdelta, self.nbins, self.padding, self.joint, self.smarginal, self.mmarginal) elif dim == 3: _compute_pdfs_dense_3d(static, moving, smask, mmask, self.smin, self.sdelta, self.mmin, self.mdelta, self.nbins, self.padding, self.joint, self.smarginal, self.mmarginal) def update_pdfs_sparse(self, sval, mval): r""" Computes the Probability Density Functions from a set of samples The list of intensities `sval` and `mval` are assumed to be sampled from the static and moving images, respectively, at the same physical points. Of course, the images may not be perfectly aligned at the moment the sampling was performed. The resulting distributions corresponds to the paired intensities according to the alignment at the moment the images were sampled. The joint PDF is stored in self.joint. The marginal distributions corresponding to the static and moving images are computed and stored in self.smarginal and self.mmarginal, respectively. Parameters ---------- sval : array, shape (n,) sampled intensities from the static image at sampled_points mval : array, shape (n,) sampled intensities from the moving image at sampled_points """ if not self.setup_called: self.setup(sval, mval) energy = _compute_pdfs_sparse(sval, mval, self.smin, self.sdelta, self.mmin, self.mdelta, self.nbins, self.padding, self.joint, self.smarginal, self.mmarginal) def update_gradient_dense(self, theta, transform, static, moving, grid2world, mgradient, smask=None, mmask=None): r""" Computes the Gradient of the joint PDF w.r.t. transform parameters Computes the vector of partial derivatives of the joint histogram w.r.t. each transformation parameter. The gradient is stored in self.joint_grad. 
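For example, with a 3D translation transform evaluated at the identity (a sketch only; ``static``, ``moving`` and ``grid2world`` are assumed to be prepared by the caller, ``mgrad`` is the moving image gradient, e.g. from ``np.gradient``, and ``histogram`` is an instance of this class on which ``setup`` has already been run)::

    from dipy.align.transforms import TranslationTransform3D

    transform = TranslationTransform3D()
    theta = transform.get_identity_parameters()
    histogram.update_gradient_dense(theta, transform, static, moving,
                                    grid2world, mgrad)
    grad = histogram.joint_grad    # shape (nbins, nbins, len(theta))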
Parameters ---------- theta : array, shape (n,) parameters of the transformation to compute the gradient from transform : instance of Transform the transformation with respect to whose parameters the gradient must be computed static : array, shape (S, R, C) static image moving : array, shape (S, R, C) moving image grid2world : array, shape (4, 4) we assume that both images have already been sampled at a common grid. This transform must map voxel coordinates of this common grid to physical coordinates of its corresponding voxel in the moving image. For example, if the moving image was sampled on the static image's grid (this is the typical setting) using an aligning matrix A, then (1) grid2world = A.dot(static_affine) where static_affine is the transformation mapping static image's grid coordinates to physical space. mgradient : array, shape (S, R, C, 3) the gradient of the moving image smask : array, shape (S, R, C), optional mask of static object being registered (a binary array with 1's inside the object of interest and 0's along the background). The default is None, indicating all voxels are considered. mmask : array, shape (S, R, C), optional mask of moving object being registered (a binary array with 1's inside the object of interest and 0's along the background). The default is None, indicating all voxels are considered. """ if static.shape != moving.shape: raise ValueError("Images must have the same shape") dim = len(static.shape) if not dim in [2, 3]: msg = 'Only dimensions 2 and 3 are supported. ' +\ str(dim) + ' received' raise ValueError(msg) if mgradient.shape != moving.shape + (dim,): raise ValueError('Invalid gradient field dimensions.') if not self.setup_called: self.setup(static, moving, smask=smask, mmask=mmask) n = theta.shape[0] nbins = self.nbins if (self.joint_grad is None) or (self.joint_grad.shape[2] != n): self.joint_grad = np.zeros((nbins, nbins, n)) if dim == 2: if mgradient.dtype == np.float64: _joint_pdf_gradient_dense_2d[cython.double](theta, transform, static, moving, grid2world, mgradient, smask, mmask, self.smin, self.sdelta, self.mmin, self.mdelta, self.nbins, self.padding, self.joint_grad) elif mgradient.dtype == np.float32: _joint_pdf_gradient_dense_2d[cython.float](theta, transform, static, moving, grid2world, mgradient, smask, mmask, self.smin, self.sdelta, self.mmin, self.mdelta, self.nbins, self.padding, self.joint_grad) else: raise ValueError('Grad. field dtype must be floating point') elif dim == 3: if mgradient.dtype == np.float64: _joint_pdf_gradient_dense_3d[cython.double](theta, transform, static, moving, grid2world, mgradient, smask, mmask, self.smin, self.sdelta, self.mmin, self.mdelta, self.nbins, self.padding, self.joint_grad) elif mgradient.dtype == np.float32: _joint_pdf_gradient_dense_3d[cython.float](theta, transform, static, moving, grid2world, mgradient, smask, mmask, self.smin, self.sdelta, self.mmin, self.mdelta, self.nbins, self.padding, self.joint_grad) else: raise ValueError('Grad. field dtype must be floating point') def update_gradient_sparse(self, theta, transform, sval, mval, sample_points, mgradient): r""" Computes the Gradient of the joint PDF w.r.t. transform parameters Computes the vector of partial derivatives of the joint histogram w.r.t. each transformation parameter. The list of intensities `sval` and `mval` are assumed to be sampled from the static and moving images, respectively, at the same physical points. Of course, the images may not be perfectly aligned at the moment the sampling was performed. 
The resulting gradient corresponds to the paired intensities according to the alignment at the moment the images were sampled. The gradient is stored in self.joint_grad. Parameters ---------- theta : array, shape (n,) parameters to compute the gradient at transform : instance of Transform the transformation with respect to whose parameters the gradient must be computed sval : array, shape (m,) sampled intensities from the static image at sampled_points mval : array, shape (m,) sampled intensities from the moving image at sampled_points sample_points : array, shape (m, 3) coordinates (in physical space) of the points the images were sampled at mgradient : array, shape (m, 3) the gradient of the moving image at the sample points """ dim = sample_points.shape[1] if mgradient.shape[1] != dim: raise ValueError('Dimensions of gradients and points are different') nsamples = sval.shape[0] if ((mgradient.shape[0] != nsamples) or (mval.shape[0] != nsamples) or sample_points.shape[0] != nsamples): raise ValueError('Number of points and gradients are different.') if not mgradient.dtype in [np.float32, np.float64]: raise ValueError('Gradients dtype must be floating point') n = theta.shape[0] nbins = self.nbins if (self.joint_grad is None) or (self.joint_grad.shape[2] != n): self.joint_grad = np.zeros(shape=(nbins, nbins, n)) if dim == 2: if mgradient.dtype == np.float64: _joint_pdf_gradient_sparse_2d[cython.double](theta, transform, sval, mval, sample_points, mgradient, self.smin, self.sdelta, self.mmin, self.mdelta, self.nbins, self.padding, self.joint_grad) elif mgradient.dtype == np.float32: _joint_pdf_gradient_sparse_2d[cython.float](theta, transform, sval, mval, sample_points, mgradient, self.smin, self.sdelta, self.mmin, self.mdelta, self.nbins, self.padding, self.joint_grad) else: raise ValueError('Gradients dtype must be floating point') elif dim == 3: if mgradient.dtype == np.float64: _joint_pdf_gradient_sparse_3d[cython.double](theta, transform, sval, mval, sample_points, mgradient, self.smin, self.sdelta, self.mmin, self.mdelta, self.nbins, self.padding, self.joint_grad) elif mgradient.dtype == np.float32: _joint_pdf_gradient_sparse_3d[cython.float](theta, transform, sval, mval, sample_points, mgradient, self.smin, self.sdelta, self.mmin, self.mdelta, self.nbins, self.padding, self.joint_grad) else: raise ValueError('Gradients dtype must be floating point') else: msg = 'Only dimensions 2 and 3 are supported. ' + str(dim) +\ ' received' raise ValueError(msg) cdef inline double _bin_normalize(double x, double mval, double delta) nogil: r""" Normalizes intensity x to the range covered by the Parzen histogram We assume that mval was computed as: (1) mval = xmin / delta - padding where xmin is the minimum observed image intensity and delta is the bin size, computed as: (2) delta = (xmax - xmin)/(nbins - 2 * padding) If the minimum and maximum intensities were assigned to the first and last bins (with no padding), it could be possible that samples at the first and last bins contribute to "non-existing" bins beyond the boundary (because the support of the Parzen window may be larger than one bin). The padding bins are used to collect such contributions (i.e. the probability of observing a value beyond the minimum and maximum observed intensities may correctly be assigned a positive value). 
The normalized intensity is (from eq(1) ): (3) nx = (x - xmin) / delta + padding = x / delta - mval This means that normalized intensity nx must lie in the closed interval [padding, nbins-padding], which contains bins with indices padding, padding+1, ..., nbins - 1 - padding (i.e., nbins - 2*padding bins) """ if delta == 0: return 0 return x / delta - mval cdef inline cnp.npy_intp _bin_index(double normalized, int nbins, int padding) nogil: r""" Index of the bin in which the normalized intensity `normalized` lies. The intensity is assumed to have been normalized to the range of intensities covered by the histogram: the bin index is the integer part of `normalized`, which must be within the interval [padding, nbins - 1 - padding]. Parameters ---------- normalized : double normalized intensity nbins : int number of histogram bins padding : int number of bins used as padding (the total bins used for padding at both sides of the histogram is actually 2*padding) Returns ------- bin_id : int index of the bin in which the normalized intensity 'normalized' lies """ cdef: cnp.npy_intp bin_id bin_id = normalized if bin_id < padding: return padding if bin_id > nbins - 1 - padding: return nbins - 1 - padding return bin_id def cubic_spline(double[:] x): r""" Evaluates the cubic spline at a set of values Parameters ---------- x : array, shape (n) input values """ cdef: cnp.npy_intp i cnp.npy_intp n = x.shape[0] double[:] sx = np.zeros(n, dtype=np.float64) with nogil: for i in range(n): sx[i] = _cubic_spline(x[i]) return np.asarray(sx) cdef inline double _cubic_spline(double x) nogil: r""" Cubic B-Spline evaluated at x See eq. (3) of :footcite:t:`Mattes2003`. References ---------- .. footbibliography:: """ cdef: double absx = -x if x < 0.0 else x double sqrx = x * x if absx < 1.0: return (4.0 - 6.0 * sqrx + 3.0 * sqrx * absx) / 6.0 elif absx < 2.0: return (8.0 - 12 * absx + 6.0 * sqrx - sqrx * absx) / 6.0 return 0.0 def cubic_spline_derivative(double[:] x): r""" Evaluates the cubic spline derivative at a set of values Parameters ---------- x : array, shape (n) input values """ cdef: cnp.npy_intp i cnp.npy_intp n = x.shape[0] double[:] sx = np.zeros(n, dtype=np.float64) with nogil: for i in range(n): sx[i] = _cubic_spline_derivative(x[i]) return np.asarray(sx) cdef inline double _cubic_spline_derivative(double x) nogil: r""" Derivative of cubic B-Spline evaluated at x See eq. (3) of :footcite:t:`Mattes2003`. References ---------- .. 
footbibliography:: """ cdef: double absx = -x if x < 0.0 else x if absx < 1.0: if x >= 0.0: return -2.0 * x + 1.5 * x * x else: return -2.0 * x - 1.5 * x * x elif absx < 2.0: if x >= 0: return -2.0 + 2.0 * x - 0.5 * x * x else: return 2.0 + 2.0 * x + 0.5 * x * x return 0.0 cdef _compute_pdfs_dense_2d(double[:, :] static, double[:, :] moving, int[:, :] smask, int[:, :] mmask, double smin, double sdelta, double mmin, double mdelta, int nbins, int padding, double[:, :] joint, double[:] smarginal, double[:] mmarginal): r""" Joint Probability Density Function of intensities of two 2D images Parameters ---------- static : array, shape (R, C) static image moving : array, shape (R, C) moving image smask : array, shape (R, C) mask of static object being registered (a binary array with 1's inside the object of interest and 0's along the background) mmask : array, shape (R, C) mask of moving object being registered (a binary array with 1's inside the object of interest and 0's along the background) smin : float the minimum observed intensity associated with the static image, which was used to define the joint PDF sdelta : float bin size associated with the intensities of the static image mmin : float the minimum observed intensity associated with the moving image, which was used to define the joint PDF mdelta : float bin size associated with the intensities of the moving image nbins : int number of histogram bins padding : int number of bins used as padding (the total bins used for padding at both sides of the histogram is actually 2*padding) joint : array, shape (nbins, nbins) the array to write the joint PDF smarginal : array, shape (nbins,) the array to write the marginal PDF associated with the static image mmarginal : array, shape (nbins,) the array to write the marginal PDF associated with the moving image """ cdef: cnp.npy_intp nrows = static.shape[0] cnp.npy_intp ncols = static.shape[1] cnp.npy_intp offset, valid_points cnp.npy_intp i, j, r, c double rn, cn double val, spline_arg, total_sum joint[...] 
= 0 total_sum = 0 valid_points = 0 with nogil: smarginal[:] = 0 for i in range(nrows): for j in range(ncols): if smask is not None and smask[i, j] == 0: continue if mmask is not None and mmask[i, j] == 0: continue valid_points += 1 rn = _bin_normalize(static[i, j], smin, sdelta) r = _bin_index(rn, nbins, padding) cn = _bin_normalize(moving[i, j], mmin, mdelta) c = _bin_index(cn, nbins, padding) spline_arg = (c - 2) - cn smarginal[r] += 1 for offset in range(-2, 3): val = _cubic_spline(spline_arg) joint[r, c + offset] += val total_sum += val spline_arg += 1.0 if total_sum > 0: for i in range(nbins): for j in range(nbins): joint[i, j] /= total_sum for i in range(nbins): smarginal[i] /= valid_points for j in range(nbins): mmarginal[j] = 0 for i in range(nbins): mmarginal[j] += joint[i, j] cdef _compute_pdfs_dense_3d(double[:, :, :] static, double[:, :, :] moving, int[:, :, :] smask, int[:, :, :] mmask, double smin, double sdelta, double mmin, double mdelta, int nbins, int padding, double[:, :] joint, double[:] smarginal, double[:] mmarginal): r""" Joint Probability Density Function of intensities of two 3D images Parameters ---------- static : array, shape (S, R, C) static image moving : array, shape (S, R, C) moving image smask : array, shape (S, R, C) mask of static object being registered (a binary array with 1's inside the object of interest and 0's along the background) mmask : array, shape (S, R, C) mask of moving object being registered (a binary array with 1's inside the object of interest and 0's along the background) smin : float the minimum observed intensity associated with the static image, which was used to define the joint PDF sdelta : float bin size associated with the intensities of the static image mmin : float the minimum observed intensity associated with the moving image, which was used to define the joint PDF mdelta : float bin size associated with the intensities of the moving image nbins : int number of histogram bins padding : int number of bins used as padding (the total bins used for padding at both sides of the histogram is actually 2*padding) joint : array, shape (nbins, nbins) the array to write the joint PDF to smarginal : array, shape (nbins,) the array to write the marginal PDF associated with the static image mmarginal : array, shape (nbins,) the array to write the marginal PDF associated with the moving image """ cdef: cnp.npy_intp nslices = static.shape[0] cnp.npy_intp nrows = static.shape[1] cnp.npy_intp ncols = static.shape[2] cnp.npy_intp offset, valid_points cnp.npy_intp k, i, j, r, c double rn, cn double val, spline_arg, total_sum joint[...]
= 0 total_sum = 0 with nogil: valid_points = 0 smarginal[:] = 0 for k in range(nslices): for i in range(nrows): for j in range(ncols): if smask is not None and smask[k, i, j] == 0: continue if mmask is not None and mmask[k, i, j] == 0: continue valid_points += 1 rn = _bin_normalize(static[k, i, j], smin, sdelta) r = _bin_index(rn, nbins, padding) cn = _bin_normalize(moving[k, i, j], mmin, mdelta) c = _bin_index(cn, nbins, padding) spline_arg = (c - 2) - cn smarginal[r] += 1 for offset in range(-2, 3): val = _cubic_spline(spline_arg) joint[r, c + offset] += val total_sum += val spline_arg += 1.0 if total_sum > 0: for i in range(nbins): for j in range(nbins): joint[i, j] /= total_sum for i in range(nbins): smarginal[i] /= valid_points for j in range(nbins): mmarginal[j] = 0 for i in range(nbins): mmarginal[j] += joint[i, j] cdef _compute_pdfs_sparse(double[:] sval, double[:] mval, double smin, double sdelta, double mmin, double mdelta, int nbins, int padding, double[:, :] joint, double[:] smarginal, double[:] mmarginal): r""" Probability Density Functions of paired intensities Parameters ---------- sval : array, shape (n,) sampled intensities from the static image at sampled_points mval : array, shape (n,) sampled intensities from the moving image at sampled_points smin : float the minimum observed intensity associated with the static image, which was used to define the joint PDF sdelta : float bin size associated with the intensities of the static image mmin : float the minimum observed intensity associated with the moving image, which was used to define the joint PDF mdelta : float bin size associated with the intensities of the moving image nbins : int number of histogram bins padding : int number of bins used as padding (the total bins used for padding at both sides of the histogram is actually 2*padding) joint : array, shape (nbins, nbins) the array to write the joint PDF to smarginal : array, shape (nbins,) the array to write the marginal PDF associated with the static image mmarginal : array, shape (nbins,) the array to write the marginal PDF associated with the moving image """ cdef: cnp.npy_intp n = sval.shape[0] cnp.npy_intp offset, valid_points cnp.npy_intp i, j, r, c double rn, cn double val, spline_arg, total_sum joint[...] = 0 total_sum = 0 with nogil: valid_points = 0 smarginal[:] = 0 for i in range(n): valid_points += 1 rn = _bin_normalize(sval[i], smin, sdelta) r = _bin_index(rn, nbins, padding) cn = _bin_normalize(mval[i], mmin, mdelta) c = _bin_index(cn, nbins, padding) spline_arg = (c - 2) - cn smarginal[r] += 1 for offset in range(-2, 3): val = _cubic_spline(spline_arg) joint[r, c + offset] += val total_sum += val spline_arg += 1.0 if total_sum > 0: for i in range(nbins): for j in range(nbins): joint[i, j] /= total_sum for i in range(nbins): smarginal[i] /= valid_points for j in range(nbins): mmarginal[j] = 0 for i in range(nbins): mmarginal[j] += joint[i, j] cdef _joint_pdf_gradient_dense_2d(double[:] theta, Transform transform, double[:, :] static, double[:, :] moving, double[:, :] grid2world, floating[:, :, :] mgradient, int[:, :] smask, int[:, :] mmask, double smin, double sdelta, double mmin, double mdelta, int nbins, int padding, double[:, :, :] grad_pdf): r""" Gradient of the joint PDF w.r.t. transform parameters theta Computes the vector of partial derivatives of the joint histogram w.r.t. each transformation parameter. The transformation itself is not necessary to compute the gradient, but only its Jacobian.
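Per voxel, the accumulation implemented below amounts to the following chain rule (a schematic sketch using the public Python wrappers; the actual implementation runs on typed memoryviews without the GIL)::

    J = transform.jacobian(theta, x)     # (2, n) Jacobian at physical point x
    prod = np.asarray(J).T.dot(mgradient[i, j])   # d(moving)/d(theta) here
    spline_arg = (c - 2) - cn
    for offset in range(-2, 3):
        val = cubic_spline_derivative(np.asarray([spline_arg]))[0]
        grad_pdf[r, c + offset] -= val * prod
        spline_arg += 1.0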
Parameters ---------- theta : array, shape (n,) parameters of the transformation to compute the gradient from transform : instance of Transform the transformation with respect to whose parameters the gradient must be computed static : array, shape (R, C) static image moving : array, shape (R, C) moving image grid2world : array, shape (3, 3) the grid-to-space transform associated with images static and moving (we assume that both images have already been sampled at a common grid) mgradient : array, shape (R, C, 2) the gradient of the moving image smask : array, shape (R, C) mask of static object being registered (a binary array with 1's inside the object of interest and 0's along the background) mmask : array, shape (R, C) mask of moving object being registered (a binary array with 1's inside the object of interest and 0's along the background) smin : float the minimum observed intensity associated with the static image, which was used to define the joint PDF sdelta : float bin size associated with the intensities of the static image mmin : float the minimum observed intensity associated with the moving image, which was used to define the joint PDF mdelta : float bin size associated with the intensities of the moving image nbins : int number of histogram bins padding : int number of bins used as padding (the total bins used for padding at both sides of the histogram is actually 2*padding) grad_pdf : array, shape (nbins, nbins, len(theta)) the array to write the gradient to """ cdef: cnp.npy_intp nrows = static.shape[0] cnp.npy_intp ncols = static.shape[1] cnp.npy_intp n = theta.shape[0] cnp.npy_intp offset, valid_points int constant_jacobian = 0 cnp.npy_intp k, i, j, r, c double rn, cn double val, spline_arg, norm_factor double[:, :] J = np.empty(shape=(2, n), dtype=np.float64) double[:] prod = np.empty(shape=(n,), dtype=np.float64) double[:] x = np.empty(shape=(2,), dtype=np.float64) grad_pdf[...] = 0 with nogil: valid_points = 0 for i in range(nrows): for j in range(ncols): if smask is not None and smask[i, j] == 0: continue if mmask is not None and mmask[i, j] == 0: continue valid_points += 1 x[0] = _apply_affine_2d_x0(i, j, 1, grid2world) x[1] = _apply_affine_2d_x1(i, j, 1, grid2world) if constant_jacobian == 0: constant_jacobian = transform._jacobian(theta, x, J) for k in range(n): prod[k] = (J[0, k] * mgradient[i, j, 0] + J[1, k] * mgradient[i, j, 1]) rn = _bin_normalize(static[i, j], smin, sdelta) r = _bin_index(rn, nbins, padding) cn = _bin_normalize(moving[i, j], mmin, mdelta) c = _bin_index(cn, nbins, padding) spline_arg = (c - 2) - cn for offset in range(-2, 3): val = _cubic_spline_derivative(spline_arg) for k in range(n): grad_pdf[r, c + offset, k] -= val * prod[k] spline_arg += 1.0 norm_factor = valid_points * mdelta if norm_factor > 0: for i in range(nbins): for j in range(nbins): for k in range(n): grad_pdf[i, j, k] /= norm_factor cdef _joint_pdf_gradient_dense_3d(double[:] theta, Transform transform, double[:, :, :] static, double[:, :, :] moving, double[:, :] grid2world, floating[:, :, :, :] mgradient, int[:, :, :] smask, int[:, :, :] mmask, double smin, double sdelta, double mmin, double mdelta, int nbins, int padding, double[:, :, :] grad_pdf): r""" Gradient of the joint PDF w.r.t. transform parameters theta Computes the vector of partial derivatives of the joint histogram w.r.t. each transformation parameter. The transformation itself is not necessary to compute the gradient, but only its Jacobian. 
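Note the final division by ``valid_points * mdelta``: the ``1 / valid_points`` factor mirrors the normalization of the joint PDF itself, while the ``1 / mdelta`` factor arises from differentiating the normalized moving intensity w.r.t. the transform parameters, since schematically::

    d(cn)/d(theta) = (1 / mdelta) * d(moving)/d(theta)

and the minus sign in the accumulation comes from differentiating the spline argument ``(c + offset) - cn`` with respect to ``cn``.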
Parameters ---------- theta : array, shape (n,) parameters of the transformation to compute the gradient from transform : instance of Transform the transformation with respect to whose parameters the gradient must be computed static : array, shape (S, R, C) static image moving : array, shape (S, R, C) moving image grid2world : array, shape (4, 4) the grid-to-space transform associated with images static and moving (we assume that both images have already been sampled at a common grid) mgradient : array, shape (S, R, C, 3) the gradient of the moving image smask : array, shape (S, R, C) mask of static object being registered (a binary array with 1's inside the object of interest and 0's along the background) mmask : array, shape (S, R, C) mask of moving object being registered (a binary array with 1's inside the object of interest and 0's along the background) smin : float the minimum observed intensity associated with the static image, which was used to define the joint PDF sdelta : float bin size associated with the intensities of the static image mmin : float the minimum observed intensity associated with the moving image, which was used to define the joint PDF mdelta : float bin size associated with the intensities of the moving image nbins : int number of histogram bins padding : int number of bins used as padding (the total bins used for padding at both sides of the histogram is actually 2*padding) grad_pdf : array, shape (nbins, nbins, len(theta)) the array to write the gradient to """ cdef: cnp.npy_intp nslices = static.shape[0] cnp.npy_intp nrows = static.shape[1] cnp.npy_intp ncols = static.shape[2] cnp.npy_intp n = theta.shape[0] cnp.npy_intp offset, valid_points int constant_jacobian = 0 cnp.npy_intp l, k, i, j, r, c double rn, cn double val, spline_arg, norm_factor double[:, :] J = np.empty(shape=(3, n), dtype=np.float64) double[:] prod = np.empty(shape=(n,), dtype=np.float64) double[:] x = np.empty(shape=(3,), dtype=np.float64) grad_pdf[...] = 0 with nogil: valid_points = 0 for k in range(nslices): for i in range(nrows): for j in range(ncols): if smask is not None and smask[k, i, j] == 0: continue if mmask is not None and mmask[k, i, j] == 0: continue valid_points += 1 x[0] = _apply_affine_3d_x0(k, i, j, 1, grid2world) x[1] = _apply_affine_3d_x1(k, i, j, 1, grid2world) x[2] = _apply_affine_3d_x2(k, i, j, 1, grid2world) if constant_jacobian == 0: constant_jacobian = transform._jacobian(theta, x, J) for l in range(n): prod[l] = (J[0, l] * mgradient[k, i, j, 0] + J[1, l] * mgradient[k, i, j, 1] + J[2, l] * mgradient[k, i, j, 2]) rn = _bin_normalize(static[k, i, j], smin, sdelta) r = _bin_index(rn, nbins, padding) cn = _bin_normalize(moving[k, i, j], mmin, mdelta) c = _bin_index(cn, nbins, padding) spline_arg = (c - 2) - cn for offset in range(-2, 3): val = _cubic_spline_derivative(spline_arg) for l in range(n): grad_pdf[r, c + offset, l] -= val * prod[l] spline_arg += 1.0 norm_factor = valid_points * mdelta if norm_factor > 0: for i in range(nbins): for j in range(nbins): for k in range(n): grad_pdf[i, j, k] /= norm_factor cdef _joint_pdf_gradient_sparse_2d(double[:] theta, Transform transform, double[:] sval, double[:] mval, double[:, :] sample_points, floating[:, :] mgradient, double smin, double sdelta, double mmin, double mdelta, int nbins, int padding, double[:, :, :] grad_pdf): r""" Gradient of the joint PDF w.r.t. transform parameters theta Computes the vector of partial derivatives of the joint histogram w.r.t. each transformation parameter. 
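    Unlike the dense variants, the contributions are accumulated over a sparse
    set of pre-sampled intensities (``sval``, ``mval``) rather than over the
    full image grid.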
The transformation itself is not necessary to compute the gradient, but only its Jacobian. Parameters ---------- theta : array, shape (n,) parameters to compute the gradient at transform : instance of Transform the transformation with respect to whose parameters the gradient must be computed sval : array, shape (m,) sampled intensities from the static image at sampled_points mval : array, shape (m,) sampled intensities from the moving image at sampled_points sample_points : array, shape (m, 2) positions (in physical space) of the points the images were sampled at mgradient : array, shape (m, 2) the gradient of the moving image at the sample points smin : float the minimum observed intensity associated with the static image, which was used to define the joint PDF sdelta : float bin size associated with the intensities of the static image mmin : float the minimum observed intensity associated with the moving image, which was used to define the joint PDF mdelta : float bin size associated with the intensities of the moving image nbins : int number of histogram bins padding : int number of bins used as padding (the total bins used for padding at both sides of the histogram is actually 2*padding) grad_pdf : array, shape (nbins, nbins, len(theta)) the array to write the gradient to """ cdef: cnp.npy_intp n = theta.shape[0] cnp.npy_intp m = sval.shape[0] cnp.npy_intp offset int constant_jacobian = 0 cnp.npy_intp i, j, r, c, valid_points double rn, cn double val, spline_arg, norm_factor double[:, :] J = np.empty(shape=(2, n), dtype=np.float64) double[:] prod = np.empty(shape=(n,), dtype=np.float64) grad_pdf[...] = 0 with nogil: valid_points = 0 for i in range(m): valid_points += 1 if constant_jacobian == 0: constant_jacobian = transform._jacobian(theta, sample_points[i], J) for j in range(n): prod[j] = (J[0, j] * mgradient[i, 0] + J[1, j] * mgradient[i, 1]) rn = _bin_normalize(sval[i], smin, sdelta) r = _bin_index(rn, nbins, padding) cn = _bin_normalize(mval[i], mmin, mdelta) c = _bin_index(cn, nbins, padding) spline_arg = (c - 2) - cn for offset in range(-2, 3): val = _cubic_spline_derivative(spline_arg) for j in range(n): grad_pdf[r, c + offset, j] -= val * prod[j] spline_arg += 1.0 norm_factor = valid_points * mdelta if norm_factor > 0: for i in range(nbins): for j in range(nbins): for k in range(n): grad_pdf[i, j, k] /= norm_factor cdef _joint_pdf_gradient_sparse_3d(double[:] theta, Transform transform, double[:] sval, double[:] mval, double[:, :] sample_points, floating[:, :] mgradient, double smin, double sdelta, double mmin, double mdelta, int nbins, int padding, double[:, :, :] grad_pdf): r""" Gradient of the joint PDF w.r.t. transform parameters theta Computes the vector of partial derivatives of the joint histogram w.r.t. each transformation parameter. The transformation itself is not necessary to compute the gradient, but only its Jacobian. 
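    Because ``sample_points`` are already expressed in physical coordinates,
    no grid-to-space transform is involved here: the Jacobian is evaluated
    directly at each sample point.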
Parameters ---------- theta : array, shape (n,) parameters to compute the gradient at transform : instance of Transform the transformation with respect to whose parameters the gradient must be computed sval : array, shape (m,) sampled intensities from the static image at sampled_points mval : array, shape (m,) sampled intensities from the moving image at sampled_points sample_points : array, shape (m, 3) positions (in physical space) of the points the images were sampled at mgradient : array, shape (m, 3) the gradient of the moving image at the sample points smin : float the minimum observed intensity associated with the static image, which was used to define the joint PDF sdelta : float bin size associated with the intensities of the static image mmin : float the minimum observed intensity associated with the moving image, which was used to define the joint PDF mdelta : float bin size associated with the intensities of the moving image nbins : int number of histogram bins padding : int number of bins used as padding (the total bins used for padding at both sides of the histogram is actually 2*padding) grad_pdf : array, shape (nbins, nbins, len(theta)) the array to write the gradient to """ cdef: cnp.npy_intp n = theta.shape[0] cnp.npy_intp m = sval.shape[0] cnp.npy_intp offset, valid_points int constant_jacobian = 0 cnp.npy_intp i, j, r, c double rn, cn double val, spline_arg, norm_factor double[:, :] J = np.empty(shape=(3, n), dtype=np.float64) double[:] prod = np.empty(shape=(n,), dtype=np.float64) grad_pdf[...] = 0 with nogil: valid_points = 0 for i in range(m): valid_points += 1 if constant_jacobian == 0: constant_jacobian = transform._jacobian(theta, sample_points[i], J) for j in range(n): prod[j] = (J[0, j] * mgradient[i, 0] + J[1, j] * mgradient[i, 1] + J[2, j] * mgradient[i, 2]) rn = _bin_normalize(sval[i], smin, sdelta) r = _bin_index(rn, nbins, padding) cn = _bin_normalize(mval[i], mmin, mdelta) c = _bin_index(cn, nbins, padding) spline_arg = (c - 2) - cn for offset in range(-2, 3): val = _cubic_spline_derivative(spline_arg) for j in range(n): grad_pdf[r, c + offset, j] -= val * prod[j] spline_arg += 1.0 norm_factor = valid_points * mdelta if norm_factor > 0: for i in range(nbins): for j in range(nbins): for k in range(n): grad_pdf[i, j, k] /= norm_factor def compute_parzen_mi(double[:, :] joint, double[:, :, :] joint_gradient, double[:] smarginal, double[:] mmarginal, double[:] mi_gradient): r""" Computes the mutual information and its gradient (if requested) Parameters ---------- joint : array, shape (nbins, nbins) the joint intensity distribution joint_gradient : array, shape (nbins, nbins, n) the gradient of the joint distribution w.r.t. the transformation parameters smarginal : array, shape (nbins,) the marginal intensity distribution of the static image mmarginal : array, shape (nbins,) the marginal intensity distribution of the moving image mi_gradient : array, shape (n,) the buffer in which to write the gradient of the mutual information. 
If None, the gradient is not computed """ cdef: double epsilon = 2.2204460492503131e-016 double metric_value cnp.npy_intp nrows = joint.shape[0] cnp.npy_intp ncols = joint.shape[1] cnp.npy_intp n = joint_gradient.shape[2] with nogil: mi_gradient[:] = 0 metric_value = 0 for i in range(nrows): for j in range(ncols): if joint[i, j] < epsilon or mmarginal[j] < epsilon: continue factor = log(joint[i, j] / mmarginal[j]) if mi_gradient is not None: for k in range(n): mi_gradient[k] += joint_gradient[i, j, k] * factor if smarginal[i] > epsilon: metric_value += joint[i, j] * (factor - log(smarginal[i])) return metric_value def sample_domain_regular(int k, int[:] shape, double[:, :] grid2world, double sigma=0.25, object rng=None): r""" Take floor(total_voxels/k) samples from a (2D or 3D) grid The sampling is made by taking all pixels whose index (in lexicographical order) is a multiple of k. Each selected point is slightly perturbed by adding a realization of a normally distributed random variable and then mapped to physical space by the given grid-to-space transform. The lexicographical order of a pixels in a grid of shape (a, b, c) is defined by assigning to each voxel position (i, j, k) the integer index F((i, j, k)) = i * (b * c) + j * (c) + k and sorting increasingly by this index. Parameters ---------- k : int the sampling rate, as described before shape : array, shape (dim,) the shape of the grid to be sampled grid2world : array, shape (dim+1, dim+1) the grid-to-space transform sigma : float the standard deviation of the Normal random distortion to be applied to the sampled points Returns ------- samples : array, shape (total_pixels//k, dim) the matrix whose rows are the sampled points Examples -------- >>> from dipy.align.parzenhist import sample_domain_regular >>> import dipy.align.vector_fields as vf >>> shape = np.array((10, 10), dtype=np.int32) >>> sigma = 0 >>> dim = len(shape) >>> grid2world = np.eye(dim+1) >>> n = shape[0]*shape[1] >>> k = 2 >>> samples = sample_domain_regular(k, shape, grid2world, sigma) >>> (samples.shape[0], samples.shape[1]) == (n//k, dim) True >>> isamples = np.array(samples, dtype=np.int32) >>> indices = (isamples[:, 0] * shape[1] + isamples[:, 1]) >>> len(set(indices)) == len(indices) True >>> (indices%k).sum() 0 """ cdef: cnp.npy_intp i, dim, n, m, slice_size double s, r, c double[:, :] samples dim = len(shape) if not vf.is_valid_affine(grid2world, dim): raise ValueError("Invalid grid-to-space matrix") if rng is None: rng = np.random.default_rng(1234) if dim == 2: n = shape[0] * shape[1] m = n // k samples = rng.standard_normal((m, dim)) * sigma with nogil: for i in range(m): r = ((i * k) // shape[1]) + samples[i, 0] c = ((i * k) % shape[1]) + samples[i, 1] samples[i, 0] = _apply_affine_2d_x0(r, c, 1, grid2world) samples[i, 1] = _apply_affine_2d_x1(r, c, 1, grid2world) else: slice_size = shape[1] * shape[2] n = shape[0] * slice_size m = n // k samples = rng.standard_normal((m, dim)) * sigma with nogil: for i in range(m): s = ((i * k) // slice_size) + samples[i, 0] r = (((i * k) % slice_size) // shape[2]) + samples[i, 1] c = (((i * k) % slice_size) % shape[2]) + samples[i, 2] samples[i, 0] = _apply_affine_3d_x0(s, r, c, 1, grid2world) samples[i, 1] = _apply_affine_3d_x1(s, r, c, 1, grid2world) samples[i, 2] = _apply_affine_3d_x2(s, r, c, 1, grid2world) return np.asarray(samples) dipy-1.11.0/dipy/align/reslice.py000066400000000000000000000105771476546756600166420ustar00rootroot00000000000000import multiprocessing as mp import warnings import numpy as np from 
scipy.ndimage import affine_transform from dipy.testing.decorators import warning_for_keywords from dipy.utils.multiproc import determine_num_processes def _affine_transform(kwargs): with warnings.catch_warnings(): warnings.filterwarnings("ignore", message=".*scipy.*18.*", category=UserWarning) return affine_transform(**kwargs) @warning_for_keywords() def reslice( data, affine, zooms, new_zooms, *, order=1, mode="constant", cval=0, num_processes=1 ): """Reslice data with new voxel resolution defined by ``new_zooms``. Parameters ---------- data : array, shape (I,J,K) or (I,J,K,N) 3d volume or 4d volume with datasets affine : array, shape (4,4) mapping from voxel coordinates to world coordinates zooms : tuple, shape (3,) voxel size for (i,j,k) dimensions new_zooms : tuple, shape (3,) new voxel size for (i,j,k) after resampling order : int, from 0 to 5 order of interpolation for resampling/reslicing, 0 nearest interpolation, 1 trilinear etc.. if you don't want any smoothing 0 is the option you need. mode : string ('constant', 'nearest', 'reflect' or 'wrap') Points outside the boundaries of the input are filled according to the given mode. cval : float Value used for points outside the boundaries of the input if mode='constant'. num_processes : int, optional Split the calculation to a pool of children processes. This only applies to 4D `data` arrays. Default is 1. If < 0 the maximal number of cores minus ``num_processes + 1`` is used (enter -1 to use as many cores as possible). 0 raises an error. Returns ------- data2 : array, shape (I,J,K) or (I,J,K,N) datasets resampled into isotropic voxel size affine2 : array, shape (4,4) new affine for the resampled image Examples -------- >>> from dipy.io.image import load_nifti >>> from dipy.align.reslice import reslice >>> from dipy.data import get_fnames >>> f_name = get_fnames(name="aniso_vox") >>> data, affine, zooms = load_nifti(f_name, return_voxsize=True) >>> data.shape == (58, 58, 24) True >>> zooms (4.0, 4.0, 5.0) >>> new_zooms = (3.,3.,3.) >>> new_zooms (3.0, 3.0, 3.0) >>> data2, affine2 = reslice(data, affine, zooms, new_zooms) >>> data2.shape == (77, 77, 40) True """ num_processes = determine_num_processes(num_processes) # We are suppressing warnings emitted by scipy >= 0.18, # described in https://github.com/dipy/dipy/issues/1107. 
# These warnings are not relevant to us, as long as our offset # input to scipy's affine_transform is [0, 0, 0] with warnings.catch_warnings(): warnings.filterwarnings("ignore", message=".*scipy.*18.*", category=UserWarning) new_zooms = np.array(new_zooms, dtype="f8") zooms = np.array(zooms, dtype="f8") R = new_zooms / zooms new_shape = zooms / new_zooms * np.array(data.shape[:3]) new_shape = tuple(np.round(new_shape).astype("i8")) kwargs = { "matrix": R, "output_shape": new_shape, "order": order, "mode": mode, "cval": cval, } if data.ndim == 3: data2 = affine_transform(input=data, **kwargs) elif data.ndim == 4: data2 = np.zeros(new_shape + (data.shape[-1],), data.dtype) if num_processes == 1: for i in range(data.shape[-1]): affine_transform(input=data[..., i], output=data2[..., i], **kwargs) else: params = [] for i in range(data.shape[-1]): _kwargs = {"input": data[..., i]} _kwargs.update(kwargs) params.append(_kwargs) mp.set_start_method("spawn", force=True) pool = mp.Pool(num_processes) for i, res in enumerate(pool.imap(_affine_transform, params)): data2[..., i] = res pool.close() else: raise ValueError( f"dimension of data should be 3 or 4 but you provided {data.ndim}" ) Rx = np.eye(4) Rx[:3, :3] = np.diag(R) affine2 = np.dot(affine, Rx) return (data2, affine2) dipy-1.11.0/dipy/align/scalespace.py000066400000000000000000000417171476546756600173170ustar00rootroot00000000000000import logging import numpy as np import numpy.linalg as npl from scipy.ndimage import gaussian_filter from dipy.align import floating from dipy.testing.decorators import warning_for_keywords logger = logging.getLogger(__name__) class ScaleSpace: @warning_for_keywords() def __init__( self, image, num_levels, *, image_grid2world=None, input_spacing=None, sigma_factor=0.2, mask0=False, ): """ScaleSpace. Computes the Scale Space representation of an image. The scale space is simply a list of images produced by smoothing the input image with a Gaussian kernel with increasing smoothing parameter. If the image's voxels are isotropic, the smoothing will be the same along all directions: at level L = 0, 1, ..., the sigma is given by $s * ( 2^L - 1 )$. If the voxel dimensions are not isotropic, then the smoothing is weaker along low resolution directions. Parameters ---------- image : array, shape (r,c) or (s, r, c) where s is the number of slices, r is the number of rows and c is the number of columns of the input image. num_levels : int the desired number of levels (resolutions) of the scale space image_grid2world : array, shape (dim + 1, dim + 1), optional the grid-to-space transform of the image grid. The default is the identity matrix input_spacing : array, shape (dim,), optional the spacing (voxel size) between voxels in physical space. The default is 1.0 along all axes sigma_factor : float, optional the smoothing factor to be used in the construction of the scale space. The default is 0.2 mask0 : Boolean, optional if True, all smoothed images will be zero at all voxels that are zero in the input image. The default is False. """ self.dim = len(image.shape) self.num_levels = num_levels input_size = np.array(image.shape) if mask0: mask = np.asarray(image > 0, dtype=np.int32) # Normalize input image to [0,1] img = (image - np.min(image)) / (np.max(image) - np.min(image)) if mask0: img *= mask # The properties are saved in separate lists. 
Insert input image # properties at the first level of the scale space self.images = [img.astype(floating)] self.domain_shapes = [input_size.astype(np.int32)] if input_spacing is None: input_spacing = np.ones((self.dim,), dtype=np.int32) self.spacings = [input_spacing] self.scalings = [np.ones(self.dim)] self.affines = [image_grid2world] self.sigmas = [np.zeros(self.dim)] if image_grid2world is not None: self.affine_invs = [npl.inv(image_grid2world)] else: self.affine_invs = [None] # Compute the rest of the levels min_spacing = np.min(input_spacing) for i in range(1, num_levels): scaling_factor = 2**i # Note: the minimum below is present in ANTS to prevent the scaling # from being too large (making the sub-sampled image to be too # small) this makes the sub-sampled image at least 32 voxels at # each direction it is risky to make this decision based on image # size, though (we need to investigate more the effect of this) # scaling = np.minimum(scaling_factor * min_spacing /input_spacing, # input_size / 32) scaling = scaling_factor * min_spacing / input_spacing output_spacing = input_spacing * scaling extended = np.append(scaling, [1]) if image_grid2world is not None: affine = image_grid2world.dot(np.diag(extended)) else: affine = np.diag(extended) output_size = input_size * (input_spacing / output_spacing) + 0.5 output_size = output_size.astype(np.int32) sigmas = sigma_factor * (output_spacing / input_spacing - 1.0) # Filter along each direction with the appropriate sigma filtered = gaussian_filter(image, sigmas) filtered = (filtered - np.min(filtered)) / ( np.max(filtered) - np.min(filtered) ) if mask0: filtered *= mask # Add current level to the scale space self.images.append(filtered.astype(floating)) self.domain_shapes.append(output_size) self.spacings.append(output_spacing) self.scalings.append(scaling) self.affines.append(affine) self.affine_invs.append(npl.inv(affine)) self.sigmas.append(sigmas) def get_expand_factors(self, from_level, to_level): """Ratio of voxel size from pyramid level from_level to to_level. Given two scale space resolutions a = from_level, b = to_level, returns the ratio of voxels size at level b to voxel size at level a (the factor that must be used to multiply voxels at level a to 'expand' them to level b). Parameters ---------- from_level : int, 0 <= from_level < L, (L = number of resolutions) the resolution to expand voxels from to_level : int, 0 <= to_level < from_level the resolution to expand voxels to Returns ------- factors : array, shape (k,), k = 2, 3 the expand factors (a scalar for each voxel dimension) """ factors = np.array(self.spacings[to_level]) / np.array( self.spacings[from_level] ) return factors def print_level(self, level): """Prints properties of a pyramid level. Prints the properties of a level of this scale space to standard output Parameters ---------- level : int, 0 <= from_level < L, (L = number of resolutions) the scale space level to be printed """ logger.info(f"Domain shape: {self.get_domain_shape(level)}") logger.info(f"Spacing: {self.get_spacing(level)}") logger.info(f"Scaling: {self.get_scaling(level)}") logger.info(f"Affine: {self.get_affine(level)}") logger.info(f"Sigmas: {self.get_sigmas(level)}") def _get_attribute(self, attribute, level): """Return an attribute from the Scale Space at a given level. Returns the level-th element of attribute if level is a valid level of this scale space. Otherwise, returns None. 
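        (As implemented, an invalid level actually raises a ``ValueError``
        instead of returning ``None``; see the Returns section below.)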
Parameters ---------- attribute : list the attribute to retrieve the level-th element from level : int, the index of the required element from attribute. Returns ------- attribute[level] : object the requested attribute if level is valid, else it raises a ValueError """ if 0 <= level < self.num_levels: return attribute[level] raise ValueError(f"Invalid pyramid level: {level}") def get_image(self, level): """Smoothed image at a given level. Returns the smoothed image at the requested level in the Scale Space. Parameters ---------- level : int, 0 <= from_level < L, (L = number of resolutions) the scale space level to get the smooth image from Returns ------- the smooth image at the requested resolution or None if an invalid level was requested """ return self._get_attribute(self.images, level) def get_domain_shape(self, level): """Shape the sub-sampled image must have at a particular level. Returns the shape the sub-sampled image must have at a particular resolution of the scale space (note that this object does not explicitly subsample the smoothed images, but only provides the properties the sub-sampled images must have). Parameters ---------- level : int, 0 <= from_level < L, (L = number of resolutions) the scale space level to get the sub-sampled shape from Returns ------- the sub-sampled shape at the requested resolution or None if an invalid level was requested """ return self._get_attribute(self.domain_shapes, level) def get_spacing(self, level): """Spacings the sub-sampled image must have at a particular level. Returns the spacings (voxel sizes) the sub-sampled image must have at a particular resolution of the scale space (note that this object does not explicitly subsample the smoothed images, but only provides the properties the sub-sampled images must have). Parameters ---------- level : int, 0 <= from_level < L, (L = number of resolutions) the scale space level to get the sub-sampled shape from Returns ------- the spacings (voxel sizes) at the requested resolution or None if an invalid level was requested """ return self._get_attribute(self.spacings, level) def get_scaling(self, level): """Adjustment factor for input-spacing to reflect voxel sizes at level. Returns the scaling factor that needs to be applied to the input spacing (the voxel sizes of the image at level 0 of the scale space) to transform them to voxel sizes at the requested level. Parameters ---------- level : int, 0 <= from_level < L, (L = number of resolutions) the scale space level to get the scalings from Returns ------- the scaling factors from the original spacing to the spacings at the requested level """ return self._get_attribute(self.scalings, level) def get_affine(self, level): """Voxel-to-space transformation at a given level. Returns the voxel-to-space transformation associated with the sub-sampled image at a particular resolution of the scale space (note that this object does not explicitly subsample the smoothed images, but only provides the properties the sub-sampled images must have). Parameters ---------- level : int, 0 <= from_level < L, (L = number of resolutions) the scale space level to get affine transform from Returns ------- the affine (voxel-to-space) transform at the requested resolution or None if an invalid level was requested """ return self._get_attribute(self.affines, level) def get_affine_inv(self, level): """Space-to-voxel transformation at a given level. 
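        The inverse is precomputed with ``numpy.linalg.inv`` when the scale
        space is built, so no matrix inversion is performed at query time.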
Returns the space-to-voxel transformation associated with the sub-sampled image at a particular resolution of the scale space (note that this object does not explicitly subsample the smoothed images, but only provides the properties the sub-sampled images must have). Parameters ---------- level : int, 0 <= from_level < L, (L = number of resolutions) the scale space level to get the inverse transform from Returns ------- the inverse (space-to-voxel) transform at the requested resolution or None if an invalid level was requested """ return self._get_attribute(self.affine_invs, level) def get_sigmas(self, level): """Smoothing parameters used at a given level. Returns the smoothing parameters (a scalar for each axis) used at the requested level of the scale space Parameters ---------- level : int, 0 <= from_level < L, (L = number of resolutions) the scale space level to get the smoothing parameters from Returns ------- the smoothing parameters at the requested level """ return self._get_attribute(self.sigmas, level) class IsotropicScaleSpace(ScaleSpace): @warning_for_keywords() def __init__( self, image, factors, sigmas, *, image_grid2world=None, input_spacing=None, mask0=False, ): """IsotropicScaleSpace. Computes the Scale Space representation of an image using isotropic smoothing kernels for all scales. The scale space is simply a list of images produced by smoothing the input image with a Gaussian kernel with different smoothing parameters. This specialization of ScaleSpace allows the user to provide custom scale and smoothing factors for all scales. Parameters ---------- image : array, shape (r,c) or (s, r, c) where s is the number of slices, r is the number of rows and c is the number of columns of the input image. factors : list of floats custom scale factors to build the scale space (one factor for each scale). sigmas : list of floats custom smoothing parameter to build the scale space (one parameter for each scale). image_grid2world : array, shape (dim + 1, dim + 1), optional the grid-to-space transform of the image grid. The default is the identity matrix. input_spacing : array, shape (dim,), optional the spacing (voxel size) between voxels in physical space. The default if 1.0 along all axes. mask0 : Boolean, optional if True, all smoothed images will be zero at all voxels that are zero in the input image. The default is False. """ self.dim = len(image.shape) self.num_levels = len(factors) if len(sigmas) != self.num_levels: raise ValueError("sigmas and factors must have the same length") input_size = np.array(image.shape) if mask0: mask = np.asarray(image > 0, dtype=np.int32) # Normalize input image to [0,1] img = (image.astype(np.float64) - np.min(image)) / ( np.max(image) - np.min(image) ) if mask0: img *= mask # The properties are saved in separate lists. 
Insert input image # properties at the first level of the scale space self.images = [img.astype(floating)] self.domain_shapes = [input_size.astype(np.int32)] if input_spacing is None: input_spacing = np.ones((self.dim,), dtype=np.int32) self.spacings = [input_spacing] self.scalings = [np.ones(self.dim)] self.affines = [image_grid2world] self.sigmas = [np.ones(self.dim) * sigmas[self.num_levels - 1]] if image_grid2world is not None: self.affine_invs = [npl.inv(image_grid2world)] else: self.affine_invs = [None] # Compute the rest of the levels min_index = np.argmin(input_spacing) for i in range(1, self.num_levels): factor = factors[self.num_levels - 1 - i] shrink_factors = np.zeros(self.dim) new_spacing = np.zeros(self.dim) shrink_factors[min_index] = factor new_spacing[min_index] = input_spacing[min_index] * factor for j in range(self.dim): if j != min_index: # Select the factor that maximizes isotropy shrink_factors[j] = factor new_spacing[j] = input_spacing[j] * factor min_diff = np.abs(new_spacing[j] - new_spacing[min_index]) for f in range(1, factor): diff = input_spacing[j] * f - new_spacing[min_index] diff = np.abs(diff) if diff < min_diff: shrink_factors[j] = f new_spacing[j] = input_spacing[j] * f min_diff = diff extended = np.append(shrink_factors, [1]) if image_grid2world is not None: affine = image_grid2world.dot(np.diag(extended)) else: affine = np.diag(extended) output_size = (input_size / shrink_factors).astype(np.int32) new_sigmas = np.ones(self.dim) * sigmas[self.num_levels - i - 1] # Filter along each direction with the appropriate sigma filtered = gaussian_filter(image.astype(np.float64), new_sigmas) filtered = (filtered.astype(np.float64) - np.min(filtered)) / ( np.max(filtered) - np.min(filtered) ) if mask0: filtered *= mask # Add current level to the scale space self.images.append(filtered.astype(floating)) self.domain_shapes.append(output_size) self.spacings.append(new_spacing) self.scalings.append(shrink_factors) self.affines.append(affine) self.affine_invs.append(npl.inv(affine)) self.sigmas.append(new_sigmas) dipy-1.11.0/dipy/align/streamlinear.py000066400000000000000000001344221476546756600176760ustar00rootroot00000000000000import abc from itertools import combinations import logging from time import time import numpy as np from dipy.align.bundlemin import ( _bundle_minimum_distance, _bundle_minimum_distance_asymmetric, distance_matrix_mdf, ) from dipy.core.geometry import compose_matrix, compose_transformations, decompose_matrix from dipy.core.optimize import Optimizer from dipy.segment.clustering import qbx_and_merge from dipy.testing.decorators import warning_for_keywords from dipy.tracking.streamline import ( Streamlines, center_streamlines, length, select_random_set_of_streamlines, set_number_of_points, transform_streamlines, unlist_streamlines, ) DEFAULT_BOUNDS = [ (-35, 35), (-35, 35), (-35, 35), (-45, 45), (-45, 45), (-45, 45), (0.6, 1.4), (0.6, 1.4), (0.6, 1.4), (-10, 10), (-10, 10), (-10, 10), ] logger = logging.getLogger(__name__) class StreamlineDistanceMetric(metaclass=abc.ABCMeta): @warning_for_keywords() def __init__(self, *, num_threads=None): """An abstract class for the metric used for streamline registration. If the two sets of streamlines match exactly then method ``distance`` of this object should be minimum. Parameters ---------- num_threads : int, optional Number of threads to be used for OpenMP parallelization. If None (default) the value of OMP_NUM_THREADS environment variable is used if it is set, otherwise all available threads are used. 
If < 0 the maximal number of threads minus $|num_threads + 1|$ is used (enter -1 to use as many threads as possible). 0 raises an error. Only metrics using OpenMP will use this variable. """ self.static = None self.moving = None self.num_threads = num_threads @abc.abstractmethod def setup(self, static, moving): pass @abc.abstractmethod def distance(self, xopt): """calculate distance for current set of parameters.""" pass class BundleMinDistanceMetric(StreamlineDistanceMetric): """Bundle-based Minimum Distance aka BMD. This is the cost function used by the StreamlineLinearRegistration. See :footcite:p:`Garyfallidis2014b` for further details about the metric. Methods ------- setup(static, moving) distance(xopt) References ---------- .. footbibliography:: """ def setup(self, static, moving): """Setup static and moving sets of streamlines. Parameters ---------- static : streamlines Fixed or reference set of streamlines. moving : streamlines Moving streamlines. Notes ----- Call this after the object is initiated and before distance. """ self._set_static(static) self._set_moving(moving) def _set_static(self, static): static_centered_pts, st_idx = unlist_streamlines(static) self.static_centered_pts = np.ascontiguousarray( static_centered_pts, dtype=np.float64 ) self.block_size = st_idx[0] def _set_moving(self, moving): self.moving_centered_pts, _ = unlist_streamlines(moving) def distance(self, xopt): """Distance calculated from this Metric. Parameters ---------- xopt : sequence List of affine parameters as an 1D vector, """ return bundle_min_distance_fast( xopt, self.static_centered_pts, self.moving_centered_pts, self.block_size, num_threads=self.num_threads, ) class BundleMinDistanceMatrixMetric(StreamlineDistanceMetric): """Bundle-based Minimum Distance aka BMD This is the cost function used by the StreamlineLinearRegistration Methods ------- setup(static, moving) distance(xopt) Notes ----- The difference with BundleMinDistanceMetric is that this creates the entire distance matrix and therefore requires more memory. """ def setup(self, static, moving): """Setup static and moving sets of streamlines. Parameters ---------- static : streamlines Fixed or reference set of streamlines. moving : streamlines Moving streamlines. Notes ----- Call this after the object is initiated and before distance. Num_threads is not used in this class. Use ``BundleMinDistanceMetric`` for a faster, threaded and less memory hungry metric """ self.static = static self.moving = moving def distance(self, xopt): """Distance calculated from this Metric. Parameters ---------- xopt : sequence List of affine parameters as an 1D vector """ return bundle_min_distance(xopt, self.static, self.moving) class BundleMinDistanceAsymmetricMetric(BundleMinDistanceMetric): """Asymmetric Bundle-based Minimum distance. This is a cost function that can be used by the StreamlineLinearRegistration class. """ def distance(self, xopt): """Distance calculated from this Metric. Parameters ---------- xopt : sequence List of affine parameters as an 1D vector """ return bundle_min_distance_asymmetric_fast( xopt, self.static_centered_pts, self.moving_centered_pts, self.block_size ) class BundleSumDistanceMatrixMetric(BundleMinDistanceMatrixMetric): """Bundle-based Sum Distance aka BMD This is a cost function that can be used by the StreamlineLinearRegistration class. 
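    The cost is the sum of all entries of the MDF distance matrix between the
    two bundles, $\sum_{i,j} MDF(s_i, m_j)$, rather than the combination of
    row-wise and column-wise minima used by BMD.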
Methods ------- setup(static, moving) distance(xopt) Notes ----- The difference with BundleMinDistanceMatrixMetric is that it uses uses the sum of the distance matrix and not the sum of mins. """ def distance(self, xopt): """Distance calculated from this Metric Parameters ---------- xopt : sequence List of affine parameters as an 1D vector """ return bundle_sum_distance(xopt, self.static, self.moving) class JointBundleMinDistanceMetric(StreamlineDistanceMetric): """Bundle-based Minimum Distance for joint optimization. This cost function is used by the StreamlineLinearRegistration class when running halfway streamline linear registration for unbiased groupwise bundle registration and atlasing. It computes the BMD distance after moving both static and moving bundles to a halfway space in between both. Methods ------- setup(static, moving) distance(xopt) Notes ----- In this metric both static and moving bundles are treated equally (i.e., there is no static reference bundle as both are intended to move). The naming convention is kept for consistency. """ def setup(self, static, moving): """Setup static and moving sets of streamlines. Parameters ---------- static : streamlines Set of streamlines moving : streamlines Set of streamlines Notes ----- Call this after the object is initiated and before distance. Num_threads is not used in this class. """ self.static = static self.moving = moving def distance(self, xopt): """Distance calculated from this Metric. Parameters ---------- xopt : sequence List of affine parameters as an 1D vector. These affine parameters are used to derive the corresponding halfway transformation parameters for each bundle. """ # Define halfway space transformations x_static = np.concatenate((xopt[0:6] / 2, (1 + xopt[6:9]) / 2, xopt[9:12] / 2)) x_moving = np.concatenate( (-xopt[0:6] / 2, 2 / (1 + xopt[6:9]), -xopt[9:12] / 2) ) # Move static bundle to the halfway space aff_static = compose_matrix44(x_static) static = transform_streamlines(self.static, aff_static) # Move moving bundle to halfway space and compute distance return bundle_min_distance(x_moving, static, self.moving) class StreamlineLinearRegistration: @warning_for_keywords() def __init__( self, *, metric=None, x0="rigid", method="L-BFGS-B", bounds=None, verbose=False, options=None, evolution=False, num_threads=None, ): r"""Linear registration of 2 sets of streamlines. See :footcite:p:`Garyfallidis2015` for further details about the method. Parameters ---------- metric : StreamlineDistanceMetric, If None and fast is False then the BMD distance is used. If fast is True then a faster implementation of BMD is used. Otherwise, use the given distance metric. x0 : array or int or str Initial parametrization for the optimization. If 1D array with: a) 6 elements then only rigid registration is performed with the 3 first elements for translation and 3 for rotation. b) 7 elements also isotropic scaling is performed (similarity). c) 12 elements then translation, rotation (in degrees), scaling and shearing is performed (affine). Here is an example of x0 with 12 elements: ``x0=np.array([0, 10, 0, 40, 0, 0, 2., 1.5, 1, 0.1, -0.5, 0])`` This has translation (0, 10, 0), rotation (40, 0, 0) in degrees, scaling (2., 1.5, 1) and shearing (0.1, -0.5, 0). 
If int: a) 6 ``x0 = np.array([0, 0, 0, 0, 0, 0])`` b) 7 ``x0 = np.array([0, 0, 0, 0, 0, 0, 1.])`` c) 12 ``x0 = np.array([0, 0, 0, 0, 0, 0, 1., 1., 1, 0, 0, 0])`` If str: a) "rigid" ``x0 = np.array([0, 0, 0, 0, 0, 0])`` b) "similarity" ``x0 = np.array([0, 0, 0, 0, 0, 0, 1.])`` c) "affine" ``x0 = np.array([0, 0, 0, 0, 0, 0, 1., 1., 1, 0, 0, 0])`` method : str, 'L_BFGS_B' or 'Powell' optimizers can be used. bounds : list of tuples or None, If method == 'L_BFGS_B' then we can use bounded optimization. For example for the six parameters of rigid rotation we can set the bounds = [(-30, 30), (-30, 30), (-30, 30), (-45, 45), (-45, 45), (-45, 45)] That means that we have set the bounds for the three translations and three rotation axes (in degrees). verbose : bool, optional. If True, if True then information about the optimization is shown. options : None or dict, Extra options to be used with the selected method. evolution : boolean If True save the transformation for each iteration of the optimizer. Supported only with Scipy >= 0.11. num_threads : int, optional Number of threads to be used for OpenMP parallelization. If None (default) the value of OMP_NUM_THREADS environment variable is used if it is set, otherwise all available threads are used. If < 0 the maximal number of threads minus $|num_threads + 1|$ is used (enter -1 to use as many threads as possible). 0 raises an error. Only metrics using OpenMP will use this variable. References ---------- .. footbibliography:: """ # noqa: E501 self.x0 = self._set_x0(x0) self.metric = metric if self.metric is None: self.metric = BundleMinDistanceMetric(num_threads=num_threads) self.verbose = verbose self.method = method if self.method not in ["Powell", "L-BFGS-B"]: raise ValueError("Only Powell and L-BFGS-B can be used") self.bounds = bounds self.options = options self.evolution = evolution @warning_for_keywords() def optimize(self, static, moving, *, mat=None): """Find the minimum of the provided metric. Parameters ---------- static : streamlines Reference or fixed set of streamlines. moving : streamlines Moving set of streamlines. mat : array Transformation (4, 4) matrix to start the registration. ``mat`` is applied to moving. Default value None which means that initial transformation will be generated by shifting the centers of moving and static sets of streamlines to the origin. Returns ------- map : StreamlineRegistrationMap """ msg = "need to have the same number of points. 
Use " msg += "set_number_of_points from dipy.tracking.streamline" if not np.all(np.array(list(map(len, static))) == static[0].shape[0]): raise ValueError(f"Static streamlines {msg}") if not np.all(np.array(list(map(len, moving))) == moving[0].shape[0]): raise ValueError(f"Moving streamlines {msg}") if not np.all(np.array(list(map(len, moving))) == static[0].shape[0]): raise ValueError(f"Static and moving streamlines {msg}") if mat is None: static_centered, static_shift = center_streamlines(static) moving_centered, moving_shift = center_streamlines(moving) static_mat = compose_matrix44( [static_shift[0], static_shift[1], static_shift[2], 0, 0, 0] ) moving_mat = compose_matrix44( [-moving_shift[0], -moving_shift[1], -moving_shift[2], 0, 0, 0] ) else: static_centered = static moving_centered = transform_streamlines(moving, mat) static_mat = np.eye(4) moving_mat = mat self.metric.setup(static_centered, moving_centered) distance = self.metric.distance if self.method == "Powell": if self.options is None: self.options = {"xtol": 1e-6, "ftol": 1e-6, "maxiter": 1e6} opt = Optimizer( distance, self.x0.tolist(), method=self.method, options=self.options, evolution=self.evolution, ) if self.method == "L-BFGS-B": if self.options is None: self.options = { "maxcor": 10, "ftol": 1e-7, "gtol": 1e-5, "eps": 1e-8, "maxiter": 100, } opt = Optimizer( distance, self.x0.tolist(), method=self.method, bounds=self.bounds, options=self.options, evolution=self.evolution, ) if self.verbose: opt.print_summary() opt_mat = compose_matrix44(opt.xopt) mat = compose_transformations(moving_mat, opt_mat, static_mat) mat_history = [] if opt.evolution is not None: for vecs in opt.evolution: mat_history.append( compose_transformations( moving_mat, compose_matrix44(vecs), static_mat ) ) # If we are running halfway streamline linear registration (for # groupwise registration or atlasing) the registration map is different if isinstance(self.metric, JointBundleMinDistanceMetric): srm = JointStreamlineRegistrationMap( opt.xopt, opt.fopt, mat_history, opt.nfev, opt.nit ) else: srm = StreamlineRegistrationMap( mat, opt.xopt, opt.fopt, mat_history, opt.nfev, opt.nit ) del opt return srm def _set_x0(self, x0): """check if input is of correct type.""" if hasattr(x0, "ndim"): if len(x0) not in [3, 6, 7, 9, 12]: m_ = "Only 1D arrays of 3, 6, 7, 9 and 12 elements are allowed" raise ValueError(m_) if x0.ndim != 1: raise ValueError("Array should have only one dimension") return x0 if isinstance(x0, str): if x0.lower() == "translation": return np.zeros(3) if x0.lower() == "rigid": return np.zeros(6) if x0.lower() == "similarity": return np.array([0, 0, 0, 0, 0, 0, 1.0]) if x0.lower() == "scaling": return np.array([0, 0, 0, 0, 0, 0, 1.0, 1.0, 1.0]) if x0.lower() == "affine": return np.array([0, 0, 0, 0, 0, 0, 1.0, 1.0, 1.0, 0, 0, 0]) if isinstance(x0, int): if x0 not in [3, 6, 7, 9, 12]: msg = "Only 3, 6, 7, 9 and 12 are accepted as integers" raise ValueError(msg) else: if x0 == 3: return np.zeros(3) if x0 == 6: return np.zeros(6) if x0 == 7: return np.array([0, 0, 0, 0, 0, 0, 1.0]) if x0 == 9: return np.array([0, 0, 0, 0, 0, 0, 1.0, 1.0, 1.0]) if x0 == 12: return np.array([0, 0, 0, 0, 0, 0, 1.0, 1.0, 1.0, 0, 0, 0]) raise ValueError("Wrong input") class StreamlineRegistrationMap: def __init__(self, matopt, xopt, fopt, matopt_history, funcs, iterations): r"""A map holding the optimum affine matrix and some other parameters of the optimization Parameters ---------- matopt : array, 4x4 affine matrix which transforms the moving to the static 
streamlines xopt : array, 1d array with the parameters of the transformation after centering fopt : float, final value of the metric matopt_history : array All transformation matrices created during the optimization funcs : int, Number of function evaluations of the optimizer iterations : int Number of iterations of the optimizer """ self.matrix = matopt self.xopt = xopt self.fopt = fopt self.matrix_history = matopt_history self.funcs = funcs self.iterations = iterations def transform(self, moving): """Transform moving streamlines to the static. Parameters ---------- moving : streamlines Returns ------- moved : streamlines Notes ----- All this does is apply ``self.matrix`` to the input streamlines. """ return transform_streamlines(moving, self.matrix) class JointStreamlineRegistrationMap: def __init__(self, xopt, fopt, matopt_history, funcs, iterations): """A map holding the optimum affine matrices for halfway streamline linear registration and some other parameters of the optimization. xopt is optimized by StreamlineLinearRegistration using the JointBundleMinDistanceMetric. In that case the mat argument of the optimize method needs to be np.eye(4) to avoid streamline centering. This constructor derives and stores the transformations to move both static and moving bundles to the halfway space. Parameters ---------- xopt : array 1d array with the parameters of the transformation. fopt : float Final value of the metric. matopt_history : array All transformation matrices created during the optimization. funcs : int Number of function evaluations of the optimizer. iterations : int Number of iterations of the optimizer. """ trans, angles, scale, shear = xopt[:3], xopt[3:6], xopt[6:9], xopt[9:] self.x1 = np.concatenate((trans / 2, angles / 2, (1 + scale) / 2, shear / 2)) self.x2 = np.concatenate((-trans / 2, -angles / 2, 2 / (1 + scale), -shear / 2)) self.matrix1 = compose_matrix44(self.x1) self.matrix2 = compose_matrix44(self.x2) self.fopt = fopt self.matrix_history = matopt_history self.funcs = funcs self.iterations = iterations def transform(self, static, moving): """Transform both static and moving bundles to the halfway space. All this does is apply ``self.matrix1`` and `self.matrix2`` to the static and moving bundles, respectively. Parameters ---------- static : streamlines moving : streamlines Returns ------- static : streamlines moving : streamlines """ static = transform_streamlines(static, self.matrix1) moving = transform_streamlines(moving, self.matrix2) return static, moving @warning_for_keywords() def bundle_sum_distance(t, static, moving, *, num_threads=None): """MDF distance optimization function (SUM). We minimize the distance between moving streamlines as they align with the static streamlines. Parameters ---------- t : ndarray t is a vector of affine transformation parameters with size at least 6. If the size is 6, t is interpreted as translation + rotation. If the size is 7, t is interpreted as translation + rotation + isotropic scaling. If size is 12, t is interpreted as translation + rotation + scaling + shearing. static : list Static streamlines moving : list Moving streamlines. These will be transformed to align with the static streamlines num_threads : int, optional Number of threads. If -1 then all available threads will be used. 
Returns ------- cost: float """ aff = compose_matrix44(t) moving = transform_streamlines(moving, aff) d01 = distance_matrix_mdf(static, moving) return np.sum(d01) def bundle_min_distance(t, static, moving): """MDF-based pairwise distance optimization function (MIN). We minimize the distance between moving streamlines as they align with the static streamlines. Parameters ---------- t : ndarray t is a vector of affine transformation parameters with size at least 6. If size is 6, t is interpreted as translation + rotation. If size is 7, t is interpreted as translation + rotation + isotropic scaling. If size is 12, t is interpreted as translation + rotation + scaling + shearing. static : list Static streamlines moving : list Moving streamlines. Returns ------- cost: float """ aff = compose_matrix44(t) moving = transform_streamlines(moving, aff) d01 = distance_matrix_mdf(static, moving) rows, cols = d01.shape return ( 0.25 * ( np.sum(np.min(d01, axis=0)) / float(cols) + np.sum(np.min(d01, axis=1)) / float(rows) ) ** 2 ) @warning_for_keywords() def bundle_min_distance_fast(t, static, moving, block_size, *, num_threads=None): """MDF-based pairwise distance optimization function (MIN). We minimize the distance between moving streamlines as they align with the static streamlines. Parameters ---------- t : array 1D array. t is a vector of affine transformation parameters with size at least 6. If the size is 6, t is interpreted as translation + rotation. If the size is 7, t is interpreted as translation + rotation + isotropic scaling. If size is 12, t is interpreted as translation + rotation + scaling + shearing. static : array N*M x 3 array. All the points of the static streamlines. With order of streamlines intact. Where N is the number of streamlines and M is the number of points per streamline. moving : array K*M x 3 array. All the points of the moving streamlines. With order of streamlines intact. Where K is the number of streamlines and M is the number of points per streamline. block_size : int Number of points per streamline. All streamlines in static and moving should have the same number of points M. num_threads : int, optional Number of threads to be used for OpenMP parallelization. If None (default) the value of OMP_NUM_THREADS environment variable is used if it is set, otherwise all available threads are used. If < 0 the maximal number of threads minus $|num_threads + 1|$ is used (enter -1 to use as many threads as possible). 0 raises an error. Returns ------- cost: float Notes ----- This is a faster implementation of ``bundle_min_distance``, which requires that all the points of each streamline are allocated into an ndarray (of shape N*M by 3, with N the number of points per streamline and M the number of streamlines). This can be done by calling `dipy.tracking.streamlines.unlist_streamlines`. """ aff = compose_matrix44(t) moving = np.dot(aff[:3, :3], moving.T).T + aff[:3, 3] moving = np.ascontiguousarray(moving, dtype=np.float64) rows = static.shape[0] // block_size cols = moving.shape[0] // block_size return _bundle_minimum_distance( static, moving, rows, cols, block_size, num_threads=num_threads ) def bundle_min_distance_asymmetric_fast(t, static, moving, block_size): """MDF-based pairwise distance optimization function (MIN). We minimize the distance between moving streamlines as they align with the static streamlines. Parameters ---------- t : array 1D array. t is a vector of affine transformation parameters with size at least 6. 
If the size is 6, t is interpreted as translation + rotation. If the size is 7, t is interpreted as translation + rotation + isotropic scaling. If size is 12, t is interpreted as translation + rotation + scaling + shearing. static : array N*M x 3 array. All the points of the static streamlines. With order of streamlines intact. Where N is the number of streamlines and M is the number of points per streamline. moving : array K*M x 3 array. All the points of the moving streamlines. With order of streamlines intact. Where K is the number of streamlines and M is the number of points per streamline. block_size : int Number of points per streamline. All streamlines in static and moving should have the same number of points M. Returns ------- cost: float """ aff = compose_matrix44(t) moving = np.dot(aff[:3, :3], moving.T).T + aff[:3, 3] moving = np.ascontiguousarray(moving, dtype=np.float64) rows = static.shape[0] // block_size cols = moving.shape[0] // block_size return _bundle_minimum_distance_asymmetric(static, moving, rows, cols, block_size) def remove_clusters_by_size(clusters, min_size=0): ob = filter(lambda c: len(c) >= min_size, clusters) centroids = Streamlines() for cluster in ob: centroids.append(cluster.centroid) return centroids @warning_for_keywords() def progressive_slr( static, moving, metric, x0, bounds, *, method="L-BFGS-B", verbose=False, num_threads=None, ): """Progressive SLR. This is a utility function that allows for example to do affine registration using Streamline-based Linear Registration (SLR) :footcite:p:`Garyfallidis2015` by starting with translation first, then rigid, then similarity, scaling and finally affine. Similarly, if for example, you want to perform rigid then you start with translation first. This progressive strategy can help with finding the optimal parameters of the final transformation. Parameters ---------- static : Streamlines Static streamlines. moving : Streamlines Moving streamlines. metric : StreamlineDistanceMetric Distance metric for registration optimization. x0 : string Could be any of 'translation', 'rigid', 'similarity', 'scaling', 'affine' bounds : array Boundaries of registration parameters. See variable `DEFAULT_BOUNDS` for example. method : string L_BFGS_B' or 'Powell' optimizers can be used. Default is 'L_BFGS_B'. verbose : bool, optional. If True, log messages. num_threads : int, optional Number of threads to be used for OpenMP parallelization. If None (default) the value of OMP_NUM_THREADS environment variable is used if it is set, otherwise all available threads are used. If < 0 the maximal number of threads minus $|num_threads + 1|$ is used (enter -1 to use as many threads as possible). 0 raises an error. Only metrics using OpenMP will use this variable. References ---------- .. 
footbibliography:: """ if verbose: logger.info("Progressive Registration is Enabled") if x0 in ("translation", "rigid", "similarity", "scaling", "affine"): if verbose: logger.info(" Translation (3 parameters)...") slr_t = StreamlineLinearRegistration( metric=metric, x0="translation", bounds=bounds[:3], method=method ) slm_t = slr_t.optimize(static, moving) if x0 in ("rigid", "similarity", "scaling", "affine"): x_translation = slm_t.xopt x = np.zeros(6) x[:3] = x_translation if verbose: logger.info(" Rigid (6 parameters) ...") slr_r = StreamlineLinearRegistration( metric=metric, x0=x, bounds=bounds[:6], method=method ) slm_r = slr_r.optimize(static, moving) if x0 in ("similarity", "scaling", "affine"): x_rigid = slm_r.xopt x = np.zeros(7) x[:6] = x_rigid x[6] = 1.0 if verbose: logger.info(" Similarity (7 parameters) ...") slr_s = StreamlineLinearRegistration( metric=metric, x0=x, bounds=bounds[:7], method=method ) slm_s = slr_s.optimize(static, moving) if x0 in ("scaling", "affine"): x_similarity = slm_s.xopt x = np.zeros(9) x[:6] = x_similarity[:6] x[6:] = np.array((x_similarity[6],) * 3) if verbose: logger.info(" Scaling (9 parameters) ...") slr_c = StreamlineLinearRegistration( metric=metric, x0=x, bounds=bounds[:9], method=method ) slm_c = slr_c.optimize(static, moving) if x0 == "affine": x_scaling = slm_c.xopt x = np.zeros(12) x[:9] = x_scaling[:9] x[9:] = np.zeros(3) if verbose: logger.info(" Affine (12 parameters) ...") slr_a = StreamlineLinearRegistration( metric=metric, x0=x, bounds=bounds[:12], method=method ) slm_a = slr_a.optimize(static, moving) if x0 == "translation": slm = slm_t elif x0 == "rigid": slm = slm_r elif x0 == "similarity": slm = slm_s elif x0 == "scaling": slm = slm_c elif x0 == "affine": slm = slm_a else: raise ValueError("Incorrect SLR transform") return slm @warning_for_keywords() def slr_with_qbx( static, moving, *, x0="affine", rm_small_clusters=50, maxiter=100, select_random=None, verbose=False, greater_than=50, less_than=250, qbx_thr=(40, 30, 20, 15), nb_pts=20, progressive=True, rng=None, num_threads=None, ): """Utility function for registering large tractograms. For efficiency, we apply the registration on cluster centroids and remove small clusters. See :footcite:p:`Garyfallidis2014b`, :footcite:p:`Garyfallidis2015` and :footcite:p:`Garyfallidis2018` for details about the methods involved. Parameters ---------- static : Streamlines Fixed or reference set of streamlines. moving : streamlines Moving streamlines. x0 : str, optional. rigid, similarity or affine transformation model rm_small_clusters : int, optional Remove clusters that have less than `rm_small_clusters` maxiter : int, optional Maximum number of iterations to perform. select_random : int, optional. If not, None selects a random number of streamlines to apply clustering verbose : bool, optional If True, logs information about optimization. greater_than : int, optional Keep streamlines that have length greater than this value. less_than : int, optional Keep streamlines have length less than this value. qbx_thr : variable int Thresholds for QuickBundlesX. nb_pts : int, optional Number of points for discretizing each streamline. progressive : boolean, optional True to enable progressive registration. rng : np.random.Generator If None creates random generator in function. num_threads : int, optional Number of threads to be used for OpenMP parallelization. If None (default) the value of OMP_NUM_THREADS environment variable is used if it is set, otherwise all available threads are used. 
        If < 0 the maximal number of threads minus $|num_threads + 1|$ is
        used (enter -1 to use as many threads as possible). 0 raises an
        error. Only metrics using OpenMP will use this variable.

    Notes
    -----
    The order of operations is the following. First, short or long
    streamlines are removed. Second, the tractogram, or a random selection
    of it, is clustered with QuickBundlesX. Then SLR
    :footcite:p:`Garyfallidis2015` is applied.

    References
    ----------
    .. footbibliography::
    """
    if rng is None:
        rng = np.random.default_rng()

    if verbose:
        logger.info(f"Static streamlines size {len(static)}")
        logger.info(f"Moving streamlines size {len(moving)}")

    def check_range(streamline, gt=greater_than, lt=less_than):
        if (length(streamline) > gt) & (length(streamline) < lt):
            return True
        else:
            return False

    streamlines1 = Streamlines(static[np.array([check_range(s) for s in static])])
    streamlines2 = Streamlines(moving[np.array([check_range(s) for s in moving])])

    if verbose:
        logger.info(f"Static streamlines after length reduction {len(streamlines1)}")
        logger.info(f"Moving streamlines after length reduction {len(streamlines2)}")

    if select_random is not None:
        rstreamlines1 = select_random_set_of_streamlines(
            streamlines1, select_random, rng=rng
        )
    else:
        rstreamlines1 = streamlines1

    rstreamlines1 = set_number_of_points(rstreamlines1, nb_points=nb_pts)
    # Cast to float32; the original bare `astype` call discarded its result.
    rstreamlines1._data = rstreamlines1._data.astype("f4")

    cluster_map1 = qbx_and_merge(rstreamlines1, thresholds=qbx_thr, rng=rng)
    qb_centroids1 = remove_clusters_by_size(cluster_map1, rm_small_clusters)

    if select_random is not None:
        rstreamlines2 = select_random_set_of_streamlines(
            streamlines2, select_random, rng=rng
        )
    else:
        rstreamlines2 = streamlines2

    rstreamlines2 = set_number_of_points(rstreamlines2, nb_points=nb_pts)
    rstreamlines2._data = rstreamlines2._data.astype("f4")

    cluster_map2 = qbx_and_merge(rstreamlines2, thresholds=qbx_thr, rng=rng)
    qb_centroids2 = remove_clusters_by_size(cluster_map2, rm_small_clusters)

    if verbose:
        t = time()

    if not len(qb_centroids1):
        msg = "No cluster centroids found in Static Streamlines. Please "
        msg += "decrease the value of rm_small_clusters."
        raise ValueError(msg)

    if not len(qb_centroids2):
        msg = "No cluster centroids found in Moving Streamlines. Please "
        msg += "decrease the value of rm_small_clusters."
        raise ValueError(msg)

    if not progressive:
        slr = StreamlineLinearRegistration(
            x0=x0, options={"maxiter": maxiter}, num_threads=num_threads
        )
        slm = slr.optimize(qb_centroids1, qb_centroids2)
    else:
        bounds = DEFAULT_BOUNDS
        slm = progressive_slr(
            qb_centroids1,
            qb_centroids2,
            x0=x0,
            metric=None,
            bounds=bounds,
            num_threads=num_threads,
        )

    if verbose:
        logger.info(f"QB static centroids size {len(qb_centroids1)}")
        logger.info(f"QB moving centroids size {len(qb_centroids2)}")
        duration = time() - t
        logger.info(f"SLR finished in {duration:0.3f} seconds.")
        if slm.iterations is not None:
            logger.info(f"SLR iterations: {slm.iterations}")

    moved = slm.transform(moving)

    return moved, slm.matrix, qb_centroids1, qb_centroids2


# In essence whole_brain_slr can be thought of as a combination of
# SLR on QuickBundles centroids and some thresholding; see
# Garyfallidis et al., Recognition of white matter
# bundles using local and global streamline-based registration and
# clustering, NeuroImage, 2017.
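# A minimal usage sketch for `slr_with_qbx` (illustrative only; `f_static`
# and `f_moving` are hypothetical tractogram file names):
#
#     from dipy.io.streamline import load_tractogram
#
#     static = load_tractogram(f_static, "same", bbox_valid_check=False).streamlines
#     moving = load_tractogram(f_moving, "same", bbox_valid_check=False).streamlines
#     moved, affine_mat, centroids1, centroids2 = slr_with_qbx(static, moving)
#
# `moved` holds the transformed copy of `moving`; `affine_mat` can be reused
# with `dipy.tracking.streamline.transform_streamlines` to map the full
# (unclustered) tractogram into the static space.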
whole_brain_slr = slr_with_qbx


@warning_for_keywords()
def groupwise_slr(
    bundles,
    *,
    x0="affine",
    tol=0,
    max_iter=20,
    qbx_thr=(4,),
    nb_pts=20,
    select_random=10000,
    verbose=False,
    rng=None,
):
    """Function to perform unbiased groupwise bundle registration.

    All bundles are moved to the same space by iteratively applying halfway
    streamline linear registration in pairs. With each iteration, bundles get
    closer to each other until the procedure converges and there is no more
    improvement.

    See :footcite:p:`Garyfallidis2014b`, :footcite:p:`Garyfallidis2015` and
    :footcite:p:`Garyfallidis2018`.

    Parameters
    ----------
    bundles : list
        List with streamlines of the bundles to be registered.
    x0 : str, optional
        Rigid, similarity or affine transformation model.
    tol : float, optional
        Tolerance value to be used to assume convergence.
    max_iter : int, optional
        Maximum number of iterations. Depending on the number of bundles to
        be registered this may need to be larger.
    qbx_thr : variable int, optional
        Thresholds for QuickBundlesX used for clustering streamlines and
        reducing computational time. If None, no clustering is performed.
        Higher values cluster streamlines into a smaller number of centroids.
    nb_pts : int, optional
        Number of points for discretizing each streamline.
    select_random : int, optional
        Maximum number of streamlines for each bundle. If None, all the
        streamlines are used.
    verbose : bool, optional
        If True, logs information.
    rng : np.random.Generator, optional
        If None, creates random generator in function.

    References
    ----------
    .. footbibliography::
    """

    def group_distance(bundles, n_bundle):
        all_pairs = list(combinations(np.arange(n_bundle), 2))
        d = np.zeros(len(all_pairs))
        for i, ind in enumerate(all_pairs):
            mdf = distance_matrix_mdf(bundles[ind[0]], bundles[ind[1]])
            rows, cols = mdf.shape
            d[i] = (
                0.25
                * (
                    np.sum(np.min(mdf, axis=0)) / float(cols)
                    + np.sum(np.min(mdf, axis=1)) / float(rows)
                )
                ** 2
            )
        return d

    if rng is None:
        rng = np.random.default_rng()

    metric = JointBundleMinDistanceMetric()
    bundles = bundles.copy()
    n_bundle = len(bundles)

    if verbose:
        logging.info("Groupwise bundle registration running.")
        logging.info(f"Number of bundles found: {n_bundle}.")

    # Preprocess bundles: streamline selection, centering and clustering
    centroids = []
    aff_list = []
    for i in range(n_bundle):
        if verbose:
            logging.info(
                f"Preprocessing: bundle {i}/{n_bundle}: "
                + f"{len(bundles[i])} streamlines found."
            )

        if select_random is not None:
            bundles[i] = select_random_set_of_streamlines(
                bundles[i], select_random, rng=rng
            )
        bundles[i] = set_number_of_points(bundles[i], nb_points=nb_pts)

        bundle, shift = center_streamlines(bundles[i])
        aff_list.append(compose_matrix44(-shift))

        if qbx_thr is not None:
            cluster_map = qbx_and_merge(bundle, thresholds=qbx_thr, rng=rng)
            bundle = remove_clusters_by_size(cluster_map, 1)
        centroids.append(bundle)

    # Compute initial group distance (mean distance between all bundle pairs)
    d = group_distance(centroids, n_bundle)
    if verbose:
        logging.info(f"Initial group distance: {np.mean(d)}.")

    # Make pairs and start iterating
    pairs, excluded = get_unique_pairs(n_bundle)
    n_pair = n_bundle // 2

    for i_iter in range(1, max_iter + 1):
        for i_pair, pair in enumerate(pairs):
            ind1 = pair[0]
            ind2 = pair[1]

            centroids1 = centroids[ind1]
            centroids2 = centroids[ind2]

            hslr = StreamlineLinearRegistration(x0=x0, metric=metric)
            hsrm = hslr.optimize(static=centroids1, moving=centroids2, mat=np.eye(4))

            # Update transformation matrices
            aff_list[ind1] = np.dot(hsrm.matrix1, aff_list[ind1])
            aff_list[ind2] = np.dot(hsrm.matrix2, aff_list[ind2])

            centroids1, centroids2 = hsrm.transform(centroids1, centroids2)
            centroids[ind1] = centroids1
            centroids[ind2] = centroids2

            if verbose:
                logging.info(f"Iteration: {i_iter} pair: {i_pair+1}/{n_pair}.")

        d = np.vstack((d, group_distance(centroids, n_bundle)))

        # Use as reference the distance 3 iterations ago
        prev_iter = np.max([0, i_iter - 3])
        d_improve = np.mean(d[prev_iter, :]) - np.mean(d[i_iter, :])

        if verbose:
            logging.info(f"Iteration {i_iter} group distance: {np.mean(d[i_iter, :])}")
            logging.info(f"Iteration {i_iter} improvement previous 3: {d_improve}")

        if d_improve < tol:
            if verbose:
                logging.info(f"Registration converged: {d_improve} < {tol}")
            break

        pairs, excluded = get_unique_pairs(n_bundle, pairs=pairs)

    # Move bundles just once at the end
    for i, aff in enumerate(aff_list):
        bundles[i] = transform_streamlines(bundles[i], aff)

    return bundles, aff_list, d


@warning_for_keywords()
def get_unique_pairs(n_bundle, *, pairs=None):
    """Make unique pairs from n_bundle bundles.

    The function allows inputting a previous pairs assignment so that the
    new pairs are different.

    Parameters
    ----------
    n_bundle : int
        Number of bundles to be matched in pairs.
    pairs : array, optional
        Array containing the indexes of previous pairs.
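
    Returns
    -------
    new_pairs : array, shape (n_bundle // 2, 2)
        Array with the indexes of the newly formed pairs.
    excluded : int or None
        Index of the bundle left out when `n_bundle` is odd; None otherwise.

    Examples
    --------
    A small sketch of the expected output (the pairing itself is random):

    >>> from dipy.align.streamlinear import get_unique_pairs
    >>> pairs, excluded = get_unique_pairs(4)
    >>> pairs.shape
    (2, 2)
    >>> excluded is None
    True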
""" if not isinstance(n_bundle, int): raise TypeError(f"n_bundle must be an int but is a {type(n_bundle)}") if n_bundle <= 1: raise ValueError(f"n_bundle must be > 1 but is {n_bundle}") # Generate indexes index = np.arange(n_bundle) n_pair = n_bundle // 2 # If n_bundle is odd, we exclude one ensuring it wasn't previously excluded excluded = None if np.mod(n_bundle, 2) == 1: if pairs is None: excluded = np.random.choice(index) else: excluded = np.random.choice(np.unique(pairs)) index = index[index != excluded] # Shuffle indexes index = np.random.permutation(index) new_pairs = index.reshape((n_pair, 2)) if pairs is None or n_bundle <= 3: return new_pairs, excluded # Repeat the shuffle process until we find new unique pairs all_pairs = np.vstack((new_pairs, new_pairs[:, ::-1], pairs, pairs[:, ::-1])) while len(np.unique(all_pairs, axis=0)) < 4 * n_pair: index = np.random.permutation(index) new_pairs = index.reshape((n_pair, 2)) all_pairs = np.vstack((new_pairs, new_pairs[:, ::-1], pairs, pairs[:, ::-1])) return new_pairs, excluded def _threshold(x, th): return np.maximum(np.minimum(x, th), -th) @warning_for_keywords() def compose_matrix44(t, *, dtype=np.double): """Compose a 4x4 transformation matrix. Parameters ---------- t : ndarray This is a 1D vector of affine transformation parameters with size at least 3. If the size is 3, t is interpreted as translation. If the size is 6, t is interpreted as translation + rotation. If the size is 7, t is interpreted as translation + rotation + isotropic scaling. If the size is 9, t is interpreted as translation + rotation + anisotropic scaling. If size is 12, t is interpreted as translation + rotation + scaling + shearing. Returns ------- T : ndarray Homogeneous transformation matrix of size 4x4. """ if isinstance(t, list): t = np.array(t) size = t.size if size not in [3, 6, 7, 9, 12]: raise ValueError("Accepted number of parameters is 3, 6, 7, 9 and 12") MAX_DIST = 1e10 scale, shear, angles, translate = (None,) * 4 translate = _threshold(t[0:3], MAX_DIST) if size in [6, 7, 9, 12]: angles = np.deg2rad(t[3:6]) if size == 7: scale = np.array((t[6],) * 3) if size in [9, 12]: scale = t[6:9] if size == 12: shear = t[9:12] return compose_matrix(scale=scale, shear=shear, angles=angles, translate=translate) @warning_for_keywords() def decompose_matrix44(mat, *, size=12): """Given a 4x4 homogeneous matrix return the parameter vector. Parameters ---------- mat : array Homogeneous 4x4 transformation matrix size : int Size of the output vector. 3, for translation, 6 for rigid, 7 for similarity, 9 for scaling and 12 for affine. Default is 12. Returns ------- t : ndarray One dimensional ndarray of 3, 6, 7, 9 or 12 affine parameters. 
""" scale, shear, angles, translate, _ = decompose_matrix(mat) t = np.zeros(12) t[:3] = translate if size == 3: return t[:3] t[3:6] = np.rad2deg(angles) if size == 6: return t[:6] if size == 7: t[6] = np.mean(scale) return t[:7] if size == 9: t[6:9] = scale return t[:9] if size == 12: t[6:9] = scale t[9:12] = shear return t raise ValueError("Size can be 3, 6, 7, 9 or 12") dipy-1.11.0/dipy/align/streamwarp.py000077500000000000000000000221421476546756600173730ustar00rootroot00000000000000import warnings import numpy as np from scipy.optimize import linear_sum_assignment from dipy.align.bundlemin import distance_matrix_mdf from dipy.align.cpd import DeformableRegistration from dipy.align.streamlinear import slr_with_qbx from dipy.segment.clustering import QuickBundles from dipy.segment.metricspeed import AveragePointwiseEuclideanMetric from dipy.stats.analysis import assignment_map from dipy.testing.decorators import warning_for_keywords from dipy.tracking.streamline import Streamlines, length, unlist_streamlines from dipy.utils.optpkg import optional_package from dipy.viz.plotting import bundle_shape_profile pd, have_pd, _ = optional_package("pandas") def average_bundle_length(bundle): """Find average Euclidean length of the bundle in mm. Parameters ---------- bundle : Streamlines Bundle who's average length is to be calculated. Returns ------- int Average Euclidean length of bundle in mm. """ metric = AveragePointwiseEuclideanMetric() qb = QuickBundles(threshold=85.0, metric=metric) clusters = qb.cluster(bundle) centroids = Streamlines(clusters.centroids) return length(centroids)[0] def find_missing(lst, cb): """Find unmatched streamline indices in moving bundle. Parameters ---------- lst : List List of integers containing all the streamlines indices in moving bundle. cb : List List of integers containing streamline indices of the moving bundle that were not matched to any streamline in static bundle. Returns ------- list List containing unmatched streamlines from moving bundle """ return [x for x in range(0, len(cb)) if x not in lst] @warning_for_keywords() def bundlewarp( static, moving, *, dist=None, alpha=0.5, beta=20, max_iter=15, affine=True ): """Register two bundles using nonlinear method. See :footcite:p:`Chandio2023` for further details about the method. Parameters ---------- static : Streamlines Reference/fixed bundle. moving : Streamlines Target bundle that will be moved/registered to match the static bundle. dist : float, optional Precomputed distance matrix. alpha : float, optional Represents the trade-off between regularizing the deformation and having points match very closely. Lower value of alpha means high deformations. beta : int, optional Represents the strength of the interaction between points Gaussian kernel size. max_iter : int, optional Maximum number of iterations for deformation process in ml-CPD method. affine : boolean, optional If False, use rigid registration as starting point. Returns ------- deformed_bundle : Streamlines Nonlinearly moved bundle (warped bundle) moving_aligned : Streamlines Linearly moved bundle (affinely moved) dist : np.ndarray Float array containing distance between moving and static bundle matched_pairs : np.ndarray Int array containing streamline correspondences between two bundles warp : np.ndarray Nonlinear warp map generated by BundleWarp References ---------- .. 
footbibliography:: """ if alpha <= 0.01: warnings.warn( "Using alpha<=0.01 will result in extreme deformations", stacklevel=2 ) if average_bundle_length(static) <= 50: beta = 10 x0 = "affine" if affine else "rigid" moving_aligned, _, _, _ = slr_with_qbx(static, moving, x0=x0, rm_small_clusters=0) if dist is not None: print("using pre-computed distances") else: dist = distance_matrix_mdf(static, moving_aligned).T matched_pairs = np.zeros((len(moving), 2)) matched_pairs1 = np.asarray(linear_sum_assignment(dist)).T for mt in matched_pairs1: matched_pairs[mt[0]] = mt num = len(matched_pairs1) all_pairs = list(matched_pairs1[:, 0]) all_matched = False while all_matched is False: num = len(all_pairs) if num < len(moving): ml = find_missing(all_pairs, moving) dist2 = dist[:][ml] # dist2 has distance among unmatched streamlines of moving bundle # and all static bundle's streamlines matched_pairs2 = np.asarray(linear_sum_assignment(dist2)).T for i in range(matched_pairs2.shape[0]): matched_pairs2[i][0] = ml[matched_pairs2[i][0]] for mt in matched_pairs2: matched_pairs[mt[0]] = mt all_pairs.extend(matched_pairs2[:, 0]) num2 = num + len(matched_pairs2) if num2 == len(moving): all_matched = True num = num2 else: all_matched = True deformed_bundle = Streamlines([]) warp = [] # Iterate over each pair of streamlines and deform them # Append deformed streamlines in deformed_bundle for _, pairs in enumerate(matched_pairs): s1 = static[int(pairs[1])] s2 = moving_aligned[int(pairs[0])] static_s = s1 moving_s = s2 reg = DeformableRegistration( X=static_s, Y=moving_s, alpha=alpha, beta=beta, max_iterations=max_iter ) ty, pr = reg.register() ty = ty.astype(float) deformed_bundle.append(ty) warp.append(pr) warp = pd.DataFrame(warp, columns=["gaussian_kernel", "transforms"]) # Returns deformed bundle, affinely moved bundle, distance matrix, # streamline correspondences, and warp field return deformed_bundle, moving_aligned, dist, matched_pairs, warp def bundlewarp_vector_filed(moving_aligned, deformed_bundle): """Calculate vector fields. Vector field computation as the difference between each streamline point in the deformed and linearly aligned bundles Parameters ---------- moving_aligned : Streamlines Linearly (affinely) moved bundle deformed_bundle : Streamlines Nonlinearly (warped) bundle Returns ------- offsets : List Vector field modules directions : List Unitary vector directions colors : List Colors for bundle warping field vectors. Colors follow the convention used in DTI-derived maps (e.g. color FA) :footcite:p:`Pajevic1999`. References ---------- .. footbibliography:: """ points_aligned, _ = unlist_streamlines(moving_aligned) points_deformed, _ = unlist_streamlines(deformed_bundle) vector_field = points_deformed - points_aligned offsets = np.sqrt(np.sum((vector_field) ** 2, 1)) # vector field modules # Normalize vectors to be unitary (directions) directions = vector_field / np.array([offsets]).T # Define colors mapping the direction vectors to RGB. # Absolute value generates DTI-like colors colors = directions return offsets, directions, colors @warning_for_keywords() def bundlewarp_shape_analysis( moving_aligned, deformed_bundle, *, no_disks=10, plotting=False ): """Calculate bundle shape difference profile. Bundle shape difference analysis using magnitude from BundleWarp displacements and BUAN. 
Depending on the number of points of a streamline, and the number of segments requested, multiple points may be considered for the computation of a given segment; a segment may contain information from a single point; or some segments may not contain information from any points. In the latter case, the segment will contain an ``np.nan`` value. The point-to-segment mapping is defined by the :func:`assignment_map`: for each segment index, the point information of the matching index positions, as returned by :func:`assignment_map`, are considered for the computation. Parameters ---------- moving_aligned : Streamlines Linearly (affinely) moved bundle deformed_bundle : Streamlines Nonlinearly (warped) moved bundle no_disks : int, optional Number of segments to be created along the length of the bundle plotting : Boolean, optional Plot bundle shape profile Returns ------- shape_profile : np.ndarray Float array containing bundlewarp displacement magnitudes along the length of the bundle stdv : np.ndarray Float array containing standard deviations """ n = no_disks offsets, directions, colors = bundlewarp_vector_filed( moving_aligned, deformed_bundle ) indx = assignment_map(deformed_bundle, deformed_bundle, n) indx = np.array(indx) rng = np.random.default_rng() colors = rng.random((n, 3)) disks_color = [] for _, ind in enumerate(indx): disks_color.append(tuple(colors[ind])) x = np.array(range(1, n + 1)) shape_profile = np.zeros(n) stdv = np.zeros(n) for i in range(n): mask = indx == i if sum(mask): shape_profile[i] = np.mean(offsets[mask]) stdv[i] = np.std(offsets[mask]) else: shape_profile[i] = np.nan stdv[i] = np.nan if plotting: bundle_shape_profile(x, shape_profile, stdv) return shape_profile, stdv dipy-1.11.0/dipy/align/sumsqdiff.pyx000066400000000000000000001027431476546756600174020ustar00rootroot00000000000000""" Utility functions used by the Sum of Squared Differences (SSD) metric """ import numpy as np cimport cython cimport numpy as cnp from dipy.align.fused_types cimport floating cdef extern from "dpy_math.h" nogil: int dpy_isinf(double) double sqrt(double) @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef void _solve_2d_symmetric_positive_definite(double* A, double* y, double det, double* out) noexcept nogil: r"""Solves a 2-variable symmetric positive-definite linear system The C implementation of the public-facing Python function ``solve_2d_symmetric_positive_definite``. 
Solves the symmetric positive-definite linear system $Mx = y$ given by:: M = [[A[0], A[1]], [A[1], A[2]]] Parameters ---------- A : array, shape (3,) the array containing the entries of the symmetric 2x2 matrix y : array, shape (2,) right-hand side of the system to be solved out : array, shape (2,) the array the output will be stored in """ out[1] = (A[0] * y[1] - A[1] * y[0]) / det out[0] = (y[0] - A[1] * out[1]) / A[0] def solve_2d_symmetric_positive_definite(A, y, double det): r"""Solves a 2-variable symmetric positive-definite linear system Solves the symmetric positive-definite linear system $Mx = y$ given by:: M = [[A[0], A[1]], [A[1], A[2]]] Parameters ---------- A : array, shape (3,) the array containing the entries of the symmetric 2x2 matrix y : array, shape (2,) right-hand side of the system to be solved Returns ------- out : array, shape (2,) the array the output will be stored in """ cdef: cnp.ndarray out = np.zeros(2, dtype=float) _solve_2d_symmetric_positive_definite( cnp.PyArray_DATA(np.ascontiguousarray(A, float)), cnp.PyArray_DATA(np.ascontiguousarray(y, float)), det, cnp.PyArray_DATA(out)) return np.asarray(out) @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef int _solve_3d_symmetric_positive_definite(double* g, double* y, double tau, double* out) nogil: r"""Solves a 3-variable symmetric positive-definite linear system Solves the symmetric semi-positive-definite linear system $Mx = y$ given by $M = (g g^{T} + \tau I)$ The C implementation of the public-facing Python function ``solve_3d_symmetric_positive_definite``. Parameters ---------- g : array, shape (3,) the vector in the outer product above y : array, shape (3,) right-hand side of the system to be solved tau : double $\tau$ in $M = (g g^{T} + \tau I)$ out : array, shape (3,) the array the output will be stored in Returns ------- is_singular : int 1 if M is singular, otherwise 0 """ cdef: double a,b,c,d,e,f, y0, y1, y2, sub_det a = g[0] ** 2 + tau if a < 1e-9: return 1 b = g[0] * g[1] sub_det = (a * (g[1] ** 2 + tau) - b * b) if sub_det < 1e-9: return 1 c = g[0] * g[2] d = (a * (g[1] ** 2 + tau) - b * b) / a e = (a * (g[1] * g[2]) - b * c) / a f = (a * (g[2] ** 2 + tau) - c * c) / a - (e * e * a) / sub_det if f < 1e-9: return 1 y0 = y[0] y1 = (y[1] * a - y0 * b) / a y2 = (y[2] * a - c * y0) / a - (e * (y[1] * a - b * y0)) / sub_det out[2] = y2 / f out[1] = (y1 - e * out[2]) / d out[0] = (y0 - b * out[1] - c * out[2]) / a return 0 def solve_3d_symmetric_positive_definite(g, y, double tau): r"""Solves a 3-variable symmetric positive-definite linear system Solves the symmetric semi-positive-definite linear system $Mx = y$ given by $M = (g g^{T} + \tau I)$. 
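Internally the solve is a direct symmetric elimination that exploits the rank-1-plus-diagonal structure of $M$: the pivots $g_0^2 + \tau$, the determinant of the leading 2x2 minor, and the final Schur complement are formed in turn, and the system is reported as singular whenever any of them falls below 1e-9.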
Parameters ---------- g : array, shape (3,) the vector in the outer product above y : array, shape (3,) right-hand side of the system to be solved tau : double $\tau$ in $M = (g g^{T} + \tau I)$ Returns ------- out : array, shape (3,) the array the output will be stored in is_singular : int 1 if M is singular, otherwise 0 """ cdef: cnp.ndarray out = np.zeros(3, dtype=float) int is_singular is_singular = _solve_3d_symmetric_positive_definite( cnp.PyArray_DATA(np.ascontiguousarray(g, float)), cnp.PyArray_DATA(np.ascontiguousarray(y, float)), tau, cnp.PyArray_DATA(out)) return np.asarray(out), is_singular @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cpdef double iterate_residual_displacement_field_ssd_2d( floating[:, :] delta_field, floating[:, :] sigmasq_field, floating[:, :, :] grad, floating[:, :, :] target, double lambda_param, floating[:, :, :] displacement_field): r"""One iteration of a large linear system solver for 2D SSD registration Performs one iteration at one level of the Multi-resolution Gauss-Seidel solver proposed by :footcite:t:`Bruhn2005`. Parameters ---------- delta_field : array, shape (R, C) the difference between the static and moving image (the 'derivative w.r.t. time' in the optical flow model) sigmasq_field : array, shape (R, C) the variance of the gray level value at each voxel, according to the EM model (for SSD, it is 1 for all voxels). Inf and 0 values are processed specially to support infinite and zero variance. grad : array, shape (R, C, 2) the gradient of the moving image target : array, shape (R, C, 2) right-hand side of the linear system to be solved in the Weickert's multi-resolution algorithm lambda_param : float smoothness parameter of the objective function displacement_field : array, shape (R, C, 2) current displacement field to start the iteration from Returns ------- max_displacement : float the norm of the maximum change in the displacement field after the iteration References ---------- .. 
footbibliography:: """ ftype = np.asarray(delta_field).dtype cdef: int NUM_NEIGHBORS = 4 int* dRow = [-1, 0, 1, 0] int* dCol = [0, 1, 0, -1] cnp.npy_intp nrows = delta_field.shape[0] cnp.npy_intp ncols = delta_field.shape[1] cnp.npy_intp r, c, dr, dc, nn, k double* b = [0, 0] double* d = [0, 0] double* y = [0, 0] double* A = [0, 0, 0] double xx, yy, opt, nrm2, delta, sigmasq, max_displacement, det max_displacement = 0 with nogil: for r in range(nrows): for c in range(ncols): delta = delta_field[r, c] sigmasq = sigmasq_field[r, c] if sigmasq_field is not None else 1 if target is None: b[0] = delta_field[r, c] * grad[r, c, 0] b[1] = delta_field[r, c] * grad[r, c, 1] else: b[0] = target[r, c, 0] b[1] = target[r, c, 1] nn = 0 y[0] = 0 y[1] = 0 for k in range(NUM_NEIGHBORS): dr = r + dRow[k] if dr < 0 or dr >= nrows: continue dc = c + dCol[k] if dc < 0 or dc >= ncols: continue nn += 1 y[0] += displacement_field[dr, dc, 0] y[1] += displacement_field[dr, dc, 1] if dpy_isinf(sigmasq) != 0: xx = displacement_field[r, c, 0] yy = displacement_field[r, c, 1] displacement_field[r, c, 0] = y[0] / nn displacement_field[r, c, 1] = y[1] / nn xx -= displacement_field[r, c, 0] yy -= displacement_field[r, c, 1] opt = xx * xx + yy * yy if max_displacement < opt: max_displacement = opt else: A[0] = grad[r, c, 0] ** 2 + sigmasq * lambda_param * nn A[1] = grad[r, c, 0] * grad[r, c, 1] A[2] = grad[r, c, 1] ** 2 + sigmasq * lambda_param * nn det = A[0] * A[2] - A[1] * A[1] if det < 1e-9: nrm2 = (grad[r, c, 0] ** 2 + grad[r, c, 1] ** 2) if nrm2 < 1e-9: displacement_field[r, c, 0] = 0 displacement_field[r, c, 1] = 0 else: displacement_field[r, c, 0] = (b[0]) / nrm2 displacement_field[r, c, 1] = (b[1]) / nrm2 else: y[0] = b[0] + sigmasq * lambda_param * y[0] y[1] = b[1] + sigmasq * lambda_param * y[1] _solve_2d_symmetric_positive_definite(A, y, det, d) xx = displacement_field[r, c, 0] - d[0] yy = displacement_field[r, c, 1] - d[1] displacement_field[r, c, 0] = d[0] displacement_field[r, c, 1] = d[1] opt = xx * xx + yy * yy if max_displacement < opt: max_displacement = opt return sqrt(max_displacement) @cython.boundscheck(False) @cython.wraparound(False) cpdef double compute_energy_ssd_2d(floating[:, :] delta_field): r"""Sum of squared differences between two 2D images Computes the Sum of Squared Differences between the static and moving image. Those differences are given by delta_field Parameters ---------- delta_field : array, shape (R, C) the difference between the static and moving image (the 'derivative w.r.t. time' in the optical flow model) Returns ------- energy : float the SSD energy at this iteration Notes ----- The numeric value of the energy is used only to detect convergence. This function returns only the energy corresponding to the data term (excluding the energy corresponding to the regularization term) because the Greedy-SyN algorithm is an unconstrained gradient descent algorithm in the space of diffeomorphisms: in each iteration it makes a step along the negative smoothed gradient --of the data term-- and then makes sure the resulting diffeomorphisms are invertible using an explicit inversion algorithm. Since it is not clear how to reflect the energy corresponding to this re-projection to the space of diffeomorphisms, a more precise energy computation including the regularization term is useless. Instead, convergence is checked considering the data-term energy only and detecting oscilations in the energy profile. 
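
    Examples
    --------
    A tiny sanity check (the energy is the plain sum of squared residuals):

    >>> import numpy as np
    >>> from dipy.align.sumsqdiff import compute_energy_ssd_2d
    >>> delta = np.array([[1.0, 2.0], [3.0, 4.0]], dtype=np.float32)
    >>> compute_energy_ssd_2d(delta)
    30.0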
""" cdef: cnp.npy_intp nrows = delta_field.shape[0] cnp.npy_intp ncols = delta_field.shape[1] cnp.npy_intp r, c double energy = 0 with nogil: for r in range(nrows): for c in range(ncols): energy += delta_field[r, c] ** 2 return energy @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cpdef double iterate_residual_displacement_field_ssd_3d( floating[:, :, :] delta_field, floating[:, :, :] sigmasq_field, floating[:, :, :, :] grad, floating[:, :, :, :] target, double lambda_param, floating[:, :, :, :] disp): r"""One iteration of a large linear system solver for 3D SSD registration Performs one iteration at one level of the Multi-resolution Gauss-Seidel solver proposed by :footcite:t:`Bruhn2005`. Parameters ---------- delta_field : array, shape (S, R, C) the difference between the static and moving image (the 'derivative w.r.t. time' in the optical flow model) sigmasq_field : array, shape (S, R, C) the variance of the gray level value at each voxel, according to the EM model (for SSD, it is 1 for all voxels). Inf and 0 values are processed specially to support infinite and zero variance. grad : array, shape (S, R, C, 3) the gradient of the moving image target : array, shape (S, R, C, 3) right-hand side of the linear system to be solved in the Weickert's multi-resolution algorithm lambda_param : float smoothness parameter of the objective function disp : array, shape (S, R, C, 3) the displacement field to start the optimization from Returns ------- max_displacement : float the norm of the maximum change in the displacement field after the iteration References ---------- .. footbibliography:: """ ftype = np.asarray(delta_field).dtype cdef: int NUM_NEIGHBORS = 6 int* dSlice = [-1, 0, 0, 0, 0, 1] int* dRow = [0, -1, 0, 1, 0, 0] int* dCol = [0, 0, 1, 0, -1, 0] cnp.npy_intp nslices = delta_field.shape[0] cnp.npy_intp nrows = delta_field.shape[1] cnp.npy_intp ncols = delta_field.shape[2] int nn double* g = [0, 0, 0] double* b = [0, 0, 0] double* d = [0, 0, 0] double* y = [0, 0, 0] double* A = [0, 0, 0, 0, 0, 0] double xx, yy, zz, opt, nrm2, delta, sigmasq, max_displacement cnp.npy_intp dr, ds, dc, s, r, c max_displacement = 0 with nogil: for s in range(nslices): for r in range(nrows): for c in range(ncols): g[0] = grad[s, r, c, 0] g[1] = grad[s, r, c, 1] g[2] = grad[s, r, c, 2] delta = delta_field[s, r, c] sigmasq = sigmasq_field[s, r, c] if sigmasq_field is not None else 1 if target is None: b[0] = delta_field[s, r, c] * g[0] b[1] = delta_field[s, r, c] * g[1] b[2] = delta_field[s, r, c] * g[2] else: b[0] = target[s, r, c, 0] b[1] = target[s, r, c, 1] b[2] = target[s, r, c, 2] nn = 0 y[0] = 0 y[1] = 0 y[2] = 0 for k in range(NUM_NEIGHBORS): ds = s + dSlice[k] if ds < 0 or ds >= nslices: continue dr = r + dRow[k] if dr < 0 or dr >= nrows: continue dc = c + dCol[k] if dc < 0 or dc >= ncols: continue nn += 1 y[0] += disp[ds, dr, dc, 0] y[1] += disp[ds, dr, dc, 1] y[2] += disp[ds, dr, dc, 2] if dpy_isinf(sigmasq) != 0: xx = disp[s, r, c, 0] yy = disp[s, r, c, 1] zz = disp[s, r, c, 2] disp[s, r, c, 0] = y[0] / nn disp[s, r, c, 1] = y[1] / nn disp[s, r, c, 2] = y[2] / nn xx -= disp[s, r, c, 0] yy -= disp[s, r, c, 1] zz -= disp[s, r, c, 2] opt = xx * xx + yy * yy + zz * zz if max_displacement < opt: max_displacement = opt elif sigmasq < 1e-9: nrm2 = g[0] ** 2 + g[1] ** 2 + g[2] ** 2 if nrm2 < 1e-9: disp[s, r, c, 0] = 0 disp[s, r, c, 1] = 0 disp[s, r, c, 2] = 0 else: disp[s, r, c, 0] = (b[0]) / nrm2 disp[s, r, c, 1] = (b[1]) / nrm2 disp[s, r, c, 2] = (b[2]) / nrm2 else: tau = 
sigmasq * lambda_param * nn y[0] = b[0] + sigmasq * lambda_param * y[0] y[1] = b[1] + sigmasq * lambda_param * y[1] y[2] = b[2] + sigmasq * lambda_param * y[2] is_singular = _solve_3d_symmetric_positive_definite( g, y, tau, d) if is_singular == 1: nrm2 = g[0] ** 2 + g[1] ** 2 + g[2] ** 2 if nrm2 < 1e-9: disp[s, r, c, 0] = 0 disp[s, r, c, 1] = 0 disp[s, r, c, 2] = 0 else: disp[s, r, c, 0] = (b[0]) / nrm2 disp[s, r, c, 1] = (b[1]) / nrm2 disp[s, r, c, 2] = (b[2]) / nrm2 xx = disp[s, r, c, 0] - d[0] yy = disp[s, r, c, 1] - d[1] zz = disp[s, r, c, 2] - d[2] disp[s, r, c, 0] = d[0] disp[s, r, c, 1] = d[1] disp[s, r, c, 2] = d[2] opt = xx * xx + yy * yy + zz * zz if max_displacement < opt: max_displacement = opt return sqrt(max_displacement) @cython.boundscheck(False) @cython.wraparound(False) cpdef double compute_energy_ssd_3d(floating[:, :, :] delta_field): r"""Sum of squared differences between two 3D volumes Computes the Sum of Squared Differences between the static and moving volume Those differences are given by delta_field Parameters ---------- delta_field : array, shape (R, C) the difference between the static and moving image (the 'derivative w.r.t. time' in the optical flow model) Returns ------- energy : float the SSD energy at this iteration Notes ----- The numeric value of the energy is used only to detect convergence. This function returns only the energy corresponding to the data term (excluding the energy corresponding to the regularization term) because the Greedy-SyN algorithm is an unconstrained gradient descent algorithm in the space of diffeomorphisms: in each iteration it makes a step along the negative smoothed gradient --of the data term-- and then makes sure the resulting diffeomorphisms are invertible using an explicit inversion algorithm. Since it is not clear how to reflect the energy corresponding to this re-projection to the space of diffeomorphisms, a more precise energy computation including the regularization term is useless. Instead, convergence is checked considering the data-term energy only and detecting oscilations in the energy profile. """ cdef: cnp.npy_intp nslices = delta_field.shape[0] cnp.npy_intp nrows = delta_field.shape[1] cnp.npy_intp ncols = delta_field.shape[2] cnp.npy_intp s, r, c double energy = 0 with nogil: for s in range(nslices): for r in range(nrows): for c in range(ncols): energy += delta_field[s, r, c] ** 2 return energy @cython.boundscheck(False) @cython.wraparound(False) def compute_residual_displacement_field_ssd_3d( floating[:, :, :] delta_field, floating[:, :, :] sigmasq_field, floating[:, :, :, :] gradient_field, floating[:, :, :, :] target, double lambda_param, floating[:, :, :, :] disp, floating[:, :, :, :] residual): r"""The residual displacement field to be fit on the next iteration Computes the residual displacement field corresponding to the current displacement field (given by 'disp') in the Multi-resolution Gauss-Seidel solver proposed by :footcite:t:`Bruhn2005`. Parameters ---------- delta_field : array, shape (S, R, C) the difference between the static and moving image (the 'derivative w.r.t. time' in the optical flow model) sigmasq_field : array, shape (S, R, C) the variance of the gray level value at each voxel, according to the EM model (for SSD, it is 1 for all voxels). Inf and 0 values are processed specially to support infinite and zero variance. 
gradient_field : array, shape (S, R, C, 3) the gradient of the moving image target : array, shape (S, R, C, 3) right-hand side of the linear system to be solved in the Weickert's multi-resolution algorithm lambda_param : float smoothness parameter in the objective function disp : array, shape (S, R, C, 3) the current displacement field to compute the residual from residual : array, shape (S, R, C, 3) the displacement field to put the residual to Returns ------- residual : array, shape (S, R, C, 3) the residual displacement field. If residual was None a input, then a new field is returned, otherwise the same array is returned References ---------- .. footbibliography:: """ ftype = np.asarray(delta_field).dtype cdef: int NUM_NEIGHBORS = 6 int* dSlice = [-1, 0, 0, 0, 0, 1] int* dRow = [0, -1, 0, 1, 0, 0] int* dCol = [0, 0, 1, 0, -1, 0] double* b = [0, 0, 0] double* y = [0, 0, 0] cnp.npy_intp nslices = delta_field.shape[0] cnp.npy_intp nrows = delta_field.shape[1] cnp.npy_intp ncols = delta_field.shape[2] double delta, sigmasq, dotP cnp.npy_intp s, r, c, ds, dr, dc if residual is None: residual = np.empty(shape=(nslices, nrows, ncols, 3), dtype=ftype) with nogil: for s in range(nslices): for r in range(nrows): for c in range(ncols): delta = delta_field[s, r, c] sigmasq = sigmasq_field[s, r, c] if sigmasq_field is not None else 1 if target is None: b[0] = delta * gradient_field[s, r, c, 0] b[1] = delta * gradient_field[s, r, c, 1] b[2] = delta * gradient_field[s, r, c, 2] else: b[0] = target[s, r, c, 0] b[1] = target[s, r, c, 1] b[2] = target[s, r, c, 2] y[0] = 0 y[1] = 0 y[2] = 0 for k in range(NUM_NEIGHBORS): ds = s + dSlice[k] if ds < 0 or ds >= nslices: continue dr = r + dRow[k] if dr < 0 or dr >= nrows: continue dc = c + dCol[k] if dc < 0 or dc >= ncols: continue y[0] += (disp[s, r, c, 0] - disp[ds, dr, dc, 0]) y[1] += (disp[s, r, c, 1] - disp[ds, dr, dc, 1]) y[2] += (disp[s, r, c, 2] - disp[ds, dr, dc, 2]) if dpy_isinf(sigmasq) != 0: residual[s, r, c, 0] = -lambda_param * y[0] residual[s, r, c, 1] = -lambda_param * y[1] residual[s, r, c, 2] = -lambda_param * y[2] else: dotP = (gradient_field[s, r, c, 0] * disp[s, r, c, 0] + gradient_field[s, r, c, 1] * disp[s, r, c, 1] + gradient_field[s, r, c, 2] * disp[s, r, c, 2]) residual[s, r, c, 0] = (b[0] - (gradient_field[s, r, c, 0] * dotP + sigmasq * lambda_param * y[0])) residual[s, r, c, 1] = (b[1] - (gradient_field[s, r, c, 1] * dotP + sigmasq * lambda_param * y[1])) residual[s, r, c, 2] = (b[2] - (gradient_field[s, r, c, 2] * dotP + sigmasq * lambda_param * y[2])) return np.asarray(residual) @cython.boundscheck(False) @cython.wraparound(False) cpdef compute_residual_displacement_field_ssd_2d( floating[:, :] delta_field, floating[:, :] sigmasq_field, floating[:, :, :] gradient_field, floating[:, :, :] target, double lambda_param, floating[:, :, :] d, floating[:, :, :] residual): r"""The residual displacement field to be fit on the next iteration Computes the residual displacement field corresponding to the current displacement field in the Multi-resolution Gauss-Seidel solver proposed by :footcite:t:`Bruhn2005`. Parameters ---------- delta_field : array, shape (R, C) the difference between the static and moving image (the 'derivative w.r.t. time' in the optical flow model) sigmasq_field : array, shape (R, C) the variance of the gray level value at each voxel, according to the EM model (for SSD, it is 1 for all voxels). Inf and 0 values are processed specially to support infinite and zero variance. 
gradient_field : array, shape (R, C, 2) the gradient of the moving image target : array, shape (R, C, 2) right-hand side of the linear system to be solved in the Weickert's multi-resolution algorithm lambda_param : float smoothness parameter in the objective function d : array, shape (R, C, 2) the current displacement field to compute the residual from residual : array, shape (R, C, 2) the displacement field to put the residual to Returns ------- residual : array, shape (R, C, 2) the residual displacement field. If residual was None a input, then a new field is returned, otherwise the same array is returned References ---------- .. footbibliography:: """ ftype = np.asarray(delta_field).dtype cdef: int NUM_NEIGHBORS = 4 int* dRow = [-1, 0, 1, 0] int* dCol = [0, 1, 0, -1] double* b = [0, 0] double* y = [0, 0] cnp.npy_intp nrows = delta_field.shape[0] cnp.npy_intp ncols = delta_field.shape[1] double delta, sigmasq, dotP cnp.npy_intp r, c, dr, dc if residual is None: residual = np.empty(shape=(nrows, ncols, 2), dtype=ftype) with nogil: for r in range(nrows): for c in range(ncols): delta = delta_field[r, c] sigmasq = sigmasq_field[r, c] if sigmasq_field is not None else 1 if target is None: b[0] = delta * gradient_field[r, c, 0] b[1] = delta * gradient_field[r, c, 1] else: b[0] = target[r, c, 0] b[1] = target[r, c, 1] y[0] = 0 # reset y y[1] = 0 nn=0 for k in range(NUM_NEIGHBORS): dr = r + dRow[k] if dr < 0 or dr >= nrows: continue dc = c + dCol[k] if dc < 0 or dc >= ncols: continue y[0] += (d[r, c, 0] - d[dr, dc, 0]) y[1] += (d[r, c, 1] - d[dr, dc, 1]) if dpy_isinf(sigmasq) != 0: residual[r, c, 0] = -lambda_param * y[0] residual[r, c, 1] = -lambda_param * y[1] else: dotP = (gradient_field[r, c, 0] * d[r, c, 0] + gradient_field[r, c, 1] * d[r, c, 1]) residual[r, c, 0] = (b[0] - (gradient_field[r, c, 0] * dotP + sigmasq * lambda_param * y[0])) residual[r, c, 1] = (b[1] - (gradient_field[r, c, 1] * dotP + sigmasq * lambda_param * y[1])) return np.asarray(residual) @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def compute_ssd_demons_step_2d(floating[:,:] delta_field, floating[:,:,:] gradient_moving, double sigma_sq_x, floating[:,:,:] out): r"""Demons step for 2D SSD-driven registration Computes the demons step for SSD-driven registration ( eq. 4 in :footcite:p:`Bruhn2005` ) Parameters ---------- delta_field : array, shape (R, C) the difference between the static and moving image (the 'derivative w.r.t. time' in the optical flow model) gradient_field : array, shape (R, C, 2) the gradient of the moving image sigma_sq_x : float parameter controlling the amount of regularization. It corresponds to $\sigma_x^2$ in algorithm 1 of :footcite:t:`Vercauteren2009`. out : array, shape (R, C, 2) if None, a new array will be created to store the demons step. Otherwise the provided array will be used. Returns ------- demons_step : array, shape (R, C, 2) the demons step to be applied for updating the current displacement field energy : float the current ssd energy (before applying the returned demons_step) References ---------- .. 
footbibliography:: """ cdef: cnp.npy_intp nr = delta_field.shape[0] cnp.npy_intp nc = delta_field.shape[1] cnp.npy_intp i, j double delta, delta_2, nrm2, energy, den if out is None: out = np.zeros((nr, nc, 2), dtype=np.asarray(delta_field).dtype) with nogil: energy = 0 for i in range(nr): for j in range(nc): delta = delta_field[i,j] delta_2 = delta**2 energy += delta_2 nrm2 = gradient_moving[i, j, 0]**2 + gradient_moving[i, j, 1]**2 den = delta_2/sigma_sq_x + nrm2 if den <1e-9: out[i, j, 0] = 0 out[i, j, 1] = 0 else: out[i, j, 0] = delta * gradient_moving[i, j, 0] / den out[i, j, 1] = delta * gradient_moving[i, j, 1] / den return np.asarray(out), energy @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def compute_ssd_demons_step_3d(floating[:,:,:] delta_field, floating[:,:,:,:] gradient_moving, double sigma_sq_x, floating[:,:,:,:] out): r"""Demons step for 3D SSD-driven registration Computes the demons step for SSD-driven registration ( eq. 4 in :footcite:p:`Bruhn2005` ) Parameters ---------- delta_field : array, shape (S, R, C) the difference between the static and moving image (the 'derivative w.r.t. time' in the optical flow model) gradient_field : array, shape (S, R, C, 2) the gradient of the moving image sigma_sq_x : float parameter controlling the amount of regularization. It corresponds to $\sigma_x^2$ in algorithm 1 of :footcite:t:`Vercauteren2009`. out : array, shape (S, R, C, 2) if None, a new array will be created to store the demons step. Otherwise the provided array will be used. Returns ------- demons_step : array, shape (S, R, C, 3) the demons step to be applied for updating the current displacement field energy : float the current ssd energy (before applying the returned demons_step) References ---------- .. 
footbibliography:: """ cdef: cnp.npy_intp ns = delta_field.shape[0] cnp.npy_intp nr = delta_field.shape[1] cnp.npy_intp nc = delta_field.shape[2] cnp.npy_intp i, j, k double delta, delta_2, nrm2, energy, den if out is None: out = np.zeros((ns, nr, nc, 3), dtype=np.asarray(delta_field).dtype) with nogil: energy = 0 for k in range(ns): for i in range(nr): for j in range(nc): delta = delta_field[k,i,j] delta_2 = delta**2 energy += delta_2 nrm2 = (gradient_moving[k, i, j, 0]**2 + gradient_moving[k, i, j, 1]**2 + gradient_moving[k, i, j, 2]**2) den = delta_2/sigma_sq_x + nrm2 if den < 1e-9: out[k, i, j, 0] = 0 out[k, i, j, 1] = 0 out[k, i, j, 2] = 0 else: out[k, i, j, 0] = (delta * gradient_moving[k, i, j, 0] / den) out[k, i, j, 1] = (delta * gradient_moving[k, i, j, 1] / den) out[k, i, j, 2] = (delta * gradient_moving[k, i, j, 2] / den) return np.asarray(out), energy dipy-1.11.0/dipy/align/tests/000077500000000000000000000000001476546756600157725ustar00rootroot00000000000000dipy-1.11.0/dipy/align/tests/__init__.py000066400000000000000000000000001476546756600200710ustar00rootroot00000000000000dipy-1.11.0/dipy/align/tests/meson.build000066400000000000000000000007311476546756600201350ustar00rootroot00000000000000python_sources = ['__init__.py', 'test_api.py', 'test_crosscorr.py', 'test_expectmax.py', 'test_imaffine.py', 'test_imwarp.py', 'test_metrics.py', 'test_parzenhist.py', 'test_reslice.py', 'test_scalespace.py', 'test_streamlinear.py', 'test_streamwarp.py', 'test_sumsqdiff.py', 'test_transforms.py', 'test_vector_fields.py', 'test_whole_brain_slr.py', ] py3.install_sources( python_sources, pure: false, subdir: 'dipy/align/tests' ) dipy-1.11.0/dipy/align/tests/test_api.py000066400000000000000000000313061476546756600201570ustar00rootroot00000000000000import os.path as op from tempfile import TemporaryDirectory import nibabel as nib import numpy as np import numpy.testing as npt import pytest from dipy.align import ( affine, affine_registration, center_of_mass, motion_correction, read_mapping, register_dwi_series, register_dwi_to_template, register_series, rigid, rigid_isoscaling, rigid_scaling, streamline_registration, syn_registration, translation, write_mapping, ) from dipy.align.imwarp import DiffeomorphicMap import dipy.core.gradients as dpg import dipy.data as dpd from dipy.io.gradients import read_bvals_bvecs from dipy.io.image import load_nifti from dipy.io.stateful_tractogram import Space, StatefulTractogram from dipy.io.streamline import save_trk from dipy.testing.decorators import set_random_number_generator from dipy.tracking.utils import transform_tracking_output def setup_module(): global \ subset_b0, \ subset_dwi_data, \ subset_t2, \ subset_b0_img, \ subset_t2_img, \ gtab, \ hardi_affine, \ MNI_T2_affine MNI_T2 = dpd.read_mni_template() hardi_img, gtab = dpd.read_stanford_hardi() MNI_T2_data = MNI_T2.get_fdata() MNI_T2_affine = MNI_T2.affine hardi_data = hardi_img.get_fdata() hardi_affine = hardi_img.affine b0 = hardi_data[..., gtab.b0s_mask] mean_b0 = np.mean(b0, -1) # We select some arbitrary chunk of data so this goes quicker: subset_b0 = mean_b0[40:45, 40:45, 40:45] subset_dwi_data = nib.Nifti1Image(hardi_data[40:45, 40:45, 40:45], hardi_affine) subset_t2 = MNI_T2_data[40:50, 40:50, 40:50] subset_b0_img = nib.Nifti1Image(subset_b0, hardi_affine) subset_t2_img = nib.Nifti1Image(subset_t2, MNI_T2_affine) def test_syn_registration(): with TemporaryDirectory() as tmpdir: warped_moving, mapping = syn_registration( subset_b0, subset_t2, moving_affine=hardi_affine, 
static_affine=MNI_T2_affine, step_length=0.1, metric="CC", dim=3, level_iters=[5, 5, 5], sigma_diff=2.0, radius=1, prealign=None, ) npt.assert_equal(warped_moving.shape, subset_t2.shape) mapping_fname = op.join(tmpdir, "mapping.nii.gz") write_mapping(mapping, mapping_fname) file_mapping = read_mapping(mapping_fname, subset_b0_img, subset_t2_img) # Test that it has the same effect on the data: warped_from_file = file_mapping.transform(subset_b0) npt.assert_equal(warped_from_file, warped_moving) # Test that it is, attribute by attribute, identical: for k in mapping.__dict__: npt.assert_( ( np.all( mapping.__getattribute__(k) == file_mapping.__getattribute__(k) ) ) ) def test_register_dwi_to_template(): # Default is syn registration: warped_b0, mapping = register_dwi_to_template( subset_dwi_data, gtab, template=subset_t2_img, level_iters=[5, 5, 5], sigma_diff=2.0, radius=1, ) npt.assert_(isinstance(mapping, DiffeomorphicMap)) npt.assert_equal(warped_b0.shape, subset_t2_img.shape) # Use affine registration (+ don't provide a template and inputs as # strings): fdata, fbval, fbvec = dpd.get_fnames(name="small_64D") warped_data, affine_mat = register_dwi_to_template( fdata, (fbval, fbvec), reg_method="aff", level_iters=[5, 5, 5], sigmas=[3, 1, 0], factors=[4, 2, 1], ) npt.assert_(isinstance(affine_mat, np.ndarray)) npt.assert_(affine_mat.shape == (4, 4)) def test_affine_registration(): moving = subset_b0 static = subset_b0 moving_affine = static_affine = np.eye(4) # Test without returning metric xformed, affine_mat = affine_registration( moving=moving, static=static, moving_affine=moving_affine, static_affine=static_affine, level_iters=[5, 5], sigmas=[3, 1], factors=[2, 1], ) # We don't ask for much: npt.assert_almost_equal(affine_mat[:3, :3], np.eye(3), decimal=1) # [center_of_mass] + ret_metric=True should raise an error with pytest.raises(ValueError): xformed, affine_mat = affine_registration( moving=moving, static=static, moving_affine=moving_affine, static_affine=static_affine, pipeline=["center_of_mass"], ret_metric=True, ) # Define list of methods reg_methods = [ "center_of_mass", "translation", "rigid", "rigid_isoscaling", "rigid_scaling", "affine", center_of_mass, translation, rigid, rigid_isoscaling, rigid_scaling, affine, ] # Test methods individually (without returning any metric) for func in reg_methods: xformed, affine_mat = affine_registration( moving=moving, static=static, moving_affine=moving_affine, static_affine=static_affine, level_iters=[5, 5], sigmas=[3, 1], factors=[2, 1], pipeline=[func], ) npt.assert_almost_equal(affine_mat[:3, :3], np.eye(3), decimal=1) # Bad method with pytest.raises(ValueError, match=r"^pipeline\[0\] must be one.*foo.*"): affine_registration( moving=moving, static=static, moving_affine=moving_affine, static_affine=static_affine, pipeline=["foo"], ) # Test methods individually (returning quality metric) expected_nparams = [0, 3, 6, 7, 9, 12] * 2 assert len(expected_nparams) == len(reg_methods) for i, func in enumerate(reg_methods): if func in ("center_of_mass", center_of_mass): # can't return metric with pytest.raises(ValueError, match="cannot return any quality"): affine_registration( moving=moving, static=static, moving_affine=moving_affine, static_affine=static_affine, pipeline=[func], ret_metric=True, ) continue xformed, affine_mat, xopt, fopt = affine_registration( moving=moving, static=static, moving_affine=moving_affine, static_affine=static_affine, level_iters=[5, 5], sigmas=[3, 1], factors=[2, 1], pipeline=[func], ret_metric=True, ) 
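# xopt holds the optimized transform parameters (its length depends on the
# transform in the pipeline) and fopt the final value of the metric.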
npt.assert_equal(len(xopt), expected_nparams[i]) npt.assert_equal(isinstance(fopt, (int, float)), True) with pytest.raises(ValueError): xformed, affine_mat = affine_registration(moving, static) # Not supported transform names should raise an error npt.assert_raises( ValueError, affine_registration, moving, static, moving_affine=moving_affine, static_affine=static_affine, pipeline=["wrong_transform"], ) # If providing nifti image objects, don't need to provide affines: moving_img = nib.Nifti1Image(moving, moving_affine) static_img = nib.Nifti1Image(static, static_affine) xformed, affine_mat = affine_registration(moving_img, static_img) npt.assert_almost_equal(affine_mat[:3, :3], np.eye(3), decimal=1) # Using strings with full paths as inputs also works: t1_name, b0_name = dpd.get_fnames(name="syn_data") moving = b0_name static = t1_name xformed, affine_mat = affine_registration( moving, static, level_iters=[5, 5], sigmas=[3, 1], factors=[4, 2] ) npt.assert_almost_equal(affine_mat[:3, :3], np.eye(3), decimal=1) def test_single_transforms(): moving = subset_b0 static = subset_b0 moving_affine = static_affine = np.eye(4) reg_methods = [ center_of_mass, translation, rigid_isoscaling, rigid_scaling, rigid, affine, ] for func in reg_methods: xformed, affine_mat = func( moving, static, moving_affine=moving_affine, static_affine=static_affine, level_iters=[5, 5], sigmas=[3, 1], factors=[2, 1], ) # We don't ask for much: npt.assert_almost_equal(affine_mat[:3, :3], np.eye(3), decimal=1) def test_register_series(): fdata, fbval, fbvec = dpd.get_fnames(name="small_64D") img = nib.load(fdata) gtab = dpg.gradient_table(fbval, bvecs=fbvec) ref_idx = np.where(gtab.b0s_mask)[0][0] xformed, affines = register_series(img, ref_idx) npt.assert_(np.all(affines[..., ref_idx] == np.eye(4))) npt.assert_(np.all(xformed[..., ref_idx] == img.get_fdata()[..., ref_idx])) def test_register_dwi_series_and_motion_correction(): fdata, fbval, fbvec = dpd.get_fnames(name="small_64D") with TemporaryDirectory() as tmpdir: # Use an abbreviated data-set: img = nib.load(fdata) data = img.get_fdata()[..., :10] nib.save(nib.Nifti1Image(data, img.affine), op.join(tmpdir, "data.nii.gz")) # Save a subset: bvals = np.loadtxt(fbval) bvecs = np.loadtxt(fbvec) np.savetxt(op.join(tmpdir, "bvals.txt"), bvals[:10]) np.savetxt(op.join(tmpdir, "bvecs.txt"), bvecs[:10]) gtab = dpg.gradient_table( op.join(tmpdir, "bvals.txt"), bvecs=op.join(tmpdir, "bvecs.txt") ) reg_img, reg_affines = register_dwi_series(data, gtab, affine=img.affine) reg_img_2, reg_affines_2 = motion_correction(data, gtab, affine=img.affine) npt.assert_(isinstance(reg_img, nib.Nifti1Image)) npt.assert_array_equal(reg_img.get_fdata(), reg_img_2.get_fdata()) npt.assert_array_equal(reg_affines, reg_affines_2) @set_random_number_generator() def test_streamline_registration(rng): sl1 = [ np.array([[0, 0, 0], [0, 0, 0.5], [0, 0, 1], [0, 0, 1.5]]), np.array([[0, 0, 0], [0, 0.5, 0.5], [0, 1, 1]]), ] affine_mat = np.eye(4) affine_mat[:3, 3] = rng.standard_normal(3) sl2 = list(transform_tracking_output(sl1, affine_mat)) aligned, matrix = streamline_registration(sl2, sl1) npt.assert_almost_equal(matrix, np.linalg.inv(affine_mat)) npt.assert_almost_equal(aligned[0], sl1[0]) npt.assert_almost_equal(aligned[1], sl1[1]) # We assume the two tracks come from the same space, but it might have # some affine associated with it: base_aff = np.eye(4) * rng.random() base_aff[:3, 3] = np.array([1, 2, 3]) base_aff[3, 3] = 1 with TemporaryDirectory() as tmpdir: for use_aff in [None, base_aff]: fname1 = 
op.join(tmpdir, "sl1.trk") fname2 = op.join(tmpdir, "sl2.trk") if use_aff is not None: img = nib.Nifti1Image(np.zeros((2, 2, 2)), use_aff) # Move the streamlines to this other space, and report it: tgm1 = StatefulTractogram( transform_tracking_output(sl1, np.linalg.inv(use_aff)), img, Space.VOX, ) save_trk(tgm1, fname1, bbox_valid_check=False) tgm2 = StatefulTractogram( transform_tracking_output(sl2, np.linalg.inv(use_aff)), img, Space.VOX, ) save_trk(tgm2, fname2, bbox_valid_check=False) else: img = nib.Nifti1Image(np.zeros((2, 2, 2)), np.eye(4)) tgm1 = StatefulTractogram(sl1, img, Space.RASMM) tgm2 = StatefulTractogram(sl2, img, Space.RASMM) save_trk(tgm1, fname1, bbox_valid_check=False) save_trk(tgm2, fname2, bbox_valid_check=False) aligned, matrix = streamline_registration(fname2, fname1) npt.assert_almost_equal(aligned[0], sl1[0], decimal=5) npt.assert_almost_equal(aligned[1], sl1[1], decimal=5) def test_register_dwi_series_multi_b0(): # Test if register_dwi_series works with multiple b0 images dwi_fname, dwi_bval_fname, dwi_bvec_fname = dpd.get_fnames(name="sherbrooke_3shell") data, affine = load_nifti(dwi_fname) bvals, bvecs = read_bvals_bvecs(dwi_bval_fname, dwi_bvec_fname) data_small = data[..., :3] data_small = np.concatenate([data[..., :1], data_small], axis=-1) bvals_small = np.concatenate([bvals[:1], bvals[:3]], axis=0) bvecs_small = np.concatenate([bvecs[:1], bvecs[:3]], axis=0) gtab = dpg.gradient_table(bvals_small, bvecs=bvecs_small) _ = motion_correction(data_small, gtab, affine=affine) dipy-1.11.0/dipy/align/tests/test_crosscorr.py000066400000000000000000000173301476546756600214260ustar00rootroot00000000000000import numpy as np from numpy.testing import assert_array_almost_equal from dipy.align import crosscorr as cc, floating from dipy.testing.decorators import set_random_number_generator def test_cc_factors_2d(): r""" Compares the output of the optimized function to compute the cross- correlation factors against a direct (not optimized, but less error prone) implementation. """ a = np.array(range(20 * 20), dtype=floating).reshape(20, 20) b = np.array(range(20 * 20)[::-1], dtype=floating).reshape(20, 20) a /= a.max() b /= b.max() for radius in [0, 1, 3, 6]: factors = np.asarray(cc.precompute_cc_factors_2d(a, b, radius)) expected = np.asarray(cc.precompute_cc_factors_2d_test(a, b, radius)) assert_array_almost_equal(factors, expected) def test_cc_factors_3d(): r""" Compares the output of the optimized function to compute the cross- correlation factors against a direct (not optimized, but less error prone) implementation. 
""" a = np.array(range(20 * 20 * 20), dtype=floating).reshape(20, 20, 20) b = np.array(range(20 * 20 * 20)[::-1], dtype=floating).reshape(20, 20, 20) a /= a.max() b /= b.max() for radius in [0, 1, 3, 6]: factors = np.asarray(cc.precompute_cc_factors_3d(a, b, radius)) expected = np.asarray(cc.precompute_cc_factors_3d_test(a, b, radius)) assert_array_almost_equal(factors, expected, decimal=5) @set_random_number_generator(1147572) def test_compute_cc_steps_2d(rng): # Select arbitrary images' shape (same shape for both images) sh = (32, 32) radius = 2 # Select arbitrary centers c_f = (np.asarray(sh) / 2) + 1.25 c_g = c_f + 2.5 # Compute the identity vector field I(x) = x in R^2 x_0 = np.asarray(range(sh[0])) x_1 = np.asarray(range(sh[1])) X = np.ndarray(sh + (2,), dtype=np.float64) _O = np.ones(sh) X[..., 0] = x_0[:, None] * _O X[..., 1] = x_1[None, :] * _O # Compute the gradient fields of F and G gradF = np.array(X - c_f, dtype=floating) gradG = np.array(X - c_g, dtype=floating) sz = np.size(gradF) Fnoise = rng.random(sz).reshape(gradF.shape) * gradF.max() * 0.1 Fnoise = Fnoise.astype(floating) gradF += Fnoise sz = np.size(gradG) Gnoise = rng.random(sz).reshape(gradG.shape) * gradG.max() * 0.1 Gnoise = Gnoise.astype(floating) gradG += Gnoise sq_norm_grad_G = np.sum(gradG**2, -1) F = np.array(0.5 * np.sum(gradF**2, -1), dtype=floating) G = np.array(0.5 * sq_norm_grad_G, dtype=floating) Fnoise = rng.random(np.size(F)).reshape(F.shape) * F.max() * 0.1 Fnoise = Fnoise.astype(floating) F += Fnoise Gnoise = rng.random(np.size(G)).reshape(G.shape) * G.max() * 0.1 Gnoise = Gnoise.astype(floating) G += Gnoise # precompute the cross correlation factors factors = cc.precompute_cc_factors_2d_test(F, G, radius) factors = np.array(factors, dtype=floating) # test the forward step against the exact expression _I = factors[..., 0] _J = factors[..., 1] sfm = factors[..., 2] sff = factors[..., 3] smm = factors[..., 4] expected = np.ndarray(shape=sh + (2,), dtype=floating) factor = (-2.0 * sfm / (sff * smm)) * (_J - (sfm / sff) * _I) expected[..., 0] = factor * gradF[..., 0] factor = (-2.0 * sfm / (sff * smm)) * (_J - (sfm / sff) * _I) expected[..., 1] = factor * gradF[..., 1] actual, energy = cc.compute_cc_forward_step_2d(gradF, factors, 0) assert_array_almost_equal(actual, expected) for radius in range(1, 5): expected[:radius, ...] = 0 expected[:, :radius, ...] = 0 expected[-radius::, ...] = 0 expected[:, -radius::, ...] = 0 actual, energy = cc.compute_cc_forward_step_2d(gradF, factors, radius) assert_array_almost_equal(actual, expected) # test the backward step against the exact expression factor = (-2.0 * sfm / (sff * smm)) * (_I - (sfm / smm) * _J) expected[..., 0] = factor * gradG[..., 0] factor = (-2.0 * sfm / (sff * smm)) * (_I - (sfm / smm) * _J) expected[..., 1] = factor * gradG[..., 1] actual, energy = cc.compute_cc_backward_step_2d(gradG, factors, 0) assert_array_almost_equal(actual, expected) for radius in range(1, 5): expected[:radius, ...] = 0 expected[:, :radius, ...] = 0 expected[-radius::, ...] = 0 expected[:, -radius::, ...] 
= 0 actual, energy = cc.compute_cc_backward_step_2d(gradG, factors, radius) assert_array_almost_equal(actual, expected) @set_random_number_generator(12465825) def test_compute_cc_steps_3d(rng): sh = (32, 32, 32) radius = 2 # Select arbitrary centers c_f = (np.asarray(sh) / 2) + 1.25 c_g = c_f + 2.5 # Compute the identity vector field I(x) = x in R^2 x_0 = np.asarray(range(sh[0])) x_1 = np.asarray(range(sh[1])) x_2 = np.asarray(range(sh[2])) X = np.ndarray(sh + (3,), dtype=np.float64) _O = np.ones(sh) X[..., 0] = x_0[:, None, None] * _O X[..., 1] = x_1[None, :, None] * _O X[..., 2] = x_2[None, None, :] * _O # Compute the gradient fields of F and G gradF = np.array(X - c_f, dtype=floating) gradG = np.array(X - c_g, dtype=floating) sz = np.size(gradF) Fnoise = rng.random(sz).reshape(gradF.shape) * gradF.max() * 0.1 Fnoise = Fnoise.astype(floating) gradF += Fnoise sz = np.size(gradG) Gnoise = rng.random(sz).reshape(gradG.shape) * gradG.max() * 0.1 Gnoise = Gnoise.astype(floating) gradG += Gnoise sq_norm_grad_G = np.sum(gradG**2, -1) F = np.array(0.5 * np.sum(gradF**2, -1), dtype=floating) G = np.array(0.5 * sq_norm_grad_G, dtype=floating) Fnoise = rng.random(np.size(F)).reshape(F.shape) * F.max() * 0.1 Fnoise = Fnoise.astype(floating) F += Fnoise Gnoise = rng.random(np.size(G)).reshape(G.shape) * G.max() * 0.1 Gnoise = Gnoise.astype(floating) G += Gnoise # precompute the cross correlation factors factors = cc.precompute_cc_factors_3d_test(F, G, radius) factors = np.array(factors, dtype=floating) # test the forward step against the exact expression _I = factors[..., 0] _J = factors[..., 1] sfm = factors[..., 2] sff = factors[..., 3] smm = factors[..., 4] expected = np.ndarray(shape=sh + (3,), dtype=floating) factor = (-2.0 * sfm / (sff * smm)) * (_J - (sfm / sff) * _I) expected[..., 0] = factor * gradF[..., 0] expected[..., 1] = factor * gradF[..., 1] expected[..., 2] = factor * gradF[..., 2] actual, energy = cc.compute_cc_forward_step_3d(gradF, factors, 0) assert_array_almost_equal(actual, expected) for radius in range(1, 5): expected[:radius, ...] = 0 expected[:, :radius, ...] = 0 expected[:, :, :radius, :] = 0 expected[-radius::, ...] = 0 expected[:, -radius::, ...] = 0 expected[:, :, -radius::, ...] = 0 actual, energy = cc.compute_cc_forward_step_3d(gradF, factors, radius) assert_array_almost_equal(actual, expected) # test the backward step against the exact expression factor = (-2.0 * sfm / (sff * smm)) * (_I - (sfm / smm) * _J) expected[..., 0] = factor * gradG[..., 0] expected[..., 1] = factor * gradG[..., 1] expected[..., 2] = factor * gradG[..., 2] actual, energy = cc.compute_cc_backward_step_3d(gradG, factors, 0) assert_array_almost_equal(actual, expected) for radius in range(1, 5): expected[:radius, ...] = 0 expected[:, :radius, ...] = 0 expected[:, :, :radius, :] = 0 expected[-radius::, ...] = 0 expected[:, -radius::, ...] = 0 expected[:, :, -radius::, ...] 
= 0 actual, energy = cc.compute_cc_backward_step_3d(gradG, factors, radius) assert_array_almost_equal(actual, expected) dipy-1.11.0/dipy/align/tests/test_expectmax.py000066400000000000000000000416741476546756600214150ustar00rootroot00000000000000import numpy as np from numpy.testing import ( assert_array_almost_equal, assert_array_equal, assert_equal, assert_raises, ) from dipy.align import expectmax as em, floating from dipy.testing.decorators import set_random_number_generator @set_random_number_generator(1346491) def test_compute_em_demons_step_2d(rng): r""" Compares the output of the demons step in 2d against an analytical step. The fixed image is given by $F(x) = \frac{1}{2}||x - c_f||^2$, the moving image is given by $G(x) = \frac{1}{2}||x - c_g||^2$, $x, c_f, c_g \in R^{2}$ References ---------- [Vercauteren09] Vercauteren, T., Pennec, X., Perchant, A., & Ayache, N. (2009). Diffeomorphic demons: efficient non-parametric image registration. NeuroImage, 45(1 Suppl), S61-72. doi:10.1016/j.neuroimage.2008.10.040 """ # Select arbitrary images' shape (same shape for both images) sh = (30, 20) # Select arbitrary centers c_f = np.asarray(sh) / 2 c_g = c_f + 0.5 # Compute the identity vector field I(x) = x in R^2 x_0 = np.asarray(range(sh[0])) x_1 = np.asarray(range(sh[1])) X = np.ndarray(sh + (2,), dtype=np.float64) _O = np.ones(sh) X[..., 0] = x_0[:, None] * _O X[..., 1] = x_1[None, :] * _O # Compute the gradient fields of F and G grad_F = X - c_f grad_G = X - c_g # The squared norm of grad_G to be used later sq_norm_grad_G = np.sum(grad_G**2, -1) # Compute F and G F = 0.5 * np.sum(grad_F**2, -1) G = 0.5 * sq_norm_grad_G delta_field = G - F # Now select an arbitrary parameter for # $\sigma_x$ (eq 4 in [Vercauteren09]) sigma_x_sq = 1.5 # Set arbitrary values for $\sigma_i$ (eq. 4 in [Vercauteren09]) # The original Demons algorithm used simply |F(x) - G(x)| as an # estimator, so let's use it as well sigma_i_sq = (F - G) ** 2 # Select some pixels to have special values random_labels = rng.integers(0, 5, sh[0] * sh[1]) random_labels = random_labels.reshape(sh) # this label is used to set sigma_i_sq == 0 below random_labels[sigma_i_sq == 0] = 2 # this label is used to set gradient == 0 below random_labels[sq_norm_grad_G == 0] = 2 expected = np.zeros_like(grad_G) # Pixels with sigma_i_sq = inf sigma_i_sq[random_labels == 0] = np.inf expected[random_labels == 0, ...] = 0 # Pixels with gradient!=0 and sigma_i_sq=0 sqnrm = sq_norm_grad_G[random_labels == 1] sigma_i_sq[random_labels == 1] = 0 expected[random_labels == 1, 0] = ( delta_field[random_labels == 1] * grad_G[random_labels == 1, 0] / sqnrm ) expected[random_labels == 1, 1] = ( delta_field[random_labels == 1] * grad_G[random_labels == 1, 1] / sqnrm ) # Pixels with gradient=0 and sigma_i_sq=0 sigma_i_sq[random_labels == 2] = 0 grad_G[random_labels == 2, ...] = 0 expected[random_labels == 2, ...] = 0 # Pixels with gradient=0 and sigma_i_sq!=0 grad_G[random_labels == 3, ...] = 0 # Directly compute the demons step according to eq. 4 in [Vercauteren09] num = (sigma_x_sq * (F - G))[random_labels >= 3] den = (sigma_x_sq * sq_norm_grad_G + sigma_i_sq)[random_labels >= 3] # This is $J^{P}$ in eq. 4 [Vercauteren09] expected[random_labels >= 3] = -1 * np.array(grad_G[random_labels >= 3]) expected[random_labels >= 3, ...] 
*= (num / den)[..., None] # Now compute it using the implementation under test actual = np.empty_like(expected, dtype=floating) em.compute_em_demons_step_2d( np.array(delta_field, dtype=floating), np.array(sigma_i_sq, dtype=floating), np.array(grad_G, dtype=floating), sigma_x_sq, actual, ) # Test sigma_i_sq == inf try: assert_array_almost_equal( actual[random_labels == 0], expected[random_labels == 0] ) except AssertionError as e: raise AssertionError("Failed for sigma_i_sq == inf") from e # Test sigma_i_sq == 0 and gradient != 0 try: assert_array_almost_equal( actual[random_labels == 1], expected[random_labels == 1] ) except AssertionError as e: raise AssertionError("Failed for sigma_i_sq == 0 and gradient != 0") from e # Test sigma_i_sq == 0 and gradient == 0 try: assert_array_almost_equal( actual[random_labels == 2], expected[random_labels == 2] ) except AssertionError as e: raise AssertionError("Failed for sigma_i_sq == 0 and gradient == 0") from e # Test sigma_i_sq != 0 and gradient == 0 try: assert_array_almost_equal( actual[random_labels == 3], expected[random_labels == 3] ) except AssertionError as e: raise AssertionError("Failed for sigma_i_sq != 0 and gradient == 0 ") from e # Test sigma_i_sq != 0 and gradient != 0 try: assert_array_almost_equal( actual[random_labels == 4], expected[random_labels == 4] ) except AssertionError as e: raise AssertionError("Failed for sigma_i_sq != 0 and gradient != 0") from e @set_random_number_generator(1346491) def test_compute_em_demons_step_3d(rng): r""" Compares the output of the demons step in 3d against an analytical step. The fixed image is given by $F(x) = \frac{1}{2}||x - c_f||^2$, the moving image is given by $G(x) = \frac{1}{2}||x - c_g||^2$, $x, c_f, c_g \in R^{3}$ References ---------- [Vercauteren09] Vercauteren, T., Pennec, X., Perchant, A., & Ayache, N. (2009). Diffeomorphic demons: efficient non-parametric image registration. NeuroImage, 45(1 Suppl), S61-72. doi:10.1016/j.neuroimage.2008.10.040 """ # Select arbitrary images' shape (same shape for both images) sh = (20, 15, 10) # Select arbitrary centers c_f = np.asarray(sh) / 2 c_g = c_f + 0.5 # Compute the identity vector field I(x) = x in R^2 x_0 = np.asarray(range(sh[0])) x_1 = np.asarray(range(sh[1])) x_2 = np.asarray(range(sh[2])) X = np.ndarray(sh + (3,), dtype=np.float64) _O = np.ones(sh) X[..., 0] = x_0[:, None, None] * _O X[..., 1] = x_1[None, :, None] * _O X[..., 2] = x_2[None, None, :] * _O # Compute the gradient fields of F and G grad_F = X - c_f grad_G = X - c_g # The squared norm of grad_G to be used later sq_norm_grad_G = np.sum(grad_G**2, -1) # Compute F and G F = 0.5 * np.sum(grad_F**2, -1) G = 0.5 * sq_norm_grad_G delta_field = G - F # Now select an arbitrary parameter for # $\sigma_x$ (eq 4 in [Vercauteren09]) sigma_x_sq = 1.5 # Set arbitrary values for $\sigma_i$ (eq. 4 in [Vercauteren09]) # The original Demons algorithm used simply |F(x) - G(x)| as an # estimator, so let's use it as well sigma_i_sq = (F - G) ** 2 # Select some pixels to have special values random_labels = rng.integers(0, 5, sh[0] * sh[1] * sh[2]) random_labels = random_labels.reshape(sh) # this label is used to set sigma_i_sq == 0 below random_labels[sigma_i_sq == 0] = 2 # this label is used to set gradient == 0 below random_labels[sq_norm_grad_G == 0] = 2 expected = np.zeros_like(grad_G) # Pixels with sigma_i_sq = inf sigma_i_sq[random_labels == 0] = np.inf expected[random_labels == 0, ...] 
= 0 # Pixels with gradient!=0 and sigma_i_sq=0 sqnrm = sq_norm_grad_G[random_labels == 1] sigma_i_sq[random_labels == 1] = 0 expected[random_labels == 1, 0] = ( delta_field[random_labels == 1] * grad_G[random_labels == 1, 0] / sqnrm ) expected[random_labels == 1, 1] = ( delta_field[random_labels == 1] * grad_G[random_labels == 1, 1] / sqnrm ) expected[random_labels == 1, 2] = ( delta_field[random_labels == 1] * grad_G[random_labels == 1, 2] / sqnrm ) # Pixels with gradient=0 and sigma_i_sq=0 sigma_i_sq[random_labels == 2] = 0 grad_G[random_labels == 2, ...] = 0 expected[random_labels == 2, ...] = 0 # Pixels with gradient=0 and sigma_i_sq!=0 grad_G[random_labels == 3, ...] = 0 # Directly compute the demons step according to eq. 4 in [Vercauteren09] num = (sigma_x_sq * (F - G))[random_labels >= 3] den = (sigma_x_sq * sq_norm_grad_G + sigma_i_sq)[random_labels >= 3] # This is $J^{P}$ in eq. 4 [Vercauteren09] expected[random_labels >= 3] = -1 * np.array(grad_G[random_labels >= 3]) expected[random_labels >= 3, ...] *= (num / den)[..., None] # Now compute it using the implementation under test actual = np.empty_like(expected, dtype=floating) em.compute_em_demons_step_3d( np.array(delta_field, dtype=floating), np.array(sigma_i_sq, dtype=floating), np.array(grad_G, dtype=floating), sigma_x_sq, actual, ) # Test sigma_i_sq == inf try: assert_array_almost_equal( actual[random_labels == 0], expected[random_labels == 0] ) except AssertionError as e: raise AssertionError("Failed for sigma_i_sq == inf") from e # Test sigma_i_sq == 0 and gradient != 0 try: assert_array_almost_equal( actual[random_labels == 1], expected[random_labels == 1] ) except AssertionError as e: raise AssertionError("Failed for sigma_i_sq == 0 and gradient != 0") from e # Test sigma_i_sq == 0 and gradient == 0 try: assert_array_almost_equal( actual[random_labels == 2], expected[random_labels == 2] ) except AssertionError as e: raise AssertionError("Failed for sigma_i_sq == 0 and gradient == 0") from e # Test sigma_i_sq != 0 and gradient == 0 try: assert_array_almost_equal( actual[random_labels == 3], expected[random_labels == 3] ) except AssertionError as e: raise AssertionError("Failed for sigma_i_sq != 0 and gradient == 0 ") from e # Test sigma_i_sq != 0 and gradient != 0 try: assert_array_almost_equal( actual[random_labels == 4], expected[random_labels == 4] ) except AssertionError as e: raise AssertionError("Failed for sigma_i_sq != 0 and gradient != 0") from e @set_random_number_generator(1246592) def test_quantize_positive_2d(rng): # an arbitrary number of quantization levels num_levels = 11 # arbitrary test image shape (must contain at least 3 elements) img_shape = (15, 20) min_positive = 0.1 max_positive = 1.0 epsilon = 1e-8 delta = (max_positive - min_positive + epsilon) / (num_levels - 1) true_levels = np.zeros((num_levels,), dtype=np.float32) # put the intensities at the centers of the bins true_levels[1:] = np.linspace( min_positive + delta * 0.5, max_positive - delta * 0.5, num_levels - 1 ) # generate a target quantization image true_quantization = np.empty(img_shape, dtype=np.int32) random_labels = rng.integers(0, num_levels, np.size(true_quantization)) # make sure there is at least one element equal to 0, 1 and num_levels-1 random_labels[0] = 0 random_labels[1] = 1 random_labels[2] = num_levels - 1 true_quantization[...] 
= random_labels.reshape(img_shape) # make sure additive noise doesn't change the quantization result noise_amplitude = np.min([delta / 4.0, min_positive / 4.0]) sz = np.size(true_quantization) noise = rng.random(sz).reshape(img_shape) * noise_amplitude noise = noise.astype(floating) input_image = np.ndarray(img_shape, dtype=floating) # assign intensities plus noise input_image[...] = true_levels[true_quantization] + noise # preserve original zeros input_image[true_quantization == 0] = 0 # preserve min positive value input_image[true_quantization == 1] = min_positive # preserve max positive value input_image[true_quantization == num_levels - 1] = max_positive out, levels, hist = em.quantize_positive_2d(input_image, num_levels) levels = np.asarray(levels) assert_array_equal(out, true_quantization) assert_array_almost_equal(levels, true_levels) for i in range(num_levels): current_bin = np.asarray(true_quantization == i).sum() assert_equal(hist[i], current_bin) # test num_levels<2 and input image with zeros and non-zeros everywhere assert_raises(ValueError, em.quantize_positive_2d, input_image, 0) assert_raises(ValueError, em.quantize_positive_2d, input_image, 1) out, levels, hist = em.quantize_positive_2d(np.zeros(img_shape, dtype=floating), 2) assert_equal(out, np.zeros(img_shape, dtype=np.int32)) out, levels, hist = em.quantize_positive_2d(np.ones(img_shape, dtype=floating), 2) assert_equal(out, np.ones(img_shape, dtype=np.int32)) @set_random_number_generator(1246592) def test_quantize_positive_3d(rng): # an arbitrary number of quantization levels num_levels = 11 # arbitrary test image shape (must contain at least 3 elements) img_shape = (5, 10, 15) min_positive = 0.1 max_positive = 1.0 epsilon = 1e-8 delta = (max_positive - min_positive + epsilon) / (num_levels - 1) true_levels = np.zeros((num_levels,), dtype=np.float32) # put the intensities at the centers of the bins true_levels[1:] = np.linspace( min_positive + delta * 0.5, max_positive - delta * 0.5, num_levels - 1 ) # generate a target quantization image true_quantization = np.empty(img_shape, dtype=np.int32) random_labels = rng.integers(0, num_levels, np.size(true_quantization)) # make sure there is at least one element equal to 0, 1 and num_levels-1 random_labels[0] = 0 random_labels[1] = 1 random_labels[2] = num_levels - 1 true_quantization[...] = random_labels.reshape(img_shape) # make sure additive noise doesn't change the quantization result noise_amplitude = np.min([delta / 4.0, min_positive / 4.0]) sz = np.size(true_quantization) noise = rng.random(sz).reshape(img_shape) * noise_amplitude noise = noise.astype(floating) input_image = np.ndarray(img_shape, dtype=floating) # assign intensities plus noise input_image[...] 
= true_levels[true_quantization] + noise # preserve original zeros input_image[true_quantization == 0] = 0 # preserve min positive value input_image[true_quantization == 1] = min_positive # preserve max positive value input_image[true_quantization == num_levels - 1] = max_positive out, levels, hist = em.quantize_positive_3d(input_image, num_levels) levels = np.asarray(levels) assert_array_equal(out, true_quantization) assert_array_almost_equal(levels, true_levels) for i in range(num_levels): current_bin = np.asarray(true_quantization == i).sum() assert_equal(hist[i], current_bin) # test num_levels<2 and input image with zeros and non-zeros everywhere assert_raises(ValueError, em.quantize_positive_3d, input_image, 0) assert_raises(ValueError, em.quantize_positive_3d, input_image, 1) out, levels, hist = em.quantize_positive_3d(np.zeros(img_shape, dtype=floating), 2) assert_equal(out, np.zeros(img_shape, dtype=np.int32)) out, levels, hist = em.quantize_positive_3d(np.ones(img_shape, dtype=floating), 2) assert_equal(out, np.ones(img_shape, dtype=np.int32)) @set_random_number_generator(1246592) def test_compute_masked_class_stats_2d(rng): shape = (32, 32) # Create random labels labels = np.ndarray(shape, dtype=np.int32) labels[...] = rng.integers(2, 10, np.size(labels)).reshape(shape) # now label 0 is not present and label 1 occurs once labels[0, 0] = 1 # Create random values values = rng.standard_normal((shape[0], shape[1])).astype(floating) values *= labels values += labels expected_means = [0, values[0, 0]] + [ values[labels == i].mean() for i in range(2, 10) ] expected_vars = [np.inf, np.inf] + [values[labels == i].var() for i in range(2, 10)] mask = np.ones(shape, dtype=np.int32) means, std_dev = em.compute_masked_class_stats_2d(mask, values, 10, labels) assert_array_almost_equal(means, expected_means, decimal=4) assert_array_almost_equal(std_dev, expected_vars, decimal=4) @set_random_number_generator(1246592) def test_compute_masked_class_stats_3d(rng): shape = (32, 32, 32) # Create random labels labels = np.ndarray(shape, dtype=np.int32) labels[...] 
= rng.integers(2, 10, np.size(labels)).reshape(shape) # now label 0 is not present and label 1 occurs once labels[0, 0, 0] = 1 # Create random values values = rng.standard_normal((shape[0], shape[1], shape[2])).astype(floating) values *= labels values += labels expected_means = [0, values[0, 0, 0]] + [ values[labels == i].mean() for i in range(2, 10) ] expected_vars = [np.inf, np.inf] + [values[labels == i].var() for i in range(2, 10)] mask = np.ones(shape, dtype=np.int32) means, std_dev = em.compute_masked_class_stats_3d(mask, values, 10, labels) assert_array_almost_equal(means, expected_means, decimal=4) assert_array_almost_equal(std_dev, expected_vars, decimal=4) dipy-1.11.0/dipy/align/tests/test_imaffine.py000066400000000000000000000673601476546756600211750ustar00rootroot00000000000000import numpy as np import numpy.linalg as npl from numpy.testing import ( assert_array_almost_equal, assert_array_equal, assert_equal, assert_raises, assert_warns, ) from dipy.align import imaffine, vector_fields as vf from dipy.align.imaffine import ( AffineInvalidValuesError, AffineInversionError, AffineMap, _number_dim_affine_matrix, ) from dipy.align.tests.test_parzenhist import setup_random_transform from dipy.align.transforms import regtransforms from dipy.core import geometry as geometry from dipy.testing.decorators import set_random_number_generator # For each transform type, select a transform factor (indicating how large the # true transform between static and moving images will be), a sampling scheme # (either a positive integer less than or equal to 100, or None) indicating # the percentage (if int) of voxels to be used for estimating the joint PDFs, # or dense sampling (if None), and also specify a starting point (to avoid # starting from the identity) factors = { ("TRANSLATION", 2): (2.0, 0.35, np.array([2.3, 4.5])), ("ROTATION", 2): (0.1, None, np.array([0.1])), ("RIGID", 2): (0.1, 0.50, np.array([0.12, 1.8, 2.7])), ("SCALING", 2): (0.01, None, np.array([1.05])), ("AFFINE", 2): (0.1, 0.50, np.array([0.99, -0.05, 1.3, 0.05, 0.99, 2.5])), ("TRANSLATION", 3): (2.0, None, np.array([2.3, 4.5, 1.7])), ("ROTATION", 3): (0.1, 1.0, np.array([0.1, 0.15, -0.11])), ("RIGID", 3): (0.1, None, np.array([0.1, 0.15, -0.11, 2.3, 4.5, 1.7])), ("SCALING", 3): (0.1, 0.35, np.array([0.95])), ("AFFINE", 3): ( 0.1, None, np.array( [0.99, -0.05, 0.03, 1.3, 0.05, 0.99, -0.10, 2.5, -0.07, 0.10, 0.99, -1.4] ), ), } def test_transform_centers_of_mass_3d(): shape = (64, 64, 64) rm = 8 sph = vf.create_sphere(shape[0] // 2, shape[1] // 2, shape[2] // 2, rm) moving = np.zeros(shape) # The center of mass will be (16, 16, 16), in image coordinates moving[: shape[0] // 2, : shape[1] // 2, : shape[2] // 2] = sph[...] 
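    # Illustrative sanity sketch (added for clarity; numpy-only, and it
    # assumes the sphere voxels are exactly the positive entries of
    # `moving`): the sphere fills the lower octant, so its barycenter in
    # image coordinates should land near (16, 16, 16), the point that
    # transform_centers_of_mass aligns with the static center further below.
    com_moving_grid = np.argwhere(moving > 0).mean(axis=0)  # ~ [16, 16, 16]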
rs = 16 # The center of mass will be (32, 32, 32), in image coordinates static = vf.create_sphere(shape[0], shape[1], shape[2], rs) # Create arbitrary image-to-space transforms axis = np.array([0.5, 2.0, 1.5]) t = 0.15 # translation factor trans = np.array( [ [1, 0, 0, -t * shape[0]], [0, 1, 0, -t * shape[1]], [0, 0, 1, -t * shape[2]], [0, 0, 0, 1], ] ) trans_inv = npl.inv(trans) for rotation_angle in [-1 * np.pi / 6.0, 0.0, np.pi / 5.0]: for scale_factor in [0.83, 1.3, 2.07]: # scale rot = np.zeros(shape=(4, 4)) rot[:3, :3] = geometry.rodrigues_axis_rotation(axis, rotation_angle) rot[3, 3] = 1.0 scale = np.array( [ [1 * scale_factor, 0, 0, 0], [0, 1 * scale_factor, 0, 0], [0, 0, 1 * scale_factor, 0], [0, 0, 0, 1], ] ) static_grid2world = trans_inv.dot(scale.dot(rot.dot(trans))) moving_grid2world = npl.inv(static_grid2world) # Expected translation c_static = static_grid2world.dot((32, 32, 32, 1))[:3] c_moving = moving_grid2world.dot((16, 16, 16, 1))[:3] expected = np.eye(4) expected[:3, 3] = c_moving - c_static # Implementation under test actual = imaffine.transform_centers_of_mass( static, static_grid2world, moving, moving_grid2world ) assert_array_almost_equal(actual.affine, expected) def test_transform_geometric_centers_3d(): # Create arbitrary image-to-space transforms axis = np.array([0.5, 2.0, 1.5]) t = 0.15 # translation factor for theta in [-1 * np.pi / 6.0, 0.0, np.pi / 5.0]: # rotation angle for s in [0.83, 1.3, 2.07]: # scale m_shapes = [(256, 256, 128), (255, 255, 127), (64, 127, 142)] for shape_moving in m_shapes: s_shapes = [(256, 256, 128), (255, 255, 127), (64, 127, 142)] for shape_static in s_shapes: moving = np.ndarray(shape=shape_moving) static = np.ndarray(shape=shape_static) trans = np.array( [ [1, 0, 0, -t * shape_static[0]], [0, 1, 0, -t * shape_static[1]], [0, 0, 1, -t * shape_static[2]], [0, 0, 0, 1], ] ) trans_inv = npl.inv(trans) rot = np.zeros(shape=(4, 4)) rot[:3, :3] = geometry.rodrigues_axis_rotation(axis, theta) rot[3, 3] = 1.0 scale = np.array( [ [1 * s, 0, 0, 0], [0, 1 * s, 0, 0], [0, 0, 1 * s, 0], [0, 0, 0, 1], ] ) static_grid2world = trans_inv.dot(scale.dot(rot.dot(trans))) moving_grid2world = npl.inv(static_grid2world) # Expected translation c_static = np.array(shape_static, dtype=np.float64) * 0.5 c_static = tuple(c_static) c_static = static_grid2world.dot(c_static + (1,))[:3] c_moving = np.array(shape_moving, dtype=np.float64) * 0.5 c_moving = tuple(c_moving) c_moving = moving_grid2world.dot(c_moving + (1,))[:3] expected = np.eye(4) expected[:3, 3] = c_moving - c_static # Implementation under test actual = imaffine.transform_geometric_centers( static, static_grid2world, moving, moving_grid2world ) assert_array_almost_equal(actual.affine, expected) def test_transform_origins_3d(): # Create arbitrary image-to-space transforms axis = np.array([0.5, 2.0, 1.5]) t = 0.15 # translation factor for theta in [-1 * np.pi / 6.0, 0.0, np.pi / 5.0]: # rotation angle for s in [0.83, 1.3, 2.07]: # scale m_shapes = [(256, 256, 128), (255, 255, 127), (64, 127, 142)] for shape_moving in m_shapes: s_shapes = [(256, 256, 128), (255, 255, 127), (64, 127, 142)] for shape_static in s_shapes: moving = np.ndarray(shape=shape_moving) static = np.ndarray(shape=shape_static) trans = np.array( [ [1, 0, 0, -t * shape_static[0]], [0, 1, 0, -t * shape_static[1]], [0, 0, 1, -t * shape_static[2]], [0, 0, 0, 1], ] ) trans_inv = npl.inv(trans) rot = np.zeros(shape=(4, 4)) rot[:3, :3] = geometry.rodrigues_axis_rotation(axis, theta) rot[3, 3] = 1.0 scale = np.array( [ [1 * s, 0, 0, 0], [0, 
1 * s, 0, 0], [0, 0, 1 * s, 0], [0, 0, 0, 1], ] ) static_grid2world = trans_inv.dot(scale.dot(rot.dot(trans))) moving_grid2world = npl.inv(static_grid2world) # Expected translation c_static = static_grid2world[:3, 3] c_moving = moving_grid2world[:3, 3] expected = np.eye(4) expected[:3, 3] = c_moving - c_static # Implementation under test actual = imaffine.transform_origins( static, static_grid2world, moving, moving_grid2world ) assert_array_almost_equal(actual.affine, expected) @set_random_number_generator(202311) def test_affreg_all_transforms(rng): # Test affine registration using all transforms with typical settings # Make sure dictionary entries are processed in the same order regardless # of the platform. Otherwise any random numbers drawn within the loop would # make the test non-deterministic even if we fix the seed before the loop. # Right now, this test does not draw any samples, but we still sort the # entries to prevent future related failures. for ttype in sorted(factors): dim = ttype[1] if dim == 2: nslices = 1 else: nslices = 45 factor = factors[ttype][0] sampling_pc = factors[ttype][1] trans = regtransforms[ttype] # Shorthand: srt = setup_random_transform static, moving, static_g2w, moving_g2w, smask, mmask, T = srt( trans, factor, nslices, 1.0, rng=rng ) # Sum of absolute differences start_sad = np.abs(static - moving).sum() metric = imaffine.MutualInformationMetric( nbins=32, sampling_proportion=sampling_pc ) affreg = imaffine.AffineRegistration( metric=metric, level_iters=[1000, 100, 50], sigmas=[3, 1, 0], factors=[4, 2, 1], method="L-BFGS-B", ss_sigma_factor=None, options=None, ) x0 = trans.get_identity_parameters() # test warning for using masks (even if all ones) with sparse sampling if sampling_pc not in [1.0, None]: affine_map = assert_warns( UserWarning, affreg.optimize, static, moving, trans, x0, static_grid2world=static_g2w, moving_grid2world=moving_g2w, starting_affine=None, ret_metric=None, static_mask=smask, moving_mask=mmask, ) else: affine_map = affreg.optimize( static, moving, trans, x0, static_grid2world=static_g2w, moving_grid2world=moving_g2w, starting_affine=None, ret_metric=None, static_mask=smask, moving_mask=mmask, ) transformed = affine_map.transform(moving) # Sum of absolute differences end_sad = np.abs(static - transformed).sum() reduction = 1 - end_sad / start_sad print(f"{ttype}>>{reduction:f}") assert reduction > 0.9 # Verify that exception is raised if level_iters is empty metric = imaffine.MutualInformationMetric(nbins=32) assert_raises( ValueError, imaffine.AffineRegistration, metric=metric, level_iters=[] ) # Verify that exception is raised if masks are all zeros affine_map = assert_warns( UserWarning, affreg.optimize, static, moving, trans, x0, static_grid2world=static_g2w, moving_grid2world=moving_g2w, starting_affine=None, ret_metric=None, static_mask=np.zeros_like(smask), moving_mask=np.zeros_like(mmask), ) @set_random_number_generator(202311) def test_affreg_defaults(rng): # Test all default arguments with an arbitrary transform # Select an arbitrary transform (all of them are already tested # in test_affreg_all_transforms) transform_name = "TRANSLATION" dim = 2 ttype = (transform_name, dim) aff_options = ["mass", "voxel-origin", "centers", None, np.eye(dim + 1)] for starting_affine in aff_options: if dim == 2: nslices = 1 else: nslices = 45 factor = factors[ttype][0] transform = regtransforms[ttype] static, moving, static_grid2world, moving_grid2world, smask, mmask, T = ( setup_random_transform(transform, factor, nslices, 1.0, rng=rng) 
) # Sum of absolute differences start_sad = np.abs(static - moving).sum() metric = None x0 = None sigmas = None scale_factors = None level_iters = None static_grid2world = None moving_grid2world = None smask = None mmask = None for ss_sigma_factor in [1.0, None]: affreg = imaffine.AffineRegistration( metric=metric, level_iters=level_iters, sigmas=sigmas, factors=scale_factors, method="L-BFGS-B", ss_sigma_factor=ss_sigma_factor, options=None, ) affine_map = affreg.optimize( static, moving, transform, x0, static_grid2world=static_grid2world, moving_grid2world=moving_grid2world, starting_affine=starting_affine, ret_metric=None, static_mask=smask, moving_mask=mmask, ) transformed = affine_map.transform(moving) # Sum of absolute differences end_sad = np.abs(static - transformed).sum() reduction = 1 - end_sad / start_sad print(f"{ttype}>>{reduction:f}") assert reduction > 0.9 transformed_inv = affine_map.transform_inverse(static) # Sum of absolute differences end_sad = np.abs(moving - transformed_inv).sum() reduction = 1 - end_sad / start_sad print(f"{ttype}>>{reduction:f}") assert reduction > 0.89 @set_random_number_generator(2022966) def test_mi_gradient(rng): # Test the gradient of mutual information h = 1e-5 # Make sure dictionary entries are processed in the same order regardless # of the platform. Otherwise any random numbers drawn within the loop would # make the test non-deterministic even if we fix the seed before the loop: # in this case the samples are drawn with `np.random.randn` below for ttype in sorted(factors): transform = regtransforms[ttype] dim = ttype[1] if dim == 2: nslices = 1 else: nslices = 45 factor = factors[ttype][0] sampling_proportion = factors[ttype][1] theta = factors[ttype][2] # Start from a small rotation start = regtransforms[("ROTATION", dim)] nrot = start.get_number_of_parameters() starting_affine = start.param_to_matrix(0.25 * rng.standard_normal(nrot)) # Get data (pair of images related to each other by an known transform) static, moving, static_g2w, moving_g2w, smask, mmask, M = ( setup_random_transform(transform, factor, nslices, 2.0, rng=rng) ) # Prepare a MutualInformationMetric instance mi_metric = imaffine.MutualInformationMetric( nbins=32, sampling_proportion=sampling_proportion ) mi_metric.setup(transform, static, moving, starting_affine=starting_affine) # Compute the gradient with the implementation under test actual = mi_metric.gradient(theta) # Compute the gradient using finite-differences n = transform.get_number_of_parameters() expected = np.empty(n, dtype=np.float64) val0 = mi_metric.distance(theta) for i in range(n): dtheta = theta.copy() dtheta[i] += h val1 = mi_metric.distance(dtheta) expected[i] = (val1 - val0) / h dp = expected.dot(actual) enorm = npl.norm(expected) anorm = npl.norm(actual) nprod = dp / (enorm * anorm) assert nprod >= 0.99 def create_affine_transforms(dim, translations, rotations, scales, rot_axis=None): r"""Creates a list of affine transforms with all combinations of params This function is intended to be used for testing only. It generates affine transforms for all combinations of the input parameters in the following order: let T be a translation, R a rotation and S a scale. The generated affine will be: A = T.dot(S).dot(R).dot(T^{-1}) Translation is handled this way because it is convenient to provide the translation parameters in terms of the center of rotation we wish to generate. 
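For example, in 2D with t = (5, 5), theta = pi / 2 and s = 1, the composed matrix rotates the plane by 90 degrees about the point (5, 5): T^{-1} moves that point to the origin, R and S act there, and T moves it back, so (5, 5) is a fixed point of the resulting transform.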
Parameters ---------- dim: int (either dim=2 or dim=3) dimension of the affine transforms translations: sequence of dim-tuples each dim-tuple represents a translation parameter rotations: sequence of floats each number represents a rotation angle in radians scales: sequence of floats each number represents a scale rot_axis: rotation axis (used for dim=3 only) Returns ------- transforms: sequence of (dim + 1)x(dim + 1) matrices each matrix correspond to an affine transform with a combination of the input parameters """ transforms = [] for t in translations: trans_inv = np.eye(dim + 1) trans_inv[:dim, dim] = -t[:dim] trans = npl.inv(trans_inv) for theta in rotations: # rotation angle if dim == 2: ct = np.cos(theta) st = np.sin(theta) rot = np.array([[ct, -st, 0], [st, ct, 0], [0, 0, 1]]) else: rot = np.eye(dim + 1) rot[:3, :3] = geometry.rodrigues_axis_rotation(rot_axis, theta) for s in scales: # scale scale = np.eye(dim + 1) * s scale[dim, dim] = 1 affine = trans.dot(scale.dot(rot.dot(trans_inv))) transforms.append(affine) return transforms @set_random_number_generator(2112927) def test_affine_map(rng): dom_shape = np.array([64, 64, 64], dtype=np.int32) cod_shape = np.array([80, 80, 80], dtype=np.int32) # Radius of the circle/sphere (testing image) radius = 16 # Rotation axis (used for 3D transforms only) rot_axis = np.array([0.5, 2.0, 1.5]) # Arbitrary transform parameters t = 0.15 rotations = [-1 * np.pi / 10.0, 0.0, np.pi / 10.0] scales = [0.9, 1.0, 1.1] for dim1 in [2, 3]: # Setup current dimension if dim1 == 2: # Create image of a circle img = vf.create_circle(cod_shape[0], cod_shape[1], radius) oracle_linear = vf.transform_2d_affine oracle_nn = vf.transform_2d_affine_nn else: # Create image of a sphere img = vf.create_sphere(cod_shape[0], cod_shape[1], cod_shape[2], radius) oracle_linear = vf.transform_3d_affine oracle_nn = vf.transform_3d_affine_nn img = np.array(img) # Translation is the only parameter differing for 2D and 3D translations = [t * dom_shape[:dim1]] # Generate affine transforms gt_affines = create_affine_transforms( dim1, translations, rotations, scales, rot_axis ) # Include the None case gt_affines.append(None) # testing str/format/repr for affine_mat in gt_affines: aff_map = AffineMap(affine_mat) assert_equal(str(aff_map), aff_map.__str__()) assert_equal(repr(aff_map), aff_map.__repr__()) for spec in ["f", "r", "t", ""]: assert_equal(format(aff_map, spec), aff_map.__format__(spec)) for affine in gt_affines: # make both domain point to the same physical region # It's ok to use the same transform, we just want to test # that this information is actually being considered domain_grid2world = affine codomain_grid2world = affine grid2grid_transform = affine # Evaluate the transform with vector_fields module (already tested) expected_linear = oracle_linear( img, dom_shape[:dim1], affine=grid2grid_transform ) expected_nn = oracle_nn(img, dom_shape[:dim1], affine=grid2grid_transform) # Evaluate the transform with the implementation under test affine_map = imaffine.AffineMap( affine=affine, domain_grid_shape=dom_shape[:dim1], domain_grid2world=domain_grid2world, codomain_grid_shape=cod_shape[:dim1], codomain_grid2world=codomain_grid2world, ) actual_linear = affine_map.transform(img, interpolation="linear") actual_nn = affine_map.transform(img, interpolation="nearest") assert_array_almost_equal(actual_linear, expected_linear) assert_array_almost_equal(actual_nn, expected_nn) # Test set_affine with valid matrix affine_map.set_affine(affine) if affine is None: assert 
affine_map.affine is None assert affine_map.affine_inv is None else: # compatibility with previous versions assert_array_equal(affine, affine_map.affine) # new getter new_copy_affine = affine_map.affine # value must be the same assert_array_equal(affine, new_copy_affine) # but not its reference assert id(affine) != id(new_copy_affine) actual = affine_map.affine.dot(affine_map.affine_inv) assert_array_almost_equal(actual, np.eye(dim1 + 1)) # Evaluate via the inverse transform # AffineMap will use the inverse of the input matrix when we call # `transform_inverse`. Since the inverse of the inverse of a matrix # is not exactly equal to the original matrix (numerical # limitations) we need to invert the matrix twice to make sure # the oracle and the implementation under test apply the same # transform aff_inv = None if affine is None else npl.inv(affine) aff_inv_inv = None if aff_inv is None else npl.inv(aff_inv) expected_linear = oracle_linear(img, dom_shape[:dim1], affine=aff_inv_inv) expected_nn = oracle_nn(img, dom_shape[:dim1], affine=aff_inv_inv) affine_map = imaffine.AffineMap( affine=aff_inv, domain_grid_shape=cod_shape[:dim1], domain_grid2world=codomain_grid2world, codomain_grid_shape=dom_shape[:dim1], codomain_grid2world=domain_grid2world, ) actual_linear = affine_map.transform_inverse(img, interpolation="linear") actual_nn = affine_map.transform_inverse(img, interpolation="nearest") assert_array_almost_equal(actual_linear, expected_linear) assert_array_almost_equal(actual_nn, expected_nn) # Verify AffineMap can not be created with non-square matrix non_square_shapes = [ np.zeros((dim1, dim1 + 1), dtype=np.float64), np.zeros((dim1 + 1, dim1), dtype=np.float64), ] for nsq in non_square_shapes: assert_raises(AffineInversionError, AffineMap, nsq) # Verify incorrect augmentations are caught for affine_mat in gt_affines: aff_map = AffineMap(affine_mat) if affine_mat is None: continue bad_aug = aff_map.affine # no zeros in the first n-1 columns on last row bad_aug[-1, :] = 1 assert_raises(AffineInvalidValuesError, AffineMap, bad_aug) bad_aug = aff_map.affine bad_aug[-1, -1] = 0 # lower right not 1 assert_raises(AffineInvalidValuesError, AffineMap, bad_aug) # Verify AffineMap cannot be created with a non-invertible matrix invalid_nan = np.zeros((dim1 + 1, dim1 + 1), dtype=np.float64) invalid_nan[1, 1] = np.nan invalid_zeros = np.zeros((dim1 + 1, dim1 + 1), dtype=np.float64) assert_raises( imaffine.AffineInvalidValuesError, imaffine.AffineMap, affine=invalid_nan ) assert_raises( AffineInvalidValuesError, imaffine.AffineMap, affine=invalid_zeros ) # Test exception is raised when the affine transform matrix is not # valid invalid_shape = np.eye(dim1) affmap_invalid_shape = imaffine.AffineMap( affine=invalid_shape, domain_grid_shape=dom_shape[:dim1], domain_grid2world=None, codomain_grid_shape=cod_shape[:dim1], codomain_grid2world=None, ) assert_raises(ValueError, affmap_invalid_shape.transform, img) assert_raises(ValueError, affmap_invalid_shape.transform_inverse, img) # Verify exception is raised when sampling info is not provided valid = np.eye(3) affmap_invalid_shape = imaffine.AffineMap(valid) assert_raises(ValueError, affmap_invalid_shape.transform, img) assert_raises(ValueError, affmap_invalid_shape.transform_inverse, img) # Verify exception is raised when requesting an invalid interpolation assert_raises(ValueError, affine_map.transform, img, interpolation="invalid") assert_raises( ValueError, affine_map.transform_inverse, img, interpolation="invalid" ) # Verify exception is raised when 
attempting to warp an image of # invalid dimension for dim2 in [2, 3]: affine_map = imaffine.AffineMap( affine=np.eye(dim2), domain_grid_shape=cod_shape[:dim2], domain_grid2world=None, codomain_grid_shape=dom_shape[:dim2], codomain_grid2world=None, ) for sh in [(2,), (2, 2, 2, 2)]: img = np.zeros(sh) assert_raises(ValueError, affine_map.transform, img) assert_raises(ValueError, affine_map.transform_inverse, img) aff_sing = np.zeros((dim2 + 1, dim2 + 1)) aff_nan = np.zeros((dim2 + 1, dim2 + 1)) aff_nan[...] = np.nan aff_inf = np.zeros((dim2 + 1, dim2 + 1)) aff_inf[...] = np.inf assert_raises(AffineInvalidValuesError, affine_map.set_affine, aff_sing) assert_raises(AffineInvalidValuesError, affine_map.set_affine, aff_nan) assert_raises(AffineInvalidValuesError, affine_map.set_affine, aff_inf) # Verify AffineMap cannot be created with non-2D matrices : len(shape) != 2 for dim_not_2 in range(10): if dim_not_2 != _number_dim_affine_matrix: mat_large_dim = rng.random([2] * dim_not_2) assert_raises(AffineInversionError, AffineMap, mat_large_dim) @set_random_number_generator() def test_MIMetric_invalid_params(rng): transform = regtransforms[("AFFINE", 3)] static = rng.random((20, 20, 20)) moving = rng.random((20, 20, 20)) n = transform.get_number_of_parameters() sampling_proportion = 0.3 theta_sing = np.zeros(n) theta_nan = np.zeros(n) theta_nan[...] = np.nan theta_inf = np.zeros(n) theta_inf[...] = np.inf mi_metric = imaffine.MutualInformationMetric( nbins=32, sampling_proportion=sampling_proportion ) mi_metric.setup(transform, static, moving) for theta in [theta_sing, theta_nan, theta_inf]: # Test metric value at invalid params actual_val = mi_metric.distance(theta) assert np.isinf(actual_val) # Test gradient at invalid params expected_grad = np.zeros(n) actual_grad = mi_metric.gradient(theta) assert_equal(actual_grad, expected_grad) # Test both actual_val, actual_grad = mi_metric.distance_and_gradient(theta) assert np.isinf(actual_val) assert_equal(actual_grad, expected_grad) dipy-1.11.0/dipy/align/tests/test_imwarp.py000066400000000000000000001166131476546756600207100ustar00rootroot00000000000000import nibabel.eulerangles as eulerangles import numpy as np from numpy.testing import ( assert_array_almost_equal, assert_array_equal, assert_equal, assert_raises, ) from dipy.align import ( VerbosityLevels, floating, imwarp as imwarp, metrics as metrics, vector_fields as vfu, ) from dipy.align.imwarp import DiffeomorphicMap from dipy.core.interpolation import interpolate_scalar_2d, interpolate_scalar_3d from dipy.data import get_fnames from dipy.testing.decorators import set_random_number_generator from dipy.tracking.streamline import deform_streamlines def test_mult_aff(): r"""Test matrix multiplication using None as identity""" A = np.array([[1.0, 2.0], [3.0, 4.0]]) B = np.array([[2.0, 0.0], [0.0, 2.0]]) C = imwarp.mult_aff(A, B) expected_mult = np.array([[2.0, 4.0], [6.0, 8.0]]) assert_array_almost_equal(C, expected_mult) C = imwarp.mult_aff(A, None) assert_array_almost_equal(C, A) C = imwarp.mult_aff(None, B) assert_array_almost_equal(C, B) C = imwarp.mult_aff(None, None) assert_equal(C, None) @set_random_number_generator(2022966) def test_diffeomorphic_map_2d(rng): r"""Test 2D DiffeomorphicMap Creates a random displacement field that exactly maps pixels from an input image to an output image.
First a discrete random assignment between the images is generated, then each pair of mapped points are transformed to the physical space by assigning a pair of arbitrary, fixed affine matrices to input and output images, and finally the difference between their positions is taken as the displacement vector. The resulting displacement, although operating in physical space, maps the points exactly (up to numerical precision). """ domain_shape = (10, 10) codomain_shape = (10, 10) # create a simple affine transformation nr = domain_shape[0] nc = domain_shape[1] s = 1.1 t = 0.25 trans = np.array([[1, 0, -t * nr], [0, 1, -t * nc], [0, 0, 1]]) trans_inv = np.linalg.inv(trans) scale = np.array([[1 * s, 0, 0], [0, 1 * s, 0], [0, 0, 1]]) gt_affine = trans_inv.dot(scale.dot(trans)) # create the random displacement field domain_grid2world = gt_affine codomain_grid2world = gt_affine disp, assign = vfu.create_random_displacement_2d( np.array(domain_shape, dtype=np.int32), domain_grid2world, np.array(codomain_shape, dtype=np.int32), codomain_grid2world, ) disp = np.array(disp, dtype=floating) assign = np.array(assign) # create a random image (with decimal digits) to warp moving_image = np.ndarray(codomain_shape, dtype=floating) ns = np.size(moving_image) moving_image[...] = rng.integers(0, 10, ns).reshape(codomain_shape) # set boundary values to zero so we don't test wrong interpolation due # to floating point precision moving_image[0, :] = 0 moving_image[-1, :] = 0 moving_image[:, 0] = 0 moving_image[:, -1] = 0 # warp the moving image using the (exact) assignments expected = moving_image[(assign[..., 0], assign[..., 1])] # warp using a DiffeomorphicMap instance diff_map = imwarp.DiffeomorphicMap( 2, domain_shape, disp_grid2world=domain_grid2world, domain_shape=domain_shape, domain_grid2world=domain_grid2world, codomain_shape=codomain_shape, codomain_grid2world=codomain_grid2world, prealign=None, ) diff_map.forward = disp # Verify that the transform method accepts different image types (note that # the actual image contained integer values, we don't want to test # rounding) for _type in [floating, np.float64, np.int64, np.int32]: moving_image = moving_image.astype(_type) # warp using linear interpolation warped = diff_map.transform(moving_image, interpolation="linear") # compare the images (the linear interpolation may introduce slight # precision errors) assert_array_almost_equal(warped, expected, decimal=5) # Now test the nearest neighbor interpolation warped = diff_map.transform(moving_image, interpolation="nearest") # compare the images (now we don't have to worry about precision, # it is n.n.) 
assert_array_almost_equal(warped, expected) # verify the is_inverse flag inv = diff_map.inverse() warped = inv.transform_inverse(moving_image, interpolation="linear") assert_array_almost_equal(warped, expected, decimal=5) warped = inv.transform_inverse(moving_image, interpolation="nearest") assert_array_almost_equal(warped, expected) # Now test the inverse functionality diff_map = imwarp.DiffeomorphicMap( dim=2, disp_shape=codomain_shape, disp_grid2world=codomain_grid2world, domain_shape=codomain_shape, domain_grid2world=codomain_grid2world, codomain_shape=domain_shape, codomain_grid2world=domain_grid2world, prealign=None, ) diff_map.backward = disp for _type in [floating, np.float64, np.int64, np.int32]: moving_image = moving_image.astype(_type) # warp using linear interpolation warped = diff_map.transform_inverse(moving_image, interpolation="linear") # compare the images (the linear interpolation may introduce slight # precision errors) assert_array_almost_equal(warped, expected, decimal=5) # Now test the nearest neighbor interpolation warped = diff_map.transform_inverse(moving_image, interpolation="nearest") # compare the images (now we don't have to worry about precision, # it is nearest neighbour) assert_array_almost_equal(warped, expected) # Verify that DiffeomorphicMap raises the appropriate exceptions when # the sampling information is undefined diff_map = imwarp.DiffeomorphicMap( dim=2, disp_shape=domain_shape, disp_grid2world=domain_grid2world, domain_shape=domain_shape, domain_grid2world=domain_grid2world, codomain_shape=codomain_shape, codomain_grid2world=codomain_grid2world, prealign=None, ) diff_map.forward = disp diff_map.domain_shape = None # If we don't provide the sampling info, it should try to use the map's # info, but it's None... assert_raises(ValueError, diff_map.transform, moving_image, interpolation="linear") # Same test for diff_map.transform_inverse diff_map = imwarp.DiffeomorphicMap( dim=2, disp_shape=domain_shape, disp_grid2world=domain_grid2world, domain_shape=domain_shape, domain_grid2world=domain_grid2world, codomain_shape=codomain_shape, codomain_grid2world=codomain_grid2world, prealign=None, ) diff_map.forward = disp diff_map.codomain_shape = None # If we don't provide the sampling info, it should try to use the map's # info, but it's None... assert_raises( ValueError, diff_map.transform_inverse, moving_image, interpolation="linear" ) # We must provide, at least, the reference grid shape assert_raises(ValueError, imwarp.DiffeomorphicMap, 2, None) # Verify that matrices are correctly interpreted from string non_array_obj = diff_map array_obj = np.ones((3, 3)) assert_raises(ValueError, diff_map.interpret_matrix, "a different string") assert_raises(ValueError, diff_map.interpret_matrix, non_array_obj) assert diff_map.interpret_matrix("identity") is None assert diff_map.interpret_matrix(None) is None assert_array_equal(diff_map.interpret_matrix(array_obj), array_obj) def test_diffeomorphic_map_simplification_2d(): r"""Test simplification of 2D diffeomorphic maps Create an invertible deformation field, and define a DiffeomorphicMap using different voxel-to-space transforms for domain, codomain, and reference discretizations, also use a non-identity pre-aligning matrix. Warp a circle using the diffeomorphic map to obtain the expected warped circle. Now simplify the DiffeomorphicMap and warp the same circle using this simplified map. Verify that the two warped circles are equal up to numerical precision. 
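In sketch form (an interpretation, not the exact implementation): the simplified map folds the pre-align matrix and every grid-to-world matrix into a single displacement field sampled on the domain grid, so that, as the assertions below check, all grid2world/world2grid attributes become None and warping reduces to pure voxel-space interpolation with no matrix products at transform time.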
""" # create a simple affine transformation dom_shape = (64, 64) cod_shape = (80, 80) nr = dom_shape[0] nc = dom_shape[1] s = 1.1 t = 0.25 trans = np.array([[1, 0, -t * nr], [0, 1, -t * nc], [0, 0, 1]]) trans_inv = np.linalg.inv(trans) scale = np.array([[1 * s, 0, 0], [0, 1 * s, 0], [0, 0, 1]]) gt_affine = trans_inv.dot(scale.dot(trans)) # Create the invertible displacement fields and the circle radius = 16 circle = vfu.create_circle(cod_shape[0], cod_shape[1], radius) d, dinv = vfu.create_harmonic_fields_2d(dom_shape[0], dom_shape[1], 0.3, 6) # Define different voxel-to-space transforms for domain, codomain and # reference grid, also, use a non-identity pre-align transform D = gt_affine C = imwarp.mult_aff(gt_affine, gt_affine) R = np.eye(3) P = gt_affine # Create the original diffeomorphic map diff_map = imwarp.DiffeomorphicMap( dim=2, disp_shape=dom_shape, disp_grid2world=R, domain_shape=dom_shape, domain_grid2world=D, codomain_shape=cod_shape, codomain_grid2world=C, prealign=P, ) diff_map.forward = np.array(d, dtype=floating) diff_map.backward = np.array(dinv, dtype=floating) # Warp the circle to obtain the expected image expected = diff_map.transform(circle, interpolation="linear") # Simplify simplified = diff_map.get_simplified_transform() # warp the circle warped = simplified.transform(circle, interpolation="linear") # verify that the simplified map is equivalent to the # original one assert_array_almost_equal(warped, expected) # And of course, it must be simpler... assert_equal(simplified.domain_grid2world, None) assert_equal(simplified.codomain_grid2world, None) assert_equal(simplified.disp_grid2world, None) assert_equal(simplified.domain_world2grid, None) assert_equal(simplified.codomain_world2grid, None) assert_equal(simplified.disp_world2grid, None) def test_diffeomorphic_map_simplification_3d(): r"""Test simplification of 3D diffeomorphic maps Create an invertible deformation field, and define a DiffeomorphicMap using different voxel-to-space transforms for domain, codomain, and reference discretizations, also use a non-identity pre-aligning matrix. Warp a sphere using the diffeomorphic map to obtain the expected warped sphere. Now simplify the DiffeomorphicMap and warp the same sphere using this simplified map. Verify that the two warped spheres are equal up to numerical precision. 
""" # create a simple affine transformation domain_shape = (64, 64, 64) codomain_shape = (80, 80, 80) nr = domain_shape[0] nc = domain_shape[1] ns = domain_shape[2] s = 1.1 t = 0.25 trans = np.array( [[1, 0, 0, -t * ns], [0, 1, 0, -t * nr], [0, 0, 1, -t * nc], [0, 0, 0, 1]] ) trans_inv = np.linalg.inv(trans) scale = np.array( [[1 * s, 0, 0, 0], [0, 1 * s, 0, 0], [0, 0, 1 * s, 0], [0, 0, 0, 1]] ) gt_affine = trans_inv.dot(scale.dot(trans)) # Create the invertible displacement fields and the sphere radius = 16 sphere = vfu.create_sphere( codomain_shape[0], codomain_shape[1], codomain_shape[2], radius ) d, dinv = vfu.create_harmonic_fields_3d( domain_shape[0], domain_shape[1], domain_shape[2], 0.3, 6 ) # Define different voxel-to-space transforms for domain, codomain and # reference grid, also, use a non-identity pre-align transform D = gt_affine C = imwarp.mult_aff(gt_affine, gt_affine) R = np.eye(4) P = gt_affine # Create the original diffeomorphic map diff_map = imwarp.DiffeomorphicMap( dim=3, disp_shape=domain_shape, disp_grid2world=R, domain_shape=domain_shape, domain_grid2world=D, codomain_shape=codomain_shape, codomain_grid2world=C, prealign=P, ) diff_map.forward = np.array(d, dtype=floating) diff_map.backward = np.array(dinv, dtype=floating) # Warp the sphere to obtain the expected image expected = diff_map.transform(sphere, interpolation="linear") # Simplify simplified = diff_map.get_simplified_transform() # warp the sphere warped = simplified.transform(sphere, interpolation="linear") # verify that the simplified map is equivalent to the # original one assert_array_almost_equal(warped, expected) # And of course, it must be simpler... assert_equal(simplified.domain_grid2world, None) assert_equal(simplified.codomain_grid2world, None) assert_equal(simplified.disp_grid2world, None) assert_equal(simplified.domain_world2grid, None) assert_equal(simplified.codomain_world2grid, None) assert_equal(simplified.disp_world2grid, None) def test_optimizer_exceptions(): r"""Test exceptions from SyN""" # An arbitrary valid metric metric = metrics.SSDMetric(2) # The metric must not be None assert_raises(ValueError, imwarp.SymmetricDiffeomorphicRegistration, None) # The iterations list must not be empty assert_raises( ValueError, imwarp.SymmetricDiffeomorphicRegistration, metric, level_iters=[] ) optimizer = imwarp.SymmetricDiffeomorphicRegistration(metric, level_iters=None) # Verify the default iterations list assert_array_equal(optimizer.level_iters, [100, 100, 25]) # Verify exception thrown when attempting to fit the energy profile without # enough data assert_raises(ValueError, optimizer._get_energy_derivative) assert_raises(ValueError, optimizer.get_map) def test_get_direction_and_spacings(): r"""Test direction and spacings from affine transforms""" xrot = 0.5 yrot = 0.75 zrot = 1.0 direction_gt = eulerangles.euler2mat(zrot, yrot, xrot) spacings_gt = np.array([1.1, 1.2, 1.3]) scaling_gt = np.diag(spacings_gt) translation_gt = np.array([1, 2, 3]) affine = np.eye(4) affine[:3, :3] = direction_gt.dot(scaling_gt) affine[:3, 3] = translation_gt direction, spacings = imwarp.get_direction_and_spacings(affine, 3) assert_array_almost_equal(direction, direction_gt) assert_array_almost_equal(spacings, spacings_gt) def simple_callback(sdr, status): r"""Verify callback function is called from SyN""" if status == imwarp.RegistrationStages.INIT_START: sdr.INIT_START_CALLED = 1 if status == imwarp.RegistrationStages.INIT_END: sdr.INIT_END_CALLED = 1 if status == imwarp.RegistrationStages.OPT_START: 
sdr.OPT_START_CALLED = 1 if status == imwarp.RegistrationStages.OPT_END: sdr.OPT_END_CALLED = 1 if status == imwarp.RegistrationStages.SCALE_START: sdr.SCALE_START_CALLED = 1 if status == imwarp.RegistrationStages.SCALE_END: sdr.SCALE_END_CALLED = 1 if status == imwarp.RegistrationStages.ITER_START: sdr.ITER_START_CALLED = 1 if status == imwarp.RegistrationStages.ITER_END: sdr.ITER_END_CALLED = 1 def test_ssd_2d_demons(): r"""Test 2D SyN with SSD metric, demons-like optimizer Classical Circle-To-C experiment for 2D monomodal registration. We verify that the final registration is of good quality. """ fname_moving = get_fnames(name="reg_o") fname_static = get_fnames(name="reg_c") moving = np.load(fname_moving) static = np.load(fname_static) moving = np.array(moving, dtype=floating) static = np.array(static, dtype=floating) moving = (moving - moving.min()) / (moving.max() - moving.min()) static = (static - static.min()) / (static.max() - static.min()) # Create the SSD metric smooth = 4 step_type = "demons" similarity_metric = metrics.SSDMetric(2, smooth=smooth, step_type=step_type) # Configure and run the Optimizer level_iters = [200, 100, 50, 25] step_length = 0.25 opt_tol = 1e-4 inv_iter = 40 inv_tol = 1e-3 ss_sigma_factor = 0.2 optimizer = imwarp.SymmetricDiffeomorphicRegistration( similarity_metric, level_iters=level_iters, step_length=step_length, ss_sigma_factor=ss_sigma_factor, opt_tol=opt_tol, inv_iter=inv_iter, inv_tol=inv_tol, ) # test callback being called optimizer.INIT_START_CALLED = 0 optimizer.INIT_END_CALLED = 0 optimizer.OPT_START_CALLED = 0 optimizer.OPT_END_CALLED = 0 optimizer.SCALE_START_CALLED = 0 optimizer.SCALE_END_CALLED = 0 optimizer.ITER_START_CALLED = 0 optimizer.ITER_END_CALLED = 0 optimizer.callback_counter_test = 0 optimizer.callback = simple_callback optimizer.verbosity = VerbosityLevels.DEBUG mapping = optimizer.optimize(static, moving, static_grid2world=None) m = optimizer.get_map() assert_equal(mapping, m) warped = mapping.transform(moving) starting_energy = np.sum((static - moving) ** 2) final_energy = np.sum((static - warped) ** 2) reduced = 1.0 - final_energy / starting_energy assert reduced > 0.9 assert_equal(optimizer.OPT_START_CALLED, 1) assert_equal(optimizer.OPT_END_CALLED, 1) assert_equal(optimizer.SCALE_START_CALLED, 1) assert_equal(optimizer.SCALE_END_CALLED, 1) assert_equal(optimizer.ITER_START_CALLED, 1) assert_equal(optimizer.ITER_END_CALLED, 1) def test_ssd_2d_gauss_newton(): r"""Test 2D SyN with SSD metric, Gauss-Newton optimizer Classical Circle-To-C experiment for 2D monomodal registration. We verify that the final registration is of good quality. 
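Unlike the demons variant above, this test selects step_type="gauss_newton", which, roughly speaking, obtains each update field by solving a linear system (the inner_iter argument bounds that inner solver) instead of taking a plain gradient step, and is here run with the larger step_length of 0.5.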
""" fname_moving = get_fnames(name="reg_o") fname_static = get_fnames(name="reg_c") moving = np.load(fname_moving) static = np.load(fname_static) moving = np.array(moving, dtype=floating) static = np.array(static, dtype=floating) moving = (moving - moving.min()) / (moving.max() - moving.min()) static = (static - static.min()) / (static.max() - static.min()) # Create the SSD metric smooth = 4 inner_iter = 5 step_type = "gauss_newton" similarity_metric = metrics.SSDMetric( 2, smooth=smooth, inner_iter=inner_iter, step_type=step_type ) # Configure and run the Optimizer level_iters = [200, 100, 50, 25] step_length = 0.5 opt_tol = 1e-4 inv_iter = 40 inv_tol = 1e-3 ss_sigma_factor = 0.2 optimizer = imwarp.SymmetricDiffeomorphicRegistration( metric=similarity_metric, level_iters=level_iters, step_length=step_length, ss_sigma_factor=ss_sigma_factor, opt_tol=opt_tol, inv_iter=inv_iter, inv_tol=inv_tol, ) # test callback not being called optimizer.INIT_START_CALLED = 0 optimizer.INIT_END_CALLED = 0 optimizer.OPT_START_CALLED = 0 optimizer.OPT_END_CALLED = 0 optimizer.SCALE_START_CALLED = 0 optimizer.SCALE_END_CALLED = 0 optimizer.ITER_START_CALLED = 0 optimizer.ITER_END_CALLED = 0 optimizer.verbosity = VerbosityLevels.DEBUG transformation = np.eye(3) mapping = optimizer.optimize( static, moving, static_grid2world=transformation, moving_grid2world=transformation, prealign=transformation, ) m = optimizer.get_map() assert_equal(mapping, m) warped = mapping.transform(moving) starting_energy = np.sum((static - moving) ** 2) final_energy = np.sum((static - warped) ** 2) reduced = 1.0 - final_energy / starting_energy assert reduced > 0.9 assert_equal(optimizer.OPT_START_CALLED, 0) assert_equal(optimizer.OPT_END_CALLED, 0) assert_equal(optimizer.SCALE_START_CALLED, 0) assert_equal(optimizer.SCALE_END_CALLED, 0) assert_equal(optimizer.ITER_START_CALLED, 0) assert_equal(optimizer.ITER_END_CALLED, 0) def get_warped_stacked_image(image, nslices, b, m): r"""Creates a volume by stacking copies of a deformed image The image is deformed under an invertible field, and a 3D volume is generated as follows: the first and last `nslices`//3 slices are filled with zeros to simulate background. The remaining middle slices are filled with copies of the deformed `image` under the action of the invertible field. Parameters ---------- image : 2d array shape(r, c) the image to be deformed nslices : int the number of slices in the final volume b, m : float parameters of the harmonic field (as in [1]). Returns ------- vol : array shape(r, c) if `nslices`==1 else (r, c, `nslices`) the volumed generated using the undeformed image wvol : array shape(r, c) if `nslices`==1 else (r, c, `nslices`) the volumed generated using the warped image References ---------- [1] Chen, M., Lu, W., Chen, Q., Ruchala, K. J., & Olivera, G. H. (2008). A simple fixed-point approach to invert a deformation field. Medical Physics, 35(1), 81. 
doi:10.1118/1.2816107 """ shape = image.shape # create a synthetic invertible map and warp the circle d, dinv = vfu.create_harmonic_fields_2d(shape[0], shape[1], b, m) d = np.asarray(d, dtype=floating) dinv = np.asarray(dinv, dtype=floating) mapping = DiffeomorphicMap(2, shape) mapping.forward, mapping.backward = d, dinv wimage = mapping.transform(image) if nslices == 1: return image, wimage # normalize and form the 3d by piling slices image = image.astype(floating) image = (image - image.min()) / (image.max() - image.min()) zero_slices = nslices // 3 vol = np.zeros(shape=image.shape + (nslices,)) vol[..., zero_slices : (2 * zero_slices)] = image[..., None] wvol = np.zeros(shape=image.shape + (nslices,)) wvol[..., zero_slices : (2 * zero_slices)] = wimage[..., None] return vol, wvol def get_synthetic_warped_circle(nslices): # get a subsampled circle fname_circle = get_fnames(name="reg_o") circle = np.load(fname_circle)[::4, ::4].astype(floating) # create a synthetic invertible map and warp the circle d, dinv = vfu.create_harmonic_fields_2d(64, 64, 0.1, 4) d = np.asarray(d, dtype=floating) dinv = np.asarray(dinv, dtype=floating) mapping = DiffeomorphicMap(2, (64, 64)) mapping.forward, mapping.backward = d, dinv wcircle = mapping.transform(circle) if nslices == 1: return circle, wcircle # normalize and form the 3d by piling slices circle = (circle - circle.min()) / (circle.max() - circle.min()) circle_3d = np.ndarray(circle.shape + (nslices,), dtype=floating) circle_3d[...] = circle[..., None] circle_3d[..., 0] = 0 circle_3d[..., -1] = 0 # do the same with the warped circle wcircle = (wcircle - wcircle.min()) / (wcircle.max() - wcircle.min()) wcircle_3d = np.ndarray(wcircle.shape + (nslices,), dtype=floating) wcircle_3d[...] = wcircle[..., None] wcircle_3d[..., 0] = 0 wcircle_3d[..., -1] = 0 return circle_3d, wcircle_3d def test_ssd_3d_demons(): r"""Test 3D SyN with SSD metric, demons-like optimizer Register a stack of circles ('cylinder') before and after warping them with a synthetic diffeomorphism. We verify that the final registration is of good quality. """ moving, static = get_synthetic_warped_circle(30) moving[..., :8] = 0 moving[..., -1:-9:-1] = 0 static[..., :8] = 0 static[..., -1:-9:-1] = 0 # Create the SSD metric smooth = 4 step_type = "demons" similarity_metric = metrics.SSDMetric(3, smooth=smooth, step_type=step_type) # Create the optimizer level_iters = [10, 10] step_length = 0.1 opt_tol = 1e-4 inv_iter = 20 inv_tol = 1e-3 ss_sigma_factor = 0.5 optimizer = imwarp.SymmetricDiffeomorphicRegistration( similarity_metric, level_iters=level_iters, step_length=step_length, ss_sigma_factor=ss_sigma_factor, opt_tol=opt_tol, inv_iter=inv_iter, inv_tol=inv_tol, ) optimizer.verbosity = VerbosityLevels.DEBUG mapping = optimizer.optimize(static, moving, static_grid2world=None) m = optimizer.get_map() assert_equal(mapping, m) warped = mapping.transform(moving) starting_energy = np.sum((static - moving) ** 2) final_energy = np.sum((static - warped) ** 2) reduced = 1.0 - final_energy / starting_energy assert reduced > 0.9 def test_ssd_3d_gauss_newton(): r"""Test 3D SyN with SSD metric, Gauss-Newton optimizer Register a stack of circles ('cylinder') before and after warping them with a synthetic diffeomorphism. We verify that the final registration is of good quality. 
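The helper above generates the test data; the registration then
follows the usual pattern in 3D (an illustrative sketch with values
copied from this test, not a normative recipe)::

    moving, static = get_synthetic_warped_circle(35)
    metric = metrics.SSDMetric(3, smooth=4, step_type="gauss_newton")
    sdr = imwarp.SymmetricDiffeomorphicRegistration(metric, level_iters=[10, 10])
    mapping = sdr.optimize(static, moving)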
""" moving, static = get_synthetic_warped_circle(35) moving[..., :10] = 0 moving[..., -1:-11:-1] = 0 static[..., :10] = 0 static[..., -1:-11:-1] = 0 # Create the SSD metric smooth = 4 inner_iter = 5 step_type = "gauss_newton" similarity_metric = metrics.SSDMetric( 3, smooth=smooth, inner_iter=inner_iter, step_type=step_type ) # Create the optimizer level_iters = [10, 10] step_length = 0.1 opt_tol = 1e-4 inv_iter = 20 inv_tol = 1e-3 ss_sigma_factor = 0.5 optimizer = imwarp.SymmetricDiffeomorphicRegistration( similarity_metric, level_iters=level_iters, step_length=step_length, ss_sigma_factor=ss_sigma_factor, opt_tol=opt_tol, inv_iter=inv_iter, inv_tol=inv_tol, ) optimizer.verbosity = VerbosityLevels.DEBUG mapping = optimizer.optimize(static, moving, static_grid2world=None) m = optimizer.get_map() assert_equal(mapping, m) warped = mapping.transform(moving) starting_energy = np.sum((static - moving) ** 2) final_energy = np.sum((static - warped) ** 2) reduced = 1.0 - final_energy / starting_energy assert reduced > 0.9 def test_cc_2d(): r"""Test 2D SyN with CC metric Register a coronal slice from a T1w brain MRI before and after warping it under a synthetic invertible map. We verify that the final registration is of good quality. """ fname = get_fnames(name="t1_coronal_slice") nslices = 1 b = 0.1 m = 4 image = np.load(fname) moving, static = get_warped_stacked_image(image, nslices, b, m) # Configure the metric sigma_diff = 3.0 radius = 4 metric = metrics.CCMetric(2, sigma_diff=sigma_diff, radius=radius) # Configure and run the Optimizer level_iters = [15, 5] optimizer = imwarp.SymmetricDiffeomorphicRegistration( metric=metric, level_iters=level_iters ) optimizer.verbosity = VerbosityLevels.DEBUG mapping = optimizer.optimize(static, moving, static_grid2world=None) m = optimizer.get_map() assert_equal(mapping, m) warped = mapping.transform(moving) starting_energy = np.sum((static - moving) ** 2) final_energy = np.sum((static - warped) ** 2) reduced = 1.0 - final_energy / starting_energy assert reduced > 0.9 def test_cc_3d(): r"""Test 3D SyN with CC metric Register a volume created by stacking copies of a coronal slice from a T1w brain MRI before and after warping it under a synthetic invertible map. We verify that the final registration is of good quality. 
""" fname = get_fnames(name="t1_coronal_slice") nslices = 21 b = 0.1 m = 4 image = np.load(fname) moving, static = get_warped_stacked_image(image, nslices, b, m) # Create the CC metric sigma_diff = 2.0 radius = 2 similarity_metric = metrics.CCMetric(3, sigma_diff=sigma_diff, radius=radius) # Create the optimizer level_iters = [20, 5] step_length = 0.25 opt_tol = 1e-4 inv_iter = 20 inv_tol = 1e-3 ss_sigma_factor = 0.2 optimizer = imwarp.SymmetricDiffeomorphicRegistration( similarity_metric, level_iters=level_iters, step_length=step_length, ss_sigma_factor=ss_sigma_factor, opt_tol=opt_tol, inv_iter=inv_iter, inv_tol=inv_tol, ) optimizer.verbosity = VerbosityLevels.DEBUG mapping = optimizer.optimize( static, moving, static_grid2world=None, moving_grid2world=None, prealign=None ) m = optimizer.get_map() assert_equal(mapping, m) warped = mapping.transform(moving) starting_energy = np.sum((static - moving) ** 2) final_energy = np.sum((static - warped) ** 2) reduced = 1.0 - final_energy / starting_energy assert reduced > 0.9 def test_em_3d_gauss_newton(): r"""Test 3D SyN with EM metric, Gauss-Newton optimizer Register a volume created by stacking copies of a coronal slice from a T1w brain MRI before and after warping it under a synthetic invertible map. We verify that the final registration is of good quality. """ fname = get_fnames(name="t1_coronal_slice") nslices = 21 b = 0.1 m = 4 image = np.load(fname) moving, static = get_warped_stacked_image(image, nslices, b, m) # Create the EM metric smooth = 2.0 inner_iter = 20 step_length = 0.25 q_levels = 256 double_gradient = True iter_type = "gauss_newton" similarity_metric = metrics.EMMetric( 3, smooth=smooth, inner_iter=inner_iter, q_levels=q_levels, double_gradient=double_gradient, step_type=iter_type, ) # Create the optimizer level_iters = [20, 5] opt_tol = 1e-4 inv_iter = 20 inv_tol = 1e-3 ss_sigma_factor = 1.0 optimizer = imwarp.SymmetricDiffeomorphicRegistration( similarity_metric, level_iters=level_iters, step_length=step_length, ss_sigma_factor=ss_sigma_factor, opt_tol=opt_tol, inv_iter=inv_iter, inv_tol=inv_tol, ) optimizer.verbosity = VerbosityLevels.DEBUG mapping = optimizer.optimize(static, moving, static_grid2world=None) m = optimizer.get_map() assert_equal(mapping, m) warped = mapping.transform(moving) starting_energy = np.sum((static - moving) ** 2) final_energy = np.sum((static - warped) ** 2) reduced = 1.0 - final_energy / starting_energy assert reduced > 0.9 def test_em_2d_gauss_newton(): r"""Test 2D SyN with EM metric, Gauss-Newton optimizer Register a coronal slice from a T1w brain MRI before and after warping it under a synthetic invertible map. We verify that the final registration is of good quality. 
""" fname = get_fnames(name="t1_coronal_slice") nslices = 1 b = 0.1 m = 4 image = np.load(fname) moving, static = get_warped_stacked_image(image, nslices, b, m) # Configure the metric smooth = 5.0 inner_iter = 20 q_levels = 256 double_gradient = False iter_type = "gauss_newton" metric = metrics.EMMetric( 2, smooth=smooth, inner_iter=inner_iter, q_levels=q_levels, double_gradient=double_gradient, step_type=iter_type, ) # Configure and run the Optimizer level_iters = [40, 20, 10] optimizer = imwarp.SymmetricDiffeomorphicRegistration( metric, level_iters=level_iters ) optimizer.verbosity = VerbosityLevels.DEBUG mapping = optimizer.optimize(static, moving, static_grid2world=None) m = optimizer.get_map() assert_equal(mapping, m) warped = mapping.transform(moving) starting_energy = np.sum((static - moving) ** 2) final_energy = np.sum((static - warped) ** 2) reduced = 1.0 - final_energy / starting_energy assert reduced > 0.9 def test_em_3d_demons(): r"""Test 3D SyN with EM metric, demons-like optimizer Register a volume created by stacking copies of a coronal slice from a T1w brain MRI before and after warping it under a synthetic invertible map. We verify that the final registration is of good quality. """ fname = get_fnames(name="t1_coronal_slice") nslices = 21 b = 0.1 m = 4 image = np.load(fname) moving, static = get_warped_stacked_image(image, nslices, b, m) # Create the EM metric smooth = 2.0 inner_iter = 20 step_length = 0.25 q_levels = 256 double_gradient = True iter_type = "demons" similarity_metric = metrics.EMMetric( 3, smooth=smooth, inner_iter=inner_iter, q_levels=q_levels, double_gradient=double_gradient, step_type=iter_type, ) # Create the optimizer level_iters = [20, 5] opt_tol = 1e-4 inv_iter = 20 inv_tol = 1e-3 ss_sigma_factor = 1.0 optimizer = imwarp.SymmetricDiffeomorphicRegistration( similarity_metric, level_iters=level_iters, step_length=step_length, ss_sigma_factor=ss_sigma_factor, opt_tol=opt_tol, inv_iter=inv_iter, inv_tol=inv_tol, ) optimizer.verbosity = VerbosityLevels.DEBUG mapping = optimizer.optimize(static, moving, static_grid2world=None) m = optimizer.get_map() assert_equal(mapping, m) warped = mapping.transform(moving) starting_energy = np.sum((static - moving) ** 2) final_energy = np.sum((static - warped) ** 2) reduced = 1.0 - final_energy / starting_energy assert reduced > 0.9 def test_em_2d_demons(): r"""Test 2D SyN with EM metric, demons-like optimizer Register a coronal slice from a T1w brain MRI before and after warping it under a synthetic invertible map. We verify that the final registration is of good quality. 
""" fname = get_fnames(name="t1_coronal_slice") nslices = 1 b = 0.1 m = 4 image = np.load(fname) moving, static = get_warped_stacked_image(image, nslices, b, m) # Configure the metric smooth = 2.0 inner_iter = 20 q_levels = 256 double_gradient = False iter_type = "demons" metric = metrics.EMMetric( 2, smooth=smooth, inner_iter=inner_iter, q_levels=q_levels, double_gradient=double_gradient, step_type=iter_type, ) # Configure and run the Optimizer level_iters = [40, 20, 10] optimizer = imwarp.SymmetricDiffeomorphicRegistration( metric=metric, level_iters=level_iters ) optimizer.verbosity = VerbosityLevels.DEBUG mapping = optimizer.optimize(static, moving, static_grid2world=None) m = optimizer.get_map() assert_equal(mapping, m) warped = mapping.transform(moving) starting_energy = np.sum((static - moving) ** 2) final_energy = np.sum((static - warped) ** 2) reduced = 1.0 - final_energy / starting_energy assert reduced > 0.9 @set_random_number_generator(1741332) def test_coordinate_mapping(rng): r"""Test coordinate mapping with DiffeomorphicMap 1. Create a random displacement field and a small affine transform to map grid to world coordinates. 2. Create a DiffeomorphicMap with the previously created field and affine transform. 3. Create a random input image. 4. Select a few non-boundary voxels from the domain grid. 5. Warp the input image with the DiffeomorphicMap and interpolate the **warped image** at the selected locations. The result is the `expected` array. 6. Map only the selected points using the DiffeomorphicMap and interpolate the **input image** at the warped points. The result is the `actual` array, which should be almost equal to the `expected` array. """ for dim in range(2, 4): npoints = 100 points = np.empty((npoints, dim), dtype=np.float64) if dim == 2: domain_shape = (10, 10) codomain_shape = (15, 15) nr = domain_shape[0] nc = domain_shape[1] s = 1.1 t = 0.25 trans = np.array([[1, 0, -t * nr], [0, 1, -t * nc], [0, 0, 1]]) trans_inv = np.linalg.inv(trans) scale = np.array([[1 * s, 0, 0], [0, 1 * s, 0], [0, 0, 1]]) gt_affine = trans_inv.dot(scale.dot(trans)) n = codomain_shape[0] * codomain_shape[1] moving_image = rng.integers(0, 10, n).reshape(codomain_shape) moving_image = moving_image.astype(np.float64) # Select a few grid coordinates not at the boundary of the domain points[:, 0] = rng.integers(1, nr - 1, npoints) points[:, 1] = rng.integers(1, nc - 1, npoints) random_df = vfu.create_random_displacement_2d interpolate_f = interpolate_scalar_2d else: domain_shape = (10, 10, 10) codomain_shape = (15, 15, 15) nr = domain_shape[0] nc = domain_shape[1] ns = domain_shape[2] s = 1.1 t = 0.25 trans = np.array( [ [1, 0, 0, -t * ns], [0, 1, 0, -t * nr], [0, 0, 1, -t * nc], [0, 0, 0, 1], ] ) trans_inv = np.linalg.inv(trans) scale = np.array( [[1 * s, 0, 0, 0], [0, 1 * s, 0, 0], [0, 0, 1 * s, 0], [0, 0, 0, 1]] ) gt_affine = trans_inv.dot(scale.dot(trans)) n = codomain_shape[0] * codomain_shape[1] * codomain_shape[2] moving_image = rng.integers(0, 10, n).reshape(codomain_shape) moving_image = moving_image.astype(np.float64) # Select a few grid coordinates not at the boundary of the domain points[:, 0] = rng.integers(1, nr - 1, npoints) points[:, 1] = rng.integers(1, nc - 1, npoints) points[:, 2] = rng.integers(1, ns - 1, npoints) random_df = vfu.create_random_displacement_3d interpolate_f = interpolate_scalar_3d # create the random displacement field domain_grid2world = gt_affine codomain_grid2world = gt_affine disp, assign = random_df( np.array(domain_shape, dtype=np.int32), 
domain_grid2world, np.array(codomain_shape, dtype=np.int32), codomain_grid2world, ) disp = disp.astype(floating) # Create a DiffeomorphicMap instance diff_map = imwarp.DiffeomorphicMap( dim, domain_shape, disp_grid2world=domain_grid2world, domain_shape=domain_shape, domain_grid2world=domain_grid2world, codomain_shape=codomain_shape, codomain_grid2world=codomain_grid2world, prealign=None, ) diff_map.forward = disp # Here, expected is obtained after two interpolation steps, therefore # we need to increase the tolerance when comparing against the result # using only one interpolation step (we set decimal=5 below) warped = diff_map.transform(moving_image, interpolation="linear") expected, inside = interpolate_f(warped, points) # Now map the points with the implementation under test # Specify how to map the given array to world coordinates in2world = diff_map.domain_grid2world # Request mapping back from world to grid coordinates world2out = diff_map.domain_world2grid # Execute warping wpoints = diff_map.transform_points( points, coord2world=in2world, world2coord=world2out ) # Interpolate at warped points and verify it's equal to direct warping actual, inside = interpolate_f(moving_image, wpoints) assert_array_almost_equal(actual, expected, decimal=5) if dim in [3, 4]: wpoints_2 = deform_streamlines( [ points, ], disp, np.eye(4), domain_grid2world, np.eye(4), codomain_grid2world, ) assert_array_almost_equal(wpoints, wpoints_2[0]) dipy-1.11.0/dipy/align/tests/test_metrics.py000066400000000000000000000234561476546756600210630ustar00rootroot00000000000000import itertools import numpy as np from numpy.testing import assert_array_almost_equal, assert_array_equal, assert_raises from scipy import ndimage from dipy.align import floating from dipy.align.metrics import CCMetric, EMMetric, SSDMetric from dipy.testing.decorators import set_random_number_generator def test_exceptions(): for invalid_dim in [-1, 0, 1, 4, 5]: assert_raises(ValueError, CCMetric, invalid_dim) assert_raises(ValueError, EMMetric, invalid_dim) assert_raises(ValueError, SSDMetric, invalid_dim) assert_raises(ValueError, SSDMetric, 3, step_type="unknown_metric_name") assert_raises(ValueError, EMMetric, 3, step_type="unknown_metric_name") def init_metric(shape, radius): dim = len(shape) metric = CCMetric(dim, radius=radius) metric.set_static_image( np.arange(np.prod(shape), dtype=float).reshape(shape), np.eye(4), np.ones(dim), np.eye(3), ) metric.set_moving_image( np.arange(np.prod(shape), dtype=float).reshape(shape), np.eye(4), np.ones(dim), np.eye(3), ) return metric # Generate many shape combinations shapes_2d = itertools.product((5, 8), (8, 5)) shapes_3d = itertools.product((5, 8), (8, 5), (30, 50)) all_shapes = itertools.chain(shapes_2d, shapes_3d) # expected to fail for any dimension < 2*radius + 1. for shape in all_shapes: metric = init_metric(shape, 4) assert_raises(ValueError, metric.initialize_iteration) # expected to pass for any dimension == 2*radius + 1. metric = init_metric((9, 9), 4) metric.initialize_iteration() @set_random_number_generator(7181309) def test_EMMetric_image_dynamics(rng): metric = EMMetric(2) target_shape = (10, 10) # create a random image image = np.ndarray(target_shape, dtype=floating) image[...] 
= rng.integers(0, 10, np.size(image)).reshape(tuple(target_shape)) # compute the expected binary mask expected = (image > 0).astype(np.int32) metric.use_static_image_dynamics(image, None) assert_array_equal(expected, metric.static_image_mask) metric.use_moving_image_dynamics(image, None) assert_array_equal(expected, metric.moving_image_mask) def test_em_demons_step_2d(): r""" Compares the output of the demons step in 2d against an analytical step. The fixed image is given by $F(x) = \frac{1}{2}||x - c_f||^2$, the moving image is given by $G(x) = \frac{1}{2}||x - c_g||^2$, $x, c_f, c_g \in R^{2}$ References ---------- [Vercauteren09] Vercauteren, T., Pennec, X., Perchant, A., & Ayache, N. (2009). Diffeomorphic demons: efficient non-parametric image registration. NeuroImage, 45(1 Suppl), S61-72. doi:10.1016/j.neuroimage.2008.10.040 """ # Select arbitrary images' shape (same shape for both images) sh = (20, 10) # Select arbitrary centers c_f = np.asarray(sh) / 2 c_g = c_f + 0.5 # Compute the identity vector field I(x) = x in R^2 x_0 = np.asarray(range(sh[0])) x_1 = np.asarray(range(sh[1])) X = np.ndarray(sh + (2,), dtype=np.float64) _O = np.ones(sh) X[..., 0] = x_0[:, None] * _O X[..., 1] = x_1[None, :] * _O # Compute the gradient fields of F and G grad_F = X - c_f grad_G = X - c_g # The squared norm of grad_G to be used later sq_norm_grad_F = np.sum(grad_F**2, -1) sq_norm_grad_G = np.sum(grad_G**2, -1) # Compute F and G F = 0.5 * sq_norm_grad_F G = 0.5 * sq_norm_grad_G # Create an instance of EMMetric metric = EMMetric(2) metric.static_spacing = np.array([1.2, 1.2]) # The $\sigma_x$ (eq. 4 in [Vercauteren09]) parameter is computed in ANTS # based on the image's spacing sigma_x_sq = np.sum(metric.static_spacing**2) / metric.dim # Set arbitrary values for $\sigma_i$ (eq. 4 in [Vercauteren09]) # The original Demons algorithm used simply |F(x) - G(x)| as an # estimator, so let's use it as well sigma_i_sq = (F - G) ** 2 # Set the properties relevant to the demons methods metric.smooth = 3.0 metric.gradient_static = np.array(grad_F, dtype=floating) metric.gradient_moving = np.array(grad_G, dtype=floating) metric.static_image = np.array(F, dtype=floating) metric.moving_image = np.array(G, dtype=floating) metric.staticq_means_field = np.array(F, dtype=floating) metric.staticq_sigma_sq_field = np.array(sigma_i_sq, dtype=floating) metric.movingq_means_field = np.array(G, dtype=floating) metric.movingq_sigma_sq_field = np.array(sigma_i_sq, dtype=floating) # compute the step using the implementation under test actual_forward = metric.compute_demons_step(forward_step=True) actual_backward = metric.compute_demons_step(forward_step=False) # Now directly compute the demons steps according to eq 4 in # [Vercauteren09] num_fwd = sigma_x_sq * (G - F) den_fwd = sigma_x_sq * sq_norm_grad_F + sigma_i_sq # This is $J^{P}$ in eq. 4 [Vercauteren09] expected_fwd = -1 * np.array(grad_F) expected_fwd[..., 0] *= num_fwd / den_fwd expected_fwd[..., 1] *= num_fwd / den_fwd # apply Gaussian smoothing expected_fwd[..., 0] = ndimage.gaussian_filter(expected_fwd[..., 0], 3.0) expected_fwd[..., 1] = ndimage.gaussian_filter(expected_fwd[..., 1], 3.0) num_bwd = sigma_x_sq * (F - G) den_bwd = sigma_x_sq * sq_norm_grad_G + sigma_i_sq # This is $J^{P}$ in eq. 
4 [Vercauteren09] expected_bwd = -1 * np.array(grad_G) expected_bwd[..., 0] *= num_bwd / den_bwd expected_bwd[..., 1] *= num_bwd / den_bwd # apply Gaussian smoothing expected_bwd[..., 0] = ndimage.gaussian_filter(expected_bwd[..., 0], 3.0) expected_bwd[..., 1] = ndimage.gaussian_filter(expected_bwd[..., 1], 3.0) assert_array_almost_equal(actual_forward, expected_fwd) assert_array_almost_equal(actual_backward, expected_bwd) def test_em_demons_step_3d(): r""" Compares the output of the demons step in 3d against an analytical step. The fixed image is given by $F(x) = \frac{1}{2}||x - c_f||^2$, the moving image is given by $G(x) = \frac{1}{2}||x - c_g||^2$, $x, c_f, c_g \in R^{3}$ References ---------- [Vercauteren09] Vercauteren, T., Pennec, X., Perchant, A., & Ayache, N. (2009). Diffeomorphic demons: efficient non-parametric image registration. NeuroImage, 45(1 Suppl), S61-72. doi:10.1016/j.neuroimage.2008.10.040 """ # Select arbitrary images' shape (same shape for both images) sh = (20, 15, 10) # Select arbitrary centers c_f = np.asarray(sh) / 2 c_g = c_f + 0.5 # Compute the identity vector field I(x) = x in R^2 x_0 = np.asarray(range(sh[0])) x_1 = np.asarray(range(sh[1])) x_2 = np.asarray(range(sh[2])) X = np.ndarray(sh + (3,), dtype=np.float64) _O = np.ones(sh) X[..., 0] = x_0[:, None, None] * _O X[..., 1] = x_1[None, :, None] * _O X[..., 2] = x_2[None, None, :] * _O # Compute the gradient fields of F and G grad_F = X - c_f grad_G = X - c_g # The squared norm of grad_G to be used later sq_norm_grad_F = np.sum(grad_F**2, -1) sq_norm_grad_G = np.sum(grad_G**2, -1) # Compute F and G F = 0.5 * sq_norm_grad_F G = 0.5 * sq_norm_grad_G # Create an instance of EMMetric metric = EMMetric(3) metric.static_spacing = np.array([1.2, 1.2, 1.2]) # The $\sigma_x$ (eq. 4 in [Vercauteren09]) parameter is computed in ANTS # based on the image's spacing sigma_x_sq = np.sum(metric.static_spacing**2) / metric.dim # Set arbitrary values for $\sigma_i$ (eq. 
4 in [Vercauteren09]) # The original Demons algorithm used simply |F(x) - G(x)| as an # estimator, so let's use it as well sigma_i_sq = (F - G) ** 2 # Set the properties relevant to the demons methods metric.smooth = 3.0 metric.gradient_static = np.array(grad_F, dtype=floating) metric.gradient_moving = np.array(grad_G, dtype=floating) metric.static_image = np.array(F, dtype=floating) metric.moving_image = np.array(G, dtype=floating) metric.staticq_means_field = np.array(F, dtype=floating) metric.staticq_sigma_sq_field = np.array(sigma_i_sq, dtype=floating) metric.movingq_means_field = np.array(G, dtype=floating) metric.movingq_sigma_sq_field = np.array(sigma_i_sq, dtype=floating) # compute the step using the implementation under test actual_forward = metric.compute_demons_step(forward_step=True) actual_backward = metric.compute_demons_step(forward_step=False) # Now directly compute the demons steps according to eq 4 in # [Vercauteren09] num_fwd = sigma_x_sq * (G - F) den_fwd = sigma_x_sq * sq_norm_grad_F + sigma_i_sq expected_fwd = -1 * np.array(grad_F) expected_fwd[..., 0] *= num_fwd / den_fwd expected_fwd[..., 1] *= num_fwd / den_fwd expected_fwd[..., 2] *= num_fwd / den_fwd # apply Gaussian smoothing expected_fwd[..., 0] = ndimage.gaussian_filter(expected_fwd[..., 0], 3.0) expected_fwd[..., 1] = ndimage.gaussian_filter(expected_fwd[..., 1], 3.0) expected_fwd[..., 2] = ndimage.gaussian_filter(expected_fwd[..., 2], 3.0) num_bwd = sigma_x_sq * (F - G) den_bwd = sigma_x_sq * sq_norm_grad_G + sigma_i_sq expected_bwd = -1 * np.array(grad_G) expected_bwd[..., 0] *= num_bwd / den_bwd expected_bwd[..., 1] *= num_bwd / den_bwd expected_bwd[..., 2] *= num_bwd / den_bwd # apply Gaussian smoothing expected_bwd[..., 0] = ndimage.gaussian_filter(expected_bwd[..., 0], 3.0) expected_bwd[..., 1] = ndimage.gaussian_filter(expected_bwd[..., 1], 3.0) expected_bwd[..., 2] = ndimage.gaussian_filter(expected_bwd[..., 2], 3.0) assert_array_almost_equal(actual_forward, expected_fwd) assert_array_almost_equal(actual_backward, expected_bwd) dipy-1.11.0/dipy/align/tests/test_parzenhist.py000066400000000000000000000660611476546756600216030ustar00rootroot00000000000000from functools import reduce from operator import mul import numpy as np from numpy.testing import ( assert_almost_equal, assert_array_almost_equal, assert_array_equal, assert_equal, assert_raises, ) import scipy as sp from dipy.align import vector_fields as vf from dipy.align.parzenhist import ( ParzenJointHistogram, cubic_spline, cubic_spline_derivative, sample_domain_regular, ) from dipy.align.transforms import regtransforms from dipy.core.interpolation import interpolate_scalar_2d, interpolate_scalar_3d from dipy.core.ndindex import ndindex from dipy.data import get_fnames from dipy.testing.decorators import set_random_number_generator factors = { ("TRANSLATION", 2): 2.0, ("ROTATION", 2): 0.1, ("RIGID", 2): 0.1, ("SCALING", 2): 0.01, ("AFFINE", 2): 0.1, ("TRANSLATION", 3): 2.0, ("ROTATION", 3): 0.1, ("RIGID", 3): 0.1, ("SCALING", 3): 0.1, ("AFFINE", 3): 0.1, } def create_random_image_pair(sh, nvals, rng=None): r"""Create a pair of images with an arbitrary, non-uniform joint PDF Parameters ---------- sh : array, shape (dim,) the shape of the images to be created nvals : int maximum number of different values in the generated 2D images. The voxel intensities of the returned images will be in {0, 1, ..., nvals-1} rng : numpy.random.generator numpy's random generator. If None, it is set with a random seed. 
Default is None Returns ------- static : array, shape=sh first image in the image pair moving : array, shape=sh second image in the image pair """ if rng is None: rng = np.random.default_rng() sz = reduce(mul, sh, 1) sh = tuple(sh) static = rng.integers(0, nvals, sz).reshape(sh) # This is just a simple way of making the distribution non-uniform moving = static.copy() moving += rng.integers(0, nvals // 2, sz).reshape(sh) - nvals // 4 # This is just a simple way of making the distribution non-uniform static = moving.copy() static += rng.integers(0, nvals // 2, sz).reshape(sh) - nvals // 4 return static.astype(np.float64), moving.astype(np.float64) def test_cubic_spline(): # Cubic spline as defined in [Mattes03] eq. (3) # # [Mattes03] Mattes, D., Haynor, D. R., Vesselle, H., Lewellen, T. K., # & Eubank, W. PET-CT image registration in the chest using # free-form deformations. IEEE Transactions on Medical Imaging, # 22(1), 120-8, 2003. in_list = [] expected = [] for epsilon in [-1e-9, 0.0, 1e-9]: for t in [-2.0, -1.0, 0.0, 1.0, 2.0]: x = t + epsilon in_list.append(x) absx = np.abs(x) sqrx = x * x if absx < 1: expected.append((4.0 - 6 * sqrx + 3.0 * (absx**3)) / 6.0) elif absx < 2: expected.append(((2 - absx) ** 3) / 6.0) else: expected.append(0.0) actual = cubic_spline(np.array(in_list, dtype=np.float64)) assert_array_almost_equal(actual, np.array(expected, dtype=np.float64)) def test_cubic_spline_derivative(): # Test derivative of the cubic spline, as defined in [Mattes03] eq. (3) by # comparing the analytical and numerical derivatives # # [Mattes03] Mattes, D., Haynor, D. R., Vesselle, H., Lewellen, T. K., # & Eubank, W. PET-CT image registration in the chest using # free-form deformations. IEEE Transactions on Medical Imaging, # 22(1), 120-8, 2003. in_list = [] for epsilon in [-1e-9, 0.0, 1e-9]: for t in [-2.0, -1.0, 0.0, 1.0, 2.0]: x = t + epsilon in_list.append(x) h = 1e-6 in_list = np.array(in_list) input_h = in_list + h s = np.array(cubic_spline(in_list)) s_h = np.array(cubic_spline(input_h)) expected = (s_h - s) / h actual = cubic_spline_derivative(in_list) assert_array_almost_equal(actual, expected) def test_parzen_joint_histogram(): # Test the simple functionality of ParzenJointHistogram, # the gradients and computation of the joint intensity distribution # will be tested independently for nbins in [15, 30, 50]: for min_int in [-10.0, 0.0, 10.0]: for intensity_range in [0.1, 1.0, 10.0]: fact = 1 max_int = min_int + intensity_range P = ParzenJointHistogram(nbins) # Make a pair of 4-pixel images, introduce +/- 1 values # that will be excluded using a mask static = np.array([min_int - 1.0, min_int, max_int, max_int + 1.0]) # Multiply by an arbitrary value (make the ranges different) moving = fact * np.array( [min_int, min_int - 1.0, max_int + 1.0, max_int] ) # Create a mask to exclude the invalid values (beyond min and # max computed above) static_mask = np.array([0, 1, 1, 0]) moving_mask = np.array([1, 0, 0, 1]) P.setup(static, moving, smask=static_mask, mmask=moving_mask) # Test bin_normalize_static at the boundary normalized = P.bin_normalize_static(min_int) assert_almost_equal(normalized, P.padding) index = P.bin_index(normalized) assert_equal(index, P.padding) normalized = P.bin_normalize_static(max_int) assert_almost_equal(normalized, nbins - P.padding) index = P.bin_index(normalized) assert_equal(index, nbins - 1 - P.padding) # Test bin_normalize_moving at the boundary normalized = P.bin_normalize_moving(fact * min_int) assert_almost_equal(normalized, P.padding) index = 
P.bin_index(normalized) assert_equal(index, P.padding) normalized = P.bin_normalize_moving(fact * max_int) assert_almost_equal(normalized, nbins - P.padding) index = P.bin_index(normalized) assert_equal(index, nbins - 1 - P.padding) # Test bin_index not at the boundary delta_s = (max_int - min_int) / (nbins - 2 * P.padding) delta_m = fact * (max_int - min_int) / (nbins - 2 * P.padding) for i in range(nbins - 2 * P.padding): normalized = P.bin_normalize_static(min_int + (i + 0.5) * delta_s) index = P.bin_index(normalized) assert_equal(index, P.padding + i) normalized = P.bin_normalize_moving( fact * min_int + (i + 0.5) * delta_m ) index = P.bin_index(normalized) assert_equal(index, P.padding + i) @set_random_number_generator(1246592) def test_parzen_densities(rng): # Test the computation of the joint intensity distribution # using a dense and a sparse set of values nbins = 32 nr = 30 nc = 35 ns = 20 nvals = 50 for dim in [2, 3]: if dim == 2: shape = (nr, nc) static, moving = create_random_image_pair(shape, nvals, rng) else: shape = (ns, nr, nc) static, moving = create_random_image_pair(shape, nvals, rng) # Initialize parzen_hist = ParzenJointHistogram(nbins) parzen_hist.setup(static, moving) # Get distributions computed by dense sampling parzen_hist.update_pdfs_dense(static, moving) actual_joint_dense = parzen_hist.joint actual_mmarginal_dense = parzen_hist.mmarginal actual_smarginal_dense = parzen_hist.smarginal # Get distributions computed by sparse sampling sval = static.reshape(-1) mval = moving.reshape(-1) parzen_hist.update_pdfs_sparse(sval, mval) actual_joint_sparse = parzen_hist.joint actual_mmarginal_sparse = parzen_hist.mmarginal actual_smarginal_sparse = parzen_hist.smarginal # Compute the expected joint distribution with dense sampling expected_joint_dense = np.zeros(shape=(nbins, nbins)) for index in ndindex(shape): sv = parzen_hist.bin_normalize_static(static[index]) mv = parzen_hist.bin_normalize_moving(moving[index]) sbin = parzen_hist.bin_index(sv) # The spline is centered at mv, will evaluate for all row spline_arg = np.array([i - mv for i in range(nbins)]) contribution = cubic_spline(spline_arg) expected_joint_dense[sbin, :] += contribution # Compute the expected joint distribution with sparse sampling expected_joint_sparse = np.zeros(shape=(nbins, nbins)) for index in range(sval.shape[0]): sv = parzen_hist.bin_normalize_static(sval[index]) mv = parzen_hist.bin_normalize_moving(mval[index]) sbin = parzen_hist.bin_index(sv) # The spline is centered at mv, will evaluate for all row spline_arg = np.array([i - mv for i in range(nbins)]) contribution = cubic_spline(spline_arg) expected_joint_sparse[sbin, :] += contribution # Verify joint distributions expected_joint_dense /= expected_joint_dense.sum() expected_joint_sparse /= expected_joint_sparse.sum() assert_array_almost_equal(actual_joint_dense, expected_joint_dense) assert_array_almost_equal(actual_joint_sparse, expected_joint_sparse) # Verify moving marginals expected_mmarginal_dense = expected_joint_dense.sum(0) expected_mmarginal_dense /= expected_mmarginal_dense.sum() expected_mmarginal_sparse = expected_joint_sparse.sum(0) expected_mmarginal_sparse /= expected_mmarginal_sparse.sum() assert_array_almost_equal(actual_mmarginal_dense, expected_mmarginal_dense) assert_array_almost_equal(actual_mmarginal_sparse, expected_mmarginal_sparse) # Verify static marginals expected_smarginal_dense = expected_joint_dense.sum(1) expected_smarginal_dense /= expected_smarginal_dense.sum() expected_smarginal_sparse = 
expected_joint_sparse.sum(1) expected_smarginal_sparse /= expected_smarginal_sparse.sum() assert_array_almost_equal(actual_smarginal_dense, expected_smarginal_dense) assert_array_almost_equal(actual_smarginal_sparse, expected_smarginal_sparse) def setup_random_transform(transform, rfactor, nslices=45, sigma=1, rng=None): r"""Creates a pair of images related to each other by an affine transform We transform the static image with a random transform so that the returned ground-truth transform will produce the static image when applied to the moving image. This will simply stack some copies of a T1 coronal slice image and add some zero slices up and down to reduce boundary artefacts when interpolating. Parameters ---------- transform: instance of Transform defines the type of random transformation that will be created rfactor: float the factor to multiply the uniform(0,1) random noise that will be added to the identity parameters to create the random transform nslices: int number of slices to be stacked to form the volumes sigma: float standard deviation of the gaussian filter rng: np.random.Generator numpy's random generator. If None, it is set with a random seed. Default is None """ if rng is None: rng = np.random.default_rng() dim = 2 if nslices == 1 else 3 if transform.get_dim() != dim: raise ValueError("Transform and requested volume have different dims.") zero_slices = nslices // 3 fname = get_fnames(name="t1_coronal_slice") moving_slice = np.load(fname) moving_slice = moving_slice[40:180, 50:210] if nslices == 1: dim = 2 moving = moving_slice transform_method = vf.transform_2d_affine else: dim = 3 transform_method = vf.transform_3d_affine moving = np.zeros(shape=moving_slice.shape + (nslices,)) moving[..., zero_slices : (2 * zero_slices)] = moving_slice[..., None] moving = sp.ndimage.gaussian_filter(moving, sigma) moving_g2w = np.eye(dim + 1) mmask = np.ones_like(moving, dtype=np.int32) # Create a transform by slightly perturbing the identity parameters theta = transform.get_identity_parameters() n = transform.get_number_of_parameters() theta += rng.random(n) * rfactor M = transform.param_to_matrix(theta) shape = np.array(moving.shape, dtype=np.int32) static = np.array(transform_method(moving.astype(np.float32), shape, affine=M)) static = static.astype(np.float64) static_g2w = np.eye(dim + 1) smask = np.ones_like(static, dtype=np.int32) return static, moving, static_g2w, moving_g2w, smask, mmask, M @set_random_number_generator(3147702) def test_joint_pdf_gradients_dense(rng): # Compare the analytical and numerical (finite differences) gradient of # the joint distribution (i.e. derivatives of each histogram cell) w.r.t. # the transform parameters. Since the histograms are discrete partitions # of the image intensities, the finite difference approximation is # normally not very close to the analytical derivatives. Other sources of # error are the interpolation used when transforming the images and the # boundary intensities introduced when interpolating outside of the image # (i.e. some "zeros" are introduced at the boundary which affect the # numerical derivatives but is not taken into account by the analytical # derivatives). Thus, we need to relax the verification. Instead of # looking for the analytical and numerical gradients to be very close to # each other, we will verify that they approximately point in the same # direction by testing if the angle they form is close to zero. h = 1e-4 # Make sure dictionary entries are processed in the same order regardless # of the platform. 
Otherwise any random numbers drawn within the loop # would make the test non-deterministic even if we fix the seed before # the loop. Right now, this test does not draw any samples, but we still # sort the entries to prevent future related failures. for ttype in sorted(factors): dim = ttype[1] if dim == 2: nslices = 1 transform_method = vf.transform_2d_affine else: nslices = 45 transform_method = vf.transform_3d_affine transform = regtransforms[ttype] factor = factors[ttype] theta = transform.get_identity_parameters() static, moving, static_g2w, moving_g2w, smask, mmask, M = ( setup_random_transform(transform, factor, nslices, 5.0, rng=rng) ) parzen_hist = ParzenJointHistogram(32) parzen_hist.setup(static, moving, smask=smask, mmask=mmask) # Compute the gradient at theta with the implementation under test M = transform.param_to_matrix(theta) shape = np.array(static.shape, dtype=np.int32) moved = transform_method(moving.astype(np.float32), shape, affine=M) moved = np.array(moved) parzen_hist.update_pdfs_dense( static.astype(np.float64), moved.astype(np.float64) ) # Get the joint distribution evaluated at theta J0 = np.copy(parzen_hist.joint) grid_to_space = np.eye(dim + 1) spacing = np.ones(dim, dtype=np.float64) mgrad, inside = vf.gradient( moving.astype(np.float32), moving_g2w, spacing, shape, grid_to_space ) params = transform.get_identity_parameters() parzen_hist.update_gradient_dense( params, transform, static.astype(np.float64), moved.astype(np.float64), grid_to_space, mgrad, smask=smask, mmask=mmask, ) actual = np.copy(parzen_hist.joint_grad) # Now we have the gradient of the joint distribution w.r.t. the # transform parameters # Compute the gradient using finite differences n = transform.get_number_of_parameters() expected = np.empty_like(actual) for i in range(n): dtheta = theta.copy() dtheta[i] += h # Update the joint distribution with the transformed moving image M = transform.param_to_matrix(dtheta) shape = np.array(static.shape, dtype=np.int32) moved = transform_method(moving.astype(np.float32), shape, affine=M) moved = np.array(moved) parzen_hist.update_pdfs_dense( static.astype(np.float64), moved.astype(np.float64) ) J1 = np.copy(parzen_hist.joint) expected[..., i] = (J1 - J0) / h # Dot product and norms of gradients of each joint histogram cell # i.e. the derivatives of each cell w.r.t. all parameters P = (expected * actual).sum(2) enorms = np.sqrt((expected**2).sum(2)) anorms = np.sqrt((actual**2).sum(2)) prodnorms = enorms * anorms # Cosine of angle between the expected and actual gradients. # Exclude very small gradients P[prodnorms > 1e-6] /= prodnorms[prodnorms > 1e-6] P[prodnorms <= 1e-6] = 0 # Verify that a large proportion of the gradients point almost in # the same direction. Disregard very small gradients mean_cosine = P[P != 0].mean() std_cosine = P[P != 0].std() assert mean_cosine > 0.9 assert std_cosine < 0.25 @set_random_number_generator(3147702) def test_joint_pdf_gradients_sparse(rng): h = 1e-4 # Make sure dictionary entries are processed in the same order regardless # of the platform. Otherwise any random numbers drawn within the loop # would make the test non-deterministic even if we fix the seed before # the loop.Right now, this test does not draw any samples, but we still # sort the entries to prevent future related failures. 
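    # As in the dense case above, the verification below does not compare
    # the gradients cell-by-cell. It checks that the analytical and
    # finite-difference gradients of each joint-histogram cell point in
    # roughly the same direction, i.e. that
    #     cos(angle) = <expected, actual> / (||expected|| * ||actual||)
    # stays close to 1 after excluding near-zero gradients.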
for ttype in sorted(factors): dim = ttype[1] if dim == 2: nslices = 1 interp_method = interpolate_scalar_2d else: nslices = 45 interp_method = interpolate_scalar_3d transform = regtransforms[ttype] factor = factors[ttype] theta = transform.get_identity_parameters() static, moving, static_g2w, moving_g2w, smask, mmask, M = ( setup_random_transform(transform, factor, nslices, 5.0, rng=rng) ) parzen_hist = ParzenJointHistogram(32) parzen_hist.setup(static, moving, smask=smask, mmask=mmask) # Sample the fixed-image domain k = 3 sigma = 0.25 shape = np.array(static.shape, dtype=np.int32) samples = sample_domain_regular(k, shape, static_g2w, sigma=sigma, rng=rng) samples = np.array(samples) samples = np.hstack((samples, np.ones(samples.shape[0])[:, None])) sp_to_static = np.linalg.inv(static_g2w) samples_static_grid = sp_to_static.dot(samples.T).T[..., :dim] intensities_static, inside = interp_method( static.astype(np.float32), samples_static_grid ) # The routines in vector_fields operate, mostly, with float32 because # they were thought to be used for non-linear registration. We may need # to write some float64 counterparts for affine registration, where # memory is not so big issue intensities_static = np.array(intensities_static, dtype=np.float64) # Compute the gradient at theta with the implementation under test M = transform.param_to_matrix(theta) sp_to_moving = np.linalg.inv(moving_g2w).dot(M) samples_moving_grid = sp_to_moving.dot(samples.T).T[..., :dim] intensities_moving, inside = interp_method( moving.astype(np.float32), samples_moving_grid ) intensities_moving = np.array(intensities_moving, dtype=np.float64) parzen_hist.update_pdfs_sparse(intensities_static, intensities_moving) # Get the joint distribution evaluated at theta J0 = np.copy(parzen_hist.joint) spacing = np.ones(dim + 1, dtype=np.float64) mgrad, inside = vf.sparse_gradient( moving.astype(np.float32), sp_to_moving, spacing, samples ) parzen_hist.update_gradient_sparse( theta, transform, intensities_static, intensities_moving, samples[..., :dim], mgrad, ) # Get the gradient of the joint distribution w.r.t. the transform # parameters actual = np.copy(parzen_hist.joint_grad) # Compute the gradient using finite differences n = transform.get_number_of_parameters() expected = np.empty_like(actual) for i in range(n): dtheta = theta.copy() dtheta[i] += h # Update the joint distribution with the transformed moving image M = transform.param_to_matrix(dtheta) sp_to_moving = np.linalg.inv(moving_g2w).dot(M) samples_moving_grid = sp_to_moving.dot(samples.T).T intensities_moving, inside = interp_method( moving.astype(np.float32), samples_moving_grid ) intensities_moving = np.array(intensities_moving, dtype=np.float64) parzen_hist.update_pdfs_sparse(intensities_static, intensities_moving) J1 = np.copy(parzen_hist.joint) expected[..., i] = (J1 - J0) / h # Dot product and norms of gradients of each joint histogram cell # i.e. the derivatives of each cell w.r.t. all parameters P = (expected * actual).sum(2) enorms = np.sqrt((expected**2).sum(2)) anorms = np.sqrt((actual**2).sum(2)) prodnorms = enorms * anorms # Cosine of angle between the expected and actual gradients. # Exclude very small gradients P[prodnorms > 1e-6] /= prodnorms[prodnorms > 1e-6] P[prodnorms <= 1e-6] = 0 # Verify that a large proportion of the gradients point almost in # the same direction. 
Disregard very small gradients mean_cosine = P[P != 0].mean() std_cosine = P[P != 0].std() assert mean_cosine > 0.98 assert std_cosine < 0.16 def test_sample_domain_regular(): # Test 2D sampling shape = np.array((10, 10), dtype=np.int32) affine = np.eye(3) invalid_affine = np.eye(2) sigma = 0 dim = len(shape) n = shape[0] * shape[1] k = 2 # Verify exception is raised with invalid affine assert_raises( ValueError, sample_domain_regular, k, shape, invalid_affine, sigma=sigma ) samples = sample_domain_regular(k, shape, affine, sigma=sigma) isamples = np.array(samples, dtype=np.int32) indices = isamples[:, 0] * shape[1] + isamples[:, 1] # Verify correct number of points sampled assert_array_equal(samples.shape, [n // k, dim]) # Verify all sampled points are different assert_equal(len(set(indices)), len(indices)) # Verify the sampling was regular at rate k assert_equal((indices % k).sum(), 0) # Test 3D sampling shape = np.array((5, 10, 10), dtype=np.int32) affine = np.eye(4) invalid_affine = np.eye(3) sigma = 0 dim = len(shape) n = shape[0] * shape[1] * shape[2] k = 10 # Verify exception is raised with invalid affine assert_raises( ValueError, sample_domain_regular, k, shape, invalid_affine, sigma=sigma ) samples = sample_domain_regular(k, shape, affine, sigma=sigma) isamples = np.array(samples, dtype=np.int32) indices = ( isamples[:, 0] * shape[1] * shape[2] + isamples[:, 1] * shape[2] + isamples[:, 2] ) # Verify correct number of points sampled assert_array_equal(samples.shape, [n // k, dim]) # Verify all sampled points are different assert_equal(len(set(indices)), len(indices)) # Verify the sampling was regular at rate k assert_equal((indices % k).sum(), 0) def test_exceptions(): H = ParzenJointHistogram(32) valid = np.empty((2, 2, 2), dtype=np.float64) invalid = np.empty((2, 2, 2, 2), dtype=np.float64) # Test exception from `ParzenJointHistogram.update_pdfs_dense` assert_raises(ValueError, H.update_pdfs_dense, valid, invalid) assert_raises(ValueError, H.update_pdfs_dense, invalid, valid) assert_raises(ValueError, H.update_pdfs_dense, invalid, invalid) # Test exception from `ParzenJointHistogram.update_gradient_dense` for shape in [(5, 5), (5, 5, 5)]: dim = len(shape) grid2world = np.eye(dim + 1) transform = regtransforms[("ROTATION", dim)] theta = transform.get_identity_parameters() valid_img = np.empty(shape, dtype=np.float64) valid_grad = np.empty(shape + (dim,), dtype=np.float64) invalid_img = np.empty((2, 2, 2, 2), dtype=np.float64) invalid_grad_type = np.empty_like(valid_grad, dtype=np.int32) invalid_grad_dim = np.empty(shape + (dim + 1,), dtype=np.float64) for s, m, g in [ (valid_img, valid_img, invalid_grad_type), (valid_img, valid_img, invalid_grad_dim), (invalid_img, valid_img, valid_grad), (invalid_img, invalid_img, invalid_grad_type), (invalid_img, invalid_img, invalid_grad_dim), ]: assert_raises( ValueError, H.update_gradient_dense, theta, transform, s, m, grid2world, g, ) # Test exception from `ParzenJointHistogram.update_gradient_dense` nsamples = 2 for dim in [2, 3]: transform = regtransforms[("ROTATION", dim)] theta = transform.get_identity_parameters() valid_vals = np.empty((nsamples,), dtype=np.float64) valid_grad = np.empty((nsamples, dim), dtype=np.float64) valid_points = np.empty((nsamples, dim), dtype=np.float64) invalid_grad_type = np.empty((nsamples, dim), dtype=np.int32) invalid_grad_dim = np.empty((nsamples, dim + 2), dtype=np.float64) invalid_grad_len = np.empty((nsamples + 1, dim), dtype=np.float64) invalid_vals = np.empty((nsamples + 1), dtype=np.float64) 
invalid_points_dim = np.empty((nsamples, dim + 2), dtype=np.float64) invalid_points_len = np.empty((nsamples + 1, dim), dtype=np.float64) C = [ (invalid_vals, valid_vals, valid_points, valid_grad), (valid_vals, invalid_vals, valid_points, valid_grad), (valid_vals, valid_vals, invalid_points_dim, valid_grad), (valid_vals, valid_vals, invalid_points_dim, invalid_grad_dim), (valid_vals, valid_vals, invalid_points_len, valid_grad), (valid_vals, valid_vals, valid_points, invalid_grad_type), (valid_vals, valid_vals, valid_points, invalid_grad_dim), (valid_vals, valid_vals, valid_points, invalid_grad_len), ] for s, m, p, g in C: assert_raises( ValueError, H.update_gradient_sparse, theta, transform, s, m, p, g ) dipy-1.11.0/dipy/align/tests/test_reslice.py000066400000000000000000000051371476546756600210370ustar00rootroot00000000000000import nibabel as nib import numpy as np from numpy.testing import assert_, assert_almost_equal, assert_equal, assert_raises from dipy.align.reslice import reslice from dipy.data import get_fnames from dipy.denoise.noise_estimate import estimate_sigma from dipy.io.image import load_nifti def test_resample(): fimg, _, _ = get_fnames(name="small_25") data, affine, zooms = load_nifti(fimg, return_voxsize=True) # test that new zooms are correctly from the affine (check with 3D volume) new_zooms = (1, 1.2, 2.1) data2, affine2 = reslice( data[..., 0], affine, zooms, new_zooms, order=1, mode="constant" ) img2 = nib.Nifti1Image(data2, affine2) new_zooms_confirmed = img2.header.get_zooms()[:3] assert_almost_equal(new_zooms, new_zooms_confirmed) # test that shape changes correctly for the first 3 dimensions (check 4D) new_zooms = (1, 1, 1.0) data2, affine2 = reslice(data, affine, zooms, new_zooms, order=0, mode="reflect") assert_equal(2 * np.array(data.shape[:3]), data2.shape[:3]) assert_equal(data2.shape[-1], data.shape[-1]) # same with different interpolation order new_zooms = (1, 1, 1.0) data3, affine2 = reslice(data, affine, zooms, new_zooms, order=5, mode="reflect") assert_equal(2 * np.array(data.shape[:3]), data3.shape[:3]) assert_equal(data3.shape[-1], data.shape[-1]) # test that the sigma will be reduced with interpolation sigmas = estimate_sigma(data) sigmas2 = estimate_sigma(data2) sigmas3 = estimate_sigma(data3) assert_(np.all(sigmas > sigmas2)) assert_(np.all(sigmas2 > sigmas3)) # check that 4D resampling matches 3D resampling data2, affine2 = reslice(data, affine, zooms, new_zooms) for i in range(data.shape[-1]): _data, _affine = reslice(data[..., i], affine, zooms, new_zooms) assert_almost_equal(data2[..., i], _data) assert_almost_equal(affine2, _affine) # check use of multiprocessing pool of specified size data3, affine3 = reslice(data, affine, zooms, new_zooms, num_processes=4) assert_almost_equal(data2, data3) assert_almost_equal(affine2, affine3) # check use of multiprocessing pool of autoconfigured size data3, affine3 = reslice(data, affine, zooms, new_zooms, num_processes=-1) assert_almost_equal(data2, data3) assert_almost_equal(affine2, affine3) # test invalid values of num_threads assert_raises(ValueError, reslice, data, affine, zooms, new_zooms, num_processes=0) # test invalid volume dimension assert_raises( ValueError, reslice, np.zeros((4, 4, 4, 4, 1)), affine, zooms, new_zooms ) dipy-1.11.0/dipy/align/tests/test_scalespace.py000066400000000000000000000067411476546756600215160ustar00rootroot00000000000000import numpy as np from numpy.testing import assert_array_almost_equal, assert_array_equal, assert_raises import scipy as sp from dipy.align import 
floating from dipy.align.imwarp import get_direction_and_spacings from dipy.align.scalespace import IsotropicScaleSpace, ScaleSpace from dipy.align.tests.test_imwarp import get_synthetic_warped_circle from dipy.testing.decorators import set_random_number_generator def test_scale_space(): num_levels = 3 for test_class in [ScaleSpace, IsotropicScaleSpace]: for dim in [2, 3]: print(dim, test_class) if dim == 2: moving, static = get_synthetic_warped_circle(1) else: moving, static = get_synthetic_warped_circle(30) input_spacing = np.array([1.1, 1.2, 1.5])[:dim] grid2world = np.diag(tuple(input_spacing) + (1.0,)) original = moving if test_class is ScaleSpace: ss = test_class( original, num_levels, image_grid2world=grid2world, input_spacing=input_spacing, ) elif test_class is IsotropicScaleSpace: factors = [4, 2, 1] sigmas = [3.0, 1.0, 0.0] ss = test_class( original, factors, sigmas, image_grid2world=grid2world, input_spacing=input_spacing, ) for level in range(num_levels): # Verify sigmas and images are consistent sigmas = ss.get_sigmas(level) expected = sp.ndimage.gaussian_filter(original, sigmas) expected = (expected - expected.min()) / ( expected.max() - expected.min() ) actual = ss.get_image(level) assert_array_almost_equal(actual, expected) # Verify scalings and spacings are consistent spacings = ss.get_spacing(level) scalings = ss.get_scaling(level) expected = ss.get_spacing(0) * scalings actual = ss.get_spacing(level) assert_array_almost_equal(actual, expected) # Verify affine and affine_inv are consistent affine = ss.get_affine(level) affine_inv = ss.get_affine_inv(level) expected = np.eye(1 + dim) actual = affine.dot(affine_inv) assert_array_almost_equal(actual, expected) # Verify affine consistent with spacings exp_dir, expected_sp = get_direction_and_spacings(affine, dim) actual_sp = spacings assert_array_almost_equal(actual_sp, expected_sp) @set_random_number_generator(2022966) def test_scale_space_exceptions(rng): target_shape = (32, 32) # create a random image image = np.ndarray(target_shape, dtype=floating) ns = np.size(image) image[...] 
= rng.integers(0, 10, ns).reshape(tuple(target_shape))
    zeros = (image == 0).astype(np.int32)

    ss = ScaleSpace(image, 3)
    for invalid_level in [-1, 3, 4]:
        assert_raises(ValueError, ss.get_image, invalid_level)

    # Verify that the mask is correctly applied, when requested
    ss = ScaleSpace(image, 3, mask0=True)
    for level in range(3):
        img = ss.get_image(level)
        z = (img == 0).astype(np.int32)
        assert_array_equal(zeros, z)

dipy-1.11.0/dipy/align/tests/test_streamlinear.py

import numpy as np
from numpy.testing import (
    assert_,
    assert_almost_equal,
    assert_array_almost_equal,
    assert_equal,
    assert_raises,
)

from dipy.align.bundlemin import (
    _bundle_minimum_distance,
    _bundle_minimum_distance_matrix,
    distance_matrix_mdf,
)
from dipy.align.streamlinear import (
    BundleMinDistanceMatrixMetric,
    BundleMinDistanceMetric,
    BundleSumDistanceMatrixMetric,
    StreamlineDistanceMetric,
    StreamlineLinearRegistration,
    compose_matrix44,
    decompose_matrix44,
    get_unique_pairs,
    groupwise_slr,
)
from dipy.core.geometry import compose_matrix
from dipy.data import get_fnames, read_five_af_bundles, two_cingulum_bundles
from dipy.io.streamline import load_tractogram
from dipy.testing.decorators import set_random_number_generator
from dipy.tracking.streamline import (
    Streamlines,
    center_streamlines,
    relist_streamlines,
    set_number_of_points,
    transform_streamlines,
    unlist_streamlines,
)


def simulated_bundle(no_streamlines=10, waves=False, no_pts=12):
    t = np.linspace(-10, 10, 200)
    # parallel waves or parallel lines
    bundle = []
    for i in np.linspace(-5, 5, no_streamlines):
        if waves:
            pts = np.vstack((np.cos(t), t, i * np.ones(t.shape))).T
        else:
            pts = np.vstack((np.zeros(t.shape), t, i * np.ones(t.shape))).T
        pts = set_number_of_points(pts, nb_points=no_pts)
        bundle.append(pts)
    return bundle


def fornix_streamlines(no_pts=12):
    fname = get_fnames(name="fornix")
    fornix = load_tractogram(fname, "same", bbox_valid_check=False).streamlines
    fornix_streamlines = Streamlines(fornix)
    streamlines = set_number_of_points(fornix_streamlines, nb_points=no_pts)
    return streamlines


def evaluate_convergence(bundle, new_bundle2):
    pts_static = np.concatenate(bundle, axis=0)
    pts_moved = np.concatenate(new_bundle2, axis=0)
    assert_array_almost_equal(pts_static, pts_moved, 3)


def test_rigid_parallel_lines():
    bundle_initial = simulated_bundle()
    bundle, shift = center_streamlines(bundle_initial)
    mat = compose_matrix44([20, 0, 10, 0, 40, 0])
    bundle2 = transform_streamlines(bundle, mat)
    bundle_sum_distance = BundleSumDistanceMatrixMetric()
    options = {"maxcor": 100, "ftol": 1e-9, "gtol": 1e-16, "eps": 1e-3}
    srr = StreamlineLinearRegistration(
        metric=bundle_sum_distance,
        x0=np.zeros(6),
        method="L-BFGS-B",
        bounds=None,
        options=options,
    )
    new_bundle2 = srr.optimize(bundle, bundle2).transform(bundle2)
    evaluate_convergence(bundle, new_bundle2)


def test_rigid_real_bundles():
    bundle_initial = fornix_streamlines()[:20]
    bundle, shift = center_streamlines(bundle_initial)
    mat = compose_matrix44([0, 0, 20, 45.0, 0, 0])
    bundle2 = transform_streamlines(bundle, mat)

    bundle_sum_distance = BundleSumDistanceMatrixMetric()
    srr = StreamlineLinearRegistration(
        metric=bundle_sum_distance, x0=np.zeros(6), method="Powell"
    )
    new_bundle2 = srr.optimize(bundle, bundle2).transform(bundle2)
    evaluate_convergence(bundle, new_bundle2)

    bundle_min_distance = BundleMinDistanceMatrixMetric()
    srr = StreamlineLinearRegistration(
        metric=bundle_min_distance, x0=np.zeros(6), method="Powell"
    )
    new_bundle2 = srr.optimize(bundle, bundle2).transform(bundle2)
    evaluate_convergence(bundle, new_bundle2)

    assert_raises(ValueError, StreamlineLinearRegistration, method="Whatever")


def test_rigid_partial_real_bundles():
    static = fornix_streamlines()[:20]
    moving = fornix_streamlines()[20:40]
    static_center, shift = center_streamlines(static)
    moving_center, shift2 = center_streamlines(moving)
    print(shift2)
    mat = compose_matrix(
        translate=np.array([0, 0, 0.0]), angles=np.deg2rad([40, 0, 0.0])
    )
    moved = transform_streamlines(moving_center, mat)

    srr = StreamlineLinearRegistration()
    srm = srr.optimize(static_center, moved)
    print(srm.fopt)
    print(srm.iterations)
    print(srm.funcs)
    moving_back = srm.transform(moved)
    print(srm.matrix)

    static_center = set_number_of_points(static_center, nb_points=100)
    moving_center = set_number_of_points(moving_back, nb_points=100)

    vol = np.zeros((100, 100, 100))
    spts = np.concatenate(static_center, axis=0)
    spts = np.round(spts).astype(int) + np.array([50, 50, 50])
    mpts = np.concatenate(moving_center, axis=0)
    mpts = np.round(mpts).astype(int) + np.array([50, 50, 50])

    for index in spts:
        i, j, k = index
        vol[i, j, k] = 1
    vol2 = np.zeros((100, 100, 100))
    for index in mpts:
        i, j, k = index
        vol2[i, j, k] = 1
    overlap = np.sum(np.logical_and(vol, vol2)) / float(np.sum(vol2))
    assert_equal(overlap * 100 > 40, True)


def test_stream_rigid():
    static = fornix_streamlines()[:20]
    moving = fornix_streamlines()[20:40]
    center_streamlines(static)
    mat = compose_matrix44([0, 0, 0, 0, 40, 0])
    moving = transform_streamlines(moving, mat)

    srr = StreamlineLinearRegistration()
    sr_params = srr.optimize(static, moving)
    moved = transform_streamlines(moving, sr_params.matrix)

    srr = StreamlineLinearRegistration(verbose=True)
    srm = srr.optimize(static, moving)
    moved2 = transform_streamlines(moving, srm.matrix)
    moved3 = srm.transform(moving)
    assert_array_almost_equal(moved[0], moved2[0], decimal=3)
    assert_array_almost_equal(moved2[0], moved3[0], decimal=3)


def test_min_vs_min_fast_precision():
    static = fornix_streamlines()[:20]
    moving = fornix_streamlines()[:20]
    static = [s.astype("f8") for s in static]
    moving = [m.astype("f8") for m in moving]

    bmd = BundleMinDistanceMatrixMetric()
    bmd.setup(static, moving)

    bmdf = BundleMinDistanceMetric()
    bmdf.setup(static, moving)

    x_test = [0.01, 0, 0, 0, 0, 0]
    print(bmd.distance(x_test))
    print(bmdf.distance(x_test))
    assert_equal(bmd.distance(x_test), bmdf.distance(x_test))


@set_random_number_generator()
def test_same_number_of_points(rng):
    A = [rng.random((10, 3)), rng.random((20, 3))]
    B = [rng.random((21, 3)), rng.random((30, 3))]
    C = [rng.random((10, 3)), rng.random((10, 3))]
    D = [rng.random((20, 3)), rng.random((20, 3))]

    slr = StreamlineLinearRegistration()
    assert_raises(ValueError, slr.optimize, A, B)
    assert_raises(ValueError, slr.optimize, C, D)
    assert_raises(ValueError, slr.optimize, C, B)


def test_efficient_bmd():
    a = np.array([[1, 1, 1], [2, 2, 2], [3, 3, 3]])
    streamlines = [a, a + 2, a + 4]
    points, offsets = unlist_streamlines(streamlines)
    points = points.astype(np.double)
    points2 = points.copy()

    D = np.zeros((len(offsets), len(offsets)), dtype="f8")
    _bundle_minimum_distance_matrix(
        points, points2, len(offsets), len(offsets), a.shape[0], D
    )
    assert_equal(np.sum(np.diag(D)), 0)

    points2 += 2
    _bundle_minimum_distance_matrix(
        points, points2, len(offsets), len(offsets), a.shape[0], D
    )
    streamlines2 = relist_streamlines(points2, offsets)
    D2 = distance_matrix_mdf(streamlines, streamlines2)
    assert_array_almost_equal(D, D2)

    cols = D2.shape[1]
    rows = D2.shape[0]
    dist = (
        0.25
        * (
            np.sum(np.min(D2, axis=0)) / float(cols)
            + np.sum(np.min(D2, axis=1)) / float(rows)
        )
        ** 2
    )
    dist2 = _bundle_minimum_distance(
        points, points2, len(offsets), len(offsets), a.shape[0]
    )
    assert_almost_equal(dist, dist2)


@set_random_number_generator()
def test_openmp_locks(rng):
    static = []
    moving = []
    pts = 20
    for _ in range(1000):
        s = rng.random((pts, 3))
        static.append(s)
        moving.append(s + 2)
    moving = moving[2:]

    points, offsets = unlist_streamlines(static)
    points2, offsets2 = unlist_streamlines(moving)

    D = np.zeros((len(offsets), len(offsets2)), dtype="f8")
    _bundle_minimum_distance_matrix(
        points, points2, len(offsets), len(offsets2), pts, D
    )
    dist1 = (
        0.25
        * (
            np.sum(np.min(D, axis=0)) / float(D.shape[1])
            + np.sum(np.min(D, axis=1)) / float(D.shape[0])
        )
        ** 2
    )
    dist2 = _bundle_minimum_distance(points, points2, len(offsets), len(offsets2), pts)
    assert_almost_equal(dist1, dist2, 6)


def test_from_to_rigid():
    t = np.array([10, 2, 3, 0.1, 20.0, 30.0])
    mat = compose_matrix44(t)
    vec = decompose_matrix44(mat, size=6)
    assert_array_almost_equal(t, vec)

    t = np.array([0, 0, 0, 180, 0.0, 0.0])
    mat = np.eye(4)
    mat[0, 0] = -1
    vec = decompose_matrix44(mat, size=6)
    assert_array_almost_equal(-t, vec)


def test_matrix44():
    assert_raises(ValueError, compose_matrix44, np.ones(5))
    assert_raises(ValueError, compose_matrix44, np.ones(13))
    assert_raises(ValueError, compose_matrix44, np.ones(16))


def test_abstract_metric_class():
    class DummyStreamlineMetric(StreamlineDistanceMetric):
        def test(self):
            pass

    assert_raises(TypeError, DummyStreamlineMetric)


def test_evolution_of_previous_iterations():
    static = fornix_streamlines()[:20]
    moving = fornix_streamlines()[:20]
    moving = [m + np.array([10.0, 0.0, 0.0]) for m in moving]
    slr = StreamlineLinearRegistration(evolution=True)
    slm = slr.optimize(static, moving)
    assert_equal(len(slm.matrix_history), slm.iterations)


def test_similarity_real_bundles():
    bundle_initial = fornix_streamlines()
    bundle_initial, shift = center_streamlines(bundle_initial)
    bundle = bundle_initial[:20]
    xgold = [0, 0, 10, 0, 0, 0, 1.5]
    mat = compose_matrix44(xgold)
    bundle2 = transform_streamlines(bundle_initial[:20], mat)

    metric = BundleMinDistanceMatrixMetric()
    x0 = np.array([0, 0, 0, 0, 0, 0, 1], "f8")
    slr = StreamlineLinearRegistration(
        metric=metric, x0=x0, method="Powell", bounds=None, verbose=False
    )
    slm = slr.optimize(bundle, bundle2)
    new_bundle2 = slm.transform(bundle2)
    evaluate_convergence(bundle, new_bundle2)


def test_affine_real_bundles():
    bundle_initial = fornix_streamlines()
    bundle_initial, shift = center_streamlines(bundle_initial)
    bundle = bundle_initial[:20]
    xgold = [0, 4, 2, 0, 10, 10, 1.2, 1.1, 1.0, 0.0, 0.2, 0.0]
    mat = compose_matrix44(xgold)
    bundle2 = transform_streamlines(bundle_initial[:20], mat)

    x0 = np.array([0, 0, 0, 0, 0, 0, 1.0, 1.0, 1.0, 0, 0, 0])
    x = 25
    bounds = [
        (-x, x),
        (-x, x),
        (-x, x),
        (-x, x),
        (-x, x),
        (-x, x),
        (0.1, 1.5),
        (0.1, 1.5),
        (0.1, 1.5),
        (-1, 1),
        (-1, 1),
        (-1, 1),
    ]
    options = {"maxcor": 10, "ftol": 1e-7, "gtol": 1e-5, "eps": 1e-8}

    metric = BundleMinDistanceMatrixMetric()
    slr = StreamlineLinearRegistration(
        metric=metric,
        x0=x0,
        method="L-BFGS-B",
        bounds=bounds,
        verbose=True,
        options=options,
    )
    slm = slr.optimize(bundle, bundle2)
    new_bundle2 = slm.transform(bundle2)

    slr2 = StreamlineLinearRegistration(
        metric=metric, x0=x0, method="Powell", bounds=None, verbose=True, options=None
    )
    slm2 = slr2.optimize(bundle, new_bundle2)
    new_bundle2 = slm2.transform(new_bundle2)
    evaluate_convergence(bundle, new_bundle2)


def test_vectorize_streamlines():
    cingulum_bundles = two_cingulum_bundles()
    cb_subj1 = cingulum_bundles[0]
    cb_subj1 = set_number_of_points(cb_subj1, nb_points=10)
    cb_subj1_pts_no = np.array([s.shape[0] for s in cb_subj1])
    assert_equal(np.all(cb_subj1_pts_no == 10), True)


@set_random_number_generator()
def test_x0_input(rng):
    for x0 in [6, 7, 12, "Rigid", "rigid", "similarity", "Affine"]:
        StreamlineLinearRegistration(x0=x0)

    for x0 in [rng.random(6), rng.random(7), rng.random(12)]:
        StreamlineLinearRegistration(x0=x0)

    for x0 in [8, 20, "Whatever", rng.random(20), rng.random((20, 3))]:
        assert_raises(ValueError, StreamlineLinearRegistration, x0=x0)

    x0 = rng.random((4, 3))
    assert_raises(ValueError, StreamlineLinearRegistration, x0=x0)

    x0_6 = np.zeros(6)
    x0_7 = np.array([0, 0, 0, 0, 0, 0, 1.0])
    x0_12 = np.array([0, 0, 0, 0, 0, 0, 1.0, 1.0, 1.0, 0, 0, 0])

    x0_s = [x0_6, x0_7, x0_12, x0_6, x0_7, x0_12]

    for i, x0 in enumerate([6, 7, 12, "Rigid", "similarity", "Affine"]):
        slr = StreamlineLinearRegistration(x0=x0)
        assert_equal(slr.x0, x0_s[i])


@set_random_number_generator()
def test_compose_decompose_matrix44(rng):
    for _ in range(20):
        x0 = rng.random(12)
        mat = compose_matrix44(x0[:6])
        assert_array_almost_equal(x0[:6], decompose_matrix44(mat, size=6))
        mat = compose_matrix44(x0[:7])
        assert_array_almost_equal(x0[:7], decompose_matrix44(mat, size=7))
        mat = compose_matrix44(x0[:12])
        assert_array_almost_equal(x0[:12], decompose_matrix44(mat, size=12))
        assert_raises(ValueError, decompose_matrix44, mat, size=20)


def test_cascade_of_optimizations_and_threading():
    cingulum_bundles = two_cingulum_bundles()
    cb1 = cingulum_bundles[0]
    cb1 = set_number_of_points(cb1, nb_points=20)

    test_x0 = np.array([10, 4, 3, 0, 20, 10, 1.5, 1.5, 1.5, 0.0, 0.2, 0])
    cb2 = transform_streamlines(cingulum_bundles[0], compose_matrix44(test_x0))
    cb2 = set_number_of_points(cb2, nb_points=20)

    print("first rigid")
    slr = StreamlineLinearRegistration(x0=6, num_threads=1)
    slm = slr.optimize(cb1, cb2)

    print("then similarity")
    slr2 = StreamlineLinearRegistration(x0=7, num_threads=2)
    slm2 = slr2.optimize(cb1, cb2, mat=slm.matrix)

    print("then affine")
    slr3 = StreamlineLinearRegistration(
        x0=12, options={"maxiter": 50}, num_threads=None
    )
    slm3 = slr3.optimize(cb1, cb2, mat=slm2.matrix)

    assert_(slm2.fopt < slm.fopt)
    assert_(slm3.fopt < slm2.fopt)


@set_random_number_generator()
def test_wrong_num_threads(rng):
    A = [rng.random((10, 3)), rng.random((10, 3))]
    B = [rng.random((10, 3)), rng.random((10, 3))]

    slr = StreamlineLinearRegistration(num_threads=0)
    assert_raises(ValueError, slr.optimize, A, B)


def test_get_unique_pairs():
    # Regular case
    pairs, exclude = get_unique_pairs(6)
    assert_equal(len(np.unique(pairs)), 6)
    assert_equal(exclude, None)

    # Odd case
    pairs, exclude = get_unique_pairs(5)
    assert_equal(len(np.unique(pairs)), 4)
    assert_equal(isinstance(exclude, (int, np.int64, np.int32)), True)

    # Iterative case
    new_pairs, new_exclude = get_unique_pairs(5, pairs=pairs)
    assert_equal(len(np.unique(pairs)), 4)
    assert_equal(exclude != new_exclude, True)

    # Check errors
    assert_raises(TypeError, get_unique_pairs, 2.7)
    assert_raises(ValueError, get_unique_pairs, 1)


def test_groupwise_slr():
    bundles = read_five_af_bundles()

    # Test regular use case with convergence
    new_bundles, T, d = groupwise_slr(bundles, verbose=True)
    assert_equal(len(new_bundles), len(bundles))
    assert_equal(type(new_bundles), list)
    assert_equal(len(T), len(bundles))
    assert_equal(type(T), list)

    # Test regular use case without convergence (few iterations)
    new_bundles, T, d = groupwise_slr(bundles, max_iter=3, tol=-10, verbose=True)
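
# Usage sketch (illustrative, not part of the test module): the tests above
# drive StreamlineLinearRegistration through rigid, similarity and affine
# stages. Assuming two bundles `static` and `moving` already resampled to the
# same number of points per streamline, a minimal cascade looks like:
#
#     from dipy.align.streamlinear import StreamlineLinearRegistration
#     from dipy.tracking.streamline import transform_streamlines
#
#     slr = StreamlineLinearRegistration(x0="rigid")
#     slm = slr.optimize(static, moving)
#     slr2 = StreamlineLinearRegistration(x0="Affine")
#     slm2 = slr2.optimize(static, moving, mat=slm.matrix)
#     moved = transform_streamlines(moving, slm2.matrix)
#
# Warm-starting each stage with the previous stage's 4x4 matrix via the `mat`
# keyword mirrors what test_cascade_of_optimizations_and_threading verifies.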
dipy-1.11.0/dipy/align/tests/test_streamwarp.py

from numpy.testing import assert_equal
import pytest

from dipy.align.streamwarp import (
    bundlewarp,
    bundlewarp_shape_analysis,
    bundlewarp_vector_filed,
)
from dipy.data import two_cingulum_bundles
from dipy.tracking.streamline import Streamlines, set_number_of_points
from dipy.utils.optpkg import optional_package

pd, have_pd, _ = optional_package("pandas")

needs_pandas = pytest.mark.skipif(not have_pd, reason="Requires pandas")


@needs_pandas
def test_bundlewarp():
    cingulum_bundles = two_cingulum_bundles()
    cb1 = Streamlines(cingulum_bundles[0])
    cb1 = set_number_of_points(cb1, nb_points=20)
    cb2 = Streamlines(cingulum_bundles[1])
    cb2 = set_number_of_points(cb2, nb_points=20)

    deformed_bundle, affine_bundle, dists, mp, warp = bundlewarp(cb1, cb2)

    assert_equal(len(affine_bundle), len(cb2))
    assert_equal(len(deformed_bundle), len(cb2))
    assert_equal(len(deformed_bundle), len(affine_bundle))
    assert_equal(dists.shape, (len(cb2), len(cb1)))
    assert_equal(len(cb2), len(mp))
    assert_equal(len(cb2), len(warp))


@needs_pandas
def test_bundlewarp_vector_filed():
    cingulum_bundles = two_cingulum_bundles()
    cb1 = Streamlines(cingulum_bundles[0])
    cb1 = set_number_of_points(cb1, nb_points=20)
    cb2 = Streamlines(cingulum_bundles[1])
    cb2 = set_number_of_points(cb2, nb_points=20)

    deformed_bundle, affine_bundle, dists, mp, warp = bundlewarp(cb1, cb2)

    offsets, directions, colors = bundlewarp_vector_filed(
        affine_bundle, deformed_bundle
    )

    assert_equal(len(offsets), len(cb2.get_data()))
    assert_equal(len(directions), len(cb2.get_data()))
    assert_equal(len(colors), len(cb2.get_data()))

    assert_equal(len(offsets), len(deformed_bundle.get_data()))
    assert_equal(len(directions), len(deformed_bundle.get_data()))
    assert_equal(len(colors), len(deformed_bundle.get_data()))


@needs_pandas
def test_bundle_shape_profile():
    cingulum_bundles = two_cingulum_bundles()
    cb1 = Streamlines(cingulum_bundles[0])
    cb1 = set_number_of_points(cb1, nb_points=20)
    cb2 = Streamlines(cingulum_bundles[1])
    cb2 = set_number_of_points(cb2, nb_points=20)

    deformed_bundle, affine_bundle, dists, mp, warp = bundlewarp(cb1, cb2)

    n = 10
    shape_profile, stdv = bundlewarp_shape_analysis(
        affine_bundle, deformed_bundle, no_disks=n
    )

    assert_equal(len(shape_profile), n)
    assert_equal(len(stdv), n)

    n = 100
    shape_profile, stdv = bundlewarp_shape_analysis(
        affine_bundle, deformed_bundle, no_disks=n
    )

    assert_equal(len(shape_profile), n)
    assert_equal(len(stdv), n)


@needs_pandas
def test_transformation_dimensions():
    cingulum_bundles = two_cingulum_bundles()
    n = 20
    cb1 = Streamlines(cingulum_bundles[0])
    cb1 = set_number_of_points(cb1, n)
    cb2 = Streamlines(cingulum_bundles[1])
    cb2 = set_number_of_points(cb2, n)

    deformed_bundle, affine_bundle, dists, mp, warp = bundlewarp(cb1, cb2)

    assert_equal(warp["gaussian_kernel"][0].shape, (n, n))
    assert_equal(warp["transforms"][0].shape, (n, 3))
    assert_equal(warp.columns.get_loc("gaussian_kernel"), 0)
    assert_equal(warp.columns.get_loc("transforms"), 1)

dipy-1.11.0/dipy/align/tests/test_sumsqdiff.py

import numpy as np
from numpy.testing import (
    assert_allclose,
    assert_almost_equal,
    assert_array_almost_equal,
    assert_equal,
)

from dipy.align import floating, sumsqdiff as ssd
from dipy.testing.decorators import set_random_number_generator


def iterate_residual_field_ssd_2d(
    delta_field, sigmasq_field, grad, target, lambda_param, dfield
):
    r"""
This implementation is for testing purposes only. The problem with Gauss-Seidel iterations is that it depends on the order in which we iterate over the variables, so it is necessary to replicate the implementation under test. """ nrows, ncols = delta_field.shape if target is None: b = np.zeros_like(grad) b[..., 0] = delta_field * grad[..., 0] b[..., 1] = delta_field * grad[..., 1] else: b = target y = np.zeros(2) for r in range(nrows): for c in range(ncols): sigmasq = sigmasq_field[r, c] if sigmasq_field is not None else 1 # This has to be done inside the nested loops because # some d[...] may have been previously modified nn = 0 y[:] = 0 for dRow, dCol in [(-1, 0), (0, 1), (1, 0), (0, -1)]: dr = r + dRow if dr < 0 or dr >= nrows: continue dc = c + dCol if dc < 0 or dc >= ncols: continue nn += 1 y += dfield[dr, dc] if np.isinf(sigmasq): dfield[r, c] = y / nn else: tau = sigmasq * lambda_param * nn A = np.outer(grad[r, c], grad[r, c]) + tau * np.eye(2) det = np.linalg.det(A) if det < 1e-9: nrm2 = np.sum(grad[r, c] ** 2) if nrm2 < 1e-9: dfield[r, c, :] = 0 else: dfield[r, c] = b[r, c] / nrm2 else: y = b[r, c] + sigmasq * lambda_param * y dfield[r, c] = np.linalg.solve(A, y) def iterate_residual_field_ssd_3d( delta_field, sigmasq_field, grad, target, lambda_param, dfield ): r""" This implementation is for testing purposes only. The problem with Gauss-Seidel iterations is that it depends on the order in which we iterate over the variables, so it is necessary to replicate the implementation under test. """ nslices, nrows, ncols = delta_field.shape if target is None: b = np.zeros_like(grad) for i in range(3): b[..., i] = delta_field * grad[..., i] else: b = target y = np.ndarray((3,)) for s in range(nslices): for r in range(nrows): for c in range(ncols): g = grad[s, r, c] # delta = delta_field[s, r, c] sigmasq = sigmasq_field[s, r, c] if sigmasq_field is not None else 1 nn = 0 y[:] = 0 for dSlice, dRow, dCol in [ (-1, 0, 0), (0, -1, 0), (0, 0, 1), (0, 1, 0), (0, 0, -1), (1, 0, 0), ]: ds = s + dSlice if ds < 0 or ds >= nslices: continue dr = r + dRow if dr < 0 or dr >= nrows: continue dc = c + dCol if dc < 0 or dc >= ncols: continue nn += 1 y += dfield[ds, dr, dc] if np.isinf(sigmasq): dfield[s, r, c] = y / nn elif sigmasq < 1e-9: nrm2 = np.sum(g**2) if nrm2 < 1e-9: dfield[s, r, c, :] = 0 else: dfield[s, r, c, :] = b[s, r, c] / nrm2 else: tau = sigmasq * lambda_param * nn y = b[s, r, c] + sigmasq * lambda_param * y G = np.outer(g, g) + tau * np.eye(3) try: dfield[s, r, c] = np.linalg.solve(G, y) except np.linalg.linalg.LinAlgError: nrm2 = np.sum(g**2) if nrm2 < 1e-9: dfield[s, r, c, :] = 0 else: dfield[s, r, c] = b[s, r, c] / nrm2 @set_random_number_generator(5512751) def test_compute_residual_displacement_field_ssd_2d(rng): # Select arbitrary images' shape (same shape for both images) sh = (20, 10) # Select arbitrary centers c_f = np.asarray(sh) / 2 c_g = c_f + 0.5 # Compute the identity vector field I(x) = x in R^2 x_0 = np.asarray(range(sh[0])) x_1 = np.asarray(range(sh[1])) X = np.ndarray(sh + (2,), dtype=np.float64) _O = np.ones(sh) X[..., 0] = x_0[:, None] * _O X[..., 1] = x_1[None, :] * _O # Compute the gradient fields of F and G grad_F = X - c_f grad_G = X - c_g Fnoise = rng.random(np.size(grad_F)).reshape(grad_F.shape) * grad_F.max() * 0.1 Fnoise = Fnoise.astype(floating) grad_F += Fnoise Gnoise = rng.random(np.size(grad_G)).reshape(grad_G.shape) * grad_G.max() * 0.1 Gnoise = Gnoise.astype(floating) grad_G += Gnoise # The squared norm of grad_G sq_norm_grad_G = np.sum(grad_G**2, -1) # 
Compute F and G F = 0.5 * np.sum(grad_F**2, -1) G = 0.5 * sq_norm_grad_G Fnoise = rng.random(np.size(F)).reshape(F.shape) * F.max() * 0.1 Fnoise = Fnoise.astype(floating) F += Fnoise Gnoise = rng.random(np.size(G)).reshape(G.shape) * G.max() * 0.1 Gnoise = Gnoise.astype(floating) G += Gnoise delta_field = np.array(F - G, dtype=floating) sigma_field = rng.standard_normal(delta_field.size).reshape(delta_field.shape) sigma_field = sigma_field.astype(floating) # Select some pixels to force sigma_field = infinite inf_sigma = rng.integers(0, 2, sh[0] * sh[1]) inf_sigma = inf_sigma.reshape(sh) sigma_field[inf_sigma == 1] = np.inf # Select an initial displacement field d = rng.standard_normal(grad_G.size).reshape(grad_G.shape).astype(floating) lambda_param = 1.5 # Implementation under test iut = ssd.compute_residual_displacement_field_ssd_2d # In the first iteration we test the case target=None # In the second iteration, target is not None target = None rtol = 1e-9 atol = 1e-4 for _ in range(2): # Sum of differences with the neighbors s = np.zeros_like(d, dtype=np.float64) s[:, :-1] += d[:, :-1] - d[:, 1:] # right s[:, 1:] += d[:, 1:] - d[:, :-1] # left s[:-1, :] += d[:-1, :] - d[1:, :] # down s[1:, :] += d[1:, :] - d[:-1, :] # up s *= lambda_param # Dot product of displacement and gradient dp = d[..., 0] * grad_G[..., 0] + d[..., 1] * grad_G[..., 1] dp = dp.astype(np.float64) # Compute expected residual if target is None: expected = np.zeros_like(grad_G) expected[..., 0] = delta_field * grad_G[..., 0] expected[..., 1] = delta_field * grad_G[..., 1] else: expected = target.copy().astype(np.float64) # Expected residuals when sigma != infinity expected[inf_sigma == 0, 0] -= ( grad_G[inf_sigma == 0, 0] * dp[inf_sigma == 0] + sigma_field[inf_sigma == 0] * s[inf_sigma == 0, 0] ) expected[inf_sigma == 0, 1] -= ( grad_G[inf_sigma == 0, 1] * dp[inf_sigma == 0] + sigma_field[inf_sigma == 0] * s[inf_sigma == 0, 1] ) # Expected residuals when sigma == infinity expected[inf_sigma == 1] = -1.0 * s[inf_sigma == 1] # Test residual field computation starting with residual = None actual = iut( delta_field, sigma_field, grad_G.astype(floating), target, lambda_param, d, None, ) assert_allclose(actual, expected, rtol=rtol, atol=atol) # destroy previous result actual = np.ndarray(actual.shape, dtype=floating) # Test residual field computation starting with residual is not None iut( delta_field, sigma_field, grad_G.astype(floating), target, lambda_param, d, actual, ) assert_allclose(actual, expected, rtol=rtol, atol=atol) # Set target for next iteration target = actual # Test Gauss-Seidel step with residual=None and residual=target for residual in [None, target]: expected = d.copy() iterate_residual_field_ssd_2d( delta_field, sigma_field, grad_G.astype(floating), residual, lambda_param, expected, ) actual = d.copy() ssd.iterate_residual_displacement_field_ssd_2d( delta_field, sigma_field, grad_G.astype(floating), residual, lambda_param, actual, ) assert_allclose(actual, expected, rtol=rtol, atol=atol) @set_random_number_generator(5512751) def test_compute_residual_displacement_field_ssd_3d(rng): # Select arbitrary images' shape (same shape for both images) sh = (20, 15, 10) # Select arbitrary centers c_f = np.asarray(sh) / 2 c_g = c_f + 0.5 # Compute the identity vector field I(x) = x in R^2 x_0 = np.asarray(range(sh[0])) x_1 = np.asarray(range(sh[1])) x_2 = np.asarray(range(sh[2])) X = np.ndarray(sh + (3,), dtype=np.float64) _O = np.ones(sh) X[..., 0] = x_0[:, None, None] * _O X[..., 1] = x_1[None, :, None] * _O 
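    # The grid-coordinate assignments around this point build the identity
    # map I(x) = x over `sh`. An equivalent one-liner sketch (illustrative,
    # plain NumPy, not part of the test):
    #
    #     X = np.moveaxis(np.indices(sh).astype(np.float64), 0, -1)
    #
    # which yields the same array of shape sh + (3,) holding each voxel's
    # own coordinates.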
X[..., 2] = x_2[None, None, :] * _O # Compute the gradient fields of F and G grad_F = X - c_f grad_G = X - c_g Fnoise = rng.random(np.size(grad_F)).reshape(grad_F.shape) * grad_F.max() * 0.1 Fnoise = Fnoise.astype(floating) grad_F += Fnoise Gnoise = rng.random(np.size(grad_G)).reshape(grad_G.shape) * grad_G.max() * 0.1 Gnoise = Gnoise.astype(floating) grad_G += Gnoise # The squared norm of grad_G sq_norm_grad_G = np.sum(grad_G**2, -1) # Compute F and G F = 0.5 * np.sum(grad_F**2, -1) G = 0.5 * sq_norm_grad_G Fnoise = rng.random(np.size(F)).reshape(F.shape) * F.max() * 0.1 Fnoise = Fnoise.astype(floating) F += Fnoise Gnoise = rng.random(np.size(G)).reshape(G.shape) * G.max() * 0.1 Gnoise = Gnoise.astype(floating) G += Gnoise delta_field = np.array(F - G, dtype=floating) sigma_field = rng.random(delta_field.size).reshape(delta_field.shape) sigma_field = sigma_field.astype(floating) # Select some pixels to force sigma_field = infinite inf_sigma = rng.integers(0, 2, sh[0] * sh[1] * sh[2]) inf_sigma = inf_sigma.reshape(sh) sigma_field[inf_sigma == 1] = np.inf # Select an initial displacement field d = rng.random(grad_G.size).reshape(grad_G.shape).astype(floating) lambda_param = 1.5 # Implementation under test iut = ssd.compute_residual_displacement_field_ssd_3d # In the first iteration we test the case target=None # In the second iteration, target is not None target = None rtol = 1e-9 atol = 1e-4 for _ in range(2): # Sum of differences with the neighbors s = np.zeros_like(d, dtype=np.float64) s[:, :, :-1] += d[:, :, :-1] - d[:, :, 1:] # right s[:, :, 1:] += d[:, :, 1:] - d[:, :, :-1] # left s[:, :-1, :] += d[:, :-1, :] - d[:, 1:, :] # down s[:, 1:, :] += d[:, 1:, :] - d[:, :-1, :] # up s[:-1, :, :] += d[:-1, :, :] - d[1:, :, :] # below s[1:, :, :] += d[1:, :, :] - d[:-1, :, :] # above s *= lambda_param # Dot product of displacement and gradient dp = ( d[..., 0] * grad_G[..., 0] + d[..., 1] * grad_G[..., 1] + d[..., 2] * grad_G[..., 2] ) # Compute expected residual if target is None: expected = np.zeros_like(grad_G) for i in range(3): expected[..., i] = delta_field * grad_G[..., i] else: expected = target.copy().astype(np.float64) # Expected residuals when sigma != infinity for i in range(3): expected[inf_sigma == 0, i] -= ( grad_G[inf_sigma == 0, i] * dp[inf_sigma == 0] + sigma_field[inf_sigma == 0] * s[inf_sigma == 0, i] ) # Expected residuals when sigma == infinity expected[inf_sigma == 1] = -1.0 * s[inf_sigma == 1] # Test residual field computation starting with residual = None actual = iut( delta_field, sigma_field, grad_G.astype(floating), target, lambda_param, d, None, ) assert_allclose(actual, expected, rtol=rtol, atol=atol) # destroy previous result actual = np.ndarray(actual.shape, dtype=floating) # Test residual field computation starting with residual is not None iut( delta_field, sigma_field, grad_G.astype(floating), target, lambda_param, d, actual, ) assert_allclose(actual, expected, rtol=rtol, atol=atol) # Set target for next iteration target = actual # Test Gauss-Seidel step with residual=None and residual=target for residual in [None, target]: expected = d.copy() iterate_residual_field_ssd_3d( delta_field, sigma_field, grad_G.astype(floating), residual, lambda_param, expected, ) actual = d.copy() ssd.iterate_residual_displacement_field_ssd_3d( delta_field, sigma_field, grad_G.astype(floating), residual, lambda_param, actual, ) # the numpy linear solver may differ from our custom implementation # we need to increase the tolerance a bit assert_allclose(actual, expected, rtol=rtol, 
atol=atol * 10) def test_solve_2d_symmetric_positive_definite(): # Select some arbitrary right-hand sides bs = [ np.array([1.1, 2.2]), np.array([1e-2, 3e-3]), np.array([1e2, 1e3]), np.array([1e-5, 1e5]), ] # Select arbitrary symmetric positive-definite matrices As = [] # Identity identity = np.array([1.0, 0.0, 1.0]) As.append(identity) # Small determinant small_det = np.array([1e-3, 1e-4, 1e-3]) As.append(small_det) # Large determinant large_det = np.array([1e6, 1e4, 1e6]) As.append(large_det) for A in As: AA = np.array([[A[0], A[1]], [A[1], A[2]]]) det = np.linalg.det(AA) for b in bs: expected = np.linalg.solve(AA, b) actual = ssd.solve_2d_symmetric_positive_definite(A, b, det) assert_allclose(expected, actual, rtol=1e-9, atol=1e-9) def test_solve_3d_symmetric_positive_definite(): # Select some arbitrary right-hand sides bs = [ np.array([1.1, 2.2, 3.3]), np.array([1e-2, 3e-3, 2e-2]), np.array([1e2, 1e3, 5e-2]), np.array([1e-5, 1e5, 1.0]), ] # Select arbitrary taus taus = [0.0, 1.0, 1e-4, 1e5] # Select arbitrary matrices gs = [] # diagonal diag = np.array([0.0, 0.0, 0.0]) gs.append(diag) # canonical basis gs.append(np.array([1.0, 0.0, 0.0])) gs.append(np.array([0.0, 1.0, 0.0])) gs.append(np.array([0.0, 0.0, 1.0])) # other gs.append(np.array([1.0, 0.5, 0.0])) gs.append(np.array([0.0, 0.2, 0.1])) gs.append(np.array([0.3, 0.0, 0.9])) for g in gs: A = g[:, None] * g[None, :] for tau in taus: AA = A + tau * np.eye(3) for b in bs: actual, is_singular = ssd.solve_3d_symmetric_positive_definite( g, b, tau ) if tau == 0.0: assert_equal(is_singular, 1) else: expected = np.linalg.solve(AA, b) assert_allclose(expected, actual, rtol=1e-9, atol=1e-9) def test_compute_energy_ssd_2d(): sh = (32, 32) # Select arbitrary centers c_f = np.asarray(sh) / 2 c_g = c_f + 0.5 # Compute the identity vector field I(x) = x in R^2 x_0 = np.asarray(range(sh[0])) x_1 = np.asarray(range(sh[1])) X = np.ndarray(sh + (2,), dtype=np.float64) _O = np.ones(sh) X[..., 0] = x_0[:, None] * _O X[..., 1] = x_1[None, :] * _O # Compute the gradient fields of F and G grad_F = X - c_f grad_G = X - c_g # Compute F and G F = 0.5 * np.sum(grad_F**2, -1) G = 0.5 * np.sum(grad_G**2, -1) # Note: this should include the energy corresponding to the # regularization term, but it is discarded in ANTS (they just # consider the data term, which is not the objective function # being optimized). This test case should be updated after # further investigation expected = ((F - G) ** 2).sum() actual = ssd.compute_energy_ssd_2d(np.array(F - G, dtype=floating)) assert_almost_equal(expected, actual) def test_compute_energy_ssd_3d(): sh = (32, 32, 32) # Select arbitrary centers c_f = np.asarray(sh) / 2 c_g = c_f + 0.5 # Compute the identity vector field I(x) = x in R^2 x_0 = np.asarray(range(sh[0])) x_1 = np.asarray(range(sh[1])) x_2 = np.asarray(range(sh[2])) X = np.ndarray(sh + (3,), dtype=np.float64) _O = np.ones(sh) X[..., 0] = x_0[:, None, None] * _O X[..., 1] = x_1[None, :, None] * _O X[..., 2] = x_2[None, None, :] * _O # Compute the gradient fields of F and G grad_F = X - c_f grad_G = X - c_g # Compute F and G F = 0.5 * np.sum(grad_F**2, -1) G = 0.5 * np.sum(grad_G**2, -1) # Note: this should include the energy corresponding to the # regularization term, but it is discarded in ANTS (they just # consider the data term, which is not the objective function # being optimized). 
This test case should be updated after # further investigating expected = ((F - G) ** 2).sum() actual = ssd.compute_energy_ssd_3d(np.array(F - G, dtype=floating)) assert_almost_equal(expected, actual) @set_random_number_generator(1137271) def test_compute_ssd_demons_step_2d(rng): r""" Compares the output of the demons step in 2d against an analytical step. The fixed image is given by $F(x) = \frac{1}{2}||x - c_f||^2$, the moving image is given by $G(x) = \frac{1}{2}||x - c_g||^2$, $x, c_f, c_g \in R^{2}$ References ---------- [Vercauteren09] Vercauteren, T., Pennec, X., Perchant, A., & Ayache, N. (2009). Diffeomorphic demons: efficient non-parametric image registration. NeuroImage, 45(1 Suppl), S61-72. doi:10.1016/j.neuroimage.2008.10.040 """ # Select arbitrary images' shape (same shape for both images) sh = (20, 10) # Select arbitrary centers c_f = np.asarray(sh) / 2 c_g = c_f + 0.5 # Compute the identity vector field I(x) = x in R^2 x_0 = np.asarray(range(sh[0])) x_1 = np.asarray(range(sh[1])) X = np.ndarray(sh + (2,), dtype=np.float64) _O = np.ones(sh) X[..., 0] = x_0[:, None] * _O X[..., 1] = x_1[None, :] * _O # Compute the gradient fields of F and G grad_F = X - c_f grad_G = X - c_g Fnoise = rng.random(np.size(grad_F)).reshape(grad_F.shape) * grad_F.max() * 0.1 Fnoise = Fnoise.astype(floating) grad_F += Fnoise Gnoise = rng.random(np.size(grad_G)).reshape(grad_G.shape) * grad_G.max() * 0.1 Gnoise = Gnoise.astype(floating) grad_G += Gnoise # The squared norm of grad_G to be used later sq_norm_grad_G = np.sum(grad_G**2, -1) # Compute F and G F = 0.5 * np.sum(grad_F**2, -1) G = 0.5 * sq_norm_grad_G Fnoise = rng.random(np.size(F)).reshape(F.shape) * F.max() * 0.1 Fnoise = Fnoise.astype(floating) F += Fnoise Gnoise = rng.random(np.size(G)).reshape(G.shape) * G.max() * 0.1 Gnoise = Gnoise.astype(floating) G += Gnoise delta_field = np.array(G - F, dtype=floating) # Select some pixels to force gradient = 0 and F=G random_labels = rng.integers(0, 2, sh[0] * sh[1]) random_labels = random_labels.reshape(sh) F[random_labels == 0] = G[random_labels == 0] delta_field[random_labels == 0] = 0 grad_G[random_labels == 0, ...] = 0 sq_norm_grad_G[random_labels == 0, ...] = 0 # Set arbitrary values for $\sigma_i$ (eq. 4 in [Vercauteren09]) # The original Demons algorithm used simply |F(x) - G(x)| as an # estimator, so let's use it as well sigma_i_sq = (F - G) ** 2 # Now select arbitrary parameters for $\sigma_x$ (eq 4 in [Vercauteren09]) for sigma_x_sq in [0.01, 1.5, 4.2]: # Directly compute the demons step according to eq. 4 in # [Vercauteren09] num = (sigma_x_sq * (F - G))[random_labels == 1] den = (sigma_x_sq * sq_norm_grad_G + sigma_i_sq)[random_labels == 1] # This is $J^{P}$ in eq. 4 [Vercauteren09] expected = -1 * np.array(grad_G) expected[random_labels == 1, 0] *= num / den expected[random_labels == 1, 1] *= num / den expected[random_labels == 0, ...] = 0 # Now compute it using the implementation under test actual = np.empty_like(expected, dtype=floating) ssd.compute_ssd_demons_step_2d( delta_field, np.array(grad_G, dtype=floating), sigma_x_sq, actual ) assert_array_almost_equal(actual, expected) @set_random_number_generator(1137271) def test_compute_ssd_demons_step_3d(rng): r""" Compares the output of the demons step in 3d against an analytical step. The fixed image is given by $F(x) = \frac{1}{2}||x - c_f||^2$, the moving image is given by $G(x) = \frac{1}{2}||x - c_g||^2$, $x, c_f, c_g \in R^{3}$ References ---------- [Vercauteren09] Vercauteren, T., Pennec, X., Perchant, A., & Ayache, N. (2009). 
Diffeomorphic demons: efficient non-parametric image registration. NeuroImage, 45(1 Suppl), S61-72. doi:10.1016/j.neuroimage.2008.10.040 """ # Select arbitrary images' shape (same shape for both images) sh = (20, 15, 10) # Select arbitrary centers c_f = np.asarray(sh) / 2 c_g = c_f + 0.5 # Compute the identity vector field I(x) = x in R^2 x_0 = np.asarray(range(sh[0])) x_1 = np.asarray(range(sh[1])) x_2 = np.asarray(range(sh[2])) X = np.ndarray(sh + (3,), dtype=np.float64) _O = np.ones(sh) X[..., 0] = x_0[:, None, None] * _O X[..., 1] = x_1[None, :, None] * _O X[..., 2] = x_2[None, None, :] * _O # Compute the gradient fields of F and G grad_F = X - c_f grad_G = X - c_g Fnoise = rng.random(np.size(grad_F)).reshape(grad_F.shape) * grad_F.max() * 0.1 Fnoise = Fnoise.astype(floating) grad_F += Fnoise Gnoise = rng.random(np.size(grad_G)).reshape(grad_G.shape) * grad_G.max() * 0.1 Gnoise = Gnoise.astype(floating) grad_G += Gnoise # The squared norm of grad_G to be used later sq_norm_grad_G = np.sum(grad_G**2, -1) # Compute F and G F = 0.5 * np.sum(grad_F**2, -1) G = 0.5 * sq_norm_grad_G Fnoise = rng.random(np.size(F)).reshape(F.shape) * F.max() * 0.1 Fnoise = Fnoise.astype(floating) F += Fnoise Gnoise = rng.random(np.size(G)).reshape(G.shape) * G.max() * 0.1 Gnoise = Gnoise.astype(floating) G += Gnoise delta_field = np.array(G - F, dtype=floating) # Select some pixels to force gradient = 0 and F=G random_labels = rng.integers(0, 2, sh[0] * sh[1] * sh[2]) random_labels = random_labels.reshape(sh) F[random_labels == 0] = G[random_labels == 0] delta_field[random_labels == 0] = 0 grad_G[random_labels == 0, ...] = 0 sq_norm_grad_G[random_labels == 0, ...] = 0 # Set arbitrary values for $\sigma_i$ (eq. 4 in [Vercauteren09]) # The original Demons algorithm used simply |F(x) - G(x)| as an # estimator, so let's use it as well sigma_i_sq = (F - G) ** 2 # Now select arbitrary parameters for $\sigma_x$ (eq 4 in [Vercauteren09]) for sigma_x_sq in [0.01, 1.5, 4.2]: # Directly compute the demons step according to eq. 4 in # [Vercauteren09] num = (sigma_x_sq * (F - G))[random_labels == 1] den = (sigma_x_sq * sq_norm_grad_G + sigma_i_sq)[random_labels == 1] # This is $J^{P}$ in eq. 4 [Vercauteren09] expected = -1 * np.array(grad_G) expected[random_labels == 1, 0] *= num / den expected[random_labels == 1, 1] *= num / den expected[random_labels == 1, 2] *= num / den expected[random_labels == 0, ...] 
= 0 # Now compute it using the implementation under test actual = np.empty_like(expected, dtype=floating) ssd.compute_ssd_demons_step_3d( delta_field, np.array(grad_G, dtype=floating), sigma_x_sq, actual ) assert_array_almost_equal(actual, expected) dipy-1.11.0/dipy/align/tests/test_transforms.py000066400000000000000000000230001476546756600215740ustar00rootroot00000000000000import numpy as np from numpy.testing import ( assert_array_almost_equal, assert_array_equal, assert_equal, assert_raises, ) from dipy.align.transforms import Transform, regtransforms from dipy.testing.decorators import set_random_number_generator def test_number_of_parameters(): expected_params = { ("TRANSLATION", 2): 2, ("TRANSLATION", 3): 3, ("ROTATION", 2): 1, ("ROTATION", 3): 3, ("RIGID", 2): 3, ("RIGID", 3): 6, ("SCALING", 2): 1, ("SCALING", 3): 1, ("AFFINE", 2): 6, ("AFFINE", 3): 12, ("RIGIDSCALING", 2): 5, ("RIGIDSCALING", 3): 9, ("RIGIDISOSCALING", 2): 4, ("RIGIDISOSCALING", 3): 7, } for ttype, transform in regtransforms.items(): assert_equal(transform.get_number_of_parameters(), expected_params[ttype]) @set_random_number_generator() def test_param_to_matrix_2d(rng): # Test translation matrix 2D transform = regtransforms[("TRANSLATION", 2)] dx, dy = rng.uniform(size=(2,)) theta = np.array([dx, dy]) expected = np.array([[1, 0, dx], [0, 1, dy], [0, 0, 1]]) actual = transform.param_to_matrix(theta) assert_array_equal(actual, expected) # Test rotation matrix 2D transform = regtransforms[("ROTATION", 2)] angle = rng.uniform() theta = np.array([angle]) ct = np.cos(angle) st = np.sin(angle) expected = np.array([[ct, -st, 0], [st, ct, 0], [0, 0, 1]]) actual = transform.param_to_matrix(theta) assert_array_almost_equal(actual, expected) # Test rigid matrix 2D transform = regtransforms[("RIGID", 2)] angle, dx, dy = rng.uniform(size=(3,)) theta = np.array([angle, dx, dy]) ct = np.cos(angle) st = np.sin(angle) expected = np.array([[ct, -st, dx], [st, ct, dy], [0, 0, 1]]) actual = transform.param_to_matrix(theta) assert_array_almost_equal(actual, expected) # Test scaling matrix 2D transform = regtransforms[("SCALING", 2)] factor = rng.uniform() theta = np.array([factor]) expected = np.array([[factor, 0, 0], [0, factor, 0], [0, 0, 1]]) actual = transform.param_to_matrix(theta) assert_array_almost_equal(actual, expected) # Test rigid isoscaling matrix 2D transform = regtransforms[("RIGIDISOSCALING", 2)] angle, dx, dy, factor = rng.uniform(size=(4,)) theta = np.array([angle, dx, dy, factor]) ct = np.cos(angle) st = np.sin(angle) expected = np.array( [[ct * factor, -st * factor, dx], [st * factor, ct * factor, dy], [0, 0, 1]] ) actual = transform.param_to_matrix(theta) assert_array_almost_equal(actual, expected) # Test rigid scaling matrix 2D transform = regtransforms[("RIGIDSCALING", 2)] angle, dx, dy, sx, sy = rng.uniform(size=(5,)) theta = np.array([angle, dx, dy, sx, sy]) ct = np.cos(angle) st = np.sin(angle) expected = np.array([[ct * sx, -st * sx, dx], [st * sy, ct * sy, dy], [0, 0, 1]]) actual = transform.param_to_matrix(theta) assert_array_almost_equal(actual, expected) # Test affine 2D transform = regtransforms[("AFFINE", 2)] theta = rng.uniform(size=(6,)) expected = np.eye(3) expected[0, :] = theta[:3] expected[1, :] = theta[3:6] actual = transform.param_to_matrix(theta) assert_array_almost_equal(actual, expected) # Verify that ValueError is raised if incorrect number of parameters for transform in regtransforms.values(): n = transform.get_number_of_parameters() # Set incorrect number of parameters theta = np.zeros(n + 
1, dtype=np.float64) assert_raises(ValueError, transform.param_to_matrix, theta) @set_random_number_generator() def test_param_to_matrix_3d(rng): # Test translation matrix 3D transform = regtransforms[("TRANSLATION", 3)] dx, dy, dz = rng.uniform(size=(3,)) theta = np.array([dx, dy, dz]) expected = np.array([[1, 0, 0, dx], [0, 1, 0, dy], [0, 0, 1, dz], [0, 0, 0, 1]]) actual = transform.param_to_matrix(theta) assert_array_equal(actual, expected) # Create helpful rotation matrix function def get_rotation_matrix(alpha, beta, gamma): ca = np.cos(alpha) sa = np.sin(alpha) cb = np.cos(beta) sb = np.sin(beta) cc = np.cos(gamma) sc = np.sin(gamma) X = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]]) Y = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]]) Z = np.array([[cc, -sc, 0], [sc, cc, 0], [0, 0, 1]]) return Z.dot(X.dot(Y)) # Apply in order: Y, X, Z (Y goes to the right) # Test rotation matrix 3D transform = regtransforms[("ROTATION", 3)] theta = rng.uniform(size=(3,)) R = get_rotation_matrix(theta[0], theta[1], theta[2]) expected = np.eye(4) expected[:3, :3] = R[:3, :3] actual = transform.param_to_matrix(theta) assert_array_almost_equal(actual, expected) # Test rigid matrix 3D transform = regtransforms[("RIGID", 3)] theta = rng.uniform(size=(6,)) R = get_rotation_matrix(theta[0], theta[1], theta[2]) expected = np.eye(4) expected[:3, :3] = R[:3, :3] expected[:3, 3] = theta[3:6] actual = transform.param_to_matrix(theta) assert_array_almost_equal(actual, expected) # Test rigid isoscaling matrix 3D transform = regtransforms[("RIGIDISOSCALING", 3)] theta = rng.uniform(size=(7,)) R = get_rotation_matrix(theta[0], theta[1], theta[2]) expected = np.eye(4) expected[:3, :3] = R[:3, :3] * theta[6] expected[:3, 3] = theta[3:6] actual = transform.param_to_matrix(theta) assert_array_almost_equal(actual, expected) # Test rigid scaling matrix 3D transform = regtransforms[("RIGIDSCALING", 3)] theta = rng.uniform(size=(9,)) R = get_rotation_matrix(theta[0], theta[1], theta[2]) expected = np.eye(4) R[0, :3] *= theta[6] R[1, :3] *= theta[7] R[2, :3] *= theta[8] expected[:3, :3] = R[:3, :3] expected[:3, 3] = theta[3:6] actual = transform.param_to_matrix(theta) assert_array_almost_equal(actual, expected) # Test scaling matrix 3D transform = regtransforms[("SCALING", 3)] factor = rng.uniform() theta = np.array([factor]) expected = np.array( [[factor, 0, 0, 0], [0, factor, 0, 0], [0, 0, factor, 0], [0, 0, 0, 1]] ) actual = transform.param_to_matrix(theta) assert_array_almost_equal(actual, expected) # Test affine 3D transform = regtransforms[("AFFINE", 3)] theta = rng.uniform(size=(12,)) expected = np.eye(4) expected[0, :] = theta[:4] expected[1, :] = theta[4:8] expected[2, :] = theta[8:12] actual = transform.param_to_matrix(theta) assert_array_almost_equal(actual, expected) # Verify that ValueError is raised if incorrect number of parameters for transform in regtransforms.values(): n = transform.get_number_of_parameters() # Set incorrect number of parameters theta = np.zeros(n + 1, dtype=np.float64) assert_raises(ValueError, transform.param_to_matrix, theta) def test_identity_parameters(): for transform in regtransforms.values(): dim = transform.get_dim() theta = transform.get_identity_parameters() expected = np.eye(dim + 1) actual = transform.param_to_matrix(theta) assert_array_almost_equal(actual, expected) @set_random_number_generator() def test_jacobian_functions(rng): # Compare the analytical Jacobians with their numerical approximations h = 1e-8 nsamples = 50 for transform in regtransforms.values(): n = 
transform.get_number_of_parameters() dim = transform.get_dim() expected = np.empty((dim, n)) theta = rng.uniform(size=(n,)) T = transform.param_to_matrix(theta) for _ in range(nsamples): x = 255 * (rng.uniform(size=(dim,)) - 0.5) actual = transform.jacobian(theta, x) # Approximate with finite differences x_hom = np.ones(dim + 1) x_hom[:dim] = x[:] for i in range(n): dtheta = theta.copy() dtheta[i] += h dT = np.array(transform.param_to_matrix(dtheta)) g = (dT - T).dot(x_hom) / h expected[:, i] = g[:dim] assert_array_almost_equal(actual, expected, decimal=5) # Test ValueError is raised when theta parameter doesn't have the right # length for transform in regtransforms.values(): n = transform.get_number_of_parameters() # Wrong number of parameters theta = np.zeros(n + 1) x = np.zeros(dim) assert_raises(ValueError, transform.jacobian, theta, x) def test_invalid_transform(): # Note: users should not attempt to use the base class Transform: # they should get an instance of one of its derived classes from the # regtransforms dictionary (the base class is not contained there) # If for some reason the user instantiates it and attempts to use it, # however, it will raise exceptions when attempting to retrieve its # Jacobian, identity parameters or its matrix representation. It will # return -1 if queried about its dimension or number of parameters transform = Transform() theta = np.ndarray(3) x = np.ndarray(3) assert_raises(ValueError, transform.jacobian, theta, x) assert_raises(ValueError, transform.get_identity_parameters) assert_raises(ValueError, transform.param_to_matrix, theta) expected = -1 actual = transform.get_number_of_parameters() assert_equal(actual, expected) actual = transform.get_dim() assert_equal(actual, expected) dipy-1.11.0/dipy/align/tests/test_vector_fields.py000066400000000000000000001643001476546756600222370ustar00rootroot00000000000000from nibabel.affines import apply_affine, from_matvec import numpy as np from numpy.testing import ( assert_almost_equal, assert_array_almost_equal, assert_array_equal, assert_equal, assert_raises, ) from scipy.ndimage import map_coordinates from dipy.align import floating, imwarp, vector_fields as vfu from dipy.align.parzenhist import sample_domain_regular from dipy.align.transforms import regtransforms from dipy.core import geometry from dipy.testing.decorators import set_random_number_generator @set_random_number_generator(3921116) def test_random_displacement_field_2d(rng): from_shape = (25, 32) to_shape = (33, 29) # Create grid coordinates x_0 = np.asarray(range(from_shape[0])) x_1 = np.asarray(range(from_shape[1])) X = np.empty((3,) + from_shape, dtype=np.float64) _O = np.ones(from_shape) X[0, ...] = x_0[:, None] * _O X[1, ...] = x_1[None, :] * _O X[2, ...] 
= 1 # Create an arbitrary image-to-space transform t = 0.15 # translation factor trans = np.array( [[1, 0, -t * from_shape[0]], [0, 1, -t * from_shape[1]], [0, 0, 1]] ) trans_inv = np.linalg.inv(trans) for theta in [-1 * np.pi / 6.0, 0.0, np.pi / 5.0]: # rotation angle for s in [0.83, 1.3, 2.07]: # scale ct = np.cos(theta) st = np.sin(theta) rot = np.array([[ct, -st, 0], [st, ct, 0], [0, 0, 1]]) scale = np.array([[1 * s, 0, 0], [0, 1 * s, 0], [0, 0, 1]]) from_grid2world = trans_inv.dot(scale.dot(rot.dot(trans))) to_grid2world = from_grid2world.dot(scale) to_world2grid = np.linalg.inv(to_grid2world) field, assignment = vfu.create_random_displacement_2d( np.array(from_shape, dtype=np.int32), from_grid2world, np.array(to_shape, dtype=np.int32), to_grid2world, rng=rng, ) field = np.array(field, dtype=floating) assignment = np.array(assignment) # Verify the assignments are inside the requested region assert_equal(0, (assignment < 0).sum()) for i in range(2): assert_equal(0, (assignment[..., i] >= to_shape[i]).sum()) # Compute the warping coordinates (see warp_2d documentation) Y = np.apply_along_axis(from_grid2world.dot, 0, X)[0:2, ...] Z = np.zeros_like(X) Z[0, ...] = Y[0, ...] + field[..., 0] Z[1, ...] = Y[1, ...] + field[..., 1] Z[2, ...] = 1 W = np.apply_along_axis(to_world2grid.dot, 0, Z)[0:2, ...] # Verify the claimed assignments are correct assert_array_almost_equal(W[0, ...], assignment[..., 0], 5) assert_array_almost_equal(W[1, ...], assignment[..., 1], 5) # Test exception is raised when the affine transform matrix is not valid valid = np.zeros((2, 3), dtype=np.float64) invalid = np.zeros((2, 2), dtype=np.float64) shape = np.array(from_shape, dtype=np.int32) assert_raises( ValueError, vfu.create_random_displacement_2d, shape, invalid, shape, valid, rng=rng, ) assert_raises( ValueError, vfu.create_random_displacement_2d, shape, valid, shape, invalid, rng=rng, ) @set_random_number_generator(7127562) def test_random_displacement_field_3d(rng): from_shape = (25, 32, 31) to_shape = (33, 29, 35) # Create grid coordinates x_0 = np.asarray(range(from_shape[0])) x_1 = np.asarray(range(from_shape[1])) x_2 = np.asarray(range(from_shape[2])) X = np.empty((4,) + from_shape, dtype=np.float64) _O = np.ones(from_shape) X[0, ...] = x_0[:, None, None] * _O X[1, ...] = x_1[None, :, None] * _O X[2, ...] = x_2[None, None, :] * _O X[3, ...] 
= 1 # Select an arbitrary rotation axis axis = np.array([0.5, 2.0, 1.5]) # Create an arbitrary image-to-space transform t = 0.15 # translation factor trans = np.array( [ [1, 0, 0, -t * from_shape[0]], [0, 1, 0, -t * from_shape[1]], [0, 0, 1, -t * from_shape[2]], [0, 0, 0, 1], ] ) trans_inv = np.linalg.inv(trans) for theta in [-1 * np.pi / 6.0, 0.0, np.pi / 5.0]: # rotation angle for s in [0.83, 1.3, 2.07]: # scale rot = np.zeros(shape=(4, 4)) rot[:3, :3] = geometry.rodrigues_axis_rotation(axis, theta) rot[3, 3] = 1.0 scale = np.array( [[1 * s, 0, 0, 0], [0, 1 * s, 0, 0], [0, 0, 1 * s, 0], [0, 0, 0, 1]] ) from_grid2world = trans_inv.dot(scale.dot(rot.dot(trans))) to_grid2world = from_grid2world.dot(scale) to_world2grid = np.linalg.inv(to_grid2world) field, assignment = vfu.create_random_displacement_3d( np.array(from_shape, dtype=np.int32), from_grid2world, np.array(to_shape, dtype=np.int32), to_grid2world, rng=rng, ) field = np.array(field, dtype=floating) assignment = np.array(assignment) # Verify the assignments are inside the requested region assert_equal(0, (assignment < 0).sum()) for i in range(3): assert_equal(0, (assignment[..., i] >= to_shape[i]).sum()) # Compute the warping coordinates (see warp_2d documentation) Y = np.apply_along_axis(from_grid2world.dot, 0, X)[0:3, ...] Z = np.zeros_like(X) Z[0, ...] = Y[0, ...] + field[..., 0] Z[1, ...] = Y[1, ...] + field[..., 1] Z[2, ...] = Y[2, ...] + field[..., 2] Z[3, ...] = 1 W = np.apply_along_axis(to_world2grid.dot, 0, Z)[0:3, ...] # Verify the claimed assignments are correct assert_array_almost_equal(W[0, ...], assignment[..., 0], 5) assert_array_almost_equal(W[1, ...], assignment[..., 1], 5) assert_array_almost_equal(W[2, ...], assignment[..., 2], 5) # Test exception is raised when the affine transform matrix is not valid valid = np.zeros((3, 4), dtype=np.float64) invalid = np.zeros((3, 3), dtype=np.float64) shape = np.array(from_shape, dtype=np.int32) assert_raises( ValueError, vfu.create_random_displacement_2d, shape, invalid, shape, valid, rng=rng, ) assert_raises( ValueError, vfu.create_random_displacement_2d, shape, valid, shape, invalid, rng=rng, ) def test_harmonic_fields_2d(): nrows = 64 ncols = 67 mid_row = nrows // 2 mid_col = ncols // 2 expected_d = np.empty(shape=(nrows, ncols, 2)) expected_d_inv = np.empty(shape=(nrows, ncols, 2)) for b in [0.1, 0.3, 0.7]: for m in [2, 4, 7]: for i in range(nrows): for j in range(ncols): ii = i - mid_row jj = j - mid_col theta = np.arctan2(ii, jj) expected_d[i, j, 0] = ii * (1.0 / (1 + b * np.cos(m * theta)) - 1.0) expected_d[i, j, 1] = jj * (1.0 / (1 + b * np.cos(m * theta)) - 1.0) expected_d_inv[i, j, 0] = b * np.cos(m * theta) * ii expected_d_inv[i, j, 1] = b * np.cos(m * theta) * jj actual_d, actual_d_inv = vfu.create_harmonic_fields_2d(nrows, ncols, b, m) assert_array_almost_equal(expected_d, actual_d) assert_array_almost_equal(expected_d_inv, expected_d_inv) def test_harmonic_fields_3d(): nslices = 25 nrows = 34 ncols = 37 mid_slice = nslices // 2 mid_row = nrows // 2 mid_col = ncols // 2 expected_d = np.empty(shape=(nslices, nrows, ncols, 3)) expected_d_inv = np.empty(shape=(nslices, nrows, ncols, 3)) for b in [0.3, 0.7]: for m in [2, 5]: for k in range(nslices): for i in range(nrows): for j in range(ncols): kk = k - mid_slice ii = i - mid_row jj = j - mid_col theta = np.arctan2(ii, jj) expected_d[k, i, j, 0] = kk * ( 1.0 / (1 + b * np.cos(m * theta)) - 1.0 ) expected_d[k, i, j, 1] = ii * ( 1.0 / (1 + b * np.cos(m * theta)) - 1.0 ) expected_d[k, i, j, 2] = jj * ( 1.0 / (1 + b * 
np.cos(m * theta)) - 1.0 ) expected_d_inv[k, i, j, 0] = b * np.cos(m * theta) * kk expected_d_inv[k, i, j, 1] = b * np.cos(m * theta) * ii expected_d_inv[k, i, j, 2] = b * np.cos(m * theta) * jj actual_d, actual_d_inv = vfu.create_harmonic_fields_3d( nslices, nrows, ncols, b, m ) assert_array_almost_equal(expected_d, actual_d) assert_array_almost_equal(expected_d_inv, expected_d_inv) def test_circle(): sh = (64, 61) cr = sh[0] // 2 cc = sh[1] // 2 x_0 = np.asarray(range(sh[0])) x_1 = np.asarray(range(sh[1])) X = np.empty((2,) + sh, dtype=np.float64) _O = np.ones(sh) X[0, ...] = x_0[:, None] * _O - cr X[1, ...] = x_1[None, :] * _O - cc nrm = np.sqrt(np.sum(X**2, axis=0)) for radius in [0, 7, 17, 32]: expected = nrm <= radius actual = vfu.create_circle(sh[0], sh[1], radius) assert_array_almost_equal(actual, expected) def test_sphere(): sh = (64, 61, 57) cs = sh[0] // 2 cr = sh[1] // 2 cc = sh[2] // 2 x_0 = np.asarray(range(sh[0])) x_1 = np.asarray(range(sh[1])) x_2 = np.asarray(range(sh[2])) X = np.empty((3,) + sh, dtype=np.float64) _O = np.ones(sh) X[0, ...] = x_0[:, None, None] * _O - cs X[1, ...] = x_1[None, :, None] * _O - cr X[2, ...] = x_2[None, None, :] * _O - cc nrm = np.sqrt(np.sum(X**2, axis=0)) for radius in [0, 7, 17, 32]: expected = nrm <= radius actual = vfu.create_sphere(sh[0], sh[1], sh[2], radius) assert_array_almost_equal(actual, expected) def test_warping_2d(): r""" Tests the cython implementation of the 2d warpings against scipy """ sh = (64, 64) nr = sh[0] nc = sh[1] # Create an image of a circle radius = 24 circle = vfu.create_circle(nr, nc, radius) circle = np.array(circle, dtype=floating) # Create a displacement field for warping d, dinv = vfu.create_harmonic_fields_2d(nr, nc, 0.2, 8) d = np.asarray(d).astype(floating) # Create grid coordinates x_0 = np.asarray(range(sh[0])) x_1 = np.asarray(range(sh[1])) X = np.empty((3,) + sh, dtype=np.float64) _O = np.ones(sh) X[0, ...] = x_0[:, None] * _O X[1, ...] = x_1[None, :] * _O X[2, ...] = 1 # Select an arbitrary translation matrix t = 0.1 trans = np.array([[1, 0, -t * nr], [0, 1, -t * nc], [0, 0, 1]]) trans_inv = np.linalg.inv(trans) # Select arbitrary rotation and scaling matrices for theta in [-1 * np.pi / 6.0, 0.0, np.pi / 6.0]: # rotation angle for s in [0.42, 1.3, 2.15]: # scale ct = np.cos(theta) st = np.sin(theta) rot = np.array([[ct, -st, 0], [st, ct, 0], [0, 0, 1]]) scale = np.array([[1 * s, 0, 0], [0, 1 * s, 0], [0, 0, 1]]) aff = trans_inv.dot(scale.dot(rot.dot(trans))) # Select arbitrary (but different) grid-to-space transforms sampling_grid2world = scale field_grid2world = aff field_world2grid = np.linalg.inv(field_grid2world) image_grid2world = aff.dot(scale) image_world2grid = np.linalg.inv(image_grid2world) A = field_world2grid.dot(sampling_grid2world) B = image_world2grid.dot(sampling_grid2world) C = image_world2grid # Reorient the displacement field according to its grid-to-space # transform dcopy = np.copy(d) vfu.reorient_vector_field_2d(dcopy, field_grid2world) extended_dcopy = np.zeros((nr + 2, nc + 2, 2), dtype=floating) extended_dcopy[1 : nr + 1, 1 : nc + 1, :] = dcopy # Compute the warping coordinates (see warp_2d documentation) Y = np.apply_along_axis(A.dot, 0, X)[0:2, ...] Z = np.zeros_like(X) Z[0, ...] = map_coordinates(extended_dcopy[..., 0], Y + 1, order=1) Z[1, ...] = map_coordinates(extended_dcopy[..., 1], Y + 1, order=1) Z[2, ...] = 0 Z = np.apply_along_axis(C.dot, 0, Z)[0:2, ...] T = np.apply_along_axis(B.dot, 0, X)[0:2, ...] 
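    # Putting the pieces together (following the warp_2d conventions
    # referenced above): Y = A x indexes the displacement field, Z = C d(Y)
    # maps the interpolated displacement into image index space, and T = B x
    # is the affinely mapped sampling grid, so each voxel x is sampled from
    # the moving image at W = B x + C d(A x).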
W = T + Z # Test bilinear interpolation expected = map_coordinates(circle, W, order=1) warped = vfu.warp_2d( circle, dcopy, affine_idx_in=A, affine_idx_out=B, affine_disp=C, out_shape=np.array(sh, dtype=np.int32), ) assert_array_almost_equal(warped, expected) # Test nearest neighbor interpolation expected = map_coordinates(circle, W, order=0) warped = vfu.warp_2d_nn( circle, dcopy, affine_idx_in=A, affine_idx_out=B, affine_disp=C, out_shape=np.array(sh, dtype=np.int32), ) assert_array_almost_equal(warped, expected) # Test exception is raised when the affine transform matrix is not valid val = np.zeros((2, 3), dtype=np.float64) inval = np.zeros((2, 2), dtype=np.float64) sh = np.array(sh, dtype=np.int32) # Exceptions from warp_2d assert_raises( ValueError, vfu.warp_2d, circle, d, affine_idx_in=inval, affine_idx_out=val, affine_disp=val, out_shape=sh, ) assert_raises( ValueError, vfu.warp_2d, circle, d, affine_idx_in=val, affine_idx_out=inval, affine_disp=val, out_shape=sh, ) assert_raises( ValueError, vfu.warp_2d, circle, d, affine_idx_in=val, affine_idx_out=val, affine_disp=inval, out_shape=sh, ) # Exceptions from warp_2d_nn assert_raises( ValueError, vfu.warp_2d_nn, circle, d, affine_idx_in=inval, affine_idx_out=val, affine_disp=val, out_shape=sh, ) assert_raises( ValueError, vfu.warp_2d_nn, circle, d, affine_idx_in=val, affine_idx_out=inval, affine_disp=val, out_shape=sh, ) assert_raises( ValueError, vfu.warp_2d_nn, circle, d, affine_idx_in=val, affine_idx_out=val, affine_disp=inval, out_shape=sh, ) def test_warping_3d(): r""" Tests the cython implementation of the 2d warpings against scipy """ sh = (64, 64, 64) ns = sh[0] nr = sh[1] nc = sh[2] # Create an image of a sphere radius = 24 sphere = vfu.create_sphere(ns, nr, nc, radius) sphere = np.array(sphere, dtype=floating) # Create a displacement field for warping d, dinv = vfu.create_harmonic_fields_3d(ns, nr, nc, 0.2, 8) d = np.asarray(d).astype(floating) # Create grid coordinates x_0 = np.asarray(range(sh[0])) x_1 = np.asarray(range(sh[1])) x_2 = np.asarray(range(sh[2])) X = np.empty((4,) + sh, dtype=np.float64) _O = np.ones(sh) X[0, ...] = x_0[:, None, None] * _O X[1, ...] = x_1[None, :, None] * _O X[2, ...] = x_2[None, None, :] * _O X[3, ...] 
= 1 # Select an arbitrary rotation axis axis = np.array([0.5, 2.0, 1.5]) # Select an arbitrary translation matrix t = 0.1 trans = np.array( [[1, 0, 0, -t * ns], [0, 1, 0, -t * nr], [0, 0, 1, -t * nc], [0, 0, 0, 1]] ) trans_inv = np.linalg.inv(trans) # Select arbitrary rotation and scaling matrices for theta in [-1 * np.pi / 5.0, 0.0, np.pi / 5.0]: # rotation angle for s in [0.45, 1.1, 2.0]: # scale rot = np.zeros(shape=(4, 4)) rot[:3, :3] = geometry.rodrigues_axis_rotation(axis, theta) rot[3, 3] = 1.0 scale = np.array( [[1 * s, 0, 0, 0], [0, 1 * s, 0, 0], [0, 0, 1 * s, 0], [0, 0, 0, 1]] ) aff = trans_inv.dot(scale.dot(rot.dot(trans))) # Select arbitrary (but different) grid-to-space transforms sampling_grid2world = scale field_grid2world = aff field_world2grid = np.linalg.inv(field_grid2world) image_grid2world = aff.dot(scale) image_world2grid = np.linalg.inv(image_grid2world) A = field_world2grid.dot(sampling_grid2world) B = image_world2grid.dot(sampling_grid2world) C = image_world2grid # Reorient the displacement field according to its grid-to-space # transform dcopy = np.copy(d) vfu.reorient_vector_field_3d(dcopy, field_grid2world) extended_dcopy = np.zeros((ns + 2, nr + 2, nc + 2, 3), dtype=floating) extended_dcopy[1 : ns + 1, 1 : nr + 1, 1 : nc + 1, :] = dcopy # Compute the warping coordinates (see warp_2d documentation) Y = np.apply_along_axis(A.dot, 0, X)[0:3, ...] Z = np.zeros_like(X) Z[0, ...] = map_coordinates(extended_dcopy[..., 0], Y + 1, order=1) Z[1, ...] = map_coordinates(extended_dcopy[..., 1], Y + 1, order=1) Z[2, ...] = map_coordinates(extended_dcopy[..., 2], Y + 1, order=1) Z[3, ...] = 0 Z = np.apply_along_axis(C.dot, 0, Z)[0:3, ...] T = np.apply_along_axis(B.dot, 0, X)[0:3, ...] W = T + Z # Test bilinear interpolation expected = map_coordinates(sphere, W, order=1) warped = vfu.warp_3d( sphere, dcopy, affine_idx_in=A, affine_idx_out=B, affine_disp=C, out_shape=np.array(sh, dtype=np.int32), ) assert_array_almost_equal(warped, expected, decimal=5) # Test nearest neighbor interpolation expected = map_coordinates(sphere, W, order=0) warped = vfu.warp_3d_nn( sphere, dcopy, affine_idx_in=A, affine_idx_out=B, affine_disp=C, out_shape=np.array(sh, dtype=np.int32), ) assert_array_almost_equal(warped, expected, decimal=5) # Test exception is raised when the affine transform matrix is not valid val = np.zeros((3, 4), dtype=np.float64) inval = np.zeros((3, 3), dtype=np.float64) sh = np.array(sh, dtype=np.int32) # Exceptions from warp_3d assert_raises( ValueError, vfu.warp_3d, sphere, d, affine_idx_in=inval, affine_idx_out=val, affine_disp=val, out_shape=sh, ) assert_raises( ValueError, vfu.warp_3d, sphere, d, affine_idx_in=val, affine_idx_out=inval, affine_disp=val, out_shape=sh, ) assert_raises( ValueError, vfu.warp_3d, sphere, d, affine_idx_in=val, affine_idx_out=val, affine_disp=inval, out_shape=sh, ) # Exceptions from warp_3d_nn assert_raises( ValueError, vfu.warp_3d_nn, sphere, d, affine_idx_in=inval, affine_idx_out=val, affine_disp=val, out_shape=sh, ) assert_raises( ValueError, vfu.warp_3d_nn, sphere, d, affine_idx_in=val, affine_idx_out=inval, affine_disp=val, out_shape=sh, ) assert_raises( ValueError, vfu.warp_3d_nn, sphere, d, affine_idx_in=val, affine_idx_out=val, affine_disp=inval, out_shape=sh, ) def test_affine_transforms_2d(): r""" Tests 2D affine transform functions against scipy implementation """ # Create a simple invertible affine transform d_shape = (64, 64) codomain_shape = (80, 80) nr = d_shape[0] nc = d_shape[1] # Create an image of a circle radius = 16 circle 
= vfu.create_circle(codomain_shape[0], codomain_shape[1], radius) circle = np.array(circle, dtype=floating) # Create grid coordinates x_0 = np.asarray(range(d_shape[0])) x_1 = np.asarray(range(d_shape[1])) X = np.empty((3,) + d_shape, dtype=np.float64) _O = np.ones(d_shape) X[0, ...] = x_0[:, None] * _O X[1, ...] = x_1[None, :] * _O X[2, ...] = 1 # Generate affine transforms t = 0.3 trans = np.array([[1, 0, -t * nr], [0, 1, -t * nc], [0, 0, 1]]) trans_inv = np.linalg.inv(trans) for theta in [-1 * np.pi / 5.0, 0.0, np.pi / 5.0]: # rotation angle for s in [0.5, 1.0, 2.0]: # scale ct = np.cos(theta) st = np.sin(theta) rot = np.array([[ct, -st, 0], [st, ct, 0], [0, 0, 1]]) scale = np.array([[1 * s, 0, 0], [0, 1 * s, 0], [0, 0, 1]]) gt_affine = trans_inv.dot(scale.dot(rot.dot(trans))) # Apply the affine transform to the grid coordinates Y = np.apply_along_axis(gt_affine.dot, 0, X)[0:2, ...] expected = map_coordinates(circle, Y, order=1) warped = vfu.transform_2d_affine( circle, np.array(d_shape, dtype=np.int32), affine=gt_affine ) assert_array_almost_equal(warped, expected) # Test affine warping with nearest-neighbor interpolation expected = map_coordinates(circle, Y, order=0) warped = vfu.transform_2d_affine_nn( circle, np.array(d_shape, dtype=np.int32), affine=gt_affine ) assert_array_almost_equal(warped, expected) # Test the affine = None case warped = vfu.transform_2d_affine( circle, np.array(codomain_shape, dtype=np.int32), affine=None ) assert_array_equal(warped, circle) warped = vfu.transform_2d_affine_nn( circle, np.array(codomain_shape, dtype=np.int32), affine=None ) assert_array_equal(warped, circle) # Test exception is raised when the affine transform matrix is not valid invalid = np.zeros((2, 2), dtype=np.float64) invalid_nan = np.zeros((3, 3), dtype=np.float64) invalid_nan[1, 1] = np.nan shape = np.array(codomain_shape, dtype=np.int32) # Exceptions from transform_2d assert_raises(ValueError, vfu.transform_2d_affine, circle, shape, affine=invalid) assert_raises( ValueError, vfu.transform_2d_affine, circle, shape, affine=invalid_nan ) # Exceptions from transform_2d_nn assert_raises(ValueError, vfu.transform_2d_affine_nn, circle, shape, affine=invalid) assert_raises( ValueError, vfu.transform_2d_affine_nn, circle, shape, affine=invalid_nan ) def test_affine_transforms_3d(): r""" Tests 3D affine transform functions against scipy implementation """ # Create a simple invertible affine transform d_shape = (64, 64, 64) codomain_shape = (80, 80, 80) ns = d_shape[0] nr = d_shape[1] nc = d_shape[2] # Create an image of a sphere radius = 16 sphere = vfu.create_sphere( codomain_shape[0], codomain_shape[1], codomain_shape[2], radius ) sphere = np.array(sphere, dtype=floating) # Create grid coordinates x_0 = np.asarray(range(d_shape[0])) x_1 = np.asarray(range(d_shape[1])) x_2 = np.asarray(range(d_shape[2])) X = np.empty((4,) + d_shape, dtype=np.float64) _O = np.ones(d_shape) X[0, ...] = x_0[:, None, None] * _O X[1, ...] = x_1[None, :, None] * _O X[2, ...] = x_2[None, None, :] * _O X[3, ...] 
= 1 # Generate affine transforms # Select an arbitrary rotation axis axis = np.array([0.5, 2.0, 1.5]) t = 0.3 trans = np.array( [[1, 0, 0, -t * ns], [0, 1, 0, -t * nr], [0, 0, 1, -t * nc], [0, 0, 0, 1]] ) trans_inv = np.linalg.inv(trans) for theta in [-1 * np.pi / 5.0, 0.0, np.pi / 5.0]: # rotation angle for s in [0.45, 1.1, 2.3]: # scale rot = np.zeros(shape=(4, 4)) rot[:3, :3] = geometry.rodrigues_axis_rotation(axis, theta) rot[3, 3] = 1.0 scale = np.array( [[1 * s, 0, 0, 0], [0, 1 * s, 0, 0], [0, 0, 1 * s, 0], [0, 0, 0, 1]] ) gt_affine = trans_inv.dot(scale.dot(rot.dot(trans))) # Apply the affine transform to the grid coordinates Y = np.apply_along_axis(gt_affine.dot, 0, X)[0:3, ...] expected = map_coordinates(sphere, Y, order=1) transformed = vfu.transform_3d_affine( sphere, np.array(d_shape, dtype=np.int32), affine=gt_affine ) assert_array_almost_equal(transformed, expected) # Test affine transform with nearest-neighbor interpolation expected = map_coordinates(sphere, Y, order=0) transformed = vfu.transform_3d_affine_nn( sphere, np.array(d_shape, dtype=np.int32), affine=gt_affine ) assert_array_almost_equal(transformed, expected) # Test the affine = None case transformed = vfu.transform_3d_affine( sphere, np.array(codomain_shape, dtype=np.int32), affine=None ) assert_array_equal(transformed, sphere) transformed = vfu.transform_3d_affine_nn( sphere, np.array(codomain_shape, dtype=np.int32), affine=None ) assert_array_equal(transformed, sphere) # Test exception is raised when the affine transform matrix is not valid invalid = np.zeros((3, 3), dtype=np.float64) invalid_nan = np.zeros((4, 4), dtype=np.float64) invalid_nan[1, 1] = np.nan shape = np.array(codomain_shape, dtype=np.int32) # Exceptions from transform_3d_affine assert_raises(ValueError, vfu.transform_3d_affine, sphere, shape, affine=invalid) assert_raises( ValueError, vfu.transform_3d_affine, sphere, shape, affine=invalid_nan ) # Exceptions from transform_3d_affine_nn assert_raises(ValueError, vfu.transform_3d_affine_nn, sphere, shape, affine=invalid) assert_raises( ValueError, vfu.transform_3d_affine_nn, sphere, shape, affine=invalid_nan ) @set_random_number_generator(8315759) def test_compose_vector_fields_2d(rng): r""" Creates two random displacement field that exactly map pixels from an input image to an output image. The resulting displacements and their composition, although operating in physical space, map the points exactly (up to numerical precision). """ input_shape = (10, 10) tgt_sh = (10, 10) # create a simple affine transformation nr = input_shape[0] nc = input_shape[1] s = 1.5 t = 2.5 trans = np.array([[1, 0, -t * nr], [0, 1, -t * nc], [0, 0, 1]]) trans_inv = np.linalg.inv(trans) scale = np.array([[1 * s, 0, 0], [0, 1 * s, 0], [0, 0, 1]]) gt_affine = trans_inv.dot(scale.dot(trans)) # create two random displacement fields input_grid2world = gt_affine target_grid2world = gt_affine disp1, assign1 = vfu.create_random_displacement_2d( np.array(input_shape, dtype=np.int32), input_grid2world, np.array(tgt_sh, dtype=np.int32), target_grid2world, rng=rng, ) disp1 = np.array(disp1, dtype=floating) assign1 = np.array(assign1) disp2, assign2 = vfu.create_random_displacement_2d( np.array(input_shape, dtype=np.int32), input_grid2world, np.array(tgt_sh, dtype=np.int32), target_grid2world, rng=rng, ) disp2 = np.array(disp2, dtype=floating) assign2 = np.array(assign2) # create a random image (with decimal digits) to warp moving_image = np.empty(tgt_sh, dtype=floating) moving_image[...] 
= rng.integers(0, 10, np.size(moving_image)).reshape( tuple(tgt_sh) ) # set boundary values to zero so we don't test wrong interpolation due to # floating point precision moving_image[0, :] = 0 moving_image[-1, :] = 0 moving_image[:, 0] = 0 moving_image[:, -1] = 0 # evaluate the composed warping using the exact assignments # (first 1 then 2) warp1 = moving_image[(assign2[..., 0], assign2[..., 1])] expected = warp1[(assign1[..., 0], assign1[..., 1])] # compose the displacement fields target_world2grid = np.linalg.inv(target_grid2world) premult_index = target_world2grid.dot(input_grid2world) premult_disp = target_world2grid for time_scaling in [0.25, 0.5, 1.0, 2.0, 4.0]: composition, stats = vfu.compose_vector_fields_2d( disp1, disp2 / time_scaling, premult_index, premult_disp, time_scaling, None ) # apply the implementation under test warped = np.array( vfu.warp_2d( moving_image, composition, affine_idx_in=None, affine_idx_out=premult_index, affine_disp=premult_disp, ) ) assert_array_almost_equal(warped, expected) # test also using nearest neighbor interpolation warped = np.array( vfu.warp_2d_nn( moving_image, composition, affine_idx_in=None, affine_idx_out=premult_index, affine_disp=premult_disp, ) ) assert_array_almost_equal(warped, expected) # test updating the displacement field instead of creating a new one composition = disp1.copy() vfu.compose_vector_fields_2d( composition, disp2 / time_scaling, premult_index, premult_disp, time_scaling, composition, ) # apply the implementation under test warped = np.array( vfu.warp_2d( moving_image, composition, affine_idx_in=None, affine_idx_out=premult_index, affine_disp=premult_disp, ) ) assert_array_almost_equal(warped, expected) # test also using nearest neighbor interpolation warped = np.array( vfu.warp_2d_nn( moving_image, composition, affine_idx_in=None, affine_idx_out=premult_index, affine_disp=premult_disp, ) ) assert_array_almost_equal(warped, expected) # Test non-overlapping case x_0 = np.asarray(range(input_shape[0])) x_1 = np.asarray(range(input_shape[1])) X = np.empty(input_shape + (2,), dtype=np.float64) _O = np.ones(input_shape) X[..., 0] = x_0[:, None] * _O X[..., 1] = x_1[None, :] * _O random_labels = rng.integers(0, 2, input_shape[0] * input_shape[1] * 2) random_labels = random_labels.reshape(input_shape + (2,)) values = np.array([-1, tgt_sh[0]]) disp1 = (values[random_labels] - X).astype(floating) composition, stats = vfu.compose_vector_fields_2d( disp1, disp2, None, None, 1.0, None ) assert_array_almost_equal(composition, np.zeros_like(composition)) # test updating the displacement field instead of creating a new one composition = disp1.copy() vfu.compose_vector_fields_2d(composition, disp2, None, None, 1.0, composition) assert_array_almost_equal(composition, np.zeros_like(composition)) # Test exception is raised when the affine transform matrix is not valid valid = np.zeros((2, 3), dtype=np.float64) invalid = np.zeros((2, 2), dtype=np.float64) assert_raises( ValueError, vfu.compose_vector_fields_2d, disp1, disp2, invalid, valid, 1.0, None, ) assert_raises( ValueError, vfu.compose_vector_fields_2d, disp1, disp2, valid, invalid, 1.0, None, ) @set_random_number_generator(8315759) def test_compose_vector_fields_3d(rng): r""" Creates two random displacement field that exactly map pixels from an input image to an output image. The resulting displacements and their composition, although operating in physical space, map the points exactly (up to numerical precision). 
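    In index terms the composition behaves, roughly (ignoring boundary
    handling), like comp[i] = d1[i] + t * d2[A.dot(i) + B.dot(d1[i])],
    where t is the time-scaling factor, A = premult_index and
    B = premult_disp. Warping an image with comp is therefore equivalent
    to warping with d2 and then gathering the result through d1, which is
    what the exact index assignments (assign1, assign2) verify below.
    A minimal 1-D integer sketch of that identity (hypothetical fields,
    assuming t == 1 and identity pre-multiplication matrices, so the
    lookups are exact):

        import numpy as np
        x = np.arange(5)
        d1 = np.array([1, -1, 2, 0, -2])  # hypothetical integer field
        d2 = np.array([0, 2, -1, 1, 0])   # hypothetical integer field
        comp = d1 + d2[x + d1]
        image = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
        # warping with comp == warping with d2, then gathering through d1
        assert np.array_equal(image[x + comp],
                              image[(x + d1) + d2[x + d1]])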
""" input_shape = (10, 10, 10) tgt_sh = (10, 10, 10) # create a simple affine transformation ns = input_shape[0] nr = input_shape[1] nc = input_shape[2] s = 1.5 t = 2.5 trans = np.array( [[1, 0, 0, -t * ns], [0, 1, 0, -t * nr], [0, 0, 1, -t * nc], [0, 0, 0, 1]] ) trans_inv = np.linalg.inv(trans) scale = np.array( [[1 * s, 0, 0, 0], [0, 1 * s, 0, 0], [0, 0, 1 * s, 0], [0, 0, 0, 1]] ) gt_affine = trans_inv.dot(scale.dot(trans)) # create two random displacement fields input_grid2world = gt_affine target_grid2world = gt_affine disp1, assign1 = vfu.create_random_displacement_3d( np.array(input_shape, dtype=np.int32), input_grid2world, np.array(tgt_sh, dtype=np.int32), target_grid2world, rng=rng, ) disp1 = np.array(disp1, dtype=floating) assign1 = np.array(assign1) disp2, assign2 = vfu.create_random_displacement_3d( np.array(input_shape, dtype=np.int32), input_grid2world, np.array(tgt_sh, dtype=np.int32), target_grid2world, rng=rng, ) disp2 = np.array(disp2, dtype=floating) assign2 = np.array(assign2) # create a random image (with decimal digits) to warp moving_image = np.empty(tgt_sh, dtype=floating) moving_image[...] = rng.integers(0, 10, np.size(moving_image)).reshape( tuple(tgt_sh) ) # set boundary values to zero so we don't test wrong interpolation due to # floating point precision moving_image[0, :, :] = 0 moving_image[-1, :, :] = 0 moving_image[:, 0, :] = 0 moving_image[:, -1, :] = 0 moving_image[:, :, 0] = 0 moving_image[:, :, -1] = 0 # evaluate the composed warping using the exact assignments # (first 1 then 2) warp1 = moving_image[(assign2[..., 0], assign2[..., 1], assign2[..., 2])] expected = warp1[(assign1[..., 0], assign1[..., 1], assign1[..., 2])] # compose the displacement fields target_world2grid = np.linalg.inv(target_grid2world) premult_index = target_world2grid.dot(input_grid2world) premult_disp = target_world2grid for time_scaling in [0.25, 0.5, 1.0, 2.0, 4.0]: composition, stats = vfu.compose_vector_fields_3d( disp1, disp2 / time_scaling, premult_index, premult_disp, time_scaling, None ) # apply the implementation under test warped = np.array( vfu.warp_3d( moving_image, composition, affine_idx_in=None, affine_idx_out=premult_index, affine_disp=premult_disp, ) ) assert_array_almost_equal(warped, expected) # test also using nearest neighbor interpolation warped = np.array( vfu.warp_3d_nn( moving_image, composition, affine_idx_in=None, affine_idx_out=premult_index, affine_disp=premult_disp, ) ) assert_array_almost_equal(warped, expected) # test updating the displacement field instead of creating a new one composition = disp1.copy() vfu.compose_vector_fields_3d( composition, disp2 / time_scaling, premult_index, premult_disp, time_scaling, composition, ) # apply the implementation under test warped = np.array( vfu.warp_3d( moving_image, composition, affine_idx_in=None, affine_idx_out=premult_index, affine_disp=premult_disp, ) ) assert_array_almost_equal(warped, expected) # test also using nearest neighbor interpolation warped = np.array( vfu.warp_3d_nn( moving_image, composition, affine_idx_in=None, affine_idx_out=premult_index, affine_disp=premult_disp, ) ) assert_array_almost_equal(warped, expected) # Test non-overlapping case x_0 = np.asarray(range(input_shape[0])) x_1 = np.asarray(range(input_shape[1])) x_2 = np.asarray(range(input_shape[2])) X = np.empty(input_shape + (3,), dtype=np.float64) _O = np.ones(input_shape) X[..., 0] = x_0[:, None, None] * _O X[..., 1] = x_1[None, :, None] * _O X[..., 2] = x_2[None, None, :] * _O sz = input_shape[0] * input_shape[1] * 
input_shape[2] * 3 random_labels = rng.integers(0, 2, sz) random_labels = random_labels.reshape(input_shape + (3,)) values = np.array([-1, tgt_sh[0]]) disp1 = (values[random_labels] - X).astype(floating) composition, stats = vfu.compose_vector_fields_3d( disp1, disp2, None, None, 1.0, None ) assert_array_almost_equal(composition, np.zeros_like(composition)) # test updating the displacement field instead of creating a new one composition = disp1.copy() vfu.compose_vector_fields_3d(composition, disp2, None, None, 1.0, composition) assert_array_almost_equal(composition, np.zeros_like(composition)) # Test exception is raised when the affine transform matrix is not valid valid = np.zeros((3, 4), dtype=np.float64) invalid = np.zeros((3, 3), dtype=np.float64) assert_raises( ValueError, vfu.compose_vector_fields_3d, disp1, disp2, invalid, valid, 1.0, None, ) assert_raises( ValueError, vfu.compose_vector_fields_3d, disp1, disp2, valid, invalid, 1.0, None, ) def test_invert_vector_field_2d(): r""" Inverts a synthetic, analytically invertible, displacement field """ shape = (64, 64) nr = shape[0] nc = shape[1] # Create an arbitrary image-to-space transform t = 2.5 # translation factor trans = np.array([[1, 0, -t * nr], [0, 1, -t * nc], [0, 0, 1]]) trans_inv = np.linalg.inv(trans) d, _ = vfu.create_harmonic_fields_2d(nr, nc, 0.2, 8) d = np.asarray(d).astype(floating) for theta in [-1 * np.pi / 5.0, 0.0, np.pi / 5.0]: # rotation angle for s in [0.5, 1.0, 2.0]: # scale ct = np.cos(theta) st = np.sin(theta) rot = np.array([[ct, -st, 0], [st, ct, 0], [0, 0, 1]]) scale = np.array([[1 * s, 0, 0], [0, 1 * s, 0], [0, 0, 1]]) gt_affine = trans_inv.dot(scale.dot(rot.dot(trans))) gt_affine_inv = np.linalg.inv(gt_affine) dcopy = np.copy(d) # make sure the field remains invertible after the re-mapping vfu.reorient_vector_field_2d(dcopy, gt_affine) inv_approx = vfu.invert_vector_field_fixed_point_2d( dcopy, gt_affine_inv, np.array([s, s]), 40, 1e-7 ) mapping = imwarp.DiffeomorphicMap(2, (nr, nc), disp_grid2world=gt_affine) mapping.forward = dcopy mapping.backward = inv_approx residual, stats = mapping.compute_inversion_error() assert_almost_equal(stats[1], 0, decimal=4) assert_almost_equal(stats[2], 0, decimal=4) # Test exception is raised when the affine transform matrix is not valid invalid = np.zeros((2, 2), dtype=np.float64) spacing = np.array([1.0, 1.0]) assert_raises( ValueError, vfu.invert_vector_field_fixed_point_2d, d, invalid, spacing, 40, 1e-7, start=None, ) def test_invert_vector_field_3d(): r""" Inverts a synthetic, analytically invertible, displacement field """ shape = (64, 64, 64) ns = shape[0] nr = shape[1] nc = shape[2] # Create an arbitrary image-to-space transform # Select an arbitrary rotation axis axis = np.array([2.0, 0.5, 1.0]) t = 2.5 # translation factor trans = np.array( [[1, 0, 0, -t * ns], [0, 1, 0, -t * nr], [0, 0, 1, -t * nc], [0, 0, 0, 1]] ) trans_inv = np.linalg.inv(trans) d, _ = vfu.create_harmonic_fields_3d(ns, nr, nc, 0.2, 8) d = np.asarray(d).astype(floating) for theta in [-1 * np.pi / 5.0, 0.0, np.pi / 5.0]: # rotation angle for s in [0.5, 1.0, 2.0]: # scale rot = np.zeros(shape=(4, 4)) rot[:3, :3] = geometry.rodrigues_axis_rotation(axis, theta) rot[3, 3] = 1.0 scale = np.array( [[1 * s, 0, 0, 0], [0, 1 * s, 0, 0], [0, 0, 1 * s, 0], [0, 0, 0, 1]] ) gt_affine = trans_inv.dot(scale.dot(rot.dot(trans))) gt_affine_inv = np.linalg.inv(gt_affine) dcopy = np.copy(d) # make sure the field remains invertible after the re-mapping vfu.reorient_vector_field_3d(dcopy, gt_affine) # Note: the 
spacings are used just to check convergence, so they
            # don't need to be very accurate. Here we are passing (0.5 * s) to
            # force the algorithm to make more iterations: in ANTS, there is a
            # hard-coded bound on the maximum residual, that's why we cannot
            # force more iterations by changing the parameters.
            # We will investigate this issue in more detail in the future.
            inv_approx = vfu.invert_vector_field_fixed_point_3d(
                dcopy, gt_affine_inv, np.array([s, s, s]) * 0.5, 40, 1e-7
            )

            mapping = imwarp.DiffeomorphicMap(
                3, (ns, nr, nc), disp_grid2world=gt_affine
            )
            mapping.forward = dcopy
            mapping.backward = inv_approx
            residual, stats = mapping.compute_inversion_error()
            assert_almost_equal(stats[1], 0, decimal=3)
            assert_almost_equal(stats[2], 0, decimal=3)

    # Test exception is raised when the affine transform matrix is not valid
    invalid = np.zeros((3, 3), dtype=np.float64)
    spacing = np.array([1.0, 1.0, 1.0])
    assert_raises(
        ValueError,
        vfu.invert_vector_field_fixed_point_3d,
        d,
        invalid,
        spacing,
        40,
        1e-7,
        start=None,
    )


def test_resample_vector_field_2d():
    r"""
    Expand a vector field by 2, then subsample by 2; the resulting field
    should be the original one
    """
    domain_shape = np.array((64, 64), dtype=np.int32)
    reduced_shape = np.array((32, 32), dtype=np.int32)
    factors = np.array([0.5, 0.5])
    d, dinv = vfu.create_harmonic_fields_2d(reduced_shape[0], reduced_shape[1], 0.3, 6)
    d = np.array(d, dtype=floating)

    expanded = vfu.resample_displacement_field_2d(d, factors, domain_shape)
    subsampled = expanded[::2, ::2, :]

    assert_array_almost_equal(d, subsampled)


def test_resample_vector_field_3d():
    r"""
    Expand a vector field by 2, then subsample by 2; the resulting field
    should be the original one
    """
    domain_shape = np.array((64, 64, 64), dtype=np.int32)
    reduced_shape = np.array((32, 32, 32), dtype=np.int32)
    factors = np.array([0.5, 0.5, 0.5])
    d, dinv = vfu.create_harmonic_fields_3d(
        reduced_shape[0], reduced_shape[1], reduced_shape[2], 0.3, 6
    )
    d = np.array(d, dtype=floating)

    expanded = vfu.resample_displacement_field_3d(d, factors, domain_shape)
    subsampled = expanded[::2, ::2, ::2, :]

    assert_array_almost_equal(d, subsampled)


@set_random_number_generator(8315759)
def test_downsample_scalar_field_2d(rng):
    size = 32
    sh = (size, size)
    for reduce_r in [True, False]:
        nr = size - 1 if reduce_r else size
        for reduce_c in [True, False]:
            nc = size - 1 if reduce_c else size
            image = np.empty((size, size), dtype=floating)
            image[...] = rng.integers(0, 10, np.size(image)).reshape(sh)
            if reduce_r:
                image[-1, :] = 0
            if reduce_c:
                image[:, -1] = 0
            a = image[::2, ::2]
            b = image[1::2, ::2]
            c = image[::2, 1::2]
            d = image[1::2, 1::2]
            expected = 0.25 * (a + b + c + d)
            if reduce_r:
                expected[-1, :] *= 2
            if reduce_c:
                expected[:, -1] *= 2
            actual = np.array(vfu.downsample_scalar_field_2d(image[:nr, :nc]))
            assert_array_almost_equal(expected, actual)


@set_random_number_generator(2115556)
def test_downsample_displacement_field_2d(rng):
    size = 32
    sh = (size, size, 2)
    for reduce_r in [True, False]:
        nr = size - 1 if reduce_r else size
        for reduce_c in [True, False]:
            nc = size - 1 if reduce_c else size
            field = np.empty((size, size, 2), dtype=floating)
            field[...]
= rng.integers(0, 10, np.size(field)).reshape(sh) if reduce_r: field[-1, :, :] = 0 if reduce_c: field[:, -1, :] = 0 a = field[::2, ::2, :] b = field[1::2, ::2, :] c = field[::2, 1::2, :] d = field[1::2, 1::2, :] expected = 0.25 * (a + b + c + d) if reduce_r: expected[-1, :, :] *= 2 if reduce_c: expected[:, -1, :] *= 2 actual = vfu.downsample_displacement_field_2d(field[:nr, :nc, :]) assert_array_almost_equal(expected, actual) @set_random_number_generator(8315759) def test_downsample_scalar_field_3d(rng): size = 32 sh = (size, size, size) for reduce_s in [True, False]: ns = size - 1 if reduce_s else size for reduce_r in [True, False]: nr = size - 1 if reduce_r else size for reduce_c in [True, False]: nc = size - 1 if reduce_c else size image = np.empty((size, size, size), dtype=floating) image[...] = rng.integers(0, 10, np.size(image)).reshape(sh) if reduce_s: image[-1, :, :] = 0 if reduce_r: image[:, -1, :] = 0 if reduce_c: image[:, :, -1] = 0 a = image[::2, ::2, ::2] b = image[1::2, ::2, ::2] c = image[::2, 1::2, ::2] d = image[1::2, 1::2, ::2] aa = image[::2, ::2, 1::2] bb = image[1::2, ::2, 1::2] cc = image[::2, 1::2, 1::2] dd = image[1::2, 1::2, 1::2] expected = 0.125 * (a + b + c + d + aa + bb + cc + dd) if reduce_s: expected[-1, :, :] *= 2 if reduce_r: expected[:, -1, :] *= 2 if reduce_c: expected[:, :, -1] *= 2 actual = vfu.downsample_scalar_field_3d(image[:ns, :nr, :nc]) assert_array_almost_equal(expected, actual) @set_random_number_generator(8315759) def test_downsample_displacement_field_3d(rng): size = 32 sh = (size, size, size, 3) for reduce_s in [True, False]: ns = size - 1 if reduce_s else size for reduce_r in [True, False]: nr = size - 1 if reduce_r else size for reduce_c in [True, False]: nc = size - 1 if reduce_c else size field = np.empty((size, size, size, 3), dtype=floating) field[...] 
= rng.integers(0, 10, np.size(field)).reshape(sh) if reduce_s: field[-1, :, :] = 0 if reduce_r: field[:, -1, :] = 0 if reduce_c: field[:, :, -1] = 0 a = field[::2, ::2, ::2, :] b = field[1::2, ::2, ::2, :] c = field[::2, 1::2, ::2, :] d = field[1::2, 1::2, ::2, :] aa = field[::2, ::2, 1::2, :] bb = field[1::2, ::2, 1::2, :] cc = field[::2, 1::2, 1::2, :] dd = field[1::2, 1::2, 1::2, :] expected = 0.125 * (a + b + c + d + aa + bb + cc + dd) if reduce_s: expected[-1, :, :, :] *= 2 if reduce_r: expected[:, -1, :, :] *= 2 if reduce_c: expected[:, :, -1, :] *= 2 actual = vfu.downsample_displacement_field_3d(field[:ns, :nr, :nc]) assert_array_almost_equal(expected, actual) def test_reorient_vector_field_2d(): shape = (16, 16) d, dinv = vfu.create_harmonic_fields_2d(shape[0], shape[1], 0.2, 4) d = np.array(d, dtype=floating) # the vector field rotated 90 degrees expected = np.empty(shape=shape + (2,), dtype=floating) expected[..., 0] = -1 * d[..., 1] expected[..., 1] = d[..., 0] # rotate 45 degrees twice c = np.sqrt(0.5) affine = np.array([[c, -c, 0.0], [c, c, 0.0]]) vfu.reorient_vector_field_2d(d, affine) vfu.reorient_vector_field_2d(d, affine) # verify almost equal assert_array_almost_equal(d, expected) # Test exception is raised when the affine transform matrix is not valid invalid = np.zeros((2, 2), dtype=np.float64) assert_raises(ValueError, vfu.reorient_vector_field_2d, d, invalid) def test_reorient_vector_field_3d(): sh = (16, 16, 16) d, dinv = vfu.create_harmonic_fields_3d(sh[0], sh[1], sh[2], 0.2, 4) d = np.array(d, dtype=floating) dinv = np.array(dinv, dtype=floating) # the vector field rotated 90 degrees around the last axis expected = np.empty(shape=sh + (3,), dtype=floating) expected[..., 0] = -1 * d[..., 1] expected[..., 1] = d[..., 0] expected[..., 2] = d[..., 2] # rotate 45 degrees twice around the last axis c = np.sqrt(0.5) affine = np.array([[c, -c, 0, 0], [c, c, 0, 0], [0, 0, 1, 0]]) vfu.reorient_vector_field_3d(d, affine) vfu.reorient_vector_field_3d(d, affine) # verify almost equal assert_array_almost_equal(d, expected) # the vector field rotated 90 degrees around the first axis expected[..., 0] = dinv[..., 0] expected[..., 1] = -1 * dinv[..., 2] expected[..., 2] = dinv[..., 1] # rotate 45 degrees twice around the first axis affine = np.array([[1, 0, 0, 0], [0, c, -c, 0], [0, c, c, 0]]) vfu.reorient_vector_field_3d(dinv, affine) vfu.reorient_vector_field_3d(dinv, affine) # verify almost equal assert_array_almost_equal(dinv, expected) # Test exception is raised when the affine transform matrix is not valid invalid = np.zeros((3, 3), dtype=np.float64) assert_raises(ValueError, vfu.reorient_vector_field_3d, d, invalid) @set_random_number_generator(1134781) def test_reorient_random_vector_fields(rng): # Test reorienting vector field for n_dims, func in ( (2, vfu.reorient_vector_field_2d), (3, vfu.reorient_vector_field_3d), ): size = [20, 30, 40][:n_dims] + [n_dims] arr = rng.normal(size=size) arr_32 = arr.astype(floating) affine = from_matvec(rng.normal(size=(n_dims, n_dims)), np.zeros(n_dims)) func(arr_32, affine) assert_almost_equal(arr_32, apply_affine(affine, arr), 6) # Reorient reorients without translation trans = np.arange(n_dims) + 2 affine[:-1, -1] = trans arr_32 = arr.astype(floating) func(arr_32, affine) assert_almost_equal(arr_32, apply_affine(affine, arr) - trans, 6) # Test exception is raised when the affine transform is not valid invalid = np.eye(n_dims) assert_raises(ValueError, func, arr_32, invalid) @set_random_number_generator(3921116) def 
test_gradient_2d(rng): sh = (25, 32) # Create grid coordinates x_0 = np.asarray(range(sh[0])) x_1 = np.asarray(range(sh[1])) X = np.empty(sh + (3,), dtype=np.float64) _O = np.ones(sh) X[..., 0] = x_0[:, None] * _O X[..., 1] = x_1[None, :] * _O X[..., 2] = 1 transform = regtransforms[("RIGID", 2)] theta = np.array([0.1, 5.0, 2.5]) T = transform.param_to_matrix(theta) TX = X.dot(T.T) # Eval an arbitrary (known) function at TX # f(x, y) = ax^2 + bxy + cy^{2} # df/dx = 2ax + by # df/dy = 2cy + bx a = 2e-3 b = 5e-3 c = 7e-3 img = a * TX[..., 0] ** 2 + b * TX[..., 0] * TX[..., 1] + c * TX[..., 1] ** 2 img = img.astype(floating) # img is an image sampled at X with grid-to-space transform T # Test sparse gradient: choose some sample points (in space) sample = sample_domain_regular(20, np.array(sh, dtype=np.int32), T, rng=rng) sample = np.array(sample) # Compute the analytical gradient at all points expected = np.empty((sample.shape[0], 2), dtype=floating) expected[..., 0] = 2 * a * sample[:, 0] + b * sample[:, 1] expected[..., 1] = 2 * c * sample[:, 1] + b * sample[:, 0] # Get the numerical gradient with the implementation under test sp_to_grid = np.linalg.inv(T) img_spacing = np.ones(2) actual, inside = vfu.sparse_gradient(img, sp_to_grid, img_spacing, sample) diff = np.abs(expected - actual).mean(1) * inside # The finite differences are really not accurate, especially with float32 assert_equal(diff.max() < 1e-3, True) # Verify exception is raised when passing invalid affine or spacings invalid_affine = np.eye(2) invalid_spacings = np.ones(1) assert_raises( ValueError, vfu.sparse_gradient, img, invalid_affine, img_spacing, sample ) assert_raises( ValueError, vfu.sparse_gradient, img, sp_to_grid, invalid_spacings, sample ) # Test dense gradient # Compute the analytical gradient at all points expected = np.empty(sh + (2,), dtype=floating) expected[..., 0] = 2 * a * TX[..., 0] + b * TX[..., 1] expected[..., 1] = 2 * c * TX[..., 1] + b * TX[..., 0] # Get the numerical gradient with the implementation under test sp_to_grid = np.linalg.inv(T) img_spacing = np.ones(2) actual, inside = vfu.gradient(img, sp_to_grid, img_spacing, sh, T) diff = np.abs(expected - actual).mean(2) * inside # In the dense case, we are evaluating at the exact points (sample points # are not slightly moved like in the sparse case) so we have more precision assert_equal(diff.max() < 1e-5, True) # Verify exception is raised when passing invalid affine or spacings assert_raises(ValueError, vfu.gradient, img, invalid_affine, img_spacing, sh, T) assert_raises( ValueError, vfu.gradient, img, sp_to_grid, img_spacing, sh, invalid_affine ) assert_raises(ValueError, vfu.gradient, img, sp_to_grid, invalid_spacings, sh, T) @set_random_number_generator(3921116) def test_gradient_3d(rng): shape = (25, 32, 15) # Create grid coordinates x_0 = np.asarray(range(shape[0])) x_1 = np.asarray(range(shape[1])) x_2 = np.asarray(range(shape[2])) X = np.zeros(shape + (4,), dtype=np.float64) _O = np.ones(shape) X[..., 0] = x_0[:, None, None] * _O X[..., 1] = x_1[None, :, None] * _O X[..., 2] = x_2[None, None, :] * _O X[..., 3] = 1 transform = regtransforms[("RIGID", 3)] theta = np.array([0.1, 0.05, 0.12, -12.0, -15.5, -7.2]) T = transform.param_to_matrix(theta) TX = X.dot(T.T) # Eval an arbitrary (known) function at TX # f(x, y, z) = ax^2 + by^2 + cz^2 + dxy + exz + fyz # df/dx = 2ax + dy + ez # df/dy = 2by + dx + fz # df/dz = 2cz + ex + fy a, b, c = 2e-3, 3e-3, 1e-3 d, e, f = 1e-3, 2e-3, 3e-3 img = ( a * TX[..., 0] ** 2 + b * TX[..., 1] ** 2 + c * TX[..., 
2] ** 2 + d * TX[..., 0] * TX[..., 1] + e * TX[..., 0] * TX[..., 2] + f * TX[..., 1] * TX[..., 2] ) img = img.astype(floating) # Test sparse gradient: choose some sample points (in space) sample = sample_domain_regular(100, np.array(shape, dtype=np.int32), T, rng=rng) sample = np.array(sample) # Compute the analytical gradient at all points expected = np.empty((sample.shape[0], 3), dtype=floating) expected[..., 0] = 2 * a * sample[:, 0] + d * sample[:, 1] + e * sample[:, 2] expected[..., 1] = 2 * b * sample[:, 1] + d * sample[:, 0] + f * sample[:, 2] expected[..., 2] = 2 * c * sample[:, 2] + e * sample[:, 0] + f * sample[:, 1] # Get the numerical gradient with the implementation under test sp_to_grid = np.linalg.inv(T) img_spacing = np.ones(3) actual, inside = vfu.sparse_gradient(img, sp_to_grid, img_spacing, sample) # Discard points outside the image domain diff = np.abs(expected - actual).mean(1) * inside # The finite differences are really not accurate, especially with float32 assert_equal(diff.max() < 1e-3, True) # Verify exception is raised when passing invalid affine or spacings invalid_affine = np.eye(3) invalid_spacings = np.ones(2) assert_raises( ValueError, vfu.sparse_gradient, img, invalid_affine, img_spacing, sample ) assert_raises( ValueError, vfu.sparse_gradient, img, sp_to_grid, invalid_spacings, sample ) # Test dense gradient # Compute the analytical gradient at all points expected = np.empty(shape + (3,), dtype=floating) expected[..., 0] = 2 * a * TX[..., 0] + d * TX[..., 1] + e * TX[..., 2] expected[..., 1] = 2 * b * TX[..., 1] + d * TX[..., 0] + f * TX[..., 2] expected[..., 2] = 2 * c * TX[..., 2] + e * TX[..., 0] + f * TX[..., 1] # Get the numerical gradient with the implementation under test sp_to_grid = np.linalg.inv(T) img_spacing = np.ones(3) actual, inside = vfu.gradient(img, sp_to_grid, img_spacing, shape, T) diff = np.abs(expected - actual).mean(3) * inside # In the dense case, we are evaluating at the exact points (sample points # are not slightly moved like in the sparse case) so we have more precision assert_equal(diff.max() < 1e-5, True) # Verify exception is raised when passing invalid affine or spacings assert_raises(ValueError, vfu.gradient, img, invalid_affine, img_spacing, shape, T) assert_raises( ValueError, vfu.gradient, img, sp_to_grid, img_spacing, shape, invalid_affine ) assert_raises(ValueError, vfu.gradient, img, sp_to_grid, invalid_spacings, shape, T) dipy-1.11.0/dipy/align/tests/test_whole_brain_slr.py000066400000000000000000000061471476546756600225640ustar00rootroot00000000000000import numpy as np from numpy.testing import assert_array_almost_equal, assert_equal, assert_raises from dipy.align.streamlinear import ( compose_matrix44, decompose_matrix44, slr_with_qbx, transform_streamlines, whole_brain_slr, ) from dipy.data import get_fnames from dipy.io.streamline import load_tractogram from dipy.tracking.distances import bundles_distances_mam from dipy.tracking.streamline import Streamlines def test_whole_brain_slr(): fname = get_fnames(name="fornix") fornix = load_tractogram(fname, "same", bbox_valid_check=False).streamlines f = Streamlines(fornix) f1 = f.copy() f2 = f.copy() # check translation f2._data += np.array([50, 0, 0]) moved, transform, qb_centroids1, qb_centroids2 = whole_brain_slr( f1, f2, x0="affine", verbose=True, rm_small_clusters=2, greater_than=0, less_than=np.inf, qbx_thr=[5, 2, 1], progressive=False, ) # we can check the quality of registration by comparing the matrices # MAM streamline distances before and after SLR D12 = 
bundles_distances_mam(f1, f2)
    D1M = bundles_distances_mam(f1, moved)

    d12_minsum = np.sum(np.min(D12, axis=0))
    d1m_minsum = np.sum(np.min(D1M, axis=0))

    print(f"distances= {d12_minsum} {d1m_minsum}")

    assert_equal(d1m_minsum < d12_minsum, True)

    assert_array_almost_equal(transform[:3, 3], [-50, -0, -0], 2)

    # check rotation
    mat = compose_matrix44([0, 0, 0, 15, 0, 0])

    f3 = f.copy()
    f3 = transform_streamlines(f3, mat)

    moved, transform, qb_centroids1, qb_centroids2 = slr_with_qbx(
        f1,
        f3,
        verbose=False,
        rm_small_clusters=1,
        greater_than=20,
        less_than=np.inf,
        qbx_thr=[2],
        progressive=True,
    )

    # we can also check the quality by looking at the decomposed transform
    assert_array_almost_equal(decompose_matrix44(transform)[3], -15, 2)

    moved, transform, qb_centroids1, qb_centroids2 = slr_with_qbx(
        f1,
        f3,
        verbose=False,
        rm_small_clusters=1,
        select_random=400,
        greater_than=20,
        less_than=np.inf,
        qbx_thr=[2],
        progressive=True,
    )

    # we can also check the quality by looking at the decomposed transform
    assert_array_almost_equal(decompose_matrix44(transform)[3], -15, 2)


def test_slr_one_streamline():
    fname = get_fnames(name="fornix")
    fornix = load_tractogram(fname, "same", bbox_valid_check=False).streamlines
    f = Streamlines(fornix)
    f1_one = Streamlines([f[0]])
    f2 = f.copy()
    f2._data += np.array([50, 0, 0])

    assert_raises(
        ValueError,
        slr_with_qbx,
        f1_one,
        f2,
        verbose=False,
        rm_small_clusters=50,
        greater_than=20,
        less_than=np.inf,
        qbx_thr=[2],
        progressive=True,
    )

    assert_raises(
        ValueError,
        slr_with_qbx,
        f2,
        f1_one,
        verbose=False,
        rm_small_clusters=50,
        greater_than=20,
        less_than=np.inf,
        qbx_thr=[2],
        progressive=True,
    )
dipy-1.11.0/dipy/align/transforms.pxd000066400000000000000000000005151476546756600175440ustar00rootroot00000000000000cdef class Transform:
    cdef:
        int number_of_parameters
        int dim

    cdef int _jacobian(self, double[:] theta, double[:] x, double[:, :] J) noexcept nogil

    cdef void _get_identity_parameters(self, double[:] theta) noexcept nogil

    cdef void _param_to_matrix(self, double[:] theta, double[:, :] T) noexcept nogil
dipy-1.11.0/dipy/align/transforms.pyx000066400000000000000000001521511476546756600175750ustar00rootroot00000000000000#!python
#cython: boundscheck=False
#cython: wraparound=False
#cython: cdivision=True

import numpy as np
cimport numpy as cnp

cdef extern from "dpy_math.h" nogil:
    double cos(double)
    double sin(double)
    double log(double)


cdef class Transform:
    r"""
    Base class (contract) for all transforms for affine image registration

    Each transform must define the following (fast, nogil) methods:

    1. _jacobian(theta, x, J): receives a parameter vector theta, a point
       x, and a matrix J with shape (dim, len(theta)). It must write in J
       the Jacobian of the transform with parameters theta, evaluated at x.
    2. _get_identity_parameters(theta): receives a vector theta whose
       length is the number of parameters of the transform and sets in
       theta the values that define the identity transform.
    3. _param_to_matrix(theta, T): receives a parameter vector theta and a
       matrix T of shape (dim + 1, dim + 1), and writes in T the matrix
       representation of the transform with parameters theta.

    This base class defines the (slow, convenient) python wrappers for
    each of the above functions, which also do parameter checking and
    raise a ValueError in case the provided parameters are invalid.
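    A minimal usage sketch, using the TranslationTransform2D defined
    below (a sketch only; the other subclasses work the same way):

        import numpy as np
        t = TranslationTransform2D()
        theta = t.get_identity_parameters()    # array([0., 0.])
        T = t.param_to_matrix(theta)           # the 3 x 3 identity matrix
        J = t.jacobian(theta, np.zeros(2))     # constant 2 x 2 identity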
""" def __cinit__(self): r""" Default constructor Sets transform dimension and number of parameter to invalid values (-1) """ self.dim = -1 self.number_of_parameters = -1 cdef int _jacobian(self, double[:] theta, double[:] x, double[:, :] J) noexcept nogil: return -1 cdef void _get_identity_parameters(self, double[:] theta) noexcept nogil: return cdef void _param_to_matrix(self, double[:] theta, double[:, :] T) noexcept nogil: return def jacobian(self, double[:] theta, double[:] x): r""" Jacobian function of this transform Parameters ---------- theta : array, shape (n,) vector containing the n parameters of this transform x : array, shape (dim,) vector containing the point where the Jacobian must be evaluated Returns ------- J : array, shape (dim, n) Jacobian matrix of the transform with parameters theta at point x """ n = theta.shape[0] if n != self.number_of_parameters: raise ValueError("Invalid number of parameters: %d"%(n,)) m = x.shape[0] if m < self.dim: raise ValueError("Invalid point dimension: %d"%(m,)) J = np.zeros((self.dim, n)) ret = self._jacobian(theta, x, J) return np.asarray(J) def get_identity_parameters(self): r""" Parameter values corresponding to the identity transform Returns ------- theta : array, shape (n,) the n parameter values corresponding to the identity transform """ if self.number_of_parameters < 0: raise ValueError("Invalid transform.") theta = np.zeros(self.number_of_parameters) self._get_identity_parameters(theta) return np.asarray(theta) def param_to_matrix(self, double[:] theta): r""" Matrix representation of this transform with the given parameters Parameters ---------- theta : array, shape (n,) the parameter values of the transform Returns ------- T : array, shape (dim + 1, dim + 1) the matrix representation of this transform with parameters theta """ n = len(theta) if n != self.number_of_parameters: raise ValueError("Invalid number of parameters: %d"%(n,)) T = np.eye(self.dim + 1) self._param_to_matrix(theta, T) return np.asarray(T) def get_number_of_parameters(self): return self.number_of_parameters def get_dim(self): return self.dim cdef class TranslationTransform2D(Transform): def __init__(self): r""" Translation transform in 2D """ self.dim = 2 self.number_of_parameters = 2 cdef int _jacobian(self, double[:] theta, double[:] x, double[:, :] J) noexcept nogil: r""" Jacobian matrix of the 2D translation transform The transformation is given by: T(x) = (T1(x), T2(x)) = (x0 + t0, x1 + t1) The derivative w.r.t. t1 and t2 is given by T'(x) = [[1, 0], # derivatives of [T1, T2] w.r.t. t0 [0, 1]] # derivatives of [T1, T2] w.r.t. 
t1 Parameters ---------- theta : array, shape (2,) the parameters of the 2D translation transform (the Jacobian does not depend on the parameters, but we receive the buffer so all Jacobian functions receive the same parameters) x : array, shape (2,) the point at which to compute the Jacobian (the Jacobian does not depend on x, but we receive the buffer so all Jacobian functions receive the same parameters) J : array, shape (2, 2) the buffer in which to write the Jacobian Returns ------- is_constant : int always returns 1, indicating that the Jacobian is constant (independent of x) """ J[0, 0], J[0, 1] = 1.0, 0.0 J[1, 0], J[1, 1] = 0.0, 1.0 # This Jacobian does not depend on x (it's constant): return 1 return 1 cdef void _get_identity_parameters(self, double[:] theta) noexcept nogil: r""" Parameter values corresponding to the identity Sets in theta the parameter values corresponding to the identity transform Parameters ---------- theta : array, shape (2,) buffer to write the parameters of the 2D translation transform """ theta[:2] = 0 cdef void _param_to_matrix(self, double[:] theta, double[:, :] R) noexcept nogil: r""" Matrix associated with the 2D translation transform Parameters ---------- theta : array, shape (2,) the parameters of the 2D translation transform R : array, shape (3, 3) the buffer in which to write the translation matrix """ R[0, 0], R[0, 1], R[0, 2] = 1, 0, theta[0] R[1, 0], R[1, 1], R[1, 2] = 0, 1, theta[1] R[2, 0], R[2, 1], R[2, 2] = 0, 0, 1 cdef class TranslationTransform3D(Transform): def __init__(self): r""" Translation transform in 3D """ self.dim = 3 self.number_of_parameters = 3 cdef int _jacobian(self, double[:] theta, double[:] x, double[:, :] J) noexcept nogil: r""" Jacobian matrix of the 3D translation transform The transformation is given by: T(x) = (T1(x), T2(x), T3(x)) = (x0 + t0, x1 + t1, x2 + t2) The derivative w.r.t. t1, t2 and t3 is given by T'(x) = [[1, 0, 0], # derivatives of [T1, T2, T3] w.r.t. t0 [0, 1, 0], # derivatives of [T1, T2, T3] w.r.t. t1 [0, 0, 1]] # derivatives of [T1, T2, T3] w.r.t. 
t2 Parameters ---------- theta : array, shape (3,) the parameters of the 3D translation transform (the Jacobian does not depend on the parameters, but we receive the buffer so all Jacobian functions receive the same parameters) x : array, shape (3,) the point at which to compute the Jacobian (the Jacobian does not depend on x, but we receive the buffer so all Jacobian functions receive the same parameters) J : array, shape (3, 3) the buffer in which to write the Jacobian Returns ------- is_constant : int always returns 1, indicating that the Jacobian is constant (independent of x) """ J[0, 0], J[0, 1], J[0, 2] = 1.0, 0.0, 0.0 J[1, 0], J[1, 1], J[1, 2] = 0.0, 1.0, 0.0 J[2, 0], J[2, 1], J[2, 2] = 0.0, 0.0, 1.0 # This Jacobian does not depend on x (it's constant): return 1 return 1 cdef void _get_identity_parameters(self, double[:] theta) noexcept nogil: r""" Parameter values corresponding to the identity Sets in theta the parameter values corresponding to the identity transform Parameters ---------- theta : array, shape (3,) buffer to write the parameters of the 3D translation transform """ theta[:3] = 0 cdef void _param_to_matrix(self, double[:] theta, double[:, :] R) noexcept nogil: r""" Matrix associated with the 3D translation transform Parameters ---------- theta : array, shape (3,) the parameters of the 3D translation transform R : array, shape (4, 4) the buffer in which to write the translation matrix """ R[0, 0], R[0, 1], R[0, 2], R[0, 3] = 1, 0, 0, theta[0] R[1, 0], R[1, 1], R[1, 2], R[1, 3] = 0, 1, 0, theta[1] R[2, 0], R[2, 1], R[2, 2], R[2, 3] = 0, 0, 1, theta[2] R[3, 0], R[3, 1], R[3, 2], R[3, 3] = 0, 0, 0, 1 cdef class RotationTransform2D(Transform): def __init__(self): r""" Rotation transform in 2D """ self.dim = 2 self.number_of_parameters = 1 cdef int _jacobian(self, double[:] theta, double[:] x, double[:, :] J) noexcept nogil: r""" Jacobian matrix of a 2D rotation with parameter theta, at x The transformation is given by: T(x,y) = (T1(x,y), T2(x,y)) = (x cost - y sint, x sint + y cost) The derivatives w.r.t. the rotation angle, t, are: T'(x,y) = [-x sint - y cost, # derivative of T1 w.r.t. t x cost - y sint] # derivative of T2 w.r.t. 
t Parameters ---------- theta : array, shape (1,) the rotation angle x : array, shape (2,) the point at which to compute the Jacobian J : array, shape (2, 1) the buffer in which to write the Jacobian Returns ------- is_constant : int always returns 0, indicating that the Jacobian is not constant (it depends on the value of x) """ cdef: double st = sin(theta[0]) double ct = cos(theta[0]) double px = x[0], py = x[1] J[0, 0] = -px * st - py * ct J[1, 0] = px * ct - py * st # This Jacobian depends on x (it's not constant): return 0 return 0 cdef void _get_identity_parameters(self, double[:] theta) noexcept nogil: r""" Parameter values corresponding to the identity Sets in theta the parameter values corresponding to the identity transform Parameters ---------- theta : array, shape (1,) buffer to write the parameters of the 2D rotation transform """ theta[0] = 0 cdef void _param_to_matrix(self, double[:] theta, double[:, :] R) noexcept nogil: r""" Matrix associated with the 2D rotation transform Parameters ---------- theta : array, shape (1,) the rotation angle R : array, shape (3,3) the buffer in which to write the matrix """ cdef: double ct = cos(theta[0]) double st = sin(theta[0]) R[0, 0], R[0, 1], R[0, 2] = ct, -st, 0 R[1, 0], R[1, 1], R[1, 2] = st, ct, 0 R[2, 0], R[2, 1], R[2, 2] = 0, 0, 1 cdef class RotationTransform3D(Transform): def __init__(self): r""" Rotation transform in 3D """ self.dim = 3 self.number_of_parameters = 3 cdef int _jacobian(self, double[:] theta, double[:] x, double[:, :] J) noexcept nogil: r""" Jacobian matrix of a 3D rotation with parameters theta, at x Parameters ---------- theta : array, shape (3,) the rotation angles about the canonical axes x : array, shape (3,) the point at which to compute the Jacobian J : array, shape (3, 3) the buffer in which to write the Jacobian Returns ------- is_constant : int always returns 0, indicating that the Jacobian is not constant (it depends on the value of x) """ cdef: double sa = sin(theta[0]) double ca = cos(theta[0]) double sb = sin(theta[1]) double cb = cos(theta[1]) double sc = sin(theta[2]) double cc = cos(theta[2]) double px = x[0], py = x[1], z = x[2] J[0, 0] = (-sc * ca * sb) * px + (sc * sa) * py + (sc * ca * cb) * z J[1, 0] = (cc * ca * sb) * px + (-cc * sa) * py + (-cc * ca * cb) * z J[2, 0] = (sa * sb) * px + ca * py + (-sa * cb) * z J[0, 1] = (-cc * sb - sc * sa * cb) * px + (cc * cb - sc * sa * sb) * z J[1, 1] = (-sc * sb + cc * sa * cb) * px + (sc * cb + cc * sa * sb) * z J[2, 1] = (-ca * cb) * px + (-ca * sb) * z J[0, 2] = (-sc * cb - cc * sa * sb) * px + (-cc * ca) * py + \ (-sc * sb + cc * sa * cb) * z J[1, 2] = (cc * cb - sc * sa * sb) * px + (-sc * ca) * py + \ (cc * sb + sc * sa * cb) * z J[2, 2] = 0 # This Jacobian depends on x (it's not constant): return 0 return 0 cdef void _get_identity_parameters(self, double[:] theta) noexcept nogil: r""" Parameter values corresponding to the identity Sets in theta the parameter values corresponding to the identity transform Parameters ---------- theta : array, shape (3,) buffer to write the parameters of the 3D rotation transform """ theta[:3] = 0 cdef void _param_to_matrix(self, double[:] theta, double[:, :] R) noexcept nogil: r""" Matrix associated with the 3D rotation transform The matrix is the product of rotation matrices of angles theta[0], theta[1], theta[2] around axes x, y, z applied in the following order: y, x, z. This order was chosen for consistency with ANTS. 
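        In matrix form (acting on column vectors) the rotation block
        equals Rot_z(theta[2]) . Rot_x(theta[0]) . Rot_y(theta[1]), where
        Rot_* are the canonical rotation matrices about each axis. A quick
        numerical cross-check (a sketch, assuming only NumPy):

            import numpy as np
            a, b, c = 0.1, 0.2, 0.3
            Rx = np.array([[1, 0, 0],
                           [0, np.cos(a), -np.sin(a)],
                           [0, np.sin(a), np.cos(a)]])
            Ry = np.array([[np.cos(b), 0, np.sin(b)],
                           [0, 1, 0],
                           [-np.sin(b), 0, np.cos(b)]])
            Rz = np.array([[np.cos(c), -np.sin(c), 0],
                           [np.sin(c), np.cos(c), 0],
                           [0, 0, 1]])
            R = Rz.dot(Rx).dot(Ry)  # matches the upper-left 3 x 3 block below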
Parameters ---------- theta : array, shape (3,) the rotation angles about each axis: theta[0] : rotation angle around x axis theta[1] : rotation angle around y axis theta[2] : rotation angle around z axis R : array, shape (4, 4) buffer in which to write the rotation matrix """ cdef: double sa = sin(theta[0]) double ca = cos(theta[0]) double sb = sin(theta[1]) double cb = cos(theta[1]) double sc = sin(theta[2]) double cc = cos(theta[2]) R[0,0], R[0,1], R[0,2] = cc*cb-sc*sa*sb, -sc*ca, cc*sb+sc*sa*cb R[1,0], R[1,1], R[1,2] = sc*cb+cc*sa*sb, cc*ca, sc*sb-cc*sa*cb R[2,0], R[2,1], R[2,2] = -ca*sb, sa, ca*cb R[3,0], R[3,1], R[3,2] = 0, 0, 0 R[0, 3] = 0 R[1, 3] = 0 R[2, 3] = 0 R[3, 3] = 1 cdef class RigidTransform2D(Transform): def __init__(self): r""" Rigid transform in 2D (rotation + translation) The parameter vector theta of length 3 is interpreted as follows: theta[0] : rotation angle theta[1] : translation along the x axis theta[2] : translation along the y axis """ self.dim = 2 self.number_of_parameters = 3 cdef int _jacobian(self, double[:] theta, double[:] x, double[:, :] J) noexcept nogil: r""" Jacobian matrix of a 2D rigid transform (rotation + translation) The transformation is given by: T(x,y) = (T1(x,y), T2(x,y)) = (x cost - y sint + dx, x sint + y cost + dy) The derivatives w.r.t. t, dx and dy are: T'(x,y) = [-x sint - y cost, 1, 0, # derivative of T1 w.r.t. t, dx, dy x cost - y sint, 0, 1] # derivative of T2 w.r.t. t, dx, dy Parameters ---------- theta : array, shape (3,) the parameters of the 2D rigid transform theta[0] : rotation angle (t) theta[1] : translation along the x axis (dx) theta[2] : translation along the y axis (dy) x : array, shape (2,) the point at which to compute the Jacobian J : array, shape (2, 3) the buffer in which to write the Jacobian Returns ------- is_constant : int always returns 0, indicating that the Jacobian is not constant (it depends on the value of x) """ cdef: double st = sin(theta[0]) double ct = cos(theta[0]) double px = x[0], py = x[1] J[0, 0], J[0, 1], J[0, 2] = -px * st - py * ct, 1, 0 J[1, 0], J[1, 1], J[1, 2] = px * ct - py * st, 0, 1 # This Jacobian depends on x (it's not constant): return 0 return 0 cdef void _get_identity_parameters(self, double[:] theta) noexcept nogil: r""" Parameter values corresponding to the identity Sets in theta the parameter values corresponding to the identity transform Parameters ---------- theta : array, shape (3,) buffer to write the parameters of the 2D rigid transform theta[0] : rotation angle theta[1] : translation along the x axis theta[2] : translation along the y axis """ theta[:3] = 0 cdef void _param_to_matrix(self, double[:] theta, double[:, :] R) noexcept nogil: r""" Matrix associated with the 2D rigid transform Parameters ---------- theta : array, shape (3,) the parameters of the 2D rigid transform theta[0] : rotation angle theta[1] : translation along the x axis theta[2] : translation along the y axis R : array, shape (3, 3) buffer in which to write the rigid matrix """ cdef: double ct = cos(theta[0]) double st = sin(theta[0]) R[0, 0], R[0, 1], R[0, 2] = ct, -st, theta[1] R[1, 0], R[1, 1], R[1, 2] = st, ct, theta[2] R[2, 0], R[2, 1], R[2, 2] = 0, 0, 1 cdef class RigidTransform3D(Transform): def __init__(self): r""" Rigid transform in 3D (rotation + translation) The parameter vector theta of length 6 is interpreted as follows: theta[0] : rotation about the x axis theta[1] : rotation about the y axis theta[2] : rotation about the z axis theta[3] : translation along the x axis theta[4] : translation 
along the y axis theta[5] : translation along the z axis """ self.dim = 3 self.number_of_parameters = 6 cdef int _jacobian(self, double[:] theta, double[:] x, double[:, :] J) noexcept nogil: r""" Jacobian matrix of a 3D rigid transform (rotation + translation) Parameters ---------- theta : array, shape (6,) the parameters of the 3D rigid transform theta[0] : rotation about the x axis theta[1] : rotation about the y axis theta[2] : rotation about the z axis theta[3] : translation along the x axis theta[4] : translation along the y axis theta[5] : translation along the z axis x : array, shape (3,) the point at which to compute the Jacobian J : array, shape (3, 6) the buffer in which to write the Jacobian Returns ------- is_constant : int always returns 0, indicating that the Jacobian is not constant (it depends on the value of x) """ cdef: double sa = sin(theta[0]) double ca = cos(theta[0]) double sb = sin(theta[1]) double cb = cos(theta[1]) double sc = sin(theta[2]) double cc = cos(theta[2]) double px = x[0], py = x[1], pz = x[2] J[0, 0] = (-sc * ca * sb) * px + (sc * sa) * py + (sc * ca * cb) * pz J[1, 0] = (cc * ca * sb) * px + (-cc * sa) * py + (-cc * ca * cb) * pz J[2, 0] = (sa * sb) * px + ca * py + (-sa * cb) * pz J[0, 1] = (-cc * sb - sc * sa * cb) * px + (cc * cb - sc * sa * sb) * pz J[1, 1] = (-sc * sb + cc * sa * cb) * px + (sc * cb + cc * sa * sb) * pz J[2, 1] = (-ca * cb) * px + (-ca * sb) * pz J[0, 2] = (-sc * cb - cc * sa * sb) * px + (-cc * ca) * py + \ (-sc * sb + cc * sa * cb) * pz J[1, 2] = (cc * cb - sc * sa * sb) * px + (-sc * ca) * py + \ (cc * sb + sc * sa * cb) * pz J[2, 2] = 0 J[0, 3:6] = 0 J[1, 3:6] = 0 J[2, 3:6] = 0 J[0, 3], J[1, 4], J[2, 5] = 1, 1, 1 # This Jacobian depends on x (it's not constant): return 0 return 0 cdef void _get_identity_parameters(self, double[:] theta) noexcept nogil: r""" Parameter values corresponding to the identity Sets in theta the parameter values corresponding to the identity transform Parameters ---------- theta : array, shape (6,) buffer to write the parameters of the 3D rigid transform theta[0] : rotation about the x axis theta[1] : rotation about the y axis theta[2] : rotation about the z axis theta[3] : translation along the x axis theta[4] : translation along the y axis theta[5] : translation along the z axis """ theta[:6] = 0 cdef void _param_to_matrix(self, double[:] theta, double[:, :] R) noexcept nogil: r""" Matrix associated with the 3D rigid transform Parameters ---------- theta : array, shape (6,) the parameters of the 3D rigid transform theta[0] : rotation about the x axis theta[1] : rotation about the y axis theta[2] : rotation about the z axis theta[3] : translation along the x axis theta[4] : translation along the y axis theta[5] : translation along the z axis R : array, shape (4, 4) buffer in which to write the rigid matrix """ cdef: double sa = sin(theta[0]) double ca = cos(theta[0]) double sb = sin(theta[1]) double cb = cos(theta[1]) double sc = sin(theta[2]) double cc = cos(theta[2]) double dx = theta[3] double dy = theta[4] double dz = theta[5] R[0,0], R[0,1], R[0,2] = cc*cb-sc*sa*sb, -sc*ca, cc*sb+sc*sa*cb R[1,0], R[1,1], R[1,2] = sc*cb+cc*sa*sb, cc*ca, sc*sb-cc*sa*cb R[2,0], R[2,1], R[2,2] = -ca*sb, sa, ca*cb R[3,0], R[3,1], R[3,2] = 0, 0, 0 R[0,3] = dx R[1,3] = dy R[2,3] = dz R[3,3] = 1 cdef class RigidIsoScalingTransform2D(Transform): def __init__(self): """ Rigid isoscaling transform in 2D. 
(rotation + translation + scaling) The parameter vector theta of length 4 is interpreted as follows: theta[0] : rotation angle theta[1] : translation along the x axis theta[2] : translation along the y axis theta[3] : isotropic scaling """ self.dim = 2 self.number_of_parameters = 4 cdef int _jacobian(self, double[:] theta, double[:] x, double[:, :] J) noexcept nogil: r""" Jacobian matrix of a 2D rigid isoscaling transform. The transformation (rotation + translation + isoscaling) is given by: T(x,y) = (T1(x,y), T2(x,y)) = (sx * x * cost - sx * y * sint + dx, sy * x * sint + sy * y * cost + dy) Parameters ---------- theta : array, shape (4,) the parameters of the 2D rigid isoscaling transform theta[0] : rotation angle (t) theta[1] : translation along the x axis (dx) theta[2] : translation along the y axis (dy) theta[3] : isotropic scaling (s) x : array, shape (2,) the point at which to compute the Jacobian J : array, shape (2, 4) the buffer in which to write the Jacobian Returns ------- is_constant : int always returns 0, indicating that the Jacobian is not constant (it depends on the value of x) """ cdef: double st = sin(theta[0]) double ct = cos(theta[0]) double px = x[0], py = x[1] double scale = theta[3] J[0, 0], J[0, 1], J[0, 2] = -px * scale * st - py * scale * ct, 1, 0 J[1, 0], J[1, 1], J[1, 2] = px * scale * ct - py * scale * st, 0, 1 J[0, 3] = px * ct - py * st J[1, 3] = px * st + py * ct # This Jacobian depends on x (it's not constant): return 0 return 0 cdef void _get_identity_parameters(self, double[:] theta) noexcept nogil: """ Parameter values corresponding to the identity Sets in theta the parameter values corresponding to the identity transform Parameters ---------- theta : array, shape (4,) buffer to write the parameters of the 2D isoscaling rigid transform theta[0] : rotation angle theta[1] : translation along the x axis theta[2] : translation along the y axis theta[3] : isotropic scaling """ theta[:3] = 0 theta[3] = 1 cdef void _param_to_matrix(self, double[:] theta, double[:, :] R) noexcept nogil: r""" Matrix associated with the 2D rigid isoscaling transform Parameters ---------- theta : array, shape (4,) the parameters of the 2D rigid isoscaling transform theta[0] : rotation angle theta[1] : translation along the x axis theta[2] : translation along the y axis theta[3] : isotropic scaling R : array, shape (3, 3) buffer in which to write the rigid isoscaling matrix """ cdef: double ct = cos(theta[0]) double st = sin(theta[0]) R[0, 0], R[0, 1], R[0, 2] = ct*theta[3], -st*theta[3], theta[1] R[1, 0], R[1, 1], R[1, 2] = st*theta[3], ct*theta[3], theta[2] R[2, 0], R[2, 1], R[2, 2] = 0, 0, 1 cdef class RigidIsoScalingTransform3D(Transform): def __init__(self): """Rigid isoscaling transform in 3D. (rotation + translation + scaling) The parameter vector theta of length 7 is interpreted as follows: theta[0] : rotation about the x axis theta[1] : rotation about the y axis theta[2] : rotation about the z axis theta[3] : translation along the x axis theta[4] : translation along the y axis theta[5] : translation along the z axis theta[6] : isotropic scaling """ self.dim = 3 self.number_of_parameters = 7 cdef int _jacobian(self, double[:] theta, double[:] x, double[:, :] J) noexcept nogil: r""" Jacobian matrix of a 3D isoscaling rigid transform. 
(rotation + translation + scaling) Parameters ---------- theta : array, shape (7,) the parameters of the 3D rigid isoscaling transform theta[0] : rotation about the x axis theta[1] : rotation about the y axis theta[2] : rotation about the z axis theta[3] : translation along the x axis theta[4] : translation along the y axis theta[5] : translation along the z axis theta[6] : isotropic scaling x : array, shape (3,) the point at which to compute the Jacobian J : array, shape (3, 7) the buffer in which to write the Jacobian Returns ------- is_constant : int always returns 0, indicating that the Jacobian is not constant (it depends on the value of x) """ cdef: double sa = sin(theta[0]) double ca = cos(theta[0]) double sb = sin(theta[1]) double cb = cos(theta[1]) double sc = sin(theta[2]) double cc = cos(theta[2]) double px = x[0], py = x[1], pz = x[2] double scale = theta[6] J[0, 0] = (-sc * ca * sb) * px * scale + (sc * sa) * py * scale + \ (sc * ca * cb) * pz * scale J[1, 0] = (cc * ca * sb) * px * scale + (-cc * sa) * py * scale + \ (-cc * ca * cb) * pz * scale J[2, 0] = (sa * sb) * px * scale + ca * py * scale + \ (-sa * cb) * pz * scale J[0, 1] = (-cc * sb - sc * sa * cb) * px * scale + \ (cc * cb - sc * sa * sb) * pz * scale J[1, 1] = (-sc * sb + cc * sa * cb) * px * scale + \ (sc * cb + cc * sa * sb) * pz * scale J[2, 1] = (-ca * cb) * px * scale + (-ca * sb) * pz * scale J[0, 2] = (-sc * cb - cc * sa * sb) * px * scale + \ (-cc * ca) * py * scale + \ (-sc * sb + cc * sa * cb) * pz * scale J[1, 2] = (cc * cb - sc * sa * sb) * px * scale + \ (-sc * ca) * py * scale + \ (cc * sb + sc * sa * cb) * pz * scale J[2, 2] = 0 J[0, 3:6] = 0 J[1, 3:6] = 0 J[2, 3:6] = 0 J[0, 3], J[1, 4], J[2, 5] = 1, 1, 1 J[0, 6] = (cc*cb-sc*sa*sb) * px - sc*ca*py + (cc*sb+sc*sa*cb) * pz J[1, 6] = (sc*cb+cc*sa*sb) * px + cc*ca*py + (sc*sb-cc*sa*cb) * pz J[2, 6] = -ca*sb*px + sa*py + ca*cb*pz # This Jacobian depends on x (it's not constant): return 0 return 0 cdef void _get_identity_parameters(self, double[:] theta) noexcept nogil: """ Parameter values corresponding to the identity Sets in theta the parameter values corresponding to the identity transform Parameters ---------- theta : array, shape (7,) buffer to write the parameters of the 3D rigid isoscaling transform theta[0] : rotation about the x axis theta[1] : rotation about the y axis theta[2] : rotation about the z axis theta[3] : translation along the x axis theta[4] : translation along the y axis theta[5] : translation along the z axis theta[6] : isotropic scaling """ theta[:6] = 0 theta[6] = 1 cdef void _param_to_matrix(self, double[:] theta, double[:, :] R) noexcept nogil: """ Matrix associated with the 3D rigid isoscaling transform Parameters ---------- theta : array, shape (7,) the parameters of the 3D rigid isoscaling transform theta[0] : rotation about the x axis theta[1] : rotation about the y axis theta[2] : rotation about the z axis theta[3] : translation along the x axis theta[4] : translation along the y axis theta[5] : translation along the z axis theta[6] : isotropic scaling R : array, shape (4, 4) buffer in which to write the rigid isoscaling matrix """ cdef: double sa = sin(theta[0]) double ca = cos(theta[0]) double sb = sin(theta[1]) double cb = cos(theta[1]) double sc = sin(theta[2]) double cc = cos(theta[2]) double dx = theta[3] double dy = theta[4] double dz = theta[5] double sxyz = theta[6] R[0,0], R[0,1], R[0,2] = (cc*cb-sc*sa*sb)*sxyz, -sc*ca*sxyz, (cc*sb+sc*sa*cb)*sxyz R[1,0], R[1,1], R[1,2] = (sc*cb+cc*sa*sb)*sxyz, cc*ca*sxyz, 
(sc*sb-cc*sa*cb)*sxyz R[2,0], R[2,1], R[2,2] = -ca*sb*sxyz, sa*sxyz, ca*cb*sxyz R[3,0], R[3,1], R[3,2] = 0, 0, 0 R[0,3] = dx R[1,3] = dy R[2,3] = dz R[3,3] = 1 cdef class RigidScalingTransform2D(Transform): def __init__(self): """ Rigid scaling transform in 2D. (rotation + translation + scaling) The parameter vector theta of length 5 is interpreted as follows: theta[0] : rotation angle theta[1] : translation along the x axis theta[2] : translation along the y axis theta[3] : scaling along the x axis theta[4] : scaling along the y axis """ self.dim = 2 self.number_of_parameters = 5 cdef int _jacobian(self, double[:] theta, double[:] x, double[:, :] J) noexcept nogil: r""" Jacobian matrix of a 2D rigid scaling transform. The transformation (rotation + translation + scaling) is given by: T(x,y) = (T1(x,y), T2(x,y)) = (sx * x * cost - sx * y * sint + dx, sy * x * sint + sy * y * cost + dy) where t is the rotation angle, (dx, dy) the translation and (sx, sy) the per-axis scaling factors. Parameters ---------- theta : array, shape (5,) the parameters of the 2D rigid scaling transform theta[0] : rotation angle (t) theta[1] : translation along the x axis (dx) theta[2] : translation along the y axis (dy) theta[3] : scaling along the x axis theta[4] : scaling along the y axis x : array, shape (2,) the point at which to compute the Jacobian J : array, shape (2, 5) the buffer in which to write the Jacobian Returns ------- is_constant : int always returns 0, indicating that the Jacobian is not constant (it depends on the value of x) """ cdef: double st = sin(theta[0]) double ct = cos(theta[0]) double px = x[0], py = x[1] double fx = theta[3] double fy = theta[4] J[0, 0], J[0, 1], J[0, 2] = -px * fx * st - py * fx * ct, 1, 0 J[1, 0], J[1, 1], J[1, 2] = px * fy * ct - py * fy * st, 0, 1 J[0, 3], J[0, 4] = px * ct - py * st, 0 J[1, 3], J[1, 4] = 0, px * st + py * ct # This Jacobian depends on x (it's not constant): return 0 return 0 cdef void _get_identity_parameters(self, double[:] theta) noexcept nogil: """ Parameter values corresponding to the identity Sets in theta the parameter values corresponding to the identity transform Parameters ---------- theta : array, shape (5,) buffer to write the parameters of the 2D rigid scaling transform theta[0] : rotation angle theta[1] : translation along the x axis theta[2] : translation along the y axis theta[3] : scaling along the x axis theta[4] : scaling along the y axis """ theta[:3] = 0 theta[3], theta[4] = 1, 1 cdef void _param_to_matrix(self, double[:] theta, double[:, :] R) noexcept nogil: r""" Matrix associated with the 2D rigid scaling transform Parameters ---------- theta : array, shape (5,) the parameters of the 2D rigid scaling transform theta[0] : rotation angle theta[1] : translation along the x axis theta[2] : translation along the y axis theta[3] : scaling along the x axis theta[4] : scaling along the y axis R : array, shape (3, 3) buffer in which to write the rigid scaling matrix """ cdef: double ct = cos(theta[0]) double st = sin(theta[0]) R[0, 0], R[0, 1], R[0, 2] = ct*theta[3], -st*theta[3], theta[1] R[1, 0], R[1, 1], R[1, 2] = st*theta[4], ct*theta[4], theta[2] R[2, 0], R[2, 1], R[2, 2] = 0, 0, 1 cdef class RigidScalingTransform3D(Transform): def __init__(self): """Rigid scaling transform in 3D (rotation + translation + scaling).
The parameter vector theta of length 9 is interpreted as follows: theta[0] : rotation about the x axis theta[1] : rotation about the y axis theta[2] : rotation about the z axis theta[3] : translation along the x axis theta[4] : translation along the y axis theta[5] : translation along the z axis theta[6] : scaling in the x axis theta[7] : scaling in the y axis theta[8] : scaling in the z axis """ self.dim = 3 self.number_of_parameters = 9 cdef int _jacobian(self, double[:] theta, double[:] x, double[:, :] J) noexcept nogil: """Jacobian matrix of a 3D rigid scaling transform. (rotation + translation + scaling) Parameters ---------- theta : array, shape (9,) the parameters of the 3D rigid transform theta[0] : rotation about the x axis theta[1] : rotation about the y axis theta[2] : rotation about the z axis theta[3] : translation along the x axis theta[4] : translation along the y axis theta[5] : translation along the z axis theta[6] : scaling in the x axis theta[7] : scaling in the y axis theta[8] : scaling in the z axis x : array, shape (3,) the point at which to compute the Jacobian J : array, shape (3, 9) the buffer in which to write the Jacobian Returns ------- is_constant : int always returns 0, indicating that the Jacobian is not constant (it depends on the value of x) """ cdef: double sa = sin(theta[0]) double ca = cos(theta[0]) double sb = sin(theta[1]) double cb = cos(theta[1]) double sc = sin(theta[2]) double cc = cos(theta[2]) double px = x[0], py = x[1], pz = x[2] double fx = theta[6] double fy = theta[7] double fz = theta[8] J[0, 0] = (-sc * ca * sb) * px * fx + (sc * sa) * py * fx + \ (sc * ca * cb) * pz * fx J[1, 0] = (cc * ca * sb) * px * fy + (-cc * sa) * py * fy + \ (-cc * ca * cb) * pz * fy J[2, 0] = (sa * sb) * px * fz + ca * py * fz + (-sa * cb) * pz * fz J[0, 1] = (-cc * sb - sc * sa * cb) * px * fx + \ (cc * cb - sc * sa * sb) * pz * fx J[1, 1] = (-sc * sb + cc * sa * cb) * px * fy + \ (sc * cb + cc * sa * sb) * pz * fy J[2, 1] = (-ca * cb) * px * fz + (-ca * sb) * pz * fz J[0, 2] = (-sc * cb - cc * sa * sb) * px * fx + (-cc * ca) * py * fx + \ (-sc * sb + cc * sa * cb) * pz * fx J[1, 2] = (cc * cb - sc * sa * sb) * px * fy + (-sc * ca) * py * fy + \ (cc * sb + sc * sa * cb) * pz * fy J[2, 2] = 0 J[0, 3:6] = 0 J[1, 3:6] = 0 J[2, 3:6] = 0 J[0, 3], J[1, 4], J[2, 5] = 1, 1, 1 J[0, 6] = (cc*cb-sc*sa*sb) * px - sc*ca*py + (cc*sb+sc*sa*cb) * pz J[1, 6], J[2, 6] = 0, 0 J[1, 7] = (sc*cb+cc*sa*sb) * px + cc*ca*py + (sc*sb-cc*sa*cb) * pz J[0, 7], J[2, 7] = 0, 0 J[0, 8], J[1, 8] = 0, 0 J[2, 8] = -ca*sb*px + sa*py + ca*cb*pz # This Jacobian depends on x (it's not constant): return 0 return 0 cdef void _get_identity_parameters(self, double[:] theta) noexcept nogil: r""" Parameter values corresponding to the identity Sets in theta the parameter values corresponding to the identity transform Parameters ---------- theta : array, shape (9,) buffer to write the parameters of the 3D rigid scaling transform theta[0] : rotation about the x axis theta[1] : rotation about the y axis theta[2] : rotation about the z axis theta[3] : translation along the x axis theta[4] : translation along the y axis theta[5] : translation along the z axis theta[6] : scaling in the x axis theta[7] : scaling in the y axis theta[8] : scaling in the z axis """ theta[:6] = 0 theta[6:9] = 1 cdef void _param_to_matrix(self, double[:] theta, double[:, :] R) noexcept nogil: r""" Matrix associated with the 3D rigid scaling transform Parameters ---------- theta : array, shape (9,) the parameters of the 3D rigid scaling 
transform theta[0] : rotation about the x axis theta[1] : rotation about the y axis theta[2] : rotation about the z axis theta[3] : translation along the x axis theta[4] : translation along the y axis theta[5] : translation along the z axis theta[6] : scaling in the x axis theta[7] : scaling in the y axis theta[8] : scaling in the z axis R : array, shape (4, 4) buffer in which to write the rigid matrix """ cdef: double sa = sin(theta[0]) double ca = cos(theta[0]) double sb = sin(theta[1]) double cb = cos(theta[1]) double sc = sin(theta[2]) double cc = cos(theta[2]) double dx = theta[3] double dy = theta[4] double dz = theta[5] double fx = theta[6] double fy = theta[7] double fz = theta[8] R[0,0], R[0,1], R[0,2] = (cc*cb-sc*sa*sb)*fx, -sc*ca*fx, (cc*sb+sc*sa*cb)*fx R[1,0], R[1,1], R[1,2] = (sc*cb+cc*sa*sb)*fy, cc*ca*fy, (sc*sb-cc*sa*cb)*fy R[2,0], R[2,1], R[2,2] = -ca*sb*fz, sa*fz, ca*cb*fz R[3,0], R[3,1], R[3,2] = 0, 0, 0 R[0,3] = dx R[1,3] = dy R[2,3] = dz R[3,3] = 1 cdef class ScalingTransform2D(Transform): def __init__(self): r""" Scaling transform in 2D """ self.dim = 2 self.number_of_parameters = 1 cdef int _jacobian(self, double[:] theta, double[:] x, double[:, :] J) noexcept nogil: r""" Jacobian matrix of the isotropic 2D scale transform The transformation is given by: T(x) = (s*x0, s*x1) The derivative w.r.t. s is T'(x) = [x0, x1] Parameters ---------- theta : array, shape (1,) the scale factor (the Jacobian does not depend on the scale factor, but we receive the buffer to make it consistent with other Jacobian functions) x : array, shape (2,) the point at which to compute the Jacobian J : array, shape (2, 1) the buffer in which to write the Jacobian Returns ------- is_constant : int always returns 0, indicating that the Jacobian is not constant (it depends on the value of x) """ J[0, 0], J[1, 0] = x[0], x[1] # This Jacobian depends on x (it's not constant): return 0 return 0 cdef void _get_identity_parameters(self, double[:] theta) noexcept nogil: r""" Parameter values corresponding to the identity Sets in theta the parameter values corresponding to the identity transform Parameters ---------- theta : array, shape (1,) buffer to write the parameters of the 2D scale transform """ theta[0] = 1 cdef void _param_to_matrix(self, double[:] theta, double[:, :] R) noexcept nogil: r""" Matrix associated with the 2D (isotropic) scaling transform Parameters ---------- theta : array, shape (1,) the scale factor R : array, shape (3, 3) the buffer in which to write the scaling matrix """ R[0, 0], R[0, 1], R[0, 2] = theta[0], 0, 0 R[1, 0], R[1, 1], R[1, 2] = 0, theta[0], 0 R[2, 0], R[2, 1], R[2, 2] = 0, 0, 1 cdef class ScalingTransform3D(Transform): def __init__(self): r""" Scaling transform in 3D """ self.dim = 3 self.number_of_parameters = 1 cdef int _jacobian(self, double[:] theta, double[:] x, double[:, :] J) noexcept nogil: r""" Jacobian matrix of the isotropic 3D scale transform The transformation is given by: T(x) = (s*x0, s*x1, s*x2) The derivative w.r.t. 
s is T'(x) = [x0, x1, x2] Parameters ---------- theta : array, shape (1,) the scale factor (the Jacobian does not depend on the scale factor, but we receive the buffer to make it consistent with other Jacobian functions) x : array, shape (3,) the point at which to compute the Jacobian J : array, shape (3, 1) the buffer in which to write the Jacobian Returns ------- is_constant : int always returns 0, indicating that the Jacobian is not constant (it depends on the value of x) """ J[0, 0], J[1, 0], J[2, 0] = x[0], x[1], x[2] # This Jacobian depends on x (it's not constant): return 0 return 0 cdef void _get_identity_parameters(self, double[:] theta) noexcept nogil: r""" Parameter values corresponding to the identity Sets in theta the parameter values corresponding to the identity transform Parameters ---------- theta : array, shape (1,) buffer to write the parameters of the 3D scale transform """ theta[0] = 1 cdef void _param_to_matrix(self, double[:] theta, double[:, :] R) noexcept nogil: r""" Matrix associated with the 3D (isotropic) scaling transform Parameters ---------- theta : array, shape (1,) the scale factor R : array, shape (4, 4) the buffer in which to write the scaling matrix """ R[0, 0], R[0, 1], R[0, 2], R[0, 3] = theta[0], 0, 0, 0 R[1, 0], R[1, 1], R[1, 2], R[1, 3] = 0, theta[0], 0, 0 R[2, 0], R[2, 1], R[2, 2], R[2, 3] = 0, 0, theta[0], 0 R[3, 0], R[3, 1], R[3, 2], R[3, 3] = 0, 0, 0, 1 cdef class AffineTransform2D(Transform): def __init__(self): r""" Affine transform in 2D """ self.dim = 2 self.number_of_parameters = 6 cdef int _jacobian(self, double[:] theta, double[:] x, double[:, :] J) noexcept nogil: r""" Jacobian matrix of the 2D affine transform The transformation is given by: T(x) = |a0, a1, a2 | |x0| | T1(x) | |a0*x0 + a1*x1 + a2| |a3, a4, a5 | * |x1| = | T2(x) | = |a3*x0 + a4*x1 + a5| | 1| The derivatives w.r.t. each parameter are given by T'(x) = [[x0, 0], #derivatives of [T1, T2] w.r.t a0 [x1, 0], #derivatives of [T1, T2] w.r.t a1 [ 1, 0], #derivatives of [T1, T2] w.r.t a2 [ 0, x0], #derivatives of [T1, T2] w.r.t a3 [ 0, x1], #derivatives of [T1, T2] w.r.t a4 [ 0, 1]] #derivatives of [T1, T2] w.r.t a5 The Jacobian matrix is the transpose of the above matrix.
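As a quick numerical illustration (a sketch, assuming the Python-level ``jacobian`` wrapper exposed by the ``Transform`` base class), at the point (2, 3) this gives:

>>> import numpy as np
>>> from dipy.align.transforms import AffineTransform2D
>>> J = AffineTransform2D().jacobian(np.zeros(6), np.array([2.0, 3.0]))
>>> np.allclose(J, [[2, 3, 1, 0, 0, 0], [0, 0, 0, 2, 3, 1]])
True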
Parameters ---------- theta : array, shape (6,) the parameters of the 2D affine transform x : array, shape (2,) the point at which to compute the Jacobian J : array, shape (2, 6) the buffer in which to write the Jacobian Returns ------- is_constant : int always returns 0, indicating that the Jacobian is not constant (it depends on the value of x) """ J[0, :6] = 0 J[1, :6] = 0 J[0, :2] = x[:2] J[0, 2] = 1 J[1, 3:5] = x[:2] J[1, 5] = 1 # This Jacobian depends on x (it's not constant): return 0 return 0 cdef void _get_identity_parameters(self, double[:] theta) noexcept nogil: r""" Parameter values corresponding to the identity Sets in theta the parameter values corresponding to the identity transform Parameters ---------- theta : array, shape (6,) buffer to write the parameters of the 2D affine transform """ theta[0], theta[1], theta[2] = 1, 0, 0 theta[3], theta[4], theta[5] = 0, 1, 0 cdef void _param_to_matrix(self, double[:] theta, double[:, :] R) noexcept nogil: r""" Matrix associated with a general 2D affine transform The transformation is given by the matrix: A = [[a0, a1, a2], [a3, a4, a5], [ 0, 0, 1]] Parameters ---------- theta : array, shape (6,) the parameters of the 2D affine transform R : array, shape (3,3) the buffer in which to write the matrix """ R[0, 0], R[0, 1], R[0, 2] = theta[0], theta[1], theta[2] R[1, 0], R[1, 1], R[1, 2] = theta[3], theta[4], theta[5] R[2, 0], R[2, 1], R[2, 2] = 0, 0, 1 cdef class AffineTransform3D(Transform): def __init__(self): r""" Affine transform in 3D """ self.dim = 3 self.number_of_parameters = 12 cdef int _jacobian(self, double[:] theta, double[:] x, double[:, :] J) noexcept nogil: r""" Jacobian matrix of the 3D affine transform The transformation is given by: T(x)= |a0, a1, a2, a3 | |x0| | T1(x) | |a0*x0 + a1*x1 + a2*x2 + a3| |a4, a5, a6, a7 |* |x1|= | T2(x) |= |a4*x0 + a5*x1 + a6*x2 + a7| |a8, a9, a10, a11| |x2| | T3(x) | |a8*x0 + a9*x1 + a10*x2+a11| | 1| The derivatives w.r.t. each parameter are given by T'(x) = [[x0, 0, 0], #derivatives of [T1, T2, T3] w.r.t a0 [x1, 0, 0], #derivatives of [T1, T2, T3] w.r.t a1 [x2, 0, 0], #derivatives of [T1, T2, T3] w.r.t a2 [ 1, 0, 0], #derivatives of [T1, T2, T3] w.r.t a3 [ 0, x0, 0], #derivatives of [T1, T2, T3] w.r.t a4 [ 0, x1, 0], #derivatives of [T1, T2, T3] w.r.t a5 [ 0, x2, 0], #derivatives of [T1, T2, T3] w.r.t a6 [ 0, 1, 0], #derivatives of [T1, T2, T3] w.r.t a7 [ 0, 0, x0], #derivatives of [T1, T2, T3] w.r.t a8 [ 0, 0, x1], #derivatives of [T1, T2, T3] w.r.t a9 [ 0, 0, x2], #derivatives of [T1, T2, T3] w.r.t a10 [ 0, 0, 1]] #derivatives of [T1, T2, T3] w.r.t a11 The Jacobian matrix is the transpose of the above matrix. 
Parameters ---------- theta : array, shape (12,) the parameters of the 3D affine transform x : array, shape (3,) the point at which to compute the Jacobian J : array, shape (3, 12) the buffer in which to write the Jacobian Returns ------- is_constant : int always returns 0, indicating that the Jacobian is not constant (it depends on the value of x) """ cdef: cnp.npy_intp j for j in range(3): J[j, :12] = 0 J[0, :3] = x[:3] J[0, 3] = 1 J[1, 4:7] = x[:3] J[1, 7] = 1 J[2, 8:11] = x[:3] J[2, 11] = 1 # This Jacobian depends on x (it's not constant): return 0 return 0 cdef void _get_identity_parameters(self, double[:] theta) noexcept nogil: r""" Parameter values corresponding to the identity Sets in theta the parameter values corresponding to the identity transform Parameters ---------- theta : array, shape (12,) buffer to write the parameters of the 3D affine transform """ theta[0], theta[1], theta[2], theta[3] = 1, 0, 0, 0 theta[4], theta[5], theta[6], theta[7] = 0, 1, 0, 0 theta[8], theta[9], theta[10], theta[11] = 0, 0, 1, 0 cdef void _param_to_matrix(self, double[:] theta, double[:, :] R) noexcept nogil: r""" Matrix associated with a general 3D affine transform The transformation is given by the matrix: A = [[a0, a1, a2, a3], [a4, a5, a6, a7], [a8, a9, a10, a11], [ 0, 0, 0, 1]] Parameters ---------- theta : array, shape (12,) the parameters of the 3D affine transform R : array, shape (4,4) the buffer in which to write the matrix """ R[0, 0], R[0, 1], R[0, 2] = theta[0], theta[1], theta[2] R[1, 0], R[1, 1], R[1, 2] = theta[4], theta[5], theta[6] R[2, 0], R[2, 1], R[2, 2] = theta[8], theta[9], theta[10] R[3, 0], R[3, 1], R[3, 2] = 0, 0, 0 R[0, 3] = theta[3] R[1, 3] = theta[7] R[2, 3] = theta[11] R[3, 3] = 1 regtransforms = dict() regtransforms [('TRANSLATION', 2)] = TranslationTransform2D() regtransforms [('TRANSLATION', 3)] = TranslationTransform3D() regtransforms [('ROTATION', 2)] = RotationTransform2D() regtransforms [('ROTATION', 3)] = RotationTransform3D() regtransforms [('RIGID', 2)] = RigidTransform2D() regtransforms [('RIGID', 3)] = RigidTransform3D() regtransforms [('SCALING', 2)] = ScalingTransform2D() regtransforms [('SCALING', 3)] = ScalingTransform3D() regtransforms [('AFFINE', 2)] = AffineTransform2D() regtransforms [('AFFINE', 3)] = AffineTransform3D() regtransforms [('RIGIDSCALING', 2)] = RigidScalingTransform2D() regtransforms [('RIGIDSCALING', 3)] = RigidScalingTransform3D() regtransforms [('RIGIDISOSCALING', 2)] = RigidIsoScalingTransform2D() regtransforms [('RIGIDISOSCALING', 3)] = RigidIsoScalingTransform3D() dipy-1.11.0/dipy/align/vector_fields.pxd000066400000000000000000000040101476546756600201700ustar00rootroot00000000000000#!python #cython: boundscheck=False #cython: wraparound=False #cython: cdivision=True cdef inline double _apply_affine_3d_x0(double x0, double x1, double x2, double h, double[:, :] aff) nogil: r"""Multiplies aff by (x0, x1, x2, h), returns the 1st element of product Returns the first component of the product of the homogeneous matrix aff by (x0, x1, x2, h) """ return aff[0, 0] * x0 + aff[0, 1] * x1 + aff[0, 2] * x2 + h*aff[0, 3] cdef inline double _apply_affine_3d_x1(double x0, double x1, double x2, double h, double[:, :] aff) nogil: r"""Multiplies aff by (x0, x1, x2, h), returns the 2nd element of product Returns the first component of the product of the homogeneous matrix aff by (x0, x1, x2, h) """ return aff[1, 0] * x0 + aff[1, 1] * x1 + aff[1, 2] * x2 + h*aff[1, 3] cdef inline double _apply_affine_3d_x2(double x0, double x1, double x2, double h, 
double[:, :] aff) nogil: r"""Multiplies aff by (x0, x1, x2, h), returns the 3d element of product Returns the first component of the product of the homogeneous matrix aff by (x0, x1, x2, h) """ return aff[2, 0] * x0 + aff[2, 1] * x1 + aff[2, 2] * x2 + h*aff[2, 3] cdef inline double _apply_affine_2d_x0(double x0, double x1, double h, double[:, :] aff) nogil: r"""Multiplies aff by (x0, x1, h), returns the 1st element of product Returns the first component of the product of the homogeneous matrix aff by (x0, x1, h) """ return aff[0, 0] * x0 + aff[0, 1] * x1 + h*aff[0, 2] cdef inline double _apply_affine_2d_x1(double x0, double x1, double h, double[:, :] aff) nogil: r"""Multiplies aff by (x0, x1, h), returns the 2nd element of product Returns the first component of the product of the homogeneous matrix aff by (x0, x1, h) """ return aff[1, 0] * x0 + aff[1, 1] * x1 + h*aff[1, 2] dipy-1.11.0/dipy/align/vector_fields.pyx000066400000000000000000003460601476546756600202330ustar00rootroot00000000000000#!python #cython: boundscheck=False #cython: wraparound=False #cython: cdivision=True import numpy as np cimport numpy as cnp from dipy.align.fused_types cimport floating, number from dipy.core.interpolation cimport (_interpolate_scalar_2d, _interpolate_scalar_3d, _interpolate_vector_2d, _interpolate_vector_3d, _interpolate_scalar_nn_2d, _interpolate_scalar_nn_3d) cdef extern from "dpy_math.h" nogil: double floor(double) double sqrt(double) double cos(double) double atan2(double, double) def is_valid_affine(double[:, :] M, int dim): if M is None: return True if M.shape[0] < dim or M.shape[1] < dim + 1: return False if not np.all(np.isfinite(M)): return False return True cdef void _compose_vector_fields_2d(floating[:, :, :] d1, floating[:, :, :] d2, double[:, :] premult_index, double[:, :] premult_disp, double time_scaling, floating[:, :, :] comp, double[:] stats) noexcept nogil: r"""Computes the composition of two 2D displacement fields Computes the composition of the two 2-D displacements d1 and d2. The evaluation of d2 at non-lattice points is computed using tri-linear interpolation. The actual composition is computed as: comp[i] = d1[i] + t * d2[ A * i + B * d1[i] ] where t = time_scaling, A = premult_index and B=premult_disp and i denotes the voxel coordinates of a voxel in d1's grid. Using this parameters it is possible to compose vector fields with arbitrary discretization: let R and S be the voxel-to-space transformation associated to d1 and d2, respectively then the composition at a voxel with coordinates i in d1's grid is given by: comp[i] = d1[i] + R*i + d2[Sinv*(R*i + d1[i])] - R*i (the R*i terms cancel each other) where Sinv = S^{-1} we can then define A = Sinv * R and B = Sinv to compute the composition using this function. Parameters ---------- d1 : array, shape (R, C, 2) first displacement field to be applied. R, C are the number of rows and columns of the displacement field, respectively. d2 : array, shape (R', C', 2) second displacement field to be applied. R', C' are the number of rows and columns of the displacement field, respectively. 
premult_index : array, shape (3, 3) the matrix A in the explanation above premult_disp : array, shape (3, 3) the matrix B in the explanation above time_scaling : float this corresponds to the time scaling 't' in the above explanation comp : array, shape (R, C, 2), same dimension as d1 on output, this array will contain the composition of the two fields stats : array, shape (3,) on output, this array will contain three statistics of the vector norms of the composition (maximum, mean, standard_deviation) Returns ------- comp : array, shape (R, C, 2), same dimension as d1 on output, this array will contain the composition of the two fields stats : array, shape (3,) on output, this array will contain three statistics of the vector norms of the composition (maximum, mean, standard_deviation) Notes ----- If d1[r,c] lies outside the domain of d2, then comp[r,c] will contain a zero vector. Warning: it is possible to use the same array reference for d1 and comp to effectively update d1 to the composition of d1 and d2 because previously updated values from d1 are no longer used (this is done to save memory and time). However, using the same array for d2 and comp may not be the intended operation (see comment below). """ cdef: cnp.npy_intp nr1 = d1.shape[0] cnp.npy_intp nc1 = d1.shape[1] cnp.npy_intp nr2 = d2.shape[0] cnp.npy_intp nc2 = d2.shape[1] int inside, cnt = 0 double maxNorm = 0 double meanNorm = 0 double stdNorm = 0 double nn cnp.npy_intp i, j double di, dj, dii, djj, diii, djjj for i in range(nr1): for j in range(nc1): # This is the only place we access d1[i, j] dii = d1[i, j, 0] djj = d1[i, j, 1] if premult_disp is None: di = dii dj = djj else: di = _apply_affine_2d_x0(dii, djj, 0, premult_disp) dj = _apply_affine_2d_x1(dii, djj, 0, premult_disp) if premult_index is None: diii = i djjj = j else: diii = _apply_affine_2d_x0(i, j, 1, premult_index) djjj = _apply_affine_2d_x1(i, j, 1, premult_index) diii += di djjj += dj # If d1 and comp are the same array, this will correctly update # d1[i,j], which will never be accessed again # If d2 and comp are the same array, then (diii, djjj) may be # in the neighborhood of a previously updated vector from d2, # which may be problematic inside = _interpolate_vector_2d[floating](d2, diii, djjj, &comp[i, j, 0]) if inside == 1: comp[i, j, 0] = time_scaling * comp[i, j, 0] + dii comp[i, j, 1] = time_scaling * comp[i, j, 1] + djj nn = comp[i, j, 0] ** 2 + comp[i, j, 1] ** 2 meanNorm += nn stdNorm += nn * nn cnt += 1 if maxNorm < nn: maxNorm = nn else: comp[i, j, 0] = 0 comp[i, j, 1] = 0 meanNorm /= cnt stats[0] = sqrt(maxNorm) stats[1] = sqrt(meanNorm) stats[2] = sqrt(stdNorm / cnt - meanNorm * meanNorm) def compose_vector_fields_2d(floating[:, :, :] d1, floating[:, :, :] d2, double[:, :] premult_index, double[:, :] premult_disp, double time_scaling, floating[:, :, :] comp): r"""Computes the composition of two 2D displacement fields Computes the composition of the two 2-D displacements d1 and d2. The evaluation of d2 at non-lattice points is computed using tri-linear interpolation. The actual composition is computed as: comp[i] = d1[i] + t * d2[ A * i + B * d1[i] ] where t = time_scaling, A = premult_index and B=premult_disp and i denotes the voxel coordinates of a voxel in d1's grid. 
Using this parameters it is possible to compose vector fields with arbitrary discretizations: let R and S be the voxel-to-space transformation associated to d1 and d2, respectively then the composition at a voxel with coordinates i in d1's grid is given by: comp[i] = d1[i] + R*i + d2[Sinv*(R*i + d1[i])] - R*i (the R*i terms cancel each other) where Sinv = S^{-1} we can then define A = Sinv * R and B = Sinv to compute the composition using this function. Parameters ---------- d1 : array, shape (R, C, 2) first displacement field to be applied. R, C are the number of rows and columns of the displacement field, respectively. d2 : array, shape (R', C', 2) second displacement field to be applied. R', C' are the number of rows and columns of the displacement field, respectively. premult_index : array, shape (3, 3) the matrix A in the explanation above premult_disp : array, shape (3, 3) the matrix B in the explanation above time_scaling : float this corresponds to the time scaling 't' in the above explanation comp : array, shape (R, C, 2) the buffer to write the composition to. If None, the buffer is created internally Returns ------- comp : array, shape (R, C, 2), same dimension as d1 on output, this array will contain the composition of the two fields stats : array, shape (3,) on output, this array will contain three statistics of the vector norms of the composition (maximum, mean, standard_deviation) """ cdef: double[:] stats = np.zeros(shape=(3,), dtype=np.float64) if comp is None: comp = np.zeros_like(d1) if not is_valid_affine(premult_index, 2): raise ValueError("Invalid index multiplication matrix") if not is_valid_affine(premult_disp, 2): raise ValueError("Invalid displacement multiplication matrix") _compose_vector_fields_2d[floating](d1, d2, premult_index, premult_disp, time_scaling, comp, stats) return np.asarray(comp), np.asarray(stats) cdef void _compose_vector_fields_3d(floating[:, :, :, :] d1, floating[:, :, :, :] d2, double[:, :] premult_index, double[:, :] premult_disp, double t, floating[:, :, :, :] comp, double[:] stats) noexcept nogil: r"""Computes the composition of two 3D displacement fields Computes the composition of the two 3-D displacements d1 and d2. The evaluation of d2 at non-lattice points is computed using tri-linear interpolation. The actual composition is computed as: comp[i] = d1[i] + t * d2[ A * i + B * d1[i] ] where t = time_scaling, A = premult_index and B=premult_disp and i denotes the voxel coordinates of a voxel in d1's grid. Using this parameters it is possible to compose vector fields with arbitrary discretization: let R and S be the voxel-to-space transformation associated to d1 and d2, respectively then the composition at a voxel with coordinates i in d1's grid is given by: comp[i] = d1[i] + R*i + d2[Sinv*(R*i + d1[i])] - R*i (the R*i terms cancel each other) where Sinv = S^{-1} we can then define A = Sinv * R and B = Sinv to compute the composition using this function. Parameters ---------- d1 : array, shape (S, R, C, 3) first displacement field to be applied. S, R, C are the number of slices, rows and columns of the displacement field, respectively. d2 : array, shape (S', R', C', 3) second displacement field to be applied. R', C' are the number of rows and columns of the displacement field, respectively. 
premult_index : array, shape (4, 4) the matrix A in the explanation above premult_disp : array, shape (4, 4) the matrix B in the explanation above time_scaling : float this corresponds to the time scaling 't' in the above explanation comp : array, shape (S, R, C, 3), same dimension as d1 on output, this array will contain the composition of the two fields stats : array, shape (3,) on output, this array will contain three statistics of the vector norms of the composition (maximum, mean, standard_deviation) Returns ------- comp : array, shape (S, R, C, 3), same dimension as d1 on output, this array will contain the composition of the two fields stats : array, shape (3,) on output, this array will contain three statistics of the vector norms of the composition (maximum, mean, standard_deviation) Notes ----- If d1[s,r,c] lies outside the domain of d2, then comp[s,r,c] will contain a zero vector. Warning: it is possible to use the same array reference for d1 and comp to effectively update d1 to the composition of d1 and d2 because previously updated values from d1 are no longer used (this is done to save memory and time). However, using the same array for d2 and comp may not be the intended operation (see comment below). """ cdef: cnp.npy_intp ns1 = d1.shape[0] cnp.npy_intp nr1 = d1.shape[1] cnp.npy_intp nc1 = d1.shape[2] cnp.npy_intp ns2 = d2.shape[0] cnp.npy_intp nr2 = d2.shape[1] cnp.npy_intp nc2 = d2.shape[2] int inside, cnt = 0 double maxNorm = 0 double meanNorm = 0 double stdNorm = 0 double nn cnp.npy_intp i, j, k double di, dj, dk, dii, djj, dkk, diii, djjj, dkkk for k in range(ns1): for i in range(nr1): for j in range(nc1): # This is the only place we access d1[k, i, j] dkk = d1[k, i, j, 0] dii = d1[k, i, j, 1] djj = d1[k, i, j, 2] if premult_disp is None: dk = dkk di = dii dj = djj else: dk = _apply_affine_3d_x0(dkk, dii, djj, 0, premult_disp) di = _apply_affine_3d_x1(dkk, dii, djj, 0, premult_disp) dj = _apply_affine_3d_x2(dkk, dii, djj, 0, premult_disp) if premult_index is None: dkkk = k diii = i djjj = j else: dkkk = _apply_affine_3d_x0(k, i, j, 1, premult_index) diii = _apply_affine_3d_x1(k, i, j, 1, premult_index) djjj = _apply_affine_3d_x2(k, i, j, 1, premult_index) dkkk += dk diii += di djjj += dj # If d1 and comp are the same array, this will correctly update # d1[k,i,j], which will never be accessed again # If d2 and comp are the same array, then (dkkk, diii, djjj) # may be in the neighborhood of a previously updated vector # from d2, which may be problematic inside = _interpolate_vector_3d[floating](d2, dkkk, diii, djjj, &comp[k, i, j, 0]) if inside == 1: comp[k, i, j, 0] = t * comp[k, i, j, 0] + dkk comp[k, i, j, 1] = t * comp[k, i, j, 1] + dii comp[k, i, j, 2] = t * comp[k, i, j, 2] + djj nn = (comp[k, i, j, 0] ** 2 + comp[k, i, j, 1] ** 2 + comp[k, i, j, 2]**2) meanNorm += nn stdNorm += nn * nn cnt += 1 if maxNorm < nn: maxNorm = nn else: comp[k, i, j, 0] = 0 comp[k, i, j, 1] = 0 comp[k, i, j, 2] = 0 meanNorm /= cnt stats[0] = sqrt(maxNorm) stats[1] = sqrt(meanNorm) stats[2] = sqrt(stdNorm / cnt - meanNorm * meanNorm) def compose_vector_fields_3d(floating[:, :, :, :] d1, floating[:, :, :, :] d2, double[:, :] premult_index, double[:, :] premult_disp, double time_scaling, floating[:, :, :, :] comp): r"""Computes the composition of two 3D displacement fields Computes the composition of the two 3-D displacements d1 and d2. The evaluation of d2 at non-lattice points is computed using tri-linear interpolation. 
The actual composition is computed as: comp[i] = d1[i] + t * d2[ A * i + B * d1[i] ] where t = time_scaling, A = premult_index and B=premult_disp and i denotes the voxel coordinates of a voxel in d1's grid. Using this parameters it is possible to compose vector fields with arbitrary discretization: let R and S be the voxel-to-space transformation associated to d1 and d2, respectively then the composition at a voxel with coordinates i in d1's grid is given by: comp[i] = d1[i] + R*i + d2[Sinv*(R*i + d1[i])] - R*i (the R*i terms cancel each other) where Sinv = S^{-1} we can then define A = Sinv * R and B = Sinv to compute the composition using this function. Parameters ---------- d1 : array, shape (S, R, C, 3) first displacement field to be applied. S, R, C are the number of slices, rows and columns of the displacement field, respectively. d2 : array, shape (S', R', C', 3) second displacement field to be applied. R', C' are the number of rows and columns of the displacement field, respectively. premult_index : array, shape (4, 4) the matrix A in the explanation above premult_disp : array, shape (4, 4) the matrix B in the explanation above time_scaling : float this corresponds to the time scaling 't' in the above explanation comp : array, shape (S, R, C, 3), same dimension as d1 the buffer to write the composition to. If None, the buffer will be created internally Returns ------- comp : array, shape (S, R, C, 3), same dimension as d1 on output, this array will contain the composition of the two fields stats : array, shape (3,) on output, this array will contain three statistics of the vector norms of the composition (maximum, mean, standard_deviation) Notes ----- If d1[s,r,c] lies outside the domain of d2, then comp[s,r,c] will contain a zero vector. """ cdef: double[:] stats = np.zeros(shape=(3,), dtype=np.float64) if comp is None: comp = np.zeros_like(d1) if not is_valid_affine(premult_index, 3): raise ValueError("Invalid index pre-multiplication matrix") if not is_valid_affine(premult_disp, 3): raise ValueError("Invalid displacement pre-multiplication matrix") _compose_vector_fields_3d[floating](d1, d2, premult_index, premult_disp, time_scaling, comp, stats) return np.asarray(comp), np.asarray(stats) def invert_vector_field_fixed_point_2d(floating[:, :, :] d, double[:, :] d_world2grid, double[:] spacing, int max_iter, double tolerance, floating[:, :, :] start=None): r"""Computes the inverse of a 2D displacement fields Computes the inverse of the given 2-D displacement field d using the fixed-point algorithm [1]. [1] Chen, M., Lu, W., Chen, Q., Ruchala, K. J., & Olivera, G. H. (2008). A simple fixed-point approach to invert a deformation field. Medical Physics, 35(1), 81. 
doi:10.1118/1.2816107 Parameters ---------- d : array, shape (R, C, 2) the 2-D displacement field to be inverted d_world2grid : array, shape (3, 3) the space-to-grid transformation associated to the displacement field d (transforming physical space coordinates to voxel coordinates of the displacement field grid) spacing : array, shape (2,) the spacing between voxels (voxel size along each axis) max_iter : int maximum number of iterations to be performed tolerance : float maximum tolerated inversion error start : array, shape (R, C, 2) an approximation to the inverse displacement field (if no approximation is available, None can be provided and the start displacement field will be zero) Returns ------- p : array, shape (R, C, 2) the inverse displacement field Notes ----- We assume that the displacement field is an endomorphism so that the shape and voxel-to-space transformation of the inverse field's discretization is the same as those of the input displacement field. The 'inversion error' at iteration t is defined as the mean norm of the displacement vectors of the input displacement field composed with the inverse at iteration t. """ cdef: cnp.npy_intp nr = d.shape[0] cnp.npy_intp nc = d.shape[1] int iter_count, current, flag double difmag, mag, maxlen, step_factor double epsilon double error = 1 + tolerance double di, dj, dii, djj double sr = spacing[0], sc = spacing[1] ftype = np.asarray(d).dtype cdef: double[:] stats = np.zeros(shape=(2,), dtype=np.float64) double[:] substats = np.empty(shape=(3,), dtype=np.float64) double[:, :] norms = np.zeros(shape=(nr, nc), dtype=np.float64) floating[:, :, :] p = np.zeros(shape=(nr, nc, 2), dtype=ftype) floating[:, :, :] q = np.zeros(shape=(nr, nc, 2), dtype=ftype) if not is_valid_affine(d_world2grid, 2): raise ValueError("Invalid world-to-image transform") if start is not None: p[...] = start with nogil: iter_count = 0 while (iter_count < max_iter) and (tolerance < error): if iter_count == 0: epsilon = 0.75 else: epsilon = 0.5 _compose_vector_fields_2d[floating](p, d, None, d_world2grid, 1.0, q, substats) difmag = 0 error = 0 for i in range(nr): for j in range(nc): mag = sqrt((q[i, j, 0]/sr) ** 2 + (q[i, j, 1]/sc) ** 2) norms[i, j] = mag error += mag if difmag < mag: difmag = mag maxlen = difmag * epsilon for i in range(nr): for j in range(nc): if norms[i, j] > maxlen: step_factor = epsilon * maxlen / norms[i, j] else: step_factor = epsilon p[i, j, 0] = p[i, j, 0] - step_factor * q[i, j, 0] p[i, j, 1] = p[i, j, 1] - step_factor * q[i, j, 1] error /= (nr * nc) iter_count += 1 stats[0] = substats[1] stats[1] = iter_count return np.asarray(p) def invert_vector_field_fixed_point_3d(floating[:, :, :, :] d, double[:, :] d_world2grid, double[:] spacing, int max_iter, double tol, floating[:, :, :, :] start=None): r"""Computes the inverse of a 3D displacement field Computes the inverse of the given 3-D displacement field d using the fixed-point algorithm [1]. [1] Chen, M., Lu, W., Chen, Q., Ruchala, K. J., & Olivera, G. H. (2008). A simple fixed-point approach to invert a deformation field. Medical Physics, 35(1), 81.
doi:10.1118/1.2816107 Parameters ---------- d : array, shape (S, R, C, 3) the 3-D displacement field to be inverted d_world2grid : array, shape (4, 4) the space-to-grid transformation associated to the displacement field d (transforming physical space coordinates to voxel coordinates of the displacement field grid) spacing : array, shape (3,) the spacing between voxels (voxel size along each axis) max_iter : int maximum number of iterations to be performed tol : float maximum tolerated inversion error start : array, shape (S, R, C, 3) an approximation to the inverse displacement field (if no approximation is available, None can be provided and the start displacement field will be zero) Returns ------- p : array, shape (S, R, C, 3) the inverse displacement field Notes ----- We assume that the displacement field is an endomorphism so that the shape and voxel-to-space transformation of the inverse field's discretization is the same as those of the input displacement field. The 'inversion error' at iteration t is defined as the mean norm of the displacement vectors of the input displacement field composed with the inverse at iteration t. """ cdef: cnp.npy_intp ns = d.shape[0] cnp.npy_intp nr = d.shape[1] cnp.npy_intp nc = d.shape[2] int iter_count, current double dkk, dii, djj, dk, di, dj double difmag, mag, maxlen, step_factor double epsilon = 0.5 double error = 1 + tol double ss = spacing[0], sr = spacing[1], sc = spacing[2] ftype = np.asarray(d).dtype cdef: double[:] stats = np.zeros(shape=(2,), dtype=np.float64) double[:] substats = np.zeros(shape=(3,), dtype=np.float64) double[:, :, :] norms = np.zeros(shape=(ns, nr, nc), dtype=np.float64) floating[:, :, :, :] p = np.zeros(shape=(ns, nr, nc, 3), dtype=ftype) floating[:, :, :, :] q = np.zeros(shape=(ns, nr, nc, 3), dtype=ftype) if not is_valid_affine(d_world2grid, 3): raise ValueError("Invalid world-to-image transform") if start is not None: p[...] = start with nogil: iter_count = 0 difmag = 1 while (0.1 < difmag) and (iter_count < max_iter) and (tol < error): if iter_count == 0: epsilon = 0.75 else: epsilon = 0.5 _compose_vector_fields_3d[floating](p, d, None, d_world2grid, 1.0, q, substats) difmag = 0 error = 0 for k in range(ns): for i in range(nr): for j in range(nc): mag = sqrt((q[k, i, j, 0]/ss) ** 2 + (q[k, i, j, 1]/sr) ** 2 + (q[k, i, j, 2]/sc) ** 2) norms[k, i, j] = mag error += mag if difmag < mag: difmag = mag maxlen = difmag*epsilon for k in range(ns): for i in range(nr): for j in range(nc): if norms[k, i, j] > maxlen: step_factor = epsilon * maxlen / norms[k, i, j] else: step_factor = epsilon p[k, i, j, 0] = (p[k, i, j, 0] - step_factor * q[k, i, j, 0]) p[k, i, j, 1] = (p[k, i, j, 1] - step_factor * q[k, i, j, 1]) p[k, i, j, 2] = (p[k, i, j, 2] - step_factor * q[k, i, j, 2]) error /= (ns * nr * nc) iter_count += 1 stats[0] = error stats[1] = iter_count return np.asarray(p) def simplify_warp_function_2d(floating[:, :, :] d, double[:, :] affine_idx_in, double[:, :] affine_idx_out, double[:, :] affine_disp, int[:] out_shape): r""" Simplifies a nonlinear warping function combined with an affine transform Modifies the given deformation field by incorporating into it an affine transformation and voxel-to-space transforms associated with the discretization of its domain and codomain. The resulting transformation may be regarded as operating on the image spaces given by the domain and codomain discretization.
More precisely, the resulting transform is of the form: (1) T[i] = W * d[U * i] + V * i Where U = affine_idx_in, V = affine_idx_out, W = affine_disp. Parameters ---------- d : array, shape (R', C', 2) the non-linear part of the transformation (displacement field) affine_idx_in : array, shape (3, 3) the matrix U in eq. (1) above affine_idx_out : array, shape (3, 3) the matrix V in eq. (1) above affine_disp : array, shape (3, 3) the matrix W in eq. (1) above out_shape : array, shape (2,) the number of rows and columns of the sampling grid Returns ------- out : array, shape = out_shape the deformation field `out` associated with `T` in eq. (1) such that: T[i] = i + out[i] Notes ----- Both the direct and inverse transforms of a DiffeomorphicMap can be written in this form: Direct: Let D be the voxel-to-space transform of the domain's discretization, P be the pre-align matrix, Rinv the space-to-voxel transform of the reference grid (the grid the displacement field is defined on) and Cinv be the space-to-voxel transform of the codomain's discretization. Then, for each i in the domain's grid, the direct transform is given by (2) T[i] = Cinv * d[Rinv * P * D * i] + Cinv * P * D * i and we identify U = Rinv * P * D, V = Cinv * P * D, W = Cinv Inverse: Let C be the voxel-to-space transform of the codomain's discretization, Pinv be the inverse of the pre-align matrix, Rinv the space-to-voxel transform of the reference grid (the grid the displacement field is defined on) and Dinv be the space-to-voxel transform of the domain's discretization. Then, for each j in the codomain's grid, the inverse transform is given by (3) Tinv[j] = Dinv * Pinv * d[Rinv * C * j] + Dinv * Pinv * C * j and we identify U = Rinv * C, V = Dinv * Pinv * C, W = Dinv * Pinv """ cdef: cnp.npy_intp nrows = out_shape[0] cnp.npy_intp ncols = out_shape[1] cnp.npy_intp i, j double di, dj, dii, djj floating[:] tmp = np.zeros((2,), dtype=np.asarray(d).dtype) floating[:, :, :] out = np.zeros(shape=(nrows, ncols, 2), dtype=np.asarray(d).dtype) if not is_valid_affine(affine_idx_in, 2): raise ValueError("Invalid inner index multiplication matrix") if not is_valid_affine(affine_idx_out, 2): raise ValueError("Invalid outer index multiplication matrix") if not is_valid_affine(affine_disp, 2): raise ValueError("Invalid displacement multiplication matrix") with nogil: for i in range(nrows): for j in range(ncols): # Apply inner index pre-multiplication if affine_idx_in is None: dii = d[i, j, 0] djj = d[i, j, 1] else: di = _apply_affine_2d_x0( i, j, 1, affine_idx_in) dj = _apply_affine_2d_x1( i, j, 1, affine_idx_in) _interpolate_vector_2d[floating](d, di, dj, &tmp[0]) dii = tmp[0] djj = tmp[1] # Apply displacement multiplication if affine_disp is not None: di = _apply_affine_2d_x0( dii, djj, 0, affine_disp) dj = _apply_affine_2d_x1( dii, djj, 0, affine_disp) else: di = dii dj = djj # Apply outer index multiplication and add the displacements if affine_idx_out is not None: out[i, j, 0] = di + _apply_affine_2d_x0(i, j, 1, affine_idx_out) - i out[i, j, 1] = dj + _apply_affine_2d_x1(i, j, 1, affine_idx_out) - j else: out[i, j, 0] = di out[i, j, 1] = dj return np.asarray(out) def simplify_warp_function_3d(floating[:, :, :, :] d, double[:, :] affine_idx_in, double[:, :] affine_idx_out, double[:, :] affine_disp, int[:] out_shape): r""" Simplifies a nonlinear warping function combined with an affine transform Modifies the given deformation field by incorporating into it an affine transformation and voxel-to-space transforms associated with the 
discretization of its domain and codomain. The resulting transformation may be regarded as operating on the image spaces given by the domain and codomain discretization. More precisely, the resulting transform is of the form: (1) T[i] = W * d[U * i] + V * i Where U = affine_idx_in, V = affine_idx_out, W = affine_disp. Parameters ---------- d : array, shape (S', R', C', 3) the non-linear part of the transformation (displacement field) affine_idx_in : array, shape (4, 4) the matrix U in eq. (1) above affine_idx_out : array, shape (4, 4) the matrix V in eq. (1) above affine_disp : array, shape (4, 4) the matrix W in eq. (1) above out_shape : array, shape (3,) the number of slices, rows and columns of the sampling grid Returns ------- out : array, shape = out_shape the deformation field `out` associated with `T` in eq. (1) such that: T[i] = i + out[i] Notes ----- Both the direct and inverse transforms of a DiffeomorphicMap can be written in this form: Direct: Let D be the voxel-to-space transform of the domain's discretization, P be the pre-align matrix, Rinv the space-to-voxel transform of the reference grid (the grid the displacement field is defined on) and Cinv be the space-to-voxel transform of the codomain's discretization. Then, for each i in the domain's grid, the direct transform is given by (2) T[i] = Cinv * d[Rinv * P * D * i] + Cinv * P * D * i and we identify U = Rinv * P * D, V = Cinv * P * D, W = Cinv Inverse: Let C be the voxel-to-space transform of the codomain's discretization, Pinv be the inverse of the pre-align matrix, Rinv the space-to-voxel transform of the reference grid (the grid the displacement field is defined on) and Dinv be the space-to-voxel transform of the domain's discretization. Then, for each j in the codomain's grid, the inverse transform is given by (3) Tinv[j] = Dinv * Pinv * d[Rinv * C * j] + Dinv * Pinv * C * j and we identify U = Rinv * C, V = Dinv * Pinv * C, W = Dinv * Pinv """ cdef: cnp.npy_intp nslices = out_shape[0] cnp.npy_intp nrows = out_shape[1] cnp.npy_intp ncols = out_shape[2] cnp.npy_intp i, j, k, inside double di, dj, dk, dii, djj, dkk floating[:] tmp = np.zeros((3,), dtype=np.asarray(d).dtype) floating[:, :, :, :] out = np.zeros(shape=(nslices, nrows, ncols, 3), dtype=np.asarray(d).dtype) if not is_valid_affine(affine_idx_in, 3): raise ValueError("Invalid inner index multiplication matrix") if not is_valid_affine(affine_idx_out, 3): raise ValueError("Invalid outer index multiplication matrix") if not is_valid_affine(affine_disp, 3): raise ValueError("Invalid displacement multiplication matrix") with nogil: for k in range(nslices): for i in range(nrows): for j in range(ncols): if affine_idx_in is None: dkk = d[k, i, j, 0] dii = d[k, i, j, 1] djj = d[k, i, j, 2] else: dk = _apply_affine_3d_x0( k, i, j, 1, affine_idx_in) di = _apply_affine_3d_x1( k, i, j, 1, affine_idx_in) dj = _apply_affine_3d_x2( k, i, j, 1, affine_idx_in) inside = _interpolate_vector_3d[floating](d, dk, di, dj, &tmp[0]) dkk = tmp[0] dii = tmp[1] djj = tmp[2] if affine_disp is not None: dk = _apply_affine_3d_x0( dkk, dii, djj, 0, affine_disp) di = _apply_affine_3d_x1( dkk, dii, djj, 0, affine_disp) dj = _apply_affine_3d_x2( dkk, dii, djj, 0, affine_disp) else: dk = dkk di = dii dj = djj if affine_idx_out is not None: out[k, i, j, 0] = dk +\ _apply_affine_3d_x0(k, i, j, 1, affine_idx_out) - k out[k, i, j, 1] = di +\ _apply_affine_3d_x1(k, i, j, 1, affine_idx_out) - i out[k, i, j, 2] = dj +\ _apply_affine_3d_x2(k, i, j, 1, affine_idx_out) - j else: out[k, i, j, 0] = dk 
out[k, i, j, 1] = di out[k, i, j, 2] = dj return np.asarray(out) def reorient_vector_field_2d(floating[:, :, :] d, double[:, :] affine): r"""Linearly transforms all vectors of a 2D displacement field Modifies the input displacement field by multiplying each displacement vector by the given matrix. Note that the elements of the displacement field are vectors, not points, so their last homogeneous coordinate is zero, not one, and therefore the translation component of the affine transform will not have any effect on them. Parameters ---------- d : array, shape (R, C, 2) the displacement field to be re-oriented affine: array, shape (3, 3) the matrix to be applied """ cdef: cnp.npy_intp nrows = d.shape[0] cnp.npy_intp ncols = d.shape[1] cnp.npy_intp i, j double di, dj if not is_valid_affine(affine, 2): raise ValueError("Invalid affine transform matrix") if affine is None: return with nogil: for i in range(nrows): for j in range(ncols): di = d[i, j, 0] dj = d[i, j, 1] d[i, j, 0] = _apply_affine_2d_x0(di, dj, 0, affine) d[i, j, 1] = _apply_affine_2d_x1(di, dj, 0, affine) def reorient_vector_field_3d(floating[:, :, :, :] d, double[:, :] affine): r"""Linearly transforms all vectors of a 3D displacement field Modifies the input displacement field by multiplying each displacement vector by the given matrix. Note that the elements of the displacement field are vectors, not points, so their last homogeneous coordinate is zero, not one, and therefore the translation component of the affine transform will not have any effect on them. Parameters ---------- d : array, shape (S, R, C, 3) the displacement field to be re-oriented affine : array, shape (4, 4) the matrix to be applied """ cdef: cnp.npy_intp nslices = d.shape[0] cnp.npy_intp nrows = d.shape[1] cnp.npy_intp ncols = d.shape[2] cnp.npy_intp i, j, k double di, dj, dk if not is_valid_affine(affine, 3): raise ValueError("Invalid affine transform matrix") if affine is None: return with nogil: for k in range(nslices): for i in range(nrows): for j in range(ncols): dk = d[k, i, j, 0] di = d[k, i, j, 1] dj = d[k, i, j, 2] d[k, i, j, 0] = _apply_affine_3d_x0(dk, di, dj, 0, affine) d[k, i, j, 1] = _apply_affine_3d_x1(dk, di, dj, 0, affine) d[k, i, j, 2] = _apply_affine_3d_x2(dk, di, dj, 0, affine) def downsample_scalar_field_3d(floating[:, :, :] field): r"""Down-samples the input volume by a factor of 2 Down-samples the input volume by a factor of 2. The value at each voxel of the resulting volume is the average of its surrounding voxels in the original volume. 
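For instance (a minimal sketch, assuming the public import path of this module):

>>> import numpy as np
>>> from dipy.align.vector_fields import downsample_scalar_field_3d
>>> vol = np.ones((5, 4, 3), dtype=np.float32)
>>> downsample_scalar_field_3d(vol).shape
(3, 2, 2)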
Parameters ---------- field : array, shape (S, R, C) the volume to be down-sampled Returns ------- down : array, shape (S', R', C') the down-sampled volume, where S' = ceil(S/2), R'= ceil(R/2), C'=ceil(C/2) """ ftype = np.asarray(field).dtype cdef: cnp.npy_intp ns = field.shape[0] cnp.npy_intp nr = field.shape[1] cnp.npy_intp nc = field.shape[2] cnp.npy_intp nns = (ns + 1) // 2 cnp.npy_intp nnr = (nr + 1) // 2 cnp.npy_intp nnc = (nc + 1) // 2 cnp.npy_intp i, j, k, ii, jj, kk floating[:, :, :] down = np.zeros((nns, nnr, nnc), dtype=ftype) int[:, :, :] cnt = np.zeros((nns, nnr, nnc), dtype=np.int32) with nogil: for k in range(ns): for i in range(nr): for j in range(nc): kk = k // 2 ii = i // 2 jj = j // 2 down[kk, ii, jj] += field[k, i, j] cnt[kk, ii, jj] += 1 for k in range(nns): for i in range(nnr): for j in range(nnc): if cnt[k, i, j] > 0: down[k, i, j] /= cnt[k, i, j] return np.asarray(down) def downsample_displacement_field_3d(floating[:, :, :, :] field): r"""Down-samples the input 3D vector field by a factor of 2 Down-samples the input vector field by a factor of 2. This operation is equivalent to dividing the input field into 2x2x2 cubes and averaging the 8 vectors. The resulting field consists of these average vectors. Parameters ---------- field : array, shape (S, R, C, 3) the vector field to be down-sampled Returns ------- down : array, shape (S', R', C', 3) the down-sampled displacement field, where S' = ceil(S/2), R'= ceil(R/2), C'=ceil(C/2) """ ftype = np.asarray(field).dtype cdef: cnp.npy_intp ns = field.shape[0] cnp.npy_intp nr = field.shape[1] cnp.npy_intp nc = field.shape[2] cnp.npy_intp nns = (ns + 1) // 2 cnp.npy_intp nnr = (nr + 1) // 2 cnp.npy_intp nnc = (nc + 1) // 2 cnp.npy_intp i, j, k, ii, jj, kk floating[:, :, :, :] down = np.zeros((nns, nnr, nnc, 3), dtype=ftype) int[:, :, :] cnt = np.zeros((nns, nnr, nnc), dtype=np.int32) with nogil: for k in range(ns): for i in range(nr): for j in range(nc): kk = k // 2 ii = i // 2 jj = j // 2 down[kk, ii, jj, 0] += field[k, i, j, 0] down[kk, ii, jj, 1] += field[k, i, j, 1] down[kk, ii, jj, 2] += field[k, i, j, 2] cnt[kk, ii, jj] += 1 for k in range(nns): for i in range(nnr): for j in range(nnc): if cnt[k, i, j] > 0: down[k, i, j, 0] /= cnt[k, i, j] down[k, i, j, 1] /= cnt[k, i, j] down[k, i, j, 2] /= cnt[k, i, j] return np.asarray(down) def downsample_scalar_field_2d(floating[:, :] field): r"""Down-samples the input 2D image by a factor of 2 Down-samples the input image by a factor of 2. The value at each pixel of the resulting image is the average of its surrounding pixels in the original image. Parameters ---------- field : array, shape (R, C) the image to be down-sampled Returns ------- down : array, shape (R', C') the down-sampled image, where R'= ceil(R/2), C'=ceil(C/2) """ ftype = np.asarray(field).dtype cdef: cnp.npy_intp nr = field.shape[0] cnp.npy_intp nc = field.shape[1] cnp.npy_intp nnr = (nr + 1) // 2 cnp.npy_intp nnc = (nc + 1) // 2 cnp.npy_intp i, j, ii, jj floating[:, :] down = np.zeros(shape=(nnr, nnc), dtype=ftype) int[:, :] cnt = np.zeros(shape=(nnr, nnc), dtype=np.int32) with nogil: for i in range(nr): for j in range(nc): ii = i // 2 jj = j // 2 down[ii, jj] += field[i, j] cnt[ii, jj] += 1 for i in range(nnr): for j in range(nnc): if cnt[i, j] > 0: down[i, j] /= cnt[i, j] return np.asarray(down) def downsample_displacement_field_2d(floating[:, :, :] field): r"""Down-samples the 2D input vector field by a factor of 2 Down-samples the input vector field by a factor of 2.
The value at each pixel of the resulting field is the average of its surrounding pixels in the original field. Parameters ---------- field : array, shape (R, C, 2) the vector field to be down-sampled Returns ------- down : array, shape (R', C', 2) the down-sampled displacement field, where R'= ceil(R/2), C'=ceil(C/2) """ ftype = np.asarray(field).dtype cdef: cnp.npy_intp nr = field.shape[0] cnp.npy_intp nc = field.shape[1] cnp.npy_intp nnr = (nr + 1) // 2 cnp.npy_intp nnc = (nc + 1) // 2 cnp.npy_intp i, j, ii, jj floating[:, :, :] down = np.zeros((nnr, nnc, 2), dtype=ftype) int[:, :] cnt = np.zeros((nnr, nnc), dtype=np.int32) with nogil: for i in range(nr): for j in range(nc): ii = i // 2 jj = j // 2 down[ii, jj, 0] += field[i, j, 0] down[ii, jj, 1] += field[i, j, 1] cnt[ii, jj] += 1 for i in range(nnr): for j in range(nnc): if cnt[i, j] > 0: down[i, j, 0] /= cnt[i, j] down[i, j, 1] /= cnt[i, j] return np.asarray(down) def warp_coordinates_3d(points, floating[:, :, :, :] d1, double[:, :] in2world, double[:, :] world2out, double[:, :] field_world2grid): r""" Warps the given 3D points under the displacement field d1 and the given affine maps Parameters ---------- points : array, shape (n, 3) the points to be warped d1 : array, shape (S, R, C, 3) the displacement field used to warp the points in2world : array, shape (4, 4) grid-to-world transform taking the input points to world coordinates (None if they are already in world coordinates) world2out : array, shape (4, 4) world-to-output transform applied to the warped points (None to return them in world coordinates) field_world2grid : array, shape (4, 4) world-to-grid transform of the displacement field Returns ------- out : array, shape (n, 3) the warped points """ cdef: cnp.npy_intp n = points.shape[0] cnp.npy_intp i double x, y, z, wx, wy, wz, gx, gy, gz double[:, :] out = np.zeros(shape=(n, 3), dtype=np.float64) double[:, :] _points = np.array(points, dtype=np.float64) double[:, :] in2grid int inside floating[:] tmp = np.zeros(shape=(3,), dtype=np.asarray(d1).dtype) # in2grid maps points to displacement's grid if in2world is None: # then points are already in world coordinates in2grid = field_world2grid elif field_world2grid is None: # then the grid is in world coordinates in2grid = in2world else: in2grid = np.dot(field_world2grid, in2world) with nogil: for i in range(n): x = _points[i, 0] y = _points[i, 1] z = _points[i, 2] # Map points to world coordinates if in2world is not None: wx = _apply_affine_3d_x0(x, y, z, 1, in2world) wy = _apply_affine_3d_x1(x, y, z, 1, in2world) wz = _apply_affine_3d_x2(x, y, z, 1, in2world) else: wx = x wy = y wz = z # Map points to deformation field's grid if in2grid is not None: gx = _apply_affine_3d_x0(x, y, z, 1, in2grid) gy = _apply_affine_3d_x1(x, y, z, 1, in2grid) gz = _apply_affine_3d_x2(x, y, z, 1, in2grid) else: gx = x gy = y gz = z # Interpolate deformation field at (gx, gy, gz) inside = _interpolate_vector_3d[floating](d1, gx, gy, gz, &tmp[0]) # Warp input point wx += tmp[0] wy += tmp[1] wz += tmp[2] # Map warped point to requested out coordinates if world2out is not None: out[i, 0] = _apply_affine_3d_x0(wx, wy, wz, 1, world2out) out[i, 1] = _apply_affine_3d_x1(wx, wy, wz, 1, world2out) out[i, 2] = _apply_affine_3d_x2(wx, wy, wz, 1, world2out) else: out[i, 0] = wx out[i, 1] = wy out[i, 2] = wz return np.asarray(out) def warp_coordinates_2d(points, floating[:, :, :] d1, double[:, :] in2world, double[:, :] world2out, double[:, :] field_world2grid): r""" Warps the given 2D points under the displacement field d1 and the given affine maps Parameters ---------- points : array, shape (n, 2) the points to be warped d1 : array, shape (R, C, 2) the displacement field used to warp the points in2world : array, shape (3, 3) grid-to-world transform taking the input points to world coordinates (None if they are already in world coordinates) world2out : array, shape (3, 3) world-to-output transform applied to the warped points (None to return them in world coordinates) field_world2grid : array, shape (3, 3) world-to-grid transform of the displacement field Returns ------- out : array, shape (n, 2) the warped points """ cdef: cnp.npy_intp n = points.shape[0] cnp.npy_intp i double x, y, wx, wy, gx, gy double[:, :] out = np.zeros(shape=(n, 2), dtype=np.float64) double[:, :] _points = np.array(points, dtype=np.float64) double[:, :] in2grid int inside floating[:] tmp = np.zeros(shape=(2,), dtype=np.asarray(d1).dtype) # in2grid maps points to displacement's grid
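# in2world takes the input points to world coordinates and
# field_world2grid takes world coordinates onto the field's lattice, so
# their composition sends each input point directly to the grid of d1.
# The branches below handle the cases where either map is None, in which
# case it is treated as the identity.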
    if in2world is None:  # then points are already in world coordinates
        in2grid = field_world2grid
    elif field_world2grid is None:  # then the grid is in world coordinates
        in2grid = in2world
    else:
        in2grid = np.dot(field_world2grid, in2world)

    with nogil:
        for i in range(n):
            x = _points[i, 0]
            y = _points[i, 1]
            # Map points to world coordinates
            if in2world is not None:
                wx = _apply_affine_2d_x0(x, y, 1, in2world)
                wy = _apply_affine_2d_x1(x, y, 1, in2world)
            else:
                wx = x
                wy = y
            # Map points to deformation field's grid
            if in2grid is not None:
                gx = _apply_affine_2d_x0(x, y, 1, in2grid)
                gy = _apply_affine_2d_x1(x, y, 1, in2grid)
            else:
                gx = x
                gy = y
            # Interpolate deformation field at (gx, gy)
            inside = _interpolate_vector_2d[floating](d1, gx, gy, &tmp[0])
            # Warp input point
            wx += tmp[0]
            wy += tmp[1]
            # Map warped point to requested out coordinates
            if world2out is not None:
                out[i, 0] = _apply_affine_2d_x0(wx, wy, 1, world2out)
                out[i, 1] = _apply_affine_2d_x1(wx, wy, 1, world2out)
            else:
                out[i, 0] = wx
                out[i, 1] = wy
    return np.asarray(out)


def warp_3d(floating[:, :, :] volume, floating[:, :, :, :] d1,
            double[:, :] affine_idx_in=None,
            double[:, :] affine_idx_out=None,
            double[:, :] affine_disp=None,
            int[:] out_shape=None):
    r"""Warps a 3D volume using trilinear interpolation

    Deforms the input volume under the given transformation. The warped
    volume is computed using tri-linear interpolation and is given by:

    (1) warped[i] = volume[ C * d1[A*i] + B*i ]

    where A = affine_idx_in, B = affine_idx_out, C = affine_disp and i
    denotes the discrete coordinates of a voxel in the sampling grid of
    shape = out_shape.

    Parameters
    ----------
    volume : array, shape (S, R, C)
        the input volume to be transformed
    d1 : array, shape (S', R', C', 3)
        the displacement field driving the transformation
    affine_idx_in : array, shape (4, 4)
        the matrix A in eq. (1) above
    affine_idx_out : array, shape (4, 4)
        the matrix B in eq. (1) above
    affine_disp : array, shape (4, 4)
        the matrix C in eq. (1) above
    out_shape : array, shape (3,)
        the number of slices, rows and columns of the sampling grid

    Returns
    -------
    warped : array, shape = out_shape
        the transformed volume

    Notes
    -----
    To illustrate the use of this function, consider a displacement field
    d1 with grid-to-space transformation R, a volume with grid-to-space
    transformation T and let's say we want to sample the warped volume on
    a grid with grid-to-space transformation S (sampling grid). For each
    voxel in the sampling grid with discrete coordinates i, the warped
    volume is given by:

    (2) warped[i] = volume[Tinv * ( d1[Rinv * S * i] + S * i ) ]

    where Tinv = T^{-1} and Rinv = R^{-1}. By identifying A = Rinv * S,
    B = Tinv * S, C = Tinv we can use this function to efficiently warp
    the input image.
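    Examples
    --------
    As a minimal sketch (all affine arguments left as None, so every grid
    is assumed to coincide), a zero displacement field acts as the
    identity:

    >>> import numpy as np
    >>> vol = np.arange(24, dtype=np.float32).reshape((2, 3, 4))
    >>> disp = np.zeros((2, 3, 4, 3), dtype=np.float32)  # zero field
    >>> warped = warp_3d(vol, disp)
    >>> bool(np.all(warped == vol))
    True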
""" cdef: cnp.npy_intp nslices = volume.shape[0] cnp.npy_intp nrows = volume.shape[1] cnp.npy_intp ncols = volume.shape[2] cnp.npy_intp nsVol = volume.shape[0] cnp.npy_intp nrVol = volume.shape[1] cnp.npy_intp ncVol = volume.shape[2] cnp.npy_intp i, j, k int inside double dkk, dii, djj, dk, di, dj if not is_valid_affine(affine_idx_in, 3): raise ValueError("Invalid inner index multiplication matrix") if not is_valid_affine(affine_idx_out, 3): raise ValueError("Invalid outer index multiplication matrix") if not is_valid_affine(affine_disp, 3): raise ValueError("Invalid displacement multiplication matrix") if out_shape is not None: nslices = out_shape[0] nrows = out_shape[1] ncols = out_shape[2] elif d1 is not None: nslices = d1.shape[0] nrows = d1.shape[1] ncols = d1.shape[2] cdef floating[:, :, :] warped = np.zeros(shape=(nslices, nrows, ncols), dtype=np.asarray(volume).dtype) cdef floating[:] tmp = np.zeros(shape=(3,), dtype=np.asarray(d1).dtype) with nogil: for k in range(nslices): for i in range(nrows): for j in range(ncols): if affine_idx_in is None: dkk = d1[k, i, j, 0] dii = d1[k, i, j, 1] djj = d1[k, i, j, 2] else: dk = _apply_affine_3d_x0( k, i, j, 1, affine_idx_in) di = _apply_affine_3d_x1( k, i, j, 1, affine_idx_in) dj = _apply_affine_3d_x2( k, i, j, 1, affine_idx_in) inside = _interpolate_vector_3d[floating](d1, dk, di, dj, &tmp[0]) dkk = tmp[0] dii = tmp[1] djj = tmp[2] if affine_disp is not None: dk = _apply_affine_3d_x0( dkk, dii, djj, 0, affine_disp) di = _apply_affine_3d_x1( dkk, dii, djj, 0, affine_disp) dj = _apply_affine_3d_x2( dkk, dii, djj, 0, affine_disp) else: dk = dkk di = dii dj = djj if affine_idx_out is not None: dkk = dk + _apply_affine_3d_x0(k, i, j, 1, affine_idx_out) dii = di + _apply_affine_3d_x1(k, i, j, 1, affine_idx_out) djj = dj + _apply_affine_3d_x2(k, i, j, 1, affine_idx_out) else: dkk = dk + k dii = di + i djj = dj + j inside = _interpolate_scalar_3d[floating](volume, dkk, dii, djj, &warped[k, i, j]) return np.asarray(warped) def transform_3d_affine(floating[:, :, :] volume, int[:] ref_shape, double[:, :] affine): r"""Transforms a 3D volume by an affine transform with trilinear interp. Deforms the input volume under the given affine transformation using tri-linear interpolation. The shape of the resulting transformation is given by ref_shape. If the affine matrix is None, it is taken as the identity. Parameters ---------- volume : array, shape (S, R, C) the input volume to be transformed ref_shape : array, shape (3,) the shape of the resulting volume affine : array, shape (4, 4) the affine transform to be applied Returns ------- out : array, shape (S', R', C') the transformed volume Notes ----- The reason it is necessary to provide the intended shape of the resulting volume is because the affine transformation is defined on all R^{3} but we must sample a finite lattice. Also the resulting shape may not be necessarily equal to the input shape, unless we are interested on endomorphisms only and not general diffeomorphisms. 
""" cdef: cnp.npy_intp nslices = ref_shape[0] cnp.npy_intp nrows = ref_shape[1] cnp.npy_intp ncols = ref_shape[2] cnp.npy_intp nsVol = volume.shape[0] cnp.npy_intp nrVol = volume.shape[1] cnp.npy_intp ncVol = volume.shape[2] cnp.npy_intp i, j, k, ii, jj, kk int inside double dkk, dii, djj, tmp0, tmp1 double alpha, beta, gamma, calpha, cbeta, cgamma floating[:, :, :] out = np.zeros(shape=(nslices, nrows, ncols), dtype=np.asarray(volume).dtype) if not is_valid_affine(affine, 3): raise ValueError("Invalid affine transform matrix") with nogil: for k in range(nslices): for i in range(nrows): for j in range(ncols): if affine is not None: dkk = _apply_affine_3d_x0(k, i, j, 1, affine) dii = _apply_affine_3d_x1(k, i, j, 1, affine) djj = _apply_affine_3d_x2(k, i, j, 1, affine) else: dkk = k dii = i djj = j inside = _interpolate_scalar_3d[floating](volume, dkk, dii, djj, &out[k, i, j]) return np.asarray(out) def warp_3d_nn(number[:, :, :] volume, floating[:, :, :, :] d1, double[:, :] affine_idx_in=None, double[:, :] affine_idx_out=None, double[:, :] affine_disp=None, int[:] out_shape=None): r"""Warps a 3D volume using using nearest-neighbor interpolation Deforms the input volume under the given transformation. The warped volume is computed using nearest-neighbor interpolation and is given by: (1) warped[i] = volume[ C * d1[A*i] + B*i ] where A = affine_idx_in, B = affine_idx_out, C = affine_disp and i denotes the discrete coordinates of a voxel in the sampling grid of shape = out_shape. Parameters ---------- volume : array, shape (S, R, C) the input volume to be transformed d1 : array, shape (S', R', C', 3) the displacement field driving the transformation affine_idx_in : array, shape (4, 4) the matrix A in eq. (1) above affine_idx_out : array, shape (4, 4) the matrix B in eq. (1) above affine_disp : array, shape (4, 4) the matrix C in eq. (1) above out_shape : array, shape (3,) the number of slices, rows and columns of the sampling grid Returns ------- warped : array, shape = out_shape the transformed volume Notes ----- To illustrate the use of this function, consider a displacement field d1 with grid-to-space transformation R, a volume with grid-to-space transformation T and let's say we want to sample the warped volume on a grid with grid-to-space transformation S (sampling grid). For each voxel in the sampling grid with discrete coordinates i, the warped volume is given by: (2) warped[i] = volume[Tinv * ( d1[Rinv * S * i] + S * i ) ] where Tinv = T^{-1} and Rinv = R^{-1}. By identifying A = Rinv * S, B = Tinv * S, C = Tinv we can use this function to efficiently warp the input image. 
""" cdef: cnp.npy_intp nslices = volume.shape[0] cnp.npy_intp nrows = volume.shape[1] cnp.npy_intp ncols = volume.shape[2] cnp.npy_intp nsVol = volume.shape[0] cnp.npy_intp nrVol = volume.shape[1] cnp.npy_intp ncVol = volume.shape[2] cnp.npy_intp i, j, k int inside double dkk, dii, djj, dk, di, dj if not is_valid_affine(affine_idx_in, 3): raise ValueError("Invalid inner index multiplication matrix") if not is_valid_affine(affine_idx_out, 3): raise ValueError("Invalid outer index multiplication matrix") if not is_valid_affine(affine_disp, 3): raise ValueError("Invalid displacement multiplication matrix") if out_shape is not None: nslices = out_shape[0] nrows = out_shape[1] ncols = out_shape[2] elif d1 is not None: nslices = d1.shape[0] nrows = d1.shape[1] ncols = d1.shape[2] cdef number[:, :, :] warped = np.zeros(shape=(nslices, nrows, ncols), dtype=np.asarray(volume).dtype) cdef floating[:] tmp = np.zeros(shape=(3,), dtype = np.asarray(d1).dtype) with nogil: for k in range(nslices): for i in range(nrows): for j in range(ncols): if affine_idx_in is None: dkk = d1[k, i, j, 0] dii = d1[k, i, j, 1] djj = d1[k, i, j, 2] else: dk = _apply_affine_3d_x0( k, i, j, 1, affine_idx_in) di = _apply_affine_3d_x1( k, i, j, 1, affine_idx_in) dj = _apply_affine_3d_x2( k, i, j, 1, affine_idx_in) inside = _interpolate_vector_3d[floating](d1, dk, di, dj, &tmp[0]) dkk = tmp[0] dii = tmp[1] djj = tmp[2] if affine_disp is not None: dk = _apply_affine_3d_x0( dkk, dii, djj, 0, affine_disp) di = _apply_affine_3d_x1( dkk, dii, djj, 0, affine_disp) dj = _apply_affine_3d_x2( dkk, dii, djj, 0, affine_disp) else: dk = dkk di = dii dj = djj if affine_idx_out is not None: dkk = dk + _apply_affine_3d_x0(k, i, j, 1, affine_idx_out) dii = di + _apply_affine_3d_x1(k, i, j, 1, affine_idx_out) djj = dj + _apply_affine_3d_x2(k, i, j, 1, affine_idx_out) else: dkk = dk + k dii = di + i djj = dj + j inside = _interpolate_scalar_nn_3d[number](volume, dkk, dii, djj, &warped[k, i, j]) return np.asarray(warped) def transform_3d_affine_nn(number[:, :, :] volume, int[:] ref_shape, double[:, :] affine=None): r"""Transforms a 3D volume by an affine transform with NN interpolation Deforms the input volume under the given affine transformation using nearest neighbor interpolation. The shape of the resulting volume is given by ref_shape. If the affine matrix is None, it is taken as the identity. Parameters ---------- volume : array, shape (S, R, C) the input volume to be transformed ref_shape : array, shape (3,) the shape of the resulting volume affine : array, shape (4, 4) the affine transform to be applied Returns ------- out : array, shape (S', R', C') the transformed volume Notes ----- The reason it is necessary to provide the intended shape of the resulting volume is because the affine transformation is defined on all R^{3} but we must sample a finite lattice. Also the resulting shape may not be necessarily equal to the input shape, unless we are interested on endomorphisms only and not general diffeomorphisms. 
""" cdef: cnp.npy_intp nslices = ref_shape[0] cnp.npy_intp nrows = ref_shape[1] cnp.npy_intp ncols = ref_shape[2] cnp.npy_intp nsVol = volume.shape[0] cnp.npy_intp nrVol = volume.shape[1] cnp.npy_intp ncVol = volume.shape[2] double dkk, dii, djj, tmp0, tmp1 double alpha, beta, gamma, calpha, cbeta, cgamma cnp.npy_intp k, i, j, kk, ii, jj number[:, :, :] out = np.zeros((nslices, nrows, ncols), dtype=np.asarray(volume).dtype) if not is_valid_affine(affine, 3): raise ValueError("Invalid affine transform matrix") with nogil: for k in range(nslices): for i in range(nrows): for j in range(ncols): if affine is not None: dkk = _apply_affine_3d_x0(k, i, j, 1, affine) dii = _apply_affine_3d_x1(k, i, j, 1, affine) djj = _apply_affine_3d_x2(k, i, j, 1, affine) else: dkk = k dii = i djj = j _interpolate_scalar_nn_3d[number](volume, dkk, dii, djj, &out[k, i, j]) return np.asarray(out) def warp_2d(floating[:, :] image, floating[:, :, :] d1, double[:, :] affine_idx_in=None, double[:, :] affine_idx_out=None, double[:, :] affine_disp=None, int[:] out_shape=None): r"""Warps a 2D image using bilinear interpolation Deforms the input image under the given transformation. The warped image is computed using bi-linear interpolation and is given by: (1) warped[i] = image[ C * d1[A*i] + B*i ] where A = affine_idx_in, B = affine_idx_out, C = affine_disp and i denotes the discrete coordinates of a voxel in the sampling grid of shape = out_shape. Parameters ---------- image : array, shape (R, C) the input image to be transformed d1 : array, shape (R', C', 2) the displacement field driving the transformation affine_idx_in : array, shape (3, 3) the matrix A in eq. (1) above affine_idx_out : array, shape (3, 3) the matrix B in eq. (1) above affine_disp : array, shape (3, 3) the matrix C in eq. (1) above out_shape : array, shape (2,) the number of rows and columns of the sampling grid Returns ------- warped : array, shape = out_shape the transformed image Notes ----- To illustrate the use of this function, consider a displacement field d1 with grid-to-space transformation R, an image with grid-to-space transformation T and let's say we want to sample the warped image on a grid with grid-to-space transformation S (sampling grid). For each voxel in the sampling grid with discrete coordinates i, the warped image is given by: (2) warped[i] = image[Tinv * ( d1[Rinv * S * i] + S * i ) ] where Tinv = T^{-1} and Rinv = R^{-1}. By identifying A = Rinv * S, B = Tinv * S, C = Tinv we can use this function to efficiently warp the input image. 
""" cdef: cnp.npy_intp nrows = image.shape[0] cnp.npy_intp ncols = image.shape[1] cnp.npy_intp nrVol = image.shape[0] cnp.npy_intp ncVol = image.shape[1] cnp.npy_intp i, j, ii, jj double di, dj, dii, djj if not is_valid_affine(affine_idx_in, 2): raise ValueError("Invalid inner index multiplication matrix") if not is_valid_affine(affine_idx_out, 2): raise ValueError("Invalid outer index multiplication matrix") if not is_valid_affine(affine_disp, 2): raise ValueError("Invalid displacement multiplication matrix") if out_shape is not None: nrows = out_shape[0] ncols = out_shape[1] elif d1 is not None: nrows = d1.shape[0] ncols = d1.shape[1] cdef floating[:, :] warped = np.zeros(shape=(nrows, ncols), dtype=np.asarray(image).dtype) cdef floating[:] tmp = np.zeros(shape=(2,), dtype=np.asarray(d1).dtype) with nogil: for i in range(nrows): for j in range(ncols): # Apply inner index pre-multiplication if affine_idx_in is None: dii = d1[i, j, 0] djj = d1[i, j, 1] else: di = _apply_affine_2d_x0( i, j, 1, affine_idx_in) dj = _apply_affine_2d_x1( i, j, 1, affine_idx_in) _interpolate_vector_2d[floating](d1, di, dj, &tmp[0]) dii = tmp[0] djj = tmp[1] # Apply displacement multiplication if affine_disp is not None: di = _apply_affine_2d_x0( dii, djj, 0, affine_disp) dj = _apply_affine_2d_x1( dii, djj, 0, affine_disp) else: di = dii dj = djj # Apply outer index multiplication and add the displacements if affine_idx_out is not None: dii = di + _apply_affine_2d_x0(i, j, 1, affine_idx_out) djj = dj + _apply_affine_2d_x1(i, j, 1, affine_idx_out) else: dii = di + i djj = dj + j # Interpolate the input image at the resulting location _interpolate_scalar_2d[floating](image, dii, djj, &warped[i, j]) return np.asarray(warped) def transform_2d_affine(floating[:, :] image, int[:] ref_shape, double[:, :] affine=None): r"""Transforms a 2D image by an affine transform with bilinear interp. Deforms the input image under the given affine transformation using tri-linear interpolation. The shape of the resulting image is given by ref_shape. If the affine matrix is None, it is taken as the identity. Parameters ---------- image : array, shape (R, C) the input image to be transformed ref_shape : array, shape (2,) the shape of the resulting image affine : array, shape (3, 3) the affine transform to be applied Returns ------- out : array, shape (R', C') the transformed image Notes ----- The reason it is necessary to provide the intended shape of the resulting image is because the affine transformation is defined on all R^{2} but we must sample a finite lattice. Also the resulting shape may not be necessarily equal to the input shape, unless we are interested on endomorphisms only and not general diffeomorphisms. 
""" cdef: cnp.npy_intp nrows = ref_shape[0] cnp.npy_intp ncols = ref_shape[1] cnp.npy_intp nrVol = image.shape[0] cnp.npy_intp ncVol = image.shape[1] cnp.npy_intp i, j, ii, jj double dii, djj, tmp0 double alpha, beta, calpha, cbeta floating[:, :] out = np.zeros(shape=(nrows, ncols), dtype=np.asarray(image).dtype) if not is_valid_affine(affine, 2): raise ValueError("Invalid affine transform matrix") with nogil: for i in range(nrows): for j in range(ncols): if affine is not None: dii = _apply_affine_2d_x0(i, j, 1, affine) djj = _apply_affine_2d_x1(i, j, 1, affine) else: dii = i djj = j _interpolate_scalar_2d[floating](image, dii, djj, &out[i, j]) return np.asarray(out) def warp_2d_nn(number[:, :] image, floating[:, :, :] d1, double[:, :] affine_idx_in=None, double[:, :] affine_idx_out=None, double[:, :] affine_disp=None, int[:] out_shape=None): r"""Warps a 2D image using nearest neighbor interpolation Deforms the input image under the given transformation. The warped image is computed using nearest-neighbor interpolation and is given by: (1) warped[i] = image[ C * d1[A*i] + B*i ] where A = affine_idx_in, B = affine_idx_out, C = affine_disp and i denotes the discrete coordinates of a voxel in the sampling grid of shape = out_shape. Parameters ---------- image : array, shape (R, C) the input image to be transformed d1 : array, shape (R', C', 2) the displacement field driving the transformation affine_idx_in : array, shape (3, 3) the matrix A in eq. (1) above affine_idx_out : array, shape (3, 3) the matrix B in eq. (1) above affine_disp : array, shape (3, 3) the matrix C in eq. (1) above out_shape : array, shape (2,) the number of rows and columns of the sampling grid Returns ------- warped : array, shape = out_shape the transformed image Notes ----- To illustrate the use of this function, consider a displacement field d1 with grid-to-space transformation R, an image with grid-to-space transformation T and let's say we want to sample the warped image on a grid with grid-to-space transformation S (sampling grid). For each voxel in the sampling grid with discrete coordinates i, the warped image is given by: (2) warped[i] = image[Tinv * ( d1[Rinv * S * i] + S * i ) ] where Tinv = T^{-1} and Rinv = R^{-1}. By identifying A = Rinv * S, B = Tinv * S, C = Tinv we can use this function to efficiently warp the input image. 
""" cdef: cnp.npy_intp nrows = image.shape[0] cnp.npy_intp ncols = image.shape[1] cnp.npy_intp nrVol = image.shape[0] cnp.npy_intp ncVol = image.shape[1] cnp.npy_intp i, j, ii, jj double di, dj, dii, djj if not is_valid_affine(affine_idx_in, 2): raise ValueError("Invalid inner index multiplication matrix") if not is_valid_affine(affine_idx_out, 2): raise ValueError("Invalid outer index multiplication matrix") if not is_valid_affine(affine_disp, 2): raise ValueError("Invalid displacement multiplication matrix") if out_shape is not None: nrows = out_shape[0] ncols = out_shape[1] elif d1 is not None: nrows = d1.shape[0] ncols = d1.shape[1] cdef number[:, :] warped = np.zeros(shape=(nrows, ncols), dtype=np.asarray(image).dtype) cdef floating[:] tmp = np.zeros(shape=(2,), dtype=np.asarray(d1).dtype) with nogil: for i in range(nrows): for j in range(ncols): # Apply inner index pre-multiplication if affine_idx_in is None: dii = d1[i, j, 0] djj = d1[i, j, 1] else: di = _apply_affine_2d_x0( i, j, 1, affine_idx_in) dj = _apply_affine_2d_x1( i, j, 1, affine_idx_in) _interpolate_vector_2d[floating](d1, di, dj, &tmp[0]) dii = tmp[0] djj = tmp[1] # Apply displacement multiplication if affine_disp is not None: di = _apply_affine_2d_x0( dii, djj, 0, affine_disp) dj = _apply_affine_2d_x1( dii, djj, 0, affine_disp) else: di = dii dj = djj # Apply outer index multiplication and add the displacements if affine_idx_out is not None: dii = di + _apply_affine_2d_x0(i, j, 1, affine_idx_out) djj = dj + _apply_affine_2d_x1(i, j, 1, affine_idx_out) else: dii = di + i djj = dj + j # Interpolate the input image at the resulting location _interpolate_scalar_nn_2d[number](image, dii, djj, &warped[i, j]) return np.asarray(warped) def transform_2d_affine_nn(number[:, :] image, int[:] ref_shape, double[:, :] affine=None): r"""Transforms a 2D image by an affine transform with NN interpolation Deforms the input image under the given affine transformation using nearest neighbor interpolation. The shape of the resulting image is given by ref_shape. If the affine matrix is None, it is taken as the identity. Parameters ---------- image : array, shape (R, C) the input image to be transformed ref_shape : array, shape (2,) the shape of the resulting image affine : array, shape (3, 3) the affine transform to be applied Returns ------- out : array, shape (R', C') the transformed image Notes ----- The reason it is necessary to provide the intended shape of the resulting image is because the affine transformation is defined on all R^{2} but we must sample a finite lattice. Also the resulting shape may not be necessarily equal to the input shape, unless we are interested on endomorphisms only and not general diffeomorphisms. 
""" cdef: cnp.npy_intp nrows = ref_shape[0] cnp.npy_intp ncols = ref_shape[1] cnp.npy_intp nrVol = image.shape[0] cnp.npy_intp ncVol = image.shape[1] double dii, djj, tmp0 double alpha, beta, calpha, cbeta cnp.npy_intp i, j, ii, jj number[:, :] out = np.zeros((nrows, ncols), dtype=np.asarray(image).dtype) if not is_valid_affine(affine, 2): raise ValueError("Invalid affine transform matrix") with nogil: for i in range(nrows): for j in range(ncols): if affine is not None: dii = _apply_affine_2d_x0(i, j, 1, affine) djj = _apply_affine_2d_x1(i, j, 1, affine) else: dii = i djj = j _interpolate_scalar_nn_2d[number](image, dii, djj, &out[i, j]) return np.asarray(out) def resample_displacement_field_3d(floating[:, :, :, :] field, double[:] factors, int[:] out_shape): r"""Resamples a 3D vector field to a custom target shape Resamples the given 3D displacement field on a grid of the requested shape, using the given scale factors. More precisely, the resulting displacement field at each grid cell i is given by D[i] = field[Diag(factors) * i] Parameters ---------- factors : array, shape (3,) the scaling factors mapping (integer) grid coordinates in the resampled grid to (floating point) grid coordinates in the original grid out_shape : array, shape (3,) the desired shape of the resulting grid Returns ------- expanded : array, shape = out_shape + (3, ) the resampled displacement field """ ftype = np.asarray(field).dtype cdef: cnp.npy_intp tslices = out_shape[0] cnp.npy_intp trows = out_shape[1] cnp.npy_intp tcols = out_shape[2] cnp.npy_intp k, i, j int inside double dkk, dii, djj floating[:, :, :, :] expanded = np.zeros((tslices, trows, tcols, 3), dtype=ftype) for k in range(tslices): for i in range(trows): for j in range(tcols): dkk = k * factors[0] dii = i * factors[1] djj = j * factors[2] _interpolate_vector_3d[floating](field, dkk, dii, djj, &expanded[k, i, j, 0]) return np.asarray(expanded) def resample_displacement_field_2d(floating[:, :, :] field, double[:] factors, int[:] out_shape): r"""Resamples a 2D vector field to a custom target shape Resamples the given 2D displacement field on a grid of the requested shape, using the given scale factors. More precisely, the resulting displacement field at each grid cell i is given by D[i] = field[Diag(factors) * i] Parameters ---------- factors : array, shape (2,) the scaling factors mapping (integer) grid coordinates in the resampled grid to (floating point) grid coordinates in the original grid out_shape : array, shape (2,) the desired shape of the resulting grid Returns ------- expanded : array, shape = out_shape + (2, ) the resampled displacement field """ ftype = np.asarray(field).dtype cdef: cnp.npy_intp trows = out_shape[0] cnp.npy_intp tcols = out_shape[1] cnp.npy_intp i, j int inside double dii, djj floating[:, :, :] expanded = np.zeros((trows, tcols, 2), dtype=ftype) for i in range(trows): for j in range(tcols): dii = i*factors[0] djj = j*factors[1] inside = _interpolate_vector_2d[floating](field, dii, djj, &expanded[i, j, 0]) return np.asarray(expanded) def create_random_displacement_2d(int[:] from_shape, double[:, :] from_grid2world, int[:] to_shape, double[:, :] to_grid2world, object rng=None): r"""Creates a random 2D displacement 'exactly' mapping points of two grids Creates a random 2D displacement field mapping points of an input discrete domain (with dimensions given by from_shape) to points of an output discrete domain (with shape given by to_shape). 
    The affine matrices bringing discrete coordinates to physical space
    are given by from_grid2world (for the displacement field
    discretization) and to_grid2world (for the target discretization).
    Since this function is intended to be used for testing, voxels in the
    input domain will never be assigned to boundary voxels on the output
    domain.

    Parameters
    ----------
    from_shape : array, shape (2,)
        the grid shape where the displacement field will be defined on.
    from_grid2world : array, shape (3,3)
        the grid-to-space transformation of the displacement field
    to_shape : array, shape (2,)
        the grid shape where the deformation field will map the input
        grid to.
    to_grid2world : array, shape (3,3)
        the grid-to-space transformation of the mapped grid
    rng : numpy.random.Generator, optional
        Numpy's random generator for setting seed values when needed.
        Default is None.

    Returns
    -------
    output : array, shape = from_shape + (2,)
        the random displacement field in the physical domain
    int_field : array, shape = from_shape + (2,)
        the assignment of each point in the input grid to the target grid
    """
    cdef:
        cnp.npy_intp i, j, ri, rj
        double di, dj, dii, djj
        int[:, :, :] int_field = np.empty(tuple(from_shape) + (2,),
                                          dtype=np.int32)
        double[:, :, :] output = np.zeros(tuple(from_shape) + (2,),
                                          dtype=np.float64)
        cnp.npy_intp dom_size = from_shape[0]*from_shape[1]

    if not is_valid_affine(from_grid2world, 2):
        raise ValueError("Invalid 'from' affine transform matrix")
    if not is_valid_affine(to_grid2world, 2):
        raise ValueError("Invalid 'to' affine transform matrix")

    if rng is None:
        rng = np.random.default_rng()

    # compute the actual displacement field in the physical space
    for i in range(from_shape[0]):
        for j in range(from_shape[1]):
            # randomly choose where each input grid point will be mapped
            # to in the target grid
            ri = rng.integers(1, to_shape[0]-1)
            rj = rng.integers(1, to_shape[1]-1)
            int_field[i, j, 0] = ri
            int_field[i, j, 1] = rj

            # convert the input point to physical coordinates
            if from_grid2world is not None:
                di = _apply_affine_2d_x0(i, j, 1, from_grid2world)
                dj = _apply_affine_2d_x1(i, j, 1, from_grid2world)
            else:
                di = i
                dj = j

            # convert the output point to physical coordinates
            if to_grid2world is not None:
                dii = _apply_affine_2d_x0(ri, rj, 1, to_grid2world)
                djj = _apply_affine_2d_x1(ri, rj, 1, to_grid2world)
            else:
                dii = ri
                djj = rj

            # the displacement vector at (i,j) must be the target point
            # minus the original point, both in physical space
            output[i, j, 0] = dii - di
            output[i, j, 1] = djj - dj

    return np.asarray(output), np.asarray(int_field)


def create_random_displacement_3d(int[:] from_shape,
                                  double[:, :] from_grid2world,
                                  int[:] to_shape,
                                  double[:, :] to_grid2world,
                                  object rng=None):
    r"""Creates a random 3D displacement 'exactly' mapping points of two grids

    Creates a random 3D displacement field mapping points of an input
    discrete domain (with dimensions given by from_shape) to points of an
    output discrete domain (with shape given by to_shape).

    The affine matrices bringing discrete coordinates to physical space
    are given by from_grid2world (for the displacement field
    discretization) and to_grid2world (for the target discretization).
    Since this function is intended to be used for testing, voxels in the
    input domain will never be assigned to boundary voxels on the output
    domain.

    Parameters
    ----------
    from_shape : array, shape (3,)
        the grid shape where the displacement field will be defined on.
    from_grid2world : array, shape (4,4)
        the grid-to-space transformation of the displacement field
    to_shape : array, shape (3,)
        the grid shape where the deformation field will map the input
        grid to.
    to_grid2world : array, shape (4,4)
        the grid-to-space transformation of the mapped grid
    rng : numpy.random.Generator, optional
        Numpy's random generator for setting seed values when needed.
        Default is None.

    Returns
    -------
    output : array, shape = from_shape + (3,)
        the random displacement field in the physical domain
    int_field : array, shape = from_shape + (3,)
        the assignment of each point in the input grid to the target grid
    """
    cdef:
        cnp.npy_intp i, j, k, ri, rj, rk
        double di, dj, dk, dii, djj, dkk
        int[:, :, :, :] int_field = np.empty(tuple(from_shape) + (3,),
                                             dtype=np.int32)
        double[:, :, :, :] output = np.zeros(tuple(from_shape) + (3,),
                                             dtype=np.float64)
        cnp.npy_intp dom_size = from_shape[0]*from_shape[1]*from_shape[2]

    if not is_valid_affine(from_grid2world, 3):
        raise ValueError("Invalid 'from' affine transform matrix")
    if not is_valid_affine(to_grid2world, 3):
        raise ValueError("Invalid 'to' affine transform matrix")

    if rng is None:
        rng = np.random.default_rng()

    # compute the actual displacement field in the physical space
    for k in range(from_shape[0]):
        for i in range(from_shape[1]):
            for j in range(from_shape[2]):
                # randomly choose the location of each point on the
                # target grid
                rk = rng.integers(1, to_shape[0]-1)
                ri = rng.integers(1, to_shape[1]-1)
                rj = rng.integers(1, to_shape[2]-1)
                int_field[k, i, j, 0] = rk
                int_field[k, i, j, 1] = ri
                int_field[k, i, j, 2] = rj

                # convert the input point to physical coordinates
                if from_grid2world is not None:
                    dk = _apply_affine_3d_x0(k, i, j, 1, from_grid2world)
                    di = _apply_affine_3d_x1(k, i, j, 1, from_grid2world)
                    dj = _apply_affine_3d_x2(k, i, j, 1, from_grid2world)
                else:
                    dk = k
                    di = i
                    dj = j

                # convert the output point to physical coordinates
                if to_grid2world is not None:
                    dkk = _apply_affine_3d_x0(rk, ri, rj, 1, to_grid2world)
                    dii = _apply_affine_3d_x1(rk, ri, rj, 1, to_grid2world)
                    djj = _apply_affine_3d_x2(rk, ri, rj, 1, to_grid2world)
                else:
                    dkk = rk
                    dii = ri
                    djj = rj

                # the displacement vector at (k, i, j) must be the target
                # point minus the original point, both in physical space
                output[k, i, j, 0] = dkk - dk
                output[k, i, j, 1] = dii - di
                output[k, i, j, 2] = djj - dj

    return np.asarray(output), np.asarray(int_field)


def create_harmonic_fields_2d(cnp.npy_intp nrows, cnp.npy_intp ncols,
                              double b, double m):
    r"""Creates an invertible 2D displacement field

    Creates the invertible displacement fields used in Chen et al.
    eqs. 9 and 10 [1]

    Parameters
    ----------
    nrows : int
        number of rows in the resulting harmonic field
    ncols : int
        number of columns in the resulting harmonic field
    b, m : float
        parameters of the harmonic field (as in [1]). To understand the
        effect of these parameters, please consider plotting a deformed
        image (a circle or a grid) under the deformation field, or see
        examples in [1]

    Returns
    -------
    d : array, shape (nrows, ncols, 2)
        the harmonic displacement field
    inv : array, shape (nrows, ncols, 2)
        the analytical inverse of the harmonic displacement field

    [1] Chen, M., Lu, W., Chen, Q., Ruchala, K. J., & Olivera, G. H.
        (2008). A simple fixed-point approach to invert a deformation
        field. Medical Physics, 35(1), 81.
        doi:10.1118/1.2816107
    """
    cdef:
        cnp.npy_intp mid_row = nrows/2
        cnp.npy_intp mid_col = ncols/2
        cnp.npy_intp i, j, ii, jj
        double theta
        double[:, :, :] d = np.zeros((nrows, ncols, 2), dtype=np.float64)
        double[:, :, :] inv = np.zeros((nrows, ncols, 2), dtype=np.float64)
    for i in range(nrows):
        for j in range(ncols):
            ii = i - mid_row
            jj = j - mid_col
            theta = atan2(ii, jj)
            d[i, j, 0] = ii * (1.0 / (1 + b * cos(m * theta)) - 1.0)
            d[i, j, 1] = jj * (1.0 / (1 + b * cos(m * theta)) - 1.0)
            inv[i, j, 0] = b * cos(m * theta) * ii
            inv[i, j, 1] = b * cos(m * theta) * jj

    return np.asarray(d), np.asarray(inv)


def create_harmonic_fields_3d(int nslices, cnp.npy_intp nrows,
                              cnp.npy_intp ncols, double b, double m):
    r"""Creates an invertible 3D displacement field

    Creates the invertible displacement fields used in Chen et al.
    eqs. 9 and 10 [1] computing the angle theta along z-slices.

    Parameters
    ----------
    nslices : int
        number of slices in the resulting harmonic field
    nrows : int
        number of rows in the resulting harmonic field
    ncols : int
        number of columns in the resulting harmonic field
    b, m : float
        parameters of the harmonic field (as in [1]). To understand the
        effect of these parameters, please consider plotting a deformed
        image (e.g. a circle or a grid) under the deformation field, or
        see examples in [1]

    Returns
    -------
    d : array, shape (nslices, nrows, ncols, 3)
        the harmonic displacement field
    inv : array, shape (nslices, nrows, ncols, 3)
        the analytical inverse of the harmonic displacement field

    [1] Chen, M., Lu, W., Chen, Q., Ruchala, K. J., & Olivera, G. H.
        (2008). A simple fixed-point approach to invert a deformation
        field. Medical Physics, 35(1), 81. doi:10.1118/1.2816107
    """
    cdef:
        cnp.npy_intp mid_slice = nslices / 2
        cnp.npy_intp mid_row = nrows / 2
        cnp.npy_intp mid_col = ncols / 2
        cnp.npy_intp i, j, k, ii, jj, kk
        double theta
        double[:, :, :, :] d = np.zeros((nslices, nrows, ncols, 3),
                                        dtype=np.float64)
        double[:, :, :, :] inv = np.zeros((nslices, nrows, ncols, 3),
                                          dtype=np.float64)
    for k in range(nslices):
        for i in range(nrows):
            for j in range(ncols):
                kk = k - mid_slice
                ii = i - mid_row
                jj = j - mid_col
                theta = atan2(ii, jj)
                d[k, i, j, 0] = kk * (1.0 / (1 + b * cos(m * theta)) - 1.0)
                d[k, i, j, 1] = ii * (1.0 / (1 + b * cos(m * theta)) - 1.0)
                d[k, i, j, 2] = jj * (1.0 / (1 + b * cos(m * theta)) - 1.0)
                inv[k, i, j, 0] = b * cos(m * theta) * kk
                inv[k, i, j, 1] = b * cos(m * theta) * ii
                inv[k, i, j, 2] = b * cos(m * theta) * jj

    return np.asarray(d), np.asarray(inv)


def create_circle(cnp.npy_intp nrows, cnp.npy_intp ncols,
                  cnp.npy_intp radius):
    r"""
    Create a binary 2D image where pixel values are 1 iff their distance
    to the center of the image is less than or equal to radius.

    Parameters
    ----------
    nrows : int
        number of rows of the resulting image
    ncols : int
        number of columns of the resulting image
    radius : int
        the radius of the circle

    Returns
    -------
    c : array, shape (nrows, ncols)
        the binary image of the circle with the requested dimensions
    """
    cdef:
        cnp.npy_intp mid_row = nrows/2
        cnp.npy_intp mid_col = ncols/2
        cnp.npy_intp i, j, ii, jj
        double r
        double[:, :] c = np.zeros((nrows, ncols), dtype=np.float64)
    for i in range(nrows):
        for j in range(ncols):
            ii = i - mid_row
            jj = j - mid_col
            r = sqrt(ii*ii + jj*jj)
            if r <= radius:
                c[i, j] = 1
            else:
                c[i, j] = 0
    return np.asarray(c)


def create_sphere(cnp.npy_intp nslices, cnp.npy_intp nrows,
                  cnp.npy_intp ncols, cnp.npy_intp radius):
    r"""
    Create a binary 3D image where voxel values are 1 iff their distance
    to the center of the image is less than or equal to radius.
    Parameters
    ----------
    nslices : int
        number of slices of the resulting image
    nrows : int
        number of rows of the resulting image
    ncols : int
        number of columns of the resulting image
    radius : int
        the radius of the sphere

    Returns
    -------
    c : array, shape (nslices, nrows, ncols)
        the binary image of the sphere with the requested dimensions
    """
    cdef:
        cnp.npy_intp mid_slice = nslices/2
        cnp.npy_intp mid_row = nrows/2
        cnp.npy_intp mid_col = ncols/2
        cnp.npy_intp i, j, k, ii, jj, kk
        double r
        double[:, :, :] s = np.zeros((nslices, nrows, ncols),
                                     dtype=np.float64)
    for k in range(nslices):
        for i in range(nrows):
            for j in range(ncols):
                kk = k - mid_slice
                ii = i - mid_row
                jj = j - mid_col
                r = sqrt(ii*ii + jj*jj + kk*kk)
                if r <= radius:
                    s[k, i, j] = 1
                else:
                    s[k, i, j] = 0
    return np.asarray(s)


def _gradient_3d(floating[:, :, :] img, double[:, :] img_world2grid,
                 double[:] img_spacing, double[:, :] out_grid2world,
                 floating[:, :, :, :] out, int[:, :, :] inside):
    r""" Gradient of a 3D image in physical space coordinates

    Each grid cell (i, j, k) in the sampling grid (determined by
    out.shape) is mapped to its corresponding physical point (x, y, z) by
    multiplying out_grid2world (its grid-to-space transform) by (i, j, k),
    then the image is interpolated at

    P1=(x + h, y, z), Q1=(x - h, y, z)
    P2=(x, y + h, z), Q2=(x, y - h, z)
    P3=(x, y, z + h), Q3=(x, y, z - h)

    (by mapping Pi and Qi to the grid using img_world2grid: the inverse of
    the grid-to-space transform of img). The displacement parameter h is
    of magnitude 0.5 (in physical space units), therefore the approximated
    partial derivatives are given by the difference between the image
    interpolated at Pi and Qi.

    Parameters
    ----------
    img : array, shape (S, R, C)
        the input volume whose gradient will be computed
    img_world2grid : array, shape (4, 4)
        the space-to-grid transform matrix associated to img
    img_spacing : array, shape (3,)
        the spacing between voxels (voxel size along each axis) of the
        input volume
    out_grid2world : array, shape (4, 4)
        the grid-to-space transform associated to the sampling grid
    out : array, shape (S', R', C', 3)
        the buffer in which to store the image gradient
    inside : array, shape (S', R', C')
        the buffer in which to store the flags indicating whether the
        sample point lies inside (=1) or outside (=0) the image grid
    """
    cdef:
        cnp.npy_intp nslices = out.shape[0]
        cnp.npy_intp nrows = out.shape[1]
        cnp.npy_intp ncols = out.shape[2]
        cnp.npy_intp i, j, k, in_flag
        double tmp
        double[:] x = np.empty(shape=(3,), dtype=np.float64)
        double[:] dx = np.empty(shape=(3,), dtype=np.float64)
        double[:] h = np.empty(shape=(3,), dtype=np.float64)
        double[:] q = np.empty(shape=(3,), dtype=np.float64)
    with nogil:
        h[0] = 0.5 * img_spacing[0]
        h[1] = 0.5 * img_spacing[1]
        h[2] = 0.5 * img_spacing[2]
        for k in range(nslices):
            for i in range(nrows):
                for j in range(ncols):
                    inside[k, i, j] = 1
                    # Compute coordinates of index (k, i, j) in
                    # physical space
                    x[0] = _apply_affine_3d_x0(k, i, j, 1, out_grid2world)
                    x[1] = _apply_affine_3d_x1(k, i, j, 1, out_grid2world)
                    x[2] = _apply_affine_3d_x2(k, i, j, 1, out_grid2world)
                    dx[:] = x[:]
                    for p in range(3):
                        # Compute coordinates of point dx on img's grid
                        dx[p] = x[p] - h[p]
                        q[0] = _apply_affine_3d_x0(dx[0], dx[1], dx[2], 1,
                                                   img_world2grid)
                        q[1] = _apply_affine_3d_x1(dx[0], dx[1], dx[2], 1,
                                                   img_world2grid)
                        q[2] = _apply_affine_3d_x2(dx[0], dx[1], dx[2], 1,
                                                   img_world2grid)
                        # Interpolate img at q
                        in_flag = _interpolate_scalar_3d[floating](
                            img, q[0], q[1], q[2], &out[k, i, j, p])
                        if in_flag == 0:
                            out[k, i, j, p] = 0
                            inside[k, i, j] = 0
                            continue
                        tmp = out[k, i, j, p]
                        # Compute coordinates of point dx on img's grid
                        dx[p] = x[p] + h[p]
                        q[0] = _apply_affine_3d_x0(dx[0], dx[1], dx[2], 1,
                                                   img_world2grid)
                        q[1] = _apply_affine_3d_x1(dx[0], dx[1], dx[2], 1,
                                                   img_world2grid)
                        q[2] = _apply_affine_3d_x2(dx[0], dx[1], dx[2], 1,
                                                   img_world2grid)
                        # Interpolate img at q
                        in_flag = _interpolate_scalar_3d[floating](
                            img, q[0], q[1], q[2], &out[k, i, j, p])
                        if in_flag == 0:
                            out[k, i, j, p] = 0
                            inside[k, i, j] = 0
                            continue
                        out[k, i, j, p] = ((out[k, i, j, p] - tmp) /
                                           img_spacing[p])
                        dx[p] = x[p]


def _sparse_gradient_3d(floating[:, :, :] img,
                        double[:, :] img_world2grid,
                        double[:] img_spacing,
                        double[:, :] sample_points,
                        floating[:, :] out, int[:] inside):
    r""" Gradient of a 3D image evaluated at a set of points in physical
    space

    For each row (x_i, y_i, z_i) in sample_points, the image is
    interpolated at

    P1=(x_i + h, y_i, z_i), Q1=(x_i - h, y_i, z_i)
    P2=(x_i, y_i + h, z_i), Q2=(x_i, y_i - h, z_i)
    P3=(x_i, y_i, z_i + h), Q3=(x_i, y_i, z_i - h)

    (by mapping Pi and Qi to the grid using img_world2grid: the inverse of
    the grid-to-space transform of img). The displacement parameter h is
    of magnitude 0.5 (in physical space units), therefore the approximated
    partial derivatives are given by the difference between the image
    interpolated at Pi and Qi.

    Parameters
    ----------
    img : array, shape (S, R, C)
        the input volume whose gradient will be computed
    img_world2grid : array, shape (4, 4)
        the space-to-grid transform matrix associated to img
    img_spacing : array, shape (3,)
        the spacing between voxels (voxel size along each axis) of the
        input
    sample_points : array, shape (n, 3)
        list of points where the derivative will be evaluated (one point
        per row)
    out : array, shape (n, 3)
        the buffer in which to store the image gradient
    inside : array, shape (n,)
        the buffer in which to store the flags indicating whether the
        sample point lies inside (=1) or outside (=0) the image grid
    """
    cdef:
        cnp.npy_intp n = sample_points.shape[0]
        cnp.npy_intp i, in_flag
        double tmp
        double[:] dx = np.empty(shape=(3,), dtype=np.float64)
        double[:] h = np.empty(shape=(3,), dtype=np.float64)
        double[:] q = np.empty(shape=(3,), dtype=np.float64)
    with nogil:
        h[0] = 0.5 * img_spacing[0]
        h[1] = 0.5 * img_spacing[1]
        h[2] = 0.5 * img_spacing[2]
        for i in range(n):
            inside[i] = 1
            dx[0] = sample_points[i, 0]
            dx[1] = sample_points[i, 1]
            dx[2] = sample_points[i, 2]
            for p in range(3):
                # Compute coordinates of point dx on img's grid
                dx[p] = sample_points[i, p] - h[p]
                q[0] = _apply_affine_3d_x0(dx[0], dx[1], dx[2], 1,
                                           img_world2grid)
                q[1] = _apply_affine_3d_x1(dx[0], dx[1], dx[2], 1,
                                           img_world2grid)
                q[2] = _apply_affine_3d_x2(dx[0], dx[1], dx[2], 1,
                                           img_world2grid)
                # Interpolate img at q
                in_flag = _interpolate_scalar_3d[floating](
                    img, q[0], q[1], q[2], &out[i, p])
                if in_flag == 0:
                    out[i, p] = 0
                    inside[i] = 0
                    continue
                tmp = out[i, p]
                # Compute coordinates of point dx on img's grid
                dx[p] = sample_points[i, p] + h[p]
                q[0] = _apply_affine_3d_x0(dx[0], dx[1], dx[2], 1,
                                           img_world2grid)
                q[1] = _apply_affine_3d_x1(dx[0], dx[1], dx[2], 1,
                                           img_world2grid)
                q[2] = _apply_affine_3d_x2(dx[0], dx[1], dx[2], 1,
                                           img_world2grid)
                # Interpolate img at q
                in_flag = _interpolate_scalar_3d[floating](
                    img, q[0], q[1], q[2], &out[i, p])
                if in_flag == 0:
                    out[i, p] = 0
                    inside[i] = 0
                    continue
                out[i, p] = (out[i, p] - tmp) / img_spacing[p]
                dx[p] = sample_points[i, p]


def _gradient_2d(floating[:, :] img, double[:, :] img_world2grid,
                 double[:] img_spacing, double[:, :] out_grid2world,
                 floating[:, :, :] out, int[:, :] inside):
    r""" Gradient of a 2D image in physical space coordinates

    Each grid cell (i, j) in the sampling grid (determined by out.shape)
    is mapped to its corresponding physical point (x, y) by
    multiplying out_grid2world (its grid-to-space transform) by (i, j),
    then the image is interpolated at

    P1=(x + h, y), Q1=(x - h, y)
    P2=(x, y + h), Q2=(x, y - h)

    (by mapping Pi and Qi to the grid using img_world2grid: the inverse of
    the grid-to-space transform of img). The displacement parameter h is
    of magnitude 0.5 (in physical space units), therefore the approximated
    partial derivatives are given by the difference between the image
    interpolated at Pi and Qi.

    Parameters
    ----------
    img : array, shape (R, C)
        the input image whose gradient will be computed
    img_world2grid : array, shape (3, 3)
        the space-to-grid transform matrix associated to img
    img_spacing : array, shape (2,)
        the spacing between pixels (pixel size along each axis) of the
        input image
    out_grid2world : array, shape (3, 3)
        the grid-to-space transform associated to the sampling grid
    out : array, shape (R', C', 2)
        the buffer in which to store the image gradient
    inside : array, shape (R', C')
        the buffer in which to store the flags indicating whether the
        sample point lies inside (=1) or outside (=0) the image grid
    """
    cdef:
        cnp.npy_intp nrows = out.shape[0]
        cnp.npy_intp ncols = out.shape[1]
        cnp.npy_intp i, j, k, in_flag
        double tmp
        double[:] x = np.empty(shape=(2,), dtype=np.float64)
        double[:] dx = np.empty(shape=(2,), dtype=np.float64)
        double[:] h = np.empty(shape=(2,), dtype=np.float64)
        double[:] q = np.empty(shape=(2,), dtype=np.float64)
    with nogil:
        h[0] = 0.5 * img_spacing[0]
        h[1] = 0.5 * img_spacing[1]
        for i in range(nrows):
            for j in range(ncols):
                inside[i, j] = 1
                # Compute coordinates of index (i, j) in physical space
                x[0] = _apply_affine_2d_x0(i, j, 1, out_grid2world)
                x[1] = _apply_affine_2d_x1(i, j, 1, out_grid2world)
                dx[:] = x[:]
                for p in range(2):
                    # Compute coordinates of point dx on img's grid
                    dx[p] = x[p] - h[p]
                    q[0] = _apply_affine_2d_x0(dx[0], dx[1], 1,
                                               img_world2grid)
                    q[1] = _apply_affine_2d_x1(dx[0], dx[1], 1,
                                               img_world2grid)
                    # Interpolate img at q
                    in_flag = _interpolate_scalar_2d[floating](
                        img, q[0], q[1], &out[i, j, p])
                    if in_flag == 0:
                        out[i, j, p] = 0
                        inside[i, j] = 0
                        continue
                    tmp = out[i, j, p]
                    # Compute coordinates of point dx on img's grid
                    dx[p] = x[p] + h[p]
                    q[0] = _apply_affine_2d_x0(dx[0], dx[1], 1,
                                               img_world2grid)
                    q[1] = _apply_affine_2d_x1(dx[0], dx[1], 1,
                                               img_world2grid)
                    # Interpolate img at q
                    in_flag = _interpolate_scalar_2d[floating](
                        img, q[0], q[1], &out[i, j, p])
                    if in_flag == 0:
                        out[i, j, p] = 0
                        inside[i, j] = 0
                        continue
                    out[i, j, p] = (out[i, j, p] - tmp) / img_spacing[p]
                    dx[p] = x[p]


def _sparse_gradient_2d(floating[:, :] img, double[:, :] img_world2grid,
                        double[:] img_spacing,
                        double[:, :] sample_points,
                        floating[:, :] out, int[:] inside):
    r""" Gradient of a 2D image evaluated at a set of points in physical
    space

    For each row (x_i, y_i) in sample_points, the image is interpolated at

    P1=(x_i + h, y_i), Q1=(x_i - h, y_i)
    P2=(x_i, y_i + h), Q2=(x_i, y_i - h)

    (by mapping Pi and Qi to the grid using img_world2grid: the inverse of
    the grid-to-space transform of img). The displacement parameter h is
    of magnitude 0.5 (in physical space units), therefore the approximated
    partial derivatives are given by the difference between the image
    interpolated at Pi and Qi.
    Parameters
    ----------
    img : array, shape (R, C)
        the input image whose gradient will be computed
    img_world2grid : array, shape (3, 3)
        the space-to-grid transform matrix associated to img
    img_spacing : array, shape (2,)
        the spacing between pixels (pixel size along each axis) of the
        input
    sample_points : array, shape (n, 2)
        list of points where the derivative will be evaluated (one point
        per row)
    out : array, shape (n, 2)
        the buffer in which to store the image gradient
    inside : array, shape (n,)
        the buffer in which to store the flags indicating whether the
        sample point lies inside (=1) or outside (=0) the image grid
    """
    cdef:
        cnp.npy_intp n = sample_points.shape[0]
        cnp.npy_intp i, in_flag
        double tmp
        double[:] dx = np.empty(shape=(2,), dtype=np.float64)
        double[:] h = np.empty(shape=(2,), dtype=np.float64)
        double[:] q = np.empty(shape=(2,), dtype=np.float64)
    with nogil:
        h[0] = 0.5 * img_spacing[0]
        h[1] = 0.5 * img_spacing[1]
        for i in range(n):
            inside[i] = 1
            dx[0] = sample_points[i, 0]
            dx[1] = sample_points[i, 1]
            for p in range(2):
                # Compute coordinates of point dx on img's grid
                dx[p] = sample_points[i, p] - h[p]
                q[0] = _apply_affine_2d_x0(dx[0], dx[1], 1, img_world2grid)
                q[1] = _apply_affine_2d_x1(dx[0], dx[1], 1, img_world2grid)
                # Interpolate img at q
                in_flag = _interpolate_scalar_2d[floating](img, q[0], q[1],
                                                           &out[i, p])
                if in_flag == 0:
                    out[i, p] = 0
                    inside[i] = 0
                    continue
                tmp = out[i, p]
                # Compute coordinates of point dx on img's grid
                dx[p] = sample_points[i, p] + h[p]
                q[0] = _apply_affine_2d_x0(dx[0], dx[1], 1, img_world2grid)
                q[1] = _apply_affine_2d_x1(dx[0], dx[1], 1, img_world2grid)
                # Interpolate img at q
                in_flag = _interpolate_scalar_2d[floating](img, q[0], q[1],
                                                           &out[i, p])
                if in_flag == 0:
                    out[i, p] = 0
                    inside[i] = 0
                    continue
                out[i, p] = (out[i, p] - tmp) / img_spacing[p]
                dx[p] = sample_points[i, p]


def gradient(img, img_world2grid, img_spacing, out_shape, out_grid2world):
    r""" Gradient of an image in physical space

    Parameters
    ----------
    img : 2D or 3D array, shape (R, C) or (S, R, C)
        the input image whose gradient will be computed
    img_world2grid : array, shape (dim+1, dim+1)
        the space-to-grid transform matrix associated to img
    img_spacing : array, shape (dim,)
        the spacing between voxels (voxel size along each axis) of the
        input image
    out_shape : array, shape (dim,)
        the number of (slices), rows and columns of the sampling grid
    out_grid2world : array, shape (dim+1, dim+1)
        the grid-to-space transform associated to the sampling grid

    Returns
    -------
    out : array, shape (R', C', 2) or (S', R', C', 3)
        the image gradient sampled on the requested grid, where (S'), R',
        C' are given by out_shape
    inside : array, shape = out_shape
        flags indicating whether each sample point lies inside (=1) or
        outside (=0) the image grid
    """
    dim = len(img.shape)
    if not is_valid_affine(img_world2grid, dim):
        raise ValueError("Invalid image affine transform")
    if not is_valid_affine(out_grid2world, dim):
        raise ValueError("Invalid sampling grid affine transform")
    if len(img_spacing) < dim:
        raise ValueError("Invalid spacings")

    ftype = img.dtype.type
    out = np.empty(tuple(out_shape)+(dim,), dtype=ftype)
    inside = np.empty(tuple(out_shape), dtype=np.int32)
    # Select joint density gradient 2D or 3D
    if dim == 2:
        jd_grad = _gradient_2d
    elif dim == 3:
        jd_grad = _gradient_3d
    else:
        raise ValueError('Undefined gradient for image dimension %d' %
                         (dim,))
    if img_world2grid.dtype != np.float64:
        img_world2grid = img_world2grid.astype(np.float64)
    if img_spacing.dtype != np.float64:
        img_spacing = img_spacing.astype(np.float64)
    if out_grid2world.dtype != np.float64:
        out_grid2world = out_grid2world.astype(np.float64)
    jd_grad(img, img_world2grid, img_spacing,
out_grid2world, out, inside) return np.asarray(out), np.asarray(inside) def sparse_gradient(img, img_world2grid, img_spacing, sample_points): r""" Gradient of an image in physical space Parameters ---------- img : 2D or 3D array, shape (R, C) or (S, R, C) the input image whose gradient will be computed img_world2grid : array, shape (dim+1, dim+1) the space-to-grid transform matrix associated to img img_spacing : array, shape (dim,) the spacing between voxels (voxel size along each axis) of the input image sample_points: array, shape (n, dim) list of points where the derivative will be evaluated (one point per row) Returns ------- out : array, shape (n, dim) the gradient at each point stored at its corresponding row """ dim = len(img.shape) if not is_valid_affine(img_world2grid, dim): raise ValueError("Invalid affine transform matrix") if len(img_spacing) < dim: raise ValueError("Invalid spacings") ftype = img.dtype.type n = sample_points.shape[0] out = np.empty(shape=(n, dim), dtype=ftype) inside = np.empty(shape=(n,), dtype=np.int32) # Select joint density gradient 2D or 3D if dim == 2: jd_grad = _sparse_gradient_2d else: jd_grad = _sparse_gradient_3d if img_world2grid.dtype != np.float64: img_world2grid = img_world2grid.astype(np.float64) if img_spacing.dtype != np.float64: img_spacing = img_spacing.astype(np.float64) jd_grad(img, img_world2grid, img_spacing, sample_points, out, inside) return np.asarray(out), np.asarray(inside) dipy-1.11.0/dipy/conftest.py000066400000000000000000000070611476546756600157410ustar00rootroot00000000000000"""pytest initialization.""" import importlib import os import re import warnings import numpy as np import pytest """ Set numpy print options to "legacy" for new versions of numpy If imported into a file, pytest will run this before any doctests. References ---------- https://github.com/numpy/numpy/commit/710e0327687b9f7653e5ac02d222ba62c657a718 https://github.com/numpy/numpy/commit/734b907fc2f7af6e40ec989ca49ee6d87e21c495 https://github.com/nipy/nibabel/pull/556 """ np.set_printoptions(legacy="1.13") warnings.simplefilter(action="default", category=FutureWarning) # List of files that pytest should ignore collect_ignore = ["testing/decorators.py", "bench*.py", "**/benchmarks/*"] def pytest_addoption(parser): parser.addoption( "--warnings-as-errors", action="store_true", help="Make all uncaught warnings into errors.", ) @pytest.hookimpl(trylast=True) def pytest_sessionfinish(session, exitstatus): have_werrors = os.getenv("DIPY_WERRORS", False) have_werrors = ( session.config.getoption("--warnings-as-errors", False) or have_werrors ) if have_werrors: # Check if there were any warnings during the test session reporter = session.config.pluginmanager.get_plugin("terminalreporter") if reporter.stats.get("warnings", None): session.exitstatus = 2 @pytest.hookimpl def pytest_terminal_summary(terminalreporter, exitstatus, config): have_werrors = os.getenv("DIPY_WERRORS", False) have_werrors = config.getoption("--warnings-as-errors", False) or have_werrors have_warnings = terminalreporter.stats.get("warnings", None) if have_warnings and have_werrors: terminalreporter.ensure_newline() terminalreporter.section("Werrors", sep="=", red=True, bold=True) terminalreporter.line( "Warnings as errors: Activated. \n" f"{len(have_warnings)} warnings were raised and " "treated as errors. 
\n" ) def pytest_collect_file(parent, file_path): if file_path.suffix in [".pyx", ".so"] and file_path.name.startswith("test"): return PyxFile.from_parent(parent, path=file_path) class PyxFile(pytest.File): def collect(self): try: match = re.search(r"(dipy/[^\/]+/tests/test_\w+)", str(self.path)) mod_name = match.group(1) if match else None if mod_name is None: raise PyxException("Could not find test module for " f"{self.path}.") mod_name = mod_name.replace("/", ".") mod = importlib.import_module(mod_name) for name in dir(mod): item = getattr(mod, name) if callable(item) and name.startswith("test_"): yield PyxItem.from_parent(self, name=name, test_func=item, mod=mod) except ImportError as e: msg = ( f"Import failed for {self.path}. Make sure you cython file " "has been compiled." ) raise PyxException(msg, self.path, 0) from e class PyxItem(pytest.Item): def __init__(self, *, test_func, mod, **kwargs): super().__init__(**kwargs) self.mod = mod self.test_func = test_func def runtest(self): """Called to execute the test item.""" self.test_func() def repr_failure(self, excinfo): """Called when self.runtest() raises an exception.""" return excinfo.value.args[0] def reportinfo(self): return self.path, 0, f"test: {self.name}" class PyxException(Exception): """Custom exception for error reporting.""" dipy-1.11.0/dipy/core/000077500000000000000000000000001476546756600144665ustar00rootroot00000000000000dipy-1.11.0/dipy/core/__init__.py000066400000000000000000000000601476546756600165730ustar00rootroot00000000000000# Init for core dipy objects """Core objects""" dipy-1.11.0/dipy/core/geometry.py000066400000000000000000001020511476546756600166720ustar00rootroot00000000000000"""Utility functions for algebra etc""" import itertools import math import numpy as np import numpy.linalg as npl from dipy.testing.decorators import warning_for_keywords # epsilon for testing whether a number is close to zero _EPS = np.finfo(float).eps * 4.0 # axis sequences for Euler angles _NEXT_AXIS = [1, 2, 0, 1] # map axes strings to/from tuples of inner axis, parity, repetition, frame _AXES2TUPLE = { "sxyz": (0, 0, 0, 0), "sxyx": (0, 0, 1, 0), "sxzy": (0, 1, 0, 0), "sxzx": (0, 1, 1, 0), "syzx": (1, 0, 0, 0), "syzy": (1, 0, 1, 0), "syxz": (1, 1, 0, 0), "syxy": (1, 1, 1, 0), "szxy": (2, 0, 0, 0), "szxz": (2, 0, 1, 0), "szyx": (2, 1, 0, 0), "szyz": (2, 1, 1, 0), "rzyx": (0, 0, 0, 1), "rxyx": (0, 0, 1, 1), "ryzx": (0, 1, 0, 1), "rxzx": (0, 1, 1, 1), "rxzy": (1, 0, 0, 1), "ryzy": (1, 0, 1, 1), "rzxy": (1, 1, 0, 1), "ryxy": (1, 1, 1, 1), "ryxz": (2, 0, 0, 1), "rzxz": (2, 0, 1, 1), "rxyz": (2, 1, 0, 1), "rzyz": (2, 1, 1, 1), } _TUPLE2AXES = dict({(v, k) for k, v in _AXES2TUPLE.items()}) def sphere2cart(r, theta, phi): """Spherical to Cartesian coordinates This is the standard physics convention where `theta` is the inclination (polar) angle, and `phi` is the azimuth angle. Imagine a sphere with center (0,0,0). Orient it with the z axis running south-north, the y axis running west-east and the x axis from posterior to anterior. `theta` (the inclination angle) is the angle to rotate from the z-axis (the zenith) around the y-axis, towards the x axis. Thus the rotation is counter-clockwise from the point of view of positive y. `phi` (azimuth) gives the angle of rotation around the z-axis towards the y axis. The rotation is counter-clockwise from the point of view of positive z. 
Equivalently, given a point P on the sphere, with coordinates x, y, z, `theta` is the angle between P and the z-axis, and `phi` is the angle between the projection of P onto the XY plane, and the X axis. Geographical nomenclature designates theta as 'co-latitude', and phi as 'longitude' Parameters ---------- r : array_like radius theta : array_like inclination or polar angle phi : array_like azimuth angle Returns ------- x : array x coordinate(s) in Cartesian space y : array y coordinate(s) in Cartesian space z : array z coordinate Notes ----- See these pages: * https://en.wikipedia.org/wiki/Spherical_coordinate_system * https://mathworld.wolfram.com/SphericalCoordinates.html for excellent discussion of the many different conventions possible. Here we use the physics conventions, used in the wikipedia page. Derivations of the formulae are simple. Consider a vector x, y, z of length r (norm of x, y, z). The inclination angle (theta) can be found from: cos(theta) == z / r -> z == r * cos(theta). This gives the hypotenuse of the projection onto the XY plane, which we will call Q. Q == r*sin(theta). Now x / Q == cos(phi) -> x == r * sin(theta) * cos(phi) and so on. We have deliberately named this function ``sphere2cart`` rather than ``sph2cart`` to distinguish it from the Matlab function of that name, because the Matlab function uses an unusual convention for the angles that we did not want to replicate. The Matlab function is trivial to implement with the formulae given in the Matlab help. """ sin_theta = np.sin(theta) x = r * np.cos(phi) * sin_theta y = r * np.sin(phi) * sin_theta z = r * np.cos(theta) x, y, z = np.broadcast_arrays(x, y, z) return x, y, z def cart2sphere(x, y, z): r"""Return angles for Cartesian 3D coordinates `x`, `y`, and `z` See doc for ``sphere2cart`` for angle conventions and derivation of the formulae. $0\le\theta\mathrm{(theta)}\le\pi$ and $-\pi\le\phi\mathrm{(phi)}\le\pi$ Parameters ---------- x : array_like x coordinate in Cartesian space y : array_like y coordinate in Cartesian space z : array_like z coordinate Returns ------- r : array radius theta : array inclination (polar) angle phi : array azimuth angle """ r = np.sqrt(x * x + y * y + z * z) cos = np.divide(z, r, where=r > 0) theta = np.arccos(cos, where=(cos >= -1) & (cos <= 1)) theta = np.where(r > 0, theta, 0.0) phi = np.arctan2(y, x) r, theta, phi = np.broadcast_arrays(r, theta, phi) return r, theta, phi def sph2latlon(theta, phi): """Convert spherical coordinates to latitude and longitude. Returns ------- lat, lon : ndarray Latitude and longitude. """ return np.rad2deg(theta - np.pi / 2), np.rad2deg(phi - np.pi) @warning_for_keywords() def normalized_vector(vec, *, axis=-1): """Return vector divided by its Euclidean (L2) norm See :term:`unit vector` and :term:`Euclidean norm` Parameters ---------- vec : array_like shape (3,) Returns ------- nvec : array shape (3,) vector divided by L2 norm Examples -------- >>> vec = [1, 2, 3] >>> l2n = np.sqrt(np.dot(vec, vec)) >>> nvec = normalized_vector(vec) >>> np.allclose(np.array(vec) / l2n, nvec) True >>> vec = np.array([[1, 2, 3]]) >>> vec.shape == (1, 3) True >>> normalized_vector(vec).shape == (1, 3) True """ return vec / vector_norm(vec, axis=axis, keepdims=True) @warning_for_keywords() def vector_norm(vec, *, axis=-1, keepdims=False): """Return vector Euclidean (L2) norm See :term:`unit vector` and :term:`Euclidean norm` Parameters ---------- vec : array_like Vectors to norm. axis : int Axis over which to norm. By default norm over last axis. 
If `axis` is None, `vec` is flattened then normed. keepdims : bool If True, the output will have the same number of dimensions as `vec`, with shape 1 on `axis`. Returns ------- norm : array Euclidean norms of vectors. Examples -------- >>> import numpy as np >>> vec = [[8, 15, 0], [0, 36, 77]] >>> vector_norm(vec) array([ 17., 85.]) >>> vector_norm(vec, keepdims=True) array([[ 17.], [ 85.]]) >>> vector_norm(vec, axis=0) array([ 8., 39., 77.]) """ vec = np.asarray(vec) vec_norm = np.sqrt((vec * vec).sum(axis)) if keepdims: if axis is None: shape = [1] * vec.ndim else: shape = list(vec.shape) shape[axis] = 1 vec_norm = vec_norm.reshape(shape) return vec_norm def rodrigues_axis_rotation(r, theta): r"""Rodrigues formula Rotation matrix for rotation around axis `r` for angle `theta`. The rotation matrix is given by the Rodrigues formula: .. math:: :nowrap: \mathbf{R} = \mathbf{I} + \sin(\theta)*\mathbf{S}_n + (1-\cos(\theta))*\mathbf{S}_n^2 with: .. math:: Sn = \begin{pmatrix} 0 & -n{z} & n{y} \\ n{z} & 0 & -n{x} \\ -n{y} & n{x} & 0 \end{pmatrix} where :math:`n = r / ||r||` In case the angle :math:`||r||` is very small, the above formula may lead to numerical instabilities. We instead use a Taylor expansion around :math:`theta=0`: .. math:: :nowrap: R = I + \sin(\theta)/\theta \mathbf{S}_r + (1-\cos(\theta))/\theta^2 \mathbf{S}_r^2 leading to: .. math:: :nowrap: R = \mathbf{I} + (1-\theta^2/6)*\mathbf{S}_r + (\frac{1}{2}-\theta^2/24)*\mathbf{S}_r^2 Parameters ---------- r : array_like shape (3,) Axis. theta : float Angle in degrees. Returns ------- R : array, shape (3,3) Rotation matrix. Examples -------- >>> import numpy as np >>> from dipy.core.geometry import rodrigues_axis_rotation >>> v=np.array([0,0,1]) >>> u=np.array([1,0,0]) >>> R=rodrigues_axis_rotation(v,40) >>> ur=np.dot(R,u) >>> np.round(np.rad2deg(np.arccos(np.dot(ur,u)))) 40.0 """ theta = np.deg2rad(theta) if theta > 1e-30: n = r / np.linalg.norm(r) Sn = np.array([[0, -n[2], n[1]], [n[2], 0, -n[0]], [-n[1], n[0], 0]]) R = np.eye(3) + np.sin(theta) * Sn + (1 - np.cos(theta)) * np.dot(Sn, Sn) else: Sr = np.array([[0, -r[2], r[1]], [r[2], 0, -r[0]], [-r[1], r[0], 0]]) theta2 = theta * theta R = np.eye(3) + (1 - theta2 / 6.0) * Sr + (0.5 - theta2 / 24.0) * np.dot(Sr, Sr) return R def nearest_pos_semi_def(B): """Least squares positive semi-definite tensor estimation. See :footcite:p:`Niethammer2006` for further details about the method. Parameters ---------- B : (3,3) array_like B matrix - symmetric. We do not check the symmetry. Returns ------- npds : (3,3) array Estimated nearest positive semi-definite array to matrix `B`. Examples -------- >>> B = np.diag([1, 1, -1]) >>> nearest_pos_semi_def(B) array([[ 0.75, 0. , 0. ], [ 0. , 0.75, 0. ], [ 0. , 0. , 0. ]]) References ---------- .. 
footbibliography:: """ B = np.asarray(B) vals, vecs = npl.eigh(B) # indices of eigenvalues in descending order inds = np.argsort(vals)[::-1] vals = vals[inds] cardneg = np.sum(vals < 0) if cardneg == 0: return B if cardneg == 3: return np.zeros((3, 3)) lam1a, lam2a, lam3a = vals scalers = np.zeros((3,)) if cardneg == 2: b112 = np.max([0, lam1a + (lam2a + lam3a) / 3.0]) scalers[0] = b112 elif cardneg == 1: lam1b = lam1a + 0.25 * lam3a lam2b = lam2a + 0.25 * lam3a if lam1b >= 0 and lam2b >= 0: scalers[:2] = lam1b, lam2b else: # one of the lam1b, lam2b is < 0 if lam2b < 0: b111 = np.max([0, lam1a + (lam2a + lam3a) / 3.0]) scalers[0] = b111 if lam1b < 0: b221 = np.max([0, lam2a + (lam1a + lam3a) / 3.0]) scalers[1] = b221 # resort the scalers to match the original vecs scalers = scalers[np.argsort(inds)] return np.dot(vecs, np.dot(np.diag(scalers), vecs.T)) @warning_for_keywords() def sphere_distance(pts1, pts2, *, radius=None, check_radius=True): """Distance across sphere surface between `pts1` and `pts2` Parameters ---------- pts1 : (N,R) or (R,) array_like where N is the number of points and R is the number of coordinates defining a point (``R==3`` for 3D) pts2 : (N,R) or (R,) array_like where N is the number of points and R is the number of coordinates defining a point (``R==3`` for 3D). It should be possible to broadcast `pts1` against `pts2` radius : None or float, optional Radius of sphere. Default is to work out radius from mean of the length of each point vector check_radius : bool, optional If True, check if the points are on the sphere surface - i.e check if the vector lengths in `pts1` and `pts2` are close to `radius`. Default is True. Returns ------- d : (N,) or (0,) array Distances between corresponding points in `pts1` and `pts2` across the spherical surface, i.e. the great circle distance See Also -------- cart_distance : cartesian distance between points vector_cosine : cosine of angle between vectors Examples -------- >>> print('%.4f' % sphere_distance([0,1],[1,0])) 1.5708 >>> print('%.4f' % sphere_distance([0,3],[3,0])) 4.7124 """ pts1 = np.asarray(pts1) pts2 = np.asarray(pts2) lens1 = np.sqrt(np.sum(pts1**2, axis=-1)) lens2 = np.sqrt(np.sum(pts2**2, axis=-1)) if radius is None: radius = (np.mean(lens1) + np.mean(lens2)) / 2.0 if check_radius: if not (np.allclose(radius, lens1) and np.allclose(radius, lens2)): raise ValueError("Radii do not match sphere surface") # Get angle with vector cosine dots = np.inner(pts1, pts2) lens = lens1 * lens2 angle_cos = np.arccos(dots / lens) return angle_cos * radius def cart_distance(pts1, pts2): """Cartesian distance between `pts1` and `pts2` If either of `pts1` or `pts2` is 2D, then we take the first dimension to index points, and the second indexes coordinate. More generally, we take the last dimension to be the coordinate dimension. Parameters ---------- pts1 : (N,R) or (R,) array_like where N is the number of points and R is the number of coordinates defining a point (``R==3`` for 3D) pts2 : (N,R) or (R,) array_like where N is the number of points and R is the number of coordinates defining a point (``R==3`` for 3D). 
    It should be possible to broadcast `pts1` against `pts2`

    Returns
    -------
    d : (N,) or (0,) array
        Cartesian distances between corresponding points in `pts1` and `pts2`

    See Also
    --------
    sphere_distance : distance between points on sphere surface

    Examples
    --------
    >>> cart_distance([0,0,0], [0,0,3])
    3.0
    """
    sqs = np.subtract(pts1, pts2) ** 2
    return np.sqrt(np.sum(sqs, axis=-1))


def vector_cosine(vecs1, vecs2):
    """Cosine of angle between two (sets of) vectors

    The cosine of the angle between two vectors ``v1`` and ``v2`` is given by
    the inner product of ``v1`` and ``v2`` divided by the product of the
    vector lengths::

        v_cos = np.inner(v1, v2) / (np.sqrt(np.sum(v1**2)) *
                                    np.sqrt(np.sum(v2**2)))

    Parameters
    ----------
    vecs1 : (N, R) or (R,) array_like
        N vectors (as rows) or single vector. Vectors have R elements.
    vecs2 : (N, R) or (R,) array_like
        N vectors (as rows) or single vector. Vectors have R elements. It
        should be possible to broadcast `vecs1` against `vecs2`

    Returns
    -------
    vcos : (N,) or (0,) array
        Vector cosines. To get the angles you will need ``np.arccos``

    Notes
    -----
    The vector cosine will be the same as the correlation only if all the
    input vectors have zero mean.
    """
    vecs1 = np.asarray(vecs1)
    vecs2 = np.asarray(vecs2)
    lens1 = np.sqrt(np.sum(vecs1**2, axis=-1))
    lens2 = np.sqrt(np.sum(vecs2**2, axis=-1))
    dots = np.inner(vecs1, vecs2)
    lens = lens1 * lens2
    return dots / lens


def lambert_equal_area_projection_polar(theta, phi):
    r"""Lambert Equal Area Projection from polar sphere to plane

    Return positions in (y1,y2) plane corresponding to the points with polar
    coordinates (theta, phi) on the unit sphere, under the Lambert Equal Area
    Projection mapping (see Mardia and Jupp (2000), Directional Statistics,
    p. 161).

    See doc for ``sphere2cart`` for angle conventions

    - $0 \le \theta \le \pi$ and $0 \le \phi \le 2 \pi$
    - $|(y_1,y_2)| \le 2$

    The Lambert EAP maps the upper hemisphere to the planar disc of radius 1
    and the lower hemisphere to the planar annulus between radii 1 and 2, and
    *vice versa*.

    Parameters
    ----------
    theta : array_like
        theta spherical coordinates
    phi : array_like
        phi spherical coordinates

    Returns
    -------
    y : (N,2) array
        planar coordinates of points following mapping by Lambert's EAP.
    """
    return (
        2
        * np.repeat(np.sin(theta / 2), 2).reshape((theta.shape[0], 2))
        * np.column_stack((np.cos(phi), np.sin(phi)))
    )


def lambert_equal_area_projection_cart(x, y, z):
    r"""Lambert Equal Area Projection from cartesian vector to plane

    Return positions in $(y_1,y_2)$ plane corresponding to the directions of
    the vectors with cartesian coordinates xyz under the Lambert Equal Area
    Projection mapping (see Mardia and Jupp (2000), Directional Statistics,
    p. 161).

    The Lambert EAP maps the upper hemisphere to the planar disc of radius 1
    and the lower hemisphere to the planar annulus between radii 1 and 2, and
    *vice versa*.

    See doc for ``sphere2cart`` for angle conventions

    Parameters
    ----------
    x : array_like
        x coordinate in Cartesian space
    y : array_like
        y coordinate in Cartesian space
    z : array_like
        z coordinate

    Returns
    -------
    y : (N,2) array
        planar coordinates of points following mapping by Lambert's EAP.
    """
    (r, theta, phi) = cart2sphere(x, y, z)
    return lambert_equal_area_projection_polar(theta, phi)


@warning_for_keywords()
def euler_matrix(ai, aj, ak, *, axes="sxyz"):
    """Return homogeneous rotation matrix from Euler angles and axis sequence.
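    The `axes` string encodes the convention used (see the ``_AXES2TUPLE``
    table above): the first character selects rotations about the axes of a
    static (extrinsic, 's') or rotating (intrinsic, 'r') frame, and the
    remaining three characters give the order of the rotation axes, so e.g.
    'sxyz' applies rotations about the static x, y and z axes in turn.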
    Code modified from the work of Christoph Gohlke, link provided here
    https://github.com/cgohlke/transformations/blob/master/transformations/transformations.py

    Parameters
    ----------
    ai, aj, ak : Euler's roll, pitch and yaw angles
    axes : One of 24 axis sequences as string or encoded tuple

    Returns
    -------
    matrix : ndarray (4, 4)

    Examples
    --------
    >>> import numpy
    >>> R = euler_matrix(1, 2, 3, axes='syxz')
    >>> numpy.allclose(numpy.sum(R[0]), -1.34786452)
    True
    >>> R = euler_matrix(1, 2, 3, axes=(0, 1, 0, 1))
    >>> numpy.allclose(numpy.sum(R[0]), -0.383436184)
    True
    >>> ai, aj, ak = (4.0*math.pi) * (numpy.random.random(3) - 0.5)
    >>> for axes in _AXES2TUPLE.keys():
    ...     _ = euler_matrix(ai, aj, ak, axes=axes)
    >>> for axes in _TUPLE2AXES.keys():
    ...     _ = euler_matrix(ai, aj, ak, axes=axes)
    """
    try:
        firstaxis, parity, repetition, frame = _AXES2TUPLE[axes]
    except (AttributeError, KeyError):
        firstaxis, parity, repetition, frame = axes

    i = firstaxis
    j = _NEXT_AXIS[i + parity]
    k = _NEXT_AXIS[i - parity + 1]

    if frame:
        ai, ak = ak, ai
    if parity:
        ai, aj, ak = -ai, -aj, -ak

    si, sj, sk = math.sin(ai), math.sin(aj), math.sin(ak)
    ci, cj, ck = math.cos(ai), math.cos(aj), math.cos(ak)
    cc, cs = ci * ck, ci * sk
    sc, ss = si * ck, si * sk

    M = np.identity(4)
    if repetition:
        M[i, i] = cj
        M[i, j] = sj * si
        M[i, k] = sj * ci
        M[j, i] = sj * sk
        M[j, j] = -cj * ss + cc
        M[j, k] = -cj * cs - sc
        M[k, i] = -sj * ck
        M[k, j] = cj * sc + cs
        M[k, k] = cj * cc - ss
    else:
        M[i, i] = cj * ck
        M[i, j] = sj * sc - cs
        M[i, k] = sj * cc + ss
        M[j, i] = cj * sk
        M[j, j] = sj * ss + cc
        M[j, k] = sj * cs - sc
        M[k, i] = -sj
        M[k, j] = cj * si
        M[k, k] = cj * ci
    return M


@warning_for_keywords()
def compose_matrix(
    *, scale=None, shear=None, angles=None, translate=None, perspective=None
):
    """Return 4x4 transformation matrix from sequence of transformations.

    Code modified from the work of Christoph Gohlke, link provided here
    https://github.com/cgohlke/transformations/blob/master/transformations/transformations.py

    This is the inverse of the ``decompose_matrix`` function.

    Parameters
    ----------
    scale : (3,) array_like
        Scaling factors.
    shear : array_like
        Shear factors for x-y, x-z, y-z axes.
    angles : array_like
        Euler angles about static x, y, z axes.
    translate : array_like
        Translation vector along x, y, z axes.
    perspective : array_like
        Perspective partition of matrix.

    Returns
    -------
    matrix : 4x4 array

    Examples
    --------
    >>> import math
    >>> import numpy as np
    >>> import dipy.core.geometry as gm
    >>> scale = np.random.random(3) - 0.5
    >>> shear = np.random.random(3) - 0.5
    >>> angles = (np.random.random(3) - 0.5) * (2*math.pi)
    >>> trans = np.random.random(3) - 0.5
    >>> persp = np.random.random(4) - 0.5
    >>> M0 = gm.compose_matrix(
    ...     scale=scale, shear=shear, angles=angles, translate=trans, perspective=persp
    ...
) """ M = np.identity(4) if perspective is not None: P = np.identity(4) P[3, :] = perspective[:4] M = np.dot(M, P) if translate is not None: T = np.identity(4) T[:3, 3] = translate[:3] M = np.dot(M, T) if angles is not None: R = euler_matrix(angles[0], angles[1], angles[2], axes="sxyz") M = np.dot(M, R) if shear is not None: Z = np.identity(4) Z[1, 2] = shear[2] Z[0, 2] = shear[1] Z[0, 1] = shear[0] M = np.dot(M, Z) if scale is not None: S = np.identity(4) S[0, 0] = scale[0] S[1, 1] = scale[1] S[2, 2] = scale[2] M = np.dot(M, S) M /= M[3, 3] return M def decompose_matrix(matrix): """Return sequence of transformations from transformation matrix. Code modified from the excellent work of Christoph Gohlke, link provided here: https://github.com/cgohlke/transformations/blob/master/transformations/transformations.py Parameters ---------- matrix : array_like Non-degenerate homogeneous transformation matrix Returns ------- scale : (3,) ndarray Three scaling factors. shear : (3,) ndarray Shear factors for x-y, x-z, y-z axes. angles : (3,) ndarray Euler angles about static x, y, z axes. translate : (3,) ndarray Translation vector along x, y, z axes. perspective : ndarray Perspective partition of matrix. Raises ------ ValueError If matrix is of wrong type or degenerate. Examples -------- >>> import numpy as np >>> T0=np.diag([2,1,1,1]) >>> scale, shear, angles, trans, persp = decompose_matrix(T0) """ M = np.array(matrix, dtype=np.float64, copy=True).T if abs(M[3, 3]) < _EPS: raise ValueError("M[3, 3] is zero") M /= M[3, 3] P = M.copy() P[:, 3] = 0, 0, 0, 1 if not np.linalg.det(P): raise ValueError("matrix is singular") scale = np.zeros((3,), dtype=np.float64) shear = [0, 0, 0] angles = [0, 0, 0] if any(abs(M[:3, 3]) > _EPS): perspective = np.dot(M[:, 3], np.linalg.inv(P.T)) M[:, 3] = 0, 0, 0, 1 else: perspective = np.array((0, 0, 0, 1), dtype=np.float64) translate = M[3, :3].copy() M[3, :3] = 0 row = M[:3, :3].copy() scale[0] = vector_norm(row[0]) row[0] /= scale[0] shear[0] = np.dot(row[0], row[1]) row[1] -= row[0] * shear[0] scale[1] = vector_norm(row[1]) row[1] /= scale[1] shear[0] /= scale[1] shear[1] = np.dot(row[0], row[2]) row[2] -= row[0] * shear[1] shear[2] = np.dot(row[1], row[2]) row[2] -= row[1] * shear[2] scale[2] = vector_norm(row[2]) row[2] /= scale[2] shear[1:] /= scale[2] if np.dot(row[0], np.cross(row[1], row[2])) < 0: scale *= -1 row *= -1 angles[1] = math.asin(-row[0, 2]) if math.cos(angles[1]): angles[0] = math.atan2(row[1, 2], row[2, 2]) angles[2] = math.atan2(row[0, 1], row[0, 0]) else: # angles[0] = math.atan2(row[1, 0], row[1, 1]) angles[0] = math.atan2(-row[2, 1], row[1, 1]) angles[2] = 0.0 return scale, shear, angles, translate, perspective def circumradius(a, b, c): """a, b and c are 3-dimensional vectors which are the vertices of a triangle. The function returns the circumradius of the triangle, i.e the radius of the smallest circle that can contain the triangle. In the degenerate case when the 3 points are collinear it returns half the distance between the furthest apart points. 
    Parameters
    ----------
    a, b, c : (3,) array_like
        the three vertices of the triangle

    Returns
    -------
    circumradius : float
        the desired circumradius
    """
    x = a - c
    xx = np.linalg.norm(x) ** 2
    y = b - c
    yy = np.linalg.norm(y) ** 2
    z = np.cross(x, y)
    # test for collinearity
    if np.linalg.norm(z) == 0:
        return np.sqrt(np.max([np.dot(x, x), np.dot(y, y), np.dot(a - b, a - b)])) / 2.0
    else:
        # The circumcenter p (relative to vertex c) satisfies
        # p.x == xx / 2, p.y == yy / 2 and p.z == 0, i.e. m @ p == rhs
        m = np.vstack((x, y, z))
        w = np.dot(np.linalg.inv(m), np.array([xx / 2.0, yy / 2.0, 0]))
        return np.linalg.norm(w)


def vec2vec_rotmat(u, v):
    r"""rotation matrix from 2 unit vectors

    u, v being unit 3d vectors return a 3x3 rotation matrix R that aligns u to
    v.

    In general there are many rotations that will map u to v. If S is any
    rotation using v as an axis then S.R will also map u to v since (S.R)u =
    S(Ru) = Sv = v. The rotation R returned by vec2vec_rotmat leaves fixed the
    perpendicular to the plane spanned by u and v.

    The transpose of R will align v to u.

    Parameters
    ----------
    u : array, shape(3,)
    v : array, shape(3,)

    Returns
    -------
    R : array, shape(3,3)

    Examples
    --------
    >>> import numpy as np
    >>> from dipy.core.geometry import vec2vec_rotmat
    >>> u=np.array([1,0,0])
    >>> v=np.array([0,1,0])
    >>> R=vec2vec_rotmat(u,v)
    >>> np.dot(R,u)
    array([ 0.,  1.,  0.])
    >>> np.dot(R.T,v)
    array([ 1.,  0.,  0.])
    """
    # Cross product is the first step to find R
    # Rely on numpy instead of manual checking for failing
    # cases
    w = np.cross(u, v)
    wn = np.linalg.norm(w)

    # Check that cross product is OK and vectors
    # u, v are not collinear (norm(w)>0.0)
    if np.isnan(wn) or wn < np.finfo(float).eps:
        norm_u_v = np.linalg.norm(u - v)
        # This is the case of two antipodal vectors:
        # ** former checking assumed norm(u) == norm(v)
        if norm_u_v > np.linalg.norm(u):
            return -np.eye(3)
        return np.eye(3)

    # if everything ok, normalize w
    w = w / wn

    # vp is in plane of u,v, perpendicular to u
    vp = v - (np.dot(u, v) * u)
    vp = vp / np.linalg.norm(vp)

    # (u vp w) is an orthonormal basis
    P = np.array([u, vp, w])
    Pt = P.T
    cosa = np.clip(np.dot(u, v), -1, 1)
    sina = np.sqrt(1 - cosa**2)
    R = np.array([[cosa, -sina, 0], [sina, cosa, 0], [0, 0, 1]])
    Rp = np.dot(Pt, np.dot(R, P))

    # make sure that you don't return any Nans
    # check using the appropriate tool in numpy
    if np.any(np.isnan(Rp)):
        return np.eye(3)

    return Rp


def compose_transformations(*mats):
    """Compose multiple 4x4 affine transformations in one 4x4 matrix

    Parameters
    ----------
    mat1 : array, (4, 4)
    mat2 : array, (4, 4)
    ...
    matN : array, (4, 4)

    Returns
    -------
    matN x ... x mat2 x mat1 : array, (4, 4)
    """
    prev = mats[0]
    if len(mats) < 2:
        raise ValueError("At least two or more matrices are needed")

    for mat in mats[1:]:
        prev = np.dot(mat, prev)

    return prev


@warning_for_keywords()
def perpendicular_directions(v, *, num=30, half=False):
    r"""Computes n evenly spaced perpendicular directions relative to a given
    vector v

    Parameters
    ----------
    v : array (3,)
        Array containing the three cartesian coordinates of vector v
    num : int, optional
        Number of perpendicular directions to generate
    half : bool, optional
        If half is True, perpendicular directions are sampled on half of the
        unit circumference perpendicular to v, otherwise perpendicular
        directions are sampled on the full circumference. Default of half is
        False

    Returns
    -------
    psamples : array (n, 3)
        array of vectors perpendicular to v

    Notes
    -----
    Perpendicular directions are estimated using the following two step
    procedure:

    1) the perpendicular directions are first sampled in a unit circumference
    parallel to the plane normal to the x-axis.
    2) Samples are then rotated and aligned to the plane normal to vector v.
    The rotational matrix for this rotation is constructed as reference frame
    basis whose axes are the following:

    - The first axis is vector v
    - The second axis is defined as the normalized vector given by the cross
      product between vector v and the unit vector aligned to the x-axis
    - The third axis is defined as the cross product between the previous
      computed vector and vector v.

    Following this two steps, coordinates of the final perpendicular
    directions are given as:

    .. math::

        \left [ -\sin(a_{i}) \sqrt{{v_{y}}^{2}+{v_{z}}^{2}} \; , \;
        \frac{v_{x}v_{y}\sin(a_{i})-v_{z}\cos(a_{i})}
        {\sqrt{{v_{y}}^{2}+{v_{z}}^{2}}} \; , \;
        \frac{v_{x}v_{z}\sin(a_{i})-v_{y}\cos(a_{i})}
        {\sqrt{{v_{y}}^{2}+{v_{z}}^{2}}} \right ]

    This procedure has a singularity when vector v is aligned to the x-axis.
    To solve this singularity, perpendicular directions in procedure's step 1
    are defined in the plane normal to y-axis and the second axis of the
    rotated frame of reference is computed as the normalized vector given by
    the cross product between vector v and the unit vector aligned to the
    y-axis. Following this, the coordinates of the perpendicular directions
    are given as:

    .. math::

        \left [ -\frac{\left (v_{x}v_{y}\sin(a_{i})+v_{z}\cos(a_{i}) \right )}
        {\sqrt{{v_{x}}^{2}+{v_{z}}^{2}}} \; , \;
        \sin(a_{i}) \sqrt{{v_{x}}^{2}+{v_{z}}^{2}} \; , \;
        \frac{v_{y}v_{z}\sin(a_{i})+v_{x}\cos(a_{i})}
        {\sqrt{{v_{x}}^{2}+{v_{z}}^{2}}} \right ]

    For more details on this calculation, see `here
    `_.
    """  # noqa: E501
    v = np.array(v, dtype=float)

    # Float error used for floats comparison
    er = np.finfo(v[0]).eps * 1e3

    # Define circumference or semi-circumference
    if half is True:
        a = np.linspace(0.0, math.pi, num=num, endpoint=False)
    else:
        a = np.linspace(0.0, 2 * math.pi, num=num, endpoint=False)

    cosa = np.cos(a)
    sina = np.sin(a)

    # Check if vector is not aligned to the x axis
    if abs(v[0] - 1.0) > er:
        sq = np.sqrt(v[1] ** 2 + v[2] ** 2)
        psamples = np.array(
            [
                -sq * sina,
                (v[0] * v[1] * sina - v[2] * cosa) / sq,
                (v[0] * v[2] * sina + v[1] * cosa) / sq,
            ]
        )
    else:
        sq = np.sqrt(v[0] ** 2 + v[2] ** 2)
        psamples = np.array(
            [
                -(v[2] * cosa + v[0] * v[1] * sina) / sq,
                sina * sq,
                (v[0] * cosa - v[2] * v[1] * sina) / sq,
            ]
        )
    return psamples.T


def dist_to_corner(affine):
    """Calculate the maximal distance from the center to a corner of a voxel,
    given an affine

    Parameters
    ----------
    affine : 4 by 4 array.
        The spatial transformation from the measurement to the scanner space.

    Returns
    -------
    dist: float
        The maximal distance to the corner of a voxel, given voxel size
        encoded in the affine.
    """
    R = affine[0:3, 0:3]
    vox_dim = np.diag(np.linalg.cholesky(R.T.dot(R)))
    return np.sqrt(np.sum((vox_dim / 2) ** 2))


def is_hemispherical(vecs):
    """Test whether all points on a unit sphere lie in the same hemisphere.

    Parameters
    ----------
    vecs : numpy.ndarray
        2D numpy array with shape (N, 3) where N is the number of points.
        All points must lie on the unit sphere.

    Returns
    -------
    is_hemi : bool
        If True, one can find a hemisphere that contains all the points.
        If False, then the points do not lie in any hemisphere
    pole : numpy.ndarray
        If `is_hemi == True`, then pole is the "central" pole of the input
        vectors. Otherwise, pole is the zero vector.
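    Examples
    --------
    For instance, the three orthonormal unit vectors along the coordinate
    axes all lie in the hemisphere whose pole is (1, 1, 1)/sqrt(3):

    >>> import numpy as np
    >>> vecs = np.array([[1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
    >>> is_hemi, pole = is_hemispherical(vecs)
    >>> is_hemi
    True
    >>> np.allclose(pole, np.ones(3) / np.sqrt(3))
    True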
References ---------- https://rstudio-pubs-static.s3.amazonaws.com/27121_a22e51b47c544980bad594d5e0bb2d04.html """ # noqa: E501 if vecs.shape[1] != 3: raise ValueError("Input vectors must be 3D vectors") if not np.allclose(1, np.linalg.norm(vecs, axis=1)): raise ValueError("Input vectors must be unit vectors") # Generate all pairwise cross products v0, v1 = zip(*itertools.permutations(vecs, 2)) cross_prods = np.cross(v0, v1) # Normalize them cross_prods /= np.linalg.norm(cross_prods, axis=1)[:, np.newaxis] # `cross_prods` now contains all candidate vertex points for "the polygon" # in the reference. "The polygon" is a subset. Find which points belong to # the polygon using a dot product test with each of the original vectors angles = np.arccos(np.dot(cross_prods, vecs.transpose())) # And test whether it is orthogonal or less dot_prod_test = angles <= np.pi / 2.0 # If there is at least one point that is orthogonal or less to each # input vector, then the points lie on some hemisphere is_hemi = len(vecs) in np.sum(dot_prod_test.astype(int), axis=1) if is_hemi: vertices = cross_prods[np.sum(dot_prod_test.astype(int), axis=1) == len(vecs)] pole = np.mean(vertices, axis=0) pole /= np.linalg.norm(pole) else: pole = np.array([0.0, 0.0, 0.0]) return is_hemi, pole dipy-1.11.0/dipy/core/gradients.py000066400000000000000000001376671476546756600170440ustar00rootroot00000000000000import logging from warnings import warn import numpy as np from scipy.linalg import inv, polar from dipy.core.geometry import vec2vec_rotmat, vector_norm from dipy.core.onetime import auto_attr from dipy.core.sphere import HemiSphere, disperse_charges from dipy.io import gradients as io from dipy.testing.decorators import warning_for_keywords WATER_GYROMAGNETIC_RATIO = 267.513e6 # 1/(sT) logger = logging.getLogger(__name__) def b0_threshold_empty_gradient_message(bvals, idx, b0_threshold): """Message about the ``b0_threshold`` value resulting in no gradient selection. Parameters ---------- bvals : (N,) ndarray The b-value, or magnitude, of each gradient direction. idx : ndarray Indices of the gradients to be selected. b0_threshold : float Gradients with b-value less than or equal to `b0_threshold` are considered to not have diffusion weighting. Returns ------- str Message. """ return ( "Filtering gradient values with a b0 threshold value " f"of {b0_threshold} results in no gradients being " f"selected for the b-values ({bvals[idx]}) corresponding " f"to the requested indices ({idx}). Lower the b0 threshold " "value." ) def b0_threshold_update_slicing_message(slice_start): """Message for b0 threshold value update for slicing. Parameters ---------- slice_start : int Starting index for slicing. Returns ------- str Message. """ return f"Updating b0_threshold to {slice_start} for slicing." def mask_non_weighted_bvals(bvals, b0_threshold): """Create a diffusion gradient-weighting mask for the b-values according to the provided b0 threshold value. Parameters ---------- bvals : (N,) ndarray The b-value, or magnitude, of each gradient direction. b0_threshold : float Gradients with b-value less than or equal to `b0_threshold` are considered to not have diffusion weighting. Returns ------- ndarray Gradient-weighting mask: True for all b-value indices whose value is smaller or equal to ``b0_threshold``; False otherwise. """ return bvals <= b0_threshold class GradientTable: """Diffusion gradient information Parameters ---------- gradients : array_like (N, 3) Diffusion gradients. 
The direction of each of these vectors corresponds to the b-vector, and the length corresponds to the b-value. b0_threshold : float Gradients with b-value less than or equal to `b0_threshold` are considered as b0s i.e. without diffusion weighting. Attributes ---------- gradients : (N,3) ndarray diffusion gradients bvals : (N,) ndarray The b-value, or magnitude, of each gradient direction. qvals: (N,) ndarray The q-value for each gradient direction. Needs big and small delta. bvecs : (N,3) ndarray The direction, represented as a unit vector, of each gradient. b0s_mask : (N,) ndarray Boolean array indicating which gradients have no diffusion weighting, ie b-value is close to 0. b0_threshold : float Gradients with b-value less than or equal to `b0_threshold` are considered to not have diffusion weighting. btens : (N,3,3) ndarray The b-tensor of each gradient direction. See Also -------- gradient_table Notes ----- The GradientTable object is immutable. Do NOT assign attributes. If you have your gradient table in a bval & bvec format, we recommend using the factory function gradient_table """ @warning_for_keywords() def __init__( self, gradients, *, big_delta=None, small_delta=None, b0_threshold=50, btens=None, ): """Constructor for GradientTable class""" gradients = np.asarray(gradients) if gradients.ndim != 2 or gradients.shape[1] != 3: raise ValueError("gradients should be an (N, 3) array") self.gradients = gradients # Avoid nan gradients. Set these to 0 instead: self.gradients = np.where(np.isnan(gradients), 0.0, gradients) self.big_delta = big_delta self.small_delta = small_delta self.b0_threshold = b0_threshold if btens is not None: linear_tensor = np.array([[1, 0, 0], [0, 0, 0], [0, 0, 0]]) planar_tensor = np.array([[0, 0, 0], [0, 1, 0], [0, 0, 1]]) / 2 spherical_tensor = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]]) / 3 cigar_tensor = np.array([[2, 0, 0], [0, 0.5, 0], [0, 0, 0.5]]) / 3 if isinstance(btens, str): b_tensors = np.zeros((len(self.bvals), 3, 3)) if btens == "LTE": b_tensor = linear_tensor elif btens == "PTE": b_tensor = planar_tensor elif btens == "STE": b_tensor = spherical_tensor elif btens == "CTE": b_tensor = cigar_tensor else: raise ValueError( f"{btens} is an invalid value for btens. " + "Please provide one of the following: " + "'LTE', 'PTE', 'STE', 'CTE'." ) for i, (bvec, bval) in enumerate(zip(self.bvecs, self.bvals)): if btens == "STE": b_tensors[i] = b_tensor * bval else: R = vec2vec_rotmat(np.array([1, 0, 0]), bvec) b_tensors[i] = np.matmul(np.matmul(R, b_tensor), R.T) * bval self.btens = b_tensors elif isinstance(btens, np.ndarray) and btens.shape in ( (gradients.shape[0],), (gradients.shape[0], 1), (1, gradients.shape[0]), ): b_tensors = np.zeros((len(self.bvals), 3, 3)) if btens.shape == (1, gradients.shape[0]): btens = btens.reshape((gradients.shape[0], 1)) for i, (bvec, bval) in enumerate(zip(self.bvecs, self.bvals)): R = vec2vec_rotmat(np.array([1, 0, 0]), bvec) if btens[i] == "LTE": b_tensors[i] = ( np.matmul(np.matmul(R, linear_tensor), R.T) * bval ) elif btens[i] == "PTE": b_tensors[i] = ( np.matmul(np.matmul(R, planar_tensor), R.T) * bval ) elif btens[i] == "STE": b_tensors[i] = spherical_tensor * bval elif btens[i] == "CTE": b_tensors[i] = np.matmul(np.matmul(R, cigar_tensor), R.T) * bval else: raise ValueError( f"{btens[i]} is an invalid value in btens. " + "Array element options: 'LTE', 'PTE', 'STE', " + "'CTE'." 
) self.btens = b_tensors elif isinstance(btens, np.ndarray) and btens.shape == ( gradients.shape[0], 3, 3, ): self.btens = btens else: raise ValueError( f"{btens} is an invalid value for btens. " + "Please provide a string, an array of " + "strings, or an array of exact b-tensors. " + "String options: 'LTE', 'PTE', 'STE', 'CTE'" ) else: self.btens = None @auto_attr def bvals(self): return vector_norm(self.gradients) @auto_attr def tau(self): return self.big_delta - self.small_delta / 3.0 @auto_attr def qvals(self): tau = self.big_delta - self.small_delta / 3.0 return np.sqrt(self.bvals / tau) / (2 * np.pi) @auto_attr def gradient_strength(self): tau = self.big_delta - self.small_delta / 3.0 qvals = np.sqrt(self.bvals / tau) / (2 * np.pi) gradient_strength = ( qvals * (2 * np.pi) / (self.small_delta * WATER_GYROMAGNETIC_RATIO) ) return gradient_strength @auto_attr def b0s_mask(self): return mask_non_weighted_bvals(self.bvals, self.b0_threshold) @auto_attr def bvecs(self): # To get unit vectors we divide by bvals, where bvals is 0 we divide by # 1 to avoid making nans denom = self.bvals + (self.bvals == 0) denom = denom.reshape((-1, 1)) return self.gradients / denom def __getitem__(self, idx): if isinstance(idx, int): idx = [idx] # convert in a list if integer. elif isinstance(idx, slice): # Get the lower bound of the slice slice_start = idx.start if idx.start is not None else 0 # Check if it is different from b0_threshold if slice_start != self.b0_threshold: # Update b0_threshold and warn the user self.b0_threshold = slice_start warn( b0_threshold_update_slicing_message(slice_start), UserWarning, stacklevel=2, ) idx = range(*idx.indices(len(self.bvals))) mask = np.logical_not( mask_non_weighted_bvals(self.bvals[idx], self.b0_threshold) ) if not any(mask): raise ValueError( b0_threshold_empty_gradient_message(self.bvals, idx, self.b0_threshold) ) # Apply the mask to select the desired b-values and b-vectors bvals_selected = self.bvals[idx][mask] bvecs_selected = self.bvecs[idx, :][mask, :] # Create a new MyGradientTable object with the selected b-values # and b-vectors return gradient_table_from_bvals_bvecs( bvals_selected, bvecs_selected, big_delta=self.big_delta, small_delta=self.small_delta, b0_threshold=self.b0_threshold, btens=self.btens, ) # removing atol parameter as it's not in the constructor # of GradientTable. @property def info(self, use_logging=False): show = logging.info if use_logging else print show(self.__str__()) def __str__(self): msg = f"B-values shape {self.bvals.shape}\n" msg += f" min {self.bvals.min():f}\n" msg += f" max {self.bvals.max():f}\n" msg += f"B-vectors shape {self.bvecs.shape}\n" msg += f" min {self.bvecs.min():f}\n" msg += f" max {self.bvecs.max():f}\n" return msg def __len__(self): """Get the number of gradients. Includes the b0s. Examples -------- >>> bvals = np.array([1000, 1000, 1000, 1000, 2000, 2000, 2000, 2000, 0]) >>> bvecs = generate_bvecs(bvals.shape[-1]) >>> gtab = gradient_table(bvals, bvecs=bvecs) >>> len(gtab) 9 """ return len(self.bvals) @warning_for_keywords() def gradient_table_from_bvals_bvecs( bvals, bvecs, *, b0_threshold=50, atol=1e-2, btens=None, **kwargs ): """Creates a GradientTable from a bvals array and a bvecs array Parameters ---------- bvals : array_like (N,) The b-value, or magnitude, of each gradient direction. bvecs : array_like (N, 3) The direction, represented as a unit vector, of each gradient. b0_threshold : float Gradients with b-value less than or equal to `b0_threshold` are considered to not have diffusion weighting. 
If its value is equal to or larger than all values in b-vals, then it is assumed that no thresholding is requested. atol : float Each vector in `bvecs` must be a unit vectors up to a tolerance of `atol`. btens : can be any of three options 1. a string specifying the shape of the encoding tensor for all volumes in data. Options: 'LTE', 'PTE', 'STE', 'CTE' corresponding to linear, planar, spherical, and "cigar-shaped" tensor encoding. Tensors are rotated so that linear and cigar tensors are aligned with the corresponding gradient direction and the planar tensor's normal is aligned with the corresponding gradient direction. Magnitude is scaled to match the b-value. 2. an array of strings of shape (N,), (N, 1), or (1, N) specifying encoding tensor shape for each volume separately. N corresponds to the number volumes in data. Options for elements in array: 'LTE', 'PTE', 'STE', 'CTE' corresponding to linear, planar, spherical, and "cigar-shaped" tensor encoding. Tensors are rotated so that linear and cigar tensors are aligned with the corresponding gradient direction and the planar tensor's normal is aligned with the corresponding gradient direction. Magnitude is scaled to match the b-value. 3. an array of shape (N,3,3) specifying the b-tensor of each volume exactly. N corresponds to the number volumes in data. No rotation or scaling is performed. Other Parameters ---------------- **kwargs : dict Other keyword inputs are passed to GradientTable. Returns ------- gradients : GradientTable A GradientTable with all the gradient information. See Also -------- GradientTable, gradient_table """ bvals = np.asarray(bvals, float) bvecs = np.asarray(bvecs, float) dwi_mask = bvals > b0_threshold # check that bvals is (N,) array and bvecs is (N, 3) unit vectors if bvals.ndim != 1 or bvecs.ndim != 2 or bvecs.shape[0] != bvals.shape[0]: raise ValueError( "bvals and bvecs should be (N,) and (N, 3) arrays " "respectively, where N is the number of diffusion " "gradients" ) # checking for negative bvals if b0_threshold < 0: raise ValueError("Negative bvals in the data are not feasible") # Upper bound for the b0_threshold if b0_threshold >= 200: warn("b0_threshold has a value > 199", stacklevel=2) # If all b-values are smaller or equal to the b0 threshold, it is assumed # that no thresholding is requested if any(mask_non_weighted_bvals(bvals, b0_threshold)): # checking for the correctness of bvals if b0_threshold < bvals.min(): warn( f"b0_threshold (value: {b0_threshold}) is too low, increase your " "b0_threshold. It should be higher than the lowest b0 value " f"({bvals.min()}).", stacklevel=2, ) bvecs = np.where(np.isnan(bvecs), 0, bvecs) bvecs_close_to_1 = abs(vector_norm(bvecs) - 1) <= atol if bvecs.shape[1] != 3: raise ValueError("bvecs should be (N, 3)") if not np.all(bvecs_close_to_1[dwi_mask]): raise ValueError( "The vectors in bvecs should be unit (The tolerance " "can be modified as an input parameter)" ) bvecs = np.where(bvecs_close_to_1[:, None], bvecs, 0) bvals = bvals * bvecs_close_to_1 gradients = bvals[:, None] * bvecs grad_table = GradientTable( gradients, b0_threshold=b0_threshold, btens=btens, **kwargs ) grad_table.bvals = bvals grad_table.bvecs = bvecs grad_table.b0s_mask = ~dwi_mask return grad_table @warning_for_keywords() def gradient_table_from_qvals_bvecs( qvals, bvecs, big_delta, small_delta, *, b0_threshold=50, atol=1e-2 ): """A general function for creating diffusion MR gradients. 
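    The b-values are computed from the q-values as
    ``b = (2 * pi * q)**2 * (big_delta - small_delta / 3)``.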
It reads, loads and prepares scanner parameters like the b-values and b-vectors so that they can be useful during the reconstruction process. Parameters ---------- qvals : an array of shape (N,), q-value given in 1/mm bvecs : can be any of two options 1. an array of shape (N, 3) or (3, N) with the b-vectors. 2. a path for the file which contains an array like the previous. big_delta : float or array of shape (N,) acquisition pulse separation time in seconds small_delta : float acquisition pulse duration time in seconds b0_threshold : float All b-values with values less than or equal to `bo_threshold` are considered as b0s i.e. without diffusion weighting. atol : float All b-vectors need to be unit vectors up to a tolerance. Returns ------- gradients : GradientTable A GradientTable with all the gradient information. Examples -------- >>> from dipy.core.gradients import gradient_table_from_qvals_bvecs >>> qvals = 30. * np.ones(7) >>> big_delta = .03 # pulse separation of 30ms >>> small_delta = 0.01 # pulse duration of 10ms >>> qvals[0] = 0 >>> sq2 = np.sqrt(2) / 2 >>> bvecs = np.array([[0, 0, 0], ... [1, 0, 0], ... [0, 1, 0], ... [0, 0, 1], ... [sq2, sq2, 0], ... [sq2, 0, sq2], ... [0, sq2, sq2]]) >>> gt = gradient_table_from_qvals_bvecs(qvals, bvecs, ... big_delta, small_delta) Notes ----- 1. Often b0s (b-values which correspond to images without diffusion weighting) have 0 values however in some cases the scanner cannot provide b0s of an exact 0 value and it gives a bit higher values e.g. 6 or 12. This is the purpose of the b0_threshold in the __init__. 2. We assume that the minimum number of b-values is 7. 3. B-vectors should be unit vectors. """ qvals = np.asarray(qvals) bvecs = np.asarray(bvecs) if (bvecs.shape[1] > bvecs.shape[0]) and bvecs.shape[0] > 1: bvecs = bvecs.T bvals = (qvals * 2 * np.pi) ** 2 * (big_delta - small_delta / 3.0) return gradient_table_from_bvals_bvecs( bvals, bvecs, big_delta=big_delta, small_delta=small_delta, b0_threshold=b0_threshold, atol=atol, ) @warning_for_keywords() def gradient_table_from_gradient_strength_bvecs( gradient_strength, bvecs, big_delta, small_delta, *, b0_threshold=50, atol=1e-2 ): """A general function for creating diffusion MR gradients. It reads, loads and prepares scanner parameters like the b-values and b-vectors so that they can be useful during the reconstruction process. Parameters ---------- gradient_strength : an array of shape (N,), gradient strength given in T/mm bvecs : can be any of two options 1. an array of shape (N, 3) or (3, N) with the b-vectors. 2. a path for the file which contains an array like the previous. big_delta : float or array of shape (N,) acquisition pulse separation time in seconds small_delta : float acquisition pulse duration time in seconds b0_threshold : float All b-values with values less than or equal to `bo_threshold` are considered as b0s i.e. without diffusion weighting. atol : float All b-vectors need to be unit vectors up to a tolerance. Returns ------- gradients : GradientTable A GradientTable with all the gradient information. Examples -------- >>> from dipy.core.gradients import ( ... gradient_table_from_gradient_strength_bvecs) >>> gradient_strength = .03e-3 * np.ones(7) # clinical strength at 30 mT/m >>> big_delta = .03 # pulse separation of 30ms >>> small_delta = 0.01 # pulse duration of 10ms >>> gradient_strength[0] = 0 >>> sq2 = np.sqrt(2) / 2 >>> bvecs = np.array([[0, 0, 0], ... [1, 0, 0], ... [0, 1, 0], ... [0, 0, 1], ... [sq2, sq2, 0], ... [sq2, 0, sq2], ... 
[0, sq2, sq2]]) >>> gt = gradient_table_from_gradient_strength_bvecs( ... gradient_strength, bvecs, big_delta, small_delta) Notes ----- 1. Often b0s (b-values which correspond to images without diffusion weighting) have 0 values however in some cases the scanner cannot provide b0s of an exact 0 value and it gives a bit higher values e.g. 6 or 12. This is the purpose of the b0_threshold in the __init__. 2. We assume that the minimum number of b-values is 7. 3. B-vectors should be unit vectors. """ gradient_strength = np.asarray(gradient_strength) bvecs = np.asarray(bvecs) if (bvecs.shape[1] > bvecs.shape[0]) and bvecs.shape[0] > 1: bvecs = bvecs.T qvals = gradient_strength * small_delta * WATER_GYROMAGNETIC_RATIO / (2 * np.pi) bvals = (qvals * 2 * np.pi) ** 2 * (big_delta - small_delta / 3.0) return gradient_table_from_bvals_bvecs( bvals, bvecs, big_delta=big_delta, small_delta=small_delta, b0_threshold=b0_threshold, atol=atol, ) @warning_for_keywords() def gradient_table( bvals, *, bvecs=None, big_delta=None, small_delta=None, b0_threshold=50, atol=1e-2, btens=None, ): """A general function for creating diffusion MR gradients. It reads, loads and prepares scanner parameters like the b-values and b-vectors so that they can be useful during the reconstruction process. Parameters ---------- bvals : can be any of the four options 1. an array of shape (N,) or (1, N) or (N, 1) with the b-values. 2. a path for the file which contains an array like the above (1). 3. an array of shape (N, 4) or (4, N). Then this parameter is considered to be a b-table which contains both bvals and bvecs. In this case the next parameter is skipped. 4. a path for the file which contains an array like the one at (3). bvecs : can be any of two options 1. an array of shape (N, 3) or (3, N) with the b-vectors. 2. a path for the file which contains an array like the previous. big_delta : float acquisition pulse separation time in seconds (default None) small_delta : float acquisition pulse duration time in seconds (default None) b0_threshold : float All b-values with values less than or equal to `bo_threshold` are considered as b0s i.e. without diffusion weighting. atol : float All b-vectors need to be unit vectors up to a tolerance. btens : can be any of three options 1. a string specifying the shape of the encoding tensor for all volumes in data. Options: 'LTE', 'PTE', 'STE', 'CTE' corresponding to linear, planar, spherical, and "cigar-shaped" tensor encoding. Tensors are rotated so that linear and cigar tensors are aligned with the corresponding gradient direction and the planar tensor's normal is aligned with the corresponding gradient direction. Magnitude is scaled to match the b-value. 2. an array of strings of shape (N,), (N, 1), or (1, N) specifying encoding tensor shape for each volume separately. N corresponds to the number volumes in data. Options for elements in array: 'LTE', 'PTE', 'STE', 'CTE' corresponding to linear, planar, spherical, and "cigar-shaped" tensor encoding. Tensors are rotated so that linear and cigar tensors are aligned with the corresponding gradient direction and the planar tensor's normal is aligned with the corresponding gradient direction. Magnitude is scaled to match the b-value. 3. an array of shape (N,3,3) specifying the b-tensor of each volume exactly. N corresponds to the number volumes in data. No rotation or scaling is performed. Returns ------- gradients : GradientTable A GradientTable with all the gradient information. 
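    See Also
    --------
    gradient_table_from_bvals_bvecs, gradient_table_from_qvals_bvecs,
    gradient_table_from_gradient_strength_bvecs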
Examples -------- >>> from dipy.core.gradients import gradient_table >>> bvals = 1500 * np.ones(7) >>> bvals[0] = 0 >>> sq2 = np.sqrt(2) / 2 >>> bvecs = np.array([[0, 0, 0], ... [1, 0, 0], ... [0, 1, 0], ... [0, 0, 1], ... [sq2, sq2, 0], ... [sq2, 0, sq2], ... [0, sq2, sq2]]) >>> gt = gradient_table(bvals, bvecs=bvecs) >>> gt.bvecs.shape == bvecs.shape True >>> gt = gradient_table(bvals, bvecs=bvecs.T) >>> gt.bvecs.shape == bvecs.T.shape False Notes ----- 1. Often b0s (b-values which correspond to images without diffusion weighting) have 0 values however in some cases the scanner cannot provide b0s of an exact 0 value and it gives a bit higher values e.g. 6 or 12. This is the purpose of the b0_threshold in the __init__. 2. We assume that the minimum number of b-values is 7. 3. B-vectors should be unit vectors. """ # If you provided strings with full paths, we go and load those from # the files: if isinstance(bvals, str): bvals, _ = io.read_bvals_bvecs(bvals, None) if isinstance(bvecs, str): _, bvecs = io.read_bvals_bvecs(None, bvecs) bvals = np.asarray(bvals) # If bvecs is None we expect bvals to be an (N, 4) or (4, N) array. if bvecs is None: if bvals.shape[-1] == 4: bvecs = bvals[:, 1:] bvals = np.squeeze(bvals[:, 0]) elif bvals.shape[0] == 4: bvecs = bvals[1:, :].T bvals = np.squeeze(bvals[0, :]) else: raise ValueError( "input should be bvals and bvecs OR an (N, 4)" " array containing both bvals and bvecs" ) else: bvecs = np.asarray(bvecs) if bvecs.shape[1] != 3 and bvecs.shape[0] > 1: bvecs = bvecs.T return gradient_table_from_bvals_bvecs( bvals, bvecs, big_delta=big_delta, small_delta=small_delta, b0_threshold=b0_threshold, atol=atol, btens=btens, ) @warning_for_keywords() def reorient_bvecs(gtab, affines, *, atol=1e-2): """Reorient the directions in a GradientTable. When correcting for motion, rotation of the diffusion-weighted volumes might cause systematic bias in rotationally invariant measures, such as FA and MD, and also cause characteristic biases in tractography, unless the gradient directions are appropriately reoriented to compensate for this effect :footcite:p:`Leemans2009`. Parameters ---------- gtab : GradientTable The nominal gradient table with which the data were acquired. affines : list or ndarray of shape (4, 4, n) or (3, 3, n) Each entry in this list or array contain either an affine transformation (4,4) or a rotation matrix (3, 3). In both cases, the transformations encode the rotation that was applied to the image corresponding to one of the non-zero gradient directions (ordered according to their order in `gtab.bvecs[~gtab.b0s_mask]`) atol: see gradient_table() Returns ------- gtab : a GradientTable class instance with the reoriented directions References ---------- .. footbibliography:: """ if isinstance(affines, list): affines = np.stack(affines, axis=-1) if affines.shape[0] != affines.shape[1]: msg = """reorient_bvecs has changed to require affines as (4, 4, n) or (3, 3, n). Shape of (n, 4, 4) or (n, 3, 3) is deprecated and will fail in the future.""" warn(msg, UserWarning, stacklevel=2) affines = np.moveaxis(affines, 0, -1) new_bvecs = gtab.bvecs[~gtab.b0s_mask] if new_bvecs.shape[0] != affines.shape[-1]: e_s = "Number of affine transformations must match number of " e_s += "non-zero gradients" raise ValueError(e_s) # moving axis to make life easier affines = np.moveaxis(affines, -1, 0) for i, aff in enumerate(affines): if aff.shape == (4, 4): # This must be an affine! 
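            # Only the rotation part of the transform may be applied to the
            # gradient directions: the translation does not act on direction
            # vectors, and applying the scaling/shear factor S of the polar
            # decomposition below would break the unit length of the b-vectors.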
# Remove the translation component: aff = aff[:3, :3] # Decompose into rotation and scaling components: R, S = polar(aff) Rinv = inv(R) # Apply the inverse of the rotation to the corresponding gradient # direction: new_bvecs[i] = np.dot(Rinv, new_bvecs[i]) return_bvecs = np.zeros(gtab.bvecs.shape) return_bvecs[~gtab.b0s_mask] = new_bvecs return gradient_table( gtab.bvals, bvecs=return_bvecs, big_delta=gtab.big_delta, small_delta=gtab.small_delta, b0_threshold=gtab.b0_threshold, atol=atol, ) @warning_for_keywords() def generate_bvecs(N, *, iters=5000, rng=None): """Generates N bvectors. Uses dipy.core.sphere.disperse_charges to model electrostatic repulsion on a unit sphere. Parameters ---------- N : int The number of bvectors to generate. This should be equal to the number of bvals used. iters : int Number of iterations to run. rng : numpy.random.Generator, optional Numpy's random number generator. If None, the generator is created. Default is None. Returns ------- bvecs : (N,3) ndarray The generated directions, represented as a unit vector, of each gradient. """ if rng is None: rng = np.random.default_rng() theta = np.pi * rng.random(N) phi = 2 * np.pi * rng.random(N) hsph_initial = HemiSphere(theta=theta, phi=phi) hsph_updated, potential = disperse_charges(hsph_initial, iters) bvecs = hsph_updated.vertices return bvecs @warning_for_keywords() def round_bvals(bvals, *, bmag=None): """ "This function rounds the b-values Parameters ---------- bvals : ndarray Array containing the b-values bmag : int The order of magnitude to round the b-values. If not given b-values will be rounded relative to the order of magnitude $bmag = (bmagmax - 1)$, where bmaxmag is the magnitude order of the larger b-value. Returns ------- rbvals : ndarray Array containing the rounded b-values """ if bmag is None: bmag = int(np.log10(np.max(bvals))) - 1 b = bvals / (10**bmag) # normalize b units return b.round() * (10**bmag) @warning_for_keywords() def unique_bvals_tolerance(bvals, *, tol=20): """Gives the unique b-values of the data, within a tolerance gap The b-values must be regrouped in clusters easily separated by a distance greater than the tolerance gap. If all the b-values of a cluster fit within the tolerance gap, the highest b-value is kept. Parameters ---------- bvals : ndarray Array containing the b-values tol : int The tolerated gap between the b-values to extract and the actual b-values. Returns ------- ubvals : ndarray Array containing the unique b-values using the median value for each cluster """ b = np.unique(bvals) ubvals = [] lower_part = np.where(b <= b[0] + tol)[0] upper_part = np.where( np.logical_and(b <= b[lower_part[-1]] + tol, b > b[lower_part[-1]]) )[0] ubvals.append(b[lower_part[-1]]) if len(upper_part) != 0: b_index = upper_part[-1] + 1 else: b_index = lower_part[-1] + 1 while b_index != len(b): lower_part = np.where( np.logical_and(b <= b[b_index] + tol, b > b[b_index - 1]) )[0] upper_part = np.where( np.logical_and(b <= b[lower_part[-1]] + tol, b > b[lower_part[-1]]) )[0] ubvals.append(b[lower_part[-1]]) if len(upper_part) != 0: b_index = upper_part[-1] + 1 else: b_index = lower_part[-1] + 1 # Checking for overlap with get_bval_indices for i, ubval in enumerate(ubvals[:-1]): indices_1 = get_bval_indices(bvals, ubval, tol=tol) indices_2 = get_bval_indices(bvals, ubvals[i + 1], tol=tol) if len(np.intersect1d(indices_1, indices_2)) != 0: msg = """There is overlap in clustering of b-values. 
The tolerance factor might be too high.""" warn(msg, UserWarning, stacklevel=2) return np.asarray(ubvals) @warning_for_keywords() def get_bval_indices(bvals, bval, *, tol=20): """ Get indices where the b-value is `bval` Parameters ---------- bvals: ndarray Array containing the b-values bval: float or int b-value to extract indices tol: int The tolerated gap between the b-values to extract and the actual b-values. Returns ------- Array of indices where the b-value is `bval` """ return np.where(np.logical_and(bvals <= bval + tol, bvals >= bval - tol))[0] @warning_for_keywords() def unique_bvals_magnitude(bvals, *, bmag=None, rbvals=False): """This function gives the unique rounded b-values of the data Parameters ---------- bvals : ndarray Array containing the b-values bmag : int The order of magnitude that the bvalues have to differ to be considered an unique b-value. B-values are also rounded up to this order of magnitude. Default: derive this value from the maximal b-value provided: $bmag=log_{10}(max(bvals)) - 1$. rbvals : bool, optional If True function also returns all individual rounded b-values. Returns ------- ubvals : ndarray Array containing the rounded unique b-values """ b = round_bvals(bvals, bmag=bmag) if rbvals: return np.unique(b), b return np.unique(b) @warning_for_keywords() def check_multi_b(gtab, n_bvals, *, non_zero=True, bmag=None): """ Check if you have enough different b-values in your gradient table Parameters ---------- gtab : GradientTable class instance. Gradient table. n_bvals : int The number of different b-values you are checking for. non_zero : bool Whether to check only non-zero bvalues. In this case, we will require at least `n_bvals` *non-zero* b-values (where non-zero is defined depending on the `gtab` object's `b0_threshold` attribute) bmag : int The order of magnitude of the b-values used. The function will normalize the b-values relative $10^{bmag}$. Default: derive this value from the maximal b-value provided: $bmag=log_{10}(max(bvals)) - 1$. Returns ------- bool : Whether there are at least `n_bvals` different b-values in the gradient table used. """ bvals = gtab.bvals.copy() if non_zero: bvals = bvals[~gtab.b0s_mask] uniqueb = unique_bvals_magnitude(bvals, bmag=bmag) if uniqueb.shape[0] < n_bvals: return False else: return True def _btens_to_params_2d(btens_2d, ztol): """Compute trace, anisotropy and asymmetry parameters from a single b-tensor Auxiliary function where calculation of `bval`, bdelta` and `b_eta` from a (3,3) b-tensor takes place. The main function `btens_to_params` then wraps around this to enable support of input (N, 3, 3) arrays, where N = number of b-tensors Parameters ---------- btens_2d : (3, 3) numpy.ndarray input b-tensor ztol : float Any parameters smaller than this value are considered to be 0 Returns ------- bval: float b-value (trace) bdelta: float normalized tensor anisotropy bdelta: float tensor asymmetry Notes ----- Implementation following :footcite:t:`Topgaard2016`. References ---------- .. 
footbibliography::
    """
    btens_2d[abs(btens_2d) <= ztol] = 0

    evals = np.real(np.linalg.eig(btens_2d)[0])

    bval = np.sum(evals)

    bval_is_zero = np.abs(bval) <= ztol

    if bval_is_zero:
        bval = 0
        bdelta = 0
        b_eta = 0
    else:
        lambda_iso = (1 / 3) * bval
        diff_lambdas = np.abs(evals - lambda_iso)
        evals_zzxxyy = evals[np.argsort(diff_lambdas)[::-1]]
        lambda_zz = evals_zzxxyy[0]
        lambda_xx = evals_zzxxyy[1]
        lambda_yy = evals_zzxxyy[2]

        bdelta = (1 / (3 * lambda_iso)) * (lambda_zz - ((lambda_yy + lambda_xx) / 2))

        if np.abs(bdelta) <= ztol:
            bdelta = 0

        yyxx_diff = lambda_yy - lambda_xx
        if abs(yyxx_diff) <= np.spacing(1):
            yyxx_diff = 0

        b_eta = yyxx_diff / (2 * lambda_iso * bdelta + np.spacing(1))

        if np.abs(b_eta) <= ztol:
            b_eta = 0

    return float(bval), float(bdelta), float(b_eta)


@warning_for_keywords()
def btens_to_params(btens, *, ztol=1e-10):
    r"""Compute trace, anisotropy and asymmetry parameters from b-tensors.

    Parameters
    ----------
    btens : (3, 3) OR (N, 3, 3) numpy.ndarray
        input b-tensor, or b-tensors, where N = number of b-tensors
    ztol : float
        Any parameters smaller than this value are considered to be 0

    Returns
    -------
    bval: numpy.ndarray
        b-value(s) (trace(s))
    bdelta: numpy.ndarray
        normalized tensor anisotropy(s)
    b_eta: numpy.ndarray
        tensor asymmetry(s)

    Notes
    -----
    This function can be used to get b-tensor parameters directly from the
    GradientTable `btens` attribute.

    Examples
    --------
    >>> lte = np.array([[1, 0, 0], [0, 0, 0], [0, 0, 0]])
    >>> bval, bdelta, b_eta = btens_to_params(lte)
    >>> print("bval={}; bdelta={}; b_eta={}".format(bval, bdelta, b_eta))
    bval=[ 1.]; bdelta=[ 1.]; b_eta=[ 0.]
    """
    # Bad input checks
    value_error_msg = (
        "`btens` must be a 2D or 3D numpy array, respectively"
        " with (3, 3) or (N, 3, 3) shape, where N corresponds"
        " to the number of b-tensors"
    )
    if not isinstance(btens, np.ndarray):
        raise ValueError(value_error_msg)

    nd = np.ndim(btens)
    if nd == 2:
        btens_shape = btens.shape
    elif nd == 3:
        btens_shape = btens.shape[1:]
    else:
        raise ValueError(value_error_msg)

    if not btens_shape == (3, 3):
        raise ValueError(value_error_msg)

    # Reshape so that loop below works when only one input b-tensor is
    # provided
    if nd == 2:
        btens = btens.reshape((1, 3, 3))

    # Pre-allocate
    n_btens = btens.shape[0]
    bval = np.empty(n_btens)
    bdelta = np.empty(n_btens)
    b_eta = np.empty(n_btens)

    # Loop over b-tensor(s)
    for i in range(btens.shape[0]):
        i_btens = btens[i, :, :]
        i_bval, i_bdelta, i_b_eta = _btens_to_params_2d(i_btens, ztol)
        bval[i] = i_bval
        bdelta[i] = i_bdelta
        b_eta[i] = i_b_eta

    return bval, bdelta, b_eta


def params_to_btens(bval, bdelta, b_eta):
    """Compute b-tensor from trace, anisotropy and asymmetry parameters.

    Parameters
    ----------
    bval: int or float
        b-value (>= 0)
    bdelta: int or float
        normalized tensor anisotropy (>= -0.5 and <= 1)
    b_eta: int or float
        tensor asymmetry (>= 0 and <= 1)

    Returns
    -------
    (3, 3) numpy.ndarray
        output b-tensor

    Notes
    -----
    Implements eq. 7.11. p. 231 in :footcite:p:`Topgaard2016`.

    References
    ----------
    ..
footbibliography:: """ # Check input times are OK expected_input_types = (float, int) input_types_all_ok = ( isinstance(bval, expected_input_types) and isinstance(bdelta, expected_input_types) and isinstance(b_eta, expected_input_types) ) if not input_types_all_ok: s = [x.__name__ for x in expected_input_types] it_msg = f"All input types should any of: {s}" raise ValueError(it_msg) # Check input values within expected ranges if bval < 0: raise ValueError("`bval` must be >= 0") if not -0.5 <= bdelta <= 1: raise ValueError("`delta` must be >= -0.5 and <= 1") if not 0 <= b_eta <= 1: raise ValueError("`b_eta` must be >= 0 and <= 1") m1 = np.array([[-1, 0, 0], [0, -1, 0], [0, 0, 2]]) m2 = np.array([[-1, 0, 0], [0, 1, 0], [0, 0, 0]]) btens = bval / 3 * (np.eye(3) + bdelta * (m1 + b_eta * m2)) return btens def ornt_mapping(ornt1, ornt2): """Calculate the mapping needing to get from orn1 to orn2.""" mapping = np.empty((len(ornt1), 2), "int") mapping[:, 0] = -1 A = ornt1[:, 0].argsort() B = ornt2[:, 0].argsort() mapping[B, 0] = A assert (mapping[:, 0] != -1).all() sign = ornt2[:, 1] * ornt1[mapping[:, 0], 1] mapping[:, 1] = sign return mapping @warning_for_keywords() def reorient_vectors(bvecs, current_ornt, new_ornt, *, axis=0): """Change the orientation of gradients or other vectors. Moves vectors, storted along axis, from current_ornt to new_ornt. For example the vector [x, y, z] in "RAS" will be [-x, -y, z] in "LPS". R: Right A: Anterior S: Superior L: Left P: Posterior I: Inferior """ if isinstance(current_ornt, str): current_ornt = orientation_from_string(current_ornt) if isinstance(new_ornt, str): new_ornt = orientation_from_string(new_ornt) n = bvecs.shape[axis] if current_ornt.shape != (n, 2) or new_ornt.shape != (n, 2): raise ValueError("orientations do not match") bvecs = np.asarray(bvecs) mapping = ornt_mapping(current_ornt, new_ornt) output = bvecs.take(mapping[:, 0], axis) out_view = np.rollaxis(output, axis, output.ndim) out_view *= mapping[:, 1] return output @warning_for_keywords() def reorient_on_axis(bvecs, current_ornt, new_ornt, *, axis=0): if isinstance(current_ornt, str): current_ornt = orientation_from_string(current_ornt) if isinstance(new_ornt, str): new_ornt = orientation_from_string(new_ornt) n = bvecs.shape[axis] if current_ornt.shape != (n, 2) or new_ornt.shape != (n, 2): raise ValueError("orientations do not match") mapping = ornt_mapping(current_ornt, new_ornt) order = [slice(None)] * bvecs.ndim order[axis] = mapping[:, 0] shape = [1] * bvecs.ndim shape[axis] = -1 sign = mapping[:, 1] sign.shape = shape output = bvecs[order] output *= sign return output def orientation_from_string(string_ornt): """Return an array representation of an ornt string.""" orientation_dict = { "r": (0, 1), "l": (0, -1), "a": (1, 1), "p": (1, -1), "s": (2, 1), "i": (2, -1), } ornt = tuple(orientation_dict[ii] for ii in string_ornt.lower()) ornt = np.array(ornt) if _check_ornt(ornt): msg = string_ornt + " does not seem to be a valid orientation string" raise ValueError(msg) return ornt def orientation_to_string(ornt): """Return a string representation of a 3d ornt.""" if _check_ornt(ornt): msg = repr(ornt) + " does not seem to be a valid orientation" raise ValueError(msg) orientation_dict = { (0, 1): "r", (0, -1): "l", (1, 1): "a", (1, -1): "p", (2, 1): "s", (2, -1): "i", } ornt_string = "" for ii in ornt: ornt_string += orientation_dict[(ii[0], ii[1])] return ornt_string def _check_ornt(ornt): uniq = np.unique(ornt[:, 0]) if len(uniq) != len(ornt): print(len(uniq)) return True uniq = 
def _check_ornt(ornt):
    uniq = np.unique(ornt[:, 0])
    if len(uniq) != len(ornt):
        print(len(uniq))
        return True
    uniq = np.unique(ornt[:, 1])
    if tuple(uniq) not in {(-1, 1), (-1,), (1,)}:
        print(tuple(uniq))
        return True


def extract_b0(dwi, b0_mask, *, group_contiguous_b0=False, strategy="mean"):
    """Extract a set of b0 volumes from a dwi dataset.

    Parameters
    ----------
    dwi: ndarray
        4D diffusion-weighted imaging data (X, Y, Z, N), where N is the
        number of volumes.
    b0_mask: ndarray of bool
        Mask over the time dimension (4th) identifying b0 volumes.
    group_contiguous_b0: bool, optional
        If True, contiguous b0 volumes are grouped together.
    strategy: str, optional
        The extraction strategy, of either:

            - first: select the first b0 found.
            - all: select them all.
            - mean: average them.

        When used in conjunction with the `group_contiguous_b0` parameter set
        to True, the strategy is applied individually on each contiguous set
        found.

    Returns
    -------
    b0_data : ndarray
        Extracted b0 volumes.
    """
    if not isinstance(b0_mask, np.ndarray) or b0_mask.ndim != 1 or 0 in b0_mask.shape:
        raise ValueError("b0_mask must be a boolean array.")

    if not isinstance(dwi, np.ndarray) or dwi.ndim != 4:
        raise ValueError("dwi must be a 4D numpy array.")

    if dwi.shape[-1] != b0_mask.shape[0]:
        raise ValueError("The last dimension of dwi must match the length of b0_mask.")

    strategy = strategy.lower()
    if strategy not in ["first", "all", "mean"]:
        raise ValueError(
            "Invalid strategy: {}. Valid strategies are: "
            "first, all, mean.".format(strategy)
        )

    if group_contiguous_b0:
        b0_indices = np.where(b0_mask)[0]
        groups = np.split(b0_indices, np.where(np.diff(b0_indices) > 1)[0] + 1)

        grouped_b0 = []
        for group in groups:
            group_data = dwi[..., group]
            if strategy == "first":
                grouped_b0.append(group_data[..., 0])
            elif strategy == "mean":
                grouped_b0.append(np.mean(group_data, axis=-1))
            else:  # "all"
                grouped_b0.append(group_data)

        return np.stack(grouped_b0, axis=-1)

    b0_data = dwi[..., b0_mask]
    if strategy == "first":
        return b0_data[..., 0]
    elif strategy == "mean":
        return np.mean(b0_data, axis=-1)
    return b0_data


def extract_dwi_shell(dwi, gtab, bvals_to_extract, *, tol=20, group_shells=True):
    """Extract diffusion-weighted imaging (DWI) volumes based on b-value shells.

    Parameters
    ----------
    dwi : ndarray
        4D diffusion-weighted imaging data (X, Y, Z, N), where N is the
        number of volumes.
    gtab : GradientTable
        The gradient table associated with the DWI dataset.
    bvals_to_extract : list of int
        List of b-values to extract. If None, all b-values are included.
    tol : int, optional
        Tolerance range for b-value selection. A value of 20 means volumes
        with b-values within ±20 units of the specified b-values will be
        extracted.
    group_shells : bool, optional
        If True, extracted volumes are grouped into a single array. If False,
        returns a list of separate volumes.

    Returns
    -------
    indices : ndarray
        Indices of the extracted volumes corresponding to the specified
        b-values.
    shell_data : ndarray or list of ndarrays
        Extracted DWI volumes, grouped if `group_shells=True`, otherwise
        returned as a list.
    output_bvals : ndarray
        The b-values of the selected volumes.
    output_bvecs : ndarray
        The b-vectors of the selected volumes.
    """
    bvals = gtab.bvals
    bvecs = gtab.bvecs

    if not bvals_to_extract:
        warn("No b-values specified. 
All b-values will be included.", UserWarning, stacklevel=2, ) mask = ( np.ones_like(bvals, dtype=bool) if not bvals_to_extract else np.any(np.abs(bvals[:, None] - np.array(bvals_to_extract)) <= tol, axis=1) ) indices = np.where(mask)[0] if not indices.size and bvals_to_extract: raise ValueError("No volumes found with the specified b-values.") shell_data, output_bvals, output_bvecs, output_indices = [], [], [], [] if group_shells: shell_data.append(dwi[..., indices]) output_indices.append(indices) output_bvals.append(bvals[indices]) output_bvecs.append(bvecs[indices]) else: for i in bvals_to_extract: selected_indices = indices[ (bvals[indices] >= i - tol) & (bvals[indices] <= i + tol) ] output_indices.append(selected_indices) output_bvals.append(bvals[selected_indices]) output_bvecs.append(bvecs[selected_indices]) shell_data.append(dwi[..., selected_indices]) return output_indices, shell_data, output_bvals, output_bvecs dipy-1.11.0/dipy/core/graph.py000066400000000000000000000072321476546756600161450ustar00rootroot00000000000000"""A simple graph class""" from dipy.testing.decorators import warning_for_keywords class Graph: """A simple graph class""" def __init__(self): """A graph class with nodes and edges :-) This class allows us to: 1. find the shortest path 2. find all paths 3. add/delete nodes and edges 4. get parent & children nodes Examples -------- >>> from dipy.core.graph import Graph >>> g=Graph() >>> g.add_node('a', attr=5) >>> g.add_node('b', attr=6) >>> g.add_node('c', attr=10) >>> g.add_node('d', attr=11) >>> g.add_edge('a','b') >>> g.add_edge('b','c') >>> g.add_edge('c','d') >>> g.add_edge('b','d') >>> g.up_short('d') ['d', 'b', 'a'] """ self.node = {} self.pred = {} self.succ = {} @warning_for_keywords() def add_node(self, n, *, attr=None): self.succ[n] = {} self.pred[n] = {} self.node[n] = attr @warning_for_keywords() def add_edge(self, n, m, *, ws=True, wp=True): self.succ[n][m] = ws self.pred[m][n] = wp def parents(self, n): return self.pred[n].keys() def children(self, n): return self.succ[n].keys() def up(self, n): return self.all_paths(self.pred, n) def down(self, n): return self.all_paths(self.succ, n) def up_short(self, n): return self.shortest_path(self.pred, n) def down_short(self, n): return self.shortest_path(self.succ, n) @warning_for_keywords() def all_paths(self, graph, start, *, end=None, path=None): path = path or [] path = path + [start] if start == end or graph[start] == {}: return [path] if start not in graph: return [] paths = [] for node in graph[start]: if node not in path: newpaths = self.all_paths(graph, node, end=end, path=path) for newpath in newpaths: paths.append(newpath) return paths @warning_for_keywords() def shortest_path(self, graph, start, *, end=None, path=None): path = path or [] path = path + [start] if graph[start] == {} or start == end: return path if start not in graph: return [] shortest = None for node in graph[start]: if node not in path: newpath = self.shortest_path(graph, node, end=end, path=path) if newpath: if not shortest or len(newpath) < len(shortest): shortest = newpath return shortest def del_node_and_edges(self, n): try: del self.node[n] except KeyError as e: raise KeyError("node not in the graph") from e for s in self.succ[n]: del self.pred[s][n] del self.succ[n] for p in self.pred[n]: del self.succ[p][n] del self.pred[n] def del_node(self, n): try: del self.node[n] except KeyError as e: raise KeyError("node not in the graph") from e for s in self.succ[n]: for p in self.pred[n]: self.succ[p][s] = self.succ[n][s] 
self.pred[s][p] = self.pred[s][n] for s in self.succ.keys(): try: del self.succ[s][n] except KeyError: pass for p in self.pred.keys(): try: del self.pred[p][n] except KeyError: pass del self.succ[n] del self.pred[n] dipy-1.11.0/dipy/core/histeq.py000066400000000000000000000015561476546756600163440ustar00rootroot00000000000000import numpy as np from dipy.testing.decorators import warning_for_keywords @warning_for_keywords() def histeq(arr, *, num_bins=256): """Performs an histogram equalization on ``arr``. This was taken from: http://www.janeriksolem.net/2009/06/histogram-equalization-with-python-and.html Parameters ---------- arr : ndarray Image on which to perform histogram equalization. num_bins : int Number of bins used to construct the histogram. Returns ------- result : ndarray Histogram equalized image. """ # get image histogram histo, bins = np.histogram(arr.flatten(), num_bins, density=True) cdf = histo.cumsum() cdf = 255 * cdf / cdf[-1] # use linear interpolation of cdf to find new pixel values result = np.interp(arr.flatten(), bins[:-1], cdf) return result.reshape(arr.shape) dipy-1.11.0/dipy/core/interpolation.pxd000066400000000000000000000031171476546756600200740ustar00rootroot00000000000000 cimport numpy as cnp from dipy.align.fused_types cimport floating, number cdef int trilinear_interpolate4d_c(floating[:, :, :, :] data, floating* point, floating* result) noexcept nogil cdef int _interpolate_vector_2d(floating[:, :, :] field, double dii, double djj, floating* out) noexcept nogil cdef int _interpolate_scalar_2d(floating[:, :] image, double dii, double djj, floating* out) noexcept nogil cdef int _interpolate_scalar_nn_2d(number[:, :] image, double dii, double djj, number *out) noexcept nogil cdef int _interpolate_scalar_nn_3d(number[:, :, :] volume, double dkk, double dii, double djj, number* out) noexcept nogil cdef int _interpolate_scalar_3d(floating[:, :, :] volume, double dkk, double dii, double djj, floating* out) noexcept nogil cdef int _interpolate_vector_3d(floating[:, :, :, :] field, double dkk, double dii, double djj, floating* out) noexcept nogil cdef void _trilinear_interpolation_iso(double* X, double* W, cnp.npy_intp* IN) noexcept nogil cdef cnp.npy_intp offset(cnp.npy_intp* indices, cnp.npy_intp* strides, int lenind, int typesize) noexcept nogil dipy-1.11.0/dipy/core/interpolation.pyx000066400000000000000000001170201476546756600201200ustar00rootroot00000000000000# cython: boundscheck=False, wraparound=False, cdivision=True cimport cython cimport numpy as cnp import numpy as np import warnings from libc.math cimport floor from scipy.interpolate import Rbf, RBFInterpolator from dipy.align.fused_types cimport floating, number from dipy.utils.deprecator import deprecate_with_version @deprecate_with_version( "dipy.core.interpolation.interp_rbf is deprecated, " "Please use " "dipy.core.interpolation.rbf_interpolation instead", since="1.10.0", until="1.12.0", ) def interp_rbf(data, sphere_origin, sphere_target, function='multiquadric', epsilon=None, smooth=0.1, norm="angle"): """Interpolate data on the sphere, using radial basis functions. Parameters ---------- data : (N,) ndarray Function values on the unit sphere. sphere_origin : Sphere Positions of data values. sphere_target : Sphere M target positions for which to interpolate. function : {'multiquadric', 'inverse', 'gaussian'} Radial basis function. epsilon : float Radial basis function spread parameter. Defaults to approximate average distance between nodes. 
a good start smooth : float values greater than zero increase the smoothness of the approximation with 0 as pure interpolation. Default: 0.1 norm : str A string indicating the function that returns the "distance" between two points. 'angle' - The angle between two vectors 'euclidean_norm' - The Euclidean distance Returns ------- v : (M,) ndarray Interpolated values. See Also -------- scipy.interpolate.RBFInterpolator """ def angle(x1, x2): xx = np.arccos(np.clip((x1 * x2).sum(axis=0), -1, 1)) return np.nan_to_num(xx) def euclidean_norm(x1, x2): return np.sqrt(((x1 - x2)**2).sum(axis=0)) if norm == "angle": norm = angle elif norm == "euclidean_norm": w_s = "The Euclidean norm used for interpolation is inaccurate " w_s += "and will be deprecated in future versions. Please consider " w_s += "using the 'angle' norm instead" warnings.warn(w_s, PendingDeprecationWarning) norm = euclidean_norm # Workaround for bug in older versions of SciPy that don't allow # specification of epsilon None: if epsilon is not None: kwargs = {'function': function, 'epsilon': epsilon, 'smooth': smooth, 'norm': norm} else: kwargs = {'function': function, 'smooth': smooth, 'norm': norm} rbfi = Rbf(sphere_origin.x, sphere_origin.y, sphere_origin.z, data, **kwargs) return rbfi(sphere_target.x, sphere_target.y, sphere_target.z) def rbf_interpolation(data, sphere_origin, sphere_target, *, function='multiquadric', epsilon=None, smoothing=0.1): """Interpolate `data` on the sphere, using radial basis functions, where `data` can be scalar- (1D), vector- (2D), or tensor-valued (3D and beyond). Parameters ---------- data : (..., N) ndarray Values of the spherical function evaluated at the N positions specified by `sphere_origin`. sphere_origin : Sphere N positions on the unit sphere where the spherical function is evaluated. sphere_target : Sphere M positions on the unit sphere where the spherical function is interpolated. function: str, optional Radial basis function. Possible values: {'linear', 'thin_plate_spline', 'cubic', 'quintic', 'multiquadric', 'inverse_multiquadric', 'inverse_quadratic', 'gaussian'}. epsilon : float, optional Radial basis function spread parameter. Defaults to 1 when `function` is 'linear', 'thin_plate_spline', 'cubic', or 'quintic'. Otherwise, `epsilon` must be specified. smoothing : float, optional Smoothing parameter. When `smoothing` is 0, the interpolation is exact. As `smoothing` increases, the interpolation approaches a least-squares fit of `data` using the supplied radial basis function. Default: 0. Returns ------- v : (..., M) ndarray Interpolated values of the spherical function at M positions specified by `sphere_target`. See Also -------- scipy.interpolate.RBFInterpolator """ last_dim_idx = data.ndim - 1 if not data.shape[last_dim_idx] == sphere_origin.vertices.shape[0]: raise ValueError("The last dimension of `data` must be equal to the number of " "vertices in `sphere_origin`.") rbfi = RBFInterpolator(sphere_origin.vertices, np.moveaxis(data, last_dim_idx, 0), smoothing=smoothing, kernel=function, epsilon=epsilon) return np.moveaxis(rbfi(sphere_target.vertices), 0, last_dim_idx) @cython.cdivision(True) cdef cnp.npy_intp offset(cnp.npy_intp *indices, cnp.npy_intp *strides, int lenind, int typesize) noexcept nogil: """ Access any element of any ndimensional numpy array using cython. Parameters ---------- indices : cnp.npy_intp * (int64 *) Indices of the array for which we want to find the offset. 
strides : cnp.npy_intp * strides lenind : int, len(indices) typesize : int Number of bytes for data type e.g. if 8 for double, 4 for int32 Returns ------- offset : integer Element position in array """ cdef cnp.npy_intp i cdef cnp.npy_intp summ = 0 for i from 0 <= i < lenind: summ += strides[i] * indices[i] summ /= typesize return summ cdef void splitoffset(float *offset, cnp.npy_intp *index, cnp.npy_intp shape) noexcept nogil: """Splits a global offset into an integer index and a relative offset""" offset[0] -= .5 if offset[0] <= 0: index[0] = 0 offset[0] = 0. elif offset[0] >= (shape - 1): index[0] = shape - 2 offset[0] = 1. else: index[0] = offset[0] offset[0] = offset[0] - index[0] @cython.profile(False) cdef inline float wght(int i, float r) noexcept nogil: if i: return r else: return 1.-r @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def trilinear_interp(cnp.float32_t[:, :, :, :] data, cython.floating[:] index, cython.floating[::1] voxel_size): """Interpolates vector from 4D `data` at 3D point given by `index` Interpolates a vector of length T from a 4D volume of shape (I, J, K, T), given point (x, y, z) where (x, y, z) are the coordinates of the point in real units (not yet adjusted for voxel size). """ cdef: float x = index[0] / voxel_size[0] float y = index[1] / voxel_size[1] float z = index[2] / voxel_size[2] float weight cnp.npy_intp x_ind, y_ind, z_ind, ii, jj, kk, LL cnp.npy_intp last_d = data.shape[3] bint bounds_check cnp.ndarray[cnp.float32_t, ndim=1, mode='c'] result bounds_check = (x < 0 or y < 0 or z < 0 or x > data.shape[0] or y > data.shape[1] or z > data.shape[2]) if bounds_check: raise IndexError splitoffset(&x, &x_ind, data.shape[0]) splitoffset(&y, &y_ind, data.shape[1]) splitoffset(&z, &z_ind, data.shape[2]) result = np.zeros(last_d, dtype='float32') for ii from 0 <= ii <= 1: for jj from 0 <= jj <= 1: for kk from 0 <= kk <= 1: weight = wght(ii, x)*wght(jj, y)*wght(kk, z) for LL from 0 <= LL < last_d: result[LL] += data[x_ind+ii,y_ind+jj,z_ind+kk,LL]*weight return result @cython.boundscheck(False) @cython.wraparound(False) def map_coordinates_trilinear_iso(cnp.ndarray[double, ndim=3] data, cnp.ndarray[double, ndim=2] points, cnp.ndarray[cnp.npy_intp, ndim=1] data_strides, cnp.npy_intp len_points, cnp.ndarray[double, ndim=1] result): """ Trilinear interpolation (isotropic voxel size) Has similar behavior to ``map_coordinates`` from ``scipy.ndimage`` Parameters ---------- data : array, float64 shape (X, Y, Z) points : array, float64 shape(N, 3) data_strides : array npy_intp shape (3,) Strides sequence for `data` array len_points : cnp.npy_intp Number of points to interpolate result : array, float64 shape(N) The output array. This array should be initialized before you call this function. On exit it will contain the interpolated values from `data` at points given by `points`. Returns ------- None Notes ----- The output array `result` is filled in-place. 
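    Examples
    --------
    A minimal sketch (assumes a C-contiguous float64 volume; the strides are
    passed in bytes, exactly as numpy reports them)::

        import numpy as np
        data = np.random.rand(5, 5, 5)
        points = np.array([[1.5, 2.5, 3.5], [0.0, 0.0, 0.0]])
        result = np.zeros(len(points))
        map_coordinates_trilinear_iso(
            data, points, np.array(data.strides, dtype=np.intp), 2, result)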
""" cdef: double w[8] double values[24] cnp.npy_intp index[24] cnp.npy_intp off, i, j double *ds= cnp.PyArray_DATA(data) double *ps= cnp.PyArray_DATA(points) cnp.npy_intp *strides = cnp.PyArray_DATA(data_strides) double *rs= cnp.PyArray_DATA(result) if not cnp.PyArray_CHKFLAGS(data, cnp.NPY_ARRAY_C_CONTIGUOUS): raise ValueError("data is not C contiguous") if not cnp.PyArray_CHKFLAGS(points, cnp.NPY_ARRAY_C_CONTIGUOUS): raise ValueError("points is not C contiguous") if not cnp.PyArray_CHKFLAGS(data_strides, cnp.NPY_ARRAY_C_CONTIGUOUS): raise ValueError("data_strides is not C contiguous") if not cnp.PyArray_CHKFLAGS(result, cnp.NPY_ARRAY_C_CONTIGUOUS): raise ValueError("result is not C contiguous") with nogil: for i in range(len_points): _trilinear_interpolation_iso(&ps[i * 3], w, index) rs[i] = 0 for j in range(8): weight = w[j] off = offset(&index[j * 3], strides, 3, 8) value = ds[off] rs[i] += weight * value return cdef void _trilinear_interpolation_iso(double *X, double *W, cnp.npy_intp *IN) noexcept nogil: """ Interpolate in 3d volumes given point X Returns ------- W : weights IN : indices of the volume """ cdef double Xf[3] cdef double d[3] cdef double nd[3] cdef cnp.npy_intp i # define the rectangular box where every corner is a neighboring voxel # (assuming center) !!! this needs to change for the affine case for i from 0 <= i < 3: Xf[i] = floor(X[i]) d[i] = X[i] - Xf[i] nd[i] = 1 - d[i] # weights # the weights are actually the volumes of the 8 smaller boxes that define # the initial rectangular box for more on trilinear have a look here # https://en.wikipedia.org/wiki/Trilinear_interpolation # http://local.wasp.uwa.edu.au/~pbourke/miscellaneous/interpolation/index.html W[0]=nd[0] * nd[1] * nd[2] W[1]= d[0] * nd[1] * nd[2] W[2]=nd[0] * d[1] * nd[2] W[3]=nd[0] * nd[1] * d[2] W[4]= d[0] * d[1] * nd[2] W[5]=nd[0] * d[1] * d[2] W[6]= d[0] * nd[1] * d[2] W[7]= d[0] * d[1] * d[2] # indices # the indices give you the indices of the neighboring voxels (the corners # of the box) e.g. the qa coordinates IN[0] =Xf[0]; IN[1] =Xf[1]; IN[2] =Xf[2] IN[3] =Xf[0]+1; IN[4] =Xf[1]; IN[5] =Xf[2] IN[6] =Xf[0]; IN[7] =Xf[1]+1; IN[8] =Xf[2] IN[9] =Xf[0]; IN[10]=Xf[1]; IN[11]=Xf[2]+1 IN[12]=Xf[0]+1; IN[13]=Xf[1]+1; IN[14]=Xf[2] IN[15]=Xf[0]; IN[16]=Xf[1]+1; IN[17]=Xf[2]+1 IN[18]=Xf[0]+1; IN[19]=Xf[1]; IN[20]=Xf[2]+1 IN[21]=Xf[0]+1; IN[22]=Xf[1]+1; IN[23]=Xf[2]+1 return @cython.boundscheck(False) @cython.wraparound(False) cdef int trilinear_interpolate4d_c(floating[:, :, :, :] data, floating* point, floating* result) noexcept nogil: """Tri-linear interpolation along the last dimension of a 4d array Parameters ---------- data : 4d array Data to be interpolated. point : 1d array (3,) 3 doubles representing a 3d point in space. If point has integer values ``[i, j, k]``, the result will be the same as ``data[i, j, k]``. result : 1d array The result of interpolation. Should have length equal to the ``data.shape[3]``. Returns ------- err : int 0 : successful interpolation. -1 : point is outside the data area, meaning round(point) is not a valid index to data. 
-2 : point has the wrong shape -3 : shape of data and result do not match """ cdef: cnp.npy_intp flr, N double w, rem cnp.npy_intp index[3][2] double weight[3][2] for i in range(3): if point[i] < -.5 or point[i] >= (data.shape[i] - .5): return -1 flr = floor(point[i]) rem = point[i] - flr index[i][0] = flr + (flr == -1) index[i][1] = flr + (flr != (data.shape[i] - 1)) weight[i][0] = 1 - rem weight[i][1] = rem N = data.shape[3] for i in range(N): result[i] = 0 for i in range(2): for j in range(2): for k in range(2): w = weight[0][i] * weight[1][j] * weight[2][k] for L in range(N): result[L] += w * data[index[0][i], index[1][j], index[2][k], L] return 0 def trilinear_interpolate4d(floating[:, :, :, :] data, floating[:] point, floating[:] out=None): """Tri-linear interpolation along the last dimension of a 4d array Parameters ---------- data : 4d array Data to be interpolated. point : 1d array (3,) 3 doubles representing a 3d point in space. If point has integer values ``[i, j, k]``, the result will be the same as ``data[i, j, k]``. out : 1d array, optional The output array for the result of the interpolation. Returns ------- out : 1d array The result of interpolation. """ if out is None: out = np.empty(data.shape[3]) elif data.shape[3] != out.shape[0]: msg = "out array must have same size as the last dimension of data." raise ValueError(msg) err = trilinear_interpolate4d_c(data, &point[0], &out[0]) if err == 0: return out elif err == -1: raise IndexError("The point point is outside data") elif err == -2: raise ValueError("Point must be a 1d array with shape (3,).") return out def nearestneighbor_interpolate(data, point): index = tuple(np.round(point).astype(int)) return data[index] def interpolate_vector_2d(floating[:, :, :] field, double[:, :] locations): r"""Bilinear interpolation of a 2D vector field Interpolates the 2D vector field at the given locations. This function is a wrapper for _interpolate_vector_2d for testing purposes, it is equivalent to using scipy.ndimage.map_coordinates with bilinear interpolation at each vector component Parameters ---------- field : array, shape (S, R, 2) the 2D vector field to be interpolated locations : array, shape (n, 2) (locations[i,0], locations[i,1]), 0<=i= nr or djj >= nc: out[0] = 0 out[1] = 0 return 0 # ---top-left ii = floor(dii) jj = floor(djj) calpha = dii - ii cbeta = djj - jj alpha = 1 - calpha beta = 1 - cbeta inside = 0 if (ii >= 0) and (jj >= 0): out[0] = alpha * beta * field[ii, jj, 0] out[1] = alpha * beta * field[ii, jj, 1] inside += 1 else: out[0] = 0 out[1] = 0 # ---top-right jj += 1 if (jj < nc) and (ii >= 0): out[0] += alpha * cbeta * field[ii, jj, 0] out[1] += alpha * cbeta * field[ii, jj, 1] inside += 1 # ---bottom-right ii += 1 if (jj < nc) and (ii < nr): out[0] += calpha * cbeta * field[ii, jj, 0] out[1] += calpha * cbeta * field[ii, jj, 1] inside += 1 # ---bottom-left jj -= 1 if (jj >= 0) and (ii < nr): out[0] += calpha * beta * field[ii, jj, 0] out[1] += calpha * beta * field[ii, jj, 1] inside += 1 return 1 if inside == 4 else 0 def interpolate_scalar_2d(floating[:, :] image, double[:, :] locations): r"""Bilinear interpolation of a 2D scalar image Interpolates the 2D image at the given locations. 
This function is a wrapper for _interpolate_scalar_2d for testing purposes, it is equivalent to scipy.ndimage.map_coordinates with bilinear interpolation Parameters ---------- field : array, shape (S, R) the 2D image to be interpolated locations : array, shape (n, 2) (locations[i,0], locations[i,1]), 0<=i= nr or djj >= nc: out[0] = 0 return 0 # ---top-left ii = floor(dii) jj = floor(djj) calpha = dii - ii cbeta = djj - jj alpha = 1 - calpha beta = 1 - cbeta inside = 0 if (ii >= 0) and (jj >= 0): out[0] = alpha * beta * image[ii, jj] inside += 1 else: out[0] = 0 # ---top-right jj += 1 if (jj < nc) and (ii >= 0): out[0] += alpha * cbeta * image[ii, jj] inside += 1 # ---bottom-right ii += 1 if (jj < nc) and (ii < nr): out[0] += calpha * cbeta * image[ii, jj] inside += 1 # ---bottom-left jj -= 1 if (jj >= 0) and (ii < nr): out[0] += calpha * beta * image[ii, jj] inside += 1 return 1 if inside == 4 else 0 def interpolate_scalar_nn_2d(number[:, :] image, double[:, :] locations): r"""Nearest neighbor interpolation of a 2D scalar image Interpolates the 2D image at the given locations. This function is a wrapper for _interpolate_scalar_nn_2d for testing purposes, it is equivalent to scipy.ndimage.map_coordinates with nearest neighbor interpolation Parameters ---------- image : array, shape (S, R) the 2D image to be interpolated locations : array, shape (n, 2) (locations[i,0], locations[i,1]), 0<=i nr - 1 or djj > nc - 1: out[0] = 0 return 0 # find the top left index and the interpolation coefficients ii = floor(dii) jj = floor(djj) # no one is affected if ii < 0 or jj < 0 or ii >= nr or jj >= nc: out[0] = 0 return 0 calpha = dii - ii # by definition these factors are nonnegative cbeta = djj - jj alpha = 1 - calpha beta = 1 - cbeta if alpha < calpha: ii += 1 if beta < cbeta: jj += 1 # no one is affected if ii < 0 or jj < 0 or ii >= nr or jj >= nc: out[0] = 0 return 0 out[0] = image[ii, jj] return 1 def interpolate_scalar_nn_3d(number[:, :, :] image, double[:, :] locations): r"""Nearest neighbor interpolation of a 3D scalar image Interpolates the 3D image at the given locations. This function is a wrapper for _interpolate_scalar_nn_3d for testing purposes, it is equivalent to scipy.ndimage.map_coordinates with nearest neighbor interpolation Parameters ---------- image : array, shape (S, R, C) the 3D image to be interpolated locations : array, shape (n, 3) (locations[i,0], locations[i,1], locations[i,2), 0<=ifloor(dkk) ii = floor(dii) jj = floor(djj) # no one is affected if not ((0 <= kk < ns) and (0 <= ii < nr) and (0 <= jj < nc)): out[0] = 0 return 0 cgamma = dkk - kk calpha = dii - ii cbeta = djj - jj alpha = 1 - calpha beta = 1 - cbeta gamma = 1 - cgamma if gamma < cgamma: kk += 1 if alpha < calpha: ii += 1 if beta < cbeta: jj += 1 # no one is affected if not ((0 <= kk < ns) and (0 <= ii < nr) and (0 <= jj < nc)): out[0] = 0 return 0 out[0] = volume[kk, ii, jj] return 1 def interpolate_scalar_3d(floating[:, :, :] image, locations): r"""Trilinear interpolation of a 3D scalar image Interpolates the 3D image at the given locations. 
This function is a wrapper for _interpolate_scalar_3d for testing purposes, it is equivalent to scipy.ndimage.map_coordinates with trilinear interpolation Parameters ---------- field : array, shape (S, R, C) the 3D image to be interpolated locations : array, shape (n, 3) (locations[i,0], locations[i,1], locations[i,2), 0<=ifloor(dkk) ii = floor(dii) jj = floor(djj) # no one is affected cgamma = dkk - kk calpha = dii - ii cbeta = djj - jj alpha = 1 - calpha beta = 1 - cbeta gamma = 1 - cgamma inside = 0 # ---top-left if ii >= 0 and jj >= 0 and kk >= 0: out[0] = alpha * beta * gamma * volume[kk, ii, jj] inside += 1 else: out[0] = 0 # ---top-right jj += 1 if ii >= 0 and jj < nc and kk >= 0: out[0] += alpha * cbeta * gamma * volume[kk, ii, jj] inside += 1 # ---bottom-right ii += 1 if ii < nr and jj < nc and kk >= 0: out[0] += calpha * cbeta * gamma * volume[kk, ii, jj] inside += 1 # ---bottom-left jj -= 1 if ii < nr and jj >= 0 and kk >= 0: out[0] += calpha * beta * gamma * volume[kk, ii, jj] inside += 1 kk += 1 if kk < ns: ii -= 1 if ii >= 0 and jj >= 0: out[0] += alpha * beta * cgamma * volume[kk, ii, jj] inside += 1 jj += 1 if ii >= 0 and jj < nc: out[0] += alpha * cbeta * cgamma * volume[kk, ii, jj] inside += 1 # ---bottom-right ii += 1 if ii < nr and jj < nc: out[0] += calpha * cbeta * cgamma * volume[kk, ii, jj] inside += 1 # ---bottom-left jj -= 1 if ii < nr and jj >= 0: out[0] += calpha * beta * cgamma * volume[kk, ii, jj] inside += 1 return 1 if inside == 8 else 0 def interpolate_vector_3d(floating[:, :, :, :] field, double[:, :] locations): r"""Trilinear interpolation of a 3D vector field Interpolates the 3D vector field at the given locations. This function is a wrapper for _interpolate_vector_3d for testing purposes, it is equivalent to using scipy.ndimage.map_coordinates with trilinear interpolation at each vector component Parameters ---------- field : array, shape (S, R, C, 3) the 3D vector field to be interpolated locations : array, shape (n, 3) (locations[i,0], locations[i,1], locations[i,2), 0<=ifloor(dkk) ii = floor(dii) jj = floor(djj) cgamma = dkk - kk calpha = dii - ii cbeta = djj - jj alpha = 1 - calpha beta = 1 - cbeta gamma = 1 - cgamma inside = 0 if ii >= 0 and jj >= 0 and kk >= 0: out[0] = alpha * beta * gamma * field[kk, ii, jj, 0] out[1] = alpha * beta * gamma * field[kk, ii, jj, 1] out[2] = alpha * beta * gamma * field[kk, ii, jj, 2] inside += 1 else: out[0] = 0 out[1] = 0 out[2] = 0 # ---top-right jj += 1 if jj < nc and ii >= 0 and kk >= 0: out[0] += alpha * cbeta * gamma * field[kk, ii, jj, 0] out[1] += alpha * cbeta * gamma * field[kk, ii, jj, 1] out[2] += alpha * cbeta * gamma * field[kk, ii, jj, 2] inside += 1 # ---bottom-right ii += 1 if jj < nc and ii < nr and kk >= 0: out[0] += calpha * cbeta * gamma * field[kk, ii, jj, 0] out[1] += calpha * cbeta * gamma * field[kk, ii, jj, 1] out[2] += calpha * cbeta * gamma * field[kk, ii, jj, 2] inside += 1 # ---bottom-left jj -= 1 if jj >= 0 and ii < nr and kk >= 0: out[0] += calpha * beta * gamma * field[kk, ii, jj, 0] out[1] += calpha * beta * gamma * field[kk, ii, jj, 1] out[2] += calpha * beta * gamma * field[kk, ii, jj, 2] inside += 1 kk += 1 if kk < ns: ii -= 1 if jj >= 0 and ii >= 0: out[0] += alpha * beta * cgamma * field[kk, ii, jj, 0] out[1] += alpha * beta * cgamma * field[kk, ii, jj, 1] out[2] += alpha * beta * cgamma * field[kk, ii, jj, 2] inside += 1 jj += 1 if jj < nc and ii >= 0: out[0] += alpha * cbeta * cgamma * field[kk, ii, jj, 0] out[1] += alpha * cbeta * cgamma * field[kk, ii, jj, 1] out[2] += 
alpha * cbeta * cgamma * field[kk, ii, jj, 2] inside += 1 # ---bottom-right ii += 1 if jj < nc and ii < nr: out[0] += calpha * cbeta * cgamma * field[kk, ii, jj, 0] out[1] += calpha * cbeta * cgamma * field[kk, ii, jj, 1] out[2] += calpha * cbeta * cgamma * field[kk, ii, jj, 2] inside += 1 # ---bottom-left jj -= 1 if jj >= 0 and ii < nr: out[0] += calpha * beta * cgamma * field[kk, ii, jj, 0] out[1] += calpha * beta * cgamma * field[kk, ii, jj, 1] out[2] += calpha * beta * cgamma * field[kk, ii, jj, 2] inside += 1 return 1 if inside == 8 else 0 class OutsideImage(Exception): pass class Interpolator: """Class to be subclassed by different interpolator types""" def __init__(self, data, voxel_size): self.data = data self.voxel_size = np.array(voxel_size, dtype=float, copy=True) class NearestNeighborInterpolator(Interpolator): """Interpolates data using nearest neighbor interpolation""" def __getitem__(self, index): index = tuple(index / self.voxel_size) if min(index) < 0: raise OutsideImage('Negative Index') try: return self.data[tuple(np.array(index).astype(int))] except IndexError: raise OutsideImage class TriLinearInterpolator(Interpolator): """Interpolates data using trilinear interpolation interpolate 4d diffusion volume using 3 indices, ie data[x, y, z] """ def __init__(self, data, voxel_size): super(TriLinearInterpolator, self).__init__(data, voxel_size) if self.voxel_size.shape != (3,) or self.data.ndim != 4: raise ValueError("Data should be 4d volume of diffusion data and " "voxel_size should have 3 values, ie the size " "of a 3d voxel") def __getitem__(self, index): index = np.asarray(index, dtype="float") try: return trilinear_interp(self.data, index, self.voxel_size) except IndexError: raise OutsideImage dipy-1.11.0/dipy/core/math.pxd000066400000000000000000000011601476546756600161320ustar00rootroot00000000000000# cython: boundscheck=False # cython: cdivision=True # cython: initializedcheck=False # cython: wraparound=False # cython: nonecheck=False # cython: overflowcheck=False from dipy.align.fused_types cimport floating cdef inline floating f_max(floating a, floating b) noexcept nogil: """Return the maximum of a and b.""" return a if a >= b else b cdef inline floating f_min(floating a, floating b) noexcept nogil: """Return the minimum of a and b.""" return a if a <= b else b cdef floating f_array_min(floating* arr, int n) noexcept nogil cdef floating f_array_max(floating* arr, int n) noexcept nogil dipy-1.11.0/dipy/core/math.pyx000066400000000000000000000013521476546756600161620ustar00rootroot00000000000000# cython: boundscheck=False # cython: cdivision=True # cython: initializedcheck=False # cython: wraparound=False # cython: nonecheck=False # cython: overflowcheck=False from dipy.align.fused_types cimport floating cdef floating f_array_min(floating* arr, int n) noexcept nogil: """Return the minimum value of an array.""" cdef int i cdef double min_val = arr[0] for i in range(1, n): if arr[i] < min_val: min_val = arr[i] return min_val cdef floating f_array_max(floating* arr, int n) noexcept nogil: """Compute the maximum value of an array.""" cdef int i cdef double max_val = arr[0] for i in range(1, n): if arr[i] > max_val: max_val = arr[i] return max_valdipy-1.11.0/dipy/core/meson.build000066400000000000000000000015331476546756600166320ustar00rootroot00000000000000cython_sources = [ 'interpolation', 'math', ] cython_headers = [ 'interpolation.pxd', 'math.pxd', ] foreach ext: cython_sources if fs.exists(ext + '.pxd') extra_args += ['--depfile', meson.current_source_dir() +'/'+ 
ext + '.pxd', ] endif py3.extension_module(ext, cython_gen.process(ext + '.pyx'), c_args: cython_c_args, include_directories: [incdir_numpy, inc_local], dependencies: [omp], install: true, subdir: 'dipy/core' ) endforeach python_sources = ['__init__.py', 'geometry.py', 'gradients.py', 'graph.py', 'histeq.py', 'ndindex.py', 'onetime.py', 'optimize.py', 'profile.py', 'rng.py', 'sphere.py', 'sphere_stats.py', 'subdivide_octahedron.py', 'wavelet.py', ] py3.install_sources( python_sources + cython_headers, pure: false, subdir: 'dipy/core' ) subdir('tests')dipy-1.11.0/dipy/core/ndindex.py000066400000000000000000000021531476546756600164720ustar00rootroot00000000000000import numpy as np from numpy.lib.stride_tricks import as_strided def ndindex(shape): """ An N-dimensional iterator object to index arrays. Given the shape of an array, an `ndindex` instance iterates over the N-dimensional index of the array. At each iteration a tuple of indices is returned; the last dimension is iterated over first. Parameters ---------- shape : tuple of ints The dimensions of the array. Examples -------- >>> from dipy.core.ndindex import ndindex >>> shape = (3, 2, 1) >>> for index in ndindex(shape): ... print(index) (0, 0, 0) (0, 1, 0) (1, 0, 0) (1, 1, 0) (2, 0, 0) (2, 1, 0) """ if len(shape) == 0: yield () else: x = as_strided(np.zeros(1), shape=shape, strides=np.zeros_like(shape)) try: ndi = np.nditer(x, flags=["multi_index", "zerosize_ok"], order="C") except AttributeError: # nditer only available in numpy >= 1.6 yield from np.ndindex(*shape) else: for _ in ndi: yield ndi.multi_index dipy-1.11.0/dipy/core/onetime.py000066400000000000000000000161111476546756600165000ustar00rootroot00000000000000""" Descriptor support for NIPY. Copyright (c) 2006-2011, NIPY Developers All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. * Neither the name of the NIPY Developers nor the names of any contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. Utilities to support special Python descriptors [1]_, [2]_ in particular the use of a useful pattern for properties we call 'one time properties'. These are object attributes which are declared as properties, but become regular attributes once they've been read the first time. 
They can thus be evaluated later in the object's life cycle, but once evaluated they become normal, static attributes with no function call overhead on access or any other constraints. A special ResetMixin class is provided to add a .reset() method to users who may want to have their objects capable of resetting these computed properties to their 'untriggered' state. References ---------- .. [1] How-To Guide for Descriptors, Raymond Hettinger. http://users.rcn.com/python/download/Descriptor.htm .. [2] Python data model, https://docs.python.org/reference/datamodel.html """ # ---------------------------------------------------------------------------- # Classes and Functions # ---------------------------------------------------------------------------- class ResetMixin: """A Mixin class to add a .reset() method to users of OneTimeProperty. By default, auto attributes once computed, become static. If they happen to depend on other parts of an object and those parts change, their values may now be invalid. This class offers a .reset() method that users can call *explicitly* when they know the state of their objects may have changed and they want to ensure that *all* their special attributes should be invalidated. Once reset() is called, all their auto attributes are reset to their OneTimeProperty descriptors, and their accessor functions will be triggered again. .. warning:: If a class has a set of attributes that are OneTimeProperty, but that can be initialized from any one of them, do NOT use this mixin! For instance, UniformTimeSeries can be initialized with only sampling_rate and t0, sampling_interval and time are auto-computed. But if you were to reset() a UniformTimeSeries, it would lose all 4, and there would be then no way to break the circular dependency chains. If this becomes a problem in practice (for our analyzer objects it isn't, as they don't have the above pattern), we can extend reset() to check for a _no_reset set of names in the instance which are meant to be kept protected. But for now this is NOT done, so caveat emptor. Examples -------- >>> class A(ResetMixin): ... def __init__(self,x=1.0): ... self.x = x ... ... @auto_attr ... def y(self): ... print('*** y computation executed ***') ... return self.x / 2.0 ... >>> a = A(10) About to access y twice, the second time no computation is done: >>> a.y *** y computation executed *** 5.0 >>> a.y 5.0 Changing x >>> a.x = 20 a.y doesn't change to 10, since it is a static attribute: >>> a.y 5.0 We now reset a, and this will then force all auto attributes to recompute the next time we access them: >>> a.reset() About to access y twice again after reset(): >>> a.y *** y computation executed *** 10.0 >>> a.y 10.0 """ def reset(self): """Reset all OneTimeProperty attributes that may have fired already.""" instdict = self.__dict__ classdict = self.__class__.__dict__ # To reset them, we simply remove them from the instance dict. At that # point, it's as if they had never been computed. On the next access, # the accessor function from the parent class will be called, simply # because that's how the python descriptor protocol works. for mname, mval in classdict.items(): if mname in instdict and isinstance(mval, OneTimeProperty): delattr(self, mname) class OneTimeProperty: """A descriptor to make special properties that become normal attributes. This is meant to be used mostly by the auto_attr decorator in this module. """ def __init__(self, func): """Create a OneTimeProperty instance. 
Parameters ---------- func : method The method that will be called the first time to compute a value. Afterwards, the method's name will be a standard attribute holding the value of this computation. """ self.getter = func self.name = func.__name__ def __get__(self, obj, type=None): """This will be called on attribute access on the class or instance.""" if obj is None: # Being called on the class, return the original function. This # way, introspection works on the class. # return func return self.getter # Errors in the following line are errors in setting a # OneTimeProperty val = self.getter(obj) setattr(obj, self.name, val) return val def auto_attr(func): """Decorator to create OneTimeProperty attributes. Parameters ---------- func : method The method that will be called the first time to compute a value. Afterwards, the method's name will be a standard attribute holding the value of this computation. Examples -------- >>> class MagicProp: ... @auto_attr ... def a(self): ... return 99 ... >>> x = MagicProp() >>> 'a' in x.__dict__ False >>> x.a 99 >>> 'a' in x.__dict__ True """ return OneTimeProperty(func) dipy-1.11.0/dipy/core/optimize.py000066400000000000000000000434231476546756600167060ustar00rootroot00000000000000"""A unified interface for performing and debugging optimization problems.""" import abc import warnings import numpy as np import scipy.optimize as opt from scipy.optimize import minimize from dipy.testing.decorators import warning_for_keywords from dipy.utils.optpkg import optional_package cvxpy, have_cvxpy, _ = optional_package("cvxpy", min_version="1.4.1") class Optimizer: @warning_for_keywords() def __init__( self, fun, x0, args=(), *, method="L-BFGS-B", jac=None, hess=None, hessp=None, bounds=None, constraints=(), tol=None, callback=None, options=None, evolution=False, ): """A class for handling minimization of scalar function of one or more variables. Parameters ---------- fun : callable Objective function. x0 : ndarray Initial guess. args : tuple, optional Extra arguments passed to the objective function and its derivatives (Jacobian, Hessian). method : str, optional Type of solver. Should be one of - 'Nelder-Mead' - 'Powell' - 'CG' - 'BFGS' - 'Newton-CG' - 'Anneal' - 'L-BFGS-B' - 'TNC' - 'COBYLA' - 'SLSQP' - 'dogleg' - 'trust-ncg' jac : bool or callable, optional Jacobian of objective function. Only for CG, BFGS, Newton-CG, dogleg, trust-ncg. If `jac` is a Boolean and is True, `fun` is assumed to return the value of Jacobian along with the objective function. If False, the Jacobian will be estimated numerically. `jac` can also be a callable returning the Jacobian of the objective. In this case, it must accept the same arguments as `fun`. hess, hessp : callable, optional Hessian of objective function or Hessian of objective function times an arbitrary vector p. Only for Newton-CG, dogleg, trust-ncg. Only one of `hessp` or `hess` needs to be given. If `hess` is provided, then `hessp` will be ignored. If neither `hess` nor `hessp` is provided, then the hessian product will be approximated using finite differences on `jac`. `hessp` must compute the Hessian times an arbitrary vector. bounds : sequence, optional Bounds for variables (only for L-BFGS-B, TNC and SLSQP). ``(min, max)`` pairs for each element in ``x``, defining the bounds on that parameter. Use None for one of ``min`` or ``max`` when there is no bound in that direction. constraints : dict or sequence of dict, optional Constraints definition (only for COBYLA and SLSQP). 
Each constraint is defined in a dictionary with fields: type : str Constraint type: 'eq' for equality, 'ineq' for inequality. fun : callable The function defining the constraint. jac : callable, optional The Jacobian of `fun` (only for SLSQP). args : sequence, optional Extra arguments to be passed to the function and Jacobian. Equality constraint means that the constraint function result is to be zero whereas inequality means that it is to be non-negative. Note that COBYLA only supports inequality constraints. tol : float, optional Tolerance for termination. For detailed control, use solver-specific options. callback : callable, optional Called after each iteration, as ``callback(xk)``, where ``xk`` is the current parameter vector. Only available using Scipy >= 0.12. options : dict, optional A dictionary of solver options. All methods accept the following generic options: maxiter : int Maximum number of iterations to perform. disp : bool Set to True to print convergence messages. For method-specific options, see `show_options('minimize', method)`. evolution : bool, optional save history of x for each iteration. Only available using Scipy >= 0.12. See Also -------- scipy.optimize.minimize """ self.size_of_x = len(x0) self._evol_kx = None if evolution is True: self._evol_kx = [] def history_of_x(kx): self._evol_kx.append(kx) res = minimize( fun, x0, args, method, jac, hess, hessp, bounds, constraints, tol, callback=history_of_x, options=options, ) else: res = minimize( fun, x0, args, method, jac, hess, hessp, bounds, constraints, tol, callback, options, ) self.res = res @property def xopt(self): return self.res["x"] @property def fopt(self): return self.res["fun"] @property def nit(self): return self.res["nit"] @property def nfev(self): return self.res["nfev"] @property def message(self): return self.res["message"] def print_summary(self): print(self.res) @property def evolution(self): if self._evol_kx is not None: return np.asarray(self._evol_kx) else: return None def spdot(A, B): """The same as np.dot(A, B), except it works even if A or B or both are sparse matrices. Parameters ---------- A, B : arrays of shape (m, n), (n, k) Returns ------- The matrix product AB. If both A and B are sparse, the result will be a sparse matrix. Otherwise, a dense result is returned See discussion here: http://mail.scipy.org/pipermail/scipy-user/2010-November/027700.html """ return A @ B @warning_for_keywords() def sparse_nnls( y, X, *, momentum=1, step_size=0.01, non_neg=True, check_error_iter=10, max_error_checks=10, converge_on_sse=0.99, ): """ Solve y=Xh for h, using gradient descent, with X a sparse matrix. Parameters ---------- y : 1-d array of shape (N) The data. Needs to be dense. X : ndarray. May be either sparse or dense. Shape (N, M) The regressors momentum : float, optional The persistence of the gradient. step_size : float, optional The increment of parameter update in each iteration non_neg : Boolean, optional Whether to enforce non-negativity of the solution. check_error_iter : int, optional How many rounds to run between error evaluation for convergence-checking. max_error_checks : int, optional Don't check errors more than this number of times if no improvement in r-squared is seen. converge_on_sse : float, optional a percentage improvement in SSE that is required each time to say that things are still going well. Returns ------- h_best : The best estimate of the parameters. 
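    Examples
    --------
    A small dense sketch (`X` may equally be a scipy.sparse matrix)::

        import numpy as np
        rng = np.random.default_rng(0)
        X = rng.random((100, 5))
        h_true = rng.random(5)     # non-negative by construction
        y = X.dot(h_true)
        h_hat = sparse_nnls(y, X)  # gradient-descent NNLS estimate of h_true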
""" num_regressors = X.shape[1] # Initialize the parameters at the origin: h = np.zeros(num_regressors) # If nothing good happens, we'll return that: h_best = h iteration = 1 ss_residuals_min = np.inf # This will keep track of the best solution sse_best = np.inf # This will keep track of the best performance so far count_bad = 0 # Number of times estimation error has gone up. error_checks = 0 # How many error checks have we done so far while 1: if iteration > 1: # The gradient is (Kay 2008 supplemental page 27): gradient = spdot(X.T, spdot(X, h) - y) gradient += momentum * gradient # Normalize to unit-length unit_length_gradient = gradient / np.sqrt(np.dot(gradient, gradient)) # Update the parameters in the direction of the gradient: h -= step_size * unit_length_gradient if non_neg: # Set negative values to 0: h[h < 0] = 0 # Every once in a while check whether it's converged: if np.mod(iteration, check_error_iter): # This calculates the sum of squared residuals at this point: sse = np.sum((y - spdot(X, h)) ** 2) # Did we do better this time around? if sse < ss_residuals_min: # Update your expectations about the minimum error: ss_residuals_min = sse h_best = h # This holds the best params we have so far # Are we generally (over iterations) converging on # sufficient improvement in r-squared? if sse < converge_on_sse * sse_best: sse_best = sse count_bad = 0 else: count_bad += 1 else: count_bad += 1 if count_bad >= max_error_checks: return h_best error_checks += 1 iteration += 1 class SKLearnLinearSolver(metaclass=abc.ABCMeta): """ Provide a sklearn-like uniform interface to algorithms that solve problems of the form: $y = Ax$ for $x$ Sub-classes of SKLearnLinearSolver should provide a 'fit' method that have the following signature: `SKLearnLinearSolver.fit(X, y)`, which would set an attribute `SKLearnLinearSolver.coef_`, with the shape (X.shape[1],), such that an estimate of y can be calculated as: `y_hat = np.dot(X, SKLearnLinearSolver.coef_.T)` """ def __init__(self, *args, **kwargs): self._args = args self._kwargs = kwargs @abc.abstractmethod def fit(self, X, y): """Implement for all derived classes""" def predict(self, X): """ Predict using the result of the model Parameters ---------- X : array-like (n_samples, n_features) Samples. Returns ------- C : array, shape = (n_samples,) Predicted values. """ X = np.asarray(X) return np.dot(X, self.coef_.T) class NonNegativeLeastSquares(SKLearnLinearSolver): """ A sklearn-like interface to scipy.optimize.nnls """ def fit(self, X, y): """ Fit the NonNegativeLeastSquares linear model to data Parameters ---------- X : array-like (n_samples, n_features) Samples. y : array-like (n_samples,) Target values. """ coef, rnorm = opt.nnls(X, y) self.coef_ = coef return self class PositiveDefiniteLeastSquares: @warning_for_keywords() def __init__(self, m, *, A=None, L=None): r"""Regularized least squares with linear matrix inequality constraints. See :footcite:p:`DelaHaije2020` for further details about the method. Generate a CVXPY representation of a regularized least squares optimization problem subject to linear matrix inequality constraints. Parameters ---------- m : int Positive int indicating the number of regressors. A : array (t = m + k + 1, p, p), optional Constraint matrices $A$. L : array (m, m), optional Regularization matrix $L$. Default: None. 
Notes ----- The basic problem is to solve for $h$ the minimization of $c=\|X h - y\|^2 + \|L h\|^2$, where $X$ is an (m, m) upper triangular design matrix and $y$ is a set of m measurements, subject to the constraint that $M=A_0+\sum_{i=0}^{m-1} h_i A_{i+1}+\sum_{j=0}^{k-1} s_j A_{m+j+1}>0$, where $s_j$ are slack variables and where the inequality sign denotes positive definiteness of the matrix $M$. The sparsity pattern and size of $X$ and $y$ are fixed, because every design matrix and set of measurements can be reduced to an equivalent (minimal) formulation of this type. This formulation is used here mainly to enforce polynomial sum-of-squares constraints on various models, as described in :footcite:p:`DelaHaije2020`. References ---------- .. footbibliography:: """ # Note: several comments refer to DelaHaije2020 eq 22 # Input self.A = A # list: [zeros, H(omega), L(alpha)] --- see eq 22 self.L = L # this is a regularizer matrix, NOT L(alpha) eq 22 # Problem size t = len(A) if A else 0 k = t - m - 1 # length of alpha in L(alpha) eq 22 # Unknowns self._X = cvxpy.Parameter((m, m)) # Design matrix self._f = cvxpy.Parameter(m) # Given solution for feasibility check self._h = cvxpy.Variable(m) # Solution to constrained problem self._y = cvxpy.Parameter(m) # Regressand # Error output self._zeros = np.zeros(m) # Objective c = self._X @ self._h - self._y if L is not None: c += L @ self._h f_objective = cvxpy.Minimize(0) p_objective = cvxpy.Minimize(cvxpy.norm(c)) # Constraints if t: M = F = A[0] # first matrix all zeros (use to initialize) if k > 0: for i in range(m): # loop over H(omega) from eq 22 F += self._f[i] * A[i + 1] M += self._h[i] * A[i + 1] # A-matrix for 'intercept' (22nd, i=21) should be zeros self._s = cvxpy.Variable(k) for j in range(k): # loop over L(alpha) from eq 22 F += self._s[j] * A[m + j + 1] M += self._s[j] * A[m + j + 1] else: for i in range(t - 1): F += self._f[i] * A[i + 1] M += self._h[i] * A[i + 1] f_constraints = [F >> 0] p_constraints = [M >> 0] else: f_constraints = p_constraints = [] # CVXPY problems self.problem = cvxpy.Problem(p_objective, p_constraints) self.unconstrained_problem = cvxpy.Problem(p_objective) self.feasibility_problem = cvxpy.Problem(f_objective, f_constraints) @warning_for_keywords() def solve(self, design_matrix, measurements, *, check=False, **kwargs): r"""Solve CVXPY problem Solve a CVXPY problem instance for a given design matrix and a given set of observations, and return the optimum. Parameters ---------- design_matrix : array (n, m) Design matrix. measurements : array (n) Measurements. check : boolean, optional If True check whether the unconstrained optimization solution already satisfies the constraints, before running the constrained optimization. This adds overhead, but can avoid unnecessary constrained optimization calls. kwargs : keyword arguments Arguments passed to the CVXPY solve method. Returns ------- h : array (m) Estimated optimum for problem variables $h$. """ # Compute and set reduced problem parameters try: X = np.linalg.cholesky(np.dot(design_matrix.T, design_matrix)).T except np.linalg.linalg.LinAlgError: msg = "Cholesky decomposition failed, returning zero array. Verify " msg += "that the data is sufficient to estimate the model " msg += "parameters, and that the design matrix has full rank." 
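            # (editor's note) np.linalg.cholesky raises LinAlgError whenever
            # X.T @ X is not positive definite, i.e. when the design matrix is
            # rank-deficient; in that case no unique reduced problem exists,
            # so the solver warns and falls back to a zero solution.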
warnings.warn(msg, stacklevel=2) return self._zeros self._X.value = X self._y.value = np.linalg.multi_dot( [X, np.linalg.pinv(design_matrix), measurements] ) try: # Check unconstrained solution if check: # Solve unconstrained problem self.unconstrained_problem.solve(**kwargs) # Return zeros if optimization failed status = self.unconstrained_problem.status if status != "optimal": msg = f"Solver failed to produce an optimum: {status}." warnings.warn(msg, stacklevel=2) msg = "Optimization failed, returning zero array." warnings.warn(msg, stacklevel=2) return self._zeros # Return unconstrained solution if satisfactory self._f.value = self._h.value self.feasibility_problem.solve(**kwargs) if self.feasibility_problem.status == "optimal": return np.asarray(self._h.value).squeeze() # Solve constrained problem self.problem.solve(**kwargs) # Show warning if solution is not optimal status = self.problem.status if status != "optimal": msg = f"Solver failed to produce an optimum: {status}." warnings.warn(msg, stacklevel=2) # Return solution return np.asarray(self._h.value).squeeze() except cvxpy.error.SolverError: # Return zeros msg = "Optimization failed, returning zero array." warnings.warn(msg, stacklevel=2) return self._zeros dipy-1.11.0/dipy/core/profile.py000066400000000000000000000057131476546756600165060ustar00rootroot00000000000000"""Class for profiling cython code""" import os import subprocess from dipy.testing.decorators import warning_for_keywords from dipy.utils.optpkg import optional_package cProfile, _, _ = optional_package("cProfile") pstats, _, _ = optional_package( "pstats", trip_msg="pstats is not installed. It is " "part of the python-profiler package in " "Debian/Ubuntu", ) class Profiler: """Profile python/cython files or functions If you are profiling cython code you need to add # cython: profile=True on the top of your .pyx file and for the functions that you do not want to profile you can use this decorator in your cython files @cython.profile(False) Parameters ---------- caller : file or function call args : function arguments Attributes ---------- stats : function, stats.print_stats(10) will prin the 10 slower functions Examples -------- from dipy.core.profile import Profiler import numpy as np p=Profiler(np.sum,np.random.rand(1000000,3)) fname='test.py' p=Profiler(fname) p.print_stats(10) p.print_stats('det') References ---------- https://docs.cython.org/src/tutorial/profiling_tutorial.html https://docs.python.org/library/profile.html https://github.com/rkern/line_profiler """ def __init__(self, *args, call=None): # Delay import until use of class instance. 
We were getting some very # odd build-as-we-go errors running tests and documentation otherwise import pyximport pyximport.install() try: ext = os.path.splitext(call)[1].lower() print("ext", ext) if ext in (".py", ".pyx"): # python/cython file print("profiling python/cython file ...") subprocess.call( ["python3", "-m", "cProfile", "-o", "profile.prof", call] ) s = pstats.Stats("profile.prof") stats = s.strip_dirs().sort_stats("time") self.stats = stats except Exception: print("profiling function call ...") self.args = args self.call = call cProfile.runctx( "self._profile_function()", globals(), locals(), "profile.prof" ) s = pstats.Stats("profile.prof") stats = s.strip_dirs().sort_stats("time") self.stats = stats def _profile_function(self): self.call(*self.args) @warning_for_keywords() def print_stats(self, *, N=10): """Print stats for profiling You can use it in all different ways developed in pstats for example print_stats(10) will give you the 10 slowest calls or print_stats('function_name') will give you the stats for all the calls with name 'function_name' Parameters ---------- N : stats.print_stats argument """ self.stats.print_stats(N) dipy-1.11.0/dipy/core/pyalloc.pxd000066400000000000000000000005301476546756600166440ustar00rootroot00000000000000# -*- python -*- or rather like from python_string cimport PyString_FromStringAndSize, PyString_AS_STRING # Function to allocate, wrap memory via Python string creation cdef inline object pyalloc_v(Py_ssize_t n, void **pp): cdef object ob = PyString_FromStringAndSize(NULL, n) pp[0] = PyString_AS_STRING(ob) return ob dipy-1.11.0/dipy/core/rng.py000066400000000000000000000115701476546756600156320ustar00rootroot00000000000000"""Random number generation utilities.""" from math import floor from platform import architecture import numpy as np from dipy.testing.decorators import warning_for_keywords @warning_for_keywords() def WichmannHill2006(*, ix=100001, iy=200002, iz=300003, it=400004): """Wichmann Hill random number generator. See :footcite:p:`Wichmann2006` for advice on generating many sequences for use together, and on alternative algorithms and codes Parameters ---------- ix: int First seed value. Should not be null. (default 100001) iy: int Second seed value. Should not be null. (default 200002) iz: int Third seed value. Should not be null. (default 300003) it: int Fourth seed value. Should not be null. (default 400004) Returns ------- r_number : float pseudo-random number uniformly distributed between [0-1] References ---------- .. footbibliography:: Examples -------- >>> from dipy.core import rng >>> N = 1000 >>> a = [rng.WichmannHill2006() for i in range(N)] """ if not ix or not iy or not iz or not it: raise ValueError("A seed value can not be null.") if architecture()[0] == "64": # If 64 bits are available then the following lines of code will be # faster. 
ix = (11600 * ix) % 2147483579
        iy = (47003 * iy) % 2147483543
        iz = (23000 * iz) % 2147483423
        it = (33000 * it) % 2147483123
    else:
        # If only 32 bits are available, use Schrage-style integer
        # arithmetic (note the integer division).
        ix = 11600 * (ix % 185127) - 10379 * (ix // 185127)
        iy = 47003 * (iy % 45688) - 10479 * (iy // 45688)
        iz = 23000 * (iz % 93368) - 19423 * (iz // 93368)
        it = 33000 * (it % 65075) - 8123 * (it // 65075)
    if ix < 0:
        ix = ix + 2147483579
    if iy < 0:
        iy = iy + 2147483543
    if iz < 0:
        iz = iz + 2147483423
    if it < 0:
        it = it + 2147483123
    W = ix / 2147483579.0 + iy / 2147483543.0 + iz / 2147483423.0 + it / 2147483123.0
    return W - floor(W)


@warning_for_keywords()
def WichmannHill1982(*, ix=100001, iy=200002, iz=300003):
    """Algorithm AS 183 Appl. Statist. (1982) vol.31, no.2.

    Returns a pseudo-random number rectangularly distributed
    between 0 and 1. The cycle length is 6.95E+12 (see page 123
    of Applied Statistics (1984) vol.33), not as claimed in the
    original article.

    ix, iy and iz should be set to integer values between 1 and
    30000 before the first entry.

    Integer arithmetic up to 5212632 is required.

    Parameters
    ----------
    ix: int
        First seed value. Should not be null. (default 100001)
    iy: int
        Second seed value. Should not be null. (default 200002)
    iz: int
        Third seed value. Should not be null. (default 300003)

    Returns
    -------
    r_number : float
        pseudo-random number uniformly distributed between [0-1]

    Examples
    --------
    >>> from dipy.core import rng
    >>> N = 1000
    >>> a = [rng.WichmannHill1982() for i in range(N)]

    """
    if not ix or not iy or not iz:
        raise ValueError("A seed value can not be null.")
    ix = (171 * ix) % 30269
    iy = (172 * iy) % 30307
    iz = (170 * iz) % 30323
    """
    If integer arithmetic only up to 30323 (!) is available, the preceding
    3 statements may be replaced by:
    ix = 171 * (ix % 177) - 2 * (ix // 177)
    iy = 172 * (iy % 176) - 35 * (iy // 176)
    iz = 170 * (iz % 178) - 63 * (iz // 178)
    if ix < 0:
        ix = ix + 30269
    if iy < 0:
        iy = iy + 30307
    if iz < 0:
        iz = iz + 30323
    """
    return np.remainder(
        float(ix) / 30269.0 + float(iy) / 30307.0 + float(iz) / 30323.0, 1.0
    )


@warning_for_keywords()
def LEcuyer(*, s1=100001, s2=200002):
    """Return a LEcuyer random number generator.

    Generate uniformly distributed random numbers using the 32-bit
    generator from figure 3 of:
    L'Ecuyer, P. Efficient and portable combined random number
    generators, C.A.C.M., vol. 31, 742-749 & 774-?, June 1988.

    The cycle length is claimed to be 2.30584E+18

    Parameters
    ----------
    s1: int
        First seed value. Should not be null. (default 100001)
    s2: int
        Second seed value. Should not be null.
(default 200002)

    Returns
    -------
    r_number : float
        pseudo-random number uniformly distributed between [0-1]

    Examples
    --------
    >>> from dipy.core import rng
    >>> N = 1000
    >>> a = [rng.LEcuyer() for i in range(N)]

    """
    if not s1 or not s2:
        raise ValueError("A seed value can not be null.")
    # Schrage-style decomposition requires integer division.
    k = s1 // 53668
    s1 = 40014 * (s1 - k * 53668) - k * 12211
    if s1 < 0:
        s1 = s1 + 2147483563
    k = s2 // 52774
    s2 = 40692 * (s2 - k * 52774) - k * 3791
    if s2 < 0:
        s2 = s2 + 2147483399
    z = s1 - s2
    if z < 0:
        z = z + 2147483562
    return z / 2147483563.0
dipy-1.11.0/dipy/core/sphere.py000066400000000000000000000611751476546756600163360ustar00rootroot00000000000000import warnings

import numpy as np
from scipy import optimize
from scipy.spatial import Delaunay

from dipy.core.geometry import cart2sphere, sphere2cart, vector_norm
from dipy.core.onetime import auto_attr
from dipy.reconst.recspeed import remove_similar_vertices
from dipy.testing.decorators import warning_for_keywords

__all__ = ["Sphere", "HemiSphere", "faces_from_sphere_vertices", "unique_edges"]


def _all_specified(*args):
    for a in args:
        if a is None:
            return False
    return True


def _some_specified(*args):
    for a in args:
        if a is not None:
            return True
    return False


def faces_from_sphere_vertices(vertices):
    """
    Triangulate a set of vertices on the sphere.

    Parameters
    ----------
    vertices : (M, 3) ndarray
        XYZ coordinates of vertices on the sphere.

    Returns
    -------
    faces : (N, 3) ndarray
        Indices into vertices; forms triangular faces.

    """
    faces = Delaunay(vertices).convex_hull
    if len(vertices) < 2**16:
        return np.asarray(faces, np.uint16)
    else:
        return faces


@warning_for_keywords()
def unique_edges(faces, *, return_mapping=False):
    """Extract all unique edges from given triangular faces.

    Parameters
    ----------
    faces : (N, 3) ndarray
        Vertex indices forming triangular faces.
    return_mapping : bool
        If true, a mapping to the edges of each face is returned.

    Returns
    -------
    edges : (N, 2) ndarray
        Unique edges.
    mapping : (N, 3)
        For each face, [x, y, z], a mapping to its edges [a, b, c].

        .. code-block:: text

                y
                /\
               /  \
             a/    \b
             /      \
            /        \
           /__________\
          x     c      z

    """
    faces = np.asarray(faces)
    edges = np.concatenate([faces[:, 0:2], faces[:, 1:3], faces[:, ::2]])
    if return_mapping:
        ue, inverse = unique_sets(edges, return_inverse=True)
        return ue, inverse.reshape((3, -1)).T
    else:
        return unique_sets(edges)


@warning_for_keywords()
def unique_sets(sets, *, return_inverse=False):
    """Remove duplicate sets.

    Parameters
    ----------
    sets : array (N, k)
        N sets of size k.
    return_inverse : bool
        If True, also returns the indices of unique_sets that can be used
        to reconstruct `sets` (the original ordering of each set may not
        be preserved).

    Returns
    -------
    unique_sets : array
        Unique sets.
    inverse : array (N,)
        The indices to reconstruct `sets` from `unique_sets`.

    """
    sets = np.sort(sets, 1)
    order = np.lexsort(sets.T)
    sets = sets[order]
    flag = np.ones(len(sets), "bool")
    flag[1:] = (sets[1:] != sets[:-1]).any(-1)
    uniqsets = sets[flag]
    if return_inverse:
        inverse = np.empty_like(order)
        inverse[order] = np.arange(len(order))
        index = flag.cumsum() - 1
        return uniqsets, index[inverse]
    else:
        return uniqsets


class Sphere:
    """Points on the unit sphere.

    The sphere can be constructed using one of three conventions::

      Sphere(x, y, z)
      Sphere(xyz=xyz)
      Sphere(theta=theta, phi=phi)

    Parameters
    ----------
    x, y, z : 1-D array_like
        Vertices as x-y-z coordinates.
    theta, phi : 1-D array_like
        Vertices as spherical coordinates. Theta and phi are the inclination
        and azimuth angles respectively.
    xyz : (N, 3) ndarray
        Vertices as x-y-z coordinates.
faces : (N, 3) ndarray Indices into vertices that form triangular faces. If unspecified, the faces are computed using a Delaunay triangulation. edges : (N, 2) ndarray Edges between vertices. If unspecified, the edges are derived from the faces. """ @warning_for_keywords() def __init__( self, *, x=None, y=None, z=None, theta=None, phi=None, xyz=None, faces=None, edges=None, ): all_specified = ( _all_specified(x, y, z) + _all_specified(xyz) + _all_specified(theta, phi) ) one_complete = ( _some_specified(x, y, z) + _some_specified(xyz) + _some_specified(theta, phi) ) if not (all_specified == 1 and one_complete == 1): raise ValueError( "Sphere must be constructed using either " "(x,y,z), (theta, phi) or xyz." ) if edges is not None and faces is None: raise ValueError( "Either specify both faces and edges, only faces, or neither." ) if edges is not None: self.edges = np.asarray(edges) if faces is not None: self.faces = np.asarray(faces) if theta is not None: self.theta = np.asarray(theta) if self.theta.ndim < 1: self.theta = np.reshape(self.theta, (1,)) self.phi = np.asarray(phi) if self.phi.ndim < 1: self.phi = np.reshape(self.phi, (1,)) return if xyz is not None: xyz = np.asarray(xyz) x, y, z = xyz.T x, y, z = (np.asarray(t) for t in (x, y, z)) r, self.theta, self.phi = cart2sphere(x, y, z) if not np.allclose(r, 1): warnings.warn("Vertices are not on the unit sphere.", stacklevel=2) @auto_attr def vertices(self): return np.column_stack(sphere2cart(1, self.theta, self.phi)) @property def x(self): return self.vertices[:, 0] @property def y(self): return self.vertices[:, 1] @property def z(self): return self.vertices[:, 2] @auto_attr def faces(self): faces = faces_from_sphere_vertices(self.vertices) return faces @auto_attr def edges(self): return unique_edges(self.faces) @warning_for_keywords() def subdivide(self, *, n=1): r"""Subdivides each face of the sphere into four new faces. New vertices are created at a, b, and c. Then each face [x, y, z] is divided into faces [x, a, c], [y, a, b], [z, b, c], and [a, b, c]. .. code-block:: text y /\ / \ a/____\b /\ /\ / \ / \ /____\/____\ x c z Parameters ---------- n : int, optional The number of subdivisions to perform. Returns ------- new_sphere : Sphere The subdivided sphere. """ vertices = self.vertices faces = self.faces for _ in range(n): edges, mapping = unique_edges(faces, return_mapping=True) new_vertices = vertices[edges].sum(1) new_vertices /= vector_norm(new_vertices, keepdims=True) mapping += len(vertices) vertices = np.vstack([vertices, new_vertices]) x, y, z = faces.T a, b, c = mapping.T face1 = np.column_stack([x, a, c]) face2 = np.column_stack([y, b, a]) face3 = np.column_stack([z, c, b]) face4 = mapping faces = np.concatenate([face1, face2, face3, face4]) if len(vertices) < 2**16: faces = np.asarray(faces, dtype="uint16") return Sphere(xyz=vertices, faces=faces) def find_closest(self, xyz): """ Find the index of the vertex in the Sphere closest to the input vector Parameters ---------- xyz : array-like, 3 elements A unit vector Returns ------- idx : int The index into the Sphere.vertices array that gives the closest vertex (in angle). """ cos_sim = np.dot(self.vertices, xyz) return np.argmax(cos_sim) class HemiSphere(Sphere): """Points on the unit sphere. A HemiSphere is similar to a Sphere but it takes antipodal symmetry into account. Antipodal symmetry means that point v on a HemiSphere is the same as the point -v. Duplicate points are discarded when constructing a HemiSphere (including antipodal duplicates). 
`edges` and `faces` are remapped to the remaining points as closely as possible. The HemiSphere can be constructed using one of three conventions:: HemiSphere(x, y, z) HemiSphere(xyz=xyz) HemiSphere(theta=theta, phi=phi) Parameters ---------- x, y, z : 1-D array_like Vertices as x-y-z coordinates. theta, phi : 1-D array_like Vertices as spherical coordinates. Theta and phi are the inclination and azimuth angles respectively. xyz : (N, 3) ndarray Vertices as x-y-z coordinates. faces : (N, 3) ndarray Indices into vertices that form triangular faces. If unspecified, the faces are computed using a Delaunay triangulation. edges : (N, 2) ndarray Edges between vertices. If unspecified, the edges are derived from the faces. tol : float Angle in degrees. Vertices that are less than tol degrees apart are treated as duplicates. See Also -------- Sphere """ @warning_for_keywords() def __init__( self, *, x=None, y=None, z=None, theta=None, phi=None, xyz=None, faces=None, edges=None, tol=1e-5, ): """Create a HemiSphere from points""" sphere = Sphere(x=x, y=y, z=z, theta=theta, phi=phi, xyz=xyz) uniq_vertices, mapping = remove_similar_vertices( sphere.vertices, tol, return_mapping=True ) uniq_vertices *= 1 - 2 * (uniq_vertices[:, -1:] < 0) if faces is not None: faces = np.asarray(faces) faces = unique_sets(mapping[faces]) if edges is not None: edges = np.asarray(edges) edges = unique_sets(mapping[edges]) Sphere.__init__(self, xyz=uniq_vertices, edges=edges, faces=faces) @classmethod @warning_for_keywords() def from_sphere(cls, sphere, *, tol=1e-5): """Create instance from a Sphere""" return cls( theta=sphere.theta, phi=sphere.phi, edges=sphere.edges, faces=sphere.faces, tol=tol, ) def mirror(self): """Create a full Sphere from a HemiSphere""" n = len(self.vertices) vertices = np.vstack([self.vertices, -self.vertices]) edges = np.vstack([self.edges, n + self.edges]) _switch_vertex(edges[:, 0], edges[:, 1], vertices) faces = np.vstack([self.faces, n + self.faces]) _switch_vertex(faces[:, 0], faces[:, 1], vertices) _switch_vertex(faces[:, 0], faces[:, 2], vertices) return Sphere(xyz=vertices, edges=edges, faces=faces) @auto_attr def faces(self): vertices = np.vstack([self.vertices, -self.vertices]) faces = faces_from_sphere_vertices(vertices) return unique_sets(faces % len(self.vertices)) @warning_for_keywords() def subdivide(self, *, n=1): """Create a more subdivided HemiSphere See Sphere.subdivide for full documentation. """ sphere = self.mirror() sphere = sphere.subdivide(n=n) return HemiSphere.from_sphere(sphere) def find_closest(self, xyz): """ Find the index of the vertex in the Sphere closest to the input vector, taking into account antipodal symmetry Parameters ---------- xyz : array-like, 3 elements A unit vector Returns ------- idx : int The index into the Sphere.vertices array that gives the closest vertex (in angle). """ cos_sim = abs(np.dot(self.vertices, xyz)) return np.argmax(cos_sim) def _switch_vertex(index1, index2, vertices): """When we mirror an edge (a, b). We can either create (a, b) and (a', b') OR (a, b') and (a', b). The angles of edges (a, b) and (a, b') are supplementary, so we choose the two new edges such that their angles are less than 90 degrees. """ n = len(vertices) A = vertices[index1] B = vertices[index2] is_far = (A * B).sum(-1) < 0 index2[is_far] = index2[is_far] + (n / 2.0) index2 %= n def _get_forces(charges): r"""Given a set of charges on the surface of the sphere gets total force those charges exert on each other. 
The force exerted by one charge on another is given by Coulomb's law. For this simulation we use charges of equal magnitude so this force can be written as $\vec{r}/r^3$, up to a constant factor, where $\vec{r}$ is the separation of the two charges and $r$ is the magnitude of $\vec{r}$. Forces are additive so the total force on each of the charges is the sum of the force exerted by each other charge in the system. Charges do not exert a force on themselves. The electric potential can similarly be written as $1/r$ and is also additive. """ all_charges = np.concatenate((charges, -charges)) all_charges = all_charges[:, None] r = charges - all_charges r_mag = np.sqrt((r * r).sum(-1))[:, :, None] with warnings.catch_warnings(): warnings.simplefilter("ignore") force = r / r_mag**3 potential = 1.0 / r_mag d = np.arange(len(charges)) force[d, d] = 0 force = force.sum(0) force_r_comp = (charges * force).sum(-1)[:, None] f_theta = force - force_r_comp * charges potential[d, d] = 0 potential = 2 * potential.sum() return f_theta, potential @warning_for_keywords() def disperse_charges(hemi, iters, *, const=0.2): """Models electrostatic repulsion on the unit sphere Places charges on a sphere and simulates the repulsive forces felt by each one. Allows the charges to move for some number of iterations and returns their final location as well as the total potential of the system at each step. Parameters ---------- hemi : HemiSphere Points on a unit sphere. iters : int Number of iterations to run. const : float Using a smaller const could provide a more accurate result, but will need more iterations to converge. Returns ------- hemi : HemiSphere Distributed points on a unit sphere. potential : ndarray The electrostatic potential at each iteration. This can be useful to check if the repulsion converged to a minimum. Notes ----- This function is meant to be used with diffusion imaging so antipodal symmetry is assumed. Therefore, each charge must not only be unique, but if there is a charge at +x, there cannot be a charge at -x. These are treated as the same location and because the distance between the two charges will be zero, the result will be unstable. """ if not isinstance(hemi, HemiSphere): raise ValueError("expecting HemiSphere") charges = hemi.vertices forces, v = _get_forces(charges) force_mag = np.sqrt((forces * forces).sum()) const = const / force_mag.max() potential = np.empty(iters) v_min = v for ii in range(iters): new_charges = charges + forces * const norms = np.sqrt((new_charges**2).sum(-1)) new_charges /= norms[:, None] new_forces, v = _get_forces(new_charges) if v <= v_min: charges = new_charges forces = new_forces potential[ii] = v_min = v else: const /= 2.0 potential[ii] = v_min return HemiSphere(xyz=charges), potential @warning_for_keywords() def fibonacci_sphere(n_points, *, hemisphere=False, randomize=True, rng=None): """ Generate points on the surface of a sphere using Fibonacci Spiral. Parameters ---------- n_points : int The number of points to generate on the sphere surface. hemisphere : bool, optional If True, generate points only on the upper hemisphere. Default is False. randomize : bool, optional If True, randomize the starting point on the sphere. rng : np.random.Generator, optional If None creates random generator in function. Returns ------- points : ndarray An array of 3D points representing coordinates on the sphere surface. 
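    Examples
    --------
    A minimal sketch of typical use, added for illustration; it relies only
    on the public signature documented above:

    >>> import numpy as np
    >>> from dipy.core.sphere import fibonacci_sphere
    >>> pts = fibonacci_sphere(64, randomize=False)
    >>> pts.shape
    (64, 3)
    >>> np.allclose(np.linalg.norm(pts, axis=1), 1)  # all points are unit vectors
    True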
""" if not isinstance(n_points, int) or n_points <= 4: raise ValueError("Number of points must be a positive integer greater than 4.") random_shift = 0 if randomize: random_generator = rng or np.random.default_rng() random_shift = random_generator.integers(0, n_points) indices = np.arange(n_points) increment = np.pi * (3.0 - np.sqrt(5.0)) if not hemisphere: offset = 2.0 / n_points y = ((indices * offset) - 1) + (offset / 2) else: offset = 1.0 / n_points y = (indices * offset) + (offset / 2) r = np.sqrt(1 - y**2) phi = ((indices + random_shift) % n_points) * increment x = np.cos(phi) * r z = np.sin(phi) * r points = np.column_stack((x, y, z)) if n_points < 30 and hemisphere: points_updated = disperse_charges_alt(points, 1000) return points_updated return points def _equality_constraints(vects): """Spherical equality constraint. Returns 0 if vects lies on the unit sphere. Note that a flattened array is returned because `scipy.optimize` expects a 1-D array. Parameters ---------- vects : array-like shape (N * 3) Points on the sphere. Returns ------- array-like (N,) Difference between squared vector norms and 1. """ N = vects.shape[0] // 3 vects = vects.reshape((N, 3)) return (vects**2).sum(1) - 1.0 def _grad_equality_constraints(vects): r"""Return normals to the surface constraint (which corresponds to the gradient of the implicit function). Parameters ---------- vects : array-like (N * 3) Points on the sphere. Returns ------- array-like (N, N * 3) grad[i, j] contains :math:`\partial f_i / \partial x_j`. """ N = vects.shape[0] // 3 vects = vects.reshape((N, 3)) vects = (vects.T / np.sqrt((vects**2).sum(1))).T grad = np.zeros((N, N * 3)) for i in range(3): grad[:, i * N : (i + 1) * N] = np.diag(vects[:, i]) return grad @warning_for_keywords() def _get_forces_alt(vects, *, alpha=2.0, **kwargs): r"""Electrostatic-repulsion objective function. The alpha parameter controls the power repulsion (energy varies as $1 / r^\alpha$) :footcite:p:`Papadakis2000`. For $\alpha = 1.0$, this corresponds to electrostatic interaction energy. The weights ensure equal importance of each shell to the objective function :footcite:p:`Cook2007`, :footcite:p:`Caruyer2013`. Parameters ---------- vects : array-like (N * 3,) Points on the sphere. alpha : float Controls the power of the repulsion. Default is 1.0. weights : array-like (N, N) Weight values to the electrostatic energy. Returns ------- energy : float Sum of all interactions between any two vectors. References ---------- .. footbibliography:: """ nb_points = vects.shape[0] // 3 weights = kwargs.get("weights", np.ones((nb_points, nb_points))) charges = vects.reshape((nb_points, 3)) all_charges = np.concatenate((charges, -charges)) all_charges = all_charges[:, None] r = charges - all_charges r_mag = np.sqrt((r * r).sum(-1))[:, :, None] with warnings.catch_warnings(): warnings.simplefilter("ignore") potential = 1 / r_mag**alpha d = np.arange(len(charges)) potential[d, d] = 0 potential = potential[:nb_points] + potential[nb_points:] potential = weights * potential.sum(-1) potential = potential.sum() return potential @warning_for_keywords() def _get_grad_forces_alt(vects, *, alpha=2.0, **kwargs): """1st-order derivative of electrostatic-like repulsion energy. The weights ensure equal importance of each shell to the objective function :footcite:p:`Cook2007`, :footcite:p:`Caruyer2013`. See :footcite:p:`Papadakis2000` for more details about the definition. Parameters ---------- vects : array-like (N * 3,) Points on the sphere. 
alpha : float
        Controls the power of the repulsion. Default is 2.0.
    weights : array-like (N, N)
        Weight values to the electrostatic energy.

    Returns
    -------
    grad : array-like (N * 3,)
        Gradient of the objective function.

    References
    ----------
    .. footbibliography::

    """
    nb_points = vects.shape[0] // 3
    weights = kwargs.get("weights", np.ones((nb_points, nb_points)))
    charges = vects.reshape((nb_points, 3))
    all_charges = np.concatenate((charges, -charges))
    all_charges = all_charges[:, None]
    r = charges - all_charges
    r_mag = np.sqrt((r * r).sum(-1))[:, :, None]
    with warnings.catch_warnings():
        warnings.simplefilter("ignore")
        forces = -2 * alpha * r / r_mag ** (alpha + 2.0)
    d = np.arange(len(charges))
    forces[d, d] = 0
    forces = forces[:nb_points] + forces[nb_points:]
    forces = forces * weights.reshape((nb_points, nb_points, 1))
    forces = forces.sum(0)
    return forces.reshape((nb_points * 3))


@warning_for_keywords()
def disperse_charges_alt(init_pointset, iters, *, tol=1.0e-3):
    """Reimplementation of disperse_charges making use of
    `scipy.optimize.fmin_slsqp`.

    Parameters
    ----------
    init_pointset : (N, 3) ndarray
        Points on a unit sphere.
    iters : int
        Number of iterations to run.
    tol : float
        Tolerance for the optimization.

    Returns
    -------
    array-like (N, 3)
        Distributed points on a unit sphere.

    """
    K = init_pointset.shape[0]
    vects = optimize.fmin_slsqp(
        _get_forces_alt,
        init_pointset.reshape(K * 3),
        f_eqcons=_equality_constraints,
        fprime=_get_grad_forces_alt,
        iter=iters,
        acc=tol,
        args=(),
        iprint=0,
    )
    return vects.reshape((K, 3))


@warning_for_keywords()
def euler_characteristic_check(sphere, *, chi=2):
    r"""Checks the Euler characteristic of a sphere.

    If $f$ = number of faces, $e$ = number of edges and $v$ = number of
    vertices, the Euler formula says $f-e+v = 2$ for a mesh on a sphere. More
    generally, it checks whether $f - e + v = \chi$, where $\chi$ is the
    Euler characteristic of the mesh.

    - Open chain (track) has $\chi=1$
    - Closed chain (loop) has $\chi=0$
    - Disk has $\chi=1$
    - Sphere has $\chi=2$
    - HemiSphere has $\chi=1$

    Parameters
    ----------
    sphere : Sphere
        A Sphere instance with vertices, edges and faces attributes.
chi : int, optional The Euler characteristic of the mesh to be checked Returns ------- check : bool True if the mesh has Euler characteristic $\chi$ Examples -------- >>> euler_characteristic_check(unit_octahedron) True >>> hemisphere = HemiSphere.from_sphere(unit_icosahedron) >>> euler_characteristic_check(hemisphere, chi=1) True """ v = sphere.vertices.shape[0] e = sphere.edges.shape[0] f = sphere.faces.shape[0] return (f - e + v) == chi octahedron_vertices = np.array( [ [1.0, 0.0, 0.0], [-1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, -1.0, 0.0], [0.0, 0.0, 1.0], [0.0, 0.0, -1.0], ] ) octahedron_faces = np.array( [ [0, 4, 2], [1, 5, 3], [4, 2, 1], [5, 3, 0], [1, 4, 3], [0, 5, 2], [0, 4, 3], [1, 5, 2], ], dtype="uint16", ) t = (1 + np.sqrt(5)) / 2 icosahedron_vertices = np.array( [ [t, 1, 0], # 0 [-t, 1, 0], # 1 [t, -1, 0], # 2 [-t, -1, 0], # 3 [1, 0, t], # 4 [1, 0, -t], # 5 [-1, 0, t], # 6 [-1, 0, -t], # 7 [0, t, 1], # 8 [0, -t, 1], # 9 [0, t, -1], # 10 [0, -t, -1], # 11 ] ) icosahedron_vertices /= vector_norm(icosahedron_vertices, keepdims=True) icosahedron_faces = np.array( [ [8, 4, 0], [2, 5, 0], [2, 5, 11], [9, 2, 11], [2, 4, 0], [9, 2, 4], [10, 8, 1], [10, 8, 0], [10, 5, 0], [6, 3, 1], [9, 6, 3], [6, 8, 1], [6, 8, 4], [9, 6, 4], [7, 10, 1], [7, 10, 5], [7, 3, 1], [7, 3, 11], [9, 3, 11], [7, 5, 11], ], dtype="uint16", ) unit_octahedron = Sphere(xyz=octahedron_vertices, faces=octahedron_faces) unit_icosahedron = Sphere(xyz=icosahedron_vertices, faces=icosahedron_faces) hemi_icosahedron = HemiSphere.from_sphere(unit_icosahedron) dipy-1.11.0/dipy/core/sphere_stats.py000066400000000000000000000203021476546756600175410ustar00rootroot00000000000000"""Statistics on spheres""" from itertools import permutations import numpy as np import dipy.core.geometry as geometry from dipy.testing.decorators import warning_for_keywords @warning_for_keywords() def random_uniform_on_sphere(*, n=1, coords="xyz"): r"""Random unit vectors from a uniform distribution on the sphere. Parameters ---------- n : int Number of random vectors coords : {'xyz', 'radians', 'degrees'} 'xyz' for cartesian form 'radians' for spherical form in rads 'degrees' for spherical form in degrees Notes ----- The uniform distribution on the sphere, parameterized by spherical coordinates $(\theta, \phi)$, should verify $\phi\sim U[0,2\pi]$, while $z=\cos(\theta)\sim U[-1,1]$. References ---------- .. [1] https://mathworld.wolfram.com/SpherePointPicking.html. Returns ------- X : array, shape (n,3) if coords='xyz' or shape (n,2) otherwise Uniformly distributed vectors on the unit sphere. Examples -------- >>> from dipy.core.sphere_stats import random_uniform_on_sphere >>> X = random_uniform_on_sphere(n=4, coords='radians') >>> X.shape == (4, 2) True >>> X = random_uniform_on_sphere(n=4, coords='xyz') >>> X.shape == (4, 3) True """ rng = np.random.default_rng() z = rng.uniform(-1, 1, n) theta = np.arccos(z) phi = rng.uniform(0, 2 * np.pi, n) if coords == "xyz": r = np.ones(n) return np.vstack(geometry.sphere2cart(r, theta, phi)).T angles = np.vstack((theta, phi)).T if coords == "radians": return angles if coords == "degrees": return np.rad2deg(angles) @warning_for_keywords() def eigenstats(points, *, alpha=0.05): r"""Principal direction and confidence ellipse Implements equations in section 6.3.1(ii) of Fisher, Lewis and Embleton, supplemented by equations in section 3.2.5. 
Parameters
    ----------
    points : array_like (N,3)
        array of points on the sphere of radius 1 in $\mathbb{R}^3$
    alpha : real or None
        1 minus the coverage for the confidence ellipsoid, e.g. 0.05 for 95%
        coverage.

    Returns
    -------
    centre : vector (3,)
        centre of ellipsoid
    b1 : vector (2,)
        lengths of semi-axes of ellipsoid

    """
    n = points.shape[0]  # the number of points
    rad2deg = 180 / np.pi  # scale angles from radians to degrees
    # there is a problem with averaging and axis data.
    """
    centroid = np.sum(points, axis=0)/n
    normed_centroid = geometry.normalized_vector(centroid)
    x,y,z = normed_centroid
    #coordinates of normed centroid
    polar_centroid = np.array(geometry.cart2sphere(x,y,z))*rad2deg
    """
    cross = np.dot(points.T, points) / n  # cross-covariance of points
    evals, evecs = np.linalg.eigh(cross)
    # eigen decomposition assuming that cross is symmetric
    order = np.argsort(evals)
    # eigenvalues don't necessarily come in any particular order
    tau = evals[order]  # the ordered eigenvalues
    h = evecs[:, order]  # the eigenvectors in corresponding order
    h[:, 2] = h[:, 2] * np.sign(h[2, 2])
    # map the first principal direction into upper hemisphere
    centre = np.array(geometry.cart2sphere(*h[:, 2]))[1:] * rad2deg
    # the spherical coordinates of the first principal direction
    e = np.zeros((2, 2))
    p0 = np.dot(points, h[:, 0])
    p1 = np.dot(points, h[:, 1])
    p2 = np.dot(points, h[:, 2])
    # the principal coordinates of the points
    e[0, 0] = np.sum((p0**2) * (p2**2)) / (n * (tau[0] - tau[2]) ** 2)
    e[1, 1] = np.sum((p1**2) * (p2**2)) / (n * (tau[1] - tau[2]) ** 2)
    e[0, 1] = np.sum((p0 * p1 * (p2**2)) / (n * (tau[0] - tau[2]) * (tau[1] - tau[2])))
    e[1, 0] = e[0, 1]
    # e is a 2x2 helper matrix
    d = -2 * np.log(alpha) / n
    s, w = np.linalg.eig(e)
    g = np.sqrt(d * s)
    b1 = np.arcsin(g) * rad2deg
    # b1 are the estimated 100*(1-alpha)% confidence ellipsoid semi-axes
    # in degrees
    return centre, b1

    # # b2 is equivalent to b1 above
    #
    # # try to invert e and calculate vector b the standard errors of
    # # centre - these are forced to a mixture of NaN and/or 0 in singular cases
    # b2 = np.array([np.nan, np.nan])
    # if np.abs(np.linalg.det(e)) < 10**-20:
    #     b2 = np.array([0, np.nan])
    # else:
    #     try:
    #         f = np.linalg.inv(e)
    #     except np.linalg.LinAlgError:
    #         b2 = np.array([np.nan, np.nan])
    #     else:
    #         t, y = np.linalg.eig(f)
    #         d = -2*np.log(alpha)/n
    #         g = np.sqrt(d/t)
    #         b2 = np.arcsin(g)*rad2deg


def compare_orientation_sets(S, T):
    r"""Computes the mean cosine distance of the best match between points
    of two sets of vectors S and T (angular similarity)

    Parameters
    ----------
    S : array, shape (m,d)
        First set of vectors.
    T : array, shape (n,d)
        Second set of vectors.

    Returns
    -------
    max_mean_cosine : float
        Maximum mean cosine distance.
Examples
    --------
    >>> from dipy.core.sphere_stats import compare_orientation_sets
    >>> S=np.array([[1,0,0],[0,1,0],[0,0,1]])
    >>> T=np.array([[1,0,0],[0,0,1]])
    >>> compare_orientation_sets(S,T)
    1.0
    >>> T=np.array([[0,1,0],[1,0,0],[0,0,1]])
    >>> S=np.array([[1,0,0],[0,0,1]])
    >>> compare_orientation_sets(S,T)
    1.0
    >>> from dipy.core.sphere_stats import compare_orientation_sets
    >>> S=np.array([[-1,0,0],[0,1,0],[0,0,1]])
    >>> T=np.array([[1,0,0],[0,0,-1]])
    >>> compare_orientation_sets(S,T)
    1.0

    """
    m = len(S)
    n = len(T)
    if m < n:
        A = S.copy()
        a = m
        S = T
        T = A
        n = a

    v = [
        np.sum([np.abs(np.dot(p[i], T[i])) for i in range(n)])
        for p in permutations(S, n)
    ]
    return np.max(v) / float(n)
    # return np.max(v)*float(n)/float(m)


def angular_similarity(S, T):
    r"""Computes the cosine distance of the best match between points of
    two sets of vectors S and T

    Parameters
    ----------
    S : array, shape (m,d)
    T : array, shape (n,d)

    Returns
    -------
    max_cosine_distance:float

    Examples
    --------
    >>> import numpy as np
    >>> from dipy.core.sphere_stats import angular_similarity
    >>> S=np.array([[1,0,0],[0,1,0],[0,0,1]])
    >>> T=np.array([[1,0,0],[0,0,1]])
    >>> angular_similarity(S,T)
    2.0
    >>> T=np.array([[0,1,0],[1,0,0],[0,0,1]])
    >>> S=np.array([[1,0,0],[0,0,1]])
    >>> angular_similarity(S,T)
    2.0
    >>> S=np.array([[-1,0,0],[0,1,0],[0,0,1]])
    >>> T=np.array([[1,0,0],[0,0,-1]])
    >>> angular_similarity(S,T)
    2.0
    >>> T=np.array([[0,1,0],[1,0,0],[0,0,1]])
    >>> S=np.array([[1,0,0],[0,1,0],[0,0,1]])
    >>> angular_similarity(S,T)
    3.0
    >>> S=np.array([[0,1,0],[1,0,0],[0,0,1]])
    >>> T=np.array([[1,0,0],[0,np.sqrt(2)/2.,np.sqrt(2)/2.],[0,0,1]])
    >>> angular_similarity(S,T)
    2.7071067811865475
    >>> S=np.array([[0,1,0],[1,0,0],[0,0,1]])
    >>> T=np.array([[1,0,0]])
    >>> angular_similarity(S,T)
    1.0
    >>> S=np.array([[0,1,0],[1,0,0]])
    >>> T=np.array([[0,0,1]])
    >>> angular_similarity(S,T)
    0.0
    >>> S=np.array([[0,1,0],[1,0,0]])
    >>> T=np.array([[0,np.sqrt(2)/2.,np.sqrt(2)/2.]])

    Now we use ``print`` to reduce the precision of the printed output
    (so the doctests don't detect unimportant differences)

    >>> print('%.12f' % angular_similarity(S,T))
    0.707106781187
    >>> S=np.array([[0,1,0]])
    >>> T=np.array([[0,np.sqrt(2)/2.,np.sqrt(2)/2.]])
    >>> print('%.12f' % angular_similarity(S,T))
    0.707106781187
    >>> S=np.array([[0,1,0],[0,0,1]])
    >>> T=np.array([[0,np.sqrt(2)/2.,np.sqrt(2)/2.]])
    >>> print('%.12f' % angular_similarity(S,T))
    0.707106781187

    """
    m = len(S)
    n = len(T)
    if m < n:
        A = S.copy()
        a = m
        S = T
        T = A
        n = a
    """
    v=[]
    for p in permutations(S,n):
        angles=[]
        for i in range(n):
            angles.append(np.abs(np.dot(p[i],T[i])))
        v.append(np.sum(angles))
    print(v)
    """
    v = [
        np.sum([np.abs(np.dot(p[i], T[i])) for i in range(n)])
        for p in permutations(S, n)
    ]
    return float(np.max(v))
    # *float(n)/float(m)
dipy-1.11.0/dipy/core/subdivide_octahedron.py000066400000000000000000000041161476546756600212260ustar00rootroot00000000000000"""Create a unit sphere by subdividing all triangles of an octahedron
recursively.

The unit sphere has a radius of 1, which also means that all points in this
sphere (assumed to have centre at [0, 0, 0]) have an absolute value (modulus)
of 1. Another feature of the unit sphere is that the unit normals of this
sphere are exactly the same as the vertices.

This recursive method will avoid the common problem of the polar singularity,
produced by 2d (lon-lat) parameterization methods.
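A minimal sketch of typical use, added for illustration; it relies only on
the helpers defined in this module:

>>> from dipy.core.subdivide_octahedron import create_unit_sphere
>>> sphere = create_unit_sphere(recursion_level=2)
>>> sphere.vertices.shape  # 4**2 + 2 vertices, as described below
(18, 3)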
""" from dipy.core.sphere import HemiSphere, unit_octahedron from dipy.testing.decorators import warning_for_keywords @warning_for_keywords() def create_unit_sphere(*, recursion_level=2): """Creates a unit sphere by subdividing a unit octahedron. Starts with a unit octahedron and subdivides the faces, projecting the resulting points onto the surface of a unit sphere. Parameters ---------- recursion_level : int Level of subdivision, recursion_level=1 will return an octahedron, anything bigger will return a more subdivided sphere. The sphere will have $4^recursion_level+2$ vertices. Returns ------- Sphere : The unit sphere. See Also -------- create_unit_hemisphere, Sphere """ if recursion_level > 7 or recursion_level < 1: raise ValueError("recursion_level must be between 1 and 7") return unit_octahedron.subdivide(n=recursion_level - 1) @warning_for_keywords() def create_unit_hemisphere(*, recursion_level=2): """Creates a unit sphere by subdividing a unit octahedron, returns half the sphere. Parameters ---------- recursion_level : int Level of subdivision, recursion_level=1 will return an octahedron, anything bigger will return a more subdivided sphere. The sphere will have $(4^recursion_level+2)/2$ vertices. Returns ------- HemiSphere : Half of a unit sphere. See Also -------- create_unit_sphere, Sphere, HemiSphere """ sphere = create_unit_sphere(recursion_level=recursion_level) return HemiSphere.from_sphere(sphere) dipy-1.11.0/dipy/core/tests/000077500000000000000000000000001476546756600156305ustar00rootroot00000000000000dipy-1.11.0/dipy/core/tests/__init__.py000066400000000000000000000000441476546756600177370ustar00rootroot00000000000000# init to make tests into a package dipy-1.11.0/dipy/core/tests/meson.build000066400000000000000000000013561476546756600177770ustar00rootroot00000000000000cython_sources = [ 'test_math', ] foreach ext: cython_sources if fs.exists(ext + '.pxd') extra_args += ['--depfile', meson.current_source_dir() +'/'+ ext + '.pxd', ] endif py3.extension_module(ext, cython_gen.process(ext + '.pyx'), c_args: cython_c_args, include_directories: [incdir_numpy, inc_local], dependencies: [omp], install: true, subdir: 'dipy/core/tests' ) endforeach python_sources = ['__init__.py', 'test_geometry.py', 'test_gradients.py', 'test_graph.py', 'test_interpolation.py', 'test_ndindex.py', 'test_optimize.py', 'test_rng.py', 'test_sphere.py', 'test_subdivide_octahedron.py', ] py3.install_sources( python_sources, pure: false, subdir: 'dipy/core/tests' ) dipy-1.11.0/dipy/core/tests/test_geometry.py000066400000000000000000000304701476546756600211000ustar00rootroot00000000000000"""Testing utility functions""" from itertools import permutations import random import numpy as np from numpy.testing import ( assert_almost_equal, assert_array_almost_equal, assert_array_equal, assert_equal, assert_raises, ) from dipy.core.geometry import ( cart2sphere, cart_distance, circumradius, compose_matrix, compose_transformations, decompose_matrix, dist_to_corner, is_hemispherical, lambert_equal_area_projection_polar, nearest_pos_semi_def, perpendicular_directions, sphere2cart, sphere_distance, vec2vec_rotmat, vector_cosine, vector_norm, ) from dipy.core.sphere_stats import random_uniform_on_sphere from dipy.testing.decorators import set_random_number_generator from dipy.testing.spherepoints import sphere_points def test_vector_norm(): A = np.array([[1, 0, 0], [3, 4, 0], [0, 5, 12], [1, 2, 3]]) expected = np.array([1, 5, 13, np.sqrt(14)]) assert_array_almost_equal(vector_norm(A), expected) expected.shape = 
(4, 1) assert_array_almost_equal(vector_norm(A, keepdims=True), expected) assert_array_almost_equal(vector_norm(A.T, axis=0, keepdims=True), expected.T) def test_sphere_cart(): # test arrays of points rs, thetas, phis = cart2sphere(*sphere_points.T) xyz = sphere2cart(rs, thetas, phis) assert_array_almost_equal(xyz, sphere_points.T) # test radius estimation big_sph_pts = sphere_points * 10.4 rs, thetas, phis = cart2sphere(*big_sph_pts.T) assert_array_almost_equal(rs, 10.4) xyz = sphere2cart(rs, thetas, phis) assert_array_almost_equal(xyz, big_sph_pts.T, decimal=6) # test that result shapes match x, y, z = big_sph_pts.T r, theta, phi = cart2sphere(x[:1], y[:1], z) assert_equal(r.shape, theta.shape) assert_equal(r.shape, phi.shape) x, y, z = sphere2cart(r[:1], theta[:1], phi) assert_equal(x.shape, y.shape) assert_equal(x.shape, z.shape) # test a scalar point pt = sphere_points[3] r, theta, phi = cart2sphere(*pt) xyz = sphere2cart(r, theta, phi) assert_array_almost_equal(xyz, pt) # Test full circle on x=1, y=1, z=1 x, y, z = sphere2cart(*cart2sphere(1.0, 1.0, 1.0)) assert_array_almost_equal((x, y, z), (1.0, 1.0, 1.0)) def test_invert_transform(): n = 100.0 theta = np.arange(n) / n * np.pi # Limited to 0,pi phi = (np.arange(n) / n - 0.5) * 2 * np.pi # Limited to 0,2pi x, y, z = sphere2cart(1, theta, phi) # Let's assume they're all unit vecs r, new_theta, new_phi = cart2sphere(x, y, z) # Transform back assert_array_almost_equal(theta, new_theta) assert_array_almost_equal(phi, new_phi) def test_nearest_pos_semi_def(): B = np.diag(np.array([1, 2, 3])) assert_array_almost_equal(B, nearest_pos_semi_def(B)) B = np.diag(np.array([0, 2, 3])) assert_array_almost_equal(B, nearest_pos_semi_def(B)) B = np.diag(np.array([0, 0, 3])) assert_array_almost_equal(B, nearest_pos_semi_def(B)) B = np.diag(np.array([-1, 2, 3])) Bpsd = np.array([[0.0, 0.0, 0.0], [0.0, 1.75, 0.0], [0.0, 0.0, 2.75]]) assert_array_almost_equal(Bpsd, nearest_pos_semi_def(B)) B = np.diag(np.array([-1, -2, 3])) Bpsd = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 0.0], [0.0, 0.0, 2.0]]) assert_array_almost_equal(Bpsd, nearest_pos_semi_def(B)) B = np.diag(np.array([-1.0e-11, 0, 1000])) Bpsd = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 0.0], [0.0, 0.0, 1000.0]]) assert_array_almost_equal(Bpsd, nearest_pos_semi_def(B)) B = np.diag(np.array([-1, -2, -3])) Bpsd = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]) assert_array_almost_equal(Bpsd, nearest_pos_semi_def(B)) def test_cart_distance(): a = [0, 1] b = [1, 0] assert_array_almost_equal(cart_distance(a, b), np.sqrt(2)) assert_array_almost_equal(cart_distance([1, 0], [-1, 0]), 2) pts1 = [2, 1, 0] pts2 = [0, 1, -2] assert_array_almost_equal(cart_distance(pts1, pts2), np.sqrt(8)) pts2 = [[0, 1, -2], [-2, 1, 0]] assert_array_almost_equal(cart_distance(pts1, pts2), [np.sqrt(8), 4]) def test_sphere_distance(): # make a circle, go around... 
radius = 3.2 n = 5000 n2 = n // 2 # pi at point n2 in array angles = np.linspace(0, np.pi * 2, n, endpoint=False) x = np.sin(angles) * radius y = np.cos(angles) * radius # dists around half circle, including pi half_x = x[: n2 + 1] half_y = y[: n2 + 1] half_dists = np.sqrt(np.diff(half_x) ** 2 + np.diff(half_y) ** 2) # approximate distances from 0 to pi (not including 0) csums = np.cumsum(half_dists) # concatenated with distances from pi to 0 again cdists = np.r_[0, csums, csums[-2::-1]] # check approximation close to calculated sph_d = sphere_distance([0, radius], np.c_[x, y]) assert_array_almost_equal(cdists, sph_d) # Now check with passed radius sph_d = sphere_distance([0, radius], np.c_[x, y], radius=radius) assert_array_almost_equal(cdists, sph_d) # Check points not on surface raises error when asked for assert_raises(ValueError, sphere_distance, [1, 0], [0, 2]) # Not when check is disabled sphere_distance([1, 0], [0, 2], radius=None, check_radius=False) # Error when radii don't match passed radius assert_raises(ValueError, sphere_distance, [1, 0], [0, 1], radius=2.0) @set_random_number_generator() def test_vector_cosine(rng): a = [0, 1] b = [1, 0] assert_array_almost_equal(vector_cosine(a, b), 0) assert_array_almost_equal(vector_cosine([1, 0], [-1, 0]), -1) assert_array_almost_equal(vector_cosine([1, 0], [1, 1]), 1 / np.sqrt(2)) assert_array_almost_equal(vector_cosine([2, 0], [-4, 0]), -1) pts1 = [2, 1, 0] pts2 = [-2, -1, 0] assert_array_almost_equal(vector_cosine(pts1, pts2), -1) pts2 = [[-2, -1, 0], [2, 1, 0]] assert_array_almost_equal(vector_cosine(pts1, pts2), [-1, 1]) # test relationship with correlation # not the same if non-zero vector mean a = rng.uniform(size=(100,)) b = rng.uniform(size=(100,)) cc = np.corrcoef(a, b)[0, 1] vcos = vector_cosine(a, b) assert not np.allclose(cc, vcos) # is the same if zero vector mean a_dm = a - np.mean(a) b_dm = b - np.mean(b) vcos = vector_cosine(a_dm, b_dm) assert_array_almost_equal(cc, vcos) def test_lambert_equal_area_projection_polar(): theta = np.repeat(np.pi / 3, 10) phi = np.linspace(0, 2 * np.pi, 10) # points sit on circle with co-latitude pi/3 (60 degrees) leap = lambert_equal_area_projection_polar(theta, phi) assert_array_almost_equal( np.sqrt(np.sum(leap**2, axis=1)), np.array([1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]), ) # points map onto the circle of radius 1 def test_lambert_equal_area_projection_cart(): xyz = np.array( [[1, 0, 0], [0, 1, 0], [0, 0, 1], [-1, 0, 0], [0, -1, 0], [0, 0, -1]] ) # points sit on +/-1 on all 3 axes r, theta, phi = cart2sphere(*xyz.T) leap = lambert_equal_area_projection_polar(theta, phi) r2 = np.sqrt(2) assert_array_almost_equal( np.sqrt(np.sum(leap**2, axis=1)), np.array([r2, r2, 0, r2, r2, 2]) ) # x and y =+/-1 map onto circle of radius sqrt(2) # z=1 maps to origin, and z=-1 maps to (an arbitrary point on) the # outer circle of radius 2 def test_circumradius(): assert_array_almost_equal( np.sqrt(0.5), circumradius(np.array([0, 2, 0]), np.array([2, 0, 0]), np.array([0, 0, 0])), ) def test_vec2vec_rotmat(): a = np.array([1, 0, 0]) for b in np.array([[0, 0, 1], [-1, 0, 0], [1, 0, 0]]): R = vec2vec_rotmat(a, b) assert_array_almost_equal(np.dot(R, a), b) # These vectors are not unit norm c = np.array([0.33950729, 0.92041729, 0.1938216]) d = np.array([0.40604787, 0.97518325, 0.2057731]) R1 = vec2vec_rotmat(c, d) c_norm = c / np.linalg.norm(c) d_norm = d / np.linalg.norm(d) R2 = vec2vec_rotmat(c_norm, d_norm) assert_array_almost_equal(R1, R2, decimal=1) assert_array_almost_equal(np.diag(R1), 
np.diag(R2), decimal=3) # we are catching collinear vectors # but in the case where they are not # exactly collinear and slightly different # the cosine can be larger than 1 or smaller than # -1 and therefore we have to catch that issue e = np.array([1.0001, 0, 0]) f = np.array([1.001, 0.01, 0]) R3 = vec2vec_rotmat(e, f) g = np.array([1.001, 0.0, 0]) R4 = vec2vec_rotmat(e, g) assert_array_almost_equal(R3, R4, decimal=1) assert_array_almost_equal(np.diag(R3), np.diag(R4), decimal=3) def test_compose_transformations(): A = np.eye(4) A[0, -1] = 10 B = np.eye(4) B[0, -1] = -20 C = np.eye(4) C[0, -1] = 10 CBA = compose_transformations(A, B, C) assert_array_equal(CBA, np.eye(4)) assert_raises(ValueError, compose_transformations, A) @set_random_number_generator() def test_compose_decompose_matrix(rng): for translate in permutations(40 * rng.random(3), 3): for angles in permutations(np.deg2rad(90 * rng.random(3)), 3): for shears in permutations(3 * rng.random(3), 3): for scale in permutations(3 * rng.random(3), 3): mat = compose_matrix( translate=translate, angles=angles, shear=shears, scale=scale ) sc, sh, ang, trans, _ = decompose_matrix(mat) assert_array_almost_equal(translate, trans) assert_array_almost_equal(angles, ang) assert_array_almost_equal(shears, sh) assert_array_almost_equal(scale, sc) def test_perpendicular_directions(): num = 35 vectors_v = np.zeros((4, 3)) for v in range(4): theta = random.uniform(0, np.pi) phi = random.uniform(0, 2 * np.pi) vectors_v[v] = sphere2cart(1.0, theta, phi) vectors_v[3] = [1, 0, 0] for vector_v in vectors_v: pd = perpendicular_directions(vector_v, num=num, half=False) # see if length of pd is equal to the number of intended samples assert_equal(num, len(pd)) # check if all directions are perpendicular to vector v for d in pd: cos_angle = np.dot(d, vector_v) assert_almost_equal(cos_angle, 0) # check if directions are sampled by multiples of 2*pi / num delta_a = 2.0 * np.pi / num for d in pd[1:]: angle = np.arccos(np.dot(pd[0], d)) rest = angle % delta_a if rest > delta_a * 0.99: # To correct cases of negative error rest = rest - delta_a assert_almost_equal(rest, 0) def _rotation_from_angles(r): R = np.array( [[1, 0, 0], [0, np.cos(r[0]), np.sin(r[0])], [0, -np.sin(r[0]), np.cos(r[0])]] ) R = np.dot( R, np.array( [ [np.cos(r[1]), 0, np.sin(r[1])], [0, 1, 0], [-np.sin(r[1]), 0, np.cos(r[1])], ] ), ) R = np.dot( R, np.array( [ [np.cos(r[2]), np.sin(r[2]), 0], [-np.sin(r[2]), np.cos(r[2]), 0], [0, 0, 1], ] ), ) R = np.linalg.inv(R) return R @set_random_number_generator() def test_dist_to_corner(rng): affine = np.eye(4) # Calculate the distance with the pythagorean theorem: pythagoras = np.sqrt(np.sum((np.diag(affine)[:-1] / 2) ** 2)) # Compare to calculation with this function: assert_array_almost_equal(dist_to_corner(affine), pythagoras) # Apply a rotation to the matrix, just to demonstrate the calculation is # robust to that: R = _rotation_from_angles(rng.random(3) * np.pi) new_aff = np.vstack([np.dot(R, affine[:3, :]), [0, 0, 0, 1]]) assert_array_almost_equal(dist_to_corner(new_aff), pythagoras) def test_is_hemispherical(): # Smoke test the ValueError for non-3D vectors assert_raises(ValueError, is_hemispherical, np.array([[1, 2, 3, 4], [5, 6, 7, 8]])) # Test on hemispherical input xyz = random_uniform_on_sphere(n=100, coords="xyz") xyz = xyz[xyz[:, 2] > 0] assert_equal(is_hemispherical(xyz)[0], True) # Test on spherical input xyz = random_uniform_on_sphere(n=100, coords="xyz") assert_equal(is_hemispherical(xyz)[0], False) # Smoke test the ValueError for 
non unit-vectors assert_raises(ValueError, is_hemispherical, xyz * 2.0) dipy-1.11.0/dipy/core/tests/test_gradients.py000066400000000000000000001154521476546756600212310ustar00rootroot00000000000000import warnings import numpy as np import numpy.testing as npt import pytest from dipy.core.geometry import vec2vec_rotmat, vector_norm from dipy.core.gradients import ( WATER_GYROMAGNETIC_RATIO, GradientTable, b0_threshold_empty_gradient_message, b0_threshold_update_slicing_message, btens_to_params, check_multi_b, extract_b0, extract_dwi_shell, generate_bvecs, get_bval_indices, gradient_table, gradient_table_from_bvals_bvecs, gradient_table_from_gradient_strength_bvecs, gradient_table_from_qvals_bvecs, mask_non_weighted_bvals, orientation_from_string, orientation_to_string, params_to_btens, reorient_bvecs, reorient_vectors, round_bvals, unique_bvals_magnitude, unique_bvals_tolerance, ) from dipy.data import get_fnames from dipy.io.gradients import read_bvals_bvecs from dipy.testing import clear_and_catch_warnings from dipy.testing.decorators import set_random_number_generator def test_mask_non_weighted_bvals(): bvals = np.array([0.0, 100.0, 200.0, 300.0, 400.0]) b0_threshold = 0.0 expected_val = np.asarray([True, False, False, False, False]) obtained_val = mask_non_weighted_bvals(bvals, b0_threshold) assert np.array_equal(obtained_val, expected_val) b0_threshold = 50 obtained_val = mask_non_weighted_bvals(bvals, b0_threshold) assert np.array_equal(obtained_val, expected_val) b0_threshold = 200.0 expected_val = np.asarray([True, True, True, False, False]) obtained_val = mask_non_weighted_bvals(bvals, b0_threshold) assert np.array_equal(obtained_val, expected_val) def test_btable_prepare(): sq2 = np.sqrt(2) / 2.0 bvals = 1500 * np.ones(7) bvals[0] = 0 bvecs = np.array( [ [0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [sq2, sq2, 0], [sq2, 0, sq2], [0, sq2, sq2], ] ) bt = gradient_table(bvals, bvecs=bvecs) npt.assert_array_equal(bt.bvecs, bvecs) # bt.info fimg, fbvals, fbvecs = get_fnames(name="small_64D") bvals, bvecs = read_bvals_bvecs(fbvals, fbvecs) bvecs = np.where(np.isnan(bvecs), 0, bvecs) bt = gradient_table(bvals, bvecs=bvecs) npt.assert_array_equal(bt.bvecs, bvecs) bt2 = gradient_table(bvals, bvecs=bvecs.T) npt.assert_array_equal(bt2.bvecs, bvecs) btab = np.concatenate((bvals[:, None], bvecs), axis=1) bt3 = gradient_table(btab) npt.assert_array_equal(bt3.bvecs, bvecs) npt.assert_array_equal(bt3.bvals, bvals) bt4 = gradient_table(btab.T) npt.assert_array_equal(bt4.bvecs, bvecs) npt.assert_array_equal(bt4.bvals, bvals) # Test for proper inputs (expects either bvals/bvecs or 4 by n): npt.assert_raises(ValueError, gradient_table, bvecs) def test_GradientTable(): gradients = np.array( [[0, 0, 0], [1, 0, 0], [0, 0, 1], [3, 4, 0], [5, 0, 12]], "float" ) expected_bvals = np.array([0, 1, 1, 5, 13]) expected_b0s_mask = expected_bvals == 0 expected_bvecs = gradients / (expected_bvals + expected_b0s_mask)[:, None] gt = GradientTable(gradients, b0_threshold=0) npt.assert_("B-values shape (5,)" in gt.__str__()) npt.assert_array_almost_equal(gt.bvals, expected_bvals) npt.assert_array_equal(gt.b0s_mask, expected_b0s_mask) npt.assert_array_almost_equal(gt.bvecs, expected_bvecs) npt.assert_array_almost_equal(gt.gradients, gradients) gt = GradientTable(gradients, b0_threshold=1) npt.assert_array_equal(gt.b0s_mask, [1, 1, 1, 0, 0]) npt.assert_array_equal(gt.bvals, expected_bvals) npt.assert_array_equal(gt.bvecs, expected_bvecs) # checks negative values in gtab npt.assert_raises(ValueError, GradientTable, -1) 
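# gradient arrays must be shaped (N, 3); other shapes should also raise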
npt.assert_raises(ValueError, GradientTable, np.ones((6, 2))) npt.assert_raises(ValueError, GradientTable, np.ones((6,))) with warnings.catch_warnings(record=True) as l_warns: _ = gradient_table(expected_bvals, bvecs=expected_bvecs, b0_threshold=200) # Select only UserWarning selected_w = [w for w in l_warns if issubclass(w.category, UserWarning)] assert len(selected_w) >= 1 msg = [str(m.message) for m in selected_w] npt.assert_equal("b0_threshold has a value > 199" in msg, True) def test_GradientTable_btensor_calculation(): # Generate a gradient table without specifying b-tensors gradients = np.array( [[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [3, 4, 0], [5, 0, 12]], "float" ) # Check that when btens attribute not specified it takes the value of None gt = GradientTable(gradients) npt.assert_equal(gt.btens, None) # Check that btens are correctly created if specified gt = GradientTable(gradients, btens="LTE") # Check that the number of b tensors is correct npt.assert_equal(gt.btens.shape[0], gt.bvals.shape[0]) for i, (bval, bten) in enumerate(zip(gt.bvals, gt.btens)): # Check that the b tensor magnitude is correct npt.assert_almost_equal(np.trace(bten), bval) # Check that the b tensor orientation is correct if bval != 0: evals, evecs = np.linalg.eig(bten) dot_prod = np.dot(np.real(evecs[:, np.argmax(evals)]), gt.bvecs[i]) npt.assert_almost_equal(np.abs(dot_prod), 1) # Check btens input option 1 for btens in ["LTE", "PTE", "STE", "CTE"]: gt = GradientTable(gradients, btens=btens) # Check that the number of b tensors is correct npt.assert_equal(gt.bvals.shape[0], gt.btens.shape[0]) for bval, bvec, bten in zip(gt.bvals, gt.bvecs, gt.btens): # Check that the b tensor magnitude is correct npt.assert_almost_equal(np.trace(bten), bval) # Check that the b tensor orientation is correct if btens == ("LTE" or "CTE"): if bval != 0: evals, evecs = np.linalg.eig(bten) dot_prod = np.dot(np.real(evecs[:, np.argmax(evals)]), bvec) npt.assert_almost_equal(np.abs(dot_prod), 1) elif btens == "PTE": if bval != 0: evals, evecs = np.linalg.eig(bten) dot_prod = np.dot(np.real(evecs[:, np.argmin(evals)]), bvec) npt.assert_almost_equal(np.abs(dot_prod), 1) # Check btens input option 2 btens = np.array(["LTE", "PTE", "STE", "CTE", "LTE", "PTE"]) gt = GradientTable(gradients, btens=btens) # Check that the number of b tensors is correct npt.assert_equal(gt.bvals.shape[0], gt.btens.shape[0]) for i, (bval, bvec, bten) in enumerate(zip(gt.bvals, gt.bvecs, gt.btens)): # Check that the b tensor magnitude is correct npt.assert_almost_equal(np.trace(bten), bval) # Check that the b tensor orientation is correct if btens[i] == ("LTE" or "CTE"): if bval != 0: evals, evecs = np.linalg.eig(bten) dot_prod = np.dot(np.real(evecs[:, np.argmax(evals)]), bvec) npt.assert_almost_equal(np.abs(dot_prod), 1) elif btens[i] == "PTE": if bval != 0: evals, evecs = np.linalg.eig(bten) dot_prod = np.dot(np.real(evecs[:, np.argmin(evals)]), bvec) npt.assert_almost_equal(np.abs(dot_prod), 1) # Check btens input option 3 btens = np.array([np.eye(3, 3) for i in range(6)]) gt = GradientTable(gradients, btens=btens) npt.assert_equal(btens, gt.btens) npt.assert_equal(gt.bvals.shape[0], gt.btens.shape[0]) # Check invalid input npt.assert_raises(ValueError, GradientTable, gradients=gradients, btens="PPP") npt.assert_raises( ValueError, GradientTable, gradients=gradients, btens=np.array([np.eye(3, 3) for i in range(10)]), ) npt.assert_raises( ValueError, GradientTable, gradients=gradients, btens=np.zeros((10, 10)) ) def 
test_gradient_table_from_qvals_bvecs(): qvals = 30.0 * np.ones(7) big_delta = 0.03 # pulse separation of 30ms small_delta = 0.01 # pulse duration of 10ms qvals[0] = 0 sq2 = np.sqrt(2) / 2 bvecs = np.array( [ [0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [sq2, sq2, 0], [sq2, 0, sq2], [0, sq2, sq2], ] ) gt = gradient_table_from_qvals_bvecs(qvals, bvecs, big_delta, small_delta) bvals_expected = (qvals * 2 * np.pi) ** 2 * (big_delta - small_delta / 3.0) gradient_strength_expected = ( qvals * 2 * np.pi / (small_delta * WATER_GYROMAGNETIC_RATIO) ) npt.assert_almost_equal(gt.gradient_strength, gradient_strength_expected) npt.assert_almost_equal(gt.bvals, bvals_expected) def test_gradient_table_from_gradient_strength_bvecs(): gradient_strength = 0.03e-3 * np.ones(7) # clinical strength at 30 mT/m big_delta = 0.03 # pulse separation of 30ms small_delta = 0.01 # pulse duration of 10ms gradient_strength[0] = 0 sq2 = np.sqrt(2) / 2 bvecs = np.array( [ [0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [sq2, sq2, 0], [sq2, 0, sq2], [0, sq2, sq2], ] ) gt = gradient_table_from_gradient_strength_bvecs( gradient_strength, bvecs, big_delta, small_delta ) qvals_expected = ( gradient_strength * WATER_GYROMAGNETIC_RATIO * small_delta / (2 * np.pi) ) bvals_expected = (qvals_expected * 2 * np.pi) ** 2 * (big_delta - small_delta / 3.0) npt.assert_almost_equal(gt.qvals, qvals_expected) npt.assert_almost_equal(gt.bvals, bvals_expected) def test_gradient_table_from_bvals_bvecs(): sq2 = np.sqrt(2) / 2 bvals = [0, 1, 2, 3, 4, 5, 6, 0] bvecs = np.array( [ [0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [sq2, sq2, 0], [sq2, 0, sq2], [0, sq2, sq2], [0, 0, 0], ] ) gt = gradient_table_from_bvals_bvecs(bvals, bvecs, b0_threshold=0) npt.assert_array_equal(gt.bvecs, bvecs) npt.assert_array_equal(gt.bvals, bvals) npt.assert_array_equal(gt.gradients, np.reshape(bvals, (-1, 1)) * bvecs) npt.assert_array_equal(gt.b0s_mask, [1, 0, 0, 0, 0, 0, 0, 1]) # Test nans are replaced by 0 new_bvecs = bvecs.copy() new_bvecs[[0, -1]] = np.nan gt = gradient_table_from_bvals_bvecs(bvals, new_bvecs, b0_threshold=0) npt.assert_array_equal(gt.bvecs, bvecs) # b-value > 0 for non-unit vector bad_bvals = [2, 1, 2, 3, 4, 5, 6, 0] npt.assert_raises( ValueError, gradient_table_from_bvals_bvecs, bad_bvals, bvecs, b0_threshold=0.0 ) # num_gard inconsistent bvals, bvecs bad_bvals = np.ones(7) npt.assert_raises( ValueError, gradient_table_from_bvals_bvecs, bad_bvals, bvecs, b0_threshold=0.0 ) # negative bvals bad_bvals = [-1, -1, -1, -5, -6, -10] npt.assert_raises( ValueError, gradient_table_from_bvals_bvecs, bad_bvals, bvecs, b0_threshold=0.0 ) # bvals not 1d bad_bvals = np.ones((1, 8)) npt.assert_raises( ValueError, gradient_table_from_bvals_bvecs, bad_bvals, bvecs, b0_threshold=0.0 ) # bvec not 2d bad_bvecs = np.ones((1, 8, 3)) npt.assert_raises( ValueError, gradient_table_from_bvals_bvecs, bvals, bad_bvecs, b0_threshold=0.0 ) # bvec not (N, 3) bad_bvecs = np.ones((8, 2)) npt.assert_raises( ValueError, gradient_table_from_bvals_bvecs, bvals, bad_bvecs, b0_threshold=0.0 ) # bvecs not unit vectors bad_bvecs = bvecs * 2 npt.assert_raises( ValueError, gradient_table_from_bvals_bvecs, bvals, bad_bvecs, b0_threshold=0.0 ) # Test **kargs get passed along gt = gradient_table_from_bvals_bvecs( bvals, bvecs, b0_threshold=0, big_delta=5, small_delta=2 ) npt.assert_equal(gt.big_delta, 5) npt.assert_equal(gt.small_delta, 2) def test_gradient_table_special_bvals_bvecs_case(): bvals = [0, 1] bvecs = np.array([[0, 0, 0], [0, 0, 1]]) gt = gradient_table_from_bvals_bvecs(bvals, bvecs, 
b0_threshold=0) npt.assert_array_equal(gt.bvecs, bvecs) npt.assert_array_equal(gt.bvals, bvals) def test_b0s(): sq2 = np.sqrt(2) / 2.0 bvals = 1500 * np.ones(8) bvals[0] = 0 bvals[7] = 0 bvecs = np.array( [ [0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [sq2, sq2, 0], [sq2, 0, sq2], [0, sq2, sq2], [0, 0, 0], ] ) bt = gradient_table(bvals, bvecs=bvecs) npt.assert_array_equal(np.where(bt.b0s_mask > 0)[0], np.array([0, 7])) npt.assert_array_equal(np.where(bt.b0s_mask == 0)[0], np.arange(1, 7)) def test_gtable_from_files(): fimg, fbvals, fbvecs = get_fnames(name="small_101D") gt = gradient_table(fbvals, bvecs=fbvecs) bvals, bvecs = read_bvals_bvecs(fbvals, fbvecs) npt.assert_array_equal(gt.bvals, bvals) npt.assert_array_equal(gt.bvecs, bvecs) def test_deltas(): sq2 = np.sqrt(2) / 2.0 bvals = 1500 * np.ones(7) bvals[0] = 0 bvecs = np.array( [ [0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [sq2, sq2, 0], [sq2, 0, sq2], [0, sq2, sq2], ] ) bt = gradient_table(bvals, bvecs=bvecs, big_delta=5, small_delta=2) npt.assert_equal(bt.big_delta, 5) npt.assert_equal(bt.small_delta, 2) def test_qvalues(): sq2 = np.sqrt(2) / 2.0 bvals = 1500 * np.ones(7) bvals[0] = 0 bvecs = np.array( [ [0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [sq2, sq2, 0], [sq2, 0, sq2], [0, sq2, sq2], ] ) qvals = np.sqrt(bvals / 6) / (2 * np.pi) bt = gradient_table(bvals, bvecs=bvecs, big_delta=8, small_delta=6) npt.assert_almost_equal(bt.qvals, qvals) @set_random_number_generator() def test_reorient_bvecs(rng): sq2 = np.sqrt(2) / 2 bvals = np.concatenate([[0], np.ones(6) * 1000]) bvecs = np.array( [ [0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [sq2, sq2, 0], [sq2, 0, sq2], [0, sq2, sq2], ] ) gt = gradient_table_from_bvals_bvecs(bvals, bvecs, b0_threshold=0) # The simple case: all affines are identity affs = np.zeros((4, 4, 6)) for i in range(4): affs[i, i, :] = 1 # We should get back the same b-vectors new_gt = reorient_bvecs(gt, affs) npt.assert_equal(gt.bvecs, new_gt.bvecs) # Now apply some rotations rotation_affines = [] rotated_bvecs = bvecs[:] for i in np.where(~gt.b0s_mask)[0]: rot_ang = rng.random() cos_rot = np.cos(rot_ang) sin_rot = np.sin(rot_ang) rotation_affines.append( np.array( [ [1, 0, 0, 0], [0, cos_rot, -sin_rot, 0], [0, sin_rot, cos_rot, 0], [0, 0, 0, 1], ] ) ) rotated_bvecs[i] = np.dot(rotation_affines[-1][:3, :3], bvecs[i]) rotation_affines = np.stack(rotation_affines, axis=-1) # Copy over the rotation affines full_affines = rotation_affines[:] # And add some scaling and translation to each one to make this harder for i in range(full_affines.shape[-1]): full_affines[..., i] = np.dot( full_affines[..., i], np.array([[2.5, 0, 0, -10], [0, 2.2, 0, 20], [0, 0, 1, 0], [0, 0, 0, 1]]), ) gt_rot = gradient_table_from_bvals_bvecs(bvals, rotated_bvecs, b0_threshold=0) new_gt = reorient_bvecs(gt_rot, full_affines) # At the end of all this, we should be able to recover the original # vectors npt.assert_almost_equal(gt.bvecs, new_gt.bvecs) # We should be able to pass just the 3-by-3 rotation components to the same # effect new_gt = reorient_bvecs(gt_rot, np.array(rotation_affines)[:3, :3]) npt.assert_almost_equal(gt.bvecs, new_gt.bvecs) # Verify that giving the wrong number of affines raises an error: full_affines = np.concatenate([full_affines, np.zeros((4, 4, 1))], axis=-1) npt.assert_raises(ValueError, reorient_bvecs, gt_rot, full_affines) # Shear components in the matrix need to be decomposed into rotation only, # and should not lead to scaling of the bvecs shear_affines = [] for _ in np.where(~gt.b0s_mask)[0]: 
shear_affines.append( np.array([[1, 0, 1, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]) ) shear_affines = np.stack(shear_affines, axis=-1) # atol is set to 1 here to do the scaling verification here, # so that the reorient_bvecs function does not throw an error itself new_gt = reorient_bvecs(gt, shear_affines[:3, :3], atol=1) bvecs_close_to_1 = abs(vector_norm(new_gt.bvecs[~gt.b0s_mask]) - 1) <= 0.001 npt.assert_(np.all(bvecs_close_to_1)) def test_nan_bvecs(): """ Test that the presence of nan's in b-vectors doesn't raise warnings. In previous versions, the presence of NaN in b-vectors was taken to indicate a 0 b-value, but also raised a warning when testing for the length of these vectors. This checks that it doesn't happen. """ fdata, fbvals, fbvecs = get_fnames() with warnings.catch_warnings(record=True) as l_warns: gradient_table(fbvals, bvecs=fbvecs) # Select only UserWarning selected_w = [w for w in l_warns if issubclass(w.category, UserWarning)] npt.assert_(len(selected_w) == 0) @set_random_number_generator() def test_generate_bvecs(rng): """Tests whether we have properly generated bvecs.""" # Test if the generated b-vectors are unit vectors bvecs = generate_bvecs(100, rng=rng) norm = [np.linalg.norm(v) for v in bvecs] npt.assert_almost_equal(norm, np.ones(100)) # Test if two generated vectors are almost orthogonal bvecs_2 = generate_bvecs(2, rng=rng) cos_theta = np.dot(bvecs_2[0], bvecs_2[1]) npt.assert_almost_equal(cos_theta, 0.0, decimal=6) def test_getitem_idx(): # Create a GradientTable object with some test b-values and b-vectors bvals = np.array([0.0, 100.0, 200.0, 300.0, 400.0]) # value should be in increasing order as b-value affects the diffusion # weighting of the image, and the amount of diffusion weighting increases # with increasing b-value. 
bvecs = np.array([[0, 0, 1], [1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 0, 0]]) # the b-vectors should be unit-length vectors gradients = bvals[:, None] * bvecs # Test a too large b0 threshold value b0_threshold = 100 gtab = GradientTable(gradients, b0_threshold=b0_threshold) idx = 1 with pytest.raises(ValueError) as excinfo: _ = gtab[idx] assert str(excinfo.value) == b0_threshold_empty_gradient_message( bvals, [idx], b0_threshold ) gtab = GradientTable(gradients) # Test with a single index gtab_slice1 = gtab[1] assert np.array_equal(gtab_slice1.bvals, np.array([100.0])) assert np.array_equal(gtab_slice1.bvecs, np.array([[1.0, 0.0, 0.0]])) # Test with a range of indices gtab = GradientTable(gradients) idx_start = 2 idx_end = 5 with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=b0_threshold_update_slicing_message(idx_start), category=UserWarning, ) gtab_slice2 = gtab[idx_start:idx_end] assert np.array_equal(gtab_slice2.bvals, np.array([200.0, 300.0, 400.0])) assert np.array_equal( gtab_slice2.bvecs, np.array([[0.0, 1.0, 0.0], [0.0, 0.0, 1.0], [1.0, 0.0, 0.0]]), ) def test_round_bvals(): bvals_gt = np.array([1000, 1000, 1000, 1000, 2000, 2000, 2000, 2000, 0]) b = round_bvals(bvals_gt) npt.assert_array_almost_equal(bvals_gt, b) bvals = np.array([995, 995, 995, 995, 2005, 2005, 2005, 2005, 0]) # We don't consider differences this small to be sufficient: b = round_bvals(bvals) npt.assert_array_almost_equal(bvals_gt, b) # Unless you specify that you are interested in this magnitude of changes: b = round_bvals(bvals, bmag=0) npt.assert_array_almost_equal(bvals, b) # Case that b-values are in ms/um2 bvals = np.array([0.995, 0.995, 0.995, 0.995, 2.005, 2.005, 2.005, 2.005, 0]) b = round_bvals(bvals) bvals_gt = np.array([1, 1, 1, 1, 2, 2, 2, 2, 0]) npt.assert_array_almost_equal(bvals_gt, b) def test_unique_bvals_tolerance(): bvals = np.array([1000, 1000, 1000, 1000, 2000, 2000, 2000, 2000, 0]) ubvals_gt = np.array([0, 1000, 2000]) b = unique_bvals_tolerance(bvals) npt.assert_array_almost_equal(ubvals_gt, b) # Testing the tolerance factor on many b-values that are within tol. bvals = np.array([950, 980, 995, 1000, 1000, 1010, 1999, 2000, 2001, 0]) ubvals_gt = np.array([0, 950, 1000, 2001]) b = unique_bvals_tolerance(bvals) npt.assert_array_almost_equal(ubvals_gt, b) # All unique b-values are kept if tolerance is set to zero: bvals = np.array([990, 990, 1000, 1000, 2000, 2000, 2050, 2050, 0]) ubvals_gt = np.array([0, 990, 1000, 2000, 2050]) b = unique_bvals_tolerance(bvals, tol=0) npt.assert_array_almost_equal(ubvals_gt, b) # Case that b-values are in ms/um2 bvals = np.array([0.995, 0.995, 0.995, 0.995, 2.005, 2.005, 2.005, 2.005, 0]) b = unique_bvals_tolerance(bvals, tol=0.5) ubvals_gt = np.array([0, 0.995, 2.005]) npt.assert_array_almost_equal(ubvals_gt, b) def test_get_bval_indices(): bvals = np.array([1000, 1000, 1000, 1000, 2000, 2000, 2000, 2000, 0]) indices_gt = np.array([0, 1, 2, 3]) indices = get_bval_indices(bvals, 1000) npt.assert_array_almost_equal(indices_gt, indices) # Testing the tolerance factor on many b-values that are within tol. 
    bvals = np.array([950, 980, 995, 1000, 1000, 1010, 1999, 2000, 2001, 0])
    indices_gt = np.array([0])
    indices = get_bval_indices(bvals, 950, tol=20)
    npt.assert_array_almost_equal(indices_gt, indices)
    indices_gt = np.array([1, 2, 3, 4, 5])
    indices = get_bval_indices(bvals, 1000, tol=20)
    npt.assert_array_almost_equal(indices_gt, indices)
    indices_gt = np.array([6, 7, 8])
    indices = get_bval_indices(bvals, 2001, tol=20)
    npt.assert_array_almost_equal(indices_gt, indices)

    # All unique b-values indices are returned if tolerance is set to zero:
    bvals = np.array([990, 990, 1000, 1000, 2000, 2000, 2050, 2050, 0])
    indices_gt = np.array([2, 3])
    indices = get_bval_indices(bvals, 1000, tol=0)
    npt.assert_array_almost_equal(indices_gt, indices)

    # Case that b-values are in ms/um2
    bvals = np.array([0.995, 0.995, 0.995, 0.995, 2.005, 2.005, 2.005, 2.005, 0])
    indices_gt = np.array([0, 1, 2, 3])
    indices = get_bval_indices(bvals, 0.995, tol=0.5)
    npt.assert_array_almost_equal(indices_gt, indices)


def test_unique_bvals_magnitude():
    bvals = np.array([1000, 1000, 1000, 1000, 2000, 2000, 2000, 2000, 0])
    ubvals_gt = np.array([0, 1000, 2000])
    b = unique_bvals_magnitude(bvals)
    npt.assert_array_almost_equal(ubvals_gt, b)

    bvals = np.array([995, 995, 995, 995, 2005, 2005, 2005, 2005, 0])
    # Case that b-values are rounded:
    b = unique_bvals_magnitude(bvals)
    npt.assert_array_almost_equal(ubvals_gt, b)

    # b-values are not rounded if you specify the magnitude of the values
    # precision:
    b = unique_bvals_magnitude(bvals, bmag=0)
    npt.assert_array_almost_equal(b, np.array([0, 995, 2005]))

    # Case that b-values are in ms/um2
    bvals = np.array([0.995, 0.995, 0.995, 0.995, 2.005, 2.005, 2.005, 2.005, 0])
    b = unique_bvals_magnitude(bvals)
    ubvals_gt = np.array([0, 1, 2])
    npt.assert_array_almost_equal(ubvals_gt, b)

    # Test case that optional parameter round_bvals is set to true
    bvals = np.array([995, 1000, 1004, 1000, 2001, 2000, 1988, 2017, 0])
    ubvals_gt = np.array([0, 1000, 2000])
    rbvals_gt = np.array([1000, 1000, 1000, 1000, 2000, 2000, 2000, 2000, 0])
    ub, rb = unique_bvals_magnitude(bvals, rbvals=True)
    npt.assert_array_almost_equal(ubvals_gt, ub)
    npt.assert_array_almost_equal(rbvals_gt, rb)


def test_check_multi_b():
    bvals = np.array([1000, 1000, 1000, 1000, 2000, 2000, 2000, 2000, 0])
    bvecs = generate_bvecs(bvals.shape[-1])
    gtab = gradient_table(bvals, bvecs=bvecs)
    npt.assert_(check_multi_b(gtab, 2, non_zero=False))

    # We don't consider differences this small to be sufficient:
    bvals = np.array([1995, 1995, 1995, 1995, 2005, 2005, 2005, 2005, 0])
    bvecs = generate_bvecs(bvals.shape[-1])
    gtab = gradient_table(bvals, bvecs=bvecs)
    npt.assert_(not check_multi_b(gtab, 2, non_zero=True))

    # Unless you specify that you are interested in this magnitude of changes:
    npt.assert_(check_multi_b(gtab, 2, non_zero=True, bmag=0))

    # Or if you consider zero to be one of your b-values:
    npt.assert_(check_multi_b(gtab, 2, non_zero=False))

    # Case that b-values are in ms/um2 (this should successfully pass)
    bvals = np.array([0.995, 0.995, 0.995, 0.995, 2.005, 2.005, 2.005, 2.005, 0])
    bvecs = generate_bvecs(bvals.shape[-1])
    gtab = gradient_table(bvals, bvecs=bvecs)
    npt.assert_(check_multi_b(gtab, 2, non_zero=False))


@set_random_number_generator()
def test_btens_to_params(rng):
    """
    Checks if bvals, bdeltas and b_etas are as expected for 4 b-tensor shapes
    (LTE, PTE, STE, CTE) as well as scaled and rotated versions of them

    This function intrinsically tests the function `_btens_to_params_2d` as
    `_btens_to_params_2d` is only meant to be called by `btens_to_params`
    """
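    # Reference for the parameterization exercised below: bval is the trace
    # of the b-tensor, bdelta its normalized anisotropy (1 for linear, -0.5
    # for planar, 0 for spherical encoding) and b_eta its asymmetry (0 for
    # all of the axially symmetric tensors used here).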
n_rotations = 30 n_scales = 3 expected_bvals = np.array([1, 1, 1, 1]) expected_bdeltas = np.array([1, -0.5, 0, 0.5]) expected_b_etas = np.array([0, 0, 0, 0]) # Baseline tensors to test linear_tensor = np.array([[1, 0, 0], [0, 0, 0], [0, 0, 0]]) planar_tensor = np.array([[0, 0, 0], [0, 1, 0], [0, 0, 1]]) / 2 spherical_tensor = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]]) / 3 cigar_tensor = np.array([[2, 0, 0], [0, 0.5, 0], [0, 0, 0.5]]) / 3 base_tensors = [linear_tensor, planar_tensor, spherical_tensor, cigar_tensor] # --------------------------------- # Test function on baseline tensors # --------------------------------- # Loop through each tensor type and check results for i, tensor in enumerate(base_tensors): i_bval, i_bdelta, i_b_eta = btens_to_params(tensor) npt.assert_array_almost_equal(i_bval, expected_bvals[i]) npt.assert_array_almost_equal(i_bdelta, expected_bdeltas[i]) npt.assert_array_almost_equal(i_b_eta, expected_b_etas[i]) # Test function on a 3D input base_tensors_array = np.empty((4, 3, 3)) base_tensors_array[0, :, :] = linear_tensor base_tensors_array[1, :, :] = planar_tensor base_tensors_array[2, :, :] = spherical_tensor base_tensors_array[3, :, :] = cigar_tensor bvals, bdeltas, b_etas = btens_to_params(base_tensors_array) npt.assert_array_almost_equal(bvals, expected_bvals) npt.assert_array_almost_equal(bdeltas, expected_bdeltas) npt.assert_array_almost_equal(b_etas, expected_b_etas) # ----------------------------------------------------- # Test function after rotating+scaling baseline tensors # ----------------------------------------------------- scales = np.concatenate((np.array([1]), rng.random(n_scales))) for scale in scales: ebs = expected_bvals * scale # Generate `n_rotations` random 3-element vectors of norm 1 v = rng.random((n_rotations, 3)) - 0.5 u = np.apply_along_axis(lambda w: w / np.linalg.norm(w), axis=1, arr=v) for rot_idx in range(n_rotations): # Get rotation matrix for current iteration u_i = u[rot_idx, :] R_i = vec2vec_rotmat(np.array([1, 0, 0]), u_i) # Rotate each of the baseline test tensors and check results for i, tensor in enumerate(base_tensors): tensor_rot_i = np.matmul(np.matmul(R_i, tensor), R_i.T) i_bval, i_bdelta, i_b_eta = btens_to_params(tensor_rot_i * scale) npt.assert_array_almost_equal(i_bval, ebs[i]) npt.assert_array_almost_equal(i_bdelta, expected_bdeltas[i]) npt.assert_array_almost_equal(i_b_eta, expected_b_etas[i]) # Input can't be string npt.assert_raises(ValueError, btens_to_params, "LTE") # Input can't be list of strings npt.assert_raises(ValueError, btens_to_params, ["LTE", "LTE"]) # Input can't be 1D nor 4D npt.assert_raises(ValueError, btens_to_params, np.zeros((3,))) npt.assert_raises(ValueError, btens_to_params, np.zeros((3, 3, 3, 3))) # Input shape must be (3, 3) OR (N, 3, 3) npt.assert_raises(ValueError, btens_to_params, np.zeros((4, 4))) npt.assert_raises(ValueError, btens_to_params, np.zeros((2, 2, 2))) def test_params_to_btens(): """ Checks if `params_to_btens` generates the expected b-tensors from provided `bvals`, `bdeltas`, `b_etas`. 
""" # Test parameters that should generate "baseline" b-tensors bvals = [1, 1, 1, 1] bdeltas = [0, -0.5, 0.5, 1] b_etas = [0, 0, 0, 0] expected_btens = [ np.eye(3) / 3, np.array([[1, 0, 0], [0, 1, 0], [0, 0, 0]]) / 2, np.array([[0.5, 0, 0], [0, 0.5, 0], [0, 0, 2]]) / 3, np.array([[0, 0, 0], [0, 0, 0], [0, 0, 1]]), ] for i, (bval, bdelta, b_eta) in enumerate(zip(bvals, bdeltas, b_etas)): btens = params_to_btens(bval, bdelta, b_eta) npt.assert_array_almost_equal(btens, expected_btens[i]) # Additional tests bvals = [1.7, 0.4, 2.3] bdeltas = [0.6, -0.2, 0] b_etas = [0.3, 0.8, 0.7] expected_btens = [ np.array([[0.12466667, 0, 0], [0, 0.32866667, 0], [0, 0, 1.24666667]]), np.array([[0.18133333, 0, 0], [0, 0.13866667, 0], [0, 0, 0.08]]), np.array([[0.76666667, 0, 0], [0, 0.76666667, 0], [0, 0, 0.76666667]]), ] for i, (bval, bdelta, b_eta) in enumerate(zip(bvals, bdeltas, b_etas)): btens = params_to_btens(bval, bdelta, b_eta) npt.assert_array_almost_equal(btens, expected_btens[i]) # Tests to trigger value errors # 1: wrong input type # 2: bval out of valid range # 3: bdelta out of valid range # 4: b_eta out of valid range bvals = [np.array([1]), -1, 1, 1] bdeltas = [0, 0, -1, 1] b_etas = [0, 0, 0, -1] for bval, bdelta, b_eta in zip(bvals, bdeltas, b_etas): npt.assert_raises(ValueError, params_to_btens, bval, bdelta, b_eta) def test_orientation_from_to_string(): with clear_and_catch_warnings(): ras = np.array(((0, 1), (1, 1), (2, 1))) lps = np.array(((0, -1), (1, -1), (2, 1))) asl = np.array(((1, 1), (2, 1), (0, -1))) npt.assert_array_equal(orientation_from_string("ras"), ras) npt.assert_array_equal(orientation_from_string("lps"), lps) npt.assert_array_equal(orientation_from_string("asl"), asl) npt.assert_raises(ValueError, orientation_from_string, "aasl") assert orientation_to_string(ras) == "ras" assert orientation_to_string(lps) == "lps" assert orientation_to_string(asl) == "asl" def test_reorient_vectors(): with clear_and_catch_warnings(): bvec = np.arange(12).reshape((3, 4)) npt.assert_array_equal(reorient_vectors(bvec, "ras", "ras"), bvec) npt.assert_array_equal(reorient_vectors(bvec, "ras", "lpi"), -bvec) result = bvec[[1, 2, 0]] npt.assert_array_equal(reorient_vectors(bvec, "ras", "asr"), result) bvec = result result = bvec[[1, 0, 2]] * [[-1], [1], [-1]] npt.assert_array_equal(reorient_vectors(bvec, "asr", "ial"), result) result = bvec[[1, 0, 2]] * [[-1], [1], [1]] npt.assert_array_equal(reorient_vectors(bvec, "asr", "iar"), result) npt.assert_raises(ValueError, reorient_vectors, bvec, "ras", "ra") bvec = np.arange(12).reshape((3, 4)) bvec = bvec.T npt.assert_array_equal(reorient_vectors(bvec, "ras", "ras", axis=1), bvec) npt.assert_array_equal(reorient_vectors(bvec, "ras", "lpi", axis=1), -bvec) result = bvec[:, [1, 2, 0]] npt.assert_array_equal(reorient_vectors(bvec, "ras", "asr", axis=1), result) bvec = result result = bvec[:, [1, 0, 2]] * [-1, 1, -1] npt.assert_array_equal(reorient_vectors(bvec, "asr", "ial", axis=1), result) result = bvec[:, [1, 0, 2]] * [-1, 1, 1] npt.assert_array_equal(reorient_vectors(bvec, "asr", "iar", axis=1), result) bvec = np.arange(12).reshape((3, 4)) def test_affine_input_change(): sq2 = np.sqrt(2) / 2 bvals = np.concatenate([[0], np.ones(6) * 1000]) bvecs = np.array( [ [0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [sq2, sq2, 0], [sq2, 0, sq2], [0, sq2, sq2], ] ) gt = gradient_table_from_bvals_bvecs(bvals, bvecs, b0_threshold=0) # Wrong affine dimension affs = np.zeros((6, 4, 4)) for i in range(4): affs[:, i, i] = 1 npt.assert_warns(Warning, reorient_bvecs, gt, 
affs) # Check if list still works affs = [np.eye(4) for _ in range(6)] _ = reorient_bvecs(gt, affs) def test_gradient_table_len(): # Test with single-shell gradient scheme sq2 = np.sqrt(2) / 2.0 bvals = 1500 * np.ones(7) bvals[0] = 0 bvecs = np.array( [ [0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [sq2, sq2, 0], [sq2, 0, sq2], [0, sq2, sq2], ] ) gtab = gradient_table(bvals, bvecs=bvecs) exp_grad_count = len(bvals) obt_grad_count1 = gtab.__len__() obt_grad_count2 = len(gtab) assert exp_grad_count == obt_grad_count1 == obt_grad_count2 # Test with big and small deltas big_delta = 5 small_delta = 2 gtab = gradient_table( bvals, bvecs=bvecs, big_delta=big_delta, small_delta=small_delta ) exp_grad_count = len(bvals) obt_grad_count1 = gtab.__len__() obt_grad_count2 = len(gtab) assert exp_grad_count == obt_grad_count1 == obt_grad_count2 # Test with multi-shell gradient scheme bvals = np.array([1000, 1000, 1000, 1000, 2000, 2000, 2000, 2000, 0]) bvecs = generate_bvecs(bvals.shape[-1]) gtab = gradient_table(bvals, bvecs=bvecs) exp_grad_count = len(bvals) obt_grad_count1 = gtab.__len__() obt_grad_count2 = len(gtab) assert exp_grad_count == obt_grad_count1 == obt_grad_count2 # Test reading gradient values from a file _, fbvals, fbvecs = get_fnames(name="small_101D") gtab = gradient_table(fbvals, bvecs=fbvecs) bvals, bvecs = read_bvals_bvecs(fbvals, fbvecs) exp_grad_count = len(bvals) obt_grad_count1 = gtab.__len__() obt_grad_count2 = len(gtab) assert exp_grad_count == obt_grad_count1 == obt_grad_count2 # Test with different shapes gtab = gradient_table(bvals, bvecs=bvecs.T) exp_grad_count = len(bvals) obt_grad_count1 = gtab.__len__() obt_grad_count2 = len(gtab) assert exp_grad_count == obt_grad_count1 == obt_grad_count2 btab = np.concatenate((bvals[:, None], bvecs), axis=1) gtab = gradient_table(btab) exp_grad_count = len(bvals) obt_grad_count1 = gtab.__len__() obt_grad_count2 = len(gtab) assert exp_grad_count == obt_grad_count1 == obt_grad_count2 gtab = gradient_table(btab.T) exp_grad_count = len(bvals) obt_grad_count1 = gtab.__len__() obt_grad_count2 = len(gtab) assert exp_grad_count == obt_grad_count1 == obt_grad_count2 # Test with b-tensor encoding gradient schemes gradients = np.array( [[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [3, 4, 0], [5, 0, 12]], "float" ) for btens in ["LTE", "PTE", "STE", "CTE"]: gtab = GradientTable(gradients, btens=btens) exp_grad_count = len(gradients) obt_grad_count1 = gtab.__len__() obt_grad_count2 = len(gtab) assert exp_grad_count == obt_grad_count1 == obt_grad_count2 def test_extract_b0(): dwi = np.random.rand(10, 10, 10, 5).astype(np.float32) b0_mask = np.array([True, False, True, True, False]) expected_first = dwi[..., 0] expected_mean = np.mean(dwi[..., [0, 2, 3]], axis=-1) expected_all = dwi[..., [0, 2, 3]] # Test extracting all b0 volumes result_all = extract_b0(dwi, b0_mask, strategy="all") npt.assert_equal(result_all.shape[-1], np.sum(b0_mask)) npt.assert_array_equal(result_all, expected_all) # Test extracting first b0 volume result_first = extract_b0(dwi, b0_mask, strategy="first") npt.assert_equal(result_first.shape, dwi.shape[:-1]) npt.assert_array_equal(result_first, expected_first) # Test extracting mean b0 volume result_mean = extract_b0(dwi, b0_mask, strategy="mean") npt.assert_equal(result_mean.shape, dwi.shape[:-1]) npt.assert_almost_equal(result_mean, expected_mean) # Test contiguous grouping result_grouped = extract_b0(dwi, b0_mask, group_contiguous_b0=True, strategy="mean") npt.assert_equal(result_grouped.shape[-1], 2) # 2 groups expected 
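    # The mask [True, False, True, True, False] contains two contiguous runs
    # of b0 volumes (index 0, and indices 2-3), so grouping returns the first
    # volume as-is followed by the mean of volumes 2 and 3.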
npt.assert_almost_equal(result_grouped[..., 0], result_first) npt.assert_almost_equal(result_grouped[..., 1], dwi[..., 2:4].mean(axis=-1)) npt.assert_raises(ValueError, extract_b0, dwi, b0_mask, strategy="max") npt.assert_raises(ValueError, extract_b0, dwi[..., 0], b0_mask) npt.assert_raises(ValueError, extract_b0, dwi, b0_mask[:, None]) npt.assert_raises(ValueError, extract_b0, dwi, b0_mask[:-1]) def test_extract_dwi_shell(): class MockGradientTable: def __init__(self, bvals, bvecs): self.bvals = np.array(bvals) self.bvecs = np.array(bvecs) dwi = np.random.rand(10, 10, 10, 6).astype(np.float32) gtab = MockGradientTable([0, 1000, 2000, 2005, 3000, 4000], np.random.rand(6, 3)) # Test grouped shell extraction indices, shell_data, output_bvals, output_bvecs = extract_dwi_shell( dwi, gtab, bvals_to_extract=[2000], tol=10 ) expected_indices = np.array([2, 3]) expected_data = dwi[..., expected_indices] expected_bvals = np.array([2000, 2005]) expected_bvecs = gtab.bvecs[expected_indices] npt.assert_array_equal(indices[0], expected_indices) npt.assert_array_equal(shell_data[0], expected_data) npt.assert_array_equal(output_bvals[0], expected_bvals) npt.assert_array_equal(output_bvecs[0], expected_bvecs) # Test ungrouped shell extraction indices, shell_data_list, output_bvals, output_bvecs = extract_dwi_shell( dwi, gtab, bvals_to_extract=[2000], tol=10, group_shells=False ) npt.assert_array_equal(indices[0], expected_indices) npt.assert_array_equal(shell_data_list[0], dwi[..., 2:4]) npt.assert_array_equal(output_bvals[0], expected_bvals) npt.assert_array_equal(output_bvecs[0], expected_bvecs) npt.assert_raises(ValueError, extract_dwi_shell, dwi, gtab, bvals_to_extract=[5000]) npt.assert_warns(UserWarning, extract_dwi_shell, dwi, gtab, bvals_to_extract=[]) dipy-1.11.0/dipy/core/tests/test_graph.py000066400000000000000000000017221476546756600203440ustar00rootroot00000000000000from numpy.testing import assert_equal from dipy.core.graph import Graph def test_graph(): g = Graph() g.add_node("a", attr=5) g.add_node("b", attr=6) g.add_node("c", attr=10) g.add_node("d", attr=11) g.add_edge("a", "b") g.add_edge("b", "c") g.add_edge("c", "d") g.add_edge("b", "d") print("Nodes") print(g.node) print("Successors") print(g.succ) print("Predecessors") print(g.pred) print("Paths above d") print(g.up("d")) print("Paths below a") print(g.down("a")) print("Shortest path above d") print(g.up_short("d")) print("Shortest path below a") print(g.down_short("a")) print("Deleting node b") # g.del_node_and_edges('b') g.del_node("b") print("Nodes") print(g.node) print("Successors") print(g.succ) print("Predecessors") print(g.pred) assert_equal(len(g.node), 3) assert_equal(len(g.succ), 3) assert_equal(len(g.pred), 3) dipy-1.11.0/dipy/core/tests/test_interpolation.py000066400000000000000000000431041476546756600221320ustar00rootroot00000000000000import warnings import numpy as np import numpy.testing as npt from scipy.ndimage import map_coordinates from dipy.align import floating from dipy.core.interpolation import ( NearestNeighborInterpolator, OutsideImage, TriLinearInterpolator, interp_rbf, interpolate_scalar_2d, interpolate_scalar_3d, interpolate_scalar_nn_2d, interpolate_scalar_nn_3d, interpolate_vector_2d, interpolate_vector_3d, map_coordinates_trilinear_iso, rbf_interpolation, trilinear_interpolate4d, ) from dipy.core.subdivide_octahedron import create_unit_sphere from dipy.testing.decorators import set_random_number_generator @set_random_number_generator() def test_trilinear_interpolate(rng): """This tests that the 
trilinear interpolation returns the correct values.""" a, b, c = rng.random(3) def linear_function(x, y, z): return a * x + b * y + c * z N = 6 x, y, z = np.mgrid[:N, :N, :N] data = np.empty((N, N, N, 2)) data[..., 0] = linear_function(x, y, z) data[..., 1] = 99.0 # Use a point not near the edges point = np.array([2.1, 4.8, 3.3]) out = trilinear_interpolate4d(data, point) expected = [linear_function(*point), 99.0] npt.assert_array_almost_equal(out, expected) # Pass in out ourselves out[:] = -1 trilinear_interpolate4d(data.astype(float), point.astype(float), out=out) npt.assert_array_almost_equal(out, expected) # use a point close to an edge point = np.array([-0.1, -0.1, -0.1]) expected = [0.0, 99.0] out = trilinear_interpolate4d(data, point) npt.assert_array_almost_equal(out, expected) # different edge point = np.array([2.4, 5.4, 3.3]) # On the edge 5.4 get treated as the max y value, 5. expected = [linear_function(point[0], 5.0, point[2]), 99.0] out = trilinear_interpolate4d(data, point) npt.assert_array_almost_equal(out, expected) # Test index errors point = np.array([2.4, 5.5, 3.3]) npt.assert_raises(IndexError, trilinear_interpolate4d, data, point) point = np.array([2.4, -1.0, 3.3]) npt.assert_raises(IndexError, trilinear_interpolate4d, data, point) @set_random_number_generator(5324989) def test_interpolate_scalar_2d(rng): sz = 64 target_shape = (sz, sz) image = np.empty(target_shape, dtype=floating) image[...] = rng.integers(0, 10, np.size(image)).reshape(target_shape) extended_image = np.zeros((sz + 2, sz + 2), dtype=floating) extended_image[1 : sz + 1, 1 : sz + 1] = image[...] # Select some coordinates inside the image to interpolate at nsamples = 200 locations = rng.random(2 * nsamples).reshape((nsamples, 2)) * (sz + 2) - 1.0 extended_locations = locations + 1.0 # shift coordinates one voxel # Call the implementation under test interp, inside = interpolate_scalar_2d(image, locations) # Call the reference implementation expected = map_coordinates(extended_image, extended_locations.transpose(), order=1) npt.assert_array_almost_equal(expected, interp) # Test interpolation stability along the boundary epsilon = 5e-8 for k in range(2): for offset in [0, sz - 1]: delta = ((rng.random(nsamples) * 2) - 1) * epsilon locations[:, k] = delta + offset locations[:, (k + 1) % 2] = rng.random(nsamples) * (sz - 1) interp, inside = interpolate_scalar_2d(image, locations) locations[:, k] = offset expected = map_coordinates(image, locations.transpose(), order=1) npt.assert_array_almost_equal(expected, interp) if offset == 0: expected_flag = np.array(delta >= 0, dtype=np.int32) else: expected_flag = np.array(delta <= 0, dtype=np.int32) npt.assert_array_almost_equal(expected_flag, inside) @set_random_number_generator(1924781) def test_interpolate_scalar_nn_2d(rng): sz = 64 target_shape = (sz, sz) image = np.empty(target_shape, dtype=floating) image[...] 
= rng.integers(0, 10, np.size(image)).reshape(target_shape) # Select some coordinates to interpolate at nsamples = 200 locations = rng.random(2 * nsamples).reshape((nsamples, 2)) * (sz + 2) - 1.0 # Call the implementation under test interp, inside = interpolate_scalar_nn_2d(image, locations) # Call the reference implementation expected = map_coordinates(image, locations.transpose(), order=0) npt.assert_array_almost_equal(expected, interp) # Test the 'inside' flag for i in range(nsamples): if (locations[i, 0] < 0 or locations[i, 0] > (sz - 1)) or ( locations[i, 1] < 0 or locations[i, 1] > (sz - 1) ): npt.assert_equal(inside[i], 0) else: npt.assert_equal(inside[i], 1) @set_random_number_generator(3121121) def test_interpolate_scalar_nn_3d(rng): sz = 64 target_shape = (sz, sz, sz) image = np.empty(target_shape, dtype=floating) image[...] = rng.integers(0, 10, np.size(image)).reshape(target_shape) # Select some coordinates to interpolate at nsamples = 200 locations = rng.random(3 * nsamples).reshape((nsamples, 3)) * (sz + 2) - 1.0 # Call the implementation under test interp, inside = interpolate_scalar_nn_3d(image, locations) # Call the reference implementation expected = map_coordinates(image, locations.transpose(), order=0) npt.assert_array_almost_equal(expected, interp) # Test the 'inside' flag for i in range(nsamples): expected_inside = 1 for axis in range(3): if locations[i, axis] < 0 or locations[i, axis] > (sz - 1): expected_inside = 0 break npt.assert_equal(inside[i], expected_inside) @set_random_number_generator(9216326) def test_interpolate_scalar_3d(rng): sz = 64 target_shape = (sz, sz, sz) image = np.empty(target_shape, dtype=floating) image[...] = rng.integers(0, 10, np.size(image)).reshape(target_shape) extended_image = np.zeros((sz + 2, sz + 2, sz + 2), dtype=floating) extended_image[1 : sz + 1, 1 : sz + 1, 1 : sz + 1] = image[...] # Select some coordinates inside the image to interpolate at nsamples = 800 locations = rng.random(3 * nsamples).reshape((nsamples, 3)) * (sz + 2) - 1.0 extended_locations = locations + 1.0 # shift coordinates one voxel # Call the implementation under test interp, inside = interpolate_scalar_3d(image, locations) # Call the reference implementation expected = map_coordinates(extended_image, extended_locations.transpose(), order=1) npt.assert_array_almost_equal(expected, interp) # Test interpolation stability along the boundary epsilon = 5e-8 for k in range(3): for offset in [0, sz - 1]: delta = ((rng.random(nsamples) * 2) - 1) * epsilon locations[:, k] = delta + offset locations[:, (k + 1) % 3] = rng.random(nsamples) * (sz - 1) locations[:, (k + 2) % 3] = rng.random(nsamples) * (sz - 1) interp, inside = interpolate_scalar_3d(image, locations) locations[:, k] = offset expected = map_coordinates(image, locations.transpose(), order=1) npt.assert_array_almost_equal(expected, interp) if offset == 0: expected_flag = np.array(delta >= 0, dtype=np.int32) else: expected_flag = np.array(delta <= 0, dtype=np.int32) npt.assert_array_almost_equal(expected_flag, inside) @set_random_number_generator(7711219) def test_interpolate_vector_3d(rng): sz = 64 target_shape = (sz, sz, sz) field = np.empty(target_shape + (3,), dtype=floating) field[...] 
= rng.integers(0, 10, np.size(field)).reshape(target_shape + (3,)) extended_field = np.zeros((sz + 2, sz + 2, sz + 2, 3), dtype=floating) extended_field[1 : sz + 1, 1 : sz + 1, 1 : sz + 1] = field # Select some coordinates to interpolate at nsamples = 800 locations = rng.random(3 * nsamples).reshape((nsamples, 3)) * (sz + 2) - 1.0 extended_locations = locations + 1 # Call the implementation under test interp, inside = interpolate_vector_3d(field, locations) # Call the reference implementation expected = np.zeros_like(interp) for i in range(3): expected[..., i] = map_coordinates( extended_field[..., i], extended_locations.transpose(), order=1 ) npt.assert_array_almost_equal(expected, interp) # Test interpolation stability along the boundary epsilon = 5e-8 for k in range(3): for offset in [0, sz - 1]: delta = ((rng.random(nsamples) * 2) - 1) * epsilon locations[:, k] = delta + offset locations[:, (k + 1) % 3] = rng.random(nsamples) * (sz - 1) locations[:, (k + 2) % 3] = rng.random(nsamples) * (sz - 1) interp, inside = interpolate_vector_3d(field, locations) locations[:, k] = offset for i in range(3): expected[..., i] = map_coordinates( field[..., i], locations.transpose(), order=1 ) npt.assert_array_almost_equal(expected, interp) if offset == 0: expected_flag = np.array(delta >= 0, dtype=np.int32) else: expected_flag = np.array(delta <= 0, dtype=np.int32) npt.assert_array_almost_equal(expected_flag, inside) @set_random_number_generator(1271244) def test_interpolate_vector_2d(rng): sz = 64 target_shape = (sz, sz) field = np.empty(target_shape + (2,), dtype=floating) field[...] = rng.integers(0, 10, np.size(field)).reshape(target_shape + (2,)) extended_field = np.zeros((sz + 2, sz + 2, 2), dtype=floating) extended_field[1 : sz + 1, 1 : sz + 1] = field # Select some coordinates to interpolate at nsamples = 200 locations = rng.random(2 * nsamples).reshape((nsamples, 2)) * (sz + 2) - 1.0 extended_locations = locations + 1 # Call the implementation under test interp, inside = interpolate_vector_2d(field, locations) # Call the reference implementation expected = np.zeros_like(interp) for i in range(2): expected[..., i] = map_coordinates( extended_field[..., i], extended_locations.transpose(), order=1 ) npt.assert_array_almost_equal(expected, interp) # Test interpolation stability along the boundary epsilon = 5e-8 for k in range(2): for offset in [0, sz - 1]: delta = ((rng.random(nsamples) * 2) - 1) * epsilon locations[:, k] = delta + offset locations[:, (k + 1) % 2] = rng.random(nsamples) * (sz - 1) interp, inside = interpolate_vector_2d(field, locations) locations[:, k] = offset for i in range(2): expected[..., i] = map_coordinates( field[..., i], locations.transpose(), order=1 ) npt.assert_array_almost_equal(expected, interp) if offset == 0: expected_flag = np.array(delta >= 0, dtype=np.int32) else: expected_flag = np.array(delta <= 0, dtype=np.int32) npt.assert_array_almost_equal(expected_flag, inside) def test_NearestNeighborInterpolator(): # Place integers values at the center of every voxel ell, m, n, o = np.ogrid[0:6.01, 0:6.01, 0:6.01, 0:4] data = ell + m + n + o nni = NearestNeighborInterpolator(data, (1, 1, 1)) a, b, c = np.mgrid[0.5:6.5:1.6, 0.5:6.5:2.7, 0.5:6.5:3.8] for ii in range(a.size): x = a.flat[ii] y = b.flat[ii] z = c.flat[ii] expected_result = int(x) + int(y) + int(z) + o.ravel() npt.assert_array_equal(nni[x, y, z], expected_result) ind = np.array([x, y, z]) npt.assert_array_equal(nni[ind], expected_result) npt.assert_raises(OutsideImage, nni.__getitem__, (-0.1, 0, 0)) 
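    # Out-of-bounds lookups on either side must raise rather than clamp:
    # below zero (above) and past the image extent (below).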
npt.assert_raises(OutsideImage, nni.__getitem__, (0, 8.2, 0)) def test_TriLinearInterpolator(): # Place (0, 0, 0) at the bottom left of the image ell, m, n, o = np.ogrid[0.5:6.51, 0.5:6.51, 0.5:6.51, 0:4] data = ell + m + n + o data = data.astype("float32") tli = TriLinearInterpolator(data, (1, 1, 1)) a, b, c = np.mgrid[0.5:6.5:1.6, 0.5:6.5:2.7, 0.5:6.5:3.8] for ii in range(a.size): x = a.flat[ii] y = b.flat[ii] z = c.flat[ii] expected_result = x + y + z + o.ravel() npt.assert_array_almost_equal(tli[x, y, z], expected_result, decimal=5) ind = np.array([x, y, z]) npt.assert_array_almost_equal(tli[ind], expected_result) # Index at 0 expected_value = np.arange(4) + 1.5 npt.assert_array_almost_equal(tli[0, 0, 0], expected_value) # Index at shape expected_value = np.arange(4) + (6.5 * 3) npt.assert_array_almost_equal(tli[7, 7, 7], expected_value) npt.assert_raises(OutsideImage, tli.__getitem__, (-0.1, 0, 0)) npt.assert_raises(OutsideImage, tli.__getitem__, (0, 7.01, 0)) def test_trilinear_interp_cubic_voxels(): def stepped_1d(arr_1d): # Make a version of `arr_1d` which is not contiguous return np.vstack((arr_1d, arr_1d)).ravel(order="F")[::2] A = np.ones((17, 17, 17)) B = np.zeros(3) strides = np.array(A.strides, np.intp) A[7, 7, 7] = 2 points = np.array([[0, 0, 0], [7.0, 7.5, 7.0], [3.5, 3.5, 3.5]]) map_coordinates_trilinear_iso(A, points, strides, 3, B) npt.assert_array_almost_equal(B, np.array([1.0, 1.5, 1.0])) # All of the input array, points array, strides array and output array must # be C-contiguous. Check by passing in versions that aren't C contiguous npt.assert_raises( ValueError, map_coordinates_trilinear_iso, A.copy(order="F"), points, strides, 3, B, ) npt.assert_raises( ValueError, map_coordinates_trilinear_iso, A, points.copy(order="F"), strides, 3, B, ) npt.assert_raises( ValueError, map_coordinates_trilinear_iso, A, points, stepped_1d(strides), 3, B ) npt.assert_raises( ValueError, map_coordinates_trilinear_iso, A, points, strides, 3, stepped_1d(B) ) def test_interp_rbf(): def data_func(s, a, b): return a * np.cos(s.theta) + b * np.sin(s.phi) s0 = create_unit_sphere(recursion_level=3) s1 = create_unit_sphere(recursion_level=4) for a, b in zip([1, 2, 0.5], [1, 0.5, 2]): data = data_func(s0, a, b) expected = data_func(s1, a, b) with warnings.catch_warnings(record=True) as w: warnings.simplefilter("always") interp_data_a = interp_rbf(data, s0, s1, norm="angle") npt.assert_(np.mean(np.abs(interp_data_a - expected)) < 0.1) npt.assert_(len(w) == 1) npt.assert_(issubclass(w[0].category, DeprecationWarning)) npt.assert_("deprecated" in str(w[0].message)) # Test that using the euclidean norm raises a warning # (following # https://docs.python.org/2/library/warnings.html#testing-warnings) with warnings.catch_warnings(record=True) as w: warnings.simplefilter("always") interp_rbf(data, s0, s1, norm="euclidean_norm") npt.assert_(len(w) == 2) npt.assert_(issubclass(w[0].category, DeprecationWarning)) npt.assert_("deprecated" in str(w[0].message)) npt.assert_(issubclass(w[1].category, PendingDeprecationWarning)) npt.assert_("deprecated" in str(w[1].message)) def test_rbf_interpolation(): s0 = create_unit_sphere(recursion_level=3) s1 = create_unit_sphere(recursion_level=4) def data_func(s, a, b): return a * np.cos(s.theta) + b * np.sin(s.phi) # Test 1D case def data_func_1d(s, a, b, i): return data_func(s, a, b) + i for a, b in zip([1, 2, 0.5], [1, 0.5, 2]): data = np.empty([3, len(s0.vertices)]) for i in range(3): data[i] = data_func_1d(s0, a, b, i) expected = np.empty([3, len(s1.vertices)]) 
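    # rbf_interpolation() interpolates along the last (vertex) axis, so the
    # leading batch dimension is handled independently; `expected` mirrors
    # that layout for the comparison below.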
for i in range(3): expected[i] = data_func_1d(s1, a, b, i) interp_data = rbf_interpolation(data, s0, s1, epsilon=10) npt.assert_(np.mean(np.abs(interp_data - expected)) < 0.1) # Test 2D case def data_func_2d(s, a, b, i, j): return data_func(s, a, b) + i + j for a, b in zip([1, 2, 0.5], [1, 0.5, 2]): data = np.empty([3, 4, len(s0.vertices)]) for i in range(3): for j in range(4): data[i, j] = data_func_2d(s0, a, b, i, j) expected = np.empty([3, 4, len(s1.vertices)]) for i in range(3): for j in range(4): expected[i, j] = data_func_2d(s1, a, b, i, j) interp_data = rbf_interpolation(data, s0, s1, epsilon=10) npt.assert_(np.mean(np.abs(interp_data - expected)) < 0.1) # Test 3D case def data_func_3d(s, a, b, i, j, k): return data_func(s, a, b) + i + j + k for a, b in zip([1, 2, 0.5], [1, 0.5, 2]): data = np.empty([3, 4, 5, len(s0.vertices)]) for i in range(3): for j in range(4): for k in range(5): data[i, j, k] = data_func_3d(s0, a, b, i, j, k) expected = np.empty([3, 4, 5, len(s1.vertices)]) for i in range(3): for j in range(4): for k in range(5): expected[i, j, k] = data_func_3d(s1, a, b, i, j, k) interp_data = rbf_interpolation(data, s0, s1, epsilon=10) npt.assert_(np.mean(np.abs(interp_data - expected)) < 0.1) # Test that shape mismatch raises an error with npt.assert_raises(ValueError): data = data[:, :, :, :-1] # Remove one vertex rbf_interpolation(data, s0, s1, epsilon=10) dipy-1.11.0/dipy/core/tests/test_math.pyx000066400000000000000000000017721476546756600203710ustar00rootroot00000000000000 cimport numpy as cnp import numpy as np import numpy.testing as npt from dipy.core.math cimport f_array_min, f_array_max, f_max, f_min def test_f_array_max(): cdef int size = 10 cdef double[:] d_arr = np.arange(size, dtype=np.float64) cdef float[:] f_arr = np.arange(size, dtype=np.float32) npt.assert_equal(f_array_max(&d_arr[0], size), 9) npt.assert_equal(f_array_max(&f_arr[0], size), 9) def test_f_array_min(): cdef int size = 6 cdef double[:] d_arr = np.arange(5, 5+size, dtype=np.float64) cdef float[:] f_arr = np.arange(5, 5+size, dtype=np.float32) npt.assert_almost_equal(f_array_min(&d_arr[0], size), 5) npt.assert_almost_equal(f_array_min(&f_arr[0], size), 5) def test_f_max(): npt.assert_equal(f_max[double](1, 2), 2) npt.assert_equal(f_max[float](2, 1), 2) npt.assert_equal(f_max(1., 1.), 1) def test_f_min(): npt.assert_equal(f_min[double](1, 2), 1) npt.assert_equal(f_min[float](2, 1), 1) npt.assert_equal(f_min(1.0, 1.0), 1) dipy-1.11.0/dipy/core/tests/test_ndindex.py000066400000000000000000000006011476546756600206670ustar00rootroot00000000000000import numpy as np from numpy.testing import assert_array_equal from dipy.core.ndindex import ndindex def test_ndindex(): x = list(ndindex((1, 2, 3))) expected = [ix for ix, e in np.ndenumerate(np.zeros((1, 2, 3)))] assert_array_equal(x, expected) def test_ndindex_0d(): x = list(ndindex(np.array(1).shape)) expected = [()] assert_array_equal(x, expected) dipy-1.11.0/dipy/core/tests/test_optimize.py000066400000000000000000000070741476546756600211110ustar00rootroot00000000000000import numpy as np import numpy.testing as npt import scipy.sparse as sps import dipy.core.optimize as opt from dipy.core.optimize import Optimizer, sparse_nnls, spdot from dipy.testing.decorators import set_random_number_generator def func(x): return x[0] ** 2 + x[1] ** 2 + x[2] ** 2 def func2(x): return x[0] ** 2 + 0.5 * x[1] ** 2 + 0.2 * x[2] ** 2 + 0.2 * x[3] ** 2 def test_optimize_new_scipy(): opt = Optimizer(fun=func, x0=np.array([1.0, 1.0, 1.0]), method="Powell") 
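    # func is a convex quadratic with its global minimum at the origin, so
    # the derivative-free Powell method should recover x = (0, 0, 0) to
    # within the assertion tolerance.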
npt.assert_array_almost_equal(opt.xopt, np.array([0, 0, 0])) npt.assert_almost_equal(opt.fopt, 0) opt = Optimizer( fun=func, x0=np.array([1.0, 1.0, 1.0]), method="L-BFGS-B", options={"maxcor": 10, "ftol": 1e-7, "gtol": 1e-5, "eps": 1e-8}, ) npt.assert_array_almost_equal(opt.xopt, np.array([0, 0, 0])) npt.assert_almost_equal(opt.fopt, 0) npt.assert_equal(opt.evolution, None) npt.assert_equal(opt.evolution, None) opt = Optimizer( fun=func, x0=np.array([1.0, 1.0, 1.0]), method="L-BFGS-B", options={"maxcor": 10, "ftol": 1e-7, "gtol": 1e-5, "eps": 1e-8}, evolution=False, ) npt.assert_array_almost_equal(opt.xopt, np.array([0, 0, 0])) npt.assert_almost_equal(opt.fopt, 0) opt.print_summary() opt = Optimizer( fun=func2, x0=np.array([1.0, 1.0, 1.0, 5.0]), method="L-BFGS-B", options={"maxcor": 10, "ftol": 1e-7, "gtol": 1e-5, "eps": 1e-8}, evolution=True, ) npt.assert_equal(opt.evolution.shape, (opt.nit, 4)) opt = Optimizer( fun=func2, x0=np.array([1.0, 1.0, 1.0, 5.0]), method="Powell", options={"xtol": 1e-6, "ftol": 1e-6, "maxiter": 1e6}, evolution=True, ) npt.assert_array_almost_equal(opt.xopt, np.array([0, 0, 0, 0.0])) @set_random_number_generator() def test_sklearn_linear_solver(rng): class SillySolver(opt.SKLearnLinearSolver): def fit(self, X, y): self.coef_ = np.ones(X.shape[-1]) MySillySolver = SillySolver() n_samples = 100 n_features = 20 y = rng.random(n_samples) X = np.ones((n_samples, n_features)) MySillySolver.fit(X, y) npt.assert_equal(MySillySolver.coef_, np.ones(n_features)) npt.assert_equal(MySillySolver.predict(X), np.ones(n_samples) * 20) @set_random_number_generator() def test_nonnegativeleastsquares(rng): n = 100 X = np.eye(n) beta = rng.random(n) y = np.dot(X, beta) my_nnls = opt.NonNegativeLeastSquares() my_nnls.fit(X, y) npt.assert_equal(my_nnls.coef_, beta) npt.assert_equal(my_nnls.predict(X), y) @set_random_number_generator() def test_spdot(rng): n = 100 m = 20 k = 10 A = rng.standard_normal((n, m)) B = rng.standard_normal((m, k)) A_sparse = sps.csr_matrix(A) B_sparse = sps.csr_matrix(B) dense_dot = np.dot(A, B) # Try all the different variations: npt.assert_array_almost_equal(dense_dot, spdot(A_sparse, B_sparse).todense()) npt.assert_array_almost_equal(dense_dot, spdot(A, B_sparse)) npt.assert_array_almost_equal(dense_dot, spdot(A_sparse, B)) @set_random_number_generator() def test_sparse_nnls(rng): # Set up the regression: beta = rng.random(10) X = rng.standard_normal((1000, 10)) y = np.dot(X, beta) beta_hat = sparse_nnls(y, X) beta_hat_sparse = sparse_nnls(y, sps.csr_matrix(X)) # We should be able to get back the right answer for this simple case npt.assert_array_almost_equal(beta, beta_hat, decimal=1) npt.assert_array_almost_equal(beta, beta_hat_sparse, decimal=1) dipy-1.11.0/dipy/core/tests/test_rng.py000066400000000000000000000025361476546756600200350ustar00rootroot00000000000000"""File dedicated to test ``dipy.core.rng`` module.""" import numpy.testing as npt from scipy.stats import chisquare from dipy.core import rng def test_wichmann_hill2006(): n_generated = [rng.WichmannHill2006() for i in range(10000)] # The chi-squared test statistic as result and The p-value of the test chisq, pvalue = chisquare(n_generated) # P-values equal 1 show evidence of the null hypothesis which indicates # that it is uniformly distributed. 
This is what we want to check here npt.assert_almost_equal(pvalue, 1.0) npt.assert_raises(ValueError, rng.WichmannHill2006, ix=0) def test_wichmann_hill1982(): n_generated = [rng.WichmannHill1982() for i in range(10000)] chisq, pvalue = chisquare(n_generated) # P-values equal 1 show evidence of the null hypothesis which indicates # that it is uniformly distributed. This is what we want to check here npt.assert_almost_equal(pvalue, 1.0) npt.assert_raises(ValueError, rng.WichmannHill1982, iz=0) def test_LEcuyer(): n_generated = [rng.LEcuyer() for i in range(10000)] chisq, pvalue = chisquare(n_generated) # P-values equal 1 show evidence of the null hypothesis which indicates # that it is uniformly distributed. This is what we want to check here npt.assert_almost_equal(pvalue, 1.0) npt.assert_raises(ValueError, rng.LEcuyer, s2=0) dipy-1.11.0/dipy/core/tests/test_sphere.py000066400000000000000000000310711476546756600205310ustar00rootroot00000000000000import warnings import numpy as np import numpy.testing as nt from dipy.core.geometry import cart2sphere, vector_norm from dipy.core.sphere import ( HemiSphere, Sphere, _get_forces, _get_forces_alt, disperse_charges, disperse_charges_alt, faces_from_sphere_vertices, fibonacci_sphere, hemi_icosahedron, unique_edges, unique_sets, unit_icosahedron, unit_octahedron, ) from dipy.core.sphere_stats import random_uniform_on_sphere from dipy.testing.decorators import set_random_number_generator verts = unit_octahedron.vertices edges = unit_octahedron.edges oct_faces = unit_octahedron.faces r, theta, phi = cart2sphere(*verts.T) def test_sphere_construct_args(): nt.assert_raises(ValueError, Sphere) nt.assert_raises(ValueError, Sphere, x=1, theta=1) nt.assert_raises(ValueError, Sphere, xyz=1, theta=1) nt.assert_raises(ValueError, Sphere, xyz=1, theta=1, phi=1) def test_sphere_edges_faces(): nt.assert_raises(ValueError, Sphere, xyz=1, edges=1, faces=None) Sphere(xyz=[0, 0, 1], faces=[0, 0, 0]) Sphere( xyz=[[0, 0, 1], [1, 0, 0], [0, 1, 0]], edges=[[0, 1], [1, 2], [2, 0]], faces=[0, 1, 2], ) def test_sphere_not_unit(): with warnings.catch_warnings(): warnings.simplefilter("error") nt.assert_raises(UserWarning, Sphere, xyz=[0, 0, 1.5]) def test_bad_edges_faces(): nt.assert_raises(ValueError, Sphere, xyz=[0, 0, 1.5], edges=[[1, 2]]) def test_sphere_construct(): s0 = Sphere(xyz=verts) s1 = Sphere(theta=theta, phi=phi) # Unpack verts.T into x, y, z and pass as keyword arguments x, y, z = verts.T s2 = Sphere(x=x, y=y, z=z) nt.assert_array_almost_equal(s0.theta, s1.theta) nt.assert_array_almost_equal(s0.theta, s2.theta) nt.assert_array_almost_equal(s0.theta, theta) nt.assert_array_almost_equal(s0.phi, s1.phi) nt.assert_array_almost_equal(s0.phi, s2.phi) nt.assert_array_almost_equal(s0.phi, phi) def array_to_set(a): return {frozenset(i) for i in a} def test_unique_edges(): faces = np.array([[0, 1, 2], [1, 2, 0]]) e = array_to_set([[1, 2], [0, 1], [0, 2]]) u = unique_edges(faces) nt.assert_equal(e, array_to_set(u)) u, m = unique_edges(faces, return_mapping=True) nt.assert_equal(e, array_to_set(u)) edges = [[[0, 1], [1, 2], [2, 0]], [[1, 2], [2, 0], [0, 1]]] nt.assert_equal(np.sort(u[m], -1), np.sort(edges, -1)) def test_unique_sets(): sets = np.array([[0, 1, 2], [1, 2, 0], [0, 2, 1], [1, 2, 3]]) e = array_to_set([[0, 1, 2], [1, 2, 3]]) # Run without inverse u = unique_sets(sets) nt.assert_equal(len(u), len(e)) nt.assert_equal(array_to_set(u), e) # Run with inverse u, m = unique_sets(sets, return_inverse=True) nt.assert_equal(len(u), len(e)) 
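    # With return_inverse=True, m maps each input set to its row in u, so
    # u[m] must reproduce the inputs up to within-set ordering.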
nt.assert_equal(array_to_set(u), e) nt.assert_equal(np.sort(u[m], -1), np.sort(sets, -1)) def test_faces_from_sphere_vertices(): faces = faces_from_sphere_vertices(verts) faces = array_to_set(faces) expected = array_to_set(oct_faces) nt.assert_equal(faces, expected) def test_sphere_attrs(): s = Sphere(xyz=verts) nt.assert_array_almost_equal(s.vertices, verts) nt.assert_array_almost_equal(s.x, verts[:, 0]) nt.assert_array_almost_equal(s.y, verts[:, 1]) nt.assert_array_almost_equal(s.z, verts[:, 2]) def test_edges_faces(): s = Sphere(xyz=verts) faces = oct_faces nt.assert_equal(array_to_set(s.faces), array_to_set(faces)) nt.assert_equal(array_to_set(s.edges), array_to_set(edges)) s = Sphere(xyz=verts, faces=[[0, 1, 2]]) nt.assert_equal(array_to_set(s.faces), array_to_set([[0, 1, 2]])) nt.assert_equal(array_to_set(s.edges), array_to_set([[0, 1], [1, 2], [0, 2]])) s = Sphere(xyz=verts, faces=[[0, 1, 2]], edges=[[0, 1]]) nt.assert_equal(array_to_set(s.faces), array_to_set([[0, 1, 2]])) nt.assert_equal(array_to_set(s.edges), array_to_set([[0, 1]])) def test_sphere_subdivide(): sphere1 = unit_octahedron.subdivide(n=4) sphere2 = Sphere(xyz=sphere1.vertices) nt.assert_equal(sphere1.faces.shape, sphere2.faces.shape) nt.assert_equal(array_to_set(sphere1.faces), array_to_set(sphere2.faces)) sphere1 = unit_icosahedron.subdivide(n=4) sphere2 = Sphere(xyz=sphere1.vertices) nt.assert_equal(sphere1.faces.shape, sphere2.faces.shape) nt.assert_equal(array_to_set(sphere1.faces), array_to_set(sphere2.faces)) # It might be good to also test the vertices somehow if we can think of a # good test for them. def test_sphere_find_closest(): sphere1 = unit_octahedron.subdivide(n=4) for ii in range(sphere1.vertices.shape[0]): nt.assert_equal(sphere1.find_closest(sphere1.vertices[ii]), ii) def test_hemisphere_find_closest(): hemisphere1 = hemi_icosahedron.subdivide(n=4) for ii in range(hemisphere1.vertices.shape[0]): nt.assert_equal(hemisphere1.find_closest(hemisphere1.vertices[ii]), ii) nt.assert_equal(hemisphere1.find_closest(-hemisphere1.vertices[ii]), ii) nt.assert_equal(hemisphere1.find_closest(hemisphere1.vertices[ii] * 2), ii) def test_hemisphere_subdivide(): def flip(vertices): x, y, z = vertices.T f = (z < 0) | ((z == 0) & (y < 0)) | ((z == 0) & (y == 0) & (x < 0)) return 1 - 2 * f[:, None] decimals = 6 # Test HemiSphere.subdivide # Create a hemisphere by dividing a hemi-icosahedron hemi1 = HemiSphere.from_sphere(unit_icosahedron).subdivide(n=4) vertices1 = np.round(hemi1.vertices, decimals) vertices1 *= flip(vertices1) order = np.lexsort(vertices1.T) vertices1 = vertices1[order] # Create a hemisphere from a subdivided sphere sphere = unit_icosahedron.subdivide(n=4) hemi2 = HemiSphere.from_sphere(sphere) vertices2 = np.round(hemi2.vertices, decimals) vertices2 *= flip(vertices2) order = np.lexsort(vertices2.T) vertices2 = vertices2[order] # The two hemispheres should have the same vertices up to their order nt.assert_array_equal(vertices1, vertices2) # Create a hemisphere from vertices hemi3 = HemiSphere(xyz=hemi1.vertices) nt.assert_array_equal(hemi1.faces, hemi3.faces) nt.assert_array_equal(hemi1.edges, hemi3.edges) def test_hemisphere_constructor(): s0 = HemiSphere(xyz=verts) s1 = HemiSphere(theta=theta, phi=phi) # Unpack verts.T into x, y, z and pass as keyword arguments x, y, z = verts.T s2 = HemiSphere(x=x, y=y, z=z) uniq_verts = verts[::2].T rU, thetaU, phiU = cart2sphere(*uniq_verts) nt.assert_array_almost_equal(s0.theta, s1.theta) nt.assert_array_almost_equal(s0.theta, s2.theta) 
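    # The octahedron's vertices come in antipodal pairs, so the HemiSphere
    # should keep exactly one vertex of each pair, matching verts[::2].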
nt.assert_array_almost_equal(s0.theta, thetaU) nt.assert_array_almost_equal(s0.phi, s1.phi) nt.assert_array_almost_equal(s0.phi, s2.phi) nt.assert_array_almost_equal(s0.phi, phiU) def test_mirror(): verts = [[0, 0, 1], [0, 1, 0], [1, 0, 0], [-1, -1, -1]] verts = np.array(verts, "float") verts = verts / np.sqrt((verts * verts).sum(-1)[:, None]) faces = [[0, 1, 3], [0, 2, 3], [1, 2, 3]] h = HemiSphere(xyz=verts, faces=faces) s = h.mirror() nt.assert_equal(len(s.vertices), 8) nt.assert_equal(len(s.faces), 6) verts = s.vertices def _angle(a, b): return np.arccos(np.dot(a, b)) for triangle in s.faces: a, b, c = triangle nt.assert_(_angle(verts[a], verts[b]) <= np.pi / 2) nt.assert_(_angle(verts[a], verts[c]) <= np.pi / 2) nt.assert_(_angle(verts[b], verts[c]) <= np.pi / 2) def test_hemisphere_faces(): t = (1 + np.sqrt(5)) / 2 vertices = np.array( [ [-t, -1, 0], [-t, 1, 0], [1, 0, t], [-1, 0, t], [0, t, 1], [0, -t, 1], ] ) vertices /= vector_norm(vertices, keepdims=True) faces = np.array( [ [0, 1, 2], [0, 1, 3], [0, 2, 4], [1, 3, 4], [2, 3, 4], [1, 2, 5], [0, 3, 5], [2, 3, 5], [0, 4, 5], [1, 4, 5], ] ) edges = np.array( [ (0, 1), (0, 2), (0, 3), (0, 4), (0, 5), (1, 2), (1, 3), (1, 4), (1, 5), (2, 3), (2, 4), (2, 5), (3, 4), (3, 5), (4, 5), ] ) h = HemiSphere(xyz=vertices) nt.assert_equal(len(h.edges), len(edges)) nt.assert_equal(array_to_set(h.edges), array_to_set(edges)) nt.assert_equal(len(h.faces), len(faces)) nt.assert_equal(array_to_set(h.faces), array_to_set(faces)) def test_get_force(): charges = np.array([[1.0, 0, 0], [0, 1.0, 0], [0, 0, 1.0]]) force, pot = _get_forces(charges) nt.assert_array_almost_equal(force, 0) charges = np.array([[1, -0.1, 0], [1, 0, 0]]) force, pot = _get_forces(charges) nt.assert_array_almost_equal(force[1, [0, 2]], 0) nt.assert_(force[1, 1] > 0) def test_disperse_charges(): charges = np.array([[1.0, 0, 0], [0, 1.0, 0], [0, 0, 1.0]]) d_sphere, pot = disperse_charges(HemiSphere(xyz=charges), 10) nt.assert_array_almost_equal(charges, d_sphere.vertices) charges = np.array([[3.0 / 5, 4.0 / 5, 0], [4.0 / 5, 3.0 / 5, 0]]) expected_charges = np.array([[0, 1.0, 0], [1.0, 0, 0]]) d_sphere, pot = disperse_charges(HemiSphere(xyz=charges), 1000, const=0.2) nt.assert_array_almost_equal(expected_charges, d_sphere.vertices) for ii in range(1, len(pot)): # check that the potential of the system is going down nt.assert_(pot[ii] - pot[ii - 1] <= 0) # Check that the disperse_charges does not blow up with a large constant d_sphere, pot = disperse_charges(HemiSphere(xyz=charges), 1000, const=20.0) nt.assert_array_almost_equal(expected_charges, d_sphere.vertices) for ii in range(1, len(pot)): # check that the potential of the system is going down nt.assert_(pot[ii] - pot[ii - 1] <= 0) # check that the function seems to work with a larger number of charges charges = np.arange(21).reshape(7, 3) norms = np.sqrt((charges * charges).sum(-1)) charges = charges / norms[:, None] d_sphere, pot = disperse_charges(HemiSphere(xyz=charges), 1000, const=0.05) for ii in range(1, len(pot)): # check that the potential of the system is going down nt.assert_(pot[ii] - pot[ii - 1] <= 0) # check that the resulting charges all lie on the unit sphere d_charges = d_sphere.vertices norms = np.sqrt((d_charges * d_charges).sum(-1)) nt.assert_array_almost_equal(norms, 1) def test_disperse_charges_alt(): # Create a random set of points num_points = 3 init_pointset = random_uniform_on_sphere(n=num_points, coords="xyz") # Compute the associated electrostatic potential init_pointset = 
init_pointset.reshape(init_pointset.shape[0] * 3)
    init_charges_potential = _get_forces_alt(init_pointset)

    # Disperse charges
    init_pointset = init_pointset.reshape(3, 3)
    dispersed_pointset = disperse_charges_alt(init_pointset, 10)

    # Compute the associated electrostatic potential
    dispersed_pointset = dispersed_pointset.reshape(init_pointset.shape[0] * 3)
    dispersed_charges_potential = _get_forces_alt(dispersed_pointset)

    # Verify that the potential of the optimal configuration is smaller than
    # that of the original configuration
    nt.assert_array_less(dispersed_charges_potential, init_charges_potential)


@set_random_number_generator()
def test_fibonacci_sphere(rng):
    # Test that the number of points is correct
    points = fibonacci_sphere(n_points=724, rng=rng)
    nt.assert_equal(len(points), 724)

    # Test randomization
    points1 = fibonacci_sphere(n_points=100, randomize=True, rng=rng)
    points2 = fibonacci_sphere(n_points=100, randomize=True, rng=rng)
    with nt.assert_raises(AssertionError):
        nt.assert_array_equal(points1, points2)

    # Check for near closeness to 0
    nt.assert_almost_equal(np.mean(np.mean(points, axis=0)), 0, decimal=2)


@set_random_number_generator()
def test_fibonacci_hemisphere(rng):
    # Test that the number of points is correct
    points = fibonacci_sphere(n_points=724, hemisphere=True, rng=rng)
    nt.assert_equal(len(points), 724)

    # Test randomization
    points1 = fibonacci_sphere(n_points=100, hemisphere=True, randomize=True, rng=rng)
    points2 = fibonacci_sphere(n_points=100, hemisphere=True, randomize=True, rng=rng)
    with nt.assert_raises(AssertionError):
        nt.assert_array_equal(points1, points2)

    # Check for near closeness to 0
    nt.assert_almost_equal(np.mean(points, axis=0)[2], 0, decimal=2)
dipy-1.11.0/dipy/core/tests/test_subdivide_octahedron.py000066400000000000000000000007461476546756600234320ustar00rootroot00000000000000
from numpy.testing import assert_array_almost_equal

from dipy.core.subdivide_octahedron import create_half_unit_sphere, create_unit_sphere


def test_create_unit_sphere():
    sphere = create_unit_sphere(recursion_level=7)
    v, _, _ = sphere.vertices, sphere.edges, sphere.faces
    assert_array_almost_equal((v * v).sum(1), 1)


def test_create_half_unit_sphere():
    sphere = create_half_unit_sphere(recursion_level=7)
    v, _, _ = sphere.vertices, sphere.edges, sphere.faces
    assert_array_almost_equal((v * v).sum(1), 1)
dipy-1.11.0/dipy/core/wavelet.py000066400000000000000000000146211476546756600165130ustar00rootroot00000000000000
import numpy as np

from dipy.denoise import nlmeans_block
from dipy.testing.decorators import warning_for_keywords

"""
Functions for Wavelet Transforms in 3D domain

Code adapted from

WAVELET SOFTWARE AT POLYTECHNIC UNIVERSITY, BROOKLYN, NY
https://eeweb.engineering.nyu.edu/iselesni/WaveletSoftware/
"""


def cshift3D(x, m, d):
    """3D Circular Shift

    Parameters
    ----------
    x : 3D ndarray
       N1 by N2 by N3 array
    m : int
       amount of shift
    d : int
       dimension of shift (d = 0, 1 or 2)

    Returns
    -------
    y : 3D ndarray
       array x will be shifted by m samples down along dimension d
    """
    s = x.shape
    idx = (np.array(range(s[d])) + (s[d] - m % s[d])) % s[d]
    idx = np.array(idx, dtype=np.int64)
    if d == 0:
        return x[idx, :, :]
    elif d == 1:
        return x[:, idx, :]
    else:
        return x[:, :, idx]


def permutationinverse(perm):
    """
    Function generating inverse of the permutation

    Parameters
    ----------
    perm : 1D array

    Returns
    -------
    inverse : 1D array
       permutation inverse of the input
    """
    inverse = [0] * len(perm)
    for i, p in enumerate(perm):
        inverse[p] = i
    return inverse


def afb3D_A(x, af, d):
    """3D Analysis Filter Bank
    (along one dimension only)

    Parameters
    ----------
    x : 3D ndarray
        N1xN2xN3 matrix, where
        min(N1,N2,N3) > 2*length(filter)
        (Ni are even)
    af : 2D ndarray
        analysis filter for the columns
        af[:, 0] - lowpass filter
        af[:, 1] - highpass filter
    d : int
        dimension of filtering (d = 0, 1 or 2)

    Returns
    -------
    lo : 1D array
        lowpass subbands
    hi : 1D array
        highpass subbands
    """
    lpf = af[:, 0]
    hpf = af[:, 1]
    # permute dimensions of x so that dimension d is first.
    p = [(i + d) % 3 for i in range(3)]
    x = x.transpose(p)
    # filter along dimension 0
    (N1, N2, N3) = x.shape
    L = af.shape[0] // 2
    x = cshift3D(x, -L, 0)
    n1Half = N1 // 2
    lo = np.zeros((L + n1Half, N2, N3))
    hi = np.zeros((L + n1Half, N2, N3))
    for k in range(N3):
        lo[:, :, k] = nlmeans_block.firdn(x[:, :, k], lpf)
    lo[:L] = lo[:L] + lo[n1Half : n1Half + L, :, :]
    lo = lo[:n1Half, :, :]
    for k in range(N3):
        hi[:, :, k] = nlmeans_block.firdn(x[:, :, k], hpf)
    hi[:L] = hi[:L] + hi[n1Half : n1Half + L, :, :]
    hi = hi[:n1Half, :, :]
    # permute dimensions of x (inverse permutation)
    q = permutationinverse(p)
    lo = lo.transpose(q)
    hi = hi.transpose(q)
    return lo, hi


def sfb3D_A(lo, hi, sf, d):
    """3D Synthesis Filter Bank
    (along single dimension only)

    Parameters
    ----------
    lo : 1D array
        lowpass subbands
    hi : 1D array
        highpass subbands
    sf : 2D ndarray
        synthesis filters
    d : int
        dimension of filtering

    Returns
    -------
    y : 3D ndarray
        the N1xN2xN3 matrix
    """
    lpf = sf[:, 0]
    hpf = sf[:, 1]
    # permute dimensions of lo and hi so that dimension d is first.
    p = [(i + d) % 3 for i in range(3)]
    lo = lo.transpose(p)
    hi = hi.transpose(p)
    (N1, N2, N3) = lo.shape
    N = 2 * N1
    L = sf.shape[0]
    y = np.zeros((N + L - 2, N2, N3))
    for k in range(N3):
        y[:, :, k] = np.array(nlmeans_block.upfir(lo[:, :, k], lpf)) + np.array(
            nlmeans_block.upfir(hi[:, :, k], hpf)
        )
    y[: (L - 2), :, :] = y[: (L - 2), :, :] + y[N : (N + L - 2), :, :]
    y = y[:N, :, :]
    y = cshift3D(y, 1 - L // 2, 0)
    # permute dimensions of y (inverse permutation)
    q = permutationinverse(p)
    y = y.transpose(q)
    return y


@warning_for_keywords()
def sfb3D(lo, hi, sf1, *, sf2=None, sf3=None):
    """3D Synthesis Filter Bank

    Parameters
    ----------
    lo : 1D array
        lowpass subbands
    hi : 1D array
        highpass subbands
    sfi : 2D ndarray
        synthesis filters for dimension i

    Returns
    -------
    y : 3D ndarray
        output array
    """
    if sf2 is None:
        sf2 = sf1
    if sf3 is None:
        sf3 = sf1
    LLL = lo
    LLH = hi[0]
    LHL = hi[1]
    LHH = hi[2]
    HLL = hi[3]
    HLH = hi[4]
    HHL = hi[5]
    HHH = hi[6]
    # filter along dimension 2
    LL = sfb3D_A(LLL, LLH, sf3, 2)
    LH = sfb3D_A(LHL, LHH, sf3, 2)
    HL = sfb3D_A(HLL, HLH, sf3, 2)
    HH = sfb3D_A(HHL, HHH, sf3, 2)
    # filter along dimension 1
    L = sfb3D_A(LL, LH, sf2, 1)
    H = sfb3D_A(HL, HH, sf2, 1)
    # filter along dimension 0
    y = sfb3D_A(L, H, sf1, 0)
    return y


@warning_for_keywords()
def afb3D(x, af1, *, af2=None, af3=None):
    """3D Analysis Filter Bank

    Parameters
    ----------
    x : 3D ndarray
        N1 by N2 by N3 array matrix, where
        1) N1, N2, N3 all even
        2) N1 >= 2*len(af1)
        3) N2 >= 2*len(af2)
        4) N3 >= 2*len(af3)
    afi : 2D ndarray
        analysis filters for dimension i
        afi[:, 0] - lowpass filter
        afi[:, 1] - highpass filter

    Returns
    -------
    lo : 1D array
        lowpass subband
    hi : 1D array
        highpass subbands, h[d]- d = 1..7
    """
    if af2 is None:
        af2 = af1
    if af3 is None:
        af3 = af1
    # filter along dimension 0
    L, H = afb3D_A(x, af1, 0)
    # filter along dimension 1
    LL, LH = afb3D_A(L, af2, 1)
    HL, HH = afb3D_A(H, af2, 1)
    # filter along dimension 2
    LLL, LLH = afb3D_A(LL, af3, 2)
    LHL, LHH = afb3D_A(LH, af3, 2)
    HLL, HLH = afb3D_A(HL, af3, 2)
    HHL, HHH = afb3D_A(HH, af3, 2)
    return LLL, [LLH, LHL, LHH, HLL, HLH, HHL, HHH]


def dwt3D(x, J, af):
    """3-D Discrete Wavelet
Transform Parameters ---------- x : 3D ndarray N1 x N2 x N3 matrix 1) Ni all even 2) min(Ni) >= 2^(J-1)*length(af) J : int number of stages af : 2D ndarray analysis filters Returns ------- w : cell array wavelet coefficients """ w = [None] * (J + 1) for k in range(J): x, w[k] = afb3D(x, af, af2=af, af3=af) w[J] = x return w def idwt3D(w, J, sf): """ Inverse 3-D Discrete Wavelet Transform Parameters ---------- w : cell array wavelet coefficient J : int number of stages sf : 2D ndarray synthesis filters Returns ------- y : 3D ndarray output array """ y = w[J] for k in range(J)[::-1]: y = sfb3D(y, w[k], sf, sf2=sf, sf3=sf) return y dipy-1.11.0/dipy/data/000077500000000000000000000000001476546756600144475ustar00rootroot00000000000000dipy-1.11.0/dipy/data/3shells-1000-2000-3500-N193.bval000066400000000000000000000113311476546756600206430ustar00rootroot000000000000000.000000000000000000e+00 1.000000000000000000e+03 1.000000000000000000e+03 1.000000000000000000e+03 1.000000000000000000e+03 1.000000000000000000e+03 1.000000000000000000e+03 1.000000000000000000e+03 1.000000000000000000e+03 1.000000000000000000e+03 1.000000000000000000e+03 1.000000000000000000e+03 1.000000000000000000e+03 1.000000000000000000e+03 1.000000000000000000e+03 1.000000000000000000e+03 1.000000000000000000e+03 1.000000000000000000e+03 1.000000000000000000e+03 1.000000000000000000e+03 1.000000000000000000e+03 1.000000000000000000e+03 1.000000000000000000e+03 1.000000000000000000e+03 1.000000000000000000e+03 1.000000000000000000e+03 1.000000000000000000e+03 1.000000000000000000e+03 1.000000000000000000e+03 1.000000000000000000e+03 1.000000000000000000e+03 1.000000000000000000e+03 1.000000000000000000e+03 1.000000000000000000e+03 1.000000000000000000e+03 1.000000000000000000e+03 1.000000000000000000e+03 1.000000000000000000e+03 1.000000000000000000e+03 1.000000000000000000e+03 1.000000000000000000e+03 1.000000000000000000e+03 1.000000000000000000e+03 1.000000000000000000e+03 1.000000000000000000e+03 1.000000000000000000e+03 1.000000000000000000e+03 1.000000000000000000e+03 1.000000000000000000e+03 1.000000000000000000e+03 1.000000000000000000e+03 1.000000000000000000e+03 1.000000000000000000e+03 1.000000000000000000e+03 1.000000000000000000e+03 1.000000000000000000e+03 1.000000000000000000e+03 1.000000000000000000e+03 1.000000000000000000e+03 1.000000000000000000e+03 1.000000000000000000e+03 1.000000000000000000e+03 1.000000000000000000e+03 1.000000000000000000e+03 1.000000000000000000e+03 2.000000000000000000e+03 2.000000000000000000e+03 2.000000000000000000e+03 2.000000000000000000e+03 2.000000000000000000e+03 2.000000000000000000e+03 2.000000000000000000e+03 2.000000000000000000e+03 2.000000000000000000e+03 2.000000000000000000e+03 2.000000000000000000e+03 2.000000000000000000e+03 2.000000000000000000e+03 2.000000000000000000e+03 2.000000000000000000e+03 2.000000000000000000e+03 2.000000000000000000e+03 2.000000000000000000e+03 2.000000000000000000e+03 2.000000000000000000e+03 2.000000000000000000e+03 2.000000000000000000e+03 2.000000000000000000e+03 2.000000000000000000e+03 2.000000000000000000e+03 2.000000000000000000e+03 2.000000000000000000e+03 2.000000000000000000e+03 2.000000000000000000e+03 2.000000000000000000e+03 2.000000000000000000e+03 2.000000000000000000e+03 2.000000000000000000e+03 2.000000000000000000e+03 2.000000000000000000e+03 2.000000000000000000e+03 2.000000000000000000e+03 2.000000000000000000e+03 2.000000000000000000e+03 2.000000000000000000e+03 2.000000000000000000e+03 2.000000000000000000e+03 
2.000000000000000000e+03 2.000000000000000000e+03 2.000000000000000000e+03 2.000000000000000000e+03 2.000000000000000000e+03 2.000000000000000000e+03 2.000000000000000000e+03 2.000000000000000000e+03 2.000000000000000000e+03 2.000000000000000000e+03 2.000000000000000000e+03 2.000000000000000000e+03 2.000000000000000000e+03 2.000000000000000000e+03 2.000000000000000000e+03 2.000000000000000000e+03 2.000000000000000000e+03 2.000000000000000000e+03 2.000000000000000000e+03 2.000000000000000000e+03 2.000000000000000000e+03 2.000000000000000000e+03 3.500000000000000000e+03 3.500000000000000000e+03 3.500000000000000000e+03 3.500000000000000000e+03 3.500000000000000000e+03 3.500000000000000000e+03 3.500000000000000000e+03 3.500000000000000000e+03 3.500000000000000000e+03 3.500000000000000000e+03 3.500000000000000000e+03 3.500000000000000000e+03 3.500000000000000000e+03 3.500000000000000000e+03 3.500000000000000000e+03 3.500000000000000000e+03 3.500000000000000000e+03 3.500000000000000000e+03 3.500000000000000000e+03 3.500000000000000000e+03 3.500000000000000000e+03 3.500000000000000000e+03 3.500000000000000000e+03 3.500000000000000000e+03 3.500000000000000000e+03 3.500000000000000000e+03 3.500000000000000000e+03 3.500000000000000000e+03 3.500000000000000000e+03 3.500000000000000000e+03 3.500000000000000000e+03 3.500000000000000000e+03 3.500000000000000000e+03 3.500000000000000000e+03 3.500000000000000000e+03 3.500000000000000000e+03 3.500000000000000000e+03 3.500000000000000000e+03 3.500000000000000000e+03 3.500000000000000000e+03 3.500000000000000000e+03 3.500000000000000000e+03 3.500000000000000000e+03 3.500000000000000000e+03 3.500000000000000000e+03 3.500000000000000000e+03 3.500000000000000000e+03 3.500000000000000000e+03 3.500000000000000000e+03 3.500000000000000000e+03 3.500000000000000000e+03 3.500000000000000000e+03 3.500000000000000000e+03 3.500000000000000000e+03 3.500000000000000000e+03 3.500000000000000000e+03 3.500000000000000000e+03 3.500000000000000000e+03 3.500000000000000000e+03 3.500000000000000000e+03 3.500000000000000000e+03 3.500000000000000000e+03 3.500000000000000000e+03 3.500000000000000000e+03 dipy-1.11.0/dipy/data/3shells-1000-2000-3500-N193.bvec000066400000000000000000000347031476546756600206460ustar00rootroot000000000000000.000000000000000000e+00 9.999789999999999512e-01 0.000000000000000000e+00 -2.570549999999999918e-02 5.895179999999999865e-01 -2.357849999999999946e-01 -8.935779999999999834e-01 7.978399999999999936e-01 2.329370000000000052e-01 9.367199999999999971e-01 5.041299999999999670e-01 3.451989999999999781e-01 4.567649999999999766e-01 -4.874809999999999977e-01 -6.170330000000000537e-01 -5.785120000000000262e-01 -8.253639999999999866e-01 8.950759999999999827e-01 2.899920000000000275e-01 1.150140000000000051e-01 -7.999340000000000339e-01 5.124940000000000051e-01 -7.900049999999999573e-01 9.492810000000000414e-01 2.323179999999999967e-01 -1.967069999999999930e-02 2.159689999999999943e-01 7.726450000000000262e-01 -1.601529999999999898e-01 -1.461669999999999914e-01 8.873699999999999921e-01 -5.629889999999999617e-01 -3.813130000000000130e-01 -3.059540000000000037e-01 -3.326819999999999777e-01 -9.622389999999999555e-01 -9.595320000000000515e-01 4.509639999999999760e-01 -7.711919999999999886e-01 7.098160000000000025e-01 -6.945430000000000215e-01 6.815489999999999604e-01 -1.416890000000000094e-01 -7.403509999999999813e-01 -1.027560000000000001e-01 5.839130000000000154e-01 -8.775499999999999967e-02 -5.505060000000000509e-01 8.374430000000000485e-01 
3.629290000000000016e-01 -1.836109999999999964e-01 -7.183230000000000448e-01 4.327820000000000000e-01 5.018369999999999775e-01 -1.705180000000000029e-01 4.631950000000000234e-01 3.837130000000000263e-01 -7.141659999999999675e-01 2.592050000000000187e-01 0.000000000000000000e+00 3.636330000000000118e-02 5.708539999999999726e-01 -2.822049999999999836e-01 7.203509999999999636e-01 2.658909999999999885e-01 9.999789999999999512e-01 0.000000000000000000e+00 -2.570549999999999918e-02 5.895179999999999865e-01 -2.357849999999999946e-01 -8.935779999999999834e-01 7.978399999999999936e-01 2.329370000000000052e-01 9.367199999999999971e-01 5.041299999999999670e-01 3.451989999999999781e-01 4.567649999999999766e-01 -4.874809999999999977e-01 -6.170330000000000537e-01 -5.785120000000000262e-01 -8.253639999999999866e-01 8.950759999999999827e-01 2.899920000000000275e-01 1.150140000000000051e-01 -7.999340000000000339e-01 5.124940000000000051e-01 -7.900049999999999573e-01 9.492810000000000414e-01 2.323179999999999967e-01 -1.967069999999999930e-02 2.159689999999999943e-01 7.726450000000000262e-01 -1.601529999999999898e-01 -1.461669999999999914e-01 8.873699999999999921e-01 -5.629889999999999617e-01 -3.813130000000000130e-01 -3.059540000000000037e-01 -3.326819999999999777e-01 -9.622389999999999555e-01 -9.595320000000000515e-01 4.509639999999999760e-01 -7.711919999999999886e-01 7.098160000000000025e-01 -6.945430000000000215e-01 6.815489999999999604e-01 -1.416890000000000094e-01 -7.403509999999999813e-01 -1.027560000000000001e-01 5.839130000000000154e-01 -8.775499999999999967e-02 -5.505060000000000509e-01 8.374430000000000485e-01 3.629290000000000016e-01 -1.836109999999999964e-01 -7.183230000000000448e-01 4.327820000000000000e-01 5.018369999999999775e-01 -1.705180000000000029e-01 4.631950000000000234e-01 3.837130000000000263e-01 -7.141659999999999675e-01 2.592050000000000187e-01 0.000000000000000000e+00 3.636330000000000118e-02 5.708539999999999726e-01 -2.822049999999999836e-01 7.203509999999999636e-01 2.658909999999999885e-01 9.999789999999999512e-01 0.000000000000000000e+00 -2.570549999999999918e-02 5.895179999999999865e-01 -2.357849999999999946e-01 -8.935779999999999834e-01 7.978399999999999936e-01 2.329370000000000052e-01 9.367199999999999971e-01 5.041299999999999670e-01 3.451989999999999781e-01 4.567649999999999766e-01 -4.874809999999999977e-01 -6.170330000000000537e-01 -5.785120000000000262e-01 -8.253639999999999866e-01 8.950759999999999827e-01 2.899920000000000275e-01 1.150140000000000051e-01 -7.999340000000000339e-01 5.124940000000000051e-01 -7.900049999999999573e-01 9.492810000000000414e-01 2.323179999999999967e-01 -1.967069999999999930e-02 2.159689999999999943e-01 7.726450000000000262e-01 -1.601529999999999898e-01 -1.461669999999999914e-01 8.873699999999999921e-01 -5.629889999999999617e-01 -3.813130000000000130e-01 -3.059540000000000037e-01 -3.326819999999999777e-01 -9.622389999999999555e-01 -9.595320000000000515e-01 4.509639999999999760e-01 -7.711919999999999886e-01 7.098160000000000025e-01 -6.945430000000000215e-01 6.815489999999999604e-01 -1.416890000000000094e-01 -7.403509999999999813e-01 -1.027560000000000001e-01 5.839130000000000154e-01 -8.775499999999999967e-02 -5.505060000000000509e-01 8.374430000000000485e-01 3.629290000000000016e-01 -1.836109999999999964e-01 -7.183230000000000448e-01 4.327820000000000000e-01 5.018369999999999775e-01 -1.705180000000000029e-01 4.631950000000000234e-01 3.837130000000000263e-01 -7.141659999999999675e-01 2.592050000000000187e-01 0.000000000000000000e+00 
3.636330000000000118e-02 5.708539999999999726e-01 -2.822049999999999836e-01 7.203509999999999636e-01 2.658909999999999885e-01 0.000000000000000000e+00 -5.040010000000000150e-03 9.999919999999999920e-01 6.538610000000000255e-01 -7.692360000000000309e-01 -5.290949999999999820e-01 -2.635589999999999877e-01 1.337260000000000115e-01 9.318840000000000456e-01 1.441389999999999894e-01 -8.466939999999999467e-01 -8.503110000000000390e-01 -6.356720000000000148e-01 -3.939079999999999804e-01 6.768490000000000339e-01 -1.093469999999999998e-01 -5.250340000000000007e-01 -4.482420000000000154e-02 -5.454729999999999857e-01 -9.640499999999999625e-01 4.077669999999999906e-01 8.421389999999999709e-01 1.579929999999999946e-01 -2.376949999999999896e-01 7.870509999999999451e-01 -1.920310000000000072e-01 -9.571229999999999460e-01 -6.075340000000000185e-01 3.604129999999999834e-01 7.352739999999999831e-01 4.211110000000000131e-01 2.364819999999999978e-01 1.470370000000000010e-01 -2.037930000000000019e-01 -1.341130000000000100e-01 -2.694639999999999813e-01 2.097700000000000120e-01 -8.903370000000000450e-01 6.311750000000000416e-01 4.131589999999999985e-01 2.793949999999999906e-02 5.331010000000000471e-01 -7.292410000000000281e-01 3.932229999999999892e-01 8.253669999999999618e-01 -6.007820000000000382e-01 -3.396509999999999807e-01 -7.954839999999999689e-01 -4.622020000000000017e-01 -5.659300000000000441e-01 3.970810000000000173e-01 -6.957010000000000138e-01 6.863609999999999989e-01 6.943369999999999820e-01 -5.137690000000000312e-01 4.280519999999999881e-01 -8.125719999999999610e-01 -2.514669999999999961e-01 8.872579999999999911e-01 8.131860000000000477e-02 -9.046159999999999757e-01 -3.085970000000000102e-01 1.497950000000000115e-01 6.119139999999999580e-01 9.606829999999999536e-01 -5.040010000000000150e-03 9.999919999999999920e-01 6.538610000000000255e-01 -7.692360000000000309e-01 -5.290949999999999820e-01 -2.635589999999999877e-01 1.337260000000000115e-01 9.318840000000000456e-01 1.441389999999999894e-01 -8.466939999999999467e-01 -8.503110000000000390e-01 -6.356720000000000148e-01 -3.939079999999999804e-01 6.768490000000000339e-01 -1.093469999999999998e-01 -5.250340000000000007e-01 -4.482420000000000154e-02 -5.454729999999999857e-01 -9.640499999999999625e-01 4.077669999999999906e-01 8.421389999999999709e-01 1.579929999999999946e-01 -2.376949999999999896e-01 7.870509999999999451e-01 -1.920310000000000072e-01 -9.571229999999999460e-01 -6.075340000000000185e-01 3.604129999999999834e-01 7.352739999999999831e-01 4.211110000000000131e-01 2.364819999999999978e-01 1.470370000000000010e-01 -2.037930000000000019e-01 -1.341130000000000100e-01 -2.694639999999999813e-01 2.097700000000000120e-01 -8.903370000000000450e-01 6.311750000000000416e-01 4.131589999999999985e-01 2.793949999999999906e-02 5.331010000000000471e-01 -7.292410000000000281e-01 3.932229999999999892e-01 8.253669999999999618e-01 -6.007820000000000382e-01 -3.396509999999999807e-01 -7.954839999999999689e-01 -4.622020000000000017e-01 -5.659300000000000441e-01 3.970810000000000173e-01 -6.957010000000000138e-01 6.863609999999999989e-01 6.943369999999999820e-01 -5.137690000000000312e-01 4.280519999999999881e-01 -8.125719999999999610e-01 -2.514669999999999961e-01 8.872579999999999911e-01 8.131860000000000477e-02 -9.046159999999999757e-01 -3.085970000000000102e-01 1.497950000000000115e-01 6.119139999999999580e-01 9.606829999999999536e-01 -5.040010000000000150e-03 9.999919999999999920e-01 6.538610000000000255e-01 -7.692360000000000309e-01 -5.290949999999999820e-01 
-2.635589999999999877e-01 1.337260000000000115e-01 9.318840000000000456e-01 1.441389999999999894e-01 -8.466939999999999467e-01 -8.503110000000000390e-01 -6.356720000000000148e-01 -3.939079999999999804e-01 6.768490000000000339e-01 -1.093469999999999998e-01 -5.250340000000000007e-01 -4.482420000000000154e-02 -5.454729999999999857e-01 -9.640499999999999625e-01 4.077669999999999906e-01 8.421389999999999709e-01 1.579929999999999946e-01 -2.376949999999999896e-01 7.870509999999999451e-01 -1.920310000000000072e-01 -9.571229999999999460e-01 -6.075340000000000185e-01 3.604129999999999834e-01 7.352739999999999831e-01 4.211110000000000131e-01 2.364819999999999978e-01 1.470370000000000010e-01 -2.037930000000000019e-01 -1.341130000000000100e-01 -2.694639999999999813e-01 2.097700000000000120e-01 -8.903370000000000450e-01 6.311750000000000416e-01 4.131589999999999985e-01 2.793949999999999906e-02 5.331010000000000471e-01 -7.292410000000000281e-01 3.932229999999999892e-01 8.253669999999999618e-01 -6.007820000000000382e-01 -3.396509999999999807e-01 -7.954839999999999689e-01 -4.622020000000000017e-01 -5.659300000000000441e-01 3.970810000000000173e-01 -6.957010000000000138e-01 6.863609999999999989e-01 6.943369999999999820e-01 -5.137690000000000312e-01 4.280519999999999881e-01 -8.125719999999999610e-01 -2.514669999999999961e-01 8.872579999999999911e-01 8.131860000000000477e-02 -9.046159999999999757e-01 -3.085970000000000102e-01 1.497950000000000115e-01 6.119139999999999580e-01 9.606829999999999536e-01 0.000000000000000000e+00 -4.027949999999999760e-03 -3.987939999999999714e-03 -7.561780000000000168e-01 -2.464619999999999866e-01 -8.151469999999999549e-01 -3.633939999999999948e-01 -5.878510000000000124e-01 -2.780869999999999731e-01 -3.190299999999999803e-01 1.701830000000000009e-01 3.972519999999999940e-01 6.223229999999999595e-01 -7.792289999999999495e-01 -4.014300000000000090e-01 8.083110000000000017e-01 -2.076359999999999872e-01 4.436550000000000216e-01 7.863609999999999767e-01 2.395410000000000039e-01 4.402639999999999887e-01 -1.677849999999999897e-01 5.923939999999999761e-01 -2.058300000000000129e-01 -5.714719999999999800e-01 9.811919999999999531e-01 -1.930610000000000104e-01 -1.841800000000000104e-01 -9.189410000000000078e-01 6.618209999999999926e-01 -1.877240000000000020e-01 7.919089999999999741e-01 -9.126779999999999893e-01 9.299790000000000001e-01 -9.334540000000000060e-01 -3.853909999999999975e-02 -1.878710000000000102e-01 -6.270149999999999335e-02 -8.295329999999999371e-02 -5.704919999999999991e-01 -7.189079999999999915e-01 5.012929999999999886e-01 -6.694269999999999943e-01 -5.452120000000000299e-01 -5.551669999999999661e-01 -5.459920000000000329e-01 -9.364489999999999759e-01 -2.532760000000000011e-01 2.916480000000000183e-01 -7.402739999999999876e-01 8.992299999999999738e-01 -3.548969999999999816e-03 5.844730000000000203e-01 -5.158049999999999580e-01 8.408120000000000038e-01 -7.760289999999999688e-01 -4.387380000000000169e-01 -6.532470000000000221e-01 3.815569999999999795e-01 9.966880000000000184e-01 -4.246750000000000247e-01 7.608510000000000550e-01 9.475879999999999859e-01 -3.265830000000000122e-01 7.993519999999999792e-02 -4.027949999999999760e-03 -3.987939999999999714e-03 -7.561780000000000168e-01 -2.464619999999999866e-01 -8.151469999999999549e-01 -3.633939999999999948e-01 -5.878510000000000124e-01 -2.780869999999999731e-01 -3.190299999999999803e-01 1.701830000000000009e-01 3.972519999999999940e-01 6.223229999999999595e-01 -7.792289999999999495e-01 -4.014300000000000090e-01 
8.083110000000000017e-01 -2.076359999999999872e-01 4.436550000000000216e-01 7.863609999999999767e-01 2.395410000000000039e-01 4.402639999999999887e-01 -1.677849999999999897e-01 5.923939999999999761e-01 -2.058300000000000129e-01 -5.714719999999999800e-01 9.811919999999999531e-01 -1.930610000000000104e-01 -1.841800000000000104e-01 -9.189410000000000078e-01 6.618209999999999926e-01 -1.877240000000000020e-01 7.919089999999999741e-01 -9.126779999999999893e-01 9.299790000000000001e-01 -9.334540000000000060e-01 -3.853909999999999975e-02 -1.878710000000000102e-01 -6.270149999999999335e-02 -8.295329999999999371e-02 -5.704919999999999991e-01 -7.189079999999999915e-01 5.012929999999999886e-01 -6.694269999999999943e-01 -5.452120000000000299e-01 -5.551669999999999661e-01 -5.459920000000000329e-01 -9.364489999999999759e-01 -2.532760000000000011e-01 2.916480000000000183e-01 -7.402739999999999876e-01 8.992299999999999738e-01 -3.548969999999999816e-03 5.844730000000000203e-01 -5.158049999999999580e-01 8.408120000000000038e-01 -7.760289999999999688e-01 -4.387380000000000169e-01 -6.532470000000000221e-01 3.815569999999999795e-01 9.966880000000000184e-01 -4.246750000000000247e-01 7.608510000000000550e-01 9.475879999999999859e-01 -3.265830000000000122e-01 7.993519999999999792e-02 -4.027949999999999760e-03 -3.987939999999999714e-03 -7.561780000000000168e-01 -2.464619999999999866e-01 -8.151469999999999549e-01 -3.633939999999999948e-01 -5.878510000000000124e-01 -2.780869999999999731e-01 -3.190299999999999803e-01 1.701830000000000009e-01 3.972519999999999940e-01 6.223229999999999595e-01 -7.792289999999999495e-01 -4.014300000000000090e-01 8.083110000000000017e-01 -2.076359999999999872e-01 4.436550000000000216e-01 7.863609999999999767e-01 2.395410000000000039e-01 4.402639999999999887e-01 -1.677849999999999897e-01 5.923939999999999761e-01 -2.058300000000000129e-01 -5.714719999999999800e-01 9.811919999999999531e-01 -1.930610000000000104e-01 -1.841800000000000104e-01 -9.189410000000000078e-01 6.618209999999999926e-01 -1.877240000000000020e-01 7.919089999999999741e-01 -9.126779999999999893e-01 9.299790000000000001e-01 -9.334540000000000060e-01 -3.853909999999999975e-02 -1.878710000000000102e-01 -6.270149999999999335e-02 -8.295329999999999371e-02 -5.704919999999999991e-01 -7.189079999999999915e-01 5.012929999999999886e-01 -6.694269999999999943e-01 -5.452120000000000299e-01 -5.551669999999999661e-01 -5.459920000000000329e-01 -9.364489999999999759e-01 -2.532760000000000011e-01 2.916480000000000183e-01 -7.402739999999999876e-01 8.992299999999999738e-01 -3.548969999999999816e-03 5.844730000000000203e-01 -5.158049999999999580e-01 8.408120000000000038e-01 -7.760289999999999688e-01 -4.387380000000000169e-01 -6.532470000000000221e-01 3.815569999999999795e-01 9.966880000000000184e-01 -4.246750000000000247e-01 7.608510000000000550e-01 9.475879999999999859e-01 -3.265830000000000122e-01 7.993519999999999792e-02 dipy-1.11.0/dipy/data/__init__.py000066400000000000000000000310241476546756600165600ustar00rootroot00000000000000"""Read test or example data.""" import gzip import json from os.path import dirname, exists, join as pjoin import pickle import numpy as np from scipy.sparse import load_npz from dipy.core.gradients import GradientTable, gradient_table from dipy.core.sphere import HemiSphere, Sphere from dipy.data.fetcher import ( fetch_30_bundle_atlas_hcp842, fetch_bundle_atlas_hcp842, fetch_bundle_fa_hcp, fetch_bundle_warp_dataset, fetch_bundles_2_subjects, fetch_cenir_multib, fetch_cfin_multib, fetch_deepn4_test, 
fetch_deepn4_tf_weights, fetch_deepn4_torch_weights, fetch_disco1_dataset, fetch_disco2_dataset, fetch_disco3_dataset, fetch_disco_dataset, fetch_evac_test, fetch_evac_tf_weights, fetch_evac_torch_weights, fetch_gold_standard_io, fetch_hbn, fetch_hcp, fetch_isbi2013_2shell, fetch_ivim, fetch_mni_template, fetch_ptt_minimal_dataset, fetch_resdnn_tf_weights, fetch_resdnn_torch_weights, fetch_scil_b0, fetch_sherbrooke_3shell, fetch_stanford_hardi, fetch_stanford_labels, fetch_stanford_pve_maps, fetch_stanford_t1, fetch_stanford_tracks, fetch_syn_data, fetch_synb0_test, fetch_synb0_weights, fetch_taiwan_ntu_dsi, fetch_target_tractogram_hcp, fetch_tissue_data, get_bundle_atlas_hcp842, get_fnames, get_target_tractogram_hcp, get_two_hcp842_bundles, read_DiB_70_lte_pte_ste, read_DiB_217_lte_pte_ste, read_bundles_2_subjects, read_cenir_multib, read_cfin_dwi, read_cfin_t1, read_five_af_bundles, read_isbi2013_2shell, read_ivim, read_mni_template, read_qte_lte_pte, read_scil_b0, read_sherbrooke_3shell, read_stanford_hardi, read_stanford_labels, read_stanford_pve_maps, read_stanford_t1, read_syn_data, read_taiwan_ntu_dsi, read_tissue_data, ) from dipy.io.image import load_nifti from dipy.testing.decorators import warning_for_keywords from dipy.tracking.streamline import relist_streamlines from dipy.utils.arrfuncs import as_native_array __all__ = [ "fetch_30_bundle_atlas_hcp842", "fetch_bundle_atlas_hcp842", "fetch_bundle_fa_hcp", "fetch_bundle_warp_dataset", "fetch_bundles_2_subjects", "fetch_cenir_multib", "fetch_cfin_multib", "fetch_deepn4_test", "fetch_deepn4_tf_weights", "fetch_deepn4_torch_weights", "fetch_disco1_dataset", "fetch_disco2_dataset", "fetch_disco3_dataset", "fetch_disco_dataset", "fetch_evac_test", "fetch_evac_tf_weights", "fetch_evac_torch_weights", "fetch_gold_standard_io", "fetch_hbn", "fetch_hcp", "fetch_isbi2013_2shell", "fetch_ivim", "fetch_mni_template", "fetch_ptt_minimal_dataset", "fetch_resdnn_tf_weights", "fetch_resdnn_torch_weights", "fetch_scil_b0", "fetch_sherbrooke_3shell", "fetch_stanford_hardi", "fetch_stanford_labels", "fetch_stanford_pve_maps", "fetch_stanford_t1", "fetch_stanford_tracks", "fetch_syn_data", "fetch_synb0_test", "fetch_synb0_weights", "fetch_taiwan_ntu_dsi", "fetch_target_tractogram_hcp", "fetch_tissue_data", "get_bundle_atlas_hcp842", "get_fnames", "get_target_tractogram_hcp", "get_two_hcp842_bundles", "read_DiB_70_lte_pte_ste", "read_DiB_217_lte_pte_ste", "read_bundles_2_subjects", "read_cenir_multib", "read_cfin_dwi", "read_cfin_t1", "read_five_af_bundles", "read_isbi2013_2shell", "read_ivim", "read_mni_template", "read_qte_lte_pte", "read_scil_b0", "read_sherbrooke_3shell", "read_stanford_hardi", "read_stanford_labels", "read_stanford_pve_maps", "read_stanford_t1", "read_syn_data", "read_taiwan_ntu_dsi", "read_tissue_data", ] def loads_compat(byte_data): return pickle.loads(byte_data, encoding="latin1") DATA_DIR = pjoin(dirname(__file__), "files") SPHERE_FILES = { "symmetric362": pjoin(DATA_DIR, "evenly_distributed_sphere_362.npz"), "symmetric642": pjoin(DATA_DIR, "evenly_distributed_sphere_642.npz"), "symmetric724": pjoin(DATA_DIR, "evenly_distributed_sphere_724.npz"), "repulsion724": pjoin(DATA_DIR, "repulsion724.npz"), "repulsion100": pjoin(DATA_DIR, "repulsion100.npz"), "repulsion200": pjoin(DATA_DIR, "repulsion200.npz"), } class DataError(Exception): pass @warning_for_keywords() def get_sim_voxels(*, name="fib1"): """provide some simulated voxel data Parameters ---------- name : str, which file? 
'fib0', 'fib1' or 'fib2' Returns ------- dix : dictionary, where dix['data'] returns a 2d array where every row is a simulated voxel with different orientation Examples -------- >>> from dipy.data import get_sim_voxels >>> sv=get_sim_voxels(name='fib1') >>> sv['data'].shape == (100, 102) True >>> sv['fibres'] '1' >>> sv['gradients'].shape == (102, 3) True >>> sv['bvals'].shape == (102,) True >>> sv['snr'] '60' >>> sv2=get_sim_voxels(name='fib2') >>> sv2['fibres'] '2' >>> sv2['snr'] '80' Notes ----- These sim voxels were provided by M.M. Correia using Rician noise. """ if name == "fib0": fname = pjoin(DATA_DIR, "fib0.pkl.gz") if name == "fib1": fname = pjoin(DATA_DIR, "fib1.pkl.gz") if name == "fib2": fname = pjoin(DATA_DIR, "fib2.pkl.gz") return loads_compat(gzip.open(fname, "rb").read()) @warning_for_keywords() def get_skeleton(*, name="C1"): """Provide skeletons generated from Local Skeleton Clustering (LSC). Parameters ---------- name : str, 'C1' or 'C3' Returns ------- dix : dictionary Examples -------- >>> from dipy.data import get_skeleton >>> C=get_skeleton(name='C1') >>> len(C.keys()) 117 >>> for c in C: break >>> sorted(C[c].keys()) ['N', 'hidden', 'indices', 'most'] """ if name == "C1": fname = pjoin(DATA_DIR, "C1.pkl.gz") if name == "C3": fname = pjoin(DATA_DIR, "C3.pkl.gz") return loads_compat(gzip.open(fname, "rb").read()) @warning_for_keywords() def get_sphere(*, name="symmetric362"): """provide triangulated spheres Parameters ---------- name : str which sphere - one of: * 'symmetric362' * 'symmetric642' * 'symmetric724' * 'repulsion724' * 'repulsion100' * 'repulsion200' Returns ------- sphere : a dipy.core.sphere.Sphere class instance Examples -------- >>> import numpy as np >>> from dipy.data import get_sphere >>> sphere = get_sphere(name="symmetric362") >>> verts, faces = sphere.vertices, sphere.faces >>> verts.shape == (362, 3) True >>> faces.shape == (720, 3) True >>> verts, faces = get_sphere(name="not a sphere name") #doctest: +IGNORE_EXCEPTION_DETAIL Traceback (most recent call last): ... DataError: No sphere called "not a sphere name" """ # noqa: E501 fname = SPHERE_FILES.get(name) if fname is None: raise DataError(f'No sphere called "{name}"') res = np.load(fname) # Set to native byte order to avoid errors in compiled routines for # big-endian platforms, when using these spheres. 
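# (as_native_array is expected to return its input unchanged when the byte order is already native, so on little-endian machines this is effectively a no-op.)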
return Sphere( xyz=as_native_array(res["vertices"]), faces=as_native_array(res["faces"]) ) default_sphere = HemiSphere.from_sphere(get_sphere(name="repulsion724")) small_sphere = HemiSphere.from_sphere(get_sphere(name="symmetric362")) def _gradient_from_file(filename): """Read a gradient file, saved in the dipy data directory as a text file compatible with np.loadtxt""" def gtab_getter(): gradfile = pjoin(DATA_DIR, filename) grad = np.loadtxt(gradfile, delimiter=",") gtab = GradientTable(grad) return gtab return gtab_getter get_3shell_gtab = _gradient_from_file("gtab_3shell.txt") get_isbi2013_2shell_gtab = _gradient_from_file("gtab_isbi2013_2shell.txt") get_gtab_taiwan_dsi = _gradient_from_file("gtab_taiwan_dsi.txt") def dsi_voxels(): fimg, fbvals, fbvecs = get_fnames(name="small_101D") bvals = np.loadtxt(fbvals) bvecs = np.loadtxt(fbvecs).T data, _ = load_nifti(fimg) gtab = gradient_table(bvals, bvecs=bvecs) return data, gtab def dsi_deconv_voxels(): from dipy.sims.voxel import sticks_and_ball gtab = gradient_table(np.loadtxt(get_fnames(name="dsi515btable"))) data = np.zeros((2, 2, 2, 515)) for ix in range(2): for iy in range(2): for iz in range(2): data[ix, iy, iz], _ = sticks_and_ball( gtab, d=0.0015, S0=1.0, angles=[(0, 0), (90, 0)], fractions=[50, 50], snr=None, ) return data, gtab def mrtrix_spherical_functions(): """Spherical functions represented by spherical harmonic coefficients and evaluated on a discrete sphere. Returns ------- func_coef : array (2, 3, 4, 45) Functions represented by the coefficients associated with the mrtrix spherical harmonic basis of maximal order (l) 8. func_discrete : array (2, 3, 4, 81) Functions evaluated on `sphere`. sphere : Sphere The discrete sphere, points on the surface of a unit sphere, used to evaluate the functions. Notes ----- These coefficients were obtained by using the dwi2SH command of mrtrix. """ func_discrete, _ = load_nifti(pjoin(DATA_DIR, "func_discrete.nii.gz")) func_coef, _ = load_nifti(pjoin(DATA_DIR, "func_coef.nii.gz")) gradients = np.loadtxt(pjoin(DATA_DIR, "sphere_grad.txt")) # gradients[0] and the first volume of func_discrete, # func_discrete[..., 0], are associated with the b=0 signal. # gradients[:, 3] are the b-values for each gradient/volume. sphere = Sphere(xyz=gradients[1:, :3]) return func_coef, func_discrete[..., 1:], sphere dipy_cmaps = None def get_cmap(name): """Make a callable, similar to matplotlib.pyplot.get_cmap.""" global dipy_cmaps if dipy_cmaps is None: filename = pjoin(DATA_DIR, "dipy_colormaps.json") with open(filename) as f: dipy_cmaps = json.load(f) desc = dipy_cmaps.get(name) if desc is None: return None def simple_cmap(v): """Emulates matplotlib colormap callable""" rgba = np.ones((len(v), 4)) for i, color in enumerate(("red", "green", "blue")): x, y0, y1 = zip(*desc[color]) # Matplotlib allows more complex colormaps, but for users who do # not have Matplotlib dipy makes a few simple colormaps available. # These colormaps are simple because y0 == y1, and therefore we # ignore y1 here.
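# Each channel is therefore piecewise linear: np.interp evaluates the (x, y0) control points at the query values v, which matches matplotlib's LinearSegmentedColormap behavior in the y0 == y1 case.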
rgba[:, i] = np.interp(v, x, y0) return rgba return simple_cmap def two_cingulum_bundles(): fname = get_fnames(name="cb_2") res = np.load(fname) cb1 = relist_streamlines(res["points"], res["offsets"]) cb2 = relist_streamlines(res["points2"], res["offsets2"]) return cb1, cb2 def matlab_life_results(): matlab_rmse = np.load(pjoin(DATA_DIR, "life_matlab_rmse.npy")) matlab_weights = np.load(pjoin(DATA_DIR, "life_matlab_weights.npy")) return matlab_rmse, matlab_weights @warning_for_keywords() def load_sdp_constraints(model_name, *, order=None): """Import semidefinite programming constraint matrices for different models. Generated as described for example in :footcite:p:`DelaHaije2020`. Parameters ---------- model_name : string A string identifying the model that is to be constrained. order : unsigned int, optional A non-negative integer that represent the order or instance of the model. Default: None. Returns ------- sdp_constraints : array Constraint matrices Notes ----- The constraints will be loaded from a file named 'id_constraint_order.npz'. References ---------- .. footbibliography:: """ file = model_name + "_constraint" if order is not None: file += f"_{str(order)}" if order is None: file += "_SC" file += ".npz" path = pjoin(DATA_DIR, file) if not exists(path): raise ValueError(f"Constraints file '{file}' not found.") try: array = load_npz(path) n, x = array.shape sdp_constraints = [array[i * x : (i + 1) * x] for i in range(n // x)] return sdp_constraints except Exception as e: raise ValueError(f"Failed to read constraints file '{file}'.") from e dipy-1.11.0/dipy/data/fetcher.py000066400000000000000000003207251476546756600164520ustar00rootroot00000000000000import contextlib from hashlib import md5 import json import logging import os import os.path as op from os.path import join as pjoin import random from shutil import copyfileobj import tarfile import tempfile from urllib.request import Request, urlopen import zipfile import nibabel as nib import numpy as np from tqdm.auto import tqdm from dipy.core.gradients import ( gradient_table, gradient_table_from_gradient_strength_bvecs, ) from dipy.io.gradients import read_bvals_bvecs from dipy.io.image import load_nifti, load_nifti_data, save_nifti from dipy.io.streamline import load_trk from dipy.testing.decorators import warning_for_keywords from dipy.utils.optpkg import TripWire, optional_package # Set a user-writeable file-system location to put files: if "DIPY_HOME" in os.environ: dipy_home = os.environ["DIPY_HOME"] else: dipy_home = pjoin(op.expanduser("~"), ".dipy") # The URL to the University of Washington Researchworks repository: UW_RW_URL = "https://digital.lib.washington.edu/researchworks/bitstream/handle/" boto3, has_boto3, _ = optional_package("boto3") HEADER_LIST = [ {"User-Agent": "Mozilla/5.0 (Windows NT 6.1; Win64; x64)"}, # Firefox 77 Mac { "User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:77.0) Gecko/20100101 Firefox/77.0", # noqa "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8", # noqa "Accept-Language": "en-US,en;q=0.5", "Referer": "https://www.google.com/", "DNT": "1", "Connection": "keep-alive", "Upgrade-Insecure-Requests": "1", }, # Firefox 77 Windows { "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:77.0) Gecko/20100101 Firefox/77.0", # noqa "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8", # noqa "Accept-Language": "en-US,en;q=0.5", "Accept-Encoding": "gzip, deflate, br", "Referer": "https://www.google.com/", "DNT": 
"1", "Connection": "keep-alive", "Upgrade-Insecure-Requests": "1", }, # Chrome 83 Mac { "Connection": "keep-alive", "DNT": "1", "Upgrade-Insecure-Requests": "1", "User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.97 Safari/537.36", # noqa "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9", # noqa "Sec-Fetch-Site": "none", "Sec-Fetch-Mode": "navigate", "Sec-Fetch-Dest": "document", "Referer": "https://www.google.com/", "Accept-Encoding": "gzip, deflate, br", "Accept-Language": "en-GB,en-US;q=0.9,en;q=0.8", }, # Chrome 83 Windows { "Connection": "keep-alive", "Upgrade-Insecure-Requests": "1", "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.97 Safari/537.36", # noqa "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9", # noqa "Sec-Fetch-Site": "same-origin", "Sec-Fetch-Mode": "navigate", "Sec-Fetch-User": "?1", "Sec-Fetch-Dest": "document", "Referer": "https://www.google.com/", "Accept-Encoding": "gzip, deflate, br", "Accept-Language": "en-US,en;q=0.9", }, ] class FetcherError(Exception): pass def _log(msg): """Helper function used as short hand for logging.""" logger = logging.getLogger(__name__) logger.info(msg) @warning_for_keywords() def copyfileobj_withprogress(fsrc, fdst, total_length, *, length=16 * 1024): for _ in tqdm( range(0, int(total_length), length), unit=" MB", bar_format="{l_bar}{bar}| {n_fmt}/{total_fmt}{unit} [{elapsed}]", ): buf = fsrc.read(length) if not buf: break fdst.write(buf) def _already_there_msg(folder): """ Prints a message indicating that a certain data-set is already in place """ msg = "Dataset is already in place. If you want to fetch it again " msg += f"please first remove the folder {folder} " _log(msg) def _get_file_md5(filename): """Compute the md5 checksum of a file""" md5_data = md5() with open(filename, "rb") as f: for chunk in iter(lambda: f.read(128 * md5_data.block_size), b""): md5_data.update(chunk) return md5_data.hexdigest() @warning_for_keywords() def check_md5(filename, *, stored_md5=None): """ Computes the md5 of filename and check if it matches with the supplied string md5 Parameters ---------- filename : string Path to a file. md5 : string Known md5 of filename to check against. If None (default), checking is skipped """ if stored_md5 is not None: computed_md5 = _get_file_md5(filename) if stored_md5 != computed_md5: msg = f"""The downloaded file, {filename}, does not have the expected md5 checksum of "{stored_md5}". Instead, the md5 checksum was: "{computed_md5}". This could mean that something is wrong with the file or that the upstream file has been updated. You can try downloading the file again or updating to the newest version of dipy.""" raise FetcherError(msg) def _get_file_data(fname, url, *, use_headers=False): """Get data from url and write it to file. Parameters ---------- fname : str The filename to write the data to. url : str The URL to get the data from. use_headers : bool, optional Whether to use headers when downloading files. 
""" req = url if use_headers: hdr = random.choice(HEADER_LIST) req = Request(url, headers=hdr) with contextlib.closing(urlopen(req)) as opener: try: response_size = opener.headers["content-length"] except KeyError: response_size = None with open(fname, "wb") as data: if response_size is None: copyfileobj(opener, data) else: copyfileobj_withprogress(opener, data, response_size) @warning_for_keywords() def fetch_data(files, folder, *, data_size=None, use_headers=False): """Downloads files to folder and checks their md5 checksums Parameters ---------- files : dictionary For each file in `files` the value should be (url, md5). The file will be downloaded from url if the file does not already exist or if the file exists but the md5 checksum does not match. folder : str The directory where to save the file, the directory will be created if it does not already exist. data_size : str, optional A string describing the size of the data (e.g. "91 MB") to be logged to the screen. Default does not produce any information about data size. use_headers : bool, optional Whether to use headers when downloading files. Raises ------ FetcherError Raises if the md5 checksum of the file does not match the expected value. The downloaded file is not deleted when this error is raised. """ if not op.exists(folder): _log(f"Creating new folder {folder}") os.makedirs(folder) if data_size is not None: _log(f"Data size is approximately {data_size}") all_skip = True for f in files: url, md5 = files[f] fullpath = pjoin(folder, f) if op.exists(fullpath) and (_get_file_md5(fullpath) == md5): continue all_skip = False _log(f'Downloading "{f}" to {folder}') _log(f"From: {url}") _get_file_data(fullpath, url, use_headers=use_headers) check_md5(fullpath, stored_md5=md5) if all_skip: _already_there_msg(folder) else: _log(f"Files successfully downloaded to {folder}") @warning_for_keywords() def _make_fetcher( name, folder, baseurl, remote_fnames, local_fnames, *, md5_list=None, optional_fnames=None, doc="", data_size=None, msg=None, unzip=False, use_headers=False, ): """Create a new fetcher Parameters ---------- name : str The name of the fetcher function. folder : str The full path to the folder in which the files would be placed locally. Typically, this is something like 'pjoin(dipy_home, 'foo')' baseurl : str The URL from which this fetcher reads files remote_fnames : list of strings The names of the files in the baseurl location local_fnames : list of strings The names of the files to be saved on the local filesystem md5_list : list of strings, optional The md5 checksums of the files. Used to verify the content of the files. Default: None, skipping checking md5. optional_fnames : str, optional The name of the optional file to be saved on the local filesystem. doc : str, optional. Documentation of the fetcher. data_size : str, optional. If provided, is sent as a message to the user before downloading starts. msg : str, optional. A message to print to screen when fetching takes place. Default (None) is to print nothing unzip : bool, optional Whether to unzip the file(s) after downloading them. Supports zip, gz, and tar.gz files. use_headers : bool, optional Whether to use headers when downloading files. 
returns ------- fetcher : function A function that, when called, fetches data according to the designated inputs """ optional_fnames = optional_fnames or [] def fetcher(*, include_optional=False): files = {} for ( i, (f, n), ) in enumerate(zip(remote_fnames, local_fnames)): if n in optional_fnames and not include_optional: continue files[n] = (baseurl + f, md5_list[i] if md5_list is not None else None) fetch_data(files, folder, data_size=data_size, use_headers=use_headers) if msg is not None: _log(msg) if unzip: for f in local_fnames: split_ext = op.splitext(f) if split_ext[-1] in (".gz", ".bz2"): if op.splitext(split_ext[0])[-1] == ".tar": ar = tarfile.open(pjoin(folder, f)) ar.extractall(path=folder) ar.close() else: raise ValueError("File extension is not recognized") elif split_ext[-1] == ".zip": z = zipfile.ZipFile(pjoin(folder, f), "r") files[f] += (tuple(z.namelist()),) z.extractall(folder) z.close() else: raise ValueError("File extension is not recognized") return files, folder fetcher.__name__ = name fetcher.__doc__ = doc return fetcher fetch_isbi2013_2shell = _make_fetcher( "fetch_isbi2013_2shell", pjoin(dipy_home, "isbi2013"), UW_RW_URL + "1773/38465/", ["phantom64.nii.gz", "phantom64.bval", "phantom64.bvec"], ["phantom64.nii.gz", "phantom64.bval", "phantom64.bvec"], md5_list=[ "42911a70f232321cf246315192d69c42", "90e8cf66e0f4d9737a3b3c0da24df5ea", "4b7aa2757a1ccab140667b76e8075cb1", ], doc="Download a 2-shell software phantom dataset", data_size="", ) fetch_stanford_labels = _make_fetcher( "fetch_stanford_labels", pjoin(dipy_home, "stanford_hardi"), "https://stacks.stanford.edu/file/druid:yx282xq2090/", ["aparc-reduced.nii.gz", "label_info.txt"], ["aparc-reduced.nii.gz", "label_info.txt"], md5_list=["742de90090d06e687ce486f680f6d71a", "39db9f0f5e173d7a2c2e51b07d5d711b"], doc="Download reduced freesurfer aparc image from stanford web site", ) fetch_sherbrooke_3shell = _make_fetcher( "fetch_sherbrooke_3shell", pjoin(dipy_home, "sherbrooke_3shell"), UW_RW_URL + "1773/38475/", ["HARDI193.nii.gz", "HARDI193.bval", "HARDI193.bvec"], ["HARDI193.nii.gz", "HARDI193.bval", "HARDI193.bvec"], md5_list=[ "0b735e8f16695a37bfbd66aab136eb66", "e9b9bb56252503ea49d31fb30a0ac637", "0c83f7e8b917cd677ad58a078658ebb7", ], doc="Download a 3shell HARDI dataset with 192 gradient direction", ) fetch_stanford_hardi = _make_fetcher( "fetch_stanford_hardi", pjoin(dipy_home, "stanford_hardi"), "https://stacks.stanford.edu/file/druid:yx282xq2090/", ["dwi.nii.gz", "dwi.bvals", "dwi.bvecs"], ["HARDI150.nii.gz", "HARDI150.bval", "HARDI150.bvec"], md5_list=[ "0b18513b46132b4d1051ed3364f2acbc", "4e08ee9e2b1d2ec3fddb68c70ae23c36", "4c63a586f29afc6a48a5809524a76cb4", ], doc="Download a HARDI dataset with 160 gradient directions", ) fetch_resdnn_tf_weights = _make_fetcher( "fetch_resdnn_tf_weights", pjoin(dipy_home, "histo_resdnn_weights"), "https://ndownloader.figshare.com/files/", ["22736240"], ["resdnn_weights_mri_2018.h5"], md5_list=["f0e118d72ab804a464494bd9015227f4"], doc="Download ResDNN Tensorflow model weights for Nath et. al 2018", ) fetch_resdnn_torch_weights = _make_fetcher( "fetch_resdnn_torch_weights", pjoin(dipy_home, "histo_resdnn_weights"), "https://ndownloader.figshare.com/files/", ["50019429"], ["histo_weights.pth"], md5_list=["ca13692bbbaea725ff8b5df2d3a2779a"], doc="Download ResDNN Pytorch model weights for Nath et. 
al 2018", ) fetch_synb0_weights = _make_fetcher( "fetch_synb0_weights", pjoin(dipy_home, "synb0"), "https://ndownloader.figshare.com/files/", ["36379914", "36379917", "36379920", "36379923", "36379926"], [ "synb0_default_weights1.h5", "synb0_default_weights2.h5", "synb0_default_weights3.h5", "synb0_default_weights4.h5", "synb0_default_weights5.h5", ], md5_list=[ "a9362c75bc28616167a11a42fe5d004e", "9dc9353d6ff741d8e22b8569f157c56e", "e548f341e4f12d63dfbed306233fddce", "8cb7a3492d08e4c9b8938277d6fd9b75", "5e796f892605b3bdb9cb9678f1c6ac11", ], doc="Download Synb0 model weights for Schilling et. al 2019", ) fetch_synb0_test = _make_fetcher( "fetch_synb0_test", pjoin(dipy_home, "synb0"), "https://ndownloader.figshare.com/files/", ["36379911", "36671850"], ["test_input_synb0.npz", "test_output_synb0.npz"], md5_list=["987203aa73de2dac8770f39ed506dc0c", "515544fbcafd9769785502821b47b661"], doc="Download Synb0 test data for Schilling et. al 2019", ) fetch_deepn4_tf_weights = _make_fetcher( "fetch_deepn4_tf_weights", pjoin(dipy_home, "deepn4"), "https://ndownloader.figshare.com/files/", ["44673313"], ["model_weights.h5"], md5_list=["ef264edd554177a180cf99162dbd2745"], doc="Download DeepN4 model weights for Kanakaraj et. al 2024", ) fetch_deepn4_torch_weights = _make_fetcher( "fetch_deepn4_torch_weights", pjoin(dipy_home, "deepn4"), "https://ndownloader.figshare.com/files/", ["52285805"], ["deepn4_torch_weights"], md5_list=["97c5a5f8356a3d0eeca1c6bb7949c8b8"], doc="Download DeepN4 model weights for Kanakaraj et. al 2024", ) fetch_deepn4_test = _make_fetcher( "fetch_deepn4_test", pjoin(dipy_home, "deepn4"), "https://ndownloader.figshare.com/files/", ["48842938", "52454531"], ["test_input_deepn4.npz", "new_test_output_deepn4.npz"], md5_list=["07aa7cc7c7f839683a0aad5bb853605b", "6da15c4358fd13c99773eedeb93953c7"], doc="Download DeepN4 test data for Kanakaraj et. al 2024", ) fetch_evac_tf_weights = _make_fetcher( "fetch_evac_tf_weights", pjoin(dipy_home, "evac"), "https://ndownloader.figshare.com/files/", ["43037191"], ["evac_default_weights.h5"], md5_list=["491cfa4f9a2860fad6c19f2b71b918e1"], doc="Download EVAC+ model weights for Park et. al 2022", ) fetch_evac_torch_weights = _make_fetcher( "fetch_evac_torch_weights", pjoin(dipy_home, "evac"), "https://ndownloader.figshare.com/files/", ["50019432"], ["evac_weights.pth"], md5_list=["b2bab512a0f899f089d9dd9ffc44f65b"], doc="Download EVAC+ model weights for Park et. al 2022", ) fetch_evac_test = _make_fetcher( "fetch_evac_test", pjoin(dipy_home, "evac"), "https://ndownloader.figshare.com/files/", ["48891958"], ["evac_test_data.npz"], md5_list=["072a0dd6d2cddf8a3697b6a772e06e29"], doc="Download EVAC+ test data for Park et. 
al 2022", ) fetch_stanford_t1 = _make_fetcher( "fetch_stanford_t1", pjoin(dipy_home, "stanford_hardi"), "https://stacks.stanford.edu/file/druid:yx282xq2090/", ["t1.nii.gz"], ["t1.nii.gz"], md5_list=["a6a140da6a947d4131b2368752951b0a"], ) fetch_stanford_pve_maps = _make_fetcher( "fetch_stanford_pve_maps", pjoin(dipy_home, "stanford_hardi"), "https://stacks.stanford.edu/file/druid:yx282xq2090/", ["pve_csf.nii.gz", "pve_gm.nii.gz", "pve_wm.nii.gz"], ["pve_csf.nii.gz", "pve_gm.nii.gz", "pve_wm.nii.gz"], md5_list=[ "2c498e4fed32bca7f726e28aa86e9c18", "1654b20aeb35fc2734a0d7928b713874", "2e244983cf92aaf9f9d37bc7716b37d5", ], ) fetch_stanford_tracks = _make_fetcher( "fetch_stanford_tracks", pjoin(dipy_home, "stanford_hardi"), "https://raw.githubusercontent.com/dipy/dipy_datatest/main/", [ "hardi-lr-superiorfrontal.trk", ], [ "hardi-lr-superiorfrontal.trk", ], md5_list=[ "2d49aaf6ad6c10d8d069bfb319bf3541", ], doc="Download stanford track for examples", data_size="1.4MB", ) fetch_taiwan_ntu_dsi = _make_fetcher( "fetch_taiwan_ntu_dsi", pjoin(dipy_home, "taiwan_ntu_dsi"), UW_RW_URL + "1773/38480/", ["DSI203.nii.gz", "DSI203.bval", "DSI203.bvec", "DSI203_license.txt"], ["DSI203.nii.gz", "DSI203.bval", "DSI203.bvec", "DSI203_license.txt"], md5_list=[ "950408c0980a7154cb188666a885a91f", "602e5cb5fad2e7163e8025011d8a6755", "a95eb1be44748c20214dc7aa654f9e6b", "7fa1d5e272533e832cc7453eeba23f44", ], doc="Download a DSI dataset with 203 gradient directions", msg="See DSI203_license.txt for LICENSE. For the complete datasets" + " please visit https://dsi-studio.labsolver.org", data_size="91MB", ) fetch_syn_data = _make_fetcher( "fetch_syn_data", pjoin(dipy_home, "syn_test"), UW_RW_URL + "1773/38476/", ["t1.nii.gz", "b0.nii.gz"], ["t1.nii.gz", "b0.nii.gz"], md5_list=["701bda02bb769655c7d4a9b1df2b73a6", "e4b741f0c77b6039e67abb2885c97a78"], data_size="12MB", doc="Download t1 and b0 volumes from the same session", ) fetch_mni_template = _make_fetcher( "fetch_mni_template", pjoin(dipy_home, "mni_template"), "https://ndownloader.figshare.com/files/", [ "5572676?private_link=4b8666116a0128560fb5", "5572673?private_link=93216e750d5a7e568bda", "5572670?private_link=33c92d54d1afb9aa7ed2", "5572661?private_link=584319b23e7343fed707", ], [ "mni_icbm152_t2_tal_nlin_asym_09a.nii", "mni_icbm152_t1_tal_nlin_asym_09a.nii", "mni_icbm152_t1_tal_nlin_asym_09c_mask.nii", "mni_icbm152_t1_tal_nlin_asym_09c.nii", ], md5_list=[ "f41f2e1516d880547fbf7d6a83884f0d", "1ea8f4f1e41bc17a94602e48141fdbc8", "a243e249cd01a23dc30f033b9656a786", "3d5dd9b0cd727a17ceec610b782f66c1", ], doc="fetch the MNI 2009a T1 and T2, and 2009c T1 and T1 mask files", data_size="70MB", ) fetch_scil_b0 = _make_fetcher( "fetch_scil_b0", dipy_home, UW_RW_URL + "1773/38479/", ["datasets_multi-site_all_companies.zip"], ["datasets_multi-site_all_companies.zip"], md5_list=["e9810fa5bf21b99da786647994d7d5b7"], doc="Download b=0 datasets from multiple MR systems (GE, Philips, " + "Siemens) and different magnetic fields (1.5T and 3T)", data_size="9.2MB", unzip=True, ) fetch_bundles_2_subjects = _make_fetcher( "fetch_bundles_2_subjects", pjoin(dipy_home, "exp_bundles_and_maps"), UW_RW_URL + "1773/38477/", ["bundles_2_subjects.tar.gz"], ["bundles_2_subjects.tar.gz"], md5_list=["97756fbef11ce2df31f1bedf1fc7aac7"], data_size="234MB", doc="Download 2 subjects from the SNAIL dataset with their bundles", unzip=True, ) fetch_ivim = _make_fetcher( "fetch_ivim", pjoin(dipy_home, "ivim"), "https://ndownloader.figshare.com/files/", ["5305243", "5305246", "5305249"], ["ivim.nii.gz", 
"ivim.bval", "ivim.bvec"], md5_list=[ "cda596f89dc2676af7d9bf1cabccf600", "f03d89f84aa9a9397103a400e43af43a", "fb633a06b02807355e49ccd85cb92565", ], doc="Download IVIM dataset", ) fetch_cfin_multib = _make_fetcher( "fetch_cfin_multib", pjoin(dipy_home, "cfin_multib"), UW_RW_URL + "/1773/38488/", [ "T1.nii", "__DTI_AX_ep2d_2_5_iso_33d_20141015095334_4.nii", "__DTI_AX_ep2d_2_5_iso_33d_20141015095334_4.bval", "__DTI_AX_ep2d_2_5_iso_33d_20141015095334_4.bvec", ], [ "T1.nii", "__DTI_AX_ep2d_2_5_iso_33d_20141015095334_4.nii", "__DTI_AX_ep2d_2_5_iso_33d_20141015095334_4.bval", "__DTI_AX_ep2d_2_5_iso_33d_20141015095334_4.bvec", ], md5_list=[ "889883b5e7d93a6e372bc760ea887e7c", "9daea1d01d68fd0055a3b34f5ffd5f6e", "3ee44135fde7ea5c9b8c801414bdde2c", "948373391de950e7cc1201ba9f696bf0", ], doc="Download CFIN multi b-value diffusion data", msg=( "This data was provided by Brian Hansen and Sune Jespersen" + " More details about the data are available in their paper: " + " https://www.nature.com/articles/sdata201672" ), ) fetch_file_formats = _make_fetcher( "bundle_file_formats_example", pjoin(dipy_home, "bundle_file_formats_example"), "https://zenodo.org/record/3352379/files/", [ "cc_m_sub.trk", "laf_m_sub.tck", "lpt_m_sub.fib", "raf_m_sub.vtk", "rpt_m_sub.dpy", "template0.nii.gz", ], [ "cc_m_sub.trk", "laf_m_sub.tck", "lpt_m_sub.fib", "raf_m_sub.vtk", "rpt_m_sub.dpy", "template0.nii.gz", ], md5_list=[ "78ed7bead3e129fb4b4edd6da1d7e2d2", "20009796ccd43dc8d2d5403b25dff717", "8afa8419e2efe04ede75cce1f53c77d8", "9edcbea30c7a83b467c3cdae6ce963c8", "42bff2538a650a7ff1e57bfd9ed90ad6", "99c37a2134026d2c4bbb7add5088ddc6", ], doc="Download 5 bundles in various file formats and their reference", data_size="25MB", ) fetch_bundle_atlas_hcp842 = _make_fetcher( "fetch_bundle_atlas_hcp842", pjoin(dipy_home, "bundle_atlas_hcp842"), "https://ndownloader.figshare.com/files/", ["13638644"], ["Atlas_80_Bundles.zip"], md5_list=["78331d527a10ec000d4f33bac472e099"], doc="Download atlas tractogram from the hcp842 dataset with 80 bundles", data_size="300MB", unzip=True, ) fetch_30_bundle_atlas_hcp842 = _make_fetcher( "fetch_30_bundle_atlas_hcp842", pjoin(dipy_home, "bundle_atlas_hcp842"), "https://ndownloader.figshare.com/files/", ["26842853"], ["Atlas_30_Bundles.zip"], md5_list=["f3922cdbea4216823798fade128d6782"], doc="Download atlas tractogram from the hcp842 dataset with 30 bundles", data_size="207.09MB", unzip=True, ) fetch_target_tractogram_hcp = _make_fetcher( "fetch_target_tractogram_hcp", pjoin(dipy_home, "target_tractogram_hcp"), "https://ndownloader.figshare.com/files/", ["12871127"], ["hcp_tractogram.zip"], md5_list=["fa25ef19c9d3748929b6423397963b6a"], doc="Download tractogram of one of the hcp dataset subjects", data_size="541MB", unzip=True, ) fetch_bundle_fa_hcp = _make_fetcher( "fetch_bundle_fa_hcp", pjoin(dipy_home, "bundle_fa_hcp"), "https://ndownloader.figshare.com/files/", ["14035265"], ["hcp_bundle_fa.nii.gz"], md5_list=["2d5c0036b0575597378ddf39191028ea"], doc=("Download map of FA within two bundles in one of the hcp dataset subjects"), data_size="230kb", ) fetch_qtdMRI_test_retest_2subjects = _make_fetcher( "fetch_qtdMRI_test_retest_2subjects", pjoin(dipy_home, "qtdMRI_test_retest_2subjects"), "https://zenodo.org/record/996889/files/", [ "subject1_dwis_test.nii.gz", "subject2_dwis_test.nii.gz", "subject1_dwis_retest.nii.gz", "subject2_dwis_retest.nii.gz", "subject1_ccmask_test.nii.gz", "subject2_ccmask_test.nii.gz", "subject1_ccmask_retest.nii.gz", "subject2_ccmask_retest.nii.gz", 
"subject1_scheme_test.txt", "subject2_scheme_test.txt", "subject1_scheme_retest.txt", "subject2_scheme_retest.txt", ], [ "subject1_dwis_test.nii.gz", "subject2_dwis_test.nii.gz", "subject1_dwis_retest.nii.gz", "subject2_dwis_retest.nii.gz", "subject1_ccmask_test.nii.gz", "subject2_ccmask_test.nii.gz", "subject1_ccmask_retest.nii.gz", "subject2_ccmask_retest.nii.gz", "subject1_scheme_test.txt", "subject2_scheme_test.txt", "subject1_scheme_retest.txt", "subject2_scheme_retest.txt", ], md5_list=[ "ebd7441f32c40e25c28b9e069bd81981", "dd6a64dd68c8b321c75b9d5fb42c275a", "830a7a028a66d1b9812f93309a3f9eae", "d7f1951e726c35842f7ea0a15d990814", "ddb8dfae908165d5e82c846bcc317cab", "5630c06c267a0f9f388b07b3e563403c", "02e9f92b31e8980f658da99e532e14b5", "6e7ce416e7cfda21cecce3731f81712b", "957cb969f97d89e06edd7a04ffd61db0", "5540c0c9bd635c29fc88dd599cbbf5e6", "5540c0c9bd635c29fc88dd599cbbf5e6", "5540c0c9bd635c29fc88dd599cbbf5e6", ], doc="Downloads test-retest qt-dMRI acquisitions of two C57Bl6 mice.", data_size="298.2MB", ) fetch_gold_standard_io = _make_fetcher( "fetch_gold_standard_io", pjoin(dipy_home, "gold_standard_io"), "https://zenodo.org/record/7767654/files/", [ "gs.trk", "gs.tck", "gs.trx", "gs.fib", "gs.dpy", "gs.nii", "gs_3mm.nii", "gs_rasmm_space.txt", "gs_voxmm_space.txt", "gs_vox_space.txt", "points_data.txt", "streamlines_data.txt", ], [ "gs.trk", "gs.tck", "gs.trx", "gs.fib", "gs.dpy", "gs.nii", "gs_3mm.nii", "gs_rasmm_space.txt", "gs_voxmm_space.txt", "gs_vox_space.txt", "points_data.json", "streamlines_data.json", ], md5_list=[ "3acf565779f4d5107f96b2ef90578d64", "151a30cf356c002060d720bf9d577245", "a6587f1a3adc4df076910c4d72eb4161", "e9818e07bef5bd605dea0877df14a2b0", "248606297e400d1a9b1786845aad8de3", "a2d4d8f62d1de0ab9927782c7d51cb27", "217b3ae0712a02b2463b8eedfe9a0a68", "ca193a5508d3313d542231aaf262960f", "3284de59dfd9ca3130e6e01258ed9022", "a2a89c387f45adab733652a92f6602d5", "4bcca0c6195871fc05e93cdfabec22b4", "578f29052ac03a6d8a98580eb7c70d97", ], doc="Downloads the gold standard for streamlines io testing.", data_size="47.KB", ) fetch_qte_lte_pte = _make_fetcher( "fetch_qte_lte_pte", pjoin(dipy_home, "qte_lte_pte"), "https://zenodo.org/record/4624866/files/", ["lte-pte.nii.gz", "lte-pte.bval", "lte-pte.bvec", "mask.nii.gz"], ["lte-pte.nii.gz", "lte-pte.bval", "lte-pte.bvec", "mask.nii.gz"], md5_list=[ "f378b2cd9f57625512002b9e4c0f1660", "5c25d24dd3df8590582ed690507a8769", "31abe55dfda7ef5fdf5015d0713be9b0", "1b7b83b8a60295f52d80c3855a12b275", ], doc="Download QTE data with linear and planar tensor encoding.", data_size="41.5 MB", ) fetch_cti_rat1 = _make_fetcher( "fetch_cti_rat1", pjoin(dipy_home, "cti_rat1"), "https://zenodo.org/record/8276773/files/", [ "Rat1_invivo_cti_data.nii", "bvals1.bval", "bvec1.bvec", "bvals2.bval", "bvec2.bvec", "Rat1_mask.nii", ], [ "Rat1_invivo_cti_data.nii", "bvals1.bval", "bvec1.bvec", "bvals2.bval", "bvec2.bvec", "Rat1_mask.nii", ], md5_list=[ "2f855e7826f359d80cfd6f094d3a7008", "1deed2a91e20104ca42d7482cc096a9a", "40a4f5131b8a64608d16b0c6c5ad0837", "1979c7dc074e00f01103cbdf83ed78db", "653d9344060803d5576f43c65ce45ccb", "34bc3d5acea9442d05ef185717780440", ], doc="Download Rat Brain DDE data for CTI reconstruction" + " (Rat #1 data from Henriques et al. 
MRM 2021).", data_size="152.92 MB", msg=( "More details about the data are available in the paper: " + "https://onlinelibrary.wiley.com/doi/full/10.1002/mrm.28938" ), ) fetch_fury_surface = _make_fetcher( "fetch_fury_surface", pjoin(dipy_home, "fury_surface"), "https://raw.githubusercontent.com/fury-gl/fury-data/master/surfaces/", ["100307_white_lh.vtk"], ["100307_white_lh.vtk"], md5_list=["dbec91e29af15541a5cb36d80977b26b"], doc="Surface for testing and examples", data_size="11MB", ) fetch_DiB_70_lte_pte_ste = _make_fetcher( "fetch_DiB_70_lte_pte_ste", pjoin(dipy_home, "DiB_70_lte_pte_ste"), "https://github.com/filip-szczepankiewicz/Szczepankiewicz_DIB_2019/" "raw/master/DATA/brain/NII_Boito_SubSamples/", [ "DiB_70_lte_pte_ste/DiB_70_lte_pte_ste.nii.gz", "DiB_70_lte_pte_ste/bval_DiB_70_lte_pte_ste.bval", "DiB_70_lte_pte_ste/bvec_DiB_70_lte_pte_ste.bvec", "DiB_mask.nii.gz", ], [ "DiB_70_lte_pte_ste.nii.gz", "bval_DiB_70_lte_pte_ste.bval", "bvec_DiB_70_lte_pte_ste.bvec", "DiB_mask.nii.gz", ], doc="Download QTE data with linear, planar, " + "and spherical tensor encoding. If using this data please cite " + "F Szczepankiewicz, S Hoge, C-F Westin. Linear, planar and " + "spherical tensor-valued diffusion MRI data by free waveform " + "encoding in healthy brain, water, oil and liquid crystals. " + "Data in Brief (2019)," + "DOI: https://doi.org/10.1016/j.dib.2019.104208", md5_list=[ "11f2e0d53e19061654eb3cdfc8fe9827", "15021885b4967437c8cf441c09045c25", "1e6b867182da249f81aa9abd50e8b9f7", "2ea48d80b6ae1c3da50cb44e615b09e5", ], data_size="51.1 MB", ) fetch_DiB_217_lte_pte_ste = _make_fetcher( "fetch_DiB_217_lte_pte_ste", pjoin(dipy_home, "DiB_217_lte_pte_ste"), "https://github.com/filip-szczepankiewicz/Szczepankiewicz_DIB_2019/" "raw/master/DATA/brain/NII_Boito_SubSamples/", [ "DiB_217_lte_pte_ste/DiB_217_lte_pte_ste_1.nii.gz", "DiB_217_lte_pte_ste/DiB_217_lte_pte_ste_2.nii.gz", "DiB_217_lte_pte_ste/bval_DiB_217_lte_pte_ste.bval", "DiB_217_lte_pte_ste/bvec_DiB_217_lte_pte_ste.bvec", "DiB_mask.nii.gz", ], [ "DiB_217_lte_pte_ste_1.nii.gz", "DiB_217_lte_pte_ste_2.nii.gz", "bval_DiB_217_lte_pte_ste.bval", "bvec_DiB_217_lte_pte_ste.bvec", "DiB_mask.nii.gz", ], doc="Download QTE data with linear, planar, " + "and spherical tensor encoding. If using this data please cite " + "F Szczepankiewicz, S Hoge, C-F Westin. Linear, planar and " + "spherical tensor-valued diffusion MRI data by free waveform " + "encoding in healthy brain, water, oil and liquid crystals. 
" + "Data in Brief (2019)," + "DOI: https://doi.org/10.1016/j.dib.2019.104208", md5_list=[ "424e9cf75b20bc1f7ae1acde26b26da0", "8e70d14fb8f08065a7a0c4d3033179c6", "1f657215a475676ce333299038df3a39", "4220ca92f2906c97ed4c7287eb62c6f0", "2ea48d80b6ae1c3da50cb44e615b09e5", ], data_size="166.3 MB", ) fetch_ptt_minimal_dataset = _make_fetcher( "fetch_ptt_minimal_dataset", pjoin(dipy_home, "ptt_dataset"), "https://raw.githubusercontent.com/dipy/dipy_datatest/main/", ["ptt_fod.nii", "ptt_seed_coords.txt", "ptt_seed_image.nii"], ["ptt_fod.nii", "ptt_seed_coords.txt", "ptt_seed_image.nii"], md5_list=[ "6e454f8088b64e7b85218c71010d8dbe", "8c2d71fb95020e2bb1743623eb11c2a6", "9cb88f88d664019ba80c0b372c8bafec", ], doc="Download FOD and seeds for PTT testing and examples", data_size="203KB", ) fetch_bundle_warp_dataset = _make_fetcher( "fetch_bundle_warp_dataset", pjoin(dipy_home, "bundle_warp"), "https://ndownloader.figshare.com/files/", ["40026343", "40026346"], [ "m_UF_L.trk", "s_UF_L.trk", ], md5_list=["4db38ca1e80c16d6e3a97f88f0611187", "c1499005baccfab865ce38368d7a4c7f"], doc="Download Bundle Warp dataset", ) fetch_disco1_dataset = _make_fetcher( "fetch_disco1_dataset", pjoin(dipy_home, "disco", "disco_1"), "https://data.mendeley.com/public-files/datasets/fgf86jdfg6/files/", [ "028147aa-f17f-4514-80e6-24c7419da75e/file_downloaded", "e714ff30-be78-4284-b282-84ed011885de/file_downloaded", "a45ac785-4977-434b-b05e-c97ed4a1e6c7/file_downloaded", "c3d1de3e-6dcf-4142-99bf-4dd42d445a0a/file_downloaded", "35f025f6-d4ab-45e6-9f58-e7685862117f/file_downloaded", "02ad744f-4910-40a2-870b-5507d6195926/file_downloaded", "c030a2dc-6b80-43d7-a63c-87a28c77c73b/file_downloaded", "c6a54096-d7ea-4d05-a921-6aeb0c1bc625/file_downloaded", "d95bfd9a-8886-48ec-acc5-5526f3bca32c/file_downloaded", "8ef464f7-e648-4583-a208-3ee87a08a2a1/file_downloaded", "cbaa3708-7f39-46ec-8cea-51285956bf5a/file_downloaded", "964040aa-e1f0-48f7-8eb7-87553f4190e5/file_downloaded", "e8bea304-35ce-45de-95be-c51e2ed917f6/file_downloaded", "ec5da4f1-53bd-4103-8722-ea3211223150/file_downloaded", "c1ddba00-9ea1-435a-946a-bca43079eadf/file_downloaded", "0aa2d1a0-ae07-4eff-af6a-22183249e9fa/file_downloaded", "744d468f-bd0e-44a8-8395-893889c16462/file_downloaded", "311afc50-9bf5-469a-a298-091d252940c9/file_downloaded", "68f37ddf-8f06-4d3e-80c5-842b495516ba/file_downloaded", "d272c9aa-85d9-40d1-9a87-1044d361da0c/file_downloaded", "0ba92dbc-4489-4626-8bfd-5d56b7557699/file_downloaded", "099c3f98-4574-4315-be5e-a2b183695e71/file_downloaded", "231bbeac-bdec-4815-9238-9a2370bdca6b/file_downloaded", "cd073c2f-c3b0-4a46-9c3d-63a853a8af49/file_downloaded", "470f235c-04f5-4544-bf4b-f11ac0d8fbd2/file_downloaded", "276b637b-6244-4927-b1af-8c1ca12b8a0c/file_downloaded", "1c524b3f-ca6b-4f6f-9e6f-25431496000a/file_downloaded", "24f8fe7c-8ab2-40c8-baec-4d1f29eba4f2/file_downloaded", "5cc5fb58-6058-40dd-9969-019f7814a4a1/file_downloaded", "aabcc286-b607-46e1-9229-80638630d518/file_downloaded", "87b6cee7-0645-4405-b5b3-af89c04a2ac9/file_downloaded", "ad47bf18-8b69-4b0c-8706-ada53365fd99/file_downloaded", "417505fb-3a7a-4cbc-ac8a-92acc26bb958/file_downloaded", "7708e776-f07c-47a9-9982-b5fbd9dd4390/file_downloaded", "654863c6-6154-4da6-9f4c-3b0c76535dd7/file_downloaded", "5f8d028c-82a1-4837-b34f-ad8251c0685b/file_downloaded", "bd2fa6f3-f050-44a6-ad18-6c31bb037eb7/file_downloaded", "b432035b-5429-4255-8e21-88184802d005/file_downloaded", "1f1a4391-9740-4e17-89c3-79430aab0f91/file_downloaded", "6c90c685-ae16-4f21-ae13-71e89ca9eb66/file_downloaded", 
"f8878550-061b-40e0-a0c7-5aac5a33e542/file_downloaded", ], [ "DiSCo_gradients.bvals", "DiSCo_gradients_dipy.bvecs", "DiSCo_gradients_fsl.bvecs", "DiSCo_gradients_mrtrix.b", "DiSCo_gradients.scheme", "lowRes_DiSCo1_DWI_RicianNoise-snr40.nii.gz", "lowRes_DiSCo1_ROIs.nii.gz", "lowRes_DiSCo1_mask.nii.gz", "lowRes_DiSCo1_DWI_RicianNoise-snr50.nii.gz", "lowRes_DiSCo1_ROIs-mask.nii.gz", "lowRes_DiSCo1_DWI_Intra.nii.gz", "lowRes_DiSCo1_Strand_Bundle_Count.nii.gz", "lowRes_DiSCo1_Strand_Count.nii.gz", "lowRes_DiSCo1_Strand_Intra_Volume_Fraction.nii.gz", "lowRes_DiSCo1_DWI_RicianNoise-snr30.nii.gz", "lowRes_DiSCo1_DWI_RicianNoise-snr20.nii.gz", "lowRes_DiSCo1_DWI.nii.gz", "lowRes_DiSCo1_Strand_ODFs.nii.gz", "lowRes_DiSCo1_Strand_Average_Diameter.nii.gz", "lowRes_DiSCo1_DWI_RicianNoise-snr10.nii.gz", "highRes_DiSCo1_Strand_ODFs.nii.gz", "highRes_DiSCo1_DWI_RicianNoise-snr50.nii.gz", "highRes_DiSCo1_DWI.nii.gz", "highRes_DiSCo1_ROIs.nii.gz", "highRes_DiSCo1_DWI_RicianNoise-snr40.nii.gz", "highRes_DiSCo1_mask.nii.gz", "highRes_DiSCo1_Strand_Bundle_Count.nii.gz", "highRes_DiSCo1_DWI_RicianNoise-snr20.nii.gz", "highRes_DiSCo1_DWI_RicianNoise-snr30.nii.gz", "highRes_DiSCo1_Strand_Average_Diameter.nii.gz", "highRes_DiSCo1_Strand_Streamline_Count.nii.gz", "highRes_DiSCo1_DWI_Intra.nii.gz", "highRes_DiSCo1_DWI_RicianNoise-snr10.nii.gz", "highRes_DiSCo1_Strand_Intra_Volume_Fraction.nii.gz", "highRes_DiSCo1_ROIs-mask.nii.gz", "DiSCo1_Connectivity_Matrix_Strands_Count.txt", "DiSCo1_Connectivity_Matrix_Cross-Sectional_Area.txt", "DiSCo1_Strands_ROIs_Pairs.txt", "DiSCo1_Strands_Diameters.txt", "DiSCo1_Strands_Trajectories.tck", "DiSCo1_Strands_Trajectories.trk", ], md5_list=[ "c03cfec8ee605a54866ef09c3c8ba31d", "b8443aee0a2d2b60ee658a2342cccf0f", "4ac749bb584e6963851ee170e137e3ec", "207d57a6c16cdf230b45d4df5b4e1dee", "2346de6ad563a10e40b8ebec3ac4e0ff", "1981ac168ea76ce4d70e3dcd1d749f13", "9a06a057df5b2ce5cb05e8c9f59be14f", "ac9900538f4563c39032dcc289c36a51", "beb0f999bc26cd32ba2079db77e7840a", "8d5659e78d7d028e3da8f5af89402669", "db335f7d52b85c1d0de7ca61a2fca64b", "f1c49868436e0beca3b7cc50416c664e", "b967371fe44c4d736edcd65b8da12b93", "d4da2df8cd7cbf24cdd0af09247ff0a2", "80316e4e687ee3c2e2a9221929d1e826", "4c3580f512b325552302792a64da87f0", "56ee11c59348f3a61079483bd249032d", "580d72144a5e19a119e6839d8f1c623d", "f7bdfea9c3163cac7d4f67f49928b682", "7cf61f1088971a8799df80c29dc7a9f1", "da762d46e6b7d513be0411ac66769c5c", "5d7bac8737a4543eee60f7aa26073df9", "370bdf6571aa061daaa654a107eae6fd", "50fd6e201536b4a4815dd102d9c80974", "8269aed23fc643bb98c36945462e866a", "86becb33b54a3c4291f0cd77ec87bb3e", "d240cccd9d84e69bf146a5d06bb898bb", "e75a1838ff8e5860bdbde1162e5aa80f", "f7d9507884960f86918f6328962d98d2", "be4f8221ccdd5a139f16ac18353d12c6", "d8d33dbdc820289f5575b5c064c6dfe9", "53740e3d2851d5f301d93730268928f2", "ed98e0f26abc85af5735fe228cb86b4c", "50a9385f567138c23df4ddf333c785c0", "2d845131db87b6a2230a325e076bdd7f", "c53f6e42cfd796a561b1af434a61c91d", "f97730469c53deaa5b827effebef3e78", "d34fd49147cd061b9417328118567d1c", "f0d28f1d7eec1fad866607ee59686476", "e475641a08ebafeecb79a21e618d7081", "8c2a338606c0cb7de6e34b8317f59cd6", ], optional_fnames=[ "DiSCo_gradients_fsl.bvecs", "DiSCo_gradients_mrtrix.b", "DiSCo_gradients.scheme", "highRes_DiSCo1_DWI.nii.gz", "lowRes_DiSCo1_DWI_RicianNoise-snr40.nii.gz", "lowRes_DiSCo1_ROIs.nii.gz", "lowRes_DiSCo1_mask.nii.gz", "lowRes_DiSCo1_DWI_RicianNoise-snr50.nii.gz", "lowRes_DiSCo1_ROIs-mask.nii.gz", "lowRes_DiSCo1_DWI_Intra.nii.gz", "lowRes_DiSCo1_Strand_Bundle_Count.nii.gz", 
"lowRes_DiSCo1_Strand_Count.nii.gz", "lowRes_DiSCo1_Strand_Intra_Volume_Fraction.nii.gz", "lowRes_DiSCo1_DWI_RicianNoise-snr30.nii.gz", "lowRes_DiSCo1_DWI_RicianNoise-snr20.nii.gz", "lowRes_DiSCo1_DWI.nii.gz", "lowRes_DiSCo1_Strand_ODFs.nii.gz", "lowRes_DiSCo1_Strand_Average_Diameter.nii.gz", "lowRes_DiSCo1_DWI_RicianNoise-snr10.nii.gz", "highRes_DiSCo1_Strand_ODFs.nii.gz", "highRes_DiSCo1_DWI_RicianNoise-snr50.nii.gz", "highRes_DiSCo1_DWI_RicianNoise-snr40.nii.gz", "highRes_DiSCo1_Strand_Bundle_Count.nii.gz", "highRes_DiSCo1_DWI_RicianNoise-snr20.nii.gz", "highRes_DiSCo1_DWI_RicianNoise-snr30.nii.gz", "highRes_DiSCo1_Strand_Average_Diameter.nii.gz", "highRes_DiSCo1_Strand_Streamline_Count.nii.gz", "highRes_DiSCo1_DWI_Intra.nii.gz", "highRes_DiSCo1_Strand_Intra_Volume_Fraction.nii.gz", "DiSCo1_Connectivity_Matrix_Strands_Count.txt", "DiSCo1_Strands_ROIs_Pairs.txt", "DiSCo1_Strands_Diameters.txt", "DiSCo1_Strands_Trajectories.trk", ], doc=( "Download DISCO 1 dataset: The Diffusion-Simulated Connectivity " "Dataset. DOI: 10.17632/fgf86jdfg6.3" ), data_size="1.05 GB", use_headers=True, ) fetch_disco2_dataset = _make_fetcher( "fetch_disco2_dataset", pjoin(dipy_home, "disco", "disco_2"), "https://data.mendeley.com/public-files/datasets/fgf86jdfg6/files/", [ "028147aa-f17f-4514-80e6-24c7419da75e/file_downloaded", "e714ff30-be78-4284-b282-84ed011885de/file_downloaded", "a45ac785-4977-434b-b05e-c97ed4a1e6c7/file_downloaded", "c3d1de3e-6dcf-4142-99bf-4dd42d445a0a/file_downloaded", "35f025f6-d4ab-45e6-9f58-e7685862117f/file_downloaded", "e7a50803-a58d-47dc-b026-6a6e20602546/file_downloaded", "12158d68-0050-4528-88db-5fb606b2151d/file_downloaded", "361df9ec-0542-4e05-8f2d-1c605a5a915a/file_downloaded", "2bcf46ae-0b07-4dd0-baf8-654a05a4aca7/file_downloaded", "6a514a78-8a8a-4cdf-947a-d1b01ced4b0d/file_downloaded", "a387810f-54fe-4493-8842-f3fbeea687fe/file_downloaded", "2401238c-ae37-4a99-81a8-50ba90110650/file_downloaded", "980a6702-c192-4a4e-b706-f3ec2fb5e800/file_downloaded", "c5691dae-5b10-4c67-8a58-2df8314c85ec/file_downloaded", "d4eabb32-8b28-461c-8248-a0bacf4e44f2/file_downloaded", "dd98fb48-c48f-4d72-85a9-38a2d8b2bb3f/file_downloaded", "fed31d0d-7c75-4073-82e5-2c9ba30daa5a/file_downloaded", "012853cb-0d7f-4135-8407-a99c4e022024/file_downloaded", "46d0466f-75bd-4646-af10-5215477339d8/file_downloaded", "d1adcbdd-1d11-43cd-a72a-6830e91b7c33/file_downloaded", "f3e9d4a5-dc9d-4112-8a67-b63b40c87e5c/file_downloaded", "825d2154-e892-474d-ae31-e490467dfa3c/file_downloaded", "bf62b3fd-fff7-4658-b143-9ec948fc256f/file_downloaded", "173dacc8-683c-42c0-b270-67f59c77d49d/file_downloaded", "f981cbad-84c9-42bb-bf19-978f12e6c972/file_downloaded", "d122bf02-98f7-46b2-9286-98473216d64e/file_downloaded", "50b68dd1-1037-4cb4-8d1a-025cff38da55/file_downloaded", "5f3e04b6-9bd5-4010-bc2f-ea16b7ac8b1c/file_downloaded", "f8d7ac40-7d0e-41ba-9788-4f973be5968b/file_downloaded", "d4adf4a7-1646-4530-85cb-66c2779e1cb6/file_downloaded", "6b1f740e-2b20-440f-9bec-344540d80d1d/file_downloaded", "7ea07af5-5cb8-4857-bf30-17a0d4f39f83/file_downloaded", "7ad1c5ab-9606-481c-a08e-0770798b224f/file_downloaded", "8d8c8fe9-7747-4ba2-b759-f27cf160b88f/file_downloaded", "143de683-9bce-43b9-b08e-2e16fea05a53/file_downloaded", "f3f5f798-0d01-41de-ad96-6a2e2d04eeb1/file_downloaded", "0dc1a52f-7770-4492-8dc7-27da1f9d2ec9/file_downloaded", "bba6f22f-1212-49c2-b07f-3b62ae435b4d/file_downloaded", "5928d70c-01f3-47ab-8f29-0484f9a014a2/file_downloaded", "6be306cb-1dfd-4fe8-912d-ec117ff0d70d/file_downloaded", 
"e49d38af-e262-46ef-ad44-c51c29ee2178/file_downloaded", ], [ "DiSCo_gradients.bvals", "DiSCo_gradients_dipy.bvecs", "DiSCo_gradients_fsl.bvecs", "DiSCo_gradients_mrtrix.b", "DiSCo_gradients.scheme", "lowRes_DiSCo2_DWI_RicianNoise-snr50.nii.gz", "lowRes_DiSCo2_Strand_Average_Diameter.nii.gz", "lowRes_DiSCo2_DWI_RicianNoise-snr40.nii.gz", "lowRes_DiSCo2_DWI.nii.gz", "lowRes_DiSCo2_ROIs.nii.gz", "lowRes_DiSCo2_mask.nii.gz", "lowRes_DiSCo2_DWI_Intra.nii.gz", "lowRes_DiSCo2_ROIs-mask.nii.gz", "lowRes_DiSCo2_Strand_Count.nii.gz", "lowRes_DiSCo2_DWI_RicianNoise-snr20.nii.gz", "lowRes_DiSCo2_DWI_RicianNoise-snr30.nii.gz", "lowRes_DiSCo2_Strand_Intra_Volume_Fraction.nii.gz", "lowRes_DiSCo2_DWI_RicianNoise-snr10.nii.gz", "lowRes_DiSCo2_Strand_ODFs.nii.gz", "lowRes_DiSCo2_Strand_Bundle_Count.nii.gz", "highRes_DiSCo2_DWI_RicianNoise-snr40.nii.gz", "highRes_DiSCo2_DWI_RicianNoise-snr50.nii.gz", "highRes_DiSCo2_ROIs-mask.nii.gz", "highRes_DiSCo2_Strand_ODFs.nii.gz", "highRes_DiSCo2_Strand_Count.nii.gz", "highRes_DiSCo2_ROIs.nii.gz", "highRes_DiSCo2_mask.nii.gz", "highRes_DiSCo2_Strand_Average_Diameter.nii.gz", "highRes_DiSCo2_DWI_RicianNoise-snr30.nii.gz", "highRes_DiSCo2_DWI_Intra.nii.gz", "highRes_DiSCo2_DWI_RicianNoise-snr20.nii.gz", "highRes_DiSCo2_DWI.nii.gz", "highRes_DiSCo2_Strand_Bundle_Count.nii.gz", "highRes_DiSCo2_DWI_RicianNoise-snr10.nii.gz", "highRes_DiSCo2_Strand_Intra_Volume_Fraction.nii.gz", "DiSCo2_Strands_Diameters.txt", "DiSCo2_Connectivity_Matrix_Strands_Count.txt", "DiSCo2_Strands_Trajectories.trk", "DiSCo2_Strands_ROIs_Pairs.txt", "DiSCo2_Strands_Trajectories.tck", "DiSCo2_Connectivity_Matrix_Cross-Sectional_Area.txt", ], md5_list=[ "c03cfec8ee605a54866ef09c3c8ba31d", "b8443aee0a2d2b60ee658a2342cccf0f", "4ac749bb584e6963851ee170e137e3ec", "207d57a6c16cdf230b45d4df5b4e1dee", "2346de6ad563a10e40b8ebec3ac4e0ff", "425a97bce000265d9e4868fcb54a6bd7", "7cb8fd4ae58973235903adfcf338cb87", "bb5ba3ba1163f4335dae76f905e181d0", "350fb02f7ceeaf95c5cd13a4721cef5f", "a8476809cca854668ae4dd3bfd1366ec", "e55bca1aef80039eafdadec4b8c3ff21", "5d9b46a5130a7734a2a0544ed1f5cc72", "90869179977e2b78bd0a8c4f016ee7e3", "c6207b65665f5b82a5413e65627156b3", "1f7e32d2780bbc1a2b5f7b592c9fd53b", "36de56a359a67ec68c15cc484d79e1da", "6ddfa6a7e5083cb71f3ae4d23711a6a8", "21f0cf235c8c1705e7bd9ebd9a39479a", "660f549e8180f17ea69de82661b427e9", "19c037c7813c7574b809baedd9bb64fd", "77a1f4111eefc01b26d7ec8b6473894d", "cde18c2581ced8755958a153332c375a", "88b12d4c24bd833430b3125340c96801", "4d70e9b246101f5f2ba8a44d92beb2e8", "c01bb0a1d2bdb95565b92eb022ff2124", "31f664676e94280c5ebdf04b544d200f", "41a3155f6fb44da50c515bb867006ab1", "9494f6a6f5c4e44b88051f0284757517", "5d9f8bd4a540891082f2dda976f70478", "8e2e5900008ae6923fe56447d7b493c7", "e961d27613acc250b6407d29d5020154", "ef330fa395d4e48d0286652927116275", "00c620b144253ef114fee1fe8e95a87f", "7e4cf83496996aefd9f27c6077c71d74", "e6be6c73425337e331fda4e4bdcbe749", "0b6c1770d48f39f704986ba69970541e", "69da5bb232b41dbeeea9d409ca94955f", "0b43ffcc46217da86a472a7b1e02ac06", "4e774e280cc57b1336e4fa9e479ee4ee", "cc5442ea2ff2ddfcd1330016e2a68399", "7f276c0276561df13c86008776faf6b2", ], optional_fnames=[ "DiSCo_gradients_fsl.bvecs", "DiSCo_gradients_mrtrix.b", "DiSCo_gradients.scheme", "lowRes_DiSCo2_DWI_RicianNoise-snr50.nii.gz", "lowRes_DiSCo2_Strand_Average_Diameter.nii.gz", "lowRes_DiSCo2_DWI_RicianNoise-snr40.nii.gz", "lowRes_DiSCo2_DWI.nii.gz", "lowRes_DiSCo2_ROIs.nii.gz", "lowRes_DiSCo2_mask.nii.gz", "lowRes_DiSCo2_DWI_Intra.nii.gz", "lowRes_DiSCo2_ROIs-mask.nii.gz", 
"lowRes_DiSCo2_Strand_Count.nii.gz", "lowRes_DiSCo2_DWI_RicianNoise-snr20.nii.gz", "lowRes_DiSCo2_DWI_RicianNoise-snr30.nii.gz", "lowRes_DiSCo2_Strand_Intra_Volume_Fraction.nii.gz", "lowRes_DiSCo2_DWI_RicianNoise-snr10.nii.gz", "lowRes_DiSCo2_Strand_ODFs.nii.gz", "lowRes_DiSCo2_Strand_Bundle_Count.nii.gz", "highRes_DiSCo2_DWI_RicianNoise-snr40.nii.gz", "highRes_DiSCo2_DWI_RicianNoise-snr50.nii.gz", "highRes_DiSCo2_Strand_ODFs.nii.gz", "highRes_DiSCo2_Strand_Count.nii.gz", "highRes_DiSCo2_Strand_Average_Diameter.nii.gz", "highRes_DiSCo2_DWI_RicianNoise-snr30.nii.gz", "highRes_DiSCo2_DWI_Intra.nii.gz", "highRes_DiSCo2_DWI_RicianNoise-snr20.nii.gz", "highRes_DiSCo2_Strand_Bundle_Count.nii.gz", "highRes_DiSCo2_Strand_Intra_Volume_Fraction.nii.gz", "highRes_DiSCo2_DWI.nii.gz", "DiSCo2_Strands_Diameters.txt", "DiSCo2_Connectivity_Matrix_Strands_Count.txt", "DiSCo2_Strands_ROIs_Pairs.txt", "DiSCo2_Strands_Trajectories.trk", ], doc=( "Download DISCO 2 dataset: The Diffusion-Simulated Connectivity " "Dataset. DOI: 10.17632/fgf86jdfg6.3" ), data_size="1.05 GB", use_headers=True, ) fetch_disco3_dataset = _make_fetcher( "fetch_disco3_dataset", pjoin(dipy_home, "disco", "disco_3"), "https://data.mendeley.com/public-files/datasets/fgf86jdfg6/files/", [ "028147aa-f17f-4514-80e6-24c7419da75e/file_downloaded", "e714ff30-be78-4284-b282-84ed011885de/file_downloaded", "a45ac785-4977-434b-b05e-c97ed4a1e6c7/file_downloaded", "c3d1de3e-6dcf-4142-99bf-4dd42d445a0a/file_downloaded", "35f025f6-d4ab-45e6-9f58-e7685862117f/file_downloaded", "3cc65986-2e64-4397-a34d-386dfe286a89/file_downloaded", "ac384df6-8e9a-4c44-89db-624f89c12015/file_downloaded", "7feb06e6-0bdc-4fba-b7df-c67c3cb43301/file_downloaded", "c8bddc43-f1f2-404c-98d4-9ab34380fc8f/file_downloaded", "00bdcd82-dde0-499d-9bef-581755ad7e57/file_downloaded", "8abbc092-9368-4154-9e20-b28e3b6d03b4/file_downloaded", "e0773f2f-7fbe-4139-8e71-2d3267360ab5/file_downloaded", "8670b9c4-9275-4da6-adbf-398889a606d7/file_downloaded", "ec2306f2-7d60-452a-82d8-66962ffb903d/file_downloaded", "00cf30d9-14fa-484b-b6e1-3d484e43a840/file_downloaded", "b61b83aa-522b-4239-9a7b-74c80aea3c07/file_downloaded", "75176143-5ddf-4dfa-b2cd-b87da0abfc7b/file_downloaded", "d5fe5abf-b72e-474a-9316-714e0739feec/file_downloaded", "45a139fc-2d99-4832-8d95-7a7df2acf711/file_downloaded", "68f30089-1bf9-45d9-82d8-d77855823229/file_downloaded", "4162d753-0d7c-4492-aa4f-88554ee64fed/file_downloaded", "5ca6e137-2e65-41cd-bb4b-e1d21dcb5a28/file_downloaded", "8e88dccf-d7b3-4692-ab0f-26810ba168e0/file_downloaded", "fd97249f-ab0b-4ffa-a3ba-c3d0b3b1c947/file_downloaded", "73636f0f-7fc6-48c9-a140-7a11df9847f2/file_downloaded", "186a2116-21a8-42fc-b6d1-f303b0ba0b69/file_downloaded", "e087b436-d1be-4c8d-8c56-3de9ef7fb054/file_downloaded", "305ae78b-725c-4bb1-b435-1bca70c18b6b/file_downloaded", "2e62eb11-660b-4f09-920d-f5ee1e25b02a/file_downloaded", "a35e33f5-f23a-48de-9c03-1560ce5fecbe/file_downloaded", "f1737350-0b46-468e-b78a-af9731eac720/file_downloaded", "86ba2b2c-2094-43ef-be3e-e286dca5c5bd/file_downloaded", "c5a4a685-c3d6-4ef2-9ec9-956fce821c9b/file_downloaded", "d84ec650-8c05-41a6-a4ef-a4e3a2f66dd3/file_downloaded", "7648b6ce-2bc8-4753-8e8c-399978d4e187/file_downloaded", "1b702c78-cbae-4ceb-a5f6-89c5cb0f4f88/file_downloaded", "835af619-7447-4247-8e5b-2e2895480add/file_downloaded", "99752985-840e-4ed2-b068-68f6be7af48f/file_downloaded", "c07c7b51-1eab-4852-8b36-bd9901f15d7a/file_downloaded", "af6f3bc0-a09a-461a-8e6b-6095d83110ea/file_downloaded", "482bcf8f-52eb-4bf8-83b4-765ed935ad81/file_downloaded", 
], [ "DiSCo_gradients.bvals", "DiSCo_gradients_dipy.bvecs", "DiSCo_gradients_fsl.bvecs", "DiSCo_gradients_mrtrix.b", "DiSCo_gradients.scheme", "lowRes_DiSCo3_Strand_ODFs.nii.gz", "lowRes_DiSCo3_DWI_RicianNoise-snr10.nii.gz", "lowRes_DiSCo3_Strand_Intra_Volume_Fraction.nii.gz", "lowRes_DiSCo3_DWI_RicianNoise-snr30.nii.gz", "lowRes_DiSCo3_DWI_RicianNoise-snr20.nii.gz", "lowRes_DiSCo3_Strand_Average_Diameter.nii.gz", "lowRes_DiSCo3_DWI_Intra.nii.gz", "lowRes_DiSCo3_Strand_Count.nii.gz", "lowRes_DiSCo3_DWI.nii.gz", "lowRes_DiSCo3_Strand_Bundle_Count.nii.gz", "lowRes_DiSCo3_ROIs-mask.nii.gz", "lowRes_DiSCo3_DWI_RicianNoise-snr40.nii.gz", "lowRes_DiSCo3_mask.nii.gz", "lowRes_DiSCo3_DWI_RicianNoise-snr50.nii.gz", "lowRes_DiSCo3_ROIs.nii.gz", "highRes_DiSCo3_DWI_RicianNoise-snr10.nii.gz", "highRes_DiSCo3_Strand_Average_Diameter.nii.gz", "highRes_DiSCo3_Strand_Count.nii.gz", "highRes_DiSCo3_DWI_RicianNoise-snr20.nii.gz", "highRes_DiSCo3_DWI.nii.gz", "highRes_DiSCo3_DWI_RicianNoise-snr30.nii.gz", "highRes_DiSCo3_ROIs-mask.nii.gz", "highRes_DiSCo3_Strand_Intra_Volume_Fraction.nii.gz", "highRes_DiSCo3_Strand_Bundle_Count.nii.gz", "highRes_DiSCo3_DWI_Intra.nii.gz", "highRes_DiSCo3_DWI_RicianNoise-snr50.nii.gz", "highRes_DiSCo3_Strand_ODFs.nii.gz", "highRes_DiSCo3_mask.nii.gz", "highRes_DiSCo3_ROIs.nii.gz", "highRes_DiSCo3_DWI_RicianNoise-snr40.nii.gz", "DiSCo3_Connectivity_Matrix_Cross-Sectional_Area.txt", "DiSCo3_Strands_Trajectories.tck", "DiSCo3_Strands_Trajectories.trk", "DiSCo3_Strands_ROIs_Pairs.txt", "DiSCo3_Connectivity_Matrix_Strands_Count.txt", "DiSCo3_Strands_Diameters.txt", ], md5_list=[ "c03cfec8ee605a54866ef09c3c8ba31d", "b8443aee0a2d2b60ee658a2342cccf0f", "4ac749bb584e6963851ee170e137e3ec", "207d57a6c16cdf230b45d4df5b4e1dee", "2346de6ad563a10e40b8ebec3ac4e0ff", "acad92cebd780424280b5e38a5b41445", "c5ce15ea50d02621d2a3f27cde7cd208", "c827954b40279462b520edfb8cd2ced0", "f4e8fca1a434436ffe8fc4392d2e635b", "62606109e8a96f41208b45bffbe94aae", "a03bc8afe4d0348841c9fed5f03695b4", "85f60f649c6f3d4e6ae6bcf88053058d", "7fdec9c4f4fb740f595191a4d0202de7", "df04eb8f1ce0138f3c7f20488f4dbc00", "0bde3cefd74e4c3b4d28ec0d90bcadc9", "aa7bfbfc9ff454e7a466a01229b1377b", "218561e3ebbee2f08d4f3040ec2f12a9", "577ae9e407bb225dae48482a7f88bafb", "e85e6edf80bd07030f5a6c1769c84546", "49ed07d5fc60ee17a4a4019d7bea7d97", "aa86d752bec702172a54c1e8efe51d78", "df0dde6428f5c5146b5ce8f353865bca", "0f0b68b956415bbe06766e497ee77388", "44adc18b39d03b8d83f48cc572aaab78", "0460e6420739ae31b7076f5e1be5a1f1", "80c07379496d0f2aab66d8041a9c53b1", "0e3763e51612b56fa8dc933efc8d1971", "676f88a792d9034015a9613001c9bf02", "6f399bee3158613c74c1330cee5d9b0d", "534e64906cc2bf58ff4cf98ecd301dd5", "7d0f7513c85d766c45581f9fa8ab34a5", "01b4fe6c6e6459eac293a62428680e3b", "2bb5a0b24139f5448cdd51167ce33906", "bf6c5a26cb1061be91bec7fa73153cec", "e8911b580e7fc7e2d94d238f2c81c43b", "647714842c8dea4d192bdd7c2b41fd3b", "bba1e4283ed853eb484497db4b6da460", "d761384f8fed31c53197ce4916f32296", "3b119c1b8487da71412f3275616003dc", "47c300fcb48b882c124d69086b224956", "9e47185ae517f6f35daa1aa16c51ba45", ], optional_fnames=[ "DiSCo_gradients_fsl.bvecs", "DiSCo_gradients_mrtrix.b", "DiSCo_gradients.scheme", "lowRes_DiSCo3_Strand_ODFs.nii.gz", "lowRes_DiSCo3_DWI_RicianNoise-snr10.nii.gz", "lowRes_DiSCo3_Strand_Intra_Volume_Fraction.nii.gz", "lowRes_DiSCo3_DWI_RicianNoise-snr30.nii.gz", "lowRes_DiSCo3_DWI_RicianNoise-snr20.nii.gz", "lowRes_DiSCo3_Strand_Average_Diameter.nii.gz", "lowRes_DiSCo3_DWI_Intra.nii.gz", "lowRes_DiSCo3_Strand_Count.nii.gz", 
"lowRes_DiSCo3_DWI.nii.gz", "lowRes_DiSCo3_Strand_Bundle_Count.nii.gz", "lowRes_DiSCo3_ROIs-mask.nii.gz", "lowRes_DiSCo3_DWI_RicianNoise-snr40.nii.gz", "lowRes_DiSCo3_mask.nii.gz", "lowRes_DiSCo3_DWI_RicianNoise-snr50.nii.gz", "lowRes_DiSCo3_ROIs.nii.gz", "highRes_DiSCo3_Strand_Average_Diameter.nii.gz", "highRes_DiSCo3_Strand_Count.nii.gz", "highRes_DiSCo3_DWI_RicianNoise-snr20.nii.gz", "highRes_DiSCo3_DWI_RicianNoise-snr30.nii.gz", "highRes_DiSCo3_Strand_Intra_Volume_Fraction.nii.gz", "highRes_DiSCo3_Strand_Bundle_Count.nii.gz", "highRes_DiSCo3_DWI_Intra.nii.gz", "highRes_DiSCo3_DWI_RicianNoise-snr50.nii.gz", "highRes_DiSCo3_Strand_ODFs.nii.gz", "highRes_DiSCo3_DWI_RicianNoise-snr40.nii.gz", "highRes_DiSCo3_DWI.nii.gz", "DiSCo3_Strands_ROIs_Pairs.txt", "DiSCo3_Connectivity_Matrix_Strands_Count.txt", "DiSCo3_Strands_Diameters.txt", "DiSCo3_Strands_Trajectories.trk", ], doc=( "Download DISCO 3 dataset: The Diffusion-Simulated Connectivity " "Dataset. DOI: 10.17632/fgf86jdfg6.3" ), data_size="1.05 GB", use_headers=True, ) def fetch_disco_dataset(*, include_optional=False): """Download All DISCO datasets. Notes ----- see DOI: 10.17632/fgf86jdfg6.3 """ files_1, folder_1 = fetch_disco1_dataset(include_optional=include_optional) files_2, folder_2 = fetch_disco2_dataset(include_optional=include_optional) files_3, folder_3 = fetch_disco3_dataset(include_optional=include_optional) all_path = ( [pjoin(folder_1, f) for f in files_1] + [pjoin(folder_2, f) for f in files_2] + [pjoin(folder_3, f) for f in files_3] ) return all_path @warning_for_keywords() def get_fnames(*, name="small_64D", include_optional=False): """Provide full paths to example or test datasets. Parameters ---------- name : str the filename/s of which dataset to return, one of: - 'small_64D' small region of interest nifti,bvecs,bvals 64 directions - 'small_101D' small region of interest nifti, bvecs, bvals 101 directions - 'aniso_vox' volume with anisotropic voxel size as Nifti - 'fornix' 300 tracks in Trackvis format (from Pittsburgh Brain Competition) - 'gqi_vectors' the scanner wave vectors needed for a GQI acquisitions of 101 directions tested on Siemens 3T Trio - 'small_25' small ROI (10x8x2) DTI data (b value 2000, 25 directions) - 'test_piesno' slice of N=8, K=14 diffusion data - 'reg_c' small 2D image used for validating registration - 'reg_o' small 2D image used for validation registration - 'cb_2' two vectorized cingulum bundles include_optional : bool, optional If True, include optional datasets. 
Returns ------- fnames : tuple filenames for dataset Examples -------- >>> import numpy as np >>> from dipy.io.image import load_nifti >>> from dipy.data import get_fnames >>> fimg, fbvals, fbvecs = get_fnames(name='small_101D') >>> bvals=np.loadtxt(fbvals) >>> bvecs=np.loadtxt(fbvecs).T >>> data, affine = load_nifti(fimg) >>> data.shape == (6, 10, 10, 102) True >>> bvals.shape == (102,) True >>> bvecs.shape == (102, 3) True """ DATA_DIR = pjoin(op.dirname(__file__), "files") if name == "small_64D": fbvals = pjoin(DATA_DIR, "small_64D.bval") fbvecs = pjoin(DATA_DIR, "small_64D.bvec") fimg = pjoin(DATA_DIR, "small_64D.nii") return fimg, fbvals, fbvecs if name == "55dir_grad": fbvals = pjoin(DATA_DIR, "55dir_grad.bval") fbvecs = pjoin(DATA_DIR, "55dir_grad.bvec") return fbvals, fbvecs if name == "small_101D": fbvals = pjoin(DATA_DIR, "small_101D.bval") fbvecs = pjoin(DATA_DIR, "small_101D.bvec") fimg = pjoin(DATA_DIR, "small_101D.nii.gz") return fimg, fbvals, fbvecs if name == "aniso_vox": return pjoin(DATA_DIR, "aniso_vox.nii.gz") if name == "ascm_test": return pjoin(DATA_DIR, "ascm_out_test.nii.gz") if name == "fornix": return pjoin(DATA_DIR, "tracks300.trk") if name == "gqi_vectors": return pjoin(DATA_DIR, "ScannerVectors_GQI101.txt") if name == "dsi515btable": return pjoin(DATA_DIR, "dsi515_b_table.txt") if name == "dsi4169btable": return pjoin(DATA_DIR, "dsi4169_b_table.txt") if name == "grad514": return pjoin(DATA_DIR, "grad_514.txt") if name == "small_25": fbvals = pjoin(DATA_DIR, "small_25.bval") fbvecs = pjoin(DATA_DIR, "small_25.bvec") fimg = pjoin(DATA_DIR, "small_25.nii.gz") return fimg, fbvals, fbvecs if name == "small_25_streamlines": fstreamlines = pjoin(DATA_DIR, "EuDX_small_25.trk") return fstreamlines if name == "S0_10": fimg = pjoin(DATA_DIR, "S0_10slices.nii.gz") return fimg if name == "test_piesno": fimg = pjoin(DATA_DIR, "test_piesno.nii.gz") return fimg if name == "reg_c": return pjoin(DATA_DIR, "C.npy") if name == "reg_o": return pjoin(DATA_DIR, "circle.npy") if name == "cb_2": return pjoin(DATA_DIR, "cb_2.npz") if name == "minimal_bundles": return pjoin(DATA_DIR, "minimal_bundles.zip") if name == "t1_coronal_slice": return pjoin(DATA_DIR, "t1_coronal_slice.npy") if name == "t-design": N = 45 return pjoin(DATA_DIR, f"tdesign{N}.txt") if name == "scil_b0": files, folder = fetch_scil_b0() files = files["datasets_multi-site_all_companies.zip"][2] files = [pjoin(folder, f) for f in files] return [f for f in files if op.isfile(f)] if name == "stanford_hardi": files, folder = fetch_stanford_hardi() fraw = pjoin(folder, "HARDI150.nii.gz") fbval = pjoin(folder, "HARDI150.bval") fbvec = pjoin(folder, "HARDI150.bvec") return fraw, fbval, fbvec if name == "taiwan_ntu_dsi": files, folder = fetch_taiwan_ntu_dsi() fraw = pjoin(folder, "DSI203.nii.gz") fbval = pjoin(folder, "DSI203.bval") fbvec = pjoin(folder, "DSI203.bvec") return fraw, fbval, fbvec if name == "sherbrooke_3shell": files, folder = fetch_sherbrooke_3shell() fraw = pjoin(folder, "HARDI193.nii.gz") fbval = pjoin(folder, "HARDI193.bval") fbvec = pjoin(folder, "HARDI193.bvec") return fraw, fbval, fbvec if name == "isbi2013_2shell": files, folder = fetch_isbi2013_2shell() fraw = pjoin(folder, "phantom64.nii.gz") fbval = pjoin(folder, "phantom64.bval") fbvec = pjoin(folder, "phantom64.bvec") return fraw, fbval, fbvec if name == "stanford_labels": files, folder = fetch_stanford_labels() return pjoin(folder, "aparc-reduced.nii.gz") if name == "syn_data": files, folder = fetch_syn_data() t1_name = pjoin(folder, "t1.nii.gz") 
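# syn_data pairs a T1-weighted volume with a b0 volume acquired in the same session (see read_syn_data below) 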
b0_name = pjoin(folder, "b0.nii.gz") return t1_name, b0_name if name == "stanford_t1": files, folder = fetch_stanford_t1() return pjoin(folder, "t1.nii.gz") if name == "stanford_pve_maps": files, folder = fetch_stanford_pve_maps() f_pve_csf = pjoin(folder, "pve_csf.nii.gz") f_pve_gm = pjoin(folder, "pve_gm.nii.gz") f_pve_wm = pjoin(folder, "pve_wm.nii.gz") return f_pve_csf, f_pve_gm, f_pve_wm if name == "ivim": files, folder = fetch_ivim() fraw = pjoin(folder, "ivim.nii.gz") fbval = pjoin(folder, "ivim.bval") fbvec = pjoin(folder, "ivim.bvec") return fraw, fbval, fbvec if name == "tissue_data": files, folder = fetch_tissue_data() t1_name = pjoin(folder, "t1_brain.nii.gz") t1d_name = pjoin(folder, "t1_brain_denoised.nii.gz") ap_name = pjoin(folder, "power_map.nii.gz") return t1_name, t1d_name, ap_name if name == "cfin_multib": files, folder = fetch_cfin_multib() t1_name = pjoin(folder, "T1.nii") fraw = pjoin(folder, "__DTI_AX_ep2d_2_5_iso_33d_20141015095334_4.nii") fbval = pjoin(folder, "__DTI_AX_ep2d_2_5_iso_33d_20141015095334_4.bval") fbvec = pjoin(folder, "__DTI_AX_ep2d_2_5_iso_33d_20141015095334_4.bvec") return fraw, fbval, fbvec, t1_name if name == "target_tractrogram_hcp": files, folder = fetch_target_tractogram_hcp() return pjoin( folder, "target_tractogram_hcp", "hcp_tractogram", "streamlines.trk" ) if name == "bundle_atlas_hcp842": files, folder = fetch_bundle_atlas_hcp842() return get_bundle_atlas_hcp842() if name == "30_bundle_atlas_hcp842": files, folder = fetch_30_bundle_atlas_hcp842() return get_bundle_atlas_hcp842(size=30) if name == "qte_lte_pte": _, folder = fetch_qte_lte_pte() fdata = pjoin(folder, "lte-pte.nii.gz") fbval = pjoin(folder, "lte-pte.bval") fbvec = pjoin(folder, "lte-pte.bvec") fmask = pjoin(folder, "mask.nii.gz") return fdata, fbval, fbvec, fmask if name == "cti_rat1": _, folder = fetch_cti_rat1() fdata = pjoin(folder, "Rat1_invivo_cti_data.nii") fbval1 = pjoin(folder, "bvals1.bval") fbvec1 = pjoin(folder, "bvec1.bvec") fbval2 = pjoin(folder, "bvals2.bval") fbvec2 = pjoin(folder, "bvec2.bvec") fmask = pjoin(folder, "Rat1_mask.nii") return fdata, fbval1, fbvec1, fbval2, fbvec2, fmask if name == "fury_surface": files, folder = fetch_fury_surface() surface_name = pjoin(folder, "100307_white_lh.vtk") return surface_name if name == "histo_resdnn_tf_weights": files, folder = fetch_resdnn_tf_weights() wraw = pjoin(folder, "resdnn_weights_mri_2018.h5") return wraw if name == "histo_resdnn_torch_weights": files, folder = fetch_resdnn_torch_weights() wraw = pjoin(folder, "histo_weights.pth") return wraw if name == "synb0_default_weights": _, folder = fetch_synb0_weights() w1 = pjoin(folder, "synb0_default_weights1.h5") w2 = pjoin(folder, "synb0_default_weights2.h5") w3 = pjoin(folder, "synb0_default_weights3.h5") w4 = pjoin(folder, "synb0_default_weights4.h5") w5 = pjoin(folder, "synb0_default_weights5.h5") return w1, w2, w3, w4, w5 if name == "synb0_test_data": files, folder = fetch_synb0_test() input_array = pjoin(folder, "test_input_synb0.npz") target_array = pjoin(folder, "test_output_synb0.npz") return input_array, target_array if name == "deepn4_default_tf_weights": _, folder = fetch_deepn4_tf_weights() w1 = pjoin(folder, "model_weights.h5") return w1 if name == "deepn4_default_torch_weights": _, folder = fetch_deepn4_torch_weights() w1 = pjoin(folder, "deepn4_torch_weights") return w1 if name == "deepn4_test_data": files, folder = fetch_deepn4_test() input_array = pjoin(folder, "test_input_deepn4.npz") target_array = pjoin(folder, "new_test_output_deepn4.npz") 
return input_array, target_array if name == "evac_default_tf_weights": files, folder = fetch_evac_tf_weights() weight = pjoin(folder, "evac_default_weights.h5") return weight if name == "evac_default_torch_weights": files, folder = fetch_evac_torch_weights() weight = pjoin(folder, "evac_weights.pth") return weight if name == "evac_test_data": files, folder = fetch_evac_test() test_data = pjoin(folder, "evac_test_data.npz") return test_data if name == "DiB_70_lte_pte_ste": _, folder = fetch_DiB_70_lte_pte_ste() fdata = pjoin(folder, "DiB_70_lte_pte_ste.nii.gz") fbval = pjoin(folder, "bval_DiB_70_lte_pte_ste.bval") fbvec = pjoin(folder, "bvec_DiB_70_lte_pte_ste.bvec") fmask = pjoin(folder, "DiB_mask.nii.gz") return fdata, fbval, fbvec, fmask if name == "DiB_217_lte_pte_ste": _, folder = fetch_DiB_217_lte_pte_ste() fdata_1 = pjoin(folder, "DiB_217_lte_pte_ste_1.nii.gz") fdata_2 = pjoin(folder, "DiB_217_lte_pte_ste_2.nii.gz") fbval = pjoin(folder, "bval_DiB_217_lte_pte_ste.bval") fbvec = pjoin(folder, "bvec_DiB_217_lte_pte_ste.bvec") fmask = pjoin(folder, "DiB_mask.nii.gz") return fdata_1, fdata_2, fbval, fbvec, fmask if name == "ptt_minimal_dataset": files, folder = fetch_ptt_minimal_dataset() fod_name = pjoin(folder, "ptt_fod.nii") seed_coords_name = pjoin(folder, "ptt_seed_coords.txt") seed_image_name = pjoin(folder, "ptt_seed_image.nii") return fod_name, seed_coords_name, seed_image_name if name == "gold_standard_tracks": filepath_dix = {} files, folder = fetch_gold_standard_io() for filename in files: filepath_dix[filename] = os.path.join(folder, filename) with open(filepath_dix["points_data.json"]) as json_file: points_data = dict(json.load(json_file)) with open(filepath_dix["streamlines_data.json"]) as json_file: streamlines_data = dict(json.load(json_file)) return filepath_dix, points_data, streamlines_data if name in ["disco", "disco1", "disco2", "disco3"]: local_fetcher = globals().get(f"fetch_{name}_dataset") if name == "disco": return local_fetcher(include_optional=include_optional) files, folder = local_fetcher(include_optional=include_optional) return [pjoin(folder, f) for f in files] def read_qtdMRI_test_retest_2subjects(): r"""Load test-retest qt-dMRI acquisitions of two C57Bl6 mice. These datasets were used to study test-retest reproducibility of time-dependent q-space indices (q$\tau$-indices) in the corpus callosum of two mice :footcite:p:`Fick2018`. The data itself and its details are publicly available and can be cited via :footcite:p:`Wassermann2017`. The test-retest diffusion MRI spin echo sequences were acquired from two C57Bl6 wild-type mice on an 11.7 Tesla Bruker scanner. The test and retest acquisitions were taken 48 hours apart. The (processed) data consists of 80x160x5 voxels of size 110x110x500μm. Each data set consists of 515 Diffusion-Weighted Images (DWIs) spread over 35 acquisition shells. The shells are spread over 7 gradient strength shells with a maximum gradient strength of 491 mT/m, 5 pulse separation shells between [10.8 - 20.0]ms, and a pulse length of 5ms. We manually created a brain mask and corrected the data for eddy currents and motion artifacts using FSL's eddy. A region of interest was then drawn in the middle slice in the corpus callosum, where the tissue is reasonably coherent. Returns ------- data : list of length 4 contains the dwi datasets ordered as (subject1_test, subject1_retest, subject2_test, subject2_retest) cc_masks : list of length 4 contains the corpus callosum masks ordered in the same order as data. gtabs : list of length 4 contains the qt-dMRI gradient tables of the data sets. 
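Examples
--------
A minimal usage sketch, assuming the files were already downloaded into
``$DIPY_HOME/qtdMRI_test_retest_2subjects`` by the matching fetcher:

>>> data, cc_masks, gtabs = read_qtdMRI_test_retest_2subjects()  # doctest: +SKIP
>>> len(data), len(cc_masks), len(gtabs)  # doctest: +SKIP
(4, 4, 4)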
References ---------- .. footbibliography:: """ data = [] data_names = [ "subject1_dwis_test.nii.gz", "subject1_dwis_retest.nii.gz", "subject2_dwis_test.nii.gz", "subject2_dwis_retest.nii.gz", ] for data_name in data_names: data_loc = pjoin(dipy_home, "qtdMRI_test_retest_2subjects", data_name) data.append(load_nifti_data(data_loc)) cc_masks = [] mask_names = [ "subject1_ccmask_test.nii.gz", "subject1_ccmask_retest.nii.gz", "subject2_ccmask_test.nii.gz", "subject2_ccmask_retest.nii.gz", ] for mask_name in mask_names: mask_loc = pjoin(dipy_home, "qtdMRI_test_retest_2subjects", mask_name) cc_masks.append(load_nifti_data(mask_loc)) gtabs = [] gtab_txt_names = [ "subject1_scheme_test.txt", "subject1_scheme_retest.txt", "subject2_scheme_test.txt", "subject2_scheme_retest.txt", ] for gtab_txt_name in gtab_txt_names: txt_loc = pjoin(dipy_home, "qtdMRI_test_retest_2subjects", gtab_txt_name) qtdmri_scheme = np.loadtxt(txt_loc, skiprows=1) bvecs = qtdmri_scheme[:, 1:4] G = qtdmri_scheme[:, 4] / 1e3 # because dipy takes T/mm not T/m small_delta = qtdmri_scheme[:, 5] big_delta = qtdmri_scheme[:, 6] gtab = gradient_table_from_gradient_strength_bvecs( G, bvecs, big_delta, small_delta ) gtabs.append(gtab) return data, cc_masks, gtabs def read_scil_b0(): """Load GE 3T b0 image from the scil b0 dataset. Returns ------- img : obj, Nifti1Image """ fnames = get_fnames(name="scil_b0") return nib.load(fnames[0]) def read_siemens_scil_b0(): """Load Siemens 1.5T b0 image from the scil b0 dataset. Returns ------- img : obj, Nifti1Image """ fnames = get_fnames(name="scil_b0") return nib.load(fnames[1]) def read_isbi2013_2shell(): """Load ISBI 2013 2-shell synthetic dataset. Returns ------- img : obj, Nifti1Image gtab : obj, GradientTable """ fraw, fbval, fbvec = get_fnames(name="isbi2013_2shell") bvals, bvecs = read_bvals_bvecs(fbval, fbvec) gtab = gradient_table(bvals, bvecs=bvecs) img = nib.load(fraw) return img, gtab def read_sherbrooke_3shell(): """Load Sherbrooke 3-shell HARDI dataset. Returns ------- img : obj, Nifti1Image gtab : obj, GradientTable """ fraw, fbval, fbvec = get_fnames(name="sherbrooke_3shell") bvals, bvecs = read_bvals_bvecs(fbval, fbvec) gtab = gradient_table(bvals, bvecs=bvecs) img = nib.load(fraw) return img, gtab def read_stanford_labels(): """Read Stanford HARDI data and label map.""" # First get the hardi data hard_img, gtab = read_stanford_hardi() # Fetch and load labels_file = get_fnames(name="stanford_labels") labels_img = nib.load(labels_file) return hard_img, gtab, labels_img def read_stanford_hardi(): """Load Stanford HARDI dataset. Returns ------- img : obj, Nifti1Image gtab : obj, GradientTable """ fraw, fbval, fbvec = get_fnames(name="stanford_hardi") bvals, bvecs = read_bvals_bvecs(fbval, fbvec) gtab = gradient_table(bvals, bvecs=bvecs) img = nib.load(fraw) return img, gtab def read_stanford_t1(): f_t1 = get_fnames(name="stanford_t1") img = nib.load(f_t1) return img def read_stanford_pve_maps(): f_pve_csf, f_pve_gm, f_pve_wm = get_fnames(name="stanford_pve_maps") img_pve_csf = nib.load(f_pve_csf) img_pve_gm = nib.load(f_pve_gm) img_pve_wm = nib.load(f_pve_wm) return img_pve_csf, img_pve_gm, img_pve_wm def read_taiwan_ntu_dsi(): """Load Taiwan NTU dataset. 
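The non-b0 gradient directions are re-normalized to unit length when the gradient table is built. 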
Returns ------- img : obj, Nifti1Image gtab : obj, GradientTable """ fraw, fbval, fbvec = get_fnames(name="taiwan_ntu_dsi") bvals, bvecs = read_bvals_bvecs(fbval, fbvec) bvecs[1:] = bvecs[1:] / np.sqrt(np.sum(bvecs[1:] * bvecs[1:], axis=1))[:, None] gtab = gradient_table(bvals, bvecs=bvecs) img = nib.load(fraw) return img, gtab def read_syn_data(): """Load t1 and b0 volumes from the same session. Returns ------- t1 : obj, Nifti1Image b0 : obj, Nifti1Image """ t1_name, b0_name = get_fnames(name="syn_data") t1 = nib.load(t1_name) b0 = nib.load(b0_name) return t1, b0 def fetch_tissue_data(*, include_optional=False): """Download images to be used for tissue classification.""" t1 = "https://ndownloader.figshare.com/files/6965969" t1d = "https://ndownloader.figshare.com/files/6965981" ap = "https://ndownloader.figshare.com/files/6965984" folder = pjoin(dipy_home, "tissue_data") md5_list = [ "99c4b77267a6855cbfd96716d5d65b70", # t1 "4b87e1b02b19994fbd462490cc784fa3", # t1d "c0ea00ed7f2ff8b28740f18aa74bff6a", ] # ap url_list = [t1, t1d, ap] fname_list = ["t1_brain.nii.gz", "t1_brain_denoised.nii.gz", "power_map.nii.gz"] if not op.exists(folder): _log(f"Creating new directory {folder}") os.makedirs(folder) msg = "Downloading 3 Nifti1 images (9.3MB)..." _log(msg) for i in range(len(md5_list)): _get_file_data(pjoin(folder, fname_list[i]), url_list[i]) check_md5(pjoin(folder, fname_list[i]), md5_list[i]) _log("Done.") _log(f"Files copied in folder {folder}") else: _already_there_msg(folder) return fname_list, folder @warning_for_keywords() def read_tissue_data(*, contrast="T1"): """Load images to be used for tissue classification. Parameters ---------- contrast : str 'T1', 'T1 denoised' or 'Anisotropic Power' Returns ------- image : obj, Nifti1Image """ folder = pjoin(dipy_home, "tissue_data") t1_name = pjoin(folder, "t1_brain.nii.gz") t1d_name = pjoin(folder, "t1_brain_denoised.nii.gz") ap_name = pjoin(folder, "power_map.nii.gz") md5_dict = { "t1": "99c4b77267a6855cbfd96716d5d65b70", "t1d": "4b87e1b02b19994fbd462490cc784fa3", "ap": "c0ea00ed7f2ff8b28740f18aa74bff6a", } check_md5(t1_name, md5_dict["t1"]) check_md5(t1d_name, md5_dict["t1d"]) check_md5(ap_name, md5_dict["ap"]) if contrast == "T1 denoised": return nib.load(t1d_name) elif contrast == "Anisotropic Power": return nib.load(ap_name) else: return nib.load(t1_name) mni_notes = """ Notes ----- The templates were downloaded from the MNI (McGill University) website in July 2015. The following publications should be referenced when using these templates: - :footcite:t:`Fonov2013` - :footcite:t:`Fonov2009` **License for the MNI templates:** Copyright (C) 1993-2004, Louis Collins McConnell Brain Imaging Centre, Montreal Neurological Institute, McGill University. Permission to use, copy, modify, and distribute this software and its documentation for any purpose and without fee is hereby granted, provided that the above copyright notice appear in all copies. The authors and McGill University make no representations about the suitability of this software for any purpose. It is provided "as is" without express or implied warranty. The authors are not responsible for any data loss, equipment damage, property loss, or injury to subjects or patients resulting from the use or misuse of this software package. References ---------- .. footbibliography:: """ @warning_for_keywords() def read_mni_template(*, version="a", contrast="T2"): """Read the MNI template from disk. 
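The template files are downloaded on first use via ``fetch_mni_template``. 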
Parameters ---------- version : string There are two MNI templates 2009a and 2009c, so the available options are: "a" and "c". contrast : list or string, optional Which of the contrast templates to read. For version "a" two contrasts are available: "T1" and "T2". Similarly for version "c" there are two options, "T1" and "mask". Contrast can be given as a string or a list. Returns ------- list : contains the nibabel.Nifti1Image objects requested, according to the order they were requested in the input. Examples -------- >>> # Get only the T1 file for version c: >>> T1 = read_mni_template(version="c", contrast="T1") # doctest: +SKIP >>> # Get both files in this order for version a: >>> T1, T2 = read_mni_template(contrast=["T1", "T2"]) # doctest: +SKIP """ files, folder = fetch_mni_template() file_dict_a = { "T1": pjoin(folder, "mni_icbm152_t1_tal_nlin_asym_09a.nii"), "T2": pjoin(folder, "mni_icbm152_t2_tal_nlin_asym_09a.nii"), } file_dict_c = { "T1": pjoin(folder, "mni_icbm152_t1_tal_nlin_asym_09c.nii"), "mask": pjoin(folder, "mni_icbm152_t1_tal_nlin_asym_09c_mask.nii"), } if contrast == "T2" and version == "c": raise ValueError("No T2 image for MNI template 2009c") if contrast == "mask" and version == "a": raise ValueError("No template mask available for MNI 2009a") if not isinstance(contrast, str) and version == "c": for k in contrast: if k == "T2": raise ValueError("No T2 image for MNI template 2009c") if version == "a": if isinstance(contrast, str): return nib.load(file_dict_a[contrast]) else: out_list = [] for k in contrast: out_list.append(nib.load(file_dict_a[k])) elif version == "c": if isinstance(contrast, str): return nib.load(file_dict_c[contrast]) else: out_list = [] for k in contrast: out_list.append(nib.load(file_dict_c[k])) else: raise ValueError("Only 2009a and 2009c versions are available") return out_list # Add the references to both MNI-related functions: read_mni_template.__doc__ += mni_notes fetch_mni_template.__doc__ += mni_notes @warning_for_keywords() def fetch_cenir_multib(*, with_raw=False, **kwargs): """Fetch 'HCP-like' data, collected at multiple b-values. Parameters ---------- with_raw : bool Whether to fetch the raw data. 
Per default, this is False, which means that only eddy-current/motion corrected data is fetched """ folder = pjoin(dipy_home, "cenir_multib") fname_list = [ "4D_dwi_eddycor_B200.nii.gz", "dwi_bvals_B200", "dwi_bvecs_B200", "4D_dwieddycor_B400.nii.gz", "bvals_B400", "bvecs_B400", "4D_dwieddycor_B1000.nii.gz", "bvals_B1000", "bvecs_B1000", "4D_dwieddycor_B2000.nii.gz", "bvals_B2000", "bvecs_B2000", "4D_dwieddycor_B3000.nii.gz", "bvals_B3000", "bvecs_B3000", ] md5_list = [ "fd704aa3deb83c1c7229202cb3db8c48", "80ae5df76a575fe5bf9f1164bb0d4cfb", "18e90f8a3e6a4db2457e5b1ba1cc98a9", "3d0f2b8ef7b6a4a3aa5c4f7a90c9cfec", "c38056c40c9cc42372232d6e75c47f54", "810d79b4c30cb7dff3b2000017d5f72a", "dde8037601a14436b2173f4345b5fd17", "97de6a492ae304f39e0b418b6ebac64c", "f28a0faa701bdfc66e31bde471a5b992", "c5e4b96e3afdee99c0e994eff3b2331a", "9c83b8d5caf9c3def240f320f2d2f56c", "05446bd261d57193d8dbc097e06db5ff", "f0d70456ce424fda2cecd48e64f3a151", "336accdb56acbbeff8dac1748d15ceb8", "27089f3baaf881d96f6a9da202e3d69b", ] if with_raw: fname_list.extend( [ "4D_dwi_B200.nii.gz", "4D_dwi_B400.nii.gz", "4D_dwi_B1000.nii.gz", "4D_dwi_B2000.nii.gz", "4D_dwi_B3000.nii.gz", ] ) md5_list.extend( [ "a8c36e76101f2da2ca8119474ded21d5", "a0e7939f6d977458afbb2f4659062a79", "87fc307bdc2e56e105dffc81b711a808", "7c23e8a5198624aa29455f0578025d4f", "4e4324c676f5a97b3ded8bbb100bf6e5", ] ) files = {} baseurl = UW_RW_URL + "1773/33311/" for f, m in zip(fname_list, md5_list): files[f] = (baseurl + f, m) fetch_data(files, folder) return files, folder @warning_for_keywords() def read_cenir_multib(*, bvals=None): """Read CENIR multi b-value data. Parameters ---------- bvals : list or int The b-values to read from file (200, 400, 1000, 2000, 3000). Returns ------- gtab : a GradientTable class instance img : nibabel.Nifti1Image """ files, folder = fetch_cenir_multib(with_raw=False) if bvals is None: bvals = [200, 400, 1000, 2000, 3000] if isinstance(bvals, int): bvals = [bvals] file_dict = { 200: { "DWI": pjoin(folder, "4D_dwi_eddycor_B200.nii.gz"), "bvals": pjoin(folder, "dwi_bvals_B200"), "bvecs": pjoin(folder, "dwi_bvecs_B200"), }, 400: { "DWI": pjoin(folder, "4D_dwieddycor_B400.nii.gz"), "bvals": pjoin(folder, "bvals_B400"), "bvecs": pjoin(folder, "bvecs_B400"), }, 1000: { "DWI": pjoin(folder, "4D_dwieddycor_B1000.nii.gz"), "bvals": pjoin(folder, "bvals_B1000"), "bvecs": pjoin(folder, "bvecs_B1000"), }, 2000: { "DWI": pjoin(folder, "4D_dwieddycor_B2000.nii.gz"), "bvals": pjoin(folder, "bvals_B2000"), "bvecs": pjoin(folder, "bvecs_B2000"), }, 3000: { "DWI": pjoin(folder, "4D_dwieddycor_B3000.nii.gz"), "bvals": pjoin(folder, "bvals_B3000"), "bvecs": pjoin(folder, "bvecs_B3000"), }, } data = [] bval_list = [] bvec_list = [] for bval in bvals: data.append(load_nifti_data(file_dict[bval]["DWI"])) bval_list.extend(np.loadtxt(file_dict[bval]["bvals"])) bvec_list.append(np.loadtxt(file_dict[bval]["bvecs"])) # All affines are the same, so grab the last one: aff = nib.load(file_dict[bval]["DWI"]).affine return ( nib.Nifti1Image(np.concatenate(data, -1), aff), gradient_table(bval_list, bvecs=np.concatenate(bvec_list, -1)), ) CENIR_notes = """ Notes ----- Details of the acquisition and processing, and additional meta-data are available through UW researchworks: https://digital.lib.washington.edu/researchworks/handle/1773/33311 """ fetch_cenir_multib.__doc__ += CENIR_notes read_cenir_multib.__doc__ += CENIR_notes @warning_for_keywords() def read_bundles_2_subjects( *, subj_id="subj_1", metrics=("fa",), bundles=("af.left", "cst.right", "cc_1") ): 
r"""Read images and streamlines from 2 subjects of the SNAIL dataset. See :footcite:p:`Renauld2016` and :footcite:p:`Garyfallidis2015` for further details about the dataset and processing pipeline. Parameters ---------- subj_id : string Either ``subj_1`` or ``subj_2``. metrics : array-like Either ['fa'] or ['t1'] or ['fa', 't1'] bundles : array-like E.g., ['af.left', 'cst.right', 'cc_1']. See all the available bundles in the ``exp_bundles_maps/bundles_2_subjects`` directory of your ``DIPY_HOME`` of ``$HOME/.dipy`` folder. Returns ------- dix : dict Dictionary with data of the metrics and the bundles as keys. Notes ----- If you are using these datasets please cite the following publications. References ---------- .. footbibliography:: """ dname = pjoin(dipy_home, "exp_bundles_and_maps", "bundles_2_subjects") from dipy.io.streamline import load_tractogram from dipy.tracking.streamline import Streamlines res = {} if "t1" in metrics: data, affine = load_nifti(pjoin(dname, subj_id, "t1_warped.nii.gz")) res["t1"] = data if "fa" in metrics: fa, affine = load_nifti(pjoin(dname, subj_id, "fa_1x1x1.nii.gz")) res["fa"] = fa res["affine"] = affine for bun in bundles: streams = load_tractogram( pjoin(dname, subj_id, "bundles", f"bundles_{bun}.trk"), "same", bbox_valid_check=False, ).streamlines streamlines = Streamlines(streams) res[bun] = streamlines return res def read_ivim(): """Load IVIM dataset. Returns ------- img : obj, Nifti1Image gtab : obj, GradientTable """ fraw, fbval, fbvec = get_fnames(name="ivim") bvals, bvecs = read_bvals_bvecs(fbval, fbvec) gtab = gradient_table(bvals, bvecs=bvecs, b0_threshold=0) img = nib.load(fraw) return img, gtab def read_cfin_dwi(): """Load CFIN multi b-value DWI data. Returns ------- img : obj, Nifti1Image gtab : obj, GradientTable """ fraw, fbval, fbvec, _ = get_fnames(name="cfin_multib") bvals, bvecs = read_bvals_bvecs(fbval, fbvec) gtab = gradient_table(bvals, bvecs=bvecs) img = nib.load(fraw) return img, gtab def read_cfin_t1(): """Load CFIN T1-weighted data. Returns ------- img : obj, Nifti1Image """ _, _, _, fraw = get_fnames(name="cfin_multib") img = nib.load(fraw) return img # , gtab def get_file_formats(): """ Returns ------- bundles_list : all bundles (list) ref_anat : reference """ ref_anat = pjoin(dipy_home, "bundle_file_formats_example", "template0.nii.gz") bundles_list = [] for filename in [ "cc_m_sub.trk", "laf_m_sub.tck", "lpt_m_sub.fib", "raf_m_sub.vtk", "rpt_m_sub.dpy", ]: bundles_list.append(pjoin(dipy_home, "bundle_file_formats_example", filename)) return bundles_list, ref_anat @warning_for_keywords() def get_bundle_atlas_hcp842(*, size=80): """ Returns ------- file1 : string file2 : string """ size = 80 if size not in [80, 30] else size file1 = pjoin( dipy_home, "bundle_atlas_hcp842", f"Atlas_{size}_Bundles", "whole_brain", "whole_brain_MNI.trk", ) file2 = pjoin( dipy_home, "bundle_atlas_hcp842", f"Atlas_{size}_Bundles", "bundles", "*.trk" ) return file1, file2 def get_two_hcp842_bundles(): """ Returns ------- file1 : string file2 : string """ file1 = pjoin( dipy_home, "bundle_atlas_hcp842", "Atlas_80_Bundles", "bundles", "AF_L.trk" ) file2 = pjoin( dipy_home, "bundle_atlas_hcp842", "Atlas_80_Bundles", "bundles", "CST_L.trk" ) return file1, file2 def get_target_tractogram_hcp(): """ Returns ------- file1 : string """ file1 = pjoin( dipy_home, "target_tractogram_hcp", "hcp_tractogram", "streamlines.trk" ) return file1 def read_qte_lte_pte(): """Read q-space trajectory encoding data with linear and planar tensor encoding. 
Returns ------- data_img : nibabel.nifti1.Nifti1Image dMRI data image. mask_img : nibabel.nifti1.Nifti1Image Brain mask image. gtab : dipy.core.gradients.GradientTable Gradient table. """ fdata, fbval, fbvec, fmask = get_fnames(name="qte_lte_pte") data_img = nib.load(fdata) mask_img = nib.load(fmask) bvals = np.loadtxt(fbval) bvecs = np.loadtxt(fbvec) btens = np.array(["LTE"] * 61 + ["PTE"] * 61) gtab = gradient_table(bvals, bvecs=bvecs, btens=btens) return data_img, mask_img, gtab def read_DiB_70_lte_pte_ste(): """Read q-space trajectory encoding data with 70 measurements split between linear, planar, and spherical tensor encoding. Returns ------- data_img : nibabel.nifti1.Nifti1Image dMRI data image. mask_img : nibabel.nifti1.Nifti1Image Brain mask image. gtab : dipy.core.gradients.GradientTable Gradient table. """ fdata, fbval, fbvec, fmask = get_fnames(name="DiB_70_lte_pte_ste") data_img = nib.load(fdata) mask_img = nib.load(fmask) bvals = np.loadtxt(fbval) bvecs = np.loadtxt(fbvec) btens = np.array( ["LTE"] * 1 + ["LTE"] * 4 + ["PTE"] * 3 + ["STE"] * 3 + ["STE"] * 4 + ["LTE"] * 8 + ["PTE"] * 5 + ["STE"] * 5 + ["LTE"] * 21 + ["PTE"] * 10 + ["STE"] * 6 ) gtab = gradient_table(bvals, bvecs=bvecs, btens=btens) return data_img, mask_img, gtab def read_DiB_217_lte_pte_ste(): """Read q-space trajectory encoding data with 217 measurements split between linear, planar, and spherical tensor encoding. Returns ------- data_img : nibabel.nifti1.Nifti1Image dMRI data image. mask_img : nibabel.nifti1.Nifti1Image Brain mask image. gtab : dipy.core.gradients.GradientTable Gradient table. """ fdata_1, fdata_2, fbval, fbvec, fmask = get_fnames(name="DiB_217_lte_pte_ste") _, folder = fetch_DiB_217_lte_pte_ste() if op.isfile(pjoin(folder, "DiB_217_lte_pte_ste.nii.gz")): data_img = nib.load(pjoin(folder, "DiB_217_lte_pte_ste.nii.gz")) else: data_1, affine = load_nifti(fdata_1) data_2, _ = load_nifti(fdata_2) data = np.concatenate((data_1, data_2), axis=3) save_nifti(pjoin(folder, "DiB_217_lte_pte_ste.nii.gz"), data, affine) data_img = nib.load(pjoin(folder, "DiB_217_lte_pte_ste.nii.gz")) mask_img = nib.load(fmask) bvals = np.loadtxt(fbval) bvecs = np.loadtxt(fbvec) btens = np.array( ["LTE"] * 13 + ["LTE"] * 10 + ["PTE"] * 10 + ["STE"] * 10 + ["LTE"] * 10 + ["PTE"] * 10 + ["STE"] * 10 + ["LTE"] * 16 + ["PTE"] * 16 + ["STE"] * 10 + ["LTE"] * 46 + ["PTE"] * 46 + ["STE"] * 10 ) gtab = gradient_table(bvals, bvecs=bvecs, btens=btens) return data_img, mask_img, gtab def extract_example_tracts(out_dir): """Extract the 'AF_L', 'CST_R' and 'CC_ForcepsMajor' trk files of 5 example subjects into the out_dir folder. Parameters ---------- out_dir : str Folder in which to extract the files. """ fname = get_fnames(name="minimal_bundles") with zipfile.ZipFile(fname, "r") as zip_obj: zip_obj.extractall(out_dir) def read_five_af_bundles(): """Load 5 small left arcuate fasciculus bundles. Returns ------- bundles : list of ArraySequence List with loaded bundles. 
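Examples
--------
A minimal sketch; the bundles are unpacked from the ``minimal_bundles.zip``
archive that ships with DIPY, so no download is needed:

>>> bundles = read_five_af_bundles()  # doctest: +SKIP
>>> len(bundles)  # doctest: +SKIP
5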
""" subjects = ["sub_1", "sub_2", "sub_3", "sub_4", "sub_5"] with tempfile.TemporaryDirectory() as temp_dir: extract_example_tracts(temp_dir) bundles = [] for sub in subjects: fname = pjoin(temp_dir, sub, "AF_L.trk") bundle_obj = load_trk(fname, "same", bbox_valid_check=False) bundles.append(bundle_obj.streamlines) return bundles @warning_for_keywords() def to_bids_description( path, *, fname="dataset_description.json", BIDSVersion="1.4.0", **kwargs ): """Dumps a dict into a bids description at the given location""" kwargs.update({"BIDSVersion": BIDSVersion}) desc_file = op.join(path, fname) with open(desc_file, "w") as outfile: json.dump(kwargs, outfile) @warning_for_keywords() def fetch_hcp( subjects, *, hcp_bucket="hcp-openaccess", profile_name="hcp", path=None, study="HCP_1200", aws_access_key_id=None, aws_secret_access_key=None, ): """ Fetch HCP diffusion data and arrange it in a manner that resembles the BIDS specification. See :footcite:p:`Gorgolewski2016` for details about the BIDS specification. Parameters ---------- subjects : list Each item is an integer, identifying one of the HCP subjects hcp_bucket : string, optional The name of the HCP S3 bucket. profile_name : string, optional The name of the AWS profile used for access. path : string, optional Path to save files into. Defaults to the value of the ``DIPY_HOME`` environment variable is set; otherwise, defaults to ``$HOME/.dipy``. study : string, optional Which HCP study to grab. aws_access_key_id : string, optional AWS credentials to HCP AWS S3. Will only be used if `profile_name` is set to False. aws_secret_access_key : string, optional AWS credentials to HCP AWS S3. Will only be used if `profile_name` is set to False. Returns ------- dict with remote and local names of these files, path to BIDS derivative dataset Notes ----- To use this function with its default setting, you need to have a file '~/.aws/credentials', that includes a section: [hcp] AWS_ACCESS_KEY_ID=XXXXXXXXXXXXXXXX AWS_SECRET_ACCESS_KEY=XXXXXXXXXXXXXXXX The keys are credentials that you can get from HCP (see https://wiki.humanconnectome.org/docs/How%20to%20Get%20Access%20to%20the%20HCP%20OpenAccess%20Amazon%20S3%20Bucket.md) Local filenames are changed to match our expected conventions. References ---------- .. footbibliography:: """ # noqa: E501 if not has_boto3: raise ValueError( "'fetch_hcp' requires boto3 and it is" " not currently installed. Please install" "it using `pip install boto3`. 
" ) if profile_name: boto3.setup_default_session(profile_name=profile_name) elif aws_access_key_id is not None and aws_secret_access_key is not None: boto3.setup_default_session( aws_access_key_id=aws_access_key_id, aws_secret_access_key=aws_secret_access_key, ) else: raise ValueError( "Must provide either a `profile_name` or ", "both `aws_access_key_id` and ", "`aws_secret_access_key` as input to 'fetch_hcp'", ) s3 = boto3.resource("s3") bucket = s3.Bucket(hcp_bucket) if path is None: if not op.exists(dipy_home): os.mkdir(dipy_home) my_path = dipy_home else: my_path = path base_dir = pjoin(my_path, study, "derivatives", "hcp_pipeline") if not op.exists(base_dir): os.makedirs(base_dir, exist_ok=True) data_files = {} # If user provided incorrect input, these are typical failures that # are easy to recover from: if isinstance(subjects, (int, str)): subjects = [subjects] for subject in subjects: sub_dir = pjoin(base_dir, f"sub-{subject}") if not op.exists(sub_dir): os.makedirs(pjoin(sub_dir, "dwi"), exist_ok=True) os.makedirs(pjoin(sub_dir, "anat"), exist_ok=True) data_files[pjoin(sub_dir, "dwi", f"sub-{subject}_dwi.bval")] = ( f"{study}/{subject}/T1w/Diffusion/bvals" ) data_files[pjoin(sub_dir, "dwi", f"sub-{subject}_dwi.bvec")] = ( f"{study}/{subject}/T1w/Diffusion/bvecs" ) data_files[pjoin(sub_dir, "dwi", f"sub-{subject}_dwi.nii.gz")] = ( f"{study}/{subject}/T1w/Diffusion/data.nii.gz" ) data_files[pjoin(sub_dir, "anat", f"sub-{subject}_T1w.nii.gz")] = ( f"{study}/{subject}/T1w/T1w_acpc_dc.nii.gz" ) data_files[pjoin(sub_dir, "anat", f"sub-{subject}_aparc+aseg_seg.nii.gz")] = ( f"{study}/{subject}/T1w/aparc+aseg.nii.gz" ) download_files = {} for k in data_files.keys(): if not op.exists(k): download_files[k] = data_files[k] if len(download_files.keys()): with tqdm(total=len(download_files.keys())) as pbar: for k in download_files.keys(): pbar.set_description_str(f"Downloading {k}") bucket.download_file(download_files[k], k) pbar.update() # Create the BIDS dataset description file text hcp_acknowledgements = ( "Data were provided by the Human Connectome Project, WU-Minn Consortium" " (Principal Investigators: David Van Essen and Kamil Ugurbil; 1U54MH091657)" " funded by the 16 NIH Institutes and Centers that support the NIH Blueprint" " for Neuroscience Research; and by the McDonnell Center for Systems" " Neuroscience at Washington University.", ) to_bids_description( pjoin(my_path, study), **{ "Name": study, "Acknowledgements": hcp_acknowledgements, "Subjects": subjects, }, ) # Create the BIDS derivatives description file text to_bids_description( base_dir, **{ "Name": study, "Acknowledgements": hcp_acknowledgements, "GeneratedBy": [{"Name": "hcp_pipeline"}], }, ) return data_files, pjoin(my_path, study) def _hbn_downloader(my_path, derivative, subjects, client): base_dir = op.join(my_path, "HBN", "derivatives", derivative) if not os.path.exists(base_dir): os.makedirs(base_dir, exist_ok=True) data_files = {} for subject in subjects: initial_query = client.list_objects( Bucket="fcp-indi", Prefix=f"data/Projects/HBN/BIDS_curated/sub-{subject}/" ) ses = initial_query.get("Contents", None) if ses is None: raise ValueError(f"Could not find data for subject {subject}") else: ses = ses[0]["Key"].split("/")[5] query = client.list_objects( Bucket="fcp-indi", Prefix=f"data/Projects/HBN/BIDS_curated/derivatives/{derivative}/sub-{subject}/", ) # noqa query_content = query.get("Contents", None) if query_content is None: raise ValueError(f"Could not find derivatives data for subject {subject}") file_list = 
[kk["Key"] for kk in query["Contents"]] sub_dir = op.join(base_dir, f"sub-{subject}") ses_dir = op.join(sub_dir, ses) if derivative == "qsiprep": if not os.path.exists(sub_dir): os.makedirs(os.path.join(sub_dir, "anat"), exist_ok=True) os.makedirs(os.path.join(sub_dir, "figures"), exist_ok=True) os.makedirs(os.path.join(ses_dir, "dwi"), exist_ok=True) os.makedirs(os.path.join(ses_dir, "anat"), exist_ok=True) if derivative == "afq": if not os.path.exists(sub_dir): os.makedirs(os.path.join(ses_dir, "bundles"), exist_ok=True) os.makedirs(os.path.join(ses_dir, "clean_bundles"), exist_ok=True) os.makedirs(os.path.join(ses_dir, "ROIs"), exist_ok=True) os.makedirs(os.path.join(ses_dir, "tract_profile_plots"), exist_ok=True) os.makedirs(os.path.join(ses_dir, "viz_bundles"), exist_ok=True) for remote in file_list: full = remote.split("Projects")[-1][1:].replace("/BIDS_curated", "") local = op.join(my_path, full) data_files[local] = remote download_files = {} for k in data_files.keys(): if not op.exists(k): download_files[k] = data_files[k] if len(download_files.keys()): with tqdm(total=len(download_files.keys())) as pbar: for k in download_files.keys(): pbar.set_description_str(f"Downloading {k}") client.download_file("fcp-indi", download_files[k], k) pbar.update() # Create the BIDS dataset description file text to_bids_description( op.join(my_path, "HBN"), **{"Name": "HBN", "Subjects": subjects} ) # Create the BIDS derivatives description file text to_bids_description( base_dir, **{"Name": "HBN", "PipelineDescription": {"Name": "qsiprep"}} ) return data_files def fetch_hbn(subjects, *, path=None, include_afq=False): """ Fetch preprocessed data from the Healthy Brain Network POD2 study. See :footcite:p:`Alexander2017` and :footcite:p:`RichieHalford2022` for further details about the study and processing pipeline. Parameters ---------- subjects : list Identifiers of the subjects to download. For example: ["NDARAA948VFH", "NDAREK918EC2"]. path : string, optional Path to save files into. Defaults to the value of the ``DIPY_HOME`` environment variable is set; otherwise, defaults to ``$HOME/.dipy``. include_afq : bool, optional Whether to include pyAFQ derivatives Returns ------- dict with remote and local names of these files, path to BIDS derivative dataset References ---------- .. footbibliography:: """ if has_boto3: from botocore import UNSIGNED from botocore.client import Config else: TripWire( "The `fetch_hbn` function requires the boto3" + " library, but that is not installed." 
def fetch_hbn(subjects, *, path=None, include_afq=False):
    """Fetch preprocessed data from the Healthy Brain Network POD2 study.

    See :footcite:p:`Alexander2017` and :footcite:p:`RichieHalford2022` for
    further details about the study and processing pipeline.

    Parameters
    ----------
    subjects : list
        Identifiers of the subjects to download.
        For example: ["NDARAA948VFH", "NDAREK918EC2"].
    path : string, optional
        Path to save files into. Defaults to the value of the ``DIPY_HOME``
        environment variable, if that is set; otherwise, defaults to
        ``$HOME/.dipy``.
    include_afq : bool, optional
        Whether to include pyAFQ derivatives.

    Returns
    -------
    dict with remote and local names of these files,
    path to BIDS derivative dataset.

    References
    ----------
    .. footbibliography::
    """
    if has_boto3:
        from botocore import UNSIGNED
        from botocore.client import Config
    else:
        from dipy.utils.tripwire import TripWireError

        # boto3 is required for this fetcher; fail early with an
        # informative error.
        raise TripWireError(
            "The `fetch_hbn` function requires the boto3"
            + " library, but that is not installed."
        )

    # Anonymous access:
    client = boto3.client("s3", config=Config(signature_version=UNSIGNED))

    if path is None:
        if not op.exists(dipy_home):
            os.mkdir(dipy_home)
        my_path = dipy_home
    else:
        my_path = path

    # If user provided incorrect input, these are typical failures that
    # are easy to recover from:
    if isinstance(subjects, (int, str)):
        subjects = [subjects]

    data_files = _hbn_downloader(my_path, "qsiprep", subjects, client)

    if include_afq:
        data_files.update(_hbn_downloader(my_path, "afq", subjects, client))

    return data_files, pjoin(my_path, "HBN")
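
A minimal usage sketch for ``fetch_hbn``; the subject IDs are the examples
from the docstring, and no credentials are needed since access is anonymous
(the import path is assumed from the package layout):

from dipy.data.fetcher import fetch_hbn

# Download qsiprep derivatives for two example subjects; pass
# include_afq=True to also fetch the pyAFQ derivatives.
data_files, bids_path = fetch_hbn(["NDARAA948VFH", "NDAREK918EC2"])
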
dipy-1.11.0/dipy/data/files/55dir_grad.bval
0 2000 2000 2000 2000 2000 2000 2000 2000 2000 2000 2000 2000 2000 2000 2000 2000 2000 2000 2000 2000 2000 2000 2000 2000 2000 2000 2000 2000 2000 2000 2000 2000 2000 2000 2000 2000 2000 2000 2000 2000 2000 2000 2000 2000 2000 2000 2000 2000 2000 2000 2000 2000 2000 2000 2000
dipy-1.11.0/dipy/data/files/55dir_grad.bvec
0 0.387747134121 0.946163995722 -0.922518214794 0.444901340191 -0.088648379736 0.398858764491 0.707421617532 0.441719326768 -0.984543274106 0.816864993569 0.973222292055 -0.365922889995 0.697928959906 0.194286490773 -0.421934676133 -0.918294577668 0.951258678509 -0.482260349695 0.017747824887 0.008439275026 0.022003496707 -0.302920122261 -0.605795821812 0.801088034306 0.197614702014 0.822315620723 0.531572028134 -0.900257095184 -0.043757932112 0.654348201408 -0.380464261169 -0.819307000804 -0.721142532083 -0.549296621645 -0.696135340591 0.364655265727 -0.090729077048 0.511611491293 -0.693341730595 0.751543758 -0.780508167624 -0.100583708102 0.219939025696 -0.780300794447 0.755047655881 -0.478715333951 0.152870187393 -0.388991446276 -0.066544469261 -0.306611784146 0.0724455254 0.456711407833 0.392354846453 -0.550316062949 -0.25272017908
0 -0.296393661931 -0.081321931027 -0.384882209717 -0.161185017106 -0.92988659956 0.828672486485 0.031786637059 0.343437931108 0.070601203125 0.449497285481 -0.16040248204 0.925434114895 -0.475145821454 0.979839225871 -0.626891396224 -0.058522138943 0.302296624642 -0.174024786406 0.566474680917 -0.843382140828 0.06386048901 -0.640449793751 0.7510264358 0.039519118666 -0.693865338491 -0.565236505598 -0.807120477633 0.382874752951 -0.416578339326 0.755830304425 0.645279718315 -0.266629899103 -0.480787014528 -0.805702982944 -0.669944149585 -0.606149719157 -0.273405988981 0.471016167011 0.289001415041 0.486149073628 0.586409922872 0.932608599035 0.15823728798 0.22399249773 -0.439220995009 0.665543534564 -0.986482345681 0.292599882564 0.886002068699 -0.934865622332 0.640718077698 -0.838931327882 0.776467536851 -0.160226448479 0.232824243654
0 0.872813242996 0.313305660232 0.028737223536 -0.880955270009 0.357004729282 0.392700389778 0.706076670591 -0.828815072158 -0.16028103921 0.36150210598 -0.164649366846 -0.098347026224 0.535846634103 -0.046560186303 -0.654964355074 0.391548500034 -0.061021941069 -0.858568767676 -0.823888008525 -0.537247934542 -0.997716234245 0.705736113015 0.262622761778 -0.597240488038 0.692458895234 0.065610309056 -0.256880737876 0.207229549349 -0.908046105978 -0.023430369769 0.662465871652 -0.507586973175 0.498795845093 0.221621128787 -0.258012449344 -0.70683028737 0.957610254627 0.718607996331 0.660117737014 -0.445915976414 -0.216634260053 0.346589265082 0.962594299624 -0.583916116532 -0.486814086579 -0.572611065768 -0.059019382124 -0.873539331368 -0.45888143117 -0.178928706485 0.764350698803 0.295988035322 -0.493108343755 0.819438659125 0.939108823648
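
The ``55dir_grad`` pair above encodes a 56-volume gradient table: one b=0
volume plus 55 directions at b=2000. A short sketch of loading it with DIPY's
gradient utilities; the file paths are placeholders for wherever the files
live on disk:

from dipy.core.gradients import gradient_table
from dipy.io.gradients import read_bvals_bvecs

# Placeholder paths; in an installed DIPY these files live under
# dipy/data/files.
bvals, bvecs = read_bvals_bvecs("55dir_grad.bval", "55dir_grad.bvec")
gtab = gradient_table(bvals, bvecs=bvecs)
print(gtab.b0s_mask.sum())  # 1: the single b=0 volume
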
dipy-1.11.0/dipy/data/files/C.npy
Kx0Yٓ:Sɴ ڊè@|Y,a 0{A%޴b"ֈ(W߶ͣGps{\^L fL0QFO5+o`]N@j</\}0ζ Pl_D'x,Rg|n3**WSjvqF.ڕ&ɏ}w M }RjmSx X u\xgN۹wa[a+$O!.L -S{ens|a@ s{dmZ[V]fq|F|Ƚ@"IPHyJAwD?>G]8s{.7ek5JXC+W /S;X8+#EpR֐3{ʔwسV[ W>Z<ܿ~ GMkY..cx@:=7v<h{6P2l+u?;+3nоaDONG=uj#=0B2J̕W D>D΢yQa`ix /ȯW_yP)s-&ٸ pBqkV1fq(=mW7+1-lYTdu5%w08i*ަg{ĉHagcd5b py[;ϐG"aܙ,H9z (&Fsugɿm!՛T&@EWA@;9iHӝޠ%{d/KQ_Sgٞ[0zJܤy!,֚KƐbi;M/U٦<pptZ1,hIDui'(E3)(&hУaȖ?p)ٖ>cGcFOwcU&ނ y$ƂNF݃#-yLh#:޿FVs(yĄK72z DHpP^zk`Y`.j[Xst/O`_cn,lTR,Bp Aa@Hv'Q@ |wK|.A_̸,YN$*3f Q qsW8 ƭqNz0LjNmlf:^" 6Q߶}fon SS7*gfʚ+QkSuVER)c / y\¯tj2uњI*v՝k{cxZ|O;̼O$ֵe31z/QD7OaLn#B#)vRIƛmK56.[-bZTSMBUY̜]#CǗOծjN '!XQ^&0 g[ܶW`jMmܺ-%bpIM>X<6)=Rh ʖ؜VEE䥮9Kb1l뵫U;iё[21DVqJE|[?g~N4 yWl&ˏ Q+Jw9'L/R N޲vK4t9NčY9YjȠ]+a4X%% 70 Ȝ?1 z"5@bI<3?w=#჊nV0qEgULN+NJq2:}%1 yP #[C;@ú|;Èk}BțJ,vlCOROSRJ,(Ej,6X3Gr(]7p$sx]tIoQJGXYܛkƨaIYf ,޹J2e٩ #153d lVؕ뜎d]1jԔN¨P;;]*lCqЮ!0!`o>V6n%9:.}Z4|ߧ zxx<.3\Ė˙%mqASz5^Zbte+0D0S<'p$`5_v]8 .5emX, ti 4%C\F Ai;G3cst@*=L6 O%oN6ՏuCJ&Ϭ]2C>{_;" 2JY/:14#i<1ԆM&?9ʯ r u̫ebelܚE-lcsqs>XV%ef>v =vbλ;|N(u[?ס!~s9/ed;ci^Ŏ)\vM[i 3Z~&`@*)Vf0Yl$-ՐcA7{u]Q{s۷|sX ?\%W5x\EwF:b6 #ɕlσJn*+CG6LT) XT_J+hX9.[xV)i(lEqHx_"Oè=_TkPr܎RNtV>S,t8Q+i c6t&1eKޜ^pvlLg@{ĶˊzC?rLMu1޸:8œ6}`{@X3-w$q9VRbBGҜ?2aF_=[_u ̞ V$*5]mH ^X`0<^vЈV"%'~X \wL1ը) C&]n#+~4 vDv[:E~|nF0׺'࢙k&5̖:$ (avSN5cߙԖi2`x7ADt{$Eq1xw#M 3+ZLQX`K|G8I>sZ?-q<*O6zEuEgL5s8UIX۠ߡzB~k-atab5 Dhܼظ{Ĉ<2ICHR/cǞE9R7g$ȪGZF;S7 po3M)We^1Z:y3q&tH3}vb]B˸. FuzP r~jAB,ήʖGc虈TWfxl( "Z\_V/ u ˇru:()8M(jLZpc66wu/!^b2$?]֥5ڈG_,?(^ԝ>|]2MOs0φىw,iGlp%\X9EYsϞe^5I>5 QNqPm0P{nKFuCͺn1E9u~RoE~N%0wP5*+ޚ@֝6c]cpĝNbssu5IX -bajn5It/ل[ QhlLͮFӗ8%(ݲXm!y{cZ+#@}`&:.C>y_ 0ԎלDCȋuT|Q7l{tg6N6\.E1jvpZY*=TXmzV.6)8$WS-JZ6uw9OEfI*\ۤ8aoopuIm[>S' 0vm@E!) ,WllVR~rs{Q(VWJ%,ƞ'8 w~P--i3(ܦwsQsltwoLa=@%DG@6ioӛ*[8NT Hٿ~-b v~+0)˧aM:0-~,3~ڳ蒗%/,&,a \/.N@0k~F =Ϩli Z OǦo/>_L M۸ק֚z-kx;󚠘=s+lTX8xQ>,u*[?HiA?j?.XΆ6*U44|FCwvpaad᥍i6~%[0ΕH0 1&|uvД2Nwe_clr^CnKbdpPtiGc0LdŁ>N,r"Z1朂v [o}'̫:wh-ycIr^].fG!/؏ $CXPA)T5V~- D%yb"F,:.3qDC 40hq6DڄX4v9;XoՅmd(1Sg3}Wl>ȃ?9}u\O2,W9T n7d﮾LkK"Ju(K>]ō}AyX_D`398+K |o8ijcOe(R㤡 %t=Ld΀1,ƟW`Gm 3Ǣ/Wheҫr鷢xؓckr#BSQ)!S:$G'md?cg-lFy7lN6>D>Z'dY73$kφ5n`\_.7JBupMv< R^\ݳ8O{!˅fZX沌}6ʋٍuRsdmؔ9ah)&g>JKֳ&:LVć,o6j~Y3ѓl;{J%iǡ,Iš­~sEZ8p5$^ NͿ Q6? &xޚa¿H%Xǁ{2"ۨ E!\A f6jvcȮ`l\>p٧}:a]Ib#OF^ @y3ܹ 0pw-G̒Rɰ]sLi\ծa$LՑưfC{뷵ǖ\#n+b:Dn4#WZܙ3+0q-2 \2n$<1%ȑZ;EgwH;=mKE_g o f7_eP</t-Ŕe~(ƛsal50|8h!NM;Ooc'|w: l^Ěqq2&V^olL$OkS A9fof`z$c8S:豟qb9C{|2k8(JE‰u&Gߠ2~$W08 0|'fa\)@"/فh!]1ݪ{b5'#Skagb2* YҠfV+҆rr/2 9h&2uJxFun:0o5l W[a\PNY.ff Ww<#3R. niژK5C BOJM}RA'U__Gk}L0W-F2&Ve5 g>ug/kl9X&SbճgZMmMUi9A1Y9Օ=Zu_5~9b\old-bY0qB,/|ɽ2oiڬG p'd4\0}6uT\]Grd2ՑXSR7Mݎ:©?Ms'M9zSZE&jjuŜ,UTI}^Iyll>pJ{%mab''cgbCdE!`%E<5־m*b{g*It0Vc,P0?U@̌Yiن-aՁW_οaJ&XՑ2yAVxlїBCJl1Z8:~6}zm;se{6XyzCkUp+X7hjj8Oi>bP Ǘ }XNo=%V#ZlQ= dm58r.2e'Dr輔ᒀez Fm)ܣ!-f8@ՙ~힋~GbJ'U1-]/C4$1<gvhWyc0賅-ru/]u {'ͫQ$x͂US\6{ 'dUF$#DȪen3)@?'Ɩ/aY$5Y3_ bCcZ/A[ s1l\K}Mo!8ΝDQ`Y?=+BfU+Ϋ'86_L($m}%|h/Ϝyo9XȼJ>1:c oNt.f$:c':ϓwi%yK2WyQG|[i޵Sa %䔄Q4x;U:G֭rV WTc^?%| <Jv1'| :lx\\l,}E2`S]J K.jkvoɢe^0rhy nO|c4z CT~)_"_(] jCn+SIIҼFb|TyB\ źH_FA سlGů~r)mZL.ݯԝƻŭfFęYYNN T1{P>9'[bXV:5̜phxhxM,rIxfg1۶w8ikh#]y@mpT%=yA3 X&{A-HQ:X :h꫏Y<1U5YYjנD1[ @q(=(ϕ[鴊q@;cq |fjxл Ri҈Q >wZ>?w>9+%1,)¨K1zH'Ȓ2P\8S8H~oB.gTF9&ye^rar ŎGc908G*Z<"LE0\aUɜW"/#oW%keYTCRVPw\o,.«BLK3=X/:4P錙MP3!=AGV=S#pgs7)k"M$ :4/èL![/堄3iL ^ T[q,ߴ@}n Z]}uHG\APx9oXbU.A`KJk"2PK"炰qhT['k;j{>̏8!L5bw0ԋ?*쇥g*O4YJ:yVV T%B(iMev3o{Öl&3o'=¿Gd}b-J?O&Z\+)i+oKXay:Y2\!(!, W8=p2q= 0ܠosFqv]|h j=vNÂб.`b vq9d5^^U3'#7}9ctY=$S'w8,<䊏3DpRb[(wzH3S bCw{yDv¡_ݯ,=$M/Dk&#$i*y]nG/%8[8UkaH:qUh'nR/ xn8vOunS%AHy^UF 䟈W,B oIEyg v٭7P>kh5vfZ3` QyK|/QЫ*2lԴs0 vN \+r#H5q~:H'cv!&Zle 0"C)ZKM.0. 
ݧUYݨ`*pqGRIrRR)s"'GI'0 R/ҭ+50&]KHL0x INWϳOX YZWԖNp* \颭rm! h=5ʹzY8YRQ+#L1‹DE\1Sw.߃"uDvWׅ1EXӟ4ޤ^M6%W#6OH<ŏ]e6Wrút# qZbSLpLJ:]Vj۞%bH q,;= 4Վ~{T2DE8c|E.{Ev sWŸIˏr49x"xot.{_(נW)&o7 e5j )׎W^6*O\3HX^ѷ w*i$ՑYJu_ЃS-o=`ğK:;9hYyRyp*v qg#a.#֬o'L[C`Rg+ZSNvMyTubm?&ġjmڡц*0'!b[ѹ7t-4b .bќ? md%tYFZn\>@Q }/3 7L1̗268,1-Z d{ߜK7y ~\hq!r7;v(?>}jtʃ7q9c!0O2qhŔaȘyڼdz lzwW3!I oؠT!4kg1 rS\:?/vbH7GSKq\>;=LjF~PfJP-.[·:u\y4uPwV"Jp%;"h "[ˣi`e{؄/`jn?ԣyff]@nݾmQd dwnxo(,x) Ԝ59YR\ӻK΅|Ăx~Ρ"2yi W<˸l[w*5_\&TT7賞 K~o+0HhWY^+-1kBOh)%6팟*{bO5bl ^U@g//ns{ܙX`@ʙ[T}S 9`G:c2 "j[9+>LU(=}V=puP@9S"(gPl PN>},G> ~o[^ߚղHXfX^KqZOO&6u <#eKObiJ/,b55Syp˳ Ef*8a.gjw({*/dLJc(ܮVۓPҧ_pg/O <Y(&ޚOŊAo<l Ӏ!li-!̙FO!'/:#-'}]2 q',Jҙ_v\N9%k|(&R'!DjQv'Z9.ǽ]p{R zL|?kDQf&"$IkaeoDJg=34=^Ndjnx#2%$14r7_v9\=&g١P2r9(8WL氎aIjaJYWvalfߑ$p8iC,jLL0'SpYkW2ˤ}2H\gʖ9_Œg,ZgOdMaĽ?4 5r&DpsV#0#>FvIJ %xX 6 cxvby>kO,FR#ӏzGkxsxOՖ&NޗgĐ~)4@:nO}~-!oݝo){7&'w;5)s/< 'tO ,L?rj1FxM'wuNͼߌ=1þn"FE/M_Ex sΒo&ǀ\kX@W7*ԄP3'-+e{PIdҢ=KXC[`;l|7t cz,b"*㓿ȲM)7~+͐{1@6"G[YyFA dK=҆qCGoS\7oy 3ި 6 Ç3V߸V:bs&'Gfcz׌@'OcKn̍t[`;:ޝym98rjT]kgdDڌV;Kg&@u?_;l)FsSd:m׮1rhwz hO ;AjK*NB!N?@ݞq:+S6Q G&H8DCFN}@_i/uN^-;O #]TV ^h_TR x4%%d?5F{{aSDC9-ǔ%^p4><۪W2|ƚ^Vsza4.4ehV5we8G3jgx^xMϮh>JO\q_ '_J=Ow62պ)߯xt '܉]ȷU^RM?Xu7=\^BgTfEIXsi\@Lޝ_qj巊˦u f{B.@;xWjN@J )}M5l@ Dkޤi}C_ SAL}/08aC6г.UplèFkz1՝Ƣ{i r3p/D Ogc=1VTуEG$ԉBKҊT}OiF܁9y+/M !s(B({:OYEvF̃A<cC:bT Iz5ޯ +w"V8r&4`m!9,2N.Oyivc~I=*\^ǯ' nRiBOYRڜupW].~`;/p5Υ?14GjKV v'&_ҿ姁%y*xy0•9 ,GȨ<\ (A ka6ol=$;Z繽LW_r,Q>C[kLF`sUSXQ>|\~lk 17!`:!><b(3&@Ϟ͇/^8p,<آ 5Untlqkm[?݉v#4vyoI,S nըY~vMv\FJɘ[ڨD./K0(p^ȁA<XN%HA\_aL̦/:ӝaP6cnDl|{$3-i|lK7,N=1Up@8k[W8ӕR_/KtJ6V&'ʱxS{fxn͔p[n]aRRhxBVg&S~ ֋~ʀucoޜY0Npk^ 6}YKX1$t *՚H T`I7;h ?R}hqcaznVS(}_Mk3=ɰlC3?Ӑf#ҩx|^ȗh''B';4R3[ѳ to DM PӓGΎmG*er<͟n+(/]hw;)BAC@'o=~'};fs<>|_N8fBw4)'ّ!'Q3HJ]}Mr`$u$L} rɃ$SiFBVgeSE$JDaIs>h^~nX"M p 1;W;3M1MC[;)ə uqA!mky {Oc(NJ,)J2D39.pWx'[ 4XleWkQ[E29Edx1Sܺ1Mk3?qp `13EYE1Iv2$%'iZ2a2%0i?Ӥ$3|ʘʘ<;/tÔ6L+2h.NWi*'k!bL]VܒKņ4+?bS^ҍCb(,|f'c:Q`~M*"c#"'#G6F ػ >4E_Vsz0IW9qDF[aTYH-c^<^Ac*Km:25(Jh7q=0ێ#F箇CYc+Ru:FQpX#x4cCD9Nϰwp =mW|-N9e~ ]W'?_|t~|`gѮ|18EV^V63!sД`dr{ص4['_sn=I"_b00+|_vÁUjןh>SنR0,u;Yxnw"D/K)+m{@PwM%Gٚdzwn5cNy|Nbc'cAȱs-SPӬUd!J߳pqH6ѐS#4Tx|Lqd'lǒ"paRHB`9Ulثinz onsN1Ԕ/ݮ )^6:-(R ^vlq,"j-6oFIӟH3}]nf]wdb @J ؃4a- RdkYRڰ1 E׸Y oe˼_$M3Z,#T>h,զ{#0Dqy dWꏟpfXp_Oq]ugrI&%9 D [Yp=fI'|plU(Vt{|h䘷1z=KAkc0SȽc!I<#O<~Ae~F?TڗgN5_x|tr &Sa_LA?>;.~ sMd'ʈj`18O4Kҭ0]{OiuaVj%HP _]xy&WtJKсC@NЕG\zNӃ.ѐ>t[==M,p07&|l9( 2q9!Uu>-<-޼(\x~f_9D+x_84$%q|7݉_BSvdΟfkYn'G8 T<ľ坽AدL[O;ˤX ?'fМ(:`.ź)F=dIVɰv$I'L<8NnY-kg<Ґ`X?ݲ3HoA|H,LosSnmmm+$IAiYg 2k'ö\T,lUx#!aYZ SUrCRe7EK~>$YO_ El&rǦ|=)+*KP?re~R:7-ky#)O/yח!YmFc8<:WD%=rzq;[b~%2Pۑam]j  i0k|uGpa*,agMtN`l@\讛r.vcwlR@IZ^BNf 8V` 8d&YQ:;P?*v_Ft YY pRi:8ML璘~@k`+Cw`_ AØ\#i_O5;RƟ+ @9pn"g? a'cWRyqL }ISW.5wV0Ul !RpBSA9)@ yޒ [p ΛyOvYBTwOI]8qjJ*Rf2b?=Cξ? 
:B҃Ts y1| begtɥ|ՂF] mйwo~˵ʢO,.+ңO?}]_7=hFzEl:PCZ䴧j;}>1/+yP=W&e3As`Ll|0pX=pdbN=!Mˮێt4OR nByU{iw,n!kz9n4ݖvuc0xK?[ZNQ9}giP4ul.)@jYyT-5 \~oew靥j)M;Cgwȇ_ O^DTݐ݁#i/tIŴU\?cB Rs%{VS^8~n69Tj(d,;ѥ}\`PA*C9_Cy@ٯS4+TBvDqMu|z܍] ?~o[cVܜwK&$+'g3D#@&?AAc_')<Q{Pv^gCTb.(!hD'%]EGI Lm)7=,5*R7}JSz'c1^P%YL˘ZdQ0C 3 (aXꊉ" {"U[IIBt p8eUκ2Wo9TAYHcofz>3dd.K Zj𹽕AmZd .IIUumKnH!MF_M%Jz^w̟bv%S 2wLE&"V 0u]&?J_d6y;DVBNb8LμI@DU;< )!&C׹/Ӫ05-vV?;{(Ac+qMfHepJ GK#-ui;9j۾ҿuy@FAݱʮ][)K"X cE +ߓWnܴqZYxdۣ*k W8l\[#3Deޅ]sv[Hs4DLO3D7NW+4VмJ]'I,16L+{멅54DZ0_70o11+{o%YFD^9SWH4.FDp`Hf&hļB{!k\S|ρ^KFeP _I_xݹO/9VB3}lL-C%,Ox9'A#pK(@vRV8 ?BZ3~F+9_6Ew!yk L mwIFӕ۶AGĮ>ϏϠJ2:]^s$auJhHa}bуeN{6%(nZ Kix TED9 \I~xu؝I $ccJd"3ABA[ef%lȊ4RnPx5fI展DV_B-F 4䊙Dx(&'~g)iX\0[KP/9 n1V3?}5Pϒ[dwJL"uÓdqze`qBh1>&4 [:a\8YN 'q4 :'H&)7cPV=N&]@cb@9@^qL ,$giλإ& +DUKcjr4&n$;d-[$ʌ+F)oЪz`^.oGN#owB49/O0A|s$au6;b.*MO弞Xb6b(JOcgՅj/_Ӭ3P~x7A4KSG\q %j6G!-H8Mؠ]m7դZBйf][G,-iiqPV`JA ӹ @v*{o#S+sa8+5\ ?2YQ.kor~m.m->[%NY W #>ܨ^V&>ƆřFDLSx.: C}z9^aʾޱlWe1=\q]Ӂ{w39aCO^TbScl)=*}!es^XGyܼM*I鮕÷S!' J>8N jNŷ2V0G?0 }6*~rstC( ,jomcHVXVg oJ?ڏE=Y?Wn* \ k?D*ƺSfOa :Ф93%,ۍ;MDW'_V\uYR^nަ*>?Bd)Xh3 syk7LI} USV {|+hO=}07w;eM z)wX2|N;OE(56ݹo;se?z%E|.H]rqwO]ILg5&Iw`d~%̢uj4yuNr"wW'Ƽ#$tvk>*OS?1%=9?rd3Ql tSd)&)X{?[s֔Fk]F6!vW-}v/ 'st?e<4XeyI`$x$m;e'tՋe6@gM%=:1&ByI]E{:Ȧ]P#)`3?eI-ҰڢT>x=j"r]~AzES WKtە;2e~=Nb'%E;Ob-%oRS3{'SY~ԫ gP_|K kM~G >Cn>_NU&3$50o<-TcfF({k͗u-]?3|]熏bNz|-gZJ@E+P|n*7_H/tNvZ!fr~Ms(Z_yyE0xc;dDIM鑦z~|˄1>]s.< e?ƙ7S/ m-+joɈ|+&ܴ/EȉN:\=W|R~pm [C `=Ť`,ENN]ӷL8Cs2E'l7;'d0'W f,^2 wn=ǻӣڿ49Z[טiBǙ骽lHden 7pv30sI&#RNIla6-RIh'Oʑi1٤f _I#0)JAEɄYf'vu]xDgF2n7j1ixNqnaJ`ޒLQ[٦[kܦF!H-.s5̫{\߂qr>[pꡆuP>yzH~^i0,XRqe|"dS_Y&,$^n?6'K>YXA%12ji3: ማrިIe܁G6ʙMٶ$}`4$^Ui]bO-zZor)Ujie:!&lC$Ko8}s+^v5/ld%$ nq,]FXDey}+lzR eUjm\bsC?R 7lНvCƷdgg!]<98q,)6,Bm^mf&2{v8-҅kᗓ1-0 \\$eZH?8Rr)76$b^/KTN88cb6mfNПtI1ه7;^3mENjFYPW_l9|3\2/ 9_[f_S{}@׾u'}E_ .=Zƈ\T+g79D~achډ1{4P ]!֧/" ^%\VKt87WWנ<U`c2#%s>i⾋L%i$a q=ǠfP,^Ʒ y阬HґBy\c:6z?V-G6=E>8mgj3r_.vcs!b%\cm<̺x%_c4 jpPZdivFs' ;yX o,Cg-o1C,f96nt v(rgj=_.5%}beod0tAHtq;ks[씂pOద:z/_WHCFu>g^ <Ҽ$ה:搪x`//Ҭ_9s"x Nk$ؼ]w@1 X943u 2PFGAfqw]yd4E”kJ"5IE[9UcЛwBQ$o !6"tS)"`)%:Ԗչ"8D R"C82Lفf%ʄTZˎx/?vZbN:dH|6*rz@)r`8qLT.:V`Z% -׮ockN~oV>֨) *#H> Z'[fK&v,Ig6^'cQRm\ hZā kpG VOXq6Úb_v606pDlzp}Cjߡ1n5 V ꂓ8*]^ciC>'lE`hҩ!ǰ魥4EzK{Tcۉp9*^Ŷ-A/9o.aLhxtC'Όd֬<'U.;> +WNY-ص>QuJ 5L7Di"OAJ!4~@n!l9\yK-uto, 7ݰU<5 ].;:S֦>⑍]3J^[H9@$nkz7A^mOk2«2OŎCTCz(Ls[&k_L6gdX \,>^QA\rۦW6y,XP؇.nN9ad/ [,O˂l(]Yc2SˣULˑ ΗP4nc08U:Y޴Q2+m+z-kPrnmwNnݠyp %٤3Ր\%Ⱦܶ _iMwȡV eeo0!Ě g~Jف+Ler=AM1j28:+eTDk`KFz빾>!Ǝj"Զ~"o1.ޯ?wU zL%pM܌20mܜD!#|G_ U>aCn @ݕ 6eŪj!YSFƜkM5Ieo;v1 r{ N|`1߿5`Ӯ_ċ4~=l#Bom\.cBºIiwխvcģ̗iАc)զ<B[S9uC|[JcyeDgѺS![gƨkɥTI._9س-c(|1>Pʟ^7c`0--ڇ=[:!$jXw_Ş0*!$|X+ys?n w|s7eaPA눖:cvJ/!N'@Wuf6;}{tN=y ?:*yS(Fff}6;LGjDR'\YoOddRbNҰL.@h | nFKt %[8OҘ6*׉,sښfC_ݾ'uafvEI*AU}5e*@tAe_'rj6 /}YF+uF4eěv6#Ӗn8|U0-LZ9 n̔~}d, k|YIɘҸ{t~)YSa(,&_X5zy !LKu%XnayM(ò4o{qԥ*4A SdtM~sgߡ-;g_'e {;I>8^% kk1K2b-c-4O& k1{x!$\&:y;v ( s0 \mn e20\P70{=SLl|c4"lT76M/ 0wl-)sa88zsyukP{LMH(Lpfmkޟ i_I}uLfc&𖙆&JgJb/H2CI/&_ch;|{W}#׼i.{?^ ~};B9y[sXJҞ YlrvjZГۣ ~!ܑk1ϑ#k9;(Qux)-Jc_•o7*#CJK @eC#d1\Τ`b)G[mJ].[/U<ixGKCC%Kٱ25Qt@3KZ߈]e+&ӡ& er3/Vv@O_~([JW}Ltף(Ui|&- % K9/e9H*L lZܗ5 {H*3EOd1{Ϩ0| ᡝd?db?^j"(kj!HyuW?DŽl̅Ίq)eٺm~~f[ʄLA(GN}T'/>)Ǖ"z=7ZNv33)nZm*5sdr9XwW?N)}>yNoR.K@53 a=dJpޓ.z{ $,ev.叡dJ =?x7~i{)jguE9i. { A(PtgF-S%e׮DL^QK_I<0A8.peV"-֪g&ۦ. 
Ŏ$(%B 9OlyU75YMǰ%CxgK*Z&wkk>aSV )qƂӷQ37i2 xoW:3*+"詳SSXȪw]e4aȍƫJ@T[E~~ڇbUtAc!æh^}*8toqA8ZxC[sLuk=REpnrOD`';@.D77y vb+Buʎ NǃԼ,& 4a&f3ih(ENϷhJPC2n(O$:,+7Cb,N5xbV&:irePl۬.Ch, $Iz"5A^Ynv%G uQs~|& gS켨Kӗg6`~7 /z&UFX*gYl 1愈ti!@kq孇`D0xbQ^WG>cvޢ@@f!iڣ$ě #$!y9!,sd.*;g:mAXIF>Z)Us]Ts'ٶ!Ǚ)W{`~EDg>Jo)W }<nSKT=;QG^Lxiw D3;|$yA LNrF=CTVtr_2^nf;] GSgջ66H9rݽܚn{ϩAߘ8:l^)Nl_"e~v|,phcɘm'v>>G+ܴs;5*Z |VOV HZIXR LSck77 0i >IW:hB.CN>oShKT{T/@7B{LH^({=ZO0]sw/*e-ʒ xZ.xR?@ˣY &b>}>>l(St&.`͸M]|xT?kUfaKYbcU)]mh_}Ӓ6^ &9$љKT{>Ue괚ml 6~TivI4=H!PLщKMxS`{,zL9FS0/c9*:fh~VgA`7MÈD!)g˻76޷_* ,-0΋(hZA2zv Ÿ)kN(R@MΪ2%Ĥeu= [ Kd$ov4ԬIgKo| 41o ~zEpMƀSWX[_ZMsIMs㩥-9}<h{%t"Cx_".4 XT7pPhp,YӅ U*Ϥ90\֦֗l# +Txǭʩ9]%DžL'!R7kZV?u-x,$3]䕽 !rsΆ>ϵښ[]sʬW2x5|29h/e5I~~h3;k[N󩒇9z$Dj%bwj)) ݚFCo?v_"k&ɢŮTaP-X8=*n!sG?3n ukuu)E{[[?'8ruAkծf`îz,>&OBZs.V&l51!;@VOvI OCJ*yL00I`9.",ްP3sge?>B{,)nR^xŵ!MOE$3bzbW%ɤ{:OM@ah0e l:هنLpS&ÍkMH sTR {nW1+i^s!Uo[8Azu!o|e[f<-ruWSIDh=TKBk'su0š֨/٦)z=!;RҘM;JLYsV3 <ϟ>%u0VȚ(Ĥc?elo9f;C:ƕ2ľwxJB Na1O95.-^U檹K>t ^ĊA7 MS-Gf4HBUUMݗP`K-ȈSvk('4 _p 0.t =CYq:o\tUaNXxYm)}0DWEgQ\^<탣6t:9\_|i:- To)Wt="qߘudacfe{Jl(-9wv+զw$0,Sm󬝡c^O{a{_,]P醝I;&;h=yDtQnYڇXT-G!ZEDv;%4D1lEٗk~qfRN\s$[[ֿ8zeIY[ #<˸ŹaSM+6o&d^ }%LO'2-7A:ɭb93eo=ˡb"9[|jsq6_Aɴ=[4WXi< >4uYq'@HR~"1.&W{FDA&;r2T)WwbO <e'wHWδ_ȁZLP M!]CҽC,̮);s5887EyPsz1tH؅V_ԂV7_)lD)taJb$ c K;SKt:9S׎x//MSn4-y񟺛rgѧ5dxB3J/U*(iH?#UJ?}ХIhlO&OṮ?*@QQ1|tM¾ "c_N-0 `w,.0]idRlď641;^9\™Tubb+@Zʅ}_=Ā|=8ږuy_X:,skC>"QZ :4\~gOo+|@Jh ܆ϵ>+LL6bCHo`Xjb~FL>k>fyL ʼl&^%o]|yǜŢe$l&}|F܆R|'.c!|].̡u}#ל"̃?slX ͌yM\> +t}P2mP5βJ͵[e{~}-E|;63z&xgrWl8I51˳Zw[d̟鱆fuLMusI=bL<=fN.2X͇$E0:mA&%L;%c`[ݗ`؅ң $ja41ܻX\x4IB`25AAafArL%vK@1oп񭤔ZƔkmNL&g'W{Y©g7=2iy.v1$5I_G ;_@*Kq賷\Xzͷӷö5Ɉ 3A;%ʱ= WU0pj,Ɋ!b( @7F0-Pd}L}P{^6د$[xPAg)*wͫ<6н#ҥvYYvoI=}V~tώ @ڌ *jܭXk[|Qѽ N=8a/(x[K`6J>䶿4#5|b/[bcgawW] CS4,/`g_ƕ$%4Yb[tl=?N}t yСe=xލ)J=Lk"غjW4 هC}w4_]rr{zۚӆ-F,??\|LCs8msa./I>VZ\N#\`FWӲ8#l/5ޞ3adMwe1}9[PYq/ }5#vȶ1&$hhSBZaX|FZW^$6l+Zcl vpAzv_aҰhS4wWR]څHvϏTPQ:woGZO=C vmgzBrs3XLlT%A=:ĒH͏So2 BNd^.0ni}YYLj1 `X}䍓D8TR Y?f'9֯m0J.$uS]Yԭ| 3P]LjU6/7SQ$~UE,_:.љwaW޻.gSsɩ 6U2}./{(7n Y$ ~*0 J%aIukF챥U?=q5aE[GM7D2V((!D/#rGһNiunbA(,!ඵWrk߾Ц -Γ7՗. $Rdxk:}p33v.ZW)yU[6 *OUkeS΄ɉiV?\9 $湬cڝxc"[TA[ck; erdm &GF)ðz/F_ `D^xXC?Ecn"#ILmtbp:=lr%oK`w?#(\S?kX7I+ۡI? NX3TȦU [e(7tz !FۑBH͘OƘ.#-n~bs$/ a 2 }cPOcVoGAQA=c&\5(gh1C IMC닞\%p5*f^s)\ ֤BX# 9W?W>qV^9PCjwtR$nnC9ֲZZoX %"7 9}V3hl8iAt;$ H7IA+?o1NzNJ9GSX VPuz~M4G*bXfA6+;hGPRXj#:L5/z0ЕT7]L LRulx teZZpbQںhcQpS׾ )2A>r trl0HVnHNAV$6`|x8ssa)W}Q94ГͿ+%U %+*WW7IzuVvR&Mڈ&/N@E[YJrޕ7?ؒ5\<@cR~=/f6ɋ/UP=ds{/ O~5>X2? 
FS驎d26P2 *E@VzgϙR oT->VM0e 9r:8tZdɆb˯,Զy+4*x\\rgZʸsTFb_KhsԵ^*렦&]zKGx |&Rãi[c1_LYn͹s)Om{t5KBYDrcUVȰiPپ۪gtM-{sT X`.gpaa3t{, ?{ȪSegG=">ɇ3$-R\Zfd+} cs8*SlYz: V"A}f8͵49쟹ݤ#D,kSz DFĬvpDtGһIk?)Om7ES~rmmfv^V4}Mpb3!fΕÔ p4Es{"j;zU^m͎K* &cs $$ݖ8=1U6eÈ /DNN;8ŽMAwJFV,ĩu%O=p(Iܼ͌j+S'zBk>0]&3np$L@=Qkt=[ETYlO͌ YHb6X!Vf*޻MqS.]J%lJ[Ywfnї`dYQs-4)7;?k氦|p{o4C͜czf.޿-E~V+|&L2pB1VWbDS PS3H:*c'˶RTLc>SܐSM"McG_6iM1`A.ߥt8 \9eGU}glwh3g.eRKNCk~KiD: fj_>U?5"ե-'./.noìߔb>ڷEf M8~z;=z^1}qn>!Cϑ ;B 펶iz.o I4)w!].!GnL$ )ٛ ( 皯N9v_PN_70&d)>o6^4Z'(O~(J6>9zףV pIDwFԷ;]Lpycrn *e7n(lRw^xS3(ʽ.o2Qzk:n)T#;#Syy7d vFnU%Jx0Pgⰹtl n"~Ll3e&}9 "S]| jюu)(Jm(!&n=aDO-֏$R]8y߯侽;U7~b:YO3r9?>ޤ) ^*CtwQBvڭ_i9LqSa%t%)NG^i / zC(mu[=g/ԽksҘ BCA&ri]3:U3Ugaf$^ѡr~u]!H;(++Fvnؼ.P6Tޫ֗Fp A~g@Ju3/j3h;hdK!idx @Ӏ_v!7/7" 1Aj5E:p_,g>n=q1F 1ye@o;t;%!5?gGL)}QEa!p"R0+@_Rb,Unguc Nr-1Y2Qh=q&h0[jcO02Prl,C p+t0H?+)xH1Ŝq>" {}fD=Ra9G)-d$Ͳ(UKd<,k,DqL~{N Ï;"$cbȄ5-Zo|7xgYG΄%y)^IՌu9u0q6} >#PWN?otsWMHrI͎(u5K6 \،0oe6~Ϟ'QwHPg/6 T^yf~L7dvB,<3G3urTЈ]#G3_"Bc9SD(l 'nT/yCO+{3<(C6JKVφˀ1!{~AJ O,ϔaVnelV$he {R˷t6|(8,u#UVZ&})B_|Ps'Kyw8eGe C'5{4É?[la*6|d1 2e*(vck'm`Ų{MmܔbjD.KWx!cu d{-΂ڋ+{!x-R(!(>)FcQeTce3zG&7b~'e ξ k4#+4Ȍ 1:`ꀉ#! qT·!*'w!I}(i9:yګO.;b8?y;[CA}K*-O~ǁM;!Q Ds= ^ K\ѥkl_ib/aEڤ@uNZ^'a`GUP[Mx ;\UQgW%W9eMvoxƒQ΋M 0г%´#\s8X3ǡnL/9pxt0#\]Yoǂr+j^_dR6pstuXm=? _׳kmm-eKyϸ[ӧˏGD?_xlCm^%sG ( ̆!,>>i5Cr.k:[NR23t?U;ŦJXyjߪKӏ̳6. Ig;9E2u:Dٱ+~ϙJ#-C\ f, '*8fBp,ai.?IO޺ k57Q?ߙORf"&w!aϸܟՍ'+YќqMIG1WnQ?rU ad-;fҽTf98"s!c/'YH}/QK pdlݾYN`?4ܜ{ٮS"?hG4o3Mq[ 礹ܽ%Bk,dt)$(]rYg4Uyh)E \톧Y VpR|[" yk('wC,oԩgB#JFTw;ׄZLISZTfAfh ĺ}kt,5WOϝU/G$ڠwVgqMˆ+;|pLr:YNOy&.A0jkJX.H]xZAP+8^c$y \ e۵7m{8ݾd_fb2qҕ5=ujk.X+'NJܩf Xwgq1VFx|k0"GIbY~![y Xlo# ש R._w388,CΑjZG![[/}jEh:Z3as/fih/Z~kjS_qrc4!Kj #Xg׊+|cg\kkl:\7b 3,O>M3kw!Q4{i!6s 5Q] R÷:>{!2r;!1-&A UAOXA{ۍ>Bo٥R~]]s;ܞ_;l͙~v[!BL ]DbLlrm2D%( 2sע@AJlO|L oV'~􋂾|HPW?eCj +lj {!CX?9*=S~cXNdhvt !җA_R\ҍ3.z |bfF-L]CxU>sk!0k{M3LFKOwH*.vBD萄8wU0]b۫2\څlHN<ˈ2`=~唅6Fo4զ6a6^Rܯ-lbe۲&h-Jԅ%:v"QrȺ XpVwvў]!/sNJ!Ҙ4=Ms/S-_dGHwK|ŽcAo٤MFkWn%{zN/3|?^aus{,,T Q$Bp>D,ZqP[x+qљi/"#tv&פ@HV?.k)W`DBq1vw5PX̱r,OWN@HƱ=M% 8sA[U;  d CtT/dv/ uM,l%A~}SپYQko-PWt.Uo}GVCn ?1 YEDw4L }U1V #/N̓~lA;K5 GBҫD&9+eveEW{Qz Tٶ4>%:9Iqq=?C\wفvn~",+ e]=y'끙7*D2 S:b ҠJ&U$3 ݔJ'Zz8gwvTz Tv]lRm3 N0buʾ.9^6FWز<>T$$Q3s Vr̈́(5vFO*qte Mf9յ`fK}O pEɝiVa/aYN8{;#U}m:d CrL07IPqZ:(f~3h>N<6?ZCF4$L5d|5'иM9}y˅]vTf@9&se h't{`")6n $ٷCęΨ_GgX0UDbAtKy0P# R[VxVĆ-3 TPB;,RGh5y s*}Ϧ#V\(g@UCTݩ:G8T7mLZSԪ`Q:K,.UJ$'ՙ)kyq`}bjV?mo-C a8~mTn.kÿ*=8x x{c@rg cFOoƟ^57 8z DVyufK|%r[)ŤyP/6;^/5ҎF,yQ[> pǧ4l/JZ0e?Μya1I9Xm}.'`kGpYg07Vȯӗ;|!Ă8m9{uҢ,5$BiҢ,MV $tH,_ L %>~Y_du|kI^1!a!WF?z-5KI>UELL@]dYYu2vx~Ai|03t9h${bf9Һ-701s-L;L~YQ Ε@[V7O2"Ak}TIU~'pat4rgtܾc[5:]{豆CM ,IޠB2!NU, *\Dwl^ ^8 n I[W'g` τ*R-cvt>_~ۡ*cόe/E./dsJ[Gq!g^ObW}g1A~,̟=/AUSpsM7d)/ ;1/eiDm2.g` !2m5xV+ƥfcԊdz@oW]3+-EvS ^엘0ftqu xM^r č[is/Od* w/2CD䂡1ezZIY>iV̫u11'1 EZԿKDW<+hV%v˝Z?/w-$G/ua0tib`i[Bl Oݿb,bqd0!QS_`lPNgqlʣ9t}ĺ4;/m#d,er|5J+uy-5_(D,!cumOdH蜇Xf_PKZ$?ᵻKwp.C,UId+)vwW牨1{6m6o?7Ir{GzFņWVt$L펆"Uӈgl#qDzW$QН[4%cSJl#!vBs%Bv7k[e(ꔢL|wʩeOӱ cuofv ,:cY!h=:%WDأ=jمLOJDIwIߖܦ0{M 65{^% WF6(qU wC$U⦦Rl 2J4O+IӈFʔu)YI1/փk(Œ]8.[?Et@^7mԐQ'4Cel~1V"[]6kz(Q@ʍ[Z65uf ]:kisp67¹*& \N/aJz[E]Mw$*2H:=lzʋ9IOsC&^Coݸh.]ǑǍSW`%dl22T?upTU>A݈" s~fV'[[Qv0Sv+DԌM>#_m* ]^A`їd㴨̉ .:}6pѯJL*l .>S/c@J+GVgrIY?Cá[nՒII\ׄwjRI?Y[ڥ&=!RV8]8A^pl]X8N(},[bnvG1_ی6 Dz} ধJQ4u'xYi' ߅ DI]E"w%s8z! OP{c ~'O)uzffK8g/$D6F-+hOe41xx}]"W!Dr ^^ߏ}N0]9.#E&ty%! 
uT)_T;زL[F$edxE!bnɯ캐b7/3{emUzVS<bXF ׵TRHyVίhқ0}H@%\` /e'ᓥs20CgKzri0nxD;㐻u{?â>?>~534=W,w^.!BLGCY[ӒӒX}6- !k4n?+n2R(ϷgsvO$,<h"`Tfa ^ f!~rx!b!Skvr9 Ce=A LtL@Vn㉷}4 yiwR+9~,Z;.+%u?v[cEǖf"9p%JroKm}Oaca5y2(mPoŧq͐|tv F] c@%Y8_%-V:7kn.&p*q 6Cȍ5ShMM"#.Qdm?ǜ4GLuKYQLx~KRL3ud6oT2=ә"<^%KkײV 4E[ Z*1*\vL ~au;vk2>coyL)ߕ/߾掼yOb,WH~F軹0cy9p<'f#SXҾTBv65/AՐ|`ݎ|_Ne43]vG^|5a8&8*}KJPNO(ABftYYn磯—MjAW<ªǖǺq'$_7̐Ȗy7v? *z@Wg݄.X$o\>27u YµY4ǿj*zT1,y3*̔枅\?=1j9\vťN"OS\Y8Ļj:{Ȯk$gj-7y\)ӂL”Ч;hocچ\Ԩ17 l468"kԔ58*>'tenkAf<_>@VzjU5Yr ,qp*i*`i\:Ao.DozN;<烄YOg$Cx}ͪ&9i&3!-keH[Jv&rJE];*0s "˘ @+56n"{M{ |?q-Jl|o!g{wKt&3 U>0nkTfB5@3ny_`1[W^" qڰ{*(F-ZQI=2]/4t)YENsiiѝfЕCmgYpn]*x쌧I{fZЪAmq WIUvWI=3pbXj{1 /l<6iqn -[Aej le9r6ioA)џJԕ7u~H{~]/'*JV,H^b_sHl1" f[Ժq*~f/N M2I}6OCĞr"D=9(+^N@Iؓ.UD?RRC2}eie8)U @f!%EA7*$1qei_ XLwu70ݝ>pbuBxb=¾%|m|b_CdɀxS +w7í pr_(qY\^ۍ_:> kNR""~5 ]4e VR3C@`KPǕiBU^ڞHN?e7I:⏰Ig"˒e//ʭsD rHR%B,wZvȟ;"}~-R[xagR^}NNjvg^$]hOuʬ\t?1*؍20*W 6piw&kJ@!V?`*|-\;"tm9`T0SWj) ,W >9T?n]Rjs;HxbJX]vI} 6]H(*iY)þAݱGs*XD+…>1kd/Vjxu w!r`"<5 --dW ֍Ӑ3RW> FHb[\<>>M=l}RT߯ۉX59MQ5+/X#cleϟ{=Oz .O`187ow8 ЫgdwŘPj(G$ìXM g#9RQ8I'bԊ8+V񳉑y^4\Z"ȪT {%O).^)1\^6kU3q7q<ڇGXm);[(lӐ%X"G9|䍿1Dv/eq^~r_>%DH?첿]+;JHM^eus\0 \9ʾ5Nz> _\E0JQ[UQ4 yݛgd'LUwoU;k"1s!H׎~=0}yW叩 31gRgMɸ޴`_SFuSwqNJFos4"g K̺@q,_ke;́,w8 |~u Y7Bt3qDb4)xUaJf}5uڧVцƝ{c<-pvmj۾WEzApG:ybH6zAϜ7[JqPfR] T3 %, )L}4  LJS##{! 3生bt]4o! %l>wלskQQyMкz3UR\S|;˛ ɁWt4/韡J.Kg#tNN.D4%%0\q lXoS|3kݢf̞%Hb)o<z< s96 PB`R>G;kS=K!A1m-aon_bNҹ0T]g` xTؔ"?QΛ"BIFP$WgƺB+!;ɪ8MhZ]ǬE6{tT(ՓW&:yRթ\qsXɐR )z,;tj۞c<#薧/H_^k=6cH<f_^lIo,15fETٴ]ceT3ͼʿ"{Id{Nܬ9RGXmk4!2'Yu Йt ׺}0( g*3u.o|_ؔ geq6W1׮, l@$?ufڿq|)xbuk tun K$aE4&Byߪ-pC{x85Hpn 3.&.&٣UZ䨃Y:3J%% dZ 4ʖAPw!<Π &[ u&ՙG,/\KTd՗r%*b/*=ӵ?sQ2GC3|:}4A ?E\e f7h=o#<=Aŵ1џ)_B??ݿp 79}%}Ҭ, ;:7Ak9+~& :Mq3%:1c@Z#,+ʹDG3t ,{V /1!M֗:d5k6植5rYsrGʰ%hk&r>.uAӝͯ^9TMI-iTr:MXck%5悴XBsz͇l滷d&|Y@5Sh˝'#Gd ؁X>Ӿ}Yۯ*7c,9 '+VL.o§OEy%6o89b .*V^Ǻx%} RKrG(+jU'Q=A:?X&( i~^[Q}t$ZWGN4j_ktrՌӿ/KSF@8yQvH 蛽cH(LhSWk*.jS",G1 5Mo:9цof;SsvՁl*!Ѝך#+DgyTAA- Gd~v-:V4cOsc幓_#ܓΌs:F$#~uI`j!TLMј?ԉ4 =z(CpH"@i[Y#W>kk|rBW.<_vvq?T.n))_,o巺޸g{_#6o~?o{uxSٲL&|v4_~z)T)T> ©WXfm׵D^L,>Lv *$Ir}WwqތgL)sa₍0Ps'zA#Ϟ8hcŽ4vJljFL0ݲY*<{Ѥ>h.U:.cakR׃lù;m75c&@:>NCny7 x& ' s?jS_1)b߱<&p._luJ_A`]n^V.inaQuc5>@ Dk'VB?;|&WrBG~zy"½HD$B4*K_4IpcEƒp"e13g@oٚ*e%Z8lI>Iġ(%j5|(X8 ,:Y!tgH>YnIKx s9w2(6?GlGLÆ~;qP<˦R5Ud$ug7S׏bX$_4S.xTȫ7H6C*n(Jۉ xz>sqYd<%j؈Skl~MU__wO[p0ctڧw<9<% ؖt&RNInrǏPy`{"_pw.0=?t`6ӵᐒDu(zoOY|ѹQA?;5N\tdAyӛa.\_m͗J9)ԐDmvq.DB"arϷ%RUqfZDj&ݦ.xHV˭ a"&FkbtC~_1 ?{g:۶SnwHrcơ%^{LnaZ0''ҙ9 xY1=bN;]<3sRB{ B6r[6e_ҏqHO(Hx?v+AUzz/N;烽uA&R{9\TM3Y.K b^=jobˏhSdj)wd2 ~Qn'e :Bd9F8S"/Ģ%\4LhfTb#7sA!`:"[j ܣ'㘻BЎsY"&_xiM4uu-,8{)Aj #Bb-ǕԸ̔TTdsH"0 ֔; 0|=Vtal\BC[:ʗH޿1uTzwLќEɞ.Z|Q#x5BEYA*m‚ּD*.ڙxBTr{>>lԁ)bN~DtqOJC+KÒJS򒱬B 4OxWê|";RR[a:j hT"ǶEgod&Љ./(.f3/ь*9\X͎H">_:Q^fkfȦ f ԒKG} ^sm% {Kumy˟IRA!6p]b;P4L u \͙|Q5&2p~;8#>'*^Hd!N߆P]`$A 3E:qM|A`eP3[3۶ʇu +^A, lܒCzB k3Q7/(Y.5e5I3 c˟3i9 #ws2{y}Aw c 8o^e" §&яX+o^P?<$TYhylO7qx 8&M˹?SETD!Zyj{d}iߒ¨T\Qzw&`,S]{79Ml DCqb̼BɁ>1v6/f- /Rn )GJY;prIk;ppȉƹ#!6`p"u?a!Ńi^Xhk&s H#:;4|gմQLe[9Ik08RtG^ Y!{G Pͩdn[o]~hI>?.{4qbMThtblqWBa{^׍҆i<c~({ޗ9ßCġvA<&Jrђ iIKܜ5ގf8N7@x+)>Z#t6j{k?@ɟ53Q"3͟JGGT" ߎc731_QDXӟ2 \6ɼ7݈N*5g)Ud(Rst^Gh(j31#D(8+:$"Sb-ۼwM:_Ej1nw2+,K7sr<={ hI<5<:Fw=e=SFZƇ̗)׿LiC%u܆HEVnyUl0RleY|Uc?6Ɣ!]>.<{{oǹ7G_*ſ<֔{IYCOM1@!C -W!%(Cyw39a>%3>* R:P\=,/ioD"S{Ղ!! q̍%+iO dո6xӷ>Lw?n QAcM_8Ҝy&#.o6[ & e#xDΠKL.'3q,[+PMGFj&Ij[LXw.MQ\ K0MO+^AIUfeIOY*XxBèk !aq=q ;';FɁ Օ5+pf,}cJ&I~ew3ʇDŽ6C(r\!=5Y,1}vA_<ֳ}Soʤx|A,e<78l Hb0Z'AkOm-=t]ߑºwP@F7m=_iوO75}B.l=zz|&wI$x[J{q=8D'*=[<":@m%~FȓK6@QRxE#l$t[;M5G% “ZUOymBc*|o?;RNB86AnV]ơ]R(>| B|%7 G;8̱"0|!qHˠvdNe*ym1C$D,C-B! 
J$yfv>"Hn}i̦Tl>D2WP0nyS'>OW==Y+^Fаڇ#5ߵ$ȼwkUtǎ ~C*1C  XF_ƫg>'3;i ؑ~pdWvi@ D>3G;Hdc[s orv' Cػ je=I^]0rlTLm&I8_Wonͩhx.Ucrdc>Jɑ@o0̧,D*OjYow08Rl#! nI<7Rݢ:m_ ޫ["?"SYmK;$s*vN1S.Ԁ ;Sf&Լk9x'zk/6Bo"(}3u|y, - h- eKk EdK3SEQ=fVs"8N ^WhղMk :9y].;.xկ!c7"w:\-:~jn%s{!G*(y5Qv( qH*H90ӟ{GQήzp]vaE̟|ʉJN+T-*eSB X93l<[_2iLM}doQ-øuVvrsAޔ!>.X'(D'd''NVv;ϒV3Jsgi5 YFO*%[lnbQPYAeޔՠ"P2 L^?xCtqZ-L{y^o@|ZZ`=Lry)6X<#M>d,Y3V%k[sW_UKl0sVsIx_Ó׬S}6.h6[9$7r\8#!ġ?K1O.%]vlfm*?ìIڭ0N1<%}TD/C/P@ `N 8a2W\wC+^;9" ݨ G.#)ۇD%MƸwߖS"H?Za?w3hܠ;ݺnoalxhIӇHVIۖƱ09p2icוT(GAVWJţv+̀)CP aI?K"ڡc'`;w&VY6WL~&Jzt2YFxEfVZ~^YMFjyf2lwCtե!E@F0E ;SӼuO)k#نfװAz5Xݍ KoR= ĜɚQxra0w9lCۍ(ƻM=}Q1k>I|Ɔ}1]uڸ |w68nfofk`4YIS7Lmi-7f^{RC0l9_I%b1-E.v bМm@c ;ojnjoܶc!bN Fڑ]W_?I ㏚DӪf?]m%gȝ*n& _?؂붚Pȫe/mk>)KVWD4@6h#v^b\Vנ7g^֌_\6աZSB4Ld mS0TIVM3&M]6NrWNcDPaEΫ\5q q }1U\w5>BZ 7,R;3CEB.IUF.;C%Qn*m xWV[ }/d̟怒{ :vmn(CL!mX*xIҹ[*̸$3/(9 {.iJOR@ E #mb#7k(!'2U؏f B;#o5`ݦ-V^/37)wم%a" IT XV,VךċIMt^F%wW{kǮq`\%vz#(rœo%!ENF$Vel&ezL=T3+nmٴ{{Y۳ˠIԐx$E<Vc^+H0Gϑqۖ/!H+ KC<-&-$>vnԹ;CFn~=ㅭq3]e{MC7L7+H%Qq"%~kO(xUEC$&021W+˥0\eu'{n=aT {)#7/kc1/jR|DF ,J+ Gϫp%"f+tSw{+o85IaK~ ':դoF1`~eh<n0橁SH=YI-#E-/F{@BD9mSA40 w\r[оbt]@ r f'3| rCnh'yok"XuKٞ5[׳1{_~ /RQE<$gP. d뾬U9Y9Fz5[QPPtjJ鞎'!BDB``-tf!`D:@bw2克D:=b@o QGas$z!ڍb}TCo 6<ē;a3 ex%`24AMXEt>a+heA cklYS`пs}0dpJRp({!!v+t+hEܽW_4$lho F!{^^\8h @ds/xU7Jʩ8ŅL;-CyJn^eZ'؋3Ąk~:QjJET-)hRrc.T9p XpSM(S׋Tkҍ]gx ߀kM)o1Y|{ 5o뗉V669Jq2LwroF_Z8+٭JΌH4t:yzVMl g<Z8;,h<01d4z 3U9[LZoXFxE#TLg;=eSA=3Sj%a`ܼvqȏ3fU;b+ SkN@%US1XGq;|9O*mV".3ԒĚ)Ir  {u)Wo&V~sNv{ޱFYX<ЛKܩF kEi%^N6Y,X ^UN;6[+AȦx sy4dgFQ$KY&Y֧c$MKrAG{#,SM508w2zl1VNz=8$O4 s 3El Vyn,D `?Yfsb0i;F5ibrlgBm'˦ Zt9"("KJ6deyP}U;7^67alD7I x| kD1᳃ۮJ!۹3dp# PO z5i%fGq=ʥ9ɆX<FVJ-vuûCşXWG?DS+a) #Aoՙ-+B쯂,*,Osd_oR@5l Bň3N߉8AbcVFu2x=D44K_.!`NX9DwK2rDyN݂)D&tc!` c\:,l89&۳6$oiL ) K$=3hأv_#^hPP/FqS>^ː:]X~4tպVNu==1؇ʬq}M\- x!!Z'&\>(=CCIsS9bi.~xxV0`XnܹT1N 1FhfFvQ{l$5O`XE`0ܫP'GW.BhwD#r;+[`H3SZ?jo? =mu4:VLgg!\^ӣZj8P}yx`e!YEIG.WK:OLB;3 `1j;o=\x?i0xf Z&' oE@">ѡ#-H85ն&AY"vQ̱ j9N#*~P63KU-\bYf ,EG ^wȑɛcau} k Yky$iʪ+'m0/Sn˸HMPydFv)*zS^SmxOsb3ǘx?(*.  FKL<<.5(ZؑPU#0k8!j~]KGt א =9 ~;rS)>#BU ن"uovWBysIDVv*IwH7= ?30W&Ad"x^}gZD׻Pq]Wʁ[+ uar7]!ؓ#bD*{`5]I`!cQ鿘D쵽vW;ŧ]v?7._AȺj ,VHyoxz ̥Ȍ!y{xtRضYuHt#0׻>3 h^6ï+wI)5|Gr% rܚcFkSJBܑ{/S(Gf ;@IJ#;$t ɕ|,f.cP(tGLZMU0r.;r (:ֲj*[[`?* fvX~жC{)ŨNOh} c輸YOH+އ1͓ ]Go70b XcnuADZ%J Smڅ)c< 'L؉[֯#/we6ޖ7s ^cf(ї)hKw^bq)tnsR: •/\ɂ5~?q $vR1?9X-m'î.st=2sd=Bm#ff(fUϯ]%)5K"pA9O~^ #Pe[bu8h*tKR[En|Dכn5^AaVRQvH2~ʗ &oOS#hVmgBimS]gl0hgfceld/fu{@eQb Evtb="Hv̬uATw["̜י(Ռ'٦]3\u'B2N/×a /GSbr#2ycCkĿǺB|dd?ґ99 /=ƜKu27o.țԴ>Vl|͢3gejq,Ƴ`,,#-0r+Gz RCBQRV?=Z^M?z+NԌDlH<-L(AwhXYS)n[0B^i޻K$caE`jdt6t 7p; FM l xUf#O$*UlmWB(؞mSrampcg'*'Ȁi!I+GvTyF^N>rbS>alZD+Q MYۘ #Q˹էca̬qWf ۍs˙ESl2Dђ3.]v5Xv^á.7so|hwNgɊ  α 5";3a2&OEg+&]UV(쐬O S $7Q}/R3 0[ RY:id'tx+n;}& E.$-HAP>pB jp ;Qs92Of=1ROCD,lMW1dlZ rz|RWlN~ia`>~4m1ga$ D5!I#Zv`>J>DZRw%Ew6 :/N #g^ ~'Q^݇0R ]k`rשxH]µsQ[6m)vI%+D% qsid=1_HLCJS!R)긿"Ǒ04#=Q|lfp؍zXp/}9VtߑAgI@-y0V1XGI0q}Ij21B޽l&;-b|Ӷ M0Zvd@E[ Ops_&1YBً)l^3:y`|js=F)v19е{.G=:<4<0Y@GbFyM27EXt$٬#q񠍷Ep1/pE[6~J:2V5mk8`3VQ!&e$pWczPy0YͤĔNJ%n߁4C/Y;θ?~Ԫ7~0!4+~~zgT ݮlZ#nFmG˃!ͼY*᳏1ge/_E X+tFûzĄӘ$UCew̛gXi6a_PΐI $o?Fd6R@_b'}A7 [A1Kj3^4.h[곣*,oX=);uy]2qWGb^ZH2+I3c|X"L]W;םd>oIDvw:b a 0?}/㚠? cϼ $&ȓw%|n;Qc\jK )NX^_Ugؽ &Vtc7A'KB#dFzhx5N/P4dBc܇ .(yլ^F!1H\}nqAglы Zw`X 6E&v)tS4ݐec&"ŀӾ׭Gʖ˕^mb|&4Q%ASL=ڟMSkA([y͏٤r Bp&6}*4zRV%9Nܵb&/Xa,OM3)ĹJHUٍF" ϟx~Q$1(hfYsYlY15A=_ !lJбXo"|WGoL!%iQ}䳇E7ڈR`Y"fW~Q?h2^~G?ځ-JDy@RU2 =vdImX^Dꋉ\D1ͼSuP6+& ,gG3W.~O eܦ`W3/9calxaba>[tv\,e{iIsbVb "4vmW5鸩d{_ԗƺDZJJ {`#i2Tw[/v,˙Q<쀏ur)鲺g][?N3%P)ҠTHyLްC}BEsT(nW:^b(onwN*vƢ<^DqhY8)+6۰@Q$2lN/pWO$ԈF4i^4($7Itcu1'ᕦ"E=#bjQ̤إe>gc035~%6]`>L}73hF)/loɑmb"9gw? 
[aȚ]O2hcRbgmj4.K1a:eud|W'۔TKqİ$aԍs'J@G7) 1ݎ5*;{dWzX@}T6(Mqk&_% E$Oha!V++Y^&Z Ҏ=)Ɣzh+[Gto*dw,=Y\H;4KZ>b۔LvcbܼzJE̍ `I#O>óyw'^feVY"HoO*l!Vg9<3?J^34S%Ze^ X4mN6ZQ&{ zڃ =-l ^+ 40ٵ`t>飪=J{W>i[&s s,Č5«INbkmcN(;̠25Yf3L fZ=Z-qrzw &wBt\Q #[B_`)^9nTfq]><Ƣ)^s}+IwDAXXY)t:"G- SCxd2Ʀy(3ڼe_*:1r$TD.G0=ЃJZ %ItqJ Fm4 yw%9hۼ?2bۿ$)CPO D e H%j33}' 9r'$:|@~eZ&a~>Η75~Am~V\2Bh)mX~j7PT4mRZźRv`A1%"Nʕ:~q LděR+nCcTZ0ns ;MRnSؖp=XӅ?JMa~3f| n^X懛NC°}'g̦-=p^ :LPޞ"o(5 kmqQ !10pelȅԉ ZT fu2C7E䊭-ȕ0+&kvKѻFjW0Fn;ay0K}|SY2 M=̦ vp?"Iif_qp jcUw-D/W{gLa7Va5m(f/fi$ͺPܴe$ (S3ƇF3ͭ'L0o-{fF ٵڊF˗C'9 &`E@ R̎SlZ߆f=ReϧG}x~@JͰᰮEBo"m&'&B ? =l̦(zLÈDVrRr!TxnlGT G$l9ȑ9Hss|,XOn><9ְbnM9K:/hF&!E:25q?Pɹk#,[a{kx{o д\ 1J_{K,eEQmdxo:[vu(SrsM͖<](%+w,;/|{vQɨ4qRMLlQ,7>ї`j",{iR؟qjF=)XjPeN6%!-6eLv2]k3bd5W8c"}jXB(7ӛ±Kȱ͌c79v\ʲtV'j]>R~&< DN@{ ĠoMm̳x37'”'t}97ŏ=07jYcaJ#'+Si鶑D^SlIT79@׀Z`|#\{z~bcNtaoa/rXE,gRrÃ||`߾prnz4X u~ *ݳcc|z$< ǥ\Ĭi#NhsƆډ1D `xoj vuɯ칥O@;Q&U(!ꄮAÄ_>bTYo=imtxN)>òuPlDM]& c,n SdGYWwSe\D3;K t,O=+ztpizDF~7ܽ @;$(˞;^T_?aRk01ҷ7t5al#?V|t%h`mT K:d!NsbODA>*NNeUzaVI3V5? BI`)TFdޥv.|x=G9[ ~MX Vx~ήyLv,$kri\ 2x68 M42/][Ă$,;B V? &h njq21J_icEʅLxkΈh5t$#ةJ=veuXV Jv&8_ϧ(!SљSA#iq0+'PG< S.w55$qi KJⶥɭrC;䥓aC"etK+x!PG{Sy,Wt7ؗ|ފ;\[v&فMeZ|Yt׼:MlLxV"G8!*.Q.nQ1bK;׭}N?կng31foDYpݲevm/_7]"_,~F 3[ـk aOewF0?FKA.jd*5BGɫggXJPh0E `id|@ZV-}wnSïjʜ&:$EzD1ڹN>fU)%u`wu|VH9V|Y}`!AQBj$3Sy,g7U_!c^y ozmc܆ +F:!5pH-qP.x{ yX*VmˑɁi׳oخ Uo3y)Xd4̫׭wn Yu`̓Ⱦe,NrT,OWV -Er~Y.PtZT3'O1/wXFRAhO%FMmeŶ_(!v~/(W\" C~eI{{éz8>YI! SvC;ek@6hF7k?C8ZJ:+roȷ/E IJdTJ(c#N >a%L_y SSg{8)ףLAKR3yuql$Yc^zL'P05=JRCqt=A ^d~ESt0Yoeb;*$mGp1}u=EK7o -&6al#GY6S8.@VGݕ=SP3s3 FtT8؎X>} c0-E+j .Q3XLJ_s.4B>ekKd-x _2@cAg%gӠ"G/Fr^4*a>30_W鍗 =>9L,%>2c@AeD;3ZBև3*V̀~%G?:K!@i`ʷ?he@?GyQ0&3y "Hpi}ԁ^M`uk~v۽UiԞ2{+ob{OHJxl۫'}톩(e <Q*D RX,gY}>V}Hj)w@:7za [a΅#fq^pR#vLƀVr+Sqo}""dTg1aN߹觢xGI'9717O +i/huy\0[+Yc/n'oǂp̃zv Jބwk雷CԕǂS& l AL]#uq ą[$+ajtU *Aqw-T*1C .=d'Ghu cСEڔq.P)%1e6> Sm$1L,i2yVuzy ,[Q~u.|*,rɧef Esw\uN=zäIY'PҘr^6p?y3`2sـM\ xzSL,#܄"Yh3dy)cU =v&UnM[k/_"OV2!kÿ"]8(s嵮Dٲg4T5G2YhT\k>`:2Sۨ0>^_f舄't{~uZRO(OԢ֮P]csq#/H&Y4}DWrr2fDZ(S0u QiC18#tp 5V : -~^c^n=ddd0 asP GKSe?fAPGQ^gUӯŻI6۰gY9 c(XR: [:Z Yh@pJoϫ}bvdCplTڱ8dfK[b/) Wf5B^ѥ@GrqOywᧃ8cfRs؊哋̩CS&/w!qe֚9v=#`Çzpi<ڪ~޷n?e.QANB2]ޒC= 򪒎f|oUZko2ȫ݌i5)N^մ:6SxVt oz2[lWh[OGVfCp IPa$gwEXciNiċCv: Y}DLZ=dwIw&=p y]W"Ű)1h[LEI2^1 A; m 173Z+֮;O[>;d$9'H-ɖ(_v'to-:p$€T&bDa{䌈g'ڪIcԓCٺT0#4%>$7q(Jb' Ɖ٦xЈ8lyn6(2F]q_/EsNHfB{Y;b,@ b4FLk`vW3a/6r s f5!?>w2d fIɇ0 V?].KP){$j;ט.zs  w`<~Gǡq vAכpt lvݿ8$syfYvE#!ΐ^}W^ѧuq9 p'ff]>-o? } G2 a4 7_4,DkV#[8lduF)RoëwڪAx| ^W!1qOy$6$ݞ >K >"f}Θ A o[cr Υt*{a񊙟׹<VċN@\-4mw c"E,ey&*u*Ji. 
ߠ28 }̖70EGӵg3d0];R'&Wbku/;j+%"%2Ts\9Nm\c_5SrDt9q:gs)+Ys$5AF42$$?ʓ)тN`kJzݣ2ܿˀe8k4mN W01-\h -&A“Rn171߿ظsiyfe)1FqX/^d fag&Ѭ0NKiy9k3dR}%afSԊƴPj" -Ɨo;b@ {=#\|im Bhذ'l2?HA{0@P*4 (Lz9:RXq"9Rxi>-Z`3kaqE`% ҏLII&e,Ĵ)GH 65ܩ1ι$N10FcmbAA?&j:dNwspir1F0R=LX.&B.l7c `XhcT*˽ӰVu=?EJV4MM'c,o\Tҍ\f3#O)md3f3E[/imAPqp_D# s!&2R P&grF29gfǜ̪]I:3ccP˖j3F[RG(2#{]"洃~:)Il!vv%mKlY49Sޤ:a6gO64D3)EjO.Y!GJf11V׌>|2I\e~.(W*%ųB$w1GB t멢o`Q 'PpID{Ytz[g?7ٔEBB<&.81 /6V mJF0ic(zys%>fd6Ydkɹ],Þ .1Λ.MXp*u@0o@*HKw[CO*a9o=07:e{pE'T/N`kY{ϧ&LMs[Wj9TiXf;aӴ KndzFtk~Tܲ-M79 6' (Jpn_픍mv-dc35DDlo4i>yNj2wSA?2y|Aq^%1ߖyK$E[%H'4dyab-G6t$TT as6LdpYw=0@om"RC11 l/T{$K3WAf L„u>WƓ"bU^<ųH~~ױ[_ψ-PorL%y߈O\N(fzbcf|'[qJiK1ړI?^k ooj˲,mUKU2]ƾQc*CjmO)dg52Xv4y[5c5|ն!C3_ 0:77l|?M?Oi?È-XDϏDPر TT~Gc)+ySdVE>/J@iXז:,͔_Y;g妿xˆ~o~Ҕ="!iS4a wb[rǫ$Zd=̹ jtmyNsH)}Z[T/Q~4L=Z'1_GH߉]>藜 )oOeFNJ@mVCP=n${&UpO_ÔvWaY4p)r By>R^BUaj·:ȦqnWI5dxac:=#~⸶l#Mf0 .nYYU{dR5H^зRN_/?Fk^+ M,vh$;7 p˞Jʯ n u)&#gCh 9V̎E`N_mcOi\ 3z,rfw 1\ 2G(E2.VPUX{[~_4{KC"Jx|4ljʇ [=D`U詠 D+,§ M9GMuŔ)L?0ˠP)Pp%qwu*]-N"D,4r)s$cϪ{k%*!\[/Ktԫh ֭B9V%N6{l3rCWc6O ͪMcr͏?]\]3Z0BDhNsVWر s[!%xw#(M%ٽqLPB5sFL ( -c3 ozSơkۏ3_DA #BMRKUUgZ !ZĒ_E}:b0F y2д:%oD&IEutwL,ZR?~Qpie]t\/G'@լeb8q:97#"=@g*La-Amy+"ctXO0m&lM= 4+8To"3f *hCH5qZmUp#σF>ҚհDwNM~ Xƭpd!>bXYnsӆЃ&5]݉'=SFM?Mj@ܜbjR0VY궯$Xe6LE&jV7{qE<,opyke bv,K!2cDVvk1QIe bŬ&˄yT1ag{ Ȇ Lۖpm6; _14,Q3żZ>2Nqjjafu C7Hw)_}:3i< 1yurkt)]cP=ec868ԫ&3ғU$hɑu,d հێ(H fs_yu eMg~[)[Cy~Xi"a8&O#Θe.}!ak>¶Tz8`HuL@' "W-P"6_#4K#|EȻ_v^>yF*\|?VsqGg:*}pIvE`\;sʹV%%"r8WY4y?{oia}j|, <1SMl3iikY9^G^4_~Y.iyPg%֞vu[$[uiza]p@o‚֊ jZ%H)AT!gp헗j]aw ے!܎qذ'" z[oc3,'&Wc?rX?<+-U {j>ӏX[&Sf&םUjY: #7>/gG0w` *Meb;eI /%eNYTa6/v?^,1bo3.On\|s)r<،hީe:ϲn23g )SC*pq (:'Ge7tEE룡_,@]92(.6f]b<63FKѳʪhG.ַU=;OfyAr4K,MPB|tCLGmxɅ f@JE.jY -V&,sS?\Wv=3y47em? dݣcpoXbygy5e*Øm`K#haBҴH-7P~vRhFg㷜:p,)9d9.掷O&0;C6%!x Q,X|3Y՘ {O3P榖 Y{HNB4ݎ~q?UP EXҤTȂlp`ii2}kY 0'佭lH!o2 nO_@VfZFa7+yF M􆡾NV@ֱni)Og@{R217v\nvv0ÂP,1u<^2E b˞Qv;LRIb)+y&Zxd"o'#'JT!/Y{|bU ow}o123"0cCa%j0xwd`,\jb-yUf7l5cpUYT]=1ɏxsu9$v^1Ձ:~> DfKskyN+ %}8,L&Y}Mo yW0ҥY)G2QƆ˻}uxfc FuSvNf~o(OzKwC c>/*kPkFwo.œ20~TDa?ZF f+ԶL0 54&bHSecbca"d;K3U8TOACrg;RDK!,hh%.~J7<6@0>CV8f"$|DwZi?1$30yFk4vRAa: ;iw2{3o VWv$mma s"FY:/zGyEum~;S差4%^5I4",#Ć$nk,H]*e}W^0sG DV6W@7<cyqyޒMt3aFtAX+o/e5rN7|Ƽ3>pW܂) {=Dz-dng*;(GtP=X6Lo"ku(*$i,7<<{>c+yu HC^3}.C#%W>Dم8~Bh.p|Ge4-x(G-B]t,R'/`2z`pY`VSj@\A \ 5o ?ue⮎L..JLE z[2 ~K;"e GM7rIr>Ŀp^VbITa(w첞_2R$vUc:}&x\imw{\NnQuCr4Lˤ$_Gtܸb`V9jb tÞrFөXW>6-A!,Y<,SќabDI0rvଓrIkoM7e膫=X lR.Xݴ5  }Oz+ܞɶlj Tp,aӟ= cc%}ȁ#p~3.Zeg%R4NiQTRάA% Ha}dr/ӝB 'ƺm7tf)#n3WU/rzo2v 2!PBn6`iK Y'kA~YJ7x 85^RE̊idֽgOu@kvA}{Jc8qz@bW<2o,)]#힏Q>^c `Ot,k1~]qB}gX9d>U M8;my=W~O^h)>F2FΊh*JhF>3'x0'`x-NՏS]Rɯmwoh&)=x梫W]s$\x/ w(L6}{ ldIh&܏|>Wu4C c$[70o<Ѕyf!Tn/Ϫ mƆC} L` W1 ?{H'׻:N~%M.a~(i79`d1_e5R,*8eeW#] 5K2͊0q&F31XߴQjo_ѷ3-`!Q0SM#a[~HtDOA&xvJ#zxe'`kS}>,F+ٞ/օXIFi1k)˃Mj6~&Hn%0Yʥd_m WLDr?aȲ0n<'qt͙E\ ƨ#һܭl"ZKݞ >7jM/i# #Pflf"ApLzInIv>uP:S6#zPiu$b6t^T*M3t`7dsϬj3Ղ*1{E;dN!ovx'LsjFĵ c4͕ Adq#eM˝h;Pa{XzwQBBD]5ءEukGwcI+t0x]Lg>R[>Wh2*݇5p:(H8Jb'%\8OŃTSM$J~DLV3 3o~2WoS$b7``, *!t֐dtloH(MDG Sk %I=tkiٙ~لNY_ Nt+k S6D A-,#9l58UP7P l^_Aă׵ʖ]#S|nn2Uric#Y{qİ2J\1E\ietFs|f?hY$xΗLf(Jh-;,>>^GРAȩޱ3yK0f,ǐ;zN}DIת4 O-Iٵ"&n^*zvD G{={ b8/ØKNJvek7O"i3B -A+nhMr Ccv,̑chy67M{c]!0<SA)w/Za c>hF#*˒(Ii>N>^X [w6a̱$u-=5ͨ'zf)]M6MC$ F0wʹ#(cdJ-?Fv)ń􂋋t>n>)WDFM)@Qus9 4>ـUqyPwxCg.JAcdZ"ζ)M85]824?ل\㠪׽Y0j'AY flRsCT49\H}ο?i8izb .4\"mmJ9*HqԬqޮ=46ub^ލ -bevh[S5/n&0SӶWB\ISL Bו{&Ct33ySX{ݔf dO?l>=G#˿@kK}F>I$6䢹P(XG]/0.c ?^_nCˋ#Gݔ^8izn׈[HjB'g~8˓#l=AZ zD1laP'H|mG|#e375ͭn/?/Dwơy456Œ@s,D"xut_qN%5IW7n:]rΫk1on:20La_T¸@*j;ErNɟF[7eLTDZ.Z]Gց&i#i^a aq̇>8 c9/aL*nè[a+` ޮ@G3bʰa (k'od!uB} vLs.}܄hC]n;@Ϧe0;6X9K[kK^E_1.R.^ 
4ӼCH9I*3|4 '7bkd٭d*$YꖦÒ>(M>KR/aٿ ssKU.d=6LǹK,8|es)Y`DQ:8AƊz05VVsʣ ({C)2X=^RTŲ+2;P]ZBκ3c辕>>bTWwOgqh V윾NdL\k?AX_ÚڻD}u$?bFP Şba6)qt$:rǻ]|CS^ 6u3Ca_CG"Rpk`̔&U&3 3 4E ]£I?Aŷ1CTĨtɺ\rflPkܞ[ f3R[Sݙ*l3; (?-e"#I y!)0"Zɵj^DY艅./קuZ̗a08IUoa[Jh^(LЯx2w`3+KGfc}9T&xݩILlZ9 n8Bpb>dsm b,L"* NO30? #' ai UCW++a܎GVe"bS#l(} cLR<5)}Qaj_ٮWȄlMu! ƲH4gdv2 @w [=*q($a,˿a7Subu.za.w=r hqtFk‡.janjǒp6=Y׸eiH;I5^[Y(i݆Vn_>q:WAyVrfocDïN~r S+d[Ea^{aƌG禓EPN ˗`5afNJ̮;Eai 2 izs֝k@HFFWĦ=x}f"6LF^G> 0 d:`-5X?s[WU]`uvjNcq|" me%%P>k\o'0`5B4|8u{\o2GKxRhNj |DqgUPM^YZhl%K:w>5mIY я6D> C~7Yu9@;>}]2/VL`O.qfn HH 846JV8~]Q0h2l+L U?LWRL kY'ecN:U?PTmP/R+)ea7Tb3}SX+@(n LP8.\OC 鴰6K;p INNijD1!ƽ]0[Q`;Ӱܾ~+ #49V .e1H0HaB77 ϸV" qF1]w޳IܑT6؅9uӱ]j6VÅ#o%+=ǂ*A`*(UM&A'kN˖G,i> Pry`Wsh<SL)/ x;q}2竎q=Ӊ17r"VɆ"ot>O#a5%DZ;. m-B'w^jeS 2i0vVo:ס3~줴6,O6V7:#+YݹO2xfʹ4TdBb0~n/|@)2䱻18٘pxxPIEԊ!\zz WH+1`޶{lfLxDs|mq5HFf$|@Gi맡[1 Sy;ڃLh޾ڍ-lHthx/5J}U! 5|nޚ$w~ r٬ڔ*LjV>;u$o?"nV狞 爙q&z1W^'}B<ζ).,}HنrLUcfyVg7i8)rc6[ܱ(G:j]cL4K)gV0lѷVJcڭA\\XSD@" 鬣F#- VwCk#LvhY“B7R&mu^Q J0:fU6 ZǦASXZ>0C4| cKe^vSgɥ ݚz,ۧRut/N䴠c;/\L`R~! B,L^EN|ÏU?f-s&=~2b، 61`H( mhnCid2jR #.#KR/OhmaWp503dl%Wrz;X1)Q=ˉ08oI.Lm+g7xO2o^Az,%gy$"6FLf\KYN+ <[λ`Ə:Xǵa.υ}GDNhIx DNQ^&6YE}.izcѸf(0hƴ FtCemدa6e6}j8c L$(Krr:h.\!1U0!ƒuS]t4[z}UX鉥r||B[.[@o_)巓a p+s}'YG'V/xmiq co~NsX{ &%w;ykz6= P|D2?~KH[ s?Y>py.p*lt/kdZ^cjNjzy E9h -j 2mX%t2 #粌W}zka*svi±ZZѼCL?#[KDǚb _ a|dEه7khfʚwy}ng|X1D@]XJn*QMhX0kxaQ߸ ݓrR{(r/S"IjD҉$i"IHN$Ie [JV xQCmNcUi_%X:q[1MϹ˽̑ca %bf0TX|x[]z'|/]^(5`KV$kptiewei !JH:bUGU#E+]r[Cfh*+LMlFTP-ww,q'XQ`WYoD Ka( I3>{u*A*,$Jr XŬF/ 8u9˵:`ADY5%쬘gIGկoFP)kWTM!N2^tkj5%M,B _jf<= N$&sxo3ݚ@Sp]!> KXɃ@  Hx*sʬbj+yf3Ѹ'ڜ[VGeS⿈m[eē]aBgΞ;3GV|x867+`5|c9J4 V5Ô܆;Wv?n79@஘{VnL;E)]H 뜖C範 DXmYRԺ~xvQyꇜ%";:ZQ,v7ާN_Pˑ$vY?:@ʭcnuɴVTG&ϥ4WޑUBʉ$w[cM͚8dMCɱ0ӉP#됖ݿrsn2NPso(j8\(FAf,8h~'M~U7?Mjn I=g=j#z落ÿ$*%wH14&'P_Eg|`wJ<qi 5 oUIsŻ@ Wn)`ߒ C0]''8PG1cƹ\OM& uʃ좙1#{v9Ͳ/y3N?Z[ֹk;*; Uz*_L,,Ezjֱ+wbt!X4B?sS!@CٽaAkZ<΂wn05X[X]YCW՘ʼn9G#292?n 9F҂<]NJ@n'a .J:3`ќM>41ҋ"r_koGpׅr:}4d޻0 yu+ҔZ]b 5Y% cbo{1XiީA":[u7 cJFޓf+8.?Gz9]ˋ澎waQ #-=b"Ήm(j+>9ȌDqLv-b#;qDގؗ(|Ҿk> dFNlΐR#$ī ͧ61d\$6g_²,'>hnvXGu#џQLYmeHOM/dgn N>wnAvL_-X]m4uu陭]װbNN-:ecd^cd!.Z!Rb9}>LV7X/TmGa13^1[)f?ܩ-#~Q]ɒvQItXc m93K)!go{<iiT&(ޛ'6&td8Z+Y}t$2UQ IP90<26 Zϧ*nɶkBFӶ@̅Up͆'-^/53D7bKSZNQn=nr`սNX-1?6=qL{d.tc7iX_r61 4 t#5͈{QD@]NPY{ð2+KW4!_\^]ɅM\9KuLu.3SQ3ƢSgY -A*qodS7۹pMӓ 177"tV8V~_7o76gYlb7\p;G:3v N+M@$F=PJxtO(iʪ)i)8) ?qQ/2ro VWkD4EM*y>Ud}-ĆBYJ0M8 T&@p11*ϦͧX[u <#)&./a2 bØg{zA-5#|E";%7M`>Z~ V`%?m"FʍjQp'mZJӥB+F;AyfD=loht$w{QT褐UVY Ͼz~.l|K*E\M6{c[6g|)O\ni@+L%N6Yw6O2Ƙֈ*( ZLK>cdԷv%bt/4&ag(0MgIOP3-^0߁85CPb)f0"ڍ@1_ OZjr#l{Z }[)f?arhv7N_wpyW 7W-b/kd, [&Rtwj /nva8Í Tpme (kfp,_}23 (ǡlH~|Qa,N,M7]ٹ[ǎu CS{iT{TNKA7/S]Eh(i\X~;9[{gax1.(|6m-D*%: u>2^ /9 5OX |Ӏzvugb{)wlBHBŐɥ,^x|cI7ّD6 13k'k9iK3.|3k6G w Y/47 wyS:A5#@N\<u~1ԟc?,/iD1csw5X/]S$/ X;4R6 kkǰcNځF(|[}k6\4l{bʀ)X*B~ƷCR *x BJ^fv`P5k·Hl>@qR%[1 \"i(I 0Wٺ![&g_N=ڼ9inRưXV>ɻߖ1!ȴupH*?X%BK`̞P#yBܥskwL).Q͙n}i'XFOOK2klqbT]ɱbeΘLFCzxH0fCrRx9-3t<} #^ @yL4*`'G_mNqRm< cBK W=k kʥS%Kn}.PՆ+PfM*zLg 1Q}|"B};,6/3b!^lf[mՏP߁L5{X)E6eqMpһoH0,ABtxoKrH ac˯BRNWO!|{Z:^R9| jRxH$;^0?ɥJڽ`m ·”vɪb]:+udf]ȭ.w 7IY Ӆg5Ȍ-tܔ%O(iaP Ւs Rua2"~ʴj \ErG?cImOit ,<=i[䨍^T&b^Xn-/de#FB txy72"PpZ"̈j1\⁡mYC`ZyjHk9| V܀ֱ3DKYL|'j5se%;}ק#]3p4gƹ[!AA}]L7mO Y孓Ćq E׶Bi: 9)9Vef&Dtg0XKɒu7-kfc|bSXvp.Uq5tbR(j)$dr:Ll&}j{e>@n>fnį$SY֋|cp$ CyC=17={4HSM|S\6߶ÙWE؁^|y% yVy^>ai|:c:bfu1{l8,,{Eꏲ͋Tī ۇRB3|6 ^j=Lfn IU1ɉ+d/ G·Ir$cLFX*< 0g1tTDʤ$K_JQ4Dc%~9pq4~H%#ܴffAYw%N!m;2:h75Z >-F"ss"GU-tH̀ cyxj`+V0a9D(Zr ޹e*y#ScZ=ܶ) }vV I' l1e{`i*cޝG9<Pw_Du%zI‘{`BQ͹~7ee|W֬筙n'{ř0&1ktl<5+5W8Au'`_d{z8REG;"MX`ћ°]vӧ< {vя X' S^ I)'̯e#36Tf~iY6_8LX Ji|daY#1gTt:bѕǽ%[Aͯ[f֎[̹`iBCT @>h "Qq P&Lz5U0nfY}|<  ݾOt 
r(+3ZGN;R'{OAرm*șAbE%ٴX3 Yc;6A^huuS?){]E;&Ytv@ _S}-fh - AO7˼ s>gbZULsd@"i}5iR0'!5ͫ-s yݮgV/ͮ#mNSRpd,a>,UL-dDt&W# ̯ܝO2 $5j6.&be>?a9"(Iװ˶O5U|-OcuFǿMPҲdU0叝K_0i!Ɠzˏ8'1Iv -]>|ՖrdړGӋ:ɋSw 0"M-ՉG1 L@1]D7H3k etDi2KD6I#r;LV[\OvՍXj|9\FGGYH.E7A& jfx(!VrJz7lCIH3M~k>}ldYg9nNL6]uG- к ^"b,@{EafJŒKtAl-Ϩ6z7=+jW}dq#^ߣDM!desfn$0b(W++E7kP@%ӳOs3XUD^9ïYlme=pZscu7zydE{Jaׇx%bz_cKܨ!)KhRt(O!Z]vWB^7lk8Rqaar~1 5suVhs@`cS;߼0b7]vM3{$xx4bH\ͽ#7ݭliZXGXޥk0i%N;L-ɱ(Qڅ?\}5jc!#cib|,hynCSN%Hꈔr>F*я|=_Wb,_^9'G8Xy, x.@>XO)c/E%()Ek"Fa27bڬs†~X_铣ǸzL~ Pi%)5bcx _trjypǬ*_ZnKPQ@§e"GY޻Յp,̂5`%o-fL"adz3V>{hK$̏Ή"PhPfe`f ՊbkUe/R1 LO`rQ7A͂m$ )+u%yeS Nw[?7&H( TNЇm"A׫uP{m18~bM𠙒vȘzf])[FlL9tu#Xna/@:a z{K޴jx 3Ni{6&E$3K+MZϪhu T,ډȥ. 4>TV O4&:R*[ݵ:AXvh'0,*Ԏ"XZ7Jű0rjD@5Bx`hc1=ũ+a&Ba al6/HPiP6x50]WN^vkݒ1C㫬eeY%4`1 +0fS%l*8*rr3): Tb R"an5=ERX"T1nd-W x/bAb:Xy-Uw_v|$k8<>|$`)lo)5]әܶG9p\%EršTqqUJ1 X :P./Q*kvzC ?+pebx跴zuȷO^ cb3tU? c]) A |3HQ3+wF&@ިON{W'cHh6p S_TъE5yw7-ѱͳi^1DgI39:umIl|:1KkZAb܇+Fΐ_[?9]f}2- :t7@bOG,Qٔ ag8 nݲ^v̡NPkܣ,nwkֶ8G8z>}.g~:,? \xRP4&Jw]BF^9Aqb:}nix]ecKgpKDfZ;ư1xgV\o4 o2Yczd`H`ѿ>_bhx0)lNy&I 7|ԗ w=8eU1<0)[5Y0t*^՟&̳jj=ϳHM *HMl?ւ$7xq[(UǒcO ?'gѿO9.{е2tjN%!BKECr^[yZV5J#L"X1Z4DUFdN|ZwL9FmʝinݤGcvիC{Q:6y\}0_몇1(&hKC]}FT2%{(ڳԷCޛ &!{)ŁD*oЂ?0ɍ SfjBurI0Ěfm] 'ܘ;>r[OEٗ@lr?Տ}*Lv7qLE.2 8 y !D~yk;f0LhmHx1T/q t<\?)YĘ&#՘߄_ a,}ok/^Qse~CU\rX::>bL$J2%ٙj`݅ {+~rߨn S Z'"t;%6̝O1&VnK=DrNߘ): LH :eՃH`J*1X^Ž/!gb/\xDZJ{Oņsp#mUBz0esC-s c7-*I$n5+?z<;{CWV PXKbLΡ n`\JwOi;=׃eu$ix-ax_ax}zP{W ܱӹ<%rۧďbmt߶ΤGG'bThd`)Xܗzl ry,;;V(y#chG:ic'7Z}AEk-;*gEc&EZ}ZϏxeBG#/́V%# 0(g6DfA)`cI}Ko._Se|j&'+,5?s;VA-fuơ^׮Hl” ќ; k'J8B&jZ\d{Y0' &%|Ӯ =T~0+vΧ5f!e UFzJrj /GlJm xk3ԙe."jJ z-Mv)c>Oob[+k@0{L/iggK:4K~wviwhp#,Оt4\`'9oᬣP${/@)pw_ KtecO Ejls+z٥c;^-R7Օe4y+ݺ}}dԈR [[hLVtί]ë[˰U> f/|g\Nw%PÉD dpX`'g]]gLtXkݨ:ݐQϬWf\-2+\ʋ|XJ2O f%|JY`pjǜ;&p1+g7OC\c6r*j^1}힦VxFmx_tϙ%\YA2ږwgvW /P\yi͖?*k6Hj'}Uqv[g ʬ2Roi񞾯yp Rkjc3jBjb00j<2Tmbő⨓Ph;F1?AO1svT?TYsP=fν!,Zqvj>L}*64܅&7S4F"2Cg/m(ĕ쥔z\cQ$HrhW?ajLXHpݧ zf#ˍMz6O`jd&@T?TZd9Tkv=2ԣdnMyS]{-ɺ8&%7RLr4v\qrz*l=\8  TabҜ9׫gPCCg{5 cCif2cJ*IW 3)GITƹ FsEg Q 669yU#8ћ+w+{|P)3~Iϴ0܁*|Esa04 ,}tl|Tag3mli `?'9R˵ rE7-9=fTPnޟI?O^c0Qb-7+HTba/ʴ0R, #Yǚ eO[, ՍUØOX~333kn}'pf,VsmsuG8umu6ʈ,ਙ5҂ո {8ǯz$`@/ѩtVO&IG|ni1Ӡ26B>[,YM6x<-nQ&:5θ%W,y}%;1-w׫\'7zֲxi`䦇uPj[˩x#l;(ߣw<^(>T$ZHN|ᗺrdӢ5f NFj6] I|Km}_Ptv^P=ۊ\ *n:_NEA X?i>7+=oi  )LU:|>y5{? c> w9ӯ|!y:ܔ6ً1һQ7h4c'LX{&\b}vTEXLvA颾=STպ-{6o ~\EqsyIP9&Ce{:@xG4{N!Cv' kU0#1+d26CQ޲=XD,)ׇf_%cOc+ʟ)rdC 0 o<.AJT-+^"OeƐ+L[lR,N? {_dex~) {ubvcI*mǤ(<n 0(qפI4,XD?q:meOlY{$!,SG(^;}x>:6xݘu?ZuoWLJz-}f8 'Xw*3/٘A|a~2R]xE:V=YOÔ#jIuٴ0e;Iw/wѯc&L@`xF,&4%<UN6Oɼ2][򅌌\c0 Vk["nc'XSd;nsզ  3Tqg(֤X/ D̷ {x> ܻ<5r-3M [3~c#0/*>{0L&JAkzR ,t/AzUG_Z=M+Ħ@.lbCjaߘD y$ߴsv2LpؤK 36,7l >ٱl`3@GK,*|=̃ 5 c/5EoZyLFzeDkVd+g|աN߶%>a[)dayHl܂~lU6`f PwKZUˉ]N ˜s[q[ӦsvS>5!f4b$Ӣoz{򜘬Z:-ǽ21bm =rXR6RyHlس;{Bܵ [R{Mq,hC;l"cq `PlrMaKI*2˹ćvkr⵳gBZtX b;,H|#mBZSjXactX,Zu+ǔd[ Hz_V6;ɡ85q乻I gjH:ud8t됟_&_nF㚩uCjv бmw,NԆ9DWmV8M=/ɝ0FuoIKʽmdžtWRo +7ݩXO.嚻Zg=wuÀh{k: P̘-AQ# sU'NSI]j׏ lC/^+t]hT SOWS$݉n3qgW:lI`9zZD7r_ft=AǟlD˻1Sr ^!zЮRЏ~Pbmbܫ*E ݗ|;q ɣ4ǧU ౒{YĹ c2idQ_)y%s鍥}dX@MF &SaY3m[i'oԖHKh$u! 
9QV}BWmj̕0V$^U~9vyXi+5wC+Bq 3fǴ`HN v-p&p)j;hBY gN`n"2^DTrל7cIg 1cKO#$GmuR"#r؁qeSv0fdB%ǣl=X]hwxڮ(ܹdk'aϨ-R_VXV3݉fX08<*lupyYW&sĜѫfJ/AD+=XRXG[d:؞6Am֞戠sHb  } \k6ģZ۔\ >Q b,1#d Cgon6{[,/VHvb}?@F>W~a`]cyvʠm)"w?Я#Qڱ.?Z9N֧&sG,I@RW1m& '$(%Ž:˷߫teEFߺY8gfAtj_y~7D;͜_5)hD1& P̛0cTy(J,~'*ؚGR85}/]ڶ,N\oqL L%6}ĠY[_XOf_;p8#~a-%|T_Ybp9k<VV:AsHi阫阄R Rݻ-U$Wӯ 72MU8#%FV)xd8hT6kwc7p}m,>E&lof2lq+7-L6wS/P)viMsi1F4#ok%Q,k/{7慰Dpݚ-i&n0@k?% 7ީ?oU?$O eDœGYY.or"h*(7;e0CHFH1;B_\]$#MV,4  8;t3;%W(ØCѭvR8(S+A=>P"TI۰n8ӈ%@q߆=uP&bl0՛>Đy!g Py!FARA0tј6B+Ң5NVcTO/SWS/9cN+ߘOm"Z9}|qX<69);qH>LGHJ"1;OPga{:댚:aեŠC`nWNЛ: qڂ40#+CiC% zvϲxtyPջ5a?K(+g}܂k 5Hy-`G۱T`V3ǧ\Ζy|p_LNW骍-1~-ϯ? E9࡭UVer"3#יu4yeg0]ɮ1^Q^; kR<5H,wގ[AD^֊>Vcf_u:w#4F},/Utn~dڟxyiL)SvLfǀ GHE m*|W/Jmt.cI,h2fFZ.<͂T`Y F=|Zza QG\7hǂpT~٩mO2(3=cj|J8|:=XC/RŦt*Kʭ N^ 4[toĢL7b[|l^ @ ]fEČܬlAE~kU&=4@SuX;檳jA1nF[v˜̕H36iӖ` 2X>!"2r{7q|t+XF?Dai-IN!Qk@Z[ɕK^dE#Œާ |KjyS55#4H@p%78rDGV5j+dq",7L6XC1k[&B ?؏ߔ@߂m?( bwBWDgG:6pҠS ~%wz~&ou \{v1@ZG_4*M|b%vC>#bVzM'R·wc+"~'O.*5y N٧>}3_=Lvuدh_j{a~G]@DZcⱾB$Cc枍7*w>EEJ\ґwgr:傹3a:[4a`1}k]Xu@#!)HWф g 5h4N鹠u;>kZv (.z{ܽ!5jgÊ(,_CLnah,"t$%?5mOS`,φϑ^7H$N?pZ|O~hf>GOcje㣜2g5@#w7,`دi 2= Jv'/9{By>3_2o&e,+Zz9'cz9KEa8eEQ 'ڕJܞ47ކ3@(;:~T!6I (rSj'Jpج6mM*66Yְzmk.َl5^sPi0[2ӮqV{R8fP\-Q~Y`ErJXlŢrB>]j@ҀCѥ V7M9tev#ܟ mؕC` T]O?kk2k vDN&InۤqVq QM ~0!YW-[ՓwLEyi BRkj1uP6wdsVYlRF; B< 㐣bddٜBMBUNonCul 1ľqz֋b'p 龕-F_ۍ zϼy6 Poi"K!x+n_ Nwiaպ"*vLj~tJit% G¿S.nUl!*6RYE% ˅HÖu;y>+LfgRB嘜(N: u||& + k<7$C1:IJ޶dq6Cbuc^7wJpUjE^PY {URQ.{B\E.9WNҊXRk+u6٩뒛xhcw` .lf~~ܚhw<9B':1AOo^WJ+Axò72IUΟ> f_ZKU@+з?P'mkndw<ےai_5L YusyNusHb2F(C'yOF5!+ L'RދVxS7&>qEO2vGZ^b-EY̛pĵd2ֲUF[ *ϡL`.R)XԦⲪIӮ]mE/CƼXdT1'򕆓u@ 3 㱗x@zNf%~ׇU$ ޟ07#&@Uk?V^@.St>?_\t)mi~)o #NqE{j7fȭq纴B{)rX^urK3˪p!. ŠjNaZm,RfX,"g?@roz>bhrrm;wcz@)U֛IǼr\l1(lKb*)Sk0^7dv|gIWT]Wͮ]R/SiYro$A %l2N^+q x*M4LV"M)$2zsazT't%-,PHZ$ c9u77Kșt +eeօ:\7{E;rC-S Bڜsdssc`V+i8Aì]By̙PLḢfXi:j*KJr(j~~!A \qzS[h pBX\p:x6<0,{K](#:ɹ;6:5O|vALKݾ2//@{VMCye=nr 4kpaoּ±`s9@X3léZmHD/ Ƴ 8a^2m=B6` 7w늊4̹t:٭Xԋw įlܳh"~O/HP\$.]y(4AZ;3ejf}W!ק/o(P&cfƾ`#]vc\0]Y]TEs](8kz QB})lDW?5}&^`O0Rz9P=>k65 qĘ^2ƇUHΘeteuw0&x>k\1/z,k0G1k`oA\I MgHq0vAm֐l QbpX@ R83.}!4\Sj UzsiqL+WܮӯlHOV?v pneo"fJ7kUyEK5 ;9ӼaJ@Pz"HVTWނ+@|&&p]Ro4T\ؙσxZSZXZFc'q $cN`*}~i'\KH]E-ʍξ'I`4aƮ֓ oU F,%"9T bBpuld»bՈ2_&^`f3ooC7inX3/l+A>S"i5Gt}hz&%e_ ﯬ{0 NjkƽWbYbi+m09 |Ǜ\~.ި+ Nj~4@“TLD&7:Te.#01AT6YEW*; c 9 {0gSB^%XB"[L;-%3ӴbnX{wrԛQ #^n٧ʌ:~yh*oi gs^2XfKvʣi'8T  )9L tYķ< d |zV[޴9[Rd{އثjꄙt<0]a#ȦlĘvˑp޳eZy., ADZf uI%+f-> K~I`9.QQ8-h>%Baj96 q2Q\Fr4ڔ¤]9]gBC=":Nn<۽rSϥ2*8q4k8ɿwRl_ƼcɆ&Z:P ׏`1ֿ7| c%%utdY/.Ci3Qrf^()jB faG߿8x"8;K->aq>&t'^WZnLl! $u;w."yo jw]La|jy|Ovmef`Zk+0`$gü,ܕT[ɣsKY2Xisr5X^2KCٞ*ȼ*i]71(ˑ_[ڞjѫ{Lhh}Phtn-M` ymErgi;hm EK)尌7t`ZY-M f.ݸktyԢtO<1݁ X4X\ m(nCd=;@I+.x/0cB ,MD$z]`#3t͵UeEӤϑnM6K}H<2A{lѱuB&C$ΗPמ31 +[tZrov{gZ9s>op|NP:5oR#lSEl}Ertyt[4_A%k{(t`;T:x(l E3<=T8N);UKL_n" }N@#Xx6]`H&N)eo&X5)OK:.. X| c^(vpS I6kNU{{՛sFXy˹xꓽ4H{Ԍ%FB:ɍV $vt@ g FR*B,dOͫ;z:)VEEW'VȐ̭9Z!0ث翙>"*PJFJ)6,K g@)SwUQOn alԩ S[6hQ$#VQN3?bn}`{sk#?g찳CYϽЯʓR܊\0)9ϣb UzR{刢j@qRF.}$̑$EΉbk㔢# 61ƛɸgo% 4=X|_ATŴ %h 9lmFsZNӖDB:#6;;v>k2CJDvLY%m_sGn_Q˘Q]0rS;n^+HUc.pb`6qs3IQM+dQB0m2 s")uSJ@n= c.=9CR,.rKei)d3s{&~0nԝTd# 8vrD';Qy郛m4>GRCtC?eFpȉ"\TO[2\~B.s:Ǿr(LW Iئ{gruJWЎ}{X 2_ v^֘X=fSc~{M S@(s;pM i{F9̺Am崍U6GqprkkregZ:'1ݴ!Ӄ@|s._d6D'8ˡ0x37s^8S~W >I." #K+;Zm*K`r`mlMw>N.`6 չ#|S>&0p*缘6/ؤsVgl*ͼdcyO'!%X KhI&$&.h"X;E=F7XJE7?X*D2xfÈ{Z"7 <%FGb#wq0Wљ,;,ƺ0~ObD_8dr #=0Vq\{nˀ1/IYa?9wِB'= 6s<⭷My7%Ϋl6߱n_t/kB◓)`LjZμQ/ߦ)=k#^g [F«: is3R\Waα*#! uOϖ[^99M坝Cۑq {aaˋbbm$3k +g2u4d4{N~e}#bra yYR=. 
\ mߑl]+19߁>4/ϯD˼K3Mvs}C}YS)xr||mQuNˎ6lx_ITiyl6Es^]x}B*KrgC[/M;딼ly&5`Wv3z{Jr$뺇.&md爲G`(Ґ7883zæI qoaӺ$SNC` 1{`q-`k~B-wX'nX -[ogtIU zL{Rrt<)ZPĘE$-f~FfW_ce<zlTf1[h~nG-7S3 {[NK?\ R \7QFΤ[qev< ރtƩ+!OU1櫘`ML+stkװE!w9Dמ K=?tI)mȔ^70JSv{B;V{0{{=\4\5!f2٤34>+ @|l Գl ^K3rW jϒ:]>m4 ;(j0,W=ʼ:h?*:Cʦ0pZdH^r#S$R8RM[=3- vTiu`$zTc_Ҩ mE՚NBJ:U n1aB{W=nqtr O<ji-Q/9^#@A= 0<(fp_|@o0oUAZ'[E xba[h }]8ۍ]ГX[DBX$ir%W]s9Gd1-(ys+ߒyHdT{"/Zkb8Kɑk_,ei=VW)OGT_![EG) Գ",Vt0\\9L-J z ̶Ab|2;3Tka,>}}M0ֲz1oued6Q^*t;c8 ?>O#& s/Ƥz`p= )*j۬7):ĕjjev; #z b16RIZ4m,4fEIwYCh:z)6+`xpAm^5<#Mr\> t@FߪIx^Ϊ60N8C{~ٌA ڀl7AU7ʭwW` zq !׃#)8Qe/^5rm 2VBrJ^s:=Ŷ+Q{SeW4Yvi w™E>rrT2o3:xYY]{Sn̉">d%o+ZD&/RBYB m >"e+t ӗ_ ˠs6'+xxΧQ!h^խ24h.@k\}~LEmmv[LUW.7Jp!.>;Bnuڄ8 cxn_CjU5-C@]=w)g,J5MVwS@Y#޲z&ik}QI&DΦ΃Y4 8FW'^FD_ gAD+DajTҙ26bs ɐ ){:z~Sm^`Ew+ ҕPGW+m e$ꑃ8vfldwI~'-rb-NmL8gz2N'Ő ('܍r?GHe&/*@P\VO[ ׵G'-]l"$a"W3WvT vbכD.QE C?@:*HI> xh/f:ߘ)`J)9~3dO6]ʽ Y {EZ;WBVo,-c=z 46zxh_0IugX|QHTTqTY9Nw&@;WsҨ4Jϥ2VoF,"FF*{pqKiLb[Sq  #aA$l@gPJOlv.oM |f*bRX='KHwM?jX 5,YFKx=Mq`zbc$Lkl pQ9]?,f$:})g$5y P8u1[PhJw=;,r,;GViM{4oግ1-aʭAZ K,YJK9z//?Ff(~QF,XN.2nO1Y0Kʉm#(FQpn:lV/a!Dl[j;Gg^Dzf&"K+P5' gDf JWX4a=] <dOYAeRC ;جWnTG/Ő82J>  M fjO d(Z#硞NW͉:#g"2 Sv<| GސM!ևlJ,{X/kk\*䄕8Z)E;ĩiI~0!W Joq 3\m*nlslԸǶ[ X2fgT(s0[pu\P]Ijo3m$LU%pPWI!t, gV ܶ[¦T4yœ=-  2QBV-[zTJwq=BM ΀,v5:`dd>5dLgJXcc!(XֻƋ?#ePվÆ9y\㪜ەݱz %ASE}>uV?=AH<|8YMV`$ ~vS8PWq  ,YX3)l׈x7mr[~#lxnܤݶƨ:֦۷z4X_ Wc "1q8xnesh: Cƪ[cT_4m[r Q`88-Wa깝1Fq?!f<7XX,Cf^\U;Pښ~ʔBBre̦5'n`|)VOm9h;R0٪]~v|+%`<\ֻ[< st=r1~bT |cƾ^nۗNI;!#Yi8S6%8W[Qڙ=[M'z[ӻӨДz㈍엟w]+;wbC[cLszۇ `c| Mu;64nS5rP `Vjq~ͼ:ʮ/R;G^NlFa%qFo""Abn4ꄡ?wީ{~~p8A-v͓ap~͊SsAF|ڻ=: Ʊ›pɵ̅ӤEʻ8{4Py98L=)m]LafL%w8=%MyGXGpڤM=#)}u:#M9C&z.[v|kb 3tUz|n 㼄Aޭ{To;o{EnA ϽCIΒ%[Cܸ*Jnd%` 0vwE[WQüu»L$/sr=\ C?"h|pӯ`3fjS_^O/FM8{UĖũl1R $Lv+e1f7y*|H8\+7gyh?zf=Ac_ {$8jOtA[LEac>iW6>"׌)_N 6ǜ0Nj٤kŮs߶R^s6ߩtR4zzBS[A<캷,IxaG4>2S 92Z'sS4ŵ`AjWcthռ06 d[]@ L0t7-A3k^5jW Ly9K^_Ⱥr!@PKW[Zi^cP+>ӫ)>"r]VW5;%J AM?k;ltPiY OuNL YOyTڗU:1=SA h )AEaz%=Ir6^hOOƔ=gU(byWΠ Z~vY07vCP2QpiJY?giy E˾ijh5/CxoR֠ݚ ƶPD@";1hqs}-\SN7)Y[P/ӹ6ufkfe:p1%EiƐ:M(V.;)k踿?DZ`(MN1̤op|yޔ]7(Kln=#ox r »M(g=4UWAfW-Ulqbᦊ50'q\CIfdi;~?9E7[p]L Ӎkq6U|0{7eTF,}ܓjFq-azzR²>=NR3^~|"0c;O-oL#cE|IYsv\[=R {s wj6au-S a^k!D ɧJ HkrYqvpV4e~%l{vg}d5^vi1ziu#: @N?Bl9!|t)lsGll̋.ArjPc.D[5e cE]MӔ!,ijC\Μ]˴Q$HE/OauK4w)@@Ec[L""JUQ Lɕo}@jh{ˤ>7_ ZX-أAtR_";#e,G3Jϥ3ٱoHN8>csŒ%?֙#vh8{-e0|UKftT1:ed4i '}"}| 3m$jr3yEG3sQ e%CPİcQBJx0U3͞b"}s͂_2%# mzhɕ$BOuÎb3܃%LDzaUyƅ}"T5Z!ӧb+}Չ -і#R$=yu&[Kp̾<7t |IB;%&EjDakۆ^\2<3үޱ)Ovޘ@%ۏ-R- ,)ʧO֦ׄSI@ӧqMl0]^f (NƉ4Xin 1ZR)dօց`9u:sPWL܃`1>N{^fJP>Rf|x>v@e-~XvWƹ氘. 
®8C2&zPr4To|ܤC_a"\bgYeO:KH0;1|;(G7$:{+sG|읆iϢ5feli65*`P4ns`Xb56G,t"1zAZsO)YD,ǯ9Xa  ۺ6ׯ 1Ӎ-Sk'8GxdaznWٽ"h%bAjkrpRvUtn7P19Bz*\O!Wrd_> c JdiF=te*s5tYCvAq'Mu6S ~-m!۾ Șm8(KjTZS̷DX1ckaJLw#3\CPLwk05XKn7 c5 ͜ ^YUה{s}~=S\8ԭvC&eU&}0)8lP1ؔDOd 3<:EpJy⤵Di'ف*am Ȟ#7%ˌ5&s4%j82fwJ,-/2T[*҂e\}B˼Hl0x +e `)\I}9g.ECJ3D>DefWq}!`P ?0_U wk ^;Cf2dy1"+Ov_xաРշ&t=\>0g4V@*KbFR%Ϥ\bͳjw(-G` -\2jC 4Q!pqI,C53lO`H>3t)~BmsdxVDi>\qwfYVͯH#1?7oj)dY>; *Io% ؘUZxb*_Խgv2.,I(Z;+a;ڪqBd,[Axޱd{~i-Ks:%ưViVkhϡŝcx䤐 2ZVJ;QjQjQIaF n,hCt-'vL jEsKLnBL010Ɗ9'rikbaD.*R?mMWlf>%P8T-qL ʥ3W7f@.;+2kVeg=QtTGȔ2j؂{^S*S,,>!{B2PJH|^^0SqNFǺj/p"/Vخ˝mlU5r] #M`FizYG;-BNi '0:30*j^|z|aLK`3|iQ9q]QWILJ* jh"!^K6('TqBg WՇJJE/bQLVMIBVA@|~]]s.l杔Zien!ΘELpnRH՚{>VÐlT}0 Օadd(}g!Kec'OuDEŚ;ޠ) VIN(% +UE(4cτ+wT%a,n-9LUC@-e6EF-cNg[-㚴I샂*_5;QӲO#Շb7F>3j[ٝY8Dfɩ6({W{ji2>XWa*eXdL~Ro 5ڒmAt;TXTβ6s7&W c18%N.TŭՍӿǪZt`w9jG*!fkߴgU~l¦Nr`e^=˥fZ"Gv0ҫ%Ψұzq SŠ^q]Q6Y<1nJ0Jgl>4Jb#G(k;γ,{XL0"Q˗uzz ')sdFV'x(,0N\<1'.ewR6xeE<D1ss5a&>ME=ۀ-v-$~ñNgEB_RS VJ<*axصa֝|ja7|V&w2|_m !>p"Eۇ6-!<'pT;΀*H١aDa;^<FbMډmFX]yr*#WGѸ}^m]핰3PalD94ޮnӜ4C\`9n ,JJB MPJ $Ʒ0EN[ahݍK_K9l[IwdX.6=9^HOP'2O5Th: uݮ'A|߳  '2FN0<ց2:lnh끺s_|!oMЗ(%;aZLFwTx B;Iש c6pbJtgבnֻAƈ[/]H6Ë[Y6?ttHUشLL7gt"N [gJo/L[etz W~)s azmoE*m#;fA a0Uͱ(t/i}ħ% ̳'awIV~ɖ/]f7k&b4>P[@֌ f=x *Z8k0BIsc@ǖ[]n~IL <KL織k,/w܏3AH OޑM:I1ɧ4gE]GSu۬`v/10+vH%b|Vcnԑ_ |VWzxRK{2MrAHJt= iv($[e&B%̦' Q޶M.JteO, S[f1R$Q}ZO%Z a \PWxr#P\t ^es~H ݬYNSO:/A*:{P+gDWF [wz&lBՎ.Wqga3 8eD/J9/q) lܣt?SQ X3j8T1z͙Iֻv#U}lz;!$g"v O1 8D3^~pbA;$C>DېL2 ;x.#\9 EϹj|x]HkNs{Gã2 v>5o7gV5j"*A1TzOS8m` 5 26^b~r$Bd}@F͞-+"d2Tsh3eK21zշAed?qs'rrbe"zEe>؇l?mQ]-/q&Q$J8m ٔ3 <`')gn+vͣ;ZxÙu>=A] 򆩌qZ\ٲ2a#\.͢WT$ d"=:u$rMo`W𹎕Ŧxć!vQt],]~t[ZU/C.g- SɡOt^c'])gm$#H{srv@i? ?&b8*>ۙ_˄ <\*37/4ʒF{7;=@@%}Rv<eAf/O|8:"Dl3`?'Q73{nc<-\a!' \q~&LA:.Be,5lc% w1玃nu XBًA&NW)׋t޵6dh)<M|w/X/v"q[XI;_V۲Z73-ӌv*D&bn7Y4*g5ҍ _Ĝd(YVN'BeF&0X…vm0>ګA43oД//htpdY'àhAsRS*w];JMW 4SZvѮA6\e_3&̪?l^#˓(_˹^ا" 6w$.ٓW1>|(SR,|=ZR3f9 h G]- Re6'>1G@E);MJ٢sEVFЃ^2*۶C' `rN+~|䧿 Ɛ"׃_U%d}zۼT&丌>$nkϚ|R8BK UM{;Ӣ[SڨQxSCats c>e#UĀNJ,${8yd98Q1bez4vUZmJћ~'*1*S4씜&^\8≩#g4`iݱ+|aj׫շ60n5sCf`FHv|m8̱?Y 50E0mKVrR~f&wy't_~"&ތɥC8 B)6ƩZ0;=;F UBo^- ,#űd =#<Pی_f3uQ/Zڂ,@ވ}]0aD!Huτi'<+)ӓLZ\a]e3p=qV2p@mkvQb#&tN'0|Ew2(뫽u=Ls)`dvH`sQS1\iʰ4GX YX8c߃v,Y2rw64&#S`}| m}[P'3&K3G΅JK9J/mi mGY\5*|f(L6AvЇxP/lq:φ7ׄ3i!z:4^2rY( Ro=άwai6bxFvR|oٜ`x,5ֻ.D lH,^'f O;sl؇{ɑ%@z<6czrHJbZґsl+/+[/0@p+?'L3|:Y,DZp}V1]hmk;Ney91ӭz=@759ēi l,HYA>@uegeAS#sT-׳mM*zb-TܱWi<v:WSzڬPb4ʢ+7PKe"N+"4}W,Hp,& l ]:Fv{b[wL(a~1/1Cd;^ķɴ7AŨ[1S1-t%͝еx)1T??ihɹ1\ϥ ΛY:or7W|)e<[BHG.AōtSfL58z:W79:IJz/`~ԓGM-W/w7 琾ɛr^2wAJ jT=Fe_ު6|[Gˮ}]B-;tǨ~z" GRzةs;7t@<}\VOk+ =MvYхq3&.Q >{-?1=B ^5goi%Ĕ%E1YW )!*ckyYռv%>#Z3XWp5Py^YtqEk/Oi)E,m7X7(R3\Ӏf.-`;9ܽT#?1x[Lѽcjhn3ƞBLz?Ʀ,c|])^qdC^󌹩.rpg'?"ŰNF;͗09lىOa%ZKNWDP lGQJ:K]RHSKyFGCnѨO,F}mZw2fWݽ)sWM}0+ufWԞpL pJd}4R,*cI/ I- `GĔeaO'WCsqfW08F3\9LxkC= *7QZ۹/&Ag%W@!J*x5=v9+g7n$r!8^󢺼lC<5.4@%ǦfA\f.X׹9[gLBU'-3#Fd4`n ĸ@.:LOHxns*oR>MNkc?^XᙺnUً/m [C?M ^<"ű(^9ɰ8t`YWLXQ':- Tdj#D_+VcJ+a$Jkyv*Xc0+m-}xTyA [8E~`PԀ j=Я%#[" vwpGa6VpѡωNagxʆ\,0%G$@jdz4,5jqr'~ԃX|}L}VVUآ@dn^/LWkW3aEg!l9@8vn,FPI, DrĎ1#52d.t9`W.5Kƞ͖2.k @f=գ kV+ 4qj ?%.E*:7yE\؞S 3䇀OlK\DYn%* 8}N散0h-b46LNOUXqUoAs*;&Kg_}Z,/vknmEK/FQSe2l0Ec$2i9; r ߕ U?9,| "q0qN&g#GFY7s r}ߪC$9-9ce'jԘH%tؼ}&膬iaddh3*+n:HIW>.IÂӳ\?]PA+wp-gbN)~R?7Â햞&g”lLP4~v8~<21F a,f$4H0a]S@"$"Fk:h}Xcs}r>3zO7Q3ّE>5r|H aMW DVUw̖-x:lvsWH}Z38͏18M|;|E=`tΚɁHTyF֍뽹HaZVތ˪rq]FƁ뤭Y)X%xw-:_8 D>l(WV,e2ΖKpYhCqѝVyP# `+=-$.^$4`23P'LQ*Ox!5(K{K02 *PKT0|/̌Ct' Ǝ9`J6,׀ hdVtęu^<7ufYW'@l#%K\{~-' p&2 0bȭ(V9uE162YO/)?#[ņN*MpF2L6W9h" @FP24b9'>yANhdt̲ ޵hf;ϫsgjv>pb6x}إ>8gZⰹ4oyh!}`(ư1)TjkãGbxޑ\ 9-Jm CxSRf?,+K򏮈0TN2mzd.a0[~4-uXʝ>>c[X?xgu("91 
ih̫{k)`XlqPst_n̈́Lzԩ-:VO`\\y1&CaE#1[e+E+ D잮bH~T"$.g+dʹj )tѝŞ)g鼺v~\yc Mm_ȵ}3;m.R8D|ga Y!O3AMWsBW ݄)Q.sg?s| prq4:QB"3w8[M՜Qsoъ%чwͪ(8)\]َt+ng~4{T}ζ^\̺"&oقY4#_QR%YF'foYcfj#:WeAIW0v\~&rJ 5 ݠ24Wr윘?3w+:+,ONq ؗP<]i]!4[?>ۿOnHf3c[ObfLL0yV^_~b֍ ۻc~J(H(l[Wz  < z$ 9|Z!7E`eLWrnf7?ݳ_QASE`)RaMKPX ]NiMg7q[JKԳUxIMr_] 6wGp8!P"3K*9lP`OQgUbSvr,N6͸[B/Dp7M}>>55ɗ4S# 0IubȺ}\?3}/&߱sZ 핏<]CZpĔ 4~$ ޚC:yy>^,7d)5ca?\l+qP&otq4?𫺶+ta- OCNu7I(0уx8jy0]73:?Wh;~$p6YG`~)[Yf}%vhC6s|0U* ̰|,H0 c5-–z0y pBSU;`& ;"m6Ѐgş{r`dX`̈uco}s$hnOs{/دCYXE'g dYmP1t\D3W)"i^2@l+4cpAq9eMHօ-.(XR|0ߒ'f9|u;᳉\a[ H].,Ơ?'7$ \qۊ dD vWYȖ{WG7Eoajm~YXϪGrؖ'G![A3u/ :Xކ/"KOVm^aa vq6/1Z '#,-"9k= NU;xI^s-T]osMRΘ}ƤF:FX֥ozs'*cAu#Pۘxn۾ gnr.T_~w4JkFZnwg 9J8GQCv~#ur8LwIroJN#SRS5ȁ  [E"ջY|DO4K4<;Wcls_T \)ťQoql"Y-/HIS3+mqЂ'ՔyRs=H&Akκ ti}_!U/0J7d$i(;S6;1]]نpVB So/N-df|_i { lXkbSؒ֩m,TczgnSC:xhXw^?Lula`ji%Mvq\a8OZuԑe1n5HeĊ67wBBV{+5u935] EM'Ju"ԔVV+qGW2F`'[@k ٳ>w*$ix L3N3*ge9qG]ZkQLS0ʅ<$=6 7~7]V߿#(j O5@'i~mt?KAؑ?nd,kzֽjnkd"Kf?1n}?$C JT2zevq%p ")?E@ͷ!=(~bW\׭ӗ2'/1/GFɖTXM̓0Ud]0ܱcw|fzqj/~l?ll5os^:3۽:*#]ωuk:x)jkYcP/]S+RC9 ƲULl1<$ zD( GO'\^;̙uba&tM隶;y^QSqf( Gh$xrro30 b\2ÈֲdΦ_ cC0wVf~Cme?+ݴ cŸ#O[+b: ݷ݈h͈ %l6hY?SG uHeW(|2u/ý՛S\M4Dcd.j rQ68}n-K:[t*7-x8ĮoL'[t).nDSD02Si >%ǩ+~3,r9?'̐ݩYݯݓ)yžT%NK%&Bn<9t<0rٟ/D,ad==^-:H@m^xKG@4kט$T)yb;Vi/Ap*;m]K7VXPiu _s}purO6>B˒,œ2g~e%.^e;NdaYA$B0Gb5(t5GɷvJE4k=:bu8\ү n[9"i/y Kl't8h_vX9r0U;A#kʝʷ9=7]Y;]f\8XKdjV1} r.`5,ڴ,i_|df`qq_iV{)gqY#¸w"#*83F>__1 sf0$2KZr_Z!~sT2]Xy4دst,~zveJi ^R-ML-Z;έnL Ѿ㰂د6R xt`~q@g`vsJB gѥr``srh)DA?0m{?b *לǬzkRɍkD\؍_(M#;p9|LWVf|``6TkPU7\?f:A|Y3#?) v/h 3[Qu6sR go6oL链(tƏ3TKoDͅ+?J-*$W}{J 0(̝2pŹ$HZsI)Hp."3KR|K3D@u^ ^|AҎImvMGzQquQm+ 옐aT&葘* z!$OsaE߷WjDђ Q-$GfO违mUW:ꝙWCߤ 6aM%YK3c[Ri)]PYd8X,=pyTtX.S {A !ΞGQ=2^7 h ='R lAu'0ʪ?p³} D.`͊u.V.T$JɮUb2Gpϧs>Ξ a\{ ueB*}uwA$}hTK3xNiNMx^嶈<k[1x8& EO0X99&ֵĤYrs8;Ȍ2)kdܒƒ95s1蛔Iݶ+%A)ٷDKG/ˏ!A9o, Y01"{}u%A|UFn޺/Gñ,(0"Ȣ!g$Pk!|G8X Ps]%,'`t~eQa!2?S\UIRY]84Ҥu@iTXȅ#woaCl.;n2^QynM-t=vZrбٝ2z3"3DQH=i4W}43),o9}Wǻt&}uJaid)*Sѽ9} zYꛀ$ guc3Z*usgS p/fPցMz%A9?WF$~p) > VR {ky_LOhK aYq!at mUP#s8YQʒnUg[Sz ^|uVVt2bi4/OME$_q:rO%fƬĖݺ 0̳$|F1} "EʗDo&i,Df]}s65Mͳ¼72N¬/t #f8+\.lnCgKX̹']*%L{lQ <۟$a oe fZxjvr)̀SxzOٮ|&JXJŲq˝OfTk}ǛhЗ?Ӓ7Ft"L{@rLAHl(+/\c[ѣu07򍝇ѱyM>:ako3k??q\O]CzC`nςґ(8=}~do򬜳SبSӷ fOnFm6O,n9u"IEO=Ql[?SۆJmAk߀qyrf^m>Dzrxt qt_n0(v+vݗ[q^$*}4oi>n'=>(cP3U{qpCهC}5;D/O#DLǨ@,B96m"XG9?9&oN0E0J!4C ?VT^x`~9k[Rk;D@@noh1]=lLZ&}V2, n`E+i%RK(= PFHdinI$Cۘ i?,OKZǴnr>tPC^;q3lxLDEOE_E_E@EhD1]eg]7Ұux;xRuś[!7YK%JBg[m߈pSwc!"ҥGXcl)aJr7e0[Vo?\NP߉8(+4fuΟG`} i_a\.f3Mv 3QYLЌx)"$+%`kշ[L h콆7sf+!* >^yBP y,wGs/!Al>?|v)3VN zvN,x%[|[6asQZ=:PU 7;362laUr ;ǀl!G ĊyL]ϧW4M--LgUpb&8!ʖ[΃ Aoۄ)rq?n3. 2%84w_ޘXTQb_dskqXJ=φ=51h9 s`v/usQPCdCR ySb>`0^ڤ>Ֆ"N_ps\fjVb,:g$sEsP)Bخo [6~2}01몊n655fZw]4{pfy¸NNTuX [ `SfF[c:r.bj&{9g :c!3c^I5z'8Xa,`fY\~{-ЄH-8NgMLoWx{zTY7X0V= F O/!Kٵ1{ fťX2[вHJ_o)VmfA_?gS_vB0Ewx@HKdtt\=` P+T*z)H.6u8 E 8I0tn%'KD v9/̮yqɼd=TH&[E)FB% C(:ܒ z q,QJNq?GaꂇO]ơrR 욽sLLwY"zo7M ^Ts.6'kٵZ/[`nK|FU.}W_٭:4u5 fJR|;ƥv@-@q•Vl+D;{Z2}wd/7ܐ-1o[H=ZXH$@8яNr1{ѵ{%;ckY<^=ٖаf8%JEYgi_˜㕾ſtH_ {:lRR& К͌4F[ 4#ղܯ6*U}_ۉYyܸ?՟n\p VG:~-Fnj 1bawӚ3MZfVc9ZύYN 8DO2G^_riQV=wqrXߔ Y|K:#xtP #Z+R+ÍZ9?mN7)+R"e2IF_ jk/'9={a/IJu<ՏF `'3SvPt|Ad\撐+ݠD`.$nꚻҳ /n؎vkݐoiɟy,đԘG}O"]F.GᙷHCKf`yacrt zuyֱte5\}5{O7NvKߩL!d1RG?E3Cf[T{,ۥt8wƗ>4$*%Pz.Qù'bfDЈJre@=TB\ۧ8~rNzkfRJ  ^FgrYywNqL|G&.QԤOimb9kqgJb9#sۮt.?EhzKld/o9"I<>]얍 ދդ3Nj6C(c*Ǖ%v-uqHcΤykֹӆNf N"MM u9M5,9( bn:^ƯTJиͺm[Tm:m2Le=hv87=/lMNy(1cu0_6<ތ7h}sP)q~4{1n00 Fi~aqz%fKjⰭ>\mVg!rxeܷq咹bQjloï,m%k2EˆyW97@ hq>Ml*-jY#s\w!#e{iP)r;0D&ڦX}? 
ߚ2;DpRbmFI0hlp_Xy'au=4?Q#,@ 5vtA<"8oa8'A5Lރmh\/6*q_3kbuFnùS&Tc_c߅Qv ~WC1PpBUGPevźRS(k2i~.*'S߽A- 7w ɻ(!se>`ta)I5M؞h2bH);Չ_DH3< ݶ=O|mOD"+B<07~g4GermPnF,wz'8Ya&bPe}y#!̫j×>GY7\Wm!`UlzOLc",غ[-o`UqhB&g%ϖ,h&rFf%-Hyt+/s;3ŒDȾ v&W.Зt*-\Ƕ~N=CdaT)cdMzO"8f$>)$ d.w#JVjpЦ(63W{&ugA-&ôĩS OwLeѵ*3ŹX>0c gN&G:f9aNeS)T"3(Ľe锆ở*&pO̳j@5j;-šӓg`ڕ*m,Az1\Ρ.9M/1ݰ8 */m񜁧;xP_Ƭ2J%4H =//^<$S"NKR7NCDx;^un0f;*9@\晽t,9 `<ou,)v*Up.f(u3  9,>\S ͪKs~4L?]]@fi,Bgթ\dbJԑ:]hm k~/6nν# v\0w++49=mTƖ{*܊ݩw!o {0YS<2jϋnPLg9T^Y6F{ԯ茣9`'#Əzb຺?gÂm@uJ!r& sz CR1g豥&&ksυC]̶hJK_:PJZ5-?t\2X2+&Bc4_9O9ł8}&w=eZԯM4g/?َhvydQ!Y+cʖ*K $y4u'eb^?{Xr=xGdPثVN~H['a-0b/8&S]\ֈV|v=xMWXņcS*W%]"ۤmTj-EB'LYYlކM%b])[m$N dT5k]÷1qrF\(uKٽJMG:^zZ@.oCu8ZW1}טh7(,mi Muq{+UTI5wl_E(b~oYO}B헱H1_VJ6r4ɗ!pCm"i 3YDqnc5{vei'<F!Fh|''i/|+be \/t'K'pϊ,}(U98eqKm(~z | f90ہ(Q VMݖpwjc׀ FyύjESH>v, ՍST ؖ{eC.`D&!8 7nbmJfAs!peu^:Y?N<@͸ D3aR0F,ugDmUs_W T.pµ蹫X˼6,î?N<ABӦ EQϪ[.g\b?BW!|+ jBo0i~T&RٹbP|ͦrR>)uHfI®Jm6GVR[vyGd)c6JVJcy5HbK}F)ĠF)WFzf8 )`1Dow]"(/F&[az t Vol$Tlz>wc30rM)ڇt _lu6)2J~ꤨQDܹcRC#4s\2nCU +bo@ɬi_,hUǬ%n"cT88I[i*r3f:*K?Wi=BdP'#R< vz'#}^_u)Yr/@<6GS˟Sx2̳yKQ+N/3!ul\d,<ݟHg'HХNF1z/kX,3C89&}mAҐ:=^2l/oQO8YDwߵ?20@3K3C97 ȒR1t8\-qDL5AT= WfK|-j#]vX(Ti½Yg?^TOyŧmOfK 1%eڜ2\$WsaDǙt[ A#Un3z~}v=de˘j p;-jr^X*RYBszpo\΢T*]f7KuA($!cA{e?qsB e-eeCs6C+Z-K7D.D$gRM\]2/G)$};eE(ve4l{(33ìzr"LeqM9VPuϼng%#NٰڈOn-coh0Xo>SrmN&oja^d-;Pq֎Y]#vtDH'_g♌9K[ K3庝Gbpy˘`c=n] UyUaMQ6Ғ Z}~ˆ$!bL;{+`bf,Ne⛪8 5DU^q(\&:LN(R9Ʊ71tw\AYh0鱨x^;GWtV& :WP`U89zŋ]XcNΔ'ڵmP[ ӗ 4{T?.*i pu~EX8eQhTKtRj|yO}iAUQ6+}*=Ye:d(x;$i\SVۥ89zf2#Y٥k3 .GSGK_a2 `eC2pjre$խ^}RY":a,8[2BPQ\y?[A\ԘFik}z>_OJ Py~vӃ0j9{aaNΆfZŦێŔY#6dABfd UEHNF'cZ`0A*g惭7!yfQWo-&MfvNP&{i5[Zv |r!娪fox:S6d5<.'CZ|Vs$&t?WmG52˅Culd-(DFf; \ O\ J4 ۛ٢h9;UK|QGjZ/Z>J ]S=` N8VʱǛ2AjPcE(1X;s3i е 7x 3{~ {?$2{)h62Js5M88}ŸG]68gޏUG3 5|ج`ƅ$_1H;oobaɚRAwxHS=XObwbakÔnVBܼc4'|X&жc}Afi2 aқ u&nf~.*90\悯ݰ]#9sk؂H?ǝ4R ˗*Saa ꉘ^3-[^#ձƬ2ԡ DT\l^ܷH[8|[?D|$pd`^pC=:~(aݡ cE~x0vtZKpx* Φ̎Pj5傍n=EK=_m]~5#5Y pL),[QG;Y۵{19޳d Zg}CJOi3w )8Q AT4Ѿ<^P< #mρ/r"zs֔bFjŏkL3ޱ٦HX#սV{#'˅}qo^0y"i}=5Ov4P&{YlCUM?^Ī>]-TʜC!ws*01WWlqşE>m(Q_waN-){5=Z8Ύxm$̍[Lp1omE3ڻ IPa OPݠYW:֝5|yU 1A|Ni‘P9+lqWQ&v,O!jj}F vm;DQxE&>Q)"'.ݻvXt.IʂXr~PpVךOmŔ\ E?ڡ[/˞arr^Eӟϝٕ @MЩl,eXL|#\Pm#A!H P|j8oO'gst1?g׫~{GkvV!6^pWC=z&țUͭCe*a⛅vVeEz[/{%nSc W7Y䐊V1{08V4/:-~^ϿkkGT9! 
DY Cd1o] ٓnQQG@U ML r8C\eX+O0 gѿ5 5j^eҬhT`K;$ eGouVUj#|ǽ6=>U$4r򁇏~~̗Q8}Nq)Ұc;#^gC+|08Ғ 셖ǿ>k 낲w> OˎgۑDѿL)kU $#ggHΫW:7#o8 ÕlyrM7%#N@HX$JRETQ-#R'r̲~ Z qLa:*_؃(DW(N9o1NO=#X$yΝlAΖxc}Qӿ$"wqHq铆qɟ^[uan6^zɑtGvءF{(c)FT#5Б ՛ؑ=jۉg0և#@}9ԯ_ 74@Kjs*uzױ/\n0P8`MhΞBqh~y#ބ4H'bbuC GrAe,ı8 GQ'~ًK ߀];~T4[CܡDEݑ՘+28E 2H@Q0ƟHX?tRk`E5Rfv~i_wi:'y# |al3lҝpjvs P~\#S^@4/6Ua;o[XȢA:,4F#KpRK5TPgHq H& `߽k<4~ 0>_ߴGk&EUC j1LRM\4Ek8p ܷSmXVIh7 AMQE:DG8]yĮ_D-3\فiFA!N>е:'#s"*{:r6wϰQdeF€n|+ c[SDqD9n\T5Eer}% e=nx=Z*$ԇQ"jBGiXB]fhrIQI>ƸyQ^˄Q0 㠃.} ~/ZҖPfNب_| ^E*'F{7n/¨aI9Ud5ࣣuĴLg))d FE!ND_ ɭudF%H ={fFf8vݭðm9(5+um 7@7MzͳBIaƒԫG"N0C,Z,ٝ \Xg9{5V"Ye~k-n&߬V^%%!u]dMF_Dݭ[V6Oeɔ_WoIV輏ZP,.mMJ ":,}ҼjJAD+K+W ;ewVô0~9~Ì7-mg϶O&;DuPZRggȹ]Q,Dj0la#26Wa^StiRM 29[gsLdbs\Lnka /=h{K_*·Oc[HFVqo-~!:#{v&,hGRtb~R; 5p`U~39,tF[?ET|E[Kv8t)o.dvXYP>r%L0 ᅂuƫ_ʰ%Z'۰M&LީFHٯRgPɹǯFJJ[s?H\ṍґu㳿pp'[7PV$:]V/PjQdf)u ՈxޔA yŔ`A9= &ajs69)m/0 =W#mn osk-)˨q3*>60ߤDU[ӌp4EC+>m^/&62~U?'SSYOiɒy5tj Sa/|OO9>5 qMcg#1'L[2()&n0|oОc_x!:W *_4qF=jB|4A3>~5#QI9Tmwnץ݋/T?LV+z^Wt/ٌHn\-i{ޚ4c1K[^}4mc}PZ:ɺA`oQiC}ڽ*c.}نQ hXۚE™x"wzp+aض}MBs:j!ݨL(msX=Rxx%6#ߖhvXG%%Bfcw~T(g֞{tA4S~(G*oWWkV N/o!',U/:_>34VffǛ9KP-{6b;װT|m/b@=7ۄv 0Y{ʇKC~yvxAbǫܹ+m&k&2Jӗ;wŌ7sCn8ӭG$LmA4R ۏ7he5oK;Ӝl7Pa ~iMP /<:VZn\-,ۦ/bTC5^{$u{v~#iȈ- go8>&ZP1-9Qcz)x Rذ\!^Ijיό񓍌]tyz*Pf8Oԟܩ??0xiO,ԣN$Hu..05O09E\z>e8-q<}-E$C{)٢{%'i釢ݖGvTQ{t)!amv&FR]Z8##ap.LߙQk`5G{9hj,zNef{I|پ3}OxѢ/D{NvkF6@&,,DH]c CndƎ~F@Ka[Y6I4"砅lEx7`/]R۶qnE~'U'yւ^Hn`) +ށ *EBI/rԌ.S6hr{kc|AC9Oge?S< (!P\ƚK;exZnÂӏY)1/'ģ3jB%Z.37#8J x^YcWlmv)Ϋ N.&OU5} dq ym~ɜc7Y-W~pGw*{|wnZF^ r(5)Z] '#Eh[-I/ 1OjZ#&};?}5|Ù.q*:Y8ŦD/2i|_8UoT ((kWH[sxh5}c_5dUM-Y#оAf[rsMSbhLV2uJu<[XWw믺n oz 6EMF^ A}mvʡA*3i,¬#6=6s 8M#,gΨwz;Qe7Kd&],-/ *oTXAN8G_qo,oZP[N!R^Ee~,UXF9_H_,id"vN6-ib3\FZ5edPy#ϔo/7Z(LMa.Ez[їXuKROrƧ];7"b%͘6?!.740J|?MlLn3 b*/>~Mb L&Ԛ-z0b&\hMҼ(_iZq!J0ɟ'DŽrz $gN%vyN&nAyscE{nP>z⒭b^m cGydKi>U\O=h0ny:˩E|c|q]e!GcQ%Y'ϔ9j'.̍[8x0oƲ\`)ad#ͷ/ ist4~-\JArX> "?qّNzvYc\֊`&WӒNPtF%;rFU3~C PP#DN7zI$NsvR ȟl#[ëSAcn_S )9Ep%MMN뢈%"#πw[TOृӠVc:uqj9k <"[ف 3 ي[-q\fV&`k`qՇnu`fԽ8])3mmu%zu%Iv3ddԅ/fٜU:\pCE@>f,G(?ͫf6V.I>^ JQI0IJ=Pbf/500p(pX\/yAP_Q4YSZ.~dJ ¾9pY>v>0 Z2e2_so[+"HS Fݮ? ,9U)p$a-P ĂRJ]֩Wdg9m1(3 mk0,~Ю&@f}i?t"Vo|W\TX|+|O1_IX'oq:=c^N tɍ`m;&jtɛ]pCy}ԗyw[utq9M@`K<ڟ;qNRi\࠱[ M 3-_bCg2c=(w}O OO<u3X77xQvqWw%ËC: ژ-_S6eH#+^,mig( DZ?=-"e @^Ҳ9Y+Q,1lP»_0]N4Nd.w1Hda~M"?MtDꠓJ?j]C^E67#֏)h-=dquDԍs$H?Ҍ{U'EoU7 \aOF;esQoiLZv\:mx8|b3ګjZhl= ( / `a| Oڒڕ^Q]L­m0$o PfQAr/% (;3F*guM1,YaO `NYO`I.LkἽo ryR ק ` ֐!$ml[J8p}W,1#qx'rp[dA+~"j{] hpƏWsMUx %\$6}BxHk{(U-3~[^q2Ltm*gɦ-U36  $=},n4hzz^T90 {U={][~ gyr݌Cer@otO#>-Ca?AKs43CL q 'WݠѪѓCdHWlU!GFqA8W}f: Lʱ1o f@pM(â LƀXM<9=+9ȁۋ5B+ V7rivM+ hpva8v6݀RCnDzF}vW>U|qSL*YiՋ=V}pGXX|tU֣6᪶DP/=#NPV}oGzy Σh r˲a76z[P=Gc [Gz,n[HVVlcVhj6 UG IxY;Xe ؐbiy=x4q~VP 3K '_[,S>N}~0>>6؆[7`|sx!$ ǬIxc]X.'l> ׬ ƿ dm>S9AdzuAʹͨf:l7:#alA78NO2y@W vn׊3IevVbFU{ٺlZ~P|*zɏݝ ;l{j|v M"-6G K j"ȴ_Z|86\Q.׽ ؝8ظT3*>Ttgkߒ 4(ꯆq|yԜ("ۑ׸@n_gY6kf:qEp~$Tt.ѝ+D7E j2:,Nx*xח1)+%M3h-;]t:33~>DŽe;2040 uDwl5{5>պCh.~J-wٹxyJ=[`(W1@~hEAf{CS-B^]ݜa[=eC_BOQa*k6۱8(ߡ +$vTȠD12Rdڐ*pȎXWr~lS#e;JS=C=pRɫ|su}\@ h2ܷº1X5c بvwm4Oa)u͚֯SBU|1Qu%>5FдCu5fQv4<Llz75eiP\Mv{Ug*ʓcOFCbCoژAGq&VsqD<4 hEa7$4=Es |FLph8+JHϵ\,M7Z'w)?\YI7eh/eJ,M4cq)88iS֍EkAn@*2? _/E?PDx,65FwVJA珂Q85\`k'|W4c}1chnIzY2i*sM_CUx`:8QEs~él2NUl=Lbp<2sSdAjU=$oaYߚ"(Be"aݬ_%.hbPWv'b8Ry"6b?eMC:HbD\(AWF(zi%F^,ӷC?@8$2 ;rJ0 'LN{O`r6wܳduʂbDm~Zfyf'"[\ů"R8N5R~g[U:gq֖rA=e\ 6#-0c( :l4CF<05{K3fޮ,c7+x} 6h,vݓq/>!jSոIԾ]mܿj{us<>S#۱qe(&a ԼDT֨8F*z(LᶹOvo:,E[|5ѐV/ڬ)Dk̳ob0e C-EhҔ4a! 
oAM `&6D,̶adx%۵0N#Zh-/@ئl{9,(?8cb^iZ'#\bW08ET/)A=VC?Zf͗tv޶fxlʹNDgۘ~.0:]5aCSvӣάrN`.χ}sUWdov W2(t>h;EWcĽЃSHdԙ7Q?ؒ;\%/5uH/y[fR2h\w2䤈&|;iNӌ8] }Yeky WoQNpf%Q pAf?V#Z)\7/5a/r}emܪkW.M6)~ V G [v@cua5"x*'s=oJhz0\,4o aq'* d5HiL:Q{|XmD(ͯҊ'ǿkJ?ֹ 7ʻi/S)$֏C]Bo,Aqbo{]cNRq{2#w炽6bG"nc.>(; 9yo.(qNxapfYyg=ks%H&h`1e&M;Jl8+3TbQJ_ZLKq_/<;<`,G Ĥ]Kp=Y*{mw^}"]Ԕ r^*`M 㲝Hs $ӳp^ k? O,)m'cvjjMl!"ȄV dfn69FX19eֹC(+b;P'¤:+`Ņ•( }; Z? ch=oׯݘJwQ}!hls6Uu2LkI^x?U x)kVCXV$Z1Gʛp.~-ңLa ǔ(ԱE}~/9xP=A}O#/_:N,Yo§DDbd8!D~ZhL7X=P06)ϟُH^vlRTZs .^ԫ_}xaT؄2 AN *:j߰g(0v6 -B̞MD%8-GU#cuG{އgT厲zWcW : ,$P%CS-0ƪxR# PL^*vRM} ;w}lUv ,qQl'ז]ȵ8nȉHT>`6BxbXYl|#JWiSP: ~'Ry]*3ɀ6}>8 QMɥ'O8ދfY}Uoʶ&+)w!*uiq1*4Ԗdd->J\|t({nb=W`h2Pm؁ȴ["n̢7ZCOPJUq({__u ]+o7U:ZxW~[+ KK13> G>pG?&-! Nb1ZK5Ƃ&3ܺ.1J Q )xu̹8=q>'2+W~BZ_^/H`RWAL faRI;-JK_e l@X6c OZȿS{.?.\9-lkwvySy祁"E нO4b&tTLcSˁ$&A?!;[ ו#sn>C^ug8T6cc{;s\6w`b#6F[e̛D h?_dO,R;"DM V<L}ƲtL&#Շu=mg)cP]UqL[:K$6Mf֑m$ԁs3+617%p87\E9CEdHbo Aq/kur&ÀaB,휖P愤E)dK%`! 0 io.&øAͼBQYQr+W2\6UJա:+(!7&;1K*#e[dq%`ax#ΠJFAVVz@4 'ECT*躜w5 Ez-;L4|jM,XPÊJAQC\qܲW͖RS$E4VQoXfb^ENvadoGHJCm֞8X)a5_UF! S3٫Eo(U4(̄ZHm)N7u#sAz/w/2ȂRAOM,.l(ю)b6'{ ԔckXtŰV7mP"VIkX%dѼƺP.$8՘`|Ć <.eu55VTxDHm1xn Rxv*낗A g]ė90vĬ]qS(<>"WhVtdnb- $~`0cęJr7}PXFZ|ImZhcͪąW9::qs2[ |z;'Vi*مA%vk}0Nk^Q˺6} μ &2 -g akfG~Ħbe?W ";h{aDzYuKs;UwqNM0oA$㭻+[";t^+V:΢F@ۥ"5U(\,>LƑeB ,~kgu\fYK'ыٸz|Ry~&υ?!/m)M[؁ qw񎙬P?ܡR.~ok\6-m:,.ғͤ֌i\H%M8-{˭bN5,}#deiYyB8Cge{\ټh,6[Mq#D’idѩ4w2.Uv*HU?-3 P1]%[?¤*4ꮂ9N"vQ ܶkfŪx`tpؐ72K3DU'5גMIkNXDg"7/\|Yv9n^cGEqTdՏx50 {^ruݶ?JyEZc񋲏ӢSk9߶ĔBI#. tRt%?HE`ٯF.lS>nhӮF0%uT:)rld(Fv`[88'rv lfb͒lW'.O}4E0v-,mFmwMcǢ#^jCW3O.cepgmkC~ͭ&|_8:+AV˰ !CjbRfsvls'2!M||2)MtN4zP\kCH~TLaJ,zL[qz 8}c[<5\|{+0괝DQ]֋wflWRk+_sٌti0MmAVY`qc!^kYTFv(\py s0wD+sIŔPo >a z'l3M_bk^q?e+9<(`deKp)t牧LS cg XhaemCKG8SPĔĎ/T07#H̖ҏn_=FMX| )x[&Jn<]> # c*Ovq:r2oe}CQq 嶜g:ɧKBHOɜtK=_ LOQwՈGKh_n}sl7=n@1Tfc+&Rq1 TPvw ufm-k+{7Uwo5?zHs鑃D558LKyKĿ톑Q2+waכ~RCsFKZʾԧQ#q<]?3ʙ}_R<xT9EsHaF0&"5SXV-w? D0mݺ`^~8N ]f5 `VjrA'uQϷvAϚ7?ܣÉ;Wv@'>ׁ6a E w/v@{w6#Qy_p3Doɫ6|N Yjͫ݇1bIW+% >rwAeQvϧh`P=\ ;aruǒţ³ _1h'&3J.q~* N{ ˩OPؿB(yj+46[9GL&}Y~lI~r=63Zq"1%Eb6V9GZJ1jm &' 'C2V؄,A[dRkptxKZu#Gh1Y\ξ X|~DF |rܝ-JͺuR\׭x킍\ߧ;/АD)!Qnfl{|Y}K,2SOc:8akɁDC)#Y5u'F2gC^m@bW<I;9IK΢u$QȬ(d. "-1n& lpl?q'$rLgmrHYMMɭo;b7Dr@_Gk2,LU)KM[0j)83᎕dFh\euYose+ʓp7t;m`c0RRF\ˏttTۧ_F]C9[~oNEuPH_a >i#LKD1eD(5JZ: zJ%U8$Qa8"gJ!ۊVz_~m=}ka ϱ#\k3d^ipd .>qpP{7ЈnaG[>V/ߙb<~iG j .سu=F Ʈwח1&uUQvQ+V xwZ(0y9șھ E+c8j'higG?rY;cftYd/Zؖs`k-gKS 7(0tn]gR|2Gܰ#;YwX~}|0AyD{K\[JŶUc6X#j_#3 Z+$(V ?!WnXM| 6'ϫ5b5/M9}|\D~/A'FQZF;Qmz="XnFӫ,tR{FjTLqsvQgayJ1R>q6Lp*u~ be`TcTTE < qjZ+;n4OdsaT눒08M_#v%Dc mKթ #N|1̜Q׎T6(JiۤlPhOaZ/݉|WGilL&` E I{O0.FN$wBRX}W{GPLTn%P'4\o${&Z:&Xk$7ųcK*% oxm'ad't&azY߾v,~qވohN)<-oveM fG[2HʇAÖ KX7bFNTN|7L ݟN.M2;?2,IHcSN HrIg)sbLAiۜV-gd(i#1y y5r.+ա`z~||۳x:D}MI!LnV)yd) {K6xkjB-"V4EMFR:8rYuVJۢAۙa %E :LŸ536 rf {"^ #MY󻃚s9oNFe<78W.`~%6.&m罣# l޴Uhy[/k]!'^+ܩ8slP`tޅe+ }hjw섦Qdp$B2. 
X|ݽ0 @.KC._X{?㱭pPO)dIT$Y}4K!PdITY/8J,ebO4d9™*g*82?ٳҸ'0xŗ_)xFs*e$ ;iҍ S&L!νQ+VOmYRh-~o\Lp7Eug3"d"F;snc;~0,JVlMl6R5JXlѤ dCQ|Pƴ< љUk晌JWQa0[_E{2TbaY0 [ 淟JPf0ޥtڍN: `vK6am s% Ȭx'ԑx=.¶䁌t6k%>p1jc\1dٯ1 &Γx~[Ew8D>%jvZ&v #b@$ @ I28r*)BCmr lД3<lhCNUOۙws&Gsx}՗ΜԼZ"DOz0rZFN*aԫ q$҄͘t-6]' e,ϚBWm$}K[=(Dg/iaR{6J}Vh.I)ba+1+?Om $M_l?t0lH?G_9oAloA04x̙ܽ Y_^C -c5ļ1<%Lķ4g?b{߫USZ/lF_R׋Ad,fճj%zE5Q(v;*!E#S@و?+9 }ú{nF c9o, u<~huQ=;<|vEjot|Th Eڷ~0`?֞!ك v:*?BY bW4EsWR;a"N+L/qiֵhF'"BəMmSo13/å0 E/f-K/ʚ7ؿM~z uN`"Bۍ  :L`q>5)LrEf~6Bq,̡w3~x 8]d]6~Gu7r+,8-pijb0jc->e(Df0UNʹa=h*lѰ.:¸x,f ۵ @w&CRF e/bfxT|;-O/'RVva)itG-|rhñFb0YFFf(k_k jP.(7g/FբSJ?12CzaA|Yn t*Giq!PJ|&Mm,8F5=aE+Ȅq\0d% ˫CX2o|{"|Ptn_ethQ/׺L8yػ<,G!g&ٯ;52N#-1.8q瓷-sBYNQs lOETG,؆C s./!>  k;kw@رq swkwc_3o$X]>-.b I>-GPG'U߱\jcsQ&8q*s3v*ۏ}z⦅ZW8BnuSmw|>F:xY7"X0bnuf=F|>~8=w̘;Sğl!E.[/I5!E: (d*p)4ZnԊ3Q?nJs,O!hv#H~KV 9]akrƸ8ױIKD~%\et/Q(ip~_։`Axd ߲tYf ~D_a’tS[aJTٳcuk5E/Z7c`f& 5 1Zi 'q yȫk&5!U&%e kIz}I8c nwf{k9 bcoϏ fK"g]/?pWm?\h^`HD`9(rƲ`?E3[+LTE(_{,EBm8 гm5)Ub,#G,`$9sv#ةV~+kDlma ig Tm5JԳ_pep2 ~e5Q狳hj^8*Y{]W~" 'ہB@{4a~zVYH>KCT@q uN7qn|KJ*i~rgHVez~ N=kW'~_9o"-&(W|ky F_B:N@6_T FSߍW6jf Y[’Ԏ /$]Ύ!hsP c-X jb3QB\&Qkb6  ̳VzW#)޽`<`-=[bHֶ.0Jmx_mnfK1˜|xP8ھm'ao{,*6c%a,qŘ] or߸bYxmHZlEux8e`5T|m" :kige7WĤSWؓ~0P᧒\JbGc, OsŮ\D)ؔdLyt1D)HGs{BF&E_`^Iԛhl*rAb7(Stү9: d9wa*Gy ;dDqFW5Wah~*ͬy"X~< u? !(C͈f*E>V]T7wgwITjMYD5%zz-]sJP<1 D_p绢F*p[b:X \CK:=3S*flQV;9)au0Q k+ExDoS<7gE0]%!5gu3ls:L&&/b Ƈ6bë$&i@0IiY@QnWhSMɺt[xm5v:BMk3DC0r:ƈxvLoa^'W0'ҤՍOAfa@)PjHTa+iQ;4^ሓsWojs3tM~at;~2l\$ 0p,"{ sL D m`x?̨T/{v)#9^,L}9:OEāfb&DigӖ J.KALU쓁4i( /b^FS2cSOf KGTojcyrI2=P#j Mɴh@ۚJlYDSvc^>\A@Qw)_$)Ӧ J1LVV@i_/)q k'-/hX_CJ(9ݛjϺaqbpacxӧw s3.r}8fBi?s25Qӷ~:4ۗ<+ѪEtLF+:v|M)&FhIsA8ծ`ToS9QcG=&̘yT~NM[nSы [(~m6Dkl1~Ugy048eDN狓;jPT#X޸}Tb}=_x[g,lcl֜knn{TW!ȑH~[;圦;}^&`#O{~5v-{'BfG ]m;6qu`lњ}XoJqJfEJ[:xf2ڔ9^%3::Ĩ;Xc~cKy*g@uPV{u9qh|8"QyːMQj@tZ \jHX)O H3 0|Dٲԥ rl8~ i-{ đܾ3~խ0'w'qôT"5@ BDuar/R fk3K9b=viɢQʝ'.QToQSZVmvW\6f6ے7JmSޫ{M"`Y/*ErmU,nJnQ. "=o c~;D` 8E:BGi}(&E!/cS~u #8-9LGݳwb4 YrN%r{E CGqVε'oHw l Y¤\xƹ 'S`&_% ;(/;guoc֯̅{'RFn`Vuq={eF,lSm: lW &ڿlq5M 4woEW+XuN&%%LHij0 Ɉѻ0JJqX =>hz}0Z!j]}[bFF8 MVW ^:Ir&8! Z[d1Mݜ@/jי2Wc0/sd܉:ufN6jùk=sm=ߴ*HGc#06n'^Qu,8p"2<㼍a~W<)S g,ƒG`MLJy"ĥ[RgI+%vm>a{#رjq|b&mχx^[ lۯлmڱh͊mWU?pvx)Qtm)0llWˬ{tM=ǑyV RsBsT#R2MxbDnq( F?t,!bJ$@dmk26V懽}ϸ9;ǥRP>$. -B=J%QV;&kg (G ԰!+0ʸђXGUaہy}83*5{{|Lxn\2bczB+TkĚ6&=3]6⼰{93X@@8qt΃dQP'jIN"+DTzzl Dj{ݻ͋y+*g7潽k=qYϮLbȚg00G/:j'1 F3g$,Ors&J%aF;DZ'xb bdU)sL@U^X%L!!@7l0}<pi5@ZvWcuI$30DA[ muc1=8f4"ìV525 I]9?'hgN|`Ի)ixE%HHcЦ<.@¦tDZ(C\GկsH'i|)l990 0zܔW iMֽ !#mɋ6w̘>;YbOߎ:| ]'3&fp>[[ƕy*O,BwPR6 }؃v+*ez!VjǑ@mqn=khk&Pgʗ`#+08SgmGMLw(p|:ױ*Z`B+马\P쬙lsL:VI_آſerPuֵ>ԛADgDKgXe%]??ޤ7fpn e5cH 5a] 6NJM!^'_%gB#9ܶ͝H棟݁&ܢW ӌ_#Ş*n c$NLD8 T*zwuslhXYkn"8#eP$i ´NF[*=dҥ/aHyR^VB&NbngOdsljq6dC%_a7Eչ^u6z°bsrSݻW=4<Գw|‹̾kgL\?rn 0Mn>_;?\^ADg:DA_.Mhy)"p#z0Mмž2JAom//3V ug>TXdwhA*F98DCC,p j<UY5YɓWu W,W | Q-Ydx0+._Xz+@YFe[A&3}kN䁩t/Ʊ{]El)*ՇAt 7x/pUoi\fqV|>HeRA -8ș8l%NQ]WB#ڞamz~~B@4x-Y{,O7Il,}EOqm}dPy6Pg[DWm[:%\F;4R"ǣF f cp>_\wު;-&qoÇ׷ $\id}%ùȞVl+kٮBRK=j  cj9^㏃'?,N "D}bwR_# mh1܊,bU-RJ߱o`w/> †Nřl?Uik%V $*N%<-!:@IAWwtc!J~,"" iفZ4r %i@~Ņcw0<ɰF* όߊ]c#vN7bx{ZGQfL-fsqSKe}a;ճN E2{&־b7|n0ɷ^Ш\ܚl+4 +H$$^lڒVpܡrbBW(iMmy xvCH6J.q~@'4s*0OR8Ѫ䚦Y|0uo8}k`mۅ 71B3 n#oo@A$JWvN< ]^2m,$b_ k3] 6/8VGR%bC<d_=6 Ap}-GCìэ5" GO{X--CqkK冇UkfLK 9a h בֿW0umNma iYN~ vҏՃΖGAڹJ}RĩHp~kzƖ]֕1đd:. ᭝.i.sKZί&n="~;U=ZEz'ldP`~>)5y1[ X[q ch[_ "}זYvYɚ7-E3fXIS82 G~fuʗJ^j&5bv S|;wx{͑PQ|Y1FPm,Y;brӦ Ud&od#׾i?uc{?C7I-oV-æ7;N)dpnhó -9Dn7rICmdr/qDhnQ5Xe%e';=´{D+]͟ cY _r}I+h0tTW:q{WWW ~M uB μ3a}c8fr3¾J P0Cr[;=Mԏ0 tv`@~V55Mp~y Aݒk3&Wカ"ܚ$. 
̋,Gj>cdzLy2"V:1Tzظj{7F`%N>zЖOvrpDXmκ@߾%/m39zS 6Ͼ46<%%j*a?Q2L芆{Q]_4?FFzJ-8.OK_>IY ϰ{4XRTr2Ss5eh b[`4keG$US,C??i[giá3>M@<<' 4*݌Lڭ(̊`5:Q)9?QAp|L6u{=6iUNOչ(cώbl|ggԽ`@/W/ e[FRBDSZ`Oe e&}Ɂ3Pgs6)LwW׏S%4n^FģN$GNA1lu'+Qpa7ܼXN5Qddf}]Y~yY7w]S}e-PobBdyjzHp6GFH<^)Ni\QSdEPmJ4Z}]~Γ0*0#d𰧕%`G:T)r?3 5%jyKr)_W糲[@(!hq7d3y̓VH l9"0Zb  K'%#:VэN[ʿn74־MiqSWW]d>T 鶬eQ)O[ObɶV srgC^RͳaRV+J"ޒ*Oj@(EQ*($U=TS;j>iYxzs: ,L㚼jŰRLMPz(2{!_M;e?:/Z5OnPV,[G0)ԪQC"Sm!/AM8?}7;^M>iJj{ݦ02,y͚` ߔċSV;khNg@O\5}0Fkw5\}2ZME.W.B%el\Of c%ڮTiQa6k NLQT|Qb*4 P_}Hm%𗺭ˍHNd6jEoWyɫ'ol&wTzk7Aj%: lP4/L !DʲD ԥ|<:*& GaJKnLguE$B H2!6l-ƞ^5\yCV TYFwѠO2cS%mvH qn>M@'" >_QI6 y$|w|-$]ыu_T3ipB0fÕu/#4^̲.ɱMoGuM'@XRwCM뇌φg{ÈR+GvbK*VzjF^pm-LFp(!>J5)]HY[=$( eR'A(y;{U\uKQx0>˙~[.$rrK2G*mTN=I ٶK9Ǵ؇ܾ`ࠥdP8r+zs -^x)igHH'ZyS^[C0e@!gEE}mGPէDcÆ7d~Ptӎ6''oء*]{h zf+nܡL 'i\Zg~21~~¹39Z4t/Wv:R6VAT è$]lG8 Iuy͑;Ff Do!1ӉUl>ÕNeϤj{`&F;ԬJ2JMi>oc*Ʋ&q0WZ*XI,_儌CG30|=Rs QCqA\+;?hڷlsa|>0~[fHjel-D iم,C3 s2hMeܥR/>+(0uP!Jٸ El":!╽:zD"N}u.a2;:XN+8QC4.ʁ}'NVX *ْ# yf7GL3{j1M7ĵH]'Wvd;w@p =+$YjD*Le7LeD?|2 >pՂgӽ!#=PHclWe!  _? ~;P+@؅(D}6_QPt*2mdqa5 W9):^OͶVy=+Վ\ɈC>&)3+^Ci]'mʾZ7׾{ T6f: a67D9ˈ5Gr:PWxvr _JEU۠)\2-3镎՞ڏ8jAR@"c\#e[-.*`q޳wGO|Z'Y8unKVM_a ;7ȽC`-{wBxȻ 6wz!aWlݕk$X&苲Is.S=]PB1 s"Ψ+~n#0&7PHC`&d諒2͐(gGھӓjg)6[bۓD:&,jQ,e(&!.l `p(Oju摩[Ux qe,cG{7+LWUD47* |23̥,;7Ѭ $5&g$ F'{J/5rڊݿb۲CeI*NkfA#5+;C O ZVg-C#u4uX8Rv0؁Vi/7ߪ-ݩk}4Z㓷I517RDsb@BD/&+*2Q]%m{[d mY, y% ,ίSε ؅IU.M$4<;L>jS Բgn| b7I0k<3q{ ``+<(Ћx/0*rK IL'Ճe\462OmmaFQiA]?62 ]$K+7a3بx;,9Z4pRw Pw 2&3PWyナŲpQ'*Q ?#z8 Y%y* YlH60M-aQOq0G}K|aE S3ƾ1jliH"ѐ &1ɀ1`۟j.0tB&E~YOc%M*HLJ9oX!1°哫 Ail֎24?8~Udϔ`q3Śz\xP\|Tb4B@Z8飋{2 Lv[`8 Wr>4*^'/-C}>z dtg7T2N=!dZp7/v\=Y_fXK+,6-ˊhߞ2߭_VyRby$Mq\zUPlt7&r?&TMH.?-$唇 +}^ت~ ts-5* 3@Q?>h3;!nDy͡=ݣB͘YQօ3(:+) /uL4w'뼭eς%E_a_QSmz%gu`=4_bZݘ[ €djs=&rɤeCiLu/}ТrH7) @S""ŗMH&1%0#K`;K`,?EQKUHSk(*ыm;ȥdrZɸ5b熌X /8Pc [||:q]1@п5g-0_ΉloYL*9ja>?hcqyZl|)Ӟ[ad&ٽ3P,H/@ pUF[)Z훍3!ꋐ:CTV *(P\L͑iseA@n5,v:ŐsX5Jyr乢z4muwLJAr-mo l0",'T$tDlqƾ7htHS ިCyW s3wh@>2y]=]G]IW e x~8ëo+Wp+}sR^7|~1o|iǎ]R@w( qRj:s@q^A|0s)X uVЉ9cy-.m%UF#h`K?">I0LeCF%X+Vo]|QACԫblu1RZ&a*\J[][w%+Q# H.sS)r3 m,2ɲ;%Sg.}ޚ5CXdծ=DJ.e=tsvɎ:v%!=D$J{{gV14Wrkw2{Ym%sE\ LA">H%6sHc# ]H r%c~8]v lXV2[kbh|:lRPIy},ʩd_|ɶ#ah5 u k="h&PnMm+VFP)Ѱ2(HhVO6naAϤ #3:A!VuUanc9הGFCVw@dciK 4ɒSeǼwʄ/gI)t"h+@t˚ FkA8z2Ȯ;ԃY~盒5;aќ~M7~p[abz*qغ?A~=4sr Dt;cHs;~`<4츋Ӗ*t~ڟگZ ?a֘|Ȕ 7djjNLEP ze8gܜ'=1yȧ%xV6;t(aJ r!:@Q<<8_^+gUhD"ðQ0sA&zJ\BS~jd 9WTYymXW$MiL)@! ADXo=Hmun{UĻo(UU~aڣ93V,00y+Tջx 4ͫ8YfGY[yZL_2Z%8!+9d}03UU+n FbwHl!B@+5&!VmsRY&1qY տ6[Zӷۋ5xkz$Ps:.?0Roe:$ nm(P(]cΜN3[fgP Wlsm)GnH5yϛjڑU ~N}`Ĕ ^1/@=]ǻ7x4Kt>|pXMr}_ 7 #q2yR?R:/8)UwVMeMٯ f"fZrx'C @;oy5B^cUhN!._'k|ǧ@d~s$uoF\ϕK"*4OuP[ Ʈ+VL @ daa(xŅ`@W9#=Ҧ H打M~Tp9n#.Jr͂(_<k˵X?H3-bĒ0厢z7AQ}{qcÓVNm@;"Xm/aPwl ˔pM1QdQ\]..vbnq#58D bBO8GvqqcKF?#5}N dqURox#;H\>COvg5_pXV~ mڜZ)ccOq j+ h.>U_,+ -Ů5jtj; J;P7W=<wDދoP8]$ 5p†̙= _)%b U3v/= } C5O7A#fxvDnjgcVUm r5\el[ {\[QP,tGGG0H5׶`{Og&xd/O:RWf9OY[<=Hi~d-jki+nu[.kZcB,/Ca'os{,l`j8 g[򑘰Mt@ d<(˚nS>R4럎>uŌW// 0X]2ܳr0Q2oщtڱg'tM "WuqpJ=?Drl/.ڡb$dEM^mg.-a[4`͂P\v͖ q$>fZ͎/0rby9L=Ƙ\|ƼEm#3! 
hqOрo6Jf]?ROLaaב?p$ KOd 4X$s-ɿAwGf%SN :I<>A7IB7)OMA@z$} jSmEuC;qDTi(JBI9wO3U: sBwO [߭'sTB~PSa ImtF߯7!uZj-:zʹ"EӌLo׬qY L^Gp(}&i5,߂p&[a"/f岞V6"U<[:*q(.KĻwt(P-5/Œ{UeNNybF2jzg{2 ykhlOo2 vs,`r 6b[/!7QrRX,H+!W_n0(&*kcKĬD4 %]xu9x!wf@T-q=fI)S&Nu[n+Ff] `/o<0کFöYXV0PʭzȌtYm[,Mr%tSe G'!_ܠ^/9s˽`C9} ɈT {I=kǃ8';$DZS\%,/\~`@I^/|N*B_gM!slŷ*Nݖ肖Kf*@PJ[oGէf[N౹f'/$ʞ0TPy!{G,ՏoZp3=f<;vzTsd~_:a=%?JZJϥ˶E:*~|I?O|u'Kn}.}7iμ)-;P% d%qX2Ⱥ a׸`<*b)xDbw"n+B~rʐiȗW=V\"NJs/q-%'͑<[^⇭j2 z}ܥ_ٖN)`8Z8s 902 QfueŎ=A#nTU8oʬ߻6F!vY7kuW;}K.d,_5sJmPf-ΜQڸ*kD9%ֆRRnw_צI8>oU}SnYmyjnFb[YՄS{Qˍc&zG1qm{ކa%]|恆,*=Bt`u{zs;RLk>0%ν> "bq驶HVP}ƀ nifr:g>a(0׵3&TH-5aa,ݷz 8Q$yXF+@8Sݨ2d9'ӳt65B?ۢrpu5|l)z}lm. ˮ-8Y@q!05L&NV0U=kS6xMX D6p؟O= l'u87Hu7N+ҸGS^Ҕ?uݼ`27XCET-F$ʣgL )ٞxC&XØ Mb;pMu^W[NN~T7"Xy3tO:%Й21D֍s@@ ʭr c-tqTf#B|fcǣeSQҹmiJ:1b]@dž^LQ30N[x KZ/jh7Q; oM}m1c@^+E]&яpyc/LfI<Д7^펞TR˖?kHZ ]i`#:w~z3_ښR)8ЛW%$l1(??gT,>mlU+VʚMa8LU9(ˏ UJOK!n,qwK|Ok +WJ'f$o Eb}B.SBNp2r%=H۰-pEݧ5~ A1^iDfv7^'&-ι}#%>n5z<_} {|4fa O]_"gsaҷÞAdܻMA'Ʊ{kºyd? tmheZM\,6.s^+Qoen/A[ rXEF< Nc!2Qtpۥaov5jWg'Rmdu͕ht\U,ZIu3V0wb(vl' !^ Y"0 irAr-Gjy^)jG . pF ѭ5QD~!Y+ǹWB5`F׻l9j_P`5_PჇe ai} H:VinU͒O-b.$/#u$TK+@VKT{sPrP砦rPE:I!5=3Eאeje("'U4wr^݆O 9>f69DE#s@ؽȋ_xcdcfah ĕڸmMeC7X\l^ iJYDoljU\}= C!{$[yH5 lUT?l6r*EeV),?\] X] E `?u\|n_ѽ:;%lb< 0UAx1]"U-7vҿA>">ϣaא01Mp&bqlʃ~&űφ1we?S7G0}M ]m:4SUNM E-R-Sٞ_Qw۷޷}apʀ5\ ͼt :!'o Sֿ}r|MDڞ#?>N˜KJd1ȤpMajKpӎ ;=Z+=-+ 1Y^xǔՃT37 65$cبUaUbx%h#ISQIߺa˰FVND"$]BN /ѨV/Ǹx1Э` 0ѺSp#&I'~`%%UP|c|jo^k+e؈1(UI\pfm(.D. UC}2:rN=m{;}[yAb*[3JO6Z&D[0e*ŴC%@Dl@Q}:GɈ;A]>'׻V5Q5lKGTZ҂ؽk$ i-Jv m@ZϛY`'L4yZHLΗSjFl+qFxb|j_?HkȲF!=A~vhzsdȑ"RW& "Q@˶m4_g<ymAz$DXתT$^^'r2+(xu{"s٧0#6Q0QaQ#aR58?p f;&_/0<37k-`"J^(j.0"*X( (~=7`r=<njb/7^;s1Tx>ZN?-FTrJL6UQ;cfcm} R}ݰ B-[l؆ϋ`ک)֢&usHl'_/sG2_,-|`eRE#._GZo0°m[,)7k16*.+.ۺoq  (W-3+ +xܝeM*qZKm,;QkMQVX~ģ8h~ʼ+5jC19NQ9yR;]{tu:"M?f`QH)yQv:I8x,nlv۔oy~a٘g.dب$*CPSl\YWSlS|5t"x3pe|ƇK*Re^c,qD^IA,"&9kH?P0_|2tG:'8<; U )oˤޞ'i6L :׳*f#KN$>1 I#Yoz#lwόOxٯtwL}1[S ,JsA6IʷA6`6 Ex2|&2MXL 鷋'$#4B6&hV}{u_#-06MLdQ?5XN]:u46qur~ &g#T`;P vep,%5OW^S+$nLy ů T/WR¬8V%:sم_4橚 _; n>~(t IkbeNy}Th` Ẹ5mŁnvCQI1W3CEgIFGu,/qMs! eS0$SL;,;I>@l&)jLX5!@&yXa( Nx >c ӵ$(iW c+]"bŤCY OH`#O6Tc }v:JS%d8+* ۊs{ DNmF:k.iC&F֖u,mz&u橨lk%u^Sq)}9 ?*LV9:~+WᤅD^73(t5hS ud/?iv}[n0;y*7S{*Ȣ%WGr hX&呚VOr~ILL k0a\`=bhX%1JsH'\fobk>1ȺABVGcc>"2,YuVY!m*(՞/xʷtٯٽ;> sf}9LWroo'y0*ebdBBBAu%-8uyiΉ-A_^qwK б?Ut[v4Kn_DS觖wh/>< e^nE_B`ȩdkHJk] MX=Q\Ug*c Ԩj1[J6` ë[nlGhT_ל|ZZ2ΩĄ339#)ftg]? j,JM-3Ɛr*ۃ0˲F.pI!AXI9 ;Fam/X菹ʉKhcl!TU7/IEfi/<. ڙwVU_d!R_ks 执 &;CPc )qPa]Xb]C+qN55z B>-ӆoz#pY{&`xñݶs\Z `̏&M?DŽ"3Ǡg횢tޗ97CyƚuK 0YԘ[Ռ4ͭ*qPВ% dc+Ӗfu{MWݖƒ&\ak8v*]e{[6J\W67ms!un?e{51@B(-HR;q@J3CWy[:%w'b$WDB+P45m(#&A90F32fq19_)Cڇ1+nJ=74 {aTP1Rg6I|cO}'h QgUlo4- u;_=ʹd2*[’ Nw6ߵio&!Ӡ6̤S;fb~sy*0_}Sjn،<b]̩bN{{͟S(.~7fxc_C|muJbh ۇԉ=|EFZo*ךu&J =_lXˉ6܎'6hwU%Hy݌Ww"c5Bx>۹/I0Q8}C0 2V»<ffn?E_ϴ;zmiG |6!W|ڪۋ&k"c3^4$ L'tpm*WH  Dqs2]-:ZӮ^~tn|ܽ׈V/ ϱy7c4!s%O ]AyHoh{x)?J3B{R+ܜ(E5R(%]N r%;߰u w0P=kBvHwJ'GDbi$aE;hI,بD IZ>J"S{FGTŷNa/-P CS:hͅIk]Z9,+ qN&0C!,fnM(~t['%v8bMP+ 0mM&h3VAңJ& ; ,')˻OkXn!_P%@ !&K; ]p80c՛BBMV`ޣRM~=yhLѤ@;6m[Ur2}ELhTn<<0]`ɐ7\ET`&Ml3>k mTvPhm{xME8sxccup*qS~őKyCv//*864:bpq;޳2` Ofo/X6zb'yTobJY cI( .K) t!v\\|t@tjN#0t;n^c榴q% rɅYڨ~{qc^|-5u8r>UtϴW6$4^Q8egPjXqm5UkPTauxɊpxFnX>*:ѽb xLNf>؝M}yCRnjrca,#.MߍnJ.'>8`VVu+պ] 9^PadƤ[ 劦C|{ O子84\TO!@TԿbve:5ixz]蜬RFӠܦN_ oϜ;{ )# i4? 
dipy-1.11.0/dipy/data/files/EuDX_small_25.trk000066400000000000000000000076101476546756600206020ustar00rootroot00000000000000[binary TrackVis (.trk) tractography data omitted]
.^_vGmx;`LT@JK6٠ a1֗[goFdΩE\Oֿ, (wXE>p{KY%mt:ٞb|bZVU ?T~9Ɖ⬢Eoxb " |y"2I)jnYRq+.= }"gD$h,.zȖf^[*!C*}%!5d'hhVthi_+GxC'JB|D4 Ѕ%$_bmXq$ sJKYuSyxT>iS5bt֟ D]:t;eN2_17@VZNmޮ̞j~"JY1M?&DreΞͿ"VM^J\_[+ʫ`ñD*q_9Ao#8&6:sHUJA+|M$K37rhՕȩD&vNy;M ӁO-az]ܨaxvR?gw@^S ɴqa ^uӽ#;#gۜRBzO8?N 2Vbrzcc0x/ b-z|ʗK z:9]pZ.M'W,n,3=nFMt*}efzV\ݱ#[bFf7K+>0t/;+VP8̕%I~A!֝DZCC2>7ypQ^t+CZQǧBQݙE`;[?sXrKUvg%/Nzi lT|u@~ 7w|$x.HdL|:Gg HO)N"hF{enfAkHf6.<ͱWTh7>:Vt;ǯ bT|Yy5ɇ$0x%b]D5`qq0ãJ 5nAJ^n*/-r'֘NJȫXc{ķ0d0 vLU8-<'ya*p;-9/aa5co j76UqY DD^k#&8+A=`}veGy QcYiS"4C7GgR 3X;O1;^k' lpaoÄ2lRGl-8nkM<5H͓`D9KxڜeTe3KU.MDb A <ܼT\ff-1 ͏dy@l.OAsXWȍR=-ů8 /@>qkмJK쨬R HF(oAZ>*ҍK߁\ESvY>,og%e+V bq2z{pҘg? %<+j(.汥ykc-H/Zq#1N^k67Q@{FE^=UJ\F/zY%",4o%'cW%Qp۞D; 98MgjQ)xnV/%(''9(&n -r6'?sS5z;DS0 ;9hԾB^OI&.A;s;f MDN}>#`/b;<[u!iTʳxM$']\./IHB4vv>}D7ZX}jF^1=Q,##1.Z;*'`!6B'&OO]vVfYоힳ y26^Aaj(T|d1Vf Wg\3yK{Oyq${Y|#d1 HJu܁^KypoDX'h[{~,+6KLT&p2:;A>FuURy.?|* FFNI&Xܲ 0߂񐋴[3>&s`P`9W&O<{n(#YGM.G̝``?dH23f|R|//- c`|ÆN ^B)4OUl5~耥m%?-,m EqL2YbnQPj0v*6 =Aa,UɅ* Xvcϔ[JY kDqZ S<|AxDRg% Ih qf0g^`wƏJ {N"|€ɂQa,{"diYq'1@F O`̅+>R hOa?(^R}0Zc36v.bǏM5`:'8 ~X OE#H t] [%rHȤxsXw0%;7b%΄)eYȶ]1|ČH/,xQ;q0ŋjcH@* L\f>Xdu;E` b(r|E!IdYXRh 2 DT@U|R*^hpGQbIe?͉/gejd &*#-vLZ}>Ȯ'EbtȀtܐtqzy7,XX9)a(:AlkE)abҘz6"R Ya6^g(*+Pr:8 |gfމWxhTQP )sQ^i`5~[pT2F\jKi`:whV~;\yb,bJIʌczFT:O~G13[QqʚA/^AbGJFƹDfWA܈3%+^-'SbX8UtfB- n==D-}Ǥ'tGӗfB^'{S` kjgJ\^|U%B&I7sK$X r|,/ Q?Ng|Cx(;a~hbFB |Q ? `i M"`S6MފA"@WG?**@ob~>K&z׼  WfKKFĄΰQE6M2Wl_2{ iH{jNF WY=|h{cK&.{j Ԭ#G_T'/EEd7xYeg=*]3V3{knn+%0w,CL"%ã3`(S0:>!7FCT!5xIx&6\t\|t^FQ|elgF`F:b6w'N~g1^g!3%N>wtn s bM~E2J1 ^ AK~S,b/생b i$JS*aL-5&vlPb0Zw35zӃ|W#~y-_МZ`'{ԏ˧Y[ֈ"5/K?|U]%˻{EX b1Sݰ%xꝼ#343kvm##[ȶxWGBƢ87'TV$< XoGgflp{xZȪoVҼmy>&,[ٓQH/_َU\fA6ߊkeD K<ϼ /B 1dҕ(䉝Ze9Ѵ=+cX˕9tXspp 0!V-3# <]{Ve=4xng:RT3ȂZ¼buV׮U+(ȒIu#{Xz]W_)U|FI27W6@93^=ܾ^{^NBޜn/7dfȓ}ґtv{qB@ۨBT=' 71 3Ey~^g4xr ҅{*Aiwҵ1&c`uy$͙ mێi6=Ƿ㚙i[O`_99XQ2h(n=dKpD13gDqJΟҋz@28a;$+h0dVDH ku{`ڜ[9ݎ9+j+rW5邹);#cyftx:0wޘsMF B+7I~P[>騭e2AN(.`-'I2g9kSWqVQFL[<! v 䗊A/0s0>۪#ɟJo6OlQtMܜsm9Fi>~$Gf;IEe՜tgU Dr@)y)ِr 3/*84cgDqU[L1WV[W$I%#ꉾ@BXdfapCvܰEy%Đ=:͟k&5MtA4'H`啕Vp\[_Lkk^^0׻[ͳSzm:4RJD!#C߼x WA 78!.ap^laXՂ3P]u52/ݖ;nޟVtD<2mq>`Y>VLWM)U9Y-,`J)8L*#V?)OT(ϯu6DEHtGHs*EhI}wVE{Kg rvf/'dWկo"=ͬ9;]ޔGzKX+Vbj32e"A]Z~sDvHHL`<=w'c7*QđTtgN /`7%~tk-ʬַ_k~rsrymSMGC{;6E65kna;dQIg)~o"k̳XA@~vndo]_P]g`D %/u$BeJ66/üwuOpW9 KDFR:#t waf2& ?:}v^ ;>vOo/66f7P'xDJWduJߏqtZd-ߒ>̬VJœneC0{ ΋` 7qs/Oin^Xc =;?X_~< }/{O:}-3Z}ݐ;lޑ Y/xu5%t:>׍:>)/<82HpBFlX# r3Uy11/A0ih-J_Ck:X̊;b͆gS2%q rі󺢾=~C ȿ'G`|vG~V5m2>'ǧeTK" DjՂrS"kğLsJly.*8! 
OKYN} ߲Ż}zN܃rM!-7^{p}U.yQk|U^qS=k}i 9ym~~t$t)98`v]݃ufzZW&g _& \ #nr,CddlY9!>9.Lno~=hnYn?|([UxFvq}z^oI߆!Uk^Ӭo?gWS7_HOuzi,0Cc5ѮjW/7H{j$h9.C*QRg5OYWIYx/ZODebǼ2Qckx-fnɖ[y6֭I_Ú4?n| w&?7־~vs@dctz^xL_Mww7MmOY˚k 6i{jNGԇO㵉AZɍAQRoM3w=ܒ8a3QIy踏S/0#^Nx}%F?oZ/R,R HH!ظo$)'i6|gud`/׿8/nK mf?x8'Bi],XY=pGߴonNi6žWcϋ) ӄѱ\ <ʜl0q pYlTiLgSR7hz=XX׼=ݫkvHUчc\nn'ݏ~CtAGvRVnXc)ӧm.hY>X:&!9eU$+2: iu{GNYhx}}X %+`\ l 'q5W`3'{Թpf[s_nZ߲v2^{^tdC}k~mfO WnM}:>vE{ydȄ lF>@Ԇ.#G{1|5wPy(*.]M.0j+9LoSZ\ljж㳂x[#oş {wv63V_6t=KV}(Ow[::7w]vV6~FFnʸd!:OhɈPiR_N*,J;kFyyc|AMkg/sV{Xg\{l{n|~{xhg?g`sCtyݸ8^O5N_6/ge]4)4ڞ6G OJr0^<ѩS;_jXw)]FuXOPCS2*[8OHX/t?t`]yy~jp>ߪ{{6=QڞW'ŽvߛUv3;]ػOy~z >kbQOuѦ.W}d84ښL@ kq?lRNf|)dUx|X{|YdLb;"_loKmղ^>'\sHݶ{񗞈 -Y`J_}dH3?~ äƛ^򓦪w.O$DxVHCZa8Fɇ=g)p&5b*Eu԰qrs!xN~y{Kܦ&7)1a\[ov|m喯Goyc=n}Csmw(7!sI@\555h,̍kFQCc9c _SKvO\K &KPj$T4K*Mِ5wNm[696m瞙hXq`aadƿ_:kQ2Iݛ%bdIJU/i}auMY-+*^ '+._닃tzWvi;) yEPGf*l֌wqFvUNCEqVĺ7ޖK%mӝ{Coywyp*Ւ &2r}t~%ñOcԧƝt&ؙYu8 V.1u%֣L hF^A\p8|Փיhο?6祖e/qub=lkr'֋"+{˃=Q匓̥~qEAu_:;77sÂood~{.Ƭب{gۂg=E3Ve=mչ!͐S63W7l 99؞/8٭\}BugSc>?T\1;Dy폸JAQ yfBvMߨ/[<ΨW(nt [o/Ś)H}KvsK.+X%!ZjN`vI>\XE'6RsBf]'Doz/ioiu뼫}!;WvO q쟈Ԝѽ0\z,O 82oɔ/yyPԩ8' +;57n¦ȮΜ/+]β-fi`Gǝ5<{f ί&晝?%+8';(?"rS0wX+;+_>KnG-ϸֆ-Ç9_ ^ݺU8!1; yܠd_Ʊ6;3qp1읩ye3%w>aE:.:>xgV7;v.h?/ ev;!XpHXhG]{AΙoenkjϤ^ KR EEKXXN77Cf#&"cT:\`{CPy0=Z8vx6 kOO3 ꌼwcsW{K?G7D> czVVsB-a/7^ͽeۉ~<߾g;';r|b{HUg='b jhtzOWf3볘248,ps5==*;q_^KI>91 O^]^M MsGr9.klc9yy?&c;5s&pdnV_ wNOFN0X; 3 2RNwd挮 Rjzvcc]wC{|O,ɼ0# *9߁PΙՓ'tַ+"3kˍͭ]5kO95W5onNI M#6 |ƹ5²D/.k%FwfF~.-U %F_{Vdσk/Ues[|u aϸh+TxA}R?z=_W9]Da\+%ѠKW']T?nnJ#a'|wEohy@tY 1_'rrDΚA8.aey#/OʁU#Sq`bn٭nϟtΑ{[{;7/v)Uulu9-\^XR]Pgj7am +X]8ԨE.h5&ö _W  ^Sb|`zY10>Աۃ{Dԯ5WωwxYs|{q՝*+Uu˝ o_߱H$g 6k[Z92`_Voow|~M"jb9n  {Qfqd>QR+dlĖ:Ͳ4:uv}Ow6 |^=&_'nD88<9[m|/잟x6whk.?sgU}CY$.4!lzWQ'^CMp@T~ UOFw*{r:"b< 2DDDIuÁȃH gŝg[2_Z=]z{^ ߇. {uqx|huB=DowuI.GF~ߺ݂XscYHՂ]ݶD(͈qvg"UeQ OzR't[#pM*}q?'adUU~U;%A3qg{{/ėm?_?gwGf{+ﵹ#v;;_S~{5kzho~LoSoTS:/૓ ۣu?Fpeϊݾfu&\`wXPGnuC Y tc9S.a;:}WskYR /Wx33o?GyPuKlfjޮé{<(Twtxu~m:2jWeL$`m,j@~̃3jH:0W(.aU1 _jK`.KrcCz[^ W,^űTݜV$[?vbmw"{GSͧtOl u< 8pZ2Kq Q(kAƍі̝Zʊ^-خhp5j|3ݷZN9M:7-Rl5ْu88s}"&a(m -H>i^L ߚ䐦uO O1q` Wg^r^feV=/@4ǩy}վVS4^$+ɲ87e?y2D;ĒYVl* &ʽ_^?svAQGcd{w+"[?N_A'ffzZbȓ)|~Ӷ]cWE>aEO&K^ԐiikAQ,fߞcN;is!{FB02Xί o^{* $1 S+ۏ$7qx7L d'dd볗#K?9-~G4g0wHS7q]|^yH "ƥt:P*_OVa3j`iܘ?]p0}*܁ؓpO0anlۿwe[%,p58(SלSIۆ5LDt'lv_).+m&2Sw?$>9QzghdU=FRoꝔX5$6`;;ˏċy}Mlecj$hДT>l72ݳ*Us65_Ϙ/easdiԶƈb0>+A=g3`%%"S]ĎY%oDe/g7tfuc~N.Xܺ29'Ɖ{'=ٲ`He͑{~/o~Z?+((9t';|4; !w#%JNx(lA `O('$! -dM9Ssbw]gA9G4x0vF"K;%pd؟5fZO>wUn =I+gr^x.,[)`mT }K}LDzX`X̚RskÙzoYP,-m^k⾺3Y JC{R,[LtsO `Ta|UsYuDKFQXF{Ƈq깮cϊ8SfLZd< q5\OVr왞[NT5X bJ͋LO';'=f||Ū@ҧ5%uH`^$~׹%u+3[X6)S⻓o /ܷjޝ=whT!gg3>E ~P:qg"o^\j-sbd2㣅"N*A@#Xz 켆v# !$0+_e 2__ _O7؁#L{ǤqA?{ބ MNlNTüwKoy/in 80=bU +\lސ/#ۆ,μV^~mc.hZ X RGE9]yaqVg Gf n6>V}#6uD5Kd[]՗'v·O7p⛛;V䊰y џ I_SUoj6׷6o߃E?ǙR ȮhxfjE< c?+Usbg,Ǘ`n5O9YG57&tHj[vm׎M맪ͯ߫NM>#|fκsػ8e௝,D.>;{ 2*?Ϋj`yhЙ}\V\̉zDNF'O""=S_(;fHjl EY!Te?kӣ#%}+?y={Wƞ}bK~R?, . pJD¾^ݎmoJ.7'pEsmp杩{uwܨJ/6Qk5mVgxi~;!#-rݎߊ,Is>) /ӝ\]DT-~$b '})9y9{jbGד0п5Kxj'/q׌vCf6Ӛٹ|+2#YjeNW9$%Wcw:Z.fBh f~`n=vOםZ˺;o 94O;_٬Ɏ,ɓ1/g!DWBgմaUj'ĻUKm2zLN=/4Z^ԓTO BY;Y{uN%'Bl^{xf867m_ʰ_uӽa࠱7 |s˛R5rnړ: o&v.yS\p*WVeKyfNaDlWZڌ%sr~/OdNt $[4cmI%SSzSon={_#nW 4tjK[|{cmYӾc^wusΑK;K륱7S?ENv,+`u+N<ka~tquϼ'xS6{kVঌb?\f&-Ld>{dKV܅9"MOE3WG:{һݻ@gss~D [u2=鵿XXROrU2肼gj Y? 
bVGzFG/=g}6+Q2E,7zB* ev{D"sVd^Q!.̪lɸd.tnHDfS|Zw``pڻs^̫A{:o@U4[.ǧjfqȏ;zo_v}n PPFB *s3}䜏pYlKkS8-hɁح̿x'{e?6"VscA&|4=9qS|_jnR>ƖelԞy؅?vhFPNzhNgDJ[+)vj+f A: VAF \ӤNL.ʌe$H09K$.',xZ+|PMgp5l=ko7D?/]}#oT"}O̓Hӽfůzf:##K9_mcH'#/TiAz~yAb:<U9Cj@$`f,/N~*),[ r%:eEݱZjyGUS~,3I-HV+sVY(Oڬ' ^T)S'SSIYU+g^^m̍NWk~d,z؄1 *r("h1`6Yg dV*^0SV@# Vtc nЕeɊ3šX2CޘHl0{R]r>MS9H84+O셻 #:iD%Lܼ#',+0~1e +^k<S-F^Pv|b1b}D~ @6N,A̜/HBqC8YdqɝC'e~cV6n$E?wLTcN$ai~xt  byh2=vB(/]IS72އX_:i_f8äA E^<X&ĶFoZF ʜqs;u_)%AV3bx{rH-9D;H~eQbs44 (U+6TI hV iL9*EFK  LFx8M4oKjO.g,c; <4vdH_%Yl@GR'E`rAPYؗ$}h;Ki΋}fXInmΩ10r4J.57-S`kk/i:#6W*Aɫz td{xD[0|+ ^bąJ?vzf=OLZnZn?bL_WzzxO72ђ%gT~͟?l}x>,ʲ8Ktc eT';1DAݨP$Vo x{%dMIIJQBX_ I'ᾬє49Y<a^W.hK@PZUbj ܱR+&#KxttclU _~;8L'd}iv`$ZbukE]m3tRVS蹃A9ÈYcz3N`]>7GN W"#}f.ʩ.߳Dta~lЏp;gpX^O6adJyNcZlcn.D!]*M{%3- >713%BI1]SC2k^rYK*5 ddq;HZt7,}0JN˭jaVCm]aptFzygĽV\؉GNTtl' U&:W1]O2SllyOw I;x*|Bsq/:2S,2Y+5ɫ9v|ɂi"I9?2l 1YkLh尀϶d 0I*DfmnP:_to.3MhYCd"x& $[|/|>X9R^@!b{*qܼ-~y`-4 >X܂%NY ^2:S,7rʒ.J>`?OޘW/(s'xc^KZM6;k1zwg)x'Y)So1Nxd "XBdZ":?fNa8;:+vkL^F~Ȁ gQ6=IÐN]X&q ʲQm99.#6}:zѷ&coKo;?(bK˪>쌅1D qzM0#19iX$.HuaDY⭕xuK^ߒŧ6WȰ;&nBVϙf'/.d_kPVTs0?K^2E[1s#YJ4-~{1{s]:X4{d5J li{`!Om4K M m1 )%xPwEp=$ &B#B ȯS#NxeȈC=5ytRJIO+ x$ݥ3dfex ]e1FQ28pGT̝\+y8[H{h 69:OJ+eou΢#E+ot٠y=>f'|g,xh_`8 Cֈ!xܼfc$> iy5 |hJM̗Vy.V0B;V܎@6>+[TccfγyXk\V\SOnS)X-Ϥab}e/K<ͳ'|P͉"yp!叵ea9*Hԧ[ǒ`y81d-87tVˌ w=4f`ʅegf7ޤB-%WnpĸEkx욘YAO3hʪQqŒ1záy(m3Q@KTHﱗ'9嚝rO6݇--t#١ _5-^b/Yse^HnbD/'={]"X3Q{CDWe\uQd-Ӈ31V"j}bl-lVi4u^Ur vG )"8Hy-c aٳa㜅gC`WA voTbN\ɭ9e^]hc86%RaK&BB.  a/gQdI SE*>#[:`{qvb?CHv[x:aEJ{g;BbeY` k/1̷ v {+ ziy']br1uI~8(l[Ñ VQx;6j6[O,<>VnCx]D> 2r@0DJ =יJZ&E   #J&KV1H 3-"4Nah"QY~Gda-éI  Ӕm5-ՒɓgP-ʧ\ ۇiYIjaui8e>qtTrؘ|O6Rd0881m?`FMR%hyqhZէ#Q23ZcvSѲOQ>,)y\A!F$R{"pS*qU;[* yh)\cA@l׌Jq kީ,g_Ǻ7K KFGIX{,K|3^WKTV ~jOpbZ,MDH8fqx-}QY^^a?['ʩJ؂\i vKB]XlLbuAVqsz3db)֐Uv/BQ-FZC'fJ䐖*ƥʀ>JvV V^=V}Aơk ܄*>>;^8$+=&= ]w)>5n [?\_^ ;->f?c>-^(O-ET7tY$ CH&3SQEyqeݖEH9N: h Rt6u|b%¾/'mK)Qh?•"*>e.=YHڲ%W O%X5mMө3SX\~$\Y8&ڰ()UN֔* :^"o}l4,~:X  k'.I#Ox"@^JgX>okt6"%4얈'$]tX &X"R&ұCDAN0A|~|OeKj bO?91݋?:7>|8/y Yg\X+^[1*EpZ s""nO~eɪ8(`!| eل؁UA9Yc]R _x b"ty,ÒdL=nBǺcrE%a]aN@.\ghE 6lC>3O\ؾiًr#RЈ49N6gGg #"ssRȲ`/,\װ2D Z?a.y5[ƜKLA[%ZɆ@f|( Rlb/yZq\@UE9qMӬ)~ykӧd0^gʞ -|Jdg~>m^Wr=V٢7%01-#@.q>/OeVO> r@2fܦu1Q?VOX o0ZѮW[TYI|9$0%+4+όX z~t_:w.*,+";qiFPO$dT+.8 )d9؛⣲N1~Gb3Z?ҊI+rmkƽO vC=9=w1mIүvssa\=;}nό|!As!_((&uT>k(#kn7*1*H];Y'AE#;b(<I7h>;Y/<&R/':N8_97#坑,2Chnϯnk~?9XNy=sfG[D__pzhʌL=޶WFNnbk(i(Q|{O"N*G|7 %F/#)pg̪n%8}`ۚ)4<8+98zeV]L똬CeKnowFege@ȀhK},Y˔`>wRQRi$jw_uZu`VUW>gszmiPQ*66Dc//}5vcb%1J#/0ι瞽*[k*}^?8u?K@ւX-'OwFz=n~U|e@Eňk@#YuUsm̅~*t=l W9E Rt!t>*el'\A]?;/Y9Csb߼/Ldf{akO%؅=#yz#OTu@U3/WcEb*='YjEky~Ge?Λ8ǘۨIqh}aVsDfNdj="z2Nۤ? 
wq2#@ogϟ徉W'{ze8ю ɦКu^眤>]FŸ37qyɳZFI57Ƒu6[?#$Sav8yJ#dV Mܩ+DEˠzӟ;SSA/9)}6ےWð`VF3.>qnJ:"}ӳc1nt;wDXA9k"??eO}䊟Nm*Q'sYm$cD.+΃s΅CR-&NoIqXYۧkE"KNWi;dz% Y5gΪ}&zh*0=t9({1;"K|;O:ڟYgGy WJ籛u~FE:>O \"-hV)7ۘ 깧XOT!e#5jVy #Y]b^ +C;~/ z=NcBQ-'xV}Mo DZ~!nTŔkE^OoW#>Q1!J9͎:Oj\^mio X/So6'пz(3R B78w:a6hf q}Y@5o&vspDLF ۹(7&.fՋ'k16Ic#/sOkf"*}gȵr""<O4-.Ŗ3'b/7;'p: Y8t=տNbN{=ӵFws|W러Rwˬ箫{D1u'$:=μudF`FbIoAw`.ZOI+]!1|Rz]ame4#f K*9#;;dcofmhOktqV,YLj'LjfـН?n?FrSZ rƴG\!9e6  8οӱ )GbI_迵JXW |R-yc}yc+Sh۔vHkƍrb"Ċ~}S3-$c4hg>szA{2`$gVoM;SN?{[ }֟NOQlE"whsFߊJ/ͫ.wR61{NU N?`,qՎUr$JߵE},d[k4m:OYܗȿ^WpБ@3XVc;"FߧYG~0سYxzϳ+۰Ia)ߤOxJϋ8ٛgEax+!rƄT9ؕkWsVh9i(7o!%avvY2dݺWf[mucwX_YD߰i)OC9eNT;_y;K,+9ݑ80td!eNNYȨō!ӹfWÃev&~9?O Lj̴9(W?2?qwF{9QBB@G1cCƜcdFKաyagZ۵n ~[/B3 xO8<[?X<>؍oəxp#aEN~L!,}v Kg1i5:X`ݓeшL p]nu11B5!yJuϪoTbd69k+Nq[3{Ԣkc{36u(XeCk;!|ւzۃmcxA_~>!;-{ 9x7,Wvd6ϲÑ1؁6hmax9ٚpb8"+gv"2E\yDuWy13jezFv%TlRV(CYϝjc:d*R~[a;1Ɍ $wp!Hxle-~t}L™"#SAg~{]h7f+yyR w3S y;ƻr濕~c)K ycR':;mSx-9˶9?_HDz6XyáZbg/B[TnS`ooJE~^CzON'.*+?$؏8 xȖ+_Vg% iF5V_[:3>+S"y%ŬE=]ys*? .ebjr6z]mqV٩Ag6֮;{+6ð=fq 66@tW݋4!H?̭ا=6V<9oq %JYia hdD,291h}xV"em]c̺ee:ԑXaݑs1X|O_1| IgS]cgҦywc0WCAC'GuQшѓ?[*\Kt|JXe Pخ]LʪHN.31ڷ:fe f0';E90lRF2u`^#Y.fLD'}WlErbuX=G5aSq~Wس/ !{CpY7/} _2 -F}zm6ۅ܄" |j+rc,fU/jCѭ[du/";{уQU>aq]A^>k G?/Dr'nu X׃O˗qoX~hd~9 jNz֡H@;'b> {'@?rby>N3NϫXlŜMATdd.'J̯ `|3#D"&-G?f Po .C?n `z EsaSCҏl\x&5`݃ 6mpnIKٓ3ߚarr l|pf9@FrFk>AqŸ=Lb4rg2Ć1Cnl?23 ìgyc1c Όg.'BWP_5X` EXڔO ׇYK_b87A%hf/r*Z-R1qX|6lDv^/\"?|e#^h|Y0iO;g ^uEw̲AhQl ;.ca?'U_@*&2ZEْu~VwRF(5 k=y53 _=CAF,G:&p>ͺŞu%Z71 pl~~ov)yɕׇ7ªŢ|9xK~wfgssRJ Ŗ죬{x;/lFWphpypg f=-H*v>믬zbF3Ӊ;a7v\;P #;KC'p1܁Jzp;PzaXt_zs<{ k~\6-[ ~}@XFw1qUs3Hc=٣8O?Ȃ[ǻNbݻ1kVvBG|NncS' b+k?'1#'{Jf:tw҇䅾6/ O`F-஻aKā#9+2EYC7BrF>Ff͵ϑỲÝ< hRFl vD(KDGM)(t}0"d4uϛ-/O/w͚J(ZArF8wfyuj9b+F&'$[ʪ:hfeV7xAdEhP5#Mw=laF -_$  ezl CD6zc^HJO텾Rxvsy ͸WfehGpXɹrQy10r|SzLpzQ2=o yVXqk`>IZMЬSR囋e-gK,_Â~?9LO'N3U؄dbo+}r+5pj)P3D"O18/΃r.#9QFzL KLAG׆'XYgdD O$_dʳ, ̟;T'of[%gb58`a0V' `aTE=׊̿NʿMVYcXWO |fXs-cu`u~C4rP)|&r9~gu$)Kg,e[legcweq}!KgJbIv %Cw7[XNxI%lɱ+ҧM@!uҌųƨ 3ky_ь_Hn/&я {?zacd́눀֑{NvHZ,%dx=ɷ 1.?oyCqz8O:1{xT̽ Jꏤռ0ק7䣊~=UdDw}O뮝nFF Sk ?E"Ey1_$'%:q{l߇T!qbܗHl3viImZx2ibUl5Lz{0fX"1ػp22ްeKdЏ+c,XYd@؍NT|ԫ\OB .ۈ[ӹ۰vX /ZrM6{G_J s+v M瓝(tT+ߙjhr!fKdz0R\I{hnj yOٝjƪ/ _-̚h%̣h75XP`x+XSa6g`,3@l{+p]$ʿ$7e;Na(V0tuC=8w/<@-s7=ֳ-3ёY.-M#0 <22ΜQkdaf䰵ꮰ2_j7o-/9sc"q+gQqab}TQу2z< 4 > %p93CcOlj`?py>\sN:aFǂeVG´L@sDwڪcuTC3q}{d\wُi>keck#57H߉hl;h d Z(=F3"b-ќ^ΦD1Ph.3wcrY])˦C[>IPfMy|=+[up˩d"~X]zc"߱Re'>JEY0 ZeFbק/Ľ1VoFK'Z6ײ=Do lO!ZSm}Lc d)穌l.06 D?4\pR*>`mz-#5xX vhcr/M[(&st'MvDxz;t`AŏObMIza02+ [d5Mze_櫙O~HYJl- 㲛8Or"#iaiIT3~])wݵ\>a݋c1{[ Sf|ȑmYʇ\|VPVEzܚ,".Hy}~Egrҍhm_z:=u % *ߊW;Id<(-+N_ X1茽47}_[XڰbB:ث#{Oge]mퟯn*ZZlX^)7#? Z*ӏ*Aq*;ψ;<θZ4g8_ҽHg𑓩 &ېqlL%׾?]^~7U0O>Ⱦ)Ÿ+D$d_SFnHmXV=x_.kHWeM}ј:R1z$.Mj ėyD,d_c eᵬ_~ )8z4ޟg\+nR8)}:х_0}~]~=<{4-PfZ:(6kڼ\UyBmsȎWƼZf,|r Oujl~_rn24}.;Lfd2Vz9l$fpD[Jn(f|ұ;QCC>ޝ 0qYrCV2ey\jMZ.@-eg_&g6A7ckI$o8vqx1.;}fGԈ02&tg=! 
;ee MWmƔ] ZیzZ80y'bN w柅|L ,/'קb}m8Ȧe">ZXAcһA2V6Wc_m_#+\gߒcG?pW3hlQ__9 ~nD??|wȏ!FjF~6 ߇7 LK-6Zm$\vZk]'=zDDf^uA+WPˡ9淔s_r"WYKfEˡ#TgnC:]ݕAۙiu79iή _懕-h׍o?ro?Kbwo‹dWrviCvfqi8\vhXώ5]w.h>9sp4qqJ=lheŀJHړ޽Z^+#OBlb)y$X/Cbz邪mIǞm)S|Wn7T3:T߆'3'^0y>T\d7d^ӝ)FfGw"JP}7C51zZ B0+PS<)vgZTeE7Da Gb6Af܇6,յmWy10C;2&W_n-Q:tt>.}߯;_$o3^kog<~fֿQw |zL; E2Ib m|a{4Yg0B8DEU2ɵŽFeO厑kDFyLn5ܵZl-QȪoJ/3(*MFyc#%HWZEs6'dA#|Y#s|=dcH%VyiA侴^ansFVȺuu:qbx.C;z7ɧVh3WLɓL+g2@rvݓ}P}|6/>BSżG[JR}4{<AfS#2fYAWþn8K ::yj V|3fV1kb5qg^]^(sν+羞݉)iMWg?dmb&ߝjITz~KЛ<;H`|8SշT@v<>Zf"yX}͹Z]\>~q9,_:7娍nb^~Gߢo);jڠo4juC;r%+쪮ӕj,?9.ן݅.Y|Mʧi*ssws Jw;OM<^9~kד6d^V&Nz"2"95E6zBk 8\y+9%v_6Gf~ЊAk]I fSZр,fI5Ro9jiC#w}~2#11@k*k|B]UfL֎*Yrrd舗bOAQdUgẌP-rUbWx6uRYˠ/ySt?əѳYw_Y^[2揻8*|}ik3Wmj?3j>BMY5݋̿GDYFSF|R/z@iܣgH§=BOݍ87=W8I֨-+׍?NVO-6`m#4ùQO?OV S2fw{GcO,"0 tV)ʊ^{(8w+ʩH+头E}PE^ZX 9veob2Z!k'9u`jA q^ÕIKuvU!œA`5K͗_r½R̫vE,BֽY#Fh\#(Mj|,r?r kW\+9C0>Z29:?2Rj/5Zn5 ւh:]X@m0ˢ%}Ev!jA=oJQ淺5δ-f5EOՖ9bgQ;3V(|hy1C {Q_E^CZjWI+FY P(ZBsF\7q`F-i6jSd]nS"b6/O>A \R\'UQ&UF*{(bHcv=g~%UnJrHqWA4ʈbb5 F002΃^8H,*~GRy$W@9H(U\*Oc6xG~C Bj9\O-QiEm1pMDY޹=L?njqBGqDܽεw7sGOVjwF8~A#wzZ9oQʆro.rmM?*ˬR ?SLh\5s1X^jN'QT}hE .MUz/Yݣ橕vn ZެbL2c7LC7U?]HV [Yo.P]4>!kk͸'1WВ#cUf'/w(m'1 k%?Qʉe͈n# BN)z8WL\i`w'BDW[z}z3fKJ=tJyUٍAI0wrrJl1.bYBGYf75QϪw,~BW\OisʓAFBlmZQ[e[b%l~1iM6je ;y^s2jWvyjZCe@<':ZȁYٞ܄#c/Z&:9Ρ{xd_L꫹G ЊEk=̯GUs[J%DM]3K(ZD+˔ Q񷺪.)V^rF>`~2&kP?.]R#0`Si8*GS6'dC^79]/z"}6KWneAHI9szqSO0B?%Ck3y"GVE7*PδhOPJ Fjʻ"-\1"| 5|WDUjWx"Λ+}"͈)dHD_B\iw لڧ~ZMWC &ʮoQҳ1g(P˫ƴvM,GfK 3SXOWjl!Oo`6+8vҕu^EFq}}zk`}H8\mݘO̤jQĜgȢVPmX[]k:HGÙV8'ʬf`+Fδh)2"V$kQGj̼O5w0>Ieᙞ-L9>S*V_m2J+cg\K=XZw5:ӓcO|\q s%mXMN^9đbԠseѿ5kc/~O Qc$֚eψ 6Иy5J^`LYIBA-`RJz2l~3\զzg55zhkܱ+ԑ<ݎU]{8Cs n!URlw3 {^ FۢV::-2:x%&j(Az,ܐnyq}QJכO%=3D.mB=$gwd=:P;Wi-_zNk wDU15R] ru~QeT>"quvdrLFEJ̲+~e"I>Od=AZ^\,3OC]K MIMѻkҫA| 1M]h~j$ $qcn_Lc7*D-X8VoDjy_O]/+UhwO!7bƓ1{ŊuLlhcXW0h33݋!RK;Az;oxըUYLKs!Rf #\ٕS|n OR^[ݠ]3SP{j%r~Μ$52mI6W1]]ȸwFDصQM;U];M2"S"bYGN̽Y?,Zw>E" ^.ùSj|&w=|$Rn9Trsn`}Ƨug棑&a,zXQLw=:_Α+ЈMBKi|KDGIbLR%_)Sek(uKO6xvS[m p2E=Ќhttf ˰.gE2ıJpCyPc}hqd81ZW4aXe,ziV^ode/s*0-w]gI"ȷeͧ{"m#˕fbJ]C>ILe&Fr7G!#ttM֒kG-)teOk!W4^l]oehT'#u|z0s Fu~2m43"#EugN=_3ڮmBsϗw+WjuTƹpVOe|r9 v? f3BF7Og|dO~ 㙥mvYdFj>وZ9Gѯ+XTG|5HΟ.ĝ8Ed^d/rZabF2O7{&/Ja{bWE=};0#oy:3۝h6hEF"O{1+#КxWo3R>r8>vAF7XSK/|s>Lq쁳gH+`osF&SejыEǦs9H:(OOe=i xyҒ^:;QaH)=?j׎Eg0vס-:`52mxL4bL6)ƫ"gL1B݈`= Y'rz>~@KwGx'"^I_3E2_?Mm4)lA5;YG?az/-!d(N=tץw1vg+kdb|""X[ŞZņʵ궒{Z?g@$r21߭."LmM'b,HQ)1}{kvnݓEד:t>W_%Nɬ!-AO+sn9Sw+ ifGy*k-BU -}n){3i@ā^ iڌ ˼2Ӱʇ.SOXt<-}Jitibe3^Hv?uyߐ~ɦHLK~vO锵 <41meVB;n+٨Vݘ0_*ZW艻& >dTs=4-/=|[grKPMy՟ٺ-CÙ|hᴓVlyK5SYV=Woz|4F2g|+&]5_*D^w緌wю LZFOH/ct9G*ҿ}9`Effc/;h,ҮL7Jͣ/h[:ȟ,ޞmk۸]6zڊv,rG)1Z*Z*1: 4OmVXU7ѐOK@~Xg",EFƆd8mnh"pM#T:'{n˱_ ɽStKy^K*WNȰWC7}n_zzOxʁO;T{E57ߘGћ[&XDĘm7.SP8e{DVQE\-/.uXm46ʏwK<FWxQTVWj0ЖWYNuC֤5 g2<(..sv_Q.}%>R^%Gƌ (?F-^1Qw 1=;mb,W-<5D9ێfEwBzxVG:%a7`-+Ka6=9~qHoN~qWa'^&fwb[8qbJ/h (Ffcx8J3+F?i%{ԶRI?Tôfk{Y7XЄ|*}A\ÐxQbV~(IM|TȵЀࡍIA;Ñ,"_SyN#䩵25t\>{}@Y5ѯ6Sx]SW֜܌wPo[Q/?=٩V[X ^T: Kw5?wf$adO  H7s6 eYvƙoq'dg>X*{=Tqv PNE}ܣluZvGኩ h??D΢Jv%pɇYͫ0[.g2ON<|7SG73B9ۇeفH @]SKE4ѻ~QD/h+e6e^V>ʋ:Uчʶk9"7Bzx˙ؘY%0ݔǯW=ɝE߅خ SFw,30<z ]^K8;~ޡY=`F?h{N` @;=Yyixc1 M'o=㽒68WCg`O;qsx_?lП+gó+?a_I 7yz@eJ6p>ݗ瓶9ЁsڑY $zUiVG׏;eݧO7'rf}tYv3#505{Z]3L~F`9Moφ[}'ƺ>'=:+f [232ߗw?kF2-`b^Yw,N]Mϣ$4. 
CrfdĂW(VK 'BR+ֹ4›YeŹwVk0 Z֤?ٞJ6Y+E 6nKRźeFg<7~=͟ $ ׿XT_޿lp0=?/\U<or๩zοoHOWw}r +mmT΀5"a=w^ർ_ #ޘ+|OyXzEؘMsH~ cD!lqk_ϊ1ѥ{H<|8}b'84|5=|f%⇦\qƷFr;e䇵Z~<"ZY3UDZܙrՀiyZqpjx65~u⏱[fe>|siXy|wuC2 ߰#s NۄM|W?-fmKyْ`x]'?={.̞9u7ĚNہ.ꥫgC{dXGlcg뙍+ad=[\ۓ{%R<5}La}bɏGN J['vHD -(૤ĹX7?)S~g1WNa:;k~N jpR:Hז<2b]ьK^DvTf|f+ʒDiGeDM7v؃σqw FNyMgcGbXX|doK=_s 2y)|E&?r8~+6K66㰃zo~WhņAth]ѮDVA=W̚ ɤ\՜ NG#]\9zޘCxI[.O旇7s95iBg_vpЈ-;ozch U0'GJ0Y}2j5ce|[m@y{?JϊcFIFŇ#~ai*=}OU?F29ߟ$3)8)_Ʃ&񓝈痢doC了Mf>KGVNrcsdirɇmC<<>"=[j-xufD1j68S"H#dZ@!~2pP=Gks>0? aH?L- 5yzFp=VTW{oXdx Gkc ΕcELUOu.mrFƛ#%`y󬦀1ETAt!#}?_^e{+м6BԼ ҈&1{# '{XyUdc-Yt؟=LYR#77תK_6B?gcSr?ȝ3J rȪg&eDžksY]p']c?A. {0 Xo@E$W'gK yK14'{+]]6iM4'3{`3אGXd"E2wK84#<%J~\e0OYT<|;mDw˳<(*4~b=ѲWuQb0`dbvB>6g?K"ݘ=-=,5d-]ֻs+**]Tt.%q`O\*<*HEV2"sXojyV8lJib[ѱ_k#?ёԺG<_߮gύaUPT>}~Fƣ,^wVp<wPfd&0bңze^z8P5.;bvz`yX_N>t[48`!Exyl};%ceG׃A G+9C\2p|^˩ X<3.i>ݏfO}xKN?l]lg<сX1גA<5I--+].R:t6p`L3sW 瘿q!$_ )e]Y3[J{]cd5CZ;f̶Z?'tB6'̢nHCbwN\cx_KqbLNƸ^-/)sDr$;qq?r^%zn3kg3lg=>% 쿈hj1c!aמl"`#L9w ̄t㌂ D;%7}_27gaױ5ZW_w5`k0$=ϔF,}?Wl-_|RUz j !5džS~D54&'&SAb2D>^Üv]I^Y q{(^A ;;)s-u!? @JF΋E&:[n&?qrҠYߔQN8xQ=2H+WhtW 6\0X?<:sB~?1N9П#rRwL44:" HF?3,;ddF.C[_}pb?< ܔ_un/95-R~PfUiVmᎋUOwIY ̓WW&{{Xk%2|wְ6w:XxRs|^ijjhDd|[Z-_})za؁9`'%P0Ѥ3;VoW?zF}t{MD|RyhN7?pz1v9:k?"x]P?\y[yp!Nn?cO/$"{ў˝k}NebaʃanvcxMfN ڞz4Ffȅλ'55Tߩ~_<QY` ?V41?=x0!qlv?8;i5/8&??((:]ؠxy¬x^.&t]՞/ەͪNB،ݱdq^?\u^]#is"Opw,糗N=߮d{{g_ a~^}Yoҗ$3&Y㽚s;k.9GmM߭gSӨD*|= ysZwaOf '$%SNԧ)Ӈ). %ݛV ȋ!rX꿬n07_^Z=߾1JE7n_H$NrM6x\Fބ=WnmTMV_ߵgu7{'#l lfF3c[3M܎Dcījؽ]ԲNᝑHD8DFgnjHtC'&z_ya̎ C NUgY ^&,uT3ޘ, sz~ٵw9mڷW#{n0୼}&7U޽?-T/=:7{ [vdYϷ$p$c]zq0 G:1[!%sݐdcr`ꯗUwάg:/,>%8? ],+F~zp63{;C&왬r 0pQ?-%0//{6EV?,xkvY?8<((j)d޿w\^]SmU"GD($[95]2Z9+rue8#9n3Vų/4&D{)Oxz8?W,MmQ,kowҬ83T< jF`9-'q G% +d.dTDfFrzA?9<>~b|phҌSfRK0A4lVWcա5AaW(;!:xlTww?u:7lD= 3 T˃lk2Y륩M>RߘnύW5 y@f۫Ϧry`Y,Q Vūy>`]A) ^)&{ij.ӲsYti|\V>l>Nm Ud, xfMi|+]=_Hn_h;U/&g(x6kI$d(4+˷'ӬY"^="IUC~@,dG-rbH;THB7}#zڮn|n]nONٙǒHx᜛RnvhIwܔގIÉ&wd,]},v4+YgtߙΈ0؇uٟr~Av,e>TzɃx5! p*/jicji\׷{G77Ϝ7 ~}ZfYv?&;<jMa^ju:6=* T.,qe)_* _y3{&K .omtj5kӫrwu].:+zll0N$L[wh5E]AàF3^.?Aron\/Z 4xyy |MS͚ i#/g(k7:p"$8NKö5.ı~Rssd*7d.G`ydzw Yfm,dhg9o9*2'+x[uWBjGLHged^PU',X~(td/3ed V?QYpêIfM RVA 9t}*]ܛ4xz~&x۰wsGX?<\qu}Ah'VXsjZP]_LJSW=(R_3ZjRVfuD\;hpU_HǻO.mOH]CԻeXQD㚛ÉW{[htTU3 7f"`Hn;No pP9ODi90٬%.*9yl=ljM;?zQwenQUXy]Vs?z'6l;87?~+e!geH?~wΙWS+ܴ}]YxU$yJb. +)`8|H"b4-}b7[A'!T:ESEFvhU\|]<]NX#g{TsM{GPn7e g_\` K+3_޼`ߡ-pt|bBV_ 9N.輝T:$d !lwzH CE23ssNВb\ܟܼY4uzKa?u0O32Gb9Vw@3%wJ=?񊰇't G&ŖV5T^1 w*NtחٖpCMVz 9-Ys✲әDv@}s)$섛yeTf5\],տ]ONl3q>xh#ǽ0y07n.iW'"x՜0zmd}d{lF`t^+=5M}srZŨZ>pl#[S+#ɇĜ?DWAj[eͣ|61_{weyviO̾cٛ8梍v;wUCS{g6wntGr;ͫq}sr;ܜ]R )]\ceB(6m񜷆89O#:Y0;ůy? &Ǟ$̙& XY,-($ (ϵ/0nsd|՝t]yom[v''zb]ш)͉tZV̑*UP^h!*(H/'oA,7G#׌I d Ĵpi n?{N6^DvUIDxw~'{ڡ#͎:yѿTmߝԝўWhyY9#QtK&<>;a\3b71YG2o!H q8Qg ]_,DrKܨkr%[{.mԬ o*V[L A[>l.ޜNx`d0iY[w[}m\{E8yz{D2c }j_3a9N.n97ĺ"ZAZ/G܃-3M8u3НQ mXT゙9kb\NLh^:֜bq\b%zZC+}ۛz_lW{7vOt1]RgU%爍=gZ.;gUSoBLcR-b uӻ|sٔ+TA"ZPߐ7\nM1ɚ5?(s^_^!m9,>=]ߝ>ksHnv܎fAQ+jFN {JXD,f -eUX`xQLROg;2&KAnĉys:|V4ɗώ 5='3]Gg.DvD>t2?2|pXälʠ[iֲ1Cwv~nUx̋Ey+nHݜN~lV/|8Mʖ ffN m~ ھbTv7HWi k{ v\%sb mr^+:cgmi%|! 
qk_%e|A!^8~D IE d%}ivJY'ʲM*{FR.ͼg#gWL Zk}!<p.&!>Fn" xˍLsdZ(R)pfWCqH`E2ncG4xV,|ƁzYmrTAƂ1C닠'b#KK !'ny ꓾$n4@ǝFYMQ U2O7VW?6!0*,E'8=,eMbz0@˪زs0R`Հ t4ړ"т`}q ~S}6GnMoq"`8T^Ko(7\Z5ڄ_cYڛ5Xk : O7J==SF&ӫŊdhCab뜗ut8.rN$чZ+EgB"J~A.nCO/YxZGq q˪y8,lEg<Uw˴{/bW/O'.k4K/xR)`Gg16o(`0tb) ɊG(Ocz!oM~0Aoa@>WK˭7<) ;`WMC6Aޠ_ƱpØfD;QyxUEt {^p]-jLt V튊;ڌMd dVTw}+!菮6xqcsfNQ(){:Lx1ZOǯlp#c'm1?#_j;~ΧXl3~9~>"Ab+gy XHNnrOݰKL ؠ{Oe(n1yT~V=:'+6V6c$IkYWh -\{a4)Lx~ilKkv7%U4mϿ誜Pj:L=XrIAX v6nz,_D?wa>ZR$OY.e}.2`d;o K06%;sF,ߥI2ŋaQtG@ ڊ?ɰeEvmnVֺu|%̻eeٍ)ҋ0C ex'ja;3 `JlF.Y|œ`w肘њgBf4 M"x9ypC`yR { hoeE_l2SWfmbyy) *rx,B烘xV~;IoSKD^j} gį~îdXyYdˢ> f^\Lqt ;K0 Of5%V;K?36n6ir8 32ӂ(E0b[aCVk4+5u]r#=3t&ƷtBv\lF-Mx l#'\n.οt/lμrYc#zT،l5cp_f|K{,]0R7EkQ>?g{c3*hɩ;d[qNd4JBu @R4-ԏDܺ &{tz7u[!`jAaAk?,2b-U0G،3CpN 6_eŜSrMXm`"-I2:~ ^~SE}*KVx (DD(.w3"*{ҩ!rɓxle)@.BbO>afI[pŢl!+ElHèqm3:##ɓLw"'6'CWjtJpJm=gBtSTw/sxO@NҤ$%[DXD}:,E&ILj v_؟ ܱ)z$!c ].[ɳٱ+}hR:۰X,.ǯ]aD<=203 "aZ~*#KYr裘Tl[Ï:y5t+Ax{QJ>Sncugkҫ<q[}>gOzHbQ]_7;чH4!āߗwί/-1Z%:Rw3.WU]ѝgTJ-W^ TǩNi8wץ_#+ڱ/,CͼJk*&W➺rRKy*s3@Vzuܑ/'Vۃځ8!Y:|/CK=wZ'4VNZ*= u)*r[מbӘf)i$J 4lz8`Ȼ:(hb'ݥ|O>IصT*Psi*b6Os9;ȃ|ZAe] `!EmJ[s 5ձDm^m..IKX֗)D{kj7z>W70S*򽘩?=jB%B5|*ZLn.X\H@zpHJhPּ]i1r_+e[F__k |\+'j9MK)Ǥ̨ڲĬ&<9]@pQ8kK#j[W Zҗj]ԣteq_qZ1= prcT2)yu[1B?LftGR7ʑvH:d7WH:>iO_޴sbb9<#]#x,sm01UsC'At02JM,cJԧ3j\k,%$_Q\Q\](Jsҧ|^: :4m\ G;眵6a1;GWrvJ-ߪ!)ݑ7q^#5ӊ 9Śh|mz4{ƀMȟ#Q GZ=Q(3Jʶ*K\~5N[МK?֢1O|XڗNҳܣ1Kà (Cb/u_k#sXOn?01Mbi~"~.9^.!݀e!9VW>O`z@5m7#|&Va'tSZLw3DxjDFJ^ Jl姞|}+ޢ>us8盦(Mckn5uPM%8$FFڦn]y/eN\[7`@鮴0y{t|:9->S1&%<5mI?GCn඄\9<+hʦ`O#hXCzͱecܐSPb 6w4FtT:-'kyi/>)\6Jg&iEKl|>mʟIE]ԗ6Ɲ|W3Ip잴صCm;2ۥX,GrSiʘ+E\mܣmӡ|*2=UMF3|D !M"jvMK}i _CqQVX6F=ûxR#Ob~C .gei~ox%J0gJ::d r |4֧c1d0n wL[mVa gLW[ "g*4|xUe.M㊑|Ҁٝlmױwƽ~>Q=/N\xkw2ZآZdƞ >tnz+, =F.j$fqe xRXHތ*Sԝ5;ou{!0 (RX ?ie#/U[S7_F`¨eVT p$%-{1Z3mto`!&j|33nP">SsV鮼38V?0ʍ-y\"ߔ L9lmV w1x!(|8wLPe</{srOϭ1УZ#x4)]>iW_Z>1Z6қg_̏b_K9M@EN˷Cfm>E}<5;YLMFX5,3 k.~TvbLc͈4Ԏ[m5lgfu(F8>=JybRiWzik ʏ21 ?퐙{gf5+z~D^chx a^+Q\TWXUN6 6B;Wߪ#k'H}C?D0cѻq]1 `,mh'A|nB+ C+Ì̬{Y!e#Q'!ƘwuN&9x͌W ٩OP yiD5FVemPUjq#*ha3,qu\ߛѭ =zf}yjΤj8VE`/`8v#֬5rW?^b$)dx갷Fcq/Um8;[e$qf<*9ܳ6a&6+8}%^G"9(y=-qjcD$e.wصΈƲjX#0vHm> k19N/|iU+lf14GF{xS'>Fcr׹h-B[1R捤 ?+hkqCCAz#4ӗIj?Zy+dFՊHxZ|e2'|zVcTJ~\M(42“vF,pd/ .%8^vw|5F8zEyh,rZtLKkU _хt4Ծȓi't T1%EjLH2a}b؍?HaeJw5ARSdpm=GӖk븪 O*Q|丹`Ai Y0;ѲD؈ֹ+oʭ'n+ai/6}-6SE94wb_5H_jvGlݥd^z[sY=r),䬒xkjGsq:J,1L#}ڮ㔂?J1s>r^;灟wx`4(aCA3S? S_,!WA[\GxCvpؘ!_ފ(̎V _O/PLhrrR4?:Z"Qw|hޛ~OϢIوJ䓺iH+ꂔuZ|zR9,W|X\P(~$*4;o-ZQ;m%Yd6"JRm@2pz%?TX\X?%/*h0ivq k_zCdzꂚh$ϴzdFr⼢Kan;]W@S:.J]עǡvh]T퍲DP@t ,X$]=5ݲ'_Qo-mU5ɣ?jZ$ ]X33Vj']oi4<؃a&wMDEv WMCN{Փaܷ5Փ3\UA @2s eEBh}fF}7+aq:R}^}eCVP^fn|F;6G9:(=ï1fBs6Ng/Un-mʇy͢f>'tFuµ#Xz| OA Kws!11h>TKyt+V7k} Q_ 6(t2ѳ+:"#-Iv2yb޼L疼s>Kd^cxטK?`'soT4X +VOmߙvEH^寨,> ObMސƧyWSl*)_Mw֓ L6*CA;UTCf1z+>D<,Nqh$:/@ӝ`½Xс(#""*Q䡺b܍Hg-鴴n~nXՓ ̛ˡ]|]\g0+ȟ~[fAO3V뺨 H85p镴,G{d}jצ:. R>\5BCUfq:NvϞHe [Y2Moe:WLzi[b3ެC;ػb+_ۿle〣 XKվFlq.̬t k*{ӭ˳!~6˼s ;ٝDY9[@#c ާEM[r5h$j{XqEX\V1sK0҃KSm^E^:0e6}ifԑXxN!"Ecދk^.:id %tfy:9\QT/ o/%F<ݗ~o&cxe.v}oqX^xudBgbl evõu̡98T}:1ٞ4j{g0 ] ?j$Sdn/2ۿS ״ kkG~F*zs).P`+ fk45?V<*"Dh`_6; rT}$3Qj>mٓt!9?O҃( bCb^郪VmaMɈPa̓Ë+Gvg,Zct5h_p~f[?gR?] 
Źw6Z99ʽVǔ +R?!QQF(Mg֩"D mz#m_;Ķ?|&5+G[g̷.eh0$FrEު=dMy[9hyuzd)9 3HR??i5s|5ǕKÏMDq݅u`nWD@Ƿ=F}ϟtOǓ-@K]ѹ.fZ\O.ߩYlйW̕Xgo|X#o`eثiͼ(KH=H +*'oNjYy O*⭎jyci=\d"4~u6Ë~ʌ&B&wh/2(6T夥6@!?,rma 5C{`2 HK0=p2z299_da3> T;oz@c\c>jN;vSl̜ϸ9hTi%^ey"ba <{{dey]fiSHe{F5E_G3?mxY#\?FD)k?FvJHziMxuP2b̀6u2V1R*O~k EzDOklGό2|k]UZWTW8_8=6u}07+Sotl\^#Ou@dZ]!KpLg2F~jWi/Qr[Q4O#&vvFapt=/}<^m~r(@Շ:}dg+gjS=hN)Ji T^D(zH*;Aq}.xHw;:N+b$`V>IgoQ:r**הrmY:dFzQF/RtDelZ}e{5 #䉜#ϑٗp1GqqWڪv2VLi!dعig"ZϚxڽ˻]ꎺvN!}bH+0+bs)3_j(-N ܡҾct%W\MZ8yݜfikCʹ_&Fr-cl)VQ~'L3#7Zß}3='k5B2cf4W) c%eCROʴ>ܮݧR$T,1jI9Xc.tuv;g᪻-XZO iFܸeR+8"q}Nft9{RE刽 ȳ_k魒Pn >aI੫qH)耲E:ϔ8/_{qe1FW3uXקJ+ELJ.RmrEfQK|ܭc#r]-ڱ.f3qA\ob2嗜׮;=զxP~ 3B+ݨN󓩀?E5`>,GG/dx:8EB25woe<3:FeOAno[|M`q Fr 몭 'V:(uOxFOkΩ)T -xg>o:.s8 s&H˸HIR^kSܣg(1v.us_yŦ S'`>br(4%SGȬc-pϊN?sܫbnx=C׉PsChQʗ視Pydud_,奚#z"ڼqZoi- >ڽZISduumt -n}nV(J˟DlTS]%G03\Da6}/ғi]4ӝSB)_LS"k5!Q3ӯQc4#T1gaT>MT57o$?|%0J&jCnÛF5~gɹR%a3t˰oow^<k!`񁑡u5Ͱ~Z*`PmUŞ5H 0 t43=2{.zQn=1ޙ>CWRs9YBʓ6X ^>hTNs۰Nޱӵt7J5<~Z%k#4ԈWˤwZ3HQe#z? [pp/R#sy6q,&>t^w5H (LQ~j',/4G~K|!*9 G3A3,HxH;ә/&켾"jvw(s{u/b$i>s?i>&-݃Y:Xd- yp}@4>{~b4]6{֢?koxvysu%oL9xr.o^Rő>*F琹Zs9@%1iE5C;i+{s}혽h3[.i-T;~)2#C6Vը[5z'55zti+* Z:hrlURet@cwG.σmWp]cauWȑi;ztZ@R^F3v1>\t7A7>ovw<sA2CW0YISlK2_'߱}ۀs1_`צXs%ySxD*~%rn#^5|˘m׻ٝO KcEst-f7{+E^t}\e(V)iԡ f0y^[TeY'_\U7"`?Y :s#=hYwF`nc uUM!fA|9Rg_yveNukխĝm&W舟Vkx$w(KB_: -ho>JBja=@P2wM;IS7,&BKkf-1\m.1O." > /_ibi~-/jo1߰Oʨ뷵FJMV^ ]isPdb*'cd&I1O|a7w -k@Y \F1w6X曈B#wV{G = #h{;CO;Qk Zzh hن%rvAZc=gXȟo514vd;&JFZX;<ʊxϔQ߅O3 c&dꊟ:^Ɠ?+zlk19CnШC ͝ƉBw>>4]1kUW7n>j':W\ˣ W'8yjΤ+GR`da0>TSkCUH9!h[6^kC#%۫Ej\54ւc,!Rԕ@A!xGTp=c=s׈MQH2 ^)HW{5%GVUܣ vµ.f)[9RqBC|*cK/mD_2d66x\e7,xJ I ^{g zUG7W}#Y*YSdRV9/=I#i*!BF9g_Y>Rd"S:D]:95i#SJQcUtU5ҩC宣K rG;']*RT9?2kp)Dy,}Tp#[^ Qct|;>޿^[?ui)Eт5IyuqT{<=X(+_:?@sAF+z JRj5@-Z),yBS|Tx< )1QSSG&0r[ {VxN(4kO~-ތ I IY\ VcDd]at:o;MOٙ}a|RΏ{3grr>mjdX|lRUue:cEWһ NR+:rc7a#~3gdZ=H &G'm8e ]sJYTQn8`;1ŃdëYz[iL7kfej3lԾ*_!Sf_-ۋY[jFbǖz Jk۰n:d7*?ycF2=oH4Oqr$pC4+s+f V7<ؽ od4 ~΁;pPnKsЭHa ˌOc`zYe*2a=q4TV;Yt:>+FJ?tdb/dKCr=w?MJdiQnO>2mUjY3^H0,{qZq7UXxge_4S"Pz[\WAyS9yO@*ܕ'ϴ9&S~Ɣ%^oR0G ?!OhE:ɼY~Cx)Xet]3kl+f"rЯD\$fߵГyYEEn\yTN_Kwї< 8?ե[ͬyN<_NUJ;7z:UG43(#4XfkGB'H%tJ*b+sSR35d>/]0އOLUefFf{j}h?"1+ts iX[ތ\vwCK/FPv]@<9ikGlYL$ڟ\M܎=Ct:S&g]/K73lt lw6/bHHZVSn\ on73kP={~5pXIiZuӁq=@[t샲9zAۧ%Aͫ./:vgIy4XtRiInU|N;^UL=k1:½ ,ڨHI?𽶸Mn2WO '7۱`gg,3u߳dUP(F^BPn B}zXk{nz/*<ЍG be$| [r:aWH~j V!~uOt\53RGK^A<O^o3hcXVf;TmƗTc~bVZr1MA܉XWJkZ[ܚoa'-<NO<#m!<#4 Fr42:P5> ZawJOp(Veݏ&{ـ왹ꞱjLfb ƬcUۢYɊn~=n%}5KO[D_7XwڠV-lyhB*F(9;R<ˆ|6'Ep {9xY)}x:WzN-1qmA322O)aNL+3nO\;齁ʨ:z>ys4>I VE+k椽Sk}h{aA앚㟟`#} yVA*ہ{1ֱ7Krܬ0}SffVa\hLh67FGAwPC]g/uS?@痠<U=_30TGb]u!5T+05CwLsA:c *[y I㓱v\<9vD< PNAg7qu3Ykn,gJ9uGp1!=;f%s0u X^k-Ҩ?Jc qi\o_;Y0ndN1[ݰ"m17*ѣnV +_fB.Oc=]} מE+VQ+OmxiBN]7kfE^jn*iի^t2Nٷs(ں aG~&%Bo)Q+37/Q|Q1Hv)bQU c׬JѥyuMaD HK[eI@F)U4|T֞^jUU>L{ʐpB"*%nه:) Ǝxuܑ' # Λӊ+FF6V2:wc1F2V΄ٙ9[:jȀUYV+)K:}شR=ycm)}7w+71Q/NwQ:WH۰Mqk68E[+k-R"Q-X wQvLmzHPC:skREZg<ʇ7BTۤY'z#"ѥ< ˪܊yӎ$']"X?EEKI#9@J O;`0W;dw':<]GvHkb̎Q{\瘱CFއfʏqxNW(j8c2+UOMQnB}X^ǓwiZyrY#B^ QzI@ѣ}TuG QN_Sg%/4Z^VsOY9Wk!BW^jvW~QtX!)\iƹH5_+~RIZ%߻L#%H =wr+g]ʯuL(bi9j5dX \636յTVX~_"Er2Nzo!<<[t?e#3j)(JY^H9\>8-P&JFB4_a_EJW;-g#wAa;]~*WNΪU ƚd.%]E/.'Wښl@n {5Wrr+3VΙx({µFxvb9+>\-j+ݹ'cFYrVu5a'BC i%t h85tC.v5ʺE~OUjK\7RC\)1ԵqOۍ]෻doݯ )PqQCEg.3'id@G<Ldg);:!u1dF};\+A ilhUG4zʞl`^GvR]ȕݥě| -(TU6vv7I7(w#=^S|ne\m7 |=sȃ¾F\'L=30rK фsȀ>G7/!Z58On6Ou(ʶpG((x^ڸqDX9mkizdm)䣙iX|ڲ)tX{ʃh6 Qn( 9X?bmf GU؀G+nTh%`ޝbV4-Bk.*K=̈&ijΫCmSXH/{}c^I%8vܐgSQ8WC>Y~/@PJXkOmvAnZ5wGW̾O.+a] ppr_ .*6e19{Y_@ѨHhoZ)]g|Oh`̳F?RL*s|6U[ɍ&t-x WԹn]֎ZE~a:x;~qŞ:.<2?.Iߕr^7^ԝ^n3PZiwT x"_3B 
yҞ'7GJT,ȟz›`*>7WI1&BŌo#e(Z{6?ҖHr";ojg~'|>9iG,HvxFڃx׷T0y@lbFRQϢ/S"˱J*:n0{:ǷePq1]uohUځyٛ8>=[^^iX<#.ޑ57 T!oT_pVW 󯨺cY? ~vyjB%}obTH뵥O 8[7gfhZR=KWgg i]K;ƇV78hƍ˕V3멡CԹ'NV,+žMW˟6 ^X!{u6VGG6{ G 4TV{*^ 'EќĮ],g0]سX[6f0; ,]ϸ9.=jECVXFP5T'#@k {;.ū^\XJm[ V6d͏ae:쨃j--56o16Žy:֟K#<:pu$f\r/ȴvr'aGIPt4'~c8a`zQz5\S|O&AA5;WH䧖O;FB1F~V@5хKKũuY| u[qT5N  ΅/[ WR\ԇx1u ,g/Ϯ=$wnim*u{O& VsѥpOnl̯AOE['܀z&'9xqB2~jsa,m -F73Vo|vB&ky_]|\bV9|cx?[#_Ō ͨ27{{+rYڬBo]_*,:elA|2qp5Vkp*[{$k)0՛mםe]ݦkҩލuڑ oGd}5}ߊ}ѳwZ g&M ٬C9^f=ɧˉV+N#`Eϵ}_=}5Tv*Àm80rP*̴7gQ~.hϡs8{E~ ?o x6Vv: _bwɢ_f$ފѼyc֒MݜWbp}7? |Aİ3/NjuhRR_:ckoR;嗺QCORܻibf#}e0pٱ ԑpr~ A_ubuW)nDGuD (Nt8}~< >y0oA%6>ϬQ^p[_:Ih9]:u!9Ei`WFc }>1u3N(rPf=4H]{:;yce#p2Vjhc}\ QvFuxeCW JhGya 9iHVpV*7#sЦ9f觰T@Tk1-OTVêۦz]XҪ +TwޫON f ݕ"WWCJVD_ʍJ)87#R;CvR uJ9vYE=>n<jsXxmOݑZzJhE2=kŚ%g AqeZݵ[-MO8sfG2xX]mW54ow))1ILJT Yz_ȁj;m^׫عJTEfMx!L܈rsb-VZ\Hyc~}fIe«"CKzIE&j9(V[!},xQ24oJ%نI79:{5'W+CI=RcEZH}Gx]Tg9KuAۀ,?U3JiH3GǜĹ~^,.!qJo]*'ysfvεV 3{5<*jGUV̊H+gCƱ6kФ^; ]dFIIWu`H_u>::TaSy?xAkj)i+< ȿ%wZqKq/jrp-S*ʇ=^<[Uq_kU ;Uj#!ۗ\`*@9t;vr wJ`:p'p+a6lԼ"S?37# ;C?XFzsl^Fc)pK\ŜN#쒧.*CF(bmƪҵЉG3=z'w[9{9JB`Z;"F/Dq:% vɕd\;ٍyjC299.{!|TݕձP~GNO_Fv"ӟjA]DbFZt9|p-V'O4^d{ Cg@!\@̻;wEq[K9RjJv+I5Xr|JqE/<s:^FdoȹS\g=#uQ@;`=Aȯ,9( hZ_#uI@sЕQ|Lx9fZa*藩-2oUyT8ӗfOjWbowj3_غxt'2pSOjNUweXe:??k9qg`;:U2~)j*ȱ3ߕA ҵ.v" }r.Tr/_'Fᩧ`qW}A|9(xvxp9FgK#s>WO_# pbx 5Z06lRxlpN1{jKxqLb+;hʔXuűrcd.,iiSD;Ϡ߃n?4U(ނOc:(q+jV҅#krcTW1f1r/=ĉ;hxEWAyR-ߎbpc.PS`ϬVg} TL誚(xx{ѷ#C/yrq!NZo˿/Z~NKʈx)AkF*/iG=PU=uA_C_~D7/ YM0xZ3JdE'{bu7s6cP~o\FLJ?wm89hCV~o-*-XAPo&ݫ=k, xWF%V/ /'A (mPg=(;i <2%׫ ]؇ ƨ)1fRBi7BjұͶ jeD}]p+9bS6.ԍ߸jg|)sFWRWIR^>KU~xVX>;TWrս5Fϧ6L:PZ Z$}"8IY9kK[R"K[k'4k4bύT@3mP<[ օ3t݌i͹UbД2x _V%οN͌>ަcQRmU!:Ce\) iVMrcDjńLZϹj֮~9bទVkW;E}ʴRX޸nJnrG^]FE`<{Mkܨ/u0SbE^v0ݨ7FbKyZϩ>^GcC;Q j;"ʎ_V]t^BNd^x؍Ucpz3cVPBR"~[φzubLB/Ԅ =Wʔ蛿JtP/9i8K`mX>Y<ֳ7 Z4VV -ꅝ:5|BZjmҤϯfr;oRT߂$c9(=yuGy] Gz*!vw%bCx/}§v{u^s~jƱh3$G%&Ra k.sEX{T+LRypŏ81N2t_/=uo.awGt Fr Oz Z8}7x;m݃LdSSx^zXE2"S7k9 o_X^zx/j #T}z4jRS `=g0˼$\L,Yڎ%0?@7Eú~/+a/ɱEcq1gW┧ϩ@KbBFSJzx=J9j%kMfîZUZ尿/uGJK;]wQfי!pEr5+u/TW+?qꬰMO^ҟ"cKs|ހZ{aח.-NϥtMY]w= eSyDѵ!*mh]֪gӬul^XћϤf?JkWt=JTq _ >\^iv!m^DO Dsg:]1IgɅxKs+nʇsb՝t]柳gq-&,aolQ-"W&8i.elFajJ덷@qj ~kQV&⅟7Kō&"m[3|: Ѿ8A>L;DnS ެW~i(>9V/Mn.]Np'a_L#Y_M2w1X] W#G?eH|CƸnA282}\lN:N X/ǟăSs7q~WzXb9̝jQRZD+}s~9Pý8u$^HUZ"J[#C޻r L >)#*g*fWSlFN<&}l{:-2,b:V|K(*S`[YW84G 3+FFZN7ΧyنOFnS|=(?Ny,hbFE]XVcC6*ʳ^̩.tn'o_c+~oub|Kq )xqk,\9`d.t<^)2zFwuz;1 7|nu} obsg!ڣ{G9=H 14Ԉ/v*v!Z J0Kz}|+ђ'3D /Czƺ4w YmlLjOw}\+}%OQ1bkcc㩏Ef~>oK_x+Rᵑ| (Fal#EvDXזF+- ?DoAED ,#AEEb/{QcbF&&ML * ]H!T{ه̜~~S%r{zUw.{ߛsfV>\Iň"*]lx2~>'>ϘusxQX4b؛i x:ϒdmZ yѥ">g9@{u?` @2qcOک9K=C=DlNJtᙎ |VlAz"=GVg.ҍo]|W7YDsxm# ~qflzNA]5e"d;[U&T[RiGG6_[E)#6OͰ)m͌!< ^s 1rksie=I{oj.VQDcD&я'uE<NDL.vB򤶬Վm*iYaL35%FJ`t;]vtb̞X9ճs>z"1*3U]`c9F[soD[T~C7^M.Z|@W!Ч^ő3ޣ!8Vt""k"osfw{N C}`h7 wYbC"A28]1{} 9*z>jM3]y}8Ҩ-G{\xkDQ ຕg7v5g rd1N[aj/Tqƾ'mC%\%q<ѹŁjZ_bԞm""$ʷgʪT诵tQEsL-kS\bAwI_ej%ߓ8N53g7\c#"cqҥc{#ls_MVQCx# ʫ꣫0"ʔ>N`gH_*x!voӮ*(#8Z*m><$ٱL>=N>8HwEvTJ]]6EgEQcCDT"=JQ+/3OfWRllT}Z^*GҲ$WF/'/Z Ⱦ9Sn),glNT ՗:zqA%) LLyWZ>tL#hF ./Z_nC4;[v=1_i[җFqF\_ˆ5j1:݈$_m#sUSpIJK# |ѻ9K>0tB;Eݕ7R W~Viɣ@IWr#OAΠ0şȼZQl2Y-dQ곬kg$۬LbU'j !;%;rsmȦ''Cγoz=0"YrK}'81PbJg޿ҕq5] }D7~ r ե?1o/mjS?sZ饤[=bԹ*oݡASv̋ёq_ÈWY`++c?j+J¡/5R[h>TRlO5''@,'{ؽ{ j%P/Ft3yde[ejgaE+dJ `e Kn[(9)P$t U@\RA$4K Z y/6vq@b0&HmR<|3;,~..dFF<_=:츜wԌ骍}lOZx-[,&6Uc}~i׌9" } Jl^ C 5{~J[NTqIUwupr vf~qmo.oF>CvCwZR?k 6WL{Aݧ+9cjԈo]]PwD=ec>NE\u iػhξU;ﰣ!>  Qs' iԚaGz_L'#;䭹9|ۊ6neֹ"ROCwvg`v{<39"JlTX{ lܘzsiI/W(]eo{kJci,h4,gڄ%jɓ̴Cro&*jԱXjo8Z]\.U'ןJsԻPFjbwb [>ba/ZI1 ;@1Lu+AEsK\x0b9^i cө O;։f|]ʐvN{oӃ+]?@JìAN&wj{afak <7D 1⥬s/8Yz. 
xD$_/2Hͨ_> <*'T[ΏSkYKAVM/C = 1=66OWzE{DWOcy/ot}-~SD^w0iVj;wOfǎ6f0r7S |^ŏsQ%>X1gFH.{Wl[<.ӱBmZ1w$S{T{ġf - ~f9;b]Iւ06gۢ ~%l˨BjwA~;\jIJv iXJe0cwZ-im*)_՜Ԙ`,uN-w7"c ~ڵ1Eݩ}M[ίaAغ $}%_۰3vIG%XvESߘ\+F%љcw7''1ܑxs5AZ/Wd˰;6sm?SS;9eD[JKi玜WjR j7t8tĮ?M~>s wJ?B&;YmD}I,١ _MOL@n9՞E:my-Np2+n{v},-@LԞOdU6E pe]I"5ba^55Trs67FiHyy7z"XL0dجю9ܙ-)ږd-S5cðx "h{/5ߜsvcbh,C=ˉBOC55uHwc׻;ѕ7HbJs\1Eekl!x:ғTꖢm(=[%$kgz-6)䨚|eG,Hk[OY/c86nѷ|z0Φ{:Qte+촒JN $ZFjV+P _.2ejY{*uYzQ|y]>5JAiQ6sxv籢vHN}nꗘhvD!x&$8+t9@C =ꍗ`Juկ[Xe"c݆]*+ш'kA[7VWZy`<*9Z;PY$pI#U'|gؙ-+51+s,mQzP31WL]\6QT8>#/I9/(ˤ#;g,8ѓwiwy_pghFh(w[SC;#ݮ'<+F9ʧi'# 朜B*AJErO0˕q\gax@;N>k |H+~D+7zC|wǍqy1#ל?/8xŵUzrR^, nydu|Ҋ{!d%TrDW|ո PD4(}fya9 qw"ir%OSFT;yEp5>8A++W]P9iջa<ޝj%={=$61Ôິ5.qejozǒRD PO5x NĕV|Ҡu13ʴ&:qZg幇$#pv\~|wT4U[OЊ[lኤ蘾#ϿkeC3HFo^  1|\yM:lV+o<=J5&Wr16'?Cψ 體Æd5?[+n9hw*SSG.1cjqI.GZt/R"t3~e̠sߝzE?< XFfs;("hQ#aAEZXHҟhvD='>(&To*.Mx!Yu!>1 ͇t5E`JR|j=`mh*wp}#Mw$r'xhvŮ Go3Ն5UW7*R,#ԑKv*L66R{9ƚЃ:06 F׀jqd;Ywg<[a#}M54Sꋚv|'Qƒ@EJzwm+ |ڝvri)Nb'(dwwH?x{WoG>CWg)XaVjۡMVL:+ HTuKi3ӫ(1@ Neq$-AN*_7{~卩_>4ȯRwΫH/k /yze` ʭtvzUQĶح7]mH<|KO~/٥1 ~YS/.yw*_xq]O5~x}XE33s<3=y;9@- ٭Bk tVcgR1FiBǵzfxBKt7shζy|k~877q*iU. \x*ɍ<@,MpVԾkEvz(;\,/C.y%'@ɖ~NG!Kk^x=yE6JZ[VOgGT\"y UWt?:&>o9QޏG]K}"gRӞnE;7;=j(V%RFhȚ;&!c(vy>͉Eti!~]>o'6g+CSX\N<-ݙ|θ40#V`F쪤>h'ڏ{a%ʍE9H{X#7Jz}cld~Ҙq93g5ԓRZ ۿ)͕pU]MW)ڤԙyYigQYo8їjԾXC5Pȓ,]Ļ՝:i/;mGinw#XKܹ8i=LtE,NlU'SǕk53?byv?~jQV<9wE59+ƭ}jymٶmgCxb%hߒʈC42X {f:e<@?eծ#HDK,\{蝕t-ocz!ϲ6wx; H;nv{JC>7r:#"2:#)(d<ޱ-7')ʫA"!)Zbuy&i4qjԶDN*)?JvP((9rM_et>4Jw5HP.͕cN[DXj+s2#'GW\;CV.<I)W|9%uʉS|ݡr[{O͠_'XcѶ5-ԺB9>|ܛ,U}Ei+!RϹaJWjYeIȋ9K;(]\J*g.|Y?IY4FE:r?aU(ěV#`sҊBJ챨nN(G3h?NJTߔ;K< Z܊'SWY.*W6,t.|4z~L)w(s8MwxwٟC{zgA+\#8GA ^n{~JI_涃cʹzX t)+e9<`it]jaxLfT<^fS!3ʼn;m5V d#B c}ᾠ>vj6^ tezXaыsrz^)torz v<ߋg\Iڽ+V}ZVskkr/e ~JmrNbg闼 Ts܉|sc^K,:L=*G7:#;Ue^n*PҨu$Lf<ܩ>F pf"PMPQsX8ZǬm2yFeu{Ol=T] /l`q=__Da1[CV `:##o< ,}AFE_An缃jȵpC2EŌ ֚{V(U/Z#iZ gA飶3w?W!qFx_8ŏ᥻^ xrOxgPwxepi9D:16=MpN3 -+"̈́{Yx6W-eH#ݔc0kn=.i7>J14j+8gSqS1X{KB]oCwnʮg6PW\iK~64#ĨPx xA KέdWMeuՍd3@珐ñKSFzY*٥ KPneodf*?HC+67Cm=8>̿ʧGkc8\Pk2l:fo݈HIϼ/zwRp.UKTշ^U u0|$A|QjWXUA9[ꥳ%hpJ# 6NCz?+̪6jj4z ߡ:J>٦e*i—h㥬j== ؟eHS*3 L=c_f8J<$]j.|,h ܛ<ɾƈvHΚ3Z]XWUtIS+X*$v=e/T_9~kz+yVDHHFoC˴FflbQ%^xr{`ˆKS{3[F3_w ζ;?f.4}HMo喷6Riy3λɧR>՜G}[ ]ꂷP}Q=Ҟ;|DUR+ƍAc]XR<-è=R:UZcMtl< Gj{G:; !VcIW+-?l[a5j.3۠+.`S ɜt=JE> jlH.nADԑ'͈z.hL*e_P{o?eʶ57?ſeAlǬMѾ^ `A۰u6XzSАO@'j,O]DoɑV#?+$yxRǚ;}u˞HŮi1wbt}q_d18If--#}TeFn䨇EͲ܃ʑ2g0*5Wq*vhnk`rgެ5۩ʐk+X ^h(5t' oBã@Ī7@م6/O[Eu3޽#ګKTv›Kg=ȹ<c6`gt>Uݩxbԕ9۳6UK _ASKvآ+\hጰNJ#sQoUJS|9Wf׏WюsEuPyjjBj ܃=+Ȍ18P;:?ta//,fEO؆+YYP)_0M v&׉3+X1ZkF%΢vAmwj/(LV)XaOIozDa aޥ^vֲ= L56DCj{ՠX@{QN7qYq-*.ƫB|tb;Y2+t/Ot m|Ҟ<䷱\taH kNpp)tQܒO{i35qE"ВT)s.ffC)i 6b! $U쌢~r:ĮR@Kt<m W Dgh3pŁ^KmOgzl c:-1Qx V7sz<9|4۰N^y-c&:Qx>v/أ<=wx彴 >%MZ#pe[x QR+cIE] o휛IGN/qSx=yKrS2c_g,Xk>a`Lg⚂HA?o㻌~oE R3ɪ\~c'atМgQBW Ꙫ~Z3%dO<0S-׵QKoddMu6{ 1VX/v7 GFR&rByrJ)R/w'["Q%**IR!+z4+7y߱OG.f\ؑcrU:-#XS9ȼ"[VZ]D̜:Ϻ~;פR#֦y/>DIs?GT_jr(yn(f ?|xZ|%Ƚy7OӨՂQ?"߫V8 +r6{OFMEv"x.{K#-k7sm\U:뻝¼N2_ 3T']SԺψD|cƚ}3{+Ff4;GϿEO]V>g˙@rsl\Q3XH'3?;7tq6l9:t!?jQ Pܕ7=5ImHH3-u5jԖ4RJUdD ;A3ƽZY;%=Nq~qbZrUtP7Cv#Y`6D?vB%S^>y:3u(uJSV\w%lq|w?<{3s沬]^A,?3(/}Q f[ZSz};}~}>4Gj+܉ɷ]r>_EƻThԫ%(Ǡ qLޗw{O5vsZsF`^ӳ{;Ay}+5"ȓ_D1W;0A}$NdV?kU|]1hD5-5YoOs~m<^|LF^[~p"9 NgpbMwQOs¯JCU&مM%g]^o~W>lTGԟEPZ&p;skpXZΨ4_ݫ-@X"S+ؗu 4ndmI%9)yn#i*ȭgsyQ53ݜ_ Z:_Ku ޑx@og #Beak/,n4?#(d/p{\JA i܈IOQފ抑XnFuh {0Z &Ge= Rp2VoHTkzv%D*u]'1VIfz|-hRIRjvb_~\Nut 2~^U:;˿֩}zz{wd-Ͱcf^한\]͗xihGvp#h)u[/lp2q5dnELT'ϵ)oq$yw4u!<]jfu֟D9ӔꬃU؊ổOOڟ72 @r̳(A C\hQ"r Ŵ  ,@+#3OcT?ڨ (Ğ5uM/Ux[r 1(_ ry ^>M̼%QC~5A?%XUͰ7ߍ[I_$_O`}r_4p=9Arp72/V]>y٩Xʍ=>O9Q?͎Q|v*MCr-CBKVU=8?yxqR^;:8>͟XS;䫜jNXcg=ߨ;pR*ŔZik kX^{uڻc? 
F%2:GC}k4 ᳜SWQ]4UUDKOӑ~$܉ŝLѝRtu0l_M+x}\guOxЇ@"Wd™g+!1>f\?3j'IKwcuTR4CT?kI#9GJ$gkZOIf+}A_R{+ D:#u~k46u_w5NJҢG6OfL?!5!-bv*͌b#"R{D8f|"# ʼn=OEc\o:ϰ. ih36RE*/)'pFH*.%U*I#R?}w(WZ eZZZ'=6xi7IgkTi61 SVt4~$ҚW:VF_}gXY9ཙYTQEH=q:JWs tAMζ7x0's<ޣwi>ଝc]种9Q?Y4_q>ᯔRxjo˧Cǃҍ]Dd0/O ;$'3$jP bOjsJ}p2d:,] <$Lt=̧UeoaӘӚOv]Mx0<~^DJft8H{<:Ct<)Ujwf{ wICN#o_F1]`vѕN)1'A*__~*wdIk/6t6ZJ9JjFG bi9Yb(;LqUhdcFl^黕 >HBcվPeF-k؜ :ؓS9wke|etVd)*W@.^|yVJ '3L퐑@X^G8f Yu)\˰6@ BO;ۨ=D|:qpxӑQvR;+c?@q]ՌybrnKɩ?&N N#yt{ތ-~z ݛcK1'eԂO+[Ҷ,hI%'}1KA/~H0S}xǡѣAzmь%]Ox:Tv"⍳X=A;/ȁVtG=4ʵXPY3x|Kܧvg9^EY2?ueE4MhG_r]K^ItIK@xz keSѥ)%b.`n7PrvSU3n:%;؇@S{*-5_7WVS+hF1 {ރ &Wl·yK=h@럅{+^@ۧOAq*]yYFɪSJQG F|F]zlҽi sޥA> K+s77׳iW Z+i`Fl`$C u%y{tz$֞#}YH^PhZ^#Ҝ7u֋ލnp=8sA;Z랈YPflMi*c~綜SrJO73*)'zR{_&jV¼y? F@:2w,I֣o!UcsbsR|͙-wL:WLuđ䰜]iUnK>-eJ'^w-MZ<&K=#Zxv\9YY FV~Ti7E45$P.̸ #^yނکaW=FW 1`uC"Њtv>/_Z/\d}4~o i: ^=H?[$ksm[S-KpR+| ^e=khiԪ|0,Saҧyq޽thWDKߞGۃ3'Smњ]#90Pޛا!xrByQE= ݸ&YN.Mst+@9T[<5;r1%&rdmBCF#56Ds ğ+g&UZաPujM}0派ݫܥ\=dҋRSž@@щr)!Ϧ_ҙܭ}hJUߺ[g59]nMˣ7̫wّ}mFFʐ(Li&wp| ^Gl:>.^'1D wQ1OBf30. R3yDt[`3FS˫eROvs6 ~CF-ސePtN5gQ|XNod(HK]yD0.ke; 3u 磁!ëf5ʞ٘aPƩk|bWj|jQ7;pGvu}gHf2 Yi;ޝ@XyTv&{68-D۵2#)fԍB˅`푼 Ӿʳ zb//~]7mXnD{3ڴ'몟ZZZϢLm |%5 v7S{}~3? 6V ~vziZv[+c]"?Jwξf+gߑFWp>Y97G &Hce\PDPS|VO"C4h({<i6 Uk[C1|G+d:1ʓgoʊ{0(j ^WdĜolDn۰hnt$݅OO9."Wj ]hJGa; vn~r9};5Z>Q 1I`}h- wU8!`8-ڹC&x#WFmsԛsy=w(HE"ΗF3Z)kz*^ƞNxP s;joXHOr{=UaƔ3Z U͵'jBWTqVH"kAnֳNDd#01;ѵp C;$͜3 9voNc_}8שi/VX >X]($k{qDx5sD +E+izk]sgj4ʃ^I<:)}3q&GDtȺyeRcBšE.$J҂t!yձt`yGAFYz+9糕nlbڧwbiAFƣl_:DF>a;3lA;gȧȭ Q`pr"ŎB,UQDǠzGߥrK(_}3Wz/~W+|z*H܍o'Q:FH>Eq!;>L3ŘyQv ўnP5\lKrs漱^U~"#]I+)%mܛ;xvg䀮3t';א195]I*bws qcx3Q0 . Q\IIR,-ʴԨFG3zz43S m0>w6͎N%*+coeJN4RU~rY负}>ӵHCxK'e|OR&N=brB’DP==ɻ=#&rR--Qn) mRrK>Wkk~huD=]{]ݼO}1 kֻi}{G \od] G|i#+w Ո\RTbf rIg"}[bݪrXؕ7pL7[ςףGOjPF=QBiC"b\wVי'Zp.LG^ǑU\=/[N:q:;+-Y`Z'w-;B7=@]it(@/no%tAS%+f~Hu^ |;]sD)m_x_ t;{@A g%]drw.3:A-b)y-U9wHjѭO|y7?*ddmq'GU`|/߸C>+yn{ȥVš?Zպ/+"yk=΍=+D >ϡ Hjzlf"]EՖ#-R6Aq܈kjԛPּ 2i=:2MOӹ L&xoe›j<.?B͘3DvI16|/`l_e7!Gs:풾!O빭؝o(ĘƎ1/sڱ#f5(B*sK_Tz/$Ԯ>.Wq2NͮP:\d =o䬂%?)`[;3O!nB>z?!}(*F&2X0\ͺi^r'u X`BГj49> & dʎV-w3h;;ɉT'{(z<\ſ/ӿ兠Q@U~!1 \ї5Do\-3q&G5j,~ O}OqTn䄎7PO.tL#=œ}Wƛgj;QKxY޵&|`=~orq#+~|XT;A+;}rϵ5ME}zjśfSehMsSk|j51qW,i|6Q͢gȥ=_(edjS=wtv~WSrUy@U*}<ϭpv? 
53c#sܱkg֭5{2!A 6hXw}͓M=^*(*Fϳ]dx<|R@ E7Ԓ7a#IcA* G;wT)a[y4GVwϻpV}wJtm~goiRya۲C N!NX]ʠՏttD-u_!y)gBH]?x"jw4r2Wm%VyyV+?*/ZW뀍 *vM6hdxQndcR31?Ȍ}Cʟ@bs7bˏWZ NM>ܢ|+0#҄n@uOϡErDњ 4.C]G@.j"2,jUkkI~j?=c`F<~iil736;6hdnVJN)Ū@wX_$jԱH*ZKxyv<5\m-ttbc}m`chpykUWWN2ۣԙڤpFzZO>OR_"W421BxP}PTNf܎; f_Qo'6 / mxDbF[z-Gx4MjAW+^+v%\2 nn!O:djs+Du\ϤWE6" WU73犊w.A R;-~ɷhoˉwUUgjJܱ!} Aܾ3U/{K[㽭)Υ"EK*xUA-\X jV >,9x%c/E!PXkOO7oAaP`1^#E,`Zyh;P; ٕzK ((ws"+6&0ӛ+XoϚ<\I6)e+"u嬲Sox̙D?&ZIeVO;f O # ck +Uwcސ+iaF+%0 Qݕj160E2Տ봶Zq6sf>URc/RVKyX&մ^Ni yҾ&Ǘ;3顅񯎪lq=x?/KMTnG%XmQhͽ!+;Z!4gͱ6.HQ-b0g\nʪdmaacyRFq,6Joi93TV"z+Rm44&kq=9fԃr^?wj('1ϖJZMwvb*^.4}k[%;?N^ڵ2O1Fo4k"dGE.T{{ }֖O":tn@NLޭ4ۂOjӤvT =W}x~@{]W'T*ُ@% lzfZrR (5zs5s6t wU &u&˹_p ;л}9S =;P0FQ+Uq{ /tz^Z?$+K%QRrPNJCSGMd7鬵-c(ʊa(EQS߮$w1Mo2,ʭt⤈3o$zoDvngA2O"mNe4 ᳵ#`|g5B+^ci` πϬ$ 4sf ύL@s@K1#]OjFdX{r%wx:4=/USqd @;saԷ0ޢNW *q1HI)_EmmLgó?7d @O# :+SZi5<og=8 "ahMOJﰛ❱RKZ)OS=hXk8MFD6kt-rm^y KHft @2Fd+Hi[emAЇ6,ȟS6L-e)=( =_aqTt^ u'Q >ي|lju)CnvU*p/'_zJچ|s6*4cnq|FɅ+O.Ni6< B2{=RΛBΎ0537M6?ʊ P oݳ1 fpא)|X/+-zp-b`r#vNY|cS;P2cn)ߡ%E` O7s l#sƚr?|RA AkXbh:*.GYm]ݍwƚ3tܫ\ۺ؁"* *a`( 빞< =|)V&|6ҹW+q\q%MfGG~ 7Pen/rӐw@F=!9=Y C#F9*=բK5GIx F?ȫϔR]@q"JC(mĉU6猨p7:c_CqUF 5#@,m}E~m@^?F;H93Όb_{.E}#1Ӡǭ|1`O`gJdp=XG{ksn@֫M䚴TQ~؅lҨOj4fB9MOoTi~MTR7]C;b..O_5a%,ke< #!zBvZ)XD2v|Ά7b[і>`)l}UF}dvoHFhT`+2 HkjB}1z[=+]8;uTʫ1ݔKb[S +dMa4cA=Ӊ[Ѕͱxzq\#~ H-#v%u-o+7ZuN:jo#1"RǻFMS Ÿóu>^oR4ZęX]KQt`?|>]*%`ZP DtdľԩXzk;2y 2ސfzinC3>*Zޥ 8sloUe(z.iQvU5ࡦd|+ְ~rE ^]h"JY ~a2B&qT,cVzkçnl>PNZӔU{ ٜˌFx3fLuuT] b+r`5Q }X[ѾKG !Ψ׮D1f׻sVS~HM!bAu:1*Fkis"0j 'p'p8 =*ō񁶡;mƮ?;w";R52AxToߊыlfPtb&DCCxJW0Osi#[ 6Gˎwd#jA̬e] R;^>dg}`L0^3弈!Hh2Fyzk%!# f[ojx+ނ /f;;MyjG'y/*8;\{jB*խJpJKuyȪ9J*Ha)wwhˣU fQpԐ  S`gn~N /vH6n ϕbDN~8]4(g8tS\80Zmq= oWLTc!ۍ HKdh6w:ipJ䃝OҀNzj'菺5}VӘ{Ǹ a?a.{GB;sp!6Wz;xѓZ?FtDna-g5V^pdJoi-x.||P,,a}2(GUDm:G͖ꡀP -n~NS.ETulgT߸u|]qDWaC>r)bEVJDjQO n>^"B/Gz J1k *Q~V+#,oKl +WC<$=T+R画R{-cn5y3# -)WGd?ÖE &ؙ]incvT*o<6p~mض*tRS)!Zr1AْY1xW!Н&fh͎LFwv1l#NZq\2r(mrQp0䴜 #aT0WkFb{'kJ!*GM ̸jv؍A^ړ"jxgc;Cm5Woq<%S vs 4*˃ܐs뽟}*Iua@Q[$h+I>W5gnܳ::4uʝ6ߦ[.ŷ?6ӫI -|ɿ(_?Aav~G+yP}:KK^ i~kp;ʧ7TOuuM.{z}GUd,06y!yg?"OPtвg#%YQQZĈr\F1}TJ.o=É.'u%y/2wؙ[9it}YDUYP j흚ý$]Ż+PyHE-'y.}u567krROj'yx-\f_F hJGvN%H(8\#Şx9r~Hsr6!e?p',*3deW4z] ;mJS= 'ky'w&;6yE1q|IC#^4OƐ=5Grak"osFG K_+]W_F2|})^·8^Zu,I^wYڷ1m\:WjŌy~Cp=Hu<5%꼞WǏ0΃|ģȩwh.ú@AZ̪V ֋LnjkG yɜ7]84swI(\1E/P1ǒ.DY J^5aFBg/Y{toߊ2"wmHi^7ŬU6#8ʼn|7$/Y\˝M*E^ cNbj\~\')ڎdNֱ}%695i"etXjԘ'G\cHX+&Z!)J_"(4[|-QkSI:x1b(->K<+zS'7f Y.VoV*s38MS7ڕB➠$G^9VZ  3gՖ3ڂ7a;[y0&4 _ҡp*X]H.Sފ_- K֏|`ybh_n HA͆d-~Fwl:2A5=K71Y0$-\{@)w `(14gS}}Ӱ'=PbIb^{`S4"97|o!?Z|"> noԃ TS!pUBԻRXsC7I Pٛ!:{B~dίщϒ[ 8ˊV ZyBix]x];Ύp\ ~B/wEE ^M^5P=xYHHQ貰R}VhXk쪏-!+y1{ |2rns3*YϧYe?Npm|l}`kw~"e e$suE@(:y/2#Pt5'Ԍ.Wbi^.~"Ӽ OT 0s.aȧw{kI6k?WG'vw[4G6b7SLw&6js̼7u3}VkE3fueA-˻`&Ό(=]܁9TQ{+RzC$kpnOO/ Ĩ94XS+;Vs&sGn;r>` rkx wVDZ"C$^ z(n6cAt/41V8dڡ9P/B_ĆR<|2ױe</%%3*qIbGYlm:1#A|9>7am,bUT^A8t;Wo@:Kow>TK=uZ=&CώHS;[xd7!O[.r$#rTY]~ό'f-#ï]Pޖ-Fb28 tح0S5"W*45~h_]߀np7ظbVI^jq\Ħp3Azgї*˵h{ 9{؎F%' E.m9=i,kW2oDwo\JkՖ蕄ddFZށt{C￞ L(ՄK?V>4/&4;%ߊg=S |>9(#H3Nwƻ_F k*wWqFoHOy@uZHiPJR3SE/ŪdX)n`fc3|tmc^S_{l|)(7jΐ uo8%vxy`7GD'Op+ݕ%113+J4re} {W;_bWo^fоtJVO5A)Mļ>|5qS~"%-!K5rUdSn#yrqc~F0<1ŵñ^sW#9بx o_^G,8WBNXDiϧk4t3Ը+yZHM6[p_X˔ek"/1g;=FxbU_eN舮#lޟUJ]9D'/o(/̫ylg?Lnq}>/u/uNZΌdC38H݃!53vnWeN&2?7Pm؍ r|?ډ'bUq.#^ŒR l7HȪuxuֳM=o\**&;T^9"&u: ~Ihԩr#nVT(NURԁ޿_ۍ~7!l!=$ZbNџ<~زߍYSUBv FPzY{Cvxb#&uy}JOwx& jGHl2aHm.n F6oɈ#')-$0oӢb61 T^J^zaS"mPw> ~2* lzHK\dm$C 9fd,Z.l%鎃d׷^/KhFݪnvΐ]!eՕO]&;1=/rˑ >e \#Sߍ |q-D^UxW)'rFz߈OQ?\3JF|}x~" F -o$Kk8#ݒ'sѳ*})pZȣS~@'}ͽAb!=WMSyԒo[: {NW%;bu85~y?01<|/w'6|Vlf2ñl/B8 [ '5O]׻3|"JNzG9[@ƫ4^R )\<\3?*(mFnxHv*Lc}A#Unnin#2M0ۓ~UEK 
-o~zBo^ZK1J9ZaUٌ.iu8\kƁRWvowΎC8Y)O!*k!!Izƈ^|kĦv|,TX\'COԬ۰rBx0B ,푬ř30MY0.pNr+j6%7R>aWC2|'wWߏE5X|.z  ZʧrIC%GHKtB|4=܌CK%Տ9z{Aט!`9{1jxbEQVU* \|.5._җ rƜ@02Re̺܅u9yOEjݝ9K<%v} S0G57m!X[L/D+CFrS-T6Lidh3FĮ\6΄ҳí}Ys.4dGagp;ۏ_~@?Fv/HQ#8h2_%/gN{H* /1ܳ\*voMwco]Njݟ^dD=LR(CP+{X֫f>W֕SΠJtԅ+A|jx^6:#ȹXP a4cnn7c5.ΡqQS.BGۡop1;HOye UũER)EM[#[OE`9@6-/W\ؿ}c@;VQqW/ 2.5WEUR%/X_bJCg;\=}='>5kO*~U>[j3tShYuSϽ (zz$r5έ_r>74#-15ʮ##5_k]OjGɑHrssy5wk|=|F&33Nbj*!=_IoTRE[e;u-k@qŨ|kW޳g/w>YYPVS=oS_e52JҞ{q5O!6[!G;6T|xRK X2* [&ykD5FnF0}GL7cu]/ǬWg%;.%1g0jT+Z#T9uRֱ}ߔ6A!!2|<ъR&O'=ŞAI7Vg-_y_ߑzj}EYR/B J3|)HT5sK?ǾyEtF񅦺?xH>3^ĺ,LhAϹ&oMvE UVښj񦄄.r. C !8%RJ\!0pT/08u5I3M(Hiu`uy{xH|-{;3EIC7k<lzU>9.pJH|=sE:ΒU|9mo3OOQ Aj5pꫡ!TžT>uӾny|-[YId(kwҢʇ\8|kt-UgezxGg:hv&D_ut;b&ibL؉+8%*rTţ?Ɲ8 {nGH!V=٘I v>a$zCf =A&nj83n#w{ZxT3lXоdp_D?#ߵԒ+^8mYqD_PAxq2ZOmvV6zG}yR9myjo.GKKhZîDoRz,®ʽLFQ1Vָ_kEгr "+ٓ,MvL~b->B-Ÿ(",O>B9tJ!邑;ͩ tk ϔkԊj!9]|@t9cs 7O VUlf&o:9sWmvEu{YZ&"mR8d`7[t෡y%ro9y? OIur:nj/$x gN=)|>Ar|)q&rk*$|=2 ϦMv{GWYDrکzؾБb$ڳrϳr#9މەzg r _ȔW]͟`8W)^R!N=ٴFŽħlwgpSNʽYBpCڅ`~ n[C~_s#1'aQ'9FZ3( 5_gfq~Gsnp81~[֣2ᩛÖ֤QT8I23ZAyp:@w)hmU=jOZn.Qt@=rMgbVYou=یleeTFf)I=qp|5XX#enAە7ʈ2y8O;w%? ;C$R<v$#h-;Ю62 _ {wį̤rX9z7mf8un*NVbõSEsn?7-gkN?q o rPh(-XԻGo^'`mOԋfR|K}%=*<]sGN³y13FSo[^rVۚko%2y_ 6k{X`|-\tp7DF,2-?b9|j9H_^5kjFJm%ԽW*8-U6}C2݌]ʹ5&3sw1l|kL3#hnHEQtK}:6j)A/?rI~9&ZObl vF^F'V09+js=Q'{DWhwl`s-3:K0oPS[s`StU~tr3D4QN_.:xWۄLӓ4ZTuN ֬^hi(]Em`m1[j\kd-N+|#̾FۍO@;;c'_R)|u y]u\ w62ň+еSFrNPR V{)d'.ݹ3 Oo;>Oͯ`n.uWAb9r5/ܪ 9e_ y⍼u1f OFW 5³{# v!%939T.M˭ Mäc h_q:׋cq|n>1}N썄fPzEQ _WYPR؆h+֡)Y|ARK;]z!wO/|˾$^}-ӿq4xC$ažQtѱx_w14sоoutdef^MD#>fxDk|Buo]}9d| :aYy9l+n|9mz!\QY0{k&yQL+5I "#& ^x B Fb>s3Qr >GE9Ru١nX.*~s~9y}|ҳ{r0~jiT -b)4Uّ̽y̘ ]"?^ސS{5ӔrJfCpWS`JFk.bCjIK|$>|Vn] O䞹oQv1VĥڑuLk'XU?Axe3H271;rr{]36I]){mZ1˘S^IP~!ߕG扌Llv5BׇXŕ\sC[zO7:~ĶzdD8vׯ?O$xDż5Ui\FP㡛FWa;Z?(r^9߷O¸>Q yX́u88<ɮ8wa /'YR,3VeBmueR_Y58Yގv+VꚾQU?DYrue! 
ʅd:TGipoYs5RYCtk]+vsPj6>Q|_Վxc!Վ+Dr<ʤkT"GgsH[!s*>?6B_CĊkqM~SJ;0?${n~ß7 G${+{$R̄SzٹyRB}b9]g?XIɰKBsgJJ+Rg\ZjCqF2BJHo;)|)ӮBZ*7^&bV19Aj?\OpX ~b#d@]0"+$nJkf\m}ɯyڛc;YԉxIJ(1Aji.9$I:)=>Vp@%V b_s lm\(1Zx8:ą3QU@Ҭ,vgFТ=`V@n@JwT]Tu=ĽqV-hqŜl3^3z߁׭߸n홝srBxzj¾V훸fiZVs>זYoٟmG19*7IGwKXbSZYQT 6K^Rk'2^jXLc[*֖̾{Yw(LOjrzInǬfädLN y{2WO;]KӄoOI[;Cȳu۽JnԆ6i'.Gbe-HܛWL9q\λh?"=#F0(]Ļj%SNaЖjW[fMpjp4Vօ %*53N/mtC>eV^S;?:R[qέ/ 86HV"G}{8/@X2Ok:``Tu r9; М# }<[89Juz_$id9j8uwJm+Geg?9j!WN|N<"#OybX6zzMz͢o'R%Ξ]/&};tL%o+fAYUhWjR}mhGm|0n+g3Bu5ijmϢt)+ML'r:Ǔ*Ů)uBm}zW S3EO2ffꝟf,>Nڏ851l߂:Y^ M(V3ꉫ=\u HpLAʼ@`o=,h((~ky3ZpmVZĄj;ι7c" >M$'C| %EgSZJDT,]\{Ao1Z84nAf%Feo2K7@d6ٱ`:Zgz/Ꙫ-ӧ̶y5]֙ 9+ p2 y澙Yir׏ґ,)SE^Yk:X#KWt*Ёpv= k˞dѫ,"cԐ:Uf| Y{*Aρ@jcUc59U?jsM=}…ō8OOrWlC|f9򇸛PÒ|zciI}"gWAH f/@{2#A6#?ߍ|O?ƲC;[;y–iwz+H@9Iϋ!EVx;2.N͇OP?ha@MuHjGyE RKʐmoz'v*ܿE^kw 3'fi3 ^ ɰvd_GY^Er#Vm,#`}9#=g*?>o~ͽ'RI>I3jV=~ fމx/  xˌAm++|~dPB9 nNo}뽞da}VQRq@r"u@qNM/4s3{qȹT&Poc:'rMZd;[`#mb'w-8h?&u^\}Fxÿe˨>l+ӿ} GJ"4|df+k'Ot`';*0g&(א'CN`_|ޝFm+:_ɿWuoU݈u[6[Ζs{gz(^;PapcrNN7VΝFHOk@NZmY33:򧱃wSW+Y͙uu@/O{WTp<:95|īYM;d5s.*L'`[̆(Wc8W]] YOA&K`+9?n|E[N{̜<竡6wtsS:=oCVi'Qcu@Ԯ4_ 5mxG|nuK% gQ'r2hR tCR`TcCJFx6X ?m o΂q5,/{h D?+؊om0yGI%Ni~Ѥn]u#4Ś܂vgK4|q.]Wь +1ˑq[uQMdGMo9zv4r4y<+=z;#p弼xN9>]Cuӎ|-^v'nKHKy1zQ w߹Er;/|8| s:ܑ9aƨO*g1%{<|$1(ߤ_'8 N2ؚM}I9|sG)j1o}}>BSX]~V m*wB5"azԎurS+:%+PFe-Ǘ Xt,m[*-gS7)ۓ `*Gb59$$ڄss<"BO N}r-ߔkɷFAz+Kƨ>Ud6d̅_|{a5/'e׸QǜJ<i+ŵj);`W26r屙/Df[)!~S5yTBRbG&voD4`WuF R^X~3 g`Ri?f8]1Tqhv\Ƨ^K֯#8Kd3b8* zKԦ*MrėUGVB#s3FX1Egłu௰Z2G"N8Y =CEDݥ)9uFuo)bN%JzHo5HN G6(uZ}t5SGFo h7#[F>9\Coδ,T!)su*yVHi!/+2+,˫>5T^q=bҼw6cSaGVHJwɫUv/ezv'i1슺]x]idc4sUV!|z"\;N"*!y7b9cnbX'g_uBT .r"3]K~> {bPuy皨;|^|X)eE \9A/rݘ*S|k7g+cVY EYV 5Q2tg^qمGy=_vkGh<oaU\]q)CʊRur~ igbf{>'S/<l@:M43օ񾞟sYS3CW]h/tv.BJ6k2PZ7?zNCޘ nb=KSIrR <PQ#+w|Qjc k{ꦕ.=%wΏe{Q~^t1ɗ^cy?An6GRS^8P^M *W3-Ϥ-ԬWsDjĺX(")l=ZnuKR ]5 !RH͓ʑܐorhFOY%ja)+TbʋXDwExiX~ Κwv#BeNj8-FÈ̈́ :Zm'RU5^ T)^TKzc3C~t)Kt]T FT _ CztQѿ5sv@'$=u^foK.{] oN [М,;bN}7wh\?>ƣc}dM/-,?j%y72SaG~7_.a'ToƳ2@SۆXƒ#)}87& L#\:_|ۡ6y=LXp&6ڵvZmuX;6ų89A}Tp+Uk3GC~zR܃la/kD%gh԰2qA`B: w[z2ţ|vj4!G&-'6$=Km^X,o>ꃽӳ~cF[t:^K.' g}x{x'%o̷d5>g?`_{Z8<63:ҩǧT;'sQKIMi#8lDk=ŏp07oARcm{.T 4[9OתFVn&hF5T_K|j6?@lTVmS/S_$uXO-rƞ% i)cq wftO)72#N8{}:AQn҉ŭeׁؙu]{1uAOWgSoC#d!#8av5T+-##yfOb>cnwI"-3&;)u>u=_s-rd#9c.Y[ ?ɐv\T*N,3#'?Yu}x@֮cws`[j>9]r&{Wo9޾`s/rmyzyLv];[8|_4!A%iYm>7S5di`fd#j_06/>.|9it9/1y U%C錳7<"f-\NrO2T7K!at5ey7S52te/Mֿ,+9d:'RiT4ul=]qv>1j)NՉy<m  dq{uϣ^L-pr&S*qvA#RCF_ZGDs~rqV+8u-vgn꟤6]qA%D -f! +ECJ;sQ+֝fpҫTjtѕżG^V$bPnHeg*t0XDxTɫFzEzpnFT={s/.t7n<=|_>GBGgQHGO_23>ZAsԒ j#Vj\zhRcLjۻv蟖n'M-؋@?pwɥJq>*FN\o7ds2*kBAg@"'ˍ%9{:;[! 
=R7x ~PP꥜y)zų7j Pw 5.臋A,F R.ȓ8=B!Dձ1r͛轛k˻FMz5t$`19B䞅??,*)/Qo-ZHlK̳yo!!A7JʼdiX~^1^~Oxg}9ޢ9eӚ^ZM*fֲxwJZg-SE2,a`o\M -uENShaf"ǫFO/rJpwō>sOOۃQwoI,{~: _ ed~-oqʙfQYf`(hfĿ]5|J;{5|}Gj)5f</'#i kohl['[^4rC\,nW(bUӨoikB~Xmmy/UNB^duFҘ_ej0 \^zÈ2M&m6G,9F\hO-.o&PQFFS3Vwʋ sB;t|$V5%D_17*Z+|js~Fb=5`q#ĆURT򚷷(p\὚u+q%}vq'D.UJEG8ŧH~Sg,a1.s|%ǨҨVzh5Rn \)QB%Og5/jzݕW&1U;?ϥTX=ܙ"WI3wNi8,RQ-T:\;\I t AK-,LZ])l'6t-C G v9<r5gLUKm!!B^s%r*bW {;B*ޖT(X}~W]j{U<īR?#=f^ ֿD'֏ .CB^yxa'[i6XE<5^+SZ6jBUPVg#d jZgsRaJ8PHռчQB3f+'k)5KJh>=?ŒSh4ځPN8&.xH+І'!TG*!STu޹\yջZ"-D01V$#5&qd瑺ʊzHR%dF ģ-Eԋ`x$b#ힽJI+9 "8W׈_mV%x 뽀++FZWZ{YuEHFbsS@6@Uu\7FFNk.GXF< īS_/0 y&"ڹsiO8f*eN)ד:xx%9[Zsx46eST漏`8st;3{UߪARđ#PRbᑮCN=[i;= 2W|&L %2|+WC*a "?Fʢr#W$kdMBd2lUΦwt>=@{A.dI36¾3> <=8F}<˷K^kXN{cΨk"ƂF|^|_kc9S[}TŮ\7 'l{s6U|:$~ˀEQ_Cb@%Wpkv'; SNT,j?v72pI-S {+pXATˤ~8[تT7*OPqE7U}|u)L[;yorQAn^sv?A:[PG8jxM yL2ߒ?];W_F~3ԓ>jhM`VB&P ɥAM!= MP~_N0DI a43Eq)U{.x/tw]fTdNNP̸q@CNCgI.zQs>$P\2E'ph:)BJ*΢`6H̞㾢#k^+8utUT\>gd߄s\Qurfjx/"pdV-&.fvhY(2ᖗ7SvfݧyyL5} Pk#'Mw5|CRgWOWjtCވ+[y^>0}FO>!yӔ;ױ#k0XKfw3+ !_z{+ f+Q1Lnx :h5h?R]5 chٛ3GC]qYc%sU|~2k˽^d%d)ΣY~Jz3Sq}6cp1kV"V~UO6%yrɉYA:g=5yF&yk7Tpsym{|;j;C,oc'"}>AuDNzVx&;kꁉQG_4EH[Gմ//r?;kىޣ24.Ĝ\7&n+IGW\j'*}F.!n߹܊]X- @i˟9hc' IO5눘T\i#czWrl]أ{`YY>B?X<ܾu9[9^WOBQWWh¿QwoAЁT8קe4W]&}.Ti7񄾂y04s.BU7;3&rVmU&A[%ߊoy3Ub^^,k&1k +* R.#Ŝ iwg?k*ir8v9LYZ’Wwg&jLt:,v5_@H>ǖCc_ m=ʪ֕;|8 J>rQyrNz iV9GRG <;z8k+B_ysuGG~oCcNkS) CF_2{Zt6dN̥䫪̫Ż(E]Fa2,jW cYBzU\~HH*J+G g+B"Bs +5=}XTcXUD)Qi?y\)ݡB|`<;GCDi>B3t\}yeWտSg8\CV:g-b'k,:qyhy@,Z8}ziwy53g_aOZռ#>&Võٱ~1jAE np aPn50bP;:W& >veU8ߚ[F4Bk}Z~)7'.O,"ץ?rwH\P;+qgOC`,`|mo1bUim@H] xY'$>/l6r@ ;kAVBr]])b{<+dʈE\{`1 [p@fӯu.+}+H[>Mz\,MU kMkܩeh=i"Pʉe4xB_8O>g\!9ikE3,u|ϥ6`ꕼucGUBba\'qרlBjP3paO\ *029kC&OVjZZ|kpL 9nwυSk>kۓ&P\[_U x|5AuEϪ|淔Vd~p{ϚRʜ֙ uF+ΦnPۈ7Jh>L(kGV^ǐ;|F 7s-cg{>QOq,wا/f=qnVo"ͨDSĀlmTj^=5uގ'Cq {xX#LUr|P}Q0+3ʴ%GӂΤ{ s2T_ ? ӡF-ub?Ph-ńkRhzmsS1%_V^ SW_&?Dnipp_V+I'%S+'ʊ8j)9\] 枛!`JxO朣;`:Rx;&*xn_΅v]Ώ:q|RQjtarwFa: ۋɕPK2F_k6CT ߾)Ui/p*DuXwj<"ůMןtDޣӏV+O5dͧ2,0txlB짋uͤaYbzhTمqYJKyM-EF琧4Iܹ2}PWy8W9X3T 񿾹3n=HF;w!d/{sbtl=;=kUMȢvgwD_Qs@\zR/afO7`g09ryZ>| ;'81baܓaA:g hoW" ۔S<Wo&3ӎ3!?LEzG[х9ws ptSYnSw*/w/aUʇiWrHgz]ڜO渊'߿d8pay5eT;P8=fqRG{gZHfq:6Cٙ1u'Кՙsν%C(JQQAJTAEbP51Dǀ8HTbhnA,Y*vm}rus=|N||z@|!/N~ܞ[LlG E Eɟdߵ4-38={w_^5}zvx״tXv!4H}cf/1^i<_֍u~Eڬ Oqpf77;>s<'gYz`Ra[ LeBJS ??6?OctJnX<%^InZ?<1 g]2uOUAm_6]1pxp|׺% Y~Znl-XW4#U2p&#Uܟ͎oxDosPd@u͂s_;44/47%N6L'g>ӯ[3W)bc<$4VNhu.*Y5ü9K3e]_ϙNl7Y=v6})Ԯ>~--Ot+>gZD{׻v ^f|N9I_l&DM(xD_Tbb \_w3hs>Ⱦ7 w䴹M4{ޙBѱNG~yAYXf V*OB~/~x̽~x]v<ɉ>p2gxM}RzW :,ODQD&G\e̩)gn<\Uу&81w<?DkKeiuޘI4vzpk歿{V zzjX\Jq ]NáRi|>!Mb[?9QqKܭUlhc^h%wѸ>#RuiξlMٳn#I9w[ ;hQmOɩ?dΘVˊl =9 udwD ox~X-rCϩ J(=|'?~sv<^EWiENw |c7b921vmahX@Z`zaFӒd~˧9&ej'j[AT%縨BEqQzm[" r*,$V1?MоxhE!y㟅Ϋ6s$-pYybSz]?4 A[MWU?}[яl)1eH=4h x8>n$YWC~a &{ޝh5f׮ʢ:QF\.E_hٙyN\eZZK~Zg-.yYy9&? 
dipy-1.11.0/dipy/data/files/ascm_out_test.nii.gz
[binary gzip-compressed NIfTI test data omitted]
COˀ+/A z%b Ѹ-0–MgّXg"\lx?:Py+7Vo| vo^D!c~G[RwШRyW@nxyd>JZBJh9gD)M4j<:X"Ԁl@N9 U08b,EE|^Du}'Y%y(8ܯyLj|3I4m"{x9+CwSٓ3k ݬ47CnΝvZbv pgGjɝc@d '|#p9dpSWX6:dr(z\9~Ti"#}3 W}=t{fȊ<\XV~OFZt{MTOw+q\_JZ) \-Zc0l͡-j}kE*?E߷]yɌOZ3:@Û*߀#kϣ\9﷬usvVvвt lbJ\ʹwX@Ho6"̆ v޾i/t@txSj'c7 +~5o-b>;sEsd#.3%t;\inz[Zpm?0Z!ʮ~|Wl>{ʓQn%$ȨL\%/Z<.FQL7NmrW>CXJ `+G`@?"9fC * rÀ]t*Pm~"%`/ĐUNqfXG4H ,g<(%yP 2 #zV;p_`?Vͯep2`U>Ԅ{ eߍ=}AɅ^9=;4AlT"Y]4C吰++ێw9jݻLsmH\;h!{ģ5!Qh[ʠkpG-~&[zޑz=|3 3]HA[AWtVF%b֫`fW3v ' =:CM -Zdѓȼ~AFXR0mC̷!@k7OJlIRU)FF;\+)Y 6bu< xTfgp7'GP*B=5Fa⇛@80zȄeZՓ2W =۝*q|2sC܌ّ̘ y0UEWjZ-)GS53A%F㨵P[G>/u|i4446'ݐ7_~b2,h6` =+a]S[!PQF.w~‹SUCzL<1;!!bȱ'68`ͯ?R|"Rw{% 7'N-"c;AԔ" oqR&yBBb~h)jRΚ- cMH9u:2D# tc(| 8+Ŧ_9FOc=۝od"9>=֎5&\u@飂Z1 ,?^9¶ˣ|-H%oཱྀ$Z*}UPeцG:N_O p`37HV' Z p|wA'x1*O3 ;.;̞՟i$LտRGh|f(1\?zd ]4|179PmgsV%7XKMMWRzS IUaqwZ9?pۡ +󖓶:[l04:<489̬hBr}Y}e]ʻ*ŀ¼Kq;5:ENxl>Fy5/,I),s;P^IژI?Qgb*4p$1H{O% OhG*;Mh ߧt#azJ:e:$ ;`S~.9`[w4 ;bzЋTNG<)!49m^f ݊WO]KCگ5d{Ң7/쓣-%DPM6v,l$XUgtӑ[2~Z zcJ5C!A,,z*ڢ`Ҝe 7n~BQ\@qW z妈]Ly>qcoi$Ӄ7c1o_@] |NjEyh N{/J}D.}e I>$2k-uS c]ǁX b72?}T;Xt`l=R}qi9 RhPj'E?2m.(CHɄb#Lli0VīCf"Tcnm·+s[C#^+PYwL N|q{yJ P1%_[DvP`zN!^c]:ɿؖ$|fE+Ìb?pc?_^DZvk2cP?OnٙճAFmɱ7%VQs'9ʧTbȮE%}TYTGσn6(0o+{ĶO^?kN>On?+)2Ȑz͋ϢSgrT?[D9QUNTN^~H wrBSu [YD%מ3& ؇'gbwȆ"Ow!;ݙ>R"+yI뮏Hh:/Eo 7d8 [\P['j|+0Uh yJXLHbeUiHug_tov? nG7gm Ot^*5[͊‚!Yiߐ@;"L0CKi+\Eqļ;n 9ɳwfPXB V넔VP~q7b Q>#J){zy?^yR$D7ٲR),۰ȎZ /b}J ]7P? +YCng"l'wd;t8iyx3M7f"IwRގ%{h![ c't|ksyЯt^L="VÆvI+ҙb'U?ԽD,GI!͂m]`zo ȯPc.=ei#wdwNGKc32)TNo)))&X;Z=˓=h~57 D)gqcuD1.ax ?clmu%d$ Jȸ TOwiP!}+hsr_/Bһ*ZQ95r&W68gK`_J ɽ֏s `?/ (BHa#܌Š[&p &EI$p[r?C boQ;}xɸz]i_B959~D֐}^y69ƧS.V|ֳ<"Ĵ3 q_8 M75P|gw=v݇j \wC~hMp(oZA}cG;鼘\H̓қ3_'?<6!/ΟHSh'RA(,逝&Dt׹U&k3M{7̶?q{Rc;ԅ[yx |,Tqxw`[yhw^l$flhq]܊OjnD>>48 /1=` ߰&ñOٹ@bUzp&"ZMPznO\/85MtcmWG蟰Q6QPgj bS;E8hh*c}m&wC?A3waN|S ?Ѯx6G@`mw}^/1}Éim>msrit؄;i^L \ĀOgQr(zw"4 e&DO`s%xBuxQZ?-2sJzVNpΆFQt Z=k"(TۢEʘ  5!g'gc=A*atpD%<,F5`e (}Ө^_d;ޫyEFb>GxŰ|\J }B3Va盉;vy0"oy#~]~H9+`Ѵ\${"iK;r-<{qVՠ%^u0xӜR+f^qK}w->}]aPA]W%&3uQiL Lt9Py9#{ -f/<t(mx g*8r O7mDV'Φ/*haۋҷ ᣝ1x3JJ^CKƹCfmfKo"p,sfq߉K0MB2Y-8vH\iBVM `=8gv0L4`C1zƴ>>+|6E.2/dQX/_sm n+P~Ѻ3l=Vn}Gu}LXiS'n;{<:@nOşTMOGa'rsHM3JG۬23¦i}?SS\)[gVE'w aNSu9";M9ZVd) p]W@~-4 hibQl6hu2ryA+#Z]3K' ש/v0jrL;*SP9b IEa;ÙJ><<b-]o9(vzHW\/CDm]{i7yL0_Yw2`RX[Bֱl&PQ z^\5E5%Be1Pxf"_j> ᑽݒhn|{zr@:)tސϹBrًvJH3,\>e,_!Hڋ} S:%9pw@x91Y!LNuaZ@N1r~2՟rԴsOԮt9^|8@l>>k'?DxU.uWJ -[냪$pP~N]|7)|_F̺+jK^S? 
j8@azJ0/G =3Vt?^1wzHL~Bb֙s!0^h0_$[~~I8ud7ls7>|ius~^c)DHO #jAo$0Mm5CXjyX!\iJ =r#u֥t܉^ m9MCXoMrLEy <[ ,ʎX.!c+dkBNKO+aœIߝ P&+?zƀ2`:1$:)CB}:$Gɮo3/ Tʪ:($VT禬k ^_N/bۜ9amO;7nk1Xٙ0fJ3&ۚ *܊Ʃ3Ok(k8z ?: :28>2@SFyxA5_WoވJRzRSe~`0RDS믺n^ O9N*Lð1\HsZl3V/p` b忐aO6CrIfkՇ>T S@&i n?MHBzvXyDw@gsܪ>=݉R}ѿ XL[.}DՅfZȈL()5ʞ\7ǛĝϷc^O>=،R{$ՁW_FA6YD c|t@T}'W3@?h=Iՙ-ƂQmM'_*݃Z(:H}dSU|Cn~34m]C0X9ȁ Jt+b>nXWB]y @x^Px-0f<9~8ڼ6Wv4k︱LH5Z(='7Nɉ=y/"&㕒r2N'/$-ǴUL5W<"l|gEځnpKp/ʥx ύsOCs3]ej9J[P<3 8 6UlO2VG?V6(5b Bb 4'$`Q\uP 2%=G)ʇyx.#0'2c!P8+Ьw`ySn2 滷ƐՊR/:Iv:I0\P:BKgwR#345ÍqK (/{O!>j 5A i#!CDzJT Gq~QrF5{k!;vm'uJn2Ck p'M(<4Q^?x6fE_0/hfwS7JRw?q-.OvP6OwXxc9t|UX@QSiUȚ '_F%Lp3h`Ut dp pԭ4k|&jAXQszP̉)籯7̌sTI/R5 ЉkQ,]wXkӗ ?h^]>SY)7hòrs|2$TWá >1}$Қ[ jdFBFKKRX&N4+:^J.r.B+wFXwrC?Jf1 .vdK4-^Br:  Zn ^v=c<Or ݛ3u3'ڠ'OȝrdUInhFi4zF5h;7$v_]?ʻAduPu*2pm#+a~>VPL(1yS)ޢ1^БܪC!N+-tUEA;OWd9!H5_s\-3#aPũx_h#__2sXB8 N sA'ۗ`bhm>NPa`m*|ٹA’#>f$1pv4QQ01tkō|Nd,u^EXR?0"0C,t.ۡac(W^` ow߆:b KwnB]ZFݿyf_f ZNp и9]a&..]x /mބ}#-` MWB 84Jcvږdxt92$E p_!8S{@` Z:T;kmg|:2 Mv7R@*R?&v\8S;eͤ)>g6CG{h*<$LDy<$TƐ!Sd$>  )2>k<>NaY^g}]{\ǭda9*z*8]+L.Jhϱ~aOmNӇՄKij MUQ)Hi a|Ooc;Oat;kSo6ܭh*CfM)jsCO'Y FW> O8Ͻ"̚Lz҉+zDK W`j 6P`;?LCNX^erd]_sU[PXyBJX$gat7]+3<|Sm/Y"RM4U$(9?;Y,@^o usX5ӳvD-qznicȇG)OC_ P$[`ŎLzJ=[Gj(P8WVS/m 3.qT?nsWm)0L;hX\ xSuuPvyeSQDEXύ ,S+ x,RQ-~lkhn|Be?x^9.ʍG6h;C矾^VarStN6aS*`3J ͅhkjsDHbCԪAǣ? \pЯNH`1~T[w"=XCf! A2i=zDeܰ^7ckr*j|qFIK%<׺l&$JE<iOf ;BJwNp7 wr'#4k&tQʙ$D}hʥ}Nhj^{e`IRV}o9`0E[ ƚ|!aƵܧ __G {[ׇۗӊߑD:B)np{K&(R_vK[Ae7o 1L#ugA^$濾gcDWr%޿֪O6hA>+kR#|K'L4xUy8*ݮPLtSp?horzfD6;5i#a~*u3[XV3q${eQ]q*xq\kiPk:~BL2I`~uu^vjԷw8s'!;?9@sn  8 ݙ?  pkMb#$55]jg*}GBgr߄I߿XDWC=sCx: # L w05S AGͅASE(dK҇[ó߀d Z04#€0pta4Ub?K*tħg2ו)]lJpr~MJ%f65Rz.pSG (w'uy0,Вrk0Xԯ̓]DK2rK'Tߕ'8w/`Arl\_yo _q q4+`+\?Ͳ%|6o2rHIdM$ 54ѝQMǧ<=yO"]pqmkLnA|Uq%]?Yb{ u=M6Z$:q}~A8,Wس>લP/noM8V &)!_rMu4 \N g㎠Qt$s!&d5/b f8Eb9$_> [iiTa\hN=^=tN*:8#;=zX`xn<87$HX[@wi761(0ptw=>@4vG}e*bAz2;(>b-$oۉKRl0(ó$1s>=u4L~jPmC 0cvp#},+ԏ_&Nag5~/h9?b:?WǻDA̍V,g4>m&ďsh }3C> "SYsB%ز {<=ÓOOttw5~X^&,5Y Fsױ-Q%Opmufz t>m`讽>9=ћaWn GJy˸ODMzybȗc7wS93|k6lg ݒ=qތ׫ J3&-!N#JstcqyZ|~!7m!Mvo>ΧC>yrW]`m=@H?8+`X&wHY GZB2`h۰ hm>JQ\E|kjAU"tn_ɩH\d4话ৱZ@ZjwIq^w{ζkA9fKܟ9+f(XH w>Z +PVgAs~O*W 8n?n +aCJATOpdmfe;P NM2/ EqBտT-N;IDk P=C) mAH^0.Z`td|eefٮ(`i{OŷUm!}!?w8R~wK2*5z=>.݁"?Rs\>R;y?fTÄ@Q9"ZInns>S17)_TCڬ)Dml=ۖMT8ijs IPӞW`Q6n#4юJ{gN)Npik 3w)p1Ҥ, Akv/eU*H& !҈Hc* ޣq9*p|ّ{/ŗs~AўV\)}Y'(sV/Q6(Iqm#EU}'/!r/~8NK1VI$?㱬o&EWdm7Js~Q5,}!('->!7ֽ^hk` O~$M[ۉt|0nwH:-Na4bnh/V8^n'-nIdsQV˹WN_8߅b&`Od MMyfj"/s/8?XuH~Ή@F<%WŶΔ飪#']@]](|j Og&8^ϣЏr%HPĢĪ3PtI#?K7M^(~э9 95a)+[">+ z_Gj|f9V@cwю-~4".3(G+Nܽ E(Oa!Y22!:Uv'i5OC?ˎI+C,&rs]O}[9HBkf S[ևޙw/1`=\ (BQHQ9lX #y#9A}2H2&yaE˷~!P^ 5$_aeBI+M ~?-9}TKPz'KMTG.zFQ]RR zny-_q6wGzƻ,@cpT8=нB?hoz"E9Q!Jutw$ Av(w,|*U`gNLȝ.gD-2u0OΜ8GSIpC@|YFv5֭%!Ǎ${N, {qYYTW@f' w~~;>Ir3I>X B8Urw<1DˬLᡁ{?-792#PʮB-$lT_ggtϸe\jJVJB_7Wc#dp4-ɵ.^w ^F<opи`r\B!8===9yq_?GA [vCj Un+lx)~$Jߦ[o+F4&Z~סv^0ҏ:#y^?D37^wZj3ߩYDΞ'te*b(`xp/ ')8FdpVGwߙ魈1Y`[T6*ڸ#NP!Km+euT M,?.ЄS}#D8D*CD>Op\ScU5=g8O}Jv1eU&%JQ| 2m.ş{enߦNb̴jx42")_=Qa R֯9hxH2BdnJ?'6Pm v#Ա3'3MAv;gmi4(.Xx|t9J"MSAFDf6G'q0\X@%>yT05 s!{?;T|2!+xƞn N%?*Qk ΗQ'@KҺ5fP̼L7x.(=~N/bx]$7!\gnF3R@_gp^ w)b+\ڽn zz-cA=H%eA)'xY]]Ï6tH@SOуp2%@a%TMxJh\>̹찻Mr@]Z=!T=s=:eԺ؋B7 SEJ[iYPWvkҫLp)R&87Em* |J<3O=[o9UftCrt>U}lu@tO?Q~j{_[|k p,k ӭ('ۣY y_KDA<,@8hE5ÅV LCs_"}cU&3 Kl a^\ٳA2o}!Ccl>˂, u} my5Jga"Q]#nZllmNDo=AVb/>DuKIn>@EA۽| Q"z(4QAXw`cX<-2L7g@2|J'M$epVd#/JY"ujr} yݙ^r~\L zu$oq*EpM >r&5֠ŵO(mY_'fZ*Aq~ç _MfE1J;xs( Hle:s55e!ocH~'zw@*C>yaz2W&Cs{!7hw+TK*<?>l rE V}iG&&Tj-Gs#/>WZT xU8>9uBPn%uY)G%xЎjtŬZU⭍/ĭ|]4X p!.3=Z`+#}pjNԴʁQi"qaa"8ǵb)$7!G H}bg+ia;TK`6q|~,͍]fRB"W5 eq)<<;y3 
$LPu۩|jȰĖAAJ_T=(EN/Z="7G1|wrTs̟$0T6iS^Ϳz-#o>x փ3IyT'sۘpe'㪌vKU(h)*%x#C=PZcuk. l!d"JJ*#ЀnW4%(\b4Q_A .ђΓ F|ݓnRјV0\k] fQT6N)3t n)}gpь1^l! +[\ߊ#(D4?Gpv_jx0)Ou!yfo6}zle0G>ؽ#gCJJ/_mD:/vQz*9cz7YoO9*+qL]5_.. cJ7|~^ax Ď"nŲkHŋИ09YX6a}jI4Jծk*.e>Eȧx<2SV%怵㯩gAFNW"Yo5lL"qjIi|A8Gr?|dd+?e?zS^=d]GuR.KcnK]{I5SQ9`fݡ4J'F3p/8 ._XΔK<>vjS˭k?Ǔ,1tgX H_^C3&eOV: +s]Gu!rI"h8K?G^`#Tw, m ٯ|'k?9T:ov0s(#,{(,)O̴e љq@P7Mcq3)1[ɠQӧϓcӹV͵P!F>F'(!kNHj;czNȯ|X&J S;(Anc'6뺎3Pjv|&!ѽB_AMB(4n{ ?2; KAڰ Np~uq{_SKka 仦5JPT}'h3S%6 ?_do\:Z6 R ύۣQ}P.OY \7))ƤG ;(X.&"7: b}N+3YCqw31Vi6ա)-vp qJ&TWV1VJ"4Ƒz_[0̡dy;10J{wݢ\R-~+b1,u'xCO zͯ |U*Qٵ?/P4{kt/OM AhsRdCo1VsP_͗z%woc[[hy -#\ {)j^=E‚{t9c1\s;&gژ~Jg%݊RެEl}2qUX>YjƇkq ;)s R-^=DA8UAWW UΧ~mǝ,?/B9OYqoO2$&NjM1e*BcW@R$Wb dz{аo*̞*Q;A`[] ;yoXýh .ͨh-^Z+ wT[]c BaԀ:!ؕ(>4Yl^<@WcpM%L${ L%֖gu3|!~G>ȞWD-Ց?ΟgVہt\}, MUHyEtu賊'oϯ[|}9 0ṷ:7?2O4S=אG[;{{, fQ+mq3-1.Ќ'My~jGtYܜ񹤜 I-zp;Ll;x+ c۞[c0oDf5񽬧{@:#vP5#amxJΓY Z^|}>x,w7 R#w?e{ճڡʧi'$`._x3o/$|ސs eZozBM?W;g>^4KJwg?A2}ob0|$䠖;0;APʥ6 0}z-DZ|Ղgp+) KMЋyC]s^Byک(}x*5!eIQ#i% v..%>NFYgH/^=?^_:xH q)>FUAp"%Hf?]yfKW\$ȥˮ|)ūFnY"~,DJ.+2[_gCmو Fr`FR 7-t@d+8(d>dN tTC*U"sߓ#:69%7T1aٕxDCCf瞲ipǀ^pŃ@5rM>70Dp5e>E#dec($SZTi@LGqseaՀK&Z5S*Ͷ[<f]|,abߣY\z$̸}fVorϾ3,Gzxc^TO\w[Q(2ڄtn4}ws8%v<̇Z;, ~$Hݧ qmg+WzK0\ ;($Eo2IsbˢaEĺGkXq*U᷷ ꝁbp,ODu43cvQ@Cwl֞{TN&UUҺ|"_I9.I^V=aT&f/^>MV!ܟn$[ ^kQ`oJ ڦ)ԢUBиLU-=y@=|ڛI|n_q@D4z} 0ڹ/0ΐ+>_c biΆN^J?o-+`ڗc+LᥱPJX{Hi ` Yܸ8zGxww DSOR9k^3V kpQ'nʼ2wtO_Rs>^Q > !փ L -P]3Oq$~=ES}l1MBMT[CcIlT!HMKvn:쀉ҝ]-|X T :uoSX Jiof - $?jL ,$zAEYPtZ+h9Q5^F1[HJ wu' ȧ=o38/; uo ¬%EepAkӉ0vkq+^yVB$eβ> t/X-&SZw-}>{5B\VxI;q, \xRUe*$<RLG:9l!0z0:B ԡ[ξ)ph7[d3>N3:w) ?4౱Y4|&Z( 374yC3wL,\`RN OXrxUlyㅼ#ь/.X|?ęuQ6=dԆO+<%u>b;Z@=\#MKg߃G\>Bf3.0'8a^{oC#؛< Ug~E~*Tu>]a;5aYZ'kk40ڂC~>51$xRt";yĆth oJ pTfAY=Ǡ}~"R-+cB55@T-n&e$4R_^ ;.͋P8??W}M!r><&l?MρҜY|pՏĞq)5#}': h377{i^yC̝= ˯6}޻C+6yj1 _M٬!sdyowtX\= / eg4oBۄ}9\)'6w Yat\`^ %8y%EC9yaI,&~XU^kV+r ݶ:B&Ar'(;ۊ_ *u4tqCR5OAFdL_r bCay͋ Ta$?䃥~p ⥑ĕw7AY\o\yZEABB} s<>{U遉gq{$Z:vsW8cР +&G&* -Nu3HȦӄ0tP0KS_qWy1'i^yAM^ЕaT-xZ 乽c* V0ѿ?\"OO贗<yp֜Tb@1ѕӠį;)ceδ0/FK ~5_t9nC'S0MQ䖛'#雦{g۶}BCP2D":(dʔy&<2$yT$E)"2d(2{{?}>ֹZl~ǹm(zFdͭ^mELź>_j:I@"l"du=QyIG58kV{3͉ /o??)u# zJ~(/ BMJ/pPxX9fADLX{7Rp׀k&]v;56\+f*psMwtD>a 5Tl9 -:—?_ѹÜ< fN[߽nV ~R|;"MW.u%~Um k+;25o%& ]5Se< *SVTB?7bWD(*$͸f N>R>MS=_ݣ5Gd o| <! ~3=N8#Hf< ĭl:蠙y.Xl(@oW"J:i wfmx AUU8VJfnn3t|gBo!2+a\3[VKĜ~lC u9Dx|%cQ)y R 9Sm 7{Goȶ@#u]LPa1!tOq"#)߇6\;03oQ3*GqgXO &m**K%>i0CN;jF|ז}뱍Wz9kƺoiWI_mo; =c'dG=A6Ăe:u)tlXqZˈjpeaP^ pKh`Ӑ"U{Ma\-[ N[s"Fkf9WPKQ,a\#en sZJo&|G2}ع*wosЄq*|b񐾙ȴ<esO>~؎XB k"8WOS}} :aQ\4K`%v%Nm!Cπ,/_%5мr6^}j6}x$>~" Jwduym 34R}R؋$&dnpq('ć*JP8_;(T#;]S^HVEK*v5q9cL8ly'~*A".PY +ަhNx_6U KYl\`~I7([ry7㠧V0i8/v \֪U? ,Y& /^1:ʓIy?SG7/OM}OIѢ+!P 0ۮ@Wmj רfY@D[wګȥYbhHLe\E8OK}{KHAzKX ۷?zϧ~REkIΛ QOؤSz&1W,C5 Lby_0ghdTl[%gu@gPVؠXC$־+b(u6ܶQs؍S"d{GV@j;#(}9׃^$AB+^#.OjD@F@2CLXU\D̷Ңpve?ރӡV?5ri2y]^T4δ /6y1&}md,Xf']ȹj~؊7=aW:5d`q'n 1>p7sr`t$-~Ҽ); Hct,+W!Q ޞGrMpL {P:3YwW6 '"q]'xN`Hiʅy\}:)ŠQ}zɮ T7bR/2p/_9<`\v s*9KL̓ڋ܅;}J3غ}bθLE:?)MC᛫?i-c!>yXyMJ$(0Z7'ޱ~syI\Ar.@Ͷlr'gb~ʾrY?p%>|k'Ҭ< MB6 }!,=YOH/qh2>5GC#o9Ys*a 琤!`R涨qd r/EVСI{S}Y!VZR=Y|<$ObɼL'R0zoקv]z/>0Gg ٕSwn]z_d@LS.ƥ=% ѸveLjZ ;nLAB܇l_K;OqqDASSp]^0f|OJ^vŐ^]o;C_~$h5>>/40s1@ώ'UxyzIzrοk {}t/̓(Rb \vz`iϪD6{R=? 
oqk..MB(C7NPEo)@ZLJz: O=kv%RMD\ (v=L--= h`Fؾ ׭QWZR@i.ZÇ٣JY{k :7qBzNɧtՠD No\X-Mq iнB;\+8%7 ԜɅ0:o d ƌMS:=sT*(6nP ݈D IoKnP2ˬ6N;kL)!P}6^4s+hT\t nxif9+rC,N[V\~xpr|׺n!!#Уm6E3Pf:=4g% ]\0ʡf93Dy&j(b@GtBz[9*yq8Io X=/$mKwρ?PnCҿ= ':T^ .{ U+%ywXԖq6 ]n;_[URKO군kxh>5Զ~rSNJ"/(י/yX*+|'yKyy:FX`|D} :D%ZzytQIIӗVܗu+x>plqhv듂97}ш#/`7U`[58yDNM}.i͹,` $,Fve[R6$ ~׹{)}Ͽ}?s@7ɉ1݂$<];&os*_`~u )E)>8Mɿ# XM^2YG T<&k\xl>CiN'c_CNIC|=}zyw~;ܩ{Y &}rX7$YuP->l5D+4?5X?~,!I>6%Qtkxuk؆-%OI,f2fu& iQOnR YKBM*us{&%\y^O`O_+}lݕNtJ‰/vCӊamZ$Ґ@)?ȀLRdwVV %@eĭG[#Ns+4?$s )}r4|PJ5~+9\REm.W2%m~ffwl-a5P;L1jd_r1RkkN : r0C"" ~-[8ì[-tbv0מdPWtO}q3k/35zAN&p%Vc&v o.ZLu9]j钇h _D#]2o1^w'ams%l0_*gzy3n;!š2trȆWde8eF=bhr)qM*@:Q4AxW<톨V`mF7ev g\N` ٌIHJ.  # X:R9>AZ#E3#zje ڒI{Aq5tp)ź@?DmM%_6ЬAaɘ}txYxs@jh޽а˗0o+b~zFyXVT1^eWs~fKȅ?WS7mbn%5<>_P9~nAǑ'x=u?QhFw5)i@g*w]iP@(:s.)l]J?NXbK)9,rrUw.Uo`zY7z~H VY5' eb{b-_v{}k},MdeJX$՛ B`$qIZvPD砓0QOѕNDDeh xprh1_YmQ5MaR% |uj):'C颤j n }wI`$﷖F >L@B F<5iZ$d)`) 1wԏ}qe {`=u:lYsҦ!iS^&@mxQ;HlfY@pܑI{SfΪ }'f_O̾gϋWCPt6_5iahQ{ b@= ;usdKӀޗ(*ڢ =v#d4\ֱon9 Z^|}Pl \1mP;Qނ:IJ:e=h\G mC~0ENP'.,~}ӃH_=qDG9 ` gGM/Ii)#gU|L )EuoI᳦ \&LV@Ó]f;A~^5jof/$"ѡ6A/`M8"Ry"7tz>33U%]Ti%ػ >W•a'r=i<$T_d_ċVD8w?3l8n(i`c2 NuZ<E)e2vhz':уmFhMWցIw&X ")mQf2)QxϜ&U?n~.AGXls\uHԏh<tn$MBM*~t H* kw%✜qlYYf+"@ 穞wA-eyX\rU>Άb[Nb@iI3.KlWxV9vq<[XYa8px[C+^3%NJZBWU$)6e֐AnhX=zcx&1 6(RK ^٨GW ]d 2h{߭71l©kClJQG ?;*`"~$>/È%/Ӛ 24y᜽Hn`- p{h{UrTRۈ)߻ ϓ)4yŇY,7 O3ɹB s ds~\鬟R0D7\*Vߣ~} *ThĘ w q^l8!K](C3߃.q-lT1C7LraF@D2ݵ .UM0fW'P\-<;[d@.ȥ|=꿯#"Uk0z~ "3o7kJկHv*ZzLL(TV"㆓BŐkHyp8UǕ\4M,&IbJU氙qg/󿝞k b|6W%受xOLOq%1>BE pw_ZS2F~M )wE;B-j,P*2 t:GPHM듁kJh1K8kM11SR*F_W# t@IRg`Tp>!l'd/Z17^Nj5瞮<V`ʻ*^.+lҽSo?RVm-n kzSlL/Xa24d'F~D4Fzb!Howk}deNck<h\%U]OඟJީ]_8p|ͬր~_ \O>ҿVa8BW0!` .V QA:ҩ8yu̡WGvJb:tA]y%2ݟ\+X~ФR6w``X)4SI esbO]&`6IW̏;5zD|@.^y^v/9Lv!{ Gdg^XhPXv|sâJ 6];vxZ~Q?0}s q29`&R$vdٔ @vCU#uzБ=c;_tchF&[@_塞' ɜлIlSrκX;tRr|Š|5C*hy_O"}^+N3Bز0;ef ^qOk5e\,)8ѰÖ~)m#M(FXf%Оz=`{ƍVԵB&n EuȆ{F5CCșlñ,=b95 /:vu/a*~?CG U$_!P $ބ VqO8S5)BvZy7'ף:V Z` cv `۔x]Kge|G/ )ϳY3E패 t },bv%yHvD͠f>7zJۚ 5;"T/i}?G>_Ohsອ Ղp -i9/%ERHе՝/%e"DYO>GN,cׅ8CWRTxq/ )9EguU2Q=_X$?7)ʃjKX(y?)Z֠CGGe%n۲Ky}~JΣpj?X=5qs灎=sYxPg aZ5V"*O*!v h܀W"^̙z]7c]'Q|>g2~4VR̼ 9 (9b~"荊_ &VMa3X`%r}: D|͘bohaUpʱO ʈß" &iSM-Hٽ84*}N2WYlI_xkxOl[kJ(GO,36x[0Jb_c_᫢#).sv1|M[=XM*]S1e:Al6qk*`.p翭鼱l|"`B( -(b¥d 2̛FV|6txOiKx?R(S*.WDžAr b`+zNn'zІˊ\7A˝\#BZހXg9,߭m42\#)Kx3"W+2vcHLLFd1~xO\#tĸ.3"7{" ҵ^ao侎tn+S i8Y`Xβ_;r=nw>z|[Ľ3egZ+Fo+i7ې̩]e)*G@Ţy> Džį{y(&8,LMvӀDD,9kguYQNWZ@w5!>)iѩ> mX(l59fҏZf85k ƻHbܑw%}8K`O'+׿#ä ~慯?r/۲82H2elOד./_ЌXp6:8πֈ+" "~%R*/pc{٠ta"Jy"!)͙EX@dd Afa-ardɎ7?dQz l뜫0cզ쐞UoUl SXser%^OHnhY@1ڣoy&P{8 ԧW(-2KIe+ȶ:Ojnz 2Eɏ$by+f.xђ+2u֩ŁiC.F1Lޭc {|^ks';jK"ߙ+ZಛHĥl& '#_H  :{u/\F3WihǶ ÅjN@:CN«kARpݲ@oxs P>}P5ۛsW!)܅G.\/Ug76 4|a9A;tؖ !&A&5;CLnH6|t*:>ն{?P*Mr_#+:PTiйᓟ֣`(M6ʔ#*6#?ܼPlD<: ^xv]ũ[.U]+m; SD胿 _8(`r? ;mOHv$TwZNB%giqmtK`5kAwk.4))8BȑlkK[`xŰvxxmTc8j._Oɻ 3?s #l4L`U^H[c=ޏ-!VE\QSEgRA+>Ŵg70q.(HsBed#t+kZX\"͛ј =q뗞\&k:j&mM'2 vwӗe\ˑ݃߯r|@ ٵA(zW}qɿI @pC|y[ Gݼn̎d[rua Ȗ 毩ߎg]֮ˀg=QZN';? 0%70P+򋰧~\xᓸ\߀ԕB;?IH}!3sG'ЪNW`\歏(D1hLzFsoT n}ϟ:y2yZ)UگX|n zgjk?=ԮJ#8{DY#g{5l8mFWS \iGo|~9/A˽H~h'EC|g^#FBJpGяPqR/y.KKš÷EK8oL{i}â]vk1!S07{ԦMcw=O;c"V0 To<'^o׹90)3MT.;WOAWA%P E36sP_:Q%)X9Uy7)#dsE/S}׿P]8o$T,.F0ŀcׯ;r+I$U22[fJptr>6z_V@K?;?N'_=Q7|o8!ǚ!P ;kXLH%Q{aP@_{ ?M7BeNf1bOæn0|_KZr'&u<;m<@0l׃Ay-V"ebUyVߊ3<юڝD"VgFkG@s6):\ibç/vvv=;s6V&Îm!K5<ԟ.P¿)i (. ]"T[LXMd$lQv_ax;I_q ^W-f9u|v? I:OE?O ǸFL|NUƃVx.w#]/ĢpkHRQw[,}T Z!(-zKFHawԭxf{0VHmp/JM-tL}d VbuWsX_)sP%  8х/q~aY-\l8Y,j GOv7"%BɓUhÖw\1G؎ύ wlZ|xݾ`M5kFkT碌ǕKO4?TLi_Ġ?)e#>5}Y],V1:9R֓aǒH{=q$)EGGpw ~čH! 
m`BFFp3.xbPh?vKW}\ϑR=-5m*T15=Tw >>դ]ewg!$$qyݛ] ,sc` mT(N_w۝:js $}W=>g'Jk4qzTqb,ZyN_Pny' `za@8pj}K+(q/˪gA8o74㫈vbv[6 : 4#vGr*pʟbvh^rjd$K|K j\N5@&0Ϫ'+vt}x@imdTO<_P P tCI+?)g]Gw^'8ܹuJH+ݶ$q\s=p'b- ?дb %x-fOuÙ"*xlF\9tM%m$v`KuI-;ê <\R'F5=aH\\?+7pRG6G=Q$eAMz9xCDRTEIY3Yru_#6s:Z< sݩ8_3̒ vBe~[_ ٵ ~gGS@% Nx'p2}+UX2}oLB|?UjzY6AR#v =0]n*?X*ˆZa6?ᳯߝipH[$wίY |s˘;v$`%OL[1 |ĺ7U9HșTOײuI\},e`+wZnH&'h+m wځt@aԗAp+]#xCf\u?xX/1M_Y! 3jU>oY}>e`hQ5'&;cΙAO>Hbg7.oAٴRPn#j3;b 2i׏_I'>m諻.H `Qb{?@hx&l-%mz?ԏ_:C۱oB"<Ƀn0&*|_dgiC1:u9- W+ץ-y`;$#ʦ<%sڟ8xܯsh*J|.(!0LĩsA-yZ>X}8A ?Y^aIR.bH\k+iks1VfN&6ZSv$B_e3ZQp(ڵ~၎WMs^z[6lNa3oXx!ޥ7Gix# rx^!`w.hAp55"/NؖQoptQ6TdЂDQ!|A'16i¹)nkX(Rܾ 1ǘ28 PJ9u?^d\#C Cy 7<@Gf'945j EOjL(|oVI vP xiO zBn'hC4 _V|$x Ehwm=ajaoʢ#Bwe''+Wv27!y'U&c%]wEܴ;iqg!pYhz_c Ԣ1[x}m`X=oΟd 1aD**C%^(w4`[N}cWD9zny >w:֢3RKR݇Ԟsm7{"<o0 {輿 /HShE,ѱ81"9 ܈##!ڨL?6O4EFGۘ"_uvز-5}{ W|vcaNJӶQi}MB8tqvuW`"ހv(=^?ye-07P/xY;f{sO~O| zht呄qO݌&\$K@M`ڪIL$6%h/ 7ql0#P!8~Ax%2[B6E.TWz8 }m A, X(~֎1"֓E]B.{I "MM-1-?^W[fC!g4%~M WxoL,鐊k G]cy8(;B{c,#abbS0hɕ$ . p_࣫}9#10XgV+ԧ.d ɍ<`TB'NУ >B3+lع:SRp_>YGsm"T~80W'G]{]6󹢮Ki"O{كpޗgbq *QW9B**?GqᅦD^N;C@'NnYKE"M|ۿo*!)e*cR)Cq[Bd<2)IfBRdPLHHT*T _+o~k=Z׺}_yY:~}kV !Ohz UY`AG释ֻKQ8mZ=Ԃ X0^zEm FGtAgX_`ڷ+>[qx_9mh iL so8Gp$'eܲNn@1w d}qy2N{F~"toCAgQr}-'56pM1]w):m=fϻw C:gՖ&aLDžW7Pl%Ϯq5!|Q *p!bGw$9s  dh{>?.} [{(g;eyvWτoӄck(NIq'xJ2#8OKΚH{C4\jZkް"gk7gFtXCk 70v/f].opi5ت0> B}E~syB3}i/1[X7ǻVn6:BG+g(6?(܊~^Ot _Cot6vڂMh~|_gyኹ!‰?w>lTnk0:DV]tP[,R:h rFe%b(e2ҵvNgDRZov-`H`Ts܁͗B@n)f4Ѷ>v,0zn͆/̓{̜ٛ<8Lj8z~\_$SMUUw8JЄΞ/'Ddbj$*?XKy- 2[KI ##d9r F,<ܗ|7J@ᇘ*SQ&hU fuб$J?l|e:io,g) [rԈWNLQ`y5N&'+,ەSӝncNvH^C5q"|hR R1Ț.Z<=q"9.uNj.D-C4gk xp>+s.LE?js9\ᇲ^ c]`X5D1:lw}ϞyU+_P8iq{0p0 L~~_~;zUsʨ >YAUIx[9m]NΡM\7[#1AoR-(P.g4>hZ"Z{œ}BjnY_zKBPȠ %&# FTzQ!Dy)2>|~bW|1f xF$xҕᷗM3pT6沃o{A!i"=Owk byd,8U^&tǟ$rWjK/h~.K90xPx򬟍 %`=1!60y2:C :;h ZL,`Rl6* _78_N %&̼N_  \*^2<L=B 4L>F9s%19KQDO@s*`yx\"YADtwlxf?kzE0t;hQ"N~<8ݏgdk J!ťCR;>D~s4zk|/H{k/I֓ZOҮhge7`ȳ8q +D;dᐐtIc#/_q'%^K̮%7d2j(fG0PsbKt-nLWKdJ$(F*y¿ژ+66)2ַ4˟S]o{6hls|'t=1pj5܇UlluCk#ߪ A1s]q <.m'/ń@'3A1Y@n kǻ_9AY3~XØv̤oKwPν^,:y ͳ=p&2T_Spm Ww޽WQƚ`=.4xG´%p)<ĝ0a/==֙ Gv-kE;Jyݞq iNR'ga]鵎N ?l>8"7JE;|ۥA dYcu ݥ`7-"0ͫE/~m vcvrU&>*OX{'NaGëݳ /4&EߧJ_F]x3px+b{gc,z2Kp(sHΜ^M:UgQtFgX<Tp ؏3  ܑNqOTf6rz+=Ї}*Љ[]91w֭||=<<ᱝ:7orJE3Fc=1#Ba`FO% ÍE-/az6=-K T2py)d[<-CU(S60'*S:lRmp d<[.Rs/N<)n ka!`\*' QS5HHI=ցOIwu aO{áӦշ AVߣ/j#*bj&xp>A q)~g/EfXvS\ X_֗<h ;ZԆX)0`ص5t̸I-.d᪉ƪfkgֶc{Lpȥ>.1#ǖvk;EէyY>7m7=Z'~F'8;IݎJCў йo]Th9R )![~f^;ߧ4?—L> -dd+7I43Ҵ_#rJ~_}b T?a7, IgE3,dބ(u+OyoSGMnؑ3ER:YW]ۡDJVBU։Rtm—W:9|4_0|Q]7A,zC[)V~WX)再gNQ7mIș9{d XFd;@]YuotWɇ#u[a~R"0lb .,z[!Ew' wP9Vn2Z19hlW+4npnOͫc8@N؍b(~yY?e$%{T@'Yq ln*W εvMc;l{a ޠYLs-$zў3Ţ_HtlD6ozL/]iä?s-cJt.9dOJ]翭g˞@E: -WH`хDe|k;2*ᱸ=F[qW1#>d[di:n. ]FTϨbT9.C~;@)͋am~1k8?F`T`18naWIz|ُ[esdž9f޴&ݏVa"ɕj/8ĸ=Py cSyāܷڢ^-~0g֜d ]= cocy~+m Y?#~N_.PqKho>xFSshA-ۈ2u>muD=< gPaKhˮLy)O6LN?յYSpj[i,p{sJTV.Ed1Iمqʩ&T*3`,4OA$f##PQ36DUa)Tjr%GP~k N|PwSN#έ\vo 8撒YYn OQ]{\\_O}e3/פu6I˜,Uͱ|1wezP-vG?R]C,|*|(->P"+(8-0l ţOXK} >[wgPx% M9]D*Am}#?K/9̴ZS:lQvnTvb˟?!A.|#w?"pxb'M)q*묔10dyU1{ L+Hga'r>D6A)sYv&S^Wn!ݯ>"Sgkdsr1ޢz|\6̺u]Ͽ{tIu>l. prbٵ? 
qC1Q@ N70/ɞ]NgCzFBjFT"NAT?*GޕgSA(YPc;6A&+'m0>n'+ۂ}k #pݩ9XhQ1;0ouY fM҉u5mCuI9lp=qʓa%ǘ \P7ӛu'3WDҤp mwOv6B#y5 aA!Y _/E gt/f}Ǩ}hE:_, b38%ڷЦ;'ޱ;?hR}9"ⶽZq_ '">1Ѽw?1ϲZp)EuW<ۚWڧO9"94uWtLh7Ȍlv2m]煖hq $|*tZ!}ߍZul'9D~Kے1=SOݾqX~| ?<4ntq+s7TU],~[6s-a1iq5>}=Jۏ\R[u[eN@bhVgF@%DvjYcaإsg>mٿ1y:O뤥?mbl\ ⲥgYo# lѯ$ #aJwIcfiٞFڢ51R^: B|vgi2+qm9W$rvМfBS`0F:K^8Z]XP $n:+d_[>r+m |m1{z5g ;?JP\~ _$D.7K-.]1~αv{50dI]nsp狀1|?/|iw +xi'ֱAaeku +"O*.Kwᆪy|Bn֕OeGEt?/n'1SItD~;Cq< 6Mqe~g, ¼_:3z`EL⍤#m Dwٳu] dO˾ wίg~ĪΠRIȥ%UڷmtfOqc̶^,Eoo;" ֪왯zY~+DN*&5%>S|=x;~}~CEYd{aR`Mg()6\M}4`#k7m`bĢ}9>BI Do-9䫌= >WyzJeF܁ʹcZ.Պ-,D:j8T?-x\4Vh;#o %~y:f.sAot,xxqX P[b繩 ߢ=ڑd^i,4ȴݴ/ s]m8H{_\lx%=o*!F3?h0 B؞mWqL˕{>y#H)|QĜ~*Ǡ±򾒈Tִ~n/G̟r8u'%xgX/~s?1G VG20x|oϷ23s#$)~+^B%,zDmAwY&B ;9xZ;ûDͺ?(&7ejH@V I`%͊7Nz.M (Dqmms3}>t OJs@(quN^Ѕ^猨ᎥѧzquaGQU췧$45d)q|xu췖ݐwFF1tE\hYb9sdKrz,yȣ?!4;c}Zxy3n0u|8rcvO u+FuxIy& {rwX¡k;]iLo8o^ fH[B`⸔jm v=?#QZ쁂o[HeW0=zG#HpVP}!ĮTYbXUk'Lc^lhgRrx(O(,"{}ՑNr'1˩GӞ#kE'C5$k[sDXa 7~+ (VV\!"bVIxy9h;@y/# a)_1,_NSreKDdtFgȗO4[+m_ڙ4w d錘72u(e(ǭL[ni3U+mG/|W\ҍd,3&\Q?N;T/ e¹ 9{}d$dXOLjvwE@GW9Z[F\kfYwok֠M#kyxYWiKMm KpEv2_IHF{p($dax5_MzɼUɍGpGώ"]AQz ,'0Zœi/oI'鯞zŻ c\O3Wc#q #(AI8vֱi,tksdg7ZG5DrV۹nn^j8V 7B9؜!J"؟}<:w]!wTO`=wa)ƈs@ p>-^n <cRs̯rISlO9@ce^8s1>Rk3ȨTо^U]%#o[IX7D̟*~mZd0KAQʬ1%1Tӣ2+EZ?翪= a:=Rc Þ*{~g;W,yœ2bAu}u *?ޥD au_$녷?DcDvӲr>wlqC4_WL@s9Z;H+A췃eؾ0vLfg m v*Y `}(uBvSlDohN\yˮa{yR4jcSZbZ3dmH4UT͵ :δt՞8^ʊΩLop"zl|nY!yMY-GN6FxiːzչPy~c wi2@W WlPҥX*q>Kh׊GKۅ4aXS y|u@Qt>_T8?|Nvbgq'^R0r]G\BfpOMٟId&k>z98- Jiu( TOz 42rT(.$+`v}g/U/?-]Y&sddR k*Q= W4meȝ~?*$"-În!u_sUWE1&Е[ld{J|ޅTo-פ?7J+ $5jn2*E:(&ouRV$3rocV`oɲkZ4yRR7ZP'T>zWMOW)ϙBVo9ܮبJX^ƋP'aIyQPm8ɧd) /8H(^\}i,xZ;4_q@3MK)+uX.[ )z8Ηx|6oN57t8+_$  AG+PPuu+}*vc6=6Sb6&rjaa- Ĭp;qW5"mC .Y<^8 .\5 sJ=zd-DO b*ͱپð`v ƛ/>h%mZD۱=#aժt rOO["MNq~&sI68C^'ꭾOP;RESɦo!m`o%1'&UiWԜ3uCѤt@ VN0O7¯uc]<>^;y"-n3Lo]g(3.kY e#HtHk:(zU7قV=u?_?*lM^Hؤ$jNQ,-;,Tkm|ۊ _]dTC|w CQKqavgWdʣKxM{hJOu6*}sq9L#[iK v7Qp@Tc8W)7BQiv!2XظJ,ɸ~ВF%O5ĵX =љ!&QDHMǵx$6/ߜ^i%,E?&]J q'鯞zzFYO?@Kb?p]UpKT8/-s,<$mwatk^#]%pGm̿W]NwV̱Dʆe^4Q=P?JȤx9i(gS^}'?7q|ʬ.RoG%yhTn49cNdkW!E9IO:y> DO}yM?amIZgx rk0'פw'΂rR!Cm#ۓdv_UB'$^Y oQ=[y+7vT8;rhtː5u9;ү|b +M492g\;z[Ikȷ$bLWPlRH{t|yMB0'Q^QI|>N-@O%>6ڋu҃Pc4Ūaӫ5 l3({Kf"J'jhrN_$Sm@>yn@w$nr!Y^)՞w<ݍ Iѫ+^QHŷvˑdnVޘF46{e^vV/bv2gs;Igݺ}bO'fNI7{WU\F ǘbÛ&oY'\ѕ>["N;O%(izh!b-Yz;^Y$\4uKp>{ku9o2pՓtiU LZU|ZNk E\|}+~}&{ ېL^Tܼр*)5èE2K2rsa)LOczvuשߘҮ8Gvzb0$rM-DL =mʏIf(%W%NȡXY, V;>2+@?߼oX$pg!ٿ3uRqM}Dwt:PH:QEzD_irY=ھb3犣RT˞!9b.rAӷ+݉SH†'X-b?ZhfE"Ά?iO7T1e Чp)2`uļ39/ٳu iJ: qGEy6VEV#K&b!C=8z{!s{G "5-cTOQi(IPrّUt prx;f6| /zEYt"H] Nx^__NJ<=POH)n37航/T:|?.X|xG+~3VR`p|9nzyP:y`;ncR{eF[prjPm9S8#W߉zǿ +ȹZ]b"SB枕*-EJsP!{C&zRdѵvAG}pcٻJy.~qhܾS^ RSl/ /<چlzB(GYGm$NflF֚cC§&8ܻwn-fbs&TMnFzvj_O;'v伩8Ha8l-<><|2&,I1kH!=!W?`?#&K@/8K^b.E`-(_wd‡/?~=d:6B^Ko9bM~!X&8x Ik+xl{&Zp߫Є$$8fYFg!"E9)m3\g[sd]c)?oLOpjV\w'Vx|G9 0gQNJ) :~yq)?*BylO˚:^#}1vG:ƒ=~rs}2MOhm@ԃq <koyE>LDAiƫ$0D-/yj}ߣ(ꒇ;A8Up=h{*z+la= ~׼$ÆW-EcP=uƯma ш`8x/%v ENk9@Nʍ4頠JM |в_[C';g(Z71{6 %8/o?k1iR]~悅'=|JNi|EujZnKFM7Uc)}{K+ٚ/ak1Hljq7z p=vz^9WN`p3%N]/{:TBRPdzMTDlxF۲9;a78,c4_B7T& QK31ڦ_iΐd"fk `VH6yX}; ϵ_;nw<hX]f"5t񽵭L܎&CA[#NвRz9. 
kB Z -y|Sgۻʃ\8C^3b`r̭O@憆_/BsFhl `^ٍ@E0M !VXa^m8D;uY.foռh> 䀹LH>2-TX &2/fw>}⇨Mw9Vh:T"/˼+kzwe\ t.jOmr#JA,kNV] 9N']XHƥ»gwn=W{O֭3Χgfx"psJf+ 2i鵿'oj&L5`qJ]o1ogUu,a-(4CPV 8^y%|0:D 8kZѝC[ndgjww5Rʛ5O~g]``1cvIgq}]œP/p첕9t`ZW'dX"nDisctum@(9Gax}rezlsbD-TB}qH7]|+v4TMS\NW'v%(aݦyXMPkQۚqFcʣ_u<1>q3!אK+ȥ]HR'Ix,(u M4Rs_ oX!R 2f9s2'Q2UHPdE2Qݟ=w߽{Zku{gc_liPfe_;F'`m" ^p%7w^;SN9]6 (Y@+dhigY@V,XQ:)+yDbfC6Y(nۣLΤfek/LASřgr~ 1PHm撂oB0Rss{xyT◸L UCoRd0iݤzh1&)x}c =-=9?$vhϑ\O.P>yɟv_@ ܗQӚ7o.RߟWg6ݽV>2lC1;3-З"]؉ȭp9aOٚczhy77Є cX+SRS#p5{_/ hc;1 e v 5E(!kUY3=haj$EA MBs?B >w,˔戆q#ji9d9BjD2TAs{NiA̐)9gaL祸7|BR (04wz54_ׂ0SkBm|$6mzS~mHJx?|(frG󿼿h[)N*^sCu[>6||.Fxa+ǍM(\k ʴʜ`gWռ8[Yǃi`D7Mb7Ie.1\6b# V rTs! \[s7 izO$,Euz>fg$ZIV&#=$n!z' !lgSY|Ki"i'oBZwbPAֽ'R/:VPEe҇ 72^,/yB9K{ |(ap(͆N9%c'MXG8*([]w},{[9>[8[l7Xyђqz۝ UOe/79 '^ ǮGUWO4b{ "s|47k= QO] nrPoW5AY0OKKT Nmޝ$# W/X ji@5Jfdž7s'U/׫܁ )e `dE>2aG76LHՖb$^ǃG+%p.WGf4 %b2,.Fʞ%|%aZH*F(9=; " ˱wW\<(hnܗ{+n:xAc\ Ж ^gTJg:IO#{^!n& |"lAI.CϏ*V%yb?eyeH|;]ԞжAH )(v]0"k7}xVG9_ѵj?;D!8~bKѕCTj(pQg&Y?6& + =%scɤmP :ֳ ?ogHcwYR(\ZqgZ 1g;l2ּ1D%>T}*f+>EMmoF{'ePyNJ'mVa :pw, ^w #yj`E(nR`qPW/| Ԟ ~&Đ/J2~[ ;Mi55qHS<}.;jScm@9Upȩ|*\zT񁜭N`VJ$3vE -%n^]<KB l v됬zMbӚ" ,Ww_KS#h5t ùk\n@c򒾞x$<>Nda B|8?/vūt+.-׹gUh|:k oF.ͩĒohLa+ I4V\38\#W'Io]m9ئ&/*B֧-Kw?9<L@%v`1 q(G7 4_o޸EȐ3 y~$'ԟW= uIA<<$K8bG(i3MC+kޘl.3^mϭG;@BHpV}V> (~O¡oF$6 []q\6[ᬶHntĝD7=/s!3sˣ^p7B|Ka^p^YGl^I'}Pk#PE>,(@!1-%n6YbY Pɗ&OmCQ.^]\*) 23.INN~}!gUqZ*TM6r? eAZ&t0i6pI;<7/PGK[rj49+Ȳ8oVEk i8oIWc}s+8_yL S$_w3On_vBre^*'C,?ovs UXG/:} yLkxßW78W]4qwH bEZ n[{DW_4Ż"{Sg蟾jQmT:o~Sl嫞ˍw> : В8bI4n 2i[?cjjf4S&0@S$ͅ=ܼeS~1TE1KP&Ě*N!*][͸NˆWI7v[[l׳a_y0aNYx}qjnv%t-5}5W^=3jep":U VFqE;f^+AgpɩK󉵳.ˋh$jH] ?Җ AOVw!or٧SwiJ.7}ܙ8_Rw"D_G9x fvR#QႼS$X檫oʂWv}^R5D]/5fr5W$޽$]>A =|u!oK2+(RJ Rn}Zȥ%ʁ~P{9;8'#[ ` mZ9R1ۏ?]=D|ЦФ3W011FH3*ߑg2oV,g}'˶uoѤQ%ьीFl/1?2aKVc?EvĹ~&@\gEV-;›z j , ž wߛL. Q~d]%ҧi&*:= R+c)γR|RwcJLk2vC9JN]/j1\ |hvş8O^[={Թ˃z??(C iN'_1@˷o Z}Pw#r'Gca#mn.l]P 5=?Yɩ K?%_]>P_; Y[|Jԗ$߻1ڷ@KX,Y;M䣴_%lsCFq&_`ڻqOCz`]0Dؘ1}&G,ab X"ދOK>葬-?.`W<ٝ 6Z˩ nP@a@<_Mh{~}Q UZ1` 9yc8xj(XIƉg W~;DMgUOUc/xSmuM3_dZ_Fkq'W{b/-/_נ&vra\'ц} &9@G9j*vy]y|/PW|q$#$t| '|Ù?!og<_hzIsk69C| ̦M oݓk~^ڴV<"ߘO{qT7}@Syb?\\߱|3IR|4d5<"A-[ MStNvt5t$ aYa)h$<[.6,_m.\Ew(;wQy@@n)JDU3"ti]76g#+7A HZgMNV<#w8];]lVg .E*)>?|*LC7Bb('u:3y7dž@q K#();sZ⃏_<@~!8rEacQ3_m,w._d \'Y3LA.󲷎5<^G^_;kRło-3{,vq\H J7"5J) '"qpjuwГ{Luj 4>}n!CE 2H} Kށ2[/ UlTktmGפFB;Dzk۫k,Ni:ኤHa݆g#hEbꦘm:5*Hݕ,9톽֭馲* 1e@, pN?X7I|zKO Y܆A؇sp394*vxD.+4[k76~jsl6Ok;ߛ/EFy,ۛ %XOoL*Sέ;5 we:C .P9W;f:[GTSvJ3:R2m˚.>Cmo钐F}I X(W0v8t$F&Zt_ud=@RG5}^3 2_ 4m78^R\ (o=8.: * nWhמw}>*+?XKy8t(j)tr-CK`CA)辱ohor g0(}u hbS$hlxl8LD:OH8 6EzqY['dz 3L1B.iE'K_l;6<_L͐n*me-bdcU<3g~GW7: IƷb*G7pt㽞^͢w{Dk@ӳlr.E$l|H=\ ^ ɺd*~E9id!rlpO/K*+öp!~G0)'Ԥ\YҼzc2_F?R͡xن @qstjʾVܞ%pObt3Ӥ/gݷAZw)˪ē&Ky9܂8-5|Lk<9U"DΩ.DŤ ,Qs< l꭮5On#,_uk3@&a^ AF`T+#=(gl equ|#]9Y]J@/>Bivw7gϲ4||[:#-4|۳wtf1D\ZC#] [gtz$#[Vs:po&_R%]Wz6zşxbVآooY)K.D)8gxM!+1K"] YJpxňG%O܅FX?)Qvo FaEsw- w#?ET60jExl)(v}gn=+4?i !7ZBݖ03<*J'2F+/D+}`,$Q8[[>::kgQ_q9|:n𧜵"\ik#W{U>ris^I(_Řx_.vU `k#>hDmbd(WdUt\$[G:/7 9ޅ?GiUU> ߲AKٞΐ&\7ko۩Hys+b [(?G^/j4*~}e?Rz?z),ol(e#mzf;P[ '\`zр"1i$ieuZʱ{/To5htf-=^wutG˕I {:W'@LRUřx> Ϲc/cC.3|N!kxB7z,+VntK Ԣa0{60?{0uG%檦QļH?~,gg4u턁2]hxͯ^uiK/AWM qOjZwen_fkEhDśgZ(CV+ob*ۗ3(NH?G8󵏿NM)먑8d }Oc r΍[]U)cGxΰ]0tײTWӮ9~ L4 hu[DH5y؊Mj0L}_osl"/SsAԐBR k|B0-+DF Ga `n_*9i_Ě׾\|e05]?ޏt@GQ3J1(f…FhlL;z\z|luo!wxE-%vQnstytC;R5 XgJ}aw?Z:,`qKҭ& զ\sw( n6 &Ʀe@2Kh4 UC˼.P>bٹ9yZ?])i_\(N$Э dy7y#wa]~?oކdԒ{W(y(E#j/TBC; (Y,|C`2El;8(HS@oAv0+;ACKȩuV(3^U6sm[(rp%a6ZS(@;x=cgc:7/oWӁ*m+0rlʧ|uPCJtXVd@$Pr]V@QӮҌ9mGOU".G,4? 7tIeqK3d3Sir k׵QU3L*H'O$G$sxc-PrzjE ˀKXRƴWx Sv$r_fu?ǣCߵ{>{|/dzZ\3^@ Y. 
Aq̽EiBO%raQ:Г߁œi>2 N8;j|s b3- F7R |d,pNz8~5J=M} /VLTiJnv݃j= o؟>9ٱEZʿ~tZ _yJZg W~={VJHIBh[h\qiR%8Q~ht<)k;o@ mo лr}Mhyǧh}=öSWA-[ wuxO4Ԡ.!aq~UQF8#& p&|->]˵ ߇y),t@h:YՍ_5G^˩ᐫɗN,܉5WAn=0Jf_bE҉u_u氦~=b̓`ơ6Qæxͮ:S԰DSf"+}tkј{/4vH | /ĎLl(9ɭ~Ryt*`rgxJs]wЖӣ˦(w(G~l v?q hE5zqxrp >Ïߔ/fco_?rX qǢy!讌'7rX1_X$)CBLM>̫b zQNPC!jŴR W 5!HBy* ʼ+3+ID˜-@nnI&s{Գv8s&˸~yt%@zo([DvSyX0pMfY8 gA/t{({6HI{(&L_?_?R_?~u$o=ziPXAUϵmv/ vzGkH- ש"Pi.o}Ǧ mQ}Hg6!8FRF!qv''xD#AR飯 'X pQ5>r81@&f߄/>&]iL5,4~~ۓHgXl!x%mkwRHj0~6w5nwv0$ho2koݸ}7H2w3Z*l1ćG&e%5!w%HԶuIJ_ؒd« 9,* %.>,  y\OVYN}BƳw˦-O!ý>tڈ <~e)'qĜ+=1Նti^X:@J!umdcɲKǞaTXR*Ѧ+-_XV^rqW۶\u^Xl~=Q*q{TR7|U'zdJ{Z[儰;zS48XSR iP7R]{)YbZ˕|.NE(uGzYtO/R_Hڈ^,ESWn\=-O*O_u*๏Vf=ŘHW.Kǡ쿳a(V# IOW= %[cW|])fa$KAmlv?͕ ssINڞ:9}$$EO-y -ܽn 2Y͗4>S_ _U쾫M+\,]c&;ifXϰ"RL+|g&%R4CG=w 'SvIQzJ=rt_lu4}ef~#Ԁl%+>*r9/Q&2Z+8ma)br0w%E_yѬu6BFgB8Qd%Vψ}Iu\g@>ō?*Ѓ;f?|nw3~,ߠΩKiNj׮N)ÏPnXvs RW'ߥΦNiC:OhKS6'^hE`^ Jބe%19c;zs9~Kg[ۈ{[~uʾc{Oh :*#V9پ:m6bnyKg<,D퍍xAF9‰4}ql|O'|=q*K^~v:[akLBFADV|tcqtE0=3$PwLk[ɜ:6o7wwx,Ǟ^$kiWYUե`:i÷]=w0Pf鏖Ոh-dېCvH715)R֡{$kL(AߟuG:^/TQ_A_ l_&|=,տurYY0U;*>F?c0:z֑7]{.6]MsKQv~]~2y!;s.NJ=볌5#?S"|Hv ɍ|B[:c2DEd9s]}(&D<4/:~ m9l>q?A Q,ґ`bAeZ$.x->0D`O2~#82wN!"kwQbvAwjyp"h|FL5f8f$wo^\d v,.{AM\gxuK %qGe8gekMQuHn; bGTnPixz!F+bwBn37Plq)sr\~֛; mLT$_]FYju!/?Vφ]<OcgdtTROnٓsa(ʷEYHs|,}XB4ZMg]O¤j^Ľ|ۮ}wXaJg}]vx0]2i*C\ys],W}yzw럝m^5|h,jOJ>Pf"JK#.@gh ( 7?4JqC S0<Ĩ0}@i]E[ ^Fز <;c+EO㰛N>a3]ܽg]Oj!S:8FdBqc-(G~ 7f-cRRc_a]>1R3:MҷPbߎWQNes="ٮ7cqP>wIĎM\&"S}zÉo1q?pv4]/Oig lzܚ 9~aֺU$ń+J؎F|Fe磘Rνy #לA(6e+>i|S>f|_^us+ca__y@Sw_-`ˢ!jr(DՈL]vȶ}PF:hf>![ my_>ΰ8pc}k=(?qnR>0:,qZiR0KB6\xp.P[]hF*ewda]>Q(~jKmlj^V?g$_0=#i-dƛYHlmD:xFeg%g>.ZȃY1|x^ ~`H ئz5d\߱U\Ԛ8n-pHN)3JfpPwފ<& T_ ڱ3R\@ựt3 4C$N3%j WIʁszO5GYMm! `>4?ٸ>boѰW ^G (RY:qfOEѩ3.6ͿwNokBͣ<>Wr2+Xc{r ةd+|zn8~$):WxQjvCg f j9(1jyDZ1+ !|-A&b\0v[BSӷhGi'lO 4{8AH,3dxb+}wroZܡ;^0Tz>9u..P6 ~B? EtC,\ ( VcOBq9- @NU%BxYy)K H Ԩ{ ]TGyW2Вb.%8יȻ\R]ƒuFwgm2Jop+֯%! $ǰ*^k1*t>G1 A؏ v +&Y"(Fʱ-\mX7Ҷi/;?8~Ƕ|Ílc uS7,fۣ<[EMaU gAqEI5Dk<^5=+6T1֌fEԅ:xz2:@QX=ܾMR-0gi)n&5+yXcdF|ɡVuXAij#o46 hW%etq+*Fec,T۾>eTكI-c/)-]&Uh~H\F`SP20GSs3LP7(RM3ۄYoeCy]JYI׳|AC'~Gh_`ZC J q{M3p`!`~/_,臄OA;M:?XHPcl~gIi4.tS\ȐksON!يףM&nՊ֑ii(Tᇫ:uq);?O> ;&3nOH m,k QiMB|k&0qv@/"҃?IIBPNZ {Zp!υlg{2cM?!i{}p ԨI;{m?(j |A ׹h\?x`-ߚ|齌,iyC_2@A /ל'y )gAI9x8!ʞy*ݺvY(xb "`wJ<{Wpc=O J k,G*a^Bg(ydvb~5Mzᦎ,bPK!ZZ&rս:K_9)}' h!j]IWVvh=!弦:ӍR0ge1|%ҷa9ni Qy5+98ckoX=]- ?wV&"9 4Zsb,s^Ϟlr:0M ;+N7$x{-f?߶j!NE10·tv*v*!!p^ j_6pYi9@"o$8ƿv{TBn;Q}6hvLOV|*NCZV|96t^)L+_c=~O%l{D>I5L/^q)&7 qXi&ɔ<-O-cB=oQl>P?m sSO Mzߣb h6K "DJWD)z ^ӿ  8JüN16u'nL<G"5 P8"*_AjsG#E@:pH:`|OL^\_- s< לj/ +#v;P<ܩc/(+xâء?Q03J=6RߙhO8cub%+eIMyOn !`/ ̯{+O~n-}mYϪ K? >){+uO;w|"€V-&u **0JPKI"'ey'hkM`etvH!"r39b=Zq02rq<)C&ț\l-F*Fq$"l۲E?Z|*^$vB{.8Q*-F}3+' r4< 7@g?C Iw Rľ x&O7ُxTS~iksXSa[ޙ%ɐgyJN9% xo hA bkA/iAX8 G=! ֣֔x 5 _s e}]ևfl;RpFZvUcP{Fdw/]INo k5 \\@l{uмlhu>#TQMY:ډlﷷf8dDLs^k/ԥePx2.g ɜHcEl6߁>߹m;%⤡!tE1tC E=d,Jm3]m>)AUl?ߤ똥P6عzuƓ>vGAka0ắ+.wK^0v{(5nS sLuߓZsU/lүgNސ,^ .VxS!8j_y H4ci@[i ԜIj :+?XԳB”%jWQ_6>Fū{#Wή{*(?#*(~7?9abZouXIOҩ{p uhr+| ~43IH &jm ߺDNuD1Vsu?(أJ|"J,o=xM oEg;ڸ,i w"I&@o8hK ֔QwieEJ[UTX% &3e~'hs3c|e,$<ei\s$0AWJӹy~gKYw$,Aԟ}7pl ~e2x^ȋfvsg#d~]^^F56y2\\MWG3޴Q4[]!mJ`VXYdq^yx# QM"gk}ώXYqQuj&6BQz׉Tyms/Wn ̣E,v0L`RQ0)[<`\ރw.0H n%z czW XRGq=y㸇WJ/)w8%}+եx%l!kxFDZu5,/&7d'ysۀП 7kz1WWYCHhEZ\T؝sKH>q?̇ zzyxm`O|-JrC@#ҏO&͎дǘFhP*}(OL=YɫVV,JX|d迭Gf3JHY"c0x*#,a _Dl}j O|þ7w}D_b2 avwzAܱ+. 
4Z3ex^#;\8l _jo-L}Aztݺ<su]a:/_^)sH ' ^\T MG0yԯ}B /Q4'_oS4ֿ4ۓhjg7x#ڤ)T;RS*Ws3^4aEIR"{>c I5(mÅ柋iY0t+Dv1u%`@GsqzC8)3G唷/4ZBhO`do qHRZ1-UTu|W\P(8GX\\c}xc˷G[q~&&ZƆ=PM OL r/dD_'-jVhd 7諜yEL8ЂW9u΄%P(1C᱒65NՅz&Z gcB07䝻!0~V}ּmJ(ψIH]ǁ(sGprMvl>kg߳;lsFƠC6(%nhg/%x"LJ(i棭j'jR!*j b֟NP<#uxV7bAw9 i Y.+t cr 6@>/Jwx{'E7܊| V.`ӽg%l:NUҹOѫZA7 "瞦ϭփ6ȼ⏻d@cB vfu7n[p8 wpz槏2g)ޱb޺m!&`%JU4ٻeŠQ5>$s pVh(''z^qI8f0b RA7Wks bt/p1 =FDecThK7uҼ('$:tD#n9FS3:@4'*aFװ=J[J*?&qqbBǏ]A?C^XOqzMbؕCo? TKwwf Ţ7?M4`3Oٓ=ӈd A36¼.}I`%bdhhy>`ɢg {;Ipz}.ˌ JJ;G8~.>Փ@]W_tl^k^os{ ~O@u<( 2?HP;U=9Np3y4Fx^`~u= &yŹPHQK s۞BsoRBgq&;"y %nU[sA ̻导YҞv'c>tŶ ґZ3 uù%g}z`MRy`OV䭽,ETwש--}KMӎu4qD#eٰWSYAyEl>4]E'Ip%r8/5Cr-tKCury0FsT  aFAK?!*T"a^|Dz~N&rI}MZ%q2M[c~[CBXc;`ҽV=}܉ۀšqIx@5ыU-3ʼ P& D Dåk"uy?13A7esӣA'`R?flal<Ӹ#Sd!sT_ gw S" Y,MXo=^9¡2ugx{Wa\mvii_z2_hh z3p?m%mfWQQrsxLcnP=H E3Ek&Џy&{H9=y&Lu~ŧVlo/&#i_`1}9?vtFoq<ɔi$ OXAT.gG}7rñ^? \=  AD9 j~>YާŢ?+L ;ѸÍw;AYFaV%%wk7~Tf&xogCP; _Wtw")-NvPEJ: 9$"(o|kJCJ B¥xڱ>>(d|aLi^n} <(+fJh8R>\)Ij*t∮K|B=)Dnb?nI_%^ &oFK.CY}Vov?&VԪ>ݗd);Z-&lcfGhƽF%FU~0榖>,y×`Sߴ1m2Ԭa֝K&9R[9$PP:Y`G^MzCzpK :<>%i{֘65VD3c `E0#|%4Tn9ApOh#%0u Bh00Im0=s@/:L@ig?.PRf+FeRĝdrRHd&WG&Qpd6nHKŸn:IW E= Yc0 ~[}+ca5pCK$3O>tx"O/HE`iqLU{,^ئ:Z͝.Wܢ!?R=Ӯ;ͦqIk5+)s9m}iC'tYbʇvO)'sMpi.,d۰SJμ} FRվd67׎ă|U~= v=/W26F![-5$L(`AMUnzf4Tigт6t\gT’:оLD)FDkyp^r&>Kq އ^ؤۈL={ {L$Vѹ@V[%/"ZJqH)D,$ȫY"POj]2M,g+r-WQ湒x^ȋ_jm^P6wo % k8;%!y3i;%/:E󹗎b|˰rwU\cؙMou={`IF (-Nlg^ [I,gîM^224g=j #݌;mO`y9M+0uyUMEfp4T[䐦}A<`cta+5M+lϾ{E" CH\tk+OWlh􃲛9'5.ާN\ `:<rJ]Y0 YlMFԀh Ll#\}SAVb%gA~1n#qKZ;tk+ź +/$^fזC׶9 _FtrOebC߮Hm}~ &,$P&z\.L;swm>싄 쇠1o,ee'tlkIW~޿.{}ughQմb9Į[@l]p_5oG<4#jxαKD֤8)a 5t^–f+8| p˄@-(^z'{2o {dqwNtWU;; =ϒ!%?:"2C˃ )9Ҥ7=JGd9WtK b-V3O_忮9@ɟ\_i$쎣eхd|:ͰBmsdb[!i[͝i}Q]Eo"zH:lcZ~o,Ta&6@yskhL7 l" ߂ޟTIʲwDP7g {Ny{Rp=E{2!)C: r-ο}cX~-Ba gh}jdFy(2&ę?D_Ę- _\hK)e賰4ס]d}&Wc~ `=|{¢QǼ1A> I;D}}ՑIŃ g[.oSB[jɟFP|Uly)EJFJ5Aɇ"Pmw-w`I*u2 jvz:@bDHP|˩x?^yQЅ-x.,Cq4K?YHϸp {+oJ/ݡnDp= _dV:>LMF.s p4.-`#!,fQUIĿhys|*~9KKtM]6z}\]?Q7?$gQZǐ;lOyaovB"ߵ1?7cKWNe!!!L_lP2,;X$&6l(R(߭CelY9O &K9duG7@jW?(awR/^ To|gMyaxOohP-y׏Y3y~ɣtcUZ@+ߞ$<ȴr"/:˵Q<3KCB}_MbVObpϺ(p) 9ZCz$uHXa>غ1}nL\XX4UPՁv\W:7WO_= =7W4N` ;qу&qnH08!VVim8r+d8ӆb~pÃvS_c+&Qtd(TİYO< ,Ys !/KQ H\WA5|B-1Eу,}-؈vX?(Ko9jU $=#l(R&5LJS}(h_ڄg;+35F,DtO)/4 5#lco%\6%1zv+_oz׿y[kR"l JkL%Dxg\#-*y~)x>A󍝋+J~DFT*f2)7(L&tbñ#A|͊| kX[]FOAGFQ:ӉʩQ(7>-4Icmi yq^íwz-b8Jf ֫GZϪ(l",>+"X$63}8C˨~ld҈{y=ii+~C#2WwTA~d2ApO#)sj;fz #,> 1Zof-ި+pLu>6Fjy. HQH>h"}f$O9I}`i3/g߾6ß*3ElWoi(> LS|y{1>Ju1T-!3Ԡ;CH+Kѫ`qGO>A邏cDV,C0'㻅 UX`Q*CdS~|~~'Hi:ngM7quy ̞S:.9Q[~a"#(&HSώ[zqW.)}}1WfC:W]uvϬ+ztpo Lhwlor[x1Ђ_njنV?+QҩqЪbAM- M%|*ـ1Jا >G޿ɍuKf,Pcy\Tg)7g%IO5,{Ms}-Z2Ά))\Z WeF#ZMو΄3DJp):pqV an&}T/*uGZ_ fZ \ %I4I1ێ=3RgC_gmAtSs`OOD o"WBѫ)R,+ꤼOEAO!_f c0K-ΖЭ$4n%ofM{QGN2kZvmVioOL E?5,ۍ=!E+b?+?+C-zʇRkEu?t>qb"KO~"[_cJ~FLP[c)%TIߒI%g JkI=~Y8#-fX*ȗu{1i,cgEa^tv `wٓvlg_?wz<޶q+aGTߎN`jQA#?KJyjcnUD{]r织ax祙i16[4g&QT9~;LOq_24l (]H kbT)nCVYX}jvN܄7kYl4@6krX.InN=HBqʜjRZUD'.-H!cleA@;/iGq.{JKS:6*x4?e-$jؽfb2s0/*"{XR  Rތcѻ?;Y]ٝoxjC͕>^vmw4:1>B լxiiDLf*4hx6$H zc"ljPF1f|IxM;Oy$oZ[P-.ى!t@]Co Z3z=gڔ}_oqGH~*jT$=o9R3zs#M'j1Gw9Xl>H'ghhJc mؘߵY y|~9qkLogY]S-kvG{&84ThDckŔ͗Vp 3gV Z_PfKBBi%"_'_Fb 'SsΣ.^S(׏^F"_/GyG'g6ر"Y:p%9J [u$%x~3bkLjE$%&KP[IܢBpu,j6KϚ3]*R7^]C兜S]ORyc5zΆ{;q`+ }U 8G &~+;xQCG2H衹ިIN)g/r\Rדz"hfnhZuA왫Itrl]"{?u:K~v{ZLfKהG, !Cz'ޔ~sD?z[o1WV[=e}W* :DpGYftka< ɬu"yiWx\2E.f"~cO}N[K& ߷"#H&c9s:G<o TA^ hE2o*ߚ„[^CS&ԹEt3xc0Ǎ 0SEd'~ݟwT]LϬX h4'u/RjE,a eߊ_bic.[Ţa깴e3#h;'H|K|[wSMo!Cy>Wӹ p[<+`I8|mm%kE,`wR1j|[0"1CaW[_&@JʷX.黾+BcyMLrnb2m>YH$&R;>违o4 `;T]m,{̔Em0e--9|cGYL-f籅pca8GЕ.KlV8SD6f/SM=E̓qSov :L˚sHOY1by0:ܩ< ܜãB# @Lt8;C5*s|>m&^\e1|H!ŪM! 
% c*@sTux6]"=#1S{ez/tR0oԱ*lG< "Ub݃RB1DvuxN EK/|ۿΎ!4N s_xq9$À"g!4aj{ OIp}v oU{Q=O'D䮃pIէ7._x%F޹yf]G}ܠѦо zaw،XuOHdOA}||%"|!#IS7!Iy#})h:bDyBЩ.!wg2hGxt_lLt^z3"gnthOZ1\o>H|tRrgǹ {;dm`u}K^Vgr/>{ ɱ茺@]qShI}5)s7ŮQm,|?,3,)6I:|X9Ll~ijsW=w;k8wyU\9k:C5>HЩ'_﷮T 58obU pQN1Iv<ƈ\|M(w|0;F՟hH'`t N&]u+y ad㐰/?] W*vt'X.~#$H2Qbs +4>cH.  i%a3r3ZCI._dM~`ݙx`WN#(~U3l,e81}B >0; ǥܤ3?n6pHvte腇ϫ5>_Ŗt2O#%ᕭd lerTH,cvKu.< []^ 9)LDGp/`?t㿺ro%$'eԽk.Nй { __j lF7<7c7~4Dt 8ail}|]@BF/ f྽(x焤e-hpl'#p[*rL6m!k| Q|[װ%2Dnϓj`؜XD31_!޶T6aUȳuguY+9[HLa?oqs$#NN0=CebxA'e1F |CF2|kk$U3$֝C֌8YKS8 &sOm4 >3wB(sQU3Vgr@+w[;a'f#rFOB73 T>q=mWPd&$[ӮR>hGC*1]ͣW#nK9ىsE{f^XMB7Q=|=kqQuvxǛls DT9L\j |W "q>4ޮ ڟ=TF]?*B}N52(qL3a!}lpz#=7nj ;|)Z06MsXk=Ul\f *7HsHr`y3*?~3IX| } >Ub 2+T:ŎӟQeHQ&~9iG"#s],2"ЎNVk&j a$\UY N2 iTU.\(I_v +;({';qWfg*'&Im'L<H dUu *E,N=e^Wo+Tu\H,ZD4߇L<sHt2?$-]?(M1\iPR=>W~(1~;,z\ی~p]lP7ܺ2m˕cs,L>\.4f"Xrzo$~E?SqJզFˣ7#tVЂK6a|_a 1>o\;غDQժ XjWaTsoZN#S|ǖN1ߋMfܶzAp;{ ʸݵd/|n sW{{vz_ߦG]7k^U,`Dm&ppe2Q[!l@~ 5 eGAt`1='lD&*X9"_1qJe53`OGk[gV R$zj^Քs 5&#| ~eӔ!w3M U[^gs,(b똣0[0[NwpQ`nnCuQ[bzMU,[:/3jZ㎒e8g H&,g#bfo _?< ae:B]8IEN:{P1\k*݆}xF}zFkd{Ҿ_Bٿ Kְ1 _ZYI.&mJWTllɬoe,%B>fElwn&80ٮWCRMc.gPҤzM$x0*sg6,9n^Tyv cElgFDg[BԱ;56 3O)%!FҀ@[~ʍ75-(%wF5F"Hy)|sv9_ZTbʄIm8DޑҲKvϹjykgL;(C'y l}O2e?AW YPZc.HlQ@CQ;V%$3%ƥ^@tϞQ^Cq^Q=o@t |&21J E9ʙP| p=bo w";:pWv+-M/@'"_cU#w}#Zhu? M2j?ށ=ys^ ⼓naw7<~H:.$xYlhe89}w ZlP?Iyi'SBBM>-|n.X?P]7?l * B#m/'ύ>bfK? (h.lxp]pΛCM#0 eΨ6;ZD; U@*E4Lxr\I_U?Loong8Bϙ%60F\ʈ?O]sCkpY@b?j,}oŪj4x!3?:>\-`1)28AR\A\@1e?AMVp}wl{Gcl/N =s}҆ڿz[!_1)SZQOϣ;>IrB UoO?=@->Mk~y~e1_m g8xSҥ+Ҡjr|B-Qx!O*Xϻ> k8F,N#JA, 2ʼrpeW &M7{h<Kc!ӴKq_c9z>i|/'U*d9jk?6Ul DX#f +Qq٠ x Z{\JxL۸X65eM!5LaY@ 'JDVxc EW4'5љ5 Tas;W%vx!vG;ͼ1~L41.NcUi:m绰~ 0F;t6&^-e[%A$Axk~bO*ox@ܷo9g]@\2)O+ȓyC{ӰM+ܱ{mzҬ/ME_?'4*{r4kSwPZ?k}Ch`)q'^'9\}Rpzy[鍒N{),gLle\;o(2tj9k8r 19O:Z'qJ4IZJ?LZu)+uvJ3R[GĻ]X~e"(Hy;9ΥiQYx?J7?uGkvy嬷pH$-H1uΚj UnAL䎔A,^`,؛z~1o(5:])GEJ΀܃@`]<0#{,V9]/HOĹhpѐ ~\&~/,*$N"ɢ`+^}5lNF7B[)IoW~?3,I| Da6+!PM&r[! /z[wE͎KŨ鐉m";5;,{zaGFpx0Is¤*ǂhV fvc ֩@'HB(+l4gA76!}bqp=v./[&f +|\D8pwd(wlE`:35*tW:ם=g'\DIC"Ĕ("f"^5Yܮ+eum(1[7``PiM8|1ΥhJs*b9 m.s?E畑ͥ> ^\:t> k-YZZxL"LKc'9xdU1*t{M vřܑӓHnKY~0* &&O ~n?`gu)vvY{. v؏ 9AY9n?+ D6#7zO C4T`ixN38CU$Ta(+q@{2W 8zq d9O "-D,Gw5Fm~ϱس(\+URSh**g>bܝD{{q~=}3]vd71mѷa`Qb77a_ɣ8t(t}d Wˤ$78Aez+$[m˶*<4; ^/y9Vß%g:%l(}A,b~X|95}C/yx=x8o{xjPtC m4uqV=MHt wDA4[xW7L.zu/U% =]|?Z;+ )ߙ@=G?#V@e,*XYG:C-|==V5yC-|߷=&Gnxuj[=XݏDOxr6sv!vz;}+ӱ`gx=i•wUM̊FF*¹1OBj=`P=QZ"Z³DTv@{ěQRʻ lOtMAH.FCn\v?oO(;n#o]qCYH=4~R]d^j)xNW\wU 0,Xq_L"}{.NE ?-7o 49g|a߻?_v:@R%a}$3__û?)z}\<uk#l' JY8cʹ,5=v8?#o ws.B9SkKv}  H̻5UvB4٩<*] l]|!moP9 _ qx~Z9Z8rkM)h3vv^new\H/ύY|ĬµPQ{4版? { K~d73xa;!$NųLf?l.|sn|8 kLK陪x6SwL=EV8\ωGÿ|`$ k{XUp3ȧec I/TЗy(h܀A]jq ,PW4ϑ[ :Pg6DufC@8LP:0jXsB{99=#Ivp8C9*@E?X,5JVUC_j_dSMtNE1 dg7.k@Cuqa]fLd]2/rO] HO|V&-X|sGw趹>xb).Yk~e~*N w]Z)En; {WLZDu7aɓsQ_7)s~Ҳv9-^R@B ٛ,wR}3l f`tLw>⼭xlj]6񣎰IwLAvmm8O}T'RZүs'g;A⠍/bU ãσhr\ႄ x_t; Y,J9/tl]Axct\շvw*̯(ܸk#g b & ɉ|I0G hLTZAW lY, j>8"F/sA-gQa ?vb2k$sz)+t>!ό)`i?yv`.{_;v(1Z8`7\OZ<琙c%Ȉ:Z{?1_w}sLvy "U;2+n !x>Mo{xAxT8V.GY3Pb'ڷ7d._$*>0+PG.CVlqR?*h즊j *g` }Ro\TH<@E]y'@oJj*nw_F ^_vBg.o 6&cS)me-PFMlAE&-=_m4l؎.>r܎ S+r۩iVmʢVHfm9;o)A&*O0Śk'ek}~,Dm(xX;%'(6^G8ŀW|[7-}N(Wj= / -n8f[jf_?'pwE L%qipW Wvuȟ:+` ՔNׯ^^p6~խBOn{9TDkGk0fmFq1;8B ^ }[H]9lÆd{ZaMÛt|wag9.~";ؐ^c?37y9TюbuʻY^y!^v&2ǵa՝M]0Fm$uTkt)8"hP^ }ȾĿz v^OF}6i0t 꿸+w*7.7iS s?nq.s^sԓW؈]q,2^x877.($4֢^7 f\ fM+;@}^p^oXuADfܽ @}(%N@`U9-TsDW;v$[E*dDK\iXv'4,HQl_|=WN1ݺSVa _^ʉݥJc~';Gtez58*QGf  3@^+ܡr&7X9@ ^/J|* ϥMU+1v=ň|V|T>S8UP87 vf+qnYN)έDp^(˓{Πi=e 4R ݀^A(8.p ̻* QnD|9On2#Düз~ (o(D_sa8nb3r\_ G@_bƬc3b3W25;d+cW֘!H&y%㾖~,d$5(MDaGaGdQG3. 
2 JƪX:x'󉗕vЛX0X$8 H:5 N5YEw9E{W}X ^*Z//?R>CF͎ߛx5z9[i|桀sN_/>9S[sc3}3v 4ֲd5Ժ41N=S+7wl_H&Zw1L8U:uڄ#md{:> BŶvc$ჾ蔿T::BPۥSeW?J<$ UƯܬ!b\lMr1"/Rg#8`_G~7H/_-'? * [I>n8OWw\ w7fفxFU;ҥkUreʎɽEH3t@} +?yΛqɞ)9+К#YOpgrz|mz>Ox W8]>b ϥV߈s/6pǙn`}l!ٯ!֓ )]ȶru[z}Op2af q}ս׃S> dBq=ao`3X QP? n CAG$aH,sL-#E_Ơ,_'S5E7RnB) !OГ4>t7wpXվWM0կA"xIuOs_F](kVZL*7i^ͪ(QYCąToǮ8oCנ%z'ͺMCF %xT\%< vq.wJ{9]] M0p^H ^>5?,*9ޝCw_wo>u!>mqJc] #ԅHʏq8o~6 ^> <|R wCxR-OB*z~(s/Ocb{+:[T߄Q-iljB:lPWqňs8CQ<1VnK&W>pV%j@] Oby (82UiT,9|tvJf#+`[~ IKJ? O{^M 7c?y%qRa^ 9=Ml()J ${Y畀܁n0Djt K>WExLbw0# 9YKۡjnֽGܠ_/>>eq,~op{<dn`ik ik z'[džZX>Zc*^QY. ^oTo"҅}){U2x뎒Ft,v~8gUփ\^ {.Zc?#e?<*xx#s!t,{o&v0MHbiD+a{Ob=5E\Dd_ҿ6 ?+,ݺ+W>_vYrS}i[V+ FBh}@ĢJ :>pY[#gYy?=zݡ֕ :}/Nq?]&#o*ԤBλ4E!}Nb}0Vە>`?`?F- +?[O_? ߔ~=Ayqe,p*ߞ![(DL>Ћ>LًX^@]Q&i&"3~^~$1 w GֵԪ4K?6Ӡ#-kvʜF 1G᠊bNu'*.OLQ4βnfyf;0MeI},WAMZ ]2! /ٯ>doGP,Z܇wC%k0%rW'kZrw% |xOj| U8VIG4wkpjd>#j_ U5a]$K1ZmƏ@MEdcHoJp '̨ +q”߂r/[A?;Lq\JNk2Bzk~Q1]p:Սs?Vn~,7MdUjqCؗPG}90VY: Vke\! \3>"^xx7|w[ nihw._ڸ*%%5\4:=Ǚ]Vqi[Z/ػ3rX=\B eu:Ia-H䷠E,,ڂ$IS#VHh"J[I`B lYt6_s=i]_}q xñD"9_Ip^;׬/]ģR׶MCDOvY%(4@otzEt>_Zɚ(ieo3̳c#пi;*xO2ٵu Wf::ۦdcAq=`>s1©N]b;WtjrMDۆc,HkJ( $6wcXI\u@S$ӟJH%lݧWBG\I[Ȯ>3RS_f{gʊ=a~/G_ӧ =OhT)Z|)_2d/a^o7bg9O^_sc:jҝ(M/˳Om7x2xowmpd} E\3(&;wҖ{@bxC9_sZ m0]EbVXc*̦Szս6'3 %`~ގX&=QA]L>%ωFZD^B]^DqO7J>v,(􀔕NnO*!9/u.뿽HHzWNv{Az@!٘ t_I84,8#8Iުs|?/f.:QoF/DۓeMiω&ҧBZ%[(ZT)x0cW~t{.CB 6zz{NC/]j^X79g S\.VKqX`9Έ`VA)CuaüW<|tEY7G.k:CVCR;cH$q\M[O;򜵅4(ݷx±ޙQS\6v15KDC,!kȐIZ-@I)vz#4 \i6ǜM#z·>躄.GFR/—[A}?Rx7[X ?"Pm,&eQ[z(fVKDh$Ε:d eiJqc'4 I*jʇ ĩ/F5yVF_?וϷs֕M]鐩ij=;L£B+rYw#eNFI*k \G0]J~5ץ[ΉC8PjluP2?ON<|Em2ʳ--"X~Ң⻗!4辛8dm #coCa@U͊O 9x$>e%tRެcR<:Hw~ X_>"+9ܙGσ|ӆ/>/d65\=zQ+O3#6t =6Qc,Kjir)wN0= F>Ϧ-&O&&XaCkz8-1?Ⱥ^l$ɡɆTn6飓] .Q=B RsO_?\q,"+^WY:{FO8 %08֮KB"1ʹq3(!$V.6짪&ܨL_A궼k~]ĺ|SN weղ+ .c6^XÛxڻcwq`""|Wݯ!{/#,EpZWG't\+ί*rډdk!%;ou}`[DY*yZ3__9Aq.x:I@b|l_RYmҏ$(rJ8gD?do3_C9w ERQLWc,mt'֜. <[$~ۭ6~u&#YO8~^nC^49?*roM*wX uUm{Ik/e>YጫQN!D nm򫒪ō/n3ݬFLQǿ䫹 4D}#VQisg%^ܧ;އin!Ox-jC4溩>ڄ\]Z_ 1zģ~1_"5>; Pxƅ>yFYp6cjᐄm9*lPԦqڣB5pD!3Rj!%,)o*dpIu_^}\W3|!3ܬzgfETCc+tWn (l6j+Q#Z@5Zr9Ŗ Wcѯ/u/aON 3|@䩾rpC"jƃRQoqxeWgWƳ3!d*pqfZC*A[u|t @[OPѴ@AWCrcVufu"i>tt%ᕋΧtv'ZۢO>:?U\0 uJ2E/(.xdX: mج">Ո4|Ṙw$aEuF(W@u{ ^)ʄ$d(E j"Fqz'7Q>C~ȴA|P<3 ~"EMi/'N)I>~.ݥ˾)#xS3Q~~s* g$".kreoJeu4.(w0**7[5On2LXҜKsAE__gA{b[g/eWo1}'QL'"6 I:&o.xyv[fqjCw܅Ԑ㋗3F^v# 5طcWש"+Wo1D.|Sp=:4>Ӷ{eכv^&UI@ ͿKcgݡXtEKiTSQuvkRit 3<=߄y`SKog[&2ڿe sм}gqu Ty;@}t3-[Y^~BG6QOʎN2R^{RxP[y>HOu+~X /8)Ď OIVx˄(6{{*NM>mAߗru| v}i8pHFʀ1HYWX*0}J[*EnmT:]W&roJqH,@o.'YvfpDhv{Wۑy;%24PQLJy}\s>|˧p5W#<4]p"f%zZJAq}7KŽT0XrQk'Fk~ ;,N`yTm|tȌ5$FOxs$Z91}$78?p6 Y_-̡Wnk; J'Y1?瑁攴<>k MEJ霕#MK]$xUڮl4rXAzzfug[=21*@Egp;x|bQ$_Dm=y-3Rb"2ёVm&o(-kh/ҺiQӑOVK/V&왂N"QR^>J1=Ө"SDHuDܧ0MR~*lk%f}a>cnume31法@ |i j|AaoH` }  sI;O}j ït9X/*C!'QЙ0?鮔Kv}R y^޴tsй')+m;sQo ؎G|ί<0+Ozt-Gje^|N*+xૣ !45dԄI#6!C6Jo5; ki:DuѕgXX'Cs? s;p;q^i't\!{-'; MqKej&݌ ?S  ⶑP_:qνDGB ␨~yw4c^W9%:;:5؟oPq|JLK7hKڶ,oWG<  7Ty>_c%?Px{zvZ_Φ "< }v&\xzpشVgw_OzSrV< jڢV䰞Ϛj<6)󺲫>YK֛< Pku3OTV?5XS;M:p-ȺLv^}Op_vT(R~U*8_Cs.ІVam(qpr w9~_Ga$ ۥm#^װ!%*jh 04bi)؃Eew"̟GPSiFP3Xg5lgdwgF 18+b7;RYsg ^ 9/Ϗ)A1$ڈh4A:[ʘC=T:څ?5v$+dgFMk ?Z?#OOef>Ji cv4oZD;^+RQU #ݮ ! 
Ye _/sg bND~ݎ>B/6ڀ/ml@̢޴tNWw|#~nrJO/ܾƟv9t(E^.H,uha"E| )Hݱ[`)4mPNo6!!~y>Ck=hZb LIUh%8&mVQG Lnqc b1_Q ׮5f04TD(\삉""\6AVԉ*2:^r!r*۲7JmT5L-\*|3L#p?1xU!*+i%)?3A@JQ!CJ6|o.0bZ%",VPg,tGZCmドa2rV t,1Ԑ-Op>)>ڈ9}f{ Ӭ*qnV۝W˃bĽOveb,TLf;Q1euoaFY©5Tyb$`W3JVu5P>W(iquj7DasbTw,$t[FSsP~#i.8lz>Z} ^e bl@{[!4Л1~d7*p笵~PlESEpNN2P>/n q#Ѻ(rp3;9t^VE;pQ83^ "c;"~g/2Ӽ 6@3RvԘ;MpN~+Ximɝ;2]FX֣Pȅ-mvA/.l?/j _{u":#Jf34Z6|1zSm%c/ 0Jo,`8ƘϘלSb |є6q#;n{y)t%M߹azֆIiC  (/C-Ohژrt=rY_ ޛ~7TmB=ݝ#fZ o0W?Q~]LL69zSL]sù%n#cSqBK^$fDT5K$_Xt }d,KzF`Άkf4Pܠecs2`kbD>y]l?hGC \BmNHxy!GL$7@OR*_5xsFnyzç>\@w \v$Sz0zHnbӻ4hY0dzG]/ n=xtk<=9؜~z4(rdna&_ݼdú,,oެx) r~mgWZ1 /(6sHa16hXv-9K exYlO[}ň<}v8.;5#.2 N(9Hϡ؜w7}W'xo,/FpqmWņAĽ>n0=3hJ.l D?1pFOm`r8>[T_ȦDǓ#VWƔ6Ja%wq́է&R1QF:6XWz 8s֘Ht&Y Fü?}\W֌2T'O/#+[DZ>J~;)#rMXN/-uQSX_|TG}ȵDX8W HO7>H9:7,{ ўPi},R+p'.p8pBv1M'|m<zDnA m0 .@z ptзsuE@)tm C_7yA*C_9PJ?V$ :;E Ry cطڣ*E2\\{B9^9fK&8r.rO]>˜{tJ^CᵸYsWrʉDzZQu޺d lҺ]Ώ,ez#_B*"v>Y)Ţہ27VGV7>'zrY*c^0A3ox㉥!VOV4y%jXAS"$¦'pn G`Bn^{8oZq6ݼA<ݭ`Mwс;;<+z >\c']ľYχk LhRG$N؂-ľl28qaz^Ryy}7H 8dUMlagŝN`?y[s`wXix$~F_"{!yTvf/+1Rxt!)O"D ,V M<'Rc?:P+_!kY*41tan^} TM6+ُ:&gRoq#D_G%ZhVk-M3QV0b_3zAO_!w>p8O AҌɑ c:Of O(`2ۭqq!n'Q'.1 $N:6쉅Tmn혁ݱO˜ūD8! f\s+d<Y'Rg?srь{pS=~LžMYߗtT;J0I>5nv_0 3+&ᰶ1vDOE,?9ܜqǻ}zg W{E¾L)pT)e"xAϮ.OlpB a\ ~. qU{_mƤ,yS#py܇GÁʻEMI&+qQRGM- ?ۣM?nn e6{8QnM$=AǍA(p 㡷`ʲE,)`DNsX-5=xXVmj64N5uKhk٤{Z{4W>>ctΨk:@EDPOuW)U3!JL.z %w+7m\JNC7r#SفjӘunҕ0pQd:P-bᔝ.c\~SHq ?HI9  ^;×vP'굯]|'\쇭qAxjt>F6zzÓn'Re}] Fgjx#zg;-柲#k_WL!e?{dA*\3&8q1 |>w`{DO=N= /jO׳7V"H^hnYG (1pL̕>`>/͙ *:Cڞ+Pes:`EPAv =|G4Téw##D{`tp)| Ip4;puH[̜NA} <?~F~N"uս}|s`Y`؃98&sHK<7Y^~#i<#ȡG~4?"nmόỰR9zVE#\G$dr>ҊpQ9uN\uP0gN_ Z DW@L4};cf,S &+l7]ު-Hdhk[6ɘu=+=[sTIddĦø VN@ 1Ԥ?D> *Qw%8 [{Q3k?+ynňK(Sk< MD̢tJԙǵ# )yP)RA=z?:HE< ^Q^;\mn Q1{V^儢K?Ea"{?6nЛMu ,słktB k TnBoL< ~zAu70-{<(cpo-ϨsGa_h hv)T(rݜǾQ'ދ&rMw&NFB^;"?'|U^rW՜ܳ<.79L_076~F'7'e':5 :I2{\!HMg6(f}ɅI;=#s1W!y;r͖%LEʤцJ @dm ;.!L1<+^WkeSۑt /+u7F!v*Yb)AXɚ~;%Kf~+Fp,7R9>Jp&_33 ?շ\<L}? N|v D?ԌNB{D9-M~s+/s zDjԦMѰrPF, 5][s:RFwi,6߮Oǝ# O9]s~VdK.n3cw.; E w[[0n9v [~} ">m?v ]QZvf,,0S{yG0s b5}:vK 7܄T9,v>Bi^Ѯ1QUe]3J*Bx9k گ[3d/"D.}OO.qhz&ݼ2=a2Ab/LS4ԋzUdz` ?vݗt3mD2TpZ&|%z# :53Xß%_ܞ / )U9Bٶɽ9eX@L_u%+U^b0t=-~]a&xP_:d|yo} :Fks)mP5F&n9#pjm1ѯ@N⩓c8}<.S/rxK B[}]$|A Z|$] ` G8XYgqH1.Է*Urt̄euy^,AJJxFБ@iOsc*Xngw#s.@a|1OYD\~-vAph-Ezn'rtz_FNt[~Ojs"dۃ%o0"WB0S$X,9X=E{WЏ 0tiQؼ G>_N7  ]{%#,5Mf'Â7мb<@9n~pPu9 pxqn cP+. a(- toEV%P1M!S /a9(^,#|5d!bD[܁wjujO姩䴭 DCy?_SUL?0.);h[!vnع7:&H$}?2yŭNxP/2#]WGH߼ֱH_5 6=EX%0իe(jjKǘSwd;^yHx6j~ 6c x>on(SP&zq5"؆~Mf r2ZKubX#D@`)^!yE[g<<_O.([Όgl۝ |G 9oWƏ/]wb)e'giWdr*7DmH^C^qapAOXv=sMQoom՚NܟOݯeǷ\U&j9"ys)#Q"EE{*JŖ)5v#/%~Pxob惧j!a|pەR?,'P9"ןJ*ċɌGN[1 X} cΒP˞&b.Hhk9Jd8GKIY47kz'het%W?;/>VvxKnHZa&"hP݁fbvE'bqWf;M pxG%bן0C_l= ˥d\5qofxU18޷hpJDMؗlt}=SvދZ<\ DJ3זv k;ġ{V4=/U9CBG}UbSy?zvQe#{}\QOe:^Ypl)"WO'_(,46TЮ">7Ir֙V71/<28r/t}"l"hHx"7^Tw]%8 6; )>0Qty'/&rIvN A|tal*S{-r_">14X&a~tqR:ݿ%LE+JWgak9aZ;vi&qѳ`nW|ZRl YO|X:7xMv?FG6>(IEnn=` stD+KDD{& .qtPzI:/Q.xCi̅_oaw7xucqO񓍞6^oM{5L R=GM ~psyO;u.D?˯o~o˽R!mp/^ kF~yY'ȳ?)smI2Rv^?怼-IDNĶY7h20qևe?,[s'GBEOGNj= $@ެO{ć8^F2s ޵N޹)Δ\ʾ$:bt0u+50)AO9NﺒMoѽtG3ZRycG輹{"FO@\q,~\#V)T&~ٞo= = 3&[U\u׼M>0#zuy(v(cn ^f\qp{o_jBm!cM)Q3SynzQ׋DҝM*FiqM)~_ ļ:!(Czja-|%7ud!e kTvn5y:G?w*w>-@TG^ict{bQ5QiRIiT4P,*}PDI y'M̓D$J}JJРP E뷖7>w뽾[cu88~}}=/^+->Efʸ~ c!+ )He Rnn{y?m([$"u&@'X)BO}`MPK f>禫/}= D峲 HwfegG}@`,*/7{X?t tU_" ho'Y|>ZNFSDa-ߢ1eؘ17h%WսI m_ {C'Ζunٶ /Hy*{ٴc mjĪ@[%di<^<ѨvU/ g9X9lqE 2>11ڡKVjB EI]I&}|:fxyRXZavg .QOuowֈe-gvpA `ߪ)FmMwǙGMۂ!R\j`.0=AzX? 
"\=Eoq<Əq:6|!Cv&U#\wz@rgMC>>Q씻eGt|)l۩58yJ.r#\~-T8r_09r-N9d?)I1=ƣP)\03Y$/|pu]_$#Φ 9=s9'eʁQb?E7tQx3A^VGWEP70tX!pJ1 <:wE"0`ԛH%Y݁ =/z6'-oV \<>=.Q_*N|.s ف'PHnFDWWBO!^}۶lpx/+~E \e5;zM?%j'3@+){+\"}cei&"p /gODegd@WӶNyrޟ6]v(pG <hl rb leeK~Kg\,,BőA]$IF6.4 0v۩P;wNp26Xu xؗL`$K*]zi$g;_1[͉cO|!}L(Y&Ƴ{֩g`aw6`;*;%lj9kCDϒmкe~*㇈mٙ{$Ƙ`xtLb^|ǵv΋Eo/G+7t?' by:Wg:7+Wx#7J7qQe,&(ŝ+uGLh9Gm_KA~ќAIT}U$5fu֘<}]u+85TE5Z8 j9\M1GP!g7M>yh j\}S4atʦB|]$,D!+е@&'4Yٚ 쥦5R8b-ou4qBTj;7hL?~s|t|Mܱ>bRguW?~7sz. hwşGepD83Yz@h9\1c J f>Ec͖Bd< mxB el9&c_^vנ35 K&5H0 ŠX ⚞5]u|wsleOdmޒwJ?l?R)΄ֽ!}%xg/?hq\7 /mb5ߟNYMv,e W1U oS, ZC꽲ے}10&yhλ>*{ '=-EO׏LP =Ѝ.+aA*^_[ޝ Fwe)q^#yo5ꧡ_E_H&"X"Gsa%{>chtgڜXѠgP J$W=ZU*C^uY~"mLnjj3ߒ*;J_hLv;yo|q\5E|IMF;7N-i+r]7=_S6,^(伉jA0=NKTnCdsEdpktw UzKDjf<{U:s#?6o {#:9sQ;d(,QgzRR{'H]2:X2w.pGB:>LG`vnWv׃ive3.yC*v2뫛1=[o]_w!{/ZA1/ b_\l(&U>T7@K?"Cez{TgK)RB ,o{ٲS~պ)Fּ:vmƤ1ıo/n;[{ʏO%ZR=l5p36~"ի_CUE\(sz9YO,yWo/}ɸk')JA8rxIɷ;#E[(&yiSTŝeҒM2׽+Mǧ7Pρ+@cC"{&+%7jJ?2GFn’;;jH.NejiF7y+K1ʏ5e!QiṰ]wr<~`(*x]QeLFe2ih9)<^2U`Mrl_zGt)x~?;Wqo_LO*?B%3A3(ypi|B~t"^0QFa/}1ꊂ&ks~FCO\Z6ͤa%]QzVNc J~{qtPTS63XޓKG51S{e7x|[GuB/ ?S)Z8ץJJF%1lG1f!Ne̻y[1)OZRJ%㎳x7>N\P^|7 }GU6^O mVڊOX-!U"uq_WG}OH#Nv7'K_rSmDy맋"7(߆e:,6wH׷#1x ' Fᑹ(j2!>Ԅ{/;rqj%nh4O8]Pm:j WX^ _5dž*{%@>j :(+(*ΥkBц+h՜=!ԙKZ wV;_)kf&0vav{,𭚩Df0bk1O~>E7S9d 㚮~mhl@ۣCYF2ϛd[yk R0廩1~Sp(/3"ݑ`sJSLu11E&7X8ZjpR#0eSűZm ߹iZ<_'儣\aد҃ȶ=C3 89VSj *҃-[D=B<&ğ9(sg8"#akuUFT)tIG]%ª%lA:GVwhrVBS.텶۔xWZ?S'e.g/<$ )n{YRʾWJɤNc/hKaNqGIؼOIgŖ`%6:fH2Kp! e;mZY+V'> fgx*l cR9{ӞTT"0O9w pQw()/2r=*L0Nx\.OWtmW@6֧G+y'wsJsuR ~;()]&1jW\|}5c e2zRqBL=zGWOʌe>y5Ӛ܈^))HϏpuO0IWs( 琮1 ZNϟva}W)os XmJՎ資T2BaFt.#Rۺݻw /|C7>!g+TPǟ&[^8Nu$š ea#ODWFʏ7+1Kf-|8U+OS[x?54[C3! jE4Q-^!|(_>KNh _Ǿ86#ykXr]ۗdc Hĩ$,ۨlֵPnrbQߣ)YWao%g|=JMbXWO_=C5]}5~S(DV%I5t&@ITV?rK"9^E]Du %cvO F}(3 tSzjz[isQAHlRriizXX̶m[Q@}7^tvr41}qNKtF mbA~䃏ާ`jQĶx5ƹopymutii//>">,Cg)4Jg#i-'(-6UUoLUqǞDfYゥF=˜nO'R18Sl%ZN5[B՞2n}"d $ؚs![bK#͓iB>f2B_u陋p:͹?t{x 6ĘdBO}/5 !FkJqx] gǙM0P 7p]JZ#f4ʯd%jH^Db:0zX!~kSIv?9MM%&/S=Kbø6A[wL(JHyA#J`FF)F{к= Q3PXib#U;Ox,Ҫ+꣰f%ƹ{J#|.hBhe:NM#Gg-'{|oޚk# . 
q)DN ۟,DyO2̰JU*7!gˤ a>HŅ垗rS uY~=`VNymNy~ԨZ!َ-uh㺾 Id$\Ɵv~OriC[< IK Uы@%f;Wo |ż_Z!e'vt9W(CQyUE}=g:k*)k\ ) $/66NIw$mv=9]\D?U%$3y oի>HJ3kÖX&]`n^h 3ˬ̀OZ7Bg)33eDV /TsrԷ4T 8 vj5Js3)EOw?bE9|{'NPX9sv.÷!V)~dM52y*׿4E*^Lس2S<'Qs}F Gڭ| #:WudFO_j橪' U8xS(&ڏ.+}[@ܕܹxNjM/⛹ ЋX= 9#CuwN+oĺp?B1O[%Uֻy(@&kG8Z>`ejt| ] y#W0h^(ByVu}'/|Tr2-etSs03oVȨhe1K0:e4gnfrJr-Ar[pk[В `[*pw#M7sӀ^yw$('HOfx2{ˇ.ۘS>Y9ETiK!gwDEp;X)FbNjyrvĽ .EC# x9DcdA5 ̊п MdƩ`O>`5ni_޲.6kd$yIžݙĽZibHHY[f<Ї,9 ҇~{ERDaǕVmp)}X˦H]DqzkwDep,Dc3Nx +IXDJIQ]t?GQg7PeVȉpɕu2 o `S)p ȋ->E7^!|cߛٟC Le$6^1磈{ᱱдaq0v \]hv+7\#c"zNՕpuI>z!xCY1w?14/yMtrr S_fJg{N;B6 1X㿝,I~?ш]OŸ =Sl?insNVXo;Jl:, u[o9awspMSkM ֬* Pt3̖Eq6B9É~dyGh =Uƀ5hgI{CҡDa_^=bTᣊHl owȂ!"|d0 RcR3f%vݵfS4Wc1BT#~ƳSן:/d=M5ԏ6]sԚj'05K$ ;zY}-8 &iAY-Њů;~:*O~2u 䧦_E̟^_39*J';Eg0mbAÁ!<`2l&bNz*D΋n4eZ{BU Ó8_gYs.ۨbU38AJL"jBog=.cxI6$c rV !AjaD?NճnM?`EEk0ZRag GVq#׈ze mvAn"犪p-x:2dCx\Oܤ񂇮Hg2s[=|]3 'Un ݴjv0Kyx3տd/fo t>EK_?9KLGhccP wtG:ȧD ?cpw!v?Bל-D*DΎBD.@hm}C{oNxwnS,!`=>P ewΙ&[ mかHz[M= ^2!a5Sj̟ޟ׈>麤0)e3hZe^bm`.2V˷R10߻~5Lf,tfyVUkj^̶+Ӷ#:&5/ PqKE#%qdsw>߶Ek,+e:d: D+P,H@PK4+ Ahr V`y)pS+;Xchi-}lATH<5{V3jn+*zehh $K6GZ}yȿr?>wޝa]4%'Uak`NR nU]8e~'_l7zw3"&۩OgcVNq_5Ngyu9p?h6ت}xkANRo#3qW\$jQ"[xH\뫹^% a"b!};usϡ461V yObE[9 l p!?E4\ ūi`nAl/o`5 /J/+L&0oa9X_Uajy{m2T(T,!up":mt72*6)6>F>'Wa{)O>NXfSomŸNA´Jǒ?DPѹ|̏7 f aǏ~]:?+WGNL J•6w7&לs]| zG{Qjb&]ygV˫̡5|$ 8Va%dJPo[MU[F%DSn|"Ygє'vO쀯眊u.0h0zx/9hOCT.Aթ3ps}/㋂:yr|x NJ -e׼i:6ڨ#HKo*.#02|=`x,UPw9o_eYGV?z4޼LjEZ#e>Qa$>4n16I&mB ot hj0~n>&|HcH#FREJ& y 2& QiMǾbsPPPUx2#r'&i0soHMT|~n·2T;yw;Xȇ9ǣwow'NN$m`\%G8U0'|6nu0\#w.@!P}03ӯUqL@0B K'zRi(+7 3zv{̉DdIM0Q>5TJ_Oz£ C% ?|f>E23%8.{zu%F_j%=Ju{P!_<֮&u8xՑ 5c=8U2w&d OQg;džJɈDtL_޿X>R$/?b>F=eea9i 4]2oY"l![kivˮdcnWUZq9xSk WXضcо;&Ee9)$[1V@*43KVV6װ2f ˱[1{ S1%_zyܬCG? m?IR`$O\l5!H-E}K}TY/q"|E[$;Six{Ou.jE?IqZd.˘ٯ58ۗtix\ʀ|a\?&h&a. ^C"}nyL>uCravd"RG d ] |u9?H-8TK䢞!$m6-AO-gV}Η6̓g+*&x .[@}}/sڳ'lo"[ 궋PûMtL6XuXܜ^\ze/rug-S[`s +1;P翵z 08 |~3&2G`\,WְT9/* Ә]c#0u!y ir i߆:IΩ譍9L=j?YYyF_U]\?,ʔ5>4Ws$Aⴼ*{uWtlJ䕩%[/m9- 0~2be&ގlP9cQ9By7,_"Of`7R|>'Tp߶_`k `ڦtDc;đvC˸fk/`jH9*"Ke#XsqG{ /\GvϫFy?ß?#rg!geCٕρ&EMSXdPC}r"ά0ӓ:~CKϭ iZM#+3g2n_! 
oӳ28sjZAQp=Ub}+9^LY5T^zTY ΰpM;@^㏋\o@y*~v~ ͫ\J݌үBf0QIp2>f|E n?X4=A('wDN 6?q9XNUD 8$|eb{r̠ f' 48II;\3/dY oмDU۞&QVwE^ $ꖅ*G {e1J=t6A+/(])>L\O{w!Ë_ph^Ǖ{#MRY4zG}<`-7M{fHw&˱si?𼼛C1$0Y'/DMͺ#V#f NôɡRag.n{̺- r9ɄOja-&BiV|s~ɽ|of M/bu71O>Rg!f6_\muL@A{$cC -@Ӊ0j,m ?2.6 מ3(vUJ- >/#AL珪#Mx<Dl=aհdӯh_[DWofr D ܹѠ5 DeNF~{P~6XZ 3ZI֜/5>8TYX-Ćʂ\<7U>C~_M`&*bن.lX@ݬ}Iz`Dߘhsf`=Lpϭ#RD밾MlsOVA c~KlFs`VS06hكuMMfGԟrxa |M[W@0R\\}8b ǿȾe"r |jvxqu(ܯ l=x̾!RG]Hˑ#3]xlE;૬D{-I eY:2~!s_P ZLSwp''OL_=s9x]Ӄgn5b9|Ϻ(kۏΜV`Y[r% EN-qo~֍{3#f F'n2_]ۑEJzPljhρL,&_+Tr]#k?Rj\DZ-*s9g |Ȅ[P*o7}*5e$X E}kГq!|?npxY$t+jU59zVn}gA[0"ij?I SjÒp\B$ #/\9 ܒhL?v#t!O]:tN>֮[UZ5v~wS>vdyWy:3>v*A> oד_8BJI F٧aˊ;Gwt\^eCnb:n8B_DݶP% U-Yf |qk^_xm^дZ4kzvLp1޷Sh*P  uR9 Q[`N->o:R6?b]?s~$^#[!;>Eu/=Y:+8d}3N"욞F3/__ם}KO0\p=Q/}T@hk#VdwGzK١e͞腈f:3РAv/=0 Cgc>{ ] $r8ke%^~"A;{B4s$Kwso/,=t?n;ݰ 77g/LozF(fjZvljOu7>g1= \1xV (+z]*P wif;&]^zE'vy݇?4˯> ER!I"C)CʹvR!$<<LJ"C$cs"S!$("Jyg{o_nGXXk_~FCml Tq=a\EW1I+i>m-FrRN^3 O|ZɍbaxM]Ո<6"уa-X^X7%1U3<@u5*@K |vMڄ4_i"?9J{Y}C|+yKs/ }@8t>^0]__nZ nrQfBxA5/))FF*'nO4tA^|ծ?_} K&c0(Ԥn?M9%D<5{R`؂@R[Llt1>-1i6azuh& }lHy+BzG=)gpv5{(`/so,9ʉsoc |ZIzZ{i#/QYuO2S#8vU:`9h{?7=CpXiG,BN*/u O?"ե[νw" G=i4?)B=xj[eb}Mr/GoU"jDwfq-ƛeLd%gi^7z~2-hW|LDC%̔ȩT]ix.閸8~L;3 C˷HκgUk.v}X gU:S"#awD|qu} 4ψÁ}= oO.?sM{+~SpC+Z2|O٪ytf\ #'!z'ˇ!OL)_<9!(a2nh)ɺO~ɁCQU^@/+s؇L(3LM.prIgN+wwkL^8`dm=7zE޿q{>~xٯ(㻔56s9Jz;I,uD|6g$*c w;(egaI?JӤkw~FyqD=#Dw>w'Jh9J4d7*}|SQ,QbO#Pٹ \7n/~(Ʈ-\gÀ7/TȮM# ډqsVG'Lv=&wGy,{*7WSfc⢘=Z*UM2b12] Y5讻/S&nu=.ŷȦK!o4 t;XmIc7\k !C6[4/ V'#" հx 9 §B`AMݠ З; "}LfJ="-긃gheY LR $p l Vno#=n`kb$@*ʼ_?\0Kf'/>|>הt=)+8nC+zZ1ER.y%h7F4a["vyejطON#55k$͒41Kv}ĠRX~{ܲG׍ /X na͎k4WtVo@4~N^RMTzUxu4q{z0#qrV; ݊^ O/"xEE^4k@{4sMBS.WYRf$!wp4ovɰ&V=~ F n(ӸCQZL+38E}n fѣ-Ov6~Sm@ JEM[Z8?:fhn]S4(A l])[:/SKKv>iaSm{1yMuHMfls0[F'`K^AΛ o"hԠ-t%n{*whvU|a]O7-^(78L߿Y:ﺸS*]8FqԪ#?̬sM`;ع!/KjVsKNvRu8ӰM2 Dxb| 1nH2UP|kWY;ݔM]7i,Wʼn%57}  EKmR եS%Qs 98SkgBqRh}G*fƤtRqc!Vؕ[ER7WyWХ{}j"3:n05ƖM#>Zŋ85M=A%+3 dp{R}#>/0x$TV-P< 7@?bH->NqIS@g.;f}EJ 򘋆Z6zCfI[ߌ)aGM_ ē7S,"o) SYهuޒ|]-O(k*zgD Y{Me T8is Su#\:WܕsKSvKC yS_ :Qv8=ff @Ҽѡ8I!wy0u7RmpL}s ps2FuKDIG:GX8"aDuޙ Iw?yAݭ8"~<~kIu;ĺ‰.B{a$q<@4\M]&Y?p_W\O|V=}9cEA}}T˕a$JMbA(7jˋHt͙ ,Xz5Tآ%_(܂N +݂8:.>_kz`$[}4[#)_ wТ$GiPqn>$ԛA1=Pu?y\Ny-[Uecl;o(lETzqj+ߘ1_,koV\Ub߹Aч+3xm#*KzybqK`j*nS{Z?3{vwnAQ rfvbUSx==$^01d'PC ,3mO?<⼎nO.: Lvt fz(ݨc$;g='ݐpz4* uz;9)#5}1>dMjByM Z<l*֟or };o;j~Y\@2|(?o*]ptZGML-<𘣡%mN>;ZBX}gv(+JM[sZ rM7.K-2|ou}>aB[k)P@nl{ BSw(ʾ'iF{'6>+Ӯ' x%Ĉ{|#}07^^|XV{ !kLE7z?TC]ھePΩOJ=P_};f|A,AZOuѶiKJBu|LwW ɹ._{Smp䌹,Fo$n DB^Lۏ*4{~}9EnDoquީg7RN^zZ >&b8B셙ONŊ"kwR=|SԴoc8:INۨ0^zG[ :N%_bBpWAۧؼӟ_nJ;lſr[Ʈ!?VR&7PD&tid3g QMyY(Ո]Ǒiޞ3J(i/v=_EZ#=_}}P-en]6:5_xmaG_I{^bG޷昁qt%]67ǛpZ*g!Ixn/6,SQb W(ySes QZXE\>a)yN~ 'Zdܼ!:T;K%6py5],sks%61BFf= "">.,v}|b mPxՎ27Et\EzQ  \FoX2_UWiMI!Elwz)8ȣ'7E*Z q 7|$?"hڈ؍YMX<)W$ܗt2 468{5q _ΪfQ``06䂝Pޝ"wD紇ܸ!Db->dqK[А-W d Kթ^W7'i*QItUL}/~wWs{HGJOQKBhԼeYRO[ݧ>w^8"Հ[~ NM[.!=g4P| HofOuv苤H(ЛdR1.AO zm[%gD:2y_L=X7H4636_}?Jn+a;65x5sc8s8ts o֞çպL&@$!XC(!sdG4*J4Ek4*GMcz$ޏ;!uy|"poofض[|=HvcC-Ï^!fPюh0tG WpE杛xd 9TqrGdFnJ8S*aR8k@7BV+l,R"r+T?,>_E>҅PUi2 a bQPRRR+rIm3Þi|%_Q>wyshXcĎcmײSc{veTOmu8t &L" d}v^9qp7htݎ OھϏ6&/V,'63MPWSa덋"4}R46cE/z`v.4^4Dnb|9^ #D)k>3q*eMz{n&39F$[FΫGy4M9`"2/:$ijӷXO=4C v}jxt9δh,n8$V$J\Hǹ^mO×M؜Ϛbܨu<2>_D"ϴGxc!ܝS-Ǎ '{vU!âqt~Xa3W4@-8:ҜlCޜJMHnuDr3z1 {/K0? ]3w (`KѶU__>tJQiҖ(Snȧq%[WJǓds!1QGGXA_Y#SQ1i K#:tHfZ<9Uj H;6Jm>5Nlu%4j a(r`/^fc/ .IBWosw82ȑH/C^r{8]Xn%jڔMo. 
fx#&8?LiIB^a38dLv?^؂c8JVۤre[I^;97}"١T, DرsTDPd03N*oں0ag #ދyk)<"3|\pi4fsR,{f#y7M<_NX/&Wpxk"䭙N<?zGPQ5ӞS}j"n=<3s236H~y ط1(qx<PWvfZ'b,06I5ļcz{0\u~Șl| Pjh;,wReWJ]wbd6@TBQ53=`=:WQtOYv-ӎk?5>upHC\$e /olAo#T%I>6׻nM[AXKQb\ -Զ(&EH{N֜!ӆ[{QŃM}#@}a``]( l=v\ggDV5HU[U[XG)_ex*[Љ߸*<-|W F߿'TdybۼOMP2\0 V~q]^|'6N#T1O'H>'P,~\v0(9: omMR"Rev~Ԙ2FDʒ?DF ;ō 8 Dl`E [!e% NbzQPqm0mǿ[,&6Zuk/9z(0=+3'|3l)y%ZsoJnE@߾:2YYM Ӂx3j3\ WŘDDŷSaIm|b$8'N|b$'"I>1O#I>?799FGyMROGGI}$GI}$GI}$GH}>E$_u6_K#oH}$GI}$l0R HGH}$GH}$D#!R #HH8H\i6OR #H}$GH}$GB>"0R HHׇMR ' tqp 7u ajH@7ξY=Z4d?NTBwcP,:oR즅q8۔$ST LCTNrc%E2:X*\BڌFYt@75JN6nPxxn&.6lv#?n^=nGI}$<$xZޝS 'ce39sk3G%BBL!<%K ؗ1dJ!SBmu8y^ܯj뷎þuf6v(@(~ax·e8%Erln~ȪTjg޷Ԝ*no>eRg ~j#7=ec/1ԛR(껏z.V|\ q|A#FlT;b~2} /W s$y9:ylzHgSZ?^ND,NszKy&Gո[0GB"aji'4HQ,Od-ev,g+,WMju߰82A_F?MކO#m+QC=4[2ͪ9=]kՈoЪT+ʬG.gRLkxBʙlTnU&YUSģ}?B@$Z`EҌмUKmRRSW`~uu˱Q?q- S,۩,$x:= #?"aqgh[iXi2_9g q/a/E24=dJOD~s0,*#sWqn+QMeEg+bw1"4‚"ϭ! /.)>n~)rݟJr;G TqO^B?x {^o{!ЙVR V:i'1^#W vRK fo gyH8B+0}ǜ;qs'..q&} 1pa .@ThzpcG18n@`8l>F bh6X۲ v+3mz2-G~Wb|Y@:sKGdKXBG,z.JIG͆aYj@Gp0e(옪EA fnѷ>6 ts!ӴGqgsnB{(;ACOvϟt#l%*S+ \7$m͟1~U&ԏM)&%;FҺTG_0ɭ|$ܞ\$ݏ\$Ћxk_0Lv1G1ɼ WxDY($ʥ(5C;j6+nLdL31H@5!U-A4I]+`=SI뤟%Y^|8L5O,`'UF-)#Xp3Oi*"0xt{^Ec_e;+svR̠esIΰÀ1`%NX. 8XNq>T踏3;'#|qstpfK eaK!#th"35p#b-9T' HwxM$wr)xCik(GMcێjc(d4gID4#?,~RwPVvg2< bG? ts=_wom=p6Q}d¸ugl~Vc#Qw[UPnH*~WG;4`ܧ[Pv{6犆^bXU}:NZxK+an,MDO!!G l2Fk N:! Kfj֙8؆uT^d^G s$T~B 8]QDk87ҁk@NؚnDaEs/`uXiFh~ENC2^IG۲ŜǂqY6K,7LbG^u ~6Mރ 5&ZkR*p-@hE4Ԟ${.K|F S,I‘[%zc0;[A S\ ׇ# J:e{Cפv)TSt]kZoʼn9փs ɬp"ڧ'(E+t s P d .@n\@D@и!iwP#<1> Q|辇`,֘^sۏ^zU})ɱPRI%Ad{h:x~y>-͍*`5x"ˎӗqn"f8Y&+k#T4~ aWz0?Whj ,ht$lsZDާ,9Fo~v| O+!*ZEecf3V0/26ת=E?5w?Eh&wbX"ij6X[ǰcY Tȋ>_e\8o:L\9vʹ=vd7ЗtEFѨ/cMlm8"."cΉtѪˀ5,gHVe4™r2pL35DT܂e c_yE96>M=q jq> OFprvaZxx:|?* ^݁ :PZ2m95T>3d=ޝո]n|ZZ ި' KNݺmCC98l ||0B,vsW\9%/o[y؂(m3eo >.]nUyaTy{4n +֖7#bb\6Hc1o׆_M8X] >w[t $ L-cLbEh lvձ><ڪ[堢)R? t^\$oǎΙOx!mBP|?Jc3irV44dwqxCTiwPgTP_~29hM.0Zx~# "+UedA²z6}-C u*5lIJq{Ͽgp}fhJ')VnyIfOD P6ҙYmGpvi';UZ_ƠdO(A}/bi"cҫEu;5%YMsrƱg@j e,ّbJ!)'^25jUZBڋ''堠xIGh" trA7<Ƶh_p$?γCSoLRoCϴ *_mIֱf4F|]Tͬg qY"2Pt߆i374ؕc%-_K@MX a8B/gJ`8El@'HXb *ұbn2^ANZԢu:љ,+J: ϑ?z-{uNUD?sG^Ԇ8&ӧxF|UMGӧF-0$7<=5s 2< l~ŨC&%t*li?qxp~*o?6KCvV4/E8>X6w7RV7%O~Fi6)ۣ|Qu`Fm:2d K<95< -F"_ӗY/mÓ^#Fڐc3Mk!yh&V݉ шv^4/l 3 b}A2£qPtPbD(w8qw)K1CBs7P೮2H܌]|G2:$BSn[LSw׾-"w u+յTPV3*3nƃfmL{ݪNda@H^9zwD|a[[/(4@ɝ  <hT(&C4 5c H| 4$Z3Ϫsw5! 
JĦ9(ekGX斚H[A}y̾dӆ(9y~ :4hߜ7xӀNUz"D蝙c&[{f ?ПdSkyC:.9j8@^j}NfKe?&2Sύoï|4( ?ݬbW~#oWp[r4\kNMw`+"5QPǮ{ .O=I"UR`@Ľ+10z iZ‰𥃎yg!N(]{6TIB$ Pc#ZpxQuTͫg|}wyB@nf|u GF<{;cwQ \?(Gؾo?>Q F5 7r^kӽ䡝ʾbR*KSoށ+rɥsۈZm e;~y]2=.&Ev1#e2p랎f>Y V{AxVA yIBcl&KO-AD)A\l _X*L[ymLgwvҔJI_xϏnpcNR7%r8}\\&Cp$`dc3`&c/X FMf8%%Im\GqH o X\85dɏBf+?3\{Z}/XBNj/,[A)ڏ"z +Q>Dm-Ba UMp{n?Gܦ+-":aqֳE~Ͽ$ElAimc<4<\e =_-/ 2L^yWMYU8vHr[kYVa=ia@ma${}1 BjsQE;tR&`mMݴ ;+n*2|۞ВݪtZ JPJn^C-^@fRG#M54WMMD<8v+۹F.b 7(> .mg>u]`k_yEɛ!>|/d- ]9 3kGU^Ry=OBLXUp)g.ˊB{})P'z4 6 7l?|~pd0TƯx07Q'vu ]/ozw<~2'lL>R||25db$ *LR"O<@`a@jA1,`u\v>ܢi/Dة@@>Od DQTLZrIh+vOI)kۛ񂞙W'3yrZUn:{2cPB) WH c&t[$DMS\~l:EŒs1wg\l~3jdK2Q[/ Wj DKM 6\]pZY֓'gO©GQ] LG8!i]L .OaãX?ei8^oߺY|9b^Goˑ}L9O{C$ŵ#8Bό6D)UFOo7X"G:,^6ށŀ"ZNqYlu^:Qor*sLë Fd!pŃi&y{Ja*@ʇ!3P^oW 2sE))O~:&Ghs8v?WMKBm__IZa'"zf#dnԼ7{ ~;H\K5qyd{u}UvNne{5hk߹xWk71UX.̺Eoe2nMr ÔCU׿ *:a@eO(qK|"< ~yl⡀ { NM| T"S<KszOB\$X'3|U}2$bm g0PSru}b1M㶴%|`1M0~gT>P{`ųi7"XĿd%Sˁ/1X?V^g"zWSP%r!ɇJ}j?b*GCgTҙU9,:uh nWif.ٜSI|>\nH$tT B8(w4(6繡}2 ‘IJM]ʈGmxu'pxIz#>%{ѵb%BE}I2d,AF}`e2[H%&$} z U,EJCqx=ngaҽ<>iHJZ<s$Ó<^4 -.Qܫe4pI()?J n>K8J|$$gU:L Ъ3aP~axsUv SC @?p@F!p;| <0+\8Sp1PeR in~A1d"'ϭo-¸p;M=Nׄ&ЈY;gC :dt x,ajm7Fee18FG vlAk5W|d mN0`z`9>0>y=rqgFfyQۢ;0g-H(I쮳gS\)|^W>L>?JyM#Ъq >P ɓzK LdNO/M_H>\?:C_UO>?zqݳY7ў#8II$}1"tIb$#0L~5b;hDPЋ$'&U¬9!}yl1'`(g,#_<"vlh $~}^`rx1> sM_Jo]"3|<4xJ\?ůXt@?3 4c}  ֢Hv dX]MwM4\*V="4u P䏠𶚏.Aa"h=TqH H-@ڝc-j|$_)^ bd| 7AvLgXN)M ^FD*Vm\5Xl39o@pN_=V||sL (چ(u ZAsJ)6*j dx2cyQ71=pDs/%npsg;3KKV0stnUKҮ6=vy - {ׄ`f'l*a W~q3^ס/%>W8)h@/S?O'[4|rfָ@3j72%0򲜢RPՊceಈIH_Χl [*wQa`ye|li: TBfQCfYX.*t."I[9$Q<[x.Ii_*X#<#63ˌM?_}‚B ޴F?Yu&Ol|bI C#@2 0-:3t@N7n~BZ vM7gaehZ\I+f+̩VrNAWvF YV& ]8;>Y8^7MU^e7H_o-~ R=2 KIqɻ*0 *#bײʢbxiK䐨yS<+]b$GK~{kO'$^,UZCk)0!5맮8lycef_pEQ"կDx  3n_<{KF׶)*A"zI1BrN,##YNΘdwѴQ0> .S6yfh+/5ϙ ]/>_gf!-]w;\<-GQGGEz_ ^>AP;36(ϥJSo5ހp6וBLzϏuP41gV١݈2I'uf^5_?^hi*|:`l]3eR̨$?y~~Zp~LWҬtgFMaiSi#<8&P]Ͳ9Ah)u׃j(}M%ds͙'@)6#x Njz'r׿|=DLRL_j+*t"q #iXQu`N)m(SLS-zDPD^>+hŕPKMuJ9hBw*9Y'M4Ќ$0pQsqX 614wWAN!YiAB@H#M顳N( ; ]N? 
[run of unreadable compressed binary data omitted; no file names or text are recoverable from this span]
[tar entry: dipy-1.11.0/dipy/data/files/cb_2.npz — a zip-format NumPy .npz archive whose directory lists the members points2.npy, points.npy, offsets2.npy and offsets.npy, each with a plain NUMPY header; the binary payload is omitted here]
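The points/offsets pairing in cb_2.npz is the usual flat representation of a streamline bundle: one array holds every coordinate and the other marks where each streamline begins. A minimal loading sketch with plain NumPy, assuming exactly that layout — the member names come from the zip directory above, but reading offsets2 as start indices into points2 is an assumption, not something the archive itself documents:

import numpy as np

# np.load exposes each stored "<name>.npy" member of an .npz under "<name>".
with np.load("cb_2.npz") as archive:
    points = archive["points2"]    # assumed flat (n_points, 3) coordinate array
    offsets = archive["offsets2"]  # assumed index of each streamline's first point

# Split the flat coordinate array into one piece per streamline.
streamlines = np.split(points, offsets[1:])
print(len(streamlines), "streamlines,", len(points), "points in total")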
[tar entry: dipy-1.11.0/dipy/data/files/circle.npy — a single NumPy .npy array; only the opening "NUMPY … {'descr': '" header fragment is legible, payload omitted]
[tar entry for a first constraint archive whose header is not legible here (by its position between circle.npy and dki_constraint_2.npz, apparently dki_constraint.npz) — a zip with the members indices.npy, indptr.npy, format.npy, shape.npy and data.npy, i.e. the member layout scipy.sparse.save_npz writes; compressed payload omitted]
[tar entry: dipy-1.11.0/dipy/data/files/dki_constraint_2.npz — the same five sparse-matrix members (indices.npy, indptr.npy, format.npy, shape.npy, data.npy); compressed payload omitted]
[tar entry: dipy-1.11.0/dipy/data/files/dsi4169_b_table.txt — plain text; its opening records follow, reflowed to the file's own one-record-per-line "b x y z" layout, after which the extracted stream continues unchanged]
0.000000 0.000000 0.000000 0.000000
230.759995 0.000000 1.000000 0.000000
230.759995 0.000000 0.000000 1.000000
230.759995 0.000000 0.000000 -1.000000
230.759995 -1.000000 0.000000 0.000000
230.759995 0.000000 -1.000000 0.000000
230.759995 1.000000 0.000000 0.000000
461.519958 0.707107 -0.707107 0.000000
461.519958 0.707107 0.000000 0.707107
461.519958 0.000000 0.707107 0.707107
461.519958 0.707107 0.707107 0.000000
461.519958 0.000000 0.707107 -0.707107
461.519958 -0.707107 0.000000 -0.707107
461.519958 0.000000 -0.707107 0.707107
461.519958 -0.707107 -0.707107 0.000000
461.519958 0.000000 -0.707107 -0.707107
461.519958 -0.707107 0.707107 0.000000
461.519958 -0.707107 0.000000 0.707107
461.519958 0.707107 0.000000 -0.707107
692.280029 0.577350 0.577350 0.577350
692.280029 -0.577350 -0.577350 -0.577350
692.280029 0.577350 0.577350 -0.577350
692.280029 -0.577350 0.577350 0.577350
692.280029 -0.577350 0.577350 -0.577350
692.280029 0.577350 -0.577350 -0.577350
692.280029 0.577350 -0.577350 0.577350
692.280029 -0.577350 -0.577350 0.577350
923.039978 0.000000 0.000000 -1.000000
923.039978 0.000000 -1.000000 0.000000
923.039978 -1.000000 0.000000 0.000000
923.039978 0.000000 1.000000 0.000000
923.039978 1.000000 0.000000 0.000000
923.039978 0.000000 0.000000 1.000000
1153.800049 0.447214 0.000000 -0.894427
1153.800049 0.894427 -0.447214 0.000000
1153.800049 -0.894427 0.447214 0.000000
1153.800049 0.000000 -0.894427 -0.447214
1153.800049 0.000000 -0.894427 0.447214
1153.800049 0.447214 0.894427 0.000000
1153.800049 0.000000 -0.447214 -0.894427
1153.800049 -0.447214 0.894427 0.000000
1153.800049 0.000000 -0.447214 0.894427
1153.800049 0.447214 0.000000 0.894427
1153.800049 -0.894427 0.000000 0.447214
1153.800049 0.894427 0.000000 -0.447214
1153.800049 -0.447214 0.000000 0.894427
1153.800049 -0.894427 0.000000 -0.447214
1153.800049 -0.447214 0.000000 -0.894427
1153.800049 0.894427 0.447214 0.000000
1153.800049 0.894427 0.000000 0.447214
1153.800049 0.000000 0.447214 0.894427
1153.800049 0.000000 0.894427 -0.447214
1153.800049 -0.447214 -0.894427 0.000000
1153.800049 -0.894427 -0.447214 0.000000
1153.800049 0.000000 0.447214 -0.894427
1153.800049 0.447214 -0.894427 0.000000
1153.800049 0.000000 0.894427 0.447214
1384.560181 0.816497 -0.408248 -0.408248
1384.560181 -0.816497 -0.408248 -0.408248
1384.560181 -0.816497 0.408248 -0.408248
1384.560181 0.816497 0.408248 -0.408248
1384.560181 0.816497 -0.408248 0.408248
1384.560181 -0.816497 -0.408248 0.408248
1384.560181 -0.816497 0.408248 0.408248
1384.560181 0.408248 0.816497 0.408248
1384.560181 -0.408248 0.408248 0.816497
1384.560181 0.408248 -0.816497 0.408248
1384.560181 -0.408248 0.408248 -0.816497
1384.560181 -0.408248 0.816497 -0.408248
1384.560181 0.408248 0.408248 0.816497
1384.560181 -0.408248 -0.816497 -0.408248
1384.560181 -0.408248 0.816497 0.408248
1384.560181 -0.408248 -0.816497 0.408248
1384.560181 0.816497 0.408248 0.408248
1384.560181 -0.408248 -0.408248 -0.816497
1384.560181 0.408248 -0.408248 0.816497
1384.560181 0.408248 -0.408248 -0.816497
1384.560181 0.408248 -0.816497 -0.408248
1384.560181 -0.408248 -0.408248 0.816497
1384.560181 0.408248 0.816497 -0.408248
1384.560181 0.408248 0.408248 -0.816497
1846.079834 0.707107 0.707107 0.000000
1846.079834 0.707107 0.000000 0.707107
1846.079834 0.707107 -0.707107 0.000000
1846.079834
0.000000 -0.707107 -0.707107 1846.079834 0.000000 -0.707107 0.707107 1846.079834 -0.707107 0.000000 0.707107 1846.079834 -0.707107 0.000000 -0.707107 1846.079834 -0.707107 -0.707107 0.000000 1846.079834 0.000000 0.707107 -0.707107 1846.079834 0.707107 0.000000 -0.707107 1846.079834 -0.707107 0.707107 0.000000 1846.079834 0.000000 0.707107 0.707107 2076.840088 0.666667 0.333333 -0.666667 2076.840088 0.000000 0.000000 1.000000 2076.840088 0.666667 0.666667 -0.333333 2076.840088 0.333333 -0.666667 -0.666667 2076.840088 0.333333 -0.666667 0.666667 2076.840088 0.666667 0.333333 0.666667 2076.840088 -0.333333 0.666667 -0.666667 2076.840088 0.000000 0.000000 -1.000000 2076.840088 -0.666667 0.333333 0.666667 2076.840088 0.666667 -0.333333 0.666667 2076.840088 0.333333 0.666667 -0.666667 2076.840088 0.666667 -0.333333 -0.666667 2076.840088 0.333333 0.666667 0.666667 2076.840088 -1.000000 0.000000 0.000000 2076.840088 1.000000 0.000000 0.000000 2076.840088 0.666667 -0.666667 0.333333 2076.840088 0.666667 -0.666667 -0.333333 2076.840088 0.000000 -1.000000 0.000000 2076.840088 0.666667 0.666667 0.333333 2076.840088 -0.666667 0.666667 -0.333333 2076.840088 -0.666667 0.333333 -0.666667 2076.840088 -0.666667 0.666667 0.333333 2076.840088 -0.333333 0.666667 0.666667 2076.840088 -0.333333 -0.666667 -0.666667 2076.840088 -0.666667 -0.666667 -0.333333 2076.840088 -0.666667 -0.333333 -0.666667 2076.840088 -0.333333 -0.666667 0.666667 2076.840088 0.000000 1.000000 0.000000 2076.840088 -0.666667 -0.666667 0.333333 2076.840088 -0.666667 -0.333333 0.666667 2307.600098 0.948683 -0.316228 0.000000 2307.600098 0.948683 0.316228 0.000000 2307.600098 0.316228 -0.948683 0.000000 2307.600098 -0.948683 0.000000 -0.316228 2307.600098 0.316228 0.000000 0.948683 2307.600098 0.948683 0.000000 -0.316228 2307.600098 0.000000 0.948683 -0.316228 2307.600098 -0.316228 0.948683 0.000000 2307.600098 -0.948683 0.000000 0.316228 2307.600098 0.000000 -0.316228 -0.948683 2307.600098 0.000000 0.316228 0.948683 2307.600098 0.000000 0.948683 0.316228 2307.600098 0.948683 0.000000 0.316228 2307.600098 0.000000 -0.316228 0.948683 2307.600098 -0.316228 0.000000 0.948683 2307.600098 0.316228 0.000000 -0.948683 2307.600098 -0.948683 -0.316228 0.000000 2307.600098 0.000000 -0.948683 0.316228 2307.600098 -0.316228 -0.948683 0.000000 2307.600098 -0.316228 0.000000 -0.948683 2307.600098 0.000000 -0.948683 -0.316228 2307.600098 0.000000 0.316228 -0.948683 2307.600098 -0.948683 0.316228 0.000000 2307.600098 0.316228 0.948683 0.000000 2538.360107 0.301511 0.904534 0.301511 2538.360107 -0.301511 -0.904534 0.301511 2538.360107 -0.904534 0.301511 0.301511 2538.360107 0.301511 -0.904534 0.301511 2538.360107 0.904534 0.301511 -0.301511 2538.360107 0.301511 0.301511 0.904534 2538.360107 0.301511 0.301511 -0.904534 2538.360107 0.904534 0.301511 0.301511 2538.360107 -0.904534 0.301511 -0.301511 2538.360107 0.301511 -0.301511 0.904534 2538.360107 0.301511 -0.301511 -0.904534 2538.360107 -0.301511 -0.301511 -0.904534 2538.360107 -0.301511 -0.301511 0.904534 2538.360107 0.301511 -0.904534 -0.301511 2538.360107 -0.301511 0.301511 -0.904534 2538.360107 0.301511 0.904534 -0.301511 2538.360107 -0.301511 -0.904534 -0.301511 2538.360107 -0.301511 0.904534 -0.301511 2538.360107 -0.301511 0.904534 0.301511 2538.360107 -0.301511 0.301511 0.904534 2538.360107 -0.904534 -0.301511 0.301511 2538.360107 -0.904534 -0.301511 -0.301511 2538.360107 0.904534 -0.301511 -0.301511 2538.360107 0.904534 -0.301511 0.301511 2769.120117 0.577350 -0.577350 -0.577350 2769.120117 -0.577350 
-0.577350 0.577350 2769.120117 0.577350 0.577350 0.577350 2769.120117 0.577350 -0.577350 0.577350 2769.120117 0.577350 0.577350 -0.577350 2769.120117 -0.577350 0.577350 -0.577350 2769.120117 -0.577350 -0.577350 -0.577350 2769.120117 -0.577350 0.577350 0.577350 2999.879883 0.554700 0.832050 0.000000 2999.879883 0.554700 0.000000 -0.832050 2999.879883 -0.554700 0.000000 0.832050 2999.879883 0.000000 -0.554700 0.832050 2999.879883 -0.832050 0.000000 -0.554700 2999.879883 -0.554700 0.832050 0.000000 2999.879883 0.832050 0.000000 0.554700 2999.879883 0.832050 0.554700 0.000000 2999.879883 -0.832050 0.554700 0.000000 2999.879883 0.000000 -0.832050 -0.554700 2999.879883 0.554700 -0.832050 0.000000 2999.879883 0.000000 0.832050 -0.554700 2999.879883 0.832050 -0.554700 0.000000 2999.879883 -0.832050 0.000000 0.554700 2999.879883 0.000000 -0.832050 0.554700 2999.879883 0.000000 0.554700 -0.832050 2999.879883 0.000000 0.832050 0.554700 2999.879883 0.554700 0.000000 0.832050 2999.879883 0.832050 0.000000 -0.554700 2999.879883 -0.832050 -0.554700 0.000000 2999.879883 -0.554700 -0.832050 0.000000 2999.879883 0.000000 0.554700 0.832050 2999.879883 -0.554700 0.000000 -0.832050 2999.879883 0.000000 -0.554700 -0.832050 3230.640381 0.267261 -0.534522 -0.801784 3230.640381 -0.801784 -0.267261 0.534522 3230.640381 0.267261 -0.801784 0.534522 3230.640381 0.801784 -0.534522 -0.267261 3230.640381 -0.534522 -0.801784 0.267261 3230.640381 -0.267261 -0.534522 0.801784 3230.640381 0.534522 0.801784 -0.267261 3230.640381 -0.534522 -0.267261 0.801784 3230.640381 0.534522 0.801784 0.267261 3230.640381 0.267261 -0.534522 0.801784 3230.640381 -0.801784 -0.267261 -0.534522 3230.640381 -0.534522 -0.267261 -0.801784 3230.640381 -0.267261 0.801784 0.534522 3230.640381 -0.801784 -0.534522 0.267261 3230.640381 -0.267261 -0.534522 -0.801784 3230.640381 -0.801784 -0.534522 -0.267261 3230.640381 -0.267261 0.801784 -0.534522 3230.640381 -0.267261 -0.801784 -0.534522 3230.640381 -0.267261 -0.801784 0.534522 3230.640381 0.267261 -0.801784 -0.534522 3230.640381 -0.534522 -0.801784 -0.267261 3230.640381 -0.801784 0.267261 -0.534522 3230.640381 0.534522 0.267261 0.801784 3230.640381 0.801784 -0.267261 0.534522 3230.640381 0.267261 0.534522 -0.801784 3230.640381 -0.534522 0.801784 0.267261 3230.640381 0.534522 -0.267261 -0.801784 3230.640381 0.267261 0.534522 0.801784 3230.640381 -0.534522 0.801784 -0.267261 3230.640381 0.267261 0.801784 -0.534522 3230.640381 -0.267261 0.534522 0.801784 3230.640381 0.267261 0.801784 0.534522 3230.640381 -0.801784 0.534522 0.267261 3230.640381 -0.801784 0.534522 -0.267261 3230.640381 -0.534522 0.267261 -0.801784 3230.640381 -0.801784 0.267261 0.534522 3230.640381 0.534522 -0.801784 0.267261 3230.640381 0.534522 -0.801784 -0.267261 3230.640381 0.534522 -0.267261 0.801784 3230.640381 0.801784 -0.267261 -0.534522 3230.640381 0.801784 0.267261 -0.534522 3230.640381 0.801784 -0.534522 0.267261 3230.640381 0.801784 0.534522 -0.267261 3230.640381 -0.267261 0.534522 -0.801784 3230.640381 0.801784 0.534522 0.267261 3230.640381 0.534522 0.267261 -0.801784 3230.640381 0.801784 0.267261 0.534522 3230.640381 -0.534522 0.267261 0.801784 3692.159912 0.000000 1.000000 0.000000 3692.159912 0.000000 0.000000 -1.000000 3692.159912 0.000000 0.000000 1.000000 3692.159912 -1.000000 0.000000 0.000000 3692.159912 1.000000 0.000000 0.000000 3692.159912 0.000000 -1.000000 0.000000 3922.919922 -0.242536 -0.970143 0.000000 3922.919922 0.970143 0.242536 0.000000 3922.919922 0.000000 -0.970143 0.242536 3922.919922 -0.242536 0.970143 
0.000000 3922.919922 -0.242536 0.000000 -0.970143 3922.919922 0.000000 0.242536 -0.970143 3922.919922 0.000000 0.242536 0.970143 3922.919922 -0.242536 0.000000 0.970143 3922.919922 0.000000 -0.242536 0.970143 3922.919922 -0.485071 0.727607 0.485071 3922.919922 -0.485071 0.727607 -0.485071 3922.919922 -0.485071 0.485071 -0.727607 3922.919922 0.000000 -0.242536 -0.970143 3922.919922 0.000000 -0.970143 -0.242536 3922.919922 -0.485071 0.485071 0.727607 3922.919922 0.970143 0.000000 0.242536 3922.919922 0.485071 -0.485071 0.727607 3922.919922 0.485071 -0.727607 -0.485071 3922.919922 0.485071 -0.727607 0.485071 3922.919922 0.242536 0.000000 0.970143 3922.919922 -0.727607 0.485071 -0.485071 3922.919922 0.485071 -0.485071 -0.727607 3922.919922 0.970143 0.000000 -0.242536 3922.919922 0.242536 0.000000 -0.970143 3922.919922 -0.485071 -0.727607 -0.485071 3922.919922 0.242536 0.970143 0.000000 3922.919922 0.970143 -0.242536 0.000000 3922.919922 0.727607 -0.485071 0.485071 3922.919922 -0.727607 -0.485071 -0.485071 3922.919922 -0.970143 0.000000 0.242536 3922.919922 0.727607 0.485071 0.485071 3922.919922 -0.485071 -0.727607 0.485071 3922.919922 -0.970143 0.000000 -0.242536 3922.919922 -0.485071 -0.485071 -0.727607 3922.919922 -0.727607 -0.485071 0.485071 3922.919922 0.000000 0.970143 -0.242536 3922.919922 0.485071 0.727607 0.485071 3922.919922 0.000000 0.970143 0.242536 3922.919922 0.727607 -0.485071 -0.485071 3922.919922 -0.970143 -0.242536 0.000000 3922.919922 0.485071 0.727607 -0.485071 3922.919922 0.485071 0.485071 0.727607 3922.919922 -0.727607 0.485071 0.485071 3922.919922 0.485071 0.485071 -0.727607 3922.919922 0.242536 -0.970143 0.000000 3922.919922 -0.970143 0.242536 0.000000 3922.919922 -0.485071 -0.485071 0.727607 3922.919922 0.727607 0.485071 -0.485071 4153.679688 0.235702 0.942809 0.235702 4153.679688 -0.942809 0.235702 0.235702 4153.679688 0.707107 0.000000 0.707107 4153.679688 0.000000 0.707107 -0.707107 4153.679688 0.707107 0.000000 -0.707107 4153.679688 -0.235702 0.235702 0.942809 4153.679688 -0.707107 0.000000 0.707107 4153.679688 -0.707107 0.000000 -0.707107 4153.679688 -0.235702 0.942809 0.235702 4153.679688 -0.235702 0.942809 -0.235702 4153.679688 -0.707107 -0.707107 0.000000 4153.679688 0.000000 -0.707107 -0.707107 4153.679688 0.000000 -0.707107 0.707107 4153.679688 -0.942809 0.235702 -0.235702 4153.679688 0.942809 0.235702 0.235702 4153.679688 0.000000 0.707107 0.707107 4153.679688 -0.942809 -0.235702 -0.235702 4153.679688 0.235702 -0.942809 -0.235702 4153.679688 -0.942809 -0.235702 0.235702 4153.679688 0.942809 0.235702 -0.235702 4153.679688 0.235702 -0.942809 0.235702 4153.679688 -0.235702 -0.235702 -0.942809 4153.679688 0.235702 -0.235702 -0.942809 4153.679688 -0.235702 -0.942809 0.235702 4153.679688 0.235702 0.942809 -0.235702 4153.679688 0.235702 -0.235702 0.942809 4153.679688 0.707107 -0.707107 0.000000 4153.679688 0.942809 -0.235702 -0.235702 4153.679688 -0.235702 -0.942809 -0.235702 4153.679688 -0.235702 0.235702 -0.942809 4153.679688 0.707107 0.707107 0.000000 4153.679688 0.235702 0.235702 -0.942809 4153.679688 -0.235702 -0.235702 0.942809 4153.679688 0.942809 -0.235702 0.235702 4153.679688 0.235702 0.235702 0.942809 4153.679688 -0.707107 0.707107 0.000000 4384.440430 0.688247 -0.688247 0.229416 4384.440430 -0.688247 -0.688247 0.229416 4384.440430 -0.229416 -0.688247 0.688247 4384.440430 -0.229416 -0.688247 -0.688247 4384.440430 0.688247 -0.229416 0.688247 4384.440430 0.688247 0.688247 0.229416 4384.440430 0.688247 0.688247 -0.229416 4384.440430 -0.688247 -0.688247 
-0.229416 4384.440430 0.688247 -0.229416 -0.688247 4384.440430 0.688247 0.229416 0.688247 4384.440430 0.688247 0.229416 -0.688247 4384.440430 -0.688247 -0.229416 0.688247 4384.440430 0.688247 -0.688247 -0.229416 4384.440430 0.229416 -0.688247 -0.688247 4384.440430 0.229416 -0.688247 0.688247 4384.440430 -0.229416 0.688247 -0.688247 4384.440430 -0.688247 0.688247 -0.229416 4384.440430 0.229416 0.688247 -0.688247 4384.440430 0.229416 0.688247 0.688247 4384.440430 -0.688247 0.229416 0.688247 4384.440430 -0.688247 0.229416 -0.688247 4384.440430 -0.688247 0.688247 0.229416 4384.440430 -0.688247 -0.229416 -0.688247 4384.440430 -0.229416 0.688247 0.688247 4615.200195 0.000000 -0.894427 0.447214 4615.200195 -0.894427 -0.447214 0.000000 4615.200195 0.000000 0.894427 -0.447214 4615.200195 0.000000 0.447214 -0.894427 4615.200195 0.894427 -0.447214 0.000000 4615.200195 0.000000 0.894427 0.447214 4615.200195 -0.894427 0.000000 -0.447214 4615.200195 -0.447214 0.000000 -0.894427 4615.200195 0.447214 0.894427 0.000000 4615.200195 0.894427 0.000000 0.447214 4615.200195 0.894427 0.000000 -0.447214 4615.200195 -0.447214 -0.894427 0.000000 4615.200195 0.000000 -0.894427 -0.447214 4615.200195 -0.894427 0.000000 0.447214 4615.200195 -0.447214 0.000000 0.894427 4615.200195 0.000000 -0.447214 0.894427 4615.200195 -0.447214 0.894427 0.000000 4615.200195 -0.894427 0.447214 0.000000 4615.200195 0.447214 0.000000 -0.894427 4615.200195 0.000000 0.447214 0.894427 4615.200195 0.894427 0.447214 0.000000 4615.200195 0.447214 -0.894427 0.000000 4615.200195 0.447214 0.000000 0.894427 4615.200195 0.000000 -0.447214 -0.894427 4845.959961 -0.872872 -0.436436 -0.218218 4845.959961 -0.872872 0.436436 -0.218218 4845.959961 -0.872872 -0.436436 0.218218 4845.959961 -0.872872 -0.218218 -0.436436 4845.959961 -0.872872 0.218218 0.436436 4845.959961 -0.436436 -0.218218 -0.872872 4845.959961 -0.872872 0.218218 -0.436436 4845.959961 -0.872872 -0.218218 0.436436 4845.959961 -0.436436 0.872872 0.218218 4845.959961 -0.218218 -0.872872 0.436436 4845.959961 -0.436436 0.218218 0.872872 4845.959961 -0.436436 -0.872872 0.218218 4845.959961 -0.436436 0.218218 -0.872872 4845.959961 -0.436436 0.872872 -0.218218 4845.959961 -0.436436 -0.872872 -0.218218 4845.959961 -0.218218 -0.872872 -0.436436 4845.959961 -0.436436 -0.218218 0.872872 4845.959961 -0.872872 0.436436 0.218218 4845.959961 0.872872 -0.218218 -0.436436 4845.959961 -0.218218 -0.436436 -0.872872 4845.959961 -0.218218 0.872872 0.436436 4845.959961 0.872872 0.436436 -0.218218 4845.959961 0.872872 0.218218 0.436436 4845.959961 0.872872 0.218218 -0.436436 4845.959961 0.218218 -0.872872 0.436436 4845.959961 0.218218 -0.436436 -0.872872 4845.959961 0.218218 -0.436436 0.872872 4845.959961 0.872872 -0.218218 0.436436 4845.959961 0.218218 0.436436 -0.872872 4845.959961 -0.218218 0.872872 -0.436436 4845.959961 0.218218 0.436436 0.872872 4845.959961 0.218218 0.872872 0.436436 4845.959961 0.436436 -0.872872 -0.218218 4845.959961 0.436436 -0.872872 0.218218 4845.959961 0.436436 -0.218218 -0.872872 4845.959961 0.436436 -0.218218 0.872872 4845.959961 0.436436 0.218218 -0.872872 4845.959961 0.436436 0.218218 0.872872 4845.959961 0.436436 0.872872 -0.218218 4845.959961 0.436436 0.872872 0.218218 4845.959961 0.218218 0.872872 -0.436436 4845.959961 0.872872 -0.436436 -0.218218 4845.959961 0.218218 -0.872872 -0.436436 4845.959961 0.872872 -0.436436 0.218218 4845.959961 0.872872 0.436436 0.218218 4845.959961 -0.218218 0.436436 -0.872872 4845.959961 -0.218218 -0.436436 0.872872 4845.959961 -0.218218 0.436436 
0.872872 5076.720215 -0.639602 0.426401 0.639602 5076.720215 -0.426401 -0.639602 -0.639602 5076.720215 -0.426401 0.639602 0.639602 5076.720215 -0.639602 0.426401 -0.639602 5076.720215 -0.426401 -0.639602 0.639602 5076.720215 0.639602 0.639602 0.426401 5076.720215 0.639602 -0.639602 -0.426401 5076.720215 0.639602 0.426401 -0.639602 5076.720215 0.639602 0.426401 0.639602 5076.720215 0.426401 -0.639602 -0.639602 5076.720215 -0.639602 -0.639602 -0.426401 5076.720215 -0.639602 0.639602 -0.426401 5076.720215 -0.639602 0.639602 0.426401 5076.720215 0.639602 0.639602 -0.426401 5076.720215 -0.639602 -0.426401 -0.639602 5076.720215 0.426401 -0.639602 0.639602 5076.720215 0.426401 0.639602 -0.639602 5076.720215 0.639602 -0.426401 -0.639602 5076.720215 0.639602 -0.639602 0.426401 5076.720215 0.426401 0.639602 0.639602 5076.720215 -0.639602 -0.639602 0.426401 5076.720215 -0.639602 -0.426401 0.639602 5076.720215 -0.426401 0.639602 -0.639602 5076.720215 0.639602 -0.426401 0.639602 5538.240723 -0.408248 -0.816497 0.408248 5538.240723 -0.408248 0.816497 0.408248 5538.240723 -0.408248 -0.816497 -0.408248 5538.240723 0.816497 -0.408248 -0.408248 5538.240723 -0.816497 0.408248 -0.408248 5538.240723 -0.408248 0.816497 -0.408248 5538.240723 -0.408248 0.408248 -0.816497 5538.240723 -0.816497 0.408248 0.408248 5538.240723 -0.408248 0.408248 0.816497 5538.240723 -0.408248 -0.408248 -0.816497 5538.240723 -0.816497 -0.408248 -0.408248 5538.240723 0.816497 0.408248 0.408248 5538.240723 0.408248 0.816497 0.408248 5538.240723 0.408248 -0.408248 0.816497 5538.240723 0.408248 0.408248 -0.816497 5538.240723 0.408248 0.408248 0.816497 5538.240723 -0.816497 -0.408248 0.408248 5538.240723 0.816497 0.408248 -0.408248 5538.240723 0.408248 -0.816497 -0.408248 5538.240723 0.408248 -0.408248 -0.816497 5538.240723 0.816497 -0.408248 0.408248 5538.240723 0.408248 0.816497 -0.408248 5538.240723 -0.408248 -0.408248 0.816497 5538.240723 0.408248 -0.816497 0.408248 5769.000000 1.000000 0.000000 0.000000 5769.000000 0.600000 -0.800000 0.000000 5769.000000 -0.800000 0.600000 0.000000 5769.000000 -0.600000 -0.800000 0.000000 5769.000000 -1.000000 0.000000 0.000000 5769.000000 -0.600000 0.000000 -0.800000 5769.000000 -0.600000 0.000000 0.800000 5769.000000 -0.800000 -0.600000 0.000000 5769.000000 0.800000 0.000000 -0.600000 5769.000000 0.800000 0.600000 0.000000 5769.000000 0.800000 -0.600000 0.000000 5769.000000 0.000000 -1.000000 0.000000 5769.000000 0.000000 -0.800000 -0.600000 5769.000000 0.000000 -0.800000 0.600000 5769.000000 0.000000 -0.600000 -0.800000 5769.000000 0.000000 -0.600000 0.800000 5769.000000 -0.800000 0.000000 -0.600000 5769.000000 0.000000 0.000000 -1.000000 5769.000000 0.600000 0.800000 0.000000 5769.000000 0.000000 0.000000 1.000000 5769.000000 -0.800000 0.000000 0.600000 5769.000000 0.000000 0.600000 -0.800000 5769.000000 0.000000 0.600000 0.800000 5769.000000 0.000000 0.800000 -0.600000 5769.000000 0.000000 0.800000 0.600000 5769.000000 0.800000 0.000000 0.600000 5769.000000 0.000000 1.000000 0.000000 5769.000000 -0.600000 0.800000 0.000000 5769.000000 0.600000 0.000000 0.800000 5769.000000 0.600000 0.000000 -0.800000 5999.759766 0.784465 -0.588348 0.196116 5999.759766 0.784465 -0.588348 -0.196116 5999.759766 -0.784465 0.588348 -0.196116 5999.759766 -0.784465 -0.196116 0.588348 5999.759766 0.588348 -0.196116 -0.784465 5999.759766 -0.784465 0.588348 0.196116 5999.759766 -0.980581 0.000000 0.196116 5999.759766 0.588348 -0.196116 0.784465 5999.759766 -0.784465 0.196116 0.588348 5999.759766 0.588348 0.784465 0.196116 
5999.759766 -0.588348 -0.784465 -0.196116 5999.759766 0.588348 0.784465 -0.196116 5999.759766 -0.588348 -0.784465 0.196116 5999.759766 -0.980581 0.196116 0.000000 5999.759766 0.588348 0.196116 -0.784465 5999.759766 -0.784465 -0.588348 -0.196116 5999.759766 0.784465 -0.196116 -0.588348 5999.759766 -0.784465 -0.588348 0.196116 5999.759766 0.588348 0.196116 0.784465 5999.759766 -0.784465 0.196116 -0.588348 5999.759766 -0.784465 -0.196116 -0.588348 5999.759766 0.196116 0.588348 -0.784465 5999.759766 0.588348 -0.784465 -0.196116 5999.759766 0.980581 0.196116 0.000000 5999.759766 0.784465 0.196116 0.588348 5999.759766 0.000000 -0.196116 -0.980581 5999.759766 0.000000 -0.980581 0.196116 5999.759766 0.000000 -0.980581 -0.196116 5999.759766 -0.196116 0.980581 0.000000 5999.759766 -0.196116 0.784465 0.588348 5999.759766 -0.196116 0.784465 -0.588348 5999.759766 -0.196116 0.588348 0.784465 5999.759766 -0.196116 0.588348 -0.784465 5999.759766 0.784465 0.588348 -0.196116 5999.759766 -0.196116 0.000000 0.980581 5999.759766 0.784465 0.588348 0.196116 5999.759766 0.980581 -0.196116 0.000000 5999.759766 -0.196116 -0.980581 0.000000 5999.759766 -0.196116 -0.784465 -0.588348 5999.759766 -0.196116 0.000000 -0.980581 5999.759766 0.980581 0.000000 -0.196116 5999.759766 0.980581 0.000000 0.196116 5999.759766 -0.196116 -0.784465 0.588348 5999.759766 -0.196116 -0.588348 -0.784465 5999.759766 -0.980581 0.000000 -0.196116 5999.759766 0.588348 -0.784465 0.196116 5999.759766 0.000000 0.196116 -0.980581 5999.759766 0.000000 0.196116 0.980581 5999.759766 -0.588348 -0.196116 -0.784465 5999.759766 -0.588348 -0.196116 0.784465 5999.759766 -0.588348 0.196116 -0.784465 5999.759766 0.196116 0.980581 0.000000 5999.759766 0.196116 0.784465 0.588348 5999.759766 -0.588348 0.196116 0.784465 5999.759766 0.196116 0.784465 -0.588348 5999.759766 0.196116 0.588348 0.784465 5999.759766 -0.196116 -0.588348 0.784465 5999.759766 -0.588348 0.784465 -0.196116 5999.759766 -0.588348 0.784465 0.196116 5999.759766 0.784465 -0.196116 0.588348 5999.759766 0.196116 0.000000 0.980581 5999.759766 0.196116 0.000000 -0.980581 5999.759766 0.196116 -0.588348 0.784465 5999.759766 0.196116 -0.588348 -0.784465 5999.759766 0.196116 -0.784465 0.588348 5999.759766 0.196116 -0.784465 -0.588348 5999.759766 0.196116 -0.980581 0.000000 5999.759766 0.000000 0.980581 0.196116 5999.759766 0.000000 0.980581 -0.196116 5999.759766 0.784465 0.196116 -0.588348 5999.759766 -0.980581 -0.196116 0.000000 5999.759766 0.000000 -0.196116 0.980581 6230.519531 0.192450 0.962250 -0.192450 6230.519531 0.577350 -0.577350 0.577350 6230.519531 0.577350 -0.577350 -0.577350 6230.519531 -0.962250 0.192450 0.192450 6230.519531 -0.192450 0.962250 0.192450 6230.519531 -0.962250 -0.192450 0.192450 6230.519531 -0.192450 0.962250 -0.192450 6230.519531 -0.192450 -0.962250 0.192450 6230.519531 -0.962250 -0.192450 -0.192450 6230.519531 -0.192450 0.192450 0.962250 6230.519531 -0.192450 0.192450 -0.962250 6230.519531 -0.192450 -0.192450 0.962250 6230.519531 0.192450 -0.962250 -0.192450 6230.519531 -0.192450 -0.192450 -0.962250 6230.519531 -0.192450 -0.962250 -0.192450 6230.519531 0.192450 0.962250 0.192450 6230.519531 -0.577350 -0.577350 -0.577350 6230.519531 -0.962250 0.192450 -0.192450 6230.519531 0.192450 0.192450 -0.962250 6230.519531 -0.577350 -0.577350 0.577350 6230.519531 0.962250 -0.192450 -0.192450 6230.519531 0.577350 0.577350 -0.577350 6230.519531 0.577350 0.577350 0.577350 6230.519531 0.962250 -0.192450 0.192450 6230.519531 0.962250 0.192450 0.192450 6230.519531 0.962250 0.192450 
-0.192450 6230.519531 0.192450 -0.192450 0.962250 6230.519531 0.192450 0.192450 0.962250 6230.519531 0.192450 -0.192450 -0.962250 6230.519531 0.192450 -0.962250 0.192450 6230.519531 -0.577350 0.577350 0.577350 6230.519531 -0.577350 0.577350 -0.577350 6692.040039 0.928477 0.000000 0.371391 6692.040039 0.928477 0.000000 -0.371391 6692.040039 -0.742781 -0.371391 0.557086 6692.040039 -0.742781 -0.371391 -0.557086 6692.040039 0.928477 -0.371391 0.000000 6692.040039 0.371391 -0.928477 0.000000 6692.040039 -0.742781 -0.557086 -0.371391 6692.040039 0.371391 -0.742781 -0.557086 6692.040039 0.742781 -0.371391 0.557086 6692.040039 0.742781 -0.371391 -0.557086 6692.040039 0.742781 -0.557086 0.371391 6692.040039 0.742781 0.557086 -0.371391 6692.040039 0.742781 0.557086 0.371391 6692.040039 0.742781 -0.557086 -0.371391 6692.040039 0.557086 0.742781 0.371391 6692.040039 0.557086 0.742781 -0.371391 6692.040039 0.557086 0.371391 0.742781 6692.040039 0.557086 0.371391 -0.742781 6692.040039 0.557086 -0.371391 0.742781 6692.040039 0.557086 -0.371391 -0.742781 6692.040039 0.557086 -0.742781 0.371391 6692.040039 0.557086 -0.742781 -0.371391 6692.040039 0.371391 0.928477 0.000000 6692.040039 0.371391 0.742781 0.557086 6692.040039 0.371391 0.742781 -0.557086 6692.040039 0.371391 0.557086 0.742781 6692.040039 0.371391 0.557086 -0.742781 6692.040039 0.371391 0.000000 0.928477 6692.040039 0.371391 0.000000 -0.928477 6692.040039 0.371391 -0.557086 0.742781 6692.040039 -0.928477 0.371391 0.000000 6692.040039 0.371391 -0.557086 -0.742781 6692.040039 0.371391 -0.742781 0.557086 6692.040039 -0.742781 -0.557086 0.371391 6692.040039 -0.928477 0.000000 0.371391 6692.040039 -0.742781 0.371391 -0.557086 6692.040039 0.000000 0.928477 0.371391 6692.040039 0.742781 0.371391 -0.557086 6692.040039 -0.371391 0.742781 0.557086 6692.040039 -0.371391 0.742781 -0.557086 6692.040039 -0.371391 0.557086 0.742781 6692.040039 -0.371391 0.557086 -0.742781 6692.040039 -0.371391 0.000000 0.928477 6692.040039 -0.557086 -0.742781 -0.371391 6692.040039 -0.557086 -0.742781 0.371391 6692.040039 -0.371391 0.000000 -0.928477 6692.040039 -0.371391 0.928477 0.000000 6692.040039 -0.371391 -0.557086 0.742781 6692.040039 -0.371391 -0.742781 0.557086 6692.040039 -0.371391 -0.742781 -0.557086 6692.040039 -0.371391 -0.928477 0.000000 6692.040039 -0.557086 -0.371391 -0.742781 6692.040039 -0.557086 -0.371391 0.742781 6692.040039 -0.557086 0.742781 0.371391 6692.040039 -0.557086 0.742781 -0.371391 6692.040039 -0.557086 0.371391 0.742781 6692.040039 -0.557086 0.371391 -0.742781 6692.040039 -0.371391 -0.557086 -0.742781 6692.040039 -0.928477 -0.371391 0.000000 6692.040039 0.742781 0.371391 0.557086 6692.040039 0.000000 -0.928477 0.371391 6692.040039 -0.742781 0.371391 0.557086 6692.040039 -0.928477 0.000000 -0.371391 6692.040039 0.000000 0.928477 -0.371391 6692.040039 -0.742781 0.557086 -0.371391 6692.040039 0.000000 -0.371391 -0.928477 6692.040039 0.928477 0.371391 0.000000 6692.040039 0.000000 -0.928477 -0.371391 6692.040039 0.000000 0.371391 0.928477 6692.040039 -0.742781 0.557086 0.371391 6692.040039 0.000000 0.371391 -0.928477 6692.040039 0.000000 -0.371391 0.928477 6922.800781 -0.182574 0.365148 -0.912871 6922.800781 -0.365148 -0.182574 0.912871 6922.800781 -0.365148 -0.182574 -0.912871 6922.800781 -0.365148 -0.912871 0.182574 6922.800781 0.182574 -0.912871 0.365148 6922.800781 -0.365148 -0.912871 -0.182574 6922.800781 0.182574 -0.912871 -0.365148 6922.800781 0.182574 -0.365148 -0.912871 6922.800781 0.365148 0.912871 0.182574 6922.800781 0.182574 0.365148 
-0.912871 6922.800781 0.365148 0.912871 -0.182574 6922.800781 -0.182574 -0.365148 0.912871 6922.800781 0.365148 -0.912871 0.182574 6922.800781 -0.182574 -0.365148 -0.912871 6922.800781 -0.182574 -0.912871 0.365148 6922.800781 -0.182574 -0.912871 -0.365148 6922.800781 -0.365148 0.912871 0.182574 6922.800781 -0.182574 0.365148 0.912871 6922.800781 0.182574 0.912871 0.365148 6922.800781 0.365148 -0.912871 -0.182574 6922.800781 0.365148 -0.182574 -0.912871 6922.800781 -0.182574 0.912871 -0.365148 6922.800781 0.182574 0.912871 -0.365148 6922.800781 0.365148 0.182574 -0.912871 6922.800781 0.365148 0.182574 0.912871 6922.800781 0.182574 0.365148 0.912871 6922.800781 -0.182574 0.912871 0.365148 6922.800781 -0.365148 0.182574 0.912871 6922.800781 -0.365148 0.182574 -0.912871 6922.800781 0.365148 -0.182574 0.912871 6922.800781 -0.365148 0.912871 -0.182574 6922.800781 0.182574 -0.365148 0.912871 6922.800781 -0.912871 0.182574 0.365148 6922.800781 0.912871 0.365148 -0.182574 6922.800781 -0.912871 -0.182574 -0.365148 6922.800781 0.912871 -0.182574 -0.365148 6922.800781 0.912871 0.365148 0.182574 6922.800781 0.912871 -0.365148 0.182574 6922.800781 -0.912871 -0.365148 0.182574 6922.800781 -0.912871 -0.365148 -0.182574 6922.800781 0.912871 0.182574 0.365148 6922.800781 0.912871 0.182574 -0.365148 6922.800781 -0.912871 -0.182574 0.365148 6922.800781 -0.912871 0.365148 -0.182574 6922.800781 0.912871 -0.365148 -0.182574 6922.800781 -0.912871 0.365148 0.182574 6922.800781 -0.912871 0.182574 -0.365148 6922.800781 0.912871 -0.182574 0.365148 7384.319336 0.707107 0.000000 -0.707107 7384.319336 -0.707107 0.707107 0.000000 7384.319336 0.707107 -0.707107 0.000000 7384.319336 0.707107 0.707107 0.000000 7384.319336 0.000000 -0.707107 0.707107 7384.319336 0.707107 0.000000 0.707107 7384.319336 -0.707107 -0.707107 0.000000 7384.319336 -0.707107 0.000000 0.707107 7384.319336 0.000000 0.707107 -0.707107 7384.319336 0.000000 0.707107 0.707107 7384.319336 -0.707107 0.000000 -0.707107 7384.319336 0.000000 -0.707107 -0.707107 7615.080078 0.696311 0.696311 0.174078 7615.080078 0.696311 0.696311 -0.174078 7615.080078 0.174078 -0.696311 0.696311 7615.080078 -0.696311 0.174078 0.696311 7615.080078 -0.696311 0.696311 0.174078 7615.080078 -0.696311 0.696311 -0.174078 7615.080078 0.174078 -0.696311 -0.696311 7615.080078 0.348155 -0.870388 -0.348155 7615.080078 -0.696311 0.174078 -0.696311 7615.080078 0.348155 0.870388 -0.348155 7615.080078 -0.696311 -0.174078 0.696311 7615.080078 -0.696311 -0.174078 -0.696311 7615.080078 0.174078 0.696311 0.696311 7615.080078 0.348155 0.870388 0.348155 7615.080078 0.174078 0.696311 -0.696311 7615.080078 0.870388 -0.348155 0.348155 7615.080078 -0.696311 -0.696311 -0.174078 7615.080078 0.870388 -0.348155 -0.348155 7615.080078 -0.696311 -0.696311 0.174078 7615.080078 0.696311 0.174078 0.696311 7615.080078 0.696311 0.174078 -0.696311 7615.080078 -0.870388 0.348155 0.348155 7615.080078 0.348155 -0.348155 0.870388 7615.080078 0.348155 -0.348155 -0.870388 7615.080078 0.696311 -0.174078 0.696311 7615.080078 0.696311 -0.174078 -0.696311 7615.080078 -0.348155 -0.870388 -0.348155 7615.080078 -0.348155 -0.870388 0.348155 7615.080078 0.348155 0.348155 -0.870388 7615.080078 0.348155 0.348155 0.870388 7615.080078 -0.348155 -0.348155 -0.870388 7615.080078 -0.348155 -0.348155 0.870388 7615.080078 0.870388 0.348155 -0.348155 7615.080078 -0.174078 0.696311 0.696311 7615.080078 -0.348155 0.870388 0.348155 7615.080078 -0.348155 0.348155 -0.870388 7615.080078 -0.174078 0.696311 -0.696311 7615.080078 -0.348155 0.348155 
0.870388 7615.080078 -0.348155 0.870388 -0.348155 7615.080078 0.696311 -0.696311 0.174078 7615.080078 0.696311 -0.696311 -0.174078 7615.080078 -0.870388 -0.348155 -0.348155 7615.080078 -0.870388 0.348155 -0.348155 7615.080078 -0.870388 -0.348155 0.348155 7615.080078 -0.174078 -0.696311 -0.696311 7615.080078 -0.174078 -0.696311 0.696311 7615.080078 0.348155 -0.870388 0.348155 7615.080078 0.870388 0.348155 0.348155 7845.839355 -0.685994 -0.514496 -0.514496 7845.839355 -0.685994 -0.514496 0.514496 7845.839355 -0.514496 0.000000 0.857493 7845.839355 0.000000 0.857493 -0.514496 7845.839355 -0.857493 0.000000 0.514496 7845.839355 -0.514496 0.514496 -0.685994 7845.839355 -0.514496 0.514496 0.685994 7845.839355 -0.514496 0.685994 -0.514496 7845.839355 -0.514496 0.685994 0.514496 7845.839355 -0.514496 0.857493 0.000000 7845.839355 -0.514496 -0.514496 0.685994 7845.839355 -0.857493 -0.514496 0.000000 7845.839355 -0.514496 -0.514496 -0.685994 7845.839355 -0.514496 -0.685994 0.514496 7845.839355 -0.514496 -0.685994 -0.514496 7845.839355 0.857493 0.000000 -0.514496 7845.839355 -0.514496 -0.857493 0.000000 7845.839355 0.000000 -0.857493 -0.514496 7845.839355 0.000000 -0.857493 0.514496 7845.839355 0.000000 -0.514496 -0.857493 7845.839355 -0.685994 0.514496 0.514496 7845.839355 0.000000 0.514496 -0.857493 7845.839355 -0.685994 0.514496 -0.514496 7845.839355 0.000000 0.514496 0.857493 7845.839355 -0.857493 0.000000 -0.514496 7845.839355 0.000000 0.857493 0.514496 7845.839355 0.857493 0.000000 0.514496 7845.839355 0.857493 0.514496 0.000000 7845.839355 0.000000 -0.514496 0.857493 7845.839355 -0.514496 0.000000 -0.857493 7845.839355 0.514496 -0.514496 0.685994 7845.839355 0.685994 0.514496 -0.514496 7845.839355 0.514496 0.514496 -0.685994 7845.839355 0.514496 0.000000 0.857493 7845.839355 0.514496 0.514496 0.685994 7845.839355 0.514496 0.000000 -0.857493 7845.839355 0.685994 0.514496 0.514496 7845.839355 0.514496 0.685994 -0.514496 7845.839355 0.514496 0.685994 0.514496 7845.839355 0.685994 -0.514496 -0.514496 7845.839355 0.685994 -0.514496 0.514496 7845.839355 -0.857493 0.514496 0.000000 7845.839355 0.514496 -0.514496 -0.685994 7845.839355 0.514496 0.857493 0.000000 7845.839355 0.514496 -0.685994 0.514496 7845.839355 0.514496 -0.685994 -0.514496 7845.839355 0.857493 -0.514496 0.000000 7845.839355 0.514496 -0.857493 0.000000 8076.600586 -0.845154 -0.507093 0.169031 8076.600586 0.507093 -0.845154 0.169031 8076.600586 0.169031 -0.845154 0.507093 8076.600586 0.845154 -0.169031 0.507093 8076.600586 0.169031 0.845154 0.507093 8076.600586 -0.507093 0.845154 -0.169031 8076.600586 0.845154 -0.169031 -0.507093 8076.600586 -0.845154 -0.507093 -0.169031 8076.600586 -0.507093 0.845154 0.169031 8076.600586 -0.845154 0.507093 -0.169031 8076.600586 -0.845154 0.169031 0.507093 8076.600586 0.169031 0.845154 -0.507093 8076.600586 0.507093 -0.169031 0.845154 8076.600586 0.507093 -0.169031 -0.845154 8076.600586 0.845154 0.169031 -0.507093 8076.600586 0.169031 -0.845154 -0.507093 8076.600586 0.507093 0.169031 -0.845154 8076.600586 -0.845154 0.507093 0.169031 8076.600586 0.169031 0.507093 0.845154 8076.600586 0.507093 -0.845154 -0.169031 8076.600586 0.169031 0.507093 -0.845154 8076.600586 -0.845154 0.169031 -0.507093 8076.600586 0.169031 -0.507093 -0.845154 8076.600586 0.169031 -0.507093 0.845154 8076.600586 0.507093 0.169031 0.845154 8076.600586 -0.507093 0.169031 -0.845154 8076.600586 0.845154 -0.507093 0.169031 8076.600586 -0.507093 0.169031 0.845154 8076.600586 -0.169031 -0.845154 -0.507093 8076.600586 -0.507093 -0.169031 
0.845154 8076.600586 0.845154 -0.507093 -0.169031 8076.600586 -0.169031 -0.845154 0.507093 8076.600586 -0.507093 -0.845154 -0.169031 8076.600586 0.845154 0.169031 0.507093 8076.600586 -0.169031 -0.507093 -0.845154 8076.600586 -0.507093 -0.845154 0.169031 8076.600586 -0.169031 -0.507093 0.845154 8076.600586 0.845154 0.507093 0.169031 8076.600586 0.507093 0.845154 0.169031 8076.600586 -0.507093 -0.169031 -0.845154 8076.600586 0.507093 0.845154 -0.169031 8076.600586 -0.845154 -0.169031 -0.507093 8076.600586 -0.169031 0.507093 -0.845154 8076.600586 -0.169031 0.845154 0.507093 8076.600586 -0.169031 0.507093 0.845154 8076.600586 0.845154 0.507093 -0.169031 8076.600586 -0.169031 0.845154 -0.507093 8076.600586 -0.845154 -0.169031 0.507093 8307.360352 0.666667 -0.666667 0.333333 8307.360352 -0.333333 0.666667 0.666667 8307.360352 0.000000 -1.000000 0.000000 8307.360352 0.666667 -0.666667 -0.333333 8307.360352 0.666667 0.666667 -0.333333 8307.360352 -0.333333 -0.666667 0.666667 8307.360352 0.666667 0.333333 0.666667 8307.360352 -0.666667 0.666667 -0.333333 8307.360352 0.666667 0.666667 0.333333 8307.360352 0.000000 0.000000 -1.000000 8307.360352 -1.000000 0.000000 0.000000 8307.360352 0.000000 1.000000 0.000000 8307.360352 -0.666667 0.333333 -0.666667 8307.360352 0.666667 -0.333333 -0.666667 8307.360352 -0.666667 0.333333 0.666667 8307.360352 0.000000 0.000000 1.000000 8307.360352 -0.333333 0.666667 -0.666667 8307.360352 -0.666667 0.666667 0.333333 8307.360352 0.666667 0.333333 -0.666667 8307.360352 -0.666667 -0.333333 0.666667 8307.360352 -0.666667 -0.666667 -0.333333 8307.360352 0.666667 -0.333333 0.666667 8307.360352 1.000000 0.000000 0.000000 8307.360352 0.333333 -0.666667 -0.666667 8307.360352 0.333333 0.666667 0.666667 8307.360352 -0.666667 -0.666667 0.333333 8307.360352 0.333333 0.666667 -0.666667 8307.360352 -0.666667 -0.333333 -0.666667 8307.360352 0.333333 -0.666667 0.666667 8307.360352 -0.333333 -0.666667 -0.666667 8538.121094 0.000000 0.986394 -0.164399 8538.121094 -0.986394 0.000000 0.164399 8538.121094 0.000000 -0.164399 -0.986394 8538.121094 -0.986394 -0.164399 0.000000 8538.121094 0.164399 0.986394 0.000000 8538.121094 0.986394 -0.164399 0.000000 8538.121094 0.000000 0.164399 0.986394 8538.121094 -0.164399 0.000000 -0.986394 8538.121094 -0.164399 0.000000 0.986394 8538.121094 0.000000 -0.164399 0.986394 8538.121094 0.000000 0.164399 -0.986394 8538.121094 -0.164399 0.986394 0.000000 8538.121094 0.000000 0.986394 0.164399 8538.121094 -0.986394 0.000000 -0.164399 8538.121094 0.986394 0.000000 -0.164399 8538.121094 0.000000 -0.986394 -0.164399 8538.121094 0.164399 -0.986394 0.000000 8538.121094 -0.164399 -0.986394 0.000000 8538.121094 0.000000 -0.986394 0.164399 8538.121094 0.164399 0.000000 0.986394 8538.121094 0.164399 0.000000 -0.986394 8538.121094 -0.986394 0.164399 0.000000 8538.121094 0.986394 0.164399 0.000000 8538.121094 0.986394 0.000000 0.164399 8768.879883 -0.973329 0.162221 0.162221 8768.879883 0.486664 0.811107 -0.324443 8768.879883 0.324443 0.811107 -0.486664 8768.879883 0.324443 -0.811107 0.486664 8768.879883 0.324443 0.486664 0.811107 8768.879883 -0.973329 0.162221 -0.162221 8768.879883 0.324443 -0.811107 -0.486664 8768.879883 -0.162221 0.973329 -0.162221 8768.879883 -0.162221 0.973329 0.162221 8768.879883 -0.162221 0.162221 0.973329 8768.879883 0.811107 -0.324443 -0.486664 8768.879883 -0.324443 -0.811107 0.486664 8768.879883 -0.162221 0.162221 -0.973329 8768.879883 -0.811107 -0.324443 -0.486664 8768.879883 -0.324443 0.811107 -0.486664 8768.879883 -0.811107 0.324443 
0.486664 8768.879883 -0.324443 0.811107 0.486664 8768.879883 -0.324443 0.486664 0.811107 8768.879883 -0.324443 0.486664 -0.811107 8768.879883 -0.162221 -0.973329 -0.162221 8768.879883 -0.162221 -0.973329 0.162221 8768.879883 -0.486664 0.324443 -0.811107 8768.879883 -0.811107 -0.324443 0.486664 8768.879883 -0.973329 -0.162221 -0.162221 8768.879883 0.811107 -0.486664 -0.324443 8768.879883 0.811107 0.486664 0.324443 8768.879883 0.324443 -0.486664 0.811107 8768.879883 -0.486664 -0.811107 -0.324443 8768.879883 0.486664 0.811107 0.324443 8768.879883 0.324443 0.486664 -0.811107 8768.879883 0.324443 -0.486664 -0.811107 8768.879883 -0.486664 -0.811107 0.324443 8768.879883 -0.486664 0.324443 0.811107 8768.879883 -0.811107 0.324443 -0.486664 8768.879883 -0.162221 -0.162221 -0.973329 8768.879883 0.811107 -0.486664 0.324443 8768.879883 0.811107 0.486664 -0.324443 8768.879883 -0.162221 -0.162221 0.973329 8768.879883 0.811107 0.324443 0.486664 8768.879883 0.811107 0.324443 -0.486664 8768.879883 0.973329 -0.162221 -0.162221 8768.879883 -0.973329 -0.162221 0.162221 8768.879883 -0.811107 0.486664 0.324443 8768.879883 0.324443 0.811107 0.486664 8768.879883 0.486664 -0.324443 0.811107 8768.879883 -0.486664 0.811107 0.324443 8768.879883 -0.324443 -0.811107 -0.486664 8768.879883 -0.811107 -0.486664 0.324443 8768.879883 0.973329 0.162221 -0.162221 8768.879883 0.162221 0.162221 0.973329 8768.879883 0.486664 -0.811107 -0.324443 8768.879883 -0.324443 -0.486664 0.811107 8768.879883 0.486664 -0.324443 -0.811107 8768.879883 -0.486664 0.811107 -0.324443 8768.879883 0.162221 -0.973329 -0.162221 8768.879883 0.162221 -0.973329 0.162221 8768.879883 0.162221 0.162221 -0.973329 8768.879883 -0.486664 -0.324443 0.811107 8768.879883 -0.486664 -0.324443 -0.811107 8768.879883 0.486664 -0.811107 0.324443 8768.879883 0.486664 0.324443 0.811107 8768.879883 0.486664 0.324443 -0.811107 8768.879883 -0.811107 -0.486664 -0.324443 8768.879883 -0.324443 -0.486664 -0.811107 8768.879883 0.162221 -0.162221 -0.973329 8768.879883 0.162221 0.973329 0.162221 8768.879883 0.973329 0.162221 0.162221 8768.879883 0.162221 0.973329 -0.162221 8768.879883 0.811107 -0.324443 0.486664 8768.879883 0.162221 -0.162221 0.973329 8768.879883 0.973329 -0.162221 0.162221 8768.879883 -0.811107 0.486664 -0.324443 9230.400391 0.948683 0.000000 0.316228 9230.400391 -0.316228 -0.948683 0.000000 9230.400391 0.948683 0.000000 -0.316228 9230.400391 -0.948683 0.000000 0.316228 9230.400391 -0.948683 0.000000 -0.316228 9230.400391 0.948683 0.316228 0.000000 9230.400391 0.316228 0.948683 0.000000 9230.400391 -0.316228 0.948683 0.000000 9230.400391 0.000000 0.948683 -0.316228 9230.400391 0.000000 0.948683 0.316228 9230.400391 -0.948683 -0.316228 0.000000 9230.400391 0.316228 0.000000 0.948683 9230.400391 -0.948683 0.316228 0.000000 9230.400391 0.316228 0.000000 -0.948683 9230.400391 0.316228 -0.948683 0.000000 9230.400391 0.000000 0.316228 0.948683 9230.400391 -0.316228 0.000000 0.948683 9230.400391 0.000000 -0.948683 0.316228 9230.400391 0.000000 -0.316228 -0.948683 9230.400391 0.000000 0.316228 -0.948683 9230.400391 -0.316228 0.000000 -0.948683 9230.400391 0.000000 -0.948683 -0.316228 9230.400391 0.000000 -0.316228 0.948683 9230.400391 0.948683 -0.316228 0.000000 9461.160156 0.624695 0.000000 0.780869 9461.160156 0.624695 0.468521 -0.624695 9461.160156 0.937043 -0.312348 -0.156174 9461.160156 0.937043 -0.156174 0.312348 9461.160156 -0.468521 0.624695 0.624695 9461.160156 0.780869 0.624695 0.000000 9461.160156 -0.937043 -0.156174 0.312348 9461.160156 -0.468521 -0.624695 
0.624695 9461.160156 0.624695 0.000000 -0.780869 9461.160156 -0.468521 -0.624695 -0.624695 9461.160156 0.624695 -0.624695 0.468521 9461.160156 0.937043 -0.156174 -0.312348 9461.160156 -0.937043 0.312348 -0.156174 9461.160156 0.937043 0.312348 -0.156174 9461.160156 -0.312348 -0.156174 -0.937043 9461.160156 -0.312348 -0.937043 0.156174 9461.160156 -0.312348 -0.937043 -0.156174 9461.160156 -0.312348 0.156174 0.937043 9461.160156 -0.312348 -0.156174 0.937043 9461.160156 0.624695 -0.468521 -0.624695 9461.160156 0.937043 0.156174 0.312348 9461.160156 0.937043 0.312348 0.156174 9461.160156 0.624695 -0.468521 0.624695 9461.160156 0.937043 0.156174 -0.312348 9461.160156 -0.937043 0.312348 0.156174 9461.160156 -0.468521 0.624695 -0.624695 9461.160156 0.937043 -0.312348 0.156174 9461.160156 -0.312348 0.156174 -0.937043 9461.160156 -0.937043 0.156174 -0.312348 9461.160156 -0.937043 0.156174 0.312348 9461.160156 0.312348 -0.156174 0.937043 9461.160156 0.156174 -0.937043 -0.312348 9461.160156 0.156174 -0.312348 0.937043 9461.160156 -0.624695 0.000000 0.780869 9461.160156 -0.780869 -0.624695 0.000000 9461.160156 0.156174 -0.312348 -0.937043 9461.160156 0.468521 -0.624695 -0.624695 9461.160156 0.780869 0.000000 -0.624695 9461.160156 -0.780869 0.000000 0.624695 9461.160156 0.156174 -0.937043 0.312348 9461.160156 0.624695 -0.624695 -0.468521 9461.160156 0.468521 -0.624695 0.624695 9461.160156 0.000000 0.780869 0.624695 9461.160156 0.780869 0.000000 0.624695 9461.160156 -0.780869 0.624695 0.000000 9461.160156 0.000000 0.780869 -0.624695 9461.160156 0.000000 0.624695 0.780869 9461.160156 -0.624695 0.000000 -0.780869 9461.160156 0.156174 0.312348 -0.937043 9461.160156 0.312348 0.937043 0.156174 9461.160156 0.156174 0.312348 0.937043 9461.160156 -0.624695 -0.624695 -0.468521 9461.160156 -0.624695 -0.624695 0.468521 9461.160156 0.312348 0.156174 -0.937043 9461.160156 0.312348 -0.156174 -0.937043 9461.160156 0.312348 0.156174 0.937043 9461.160156 -0.624695 -0.468521 -0.624695 9461.160156 -0.624695 -0.780869 0.000000 9461.160156 0.000000 0.624695 -0.780869 9461.160156 0.780869 -0.624695 0.000000 9461.160156 0.312348 -0.937043 0.156174 9461.160156 0.312348 -0.937043 -0.156174 9461.160156 0.156174 0.937043 0.312348 9461.160156 0.156174 0.937043 -0.312348 9461.160156 0.624695 0.780869 0.000000 9461.160156 0.624695 0.624695 0.468521 9461.160156 0.312348 0.937043 -0.156174 9461.160156 -0.624695 -0.468521 0.624695 9461.160156 -0.624695 0.468521 -0.624695 9461.160156 -0.156174 0.937043 0.312348 9461.160156 -0.624695 0.468521 0.624695 9461.160156 0.624695 -0.780869 0.000000 9461.160156 -0.312348 0.937043 -0.156174 9461.160156 -0.312348 0.937043 0.156174 9461.160156 -0.156174 -0.937043 -0.312348 9461.160156 -0.156174 -0.937043 0.312348 9461.160156 -0.937043 -0.156174 -0.312348 9461.160156 -0.937043 -0.312348 0.156174 9461.160156 -0.937043 -0.312348 -0.156174 9461.160156 -0.780869 0.000000 -0.624695 9461.160156 -0.156174 -0.312348 0.937043 9461.160156 -0.156174 0.312348 -0.937043 9461.160156 -0.156174 0.312348 0.937043 9461.160156 0.468521 0.624695 0.624695 9461.160156 0.624695 0.468521 0.624695 9461.160156 0.624695 0.624695 -0.468521 9461.160156 -0.156174 -0.312348 -0.937043 9461.160156 0.468521 0.624695 -0.624695 9461.160156 0.000000 -0.780869 0.624695 9461.160156 -0.624695 0.780869 0.000000 9461.160156 0.000000 -0.780869 -0.624695 9461.160156 -0.156174 0.937043 -0.312348 9461.160156 0.000000 -0.624695 -0.780869 9461.160156 -0.624695 0.624695 -0.468521 9461.160156 -0.624695 0.624695 0.468521 9461.160156 0.000000 -0.624695 
0.780869 9691.918945 0.771517 0.154303 -0.617213 9691.918945 -0.154303 -0.771517 0.617213 9691.918945 0.617213 0.771517 0.154303 9691.918945 0.771517 -0.617213 -0.154303 9691.918945 0.771517 0.617213 -0.154303 9691.918945 0.617213 0.154303 -0.771517 9691.918945 0.617213 -0.771517 -0.154303 9691.918945 -0.154303 -0.617213 -0.771517 9691.918945 0.771517 0.617213 0.154303 9691.918945 0.617213 0.154303 0.771517 9691.918945 -0.154303 -0.771517 -0.617213 9691.918945 0.771517 0.154303 0.617213 9691.918945 0.771517 -0.617213 0.154303 9691.918945 0.154303 -0.617213 0.771517 9691.918945 0.154303 0.771517 0.617213 9691.918945 0.154303 -0.617213 -0.771517 9691.918945 0.154303 -0.771517 0.617213 9691.918945 0.771517 -0.154303 0.617213 9691.918945 -0.154303 0.771517 -0.617213 9691.918945 -0.154303 0.617213 0.771517 9691.918945 -0.154303 0.771517 0.617213 9691.918945 -0.154303 0.617213 -0.771517 9691.918945 -0.154303 -0.617213 0.771517 9691.918945 0.154303 -0.771517 -0.617213 9691.918945 0.154303 0.617213 -0.771517 9691.918945 0.154303 0.617213 0.771517 9691.918945 0.154303 0.771517 -0.617213 9691.918945 0.617213 0.771517 -0.154303 9691.918945 0.617213 -0.154303 0.771517 9691.918945 0.771517 -0.154303 -0.617213 9691.918945 0.617213 -0.154303 -0.771517 9691.918945 0.617213 -0.771517 0.154303 9691.918945 -0.771517 -0.617213 0.154303 9691.918945 -0.617213 -0.771517 -0.154303 9691.918945 -0.617213 0.771517 0.154303 9691.918945 -0.771517 -0.617213 -0.154303 9691.918945 -0.771517 -0.154303 -0.617213 9691.918945 -0.617213 0.154303 0.771517 9691.918945 -0.617213 0.154303 -0.771517 9691.918945 -0.617213 0.771517 -0.154303 9691.918945 -0.771517 0.154303 -0.617213 9691.918945 -0.771517 0.154303 0.617213 9691.918945 -0.771517 0.617213 0.154303 9691.918945 -0.771517 0.617213 -0.154303 9691.918945 -0.617213 -0.771517 0.154303 9691.918945 -0.617213 -0.154303 0.771517 9691.918945 -0.617213 -0.154303 -0.771517 9691.918945 -0.771517 -0.154303 0.617213 9922.679688 0.457496 0.762493 0.457496 9922.679688 -0.457496 -0.457496 0.762493 9922.679688 -0.762493 -0.457496 0.457496 9922.679688 -0.762493 -0.457496 -0.457496 9922.679688 0.762493 -0.457496 -0.457496 9922.679688 -0.457496 -0.762493 0.457496 9922.679688 -0.457496 0.762493 0.457496 9922.679688 0.457496 0.457496 -0.762493 9922.679688 0.457496 -0.762493 -0.457496 9922.679688 0.457496 -0.762493 0.457496 9922.679688 -0.457496 -0.762493 -0.457496 9922.679688 0.762493 0.457496 -0.457496 9922.679688 -0.762493 0.457496 0.457496 9922.679688 -0.762493 0.457496 -0.457496 9922.679688 0.457496 0.457496 0.762493 9922.679688 0.457496 -0.457496 0.762493 9922.679688 -0.457496 -0.457496 -0.762493 9922.679688 -0.457496 0.457496 -0.762493 9922.679688 0.457496 0.762493 -0.457496 9922.679688 0.762493 0.457496 0.457496 9922.679688 -0.457496 0.762493 -0.457496 9922.679688 0.457496 -0.457496 -0.762493 9922.679688 0.762493 -0.457496 0.457496 9922.679688 -0.457496 0.457496 0.762493 10153.440430 -0.301511 0.301511 -0.904534 10153.440430 0.904534 -0.301511 0.301511 10153.440430 -0.301511 -0.904534 0.301511 10153.440430 -0.904534 0.301511 0.301511 10153.440430 -0.301511 0.301511 0.904534 10153.440430 0.301511 0.904534 -0.301511 10153.440430 -0.904534 -0.301511 -0.301511 10153.440430 -0.301511 -0.301511 0.904534 10153.440430 0.301511 -0.904534 -0.301511 10153.440430 0.301511 -0.301511 0.904534 10153.440430 -0.301511 0.904534 -0.301511 10153.440430 0.301511 -0.904534 0.301511 10153.440430 0.301511 0.301511 -0.904534 10153.440430 0.301511 0.301511 0.904534 10153.440430 0.301511 0.904534 0.301511 
10153.440430 0.904534 0.301511 -0.301511 10153.440430 0.904534 0.301511 0.301511 10153.440430 -0.301511 -0.301511 -0.904534 10153.440430 -0.904534 -0.301511 0.301511 10153.440430 -0.301511 0.904534 0.301511 10153.440430 0.904534 -0.301511 -0.301511 10153.440430 0.301511 -0.301511 -0.904534 10153.440430 -0.904534 0.301511 -0.301511 10153.440430 -0.301511 -0.904534 -0.301511 10384.200195 -0.894427 0.000000 -0.447214 10384.200195 0.596285 0.298142 0.745356 10384.200195 -0.894427 0.000000 0.447214 10384.200195 0.000000 -0.447214 -0.894427 10384.200195 0.447214 0.000000 0.894427 10384.200195 -0.298142 -0.596285 -0.745356 10384.200195 0.596285 -0.298142 -0.745356 10384.200195 0.745356 0.596285 -0.298142 10384.200195 -0.745356 -0.298142 0.596285 10384.200195 -0.745356 -0.298142 -0.596285 10384.200195 -0.894427 -0.447214 0.000000 10384.200195 0.745356 0.596285 0.298142 10384.200195 0.298142 0.745356 -0.596285 10384.200195 0.745356 -0.298142 0.596285 10384.200195 0.596285 0.298142 -0.745356 10384.200195 0.000000 0.894427 0.447214 10384.200195 0.894427 0.000000 0.447214 10384.200195 0.298142 0.596285 0.745356 10384.200195 -0.596285 0.298142 -0.745356 10384.200195 0.596285 -0.298142 0.745356 10384.200195 0.000000 0.894427 -0.447214 10384.200195 0.298142 0.596285 -0.745356 10384.200195 0.298142 0.745356 0.596285 10384.200195 -0.745356 0.596285 0.298142 10384.200195 -0.596285 -0.745356 -0.298142 10384.200195 0.894427 0.000000 -0.447214 10384.200195 -0.596285 -0.298142 -0.745356 10384.200195 -0.447214 0.894427 0.000000 10384.200195 -0.745356 -0.596285 0.298142 10384.200195 -0.596285 0.298142 0.745356 10384.200195 0.745356 -0.298142 -0.596285 10384.200195 0.447214 0.000000 -0.894427 10384.200195 -0.596285 0.745356 -0.298142 10384.200195 0.745356 -0.596285 -0.298142 10384.200195 -0.596285 -0.745356 0.298142 10384.200195 0.596285 0.745356 0.298142 10384.200195 0.447214 -0.894427 0.000000 10384.200195 0.894427 0.447214 0.000000 10384.200195 -0.298142 -0.745356 -0.596285 10384.200195 -0.596285 -0.298142 0.745356 10384.200195 -0.298142 0.745356 -0.596285 10384.200195 0.000000 0.447214 -0.894427 10384.200195 -0.447214 0.000000 0.894427 10384.200195 -0.298142 0.745356 0.596285 10384.200195 0.447214 0.894427 0.000000 10384.200195 -0.298142 -0.745356 0.596285 10384.200195 0.894427 -0.447214 0.000000 10384.200195 -0.596285 0.745356 0.298142 10384.200195 0.298142 -0.596285 0.745356 10384.200195 0.745356 0.298142 -0.596285 10384.200195 0.298142 -0.596285 -0.745356 10384.200195 -0.447214 0.000000 -0.894427 10384.200195 0.298142 -0.745356 0.596285 10384.200195 -0.298142 -0.596285 0.745356 10384.200195 -0.447214 -0.894427 0.000000 10384.200195 -0.298142 0.596285 0.745356 10384.200195 -0.745356 0.298142 -0.596285 10384.200195 0.596285 -0.745356 0.298142 10384.200195 0.000000 -0.894427 0.447214 10384.200195 0.000000 -0.894427 -0.447214 10384.200195 -0.745356 0.298142 0.596285 10384.200195 -0.894427 0.447214 0.000000 10384.200195 0.298142 -0.745356 -0.596285 10384.200195 0.596285 0.745356 -0.298142 10384.200195 -0.745356 -0.596285 -0.298142 10384.200195 0.000000 -0.447214 0.894427 10384.200195 0.000000 0.447214 0.894427 10384.200195 -0.298142 0.596285 -0.745356 10384.200195 0.596285 -0.745356 -0.298142 10384.200195 -0.745356 0.596285 -0.298142 10384.200195 0.745356 -0.596285 0.298142 10384.200195 0.745356 0.298142 0.596285 10614.959961 -0.884652 -0.147442 -0.442326 10614.959961 -0.884652 -0.442326 -0.147442 10614.959961 0.442326 -0.884652 0.147442 10614.959961 0.147442 -0.442326 -0.884652 10614.959961 0.442326 0.884652 
-0.147442 10614.959961 -0.884652 0.442326 0.147442 10614.959961 0.884652 0.442326 -0.147442 10614.959961 -0.884652 0.442326 -0.147442 10614.959961 0.147442 -0.884652 -0.442326 10614.959961 0.442326 -0.884652 -0.147442 10614.959961 0.884652 0.442326 0.147442 10614.959961 -0.147442 -0.442326 0.884652 10614.959961 0.147442 -0.884652 0.442326 10614.959961 0.147442 -0.442326 0.884652 10614.959961 0.442326 0.884652 0.147442 10614.959961 -0.147442 -0.442326 -0.884652 10614.959961 -0.884652 -0.442326 0.147442 10614.959961 0.884652 -0.442326 -0.147442 10614.959961 0.884652 0.147442 0.442326 10614.959961 -0.442326 -0.147442 -0.884652 10614.959961 0.147442 0.442326 -0.884652 10614.959961 0.147442 0.442326 0.884652 10614.959961 -0.884652 -0.147442 0.442326 10614.959961 -0.442326 -0.884652 0.147442 10614.959961 -0.442326 -0.884652 -0.147442 10614.959961 -0.147442 0.442326 -0.884652 10614.959961 0.884652 -0.147442 -0.442326 10614.959961 0.147442 0.884652 -0.442326 10614.959961 -0.147442 0.884652 0.442326 10614.959961 0.147442 0.884652 0.442326 10614.959961 -0.442326 0.147442 0.884652 10614.959961 0.884652 -0.442326 0.147442 10614.959961 -0.147442 0.442326 0.884652 10614.959961 -0.442326 0.147442 -0.884652 10614.959961 -0.147442 0.884652 -0.442326 10614.959961 -0.442326 -0.147442 0.884652 10614.959961 0.884652 -0.147442 0.442326 10614.959961 0.442326 0.147442 0.884652 10614.959961 -0.442326 0.884652 0.147442 10614.959961 0.442326 -0.147442 0.884652 10614.959961 -0.442326 0.884652 -0.147442 10614.959961 -0.147442 -0.884652 0.442326 10614.959961 0.442326 -0.147442 -0.884652 10614.959961 -0.884652 0.147442 0.442326 10614.959961 -0.884652 0.147442 -0.442326 10614.959961 -0.147442 -0.884652 -0.442326 10614.959961 0.884652 0.147442 -0.442326 10614.959961 0.442326 0.147442 -0.884652 11076.480469 0.577350 0.577350 -0.577350 11076.480469 -0.577350 0.577350 0.577350 11076.480469 0.577350 -0.577350 -0.577350 11076.480469 0.577350 -0.577350 0.577350 11076.480469 -0.577350 0.577350 -0.577350 11076.480469 0.577350 0.577350 0.577350 11076.480469 -0.577350 -0.577350 -0.577350 11076.480469 -0.577350 -0.577350 0.577350 11307.240234 0.000000 1.000000 0.000000 11307.240234 -0.285714 0.857143 -0.428571 11307.240234 0.857143 0.428571 0.285714 11307.240234 0.857143 -0.285714 -0.428571 11307.240234 0.000000 0.000000 -1.000000 11307.240234 0.857143 -0.428571 0.285714 11307.240234 0.428571 -0.857143 -0.285714 11307.240234 -0.857143 0.428571 0.285714 11307.240234 0.428571 -0.285714 -0.857143 11307.240234 0.000000 0.000000 1.000000 11307.240234 0.428571 -0.285714 0.857143 11307.240234 0.428571 0.285714 -0.857143 11307.240234 -0.285714 -0.428571 -0.857143 11307.240234 0.857143 0.285714 0.428571 11307.240234 -0.285714 0.428571 -0.857143 11307.240234 -1.000000 0.000000 0.000000 11307.240234 -0.285714 -0.428571 0.857143 11307.240234 0.428571 0.857143 0.285714 11307.240234 0.857143 0.428571 -0.285714 11307.240234 0.428571 -0.857143 0.285714 11307.240234 0.857143 0.285714 -0.428571 11307.240234 -0.857143 0.285714 0.428571 11307.240234 -0.285714 0.857143 0.428571 11307.240234 -0.857143 0.285714 -0.428571 11307.240234 -0.857143 0.428571 -0.285714 11307.240234 -0.285714 0.428571 0.857143 11307.240234 0.857143 -0.285714 0.428571 11307.240234 -0.428571 -0.857143 0.285714 11307.240234 -0.428571 -0.857143 -0.285714 11307.240234 0.428571 0.285714 0.857143 11307.240234 0.857143 -0.428571 -0.285714 11307.240234 0.428571 0.857143 -0.285714 11307.240234 -0.428571 0.857143 -0.285714 11307.240234 0.285714 0.428571 0.857143 11307.240234 -0.428571 
0.857143 0.285714 11307.240234 0.285714 0.428571 -0.857143 11307.240234 -0.428571 -0.285714 0.857143 11307.240234 -0.857143 -0.428571 -0.285714 11307.240234 -0.857143 -0.428571 0.285714 11307.240234 0.000000 -1.000000 0.000000 11307.240234 0.285714 0.857143 -0.428571 11307.240234 0.285714 0.857143 0.428571 11307.240234 -0.428571 0.285714 0.857143 11307.240234 -0.428571 0.285714 -0.857143 11307.240234 -0.857143 -0.285714 -0.428571 11307.240234 -0.285714 -0.857143 -0.428571 11307.240234 -0.285714 -0.857143 0.428571 11307.240234 0.285714 -0.428571 0.857143 11307.240234 1.000000 0.000000 0.000000 11307.240234 0.285714 -0.428571 -0.857143 11307.240234 0.285714 -0.857143 0.428571 11307.240234 0.285714 -0.857143 -0.428571 11307.240234 -0.857143 -0.285714 0.428571 11307.240234 -0.428571 -0.285714 -0.857143 11538.000000 0.000000 -0.141421 -0.989949 11538.000000 0.565685 0.424264 -0.707107 11538.000000 -0.565685 -0.707107 0.424264 11538.000000 -0.424264 0.707107 -0.565685 11538.000000 -0.424264 -0.707107 -0.565685 11538.000000 -0.989949 0.141421 0.000000 11538.000000 -0.424264 0.707107 0.565685 11538.000000 0.000000 0.141421 -0.989949 11538.000000 -0.424264 -0.707107 0.565685 11538.000000 0.707107 0.565685 0.424264 11538.000000 0.000000 0.707107 -0.707107 11538.000000 0.989949 0.141421 0.000000 11538.000000 0.424264 0.707107 0.565685 11538.000000 -0.141421 0.000000 0.989949 11538.000000 0.000000 0.707107 0.707107 11538.000000 0.141421 0.000000 0.989949 11538.000000 -0.565685 -0.707107 -0.424264 11538.000000 0.565685 -0.424264 -0.707107 11538.000000 -0.565685 0.424264 0.707107 11538.000000 -0.707107 -0.424264 -0.565685 11538.000000 0.141421 0.000000 -0.989949 11538.000000 -0.707107 0.565685 0.424264 11538.000000 0.565685 -0.424264 0.707107 11538.000000 -0.424264 -0.565685 -0.707107 11538.000000 0.565685 -0.707107 0.424264 11538.000000 -0.565685 -0.424264 0.707107 11538.000000 -0.707107 -0.424264 0.565685 11538.000000 0.424264 0.707107 -0.565685 11538.000000 0.424264 0.565685 0.707107 11538.000000 0.000000 -0.141421 0.989949 11538.000000 0.707107 -0.707107 0.000000 11538.000000 0.424264 0.565685 -0.707107 11538.000000 0.141421 0.989949 0.000000 11538.000000 -0.565685 -0.424264 -0.707107 11538.000000 0.565685 0.707107 0.424264 11538.000000 -0.707107 0.424264 -0.565685 11538.000000 0.707107 -0.565685 -0.424264 11538.000000 -0.707107 -0.707107 0.000000 11538.000000 0.565685 -0.707107 -0.424264 11538.000000 -0.141421 0.989949 0.000000 11538.000000 -0.565685 0.424264 -0.707107 11538.000000 0.707107 -0.565685 0.424264 11538.000000 0.707107 0.707107 0.000000 11538.000000 0.565685 0.707107 -0.424264 11538.000000 0.707107 -0.424264 -0.565685 11538.000000 -0.707107 0.424264 0.565685 11538.000000 0.989949 0.000000 0.141421 11538.000000 -0.141421 -0.989949 0.000000 11538.000000 -0.989949 0.000000 0.141421 11538.000000 0.565685 0.424264 0.707107 11538.000000 -0.989949 0.000000 -0.141421 11538.000000 0.707107 -0.424264 0.565685 11538.000000 -0.424264 0.565685 0.707107 11538.000000 -0.707107 0.565685 -0.424264 11538.000000 -0.424264 0.565685 -0.707107 11538.000000 0.141421 -0.989949 0.000000 11538.000000 0.000000 -0.707107 -0.707107 11538.000000 -0.707107 0.000000 -0.707107 11538.000000 -0.707107 -0.565685 -0.424264 11538.000000 -0.707107 0.000000 0.707107 11538.000000 0.707107 0.424264 0.565685 11538.000000 0.424264 -0.707107 -0.565685 11538.000000 0.707107 0.000000 -0.707107 11538.000000 0.424264 -0.707107 0.565685 11538.000000 0.000000 0.989949 0.141421 11538.000000 0.000000 -0.989949 -0.141421 11538.000000 
-0.989949 -0.141421 0.000000 11538.000000 0.707107 0.424264 -0.565685 11538.000000 -0.707107 -0.565685 0.424264 11538.000000 0.424264 -0.565685 0.707107 11538.000000 0.000000 -0.707107 0.707107 11538.000000 0.424264 -0.565685 -0.707107 11538.000000 0.000000 -0.989949 0.141421 11538.000000 0.989949 0.000000 -0.141421 11538.000000 0.707107 0.565685 -0.424264 11538.000000 0.707107 0.000000 0.707107 11538.000000 -0.565685 0.707107 -0.424264 11538.000000 -0.707107 0.707107 0.000000 11538.000000 0.000000 0.141421 0.989949 11538.000000 0.000000 0.989949 -0.141421 11538.000000 0.989949 -0.141421 0.000000 11538.000000 -0.565685 0.707107 0.424264 11538.000000 -0.424264 -0.565685 0.707107 11538.000000 -0.141421 0.000000 -0.989949 11768.759766 -0.140028 -0.980196 -0.140028 11768.759766 -0.140028 0.980196 0.140028 11768.759766 0.140028 -0.980196 0.140028 11768.759766 -0.980196 -0.140028 -0.140028 11768.759766 -0.700140 -0.140028 0.700140 11768.759766 -0.980196 0.140028 -0.140028 11768.759766 -0.980196 -0.140028 0.140028 11768.759766 -0.140028 0.700140 0.700140 11768.759766 -0.980196 0.140028 0.140028 11768.759766 0.700140 0.700140 0.140028 11768.759766 0.140028 -0.700140 0.700140 11768.759766 -0.700140 -0.140028 -0.700140 11768.759766 0.700140 -0.700140 -0.140028 11768.759766 0.140028 0.980196 0.140028 11768.759766 -0.140028 0.980196 -0.140028 11768.759766 0.140028 0.700140 -0.700140 11768.759766 -0.700140 -0.700140 0.140028 11768.759766 0.140028 0.980196 -0.140028 11768.759766 0.140028 0.700140 0.700140 11768.759766 -0.140028 0.140028 0.980196 11768.759766 0.700140 -0.700140 0.140028 11768.759766 0.700140 0.140028 -0.700140 11768.759766 -0.700140 0.700140 -0.140028 11768.759766 0.140028 -0.700140 -0.700140 11768.759766 -0.700140 -0.700140 -0.140028 11768.759766 0.700140 0.140028 0.700140 11768.759766 0.700140 0.700140 -0.140028 11768.759766 0.700140 -0.140028 0.700140 11768.759766 -0.140028 -0.980196 0.140028 11768.759766 0.980196 0.140028 -0.140028 11768.759766 -0.700140 0.140028 0.700140 11768.759766 0.140028 0.140028 -0.980196 11768.759766 0.980196 -0.140028 -0.140028 11768.759766 0.980196 0.140028 0.140028 11768.759766 -0.140028 -0.700140 -0.700140 11768.759766 -0.140028 -0.700140 0.700140 11768.759766 -0.140028 -0.140028 0.980196 11768.759766 0.140028 -0.980196 -0.140028 11768.759766 0.140028 0.140028 0.980196 11768.759766 -0.140028 0.700140 -0.700140 11768.759766 0.980196 -0.140028 0.140028 11768.759766 0.140028 -0.140028 0.980196 11768.759766 -0.140028 -0.140028 -0.980196 11768.759766 -0.700140 0.140028 -0.700140 11768.759766 -0.700140 0.700140 0.140028 11768.759766 0.700140 -0.140028 -0.700140 11768.759766 0.140028 -0.140028 -0.980196 11768.759766 -0.140028 0.140028 -0.980196 11999.519531 0.832050 0.000000 0.554700 11999.519531 0.000000 0.832050 -0.554700 11999.519531 0.832050 0.554700 0.000000 11999.519531 0.554700 0.832050 0.000000 11999.519531 0.832050 0.000000 -0.554700 11999.519531 0.000000 -0.554700 0.832050 11999.519531 -0.554700 0.832050 0.000000 11999.519531 0.554700 -0.832050 0.000000 11999.519531 0.832050 -0.554700 0.000000 11999.519531 -0.832050 -0.554700 0.000000 11999.519531 0.000000 -0.832050 0.554700 11999.519531 0.000000 0.832050 0.554700 11999.519531 -0.832050 0.554700 0.000000 11999.519531 -0.554700 0.000000 -0.832050 11999.519531 -0.554700 0.000000 0.832050 11999.519531 0.554700 0.000000 0.832050 11999.519531 0.000000 -0.554700 -0.832050 11999.519531 0.000000 -0.832050 -0.554700 11999.519531 0.000000 0.554700 -0.832050 11999.519531 -0.554700 -0.832050 0.000000 11999.519531 
-0.832050 0.000000 -0.554700 11999.519531 -0.832050 0.000000 0.554700 11999.519531 0.554700 0.000000 -0.832050 11999.519531 0.000000 0.554700 0.832050 12230.280273 -0.137361 0.824163 0.549442 12230.280273 -0.549442 0.824163 0.137361 12230.280273 0.549442 0.824163 -0.137361 12230.280273 -0.137361 -0.549442 0.824163 12230.280273 0.824163 -0.549442 -0.137361 12230.280273 -0.961524 0.274721 0.000000 12230.280273 0.549442 0.824163 0.137361 12230.280273 -0.137361 -0.549442 -0.824163 12230.280273 0.824163 -0.549442 0.137361 12230.280273 0.549442 0.137361 0.824163 12230.280273 -0.137361 -0.824163 0.549442 12230.280273 -0.824163 -0.549442 0.137361 12230.280273 0.000000 -0.961524 0.274721 12230.280273 -0.137361 0.549442 -0.824163 12230.280273 -0.137361 0.824163 -0.549442 12230.280273 -0.549442 0.824163 -0.137361 12230.280273 0.000000 -0.961524 -0.274721 12230.280273 0.549442 0.137361 -0.824163 12230.280273 -0.824163 -0.549442 -0.137361 12230.280273 -0.824163 -0.137361 -0.549442 12230.280273 0.000000 -0.274721 -0.961524 12230.280273 0.549442 -0.824163 -0.137361 12230.280273 0.549442 -0.824163 0.137361 12230.280273 0.549442 -0.137361 0.824163 12230.280273 -0.137361 -0.824163 -0.549442 12230.280273 0.549442 -0.137361 -0.824163 12230.280273 0.000000 -0.274721 0.961524 12230.280273 0.274721 -0.961524 0.000000 12230.280273 -0.137361 0.549442 0.824163 12230.280273 0.274721 0.000000 -0.961524 12230.280273 -0.824163 0.137361 -0.549442 12230.280273 -0.274721 -0.961524 0.000000 12230.280273 -0.824163 -0.137361 0.549442 12230.280273 -0.549442 -0.137361 0.824163 12230.280273 0.000000 0.961524 0.274721 12230.280273 0.961524 0.274721 0.000000 12230.280273 -0.549442 -0.824163 -0.137361 12230.280273 0.824163 0.137361 0.549442 12230.280273 0.824163 0.137361 -0.549442 12230.280273 0.137361 -0.549442 0.824163 12230.280273 -0.274721 0.000000 -0.961524 12230.280273 -0.961524 0.000000 0.274721 12230.280273 -0.961524 0.000000 -0.274721 12230.280273 0.824163 -0.137361 0.549442 12230.280273 0.000000 0.961524 -0.274721 12230.280273 0.961524 0.000000 0.274721 12230.280273 0.961524 0.000000 -0.274721 12230.280273 -0.549442 -0.824163 0.137361 12230.280273 -0.824163 0.137361 0.549442 12230.280273 -0.961524 -0.274721 0.000000 12230.280273 -0.274721 0.000000 0.961524 12230.280273 0.824163 -0.137361 -0.549442 12230.280273 0.137361 0.549442 -0.824163 12230.280273 0.274721 0.000000 0.961524 12230.280273 0.137361 0.824163 -0.549442 12230.280273 0.000000 0.274721 -0.961524 12230.280273 0.137361 -0.824163 -0.549442 12230.280273 0.824163 0.549442 -0.137361 12230.280273 -0.274721 0.961524 0.000000 12230.280273 0.824163 0.549442 0.137361 12230.280273 -0.549442 0.137361 0.824163 12230.280273 0.137361 0.824163 0.549442 12230.280273 0.137361 -0.824163 0.549442 12230.280273 -0.824163 0.549442 0.137361 12230.280273 0.274721 0.961524 0.000000 12230.280273 0.961524 -0.274721 0.000000 12230.280273 0.137361 0.549442 0.824163 12230.280273 -0.824163 0.549442 -0.137361 12230.280273 0.137361 -0.549442 -0.824163 12230.280273 -0.549442 0.137361 -0.824163 12230.280273 0.000000 0.274721 0.961524 12230.280273 -0.549442 -0.137361 -0.824163 12461.040039 -0.816497 -0.408248 -0.408248 12461.040039 0.680414 0.272166 -0.680414 12461.040039 0.408248 0.816497 -0.408248 12461.040039 -0.952579 -0.272166 0.136083 12461.040039 -0.272166 -0.680414 0.680414 12461.040039 -0.272166 -0.952579 0.136083 12461.040039 -0.272166 -0.952579 -0.136083 12461.040039 -0.816497 -0.408248 0.408248 12461.040039 -0.136083 -0.272166 0.952579 12461.040039 0.272166 0.952579 0.136083 
12461.040039 -0.952579 -0.136083 0.272166 12461.040039 0.136083 -0.272166 -0.952579 12461.040039 0.272166 0.680414 0.680414 12461.040039 -0.952579 -0.136083 -0.272166 12461.040039 -0.680414 0.680414 -0.272166 12461.040039 -0.272166 -0.680414 -0.680414 12461.040039 0.272166 0.952579 -0.136083 12461.040039 -0.408248 0.816497 0.408248 12461.040039 0.952579 -0.272166 0.136083 12461.040039 0.272166 -0.136083 0.952579 12461.040039 -0.136083 0.952579 0.272166 12461.040039 -0.680414 -0.680414 -0.272166 12461.040039 -0.408248 0.408248 -0.816497 12461.040039 -0.136083 0.272166 0.952579 12461.040039 0.272166 -0.136083 -0.952579 12461.040039 -0.136083 0.952579 -0.272166 12461.040039 -0.408248 0.408248 0.816497 12461.040039 -0.680414 -0.680414 0.272166 12461.040039 0.136083 0.952579 -0.272166 12461.040039 0.136083 0.952579 0.272166 12461.040039 0.680414 -0.680414 -0.272166 12461.040039 0.272166 -0.680414 0.680414 12461.040039 0.272166 -0.952579 -0.136083 12461.040039 0.272166 -0.680414 -0.680414 12461.040039 0.272166 -0.952579 0.136083 12461.040039 0.680414 -0.680414 0.272166 12461.040039 -0.136083 0.272166 -0.952579 12461.040039 0.272166 0.136083 -0.952579 12461.040039 -0.680414 0.272166 0.680414 12461.040039 -0.952579 -0.272166 -0.136083 12461.040039 0.136083 -0.272166 0.952579 12461.040039 0.952579 -0.136083 -0.272166 12461.040039 0.952579 -0.136083 0.272166 12461.040039 0.680414 -0.272166 0.680414 12461.040039 0.272166 0.680414 -0.680414 12461.040039 0.136083 -0.952579 0.272166 12461.040039 -0.408248 0.816497 -0.408248 12461.040039 0.952579 0.136083 -0.272166 12461.040039 -0.680414 0.272166 -0.680414 12461.040039 0.952579 0.136083 0.272166 12461.040039 0.136083 0.272166 -0.952579 12461.040039 0.680414 -0.272166 -0.680414 12461.040039 0.952579 0.272166 -0.136083 12461.040039 0.952579 0.272166 0.136083 12461.040039 0.136083 0.272166 0.952579 12461.040039 0.272166 0.136083 0.952579 12461.040039 0.952579 -0.272166 -0.136083 12461.040039 0.816497 0.408248 0.408248 12461.040039 -0.136083 -0.272166 -0.952579 12461.040039 0.408248 -0.408248 -0.816497 12461.040039 -0.952579 0.136083 0.272166 12461.040039 -0.272166 -0.136083 -0.952579 12461.040039 0.816497 -0.408248 -0.408248 12461.040039 -0.408248 -0.408248 -0.816497 12461.040039 -0.272166 0.136083 -0.952579 12461.040039 0.680414 0.680414 0.272166 12461.040039 0.408248 0.816497 0.408248 12461.040039 -0.408248 -0.816497 0.408248 12461.040039 -0.680414 0.680414 0.272166 12461.040039 0.816497 -0.408248 0.408248 12461.040039 -0.680414 -0.272166 0.680414 12461.040039 -0.680414 -0.272166 -0.680414 12461.040039 0.408248 -0.408248 0.816497 12461.040039 -0.272166 0.680414 0.680414 12461.040039 -0.952579 0.136083 -0.272166 12461.040039 -0.408248 -0.816497 -0.408248 12461.040039 -0.272166 0.680414 -0.680414 12461.040039 -0.272166 -0.136083 0.952579 12461.040039 -0.272166 0.952579 -0.136083 12461.040039 0.408248 0.408248 0.816497 12461.040039 -0.816497 0.408248 -0.408248 12461.040039 -0.408248 -0.408248 0.816497 12461.040039 0.136083 -0.952579 -0.272166 12461.040039 -0.272166 0.136083 0.952579 12461.040039 -0.272166 0.952579 0.136083 12461.040039 -0.816497 0.408248 0.408248 12461.040039 -0.136083 -0.952579 0.272166 12461.040039 0.816497 0.408248 -0.408248 12461.040039 0.408248 -0.816497 0.408248 12461.040039 -0.136083 -0.952579 -0.272166 12461.040039 -0.952579 0.272166 -0.136083 12461.040039 0.408248 0.408248 -0.816497 12461.040039 0.408248 -0.816497 -0.408248 12461.040039 0.680414 0.272166 0.680414 12461.040039 0.680414 0.680414 -0.272166 12461.040039 -0.952579 
0.272166 0.136083 12922.561523 -0.801784 -0.534522 -0.267261 12922.561523 0.534522 -0.801784 0.267261 12922.561523 0.801784 -0.267261 0.534522 12922.561523 0.267261 0.534522 -0.801784 12922.561523 0.534522 0.267261 0.801784 12922.561523 -0.267261 0.534522 -0.801784 12922.561523 -0.267261 0.534522 0.801784 12922.561523 -0.267261 -0.534522 -0.801784 12922.561523 -0.267261 0.801784 -0.534522 12922.561523 -0.801784 0.534522 -0.267261 12922.561523 0.534522 0.801784 0.267261 12922.561523 -0.534522 0.801784 0.267261 12922.561523 0.267261 -0.801784 -0.534522 12922.561523 0.267261 -0.801784 0.534522 12922.561523 -0.534522 -0.267261 -0.801784 12922.561523 0.534522 0.801784 -0.267261 12922.561523 0.267261 -0.534522 -0.801784 12922.561523 0.267261 -0.534522 0.801784 12922.561523 -0.534522 0.801784 -0.267261 12922.561523 0.801784 -0.534522 -0.267261 12922.561523 -0.534522 -0.267261 0.801784 12922.561523 -0.801784 0.534522 0.267261 12922.561523 -0.267261 0.801784 0.534522 12922.561523 -0.801784 -0.534522 0.267261 12922.561523 0.534522 -0.801784 -0.267261 12922.561523 0.801784 -0.267261 -0.534522 12922.561523 0.267261 0.534522 0.801784 12922.561523 0.801784 -0.534522 0.267261 12922.561523 -0.534522 -0.801784 -0.267261 12922.561523 0.801784 0.267261 -0.534522 12922.561523 -0.267261 -0.801784 -0.534522 12922.561523 0.534522 -0.267261 -0.801784 12922.561523 -0.801784 0.267261 0.534522 12922.561523 0.801784 0.267261 0.534522 12922.561523 -0.267261 -0.801784 0.534522 12922.561523 0.267261 0.801784 0.534522 12922.561523 -0.801784 -0.267261 -0.534522 12922.561523 0.267261 0.801784 -0.534522 12922.561523 0.801784 0.534522 0.267261 12922.561523 0.534522 -0.267261 0.801784 12922.561523 -0.267261 -0.534522 0.801784 12922.561523 -0.534522 0.267261 0.801784 12922.561523 0.801784 0.534522 -0.267261 12922.561523 -0.801784 0.267261 -0.534522 12922.561523 -0.534522 -0.801784 0.267261 12922.561523 0.534522 0.267261 -0.801784 12922.561523 -0.801784 -0.267261 0.534522 12922.561523 -0.534522 0.267261 -0.801784 13153.318359 -0.264906 -0.927173 0.264906 13153.318359 -0.529813 -0.529813 0.662266 13153.318359 -0.662266 -0.529813 -0.529813 13153.318359 -0.264906 -0.927173 -0.264906 13153.318359 0.662266 -0.529813 -0.529813 13153.318359 0.264906 -0.264906 0.927173 13153.318359 0.529813 0.662266 0.529813 13153.318359 -0.529813 0.662266 0.529813 13153.318359 -0.264906 0.927173 -0.264906 13153.318359 -0.927173 0.264906 0.264906 13153.318359 0.264906 -0.264906 -0.927173 13153.318359 -0.264906 -0.264906 0.927173 13153.318359 -0.529813 0.529813 -0.662266 13153.318359 -0.662266 -0.529813 0.529813 13153.318359 0.264906 0.927173 0.264906 13153.318359 -0.264906 -0.264906 -0.927173 13153.318359 -0.927173 0.264906 -0.264906 13153.318359 0.264906 -0.927173 0.264906 13153.318359 -0.264906 0.927173 0.264906 13153.318359 0.264906 -0.927173 -0.264906 13153.318359 0.264906 0.927173 -0.264906 13153.318359 0.662266 -0.529813 0.529813 13153.318359 0.662266 0.529813 0.529813 13153.318359 -0.662266 0.529813 -0.529813 13153.318359 -0.264906 0.264906 0.927173 13153.318359 -0.529813 -0.662266 -0.529813 13153.318359 0.529813 -0.529813 -0.662266 13153.318359 -0.662266 0.529813 0.529813 13153.318359 -0.927173 -0.264906 -0.264906 13153.318359 0.264906 0.264906 0.927173 13153.318359 0.927173 -0.264906 0.264906 13153.318359 0.927173 0.264906 -0.264906 13153.318359 0.264906 0.264906 -0.927173 13153.318359 0.529813 -0.529813 0.662266 13153.318359 0.927173 -0.264906 -0.264906 13153.318359 0.529813 0.529813 -0.662266 13153.318359 0.927173 0.264906 0.264906 
13153.318359 0.529813 -0.662266 0.529813 13153.318359 -0.529813 -0.662266 0.529813 13153.318359 -0.264906 0.264906 -0.927173 13153.318359 0.529813 0.662266 -0.529813 13153.318359 0.529813 0.529813 0.662266 13153.318359 -0.927173 -0.264906 0.264906 13153.318359 0.662266 0.529813 -0.529813 13153.318359 -0.529813 0.662266 -0.529813 13153.318359 -0.529813 0.529813 0.662266 13153.318359 0.529813 -0.662266 -0.529813 13153.318359 -0.529813 -0.529813 -0.662266 13384.080078 0.000000 0.393919 0.919145 13384.080078 0.393919 -0.919145 0.000000 13384.080078 0.000000 0.919145 0.393919 13384.080078 -0.919145 -0.393919 0.000000 13384.080078 -0.919145 0.000000 -0.393919 13384.080078 0.393919 0.000000 -0.919145 13384.080078 -0.919145 0.393919 0.000000 13384.080078 0.000000 -0.393919 -0.919145 13384.080078 0.000000 -0.393919 0.919145 13384.080078 0.000000 0.919145 -0.393919 13384.080078 0.393919 0.919145 0.000000 13384.080078 0.000000 -0.919145 -0.393919 13384.080078 0.393919 0.000000 0.919145 13384.080078 0.000000 0.393919 -0.919145 13384.080078 -0.919145 0.000000 0.393919 13384.080078 0.000000 -0.919145 0.393919 13384.080078 -0.393919 0.000000 -0.919145 13384.080078 0.919145 0.000000 -0.393919 13384.080078 -0.393919 0.000000 0.919145 13384.080078 -0.393919 0.919145 0.000000 13384.080078 0.919145 0.393919 0.000000 13384.080078 0.919145 -0.393919 0.000000 13384.080078 0.919145 0.000000 0.393919 13384.080078 -0.393919 -0.919145 0.000000 13614.839844 -0.390567 0.911322 0.130189 13614.839844 0.650945 -0.390567 0.650945 13614.839844 -0.390567 0.911322 -0.130189 13614.839844 0.650945 0.650945 -0.390567 13614.839844 -0.911322 0.130189 0.390567 13614.839844 -0.911322 -0.390567 0.130189 13614.839844 0.650945 0.390567 -0.650945 13614.839844 -0.130189 -0.390567 -0.911322 13614.839844 0.390567 0.650945 0.650945 13614.839844 0.911322 0.130189 -0.390567 13614.839844 0.390567 -0.650945 -0.650945 13614.839844 -0.390567 0.650945 0.650945 13614.839844 0.911322 0.130189 0.390567 13614.839844 -0.390567 0.650945 -0.650945 13614.839844 0.390567 -0.650945 0.650945 13614.839844 -0.911322 -0.390567 -0.130189 13614.839844 -0.650945 0.390567 0.650945 13614.839844 0.911322 -0.130189 0.390567 13614.839844 -0.390567 -0.130189 -0.911322 13614.839844 -0.911322 0.390567 0.130189 13614.839844 0.650945 0.650945 0.390567 13614.839844 0.390567 -0.911322 -0.130189 13614.839844 -0.650945 0.650945 -0.390567 13614.839844 -0.130189 -0.911322 -0.390567 13614.839844 -0.911322 -0.130189 0.390567 13614.839844 -0.130189 -0.911322 0.390567 13614.839844 0.130189 -0.390567 -0.911322 13614.839844 0.130189 -0.911322 -0.390567 13614.839844 -0.911322 -0.130189 -0.390567 13614.839844 0.390567 -0.911322 0.130189 13614.839844 0.911322 -0.390567 -0.130189 13614.839844 0.911322 -0.390567 0.130189 13614.839844 -0.130189 -0.390567 0.911322 13614.839844 -0.911322 0.130189 -0.390567 13614.839844 0.390567 0.650945 -0.650945 13614.839844 0.130189 -0.911322 0.390567 13614.839844 0.911322 -0.130189 -0.390567 13614.839844 0.130189 -0.390567 0.911322 13614.839844 0.650945 -0.390567 -0.650945 13614.839844 -0.650945 -0.390567 0.650945 13614.839844 -0.130189 0.911322 0.390567 13614.839844 -0.390567 -0.650945 -0.650945 13614.839844 0.390567 0.130189 -0.911322 13614.839844 0.390567 -0.130189 -0.911322 13614.839844 -0.130189 0.390567 0.911322 13614.839844 0.390567 0.911322 0.130189 13614.839844 0.390567 0.911322 -0.130189 13614.839844 0.650945 -0.650945 -0.390567 13614.839844 -0.650945 0.650945 0.390567 13614.839844 0.390567 -0.130189 0.911322 13614.839844 0.130189 0.911322 
0.390567 13614.839844 -0.650945 -0.650945 0.390567 13614.839844 -0.390567 -0.650945 0.650945 13614.839844 -0.390567 -0.911322 -0.130189 13614.839844 0.130189 0.911322 -0.390567 13614.839844 -0.390567 0.130189 0.911322 13614.839844 -0.650945 -0.390567 -0.650945 13614.839844 0.390567 0.130189 0.911322 13614.839844 0.650945 -0.650945 0.390567 13614.839844 -0.390567 0.130189 -0.911322 13614.839844 0.130189 0.390567 -0.911322 13614.839844 -0.911322 0.390567 -0.130189 13614.839844 0.650945 0.390567 0.650945 13614.839844 0.911322 0.390567 -0.130189 13614.839844 -0.130189 0.390567 -0.911322 13614.839844 -0.650945 -0.650945 -0.390567 13614.839844 -0.390567 -0.130189 0.911322 13614.839844 0.911322 0.390567 0.130189 13614.839844 -0.650945 0.390567 -0.650945 13614.839844 0.130189 0.390567 0.911322 13614.839844 -0.390567 -0.911322 0.130189 13614.839844 -0.130189 0.911322 -0.390567 14076.361328 -0.640184 0.000000 0.768221 14076.361328 0.768221 0.512147 -0.384111 14076.361328 0.768221 0.000000 -0.640184 14076.361328 0.768221 0.512147 0.384111 14076.361328 0.000000 -0.768221 -0.640184 14076.361328 0.640184 0.000000 0.768221 14076.361328 0.000000 0.640184 0.768221 14076.361328 0.000000 -0.768221 0.640184 14076.361328 0.768221 0.384111 0.512147 14076.361328 -0.640184 0.000000 -0.768221 14076.361328 0.384111 -0.512147 -0.768221 14076.361328 0.384111 -0.512147 0.768221 14076.361328 -0.768221 -0.384111 0.512147 14076.361328 0.000000 -0.640184 0.768221 14076.361328 0.768221 0.384111 -0.512147 14076.361328 0.384111 -0.768221 0.512147 14076.361328 -0.384111 -0.512147 0.768221 14076.361328 -0.768221 0.384111 -0.512147 14076.361328 0.000000 0.640184 -0.768221 14076.361328 -0.768221 0.384111 0.512147 14076.361328 0.384111 -0.768221 -0.512147 14076.361328 0.512147 -0.384111 0.768221 14076.361328 0.000000 0.768221 0.640184 14076.361328 0.000000 -0.640184 -0.768221 14076.361328 0.768221 0.000000 0.640184 14076.361328 0.512147 -0.384111 -0.768221 14076.361328 -0.384111 -0.512147 -0.768221 14076.361328 0.000000 0.768221 -0.640184 14076.361328 0.384111 0.512147 0.768221 14076.361328 -0.768221 0.000000 0.640184 14076.361328 -0.640184 -0.768221 0.000000 14076.361328 -0.384111 0.768221 0.512147 14076.361328 0.640184 0.000000 -0.768221 14076.361328 -0.512147 0.384111 0.768221 14076.361328 -0.384111 0.512147 0.768221 14076.361328 0.384111 0.768221 -0.512147 14076.361328 0.768221 -0.512147 0.384111 14076.361328 -0.384111 0.512147 -0.768221 14076.361328 -0.768221 0.512147 -0.384111 14076.361328 0.512147 -0.768221 -0.384111 14076.361328 0.768221 -0.512147 -0.384111 14076.361328 -0.768221 0.512147 0.384111 14076.361328 -0.512147 0.768221 -0.384111 14076.361328 0.768221 -0.640184 0.000000 14076.361328 0.512147 0.768221 -0.384111 14076.361328 0.384111 0.768221 0.512147 14076.361328 -0.512147 -0.384111 -0.768221 14076.361328 0.640184 -0.768221 0.000000 14076.361328 -0.512147 0.768221 0.384111 14076.361328 -0.768221 0.640184 0.000000 14076.361328 -0.512147 -0.384111 0.768221 14076.361328 0.512147 0.768221 0.384111 14076.361328 -0.768221 -0.640184 0.000000 14076.361328 0.640184 0.768221 0.000000 14076.361328 -0.384111 0.768221 -0.512147 14076.361328 0.768221 -0.384111 -0.512147 14076.361328 -0.768221 0.000000 -0.640184 14076.361328 -0.512147 0.384111 -0.768221 14076.361328 0.768221 0.640184 0.000000 14076.361328 -0.768221 -0.384111 -0.512147 14076.361328 -0.384111 -0.768221 0.512147 14076.361328 0.512147 0.384111 0.768221 14076.361328 -0.512147 -0.768221 -0.384111 14076.361328 0.384111 0.512147 -0.768221 14076.361328 -0.384111 
-0.768221 -0.512147 14076.361328 -0.640184 0.768221 0.000000 14076.361328 0.512147 -0.768221 0.384111 14076.361328 -0.768221 -0.512147 0.384111 14076.361328 -0.512147 -0.768221 0.384111 14076.361328 0.768221 -0.384111 0.512147 14076.361328 -0.768221 -0.512147 -0.384111 14076.361328 0.512147 0.384111 -0.768221 14307.119141 -0.889001 0.254000 -0.381000 14307.119141 -0.889001 0.381000 -0.254000 14307.119141 0.762001 -0.127000 0.635001 14307.119141 0.762001 -0.127000 -0.635001 14307.119141 -0.381000 -0.889001 0.254000 14307.119141 0.381000 0.889001 -0.254000 14307.119141 -0.762001 -0.127000 0.635001 14307.119141 -0.254000 0.381000 -0.889001 14307.119141 0.381000 0.254000 0.889001 14307.119141 0.762001 -0.635001 -0.127000 14307.119141 -0.254000 0.381000 0.889001 14307.119141 0.762001 -0.635001 0.127000 14307.119141 -0.889001 0.254000 0.381000 14307.119141 -0.254000 0.889001 -0.381000 14307.119141 -0.254000 0.889001 0.381000 14307.119141 0.381000 0.254000 -0.889001 14307.119141 0.762001 0.127000 0.635001 14307.119141 0.127000 -0.635001 -0.762001 14307.119141 0.381000 -0.254000 0.889001 14307.119141 -0.381000 0.889001 0.254000 14307.119141 -0.381000 0.889001 -0.254000 14307.119141 -0.889001 -0.381000 -0.254000 14307.119141 0.254000 0.381000 0.889001 14307.119141 0.254000 0.381000 -0.889001 14307.119141 -0.381000 -0.254000 0.889001 14307.119141 0.889001 0.254000 -0.381000 14307.119141 0.889001 0.254000 0.381000 14307.119141 0.889001 0.381000 -0.254000 14307.119141 0.889001 0.381000 0.254000 14307.119141 -0.889001 -0.381000 0.254000 14307.119141 0.127000 0.635001 -0.762001 14307.119141 0.127000 0.635001 0.762001 14307.119141 0.127000 0.762001 -0.635001 14307.119141 -0.381000 0.254000 -0.889001 14307.119141 0.127000 0.762001 0.635001 14307.119141 0.254000 -0.381000 0.889001 14307.119141 -0.762001 0.635001 -0.127000 14307.119141 -0.762001 0.635001 0.127000 14307.119141 0.254000 -0.381000 -0.889001 14307.119141 0.254000 -0.889001 0.381000 14307.119141 0.254000 -0.889001 -0.381000 14307.119141 -0.381000 0.254000 0.889001 14307.119141 -0.762001 0.127000 -0.635001 14307.119141 -0.889001 -0.254000 -0.381000 14307.119141 0.889001 -0.254000 -0.381000 14307.119141 -0.762001 0.127000 0.635001 14307.119141 0.381000 -0.254000 -0.889001 14307.119141 0.762001 0.127000 -0.635001 14307.119141 -0.254000 -0.381000 0.889001 14307.119141 -0.254000 -0.381000 -0.889001 14307.119141 0.381000 -0.889001 0.254000 14307.119141 0.127000 -0.762001 -0.635001 14307.119141 -0.381000 -0.254000 -0.889001 14307.119141 0.381000 -0.889001 -0.254000 14307.119141 0.127000 -0.762001 0.635001 14307.119141 0.889001 -0.254000 0.381000 14307.119141 -0.635001 -0.762001 -0.127000 14307.119141 0.762001 0.635001 -0.127000 14307.119141 0.127000 -0.635001 0.762001 14307.119141 0.254000 0.889001 0.381000 14307.119141 0.762001 0.635001 0.127000 14307.119141 0.254000 0.889001 -0.381000 14307.119141 -0.254000 -0.889001 0.381000 14307.119141 -0.254000 -0.889001 -0.381000 14307.119141 0.889001 -0.381000 -0.254000 14307.119141 0.889001 -0.381000 0.254000 14307.119141 -0.889001 -0.254000 0.381000 14307.119141 -0.635001 -0.762001 0.127000 14307.119141 0.381000 0.889001 0.254000 14307.119141 -0.127000 0.635001 0.762001 14307.119141 0.635001 0.762001 0.127000 14307.119141 0.635001 -0.127000 0.762001 14307.119141 -0.635001 0.127000 -0.762001 14307.119141 -0.635001 0.762001 -0.127000 14307.119141 0.635001 -0.762001 -0.127000 14307.119141 -0.127000 -0.635001 0.762001 14307.119141 0.635001 -0.762001 0.127000 14307.119141 -0.635001 0.762001 0.127000 14307.119141 
-0.381000 -0.889001 -0.254000 14307.119141 -0.127000 -0.762001 0.635001 14307.119141 0.635001 -0.127000 -0.762001 14307.119141 -0.635001 0.127000 0.762001 14307.119141 -0.127000 -0.635001 -0.762001 14307.119141 -0.127000 0.762001 0.635001 14307.119141 0.635001 0.762001 -0.127000 14307.119141 -0.127000 0.762001 -0.635001 14307.119141 -0.762001 -0.127000 -0.635001 14307.119141 -0.127000 -0.762001 -0.635001 14307.119141 -0.635001 -0.127000 0.762001 14307.119141 0.635001 0.127000 0.762001 14307.119141 0.635001 0.127000 -0.762001 14307.119141 -0.762001 -0.635001 -0.127000 14307.119141 -0.889001 0.381000 0.254000 14307.119141 -0.635001 -0.127000 -0.762001 14307.119141 -0.127000 0.635001 -0.762001 14307.119141 -0.762001 -0.635001 0.127000 14768.639648 0.000000 0.000000 -1.000000 14768.639648 0.000000 1.000000 0.000000 14768.639648 -1.000000 0.000000 0.000000 14768.639648 0.000000 -1.000000 0.000000 14768.639648 0.000000 0.000000 1.000000 14768.639648 1.000000 0.000000 0.000000 14999.400391 0.124035 0.000000 -0.992278 14999.400391 0.000000 0.868243 -0.496139 14999.400391 0.620174 -0.248069 0.744208 14999.400391 -0.620174 -0.744208 0.248069 14999.400391 -0.868243 0.000000 -0.496139 14999.400391 -0.744208 -0.620174 0.248069 14999.400391 0.000000 0.496139 0.868243 14999.400391 -0.124035 0.000000 0.992278 14999.400391 -0.496139 0.000000 -0.868243 14999.400391 -0.992278 -0.124035 0.000000 14999.400391 0.868243 0.000000 0.496139 14999.400391 -0.620174 0.248069 -0.744208 14999.400391 0.868243 0.000000 -0.496139 14999.400391 0.248069 0.620174 -0.744208 14999.400391 -0.620174 0.248069 0.744208 14999.400391 0.620174 -0.248069 -0.744208 14999.400391 0.000000 -0.992278 -0.124035 14999.400391 0.000000 -0.496139 0.868243 14999.400391 -0.620174 0.744208 -0.248069 14999.400391 -0.620174 0.744208 0.248069 14999.400391 0.248069 0.744208 0.620174 14999.400391 0.000000 -0.992278 0.124035 14999.400391 0.620174 0.248069 -0.744208 14999.400391 -0.248069 -0.620174 0.744208 14999.400391 -0.744208 -0.248069 -0.620174 14999.400391 -0.248069 -0.620174 -0.744208 14999.400391 0.496139 0.000000 -0.868243 14999.400391 0.744208 0.620174 -0.248069 14999.400391 0.124035 -0.992278 0.000000 14999.400391 -0.620174 -0.744208 -0.248069 14999.400391 0.496139 0.000000 0.868243 14999.400391 0.620174 0.248069 0.744208 14999.400391 0.000000 -0.868243 0.496139 14999.400391 -0.248069 -0.744208 0.620174 14999.400391 0.744208 0.248069 0.620174 14999.400391 0.744208 0.620174 0.248069 14999.400391 -0.620174 -0.248069 0.744208 14999.400391 -0.248069 -0.744208 -0.620174 14999.400391 -0.744208 0.248069 0.620174 14999.400391 0.248069 0.620174 0.744208 14999.400391 0.496139 0.868243 0.000000 14999.400391 0.248069 0.744208 -0.620174 14999.400391 0.000000 0.868243 0.496139 14999.400391 -0.744208 0.248069 -0.620174 14999.400391 0.744208 0.248069 -0.620174 14999.400391 -0.496139 0.000000 0.868243 14999.400391 -0.868243 -0.496139 0.000000 14999.400391 0.000000 0.992278 -0.124035 14999.400391 0.000000 0.992278 0.124035 14999.400391 -0.124035 0.000000 -0.992278 14999.400391 0.000000 -0.496139 -0.868243 14999.400391 -0.496139 -0.868243 0.000000 14999.400391 0.000000 -0.868243 -0.496139 14999.400391 0.868243 -0.496139 0.000000 14999.400391 -0.124035 0.992278 0.000000 14999.400391 0.124035 0.000000 0.992278 14999.400391 -0.744208 -0.248069 0.620174 14999.400391 0.496139 -0.868243 0.000000 14999.400391 -0.620174 -0.248069 -0.744208 14999.400391 0.744208 -0.620174 0.248069 14999.400391 0.000000 0.124035 -0.992278 14999.400391 -0.992278 0.124035 0.000000 
14999.400391 -0.248069 0.620174 -0.744208 14999.400391 0.620174 -0.744208 0.248069 14999.400391 -0.496139 0.868243 0.000000 14999.400391 -0.248069 0.620174 0.744208 14999.400391 0.124035 0.992278 0.000000 14999.400391 -0.744208 0.620174 0.248069 14999.400391 0.620174 -0.744208 -0.248069 14999.400391 0.620174 0.744208 -0.248069 14999.400391 0.744208 -0.248069 -0.620174 14999.400391 -0.744208 0.620174 -0.248069 14999.400391 0.992278 0.000000 -0.124035 14999.400391 -0.868243 0.496139 0.000000 14999.400391 0.868243 0.496139 0.000000 14999.400391 -0.248069 0.744208 0.620174 14999.400391 0.992278 0.000000 0.124035 14999.400391 -0.248069 0.744208 -0.620174 14999.400391 0.620174 0.744208 0.248069 14999.400391 0.000000 -0.124035 -0.992278 14999.400391 -0.868243 0.000000 0.496139 14999.400391 0.248069 -0.744208 0.620174 14999.400391 0.248069 -0.620174 0.744208 14999.400391 0.000000 0.124035 0.992278 14999.400391 -0.992278 0.000000 0.124035 14999.400391 -0.992278 0.000000 -0.124035 14999.400391 -0.124035 -0.992278 0.000000 14999.400391 0.000000 0.496139 -0.868243 14999.400391 0.248069 -0.620174 -0.744208 14999.400391 -0.744208 -0.620174 -0.248069 14999.400391 0.744208 -0.620174 -0.248069 14999.400391 0.992278 -0.124035 0.000000 14999.400391 0.744208 -0.248069 0.620174 14999.400391 0.992278 0.124035 0.000000 14999.400391 0.000000 -0.124035 0.992278 14999.400391 0.248069 -0.744208 -0.620174 15230.161133 -0.123091 0.492366 -0.861640 15230.161133 -0.615457 -0.492366 -0.615457 15230.161133 -0.861640 -0.123091 0.492366 15230.161133 0.123091 -0.492366 0.861640 15230.161133 -0.492366 0.861640 -0.123091 15230.161133 0.492366 0.123091 -0.861640 15230.161133 -0.492366 0.123091 -0.861640 15230.161133 0.123091 0.984732 0.123091 15230.161133 0.123091 0.861640 0.492366 15230.161133 0.123091 -0.492366 -0.861640 15230.161133 0.123091 0.984732 -0.123091 15230.161133 -0.123091 -0.123091 -0.984732 15230.161133 -0.123091 -0.123091 0.984732 15230.161133 0.492366 0.861640 0.123091 15230.161133 -0.615457 0.615457 -0.492366 15230.161133 0.984732 0.123091 -0.123091 15230.161133 -0.492366 0.123091 0.861640 15230.161133 0.492366 -0.123091 0.861640 15230.161133 -0.492366 0.615457 0.615457 15230.161133 0.984732 0.123091 0.123091 15230.161133 -0.615457 -0.615457 0.492366 15230.161133 0.492366 0.861640 -0.123091 15230.161133 0.123091 0.861640 -0.492366 15230.161133 -0.615457 0.492366 0.615457 15230.161133 -0.861640 -0.492366 -0.123091 15230.161133 0.984732 -0.123091 0.123091 15230.161133 -0.492366 -0.615457 -0.615457 15230.161133 -0.615457 0.492366 -0.615457 15230.161133 -0.984732 0.123091 -0.123091 15230.161133 -0.492366 -0.615457 0.615457 15230.161133 0.615457 -0.492366 0.615457 15230.161133 -0.123091 0.861640 0.492366 15230.161133 -0.492366 -0.123091 0.861640 15230.161133 -0.123091 0.123091 0.984732 15230.161133 0.123091 -0.123091 0.984732 15230.161133 0.492366 0.615457 -0.615457 15230.161133 -0.984732 -0.123091 -0.123091 15230.161133 0.123091 -0.861640 0.492366 15230.161133 -0.984732 -0.123091 0.123091 15230.161133 0.123091 0.123091 -0.984732 15230.161133 -0.123091 0.984732 0.123091 15230.161133 0.861640 0.123091 0.492366 15230.161133 -0.861640 -0.492366 0.123091 15230.161133 0.123091 0.123091 0.984732 15230.161133 0.861640 0.123091 -0.492366 15230.161133 0.492366 0.615457 0.615457 15230.161133 -0.123091 0.861640 -0.492366 15230.161133 -0.123091 0.123091 -0.984732 15230.161133 0.861640 -0.492366 -0.123091 15230.161133 0.861640 -0.492366 0.123091 15230.161133 0.984732 -0.123091 -0.123091 15230.161133 -0.861640 -0.123091 
-0.492366 15230.161133 0.615457 -0.615457 -0.492366 15230.161133 -0.123091 0.984732 -0.123091 15230.161133 0.492366 0.123091 0.861640 15230.161133 0.861640 0.492366 0.123091 15230.161133 -0.984732 0.123091 0.123091 15230.161133 0.123091 0.492366 0.861640 15230.161133 0.861640 -0.123091 -0.492366 15230.161133 0.615457 -0.615457 0.492366 15230.161133 -0.615457 -0.615457 -0.492366 15230.161133 0.123091 0.492366 -0.861640 15230.161133 0.615457 -0.492366 -0.615457 15230.161133 -0.492366 0.861640 0.123091 15230.161133 0.123091 -0.123091 -0.984732 15230.161133 0.861640 -0.123091 0.492366 15230.161133 0.861640 0.492366 -0.123091 15230.161133 -0.492366 -0.123091 -0.861640 15230.161133 -0.123091 0.492366 0.861640 15230.161133 0.615457 0.615457 0.492366 15230.161133 0.615457 0.615457 -0.492366 15230.161133 0.615457 0.492366 0.615457 15230.161133 0.123091 -0.984732 0.123091 15230.161133 0.492366 -0.861640 -0.123091 15230.161133 -0.492366 0.615457 -0.615457 15230.161133 0.492366 -0.861640 0.123091 15230.161133 0.492366 -0.615457 -0.615457 15230.161133 -0.123091 -0.492366 -0.861640 15230.161133 -0.615457 0.615457 0.492366 15230.161133 -0.861640 0.492366 0.123091 15230.161133 -0.861640 0.492366 -0.123091 15230.161133 -0.492366 -0.861640 0.123091 15230.161133 -0.492366 -0.861640 -0.123091 15230.161133 0.615457 0.492366 -0.615457 15230.161133 -0.861640 0.123091 -0.492366 15230.161133 -0.861640 0.123091 0.492366 15230.161133 0.123091 -0.984732 -0.123091 15230.161133 0.492366 -0.615457 0.615457 15230.161133 -0.123091 -0.861640 0.492366 15230.161133 -0.123091 -0.984732 -0.123091 15230.161133 0.123091 -0.861640 -0.492366 15230.161133 0.492366 -0.123091 -0.861640 15230.161133 -0.123091 -0.984732 0.123091 15230.161133 -0.615457 -0.492366 0.615457 15230.161133 -0.123091 -0.492366 0.861640 15230.161133 -0.123091 -0.861640 -0.492366 15460.918945 -0.366508 0.366508 0.855186 15460.918945 0.855186 0.366508 0.366508 15460.918945 -0.366508 0.855186 0.366508 15460.918945 0.366508 -0.855186 0.366508 15460.918945 0.366508 0.855186 0.366508 15460.918945 -0.366508 -0.855186 0.366508 15460.918945 0.366508 -0.366508 -0.855186 15460.918945 -0.366508 0.366508 -0.855186 15460.918945 -0.855186 -0.366508 0.366508 15460.918945 0.366508 0.366508 -0.855186 15460.918945 0.855186 -0.366508 0.366508 15460.918945 -0.366508 0.855186 -0.366508 15460.918945 -0.855186 0.366508 -0.366508 15460.918945 0.855186 -0.366508 -0.366508 15460.918945 0.366508 -0.366508 0.855186 15460.918945 0.366508 -0.855186 -0.366508 15460.918945 0.855186 0.366508 -0.366508 15460.918945 -0.366508 -0.366508 0.855186 15460.918945 -0.366508 -0.366508 -0.855186 15460.918945 -0.366508 -0.855186 -0.366508 15460.918945 0.366508 0.855186 -0.366508 15460.918945 0.366508 0.366508 0.855186 15460.918945 -0.855186 0.366508 0.366508 15460.918945 -0.855186 -0.366508 -0.366508 15691.679688 0.000000 -0.970143 -0.242536 15691.679688 0.000000 -0.242536 0.970143 15691.679688 -0.242536 0.000000 -0.970143 15691.679688 -0.970143 0.242536 0.000000 15691.679688 -0.485071 -0.485071 0.727607 15691.679688 -0.727607 -0.485071 0.485071 15691.679688 0.000000 -0.242536 -0.970143 15691.679688 -0.485071 -0.727607 0.485071 15691.679688 0.485071 0.485071 0.727607 15691.679688 -0.970143 0.000000 -0.242536 15691.679688 -0.485071 0.485071 0.727607 15691.679688 -0.970143 0.000000 0.242536 15691.679688 0.485071 -0.485071 -0.727607 15691.679688 0.485071 -0.727607 0.485071 15691.679688 0.242536 0.000000 0.970143 15691.679688 0.485071 0.485071 -0.727607 15691.679688 0.970143 -0.242536 0.000000 15691.679688 
-0.485071 -0.485071 -0.727607 15691.679688 -0.727607 -0.485071 -0.485071 15691.679688 0.242536 0.000000 -0.970143 15691.679688 -0.242536 0.000000 0.970143 15691.679688 -0.727607 0.485071 -0.485071 15691.679688 0.485071 -0.485071 0.727607 15691.679688 0.000000 0.242536 0.970143 15691.679688 0.000000 -0.970143 0.242536 15691.679688 -0.485071 0.727607 -0.485071 15691.679688 0.727607 0.485071 -0.485071 15691.679688 0.727607 -0.485071 -0.485071 15691.679688 -0.485071 -0.727607 -0.485071 15691.679688 0.485071 -0.727607 -0.485071 15691.679688 0.485071 0.727607 0.485071 15691.679688 -0.485071 0.727607 0.485071 15691.679688 -0.242536 0.970143 0.000000 15691.679688 -0.485071 0.485071 -0.727607 15691.679688 0.242536 0.970143 0.000000 15691.679688 0.970143 0.000000 -0.242536 15691.679688 0.970143 0.242536 0.000000 15691.679688 0.727607 -0.485071 0.485071 15691.679688 0.000000 0.970143 0.242536 15691.679688 0.727607 0.485071 0.485071 15691.679688 0.970143 0.000000 0.242536 15691.679688 0.485071 0.727607 -0.485071 15691.679688 -0.970143 -0.242536 0.000000 15691.679688 -0.727607 0.485071 0.485071 15691.679688 0.000000 0.242536 -0.970143 15691.679688 0.242536 -0.970143 0.000000 15691.679688 0.000000 0.970143 -0.242536 15691.679688 -0.242536 -0.970143 0.000000 15922.438477 0.963087 0.240772 0.120386 15922.438477 0.240772 0.120386 -0.963087 15922.438477 -0.120386 0.240772 0.963087 15922.438477 0.240772 -0.842701 -0.481543 15922.438477 -0.240772 0.481543 0.842701 15922.438477 -0.963087 0.120386 -0.240772 15922.438477 0.963087 0.120386 -0.240772 15922.438477 -0.481543 0.842701 0.240772 15922.438477 -0.240772 0.481543 -0.842701 15922.438477 0.240772 0.120386 0.963087 15922.438477 0.842701 0.240772 0.481543 15922.438477 0.240772 -0.842701 0.481543 15922.438477 0.240772 -0.963087 0.120386 15922.438477 0.120386 0.240772 0.963087 15922.438477 0.240772 -0.963087 -0.120386 15922.438477 -0.481543 -0.240772 -0.842701 15922.438477 0.963087 0.120386 0.240772 15922.438477 0.963087 0.240772 -0.120386 15922.438477 0.842701 0.240772 -0.481543 15922.438477 0.481543 0.842701 -0.240772 15922.438477 0.481543 -0.842701 -0.240772 15922.438477 0.120386 0.240772 -0.963087 15922.438477 0.120386 0.963087 0.240772 15922.438477 -0.240772 0.842701 -0.481543 15922.438477 0.963087 -0.240772 0.120386 15922.438477 0.963087 -0.120386 -0.240772 15922.438477 -0.481543 -0.240772 0.842701 15922.438477 0.963087 -0.240772 -0.120386 15922.438477 0.963087 -0.120386 0.240772 15922.438477 0.842701 0.481543 0.240772 15922.438477 -0.963087 0.240772 0.120386 15922.438477 -0.240772 0.842701 0.481543 15922.438477 -0.481543 0.842701 -0.240772 15922.438477 -0.963087 0.240772 -0.120386 15922.438477 -0.842701 0.481543 -0.240772 15922.438477 -0.842701 0.240772 -0.481543 15922.438477 -0.240772 0.963087 -0.120386 15922.438477 0.842701 0.481543 -0.240772 15922.438477 0.240772 -0.120386 0.963087 15922.438477 -0.240772 0.963087 0.120386 15922.438477 0.481543 0.842701 0.240772 15922.438477 0.240772 -0.481543 0.842701 15922.438477 0.120386 0.963087 -0.240772 15922.438477 -0.842701 0.481543 0.240772 15922.438477 0.481543 -0.842701 0.240772 15922.438477 0.240772 -0.120386 -0.963087 15922.438477 -0.842701 0.240772 0.481543 15922.438477 -0.120386 -0.963087 -0.240772 15922.438477 0.240772 -0.481543 -0.842701 15922.438477 -0.963087 0.120386 0.240772 15922.438477 -0.240772 -0.481543 -0.842701 15922.438477 -0.842701 -0.240772 0.481543 15922.438477 0.842701 -0.240772 -0.481543 15922.438477 0.240772 0.842701 -0.481543 15922.438477 -0.481543 -0.842701 -0.240772 15922.438477 
-0.240772 -0.842701 0.481543 15922.438477 -0.240772 0.120386 -0.963087 15922.438477 -0.240772 -0.120386 -0.963087 15922.438477 -0.240772 -0.842701 -0.481543 15922.438477 -0.963087 -0.120386 -0.240772 15922.438477 0.842701 -0.481543 -0.240772 15922.438477 -0.963087 -0.240772 0.120386 15922.438477 0.240772 0.481543 0.842701 15922.438477 -0.240772 -0.481543 0.842701 15922.438477 0.481543 0.240772 0.842701 15922.438477 -0.842701 -0.240772 -0.481543 15922.438477 0.842701 -0.481543 0.240772 15922.438477 -0.963087 -0.240772 -0.120386 15922.438477 -0.481543 0.240772 -0.842701 15922.438477 -0.240772 -0.963087 0.120386 15922.438477 0.120386 -0.240772 0.963087 15922.438477 0.481543 -0.240772 -0.842701 15922.438477 -0.240772 -0.963087 -0.120386 15922.438477 -0.120386 -0.240772 -0.963087 15922.438477 0.481543 0.240772 -0.842701 15922.438477 0.120386 -0.240772 -0.963087 15922.438477 0.842701 -0.240772 0.481543 15922.438477 0.481543 -0.240772 0.842701 15922.438477 0.240772 0.481543 -0.842701 15922.438477 -0.481543 -0.842701 0.240772 15922.438477 -0.842701 -0.481543 0.240772 15922.438477 -0.120386 0.963087 -0.240772 15922.438477 0.240772 0.963087 0.120386 15922.438477 0.120386 -0.963087 0.240772 15922.438477 -0.120386 -0.963087 0.240772 15922.438477 0.240772 0.963087 -0.120386 15922.438477 0.240772 0.842701 0.481543 15922.438477 -0.963087 -0.120386 0.240772 15922.438477 -0.842701 -0.481543 -0.240772 15922.438477 0.120386 -0.963087 -0.240772 15922.438477 -0.120386 0.240772 -0.963087 15922.438477 -0.120386 -0.240772 0.963087 15922.438477 -0.120386 0.963087 0.240772 15922.438477 -0.240772 -0.120386 0.963087 15922.438477 -0.240772 0.120386 0.963087 15922.438477 -0.481543 0.240772 0.842701 16153.199219 0.717137 -0.597614 0.358569 16153.199219 0.597614 -0.717137 -0.358569 16153.199219 -0.717137 0.597614 0.358569 16153.199219 0.717137 0.597614 -0.358569 16153.199219 0.358569 -0.597614 0.717137 16153.199219 -0.597614 -0.358569 0.717137 16153.199219 0.597614 -0.717137 0.358569 16153.199219 -0.717137 0.597614 -0.358569 16153.199219 -0.597614 -0.717137 -0.358569 16153.199219 -0.717137 -0.358569 0.597614 16153.199219 0.358569 0.597614 0.717137 16153.199219 -0.717137 -0.358569 -0.597614 16153.199219 0.597614 0.717137 0.358569 16153.199219 0.358569 0.717137 -0.597614 16153.199219 0.717137 0.358569 -0.597614 16153.199219 0.597614 0.358569 -0.717137 16153.199219 -0.717137 0.358569 -0.597614 16153.199219 0.358569 0.717137 0.597614 16153.199219 0.358569 -0.717137 0.597614 16153.199219 0.358569 -0.597614 -0.717137 16153.199219 0.717137 0.597614 0.358569 16153.199219 0.717137 -0.597614 -0.358569 16153.199219 -0.597614 -0.358569 -0.717137 16153.199219 -0.358569 -0.597614 0.717137 16153.199219 -0.717137 -0.597614 0.358569 16153.199219 0.597614 0.358569 0.717137 16153.199219 0.717137 0.358569 0.597614 16153.199219 0.597614 -0.358569 -0.717137 16153.199219 -0.597614 0.358569 -0.717137 16153.199219 -0.597614 0.717137 -0.358569 16153.199219 -0.358569 0.717137 0.597614 16153.199219 -0.597614 0.717137 0.358569 16153.199219 0.597614 -0.358569 0.717137 16153.199219 -0.358569 -0.717137 -0.597614 16153.199219 -0.358569 -0.597614 -0.717137 16153.199219 -0.717137 0.358569 0.597614 16153.199219 0.717137 -0.358569 0.597614 16153.199219 0.358569 0.597614 -0.717137 16153.199219 -0.597614 0.358569 0.717137 16153.199219 -0.358569 -0.717137 0.597614 16153.199219 -0.358569 0.597614 -0.717137 16153.199219 -0.358569 0.597614 0.717137 16153.199219 -0.717137 -0.597614 -0.358569 16153.199219 0.358569 -0.717137 -0.597614 16153.199219 -0.358569 0.717137 
-0.597614 16153.199219 0.717137 -0.358569 -0.597614 16153.199219 0.597614 0.717137 -0.358569 16153.199219 -0.597614 -0.717137 0.358569 16614.718750 0.000000 -0.707107 -0.707107 16614.718750 -0.235702 0.235702 -0.942809 16614.718750 0.235702 0.235702 -0.942809 16614.718750 0.235702 0.942809 -0.235702 16614.718750 0.707107 -0.707107 0.000000 16614.718750 -0.707107 0.707107 0.000000 16614.718750 0.235702 0.235702 0.942809 16614.718750 -0.235702 -0.235702 -0.942809 16614.718750 0.707107 0.000000 -0.707107 16614.718750 0.235702 0.942809 0.235702 16614.718750 -0.235702 0.235702 0.942809 16614.718750 0.942809 0.235702 -0.235702 16614.718750 -0.942809 -0.235702 0.235702 16614.718750 0.707107 0.707107 0.000000 16614.718750 0.942809 -0.235702 -0.235702 16614.718750 0.235702 -0.235702 0.942809 16614.718750 0.235702 -0.942809 0.235702 16614.718750 -0.235702 -0.942809 -0.235702 16614.718750 -0.707107 -0.707107 0.000000 16614.718750 0.942809 -0.235702 0.235702 16614.718750 -0.942809 0.235702 0.235702 16614.718750 0.000000 0.707107 0.707107 16614.718750 0.942809 0.235702 0.235702 16614.718750 -0.942809 -0.235702 -0.235702 16614.718750 0.707107 0.000000 0.707107 16614.718750 -0.235702 -0.942809 0.235702 16614.718750 0.235702 -0.235702 -0.942809 16614.718750 -0.235702 -0.235702 0.942809 16614.718750 -0.235702 0.942809 -0.235702 16614.718750 0.000000 0.707107 -0.707107 16614.718750 0.000000 -0.707107 0.707107 16614.718750 0.235702 -0.942809 -0.235702 16614.718750 -0.707107 0.000000 0.707107 16614.718750 -0.235702 0.942809 0.235702 16614.718750 -0.942809 0.235702 -0.235702 16614.718750 -0.707107 0.000000 -0.707107 16845.478516 0.936329 0.351123 0.000000 16845.478516 0.351123 0.936329 0.000000 16845.478516 -0.351123 0.000000 -0.936329 16845.478516 0.936329 0.000000 0.351123 16845.478516 -0.936329 0.000000 -0.351123 16845.478516 0.702247 -0.117041 -0.702247 16845.478516 0.702247 -0.117041 0.702247 16845.478516 -0.936329 0.000000 0.351123 16845.478516 0.351123 0.000000 0.936329 16845.478516 0.351123 0.000000 -0.936329 16845.478516 0.351123 -0.936329 0.000000 16845.478516 -0.936329 -0.351123 0.000000 16845.478516 0.936329 -0.351123 0.000000 16845.478516 0.702247 0.117041 0.702247 16845.478516 0.936329 0.000000 -0.351123 16845.478516 0.702247 -0.702247 -0.117041 16845.478516 0.702247 0.702247 0.117041 16845.478516 0.702247 0.702247 -0.117041 16845.478516 0.702247 0.117041 -0.702247 16845.478516 0.702247 -0.702247 0.117041 16845.478516 -0.936329 0.351123 0.000000 16845.478516 -0.117041 0.702247 -0.702247 16845.478516 0.000000 0.936329 -0.351123 16845.478516 -0.117041 0.702247 0.702247 16845.478516 0.117041 -0.702247 0.702247 16845.478516 -0.351123 -0.936329 0.000000 16845.478516 -0.351123 0.000000 0.936329 16845.478516 0.000000 -0.351123 0.936329 16845.478516 -0.702247 0.702247 -0.117041 16845.478516 0.117041 -0.702247 -0.702247 16845.478516 0.000000 -0.351123 -0.936329 16845.478516 0.000000 0.351123 0.936329 16845.478516 -0.351123 0.936329 0.000000 16845.478516 -0.702247 0.702247 0.117041 16845.478516 0.000000 -0.936329 -0.351123 16845.478516 0.000000 -0.936329 0.351123 16845.478516 -0.702247 -0.117041 -0.702247 16845.478516 -0.117041 -0.702247 0.702247 16845.478516 0.000000 0.936329 0.351123 16845.478516 -0.702247 -0.702247 0.117041 16845.478516 -0.117041 -0.702247 -0.702247 16845.478516 -0.702247 0.117041 -0.702247 16845.478516 -0.702247 -0.117041 0.702247 16845.478516 -0.702247 -0.702247 -0.117041 16845.478516 -0.702247 0.117041 0.702247 16845.478516 0.000000 0.351123 -0.936329 16845.478516 0.117041 0.702247 
0.702247 16845.478516 0.117041 0.702247 -0.702247 17076.240234 0.000000 0.813733 -0.581238 17076.240234 0.813733 0.348743 -0.464991 17076.240234 0.813733 0.348743 0.464991 17076.240234 -0.348743 -0.464991 0.813733 17076.240234 -0.348743 0.929981 -0.116248 17076.240234 -0.348743 0.929981 0.116248 17076.240234 0.348743 -0.464991 -0.813733 17076.240234 -0.929981 0.116248 0.348743 17076.240234 0.813733 0.000000 -0.581238 17076.240234 0.000000 0.813733 0.581238 17076.240234 -0.348743 0.813733 0.464991 17076.240234 -0.116248 0.348743 0.929981 17076.240234 -0.348743 0.813733 -0.464991 17076.240234 0.813733 0.000000 0.581238 17076.240234 0.348743 -0.464991 0.813733 17076.240234 -0.464991 0.813733 0.348743 17076.240234 -0.116248 0.348743 -0.929981 17076.240234 0.116248 0.348743 0.929981 17076.240234 -0.464991 -0.813733 -0.348743 17076.240234 -0.348743 -0.116248 -0.929981 17076.240234 -0.348743 0.464991 0.813733 17076.240234 -0.464991 -0.813733 0.348743 17076.240234 0.581238 0.000000 -0.813733 17076.240234 0.116248 -0.929981 0.348743 17076.240234 0.581238 0.000000 0.813733 17076.240234 0.348743 -0.929981 -0.116248 17076.240234 -0.348743 -0.813733 -0.464991 17076.240234 -0.116248 -0.929981 0.348743 17076.240234 -0.116248 -0.929981 -0.348743 17076.240234 0.581238 0.813733 0.000000 17076.240234 0.116248 -0.348743 -0.929981 17076.240234 -0.813733 -0.581238 0.000000 17076.240234 0.116248 -0.348743 0.929981 17076.240234 -0.929981 -0.116248 0.348743 17076.240234 -0.813733 -0.464991 -0.348743 17076.240234 -0.813733 -0.348743 0.464991 17076.240234 -0.348743 -0.464991 -0.813733 17076.240234 -0.813733 -0.464991 0.348743 17076.240234 -0.929981 -0.116248 -0.348743 17076.240234 -0.348743 -0.813733 0.464991 17076.240234 -0.581238 0.000000 0.813733 17076.240234 0.348743 -0.929981 0.116248 17076.240234 0.348743 -0.813733 -0.464991 17076.240234 -0.116248 -0.348743 0.929981 17076.240234 -0.581238 0.000000 -0.813733 17076.240234 0.813733 -0.348743 0.464991 17076.240234 0.116248 -0.929981 -0.348743 17076.240234 0.813733 -0.348743 -0.464991 17076.240234 0.581238 -0.813733 0.000000 17076.240234 -0.348743 -0.929981 0.116248 17076.240234 0.813733 -0.464991 0.348743 17076.240234 -0.116248 -0.348743 -0.929981 17076.240234 0.813733 -0.464991 -0.348743 17076.240234 0.348743 -0.813733 0.464991 17076.240234 0.116248 0.348743 -0.929981 17076.240234 0.813733 -0.581238 0.000000 17076.240234 -0.348743 -0.929981 -0.116248 17076.240234 -0.929981 0.116248 -0.348743 17076.240234 0.813733 0.464991 -0.348743 17076.240234 -0.348743 0.464991 -0.813733 17076.240234 -0.813733 -0.348743 -0.464991 17076.240234 0.464991 -0.813733 -0.348743 17076.240234 0.000000 -0.813733 0.581238 17076.240234 0.348743 0.116248 0.929981 17076.240234 -0.581238 0.813733 0.000000 17076.240234 0.116248 0.929981 -0.348743 17076.240234 0.000000 -0.581238 -0.813733 17076.240234 0.929981 0.348743 0.116248 17076.240234 0.000000 -0.581238 0.813733 17076.240234 0.464991 -0.348743 0.813733 17076.240234 0.464991 -0.813733 0.348743 17076.240234 -0.813733 0.464991 -0.348743 17076.240234 -0.464991 0.348743 -0.813733 17076.240234 0.929981 -0.116248 -0.348743 17076.240234 0.464991 -0.348743 -0.813733 17076.240234 -0.813733 0.464991 0.348743 17076.240234 0.116248 0.929981 0.348743 17076.240234 -0.929981 -0.348743 0.116248 17076.240234 0.929981 -0.116248 0.348743 17076.240234 0.929981 0.348743 -0.116248 17076.240234 0.929981 0.116248 0.348743 17076.240234 -0.348743 0.116248 0.929981 17076.240234 0.348743 0.464991 -0.813733 17076.240234 -0.813733 0.581238 0.000000 17076.240234 
0.348743 0.464991 0.813733 17076.240234 -0.348743 -0.116248 0.929981 17076.240234 -0.348743 0.116248 -0.929981 17076.240234 -0.813733 0.000000 0.581238 17076.240234 0.000000 -0.813733 -0.581238 17076.240234 0.464991 0.348743 -0.813733 17076.240234 0.000000 0.581238 -0.813733 17076.240234 -0.581238 -0.813733 0.000000 17076.240234 0.464991 0.813733 0.348743 17076.240234 0.348743 0.813733 -0.464991 17076.240234 -0.929981 0.348743 -0.116248 17076.240234 0.813733 0.464991 0.348743 17076.240234 -0.929981 0.348743 0.116248 17076.240234 0.348743 -0.116248 -0.929981 17076.240234 -0.813733 0.348743 -0.464991 17076.240234 -0.464991 0.813733 -0.348743 17076.240234 -0.929981 -0.348743 -0.116248 17076.240234 0.464991 0.813733 -0.348743 17076.240234 0.813733 0.581238 0.000000 17076.240234 -0.464991 0.348743 0.813733 17076.240234 0.348743 0.813733 0.464991 17076.240234 -0.464991 -0.348743 0.813733 17076.240234 0.348743 0.929981 -0.116248 17076.240234 0.348743 0.929981 0.116248 17076.240234 0.000000 0.581238 0.813733 17076.240234 -0.464991 -0.348743 -0.813733 17076.240234 -0.116248 0.929981 -0.348743 17076.240234 0.929981 -0.348743 -0.116248 17076.240234 0.929981 -0.348743 0.116248 17076.240234 -0.116248 0.929981 0.348743 17076.240234 0.464991 0.348743 0.813733 17076.240234 -0.813733 0.000000 -0.581238 17076.240234 0.348743 0.116248 -0.929981 17076.240234 -0.813733 0.348743 0.464991 17076.240234 0.348743 -0.116248 0.929981 17076.240234 0.929981 0.116248 -0.348743 17307.001953 -0.115470 -0.808290 0.577350 17307.001953 -0.115470 -0.808290 -0.577350 17307.001953 -0.808290 -0.115470 -0.577350 17307.001953 0.577350 0.808290 -0.115470 17307.001953 0.115470 -0.577350 -0.808290 17307.001953 0.577350 0.808290 0.115470 17307.001953 0.115470 -0.577350 0.808290 17307.001953 -0.808290 -0.115470 0.577350 17307.001953 -0.577350 0.808290 -0.115470 17307.001953 -0.577350 -0.808290 -0.115470 17307.001953 -0.808290 0.115470 0.577350 17307.001953 -0.577350 -0.808290 0.115470 17307.001953 0.577350 0.577350 -0.577350 17307.001953 0.577350 -0.808290 -0.115470 17307.001953 -0.115470 -0.577350 -0.808290 17307.001953 0.577350 -0.808290 0.115470 17307.001953 0.577350 -0.577350 -0.577350 17307.001953 -0.115470 0.577350 -0.808290 17307.001953 -0.115470 0.577350 0.808290 17307.001953 0.577350 -0.577350 0.577350 17307.001953 -0.115470 0.808290 -0.577350 17307.001953 -0.115470 0.808290 0.577350 17307.001953 -0.577350 0.115470 0.808290 17307.001953 0.577350 -0.115470 -0.808290 17307.001953 -0.577350 0.115470 -0.808290 17307.001953 -0.808290 0.115470 -0.577350 17307.001953 0.577350 -0.115470 0.808290 17307.001953 0.577350 0.115470 -0.808290 17307.001953 -0.577350 -0.115470 0.808290 17307.001953 -0.577350 0.577350 0.577350 17307.001953 0.577350 0.115470 0.808290 17307.001953 0.115470 -0.808290 0.577350 17307.001953 -0.808290 0.577350 0.115470 17307.001953 -0.577350 -0.115470 -0.808290 17307.001953 -0.808290 0.577350 -0.115470 17307.001953 -0.115470 -0.577350 0.808290 17307.001953 -0.577350 0.808290 0.115470 17307.001953 -0.577350 0.577350 -0.577350 17307.001953 0.115470 -0.808290 -0.577350 17307.001953 0.577350 0.577350 0.577350 17307.001953 0.808290 -0.577350 -0.115470 17307.001953 0.808290 -0.115470 -0.577350 17307.001953 -0.577350 -0.577350 -0.577350 17307.001953 -0.577350 -0.577350 0.577350 17307.001953 0.115470 0.577350 0.808290 17307.001953 0.808290 0.577350 -0.115470 17307.001953 0.808290 0.577350 0.115470 17307.001953 0.808290 -0.577350 0.115470 17307.001953 0.808290 0.115470 0.577350 17307.001953 0.808290 -0.115470 0.577350 
17307.001953 0.115470 0.577350 -0.808290 17307.001953 0.115470 0.808290 0.577350 17307.001953 0.808290 0.115470 -0.577350 17307.001953 -0.808290 -0.577350 -0.115470 17307.001953 -0.808290 -0.577350 0.115470 17307.001953 0.115470 0.808290 -0.577350 17537.761719 0.688247 -0.688247 -0.229416 17537.761719 -0.229416 -0.688247 0.688247 17537.761719 -0.229416 0.688247 -0.688247 17537.761719 0.229416 0.688247 -0.688247 17537.761719 -0.229416 0.688247 0.688247 17537.761719 -0.688247 -0.229416 0.688247 17537.761719 -0.688247 0.688247 0.229416 17537.761719 0.229416 0.688247 0.688247 17537.761719 -0.688247 -0.229416 -0.688247 17537.761719 0.688247 0.688247 0.229416 17537.761719 0.688247 -0.688247 0.229416 17537.761719 0.688247 0.688247 -0.229416 17537.761719 -0.688247 0.229416 0.688247 17537.761719 -0.229416 -0.688247 -0.688247 17537.761719 -0.688247 0.688247 -0.229416 17537.761719 0.688247 0.229416 -0.688247 17537.761719 -0.688247 0.229416 -0.688247 17537.761719 0.688247 0.229416 0.688247 17537.761719 0.229416 -0.688247 -0.688247 17537.761719 0.688247 -0.229416 -0.688247 17537.761719 0.688247 -0.229416 0.688247 17537.761719 -0.688247 -0.688247 0.229416 17537.761719 -0.688247 -0.688247 -0.229416 17537.761719 0.229416 -0.688247 0.688247 17768.519531 -0.911685 0.227921 -0.341882 17768.519531 -0.455842 0.683763 -0.569803 17768.519531 0.341882 -0.227921 -0.911685 17768.519531 0.569803 -0.455842 -0.683763 17768.519531 0.455842 0.569803 -0.683763 17768.519531 0.341882 -0.911685 0.227921 17768.519531 0.911685 -0.341882 -0.227921 17768.519531 0.569803 -0.455842 0.683763 17768.519531 -0.341882 0.911685 0.227921 17768.519531 0.341882 -0.911685 -0.227921 17768.519531 -0.341882 -0.911685 0.227921 17768.519531 0.569803 0.455842 -0.683763 17768.519531 -0.569803 0.455842 -0.683763 17768.519531 0.569803 -0.683763 0.455842 17768.519531 -0.683763 -0.569803 -0.455842 17768.519531 0.569803 0.455842 0.683763 17768.519531 -0.683763 -0.455842 -0.569803 17768.519531 0.227921 -0.341882 0.911685 17768.519531 0.455842 0.569803 0.683763 17768.519531 0.455842 0.683763 -0.569803 17768.519531 -0.341882 -0.227921 0.911685 17768.519531 0.911685 -0.341882 0.227921 17768.519531 -0.911685 -0.341882 0.227921 17768.519531 -0.683763 0.455842 -0.569803 17768.519531 -0.455842 -0.569803 0.683763 17768.519531 0.341882 -0.227921 0.911685 17768.519531 -0.227921 -0.911685 0.341882 17768.519531 -0.911685 0.341882 -0.227921 17768.519531 -0.911685 0.341882 0.227921 17768.519531 -0.911685 0.227921 0.341882 17768.519531 -0.911685 -0.341882 -0.227921 17768.519531 0.569803 -0.683763 -0.455842 17768.519531 0.455842 0.683763 0.569803 17768.519531 -0.911685 -0.227921 -0.341882 17768.519531 -0.455842 0.683763 0.569803 17768.519531 -0.341882 -0.911685 -0.227921 17768.519531 -0.341882 0.911685 -0.227921 17768.519531 -0.683763 -0.455842 0.569803 17768.519531 -0.455842 -0.683763 -0.569803 17768.519531 -0.227921 -0.911685 -0.341882 17768.519531 -0.569803 0.455842 0.683763 17768.519531 0.227921 -0.341882 -0.911685 17768.519531 -0.227921 0.911685 -0.341882 17768.519531 -0.683763 0.569803 -0.455842 17768.519531 0.683763 0.569803 0.455842 17768.519531 0.683763 -0.455842 0.569803 17768.519531 -0.569803 0.683763 0.455842 17768.519531 0.455842 -0.569803 -0.683763 17768.519531 0.455842 -0.683763 0.569803 17768.519531 -0.683763 0.569803 0.455842 17768.519531 0.911685 0.227921 -0.341882 17768.519531 0.455842 -0.683763 -0.569803 17768.519531 0.911685 0.227921 0.341882 17768.519531 -0.569803 -0.683763 0.455842 17768.519531 0.455842 -0.569803 0.683763 17768.519531 -0.455842 
-0.683763 0.569803 17768.519531 -0.227921 0.341882 0.911685 17768.519531 0.911685 0.341882 0.227921 17768.519531 0.227921 -0.911685 -0.341882 17768.519531 -0.227921 0.341882 -0.911685 17768.519531 0.227921 -0.911685 0.341882 17768.519531 -0.227921 -0.341882 -0.911685 17768.519531 -0.227921 -0.341882 0.911685 17768.519531 -0.455842 -0.569803 -0.683763 17768.519531 0.341882 0.911685 0.227921 17768.519531 0.341882 0.911685 -0.227921 17768.519531 -0.569803 -0.455842 0.683763 17768.519531 0.227921 0.341882 0.911685 17768.519531 0.911685 0.341882 -0.227921 17768.519531 -0.227921 0.911685 0.341882 17768.519531 0.683763 0.455842 -0.569803 17768.519531 -0.569803 -0.455842 -0.683763 17768.519531 0.683763 0.569803 -0.455842 17768.519531 -0.683763 -0.569803 0.455842 17768.519531 0.569803 0.683763 -0.455842 17768.519531 0.911685 -0.227921 -0.341882 17768.519531 -0.341882 0.227921 0.911685 17768.519531 0.569803 0.683763 0.455842 17768.519531 0.227921 0.911685 0.341882 17768.519531 0.341882 0.227921 -0.911685 17768.519531 0.683763 0.455842 0.569803 17768.519531 -0.455842 0.569803 0.683763 17768.519531 -0.341882 -0.227921 -0.911685 17768.519531 -0.569803 -0.683763 -0.455842 17768.519531 0.227921 0.911685 -0.341882 17768.519531 0.683763 -0.455842 -0.569803 17768.519531 -0.683763 0.455842 0.569803 17768.519531 -0.911685 -0.227921 0.341882 17768.519531 0.911685 -0.227921 0.341882 17768.519531 0.341882 0.227921 0.911685 17768.519531 0.227921 0.341882 -0.911685 17768.519531 -0.569803 0.683763 -0.455842 17768.519531 -0.341882 0.227921 -0.911685 17768.519531 0.683763 -0.569803 0.455842 17768.519531 0.683763 -0.569803 -0.455842 17768.519531 -0.455842 0.569803 -0.683763 17999.279297 0.226455 -0.566139 -0.792594 17999.279297 -0.792594 0.226455 -0.566139 17999.279297 -0.566139 0.226455 0.792594 17999.279297 0.226455 -0.566139 0.792594 17999.279297 -0.792594 0.566139 0.226455 17999.279297 0.226455 -0.792594 0.566139 17999.279297 -0.792594 0.566139 -0.226455 17999.279297 0.792594 0.226455 0.566139 17999.279297 0.566139 -0.792594 -0.226455 17999.279297 0.792594 0.566139 0.226455 17999.279297 -0.566139 0.792594 -0.226455 17999.279297 0.566139 -0.792594 0.226455 17999.279297 0.226455 -0.792594 -0.566139 17999.279297 -0.566139 0.792594 0.226455 17999.279297 -0.566139 -0.792594 0.226455 17999.279297 0.792594 0.566139 -0.226455 17999.279297 0.792594 0.226455 -0.566139 17999.279297 0.566139 -0.226455 -0.792594 17999.279297 0.566139 0.226455 -0.792594 17999.279297 -0.226455 -0.792594 -0.566139 17999.279297 0.792594 -0.566139 0.226455 17999.279297 0.566139 0.226455 0.792594 17999.279297 0.792594 -0.566139 -0.226455 17999.279297 -0.566139 -0.226455 0.792594 17999.279297 -0.226455 -0.792594 0.566139 17999.279297 -0.792594 -0.566139 -0.226455 17999.279297 -0.792594 -0.566139 0.226455 17999.279297 0.566139 0.792594 -0.226455 17999.279297 -0.226455 -0.566139 -0.792594 17999.279297 -0.226455 -0.566139 0.792594 17999.279297 0.566139 0.792594 0.226455 17999.279297 0.226455 0.792594 0.566139 17999.279297 0.226455 0.792594 -0.566139 17999.279297 -0.226455 0.792594 0.566139 17999.279297 -0.226455 0.792594 -0.566139 17999.279297 0.226455 0.566139 0.792594 17999.279297 -0.226455 0.566139 0.792594 17999.279297 -0.792594 -0.226455 0.566139 17999.279297 -0.226455 0.566139 -0.792594 17999.279297 -0.566139 -0.226455 -0.792594 17999.279297 -0.792594 -0.226455 -0.566139 17999.279297 0.226455 0.566139 -0.792594 17999.279297 -0.566139 -0.792594 -0.226455 17999.279297 0.792594 -0.226455 -0.566139 17999.279297 0.792594 -0.226455 0.566139 17999.279297 
-0.792594 0.226455 0.566139 17999.279297 0.566139 -0.226455 0.792594 17999.279297 -0.566139 0.226455 -0.792594 18460.800781 0.447214 0.000000 -0.894427 18460.800781 0.000000 0.894427 0.447214 18460.800781 0.894427 -0.447214 0.000000 18460.800781 0.894427 0.000000 -0.447214 18460.800781 -0.894427 0.447214 0.000000 18460.800781 -0.447214 0.894427 0.000000 18460.800781 -0.447214 0.000000 0.894427 18460.800781 -0.894427 0.000000 0.447214 18460.800781 0.000000 -0.447214 0.894427 18460.800781 -0.447214 0.000000 -0.894427 18460.800781 -0.894427 0.000000 -0.447214 18460.800781 0.894427 0.000000 0.447214 18460.800781 0.000000 -0.894427 -0.447214 18460.800781 0.447214 -0.894427 0.000000 18460.800781 -0.894427 -0.447214 0.000000 18460.800781 0.000000 -0.894427 0.447214 18460.800781 0.447214 0.894427 0.000000 18460.800781 -0.447214 -0.894427 0.000000 18460.800781 0.000000 0.894427 -0.447214 18460.800781 0.000000 -0.447214 -0.894427 18460.800781 0.894427 0.447214 0.000000 18460.800781 0.000000 0.447214 -0.894427 18460.800781 0.447214 0.000000 0.894427 18460.800781 0.000000 0.447214 0.894427 18691.560547 0.666667 -0.666667 -0.333333 18691.560547 -0.444444 0.444444 0.777778 18691.560547 0.666667 0.333333 0.666667 18691.560547 -1.000000 0.000000 0.000000 18691.560547 0.111111 -0.444444 0.888889 18691.560547 0.444444 -0.444444 -0.777778 18691.560547 -0.444444 0.111111 -0.888889 18691.560547 0.888889 -0.111111 0.444444 18691.560547 0.111111 0.888889 0.444444 18691.560547 -0.444444 -0.777778 -0.444444 18691.560547 0.444444 -0.444444 0.777778 18691.560547 0.111111 0.444444 0.888889 18691.560547 0.111111 -0.444444 -0.888889 18691.560547 0.888889 -0.111111 -0.444444 18691.560547 0.444444 -0.111111 -0.888889 18691.560547 0.666667 -0.666667 0.333333 18691.560547 -0.111111 0.444444 -0.888889 18691.560547 0.666667 -0.333333 -0.666667 18691.560547 0.333333 0.666667 0.666667 18691.560547 -0.444444 0.444444 -0.777778 18691.560547 0.666667 0.333333 -0.666667 18691.560547 0.000000 0.000000 -1.000000 18691.560547 0.888889 0.444444 0.111111 18691.560547 0.333333 0.666667 -0.666667 18691.560547 -0.888889 -0.111111 -0.444444 18691.560547 -0.777778 -0.444444 0.444444 18691.560547 -0.444444 0.888889 -0.111111 18691.560547 -0.666667 0.333333 0.666667 18691.560547 -0.888889 -0.111111 0.444444 18691.560547 0.444444 -0.888889 -0.111111 18691.560547 -0.444444 0.777778 0.444444 18691.560547 0.888889 0.444444 -0.111111 18691.560547 0.888889 0.111111 -0.444444 18691.560547 0.444444 -0.888889 0.111111 18691.560547 -0.333333 -0.666667 0.666667 18691.560547 1.000000 0.000000 0.000000 18691.560547 -0.333333 -0.666667 -0.666667 18691.560547 -0.333333 0.666667 0.666667 18691.560547 0.666667 -0.333333 0.666667 18691.560547 -0.666667 0.666667 -0.333333 18691.560547 0.444444 -0.777778 0.444444 18691.560547 -0.333333 0.666667 -0.666667 18691.560547 -0.777778 0.444444 -0.444444 18691.560547 -0.666667 -0.666667 0.333333 18691.560547 0.888889 0.111111 0.444444 18691.560547 -0.777778 0.444444 0.444444 18691.560547 0.444444 -0.111111 0.888889 18691.560547 0.111111 0.444444 -0.888889 18691.560547 0.444444 -0.777778 -0.444444 18691.560547 -0.777778 -0.444444 -0.444444 18691.560547 0.444444 0.111111 0.888889 18691.560547 0.111111 0.888889 -0.444444 18691.560547 0.888889 -0.444444 -0.111111 18691.560547 0.666667 0.666667 0.333333 18691.560547 -0.666667 0.333333 -0.666667 18691.560547 -0.888889 0.111111 0.444444 18691.560547 -0.444444 0.111111 0.888889 18691.560547 -0.666667 -0.666667 -0.333333 18691.560547 -0.666667 -0.333333 0.666667 18691.560547 
0.444444 0.777778 -0.444444 18691.560547 0.777778 -0.444444 -0.444444 18691.560547 -0.111111 -0.444444 0.888889 18691.560547 -0.444444 -0.444444 -0.777778 18691.560547 -0.111111 0.444444 0.888889 18691.560547 -0.444444 -0.888889 -0.111111 18691.560547 0.888889 -0.444444 0.111111 18691.560547 -0.444444 0.777778 -0.444444 18691.560547 0.333333 -0.666667 -0.666667 18691.560547 -0.888889 0.111111 -0.444444 18691.560547 0.444444 0.777778 0.444444 18691.560547 -0.444444 -0.111111 -0.888889 18691.560547 0.444444 0.888889 -0.111111 18691.560547 -0.444444 -0.888889 0.111111 18691.560547 0.111111 -0.888889 0.444444 18691.560547 0.444444 0.888889 0.111111 18691.560547 0.777778 -0.444444 0.444444 18691.560547 -0.444444 -0.444444 0.777778 18691.560547 0.777778 0.444444 -0.444444 18691.560547 0.111111 -0.888889 -0.444444 18691.560547 0.333333 -0.666667 0.666667 18691.560547 0.777778 0.444444 0.444444 18691.560547 -0.444444 -0.777778 0.444444 18691.560547 -0.111111 -0.444444 -0.888889 18691.560547 0.000000 -1.000000 0.000000 18691.560547 -0.111111 -0.888889 0.444444 18691.560547 -0.111111 -0.888889 -0.444444 18691.560547 -0.888889 -0.444444 0.111111 18691.560547 -0.444444 -0.111111 0.888889 18691.560547 0.666667 0.666667 -0.333333 18691.560547 0.444444 0.111111 -0.888889 18691.560547 -0.666667 0.666667 0.333333 18691.560547 -0.666667 -0.333333 -0.666667 18691.560547 0.000000 1.000000 0.000000 18691.560547 0.444444 0.444444 -0.777778 18691.560547 -0.888889 0.444444 0.111111 18691.560547 0.000000 0.000000 1.000000 18691.560547 -0.888889 -0.444444 -0.111111 18691.560547 -0.111111 0.888889 0.444444 18691.560547 -0.888889 0.444444 -0.111111 18691.560547 -0.444444 0.888889 0.111111 18691.560547 0.444444 0.444444 0.777778 18691.560547 -0.111111 0.888889 -0.444444 18922.320312 0.993884 0.110432 0.000000 18922.320312 -0.331295 0.331295 -0.883452 18922.320312 -0.331295 0.331295 0.883452 18922.320312 -0.331295 -0.331295 -0.883452 18922.320312 -0.331295 0.883452 0.331295 18922.320312 0.331295 0.883452 -0.331295 18922.320312 -0.110432 -0.993884 0.000000 18922.320312 -0.993884 0.000000 0.110432 18922.320312 -0.993884 -0.110432 0.000000 18922.320312 0.993884 0.000000 -0.110432 18922.320312 0.993884 -0.110432 0.000000 18922.320312 0.883452 -0.331295 -0.331295 18922.320312 0.331295 0.883452 0.331295 18922.320312 -0.331295 -0.331295 0.883452 18922.320312 0.000000 -0.110432 -0.993884 18922.320312 -0.110432 0.000000 -0.993884 18922.320312 -0.110432 0.993884 0.000000 18922.320312 -0.331295 -0.883452 -0.331295 18922.320312 -0.331295 -0.883452 0.331295 18922.320312 -0.993884 0.110432 0.000000 18922.320312 0.883452 0.331295 -0.331295 18922.320312 0.000000 -0.993884 -0.110432 18922.320312 0.000000 -0.110432 0.993884 18922.320312 -0.331295 0.883452 -0.331295 18922.320312 0.883452 0.331295 0.331295 18922.320312 0.000000 -0.993884 0.110432 18922.320312 0.883452 -0.331295 0.331295 18922.320312 -0.110432 0.000000 0.993884 18922.320312 -0.993884 0.000000 -0.110432 18922.320312 0.993884 0.000000 0.110432 18922.320312 0.000000 0.993884 0.110432 18922.320312 -0.883452 0.331295 0.331295 18922.320312 0.331295 0.331295 -0.883452 18922.320312 -0.883452 0.331295 -0.331295 18922.320312 0.331295 -0.331295 0.883452 18922.320312 0.000000 0.993884 -0.110432 18922.320312 0.110432 0.993884 0.000000 18922.320312 0.331295 -0.331295 -0.883452 18922.320312 0.331295 -0.883452 0.331295 18922.320312 -0.883452 -0.331295 -0.331295 18922.320312 0.000000 0.110432 0.993884 18922.320312 0.331295 -0.883452 -0.331295 18922.320312 0.110432 -0.993884 0.000000 
18922.320312 0.000000 0.110432 -0.993884 18922.320312 -0.883452 -0.331295 0.331295 18922.320312 0.331295 0.331295 0.883452 18922.320312 0.110432 0.000000 0.993884 18922.320312 0.110432 0.000000 -0.993884 19153.080078 0.768350 0.329293 0.548821 19153.080078 -0.329293 -0.548821 0.768350 19153.080078 -0.548821 -0.768350 -0.329293 19153.080078 0.109764 0.109764 0.987878 19153.080078 -0.109764 0.109764 -0.987878 19153.080078 -0.987878 0.109764 0.109764 19153.080078 -0.987878 0.109764 -0.109764 19153.080078 0.768350 0.548821 0.329293 19153.080078 0.987878 0.109764 -0.109764 19153.080078 0.987878 0.109764 0.109764 19153.080078 -0.768350 0.548821 0.329293 19153.080078 -0.329293 -0.548821 -0.768350 19153.080078 -0.768350 -0.329293 0.548821 19153.080078 -0.548821 -0.329293 0.768350 19153.080078 -0.109764 -0.109764 0.987878 19153.080078 -0.768350 0.548821 -0.329293 19153.080078 0.768350 0.548821 -0.329293 19153.080078 0.329293 -0.548821 0.768350 19153.080078 -0.329293 0.768350 -0.548821 19153.080078 -0.329293 0.548821 -0.768350 19153.080078 -0.548821 -0.768350 0.329293 19153.080078 0.329293 -0.548821 -0.768350 19153.080078 -0.768350 -0.548821 0.329293 19153.080078 0.109764 0.109764 -0.987878 19153.080078 -0.768350 -0.548821 -0.329293 19153.080078 0.548821 0.768350 0.329293 19153.080078 0.548821 -0.329293 -0.768350 19153.080078 -0.109764 0.987878 0.109764 19153.080078 -0.109764 0.987878 -0.109764 19153.080078 -0.548821 0.329293 0.768350 19153.080078 0.768350 -0.548821 -0.329293 19153.080078 -0.109764 -0.987878 -0.109764 19153.080078 0.768350 0.329293 -0.548821 19153.080078 0.768350 -0.548821 0.329293 19153.080078 0.768350 -0.329293 -0.548821 19153.080078 -0.768350 -0.329293 -0.548821 19153.080078 0.548821 0.768350 -0.329293 19153.080078 0.768350 -0.329293 0.548821 19153.080078 -0.987878 -0.109764 -0.109764 19153.080078 -0.987878 -0.109764 0.109764 19153.080078 0.109764 -0.109764 0.987878 19153.080078 -0.548821 -0.329293 -0.768350 19153.080078 0.548821 -0.329293 0.768350 19153.080078 -0.109764 -0.987878 0.109764 19153.080078 -0.329293 0.768350 0.548821 19153.080078 -0.329293 0.548821 0.768350 19153.080078 -0.109764 0.109764 0.987878 19153.080078 -0.329293 -0.768350 -0.548821 19153.080078 -0.548821 0.329293 -0.768350 19153.080078 -0.548821 0.768350 0.329293 19153.080078 -0.768350 0.329293 0.548821 19153.080078 -0.109764 -0.109764 -0.987878 19153.080078 0.109764 0.987878 -0.109764 19153.080078 0.987878 -0.109764 -0.109764 19153.080078 -0.768350 0.329293 -0.548821 19153.080078 0.329293 -0.768350 -0.548821 19153.080078 0.329293 0.548821 0.768350 19153.080078 0.329293 0.768350 0.548821 19153.080078 0.109764 0.987878 0.109764 19153.080078 0.329293 0.548821 -0.768350 19153.080078 0.548821 -0.768350 0.329293 19153.080078 0.987878 -0.109764 0.109764 19153.080078 0.329293 -0.768350 0.548821 19153.080078 0.548821 0.329293 -0.768350 19153.080078 0.548821 0.329293 0.768350 19153.080078 0.109764 -0.109764 -0.987878 19153.080078 0.548821 -0.768350 -0.329293 19153.080078 -0.329293 -0.768350 0.548821 19153.080078 -0.548821 0.768350 -0.329293 19153.080078 0.109764 -0.987878 0.109764 19153.080078 0.109764 -0.987878 -0.109764 19153.080078 0.329293 0.768350 -0.548821 19383.839844 0.218218 0.436436 -0.872872 19383.839844 -0.218218 -0.872872 0.436436 19383.839844 -0.872872 -0.436436 0.218218 19383.839844 0.436436 -0.872872 -0.218218 19383.839844 0.872872 0.218218 0.436436 19383.839844 0.436436 -0.218218 -0.872872 19383.839844 -0.872872 -0.436436 -0.218218 19383.839844 -0.872872 0.436436 0.218218 19383.839844 -0.872872 
0.218218 0.436436 19383.839844 -0.872872 -0.218218 -0.436436 19383.839844 -0.872872 -0.218218 0.436436 19383.839844 0.436436 -0.218218 0.872872 19383.839844 -0.436436 -0.872872 -0.218218 19383.839844 -0.218218 0.872872 -0.436436 19383.839844 -0.218218 -0.872872 -0.436436 19383.839844 -0.218218 0.872872 0.436436 19383.839844 -0.436436 0.218218 -0.872872 19383.839844 0.872872 0.436436 0.218218 19383.839844 0.218218 0.872872 0.436436 19383.839844 -0.872872 0.218218 -0.436436 19383.839844 0.218218 -0.872872 -0.436436 19383.839844 -0.436436 0.872872 -0.218218 19383.839844 -0.436436 -0.218218 0.872872 19383.839844 0.436436 -0.872872 0.218218 19383.839844 -0.218218 -0.436436 0.872872 19383.839844 -0.436436 0.872872 0.218218 19383.839844 0.218218 -0.436436 0.872872 19383.839844 0.872872 -0.218218 0.436436 19383.839844 0.872872 -0.218218 -0.436436 19383.839844 0.218218 0.436436 0.872872 19383.839844 -0.218218 -0.436436 -0.872872 19383.839844 0.436436 0.872872 0.218218 19383.839844 -0.872872 0.436436 -0.218218 19383.839844 0.218218 -0.436436 -0.872872 19383.839844 0.436436 0.872872 -0.218218 19383.839844 0.872872 -0.436436 0.218218 19383.839844 0.436436 0.218218 0.872872 19383.839844 0.436436 0.218218 -0.872872 19383.839844 0.218218 -0.872872 0.436436 19383.839844 0.872872 -0.436436 -0.218218 19383.839844 -0.436436 0.218218 0.872872 19383.839844 0.872872 0.436436 -0.218218 19383.839844 -0.218218 0.436436 -0.872872 19383.839844 0.872872 0.218218 -0.436436 19383.839844 -0.436436 -0.218218 -0.872872 19383.839844 -0.218218 0.436436 0.872872 19383.839844 -0.436436 -0.872872 0.218218 19383.839844 0.218218 0.872872 -0.436436 19614.599609 0.000000 -0.976187 0.216930 19614.599609 0.976187 -0.216930 0.000000 19614.599609 0.000000 0.976187 -0.216930 19614.599609 0.216930 0.000000 0.976187 19614.599609 0.759257 -0.650791 0.000000 19614.599609 0.650791 -0.759257 0.000000 19614.599609 0.976187 0.000000 -0.216930 19614.599609 -0.976187 -0.216930 0.000000 19614.599609 -0.650791 -0.759257 0.000000 19614.599609 0.000000 -0.976187 -0.216930 19614.599609 -0.216930 -0.976187 0.000000 19614.599609 0.976187 0.000000 0.216930 19614.599609 0.000000 -0.759257 -0.650791 19614.599609 0.000000 0.216930 -0.976187 19614.599609 -0.759257 -0.650791 0.000000 19614.599609 0.650791 0.759257 0.000000 19614.599609 0.000000 -0.759257 0.650791 19614.599609 -0.759257 0.000000 -0.650791 19614.599609 -0.976187 0.000000 0.216930 19614.599609 0.000000 -0.650791 -0.759257 19614.599609 -0.650791 0.000000 0.759257 19614.599609 0.000000 0.216930 0.976187 19614.599609 0.650791 0.000000 -0.759257 19614.599609 -0.759257 0.650791 0.000000 19614.599609 -0.216930 0.000000 0.976187 19614.599609 0.000000 0.650791 0.759257 19614.599609 0.759257 0.650791 0.000000 19614.599609 0.000000 0.759257 0.650791 19614.599609 0.650791 0.000000 0.759257 19614.599609 -0.650791 0.759257 0.000000 19614.599609 0.000000 0.976187 0.216930 19614.599609 0.216930 -0.976187 0.000000 19614.599609 -0.976187 0.216930 0.000000 19614.599609 -0.650791 0.000000 -0.759257 19614.599609 0.759257 0.000000 0.650791 19614.599609 -0.216930 0.976187 0.000000 19614.599609 0.976187 0.216930 0.000000 19614.599609 -0.216930 0.000000 -0.976187 19614.599609 0.000000 -0.216930 -0.976187 19614.599609 0.000000 -0.216930 0.976187 19614.599609 0.759257 0.000000 -0.650791 19614.599609 0.000000 -0.650791 0.759257 19614.599609 -0.759257 0.000000 0.650791 19614.599609 0.216930 0.000000 -0.976187 19614.599609 -0.976187 0.000000 -0.216930 19614.599609 0.000000 0.759257 -0.650791 19614.599609 0.000000 0.650791 
[data payload omitted: tail of a b-table data file (named earlier in this archive listing); each record is a b-value followed by the x, y, z components of a unit gradient direction, with shells up to b = 23076]
dipy-1.11.0/dipy/data/files/dsi515_b_table.txt
[data payload omitted: the dsi515 b-table, apparently 515 records per the file name; each record is a b-value (0 up to 11538.5) followed by a unit gradient direction (x, y, z)]
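Note on the omitted tables above: the visible ASCII records suggest a flat stream of whitespace-separated numbers, four per record (b-value, then x, y, z of the direction). The following is a minimal sketch of how such a table could be parsed with NumPy; the layout is an assumption inferred from the excerpt, the file path is illustrative, and this is not an official DIPY loader.

import numpy as np

# Assumed layout: a flat whitespace-separated stream, 4 numbers per record:
# b-value, then the x, y, z components of the gradient direction.
vals = np.fromfile("dsi515_b_table.txt", sep=" ")  # 1-D float array
table = vals.reshape(-1, 4)                        # one row per record
bvals = table[:, 0]                                # b-values
bvecs = table[:, 1:]                               # direction vectors
print(bvals.shape, bvecs.shape)                    # expected: (515,) (515, 3)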
dipy-1.11.0/dipy/data/files/eg_3voxels.pkl
[binary payload omitted: a Python pickle of a dict whose key 'gs' holds a list of 3-component gradient-direction arrays and whose key 'bs' holds the matching list of b-values]
dipy-1.11.0/dipy/data/files/evenly_distributed_sphere_362.npz
[binary payload omitted: NumPy .npz archive containing 'vertices.npy' and 'faces.npy' arrays for a 362-vertex sphere]
dipy-1.11.0/dipy/data/files/evenly_distributed_sphere_724.npz
[binary payload omitted: NumPy .npz archive containing a 'vertices.npy' array for a 724-vertex sphere; the archive is truncated at the end of this excerpt]
j?`<0&'HտHvK?[#WؿDtڿ@0&q^7?I$ӿsS<N͟D`l?W 8xXT?ͯo.þ;?n1_W7?.$>( 꿡[ʝ?8xY-?Su\o俦nU?lc:y3 Le+9rq~?kY?FME󄺿/)7<1lC?T豚U jcxwѽ?ҟ!(ofg}kdu)i?8Is֣n)7̓``wwh\?ZOI|s?fyr"/]p,z}WA,?psS)?v;4jc?{T?_]_(\?@U.οur?lLJ ?ON?N';пtJ?S?H'?أtpg?$޷?VͿ]y$ ?rS̿g;>)7??2lLJ ?ڿ}bUB? ?TA]Wv?*!zΰ,ۿAME??^) ??rJԿjEOt|a+'?<|?sd!ӿFVa? &տ&FL?DZ?GθJ4?*1(׿ӟ?]45?1 &?aL}k?emܿ ı?T?F.?hݿ|?g;>?!ϿvsE? Rn*'?Uͧ˿&ˡ_ſ?/d?/Zlǿ$˰W?_ܿ+MP[?Ws?"5?-=SI޿@)7?xJ?.?̰Ws?KěۿT67!?Z]ҟ?K?6˔ڿ 1?O2ҿ]S?tCr?*GF'-տc[??Y'G?6RAME?5IZoR?o.?M8 ?h3:ῪhP ?қ._w?7j?iP|տ܊?I?64*?386 {?}~G?0a7?^َ?fJzW!B?Q 8x?m?<߷FVa?7>_˿\z?lduJ?=)ҿ{PhѿT4?Ms6>?l/$Ϳҟ?o`ہ.?jc?}?W忡0&q?@i?0g?/^?#1+@g?o.?Km3?aΒJo?eۿ]?2?/CI5k,M^;߿fuJ?@Iy?3d@Ee?!Hh?i濂z?v|R?e"?Y=^{?g;?re8DzՂHbh?'$?^Rۿ!})$?h=?yJfؿ1?,wDp6"#u?C-`L?ې(?>䫚َA?./?t?^ҟ?(!:^?2?{1? w$LyY]?v Կx?Io?i˿ ؿ8xY?*r?n,ÿ 妢yB?N꽏n翪Hl?e+?V &,?EYY4<>)7??p?2?cSxۿ>7<??9.0_߿-`L}? @b?\"ƕ {?g?K ;,c5Ey?JhP?PE ?hAe+9?V]>?d[#?X }:V5 ? D7?SDB޿kdu?MiZ? 8x?_?g!4 濚?<=y?cs?'q5?/abs???QU?erUAME?dȼҿ^?v|Rn?Z 1ӿpj&׿]#˰W?=/?Õ4ͿَA?!:*0?w|Rn*?vM?Z)7?]Q ,N?RUG?e+?SY\'(??T]?ER㿭.^?3z13tf}??/h3\z?k33?*?YkKK?˜?v1Rts?NEZ?Q?PK!H?pԙ[ڿM͙?jP ?7Zɿb2[޿Jv|?v?re?TN*뿳%+o?T4O?G?p뿟P 8?-+٫?Mv (?-9!?iF&ǛhP ?9ӭ?Ġh;⿍.^?~E|俬-St?I?k&Ը?^َ?De?AN?ݷFVa?<Vy ?T4?t̠?Yjc?ϿȦZ0?/^?.mؿ%ԿGo?^#?i}[71ԿeuJ?e'GB2O9?v|R?G~p ?{!뿛'$?N '? 74?1?Kgr返??َA?1?@a2?vebj࿮OgB?Io?$BJYN!妢yB?掗N?R6掿9>)7?zv I쿿?Xa?S~?B<q5?(#1,m?Io?%(V¿yB-`?=#o/S?9ń4߿2?Y2 ?T4O?E?˫WLWe?:xl?v~4?[#?%fUDh;?-`L}?.}=?qGjIhP? 0Sۣ׿;4`?d[#?r8Gҿ0?ܿ}kdu?E[Z?mʿ?oTFrF%I??@Q"U?Fv|Rn?L|?J5m?َA?bnzi::?)7?n,?IPO???clUI?9!H?8p?4U4O?Nb-?d7c?o.^??`9-n4 5? 1?@9?\쿥?g;>?dRnǿPV??t*V0޿瀱uп!H?WsG?pIqдٿJv|?ؗU&L?T4O?6j?#!+9!?sI?xSm?GVa?xz36PN?bl?xw?l>I}Rn*?R^R߿mB?al?t G׿^ؿAME? ??Oҿ+'??RNT&e?FZ?/c?:c*bL}k?{y?O?xn- .l??@Ӗ?շܩ}翰Ws?[Q㿙"ef?˰Ws?%(zBV 1?*2?!jY?c[?߅{ib?o.?Bn3?X{T7j?۞RCBH%h>{?Sn*'?w8m9wnɿmLJ ?O!~?#߿ 妢y?V7e꿚PR?yB-`L?B?S뿾ҟ?$8#j?Nb"3?+9?}yPq?Z?)?g޷FV?ۿI?(7j?Njп e {ZErS]Ö鿹8 ?iP ?1Э?zj0s&뿟P 8??>e?$?hA? 1?)xu?kPCg;>)?z׿uf?wJ?tFտ3Wlܿ˰Ws?A?|^п}kdu?)[~}k)7؆?0&q?ecB?E)P}kd?|D?el%? ? N鿜z?FVa?gms?MgXT?4AI忲z?$˰W?L;hbfڎ0D\]ҟ?Tqi?8r?AME?EN:.ldP??H5Tj?*w0sC?U.ſ$*?3&q5?rVOпg+9?68w?F Yݿ ?j`?<0&?'H?HvK[#?W?Dt?@0&q?^7쿶I$?sS<?N͟D?`lW 8x?XT߿ͯ?o.þ?;˿n1_W7.$?>( ?[ʝ8xY?-Su\??o?nUlc?:y3? L?e+9?rq~濬kYFME?/?)7?<1lCT豚U ?jc?xw?ѽ○ҟ?!(?ofg?}kdu?)i8Is֣?n)7̓?``w?wh\ZOI|?sfyr"/?]p?,z}?WA,psS`S>>>)]jH  .6C1$1FhFhF1F$! !eCre3l~\q\OqL?a?Ta*?LT?2?*2Tvavva   5W5J55 5J5WBWB W B-B  ~q~g0ERggR00R00E0-dBOd-ttg+wbwMb+M+0E0R00E# . .e.PePeezw99$1$F$F1j3Ur]r;];&]r]b@+@U@wUw@bvVQ<fQ1S19NAV A  55 ==tR=Rq~y~yv~q~v~~/fQ/Q\q&H&&&&33&H +3@+3+8#bM@M+@+M8MZ8MboZMo|||o|Zohuu>S`>`ShhP.CePC]PrPerXeC6XCX6KOdq:\G\:Ott[F[[F$VAVAcccc  C. e.C. U33_Rt_tl_*2*ivTiTG\iGi\~nl _(_J_J__=(==_(=R=R_=#EZ#8#ZE#OqdOqd  +@ + @MbbZM8Z8|ZEgEg#8M88#m6XmXmmeC..C C! !CXCzXCezeezzXXrzrwrzmwrormjorjjH3H&H3H]&Hj]HwwUjU  ^I^^^'<'^I^<^ffS>u>S>VxxxxcxAcxVAkVkIkk4I4kIk4VV444 ,AA,c,NcN,W BW5 5WlWl((=K>`u`YD{DY"fQDf{{vn{v{{    -B  B - \~^kI7*LdBdBd-dd.&]HH]]! !6.  +#+ g|gtRgtgREZgE|gZ;&.P;.&;H;]H;P]XzezmzmmzXmXKmK`m`erzezydWyllyWy%G2%:G%2dBWBdOr[}}[Nc$NccPeP.eP.@U@Uiv~i~vWW - qdqyydM8oM8oMoog|EgRg0R0gEZZ|gg||ZEZ#EZ8#Zo8Z|oZzzjj]]4V4I^)> ,, WyyyyddyByWBJ5lJl5JJ({f{fYY{Y{Ynx?\~~\\::\%T?2T?T\G%/}}pp}[}phF[}h[h}uh'<'I''I^<'^ D"?vTv?a?*a**?Y7"7Yns{nG~\   F9[<^IYfDYLnY7L7YD*"7"*  "  "  ;]&;&;P;];;P! 
& &.!.%]rr]jj,AcN,c,NN99[$[99N9$9pNp[ppp9[pN9bb@bMbb@  {Y{ff{{{n{Yn##88  zumuus^s<^<sQsQss)6mK6K6K)Km_J=_t(_=J_(__tnnaa?a:OdO-Od-O-O:iT2Gi2Tixxk^kk2xVkkVIV4IShu>S`Su`hSFS1F1S>  /Df/Q/fD/ avaL7n*LaL*7LLnaLu}ksnkljlknmkjGi~~i2G2i2T2iGO:q:\qG%%%:%G\:%\)1>)!!)66)K)>K4, Np[9N[N9,9$,1$F$9F<Q^fQD'44'I'<I''/7D/"7Q/D/Q<"//''/< %-::-O-BO%-%- JlWJ_l,4AV4V+ ++M+ @+bMb+@'<'<4''I4I`K>`u)`>K`)u``7LYaLa7L7Y%:%:G\iG\2%G%G27"7 " /<<Q/Quzuzpu}kpsupmpkm2??T2TO-:O-:)1$$1)AN,4A,VA4pcNcpcxcVxcAVAcN $ ,$,0#  -5B5- B5W5JW} , A A ,   /  *L*?*aLa*?**iivsQs^s^ssfsQf  ##     R=0_=RJ=_ D//DffDDY" "D "7YD"YivvTiT((5 ( (0=(05(J(=Jxp}pxs"7"7xsx{s{}x}xPKx|@[V0D0D vertices.npyPKx|@#B("(" ZDfaces.npyPKqfdipy-1.11.0/dipy/data/files/fib0.pkl.gz000066400000000000000000002734111476546756600175300ustar00rootroot00000000000000d#Mtest0.pkl[i[_2tw5()&! bDD QAڻGA|{{?HwjصޕӗF,N}?F8L>UkQƫMa)k|<97֡n6EmdgǬU:J}<ӸWJrZ.^J(R-(uqjJIϊ"%;gVJmXcLkWxR?~!CpD{ϵ}e5ܰ=wj76 }v}}m~7=^< H "~.=.YcnϚgV߿1c/Fy570?qk_yls6!Μ;zCt%]4{o;삺W?Dn5on &<Yz9ԏs]y,aoε} 3o%]]Vs ; A5"ǡ׽Ơ~nZnӭx=\veÞŞ_dV-@21XvLV?ߴ썂rӻ-y~i㮋+m߳~<֓R+=(纥˥S!JQo[PEZ r͢t>{._qB'V,,@P?1za6؎mJ#+kv'VsXTvBٵu5{Jb_vSE\mxCMq= @ŊjafKOYQ_q%;(qp?jHdZς٦u}2`w&vQM[V6kvp11m `#f,.NxbhoPer>YNXtK%cYBEΓX.ݺ! [3eTa}X= j~ zXW|xO1|1y.,Bߓ꥝Qϙ9TPE `~mp!9 VȍSY%=o!8KM`yK5+'\29: A;_x^M@>㄃8Qgݪ^>:l N$.+;5 . P}zC74}!4H̻ g$"Zv C@g:K>{'|;Pl!]8%,6`"PNI:FΪ?#">Z{+#>I&.H /Bm] jggˊ=]D5J`M]0bUrGLGtp3"먦@&ߡ"؝ H߹:7țGsM=c8qa~d)%F. 0s"rPձïbN1"')"Q՛0\,$;g.Dcl]$?j FɿqU!ejcsF ~jdq.ڵ\aQݞPsbb-oP&r \QT' S0R@йFNX1GГGeܯ!/>d O" ۀSq9(rW4˖?WI\tEZj ''̡U\omw9" K.ZtN,yגc୏/ ŵ`XKCۂd`ww-FQ1[%Ae Ts>ň4Tы N)unJL4NVd`B8ƫP!V Yc ~E.F\yP%ODj E@)l+QOyp*rG`P؆(Ћ/q'T}S+R=+fB-#\hBCM`b=M(ڒqjDrtfn*1F ̟SBjcR~XEW;UC$RK(o?ÃLa5p % 2Yʹ(K-w2$kvjcrU a/1eBx-t|a&R"Za!-`Z^ɼ+%-PC῀wAeH(2EyjeNJx֥?Y#.O*LwY>tAs١]91Cx"FE@<'M'v#耒pM sZI t>D0p^s  %7N"T[pFy~!ƼvhI`r{ȡ:ޠ0;֨ 28V=p"k-"pe!N:rkPͳ#K2ꆂ0SȢv~I|*mI9d:Hk/׮>ҷH.p!ꀃjЕg&A~2Ԑ 3p{kԆkuMDAwPȣyn흊蹫\6WII3TιDx\#eȘN%~x|}4Q# "R;_^KHj(t6 ޝvXHeś NIW =Y1J+omiqb̊]90xt- fٰrŐHFbaqYThɗʉgN2洂'kJ,nrJmPTtUW34E2R'?0 MyѥSpH$*vuڌGhij.YҘf(I]Zr-5t5nF/qi=qvG495m eg4Y[ʇbR [@-dz4z8"wڧ9C 1NW"GaWA̘֩KRߋ(xOBWtMcnyDB(&|!E uTeMwΣz2A~K5 D*\g1 0tZmJA_֗HyMar"FI'6LNPD5zZNh P$]DjWzӅqPo< HpȐ&UT@R89?CjmYWqs0"(0`L~Uխs}:TWWg_s-S ~hq-J NA:nJAOՊg Xda,ؗr֌x97#z xr Jwf^3ƃ# Y2LmMuoiŦ"0CǗ_XTMk<ꉼY\j_ʬ{ϝoE3l8N<}i?"Y j$h."8o6\~s+~͜ 3eLOW cze/~;S0ǒ}n#q_H$—0Or0!A0V9SmEx-3QkDfo"G?jjb&X'/HުBzNPZE4{}ծ,Auu4OC#,;,QGu`%j,i>_NzhMMb BA) 8``NNw9[XC5I~6e?{.-۶G0sI?Mچ?0s$~+H6OX)M"؆c\,!OEj9JI|Պ|(pe2p;FųOIL;g^~_e_o1Ĵ -LA1E:E!( ce_?n濄[l/s-NVZ>QWLm#;R&RpsV.%m63AR訃 :+$*7{:GZK;;0)ŀ>ӺA_*b>Ȥ,K?Zy5D~.Wԃ*VD"n0śxUN2nqO%r^McՔ(WPWOy @٣ XA~%2t&#FAGK=Ѥ{Mn脴'GB ^Y}ȱ?FyqUۜn{9m:߸]iO?AQ9dٓ&GMU@;{z  yM!V[ ы>#"6Ř,2|kA TTN{l4F*DbΪ&NE:9hz4ht^ bP&vFejzrVh?JF~DŽ:ffbq}al`(: #DV] ✼}5>Jbw `_YDUU8.=]ei'V(|?uitCϐ[L1cǍf@qbWǩX!8u ` re(KQ>ˬŌ~Xfn 9«*8dڅMa 1VL! $>%Yv@*kVvmn== r]}c25U1;t7_7zy Jzc׼ 6OZ!g&lzY/mKg0ʼ-q+sV|H4 iTsLR{_1r]jguGQ&>\BJ'"A.0,pH) \+mR Yx~V_cF^eXe6.>XōΦa{K>zPڟwӓrMHV}o']҉k2zZ0[ޏX.>/X|+_YvQtVTJ%A3l`PxT/aXe 2`4ϛ7'/qs/{ ._WgXɑxÄ)*FޏRTW.a|ru~H2D\\ ?g ۑ=(R/]<{Q$C?zgԊ)Lt"P˂w`#eߔ\|B*lB+n<v^z珥ҋ_! H'̸ #~;!f.&=6wV,Zdfۦ. 
peW7$'8] #ZYӫ-_5Ir//IF4-H/Ui_mg7U`vu\>la̒%[aV;r W@%P(3*F)M0ue & %dMBkIr~]IL7uu^o {ٴMo?*Fϻ׫: DaRt_K[>ɫ<]{P#bw66ԥONⱴOtMߥ/?xA1{l8# &ʙ3mcAG DfgQ6ʦ3Pv?I0k;bqN-[942̪`G [PV>ɴc[S!tDŽ;ju@ߺ14nn*/1L# CzW=,"ѧT;Lq6*ZSaʀqEqNI!}i} 7[_tfd\2$,VC~Ԏobh"/,ikf>)|v<krJ;#T@쟭!&L_ @ ӂbC:yϿ klҬTO9TQ,lMj?,gA:gѨ}SAZ ;Woux|F!ϋ5z7 ,ɍz5D12!<54b4y%8+5_Yvb9aAѯa8L wvuU^>{_ؚVg{uY@ϓH!Oo5WNGL"F谄su4`ՆM9E؅TqHsU00O|@쬒s]3lsv:ėV#[ϏX3[m-[ jiPaAUׁ@5F,|<.Ȗ+DÛv}0jdV{׼@-V$<[؃gUDcM2ӽI@I{ :k1Ź?n_D&2C\lgZAe/Y%tTڊDO6|Vx ./^ P*pY M2g õp놟oGRk8y+M;Iq@3.ǯ4iO/adufQa}/2Fv ;)b۷SDQE|eKNRrWyR8sh7DzϗZgȸXѴqQ}~ylD\v#w2`T,$Pэƈ^ɳ:9A$tu۪#1-5iFނS늝d'NoHdf?"ͺXL,9?XI 6˒Qd{WPhK)ͱ9}e%eAh>.kӤ0>|+ sܼDNoVZ(ݰ-a!=du},Y>.. '#([؜-{25%AlUoQ(*9ÚVJI& .aE<La$k-MĐ"EJ}Ȳ밷7+)k4Ց01|ʒ ǽt,Qǘ& n;]8G(iÚ A. :A|x!L%{rK bIe}GɾrD#ǟo)Ԅ{5>jCWGo؎UhD^*JAS%=!oX\DXfE 7<H.`&-&3~Y/fR=ح$aIY6)j6OD^=xxF=4_&8iDnM Zd|V !%PVtEj +x2F : r tdQru)dK+|࣋LF9QVbjhQֵ?%4v)kl3m2rt|ny7IQ}7Ć8DRm%u{NXaዃv׋O7'od ((2Ta[-'ilb.DQz{p,WeOXy F {ڽ~pnqŶ匴Zb HD9Z+8B*dctI6TBU(is4/+B?ǓyϞDclfD aNPTe:' x FɓT݇ViAّ"vhuN~kĝz&픲/"$$è߄(ئk1$# MI *֝lV҆ V/5dpKM!$\:*ج%/q6|* ŋKfeЂJ9v7HA]wvyIۓ/8Pm,ER#|x>E $eU62o5Q}J˨h7I~E؆bD^ ht;rmHZBzY3$;L1h⫛rSlkM#4YȊޤR5lYyL R=ʘhr[<کQU/0.J>\jer$]?OYxJTM2OåCƅXKz69ǫ8'r c ӝK5Np/6[r*<.#Dz`Լulҿ{v'mZI&]V(|̪ݝ=+VwU_6.ʚ¢N# ؂W.XpJHOv68)7Aߛʅ†rxW[U bu9!笒T5WM1Hy 2Pd)ѳ&dt_v'JU4pfqW՗?ODn<6-sŬDEwH*&~v[yJ|b"-]un?'Jw6ݬLeAwv<&ÔZH `1ĵ['{g۬yI*F"nؙ37oGSU XrGgF`'$Jau&;*ʭ#K\C:eTE6ol^V#Á@5WǚeȰ/^jZYdeF H*u(}3Sl.yB>|oSoup)5\kPDjZWQ]EѪ:G 1ss9G37w8eJOS>VGR|n^\/DZ~ۦ_B9O_eŢ,t!0[l=u5`(=z SE`K9r5:1)ݴ3 i;O#x3ށ]@Op2g02dgui `4>E@H|aJxm t.Fۏb*æn6׸K*!ZIG>qj,t1NЀOG&("ŠA Dೱyd^]QMAH\_(JM2#GOz^|*TCsOl=x8:_Xy eX⼱nnˎWwmd#Y"mǂ19xs1&8%|h$ќj\x|morh.EvDq Eoɮp&toy(꫹o[cdŐ,JX@Q*l~U>XV͂+'$#971|g2sqٔp1\iS"oNl~j=NMbUC1g\$ENZ4+qeA' }R%+˿Úl |gheCj֮+syldHnė8Kc^-7pci˰ElmWÌOi Lq3+F Ks/ĵ9Wόl}*_*i9&^6ƅpTO}HkB|ɎʚS( QEIoFY=8'F9#Ay#g0O.h"fT_bbD]X%G1e…x?x\O/:*ΧA3טTR Zy/ o/wO0&ja8\=1ȄpjFΊz,n/A+P*¬'g`k|ѤRyypfr m5^h Kg%mM1WTw@ "hom}Z \ QI.=fbPAсx֟lUoD4L$g2\/sQp  qf`|=)YqfAæD <!*85#)|&Yy t|!c;g'?nY~e5hv '<#(3ŖBκy_'_8 <|&g 96dAeJH9ƎH=-Sf3dUѐ3ɽPcX{US E)D/f=%TYIͽKz2aڐ$͒=~\<3_`VPhpHMer |,|`_mpdˢ왅4r)OFSܿX[ tꦽ/+~@Wvh㯒myu l:)^u촛6V0JMᷱ,dNfgRJh4B3j Y*`bJ0D9ͥsн;ߩ\"IY8MwG]Of Q"(:[HJ:YjU>\GHspoʦӾf9$XN`WĴw0NwٸY!$o#BzQd&@" tYH{ԑ=YTOqFX) Yn:{jcm6-5TcjknÅ\ aÕő*eYjuk\Vsǔ)"륂 $&rqmsjgHcr.ݦ}*V Ҟo9V4s-RCExV0C4E[y"&4yBf!_t;8;?0\z8*̴%\#$WK1KeN=޲WLM|h~|jU,f9h pnrm  voA'O%Uz T0[Dިew M?I64 3n*A#uB0>@ou1==wB͜yTOCM$A'_6%plyXZπ+g#q!&yUnձm4ފj35hv'btՐ(/NIqLDž%rD ˵g`(pť4vkY,7 K\S_G˲8Cty @cP%`규V9@ ,v{s,)v,:c8 $TMsJ;ߥ 9Y*AyQ>L(",J]pmܱxļ$'_NKZc$E:yى 1%{ 8AX_%Hˏ`3 o_(V#j{*P7ȝ)%S1&K-Xpqسx[D_c t1OfM>7W>fW1E%2_7DONz $|H0Z3g5vM5"qnVhH |̱rCpL:?Q˕P 7WY_vzV}̪X @ 䠏| Kwyzd OA+5X?Dcl՟a~ÿ:!y߈)8f eLIXi9.G9#gAe>1.OVgz#ug>39tyWK]W'V[LMrZfeQY*`69x ol ۾ybTFA ՜Y[G]fg#e .ߛ .A Y6%Z?_.qsA~8KFK$c.ILs=;v92ţwr5>$YZd5worhTy4TEU{$dXmk-IL ߊQd]=bXYpoZ=N 5$`0c6{{}WmeYF7o*/ٹQl ^د2U~jXW3˃9FD@4}e#_ \)4\yy3*RÄ>XưNԐq[t 86wr^yvrdgbD<bBS`EZ" EC|ɕK͹UR[USagDU)WǡYX_x}"0vEא}b.ܑH :_K$p2u5B 8 voZW0:!ap' Fh&Kk5gE<)ARj 5~y)+<S s%qF! F¿lzІqJh$oW"`?{*yGLxag}9w8~. 
S[ 1.ݤ`#J'>79{ٍ'*wKZ_Z{*NlpA-e1ev_$& 03wI0ؒ> Wzzh3D0.|2W4ez Uſ1ń֊}B߀jE҄YҎ|lRMr`˪؎K29xS`q^>4rIK( %Ԥ隊C-wVUY{m .$[cu%/+EcupWl0M" L/; 72̾Yc=:F|w@&z~*/' ^p TKH*eF,=cA<=]Ivΰ ߃=g+,\j d0z{Mb6-jgݖHq$01cfRa/GJ؛iW8 4L`^Ԑڤ@i;~l'x\}^y"K;/S ~2j3oP>#ye501D*橒5.y߻8F;nZ}Tivrj(ۜNT&i >7sYۄH 0-*4dFT"Az#yQۖ-aFozmj| ":&(zEV=M@#,dGKYnwut|[_(I+jl|ͳ 7 @އ$HUd 1/aRO)> 3vDBgU==1ޱ}W>:PNI`_С2Tq&XW1 *״w-wĦŐS46 [sS>g2X- Y1 ՛UPOP)_4>@ߑ_(cAڱ S!L+]O2L;҂E=%{򘳿 ܠ$jaZjؽq!i?,EOHb߾{(k2)ư4522ax'MΓpHQހ&W {7x)t JI-Nh3am8 X+r|(^}r4ERc|RajY,xVS)6/`l'I^inm&#oǾ| bG/jv34pӶ*][oݡ Ҕxw%2sHF(n4u'=XS3J i1ю\=9><$ )͋'6Մct9h_P]_){I"W,wzIMH;XroJvKD̴>c&#|>nEajO0?3 SnSWDnRmUI?~]N4 q Ob q|ٞS@+r_Є#][fẉHPqY܄+QDzt<'i$z0&az(jMB ,g7{K&, *vH۱zsaPݔ L޼/ka(Xg:Cʉ;X zutvMsY-\lGڮZէ=`-'_^Tu =r}Ylc^B-#U3PBrO-3ï'r d؎GC2bm ~a[{yۼ7{Li'٤T!AӗOن$u,plOqdX%@2ur#MȈK'nZG >Ь sa)*`tc8Rb S[jz;T4AkO Ɍ qMȱq$Njыo,$Rl 7rT9 4+ 4e"ej~tra]Rf#CݡT .dn'զGtL3I| 5mXg{S64 S\a<Bbl}zbG+>S- H;a&~\f,B_F慖8R`=0>H;ca\m҅#qRXœΖHk9p>JbbjVHrwb4rUf˹*S@~jFq{qv)7 K)Z7my\ ƻt˧ď[^Z%"֕3Qς2uh8 a $Di|=(cQIU]/c@ "g 7 Aq]4?%3%2*!(u/xD3 [Ɖqq SB#iJ;e7+fL(%w)ks\tIUF KI=_E: d;LQ7^6%}f׎ B{_[< -g5H,8_*JaELgcI%z-;3G_vM0qvJ (-0¢Taf&Q۲xs*o4pg4o$iOʑQY@pՇgٰdXfC,aR_k An~DJ\zJHWhfEpL?ڃ=Wġe:TG1}2=X/d`j}5/q'>{sa C-&%: k"{Td9GLH۷؜*D F $4.򮣰!SB?sԊuz9a1!AIBl}o"-gPqFA'-==ץ83|+wAդu8)X')B>TӦ\U{pv` @*+JOeS2ig μzMA*N% J!ń-31{ +8UGLBL|0K'ɰNUqu݃3.<z8Znu` |hʟJ+HؤcqqR\+RΞ;j(dn,t邌6g8Ru|e_]'zg-ֆZK^WZ')6'sYLN#}D<W7Oϟ64?EmL0:Lii 3{a1|4uO}0C5\{-/tEE >!^tW[ #B1 _yPVjaIאhteXU N6?WtOؿcBX"Ⱦ_X 6.xFպ8<'(] !.ECJj+ IF/7c3!Ǡ]ݪ+ڌmO,&c@RHh<D|ʛU+YWO'FjM%p[L!6dG!ZXKA"3VMgT^-H!^+b3|Ӷz?lvdȀ7*ďYC &Z5vK\%e,d¸,>{E({*YS1V]S8lw>PdT3Et sƩ٬1' s¤Z">'Ӟ CtѰOR-ur*Bldyr.7 |A$6Oqر3$b[<8xኂNCί7\RLMP4%b56Ǭ6Zr[8Mja>Qܤ1VXZ!N΂'u\I4踽wD)M*Iqw>$\7$~86c^oH"Aݓ(iR<(z!oeIV9cʔX1SJܺMlL-oq!.Oo[/*R`Qʂ/a]}-NٶD5+0!Ud d$@ȤքD%9G9Iě"P$E_+b,dÉNefYcKx\der '3/¥{Ba0\,JqlN5F,Lz8pҘ`u8P;3Gb|EËz:;9gSO}Y5) d ^1QC$ l& eq7/QD6rVÑ/'g*lޖ#p^ĺ-8ىCuT¨A)ownb{kWҗO⵻fwȊ!*8J ]:nvĬwd(.<ݷ,iBfd,T4ݕxXEx9rnx'B3qmQGtLr̚ڣy {iv݋Bu[b#Gٸx1i]΢JZ^mQqn(KAɿhFP bMI(2ӟ!ܛu[ed7UZ|ྜྷ~ ̲ ^n3&Jу:s2n3>5L]Ŗ|jުeeTJZ$8)l8JO:%pt\Jhd .u(~ R=4J{f?lOa#rH)q#y1@[8I5_iWi!?kѭJ{F ǽjGop^Jb(T7Cja7Ŧn8ďސBCkщ`?H'%B.& J3Ɵ7f1))EXW}/>G>G8E8CP*:VEX>H_2<݆e$"_YhD^#_uZ4xW*F*س8YގX'mmpYv6($,|F׸WfN;xžJHiL b]j=M>~ Ա1S,j!E?s &%]*^j^۞DA7 _ϲ~bwF}ɓݠ#{rQ>"kȖˋ/8br'U ! mH@zvh1b{&Li{R7h Άz1 @ Aɚ[.UxX۴W9340PKMU^ |}#jA^-Rg>vt-^Q8NU;~ChJͳۖaâB2bLr1JŰe>p[?q7][HF pV}h>ӛC{8e3槹 8zYJ$mmZH_8kxq1TTcby]+m]:0z1>Z8({sv2H(ZVNLk#͹]_,ɮ^.&&Kdl_i搟ѿ'_0 Ǜl vt:x p3Vs6y3TnKhb/-V7j;ÙV:p|?  m7^n N+8bRO =b/e9 g~P,DVw {8ݓPE/KHu_Kl,y ViݓȠڢӞژ"4BgBϦj9 anSM2sW%+yOϔDnkR%s !φ dmi)Fx@R|] `47$Se+ZilKQ]оoyp䶗@q{/ad]qwqm,8ia]\@\j MV-&ryrr4KeDkp+$nHz7߅K,PKJU{RCTJ5V"Iwݶ*`3X[\ IUeQ:z<[t½S( 'n"OB%憳7`V.!z?fڦݬMBB+9ji$ח6+5QZOllAK: XvS^@Jߔ/R ͬW-wU)g ДU/eL 0lYA!vJ CALK(#Q"C&vؗi~B> ˓gb_XxM6vEg "AZ3SKE &rM%+DLđ 抝C.<60z(m9W#v嚢XF߱~P$g:M Jhs"Y2y;ȹ+Q6m^( GQ¿ѐ0 )M+ XH0hm-̥^:1AxnYydR|21ftN/R) `3Y8%9Rɵw~sx2ޫS5,$m_rX"_}d1+` dAiQj,P׷K[j r(9r˽nS!gKOX uuML84[VA-w,[vs.Ԕv"!Y&w$)O '啚:Wqt7P}2?!+qPUdҶ3RS,YJLIꈑ?9EU}aLOo'y A|\SQ~e| ՇJR f!ֈ446XUJ$: Ⱦvg Xqv[|Y܌ȴ ^N(BrdmTf[ Չ.#&Lc^:456FZAJvc ̂,17w+]. `[H݉'qkdV /E)C? 
h-pEܸJNX!;!&dCvW#UBORBBO^ %7eǗho_Ay˧rfڑɆ8x)%7C[ ĸrL{?Mww,ƳS)[ Cؑ/Liro.a f"/ĬgݜM|B0 ꑇ<[ 0cΕvS_Yrzp< fTMo ޽S-g4|hۿ% 40e'PF,}8eR+F)`&T(~ը(A|us::V2᲎Dtt'aP=N`I"VRQ j r6r5u5q)vQlRcMWpBu&L0Mتf1kTu[ɕ&)uexj}P(k>2wp"Z0.Fڲ+#P/*2L$[z'lwL/63UDy!y*I.٠&r]],gD\^B;p.PKQev2zֻgB[^35m% IwCM}alh "<a!7kM]rƑ(1 ޱ5obh 'h~\pC&fIɦ}up0}x#uC7>G&B}I"Y_X+c68i"l5*q3(Mջd{aV)N:#)a/]/7Sp(;췬^{zݜ)*M:@yYX]5;)r;V*/־{ףD2ˣ4W0fɸt}D6r貶7GX9`ʞڞ@3VkkR) C<~اhLūV~ -tEMPqpy02O0-) ޙcs1g=̒TvS3R{>Tl,Mсp.psI}N[]Y,~vz ZmfmNIJJaO-+BS\ Vh90Z='7tn ?Ls KWQ/ݧ"_Ƙsѡ^$~FfI8:o+!P3}m$bHaoL?A>#rI)=83[ G̟51\o$FqkDK^I!fZf֜͡ǸogdEW} &Gdt!]-޵z⥳z&[uO.cW<:W+3zȼf-B|I=aWtս#ob8IrEʄ^KMXz~J]"E˱U4v6z&ip*?$0j!30D҈|ŠЍ8Gªscz]a)sxb@?[Zb= JVrj0EFafw(R NgK {ކO+2ʌˉs4esge*MMm+6D9cFi..W*۷V ^O8qi;[\o>[r[,"Km_*55aelOK^fr_E;*2  S+Gtf9+գ]M0#aGG(;sOqXJgRJQ@ڿKit* [":2dްW.+p==MD#nRv8/@np]h80EtbOPN~X+'}ťᱶ4_-B#]N?qJ)pR0['%$&zbx6S!U2[SUKӃ7sa,$XF+wna/~ځ\}s`g&-=q4o˚%=BQ&e)S,K{ KWdMIUq3;5 &Wgp(qZ ~)F7صO~`aT-El;Pl%,k~՝ۺ.ˀ}9&i|聶i8DrRKz C bK׽ N"n\N rruZ3}ei$A;B:HI["Y'ㅀDϹw5&,׭Ѻce 5u8i[8\LCW\݄"=Vefm߰2|C)Kf/4`1rQdڇqo+ƥZuopTP&|U\SYBg8iC'%h-ɏc~0}ÝGpO 1tL<[hWZќ9Z=b>O-25cDKI_ZH-1bױ<7wD8뉂}v\W6[@uÎٰM5N!"԰%XH&8N%?DfՂ,R~D q ZtecJzHJi:R+h AWepKigrLs*rV iwxĽb9,\(x:^ʶK( #=Fv+j1^n q9b}b[ w$v nrXTz]n1CʎNWFb^ 6X`'XUtOh3u w^MS| gޱdk\9u5 n=lN#%F$΢f,NYgV$?X$70o,Vdlpt\n6/'vau\;>%L]M-pcY‘w`UX?EY ټWNp.lXpVO&XJ'rϽK1)_8Kn/ Oj$Yb\Cfȳָ-٧ qeЋMYI M! Ku^>e즭SOUUrB=M;E+IGi2b&ce,%&~B`lJџ;HtlrÛ@*XLw {`674j~sK'C !08>*]37̻j \$) V#{X0f VAdm %]bD%>WՌWa)@n_x6Ȃ:w1`gqaѨ oPS DΣ#qIC!SL\sK]1 \rFh1RxxsdTנԞŐ*Òe- D̞ZG4`;0kW/rW4d|ND-,|@?YslkcbpYgQ*N"( 6dK#a4CtWNÜb=! @&5P#}*aJY[*@iwۤѨ ͓,-hA+3%뽆hDjik7Oۀ]po),R_]b JV\Ddv/72|m EzCU9DX (Sªc:Me̓2dnZy:&#orK]a.]xf.b3D<E;']#N@}tלiNtd >8 Xޙ hj[3y's&K=~ebDXkrB껭&4#qc$ i Ӥ 3@ic,؋[)>|uM0IAJ.2Uʂ?dE MbYlbx8f[l|I" :&vSoHyD ;_ױxxR&QzΕA<;?q2DQ d9?F9~1YAr|DSK%.*8$O{Il;3q8"#bDB'WHbE3lH*_P+_O빤VesR2Hu}g] ܷOىwح#/<^ߗ>kuI4&h ٝ0L9ugLX2n`DC2q|,͵\e%z#LJOرuم0ܜ~>e(d{t}Zy%]+y(jH;yYrg%Nb qC`f>RvioXSb$QrDji8K7!2 \Ut}i{|rn7U׏Iѡ_z Gng(ɵiΣ=t kOAWV5G߼`u(z2swk*#mƔxxۨP9HF4dHGsPGI+7#saU:TT:^DKHc%9řL.zYM ?7}s kwµK#jȹmjRӷ?(a.#_)l&^Vޫ:l0E=(4?F)A卍F {d]HX@4`@Γ>%7z@\ŽRM'+|=Gh4;vMR#,_s G0ΎX%D}!z: A33(1$U#)mVB4- 7~'_&i5`U#'|[J˜BܒEIH$3p˦56.?oR7XJ8&Nh{S;SQđ`Kț++hW-Q!/,^I)r%c :W硢K֮ tԏ38z+j,)$LY6wMr)Kfh+=rE74(<-km!K6w=H J])oя"{Π˫dJ0yT))G% g }4л_j۱痂0r厲2yz${H[FUG}$űB^ߙ,a. ͈Sߏ*G9>iA}EY2AyڣMChA]Xp̶:Mt@ި,ip(6A%~5s s[|U$ U~&m/H0*G79W4oFڕ^09Z\BQj4.'>ց3 Q`,a(üecAxnC*Y;4LDa`=ݐq͘N"AJjj>UUj ҉[WÁנOF4z 8SrhU~k_PNciZD(eiPE^B^X"K`+a=̍ Y0O&TeϝfWtnZ3dN2)+'k2+O떇0A<P#2n4LEf)%1#-Q.mBnbYghgobSK_}Lۍ0p ZlT[BO(=Ծk)"(:9^={|_j=q &%")Z<''5#GV>tl0q'8س2 :{٭5VbcPx߈45zAͭǣyP D3$Y0SI64r䂼O̞oGb~&_L1P0VzM\ 6^d;cn؆#xTAbBފfU1Ҍ#l%I2h_$٪zU$bQNGKp~F4z#1 Ms捨>0vŽ҇C[2[LgگeW06\HFlE܀s ߎP.PEd&*BYI;rń!qS.|I d͂Mnk/7DG*嫍 x`. l9xT;r8 -S8%}0տ6H-o$Fw-" w,CZe .|=!QFƃ?sjT z',eV.u(UqFK@b5q_8D1[I 7{/'m A~Bfb,]%5 Vl-(X4NzWZ@Tl"t8mhvrn}H3Vx5M4M2S28I0wI!}jLE0a ƛ/Oҍ*e?!Qv*ZZk |)|QC_eS Rtwb2Xݬ m>+qqiaoG%$AR xɌ:U41/>H$%~p7""/Q,1!q`N乙UK;Y?Qyd=Ovm|Iu2#+I8m/$ a;n] ?a+q75Iy^GUaUlH K]x %g$4NϞT-)@2G3VƣlaׂFȁ2 m0j%>& T9Br]gr\er~R[-yhICM\ 3r߄g|"nfQ?Gɢ+GaRmbLұŒ}Jr]F2y,%${v!k#1Fc`AoW'#I@(GJi-zY3[c@XPnǧ~ ldJWJɁVD,y/أУ[_`yMnN 5[dhs@E|J@}@W[{8թ=PG<*7&;=Uk&4!&lFoZ7ßlW{Mph[%vپ7.?c#eDYh8 f`J敠xNqwqf=k\YIS\ 2#pAj?#,x!7A P,ΐ4e8J׍%.r5h?840fBiPR1)"F^"PM(7S;rDzs NC!0U D8#t: # O|COp?$poXqޢ&%/ìqZ}֔]\7!YyBRxZI0"o_q&t!2/t%y9bZN:$+k) /8sy`ʗ,P1wjkPs&W^)![9hwm'@PܓCYZ"h#PNi!;:(G]U7 }&1. 
{qӈڪԫV Om__Ck szMgꕟ$c#5 ;dpJuNɶW~MFIk|apgL7I O?x:)nw6+J6Do GDSwQ}=ZM0Ȯ+qwDEXa6ieg~rJh^-d>Uj5FJzƼ 5>V7JMEdv2~"gOqi%v\юܢT,|tD, e>(~/9[":wv^ ?L;B&( @jd9..AQI.ߎo/~8q]U5o+3Q˴[SIpWMn#7(B8pO6U]~_ y/4uT55ɼWHQ+M3"8+UVc?sB(vȢj#op52 >L0qTsT>۹8dܳ# 827O4'tvN@(( ˟Ȑf2mPKd^5Rzavv&}zH+ >9,E/idOGC9R8Cri#*#5).ޖ * H#'1d3o)9^t-0i qWUA# P;`%.T[XNcCrX{* ck*E䨶x-7e];[x%,%/N(v}ynz WX.{ӧjvj\m:!ׄ'vRbrL:ɧIn^m3`wџ=ĀKOyr~^%eO1P#8_:\5ՁͩJ(Tqe4(xq\#/WHE`(e2┗X1:3bzK6+ՉISUD! wE溴L?KQm0ٷߐ%v{hN1]#_ {-{2oy4Rs?o'zP 93Fp<F9'O׋pzrd-mN(J&Cg)o蛜0%lοp8ɼ|vnU,y wVC,^-%p~r!A?f?to5|bs4Oh/I%D{178+VPfʫbO'Gp'%e9_UKUK,U7*[Ȭ Qp~XW<;8SOjYw1ф|3e^ižh~"kYW΃&y3")[+na}+*壖l_O<=Z|dc9}iƑ{FPH5΁ ğ& X1(%\ϝ+ONqY;9cE*z5 G>9œ[4;N˴.3C&81SX@~nDnt8mBɴ_Y3V(ſl(zgf<)fO6R7)˟TC7dWٟnsiĚ̙B-%ʭR%'٣*@ÀX'h^^i31FmW~d/>X%͟Akl&c3h _hSM(r[ܙ~wQV.C ٽǷ;VZpX |\N;r!>rw6G\P=f6;W1;wX;2OQdЊB"D E>ג; /_XE5 7%xOI&}yY!>z K|A po.~9&3_Vw`"b~^Ak-bV6O9^ OjbQ}8`K⻟r&A[|Xv0"XL<,B3("۝`øh*US&ܣNf@c"ۗw^HTsHH݃6N&m {xa-Ӷyuк +b`8CcV(˩rnֳ;Klɟ-#)zfZKPӏoah[g>*9n.r#G$9_o5'܃gURK(Ϡ ޑ$ILy@wiP݄}3uUsk?h ÁUnB>s=· l 񧆉 =b@8t1'X)ejϙtohzl?Neߺv_$؉w@E>ݰYpBpxbv fn~x =r8&]*Zs&xl>˯r%cÇV)J7%Np+ 4꒠$@d"z6aisMWWIJf"AzoɒuP]>`g!]50}UࣻWd$|xn㺡w;u)'gL~Z|zaW1PCl7@_Uk蝠Fz4m^6e;^YXӀ1T>㶳P*n '}{q\ 3B_Eؑ|1DSFOZ0^~nԤ4A!XJ|T[ ሌ=SܣHgFrtz_r@T70o,5jS48 $A!5nbzji^$=ߊ% @5gH#K2ͤ6+)\N&2OJYۥcz=J F3:Ĉe(xj?oӴ7t(Cn #$M:xOW'6l$bي Eh`x@gk9iW-KHZ86xK"YK; \y 1cQ7E"*_%`uyE f|[ߥBۣkF 2wDr:+̽i-i_Й 4Fg\p` lupNc*‰֎ǶX+,|Qn+yGJd0"WQ+ZVER9TK by%I#&~7BIGGVPKqn$gjMTټU5+Y*a}\[7Lє09mS׉!ӭ_(8( bDA\G %|L)Ӌ[%yW=3r"69K_Gsd#!SQ߾RP,7Ҕ~̬ր!қ9Gy/SVd6'%VS c%ɑB}YԨ ܕ1aj1bF.=>eNx'tAfi{Gq&t6Cz& Oʹ* ʃ"Ha:m{1D*~qLJ5D<}Y{Xkx5a lMߎLQ% fs\; R-J9jՓ0R>.l,Ȝ rj2;'8NY>y @XN).z-Ӈ(Y*(3h13B43Қ4]=`l!T1akf"EcΖ %<0c*˹Jи(e`?Kfb:Ǫr>J,笚b:Jt5H3 .՟̰BѰ8)א驜P*еc3frJқA/ұCþ#aCcoNˬE2Fy3,">&OSβ  =3L }t2 #"vtUݥGY̐b\l/S_19=I{UqJ/ u檴ף^&a\w|+4(rIs]BZ= /Z%;BD~bv#8k\7:v0&ˇ| ۔KzrqoYxdζZFqGٯ׷l\ޅƥZPB 3Uzuk]bc|/Hį,Y%B{ɚD2%>]ZVC=U7+F<ǼQ!1&^eHK;qz$H #$m,lO2=liYy*QPh%R P7'*CEڔB6RkY kG;BJcU )[-7W[OL/O˩lDW,b(O%΄4搩Ipһgו66 У64NY/>Wwe5ؤJtGf_bWD|H4wXV"HTQmДCy6Gwo?szz2/N69"Mr]@0J +mEIn^k 05aJ)|46p Dl{+Nj~pP3ް8`^g'Ub}{1mNg;+tui!v~C@+]>tq_lt ^7_ Cߒ(`=;5'?|'53 gi &8mƍ}Zcpˑ1hvlsF]~3GSB:Gi2K%/04;}LPO_2bk'/ŏ0!ݘ'+PԌY,5:]HWNnv&M :/fw‘ by-d!m._$.˰uT++E<6gzIDy~^m@J-Qo ĵ`UR\q_n(vvq%wV7aG`Ƚ]$:Qd.Y ̒Z$coPB N) ~ ;`hk~'=a\<#TOr5@=;ΛrVA)g렒hR`a6o\JQf^ 4I+V)OQD@f+n5z#6% /Z]}S=W-&2iV4z@E3ai%qZ 3Hk6pn[jU@M$ika}/51usb`%kqv}bs~'Hxz/-l֗e q78Ec8I$3iϜ99}[;.1! rdL95[K)onV؍CvZ^J~5]=v.{ 0޹b3~.嬒ֱYʌS*enO/eY$qۻ&YiYQ@R ϖiË0uM.SVnDokSm2d^Gd,]dChMqqtRI_'PE^Jn# Roxj|mTxH`\Oay꽽^K)"! p`~{W=aޛ,=W|}S m;8A69[p,:bi)Gla41H/4%>67@`S1 914ق`[%@(@'L,-Irag'7XOguעTB_'XJC6[ <=&I^q?>`g<&mX-W4tϭEY.sZyXr::z{aϗ$"VIj:ÑNfUϸbwɶA)C5W\5v Ǻl%|=agh+]!t$ D_`KKeb7LGEŖ)7M䎳Ɯz/%۹L{n(Sy D_4& `cVW$~ޠ2kʢV6@pd6! ogu/1}a?rk*90 IT\5+񶋮sWf RP$WD;AۢgCa%Jp#Mm d~髋eHѿ:VHd|cG2̓v(C/}|AOvs?cU{?||PlJ?7(,e \x %^6$ȗUW/`rL2'~z+ERxC3|k?tĺ36doE.KT;]1W2v>w=캝TDfS2D7ӛ S2ByW&ɋ|?sA*8.i%4JL]VDIÙ9@ZWWX [i3Af vHVjFl1[9oSgCVck?˱T5 9ऀ'5sG;13O )Z ^9k U-ά|J;""cNGV3kQԯ-&m4 /jvc7B2gXdL}Lq'mu9馓,st8Y ˄.%%.A^/$cH5-(ΟC)M9B[ḁƚ̜0|KɗDCIx[9?m5Vu]sNUeʴ 2;eڛ}U2/;7ٶ5Et߮p'{dUg!N VZ$+Y{*maf +;5q"2WYw%<3.\#Jݚ-kI2>%_G$p ,$$uwl5us%ԢIhlz OA&C-.K"] çr2\Hx.iԋLA"$ϞV$!BSgB=$ P'u,1kzgD/7wHǶcnnpc"a{¡&d%eZlLa-1-7ǣVpu)0>/gϏͣ^^14`1y¡!h32iRw rp@$нnQb HuB/IFBSΨF`XquRmy.ጨ](RxJ81"őakD:w*[ƳoILAzLioKFlycEĤT3 pYXLV/BOf7e_pjn =~5{ KHڌm^Nd3&FW 7.d)Xs7bCIB(f g-B`d#sE4H}C͖W'czQ]< ,N Ü&K p \Nsl}RՐB ̂#yD6j>Lv+pYSE0XMgعc$m)HfH["uڹ"$CC2.,ӛd6ᓒ =,%~c~pRi/^>=%)n"vjdI0V/vjkXUIهZɵ2S'f"Qa1l{r=RQcgSږ̔ خ!@đճqJN 1-"ƨKk/Y`#J;zE;Q}Ҽ&S>L[MrxGrrY:ʔBz'+yTr NUv.Ӯ2nLRaZB'('S\JE;! 
XK c2Hq8K;7=8MZc:IZ!Էh.P ɧ>/M!vHӛMObFަR}rJ'xQB1?@31c o8WiYp~$M(v@pKl"UN^,J z7qV߶d+s˩< n Px9:8wpQhp6\[mM`ZAҟSdd<̞8![=}LKJ* FA&B*>$r>B(Gs+ M S:S%G,uB8PܭJ; Ձ'7 NRDvHXII"88L?P\L۪j$őt uQ5 cHK|;拘{ {濶OCY9 eEi2?$Uޱŕ; NwOA- VʰH#Q$PtoGL2㮖k;V @`Yb\Pzea|lGCfۉ_EL!8Gr׺©665eq`9[k-\ ۴pg~%`g!M=#ݪ.:B9y+uVڟ?d*=U֙Ď%s~ %8ԓKY'#C%Eldmz҉M !ظ҉>.<] ✙Eћ'T ؄YN Փ("x+qY1ELjDiTY+H5m/˸ j^>}V7Srt K< j]RY-"QL 62礗*bp/-`PC++i.pV 5:P).S?;˭tHl )lqJϽ,ۉoOD22e:&+eH$@_s($y$pJeZ:]jPYD {IqJ{\JmguYO+C\A.! >24ħcgWIZeV(d3X1+'lA5!̓,ɊU2K>Lߦ’I;e#r0)CUT>jo. +J=T%!ߗ 1GSbdihjEy.:}c/urNR 1ӯ#,NY_ S@J)TI<&;ʱEp,> =e63JMꐯH8%4X+Ȁ>Y@nv;1>[-mg;BSźFص!41 gGj5ip>)RR`J|WKKtK(ck!RݹH3UB.m2}* gpR8ެrVJj'Ck'vI)ezPNʡ*&GR$2Rf9FJćfr}lFU_?Ie5ӯ;Ԥ9CD~l5:ca<..SS2}ЎFz,^-><۱?1~C>~awj筳-W n.HҤ 1.Ec9H7՜ }umνa:{I"W>Kn 8,One-Yͩ쳵eM:k 6zι4v}%\`=~ 2QTOjwZgZeܾ!Ótw2p)Z u*6 6v Sޠۭ-M\ tIԂFVݽEw cQ 6Uur$RZ\D3^&?#2H2p֢h,U|.v(~j4Tچ kUΑe4Rd֕VrPe*v^[3 ? +R8Q`X׻-O/ƒRT9ChVC+YmY 'y GTQI,6UkyΖ:uc I瓂 ]Eߊ)Pc> ,r6=ޱ;Q^M' 7p|K= _kB׽iE|83F=x|F>/R12VzʛǤsKY3` \kuL %w'< e,T(7$qoxpU``;(JПy ,>>8sovbk~i0`UV`6g( 0HB\P3UcTE#%O7:mǗ$Z(\w߶ѓKN'b1db%i΁p=m/\> eޯoa?CÈ:4|S7Ά-.&C툻ćT~̛`o$bDb{`^sU-ʮ9ܺ}~lffsGs)`Ta;YZX6D0 =Zk^e L4jdd'p|cuU\;"r jڕRgnvD1"YWAAIn;r;5i ^le[cek{ӯqFyc&0/4H0~2rd, 0a죀L}RP!3'xo"^T<hx,fMLb <0m%2z84*R} Fl7hl]Yz0}r,1)Yio"MN'6lIcr_ˈ=e|KnH_iAmBV.bYk:DrkVO!89HΒ>IO/A=yEzk|ON9;7 'GrvpKݥ^~3~}DawͯBut 6v fBLgCfhDafIASՕYMioyfxR#Z UL s0 #0kλ+/X1e,aME1׹Tbح@컦o ހ_,Qq6\m/CHc?e r&K0>D),7TΞ{QuOT6ϙ|}:5֪X`+h2N8,rE- R ks$nuvc|['dgjmU.2[WTے?!OjH [ZC lDlwpj3>fVmþ0XlЬgR纺΋3QQ7&;|\kHm'~RP)V8NOn AOXIBJ1Tf(0/z).Nr'"@C"+x&5D+n؄ NjN)&$8=o7&D0-3)cb )^)-<XѐM.+pxj}Pa{[LJӬн>e+Q+[_0 7 Crn.b@99 d(}h-խ؋V߼ vl7Gr0u}*mʴo2~I=5:2@1Uu@"t"yيڦ9iJ"8ZH6*@?rj?[5}`&h if`_UiźQxݲeOf͙`'u o޸z]@9 a]*tۉ%\i+v- }h:VЮ[+%إ[7;dBY7FYPA'~NKÕ9Hy0 N6\$<(\6uLO S>aO#Ro'юӌz^ LEM,<98Vn kSXgRNi1 ]+J&#s|nB& VFMT@$$"b'3e5gQ UHOH$DY).%Zr6T܋+-8OA_zըfGW=%)RLD*?]2mbQIJ\[WI#68qr(k AosUO{"Qx"@w^GleO{c5Pŧ#qF:j"a?:ɤd)E]w3I"ds!A*b?6t2P O֋&Ieu:%K#k F6;W`I@V I7qDً*WP c }S2tc4}$~_&$j(3yڞq %-^(D {YlJg/P“|lJFE=u6+rho07ʒ!ޜFsVzEl%DpvBJ/_ƅ$ -YeHͪyb{9Rܿr(Ȑދc >L1v=yeC*o6-fS-7$m-m-S\cټ̲n^.iQFVθUKH+}ĥ+Vf{iMeD⼫ZUų^yklYJj夒Üѭ,)G1Tj. 
Oد7ll[1cK |RUiX[N[:U 15U*$Px8u\g/t> Kz9=q}؜d~U,͑B)DG#;rrVW% ; -pVɇ\ n ϚUM\-=f`xW 3gbG]I9/,^cf4l=6޼XN6؀],e ]/#[xʔ ` h)М&M`8j8/Г˩>߽ idOH6bRf}ɝ@R *9- Ku  pհ wj ="}oJ 2n2@i~904ۄ:ݤ]q (_9{ɡ~Ou&lώ/<0?$b}|6FPbp\MLǔV(+S=C_o4y_>bo* }^A=B)[h6漢MTпV B`Nȕ%]˧._J%S_Ɔʯ `+ 7NciS$|f"M r65bGU_r=4g8% 0pr)a^dLpS4fH7}'lN`ޘI$Z8OU%HAԋ%g5Q/@enN#Z݉y݇ۇCuh3EU EaYK{C!mP/ߞBa8gL74"Yyf=򞼈\E  [p + +u`VX-+ x&l*~dwZf7BT"~EJtCʫH.tԑ=$uq (6r̞fQ&I {EuBfLh ii܉?U%%tkFlq qN.TGY6n\:H|E#wDn,t YzF@P3D1jʁM>?W^~H\0+1>3cvB݅)J{27d2&M5z[g{v^B;XoÂ񏷹JXN5i ]q8_G!J t4Q-<ޱ5c=ɼWk1 Wq?}ϺƘN;3R hy>O7Hg mQxq7Q \-b.IiTs`o4N@?Mx= lTR}47$7w[5665޻*a-a6qÅJ|1-N܋Jc-ܦ肬j,/48He ixD5HLE}`Xb3_b vD;4ro{.hCKMd\05I\+kHzId,3P%Ȼyg}λSjp'w1Y=$lWf7= +}O,{Q,67})_uU4ɩ1!JjYVKwbAJj 9 q W3B*vL(. kJmvfs4_Ze aDv::~0zrx+`!g y-`# (,wLo*Q ?E[V7VMʲ֤X#3>08^\n[&V'uT!jN{Wfy_#_  h _-K0&X` )DRu}uPkcg(C6 -!Oo$^mk&xh1L;;gIk,*!\X)#gRmI^2@15!3Bo\Ey#|ꝆvPl{_}o6YDO)!"~'eD7 "qMI 6;o՞ߵFk.,x 'Ĕ0PCNPfTh_*1h>H._ /!2,o%R%_›5ieYc.pr$H| sZ?(d5 eq"\||X!Rc򔒾NњH*o~2C.Z.5 PV"f qL $k!bfon g[IBL}xym۵lWvK¼Tdk% a00tS#<4Bƹbueړ\|`|/ WT(6CZ6PP5HWzQh4K"MmbA|&&}pEȱ5jʐSD3߱/67M z4n?쥐aVMˉNڊ:yd UW2 2Mז/EOVae,ƊI+zǶjPC+H#=$ c Xu& W+cG=|Ģ8mM~"8K~M:Ug?em\ە-OpR/]~^ Td8`OWe QJ5{*bWh.x <|ЅU g.UL>ik26X\(1_@C< P8VX'IɧGY޲Ԛ=n<4׿w:9bȍLJXKpDv^(/=Z\XMFW1.^#yO:*F3&y1~ByL߿ߚP*ʃxXujg[|ld( @2=s_ɍCXq!]>j#T#Nm,FH+GSdÞ vziTvWGX2Z xQtqMȾ0u{c(dIX!8Q3^dY?%uD%Fq`3H; fLS.om~DD>TB` \__sT&0Aϼ c~u@VCmY-{ܑ?wP')F~x7-Fnf{JNWy 44vgbᐶhٌ_R h|ٰdsQwj_24{)i?_E/*ϝ~ٮeFX J~j >cS+뫡c|L;RcjڟhV֤e2RE$@X_Ԥ'H' ,=79+0[<$5MD icvW}n5'Y1 f4 טx^ӹI؄hA4ofnBp}/1ہzD1R_ϯXp.ʧ H v3TӲ[ë+d3{a5yTol??~Pm0܃/mF+[=G$J~.q8=}嬿>/NVA($itI) 6 ).p7:ch@rT!$b~!X<ؿ5 *<#Xghœ{LDOc2<^^!̱Y fk{jTҊ2A%yW*.0{)vpI-'`U 20Ylxҳpwj=ͬb r/>iyޜ֒Jc]G; nU+>H`9|{C^(NC>)8s;.ijM)sL9R} h '#<mpPc_8;ٴ/9gb NDw])]Dܔ!Α79 d  >9KZ|od3ds kU֦;*nVh7!fkT9)NfAY_M=9ZC 1ݦꟋ{PJz(>)ߕ2 0\ސI6o`!0W]IJݖ6oہz yWy4lO0TayGQ}KlWiYr $oJ'vw_3eL2q 9~cI#L B͜0 -F`{E2;o,pXM $V|+' L[;%eM#2%3y>:s~^xitno܂<`4Xշan}jFsG5y~@4u#Nv=q&H3o+v'[q,E 8|it; -nI 8Oy쯍X7v;D"ҌSv1}k]cZwh H&ФiaԗG4`X=+AwlJ}zj!+sjʐ?fHs\\Rւ7Fx%.}L3L6?tb9;l$$\_~R}"Dؔ:ڤħ՝,f+h]8K! ,TlGvʲgNP# eVB<؃Se8{ȱ:ViLSm7QK[^C}1qa?wmnI*5<$O*_,XE捻E5W4w?b;Lj5t a@LFUtkPMWGFaa~\q={})SGdG V{Indr G`z,Y>ef}_Čϙ\|F5+HB]JSCeԯjgclkPiA`ة&lT]*\D#w Fޝ>3L 3ֶ@+Y ̳\Q\-+LH`?8$g4ђ,nd:hSs7Y__ıw>O~w|5f| d󲬀F*tDckHJ(NΜ3RzS "]`=A1$|}8dWUwڵ U5y)ҖmyPUS0X|.ǐ_za Wt-o K9%_ZC"/n6 Ձ9+2=u՜PZwȋd2o? 끪d 1"V6h,G5JM=%6-]nPWꑔl?Xb7$<;WUk-WvTxѤEXU;i$wţQ|.y7*f$%\G_.cd ]V#^%U ͯD;~#T;>/_jPtI(xY=v2;lĹ3N[ݍY尲V8. LBA-PtBmg;n (2!/R5V% Q*X5`} :$GLHhͼ*/w/I^h%u)M2K$ /AUT V4K%QxAkoamУWBepUQLqfH'LȑB]wb3a~e9/o`NjЃ@xQy/'ۨ-(7#KOsu7{L!AWGG ޽sKhIm<0-^*05;vSM~3";,Fjm&{|T%eÔMjRĖҊ}տ$/ԻҠC67_edzZwۋ)0SJvTjʆflDg?&)t~?2/Zv ' ^gW[+*A.j/Yα=:RQ]91ro'X1£Aw芧V>+"u@.Tږ3|1l 4+TS]PͤzR1L]`3oxd;=&9qRz{']PM(fHM^>z"Mzf't ^n|SU8F!M1&A;u:o54 gkbeT= 3."!e#c )f@8u7&07󭀉G񋧳uZDZ~$Oq*)1߫&*ǁb,?³_!Ņ(Fpl\MO*gbo5*GSH/d')$=ڲB0ca ')P% N CѾ) Őd_֯0HHs/ ɺ#nrc{zv̖v w6^qՀ8\KhG1cngs+ZjEIRbZq g>~> 9C6DB|!ua@C(~3H`h^/>ԅ|LGr׆xv+ ra]TѯYOzKYa;Ĉ}̽?"6<3o2~J濖%q388).j5Eœ_!@%MGV*gg d߈ֆU!40TCI6^VmKB]2V:5`7i@GiS #(pۈg8NjMj"ٌEΔۙňU_P^,?{XFWIf7Jn"NpHkrʖK.+mΟ JuWEgg)j7h ݃C6+5$AsV)`GJMV>%DJnK\꿘g)||!p"c7= M;YrheH>F:Ɇe]|JcL;xA'꘎qZMLF&#vQ=i)6VHp3^{VP;&c@2y#ǂ!bM t.$>t(c?yYBKڀO۲7j * ǂaq^ KY'Q +Gsm*9D pp؇N,hd6YiyOR G3'CL^Y&$x"SnYBͨLgK,y Ɔ(=jnYuɨ'aBWoSĪ^z/t1 R1R HSQt;yOIhBnL_2*CR:Rjzۉ Lq, "qCB!'`9\ s(qJ G3ȂU,ȁ^ f-Q #hրR`p3CQTJkR<Қ*OgW Dcvz"mIq+6:|@_əfTCD&ZC_d0&gokxF*ѾRpVCvY,#,XVz|D0MNڽB|AKu OV7 su[#^kjF!y?VY#+7A=yG+ӱ+/#'$^\dޘ23)/8'j+a',uS}BbYm/+75g7Y@f$ C e-KEX! 
~L*m"$H,MvG!z"L@VV #ueȿfy: s-H/Ds(״ k;{ 5ThcYOb<7F rZc92eRډK"i)7D$H$(C nxr'Gjj&j} ~{N#m}˯@J4Wd)?TwHS}7qL&X8[G[*DDEPXoe |4GtZjRW;?-a⁺PXnTxǂ3sS/Sx)K5$,Aad8˚59!xoo49d3)]TIvFr 9Ǖ`ns{H"6Tm28ZٞM; ƫ<@W6mM3],D\w~J2u^@ ; +l.D[iZxPEaqTO.WȩE1nv׆p4^A)J25 k(Y4ҕb7pCޘDbLNv[[=d(R%_BZSo|o5k!yq5>(ޝ`$VC-i_#mY>@ Bq-_B[q&ˋ"m.׻y7;yuIUFn]'{,yr#zSͳ 2B?OϞ"a \oꚳmZa7zUtT|L4SH&3,LGC%mlm4L+ 1-F(ޡ8 Mc|sz0FΥ[@xY<; >̖2t\Vо(I9*b[rR4CvQ> F8xV#*ccv/:LP.u5Vc s_@U%vk)5|w$԰_Axin)LGO QH&D&BW=29BJltpd7wNx-5D`:Ae,T0&bN>,W@G Ҩh玜>i=5NW[&w&.]K΄XW4Ey$XlbD}襝(I:[4[YgNxzif*Nl5LaNf+lc3 ~Q#y`Q /S_C6 3矞$,9F|ּ+- l%4ǯ: _]twLOweÁR?ԣF2fD 1^I!dkʬw+(&S"+~huO'T6rRvPU#" ..vy\ y-Q 4ҫvcdڃ"PkQG`?ұ3Q6DaJZ[>jE5oW ƾC%]WAj: Xg?%[}c'Ii#Z#j=x!{(&jOvd!o_|Ֆw$4dLl|>-h5:9O%4p+|qˀ,ۍGL.>%䜟2bty{iWtS;ڔ&4#ai',QUY QD߬6oL7աl K`N5N}*1մ=sÂ$ǚdݔ%W11rQ>_PDʯOPÇ3|`3PE y?Ԕ?XyVs41[O)Kk8U|y/[֢@qyw{:,v1Zz:/{9^Ҙ% dj'{X9{zHu-_(phXyb1| AϐFGnGԺ>2w%Aq.N%8A-pHf[+C 3@ŧM׬`Ol?A=yKD )6%~pާK5xMAs8&~+ɱGHRT@MHD)ÎK시dBi!L-(#9 mǻ^]41)uͻ ۽LljThmh&i"5dUsNd1;jȽ&XDKCxR`G#C{p\1(]CtceSJ$:އ.:,_lbqT(- :{flZ{(hE 19@TlXQ;BC._aMvI=kD?99Qy/Ô? Z"2 S'%&'t U*oK/%L.ssu>< < <'iew $&q6ۦŽ*n_,k9P#ռoN;Zs u'*TmϷ6 Di3{; r{φVw"Ew⑋X@.8it~eP+>y7EVjPT*ѹXe8&(s"$51N۶//\I0jjg !0 _L^/ܙivˆ{<+`bd!}{-&5s X ^B\1#,ljep ²sg%-+\+5aU:N.xJ U$"{+ٍVv|~d¬W}o7j1ٴȱIX7*˅[M@g"X'e\ ^T'\1:~e՘)BZuuOc嚒_5M etTT&HX-b3rk<W3ϺJ -*Mފcϕ`OqYd DYr2=yuM]֌8D- O4}9o>)&v~tZ+ka$J& Y_n/az9Ϗ5=M2Vk&XuZ1EέO~ #Rbd U%mWh,1=p~? )ĖZ=- &_cSM2ÿQoQW,f {ٱķHɞ{Y2Z :XxnjqU0p1)jNu 98]ҋߠ8rB9&Cۉq5C,Gp?bӪ\{)T?c0 :2 Q]KV`K) C[=ZqpyO`+EFDؙ0ZO,Grd>?fRd.ɞd07ɘ^4VH, =΋ȝ$ݮvX&C_F{USZu29K3j)MF@q–ްpqa%b#h VAO]+_*W0pO!xgR% GP/R<ٙ5vR#x=X=& ITL#L5u5Thk ~_n}/U|)!!M\ ǵƟ_$[ԳFLhחJ`ce_XP+C0ROt;xg  )U40&FYklYi!"vI8`/)#uĭZ!~:z]Q#e0 SWC.%<ÒQz12ɀWv>Xkҏ7| "cQ4D ɌZl p<]Kwm`0V}uxaovP*f*6uBp:-'`XVWՒH,OCw-? +uzR-Qw2)ĚynF oWQcn/ 4ȇb*ǽb)+#j. ~_egP"K{">νs)s8m, l08$ȋVIh z16ɥH”V컉>%o-|)à[2559ϖ FZӰ8n|R"akI#t;O "ɴ{޴eoPF%_C|)tՒ1O|,gAwAʒ.a}UwiJRDWffA4[4jKqyZ-?(j,Ιs`_3}&YP0"oDŨn&4~E\`_Q=_G" Af3xC.\ a=1fj KCYuAXsٍuT=i"0uf8k5!1}Ǹ$TBYix7M'F |<泵dAĊvNS܉I"į}jnڴ)Ј°RyƎ QRn,C|3*gцbzdR͵'YZ@2`;*3(Os!@DhZ)B[-Wmj*Vdg$_42i2(aV {oWPY zG;5儝;>5ۅO pfSq2M.2|$._YydǞP&'vr@\*q +Sؿ係 ɺ7 8_2koiק *=f+X.& 9s T9j#B^6Xe4*˦ž3D\ Y+Jfնur%9=|F)T5 }i*KK|1˄YY6%v"]*ș$f#q_M4ԯ^wowd;;Tʴg2pņ5 ;{?vHB8 ::nc m)`$Z s7•V%pDtƼ@^BNl=TfxNinm*1$&#'G*GxewΖ 26sښKeUBPG'; Fב:n3P=d6郣OGt%I#l\E$6?žH0uֲ>\酠'$3I.:JMHw '5M SZN,%hwd [!gL BJ`4lIx V1a||^qإ؁ٗ _g}#hY|&H$.Og4 XzHrScI-@zcS~~FCA߲)KɪY2|bnuTaCqQD&VѴ )9Ґ,BLŘ( HzI"Ȩ8[RrJL#;ik}k!sZ,,%J%1#)8 *iJ-}85nf͸ vK)U9VDB8l,դ>]> 45.Wc1˨hQ^/S96TwcX"/?a:4̵ ǀuҷ{Oj>?U^GvY]wbԗDg6◛w GP zX$Nk!o/ey++)F q*|MF\5 mC"1I7}zU^}Ӎpf|uw5-P5zxld2ܰ"f./H#VS>j2ud9Huщ2JˎqKB11Rv7äI ޤ#mo)u !f3 8_CL*` kT}ӄD|a2OKЯ3(lqH)Qhtw̘Zbwi"][vЂ C[.Kj}cNc_ h_mUϭPP+yH!?m|tUR*Mm 5ڕ&읽P^3NDZPtVڨiy:¶,e; OqYth TrD5yԈ|9s=', 3J6cYLPHQ:aDz6%fD%3㼝L.!+8DmKļ/\a5QLNyK4ɚbm䩚,db,'+acA)lF@B5Ls/,RAv;ѢY+EMqXsbI>Sps2O&V߭hܜ:hUqܺRE !?1L n[RWYSc-9Jq4bke#0k'ѥf$ƪɈQ̂6=ʥ\u0v\5jXgv(|{PznLEۋ9oΰ"x\] 1p˓.P&'t94 )mtFrKSў\A懳 ^4) -`q峑T}ײ ڌu@t?7K֋nZHLd'wZ9X\&ggWoxQ{9UPWӃpY5HZCPG `MMq"OObo󮒞uF_ EVd f% ϴ̈K GvezL.VX7a6>۷qL=l(G@\ жZZF_S"bK?v9R)l+ś\96iоm+M܉?_ȿGF9U;Bm _row4\ A U l\29l;AjcwG$kɦ eg!e wۂ9އ97?oF=M \uޥPqǑx2pz,bkA/On4R8Jٗ5\[n3JUW |'z` 0@ t]fŚvNS6i#+yY%GTKdN7tыYdAUSX W5dšցXD[ H:[Ħs UM-8xxKΔ㌗Xyf_/ɵJ&Ve,gZX5Tn iz0Єy4ˊ?8O,-DBdؖX#8 ¼gs!!hwB XceZƇꢹhͰUU/=؂+}H7? 
"u`EsDcSk6FfjxT,h<&Y0 :vUv-vG`Q28 T_#2ː`}4o@vpk!zJR:9wҬ,YzTS")d '9hA32OAH~c_xF؞@]1AWć&O*XZeo$5u?6J聾xS`V~5}!E"W|EhGVG6zfEvW9=UxHL_Xfb(kD |EG>OLf]n+<]7hmQI8<@GڝaA9lROQ7>u-oYf穽 >!Kn]ՇNj r#1PQqVdrY%F}ʪqt-(% C1(~ qHsN)@dBJNf^!b9߄ S/hБ PHE&}4ƴ 5|pby&37Jۣ&#a}VPyZ2`J$&§ʖrIn!R, `}P8gAU-.a_)?'+ǖ>%dNWq,ueXpҖ~V(vAQ&4l2!']Qh+:)Q~kf]~:t Pk ZG%؝%L32B2n;?OSvKt 4Goj@mob;*sQ'ݶ'5rOxłdapAoN02R& \9&K;F0~+:]gX3!u%#wPƒ͎RCv1!i:)Z[ iՊ\?jDWOU~*YU> V:zj+~~S"IִG3%qU4[J֌ܴQ`)-؏{?n| MX[67$ nɜ 'לC0֨ͬ$Σ\?RUM|! QsQު|ovׄw5$DY.AhO%CZX̢ph'cf 23.>:ѦdK[o?4uXuMiJSѝ:8ڮ P5U> v&>>_u“\dR:e{zk DML̔+T !F+R(Y+~廟Y[V: e]=w>=/,pr&U |J9B'_q :_qшBo&Uöq [(豰ղ w :~~GR\WLJ] dxZLsOSdŞOQ{-sa|X$0W΢lD&@%K>{tx[bCqyU|hGL)VI{CH{"!I̬Qn'ಎ˝pVe~kU # e2b@4ةm/ǟmv-4U*%S$ 9՞hVvҜ0]Rd({,j4eWoc'4ؿ=NIQIjBò*%)^;EG\roW}lYv#EGPwzTڔycv5i=#,!6u9lS?|"ml:)e+T.NҼ:L]f53RfTcM 1jDT3ԍޚdnqMȡ, Gf 3|ʰgUo.qG9GBMP_)<ݲ}bO ,{pB8x =i{l$p,G"/ٔ=Y+N^FBU PL0JgwV"ap-Vqh .[] b-lccҼ&ּ\Nڕ%ve[ɻ|X(r(_Eb/)u?kxi$. B9n7G/ه &;8pp5-b+hc#{kbIy_?v'XU!^K|RٲSIѲZ#uxڿhWw|%^.Kfgyln"MςǗI" rrDz)Nr[cV1&+D;bx7IC\Mb ̺*T̗昵">ĴyhRTQGp%z2[/8 ?{`5qI iy .|/ 'l;ۥ𚕦%l{4y$ //GH"Z#`Cr9)/I8UԉY_>t9$׃lAi"zRO~%3s6zQ!NOT1Ia뎣Ctjp: UpYAZzܢrَvu޲!mQ jz/]SV`{ Rt>nf؛E^{qwoAK=x{ 9'.aCiX<{ 9om}`K$<v;Į 㗀ÔC/_!E 0 h̓&ZWRGb]~͟7D4ױ,qptB[Iǩ^j%VK{}B^3,[86j~o}_(2OYKT%pJi(cZ ^_^H4yn!k>أ"9$f+ZKMy_3JF~́FDE{rp4QPsaqXG-+R*Q]t8Rֈ9 5hB% 9=uL(.ER91 ϩm9My!s0uX7> (z; riX Dnk~~m :@-:n B4cAxD_aùqx^H%nI' "R\,ݴ֚ xZ.iQ&vS5y)Wm4QaeDe ;TB3cKx&HDD;{ XP>@ZTyR6%.rhx^ 0Ei!oY1eR:@NǪngteSܣ,Q~Qbʻ3M@58OǽE\ aٞ~oZ.BEiLѼYfrX۶0oiJ H CZN+G֊PhoA!mo1 y(kY+/7RBwF$8 ߓk^=[n#,jM Upywxk҆ 7rx笹KWyD\IY 3nCx:y$63fa:6 |N*dz ŕZ9-oFMxnPzIMW+Z0,ߟjW1 ewdFqFȎKqa` V$n1ɲAU=aJ5a?U?p`uxbںؾoQ%Jf* WN mw Iݤz3aykP.5J?6A~ݦ@PROMǸQ";;'>N<Jh4;͠Qv0FPƛ 彉~D-|b3.I/.^P.n?ɿ`M#oKh`x蝡mK]q RC7Ҟ2)H*yF>KX~M6gE &ߪy`gQV[.5_yc(2l߯TC0.#ÎޛOğҬ # O=!x; /ZojyyX!yߤ&Lvu'5W,(n| :AB_毡߽71"ؤ$l M uOm"_4-O4j82!(㞈]I5k㹞$ԗь-c1P/jeW_p7`_I(CWo.Uz5YE,e.κY9ռmU8j_%KB[} q~)L iaźJs`@s]55sl />IhXDtVO,S#Tm c-Zd]Ⱥ}c_R/BP9=z}2g1fYŸw\_~7+f{BA/ȃ;6kLY"2vF4oX~4PK'Yr:OxMXDbq2#{ʺǻTPWPuUNYiBQLTD:; f-ͧ_ :IIOofe0!?NIF_&(QT(iB+ePJX7ݪY"ǐp w?=ñ!X8SOVz '/Y24̀qjaN#THEQԴ{-7͞:%Kfg!** .B8XC)R j"_Һ&5f>H2&}t5N'ψ;52;A FD 1Z k3}-URVSkTГqoɖ b.uyX55ɢ3"=慃{)~e=b|!^#\A=%G%Pw0d;9)Ctp .7F,_ƞNʺ,aAjI:YD%X&ull_ ξ#Xa=ܤ00Ae[UOb~$ -p71f[=0CA)dsC10Zbô0<7>O30΄s;IiՅ'#aڦ )&gAnpㄯk~F~J]qsJ~mځ--#7R@؍TNNO;ʔڔ06DThcVl08%KrR])ǒBuݳuY\郟])N`]w؈<̧EB ,'#6fQ2t^]R6\u X^4_UƳWԹ֫ʊ[R5K9ԾPk 1:6ڻnA+Y{H9 ߭8шigMHβwxvz_p"'kGG`)-fM` bBUkVE'%R K ,>!-ғxyw} `E5ړ{0/wg:Z{,67)5Us=M p 2wvj|og6;~ܮ s0sYTN2ƛzbg0)5l怔[(%g{GЙqfq"}(GhA8}ɞ8"e򅗻szl;/r}%*)lO9a4( J !d{XCU%{{,vE׬ıZk~&={&&"{FI5 l TIˢ_*!+*o8AZhfQ USWb0ܛI;2=%7ũ9~4` :-y4(l v"q&|(}M9&8]p oJ5+IJe-L5Gdp4 %=Ec|DG|loh0H`7sDi%Al=uL sjyAFpT[\i;|C!^cj@ܯg&f)eejwoItQ3}:_ "v; cEyӔD|fEF*Y4#s7%yCb-0GDOT0L-EYSVVO:w[EȩUoF)&ViJ2ғiΨaI"p1#ƘILֱq2=.zZr$څP[٘PGJg}Ӻ%e_`2ߖm(W6jϾrѩaͼ)3mǾsD Lv1V_~K^ J73mOXh'Wgr f3&ҿG\8]mF/r^1~ÁXhx/<>40 kmH&r"SL ;iӲZbW+gFR%F`S>3czp;zhS!hghh|(h\mE18LjFBNV O_!g@~g _7=g7% PfȞ"MJB'6O9_Skpv H3=l/O[E)Ek'CK-6|Bza')ȯp5@:K +N>Y䚕;+;NZҚŐfM.[e̡ďvHkd'>Da"]j.!>0.,OFprI Go}KZtRfha|޳}5DL l|f"ej[zgb灮jFiPkaX=᳜PIu$:$ڡUA$4p 導8RDG8pĦJOW*R(-7/oGl\ RIUC`k.} koY `%5o?y73 iIEMB@9.a9ۓlDЅ іWLv-n^Yx?@Ww(r@@'+i`ʯ_=],iO(v+CTsdj^ړSH&;41~2ĴHզb;Cވ*\auT*( 5֭87Q?b+76f"{& \y H >ϖM=Y {;VyW3?2evAǶ̈Z/ǥM_x QcW5m 1{!E.Za,FF;xT6TvgtL2 `0?hrw>m7ڎ;4I向Yà%(v8̍El$j c/щQ` FTCBcgz:sXQ:%ؘ{FOz~ 5y:P8~t?^%fDAuY `Q3^ ;P^+Nń#Wyw _|q80k9ä%&*7cvBoDOȀq@{Ť`>F?/4-{_27r$zՔJ <=pL 8 #THqJ; ς:Y/RY؂ vʪCqNPbg0lJ4Kţv+ lO_M.^ .kku5K<ƪ#W.b/t*Ԓ8-o' h@u76g^?(!QccEk k!yA~2e|EV.ґT5:}%%Gu# 'Ŗ^, ??HPF<@(DWp|2,(4kE(TB.m&l!lGIHpSGkWTm q~=by0wc9g{KV/g".MYq($1--ͱ$:)Kqu?$lCnсL96 Ӡ`l竖PW[ ֗B5N@[n cJydI!b{̞V|B 
TT8xh'6`<@pfourO8J!K+m ]#cw~ >:#nj-*ḄJ}uQ/ I ךW5+:4s6xPΣ+2"@[Ǿ,y^IkH} 6+B7XD xKWUyhSZ[eq1*Y:# Vsf8YY=QÈ,Ƶܬ3}g]xN/KwFfRi_;Z |X!qs$. O5 4_%V L[EO9pՄGTEaW<݋XiBӮGУIE0s(FNn{wq^bd]x.fhi΢Z5(x"\JhW©exF(z)Gxx1~Im?Iux4q6#ɔ\gSW-5J#NqRBhl{f%GJvw;b@"L XdȎ%fAS'_՝vm- ̕ P;%J.Ӊ, ukN ,w"l2ƚpfu<7vVr)~(9u N5s%Je7ZqXֳX {k߲8c{?pTu){fI@/V&WZNi^īQ \E<1]ޕȢɺҳӄ+$3)RW'T|E+V8M$#{+tԿTb)L?Zikd WFz13eV-ID=-ԚӰY} Q=#5.W7MmD#BBp^B9<=;_SJT@OwW]Y=L̩=j,M԰2rN0*3Qqٱ ep N 򴯹_9Be2sv(o^D*P{zPH69s $rLeֲ-լXokhj+$juDe efVԐlmʨɀ80 'M7c^ bV9cG-ekr'mA3M V_rn:Փ^Q z/7 (D"~T:"c4 .66!,/QCkT]&?j|k?(✫ew?Xؿ"T$. t?hj_O(3jE42'~u$)9X~Kw%Ef;O5 ֪e(#Dq˦o !t ygRZJ"ev)<8/IlSYA)WO̳OjXkX _ʽttz<ϥCڀgc0agy% $.5*i9#`x) hxB8+IZ\y΃r7$of;vu(_>SJ48nAw{I` " c', ?+n5@<_))fH/ j;l?%4In1(% ol0Z\HS0t0wHg~/=Ayloh\7({rwD׬gO11M Qz Y?Q_qe0.)~:dWXiGն&F 6x>ԇ䩍<=ˇyf#<]tPkE[[}wYJP-r,5 }IKG/fWxзSNTcVSEЬ]&a6|{tZs8RriP7Hs]\ Yn#gGVŔ60wUfVDpW?eB%q.*t[adwM=&䛘e)CEʁZr'?qq}?OsxUW&w]wCN?>]z@v&4=rA?Yn{j3u^Nghk+Χ)?{9|T)W~&ƥO|$gh,a.)?Y]`P*}_ _oˈL?¢0C`>~>'+6dt%1W_yX4:dipy-1.11.0/dipy/data/files/fib1.pkl.gz000066400000000000000000002735541476546756600175410ustar00rootroot000000000000008L#Mtest2.pkl[k[J>B@ٝL rS@.*舤D(^U嶟9g2KUW2Y{sD]*v2˲6[*ڸ݃!*v?V8kvh۵zgD(Qm&}zPZߊ!6i03(kS!j/:'B|Ueoqc͓\ƼlR6ιv3?igȞ[|^gFק)!;?a_Wn$2ymuV7?l3!yn;`ZqEHYLk<_ASD@OTs R-W^.w( VB_0y/C0?dzˋ~zY9]JĖO5/j̛5oX}~_{\J|k-^.5&S3'=Gr:RsgZo*;ɬ&@)%m߿6G;;7yJH*"<cy wLM7SK]Z%z묢j(xo͇_BD~PYм:ǚ+ Kɵ={xU< ŤŬ} `G#(,En\Dmb>;q\l'a@{dpxX:;M||[ vq{NS6̄^q_^dlxE4t@b/l~:%:{Qom۲>WX ܧp`wểiOɷH=@G|9nDX.wkSdrI~XVnI|En;};YU^ {V7YNvW&w WDӱ~trX:܅-0`!9bR بlw iYZp*ś{Fe[b8׋+e>kUl4B%TǤ (;ę^pcѰuDiP$I`Q>zĈcc>l`4RL#8Wd~EbIWpTSJ5[/hVeAڌBɛ<ƩER i4p:|\V'#tp.q4u:fOy \ilX8>}`"_hNzYTP"$0]注Fߍ]#A,-U%9_Eo|6g"<3]:%ȯA4(-OL973ْ#"z1Djd{~;tdnr!ǩNr+NY /t0߸)eOR$T.afLu6mV2&Sh{+Ym!Usl'v$i02>V4@ycHw6,Y!WwY|7?/38 O )Yщq 8r9V:_8BΡĊx8C©y_!@.TI3 L`@[`y['݈lN!a{r _ P MpORg [2gkD|SA[ezPryRpsHh\Lfw8t]l#Khd̻f H-7-Fn'?g!aG+P##eP}w+Ji|6Nni_*V!gy';얂5ᔾǁ~,v(h, S JO]KST08,j$_fߐC#fh nw]]"5Z1}akU,5N;Ӈ#` #/+E1I8 knfKy} Ag_=.XYCjId;7C±~8$ Nmv3Iaݛ/mp>TAe#K D8²7@&f#}vDnpf+a$+2܁x1 _6HGH>"l_'owNY6 z.sZ@JZĊOH!%xB/9qnU/}BCGKmql%GB3%P4:ݥþ$/}/SWp62Z5bPͣQU bjNgwi{[~Ciա%Y5d{SL(bCƒ+FCx;Op," 5DUFsSTtqT[PY%/)$+fh`f* Hpܩ]10C&b_s@qcYl(l> i`wq"gJ7: \Bjp-\y {a2>K53± ޖZO`-Ռ8J2/y> UXAY.q$zV8LPHGj`eO9§g)Tdz(x}тk'*W%T!$&&xPhs#D_73T FW$ I!Kjhbե@a%hՉ&PXV`_Cl< /*[UED!Aa`q(]zqx5]&muN^V%С@$)p"q {Yds:vj`Qwڤ&w:xɳa;Jr2?.9^wˍ!Mxdޫ{h@E~PQ }9hQJ3I?F;V9fGX{UTQ"c/WZgՓ|eEL?ދԞE%⦟da(K1ۦ Y$/H[CiGMLd$!H dT >I4[zXVɜG 0ѕN7&%K # 2tREk@ȋy/8U Nj=:@JEPY#BB?$<*n6 4ӑ:yީ<~YJn;\RUԋPz&tJg=k5V3ۑ*: #E";Lּgu3.g++92c`6" x Jz#C?MEP]d;oB`3? p%}Z ,Q$5cj8\*Ib?r|O!5Up=;D6d fMF"[6^kfʋ6%Ͽ6e(\!!$ǡ?6O&; z^ƚupQfS#IjM~ ZQWFh*ש3x:[ g{O .r~H|ƙHǙH|է915ҘnBP*ӬҡYOJ۫G[ԙo+5ݜ*}KtF6:"N1T2ဃ|Rh<9e[i9Zu[L"l7~: +8nwzL3b%QwHۇzy]%$:5reИ , ق,4̍ 5jT| pUtym yĬV{:45cnᰜ/>EjvtBޔb5Tga,$Q 40|W.]Bh͘/I>H$*kW2qP&E/O#Y+pAV&?ʆuH`n[gfx=ݪx)cKa ֵ bW}Èʻ8۳|#EM 6SCmQ\ʷXX@͈.rO=+6EwSk)#l?`*:Jazf5HOTX~Yz0P掛@#Y~x;;CEQZ 4,|8#_3!LO",*&/i( p&j,nAZ{"~  t eGOÅ2 YVE8 W#1?D4:wh 15[/u3TUTCsl2Ђڮ>lPK?eRUU5R#=x;㶚YH' H O|FqcsaM\ M}U"hm8C}|V &ea*Hz-@}g՛a: nySy,? 
Wzl2t?uI=qInޛF8p1e`c 6 4Nc pr~'\$/[$y^ՉquK0CESHrԑ >'d[44;o8 {[S̰N~Ɲ4Tx'gf 4>_8޲~O섅0YjP$r BJ5"M`kY"@?ERٛ!1.4Wf ja #/ a/8jNr5 /_IjPC2C%<҈.<ö75ᦣwp'4B|InBZeR_ȨsSCγSȘbSD᳃\Jp&~Wj4HqxU=@ #?&<~NpM <ϵGW)_+={&/KXa~Rx mh5bXlYUfdH| _/iI ?M1fu#O &b$~K<xhuIǣśE/6W^Ecjb$%,7> GI\<n/-nuQ Qr _SD刺xh;] Y4,cM-"!<_y_@Lo8o+ LdfmEbq7aJFa^+ݮp}ՇXUAV9=&k F>Reʹp1FXȯ!*;p $",[*W-^* /0&cw)rcB;r]^|*glFazuSdwfOlҵrLO>n G7(OYIeч0k;YfX;ԥ9""+e-e{0^NDw <<*"[%qfe!pi4Ȓ9|%&o?"H44e 6[X zzY$Qv_IʟEeUɨ0dWPn/!ɂމ*ךll6,B@?cY*lYyXGFt@1HB$G 0x gqrst:BHuɈ0T$z,``V PH r ɯ*Oq.ޓ4\,kXdziZ3b~g''#QJ>nDIt ցoC.6j!ٝW]əɭDح\2|CHOT&^Fkܩ8ȏ_?s9L~!l` |ZbQ~O3o+qA[O N&E.`W4#MY9&gW|P_*w VG>Al~/S5<%SGKvx\*ؤ_`۝G"y4FAiEш+w;txڻ?@>!c (;/Vd>z h3n߀V Oh^2VGdkܬ/*пy S8He}gj}&7s&95hD_t ^yg**L["<ݝ-?Jߦ٨]Rxt9|yͶq;EDwD0iPNJe^?T/[k6e@\I-P&}V,φ5k`h h{ު׾,yT-8pW@UUGQ&@M ʈ}~BnU20ӿ8bS A{|e*cc4) ֘@8iihRc~׍!ܣ]"Kc\7ye%3\ iS#'̗׋wPovAd0?`aX. b;Dx~ڝL[}f0:n-vy2# F>Zc⬦Gg5$KԐx`_?3J<N+-x5i4]ʏbe 9Qk32iwuUN.$dVM2Ԛ&<=6 r=b(Öm }uޟp3V8smԳ\xB| *Yz@&< h]*3eթYpmƹ&'DV#H}Kbh2j,-yw˙[V7or ށ޺^R5/LhaI`L9CN1&IWXY1Ǎwg?el*9q7kpĤ&ط/$bt8Ü)E _׫L^<7%XxTшFUL?ܫx6]KW,Rjy>Z{H&6 * )<-6nAt&xaf!$a rmO_q]xZC״c(n2(W8Bw/T:Ҙ eݲ 3,o]['70$IsKzlf'xV7bEG.F5fi̗XGYq*}X1Q^T9|Wī~S#]*N݊,n ;ٍ'hn\}{. Z3qM3or`!c3T!᮲vR9`>ĵ =HCS/{ 0 J>;գ74ޔo5dgK+XxM` tN픋G6Э-j3v_֜ճTv"C;C&E Q*S.?kA/w\طÚE;F{dJb1+NP=-k|GoT 巷6䮿 l#0s6_@Oj yLSZb-kn|eᣬ;,\@V̈́wEz=#3u~<0K`u1\zT]_ V<ŮDf*M riuaX XxXz7=XH?+v[&׽Z LT RThpq<Ғ~)|]+CM$yd%4 7C0Bu.0$ɤ] Z3, *=l\я@/X.)v-+ Q<z5Xcy,2:x٪8"k0aU~}>^Ȳ8W)1<g,ZHԻ=Dc eVW%#]/=Fi`'crȗw3݉[eA;F" ^kcIJKd4 W.e<Yz 9Xcgشx$dgW?ѣ'wL(鑬>fN1.u,8rC ]:GJ#J:}]qwo1$E>{?bO`S.AjZ_3[to!h7S.\4Ajru\޷rMBvhkY&+fj4ug_CjWY?^4b=U-xC(C$Wr>xG/SG_|)ž[쀻}眷)i>W(.UgXzYూ]/{b3i)/WPdutQ 5 vZG+|FB255淧. 51{XNR *FZ0ڎO mŪVIF5g+3&'C;'7'U w/Y~'@0|pbÖIH5PJIx塆@jtF!`TsFb2 _X $ 5 jyfGo(S9LafAP'd-iFKpjh#-\` ye װH +nZN KD^aSw1)%Pә!^VnkoVPf}lҡ^B;ԪT9V)95}fAV|b<0d*9Qn[ 3yH/ZԞ}:Y09.W6{TލX3)s|16ِ֠2da(H セC @턦5 k)ZJ9$\`)\QO7dF_ +hH]2J5 `]XVBv{y$ݥHIr䳡8%ߥ@S&0lqi>y =l!XlC JlZ50h4bDʠCB,wWYc7Ɔ* r5a?|*$dف|Z0V6WypfpW|"@Y؃f UYLqnrxf,1xڔ28:9(lvwTYUk KcrP?:NkUif/>ٶO!4*K8mh`q~5/ 碙l3Y5[?b٪f Db9 :@xObck%"ze55pҺhl.fV_2cekXFDU6.=E`A/uF>8$zQ=tsx $t2cnk=LZ&;~qL_`j )g/Ǥ'x2e@Je9"kFo%$UY"'}L`#ۤ|(eϟYE$d&_98*P8=<ű;=?cnaqmXLndOETYQ0~`12_]X[^,15B f_f- b=y~Ys ֍(ꂯzl'㌬v*r0i՜vfjf.[S \EBRLG ?@(w]VuXY)ۧ5'pi]xyk`']o|ή|ZzE=pGړBv2t%W|uo^j `בoWf$' E?W,E{H+dQ7xjym8@`Ez8".j9ӌx׎iP_~WrB+(7AV Ve]xVc S~lVXS0U٧ 5 e,10G@B DӸlGBN8BY'VTY75f i4=؈p6{6TR#Gn&RYRt/n441rzQ6zB /''SKxJGM(nʂ}~dS܇܃JK90!y.6x?7ˬ8EI%C?py}2P?Rƪq/]d&,tFvG㔔L},T*-:G/M= nBB]uvZH:P@3cH5cp)a@@bRYr_"CzZG|t Pe^-O৻r_,=1:w^r%}eQ=d#=ǓKX #yMx<&Mt'%bPWL27dHW/PLSvKn<;<hXZ%cٚ9 iD؇v-AFv<>k$YCf(yjz iMS uiv;;܋@^p{K**9!<§-GP9knT׉? H˒N{I!&IpxYfSIL(ifbW:6֌8fIYɯG%lY?4)iPK؁ ;006 햸5 S(W/;E]bzTh@ZU B,B M<}ᩲwXr> !_] zf!y(sv9`] 2w>w,y#"=õWP59W k;AkUǪ>?|K.O^sB[5cVE/ \lGrS8-‘]W˚e̜Lʰm} n,xҴ3UbG})[T:~H8 |AcĊ|^jNt)#LZ~3wkѓHk[\WO2>1wS̞jE @T ۽(DfI9 7v0=I :Ȭ=Z(yR_/C)x|,NhyhQO#iS$`{I׽i^f{ "yH;I|S/Uҟ|R+I ->{Ŕ$FٓȚkAG,2#nL`c(s/\gt噦#QHb' .)׌-X(E(;r Q43n(t#l~e=(}s2&v٭&vzX?S7Eֈ^hQjCpcd۝-#4رzv]zAZejV*.c__ʕ9A/^p?B+`tFS~$!m6gP>7>jx=D&#mwrhA G5xO+9W(6=TaxQK uϒ3P6"<[fVA{ 7)'rl2aS;\,"FoV# x$ zS;I hTK5 GԊ`K: %3;xP6=l/Ⱥ$FySGq('S?f]J .cF<$XK}y1k;4'8wd "%G E3f ;Oi۳o% UQIn1[QwĢ[OŹڦ~Sf$Xť}P[R (,-tȍt-?cok;^D3 7 R >gU J>0و%$b̍|l)ZYle\郡%lp)5%bkhd⽼>/l0SRr*A`?T^ E2x 4 ~W?e@_p"S/&^+ΰ)_JDAwߨ7צmϩrB9/Q@B0 Ťl.JE qPw7}w0@t vM{9ݘk$))Mq>>\U3f>}HTr)y#h#N=MQ5{l_s!y#TrX8 /Rtp) 4P$b~N8 x#ɲl.>%9MVLJTY\6z(q<7}I,|͐H]NHļOϔ]n8 s]8D +to/q2H r ȳe'2As莧t'm|u5ǯeWB,.JZKv秜Ӈ_D YkE@Qjv_l31/p6Ʃ~ vSqfڤ'Hr +׫889゙oH=] k .Y0tӜ5+kZhlםo>Ѣ+gQ@=g>g! 
;XMd=l3DkFٕݧ!NEoF Bl/5M[օQx8֓-l\>ZL7;gQW5 Y*7 W,|E>wl/҅؂2hlM'YVmOE;qfE/VD8g{J RKe\?M 26 JugUQ!q+|5شJB׺zn]eܠ$ko<*9 44Rf85^l24/\_ihX:ְD1b4<bXGGcD;(*`PeF,I'iKKU) ()tu4j{7^wx-́Iځ< X nOB .2_kwhg:W6tȷ[B,K鵒Ngr51 /|wFY*qgyp`;&W(%(HiZe_lkI(5HW) ?$ ovZz=]\ ֋T!ek -j8 Nl@I͓)ّ$LS1Guϗ!lr24ėZScRlv=v˷4U-M6"afG+Nl/J#L^Hw:: a;'b 1ymCu}݈f z0% 6徙gK `9r{4e`<%6>Vhet1_%6[\Bu s˹d7x2gZфS$5R ; +1u1¤g~btJd8 2BO~*$s|-m$" cL%=՟4\IIznA#S.st[z%[ݽS'QeZ^ZP ]E)+rؿ'5Fh}dn$E=buְ~e=ۈq\EGO5&{y*<(Zzܐ^",n蜮 Q?MY 1ٖjZ4KၻUmt6R{$ǘҔsykdל4DAߩXh^ݱG@ 6 oz ^4T1uh%^Đ#KFMPE'~~*p5]Y%JP'?1@zA]%m7gHBug?M]{.(Wȷ7<(JTH儲(B6'}3s(zۦھxPBsq# 1w390a5F(7n!Z#‡.BJ~۰,1<<˄qm1km+$ڔs(+eg c_>o:X7.gx@(32@bLZ1 fK%o|s@BY226#?J0x wMh|otF$$lwWnHԊce?3Pr8*8^\b ꠕEǗ/MY5WgvC!NR@,_0+v#q\ˆ\R96BL{f@]?響Cxvn;aj<ܽF /~W9(Ż/H#X_@eܚB7]eV$ziU؋ bkLし|\{e|I.h>rp"`ϩ@F[fI:NϜ4^(?[ gP1;*ymȩWˠ-Myfe eݮiЉZ@@KЁĢQ]\XSi!!QEhkb^"5O~bW[&;`Po4|XR46&v#p="#Nd/|;b30T=)Ԁ-CI3crX"HB<ˍTэΆ^w K6>H2C=ε2tB/ۢ@ DhkhNS!Y0y9הu/< LPa>+j&gUAfnάPTp h;kD: T2c;%[ᬱk}ő}|f }71׎z(#c2J>+#ޑC&ְy_۵G`€ U z6n;~>XGTk&E&b \#Tz+Tv?Kzxo<ͩnck#Su"ɘ0r2Wv,`R(ϰ2 {oA0 7,0i[L 4=p!U&:p jr -{y~xq{'6@?e@ 'BÜBybg8%c[ט?Yi-ϯ!wRH3)ѱ8*b6璟R~kTHVY+LdS#(Pn _+ S'0XnrA` n,?+v8z6$nc'rAA}q)Gche7s,i>0(lr^<6W>oТ+ecHȕrhC0Tz2`?ˋCmY`MVgK,R>M'2c@Mw+LmE~ꕼO1zkE9U .W6W D0gܲ"$pgӗ3 "Xb]~7CehsN R9tSF-Qns?3X}p`Dw29& |E͉}WklyKFE^ٶ-*Z;g6nH>wf(?k"jyW9dgV}BXlZD/o,kRXgR˙3fRLS{aEzϖt}d :8|DbVsV?>dbo((b٨ZٌNrnB`fNՊ~odjVJPFl80n f'YP͇NiSM1!2|q! _^~2庠XAdGDiwRc:\ukZ[5CfV A%Q@<_qƗq6 j ?K8 *K}Md%eNZG}VLrUa&^l}kS{3|hҍ#m iGp=#N&]&)xA0&NU|b^\p~@*i#Jl:7HQ O7vt} 3+3eCդ.!LFzRX^!iSbN-;Ⱥ-p0.3bl>r2xP*ϼo\.S{48ߚ9Mݘrfȫf7Y<IN,Sćʥj 2>k#!` +ⷺBRi!jyf߰[Ǖ06ΥS弧F!l{t 7%J b=Qlxƅخ<9 D=5(gUsj7uqҌ.YD>T݈6)&RrAp_/H v\pK?f[:}Gy/ڷqy!O\4TM)J_Xg`)> 'ASywX}sAO}5b74m,*(Vǰ Rj뗢u$ ̿Xi_wrEU ^O|k[4]CJɈ江~!&atQlv+e5v\δ[%3y). LbCޛ12BSOvYW'.8N8K:Mc"l=F5x01~%\BL# hۧYaH!xMWmM((ɛX\m Zc`GK$F#29=lx<G6zU~Kh'pRVmKI'SUNYT’{~ލɅ4ɑ^ N0F)!iD]a q3#Gv>_u0y/$?TEe| Ғwʄʢl>>8E0vPCALPjv KԘ~cv4 c<3S>JiUߠvjE|ƒU 㷱$Z}x}Xׯ{Upf]=Z.vZV@!n@egՏ̌zy67p2fU4Ί b䔲hn*JsO#"geK_X󭵻I 5ckAW4簵$YMxw6}lZ CFYyȀ%`FxfBW8zɧ-0eL4+Z65MpXSe(N_G՞Zi_c~yf*EX._ %9(&f!H>x(*$4}d rQϥzʁe  x黒Wi$B1Asj sHк_r`+v)5 ʠU:mu/u[_h,|G["&F~1k78,nr|_Ki{^Lطx+5NG%#q|v4ҴF_E sr9l4P CT#u2 f^r?a+ΕcNOK>=ѥ9I{+m|{'l, 8,lVg9nc%+ޞ )HaxlM s=Y8@;kV)+f‰ ApK{ZjK2L։Մ`9VtJU2n&eNXs*Yp~%U]"͏j7Vg@UY;_s CP%14FxS- B/~hT5BX(<$yvgյa&f؟ ^lMl| ؓ_j̻,qnvW@ Il8z>}r0*? fLEX>3Vϔhv`:b'~fsrBFlCkn3abEI {FyW.U<Ȫ\' Wd#-DH :+F&)ҙzpXt*s2 yO84i!lH~2b4QDjoGL&|MNbŧ6 XfüO*g(3Yy#` YmQߌYh.jan(tp1Ƚ"X;6iBHsweԔzK;e~N%@jʫ5<Fw9@F҈3Oۧv8uvn~E];y瘱=,qj lJ>_mwb$vOhEa ,ĝ :e۝c` Hg"OБ=3K%+&ACSv ^M$64C9e'StOuvkZ RVM{yB:В:ME"Gq TړTYQٵ7؋pNHZ̳Wˑgѥo+DxHw6kj8PgӺ5 w>,dK8ţ5Oj.j.4IV1 _>c{6iKU:kϚgZX(yS}X &|/A駊(Uh\h^|SS-soV}ؚ'O[miڹ|Q}f- 6 Zl,!dcvfrƪ`> }d5RrOLEj>u5=9,DH(gM]wWf}tewwtܺ ^qUl+V;mM=!;KbhB,Ğ\YױsҲ,o_R:Hyfc"ZL>eQhaM(G!|Y9!.*)\44v,2L}Nf.3z7E^`PqBVOabRQj7a<'"FrL~%(R-Wdޒb.Lѻa/0Ҙwb~>. G]4̠u)f_/=LPhcze,ZuJ)Qyf7mf|o"K!O^sGϭ:SDcT3B MdpҭSk4=@!4Exk8*rNۨj l]x+ʛ8 @vTmp+ |yqǟJrȈءo? PHSPqgmPѐX3%6LƆj`]L`t viV"yєшf2;n\k,**!G S2סg>49W1H7fyIW />S\dDc_pО+|ɶ nES8,o;z Tr\B -C6/J9`>0o.'G'L)V4xuJ9>Ȯ_uLW::J OٝЛe~F%lpԆ=TxkRv] F=IܜLC1d)+*$. mL T&@P= o"nSzGA!ߵ_L Ǣ}ØIA&(wT%}*AI]R*>/^PO2gP[Zl].RB~ 9dutMƓAH3r#֯zdIxj"ʯ*Xѿp: Cͨzxvg¬<+~=f́x<ǷءEL:A=y1]{27vu|~AHC@Uo+l6J~4dv]|QDiSd%G($iഓYqI@0s Ce;}H/d\Y93}@'%2HӑF|"? 
''QBBͪ8sX*e\׸Ov4maةO9Ycd7.FV^R6fIl?]N72 k5HPOx΁ѫu AXMasU_k??@˥ eWXLU7ʈOXajM᛫֓]ۧTI @qP"WHéE+%4-?5q l@TGY'ɣ.hiFn>ՔZ @$s<۩h}L1%^~0^fH[{[>K)ms ykl2;+ = Hs ' HJ,"y˿umø7 惮E?\$*#baܪh_(JX5$jTB^TƊA]oGmh*GX5:m~kRͳAb9!Z!%RM-+zOx se9(׿IP+ہnxs4Rn Z}IPnƏ0þ& (2.HƗy*qM1Af3j4̶aqp$OuF waqK'eUN*9|7Nʻ0 ;&kX-M%fJ˥,[-ؚռk}6ޱ.k0CҐyݒ4m$iܓ뮧M{HCd7 .X*ܙB#0!9 <%h06x]NO= tpK``]FSPsGN<((<7i_};ܗbټo.֓NR >T >Dvlfđeql{~}Jck9Q>uQ ‘& u%ߩmGĒ=w;Sur$=M?ҍ~7DhNch2.?8f<7+nplji"~jw5[ZBD[XR[)ДD/3D);TR[%!#ʠ:E;aLq񫀔RIn5T(Ez uːqe3 hjRģlN2JuE!| /).&yBqWɈM:q% 4qWrzӘzMmתـ/-SL7DG_ge]J emde{ݑӸ;^^1AXznl$Is.综@ =k>\P\5rK|cHgg44h4ָ5*h-}h%3EIPL?kU:A=ę4T#A6dS^tHpC%74gw!Y" L%co%f2W'O`yZ{-2`7Fۆs ػz >B X\gW k\{,7 3:$޷M/^փ8+#6tw 5#/ ZDyZUGD[N~?V39DRt3ISNs/ǷX܋c{b9@Vܺ}Nvj0Q`#ؓja^E W(ϩ.G(:8W)o}xUp  4^G h<{_{m1vԀFk̟֠]5)sk"*XBrQLE"_&`= H()c4ccmW]]tpB!>Σd%$%fU1cF]ߐDHk>mQB*!Vk 75'j@x"[<]nLq@$Ay7Mjo@<[3Fl]]/}\J:/,\(&dTKS)p@vq!lIL5v|ŭ &6/ +z73,՜ImjMPQ-}v5$ gڡ3AĭoqJoo{x[E }9٤*UbFH{MzUeנoG0L5^H,\ۣLGDmJYQITBjӠj_tj塚P&i$prj:Kg|, 2ن8,NNB`Qd53AmXǮe-̀@teg+Wʕ-5q'e;6 $t40)KrFwyNz| 8ӼQȒ}EX *M3zn4BK[2Ca-lD" 虰n3Bq4$Rxȕ58Vz"9~W/b$Dvw2nu!+NZJ}'1 Եb,=(Fټ[hNӾ~ r.voa N$ _3%C!5<=V7\j9;)ǯ:VAu$T"ef V]g- H9f#? #)NѱR`WAeUا@,-1B.gHFOVd4뗉3qb cRǡ KXcϿ>=F||B\hʥYDR]3O8 [˚1]}BUoo"RT .xsM 7s}կH,H~MelT'tGdFzs5⫕Fq]=>}OrdgNc $=MA@T_Kj~&4%` h0M<Ҩ{1'=<3*q&،L3TtMc= BcT }j#TLI6ς0˻H`Ft _^c֒5ҚTx2R?x c[ c=&9BTZ0 v,k$]>ijPf7WL6Ӓ"&z,(}l}9[ C RZVRI!گVMlfK(%ʳBizzfutdw,d#-L7y'N(XE1rmgGr+sDxGzSp ?c˽aRU] 95nj󋏔 _I6ܬ?(|7z \5hPXlA7E ʐ0'7^>M,~@M:x'CA'݂23~}166lXʣB8+f Xց9>61 GC7!P !^YۈtL ĺ8xK̡`Q |vo6 Buzǹm^\B諂(XdqO{j]K@ʥb$ 0ј1aѺj|2gԑzc?ly,`9 : %4x@1 (l,AFpzVֶ 'Z.xv"J陟yoчaHfL m,bR8b!}RM4Dv$> ruB,_HEJAM3|HbތS$@F N#Leጽ8Y0DFRfѮX!ʍ_~ٳyO'be2՜$F+zz#V+K_zS$,N%%  jJJ}%M`DJ%O)d[XCmOXGe^7n ['}GVRB$%pv#' 9v_.cSMi຋aJ/޴BUihZ˾[q1r{`q8y|SڎK[۽>}۷fnA Tfm^2_篍R`&wC/@d9t^G 3l34i7G 8 $3]Λ[-4 !8ĹCE⧥'~OYl4E )/?: ⡌Oꋭf !#+1GA>UkD8_"Д ,M(I;#}"D]܆FCp-ޕPMO?S?'9߯u:C_nȔxd%x i yc;tUcL'4#&ʌK=ˤ\V]0GF=39މp6>GjeiP'w5/X* jf3.J=t \:g"1uAt<RPL M =&_S,SUa7ߴκ)>//rZѦ W°$6v~#/-gw"KDROY>9f?oHiI7y}>e;'m 4؛S.H_\5d7H؍tb5(89 Q9/SUIp3xdYܒdohe#LKq5KT3x_I=40=̌DrL66߰L>ipwNPCh:D8Ѽwp83jʜD v7*Tj-ǐ>IH\ŕ=7(be{J0t\^Ҁri= Vc\p!X#)Bvv5JԂ3Z_gOpTQG>71ܲd!Rc6uY" X* )BUWnHϞjXTZU'zˈBQG`@[՛'$! p! i)+d<_27)s"p$>#YnL'mNhmUffn/)4F:,g4s%L~ͻ>Ԑua0wejnSp~T+͎;}|%o̶͂\0ж3G _%D7x°n)b!wabO@.7l]ddEլ/ %T u+e+rdM ^Ih wAk2`ROy!X߁<ԣٯf_o—ID_VIkxa8ε(VnfG=_\?? 
PTiz x g~0aNf+Fx` Cћȯͪ5Ru~uJx,$LjbaDUA "mA: ko:`@Ia߻\xsAv$ $3hATͰ;QiQi V4jY ΃ 2 X/'=:_P>kXb-MenL k`W@Bdmz~#Z9ăɏ/pe*J|[6vPbPq.*(Ws?4;Ƿ2)S=M$):u6➜~cu2x*W]MczboXXӉAD#*vdY@zoi}D"28a9M"3?n2!zk5"} P+/N Z4vþARP|L{\ksY0ϡQ`WNa}QcW,0Zo(+6Z v$6 _nfَ-ɖ4$ǃ2C> _%OMA[j;F#ܥvQ ._⭢7mImj ަ:mY+ɮ gcqɢ糞>ICYY~lO/,hx\߻8jJ<@G/]p/EEpUx~tL'HGs"Wfy?#*!YpJ}78 ]ï;3*^o1O = #8K:v'zϳپe-=.G0g0gf7iqus„Pآ:Zyޢe5x4P;?^/mMy_o2GXk."~KjUNռHO9w']8K&o#q9f1ޭ57 A`4EvxU'd4cNh)Q*Q@TͯU:&}`_8FNC>O[ocН%/$ʡqZa!Id9gz,bG1h"^WOڢP"W8{Ee`o~Mn@]I僯ƼKZRQEy3u_l%JdoŊ^GKȘ>jbR&pJv0BL&PǓs葇%@sZd26@6ɞOg~4?j쀠[$!g 'ƴKD5qe= uR)NP,wef(+UX@F-җal]T-[H(y6+iD 8  t{LP a屝e$$XFA#ZhvYO1[ $8d,LSn g5CxFFUc{1&?y0 isbpJ[!T$vȴμrJBޑ#{mr*XϩRəfC"J8$-7m۱fAջzA usXn0-:|AR~z2sz} E] q`C'UmbqQa~Ma83ZS,F&c0}"Q NMvFi&{W?'њ%95.afZ!w?y# &IQDru̵Ho;%fiϹ|l!!EQss*ݤTBக/!r^[gW^;lvrD8Md;u&F7s[}~w{yכ K6C6,v1zEl/$Ěϒ"UhX_<fO9gc;9Vu0i4!bgbLp0G1NӋ\'}ɪiM2 $q 2l%q%I Y}2Q7X6B]qNSNӧlqry Oq-l5*$w5OvTk+Qw'\g™e${@i:J3;KX]y~t q` t JủiAJKYWͲd}M;L eXÃ.: ҭhP>T/WԈNܙ_80.1Ö\Lmap'ShXͣsU?ݱ<(ꜝ}.{G'2F,ǎ[lj@~I⧝^^Kݲ)s/V<lӏPlԙg (@kM$("<#FP}ћ' 6: o륱_>;aE)nTP5H鈖_nk5"Y-I5ł"/xlCM)ar}rt0~r"g umQWĎb oFP)Nx)Xb^^?xBxeIL?Ň+UDn:Pڤh+;cj$ ':Vr޷37r#vZ ܑcԈTFt$.zaMdTAWuwO$ 9nៅ\Gذeld Y2zş71nEٚ{Tsdʸ]`o ʄneѡّLX>x k "L=@S/y҉+Y:e/gX8Cq_3zMGm-_lh6}tPL6[u/qxٺc@6*:s"Cu*#"I뵦i@Tfk[cq<$aYQYX*,gm]-6~,\ա#U#PVhbEjxޤ:Wo_"(rW+Ebeya0DYT?+AS޴a mŮFH)}pB7 P3аHaL)5o"}Ѝ&?adޮ$h/8F@)CZ B,x%'Qǿ|B0J2B"+'!'=2?TH(uU@HXV~F[=ـ3V>j-d@ܣ~*}4*0^fNROh:e/BġMz ֝ܒnPa:7@!ƒYx:UwJ%4Z,Gzt=eiÿX wk _Pp =0PJIzL8M֞ jl%JU eR)PeF]^_fPfBrmeIњZƧzuP*dCUF("y|@"˞yVUoMjil^-g&!n!Uf΋pcOZSMLG계0&QW9=Fs+ycu~idu$]By])Ydr=G0RA5=F}=4:Y.zx7T!zNAfd#OM<RRXE% AAժXai⾨YO/6^bK S?O4·6WF0b;ER |`(p2>JZK fm}_XK&ϡym" ʿyDLnp֩ÍԠ3BA\}#X,|Z/8U`nhV:)6 |p=.m!kV=N#9&U9(zg7ӐT(?*$ _i`[RB:qD'Z{ п0ą5SxߣP]M̪!-r@ +H3'6mVYD.=B%:<&i=33IKW(z %R5L1YI %_<{5 lB3Y_}.`홒)8#k;]]u6mֆ"0[dZUSX6SJM($ǯ IS.v~}*?!Y[=vRx; E wn!ŒVOa'3`Iʹ&ՠK;'+Rxڬ׽g@U#k)[I4W߯ ƠX]-1٪/aB,:M˲?f>1媩x晁6 ww4{EjJ ʒ4г^ OXuX$0xFO$GB3Nb0.tik%8xвV|/rJ_RL  郏l(}re9] Mzbo/H/I}mX \D$EjK _DΓYHw뇷H:PRJ90,˸t<hgwvI$2!H=K`nmp K@xkF/[l#~ -l3!"Ri"E3L?-#oƖ5)fCy_]j'$ jdbQ31߽,&Hr}ء?xk. 拉{W:8E$Pzaٿ:] s鼧ӠECY|o 6XnX* Ƶ+9ʷ(G5 R3j-AiE/[=PCZjӼ8A޼@ % bep&i2Gæ9 U(`BwNPI2+{$-Ni8C+#QuͥR4l8Dڥg+A5<͝ a}ܯO5lr^X5lڰE,/Ȥyp8y$Jqh{~` d4C+_Bח9ǡҿ˨+/Y1ݖnIIŭ,kF֜+. )!o~L&a3OPf/$b: 7ȥ@e=}LxR@-$6^^<9puxc0Joe6f 댷? U틯H+Jw׺wȆb|?b qXMa}{T̾2n垲-9n|Ỻl Ÿ~C&VNW4oF$3A9!«i.i_'L۵`B ȕ!RNZ6Uma+N>&R`PAhʵgN7]F\,IM[IWϯHWQF˥mGє0 QήڟrOɭCמ/֯hQoWoH*Fbޜa oM/‘.6Pa5`bK$Ʃ "qsMR1X.*"LfFAyyu0E5sZUDo)1,! I`oMA"^<ǮXcw3J8ޚTH1B&qy9{z1jO@d99caq]#[צ)@'55ȍ(Ϭ&+DUa2s+)_mc\mIe kq?B8<*~mVKw%Z{F"a]R-'&8m9b2tH.` #ݐzfVm>2`q<)6ĹnXRۼW= $ 4(#Cds$U%| X':oEU6nr/ð>2SGB܅&"t:5Y,\o/mښ\`9{3o5ZY)X`,6(/vUw1}9U@ =$`Bq΃UZo>5bzZ$ˎ\p@ Ii?PT0u6T/aUQ̭;vhGg\lX.cSoü o~Vh(aْs0fhD_W@rHepZA{=xج찎MN, _&BqoSTj=74-}4k$׳Hiv `57kbȑ[@8 NOW auhP\7'D̪5{I7dk꠳G"!0B E?zhd;3]J#M$ՒPFXM}6^θ=|GצlzI`ȅ%ŏlD-T-znKH"%_">USX.9nqCQ+,G> Z"6+S-qH䵊ovy:JS8A\X!zE;`kϝO"[[`n2g +\1`>hA;\P-gC>+yC 0g)Ne*Sa&60@,!\ԹJ=懑/H%i61#MG-s& Ճ'TXb`%ͦB-MBKlq DiO׏.gqY;ZD֛asbB,,̺EpJ '~r|<ubA#3jD#׻U;:԰`=~ z_t/PQҏj.9 [NS %ctY@llcz0~nP師e1UiTt 12pB0fXA.is, (Rl⚵.1vv3ƈ^ۭB]^MB[~6l+oNao3ƗXLD2m!kk2(G+A'VY+4+Ŭn}eeG(4]!wޡFhwx#0gތ jIE3GqiSҾ-eD]r}ƇpU4UcX5; bM;dG}'/W[$H. ik=)BM 2e<8;ħ!s,{L[8[̨"m.RK[_R“8GY;rjH٩%m QEsqbL_zԚ(t^>ȘGDʩw r_~*c NJqGSuxF J82ǡ!8`ͫ ]r=,zP+m LF'P!Z)ՐC!\pTatֆ&~98YA`+ޡo(#)R&s:{vbY3\~4nīW,Xh|eQ'3`u)Z W(C,$Miօa\c1eTolaNܐn䓔õt q?$ /kEFY8 R R -KXLހ\M=&a!]F5\?HΎhp毖 cf&ׅJ=AU0.^9mRi 峌Br(3S8 y/4j m3{x5AW&f`5Te]^ŤX7ZޭeE24 _ -= `d K?YQuN-y] kާ\lwxQ d9Y60hU}ù-tK }Mȏ\&+LZ8֋" p2Ou &\Hc$U*q^o!e-K 䣦ޔkʸ+-U6"bj^sÿ"Ig&bqp"gG1llz6;I)KW4B}y6/%c~ g H?yfDqh_aVuϹJk#&IxlhA#pAB,>-%jӀb!8`P¯ۆ#faGN uD5鈦\>+}ͻ( /13k=5±c{MgWnr,/Z$/v%j! 
dipy-1.11.0/dipy/data/files/fib2.pkl.gz [binary gzip data omitted]
dipy-1.11.0/dipy/data/files/func_coef.nii.gz [binary gzip data omitted]
dipy-1.11.0/dipy/data/files/func_discrete.nii.gz [binary gzip data omitted]
dipy-1.11.0/dipy/data/files/grad_514.txt
0.0000000e+000 0.0000000e+000 0.0000000e+000 0.0000000e+000
1.6000000e+003 -2.0000000e-001 0.0000000e+000 0.0000000e+000
1.6000000e+003 0.0000000e+000 -2.0000000e-001 0.0000000e+000
1.6000000e+003 0.0000000e+000 0.0000000e+000 -2.0000000e-001
1.6000000e+003 0.0000000e+000 0.0000000e+000 2.0000000e-001
1.6000000e+003 0.0000000e+000 2.0000000e-001 0.0000000e+000
1.6000000e+003 2.0000000e-001 0.0000000e+000 0.0000000e+000
2.2627417e+003 -2.0000000e-001 -2.0000000e-001 0.0000000e+000
2.2627417e+003 -2.0000000e-001 0.0000000e+000 -2.0000000e-001
2.2627417e+003 -2.0000000e-001 0.0000000e+000 2.0000000e-001
2.2627417e+003 -2.0000000e-001 2.0000000e-001 0.0000000e+000
2.2627417e+003 0.0000000e+000 -2.0000000e-001 -2.0000000e-001
2.2627417e+003 0.0000000e+000 -2.0000000e-001 2.0000000e-001
2.2627417e+003 0.0000000e+000 2.0000000e-001 -2.0000000e-001
2.2627417e+003 0.0000000e+000 2.0000000e-001 2.0000000e-001
2.2627417e+003 2.0000000e-001 -2.0000000e-001 0.0000000e+000
2.2627417e+003 2.0000000e-001 0.0000000e+000 -2.0000000e-001
2.2627417e+003 2.0000000e-001 0.0000000e+000 2.0000000e-001
2.2627417e+003 2.0000000e-001 2.0000000e-001 0.0000000e+000
2.7712813e+003 -2.0000000e-001 -2.0000000e-001 -2.0000000e-001
2.7712813e+003 -2.0000000e-001 -2.0000000e-001 2.0000000e-001
2.7712813e+003 -2.0000000e-001 2.0000000e-001 -2.0000000e-001
2.7712813e+003 -2.0000000e-001 2.0000000e-001 2.0000000e-001
2.7712813e+003 2.0000000e-001 -2.0000000e-001 -2.0000000e-001
2.7712813e+003 2.0000000e-001 -2.0000000e-001 2.0000000e-001
2.7712813e+003 2.0000000e-001 2.0000000e-001 -2.0000000e-001
2.7712813e+003 2.0000000e-001 2.0000000e-001 2.0000000e-001
3.2000000e+003 -4.0000000e-001 0.0000000e+000 0.0000000e+000
3.2000000e+003 0.0000000e+000 -4.0000000e-001 0.0000000e+000
3.2000000e+003 0.0000000e+000 0.0000000e+000 -4.0000000e-001
3.2000000e+003 0.0000000e+000 0.0000000e+000 4.0000000e-001
3.2000000e+003 0.0000000e+000 4.0000000e-001 0.0000000e+000
3.2000000e+003 4.0000000e-001 0.0000000e+000 0.0000000e+000
3.5777088e+003 -4.0000000e-001 -2.0000000e-001 0.0000000e+000
3.5777088e+003 -4.0000000e-001 0.0000000e+000 -2.0000000e-001
3.5777088e+003 -4.0000000e-001 0.0000000e+000 2.0000000e-001
3.5777088e+003 -4.0000000e-001 2.0000000e-001 0.0000000e+000
3.5777088e+003 -2.0000000e-001 -4.0000000e-001 0.0000000e+000
3.5777088e+003 -2.0000000e-001 0.0000000e+000 -4.0000000e-001
3.5777088e+003 -2.0000000e-001 0.0000000e+000 4.0000000e-001
3.5777088e+003 -2.0000000e-001 4.0000000e-001 0.0000000e+000
3.5777088e+003 0.0000000e+000 -4.0000000e-001 -2.0000000e-001
3.5777088e+003 0.0000000e+000 -4.0000000e-001 2.0000000e-001
3.5777088e+003 0.0000000e+000 -2.0000000e-001 -4.0000000e-001
3.5777088e+003 0.0000000e+000 -2.0000000e-001 4.0000000e-001
3.5777088e+003 0.0000000e+000 2.0000000e-001 -4.0000000e-001
3.5777088e+003 0.0000000e+000 2.0000000e-001 4.0000000e-001
3.5777088e+003 0.0000000e+000 4.0000000e-001 -2.0000000e-001
3.5777088e+003 0.0000000e+000 4.0000000e-001 2.0000000e-001
3.5777088e+003 2.0000000e-001 -4.0000000e-001 0.0000000e+000
3.5777088e+003 2.0000000e-001 0.0000000e+000 -4.0000000e-001
3.5777088e+003 2.0000000e-001 0.0000000e+000 4.0000000e-001
3.5777088e+003 2.0000000e-001 4.0000000e-001 0.0000000e+000
3.5777088e+003 4.0000000e-001 -2.0000000e-001 0.0000000e+000
3.5777088e+003 4.0000000e-001 0.0000000e+000 -2.0000000e-001
3.5777088e+003 4.0000000e-001 0.0000000e+000 2.0000000e-001
3.5777088e+003 4.0000000e-001 2.0000000e-001 0.0000000e+000
3.9191836e+003 -4.0000000e-001 -2.0000000e-001 -2.0000000e-001
3.9191836e+003 -4.0000000e-001 -2.0000000e-001 2.0000000e-001
3.9191836e+003 -4.0000000e-001 2.0000000e-001 -2.0000000e-001
3.9191836e+003 -4.0000000e-001 2.0000000e-001 2.0000000e-001
3.9191836e+003 -2.0000000e-001 -4.0000000e-001 -2.0000000e-001
3.9191836e+003 -2.0000000e-001 -4.0000000e-001 2.0000000e-001
3.9191836e+003 -2.0000000e-001 -2.0000000e-001 -4.0000000e-001
3.9191836e+003 -2.0000000e-001 -2.0000000e-001 4.0000000e-001
3.9191836e+003 -2.0000000e-001 2.0000000e-001 -4.0000000e-001
3.9191836e+003 -2.0000000e-001 2.0000000e-001 4.0000000e-001
3.9191836e+003 -2.0000000e-001 4.0000000e-001 -2.0000000e-001
3.9191836e+003 -2.0000000e-001 4.0000000e-001 2.0000000e-001
3.9191836e+003 2.0000000e-001 -4.0000000e-001 -2.0000000e-001
3.9191836e+003 2.0000000e-001 -4.0000000e-001 2.0000000e-001
3.9191836e+003 2.0000000e-001 -2.0000000e-001 -4.0000000e-001
3.9191836e+003 2.0000000e-001 -2.0000000e-001 4.0000000e-001
3.9191836e+003 2.0000000e-001 2.0000000e-001 -4.0000000e-001
3.9191836e+003 2.0000000e-001 2.0000000e-001 4.0000000e-001
3.9191836e+003 2.0000000e-001 4.0000000e-001 -2.0000000e-001
3.9191836e+003 2.0000000e-001 4.0000000e-001 2.0000000e-001
3.9191836e+003 4.0000000e-001 -2.0000000e-001 -2.0000000e-001
3.9191836e+003 4.0000000e-001 -2.0000000e-001 2.0000000e-001
3.9191836e+003 4.0000000e-001 2.0000000e-001 -2.0000000e-001
3.9191836e+003 4.0000000e-001 2.0000000e-001 2.0000000e-001
4.5254834e+003 -4.0000000e-001 -4.0000000e-001 0.0000000e+000
4.5254834e+003 -4.0000000e-001 0.0000000e+000 -4.0000000e-001
4.5254834e+003 -4.0000000e-001 0.0000000e+000 4.0000000e-001
4.5254834e+003 -4.0000000e-001 4.0000000e-001 0.0000000e+000
4.5254834e+003 0.0000000e+000 -4.0000000e-001 -4.0000000e-001
4.5254834e+003 0.0000000e+000 -4.0000000e-001 4.0000000e-001
4.5254834e+003 0.0000000e+000 4.0000000e-001 -4.0000000e-001
4.5254834e+003 0.0000000e+000 4.0000000e-001 4.0000000e-001
4.5254834e+003 4.0000000e-001 -4.0000000e-001 0.0000000e+000
4.5254834e+003 4.0000000e-001 0.0000000e+000 -4.0000000e-001
4.5254834e+003 4.0000000e-001 0.0000000e+000 4.0000000e-001
4.5254834e+003 4.0000000e-001 4.0000000e-001 0.0000000e+000
4.8000000e+003 -6.0000000e-001 0.0000000e+000 0.0000000e+000
4.8000000e+003 -4.0000000e-001 -4.0000000e-001 -2.0000000e-001
4.8000000e+003 -4.0000000e-001 -4.0000000e-001 2.0000000e-001
4.8000000e+003 -4.0000000e-001 -2.0000000e-001 -4.0000000e-001
4.8000000e+003 -4.0000000e-001 -2.0000000e-001 4.0000000e-001
4.8000000e+003 -4.0000000e-001 2.0000000e-001 -4.0000000e-001
4.8000000e+003 -4.0000000e-001 2.0000000e-001 4.0000000e-001
4.8000000e+003 -4.0000000e-001 4.0000000e-001 -2.0000000e-001
4.8000000e+003 -4.0000000e-001 4.0000000e-001 2.0000000e-001
4.8000000e+003 -2.0000000e-001 -4.0000000e-001 -4.0000000e-001
4.8000000e+003 -2.0000000e-001 -4.0000000e-001 4.0000000e-001
4.8000000e+003 -2.0000000e-001 4.0000000e-001 -4.0000000e-001
4.8000000e+003 -2.0000000e-001 4.0000000e-001 4.0000000e-001
4.8000000e+003 0.0000000e+000 -6.0000000e-001 0.0000000e+000
4.8000000e+003 0.0000000e+000 0.0000000e+000 -6.0000000e-001
4.8000000e+003 0.0000000e+000 0.0000000e+000 6.0000000e-001
4.8000000e+003 0.0000000e+000 6.0000000e-001 0.0000000e+000
4.8000000e+003 2.0000000e-001 -4.0000000e-001 -4.0000000e-001
4.8000000e+003 2.0000000e-001 -4.0000000e-001 4.0000000e-001
4.8000000e+003 2.0000000e-001 4.0000000e-001 -4.0000000e-001
4.8000000e+003 2.0000000e-001 4.0000000e-001 4.0000000e-001
4.8000000e+003 4.0000000e-001 -4.0000000e-001 -2.0000000e-001
4.8000000e+003 4.0000000e-001 -4.0000000e-001 2.0000000e-001
4.8000000e+003 4.0000000e-001 -2.0000000e-001 -4.0000000e-001
4.8000000e+003 4.0000000e-001 -2.0000000e-001 4.0000000e-001
4.8000000e+003 4.0000000e-001 2.0000000e-001 -4.0000000e-001
4.8000000e+003 4.0000000e-001 2.0000000e-001 4.0000000e-001
4.8000000e+003 4.0000000e-001 4.0000000e-001 -2.0000000e-001
4.8000000e+003 4.0000000e-001 4.0000000e-001 2.0000000e-001
4.8000000e+003 6.0000000e-001 0.0000000e+000 0.0000000e+000
5.0596443e+003 -6.0000000e-001 -2.0000000e-001 0.0000000e+000
5.0596443e+003 -6.0000000e-001 0.0000000e+000 -2.0000000e-001
5.0596443e+003 -6.0000000e-001 0.0000000e+000 2.0000000e-001
5.0596443e+003 -6.0000000e-001 2.0000000e-001 0.0000000e+000
5.0596443e+003 -2.0000000e-001 -6.0000000e-001 0.0000000e+000
5.0596443e+003 -2.0000000e-001 0.0000000e+000 -6.0000000e-001
5.0596443e+003 -2.0000000e-001 0.0000000e+000 6.0000000e-001
5.0596443e+003 -2.0000000e-001 6.0000000e-001 0.0000000e+000
5.0596443e+003 0.0000000e+000 -6.0000000e-001 -2.0000000e-001
5.0596443e+003 0.0000000e+000 -6.0000000e-001 2.0000000e-001
5.0596443e+003 0.0000000e+000 -2.0000000e-001 -6.0000000e-001
5.0596443e+003 0.0000000e+000 -2.0000000e-001 6.0000000e-001
5.0596443e+003 0.0000000e+000 2.0000000e-001 -6.0000000e-001
5.0596443e+003 0.0000000e+000 2.0000000e-001 6.0000000e-001
5.0596443e+003 0.0000000e+000 6.0000000e-001 -2.0000000e-001
5.0596443e+003 0.0000000e+000 6.0000000e-001 2.0000000e-001
5.0596443e+003 2.0000000e-001 -6.0000000e-001 0.0000000e+000
5.0596443e+003 2.0000000e-001 0.0000000e+000 -6.0000000e-001
5.0596443e+003 2.0000000e-001 0.0000000e+000 6.0000000e-001
5.0596443e+003 2.0000000e-001 6.0000000e-001 0.0000000e+000
5.0596443e+003 6.0000000e-001 -2.0000000e-001 0.0000000e+000
5.0596443e+003 6.0000000e-001 0.0000000e+000 -2.0000000e-001
5.0596443e+003 6.0000000e-001 0.0000000e+000 2.0000000e-001
5.0596443e+003 6.0000000e-001 2.0000000e-001 0.0000000e+000
5.3065997e+003 -6.0000000e-001 -2.0000000e-001 -2.0000000e-001
5.3065997e+003 -6.0000000e-001 -2.0000000e-001 2.0000000e-001
5.3065997e+003 -6.0000000e-001 2.0000000e-001 -2.0000000e-001
5.3065997e+003 -6.0000000e-001 2.0000000e-001 2.0000000e-001
5.3065997e+003 -2.0000000e-001 -6.0000000e-001 -2.0000000e-001
5.3065997e+003 -2.0000000e-001 -6.0000000e-001 2.0000000e-001
5.3065997e+003 -2.0000000e-001 -2.0000000e-001 -6.0000000e-001
5.3065997e+003 -2.0000000e-001 -2.0000000e-001 6.0000000e-001
5.3065997e+003 -2.0000000e-001 2.0000000e-001 -6.0000000e-001
5.3065997e+003 -2.0000000e-001 2.0000000e-001 6.0000000e-001
5.3065997e+003 -2.0000000e-001 6.0000000e-001 -2.0000000e-001
5.3065997e+003 -2.0000000e-001 6.0000000e-001 2.0000000e-001
5.3065997e+003 2.0000000e-001 -6.0000000e-001 -2.0000000e-001
5.3065997e+003 2.0000000e-001 -6.0000000e-001 2.0000000e-001
5.3065997e+003 2.0000000e-001 -2.0000000e-001 -6.0000000e-001
5.3065997e+003 2.0000000e-001 -2.0000000e-001 6.0000000e-001
5.3065997e+003 2.0000000e-001 2.0000000e-001 -6.0000000e-001
5.3065997e+003 2.0000000e-001 2.0000000e-001 6.0000000e-001
5.3065997e+003 2.0000000e-001 6.0000000e-001 -2.0000000e-001
5.3065997e+003 2.0000000e-001 6.0000000e-001 2.0000000e-001
5.3065997e+003 6.0000000e-001 -2.0000000e-001 -2.0000000e-001
5.3065997e+003 6.0000000e-001 -2.0000000e-001 2.0000000e-001
5.3065997e+003 6.0000000e-001 2.0000000e-001 -2.0000000e-001
5.3065997e+003 6.0000000e-001 2.0000000e-001 2.0000000e-001
5.5425626e+003 -4.0000000e-001 -4.0000000e-001 -4.0000000e-001
5.5425626e+003 -4.0000000e-001 -4.0000000e-001 4.0000000e-001
5.5425626e+003 -4.0000000e-001 4.0000000e-001 -4.0000000e-001
5.5425626e+003 -4.0000000e-001 4.0000000e-001 4.0000000e-001
5.5425626e+003 4.0000000e-001 -4.0000000e-001 -4.0000000e-001
5.5425626e+003 4.0000000e-001 -4.0000000e-001 4.0000000e-001
5.5425626e+003 4.0000000e-001 4.0000000e-001 -4.0000000e-001
5.5425626e+003 4.0000000e-001 4.0000000e-001 4.0000000e-001
5.7688820e+003 -6.0000000e-001 -4.0000000e-001 0.0000000e+000
5.7688820e+003 -6.0000000e-001 0.0000000e+000 -4.0000000e-001
5.7688820e+003 -6.0000000e-001 0.0000000e+000 4.0000000e-001
5.7688820e+003 -6.0000000e-001 4.0000000e-001 0.0000000e+000
5.7688820e+003 -4.0000000e-001 -6.0000000e-001 0.0000000e+000
5.7688820e+003 -4.0000000e-001 0.0000000e+000 -6.0000000e-001
5.7688820e+003 -4.0000000e-001 0.0000000e+000 6.0000000e-001
5.7688820e+003 -4.0000000e-001 6.0000000e-001 0.0000000e+000
5.7688820e+003 0.0000000e+000 -6.0000000e-001 -4.0000000e-001
5.7688820e+003 0.0000000e+000 -6.0000000e-001 4.0000000e-001
5.7688820e+003 0.0000000e+000 -4.0000000e-001 -6.0000000e-001
5.7688820e+003 0.0000000e+000 -4.0000000e-001 6.0000000e-001
5.7688820e+003 0.0000000e+000 4.0000000e-001 -6.0000000e-001
5.7688820e+003 0.0000000e+000 4.0000000e-001 6.0000000e-001
5.7688820e+003 0.0000000e+000 6.0000000e-001 -4.0000000e-001
5.7688820e+003 0.0000000e+000 6.0000000e-001 4.0000000e-001
5.7688820e+003 4.0000000e-001 -6.0000000e-001 0.0000000e+000
5.7688820e+003 4.0000000e-001 0.0000000e+000 -6.0000000e-001
5.7688820e+003 4.0000000e-001 0.0000000e+000 6.0000000e-001
5.7688820e+003 4.0000000e-001 6.0000000e-001 0.0000000e+000
5.7688820e+003 6.0000000e-001 -4.0000000e-001 0.0000000e+000
5.7688820e+003 6.0000000e-001 0.0000000e+000 -4.0000000e-001
5.7688820e+003 6.0000000e-001 0.0000000e+000 4.0000000e-001
5.7688820e+003 6.0000000e-001 4.0000000e-001 0.0000000e+000
5.9866518e+003 -6.0000000e-001 -4.0000000e-001 -2.0000000e-001
5.9866518e+003 -6.0000000e-001 -4.0000000e-001 2.0000000e-001
5.9866518e+003 -6.0000000e-001 -2.0000000e-001 -4.0000000e-001
5.9866518e+003 -6.0000000e-001 -2.0000000e-001 4.0000000e-001
5.9866518e+003 -6.0000000e-001 2.0000000e-001 -4.0000000e-001
5.9866518e+003 -6.0000000e-001 2.0000000e-001 4.0000000e-001
5.9866518e+003 -6.0000000e-001 4.0000000e-001 -2.0000000e-001
5.9866518e+003 -6.0000000e-001 4.0000000e-001 2.0000000e-001
5.9866518e+003 -4.0000000e-001 -6.0000000e-001 -2.0000000e-001
5.9866518e+003 -4.0000000e-001 -6.0000000e-001 2.0000000e-001
5.9866518e+003 -4.0000000e-001 -2.0000000e-001 -6.0000000e-001
5.9866518e+003 -4.0000000e-001 -2.0000000e-001 6.0000000e-001
5.9866518e+003 -4.0000000e-001 2.0000000e-001 -6.0000000e-001
5.9866518e+003 -4.0000000e-001 2.0000000e-001 6.0000000e-001
5.9866518e+003 -4.0000000e-001 6.0000000e-001 -2.0000000e-001
5.9866518e+003 -4.0000000e-001 6.0000000e-001 2.0000000e-001
5.9866518e+003 -2.0000000e-001 -6.0000000e-001 -4.0000000e-001
5.9866518e+003 -2.0000000e-001 -6.0000000e-001 4.0000000e-001
5.9866518e+003 -2.0000000e-001 -4.0000000e-001 -6.0000000e-001
5.9866518e+003 -2.0000000e-001 -4.0000000e-001 6.0000000e-001
5.9866518e+003 -2.0000000e-001 4.0000000e-001 -6.0000000e-001
5.9866518e+003 -2.0000000e-001 4.0000000e-001 6.0000000e-001
5.9866518e+003 -2.0000000e-001 6.0000000e-001 -4.0000000e-001
5.9866518e+003 -2.0000000e-001 6.0000000e-001 4.0000000e-001
5.9866518e+003 2.0000000e-001 -6.0000000e-001 -4.0000000e-001
5.9866518e+003 2.0000000e-001 -6.0000000e-001 4.0000000e-001
5.9866518e+003 2.0000000e-001 -4.0000000e-001 -6.0000000e-001
5.9866518e+003 2.0000000e-001 -4.0000000e-001 6.0000000e-001
5.9866518e+003 2.0000000e-001 4.0000000e-001 -6.0000000e-001
5.9866518e+003 2.0000000e-001 4.0000000e-001 6.0000000e-001
5.9866518e+003 2.0000000e-001 6.0000000e-001 -4.0000000e-001
5.9866518e+003 2.0000000e-001 6.0000000e-001 4.0000000e-001
5.9866518e+003 4.0000000e-001 -6.0000000e-001 -2.0000000e-001
5.9866518e+003 4.0000000e-001 -6.0000000e-001 2.0000000e-001
5.9866518e+003 4.0000000e-001 -2.0000000e-001 -6.0000000e-001
5.9866518e+003 4.0000000e-001 -2.0000000e-001 6.0000000e-001
5.9866518e+003 4.0000000e-001 2.0000000e-001 -6.0000000e-001
5.9866518e+003 4.0000000e-001 2.0000000e-001 6.0000000e-001
5.9866518e+003 4.0000000e-001 6.0000000e-001 -2.0000000e-001
5.9866518e+003 4.0000000e-001 6.0000000e-001 2.0000000e-001
5.9866518e+003 6.0000000e-001 -4.0000000e-001 -2.0000000e-001
5.9866518e+003 6.0000000e-001 -4.0000000e-001 2.0000000e-001
5.9866518e+003 6.0000000e-001 -2.0000000e-001 -4.0000000e-001
5.9866518e+003 6.0000000e-001 -2.0000000e-001 4.0000000e-001
5.9866518e+003 6.0000000e-001 2.0000000e-001 -4.0000000e-001
5.9866518e+003 6.0000000e-001 2.0000000e-001 4.0000000e-001
5.9866518e+003 6.0000000e-001 4.0000000e-001 -2.0000000e-001
5.9866518e+003 6.0000000e-001 4.0000000e-001 2.0000000e-001
6.4000000e+003 -8.0000000e-001 0.0000000e+000 0.0000000e+000
6.4000000e+003 0.0000000e+000 -8.0000000e-001 0.0000000e+000
6.4000000e+003 0.0000000e+000 0.0000000e+000 -8.0000000e-001
6.4000000e+003 0.0000000e+000 0.0000000e+000 8.0000000e-001
6.4000000e+003 0.0000000e+000 8.0000000e-001 0.0000000e+000
6.4000000e+003 8.0000000e-001 0.0000000e+000 0.0000000e+000
6.5969690e+003 -8.0000000e-001 -2.0000000e-001 0.0000000e+000
6.5969690e+003 -8.0000000e-001 0.0000000e+000 -2.0000000e-001
6.5969690e+003 -8.0000000e-001 0.0000000e+000 2.0000000e-001
6.5969690e+003 -8.0000000e-001 2.0000000e-001 0.0000000e+000
6.5969690e+003 -6.0000000e-001 -4.0000000e-001 -4.0000000e-001
6.5969690e+003 -6.0000000e-001 -4.0000000e-001 4.0000000e-001
6.5969690e+003 -6.0000000e-001 4.0000000e-001 -4.0000000e-001
6.5969690e+003 -6.0000000e-001 4.0000000e-001 4.0000000e-001
6.5969690e+003 -4.0000000e-001 -6.0000000e-001 -4.0000000e-001
6.5969690e+003 -4.0000000e-001 -6.0000000e-001 4.0000000e-001
6.5969690e+003 -4.0000000e-001 -4.0000000e-001 -6.0000000e-001
6.5969690e+003 -4.0000000e-001 -4.0000000e-001 6.0000000e-001
6.5969690e+003 -4.0000000e-001 4.0000000e-001 -6.0000000e-001
6.5969690e+003 -4.0000000e-001 4.0000000e-001 6.0000000e-001
6.5969690e+003 -4.0000000e-001 6.0000000e-001 -4.0000000e-001
6.5969690e+003 -4.0000000e-001 6.0000000e-001 4.0000000e-001
6.5969690e+003 -2.0000000e-001 -8.0000000e-001 0.0000000e+000
6.5969690e+003 -2.0000000e-001 0.0000000e+000 -8.0000000e-001
6.5969690e+003 -2.0000000e-001 0.0000000e+000 8.0000000e-001
6.5969690e+003 -2.0000000e-001 8.0000000e-001 0.0000000e+000
6.5969690e+003 0.0000000e+000 -8.0000000e-001 -2.0000000e-001
6.5969690e+003 0.0000000e+000 -8.0000000e-001 2.0000000e-001
6.5969690e+003 0.0000000e+000 -2.0000000e-001 -8.0000000e-001
6.5969690e+003 0.0000000e+000 -2.0000000e-001 8.0000000e-001
6.5969690e+003 0.0000000e+000 2.0000000e-001 -8.0000000e-001
6.5969690e+003 0.0000000e+000 2.0000000e-001 8.0000000e-001
6.5969690e+003 0.0000000e+000 8.0000000e-001 -2.0000000e-001
6.5969690e+003 0.0000000e+000 8.0000000e-001 2.0000000e-001
6.5969690e+003 2.0000000e-001 -8.0000000e-001 0.0000000e+000
6.5969690e+003 2.0000000e-001 0.0000000e+000 -8.0000000e-001
6.5969690e+003 2.0000000e-001 0.0000000e+000 8.0000000e-001
6.5969690e+003 2.0000000e-001 8.0000000e-001 0.0000000e+000
6.5969690e+003 4.0000000e-001 -6.0000000e-001 -4.0000000e-001
6.5969690e+003 4.0000000e-001 -6.0000000e-001 4.0000000e-001
6.5969690e+003 4.0000000e-001 -4.0000000e-001 -6.0000000e-001
6.5969690e+003 4.0000000e-001 -4.0000000e-001 6.0000000e-001
6.5969690e+003 4.0000000e-001 4.0000000e-001 -6.0000000e-001
6.5969690e+003 4.0000000e-001 4.0000000e-001 6.0000000e-001
6.5969690e+003 4.0000000e-001 6.0000000e-001 -4.0000000e-001
6.5969690e+003 4.0000000e-001 6.0000000e-001 4.0000000e-001
6.5969690e+003 6.0000000e-001 -4.0000000e-001 -4.0000000e-001
6.5969690e+003 6.0000000e-001 -4.0000000e-001 4.0000000e-001
6.5969690e+003 6.0000000e-001 4.0000000e-001 -4.0000000e-001
6.5969690e+003 6.0000000e-001 4.0000000e-001 4.0000000e-001
6.5969690e+003 8.0000000e-001 -2.0000000e-001 0.0000000e+000
6.5969690e+003 8.0000000e-001 0.0000000e+000 -2.0000000e-001
6.5969690e+003 8.0000000e-001 0.0000000e+000 2.0000000e-001
6.5969690e+003 8.0000000e-001 2.0000000e-001 0.0000000e+000
6.7882251e+003 -8.0000000e-001 -2.0000000e-001 -2.0000000e-001
6.7882251e+003 -8.0000000e-001 -2.0000000e-001 2.0000000e-001
6.7882251e+003 -8.0000000e-001 2.0000000e-001 -2.0000000e-001
6.7882251e+003 -8.0000000e-001 2.0000000e-001 2.0000000e-001
6.7882251e+003 -6.0000000e-001 -6.0000000e-001 0.0000000e+000
6.7882251e+003 -6.0000000e-001 0.0000000e+000 -6.0000000e-001
6.7882251e+003 -6.0000000e-001 0.0000000e+000 6.0000000e-001
6.7882251e+003 -6.0000000e-001 6.0000000e-001 0.0000000e+000
6.7882251e+003 -2.0000000e-001 -8.0000000e-001 -2.0000000e-001
6.7882251e+003 -2.0000000e-001 -8.0000000e-001 2.0000000e-001
6.7882251e+003 -2.0000000e-001 -2.0000000e-001 -8.0000000e-001
6.7882251e+003 -2.0000000e-001 -2.0000000e-001 8.0000000e-001
6.7882251e+003 -2.0000000e-001 2.0000000e-001 -8.0000000e-001
6.7882251e+003 -2.0000000e-001 2.0000000e-001 8.0000000e-001
6.7882251e+003 -2.0000000e-001 8.0000000e-001 -2.0000000e-001
6.7882251e+003 -2.0000000e-001 8.0000000e-001 2.0000000e-001
6.7882251e+003 0.0000000e+000 -6.0000000e-001 -6.0000000e-001
6.7882251e+003 0.0000000e+000 -6.0000000e-001 6.0000000e-001
6.7882251e+003 0.0000000e+000 6.0000000e-001 -6.0000000e-001
6.7882251e+003 0.0000000e+000 6.0000000e-001 6.0000000e-001
6.7882251e+003 2.0000000e-001 -8.0000000e-001 -2.0000000e-001
6.7882251e+003 2.0000000e-001 -8.0000000e-001 2.0000000e-001
6.7882251e+003 2.0000000e-001 -2.0000000e-001 -8.0000000e-001
6.7882251e+003 2.0000000e-001 -2.0000000e-001 8.0000000e-001
6.7882251e+003 2.0000000e-001 2.0000000e-001 -8.0000000e-001
6.7882251e+003 2.0000000e-001 2.0000000e-001 8.0000000e-001
6.7882251e+003 2.0000000e-001 8.0000000e-001 -2.0000000e-001
6.7882251e+003 2.0000000e-001 8.0000000e-001 2.0000000e-001
6.7882251e+003 6.0000000e-001 -6.0000000e-001 0.0000000e+000
6.7882251e+003 6.0000000e-001 0.0000000e+000 -6.0000000e-001
6.7882251e+003 6.0000000e-001 0.0000000e+000 6.0000000e-001
6.7882251e+003 6.0000000e-001 6.0000000e-001 0.0000000e+000
6.7882251e+003 8.0000000e-001 -2.0000000e-001 -2.0000000e-001
6.7882251e+003 8.0000000e-001 -2.0000000e-001 2.0000000e-001
6.7882251e+003 8.0000000e-001 2.0000000e-001 -2.0000000e-001
6.7882251e+003 8.0000000e-001 2.0000000e-001 2.0000000e-001
6.9742383e+003 -6.0000000e-001 -6.0000000e-001 -2.0000000e-001
6.9742383e+003 -6.0000000e-001 -6.0000000e-001 2.0000000e-001
6.9742383e+003 -6.0000000e-001 -2.0000000e-001 -6.0000000e-001
6.9742383e+003 -6.0000000e-001 -2.0000000e-001 6.0000000e-001
6.9742383e+003 -6.0000000e-001 2.0000000e-001 -6.0000000e-001
6.9742383e+003 -6.0000000e-001 2.0000000e-001 6.0000000e-001
6.9742383e+003 -6.0000000e-001 6.0000000e-001 -2.0000000e-001
6.9742383e+003 -6.0000000e-001 6.0000000e-001 2.0000000e-001
6.9742383e+003 -2.0000000e-001 -6.0000000e-001 -6.0000000e-001
6.9742383e+003 -2.0000000e-001 -6.0000000e-001 6.0000000e-001
6.9742383e+003 -2.0000000e-001 6.0000000e-001 -6.0000000e-001
6.9742383e+003 -2.0000000e-001 6.0000000e-001 6.0000000e-001
6.9742383e+003 2.0000000e-001 -6.0000000e-001 -6.0000000e-001
6.9742383e+003 2.0000000e-001 -6.0000000e-001 6.0000000e-001
6.9742383e+003 2.0000000e-001 6.0000000e-001 -6.0000000e-001
6.9742383e+003 2.0000000e-001 6.0000000e-001 6.0000000e-001
6.9742383e+003 6.0000000e-001 -6.0000000e-001 -2.0000000e-001
6.9742383e+003 6.0000000e-001 -6.0000000e-001 2.0000000e-001
6.9742383e+003 6.0000000e-001 -2.0000000e-001 -6.0000000e-001
6.9742383e+003 6.0000000e-001 -2.0000000e-001 6.0000000e-001
6.9742383e+003 6.0000000e-001 2.0000000e-001 -6.0000000e-001
6.9742383e+003 6.0000000e-001 2.0000000e-001 6.0000000e-001
6.9742383e+003 6.0000000e-001 6.0000000e-001 -2.0000000e-001
6.9742383e+003 6.0000000e-001 6.0000000e-001 2.0000000e-001
7.1554175e+003 -8.0000000e-001 -4.0000000e-001 0.0000000e+000
7.1554175e+003 -8.0000000e-001 0.0000000e+000 -4.0000000e-001
7.1554175e+003 -8.0000000e-001 0.0000000e+000 4.0000000e-001
7.1554175e+003 -8.0000000e-001 4.0000000e-001 0.0000000e+000
7.1554175e+003 -4.0000000e-001 -8.0000000e-001 0.0000000e+000
7.1554175e+003 -4.0000000e-001 0.0000000e+000 -8.0000000e-001
7.1554175e+003 -4.0000000e-001 0.0000000e+000 8.0000000e-001
7.1554175e+003 -4.0000000e-001 8.0000000e-001 0.0000000e+000
7.1554175e+003 0.0000000e+000 -8.0000000e-001 -4.0000000e-001
7.1554175e+003 0.0000000e+000 -8.0000000e-001 4.0000000e-001
7.1554175e+003 0.0000000e+000 -4.0000000e-001 -8.0000000e-001
7.1554175e+003 0.0000000e+000 -4.0000000e-001 8.0000000e-001
7.1554175e+003 0.0000000e+000 4.0000000e-001 -8.0000000e-001
7.1554175e+003 0.0000000e+000 4.0000000e-001 8.0000000e-001
7.1554175e+003 0.0000000e+000 8.0000000e-001 -4.0000000e-001
7.1554175e+003 0.0000000e+000 8.0000000e-001 4.0000000e-001
7.1554175e+003 4.0000000e-001 -8.0000000e-001 0.0000000e+000
7.1554175e+003 4.0000000e-001 0.0000000e+000 -8.0000000e-001
7.1554175e+003 4.0000000e-001 0.0000000e+000 8.0000000e-001
7.1554175e+003 4.0000000e-001 8.0000000e-001 0.0000000e+000
7.1554175e+003 8.0000000e-001 -4.0000000e-001 0.0000000e+000
7.1554175e+003 8.0000000e-001 0.0000000e+000 -4.0000000e-001
7.1554175e+003 8.0000000e-001 0.0000000e+000 4.0000000e-001
7.1554175e+003 8.0000000e-001 4.0000000e-001 0.0000000e+000
7.3321211e+003 -8.0000000e-001 -4.0000000e-001 -2.0000000e-001
7.3321211e+003 -8.0000000e-001 -4.0000000e-001 2.0000000e-001
7.3321211e+003 -8.0000000e-001 -2.0000000e-001 -4.0000000e-001
7.3321211e+003 -8.0000000e-001 -2.0000000e-001 4.0000000e-001
7.3321211e+003 -8.0000000e-001 2.0000000e-001 -4.0000000e-001
7.3321211e+003 -8.0000000e-001 2.0000000e-001 4.0000000e-001
7.3321211e+003 -8.0000000e-001 4.0000000e-001 -2.0000000e-001
7.3321211e+003 -8.0000000e-001
4.0000000e-001 2.0000000e-001 7.3321211e+003 -4.0000000e-001 -8.0000000e-001 -2.0000000e-001 7.3321211e+003 -4.0000000e-001 -8.0000000e-001 2.0000000e-001 7.3321211e+003 -4.0000000e-001 -2.0000000e-001 -8.0000000e-001 7.3321211e+003 -4.0000000e-001 -2.0000000e-001 8.0000000e-001 7.3321211e+003 -4.0000000e-001 2.0000000e-001 -8.0000000e-001 7.3321211e+003 -4.0000000e-001 2.0000000e-001 8.0000000e-001 7.3321211e+003 -4.0000000e-001 8.0000000e-001 -2.0000000e-001 7.3321211e+003 -4.0000000e-001 8.0000000e-001 2.0000000e-001 7.3321211e+003 -2.0000000e-001 -8.0000000e-001 -4.0000000e-001 7.3321211e+003 -2.0000000e-001 -8.0000000e-001 4.0000000e-001 7.3321211e+003 -2.0000000e-001 -4.0000000e-001 -8.0000000e-001 7.3321211e+003 -2.0000000e-001 -4.0000000e-001 8.0000000e-001 7.3321211e+003 -2.0000000e-001 4.0000000e-001 -8.0000000e-001 7.3321211e+003 -2.0000000e-001 4.0000000e-001 8.0000000e-001 7.3321211e+003 -2.0000000e-001 8.0000000e-001 -4.0000000e-001 7.3321211e+003 -2.0000000e-001 8.0000000e-001 4.0000000e-001 7.3321211e+003 2.0000000e-001 -8.0000000e-001 -4.0000000e-001 7.3321211e+003 2.0000000e-001 -8.0000000e-001 4.0000000e-001 7.3321211e+003 2.0000000e-001 -4.0000000e-001 -8.0000000e-001 7.3321211e+003 2.0000000e-001 -4.0000000e-001 8.0000000e-001 7.3321211e+003 2.0000000e-001 4.0000000e-001 -8.0000000e-001 7.3321211e+003 2.0000000e-001 4.0000000e-001 8.0000000e-001 7.3321211e+003 2.0000000e-001 8.0000000e-001 -4.0000000e-001 7.3321211e+003 2.0000000e-001 8.0000000e-001 4.0000000e-001 7.3321211e+003 4.0000000e-001 -8.0000000e-001 -2.0000000e-001 7.3321211e+003 4.0000000e-001 -8.0000000e-001 2.0000000e-001 7.3321211e+003 4.0000000e-001 -2.0000000e-001 -8.0000000e-001 7.3321211e+003 4.0000000e-001 -2.0000000e-001 8.0000000e-001 7.3321211e+003 4.0000000e-001 2.0000000e-001 -8.0000000e-001 7.3321211e+003 4.0000000e-001 2.0000000e-001 8.0000000e-001 7.3321211e+003 4.0000000e-001 8.0000000e-001 -2.0000000e-001 7.3321211e+003 4.0000000e-001 8.0000000e-001 2.0000000e-001 7.3321211e+003 8.0000000e-001 -4.0000000e-001 -2.0000000e-001 7.3321211e+003 8.0000000e-001 -4.0000000e-001 2.0000000e-001 7.3321211e+003 8.0000000e-001 -2.0000000e-001 -4.0000000e-001 7.3321211e+003 8.0000000e-001 -2.0000000e-001 4.0000000e-001 7.3321211e+003 8.0000000e-001 2.0000000e-001 -4.0000000e-001 7.3321211e+003 8.0000000e-001 2.0000000e-001 4.0000000e-001 7.3321211e+003 8.0000000e-001 4.0000000e-001 -2.0000000e-001 7.3321211e+003 8.0000000e-001 4.0000000e-001 2.0000000e-001 7.5046652e+003 -6.0000000e-001 -6.0000000e-001 -4.0000000e-001 7.5046652e+003 -6.0000000e-001 -6.0000000e-001 4.0000000e-001 7.5046652e+003 -6.0000000e-001 -4.0000000e-001 -6.0000000e-001 7.5046652e+003 -6.0000000e-001 -4.0000000e-001 6.0000000e-001 7.5046652e+003 -6.0000000e-001 4.0000000e-001 -6.0000000e-001 7.5046652e+003 -6.0000000e-001 4.0000000e-001 6.0000000e-001 7.5046652e+003 -6.0000000e-001 6.0000000e-001 -4.0000000e-001 7.5046652e+003 -6.0000000e-001 6.0000000e-001 4.0000000e-001 7.5046652e+003 -4.0000000e-001 -6.0000000e-001 -6.0000000e-001 7.5046652e+003 -4.0000000e-001 -6.0000000e-001 6.0000000e-001 7.5046652e+003 -4.0000000e-001 6.0000000e-001 -6.0000000e-001 7.5046652e+003 -4.0000000e-001 6.0000000e-001 6.0000000e-001 7.5046652e+003 4.0000000e-001 -6.0000000e-001 -6.0000000e-001 7.5046652e+003 4.0000000e-001 -6.0000000e-001 6.0000000e-001 7.5046652e+003 4.0000000e-001 6.0000000e-001 -6.0000000e-001 7.5046652e+003 4.0000000e-001 6.0000000e-001 6.0000000e-001 7.5046652e+003 6.0000000e-001 -6.0000000e-001 -4.0000000e-001 7.5046652e+003 
6.0000000e-001 -6.0000000e-001 4.0000000e-001 7.5046652e+003 6.0000000e-001 -4.0000000e-001 -6.0000000e-001 7.5046652e+003 6.0000000e-001 -4.0000000e-001 6.0000000e-001 7.5046652e+003 6.0000000e-001 4.0000000e-001 -6.0000000e-001 7.5046652e+003 6.0000000e-001 4.0000000e-001 6.0000000e-001 7.5046652e+003 6.0000000e-001 6.0000000e-001 -4.0000000e-001 7.5046652e+003 6.0000000e-001 6.0000000e-001 4.0000000e-001 7.8383672e+003 -8.0000000e-001 -4.0000000e-001 -4.0000000e-001 7.8383672e+003 -8.0000000e-001 -4.0000000e-001 4.0000000e-001 7.8383672e+003 -8.0000000e-001 4.0000000e-001 -4.0000000e-001 7.8383672e+003 -8.0000000e-001 4.0000000e-001 4.0000000e-001 7.8383672e+003 -4.0000000e-001 -8.0000000e-001 -4.0000000e-001 7.8383672e+003 -4.0000000e-001 -8.0000000e-001 4.0000000e-001 7.8383672e+003 -4.0000000e-001 -4.0000000e-001 -8.0000000e-001 7.8383672e+003 -4.0000000e-001 -4.0000000e-001 8.0000000e-001 7.8383672e+003 -4.0000000e-001 4.0000000e-001 -8.0000000e-001 7.8383672e+003 -4.0000000e-001 4.0000000e-001 8.0000000e-001 7.8383672e+003 -4.0000000e-001 8.0000000e-001 -4.0000000e-001 7.8383672e+003 -4.0000000e-001 8.0000000e-001 4.0000000e-001 7.8383672e+003 4.0000000e-001 -8.0000000e-001 -4.0000000e-001 7.8383672e+003 4.0000000e-001 -8.0000000e-001 4.0000000e-001 7.8383672e+003 4.0000000e-001 -4.0000000e-001 -8.0000000e-001 7.8383672e+003 4.0000000e-001 -4.0000000e-001 8.0000000e-001 7.8383672e+003 4.0000000e-001 4.0000000e-001 -8.0000000e-001 7.8383672e+003 4.0000000e-001 4.0000000e-001 8.0000000e-001 7.8383672e+003 4.0000000e-001 8.0000000e-001 -4.0000000e-001 7.8383672e+003 4.0000000e-001 8.0000000e-001 4.0000000e-001 7.8383672e+003 8.0000000e-001 -4.0000000e-001 -4.0000000e-001 7.8383672e+003 8.0000000e-001 -4.0000000e-001 4.0000000e-001 7.8383672e+003 8.0000000e-001 4.0000000e-001 -4.0000000e-001 7.8383672e+003 8.0000000e-001 4.0000000e-001 4.0000000e-001 8.0000000e+003 -1.0000000e+000 0.0000000e+000 0.0000000e+000 8.0000000e+003 -8.0000000e-001 -6.0000000e-001 0.0000000e+000 8.0000000e+003 -8.0000000e-001 0.0000000e+000 -6.0000000e-001 8.0000000e+003 -8.0000000e-001 0.0000000e+000 6.0000000e-001 8.0000000e+003 -8.0000000e-001 6.0000000e-001 0.0000000e+000 8.0000000e+003 -6.0000000e-001 -8.0000000e-001 0.0000000e+000 8.0000000e+003 -6.0000000e-001 0.0000000e+000 -8.0000000e-001 8.0000000e+003 -6.0000000e-001 0.0000000e+000 8.0000000e-001 8.0000000e+003 -6.0000000e-001 8.0000000e-001 0.0000000e+000 8.0000000e+003 0.0000000e+000 -1.0000000e+000 0.0000000e+000 8.0000000e+003 0.0000000e+000 -8.0000000e-001 -6.0000000e-001 8.0000000e+003 0.0000000e+000 -8.0000000e-001 6.0000000e-001 8.0000000e+003 0.0000000e+000 -6.0000000e-001 -8.0000000e-001 8.0000000e+003 0.0000000e+000 -6.0000000e-001 8.0000000e-001 8.0000000e+003 0.0000000e+000 0.0000000e+000 -1.0000000e+000 8.0000000e+003 0.0000000e+000 0.0000000e+000 1.0000000e+000 8.0000000e+003 0.0000000e+000 6.0000000e-001 -8.0000000e-001 8.0000000e+003 0.0000000e+000 6.0000000e-001 8.0000000e-001 8.0000000e+003 0.0000000e+000 8.0000000e-001 -6.0000000e-001 8.0000000e+003 0.0000000e+000 8.0000000e-001 6.0000000e-001 8.0000000e+003 0.0000000e+000 1.0000000e+000 0.0000000e+000 8.0000000e+003 6.0000000e-001 -8.0000000e-001 0.0000000e+000 8.0000000e+003 6.0000000e-001 0.0000000e+000 -8.0000000e-001 8.0000000e+003 6.0000000e-001 0.0000000e+000 8.0000000e-001 8.0000000e+003 6.0000000e-001 8.0000000e-001 0.0000000e+000 8.0000000e+003 8.0000000e-001 -6.0000000e-001 0.0000000e+000 8.0000000e+003 8.0000000e-001 0.0000000e+000 -6.0000000e-001 8.0000000e+003 
8.0000000e-001 0.0000000e+000 6.0000000e-001 8.0000000e+003 8.0000000e-001 6.0000000e-001 0.0000000e+000 8.0000000e+003 1.0000000e+000 0.0000000e+000 0.0000000e+000 dipy-1.11.0/dipy/data/files/gtab_3shell.txt000066400000000000000000000347031476546756600205100ustar00rootroot000000000000000.000000000000000000e+00,0.000000000000000000e+00,0.000000000000000000e+00 9.999789999999999281e+02,-5.040010000000000545e+00,-4.027949999999999697e+00 0.000000000000000000e+00,9.999919999999999618e+02,-3.987939999999999596e+00 -2.570550000000000068e+01,6.538609999999999900e+02,-7.561779999999999973e+02 5.895180000000000291e+02,-7.692359999999999900e+02,-2.464619999999999891e+02 -2.357849999999999966e+02,-5.290950000000000273e+02,-8.151469999999999345e+02 -8.935779999999999745e+02,-2.635589999999999691e+02,-3.633940000000000055e+02 7.978400000000000318e+02,1.337259999999999991e+02,-5.878509999999999991e+02 2.329370000000000118e+02,9.318840000000000146e+02,-2.780869999999999891e+02 9.367200000000000273e+02,1.441389999999999816e+02,-3.190299999999999727e+02 5.041299999999999955e+02,-8.466939999999999600e+02,1.701829999999999927e+02 3.451989999999999554e+02,-8.503110000000000355e+02,3.972520000000000095e+02 4.567649999999999864e+02,-6.356720000000000255e+02,6.223229999999999791e+02 -4.874809999999999945e+02,-3.939079999999999586e+02,-7.792289999999999281e+02 -6.170330000000000155e+02,6.768490000000000464e+02,-4.014300000000000068e+02 -5.785120000000000573e+02,-1.093469999999999942e+02,8.083110000000000355e+02 -8.253640000000000327e+02,-5.250339999999999918e+02,-2.076359999999999957e+02 8.950760000000000218e+02,-4.482420000000000471e+01,4.436550000000000296e+02 2.899920000000000186e+02,-5.454729999999999563e+02,7.863609999999999900e+02 1.150140000000000100e+02,-9.640499999999999545e+02,2.395409999999999968e+02 -7.999340000000000828e+02,4.077669999999999959e+02,4.402640000000000100e+02 5.124940000000000282e+02,8.421390000000000100e+02,-1.677849999999999966e+02 -7.900049999999999955e+02,1.579929999999999950e+02,5.923940000000000055e+02 9.492810000000000628e+02,-2.376949999999999932e+02,-2.058300000000000125e+02 2.323179999999999836e+02,7.870509999999999309e+02,-5.714719999999999800e+02 -1.967070000000000007e+01,-1.920310000000000059e+02,9.811920000000000073e+02 2.159689999999999941e+02,-9.571229999999999336e+02,-1.930610000000000070e+02 7.726449999999999818e+02,-6.075339999999999918e+02,-1.841800000000000068e+02 -1.601529999999999916e+02,3.604130000000000109e+02,-9.189410000000000309e+02 -1.461670000000000016e+02,7.352740000000000009e+02,6.618210000000000264e+02 8.873700000000000045e+02,4.211109999999999900e+02,-1.877239999999999895e+02 -5.629889999999999191e+02,2.364819999999999993e+02,7.919089999999999918e+02 -3.813129999999999882e+02,1.470370000000000061e+02,-9.126779999999999973e+02 -3.059540000000000077e+02,-2.037930000000000064e+02,9.299790000000000418e+02 -3.326819999999999595e+02,-1.341129999999999995e+02,-9.334539999999999509e+02 -9.622389999999999191e+02,-2.694639999999999986e+02,-3.853909999999999769e+01 -9.595320000000000391e+02,2.097700000000000102e+02,-1.878710000000000093e+02 4.509639999999999986e+02,-8.903369999999999891e+02,-6.270149999999999579e+01 -7.711920000000000073e+02,6.311750000000000682e+02,-8.295329999999999870e+01 7.098160000000000309e+02,4.131589999999999918e+02,-5.704919999999999618e+02 -6.945430000000000064e+02,2.793949999999999889e+01,-7.189080000000000155e+02 6.815489999999999782e+02,5.331009999999999991e+02,5.012930000000000064e+02 
-1.416890000000000214e+02,-7.292409999999999854e+02,-6.694270000000000209e+02 -7.403509999999999991e+02,3.932230000000000132e+02,-5.452119999999999891e+02 -1.027560000000000002e+02,8.253669999999999618e+02,-5.551669999999999163e+02 5.839130000000000109e+02,-6.007820000000000391e+02,-5.459920000000000755e+02 -8.775499999999999545e+01,-3.396509999999999536e+02,-9.364489999999999554e+02 -5.505060000000000855e+02,-7.954839999999999236e+02,-2.532760000000000105e+02 8.374430000000000973e+02,-4.622019999999999982e+02,2.916480000000000246e+02 3.629289999999999736e+02,-5.659300000000000637e+02,-7.402740000000000009e+02 -1.836109999999999900e+02,3.970810000000000173e+02,8.992300000000000182e+02 -7.183230000000000928e+02,-6.957010000000000218e+02,-3.548969999999999736e+00 4.327819999999999823e+02,6.863609999999999900e+02,5.844730000000000700e+02 5.018369999999999891e+02,6.943369999999999891e+02,-5.158049999999999500e+02 -1.705180000000000007e+02,-5.137690000000000055e+02,8.408120000000000118e+02 4.631950000000000500e+02,4.280519999999999641e+02,-7.760289999999999964e+02 3.837130000000000223e+02,-8.125720000000000027e+02,-4.387379999999999995e+02 -7.141659999999999400e+02,-2.514669999999999845e+02,-6.532470000000000709e+02 2.592050000000000409e+02,8.872580000000000382e+02,3.815569999999999595e+02 0.000000000000000000e+00,8.131860000000000355e+01,9.966879999999999882e+02 3.636330000000000240e+01,-9.046159999999999854e+02,-4.246750000000000114e+02 5.708539999999999281e+02,-3.085970000000000368e+02,7.608509999999999991e+02 -2.822049999999999841e+02,1.497950000000000159e+02,9.475879999999999654e+02 7.203509999999999991e+02,6.119139999999999873e+02,-3.265830000000000268e+02 2.658909999999999627e+02,9.606829999999999927e+02,7.993519999999999470e+01 1.999957999999999856e+03,-1.008002000000000109e+01,-8.055899999999999395e+00 0.000000000000000000e+00,1.999983999999999924e+03,-7.975879999999999193e+00 -5.141100000000000136e+01,1.307721999999999980e+03,-1.512355999999999995e+03 1.179036000000000058e+03,-1.538471999999999980e+03,-4.929239999999999782e+02 -4.715699999999999932e+02,-1.058190000000000055e+03,-1.630293999999999869e+03 -1.787155999999999949e+03,-5.271179999999999382e+02,-7.267880000000000109e+02 1.595680000000000064e+03,2.674519999999999982e+02,-1.175701999999999998e+03 4.658740000000000236e+02,1.863768000000000029e+03,-5.561739999999999782e+02 1.873440000000000055e+03,2.882779999999999632e+02,-6.380599999999999454e+02 1.008259999999999991e+03,-1.693387999999999920e+03,3.403659999999999854e+02 6.903979999999999109e+02,-1.700622000000000071e+03,7.945040000000000191e+02 9.135299999999999727e+02,-1.271344000000000051e+03,1.244645999999999958e+03 -9.749619999999999891e+02,-7.878159999999999172e+02,-1.558457999999999856e+03 -1.234066000000000031e+03,1.353698000000000093e+03,-8.028600000000000136e+02 -1.157024000000000115e+03,-2.186939999999999884e+02,1.616622000000000071e+03 -1.650728000000000065e+03,-1.050067999999999984e+03,-4.152719999999999914e+02 1.790152000000000044e+03,-8.964840000000000941e+01,8.873100000000000591e+02 5.799840000000000373e+02,-1.090945999999999913e+03,1.572721999999999980e+03 2.300280000000000200e+02,-1.928099999999999909e+03,4.790819999999999936e+02 -1.599868000000000166e+03,8.155339999999999918e+02,8.805280000000000200e+02 1.024988000000000056e+03,1.684278000000000020e+03,-3.355699999999999932e+02 -1.580009999999999991e+03,3.159859999999999900e+02,1.184788000000000011e+03 1.898562000000000126e+03,-4.753899999999999864e+02,-4.116600000000000250e+02 
4.646359999999999673e+02,1.574101999999999862e+03,-1.142943999999999960e+03 -3.934140000000000015e+01,-3.840620000000000118e+02,1.962384000000000015e+03 4.319379999999999882e+02,-1.914245999999999867e+03,-3.861220000000000141e+02 1.545289999999999964e+03,-1.215067999999999984e+03,-3.683600000000000136e+02 -3.203059999999999832e+02,7.208260000000000218e+02,-1.837882000000000062e+03 -2.923340000000000032e+02,1.470548000000000002e+03,1.323642000000000053e+03 1.774740000000000009e+03,8.422219999999999800e+02,-3.754479999999999791e+02 -1.125977999999999838e+03,4.729639999999999986e+02,1.583817999999999984e+03 -7.626259999999999764e+02,2.940740000000000123e+02,-1.825355999999999995e+03 -6.119080000000000155e+02,-4.075860000000000127e+02,1.859958000000000084e+03 -6.653639999999999191e+02,-2.682259999999999991e+02,-1.866907999999999902e+03 -1.924477999999999838e+03,-5.389279999999999973e+02,-7.707819999999999538e+01 -1.919064000000000078e+03,4.195400000000000205e+02,-3.757420000000000186e+02 9.019279999999999973e+02,-1.780673999999999978e+03,-1.254029999999999916e+02 -1.542384000000000015e+03,1.262350000000000136e+03,-1.659065999999999974e+02 1.419632000000000062e+03,8.263179999999999836e+02,-1.140983999999999924e+03 -1.389086000000000013e+03,5.587899999999999778e+01,-1.437816000000000031e+03 1.363097999999999956e+03,1.066201999999999998e+03,1.002586000000000013e+03 -2.833780000000000427e+02,-1.458481999999999971e+03,-1.338854000000000042e+03 -1.480701999999999998e+03,7.864460000000000264e+02,-1.090423999999999978e+03 -2.055120000000000005e+02,1.650733999999999924e+03,-1.110333999999999833e+03 1.167826000000000022e+03,-1.201564000000000078e+03,-1.091984000000000151e+03 -1.755099999999999909e+02,-6.793019999999999072e+02,-1.872897999999999911e+03 -1.101012000000000171e+03,-1.590967999999999847e+03,-5.065520000000000209e+02 1.674886000000000195e+03,-9.244039999999999964e+02,5.832960000000000491e+02 7.258579999999999472e+02,-1.131860000000000127e+03,-1.480548000000000002e+03 -3.672219999999999800e+02,7.941620000000000346e+02,1.798460000000000036e+03 -1.436646000000000186e+03,-1.391402000000000044e+03,-7.097939999999999472e+00 8.655639999999999645e+02,1.372721999999999980e+03,1.168946000000000140e+03 1.003673999999999978e+03,1.388673999999999978e+03,-1.031609999999999900e+03 -3.410360000000000014e+02,-1.027538000000000011e+03,1.681624000000000024e+03 9.263900000000001000e+02,8.561039999999999281e+02,-1.552057999999999993e+03 7.674260000000000446e+02,-1.625144000000000005e+03,-8.774759999999999991e+02 -1.428331999999999880e+03,-5.029339999999999691e+02,-1.306494000000000142e+03 5.184100000000000819e+02,1.774516000000000076e+03,7.631139999999999191e+02 0.000000000000000000e+00,1.626372000000000071e+02,1.993375999999999976e+03 7.272660000000000480e+01,-1.809231999999999971e+03,-8.493500000000000227e+02 1.141707999999999856e+03,-6.171940000000000737e+02,1.521701999999999998e+03 -5.644099999999999682e+02,2.995900000000000318e+02,1.895175999999999931e+03 1.440701999999999998e+03,1.223827999999999975e+03,-6.531660000000000537e+02 5.317819999999999254e+02,1.921365999999999985e+03,1.598703999999999894e+02 3.499926500000000033e+03,-1.764003500000000102e+01,-1.409782499999999850e+01 0.000000000000000000e+00,3.499971999999999753e+03,-1.395778999999999925e+01 -8.996925000000000239e+01,2.288513500000000022e+03,-2.646623000000000047e+03 2.063313000000000102e+03,-2.692326000000000022e+03,-8.626169999999999618e+02 -8.252474999999999454e+02,-1.851832499999999982e+03,-2.853014499999999771e+03 
-3.127523000000000138e+03,-9.224565000000000055e+02,-1.271878999999999905e+03 2.792440000000000055e+03,4.680410000000000537e+02,-2.057478500000000167e+03 8.152794999999999845e+02,3.261594000000000051e+03,-9.733044999999999618e+02 3.278519999999999982e+03,5.044864999999999782e+02,-1.116605000000000018e+03 1.764454999999999927e+03,-2.963428999999999633e+03,5.956404999999999745e+02 1.208196500000000015e+03,-2.976088500000000295e+03,1.390382000000000062e+03 1.598677500000000009e+03,-2.224851999999999862e+03,2.178130499999999756e+03 -1.706183500000000095e+03,-1.378677999999999884e+03,-2.727301500000000033e+03 -2.159615500000000338e+03,2.368971500000000106e+03,-1.405005000000000109e+03 -2.024792000000000144e+03,-3.827144999999999868e+02,2.829088499999999840e+03 -2.888773999999999887e+03,-1.837618999999999915e+03,-7.267259999999999991e+02 3.132766000000000076e+03,-1.568847000000000094e+02,1.552792500000000018e+03 1.014972000000000094e+03,-1.909155499999999847e+03,2.752263500000000022e+03 4.025490000000000350e+02,-3.374174999999999727e+03,8.383935000000000173e+02 -2.799769000000000233e+03,1.427184500000000071e+03,1.540923999999999978e+03 1.793729000000000042e+03,2.947486499999999978e+03,-5.872474999999999454e+02 -2.765017499999999927e+03,5.529755000000000109e+02,2.073378999999999905e+03 3.322483500000000276e+03,-8.319325000000000045e+02,-7.204050000000000864e+02 8.131129999999999427e+02,2.754678499999999985e+03,-2.000152000000000044e+03 -6.884744999999999493e+01,-6.721085000000000491e+02,3.434172000000000025e+03 7.558914999999999509e+02,-3.349930499999999938e+03,-6.757135000000000673e+02 2.704257500000000164e+03,-2.126369000000000142e+03,-6.446299999999999955e+02 -5.605354999999999563e+02,1.261445500000000038e+03,-3.216293500000000222e+03 -5.115844999999999914e+02,2.573458999999999833e+03,2.316373500000000149e+03 3.105795000000000073e+03,1.473888500000000022e+03,-6.570339999999999918e+02 -1.970461499999999887e+03,8.276870000000000118e+02,2.771681499999999687e+03 -1.334595500000000129e+03,5.146295000000000073e+02,-3.194373000000000047e+03 -1.070838999999999942e+03,-7.132754999999999654e+02,3.254926500000000033e+03 -1.164386999999999944e+03,-4.693955000000000268e+02,-3.267088999999999942e+03 -3.367836499999999887e+03,-9.431239999999999100e+02,-1.348868500000000097e+02 -3.358362000000000080e+03,7.341950000000000500e+02,-6.575484999999999900e+02 1.578374000000000024e+03,-3.116179500000000189e+03,-2.194552499999999782e+02 -2.699172000000000025e+03,2.209112500000000182e+03,-2.903365499999999884e+02 2.484356000000000222e+03,1.446056499999999915e+03,-1.996721999999999980e+03 -2.430900500000000193e+03,9.778824999999999079e+01,-2.516177999999999884e+03 2.385421499999999924e+03,1.865853500000000167e+03,1.754525499999999965e+03 -4.959115000000000464e+02,-2.552343499999999949e+03,-2.342994499999999789e+03 -2.591228499999999713e+03,1.376280500000000075e+03,-1.908242000000000189e+03 -3.596460000000000150e+02,2.888784499999999753e+03,-1.943084499999999935e+03 2.043695500000000038e+03,-2.102737000000000080e+03,-1.910972000000000207e+03 -3.071424999999999841e+02,-1.188778499999999894e+03,-3.277571500000000015e+03 -1.926771000000000186e+03,-2.784193999999999960e+03,-8.864660000000000082e+02 2.931050500000000284e+03,-1.617707000000000107e+03,1.020768000000000029e+03 1.270251500000000078e+03,-1.980755000000000109e+03,-2.590958999999999833e+03 -6.426385000000000218e+02,1.389783500000000004e+03,3.147304999999999836e+03 -2.514130500000000211e+03,-2.434953500000000076e+03,-1.242139499999999863e+01 
1.514737000000000080e+03,2.402263500000000022e+03,2.045655500000000075e+03 1.756429499999999962e+03,2.430179499999999734e+03,-1.805317499999999882e+03 -5.968129999999999882e+02,-1.798191500000000133e+03,2.942842000000000098e+03 1.621182500000000118e+03,1.498182000000000016e+03,-2.716101499999999760e+03 1.342995499999999993e+03,-2.844001999999999953e+03,-1.535583000000000084e+03 -2.499580999999999676e+03,-8.801345000000000027e+02,-2.286364500000000135e+03 9.072175000000000864e+02,3.105402999999999793e+03,1.335449499999999944e+03 0.000000000000000000e+00,2.846151000000000408e+02,3.488407999999999902e+03 1.272715500000000048e+02,-3.166155999999999949e+03,-1.486362500000000182e+03 1.997988999999999805e+03,-1.080089500000000044e+03,2.662978500000000167e+03 -9.877174999999999727e+02,5.242825000000000273e+02,3.316557999999999993e+03 2.521228499999999713e+03,2.141699000000000069e+03,-1.143040500000000065e+03 9.306184999999999263e+02,3.362390499999999975e+03,2.797731999999999744e+02 dipy-1.11.0/dipy/data/files/gtab_isbi2013_2shell.txt000066400000000000000000000114371476546756600220220ustar00rootroot000000000000000.000000000000000000e+00,0.000000000000000000e+00,0.000000000000000000e+00 -2.266327214356250806e+03,-9.183275903750475209e+02,-5.200340317942658430e+02 -3.451967493954964539e+02,2.539315081030927601e+01,1.459518548048878984e+03 -8.687821369576009829e+02,2.323250139612324801e+03,-3.126121995295425791e+02 -1.332678531203753437e+03,6.057252180305250704e+02,-3.272077210434978838e+02 -7.133089050173707619e+02,5.552534181107091626e+02,2.330854789063293993e+03 -5.909558927697119088e+02,-1.373764685157115309e+03,-1.163689074277325233e+02 -9.620046143336023761e+02,-1.663268413255978658e+03,1.599401546662312057e+03 1.524040250997601333e+03,-4.976289031939233496e+02,1.918245757989622234e+03 -1.310906776715079332e+03,-4.267496519835166850e+02,5.911075682939853095e+02 2.137449464632421950e+03,-9.795263447809793433e+02,-8.496104554574311578e+02 8.036905147915512089e+02,4.652768750402192950e+02,1.177963915401019221e+03 6.118441408476945753e+02,2.071391299541785429e+03,1.258961807003270678e+03 3.149948695314147926e+02,-1.233873374323025928e+03,7.926756766203922098e+02 -2.016575661811054943e+03,-5.906019227499958788e+02,1.354478485999431769e+03 6.460309928619076345e+02,-1.121364874288886767e+03,-7.584093716278345028e+02 6.044230534930856038e+02,-1.608119359468752464e+03,1.816211688682767090e+03 -8.824433249298083410e+02,-9.492765701355546071e+02,-2.137795072423571582e+03 5.310700425339645108e+02,8.492127701239588760e+02,-1.116603009570261293e+03 -1.054671935951507294e+03,-2.258501524869488094e+03,1.919322007337414959e+02 -1.373961930556389689e+03,-3.773567140735632961e+02,-4.688608788599892705e+02 2.071938848285220956e+03,-1.281610506785919370e+03,5.608421505757620480e+02 -1.311137097464583690e+02,-1.074853619200096091e+03,-1.038026441093374387e+03 8.942967847836255260e+02,-1.808493731043927937e+03,-1.476341317412907301e+03 1.764983771476244669e+02,6.108062603280885696e+02,-2.417801487965517026e+03 6.987629376269226213e+02,-5.293533263847286889e+02,1.217175177550222770e+03 2.498545348743911745e+03,-7.606967381351635993e+01,3.852979361575307138e+01 -1.215314074213265940e+03,4.552574404363761573e+02,7.521651174751937106e+02 -1.626364931671346767e+03,-1.503360529448681746e+03,-1.159674190247172191e+03 -3.607006642415059332e+02,1.455976251491915491e+03,5.309040147464097359e+00 8.331893303581111354e+01,2.340805027516016253e+03,-8.738934594980764814e+02 
1.124353191327967352e+03,9.833302798737638568e+02,-1.374462143312825049e+02 -1.653720553265972285e+03,4.593280934373784135e+02,1.817753017955097675e+03 -3.759970659660853016e+02,2.382088554995548748e+03,6.589994859968487617e+02 3.001173301911491649e+02,3.906608855968609362e+01,1.469150580724652400e+03 -2.128663806851728168e+03,-2.144264981200675990e+01,-1.310851101448424743e+03 -1.030316972919468753e+03,1.089894130130593567e+03,2.403581537912267052e+01 -1.901566073143071662e+03,-1.487916930230456046e+03,6.481895387961882307e+02 -9.451522326186659484e+02,-1.215736079475225040e+02,1.158407145621336895e+03 6.878294554085988466e+02,-8.127358768845556369e+01,2.402141803519633413e+03 1.624240188331681566e+03,-1.850731259269334714e+03,-4.320157596333787637e+02 8.200799773977817040e+02,1.037808396856925128e+03,7.074055145986018260e+02 1.175216553582782353e+03,7.076400293120931337e+02,-2.090002784950342630e+03 -3.784281936408362981e+02,-1.317364986763873276e+03,6.093780385822526569e+02 1.423187921813606181e+03,-1.642128421682733006e+03,1.236102902636202316e+03 1.208722462037743981e+03,-2.384435267328210273e+02,8.556486979623286970e+02 3.773953003791272209e+01,-1.451307454120837292e+03,-2.035259787222704063e+03 -4.951854729525189782e+02,6.748410455297414501e+02,1.244741302699125299e+03 1.670044415493887072e+03,1.836049453503702352e+03,2.997900174562804523e+02 -2.363898709855261586e+03,1.599047940265688794e+01,8.134661603982726774e+02 -1.488815236979110068e+03,1.018782253857645372e+02,1.518223215837661542e+02 1.956382677784426107e+02,1.565485886215385335e+03,-1.939324523704342937e+03 -8.612954914426065045e+01,7.484932807333950677e+02,-1.297050311075543959e+03 1.660689898552811883e+03,6.556327428480628896e+02,1.749929932125905907e+03 3.214001834136819724e+01,1.420492248229029428e+03,4.808001580098056706e+02 -2.912991904832986734e+02,-2.478128359467806149e+03,-1.549987600762498516e+02 1.516096796083063509e+03,-1.290727017378622122e+03,-1.511778512717902004e+03 -8.822996955012581566e+02,1.048749381914242576e+03,-6.096490640137826631e+02 -2.031307195810249823e+03,8.358456205600374460e+02,-1.193797794787752991e+03 9.602188732099014032e+02,9.029973251154810896e+02,-7.159438150901182780e+02 -6.268612903098597400e+01,-5.301030544174469696e+02,-2.442347477515107130e+03 -1.907738712044329077e+02,-6.099802978175640646e+02,-1.357029611445553428e+03 1.221964464137328832e+03,1.961471459061434871e+03,-9.536417375896357953e+02 -2.341088661959557612e+03,-8.057016790353089846e+02,3.466246979718731609e+02 dipy-1.11.0/dipy/data/files/gtab_taiwan_dsi.txt000066400000000000000000000362501476546756600214370ustar00rootroot000000000000000.000000000000000000e+00,0.000000000000000000e+00,0.000000000000000000e+00 3.060599742382443651e+02,-2.765200783212004509e+01,-2.065571669387280451e+01 -2.575623717651059152e+01,-3.056332799436796108e+02,2.808762071403956995e+01 -2.298419703420260873e+01,-2.631524008263777148e+01,-3.060118213829101137e+02 2.288267619722078194e+01,2.621510476963157643e+01,3.060280238997245874e+02 2.565432154927272634e+01,3.056307052103832689e+02,-2.820864793729555231e+01 -3.060786255380648981e+02,2.753264386414221931e+01,2.053846417284802328e+01 3.957169112450297348e+02,-4.706585449911170258e+02,1.066115291785341590e+01 3.996136383112861949e+02,-7.619153715938998062e+01,-4.612253134216570629e+02 4.645188593254337093e+02,-1.881913001993654078e+00,4.030430345936409822e+02 4.684046378059773588e+02,3.925064641541216588e+02,-6.899833968350519342e+01 
-6.881458102723127013e+01,-4.686915655385455466e+02,-3.921960859456324329e+02 -3.919827094470945017e+00,-3.944913987875558519e+02,4.717903890905229218e+02 3.762919180145303244e+00,3.943225724193932820e+02,-4.719327805098901081e+02 6.868429110232389689e+01,4.687154020658587683e+02,3.921904384632695724e+02 -4.684076386195116584e+02,-3.925343798873016112e+02,6.881892682528970795e+01 -4.645089378174691888e+02,1.724467875798906924e+00,-4.030551735162717932e+02 -3.997932084373172188e+02,7.603910042827406812e+01,4.610948337310392162e+02 -3.958842029919957213e+02,4.705142547243480635e+02,-1.081822178473176344e+01 4.451682293263964993e+02,-6.221211594988657225e+02,-5.164489427834931803e+02 5.247096225433380141e+02,-5.312420748196007025e+02,5.425778008284095222e+02 5.342094714925420931e+02,4.354383677705396849e+02,-6.139565688556374425e+02 6.137389895140746603e+02,5.265011995205273934e+02,4.450280212005625913e+02 -6.137117042060538097e+02,-5.265050958462578592e+02,-4.450610386996640955e+02 -5.343587285723475588e+02,-4.355955547189902859e+02,6.137151309097728245e+02 -5.248382747781089392e+02,5.309857216276328700e+02,-5.427042921845379624e+02 -4.454299932274629441e+02,6.220262675053053272e+02,5.163375288188925651e+02 1.223260965509586185e+03,-1.105195865064901852e+02,-8.233487267852983393e+01 -1.027376982809065424e+02,-1.221535699862614365e+03,1.125011080527421541e+02 -9.186216412046562141e+01,-1.051755212393736798e+02,-1.223053740657020626e+03 9.165933192442591348e+01,1.049754576277323395e+02,1.223086145848695423e+03 1.025339929453075456e+02,1.221530513357083919e+03,-1.127430052299052647e+02 -1.223298245390085640e+03,1.102810135967481386e+02,8.210055338188493579e+01 1.309532167632073424e+03,-8.060567319704998681e+02,-2.901804235093799278e+01 1.315616458707049787e+03,-1.822767383882529089e+02,-7.754821237279820707e+02 1.418301908210537704e+03,-6.473885939890702446e+01,5.913311908313959293e+02 1.424390160056957939e+03,5.590415150214839741e+02,-1.550137297641648502e+02 5.685897422544877600e+02,-1.426805659368653551e+03,7.984557217878109725e+01 5.807895781612451174e+02,-1.792958433008455472e+02,-1.412791727918280685e+03 7.860309621033701433e+02,5.556040880755914202e+01,1.320799896876126923e+03 7.981451219078418262e+02,1.303372472226758418e+03,-1.721294949049230354e+02 -1.661319996627892408e+02,-1.423776032930948986e+03,-5.574100525999256206e+02 -6.373182475918985546e+01,-1.306378415686428752e+03,8.091709890631771032e+02 -1.599399102184175092e+02,-7.999475199451392200e+02,-1.303820229346417591e+03 4.513471650802490842e+01,-5.651759281416964313e+02,1.429679344333865856e+03 -4.537838032058055404e+01,5.648740457392486860e+02,-1.429790934035341252e+03 1.597298441428331159e+02,7.998893592556471503e+02,1.303881662513778792e+03 6.348757213011214873e+01,1.306207196911330811e+03,-8.094665446589300473e+02 1.659213869491033790e+02,1.423848553773994809e+03,5.572875283626835881e+02 -7.982316475003476626e+02,-1.303356800019206958e+03,1.718467013723954437e+02 -7.861007442366864097e+02,-5.579579414929161629e+01,-1.320748442840565076e+03 -5.810991571178793720e+02,1.790668337341480481e+02,1.412693469459428343e+03 -5.688841082168794401e+02,1.426674051687754172e+03,-8.010007277857546626e+01 -1.424365082466851163e+03,-5.591823225910078463e+02,1.547360395994629130e+02 -1.418260680277899155e+03,6.447766687747051151e+01,-5.914585980875659743e+02 -1.315815036927222764e+03,1.820219067290956900e+02,7.752050142168296816e+02 -1.309714981246806929e+03,8.057685995210898682e+02,2.876859265757046202e+01 
1.378532999392024294e+03,-9.475546163768949555e+02,-7.807067430028159833e+02 1.491101612643861017e+03,-8.187878911784133606e+02,7.168810013039011437e+02 1.504378911701863899e+03,5.481091518971567211e+02,-9.187689849104049244e+02 1.616920918173165774e+03,6.769327743141775500e+02,5.788305135645115342e+02 5.667300689798133817e+02,-1.627650931571458386e+03,-6.612756413695425408e+02 6.790192881037959296e+02,-1.499059388742953843e+03,8.363430847472923233e+02 5.735820095967370662e+02,-9.441958708425680697e+02,-1.478923201437725993e+03 7.983763482558605347e+02,-6.869623581997329893e+02,1.516045489080818697e+03 6.993954119729967260e+02,5.513105248225369905e+02,-1.616978281527312674e+03 9.242276633949672942e+02,8.090049261766590689e+02,1.378053067061416868e+03 8.184834411452368386e+02,1.363595586874766923e+03,-9.372341927323776645e+02 9.307480713274635491e+02,1.492457929264358881e+03,5.603511016283881645e+02 -9.307983572293142061e+02,-1.492374599025283942e+03,-5.604894953193447691e+02 -8.186738917716397737e+02,-1.363731915964772497e+03,9.368694254326629789e+02 -9.242700904801380375e+02,-8.090759345825546234e+02,-1.377982921491845900e+03 -6.996875391908079109e+02,-5.515983318669281061e+02,1.616753731334099712e+03 -7.985444365880805435e+02,6.866014941334057085e+02,-1.516120434216226386e+03 -5.739383475486121142e+02,9.440633614507000857e+02,1.478869548939179822e+03 -6.792900994017161338e+02,1.498785675319946449e+03,-8.366136864231536947e+02 -5.670796005299498574e+02,1.627602686715093341e+03,6.610947139862807944e+02 -1.616822270023855481e+03,-6.770494197444879774e+02,-5.789696282004430259e+02 -1.504514594681671042e+03,-5.483313079406922270e+02,9.184141827747965863e+02 -1.491208124943704888e+03,8.184254504975290274e+02,-7.170732947752254631e+02 -1.378843147706513491e+03,9.473092654522874909e+02,7.804567442280463183e+02 1.584489418789225056e+03,-1.883876011287723486e+03,4.299367208835884213e+01 1.599907036018945064e+03,-3.050430833680687215e+02,-1.846263847175255933e+03 1.859720871054988265e+03,-7.533046566298685853e+00,1.613327472948968989e+03 1.875300406678335321e+03,1.571172190097363455e+03,-2.758810138025534684e+02 -2.751990001622239674e+02,-1.876185657890463972e+03,-1.570234659990613864e+03 -1.569180411383414153e+01,-1.578909329933991330e+03,1.888979379223353817e+03 1.537773327623712483e+01,1.578571395499443724e+03,-1.889264373937704022e+03 2.749381736675210277e+02,1.876233335826987286e+03,1.570223382258684524e+03 -1.875306372629160023e+03,-1.571228080280154927e+03,2.755219202089027135e+02 -1.859701003363345308e+03,7.217874295875285284e+00,-1.613351815438906215e+03 -1.600266513440163635e+03,3.047379455874483369e+02,1.846002673476180007e+03 -1.584824240270395876e+03,1.883587146555032405e+03,-4.330806834798309524e+01 2.751580822960364912e+03,-2.486007347304538655e+02,-1.853689548124000055e+02 1.611398444426871265e+03,-2.076314517715665261e+03,-8.715928389000364405e+02 1.749091243873480607e+03,-1.918809001555268196e+03,9.623995200308421545e+02 1.619574944353243382e+03,-1.239222743532253617e+03,-1.873143078233903225e+03 1.894949501326790141e+03,-9.241789190699852270e+02,1.795110223069319545e+03 1.773664275765259617e+03,5.923306586983419493e+02,-2.042209692377373131e+03 2.049108372268141011e+03,9.079044404007614730e+02,1.626107439808695290e+03 1.919606900134974694e+03,1.587136842361944446e+03,-1.209738398403369274e+03 2.057250984517960205e+03,1.745052080865178141e+03,6.243978073055643563e+02 6.253118210934724175e+02,-2.072281378365647925e+03,-1.726845683690275791e+03 
9.005755011350458972e+02,-1.757220490019601584e+03,1.941262711796275880e+03 9.335012295928689809e+02,1.591112077962034391e+03,-2.065017871523619306e+03 1.208827791671634941e+03,1.906766084385130853e+03,1.603227828326614826e+03 -2.312749624868818046e+02,-2.747679662236234435e+03,2.532377646798806836e+02 -2.066585745791770137e+02,-2.366090926341302065e+02,-2.751121475114422537e+03 2.063543985318382852e+02,2.363090692864599873e+02,2.751170093974476913e+03 2.309695126782845591e+02,2.747671897221782274e+03,-2.536005312116034816e+02 -1.208912988988030747e+03,-1.906708347619437291e+03,-1.603232254595191762e+03 -9.338153802740702076e+02,-1.591413096390093870e+03,2.064643841489782972e+03 -9.008658780612915962e+02,1.756804939152404359e+03,-1.941504075585362443e+03 -6.257308003513546737e+02,2.072234022865709903e+03,1.726750740544057635e+03 -2.057188503399082038e+03,-1.745049581336911160e+03,-6.246106148301645362e+02 -1.919748268521095952e+03,-1.587311360867676967e+03,1.209285007419945032e+03 -2.049033553547601514e+03,-9.080566040323811876e+02,-1.626116754820904134e+03 -1.773972078699697704e+03,-5.926550876316190397e+02,2.041848185124994188e+03 -1.895054767280823171e+03,9.237352122864908779e+02,-1.795227474886749860e+03 -1.620012636177589911e+03,1.238970222779204505e+03,1.872931617996605610e+03 -1.749363801658917055e+03,1.918410180057762545e+03,-9.626991588737391794e+02 -1.611824710475071925e+03,2.076102115206173494e+03,8.713105703125619357e+02 -2.751636775163151924e+03,2.482429597619677679e+02,1.850175410558365741e+02 2.819486845238788192e+03,-1.227648640056062732e+03,-1.063096707450663274e+02 2.828162390533582311e+03,-3.452315832136267204e+02,-1.161998987401962950e+03 2.973334855834526024e+03,-1.789263453068564047e+02,7.714883006466220650e+02 2.981959980866465685e+03,7.034744983688980255e+02,-2.845475402379931893e+02 7.230249676687612919e+02,-2.984012797181214864e+03,2.020681132349376412e+02 7.489778006532395693e+02,-3.367769033382248267e+02,-2.965390795747931861e+03 1.184490518661768192e+03,1.617447064878777212e+02,2.835268922187722183e+03 1.210447621351424459e+03,2.809291044684306144e+03,-3.326099550843914585e+02 -3.164204194545539508e+02,-2.979684082978914830e+03,-6.994925902354550544e+02 -1.712007129092333457e+02,-2.813585486746007064e+03,1.233837924798716585e+03 -2.989023076954838416e+02,-1.214904567546315320e+03,-2.811155154417737776e+03 1.361337529030404312e+02,-7.164565036804343663e+02,2.989328800860578212e+03 -1.364724277583266314e+02,7.160417096522999145e+02,-2.989412742747303582e+03 2.985993340325934469e+02,1.214748407075016075e+03,2.811254834628835852e+03 1.708609740912376651e+02,2.813417148331899170e+03,-1.234268802572842105e+03 3.161164165399382000e+02,2.979774390314867105e+03,6.992453031787978261e+02 -1.210633049245885331e+03,-2.809258029081730001e+03,3.322137054870627253e+02 -1.184657796865844148e+03,-1.620704477796920457e+02,-2.835180430639548376e+03 -7.493994931753793480e+02,3.364579185183467871e+02,2.965320466441154167e+03 -7.234309579091122941e+02,2.983889571783220163e+03,-2.024343660104547951e+02 -2.981934995609053658e+03,-7.037366589637366587e+02,2.841608642874509769e+02 -2.973292783721014985e+03,1.785519144542232937e+02,-7.717371548119670024e+02 -2.828374738710904239e+03,3.448638511725153535e+02,1.161591262698065520e+03 -2.819682428423139299e+03,1.227230202143105771e+03,1.059529791444305147e+02 2.881304513741023584e+03,-1.374783354193673404e+03,-1.125291085954029541e+03 3.033515438118813563e+03,-1.200452184780246171e+03,9.027312106885174217e+02 
3.051646416594431685e+03,6.506646001614756187e+02,-1.312369889239718077e+03 3.203921179788254904e+03,8.250851464605177625e+02,7.157852853995125315e+02 6.825453354012404361e+02,-3.216977826098676360e+03,-8.021287499594091059e+02 8.345628163833371218e+02,-3.042768927986690414e+03,1.226086112958236072e+03 7.005984814201951849e+02,-1.365872669659540179e+03,-3.016915414476985006e+03 1.157021815870793944e+03,-8.431097265139912906e+02,3.067359044301229915e+03 8.709746846479574742e+02,6.593944380604731350e+02,-3.203876881803872493e+03 1.327437412918172640e+03,1.182806721685336015e+03,2.880469262797486863e+03 1.193479328296101130e+03,2.859492562707587695e+03,-1.362767176281363845e+03 1.345807443829256727e+03,3.033862884038389893e+03,6.653595456503406922e+02 -1.345957356279911892e+03,-3.033739311790014199e+03,-6.656196986086278002e+02 -1.193744183881032086e+03,-2.859613456516492079e+03,1.362281433023298405e+03 -1.327575570191480438e+03,-1.182969916343018213e+03,-2.880338570803467064e+03 -8.713894667325072305e+02,-6.597978521331864386e+02,3.203681037742064746e+03 -1.157264135139769678e+03,8.426501969698164203e+02,-3.067393904777462922e+03 -7.010493324374873509e+02,1.365666179882160350e+03,3.016904161324157485e+03 -8.349571096592077311e+02,3.042496863227117501e+03,-1.226492748565068041e+03 -6.829864543349780206e+02,3.216954517131669490e+03,8.018467047391795859e+02 -3.203800938671151016e+03,-8.253273808934641238e+02,-7.160441744176018801e+02 -3.051780134508651372e+03,-6.509826429453520404e+02,1.311901143075285290e+03 -3.033616851574064185e+03,1.199978718217251526e+03,-9.030198633870395497e+02 -2.881637164928362381e+03,1.374398520113323002e+03,1.124909310839740101e+03 1.781071476557796586e+03,-2.488504613201357188e+03,-2.065428087701867298e+03 2.098964219088664322e+03,-2.124711308557529264e+03,2.170441213733573022e+03 2.137093445331861403e+03,1.741648533401185887e+03,-2.455678316070957408e+03 2.454955958056298641e+03,2.106004798082109573e+03,1.780112084802250365e+03 -2.454901378352156371e+03,-2.106012592698100889e+03,-1.780178132087265340e+03 -2.137391958256570888e+03,-1.741962917435640975e+03,2.455195473085442700e+03 -2.099221546854858843e+03,2.124198555924154334e+03,-2.170694218960844410e+03 -1.781594984817244722e+03,2.488314824566794414e+03,2.065205230458988808e+03 3.122071386978404462e+03,-2.500457173606919241e+03,-1.960044816937114831e+01 3.141650565272775566e+03,-4.884078609877844315e+02,-2.427239066727171121e+03 3.472637623359345525e+03,-1.092074727769958997e+02,1.982135632271116037e+03 3.492511150936095873e+03,1.902860559500974659e+03,-4.260135580972330445e+02 1.926738393524785124e+03,-3.501888028920384386e+03,1.563949993498424078e+02 1.956413636250567151e+03,-4.836033707272409856e+02,-3.455224082995787739e+03 2.452883595697436249e+03,8.480289013174439106e+01,3.158507643774781172e+03 2.482590061957568196e+03,3.103437592759718427e+03,-4.532346987105904645e+02 -4.431545236004035360e+02,-3.492041647612927136e+03,-1.899805042510755129e+03 -1.123974266903735497e+02,-3.113128725733727151e+03,2.509142553838837102e+03 -4.335657648310167929e+02,-2.486203684676753255e+03,-3.103354953250792732e+03 6.280308967399663089e+01,-1.917694343293884685e+03,3.509772667513671877e+03 -6.320049359778251130e+01,1.917214973507414925e+03,-3.510027413421148594e+03 4.332310247845669551e+02,2.486178041584565108e+03,3.103422244024626252e+03 1.119994400041999683e+02,3.112784436116729012e+03,-2.509587452094109267e+03 4.428191523425172136e+02,3.492151038798712761e+03,1.899682163030365018e+03 
-2.482671475424489927e+03,-3.103439339104013925e+03,4.527765603641891516e+02 -2.452934275758073454e+03,-8.519077135765037667e+01,-3.158457847001713617e+03 -1.956910597927751269e+03,4.832273103631895310e+02,3.454995264575221881e+03 -1.927206706691009003e+03,3.501612185631609009e+03,-1.568005456668483646e+02 -3.492477240167506352e+03,-1.903024130636394375e+03,4.255606715000259896e+02 -3.472573526139696696e+03,1.087929047581199598e+02,-1.982270720519273937e+03 -3.142032771572712136e+03,4.880050899196319278e+02,2.426825311920005788e+03 -3.122424355004954123e+03,2.500019500228152992e+03,1.920014976175221477e+01
dipy-1.11.0/dipy/data/files/hermite_constraint_0.npz [binary npz payload omitted: members indices.npy, indptr.npy, format.npy, shape.npy, data.npy, i.e. a sparse constraint matrix in scipy.sparse.save_npz layout]
dipy-1.11.0/dipy/data/files/hermite_constraint_10.npz [binary npz payload omitted: same sparse-matrix layout]
dipy-1.11.0/dipy/data/files/hermite_constraint_2.npz [binary npz payload omitted: same sparse-matrix layout]
dipy-1.11.0/dipy/data/files/hermite_constraint_4.npz [binary npz payload omitted: same sparse-matrix layout]
dipy-1.11.0/dipy/data/files/hermite_constraint_6.npz [binary npz payload omitted: same sparse-matrix layout]
dipy-1.11.0/dipy/data/files/hermite_constraint_8.npz [binary npz payload omitted: same sparse-matrix layout]
dipy-1.11.0/dipy/data/files/life_matlab_rmse.npy [binary payload omitted: small NumPy .npy array; NUMPY format header visible in the raw bytes]
dipy-1.11.0/dipy/data/files/life_matlab_weights.npy [binary payload omitted: small NumPy .npy array; NUMPY format header visible in the raw bytes]
dipy-1.11.0/dipy/data/files/meson.build
data_files = [
  '55dir_grad.bval',
  '55dir_grad.bvec',
  'eg_3voxels.pkl',
  'aniso_vox.nii.gz',
  'ascm_out_test.nii.gz',
  'C.npy',
  'C1.pkl.gz',
  'C3.pkl.gz',
  'cb_2.npz',
  'circle.npy',
  'dipy_colormaps.json',
  'dki_constraint_0.npz',
  'dki_constraint_2.npz',
  'dki_constraint_4.npz',
  'dki_constraint.npz',
  'dki_constraint_SC.npz',
  'dsi515_b_table.txt',
  'dsi4169_b_table.txt',
  'eg_3voxels.pkl',
  'EuDX_small_25.trk',
  'evenly_distributed_sphere_362.npz',
  'evenly_distributed_sphere_642.npz',
  'evenly_distributed_sphere_724.npz',
  'fib0.pkl.gz',
  'fib1.pkl.gz',
  'fib2.pkl.gz',
  'func_coef.nii.gz',
  'func_discrete.nii.gz',
  'grad_514.txt',
  'gtab_3shell.txt',
  'gtab_isbi2013_2shell.txt',
  'gtab_taiwan_dsi.txt',
  'hermite_constraint_0.npz',
  'hermite_constraint_2.npz',
  'hermite_constraint_4.npz',
  'hermite_constraint_6.npz',
  'hermite_constraint_8.npz',
  'hermite_constraint_10.npz',
  'life_matlab_rmse.npy',
  'life_matlab_weights.npy',
  'minimal_bundles.zip',
  'record_horizon.log.gz',
  'repulsion100.npz',
  'repulsion200.npz',
  'repulsion724.npz',
  'S0_10slices.nii.gz',
  'ScannerVectors_GQI101.txt',
  'small_25.bval',
  'small_25.bvec',
  'small_25.nii.gz',
  'small_64D.bval',
  'small_64D.bvals.npy',
  'small_64D.bvec',
  'small_64D.gradients.npy',
  'small_64D.nii',
  'small_101D.bval',
  'small_101D.bvec',
  'small_101D.nii.gz',
  'sphere_grad.txt',
  't1_coronal_slice.npy',
  'tdesign45.txt',
  'test_piesno.nii.gz',
  'test_ui_text_block.npz',
  'tracks300.trk',
]

py3.install_sources(
  data_files,
  pure: false,
  subdir: 'dipy/data/files'
)
dipy-1.11.0/dipy/data/files/minimal_bundles.zip [binary zip payload: TrackVis streamline bundles; entries visible in this excerpt: sub_1/, sub_1/AF_L.trk]

ƻ拏z඼p嚵ЌsKBc q$9p%?G2y(̇POvY&3^=.t=<ly /poM BvWU?)L`Z[K^_fKڐT wk@Vڷ3OﭦGo bDj͈;"R )\]x??U-I|7}Z:HG`FS'!'QD":RF/16FCkѦ }Lj*˿8mz{; *8tC;L|Q ӿ-bɇf6ݯS,>^.X MD,"q|r7˧v).^d0[7&,w` s74teNivD@zwVȹ\?Lȑoh93e( ! U-^uN_D2%5F튉B)\)&=H/w lY55a[$?;|}ܙlM3[@e)0;{yZ3P}3ESy vV̟L|&MGMY2tKgɦnNj2P7f} F䋓9,hK(4WL)KMvC|R5hØW0Ӆ{<.2OצdVI+-R#, 5iґ!t7Ӆ,ɱ“)W&4h@oưޝi}5Mpբf4Qp+֙*G΃J1@CDӣ'd(t ٺ5:3OyhN9LXG$G:ġc_kA>Eg),-<ɡ 0}tGqaosu>E݉M[Cןz#)#w12„H7V2|Ӱc(G8%-=hjmt 0iwICJ]k>:y!Giyn){ ߣl2,an{Z+ZR&'н zJ i1m+ǎloԙ /֦;7qm]UIpudgnۺ3Bx$*%`Y݄gBy] ig+3JPs_vM.,5E9xTH;_N/15'a|GHn}aӧa%I+eNreRsx=+qA8IF2)\8!Ogq9fB]8IG#`A/9ҕ3()Gf Ͽ9cZy۞O"wR}&,7! J¥mgn$9nH܆WfD/6եNs)?dn&.Eҙ3S%7ؗqX nъT6_! |P$KZ3|Qn%Dj0Y]cjnifPN~kyJnb4:KJ֗ }h^cc${Ã*V%Xpo:w!]c9u$ήdf7OI^}7b-cH}&ǘ4ķ:POOkoszT)KOMiwQsޔ2}9 ii8Dr]j߳cCJX5EJT9b+DOމ&HGwӤ- ?$~ٚ2]+g]Tޕcͤ|K8&Ԡpɋfe!rh){u'\4W\<|[xn<͚5TNo+Ȯ6/ 3R TIJ\r:wd_}~a<#g{sd}&3Րrac :&yY&+SFy&T$3-tIbBˠu&%Mhf=;4#[>UF%i𺁔7mCS~xQ,ҧޣR+Zi즟5-Ћ(UVbm5?V'ޱpAaUaezq=5&Edz Qw^ jNlGVfo̭[rp4o@ܱD>Bk gxUS~=*yizۅq|Qp&0Ǚ>T_p wt>y+"G!^*/w'KBfsŷ^Lc|/M܂LH;y1թ_ 2{C|%OQgXE>C #> N.!4 OǟxEqP Gbj 8C WҶ Xe k?[9|%cO}F\}_mƴhŅ|&rjΤNez K[vB7\,"$NiB'h_ X[ñN{3ENw2ѥ!Q{>td5'1A_{RhY sA;qT,:W9pFBX9*Bo]Y fuYđo` T|K%g؁ᴣתCQ[(,}f*L=[/imMFt*[P|ѐ: Dtq7ᓇ|P&Ss#ʈ+\ә|~" LiPE],:j*W} ѿj_5Mڱ4\h7& rT]G6K؎L9TOTF\?cr>>C&m_v9HZX`'WEӫ-OIе.w:G@4#zҕ.vP5^uaq@RU:4H=3_>;Uz tcXEBTFLQt̮‹kЦCۀ.ט jbbIN&zKhiПA17bhSc˖'>*zҚvì F Q b>ϐ ~yjM%Smc~fgn#; F}9Pd#5Hw5w9$l1v Gf~#[ ObVY‡!ᔾd*Kq:]2?~cKq >y*d_}MpXTLF5>h(4i(yNtvK3.=HG'*G P͍|tWgjv1-YCzdֽ =oIу[9=U~7~ˀl~lDY֡Lu$KUhH3 p?REr9~g։WWb&ʪrAVR5nixW8pl!fS آMelDfã7Hͩ+~ jN!)n{w3 s*-Jal-؛2tG2i?darZӆIy(~6!]A|@F գ9]o˴~J$,' }gOW?%?\"Ͻf7}lrd+&oC2‰y B|,<ֶ=61Ka1HPawpB34wHҢW5)yG0uZ+=_*xd"S4{y0ZLeϣ'0UG缯1@\\ Nw`a}'Ӑ83/݊NeT#nL=$m^)c잷=yY`=޼7Ne<6c8 T^\8 H}3w0a.]Co& (VKxsj[Zb-%ZĮ Uq1|*Lr ^ KC[]#gl18|0*@Au9;[㪼y: ǐUhYYɏyz.A}lfܙc(]ނw*y]J1`#_Cegt-{^L<Ӌ6{X"[^pvW7钪UԑWT0WV.$*8kKo9Ik\ݥAvb5;{\5Oho 6@HlK'l_=\J ^_]zsx q/_^sH7 -k:*7o`&ZHtU($'ye/fUy 3~{6ٍnE9sSxj*‡GEggYrljz4NyԢi͂X6 53]H)CXyCBv;q`u7Η@(?}RUX? w1;b[,_4}y\}Z3j|wp^xw38OiЙGGdbf~Ӄ1Do~,lFgUG>wr_68b=;a3Z6YhĽg~@ #W4R[xϞ;poFI?^gɴt,Y$Ń;<3h1kzc+=a޲Y$V~Ô#5$;sJ6U<;C^P֔)/E!}q =NĂ02Mz #c->r(s#u%/=z⻧"E[SoQȀܡӦt쌒7) 6'mՙJPdFvT{|6qv};ڹ]#w_dnH5\qsJ#FS>GO 8f lE+{fjS`^ yCʸ?NX7Xǿ] NR;.aS9>T#hE3]i5\EF OJW7mRZ@}ka.97׷u-alORm~х (9})7==a]|: BxN$D̉3Vؓj0dcLNQ"C{4AvSW l4{sl0(N헸ّ|EEم?g:,?Wq!3E?jtAqrUä)*uo O Fΰ@mr" ҄aLύe-܉/i5]O/3KxχMu]iGz!g3[1~APK6mf+3PKf\T3 sub_1/CST_R.trkUT ݷbC bݷbux i4_=4RHAMBRUd(!3eXd4dȔFy^ֳʕðXzFōX츜Yj{ܠKuS'^SQFGR!1#~P֊ 0y d*lqRTS· ޏάdǞ|<#a&H ԭ{4`#q:m5& bWɧ"(!^X} s3q"|( x2g{ۯ̶Œf}P>h2t_[sH% yK ĠѪ3f2s3Er<y-=!N4C-2#s{*Q>^ޠF#hXφT#]٫jۏQcP^ NI;="G/z6t.h!q$4:Ñ̺y;FK".[<@$H'}r12,ax>󿅽f|E8FW=d ʙ=1r`&Bsdž/U <}h9q:*@t{b1~ "|RmDo3i5Ġy4H[FDr9?g Rkо{5\zig*Ntt60-}"{mIm(/3_. as9c= oOl—[1t-[|LMC5p;Bl)Qf{ 3E8'-x䫠fAImiC̊`v?ARi j9HY!G DoyzdDx"vx[`i' X4záI'셷HB?7a!{JX;3GNqvut1ml/#s7y~<0RK(zԌ&U@&/I:FN ?sM(5|wF˙K/bF< ]LLbǞ%[? ӧzss:_7aˇ$L4?ƣ0$LwKl9 yPKe w]2υX-q$-`G<hQ'ހl!鄆!o,I; /?I̬O@m%xC;ԓ&&=X6^tD1deOgL/1"DZ3sыHrwr0^+Έ0dhK2 dCQ l/6S'ʔ b6_'CdԉDd[= s+㴃yMLEOBuxv@/Zuka0y#T=˼A|BC\]G |U/"1Keh+~ 1NfЫy{5B}WeE/KXV taYD X )M@. 
r7naM{zp^C:71kq?ki`Ep+c1<ϙSn2_/&9:w2JZXI:h\F8L q2Bbo@Z< (8= Kzâ`E#[։-LAG.Ķg&g-0'@ P4}G O._W-vx"?]h-?IA@-얁':~;SAF[- "ѥVJ*ikx߰T*1{¾L@: Iyݷ-:PZG~բv͸=dD?VūȊCvڞ.|1#Zed 9~<{PTh.yΏ;+: 򱳗`$&p뽮woՇIUk)bF[aTx~ؓܬ-׏A7#;[2%bdŜXW0Óp?77ħ1E#O0KC*$~a >v@7Ɵ/J5ӬCtP|`M62`Lh1 PHѽ9+!iž8~ Mn{V^Ow.㰰,'p&EA/&Iya/L@R*4*ykv.1L'7`!SHW G]e"m+D>\w/ UXYi=+_&*0R93Ra\LLJ AS#!DRp@XT1:Fo҆uVy44+/s( %]KP)聤b nGֆ%$ QX1E_xcX$O‘)!odh6V L|H IQo~v2@V6ݿYchNѷe24:)pV[R}L{1Y H݉hi 4i_!j®U A+}_#3bco_Ý=⍲-㘑g\jL ͇fjn_S'csH^9"I)3$zJvxU|I9 A'`a}ԨU:zq%Ǖr[ؖQ0'@MlT3r DBxԟ6e.ňu۶4)DgF|c,NU ?@-մ ?V sv[>WUAĮt])7Li"Gc5 /*car6A++7},PRxiAf$^3*akk@o Ss 21#3m3b]M=I}p#pbMRlS`M]UIɎ܎fPRW-.4lƲ=V@/{wSq636Xo%ic_nwi!*8w|6m%n1g{!aד#nxCCSk{̤T^:Ob) cmKS\KӉfmd5C0_kB͘xX cQ]; {IP8{F¿\5M 2ދ"fa"f[Б & .Oě ԰\)Tw*/%[k %pى9,W%8/>O!G4|48)phohS8]}ccpyyL҇~`yT`NB{QM/'b?h}|:ʼn,#LEbY-l[5^("%־#FؤhGs A9W*gpCy?kԿvGH^ԲxO'B$s=Pwc77y֝GR"1[#އ:k Z CF> B8|T(g3/hfbMjeQpnИL"j6n4f(F V?K0Տa|UxEy#3OҜ,fo3m۩rli3Y%Ry/E$/=u)L&t~7BLnx\H 2,<`GB*?`򂒩͘`tT}XpM䖕׆8}>HM:Cpcd~~Ƥ5;i쯀 /K$q{҆e^7e#61%1øTrN9âs9y$ӝ*`$}0_KR(~F)ͬ.|wNu-45$ @d~p%5GB[\L|$6=md}5TDd|ÏMX_OƉڭ8",TVyX~C&:c<@e]t;Ӻ _S諟 TD Z?>h9KOT za<# Q?0y9r2!m쩥{pbٛ7jS7]IC=:b\ V$2mǸ1n9qh?_ 1U[ S-IATf%6j0r[p!-YaҰLkF, !A r$,ZE1N[QbLij5*57Vc ^n}+3KK.>g[D^;WuX-5{ti};8ŽJt?= tղ}}dÜxy=rH|J@j  mɈXFv}CG=ܟqè_ ~=7yˏ5qCٌ73&.+p@w;wȾX2%S C>|DrlN&k6TQcH JJ ; +1sCǴޙ򕮇x2Ë㰝?NץڹB|`=</ax 7u,v Wq+RQQ8s[KtW6**M\+y%}`0w> c|l1[yַnXV8<.vujQR=sitL~CBMa*aD|ҿsWb{ n窊Bdj s_%հ(v[s2qfqNAT2*TnRP!IOomqvZ<#)vBEn$1;Sr(m'_ _aWΔI#Lt] IlŲ`|D{<$O5m#P &aiW.ȭk18TY?Z %QȚC(ɠB8'U0lJJ>3:7pv*-ڥ9oNe%U=uhj^>:bS#H]U"4# _d׍[Ol6ec,_%|x8BI(; KB ޽Ur3͂;=uNOf _A7:GQyYA}q. Gw> F| 2Wy:EXZeD$YI}*v-dx/"B0vBA{\q>ɇZ;?| vBlu8_ŁBԾ$9O\p^|e 9De-ЧF6az$ .lVx1¿ba>\0袾7TepHۆQZ1#݋{[Ȥ({Ҋz1k+N ]pi=PJo^ڛQ(RR1c$|p\_de\@'oqӓ\9a_zcFxڻ3H: Ypa/{f[#kώ{Q ~Nb;N?2臯HoU&RjfQ Y^D)wS8ie"k؉\zl$p|Т7-2`zazx8bCoŜ ̅t'.hlk81握^aXO5p>L]DP#jN _1T]ú+i>I3qVE2'a_#G"yPztj mލ}60?w15_~ֳ;F]O?)h˭x8{z0[V,ٔ><=xir/۵߉̚ #|I?6L_d?>>gXν̒4j?ajDu0T1uTn63ax|:R@jUq Af?"3R?)b4H%_gQd8a4'3 SwC 7.eb\\ʃ=GrlpnnI6#W1\y%m b'> Цqa:tj9yq M<.Ty o71 4IvO>#u̍SLKML3)*vt>|/(܅0E G"JNsyX:HGT8Tw,OT f?N<N)d6T ҫ%hss%,>3(rĨukȤ&| &՞g컭(?cmXޞ uRh2.-c|O\$C1(mElЁ!K'ZYc= >.W~`nϼpw䎓m>x|DHqY+ *5H+DsoP!-| |oZTf˶0k`ѻϬа`6n{;QÇq&3w4$k\hn iV4WB~g=`nKSv i|p3Tھ&u'0pmKɄ_1&Xl2#w; }HqK*o3kG8Zī  gBNrڶ .]fGьDnĹbM^ň Zdѯ1>ĥROLa\y"v= *$ wk^!2:w?HrW*F;W⼠j0;h/!8҆sy>an$K &Za|qhƲv̩ܼ5?J~@@R,3!󲃠|/xg$|AkJZCYؼG  `5 bR$TƋBxy;|̏~Lks:"ɰӚ"}Sm!4yO>}K!1nnĈo?O9 &v$̙katצޖX;6ӧ)OsF`xP!o!wYOH|^d{*|K L|iL}#"r_s8 S L"Y֊7{ӕzMx@@,+ĕtWWm7ktJg)/%CjdfL#G*۷:Z|69:.n>)wH7sƭ  5^pc@ !8'MIEkuXݴ wxc:{,V0o90H,ظhu]BWԄsjPtOzO|z;}5HX VcYҰIL&#u7"Ɏq8k~'wk3L5ol 7-gTUw Oa.Hh,?IE`=J<\ ,:QG( ]1ĥԙ4Lfu!xqƅl~j'g~"~'#i -Hhms[d"6a[=V#ثL I=.f0U颊 &`)-xc4C='p_i@#&:U   <(-% 2 AsY1M3$bkp$ Xz] (ٍWUCi.n!b9 s:`WN(^n&ƞ\݄D_.񣔧 l/l}h-~^[P:-J탥R/d͔dQuΝ̜Քo Ho# as*qgt,Yx+IoBdܞAVu+^f[8DYD5|]#E6Ktq͋wLe/>AdR k^b/le)ҮRT-+40a"3:4Z,3-uǘI}$^_ XBo,rݧ`ZA6'gCig&ޥ´A4tbXĈ0ߦA8ty%ZiYFͼ lFWs qtfWrB]FZF$Q#p}xO#)F>?8Az]s V#Dd9mGhc>KB~߉<4(P}Z-c 9m:p"j2@b$v pZs1NCA &@k,j5;@3 r#[oV.FQۅ_gv)n: Ax<Ȇj/9"3rc9?9k\\<{4qd>&;؉ÅigF;g_43#5DT- 8o5Ğ:,yUw3rVA!uL;E"r'5N? wp^-$KԵUqa#~yQf.ĂIʕN I| -~RXaDn8~L &WTgUrԯ?!:(E-7BjG Zy0SrqrHd;]ܕhpNM~> ߊ10`æ/ŖX6.ى0u:xu/='~f#nah?d~)_; # 瓇tڢ,RǘZo-*Vy>Uw5zTV&[3HG 39d%{2L~_v'[񣼟 Wo FY K,zZg pułF"bswDN7-kKpXAD̬A1_MH@@'͗?arS"PD3 nd.9ڄMcklAϺ{ʥk- R=+5Ҫ&/2l;Ш42W^8w,Gè#cɓF2uR} 7EK[ AzFMc?ⶦ7_h"p#08cu x+LAA97 # !6*97tG1EP%s8̻ tKe{xa?C/pWGY˜WJFi73iijE54b.#Q-=u?!$NkXv;X(KDƎ_VEƿ=> ǐpX>U1oYE#JU2 w1߰Sv==[<#,a+3qYnLۇC#o@.nYϕ\Zj0Sz}n ~jmMH^ׇ|ju$ :Ĕ` ҷ#(t<*Ngfs[?:*p{sGs7"7mFG*qەNaSaBr7Nla~[F51p3{xapd]u-cy*a9Upoc4԰0$GNNG&#qHd&O[#`. 
nE/@fNYzrw2Nއ79=wW Q;#\> XӘZ)JV -Y<}*HGH>=+D qӆ uH:Kt+s$չϡ2N\enifdD5ne,c6Ů ,{Dχ\v,6Vع$Iˆl>3a*x}E 0XE#GEru^nd*M)ODOfQ|7q,ǬSfLJ qc2 Qqt?޳֏0˹L\{24x\VvM[<ꓮZzia[|qkFF@'9#Ng2&?l+;ISvȽjN(@"G5geV TW wݷſV6\m]U8&/Q2{ eɨ/4q Ƥ@5Pd:+IZ[+Nߴb?=DAO%)W0I(W̫_OPC1Y9]@i'*%På$^U[Gr4"4L%:v_iAia$ҍπAqmD"> ؛h3Wn3o`~? ccY-qP,oHXGCj^ϒ:]`zeNܟ xWY2d7~J7ܕ}GC=CڱvTRug#HMҴO7S\L0Hg/#7w!:;бG{M^SiLz E#^RI$߽`9&{tL3W셸DCPgeJ*nCcM~/ǖa6%Rwk Xb:ғUN00kui!-k ZmZGkUVa[DP!evfwS|!(σo9hJ_ `9 'ѧ4<5[:g;RWS4oih/†T"݇8uB$%(#M^ lC?e9l(󯟧BG:YrvMZ7h\J}2H|$~e^# c",ҟ8Y۽bmlo8ݢ(yOmA7eߋ`aeNu\D&Qf2 3ɬz(::RqF, 7d>6ұ tXISvYHxʄ;5jCy:{4den a#;6Vڛ"t<&sTIv-^G%,J]MhR(7# 5UBV <'T ܅pS18115tЬotE-29go,Ǩ%Ki_V~dPҗRqcːK%)R)*с $z&zll#x@#. 4ah2Y5亮y *~XEs/qe(N3a6VRTLO&ks&n]4 yIHT=N;@:8dv\±=zs|ha^ƔU!֖?1'7 AJ9NV=SvyTŬ fP沮dSJ_$VЪ,h{Y\*+Dmߘ.Ӹρ@tFSԏՔXx4ݸEbTbDO%eᘵǘìl8Zây7@X-usV8qZ,, P1s^P",0[OXQ'O O\\7e!$D|,w,D`k 6d^vY`Ob%)_lk jJ*v,Zi,r|zՂި5%A&Vo jѾl4IUͰ?[9X2{9H-jݎ%xN<6'Q%!NňSF›ԭ4 G_n<D _Ml/!!>Əb|<"ա~a?;ƐY 7YX@ #0:㛨S/WNⴥ#;<ތL|r+KL6^X.ߙ˞ooYSGwtx}=DY5Bu4|Xd)M&dVWB,4Tu۸1Gh:. cd6-{-b ;2u!:m03h#w{'tb*. ߃F8 ;qJZ,W:y Uq,ЦS,9> hn%ilu*Q~ w ˔aYnł##%MHp~2-g&QQTd8elI7!e3JI6= GvDVKGC5߇a)`2ՇQҜ w쮛7ZVcuޘ.;dO['{c 𸘄؁S;k u%/gIP+4nfJ moJuʾ_\~ݞ{@gӵ]$nN!~whYH·P6 &ʳ( RbYx:PQPZ#,\Ȼ8M zR$jjSH0~[ARhgnt7ݱ ,f%DZ+yU2̇MOgg{LfI$i,B5l1%o,8ȁ폇La1_gYU#BW7 IR/}zިR40GNfB+}^ Hh+Am6񑅫|xʹӏ!6O@{6A<4}o!fZ PG@_~#+-ӧ)3MlTDZLŚEspF@s T,[ؚKk+nhgGD!Qvz*e(7C$ N?$_5 ^g7h1^v=^Gbkt]KKv?f"(lqnS7:G>/T,ΠҊIheR$/:$¿*FKp6xɆ>iLSmUuLfFoB6v v!FZwipFU.3ϭ$mAQ4*ȸvzIo6CX<}NZC?zSx ГKǹ #K;n?m)AGZ%@# !Q+xQ,rжV* 1fC̯Ĥ1?Ÿ{\p>|E /xMC&dSvbCf} ^e<;}`=^~~0#YhTA',Tc죟g>Õ5);l bX\D& TNiWD.-~^m@߷!|VtN»O^ <[|)פ9hvu8 H+V3 LU zB3:&M#7c$(<`/~ѬNQ|"y$2btŘDeH5@<k(c mɾi?؇@`Ql-lzĂ%{ m%?E#q=S[Zwb f3uB} ,Lt GqYܙ~sNbxՏ!s>Lv!6ڀl9}P|OjÕt}tVs ;Zuh;DLOZlJk/&u*XXr,G9 95SۗQpɽ(Ãؾ6f7bȼ.h^&|I'fxr1ݘUN7sp*U+í~!U U_~5iY jf:#\& .}A0_EB(#OӪ"E :_9Lo5QRd؂;-h؞3jL `8Odt/G]ѫt;lP T/ *)/Ti`~c6hh1^gDe,{RGWt?_O" cvI8<5@=ID:K_1Z>sEuu6vya_!,uy]@~l5yi J́(+͘ωИ*MY6\aee>L= $lLF$_\^˾'oQrhBWÕI׷'^ou?#7~q+硠ЙGPa g2}}[/Sip:W PqK&c\x#F0IB1 &d 7W}>S?$ AL!: ɬ\[ ޴C}l&܎4Rʞ5~TEN<5bOWĞ;뢂i﯑-^gpj'*ږ#(xZBVh 7м s ʳܽPt/Ӧŭ|䎈*yHܼ+i2cqA4ҸNn-kAC$v `.&M P,~\lb8Mu (_W!=?:6qd-h'A^`&,h%FәأAc0IImWB)#v`U9\qs:y.h"|uWYz_.~r1.xaHB/s iTEkquNVLPU񖡴UϹ[Z29K?BAegFLY˔12^ N/Ap~8L9$E9U4).eiNޘ.J3XJN5UX 7Dcl7lU&iNW0Qw+^px<6wWy%=Vw3 %6@0؁鱋&.WU(3u>pw]$e-%~ k-aT .nʜwMئy=mD2ޯ=`25꺀ҺH҂54c| 닂9:tdB?Sj㱥_Dk5%ՠ)ڛ[>ڙ7CY܍*LYs>r=.MD=܋omẋNt <h:np=20)囝J֣_)EbGΚI5 *[=L.(7KLUl9ϣߜID&S9Gr2g[d?tX KsGң˳ uG46+˭p0^Gx0&4&@mW*8C6Ipϛ?0` aԦbŪyP& X\H/'fjP=W f}AFo0G珏NS6kk~*Hq# F7Kӻ9oCk1%#8Z0Lͣ=༇<:'S d*<<n1 yMxO(boIz+ȭaVb9NdE~ıt9VM @oԧXxM$;j$|~~{hN,|>qof~K!-]JaǝT6UӜWypz }ڸ+62ԡiaaT5j.-g:˧=I TI*"O`\ f"Zn 5~<u};Y[]ƌ!=6C߷P mJv '40ZOfnnR~Zb@9ΚP:RYzvD0r W'0qn*qBsmڻY= [TNA s[31۴؅ gFIla05 x;M!5jv ΘDJDК|j-?}> vf{wzpBIr7L3Ί%,ۚ'ThhGߐ*IMp+z_1}S8l=u,Nbo FTm%lBnsFzØ]&ҏ9X0. m _ë,Tb<ӄ*zT|T#Ҵ8{=I-Yx%5}D&N|+lPGO+hmΞ{9|'?n*Hs2Pm|?ݍ7/1:ceSbuO0?<*X )|yOpo( GDEY_eCʎb'ɱ9ҙ%`V&;}P^+$Jy(7>dSpΰ cmI碠u@K>7Nt= {h)W9 S!7B}[//Mp<.vLa|G\_ "cVnaDZ!'DR_h?&LwXn8DЙvZ4,} A7yO`σx9yy < 2'lQdߺcJnyB ^v&?U]sefsODoǧ7ar{7㚈RKQuu8J͜Ca܌{)юxA/u\NEB4? 
+9 z25HQ4134 As_GOԸpKQ>dΤք!m+mR>l8.@e zIQ v e}=Uo~P*;qFD dy<͒ 6]yyJ#_p{ Iv4^½ЗKMIr:\wQ#+L,X8m*ߟCv`,t_I2: {xaD{UY5aqtޟT5-grp*-bpgR E)=2ׅ?aaE7ۮ8jQQ6o)[=jp zTy^pmi^yI{b?i@t1 '5ܦV% ?lt9J)ia "~鉐1Y,ҫM$巆|4L[, gҾV@ʨe93پHͺ r֕4(/ڳQ՛ ښ (8g߮G+!"P5;ailXGqHϒ 86ʂ$e$9ܘ*=Ni<*puim1ypq^&W~G-8 jsaq]=`õ5h5.*6!;\NAPGۺo6ŷݸoN|&"b&M^ǣzMYvWbhx^9Kӽx-r7Rmk4/O|I'bϡgOq&9X,mRpqVLA0q4^-soPgOhY0[owj 8c'ܷ,yC"3z]*d.n,›ϰU>4~ }z;*~B#Ld+_BfvOg={ slyMD9w\F\l ZLU1֜^%T$e!@תDaPW:cLj3`F%EuZtv&CTu刯ϻp/ThdTI-T$.(' _c%F䓹1lp >e0e@D+Q= Ӵ "ޠjI!!lZ!pktY> g,ӸbXu^egP^j]aSDJ.&H_}ef-F&cjt[lka$(团IC0,B&g@hI єV7$lsr7 +1iBTnwS!s8%*tGSg{]]M_C@r,<{w@A3!LKaY\p]4t|¡8t/϶lXltĦtUw+Q_߃$z8m)Vp&˚Oר?hS351z{ʒcsGnn斂V!r170bF70h.DW6bFi_ wXBCsa0(˕hC=m(_M/Z||"ߒ^fma3Y#ωQZlI0G2/36Irzŭ)0n MSfj&R};^mI NJTX#DOYV?'*Pl#XɓF(gmѩPxGs7r/0S+ܦ|>Hcβ(p!٪tyk&m6SD9M͈l?s_Mi>.P3Hjț؁k赅3 g&Aw9WTPOE`h$Y kܨ$]5 z;鎅!;VdX<$f% dO݈cxٞԧGP.Eku}Q;J\,ܷk #M,uf'8G}}D#M74]m~ihHMȟopթSLVh/uݲtĐB([c/^Nx碮rQvs58<,2dހi8Ww>̉rLrR]`40ݎ !%PU׍`oƜrkjbCQP7cخ[s&- \Ban=n8Av2]Fa ,L0?d+FmSگ#߷jd=}ƥ=a[MKHQ)t dUh1'^q׏Q|`KGQg*فFԈ3rJ 6Et>[PI39|l|wA =Ϭ߇1WV48vV uf6,7wQ) (K%7[-y+ò*D+1ͱOtz=5sF>"dwj>ൖ77H3#V3dh[_NGAH\ s,彔b CᚧUl`fH= 1ҫRZ·ޣ zfļ Gv8ҋblfG3ܒmyl]{yEnm, #㭟}WБP7s](hF͘?˓+⡚-B<"ë́hL?%4(ia\Ц@w20O,~G%rkb= A#e7JFf3Uk-t|Nc8d-EK}z  %Bf1)vȃ$hRf}wȃ象gUa4=4aoI c~rtcK?`5kFg6D\+25GIL?MA}/lAYkgj4]׸HQKNܧX,IM:{9UJ3f^;ݖwG+ "eiqĝ}yčZf\ꄬ]`ܐ_^ r(_}hs"m9vٳcP6*u[:yQwwA&Ɉ>o4}O3cw LRfs8WCux=ʵAfпgƉC|^G2`)]+Maћܴfp{^Əm 4ۘ985nق&p},{ lgh Jgu+NMη[pUV"U_Yy7q%e/8ɫDc4r0n/Dj8Jm߹%x| >)4rŧnuʪʵíQ&։3 qj\ q&b_|]c0:3CI?Iy"d;qe8iopA6%SaLmiK`i#bRQ& 89TH`u > h/QъD;JaՈY%t*;Ak=ѠܤvI|3HwM 8Cşݤp)$w 3 )Ou-L*W[z:Kzxoe{`rK =7g_c.YHUG8q$* uze/c5_4;p4vߓ8vCVRU.cZ nls+[퍧HKQy9NHb2޻(U)K;T[@+$6*ӡB JtҫL#MjJnĂc)򹞐kDIl&yWs6 sasȺK5l0֋saf=> C7KX|˘T.{ƩPP~~[/Lē[KӺ\L{{M&̮9:!t;G7R<}N 4捂] t1?0>)R?^@"Ṱ{:(#)8?fAJGo3 5> U0.|5[Uiʲ)my:'X+f:g27* t">KLs]KPJq*E|!B q|\hW<] ۘr%J0<@i;w`Mkpz |ޕV ]sOphqVM`"hO7 IV/sjl 0ۆGP޶#|k%o/rNz)f{ {dE?Jgˡ&].r*+>~'Oπ9{-grYW~A[q;v}BZ'ǏucDjM Wq ߇%\e|bGM9eP,yrѬl-02U:2ٍO֏}qΩE9ۢ۩Lvl.s]9WCݸb huSsu>r i )EV:Ycj-bi %jLG4Lz@U>8j[#Wk j|8=e.jy{B1!̛@j~Z8%ӈ jԢ{Ѳ(O_W&8t(6G8GM >^] :o.s}ZVlj3Jh`Ll'20ԑwohvyte>L)t`wQHueW\^Y#^}m4v9Pl{Øլdu,}徴uN+:Va_YVkNո@}E]:"4j-Dq%ͻoJP`Z!@CzT:+3$J gQX" !Oß6fNO/q4'+A0H$~l̈́hfH,C-~ مp5X`qA^>$gk˻Vk1ϜO!&8u&pkxt4Be9ߧs iµHeh6 w*OKO1QC4v'df_K#]r: ۻ!.p}'߅?:>Y8c_ PK+3PKf\T3 sub_2/CST_R.trkUT ޷bC b޷bux yTM624HIB(Luk+U(*EѬ<ϳTҨRQ!"̄)Ck=Yku:\9gF2򟗈Oys{ *""3xuffO}tk]Vxm[aY"fԷz*j#hPW$/1_Jxy^]5{|x['E#dqy,m7d@y >Nw_ ļh Qc"s> -~0WFl3:(@ūF),ĐR8pQBEM 6iz#/0WAf<6g k$}w3DVQ@.d|,9,O)KnzZVfnbE6#f-mȦ_fV};vYS^a TVX(k];xy~?RiMh{w4hboDzDcKZ%peO&jNVc4F{M0gx˿R4ph]T`-%+d[~| XʞEcfۅ"d8#1fF>pTg7-g eDr{Mótz'Y<Jbr%0Ǔ $rIbZȓDC|T#ޫү>wɁRtX'H}+ JĹ|b1/3 @u &pݸ?슥rѐ -\)Qd?~|lG^Lv}|c/Øp1IL͜i0r3gkPv\)n]Τ^8)2?/ۋr\J>{>LU;.@Ir ,PȀC8>N?1D])1|qb-̋G8C!qXd'HAe |rh/[GVQBb `6n<[ FN:;ÉT Ѥ^;w#;2,[kXiLԦʼdq5-qչ RƵ5SH%B7M9nĦܝHUDWhAOpzIDYAnΚ}ٴ>~'7V3h86^ojm0>T:. K8"|)wY,CnQZW x,D<_=^#g'Bs FJy8z N1夡P1]gf0$Xy\J3r1t80\suMV&!]NP=(EB?EOʈ࿫h4"eqEQ0öw%cjgeB:r )ĬVsL)* E=Xg{!"q;'/P{BID-r&(fO8UAΆ/ڦ"7~tr>ro0V!hms Rs(gG]`)uDj]Be6uh'_cV,3toݱ`.]v!0ZKZC~0.}9ǻ"!{B{De~g3w  } Р.y{Ïoڢd8Gd$;owl2QW! P>B vls)y0e.m>Sxjw^]hfϔk#΄Rjn'; k|~nAWω)#VѿG= ɅYB;9i8=A^eNK;:O bR6I| A'EW ?pX/8GnZ=P,Tg2r]eϿ`W*Io)dcBJi24Ic:ՙqilcSk 4?+c>y)3{*Ltp1o"g W'PMdQy1 '4@FDg% yJ`CL5Ёatʡx=WKǡ8߶C`f6զkpD :#O(;/6CMfeH)S2{t,YMuX٭$%H^q~N凰F$N?}.׹x1/:FXU'sDuio6P<#3% [L>+BBu e<y )l(nد:=s`ZlodM~ ^6@CTW,^ qw ^4@~9[G UdƙXV83RI+Γ ?<| ]O.zRYe ,dMst;fLg(êRq$? +0T«yUNS“ŽP.U~fD:5CFy-Q!~Nqtj:! z`4\9Uou!~ #}z#!|OhP[2ԳpG1ۼt3{Q8΋)poq$|<w߮ )Q5fg!9W6 `&.7‰ |Z.qFsʯPo? 
Ro,z~"[_e-زd0zɾAz&͢[n*gOUʌiJw[Re|~ I."RDΈ#b>%IǶb$C9 #{";?Ψ20UJŪO}+h98>jgã}_;vxv8ꖯC}9V8 ;1v 'nzͺF4&c8U~I!t{𑟿}_r\T693UIr}{wm ::pɗTq/ȧ9yi2R [gqt9tYT̢}t9?=MZƠ9m? ep {iV ^<^y./s;pH0`ģ.2̼X4c o6YT'9U7PQU=暆M[j\2UAGxA`\z>:p!J$e{ |ZO=Ek큸b}Og!od| ~o2G#I b(98l*v> ݫ\LULj!Uh2^DFoqa2j&"1 ^5Tڃ>+E9 G<ɋ ieػ5Wfg̍͑ }9ui> )l {'8䠑 ag16!\a~OHZ8k'Ag|^vj WKsn#NCE`~z9ɼt('ol#ϏƕguK`)1.NQgͤUfЗF20enAɠZԃr?s~EqV|>lBd۞M8IUZ[Umw>n}gh0O0ѓg3H[FI Ml<}iu37K#lJL/gH*ܘ mɹs0bm%/=ncfEK-1Gq KhR{ _ns6^gØt'Aэ+äN(ț\qYX^`xa#S^"b v5stc:jg*te|V8&yhԖtBIr|5zwAgc9Y&3c*$OFtGuc I۱`h~diNg6BI`~hNm}"xY 3ɻ:Lڼ6%sA|(tE*yL[n15"3enm n d؍R%pB7\^vL`#pgnq.4U!|0>)#!tœؒ_\g_Py灿IMOvah>ފaןÇe{/T{gs чw^ ȻY%ݯgJnGDatb1VXQ[FtR|"!VR;VmPC.ӡzv֙69=&y r#pÜV+icBI$~)DcQD$#{xqZsYk}8lqdzܨ[«!!Gav`}Z6j䣄S9RJq‰M>7x'E_\eFG+@H\OJֳNzE(.)#CGo՗{_HKcWJf:k3KH}ݽ( Fuۍ|" #v~ W?@Բ'ϓHhd%i#Qo8qh΋-ǚ0eoʴD@}7jtf . pEW0=|KLofvazU(P{Ȗ2C' VdoN?ehW 3OYM.[uU ٬I24hz5lMCs9 ֤Bw,]8>]=R`[s몠&;W׶uUYr yXu8m?\Djg{?}g=MKE?_yD~f g" _^=|SN0OW6}&at1v䓦'nfN#t~[Dr/ RBm7q'/T!پg'ۆT,*c]]ß䐼eW".Q]"#e/r،BHXtwWv9>#8rLTL}NZD^Ex^`bi(^B!6]z1KTR<Ԃ #|5Ql`{l`̐HEM;yX_M_6xx9ԡ Q@.jBY ͱ0S⭇A2ΛWuU6T/+պX|/!c-0 ZwC.ScNb!> !c0tkl QO*"LcT5 ϳ%v)!8?ɞ/`[Aj!ٱAE3rf6*k%$v2US:i1OHB5Bƍr)@e?I MŹ Q6 Ec[N? %2>y I&9E83 ˠ*Ea < KΪ,7`7߫IB'oz0; 3l,ǵiAbɗ#L"D4Z٩s$jS`OdZ;e@Z+(&ǯUH }1 d㽢?n"m/ W=p_'Ab>+L_/ Az; 7.:\2.ʊCCB@uTAcQ̶DNy)VԪSuJ5߾-f/=͎3A'O$\`SQybhCb Y21z։6X }M4@o(F*9hV*[~P;^;"Դ0RCS; GXA`<4'B:r/<OQ/I`U,F˃buT+I{!~ON^H~">. !u'\Ƅ38w؃vac8"n^EMq/aFL$? %8: r:T{fU >joRk_J 0M0*jQs#=#a~>CyEm}i0K 3aj[jlSgd{R!* N rixeh}6C-YȳQ)H6 WT/Tw$ 2Ҽk?IoIJ/&=yLn{PŔ®{7>:+#Gh<+}GE>_T9T{d̨VˌbąIutlBٖ,>sKQvQ)5 A.w Koƕ8OZ V˯B- x/o!KV`ƛ$\ "XG>?5WVPnpJ5b 0?77R)ϲ8%0^[2yʇ>‹Q͓kCL Ac tHO2e.)%)`kUs_ i!|/  eN9\}'%v>(^Ăz,#pqpqWO^6`Cd4$"e6Eß4R-Pyz e~c:E076V#_f9wr{&p!)>Ǎ`e^OAer LHPL^_Hkf\rawȷ&C"AFq =5,cs>|޼ٮfAD%6TQN YS\ޟuqe6'c9.Yg%pg2 qŵwag3h 2 Ft9NFY1h^Og.vƻ9U9rڂ}D'w ,Bm@A.{и+d0L@PP-xA}SofWMxWc*jrSe+ѽ3ƽ`Ѝw{Dz^Υham^c [\l yՊ7¶AV.z;'Ymfl(/ŞHO8H$(vZQՐqd 6ѷ[I@hUӋsI}.'@PH;op% vfRg 41s] yRLCag[?3sZصʆbY:]>,7FKxj[IŠWiLJ2 S0/pЧ \:_="V=~=xL|~} E-OI\O\K}%Tz0,ܯ:O* J2 ѻ͐٥\BqF3[RrܔN4"wl3R#v͘#> 5 L`3uRًU+{Y&_ՖG0|u(ĵ!lDW-|UOrH$$A{B.]Lơq/ڋǸqL( Ȑ Vnö)P4EA*ɩjn>z-{4=;HLڪ6вwjD>h7N#ud\zl MN-$? #U}RA:f c7{y8==tѓĩ|Б>`M,"6BGcQl<ְ%|&sO8I/>"<#W#/^Zaf8Q?;Ypڤ&Mt:R ٍ̫,Nida*s®X\NL3ꢱ7wjw\ 7e޴AlZ%[rB|S| Ng B6Ȏ 6J';vD##r=d]F|P=U}k񪞏]iL{?-{~D:Ba85l!:]{aV3h|HJAݍ1BHjso {茸!1/"yb'|l 1/Mŋ!Y76&>UN~yy 1 Nw4gXۤFe" W9#*a\7}1,ir "^[ap~ۡ:uR?9Hg}fwg4VMD,题q4B`^ ^m\T+omHu_6lLr;* mp{I3qCO2~ᇖ‚ 0ލܛ(^Xv&SGjw|KY#a5[&]~AϘ7KjmT8c[F\Jf|S4j/q?1hp v\BE*w'goYr 1(NͮSef-MHG:sK2! 
y.2ed.~Qŋe0,T/AD\q鍹3a\MU\PpF~ J)w`H%/OrXa:R{"y"iבO~yv6-pC @9K,,՚~uQA݃%dC?}~"1>?\ܘIO՝[JL֯2~aq$h+`Ƃ h {,,xT.sP*]P0iv]vՊq=óOpQ5+*&Im{aY?7ҜR^!,ӨV4FNZ%Ml0f2wdv z.^{mEc'[Z\YhB:b4'2s.ړRW؆6W3m05XBy|GIv?}OqӰ[ɐBx2T =%VbN?)kbDd~ÖIXvkC|v-W2;y8|,u%xƍ}xxF~J??yGwww-<Z,3G.t`Ylݡ9g̼05dAnQ2Zo ^ydWB jm0Ӏ0s&V!i³9MatuEn6Սцm$NDSBZ@Suf43G%1TC8#u-Z:9nBGi֣zy/*}?P50\\hy…IݧE+il%X3T\"{*{UC=spb3G<Ր?0`^ډc`U߿F_m+ͷňHnRI O;SuG@m <݁pwO yXHbDn!1#AgXmH+2#ۊ.[Vlỷc9Gn'ofFF/m ks-@kGp3U%E{A'" &mC8m@xY Q9La wTqw`{h8_蝾jLwn5t9TQfUZBWGp:G&4jhP"4h6RJ\UOY$2"3h6#oezF,10(r ڶ,1b?aVB0զtzds`e0&aCgo@3KobK$պMoSzE>r ^r'ч <&t4F'aW=GaiPXd 9S 0[_G{x"VymuCTٱFM׌Bww`7Mbc;Skز%18e?B1^m*lE!ЌG(ҢӪׅ;%*nz^-%.Ğ[ꕙHHӋw@'':VK3璚X:ew24@|A1.ѫ܂orجC<=o!I^"b~0 ^dAVt3vDd*zy%D]ɀ͋"I-f&2Isc2Í9>90 k"ZrD{}OԄP)8 0ΛZ8fd6~e^*wQ\#a4m<Eo&HjƵP[C^[xo9{ /S|IC5Ӽl.yWR>7w8?\-`*^l\Dƙ2)fC3gEq1aBq Zh{">ކ&-.bɝA[jә %bcH:l 3k-Ovaz2;u1KewZ˛QXU\VLi:%3t>FflVH6][%6Y:6)#+9,D$o~5@\}3pu MJBt6x- s#a$nhl5n4l" ɰGz*P{K}@TdZL_Bt <UBIN!_x2LA?b.bŁ/8[3*D)82+1a9WX9aEAR6dVTBv)WeNP<'_ 1ߘ}P4xԭg_, )1 C]'G' 8gMh:/Z?\5ŋY4}ߨ7*kWђڀ_6An9w!B_٦Dq{*cNMeJyp͛\1O} Db`XXƮ; EqKaz9_a",6oWo§Mzx< < /$Fhe|W44Cޑ¦?} MGm6+tiC98=J9\c'z^#q'{苊NO+lts?d R$1j xb9PVe+bq8yׄ- x3 i [;n,K.l[3a ip1dw!^BaC.E=#Ę1]>@z[wBxuy5SBnmA"mIZ4>(H`zAB4g*,QN}-|bcFlG׊lL,i+-v Bj!`Gi#t6M]ɋu2 A0:Ɋ5m$a5̕od."Qr1%X,u:bO:iVXXpOwh,}N5#160wL(gK-Cu--bFC-MF$EfOS!n@u;X أwpzG8j֑Z;ʐ"IiQb AߞHFaopN)k D{oX~!qrS> r1m?$nqVz6|R:]Sx(5 O1(_@2Z}6ϧͤj(EɧT?ZdDlP. 9(p7Cd^ֻKQaɾHlA3bęM)%kȄ5ͳ0mT)lӲLb8鳞߰(6@[_O,HC>GBm?T!͢k_9Tj]V`RW)R? =]GlaR+~QEtLdzۿplV)9΀*[ъ=՛3u6R YZl?<\&z,Pca'Xi _C`WX\\M|mt<0æP>,]>{Qږ̦ 8qo v5k)쟑 2av.Lbm zp{: sbbN$5l4xj:c4dUAXJ^GH%҉j\?Ʋx6/EK^/wa23* q+",zWx{(OVqL_VQޮEl)Y6t|{e2DMg) ?le(̼uAipMxI㦾LB[}7} !A i>7roRts8ҹSHu$7[@Nw,j#k'c+:@&`<7vl9F2` Ӊ:G\?f=,tt>grzd=ǞM/KnBKYK1N|ֲ.텖:;r6F$~Y!5'[o: Li:ȟ>+H%8<>w!Hm?ї hTil#Y7qoa2ݬQa_}ḋ]yʇ3/f?&aC<Y?]D ݣ\dx6N^/D2Id@V;+;WӐIg~]Xm[dy7ѿd _Fd }l.J[ʉx:O[~%#oA5鱐fAA1^M(Ж җ$:+q!޵~,h13ab"#R8g SGcv<'ws6D9%e7҇Q~\s9QZGb&!sQ.j--Tڳ}kvI8Bs«n*,74"oys.HNsREꆈG[zYY]fʢ[m-X&NNٱXĺH+j9*dJd0 xCx[Р8[p֒)tƇJ6T P&F߃c óy◌ir%kP/n \h=;#QI+Kʾ/\ͷ.NAo&4#iCJx`Nw7MIvr@C`d?azKȫ ̽s+q̳ce*=߈yyφcܷr fVwF<ٮ= .9-gSyK0,Mx:W')Yq!0kN!uѳe\A94\B }=n4|X[^~u:ӌTE+pT1lo}_XQ} +uL=R; 70jY)6Nf2ת퐲hi:;Ud<LPL/x,BL^,\9hٔ>9̼ʼ)oa^#WecS)l=A[Q?/\qwh <#\(JRTy撋Y3ՏUq8js*CU..{_ /΢q%-X7Mg&>@6]Z7J8X@X0Y7aUaģ&bJk#DW$bihYRI5V `Sf ;c%e;6.6f2uhTaF1NC4/Ԟvd"}i rhI8|- #-x^L @$zo)gڸ< Q;/92Np*7ETm{$]*n d*q0Gb^vBazf'd$kjdЉͿvz,TNsvgxƸMݤRle7WםL^[^g Uv$d?dLj*`mWhZ_*8NWi$f 1NM0ٲ3OSDܹU[HY_CirY]4KBص/ɲ+AH~1Zrw },؃\<2!BX/>'ϩ' I~ZrC؟nΣ{LDgN1ov˟@Æ/䕸ˆ*u4"eG0 r=NJ}x Uܹ=uvFB:A;Is'ل,-z_Ik$z)&u̓!q>4U"_#[v6dIynWAKð+դI]1*O'W7J~U!_Xř<.ha.zDe,bZ8ojȄԙEC4vogM&2~- tG1w!zC1R<^1X#ԯp9dТePy6kJP3/@"i\7eW3~y@Jp{wCZbbIlujJȥ7*쒜 2veoކyQc\-k66/#9ΌhGIe<\۟R]Um;Q(b{uE6zvmMƉ,d~𱗃h 6V;Nm.ѪGxp&{o*4lVE!UpԱ&*!2}l" !4G"ɞ^WA# ^Bbu:3 7y>ddaV%ތ<0lf3ՉͬF%&e8Cb63%Ζ,ţZbB$:.'Ջw_ᅹ6$@o旛x|3!%z:^0s:y2'ˇDgS``Yx 1 9|7? g:w|mlH' ,B "I>ht磜fج+7W2<}͌I۱l%w ٭? 66i[88ʳψZB$'jzp s໣L5T^o3q|)ɋ!2جIc:;?,N_MC#1DL:l9q+wV54da]M#".#:.Nv2iu~nMY X|7L .~. W5tj"u{<ձ|dP}\4:'n+i (ҨAUb`j=Y3“-MyWHBٍ$ .q&b[6޹*lEu'&̹\tQ 0OOvEEp{*V7@.E]\^D nfP8hh'Z%͆V lb11Y ʓAO nmiat4MG;s鎧}4)"T~}> G~cx1!CDW%x|Hzɍ!,+:VAN]d^"2|1` m}]I%Myl(q Y_Ș'^<,`-.ٔ-´OtN%XfxwSS#l bCkH>OBQ RE}2$҅-F jeI|I\SLӴYG=GmxSdЁ ? '>z d Kg͢Bm9@gYU (]XaLgmKfaz{exX%%K:4 7[>N KU[e 3:z Պ087aK\h{@J':Jĺ;:m"-p,?o-O$iC&VD, {Q ɥ4$ 9zw1=Zधs]Qq2_[l 8vLW,}>y+Ck1 ۏ3a"xw[Me'23ӿe𳋧Qa} %Xx%Gl- Q)OQ~}3=flYA%kO-cy7!y-b!m@w#zsJ:! 
-ӈfQ.q4o]dEQX]ű$gAWV '0qXZ$Y56d>{VәZRg:bLHڋ u4*4lS 4 y1!'sfާİ)<8]!GyE  Bm'/r໸=͗LB,B]a2Ia=k䮘4bhruhϫC5T[фc$:̟!z j̥ vҺtZxd-Լ I?S;:2RpF  [-sʢkWҮC,Tfl'gՖ,c^PK$+3PKf\T sub_3/UT ޷by%b޷bux PKf\T3 sub_3/AF_L.trkUT ޷bC b޷bux WPTk-䜣(AD$%d@Q3 ̊*ʚӜ1$ x<ܪroVuժ~k9B/Os=>'/'?%8r}G{(o؋ ڰ?g5vNj5̪r2ru,vsO%5*T.s>a2 p_,$GWxɽ\Wh֜?5|FCQP{ 0'̼ŊgÞV.Z$ÁX+rO|%Aq7l#aL %w0ϘLYlVLn/d l3 LaW hkkLJLyNwee6bbHe=lqTx,Ƌ44'AQY rZn`/Kw< e)_U0<Ӓ`+7&>;D2V_7#ɝ+}HNUB4sx J^kʜ~pܗl0/楋#ɱn%gy=>nc-툷<J<hw sMWٍo$m?cXQyjwi4Zc0wb H>qIol . ̉r[zS 8R?ow^>{/v$] ְїF/džlN^ę M,uj <_&}lkz&Tu)HHCxS rfq?v}c -E^ 3`ՏtOѴp m%xi)-NN WL;-Ty4ibDUl dD+dѥf,ٗE}EX]0$_σw"4rx\vg|\cyx0˛zo܂lWVJZ1KAʀUmOkeu/'s>32?V̨h>]ׁJ3E`,U{-r]~=9v^~c=pgis?^NX}y&s6JfmI+kL&욻Xc%ِN.˾"PWnGQ-LFUԢqGKϙn $̝AاgG3-g\3 (q97zE5 .5f2gW ](1.+K:aV85_]u\0ZE;aݓV3jm/+;}=XxP;^66flv%M?i 3Ó-]EZuaois^)cB Jqƒ[q4Îލ`fB%/Itױ,kv]k R՚$ѩ.B!3/LBosoAj uȪTncްMHƖգDen c~7,i%?8qB)LҔyjnR5EO:־: >L룚o} "˅dcnwm}7;XUoIr\ `:-ﱆ"|p#DS迖 v8-~,2n|39Oz1 =)ȼ@TngË5T.7j8wvDѻ4E W^OxªwO=TRDY:r+ILV"Nܐw #YBt]ku)m`(7hGB|'6Y[)Hte79{%t<*O#31[@H!Wϳ@,C]6?HAZVm> ש}4i[ ퟀ _hX8 ]$3SBX5kN[kjHnW?*Q"e+a;Mzf4Weܾ<޾\ aVwcμ'w0Ԙ.p{M,,өנ]<}#31I` 8& E(1Y7q)AʈRzb0ڭNCNZz(I )D(t-BdÖ6Hj%"ޤ@!JJ4O43`v>?03A( wNP&쓍gƁ49 -Eg|[:Lid}]G1OܫI} uQ~|'GF:"o$-qɼqPG `5?)7}v>/B{\ occ ]<:a6ynz[y&,I-l,@P6U@\s;>=ɟM+Vԯzҷ7427?)aI9G=e!!g V=wsF믚j[JJ9\zg6puQS>Hvc~C $U}:6)I6|ZΞ+]/:YGާ]_u!5ۛ?mAMc9 2{ |(s= V¡S{cW]9B !@U_,%`=31pdGyLH@؎xh|*BX~Ș5ͨ; _Wz (iUܫ; ;pr:q_4l̄H-WRrA*ysVb@Juy}zj3ٚ|oEE,)ͽ9;Z3 =+GY6 !Tq4;# L4k(=6g`S0MgUP"Dk.*ɣ5_٥hk@S-vSH7@g($"Ea0oCx[{>߻$ )|MO_A n+%x h;čqa&?-Wzn2Zǣ>jHR x-]|ya}"y>#2-zQV&Mz57-L%M ›ԲߦJ@hŜv85TrRǗRΨ4gyPM6DžMcSc*b%KQCHwy_KX 72G4<5'd{'gӤvv,Sʩ\h;JIRH>ՄzFw䊌vW fM)/|T ,wmwgd$;|4r`@^ ijCS5qo!P*D(W5`Z\y}7i,8J+ea֔FɋLڂy{)E;댘JF+q\v]؞b1ӽ%SJ*ULj ·F?vD‰h<&?3 u ۘ_LfP~e=Ǻ§#z&/WS+.̿fNBp2q([A$Uu7{)ި ;Gr,d{@EֱIޠ[VVdSh<|W kN"?H qcaVq{aqֿ̝iˌJš\r;pW=2D&:37jV-96}̚,71G 7zuٞYώ=YԮQvՅ</ U l)X+ЊI0%Fj`4.؋( hits5y  GsrZ1ZZz}bT*#YgJJL[-Upm+Qj͏3h6NUAMU׽݁3 ~0"=ΑVE9Rguҟ uJ=,`G[gbh!x63H!JO뒴N9BVM׀NZ*;Y}U gYSP{ lܠ/4S3p;'GotIQԩ.FLj5]nHNo8M`dDI?e 7M~q5\Ԧϓ|Y֒v&{o38Fc.3=ا8&I:ѹq/fGc |-H\8*XgFuk;e,Am1àW"]'1饪$lm`̾"M!L{JS$ſ}!IPN vy VKRsDc崨OD4E]:.C),"̀s35++>^a7hίw<޶rQ:1kGǓW!,Huxk{UD'.םCzq&>r4(C}C&APe'LVN%N JEcC p! ƭQg*~bf"Mz3wT~u'Abd %H?c^sѤ YtY 2~zPIG*Dثlvβz$NW#`5^ֳa;'nc?hMw4NJ*fjQq?ͺĺ6>wEbף$ |_*J$Fӧ`Y  8x,!.5^\z0vs뷥@ixw/n[w,H|N#J:Vj􂹰tVhfVfz/= NZRQHn'-+JۭL"1(4~=PUL6K3ۙzWURiz]1PX+GTXQo1-Vdh:`eKHrZdW^;:5]2Tv 6`S//nj_,BwʉE8Ï(`˙Ê>4yr: %7|8}ԏ6{ F`?ٜq%=?Z~cP&0[?6Evek,M Uz$*"U)YV'zO`Ӹ 6*ۊvV`>?#Ih{78F7bʰ-\φ )(6r k(I.XIy zt9Yor ZgZT'1`bh_ ,"Olںx"/@VgJSm抰|K'-m]78R?ڇʞ.'gAm?{Q _co:ʵIpʺcYE }N6eAc}NJ2?\ n19`y7(%VdBÛ`Sݖ nzQ2;[WL(^I@*XOyPk9?L{ѶWB,CZWg KU0f&#%:R!u&7Bb8ߗ(ЋO;q.!4fuKc'FVoPOjxЄVWNUzo舔\i',b=h((b*V'3qe8j N83Y˓ْ0;Rl¹k})v}W*ٽk*xN>`cɰE(;0]CXG'/gq!]`օ7G//0ǂXGQ`1ۅ kO:$lḦtVԄ;Ozois{e0q m?}:o:B<~m:$bEl FlHkT` ft9=fH*W{ /CS r43ĭ\U0ԣ:UL_!mo [?xY9gV eI][QhlħF"(Vslp>,lB/fۃ5"Aw?}PH;瀆7$̓{>񌅋çq vA9`Ue,V E/8sjgtzg 7 t tgqŶ"_CpZa:!m^c` 4!CcZh7kW^An/1 -Zݳ-prel`)nrYjLBL g TwoJ;׫rQ'-^EYA oq&׽̾ :Q~Cv2"vMK F}_t +3&Wr?Mdv[w|?D݅p8]Ɵ8TxS5 PL-xu Pr_qac< Mw jKSG ojUh V$+^3[-:gSUa=n{ȴnƺ(Lλ:9)HyFx#\YA<5%j{0Do{SܚNt9>we!~?)KC\rB M&άRUc[fBRUshc|g޶ܚzt$2X5ʉ@󹔋L7^]Fb]zy~A`?8_l%܊Onbohe* d@{eŔGߏG۬k4չDÃXQɵGm%Jux8xTģc!ob?pޗ5s)? 
~l*7'lє׉7ױ_Cqk9p1x?cc3rtpW ;)cHol"|~瘒FXciq~ؚlRTr\}]ʾ<\ZDdTJVh0ls*њO!Ʈːb^Xic$hRd3DM遍4˳y[-n5V /N9y,yD,i S7}Q3c[F@lf_φb0o:DHPyePK c+3PKf\T3 sub_3/CST_R.trkUT ޷bC b޷bux yTMo6\I"%"ɜȐs_dj4e2dB(EJ4Ҥ\wf$MH(xǻg묵Y{>u}޼+7X kwemĦG߿woo-y 6skg,,A5k'U'>l5318sR/oYy"ܒvqsfDUm݊K*zhG^FXz%NIEJ׵eQFMQ)J:GRQ-ƞTIHar hVZr$lASCQ{j~'7KlYFv:2s@)JЗX9}ZPFK(&ݡ|6z}Nm-PaS4w#V;zaʝ$=ѸrBqG5dʆF#Ryм nJGBgvxL 5[7's,v@~!K؄vQS?)Z`D[t k:";5TH+0%K`CE~٢ptx+.솁|k> M> ~v`~#! WxXtM/F}޹yޮesd,fRPM4˃8z|,L:=_}xզQ{6:!}xaQIr9raP=c=lĜ,PؿW 7y#/:FE$֪;kÿw`#L{?2(;mF,9MLӤYN8| BLR1̃/a(Ѯ`=P-o*Eh;FSZ@On$tC%Xwuv!D⿣/!Cnl0 5/fʥT5Smym hb2 3 6+.p$&(L id7::Ԃ`ě좔2H"!*Ģpc*UV`E殐(O !'bZ ҂r\Յ+e 28so_΋ag+.qdf9"OOę\]->(>0Anx3j%-<6o4d\"yԷؘ24W'4s&V >9KdmSMc$=9rJ1{kjBlh<5&>8?1 8tDZ %Umf WvA ԇN{{)$r;Y-ۂ90Nx7"rƶ%)[SA;gNd2 -mcɮn-C0keA^yWVg $Vgo"훞KJhvGPv X{ݻ } _@#]nEA)^BN-!g_)׬08 :NFr)tp %0pnûPqH7S{.mټ:/f+i1T=p;2Eloj1BzJ=P[g<{t%>ט*ԗG{j}&QLUC#Jіmw]:>f;T !8l#jT61u8%޳ټh|;y'6IT+3r;qUŎAl-"1t؟Emr?pO$g~P E$r 8NѪI2ݕbmOG1w%ͩbKkT|4 Pu4Ժ ϜR<ۑGEh8yý؋kܡ5a>֢I\. /iLz+5wgkKqb,Jϐ~Ngmxv*bl\[yz 6!1o´jEۜA6:d9$Z;yvf1]G!twfFt+Rtŷ xZZOk+XE"F ˔MT)vFS_5^vXboaC..MGlțpq g3M:wMxE-Js huZZ0HCEcRڣ^KQH21:GШp;vӛeP vd"P[v8zGWU!n)+0M3Xn :gFz`Q `6q2 cixUh S˳nz:{i&~Efߩ$-F^qM0}z:h&>a{<7*Ry6Y>c6#ܠ4qKk̸#cX~' 1h5?h1Xr$Ed8I"jPl"ؔaL\4=:_hj1$?[K\o"OXd \5`k6f)z yocZ搅zd|6jC1ro^t2c# 7hķIsw<4qC K68޴~p,z2Ǝ63ŭ*:y~ TzrCwj|mƒW[Ȋ"t\s..C8; )p}^<cs<4c4 {]?P2@ ^wL\-IYyNYmVZ*|9C64{} &C-&Ŀ蹎F`?ɫX(? t1 yJ(lyOhLLy2L]Bwbӡu,B][qy6NF](\\?)5P̟ș.)onD:OM`u9{76l)-< &0ȴFT tFu킥s2fñ#iDvƂ 5~KLkF1neN^+ Üir\`C7?=it8ɜSdH[$sWgk~cZ>{"pbj|ŊlmɆ\7ܑr!r0&Xi'#Q;w;x8<ł P)ĉ5td*ԃ.4τ}i9Ist><@o2UE p/ܔ' gWxl26ܤԓv0bc*~}A(|DNK HĒӮq;_q{bWmH(S&M@dlkΩFtB8-gQ8Uc M4K̼J;w)"dtCxlk./KG`B GA6!\1hag*9Ym5q!Q4$Xg/h'0.dz'-b veAu$T.N!i$`մ0'uޔSNB5,;*Pk0rD4Q8T#[-H֤> jp P7ܡbq5 * >yW0h/pl8ީD@ /ƥ5$Giıvp3{h pDP5Uv|nP&){/AħF^7 ~dOtf8'3$s¸qCP?wrp_:$ 7IoF~,fw'wf ;r0:1b܁VI^ÐYP߇e^81 K" W/'OK҂Gk끉$tmĩOVm"Xs3\ ôS) %7kS;=Wz?,ӾROl_Qf rRsn]d}&lD]Nt(v.?ŨsE}kf\Izө|t{*թw3^\ZGhD<ĠF$]G{CĮ`C\0?˷cx'/pZ `nl8~}xKWӣ=!xNw\mf[unsWO LL8`WQ ̂Cd&مCbzē ;{%!xtD1. ѻXO%Xi5rtUmX3LR+B9j3SX2g pV[c"fw;VȄx G[BAcDBa8ЅW͟Y;y.O+H2bNs>8a{Z >q|AW-FZˏ2՚/`LI,7p#v~`Қp0֑a8zo1Q&4Ghr%cㆩȪCXl@;I{]xxb !tGTu~;YiC=Ҝ*O@yDG yoϯE3O ouKJJ;e,$ИwHf,AolPBY 8#ųR/oa#]2 =O^>GWX݄ƿWde^T@RNQ f4 y:z"K=@E++ݔ/p6wH}҉ PGn~l#kca.OVۯF_mjUK%8`\d",[OtjIrE_ اmȗ6l[i~(s&C4f#3&/e9.CUjO6{ap\Yg){J4 ,:)p@6͋y/ztaa/$|=Ƚ Vq/}#}oBAKA smM#T 4GbV ,gY:b(ZAtl_wO 0C3> & nwZ\^'KkKs'6aD,:I: W'lD)PZ4aR9|}琯q&B"0sD㿷^J^ޅV< ۯ,_݄-8[܊W8l[KylLm9.:=fla ,Y'ؐoȿ {_3`x(3٢ ,1>*U۾ik-îI|nPpea;|k%vys%J )Nj!in"0PFc#~%aY=ׁ`};\waҒ9_MbO*8m*AJ0!FmgSLiG^#rԻҐ_givnm:q)=foxf KS/pC =ayF*עƜ{ m|ɖՂE:'eO? g\~S0lo\7hB?oXT~lhcrd&Fh4>T#346e`S:7>7V}1Xd0C[&F~1l cIOo+᪬!%>bޕz&I `?0arj EQb6o`Iz>~/'r|;4e)xw|ENxy{!Y[U^ ' Dn-d^?4L3;]"|!vChEނ$8fC?"Q:,c\9*m룖SA<+dOO)zbt| .!c/$飈9zY)%Lu*[Z,korJ,Vlşj̫H=^a;&Ovsc`Wch[C"ȄET$[&`ʺ$j%v3'RqZV#11l7i>k/ Ŀ%p`R!kq_I#=gŠYpc+&)"R!Eq귺cNV.%fj,`,Yr5mr,Ĉ́@Q>!W(,1P^DE57y+ܒ\],o˹ |'{Sx6GPN7P;dUZ5gtd^x ݙj[#0Nb~K j99 p0 l5zKq(M1KL{9ėZLW>i'UN6IwdCH5_k^H>X',?]ĩފSrTZo[i),xF O,;l{6RdC_qvl.'?kJ)fsb+V,ʮai̪3Pɢ4EST8uo ch03fSSw3/i5i\t)s4\# is0Ge>!]MDƒT!R8MJ$WBi)z &3uOSYNpE )܎5gL¶GC=㖢qNL*MYق+l@\X=~L~.} 79<-H-z EU5P$I+§%vDJF0E(1VymcH ,u>xݎDd%zրfCn- JBzGU pN2} yǕ9BUbcz_ed6NƟuÒ-{*42Q]RT.~T#X]/a7#V9ˑz}Lw@/Dg6U"!g`SCw4φoN*o3Wє]^7v)D49 vo9<8Xl\ɫ}λxl]vHZ/So6tR3S=<zlͼ|&]l0o:E c"ӑMa"OK21l V%X_i)fH d87"v^r&8JWZq<4wg1yds ^ɟo5"lX1ZnǛs$t'dՆ,ѕ9ȂEwXt:k.,j>iޢsAR5&mVh<~??DK"mBÇpQ[xn-m~$dQbG  K>t#="X?%\Gr +3.2X,䯶<u獂sC0QӁO)]F]>-tA/.,. Y{n~Md@͇~{n[uhSz&E/\9 џ_s5ruboPK/>C d֑n5^6"ϏaøT;+5^'j6I@l)DHrI2sŜhkzg'gq/t!z!.*mh/a3T'k:0k $f.%xnH <Y^KB&Jp"qZ"وS׫V-hZL]ju44 H5K؇oT? 
>;=3Ay7A@noB#hx"Sd|m3]90E)Y4]Uy=r4X'QӴA*Fҥ(__^N8*_OށteDؾsKאōeVQ8z`oV^Jqb_bz8awP2]?1>~v4:g,)(Ɔ \DUI/1pS(JS5vl A7A=߬vZpF/J[ֳZ'ey"1#_L"?[ G༯$A2g`+n9Bc=tDkQgycm`';:fUK4iY+qJ>5g>]Ǜ ]S.So(7)f1r4y/I]OPQ6S YG"c04o rf?(3>kLũvF.AuiY_1$^о )2VPl(}< _)]037k9'UbQƼ!_Ud80)y~;t-?9ӬC9o,ĞCe.A2v &{ qLjL3"{{O30֏RNBY }ρC&%8[KVaT7wcj!lu>3&r[1#}q#z#lμCJ[> Y8f/U+qHϛŸ90lJ|m])!8(`{=DI Lc!Nap}›7h ,qE46d_O1>{-"J\D7/Dqp**܈6Xdځn[O|Ą-xg&yq,/=vW 13"k;i_$`zs ac᧏@!ӧyQvPYsxU%$I9 VyG^69AԠׄNZA evD ; W fӪVH.Qz4~١<>ᣨ*U~C9Cӳ~;#ZrD@N:fޏ7к,n ~}+q„^]`)w;s)lj+DYR{-0ㅺB͝a(ǛVbwςxB駑}2r,>~tNƏ4q7iԤ Y?FeVY[ҍfJg:poR4wk!sZ_{~""ݲ C&F\va2!Fb|w>Y C𵰞\> J׼V3tNbbV}.Wm| ?'jnY}Mpw\0xĻsbt!Z-H˄rMzW8Bi0y%A =i/l ҕ*ăxfFg̭ޡ4F/Q>Z!|eV˓[HS;]Yt1 OG8NgB ;}? %V gaWgɪ9CHr+a=!|^#d{خ~Tq7i|[Rϋ O>`=s/PKCd*+3PKf\T3 sub_3/CC_ForcepsMajor.trkUT ޷bC b޷bux iTO6\Q EBIJ\_$dJfBeNH2EI%I{>Q4hPJTJH~7ϋg}gu:k8׾}^c118-?޷Z^)Ͷ2ƿhWOa;5-, $gqq=*Wrtپ830IEbZȭ/gpJą솨Dǝe}?sN(%h֨#A^кIͰ{ܼAhDRvDAmk(1TdO(}p-0ޏe5k O"; ~)Ivǜ%[+am: |oo^x+Aұ| dyJ/ïʰe+ s8iݡ4d3<ÏFIGt&ҪIŋi,+-_WlŁ|8$[~63p0/n=npCV\vx`jp[yֿ"@M$הp**#ip ނߨ0-Yw.6_əN7r] `51rb-Z7'օ<+ɛfn%UD~EQH'л$jD7zd%Ǫ#J Y$/bSgs0unu{@ׄ&OgS vԽDRB(*O˹ hij"$ɋlϮ[kq>ˏҷxϔz̩$Y#VzV_sO(n#8B'4\cwLB~EKqvY;6#7 Iwk#̾=0U~Q.6əÐ1X~Xi6ϭ;, '!PY"worXJ-Lm"IvG Nћ,mu [HՓ=0 E|Ӿd˖D]o)A+gāˮ佔jDq&ۚ_%>"41]^9^cZOO)%?õ~? dCj)EW&PuYyãfN/ޢu ٢f1:d-W` 6O8pZy>~n~J߁F*6V/ab)7  ɺ:iwndN IGL-?2.Y{a{[d FĊ*Q8Wc+15Zl9$'P=L>%N6t^N1eh_V#lb`{r'Xtyd_}bS+*^Ӥ%SZAGЂd]|)EDŽUW>Lr}?7>6do9ϭijQ~֎s7*EƏa7+R5S)adCNXLy(M "bwU1m01a3U`玗 a& aeڱ01Rtȣ{p 'J1%}f,5_!A^D}<[k龫:R,9kM oj(PK)>ؖPH7VACGnِCP)_vs=aM='Ե 9>=0 [ sJRSq }}iبIO`ˌeM0>vfYv_3n Dn62]\^Y#yIaO\]gcB=)AY#~2….7##H-NK xݣ~_:Bi\dJ [Y4`#Mī-%}nHLfa8r4/ѿwP%&tmL˲'>HTsB| vioS `^dGAcXƣtn8{˒U v`J1籠Z fZ܋ ѿ]ÉqM5-_-7*4V>xs-{QvЉY׳\Mf8<0PT7Pi#)-GWN-9Uُ(ӊ@ss&_O2KiBdĒ4:z&̲'ѻ1-}4`s fBYޡDKi)Eu[xDdC5fB?ٛV^P`;=$romo5>M?s_SIw]O"a3Ӟ]rL=)ֈ~Mr55>@/Q㘅?~wZл/^J|2 %G0?@,fOdWxL`gjњfD-ƺm|[t&ڭ,8_uVEe8|u#b;؟n!GܨJIzcAS*F a5/HQR#V,L4rn݊s I0(ֈ4p%:4^D{Ni3G/qa&)ƞB7y&{$Ogs}ַN~̳λOjaurxZƾ{X\Ʈp,3]9=PMo!圊t%\B&;[$,x8oKCukdYp6Ob[K}.y&f+9y J/yE9 M)RePȟd@Z0oOЈ&.b>[%yCnjVO&fOFfGxyeLh0x>]eΝ_NoTXTSM#Ғb8\V/'wH6 r6!$%dГM}|-m~"ĀTxnX'2r/ڄΑI <GL:^(ds:?: Eg 1ɬÐvLyxhɽS~/5u=89@t w\ocq!M-ݜ㆑hDJp?JV{e1A %qNԓ9JR\OgTbU ZxσDYKfgf@1'A 2)6Oò079vDXv %z*X扽hFY,")4ǮRO}g-O3ϓ?ֳ0J\ o:cU;68KM`ExqeIעqł(&j&azbiBt hBܝFیGOP,1d _`g+Q*ٹn{B~{ihi0 MTrc+!7 ס$Փ0T*e6'D:f{ qt*$Q;a>+1_g8 OuCs88uR9H;|!^@:Ӳ0)nxþ8s+/ ilZpè:9H T$j> Cs7œ Ţo1.V@/PsayoD.Ɓ*1Xe5A,AY?  %ۭJ$@\hL&J*{7~~oEUV}AQ34SC\+KcQeI 7_)E齙lD*n-`e_7Q>ԙ;tߤVl\3lF 1L Qa7oHMDr&'A,~N(3P1SˆKM= 9n-cYg~x8+b|A*IgBzc4og=CO Cu'PwB&OA!.0f?GF Qq4`UZgTs9[f{r/U-Å?J=(2p7fB%_l_&[6xLd- :5^>[~(pHr2© Nx@r2c7a h-Ct$ә'e4؎y}=RgFf_Ӳ&,񷅨N4ݘ4Mȼɣ ,(W{^:n#S2=E~!/]}^3^854>^I-? *hs?sDܾ}cshzug`,D fdkręC$ߚ4Z2 7g" Xr Beir7&+5i6rG_uؔ څ{V&IFQXc<"OZsm?ZRK|K _-.?=wNP53ؙ?ՑK=TZ A~8?Ė>MaXmFz9/.TyMuf'n[~3j[ 6DsŅ&}>YՂԿ|!3h'٤)`vufM~qm:![z4]~YT‡@!b>lX@e|֕ެ_KUYW?KDw"}&z/Qs6T?`V{H sS 1 6=ɇ49rzE2eYlCΛwнla~S'zv8g 6KX:[Yx4kŒ(LD}HϸET !!h|rq{!iKF~: ACLtC+p`B l;eb#9DؙA6] K7`TYHn1D7zSn,[Qg0&ϊ s;c-Tlň}v8#zr37}era5V $\p: q9q"QE!,[eHʐg~Iaǵ'bj~+ѕ*M,JMKb(mB Ҵt3NGWIzԄ(rBoV|16}|wFK6Y^3sc)<~Ln1RYۓLM^^SkL;5ޘhJ36kmYpu r*?UY>H9zũpxB]腲tsP2֟=%̬`7~+P:*T-?ʅ",[KY籘wWsnb0|]d^VdF hɾ{(TE1o!CP%4wޚh+&ܡY7I{9͚.Qe|5-$i8=5OIoA=-N b4dD &'%͆'dgIqt+ 'F~iV݁a:MHϛIk6[T>z4WTTd14GLaKd8Rb:$NL}0K+AؐLq*1 3XuѨ~tR>Kn ]oË3>kJj]|G O!ˌ5:9a|\ N|5^ (aQ VT}T ,|C*ݑ)^(;q,5"XOU|_`3K mՑ!cJ;/kO&^B;w36W݁Q3Xޏ\A*&-\ޝlnJ-11/e'v1a^m;? 
˺lƞ |C"59wt@ `kρt22R[JF>DWxXs&Ea6bBbvND-'=]m819Nf wt)I'?M_;5ήI z]Oezkq;Am[- 1EL-: ̴d-]WdA[.K_3:лl?ѮQ7zhk_ G/_Xw[JJxI\/p%wM{ޣ"*u (+^Σ],y Lc cw/ḰRznc8vB--45bZd|X&\ ' uyf۷np9J !K3bW2\mC^!4&T-Db^#VaM< ֱŭ]FWբhHwr:Fz:Waaڏt̓2?@zJV&7RFG4kvg6 h`6LWW֝ >m+s;6&nY_vq4E/a7;k9JSl h=ZTMu,m*ΖA9479ʂ}Ґ)s@l~+G8A?F~y9S="ˇ/GE0;UW,;׺f!䟕/ki]qh}MWgC3UMvnY' =y&&+ƿsaZ+i^Rs7'pǽ 7NME^&G׶':u$g8XћNÊQPqw(E&*Hg\Q-44'Jt+EO tkyߛ1])8ꃊ;&ml=НoqCr)(i+zk0yn?k-VJ(}zBWRa8cI(~ k<2\ڙŇ xӹ.j 4-ozG’N󢎜F=~=;BvjWPqcdՉ7R׸%V:\Әj!OM%]ү _p03YaC'&*8$QYlTяㄮ.ae &ժ!\z cÅBl} 0/Ϫ@}B >nDFHS27t$)*88(Ĺ[Sď'\{Ũ}\Nzb$}HB/Q$?~}nxfKSKBB8 }H~K*.M7G3G2o#Z!GG&x ?0ɯ`oS|{N-e9vX{7_3f%IGzrئG=pWcl|6ƴ_D{L͵iB`3ffHmTհT4w~?J]8_CsMmgH[./*S("ˈh*]vr:QJ>EIecip4j6w4Fm-avs>軋'}>Qc!IԽ\ &<] lBY0J͑z"J[-z{t '3i^K>‘~h\;H-> ɻK,E#EatJRT5OU".]?[o@=Q;:ANN=!=!(.5^]%{[4QqB} +nK 9O&_5ot?w0 p2Piv[}6i&JS0w=G E805-U'쐡pHp CsFE=EywZ2=w%0Sߗ̮"\^=I!ϥмLvT5Ӈ5$bYmVx:Pƾf;XNϒCKO&B RB(4g:3EENa(<._:R8G])9Ù $u=b= 0Z+{ GjrP9܊=wy~G%#0]{_z %{e0_Nɕ|wX>lh{\xei>j_E:B/ޱÜqBǨ/-GzsilW690gw-͑ڮDKkY۫:@ey 2b]);giGthr6+:SL#==^s^'8O ._RWYaSqs+yoG>WO LP>.T䛌_XӑFOppL4yP@^(6X QZf;s%\=Qo1j:9]2尃y2،땋S .9\sA`fwg}ѢRTb[8!ML ̓5Ԫ K`o &L+@A簉GD v>&qlJ0ei-h eWg[ ,K#}xT LHe?m bĒܘzvo)?$Ѻ,{IE PK'`+3PKf\T sub_4/UT ߷by%b߷bux PKf\T3 sub_4/AF_L.trkUT ޷bC b޷bux GT[KlrQkNAIDp09a(P$\P H0""*˟xxcνcܵGujk߷jGq"sϽOYa{!."?Kc.319NMb%Ψ1Kə~+,F4Kywr8ga& E2h;$oPUI'ԁ>ǁg r psf;OUBvŐWtg>LDvg;Jw Wm#xcX`u*R$$ |\Xz[ywz [~ŏj4`KcR\"Oy҅aпӔ}T !s con*p%uGh ZgKYzBu~*pyT2"( > #%s\*:+ߋŏQZxD31- ; oiO)Zw8&+ZR^U:xQ&8>S LJxb( +MPZeM ߎ9Y>[KA\]~(+{k v`<쌪Ѩ2r(;[ kgt(-AkM[hs1l|Dg-?`5jnES(|P֓~&~N[>GdS?Yw7Tٴu5k3ӡ i+,]Ů%bY`mqdSf*m*) 'dnE"n-t&E.~v kuP! &&Ȣ=@e_Wjeu4%oG:~^H' s&,O{W1koZW>7sKJ^ҎmE%ٹKc>:ţc`Xm n¶WVp}tH<v:Lէ0k,cE\c[E#xuƹX<(Y$7@(̖ Hqʠz>X^T$عT\ۗ"f|pfi1^Zx/Hf^̈́qSzDWp+$W@RSa^- Pj!,qTFUJvJRY܍`# 1I+8APMIhU9I,b={f_KmN ŔQ9EUGF \'bjv綘 ʺb-!'Q^Lv ?&azmx,hi'HV5YXnG<za-vwfLTTB%y{TГ#aja%YG& LcKfrzSzveg_(mRyooM ',[9"4ؔR8!#[bMB (8LHS'egBbqdjɶ~ǡPZeNvA/p7jC5xzY:meT1 r\T2ϧ a"Ur>N)*˨^p'%,MkEf[> {G*(OS{ z~$X25tx6VenoL&$w3fE !`3E֍7/GVm QY !I$O7@|< ff@hlta!yD3dq0Ncaƚ(ꗉĹ?Y~P='mxz^SL}Vӫ\nWLJ!ܨv*cg OוN*.pr5lRFk˔Q^ @Xc)8;R9Ft]$O0F~αaѳcږ۰Ed=SvQqWeo%8)#t+wS93wqu;ܗEo '^.T0z OhTۏeW]5.,^1+6Ъ^VәumwodgZJe}30K%%"ftg+gKiɯ#GCY4“S>ǯ| HԜu F)2%1R)rq;#Np;֥evfv,5uetKSwhLpvBŚtڹ梏mڕ$셣y-q7mmFVmt#`',c,ք(AFڲ gN[5sk0*\vŶq6`fvy%Cr=,KTrG pOA+zl+'px>zﮡ _]yF|=yI4rsUMCZ HƠ&9A]fDujmd1̻.n}/Yi`o$YwysX.g؆~!Gc-#spn ?,8u61.?> "Q,cDs4K*=>uRxm.#6Dz3uz@No}|V+mcVq2L!@Cy$,|8%2षPit,ຈk[^hk4E* khQh cv> c 0'8dXڍC1;QwÇ߲JnzS!Mk#44KAp{6; ?gF\RU''"ܗ)Ł>^yVE UC%߁gMb-.[G顲".U:G(3Gw+P1 bKKK6ypL Uem.MAfLeB5ÒrS53{QBH֘'so< 8|{C??ߟm)TV[Tnt| UT4Fm6 X>h?-ZA pHc֖ք`_hR Xjr9͝~ȋC=ԮhX~凂VgzqxB^Z]ZX*\]8ܠ7MXlwl MJlr8yୢ\|LC%y&X8}m|;{?w٢+${5w@?g#w#c"‹8cL(w.S1fU(gF [w i _6Uы+5ЂyPhl~$C> [! qe^ک`- 9ĵgJ>avz% f-nhSu9h{->DݣEЇ_Cʥ ke5fe衽eSܵ14{=I~d#X7Mw 0Ǝk,{d *~Q[$z 5 hg_'W{{ K$(OZ蛙Ca-}իFV}p%L1YZ0^Swv-Ur)N0Ei8ohešS a>gGDzk ,Ú,Kqwf:n[ìbV^k OF!g0/\t*f|iv ˆ(UQV(o- geb蓐CF+C+:\H \?Sڝ+U&ƿה+4 L+9cG-RD - ϩd0\2'Qi<)u[ko&|CX mE0t$vdn+Au%涝W: SCf0{Gbq-kÝV [_zn'ǹ=r[b 5p]Z3 eSW>O()&.E  0-v lMΠ߇PIĈ=&xߤttϒl0?TkJs9 v B8> 9`&y~޼Ƚ1ݘpbϽ_dFlr_ a~| WHO)-pޡ4u/&~&} 3AJmݠ !2hP_IWq枺C#bz CUP2f҇bt*JޭOk TgҼp+c:d؂#fgRG㮒k"FjU`^'IӜ]/W*?a6,KY"<>Rf:ctw3?Rs]>(V&_?~÷p^PŅR3 ,;ŝVB}IB ǴiIƘzZa} #C#lJUmj7 qxjy"OVA13l[p jMdohugˆ5Pdowg^&c6&StVp DpFzl]R&v/xR\ޙ,Đi+:Z{upX-̰s4 }>FIÚb\=GR"Ǟlќ47UѷW`jzZya2-]?'  W. 
*j5JG3Ul.1TJ;z מPa%hl#A _ڵNB>^cx\ Va=ᔂ-٬%J,n_h͖;@`5-csbgawp]yK̰,&cXNc`h:L[Ʀх1@}d,%:[,~=Zhyn7ހZX -Ys }-J׷v7[L1 FYp/HԽ9 yi~~שV^a/^i.( Tz@l6tcUh}X8)CKϢY/Œ}g4*vw߭YH[9.Wcvt*h} Zay;PuLJaMdJq!hdvfq-^x,ݰZW*#ew1jR9K(rS`ADJig)q +ZXNƆas,qNQ|z+uk2bW3a&(u-d`>~Yx~,F=Xi!2c6_h)Z겶rmeЇ6td~ B_h̬`XnjJ k៪vALSSkyKq4e+C RK.t.@!~#=[)ǚ `q@_9ewr Vϖ|G9vPݭ"IV OnRelǿ >3fj,$?|V?9bqL9p>n/+ ,&kڡWSBPi[Vزi>5 6PAg<7sjh;""X#㻫FT5(yt%إ KY j0ߌa-cAغi.Ass?WyBǍgm`d78N2&Al6/:"(`~O±Cqv27uP/jc zm\&"ǽ|"@I7VA\,,*x08Q*ߗO+CJ sRP!LRMDqd 6,kYil):KrwH[y)dy0$mJiO!lTErҽ.SYP:x{{knܶHaf'DY_CbixnX:;hT0IY\΍Xy:1.5dv`d2ypV x>B) b"Ȑ0s48t.saIH_"+֑@JwARТ6leS+'oY|fxBNS'"qm+X oq5QFE7US]aYMW݂50 ӌPfF^'dp9a20H'3|zjabKw7 ?{ J0ف=bBнom^_~΋em"a&hl S-A39xsmم|h< X:]_ 3E]&æ 9Z+ʛkz ϶Q4NJ\[J d@6Ɩ5`਋r#4/*F6kB>KHIlsS{30Z}rS %J1?LSg'7\1Ɩ,1'dmY >p8k6{3Di~1g0K ;97n|{I÷)ҬO44):?Ȓurp">5 V T_rK jϦV+WQ:~0}9r37mIwaxY txi6)#q«tl`PUTqVMqD8*6L&jlc+T@i-~MsF"ψEH Rlx:X -n?n=\g랩cm `WS?n>ixJ gM GO3گWg;+8&-TԘ#l!Hmʵ  -26M$KI娥]/>0 Yaw2+?:>;ʭg&.Y04Q3Y|.,s.K }vn!pa{EN9QVKsR"<>4~N/Wϩ ]AVwa)Io'gJxq4Q_cRɇqb{Ă;aa}4M ge1,UG\dcOf Yi4cKzeWxԎOQ$qX ?Wujnht9 颭g)&H\f>^43'7n3^1i7Z"`t4ev(0va( vl:[9f@62 ,s9ƞFp`틪IYLrN,_#=WxC<׷PgZ2!60FgDpyFTnQH.YCNUߣWh(UUϑ{M{V oD{;5] Ws?^ޡ=:v$nJafBTf=x7 C= mUbp'_Gݑ=_{D!xqW*d,Z:YAt=EY #b~uus)Ge1 zB_q7}:N1Si3M~ȍ[.@&sJ]/hA>7g>f9rt@^B $oOGwnՑ,EKmݳ1Pv-yDw_PKL|+3PKf\T3 sub_4/CST_R.trkUT ߷bC b߷bux zgPQ% `Έ EQE%^*(&TĄ IAEAtYr9 D~:Ug~LyꝮS릪uk^.Y-F?6r6gwrqZ *`]ΆU#Vȧ1WUq%qLg..ve667`!6Jzϓph奡?r*f:90ز"~'F< FAȦk 'Y>@cN>J炛w2nD\ǵ1cjG21fa՟9mCpjI#'&/>V`w.V"}UuHkC+;@VR8V* HpU?c͕yHV s?vS#P艁%ִ~Rsbʼ$*ʖ?")јg΃Jf SbqXS}wq#b;2z)󱰓}+B3\ Z'iضr`_|1YNbY% %F6bzzelJ)񧍞) $g]&ظ(4=$y-8v5a]gawx4ypw;HԧDBf}s".:>yv;oO67?/l%b]xfx)6}?)[[art-Çݘ;OX"NyI;|D:! :-7z9̪Gn3jvil<Ϳ٪1.ĚXڴ-Wvc_I1︭cLH4]z nRUQ(01O?^h5T40q갬)mEXџT',3apΥ2BVj66c.bi\&Yx +Z|{vL# H) Ud;RIͷ6X#w?ab< 4ɿ z4ëCyȅЋb`S V=gGTmRjEjM”!&Yy3~èŭB\R%uc~R'70 @Ueǐų8Y]SҠIe^R<`$ځR=82lBE<}ɸv\בPYm-C,|(RF\yFrgVAA'_>֖"8bPZڌ^q\,Fo9:0Ug ﲝі#LvG+'J*0 ,*3S ,"t _>3s<x Jf}z}~)GRe ս ObgH'/fTl>7E, qU0d2*T^a>#MEs8GM#?GcNJ& CAp.buX;敹X~hNrKstTϚDf= II伍ģtyiEO"YسO ƏأJ ="wގ謌 `o ^:ς&jl=!k?v9HWp5y0C_΅ۙ}a: [de`gd`$|ax p8?[4 ;!x&%>X(ck %te*ڈ)+qHeH21kFNA?:DOUC,VWȚݰM7+r*?IeLNSqh_|P*GoL*4g\ap;[#|V2ё~pbckjOh*z6e.VBٙp9?@Թ#"P }bΕ887f~tGqj[_,&HT &)SQ(zPflV% ŢlyDY ztbk)`'H-ABykZ$:1>Ϲ(,z|'M!ejʜ'{e-sr9$@BuHԊ*P{PԚx\}@y ~Dk|ك'ϲ&?Qx?~*9gL[ !J# Ls%O̼cx{oJɕP%ÞA{c0b^cHг=!rS\ӎ=cȚluy OY2D{RR+!3a¢,C&t'*6l ^ڗǎ&2u%GqڼXUyR;f̎nz x)`Hi«&NRS6/'8C_3:diӊ44-ٝpUrnܧ&Σmo)Y2>]D |C2]ؗRBwe^n_lg*hs;jZO|LT93z-.'fX o3.A&0`ۢ`:%ucE-65Qr o1F Ƀ@7fQL%NhL]u+00 Wpr M!KqJ oƻ&L&+q$؈WZGچ G jzgp1@^5{[5u槸QSа^R65 |OjVtPdcdL7MGHgCu芋'kCʼn]-| zТF/ξ͟MWp&g^QLJ7) FŇ/z E2QM٧jE1[f@QhL͈e#3~euFܷCޢw{Pp;]Y{)*ovumZ RVw;"1 kSy܋HEHNY{+wB+Nⰹ&qU =rbW: S_'̈!__}ZeKzŊ`Aͧ=g(]oAztcqn1W`q I|"C;a>Vc/iJV~=@T@Nr?9 O Rb I~ L ]+0 gX LF2wEa OX@ڊYlxAPYbnݐY0u:\^p*[QڳYt10*x6rOBk}'0^ _Tx:,|LP2G%'U<7 r4q ct[LuwL bkqVR'YdwSda./'Yj0nK/q׫H!zg?!ۿJW_[C1uP'2/<>kRov&G]HsHmy^Drp$?1$j!"4)/S]nwq/ ߭ % kf/VJnv^\n`}C@ ~)^S7mC=N-HYͽD .VTUF3\VSZ _iϙM1 Lm G2,y\HBq Y00鐪8K +{,y8<DdV,<5՗܃Ž+b85h}\$WdSlYcE2R LiCaѭGDzzKd6thAAn\=^VԑG eao!avc+zBŠ:pQ̀0jURXQj`q i$7NÇ5C͐ QM8&pc'QvЖ(_:]ןݺx# :ָ腾#\ ^*AKJA1 =dҰR2vBc#B8Y(? FZr7%B7_6~}"BpZ`*-af1'l8QUpɐy-+('\۠k3ܜgN9|iӱyrpܙV; ]"9;Q|n}XQ76"d"9.˵|G7oOcCګc'9 o?c>)-z1`2"#*0oʈF?$nK1]ͦWA99pj&F@!#FvBANyV{bv[} v4&{OuJ֭^{ռ4L6;Uה7 s|T Hp)*BKxM g/ 1Ҡq F IaW0CW©xEȡq lr%3K~ˆ̥Sc ܝ)Ld ~ߙY7Jr2=$>Hҡt]9 ;Ę38yӞ*-&vA\/O]i}K}rKU %f , ~*nyL7Eu/'?h}&UŸ`&YEqQX܈Iɘ֧Fzs #BB׿"Q/ ꗘ͎)=zWA|N)߬%G*> pѹ9 D(1k<4$%cL Ǥ4m|ud!kDU,ò)[/O#ݲӰK95@B46p '4 h,?'N*\M$OU28uɳKuEK>(9J°eGU*@& Mm(;]SK1?5`;^cU!{iI v8]ɵp~c tS]S8?! 
5"<'C 9 3)Go,SSG*~q|WpO,]Ȅo >]YDH8#ɒ"i6wI(T{^8#ևՓr=xG9a]BwD\=N&aU]/FaOG$' 둕7q v}„oT)Oѹ-@1dgMW0 d hǔp1^"mDŽ~̋>.W R]Uxk6 Ci 1Z+`<Z82Ќ9iLbkQ5y~!As:v&Pn0;NMCՒȀEڵ`K^AŰ|27_Co=7!QE'ϜC2y=ZK`evAzw`JDkMֵ)F.H]ԻOypl({1mK4jz>7+KZvۯYY-j\ٍ ]wJ5{X+cg\JN&dԓm)h\hC/C$5iGp\wl6 @ | C+#WDN4A ư! &Lўʲ6љcG7S1G!>[b0i6y݌<_wW3fȯ8eMb\T{Xr՛YWUDx'5b)jmxF8ƴY$?Z!b߄ݦo"o.'D\W~.pwbYc TrL"4Z3ea9FLr, x>7ꤓwaL.Z( n^lzdBVћi:|> 1i 0YQۓ>!pzJ?Y)ϧRERj);)]l$2o1ՇVe0CoaW,KrůmxmɰVE"bF 4:Gi4Z"B  in@ <&=uʩKBD>  g"i Qacpn"WCயGOg'3iq ¢/BՖw޵[BD?͏{NS%?? V1pGĆ&f~XBmH??>G. %1RtK\LSӂ=vNlm|Zm< rO|b~BX^C)(7=KAoM&8}HI1u>e-PT52l l:tspǗ\'o͝ml->z;]:c}[D=[aV Zmɞ8OX*6\\>ݡ܊MsU[PDm?K^Ĵ㽆8L,4LYg3[F_0]A|z:UF&Fyn==:?SN^ǟZjf@Y*Uko그[ :jyܯ"}t \t,DL~fBV"\8ӎPO- Hf"2PLb@2"K(C6:9j!rN!zXV褠wL[ j0״ P:p1ްZ?^Ȟ~]kσlE1D 1lƂrXm+rɽEğЮOα3%j|\buqY-.Xl g**O$?![.<@Iz` ɿ/>]+6Jzg?Or[-"?IpD}MKхpfĻJ#(Œ@H^jYXg֋p^?}ـ?>1K$~E"D7;St^~eG{!7s8g}X̏xz/g/Qƙ2\/K¼\HY5Wn%mϏ^r RBn1-z0c#3<*8+zPWVFz@ >lx&@%FbT@*ӧ7NcJ]Pb=jľjWXԴc[/n"Uď{DkЊnŌǴ=3HȺ_7_ur+Ù5 BU<u laszNWo}~sjlml}l,s%hݝȔ]P5m^eLbCzYSslٿ|S)]X:st̷}=j{(c&:@CDFȘĆe7.dK)Mc1CޞIMBHPgaܓll/TLv%Mf0_Ak]Bgrj(8 O&QHR\'ed$̍7mKXkTqBz+j.'Vdv?gbFl5TVa+69MW8C-«ÙJ.!ҕ^S L;Piu9ƋU1*JrS. {ɴ&orjSYs2%6Zͫ#0d0]4 c~1lM2V|%bNZxq^("#;_201tѩηܼ;f0QAyZ+CO;xXV\bx(yAX?sʰa/^AV[`UNr(m^\rH0ZA9\8r Ů+Ì3˅Xu>3W~YDZukP \oʶ\C@lڸ)˂ ޴"o`cN=~^wu|)f]$fcB9:ǟ3h#6z`H f,u IΧ\>fY%-mgadz\)YXdecl(} NSmM Ʋs;W4ZtTg.V1dApTffܲ",0gDqSW&m=\Agf)wδߟ%k7CyE rtzwaPi_8{I 7γd g 9\vɥ/(Oi5D:ƴhm(T $l,mv Oc3H,BYfq͢dُ;4lIW*gv 3&p,3YX?m[%;7 8* S3ǯxc˒$!^AGWP{P*>ӻc?A԰x\H3 nV%<>?'xniЕ-T4}#%HTjaUZryԯV)baz1`JZb\8Fye݆t\9{11ޥjNj[L|i \[s#VZ&-]lujPަ zeܸ6 .#C•x`rp݁_eh-[bbA+^8մ ~K]9H'h?F%X)<{T3 ]I ڇzQm,sY+ka|VPl{r=bˍp@?y^Pw. l[;Z(~#Яvv#TdjiMPM p/y^&;x ɯ"~Szq%K]Xsw'/$jżj4:VSIlEvSNə|(nX(uk .u9cY8d@Wn곇wfBEN?u<|LzH=1`*3Ó0x1[S幍&,6:oqmB/%1zJ4zIcdN9 *Ū`dM A~s0z-APcg~4dvp!8$fG]Lchne\яviR!>9, ,1l;Mo8MYY-dc*މ] 9+ْϑМNƮmzLM˜osV3h;P:a8mݓ'pvօ٧'DxŸ*.҂S1Yo.&xţl@O__sgD\w`wj7 CP Ms=Z ĜtR~oշuԨW3MtĪ19a(|>ô 6fl3urX﹊ vT4,8cDf ϜsܱY^tlI^^rvqwK Oq?, :!^GFdA,KLaϓ(_LXX񴷷b@U#G^|Y]z*bij9^ևh;t)' iVG3qz" %8 d \z4F0:|2=l"[b=)MސiBnS2dWcktlRw"]v]!kot#Z2<J7`s:N=asu*La.7ROfW:/׬C򰁔nɫ?C3J'Eqǝ;P^8U.[ԧ#T0%+i䯡a\xnȳZk=4,0ˣ"zcNb cH P,|p\#Rbۥ+H%gRb-t5请1<}Ÿ/ lU%iL:ښB9b{ruxDORkJj$i(S}%ouiHw 2']ҬR^pAl}~r ?9@xy5ȳ}{\6vղc&lSO3C nҊQpѦ O5OZZ Xn6Y,mq⧕%B";Rmчd5x!s8HIӳeYSY=^Y>I#\Hjj/EʟdA57y VSg/'DŽ4bkk'Q 2 é9<?6~M=>8ng/4"FE54wK+0,7҉89DYe!N%-,js&pp& ͫ~,] Cve 1%s0dr3`vLYp`.`U|SaW#|&:'T. 
kT\1nbwQH;7E4>Ʋ/iP+ꂀ#!WqDd] 5Ɯ|!,w~W Ŝs*qMksA#/ RB:WyYH0'D-ZI 3UfoCk,/=WǭFllHVb_UpGlxonSC|jnt]zw.]Fn2i{i wg~~]y6=!x~9/J^۷΍v||!woyEOM% gj pƿq5C4RyM(efH2һNi*JFpf;Wc[aAnN| m`,}nu L;@41vV||mT;5G j<tH|;]6&bثJ3\q%+QkFrxCq._} ߔV+p *N)dl*ζ?z[Vwٯ7 >tGVWYT_r];]Dv&00<_*- /Bv`v3e^HL𱳨7w95;BT΅#]ѫ)RKAj3T_Moa߀)0=NӰKjİLb;f>= 6lİcFf K4o>[o>P,̦e]>ߦQHD qƹӑE=S5Sٗz2/^/)\L~ Lh(V|-΃tlt;?y܎#4Y?o{ خLN9-`q*a忚?ΒI[֓âdVsƯvD%]r]MǵCh`Ufޥ #|D}|)Nc.BSOJqHST9` m-*W|%0/JtIP"rpet.ߕPoČ$ o𬡁FAꎢ삤+ 1b >e `;bb¯azdX2oy@lfțdC.f9KT)R L>Fn8{Rz}A7\N 7}C2y'oWAI"GINy]&ӔZnO=t %+Z7,BLyݯ.cQj^Gd}{!]?J/uՔ*Kc7"r{Co&B?_<5sOs55Ύ*uy{ytіqy꜆kװ?~cb;c˩ή},n{#rX /r4szh/X0{#G'FuS։)l:$g9 8fYKYqH˴4Bxd<(u\:\ tP'O ,Y`D%E)=S2Iؗ+:xG#'SQg!"K_W6`Mvm`bʔ ӳ^KoQ%7GB~5ldXQU*fV<;*UL@ǻ4RQ[6$@q<ߖiߍ)y琱+W1EX  #t >Ǣ o} m%ZP`cu|B.F2raE-J&ƭ!͖,9=TwS y- g0OoWAϝ9([l\l߹u7q&נq{2KXQ)LG9Ѷ7J;[QfH?:ڜV美³WS>~%W鰩g<ُlDAf{ b{+0PJi4#1^Ts>C3R-G2j4yx42UC|3[VV;>m%lk=(=lB}t%x2e.JaCag+0r)]8Ug)Ļf0.tH?㿚PqIzo|(v,$13 =i_EELdQːuv#{`깄4ZNO;I\/l F$qW A: eB@?N]7 I(CR`dlL%S2lSc=r1h+/Bw50ch?fYa'&n#gW';yhK}^jBK0ޓ fA D);(kI,ubJj<†"k}3 sK_yCb t} eX$Ixj$6L M|d#p:ZE{qO2)MEek$<_cp5 y 8ןƩ3ә~U) 0l3>@7acVM%XD)TSA@}5MqnXJ[H2>C#VIu(qT"mD<*>񭂧|u2"DF#Wc@nޢ7ߞ'BezRKבq6.?2fפxu/|LЈGZ{4!:ʲzd![rK?Gn %׏[ Oz` [Sg]U?JsW⾨vװ(^*&!k+^[56a^= GRxl+Yq!D>{|nW96azOLX]l< `1TgG~OhjWq&;e b~2B\ y#ȳˆI p2e[&8ҾR:4s"чl[р >MbB:}DP,GW*P Y'h}xf`hP JsYDOϘQY2(Qm`c=v-RXz;jۡ:Ηq! [d:{`_a#{,L?f k\%O/cP4_= ('m%&ˈVO@c`*MǑO=YPp _Kkz847.*2P:/״Q2 )J/ enB.: FIh^GEYmeaWY M;+$1%[qk3SToTfib>>?f jY߳jKޠWVp>yLJv6r:Qc>&HH/*Ϭ\}u$wnz[WSͽΠot MakBzЗ ?ڦFEed=]eG:RHSmX?]k3ٌM}>sVa_DxX~zaibvi`앫C#!HMBTF&[gX8β8L"Q:w/,2d_avhk_ni'LM?:i]rkYG+iGmex$TJH\/vMdgo`HnsI ߴznRivVQ̲FY@ޘ{zKWXh37>Kh9TcN*؄*,kWwؼxꐆ;S`>b8 ՍT.MB_<̅㻥%/M}7aX]g01EDrvG06V]U.? m#eN@Ac w- }lt7F{:¥Y@c٩3EFSX i /vk&_ȅKɱ]d`NE}dؐ&ՆZ_ROr1/kX *u[іHǯGWJ1/!A4ZWoEg\N *+ƍuKOm- Xth %Oէ 8dwTFjdЉQqi3i-Z7"+,%.5v؆58[Ubua/љel } G`tmbd? pODZo2,HQļ4- {/",ynߏFNK)Wc5xU_i2IoM<4 *M8m^;"zomOnt^rb$sCۡ?O1(zLo<ȹcȺt~N5@C,I9/#'/ ū;YgWIjRtB(u$H$yn?>ɣP̜<8+3Etط!b]Аew>fl`="foFǟ\ddc]W06n"=~c*mtrT [Gd~(䫰e+oI$ҟC6{+o97w9u70?P8̐KǝJb䗝? |3i\z9ӋZOIzFĒ{ )FV6d6ې %=Xo| OM9 L$~kuO\ĦCȽ@pl[n\5Bsg{aП3֭(ӍSrIrǼտ"Rrcgil(g&?q7=ڧՄt& /Tr2Ҝb$z,7ϛ#pn]c 0p,pRMCY|뭧'$&]D}>і+!n{_rߨt6#k4{Mqa:Yۤ-%}ڌHJvޑekyOY/44bW/l~b4dO;QmU@˳q3(3[c;Z0qgs ͭ9$**~{@_zbFu 'E_w"!'o`qY*@ѬcYGj_ayPoá˝ F_{yڟ(K/ʞqY3]w vѕ@~U' {CfyyVY[O4mI3^zRz%Ő7>Aw^+6OFR.7X UYO|c1V/"P4gޯ^ŜuF;hAK!fw\s_ҢP(b)6> }c=-Y h6xqcVZ߳!dL;mz%^/pV ]Jq,:nGD4sVO s=3aYӆiKh|8H$ e#%[b 1RSq/7{CQ竷%`! ;7Ys_ 맒tѦ掔7 |C(Ť?yx~&7ٿ$Yq d0v@I/ݮSc\zXΟckpt:E? iӝ6!ܝv i#G+,{J`x?C*.7r rܡ$jn Qdsy/tc֓h1#ȱ[BY5Qh?% 8Jegmq*Hl?>ΎVX>`##k 4GE٨AUeHY{h]f!>[6E!ilH6mۊذGlV^ H;ka߇#h`{p:l͛"W\-}ә&O3L}mxQnCk.;sZ(؋cT)Ȁ#_9x""VUS,_^His7Gs;2|As._N!y 5!Ϙo˓=#G_6! `NSaγ!/ևn^ۅ:%}Z8 !ɧ 0Y wN$6>&F`nJmIF_ROO8 +c!Ҫج[ɯؠ `H<=:Ƥ|LyWgQ_x-.R8EM'5tcUP( CuY=]_gnQPFҋ11{aƶt꾵69& :=|:0 v&PS Dc}5{?2*9Bj6_If&AЦF6Kc6go5p%u).2^<#cKsWf ^>Hܯsѣ OWSpX!G}OA`g0ą&9$|0gzT Ͽ> #o}qu `x zve)+^QJ$mM;,zSq"r~A. ,nm;`H B 6o ;d8fRbyTleQg* +y%\03S'J#VKt],h )bOVRwRUĩF&mVA'Ipk7]avz"y;C_,|HK vS'1yn4Xj ,ekf;LZVBF?mf :Gr:ĨbKeڨtf`ɢ=Gq),).>N\Ϝjyr<ֆϺƂ6扭pqMA~m\Z+Հ cH/[JASƐe:* ^rfxz HC+[ ь! wmbd_G7ڒq+sZANPI[W\KupW }O'Tn-CZ8!ihL`˘1TC5.-CoaT-;UI|=`ꚻjpQe7|k@>.ÜbP-~cWކp=}mUiD5 #f ԥc!XwSl1;hגwig3!\4]65?Z ;XY8n5|WџtIZam*HE93@'A; tl4!u'at8N[zSjOz鯭`_tKa_J ĤFFo]v]'T b7w<`MDbMO[hzMʭP{saD<; LJXO.X; ʾ_֢UOx ū%.3ȩh% hǟ  {wd%Z#/X cqF ?_8)B: 5!Axf G{yLloKF.w8۹{76֣ʎ]qYQ?e{ml9[w"Ұt },Rӻ`Gu^DW_v<1Ʈgr>x;_3@R'B(TDa1_/.6pW77. 
o 3z>)!3"juP6A$ "+Ȇ(֖\-0_ ͼqFqmiy.]"=^'cc oB^NbSCYS(Ũvoz9BE>cXX~>|`1OfWNj_Mul unF$2EOуqCO${7Vo܆ҟʫǣJhYqٟ zyAN~fy􋃔g UٱwJDeS aHu {ه= <%b;s4ҹ@ ­98%j3t bi suʸg%,Vdg0<=uj ҹ@> = ^DHN%iC*8/(nv^q2տժ^xK"|}reuhF/]?,ѧ;-eZt[|G]Ŭ;hu3,ܥ>jT1Syk^wק?TPo7t30}$jc2>sd )Ug>fL` ;f8}y]u𤓃WA{ l1aP]Tr&) nIH4Xf1fd~hNc'ϚJKqF U;еUgA gބCXxҩX'|c՟[U직=Rp6] ݯpA]bem~Ml$.`0ghA+J&lս/`\QštD?>637,.b rF#u(lovO3`^& oOR 0O~ +q[q 8X -D3%oƏAS*` ' "<~N4}~fqNi/*0 ` C7(౰EHav?OGszI*+}%Ѡ|6lUq_ }- F2m2Ju`]|Ĝp°k\ Dӓ?/[>*i.o_w.5u#PuN ''bp&Lˆ\ 5?U^ ?HOGC?n,%Ek K!fAEim|^f&'00YM]?)V}(6 L:!K wuqb-N⪒vo~_ׂwah.*>N3xt9^JeOkl+ǎQZr>y\17*NR:]/c6aO6R 7T1>XcwB$QX7][]SgKD.` Y_!|dhq} ɴJ#a7LQγ%[bRԱ"0۫bU)Һ6 F.0>f;%gAى ^t(7X- *hXQQAm<>~ V?a ݰF^6κ~0 S{38GRwaF - ;c>菢9)پp |>,{U[[g C_vA"-hӜyOለEpPj$~8E0 K\,W0f4o,Np̂f-GE zs2-P9t1[ܮ]߬>{cT邵LSXT#TTi=ieYW:|KQK`i]\By#[sf#r8ڠRІ& g3Zxxպcќp n7Ylxz؜ڼK=ncbu1c- :*#5phjo~⨹r( o{s y/Cfೈ[1MPa&/pf߂`hw=#6rPxOw\> {Үzc Š8ۖ4&xς3#Xlqª^>Qe1^xgwUޟ2Wj9lC( (*,I.APvPVt2E{[=1=!4螎g#EX(]Ȧ(n@6Q0aJ޽6S _Ԇ2x@ 2#Y<{iY9ѓ0u.{03=ǘjuqz: cVvmQ-A͵Xhk, Ael9 \rQal]ZPW`h VӰ-b1"N}`Tք_8hgNF= j{-b2M{Q5o0AQAXwzP']t$v*j`d3y=K45#y]0q8.sr&nvC!?E[in# *Y«nQܴ]{\[' iuf=fgSX+88v1< R~.تu z]!3p-5t\mLp p1ѓJ=զM >+vHR~r몆Cw$> ?f G8x|kFxqk}ym=cJE[v6|S )8j;{H uq( kC Qʐ*tֆl$g\2G3m߱&ksԉ.Pjcpj հ];t :XȔ+xaT,c Y}.4ȌC&bKN*kg<(a-BaE <~K .kUuP&@Ǩt]`7rt FΌ.O-C1^=('(_%K+Id /ZniFdGF 2c(tDhRYgae6Yҳ:Q{{2#]rx/SgRIv=<]>ϻXdGyYly 4_`g+ +|+V,|\\7W-r9#ZJЄV>."M}A\RV xU4&3[#a8ҒK}_ 0li,Ɗcϧs§y>vZ eL2P ::JʐZiqMfɘ.&dKdg]p?[Xu\'44&ݹؕ^5+M:hPz'4Ch-)~dW3 ̥H]^A)4=:U6C 5ksMϞY~ςnS^L.tssqYE@2o.p]n/o@Ƿpp:^\,j"/?ۊ(~טȍHF8{)MSG6dՒ(Zeu·RQHGu y%aF\_$BC"fLUg l"Q|ض8y zxϾeLg4BF.&ko𫍇c)<< rCٚY#s) ruP mrmX:Δ@eecXyg*/. 1FؼHVu=z;Yy}8\g=йH+pLnKϜI(ՄlJR1KT@>+/_A+Wդ3BԂ.*4_ Pk,M`P.6̚_`&>7!ܲKԭlc>^B`GA0Z>Kbƹ#9UƐiiܗڰs1ܺ,{eHN,`Й!aa!O ERA,Q«|my5̘Sv>RpkWqk 9<d~/l*,KNFW~S9w.e-[،;`[6f>#CxEMJf*@ ׏̅h1&֛ri~ƅ.PK9+3PKf\T3 sub_5/CST_R.trkUT ߷bC b߷bux iT_ߠAIBIJ}m4"!CAҤIQ&JF)iPi4JЯ/~O~kkY{u?8y}]>F;a O3s0t۽||S*$unOӁ>yzE5tĔVT4aVez SǗS ߐ}rrrTg,/JO`ZwcW3Bv}ܘZHm2vNHJ| P3q:OEd./^˛2߿{i(l ;?D}670KEp!.P $v_%r[cj4iZ]X &9OmQx>)isdWK&B}'s~ y.4fZvL%|<0;Ryi5K P_31`>w f@il.а \@|2zfC .7@VeWo|  6TC$i9#$C WJu`F8KEu3߳%1suGIB3 PaD!9Y\Y^v4c XLPf͋x]B85^L_27`&=RGW^ͥoԥӥهуȵ3h./.r_RϢ!{=>%+/@A5N>ϰCaadXsz&_ |p(iPԅL^>93ؠvnKxllJq'!K:¦9$E~@ta'iQ[>Z*BGG(ϖw#-hʓfxQ?6?6gR% .93 f36jt^@b6c83pKk)B W薙Jj1,it즍Thc(6n'Jje RP>ƚ aӲbL‘.||̹'mq F10JC ~utrq>å%+_eXroy,'[ulIüKL=r9 =<}g;ۘUox;늲ABĸ%g/؍8y?U:;uj`9Uc\ Πt[u6(zܯ:T7nufN-"2 3Ӑ% fe^jzsHDCU#$40g+J8ĭ욦j! ##d&d$o푿s}!t_ %}?e6+;G3Ql>!OW2˰5&#u}|ỹ.0.̀^~$mׯXx1W/pZ ZC >Nvaq96_8>OM)JȘWupJiU?}kl7nd|PFQ(BC.#Zb/Zam4i9KiP:^@> j~+0J7hP3HyT01x,NUO=,ۗ (~;!$t[}&-xfJ*`K ǷB8c:u$bcC&v(hƥ v;2q…ޡ4[X˒+dp!쿿$Vy;yJ3aOtI|v^>08k*yK߹Xy vI4W-AڬL*d0'A?C u@%%dՋaK9ga[=枦FEpʵnpf?.&!7d$!WF?DsV)8UFjsg&` ?*A5y,? 茄 ck,*|<˓a‘z`#Liعyhs#Zθ a /-bxC9n3`: w1S=%ϣ#i3Kf'"ih2{-g͝OƘ_>dst`dڵˑߖ(z+Vȸ@y/%JCP!(\Z`i`MyX]ѭދU-p+/;\EZgs.ĤA~bacbcLG]OOB#JhBm{+om/WH_J|FK=f^Ӥ۹dxX\NXF"k7Pd扱ι#b>s"-fgcWh8mk#gRa(<^qE08/4>Ey<~M+$u3|9|2%=vZ]@ow1v 0[}Lwl/`|ԒF31UAdzCm%D`^X#ZLe#PQbIA2݇5(gNS:.Ndx{ȗ4\$Ž!Φ8rm)7<=rfzۣTp!K1jmXfpG\w"VP{(~;8v9d*(?t,p| m0:^LrY)L_@3/,<NXX.sqO`ג<` PeN' ʘӜe CPfm~'7703Kp6)I4~bܰpfK0}$T#yv2<-jȫmn4i.wX( *=qGGm\swa]Ts+ShOT9Ŧb 3i|:){JeVP*t=J.Oc'MVܖKGrvLL2qq-f`'Oׇ}xiH"I"~f &b[Z.#EUTH/`lA'{Y"MmgǔpWsJx\*wK0z(D(?"X7L'f0u0nVHø˚Ԥ%'`b@ he1݄S%!Jcν'IK dmn$.Q;,Ư|sw5*[IpQ5M$>{0Ϯ\}&^@MĖCxUEMqTo%90?-HR h ;V'wl,%2tw|&O":ǫN-,ԡJKNl_DZ/zKB::uɳƜ׺a9pS|h9fiˊblt)?*ۙFx"ILw: qcA]$!aC&nXber#< Ϟu,ÎmI:={Hґcgs,-e:Fί0: v7($НtfoGvfAycgp2, Zl .1|Aۉ_ɇ@{fa;*60e!mSfgM~W! sb'ޗف,n=$Od'=ˢ5!W'NuHIšOpY^xCZwJ. 
B#s 9#zz,:H*}|8R\F05q|gnpq}c\uvm~фXe9kLC&PJ@re^HX"LG`>df$"Ks |SšKA%D|n|$bCN [qu.PsB>"{_Đ}Pbsɹs8\ ܠ(*FWz13`'&3ѫ!($aω+o$H5az/GrFf%luM|੾@}v"y n0Fn \is3<]3FN]VL]8|)o6ԞKhgxoL&1Y EWɕ;a-xEi9Cs_~ث)$z¡:9v2j%cG6Fd .‹tL=[x{БmWn |蕣n Fz}F nl(B҄= ȏhnD{auiX5vxkÏy=œ8eI4Ͳb{qp5S4cvijD*; I^ HϋP4f~禩؛$*@!ĜcF[}k0mB |z t&jV r19ЩÃO] N{=n~OZ(@dNMƍ8P 4t32 ,BA!̐8WΡ$ -Р;i|ja ^d!Z6Wވ0.Q9{$Yn2s_揫 _7-d׬"o26C N܏ e&'n-8ԓNAVJ905MsWh놗x]z=>;c9!K'dq/O<ڱPljG{ ]踱K!<b}s>OW[g7ҷ#t6o3N|CctaW$ aeR&œRTc,XAUz6y\?rb1tb3v/WAG6P"ͭiQe] #MP=P)W CES!vJ21>96.`z+z*~}EڏfLq]f/s#6b*x+9ԀY0iu@ /"ܞԏMP0=*g8^/u 9b6U F3LۤFݐ8ҦNnl6 DNR'=&$N'hxТQbo]ϖf~C2٨dN'S@P},hүdnrzfP>|gZA:'!'9,JV~ݛ(yMA~X$wE V6zSda0yBQ5t:\Uٸtk1Z3Ц͉_qgMXN »*$39]IzWU$1I?VN&EA6Zp-a 9:[9 ' a<ż+!|P^ Ch`ތ.[;^ Aq[po ⇘ЏHZX/[YJl5PSREP3rt h-5gO?! Z(\Yc]s7TGuշtI2w+M>A?޶$`߸SEnNes\+sD% b&oP Aq ew͔PS sB#!>oϸ܃6g%a{܄;ݾ }:c]jqI ̺3cR8C-x#D\ 6e!~Ĩ%&K6MC)u6^%L~eXj0[(β;j&1=WL~S'6]'hFv~"qkaOCpx:'.țG=`̓&B70-cg _I;&[7CR0eoN\NQw|ȉF:_;*?~ J~B-?4=$RJP)#~B~j3+(TrKemķ(H;cr&l3Bj!1~OA1?dnF•Վ5ox< '4 aoY48v$qE@y%K{x8T&tJ4.9hk<_{D5ɦ6|j,VVNo«Pl?V .u.IlKjgl*UQ87s'8A<2.^=Oc:ޙW0DE:DŽ齴c_ Kn1Mswxu::2{lǴ.|8)A7Bˣi {V7[a1u?]KH#pbT5$hfz7Ꝡڊ@~S Őԇ;k ^U M+ _G+aa[{qKKa$'Yc^얻cl䞐wYᘾrqMjp'| {Oeٿ| "r@EmU@Do_De<ذ>zaQg&#V3b6Z rt5K9xĊ{c4miJ-DE/GqТUO憲fK-9|y}KfR9OOlPb%0Ӱ*:Y"z\6k+*:iEt!X rN(Hi4ؓ'fEYJvrJ%;b\;v؊? ;b5-A8<{̴qٝs̐6'sS~\)I:L;emm`2#Ǽv_υb*8҈_7Pkz[B Aq߈Al*b͔[LP6Q[v=d q=ۢ?ۯH' lbn5r:9dgsaR/Y)uK[Zp yP l|KޫM-":=[~Z(V L_02YL. zX҃Yy͘pY\ឈRC; U~ˇp])Tm Ȯ36aEj3vjx2z.H{ԑba~\4ԑg<c[D2_őYbaIzʿ;KNhi"Q8V=vo)Qv+zG3sg.oC B66C٭LT=aU47; խz&owcThܡ1}'p$_bw+ ηUkC֏Fb ^!QtJ.v-tIw^v|6R: t-soS&J PK3OR+3PKf\T3 sub_5/CC_ForcepsMajor.trkUT ߷bC b߷bux i4W6l($!QҨhP2e+2%I")Qi2EȐ(dJ"R߳?iRH4T5xY ݂_9{n7`Ym{I]ųv6pH>rKйlpNG!4iṡ dzu*ҳ;}AYhO іl0'.R` 28AjCGrʢڲ>6wzsŶM!>ڏ[51B9 CxD¤G7/|8iEWVHX.?s0`^lVKpoiE}I[pO$ccDwyK ja+F!o4u) Q&,\q7jYx{`[-'5x!'Th;?SהpJ64UE$kc|ھ4ըά38,<6*Ђ&@@S>ݗEN.z2ޓ&HjĽ0,Y J ŌMÐ`ZA= F*'[1&6EL5n(r#rO/gΛ{^V`hVXnŶO'и!3bElNAj3iw 4cy7ay ,/k´il|jMc+ׁ޽` qC?O-slJA҆gXҮHƕW-0v@C yO1}7&%?BJQ9Ccr8)ٟkE2Yfd1"E8I{'x7&"~}1UwvSR=ve3@|KU%X+V]s&[հs-Sz,soo,tا`^y* uV!՘/>mhFiCͣҶmȝAh_0{u+fHE`t%NïKh嫏E:M7׏Fǀ] S?t$"{$P/wZxv'-(i!T*N~޳רG=u΄I]j/ja r8kؗ8UGW|qzPNNBP*3 h'uk 3/.`9238ܬp*mẳO@IM<9{d>A+:yyIM=t7MZқq|\'v'wv(Rh:5\۫FeXo]?9s஘uk --6nB@:^:hIǻcx'u(//y^ ʩx>cKEVH4tv !;D;!0fH $Gy1A,>،F7q[lj}(}aZ/U}%m8ҟi.4`*~$~:cY tˉ_D:"2eg\PƩOZ}WŲ!0`*~防*a::A5Hz Adr>TE]7nn%J\Bz7T=izOC"JǍ&$,4MMVR (; 3醷,{<)m/Ğ=hkOo~SLvbHv7mWܸ톬*_=WFz3ΐFp!`}/{ٌ7Y(يG:CΒ \^ZyM4vL4EÑNDpQ'T|o{XCOO tŸ#'e'XHBӖQ ɍ ?=1ڷ̂O}BQBx(s4 )[/^fҴ>l^i1#];"HR\nNuB҆,ey|Ibm9A&6ה(jV`{z6H@Zv'1NfۄB}fpQ~ZYn/.`_jJ_=tÅzFiˉ**YVM=Fbt,_4- ~%֯Ka7ttN1~?q <)Miv@?ƭ Mݷ!I 4ZEo]ܚ ݤ=J$8S DI5j֤%<5PE@AU mPF' CV;nF 25QX ƅzP]SÌCzXH(Ewd/ @> n_#W!溌ZǡtPx:YiTgD؊9lCԧ 4]HY* 2ήζDN,@Q&Nl8x[FU9|RZBEv~{\ y q߬Đ縷t9] 0kRZ!p=/GA1NX1.HO!^fT'C$sfMU[gHL&Na?rG'SHfIzwYg?nV%<7q4{-[;@s<،nas:mkգYßQQYOZLx<0k?Q,nDȂG|_luV&0ۀ܏I][a|l8.ZmeHE)snDž!=IHIon'ĄE;hO!2\Ķs`oY]&y28ssىX~ `U|;އ W֬; t!Nv ڢgr rg4D{z5Ց1u=x:7#iC?MoUNPk8`r8gnhGIOҺ (.h x'to;lE$=^T'f/\ȗE46 :_Dc)+1W L2Iј1Z,ct ?RFnٰWseg>ce K嚻T?o^^#G 6B._foέ~5XU>y'硍A~W(8KG|@E=s~rݝ%\A"Ҩ{BǨ@+TҗXIV9/PcX䃢's`ν22osL({,R*l{C,zpI}T1(acT1i>:HӌU^ݿ**Xz!M]c#Xq/8n< ޏMCA3 ?㯇C `,[w%gíVNb挩NM8eV.ǡf:q@BxF̞CV8Zq9^u`GfwN\f3yҌc, k!;P|.؇2k,ˠ f}6|jeir><^TqCsi64x9 Eumt20h+(e:;1pz[k>._"!S+X= 8lH"f1__IefTu ,mCPao7C,v<,hwPiCKH<{-!?w(d%qr|V˅Vo1|,س*2$rS'f8'CCLZ^H\. nGG" !;ϢQIU+#H(0܁;YZRm,ly>/2I>q];Ia<`$mVIٱF'_.}BRG7yI@I<ó(nG}?,Iq Cc)D/܁ hMQwL"R&S Iw+} ˞#"Rday+T8>GBX^J/1t3x_{F1;$3}x MGքvo*F)jZH0e|p&b?0gilue{'tv,;~?a5=m'\|NB%$omqhX}^]pd3TW>$Fygs'].\o"å0H6=W]:fO/74=rA]6YWg/Py/ցJ/?8&`%B$Ƌ~+J:'Ns ԟr!a̔5_`h^|d 9Ԏ@$kM`'t%yC>t Nv٢^oaߚa wiw@? '?Ü9U|| nGVCao 1Nci QD lD%.c3ۇJH̼v'. 
[binary payloads elided: the files in this span are compressed or binary and
cannot be reproduced as text; only the information recoverable from their
tar/zip headers, plus the two legible text data files, is kept below]

(tail of a zip archive of per-subject tractography bundles: directories
sub_1/ through sub_5/, each containing AF_L.trk, CST_R.trk and
CC_ForcepsMajor.trk)

dipy-1.11.0/dipy/data/files/record_horizon.log.gz
    gzip stream (stored original name: record_horizon.log); binary payload.

dipy-1.11.0/dipy/data/files/[tar header garbled in the dump].npz
    NumPy zip archive preceding repulsion200.npz; members vertices.npy and
    faces.npy; binary payload.

dipy-1.11.0/dipy/data/files/repulsion200.npz
    NumPy zip archive; members vertices.npy and faces.npy; binary payload.

dipy-1.11.0/dipy/data/files/repulsion724.npz
    NumPy zip archive; members vertices.npy and faces.npy; binary payload.

dipy-1.11.0/dipy/data/files/small_101D.bval
    single line of b-values (whitespace-separated text, preserved verbatim):
15 310 310 330 615 635 595 615 640 595 945 900 945 900 1230 1230 1275 1540 1560 1515 1540 1540 1580 1495 1540 1560 1520 1585 1495 1870 1825 1870 1825 1870 1825 1890 1805 1890 1805 1870 1825 2465 2505 2420 2460 2505 2420 2770 2790 2750 2815 2725 2815 2725 2790 2745 2815 2725 2810 2725 2770 2835 3080 3100 3055 3075 3080 3140 3015 3075 3100 3055 3145 3015 3405 3365 3405 3365 3410 3365 3450 3320 3450 3320 3405 3360 3735 3650 3735 3650 4000 4045 3955 4000 4000 4065 3935 4000 4045 3960 4065 3935

dipy-1.11.0/dipy/data/files/small_101D.bvec
    whitespace-separated gradient components, one row per axis; the first
    recovered row is preserved verbatim below, and any further recovered
    rows follow it:
0.51103121042251 -0.00053472840227 0.99867534637451 -0.01570699363946 0.70641601085662 -0.01127811148762 0.01092797890305 -0.7065976858139 0.68343377113342 0.73030966520309 0.5616380572319 0.59289318323135 -0.57953292131424 -0.57472652196884 -0.00025380766601 0.99943608045578 -0.01570699363946 0.44710674881935 -0.00719600683078 0.00685171224176 -0.44697961211204 0.89384758472442 -0.01408623810857 0.01400898769497 -0.89438331127166 0.8807618021965 0.90730500221252 0.427066385746 0.46771842241287 0.39947691559791 0.41698148846626 -0.41200670599937 -0.40400171279907 0.80501043796539 0.82725661993026 0.39083641767501 0.42591732740402 -0.4161410331726 -0.39991423487663 -0.81810581684112 -0.8147137761116 0.70678371191024 -0.01118473988026 0.01102659944444 -0.70699882507324 0.6895825266838 0.72428482770919 0.00020647639758 0.65857726335525 0.67422258853912 0.32043570280075 0.34651586413383 -0.34107080101966 -0.32525697350502 -0.66917836666107 -0.66387826204299 0.65083771944046 0.68214958906173 -0.67179018259048 -0.6612474322319 0.99962842464447 -0.01535430736839 0.31630092859268 -0.00479150284081 0.00514880986884 -0.3162562251091 0.94836509227752 -0.01460747886449 0.01520668528974 -0.94863158464431 0.94004303216934 0.95664525032043 0.29797855019569 0.33462598919868 0.29593941569328 0.3072674870491 -0.30528897047042 -0.29774451255798 0.8966423869133 0.91186267137527 0.28440615534782 0.31877493858337 -0.31257089972496 -0.29010045528411 -0.90635699033737 -0.90257829427719 0.56480538845062 0.58966267108917 -0.5829153060913 -0.57155054807663 0.5545887351036 -0.00859258882701 0.00884065870195 -0.55467271804809 0.83181297779083 -0.01289543323218 0.01324754580855 -0.83195966482162 0.81861901283264 0.84507393836975 0.53703784942627 0.57221281528472 0.50123381614685
cۊ?2b̺?]}0˿h ?!ܨ=u׿N@@=˿T>cP>?fFܿFQܿ=},*-_~ſ!?5/mt!&6l?"My`f?'n$ۿ|=۬iI3PRj۝wܿYXaQ?jPпb.:?/?!Կ3pv?l+dLAtUXڿ6" C`R$F?\4{}yۿ^f9;zdӿl>2)0?R1ғ,1l,?rf72?q-{rֿ'F?x8&߿;TGտ0sN?FXL޿K  & sŌ&V5O?JN5JпTlC-k?V ?Te [形9п=o xU߿Uכyڥ1?L&ĿTg^?iYSt?ypпB N?YuÏImh˿j⇛?j=ῇp&G,dϿBGMr?p0EYM V0_?֌)?LZeS޿g?{?ЧmfȇEݿ(?N9Tؿ|E?F!7@?EjWʠsk+?xӔrQؿUzϠ?[\~c!? pCq[_&FٿZ#_~Rܿ)U7J修[{??{豮a=?2?S'ؿۧL?!.jv0 18տt?0Мbu{@"Ai$jǿ)?EO!Tblg̿050?1!?bBz̿"\3h㿾U׬$Tx]T?!³ҿjy?_7?3!Vſ˥{(qKG?L?6TM-KHs?\!L#~濷Fܿ|?a}~AnNagnӷ ?RG进nޯֿB?J)u?.. "ֿk72EYM{yqx]?xɿɝ%u??OҿmV,ZT(?P1z/ݍtI曄jʿ7h?pB#gxK濬̣kԿGlr?4.x3pȿ)h0?Xp?1Q%ڵg2Ş 1?GұڿPoL?FR?BHbIW%NM¿&I& ?R_?o@Yҍ㿫H x俔\`7jd~ῆ2{? &࿤_06j"鲿>4KB?Ob2ܿ }B?N:b?N0\!b#ܿZueS@Z㿼u\翚1q?ͿD/ȏ?:7?D. sԿx\*?]3c俉65nG濽DzԿe,n?qce8濾2[/5ֿPQ? $șC|ѿs?=UT?L忺ƿwr)0H;'yz?? Jܿ?pI?/a*Ta_? o8m'd?}o?7h鿎dA5:⿿Ea8f'/PcZ? 4㿅nfAƁ,?S\2g?" CG?7[v#%%ӛMi mmf켽?3l#Ȭy鿰I⿏i ¿??TlE⿛ٿCo2o?J)?GkFunٿsT0:M.)E ? J>Ͳֿfny?:?wPQ^1ʿt]?J*xG[il]x֮ʿ.5?~v迡$PJ݅8_^\L޿w?}࿙c8ͿY?e@2b?qoW(ňeL00H8&?$uKFybO[IKz+2 ?_[ހ:&ZA ~?$rn?_B꿯UѽH6ֹ#:2X/8f'} ?aϊ>??M|"v?'RۿWc?nd,CCQҁȎۿ?we9? vfhrP鿙kOx Ὺ62ѿo7?.޿#ֿoG?ϴ?o2^rп1;?_DIͿd?BOݿ+?$Ɩ)!?*t}sٯ"?L'\+΋Ɠ~&?觚@b5re߿TБ>UL?yZ#,칿ON?)N!?WR!7s?1bm[ ea4 w?k!*N{orp>ݿſeg-? ҃ܿe_ ޿@?7?>sA꿹9Hݿ$Nvh ݿ) B[rt+?EY,пƇ]?\Q?TQahԿW+{? ݿ}#>Կj&?*sbm7 ٱ޿#ٿ_{8?!|'~ؿ,a-ߞѿ i?=K˯?N鿏X;̿bݿ/ Z-뿶Ώl1Z?if⿥|e?M ?*):L?vK ߿UqHkU?FTj?ڵiA!ڿn!뿴gz鿼lp꿷S5g?HQk׿0(Z{?M?W~S˿ ?IGٿ0HS꿁-&8iʿ < 1?Sn}Ri1?ۿ T.࿟W?sڿxNwђ*?Ћ?<X쭨=]vܿ& 䣏ƾ7俾w?߃X忍me>Ԇٿނ U"(:!?nڿOYdhT?Zb4?]tnOJI8'bԿ%Y~鿇[ VcIx2?¿+:j\?Nz?VVAۿjG? Cu+ٿ sqOHԿڮ_ٟ??X+>;NT,cֿ`nVMۿ@?ۇ1Ͽmoտ|iwE?Uv=?L+Ue1ҿӿ޾l+<<!?J8:޿Gli?BO?n!£ȿ,7?\ bݿčnܙ翦jƐ.2?jeWU濑5lMֿ%.Kg&#{?Emw9A}?t%ʦ?-AO 6]J?HNؿcm_Ό_?: igR5gQտl( ȿNP(?t0BiԿX-|R]{?_/(?deCޘ??׿ۥM濂b'ZYɓ?5i=c? ĕ?s ?@{V}mtREӿRy6忢*Nx?v٨93,пKK-:?$Iҿ.@d俅ɜ?SEiK?Ǣƃ2$ڿ!CϿ| ;A_$A?OпOy6e?n$q?9A89ۿ]?!֪ʿ5sN,Cտy:cv?0q#0 ;Zz뿼̿^UܿgK?]¿d],wο ?I*[5@?C?YA-qPǿ Є̿&GP6U뿒]3[&?v8)޿b .?A?$$;ſ"p)?:ο8b_'x*&J? ?dGQᅢֱ3!g¿} L ̄yGz?G-tZ.B?@!?㿯ٖի?<19Ϳ߭+$C?ؾ.zfdɿ4x˿*#?ǿADHOd~1a?ϖb?$͈Ղ@=.6ۿ_;һ*94쿾!a vD?f׿W?/J?6ڿ5_?g#ѿhIHw˿o ?0 JsZ}\=U('~ĿQ 2FG9R?[C -ƿElΦJ?/h?YbK=t7!"\1ĿڄᅰZG[;翆$%,6?p0=:ijÂ?H?ѳF9?g:E¿}D/YU?u? &mn d!ȿkV6eG?iIى¿i*H?gާ?B>T.࿪DM?]¿U46!H3ukܿMk?J*쿷Dh m¿// ֿjzX?" C1^j׿Gmxp|?Ufڱ6?V9pZ!Կ(eѱR,L8&$_?Lӿ:7޿+ȽѼp?cƩ9[?ҿNlU?aa| ȿ0&bne?kihTm>Uy͕o快￝ߝm?f_|??2s?@. 濘5?.aD4ᅨa1~*`sh?coOjᅥ.TE6~s? | 8Xe?H?7 P￉ 6n῾VnYr꿝ߦ}h ?Fп1O?{0?~.࿸87?WisWsS߰zQԿB ¼?2ibP_Jݿ0?-dX 鏑MпZ?se?yDiYׄ<0ɿet}X4S?¢dgG?<[⿪M??*/?^5V7ÿdj~?ߚ|;X##lZ̡?N?Kį?PsA6w}qGiPc迕7qYG*g?9^ۿ^? ׾%?a㿞:L?r\1qǍAu8S!&?Ȅ_ k1yο*~?@ ȴ C ߿7V?E $%+ii(Z8LܿMn2?Z*"쿻pqT\r| M ׿ 7|??`ؿ߹?*QްIVʿa=Ъ?$8_A˧-l/#z:ᖿS:Aa޳?1>*2?*7?+3@+8@4A9A-5B-:B.6C.;C/D08E$1F$9F%2G%:G&&;'4I'<I(5J(=J)6K)>K7?L7L8@M8EM9AN9FN:BO;CP/<Q/DQ0=R0ER1>S2?T2GT3@U3HU4AV4IV5BW5JW6CX6KXDYEMZFN[:G\:O\;];P]<I^<Q^=J_=R_>K`>S`?La?Ta@Mb@UbANcAVcBOdBWdCPeCXeDQfDYfERgEZg1Fh1ShF[hGTiG\iHUjHjIVkI^kJWlJ_lKXmK`mLnLanMZoMboN[pNcpO\qPerQ^sQfsR_tRgtS`uTavTivUbwUjwVcxVkxWdyWlyXezXmzYf{Zg|[h}\i~\q~P]]Pr^k^s_l_t`m`uanavbobwcpcxOdOqerezfsf{gtg|ShSuh}ivi~jwjkxklylmzmYY{ZoZ|[p[}q~rssttuvvwwdydzz{|}~~nnooppqqrruu xxyy{{||}}-111->B    S      d    u ! !" "##$$%%&'(()*** ++ ,, -- - .. / "/0y112%23333444'45 56!6)67"78#89$9::&;'<(=>)>?*? @ +@ A ,A B 5B !C .C"/D"7D#E0E$FF%2G&H3H3H4I'4I(J5J)6K*L*?L+M8M,N9N:OP.P;PQ/Q<QR0R=R1S>S>OS2T?T@UHUAVIV5BW5JW!6X!CX6KXGKX7DY&HYDUYHUY#8Z#EZEVZIVZ$9[$F[FW[JW[%\:\%G\GK\7]&;]L];L]&Y]7Y]'<^'I^8M^<M^8Z^IZ^(=_(J_9N_=N_9[_J[_)>`)K`:O`>O`:\`K\`;La?La;Pa?Ta+@b+Mb<Mb<Qb,Ac,Nc=Nc=Rc-d-dOdOSd.Ce.PePaeTae/Df/Qf@UfDUf@bfQbf0Eg0RgAVgEVgAcgRcg1h1ShBWFWBSdh2Gi2TiCXiGXiCeiTeijljkjlokmplnqjjokskpslotlqtmpunqvowpsxqtymrzmuzn nvow|pu}px}qv~qy~r rzsxs{oto|uzvw|x}y~zz{{u}uv~vwwxxy||}}~~"tyty&*7773HHDLH]Yjn                !! ""#$$%%&''(()* + ,  .. 
//0#02233&34'4 5(5!6)6"7*78#8+89$9:%:-:&;<'<=(=>)>>?*? @ +@ AA ,A  B B !C .C "D /D#0E$F1F%G2G&3H'4I'<I(5J(=J)6K*7L,N9N-:OP.P;P/Q<Q0R=R1S1FS2T?T3U@U3HU4VAV 5W BW!6X!CX"7Y"DY7LY#8Z+8Z#EZ+MZ$9[$F[%:\%G\:O\&;]&H]<I^<Q^=J_=R_)>`)K`*?a*La+@b+Mb,Nc-d-Od.Ce.Pe/Df/Qf0Eg0RgFShF[h2Gi2TiHUjH]j4Ik4Vk5Jl5Wl6Km6Xm9Np9[pO\q;PrPerQ^sQfsR_tRgt>u>`u?Tv?av@Uw@bw,AxAVx,cxBWyByCXzCezLY{Ln{EZ|MZ|Mo|[h}[p}G\~Gi~\q~]jI^IkJ_JlK`LaLnMbMobwNcNpOdOqerezfsgtShS TiTvUjUwVkVxWlWyXmXzn{o|p}pq~;];r]^s^_t_`u`u avawwcxddzzDYDfY{fEgE|g|h}hi~i~jjkkllKmKnoqqrrsstt-vvxxyy1{{}}nnooccmmBFhhdhPK-E0D0D vertices.npyPK-E8("(" ZDfaces.npyPKqfdipy-1.11.0/dipy/data/files/small_101D.bval000066400000000000000000000007571476546756600202250ustar00rootroot0000000000000015 310 310 330 615 635 595 615 640 595 945 900 945 900 1230 1230 1275 1540 1560 1515 1540 1540 1580 1495 1540 1560 1520 1585 1495 1870 1825 1870 1825 1870 1825 1890 1805 1890 1805 1870 1825 2465 2505 2420 2460 2505 2420 2770 2790 2750 2815 2725 2815 2725 2790 2745 2815 2725 2810 2725 2770 2835 3080 3100 3055 3075 3080 3140 3015 3075 3100 3055 3145 3015 3405 3365 3405 3365 3410 3365 3450 3320 3450 3320 3405 3360 3735 3650 3735 3650 4000 4045 3955 4000 4000 4065 3935 4000 4045 3960 4065 3935 dipy-1.11.0/dipy/data/files/small_101D.bvec000066400000000000000000000123401476546756600202070ustar00rootroot000000000000000.51103121042251 -0.00053472840227 0.99867534637451 -0.01570699363946 0.70641601085662 -0.01127811148762 0.01092797890305 -0.7065976858139 0.68343377113342 0.73030966520309 0.5616380572319 0.59289318323135 -0.57953292131424 -0.57472652196884 -0.00025380766601 0.99943608045578 -0.01570699363946 0.44710674881935 -0.00719600683078 0.00685171224176 -0.44697961211204 0.89384758472442 -0.01408623810857 0.01400898769497 -0.89438331127166 0.8807618021965 0.90730500221252 0.427066385746 0.46771842241287 0.39947691559791 0.41698148846626 -0.41200670599937 -0.40400171279907 0.80501043796539 0.82725661993026 0.39083641767501 0.42591732740402 -0.4161410331726 -0.39991423487663 -0.81810581684112 -0.8147137761116 0.70678371191024 -0.01118473988026 0.01102659944444 -0.70699882507324 0.6895825266838 0.72428482770919 0.00020647639758 0.65857726335525 0.67422258853912 0.32043570280075 0.34651586413383 -0.34107080101966 -0.32525697350502 -0.66917836666107 -0.66387826204299 0.65083771944046 0.68214958906173 -0.67179018259048 -0.6612474322319 0.99962842464447 -0.01535430736839 0.31630092859268 -0.00479150284081 0.00514880986884 -0.3162562251091 0.94836509227752 -0.01460747886449 0.01520668528974 -0.94863158464431 0.94004303216934 0.95664525032043 0.29797855019569 0.33462598919868 0.29593941569328 0.3072674870491 -0.30528897047042 -0.29774451255798 0.8966423869133 0.91186267137527 0.28440615534782 0.31877493858337 -0.31257089972496 -0.29010045528411 -0.90635699033737 -0.90257829427719 0.56480538845062 0.58966267108917 -0.5829153060913 -0.57155054807663 0.5545887351036 -0.00859258882701 0.00884065870195 -0.55467271804809 0.83181297779083 -0.01289543323218 0.01324754580855 -0.83195966482162 0.81861901283264 0.84507393836975 0.53703784942627 0.57221281528472 0.50123381614685 -0.99942123889923 -0.00006244023097 -0.00174600095488 -0.70692420005798 -0.69603151082992 -0.71827620267868 -0.70749676227569 -0.00125570269301 0.00121260271407 -0.57137382030487 -0.58311182260513 -0.57197737693786 -0.58333188295364 -0.99986988306045 -0.00003121430927 -0.00174600095488 -0.89422339200973 -0.8888925909996 -0.89982837438583 -0.89451342821121 -0.44741609692573 -0.44243630766868 -0.45222103595733 -0.4472998380661 
[remaining small_101D.bvec gradient-component rows and tar padding elided]

dipy-1.11.0/dipy/data/files/small_101D.nii.gz
    gzip-compressed NIfTI volume (stored original name: /tmp/small_101D.nii);
    binary payload, not reproducible as text (compressed stream elided).
{AAR.PWSO?z(o# ̓ϑhWpy-> ~FAb ia.-7̧O^,u-/A*/hGH|:mքѷpZx33?Irpx8/π};x=vS;)²QWywG@gpO Y}ja:~7A~ [[zz^nE6 ga# ne| SC @cHz 3p /пU錌UkJ #pQ8875&7O +ãpl~veDZM BK0߄Gcu4_5 ݂ͷEhű`;z@X,vw{}GUhɓ~*Xn`Wp, 6r}8x2̣ơ`oQl{gsA]NѲ[RApm#ۡp6p?XR)~`Vs 8\:XI Dprz.ҟA K0<g?mOS8\ j/e6XBQ(HsHl/pA jH[s~A3o~{? /QqC3z>pHU-xh,= uoa$.@a0ne>ہ.ɜ~ }3g;bOiEJ@9Rs~C?߆_R37 wNa_Q02`^ .S/u"xο`0= ?7bLgQ8i[(} 6ѿdw.R~5n v; 3:@x a뻇ocׅ-8l;9 ~ BX3G.tz6QeY4E=D[[0[KFW!k` ԛ.=0 yj/a_8oQw | CH ^o-GP*=K`?Ó2^QZB {qz=̓8mϞS8ܝL4NQڏ}pw`އF˵|nWqt>|}Ed0 0n0"X7"C@_ UqI(:>scԯk~Ge<$whU} (]|_Fv>KkQ/Moh{~+^%+|/u}Hi7GW/?Wq0OÏMԥ 6-z}M{WcSs*<ua+Os72OC5+ *#˴!~tc,}<4r-O=uW~;.ADH=`]ރ~ߎh#|~>fG|o嗡3841? kw_#da?y#>TS@rw>;@?F1(qZj7w_Q8z7H`I۠7F#gKF^S<= es}VsGpdMaN`-)4?^oQ2yIix8E3<; wģ99N{9נ1p9 {H fG9'=G# 쇸?Iw)7(= uKPt >Gy",Ij ܝ*c(%"*N3O41*s|ГzOA{pV;FJ8Z3uk)9@~KmEr{#F#`|+mIP&j 6Ǩc|xSSԏ}Mxߝm¿n=<)| w˵/3T[#}X{@k 5h/+؇Kw|N0jpkxY=lإ B!$vzH2G0c]:Xz x;G6Iy~G+` $‚ΓQ߱:YخMNМq.Gm iGm̨v4F5&zW+|P=s7Qz3>NmϮiAE<8_eOm;wY,qո`:Wigj{V:XsYT5'@Cf kx`[|ׂMό5BviY =PqU|Ά`%sѷ |)`~Kh"Qe懣MH O\ - \s}p΢l&#sz*ߏD>Sq .-nTruhNÍ`1l<7#˕p5 R*|zaVq7_AAPN_\e<e@Zfh] k: e@, } NPܾ9F׀oFh?elz;zs+ʹۊ_C#yqYI\xgӪ? j=_P9l1+S`-|ľH{?s/Dgsgx뀾j&;O1zr \zȈUPHڷpпثjZmZjnZ亅1:[=[o~Ok>"xy YEl5`kІi/=9s@4涕{B CP+zmpm Ga ` rZÓu௅Y|],@S-+OT33Pϵ)(\1t0f-d4ړs{M_BbjWsw_rQy$.GVoϭ(T׀!y@Jgt9rC30G m4uk,(~ѳ zkx_B@PՏ2Vn* d>\ÈJ~._@>h{S.\?) ig긶?-~^AnCz!ӭx ڊm:{=)y ls)%YA ';m/]zW#ep%ߦKаޠ/Ep]ol0l/P:tw*|j1} YCvC_Y#.hOjlhSV>T5)La1,Dj/`"zб9'L'lDΒz+m2Gsr ?]\8Fb,)A*\ zG|}}T=s(KM}<=8zO u'R$3 Vt;#H?'(bx<ĕ~ ƹУ}\Jk]F6K QH,ؘ`8rXh\LfZe\/yhzuU<Z_[fYM oM>2h}vbyC CVg<r0BZKFUq/a=%uO-52=X꛻XR$z~%.C>u`} Rz [eEbPaL@l84_7 hI<0h[ Z^*GKyu$) n.00~CU:8*ߦH>c Uӿ3ZR\Ǝ':t壻0j_ jDKMkѱ74FjR6Q'W\&EP!^S0O$ό=O/K-< wWF"g)^t+:(k5d:7ɼ8E 4_zw2CF1DX]>MT>N@2$s=<+Hw:~p |!5!+o!O陡h$#e@d҃3磢QDԞ{C6$>h~}\C,B1,[1io>L @A|Vy<2wLYMF/{~x9FS]g*߅`EGWɧ r 2hN쀇>3gc !lEgv YQ;rVx WR: J&5k˽+I@E?()Zm n*uq_Lp5#ڭjtcr TmTꌣ r*eĸ CpO\ϥ#-0֖RO(=sr,2+Ce; _g ϡƜ #E]nrdYP'Wi= js>Z+s֮C-c;z wz)0pum9zjLa%5j:~F*^'+;)Dj3[ i|GC1Z=zݻoSe:A稻6UjyƓ; =}<#e/ϤA9Bf"}0 eŦb\y ;k׽lc%寡|;ˑV~r{s# \oF'|~x/ru+WQEfp 9z.'Phk}YA]􍾰#Ͻ3і^¨uU!@COp~:d2둘4ƒT#Rz'~3Z0-|`|lVD8 w>SuV~7-Ή]hיH5b`.bpG2۵X681k .*ƍ\'63D=c>4& K3V#Y<)u&ڋYklq8tƁ(+rvl% ʹB-ƚ&x/J`23=vr{N\FFi*ֺ/UjD8cCzjaL]BpsvF^hG0WF6ȃk4woʈvCړd spD;ɞ!%NHnP6Qc_f,a>м\+P[^%$M3f>+#2R2\hp)]p*T:..#Ϣ0FZV(AdޖE|1tSzg>[r`h]K8\8C7f͙z=Hp%1Q/qpksgsdc|lF̵zl5V@֣)y;lE˷‹MZ 1i˝mrDz7RsF<JJs)9/t?7xѐYP+w-VStZ_ Z9cIFΧRg5*@^zFo{U,nstRܩZY41e%ϣ7#18'OF@{J|5#R5@3d30`pgR&'GZ裞GcۀxuZ{@z Fkno>1elەJཉp?f돹 7l+|K2#\nKk\-6բ-/1'Fg|9},32~6Co-#xK7z/k;<¨yL7fˋa&ų0uk?|`OЌHc=u׽.בr4= m0# /qV*L#=(DwT{/-2Gۏte=PP65@Mi͈˒< 2^\Ñ7:zZO]_U"[Sh)c-}>ЩPˍ)lg }\88r [MAwAd$Ow GPomN&ZU"e\OAې@lluփϝ2$\#ڍI:X!q0ϫD3+nDlx6v|K޵S}u`Zg52cMq5Z:x%x屐<ȷEpЧZ@м8 Zz{kjf'^zQݘU<裭TNu󗫘)Ý+Jd>`!rF.2~xwQk:O mre7n.&zP 5TA@~|*y~ {^A@Q#v#`ێ̽\ͥYػS-;\GQ(un pQocKncQOk!8?}hV>KlYLZF? M wU.vGΑ{ǡθoHӱ 04D}ХtyHt/f3ۂ&fePD:- 6ǢirhIkv$0WトR&) >F'ݍgp2^̓GH ]˙H Ws1A ڌl9Fg%T('zUx\筧*-<ժ~3F3_Ia&߅O '|\)5?*6h`tVsCj1Jvr >|sY1C0j<:K h@=Hamwn>4OHÌ*Wˠ0s'HLE/=^YY8΃ГMnQR ^mxhpQx(ZrMtlq&QWt 'zS e`hgb\USù6[Q3:lr>9ژ̍>a]+lAX u|r]܂Tjx8ȓu@Ɠ=F9廠l }g`[Nts;VmyБ0nq:h4+5ׂ2là#겡uow$eOZGDN+#]#&'F:m8B9PYr(<֥HQMc,|ZOˇ\U)vJv v~,22jqQ`eA4"nQk, d5ʠf,GfR zp!4+7j59xobjx(f.'"\Y2 #3)qQR>>Woјw8٬bWFFtF֎y3iIHwV*~:ڊ(se\cȁqjcn`mwn U}L 3pFF 6Q f6*ތbm own~vy)Cx)I1 y2ZF p FSۛǨJYp'5b\L$FF?k,ܯDNߝG-iE3T/YT >^)APY Zek,}ȵ;+epy?K3d 5+G#@jS^ h+QPAQI+w "jԄC,4|732FncYl_M1hv*BGp7^D< Ef ^9z]^k hUFapo8eu]UXfR=1h~GXFrl|d6h6GN^t̅\el T跍sejmR11lv~P!3wzPwi4{_$]Vf"ĕHƭv2x;:xv5˪јDuky6>Lgtʠ˜ޱT1r6X fLl zϪY7x_^HK 8>Cj_E4!q؝H]("vY*zQdCi3 C>wPgFSۏ9w ~$R?} zzR%ЋMtQ[]ۅ6ϑ0N̓t+t( )z19o5А.Nbw42.Gp_ &@Bʂ`+vj&Izݚ;{1 ZLGӪâMBBCg! 
ȯ35{#2k6cuYNkЏZ9`ZLTx̋e}h1d?kIG\7+O9@Pg6̟vO)*aO"p;{3FCQ"zvf@9j4:VzSLQA-(ɧGqo,1 9j]30sW4jI4,`!vbiPZb#0o^7CGn4VT"mW W}ԓ?#q? Kk^{w«1m\~bmG-_RwΝ@wl0Zm~uN W W$(Xq݁4lƉR>(F觤YLny<)Jʀ+s1GC<\hqRЖ ơN`kx IȭًN,\WiЕctxKoiR2gSDI/ʵn.Y1}rzE,$jl1O,Qw",|$q]eMТq\Cc4q[iO>3H>0X߹3'R+ٙF^zy/ ïg;u=2IErS0~1o\5'k^tOp4KW?xӐf5\WA.=x]i1^ (j|'&DlDv'݋>SB`Q#GP+x'FzBrw s36=L{ٕ~U˱].pEԛ#a?F1x= bVYa3R4@Q%m,m2krm//A;C}C w'37 5 /w|uQw/Qk(&G#dhKm2̋[c#G͡NuJjfl»>3g!% 92F]_f>ʤ/q=qQ6rZ;S>>'VPnl{;hYJ-sQs^(p(S᪲zy|j=;gIPCl7=\-~?ikSf zggT"\(ezܩ嚺:pu񘈺jzJWzʵtet5aG)1OMhzj%HBR4_y&TLQv.WV!]Yks+4{Vy&C)ȲU$ug $њєAO>fq255K_*XW(5p T܇p7L.@\Bs+" Ot7WdGjW|$~"B2>:u3OGJ=<Լd|)*?#ߎʢu4毈1V@2B7lgz;̮SyND; #!O s${`Hjjf)~-A?:pIr,7ʀ=L6H?2t?5hS!'3y>ъ^4nDg(AP]ܻτ<.Z7cb$nQ =~ҍH OMVD'If<I7Zջ:2^-@ ~+h6j|Ҳg<]{$ݯ7ߍ{@OccШ%u6qgL7`QӉJdЏ0>c f.s`j,QxLCHϝ)tƇِ~\a<8IG> fb(bS:BZ:cJ"yqz"6<넞t{ 0$ ҜYHvu.UΞ=gCBϧn>Г$GRI5jC\YjFt5?C p8hW3w-ƻp͞ : I_q=!',d ji{";-8z"(~w%s].<>jo6܇-b^G{d+=\\ G([FCչf濘)%`hŤ=@*{BnuGKU .gww@* :f y.n57&SkʬLk%5sdƂ3ѲqHrgInyzM}(wd7eɝyZ|+FiS dPR>I]UmS)\7|wnNVi$ c;uOpmu0(m=Kg&#qm\+MxbLq!P{+sR]8$A͞PEӇsiI7([tQWڪ푊hL[ É~CW2G]v@CͧpM HmтiIG[ڦEC~Shs/-3Ɨ .ǰ4QP-)@iZ#!-p<1ׁ^46Gb!es&B6Io&p@Mb.܃t:B"; AFT&5>W1q"Zևbrф46?]tEutd=0O}WouRyOܑ͆ܵ$ API?T'ԈKԤ Bz;&wڍL&2P?d0"HlȰ _clrhp \K!ЌNJ$ZqEi{S~OD֯ p[]>])w-JmKA@1 AƿziUa* <Gj y]Аz5іdڵBmkLqVCݖvBrw@2T"!W wq)N!x#ѣ`p:'5 jFOoC6]3ho .\?.`U[1&BG=0:3y{XwP:&0!/v"=\wF}wPܣ[n }P՝ͨ%3>ħ- GIOFHO\$9}v:LK@] y֒ m0Q"Xcz F, 3GF=hZ Pm~llOxE}|lAngy->ZeFy(:z@m.<}*z :JͲó&:IikIi@|ipYF-WJ YjQ%=hsb\ýDx_ErVsk AGZڹ1rZS,h2&`C3s\m4"1g!)DRRDڨ|(E#l"ÇWH6M Г=iT]t=cHG=?:eԺz-6_˪,O=35`cNj"sخs)wEݍBB\ ]gw.:jFˤH\ Jxų\GAfi};Ƃ L+xQ9\>OLϞXIke3D猍od֣g ˮo/ 0R fG?>D!2 jzq+s3֮?Ӹn}tt8#WRvXX9^83Kka$ż,w|;Orԋvtz.=;n;Gi *E&߳ NDδ޺QUcEA #H|zou=bnsqd$:RK'~VkAhF48Ӛ{'Dூ9!bZ߸}!R T{ؽw (jGp҂_^ҡ;spv7.rrI_PotR5L- fVlϨFNݗ "áӜϵdZm}\̽|ngf7=h{ѮH\-Խ4W@x+!V ѓ<ThшEqix0u_g&f>S]Oz #њ4Zfa + ~ =J)3{ZP5RQց+cL݃K-3;#m31V FD_+>hosڙך_Qpn^Hj{AUc(V"2AU_j>G\eK]u@Əc>qQ9kƍ^1|0;m~q18jH9/:2:ijy̻G[konYfuv?=D%;JUtho:cU u6܍l,L01JC_1.ԨR>7Fz!M}l}=572mb e5{ӜL#"jԲ6QM#jFKn}]KWe>{/Ƨ\g޼kO-u^4.^B{h;91/'ZGo0 0}o>Yy֙-lw+ȓ0-m"e 5:3YF#ۂ3X\J:#'cԅPމoUߕt E \xn xi黦&p6:Ɵ'\I6WaZ>\<0ʨ'1 7V¿qvɸ;&Sw-`6q95Brwí_{j2rP>;Ǫ.\87F׷={V '50*[C{>} ͼKm,_8#qzϸ1-Oq%WfezG{x4w1{lWG9 O֙މz=~>agwcO7w3?GER*fP~"O@D =3Iglfx}5OdpFM%E6$3wxQjhд@2I/l=GM{xwA]'sHuG؁y⛬@W2sHgUyn+1;G|Wu˫ZC8!d<8Y?jH"]@g^ok,c(pϱxT^/Мpϲ-f3i!#ܥFmnc=UJ:S(1"cu4uBEDhD~k,Mqe;rqAm@uhv,?,+LF5·9Bf_)<q=OΓ57GD)F?7Z5|Qr_9os|=wړ`1qf9?ø35;5n ҕą<6eо3JwmK΂j579z hzثZTc/)M3Fc^{dz֏36{뱹<=>@tZ1v3Ajf-kTϑ%݊f3a %smd8V2zϼLVr4{ab3t~zadהΐfhSf$tќUG#sZ"m=moVܙ|Iyuݑ+=t6!̤tvNo~]5Nמ4gNj+L팦_dtDLFÅ;f;t4&#s3qW:3yFsS2ͳ'W$P3;\yVuM 1T31}sSe[Vflyn!{ۖfi:psk Y97Wp^NJig]Y#q"6١PGp߬`T\HbvP`9e3֫nb:O{nȉ\s^F.?QG\F]guv=b63P醧d+J\G&s'Zid]̈\{"$߮+s7z9a&֖v\4 1>H̜zS7fu7x^ƈA)¹>L c]I*]9Ivએgdv{]K٘@{Fk[1=uGbR=bWpfY?w! 
$@(h1Q֌D}Nm<kwu{F!R &FBcn]@zbıph$Rsw:U m`>!6}3uyFa+Tt$CseOݟ7v쫶6.=yVf)Kw}zǬ?3<{Cg0JCӵSKEUs@s3ʸv3J K횠Y>|H]9yrEy:؊G2 Aʚa]%+.+Of ,C#L!POwā7C_[:WYDZgf蛻 Gt ν.h UJF̡6b|i-=[l?;M%G|NຟnH;,WZ&`ʝXj|ZBqGFt~$@1ki,k7ùekz[!}h/~m@~7`67ZJ >H=# qpFiŎyN)ؚ:j\s!NcNGחo3z4!}tN~oAu%9O'7^Yv`-T֞7}W@-whhj@{לz )OzJ=E s ׽v4~ׯ݁D TO뙣DziB}0B3/sZ> 0P!~wBe}{pWYO-ͥtR x=_ q6*Ԇ:׌JqvX2q|ݚ:w}͗Z5-;RKiV"6ғ_EUͶ7'Bx=3;O U[$>R7.)3NP4vFq6߽#ZJCMmVGyv>̮#gg:3<2#Bf+|'S0USJS+u Hm̼5!88ѓY"PәR#U(ܙ+=?efCfu`N&voy՝OL[$n]Pcf8]G4i/$NWݹ܎ptؑRL1̶;%_U uWIR O={nQjc>מ1÷nH9 vgAm6HI*5JV$gJ="-IhC<<=}UV\&Zє,G3JgDs@SQZs6(<Ҭم,ZI6Z}˥N4=eY†Ry׃|wGs 39ntQke3ZEcB?wMhiQf|}.B=)GO{J%6OM]*jKwdT,<7RO{s=ht@f- ]{ۈ9?ɉP!WIor\xRfwuD"@; 0k=Ai{h4N0HQgF}$pՃcbрF =5ך1֞:MP{(j*.=n!1@r=@9)_ciUig\,Ͳ߲\=VXs_zM Ƌ;JhxV'@O=x68{AQ~%Ϭ ͡U\?J͕ua_!O;L*Xky794gFϵW*YAzJ1ʿedhJYf{>w_f*7MU7\j͌zZE\w75qGӳfכK3;"+q6 6ьW]l e/.\uzMj?9T|cX]E6cTL#ߡoҏ~ZbwNRPD'zuKIO9p)xsܻ`sd5\{gb|gM"L@:f:Ss"x:i0P|{!MzF^"=3NݱR_`K<y7Y mQOsCUfexo/ʦGK#;PƤNѫJP7]VKrmZNDlt̯ >N=L`L}CF<+{\dTw8!UIJY0*?7!Otno>axl}kl_TZ׾ߧBY+FY |VmeAaBٹ8w;م7$^%½("{eMuYe$$y-ZA0x!eX pϬ9%YT{+'[ޅ}Oh c(;x8!yҷ3[&F\UD ~+;n]Q2Q'{Q> >mAT.F*A 7aZ7[?tȝo Q}}Ac!ע Xjp:lؤva!tȄk;]^$_Z gZ~CkXDDNX+؁x̂_}/auٝ%2hka\Nzխ Us_G84>V6b=Է/wۙU.WZ$A;T>o+:A\׍Γw|ɮ 5/ۈJpJnӝygVpD)L?4VVTo*ɉ$DzZd`J81|ѨD.W>Fzs;B6>ikh/Tŵٗ%`>}Zht {]ߑp'~sOa-ELUV(  ]gbW `c@&8i0F&.\5 2Ή2cƌrQsQr-q2r@U?zL9˭ZKfW΅?tȻuu")&Lrb0N/O%[E^G^&f_>z~#@: Òeͧޢ]k:W5 6lgrE" :+xA2! Z qdS/B"2ډi_H#LHWY(LJ72z6l &vqp4C&I:)9SX2,.J4D9ʑpW:'Ӱx,9ta%<XuwU\Ԫ oZ|V{ߩ0rO)<͕l4>b}iҋH__V`\j$Ŭ:o׫kYPc~&;/7Y|+󛾵'N鳸`bjz|(O%y7' O۷=TebaҮn&OwS#ۭF>=?oWFTL3tyhja(;MQ6jkR\_j͉T!%^<<7:adôpk|9ہu>|~VwZ.c_*[{(ՅqV]7&_"G&;8ÃFl'-E$ ~VwZ L5ܮ?9Gtg %dBu8iU>W\*@]VufEW _> &w2y@9oV-D5bԞc:_~$]o1uhgr>|}Z O F ȋeXL{_"kn_7ަHw;=^\%/.%dW8 s~#9h0+>D<d#Eu=~=ZH`ڹN+AWb.t-XT0Sl rmmV%ky76/}C:Gx2+JU>^Y.S],ʓ'}|9qn .?ɽJ2d>y:qN4)B/N6fgKLkƈ}{hdgd;0MV+.5RB9+YP' ie{NNUZ"vxb:K@|ZTX޶|WU>NCFu 7O^[0h~1)H8.O' sS==ac1]75 96Rb]=.=7S[랗lʿ eĿ"E4q\-V'Zw{d: 'M?vW^}^t.ҘΚp >8f90fFGDtҲe*dR:z42/kZehwޮve)EnN-Rbĕٷ{r1P&k`x'k+MnxG]>snl|EjFZ23QR%]7V\g{Q{7I}> 0x[Hٵ[l <7eSτa_+Cvw.'e/Dڜlj7Н2bO+$ !T 0滻J\o!jzbv:D0gdI [ݜ6^ooʳkhftx=V4ɉi6qŝ\d.,H( &r#wGDˮF?Oj1\S<.OU{eH|prA=xڐ^ܭ|c'ɫ}O\;j1vƝ$|+zϾ1i|J#Zq}}r;vN#w&x\/X:jٟ*S.=렺"0ךBN>j~.\@3Mz[kt#Tn3ΓF'SϙUM0ܧ[[ a%[懦17p2x鴲8fq3o=d JԸ#gdI'=6b4V "F ?Gx]Ί0fȠ c9ug{Xf̽`u !oNpX|=z.v؁z4ÒG΋l] Fp{h咱v;:"?]Y+ wS92E`=F|-m"+/&ԖBˆ'A4Ǎh \gi=7 5Okv4 gA?<"i^,'y>ixWMx*u_#h.cEӲ:{wœ|S<}:\>bu1i(?ѷ=-3ol0? >Oܯ 4]"n$ː`ѯF# ?Q}Svgɍ˸"=W8pغX!l&ګFn)czGQwCH1w/qdQ;7߶z0a:Q+Mglf\J9nD31=w\ 7!"9gJ'Hܻ \H~n3'Ǒdj] z5>v2'-9[n's1o2cʻ ꔔajgyշ^^`V6xap\LeDN|P]*e[u|Q6#4֦Z;O䯇QS> b0'IWa\; |H'JccH D'dR е-7/^Y;sk{J"b#JR^;<o  Y~ ~{ZVJ;w8ܪƶl?,Ч$ Vѥ)l6ND?xkj4=Jm|]dDq7c$3%9  tjbM@7wy`dV8ˌEJCnGUG\Jμ=ҷNq-d+5+}ƪ<|Ay4h2ɿ^lLPYIq"6;MFؿgf߭\(C"o{6ąz6*Ĝl؟s.}ooHds7+kbx$i\=]d ] ~D#2(}L5j1e?;} t7 0v$EqYGcoNߕ5lx1jU=8Gk I.DmB.uI02{BhVy8^ v7+h3q5;{qq =h7;ֿ\k\I;WKwB ;aWA7LV0]4pp:`n;#jm?%CW(=}OzZ<:fhzKBlYjL"B{piG}*"dC {jR^q+}nҘGN`x8%f5 NmE3V_4Cݯ/Gj=f LƧwSj!q%$|{oǪǓO[}(frjw"4ҍV)dO$mo@:Hj 3cx[Eg|!G5i/*a_gw `?g{fvL\kR?lLHs&ȳ^J#GIuY} qrδvkt"arz2idY8m >`eX*oW;UK,XE}+V#Zk u?{N'>iWENt=&EgSD| ƻL;j>6"DY[xv8"I4cE>A7 ?S>v%z{o"k!I=VWUQ1{r:[mTCn-DdwJ{ÙӇ.C[F2*>ޙ?/҄9i;T{xgF͸]إ5HӚ؝!{z{#eVXϮndpɛޭפ=jel3qk;ÖjNw͊Ws$;^Η%KXϻhK-L}?8O9 xKwnih{uMg\u|a|,;T->8 w 4kb]y'5KvcI ^W+өׅjSչ'f+Ff^^R3b /V^Rm/X驌]CBĘSlvJ^QXϚyCe|{=]t\j+dSuǶ! 
iHC82p}ld߬%KG=MVU.̶Jy=jU;%VSۚaayxO6˶VΏ|eh{~m[U{jceM}x;ͶSPOC?=/9y#]^fz?pmښYZS_7U㖾]VGc82˵ wV OUAIDc6u~mh։^:C/ڎu;Їc1x { }/iu'*ʵwXr{#nIuNd/nzճ< >Qfzs i$,Hej-:$)W'&h~<82SH~1|#Lg<fN9i2Wa0}ϡ6,(ƌULI3q}OL- =X8z[G{Nri64\7N#5~6|V`s;Wffk\`9 4QKw֊aޕ33~ /'[LVn1ĴҠ&d3HkT-%[OAΓŴQ ~vkO0klY ϵ{n6Vr>lm@FO|7S}ԫmMm/:?S"@\۽F^o7UL'=ŸGNqe fZ2o؊zce_9[6_duY]]x2Div ~g$M,J;N,="鑷r_iؐv\{ecobw BQ'toOн4e*Td]rɿIC4EVxwjB xba^ 89݁ԻgD't 5 nfxO_Yri,=#lu$,tK-,i=k/-@V :|^ ô7nCjwDv9oą_ler[~3 4kHr3쒩C{Ē~VodF7/,)tOVb$XONPc_L$}82-~gda fmL6o',"6n ORއ)I+n^ږأ{}  =37p#t䣁oN3RoۆNqΆ %ST>*sN+%(0E PV0>N?ԉ,yd۝6v7D, B ÖIL% ExسX9ݏB]NPl.4ľ7ل8\$A㥑y9I.6,mPκj5א/>YZ7ih")wz}XϦe@:+-@XF$/%DħžoF'Uo{k5a*3"Tύp'3٪PHi~VuOZ]J󽱿>I(ި^^[ɉ5SKiXo+z=__@zc{'ZOJ{)o0;q߃eFw]b(uNߍӕܳhD IeA]357S\zW _+[d,<Ӳ{h=8ڽ?g |oo@!Zq82G{Q9s1s1-ggYH%rQrX0Ol'}U;fIMkx.{ $=&*v̉5jK_-H6&hPqr}n?6dO+q<9s󢻴D؅wȤQ%@/o!NUYCyKത"HY"1pYpЧAj Ջ C`Ye7%{7}dyF]q+ ?yMҙ!9C+: x8LcI|!Hl]?Dt) :;g+ 4<8/$݋YtvִٜG}OH=}!&J-ǞgcsOYw%7ğ=CvJ{~^j |F}KWur/c.>HLEcrod}c;SK($+ܬ Xܣ>k&I&si8&2 sЕ>> L}wS#^]%KmHn$N%_=; Ǥ{@4HuAe7P%N+ߙr^jecz#q3H0V55V-6Gv;WJqmNO͎͎h̠|c,Dy,t&;XXd1v.9B-ԌK`ّK眬%ACz( mog+z9Vb%aԆCE3[tT,"zAxw߲p=X쭓:=(ܹbB{F\_N7bfg,]q#HW~m樋ImguOGv;O o}}/tP;H:2F\QřkGPf__`s.[~'h|V7r{,`O| j׼kݹuJ~0"lԞC=[$̕cGc*qsonéosNYTvDź&'yf庞10z5m{o7۝=˯j%WRՈDVj˚uJζ4O8XçC[v&3Tm=|!ٮw֠Ck{ˌk#[VF "uTx*'z{|ހ$љڞ\F5Y};xf_~te04min[">_?{Wdߣyyd#հd Y1c`=}FZd+֌v3k w +_VK7g$baD5?+IId8Ozς7d́>s2g1|{^1҉WLKޛe_tw~>V/H)4?fy|Z6D ҹ)bimX:\<oEi|K5=Yp58t~Hf%ɴX@vsS'0 xuU'=BW%P{Z ^W<}vtj+{ۓ~z eR "v%VsH[+~Av _Wϰ-ainfK󎓛rZuKZ.rڄR/K_i0ӌ1lO_ޓ+tF뷵y[[ ~O!P*W 28+YlB+jQ'|IZ4?o%s&XFeE!.ONlĎQd+x9kxvi[ZCKPխ;6!_>0d)?V̒/i4yY5 YS]IԵ߫+ݳfuEuYXl|KY؏tŕ"g[d/yEU:xLi ˪A oi<65xs% ZϓE볶n'tbY53 5d0FKtwV_'8D'ႾU tQHP'p 0v$<4tz語⭃%jE=}IJK`E~d}\Ⱥ~%%Y]-gr$8ۙ!:0*y0SF\'sT x2@)0c Ɖʠ[{rU- 3svG}7::1.bd>߸awmjObefo $ V0 éWOxq1vC>j,Dwm=Mce5<,>x?\swkjd0s p:;Y4^+lUm2s IY8beB!5l#ihk{ΆzS+l:{edȽ@=.i]uDε_WIq.+XnH㍏e2{0ʰ^\afk6edUp|P|rے πU,agWb%Xx2]X~"|{gV7n}?A,!QŽCxrirydzĆqyj.tU靪4[LWZ4 >; +gz7x73 _{kﴒcCZD!/ NT*ueqJa :F^(xAXV'+ߑ=7>kՄ@Cgi:$V9-kE%ACX${ Gk_ъ"lb6^?yj.u7& mX_)|`/a|C#dZr؟ O]D >xMv{; & G( Y/y@1##O&jXL5Z9nm=y7{ɐIU?fފ 'H&|lu[Ǣ.4Zv=b}:!.&te':_v ֫k:@z,2,[o."?#`w.n>KP S1[FP13baYT..hu>NT.o&DF| \f!?WtdEx[%utCics'zE]浈OƁ1zALI:XH]Bv(͍9#5PdP'Ahfbgy;_tzoQ!T?17Ehs99Q vlO_bK)-.;V4w1Ac{?ihE[oS*V;'kVdGl|k]3~8}Y"`VJnkЉ%Xtak5 Q?ml` \HiqpZ%'k3dU|:}45aZrlٓ,LГ m }Y(")*TA-݅m"WZOd3]$B bvRS<8l ZFÃSaɯj4غ$|+(.غz*"GNtK&38NZ1]!lj|-q 'K~trW#1B!v\|M{%>5Y1iP '8?djD߁BHA/IJSC74&(th-N}B~?20Wl=mߛ:fkfŪ\k{Y!NZq6dqwaC7Yv~W-M3'$0LdxyxƻT}&^gU?ٙޡZh` "}{VLg,5ίUb,+[i&>BǽXj Z4VV"&&QX" 5#9>`d;H=c(֘^ʱG41uk,ڎ{QKYv}r;DN{0X곛." SJ.jSU7F m[,IGڞќMAz%fedYJ+{0wVXEpR?X}'*%)7*Ohvh73@Vg(^97:3́J+KQ+Hd KJY@gMAq1t$0KתKE?)>ᩐ5(aBWy(=yod,2bjm$, #Me>80ͨ<ڝFV.0j⻼Է$H:fX$ KrD9$ [wORhQv)3==ijXm/'VN?reea3SXvcc-aّp쐓ɪvky4ū e~gQ 2SI1|s__l%@]޼2(N4LT|yRem?qԡ[*,;k|ހB]=KՉ%\R?iMc`(a[jl-8FؒZɯ woO=XGGYBd9!4=Ylm,~>T̑cK[i++"vWW;T7B<\_̲3H&5Q,({^T~2D5{T\N+Z`wS~v?Lzuϰ>t.aN|7g7bZcS$ F0SzxCViBkcD0R\ ']'o8T*:mb6u$9)ɮ_ֵ )ze\[Hy[$7|mYkX.ΰ:FDu<ٰt'm7Ov@SS65]/+;xL~ &ohXq-XkȳzigʖiMSOq'[C g*+.]:=}ۻVqMK^]Vv&%ʪ>Ru0l W*)иRc$xu!rӐhݕf?nTOΈځ>Z]Yӱ*AkDױ:[%7l4ë'[~S;V@Z E>|Gk4҉`1}1~y!KZJ.*gD vic7NH!'g˚q:L=z.1FwvԈSߋ+aSx+TYpʭj w'{cUbdH@xXՐb|{>zւ^^c8h.gml#IEWoQw቙'eQ6͢ZU54'O+=Vz+S. 
d෋Zܨ[7JRJnB,G:2p5b=>2Rm>x僱%Ijh<2L~ EQbE% 3hVBC`{S'y2822qk!&tIvNt*G7t+3bqr9k+EOӅ{# &HLN!DhѪw'c4Q;Qv3"YjiNpV?wi]1xݤ,}MIKvs`s@3 T^mqm-ĝTW+;cdI70ۤ[pZٱNeҟs2-lmi XZYfR4:vŝYs>oW*x{ٵKT5k(kF$q^E?'/v%nɋf牦DOVQ_I>H{푯IK ]iZ'#l<=d^]{ּ,P>7~ƨ_l=l4Qo:  Z ӒKIZh‹M%FB~;)B#veZ璞%/ ǖ4` l$Z8 CgH iv s!?eJF,=Bρ9iaAt!n2ɪsx BDڅN@Rɒ ~Th1󓶢xDF`Mc{{&cv6 \wX;Cg7 O}g/T>4ɬo1,eZWu *ث`^wbo~×`zy TA]rGϾd3-αVkrb፬shQTT@߼KO~~\߆LV2fewh~\qxH2iU~#aWo$]">Ζbo{a|2MS-|;fȯ|ޢv91ޡ~Ae3ih0uܘ -n'VCx33mN:=r/٨x _w]-?O!.?<`q⯤z#\TyWfx;8­oLJ$"VVΣj3X.<הCi4[i8 |g9ەOtՅ!fJ# Y{LJ!.<^x8hvks6 {Fle!%: a:S+d/;]_Wst1JW\Ok|c^bb~^h5Z2kv1LSx)cs:Hc 'eBqMGv-&`ATߑ> !3}gŷ% NaGӏ="y{X64Q%-$O)#\S,vg}9zNXm"Ck1}jg*ߩa3 |>2 щK:E.)1.ہ&& `@c|6,|8:Ylf>q*\igNIg$vO5HX|NmYΙWSN6)C֩Qcu[Sh h"(BcAyow8El}Ylp(`'.`|r.ⰷ0VXYCC,+zmį$h p+>gzޫƣF-8ٵ;I^!1 t:Lr#8.|ySeȱ9bƞ~a8)fk86EYo1{'~[)&DɭɓLJaܪUz5eg8sd$5|^>rш#%m ݩ&:}9]!f HÇc9PJXQM"nVKƞKZF5y:NpVncc){jzW:Dnk O]S 1uR͵˼DzɯMf^ucΛ[NYoz eBQ(?u?\FY\yg;?o@dS/ʫ,TZM']miO=!iu$[O7zޙiwk19bX5wk)J!f }Oa2}]ɿ ?nڿ KFe*Ͽ9S ]nD8#ʱYM8(co%Y'zQm{ָ?M/5X+Vw{E,JOa1\9R7(]>yZ_ ƃkv˟>?落o 4+^#nWY%C<7:/˹+܉Rl7˶*6tweQɽI44#Io*>x_ey/}ғ)p4vΚIx|V|;&! oֺ_d/~MI'aF4B4o5Z;n)RlIPiv<ir퍝kh2kZ}׿L~s/#SidrV19YÑ"]AV׉V:qy/t+OtOM=1<+sB&;dd Y&ڽ+ھYԲԷb%UpX|$YPA^?m,~%4;nM_a_e ;\{8ޯhfrX4O<Ih|pyrdwBz'FGTȕsoq2ʇo3?0z/lkq8mNfSIZjce/\58y|16ya6W닕 CéxDl3#17ˏ+#‡tҀž`-i|śv&.FGL7]W]P! I~5:ICRjIX2tM&ո3{zi}N~ ͢钫0{ZSI&[95vk)Ew%H$䫦! Vy PjLϕ >{uZ9)aҘz8/lnq#9Uֲ؉mRV\2La+jgϊlSnkZ]'9f͙7d ~[AEZb+-āA.5O<9YТ{eU.PVSrA;E)+~e`%RII|cՎ`VhBq}vFj?WOvNUkә)g@~8+aS'Q4"F֡JHx٥~;,gj_%71MyOR2i{y?O2aƘfaoکh6yo;R~RntO6cgHX'7oQO{\~wP q4mdCME{pss^h? W+!V=I~xh6Q;F.$c49f+4vI2 f+s$ !Cu3al8f# mp9'XN*-ܞ}f ׊xyaA5:<[xwLD5IkqeCn=9jmw; d*\⤋f 6Ć J%A,IsY(M(eyͷ to/i\r9x q{㎬ |h6촖9y#"gd$L~VK`sf9ūbfT_VVtCLB%u`Iy<-7j 9|Iov ZgUy mKkcxIme}ل;6RzZ?鞽پ%Xx F9*>GWz{q,W#GRe}O|#ǚ/yH=1g 3ՊYl w*^$z[C$c'|xwB0+0$|c,3/9lėXˋj O 9k` m+vLgtaA1y 56kYwt|^|6uCQI&=Gƍ]>ZqRod\jq5z|bVV;Z7^9=qxh9,8y377x ]xeXI% :K2 G,w6X;s-kq'tŝ'P.( eLjPWٱv|"ߩwu!?D[VٛOzǪ,<7vMF(9O WBĄa_I'5{mlH_67>P+&Hˍ>Z]fw ?3ߤ==-JkoT#}*2[J/Ue%g=iJUŗ-`o/ǗU.V lZz{LY$}~Suvv 34" @s*rN W7> N],híU1VqOΰ, Tw1w=Zֳ,~*|w`kqOMݩ 74ڍi^v|4z2'd \ƈB==1QC{ o 3Fo]ֵ-.hi#~i>NxMk6+b_܇hXp{:Wgz,aM6S3(tohQd7JaCIzB V̍d(2 pprr [,M|ָ >+AohX.uw왫݂S|w+vB.Na7Azg7cY+OֆMdEa0;I hfYN DіP]ەci7yCnnmѢC &)SAM uB3`%ިJ|Zl2*&j1i]Ydރ^o}TCռݪDxxW:^ߛCGt#PڱwY͞oެe,w3wS~\ t~@is'F#ݧV12U$'/j%f0]o]zuZ[bg|X87 9gqp"uW/4 R7Rd#cmkc1bɒ׶+;9XWwbc 0]bks575s缹W4=f\W?.ַX2s.O Dfc5c2zHzNoj$ xғ R˱&oH>ENvN Ɓ3Cͧ$;4RiGʎ.to`֓'G9.혏ىg}*9|f25ÛF¯k箎xƱ H0`c b@F6x*$y3US&ɛvQO}h7Ț sqwX'Z 'TL>$J[ŞEޟiK~ hoeWJxbDf /ewo+|z?$ΑEzx"`b}R-P! l$  ou(A> v}!֠2 &wQwN«vX7n'EG<JaqT{rr7,F*z(xݟ7͵L=_w.]hr?7|4oe !εLspBbڳE _afZ^5Ks͖!BȎ"w1{gqwUaO՛7*g8yZr(7YB3Vl=8,5Ynx"*;nBee}ӊzѸ6Cjɉ4}WN@9:΁؜fn$ 7Նsp81L*F_Y/Ng[ 2 t**l+ֺrʡ`V=*;q}%PXvύ·eIdS4/Ij  'DP%de֢(în>d#3UAX [jl+6cͲW %ւ^՟{T٬VeL~4`_o˾^XdDc7Hn@k/FRwF_~/vRi)ojQ“³TAL] UNsFogŪ8:!Ī]O7T$FX\2UnVE8Q,^u)j2Q-B"//D﫞U5e5&TKbtX۟4kAU}[]g;Q>U5.ϧ4!"C.lŬ[V5vY}+MNK};ȳ:)aP=:Az[j@qDTaw엵Mn$4I m"g`l2~<<*{S/EݮsѭZ?س>ojӌ,g* jpWe1(0wv[*p'\/oW:ricʱcvXI2u`x*ZZreZCī6{‰C}}?{2@_wcW/1I@y[N#]["? #y *nՂhr7~ӳI\LjW98۵J ]4&Z4ީb^<;֣Za8N066,Vq4'͹1g $J?f9Iտ}R _vIEߓ,MXry72.}|q/o*:Mu=_u>cDzgڑցgႼ#[:h~?x V$ U 32 L|6UZ+dHL@xg~R <" zְ`DԵ{bO7ńʼnIgBEm` ,%1O{xc`9 Xj@3Boa7(/sY?Qt#:UBg/b@XFRcSplӽ4>}!A"3bXyJ8^ a7*ɋRKfDL˵t9rdө3D{%j #Qٻv _?%vՉ{*J'{M¯>ܚKWPՖdMʰy$hCP\OM]pn&! 
>~HqU]G+^z)nNb5o'tFuD\c7BNKz뾾KǓ4lVv>4DV[ Dy!M~#vjM ߪϳpإN9çP۩|lr>6Mx$Ox%.响耙bK{w}ƒn6"jc>Иp4j=4{{fAPBrvXV̱bv<ym[KRdK +TW_xR<ϡ 2fz]Y +~qpeJbBhr`^"䧷h5 {##BޠbA~ғJz5i}axO 069>ȑ5y*ߚ; YLfjv|b+Q `=Cxw.*d\\,u{ek^!qmlbKVҎio=Mݮ{o,6O>7~6:{M,y鴈R V+&qe`b.(=җlU6~kڣ 2f90v$iĎ09t&e_N,U]H@PW{\8@f[a""u)|sY^wl!l }Rd/|,[48 Q/e!Yj}x N\o'S=bf N0s ^Msҫ3$O5Vj(Zʗw >8bDE}gY $OǸ:VUW}XC jшѩfN=;yZQr~:M} ne߮Sq^<%~=GKᆩBQPBHuX^֗=[DhēGᣊ sqJz>ϋ4 i Y~,yb d<+qrG{P&~ öpF0I-q g5ӼWji[c 6Qِt8p:}~|qa:GI5yswN+d{3CxbU 塩t4=komudH~4[ӝ~߷j|ٜ᧺MOj7mUO=R ^y[#56ru}1@^}^+^7I&z1?5hŸ'۩"pf>\e-IF0xi݊xn5~rd JsjC42'w){`-$z'P=} |ZߠYÚoSiwڕ_Y֫k17D~7$H4|4ҧc/u=:E( |ar[g9Y,L5mM3AoIt-Ƹ<;S'\ifԇ5|KŸ`g|}m~!/#I/(3IiIu{VOXߪ,Iʴs:Iú.zQF;}NO>4*p|gz>)r~o)e*I`]G O1&lk;ܮGc^mws;{ >Je7th Z UDKgm; 7WYԓ W[,_ro~6 ]% jV*D!&{{k"_%W4k8I={ #Mx߈E'CLUX{\}zSgKũgzs@cdz w{}_,cZ0Lv|GdˊHkE$NsJ ?UtJ:l?}ʬs8GHOz2B?̣ =FqOfj}5ȳPGؕsE,݋O5g+Kd{Dװy >K}!gvoܻ=sMF>3S_ k+;;vg󰙼ෳiD9Ƃh]ziKw'%>^Z!5Ggz8lAN",fQmӼCɱI5/\ e0\Ⱨf\J)O- +?^3nа߭kɬAȌǘ=ǚRHubާYYОK }nQYd1v;6)C0ksB9rZ,;,jT<QUvx.u% p~Xg!8:ݢ¸sٽ͟%'z/kʙib$8X2kEtYxò\D16Wlhz"/5smkKm@\=Doyyʡ{~G!z>d>X}.9p.Ͳb7#˃dh jc`F8Fegb:f]m SϬb]ڼ|mҕc?aV861kr oMmcN87sb"xjg~zcMzeX1VR x ;b"h }Cg9SaupqV c絳?4,Ys;7EҊ{t''ǍMq?4 ݵE_5ŭ_v@ԮCh\2Ǻ1F~eOGd}02i㎉}4"]4"ЛjY!ZD[]~9gq?RU6N_u7*H9OaaQ|:;1㢤K)_,[y04'zk'<[2hs2sW|^G,Vd`;l_it$Objz4j{g9] #?َtb!6?Kb6#&2cɵ,ߓg$X fώ[alEtKLgFoլ^iUjXlkN 8;xI:'`OQ/͈]*V!K=J;x.Z7Ӯy:8O& i@R7bͷFYkzX=m JֱJIfW l RdΠ$9Cy.,ڂ,XZAQkŎsGYg{( b{"i9w'bTD:Z [:eAO2ɶR,|Av#rTv5R);y)rXovCa,b`?K^]$LkZtn͚Zx/{'8OyOM\"-QS` *2 #lw#΁ _r{n(0_F>QXl[hTOiڔՋA! YsֽS93=5d-Cnfgh ]#ķ? aZ>xkvhOFzCÊѲzδ~gz lu{ֺ>y7}.llYتd$ _AVqȘFP5g1g{궮}?6b_}`JVs,KFRwac`75@ `9hl5/Ky@ϦU:~H ,ރY!kZ "6f~J܋ٵ ݌>DӛMV Jݪ5w&;Z7ӻ1,͙9'C;X~*?DG\˱uvh:&lZ}W{z[*~/uSROyUL%B;05 q:=swZ;sݍBB?Pc}Ee4粙Uvu6+n᝞*B3S>Uɞc?/@a`VTgޓr0X&}(4>\cqz'lt7Si nx;64+cgH-~Uo0}h퍕! *ЋGu>HGW2^ӳɭb}O#ֲX$ޟȗZ#|@ =|6FZ9X$e,xSc%Qab,aa7}7֝y"2<.SˋeƬHR>3kӳįo y=$̲xYVrh&66fQ,GLKAG 7d.բO&A?<{('hA+$d²a4lRl1_Xtm< [agΛb&v&!dӘ3=fϛ} *,`#,hX1kМΐü.kjPgCc@Zv[|X /l^+^nMYKvu o~ؾ>Cĭ o߅vU,r!dnXGN؛T\犠fN{}&Nq P gN09ê]-^#> mgOlwLÚc=:$[\cg?;q[24::Z#O?>Ӂ%fN: b~\SZ;T0޷%Nd 0vfB1Z_~ľަ#l5{%/8ڰvduC>SkZT_֊a,jvٛgJbp kcPlw?Sr>lT{[~:2}۾Ub|'tRuF|S aF-!\KgCgԐx GvS(xIcd{_5 e绒ε3F~vOqKX;w2<4"Rm(njXF}5&7,ޙ%6m@/g5l^ Dn#sEŘ6m@ yaGR]G"9 '![>2D'C+jgZZղbeAwqiIYTi>+¦|߼̺U*n[mHWmg&?W?X3^|ElR\pOZDŵalvdT+Rb/3CDrĹ%DK:sJD<' ;^%F2+wuPQAƍc}y.MN|7ٽ}„#~{ҀUI>t ŭҺӥ>k]pSŨ/U= .{?`l}Hj.a :u.Jm!$ϚQh[GjfPiqEB!*;ċ'3a|\d4m-ߙGξo q {Ύ`df_rviwY뮧'wȥ^XF63}f~SW48YVcE/'Zr6ro^Op\dTM$ߋKzz\]_3[p@ݍKJ}7ue==M]KΓ(I֡Šiʝ&em,@k5x9;jQZ̗d8qSlK:Z2~vX=rg|IYExߕѽ/U r?pd`NEX/x4$f5J_Kbg*XMRY^#$]:Nw6DB~W+*ШscQ?'=yjR>:'\‹ToLDŽm'VD@&+*Ԯ̓J+U rCV(/Yk)U:dN]s }{)^q4YV,E{6%Kyʐ|Ǎ#v6Rֺ.oN3dcWk*vؓhr~\Wcy7꓉V#AQ9*lEr=0au:9d냄=a;FX$J-*,(Nۏo'>Fћd+F댾oOvCCCef\> &ogη*Wm|:֡>]++!=9* /d9XjKm%.~xWy\2ןvaԘ,Q{[g-B?H6Kf1 ,hly xHYD$L mC`g[] BCla:,o0E6%:Anxjn0CE8HK}> Wpy"n%Υ5!;K mXu<&Y[(7F0VIN,I/Q/{olm_sD8!.)S J\voǖe=:ИԺÍ$oՋEx+n/+:j3M^I3޶ OiAlf Po*aPO&|vzނx>?NPzhcOr )8!Ծ3(x|> GvL|.!I\6GN럷N ,"f[F-ApWlGov[?Skm-bC'wՂOcC6y3ev)N~cE]snoDK6?G,epfjlOkYSV0ba-'wC6L^$eN~$ yxAU~jƊ^zo*1~V~53'Sm3@L$ /eG|v0=/N"9( v/:V'PK#~UYS-gBl b<NI4yēYe`Zb=M?Ӄ z'3} rڥ Zz us>w/0\.Q>o$&\" vB cR;uƾ7G^/'J-Ikg'H' n)Re!]gɹf{f4wfwTȣzmUszrzXb=Jr, ,JUͭosQ5ZsktT]F0.ivst'⁇{K8 Oybz GvBRsRσB 㒗Z~n20+jF:vzc&+?Rnք]͍( Nw;l4/jɐzC+$>oέc |]x)/ծ #dqF06ɬH';/'|y:N K.҉/v9X[b*3߸ǼK'JA/<>9jtwZqjEQOlbۍ~SZ~dpdVτpl>Oy|) QfGGFg\D`d;g|" 珳Wtk0bbJ;/e_I4݇UЛ_7ZB̮hv5θnDXb/-"^KĹ>n )bݵ[[|Ue˱~*سUEt_OY_Q؆I7{VfEqG#;<\* 3ct}tl39CYpZutu?kZv+]j͒5Fkfbˀ4Y,Fәw.zupz&;޳+F :UV VLEͻY3EZx7H(' 4r\b'\l>/kG}]0}gB!asǣ U{@8h>Dd.Pohlwmx<{yb&䲱Ivߞ~aȊ#kk}k^qFV{ͻzCUάF*l=v?;/u(/ ڦ:wOc_y&G.4.~U)cry  Z,"^!{LrTjҵ|.k=3yR߰D6TeCFǍP{Y|,;x/_Y 
[binary data omitted: tail of the preceding archive member]
dipy-1.11.0/dipy/data/files/small_25.bval
0 2000 2000 2000 2000 2000 2000 2000 2000 2000 2000 2000 2000 2000 2000 2000 2000 2000 2000 2000 2000 2000 2000 2000 2000 2000
dipy-1.11.0/dipy/data/files/small_25.bvec
0.0000 -0.3347 -0.6643 0.2446 -0.9666 0.2135 0.6467 0.9431 0.6058 0.1294 -0.6945 -0.1746 -0.2989 -0.2396 0.6013 0.8309 -0.2385 -0.7242 -0.5691 -0.2364 0.6801 0.4600 0.9120 0.1145 -0.8928 0.2460
0.0000 0.9330 -0.2155 -0.8902 -0.2364 -0.6025 -0.3816 -0.3316 0.1735 0.9162 0.6452 -0.9775 -0.7757 0.7204 0.7967 0.4474 0.0906 -0.5791 0.3576 -0.4061 -0.6898 0.6492 -0.0438 0.4344 0.1239 -0.1143
0.0000 0.1322 0.7158 0.3843 0.0985 0.7691 0.6605 0.0231 0.7765 0.3793 0.3183 0.1188 0.5559 0.6508 0.0614 0.3309 0.9669 0.3744 0.7405 0.8827 0.2483 0.6057 0.4079 0.8934 0.4331 0.9625
dipy-1.11.0/dipy/data/files/small_25.nii.gz
[binary data omitted: gzip-compressed NIfTI image, stored name small25.nii]
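The three small_25 entries form one miniature diffusion dataset in the usual FSL convention: small_25.bval holds one b-value per volume (a single b=0 followed by 25 acquisitions at b=2000), and small_25.bvec holds the matching gradient directions, one spatial axis per row. A minimal sketch of how such a triplet is typically loaded with DIPY, assuming the files have been extracted to the working directory (load_nifti, read_bvals_bvecs, and gradient_table are DIPY's public helpers; everything else here is illustrative):

# Illustrative sketch, not part of the archive: load the small_25 dataset.
from dipy.core.gradients import gradient_table
from dipy.io.gradients import read_bvals_bvecs
from dipy.io.image import load_nifti

data, affine = load_nifti("small_25.nii.gz")       # 4D array + voxel-to-world affine
bvals, bvecs = read_bvals_bvecs("small_25.bval",   # bvals: shape (26,)
                                "small_25.bvec")   # bvecs: shape (26, 3)
gtab = gradient_table(bvals, bvecs=bvecs)          # validated acquisition scheme

print(data.shape[-1])       # 26 -- one volume per row of the gradient table
print(gtab.b0s_mask.sum())  # 1  -- the single b=0 entry in small_25.bval

The same pattern applies to any bval/bvec pair shipped in this directory.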
dipy-1.11.0/dipy/data/files/small_64D.bval
0.000000000000000000e+00
9.928797843126392308e+02
1.001021565029311773e+03
9.909633063747885444e+02
1.000364252774984038e+03
9.942512723242982702e+02
9.939778505212066193e+02
9.891889881986279534e+02
9.969196834317706362e+02
9.911624731427109509e+02
9.974664035236321524e+02
9.954073405684313229e+02
9.919624279819356616e+02
9.931252799365950068e+02
9.940771219847928251e+02
9.879731333782065121e+02
9.976620483924543805e+02
9.900065516055614125e+02
9.899225600178399418e+02
9.983197906107794779e+02
9.948038487748110583e+02
9.968387096356296979e+02
9.916486546586751274e+02
9.944719806636786643e+02
9.940168126899344543e+02
9.879607569811387293e+02
1.002991244056878372e+03
9.994929363816755767e+02
9.876152811875540465e+02
9.980621612786436572e+02
9.947304108200418113e+02
9.917164860613376050e+02
9.877203533987607216e+02
9.869461881512532955e+02
9.895954288021255252e+02
9.959811626110692941e+02
9.930680889749546623e+02
1.000572361150565143e+03
9.967157942552372560e+02
9.904673272499851464e+02
9.896968074768431052e+02
9.961932041683226089e+02
9.981179783129384759e+02
9.906313722879589250e+02
9.938537721133617424e+02
9.968091554088852035e+02
9.924976723400540095e+02
1.001038001192093816e+03
9.933728499281497761e+02
9.953140425466273200e+02
9.926488343299331518e+02
9.984048920718623776e+02
9.972655377164984429e+02
9.925551656113113950e+02
9.890137636780316370e+02
9.885720623682524320e+02
1.000291601730783896e+03
9.918921471607960711e+02
1.001110504195383328e+03
9.905137412739931051e+02
1.001481457968169707e+03
9.884648285274217869e+02
9.903733083706629259e+02
9.944549794894621755e+02
1.001693658211986531e+03
dipy-1.11.0/dipy/data/files/small_64D.bvals.npy
[binary data omitted: NumPy .npy array; by its name, the same 65 b-values in binary form]
dipy-1.11.0/dipy/data/files/small_64D.bvec
nan nan nan
4.163478118279527636e-03 9.999827048187632794e-01 -4.153975602799726656e-03
9.710771441530797743e-01 -9.949625405048536080e-04 2.387638794982227808e-01
4.484975525965129717e-01 2.497431846103612477e-02 8.934350724772029961e-01
8.065213506191166726e-01 5.887964846640970640e-01 -5.331051155933171776e-02
7.115302898994344538e-01 -2.350396452768258593e-01 -6.621789876640383765e-01
3.454708480709161589e-01 -8.926283878308921560e-01 -2.895936020902123986e-01
2.289071300630692724e-02 7.977559449542579451e-01 -6.025458219490047451e-01
8.361017508803255671e-01 -2.326122361001924099e-01 4.968152672687530247e-01
6.117524509784533909e-02 -9.368291423024520670e-01 3.443962071801469071e-01
7.798364496140078872e-01 5.044848155988271854e-01 3.706078556690835524e-01
7.280092759402950753e-01 3.449800194759559679e-01 5.924451707181486171e-01
4.638900575839798868e-01 4.567629595072385529e-01 7.590610076251582683e-01
5.725407393389004840e-01 -4.866172062933480924e-01 -6.598490708764559454e-01
5.581262946147673709e-01 6.171547684827388691e-01 5.546305355807656934e-01
9.942856509510769603e-02 5.781386146941556170e-01 -8.098578286604697363e-01
5.603713195256103674e-01 -8.248798185830820140e-01 -7.454709348772665944e-02
7.413641793937422730e-02 -8.949442633413948744e-01 -4.399756323337084551e-01
3.373237308808803570e-01 2.898505589550048889e-01 8.956558234378173555e-01
8.768163366278239890e-01 1.152814573980816548e-01 4.668011326065273914e-01
5.035104108851373717e-01 7.995291679813333330e-01 -3.274604948346549471e-01
7.757347240068448446e-01 -5.124427971198213250e-01 3.682906700556473623e-01
2.976885732092064418e-01 7.892763288277961919e-01 -5.370515712040915268e-01
2.819504777234194681e-01 9.486619923932395615e-01 -1.433330118989497026e-01
6.232379823719625955e-01 -2.320164922113039652e-01 7.468217757074890883e-01
6.050338085237734476e-02 1.947738415084017405e-02 -9.979779418464482799e-01
9.759409902906680534e-01 2.151818865167165751e-01 3.515592674894449376e-02
6.353466709480807273e-01 7.715365788614170217e-01 -3.264835668164086518e-02
1.244764298570132099e-01 1.599343550651121104e-01 9.792479872228273541e-01
8.743126749329229730e-01 1.464436261515695559e-01 -4.627435691732692535e-01
3.617517941136363935e-01 -8.877710659259927528e-01 2.846017813721339884e-01
4.234504161725230476e-01 5.621192138273652938e-01 -7.104306683198732264e-01
8.736588532595805645e-02 -3.812120228599760186e-01 -9.203502570805403016e-01
3.726468191887949422e-02 3.056051540215105056e-01 -9.514288377577031497e-01
3.580783343848107370e-01 -3.317178379678600852e-01 -8.727789997577440895e-01
2.722414522447213492e-01 -9.620691694719444298e-01 1.753581567102956151e-02
1.564914425310713619e-01 9.598269716399268070e-01 2.329004356523862451e-01
8.806037057829864123e-01 4.508578266987513516e-01 1.458229524654813536e-01
5.912869227710892961e-01 7.719437645497624345e-01 2.334150794885302138e-01
2.603772661861981641e-01 -7.094721239216934539e-01 6.548686773937528738e-01
1.566041352813896115e-01 -6.939788963499021746e-01 -7.027577365164613399e-01
6.396611894362609352e-01 -6.808897123262511730e-01 -3.566829998433663773e-01
8.703417501146003543e-01 -1.411334918339134659e-01 -4.717908175136747428e-01
2.469526034715729401e-01 7.409388219811428034e-01 6.245190739439496763e-01
6.648409967484210092e-01 1.021779043512370533e-01 7.399635970133635610e-01
7.157210270916442019e-01 5.831057599687722304e-01 -3.843580154883236011e-01
5.583121379159108333e-01 -8.739995547229871542e-02 -8.250144268067105546e-01
8.333840007224894153e-01 -5.500905125207584678e-01 -5.358670893446502298e-02
3.766549463259035724e-01 8.381553084046976521e-01 3.944955391398701217e-01
7.293684374934888970e-01 3.615995413482260834e-01 -5.807473237863943760e-01
6.044737588089437175e-01 1.831503974899794385e-01 -7.752853712089822213e-01
6.774144314458742100e-01 -7.189693188462814577e-01 1.555403697648201633e-01
8.081532700735619690e-01 -4.320944987827871620e-01 -4.002282301275862930e-01
5.470294864281085578e-01 -5.019604419915061344e-01 6.699212309323325787e-01
2.922710122332284333e-01 -1.700591514095178003e-01 9.410938000167882178e-01
2.248563410182856936e-01 -4.633438858718583742e-01 8.571768016745640040e-01
8.948795193930951797e-01 3.834677269608361416e-01 -2.283487423882004097e-01
4.041415251468313818e-01 -7.132497231793889503e-01 -5.726643519868492849e-01
9.534686225700469420e-01 -2.589146765750766077e-01 -1.544693368549269752e-01
3.215216961394217199e-01 -8.201509429384397994e-05 -9.469022083537210754e-01
9.806514599835323143e-01 3.624096994593548754e-02 -1.923780292277270654e-01
1.121394846554832070e-01 5.710716476281935128e-01 8.132047154661753430e-01
3.760475566018071647e-01 2.815636138887444573e-01 -8.827854589353636428e-01
5.131690040773655426e-01 -7.208576195215492532e-01 4.658560567728727841e-01
9.530327551768297267e-01 -2.653357783804909942e-01 1.460325041601345242e-01
dipy-1.11.0/dipy/data/files/small_64D.gradients.npy
[binary data omitted: NumPy .npy array; by its name, the gradient vectors in binary form]
dipy-1.11.0/dipy/data/files/small_64D.nii
[binary data omitted: uncompressed NIfTI diffusion image]
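The small_64D table pairs one b=0 measurement, whose direction row is left as "nan nan nan", with 64 unit direction vectors at b-values close to 1000. A NumPy-only sanity check of the two text tables, again an illustrative sketch that assumes the files sit in the working directory:

# Illustrative sketch, not part of the archive: check the small_64D table.
import numpy as np

bvals = np.loadtxt("small_64D.bval")  # shape (65,): one ~0, the rest near 1000
bvecs = np.loadtxt("small_64D.bvec")  # shape (65, 3): nan row marks the b=0

keep = ~np.isnan(bvecs).any(axis=1)   # drop the undefined b=0 direction
norms = np.linalg.norm(bvecs[keep], axis=1)
print(keep.sum())                     # 64 diffusion-encoding directions
print(norms.min(), norms.max())       # both ~1.0: directions are unit vectors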
\iVQ=eny{ViclWaf[Z?TBbSIW?\BANhdGdaL;muvX`X\}G`gPbjY.mYpo[EECRXvT]WuG8]WdVGC@ejKWozZ_qG lZFeB5,hPrha<`Fy}H(>NgVs~jm^]Z9P]NhZCD^Q<HiB*5UWk2pyINVP;cdMdb9X@>LWY\FX3]U[:[AQbSl_)7/JZ[<cMTyo!OOGiJo}:M-o|aaiWGTFOrE]IU^Pj4G@KKbAcT:C,|xU{>,50I&jF4Z[`<M]<kag]G^HEyNCJ0::3>4O94XZk^`[c{o:PcEINoy:P/GF$JQ>WllNcp}'9./IuDJrDOC-P^A(6JMVK,PZ@FYcWHuIQAFXKF4+8U0UX+0VO&  *#)T_y|r{lP7Jhli?4H<k`EC]wLIS@lVK`]E,7?,dbYSHaQ$E=3`Z.:aW1HV^E(6/#2J63, 2h`kb]7<1Z93 8/!;S? UnfdS9boh?>Z[]I5ckUHKRGLYOVNTj2<I?=QSyTSQ@E1 4<>;bL3*'$?|_gZ-&"#$#pQ/6N"E#"1E()9=0V5+3=);Y@;8MTEZDZO9<E451/QCXf;CQL0#aUC=@2=>T7> 3Ab* $ "FB}e%23=$B6 #HD;20k9J6F'kJ[zT)-HbhJRS`MQ>gN ;iRM3 g K9'#99*#$%L:icKP*? F7.63J$YA=-(wopv-R{ifb[lNnt|gEd: ^`d59I!NAFOP*wk?& B;Pync\?hEO}_V~lVWkyyew^d?.~zaYfJen{VIwgYeg{smg\eWvt`eZipLMDL<l}w{i:<`s}eOzmcve2' 5g^d}zZ3]|U|tyijuZe}RJys~XeUIO~nTq_@lo]jniXVsocktUibs2]nuU* Px^yua]E(ocqu~rzixpu|a-Fi`oUznUa{j]iZWuiry_C[a@ibbjTXjnle~O_V3c|k{O0S{x{:#U}qi_C]qkeNo[VtuPz}9jq_\xpo`epbNSYe|yCCo]RdqZDi^gg{?7c`cL&/VV~Vqrg[aeqV52Kq^Ul_.0|lC_g]olLMUWNoz]nuts_Hb\qaYXJJq?A_uh?8s(l`F0GJL/6)Oc/9NJy07nrBF`d8XXFPm`h)HHE -_\K1"mrii]xjhxcgTjgWvj]hj_]}wojXWOi{e}y}trsosjfugKe|muz Xqs_>11xr]^C(<8yr{U=tf_{yr^h{{olv\Pd~^x}p{e_guYl{qkRrlmIfK&Hqs~U 'gmpbd0XnL.C-!5m-/C=*=|}\y^`hliRPk}^w8ydqdPd. 8<_u@-C"M&Ag^5A>&[=Odf=2?C8SD%L]fIpUjvt[~mw~|rXKs|c44%~B(BLLjP aI2OW84=QnHfMBKQFCyc?FUWv]z[roEXkg}PQ~aW{br[Qkt=ohz}j]y{n_oaD)lnvG2WY6?>Aj_2DYMwuosrm`aomr|nR\9GXE]IKTj4ba 2534QQ$%M#A6Q`)DA=0?RF>*"Klz!O].HETw93VG<OVPMD+BO1A@X8.M@WfTF7(<rgMp0]fI=fZ{r*ynOXsO62 RlTK$><#7PYQ[bVH//CN@G]TSBAXZKN:N^_]]}SSVA9_wiW=AihH:zoRNFk_KZ]|eJX=n`3FFDgai20NUi?'TmRAGFCA;\T@7P6MB%>jIp>QgL,KmcX|REjfI[^ReY_8]YZl@R@JNLX/8)ot@x|^[O,);STr>RpT\DWmBNfQh~W;Zu0:>_|_X^I;/4%4ZevtC0/B<B^KR^5MLLqqWRE`L4d^][lZPYcw_[7,GJJOutv`0>sA;pSPjgY_[FmEP;>043DwwPngMwgAjZI==7MmTP9B0;;0JLA-MmbSIZVc>5geQURXMEE\UdkEVE&HTM|M;^A$DCXfP`j-BW\Osgjv;QNILXZc^He\<CAMf;w|" ,15GL&B*@tP-kXOf;"K{G9^G@OIRG?_]@8WcdNlOT_V<|H0JdvYRTjtBCpycdkEWTVRX|w@atd[rfUwvYB=14M{o_Y4S:<Tx_x{)@@>tuxlTbyZU%:hKsdp[]jc[U\[y\gqPZS4^5Wl/;2FX`iim ?ZJ]~\j?Qlwkn.VHBP.'+[fMT}sFbicdMXnz3ewt|}Wa9@2HjbG3 4D8NSQ.8:lmQPC#6UjG9//(u`D%)1M\M?_XaOLy3[}_?0ZwU3X{vTZE,kaQE[^+Q?itu\GGT>"DF,,40JE75VIJ`D 0+SXXU@'Y;.yGIJ<9Kik`wmtkJaZmOotc5Q{jP0 uawPOAVq7NIS( F{Ao?7>E#OTD3'Mcdtl:4Tn~pf[mzoSIp\irdddikhhtsV|aup*8TaUJPIE?vtgaI=DDqxu^IDDLSjX<A8BN_^]GKpo>5`hW?NU=OG;wHNilN")kVz`O:TdGPofTR7Kk2$.eQVgqO#>xz^`QbB-d^k~^C9?JHXkQRgt}\QNPrWAfz`VE5FZdiwTB8Ln[MqB,\iI4ZR`qPNmcD7YUVClQ%DG*IgoCX{Z/5fog}TMM-_pm\oBLUqYNkT<]:P_?2E?>GArKjcEpeJ[c[ehMDDg`z_/Dhjv]O FU1Q<Jnrp7Rf8-Rh@ 4clkrsM3=|tOie5S.DH5sP3=>,>o_h\ZQD?1KhT3]`;SiuYXP=csjG]O2@^Z@KzQOQ@;?48W]e^]Z_[`iQ=7)>ZUKXcCt:6GkwyF\`?^[DGg],uL6Wr_5W$k`PD>Inf[g[]RVU+@;lK$Wl25n3\Pv_`qBbR6,~sC4E 2 (F~YaT"sA3@)<HhV]h8"@3;P:F^=\qfjFOjO<AFYj/)PmVPZCZw6faRjcOC# [S9KeXT[0.C+DmH-=5.3' $W[f>~MLYX8Wl_RIQTUaF7RIKh3sABb@,FB.4G*EbVB>49I;PTNL8&DDCIKFG4*E!.U#4B-/DJ3//-#zBRGUXx{^^I#@cEc}nQCXK85m]72: .4%"& (20F(IL/,@2!O+F"EMIPO.=0QVm;*S]6WSK'SMD]gr~iBbb;Zybd4LZ_knljE7gJ:IS1PXTEK@E+RDU_.@8))LZKdg=7PHMkt5e/04>1OIS]J:6WgQS].di^q|J`LWxshWA:$<6dm=216EN8_lG2&H*6)/;SY%5=9N:)CF8H8'44?#0!>@) /?*AI>B+&1795]O&74>lbBOeS#AmZlilfHp9)9GD{}_^g^{_J2'rM3[g4+zgH5?S:}W[Z)7N?AS?BJd\^ChfNKkiV:9@IRXy]dJ'LONob904MH[l:Mdf<?/4 
dipy-1.11.0/dipy/data/files/sphere_grad.txt000066400000000000000000000056101476546756600205770ustar00rootroot00000000000000
 0.00000 0.00000 0.00000 0.00000
-0.85065 -0.52573 0.00000 1.00000
-0.85065 0.52573 0.00000 1.00000
 0.52573 -0.00000 0.85065 1.00000
-0.52573 -0.00000 0.85065 1.00000
 0.00000 0.85065 0.52573 1.00000
 0.00000 -0.85065 0.52573 1.00000
 0.80902 0.30902 0.50000 1.00000
-0.80902 0.30902 0.50000 1.00000
-0.00000 0.00000 1.00000 1.00000
 0.50000 0.80902 0.30902 1.00000
-0.50000 0.80902 0.30902 1.00000
 0.30902 0.50000 0.80902 1.00000
-0.30902 0.50000 0.80902 1.00000
 0.30902 -0.50000 0.80902 1.00000
-0.30902 -0.50000 0.80902 1.00000
-1.00000 0.00000 0.00000 1.00000
-0.80902 -0.30902 0.50000 1.00000
-0.50000 -0.80902 0.30902 1.00000
 0.80902 -0.30902 0.50000 1.00000
 0.50000 -0.80902 0.30902 1.00000
 0.00000 -1.00000 0.00000 1.00000
 0.86247 0.43287 0.26225 1.00000
 0.69511 0.16208 0.70039 1.00000
-0.86247 0.43287 0.26225 1.00000
-0.69511 0.16208 0.70039 1.00000
 0.27079 0.00000 0.96264 1.00000
-0.27079 0.00000 0.96264 1.00000
 0.70039 0.69511 0.16208 1.00000
 0.26225 0.86247 0.43287 1.00000
 0.67870 0.58769 0.44044 1.00000
-0.70039 0.69511 0.16208 1.00000
-0.26225 0.86247 0.43287 1.00000
-0.67870 0.58769 0.44044 1.00000
 0.43287 0.26225 0.86247 1.00000
 0.16208 0.70039 0.69511 1.00000
 0.58769 0.44044 0.67870 1.00000
 0.14725 0.27221 0.95090 1.00000
 0.44044 0.67870 0.58769 1.00000
-0.43287 0.26225 0.86247 1.00000
-0.16208 0.70039 0.69511 1.00000
-0.58769 0.44044 0.67870 1.00000
-0.14725 0.27221 0.95090 1.00000
-0.44044 0.67870 0.58769 1.00000
-0.00000 0.51046 0.85990 1.00000
 0.43287 -0.26225 0.86247 1.00000
 0.16208 -0.70039 0.69511 1.00000
 0.14725 -0.27221 0.95090 1.00000
-0.43287 -0.26225 0.86247 1.00000
-0.16208 -0.70039 0.69511 1.00000
-0.14725 -0.27221 0.95090 1.00000
-0.00000 -0.51046 0.85990 1.00000
-0.96264 0.27079 0.00000 1.00000
-0.96264 -0.27079 0.00000 1.00000
-0.95090 0.14725 0.27221 1.00000
-0.69511 -0.16208 0.70039 1.00000
-0.86247 -0.43287 0.26225 1.00000
-0.85990 0.00000 0.51046 1.00000
-0.58769 -0.44044 0.67870 1.00000
-0.95090 -0.14725 0.27221 1.00000
-0.26225 -0.86247 0.43287 1.00000
-0.70039 -0.69511 0.16208 1.00000
-0.44044 -0.67870 0.58769 1.00000
-0.67870 -0.58769 0.44044 1.00000
 0.95090 0.14725 0.27221 1.00000
 0.69511 -0.16208 0.70039 1.00000
 0.86247 -0.43287 0.26225 1.00000
 0.85990 0.00000 0.51046 1.00000
 0.58769 -0.44044 0.67870 1.00000
 0.95090 -0.14725 0.27221 1.00000
 0.26225 -0.86247 0.43287 1.00000
 0.70039 -0.69511 0.16208 1.00000
 0.44044 -0.67870 0.58769 1.00000
 0.67870 -0.58769 0.44044 1.00000
 0.00000 -0.96264 0.27079 1.00000
 0.00000 0.96264 0.27079 1.00000
-0.27221 -0.95090 0.14725 1.00000
 0.27221 -0.95090 0.14725 1.00000
-0.51046 -0.85990 0.00000 1.00000
 0.27221 0.95090 0.14725 1.00000
 0.51046 -0.85990 0.00000 1.00000
-0.27221 0.95090 0.14725 1.00000
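The entry above is a bundled spherical sampling scheme: three columns of unit direction vectors plus a fourth column that flags the single b=0 measurement (0) against the 81 diffusion-weighted directions (1). A minimal sketch of consuming such a file, assuming a local copy named sphere_grad.txt and an illustrative nominal b-value of 1000 s/mm^2 (an assumed value; the file itself carries only the 0/1 flag):

import numpy as np
from dipy.core.gradients import gradient_table

# Hypothetical local copy of the table above; in an installed DIPY the file
# ships under dipy/data/files/.
grad = np.loadtxt("sphere_grad.txt")   # whitespace-separated, shape (82, 4)

bvecs = grad[:, :3]   # unit direction vectors (first row is the zero vector)
flags = grad[:, 3]    # 0 marks the b=0 entry, 1 marks weighted directions

# The file stores only a 0/1 flag, so an actual b-value must be supplied;
# b = 1000 s/mm^2 is an assumed, illustrative choice.
bvals = flags * 1000.0

gtab = gradient_table(bvals, bvecs=bvecs)
print(int(gtab.b0s_mask.sum()), "b=0 volume(s);",
      int((~gtab.b0s_mask).sum()), "directions")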
dipy-1.11.0/dipy/data/files/t1_coronal_slice.npy000066400000000000000000020001201476546756600215140ustar00rootroot00000000000000
[NumPy .npy payload omitted: the "NUMPY" magic and "{'descr': ...}" header are followed by raw array bytes that do not survive a text dump]
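The .npy entry uses NumPy's binary serialization format: the NUMPY magic string, a small dict header recording descr (dtype), fortran_order, and shape, then the raw array bytes. A minimal sketch of reading it back, assuming a local copy of the file; the printed shape and dtype come from the header, which is not recoverable from this dump:

import numpy as np
import matplotlib.pyplot as plt

# Hypothetical local path; inside an installed DIPY the file lives under
# dipy/data/files/. np.load parses the magic/header and rebuilds the array.
t1_slice = np.load("t1_coronal_slice.npy")
print(t1_slice.shape, t1_slice.dtype)

# Optional quick look, assuming a 2-D grayscale T1 slice as the name suggests.
plt.imshow(t1_slice.T, cmap="gray", origin="lower")
plt.axis("off")
plt.show()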
RW]Z%+ rڀ߯Z"ԫ v">xx{s~V{_g 8Ծ~?8] 9wKG,/k#{5') (#MGȧm7gsNV?8p!Q_p6l5݉ aL j^OX![m[Owx{C ڂ\A}Sa)`KYv~g : 5;Kokksw݌\{ Wcҷ'1yV)^i qOk[^щOPw~' p2?xxaoE}0l; ~ә08^16&q͐;~1'g!?EK Ғ;N0eB{o>Gm ZCۗ1Beos1:OkLoO^i~ ]CQz>zfdn̏< %uo;__X9e5@6mJ(TsjZ" R%n{X{shLyL^ hrV1˾S}~Wt-S j|fLNs SBd?c?9IέSO>/aOyi="yߢO~F}W) r~"qY_ws}:WWVjn%q^ (zK&{>ݿƳ#qԅ]'^UM@d?uqt'yw&8`}𔨑 O >yqxyQ?W7-&#ވҏ!8oSfoȃW^PC8r)~C8T<msY_Ƚa~Ps!κlcrnOsU1.-?|}YD_?y;d](?Nf$sq!|65[{`\gwS/iRz{x7&\<+߭~A~OaND5ZQ~m7EҚOa[J}KJ^8ܕXSdokΫ̻QyW2y݆GT!:U`*Uƽ@S(?y^~q[8,'ٟӈ~ۥk9ϰ_H_|< ?-\tN[N昊^[{+r&ÿ&-!^9\4Aوw 7]Id\'#S" +|ݒWf/9@tDn9481_7_}V6vүa[ٻ9⦈;SBFTs%<ÞȍΨi3SJnк oGWNE)^cݓq_n$zs7ȭ 'oQ\SiS:>|\n)~s|7&u*\5tƯ){&^.#urΥCvG??yֈ,`|9^ȝ?/CyDRz8/7G Q|٭Y-V/1WCxz?b<Yc̘3>3jW欚cWӢJ^B\q01?&x]>v߫o H#؎h ?_-vםo'G"_bt6/AF_󏭗 M,V?ycQVZGhI?w~E6g.oTmh֟zv9u>%R :44~?J&c#w=߼94[m'9~Ciō̫G~%y}-Cx%cŞ9ew[‹ 4~T>XGc_uI">|_99_C.iBӴ/%|ߋȁ]n9WhʳIq}ߚOuyB}og^/͸"#3zǬuLMcdnmUۍ_ԛtN#:&}z`b?B@Лp.35aq+N‚_ov7g![6cw;7NsUun_P?e=H_5'#؇=/?Q^܏\+yaAXoh3u<ӻseQӘw0Kh#G?Z^n | +ij]߷Eo6f!|ޣף vTy<:N26u?٧v\qT 77;,8[{{*%A s\fD ¯KN+nL2borw9mw,$$yC#<WS ic/s O<'֞b̛o.r='cs0\psEdއq?T]>(ߛp^k[cypZ.t$h_pCbO>;H]!P=_6Ǻ׵5&Z9{q8^үbT{#vZnZ#vdx%^ dI+y귇C K^ك;)L=q#^ڜ?Zډy.2˃g>{~.ݯW4}ަOG|>\ ?eu~D]Tn|rO M 24KF5cUvFq 7/8_f{{ F$3~Dz?D^o]yG2@qHr ?~O{E=#ozFĚ<:!泚oyn%V>'<k`x> BZD-y_IL%Խ_o]H,8saj}F5 (z9ڇS-z%u;;+D|]o~W'_O[g}m0كvBU\ȡ]V=3* AQ'Lw[/ '?zSa&7)qݒ~!7 u,qS'>WOx"a0vm:+jl=oW{tgk,7ٿȚ'X7煿>Aj4 pfvW]y/?w}uYݔW VMMFȫ9eq%8(17'|?JrH'KK 궾t%DY8MO0l!>QAϳ$x}<3z>OxE@S 5n" /\J=x"'NۺM(eRg&Cl0b @^ 7;)m ]Ly9fcGsrGox9?_ٯ|po^KcVq?̷@b<z''j0s$%n9( ֝ #\Emf?-se=|K\eO}f1kuagy[ɓ?Y;-ױ"NFEc$9(W--9?0ʿxx/|Rʁ.1r55a;ZGuN{ fVõPoWޔx{X.k^Q$b-c}{y?,Չ󟬥rmtf{*-2oX}݋s_L><`wj\lGT8{9ށoI?^W`r&|xs; ᧴_H)<򣸱o$>!u}m=>'#}8G{e/*"zo{:#vH.xbӧo.f\s^\kUtgMqeOUO7 gs^o>UzĿRvbA@$vJU@ndtjZ0 1z?,{u۔AplΞSU2.J`@tJ`iUzi\~,#>JChQ=ln[|f[jD_qQ2WH݊?_ߔWŸR\R= fW p_`g w)s4~@?I,}ʇPX+X͟"qDz`˅>\ߍj7涼!|}hePg^yk=Kq(yS *s1/+S^z=:@:A97Ɍ P7*uɽA%uǒ_H KuU\|?d#-,zxߴ@bc+>iMJ}k,j\K?rﯸROpQYoмU]+q 9Oܥ݈]"7Wk=qa,A%p|E~j<{g;Y^7? Pq͹,Tqo;%]G~+UZ%-G׊~S8f^>;ZmSS?֯GW F>UDOz+>]GɃu(?_lj͹$>Cb/FoYSf?}=ftGM&8DNpCCծ{ňQȇΡx~e`w ;_< /L)-4G#*/÷^>M%n {CJ~JWU .ff:B>S(#?x!y7Ÿ+d@|NZV{aȉ .cO*2ۤ@ίτj%?@R7:gwσ>\~O+s|"̡pd^ʳ<\{Ƌ>C )2 OfLJXyѸ]tP)"} wX.ILͧx?+׮"  "uZ?RWyƽA[;)FS\g'mgwd.WNHuv߫R} /wQ/R9+$^=./pOۺTyY!ݹC*7>(hCػf/vFq[ٔyw)~I>}^xy\ ~e_A__}HEmg;H\{о?I[O.nUXSÿ6^}ZgyXC2kLިg=n[这PxD=_Ha. N, E6].u]\^~3.7BcslYir(auCwG)x# f^D_,ȹKzSD'jI0^g|O;wȧ{?)I44m]1՝oT(?;S?_ŏxϳ;i྾Ձr]?A?:Eӻ]\09+v.!뜏u׺>NDRoL*R흍VBI@E{bՉGv֧n%Swᣙ9 *^;rw$u)u8>ڊsǞO ݚ b_9 NA[y9d[*HKu&1[2qC'>];hNjJP|y+2l瞅/4t|% Kدf?a |U|@]^_+8Z;x՛- ?^I:?#n&ݥ> w"͉+_Y[w7ᑙ7s >k'D-lK?Ȱ5I }1x߆yU9~˭D}VhuKYۙʷGe3e"5/Xuͷ'3G02o̜v9?+y@Α#;4)[zבs>6q;9Ỹ6jwaN?oO<=v>7ؕ:s+vN[eF:&}bC4r _ D̑~w(Cp!kLqw>; vi|T]*r~u y+UrS9K2spQѧxnXguU1X\\%} ~{ ܈M7;A{SYZ(/8vJs2}]F0'̳MJ4gecF^]wŵnPrJqo#S7Z~I ãv!|>xK kND7_gwb!Vb7{|s4~-WqʕjIn+³Xv%;K?z&lZ=(WdJo̽}[_$VLaw8>8@k J"_&^٩h"#|O@^Ǿ)}UVo=Sx'x-,vs}#nb&}N1 B\\c佒ė3v:'z _ɎM7wAmYҜˠ̬˾˂>D ޶a xR⪜oaf>qeϝp{/8O}6K\i^ .܈[w3p}λ:=}5}O6~ȿqߛOQ懸y27l78_^'O{Bnܐ\0zfM_ݓ Iƾ囍qWf,[joqm/KPɤP9U?yKwZš}%!N? 
bG^gay"^#,g<뽖5.?myý <~?Fߣwߗ>gsTz\IIp)Vq=p2\6:~(s!}:?k \RPq7ϝ> 8\.Q|_}}r;4ڜgwÿSD bɜeb*ĺv<&(NYV֧K17H}|S gN=YBhҌ8X s?M`ƀ"yWDЧ[E7M.5"ϟv xX;cd~{ۖʧ@ =֟]hi\+gHWȳw?ۿ?X"rK8yFKXEF 5{'Kcy i=wV~vąaafm0|ksr"y4O^ ΒXT|=qJ^D]n //ioߌj%ߵufoj[iGUU2?SU%s;-x@ӥFR 󷝭!x˲_uޗ#}czraUdNȃ&8 bыkډOyPMS:{/8sV {X_W<) Ņ7 b:;`e߿.|zOVD2}]-S8iv}X3xOsIi瘃1_YدɃ4i|.ܮ/vؚ;z7^,bFs8=vǜȼ7\; 2 @ΥWH4 Eo|5p&xs=iP)upwg-q,eֱ^u~7+^R9~Ǿ_w77O\'C757'4)~Gy%vy[Wɉ<s9X`gRk99g͈a^K>uktwanVt)ьcSw{)nyZ`h=ܲP;OXܠձ4}3mğθY RRuwQ9̯Xdaje5eXԽ^8nK|y(9GH9sd=cN9T.=·P~hV9ݶ#Y(h4|Qaug3[AԼpmY!vz 0ai^L"O0qAy h\vN2)f)9 ?Ln ?Ank9$1;,ykKl`nVkֈ[eUnW/nEŎ GN=?;960!?nn bO z "ג<;K'¯q:|Ng'4#~;r$xb}Ѷ!^l+*J~oK!ybc-`^xkM߰|w'n&,jҾ _ɻ )|Exuob%M;^e|:?JO>H>_ wJBX@O~W6>B?͞FU7YiJq9;'oᯨY|m6^&%_eW^:(No錾 D:P~O@\ Il'ᖒ`n$ăkk?ij~|m+*/Bmu.D?;;JN [Ozņp#9 =Kǯ45#Өz|xnC3_ ɺ7W;"~Ǫ3aN9Pn>O#wO|r'!_LN ]_{rT.'];kq3[OoM9g،ydO_w>~)mkrj-+ :?psWϺ;y9GWf/*\4,o|0 VZwQ9c'C$Ժ拝wNH^\7;[Ǯe_JǼ>9vDxt)4ȹsA?"]Fͳ<5~j TQ㚗gi{|^iF5/s~B.-y(?njZ-s*I|Hz]x|}_kht %EStIޮ;Yܺ}"e`<ۃ&̽9\Fi=3hqiIU^EZu"5& RϨq~/(?)T"[5?hމ_:#yg9uoI'r5'3{CV4\yy?Ow;-=pp9@99f@^t38Gp|?3 ]v[7u* Vb[Jߍ~LMCHh}*3?|]?:bCW{SPZ*9kղ E8os=+Ƽ~~dU畺~%8G9cGWh-хWU!n\q%c%9;'B+4~ iѻ!q}9V˅ۓ`gIZ,}+-RNffЗ]@^{ssvޅ/.EO*||{p^W _G/N@?1_7oy>;o<(&zeopGX7?i]^<<"mO- {@/o5GAE#64$e nQ!윂#MK7>IsHh'M'~_nѸGZwx:}\Qk}שuVTJ,|⯵.nk~2'R {!PY@o֬_@N9{søNЗY8t{äd˙t霾ݟ bԽ'γQ蛯.v[+f@~)Ǒ?~@l|\[$8 Nv:xğҼe cYs>QS;/ݗY .oK]۫:4Hߍ97w"~̂||wu~O_IܗijBp~μd+8G"^/q!({:,kԾr># +ȾHMqwO)|[p9!W悮 <~eCp}tm]З!`9q-|؄T1 |3|s~y0_ 􄛬 ~V$C?Zu˻ F<.׭ }1=]5 NFۊ3nyϙ uZl*WBa8۞?y5j^p/edu"y5K [b? ,ZUyH2'9yY(^Xи4n߀?؊87kXw!9Zr^kFCcROܿDi?t;eU v'٦;l?(כޅ:}y#[s/b'δy/Up]st~l`9ۅa (~):^[?87gvӁuWn8% UVɋ:^@MiOd.Xp^g}9l} FN7_ ;)gog#+*Oƺ7bԕK{.[·Ʋӊ[e~ c?,`%nhw1P>7:/7I1oOQ<ۺME5@NY^*)y7kLO`KJ o@ЏUx Jmi+$D_] ~ʅ7,Lzlۜ2C#јik6;[Ѱ%D?W_߻Kpjչn^Gs}u3KM e_Σ .eC*{.9xv`%ѰOzLKc;ߺ<._ WLc]7/}I/m %8`*x:nRt:L}7>:@AɎ#x zпywbO7svg ׫{=6>v.t>9=O-e5>^Ww怏R퀎ζZ!=:e.E@oV$y<}26C6u%tgN $M3]Bˁ>:0wө|ncpl>;qcZν؋W}eg^*~9s:4~OkF>p-)mxO/H>sb_}iW7"UG~KOpx u}Ƈdk_~_&y1u6.eN^M?ƞH5{bQ2R~;}:k֯yOB]GSO  血O| ]#`,IΔ?m(C5Yo{V< vmd?1D u:־:m<ۻs"|۴{Cn˷훉v\}V?%S+3;H?s il+_eqH/ֺj.enܗ^?i8= LvjCZ/~č~ :{~VN3R}MR#?aعo.n{^ Ar#N;& '$,#~c v_Ͼ('hcorۛzp;2@`5v嬳g/q(ک`k6=Ԁ@ cx݋﵈/>n'/ r?oF؇kG]=KDBu'סޗznOf*WiX"gX{RW>2 fgJ8S;$?Eq΍EvأwJǯYξ>Ⱦ_5vn}}~jmA?a_LɌ_}? :cďB}}:4b ߍaiW~"e\_ CYNL>";ßscg8 ~jJm䳭݃^qkqx)vGSY7b7Y^ͺ?3|r(o1[RsG`4?jT|)/ău%fʄqQ.ɕZjUG|Ih^KCu('Pݥ6zB GpB. 
zXr֫u1=$Omșw7Q{MEp*Σ~C3?`KIVd_>[b`WKOǙG 7'VaBo]+?%.e7 g.'sA1/_r11{؁ogѵ"UmlDJו#ϷUfvǶzx|Pq5ۋQQq\&3c_nl_$ '/Z ITf)ubRԨB|hrFoWyY/3zcf}3/ׇnoAެɺ5 >IC_W]gb }s$?s!>G qbͰ[>#t,ϕ(.vu)?A]?$5[~a_]Y);=3aN"o1[ 'g&oPid wڌ3mzÍd u(#|m?Ho#+pvRoB :Ys4ʳ^+ .vo8eH~ۋ$WmJbt3nIdg /U5~LkgfGYk".dO4O>|}-qoȗJ}ˉຒWR<=L4n#I/T<aIϮ;DՐw_ _^yyzz1DɷcGC@Oo-N俼8GϱiGDD7طב~X)N- "ȷ-3 mcO.'X|ym5?p108;Rg9t_9pe5z惶׏.<+fBxcq 9dc9 ]wFקإ gN|˅8!W|BG!k)[ϊ;*_ Yıq17gWyoƠN^UvE{azC^?$ >6ah ~cB9c@_g% 5rtqzf'ދ_,&t:>7Yn,;@\] kVM_)·8_C Jȕ>2X;d}ԂnwD%=~?C ~޿$W&|sg?ǧByjkYàϻ6r] cx XTIs7=0g]#^`ŏvVN$~>z1ziȁhT$rzYN/<+ifO"sW_^L_y~\ոy>گ"T%K{w=o@ĥ; (4d6's/>>^Uc1"?4jh@Gސo< >_-ùVl-b3{cqs-7~;}EF\o@o kwx|7xM> LO WUY =Kk<:]bO\<1iﰭxğ9}'?yjI}]+%~JZLQџ~8z@Q).ƛ\Cyl.W!_|9O* 3*1zLR56'N,sRдƉ,MI$ឭn|TD,7=ٯ w][4t)Z>n0m~nF@lAn'\ >X)C"'W"WުSռ8 y[Ԩ|X )ŀnBuTw<#ү/2>Ǿ=#(sC/Rx9)m`{AI>oZ;8z/<{w[aM:\|wߧ٫8v*6y4Zղ Λ g)w@j}ԃWGn}I>̵վ 3J^MjPkH_P9J}3%|eq⿸crGЏߌf|NNlйwGX_~\}g<}]\5a;Nh!ç߷]Rڌ"=vg*s?z/lSv7͑|;zᯈ5w`^|Q有:}w_RcϪo3g ;Ǥ^}^MGsjNѸBsW_fB]J4r[g7y>YZ)N{oৄpmO uKXֵEj[)N:{LYЗ#%?s%$<'So \\A\v fs]{FK~U[牶:7.Hz>7^2}eJv\̀8Qoc7|>hNK<p~__wWZgXD/wL\Xϑ ϓE7ewH _q˚\NS"+tNHI'64Cw9 g'9 _{I[JGꢕnš:_sa4B^o-\uȝ}UEw坍ܕT]\I7F"݇7|Qv̀Kۣ4"y!~G~Mes謎g'<_پ>ݪ6AҚ9?W^Ωފ}||px bS7Jq;M%&Cqt_qos2SZ켗sg\q6~Kӕ>ԎSshn|цj=@~l91z6RJ_^EN(ʾM>}xܴ̫r7c].S7oN1T${?/"M۝[:=89t {7X1(D%%1/J>2Smwb')6|I?^;K  >?#k,Fw禼 [ '4yПֻ'qIgsZTW_nޠ??I@/8e:5}i_hou,;#n OgG =e]\V_ qt^uZ/v=08:ς?0EޯcACs>/32?7~?q3Fp#FΜ?ޕ|7.`[n~}w񘼌+;j$q9//ȯM N9el))n/ 7ZWЍIS|n91Z}5ܽY')^TZkh >*惬ߛȅ}cob'/e__;/x+"S_ӾϤ-:pwė) ɷcC7}j?OľwZĿCeNٳ*뿾}B=No&%KeoW}kudKOq*}pyt<1U`OGqɑk>yF/&jlGg+Nx*#wb/SD~5~|j\ +<Γ eN=0y%[2Oޔ8b!䓒 ICԿ"Lu΍K Gԧl >TyZbǗ9D?CW| TWeG\Gu?2j_#0cBwXy)wL~;avLcT?9G~Ȋqg"k k^ۀwQ ÏেI Yы4CA>lsՑOnT2ݴumerk37]ܣgxm(姉󑫵[^{4+z 6yLBކM]95/kH<7?Y[pqn\JgGk'؁#45p|edO~+_Z. gٱq49Xkw#",!俕u{Q_CK?P_̈́U''zQg6^ج[ȱmVbtj=2O䆙uC;R  i}WYOrka}uz<#E?=?9M:Gp7zVJiowJ Dya_l %}S:v_ nJk1x4k0ⳋZOA=7k~/cH[jKq3ߺ"ĞU=9.ܿ{:e ҹf_ѹBYǿ5_X}p9R犧u~Vd4q9{η= uz"{'Kۙy2/,,%ϪuQ槿sGRU %tu==c 7/8o Nu%_%>8wyc"o'/:>t 9\ &wKj# w-=Q|@?^-nnE}EQSRWS]Ϳ"N4!pHK[ Tot; -Y=jDϗ$腷KqF/:uV/pʖ P|\φأ29vlɧ2m>ҷYW:j \"=8𒬋5"ϋ?N1ϯxQopxgesVɬ| yk | >S:prs{M3z_O{h}mur3ހ^K>NN<5"Y}U5j"Q}7 }$=z$g/_G_+aHYO_ԡ};댙8zћUW:Rh^o;|?/}д__}yCgav̋]/m(+8iwZ2'O Vۓ #G2&/[2ȕ/n7u\az1ocwvA?/Q{Yܼ=runri8IxKm t>Mrn 5o=qb.M|vi6zRn7bLądw.Kuu g"u ~ZѶ>?HG~s|yS]'?%x cBk|Ӌ%/1~zvdtq~ 4`"yTp~>F/9J: n&zY3c56:;˒沯O=ق2 92}b>}oݡr_$؇gK^j_#?{"?t`nk}to^2rPlnFUJ_cοM?q)9ϨME#/Z !Nqoq७1*ZsWc7g^V|ݭ}gEUpӥ pڇ0|i;;Ӳa7N%.I=!nO˧;!2(?u[)sШS<_%ԧA7պkJ+ %oVhߜsl< ֻ ՉSE;J^k1Dp.bŒu*6okn b#[3؝'z: $J݈3+k?y5cǼ&łƋ3$uKcŧ^ؾFQ¾p9iG#?6eNB8:+[pqw>vm8\m +!7uZ?}}pWЧ)R>"Ti2R/:B NqsB[VzWsu>Ɖ?ίйVX/9DYɶ?bI(됫LNȷš%W\Q\/rZt^z~ekn5 ?bu,a\޸qW ŏQ}q sԘ#oӛ|@*佚T{Zp#ZױKO˺}x {^|zJ ]˟?qIk_  }z*}c@~)_!܄S{BI!IH9p2P΅Uk/\'l׀d]X'G2ʝӍxāa ťaGNk?w:t{52ۜSn\Oa`Gc 5[eu4fGoFaqS 8;w18GoP9WsPb竌<= Z T(IGiDBm.U'eΖW ٰ?4θg3?f߈[ӰD}+?!X8d 3$| suN:G>C{ .0Q`n̳L$_.}b#PVw_Dc?%߫uޡxusnWNɰ=xQb++vQ);) ^%"ӡ݆baU( ɼ8L(oI.1> {W ]}a'T_.}ums;g0}~H~}o.12{B+V ٯZU;7-P `t:L~Q\yXEVe=8 CwOaUi>a^#3GMFw>s>}Bcq~AX5໬7ǛO.X9x݆;"+sֹƁ% M!.*s"ߊ^S`wI?H Ki^¿ #q~;kgWA).1V)([ л7A}z`A*/pvEt7O~%u{S^:&>~u oG~B/7w&${z-\] oo7rغӛcg?ߟcaFQͫ `3Q䈑a0Ix!RgjtJK5~'2yoݖxZ1> n)^xKdΟS`ws;4nu{TXpGGa9G?ב>ޤ>Dp3wPebD9{[z0oa5M|{Eta^>~Ơ g>/7Ƃ )7w$z_:rE ?[gx'd5:= w.w\#"yзGB-U4YQ3_xzlJ~]GFSpRb8c{T7J~ϊzR݉YiK<i.HBBjVͥG~kyqfӼsৼ_xmMSsnϚ7g24*msLGZpunK>5ChKMohIrfuq ѿVL2lI#*1_oڿ.p>Oܳq?]pw䁞֎N~{>YVbЇ2㗼@UK>[? 
438|>?.auhJvoJI*~ט\J]^fم7t(to qs ;g/`e^162AM./O?WL+e_/3ˏߙNy!?,sȜbs4F١󌵟pFǝQa:gu?{)}dgU|u8#s=}|Z:|\Ծߡj+ vNDW'A>=9es[ƻ}h49K}S]c}I_X`SK~dvWθM +@)_zΑ~cv/񣾝|G<$緜G!sFa?K^uCݡO//ye}k;oFF7 !KSZtڷBe>`}ϥ/Bh΋M=U{LyՀxps>=F3Wn&}G~gx_@y?"t?]h֑\A73.\<7e\ԙ'O;%bK%]eiy7WY&Ȑw2ޮ]MQ0_yl7s_;Fֻyon^8E܍>^K2"p_]^BڿV/̩9 'CD@:[AzzYBNȄl|r?:EJn(2zM_Q'?:_9ZUK?ۜKi}|)fR}诡-Ws"nB_}% f~7s]Aq뽖 A.cϏ|<εݒG3+яp ;:}=wӻ͢A}D`,i#&Rסo:)'8Գ9UKcO],HYs+2|>Z?y5řjkҀ>[r΃*"^-{#IH:S u22a/|`oei|oۜ/[*Һa%e?;^jJKG2qP"v4VCq:wZ>HV+E.`']Xrr'iݷh8!7t*RApW'vPn}V*71>9uζqiSA2?@f)n*:~~x{&vEok8{u߱[&^. \]a[K ܊')G1ҭnH2xEj|'lGW@S;{@OKhN֕2G6v\J$n {?{Sڣg9.VV:\2,Jk]gq gC7ď ?C\p}(I%CSWvPR%vQq1>qso3od*qOOo c}9Y;{fدk!GeΕ bˡ?j&/%u ʒñ<9OHKkUB3[s#9~L|ִ>( jiU- cwBܜYrYMwq_ D:OÏ~^_?߇eWl~sY:A97756?~cmy ̉w5F^[['wIN_oZj+{}]xD)9&=vJ,9J^=~\lF~G>t˘=x'Hй:7ߔs%=ŝw3NnyqU+ҿAq:34P_x@O_74Z.]2$%s}鳯G'oe酜`$~cmN?u vsw>ԤK뤞M< /Ϻ?~;ɥҸKOsIBl aI~P#?9p+̻f/k,پX1}3WKӹǜ;nQkf|v 9u 8GZ&:ƶ3ӻI:Oq[ޖ:GK}4Ŏݙ~Ow?2Z7:/Ez޲aÉ_3 &~߯5|uuv}X >e/#Icu.z;gЩf>E炾59.|ыx͉G|> 7m]\G^>on~Fui_H9ʾnяbWEO+b w2go$=z)c'*Nx/sUq;<s*Jr7n?Zf=~xwcL8|%/J'#Ds?> }I?~MMAiޟ׍u9OyKk/lf>G3Tn"@q,'I(r޻ ;#F1"2(Jxz~~U|{+oߙ~;EWֺ_^v/W*ŒwyD\>ۨs#uCE'U?PJH_!+eoN23| q,VvB$VN2_R'"}k#_2)}XQWQZk:'o@ b[s_Gg:K~HI%{en1zh*`H\]J4u~.v>^bo*E;b\ע09|/-sT.[B3#&J-.y\_tNҽw0Rv!2Hx_nάkDg_U]29[#s{*y_#/̃oDn܌nX\m?|gS)?oOs ._cm«s8y"~d&b֦ɷ;@'\8~=[8zԹV\t;Rrd_akMwf@G2C2?}-X=Ѝ-{'w-y;A<k5 )r^} 8H\'#B#c[d]4UQf'RԾ$w`}$؊znqy^f#^dgeN1x?5QG9#V|jk?[ O) Tٗcj$ɍ]! cj>R}'snwb]R:eXzzv]wOv꓊àS-;~ƍ)(|&]L%k?-2}#*gT.=lu$_iuGewݥyiaI.}ֹ)f"?h+]>=wh/I'9vs?vRoٮx?dCV=-Oz~.~P*7n2^BNwN6пջjЯA*+u_V`L Ů bQpW%7^'!GNw͆O՞|;L9ye8}GV^. Axٿ+>C+k_2[㕻D>c6-[97ZCO ϕ_ڿ}j/ 7˰'7?$" g7o~kIS5>uFh?pNojg:UgָW5:P9ѻ6oJ?NoFzBN#'?`)x3O̶^Qk>RTO4g{딺bK_=Ži16sdUC쥇1s~zWEό|ڹM?x5!z;a/oGOf/ ūsK.ҸԌ"s]^Vqߑ# Gizu?noǨ}\/9ՏqVcW#ү05SR`َ߳x| 'dހ4.y?o0Kt2nNNN3w-%|wwFܮ?^/;qS͇,~a$M 4ϣBo:827PZW<28q9 %nmM[7^ufGE\[JqO>Ύn{E~k7l΄nnGϹY⵴Wq5>j}y.u0fV{Ҧ#zVE$N(5?d]tɜ co[#I?ZI<)iQJ} .t.}Ї2-eOZ7D_i>\?A(.Cj_iQW;Q,D3D>,U.iʛ~L:xeJVgLmNܷ׽г1`v/2gG'3N1!7J''I%⟹/bK+Qu_~~t~oL'͘/к(4BxfFЙP<zȟf2ɫ{&D~j|ɳXVͺEOXuּe_6~;V+8!dc.?D^hcI?6$"SWcWo:Թb)}; }H%5#tCrɟM"o 4 }@B]o-%SX/"1%y/SC k?X͹4 :^޴k/7 6cMԕVI Εݲ ?8J_!bGH=ƩՅί8⡴F("}Q;.[ 5s~g i?Ss3;GaB}ov{g8 CJߕ4콛gU}VO[^XnwsYoГ$?uq ~(uͪ#}qA5k[Y*J]hh. W3gD>Kduw̹ Go*. }} ag O/sWygcGRB(#8igk^TqTVbߐ_8G.?R#v_>h(vEsyqH{f'r[<_)rCxJ:9OZ3Q:8'DI~@q~b1Ojj+z ďI^/Y{kԾ82t/B"# 8[ԁAAi|Wpawypu?s/Gy^vEy: (}zy :ZRGJ]I4g[CnhXl[zܷB~иel&iCғPhߎ\>R;^_݅qI8t",9[Bg/3'š~L6#OBť{[-yj/X[#%xڧFIaWIJϴ8}+nXe⏪:4ὺܔ9N'ոVvҧR㩚T<)}y5FK]9/tӯg=$i6ؾ$"7vMOQ=n^Oct,3Ic.x$r@W f c@tǽ.~ֳ EAuQ\24ߪy#o4}(?\HF~sWpڗ%FU32~7ez qoޫs=ci,K cr-9ΩG&nUߨb-=_!y|3#vz).L+wD^ A^W)%~8b{TvxѣY'H?sQ!yc(tqoB8_?EG(%f.k0|(SUOSzs4_:ߋTWԾ[5qP_JSC8pߗWI#\GB7##7'&͏J7|zkmK7vya5OUѢy~kjJ{>why*e7@G^~3#Ǫ̜)HjտR|&qNI 9KYVE"?RǢB1v7my_>`ޚKmRDnei)8_O [da~ڹwçL絻NOzrчZH ( W YW^j~\:i9[[>T_jݧvHzMk#ghCz(613zfɑ oh!xbc7SB/_;HvϾej2_Ukb^|{ۈ]vUa)O(O赮k b&:&~ >?0xWwG@W?ux#/1tէ2F%_kL9)~ba99Zy O_FǎP|+?jX?JJnkgGG>GX'5sL'vR QPWeWr̀>eml16Gp_|+RDj+kA$oi/ >/pS=D<>1sa{/ocO$8FOfYv׿_}׿_}__&>G .ӭOGv[Ѯݲ{9wF T!9*_2HωO [TN]m?*8ġT }gA,4gA5"{{}\g Aݬ}.dy=[pƦψcM@|"VSL?*`F-g++YDݨ?:#}(>#_$'yDڦ ߕ:q8!_W9y4WO^-;~aIdjmA~wS,N> ?H}􅏳8X>ur)ӛ< ;}mR_ )YY?n74p+r_h!8Xɟ'g4;!#{w[u[e>~yᒯH9Wc%<|Ber[-]r\p$ƁO0ۑa Pkے]@SRw'+1Ye>`1sp!Ϲq9`7zC)WB^T}:;_ז|X R/ [?{r(x . #@_'u0ojջ~A+B^jN \#| {kL~ \lX Rlg}wO2mF2_O?#|Q}d~g7KUun:?esK M+( >2ObmuN׳.+w՗C7[^fvq{)/K߈NY"NЗKa %E޷φޫ@F߯7= ߾@|n2s/ɋA>ܢMpM%'@cPߌGݧsy~pM";t#7<&Nn<+ľy >Rc>@&fƜfnwa? 
<jFanߠ'NzLG4|G=m?HOYg YmkI\~^3Ev{ 1}'z3GR1MOԮOQom>:g y[P[/H~h[t#iX?+Cg4]w9r/Oi襓>:nlEʧwK_ 4͓,_@__Ϛl#lκ@/oJq=/z:}e/Kp~:oܑnB3p3 baO'_{kos`p2*Gq7$dž6CWM'H(nʝrJmׯqy =rlVxӏ|ApXҿ`F*bc/}@v{]Z~]/&^M<ῢnKdž$:4i%E7n/7E+ont菾ca)ϖi3}d#<[ܘ:5hs^m=e$v ,ꍍw 9̕gv8mF=tgG~#C5^Qo(|ϳ[?:7݃>ORϜwԟ0ll\I31z9Fնon ;;eW{5i7?\sw8y4g?I-mnM&C5I߳L0Es\J%,uə$EbEHLE+%BIŜH4Xk™4嚘(d! K$|>ggks={OQZ I0#{>}"wF U7.T::- qZ#}`Γ3޹z>D6*B|[p-RǓ>{mČ~9o: ?-|DyW<uف =U>B.g94yܝ^~g_'1{Iձϳ沟 s͒^T`EOrĩ*wA>P{\&+_s؆ܼx=bwMaaԍ(>.ώ?+zn qrOԇ*Qʜ}~9a2793uPU[ ѳs.c׈UgSv{=J_ߝ|5?Sg"{<5g>۸P;=>)c;gWS%|"ÞcX;Oq]Fm~YTppűzk oý23v.탟Q ;ew^6orC_Y?KgFrc[RXowlWv[=՟ []^T^K(o%?~:q9&qr)y'zwOEл;b2Keu 蠦Io>ns(.>lN^qIsTGrss,z\57+]/>Xi1`UyD.ڍפY_i~[S3 SO?S?\w2޿ Un탨v6^%x^g"w=_J_q/s.$>{-.;vJsVꇾ""ܸMkgcn`!zcMp[磷~owrOҝ>ת`O3BJ%sZ7>a~pdwRy෠ﺝ}<_RSq0i -\ݣq"}* yIl\_IAS?MH͐ ˪BucSاWKqÄK*e|BqGs-n$cQU >s8UV{+owڻܿS+?-vJts+%u_|cJ)gO|;B0ۣrhr[''G4  L_pg߀}Eqx} 4/ Pq"O[#6>y*MһRg{Y_p+4^}븐Ő ZVFɜ6[&"JWy<|*Ƀ _N1}Ϭ}?gHC])i/a?_O ѼŽK𒗎p./nUV9xt972P}+{? _1KIGsWc+?׮B>wN]O]ܢp`q_rAw>m٧.o^/h73K]pd ;]f̍ݟ.رsk~KD+|'؇O3oWX8g^E\`SOd$,lsK<~|2O^<_k+I=tSZ[e;_\;^\?9NzDy?'QUwc儳XJ<:5 yxqϣ]?|ř51]'8+zY ;3U>Hqs WO)qoYGp3}<B'YW/E^_}͊?0Ə%;=S*wV8Q?zT?*Ƀt7BoyKׯ~6G !8Ee1tНuHpCӸ[Wxq cOh??78%ڏY͹ aȃ3&CobkwZHJoy_=.9U{Чz q1sv̏`ҕ|0gpʄm菼ĹEt.|̘_ЇT3a]/`A'$?'\:^`w%/D]@B~U {u̬ y99SKwS:^%̓|{XKy-~J&skq:W[3k^سcxGk$4rS|;̾K%!C_;T?ZYʺMuS9? x)WxyS7_]i f?>oD陕/*?f1 ;SO >~tO%~[9)2OZZg=N_R*@A#u Vts.S įu(ԩ5;+h9lG?0S;L}Z̜ %3}8w vDbWCajxCVꎝ`^=!s ?TVTThU&}{oTNT)y~'{3d"+G?æv։)=)~#wz&gwſM3"Gv$J?o)_xa&rFr' T;A_^gﱆg9߻+CfM!AD )efZC%kEu |֐#ۚA>o)>y=5yo(&q[fsC??9$qKvh^v{<ŭy2]oyQ~Ҩ5/I"O\ʜp{5(J(O\p A}I=Q|9IkLkcTО yХї~/?ƭ5oiqK&enoNw+|meͫ;zvz_?mYg}m{_ Ft yΉ/{db6u$&7A$6cǦl%^-x5i>[rX>u\wZtUJԸgt"[$4; h2W5}UZ7'@ w>P|RwQPm*.[1#"鸏{WC*w' i+8X:(]YfǟS?KZ2Vj>zQk빪ȕ[8;KvV?=I_4ْ8Ki7л30 :9k{;t]|7yy:. uGw]gIƼ?qy%GmÙEm-kR?ci[Jh2}IهpZ[ #X8wd.֛'0֏>$~^|uDuRuje. {Q Sz: ϫ܏""{j׸~9ХۛORjK|Qk^^]|ോcSq'4ݹ\;dy9CgvSб7~r}z_C~5;zkg>G=A)ʒ{ U''+\ix/&gB56~ݾޟ}zCbǪ񨠡_(S%XG(N1>Π~֍K]кhO]Vc~gpn?wCp"?c|87|u74|*i^QwߊxM$㑺"yby{";;h䩜vo/:p ς.ƹ"OБzCoE~;* 8/N2w p2׹A_U/zCkXf=wt3^x v-ouWBoy.'<.lbCO`]q3t3$-rL~kWjGSN I ~S6f}˦'ئ;t'q<8RU|&y J'>(aʟ;c雯]^4"|+.sp^0%v>7ᙾЧkzN na-KyC#y )+>=-EVM#gNnU~Լ:oD~6]skc{cgm+Ri7쏆`w}y~~5> ^qX{ϭ~6mEZϰ KonG& } uwbO NtiH5y]- yϘѣJJF{+v:~]^%ɞ3Ǔ*ħ^~- SOC>4N{gKSr|n\#&RvL~o㷫?Qns<KքT4o}8s=vDС ~*)y!v<̨dz'H7 a/wV%?y)ۃͧc/}^("A1fwR`;Ut,hk{լ_vIz}{ ͠SffXff1eHql,33Cb)q3m={PwfwFzuROC7?:%K }t8qn#<9n}Uh^6>xi]ǟkv/ _K|^:Wۼ}? =7ar8#OhGuY7l7S9y@? 
GKD1}gw*>*ϼZ+j]aJ63YO5 jcG$ON#Xo͙~:4#^:Hb靳v@?O&v]|kczt5sF>&{;p!tvҎUC_z~o{kcd*KED n|/(.ݻE{EnNggY!/eG, }A"3{r){Fv(IHRꚝk&^۔K+V؃&s񯍈X Wc_ލWA>6/N*=zNY!}Ebi]6yk=[-֜Cf^d+I ǀgHߏgP?%>~ cJmZyJ}g:Zss z9n\j:~ѣ`}a,qM8Wy7en/.~b-c_O{7p1-g%Nۅ+Eʆr:<{<+'%{'>72wJzeK<a b}R)O?;ދpJEUySnWm"=8q 7 .]8潴Dq'B׾j+P'ϓBNJWruJݣy +\/S R1E{J~،Mpq pzb?+|6:nr7zq9׼86td>l ~k>C ?9];W{W޹GwI5Dm}!=wv#r0kmб/6'χO;NFmzeo][v_x~_'%yvބ=1vz[=#g/zOvOlxk#}3 WfoD>뙗` Zg^`0=7suѿֽ(] Zc/%N86wz^MW1Z^g=} olO;g[ET Ĺ!?{3&G $&8X}s}s{sj7٣xӾ4x'~6ލ^ფ +.~ཛcW)F_Nz,FB_a+Qjz^֟49̆+O"/&|tcHdTnGNfO F=:{X>8]mO_ؙFCOWsWG\$3J%BvDDjŗ&3`3_~$T!둸}0}z|uZߢ_1d)1UCo|XO}ۦwƻ+/MkI#r04,z)kܷ/л+UW`]+3ju9-z`_ )|/L)qyYW̻#&sǎ甸-& YGO?q_cFNv: _[cu%?h =[q9%/"u">z=wg___%}}^[v=~ܙ8k/uorݱǣ}>k X 鱋k~Sޘ#y 3G O맵nZ[&rUqDjGH~pe} "}hYcZGa-B ^koy5-J^a-Z{Qֲ1qťۛqͧ?ݬo5os7 o5ZG~E-}nߡ<}0toHzǎ)3k6w$>b3x$MQQ| ­a} !qn<ϛ/S6#[>x{%_Fq<{r;{O)2vʅl۠W۸RIx_>gFod/Dc_1&}&}߇ǘ:Ы{h}˾yw?h~ٶ= uoZ_}o楴]a)t.8MWeCOѯr~oR֓KC"׻ A1^^p 7wsv'J]9C?syu_Ek"3㼆O9;>t9f-\qoF|/s{O苩_||p|: e ?ovƀ4ǮZP]@^$i,2s& i-BwX>[?fls7wG*}uwH_q<D|_>jG1"G"nx0ߊZIB"雡s/5/}H>K}H|u%_'51?W.<VOG-`JF|6x?9DIr7.e><+t8e/Cv f-T%6.qww!Ӷ]/(y.=IOHP,\?[7.\N| },?0-I2k|NjٙJ݂zI܍WٰF>~Npھ}U=.A$wq6ͼw3 $\l'xSay;֮@}3Oy|ԥi?w8I_XQh݃P;\A!XA_8 ;LO֝k:>f#' XNOڤ%N?*r\pB胜GV&.z::\{""v4/_9_}Bn&iV{> ltn["rpiuez V YN{}͞\ΪLKyB,8=3S#o Ygj|5פ$/{Q6qk[g/NCo_-n̹J<ʨng܍Pszuo nH}#%N⑍S79*]?ȭWzhkgߐ.$n'6!N}r;A>68*䋍VDQ.G`?+οt철&>rįr/G۝!ͺTx<7Z|?¯ ?9a+s?SM7z?Iwmi>zIڷT2#Я/z\@&ODy^z,wf^,?ԱhH-š=#7r灃ؿ'}q\ηY \3XƧ?|]={@WO!-cG]VsglM{S>aԁko"~Xr{t쬛gxVz}Uz~lP;F.o~S99rim{`O|@Ei=kݙ9(: %n-&C+1D9TwYOƢx+!%.sjW"g'4xW_ޛk_+j9>>~گDƇt3>ښCn}%>dބ^LGVK~AŽ(Q=}ȗHjy-2puJ=|=~<.c[L_iU3zww_^ %/o~m߂/ZW$d_\nu???_#ڏz$vh>>_ʱH^oLYX3p?чƱ@1z"sf&(o{gq|sj,;_> fQ݂.zi{~{WP5mT*&>;iW>"j/;&oyit"}7/y*n5Xl Cq4R&<^HN"82}Q|ɒW3U_QS+&4_ْ0_Q/CY?xR}Ҋ<֩ɯl#.ƾ?8mOZNtf|R3+5?%O3.i˷Q$sc<7.QrΫ~+/7}^|7}7괤ߎt}nkӸPxnX!yz5-gIE*gޕ'InE_џ1Ȣ_[)ԧj~;'_/ycߤ8W#srcZ9!U{oֺ]gOJ@f[8D|_[#PG?~}82xk|%~[<~7S33~8 (_K>^>~~v>x"Iq{5dzxV1d|Τ!%u௾1{ߗ?}{(v~͛p:7jԌH}/27C:?7q'tu٢oSUk'G G.'SgwGp/F O? 6 s;zX%";Cs vI~LóAwNç?4&>mp{N"fA" q2NjCGk>jyp@]TB}#SG'~UY=VҬ ċ8?{Eهu,ވ]-}5;T~,W#З|IOk_}9xOk]as<-*3tDoI:=sy/ #Y/vؑ7.~0ފ])'$-֢{ǕMre+z!x=,u]*qȫhyu~۟;3h +8q\.!7O4b.&lcUs^¿:L˨|qog 5ifK򴥲qkK 2x@̜v娍U.<-A[9K}zQDVRkWWu| ^)w{kM{+u:S眸?s.w(~AR7azA7Lݻ5GM\!f' L_fsNb`JW͗.EUmh A^>uͳz^׫}{&`|2|b\ۊ딉{AW7bgl9~殕nMr5pKA1lBÑlz}~ϐRȍ͈?[ vٴeOV;{~XwS`.9ݽ_JF1_yv3 ?7kKi:@Y՗ҟ7vJxeOBI6rJ5`/%~a]Oޒ}C?tr?F)#=jgr[qN{3Zwȑ?r>q_|GkӶ6 iU~x:uҧ?gN k sO~\bUG>Vvxlݼ }kL `ۿ@w|h=rt!Зnm¼,El8͹XgsfR߲n!s1m{>w<߹+Aa ~ Į\Q 9 lgXԱ<)?c %u:OYv/Ozߡ_ߙ^_ Rѯ[<{΋QtG]t]TM*Vq;i0p/.og 8{q;5O;Riuo?K\3a=cjOyKֵ~\c ė{ =vk3}v& sʵϡe- 9Xi"tr R_㳑mEZ՗SO ^_΍Y=/$]3}uwԍ=Uʒx>F#AGr(yN;s&U76os o7a4?+ʯ^ZzPW:p⟈ݥu`ڷ_.rrIv_|7+*|ݎbbu3U7a#A\Tc.x?ܻEُ "wFm? 
s@ݍ&vx%:k9_ֺ:G^ۗ8, ؞sDmwh>N_1+Gf#  wOgϐL/ ̫L x#86 >Giȩ9GGwCnHPJjAw:ߓ:`7O;~??8GQlxꗰ\IxUtx)N[_Z!u<~KCsVGEz&t~XzQ*D9"~Rw6O9{pR KsuOݧ9YGYr͗N}꓍SɳX|onB%{p'Kfb_"x #X1rSw;Gා<8gN:{gP5C׸SqF\j SnK4` ?||ڦǏX#񇉳1OH5] iju]&|=c<0+f[d,_e+}i6x[׾NQ -$`:S?<Z1%v fzȯ,~5A}kǬ9'|/M"1-k_JRjƢ<1t_zAbϤYxH )8CyҾ;y|h?~,>: VShЧ.B?sp$b ~} ]RlLA8)wrvd/wcAIީN+g)9Ϣ,stAq]gk^*$n+x?<00̻P|W\<ܭ='ȷ\|e Y읂'دUyE<~EX/K!&v9 y`]gۧk#!?rPGS!6_ې[8𷦜39/SP!%ryľ  fIJ /:7Z\ΐ zLO9qߢ* zv?WcOu{J_aN 3x|e=7u^})z"u@/քw"sO0|}=^:Ʈ9 " 1 qf/:WCMRcߊ:/UqfM[_|g7(XL?ڟNAi`^McRY}5^Hi'oTލ;'b:3 G]:}?QC["cD>Z" I}'g]4G?ԒE#CsS2#OĂζBպ`-G--ʊ:>9&$^wwc> UϥxP>J1_f=?\e ^ZA݀œl|H^z͢C+si~OZt~~!`_дCVʴż-I_CsS~; |@_ZuNB{Xo:M'>W~X|Δȏˡmrn~(KWΩjMi<:7J(F9^uҐX<8v{{ؿ9O*~GdBNڣ7#]_mh{ m(X3z?b}!3з9Dn̟S#W"!uV{w+yD^ym[*}P/z]Zߓ/sjd">Kw<'oc[=|z;][; Gi.[kGKJ|G  pF2W9DܘEZs2Kݛې?GHZd\+eB1cO-FL}>s"##ک7XIevJ(zҟ1?q|ngÈK&ff[o|=!D|EoUm?3=e{zy3ZGza?xS4salC:ޓE/z6uEZ7l>HWe a剋_Ы1}dAk`/oNPVXNPqK_k?t32CN*^9H_ōJOA,p2J[ ̷>Rgv*4%tQ9+r*nQp17 I'vg]왳̂dFOԸ rWHrkUıRCI\޿q*B|0 ??_8mU'A'G9$c>V~y_;/~cbͻ=SyάQZ >FA:OLxċJ |v)>f$_更 a=R/r]q89ފ3CoJ}@+SD7{O\gy#(>F-<zaEߺw 9G}N#]Dzp?r fS쒘iEDZ_[If:Լ閏J}DXBᓌQ1l:6tB>,].+7/xc| :]lt}Y&9q>qUUN SĹ_>qAwC|!$fvJpW_9[!`+[vpY'@AgáO>^~2ZIHgk5=vczC|][FY/K_;{ ӢD7&<zy?Υ<|ЌE {S'+u끺 :ٝvO/]I{\Z܀ݪc [_.s1Ab|.p6 ܗ:bNV}&zb>2L!ķ8·7 Xv_ˁ~0îKnsޕJĄŞѹNPys 8WVH`{H]/Oa_~5|}#ʾ##?%<{W7ZX{Щ1`FcM]E/k ] 7w3L٘e}#%n+f;rc?>>UOei\H_Ѕ J]I4пy4$9wӉ[Uސ9!c3 '2^6|"YuZ:wTnnt:g9p /DaÉoY d p|5#LہC='x^WW9c!7ݏ'8I .@o/JgGͦͩ#=:gU^pPX_9'b3WЏ\|ډW&v{BTȾl*us߈ldvhuNJg_Q{@*7C/zB>I 9yw05Voi?vNw<G|Ȋx!|t1-l#><_p)*y7}җO_gΤG&r^E6wּ2 s/d~qҭCo跟q;y^vg@{FrBuf#:l*v̍qfTzWgSGsIFYoC5 ?#XIoӸq`{ჃQZZxɸ+9vyR/PuޛU~|dX-S5>\ (y: zz= [f[__aWg_{>Aۅ΁ ?JdDP3"Ǿ 9q/;c4g9T8=*E^%)='Cs[8=S'zCNi}9Ϟ(sQyM9i-cKXz ws⁣y:[Z^S秮M/Rz}:{3`rM;My Si^ hνM㠊3sK+8 @Ū˹=_u@;'Srq?Boﺀt8n \;$+s& x~[+2&%uqw T93"/&(cԆ~/{ |q';}Z?nKND{B8ĩ]׃HH!V3i?} ߧوze牉XƟ\Y?8*~%s6 $}t3?y*x/>IܨQM!倯WI od~ŋeμj ,}/%Ǒ#wyVׄ/^\Hp'2oEG&n7uU^Sw&i?NO>xG> )z.2ΔCF{OlQ\`PwU|Xs2W.%03i->9Fd'eG0$ݻqG@^!J/Vį>3 {59;̹6^o|.S;m5go >jʟ}0OѨ[ڷ "u)Ϝ(%Yg|T#;w_!.yTAa D]'GKc{iHL mgߤ{9Quw1(IVM.XvℂSg|s^oImz6丹VzA W};Qjrg?Ϙ}wH q9jl}@/(N$Gљ62qQEf"tA/IjXb~ #A\G}-w>K._KBcRɰgs ۙsU뻆S(~zH}Yg6Lr2>f#鴯Mu[ag*ɻ 4  sErJ? 9V*ϥ>ƇEȣ=, y]sYC,(nv{Sɛby_M6$ç_q//?pYox9+s쨩o_, I/(NZ ^R't(\IЗxFSKpk[UTI0sۇ.:뼘s $sYrE0wdݸD!}{w":#r?k-u #{Џ zMsooaw;uſd[,y=cF__yoTط tyy k&`."R+kc;CPrR5_!^?rIQ$G?X7~*!snRnӗgI!-#6_G!lQ?)_byҞ/~,~ǠN@gyNbtiKssoz뜫j9k[{jbg}{T;ض"4^jޡփ{`QW1Œ|r~s6|ȣM|&r]W|b/ɜAqGWR[+ʓ{?>ө?[czpzR;vl#7rC4/=DLs^} |'un=/=ڎg1rMp Y +9_ |]Rn(A2fEъJ^M#e鯮Q!2?,9zřbp^iJ^1UD/+ I ~ٌlsOn y=,|8o\ hi';Aki]nQ[6ii~~N<}RX_Oڕ{W\Wq#פx66m'U<27ח:wWnЁSQ=2w{ uoךW"7DhV믴?}'C;?t\IvF|>.J;;Pv#y.~ Pd̪HPJDOjm|,XC/7w !s 9?gJ4nS\%\wP=>yDi^C㏊tNF>Jڞ_~Qh5vK2*Pos|z"Wܹs|gCKH\@Y@H@~HWWlN\IOR9nX[⇻5.ӧt~Jfo) v;gNe旫BDݿ*])LC5y5,q]AXT3)^>K'oXӾ5ƤЙK+Rk]&HF!~e?u4 Gg 8đ}}>k}$_7UpsH:Qܲ{݉:ɻ^?7$h9|W=ju=#JX_<sO/8GK Oxy*2:q6tςnDn(^T Nݫx 3?1>>$8@,u?NU>I`m3߻r;2Y$]){yRUbtyЙuO[9 CV}Nsz dn%a]}&˼CӚT}k^zk~:[+k7B3 Es~.+~9i&P(yD+\Ըr$h$P/,r:=*>:ٵ9 3ѻϑ/[dR(9ڏ?O ď7'nZ~-jQH='<@ߋϭgYo ;ς? 
?DiX}6y(>ξț'2܉zyV %j-Nk 6Ns B>y3Zu5J@ImE @Ϥq4k?xЩ4?[b4$k49%?a~^6r^w\S;WF ,הN0S:&'( qT&5[7/%^ﱞ @{U[~ơнwCjf;[r#S%\M<7~]!xK)?JߪWa Ԟ|;+]f;չP?JiZዬmigYZ1:1bq>ne4L=!~RK/}E5>Z/]$牼yhsl} +81FQ$*Wo]ч|~[y"+/%w>&Џ삃<?HSi<,0D9?%:,߲\ 6s3/!>4/'k2Rq:j^,Xhx\"n}-*CO϶Sڒ >}oF}I:@}%(Bk'LFkR0 7 ҂8UgsOУmVSχz׫}ss;xqr{`=R;#{ds~_o}sZx|#A׽ߗݣ [[KQ'%w`NdcG<^{^{[^q$kl k2?e\^|M=w^l)Ո>]{mLFwыM?oKT}J8H5:>Mt;ϥ?Bc+{c ܬKg}3 @zg2SR~3[$O"w|#OVԓ&(;81VPp):]@SR _0O-[/M2G},]ë~ C^)ONs:X/Rw߇+;:l~kM>6z$2 `BVwJC"q 9O2g`Mp;+/C쌥Nєsϧnopݽ7Oli2Kg3+%$OT!$U:{xe6q1#Sy}+#\&}`#7y04~Z؈K/}˻P xZHM=}v/uf"܇@:NfU"T|p>\ \|{+ٚ'o䉍rӝnIY3Ӝl㄁n*,S͟95۟- )Q2sӧY`xw'/Nzs͒^WoWũ_X@o,+OS.>QynII??NpϞoh 7@coŘkD)F=Dۉk9=;3gwG'SXQu5}(sKPu}DF =-9|q~VlG P_D7l~̕!囙+xk8Y3kﱎ?7- {+AqNrꀫfP5 [wyzrθ >~؋ugo|"GuU;H۲d9m?lpSNtҰzq)o_ ͤ _pJ6wmG_>)'P+syF=:Ϲ WOHح{ ]%In+ؕ G\Wj#`HNXa8OяNF+ޖW)6A y#x_C7ޓGm tSȜ/)͠nstla~ٿGR?9l0}6W_wB7OUsC:'u?#Wn}C]:zsSwSԡxYBFȕ1f?O~7п!SŜ4nHB;o'j>)'hy¾:}~_ $|V:=j.㜋JԏǑ:F =^9pBg:Ōs#[Bɣ"_Qdz|zi"P'~5^m#v՛6R<B}yN׉ܶKbKNybΘ\ǵ>^xy|VLc]MNQM7Qy5IM-K|ӎ" ?!JF:Uf#NSxw-Ue ~C:@;#?Wc=?Qd!FXհe?} ?.roֿ` \{נ13L8kz/pn#CS⛦Caa?Rʟ!tۧ%~i$>k6A]Vb/U/s70`O\W[=;s$ 7r JNKBt'sb9Ԕ>6@Ga7иݵV2?pڋlU_Ĺ&x_ ymڡgX -e32p-~$uV- yUsCj{=ыoҗĕ}{I"sFǛ8$N =5f>@0}M7^kōfϹy/~Հ~(|ܯ=S䓳q0ϭq9U9E;kKdI{?,/ 9e~3|VicQI:2Qmߚ9GΜ3Iup?z&"_r7fw}MoW~Yڄ9z˔_Gڎx:CN3P߸?,Rfn.7GITDܬeV!QRYω\Uʼ7hvH׸3+Ղv_u ߴ}"7KWR&q }_ta]8X_u~}ũ ky8#we羙'w+<7}! XE7w|\=.#pc_;Ϸ/SnI+ߧGտOK/ߙx-J+@+}NFϑ狱z+p5T\ҟ0忢3/5݆Sq"7H }Kr)5OMnϫg2!?}!HL CϚW`)JM@nKRnʼY|8LMtrN{To4o%ku6b2Gז:Fq%w>|d^sBұ{:?ԙ>BӱDyJݪsue?Ŀp_ƄA?d.ibgl-:kB{e}UH>dj/*PN*]K~+%o} &wR`y&`|KL ɗX'coQo \%~oOr'1rg>FJJa 1_i_>Ó8S46)s'qUNm3/oKJ_AW>~Q̿8k4bJHnu_5Qe{#| 'zV&Z}ȱ,8*ڎM?V`fе}-k9vq߭lzCZƕrGr52X}#l>GӤO v:x?[>ۺ }{Ɯ_A'r,oR,]\PFDrU\DOK0D H"He V ׵DF9ry}ߝgf9s<ϙ &J5sG`hMʣqwƂOjN_~Ӿ RϞ#[ٷByZ |3o{ի'yE?9^1QĮO_h_/Z /8_f|S֋j's%AgV qhK&X_$2;H<*y vHxʋɷy:Նq)ϳr%a,z*|++wsljʇ_?-kNdN%<"ko?͂ X,xKU'9}oǠч #s6/}1.|2oXST:0nHL7'.NX1y`tc'Y}bCAt!Dp2z<81qk:Eqf|̱tY߹7}E?Oikj9,kyG^yZ-R?,['wQ]c N}?I}Ҽ]z$vZya^uydEQ^~4.QFy n.9jH3:ϢAC3?gO뉒WwϦm\(>bߺ*S_};r.q{SR^kOy!nϻ'o] vے㵲?c3^ه˞CLԩ6IUѣE?ڹ1/7D~,eu%CoVU-C}z;~Ko'{^unk6̫w)_;8Z{iiOߓ(;q'4N?"s]~^Qh#vf)S!.z1D//M;{N ?'\6%Iv}>1 \tE橭͙=П^ßI>*/+>g.hFEW8S~opy>#xtW.1'(XO^GxwN[OG똌/4f]W ?+y+qmfd'<ߦ7wS.qn=' M{Ŷq]a<.aw|͸ә>woqSp.7}%ߥyCo38(ˇ:{sd- _:k!_ ̃xiAd|ʏ jowNBCi\kyF>{C,~9C0# E^,N)VߥU&7I TW6~*I^vXYFVCn5ϩ4~|Y'U]Pzjl1~Ϸcݯ֡-< }fWyVĝ_of~gx!?Jc_ W3C_o<ݒݏOV%uDgs8=5zf{ysOgs_{#v3˭7rinօ|l~ow>Hc(1{V#ћ 3s]}-ے呼!~UʳNfƎ0<+|_noٌ<˺>)=sW?z0 jXA]K~,H3.G׍(?TyJYۇ>+DaGq𒝄ܷ9>ޱsp" yӽ[7'Aj;?n$yj-0sqhikȯ|+WJxsr~3i?T3ᐑcʯF#3nzsu|Sk΋f~|Ne}n|%S2 P BC'o!n4o#ׂt=u-' ?Pq5ȪrF޽o`sM5G?haLLdž-<2{7vUf ǁK6ڕpb>{ b#-l_\=jsb9Z:{_[k+23n~%nhS_ zzC/k+v?k0񲕭;6kvK#?1 > Vv([^8MkAש_I9Bo7S*uǠiZnTi:H1\Vf8Am3I8L([7*(RaZ'ޜJ~aO#gPvœzmU8EֿZp=Z N#B }*_:O OhƝ {B;s/닝*АEGkct3rmm:\CƧ7md\{gp?0~~?V F 8(yp̴漠&NvYg7wC?Nǂ8G=6;,3N<~o,qhfU=<W+>3lW'og$X̏_xBi3V*ه}jKfWߔ|B?>Oݖ[$qJN@>$89\oGe=eky#O#Az/@5^MuY9p+KWksΓk',.Gyp%Oh]:We]SWn!lǛ6yT'&~!z5Ӽy9,cz3F/|0+s;hW MGq\/1\-\5/}Ĝ$*qLu1ϳ72|utb[GawL׻h\rlI +K jN~t7E.^OJ>K՛֊!㕯 ]oӑd@I1?z\oj\%Ki˻˼ooij$#^vvUtZW|zQpV y ?Į͏xNN+qs9y8j#W;#~kjU⑿'n{;Y#9n-6]Lyy Z#?Kwy#o?y,TL?Su~^UoKNy^5;Lw>K|wɼ1}I˱[6~gWJ6t6k<K uG{SoEƀ_-wƜ$>0zrr=&&~`mvr%J~X2Wx v[VmdkA<4i=Ӏӿ~CAvN׍}2v{~=*_\ }ޖjwtȂϽu$3׌4ǰRKf}|_-?ԣ<~tq؛{Пۧەy;9pMZs!AިkdM:{ a@=|طwΆLqg*ÕwLǃ}2<ʿOTA=c_[GagN\?lC[oܣ,Ǐ5oM/>S>\cN?M#OW|} <>rs><훤ɮxx+eџw) ܮMxONa7i .o'#%'.D@F.:s଼7Ss_'n=y<6#57ĝGWg[:Yp*&+amh| %!3Co.ؾωxyGq}Ͳmq+s_-F}xBAp7;{C/kn,/~ͿgʯƠ7|;y[|?(TpxqZ 3ߜACڴ۷ /h3vs6N(;_.y@,OةmKiȣoUX|K]XBЏnuhn؇_3.,j -ȓvj{Է֙2 S q?ݡ||o}އ)gC।n>.rF_?#t#<' 9e~}HA7W)Џs%n>q.$FZ -shGg}̋ OI,u3/q÷Nf034wC˻rS؇KJsUDOw~^d N8\jՔ|sVaGÏɳ=p]GOg]k./3,IͳʐW5+{67|H_N}ü mMZ v8 
hq+1r+<ؽ?"9~Iԯ.}Ћ1>J^9Ϯ~xaj2*Eqwp5R:Y'onǿo+WKbu4p>?@x1^׾f܏J_l>ڲܭϭ5y@إ˫b1.׳!Gag_' zy/M ]^-~hK洩O'CG(\x_S7OfP'CT\p胰'箜K~$Sg"F.K18Vf-tI_o/jO^[NC|i,X rٱ/籶5DW_&_ 5Яz7.e&W.בr"~lߊ}8s| s`ם 58H5țk2kiίi~'',|Su 5a.Ͳexv}5NQh~fq<;(/Ci_:_nU\\4~&yw˓ñoDsƟ+ לˮGW9Fo>= #agL턽y;"v:μFSé ΪķC;8k1u b0_Ikdd.Ciߗ+DGXyz`LG5ϓ= ڇkQsαuHD^Sg J \gL[qxq#ov?vԿȟ]b~5xjnh|jy-4M] O'6֗g/cimO.]#_I^bWJfr_ݿna 01kS> ;\!)_(G o C: /KTKմ|﫶Au.uXw6 0zuG~zwcBĹ˾U{Ž/2-ޗ?q" b~F^tK9Mu[qC˳YU{}:ʓd]_?Kݕ~Ryk[Ug#n p9#P]3'/ @5Ƭ=]ʳ'<~ޝ&ػk_K\W/?o>:?Mg d|}1Ov)EUo75uk/v&^nw^0̓qdA>߫HtZ {tOovNQ~UR^{>bXx\6U'7 nw9z'jw}DG(o{ҷBe/ }y#S/({LdBKdc>~ܖ=yZ-91f~] S!usysS!$) {12썌syRo2^0- <f{k=_yQwqxhh߯DG_A75pZ ]u sRtXOFy)B"(<~@a'3gI푗:ݻigr J # *%\ߣLb)Ᏸy,u͊y1m<_ |c5N\2gb-sі -Q(o]9YbrC>!{_u \@P[{ܷ;og~nZoI XE<+lxN/|:|4n D +But.+7.ue ֜Cd.lpSS#c*J=0v>D'xׂH?sy,=y}?/%&˟|)AO\ޑ2; -n]c⓫E8{+:n%df?.Oi>}w.my ?S>"po] #2Oft@~?#s߲]>W昝Yӹ~-[w}~|q3}\7ɳU#P.[KC˿4ھDePeIo+)b// Jsֻ_+"m׺2xe]pO @Jj[x m|13kMJhow8bDεՙUsj?_asi-!k;X)zpdgzr|A(>?(6;[?N~N{(հ!y7naiv}pϽx { =*8n~RySO4\4,'W"#ƁULaw.sΏRM$/j;"uoeTvA^)_p?ζϭ}u=/v'39"8'gP9zS[qOL\]>F:*qЛȟG35pBﮇ~zdQeV:HAfEKN1:7r>]s\TZ; Uh/UD_u4 N|Qχx^q} 27ܘ>v>^m+)egy'_ӿ{ ?K''K'y_Zz;ogy9_渜q+苓zyc9:Kz:'9$] yqzWv >ӹ.O㧯g_h]zAXx)߃{fviwܿnP慕^=:xjGuqmI_s1}y. }>:m&-<َ>Lz<~^]Dװu.<dQO굓uktr2qG)96}`gE>jߩ_ ˽Oྎ%ΕNb|D\Qy0#;6iסU:0y3S󾜻IJ=;9>rUǎ8~= ?]  M“80FCTp7?6u9x_•Op"39@\3$>@'__&g4kfz 8u*7v^&|cҌFE$?1)b[{37n4\QaoWktҹ {{(;S5Cvc_xZtnٽ\͸28Lh=5&ZRp e3{_ӮQH_yݫiKVltUy*~Іfgȫ}T -yMLpC 2 ?k_]w&V+Gךt%"| !|_4.߷".yw7r'07{gw>F+L/h0}7ɘ0:t"z:ֺɰ׎‡E7:}AD? dJgB?+8AFudE>K>Ч}7YcȞd#n'+/2Gbh[G^DN! v6 tewwOA>^s/4J^ iXʫ㥮`fG^dW|~6y{>~Y[Nip1nyyEЇѓϪGK!W~-Cs_jmqDNsO ':(!~Cv7ѯ͈kJRҺ ;rnp[Y$)Uo yOϜg u|wCܜ;{K>m'>6p!mәvuyS (9x%G"JTCXܼY_Zl.]6 s $$= /]s#o wFqjjOEѐ 9"߳Dby(%7B>EvF(] \_#:Vu7~oyՉ&ѳя YгFk^NbMз!yXQȷy+ƣWeJ?dq FcFt Pn_{1ǢAYe_ ^ž7/ Ϛh*Z$xl9eoL_e3koO|lDWi^$60{ q?T0p1#IRge޶CUn;ʜ}t[kg`_8ᅳW7ˆ8]@߽U]2/ _O𖚆uJ8v|s˾&}X5pN2WbwȳJތZ&|Z+#+㈷!P9NQY!{r{^U6{#t G<ָjȽB)-qr`Z#^Gw:ߦ%y _\i,tomȽRowGI>jD|!u{zf?P?&)#Xp >?"8(O_pLx5P+>:P(is~=O$B CeB|K7|OT~f~_K_]Y\W"Hx?۹ŗ1sOurh8 ^:)3'_ypqԉw5Ϙx8)Qto?fo_(vKm"upZ?5Hz_gD܉(o#/cOeΏ9?u7si$aON*qq6җ^~_Ӈ׳ :t"?+9kMn݄;C}fȾ ͷh߀cfF/\.{!f.>ȿ6vUGnXra>gfJ]Z?Z?ﺝFe`~r{[7˞`Or?T礃e۽&g ({o8&b#Aop̏ }A{]caYՎb%d92&þHc,; QOp&v~31<nY_ ew_R#CwsIȯ%ʇd9+t5i49w'm9rlM@G]U>N|?R-ef(_VS nIC;@NU>"bNS焓muʌA!5ޫ2G_%ٻ섻 |y29h{mMDwʏhLx|^Ők9|9kb߄O׉pdɼڼ{N!?U}=8)o[kX&?q+sdOuA^·h?7/Qx#_6!N>3Y"W"!3 =^I葱}#zs2Q:<ιy/k=vo,-}f윣K+cj@0Mg\wHށd^r :W.r}}£rf|V},աOyWO46s| яd_i|n<˂5bU_=:~Kbǧw.wD^o?Hރ*wծ%uyM-7y\ҒtWa?u\uHyY;#|ůGkwt}]#&I+\8I8:oi?bK~_3ZCk-qzaWvuxUns8/< ;q'y &=?e0uXU&Ͷ~(c6.J~O]w^ԑFcW'Qm0brj;qE c{ِS>dPU.Cfȷi꜌eXR}zĻ6n_K^>'|vj77kf}<y^9KٟdtیkПvыbScK_\ 9ODiFQiX9`nMܗ#䔎}Vg kCݓ)|{=y|6o<[§w;18q>)uɻ +n}5`e5'4;uu9pmSy| 1bic}s lJr4p';=F~FfoN}a\8/twċ'Q__z:LuBI٧j_TmY9![YUaWݰ32lf[>u}NHwKcrw;6',u9k鵕˲9 (OZke77cRg8P^nQйsrꪂZwi |Ϋo$ߡ<嚿 .>wP"OxR ҃cʱd§{r/,qԉOad^p]WȡC^ c>_JOa;NZ&_irN]^R[IGDYx9c'8{ďe]z\7>||<_[Y&C<7Vnycach#C]U]ÄO[?IO߿7?,*_Qg{)1R@K=TlJ8]=u#//Ȩ /uF~jT,\4oUh=1&)_QO`1ւK:u0:,5Y\> >uLsM_bn{z-[yw+Θ+C6Q煶+w ['=x~{ȷ̣iڜ9 l|=^>kH}`/ W;_FHC\2~>*ƨeO Q;`f=4=5xGqYd6z9٥Ys;=? Oy-~MS^U?*-}WGXiRڏ==Dz>vV3"͸ɏ/;hL=ネK߻5zF)=e9 .oR'2wsS9 Kك숃}&񱑵)^Z"^.}o͌[R@ߛ9w7Z^AkQl־ t-V(0ȿa}ӒW@w$,u umf b[ο@R }cq*~\G^A'MIyZO~5[yj畏T o"<" Ge_1*(?&9vQ퐿)Yߐ'6`HSARL>M]Ƿ^k'>"^ pKN=yOgi~2u#Nk|MuDU-=}"uUNˏ?'\~_K97g{-on\%ujIߎ9zxHYM iьuo4{$|!J y3(< qi +3Eh(1uGn:݃Cǡo\S@\eKi#o{L/q@L+k9?]Z(^BeY/>)wvmל_V>LYnVx9+vC5=7몁ӛ~nEhrUY8k6DsLԓ! u.j}O{3 ?_a ?$/ /XP;/x3XOqo|  }q̻jf? 
~g򡊽727i^j &!2b3y~Z|ۖ;y£m>z[⯜1oos~[kbG'.s%{ěIl3AK棓\ jߺ1&_HB[p_/^ 9o 3Z1uNykr:'7XOo:u5u^'{A+Ogl'¿k?>؊ArP Ѕyq_kܣsKt;^)߹#r!s:/}iNx>w^|εzE1'y=n /sqnf58a!A48=c,?^jBAͮoBuϲ$p ΁|&^3硫H˅#ktm!Dȸt>LjoW$*-[+BxR_좟Gto'tS3?OO{\G$~q¾94\G~(w5yg Ͻ2'xWa{Y`'|\}7 z덽y[/hM>%{@5G?TOGG yA9 z+^FCoN;rucݮ0=߯}GO 3& 7T>K>X|oʾH}R`]dI<+q}x+suǩjn`Kyʼ)ϱU݋|c:={v<=OY?;M#2?_CdOѤ<<7|NJ۴_[EƒGVUKX>^Y-s~B|W?A,> }z5{+r.%y;?yg0y`BN@O}|p nOBN+¾Ox4nT{ҏu]h& nٜŝڿaG }˚>%2_:wxQ;lJ~:"o(y.ͫk+\$!(}Oaq2aNs+ϩVsdc չx't{4dq䑽9w 7}_?F\4֢٭[Df>S^P{qyQ/K=^4/'/9If}S|8OvGbxcΫ ^ s݋-uIy`ܯտ^?V|+#xXqnr䋥ޯ 9Q(~HuB5?A\ v2Wj=&;#|ڧ(S||-|%u whS<_A.d?sӤyJx?L7i>Hq=8ve懖༯ 1=xKhynU.j,Wp`ߙ+GFF&F9 %"NAjqyn"7o@PgEy`>`'}]_c=} ?2@}2/QGc߫a/"p_m9[ge.3Ŀ_́?Z/,NJsY+S ]yxz"^>\;m<{u~Ee27*6ҟ%+7G>7WK+>7N`jK?7+:bS?;y(E`?76ז cEǓjss:߭-!ϓ簧m;iᝮsU4|RV{gS$KJu,%$'Y^8Lve_*![sP3i0"d91׬'ٳg^}]m3us,/|Q[@\_3 ~_~<и2?3}0$pW7`tUsuުu2O8~j>FD|q$N频_WֺPߞSUΞ]J]#^u}v~tҩ#KX'I0Ny+|~Aޕj}"1i{[I\R#Z;'.>L]C#[g8r/?p{& 5Dζ y_ndz ~=vRmfzȱwm^#/:nk\o_leE Wʑr)Gʑr)Gʑr)Gʑr)g4 .ƅv9!8ѽ:sF)6J o VYp"yވBA|*c'z\l6{w[p~]jăWN/8ڈ`_W שy=;v (& 5}uJy-^aO[c:EӖۨ+M}d}y'ق<|<}_+I(~< ZzDeWeg#۫;z^Xc}s)Ak=3c>s/sL䡜-gV{x~R8^qZW:N$dH~a@oR5CIf \տ hV[ÂxDp}^'vߛe_Ns7#t^"B_ב]Pi=ZPj~? }?QGwZC߳V2i/S9o䱿N~+׼#s3xSo%V ^Z uGrGw"Om4$'}7z6E}ȵc Z'3:=i \8W;a?C}{׽*x3oyA]Bݼ' y(8#'S533yڿL /lXs)s>sg^UQ_ي>awyKȿ_lXfIEUuL= >CηE ÎZ5n&Zԅce:R$۶k ߇^ 4>uNhC{zZΉ~u>_k$u:@V2~?>kS1NYvmܚE{~ . R Jܛ\0< ϕj2C4PWj,#Q'au~z~"# R&y5l}_2EW>i m>s[ 翃V{دQ×yv렻S=K6neJUY"ߢ={d)v:FPm@=A d \_b5^ _9/_-M߇/<>ţ?B8 }c'Eی( _@]ȥ:Τ͜𿻇 xƨ]oQd{}r'kvؖ|=2pԭHQskqOu/}cG,ՋŸۋθR ު0CRAO=PW{9se_C&ZDZSW4A/^?>[Ά}1 \eգ4Aﴰ .YƋ9F^v_MysO9?>lYM]1^r]ۼ>æKt zL9ߋ|,anjg٨!R~r8ݦp29ߌ?[.dݥȷ*︥[gETOUꝽp#t.}:vMjw_3(aip/N ~őEO/Wt6{v7z}V"qy^IL.E[. O2O^"˭vv1ڳ'3h#z(|Kx} a8FQp;؏93EXa3Eoz[=U$d*b,ߎdo㹦\No|wuUJ?se:~KJjG\qSvp}m#Oԃ9[ t7Y/(M9E6|h|zS_,˸Kt˷R =-[Oy?6[[ j5ca)vm=puC<)c37O|Dߺ8 jOxDӿo\xMvE ֞>{+ٗ׮ 1[*sOn[7;3+'d9݋W$<튽S6+}؃h{;nb8;U]s\_B/77-7?_܍_?& :Os&8%+c#hZk5) rva|-/o$~Gj#Wvc?,Hc{?~x-"5:|u=/>7ݞ\ו.7٬gUzy^dMyP Ws/[/G7Hk3Ӎ#uº\$@_"BzPxWOJ?ͺUي~>$?gȟH=g#%kd|.X :O5OqtC{oNFƔނ)Z/D\l},~3.2z 3=p+>Ct{7h[Q[y?D÷%P".y7b?-dȼl*ϴ?j>ɿ0 ,y;Bf[{h|hjWC.vQ,.^oөS%[yj׭τ=)"8eos9 cGfݱMejK<]p3O]B.9?@t _$/Yiyz+6"2,b}1Gq_8]' pvs]bj(8]dDc~eky^*INǧ|qкq$?Hx{d ^kyGxՇ"i=ڭv.pdԿ$^ɇ X/ɗ);G9UyM [EX1^({]v SEQ.uNDKJZpua8]w%+nJPX[bڥ6r^csN֛EAZCtOxXk~(don~,,~׵'t1)vWrɈdKHd=UBaEjP*{xٟh][/t.P|?~ۡWCsJiӏd/tɃxg$g5dT^iZ>OuW7?Z<@0/ܬ؛+s޹.~5^y >8nvO/yݲc"ymŗ aEj'O:HpEhVTU5iVk-rx>yT>kɞ}%r݌(=48386%joJ݈S#uS q&{z~bb'Ӽyr0o/$'둆 ZGށN<ƛU; 6sOuA]krÓf} *7-Tl~{SIw^W6+ _yxYZ353I'CoL}O=^FrXfoGe@Rg5>H8֝}:ПᡸUTYe>dyqyٮ̳|ҳ|!q-:aw]S1W6w=jܳ=Jdq w@Ssr1CKCOsh|V c'I3J9~bQF D/euJYb' /6z$'p=yr:}NwΒM8]j_a=%HO)r7wڊ:󁩰$:rRU_9EG>{>NReξI O\\'y*ǽu'靛~I^/O8 \ꚵ.{&{L,B96NV_(Ys7+r!++${2Nدгފ ?˨*Z"jlw` !h4'84Np =8 Cp^}UI]s2_fLO}ׯul};Z^bO edE |# Y{Ut{yIl؝:SXj~87 A)4g ?y%hF6϶}G}v+"{ůkIG6o S9AKپ?083mƆ#HXF'$nVa&I {+q;ʉ~<OWLuoV{RN-XK<+q0'|BNfnyZKa4n=|GsflGJJʏƛaKjN*qhin%uV,Ti\qNvWlُ08r#Bn>+H*B>v{af [OGoxf wUc/Xkgu3S)HqF\'LKFi7NÿOtC. n\XŁn&=sKV5÷j-1z7{o|#:q͛ow呃b ~=~/oԏOEoҼR䅑 goS ?>?zEn~~OCquW>;Ien&Q#Xv{K: ~DKL֗IS\Vg$[A5O9LO*9J>O,qI=~֜G:}Iݏ6\ʑ^]'b'l˽`nڭ{"zAq ]sH>*z=?λߗȺ>_}~C3z|>L/0ЀDO:VIc%YK%INs'= C)Yb4\=y?.o7-MZ8xT9Mj˳>L<O_6_p7T?LZ۝ F<|K<4"_T`HYϿODΥ o|:HLqۙs'[S̛XP9'gyz_v~i1dݱoQ|ӸzT\U7<[pk0xf7^1F?$<{F*\٭N#DŽ/֜o?пd?_E+f2Z(=#zK~Z$t~s83 ;睟a.±G[q!?RgcmYv)GğsLfgkZ;G=b"ߋ^Gu`Jׁ.z DjV0u1/?xH^Эa_idx߰kz]!v~.<{q SG&̺:ZGB{cF{`Z3G^pͣo2x\n5?k V좝/w3KNKq>z>2w~k{ͩӮ>3fܞ}fM]?X+!J!sFWHܞ~qF]cjUg~>D1U|wj[wW?~\$iUN;g~2j/'uNADwC_O㌾uOQ=kV ү{P~b-KX >p3=z2'`>Za|1;޻V?BcgFm>X;E>z`7{[_٫ m\ |kޏ~K{5'P![菫VP! 
vu?ߨ:@=;L}В'EUTqFFμ9t(ZjMZ;0*е)*qgC $n8~Rg?@*ÅbLFO}c|뭷Yt/,NwK%ȟumUţa w'^~ 5+v:1tc~4/s 3?^f+Քkjs qy4~qz||T<=/|(v譯`o݉~zSwG{+!%%vܦ굼3 *y2aV$h3}MaR8ёw '%ި}k?F4k?rN[Oxͯj^R-> =JƯo_z@21}pְe6FS~Y?⧆{K; 7#g!NjNEƕ=j܈n7~fg_:UMFSB}\gaQp G. ϟ׸7ēz&BW.~4w^9-8;=qwd 8oyxS{>?4qu5&we%usR?ؾTZ-#JY~_/;;|qS xO /R7Akmx]of7V<_2 g',ݓg^<.˼`h5Gf?羂BO􊅽TjN(?tVd獋k^/+GUg̼эsz"v;wiN@gF>Is~ɏ򀷓zu.@8E?V|u^y-_/_O|7*g$Oa\n<{-:V}F:=3 [$$Wg}B|ćr/oNyzX'՗#/t퇡#Z}c7TG7Ӿ:#+9HԞ7oyԫoN!~6DOy]R*aY2wS:+}l3Fb^x[i@gbO,w]thrf3C$.9'.?!y\MD}.dj;29{ cw?x.I%֩Kn^W}Bnl=D!9} }wBx2ie%o_9xe69;mW܁$ۅNCCpuwku {-YMϒ^^'tb|5vև*w=m˰KQ^m]]ȾxH0meӑыţwģsLEWt"W."rxO'cH}]hK#P_I⬊;Uҏg:(?6yyB/o0Ǵj1K><4}jwTzų}oGN(4/rʷE916/euqB|3>ћ zn;f~^O"ժ7\?Q|N?ϱBmvC/5Bmfۗ ?`p$n뾄tǞwd1_ktzF v?u%_ k'GέC]'d~*7͗iӗ֓=eb:b}zuaKȥoyfWET<oW\&ŕk}pg]ozxQռn0iq7.}Nw&-[4}T u~Gcwt<:ҽ!vJј݆qzCϋfߌܟ5'~^\H9ȳ3.NڸQx`f7EL5~ vM"u!cڷ@テ^#O`=88=A!>^XVkL U8Æ/H<L/sSȋ w!3[ORI|_?Y5Bm& wSCoyľ$k<ϹwՍ׃;u1:!w3x'|*&wX]w-vخ<ߵ-#"·+zmsuwy'4}*~(0:!/#EXE<<1CO.uF>K\2_n=jHk[|Ժ⥉Hxq]I7mwh~!3.HRU̍in^7a?\b4yCƩoWW K!r5iVoKx$q kSY*&|P!HA˹ 5L_mspCy|m໎r6N6o! vO kpCg_hViyܫ1˫W9O^շ'q{'`ҘFuamB_b?o9c#Jx%ٯCُkc#|+@x~Áe[[чحmO#r=~ci+Uz݄ޤ}hRW_ΈRJ#8&cXqxZ7;<"8sߗ5g%/α`)o}{HZ!GW%Nr{}vOg2zOC_8ё;'/- t>; |ѵW9F|o)wŻ5t4b O'OlOBw.|s;#$ܮ }љ7j2_!~yǺdu~niѷ`mqy{o[vKgj4owΗ~QqhZh_r}z+,rܑµ`cqp:?9+~ ſjÌ>!~$sЙ3+qS>$cϛܯۯ7Y;'e[a/(r~9gro:"7 TXaTx/mW;j-v 6w;s2ON@j|9}r-p5(!A3O]po?/ntK4jc<^?Ee#9kޗG^@/}S dD||wd-|{>땹vط<ׂNZ-/7ŕH\ry Uq"O@{u,d/Ď ?_Ke ~u/{ـ<'tc/^|INN|75ȸ9%zGy:}{7bw| N:-~ gƞ5u m }<̺Y:c׌uY8Zä!_bJf}?7m'p[rBN[{JpR.R?xs"9E ƫpn<*ZkV'hB#J!ձ% ʂuD~tLEW)Y/8Z;Ra Mξ~}>>.gâC*=-yvCwaܻܘ?\C*@?vP\6籧S'`r/!>U9)Yq>ͫ]Y_^ dUz}}>0꾇ޖ.8}z t n^\B<-גwS}|@~*UI~|{=B}vNH~ȷ3>n:JQ2%Cy-joo[׹ {M~ ~-ɇI>[k`;>DfJ?tjƾ^?dvЬw KۡSn8'kKۻ+ӫrEf'[| ^ǎ;]C߳>oηQ}u'cgIv_G<~@<50v{jsǸSf韛[bw!~'v]M<%IO8<o܀'OG鞼_EXpJW:^Gi܄kj?ys 祟ohQ]H7ݫN>s^nzRy1e'+=*?} ~pȣݩה:KLAJ#wO͞i=K|v7OhbOV_AH,8&o xthEF< ?NHnI}}>$>q|RzqeW!8 ~sy]p=Fr)qV`Cޠz/Xiym4|.yw+BҹηG~Ph K><^x&z)qWGx S!^SBk`wɡJ7_~vgKW!y6$1!!s ۰.5!Nhs{ybi_`lǾuI8[йKCcm'4=~i6[?no~K?GpJ:n^/ KGbIlxߚщpR/l vy|GײqEu/?zes: 5␏҆j7us>3sN{pjg=VO%W3Ā{_5ߣ?ϗ";8Q'VunõO_}6~ULraNs"\292Qu)ZEݨfVHLw}u-1tn%\K= Ӿ(?#F CB/2'ܘ>uV)C'3օ/2%˒<|k.x[[5@%ƈP9 ݏه}2Ay>=OÛd>;}*G%O>JɑXO;:ŸAd̩fA^@^ AJ897I o;'>ߞ8MÇ6;%."WϩK{rZiZ^~cG?>#8.Qk+?0Α~euSN 'i0ّTlΗuއq^.y;>od _0GpGg83#q_#v-5=q\e9zˁ~$nuj ͝o0?4fa_]`I ;?_-WscLu7gc+sQO_w?.C:K\oj.~}oH=bMψȿ : ع@yUg;ݹ#?O-7|1k'%#^[vT׎AXOܧHZ 2Xvx/Җ>ELe9_"A?/[IBW/ao ?C7/=sT_/i!zuKfx[AgB!K){}[+aku=4?~ .+70U 0x@2gLR:{CWp')έ_I s<[աϝ]Y ͩC7zܰd#r^j') ^@WA79sOW"Ag@G#e|k09 Cu,Mmƽ?0?&V'&eZৈ<ԩXyb7LJIVh@Qտ@wCu9́;:׏/{NdFѩs5y Ͼm潇7w-ۚC-D/[[n qtb@Ws O^ۂ~y!j]ꑿ˟ݧC^T'%tΧ9:/̾?vȓo{Rm!57S{/dXB+h?HoܤЇ)YN%/RMENɥ7ǻ}Od'F+\&Q&rɆ^폠I}Їߛ.`;.Yq 6X}3݆>s(3_=q?xU|\jد׺zf %+ {0 =0!Wz[7ΑHYt[^'mqWe<{9:'E;vaOu~CAOpct܁ů5%(K_PV ulK,<Т$zcI;y{ל> >\ Lj^R_S8c}NIO:*?`#OCp4y}oӓ:%nҧgY%G ϩ{^kAuV}&@ݷʷ{{qs6Dٺ ϥ5Q:S[|w3_}W_f?φҽJ9׾@GF~+~?;"i1\tr ]Mz^ko< y)x+Gп}XDOtB@X;}Qg잆m{Ck:sb_q{:A1<ԱX l1|LŦ]M|E^P~/6 {)U][5$p*~OU=-&7WpT)Ëyp:\-DpWΣgC ])Ry x5}.95_ݑ1Lc j50FNq&Z SQ;o2B;k'=6GH"Xdg,ﳅ@UN&)! ߢoJ,:WH9~&G%gI/7!/[%|R_GAݮsbt[~ķҷ''3V#N9!GBj#{aH\܈{ԏݑvG."?=&p o>/sv\o+-ַ,^u(M_ȁw/徟![yٗgطM}5l_/|8}qE;d]@GO=d^EeG|2_Ehݗ?Bqo[/͈qq7%/|oYoߟߏYudCΉ~7.V/ـ\k,޼0%[, 93r4Cg9c2j/ebpc/"'Ks{&žr1NAQ{uɛZ%}#Dκ ߃ot> K] |~$f7i88uң< \_l, {G{D?1kT X7Y37ZxԽKUхdΨ?Fvj$^W_MW,Z)㑹n"|TWu+;J|GOѡS(Y+ŸP:9n{3};h}0.fڊC_?}k,y)W؂?OŊQ} t}gzO~k^'sDS_HD~~ c:oNs_J~Z||q~F@$`dŜxNcHR7V>9֒Y=ڎ2}bIп)WM}z2@@'k<}W?no#c_Y C*CkGa՗ĕM^v!#Flyp9,{9zkf˝H?Ta;y<vNC]vs /*}:Hi']b?M~kfٵm}~ rAEԁG E_o+tcpKB/~|؏}zu,kJVjqߛs֙<59:_:7?Re>#xY'K0Zv /x2Jl'%x4Ϯy [\Kj]}I6,qk_>9٭CᛨQ֣t؛/m+q u%|kʡn<'? 
3:YzKb+"(ϩxEsVz'FE]xX@#|7yy3nĎu?&.Q7~^~R+Km|}ěR$B|L`Lz⥇~bgR=`s6.]9jKO,']~&?y(g}?пVes'[zܠC+?EKpRL#gG$NzrFjjpqF<~@XK`?mlܠ8ܟwQSģ됳[5C/'[HVkR|K ?Sv]a:y.Z`uYK9?#&Y籧]9~eSráy9) Ǻ:{_ԈxTenmJ~ 2pzv%b/Ro{4vC~E^K=13F;8^/O'4-As(zIAR@/ɼoaS/^j"/ĦR'N;a?_؈~v7*9&N?a :w.T#폮x]-$u9[FUkr/ߚe.3 Hݙ wϙ Z**Rm2CGRf4ٌ=s+y&hLofs{yLrnXJa5w=YKG }JM7cBXM|<~ NH5YE54Cɿ:C"gJA~$իÑ7"ߏmNK6`?D6-r/sk'H=VpIZڛsNܸ\pCۖş8ЮEWVŁ^ޭ< ?B?W7g7j$ qaL+C'8mծYG/.>kW 눿r9^2!p%2 s\CPGtv'rmJ_qSƴy+#>{Mہ^Ž"w:xH} "WS s{uޑm0bWbGbEs2J-澵ʞXGWV Ϝp zVzjU}z:͓! ;R^Aᑫ $5 /To*c_&Wk뗰Ǟc2/ΚЧ#@炳k_9<~_ ɜSH-ZGr:||C}NlγX;BCGa߯v\&I6r?ݳ`/Ž+K>]Qr.[2EW^9zzqpF[‡ZJc.e/; 8EzMoY=:Y\UItU~;eL&~*ve:Q[9-8 )~ΫzIB|&/81t9ӧ}q}<D}튁aպ"w3u07ko &_ꗃ+.5ν)oLBBp/O87duUyA-^Ck=aE?1&~|v$qܦuss%?#sgYpڇĨý y>5)sx3^?+}sܟ+8.nmpт5/{Mρl?^9}~2 t~t5d;Y}Ek֪o?M^tg3w-/qJ"ۧЭ Sy29C CH=yn>#콕_IL'Hxz ig&#;NSz]*>ҝ{j6e1= }.&]3Ϥ&>5(/v㠻7oA~~ Z`+c2?'x`~*dK[Γ ~/@Nӱan~7BNoC?铟w 'O >{/$KcvqK@_K5nGX/r?I~8R`;27\*U\~z~+4}πA_-]׃{{,M/Ij"'Aj]Q-9?dž`Zy/LW W94).mQ= x@kRlV}xS ,뭯Q yNhKOi^pc>q&ߓ~.!أCw&bZ>S>e3 tϵGF[zrKgOY{D+ƨwwFYmuaUIAbna&=}%8\š ^ H^MTѲC5x7g.c˺`k}̥zdgYvTI/o~љ9r v"ڗw RsDž.$^ڹ=q}҇ Ğ`[d>;} f- )eg=/Y R\CHj7rAvuثw˪κC?ߴ; z8_UE{?G̼XT)y)T}uWBWRsb*k]El*K~ՋWU%f,vR/9 sqM,`?^y+Yat.n鱗; AiK+J#'5'㊛*N||ihRO_{]<8uj9E껴?VpA ~R:'qg zY٠k{}Aj\-sJ lR"g&v#ozFZ?UFo䪋C?⽭CǑg0#/}cs/Gae4n͍_ǶB =Ɵ>Q/WC~ڗR}~i/=H?@J\bEosJ\]vs39Y%sAC؊v ⟾m;c< _ۅ_$}maϸoϣìEQq ״<"ǾG>hKh67Wp*Pvȱ/Q|szN9sFԾ|duhr5E|T嗇^~y_s)(ySAs.'tƧ}~~Ǿ>X ,ȥ {w6?4弈}hZssOxum;P]kpgvO;ȍK 8=CuU')|(~*CUoٝc^؋Bӑ>|%~̍4Npgjp:kaR>=쿊 sj |z?}%5q_~U^9#?}=rH\>Uf_5` kwi|.MWO ?H־| `_cI_j]3ɛoJOv_U~9],u67~b!4~nСҺdt'HײdCJ^Zb\ūb7ܫLRC(i{z~u?ž7w޼~ү]įa}?kdn:9.q޲kv8w~'6KIp;~~R֜GSWQ">.8}}:Umo~6n2y xNɃkC9C[|d{߄f~ yC>fڪϯN؇"uKO -Gzuam+ޒ:B _KqtD'BnL^_q"2@vFH|Frel0oò9Ha3]}u=|]r6Tʽ߂ ^Yq`)V~NQk~~ y"W݈Dn.1!q!#~=?W?k,_ci= n>z`kɟvz/q:T80+y{!@cx!츲v̡ ULj7~qיy-ʏJ #BAT;Dog_!~SuOB_< tyQ12ƍt~ cpN@oO\WC{CԾ?+IHeiM:*p')gXv6}=?h]o" 9eo yOT{awrXtދƳּݫ+tk%>@WO sT VCQ5SE M|;Kl1iK#G%@~=|܇E'XA/ܢб0Vmz~ ϑlB<>B~^_̕{Afc 5 #j>?bOx󋿬TnJ¾*XuISW~Ĩ.ԁtʷ*4=:Yk$/m4?y<̻ WBә!̉؇Y_{Ά$<)v8ys<s"._ )Dv<ŷ5odC?&uس})<1='1KÌhoڕ:דʊWqq>hܕ j,ɧi\cW3F?X_&Cynk58`a/]aGR'j8'y'wiKӾ@rשׇ^/O?8&7CҼS7o'"q}?o"}쌆Oɷb%j?Kr&"w$Xnұ.'k%~q|֗i>NOuR;xTۃBw&_[#'[khŕ|>ۢe ~l*o".P- };BI~%+}jЏ>ؓvÇ;~ Ry'sx>N{G i|imɃ& }T+*jÿ}s/2пbb}"P%E$wp$S̘o .ɯ+кh\ߚs>C!'e ~;?{TsR~8žJ^ێG[k"#rs?2^TܤSռ5z[fңapl4ly?[9$|!]>s|7g_wԯj?>U/)^d6m&.q7nуyrñ^ރ~mxVy`)Th\G^:W~ޯS~.?.#xѺvw5{:w_, M'tޤOfb=?cUB4ko J~(7/oj~S*?]_ׇS)-w$nyDsl^EX~u6p(FǍoJ%̋K|̯ϓ|淵Ӵ>X+cw>w'yF?zQT=F펽<F=3=3q8/?JJ^}8D3(p;RY%ZH8kWu naުy'w[/ߎh"/Z?7)\x^$O*+vA$yg!fo Fn='3{`ZynC*HGRA xPcP?LWo~| }*q3Gy&7cOsȜXwJίuR=Hw~J|Kݷ/_Ooҿߔ>Cׄ+/'ܹ>4~܈<^}+@cԾV?_߁7s Sϙ$~8=K}')jG[U~T}7V^<h^ZC*NDxwg4R\t,vJA3}Lj)Ρp%pƅVϻE\t-úz.yR{G~m_aXp=ӹ憖зԽg.UMzAqԾA =w<[."qDnyuf~'_a/24X*wvnW\_v c]uxkm:KG/"Ͻus&M ^_F߭~~gW?ߛ#ۮ`?_qSkZv gy?33=D_ ?Tݳ.4jg4~>~A䴏.} A_*qέ_H)-]sYΝ6;#F1v&8%(Ixޜ[@ϿZޘ?^~%r2/Į$/teZCg+j{ā7cw93wo!吝U:ȿq_!ۯcx9W:#Ggo?sL&kON~U@ |%j#}Z"z~M1Os?P|r7ޤ_䫎'>E~%UvEO?znDC~zOj+}^z ܉{c;]!]ECA'M. M?V|CcFM*OU֊s7 y,}@hyCɿ\Dv W7x6/ }?V՜o>LFfA_ׯ UWmO/%(ҿv[j5)Ug y*3^|g|}dž,K/;zyϐ'9Xv7.:w=py|hU; ^C=o5J1;aj1xvw<1 ^ϚW'>m9&G9=YG8vD9P1olKzk ?}&?܀-+ U2IsZ;܂%-yki =9T_E4}ۋ?:c؎9 ΋\gw uZ A3@/ӧ'pLSwK ?|ߴ<T_ cМe:!Iiw#xi)ߡ7E{~)W[ xN5ϝ'Csx!wT pU}Y"c{Abx%[-.ȭowGk5uN=kfګ^s=? wE.gOIӋd=,} ΢΃p3vQ&zfFJ(pbr>0r郞ן9U>yn08{-~VR9fG?m6:"}p|?hMhzqng%N4dbS(oWm }\1uM6A?KMgO`bO߯e=Df4~7vODFNo~;rJB >unY s}]baY!v vg油?: ٧quۘ=37ʛXٗb(pw^q6xXl-M[.t?{/f!2׬ WIpse5"t?cJ!4({Nr渄H6#ghJMϞMIk[c1 Gjf&29lr9.MriH7^y=Ͽy<.g;D(#VW㺰Yďg͘!o'{'>ZH?o"nb'#*_$J|i}Ovl)1{o>OOr{i3zۃ7. 
sޚ4\_lyc/ d,GeϟU~r|bFnGCy]G*[we ϳ[ _ke /Wfo?3]}2~΅ړ[ ;YE49=H ES<`>}hHwa"b"as_Tfi p۝x._9~,A`~#dmoTxgh Nyn 2' ԠqZvFQ!}0Վsok,\ݘ6_^2ٸqvLp_SJ^s|S[c ~ӫ WG۴{rN#l̲p=Ҹ]sx:l^buPȺ6/z0 +s~R=DŎx2{(~GA~Z|TbO'3Ǿ}M[Gz9iU6?s,l]ߝ~"Of߫Wÿۉo\~ gݶ |t6x݁ynz$0w9bmuvѥKEIU`}G^'eGIj/ib=ĜWiڍ<ϪAB?~uHΉܬ9IڥӏpC#G|^{~8_Ҽ,-rηh!3G?ꏝ璧(YkI^XrM(.Xa9jn:ZLPo*Z"dN čqt?w/ݝC^Ows8s]4<*ŷ/aKn'ufǹ/\駩GH\bL4|݌"[ɿ>nݖ89Xt^L]5aʑ;p=4.Eʾw崉F#G)UQ皔*\iZلfZԈsb5?|(BoWOSO|.r;:俌زzvM9n|w&󵮜Ӛ?&y5;1uv^|pܳ(9'E kxܧr%1?؛稗#rnuȿ6w 9$'Rߐ<2 `AC(ΩsN'_>~h~9W?uwߪgz柸vCXp̱dk<7W|>?7j=?Rx>(瑉߱N^LQ./~Fʭ*U?3O4rkn~ʥ/4TB!`&TQZ>sLG#@~*?D9+*9p{̟ոdr )l-2 n&g ō4ovHOi䷴vgjK䅞1'w,g]]3s#y̭w- C/w+NRhm[]zՏy~bnq s)q[wi0MULF1sm+ǟza=}e>A98 rC:o4~76h~iY5,(>To!xqo=P{hz<=,/07t/z,Y7VƥſvAX~H]7t/aGfD!Uk> 7+_[d_Ex:̯ub`ďkJ۸m*?[`dPO&TwosS+)vF@ZuOIHZySv1=\J}ƑДz(60x|{kSy<^}2Lq_Cs}XH^FFJ ~^vvՏj=]675Yu{I~/Uo|䡽EbbαB]ؿ+%ƛ'x- ŋn6bo:΋]6&N.ElfiJqݴ|==Vqo7˫ǭOZVgl=*"O$~_c"=FVP#␛bynhܔ@w)u0}кw5^M޶{!u 4@ƽ$.z5'.5+2Xj$s {hϜ^Ǽ͹B0o*N$ NmAڗ6Ui?nyy?|UzL+;%.Tu'tZ%y)Uo9vH|sQpģkAo5뛆͛{ K zc;U{S*_מ;G~/\%u;wp\MϪ}x8̨~X~7GSQ[a݅o DynLE~;r|!AQ+_J}5e}'+8t^Ruת&? McQޱO|V~Yv <[ iwJ?KB.{xO/%=ʯW|ۧ*"O|;sW˿5UN\<ͫ/a7eUpact.,x5N=ugK=z?@S%ڏfA~;R5^ F"/<`у]Z-|=UZKoR?T^5yš:CJ=R>GHOu|_&}{gc.淍%a1xuJ9;<4qϐo>?`6z!i/9^|o'Fwsq;޹؃I3l#8j}8_HMf5 /Z' JkMN/1[@$[ꘚ0 (}N*/pH}2aj\3S>G R/,L/>u3\{c;_ =<oz÷M[_uopfnۃq-M={py~>(OFs _X.W樽\˹~'Q[,l駶-N/8f w?Q<y~ULq}yd[D}"WB/nM#S)KD>u}7v38yGyq[ ^ٌ@#9?] +9v ^aEgCpjz|=?YuA=؝.yzs C ɧ[j`W;xx ZW;#oh>3įR?sfҿΙyo(jߕqr7|Dvy?Y=?3>{̼C kEqyb~or9C<‡z 7S0mş'lYI|^՗ʸuVvJ;ȭS筲vJzr|s;:ȷ,WN|+NslFOҒq\5 oz4"ȹ7}~GL~]1>]Hp QyK_+jVgK̑Sz=k? 1p=:H~K;!P> KFBvS멒O>A{ iֻG"Y },~AE(Å ZO_!R5~+!v`vP D8d})_H Եn3*@͙ ~9|ntO](EcK.oF<ڳCɐܧ{~҂y[ioC۫5Oe]w>_Tޖn!R ׊?Ut2qԯzZW,vi2׎K»pU~o*|`sQ\R4O}͚o7?%hykZﳬv|MWGE+q\D_ƿʏ6ϑy ZQ'cwE~;>+9tzn p'S`wȝ;uf pʥ [.:߹B'{h-#!-HK ݍ4Ct t7CC HJ#0t}y=9ߵo^s&>΂>qׇH;ҧwu/>_!zZ^!諷{7Oïh:п6>:[`g a~y$CUݘ#~K}+C} p⏅Aok? U9qCvp~ştS1*{-6MVCr"R9yeIwca~Yp#,~ Ըm&7̰:șX'1P9;j> n[s /7takM}:6}ڷ':ܮ0쏕,߻S'h;_ OWn)/'Cwi/?ly5ES!U}b5Uq}bR#Qy=kkv\f&f?ߌ+CΏf^7'=qVЅy~xIR]wE/mM<@oœi^3qw>' {/xii_΢D%QiH k_qFnugмVG[a^T>Y!cvΦ5Nw|sdx/qH=&5rЗݧyz櫶As]Ϫt~}y׸X]J?Љ_Yv,н/,{I¶*!@BަCTܽ;sg~#U`3S "XoGݬn|o?gIC p8G4?iHpBZ]A ǽ [Q ״^ˉ`IP\܏ER{v^|uWWEe7-.wQ ;%eoCYt߼͑D^nNӅټt9/YvkS%~+6|rV˗)@O^>7/-%*{FtoP5ᗯYEg9qYw8]C/|_NVȃۥ/]V4eXie4cdޫY*q6pA'rN#%l,oyDŊpKMc~@#c77# / 7hheZ)KpSUl5"E_n+jci]Wq{v1ݥW͑Rdo3>]wUg#W @ޮ1f>??sࣤ%|" ~Q*7vw;zǫ0 {QR<,I]؝{|cgX&~EsIBٌ_>n宅ϷΚ: #56y|+vׄ^sҟBq-7˼ڌx<%HX,߂Rc7&"kIɃ䙴~Vn99D6}WL:?Hث8Rߩzz|do^4Y#ݐo4_ c7BeTrsiAӷ'?GaUViW5C'p3aC,f苄?}BR㬜 輙8ԍ^:|GWc]q20ia_m{7B/ q[ϲowF7)R[Z3Y~>L/y-E}o4ǟ v9ģ{OӅq}tn,aKnqsҙѥ'S ¡&: : ?@vOw%.L{m}ro\w!^- u1}67tOXZ_+;ևȕx=}$v7Z4ΞPa'OE{h_{h7J_Z#k g]~u16a}\GP*S;waaO' -tڨ1x7+t83womC?!:oy0wnVy' G-Xw =!hsO6Q8 oxkC' ")]j9#P//;ы ƮJp]Y|qGpyYjķ{7t ~~Xq5+u.)~+>>)?+ku{WI3M?*\si7oV)>ڵE|&z%+,-ˁ,4ꋜ]7?~54}.K19N߻i.e =6MuXS!u5/6/nOF-U4J&E.~_tۧ qU+ i=,3ڱ3UBtt(}#|']g\ )uo?~~Vy8y`x[Ai~} |Yl'<;~ vTГIHl/v/CC(U&4n_MVCb)ܾ;jbHcNvހOۻ2|רt7?ƯuW~n'O.e@qq^FN y4o*u݇=ƌ&O {lX|dQ Չu 7.ϱا#i4~c.3Ŗi| ޹2 9U% %}sհ冯v-~\=}:>-~<뱞1 ~EqZGxɬgK:{K )v>g=~+>Sd _`WI?=M6o̊|0cM;F"3vnb]Ml6k#"uI=n{^u{v/ON}N WS٫q{ow2Ag[w_C~uB}2zUI=A\ӊ<ߤKc~͝ķ' s0w-Z˺ &.eI?XidC^}s+uq)Yq7rGM4veb&~#\k? 
gZ_3*>MXFRO5=Xr[h={5yqUcghHUqVMB//:&RtXl|a%C䁝3$~e>-LYb;gJbDz߇f?ryAwPW/WHDXeqJaecf=W⏻?DߜͼjV׼r%_4*%<:=MZsmwϤğ9:P[WRHdӮ}lRdG<`}/D+|efE"Rwu[[$}v$m%'q ݊w/j,+ d G7-ǯV9 q"]J6>׌r~EU%:4qF/Ⱦ3ͮҰ>nÇqo]8:~V9}t8<+ z,U6%۬cK-k_R*N^R䖘?)5O~ex7&Gz-;q}鯪׾i[EЗ/5o.ZK>pAY-'~Qu+{<rb1kB1ނDYzn#'M!{&zH()8e`le'_b xk*Ǜ>8Zn%Tܚu>ՠ(?=QaF}zeNM)^==UbjbOޙg%~BX;ַ>[}y Ue3'<=*cZ NId,q'aHm?bWG^x|Yu ne֗+kђW퓹mybw7bIXK~PuqRٞ˟CO+`(Y_[rקĩgjb?~ln GïN[2=U5n.S/kgiqƼ{j~MFSl|^~+\m:qQ`z's*%u3bY/#.77rC/ghME;{CU&mF뻛YLra|K~-p~coO)_HTha#>rKLϳ 4tv>PKhaꃼ9:Lj1 u]CC :l:Yn*Ա9v4ljQ\l>$H>S98|Or\.!/Gc~4Cg'O:_kғUM׹:"?lm܎5|ʵlŏz(=j"AƝ΅.;E=9)w#vgn+mɭ=7mK_ HulV3/G:YZj|wZe׫{W_,-*5CǺ^) tzy(?w=ԡbDi_Gŭzgڑ`#mKq{ %>ƾΘ}5׼ɝ/;b'<o jC\R/z6uI<ץS/N#=~xSSˇ:{}!];sqoIV/7e'G!mЄeԉi͠ć4xFHɏ۽g@B׼C 6ןK}ۯGO#Fک腌W?w[R}A\ 9 C_"vk)X _lk WhR2֛k>ɿ [3)tz>ލq y_Ƕy=5+@8 7s#+-$C\i+Xf W2kݷܺ>,r@9?s'}}T.׆xwC'KaHϽ4G_ӕ}7k>_7v_(i/Qm}<שt.s>N_\8q#'g"3)}DJ(zV~Ť.̮CX?񳂼|C) ([B?'qI9|]_5rD>9y u W߫otd|pzoCSZA_1cG?E8''ﵟ<&Fꦭ z,sI{'yM{ևcZ:xR&3l!iOA]z0%ҟPHlћп(2'+c+?_5)-x\K uH0/@ c\1$q;ߞ0To!_r6tS:\Y$ؕbF"o _DQQ ymk&-HowR!/tSeri쥏ѧ^o "JURY5Ӝwvw{`na2|yj]8"OxkvXnQ N N"$` [}Hq|,5v}?I_cWNC촳> Wc}ePGFmmVzd=ƫ?H_n̈=,uO0fg_ihH*>yIʺ|\yux{.^bߤc ~iw{KӷN4;_;7}{{.q.Gkb(D\!k`J~3%4G7)6ژ:(*8Z]H' F>-o3RO`p7n4عGP>G޾>Z3z^+\kO+y5$ͷk2Ϧ|>Ё2_YiE=F797mK7ٴ>iXqG0é2BQ?faפ[~#yʒȷ5~m/ylq=a}kH~R?E="Z'aEěbONAQߎ!j O=D~씍Ae9)^>jx8?_$]<0ǟ ܪ=dQwA.l\8.~HjA"G~.A/^|;XKgg !: _hoFϯ̾k=s[A'fU~?[1}JeʇkeG9܃u '*ħ" 3qajp\ĥ=m8nW~Q ~xg=e(}W0E#GÚg(<3AƭQɭݗuʓD /yAsޏGB/t(-{$~&16ͤ_B~5ЏoQΑ%籒,V~d3>O~U÷g?{gܵf[%}5?PkI'C#~ZuOpig kz{ϒ`s,E)/&pOߦN4R~/@N{E#!'Ń7+)C}$Nju}p-OG˷j_q ,_V^ >^iͫWq'.BW˃_S?j}R>ꎝ!nO죷 Xi~yCxT8| ߨ 8U8P{g|H5z)R#iEdO+u3[tf}'`ݿ=R|`U#/}DTiȻ7 K89W}K]5-QQ{e" U[gd64!O}Ď$>a^ wa֭?V%oWF"?X*y֤#哜eܸfeȣEC+:Y?Lp:"} iuOPv?2t/ƢƋnz7Wudy-v4p2F/">'nt9#p=.?Vh/].0zcg0!s [g=gvѵQS99-#Ҟ@/N\]M3/p2O-Qؙ2GQT~>AQ7q_eާt~bGf_wA*]Fsޟ8 b[CK"Ɩ@ Ҵc.*qu2SԸ-ryo5?cKΌG7:9 :a8O{ ?|0NkZB :r?rAjډwK|RY-|uD~-5q v/p["w6 wyq}uX>?չ:7[akW]5ݒ3߄`5}?{ПD:Kg\jMo@35yEϺk7yn Om"Nu=7kC?YVqOW/7?,.r~7c%H ;78.V\=#UޟdN#޼2yyW0Lߗ%vIOɋ]aqOSnm_`\3>` 7wykfD_t={ z8ּͱx|!J&;ւːXW?%s?q:|F?w򭧼~E̠O3:cg ਭ e/ I[8ۯ_pM年Sp^1$hq%o_ЙAMqjk={]a$ƢnEwy2 }uz.kw0(y=vU kEܴ aVi><ٖi}<ӹ2oKݢ_nr:7EXɯŏ .Zx3bW]O]w\Gw@|';E82C;ѯڗ){&qWKg{K^G ;_rAoFxCᩐGľ8qH{YּƮѶ%׉\~ӶxZK9+T_ctNl>ɻpg+s$lϙ N\M^q;W2J7ԎeJZcR1%?T>^ѓ cWI]yy͕Wyo- >4,~3zyuς7wj5q 2/P߯s?//'ߦɿvGkZ})!Y\D_ם> [ n} %r4 3rd7\-~6- ?I|}}tnί9PɜPQCѹ:>9' ;[aoJϽO=Oe#Roz|`ķ͂~56_^or+b/OjCO ^Vs}24{F(#slToh}w2ħ2epgF+ # `/k?-Kb͗y*3͇:_JߙO q4uWm֞)Jݗ^y6SnȀBUzK&y+ONZR'C?x^+ԃolů16̶5oj^(]L:ǂoh6+B!~5^Њuы-KU{Yl.~GS٧ܭFKUI?({Li8OC23G_tNd9?u0"_"svΗSy.u]Vэͭ}:9Ab?yzXvok!eanɎ'8+{]7п{QjO^!CN\#iCU!XB3ǁ_SM/ЋmjQdy\EY ݹ})>Y\%}^M'F̃yy;<ԯ؉zAk{2E͠~("f$ LBOTAvc,]~9>xvQQr2S͐p&{_dΣ1!~ĊįdNw86pjI ymKɪ#*:9~IϡέN?Щ%w5"a]MN 68z4VL눵P6x.%u=KVRӍ;5})oF(O{ckJL~/<:7JoL,}goD(Xw~׹ƈ:'7^䮇ޕ&p楅~Sj%Է;[>ր* swkБЃW: ;E@Qq4Wq[N~Q2wȵzzLW#ߛ#KC Jg=j]|~͊_ej}|S$H~/ttY, eJ%Jp닟7.eeZr[먴s^3|9:BuKY/܎Hݙu;+|* dG u|^4?x~ҍ%o_K&OUrq8*~[IxB"wW܃/Y}g +ީ?S&.{cL].m ^`7ډ!?H<%sAAu@bKD+O-;A|2 槷YJ$WXfߠx:/)2McjbN1.u[ĵUO X;B p>"N"-;45YZǧ/>aTMpG M~?4QYoT|>/v8_)S.,s?sRI7 {:9zqlm["o+/O+!WB/ZOǥ_ y7':7K=./pi' zHwA ⯂_Dw2'۟udv kc}5/V>ۧtu^ߞ9,rs7-Yk'*{[d'b*3~<32+e_߰~\H;yK9Iv")e֓{[cj֖EKtcϾ3 t%rIAҧ>l! 
Zwy8tL'o_н׹vl'PnGs8ȈyoPP',}d}Ϭĕ"2#ץy5a>.͎Z4-"l[泚r>`?7kVUٕe3?]x=sp;xM1tv~z\r֫KKe8]sBIΒ֣fη,~'tތΟ2إ2K˸oMAoӟA{1BJr_BwŒ:PN5_&ӫtMce~G)wGqci5t>.vO5⼂ 3 LףּϷE.MHI6e#׈j/-OK)'qI7ү+4>>vKqOn(c^+61fC %gsM]x S2GO󮬗]e4|~hI9 whrD7{~/bOP|{8nGQrsϰ)O8C __2Wz_np2wVڛ\~nQe@4 ӎyk&>d)%PuKF wYriU{IؘE!OA=/_yqEv>ةX-YKUo?nOЩ[ͩ1!,PW6\a%_ sl{Jg0x~KHO7X"-_ΨliM< $O C_ pL]O_NJ]Un~RЏɍ&;6gF_(uE?}5y6vF> \qŭeoZ7 rJpvH`]%>m~羝}: ;p.J 'Κ~nw\ ]h[@bG=O2X箒uĮu>e?=G2?lA-k9?a}5zDi}} Zde(/PСE^3#/2BӖx6&\Jn̥~ҷV[I'z]Gj)7j_ kb0ȗ3Jo7?ҭBZz,;qjB{ 'IU'Ok>:XRhGO%2[e]k][y>.:O#I#7Z.s9rQT 쭏J\99_>%s#SziG]א"_1^ҏ֏o^ ֫>=b?1:<:ZV!-w2?u?4~]=Rj.|LAZ/bqY'8°K$ޤ=5.q~?UGjRqڿMu{U9Zp{qaɗi<O7{SK5#|q̗#cBWW&vdԃQo>^*֗asnĞ};|(}M\$vF>̀$zz<~ϝgF}nGvmЏg>QǍ1ezT0wm5_uVUWqu[bgOdfz"^ >mFiXIJYѾ}cxiQOcttCl׹1/zc29>@.l-z r}F{C^bGI{639RqikW)keok8(QosIlߞⵍ$~|~p&Ǐ8XJzj^BSݑ9 8Wdлg_gx̙~܈ ȑ'YOܒСɟ7-?S2Ph 0t qwI'&s<*GY$vƺ+#Nu&I y'_t>⽂*~YZϩ9Sc\0/͗#~C_Jສu~~{3ר4t&Vz%ðvuJ7ESKO,ۊUZnTyaڏ+Hy2*"*rCՋe[%iGy@^)E(B^_v3W1.}Kleo?ҏֿ?oy'kNߚKPHyAtsYOU/n?4݉߬uvOTyi5y:H󣚧pu?#8_8]"/#zA_(1Ue)vرfΣ5;:?ݮЊ $ꇿ+uĩg:JwNQDNQ݋F>:UlvoD|$ȁא7 Y燆~y<68s|ly*~DC4vc&%8Rj? /&vC7?aa&~̧8NO2,zy(kML_@S \&u0zX|'^bWꮔ.9Yu~G5?hy&%HNIygݼӑkHjwn 5_N`^R?="|s~Y_r+FumT; aK '^ֵyIc` I͗hּ;?vrTϾufG=9ͣz["wt+$#qKG|Sp/㢷I+0 4 U;7*k['b҇^%F?H=O<@ w%yȀy~ӹ+Z/1ҿi)qO]7bk;x&W}B-o?շ'bw P ~y`%q'z7)6~\w}^\Wg(Gj<\yvT׺0Ia n/ȓiT:,sk#8ghRg+Aj #ώJoXg5^{D =H=W?vS߉_~<*QU|%飩[3dY}+}!X:_d~{)rB)h_]A^DℚTI([(`һJqdZ喝~qF8Q|Ή^{৏߅vL29=튼Q#J皧TAoi ]ĎӹA]]@yYSC:\_沽_{(EH{:* .)_y&b_/3r`Y_TOΉ*_B=:΂bƺMrsf;t捼=F[G%Jy8v|/8A7[\'v'fwGCq: y,r[M\13.'*/~\79&zA'}ʚݝ_=7ܒx׻qϫOE*aW^*OH8t.$k˺\-O5H5h|ٌ{O$xO:չ0}5O(yqAi"AMP<^wѼSw qS?ǜC7b~<Ԯ>aFI+簜sG~ҵ4}iz˴[Ar[UqD4NT_?SyuFeC_[n|ą5_&y o_nNz |Th 6SOsAߒQ|X:G{chs^Gď׼5EGwRjЏGb*OJpao ׸U|n _Ȅȧu. fHN]q㢗uZϤY?DH\U' s|4.sTiR&wbKѧCG?Y:q|[[ ;4G_պld~H8&Baߟ}iAӇq^==׭!zIxFP|_"$Ѕm;OO= -JOǗǥ@O]^/k\Gg CEyÐ8K ᴎR3si'qk_ioUSRu\w\y)vGu`ڱg??伵cI1]{!ͅtr^K~'5/{>Q#|~7#{G.Q;7ZZ9<w|1^q8wUk?szG :7_=[OߛBV{ n z|rfkÆ~<[8Ge+7~i\Xk ~LeY_"t!w/tG_+o@_$oHw?yIc8?,:c.+MaT>Sx }MK}C=@$_?mM_w`2D ƚ=u8||RdΔ{LƯ*Dkulήm&|0~vW=GzuB41gZ)3Wt_y_TO޲莺y۵rO[Cmp+y7x~{>sˎo/=|W'󗱯c_>6/G1'џӨbkpG958/ SIgDAows_[|?$tؙ扷!wZd#twh%F=C_e\E־^0foդ^mXw钀XFQ or?S ;Y஍oBS<X?E<DG N9'8[N,;WDsJ/NoKl7N߫tN({N~}U;ɓ/_s\{<Ů Ϸ*zy,x-P!b~~ 㳩o5Uߥ>wOb_}{=vfŧI^VhiC_37%Xos8͊P(~jO^V栘rӗ%oWƷG ')??8$7/?79<'?7F{~k{K,?#nުri+ܪAbd~xUhmV+}:ꂳv76io#ڞO1Y +}/Pmi]zfbs8Z߹!6_eφ=ւ:`J5~7>  z8wKDmw7JW[#7}C_%%|g.gٷ=C_݁}=u (_`OE攼=`^c_UrE4|sSbNKNH%b/~\vkg~ }wwkQ共zKT'4Ǩ~h-ZO^ ۹9ߕ3} +Qc̸|a^VI98p~FwgR>\u Zje?Ϛпh}4/7h1s4hN:zӗ.$]чӞٍu;?T+ o ~:*u͌~`q'WU;kijUSvNm9v{}_gr9o' sޙeB/?|\9K\?Ul*[eߗyICX5iѯ}XYy~"oyy7jѬ|er]o;/B~H"c{fjs]~EZq\PŏV.Fo4ޑrKwwq\쒖HR Grl6T_gϵ}g}* }d[alxɐݙ? }U͊NCk%|'_7lpc'?b|F@"q7_S4P|1)uXU8W?vzz E☡!%P=uk4q_R_-.qQk}<7V_f$>yͪO eۜ+ML9ѳo~wO3M%~:1Besch*5nw*4“$*'}09d^eNư ;_ޕIoHXm:3ʾg:[95z̉CP9a Y6,uQl_kno;'?9^i{~T~vȋއsTlQMwcǮ~l&rd1CY'k#<乽̣WQP3ߣo%~{Sr?nl-H;8_a[MDM2BcR],[ɒ -e˚ɐ%I,]["*d~s3i<>|gys^sܹ{1ð[Ӌs=Oy|GLGJ* ;B?ߕQ'p?o~n Mž=m=}4?y}"uKO5?r(N_/夏?MPzy&,_z>;kj_5a^hwg?xg BH;k"E&BS3bI^]9gbp=hLo!̗p֠3/yؚu&HL/:ݔIl/60Kؾ,} I~cހ{U?Czɒc?dQf9Qy=Q縪Ărq8mʠ/o+ʺv9?ѷYyY2qQإbΥffyQ.'xP-_dKn?] 
W = malloc(BS * BS * BS * sizeof(double)) cache = malloc(BS * BS * BS * sizeof(double)) sigma_block = malloc(BS * BS * BS * sizeof(double)) # (i, j, k) coordinates are the center of the static patch # copy block in cache copy_block_3d(cache, BS, BS, BS, arr, i - B, j - B, k - B) copy_block_3d(sigma_block, BS, BS, BS, sigma, i - B, j - B, k - B) # calculate weights between the central patch and the moving patch in block # (m, n, o) coordinates are the center of the moving patch # (a, b, c) run inside both patches for m in range(P, BS - P): for n in range(P, BS - P): for o in range(P, BS - P): summ = 0 sigm = 0 # calculate square distance for a in range(-P, P + 1): for b in range(-P, P + 1): for c in range(-P, P + 1): # this line takes most of the time!
mem access d = cache[(B + a) * BS * BS + (B + b) * BS + (B + c)] - \ cache[(m + a) * BS * BS + (n + b) * BS + (o + c)] summ += d * d sigm += sigma_block[(m + a) * BS * BS + (n + b) * BS + (o + c)] denom = sqrt(2) * (sigm / patch_vol_size)**2 w = exp(-(summ / patch_vol_size) / denom) sumw += w W[cnt] = w cnt += 1 cnt = 0 sum_out = 0 # calculate normalized weights and sums of the weights with the positions # of the patches for m in range(P, BS - P): for n in range(P, BS - P): for o in range(P, BS - P): if sumw > 0: w = W[cnt] / sumw else: w = 0 x = cache[m * BS * BS + n * BS + o] sum_out += w * x * x cnt += 1 free(W) free(cache) free(sigma_block) return sum_out def add_padding_reflection(double[:, :, ::1] arr, padding): cdef: double[:, :, ::1] final cnp.npy_intp i, j, k cnp.npy_intp B = padding cnp.npy_intp[::1] indices_i = correspond_indices(arr.shape[0], padding) cnp.npy_intp[::1] indices_j = correspond_indices(arr.shape[1], padding) cnp.npy_intp[::1] indices_k = correspond_indices(arr.shape[2], padding) final = np.zeros( np.array( (arr.shape[0], arr.shape[1], arr.shape[2])) + 2 * padding) for i in range(final.shape[0]): for j in range(final.shape[1]): for k in range(final.shape[2]): final[i, j, k] = arr[indices_i[i], indices_j[j], indices_k[k]] return final def correspond_indices(dim_size, padding): return np.ascontiguousarray(np.hstack((np.arange(1, padding + 1)[::-1], np.arange(dim_size), np.arange(dim_size - padding - 1, dim_size - 1)[::-1])), dtype=np.intp) def remove_padding(arr, padding): shape = arr.shape return arr[padding:shape[0] - padding, padding:shape[1] - padding, padding:shape[2] - padding] @cython.wraparound(False) @cython.boundscheck(False) cdef cnp.npy_intp copy_block_3d(double * dest, cnp.npy_intp I, cnp.npy_intp J, cnp.npy_intp K, double[:, :, ::1] source, cnp.npy_intp min_i, cnp.npy_intp min_j, cnp.npy_intp min_k) nogil: cdef cnp.npy_intp i, j for i in range(I): for j in range(J): memcpy(&dest[i * J * K + j * K], &source[i + min_i, j + min_j, min_k], K * sizeof(double)) return 1 dipy-1.11.0/dipy/denoise/enhancement_kernel.pyx000066400000000000000000000320651476546756600215610ustar00rootroot00000000000000import numpy as np cimport numpy as cnp cimport cython import os.path import logging from dipy.data import get_sphere from dipy.core.sphere import disperse_charges, Sphere, HemiSphere from tempfile import gettempdir from libc.math cimport sqrt, exp, fabs, cos, sin, tan, acos, atan2 from math import ceil logger = logging.getLogger(__name__) cdef class EnhancementKernel: cdef double D33 cdef double D44 cdef double t cdef int kernelsize cdef double kernelmax cdef double [:, :] orientations_list cdef double [:, :, :, :, :] lookuptable cdef object sphere def __init__(self, D33, D44, t, force_recompute=False, orientations=None, verbose=True): """ Compute a look-up table for the contextual enhancement kernel See :footcite:p:`Meesters2016a`, :footcite:p:`Duits2011`, :footcite:p:`Portegies2015a` and :footcite:p:`Portegies2015b` for further details about the method. Parameters ---------- D33 : float Spatial diffusion D44 : float Angular diffusion t : float Diffusion time force_recompute : boolean, optional Always compute the look-up table even if it is available in cache. orientations : integer or Sphere object, optional Specify the number of orientations to be used with electrostatic repulsion, or provide a Sphere object. The default sphere is 'repulsion100'. verbose : boolean, optional Enable verbose mode. References ---------- .. 
footbibliography:: """ # save parameters as class members self.D33 = D33 self.D44 = D44 self.t = t # define a sphere rng = np.random.default_rng() if isinstance(orientations, Sphere): # use the sphere defined by the user sphere = orientations elif isinstance(orientations, (int, float)): # electrostatic repulsion based on number of orientations n_pts = int(orientations) if n_pts == 0: sphere = None else: theta = np.pi * rng.random(n_pts) phi = 2 * np.pi * rng.random(n_pts) hsph_initial = HemiSphere(theta=theta, phi=phi) sphere, potential = disperse_charges(hsph_initial, 5000) else: # use default sphere = get_sphere(name="repulsion100") if sphere is not None: self.orientations_list = sphere.vertices self.sphere = sphere else: self.orientations_list = np.zeros((0,0)) self.sphere = None # file location of the lut table for saving/loading kernellutpath = os.path.join(gettempdir(), "kernel_d33@%4.2f_d44@%4.2f_t@%4.2f_numverts%d.npy" \ % (D33, D44, t, len(self.orientations_list))) # if LUT exists, load if not force_recompute and os.path.isfile(kernellutpath): if verbose: logger.info("The kernel already exists. Loading from " + kernellutpath) self.lookuptable = np.load(kernellutpath) # else, create else: if verbose: logger.info("The kernel doesn't exist yet. Computing...") self.create_lookup_table(verbose) if self.sphere is not None: np.save(kernellutpath, self.lookuptable) def get_lookup_table(self): """ Return the computed look-up table. """ return self.lookuptable def get_orientations(self): """ Return the orientations. """ return self.orientations_list def get_sphere(self): """ Get the sphere corresponding with the orientations """ return self.sphere def evaluate_kernel(self, x, y, r, v): """ Evaluate the kernel at position x relative to position y, with orientation r relative to orientation v. Parameters ---------- x : 1D ndarray Position x y : 1D ndarray Position y r : 1D ndarray Orientation r v : 1D ndarray Orientation v Returns ------- kernel_value : double """ return self.k2(x, y, r, v) @cython.wraparound(False) @cython.boundscheck(False) @cython.nonecheck(False) @cython.cdivision(True) cdef void create_lookup_table(self, verbose=True): """ Compute the look-up table based on the parameters set during class initialization Parameters ---------- verbose : boolean Enable verbose mode. """ self.estimate_kernel_size(verbose) cdef: double [:, :] orientations = np.copy(self.orientations_list) cnp.npy_intp OR1 = orientations.shape[0] cnp.npy_intp OR2 = orientations.shape[0] cnp.npy_intp N = self.kernelsize cnp.npy_intp hn = (N-1)/2 cnp.npy_intp angv, angr, xp, yp, zp double [:] x double [:] y cdef double [:, :, :, :, :] lookuptablelocal double kmax = self.kernelmax double l1norm double kernelval lookuptablelocal = np.zeros((OR1, OR2, N, N, N)) x = np.zeros(3) y = np.zeros(3) # constant at (0,0,0) with nogil: for angv in range(OR1): for angr in range(OR2): for xp in range(-hn, hn + 1): for yp in range(-hn, hn + 1): for zp in range(-hn, hn + 1): x[0] = xp x[1] = yp x[2] = zp lookuptablelocal[angv, angr, xp + hn, yp + hn, zp + hn] = self.k2(x, y, orientations[angr,:], orientations[angv,:]) # save to class member self.lookuptable = lookuptablelocal @cython.wraparound(False) @cython.boundscheck(False) @cython.nonecheck(False) @cython.cdivision(True) cdef void estimate_kernel_size(self, verbose=True): """ Estimates the dimensions the kernel should have based on the kernel parameters. Parameters ---------- verbose : boolean Enable verbose mode. 
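Notes ----- In outline: the kernel is evaluated along the z-axis in steps of 0.1 until its value drops below 10% of the maximum at the origin; that extent is then doubled and, if even, reduced by one, so the kernel size is an odd integer with a well-defined center voxel.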
""" cdef: double [:] x double [:] y double [:] r double [:] v double i x = np.array([0., 0., 0.]) y = np.array([0., 0., 0.]) r = np.array([0., 0., 1.]) v = np.array([0., 0., 1.]) # evaluate at origin self.kernelmax = self.k2(x, y, r, v) with nogil: # determine a good kernel size i = 0.0 while True: i += 0.1 x[2] = i kval = self.k2(x, y, r, v) / self.kernelmax if kval < 0.1: break N = ceil(i) * 2 if N % 2 == 0: N -= 1 if verbose: logger.info("Dimensions of kernel: %dx%dx%d" % (N, N, N)) self.kernelsize = N @cython.wraparound(False) @cython.boundscheck(False) @cython.nonecheck(False) cdef double k2(self, double [:] x, double [:] y, double [:] r, double [:] v) noexcept nogil: """ Evaluate the kernel at position x relative to position y, with orientation r relative to orientation v. Parameters ---------- x : 1D ndarray Position x y : 1D ndarray Position y r : 1D ndarray Orientation r v : 1D ndarray Orientation v Returns ------- kernel_value : double """ cdef: double [:] a double [:,:] transm double [:] arg1 double [:] arg2p double [:] arg2 double [:] c double kernelval with gil: a = np.subtract(x, y) transm = np.transpose(R(euler_angles(v))) arg1 = np.dot(transm, a) arg2p = np.dot(transm, r) arg2 = euler_angles(arg2p) c = self.coordinate_map(arg1[0], arg1[1], arg1[2], arg2[0], arg2[1]) kernelval = self.kernel(c) return kernelval @cython.wraparound(False) @cython.boundscheck(False) @cython.nonecheck(False) @cython.cdivision(True) cdef double [:] coordinate_map(self, double x, double y, double z, double beta, double gamma) noexcept nogil: """ Compute a coordinate map for the kernel Parameters ---------- x : double X position y : double Y position z : double Z position beta : double First Euler angle gamma : double Second Euler angle Returns ------- c : 1D ndarray array of coordinates for kernel """ cdef: double [:] c double q double cg double cotq2 with gil: c = np.zeros(6) if beta == 0: c[0] = x c[1] = y c[2] = z c[3] = c[4] = c[5] = 0 else: q = fabs(beta) cg = cos(gamma) sg = sin(gamma) cotq2 = 1.0 / tan(q/2) c[0] = -0.5*z*beta*cg + \ x*(1 - (beta*beta*cg*cg * (1 - 0.5*q*cotq2)) / (q*q)) - \ (y*beta*beta*cg*sg * (1 - 0.5*q*cotq2)) / (q*q) c[1] = -0.5*z*beta*sg - \ (x*beta*beta*cg*sg * (1 - 0.5*q*cotq2)) / (q*q) + \ y * (1 - (beta*beta*sg*sg * (1 - 0.5*q*cotq2)) / (q*q)) c[2] = 0.5*x*beta*cg + 0.5*y*beta*sg + \ z * (1 + ((1 - 0.5*q*cotq2) * (-beta*beta*cg*cg - \ beta*beta*sg*sg)) / (q*q)) c[3] = beta * (-sg) c[4] = beta * cg c[5] = 0 return c @cython.wraparound(False) @cython.boundscheck(False) @cython.nonecheck(False) @cython.cdivision(True) cdef double kernel(self, double [:] c) noexcept nogil: """ Internal function, evaluates the kernel based on the coordinate map. 
Parameters ---------- c : 1D ndarray array of coordinates for kernel Returns ------- kernel_value : double """ cdef double output = 1 / (8*sqrt(2)) output *= sqrt(PI)*self.t*sqrt(self.t*self.D33)*sqrt(self.D33*self.D44) output *= 1 / (16*PI*PI*self.D33*self.D33*self.D44*self.D44*self.t*self.t*self.t*self.t) output *= exp(-sqrt((c[0]*c[0] + c[1]*c[1]) / (self.D33*self.D44) + \ (c[2]*c[2] / self.D33 + (c[3]*c[3]+c[4]*c[4]) / self.D44) * \ (c[2]*c[2] / self.D33 + (c[3]*c[3]+c[4]*c[4]) / self.D44) + \ c[5]*c[5]/self.D44) / (4*self.t)) return output cdef double PI = 3.1415926535897932 @cython.wraparound(False) @cython.boundscheck(False) cdef double [:] euler_angles(double [:] inp) noexcept nogil: """ Compute the Euler angles for a given input vector Parameters ---------- inp : 1D ndarray Input vector Returns ------- euler_angles : 1D ndarray """ cdef: double x double y double z double [:] output x = inp[0] y = inp[1] z = inp[2] with gil: output = np.zeros(3) # handle the case (0,0,1) if x*x < 10e-6 and y*y < 10e-6 and (z-1) * (z-1) < 10e-6: output[0] = 0 output[1] = 0 # handle the case (0,0,-1) elif x*x < 10e-6 and y*y < 10e-6 and (z+1) * (z+1) < 10e-6: output[0] = PI output[1] = 0 # all other cases else: output[0] = acos(z) output[1] = atan2(y, x) return output @cython.wraparound(False) @cython.boundscheck(False) cdef double [:,:] R(double [:] inp) noexcept nogil: """ Compute the rotation matrix for a given input vector Parameters ---------- inp : 1D ndarray Input vector Returns ------- rotation_matrix : 2D ndarray """ cdef: double beta double gamma double [:] output double cb double sb double cg double sg beta = inp[0] gamma = inp[1] with gil: output = np.zeros(9) cb = cos(beta) sb = sin(beta) cg = cos(gamma) sg = sin(gamma) output[0] = cb * cg output[1] = -sg output[2] = cg * sb output[3] = cb * sg output[4] = cg output[5] = sb * sg output[6] = -sb output[7] = 0 output[8] = cb with gil: return np.reshape(output, (3,3)) dipy-1.11.0/dipy/denoise/gibbs.py000066400000000000000000000246751476546756600166360ustar00rootroot00000000000000from functools import partial import multiprocessing as mp import numpy as np import scipy.fft from dipy.testing.decorators import warning_for_keywords from dipy.utils.multiproc import determine_num_processes _fft = scipy.fft @warning_for_keywords() def _image_tv(x, *, axis=0, n_points=3): """Computes total variation (TV) of matrix x across a given axis and along two directions. Parameters ---------- x : 2D ndarray matrix x axis : int (0 or 1) Axis along which TV will be calculated. Default is set to 0. n_points : int Number of points to be included in TV calculation. Returns ------- ptv : 2D ndarray Total variation calculated from the right neighbours of each point. ntv : 2D ndarray Total variation calculated from the left neighbours of each point.
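Examples -------- A minimal doctest-style sketch for this private helper (shapes chosen arbitrarily for illustration): >>> import numpy as np >>> x = np.arange(16, dtype=float).reshape(4, 4) >>> ptv, ntv = _image_tv(x, axis=0, n_points=1) >>> ptv.shape == ntv.shape == (4, 4) True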
""" xs = x.copy() if axis else x.T.copy() # Add copies of the data so that data extreme points are also analysed xs = np.concatenate( (xs[:, (-n_points - 1) :], xs, xs[:, 0 : (n_points + 1)]), axis=1 ) ptv = np.absolute( xs[:, (n_points + 1) : (-n_points - 1)] - xs[:, (n_points + 2) : (-n_points)] ) ntv = np.absolute( xs[:, (n_points + 1) : (-n_points - 1)] - xs[:, n_points : (-n_points - 2)] ) for n in range(1, n_points): ptv = ptv + np.absolute( xs[:, (n_points + 1 + n) : (-n_points - 1 + n)] - xs[:, (n_points + 2 + n) : (-n_points + n)] ) ntv = ntv + np.absolute( xs[:, (n_points + 1 - n) : (-n_points - 1 - n)] - xs[:, (n_points - n) : (-n_points - 2 - n)] ) if axis: return ptv, ntv else: return ptv.T, ntv.T @warning_for_keywords() def _gibbs_removal_1d(x, *, axis=0, n_points=3): """Suppresses Gibbs ringing along a given axis using fourier sub-shifts. Parameters ---------- x : 2D ndarray Matrix x. axis : int (0 or 1) Axis in which Gibbs oscillations will be suppressed. Default is set to 0. n_points : int, optional Number of neighbours to access local TV (see note). Default is set to 3. Returns ------- xc : 2D ndarray Matrix with suppressed Gibbs oscillations along the given axis. Notes ----- This function suppresses the effects of Gibbs oscillations based on the analysis of local total variation (TV). Although artefact correction is done based on two adjacent points for each voxel, total variation should be accessed in a larger range of neighbours. The number of neighbours to be considered in TV calculation can be adjusted using the parameter n_points. """ dtype_float = np.promote_types(x.real.dtype, np.float32) ssamp = np.linspace(0.02, 0.9, num=45, dtype=dtype_float) xs = x.copy() if axis else x.T.copy() # TV for shift zero (baseline) tvr, tvl = _image_tv(xs, axis=1, n_points=n_points) tvp = np.minimum(tvr, tvl) tvn = tvp.copy() # Find optimal shift for gibbs removal isp = xs.copy() isn = xs.copy() sp = np.zeros(xs.shape, dtype=dtype_float) sn = np.zeros(xs.shape, dtype=dtype_float) N = xs.shape[1] c = _fft.fft(xs, axis=1) k = _fft.fftfreq(N, 1 / (2.0j * np.pi)) k = k.astype(c.dtype, copy=False) for s in ssamp: ks = k * s # Access positive shift for given s img_p = abs(_fft.ifft(c * np.exp(ks), axis=1)) tvsr, tvsl = _image_tv(img_p, axis=1, n_points=n_points) tvs_p = np.minimum(tvsr, tvsl) # Access negative shift for given s img_n = abs(_fft.ifft(c * np.exp(-ks), axis=1)) tvsr, tvsl = _image_tv(img_n, axis=1, n_points=n_points) tvs_n = np.minimum(tvsr, tvsl) # Update positive shift params isp[tvp > tvs_p] = img_p[tvp > tvs_p] sp[tvp > tvs_p] = s tvp[tvp > tvs_p] = tvs_p[tvp > tvs_p] # Update negative shift params isn[tvn > tvs_n] = img_n[tvn > tvs_n] sn[tvn > tvs_n] = s tvn[tvn > tvs_n] = tvs_n[tvn > tvs_n] # check non-zero sub-voxel shifts idx = np.nonzero(sp + sn) # use positive and negative optimal sub-voxel shifts to interpolate to # original grid points xs[idx] = (isp[idx] - isn[idx]) / (sp[idx] + sn[idx]) * sn[idx] + isn[idx] return xs if axis else xs.T def _weights(shape): """Computes the weights necessary to combine two images processed by the 1D Gibbs removal procedure along two different axes. See :footcite:p:`Kellner2016` for further details about the method. Parameters ---------- shape : tuple shape of the image. Returns ------- G0 : 2D ndarray Weights for the image corrected along axis 0. G1 : 2D ndarray Weights for the image corrected along axis 1. References ---------- .. 
footbibliography:: """ G0 = np.zeros(shape) G1 = np.zeros(shape) k0 = np.linspace(-np.pi, np.pi, num=shape[0]) k1 = np.linspace(-np.pi, np.pi, num=shape[1]) # Middle points K1, K0 = np.meshgrid(k1[1:-1], k0[1:-1]) cosk0 = 1.0 + np.cos(K0) cosk1 = 1.0 + np.cos(K1) G1[1:-1, 1:-1] = cosk0 / (cosk0 + cosk1) G0[1:-1, 1:-1] = cosk1 / (cosk0 + cosk1) # Boundaries G1[1:-1, 0] = G1[1:-1, -1] = 1 G1[0, 0] = G1[-1, -1] = G1[0, -1] = G1[-1, 0] = 1 / 2 G0[0, 1:-1] = G0[-1, 1:-1] = 1 G0[0, 0] = G0[-1, -1] = G0[0, -1] = G0[-1, 0] = 1 / 2 return G0, G1 @warning_for_keywords() def _gibbs_removal_2d(image, *, n_points=3, G0=None, G1=None): """Suppress Gibbs ringing of a 2D image :footcite:p:`Kellner2016`, :footcite:p:`NetoHenriques2018`. Parameters ---------- image : 2D ndarray Matrix containing the 2D image. n_points : int, optional Number of neighbours to access local TV (see note). Default is set to 3. G0 : 2D ndarray, optional Weights for the image corrected along axis 0. If not given, the function estimates them using the function :func:`_weights`. G1 : 2D ndarray, optional Weights for the image corrected along axis 1. If not given, the function estimates them using the function :func:`_weights`. Returns ------- imagec : 2D ndarray Matrix with Gibbs oscillations reduced along both axes. Notes ----- This function suppresses the effects of Gibbs oscillations based on the analysis of local total variation (TV). Although artefact correction is done based on two adjacent points for each voxel, total variation should be accessed in a larger range of neighbours. The number of neighbours to be considered in TV calculation can be adjusted using the parameter n_points. References ---------- .. footbibliography:: """ if G0 is None or G1 is None: G0, G1 = _weights(image.shape) img_c1 = _gibbs_removal_1d(image, axis=1, n_points=n_points) img_c0 = _gibbs_removal_1d(image, axis=0, n_points=n_points) C1 = _fft.fft2(img_c1) C0 = _fft.fft2(img_c0) imagec = abs(_fft.ifft2(_fft.fftshift(C1) * G1 + _fft.fftshift(C0) * G0)) return imagec @warning_for_keywords() def gibbs_removal(vol, *, slice_axis=2, n_points=3, inplace=True, num_processes=1): """Suppresses Gibbs ringing artefacts of image volumes. See :footcite:p:`Kellner2016` and :footcite:p:`NetoHenriques2018` for further details about the method. Parameters ---------- vol : ndarray ([X, Y]), ([X, Y, Z]) or ([X, Y, Z, g]) Matrix containing one volume (3D) or multiple (4D) volumes of images. slice_axis : int (0, 1, or 2) Data axis corresponding to the number of acquired slices. n_points : int, optional Number of neighbour points to access local TV (see note). inplace : bool, optional If True, the input data is replaced with results. Otherwise, returns a new array. num_processes : int or None, optional Split the calculation to a pool of child processes. This only applies to 3D or 4D `data` arrays. Default is 1. If < 0 the maximal number of cores minus ``num_processes + 1`` is used (enter -1 to use as many cores as possible). 0 raises an error. Returns ------- vol : ndarray ([X, Y]), ([X, Y, Z]) or ([X, Y, Z, g]) Matrix containing one volume (3D) or multiple (4D) volumes of corrected images. Notes ----- For a 4D matrix, the last dimension should always correspond to the number of diffusion gradient directions. References ---------- ..
footbibliography:: """ nd = vol.ndim # check matrix dimension if nd > 4: raise ValueError("Data has to be a 4D, 3D or 2D matrix") elif nd < 2: raise ValueError("Data is not an image") if not isinstance(inplace, bool): raise TypeError("inplace must be a boolean.") num_processes = determine_num_processes(num_processes) # check the axis corresponding to different slices # 1) This axis cannot be larger than 2 if slice_axis > 2: raise ValueError( "Different slices have to be organized along " "one of the first 3 matrix dimensions" ) # 2) Reorder axis to allow iteration over the first axis elif nd == 3: vol = np.moveaxis(vol, slice_axis, 0) elif nd == 4: vol = np.moveaxis(vol, (slice_axis, 3), (0, 1)) if nd == 4: inishap = vol.shape vol = vol.reshape((inishap[0] * inishap[1], inishap[2], inishap[3])) # Produce weighting functions for 2D Gibbs removal shap = vol.shape G0, G1 = _weights(shap[-2:]) # Copy data if not inplace if not inplace: vol = vol.copy() # Run Gibbs removal of 2D images if nd == 2: vol[:, :] = _gibbs_removal_2d(vol, n_points=n_points, G0=G0, G1=G1) else: if num_processes == 1: for i in range(shap[0]): vol[i, :, :] = _gibbs_removal_2d( vol[i, :, :], n_points=n_points, G0=G0, G1=G1 ) else: mp.set_start_method("spawn", force=True) pool = mp.Pool(num_processes) partial_func = partial(_gibbs_removal_2d, n_points=n_points, G0=G0, G1=G1) vol[:, :, :] = pool.map(partial_func, vol) pool.close() pool.join() # Reshape data to original format if nd == 3: vol = np.moveaxis(vol, 0, slice_axis) if nd == 4: vol = vol.reshape(inishap) vol = np.moveaxis(vol, (0, 1), (slice_axis, 3)) return vol dipy-1.11.0/dipy/denoise/localpca.py000066400000000000000000000442651476546756600173260ustar00rootroot00000000000000import copy from warnings import warn import numpy as np from scipy.linalg import eigh from scipy.linalg.lapack import dgesvd as svd from dipy.denoise.pca_noise_estimate import pca_noise_estimate from dipy.testing.decorators import warning_for_keywords def dimensionality_problem_message(arr, num_samples, spr): """Message about the number of samples minus one being smaller than the dimensionality of the data to be denoised. Parameters ---------- arr : ndarray Data to be denoised. num_samples : int Number of samples. spr : int Suggested patch radius. Returns ------- str Message. """ return ( f"Number of samples {num_samples} - 1 < Dimensionality " f"{arr.shape[-1]}. This might have a performance impact. " f"Increase patch_radius to {spr} to avoid this." ) def _pca_classifier(L, nvoxels): """Classifies which PCA eigenvalues are related to noise and estimates the noise variance Parameters ---------- L : array (n,) Array containing the PCA eigenvalues in ascending order. nvoxels : int Number of voxels used to compute L Returns ------- var : float Estimation of the noise variance ncomps : int Number of eigenvalues related to noise Notes ----- This is based on the algorithm described in :footcite:p:`Veraart2016c`. References ---------- .. footbibliography:: """ # if num_samples - 1 (to correct for mean subtraction) is less than number # of features, discard the zero eigenvalues if L.size > nvoxels - 1: L = L[-(nvoxels - 1) :] # Note that the condition expressed in the while-loop is expressed in terms # of the variance of equation (12), not equation (11) as in # :footcite:p:`Veraart2016c`. Also, this code implements ascending # eigenvalues, unlike :footcite:p:`Veraart2016c`.
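# In outline: start with all eigenvalues as the noise candidates, then # shrink the candidate set while its spread L[c] - L[0] exceeds the # Marcenko-Pastur edge width 4 * sqrt((c + 1) / nvoxels) * var, where var # is the running mean of the candidate eigenvalues; the ncomps = c + 1 # smallest eigenvalues are then classified as noise.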
var = np.mean(L) c = L.size - 1 r = L[c] - L[0] - 4 * np.sqrt((c + 1.0) / nvoxels) * var while r > 0: var = np.mean(L[:c]) c = c - 1 r = L[c] - L[0] - 4 * np.sqrt((c + 1.0) / nvoxels) * var ncomps = c + 1 return var, ncomps def create_patch_radius_arr(arr, pr): """Create the patch radius array from the data to be denoised and the patch radius. Parameters ---------- arr : ndarray Data to be denoised. pr : int or ndarray Patch radius. Returns ------- patch_radius : ndarray Patch radius array. """ patch_radius = copy.deepcopy(pr) if isinstance(patch_radius, int): patch_radius = np.ones(3, dtype=int) * patch_radius if len(patch_radius) != 3: raise ValueError("patch_radius should have length 3") else: patch_radius = np.asarray(patch_radius).astype(int) patch_radius[arr.shape[0:3] == np.ones(3)] = 0 # account for dim of size 1 return patch_radius def compute_patch_size(patch_radius): """Compute patch size from the patch radius. It is twice the radius plus one. Parameters ---------- patch_radius : ndarray Patch radius. Returns ------- ndarray Patch size. """ return 2 * patch_radius + 1 def compute_num_samples(patch_size): """Compute the number of samples as the product of the patch size elements. Parameters ---------- patch_size : ndarray Patch size. Returns ------- int Number of samples. """ return np.prod(patch_size) def compute_suggested_patch_radius(arr, patch_size): """Compute the suggested patch radius. Parameters ---------- arr : ndarray Data to be denoised. patch_size : ndarray Patch size. Returns ------- int Suggested patch radius. """ tmp = np.sum(patch_size == 1) # count spatial dimensions with size 1 if tmp == 0: root = np.ceil(arr.shape[-1] ** (1.0 / 3)) # 3D if tmp == 1: root = np.ceil(arr.shape[-1] ** (1.0 / 2)) # 2D if tmp == 2: root = arr.shape[-1] # 1D root = root + 1 if (root % 2) == 0 else root # make odd return int((root - 1) / 2) @warning_for_keywords() def genpca( arr, *, sigma=None, mask=None, patch_radius=2, pca_method="eig", tau_factor=None, return_sigma=False, out_dtype=None, suppress_warning=False, ): r"""General function to perform PCA-based denoising of diffusion datasets. Parameters ---------- arr : 4D array Array of data to be denoised. The dimensions are (X, Y, Z, N), where N are the diffusion gradient directions. The first 3 dimensions must have size >= 2 * patch_radius + 1 or size = 1. sigma : float or 3D array, optional Standard deviation of the noise estimated from the data. If no sigma is given, this will be estimated based on random matrix theory :footcite:p:`Veraart2016b`, :footcite:p:`Veraart2016c`. mask : 3D boolean array, optional A mask with voxels that are true inside the brain and false outside of it. The function denoises within the true part and returns zeros outside of those voxels. patch_radius : int or 1D array, optional The radius of the local patch to be taken around each voxel (in voxels). E.g. patch_radius=2 gives 5x5x5 patches. pca_method : 'eig' or 'svd', optional Use either eigenvalue decomposition (eig) or singular value decomposition (svd) for principal component analysis. The default method is 'eig' which is faster. However, occasionally 'svd' might be more accurate. tau_factor : float, optional Thresholding of PCA eigenvalues is done by nulling out eigenvalues that are smaller than: .. math:: \tau = (\tau_{factor} \sigma)^2 $\tau_{factor}$ can be set to a predefined value (e.g. $\tau_{factor} = 2.3$ :footcite:p:`Manjon2013`), or automatically calculated using random matrix theory (in case that $\tau_{factor}$ is set to None).
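For example, with $\sigma = 2$ and $\tau_{factor} = 2.3$ the threshold is $\tau = (2.3 \times 2)^2 = 21.16$, and eigenvalues below it are nulled.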
return_sigma : bool, optional If true, the standard deviation of the noise will be returned. out_dtype : str or dtype, optional The dtype for the output array. Default: output has the same dtype as the input. suppress_warning : bool, optional If true, suppress warning caused by patch_size < arr.shape[-1]. Returns ------- denoised_arr : 4D array This is the denoised array of the same size as that of the input data, clipped to non-negative values. References ---------- .. footbibliography:: """ if mask is None: # If mask is not specified, use the whole volume mask = np.ones_like(arr, dtype=bool)[..., 0] if out_dtype is None: out_dtype = arr.dtype # We retain float64 precision, iff the input is in this precision: if arr.dtype == np.float64: calc_dtype = np.float64 # Otherwise, we'll calculate things in float32 (saving memory) else: calc_dtype = np.float32 if not arr.ndim == 4: raise ValueError("PCA denoising can only be performed on 4D arrays.", arr.shape) if pca_method.lower() == "svd": is_svd = True elif pca_method.lower() == "eig": is_svd = False else: raise ValueError("pca_method should be either 'eig' or 'svd'") patch_radius_arr = create_patch_radius_arr(arr, patch_radius) patch_size = compute_patch_size(patch_radius_arr) ash = arr.shape[0:3] if np.any((ash != np.ones(3)) * (ash < patch_size)): raise ValueError("Array 'arr' has incorrect shape") num_samples = compute_num_samples(patch_size) if num_samples == 1: raise ValueError("Cannot have only 1 sample, please increase patch_radius.") # account for mean subtraction by testing #samples - 1 if (num_samples - 1) < arr.shape[-1] and not suppress_warning: spr = compute_suggested_patch_radius(arr, patch_size) warn( dimensionality_problem_message(arr, num_samples, spr), UserWarning, stacklevel=2, ) if isinstance(sigma, np.ndarray): var = sigma**2 if not sigma.shape == arr.shape[:-1]: e_s = "You provided a sigma array with a shape " e_s += f"{sigma.shape} for data with " e_s += f"shape {arr.shape}. Please provide a sigma array" e_s += " that matches the spatial dimensions of the data."
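# e.g. for arr.shape == (90, 90, 60, 64), sigma must have shape (90, 90, 60)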
raise ValueError(e_s) elif isinstance(sigma, (int, float)): var = sigma**2 * np.ones(arr.shape[:-1]) dim = arr.shape[-1] if tau_factor is None: tau_factor = 1 + np.sqrt(dim / num_samples) theta = np.zeros(arr.shape, dtype=calc_dtype) thetax = np.zeros(arr.shape, dtype=calc_dtype) if return_sigma is True and sigma is None: var = np.zeros(arr.shape[:-1], dtype=calc_dtype) thetavar = np.zeros(arr.shape[:-1], dtype=calc_dtype) # loop around and find the 3D patch for each direction at each pixel for k in range(patch_radius_arr[2], arr.shape[2] - patch_radius_arr[2]): for j in range(patch_radius_arr[1], arr.shape[1] - patch_radius_arr[1]): for i in range(patch_radius_arr[0], arr.shape[0] - patch_radius_arr[0]): # Shorthand for indexing variables: if not mask[i, j, k]: continue ix1 = i - patch_radius_arr[0] ix2 = i + patch_radius_arr[0] + 1 jx1 = j - patch_radius_arr[1] jx2 = j + patch_radius_arr[1] + 1 kx1 = k - patch_radius_arr[2] kx2 = k + patch_radius_arr[2] + 1 X = arr[ix1:ix2, jx1:jx2, kx1:kx2].reshape(num_samples, dim) # compute the mean M = np.mean(X, axis=0) # Upcast the dtype for precision in the SVD X = X - M if is_svd: # PCA using an SVD svd_args = [1, 0] U, S, Vt = svd(X, *svd_args)[:3] # Items in S are the singular values, in descending order # We invert the order (=> ascending), square and normalize: # \lambda_i = s_i^2 / n d = S[::-1] ** 2 / X.shape[0] # Rows of Vt are the corresponding eigenvectors; invert the # order to match the ascending eigenvalues: W = Vt[::-1].T else: # PCA using an Eigenvalue decomposition C = np.transpose(X).dot(X) C = C / X.shape[0] [d, W] = eigh(C) if sigma is None: # Random matrix theory this_var, _ = _pca_classifier(d, num_samples) else: # Predefined variance this_var = var[i, j, k] # Threshold by tau: tau = tau_factor**2 * this_var # Update ncomps according to tau_factor ncomps = np.sum(d < tau) W[:, :ncomps] = 0 # This is equations 1 and 2 in Manjon 2013: Xest = X.dot(W).dot(W.T) + M Xest = Xest.reshape(patch_size[0], patch_size[1], patch_size[2], dim) # This is equation 3 in Manjon 2013: this_theta = 1.0 / (1.0 + dim - ncomps) theta[ix1:ix2, jx1:jx2, kx1:kx2] += this_theta thetax[ix1:ix2, jx1:jx2, kx1:kx2] += Xest * this_theta if return_sigma is True and sigma is None: var[ix1:ix2, jx1:jx2, kx1:kx2] += this_var * this_theta thetavar[ix1:ix2, jx1:jx2, kx1:kx2] += this_theta denoised_arr = thetax / theta denoised_arr.clip(min=0, out=denoised_arr) denoised_arr[mask == 0] = 0 if return_sigma is True: if sigma is None: var = var / thetavar var[mask == 0] = 0 return denoised_arr.astype(out_dtype), np.sqrt(var) else: return denoised_arr.astype(out_dtype), sigma else: return denoised_arr.astype(out_dtype) @warning_for_keywords() def localpca( arr, *, sigma=None, mask=None, patch_radius=2, gtab=None, patch_radius_sigma=1, pca_method="eig", tau_factor=2.3, return_sigma=False, correct_bias=True, out_dtype=None, suppress_warning=False, ): r"""Performs local PCA denoising. The method follows the algorithm in :footcite:t:`Manjon2013`. Parameters ---------- arr : 4D array Array of data to be denoised. The dimensions are (X, Y, Z, N), where N are the diffusion gradient directions. sigma : float or 3D array, optional Standard deviation of the noise estimated from the data. If not given, calculate using method in :footcite:t:`Manjon2013`. mask : 3D boolean array, optional A mask with voxels that are true inside the brain and false outside of it. The function denoises within the true part and returns zeros outside of those voxels.
patch_radius : int or 1D array, optional The radius of the local patch to be taken around each voxel (in voxels). E.g. patch_radius=2 gives 5x5x5 patches. gtab : gradient table object (optional if sigma is provided) Gradient information for the data; provides the bvals and bvecs of the diffusion data, which are needed to calculate the noise level if sigma is not provided. patch_radius_sigma : int, optional The radius of the local patch to be taken around each voxel (in voxels) for estimating sigma. E.g. patch_radius_sigma=2 gives 5x5x5 patches. pca_method : 'eig' or 'svd', optional Use either eigenvalue decomposition (eig) or singular value decomposition (svd) for principal component analysis. The default method is 'eig' which is faster. However, occasionally 'svd' might be more accurate. tau_factor : float, optional Thresholding of PCA eigenvalues is done by nulling out eigenvalues that are smaller than: .. math:: \tau = (\tau_{factor} \sigma)^2 \tau_{factor} can be changed to adjust the relationship between the noise standard deviation and the threshold \tau. If \tau_{factor} is set to None, it will be automatically calculated using the Marcenko-Pastur distribution :footcite:p:`Veraart2016c`. Default: 2.3 according to :footcite:t:`Manjon2013`. return_sigma : bool, optional If true, a noise standard deviation estimate based on the Marcenko-Pastur distribution is returned :footcite:p:`Veraart2016c`. correct_bias : bool, optional Whether to correct for bias due to Rician noise. This is an implementation of equation 8 in :footcite:p:`Manjon2013`. out_dtype : str or dtype, optional The dtype for the output array. Default: output has the same dtype as the input. suppress_warning : bool, optional If true, suppress warning caused by patch_size < arr.shape[-1]. Returns ------- denoised_arr : 4D array This is the denoised array of the same size as that of the input data, clipped to non-negative values. References ---------- .. footbibliography:: """ # check gtab is given, if sigma is not given if sigma is None and gtab is None: raise ValueError("gtab must be provided if sigma is not given") # calculate sigma if sigma is None: sigma = pca_noise_estimate( arr, gtab, correct_bias=correct_bias, patch_radius=patch_radius_sigma, images_as_samples=True, ) return genpca( arr, sigma=sigma, mask=mask, patch_radius=patch_radius, pca_method=pca_method, tau_factor=tau_factor, return_sigma=return_sigma, out_dtype=out_dtype, suppress_warning=suppress_warning, ) @warning_for_keywords() def mppca( arr, *, mask=None, patch_radius=2, pca_method="eig", return_sigma=False, out_dtype=None, suppress_warning=False, ): r"""Performs PCA-based denoising using the Marcenko-Pastur distribution. See :footcite:p:`Veraart2016c` for further details about the method. Parameters ---------- arr : 4D array Array of data to be denoised. The dimensions are (X, Y, Z, N), where N are the diffusion gradient directions. mask : 3D boolean array, optional A mask with voxels that are true inside the brain and false outside of it. The function denoises within the true part and returns zeros outside of those voxels. patch_radius : int or 1D array, optional The radius of the local patch to be taken around each voxel (in voxels). E.g. patch_radius=2 gives 5x5x5 patches. pca_method : 'eig' or 'svd', optional Use either eigenvalue decomposition (eig) or singular value decomposition (svd) for principal component analysis. The default method is 'eig' which is faster. However, occasionally 'svd' might be more accurate.
return_sigma : bool, optional If true, a noise standard deviation estimate based on the Marcenko-Pastur distribution is returned :footcite:p:`Veraart2016b`. out_dtype : str or dtype, optional The dtype for the output array. Default: output has the same dtype as the input. suppress_warning : bool, optional If true, suppress warning caused by patch_size < arr.shape[-1]. Returns ------- denoised_arr : 4D array This is the denoised array of the same size as that of the input data, clipped to non-negative values sigma : 3D array (when return_sigma=True) Estimate of the spatially varying standard deviation of the noise References ---------- .. footbibliography:: """ return genpca( arr, sigma=None, mask=mask, patch_radius=patch_radius, pca_method=pca_method, tau_factor=None, return_sigma=return_sigma, out_dtype=out_dtype, suppress_warning=suppress_warning, ) dipy-1.11.0/dipy/denoise/meson.build000066400000000000000000000012441476546756600173270ustar00rootroot00000000000000cython_sources = [ 'denspeed', 'enhancement_kernel', 'nlmeans_block', 'pca_noise_estimate', 'shift_twist_convolution', ] foreach ext: cython_sources py3.extension_module(ext, cython_gen.process(ext + '.pyx'), c_args: cython_c_args, include_directories: [incdir_numpy, inc_local], dependencies: [omp], install: true, subdir: 'dipy/denoise' ) endforeach python_sources = ['__init__.py', 'adaptive_soft_matching.py', 'gibbs.py', 'localpca.py', 'nlmeans.py', 'noise_estimate.py', 'non_local_means.py', 'patch2self.py', ] py3.install_sources( python_sources, pure: false, subdir: 'dipy/denoise' ) subdir('tests')dipy-1.11.0/dipy/denoise/nlmeans.py000066400000000000000000000060311476546756600171730ustar00rootroot00000000000000import numpy as np from dipy.denoise.denspeed import nlmeans_3d from dipy.testing.decorators import warning_for_keywords # from warnings import warn # import warnings # warnings.simplefilter('always', DeprecationWarning) # warn(DeprecationWarning("Module 'dipy.denoise.nlmeans' is deprecated," # " use module 'dipy.denoise.non_local_means' instead")) @warning_for_keywords() def nlmeans( arr, sigma, *, mask=None, patch_radius=1, block_radius=5, rician=True, num_threads=None, ): r"""Non-local means for denoising 3D and 4D images. See :footcite:p:`Descoteaux2008a` for further details about the method. Parameters ---------- arr : 3D or 4D ndarray The array to be denoised sigma : float or 3D array standard deviation of the noise estimated from the data mask : 3D ndarray, optional A mask over the spatial dimensions of ``arr`` selecting the voxels to denoise patch_radius : int patch size is ``2 x patch_radius + 1``. Default is 1. block_radius : int block size is ``2 x block_radius + 1``. Default is 5. rician : boolean If True the noise is estimated as Rician, otherwise Gaussian noise is assumed. num_threads : int, optional Number of threads to be used for OpenMP parallelization. If None (default) the value of OMP_NUM_THREADS environment variable is used if it is set, otherwise all available threads are used. If < 0 the maximal number of threads minus $|num_threads + 1|$ is used (enter -1 to use as many threads as possible). 0 raises an error. Returns ------- denoised_arr : ndarray the denoised ``arr`` which has the same shape as ``arr``. References ---------- ..
footbibliography:: """ # warn(DeprecationWarning("function 'dipy.denoise.nlmeans'" # " is deprecated, use module " # "'dipy.denoise.non_local_means'" # " instead")) if arr.ndim == 3: sigma = np.ones(arr.shape, dtype=np.float64) * sigma return nlmeans_3d( arr, mask=mask, sigma=sigma, patch_radius=patch_radius, block_radius=block_radius, rician=rician, num_threads=num_threads, ).astype(arr.dtype) elif arr.ndim == 4: denoised_arr = np.zeros_like(arr) if isinstance(sigma, np.ndarray) and sigma.ndim == 3: sigma = np.ones(arr.shape, dtype=np.float64) * sigma[..., np.newaxis] else: sigma = np.ones(arr.shape, dtype=np.float64) * sigma for i in range(arr.shape[-1]): denoised_arr[..., i] = nlmeans_3d( arr[..., i], mask=mask, sigma=sigma[..., i], patch_radius=patch_radius, block_radius=block_radius, rician=rician, num_threads=num_threads, ).astype(arr.dtype) return denoised_arr else: raise ValueError("Only 3D or 4D arrays are supported!", arr.shape) dipy-1.11.0/dipy/denoise/nlmeans_block.pyx000066400000000000000000000411651476546756600205430ustar00rootroot00000000000000cimport cython from cython.view cimport array as cvarray from libc.math cimport sqrt, exp import numpy as np cimport numpy as cnp __all__ = ['firdn', 'upfir', 'nlmeans_block'] cdef inline int _int_max(int a, int b): return a if a >= b else b cdef inline int _int_min(int a, int b): return a if a <= b else b def _firdn_vector(double[:] f, double[:] h, double[:] out): cdef int n = len(f) cdef int klen = len(h) cdef int outLen = (n + klen) // 2 cdef double ss cdef int i, k, limInf, limSup, x = 0, ox = 0, ks = 0 for i in range(outLen): ss = 0 limInf = _int_max(0, x - klen + 1) limSup = 1 + _int_min(n - 1, x) ks = limInf for k in range(limInf, limSup): ss += f[ks] * h[x - k] ks += 1 out[ox] = ss x += 2 ox += 1 def _upfir_vector(double[:] f, double[:] h, double[:] out): cdef int n = f.shape[0] cdef int klen = h.shape[0] cdef int outLen = 2 * n + klen - 2 cdef int x, limInf, limSup, k, ks cdef double ss for x in range(outLen): limInf = _int_max(0, x - klen + 1) if limInf % 2 == 1: limInf += 1 limSup = _int_min(2 * (n - 1), x) if limSup % 2 == 1: limSup -= 1 ss = 0 k = limInf ks = limInf // 2 while k <= limSup: ss += f[ks] * h[x - k] k += 2 ks += 1 out[x] = ss def _firdn_matrix(double[:, :] F, double[:] h, double[:, :] out): cdef int n = F.shape[0] cdef int m = F.shape[1] cdef int j for j in range(m): _firdn_vector(F[:, j], h, out[:, j]) def _upfir_matrix(double[:, :] F, double[:] h, double[:, :] out): cdef int n = F.shape[0] cdef int m = F.shape[1] for j in range(m): _upfir_vector(F[:, j], h, out[:, j]) @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef void _average_block(double[:, :, :] ima, int x, int y, int z, double[:, :, :] average, double weight) noexcept nogil: """ Computes the weighted average of the patches in a blockwise manner Parameters ---------- ima : 3D array of doubles input image x : integer x coordinate of the center voxel y : integer y coordinate of the center voxel z : integer z coordinate of the center voxel average : 3D array of doubles the image where averages are stored weight : double weight for the weighted averaging """ cdef int a, b, c, x_pos, y_pos, z_pos cdef int is_outside cdef int neighborhoodsize = average.shape[0] // 2 for a in range(average.shape[0]): for b in range(average.shape[1]): for c in range(average.shape[2]): x_pos = x + a - neighborhoodsize y_pos = y + b - neighborhoodsize z_pos = z + c - neighborhoodsize is_outside = 0 if x_pos < 0 or x_pos >= ima.shape[1]: is_outside
= 1 if y_pos < 0 or y_pos >= ima.shape[0]: is_outside = 1 if z_pos < 0 or z_pos >= ima.shape[2]: is_outside = 1 if is_outside == 1: average[a, b, c] += weight * (ima[y, x, z]**2) else: average[a, b, c] += weight * (ima[y_pos, x_pos, z_pos]**2) @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef void _value_block(double[:, :, :] estimate, double[:, :, :] Label, int x, int y, int z, double[:, :, :] average, double global_sum, double hh, int rician_int) noexcept nogil: """ Computes the final estimate of the denoised image Parameters ---------- estimate : 3D array of doubles The denoised estimate array Label : 3D array of doubles The label map for block wise weighted averaging x : integer x coordinate of the center voxel y : integer y coordinate of the center voxel z : integer z coordinate of the center voxel average : 3D array of doubles weighted average image global_sum : double total weight sum hh : double weight parameter rician_int : integer 0 or 1 as per the boolean value """ cdef int is_outside, a, b, c, x_pos, y_pos, z_pos, count = 0 cdef double value = 0.0 cdef double denoised_value = 0.0 cdef double label = 0.0 cdef int neighborhoodsize = average.shape[0] // 2 for a in range(average.shape[0]): for b in range(average.shape[1]): for c in range(average.shape[2]): is_outside = 0 x_pos = x + a - neighborhoodsize y_pos = y + b - neighborhoodsize z_pos = z + c - neighborhoodsize if x_pos < 0 or x_pos >= estimate.shape[1]: is_outside = 1 if y_pos < 0 or y_pos >= estimate.shape[0]: is_outside = 1 if z_pos < 0 or z_pos >= estimate.shape[2]: is_outside = 1 if is_outside == 0: value = estimate[y_pos, x_pos, z_pos] if rician_int: denoised_value = (average[a, b, c] / global_sum) - hh else: denoised_value = (average[a, b, c] / global_sum) if denoised_value > 0: denoised_value = sqrt(denoised_value) else: denoised_value = 0.0 value += denoised_value label = Label[y_pos, x_pos, z_pos] estimate[y_pos, x_pos, z_pos] = value Label[y_pos, x_pos, z_pos] = label + 1 @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef double _distance(double[:, :, :] image, int x, int y, int z, int nx, int ny, int nz, int block_radius) nogil: """ Computes the distance between two square subpatches of image located at p and q, respectively. If the centered squares lie beyond the boundaries of image, they are mirrored. 
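For example, with this mirror boundary an index of -2 maps to 2 and an index of sx maps to sx - 1.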
Parameters ---------- image : 3D array of doubles the image whose voxels are taken x : integer x coordinate of first patch's center y : integer y coordinate of first patch's center z : integer z coordinate of first patch's center nx : integer nx coordinate of second patch's center ny : integer ny coordinate of second patch's center nz : integer nz coordinate of second patch's center block_radius : integer block radius for which the distance is computed """ cdef double acu, distancetotal cdef int i, j, k, ni1, nj1, ni2, nj2, nk1, nk2 cdef int sx = image.shape[1], sy = image.shape[0], sz = image.shape[2] acu = 0 distancetotal = 0 for i in range(-block_radius, block_radius + 1): for j in range(-block_radius, block_radius + 1): for k in range(-block_radius, block_radius + 1): ni1 = x + i nj1 = y + j nk1 = z + k ni2 = nx + i nj2 = ny + j nk2 = nz + k if ni1 < 0: ni1 = -ni1 if nj1 < 0: nj1 = -nj1 if ni2 < 0: ni2 = -ni2 if nj2 < 0: nj2 = -nj2 if nk1 < 0: nk1 = -nk1 if nk2 < 0: nk2 = -nk2 if ni1 >= sx: ni1 = 2 * sx - ni1 - 1 if nj1 >= sy: nj1 = 2 * sy - nj1 - 1 if nk1 >= sz: nk1 = 2 * sz - nk1 - 1 if ni2 >= sx: ni2 = 2 * sx - ni2 - 1 if nj2 >= sy: nj2 = 2 * sy - nj2 - 1 if nk2 >= sz: nk2 = 2 * sz - nk2 - 1 distancetotal += (image[nj1, ni1, nk1] - image[nj2, ni2, nk2])**2 acu = acu + 1 return distancetotal / acu @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef double _local_mean(double[:, :, :]ima, int x, int y, int z) nogil: """ local mean of a 3x3x3 patch centered at x,y,z """ cdef int dims0 = ima.shape[0] cdef int dims1 = ima.shape[1] cdef int dims2 = ima.shape[2] cdef double ss = 0 cdef int px, py, pz, nx, ny, nz for px in range(x - 1, x + 2): for py in range(y - 1, y + 2): for pz in range(z - 1, z + 2): # mirror out-of-bounds coordinates into nx, ny, nz rather than # overwriting the loop variables, which would derail the iteration nx = (-px if px < 0 else (2 * dims0 - px - 1 if px >= dims0 else px)) ny = (-py if py < 0 else (2 * dims1 - py - 1 if py >= dims1 else py)) nz = (-pz if pz < 0 else (2 * dims2 - pz - 1 if pz >= dims2 else pz)) ss += ima[nx, ny, nz] return ss / 27.0 @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef double _local_variance(double[:, :, :] ima, double mean, int x, int y, int z) nogil: """ local variance of a 3x3x3 patch centered at x,y,z """ dims0 = ima.shape[0] dims1 = ima.shape[1] dims2 = ima.shape[2] cdef int cnt = 0 cdef double ss = 0 cdef int px, py, pz for px in range(x - 1, x + 2): for py in range(y - 1, y + 2): for pz in range(z - 1, z + 2): if ((px >= 0 and py >= 0 and pz >= 0) and (px < dims0 and py < dims1 and pz < dims2)): ss += (ima[px, py, pz] - mean) * (ima[px, py, pz] - mean) cnt += 1 return ss / (cnt - 1) cpdef firdn(double[:, :] image, double[:] h): """ Applies the filter given by the convolution kernel 'h' columnwise to 'image', then subsamples by 2. This is a special case of the matlab's 'upfirdn' function, ported to python. Returns the filtered image. Parameters ---------- image : 2D array of doubles the input image to be filtered h : double array the convolution kernel """ nrows = image.shape[0] ncols = image.shape[1] ll = h.shape[0] cdef double[:, :] filtered = np.zeros(shape=((nrows + ll) // 2, ncols)) _firdn_matrix(image, h, filtered) return filtered cpdef upfir(double[:, :] image, double[:] h): """ Upsamples the columns of the input image by 2, then applies the convolution kernel 'h' (again, columnwise). This is a special case of the matlab's 'upfirdn' function, ported to python. Returns the filtered image.
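For an input with n rows and a kernel of length len(h), the output has 2 * n + len(h) - 2 rows.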
Parameters ---------- image : 2D array of doubles the input image to be filtered h : double array the convolution kernel """ nrows = image.shape[0] ncols = image.shape[1] ll = h.shape[0] cdef double[:, :] filtered = np.zeros(shape=(2 * nrows + ll - 2, ncols)) _upfir_matrix(image, h, filtered) return filtered @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def nlmeans_block(double[:, :, :]image, double[:, :, :] mask, int patch_radius, int block_radius, double h, int rician): """Non-Local Means Denoising Using Blockwise Averaging. See :footcite:p:`Coupe2008` and :footcite:p:`Coupe2012` for further details about the method. Parameters ---------- image : 3D array of doubles the input image, corrupted with Rician noise mask : 3D array of doubles the input mask patch_radius : int similar patches in the non-local means are searched for locally, inside a cube of side 2*patch_radius+1 centered at each voxel of interest. block_radius : int the size of the block to be used, (2*block_radius+1) voxels along each dimension, in the blockwise non-local means implementation (Coupe's proposal). h : double the estimated amount of Rician noise in the input image: in P. Coupe et al. the Rician noise was simulated as sqrt((f+x)^2 + (y)^2) where f is the pixel value and x and y are independent realizations of a random variable with Normal distribution, with mean=0 and standard deviation=h rician : boolean If True the noise is estimated as Rician, otherwise Gaussian noise is assumed. Returns ------- fima : 3D double array the denoised output which has the same shape as input image. References ---------- .. footbibliography:: """ cdef int[:] dims = cvarray((3,), itemsize=sizeof(int), format="i") dims[0] = image.shape[0] dims[1] = image.shape[1] dims[2] = image.shape[2] cdef double hh = 2 * h * h cdef int Ndims = (2 * block_radius + 1)**3 cdef int nvox = dims[0] * dims[1] * dims[2] cdef double[:, :, :] average = np.zeros((2 * block_radius + 1, 2 * block_radius + 1, 2 * block_radius + 1), dtype=np.float64) cdef double[:, :, :] fima = np.zeros_like(image) cdef double[:, :, :] means = np.zeros_like(image) cdef double[:, :, :] variances = np.zeros_like(image) cdef double[:, :, :] Estimate = np.zeros_like(image) cdef double[:, :, :] Label = np.zeros_like(image) cdef cnp.npy_intp i, j, k, ni, nj, nk cdef double t1, t2 cdef double epsilon = 0.00001 cdef double mu1 = 0.95 cdef double var1 = 0.5 + 1e-7 cdef double d cdef double totalWeight, wmax, w with nogil: for k in range(dims[2]): for i in range(dims[1]): for j in range(dims[0]): means[j, i, k] = _local_mean(image, j, i, k) variances[j, i, k] = _local_variance( image, means[j, i, k], j, i, k) for k in range(0, dims[2], 2): for i in range(0, dims[1], 2): for j in range(0, dims[0], 2): with gil: average[...]
= 0 totalWeight = 0 if (means[j, i, k] <= epsilon) or ( variances[j, i, k] <= epsilon): wmax = 1.0 _average_block(image, i, j, k, average, wmax) totalWeight += wmax _value_block(Estimate, Label, i, j, k, average, totalWeight, hh, rician) else: wmax = 0 for nk in range(k - patch_radius, k + patch_radius + 1): for ni in range(i - patch_radius, i + patch_radius + 1): for nj in range(j - patch_radius, j + patch_radius + 1): if ni == i and nj == j and nk == k: continue if ni < 0 or nj < 0 or nk < 0 or nj >= dims[0] or ni >= dims[1] or nk >= dims[2]: continue if ((means[nj, ni, nk] <= epsilon) or ( variances[nj, ni, nk] <= epsilon)): continue t1 = (means[j, i, k]) / (means[nj, ni, nk]) t2 = (variances[j, i, k]) / \ (variances[nj, ni, nk]) if mu1 < t1 < (1 / mu1) and var1 < t2 < (1 / var1): d = _distance( image, i, j, k, ni, nj, nk, block_radius) w = exp(-d / (h * h)) if w > wmax: wmax = w _average_block( image, ni, nj, nk, average, w) totalWeight += w if totalWeight != 0.0: _value_block(Estimate, Label, i, j, k, average, totalWeight, hh, rician) for k in range(0, dims[2]): for i in range(0, dims[1]): for j in range(0, dims[0]): if mask[j, i, k] == 0: fima[j, i, k] = 0 else: if Label[j, i, k] == 0.0: fima[j, i, k] = image[j, i, k] else: fima[j, i, k] = Estimate[j, i, k] / Label[j, i, k] return fima dipy-1.11.0/dipy/denoise/noise_estimate.py000066400000000000000000000254511476546756600205530ustar00rootroot00000000000000import numpy as np from scipy.ndimage import convolve from scipy.special import gammainccinv from dipy.testing.decorators import warning_for_keywords from dipy.utils.deprecator import deprecated_params def _inv_nchi_cdf(N, K, alpha): """Inverse CDF for the noncentral chi distribution. See :footcite:t:`Koay2009b`, p.3 section 2.3. References ---------- .. footbibliography:: """ return gammainccinv(N * K, 1 - alpha) / K # List of optimal quantile for PIESNO. # Get optimal quantile for N if available, else use the median. opt_quantile = { 1: 0.79681213002002, 2: 0.7306303027491917, 4: 0.6721952960782169, 8: 0.6254030432343569, 16: 0.5900487123737876, 32: 0.5641772300866416, 64: 0.5455611840489607, 128: 0.5322811923303339, } @deprecated_params("l", new_name="step", since="1.10.0", until="1.12.0") @warning_for_keywords() def piesno(data, N, *, alpha=0.01, step=100, itermax=100, eps=1e-5, return_mask=False): """ Probabilistic Identification and Estimation of Noise (PIESNO). See :footcite:p:`Koay2009a` and :footcite:p:`Koay2009b` for further details about the method. Parameters ---------- data : ndarray The magnitude signals to analyse. The last dimension must contain the same realisation of the volume, such as dMRI or fMRI data. N : int The number of phase array coils of the MRI scanner. If your scanner does a SENSE reconstruction, ALWAYS use N=1, as the noise profile is always Rician. If your scanner does a GRAPPA reconstruction, set N as the number of phase array coils. alpha : float Probabilistic estimation threshold for the gamma function. step : int Number of initial estimates for sigma to try. itermax : int Maximum number of iterations to execute if convergence is not reached. eps : float Tolerance for the convergence criterion. Convergence is reached if the difference between two subsequent estimates is smaller than eps. return_mask : bool If True, return a mask identifying all the pure noise voxels that were found. Returns ------- sigma : float The estimated standard deviation of the Gaussian noise. mask : ndarray, optional A boolean mask indicating the voxels identified as pure noise.
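The mask is only returned when ``return_mask`` is True.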
Notes ----- This function assumes two things : 1. The data has a noisy, non-masked background and 2. The data is a repetition of the same measurements along the last axis, i.e. dMRI or fMRI data, not structural data like T1/T2. This function processes the data slice by slice, as originally designed in the paper. Use it to get a slice by slice estimation of the noise, as in spinal cord imaging for example. References ---------- .. footbibliography:: """ # This method works on a 2D array with repetitions as the third dimension, # so process the dataset slice by slice. if data.ndim < 3: e_s = "This function only works on datasets of at least 3 dimensions." raise ValueError(e_s) if N in opt_quantile: q = opt_quantile[N] else: q = 0.5 # Initial estimation of sigma initial_estimation = np.percentile(data, q * 100) / np.sqrt( 2 * _inv_nchi_cdf(N, 1, q) ) if data.ndim == 4: sigma = np.zeros(data.shape[-2], dtype=np.float32) mask_noise = np.zeros(data.shape[:-1], dtype=bool) for idx in range(data.shape[-2]): sigma[idx], mask_noise[..., idx] = _piesno_3D( data[..., idx, :], N, alpha=alpha, step=step, itermax=itermax, eps=eps, return_mask=True, initial_estimation=initial_estimation, ) else: sigma, mask_noise = _piesno_3D( data, N, alpha=alpha, step=step, itermax=itermax, eps=eps, return_mask=True, initial_estimation=initial_estimation, ) if return_mask: return sigma, mask_noise return sigma @deprecated_params("l", new_name="step", since="1.10.0", until="1.12.0") @warning_for_keywords() def _piesno_3D( data, N, *, alpha=0.01, step=100, itermax=100, eps=1e-5, return_mask=False, initial_estimation=None, ): """ Probabilistic Identification and Estimation of Noise (PIESNO). See :footcite:p:`Koay2009a` and :footcite:p:`Koay2009b` for further details about the method. This is the slice by slice version for working on a 4D array. Parameters ---------- data : ndarray The magnitude signals to analyse. The last dimension must contain the same realisation of the volume, such as dMRI or fMRI data. N : int The number of phase array coils of the MRI scanner. alpha : float, optional Probabilistic estimation threshold for the gamma function. step : int, optional number of initial estimates for sigma to try. itermax : int, optional Maximum number of iterations to execute if convergence is not reached. eps : float, optional Tolerance for the convergence criterion. Convergence is reached if two subsequent estimates are smaller than eps. return_mask : bool, optional If True, return a mask identifying all the pure noise voxel that were found. initial_estimation : float, optional Upper bound for the initial estimation of sigma. default : None, which computes the optimal quantile for N. Returns ------- sigma : float The estimated standard deviation of the gaussian noise. mask : ndarray A boolean mask indicating the voxels identified as pure noise. Notes ----- This function assumes two things : 1. The data has a noisy, non-masked background and 2. The data is a repetition of the same measurements along the last axis, i.e. dMRI or fMRI data, not structural data like T1/T2. References ---------- .. 
footbibliography:: """ if np.all(data == 0): if return_mask: return 0, np.zeros(data.shape[:-1], dtype=bool) return 0 if N in opt_quantile: q = opt_quantile[N] else: q = 0.5 denom = np.sqrt(2 * _inv_nchi_cdf(N, 1, q)) if initial_estimation is None: m = np.percentile(data, q * 100) / denom else: m = initial_estimation phi = np.arange(1, step + 1) * m / step K = data.shape[-1] sum_m2 = np.sum(data.astype(np.float32) ** 2, axis=2) sigma_prev = 0 sigma = m prev_idx = 0 mask = np.zeros(data.shape[:-1], dtype=bool) lambda_minus = _inv_nchi_cdf(N, K, alpha / 2) lambda_plus = _inv_nchi_cdf(N, K, 1 - alpha / 2) for sigma_init in phi: s = sum_m2 / (2 * K * sigma_init**2) found_idx = np.sum( np.logical_and(lambda_minus <= s, s <= lambda_plus), dtype=np.int16 ) if found_idx > prev_idx: sigma = sigma_init prev_idx = found_idx for _ in range(itermax): if np.abs(sigma - sigma_prev) < eps: break s = sum_m2 / (2 * K * sigma**2) mask[...] = np.logical_and(lambda_minus <= s, s <= lambda_plus) omega = data[mask, :] # If no point meets the criterion, exit if omega.size == 0: break sigma_prev = sigma # Numpy percentile must range in 0 to 100, hence q*100 sigma = np.percentile(omega, q * 100) / denom if return_mask: return sigma, mask return sigma @warning_for_keywords() def estimate_sigma(arr, *, disable_background_masking=False, N=0): """Standard deviation estimation from local patches Parameters ---------- arr : 3D or 4D ndarray The array to be estimated disable_background_masking : bool, optional If True, uses all voxels for the estimation, otherwise, only non-zeros voxels are used. Useful if the background is masked by the scanner. N : int, optional Number of coils of the receiver array. Use N = 1 in case of a SENSE reconstruction (Philips scanners) or the number of coils for a GRAPPA reconstruction (Siemens and GE). Use 0 to disable the correction factor, as for example if the noise is Gaussian distributed. See :footcite:p:`Koay2006a` for more information. Returns ------- sigma : ndarray standard deviation of the noise, one estimation per volume. Notes ----- This function is the same as manually taking the standard deviation of the background and gives one value for the whole 3D array. It also includes the coil-dependent correction factor from :footcite:t:`Koay2006a` (see equation 18 in :footcite:p:`Koay2006a`) with theta = 0. Since this function was introduced in :footcite:p:`Coupe2008` for T1 imaging, it is expected to perform ok on diffusion MRI data, but might oversmooth some regions and leave others un-denoised for spatially varying noise profiles. Consider using :func:`piesno` to estimate sigma instead if visual inaccuracies are apparent in the denoised result. References ---------- .. footbibliography:: """ k = np.zeros((3, 3, 3), dtype=np.int8) k[0, 1, 1] = 1 k[2, 1, 1] = 1 k[1, 0, 1] = 1 k[1, 2, 1] = 1 k[1, 1, 0] = 1 k[1, 1, 2] = 1 # Precomputed factor from Koay 2006, this corrects the bias of magnitude # image correction_factor = { 0: 1, # No correction 1: 0.42920367320510366, 4: 0.4834941393603609, 6: 0.4891759468548269, 8: 0.49195420135894175, 12: 0.4946862482541263, 16: 0.4960339908122364, 20: 0.4968365823718557, 24: 0.49736907650825657, 32: 0.49803177052530145, 64: 0.49901964176235936, } if N in correction_factor: factor = correction_factor[N] else: raise ValueError( f"N = {N} is not supported! 
Please choose amongst " f"{sorted(correction_factor.keys())}" ) if arr.ndim == 3: sigma = np.zeros(1, dtype=np.float32) arr = arr[..., None] elif arr.ndim == 4: sigma = np.zeros(arr.shape[-1], dtype=np.float32) else: raise ValueError("Array shape is not supported!", arr.shape) if disable_background_masking: mask = arr[..., 0].astype(bool) else: mask = np.ones_like(arr[..., 0], dtype=bool) conv_out = np.zeros(arr[..., 0].shape, dtype=np.float64) for i in range(sigma.size): convolve(arr[..., i], k, output=conv_out) mean_block = np.sqrt(6 / 7) * (arr[..., i] - 1 / 6 * conv_out) sigma[i] = np.sqrt(np.mean(mean_block[mask] ** 2) / factor) return sigma dipy-1.11.0/dipy/denoise/non_local_means.py000066400000000000000000000062501476546756600206700ustar00rootroot00000000000000from numbers import Number import numpy as np from dipy.denoise.nlmeans_block import nlmeans_block from dipy.testing.decorators import warning_for_keywords @warning_for_keywords() def non_local_means( arr, sigma, *, mask=None, patch_radius=1, block_radius=5, rician=True ): r"""Non-local means for denoising 3D and 4D images, using blockwise averaging approach. See :footcite:p:`Coupe2008` and :footcite:p:`Coupe2012` for further details about the method. Parameters ---------- arr : 3D or 4D ndarray The array to be denoised mask : 3D ndarray Mask on data where the non-local means will be applied. sigma : float or ndarray standard deviation of the noise estimated from the data patch_radius : int patch size is ``2 x patch_radius + 1``. Default is 1. block_radius : int block size is ``2 x block_radius + 1``. Default is 5. rician : boolean If True the noise is estimated as Rician, otherwise Gaussian noise is assumed. Returns ------- denoised_arr : ndarray the denoised ``arr`` which has the same shape as ``arr``. References ---------- .. 
footbibliography:: """ if isinstance(sigma, np.ndarray) and sigma.size == 1: sigma = sigma.item() if isinstance(sigma, np.ndarray): if arr.ndim == 3: raise ValueError("sigma should be a scalar for 3D data", sigma) if not np.issubdtype(sigma.dtype, np.number): raise ValueError("sigma should be an array of floats", sigma) if arr.ndim == 4 and sigma.ndim != 1: raise ValueError("sigma should be a 1D array for 4D data", sigma) if arr.ndim == 4 and sigma.shape[0] != arr.shape[-1]: raise ValueError( "sigma should have the same length as the last " "dimension of arr for 4D data", sigma, ) else: if not isinstance(sigma, Number): raise ValueError("sigma should be a float", sigma) # if sigma is a scalar and arr is 4D, we assume the same sigma for all if arr.ndim == 4: sigma = np.array([sigma] * arr.shape[-1]) if mask is None and arr.ndim > 2: mask = np.ones((arr.shape[0], arr.shape[1], arr.shape[2]), dtype="f8") else: mask = np.ascontiguousarray(mask, dtype="f8") if mask.ndim != 3: raise ValueError("mask needs to be a 3D ndarray", mask.shape) if arr.ndim == 3: return np.array( nlmeans_block( np.double(arr), mask, patch_radius, block_radius, sigma, int(rician) ) ).astype(arr.dtype) elif arr.ndim == 4: denoised_arr = np.zeros_like(arr) for i in range(arr.shape[-1]): denoised_arr[..., i] = np.array( nlmeans_block( np.double(arr[..., i]), mask, patch_radius, block_radius, sigma[i], int(rician), ) ).astype(arr.dtype) return denoised_arr else: raise ValueError("Only 3D or 4D array are supported!", arr.shape) dipy-1.11.0/dipy/denoise/patch2self.py000066400000000000000000000652101476546756600175750ustar00rootroot00000000000000import os import tempfile import time from warnings import warn import numpy as np from numpy.lib.stride_tricks import sliding_window_view from tqdm import tqdm from dipy.stats.sketching import count_sketch from dipy.testing.decorators import warning_for_keywords from dipy.utils.optpkg import optional_package sklearn, has_sklearn, _ = optional_package("sklearn") linear_model, _, _ = optional_package("sklearn.linear_model") def _vol_split(train, vol_idx): """Split the 3D volumes into the train and test set. Parameters ---------- train : numpy.ndarray Array of all 3D patches flattened out to be 2D. vol_idx : int The volume number that needs to be held out for training. Returns ------- cur_x : numpy.ndarray of shape (nvolumes * patch_size) x (nvoxels) Array of patches corresponding to all volumes except the held out volume. y : numpy.ndarray of shape (patch_size) x (nvoxels) Array of patches corresponding to the volume that is used a target for denoising. """ mask = np.zeros(train.shape[0], dtype=bool) mask[vol_idx] = True cur_x = train[~mask].reshape((train.shape[0] - 1) * train.shape[1], train.shape[2]) y = train[vol_idx, train.shape[1] // 2, :] return cur_x, y def _extract_3d_patches(arr, patch_radius): """Extract 3D patches from 4D DWI data. Parameters ---------- arr : ndarray The 4D noisy DWI data to be denoised. patch_radius : int or array of shape (3,) The radius of the local patch to be taken around each voxel (in voxels). Returns ------- all_patches : ndarray All 3D patches flattened out to be 2D corresponding to the each 3D volume of the 4D DWI data. 
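    Notes
    -----
    A quick shape check with a tiny, hypothetical volume (illustration only,
    not real data): with ``patch_radius=1`` each patch has 3x3x3 = 27 voxels,
    and a (10, 10, 10, 8) array has 8x8x8 = 512 valid patch centers.

    >>> import numpy as np
    >>> arr = np.zeros((10, 10, 10, 8))  # stand-in 4D DWI array
    >>> _extract_3d_patches(arr, patch_radius=1).shape
    (8, 27, 512)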
""" patch_radius = np.asarray(patch_radius, dtype=int) if patch_radius.size == 1: patch_radius = np.repeat(patch_radius, 3) elif patch_radius.size != 3: raise ValueError("patch_radius should have length 1 or 3") patch_size = 2 * patch_radius + 1 dim = arr.shape[-1] # Calculate the shape of the output array output_shape = tuple(arr.shape[i] - 2 * patch_radius[i] for i in range(3)) total_patches = np.prod(output_shape) patches = sliding_window_view(arr, tuple(patch_size) + (dim,)) # Reshape and transpose the patches to match the original function's output shape all_patches = patches.reshape(total_patches, np.prod(patch_size), dim) all_patches = all_patches.transpose(2, 1, 0) return np.array(all_patches) def _fit_denoising_model(train, vol_idx, model, alpha): """Fit a single 3D volume using a train and test phase. Parameters ---------- train : ndarray Array of all 3D patches flattened out to be 2D. vol_idx : int The volume number that needs to be held out for training. model : str or sklearn.base.RegressorMixin This will determine the algorithm used to solve the set of linear equations underlying this model. If it is a string it needs to be one of the following: {'ols', 'ridge', 'lasso'}. Otherwise, it can be an object that inherits from `dipy.optimize.SKLearnLinearSolver` or an object with a similar interface from Scikit-Learn: `sklearn.linear_model.LinearRegression`, `sklearn.linear_model.Lasso` or `sklearn.linear_model.Ridge` and other objects that inherit from `sklearn.base.RegressorMixin`. alpha : float Regularization parameter only for ridge and lasso regression models. version : int Version 1 or 3 of Patch2Self to use. Returns ------- model_instance : fitted linear model object The fitted model instance if version is 3. cur_x : ndarray The patches corresponding to all volumes except the held out volume. """ if isinstance(model, str): if model.lower() == "ols": model_instance = linear_model.Ridge(copy_X=False, alpha=1e-10) elif model.lower() == "ridge": model_instance = linear_model.Ridge(copy_X=False, alpha=alpha) elif model.lower() == "lasso": model_instance = linear_model.Lasso(copy_X=False, max_iter=50, alpha=alpha) else: raise ValueError( f"Invalid model string: {model}. Should be 'ols', 'ridge', or 'lasso'." ) elif isinstance(model, linear_model.BaseEstimator): model_instance = model else: raise ValueError( "Model should either be a string or \ an instance of sklearn.linear_model BaseEstimator." ) cur_x, y = _vol_split(train, vol_idx) model_instance.fit(cur_x.T, y.T) return model_instance, cur_x def vol_denoise( data_dict, b0_idx, dwi_idx, model, alpha, b0_denoising, verbose, tmp_dir ): """Denoise a single 3D volume using train and test phase. Parameters ---------- data_dict : dict Dictionary containing the following: data_name : str The name of the memmap file containing the memmaped data. data_dtype : dtype The dtype of the data. data_shape : tuple The shape of the data. data_b0s : ndarray Array of all 3D patches flattened out to be 2D for b0 volumes. data_dwi : ndarray Array of all 3D patches flattened out to be 2D for dwi volumes. b0_idx : ndarray The indices of the b0 volumes. dwi_idx : ndarray The indices of the dwi volumes. model : sklearn.base.RegressorMixin This is the model that is initialized from the `_fit_denoising_model` function. alpha : float Regularization parameter only for ridge and lasso regression models. b0_denoising : bool Skips denoising b0 volumes if set to False. verbose : bool Show progress of Patch2Self and time taken. 
tmp_dir : str The directory to save the temporary files. Returns ------- denoised_arr.name : str The name of the memmap file containing the denoised array. denoised_arr.dtype : dtype The dtype of the denoised array. denoised_arr.shape : tuple The shape of the denoised array. """ data_shape = data_dict["data"][2] data_tmp = np.memmap( data_dict["data"][0], dtype=data_dict["data"][1], mode="r", shape=data_dict["data"][2], ).reshape(np.prod(data_shape[:-1]), data_shape[-1]) data_b0s = data_dict["data_b0s"] data_dwi = data_dict["data_dwi"] p = data_tmp.shape[0] // 10 b0_counter = 0 dwi_counter = 0 start_idx = 0 denoised_arr_file = tempfile.NamedTemporaryFile( delete=False, dir=tmp_dir, suffix="denoised_arr" ) denoised_arr_file.close() denoised_arr = np.memmap( denoised_arr_file.name, dtype=data_tmp.dtype, mode="w+", shape=data_shape ) idx_counter = 0 full_result = np.empty( (data_shape[0], data_shape[1], data_shape[2], data_shape[3] // 5) ) b0_idx = b0_idx dwi_idx = dwi_idx if data_b0s.shape[0] == 1: b0_denoising = False if not b0_denoising: if verbose: print("b0 denoising skipped....") for vol_idx in tqdm( range(data_shape[-1]), desc="Fitting and Denoising", leave=False ): if vol_idx in b0_idx.flatten(): if b0_denoising: b_fit, _ = _fit_denoising_model(data_b0s, b0_counter, model, alpha) b_matrix = np.zeros(data_tmp.shape[-1]) b_fit_coef = np.insert(b_fit.coef_, b0_counter, 0) np.put(b_matrix, b0_idx, b_fit_coef) result = np.zeros(data_tmp.shape[0]) for z in range(0, data_tmp.shape[0], p): end_idx = z + p if end_idx > z + p: end_idx = data_tmp.shape[0] result[z:end_idx] = ( np.matmul(np.squeeze(data_tmp[z:end_idx, :]), b_matrix) + b_fit.intercept_ ) full_result[..., idx_counter] = result.reshape( data_shape[0], data_shape[1], data_shape[2] ) idx_counter += 1 b0_counter += 1 del b_fit_coef del b_matrix del result else: full_result[..., idx_counter] = data_tmp[..., b0_counter].reshape( data_shape[0], data_shape[1], data_shape[2] ) b0_counter += 1 idx_counter += 1 else: dwi_fit, _ = _fit_denoising_model(data_dwi, dwi_counter, model, alpha) b_matrix = np.zeros(data_tmp.shape[-1]) dwi_fit_coef = np.insert(dwi_fit.coef_, dwi_counter, 0) np.put(b_matrix, dwi_idx, dwi_fit_coef) del dwi_fit_coef result = np.zeros(data_tmp.shape[0]) for z in range(0, data_tmp.shape[0], p): end_idx = z + p if end_idx > z + p: end_idx = data_tmp.shape[0] result[z:end_idx] = ( np.matmul(np.squeeze(data_tmp[z:end_idx, :]), b_matrix) + dwi_fit.intercept_ ) full_result[..., idx_counter] = result.reshape( data_shape[0], data_shape[1], data_shape[2] ) idx_counter += 1 dwi_counter += 1 if idx_counter >= data_shape[-1] // 5: denoised_arr[..., start_idx : vol_idx + 1] = full_result start_idx = vol_idx + 1 idx_counter = 0 denoised_arr_idx = data_shape[-1] - data_shape[-1] % 5 full_result_idx = full_result.shape[-1] - data_shape[-1] % 5 denoised_arr[..., denoised_arr_idx:] = full_result[..., full_result_idx:] del full_result return denoised_arr_file.name, denoised_arr.dtype, denoised_arr.shape @warning_for_keywords() def patch2self( data, bvals, *, patch_radius=(0, 0, 0), model="ols", b0_threshold=50, out_dtype=None, alpha=1.0, verbose=False, b0_denoising=True, clip_negative_vals=False, shift_intensity=True, tmp_dir=None, version=3, ): """Patch2Self Denoiser. See :footcite:p:`Fadnavis2020` for further details about the method. See :footcite:p:`Fadnavis2024` for further details about the new method. Parameters ---------- data : ndarray The 4D noisy DWI data to be denoised. 
bvals : array of shape (N,) Array of the bvals from the DWI acquisition patch_radius : int or array of shape (3,), optional The radius of the local patch to be taken around each voxel (in voxels). Default: 0 (denoise in blocks of 1x1x1 voxels). model : string, or sklearn.base.RegressorMixin This will determine the algorithm used to solve the set of linear equations underlying this model. If it is a string it needs to be one of the following: {'ols', 'ridge', 'lasso'}. Otherwise, it can be an object that inherits from `dipy.optimize.SKLearnLinearSolver` or an object with a similar interface from Scikit-Learn: `sklearn.linear_model.LinearRegression`, `sklearn.linear_model.Lasso` or `sklearn.linear_model.Ridge` and other objects that inherit from `sklearn.base.RegressorMixin`. b0_threshold : int, optional Threshold for considering volumes as b0. out_dtype : str or dtype, optional The dtype for the output array. Default: output has the same dtype as the input. alpha : float, optional Regularization parameter only for ridge regression model. verbose : bool, optional Show progress of Patch2Self and time taken. b0_denoising : bool, optional Skips denoising b0 volumes if set to False. clip_negative_vals : bool, optional Sets negative values after denoising to 0 using `np.clip`. shift_intensity : bool, optional Shifts the distribution of intensities per volume to give non-negative values. tmp_dir : str, optional The directory to save the temporary files. If None, the temporary files are saved in the system's default temporary directory. Default: None. version : int, optional Version 1 or 3 of Patch2Self to use. Default: 3 Returns ------- denoised array : ndarray This is the denoised array of the same size as that of the input data, clipped to non-negative values. References ---------- .. footbibliography:: """ out_dtype, tmp_dir, patch_radius = _validate_inputs( data, out_dtype, patch_radius, version, tmp_dir ) if version == 1: return _patch2self_version1( data, bvals, patch_radius, model, b0_threshold, out_dtype, alpha, verbose, b0_denoising, clip_negative_vals, shift_intensity, ) return _patch2self_version3( data, bvals, model, b0_threshold, out_dtype, alpha, verbose, b0_denoising, clip_negative_vals, shift_intensity, tmp_dir, ) def _validate_inputs(data, out_dtype, patch_radius, version, tmp_dir): """Validate inputs for patch2self function. Parameters ---------- data : ndarray The 4D noisy DWI data to be denoised. out_dtype : str or dtype The dtype for the output array. patch_radius : int or array of shape (3,) The radius of the local patch to be taken around each voxel (in voxels). version : int Version 1 or 3 of Patch2Self to use. tmp_dir : str The directory to save the temporary files. If None, the temporary files are saved in the system's default temporary directory. Raises ------ ValueError If temporary directory is not None for Patch2Self version 1. If the patch_radius is not 0 for Patch2Self version 3. If the temporary directory does not exist. If the input data is not a 4D array. Warns ----- If the input data has less than 10 3D volumes. Returns ------- out_dtype : str or dtype The dtype for the output array. tmp_dir : str The directory to save the temporary files. If None, the temporary files are saved in the system's default temporary directory. """ if out_dtype is None: out_dtype = data.dtype if tmp_dir is None and version == 3: tmp_dir = tempfile.gettempdir() if version not in [1, 3]: raise ValueError("Invalid version. 
Should be 1 or 3.") if version == 1 and tmp_dir is not None: raise ValueError( "Temporary directory is not supported for Patch2Self version 1. \ Please set tmp_dir to None." ) if patch_radius != (0, 0, 0) and version == 3: raise ValueError( "Patch radius is not supported for Patch2Self version 3. \ Please do not set patch_radius." ) if isinstance(patch_radius, list) and len(patch_radius) == 1: patch_radius = (patch_radius[0], patch_radius[0], patch_radius[0]) if isinstance(patch_radius, int): patch_radius = (patch_radius, patch_radius, patch_radius) if version == 3 and tmp_dir is not None and not os.path.exists(tmp_dir): raise ValueError("The temporary directory does not exist.") if data.ndim != 4: raise ValueError("Patch2Self can only denoise on 4D arrays.", data.shape) if data.shape[3] < 10: warn( "The input data has less than 10 3D volumes. \ Patch2Self may not give optimal denoising performance.", stacklevel=2, ) return out_dtype, tmp_dir, patch_radius def _patch2self_version1( data, bvals, patch_radius, model, b0_threshold, out_dtype, alpha, verbose, b0_denoising, clip_negative_vals, shift_intensity, ): """Patch2Self Denoiser. Parameters ---------- data : ndarray The 4D noisy DWI data to be denoised. bvals : array of shape (N,) Array of the bvals from the DWI acquisition. patch_radius : int or array of shape (3,) The radius of the local patch to be taken around each voxel (in voxels). model : string, or sklearn.base.RegressorMixin This will determine the algorithm used to solve the set of linear equations underlying this model. If it is a string it needs to be one of the following: {'ols', 'ridge', 'lasso'}. Otherwise, it can be an object that inherits from `dipy.optimize.SKLearnLinearSolver` or an object with a similar interface from Scikit-Learn: `sklearn.linear_model.LinearRegression`, `sklearn.linear_model.Lasso` or `sklearn.linear_model.Ridge` and other objects that inherit from `sklearn.base.RegressorMixin`. b0_threshold : int Threshold for considering volumes as b0. out_dtype : str or dtype The dtype for the output array. Default: output has the same dtype as the input. alpha : float Regularization parameter only for ridge regression model. verbose : bool Show progress of Patch2Self and time taken. b0_denoising : bool Skips denoising b0 volumes if set to False. clip_negative_vals : bool Sets negative values after denoising to 0 using `np.clip`. shift_intensity : bool Shifts the distribution of intensities per volume to give non-negative values. Returns ------- denoised array : ndarray This is the denoised array of the same size as that of the input data, clipped to non-negative values. 
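    Notes
    -----
    The core of the method is a leave-one-volume-out regression: each volume
    is predicted from all of the other volumes, and the prediction is used as
    its denoised estimate. A toy NumPy sketch of that idea (names and sizes
    are illustrative only, not part of this module):

    >>> import numpy as np
    >>> rng = np.random.default_rng(0)
    >>> X = rng.standard_normal((100, 9))  # features: the 9 other volumes
    >>> y = X @ rng.standard_normal(9)  # target: the held-out volume
    >>> coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    >>> denoised = X @ coef  # the prediction is the denoised volume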
""" # We retain float64 precision, iff the input is in this precision: if data.dtype == np.float64: calc_dtype = np.float64 # Otherwise, we'll calculate things in float32 (saving memory) else: calc_dtype = np.float32 original_shape = data.shape if 1 in data.shape and data.shape[-1] != 1: position = data.shape.index(1) data = np.concatenate((data, data, data), position) # Segregates volumes by b0 threshold b0_idx = np.argwhere(bvals <= b0_threshold) dwi_idx = np.argwhere(bvals > b0_threshold) data_b0s = np.squeeze(np.take(data, b0_idx, axis=3)) data_dwi = np.squeeze(np.take(data, dwi_idx, axis=3)) # create empty arrays denoised_b0s = np.empty(data_b0s.shape, dtype=calc_dtype) denoised_dwi = np.empty(data_dwi.shape, dtype=calc_dtype) denoised_arr = np.empty(data.shape, dtype=calc_dtype) if verbose is True: t1 = time.time() # if only 1 b0 volume, skip denoising it if data_b0s.ndim == 3 or not b0_denoising: if verbose: print("b0 denoising skipped...") denoised_b0s = data_b0s else: train_b0 = _extract_3d_patches( np.pad( data_b0s, ( (patch_radius[0], patch_radius[0]), (patch_radius[1], patch_radius[1]), (patch_radius[2], patch_radius[2]), (0, 0), ), mode="constant", ), patch_radius=patch_radius, ) for vol_idx in range(0, data_b0s.shape[3]): b0_model, cur_x = _fit_denoising_model( train_b0, vol_idx, model, alpha=alpha ) denoised_b0s[..., vol_idx] = b0_model.predict(cur_x.T).reshape( data_b0s.shape[0], data_b0s.shape[1], data_b0s.shape[2] ) if verbose is True: print("Denoised b0 Volume: ", vol_idx) # Separate denoising for DWI volumes train_dwi = _extract_3d_patches( np.pad( data_dwi, ( (patch_radius[0], patch_radius[0]), (patch_radius[1], patch_radius[1]), (patch_radius[2], patch_radius[2]), (0, 0), ), mode="constant", ), patch_radius=patch_radius, ) # Insert the separately denoised arrays into the respective empty arrays for vol_idx in range(0, data_dwi.shape[3]): dwi_model, cur_x = _fit_denoising_model(train_dwi, vol_idx, model, alpha=alpha) denoised_dwi[..., vol_idx] = dwi_model.predict(cur_x.T).reshape( data_dwi.shape[0], data_dwi.shape[1], data_dwi.shape[2] ) if verbose is True: print("Denoised DWI Volume: ", vol_idx) if verbose is True: t2 = time.time() print("Total time taken for Patch2Self: ", t2 - t1, " seconds") if data_b0s.ndim == 3: denoised_arr[:, :, :, b0_idx[0][0]] = denoised_b0s else: for i, idx in enumerate(b0_idx): denoised_arr[:, :, :, idx[0]] = np.squeeze(denoised_b0s[..., i]) for i, idx in enumerate(dwi_idx): denoised_arr[:, :, :, idx[0]] = np.squeeze(denoised_dwi[..., i]) if 1 in original_shape and original_shape[-1] != 1: denoised_arr = np.take(denoised_arr, [0], axis=position) denoised_arr = _apply_post_processing( denoised_arr, shift_intensity, clip_negative_vals ) return np.array(denoised_arr, dtype=out_dtype) def _patch2self_version3( data, bvals, model, b0_threshold, out_dtype, alpha, verbose, b0_denoising, clip_negative_vals, shift_intensity, tmp_dir, ): """Patch2Self Denoiser. Parameters ---------- data : ndarray The 4D noisy DWI data to be denoised. bvals : array of shape (N,) Array of the bvals from the DWI acquisition. model : string, or sklearn.base.RegressorMixin This will determine the algorithm used to solve the set of linear equations underlying this model. If it is a string it needs to be one of the following: {'ols', 'ridge', 'lasso'}. 
Otherwise, it can be an object that inherits from `dipy.optimize.SKLearnLinearSolver` or an object with a similar interface from Scikit-Learn: `sklearn.linear_model.LinearRegression`, `sklearn.linear_model.Lasso` or `sklearn.linear_model.Ridge` and other objects that inherit from `sklearn.base.RegressorMixin`. b0_threshold : int Threshold for considering volumes as b0. out_dtype : str or dtype The dtype for the output array. Default: output has the same dtype as the input. alpha : float Regularization parameter only for ridge regression model. verbose : bool Show progress of Patch2Self and time taken. b0_denoising : bool Skips denoising b0 volumes if set to False. clip_negative_vals : bool Sets negative values after denoising to 0 using `np.clip`. shift_intensity : bool Shifts the distribution of intensities per volume to give non-negative values. tmp_dir : str The directory to save the temporary files. If None, the temporary files are saved in the system's default temporary directory. Returns ------- denoised array : ndarray This is the denoised array of the same size as that of the input data, clipped to non-negative values. """ tmp_file = tempfile.NamedTemporaryFile(delete=False, dir=tmp_dir, suffix="tmp_file") tmp_file.close() tmp = np.memmap( tmp_file.name, dtype=data.dtype, mode="w+", shape=(data.shape[0], data.shape[1], data.shape[2], data.shape[3]), ) p = data.shape[-1] // 5 idx_start = 0 for z in range(0, data.shape[3], p): end_idx = z + p if end_idx > data.shape[3]: end_idx = data.shape[3] if verbose: print("Loading data from {} to {}".format(idx_start, end_idx)) tmp[..., idx_start:end_idx] = data[..., idx_start:end_idx] idx_start = end_idx sketch_rows = int(0.30 * data.shape[0] * data.shape[1] * data.shape[2]) sketched_matrix_name, sketched_matrix_dtype, sketched_matrix_shape = count_sketch( tmp_file.name, data.dtype, tmp.shape, sketch_rows=sketch_rows, tmp_dir=tmp_dir, ) sketched_matrix = np.memmap( sketched_matrix_name, dtype=sketched_matrix_dtype, mode="r", shape=sketched_matrix_shape, ).T if verbose: print("Sketching done.") b0_idx = np.argwhere(bvals <= b0_threshold) dwi_idx = np.argwhere(bvals > b0_threshold) data_b0s = np.take(np.squeeze(sketched_matrix), b0_idx, axis=0) data_dwi = np.take(np.squeeze(sketched_matrix), dwi_idx, axis=0) data_dict = { "data": [tmp_file.name, data.dtype, tmp.shape], "data_b0s": data_b0s, "data_dwi": data_dwi, } if verbose: t1 = time.time() del sketched_matrix os.unlink(sketched_matrix_name) denoised_arr_name, denoised_arr_dtype, denoised_arr_shape = vol_denoise( data_dict, b0_idx, dwi_idx, model, alpha, b0_denoising, verbose, tmp_dir=tmp_dir, ) denoised_arr = np.memmap( denoised_arr_name, dtype=denoised_arr_dtype, mode="r+", shape=denoised_arr_shape, ) if verbose: t2 = time.time() print("Time taken for Patch2Self: ", t2 - t1, " seconds.") denoised_arr = _apply_post_processing( denoised_arr, shift_intensity, clip_negative_vals ) del tmp os.unlink(data_dict["data"][0]) result = np.array(denoised_arr, dtype=out_dtype) del denoised_arr os.unlink(denoised_arr_name) return result def _apply_post_processing(denoised_arr, shift_intensity, clip_negative_vals): """Apply post-processing steps such as clipping and shifting intensities. Parameters ---------- denoised_arr : ndarray The denoised array. shift_intensity : bool Shifts the distribution of intensities per volume to give non-negative values. clip_negative_vals : bool Sets negative values after denoising to 0 using `np.clip`. 
    Returns
    -------
    denoised_arr : ndarray
        The denoised array with post-processing applied.
    """
    if shift_intensity and not clip_negative_vals:
        for i in range(denoised_arr.shape[-1]):
            # Shift the volume so that its minimum value is non-negative,
            # as described in the `shift_intensity` documentation.
            min_val = np.min(denoised_arr[..., i])
            if min_val < 0:
                denoised_arr[..., i] -= min_val
    elif clip_negative_vals and not shift_intensity:
        denoised_arr.clip(min=0, out=denoised_arr)
    elif clip_negative_vals and shift_intensity:
        warn(
            "Both `clip_negative_vals` and `shift_intensity` cannot be True. \
            Defaulting to `clip_negative_vals`...",
            stacklevel=2,
        )
        denoised_arr.clip(min=0, out=denoised_arr)

    return denoised_arr
dipy-1.11.0/dipy/denoise/pca_noise_estimate.pyx000066400000000000000000000160171476546756600215660ustar00rootroot00000000000000"""
================================
PCA Based Local Noise Estimation
================================
"""

import numpy as np
import scipy.special as sps
from scipy import ndimage

cimport cython
cimport numpy as cnp

from warnings import warn

# Try to get the SVD through direct API to lapack:
try:
    from scipy.linalg.lapack import sgesvd as svd
    svd_args = [1, 0]
# If you have an older version of scipy, we fall back
# on the standard scipy SVD API:
except ImportError:
    from scipy.linalg import svd
    svd_args = [False]


@cython.boundscheck(False)
@cython.wraparound(False)
@cython.cdivision(True)
def pca_noise_estimate(data, gtab, patch_radius=1, correct_bias=True,
                       smooth=2, images_as_samples=False):
    """PCA based local noise estimation.

    Parameters
    ----------
    data : 4D array
        the input dMRI data. The first 3 dimensions must have size >=
        2 * patch_radius + 1 or size = 1.
    gtab : gradient table object
        gradient information for the data gives us the bvals and bvecs of
        diffusion data, which is needed here to select between the noise
        estimation methods.
    patch_radius : int
        The radius of the local patch to be taken around each voxel (in
        voxels). Default: 1 (estimate noise in blocks of 3x3x3 voxels).
    correct_bias : bool
        Whether to correct for bias due to Rician noise. This is an
        implementation of equation 8 in :footcite:p:`Manjon2013`.
    smooth : int
        Radius of a Gaussian smoothing filter to apply to the noise estimate
        before returning. Default: 2.
    images_as_samples : bool, optional
        Whether to use images as rows (samples) for PCA (algorithm in
        :footcite:p:`Manjon2013`) or to use images as columns (features).

    Returns
    -------
    sigma_corr: 3D array
        The local noise standard deviation estimate.

    Notes
    -----
    In :footcite:p:`Manjon2013`, images are used as samples, so voxels are
    features, therefore eigenvectors are image-shaped. However,
    :footcite:t:`Manjon2013` is not clear on how to use these eigenvectors
    to determine the noise level, so here eigenvalues (variance over samples
    explained by eigenvectors) are used to scale the eigenvectors. Use
    images_as_samples=True to use this algorithm. Alternatively, voxels can
    be used as samples using images_as_samples=False. This is not the
    canonical algorithm of :footcite:t:`Manjon2013`.

    References
    ----------
    ..
footbibliography:: """ # first identify the number of the b0 images K = gtab.b0s_mask[gtab.b0s_mask].size if K > 1: # If multiple b0 values then use MUBE noise estimate data0 = data[..., gtab.b0s_mask] sibe = False else: # if only one b0 value then SIBE noise estimate data0 = data[..., ~gtab.b0s_mask] sibe = True if patch_radius < 1: warn("Minimum patch radius must be 1, setting to 1", UserWarning) patch_radius = 1 data0 = data0.astype(np.float64) cdef: # We need to be explicit because of boundscheck = False cnp.npy_intp dsm = np.min(data0.shape[0:3]) cnp.npy_intp n0 = data0.shape[0] cnp.npy_intp n1 = data0.shape[1] cnp.npy_intp n2 = data0.shape[2] cnp.npy_intp n3 = data0.shape[3] cnp.npy_intp nsamples = n0 * n1 * n2 cnp.npy_intp i, j, k, i0, j0, k0, l0 cnp.npy_intp prx = patch_radius if n0 > 1 else 0 cnp.npy_intp pry = patch_radius if n1 > 1 else 0 cnp.npy_intp prz = patch_radius if n2 > 1 else 0 cnp.npy_intp norm = (2 * prx + 1) * (2 * pry + 1) * (2 * prz + 1) double sum_reg, temp1 double[:, :, :] I = np.zeros((n0, n1, n2)) # check dimensions of data if (dsm != 1) and (dsm < 2 * patch_radius + 1): raise ValueError("Array 'data' is incorrect shape") if images_as_samples: X = data0.reshape(nsamples, n3).T else: X = data0.reshape(nsamples, n3) # Demean: M = np.mean(X, axis=0) X = X - M U, S, Vt = svd(X, *svd_args)[:3] # Rows of Vt are the eigenvectors, in ascending eigenvalue order: W = Vt.T if images_as_samples: W = W.astype('double') # #vox(features) >> # img(samples), last eigval zero (X is centered) idx = n3 - 2 # use second-to-last eigvec V = W[:, idx].reshape(n0, n1, n2) # ref [1]_ method is ambiguous on how to use image-shaped eigvec # since eigvec is normalized, used eigval=variance for scale I = V * S[idx] else: # Project into the data space V = X.dot(W) # Grab the column corresponding to the smallest eigen-vector/-value: # #vox(samples) >> #img(features), last eigenvector is meaningful I = V[:, -1].reshape(n0, n1, n2) del V, W, X, U, S, Vt cdef: double[:, :, :] count = np.zeros((n0, n1, n2)) double[:, :, :] mean = np.zeros((n0, n1, n2)) double[:, :, :] sigma_sq = np.zeros((n0, n1, n2)) double[:, :, :, :] data0temp = data0 with nogil: for i in range(prx, n0 - prx): for j in range(pry, n1 - pry): for k in range(prz, n2 - prz): sum_reg = 0 temp1 = 0 for i0 in range(-prx, prx + 1): for j0 in range(-pry, pry + 1): for k0 in range(-prz, prz + 1): sum_reg += I[i + i0, j + j0, k + k0] / norm for l0 in range(n3): temp1 += (data0temp[i + i0, j + j0, k + k0, l0])\ / (norm * n3) for i0 in range(-prx, prx + 1): for j0 in range(-pry, pry + 1): for k0 in range(-prz, prz + 1): sigma_sq[i + i0, j + j0, k + k0] += ( I[i + i0, j + j0, k + k0] - sum_reg) ** 2 mean[i + i0, j + j0, k + k0] += temp1 count[i + i0, j + j0, k + k0] += 1 sigma_sq = np.divide(sigma_sq, count) # find the SNR and make the correction for bias due to Rician noise: if correct_bias: mean = np.divide(mean, count) snr = np.divide(mean, np.sqrt(sigma_sq)) snr_sq = (snr ** 2) # xi is practically equal to 1 above 37.4, and we overflow, raising # warnings and creating ot-a-numbers. 
# Instead, we will replace these values with 1 below with np.errstate(over='ignore', invalid='ignore'): xi = (2 + snr_sq - (np.pi / 8) * np.exp(-snr_sq / 2) * ((2 + snr_sq) * sps.iv(0, snr_sq / 4) + snr_sq * sps.iv(1, snr_sq / 4)) ** 2).astype(float) xi[snr > 37.4] = 1 sigma_corr = sigma_sq / xi sigma_corr[np.isnan(sigma_corr)] = 0 else: sigma_corr = sigma_sq if smooth is not None: sigma_corr = ndimage.gaussian_filter(sigma_corr, smooth) return np.sqrt(sigma_corr) dipy-1.11.0/dipy/denoise/shift_twist_convolution.pyx000066400000000000000000000174511476546756600227440ustar00rootroot00000000000000import numpy as np cimport numpy as cnp cimport cython cimport safe_openmp as openmp from cython.parallel import prange from dipy.reconst.shm import sh_to_sf, sf_to_sh from dipy.utils.omp import determine_num_threads from dipy.utils.omp cimport set_num_threads, restore_default_num_threads from dipy.utils.deprecator import deprecated_params @deprecated_params('sh_order', new_name='sh_order_max', since='1.9', until='2.0') def convolve(odfs_sh, kernel, sh_order_max, test_mode=False, num_threads=None, normalize=True): """Perform the shift-twist convolution with the ODF data and the lookup-table of the kernel. See :footcite:p:`Meesters2016a`, :footcite:p:`Duits2011`, :footcite:p:`Portegies2015a` and :footcite:p:`Portegies2015b` for further details about the method. Parameters ---------- odfs : array of double The ODF data in spherical harmonics format kernel : array of double The 5D lookup table sh_order_max : integer Maximal spherical harmonics order (l) test_mode : boolean Reduced convolution in one direction only for testing num_threads : int, optional Number of threads to be used for OpenMP parallelization. If None (default) the value of OMP_NUM_THREADS environment variable is used if it is set, otherwise all available threads are used. If < 0 the maximal number of threads minus $|num_threads + 1|$ is used (enter -1 to use as many threads as possible). 0 raises an error. normalize : boolean Apply max-normalization to the output such that its value range matches the input ODF data. Returns ------- output : array of double The ODF data after convolution enhancement in spherical harmonics format References ---------- .. footbibliography:: """ # convert the ODFs from SH basis to DSF sphere = kernel.get_sphere() odfs_dsf = sh_to_sf(odfs_sh, sphere, sh_order_max=sh_order_max, basis_type=None) # perform the convolution output = perform_convolution(odfs_dsf, kernel.get_lookup_table(), test_mode, num_threads) # normalize the output if normalize: output = np.multiply(output, np.amax(odfs_dsf)/np.amax(output)) # convert back to SH output_sh = sf_to_sh(output, sphere, sh_order_max=sh_order_max) return output_sh def convolve_sf(odfs_sf, kernel, test_mode=False, num_threads=None, normalize=True): """Perform the shift-twist convolution with the ODF data and the lookup-table of the kernel. Parameters ---------- odfs : array of double The ODF data sampled on a sphere kernel : array of double The 5D lookup table test_mode : boolean Reduced convolution in one direction only for testing num_threads : int, optional Number of threads to be used for OpenMP parallelization. If None (default) the value of OMP_NUM_THREADS environment variable is used if it is set, otherwise all available threads are used. If < 0 the maximal number of threads minus $|num_threads + 1|$ is used (enter -1 to use as many threads as possible). 0 raises an error. 
normalize : boolean Apply max-normalization to the output such that its value range matches the input ODF data. Returns ------- output : array of double The ODF data after convolution enhancement, sampled on a sphere """ # perform the convolution output = perform_convolution(odfs_sf, kernel.get_lookup_table(), test_mode, num_threads) # normalize the output if normalize: output = np.multiply(output, np.amax(odfs_sf)/np.amax(output)) return output @cython.wraparound(False) @cython.boundscheck(False) @cython.nonecheck(False) @cython.cdivision(True) cdef double [:, :, :, ::1] perform_convolution (double [:, :, :, ::1] odfs, double [:, :, :, :, ::1] lut, cnp.npy_intp test_mode, num_threads=None): """ Perform the shift-twist convolution with the ODF data and the lookup-table of the kernel. Parameters ---------- odfs : array of double The ODF data sampled on a sphere lut : array of double The 5D lookup table test_mode : boolean Reduced convolution in one direction only for testing num_threads : int, optional Number of threads to be used for OpenMP parallelization. If None (default) the value of OMP_NUM_THREADS environment variable is used if it is set, otherwise all available threads are used. If < 0 the maximal number of threads minus |num_threads + 1| is used (enter -1 to use as many threads as possible). 0 raises an error. Returns ------- output : array of double The ODF data after convolution enhancement """ cdef: double [:, :, :, ::1] output = np.array(odfs, copy=True) cnp.npy_intp OR1 = lut.shape[0] cnp.npy_intp OR2 = lut.shape[1] cnp.npy_intp N = lut.shape[2] cnp.npy_intp hn = (N - 1) / 2 double [:, :, :, :] totalval double [:, :, :, :] voxcount cnp.npy_intp nx = odfs.shape[0] cnp.npy_intp ny = odfs.shape[1] cnp.npy_intp nz = odfs.shape[2] cnp.npy_intp threads_to_use = -1 cnp.npy_intp all_cores = openmp.omp_get_num_procs() cnp.npy_intp corient, orient, cx, cy, cz, x, y, z cnp.npy_intp expectedvox cnp.npy_intp edgeNormalization = True threads_to_use = determine_num_threads(num_threads) set_num_threads(threads_to_use) if test_mode: edgeNormalization = False OR2 = 1 # expected number of voxels in kernel totalval = np.zeros((OR1, nx, ny, nz)) voxcount = np.zeros((OR1, nx, ny, nz)) expectedvox = nx * ny * nz with nogil: # loop over ODFs cx,cy,cz,orient --> y and v for corient in prange(OR1, schedule='guided'): for cx in range(nx): for cy in range(ny): for cz in range(nz): # loop over kernel x,y,z,orient --> x and r for x in range(int_max(cx - hn, 0), int_min(cx + hn + 1, ny - 1)): for y in range(int_max(cy - hn, 0), int_min(cy + hn + 1, ny - 1)): for z in range(int_max(cz - hn, 0), int_min(cz + hn + 1, nz - 1)): voxcount[corient, cx, cy, cz] += 1.0 for orient in range(0, OR2): totalval[corient, cx, cy, cz] += \ odfs[x, y, z, orient] * \ lut[corient, orient, x - (cx - hn), y - (cy - hn), z - (cz - hn)] if edgeNormalization: output[cx, cy, cz, corient] = \ totalval[corient, cx, cy, cz] * expectedvox/voxcount[corient, cx, cy, cz] else: output[cx, cy, cz, corient] = \ totalval[corient, cx, cy, cz] # Reset number of OpenMP cores to default if num_threads is not None: restore_default_num_threads() return output cdef inline cnp.npy_intp int_max(cnp.npy_intp a, cnp.npy_intp b) nogil: return a if a >= b else b cdef inline cnp.npy_intp int_min(cnp.npy_intp a, cnp.npy_intp b) nogil: return a if a <= b else b 
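# Usage sketch (hypothetical input): `odfs_sf` is assumed to be a 4D array of
# ODF values sampled on the sphere of the enhancement kernel; the kernel
# parameters below are illustrative only.
#
#     from dipy.denoise.enhancement_kernel import EnhancementKernel
#     from dipy.denoise.shift_twist_convolution import convolve_sf
#
#     kernel = EnhancementKernel(1.0, 0.02, 1)  # D33, D44, t
#     enhanced = convolve_sf(odfs_sf, kernel, num_threads=4)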
dipy-1.11.0/dipy/denoise/tests/000077500000000000000000000000001476546756600163265ustar00rootroot00000000000000dipy-1.11.0/dipy/denoise/tests/__init__.py000066400000000000000000000000001476546756600204250ustar00rootroot00000000000000dipy-1.11.0/dipy/denoise/tests/meson.build000066400000000000000000000005111476546756600204650ustar00rootroot00000000000000python_sources = [ '__init__.py', 'test_ascm.py', 'test_denoise.py', 'test_gibbs.py', 'test_kernel.py', 'test_lpca.py', 'test_nlmeans.py', 'test_noise_estimate.py', 'test_non_local_means.py', 'test_patch2self.py', ] py3.install_sources( python_sources, pure: false, subdir: 'dipy/denoise/tests' ) dipy-1.11.0/dipy/denoise/tests/test_ascm.py000066400000000000000000000077521476546756600206750ustar00rootroot00000000000000import nibabel as nib import numpy as np from numpy.testing import assert_, assert_array_almost_equal, assert_equal import dipy.data as dpd from dipy.denoise.adaptive_soft_matching import adaptive_soft_matching from dipy.denoise.noise_estimate import estimate_sigma from dipy.denoise.non_local_means import non_local_means from dipy.testing.decorators import set_random_number_generator def test_ascm_static(): S0 = 100 * np.ones((20, 20, 20), dtype="f8") S0n1 = non_local_means(S0, sigma=0, rician=False, patch_radius=1, block_radius=1) S0n2 = non_local_means(S0, sigma=0, rician=False, patch_radius=2, block_radius=1) S0n = adaptive_soft_matching(S0, S0n1, S0n2, 0) assert_array_almost_equal(S0, S0n) @set_random_number_generator() def test_ascm_random_noise(rng): S0 = 100 + 2 * rng.standard_normal((22, 23, 30)) S0n1 = non_local_means(S0, sigma=1, rician=False, patch_radius=1, block_radius=1) S0n2 = non_local_means(S0, sigma=1, rician=False, patch_radius=2, block_radius=1) S0n = adaptive_soft_matching(S0, S0n1, S0n2, 1) print(S0.mean(), S0.min(), S0.max()) print(S0n.mean(), S0n.min(), S0n.max()) assert_(S0n.min() > S0.min()) assert_(S0n.max() < S0.max()) assert_equal(np.round(S0n.mean()), 100) @set_random_number_generator() def test_ascm_rmse_with_nlmeans(rng): # checks the smoothness S0 = np.ones((30, 30, 30)) * 100 S0[10:20, 10:20, 10:20] = 50 S0[20:30, 20:30, 20:30] = 0 S0_noise = S0 + 20 * rng.standard_normal((30, 30, 30)) print("Original RMSE", np.sum(np.abs(S0 - S0_noise)) / np.sum(S0)) S0n1 = non_local_means( S0_noise, sigma=400, rician=False, patch_radius=1, block_radius=1 ) print("Smaller patch RMSE", np.sum(np.abs(S0 - S0n1)) / np.sum(S0)) S0n2 = non_local_means( S0_noise, sigma=400, rician=False, patch_radius=2, block_radius=2 ) print("Larger patch RMSE", np.sum(np.abs(S0 - S0n2)) / np.sum(S0)) S0n = adaptive_soft_matching(S0, S0n1, S0n2, 400) print("ASCM RMSE", np.sum(np.abs(S0 - S0n)) / np.sum(S0)) assert_( np.sum(np.abs(S0 - S0n)) / np.sum(S0) < np.sum(np.abs(S0 - S0n1)) / np.sum(S0) ) assert_( np.sum(np.abs(S0 - S0n)) / np.sum(S0) < np.sum(np.abs(S0 - S0_noise)) / np.sum(S0) ) assert_(90 < np.mean(S0n) < 110) @set_random_number_generator() def test_sharpness(rng): # check the edge-preserving nature S0 = np.ones((30, 30, 30)) * 100 S0[10:20, 10:20, 10:20] = 50 S0[20:30, 20:30, 20:30] = 0 S0_noise = S0 + 20 * rng.standard_normal((30, 30, 30)) S0n1 = non_local_means( S0_noise, sigma=400, rician=False, patch_radius=1, block_radius=1 ) edg1 = np.abs(np.mean(S0n1[8, 10:20, 10:20] - S0n1[12, 10:20, 10:20]) - 50) print("Edge gradient smaller patch", edg1) S0n2 = non_local_means( S0_noise, sigma=400, rician=False, patch_radius=2, block_radius=2 ) edg2 = np.abs(np.mean(S0n2[8, 10:20, 10:20] - S0n2[12, 10:20, 10:20]) - 50) 
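    # Larger patches smooth edges more aggressively, so edg2 is expected to
    # exceed both the small-patch gradient (edg1) and the ASCM result (edg),
    # which the asserts below verify.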
print("Edge gradient larger patch", edg2) S0n = adaptive_soft_matching(S0, S0n1, S0n2, 400) edg = np.abs(np.mean(S0n[8, 10:20, 10:20] - S0n[12, 10:20, 10:20]) - 50) print("Edge gradient ASCM", edg) assert_(edg2 > edg1) assert_(edg2 > edg) assert_(np.abs(edg1 - edg) < 1.5) def test_ascm_accuracy(): f_name = dpd.get_fnames(name="ascm_test") test_ascm_data_ref = np.asanyarray(nib.load(f_name).dataobj) test_data = np.asanyarray(nib.load(dpd.get_fnames(name="aniso_vox")).dataobj) # the test data was constructed in this manner mask = test_data > 50 sigma = estimate_sigma(test_data, N=4).item() den_small = non_local_means( test_data, sigma=sigma, mask=mask, patch_radius=1, block_radius=1, rician=True ) den_large = non_local_means( test_data, sigma=sigma, mask=mask, patch_radius=2, block_radius=1, rician=True ) S0n = np.array(adaptive_soft_matching(test_data, den_small, den_large, sigma)) assert_array_almost_equal(S0n, test_ascm_data_ref, decimal=4) dipy-1.11.0/dipy/denoise/tests/test_denoise.py000066400000000000000000000007271476546756600213730ustar00rootroot00000000000000import dipy.data as dpd from dipy.denoise.nlmeans import nlmeans from dipy.denoise.noise_estimate import estimate_sigma from dipy.io.image import load_nifti def test_denoise(): """ """ fdata, fbval, fbvec = dpd.get_fnames() # Test on 4D image: data, _ = load_nifti(fdata) sigma1 = estimate_sigma(data) nlmeans(data, sigma=sigma1) # Test on 3D image: data = data[..., 0] sigma2 = estimate_sigma(data) nlmeans(data, sigma=sigma2) dipy-1.11.0/dipy/denoise/tests/test_gibbs.py000066400000000000000000000217341476546756600210340ustar00rootroot00000000000000import numpy as np from numpy.testing import assert_, assert_array_almost_equal, assert_raises from dipy.denoise.gibbs import ( _gibbs_removal_1d, _gibbs_removal_2d, _image_tv, gibbs_removal, ) def setup_module(): """Module-level setup""" global image_gibbs, image_gt, image_cor, Nre # Produce a 2D image Nori = 32 image = np.zeros((6 * Nori, 6 * Nori)) image[Nori : 2 * Nori, Nori : 2 * Nori] = 1 image[Nori : 2 * Nori, 4 * Nori : 5 * Nori] = 1 image[2 * Nori : 3 * Nori, Nori : 3 * Nori] = 1 image[3 * Nori : 4 * Nori, 2 * Nori : 3 * Nori] = 2 image[3 * Nori : 4 * Nori, 4 * Nori : 5 * Nori] = 1 image[4 * Nori : 5 * Nori, 3 * Nori : 5 * Nori] = 3 # Corrupt image with gibbs ringing c = np.fft.fft2(image) c = np.fft.fftshift(c) c_crop = c[48:144, 48:144] image_gibbs = abs(np.fft.ifft2(c_crop) / 4) # Produce ground truth Nre = 16 image_gt = np.zeros((6 * Nre, 6 * Nre)) image_gt[Nre : 2 * Nre, Nre : 2 * Nre] = 1 image_gt[Nre : 2 * Nre, 4 * Nre : 5 * Nre] = 1 image_gt[2 * Nre : 3 * Nre, Nre : 3 * Nre] = 1 image_gt[3 * Nre : 4 * Nre, 2 * Nre : 3 * Nre] = 2 image_gt[3 * Nre : 4 * Nre, 4 * Nre : 5 * Nre] = 1 image_gt[4 * Nre : 5 * Nre, 3 * Nre : 5 * Nre] = 3 # Suppressing gibbs artefacts image_cor = _gibbs_removal_2d(image_gibbs) def test_parallel(): # Only relevant for 3d or 4d inputs # Make input data input_2d = image_gibbs.copy() input_3d = np.stack([input_2d, input_2d], axis=2) input_4d = np.stack([input_3d, input_3d], axis=3) # Test 3d case output_3d_parallel = gibbs_removal(input_3d, inplace=False, num_processes=2) output_3d_no_parallel = gibbs_removal(input_3d, inplace=False, num_processes=1) assert_array_almost_equal(output_3d_parallel, output_3d_no_parallel) # Test 4d case output_4d_parallel = gibbs_removal(input_4d, inplace=False, num_processes=2) output_4d_no_parallel = gibbs_removal(input_4d, inplace=False, num_processes=1) assert_array_almost_equal(output_4d_parallel, 
output_4d_no_parallel) # Test num_processes=None case output_4d_all_cpu = gibbs_removal(input_4d, inplace=False, num_processes=None) assert_array_almost_equal(output_4d_all_cpu, output_4d_no_parallel) def test_inplace(): # Make input data input_2d = image_gibbs.copy() input_3d = np.stack([input_2d, input_2d], axis=2) input_4d = np.stack([input_3d, input_3d], axis=3) # Test 2d cases output_2d = gibbs_removal(input_2d, inplace=False) assert_raises(AssertionError, assert_array_almost_equal, input_2d, output_2d) output_2d = gibbs_removal(input_2d, inplace=True) assert_array_almost_equal(input_2d, output_2d) # Test 3d case output_3d = gibbs_removal(input_3d, inplace=False) assert_raises(AssertionError, assert_array_almost_equal, input_3d, output_3d) output_3d = gibbs_removal(input_3d, inplace=True) assert_array_almost_equal(input_3d, output_3d) # Test 4d case output_4d = gibbs_removal(input_4d, inplace=False) assert_raises(AssertionError, assert_array_almost_equal, input_4d, output_4d) output_4d = gibbs_removal(input_4d, inplace=True) assert_array_almost_equal(input_4d, output_4d) def test_gibbs_2d(): # Correction of gibbs ringing have to be closer to gt than denoised image diff_raw = np.mean(abs(image_gibbs - image_gt)) diff_cor = np.mean(abs(image_cor - image_gt)) assert_(diff_raw > diff_cor) # Test if gibbs_removal works for 2D data image_cor2 = gibbs_removal(image_gibbs, inplace=False) assert_array_almost_equal(image_cor2, image_cor) def test_gibbs_3d(): image3d = np.zeros((6 * Nre, 6 * Nre, 2)) image3d[:, :, 0] = image_gibbs image3d[:, :, 1] = image_gibbs image3d_cor = gibbs_removal(image3d, slice_axis=2) assert_array_almost_equal(image3d_cor[:, :, 0], image_cor) assert_array_almost_equal(image3d_cor[:, :, 1], image_cor) def test_gibbs_4d(): image4d = np.zeros((6 * Nre, 6 * Nre, 2, 2)) image4d[:, :, 0, 0] = image_gibbs image4d[:, :, 1, 0] = image_gibbs image4d[:, :, 0, 1] = image_gibbs image4d[:, :, 1, 1] = image_gibbs image4d_cor = gibbs_removal(image4d) assert_array_almost_equal(image4d_cor[:, :, 0, 0], image_cor) assert_array_almost_equal(image4d_cor[:, :, 1, 0], image_cor) assert_array_almost_equal(image4d_cor[:, :, 0, 1], image_cor) assert_array_almost_equal(image4d_cor[:, :, 1, 1], image_cor) def test_swapped_gibbs_2d(): # 2D case: In this case slice_axis is a dummy variable. 
Since data is # already a single 2D image, to axis swapping is required image_cor0 = gibbs_removal(image_gibbs, slice_axis=0, inplace=False) assert_array_almost_equal(image_cor0, image_cor) image_cor1 = gibbs_removal(image_gibbs, slice_axis=1, inplace=False) assert_array_almost_equal(image_cor1, image_cor) image_cor2 = gibbs_removal(image_gibbs, slice_axis=2, inplace=False) assert_array_almost_equal(image_cor2, image_cor) def test_swapped_gibbs_3d(): image3d = np.zeros((6 * Nre, 2, 6 * Nre)) image3d[:, 0, :] = image_gibbs image3d[:, 1, :] = image_gibbs image3d_cor = gibbs_removal(image3d, slice_axis=1) assert_array_almost_equal(image3d_cor[:, 0, :], image_cor) assert_array_almost_equal(image3d_cor[:, 1, :], image_cor) image3d = np.zeros((2, 6 * Nre, 6 * Nre)) image3d[0, :, :] = image_gibbs image3d[1, :, :] = image_gibbs image3d_cor = gibbs_removal(image3d, slice_axis=0) assert_array_almost_equal(image3d_cor[0, :, :], image_cor) assert_array_almost_equal(image3d_cor[1, :, :], image_cor) def test_swapped_gibbs_4d(): image4d = np.zeros((2, 6 * Nre, 6 * Nre, 2)) image4d[0, :, :, 0] = image_gibbs image4d[1, :, :, 0] = image_gibbs image4d[0, :, :, 1] = image_gibbs image4d[1, :, :, 1] = image_gibbs image4d_cor = gibbs_removal(image4d, slice_axis=0) assert_array_almost_equal(image4d_cor[0, :, :, 0], image_cor) assert_array_almost_equal(image4d_cor[1, :, :, 0], image_cor) assert_array_almost_equal(image4d_cor[0, :, :, 1], image_cor) assert_array_almost_equal(image4d_cor[1, :, :, 1], image_cor) def test_gibbs_errors(): assert_raises(ValueError, gibbs_removal, np.ones((2, 2, 2, 2, 2))) assert_raises(ValueError, gibbs_removal, np.ones(2)) assert_raises(ValueError, gibbs_removal, np.ones((2, 2, 2)), slice_axis=3) assert_raises(TypeError, gibbs_removal, image_gibbs.copy(), inplace="True") # Test for valid num_processes assert_raises(TypeError, gibbs_removal, image_gibbs.copy(), num_processes="1") assert_raises(ValueError, gibbs_removal, image_gibbs.copy(), num_processes=0) # Test for valid input dimensionality assert_raises(ValueError, gibbs_removal, np.ones(2)) # 1D assert_raises(ValueError, gibbs_removal, np.ones((2, 2, 2, 2, 2))) # 5D def test_gibbs_subfunction(): # This complementary test is to make sure that Gibbs suppression # sub-functions are properly implemented # Testing correction along axis 0 image_a0 = _gibbs_removal_1d(image_gibbs, axis=0) # After this step tv along axis 0 should provide lower values than along # axis 1 tv0_a0_r, tv0_a0_l = _image_tv(image_a0, axis=0) tv0_a0 = np.minimum(tv0_a0_r, tv0_a0_l) tv1_a0_r, tv1_a0_l = _image_tv(image_a0, axis=1) tv1_a0 = np.minimum(tv1_a0_r, tv1_a0_l) # Let's check that mean_tv0 = np.mean(abs(tv0_a0)) mean_tv1 = np.mean(abs(tv1_a0)) assert_(mean_tv0 < mean_tv1) # Testing correction along axis 1 image_a1 = _gibbs_removal_1d(image_gibbs, axis=1) # After this step tv along axis 1 should provide higher values than along # axis 0 tv0_a1_r, tv0_a1_l = _image_tv(image_a1, axis=0) tv0_a1 = np.minimum(tv0_a1_r, tv0_a1_l) tv1_a1_r, tv1_a1_l = _image_tv(image_a1, axis=1) tv1_a1 = np.minimum(tv1_a1_r, tv1_a1_l) # Let's check that mean_tv0 = np.mean(abs(tv0_a1)) mean_tv1 = np.mean(abs(tv1_a1)) assert_(mean_tv0 > mean_tv1) def test_non_square_image(): # Produce non-square 2D image Nori = 32 img = np.zeros((6 * Nori, 6 * Nori)) img[Nori : 2 * Nori, Nori : 2 * Nori] = 1 img[2 * Nori : 3 * Nori, Nori : 3 * Nori] = 1 img[3 * Nori : 4 * Nori, 2 * Nori : 3 * Nori] = 2 img[4 * Nori : 5 * Nori, 3 * Nori : 5 * Nori] = 3 # Corrupt image with gibbs ringing c = 
    c = np.fft.fft2(img)
    c = np.fft.fftshift(c)
    c_crop = c[48:144, :]
    img_gibbs = abs(np.fft.ifft2(c_crop) / 2)

    # Produce ground truth
    Nre = 16
    img_gt = np.zeros((6 * Nre, 6 * Nori))
    img_gt[Nre : 2 * Nre, Nori : 2 * Nori] = 1
    img_gt[2 * Nre : 3 * Nre, Nori : 3 * Nori] = 1
    img_gt[3 * Nre : 4 * Nre, 2 * Nori : 3 * Nori] = 2
    img_gt[4 * Nre : 5 * Nre, 3 * Nori : 5 * Nori] = 3

    # Suppressing gibbs artefacts
    img_cor = gibbs_removal(img_gibbs, inplace=False)

    # The Gibbs-corrected image has to be closer to the ground truth than the
    # Gibbs-corrupted image
    diff_raw = np.mean(abs(img_gibbs - img_gt))
    diff_cor = np.mean(abs(img_cor - img_gt))
    assert_(diff_raw > diff_cor)
dipy-1.11.0/dipy/denoise/tests/test_kernel.py000066400000000000000000000127721476546756600212260ustar00rootroot00000000000000import warnings

import numpy as np
import numpy.testing as npt

from dipy.core.sphere import Sphere
from dipy.denoise.enhancement_kernel import EnhancementKernel
from dipy.denoise.shift_twist_convolution import convolve, convolve_sf
from dipy.reconst.shm import descoteaux07_legacy_msg, sf_to_sh, sh_to_sf


def test_enhancement_kernel():
    """Test if the kernel values are correct by comparison against the values
    originally calculated by the implementation in Mathematica, and at the
    same time check the symmetry of the kernel."""
    D33 = 1.0
    D44 = 0.04
    t = 1
    k = EnhancementKernel(D33, D44, t, orientations=0, force_recompute=True)

    y = np.array([0.0, 0.0, 0.0])
    v = np.array([0.0, 0.0, 1.0])
    orientationlist = [
        [0.0, 0.0, 1.0],
        [-0.0527864, 0.688191, 0.723607],
        [-0.67082, -0.16246, 0.723607],
        [-0.0527864, -0.688191, 0.723607],
        [0.638197, -0.262866, 0.723607],
        [0.831052, 0.238856, 0.502295],
        [0.262866, -0.809017, -0.525731],
        [0.812731, 0.295242, -0.502295],
        [-0.029644, 0.864188, -0.502295],
        [-0.831052, 0.238856, -0.502295],
        [-0.638197, -0.262866, -0.723607],
        [-0.436009, 0.864188, -0.251148],
        [-0.687157, -0.681718, 0.251148],
        [0.67082, -0.688191, 0.276393],
        [0.67082, 0.688191, 0.276393],
        [0.947214, 0.16246, -0.276393],
        [-0.861803, -0.425325, -0.276393],
    ]
    positionlist = [
        [-0.108096, 0.0412229, 0.339119],
        [0.220647, -0.422053, 0.427524],
        [-0.337432, -0.0644619, -0.340777],
        [0.172579, -0.217602, -0.292446],
        [-0.271575, -0.125249, -0.350906],
        [-0.483807, 0.326651, 0.191993],
        [-0.480936, -0.0718426, 0.33202],
        [0.497193, -0.00585659, -0.251344],
        [0.237737, 0.013634, -0.471988],
        [0.367569, -0.163581, 0.0723955],
        [0.47859, -0.143252, 0.318579],
        [-0.21474, -0.264929, -0.46786],
        [-0.0684234, 0.0342464, 0.0942475],
        [0.344272, 0.423119, -0.303866],
        [0.0430714, 0.216233, -0.308475],
        [0.386085, 0.127333, 0.0503609],
        [0.334723, 0.071415, 0.403906],
    ]
    kernelvalues = [
        0.10701063104295713,
        0.0030052117308328923,
        0.003125410084676201,
        0.0031765819772012613,
        0.003127254657020615,
        0.0001295130396491743,
        6.882352014430076e-14,
        1.3821277371353332e-13,
        1.3951939946082493e-13,
        1.381612071786285e-13,
        5.0861109163441125e-17,
        1.0722120295517027e-10,
        2.425145934791457e-6,
        3.557919265806602e-6,
        3.6669510385105265e-6,
        5.97473789679846e-11,
        6.155412262223178e-11,
    ]

    for p in range(len(orientationlist)):
        r = np.array(orientationlist[p])
        x = np.array(positionlist[p])
        npt.assert_almost_equal(k.evaluate_kernel(x, y, r, v), kernelvalues[p])


def test_spike():
    """Test if a convolution with a delta spike is equal to the kernel
    saved in the lookup table."""
    # create kernel
    D33 = 1.0
    D44 = 0.04
    t = 1
    num_orientations = 5
    k = EnhancementKernel(
        D33, D44, t, orientations=num_orientations, force_recompute=True
    )

    # create a delta spike
    numorientations = k.get_orientations().shape[0]
    spike = np.zeros((7,
7, 7, numorientations), dtype=np.float64) spike[3, 3, 3, 0] = 1 # convolve kernel with delta spike csd_enh = convolve_sf(spike, k, test_mode=True, normalize=False) # check if kernel matches with the convolved delta spike totalsum = 0.0 for i in range(0, numorientations): totalsum += np.sum( np.array(k.get_lookup_table())[i, 0, :, :, :] - np.array(csd_enh)[:, :, :, i] ) npt.assert_equal(totalsum, 0.0) def test_normalization(): """Test the normalization routine applied after a convolution""" # create kernel D33 = 1.0 D44 = 0.04 t = 1 num_orientations = 5 k = EnhancementKernel( D33, D44, t, orientations=num_orientations, force_recompute=True ) # create a constant dataset numorientations = k.get_orientations().shape[0] spike = np.ones((7, 7, 7, numorientations), dtype=np.float64) # convert dataset to SH with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) spike_sh = sf_to_sh(spike, k.get_sphere(), sh_order_max=8) # convolve kernel with delta spike and apply normalization csd_enh = convolve(spike_sh, k, sh_order_max=8, test_mode=True, normalize=True) # convert dataset to DSF csd_enh_dsf = sh_to_sf(csd_enh, k.get_sphere(), sh_order_max=8, basis_type=None) # test if the normalization is performed correctly npt.assert_almost_equal(np.amax(csd_enh_dsf), np.amax(spike)) def test_kernel_input(): """Test the kernel for inputs of type Sphere, type int and for input None""" sph = Sphere(x=1, y=0, z=0) D33 = 1.0 D44 = 0.04 t = 1 k = EnhancementKernel(D33, D44, t, orientations=sph, force_recompute=True) npt.assert_equal(k.get_lookup_table().shape, (1, 1, 7, 7, 7)) num_orientations = 2 k = EnhancementKernel( D33, D44, t, orientations=num_orientations, force_recompute=True ) npt.assert_equal(k.get_lookup_table().shape, (2, 2, 7, 7, 7)) k = EnhancementKernel(D33, D44, t, orientations=0, force_recompute=True) npt.assert_equal(k.get_lookup_table().shape, (0, 0, 7, 7, 7)) dipy-1.11.0/dipy/denoise/tests/test_lpca.py000066400000000000000000000404471476546756600206670ustar00rootroot00000000000000import warnings import numpy as np from numpy.testing import ( assert_, assert_array_almost_equal, assert_equal, assert_raises, assert_warns, ) import scipy.special as sps from dipy.core.gradients import generate_bvecs, gradient_table from dipy.denoise.localpca import ( _pca_classifier, compute_num_samples, compute_patch_size, compute_suggested_patch_radius, create_patch_radius_arr, dimensionality_problem_message, genpca, localpca, mppca, ) from dipy.sims.voxel import multi_tensor from dipy.testing.decorators import set_random_number_generator def setup_module(): global gtab # generate a gradient table for phantom data directions8 = generate_bvecs(8) directions30 = generate_bvecs(30) directions60 = generate_bvecs(60) # Create full dataset parameters # (6 b-values = 0, 8 directions for b-value 300, 30 directions for b-value # 1000 and 60 directions for b-value 2000) bvals = np.hstack( (np.zeros(6), 300 * np.ones(8), 1000 * np.ones(30), 2000 * np.ones(60)) ) bvecs = np.vstack((np.zeros((6, 3)), directions8, directions30, directions60)) gtab = gradient_table(bvals, bvecs=bvecs) def rfiw_phantom(gtab, snr=None, rng=None): """rectangle fiber immersed in water""" # define voxel index slice_ind = np.zeros((10, 10, 8)) slice_ind[4:7, 4:7, :] = 1 slice_ind[4:7, 7, :] = 2 slice_ind[7, 7, :] = 3 slice_ind[7, 4:7, :] = 4 slice_ind[7, 3, :] = 5 slice_ind[4:7, 3, :] = 6 slice_ind[3, 3, :] = 7 slice_ind[3, 4:7, :] = 8 slice_ind[3, 7, :] = 9 # Define 
tissue diffusion parameters
    # Restricted diffusion
    ADr = 0.99e-3
    RDr = 0.0
    # Hindered diffusion
    ADh = 2.26e-3
    RDh = 0.87e-3
    # S0 value for tissue
    S1 = 50
    # Fraction between Restricted and Hindered diffusion
    fia = 0.51
    # Define water diffusion
    Dwater = 3e-3
    S2 = 100  # S0 value for water

    # Define tissue volume fraction for each voxel type (in index order)
    f = np.array([0.0, 1.0, 0.6, 0.18, 0.30, 0.15, 0.50, 0.35, 0.70, 0.42])

    # Define S0 for each voxel (in index order)
    S0 = S1 * f + S2 * (1 - f)

    # Multi-tensor simulations assume that each water pool has a constant S0.
    # Since tissue and water voxels are assumed to have different S0 values
    # here, the tissue volume fractions have to be adjusted to the measured f
    # values when a constant S0 is assumed. With this correction, the
    # simulations are analogous to simulations in which S0 differs for each
    # medium. (For more details on this, contact the phantom designer.)
    f1 = f * S1 / S0

    mevals = np.array([[ADr, RDr, RDr], [ADh, RDh, RDh], [Dwater, Dwater, Dwater]])
    angles = [(0, 0, 1), (0, 0, 1), (0, 0, 1)]
    DWI = np.zeros(slice_ind.shape + (gtab.bvals.size,))
    for i in range(10):
        fractions = [f1[i] * fia * 100, f1[i] * (1 - fia) * 100, (1 - f1[i]) * 100]
        sig, direction = multi_tensor(
            gtab, mevals, S0=S0[i], angles=angles, fractions=fractions, snr=None
        )
        DWI[slice_ind == i, :] = sig

    if snr is None:
        return DWI
    else:
        if rng is None:
            rng = np.random.default_rng()
        sigma = S2 * 1.0 / snr
        n1 = rng.normal(0, sigma, size=DWI.shape)
        n2 = rng.normal(0, sigma, size=DWI.shape)
        return [
            np.sqrt((DWI / np.sqrt(2) + n1) ** 2 + (DWI / np.sqrt(2) + n2) ** 2),
            sigma,
        ]


def test_lpca_static():
    S0 = 100 * np.ones((20, 20, 20, 20), dtype="f8")
    S0ns = localpca(S0, sigma=np.ones((20, 20, 20), dtype=np.float64))
    assert_array_almost_equal(S0, S0ns)


@set_random_number_generator()
def test_lpca_random_noise(rng):
    S0 = 100 + 2 * rng.standard_normal((22, 23, 30, 20))
    S0ns = localpca(S0, sigma=np.std(S0))

    assert_(S0ns.min() > S0.min())
    assert_(S0ns.max() < S0.max())
    assert_equal(np.round(S0ns.mean()), 100)


@set_random_number_generator()
def test_lpca_boundary_behaviour(rng):
    # check whether the first slice is getting denoised or not
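    # (localpca operates on local 3D patches, so an edge slice has fewer
    # complete neighbourhoods; this exercises that boundary behaviour)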
S0 = 100 * np.ones((20, 20, 20, 20), dtype="f8") S0[:, :, 0, :] = S0[:, :, 0, :] + 2 * rng.standard_normal((20, 20, 20)) S0_first = S0[:, :, 0, :] S0ns = localpca(S0, sigma=np.std(S0)) S0ns_first = S0ns[:, :, 0, :] rmses = np.sum(np.abs(S0ns_first - S0_first)) / (100.0 * 20.0 * 20.0 * 20.0) # shows that S0n_first is not very close to S0_first assert_(rmses > 0.0001) assert_equal(np.round(S0ns_first.mean()), 100) rmses = np.sum(np.abs(S0ns_first - S0_first)) / (100.0 * 20.0 * 20.0 * 20.0) # shows that S0n_first is not very close to S0_first assert_(rmses > 0.0001) assert_equal(np.round(S0ns_first.mean()), 100) @set_random_number_generator() def test_lpca_rmse(rng): S0_w_noise = 100 + 2 * rng.standard_normal((22, 23, 30, 20)) rmse_w_noise = np.sqrt(np.mean((S0_w_noise - 100) ** 2)) S0_denoised = localpca(S0_w_noise, sigma=np.std(S0_w_noise)) rmse_denoised = np.sqrt(np.mean((S0_denoised - 100) ** 2)) # Denoising should always improve the RMSE: assert_(rmse_denoised < rmse_w_noise) @set_random_number_generator() def test_lpca_sharpness(rng): S0 = np.ones((30, 30, 30, 20), dtype=np.float64) * 100 S0[10:20, 10:20, 10:20, :] = 50 S0[20:30, 20:30, 20:30, :] = 0 S0 = S0 + 20 * rng.standard_normal((30, 30, 30, 20)) S0ns = localpca(S0, sigma=20.0) # check the edge gradient edgs = np.abs(np.mean(S0ns[8, 10:20, 10:20] - S0ns[12, 10:20, 10:20]) - 50) assert_(edgs < 2) def test_lpca_dtype(): # If out_dtype is not specified, we retain the original precision: S0 = 200 * np.ones((20, 20, 20, 3), dtype=np.float64) S0ns = localpca(S0, sigma=1) assert_equal(S0.dtype, S0ns.dtype) S0 = 200 * np.ones((20, 20, 20, 20), dtype=np.uint16) S0ns = localpca(S0, sigma=np.ones((20, 20, 20))) assert_equal(S0.dtype, S0ns.dtype) # If we set out_dtype, we get what we asked for: S0 = 200 * np.ones((20, 20, 20, 20), dtype=np.uint16) S0ns = localpca(S0, sigma=np.ones((20, 20, 20)), out_dtype=np.float32) assert_equal(np.float32, S0ns.dtype) # If we set a few entries to zero, this induces negative entries in the # Resulting denoised array: S0[5:8, 5:8, 5:8] = 0 # But if we should always get all non-negative results: S0ns = localpca(S0, sigma=np.ones((20, 20, 20)), out_dtype=np.uint16) assert_(np.all(S0ns >= 0)) # And no wrap-around to crazy high values: assert_(np.all(S0ns <= 200)) def test_lpca_wrong(): S0 = np.ones((20, 20)) assert_raises(ValueError, localpca, S0, sigma=1) @set_random_number_generator() def test_phantom(rng): DWI_clean = rfiw_phantom(gtab, snr=None, rng=rng) DWI, sigma = rfiw_phantom(gtab, snr=30, rng=rng) # To test without Rician correction temp = (DWI_clean / sigma) ** 2 DWI_clean_wrc = ( sigma * np.sqrt(np.pi / 2) * np.exp(-0.5 * temp) * ( (1 + 0.5 * temp) * sps.iv(0, 0.25 * temp) + 0.5 * temp * sps.iv(1, 0.25 * temp) ) ** 2 ) DWI_den = localpca(DWI, sigma=sigma, patch_radius=3) rmse_den = np.sum(np.abs(DWI_clean - DWI_den)) / np.sum(np.abs(DWI_clean)) rmse_noisy = np.sum(np.abs(DWI_clean - DWI)) / np.sum(np.abs(DWI_clean)) rmse_den_wrc = np.sum(np.abs(DWI_clean_wrc - DWI_den)) / np.sum( np.abs(DWI_clean_wrc) ) rmse_noisy_wrc = np.sum(np.abs(DWI_clean_wrc - DWI)) / np.sum(np.abs(DWI_clean_wrc)) assert_(np.max(DWI_clean) / sigma < np.max(DWI_den) / sigma) assert_(np.max(DWI_den) / sigma < np.max(DWI) / sigma) assert_(rmse_den < rmse_noisy) assert_(rmse_den_wrc < rmse_noisy_wrc) # Check if the results of different PCA methods (eig, svd) are similar DWI_den_svd = localpca(DWI, sigma=sigma, pca_method="svd", patch_radius=3) assert_array_almost_equal(DWI_den, DWI_den_svd) assert_raises(ValueError, localpca, DWI, 
sigma=sigma, pca_method="empty") # Try this with a sigma volume, instead of a scalar sigma_vol = sigma * np.ones(DWI.shape[:-1]) mask = np.zeros_like(DWI, dtype=bool)[..., 0] mask[2:-2, 2:-2, 2:-2] = True DWI_den = localpca(DWI, sigma=sigma_vol, mask=mask, patch_radius=3) DWI_clean_masked = DWI_clean.copy() DWI_clean_masked[~mask] = 0 DWI_masked = DWI.copy() DWI_masked[~mask] = 0 rmse_den = np.sum(np.abs(DWI_clean_masked - DWI_den)) / np.sum( np.abs(DWI_clean_masked) ) rmse_noisy = np.sum(np.abs(DWI_clean_masked - DWI_masked)) / np.sum( np.abs(DWI_clean_masked) ) DWI_clean_wrc_masked = DWI_clean_wrc.copy() DWI_clean_wrc_masked[~mask] = 0 rmse_den_wrc = np.sum(np.abs(DWI_clean_wrc_masked - DWI_den)) / np.sum( np.abs(DWI_clean_wrc_masked) ) rmse_noisy_wrc = np.sum(np.abs(DWI_clean_wrc_masked - DWI_masked)) / np.sum( np.abs(DWI_clean_wrc_masked) ) assert_(np.max(DWI_clean) / sigma < np.max(DWI_den) / sigma) assert_(np.max(DWI_den) / sigma < np.max(DWI) / sigma) assert_(rmse_den < rmse_noisy) assert_(rmse_den_wrc < rmse_noisy_wrc) @set_random_number_generator() def test_lpca_ill_conditioned(rng): DWI, sigma = rfiw_phantom(gtab, snr=30, rng=rng) for patch_radius in [1, [1, 1, 1]]: assert_warns(UserWarning, localpca, DWI, sigma=sigma, patch_radius=patch_radius) @set_random_number_generator() def test_lpca_radius_wrong_shape(rng): DWI, sigma = rfiw_phantom(gtab, snr=30, rng=rng) for patch_radius in [[2, 2], [2, 2, 2, 2]]: assert_raises(ValueError, localpca, DWI, sigma=sigma, patch_radius=patch_radius) @set_random_number_generator() def test_lpca_sigma_wrong_shape(rng): DWI, sigma = rfiw_phantom(gtab, snr=30, rng=rng) # If sigma is 3D but shape is not like DWI.shape[:-1], an error is raised: sigma = np.zeros((DWI.shape[0], DWI.shape[1] + 1, DWI.shape[2])) assert_raises(ValueError, localpca, DWI, sigma=sigma) @set_random_number_generator() def test_lpca_no_gtab_no_sigma(rng): DWI, sigma = rfiw_phantom(gtab, snr=30, rng=rng) assert_raises(ValueError, localpca, DWI, sigma=None, mask=None) @set_random_number_generator() def test_pca_classifier(rng): # Produce small phantom with well aligned single voxels and ground truth # snr = 50, i.e signal std = 0.02 (Gaussian noise) std_gt = 0.02 S0 = 1.0 ndir = gtab.bvals.size signal_test = np.zeros((5, 5, 5, ndir)) mevals = np.array([[0.99e-3, 0.0, 0.0], [2.26e-3, 0.87e-3, 0.87e-3]]) sig, direction = multi_tensor( gtab, mevals, S0=S0, angles=[(0, 0, 1), (0, 0, 1)], fractions=(50, 50), snr=None ) signal_test[..., :] = sig noise = std_gt * rng.standard_normal((5, 5, 5, ndir)) dwi_test = signal_test + noise # Compute eigenvalues X = dwi_test.reshape(125, ndir) M = np.mean(X, axis=0) X = X - M [L, W] = np.linalg.eigh(np.dot(X.T, X) / 125) # Find number of noise related eigenvalues var, c = _pca_classifier(L, 125) std = np.sqrt(var) # Expected number of signal components is 0 because the phantom only has # one voxel type and that information is captured by the mean of X. # Therefore, expected noise components should be equal to size of L. # To allow some margin of error let's assess if c is higher than # L.size - 3. 
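    # (with the 104 volumes of the simulated gradient table used here,
    # L.size is 104, so c should exceed 101)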
assert_(c > L.size - 3)

    # Let's check that the noise std estimate has an error of less than 5%
    std_error = abs(std - std_gt) / std_gt * 100
    assert_(std_error < 5)


@set_random_number_generator()
def test_mppca_in_phantom(rng):
    DWIgt = rfiw_phantom(gtab, snr=None, rng=rng)
    std_gt = 0.02
    noise = std_gt * rng.standard_normal(DWIgt.shape)
    DWInoise = DWIgt + noise

    # patch radius (2: #samples > #features, 1: #samples < #features)
    for PR in [2, 1]:
        if PR == 1:
            patch_radius_arr = create_patch_radius_arr(DWInoise, PR)
            patch_size = compute_patch_size(patch_radius_arr)
            num_samples = compute_num_samples(patch_size)
            spr = compute_suggested_patch_radius(DWInoise, patch_size)
            with warnings.catch_warnings():
                warnings.filterwarnings(
                    "ignore",
                    message=dimensionality_problem_message(DWInoise, num_samples, spr),
                    category=UserWarning,
                )
                DWIden = mppca(DWInoise, patch_radius=PR)
        else:
            DWIden = mppca(DWInoise, patch_radius=PR)

        # Test if denoised data is closer to ground truth than noisy data
        rmse_den = np.sum(np.abs(DWIgt - DWIden)) / np.sum(np.abs(DWIgt))
        rmse_noisy = np.sum(np.abs(DWIgt - DWInoise)) / np.sum(np.abs(DWIgt))
        assert_(rmse_den < rmse_noisy)


@set_random_number_generator()
def test_create_patch_radius_arr(rng):
    shape = (10, 10, 8, 104)
    arr = rng.standard_normal(shape)
    pr = 2
    expected_val = np.asarray([2, 2, 2])
    obtained_val = create_patch_radius_arr(arr, pr)
    assert np.array_equal(obtained_val, expected_val)


def test_compute_patch_size():
    patch_radius = 1
    expected_val = 3
    obtained_val = compute_patch_size(patch_radius)
    assert obtained_val == expected_val

    patch_radius = 2
    expected_val = 5
    obtained_val = compute_patch_size(patch_radius)
    assert obtained_val == expected_val


def test_compute_num_samples():
    patch_size = np.asarray([5, 5, 5])
    expected_val = 125
    obtained_val = compute_num_samples(patch_size)
    assert obtained_val == expected_val


@set_random_number_generator()
def test_compute_suggested_patch_radius(rng):
    shape = (10, 10, 8, 104)
    arr = rng.standard_normal(shape)
    patch_size = [3, 3, 3]
    expected_val = 2
    obtained_val = compute_suggested_patch_radius(arr, patch_size)
    assert obtained_val == expected_val

    patch_size = [5, 5, 5]
    obtained_val = compute_suggested_patch_radius(arr, patch_size)
    assert obtained_val == expected_val


@set_random_number_generator()
def test_mppca_returned_sigma(rng):
    DWIgt = rfiw_phantom(gtab, snr=None, rng=rng)
    std_gt = 0.02
    noise = std_gt * rng.standard_normal(DWIgt.shape)
    DWInoise = DWIgt + noise

    # patch radius (2: #samples > #features, 1: #samples < #features)
    for PR in [2, 1]:
        # Case where sigma is estimated using mpPCA
        if PR == 1:
            patch_radius_arr = create_patch_radius_arr(DWInoise, PR)
            patch_size = compute_patch_size(patch_radius_arr)
            num_samples = compute_num_samples(patch_size)
            spr = compute_suggested_patch_radius(DWInoise, patch_size)
        if PR == 1:
            with warnings.catch_warnings():
                warnings.filterwarnings(
                    "ignore",
                    message=dimensionality_problem_message(DWInoise, num_samples, spr),
                    category=UserWarning,
                )
                DWIden0, sigma = mppca(DWInoise, patch_radius=PR, return_sigma=True)
        else:
            DWIden0, sigma = mppca(DWInoise, patch_radius=PR, return_sigma=True)
        msigma = np.mean(sigma)
        std_error = abs(msigma - std_gt) / std_gt * 100
        # if #noise_eigenvals >> #signal_eigenvals, variance should be well
        # estimated; this is more likely achieved if #samples > #features
        if PR > 1:
            assert_(std_error < 5)
        else:
            # otherwise, the variance estimate may be wrong
            pass

        # Case where sigma is given as input (the sigma returned should be
        # the same as the one passed in)
        if PR == 1:
            with warnings.catch_warnings():
warnings.filterwarnings( "ignore", message=dimensionality_problem_message(DWInoise, num_samples, spr), category=UserWarning, ) DWIden1, rsigma = genpca( DWInoise, sigma=sigma, tau_factor=None, patch_radius=PR, return_sigma=True, ) else: DWIden1, rsigma = genpca( DWInoise, sigma=sigma, tau_factor=None, patch_radius=PR, return_sigma=True, ) assert_array_almost_equal(rsigma, sigma) # DWIden1 should be very similar to DWIden0 rmse_den = np.sum(np.abs(DWIden1 - DWIden0)) / np.sum(np.abs(DWIden0)) rmse_ref = np.sum(np.abs(DWIden1 - DWIgt)) / np.sum(np.abs(DWIgt)) assert_(rmse_den < rmse_ref) dipy-1.11.0/dipy/denoise/tests/test_nlmeans.py000066400000000000000000000075541476546756600214070ustar00rootroot00000000000000from time import time import numpy as np from numpy.testing import ( assert_, assert_array_almost_equal, assert_equal, assert_raises, ) import pytest from dipy.denoise.denspeed import add_padding_reflection, remove_padding from dipy.denoise.nlmeans import nlmeans from dipy.testing import assert_greater from dipy.testing.decorators import set_random_number_generator from dipy.utils.omp import cpu_count, have_openmp @set_random_number_generator() def test_nlmeans_padding(rng): S0 = 100 + 2 * rng.standard_normal((50, 50, 50)) S0 = S0.astype("f8") S0n = add_padding_reflection(S0, 5) S0n2 = remove_padding(S0n, 5) assert_equal(S0.shape, S0n2.shape) def test_nlmeans_static(): S0 = 100 * np.ones((20, 20, 20), dtype="f8") S0n = nlmeans(S0, sigma=np.ones((20, 20, 20)), rician=False) assert_array_almost_equal(S0, S0n) def test_nlmeans_wrong(): S0 = np.ones((2, 2, 2, 2, 2)) assert_raises(ValueError, nlmeans, S0, 1.0) # test invalid values of num_threads data = np.ones((10, 10, 10)) sigma = 1 assert_raises(ValueError, nlmeans, data, sigma, num_threads=0) @set_random_number_generator() def test_nlmeans_random_noise(rng): S0 = 100 + 2 * rng.standard_normal((22, 23, 30)) S0n = nlmeans(S0, sigma=np.ones((22, 23, 30)) * np.std(S0), rician=False) print(S0.mean(), S0.min(), S0.max()) print(S0n.mean(), S0n.min(), S0n.max()) assert_(S0n.min() > S0.min()) assert_(S0n.max() < S0.max()) assert_equal(np.round(S0n.mean()), 100) @set_random_number_generator() def test_nlmeans_boundary(rng): # nlmeans preserves boundaries S0 = 100 + np.zeros((20, 20, 20)) noise = 2 * rng.standard_normal((20, 20, 20)) S0 += noise S0[:10, :10, :10] = 300 + noise[:10, :10, :10] nlmeans(S0, sigma=np.ones((20, 20, 20)) * np.std(noise), rician=False) print(S0[9, 9, 9]) print(S0[10, 10, 10]) assert_(S0[9, 9, 9] > 290) assert_(S0[10, 10, 10] < 110) def test_nlmeans_4D_and_mask(): S0 = 200 * np.ones((20, 20, 20, 3), dtype="f8") mask = np.zeros((20, 20, 20)) mask[10, 10, 10] = 1 S0n = nlmeans(S0, sigma=1, mask=mask, rician=True) assert_equal(S0.shape, S0n.shape) assert_equal(np.round(S0n[10, 10, 10]), 200) assert_equal(S0n[8, 8, 8], 0) def test_nlmeans_dtype(): S0 = 200 * np.ones((20, 20, 20, 3), dtype="f4") mask = np.zeros((20, 20, 20)) mask[10:14, 10:14, 10:14] = 1 S0n = nlmeans(S0, sigma=1, mask=mask, rician=True) assert_equal(S0.dtype, S0n.dtype) S0 = 200 * np.ones((20, 20, 20), dtype=np.uint16) mask = np.zeros((20, 20, 20)) mask[10:14, 10:14, 10:14] = 1 S0n = nlmeans(S0, sigma=np.ones((20, 20, 20)), mask=mask, rician=True) assert_equal(S0.dtype, S0n.dtype) @pytest.mark.skipif(not have_openmp, reason="OpenMP does not appear to be available") def test_nlmeans_4d_3dsigma_and_threads(): # Input is 4D data and 3D sigma data = np.ones((50, 50, 50, 5)) sigma = np.ones(data.shape[:3]) mask = np.zeros(data.shape[:3]) # mask[25-10:25+10] = 1 
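    # use a full mask so every voxel is processed and the timed runs below
    # differ only in num_threads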
mask[:] = 1 print(f"cpu count {cpu_count()}") print("1") t = time() new_data = nlmeans(data, sigma, mask=mask, num_threads=1) duration_1core = time() - t print(duration_1core) print("All") t = time() new_data2 = nlmeans(data, sigma, mask=mask, num_threads=None) duration_all_core = time() - t print(duration_all_core) print("2") t = time() new_data3 = nlmeans(data, sigma, mask=mask, num_threads=2) duration_2core = time() - t print(duration_2core) assert_array_almost_equal(new_data, new_data2) assert_array_almost_equal(new_data2, new_data3) if cpu_count() > 2: assert_equal(duration_all_core < duration_2core, True) assert_equal(duration_2core < duration_1core, True) if cpu_count() == 2: assert_greater(duration_1core, duration_2core) dipy-1.11.0/dipy/denoise/tests/test_noise_estimate.py000066400000000000000000000167331476546756600227610ustar00rootroot00000000000000import numpy as np from numpy.testing import ( assert_, assert_almost_equal, assert_array_almost_equal, assert_equal, assert_warns, ) import dipy.core.gradients as dpg import dipy.data as dpd from dipy.denoise.noise_estimate import ( _inv_nchi_cdf, _piesno_3D, estimate_sigma, piesno, ) from dipy.denoise.pca_noise_estimate import pca_noise_estimate from dipy.io.image import load_nifti_data from dipy.testing.decorators import set_random_number_generator def test_inv_nchi(): # See page 5 of the reference paper for tested values # Values taken from hispeed.MedianPIESNO.lambdaPlus # and hispeed.MedianPIESNO.lambdaMinus N = 8 K = 20 alpha = 0.01 lambdaMinus = _inv_nchi_cdf(N, K, alpha / 2) lambdaPlus = _inv_nchi_cdf(N, K, 1 - alpha / 2) assert_almost_equal(lambdaMinus, 6.464855180579397) assert_almost_equal(lambdaPlus, 9.722849086419043) @set_random_number_generator() def test_piesno(rng): # Values taken from hispeed.OptimalPIESNO with the test data # in the package computed in matlab test_piesno_data = load_nifti_data(dpd.get_fnames(name="test_piesno")) sigma = piesno( test_piesno_data, N=8, alpha=0.01, step=1, eps=1e-10, return_mask=False ) assert_almost_equal(sigma, 0.010749458025559) noise1 = (rng.standard_normal((100, 100, 100)) * 50) + 10 noise2 = (rng.standard_normal((100, 100, 100)) * 50) + 10 rician_noise = np.sqrt(noise1**2 + noise2**2) sigma, mask = piesno( rician_noise, N=1, alpha=0.01, step=1, eps=1e-10, return_mask=True ) # less than 3% of error? 
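    # (true sigma is 50, so the estimate must land within roughly 1.5 of it)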
assert_(np.abs(sigma - 50) / sigma < 0.03) # Test using the median as the initial estimation initial_estimation = np.median(sigma) / np.sqrt(2 * _inv_nchi_cdf(1, 1, 0.5)) sigma, mask = _piesno_3D( rician_noise, N=1, alpha=0.01, step=1, eps=1e-10, return_mask=True, initial_estimation=initial_estimation, ) assert_(np.abs(sigma - 50) / sigma < 0.03) sigma = _piesno_3D( rician_noise, N=1, alpha=0.01, step=1, eps=1e-10, return_mask=False, initial_estimation=initial_estimation, ) assert_(np.abs(sigma - 50) / sigma < 0.03) sigma = _piesno_3D( np.zeros_like(rician_noise), N=1, alpha=0.01, step=1, eps=1e-10, return_mask=False, initial_estimation=initial_estimation, ) assert_(np.all(sigma == 0)) sigma, mask = _piesno_3D( np.zeros_like(rician_noise), N=1, alpha=0.01, step=1, eps=1e-10, return_mask=True, initial_estimation=initial_estimation, ) assert_(np.all(sigma == 0)) assert_(np.all(mask == 0)) # Check if no noise points found in array it exits sigma = _piesno_3D( 1000 * np.ones_like(rician_noise), N=1, alpha=0.01, step=1, eps=1e-10, return_mask=False, initial_estimation=10, ) assert_(np.all(sigma == 10)) def test_piesno_type(): # This is testing if the `sum_m2` cast is overflowing data = np.ones((10, 10, 10), dtype=np.int16) for i in range(10): data[:, i, :] = i * 26 sigma = piesno(data, N=2, alpha=0.01, step=1, eps=1e-10, return_mask=False) assert_almost_equal(sigma, 79.970003117424739) def test_estimate_sigma(): sigma = estimate_sigma(np.ones((7, 7, 7)), disable_background_masking=True) assert_equal(sigma, 0.0) sigma = estimate_sigma(np.ones((7, 7, 7, 3)), disable_background_masking=True) assert_equal(sigma, np.array([0.0, 0.0, 0.0])) sigma = estimate_sigma(5 * np.ones((7, 7, 7)), disable_background_masking=False) assert_equal(sigma, 0.0) sigma = estimate_sigma(5 * np.ones((7, 7, 7, 3)), disable_background_masking=False) assert_equal(sigma, np.array([0.0, 0.0, 0.0])) arr = np.zeros((3, 3, 3)) arr[0, 0, 0] = 1 sigma = estimate_sigma(arr, disable_background_masking=False, N=1) assert_array_almost_equal( sigma, (0.10286889997472792 / np.sqrt(0.42920367320510366)) ) arr = np.zeros((3, 3, 3, 3)) arr[0, 0, 0] = 1 sigma = estimate_sigma(arr, disable_background_masking=False, N=1) assert_array_almost_equal( sigma, np.array( [ 0.10286889997472792 / np.sqrt(0.42920367320510366), 0.10286889997472792 / np.sqrt(0.42920367320510366), 0.10286889997472792 / np.sqrt(0.42920367320510366), ] ), ) arr = np.zeros((3, 3, 3)) arr[0, 0, 0] = 1 sigma = estimate_sigma(arr, disable_background_masking=True, N=4) assert_array_almost_equal(sigma, 0.46291005 / np.sqrt(0.4834941393603609)) arr = np.zeros((3, 3, 3)) arr[0, 0, 0] = 1 sigma = estimate_sigma(arr, disable_background_masking=True, N=0) assert_array_almost_equal(sigma, 0.46291005 / np.sqrt(1)) arr = np.zeros((3, 3, 3, 3)) arr[0, 0, 0] = 1 sigma = estimate_sigma(arr, disable_background_masking=True, N=12) assert_array_almost_equal( sigma, np.array( [ 0.46291005 / np.sqrt(0.4946862482541263), 0.46291005 / np.sqrt(0.4946862482541263), 0.46291005 / np.sqrt(0.4946862482541263), ] ), ) @set_random_number_generator(1984) def test_pca_noise_estimate(rng): # MUBE: bvals1 = np.concatenate([np.zeros(17), np.ones(3) * 1000]) bvecs1 = np.concatenate([np.zeros((17, 3)), np.eye(3)]) gtab1 = dpg.gradient_table(bvals1, bvecs=bvecs1) # SIBE: bvals2 = np.concatenate([np.zeros(1), np.ones(3) * 1000]) bvecs2 = np.concatenate([np.zeros((1, 3)), np.eye(3)]) gtab2 = dpg.gradient_table(bvals2, bvecs=bvecs2) for images_as_samples in [True, False]: for patch_radius in [1, 2]: for gtab in 
[gtab1, gtab2]: for dtype in [np.int16, np.float64]: signal = np.ones((20, 20, 20, gtab.bvals.shape[0])) for correct_bias in [True, False]: if not correct_bias: # High signal for no bias correction signal = signal * 100 sigma = 1 noise1 = rng.normal(0, sigma, size=signal.shape) noise2 = rng.normal(0, sigma, size=signal.shape) # Rician noise: data = np.sqrt((signal + noise1) ** 2 + noise2**2) sigma_est = pca_noise_estimate( data.astype(dtype), gtab, correct_bias=correct_bias, patch_radius=patch_radius, images_as_samples=images_as_samples, ) # print("sigma_est:", sigma_est) assert_array_almost_equal(np.mean(sigma_est), sigma, decimal=1) # check that Rician corrects produces larger noise estimate assert_( np.mean( pca_noise_estimate( data, gtab, correct_bias=True, images_as_samples=images_as_samples ) ) > np.mean( pca_noise_estimate( data, gtab, correct_bias=False, images_as_samples=images_as_samples ) ) ) assert_warns(UserWarning, pca_noise_estimate, data, gtab, patch_radius=0) dipy-1.11.0/dipy/denoise/tests/test_non_local_means.py000066400000000000000000000073041476546756600230720ustar00rootroot00000000000000import numpy as np from numpy.testing import ( assert_, assert_array_almost_equal, assert_equal, assert_raises, ) from dipy.denoise.non_local_means import non_local_means from dipy.testing.decorators import set_random_number_generator def test_nlmeans_static(): S0 = 100 * np.ones((20, 20, 20), dtype="f8") S0nb = non_local_means(S0, sigma=1.0, rician=False) assert_array_almost_equal(S0, S0nb) S0 = 100 * np.ones((20, 20, 20, 3), dtype="f8") S0nb = non_local_means(S0, sigma=1.0, rician=False) assert_array_almost_equal(S0, S0nb) S0nb = non_local_means(S0, sigma=np.array(1.0), rician=False) assert_array_almost_equal(S0, S0nb) S0nb = non_local_means(S0, sigma=np.array([1.0]), rician=False) assert_array_almost_equal(S0, S0nb) @set_random_number_generator() def test_nlmeans_random_noise(rng): S0 = 100 + 2 * rng.standard_normal((22, 23, 30)) masker = np.zeros(S0.shape[:3]).astype(bool) masker[8:15, 8:15, 8:15] = 1 for mask in [None, masker]: S0nb = non_local_means(S0, sigma=np.std(S0), rician=False, mask=mask) assert_(S0nb[mask].min() > S0[mask].min()) assert_(S0nb[mask].max() < S0[mask].max()) assert_equal(np.round(S0nb[mask].mean()), 100) S0nb = non_local_means(S0, sigma=np.std(S0), rician=False, mask=mask) assert_(S0nb[mask].min() > S0[mask].min()) assert_(S0nb[mask].max() < S0[mask].max()) assert_equal(np.round(S0nb[mask].mean()), 100) @set_random_number_generator() def test_scalar_sigma(rng): S0 = 100 + np.zeros((20, 20, 20)) noise = 2 * rng.standard_normal((20, 20, 20)) S0 += noise S0[:10, :10, :10] = 300 + noise[:10, :10, :10] assert_raises(ValueError, non_local_means, S0, sigma=noise, rician=False) noise = "a" assert_raises(ValueError, non_local_means, S0, sigma=noise, rician=False) @set_random_number_generator() def test_nlmeans_boundary(rng): # nlmeans preserves boundaries S0 = 100 + np.zeros((20, 20, 20)) noise = 2 * rng.standard_normal((20, 20, 20)) S0 += noise S0[:10, :10, :10] = 300 + noise[:10, :10, :10] non_local_means(S0, sigma=np.std(noise), rician=False) assert_(S0[9, 9, 9] > 290) assert_(S0[10, 10, 10] < 110) def test_nlmeans_wrong(): S0 = 100 + np.zeros((10, 10, 10, 10, 10)) assert_raises(ValueError, non_local_means, S0, 1.0) S0 = 100 + np.zeros((20, 20, 20)) mask = np.ones((10, 10)) assert_raises(ValueError, non_local_means, S0, 1.0, mask=mask) def test_nlmeans_4D_and_mask(): S0 = 200 * np.ones((20, 20, 20, 3), dtype="f8") mask = np.zeros((20, 20, 20)) mask[10, 10, 10] = 1 
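    # only the single masked voxel should be denoised; voxels outside the
    # mask come back as zeros (checked below at [8, 8, 8])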
S0n = non_local_means(S0, sigma=1, mask=mask, rician=True) assert_equal(S0.shape, S0n.shape) assert_equal(np.round(S0n[10, 10, 10]), 200) assert_equal(S0n[8, 8, 8], 0) def test_nlmeans_dtype(): S0 = 200 * np.ones((20, 20, 20, 3), dtype="f4") mask = np.zeros((20, 20, 20)) mask[10:14, 10:14, 10:14] = 1 S0n = non_local_means(S0, sigma=1, mask=mask, rician=True) assert_equal(S0.dtype, S0n.dtype) S0 = 200 * np.ones((20, 20, 20), dtype=np.uint16) mask = np.zeros((20, 20, 20)) mask[10:14, 10:14, 10:14] = 1 S0n = non_local_means(S0, sigma=1, mask=mask, rician=True) assert_equal(S0.dtype, S0n.dtype) def test_nlmeans_2D_sigma(): S0 = np.ones((20, 20, 20, 3)) noise = np.ones((3, 2)) assert_raises(ValueError, non_local_means, S0, sigma=noise, rician=True) S0 = np.ones((20, 20, 20)) assert_raises(ValueError, non_local_means, S0, sigma=noise, rician=True) def test_nlmeans_sigma_arr(): S0 = np.ones((20, 20, 20, 3)) noise = np.ones(4) assert_raises(ValueError, non_local_means, S0, sigma=noise, rician=True) dipy-1.11.0/dipy/denoise/tests/test_patch2self.py000066400000000000000000000234741476546756600220040ustar00rootroot00000000000000import warnings import numpy as np from numpy.testing import assert_array_almost_equal, assert_equal, assert_raises import pytest from dipy.core.gradients import generate_bvecs, gradient_table from dipy.denoise import patch2self as p2s from dipy.sims.voxel import multi_tensor from dipy.testing import ( assert_greater, assert_greater_equal, assert_less, assert_less_equal, ) from dipy.testing.decorators import set_random_number_generator from dipy.utils.optpkg import optional_package sklearn, has_sklearn, _ = optional_package("sklearn") needs_sklearn = pytest.mark.skipif(not has_sklearn, reason="Requires sklearn") @needs_sklearn @set_random_number_generator(1234) def test_patch2self_random_noise(rng): S0 = 30 + 2 * rng.standard_normal((20, 20, 20, 50)) bvals = np.repeat(30, 50) # shift = True for version in [1, 3]: extra_args = {"patch_radius": (0, 0, 0)} if version == 1 else {} S0den_shift = p2s.patch2self( S0, bvals, model="ols", shift_intensity=True, version=version, **extra_args, ) assert_greater_equal(S0den_shift.min(), S0.min()) assert_less_equal(np.round(S0den_shift.mean()), 30) # clip = True msg = "Both `clip_negative_vals` and `shift_intensity` .*" with warnings.catch_warnings(): warnings.filterwarnings("ignore", message=msg, category=UserWarning) S0den_clip = p2s.patch2self( S0, bvals, model="ols", clip_negative_vals=True, version=version, **extra_args, ) assert_greater(S0den_clip.min(), S0.min()) assert_equal(np.round(S0den_clip.mean()), 30) # both clip and shift = True, and int patch_radius with warnings.catch_warnings(): warnings.filterwarnings("ignore", message=msg, category=UserWarning) S0den_clip = p2s.patch2self( S0, bvals, model="ols", clip_negative_vals=True, shift_intensity=True, version=version, **extra_args, ) assert_greater(S0den_clip.min(), S0.min()) assert_equal(np.round(S0den_clip.mean()), 30) # both clip and shift = False S0den_clip = p2s.patch2self( S0, bvals, model="ols", clip_negative_vals=False, shift_intensity=False, version=version, **extra_args, ) assert_greater(S0den_clip.min(), S0.min()) assert_equal(np.round(S0den_clip.mean()), 30) @needs_sklearn @set_random_number_generator(1234) def test_patch2self_boundary(rng): # patch2self preserves boundaries S0 = 100 + np.zeros((20, 20, 20, 20)) noise = 2 * rng.standard_normal((20, 20, 20, 20)) S0 += noise S0[:10, :10, :10, :10] = 300 + noise[:10, :10, :10, :10] bvals = np.repeat(100, 20) for version 
in [1, 3]:
        extra_args = {"patch_radius": (0, 0, 0)} if version == 1 else {}
        p2s.patch2self(S0, bvals, **extra_args)

        assert_greater(S0[9, 9, 9, 9], 290)
        assert_less(S0[10, 10, 10, 10], 110)


def rfiw_phantom(gtab, snr=None, rng=None):
    """rectangle fiber immersed in water"""
    # define voxel index
    slice_ind = np.zeros((10, 10, 8))
    slice_ind[4:7, 4:7, :] = 1
    slice_ind[4:7, 7, :] = 2
    slice_ind[7, 7, :] = 3
    slice_ind[7, 4:7, :] = 4
    slice_ind[7, 3, :] = 5
    slice_ind[4:7, 3, :] = 6
    slice_ind[3, 3, :] = 7
    slice_ind[3, 4:7, :] = 8
    slice_ind[3, 7, :] = 9

    # Define tissue diffusion parameters
    # Restricted diffusion
    ADr = 0.99e-3
    RDr = 0.0
    # Hindered diffusion
    ADh = 2.26e-3
    RDh = 0.87e-3
    # S0 value for tissue
    S1 = 50
    # Fraction between Restricted and Hindered diffusion
    fia = 0.51
    # Define water diffusion
    Dwater = 3e-3
    S2 = 100  # S0 value for water

    # Define tissue volume fraction for each voxel type (in index order)
    f = np.array([0.0, 1.0, 0.6, 0.18, 0.30, 0.15, 0.50, 0.35, 0.70, 0.42])

    # Define S0 for each voxel (in index order)
    S0 = S1 * f + S2 * (1 - f)

    # Multi-tensor simulations assume that each water pool has a constant S0.
    # Since tissue and water voxels are assumed to have different S0 values
    # here, the tissue volume fractions have to be adjusted to the measured f
    # values when a constant S0 is assumed. With this correction, the
    # simulations are analogous to simulations in which S0 differs for each
    # medium. (For more details on this, contact the phantom designer.)
    f1 = f * S1 / S0

    mevals = np.array([[ADr, RDr, RDr], [ADh, RDh, RDh], [Dwater, Dwater, Dwater]])
    angles = [(0, 0, 1), (0, 0, 1), (0, 0, 1)]
    dwi = np.zeros(slice_ind.shape + (gtab.bvals.size,))
    for i in range(10):
        fractions = [
            f1[i] * fia * 100,
            f1[i] * (1 - fia) * 100,
            (1 - f1[i]) * 100,
        ]
        sig, direction = multi_tensor(
            gtab, mevals, S0=S0[i], angles=angles, fractions=fractions, snr=None
        )
        dwi[slice_ind == i, :] = sig

    if snr is None:
        return dwi
    else:
        sigma = S2 * 1.0 / snr
        n1 = rng.normal(0, sigma, size=dwi.shape)
        n2 = rng.normal(0, sigma, size=dwi.shape)
        return [
            np.sqrt((dwi / np.sqrt(2) + n1) ** 2 + (dwi / np.sqrt(2) + n2) ** 2),
            sigma,
        ]


@needs_sklearn
@set_random_number_generator(4321)
def test_phantom(rng):
    # generate a gradient table for phantom data
    directions8 = generate_bvecs(8)
    directions30 = generate_bvecs(20)
    directions60 = generate_bvecs(30)
    # Create full dataset parameters
    # (6 b-values = 0, 8 directions for b-value 300, 20 directions for
    # b-value 1000 and 30 directions for b-value 2000)
    bvals = np.hstack(
        (np.zeros(6), 300 * np.ones(8), 1000 * np.ones(20), 2000 * np.ones(30))
    )
    bvecs = np.vstack((np.zeros((6, 3)), directions8, directions30, directions60))
    gtab = gradient_table(bvals, bvecs=bvecs)
    dwi, sigma = rfiw_phantom(gtab, snr=10, rng=rng)

    dwi_den1 = p2s.patch2self(dwi, model="ridge", bvals=bvals, alpha=1.0, version=1)
    assert_less(np.max(dwi_den1) / sigma, np.max(dwi) / sigma)

    dwi_den2 = p2s.patch2self(dwi, model="ridge", bvals=bvals, alpha=0.7, version=1)
    assert_less(np.max(dwi_den2) / sigma, np.max(dwi) / sigma)

    assert_array_almost_equal(dwi_den1, dwi_den2, decimal=0)
    assert_raises(ValueError, p2s.patch2self, dwi, model="empty", bvals=bvals)

    # Also check denoising with the OLS model
    dwi_den = p2s.patch2self(dwi, bvals=bvals, model="ols", version=1)
    assert_less(np.max(dwi_den) / sigma, np.max(dwi) / sigma)


@needs_sklearn
def test_validate_patch_radius_and_version():
    data = np.random.rand(5, 5, 5, 10)
    bvals = np.zeros(10)

    test_cases = [
        {
            "patch_radius": 1,
            "version": 1,
            "tmp_dir": None,
            "expect_fail": False,
        },
        {
"patch_radius": (1, 1, 1), "version": 1, "tmp_dir": None, "expect_fail": False, }, { "patch_radius": (0, 0, 0), "version": 1, "tmp_dir": None, "expect_fail": False, }, { "patch_radius": (0, 0, 0), "version": 3, "tmp_dir": None, "expect_fail": False, }, {"patch_radius": 1, "version": 3, "tmp_dir": None, "expect_fail": True}, { "patch_radius": (1, 1, 1), "version": 3, "tmp_dir": None, "expect_fail": True, }, { "patch_radius": (0, 0, 0), "version": 3, "tmp_dir": "/nonexistent_dir", "expect_fail": True, }, { "patch_radius": 1, "version": 1, "tmp_dir": "/some_temp_dir", "expect_fail": True, }, ] for case in test_cases: patch_radius = case["patch_radius"] version = case["version"] tmp_dir = case["tmp_dir"] expect_fail = case["expect_fail"] if expect_fail: # Expecting a ValueError for this case with pytest.raises(ValueError): p2s.patch2self( data, bvals, patch_radius=patch_radius, version=version, tmp_dir=tmp_dir, ) else: # Expecting success try: result = p2s.patch2self( data, bvals, patch_radius=patch_radius, version=version, tmp_dir=tmp_dir, ) assert result.shape == data.shape, ( f"Shape mismatch with patch_radius={patch_radius}, " f"version={version}, tmp_dir={tmp_dir}" ) except ValueError: pytest.fail( f"Unexpected ValueError with patch_radius={patch_radius}, " f"version={version}, tmp_dir={tmp_dir}" ) @needs_sklearn def test_single_slice_data(): for version in [1, 3]: # Create single-slice 4D data with shape (64, 64, 1, 10) single_slice_data = np.random.rand(64, 64, 1, 10).astype(np.float32) bvals = np.array([0] * 5 + [1000] * 5) # Simulate bvals for testing # Run the Patch2Self function denoised_data = p2s.patch2self(single_slice_data, bvals, version=version) assert denoised_data.shape == single_slice_data.shape, ( f"Expected shape {single_slice_data.shape} for version {version}, " f"but got {denoised_data.shape}." 
) dipy-1.11.0/dipy/direction/000077500000000000000000000000001476546756600155165ustar00rootroot00000000000000dipy-1.11.0/dipy/direction/__init__.py000066400000000000000000000014701476546756600176310ustar00rootroot00000000000000from .bootstrap_direction_getter import BootDirectionGetter from .closest_peak_direction_getter import ClosestPeakDirectionGetter from .peaks import ( PeaksAndMetrics, peak_directions, peak_directions_nl, peaks_from_model, peaks_from_positions, reshape_peaks_for_visualization, ) from .probabilistic_direction_getter import ( DeterministicMaximumDirectionGetter, ProbabilisticDirectionGetter, ) from .ptt_direction_getter import PTTDirectionGetter __all__ = [ "BootDirectionGetter", "ClosestPeakDirectionGetter", "DeterministicMaximumDirectionGetter", "ProbabilisticDirectionGetter", "PTTDirectionGetter", "PeaksAndMetrics", "peak_directions", "peak_directions_nl", "peaks_from_model", "peaks_from_positions", "reshape_peaks_for_visualization", ] dipy-1.11.0/dipy/direction/bootstrap_direction_getter.pyx000066400000000000000000000142311476546756600237100ustar00rootroot00000000000000# cython: wraparound=False, cdivision=True, boundscheck=False cimport numpy as cnp import numpy as np from dipy.core.interpolation cimport trilinear_interpolate4d_c from dipy.data import default_sphere from dipy.direction.closest_peak_direction_getter cimport closest_peak from dipy.direction.peaks import peak_directions from dipy.reconst import shm from dipy.tracking.direction_getter cimport DirectionGetter cdef class BootDirectionGetter(DirectionGetter): cdef: cnp.ndarray dwi_mask double[:] vox_data dict _pf_kwargs double cos_similarity double min_separation_angle double relative_peak_threshold double[:] pmf double[:, :] R double[:, :, :, :] data int max_attempts int sh_order object H object model object sphere def __init__(self, data, model, max_angle, sphere=default_sphere, max_attempts=5, sh_order=0, b_tol=20, **kwargs): cdef: cnp.ndarray x, y, z, r double[:] theta, phi double[:, :] B if max_attempts < 1: raise ValueError("max_attempts must be greater than 0.") if b_tol <= 0: raise ValueError("b_tol must be greater than 0.") self._pf_kwargs = kwargs self.data = np.asarray(data, dtype=float) self.model = model self.cos_similarity = np.cos(np.deg2rad(max_angle)) self.sphere = sphere self.sh_order = sh_order self.max_attempts = max_attempts if self.sh_order == 0: if hasattr(model, "sh_order"): self.sh_order = model.sh_order else: self.sh_order = 4 # DEFAULT Value self.dwi_mask = model.gtab.b0s_mask == 0 x, y, z = model.gtab.gradients[self.dwi_mask].T r, theta, phi = shm.cart2sphere(x, y, z) if r.max() - r.min() >= b_tol: raise ValueError("BootDirectionGetter only supports single shell \ data.") B, _, _ = shm.real_sh_descoteaux(self.sh_order, theta, phi) self.H = shm.hat(B) self.R = shm.lcr_matrix(self.H) self.vox_data = np.empty(self.data.shape[3]) self.pmf = np.empty(sphere.vertices.shape[0]) @classmethod def from_data(cls, data, model, max_angle, sphere=default_sphere, sh_order=0, max_attempts=5, b_tol=20, **kwargs): """Create a BootDirectionGetter using HARDI data and an ODF type model Parameters ---------- data : ndarray, float, (..., N) Diffusion MRI data with N volumes. model : dipy diffusion model Must provide fit with odf method. max_angle : float (0, 90) Maximum angle between tract segments. This angle can be more generous (larger) than values typically used with probabilistic direction getters. sphere : Sphere The sphere used to sample the diffusion ODF. 
sh_order : even int The order of the SH "model" used to estimate bootstrap residuals. max_attempts : int Max number of bootstrap samples used to find tracking direction before giving up. b_tol : float Maximum difference between b-values to be considered single shell. relative_peak_threshold : float in [0., 1.] Relative threshold for excluding ODF peaks. min_separation_angle : float in [0, 90] Angular threshold for excluding ODF peaks. """ return cls(data, model, max_angle, sphere=sphere, max_attempts=max_attempts, sh_order=sh_order, b_tol=b_tol, **kwargs) cpdef cnp.ndarray[cnp.float_t, ndim=2] initial_direction(self, double[::1] point): """Returns best directions at seed location to start tracking. Parameters ---------- point : ndarray, shape (3,) The point in an image at which to lookup tracking directions. Returns ------- directions : ndarray, shape (N, 3) Possible tracking directions from point. ``N`` may be 0, all directions should be unique. """ cdef: double[:] pmf = self.get_pmf_no_boot(point) return peak_directions(pmf, self.sphere, **self._pf_kwargs)[0] cpdef double[:] get_pmf(self, double[::1] point): """Produces an ODF from a SH bootstrap sample""" if trilinear_interpolate4d_c(self.data, &point[0], &self.vox_data[0]) != 0: self.__clear_pmf() else: np.asarray(self.vox_data)[self.dwi_mask] = shm.bootstrap_data_voxel( np.asarray(self.vox_data)[self.dwi_mask], self.H, self.R) self.pmf = self.model.fit(np.asarray(self.vox_data)).odf(self.sphere) return self.pmf cpdef double[:] get_pmf_no_boot(self, double[::1] point): if trilinear_interpolate4d_c(self.data, &point[0], &self.vox_data[0]) != 0: self.__clear_pmf() else: self.pmf = self.model.fit(np.asarray(self.vox_data)).odf(self.sphere) return self.pmf cdef void __clear_pmf(self) nogil: cdef: cnp.npy_intp len_pmf = self.pmf.shape[0] cnp.npy_intp i for i in range(len_pmf): self.pmf[i] = 0.0 cdef int get_direction_c(self, double[::1] point, double[::1] direction): """Attempt direction getting on a few bootstrap samples. Returns ------- status : int Returns 0 `direction` was updated with a new tracking direction, or 1 otherwise. 
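        Notes
        -----
        Each attempt draws a fresh residual-bootstrap ODF via ``get_pmf``;
        if none of the ``max_attempts`` samples yields a peak within
        ``max_angle`` of the incoming direction, status 1 is returned.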
""" cdef: double[:] pmf cnp.ndarray[cnp.float_t, ndim=2] peaks for _ in range(self.max_attempts): pmf = self.get_pmf(point) peaks = peak_directions(pmf, self.sphere, **self._pf_kwargs)[0] if len(peaks) > 0: return closest_peak(peaks, direction, self.cos_similarity) return 1 dipy-1.11.0/dipy/direction/closest_peak_direction_getter.pxd000066400000000000000000000016311476546756600243220ustar00rootroot00000000000000cimport numpy as cnp from dipy.direction.pmf cimport PmfGen from dipy.tracking.direction_getter cimport DirectionGetter cdef int closest_peak(cnp.ndarray[cnp.float_t, ndim=2] peak_dirs, double[::1] direction, double cos_similarity) cdef class BasePmfDirectionGetter(DirectionGetter): cdef: dict _pf_kwargs double pmf_threshold double cos_similarity int len_pmf object sphere PmfGen pmf_gen cpdef cnp.ndarray[cnp.float_t, ndim=2] initial_direction( self, double[::1] point) cdef double* _get_pmf(self, double[::1] point) nogil cdef int get_direction_c(self, double[::1] point, double[::1] direction) cdef class BaseDirectionGetter(BasePmfDirectionGetter): pass cdef class PmfGenDirectionGetter(BasePmfDirectionGetter): pass cdef class ClosestPeakDirectionGetter(PmfGenDirectionGetter): pass dipy-1.11.0/dipy/direction/closest_peak_direction_getter.pyx000066400000000000000000000220051476546756600243450ustar00rootroot00000000000000# cython: boundscheck=False # cython: initializedcheck=False # cython: wraparound=False # cython: nonecheck=False import numpy as np cimport numpy as cnp from dipy.direction.peaks import peak_directions, default_sphere from dipy.direction.pmf cimport SimplePmfGen, SHCoeffPmfGen from dipy.reconst import shm from dipy.tracking.direction_getter cimport DirectionGetter from dipy.utils.fast_numpy cimport copy_point, scalar_muliplication_point cdef int closest_peak(cnp.ndarray[cnp.float_t, ndim=2] peak_dirs, double[::1] direction, double cos_similarity): """Update direction with the closest direction from peak_dirs. All directions should be unit vectors. Antipodal symmetry is assumed, ie direction x is the same as -x. Parameters ---------- peak_dirs : array (N, 3) N unit vectors. direction : array (3,) or None Previous direction. The new direction is saved here. cos_similarity : float `cos(max_angle)` where `max_angle` is the maximum allowed angle between prev_step and the returned direction. 
Returns ------- 0 : if ``direction`` is updated 1 : if no new direction is founded """ cdef: cnp.npy_intp _len=len(peak_dirs) cnp.npy_intp i int closest_peak_i=-1 double _dot double closest_peak_dot=0 for i in range(_len): _dot = (peak_dirs[i,0] * direction[0] + peak_dirs[i,1] * direction[1] + peak_dirs[i,2] * direction[2]) if np.abs(_dot) > np.abs(closest_peak_dot): closest_peak_dot = _dot closest_peak_i = i if closest_peak_i >= 0: if closest_peak_dot >= cos_similarity: copy_point(&peak_dirs[closest_peak_i, 0], &direction[0]) return 0 if closest_peak_dot <= -cos_similarity: copy_point(&peak_dirs[closest_peak_i, 0], &direction[0]) scalar_muliplication_point(&direction[0], -1) return 0 return 1 cdef class BasePmfDirectionGetter(DirectionGetter): """A base class for dynamic direction getters""" def __init__(self, pmf_gen, max_angle, sphere, pmf_threshold=.1, **kwargs): self.sphere = sphere self._pf_kwargs = kwargs self.pmf_gen = pmf_gen if pmf_threshold < 0: raise ValueError("pmf threshold must be >= 0.") self.pmf_threshold = pmf_threshold self.cos_similarity = np.cos(np.deg2rad(max_angle)) self.len_pmf = sphere.vertices.shape[0] def _get_peak_directions(self, blob): """Gets directions using parameters provided at init. Blob can be any function defined on ``self.sphere``, i.e. an ODF. """ return peak_directions(blob, self.sphere, **self._pf_kwargs)[0] cpdef cnp.ndarray[cnp.float_t, ndim=2] initial_direction(self, double[::1] point): """Returns best directions at seed location to start tracking. Parameters ---------- point : ndarray, shape (3,) The point in an image at which to lookup tracking directions. Returns ------- directions : ndarray, shape (N, 3) Possible tracking directions from point. ``N`` may be 0, all directions should be unique. """ cdef double* pmf = self._get_pmf(point) return self._get_peak_directions(pmf) cdef double* _get_pmf(self, double[::1] point) nogil: cdef: cnp.npy_intp i cnp.npy_intp _len = self.len_pmf double* pmf = &self.pmf_gen.pmf[0] double pmf_threshold=self.pmf_threshold double absolute_pmf_threshold double max_pmf=0 pmf = self.pmf_gen.get_pmf_c(&point[0], pmf) for i in range(_len): if pmf[i] > max_pmf: max_pmf = pmf[i] absolute_pmf_threshold = pmf_threshold * max_pmf for i in range(_len): if pmf[i] < absolute_pmf_threshold: pmf[i] = 0.0 return pmf cdef class PmfGenDirectionGetter(BasePmfDirectionGetter): """A base class for direction getter using a pmf""" @classmethod def from_pmf(cls, pmf, max_angle, sphere, pmf_threshold=.1, **kwargs): """Constructor for making a DirectionGetter from an array of Pmfs Parameters ---------- pmf : array, 4d The pmf to be used for tracking at each voxel. max_angle : float, [0, 90] The maximum allowed angle between incoming direction and new direction. sphere : Sphere The set of directions on which the pmf is sampled and to be used for tracking. pmf_threshold : float [0., 1.] Used to remove direction from the probability mass function for selecting the tracking direction. relative_peak_threshold : float in [0., 1.] Used for extracting initial tracking directions. Passed to peak_directions. min_separation_angle : float in [0, 90] Used for extracting initial tracking directions. Passed to peak_directions. 
See Also -------- dipy.direction.peaks.peak_directions """ if pmf.ndim != 4: raise ValueError("pmf should be a 4d array.") if pmf.shape[3] != len(sphere.theta): msg = ("The last dimension of pmf should match the number of " "points in sphere.") raise ValueError(msg) pmf_gen = SimplePmfGen(np.asarray(pmf,dtype=float), sphere) return cls(pmf_gen, max_angle, sphere, pmf_threshold=pmf_threshold, **kwargs) @classmethod def from_shcoeff(cls, shcoeff, max_angle, sphere=default_sphere, pmf_threshold=0.1, basis_type=None, legacy=True, sh_to_pmf=False, **kwargs): """Probabilistic direction getter from a distribution of directions on the sphere Parameters ---------- shcoeff : array The distribution of tracking directions at each voxel represented as a function on the sphere using the real spherical harmonic basis. For example the FOD of the Constrained Spherical Deconvolution model can be used this way. This distribution will be discretized using ``sphere`` and tracking directions will be chosen from the vertices of ``sphere`` based on the distribution. max_angle : float, [0, 90] The maximum allowed angle between incoming direction and new direction. sphere : Sphere The set of directions to be used for tracking. pmf_threshold : float [0., 1.] Used to remove direction from the probability mass function for selecting the tracking direction. basis_type : name of basis The basis that ``shcoeff`` are associated with. ``dipy.reconst.shm.real_sh_descoteaux`` is used by default. relative_peak_threshold : float in [0., 1.] Used for extracting initial tracking directions. Passed to peak_directions. min_separation_angle : float in [0, 90] Used for extracting initial tracking directions. Passed to peak_directions. legacy: bool, optional True to use a legacy basis definition for backward compatibility with previous ``tournier07`` and ``descoteaux07`` implementations. sh_to_pmf: bool, optional If true, map sherical harmonics to spherical function (pmf) before tracking (faster, requires more memory). See Also -------- dipy.direction.peaks.peak_directions """ if sh_to_pmf: sh_order = shm.order_from_ncoef(shcoeff.shape[3]) pmf = shm.sh_to_sf(shcoeff, sphere, sh_order_max=sh_order, basis_type=basis_type, legacy=legacy) pmf[pmf<0] = 0 pmf_gen = SimplePmfGen(np.asarray(pmf,dtype=float), sphere) else: pmf_gen = SHCoeffPmfGen(np.asarray(shcoeff,dtype=float), sphere, basis_type, legacy=legacy) return cls(pmf_gen, max_angle, sphere, pmf_threshold=pmf_threshold, **kwargs) cdef class ClosestPeakDirectionGetter(PmfGenDirectionGetter): """A direction getter that returns the closest odf peak to previous tracking direction. 
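    Examples
    --------
    A minimal construction sketch; the uniform pmf below is illustrative
    only (any non-negative 4D array whose last axis matches the sphere
    will do)::

        import numpy as np
        from dipy.data import default_sphere
        from dipy.direction import ClosestPeakDirectionGetter

        pmf = np.ones((5, 5, 5, len(default_sphere.vertices)))
        dg = ClosestPeakDirectionGetter.from_pmf(pmf, max_angle=30.,
                                                 sphere=default_sphere)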
""" cdef int get_direction_c(self, double[::1] point, double[::1] direction): """ Returns ------- 0 : if ``direction`` is updated 1 : if no new direction is founded """ cdef: cnp.npy_intp _len = self.len_pmf double* pmf cnp.ndarray[cnp.float_t, ndim=2] peaks pmf = self._get_pmf(point) peaks = self._get_peak_directions(pmf) if len(peaks) == 0: return 1 return closest_peak(peaks, direction, self.cos_similarity) dipy-1.11.0/dipy/direction/meson.build000066400000000000000000000014751476546756600176670ustar00rootroot00000000000000cython_sources = [ 'bootstrap_direction_getter', 'closest_peak_direction_getter', 'pmf', 'probabilistic_direction_getter', 'ptt_direction_getter', ] cython_headers = [ 'closest_peak_direction_getter.pxd', 'pmf.pxd', 'probabilistic_direction_getter.pxd', ] foreach ext: cython_sources if fs.exists(ext + '.pxd') extra_args += ['--depfile', meson.current_source_dir() +'/'+ ext + '.pxd', ] endif py3.extension_module(ext, cython_gen.process(ext + '.pyx'), c_args: cython_c_args, include_directories: [incdir_numpy, inc_local], dependencies: [omp], install: true, subdir: 'dipy/direction' ) endforeach python_sources = ['__init__.py', 'peaks.py', ] py3.install_sources( python_sources + cython_headers, pure: false, subdir: 'dipy/direction' ) subdir('tests')dipy-1.11.0/dipy/direction/peaks.py000066400000000000000000000512411476546756600171760ustar00rootroot00000000000000from itertools import repeat import multiprocessing as mp from os import path import tempfile import warnings import numpy as np import scipy.optimize as opt from dipy.core.interpolation import trilinear_interpolate4d from dipy.core.ndindex import ndindex from dipy.core.sphere import Sphere from dipy.data import default_sphere from dipy.reconst.dirspeed import peak_directions from dipy.reconst.eudx_direction_getter import EuDXDirectionGetter from dipy.reconst.odf import gfa from dipy.reconst.recspeed import ( local_maxima, remove_similar_vertices, search_descending, ) from dipy.reconst.shm import sh_to_sf_matrix from dipy.testing.decorators import warning_for_keywords from dipy.utils.deprecator import deprecated_params from dipy.utils.multiproc import determine_num_processes @warning_for_keywords() def peak_directions_nl( sphere_eval, *, relative_peak_threshold=0.25, min_separation_angle=25, sphere=default_sphere, xtol=1e-7, ): """Non Linear Direction Finder. Parameters ---------- sphere_eval : callable A function which can be evaluated on a sphere. relative_peak_threshold : float Only return peaks greater than ``relative_peak_threshold * m`` where m is the largest peak. min_separation_angle : float in [0, 90] The minimum distance between directions. If two peaks are too close only the larger of the two is returned. sphere : Sphere A discrete Sphere. The points on the sphere will be used for initial estimate of maximums. xtol : float Relative tolerance for optimization. Returns ------- directions : array (N, 3) Points on the sphere corresponding to N local maxima on the sphere. values : array (N,) Value of sphere_eval at each point on directions. 
""" # Find discrete peaks for use as seeds in non-linear search discrete_values = sphere_eval(sphere) values, indices = local_maxima(discrete_values, sphere.edges) seeds = np.column_stack([sphere.theta[indices], sphere.phi[indices]]) # Helper function def _helper(x): sphere = Sphere(theta=x[0], phi=x[1]) return -sphere_eval(sphere) # Non-linear search num_seeds = len(seeds) theta = np.empty(num_seeds) phi = np.empty(num_seeds) for i in range(num_seeds): peak = opt.fmin(_helper, seeds[i], xtol=xtol, disp=False) theta[i], phi[i] = peak # Evaluate on new-found peaks small_sphere = Sphere(theta=theta, phi=phi) values = sphere_eval(small_sphere) # Sort in descending order order = values.argsort()[::-1] values = values[order] directions = small_sphere.vertices[order] # Remove directions that are too small n = search_descending(values, relative_peak_threshold) directions = directions[:n] # Remove peaks too close to each-other directions, idx = remove_similar_vertices( directions, min_separation_angle, return_index=True ) values = values[idx] return directions, values def _pam_from_attrs( klass, sphere, peak_indices, peak_values, peak_dirs, gfa, qa, shm_coeff, B, odf ): """ Construct PeaksAndMetrics object (or subclass) from its attributes. This is also useful for pickling/unpickling of these objects (see also :func:`__reduce__` below). Parameters ---------- klass : class The class of object to be created. sphere : `Sphere` class instance. Sphere for discretization. peak_indices : ndarray Indices (in sphere vertices) of the peaks in each voxel. peak_values : ndarray The value of the peaks. peak_dirs : ndarray The direction of each peak. gfa : ndarray The Generalized Fractional Anisotropy in each voxel. qa : ndarray Quantitative Anisotropy in each voxel. shm_coeff : ndarray The coefficients of the spherical harmonic basis for the ODF in each voxel. B : ndarray The spherical harmonic matrix, for multiplication with the coefficients. odf : ndarray The orientation distribution function on the sphere in each voxel. Returns ------- pam : Instance of the class `klass`. 
""" this_pam = klass() this_pam.sphere = sphere this_pam.peak_dirs = peak_dirs this_pam.peak_values = peak_values this_pam.peak_indices = peak_indices this_pam.gfa = gfa this_pam.qa = qa this_pam.shm_coeff = shm_coeff this_pam.B = B this_pam.odf = odf return this_pam class PeaksAndMetrics(EuDXDirectionGetter): def __reduce__(self): return _pam_from_attrs, ( self.__class__, self.sphere, self.peak_indices, self.peak_values, self.peak_dirs, self.gfa, self.qa, self.shm_coeff, self.B, self.odf, ) def _peaks_from_model_parallel( model, data, sphere, relative_peak_threshold, min_separation_angle, mask, return_odf, return_sh, gfa_thr, normalize_peaks, sh_order, sh_basis_type, legacy, npeaks, B, invB, num_processes, ): shape = list(data.shape) data = np.reshape(data, (-1, shape[-1])) n = data.shape[0] nbr_chunks = num_processes**2 chunk_size = int(np.ceil(n / nbr_chunks)) indices = list( zip( np.arange(0, n, chunk_size), np.arange(0, n, chunk_size) + chunk_size, ) ) with tempfile.TemporaryDirectory() as tmpdir: data_file_name = path.join(tmpdir, "data.npy") np.save(data_file_name, data) if mask is not None: mask = mask.flatten() mask_file_name = path.join(tmpdir, "mask.npy") np.save(mask_file_name, mask) else: mask_file_name = None mp.set_start_method("spawn", force=True) pool = mp.Pool(num_processes) pam_res = pool.map( _peaks_from_model_parallel_sub, zip( repeat((data_file_name, mask_file_name)), indices, repeat(model), repeat(sphere), repeat(relative_peak_threshold), repeat(min_separation_angle), repeat(return_odf), repeat(return_sh), repeat(gfa_thr), repeat(normalize_peaks), repeat(sh_order), repeat(sh_basis_type), repeat(legacy), repeat(npeaks), repeat(B), repeat(invB), ), ) pool.close() pam = PeaksAndMetrics() pam.sphere = sphere # use memmap to reduce the memory usage pam.gfa = np.memmap( path.join(tmpdir, "gfa.npy"), dtype=pam_res[0].gfa.dtype, mode="w+", shape=(data.shape[0]), ) pam.peak_dirs = np.memmap( path.join(tmpdir, "peak_dirs.npy"), dtype=pam_res[0].peak_dirs.dtype, mode="w+", shape=(data.shape[0], npeaks, 3), ) pam.peak_values = np.memmap( path.join(tmpdir, "peak_values.npy"), dtype=pam_res[0].peak_values.dtype, mode="w+", shape=(data.shape[0], npeaks), ) pam.peak_indices = np.memmap( path.join(tmpdir, "peak_indices.npy"), dtype=pam_res[0].peak_indices.dtype, mode="w+", shape=(data.shape[0], npeaks), ) pam.qa = np.memmap( path.join(tmpdir, "qa.npy"), dtype=pam_res[0].qa.dtype, mode="w+", shape=(data.shape[0], npeaks), ) if return_sh: nbr_shm_coeff = (sh_order + 2) * (sh_order + 1) // 2 pam.shm_coeff = np.memmap( path.join(tmpdir, "shm.npy"), dtype=pam_res[0].shm_coeff.dtype, mode="w+", shape=(data.shape[0], nbr_shm_coeff), ) pam.B = pam_res[0].B else: pam.shm_coeff = None pam.invB = None if return_odf: pam.odf = np.memmap( path.join(tmpdir, "odf.npy"), dtype=pam_res[0].odf.dtype, mode="w+", shape=(data.shape[0], len(sphere.vertices)), ) else: pam.odf = None # copy subprocesses pam to a single pam (memmaps) for i, (start_pos, end_pos) in enumerate(indices): pam.gfa[start_pos:end_pos] = pam_res[i].gfa pam.peak_dirs[start_pos:end_pos] = pam_res[i].peak_dirs pam.peak_values[start_pos:end_pos] = pam_res[i].peak_values pam.peak_indices[start_pos:end_pos] = pam_res[i].peak_indices pam.qa[start_pos:end_pos] = pam_res[i].qa if return_sh: pam.shm_coeff[start_pos:end_pos] = pam_res[i].shm_coeff if return_odf: pam.odf[start_pos:end_pos] = pam_res[i].odf # load memmaps to arrays and reshape the metric shape[-1] = -1 pam.gfa = np.reshape(np.array(pam.gfa), shape[:-1]) pam.peak_dirs = 
np.reshape(np.array(pam.peak_dirs), shape + [3]) pam.peak_values = np.reshape(np.array(pam.peak_values), shape) pam.peak_indices = np.reshape(np.array(pam.peak_indices), shape) pam.qa = np.reshape(np.array(pam.qa), shape) if return_sh: pam.shm_coeff = np.reshape(np.array(pam.shm_coeff), shape) if return_odf: pam.odf = np.reshape(np.array(pam.odf), shape) # Make sure all worker processes have exited before leaving context # manager in order to prevent temporary file deletion errors in windows pool.join() return pam def _peaks_from_model_parallel_sub(args): (data_file_name, mask_file_name) = args[0] (start_pos, end_pos) = args[1] model = args[2] sphere = args[3] relative_peak_threshold = args[4] min_separation_angle = args[5] return_odf = args[6] return_sh = args[7] gfa_thr = args[8] normalize_peaks = args[9] sh_order = args[10] sh_basis_type = args[11] legacy = args[12] npeaks = args[13] B = args[14] invB = args[15] data = np.load(data_file_name, mmap_mode="r")[start_pos:end_pos] if mask_file_name is not None: mask = np.load(mask_file_name, mmap_mode="r")[start_pos:end_pos] else: mask = None return peaks_from_model( model, data, sphere, relative_peak_threshold, min_separation_angle, mask=mask, return_odf=return_odf, return_sh=return_sh, gfa_thr=gfa_thr, normalize_peaks=normalize_peaks, sh_order_max=sh_order, sh_basis_type=sh_basis_type, legacy=legacy, npeaks=npeaks, B=B, invB=invB, parallel=False, num_processes=None, ) @deprecated_params("sh_order", new_name="sh_order_max", since="1.9", until="2.0") @warning_for_keywords() def peaks_from_model( model, data, sphere, relative_peak_threshold, min_separation_angle, *, mask=None, return_odf=False, return_sh=True, gfa_thr=0, normalize_peaks=False, sh_order_max=8, sh_basis_type=None, legacy=True, npeaks=5, B=None, invB=None, parallel=False, num_processes=None, ): """Fit the model to the data and compute peaks and metrics. Parameters ---------- model : a model instance `model` will be used to fit the data. data : ndarray Diffusion data. sphere : Sphere The Sphere providing discrete directions for evaluation. relative_peak_threshold : float Only return peaks greater than ``relative_peak_threshold * m`` where m is the largest peak. min_separation_angle : float in [0, 90] The minimum distance between directions. If two peaks are too close only the larger of the two is returned. mask : array, optional If `mask` is provided, voxels that are False in `mask` are skipped and no peaks are returned. return_odf : bool If True, the odfs are returned. return_sh : bool If True, the odf is returned as spherical harmonics coefficients. gfa_thr : float Voxels with gfa less than `gfa_thr` are skipped, and no peaks are returned. normalize_peaks : bool If True, all peak values are calculated relative to `max(odf)`. sh_order_max : int, optional Maximum SH order (l) in the SH fit. For `sh_order_max`, there will be ``(sh_order_max + 1) * (sh_order_max + 2) / 2`` SH coefficients (default 8). sh_basis_type : {None, 'tournier07', 'descoteaux07'} ``None`` for the default DIPY basis, ``tournier07`` for the Tournier 2007 :footcite:p:`Tournier2007` basis, and ``descoteaux07`` for the Descoteaux 2007 :footcite:p:`Descoteaux2007` basis (``None`` defaults to ``descoteaux07``). legacy: bool, optional True to use a legacy basis definition for backward compatibility with previous ``tournier07`` and ``descoteaux07`` implementations. npeaks : int Maximum number of peaks found (default 5 peaks).
B : ndarray, optional Matrix that transforms spherical harmonics to spherical function ``sf = np.dot(sh, B)``. invB : ndarray, optional Inverse of B. parallel: bool If True, use multiprocessing to compute peaks and metric (default False). Temporary files are saved in the default temporary directory of the system. It can be changed using ``import tempfile`` and ``tempfile.tempdir = '/path/to/tempdir'``. num_processes: int, optional If `parallel` is True, the number of subprocesses to use (default multiprocessing.cpu_count()). If < 0 the maximal number of cores minus ``num_processes + 1`` is used (enter -1 to use as many cores as possible). 0 raises an error. Returns ------- pam : PeaksAndMetrics An object with ``gfa``, ``peak_directions``, ``peak_values``, ``peak_indices``, ``odf``, ``shm_coeffs`` as attributes References ---------- .. footbibliography:: """ if return_sh and (B is None or invB is None): B, invB = sh_to_sf_matrix( sphere, sh_order_max=sh_order_max, basis_type=sh_basis_type, return_inv=True, legacy=legacy, ) num_processes = determine_num_processes(num_processes) if parallel and num_processes > 1: # It is mandatory to provide B and invB to the parallel function. # Otherwise, a call to np.linalg.pinv is made in a subprocess and # makes it timeout on some system. # see https://github.com/dipy/dipy/issues/253 for details return _peaks_from_model_parallel( model, data, sphere, relative_peak_threshold, min_separation_angle, mask, return_odf, return_sh, gfa_thr, normalize_peaks, sh_order_max, sh_basis_type, legacy, npeaks, B, invB, num_processes, ) shape = data.shape[:-1] if mask is None: mask = np.ones(shape, dtype="bool") else: if mask.shape != shape: raise ValueError("Mask is not the same shape as data.") gfa_array = np.zeros(shape) qa_array = np.zeros((shape + (npeaks,))) peak_dirs = np.zeros((shape + (npeaks, 3))) peak_values = np.zeros((shape + (npeaks,))) peak_indices = np.zeros((shape + (npeaks,)), dtype=np.int32) peak_indices.fill(-1) if return_sh: n_shm_coeff = (sh_order_max + 2) * (sh_order_max + 1) // 2 shm_coeff = np.zeros((shape + (n_shm_coeff,))) if return_odf: odf_array = np.zeros((shape + (len(sphere.vertices),))) global_max = -np.inf for idx in ndindex(shape): if not mask[idx]: continue odf = model.fit(data[idx]).odf(sphere=sphere) if return_sh: shm_coeff[idx] = np.dot(odf, invB) if return_odf: odf_array[idx] = odf gfa_array[idx] = gfa(odf) if gfa_array[idx] < gfa_thr: global_max = max(global_max, odf.max()) continue # Get peaks of odf direction, pk, ind = peak_directions( odf, sphere, relative_peak_threshold=relative_peak_threshold, min_separation_angle=min_separation_angle, ) # Calculate peak metrics if pk.shape[0] != 0: global_max = max(global_max, pk[0]) n = min(npeaks, pk.shape[0]) qa_array[idx][:n] = pk[:n] - odf.min() peak_dirs[idx][:n] = direction[:n] peak_indices[idx][:n] = ind[:n] peak_values[idx][:n] = pk[:n] if normalize_peaks: peak_values[idx][:n] = peak_values[idx][:n] / pk[0] if pk[0] != 0 else 0 peak_dirs[idx] *= peak_values[idx][:, None] qa_array /= global_max return _pam_from_attrs( PeaksAndMetrics, sphere, peak_indices, peak_values, peak_dirs, gfa_array, qa_array, shm_coeff if return_sh else None, B if return_sh else None, odf_array if return_odf else None, ) def reshape_peaks_for_visualization(peaks): """Reshape peaks for visualization. Reshape and convert to float32 a set of peaks for visualisation with mrtrix or the fibernavigator. 
Parameters ---------- peaks: nd array (..., N, 3) or PeaksAndMetrics object The peaks to be reshaped and converted to float32. Returns ------- peaks : nd array (..., 3*N) """ if isinstance(peaks, PeaksAndMetrics): peaks = peaks.peak_dirs return peaks.reshape(np.append(peaks.shape[:-2], -1)).astype("float32") def peaks_from_positions( positions, odfs, sphere, affine, *, pmf_gen=None, relative_peak_threshold=0.5, min_separation_angle=25, is_symmetric=True, npeaks=5, ): """ Extract the peaks at each position. Parameters ---------- positions : array, (N, 3) World coordinates of the N positions. odfs : array, (X, Y, Z, M) Orientation distribution function (spherical function) represented on a sphere of M points. sphere : Sphere A discrete Sphere. The M points on the sphere correspond to the points of the odfs. affine : array (4, 4) The mapping between voxel indices and the point space for positions. pmf_gen : PmfGen Probability mass function generator from voxel orientation information. Replaces ``odfs`` and ``sphere`` when used. relative_peak_threshold : float, optional Only peaks greater than ``min + relative_peak_threshold * scale`` are kept, where ``min = max(0, odf.min())`` and ``scale = odf.max() - min``. The ``relative_peak_threshold`` should be in the range [0, 1]. min_separation_angle : float, optional The minimum distance between directions. If two peaks are too close only the larger of the two is returned. The ``min_separation_angle`` should be in the range [0, 90]. is_symmetric : bool, optional If True, v is considered equal to -v. npeaks : int, optional The maximum number of peaks to extract from each position. Returns ------- peaks_arr : array (N, npeaks, 3) """ if pmf_gen is not None and (odfs is not None or sphere is not None): msg = ( "``odfs`` and ``sphere`` arguments will be ignored in favor of ``pmf_gen``."
) warnings.warn(msg, stacklevel=2) if pmf_gen is not None: # use the sphere data from the pmf_gen sphere = pmf_gen.get_sphere() inv_affine = np.linalg.inv(affine) vox_positions = np.dot(positions, inv_affine[:3, :3].T.copy()) vox_positions += inv_affine[:3, 3] peaks_arr = np.zeros((len(positions), npeaks, 3)) if vox_positions.dtype not in [np.float64, float]: vox_positions = vox_positions.astype(float) for i, s in enumerate(vox_positions): if pmf_gen: odf = pmf_gen.get_pmf(s) else: odf = trilinear_interpolate4d(odfs, s) peaks, _, _ = peak_directions( odf, sphere, relative_peak_threshold=relative_peak_threshold, min_separation_angle=min_separation_angle, is_symmetric=is_symmetric, ) nbr_peaks = min(npeaks, peaks.shape[0]) peaks_arr[i, :nbr_peaks, :] = peaks[:nbr_peaks, :] return peaks_arr dipy-1.11.0/dipy/direction/pmf.pxd000066400000000000000000000010211476546756600170070ustar00rootroot00000000000000cimport numpy as np cdef class PmfGen: cdef: double[:] pmf double[:, :, :, :] data double[:, :] vertices object sphere cdef double* get_pmf_c(self, double* point, double* out) noexcept nogil cdef int find_closest(self, double* xyz) noexcept nogil cdef double get_pmf_value_c(self, double* point, double* xyz) noexcept nogil pass cdef class SimplePmfGen(PmfGen): pass cdef class SHCoeffPmfGen(PmfGen): cdef: double[:, :] B double[:] coeff pass dipy-1.11.0/dipy/direction/pmf.pyx000066400000000000000000000117501476546756600170460ustar00rootroot00000000000000# cython: boundscheck=False # cython: initializedcheck=False # cython: wraparound=False import numpy as np cimport numpy as cnp from dipy.reconst import shm from dipy.core.interpolation cimport trilinear_interpolate4d_c from libc.stdlib cimport malloc, free cdef extern from "stdlib.h" nogil: void *memset(void *ptr, int value, size_t num) cdef class PmfGen: def __init__(self, double[:, :, :, :] data, object sphere): self.data = np.asarray(data, dtype=float, order='C') self.vertices = np.asarray(sphere.vertices, dtype=float) self.pmf = np.zeros(self.vertices.shape[0]) self.sphere = sphere def get_pmf(self, double[::1] point, double[:] out=None): if out is None: out = self.pmf return self.get_pmf_c(&point[0], &out[0]) def get_sphere(self): return self.sphere cdef double* get_pmf_c(self, double* point, double* out) noexcept nogil: pass cdef int find_closest(self, double* xyz) noexcept nogil: cdef: cnp.npy_intp idx = 0 cnp.npy_intp i cnp.npy_intp len_pmf = self.pmf.shape[0] double cos_max = 0 double cos_sim for i in range(len_pmf): cos_sim = self.vertices[i][0] * xyz[0] \ + self.vertices[i][1] * xyz[1] \ + self.vertices[i][2] * xyz[2] if cos_sim < 0: cos_sim = cos_sim * -1 if cos_sim > cos_max: cos_max = cos_sim idx = i return idx def get_pmf_value(self, double[::1] point, double[::1] xyz): return self.get_pmf_value_c(&point[0], &xyz[0]) cdef double get_pmf_value_c(self, double* point, double* xyz) noexcept nogil: pass cdef class SimplePmfGen(PmfGen): def __init__(self, double[:, :, :, :] pmf_array, object sphere): PmfGen.__init__(self, pmf_array, sphere) if not pmf_array.shape[3] == sphere.vertices.shape[0]: raise ValueError("pmf should have the same number of values as the" + " number of vertices of sphere.") cdef double* get_pmf_c(self, double* point, double* out) noexcept nogil: if trilinear_interpolate4d_c(self.data, point, out) != 0: memset(out, 0, self.pmf.shape[0] * sizeof(double)) return out cdef double get_pmf_value_c(self, double* point, double* xyz) noexcept nogil: """ Return the pmf value corresponding to the closest vertex to the direction 
xyz. """ cdef: int idx double pmf_value = 0 idx = self.find_closest(xyz) trilinear_interpolate4d_c(self.data[:,:,:,idx:idx+1], point, &pmf_value) return pmf_value cdef class SHCoeffPmfGen(PmfGen): def __init__(self, double[:, :, :, :] shcoeff_array, object sphere, object basis_type, legacy=True): cdef: int sh_order PmfGen.__init__(self, shcoeff_array, sphere) sh_order = shm.order_from_ncoef(shcoeff_array.shape[3]) try: basis = shm.sph_harm_lookup[basis_type] except KeyError: raise ValueError("%s is not a known basis type." % basis_type) self.B, _, _ = basis(sh_order, sphere.theta, sphere.phi, legacy=legacy) cdef double* get_pmf_c(self, double* point, double* out) noexcept nogil: cdef: cnp.npy_intp i, j cnp.npy_intp len_pmf = self.pmf.shape[0] cnp.npy_intp len_B = self.B.shape[1] double _sum double *coeff = malloc(len_B * sizeof(double)) if trilinear_interpolate4d_c(self.data, point, coeff) != 0: memset(out, 0, len_pmf * sizeof(double)) else: for i in range(len_pmf): _sum = 0 for j in range(len_B): _sum = _sum + (self.B[i, j] * coeff[j]) out[i] = _sum free(coeff) return out cdef double get_pmf_value_c(self, double* point, double* xyz) noexcept nogil: """ Return the pmf value corresponding to the closest vertex to the direction xyz. """ cdef: int idx = self.find_closest(xyz) cnp.npy_intp j cnp.npy_intp len_B = self.B.shape[1] double *coeff = malloc(len_B * sizeof(double)) double pmf_value = 0 if trilinear_interpolate4d_c(self.data, point, coeff) == 0: for j in range(len_B): pmf_value = pmf_value + (self.B[idx, j] * coeff[j]) free(coeff) return pmf_value dipy-1.11.0/dipy/direction/probabilistic_direction_getter.pxd000066400000000000000000000004231476546756600244720ustar00rootroot00000000000000from dipy.direction.closest_peak_direction_getter cimport PmfGenDirectionGetter cdef class ProbabilisticDirectionGetter(PmfGenDirectionGetter): cdef: double[:, :] vertices cdef class DeterministicMaximumDirectionGetter(ProbabilisticDirectionGetter): pass dipy-1.11.0/dipy/direction/probabilistic_direction_getter.pyx000066400000000000000000000150731476546756600245260ustar00rootroot00000000000000# cython: boundscheck=False # cython: initializedcheck=False # cython: wraparound=False # cython: nonecheck=False """ Implementation of a probabilistic direction getter based on sampling from a discrete distribution (pmf) at each step of the tracking. """ from random import random import numpy as np cimport numpy as cnp from dipy.direction.closest_peak_direction_getter cimport PmfGenDirectionGetter from dipy.utils.fast_numpy cimport (copy_point, cumsum, norm, normalize, where_to_insert) cdef class ProbabilisticDirectionGetter(PmfGenDirectionGetter): """Randomly samples directions on a sphere based on a probability mass function (pmf). The main constructors for this class are currently from_pmf and from_shcoeff. The pmf gives the probability that each direction on the sphere should be chosen as the next direction. To get the true pmf from the "raw pmf", directions more than ``max_angle`` degrees from the incoming direction are set to 0 and the result is normalized. """ def __init__(self, pmf_gen, max_angle, sphere, pmf_threshold=.1, **kwargs): """Direction getter from a pmf generator. Parameters ---------- pmf_gen : PmfGen Used to get probability mass function for selecting tracking directions. max_angle : float, [0, 90] The maximum allowed angle between incoming direction and new direction. sphere : Sphere The set of directions to be used for tracking. pmf_threshold : float [0., 1.]
Used to remove directions from the probability mass function for selecting the tracking direction. relative_peak_threshold : float in [0., 1.] Used for extracting initial tracking directions. Passed to peak_directions. min_separation_angle : float in [0, 90] Used for extracting initial tracking directions. Passed to peak_directions. See Also -------- dipy.direction.peaks.peak_directions """ PmfGenDirectionGetter.__init__(self, pmf_gen, max_angle, sphere, pmf_threshold=pmf_threshold, **kwargs) # The vertices need to be in a contiguous array self.vertices = self.sphere.vertices.copy() cdef int get_direction_c(self, double[::1] point, double[::1] direction): """Samples a pmf to update the ``direction`` array with a new direction. Parameters ---------- point : memory-view (or ndarray), shape (3,) The point in an image at which to lookup tracking directions. direction : memory-view (or ndarray), shape (3,) Previous tracking direction. Returns ------- status : int Returns 0 if ``direction`` was updated with a new tracking direction, or 1 otherwise. """ cdef: cnp.npy_intp i, idx, _len double[:] newdir double* pmf double last_cdf, cos_sim _len = self.len_pmf pmf = self._get_pmf(point) if norm(&direction[0]) == 0: return 1 normalize(&direction[0]) with nogil: for i in range(_len): cos_sim = self.vertices[i][0] * direction[0] \ + self.vertices[i][1] * direction[1] \ + self.vertices[i][2] * direction[2] if cos_sim < 0: cos_sim = cos_sim * -1 if cos_sim < self.cos_similarity: pmf[i] = 0 cumsum(pmf, pmf, _len) last_cdf = pmf[_len - 1] if last_cdf == 0: return 1 idx = where_to_insert(pmf, random() * last_cdf, _len) newdir = self.vertices[idx] # Update direction and return 0 for success if (direction[0] * newdir[0] + direction[1] * newdir[1] + direction[2] * newdir[2] > 0): copy_point(&newdir[0], &direction[0]) else: newdir[0] = newdir[0] * -1 newdir[1] = newdir[1] * -1 newdir[2] = newdir[2] * -1 copy_point(&newdir[0], &direction[0]) return 0 cdef class DeterministicMaximumDirectionGetter(ProbabilisticDirectionGetter): """Return the direction on the sphere with the highest probability mass function (pmf) value. """ def __init__(self, pmf_gen, max_angle, sphere, pmf_threshold=.1, **kwargs): ProbabilisticDirectionGetter.__init__(self, pmf_gen, max_angle, sphere, pmf_threshold=pmf_threshold, **kwargs) cdef int get_direction_c(self, double[::1] point, double[::1] direction): """Find the direction with the highest pmf and update the ``direction`` array with it. Parameters ---------- point : memory-view (or ndarray), shape (3,) The point in an image at which to lookup tracking directions. direction : memory-view (or ndarray), shape (3,) Previous tracking direction. Returns ------- status : int Returns 0 if ``direction`` was updated with a new tracking direction, or 1 otherwise.
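Notes
-----
Equivalent NumPy sketch of the selection rule (illustrative only; the
real loop runs in C without the GIL and additionally flips the chosen
vertex into the hemisphere of the previous direction):

>>> import numpy as np
>>> def pick_deterministic(pmf, vertices, prev_dir, cos_similarity):
...     cos = np.abs(vertices @ prev_dir)
...     masked = np.where(cos > cos_similarity, pmf, 0.0)
...     return None if masked.max() <= 0 else vertices[masked.argmax()]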
""" cdef: cnp.npy_intp _len, max_idx double[:] newdir double* pmf double max_value, cos_sim pmf = self._get_pmf(point) _len = self.len_pmf max_idx = 0 max_value = 0.0 if norm(&direction[0]) == 0: return 1 normalize(&direction[0]) with nogil: for i in range(_len): cos_sim = self.vertices[i][0] * direction[0] \ + self.vertices[i][1] * direction[1] \ + self.vertices[i][2] * direction[2] if cos_sim < 0: cos_sim = cos_sim * -1 if cos_sim > self.cos_similarity and pmf[i] > max_value: max_idx = i max_value = pmf[i] if max_value <= 0: return 1 newdir = self.vertices[max_idx] # Update direction and return 0 for error if (direction[0] * newdir[0] + direction[1] * newdir[1] + direction[2] * newdir[2] > 0): copy_point(&newdir[0], &direction[0]) else: newdir[0] = newdir[0] * -1 newdir[1] = newdir[1] * -1 newdir[2] = newdir[2] * -1 copy_point(&newdir[0], &direction[0]) return 0 dipy-1.11.0/dipy/direction/ptt_direction_getter.pyx000066400000000000000000000454141476546756600225110ustar00rootroot00000000000000# cython: boundscheck=False # cython: initializedcheck=False # cython: wraparound=False """ Implementation of the Parallel Transport Tractography (PTT) algorithm by :footcite:t:`Aydogan2021`. PTT Default parameter values are slightly different than in Trekker to optimise performances. The rejection sampling algorithm also uses fewer samples to estimate the maximum of the posterior, and fewer tries to obtain a suitable propagation candidate. Moreover, the initial tangent direction in this implementation is always obtained from the voxel-wise peaks. References ---------- .. footbibliography:: """ cimport numpy as cnp from libc.math cimport M_PI, pow, sin, cos, fabs from libc.stdlib cimport malloc, free from dipy.direction.probabilistic_direction_getter cimport \ ProbabilisticDirectionGetter from dipy.utils.fast_numpy cimport (copy_point, cross, normalize, random, random_perpendicular_vector, random_point_within_circle) from dipy.tracking.stopping_criterion cimport (StreamlineStatus, StoppingCriterion, TRACKPOINT, ENDPOINT, OUTSIDEIMAGE, INVALIDPOINT) from dipy.tracking.utils import min_radius_curvature_from_angle cdef class PTTDirectionGetter(ProbabilisticDirectionGetter): """Parallel Transport Tractography (PTT) direction getter. """ cdef double angular_separation cdef double data_support_exponent cdef double[3][3] frame cdef double k1 cdef double k2 cdef double k_small cdef double last_val cdef double last_val_cand cdef double max_angle cdef double max_curvature cdef double[3] position cdef int probe_count cdef double probe_length cdef double probe_normalizer cdef int probe_quality cdef double probe_radius cdef double probe_step_size cdef double[9] propagator cdef double step_size cdef int rejection_sampling_max_try cdef int rejection_sampling_nbr_sample cdef double[3] voxel_size cdef double[3] inv_voxel_size def __init__(self, pmf_gen, max_angle, sphere, pmf_threshold=None, double probe_length=0.5, double probe_radius=0, int probe_quality=3, int probe_count=1, double data_support_exponent=1, **kwargs): """PTT used probe for estimating future propagation steps. A probe is a short, cylindrical model of the connecting segment. Parameters ---------- pmf_gen : PmfGen Used to get probability mass function for selecting tracking directions. max_angle : float, [0, 90] Is used to set the upper limits for the k1 and k2 parameters of parallel transport frame (max_curvature). sphere : Sphere The set of directions to be used for tracking. pmf_threshold : float, [0., 1.] 
Used to remove directions from the probability mass function for selecting the tracking direction. probe_length : double The length of the probes. Shorter probe_length yields more dispersed fibers. probe_radius : double The radius of the probe. A large probe_radius helps mitigate noise in the pmf, but it might make it harder to sample thin and intricate connections; the boundary of fiber bundles might also be eroded. probe_quality : integer The quality of the probe. This parameter sets the number of segments used to split the cylinder along the length of the probe (minimum=2). probe_count : integer The number of probes. This parameter sets the number of parallel lines used to model the cylinder (minimum=1). data_support_exponent : double Data support to the power ``data_support_exponent`` is used for rejection sampling. """ if not probe_length > 0: raise ValueError("probe_length must be greater than 0.") if not probe_radius >= 0: raise ValueError("probe_radius must be greater than or equal to 0.") if not probe_quality >= 2: raise ValueError("probe_quality must be greater than or equal to 2.") if not probe_count >= 1: raise ValueError("probe_count must be greater than or equal to 1.") self.max_angle = max_angle self.probe_length = probe_length self.probe_radius = probe_radius self.probe_quality = probe_quality self.probe_count = probe_count self.probe_step_size = self.probe_length / (self.probe_quality - 1) self.probe_normalizer = 1.0 / float(self.probe_quality * self.probe_count) self.angular_separation = 2.0 * M_PI / float(self.probe_count) self.data_support_exponent = data_support_exponent self.k_small = 0.0001 self.rejection_sampling_max_try = 100 self.rejection_sampling_nbr_sample = 10 # Adaptively set in Trekker. ProbabilisticDirectionGetter.__init__(self, pmf_gen, max_angle, sphere, pmf_threshold=pmf_threshold, **kwargs) cdef void initialize_candidate(self, double[:] init_dir): """Initialize the parallel transport frame. After the initial position is set, a parallel transport frame is set up using the initial direction (a walking frame, i.e., 3 orthonormal vectors, plus 2 scalars, k1 and k2). A point and a parallel transport frame parametrize a curve, named the "probe". Using the probe parameters (probe_length, probe_radius, probe_quality, probe_count), a short fiber bundle segment is modelled. Parameters ---------- init_dir : np.array Initial tracking direction (tangent) """ cdef double[3] position cdef int count cdef cnp.npy_intp i # Initialize Frame self.frame[0][0] = init_dir[0] self.frame[0][1] = init_dir[1] self.frame[0][2] = init_dir[2] random_perpendicular_vector(&self.frame[2][0], &self.frame[0][0]) cross(&self.frame[1][0], &self.frame[2][0], &self.frame[0][0]) self.k1, self.k2 = random_point_within_circle(self.max_curvature) self.last_val = 0 if self.probe_count == 1: self.last_val = self.pmf_gen.get_pmf_value_c(self.position, self.frame[0]) else: for count in range(self.probe_count): for i in range(3): position[i] = (self.position[i] + self.frame[1][i] * self.probe_radius * cos(count * self.angular_separation) * self.inv_voxel_size[i] + self.frame[2][i] * self.probe_radius * sin(count * self.angular_separation) * self.inv_voxel_size[i]) self.last_val += self.pmf_gen.get_pmf_value_c(position, self.frame[0]) cdef void prepare_propagator(self, double arclength) nogil: """Prepare the propagator. The propagator is used for transporting the moving frame forward.
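For nearly straight propagation (both |k1| and |k2| below ``k_small``) the propagator reduces to a pure translation along the tangent; otherwise its entries implement a second-order arc approximation in which, for example, the lateral offsets are ``k1 * arclength**2 / 2`` and ``k2 * arclength**2 / 2`` (see the assignments below).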
Parameters ---------- arclength : double Arclength, which is equivalent to the step size along the arc. """ cdef double tmp_arclength if (fabs(self.k1) < self.k_small and fabs(self.k2) < self.k_small): self.propagator[0] = arclength self.propagator[1] = 0 self.propagator[2] = 0 self.propagator[3] = 1 self.propagator[4] = 0 self.propagator[5] = 0 self.propagator[6] = 0 self.propagator[7] = 0 self.propagator[8] = 1 else: if fabs(self.k1) < self.k_small: self.k1 = self.k_small if fabs(self.k2) < self.k_small: self.k2 = self.k_small tmp_arclength = arclength * arclength / 2.0 self.propagator[0] = arclength self.propagator[1] = self.k1 * tmp_arclength self.propagator[2] = self.k2 * tmp_arclength self.propagator[3] = (1 - self.k2 * self.k2 * tmp_arclength - self.k1 * self.k1 * tmp_arclength) self.propagator[4] = self.k1 * arclength self.propagator[5] = self.k2 * arclength self.propagator[6] = -self.k2 * arclength self.propagator[7] = -self.k1 * self.k2 * tmp_arclength self.propagator[8] = (1 - self.k2 * self.k2 * tmp_arclength) cdef double calculate_data_support(self): """Calculate the data support for the candidate probe.""" cdef double fod_amp cdef double[3] position cdef double[3][3] frame cdef double[3] tangent cdef double[3] normal cdef double[3] binormal cdef double[3] new_position cdef double likelihood cdef int c, i, j, q self.prepare_propagator(self.probe_step_size) for i in range(3): position[i] = self.position[i] for j in range(3): frame[i][j] = self.frame[i][j] likelihood = self.last_val for q in range(1, self.probe_quality): for i in range(3): position[i] = \ (self.propagator[0] * frame[0][i] * self.voxel_size[i] + self.propagator[1] * frame[1][i] * self.voxel_size[i] + self.propagator[2] * frame[2][i] * self.voxel_size[i] + position[i]) tangent[i] = (self.propagator[3] * frame[0][i] + self.propagator[4] * frame[1][i] + self.propagator[5] * frame[2][i]) normalize(&tangent[0]) if q < (self.probe_quality - 1): for i in range(3): binormal[i] = (self.propagator[6] * frame[0][i] + self.propagator[7] * frame[1][i] + self.propagator[8] * frame[2][i]) cross(&normal[0], &binormal[0], &tangent[0]) copy_point(&tangent[0], &frame[0][0]) copy_point(&normal[0], &frame[1][0]) copy_point(&binormal[0], &frame[2][0]) if self.probe_count == 1: fod_amp = self.pmf_gen.get_pmf_value_c(position, tangent) fod_amp = fod_amp if fod_amp > self.pmf_threshold else 0 self.last_val_cand = fod_amp likelihood += self.last_val_cand else: self.last_val_cand = 0 if q == self.probe_quality-1: for i in range(3): binormal[i] = (self.propagator[6] * frame[0][i] + self.propagator[7] * frame[1][i] + self.propagator[8] * frame[2][i]) cross(&normal[0], &binormal[0], &tangent[0]) for c in range(self.probe_count): for i in range(3): new_position[i] = (position[i] + normal[i] * self.probe_radius * cos(c * self.angular_separation) * self.inv_voxel_size[i] + binormal[i] * self.probe_radius * sin(c * self.angular_separation) * self.inv_voxel_size[i]) fod_amp = self.pmf_gen.get_pmf_value_c(new_position, tangent) fod_amp = fod_amp if fod_amp > self.pmf_threshold else 0 self.last_val_cand += fod_amp likelihood += self.last_val_cand likelihood *= self.probe_normalizer if self.data_support_exponent != 1: likelihood = pow(likelihood, self.data_support_exponent) return likelihood cdef int initialize(self, double[:] seed_point, double[:] seed_direction): """Sample an initial curve by rejection sampling.
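In outline (paraphrasing the implementation that follows): candidate frames are drawn with ``initialize_candidate``, the posterior maximum is first estimated from ``rejection_sampling_nbr_sample`` draws (and inflated to compensate for underestimation), and a candidate is then accepted when ``random() * max_posterior <= calculate_data_support()``, giving up after ``rejection_sampling_max_try`` attempts.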
Parameters ---------- seed_point : double[3] Initial point seed_direction : double[3] Initial direction Returns ------- status : int Returns 0 if the initialization was successful, or 1 otherwise. """ cdef double data_support = 0 cdef double max_posterior = 0 cdef int tries self.position[0] = seed_point[0] self.position[1] = seed_point[1] self.position[2] = seed_point[2] for tries in range(self.rejection_sampling_nbr_sample): self.initialize_candidate(seed_direction) data_support = self.calculate_data_support() if data_support > max_posterior: max_posterior = data_support # Compensation for underestimation of max posterior estimate max_posterior = pow(2.0 * max_posterior, self.data_support_exponent) # Initialization is successful if a suitable candidate can be sampled # within the trial limit for tries in range(self.rejection_sampling_max_try): self.initialize_candidate(seed_direction) if (random() * max_posterior <= self.calculate_data_support()): self.last_val = self.last_val_cand return 0 return 1 cdef int propagate(self): """Propagate the position forward by one step_size. The propagation uses the parameters of the last candidate curve; a new curve parametrization is then randomly generated from the current position. The walking frame stays the same, only the k1 and k2 parameters are randomly picked. Rejection sampling is used to pick the next curve using the data support (likelihood). Returns ------- status : int Returns 0 if the propagation was successful, or 1 otherwise. """ cdef double max_posterior = 0 cdef double data_support = 0 cdef double[3] tangent cdef int tries cdef cnp.npy_intp i self.prepare_propagator(self.step_size) for i in range(3): self.position[i] = \ (self.propagator[0] * self.frame[0][i] * self.inv_voxel_size[i] + self.propagator[1] * self.frame[1][i] * self.inv_voxel_size[i] + self.propagator[2] * self.frame[2][i] * self.inv_voxel_size[i] + self.position[i]) tangent[i] = (self.propagator[3] * self.frame[0][i] + self.propagator[4] * self.frame[1][i] + self.propagator[5] * self.frame[2][i]) self.frame[2][i] = (self.propagator[6] * self.frame[0][i] + self.propagator[7] * self.frame[1][i] + self.propagator[8] * self.frame[2][i]) normalize(&tangent[0]) cross(&self.frame[1][0], &self.frame[2][0], &tangent[0]) normalize(&self.frame[1][0]) cross(&self.frame[2][0], &tangent[0], &self.frame[1][0]) self.frame[0][0] = tangent[0] self.frame[0][1] = tangent[1] self.frame[0][2] = tangent[2] for tries in range(self.rejection_sampling_nbr_sample): self.k1, self.k2 = random_point_within_circle(self.max_curvature) data_support = self.calculate_data_support() if data_support > max_posterior: max_posterior = data_support # Compensation for underestimation of max posterior estimate max_posterior = pow(2.0 * max_posterior, self.data_support_exponent) # Propagation is successful if a suitable candidate can be sampled # within the trial limit for tries in range(self.rejection_sampling_max_try): self.k1, self.k2 = random_point_within_circle(self.max_curvature) if random() * max_posterior <= self.calculate_data_support(): self.last_val = self.last_val_cand return 0 return 1 # @cython.boundscheck(False) # @cython.wraparound(False) # @cython.cdivision(True) cpdef tuple generate_streamline(self, double[::1] seed, double[::1] dir, double[::1] voxel_size, double step_size, StoppingCriterion stopping_criterion, cnp.float_t[:, :] streamline, StreamlineStatus stream_status, int fixedstep): cdef: cnp.npy_intp i cnp.npy_intp len_streamlines = streamline.shape[0] double average_voxel_size = 0 if not fixedstep >
0: raise ValueError("PTT only supports fixed step size.") self.step_size = step_size for i in range(3): self.voxel_size[i] = voxel_size[i] self.inv_voxel_size[i] = 1 / voxel_size[i] average_voxel_size += voxel_size[i] / 3 # convert max_angle from degrees to radians self.max_curvature = 1 / min_radius_curvature_from_angle( self.max_angle * M_PI / 180.0, self.step_size / average_voxel_size) copy_point(&seed[0], &streamline[0,0]) i = 0 stream_status = TRACKPOINT if not self.initialize(seed, dir): # the initialization was successful for i in range(1, len_streamlines): if self.propagate(): # the propagation failed break copy_point(&self.position[0], &streamline[i, 0]) stream_status = stopping_criterion\ .check_point_c( &self.position[0]) if stream_status == TRACKPOINT: continue elif (stream_status == ENDPOINT or stream_status == INVALIDPOINT or stream_status == OUTSIDEIMAGE): break else: # maximum length has been reached, return everything i = streamline.shape[0] return i, stream_status dipy-1.11.0/dipy/direction/tests/000077500000000000000000000000001476546756600166605ustar00rootroot00000000000000dipy-1.11.0/dipy/direction/tests/__init__.py000066400000000000000000000000411476546756600207640ustar00rootroot00000000000000# Make direction/tests a package dipy-1.11.0/dipy/direction/tests/meson.build000066400000000000000000000004261476546756600210240ustar00rootroot00000000000000python_sources = [ '__init__.py', 'test_bootstrap_direction_getter.py', 'test_peaks.py', 'test_pmf.py', 'test_prob_direction_getter.py', 'test_ptt_direction_getter.py', ] py3.install_sources( python_sources, pure: false, subdir: 'dipy/direction/tests' ) dipy-1.11.0/dipy/direction/tests/test_bootstrap_direction_getter.py000066400000000000000000000265241476546756600257310ustar00rootroot00000000000000import warnings import numpy as np import numpy.testing as npt from dipy.core.geometry import cart2sphere from dipy.core.gradients import gradient_table from dipy.core.sphere import HemiSphere, unit_icosahedron from dipy.data import get_fnames, get_sphere from dipy.direction.bootstrap_direction_getter import BootDirectionGetter from dipy.io.gradients import read_bvals_bvecs from dipy.reconst import dti, shm from dipy.reconst.csdeconv import ConstrainedSphericalDeconvModel, TensorModel from dipy.sims.voxel import multi_tensor, single_tensor from dipy.testing.decorators import set_random_number_generator def test_bdg_initial_direction(): """This tests the number of initial directions." 
""" hsph_updated = HemiSphere.from_sphere(unit_icosahedron).subdivide(n=2) vertices = hsph_updated.vertices bvecs = vertices bvals = np.ones(len(vertices)) * 1000 bvecs = np.insert(bvecs, 0, np.array([0, 0, 0]), axis=0) bvals = np.insert(bvals, 0, 0) gtab = gradient_table(bvals, bvecs=bvecs) # test that we get one direction when we have a single tensor sphere = HemiSphere.from_sphere(get_sphere(name="symmetric724")) voxel = single_tensor(gtab).reshape([1, 1, 1, -1]) dti_model = dti.TensorModel(gtab) with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=shm.descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) boot_dg = BootDirectionGetter.from_data( voxel, dti_model, 30, sphere=sphere, sh_order=6 ) initial_direction = boot_dg.initial_direction(np.zeros(3)) npt.assert_equal(len(initial_direction), 1) npt.assert_allclose(initial_direction[0], [1, 0, 0], atol=0.1) # test that we get multiple directions when we have a multi-tensor mevals = np.array([[1.5, 0.4, 0.4], [1.5, 0.4, 0.4]]) * 1e-3 fracs = [60, 40] voxel, primary_evecs = multi_tensor(gtab, mevals, fractions=fracs, snr=None) voxel = voxel.reshape([1, 1, 1, -1]) response = (np.array([0.0015, 0.0004, 0.0004]), 1) with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=shm.descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) csd_model = ConstrainedSphericalDeconvModel( gtab, response=response, sh_order_max=4 ) boot_dg = BootDirectionGetter.from_data( voxel, csd_model, 30, sphere=sphere, ) initial_direction = boot_dg.initial_direction(np.zeros(3)) npt.assert_equal(len(initial_direction), 2) npt.assert_allclose(initial_direction, primary_evecs, atol=0.1) def test_bdg_get_direction(): """This tests the direction found by the bootstrap direction getter.""" _, fbvals, fbvecs = get_fnames(name="small_64D") bvals, bvecs = read_bvals_bvecs(fbvals, fbvecs) gtab = gradient_table(bvals, bvecs=bvecs, b0_threshold=0) mevals = np.array(([0.0015, 0.0003, 0.0003], [0.0015, 0.0003, 0.0003])) angles = [(0, 0)] voxel, _ = multi_tensor(gtab, mevals, S0=1, angles=angles, fractions=[100], snr=100) data = np.tile(voxel, (3, 3, 3, 1)) sphere = get_sphere(name="symmetric362") response = (np.array([0.0015, 0.0003, 0.0003]), 1) with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=shm.descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) csd_model = ConstrainedSphericalDeconvModel(gtab, response, sh_order_max=6) point = np.array([0.0, 0.0, 0.0]) prev_direction = sphere.vertices[5] # test case in which no valid direction is found with default max attempts with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=shm.descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) boot_dg = BootDirectionGetter( data, model=csd_model, max_angle=10.0, sphere=sphere ) npt.assert_equal(boot_dg.get_direction(point, prev_direction), 1) # test case in which no valid direction is found with new max attempts with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=shm.descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) boot_dg = BootDirectionGetter( data, model=csd_model, max_angle=10, sphere=sphere, max_attempts=3 ) npt.assert_equal(boot_dg.get_direction(point, prev_direction), 1) # test case in which a valid direction is found with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=shm.descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) boot_dg = BootDirectionGetter( data, model=csd_model, max_angle=60.0, 
sphere=sphere, max_attempts=5 ) npt.assert_equal(boot_dg.get_direction(point, prev_direction), 0) # test invalid max_attempts parameters with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=shm.descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) npt.assert_raises( ValueError, lambda: BootDirectionGetter( data, csd_model, 60, sphere=sphere, max_attempts=0 ), ) @set_random_number_generator() def test_bdg_residual(rng): """This tests the bootstrapping residual.""" hsph_updated = HemiSphere.from_sphere(unit_icosahedron).subdivide(n=2) vertices = hsph_updated.vertices bvecs = vertices bvals = np.ones(len(vertices)) * 1000 bvecs = np.insert(bvecs, 0, np.array([0, 0, 0]), axis=0) bvals = np.insert(bvals, 0, 0) gtab = gradient_table(bvals, bvecs=bvecs) r, theta, phi = cart2sphere(*vertices.T) with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=shm.descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) B, m, n = shm.real_sh_descoteaux(6, theta, phi) shm_coeff = rng.random(B.shape[1]) # sphere_func is sampled of the spherical function for each point of # the sphere sphere_func = np.dot(shm_coeff, B.T) response = (np.array([1.5e3, 0.3e3, 0.3e3]), 1) voxel = np.concatenate((np.zeros(1), sphere_func)) data = np.tile(voxel, (3, 3, 3, 1)) with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=shm.descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) csd_model = ConstrainedSphericalDeconvModel(gtab, response, sh_order_max=6) boot_dg = BootDirectionGetter.from_data( data, model=csd_model, max_angle=60, sphere=hsph_updated, sh_order=6 ) # Two boot samples should be the same odf1 = boot_dg.get_pmf(np.array([1.5, 1.5, 1.5])) odf2 = boot_dg.get_pmf(np.array([1.5, 1.5, 1.5])) npt.assert_array_almost_equal(odf1, odf2) # A boot sample with less sh coeffs should have residuals, thus the two # should be different with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=shm.descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) boot_dg2 = BootDirectionGetter.from_data( data, model=csd_model, max_angle=60, sphere=hsph_updated, sh_order=4 ) odf1 = boot_dg2.get_pmf(np.array([1.5, 1.5, 1.5])) odf2 = boot_dg2.get_pmf(np.array([1.5, 1.5, 1.5])) npt.assert_(np.any(odf1 != odf2)) # test with a gtab with two shells and assert you get an error bvals[-1] = 2000 gtab = gradient_table(bvals, bvecs=bvecs) with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=shm.descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) csd_model = ConstrainedSphericalDeconvModel(gtab, response, sh_order_max=6) npt.assert_raises( ValueError, BootDirectionGetter, data, csd_model, 60, sphere=hsph_updated, max_attempts=6, ) def test_boot_pmf(): # This tests the local model used for the bootstrapping. 
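# Hedged aside (not part of the original test): ``get_pmf_no_boot`` is
# expected to reproduce a direct model fit on the same voxel, e.g.
#     model_pmf = tensor_model.fit(voxel).odf(hsph_updated)
# which is exactly what the assertions below check; bootstrapped pmfs,
# by contrast, resample the fit residuals (see test_bdg_residual).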
hsph_updated = HemiSphere.from_sphere(unit_icosahedron) vertices = hsph_updated.vertices bvecs = vertices bvals = np.ones(len(vertices)) * 1000 bvecs = np.insert(bvecs, 0, np.array([0, 0, 0]), axis=0) bvals = np.insert(bvals, 0, 0) gtab = gradient_table(bvals, bvecs=bvecs) voxel = single_tensor(gtab) data = np.tile(voxel, (3, 3, 3, 1)) point = np.array([1.0, 1.0, 1.0]) tensor_model = TensorModel(gtab) response = (np.array([1.5e3, 0.3e3, 0.3e3]), 1) with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=shm.descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) boot_dg = BootDirectionGetter( data, model=tensor_model, max_angle=60, sphere=hsph_updated ) no_boot_pmf = boot_dg.get_pmf_no_boot(point) model_pmf = tensor_model.fit(voxel).odf(hsph_updated) npt.assert_equal(len(hsph_updated.vertices), no_boot_pmf.shape[0]) npt.assert_array_almost_equal(no_boot_pmf, model_pmf) # test model spherical harmonic order different than bootstrap order with warnings.catch_warnings(record=True) as w: warnings.simplefilter("always", category=UserWarning) warnings.simplefilter("always", category=PendingDeprecationWarning) csd_model = ConstrainedSphericalDeconvModel(gtab, response, sh_order_max=6) # Tests that the first caught warning comes from the CSD model # constructor npt.assert_(issubclass(w[0].category, UserWarning)) npt.assert_("Number of parameters required " in str(w[0].message)) # Tests that additional warnings are raised for outdated SH basis npt.assert_(len(w) > 1) boot_dg_sh4 = BootDirectionGetter( data, model=csd_model, max_angle=60, sphere=hsph_updated, sh_order=4 ) pmf_sh4 = boot_dg_sh4.get_pmf(point) npt.assert_equal(len(hsph_updated.vertices), pmf_sh4.shape[0]) npt.assert_(np.sum(pmf_sh4.shape) > 0) boot_dg_sh8 = BootDirectionGetter( data, model=csd_model, max_angle=60, sphere=hsph_updated, sh_order=8 ) pmf_sh8 = boot_dg_sh8.get_pmf(point) npt.assert_equal(len(hsph_updated.vertices), pmf_sh8.shape[0]) npt.assert_(np.sum(pmf_sh8.shape) > 0) # test b_tol parameter bvals[-2] = 1100 gtab = gradient_table(bvals, bvecs=bvecs) tensor_model = TensorModel(gtab) with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=shm.descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) npt.assert_raises( ValueError, BootDirectionGetter, data, tensor_model, 60, sphere=hsph_updated, max_attempts=6, b_tol=20, ) npt.assert_raises( ValueError, BootDirectionGetter, data, tensor_model, 60, sphere=hsph_updated, max_attempts=6, b_tol=-1, ) dipy-1.11.0/dipy/direction/tests/test_peaks.py000066400000000000000000001040771476546756600214050ustar00rootroot00000000000000from io import BytesIO import pickle from random import randint import warnings import numpy as np from numpy.testing import ( assert_, assert_almost_equal, assert_array_almost_equal, assert_array_equal, assert_equal, assert_raises, assert_warns, ) from dipy.core.gradients import GradientTable, gradient_table from dipy.core.sphere import HemiSphere, unit_icosahedron from dipy.core.sphere_stats import angular_similarity from dipy.core.subdivide_octahedron import create_unit_hemisphere from dipy.data import default_sphere, get_fnames, get_sphere from dipy.direction.peaks import ( peak_directions, peak_directions_nl, peaks_from_model, peaks_from_positions, reshape_peaks_for_visualization, ) from dipy.direction.pmf import SHCoeffPmfGen, SimplePmfGen from dipy.io.gradients import read_bvals_bvecs from dipy.reconst.odf import OdfFit, OdfModel, gfa from dipy.reconst.shm import CsaOdfModel, descoteaux07_legacy_msg, 
tournier07_legacy_msg from dipy.sims.voxel import multi_tensor, multi_tensor_odf from dipy.testing.decorators import set_random_number_generator from dipy.tracking.utils import seeds_from_mask def test_peak_directions_nl(): def discrete_eval(sphere): return abs(sphere.vertices).sum(-1) directions, values = peak_directions_nl(discrete_eval) assert_equal(directions.shape, (4, 3)) assert_array_almost_equal(abs(directions), 1 / np.sqrt(3)) assert_array_equal(values, abs(directions).sum(-1)) # Test using a different sphere sphere = unit_icosahedron.subdivide(n=4) directions, values = peak_directions_nl(discrete_eval, sphere=sphere) assert_equal(directions.shape, (4, 3)) assert_array_almost_equal(abs(directions), 1 / np.sqrt(3)) assert_array_equal(values, abs(directions).sum(-1)) # Test the relative_peak_threshold def discrete_eval(sphere): A = abs(sphere.vertices).sum(-1) x, y, z = sphere.vertices.T B = 1 + (x * z > 0) + 2 * (y * z > 0) return A * B directions, values = peak_directions_nl(discrete_eval, relative_peak_threshold=0.01) assert_equal(directions.shape, (4, 3)) directions, values = peak_directions_nl(discrete_eval, relative_peak_threshold=0.3) assert_equal(directions.shape, (3, 3)) directions, values = peak_directions_nl(discrete_eval, relative_peak_threshold=0.6) assert_equal(directions.shape, (2, 3)) directions, values = peak_directions_nl(discrete_eval, relative_peak_threshold=0.8) assert_equal(directions.shape, (1, 3)) assert_almost_equal(values, 4 * 3 / np.sqrt(3)) # Test odfs with large areas of zero def discrete_eval(sphere): A = abs(sphere.vertices).sum(-1) x, y, z = sphere.vertices.T B = (x * z > 0) + 2 * (y * z > 0) return A * B directions, values = peak_directions_nl(discrete_eval, relative_peak_threshold=0.0) assert_equal(directions.shape, (3, 3)) directions, values = peak_directions_nl(discrete_eval, relative_peak_threshold=0.6) assert_equal(directions.shape, (2, 3)) directions, values = peak_directions_nl(discrete_eval, relative_peak_threshold=0.8) assert_equal(directions.shape, (1, 3)) assert_almost_equal(values, 3 * 3 / np.sqrt(3)) _sphere = create_unit_hemisphere(recursion_level=4) _odf = (_sphere.vertices * [1, 2, 3]).sum(-1) _gtab = GradientTable(np.ones((64, 3))) class SimpleOdfModel(OdfModel): sphere = _sphere def fit(self, data): fit = SimpleOdfFit(self, data) fit.model = self return fit class SimpleOdfFit(OdfFit): def odf(self, sphere=None): if sphere is None: sphere = self.model.sphere # Use ascontiguousarray to work around a bug in NumPy return np.ascontiguousarray((sphere.vertices * [1, 2, 3]).sum(-1)) def test_OdfFit(): m = SimpleOdfModel(_gtab) f = m.fit(None) odf = f.odf(_sphere) assert_equal(len(odf), len(_sphere.theta)) def test_peak_directions(): model = SimpleOdfModel(_gtab) fit = model.fit(None) odf = fit.odf() argmax = odf.argmax() mx = odf.max() sphere = fit.model.sphere # Only one peak direction, val, ind = peak_directions( odf, sphere, relative_peak_threshold=0.5, min_separation_angle=45 ) dir_e = sphere.vertices[[argmax]] assert_array_equal(ind, [argmax]) assert_array_equal(val, odf[ind]) assert_array_equal(direction, dir_e) odf[0] = mx * 0.9 # Two peaks, relative_threshold direction, val, ind = peak_directions( odf, sphere, relative_peak_threshold=1.0, min_separation_angle=0 ) dir_e = sphere.vertices[[argmax]] assert_array_equal(direction, dir_e) assert_array_equal(ind, [argmax]) assert_array_equal(val, odf[ind]) direction, val, ind = peak_directions( odf, sphere, relative_peak_threshold=0.8, min_separation_angle=0 ) dir_e = 
sphere.vertices[[argmax, 0]] assert_array_equal(direction, dir_e) assert_array_equal(ind, [argmax, 0]) assert_array_equal(val, odf[ind]) # Two peaks, angle_sep direction, val, ind = peak_directions( odf, sphere, relative_peak_threshold=0.0, min_separation_angle=90 ) dir_e = sphere.vertices[[argmax]] assert_array_equal(direction, dir_e) assert_array_equal(ind, [argmax]) assert_array_equal(val, odf[ind]) direction, val, ind = peak_directions( odf, sphere, relative_peak_threshold=0.0, min_separation_angle=0 ) dir_e = sphere.vertices[[argmax, 0]] assert_array_equal(direction, dir_e) assert_array_equal(ind, [argmax, 0]) assert_array_equal(val, odf[ind]) def _create_mt_sim(mevals, angles, fractions, S0, SNR, half_sphere=False): _, fbvals, fbvecs = get_fnames(name="small_64D") bvals, bvecs = read_bvals_bvecs(fbvals, fbvecs) gtab = gradient_table(bvals, bvecs=bvecs) S, sticks = multi_tensor( gtab, mevals, S0=S0, angles=angles, fractions=fractions, snr=SNR ) sphere = get_sphere(name="symmetric724").subdivide(n=2) if half_sphere: sphere = HemiSphere.from_sphere(sphere) odf_gt = multi_tensor_odf( sphere.vertices, mevals, angles=angles, fractions=fractions ) return odf_gt, sticks, sphere def test_peak_directions_thorough(): # two equal fibers (creating a very sharp odf) mevals = np.array([[0.0025, 0.0003, 0.0003], [0.0025, 0.0003, 0.0003]]) angles = [(0, 0), (45, 0)] fractions = [50, 50] odf_gt, sticks, sphere = _create_mt_sim(mevals, angles, fractions, 100, None) directions, values, indices = peak_directions( odf_gt, sphere, relative_peak_threshold=0.5, min_separation_angle=25.0 ) assert_almost_equal(angular_similarity(directions, sticks), 2, 2) # two unequal fibers fractions = [75, 25] odf_gt, sticks, sphere = _create_mt_sim(mevals, angles, fractions, 100, None) directions, values, indices = peak_directions( odf_gt, sphere, relative_peak_threshold=0.5, min_separation_angle=25.0 ) assert_almost_equal(angular_similarity(directions, sticks), 1, 2) directions, values, indices = peak_directions( odf_gt, sphere, relative_peak_threshold=0.20, min_separation_angle=25.0 ) assert_almost_equal(angular_similarity(directions, sticks), 2, 2) # two equal fibers short angle (simulating very sharp ODF) mevals = np.array(([0.0045, 0.0003, 0.0003], [0.0045, 0.0003, 0.0003])) fractions = [50, 50] angles = [(0, 0), (20, 0)] odf_gt, sticks, sphere = _create_mt_sim(mevals, angles, fractions, 100, None) directions, values, indices = peak_directions( odf_gt, sphere, relative_peak_threshold=0.5, min_separation_angle=25.0 ) assert_almost_equal(angular_similarity(directions, sticks), 1, 2) directions, values, indices = peak_directions( odf_gt, sphere, relative_peak_threshold=0.5, min_separation_angle=15.0 ) assert_almost_equal(angular_similarity(directions, sticks), 2, 2) # 1 fiber mevals = np.array([[0.0015, 0.0003, 0.0003], [0.0015, 0.0003, 0.0003]]) fractions = [50, 50] angles = [(15, 0), (15, 0)] odf_gt, sticks, sphere = _create_mt_sim(mevals, angles, fractions, 100, None) directions, values, indices = peak_directions( odf_gt, sphere, relative_peak_threshold=0.5, min_separation_angle=15.0 ) assert_almost_equal(angular_similarity(directions, sticks), 1, 2) AE = np.rad2deg(np.arccos(np.dot(directions[0], sticks[0]))) assert_(abs(AE) < 2.0 or abs(AE - 180) < 2.0) # two equal fibers and one small noisy one mevals = np.array( [[0.0015, 0.0003, 0.0003], [0.0015, 0.0003, 0.0003], [0.0015, 0.0003, 0.0003]] ) angles = [(0, 0), (45, 0), (90, 0)] fractions = [45, 45, 10] odf_gt, sticks, sphere = _create_mt_sim(mevals, angles, 
fractions, 100, None) directions, values, indices = peak_directions( odf_gt, sphere, relative_peak_threshold=0.5, min_separation_angle=25.0 ) assert_almost_equal(angular_similarity(directions, sticks), 2, 2) # two equal fibers and one faulty mevals = np.array( [[0.0015, 0.0003, 0.0003], [0.0015, 0.0003, 0.0003], [0.0015, 0.0003, 0.0003]] ) angles = [(0, 0), (45, 0), (60, 0)] fractions = [45, 45, 10] odf_gt, sticks, sphere = _create_mt_sim(mevals, angles, fractions, 100, None) directions, values, indices = peak_directions( odf_gt, sphere, relative_peak_threshold=0.5, min_separation_angle=25.0 ) assert_almost_equal(angular_similarity(directions, sticks), 2, 2) # two equal fibers and one very very annoying one mevals = np.array( [[0.0015, 0.0003, 0.0003], [0.0015, 0.0003, 0.0003], [0.0015, 0.0003, 0.0003]] ) angles = [(0, 0), (45, 0), (60, 0)] fractions = [40, 40, 20] odf_gt, sticks, sphere = _create_mt_sim(mevals, angles, fractions, 100, None) directions, values, indices = peak_directions( odf_gt, sphere, relative_peak_threshold=0.5, min_separation_angle=25.0 ) assert_almost_equal(angular_similarity(directions, sticks), 2, 2) # three peaks and one faulty mevals = np.array( [ [0.0015, 0.0003, 0.0003], [0.0015, 0.0003, 0.0003], [0.0015, 0.0003, 0.0003], [0.0015, 0.0003, 0.0003], ] ) angles = [(0, 0), (45, 0), (90, 0), (90, 45)] fractions = [35, 35, 20, 10] odf_gt, sticks, sphere = _create_mt_sim(mevals, angles, fractions, 100, None) directions, values, indices = peak_directions( odf_gt, sphere, relative_peak_threshold=0.5, min_separation_angle=25.0 ) assert_almost_equal(angular_similarity(directions, sticks), 3, 2) # four peaks mevals = np.array( [ [0.0015, 0.0003, 0.0003], [0.0015, 0.0003, 0.0003], [0.0015, 0.0003, 0.0003], [0.0015, 0.0003, 0.0003], ] ) angles = [(0, 0), (45, 0), (90, 0), (90, 45)] fractions = [25, 25, 25, 25] odf_gt, sticks, sphere = _create_mt_sim(mevals, angles, fractions, 100, None) directions, values, indices = peak_directions( odf_gt, sphere, relative_peak_threshold=0.15, min_separation_angle=5.0 ) assert_almost_equal(angular_similarity(directions, sticks), 4, 2) # four difficult peaks mevals = np.array( [ [0.0015, 0.0003, 0.0003], [0.0015, 0.0003, 0.0003], [0.0015, 0.0003, 0.0003], [0.0015, 0.0003, 0.0003], ] ) angles = [(0, 0), (45, 0), (90, 0), (90, 45)] fractions = [30, 30, 20, 20] odf_gt, sticks, sphere = _create_mt_sim(mevals, angles, fractions, 100, None) directions, values, indices = peak_directions( odf_gt, sphere, relative_peak_threshold=0, min_separation_angle=0 ) assert_almost_equal(angular_similarity(directions, sticks), 4, 1) # test the asymmetric case directions, values, indices = peak_directions( odf_gt, sphere, relative_peak_threshold=0, min_separation_angle=0, is_symmetric=False, ) expected = np.concatenate([sticks, -sticks], axis=0) assert_almost_equal(angular_similarity(directions, expected), 8, 1) odf_gt, sticks, hsphere = _create_mt_sim( mevals, angles, fractions, 100, None, half_sphere=True ) directions, values, indices = peak_directions( odf_gt, hsphere, relative_peak_threshold=0, min_separation_angle=0 ) assert_equal(angular_similarity(directions, sticks) < 4, True) # four peaks and one them quite small fractions = [35, 35, 20, 10] odf_gt, sticks, sphere = _create_mt_sim(mevals, angles, fractions, 100, None) directions, values, indices = peak_directions( odf_gt, sphere, relative_peak_threshold=0, min_separation_angle=0 ) assert_equal(angular_similarity(directions, sticks) < 4, True) odf_gt, sticks, hsphere = _create_mt_sim( mevals, angles, 
fractions, 100, None, half_sphere=True ) directions, values, indices = peak_directions( odf_gt, hsphere, relative_peak_threshold=0, min_separation_angle=0 ) assert_equal(angular_similarity(directions, sticks) < 4, True) # isotropic case mevals = np.array([[0.0015, 0.0015, 0.0015]]) angles = [(0, 0)] fractions = [100.0] odf_gt, sticks, sphere = _create_mt_sim(mevals, angles, fractions, 100, None) directions, values, indices = peak_directions( odf_gt, sphere, relative_peak_threshold=0.5, min_separation_angle=25.0 ) assert_equal(len(values) > 10, True) def test_difference_with_minmax(): # Show difference with and without minmax normalization # we create an odf here with 3 main peaks, 1 small sharp unwanted peak # (noise) and an isotropic compartment. mevals = np.array( [ [0.0015, 0.0003, 0.0003], [0.0015, 0.0003, 0.0003], [0.0015, 0.0003, 0.0003], [0.0015, 0.00005, 0.00005], [0.0015, 0.0015, 0.0015], ] ) angles = [(0, 0), (45, 0), (90, 0), (90, 90), (0, 0)] fractions = [20, 20, 10, 1, 100 - 20 - 20 - 10 - 1] odf_gt, sticks, sphere = _create_mt_sim(mevals, angles, fractions, 100, None) # We will show that when the minmax normalization is used we can remove # the noisy peak using a lower threshold. odf_gt_minmax = (odf_gt - odf_gt.min()) / (odf_gt.max() - odf_gt.min()) _, values_1, _ = peak_directions( odf_gt, sphere, relative_peak_threshold=0.30, min_separation_angle=25.0 ) assert_equal(len(values_1), 3) _, values_2, _ = peak_directions( odf_gt_minmax, sphere, relative_peak_threshold=0.30, min_separation_angle=25.0 ) assert_equal(len(values_2), 3) # Setting the smallest value of the odf to zero is like running # peak_directions without the odf_min correction. odf_gt[odf_gt.argmin()] = 0.0 _, values_3, _ = peak_directions( odf_gt, sphere, relative_peak_threshold=0.30, min_separation_angle=25.0, ) assert_equal(len(values_3), 4) # we show here that to actually get that noisy peak out we need to # increase the peak threshold considerably directions, values_4, indices = peak_directions( odf_gt, sphere, relative_peak_threshold=0.60, min_separation_angle=25.0, ) assert_equal(len(values_4), 3) assert_almost_equal(values_1, values_4) @set_random_number_generator() def test_degenerate_cases(rng): sphere = default_sphere # completely isotropic and degenerate case odf = np.zeros(sphere.vertices.shape[0]) directions, values, indices = peak_directions( odf, sphere, relative_peak_threshold=0.5, min_separation_angle=25 ) print(directions, values, indices) assert_equal(len(values), 0) assert_equal(len(directions), 0) assert_equal(len(indices), 0) odf = np.zeros(sphere.vertices.shape[0]) odf[0] = 0.020 odf[1] = 0.018 directions, values, indices = peak_directions( odf, sphere, relative_peak_threshold=0.5, min_separation_angle=25 ) print(directions, values, indices) assert_equal(values[0], 0.02) odf = -np.ones(sphere.vertices.shape[0]) directions, values, indices = peak_directions( odf, sphere, relative_peak_threshold=0.5, min_separation_angle=25 ) print(directions, values, indices) assert_equal(len(values), 0) odf = np.zeros(sphere.vertices.shape[0]) odf[0] = 0.020 odf[1] = 0.018 odf[2] = -0.018 directions, values, indices = peak_directions( odf, sphere, relative_peak_threshold=0.5, min_separation_angle=25 ) assert_equal(values[0], 0.02) odf = np.ones(sphere.vertices.shape[0]) odf += 0.1 * rng.random(odf.shape[0]) directions, values, indices = peak_directions( odf, sphere, relative_peak_threshold=0.5, min_separation_angle=25 ) assert_(all(values > values[0] * 0.5)) assert_array_equal(values, odf[indices]) odf = 
np.ones(sphere.vertices.shape[0]) odf[1:] = np.finfo(float).eps * rng.random(odf.shape[0] - 1) directions, values, indices = peak_directions( odf, sphere, relative_peak_threshold=0.5, min_separation_angle=25 ) assert_equal(values[0], 1) assert_equal(len(values), 1) def test_peaksFromModel(): data = np.zeros((10, 2)) for sphere in [_sphere, get_sphere(name="symmetric642")]: # Test basic case model = SimpleOdfModel(_gtab) _odf = (sphere.vertices * [1, 2, 3]).sum(-1) odf_argmax = _odf.argmax() with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) pam = peaks_from_model(model, data, sphere, 0.5, 45, normalize_peaks=True) assert_array_equal(pam.gfa, gfa(_odf)) assert_array_equal(pam.peak_values[:, 0], 1.0) assert_array_equal(pam.peak_values[:, 1:], 0.0) mn, mx = _odf.min(), _odf.max() assert_array_equal(pam.qa[:, 0], (mx - mn) / mx) assert_array_equal(pam.qa[:, 1:], 0.0) assert_array_equal(pam.peak_indices[:, 0], odf_argmax) assert_array_equal(pam.peak_indices[:, 1:], -1) # Test that odf array matches and is right shape with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) pam = peaks_from_model(model, data, sphere, 0.5, 45, return_odf=True) expected_shape = (len(data), len(_odf)) assert_equal(pam.odf.shape, expected_shape) assert_((_odf == pam.odf).all()) assert_array_equal(pam.peak_values[:, 0], _odf.max()) # Test mask mask = (np.arange(10) % 2) == 1 with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) pam = peaks_from_model( model, data, sphere, 0.5, 45, mask=mask, normalize_peaks=True ) assert_array_equal(pam.gfa[~mask], 0) assert_array_equal(pam.qa[~mask], 0) assert_array_equal(pam.peak_values[~mask], 0) assert_array_equal(pam.peak_indices[~mask], -1) assert_array_equal(pam.gfa[mask], gfa(_odf)) assert_array_equal(pam.peak_values[mask, 0], 1.0) assert_array_equal(pam.peak_values[mask, 1:], 0.0) mn, mx = _odf.min(), _odf.max() assert_array_equal(pam.qa[mask, 0], (mx - mn) / mx) assert_array_equal(pam.qa[mask, 1:], 0.0) assert_array_equal(pam.peak_indices[mask, 0], odf_argmax) assert_array_equal(pam.peak_indices[mask, 1:], -1) # Test serialization and deserialization: for normalize_peaks in [True, False]: for return_odf in [True, False]: for return_sh in [True, False]: with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) pam = peaks_from_model( model, data, sphere, 0.5, 45, normalize_peaks=normalize_peaks, return_odf=return_odf, return_sh=return_sh, ) b = BytesIO() pickle.dump(pam, b) b.seek(0) new_pam = pickle.load(b) b.close() for attr in [ "peak_dirs", "peak_values", "peak_indices", "gfa", "qa", "shm_coeff", "B", "odf", ]: assert_array_equal(getattr(pam, attr), getattr(new_pam, attr)) assert_array_equal(pam.sphere.vertices, new_pam.sphere.vertices) def test_peaksFromModelParallel(): SNR = 100 S0 = 100 _, fbvals, fbvecs = get_fnames(name="small_64D") bvals, bvecs = read_bvals_bvecs(fbvals, fbvecs) gtab = gradient_table(bvals, bvecs=bvecs) mevals = np.array(([0.0015, 0.0003, 0.0003], [0.0015, 0.0003, 0.0003])) data, _ = multi_tensor( gtab, mevals, S0=S0, angles=[(0, 0), (60, 0)], fractions=[50, 50], snr=SNR ) for sphere in [_sphere, default_sphere]: # test equality with/without multiprocessing model = SimpleOdfModel(gtab) with warnings.catch_warnings(): 
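# Silence only the known PendingDeprecationWarning about the legacy descoteaux07 basis raised by the internal SH fit; any other warning would still surface.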
warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) pam_multi = peaks_from_model( model, data, sphere, relative_peak_threshold=0.5, min_separation_angle=45, normalize_peaks=True, return_odf=True, return_sh=True, parallel=True, ) pam_single = peaks_from_model( model, data, sphere, relative_peak_threshold=0.5, min_separation_angle=45, normalize_peaks=True, return_odf=True, return_sh=True, parallel=False, ) pam_multi_inv1 = peaks_from_model( model, data, sphere, relative_peak_threshold=0.5, min_separation_angle=45, normalize_peaks=True, return_odf=True, return_sh=True, parallel=True, num_processes=-1, ) pam_multi_inv2 = peaks_from_model( model, data, sphere, relative_peak_threshold=0.5, min_separation_angle=45, normalize_peaks=True, return_odf=True, return_sh=True, parallel=True, num_processes=-2, ) for pam in [pam_multi, pam_multi_inv1, pam_multi_inv2]: assert_equal(pam.gfa.dtype, pam_single.gfa.dtype) assert_equal(pam.gfa.shape, pam_single.gfa.shape) assert_array_almost_equal(pam.gfa, pam_single.gfa) assert_equal(pam.qa.dtype, pam_single.qa.dtype) assert_equal(pam.qa.shape, pam_single.qa.shape) assert_array_almost_equal(pam.qa, pam_single.qa) assert_equal(pam.peak_values.dtype, pam_single.peak_values.dtype) assert_equal(pam.peak_values.shape, pam_single.peak_values.shape) assert_array_almost_equal(pam.peak_values, pam_single.peak_values) assert_equal(pam.peak_indices.dtype, pam_single.peak_indices.dtype) assert_equal(pam.peak_indices.shape, pam_single.peak_indices.shape) assert_array_equal(pam.peak_indices, pam_single.peak_indices) assert_equal(pam.peak_dirs.dtype, pam_single.peak_dirs.dtype) assert_equal(pam.peak_dirs.shape, pam_single.peak_dirs.shape) assert_array_almost_equal(pam.peak_dirs, pam_single.peak_dirs) assert_equal(pam.shm_coeff.dtype, pam_single.shm_coeff.dtype) assert_equal(pam.shm_coeff.shape, pam_single.shm_coeff.shape) assert_array_almost_equal(pam.shm_coeff, pam_single.shm_coeff) assert_equal(pam.odf.dtype, pam_single.odf.dtype) assert_equal(pam.odf.shape, pam_single.odf.shape) assert_array_almost_equal(pam.odf, pam_single.odf) def test_peaks_shm_coeff(): SNR = 100 S0 = 100 _, fbvals, fbvecs = get_fnames(name="small_64D") sphere = default_sphere bvals, bvecs = read_bvals_bvecs(fbvals, fbvecs) gtab = gradient_table(bvals, bvecs=bvecs) mevals = np.array(([0.0015, 0.0003, 0.0003], [0.0015, 0.0003, 0.0003])) data, _ = multi_tensor( gtab, mevals, S0=S0, angles=[(0, 0), (60, 0)], fractions=[50, 50], snr=SNR ) with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) model = CsaOdfModel(gtab, 4) pam = peaks_from_model( model, data[None, :], sphere, 0.5, 45, return_odf=True, return_sh=True ) # Test that spherical harmonic coefficients return back correctly odf2 = np.dot(pam.shm_coeff, pam.B) assert_array_almost_equal(pam.odf, odf2) assert_equal(pam.shm_coeff.shape[-1], 45) with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) pam = peaks_from_model( model, data[None, :], sphere, 0.5, 45, return_odf=True, return_sh=False ) assert_equal(pam.shm_coeff, None) with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=tournier07_legacy_msg, category=PendingDeprecationWarning ) pam = peaks_from_model( model, data[None, :], sphere, 0.5, 45, return_odf=True, return_sh=True, sh_basis_type="tournier07", ) odf2 = np.dot(pam.shm_coeff, pam.B) 
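# pam.B is the matrix that maps SH coefficients back to samples on the sphere, so shm_coeff dotted with B should reproduce the stored ODF for the tournier07 basis as well.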
assert_array_almost_equal(pam.odf, odf2) @set_random_number_generator() def test_reshape_peaks_for_visualization(rng): data1 = rng.standard_normal((10, 5, 3)).astype("float32") data2 = rng.standard_normal((10, 2, 5, 3)).astype("float32") data3 = rng.standard_normal((10, 2, 12, 5, 3)).astype("float32") data1_reshape = reshape_peaks_for_visualization(data1) data2_reshape = reshape_peaks_for_visualization(data2) data3_reshape = reshape_peaks_for_visualization(data3) assert_array_equal(data1_reshape.shape, (10, 15)) assert_array_equal(data2_reshape.shape, (10, 2, 15)) assert_array_equal(data3_reshape.shape, (10, 2, 12, 15)) assert_array_equal(data1_reshape.reshape(10, 5, 3), data1) assert_array_equal(data2_reshape.reshape(10, 2, 5, 3), data2) assert_array_equal(data3_reshape.reshape(10, 2, 12, 5, 3), data3) def test_peaks_from_positions(): thresh = 0.5 min_angle = 25 npeaks = 5 _, fbvals, fbvecs = get_fnames(name="small_64D") bvals, bvecs = read_bvals_bvecs(fbvals, fbvecs) gtab = gradient_table(bvals, bvecs=bvecs) mevals = np.array(([0.0015, 0.0003, 0.0003], [0.0015, 0.0003, 0.0003])) voxels = [] for _ in range(27): v, _ = multi_tensor( gtab, mevals, S0=100, angles=[(0, 0), (randint(0, 90), randint(0, 90))], fractions=[50, 50], snr=10, ) voxels.append(v) data = np.array(voxels).reshape((3, 3, 3, -1)) with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) model = CsaOdfModel(gtab, 8) pam = peaks_from_model( model, data, default_sphere, return_odf=True, return_sh=True, legacy=True, npeaks=npeaks, relative_peak_threshold=thresh, min_separation_angle=min_angle, ) mask = np.ones((3, 3, 3)) affine = np.eye(4) positions = seeds_from_mask(mask, affine) # test the peaks at each voxel using int coordinates peaks = peaks_from_positions( positions, pam.odf, default_sphere, affine, relative_peak_threshold=thresh, min_separation_angle=min_angle, npeaks=npeaks, ) peaks = np.array(peaks).reshape((3, 3, 3, 5, 3)) assert_array_almost_equal(pam.peak_dirs, peaks) # test the peaks at each voxel using float coordinates peaks = peaks_from_positions( positions.astype(float), pam.odf, default_sphere, affine, relative_peak_threshold=thresh, min_separation_angle=min_angle, npeaks=npeaks, ) peaks = np.array(peaks).reshape((3, 3, 3, 5, 3)) assert_array_almost_equal(pam.peak_dirs, peaks) # test the peaks at each voxel using double coordinates peaks = peaks_from_positions( positions.astype(np.float64), pam.odf, default_sphere, affine, relative_peak_threshold=thresh, min_separation_angle=min_angle, npeaks=npeaks, ) peaks = np.array(peaks).reshape((3, 3, 3, 5, 3)) assert_array_almost_equal(pam.peak_dirs, peaks) # test the peaks at each voxel using SimplePmfGen pmf_gen = SimplePmfGen(pam.odf, default_sphere) peaks = peaks_from_positions( positions, odfs=None, sphere=None, affine=affine, pmf_gen=pmf_gen, relative_peak_threshold=thresh, min_separation_angle=min_angle, npeaks=npeaks, ) peaks = np.array(peaks).reshape((3, 3, 3, 5, 3)) assert_array_almost_equal(pam.peak_dirs, peaks, decimal=3) # test the peaks at each voxel using SHCoeffPmfGen with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) pmf_gen = SHCoeffPmfGen( pam.shm_coeff, default_sphere, basis_type="descoteaux07" ) peaks = peaks_from_positions( positions, odfs=None, sphere=None, affine=affine, pmf_gen=pmf_gen, relative_peak_threshold=thresh, min_separation_angle=min_angle, npeaks=npeaks, ) peaks = 
np.array(peaks).reshape((3, 3, 3, 5, 3)) assert_array_almost_equal(pam.peak_dirs, peaks, decimal=3) # test the peaks with a full sphere with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) pam_full_sphere = peaks_from_model( model, data, get_sphere(name="symmetric362"), return_odf=True, return_sh=True, legacy=True, npeaks=npeaks, relative_peak_threshold=thresh, min_separation_angle=min_angle, ) pmf_gen = SimplePmfGen(pam_full_sphere.odf, get_sphere(name="symmetric362")) peaks = peaks_from_positions( positions, odfs=None, sphere=None, affine=affine, pmf_gen=pmf_gen, relative_peak_threshold=thresh, min_separation_angle=min_angle, npeaks=npeaks, ) peaks = np.array(peaks).reshape((3, 3, 3, 5, 3)) assert_array_almost_equal(pam_full_sphere.peak_dirs, peaks, decimal=3) # test the peaks extraction at the mid point between 2 voxels odfs = [pam.odf[0, 0, 0], pam.odf[0, 0, 0]] odfs = np.array(odfs).reshape((2, 1, 1, -1)) positions = np.array([[0.0, 0, 0], [0.5, 0, 0], [1.0, 0, 0]]) peaks = peaks_from_positions( positions, odfs, default_sphere, affine, relative_peak_threshold=thresh, min_separation_angle=min_angle, npeaks=npeaks, ) assert_array_equal(peaks[0], peaks[1]) assert_array_equal(peaks[0], peaks[2]) # test with none identity affine positions = seeds_from_mask(mask, affine) peaks_eye = peaks_from_positions( positions, pam.odf, default_sphere, affine, relative_peak_threshold=thresh, min_separation_angle=min_angle, npeaks=npeaks, ) affine[:3, :3] = np.random.random((3, 3)) positions = seeds_from_mask(mask, affine) peaks = peaks_from_positions( positions, pam.odf, default_sphere, affine, relative_peak_threshold=thresh, min_separation_angle=min_angle, npeaks=npeaks, ) assert_array_almost_equal(peaks_eye, peaks) # test with invalid seed coordinates affine = np.eye(4) positions = np.array([[0, -1, 0], [0.1, -0.1, 0.1]]) assert_raises( IndexError, peaks_from_positions, positions, pam.odf, default_sphere, affine, ) affine = np.eye(4) * 10 positions = np.array([[1, -1, 1]]) positions = np.dot(positions, affine[:3, :3].T) positions += affine[:3, 3] assert_raises( IndexError, peaks_from_positions, positions, pam.odf, default_sphere, affine, ) # test a warning is thrown when odfs and pmf_gen arguments are used assert_warns( UserWarning, peaks_from_positions, positions, pam.odf, default_sphere, affine, pmf_gen=pmf_gen, relative_peak_threshold=thresh, min_separation_angle=min_angle, npeaks=npeaks, ) dipy-1.11.0/dipy/direction/tests/test_pmf.py000066400000000000000000000063441476546756600210620ustar00rootroot00000000000000import warnings import numpy as np import numpy.testing as npt from dipy.core.sphere import HemiSphere, unit_octahedron from dipy.data import default_sphere, get_sphere from dipy.direction.pmf import SHCoeffPmfGen, SimplePmfGen from dipy.reconst import shm from dipy.testing.decorators import set_random_number_generator response = (np.array([1.5e3, 0.3e3, 0.3e3]), 1) @set_random_number_generator() def test_pmf_val(rng): sphere = get_sphere(name="symmetric724") with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=shm.descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) pmfgen = SHCoeffPmfGen(rng.random([2, 2, 2, 28]), sphere, None) point = np.array([1, 1, 1], dtype="float") out = np.ones(len(sphere.vertices)) for idx in [0, 5, 15, -1]: pmf = pmfgen.get_pmf(point) pmf_2 = pmfgen.get_pmf(point, out) npt.assert_array_almost_equal(pmf, out) npt.assert_array_almost_equal(pmf, pmf_2) 
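# The two asserts above check that get_pmf with a preallocated `out` buffer writes the result in place and agrees with the allocating call.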
# Create a direction vector close to the vertex idx xyz = sphere.vertices[idx] + rng.random([3]) / 100 pmf_idx = pmfgen.get_pmf_value(point, xyz) # Test that the pmf sampled for the direction xyz is correct npt.assert_array_almost_equal(pmf[idx], pmf_idx) def test_pmf_from_sh(): sphere = HemiSphere.from_sphere(unit_octahedron) with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=shm.descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) pmfgen = SHCoeffPmfGen(np.ones([2, 2, 2, 28]), sphere, None) out = np.zeros(len(sphere.vertices)) # Test that the pmf is greater than 0 for a valid point pmf = pmfgen.get_pmf(np.array([0, 0, 0], dtype="float")) out = pmfgen.get_pmf(np.array([0, 0, 0], dtype="float"), out) npt.assert_equal(np.sum(pmf) > 0, True) npt.assert_array_almost_equal(pmf, out) # Test that the pmf is 0 for invalid Points npt.assert_array_equal( pmfgen.get_pmf(np.array([-1, 0, 0], dtype="float")), np.zeros(len(sphere.vertices)), ) npt.assert_array_equal( pmfgen.get_pmf(np.array([0, 0, 10], dtype="float")), np.zeros(len(sphere.vertices)), ) def test_pmf_from_array(): sphere = HemiSphere.from_sphere(unit_octahedron) pmfgen = SimplePmfGen(np.ones([2, 2, 2, len(sphere.vertices)]), sphere) out = np.zeros(len(sphere.vertices)) # Test that the pmf is greater than 0 for a valid point pmf = pmfgen.get_pmf(np.array([0, 0, 0], dtype="float")) out = pmfgen.get_pmf(np.array([0, 0, 0], dtype="float"), out) npt.assert_equal(np.sum(pmf) > 0, True) npt.assert_array_almost_equal(pmf, out) # Test that the pmf is 0 for invalid Points npt.assert_array_equal( pmfgen.get_pmf(np.array([-1, 0, 0], dtype=float)), np.zeros(len(sphere.vertices)), ) npt.assert_array_equal( pmfgen.get_pmf(np.array([0, 0, 10], dtype=float)), np.zeros(len(sphere.vertices)), ) # Test ValueError for non matching pmf and sphere npt.assert_raises( ValueError, lambda: SimplePmfGen(np.ones([2, 2, 2, len(sphere.vertices)]), default_sphere), ) dipy-1.11.0/dipy/direction/tests/test_prob_direction_getter.py000066400000000000000000000100021476546756600246360ustar00rootroot00000000000000import warnings import numpy as np import numpy.testing as npt from dipy.core.sphere import unit_octahedron from dipy.direction import ( DeterministicMaximumDirectionGetter, ProbabilisticDirectionGetter, ) from dipy.reconst.shm import ( SphHarmFit, SphHarmModel, descoteaux07_legacy_msg, tournier07_legacy_msg, ) def test_ProbabilisticDirectionGetter(): # Test the constructors and errors of the ProbabilisticDirectionGetter class SillyModel(SphHarmModel): sh_order = 4 def fit(self, data, mask=None): coeff = np.zeros(data.shape[:-1] + (15,)) return SphHarmFit(self, coeff, mask=None) model = SillyModel(gtab=None) data = np.zeros((3, 3, 3, 7)) # Test if the tracking works on different dtype of the same data. 
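# SillyModel produces all-zero SH coefficients, so every get_direction call below is expected to fail to find a direction and return status 1 (a reading of the asserts: 0 would mean success, nonzero failure).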
for dtype in [np.float32, np.float64]: fit = model.fit(data.astype(dtype)) # Sample point and direction point = np.zeros(3) direction = unit_octahedron.vertices[0].copy() # make a dg from a fit with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) dg = ProbabilisticDirectionGetter.from_shcoeff( fit.shm_coeff, 90, unit_octahedron ) state = dg.get_direction(point, direction) npt.assert_equal(state, 1) # make a dg from a fit (using sh_to_pmf=True) with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) dg = ProbabilisticDirectionGetter.from_shcoeff( fit.shm_coeff, 90, unit_octahedron, sh_to_pmf=True ) state = dg.get_direction(point, direction) npt.assert_equal(state, 1) # Make a dg from a pmf N = unit_octahedron.theta.shape[0] pmf = np.zeros((3, 3, 3, N)) dg = ProbabilisticDirectionGetter.from_pmf(pmf, 90, unit_octahedron) state = dg.get_direction(point, direction) npt.assert_equal(state, 1) # pmf shape must match sphere bad_pmf = pmf[..., 1:] npt.assert_raises( ValueError, ProbabilisticDirectionGetter.from_pmf, bad_pmf, 90, unit_octahedron, ) # pmf must have 4 dimensions bad_pmf = pmf[0, ...] npt.assert_raises( ValueError, ProbabilisticDirectionGetter.from_pmf, bad_pmf, 90, unit_octahedron, ) # Check basis_type keyword with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=tournier07_legacy_msg, category=PendingDeprecationWarning, ) dg = ProbabilisticDirectionGetter.from_shcoeff( fit.shm_coeff, 90, unit_octahedron, basis_type="tournier07" ) npt.assert_raises( ValueError, ProbabilisticDirectionGetter.from_shcoeff, fit.shm_coeff, 90, unit_octahedron, basis_type="not a basis", ) def test_DeterministicMaximumDirectionGetter(): # Test the DeterministicMaximumDirectionGetter direction = unit_octahedron.vertices[-1].copy() point = np.zeros(3) N = unit_octahedron.theta.shape[0] # No valid direction pmf = np.zeros((3, 3, 3, N)) dg = DeterministicMaximumDirectionGetter.from_pmf(pmf, 90, unit_octahedron) state = dg.get_direction(point, direction) npt.assert_equal(state, 1) # Test BF #1566 - bad condition in DeterministicMaximumDirectionGetter pmf = np.zeros((3, 3, 3, N)) pmf[0, 0, 0, 0] = 1 dg = DeterministicMaximumDirectionGetter.from_pmf(pmf, 0, unit_octahedron) state = dg.get_direction(point, direction) npt.assert_equal(state, 1) dipy-1.11.0/dipy/direction/tests/test_ptt_direction_getter.py000066400000000000000000000131311476546756600245110ustar00rootroot00000000000000"""Test file for Parallel Transport Tracking Algorithm.""" import warnings import numpy as np import numpy.testing as npt from dipy.core.sphere import unit_octahedron from dipy.data import default_sphere, get_fnames from dipy.direction import PTTDirectionGetter from dipy.io.image import load_nifti from dipy.reconst.shm import ( SphHarmFit, SphHarmModel, descoteaux07_legacy_msg, sh_to_sf, tournier07_legacy_msg, ) from dipy.tracking.local_tracking import LocalTracking from dipy.tracking.stopping_criterion import BinaryStoppingCriterion from dipy.tracking.streamline import Streamlines def test_ptt_tracking(): # Test PTT direction getter generate 100 streamlines with more than 1 pts. 
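# Hedged reading of the probe_* keywords varied below (not an API spec): probe_count and probe_radius set how many parallel probes are cast and how far from the curve, while probe_quality and probe_length set the sampling density and extent of each probe.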
fod_fname, seed_coordinates_fname, _ = get_fnames(name="ptt_minimal_dataset") fod, affine = load_nifti(fod_fname) seed_coordinates = np.loadtxt(seed_coordinates_fname)[:10, :] sf = sh_to_sf( fod, default_sphere, basis_type="tournier07", sh_order_max=8, legacy=False ) sf[sf < 0] = 0 sc = BinaryStoppingCriterion(np.ones(fod.shape[:3])) dg_default = PTTDirectionGetter.from_pmf(sf, sphere=default_sphere, max_angle=20) dg_count2 = PTTDirectionGetter.from_pmf( sf, sphere=default_sphere, max_angle=20, probe_count=2, probe_radius=0.2 ) dg_quality10 = PTTDirectionGetter.from_pmf( sf, sphere=default_sphere, max_angle=20, probe_quality=10 ) dg_length2 = PTTDirectionGetter.from_pmf( sf, sphere=default_sphere, max_angle=20, probe_length=2 ) for dg in [dg_default, dg_count2, dg_quality10, dg_length2]: streamline_generator = LocalTracking( direction_getter=dg, step_size=0.2, stopping_criterion=sc, seeds=seed_coordinates, affine=affine, ) streamlines = Streamlines(streamline_generator) npt.assert_equal(len(streamlines), 10) npt.assert_(np.all([len(s) > 1 for s in streamlines])) # Test with zeros pmf dg = PTTDirectionGetter.from_pmf( np.zeros(sf.shape), sphere=default_sphere, max_angle=20 ) streamline_generator = LocalTracking( direction_getter=dg, step_size=0.2, stopping_criterion=sc, seeds=seed_coordinates, affine=affine, ) streamlines = Streamlines(streamline_generator) npt.assert_equal(len(streamlines), 10) npt.assert_(np.all([len(s) == 1 for s in streamlines])) # Test with maximum length reach dg = PTTDirectionGetter.from_pmf(sf, sphere=default_sphere, max_angle=20) streamline_generator = LocalTracking( direction_getter=dg, step_size=0.2, stopping_criterion=sc, seeds=seed_coordinates, affine=affine, maxlen=1, minlen=1, ) streams = Streamlines(streamline_generator) npt.assert_almost_equal( np.linalg.norm(streams[0][0] - streams[0][1]), 0.2, decimal=1 ) npt.assert_equal(len(streams), 10) npt.assert_(np.all([len(s) <= 3 for s in streams])) streamline_generator = LocalTracking( direction_getter=dg, step_size=0.2, stopping_criterion=sc, seeds=seed_coordinates, affine=affine, maxlen=2, fixedstep=False, ) # Check fixedstep ValueError npt.assert_raises(ValueError, Streamlines, streamline_generator) def test_PTTDirectionGetter(): # Test the constructors and errors of the PTTDirectionGetter class SillyModel(SphHarmModel): def fit(self, data, mask=None): coeff = np.zeros(data.shape[:-1] + (15,)) return SphHarmFit(self, coeff, mask=None) silly_model = SillyModel(gtab=None) data = np.zeros((3, 3, 3, 7)) fit = silly_model.fit(data) point = np.zeros(3) dir = unit_octahedron.vertices[0].copy() # Make ptt_dg from shm_coeffs with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) dg = PTTDirectionGetter.from_shcoeff(fit.shm_coeff, 90, unit_octahedron) npt.assert_equal(dg.get_direction(point, dir), 1) # Make ptt_dg from pmf pmf = np.zeros((3, 3, 3, unit_octahedron.theta.shape[0])) dg = PTTDirectionGetter.from_pmf(pmf, 90, unit_octahedron) npt.assert_equal(dg.get_direction(point, dir), 1) with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=tournier07_legacy_msg, category=PendingDeprecationWarning ) # Check probe_length ValueError npt.assert_raises( ValueError, PTTDirectionGetter.from_shcoeff, fit.shm_coeff, 90, unit_octahedron, basis_type="tournier07", probe_length=0, ) # Check probe_radius ValueError npt.assert_raises( ValueError, PTTDirectionGetter.from_shcoeff, fit.shm_coeff, 90, unit_octahedron, 
basis_type="tournier07", probe_radius=-1, ) # Check probe_quality ValueError npt.assert_raises( ValueError, PTTDirectionGetter.from_shcoeff, fit.shm_coeff, 90, unit_octahedron, basis_type="tournier07", probe_quality=1, ) # Check probe_length ValueError npt.assert_raises( ValueError, PTTDirectionGetter.from_shcoeff, fit.shm_coeff, 90, unit_octahedron, basis_type="tournier07", probe_count=0, ) dipy-1.11.0/dipy/io/000077500000000000000000000000001476546756600141455ustar00rootroot00000000000000dipy-1.11.0/dipy/io/__init__.py000066400000000000000000000003451476546756600162600ustar00rootroot00000000000000# init for io routines from . import utils from .dpy import Dpy from .gradients import read_bvals_bvecs from .pickles import load_pickle, save_pickle __all__ = ["read_bvals_bvecs", "Dpy", "save_pickle", "load_pickle", "utils"] dipy-1.11.0/dipy/io/dpy.py000066400000000000000000000105031476546756600153120ustar00rootroot00000000000000"""A class for handling large tractography datasets. It is built using the h5py which in turn implement key features of the HDF5 (hierarchical data format) API [1]_. References ---------- .. [1] http://www.hdfgroup.org/HDF5/doc/H5.intro.html """ import h5py from nibabel.streamlines import ArraySequence as Streamlines import numpy as np from dipy.testing.decorators import warning_for_keywords # Make sure not to carry across setup module from * import __all__ = ["Dpy"] class Dpy: @warning_for_keywords() def __init__(self, fname, *, mode="r", compression=0): """Advanced storage system for tractography based on HDF5 Parameters ---------- fname : str Full filename mode : str, optional Use 'r' to read, 'w' to write, and 'r+' to read and write (only if file already exists). compression : int, optional 0 no compression to 9 maximum compression. Examples -------- >>> import os >>> from tempfile import mkstemp #temp file >>> from dipy.io.dpy import Dpy >>> def dpy_example(): ... fd,fname = mkstemp() ... fname += '.dpy'#add correct extension ... dpw = Dpy(fname, mode='w') ... A=np.ones((5,3)) ... B=2*A.copy() ... C=3*A.copy() ... dpw.write_track(A) ... dpw.write_track(B) ... dpw.write_track(C) ... dpw.close() ... dpr = Dpy(fname, mode='r') ... dpr.read_track() ... dpr.read_track() ... dpr.read_tracksi([0, 1, 2, 0, 0, 2]) ... dpr.close() ... 
os.remove(fname) #delete file from disk >>> dpy_example() """ self.mode = mode self.f = h5py.File(fname, mode=self.mode) self.compression = compression if self.mode == "w": self.f.attrs["version"] = "0.0.1" self.streamlines = self.f.create_group("streamlines") self.tracks = self.streamlines.create_dataset( "tracks", shape=(0, 3), dtype="f4", maxshape=(None, 3), chunks=True ) self.offsets = self.streamlines.create_dataset( "offsets", shape=(1,), dtype="i8", maxshape=(None,), chunks=True ) self.curr_pos = 0 self.offsets[:] = np.array([self.curr_pos]).astype(np.int64) if self.mode == "r": self.tracks = self.f["streamlines"]["tracks"] self.offsets = self.f["streamlines"]["offsets"] self.track_no = len(self.offsets) - 1 self.offs_pos = 0 def version(self): return self.f.attrs["version"] def write_track(self, track): """write on track each time""" self.tracks.resize(self.tracks.shape[0] + track.shape[0], axis=0) self.tracks[-track.shape[0] :] = track.astype(np.float32) self.curr_pos += track.shape[0] self.offsets.resize(self.offsets.shape[0] + 1, axis=0) self.offsets[-1] = self.curr_pos def write_tracks(self, tracks): """write many tracks together""" self.tracks.resize(self.tracks.shape[0] + tracks._data.shape[0], axis=0) self.tracks[-tracks._data.shape[0] :] = tracks._data self.offsets.resize(self.offsets.shape[0] + tracks._offsets.shape[0], axis=0) self.offsets[-tracks._offsets.shape[0] :] = ( self.offsets[-tracks._offsets.shape[0] - 1] + tracks._offsets + tracks._lengths ) def read_track(self): """read one track each time""" off0, off1 = self.offsets[self.offs_pos : self.offs_pos + 2] self.offs_pos += 1 return self.tracks[off0:off1] def read_tracksi(self, indices): """read tracks with specific indices""" tracks = Streamlines() for i in indices: off0, off1 = self.offsets[i : i + 2] tracks.append(self.tracks[off0:off1]) return tracks def read_tracks(self): """read the entire tractography""" offsets = self.offsets[:] TR = self.tracks[:] tracks = Streamlines() for i in range(len(offsets) - 1): off0, off1 = offsets[i : i + 2] tracks.append(TR[off0:off1]) return tracks def close(self): self.f.close() dipy-1.11.0/dipy/io/gradients.py000066400000000000000000000053101476546756600164760ustar00rootroot00000000000000import io from os.path import splitext import re import warnings import numpy as np def read_bvals_bvecs(fbvals, fbvecs): """Read b-values and b-vectors from disk. Parameters ---------- fbvals : str Full path to file with b-values. None to not read bvals. fbvecs : str Full path of file with b-vectors. None to not read bvecs. Returns ------- bvals : array, (N,) or None bvecs : array, (N, 3) or None Notes ----- Files can be either '.bvals'/'.bvecs' or '.txt' or '.npy' (containing arrays stored with the appropriate values). 
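Examples -------- >>> # A minimal sketch; "dwi.bval" and "dwi.bvec" are hypothetical paths. >>> # bvals, bvecs = read_bvals_bvecs("dwi.bval", "dwi.bvec")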
""" # Loop over the provided inputs, reading each one in turn and adding them # to this list: vals = [] for this_fname in [fbvals, fbvecs]: # If the input was None or empty string, we don't read anything and # move on: if this_fname is None or not this_fname: vals.append(None) continue if not isinstance(this_fname, str): raise ValueError("String with full path to file is required") base, ext = splitext(this_fname) if ext in [ ".bvals", ".bval", ".bvecs", ".bvec", ".txt", ".eddy_rotated_bvecs", "", ]: with open(this_fname, "r") as f: content = f.read() munged_content = io.StringIO(re.sub(r"(\t|,)", " ", content)) vals.append(np.squeeze(np.loadtxt(munged_content))) elif ext == ".npy": vals.append(np.squeeze(np.load(this_fname))) else: e_s = f"File type {ext} is not recognized" raise ValueError(e_s) # Once out of the loop, unpack them: bvals, bvecs = vals[0], vals[1] # If bvecs is None, you can just return now w/o making more checks: if bvecs is None: return bvals, bvecs if 3 not in bvecs.shape: raise OSError("bvec file should have three rows") if bvecs.ndim != 2: bvecs = bvecs[None, ...] bvals = bvals[None, ...] msg = "Detected only 1 direction on your bvec file. For diffusion " msg += "dataset, it is recommended to have at least 3 directions." msg += "You may have problems during the reconstruction step." warnings.warn(msg, stacklevel=2) if bvecs.shape[1] != 3: bvecs = bvecs.T # If bvals is None, you don't need to check that they have the same shape: if bvals is None: return bvals, bvecs if len(bvals.shape) > 1: raise OSError("bval file should have one row") if bvals.shape[0] != bvecs.shape[0]: raise OSError("b-values and b-vectors shapes do not correspond") return bvals, bvecs dipy-1.11.0/dipy/io/image.py000066400000000000000000000103601476546756600156010ustar00rootroot00000000000000import nibabel as nib import numpy as np from packaging.version import Version from dipy.testing.decorators import warning_for_keywords @warning_for_keywords() def load_nifti_data(fname, *, as_ndarray=True): """Load only the data array from a nifti file. Parameters ---------- fname : str Full path to the file. as_ndarray: bool, optional convert nibabel ArrayProxy to a numpy.ndarray. If you want to save memory and delay this casting, just turn this option to False. Returns ------- data: np.ndarray or nib.ArrayProxy See Also -------- load_nifti """ img = nib.load(fname) return np.asanyarray(img.dataobj) if as_ndarray else img.dataobj @warning_for_keywords() def load_nifti( fname, *, return_img=False, return_voxsize=False, return_coords=False, as_ndarray=True, ): """Load data and other information from a nifti file. Parameters ---------- fname : str Full path to a nifti file. return_img : bool, optional Whether to return the nibabel nifti img object. return_voxsize: bool, optional Whether to return the nifti header zooms. return_coords : bool, optional Whether to return the nifti header aff2axcodes. as_ndarray: bool, optional convert nibabel ArrayProxy to a numpy.ndarray. If you want to save memory and delay this casting, just turn this option to False. 
Returns ------- A tuple, with (at the most, if all keyword args are set to True): (data, img.affine, img, vox_size, nib.aff2axcodes(img.affine)) See Also -------- load_nifti_data """ img = nib.load(fname) data = np.asanyarray(img.dataobj) if as_ndarray else img.dataobj vox_size = img.header.get_zooms()[:3] ret_val = [data, img.affine] if return_img: ret_val.append(img) if return_voxsize: ret_val.append(vox_size) if return_coords: ret_val.append(nib.aff2axcodes(img.affine)) return tuple(ret_val) @warning_for_keywords() def save_nifti(fname, data, affine, *, hdr=None, dtype=None): """Save a data array into a nifti file. Parameters ---------- fname : str The full path to the file to be saved. data : ndarray The array with the data to save. affine : 4x4 array The affine transform associated with the file. hdr : nifti header, optional May contain additional information to store in the file header. Returns ------- None """ NIBABEL_4_0_0_PLUS = Version(nib.__version__) >= Version("4.0.0") # See GitHub issues # * https://github.com/nipy/nibabel/issues/1046 # * https://github.com/nipy/nibabel/issues/1089 # This only applies to NIfTI because the parent Analyze formats did # not support 64-bit integer data, so `set_data_dtype(int64)` would # already fail. danger_dts = (np.dtype("int64"), np.dtype("uint64")) if ( hdr is None and dtype is None and data.dtype in danger_dts and NIBABEL_4_0_0_PLUS ): msg = f"Image data has type {data.dtype}, which may cause " msg += "incompatibilities with other tools. Indeed, Analyze formats " msg += "did not support 64-bit integer data.\n\n" msg += "To silent this, please specify the `header` or `dtype` " msg += "You could also use `np.asarray(data, dtype=np.int32)`. " msg += "This cast will make sure that you data is compatible with " msg += "other software." raise ValueError(msg) kwargs = {"dtype": dtype} if NIBABEL_4_0_0_PLUS else {} result_img = nib.Nifti1Image(data, affine, header=hdr, **kwargs) result_img.to_filename(fname) def save_qa_metric(fname, xopt, fopt): """Save Quality Assurance metrics. Parameters ---------- fname: string File name to save the metric values. xopt: numpy array The metric containing the optimal parameters for image registration. fopt: int The distance between the registered images. """ np.savetxt(fname, xopt, header="Optimal Parameter metric") with open(fname, "a") as f: f.write("# Distance after registration\n") f.write(str(fopt)) dipy-1.11.0/dipy/io/meson.build000066400000000000000000000004521476546756600163100ustar00rootroot00000000000000python_sources = [ '__init__.py', 'dpy.py', 'gradients.py', 'image.py', 'peaks.py', 'pickles.py', 'stateful_tractogram.py', 'streamline.py', 'surface.py', 'utils.py', 'vtk.py', ] py3.install_sources( python_sources, pure: false, subdir: 'dipy/io' ) subdir('tests')dipy-1.11.0/dipy/io/peaks.py000066400000000000000000000356121476546756600156310ustar00rootroot00000000000000import os import h5py import numpy as np from dipy.core.sphere import Sphere from dipy.direction.peaks import PeaksAndMetrics, reshape_peaks_for_visualization from dipy.io.image import save_nifti from dipy.reconst.dti import quantize_evecs from dipy.testing.decorators import warning_for_keywords from dipy.utils.deprecator import deprecate_with_version def _safe_save(group, array, name): """Safe saving of arrays with specific names. 
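Creates a chunked HDF5 dataset called ``name`` inside ``group`` and fills it with ``array``; arrays that are None are silently skipped.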
Parameters ---------- group : HDF5 group array : array name : string """ if array is not None: ds = group.create_dataset( name, shape=array.shape, dtype=array.dtype, chunks=True ) ds[:] = array @deprecate_with_version( "dipy.io.peaks.load_peaks is deprecated, Please use" "dipy.io.peaks.load_pam instead", since="1.10", until="1.12", ) @warning_for_keywords() def load_peaks(fname, *, verbose=False): """Load a PeaksAndMetrics HDF5 file (PAM5). Parameters ---------- fname : string Filename of PAM5 file. verbose : bool Print summary information about the loaded file. Returns ------- pam : PeaksAndMetrics object """ return load_pam(fname=fname, verbose=verbose) def load_pam(fname, *, verbose=False): """Load a PeaksAndMetrics HDF5 file (PAM5). Parameters ---------- fname : string Filename of PAM5 file. verbose : bool, optional Print summary information about the loaded file. Returns ------- pam : PeaksAndMetrics object Object holding peaks information and metrics. """ if os.path.splitext(fname)[1].lower() != ".pam5": raise IOError("This function supports only PAM5 (HDF5) files") f = h5py.File(fname, "r") pam = PeaksAndMetrics() pamh = f["pam"] version = f.attrs["version"] if version != "0.0.1": raise OSError(f"Incorrect PAM5 file version {version}") peak_dirs = pamh["peak_dirs"][:] peak_values = pamh["peak_values"][:] peak_indices = pamh["peak_indices"][:] sphere_vertices = pamh["sphere_vertices"][:] pam.affine = pamh["affine"][:] if "affine" in pamh else None pam.peak_dirs = peak_dirs pam.peak_values = peak_values pam.peak_indices = peak_indices pam.shm_coeff = pamh["shm_coeff"][:] if "shm_coeff" in pamh else None pam.sphere = Sphere(xyz=sphere_vertices) pam.B = pamh["B"][:] if "B" in pamh else None pam.total_weight = pamh["total_weight"][:][0] if "total_weight" in pamh else None pam.ang_thr = pamh["ang_thr"][:][0] if "ang_thr" in pamh else None pam.gfa = pamh["gfa"][:] if "gfa" in pamh else None pam.qa = pamh["qa"][:] if "qa" in pamh else None pam.odf = pamh["odf"][:] if "odf" in pamh else None f.close() if verbose: print("PAM5 version") print(version) print("Affine") print(pam.affine) print("Dirs shape") print(pam.peak_dirs.shape) print("SH shape") if pam.shm_coeff is not None: print(pam.shm_coeff.shape) else: print("None") print("ODF shape") if pam.odf is not None: print(pam.odf.shape) else: print("None") print("Total weight") print(pam.total_weight) print("Angular threshold") print(pam.ang_thr) print("Sphere vertices shape") print(pam.sphere.vertices.shape) return pam @deprecate_with_version( "dipy.io.peaks.save_peaks is deprecated, Please use " "dipy.io.peaks.save_pam instead", since="1.10.0", until="1.12.0", ) @warning_for_keywords() def save_peaks(fname, pam, *, affine=None, verbose=False): """Save PeaksAndMetrics object attributes in a PAM5 file (HDF5). Parameters ---------- fname : string Filename of PAM5 file. pam : PeaksAndMetrics Object holding peak_dirs, shm_coeffs and other attributes. affine : array The 4x4 matrix transforming the date from native to world coordinates. PeaksAndMetrics should have that attribute but if not it can be provided here. Default None. verbose : bool Print summary information about the saved file. """ return save_pam(fname=fname, pam=pam, affine=affine, verbose=verbose) def save_pam(fname, pam, *, affine=None, verbose=False): """Save all important attributes of object PeaksAndMetrics in a PAM5 file (HDF5). Parameters ---------- fname : str Filename of PAM5 file. pam : PeaksAndMetrics Object holding peaks information and metrics. 
affine : ndarray, optional The 4x4 matrix transforming the date from native to world coordinates. PeaksAndMetrics should have that attribute but if not it can be provided here. verbose : bool, optional Print summary information about the saved file. """ if os.path.splitext(fname)[1] != ".pam5": raise IOError("This function saves only PAM5 (HDF5) files") if not ( hasattr(pam, "peak_dirs") and hasattr(pam, "peak_values") and hasattr(pam, "peak_indices") ): msg = "Cannot save object without peak_dirs, peak_values" msg += " and peak_indices" raise ValueError(msg) if not ( isinstance(pam.peak_dirs, np.ndarray) and isinstance(pam.peak_values, np.ndarray) and isinstance(pam.peak_indices, np.ndarray) ): msg = "Cannot save object: peak_dirs, peak_values" msg += " and peak_indices should be a ndarray" raise ValueError(msg) f = h5py.File(fname, "w") group = f.create_group("pam") f.attrs["version"] = "0.0.1" version_string = f.attrs["version"] affine = pam.affine if hasattr(pam, "affine") else affine shm_coeff = pam.shm_coeff if hasattr(pam, "shm_coeff") else None odf = pam.odf if hasattr(pam, "odf") else None vertices = None if hasattr(pam, "sphere") and pam.sphere is not None: vertices = pam.sphere.vertices _safe_save(group, affine, "affine") _safe_save(group, pam.peak_dirs, "peak_dirs") _safe_save(group, pam.peak_values, "peak_values") _safe_save(group, pam.peak_indices, "peak_indices") _safe_save(group, shm_coeff, "shm_coeff") _safe_save(group, vertices, "sphere_vertices") _safe_save(group, pam.B, "B") _safe_save(group, np.array([pam.total_weight]), "total_weight") _safe_save(group, np.array([pam.ang_thr]), "ang_thr") _safe_save(group, pam.gfa, "gfa") _safe_save(group, pam.qa, "qa") _safe_save(group, odf, "odf") f.close() if verbose: print("PAM5 version") print(version_string) print("Affine") print(affine) print("Dirs shape") print(pam.peak_dirs.shape) print("SH shape") if shm_coeff is not None: print(shm_coeff.shape) else: print("None") print("ODF shape") if odf is not None: print(pam.odf.shape) else: print("None") print("Total weight") print(pam.total_weight) print("Angular threshold") print(pam.ang_thr) print("Sphere vertices shape") print(pam.sphere.vertices.shape) return pam @deprecate_with_version( "dipy.io.peaks.peaks_to_niftis is deprecated, Please" " use dipy.io.peaks.pam_to_niftis instead", since="1.10.0", until="1.12.0", ) @warning_for_keywords() def peaks_to_niftis( pam, fname_shm, fname_dirs, fname_values, fname_indices, *, fname_gfa=None, reshape_dirs=False, ): """Save SH, directions, indices and values of peaks to Nifti. Parameters ---------- pam : PeaksAndMetrics Object holding peaks information and metrics. fname_shm : str Spherical Harmonics coefficients filename. fname_dirs : str Peaks direction filename. fname_values : str Peaks values filename. fname_indices : str Peaks indices filename. fname_gfa : str, optional Generalized FA filename. reshape_dirs : bool, optional If True, reshape peaks for visualization. """ return pam_to_niftis( pam=pam, fname_peaks_dir=fname_dirs, fname_peaks_values=fname_values, fname_peaks_indices=fname_indices, fname_gfa=fname_gfa, fname_shm=fname_shm, reshape_dirs=reshape_dirs, ) def pam_to_niftis( pam, *, fname_peaks_dir="peaks_dirs.nii.gz", fname_peaks_values="peaks_values.nii.gz", fname_peaks_indices="peaks_indices.nii.gz", fname_shm="shm.nii.gz", fname_gfa="gfa.nii.gz", fname_sphere="sphere.txt", fname_b="B.nii.gz", fname_qa="qa.nii.gz", reshape_dirs=False, ): """Save SH, directions, indices and values of peaks to Nifti. 
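One file is written per attribute; optional attributes such as the SH coefficients, GFA, sphere vertices, B matrix and QA are saved only when they are present on ``pam``.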
Parameters ---------- pam : PeaksAndMetrics Object holding peaks information and metrics. fname_peaks_dir : str, optional Peaks direction filename. fname_peaks_values : str, optional Peaks values filename. fname_peaks_indices : str, optional Peaks indices filename. fname_shm : str, optional Spherical Harmonics coefficients filename. It will be saved if available. fname_gfa : str, optional Generalized FA filename. It will be saved if available. fname_sphere : str, optional Sphere vertices filename. It will be saved if available. fname_b : str, optional B Matrix filename. Matrix that transforms spherical harmonics to spherical function. It will be saved if available. fname_qa : str, optional Quantitative Anisotropy filename. It will be saved if available. reshape_dirs : bool, optional If True, Reshape and convert to float32 a set of peaks for visualisation with mrtrix or the fibernavigator. """ if reshape_dirs: pam_dirs = reshape_peaks_for_visualization(pam) else: pam_dirs = pam.peak_dirs.astype(np.float32) save_nifti(fname_peaks_dir, pam_dirs, pam.affine) save_nifti(fname_peaks_values, pam.peak_values.astype(np.float32), pam.affine) save_nifti(fname_peaks_indices, pam.peak_indices, pam.affine) for attr, fname in [("gfa", fname_gfa), ("B", fname_b), ("qa", fname_qa)]: obj = getattr(pam, attr, None) if obj is None: continue save_nifti(fname, obj, pam.affine) if hasattr(pam, "shm_coeff") and pam.shm_coeff is not None: save_nifti(fname_shm, pam.shm_coeff.astype(np.float32), pam.affine) if hasattr(pam, "sphere") and pam.sphere is not None: np.savetxt(fname_sphere, pam.sphere.vertices) def niftis_to_pam( affine, peak_dirs, peak_values, peak_indices, *, shm_coeff=None, sphere=None, gfa=None, B=None, qa=None, odf=None, total_weight=None, ang_thr=None, pam_file=None, ): """Return SH, directions, indices and values of peaks to pam5. Parameters ---------- affine : array, (4, 4) The matrix defining the affine transform. peak_dirs : ndarray The direction of each peak. peak_values : ndarray The value of the peaks. peak_indices : ndarray Indices (in sphere vertices) of the peaks in each voxel. shm_coeff : array, optional Spherical harmonics coefficients. sphere : `Sphere` class instance, optional The sphere providing discrete directions for evaluation. gfa : ndarray, optional Generalized FA volume. B : ndarray, optional Matrix that transforms spherical harmonics to spherical function. qa : array, optional Quantitative Anisotropy in each voxel. odf : ndarray, optional SH coefficients for the ODF spherical function. total_weight : float, optional Total weight of the peaks. ang_thr : float, optional Angular threshold of the peaks. pam_file : str, optional Filename of the desired pam file. Returns ------- pam : PeaksAndMetrics Object holding peak_dirs, shm_coeffs and other attributes. """ pam = PeaksAndMetrics() pam.affine = affine pam.peak_dirs = peak_dirs pam.peak_values = peak_values pam.peak_indices = peak_indices for name, value in [ ("shm_coeff", shm_coeff), ("sphere", sphere), ("B", B), ("total_weight", total_weight), ("ang_thr", ang_thr), ("gfa", gfa), ("qa", qa), ("odf", odf), ]: if value is not None: setattr(pam, name, value) if pam_file: save_pam(pam_file, pam) return pam def tensor_to_pam( evals, evecs, affine, *, shm_coeff=None, sphere=None, gfa=None, B=None, qa=None, odf=None, total_weight=None, ang_thr=None, pam_file=None, npeaks=5, generate_peaks_indices=True, ): """Convert diffusion tensor to pam5. Parameters ---------- evals : ndarray Eigenvalues of a diffusion tensor. 
shape should be (...,3). evecs : ndarray Eigen vectors from the tensor model. affine : array, (4, 4) The matrix defining the affine transform. shm_coeff : array, optional Spherical harmonics coefficients. sphere : `Sphere` class instance, optional The sphere providing discrete directions for evaluation. gfa : ndarray, optional Generalized FA volume. B : ndarray, optional Matrix that transforms spherical harmonics to spherical function. qa : array, optional Quantitative Anisotropy in each voxel. odf : ndarray, optional SH coefficients for the ODF spherical function. pam_file : str, optional Filename of the desired pam file. npeaks : int, optional Maximum number of peaks found. generate_peaks_indices : bool, optional total_weight : float, optional Total weight of the peaks. ang_thr : float, optional Angular threshold of the peaks. Returns ------- pam : PeaksAndMetrics Object holding peaks information and metrics. """ npeaks = 1 if npeaks < 1 else npeaks npeaks = min(npeaks, evals.shape[-1]) shape = evals.shape[:3] peaks_dirs = np.zeros((shape + (npeaks, 3))) peaks_dirs[..., :npeaks, :] = evecs[..., :npeaks, :] peaks_values = np.zeros((shape + (npeaks,))) peaks_values[..., :npeaks] = evals[..., :npeaks] if generate_peaks_indices: vertices = sphere.vertices if sphere else None peaks_indices = quantize_evecs(evecs[..., :npeaks, :], odf_vertices=vertices) else: peaks_indices = np.zeros((shape + (npeaks,)), dtype="int") peaks_indices.fill(-1) return niftis_to_pam( affine=affine, peak_dirs=peaks_dirs, peak_values=peaks_values, peak_indices=peaks_indices.astype(np.int32), shm_coeff=shm_coeff, sphere=sphere, gfa=gfa, B=B, qa=qa, odf=odf, total_weight=total_weight, ang_thr=ang_thr, pam_file=pam_file, ) dipy-1.11.0/dipy/io/pickles.py000066400000000000000000000023121476546756600161470ustar00rootroot00000000000000"""Load and save pickles""" import pickle def save_pickle(fname, dix): """Save `dix` to `fname` as pickle. Parameters ---------- fname : str filename to save object e.g. a dictionary dix : str dictionary or other object Examples -------- >>> import os >>> from tempfile import mkstemp >>> fd, fname = mkstemp() # make temporary file (opened, attached to fh) >>> d={0:{'d':1}} >>> save_pickle(fname, d) >>> d2=load_pickle(fname) We remove the temporary file we created for neatness >>> os.close(fd) # the file is still open, we need to close the fh >>> os.remove(fname) See Also -------- dipy.io.pickles.load_pickle """ out = open(fname, "wb") pickle.dump(dix, out, protocol=pickle.HIGHEST_PROTOCOL) out.close() def load_pickle(fname): """Load object from pickle file `fname`. 
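Only load pickle files that come from a trusted source, since unpickling can execute arbitrary code.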
Parameters ---------- fname : str filename to load dict or other python object Returns ------- dix : object dictionary or other object Examples -------- dipy.io.pickles.save_pickle """ inp = open(fname, "rb") dix = pickle.load(inp) inp.close() return dix dipy-1.11.0/dipy/io/stateful_tractogram.py000066400000000000000000000720531476546756600206000ustar00rootroot00000000000000from bisect import bisect from collections import OrderedDict from copy import deepcopy import enum from itertools import product import logging from nibabel.affines import apply_affine from nibabel.streamlines.tractogram import ( PerArrayDict, PerArraySequenceDict, Tractogram, ) import numpy as np from dipy.io.dpy import Streamlines from dipy.io.utils import ( get_reference_info, is_header_compatible, is_reference_info_valid, ) from dipy.testing.decorators import warning_for_keywords logger = logging.getLogger("StatefulTractogram") logger.setLevel(level=logging.INFO) def set_sft_logger_level(log_level): """Change the logger of the StatefulTractogram to one on the following: DEBUG, INFO, WARNING, CRITICAL, ERROR Parameters ---------- log_level : str Log level for the StatefulTractogram only """ logger.setLevel(level=log_level) class Space(enum.Enum): """Enum to simplify future change to convention""" VOX = "vox" VOXMM = "voxmm" RASMM = "rasmm" class Origin(enum.Enum): """Enum to simplify future change to convention""" NIFTI = "center" TRACKVIS = "corner" class StatefulTractogram: """Class for stateful representation of collections of streamlines Object designed to be identical no matter the file format (trk, tck, vtk, fib, dpy). Facilitate transformation between space and data manipulation for each streamline / point. """ @warning_for_keywords() def __init__( self, streamlines, reference, space, *, origin=Origin.NIFTI, data_per_point=None, data_per_streamline=None, ): """Create a strict, state-aware, robust tractogram Parameters ---------- streamlines : list or ArraySequence Streamlines of the tractogram reference : Nifti or Trk filename, Nifti1Image or TrkFile, Nifti1Header, trk.header (dict) or another Stateful Tractogram Reference that provides the spatial attributes. Typically a nifti-related object from the native diffusion used for streamlines generation space : Enum (dipy.io.stateful_tractogram.Space) Current space in which the streamlines are (vox, voxmm or rasmm) After tracking the space is VOX, after loading with nibabel the space is RASMM origin : Enum (dipy.io.stateful_tractogram.Origin), optional Current origin in which the streamlines are (center or corner) After loading with nibabel the origin is CENTER data_per_point : dict, optional Dictionary in which each key has X items, each items has Y_i items X being the number of streamlines Y_i being the number of points on streamlines #i data_per_streamline : dict, optional Dictionary in which each key has X items X being the number of streamlines Notes ----- Very important to respect the convention, verify that streamlines match the reference and are effectively in the right space. Any change to the number of streamlines, data_per_point or data_per_streamline requires particular verification. In a case of manipulation not allowed by this object, use Nibabel directly and be careful. 
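Examples -------- >>> # A minimal sketch; "dwi.nii.gz" and the streamlines are assumed to exist. >>> # sft = StatefulTractogram(streamlines, "dwi.nii.gz", Space.RASMM)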
""" if data_per_point is None: data_per_point = {} if data_per_streamline is None: data_per_streamline = {} if isinstance(streamlines, Streamlines): streamlines = streamlines.copy() self._tractogram = Tractogram( streamlines, data_per_point=data_per_point, data_per_streamline=data_per_streamline, ) if isinstance(reference, type(self)): logger.warning( "Using a StatefulTractogram as reference, this " "will copy only the space_attributes, not " "the state. The variables space and origin " "must be specified separately." ) logger.warning( "To copy the state from another StatefulTractogram " "you may want to use the function from_sft " "(static function of the StatefulTractogram)." ) if isinstance(reference, tuple) and len(reference) == 4: if is_reference_info_valid(*reference): space_attributes = reference else: raise TypeError( "The provided space attributes are not " "considered valid, please correct before " "using them with StatefulTractogram." ) else: space_attributes = get_reference_info(reference) if space_attributes is None: raise TypeError( "Reference MUST be one of the following:\n" "Nifti or Trk filename, Nifti1Image or " "TrkFile, Nifti1Header or trk.header (dict)." ) (self._affine, self._dimensions, self._voxel_sizes, self._voxel_order) = ( space_attributes ) self._inv_affine = np.linalg.inv(self._affine).astype(np.float32) if space not in Space: raise ValueError("Space MUST be from Space enum, e.g Space.VOX.") self._space = space if origin not in Origin: raise ValueError("Origin MUST be from Origin enum, e.g Origin.NIFTI.") self._origin = origin logger.debug(self) @staticmethod def are_compatible(sft_1, sft_2): """Compatibility verification of two StatefulTractogram to ensure space, origin, data_per_point and data_per_streamline consistency""" are_sft_compatible = True if not is_header_compatible(sft_1, sft_2): logger.warning("Inconsistent spatial attributes between both sft.") are_sft_compatible = False if sft_1.space != sft_2.space: logger.warning("Inconsistent space between both sft.") are_sft_compatible = False if sft_1.origin != sft_2.origin: logger.warning("Inconsistent origin between both sft.") are_sft_compatible = False if sft_1.get_data_per_point_keys() != sft_2.get_data_per_point_keys(): logger.warning("Inconsistent data_per_point between both sft.") are_sft_compatible = False if sft_1.get_data_per_streamline_keys() != sft_2.get_data_per_streamline_keys(): logger.warning("Inconsistent data_per_streamline between both sft.") are_sft_compatible = False return are_sft_compatible @staticmethod @warning_for_keywords() def from_sft(streamlines, sft, *, data_per_point=None, data_per_streamline=None): """Create an instance of `StatefulTractogram` from another instance of `StatefulTractogram`. Parameters ---------- streamlines : list or ArraySequence Streamlines of the tractogram sft : StatefulTractogram, The other StatefulTractogram to copy the space_attribute AND state from. 
data_per_point : dict, optional Dictionary in which each key has X items, each item has Y_i items X being the number of streamlines Y_i being the number of points on streamlines #i
data_per_streamline : dict, optional Dictionary in which each key has X items X being the number of streamlines
Returns ------- new_sft : StatefulTractogram A new instance with the spatial attributes and state copied from `sft`. """
new_sft = StatefulTractogram( streamlines, sft.space_attributes, sft.space, origin=sft.origin, data_per_point=data_per_point, data_per_streamline=data_per_streamline, )
new_sft.dtype_dict = sft.dtype_dict return new_sft
def __str__(self): """Generate the string for printing"""
affine = np.array2string( self._affine, formatter={"float_kind": lambda x: f"{x:.6f}"} )
vox_sizes = np.array2string( self._voxel_sizes, formatter={"float_kind": lambda x: f"{x:.2f}"} )
text = f"Affine: \n{affine}" text += f"\ndimensions: {np.array2string(self._dimensions)}" text += f"\nvoxel_sizes: {vox_sizes}" text += f"\nvoxel_order: {self._voxel_order}"
text += f"\nstreamline_count: {self._get_streamline_count()}" text += f"\npoint_count: {self._get_point_count()}"
text += f"\ndata_per_streamline keys: {self.get_data_per_streamline_keys()}" text += f"\ndata_per_point keys: {self.get_data_per_point_keys()}"
return text
def __len__(self): """Define the length of the object""" return self._get_streamline_count()
def __getitem__(self, key): """Slice all data in a consistent way""" if isinstance(key, int): key = [key]
return self.from_sft( self.streamlines[key], self, data_per_point=self.data_per_point[key], data_per_streamline=self.data_per_streamline[key], )
def __eq__(self, other): """Robust StatefulTractogram equality test""" if not self.are_compatible(self, other): return False
streamlines_equal = np.allclose( self.streamlines.get_data(), other.streamlines.get_data(), rtol=1e-3 ) if not streamlines_equal: return False
dpp_equal = True for key in self.data_per_point: dpp_equal = dpp_equal and np.allclose( self.data_per_point[key].get_data(), other.data_per_point[key].get_data(), rtol=1e-3, )
if not dpp_equal: return False
dps_equal = True for key in self.data_per_streamline: dps_equal = dps_equal and np.allclose( self.data_per_streamline[key], other.data_per_streamline[key], rtol=1e-3 )
if not dps_equal: return False
return True
def __ne__(self, other): """Robust StatefulTractogram equality test (NOT)""" return not self == other
def __add__(self, other_sft): """Addition of two sft with attributes consistency checks""" if not self.are_compatible(self, other_sft): logger.debug(self) logger.debug(other_sft) raise ValueError( "Inconsistent StatefulTractogram.\n" "Make sure Space, Origin are the same and that " "data_per_point and data_per_streamline keys are " "the same."
) streamlines = self.streamlines.copy() streamlines.extend(other_sft.streamlines) data_per_point = deepcopy(self.data_per_point) data_per_point.extend(other_sft.data_per_point) data_per_streamline = deepcopy(self.data_per_streamline) data_per_streamline.extend(other_sft.data_per_streamline) return self.from_sft( streamlines, self, data_per_point=data_per_point, data_per_streamline=data_per_streamline, ) def __iadd__(self, other): self.value = self + other return self.value @property def dtype_dict(self): """Getter for dtype_dict""" dtype_dict = { "positions": self.streamlines._data.dtype, "offsets": self.streamlines._offsets.dtype, } if self.data_per_point is not None: dtype_dict["dpp"] = {} for key in self.data_per_point.keys(): if key in self.data_per_point: dtype_dict["dpp"][key] = self.data_per_point[key]._data.dtype if self.data_per_streamline is not None: dtype_dict["dps"] = {} for key in self.data_per_streamline.keys(): if key in self.data_per_streamline: dtype_dict["dps"][key] = self.data_per_streamline[key].dtype return OrderedDict(dtype_dict) @property def space_attributes(self): """Getter for spatial attribute""" return self._affine, self._dimensions, self._voxel_sizes, self._voxel_order @property def space(self): """Getter for the current space""" return self._space @property def affine(self): """Getter for the reference affine""" return self._affine @property def dimensions(self): """Getter for the reference dimensions""" return self._dimensions @property def voxel_sizes(self): """Getter for the reference voxel sizes""" return self._voxel_sizes @property def voxel_order(self): """Getter for the reference voxel order""" return self._voxel_order @property def origin(self): """Getter for origin standard""" return self._origin @property def streamlines(self): """Partially safe getter for streamlines""" return self._tractogram.streamlines @dtype_dict.setter def dtype_dict(self, dtype_dict): """Modify dtype_dict. Parameters ---------- dtype_dict : dict Dictionary containing the desired datatype for positions, offsets and all dpp and dps keys. (To use with TRX file format): """ if "offsets" in dtype_dict: self.streamlines._offsets = self.streamlines._offsets.astype( dtype_dict["offsets"] ) if "positions" in dtype_dict: self.streamlines._data = self.streamlines._data.astype( dtype_dict["positions"] ) if "dpp" not in dtype_dict: dtype_dict["dpp"] = {} if "dps" not in dtype_dict: dtype_dict["dps"] = {} for key in self.data_per_point: if key in dtype_dict["dpp"]: dtype_to_use = dtype_dict["dpp"][key] self.data_per_point[key]._data = self.data_per_point[key]._data.astype( dtype_to_use ) for key in self.data_per_streamline: if key in dtype_dict["dps"]: dtype_to_use = dtype_dict["dps"][key] self.data_per_streamline[key] = self.data_per_streamline[key].astype( dtype_to_use ) def get_streamlines_copy(self): """Safe getter for streamlines (for slicing)""" return self._tractogram.streamlines.copy() @streamlines.setter def streamlines(self, streamlines): """Modify streamlines. Creating a new object would be less risky. 
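Note that data_per_point and data_per_streamline are reassigned afterwards, which re-triggers nibabel's length-consistency checks against the new streamlines.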
Parameters ---------- streamlines : list or ArraySequence (list and deepcopy recommended) Streamlines of the tractogram """
if isinstance(streamlines, Streamlines): streamlines = streamlines.copy() self._tractogram._streamlines = Streamlines(streamlines)
self.data_per_point = self.data_per_point self.data_per_streamline = self.data_per_streamline logger.warning("Streamlines has been modified.")
@property def data_per_point(self): """Getter for data_per_point""" return self._tractogram.data_per_point
@data_per_point.setter def data_per_point(self, data): """Modify point data. Creating a new object would be less risky.
Parameters ---------- data : dict Dictionary in which each key has X items, each item has Y_i items X being the number of streamlines Y_i being the number of points on streamlines #i """
self._tractogram.data_per_point = data logger.warning("Data_per_point has been modified.")
@property def data_per_streamline(self): """Getter for data_per_streamline""" return self._tractogram.data_per_streamline
@data_per_streamline.setter def data_per_streamline(self, data): """Modify streamline data. Creating a new object would be less risky.
Parameters ---------- data : dict Dictionary in which each key has X items X being the number of streamlines """
self._tractogram.data_per_streamline = data logger.warning("Data_per_streamline has been modified.")
def get_data_per_point_keys(self): """Return a list of the data_per_point attribute names""" return list(set(self.data_per_point.keys()))
def get_data_per_streamline_keys(self): """Return a list of the data_per_streamline attribute names""" return list(set(self.data_per_streamline.keys()))
def to_vox(self): """Safe function to transform streamlines and update state""" if self._space == Space.VOXMM: self._voxmm_to_vox() elif self._space == Space.RASMM: self._rasmm_to_vox()
def to_voxmm(self): """Safe function to transform streamlines and update state""" if self._space == Space.VOX: self._vox_to_voxmm() elif self._space == Space.RASMM: self._rasmm_to_voxmm()
def to_rasmm(self): """Safe function to transform streamlines and update state""" if self._space == Space.VOX: self._vox_to_rasmm() elif self._space == Space.VOXMM: self._voxmm_to_rasmm()
def to_space(self, target_space): """Safe function to transform streamlines to a particular space using an enum and update state""" if target_space == Space.VOX: self.to_vox() elif target_space == Space.VOXMM: self.to_voxmm() elif target_space == Space.RASMM: self.to_rasmm() else: logger.error( "Unsupported target space, please use Enum in " "dipy.io.stateful_tractogram." )
def to_origin(self, target_origin): """Safe function to change streamlines to a particular origin standard: Origin.NIFTI (center of the voxel) or Origin.TRACKVIS (corner of the voxel)""" if target_origin == Origin.NIFTI: self.to_center() elif target_origin == Origin.TRACKVIS: self.to_corner() else: logger.error( "Unsupported origin standard, please use Enum in " "dipy.io.stateful_tractogram."
) def to_center(self): """Safe function to shift streamlines so the center of voxel is the origin""" if self._origin == Origin.TRACKVIS: self._shift_voxel_origin()
def to_corner(self): """Safe function to shift streamlines so the corner of voxel is the origin""" if self._origin == Origin.NIFTI: self._shift_voxel_origin()
def compute_bounding_box(self): """Compute the bounding box of the streamlines in their current state
Returns ------- output : ndarray 8 corners of the XYZ aligned box, all zeros if no streamlines """
if self._tractogram.streamlines._data.size > 0: bbox_min = np.min(self._tractogram.streamlines._data, axis=0) bbox_max = np.max(self._tractogram.streamlines._data, axis=0)
return np.asarray(list(product(*zip(bbox_min, bbox_max))))
return np.zeros((8, 3))
def is_bbox_in_vox_valid(self): """Verify that the bounding box is valid in voxel space. Negative coordinates or coordinates above the volume dimensions are considered invalid in voxel space.
Returns ------- output : bool Are the streamlines within the volume of the associated reference """
if not self.streamlines: return True
old_space = deepcopy(self.space) old_origin = deepcopy(self.origin)
# Due to rotation, the equivalent of an OBB must be used self.to_vox() self.to_corner() bbox_corners = deepcopy(self.compute_bounding_box())
is_valid = True if np.any(bbox_corners < 0): logger.error("Voxel space values lower than 0.0.") logger.debug(bbox_corners) is_valid = False
if ( np.any(bbox_corners[:, 0] > self._dimensions[0]) or np.any(bbox_corners[:, 1] > self._dimensions[1]) or np.any(bbox_corners[:, 2] > self._dimensions[2]) ): logger.error("Voxel space values higher than dimensions.") logger.debug(bbox_corners) is_valid = False
self.to_space(old_space) self.to_origin(old_origin)
return is_valid
@warning_for_keywords() def remove_invalid_streamlines(self, *, epsilon=1e-3): """Remove streamlines with invalid coordinates from the object. Will also remove the data_per_point and data_per_streamline. Invalid coordinates are any X,Y,Z values above the reference dimensions or below zero
Parameters ---------- epsilon : float, optional Epsilon value for the bounding box verification. Default is 1e-3.
Returns ------- output : tuple Tuple of two lists: indices_to_remove, indices_to_keep """
if not self.streamlines: return [], []
old_space = deepcopy(self.space) old_origin = deepcopy(self.origin)
self.to_vox() self.to_corner()
min_condition = np.min(self._tractogram.streamlines._data, axis=1) < epsilon max_condition = np.any( self._tractogram.streamlines._data > self._dimensions - epsilon, axis=1 )
ic_offsets_indices = np.where(np.logical_or(min_condition, max_condition))[0]
indices_to_remove = [] for i in ic_offsets_indices: indices_to_remove.append( bisect(self._tractogram.streamlines._offsets, i) - 1 )
indices_to_remove = sorted(set(indices_to_remove))
indices_to_keep = list( np.setdiff1d( np.arange(len(self._tractogram)), np.array(indices_to_remove) ).astype(int) )
tmp_streamlines = self.streamlines[indices_to_keep] tmp_dpp = self._tractogram.data_per_point[indices_to_keep] tmp_dps = self._tractogram.data_per_streamline[indices_to_keep]
ori_dtype = self._tractogram.streamlines._data.dtype tmp_streamlines = tmp_streamlines.copy() tmp_streamlines._data = tmp_streamlines._data.astype(ori_dtype)
self._tractogram = Tractogram( tmp_streamlines, data_per_point=tmp_dpp, data_per_streamline=tmp_dps, affine_to_rasmm=np.eye(4), )
self.to_space(old_space) self.to_origin(old_origin)
return indices_to_remove, indices_to_keep
def _get_streamline_count(self): """Safe getter for the number of streamlines""" return len(self._tractogram)
def _get_point_count(self): """Safe getter for the number of points""" return self._tractogram.streamlines.total_nb_rows
def _vox_to_voxmm(self): """Unsafe function to transform streamlines""" if self._space == Space.VOX: if self._tractogram.streamlines._data.size > 0: self._tractogram.streamlines._data *= np.asarray(self._voxel_sizes) self._space = Space.VOXMM logger.debug("Moved streamlines from vox to voxmm.") else: logger.warning("Wrong initial space for this function.")
def _voxmm_to_vox(self): """Unsafe function to transform streamlines""" if self._space == Space.VOXMM: if self._tractogram.streamlines._data.size > 0: self._tractogram.streamlines._data /= np.asarray(self._voxel_sizes) self._space = Space.VOX logger.debug("Moved streamlines from voxmm to vox.") else: logger.warning("Wrong initial space for this function.")
def _vox_to_rasmm(self): """Unsafe function to transform streamlines""" if self._space == Space.VOX: if self._tractogram.streamlines._data.size > 0: self._tractogram.apply_affine(self._affine) self._space = Space.RASMM logger.debug("Moved streamlines from vox to rasmm.") else: logger.warning("Wrong initial space for this function.")
def _rasmm_to_vox(self): """Unsafe function to transform streamlines""" if self._space == Space.RASMM: if self._tractogram.streamlines._data.size > 0: self._tractogram.apply_affine(self._inv_affine) self._space = Space.VOX logger.debug("Moved streamlines from rasmm to vox.") else: logger.warning("Wrong initial space for this function.")
def _voxmm_to_rasmm(self): """Unsafe function to transform streamlines""" if self._space == Space.VOXMM: if self._tractogram.streamlines._data.size > 0: self._tractogram.streamlines._data /= np.asarray(self._voxel_sizes) self._tractogram.apply_affine(self._affine) self._space = Space.RASMM logger.debug("Moved streamlines from voxmm to rasmm.") else: logger.warning("Wrong initial space for this function.")
def _rasmm_to_voxmm(self): """Unsafe function to transform streamlines""" if self._space == Space.RASMM: if self._tractogram.streamlines._data.size > 0:
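# rasmm -> vox first via the inverse affine, then vox -> voxmm by voxel-size scaling.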
self._tractogram.apply_affine(self._inv_affine) self._tractogram.streamlines._data *= np.asarray(self._voxel_sizes) self._space = Space.VOXMM logger.debug("Moved streamlines from rasmm to voxmm.") else: logger.warning("Wrong initial space for this function.")
def _shift_voxel_origin(self): """Unsafe function to switch the origin from center to corner and vice versa""" if self.streamlines: shift = np.asarray([0.5, 0.5, 0.5])
if self._space == Space.VOXMM: shift = shift * self._voxel_sizes elif self._space == Space.RASMM: tmp_affine = np.eye(4) tmp_affine[0:3, 0:3] = self._affine[0:3, 0:3] shift = apply_affine(tmp_affine, shift)
if self._origin == Origin.TRACKVIS: shift *= -1
self._tractogram.streamlines._data += shift
if self._origin == Origin.NIFTI: logger.debug("Origin moved to the corner of voxel.") self._origin = Origin.TRACKVIS else: logger.debug("Origin moved to the center of voxel.") self._origin = Origin.NIFTI
def _is_data_per_point_valid(streamlines, data): """Verify that the number of items in data is X and that each of these items has Y_i items.
X being the number of streamlines Y_i being the number of points on streamlines #i
Parameters ---------- streamlines : list or ArraySequence Streamlines of the tractogram data : dict Contains the organized point's metadata (hopefully)
Returns ------- output : bool Do all the streamlines and metadata attributes match """
if not isinstance(data, (dict, PerArraySequenceDict)): logger.error("data_per_point MUST be a dictionary.") return False elif data == {}: return True
total_point = 0 total_streamline = 0 for i in streamlines: total_streamline += 1 total_point += len(i)
for key in data.keys(): total_point_entries = 0 if not len(data[key]) == total_streamline: logger.error( "Missing entry for streamlines points data, " "inconsistent number of streamlines." ) return False
for values in data[key]: total_point_entries += len(values)
if total_point_entries != total_point: logger.error( "Missing entry for streamlines points data, " "inconsistent number of points per streamline." ) return False
return True
def _is_data_per_streamline_valid(streamlines, data): """Verify that the number of items in data is X X being the number of streamlines
Parameters ---------- streamlines : list or ArraySequence Streamlines of the tractogram data : dict Contains the organized streamline's metadata (hopefully)
Returns ------- output : bool Do all the streamlines and metadata attributes match """
if not isinstance(data, (dict, PerArrayDict)): logger.error("data_per_streamline MUST be a dictionary.") return False elif data == {}: return True
total_streamline = 0 for _ in streamlines: total_streamline += 1
for key in data.keys(): if not len(data[key]) == total_streamline: logger.error( "Missing entry for streamlines data, " "inconsistent number of streamlines."
) return False
return True
dipy-1.11.0/dipy/io/streamline.py000066400000000000000000000230331476546756600166630ustar00rootroot00000000000000from copy import deepcopy
import logging import os import time
import nibabel as nib from nibabel.streamlines import detect_format from nibabel.streamlines.tractogram import Tractogram import numpy as np import trx.trx_file_memmap as tmm
from dipy.io.dpy import Dpy from dipy.io.stateful_tractogram import Origin, Space, StatefulTractogram from dipy.io.utils import create_tractogram_header, is_header_compatible from dipy.io.vtk import load_vtk_streamlines, save_vtk_streamlines from dipy.testing.decorators import warning_for_keywords
@warning_for_keywords() def save_tractogram(sft, filename, *, bbox_valid_check=True): """Save the stateful tractogram in any format (trx/trk/tck/vtk/vtp/fib/dpy)
Parameters ---------- sft : StatefulTractogram The stateful tractogram to save filename : string Filename with valid extension
bbox_valid_check : bool Verification for negative voxel coordinates or values above the volume dimensions. Default is True, to enforce a valid file.
Returns ------- output : bool True if the saving operation was successful """
_, extension = os.path.splitext(filename) if extension not in [".trk", ".tck", ".trx", ".vtk", ".vtp", ".fib", ".dpy"]: raise TypeError("Output filename is not one of the supported formats.")
if bbox_valid_check and not sft.is_bbox_in_vox_valid(): raise ValueError( "Bounding box is not valid in voxel space, cannot " "save a valid file if some coordinates are invalid.\n" "Please set bbox_valid_check to False and then use " "the function remove_invalid_streamlines to discard " "invalid streamlines." )
old_space = deepcopy(sft.space) old_origin = deepcopy(sft.origin)
sft.to_rasmm() sft.to_center()
timer = time.time() if extension in [".trk", ".tck"]: tractogram_type = detect_format(filename) header = create_tractogram_header(tractogram_type, *sft.space_attributes) new_tractogram = Tractogram(sft.streamlines, affine_to_rasmm=np.eye(4))
if extension == ".trk": new_tractogram.data_per_point = sft.data_per_point new_tractogram.data_per_streamline = sft.data_per_streamline
fileobj = tractogram_type(new_tractogram, header=header) nib.streamlines.save(fileobj, filename)
elif extension in [".vtk", ".vtp", ".fib"]: binary = extension in [".vtk", ".fib"] save_vtk_streamlines(sft.streamlines, filename, binary=binary) elif extension in [".dpy"]: dpy_obj = Dpy(filename, mode="w") dpy_obj.write_tracks(sft.streamlines) dpy_obj.close() elif extension in [".trx"]: trx = tmm.TrxFile.from_sft(sft) tmm.save(trx, filename) trx.close()
logging.debug( "Save %s with %s streamlines in %s seconds.", filename, len(sft), round(time.time() - timer, 3), )
sft.to_space(old_space) sft.to_origin(old_origin)
return True
@warning_for_keywords() def load_tractogram( filename, reference, *, to_space=Space.RASMM, to_origin=Origin.NIFTI, bbox_valid_check=True, trk_header_check=True, ): """Load the stateful tractogram from any format (trx/trk/tck/vtk/vtp/fib/dpy)
Parameters ---------- filename : string Filename with valid extension
reference : Nifti or Trk filename, Nifti1Image or TrkFile, Nifti1Header or trk.header (dict), or 'same' if the input is a trk or trx file. Reference that provides the spatial attributes.
Typically a nifti-related object from the native diffusion used for streamlines generation
to_space : Enum (dipy.io.stateful_tractogram.Space) Space to which the streamlines will be transformed after loading
to_origin : Enum (dipy.io.stateful_tractogram.Origin) Origin to which the streamlines will be transformed after loading NIFTI standard, default (center of the voxel) TRACKVIS standard (corner of the voxel)
bbox_valid_check : bool Verification for negative voxel coordinates or values above the volume dimensions. Default is True, to enforce a valid file.
trk_header_check : bool Verification that the reference has the same spatial attributes (header) as the input tractogram when a Trk is loaded
Returns ------- output : StatefulTractogram The tractogram to load (must have been saved properly) """
_, extension = os.path.splitext(filename) if extension not in [".trk", ".tck", ".trx", ".vtk", ".vtp", ".fib", ".dpy"]: logging.error("Output filename is not one of the supported formats.") return False
if to_space not in Space: logging.error("Space MUST be one of the 3 choices (Enum).") return False
if reference == "same": if extension in [".trk", ".trx"]: reference = filename else: logging.error( 'Reference must be provided, "same" is only ' "available for trk and trx files." ) return False
if trk_header_check and extension == ".trk": if not is_header_compatible(filename, reference): logging.error("Trk file header does not match the provided reference.") return False
timer = time.time() data_per_point = None data_per_streamline = None
if extension in [".trk", ".tck"]: tractogram_obj = nib.streamlines.load(filename).tractogram streamlines = tractogram_obj.streamlines if extension == ".trk": data_per_point = tractogram_obj.data_per_point data_per_streamline = tractogram_obj.data_per_streamline
elif extension in [".vtk", ".vtp", ".fib"]: streamlines = load_vtk_streamlines(filename) elif extension in [".dpy"]: dpy_obj = Dpy(filename, mode="r") streamlines = list(dpy_obj.read_tracks()) dpy_obj.close()
if extension in [".trx"]: trx_obj = tmm.load(filename) sft = trx_obj.to_sft() trx_obj.close() else: sft = StatefulTractogram( streamlines, reference, Space.RASMM, origin=Origin.NIFTI, data_per_point=data_per_point, data_per_streamline=data_per_streamline, )
logging.debug( "Load %s with %s streamlines in %s seconds.", filename, len(sft), round(time.time() - timer, 3), )
if bbox_valid_check and not sft.is_bbox_in_vox_valid(): raise ValueError( "Bounding box is not valid in voxel space, cannot " "load a valid file if some coordinates are invalid.\n" "Please set bbox_valid_check to False and then use " "the function remove_invalid_streamlines to discard " "invalid streamlines." )
sft.to_space(to_space) sft.to_origin(to_origin)
return sft
def load_generator(ttype): """Generate a loading function that performs a file extension check to restrict the user to a single file format.
Parameters ---------- ttype : string Extension of the file format that requires a loader
Returns ------- output : function Function (load_tractogram) that handles only one file format """
@warning_for_keywords() def f_gen( filename, reference, *, to_space=Space.RASMM, to_origin=Origin.NIFTI, bbox_valid_check=True, trk_header_check=True, ): _, extension = os.path.splitext(filename) if not extension == ttype: msg = f"This function can only load {ttype} files, " msg += "for a more general purpose, use load_tractogram instead."
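# Generated loaders are deliberately strict: one extension per function.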
raise ValueError(msg)
sft = load_tractogram( filename, reference, to_space=to_space, to_origin=to_origin, bbox_valid_check=bbox_valid_check, trk_header_check=trk_header_check, )
return sft
f_gen.__doc__ = load_tractogram.__doc__.replace( "from any format (trx/trk/tck/vtk/vtp/fib/dpy)", f"of the {ttype} format" )
return f_gen
def save_generator(ttype): """Generate a saving function that performs a file extension check to restrict the user to a single file format.
Parameters ---------- ttype : string Extension of the file format that requires a saver
Returns ------- output : function Function (save_tractogram) that handles only one file format """
def f_gen(sft, filename, bbox_valid_check=True): _, extension = os.path.splitext(filename) if not extension == ttype: msg = f"This function can only save {ttype} files, " msg += "for more general cases, use save_tractogram instead." raise ValueError(msg)
save_tractogram(sft, filename, bbox_valid_check=bbox_valid_check)
f_gen.__doc__ = save_tractogram.__doc__.replace( "in any format (trx/trk/tck/vtk/vtp/fib/dpy)", f"of the {ttype} format" )
return f_gen
load_trk = load_generator(".trk") load_tck = load_generator(".tck") load_trx = load_generator(".trx") load_vtk = load_generator(".vtk") load_vtp = load_generator(".vtp") load_fib = load_generator(".fib") load_dpy = load_generator(".dpy")
save_trk = save_generator(".trk") save_tck = save_generator(".tck") save_trx = save_generator(".trx") save_vtk = save_generator(".vtk") save_vtp = save_generator(".vtp") save_fib = save_generator(".fib") save_dpy = save_generator(".dpy")
dipy-1.11.0/dipy/io/surface.py000066400000000000000000000017631476546756600161540ustar00rootroot00000000000000from warnings import warn
import nibabel as nib
from dipy.testing.decorators import warning_for_keywords
@warning_for_keywords() def load_pial(fname, *, return_meta=False): """Load pial file.
Parameters ---------- fname : str Absolute path of the file. return_meta : bool, optional Whether to read the metadata of the file or not, by default False.
Returns ------- tuple (vertices, faces) if return_meta=False. Otherwise, (vertices, faces, metadata). """
try: return nib.freesurfer.read_geometry(fname, read_metadata=return_meta) except ValueError: warn(f"The file {fname} provided does not have geometry data.", stacklevel=2)
def load_gifti(fname): """Load gifti file.
Parameters ---------- fname : str Absolute path of the file.
Returns ------- tuple (vertices, faces) """ surf_img = nib.load(fname) return surf_img.agg_data(("pointset", "triangle")) dipy-1.11.0/dipy/io/tests/000077500000000000000000000000001476546756600153075ustar00rootroot00000000000000dipy-1.11.0/dipy/io/tests/__init__.py000066400000000000000000000000521476546756600174150ustar00rootroot00000000000000# init to allow relative imports in tests dipy-1.11.0/dipy/io/tests/meson.build000066400000000000000000000004331476546756600174510ustar00rootroot00000000000000python_sources = [ '__init__.py', 'test_dpy.py', 'test_io.py', 'test_io_gradients.py', 'test_io_peaks.py', 'test_stateful_tractogram.py', 'test_streamline.py', 'test_utils.py', ] py3.install_sources( python_sources, pure: false, subdir: 'dipy/io/tests' ) dipy-1.11.0/dipy/io/tests/test_dpy.py000066400000000000000000000017121476546756600175150ustar00rootroot00000000000000from os.path import join as pjoin from tempfile import TemporaryDirectory import numpy as np import numpy.testing as npt from dipy.io.dpy import Dpy, Streamlines def test_dpy(): with TemporaryDirectory() as tmpdir: fname = pjoin(tmpdir, "test.bin") dpw = Dpy(fname, mode="w") A = np.ones((5, 3)) B = 2 * A.copy() C = 3 * A.copy() dpw.write_track(A) dpw.write_track(B) dpw.write_track(C) dpw.write_tracks(Streamlines([C, B, A])) all_tracks = np.ascontiguousarray(np.vstack([A, B, C, C, B, A])) npt.assert_array_equal(all_tracks, dpw.tracks[:]) dpw.close() dpr = Dpy(fname, mode="r") npt.assert_equal(dpr.version() == "0.0.1", True) T = dpr.read_tracksi([0, 1, 2, 0, 0, 2]) T2 = dpr.read_tracks() npt.assert_equal(len(T2), 6) dpr.close() npt.assert_array_equal(A, T[0]) npt.assert_array_equal(C, T[5]) dipy-1.11.0/dipy/io/tests/test_io.py000066400000000000000000000002711476546756600173270ustar00rootroot00000000000000"""Tests for overall io sub-package.""" from dipy import io def test_imports(): # Make sure io has not pulled in setup_module from dpy assert not hasattr(io, "setup_module") dipy-1.11.0/dipy/io/tests/test_io_gradients.py000066400000000000000000000131701476546756600213710ustar00rootroot00000000000000import os.path as osp from os.path import join as pjoin from tempfile import TemporaryDirectory import warnings import numpy as np import numpy.testing as npt from dipy.core.gradients import gradient_table from dipy.data import get_fnames from dipy.io.gradients import read_bvals_bvecs from dipy.testing import assert_true def test_read_bvals_bvecs(): fimg, fbvals, fbvecs = get_fnames(name="small_101D") bvals, bvecs = read_bvals_bvecs(fbvals, fbvecs) gt = gradient_table(bvals, bvecs=bvecs) npt.assert_array_equal(bvals, gt.bvals) npt.assert_array_equal(bvecs, gt.bvecs) # None should also work as an input: bvals_none, bvecs_none = read_bvals_bvecs(None, fbvecs) npt.assert_array_equal(bvecs_none, gt.bvecs) bvals_none, bvecs_none = read_bvals_bvecs(fbvals, None) npt.assert_array_equal(bvals_none, gt.bvals) # Test for error raising with unknown file formats: nan_fbvecs = osp.splitext(fbvecs)[0] + ".nan" # Nonsense extension npt.assert_raises(ValueError, read_bvals_bvecs, fbvals, nan_fbvecs) npt.assert_raises(ValueError, read_bvals_bvecs, bvals, nan_fbvecs) npt.assert_raises(ValueError, read_bvals_bvecs, fbvals, bvecs) # Test for error raising with incorrect file-contents: with TemporaryDirectory() as tmpdir: # These bvecs only have two rows/columns: new_bvecs1 = bvecs[:, :2] # Make a temporary file fname = "test_bv_file1.txt" with open(pjoin(tmpdir, fname), "wt") as bv_file1: # And fill it with these 2-columned bvecs: for x in 
range(new_bvecs1.shape[0]): bv_file1.write(f"{new_bvecs1[x][0]} {new_bvecs1[x][1]}\n") npt.assert_raises(OSError, read_bvals_bvecs, fbvals, fname) # These bvecs are saved as one long array: fname = "test_bv_file2.npy" new_bvecs2 = np.ravel(bvecs) with open(pjoin(tmpdir, fname), "w") as bv_file2: np.save(bv_file2.name, new_bvecs2) npt.assert_raises(OSError, read_bvals_bvecs, fbvals, fname) # There are less bvecs than bvals: fname = "test_bv_file3.txt" new_bvecs3 = bvecs[:-1, :] with open(pjoin(tmpdir, fname), "w") as bv_file3: np.savetxt(bv_file3.name, new_bvecs3) npt.assert_raises(OSError, read_bvals_bvecs, fbvals, fname) # You entered the bvecs on both sides: npt.assert_raises(OSError, read_bvals_bvecs, fbvecs, fbvecs) # All possible delimiters should work bv_file4 = "test_space.txt" with open(pjoin(tmpdir, bv_file4), "w") as f: f.write("66 55 33") bvals_1, _ = read_bvals_bvecs(pjoin(tmpdir, bv_file4), "") bv_file5 = "test_coma.txt" with open(pjoin(tmpdir, bv_file5), "w") as f: f.write("66, 55, 33") bvals_2, _ = read_bvals_bvecs(pjoin(tmpdir, bv_file5), "") bv_file6 = "test_tabs.txt" with open(pjoin(tmpdir, bv_file6), "w") as f: f.write("66 \t 55 \t 33") bvals_3, _ = read_bvals_bvecs(pjoin(tmpdir, bv_file6), "") ans = np.array([66.0, 55.0, 33.0]) npt.assert_array_equal(ans, bvals_1) npt.assert_array_equal(ans, bvals_2) npt.assert_array_equal(ans, bvals_3) bv_file7 = "test_space_2.txt" with open(pjoin(tmpdir, bv_file7), "w") as f: f.write("66 55 33\n45 34 21\n55 32 65\n") _, bvecs_1 = read_bvals_bvecs("", pjoin(tmpdir, bv_file7)) bv_file8 = "test_coma_2.txt" with open(pjoin(tmpdir, bv_file8), "w") as f: f.write("66, 55, 33\n45, 34, 21 \n 55, 32, 65\n") _, bvecs_2 = read_bvals_bvecs("", pjoin(tmpdir, bv_file8)) bv_file9 = "test_tabs_2.txt" with open(pjoin(tmpdir, bv_file9), "w") as f: f.write("66 \t 55 \t 33\n45 \t 34 \t 21\n55 \t 32 \t 65\n") _, bvecs_3 = read_bvals_bvecs("", pjoin(tmpdir, bv_file9)) bv_file10 = "test_multiple_space.txt" with open(pjoin(tmpdir, bv_file10), "w") as f: f.write("66 55 33\n45, 34, 21 \n 55, 32, 65\n") _, bvecs_4 = read_bvals_bvecs("", pjoin(tmpdir, bv_file10)) ans = np.array([[66.0, 55.0, 33.0], [45.0, 34.0, 21.0], [55.0, 32.0, 65.0]]) npt.assert_array_equal(ans, bvecs_1) npt.assert_array_equal(ans, bvecs_2) npt.assert_array_equal(ans, bvecs_3) npt.assert_array_equal(ans, bvecs_4) bv_two_volume = "bv_two_volume.txt" with open(pjoin(tmpdir, bv_two_volume), "w") as f: f.write("0 0 \n0 0 \n0 0") bval_two_volume = "bval_two_volume.txt" with open(pjoin(tmpdir, bval_two_volume), "w") as f: f.write("0\n0\n") bval_5, bvecs_5 = read_bvals_bvecs( pjoin(tmpdir, bval_two_volume), pjoin(tmpdir, bv_two_volume) ) npt.assert_array_equal(bvecs_5, np.zeros((2, 3))) npt.assert_array_equal(bval_5, np.zeros(2)) bv_single_volume = "test_single_volume.txt" with open(pjoin(tmpdir, bv_single_volume), "w") as f: f.write("0 \n0 \n0 ") bval_single_volume = "test_single_volume_2.txt" with open(pjoin(tmpdir, bval_single_volume), "w") as f: f.write("0\n") with warnings.catch_warnings(record=True) as w: bval_5, bvecs_5 = read_bvals_bvecs( pjoin(tmpdir, bval_single_volume), pjoin(tmpdir, bv_single_volume) ) npt.assert_array_equal(bvecs_5, np.zeros((1, 3))) npt.assert_array_equal(bval_5, np.zeros(1)) assert_true(len(w) == 1) assert_true(issubclass(w[0].category, UserWarning)) assert_true("Detected only 1 direction on" in str(w[0].message)) dipy-1.11.0/dipy/io/tests/test_io_peaks.py000066400000000000000000000173571476546756600205270ustar00rootroot00000000000000import os from os.path import join 
as pjoin from tempfile import TemporaryDirectory import warnings import numpy as np import numpy.testing as npt from dipy.core.gradients import gradient_table from dipy.core.subdivide_octahedron import create_unit_sphere from dipy.data import default_sphere, get_fnames from dipy.direction.peaks import PeaksAndMetrics from dipy.io.image import load_nifti from dipy.io.peaks import ( load_pam, load_peaks, niftis_to_pam, pam_to_niftis, save_pam, save_peaks, tensor_to_pam, ) import dipy.reconst.dti as dti from dipy.testing.decorators import set_random_number_generator def generate_default_pam(rng): pam = PeaksAndMetrics() pam.affine = np.eye(4) pam.peak_dirs = rng.random((10, 10, 10, 5, 3)) pam.peak_values = np.zeros((10, 10, 10, 5)) pam.peak_indices = np.zeros((10, 10, 10, 5)) pam.shm_coeff = np.zeros((10, 10, 10, 45)) pam.sphere = default_sphere pam.B = np.zeros((45, default_sphere.vertices.shape[0])) pam.total_weight = 0.5 pam.ang_thr = 60 pam.gfa = np.zeros((10, 10, 10)) pam.qa = np.zeros((10, 10, 10, 5)) pam.odf = np.zeros((10, 10, 10, default_sphere.vertices.shape[0])) return pam @set_random_number_generator() def test_io_peaks(rng): with TemporaryDirectory() as tmpdir: fname = pjoin(tmpdir, "test.pam5") pam = generate_default_pam(rng) save_pam(fname, pam) pam2 = load_pam(fname, verbose=False) npt.assert_array_equal(pam.peak_dirs, pam2.peak_dirs) pam2.affine = None fname2 = pjoin(tmpdir, "test2.pam5") save_pam(fname2, pam2, affine=np.eye(4)) pam2_res = load_pam(fname2, verbose=True) npt.assert_array_equal(pam.peak_dirs, pam2_res.peak_dirs) pam3 = load_pam(fname2, verbose=False) for attr in [ "peak_dirs", "peak_values", "peak_indices", "gfa", "qa", "shm_coeff", "B", "odf", ]: npt.assert_array_equal(getattr(pam3, attr), getattr(pam, attr)) npt.assert_equal(pam3.total_weight, pam.total_weight) npt.assert_equal(pam3.ang_thr, pam.ang_thr) npt.assert_array_almost_equal(pam3.sphere.vertices, pam.sphere.vertices) fname3 = pjoin(tmpdir, "test3.pam5") pam4 = PeaksAndMetrics() npt.assert_raises((ValueError, AttributeError), save_pam, fname3, pam4) fname4 = pjoin(tmpdir, "test4.pam5") del pam.affine save_pam(fname4, pam, affine=None) fname5 = pjoin(tmpdir, "test5.pkm") npt.assert_raises(IOError, save_pam, fname5, pam) pam.affine = np.eye(4) fname6 = pjoin(tmpdir, "test6.pam5") save_pam(fname6, pam, verbose=True) del pam.shm_coeff save_pam(pjoin(tmpdir, fname6), pam, verbose=False) pam.shm_coeff = np.zeros((10, 10, 10, 45)) del pam.odf save_pam(fname6, pam) pam_tmp = load_pam(fname6, verbose=True) npt.assert_equal(pam_tmp.odf, None) fname7 = pjoin(tmpdir, "test7.paw") npt.assert_raises(OSError, load_pam, fname7) del pam.shm_coeff save_pam(fname6, pam, verbose=True) fname_shm = pjoin(tmpdir, "shm.nii.gz") fname_dirs = pjoin(tmpdir, "peaks_dirs.nii.gz") fname_values = pjoin(tmpdir, "peaks_values.nii.gz") fname_indices = pjoin(tmpdir, "peaks_indices.nii.gz") fname_gfa = pjoin(tmpdir, "gfa.nii.gz") fname_sphere = pjoin(tmpdir, "sphere.txt") fname_b = pjoin(tmpdir, "B.nii.gz") fname_qa = pjoin(tmpdir, "qa.nii.gz") pam.shm_coeff = np.ones((10, 10, 10, 45)) pam_to_niftis( pam, fname_shm=fname_shm, fname_peaks_dir=fname_dirs, fname_peaks_values=fname_values, fname_peaks_indices=fname_indices, fname_gfa=fname_gfa, fname_sphere=fname_sphere, fname_b=fname_b, fname_qa=fname_qa, reshape_dirs=False, ) for name in [ "shm.nii.gz", "peaks_dirs.nii.gz", "peaks_values.nii.gz", "gfa.nii.gz", "peaks_indices.nii.gz", "shm.nii.gz", ]: npt.assert_( os.path.isfile(pjoin(tmpdir, name)), "{} file does not 
exist".format(pjoin(tmpdir, name)), ) @set_random_number_generator() def test_io_save_pam_error(rng): with TemporaryDirectory() as tmpdir: fname = "test.pam5" pam = PeaksAndMetrics() npt.assert_raises(IOError, save_pam, pjoin(tmpdir, "test.pam"), pam) npt.assert_raises( (ValueError, AttributeError), save_pam, pjoin(tmpdir, fname), pam ) pam.affine = np.eye(4) pam.peak_dirs = rng.random((10, 10, 10, 5, 3)) pam.peak_values = np.zeros((10, 10, 10, 5)) pam.peak_indices = np.zeros((10, 10, 10, 5)) pam.shm_coeff = np.zeros((10, 10, 10, 45)) pam.sphere = default_sphere pam.B = np.zeros((45, default_sphere.vertices.shape[0])) pam.total_weight = 0.5 pam.ang_thr = 60 pam.gfa = np.zeros((10, 10, 10)) pam.qa = np.zeros((10, 10, 10, 5)) pam.odf = np.zeros((10, 10, 10, default_sphere.vertices.shape[0])) def test_io_niftis_to_pam(): with TemporaryDirectory() as tmpdir: pam = niftis_to_pam( affine=np.eye(4), peak_dirs=np.random.rand(10, 10, 10, 5, 3), peak_values=np.zeros((10, 10, 10, 5)), peak_indices=np.zeros((10, 10, 10, 5)), shm_coeff=np.zeros((10, 10, 10, 45)), sphere=default_sphere, gfa=np.zeros((10, 10, 10)), B=np.zeros((45, default_sphere.vertices.shape[0])), qa=np.zeros((10, 10, 10, 5)), odf=np.zeros((10, 10, 10, default_sphere.vertices.shape[0])), total_weight=0.5, ang_thr=60, pam_file=pjoin(tmpdir, "test15.pam5"), ) npt.assert_equal(pam.peak_dirs.shape, (10, 10, 10, 5, 3)) npt.assert_(os.path.isfile(pjoin(tmpdir, "test15.pam5"))) def test_tensor_to_pam(): fdata, fbval, fbvec = get_fnames(name="small_25") gtab = gradient_table(fbval, bvecs=fbvec) data, affine = load_nifti(fdata) dm = dti.TensorModel(gtab) df = dm.fit(data) df.evals[0, 0, 0] = np.array([0, 0, 0]) sphere = create_unit_sphere(recursion_level=4) odf = df.odf(sphere) with TemporaryDirectory() as tmpdir: fname = "test_tt.pam5" pam = tensor_to_pam( evals=df.evals, evecs=df.evecs, affine=affine, sphere=sphere, odf=odf, pam_file=pjoin(tmpdir, fname), ) npt.assert_(os.path.isfile(pjoin(tmpdir, fname))) save_pam(pjoin(tmpdir, "test_tt_2.pam5"), pam) pam2 = load_pam(pjoin(tmpdir, "test_tt_2.pam5")) npt.assert_array_equal(pam.peak_values, pam2.peak_values) npt.assert_array_equal(pam.peak_dirs, pam2.peak_dirs) npt.assert_array_almost_equal(pam.peak_indices, pam2.peak_indices) del pam @set_random_number_generator() def test_io_peaks_deprecated(rng): with TemporaryDirectory() as tmpdir: with warnings.catch_warnings(record=True) as cw: warnings.simplefilter("always", DeprecationWarning) fname = pjoin(tmpdir, "test_tt.pam5") pam = generate_default_pam(rng) save_peaks(fname, pam) pam2 = load_peaks(fname, verbose=True) npt.assert_array_equal(pam.peak_dirs, pam2.peak_dirs) npt.assert_equal(len(cw), 2) npt.assert_(issubclass(cw[0].category, DeprecationWarning)) npt.assert_(issubclass(cw[1].category, DeprecationWarning)) dipy-1.11.0/dipy/io/tests/test_stateful_tractogram.py000066400000000000000000001112011476546756600227660ustar00rootroot00000000000000from copy import deepcopy import os from os.path import join as pjoin import sys from tempfile import TemporaryDirectory from urllib.error import HTTPError, URLError import numpy as np import numpy.testing as npt from numpy.testing import assert_, assert_allclose, assert_array_equal import pytest import trx.trx_file_memmap as tmm from dipy.data import get_fnames from dipy.io.stateful_tractogram import Origin, Space, StatefulTractogram from dipy.io.streamline import load_tractogram, save_tractogram from dipy.io.utils import is_header_compatible from dipy.testing.decorators import 
set_random_number_generator from dipy.utils.optpkg import optional_package fury, have_fury, setup_module = optional_package("fury", min_version="0.10.0") is_big_endian = "big" in sys.byteorder.lower() FILEPATH_DIX, POINTS_DATA, STREAMLINES_DATA = None, None, None def setup_module(): global FILEPATH_DIX, POINTS_DATA, STREAMLINES_DATA try: FILEPATH_DIX, POINTS_DATA, STREAMLINES_DATA = get_fnames( name="gold_standard_tracks" ) except (HTTPError, URLError) as e: FILEPATH_DIX, POINTS_DATA, STREAMLINES_DATA = None, None, None error_msg = f'"Tests Data failed to download." Reason: {e}' pytest.skip(error_msg, allow_module_level=True) return def teardown_module(): global FILEPATH_DIX, POINTS_DATA, STREAMLINES_DATA FILEPATH_DIX, POINTS_DATA, STREAMLINES_DATA = None, None, None @pytest.mark.skipif(is_big_endian, reason="Little Endian architecture required") def test_direct_trx_loading(): trx = tmm.load(FILEPATH_DIX["gs.trx"]) tmp_dir = deepcopy(trx._uncompressed_folder_handle.name) assert os.path.isdir(tmp_dir) sft = trx.to_sft() tmp_points_vox = np.loadtxt(FILEPATH_DIX["gs_vox_space.txt"]) tmp_points_rasmm = np.loadtxt(FILEPATH_DIX["gs_rasmm_space.txt"]) trx.close() assert not os.path.isdir(tmp_dir) assert_allclose(sft.streamlines._data, tmp_points_rasmm, rtol=1e-04, atol=1e-06) sft.to_vox() assert_allclose(sft.streamlines._data, tmp_points_vox, rtol=1e-04, atol=1e-06) def test_trk_equal_in_vox_space(): sft = load_tractogram( FILEPATH_DIX["gs.trk"], FILEPATH_DIX["gs.nii"], to_space=Space.VOX ) tmp_points_vox = np.loadtxt(FILEPATH_DIX["gs_vox_space.txt"]) assert_allclose(tmp_points_vox, sft.streamlines.get_data(), atol=1e-3, rtol=1e-6) def test_tck_equal_in_vox_space(): sft = load_tractogram( FILEPATH_DIX["gs.tck"], FILEPATH_DIX["gs.nii"], to_space=Space.VOX ) tmp_points_vox = np.loadtxt(FILEPATH_DIX["gs_vox_space.txt"]) assert_allclose(tmp_points_vox, sft.streamlines.get_data(), atol=1e-3, rtol=1e-6) @pytest.mark.skipif(is_big_endian, reason="Little Endian architecture required") def test_trx_equal_in_vox_space(): sft = load_tractogram( FILEPATH_DIX["gs.trx"], FILEPATH_DIX["gs.nii"], to_space=Space.VOX ) tmp_points_vox = np.loadtxt(FILEPATH_DIX["gs_vox_space.txt"]) assert_allclose(tmp_points_vox, sft.streamlines.get_data(), atol=1e-3, rtol=1e-6) @pytest.mark.skipif(not have_fury, reason="Requires FURY") def test_fib_equal_in_vox_space(): if not have_fury: return sft = load_tractogram( FILEPATH_DIX["gs.fib"], FILEPATH_DIX["gs.nii"], to_space=Space.VOX ) tmp_points_vox = np.loadtxt(FILEPATH_DIX["gs_vox_space.txt"]) assert_allclose(tmp_points_vox, sft.streamlines.get_data(), atol=1e-3, rtol=1e-6) def test_dpy_equal_in_vox_space(): sft = load_tractogram( FILEPATH_DIX["gs.dpy"], FILEPATH_DIX["gs.nii"], to_space=Space.VOX ) tmp_points_vox = np.loadtxt(FILEPATH_DIX["gs_vox_space.txt"]) assert_allclose(tmp_points_vox, sft.streamlines.get_data(), atol=1e-3, rtol=1e-6) def test_trk_equal_in_rasmm_space(): sft = load_tractogram( FILEPATH_DIX["gs.trk"], FILEPATH_DIX["gs.nii"], to_space=Space.RASMM ) tmp_points_rasmm = np.loadtxt(FILEPATH_DIX["gs_rasmm_space.txt"]) assert_allclose(tmp_points_rasmm, sft.streamlines.get_data(), atol=1e-3, rtol=1e-6) def test_tck_equal_in_rasmm_space(): sft = load_tractogram( FILEPATH_DIX["gs.tck"], FILEPATH_DIX["gs.nii"], to_space=Space.RASMM ) tmp_points_rasmm = np.loadtxt(FILEPATH_DIX["gs_rasmm_space.txt"]) assert_allclose(tmp_points_rasmm, sft.streamlines.get_data(), atol=1e-3, rtol=1e-6) @pytest.mark.skipif(is_big_endian, reason="Little Endian architecture required") def 
test_trx_equal_in_rasmm_space(): sft = load_tractogram( FILEPATH_DIX["gs.trx"], FILEPATH_DIX["gs.nii"], to_space=Space.RASMM ) tmp_points_rasmm = np.loadtxt(FILEPATH_DIX["gs_rasmm_space.txt"]) assert_allclose(tmp_points_rasmm, sft.streamlines.get_data(), atol=1e-3, rtol=1e-6) @pytest.mark.skipif(not have_fury, reason="Requires FURY") def test_fib_equal_in_rasmm_space(): if not have_fury: return sft = load_tractogram( FILEPATH_DIX["gs.fib"], FILEPATH_DIX["gs.nii"], to_space=Space.RASMM ) tmp_points_rasmm = np.loadtxt(FILEPATH_DIX["gs_rasmm_space.txt"]) assert_allclose(tmp_points_rasmm, sft.streamlines.get_data(), atol=1e-3, rtol=1e-6) def test_dpy_equal_in_rasmm_space(): sft = load_tractogram( FILEPATH_DIX["gs.dpy"], FILEPATH_DIX["gs.nii"], to_space=Space.RASMM ) tmp_points_rasmm = np.loadtxt(FILEPATH_DIX["gs_rasmm_space.txt"]) assert_allclose(tmp_points_rasmm, sft.streamlines.get_data(), atol=1e-3, rtol=1e-6) def test_trk_equal_in_voxmm_space(): sft = load_tractogram( FILEPATH_DIX["gs.trk"], FILEPATH_DIX["gs.nii"], to_space=Space.VOXMM ) tmp_points_voxmm = np.loadtxt(FILEPATH_DIX["gs_voxmm_space.txt"]) assert_allclose(tmp_points_voxmm, sft.streamlines.get_data(), atol=1e-3, rtol=1e-6) def test_tck_equal_in_voxmm_space(): sft = load_tractogram( FILEPATH_DIX["gs.tck"], FILEPATH_DIX["gs.nii"], to_space=Space.VOXMM ) tmp_points_voxmm = np.loadtxt(FILEPATH_DIX["gs_voxmm_space.txt"]) assert_allclose(tmp_points_voxmm, sft.streamlines.get_data(), atol=1e-3, rtol=1e-6) @pytest.mark.skipif(is_big_endian, reason="Little Endian architecture required") def test_trx_equal_in_voxmm_space(): sft = load_tractogram( FILEPATH_DIX["gs.trx"], FILEPATH_DIX["gs.nii"], to_space=Space.VOXMM ) tmp_points_voxmm = np.loadtxt(FILEPATH_DIX["gs_voxmm_space.txt"]) assert_allclose(tmp_points_voxmm, sft.streamlines.get_data(), atol=1e-3, rtol=1e-6) @pytest.mark.skipif(not have_fury, reason="Requires FURY") def test_fib_equal_in_voxmm_space(): if not have_fury: return sft = load_tractogram( FILEPATH_DIX["gs.fib"], FILEPATH_DIX["gs.nii"], to_space=Space.VOXMM ) tmp_points_voxmm = np.loadtxt(FILEPATH_DIX["gs_voxmm_space.txt"]) assert_allclose(tmp_points_voxmm, sft.streamlines.get_data(), atol=1e-3, rtol=1e-6) def test_dpy_equal_in_voxmm_space(): sft = load_tractogram( FILEPATH_DIX["gs.dpy"], FILEPATH_DIX["gs.nii"], to_space=Space.VOXMM ) tmp_points_voxmm = np.loadtxt(FILEPATH_DIX["gs_voxmm_space.txt"]) assert_allclose(tmp_points_voxmm, sft.streamlines.get_data(), atol=1e-3, rtol=1e-6) def test_switch_voxel_sizes_from_rasmm(): sft = load_tractogram( FILEPATH_DIX["gs.trk"], FILEPATH_DIX["gs.nii"], to_space=Space.RASMM ) sft_switch = StatefulTractogram( sft.streamlines, FILEPATH_DIX["gs_3mm.nii"], Space.RASMM ) tmp_points_rasmm = np.loadtxt(FILEPATH_DIX["gs_rasmm_space.txt"]) tmp_points_voxmm = np.loadtxt(FILEPATH_DIX["gs_voxmm_space.txt"]) sft_switch.to_rasmm() assert_allclose( tmp_points_rasmm, sft_switch.streamlines.get_data(), atol=1e-3, rtol=1e-6 ) sft_switch.to_voxmm() assert_allclose( tmp_points_voxmm, sft_switch.streamlines.get_data(), atol=1e-3, rtol=1e-6 ) def test_switch_voxel_sizes_from_voxmm(): sft = load_tractogram( FILEPATH_DIX["gs.trk"], FILEPATH_DIX["gs.nii"], to_space=Space.VOXMM ) sft_switch = StatefulTractogram( sft.streamlines, FILEPATH_DIX["gs_3mm.nii"], Space.VOXMM ) tmp_points_rasmm = np.loadtxt(FILEPATH_DIX["gs_rasmm_space.txt"]) tmp_points_voxmm = np.loadtxt(FILEPATH_DIX["gs_voxmm_space.txt"]) sft_switch.to_rasmm() assert_allclose( tmp_points_rasmm, sft_switch.streamlines.get_data(), atol=1e-3, 
rtol=1e-6 ) sft_switch.to_voxmm() assert_allclose( tmp_points_voxmm, sft_switch.streamlines.get_data(), atol=1e-3, rtol=1e-6 ) def test_to_rasmm_equivalence(): sft_1 = load_tractogram( FILEPATH_DIX["gs.trk"], FILEPATH_DIX["gs.nii"], to_space=Space.VOX ) sft_2 = load_tractogram( FILEPATH_DIX["gs.trk"], FILEPATH_DIX["gs.nii"], to_space=Space.VOX ) sft_1.to_rasmm() sft_2.to_space(Space.RASMM) assert_allclose( sft_1.streamlines.get_data(), sft_2.streamlines.get_data(), atol=1e-3, rtol=1e-6 ) def test_to_voxmm_equivalence(): sft_1 = load_tractogram( FILEPATH_DIX["gs.trk"], FILEPATH_DIX["gs.nii"], to_space=Space.VOX ) sft_2 = load_tractogram( FILEPATH_DIX["gs.trk"], FILEPATH_DIX["gs.nii"], to_space=Space.VOX ) sft_1.to_voxmm() sft_2.to_space(Space.VOXMM) assert_allclose( sft_1.streamlines.get_data(), sft_2.streamlines.get_data(), atol=1e-3, rtol=1e-6 ) def test_to_vox_equivalence(): sft_1 = load_tractogram( FILEPATH_DIX["gs.trk"], FILEPATH_DIX["gs.nii"], to_space=Space.RASMM ) sft_2 = load_tractogram( FILEPATH_DIX["gs.trk"], FILEPATH_DIX["gs.nii"], to_space=Space.RASMM ) sft_1.to_vox() sft_2.to_space(Space.VOX) assert_allclose( sft_1.streamlines.get_data(), sft_2.streamlines.get_data(), atol=1e-3, rtol=1e-6 ) def test_to_corner_equivalence(): sft_1 = load_tractogram( FILEPATH_DIX["gs.trk"], FILEPATH_DIX["gs.nii"], to_space=Space.VOX ) sft_2 = load_tractogram( FILEPATH_DIX["gs.trk"], FILEPATH_DIX["gs.nii"], to_space=Space.VOX ) sft_1.to_corner() sft_2.to_origin(Origin.TRACKVIS) assert_allclose( sft_1.streamlines.get_data(), sft_2.streamlines.get_data(), atol=1e-3, rtol=1e-6 ) def test_to_center_equivalence(): sft_1 = load_tractogram( FILEPATH_DIX["gs.trk"], FILEPATH_DIX["gs.nii"], to_space=Space.VOX ) sft_2 = load_tractogram( FILEPATH_DIX["gs.trk"], FILEPATH_DIX["gs.nii"], to_space=Space.VOX ) sft_1.to_center() sft_2.to_origin(Origin.NIFTI) assert_allclose( sft_1.streamlines.get_data(), sft_2.streamlines.get_data(), atol=1e-3, rtol=1e-6 ) def test_empty_sft_case(): sft_1 = load_tractogram( FILEPATH_DIX["gs.trk"], FILEPATH_DIX["gs.nii"], to_space=Space.VOX, to_origin=Origin("corner"), ) # Removing data_per_point sft_1 = sft_1.from_sft(sft_1.streamlines, sft_1) # Creating an empty set with the same spatial attributes. sft_2 = sft_1.from_sft([], sft_1) # Loaded in Vox, Corner. Modifying and checking. 
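# An empty tractogram should survive the same space/origin changes as a populated one.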
sft_1.to_rasmm() sft_2.to_rasmm() sft_1.to_center() sft_2.to_center() assert StatefulTractogram.are_compatible(sft_1, sft_2) assert is_header_compatible(sft_1, sft_2) def test_trk_iterative_saving_loading(): sft = load_tractogram( FILEPATH_DIX["gs.trk"], FILEPATH_DIX["gs.nii"], to_space=Space.RASMM ) with TemporaryDirectory() as tmp_dir: save_tractogram(sft, pjoin(tmp_dir, "gs_iter.trk")) tmp_points_rasmm = np.loadtxt(FILEPATH_DIX["gs_rasmm_space.txt"]) for _ in range(100): sft_iter = load_tractogram( pjoin(tmp_dir, "gs_iter.trk"), FILEPATH_DIX["gs.nii"], to_space=Space.RASMM, ) assert_allclose( tmp_points_rasmm, sft_iter.streamlines.get_data(), atol=1e-3, rtol=1e-6 ) save_tractogram(sft_iter, pjoin(tmp_dir, "gs_iter.trk")) def test_tck_iterative_saving_loading(): sft = load_tractogram( FILEPATH_DIX["gs.tck"], FILEPATH_DIX["gs.nii"], to_space=Space.RASMM ) with TemporaryDirectory() as tmp_dir: save_tractogram(sft, pjoin(tmp_dir, "gs_iter.tck")) tmp_points_rasmm = np.loadtxt(FILEPATH_DIX["gs_rasmm_space.txt"]) for _ in range(100): sft_iter = load_tractogram( pjoin(tmp_dir, "gs_iter.tck"), FILEPATH_DIX["gs.nii"], to_space=Space.RASMM, ) assert_allclose( tmp_points_rasmm, sft_iter.streamlines.get_data(), atol=1e-3, rtol=1e-6 ) save_tractogram(sft_iter, pjoin(tmp_dir, "gs_iter.tck")) @pytest.mark.skipif(is_big_endian, reason="Little Endian architecture required") def test_trx_iterative_saving_loading(): sft = load_tractogram( FILEPATH_DIX["gs.trx"], FILEPATH_DIX["gs.nii"], to_space=Space.RASMM ) with TemporaryDirectory() as tmp_dir: save_tractogram(sft, pjoin(tmp_dir, "gs_iter.trx")) tmp_points_rasmm = np.loadtxt(FILEPATH_DIX["gs_rasmm_space.txt"]) for _ in range(100): sft_iter = load_tractogram( pjoin(tmp_dir, "gs_iter.trx"), FILEPATH_DIX["gs.nii"], to_space=Space.RASMM, ) assert_allclose( tmp_points_rasmm, sft_iter.streamlines.get_data(), atol=1e-3, rtol=1e-6 ) save_tractogram(sft_iter, pjoin(tmp_dir, "gs_iter.trx")) @pytest.mark.skipif(not have_fury, reason="Requires FURY") def test_fib_iterative_saving_loading(): if not have_fury: return sft = load_tractogram( FILEPATH_DIX["gs.fib"], FILEPATH_DIX["gs.nii"], to_space=Space.RASMM ) with TemporaryDirectory() as tmp_dir: save_tractogram(sft, pjoin(tmp_dir, "gs_iter.fib")) tmp_points_rasmm = np.loadtxt(FILEPATH_DIX["gs_rasmm_space.txt"]) for _ in range(100): sft_iter = load_tractogram( pjoin(tmp_dir, "gs_iter.fib"), FILEPATH_DIX["gs.nii"], to_space=Space.RASMM, ) assert_allclose( tmp_points_rasmm, sft_iter.streamlines.get_data(), atol=1e-3, rtol=1e-6 ) save_tractogram(sft_iter, pjoin(tmp_dir, "gs_iter.fib")) def test_dpy_iterative_saving_loading(): sft = load_tractogram( FILEPATH_DIX["gs.dpy"], FILEPATH_DIX["gs.nii"], to_space=Space.RASMM ) with TemporaryDirectory() as tmp_dir: save_tractogram(sft, pjoin(tmp_dir, "gs_iter.dpy")) tmp_points_rasmm = np.loadtxt(FILEPATH_DIX["gs_rasmm_space.txt"]) for _ in range(100): sft_iter = load_tractogram( pjoin(tmp_dir, "gs_iter.dpy"), FILEPATH_DIX["gs.nii"], to_space=Space.RASMM, ) assert_allclose( tmp_points_rasmm, sft_iter.streamlines.get_data(), atol=1e-3, rtol=1e-6 ) save_tractogram(sft_iter, pjoin(tmp_dir, "gs_iter.dpy")) def test_iterative_to_vox_transformation(): sft = load_tractogram( FILEPATH_DIX["gs.trk"], FILEPATH_DIX["gs.nii"], to_space=Space.RASMM ) tmp_points_rasmm = np.loadtxt(FILEPATH_DIX["gs_rasmm_space.txt"]) for _ in range(1000): sft.to_vox() sft.to_rasmm() assert_allclose( tmp_points_rasmm, sft.streamlines.get_data(), atol=1e-3, rtol=1e-6 ) def test_iterative_to_voxmm_transformation(): 
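"""Round-trip rasmm <-> voxmm 1000 times; the points should not drift."""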
sft = load_tractogram( FILEPATH_DIX["gs.trk"], FILEPATH_DIX["gs.nii"], to_space=Space.RASMM ) tmp_points_rasmm = np.loadtxt(FILEPATH_DIX["gs_rasmm_space.txt"]) for _ in range(1000): sft.to_voxmm() sft.to_rasmm() assert_allclose( tmp_points_rasmm, sft.streamlines.get_data(), atol=1e-3, rtol=1e-6 ) def test_empty_space_change(): sft = StatefulTractogram([], FILEPATH_DIX["gs.nii"], Space.VOX) sft.to_vox() sft.to_voxmm() sft.to_rasmm() assert_array_equal([], sft.streamlines.get_data()) def test_empty_shift_change(): sft = StatefulTractogram([], FILEPATH_DIX["gs.nii"], Space.VOX) sft.to_corner() sft.to_center() assert_array_equal([], sft.streamlines.get_data()) def test_empty_remove_invalid(): sft = StatefulTractogram([], FILEPATH_DIX["gs.nii"], Space.VOX) sft.remove_invalid_streamlines() assert_array_equal([], sft.streamlines.get_data()) def test_shift_corner_from_rasmm(): sft_1 = load_tractogram( FILEPATH_DIX["gs.trk"], FILEPATH_DIX["gs.nii"], to_space=Space.VOX ) sft_1.to_corner() bbox_1 = sft_1.compute_bounding_box() sft_2 = load_tractogram( FILEPATH_DIX["gs.trk"], FILEPATH_DIX["gs.nii"], to_space=Space.RASMM ) sft_2.to_corner() sft_2.to_vox() bbox_2 = sft_2.compute_bounding_box() assert_allclose(bbox_1, bbox_2, atol=1e-3, rtol=1e-6) def test_shift_corner_from_voxmm(): sft_1 = load_tractogram( FILEPATH_DIX["gs.trk"], FILEPATH_DIX["gs.nii"], to_space=Space.VOX ) sft_1.to_corner() bbox_1 = sft_1.compute_bounding_box() sft_2 = load_tractogram( FILEPATH_DIX["gs.trk"], FILEPATH_DIX["gs.nii"], to_space=Space.VOXMM ) sft_2.to_corner() sft_2.to_vox() bbox_2 = sft_2.compute_bounding_box() assert_allclose(bbox_1, bbox_2, atol=1e-3, rtol=1e-6) def test_iterative_shift_corner(): sft = load_tractogram( FILEPATH_DIX["gs.trk"], FILEPATH_DIX["gs.nii"], to_space=Space.RASMM ) tmp_streamlines = sft.get_streamlines_copy() for _ in range(1000): sft._shift_voxel_origin() assert_allclose(sft.get_streamlines_copy(), tmp_streamlines, atol=1e-3, rtol=1e-6) def test_replace_streamlines(): sft = load_tractogram( FILEPATH_DIX["gs.trk"], FILEPATH_DIX["gs.nii"], to_space=Space.RASMM ) tmp_streamlines = sft.get_streamlines_copy()[::-1] try: sft.streamlines = tmp_streamlines assert_(True) except (TypeError, ValueError): assert_(False) def test_subsample_streamlines(): sft = load_tractogram( FILEPATH_DIX["gs.trk"], FILEPATH_DIX["gs.nii"], to_space=Space.RASMM ) tmp_streamlines = sft.get_streamlines_copy()[0:8] try: sft.streamlines = tmp_streamlines assert_(False) except (TypeError, ValueError): assert_(True) def test_reassign_both_data_sep_to_empty(): sft = load_tractogram( FILEPATH_DIX["gs.trk"], FILEPATH_DIX["gs.nii"], to_space=Space.RASMM ) try: sft.data_per_point = {} sft.data_per_streamline = {} except (TypeError, ValueError): assert_(False) assert_(sft.data_per_point == {} and sft.data_per_streamline == {}) def test_reassign_both_data_sep(): sft = load_tractogram( FILEPATH_DIX["gs.trk"], FILEPATH_DIX["gs.nii"], to_space=Space.RASMM ) try: sft.data_per_point = POINTS_DATA sft.data_per_streamline = STREAMLINES_DATA assert_(True) except (TypeError, ValueError): assert_(False) @pytest.mark.parametrize("standard", [Origin.NIFTI, Origin.TRACKVIS]) def test_bounding_bbox_valid(standard): sft = load_tractogram( FILEPATH_DIX["gs.trk"], FILEPATH_DIX["gs.nii"], to_origin=standard, bbox_valid_check=False, ) assert_(sft.is_bbox_in_vox_valid()) @set_random_number_generator(0) def test_random_point_color(rng): sft = load_tractogram(FILEPATH_DIX["gs.tck"], FILEPATH_DIX["gs.nii"]) random_colors = rng.integers(0, 255, (13, 8, 3)) 
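# 13 streamlines x 8 points x 3 channels (RGB), matching the gold-standard tractogram.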
coloring_dict = {"colors": random_colors} try: sft.data_per_point = coloring_dict with TemporaryDirectory() as tmp_dir: save_tractogram(sft, pjoin(tmp_dir, "random_points_color.trk")) assert_(True) except (TypeError, ValueError): assert_(False) @set_random_number_generator(0) def test_random_point_gray(rng): sft = load_tractogram(FILEPATH_DIX["gs.tck"], FILEPATH_DIX["gs.nii"]) random_colors = rng.integers(0, 255, (13, 8, 1)) coloring_dict = { "color_x": random_colors, "color_y": random_colors, "color_z": random_colors, } try: sft.data_per_point = coloring_dict with TemporaryDirectory() as tmp_dir: save_tractogram(sft, pjoin(tmp_dir, "random_points_gray.trk")) assert_(True) except ValueError: assert_(False) @set_random_number_generator(0) def test_random_streamline_color(rng): sft = load_tractogram(FILEPATH_DIX["gs.tck"], FILEPATH_DIX["gs.nii"]) uniform_colors_x = rng.integers(0, 255, (13, 1)) uniform_colors_y = rng.integers(0, 255, (13, 1)) uniform_colors_z = rng.integers(0, 255, (13, 1)) uniform_colors_x = np.expand_dims(np.repeat(uniform_colors_x, 8, axis=1), axis=-1) uniform_colors_y = np.expand_dims(np.repeat(uniform_colors_y, 8, axis=1), axis=-1) uniform_colors_z = np.expand_dims(np.repeat(uniform_colors_z, 8, axis=1), axis=-1) coloring_dict = { "color_x": uniform_colors_x, "color_y": uniform_colors_y, "color_z": uniform_colors_z, } try: sft.data_per_point = coloring_dict with TemporaryDirectory() as tmp_dir: save_tractogram(sft, pjoin(tmp_dir, "random_streamlines_color.trk")) assert_(True) except (TypeError, ValueError): assert_(False) @pytest.mark.parametrize( "value, is_out_of_grid", [(100, True), (-100, True), (0, False)] ) def test_out_of_grid(value, is_out_of_grid): sft = load_tractogram(FILEPATH_DIX["gs.tck"], FILEPATH_DIX["gs.nii"]) sft.to_vox() tmp_streamlines = list(sft.get_streamlines_copy()) tmp_streamlines[0] += value try: sft.streamlines = tmp_streamlines assert_(sft.is_bbox_in_vox_valid() != is_out_of_grid) except (TypeError, ValueError): assert_(False) def test_data_per_point_consistency_addition(): sft = load_tractogram(FILEPATH_DIX["gs.trk"], FILEPATH_DIX["gs.nii"]) sft_first_half = sft[0:7] sft_last_half = sft[7:13] sft_first_half.data_per_point = {} try: _ = sft_first_half + sft_last_half assert_(False) except ValueError: assert_(True) def test_data_per_streamline_consistency_addition(): sft = load_tractogram(FILEPATH_DIX["gs.trk"], FILEPATH_DIX["gs.nii"]) sft_first_half = sft[0:7] sft_last_half = sft[7:13] sft_first_half.data_per_streamline = {} try: _ = sft_first_half + sft_last_half assert_(False) except ValueError: assert_(True) def test_space_consistency_addition(): sft = load_tractogram(FILEPATH_DIX["gs.trk"], FILEPATH_DIX["gs.nii"]) sft_first_half = sft[0:7] sft_last_half = sft[7:13] sft_first_half.to_vox() try: _ = sft_first_half + sft_last_half assert_(False) except ValueError: assert_(True) def test_origin_consistency_addition(): sft = load_tractogram(FILEPATH_DIX["gs.trk"], FILEPATH_DIX["gs.nii"]) sft_first_half = sft[0:7] sft_last_half = sft[7:13] sft_first_half.to_corner() try: _ = sft_first_half + sft_last_half assert_(False) except ValueError: assert_(True) def test_space_attributes_consistency_addition(): sft = load_tractogram(FILEPATH_DIX["gs.trk"], FILEPATH_DIX["gs.nii"]) sft_switch = StatefulTractogram( sft.streamlines, FILEPATH_DIX["gs_3mm.nii"], Space.RASMM ) try: _ = sft + sft_switch assert_(False) except ValueError: assert_(True) def test_equality(): sft_1 = load_tractogram(FILEPATH_DIX["gs.trk"], FILEPATH_DIX["gs.nii"]) sft_2 = 
load_tractogram(FILEPATH_DIX["gs.trk"], FILEPATH_DIX["gs.nii"]) assert_(sft_1 == sft_2, msg="Identical sft should be equal (==)") def test_basic_slicing(): sft = load_tractogram(FILEPATH_DIX["gs.trk"], FILEPATH_DIX["gs.nii"]) first_streamline_sft = sft[0] npt.assert_allclose( first_streamline_sft.streamlines[0][0], [11.149319, 21.579943, 37.600685], rtol=1e-6, err_msg="streamlines were not sliced correctly", ) rgb = np.array( [ first_streamline_sft.data_per_point["color_x"][0][0], first_streamline_sft.data_per_point["color_y"][0][0], first_streamline_sft.data_per_point["color_z"][0][0], ] ) npt.assert_allclose( np.squeeze(rgb), [220.0, 20.0, 60.0], err_msg="data_per_point were not sliced correctly", ) rand_coord = first_streamline_sft.data_per_streamline["random_coord"] npt.assert_allclose( np.squeeze(rand_coord), [7.0, 1.0, 5.0], rtol=1e-3, err_msg="data_per_streamline were not sliced correctly", ) def test_space_side_effect_slicing(): sft = load_tractogram(FILEPATH_DIX["gs.trk"], FILEPATH_DIX["gs.nii"]) first_streamline = deepcopy(sft.streamlines[0]) first_streamline_sft = sft[0] sft.to_vox() npt.assert_allclose( first_streamline_sft.streamlines[0], first_streamline, rtol=1e-6, err_msg="Side effect, modifying a StatefulTractogram " "after slicing should not modify the slice", ) # Testing it both ways sft.to_rasmm() first_streamline_sft.to_vox() npt.assert_allclose( sft.streamlines[0], first_streamline, rtol=1e-6, err_msg="Side effect, modifying a StatefulTractogram " "after slicing should not modify the slice", ) def test_origin_side_effect_slicing(): sft = load_tractogram(FILEPATH_DIX["gs.trk"], FILEPATH_DIX["gs.nii"]) first_streamline = deepcopy(sft.streamlines[0]) first_streamline_sft = sft[0] sft.to_corner() npt.assert_allclose( first_streamline_sft.streamlines[0], first_streamline, rtol=1e-6, err_msg="Side effect, modifying a StatefulTractogram " "after slicing should not modify the slice", ) # Testing it both ways sft.to_center() first_streamline_sft.to_corner() npt.assert_allclose( sft.streamlines[0], first_streamline, rtol=1e-6, err_msg="Side effect, modifying a StatefulTractogram " "after slicing should not modify the slice", ) def test_advanced_slicing(): sft = load_tractogram(FILEPATH_DIX["gs.trk"], FILEPATH_DIX["gs.nii"]) last_streamline_sft = sft[::-1][0] npt.assert_allclose( last_streamline_sft.streamlines[0][0], [14.389803, 27.857153, 39.3602], rtol=1e-6, err_msg="streamlines were not sliced correctly", ) rgb = np.array( [ last_streamline_sft.data_per_point["color_x"][0][0], last_streamline_sft.data_per_point["color_y"][0][0], last_streamline_sft.data_per_point["color_z"][0][0], ] ) npt.assert_allclose( np.squeeze(rgb), [0.0, 255.0, 0.0], err_msg="data_per_point were not sliced correctly", ) rand_coord = last_streamline_sft.data_per_streamline["random_coord"] npt.assert_allclose( np.squeeze(rand_coord), [7.0, 9.0, 8.0], err_msg="data_per_streamline were not sliced correctly", ) def test_basic_addition(): sft = load_tractogram(FILEPATH_DIX["gs.trk"], FILEPATH_DIX["gs.nii"]) sft_first_half = sft[0:7] sft_last_half = sft[7:13] concatenate_sft = sft_first_half + sft_last_half assert_(concatenate_sft == sft, msg="sft were not added correctly") def test_space_side_effect_addition(): sft = load_tractogram(FILEPATH_DIX["gs.trk"], FILEPATH_DIX["gs.nii"]) sft_first_half = sft[0:7] sft_last_half = sft[7:13] concatenate_sft = sft_first_half + sft_last_half sft.to_vox() assert_( concatenate_sft != sft, msg="Side effect, modifying a StatefulTractogram " "after an addition should not 
modify the result", ) # Testing it both ways sft.to_rasmm() concatenate_sft.to_vox() assert_( concatenate_sft != sft, msg="Side effect, modifying a StatefulTractogram " "after an addition should not modify the result", ) def test_origin_side_effect_addition(): sft = load_tractogram(FILEPATH_DIX["gs.trk"], FILEPATH_DIX["gs.nii"]) sft_first_half = sft[0:7] sft_last_half = sft[7:13] concatenate_sft = sft_first_half + sft_last_half sft.to_corner() assert_( concatenate_sft != sft, msg="Side effect, modifying a StatefulTractogram " "after an addition should not modify the result", ) # Testing it both ways sft.to_center() concatenate_sft.to_corner() assert_( concatenate_sft != sft, msg="Side effect, modifying a StatefulTractogram " "after an addition should not modify the result", ) def test_invalid_streamlines(): sft = load_tractogram(FILEPATH_DIX["gs.trk"], FILEPATH_DIX["gs.nii"]) src_strml_count = len(sft) obtained_idx_to_remove, obtained_idx_to_keep = sft.remove_invalid_streamlines() expected_idx_to_keep = list(range(src_strml_count)) assert len(obtained_idx_to_remove) == 0 assert expected_idx_to_keep == obtained_idx_to_keep assert_( len(sft) == src_strml_count, msg="An unshifted gold standard should have " f"{src_strml_count - src_strml_count} invalid streamlines", ) # Change the dimensions so that a few streamlines become invalid sft.dimensions[2] = 5 obtained_idx_to_remove, obtained_idx_to_keep = sft.remove_invalid_streamlines() expected_idx_to_remove = [1, 3, 5, 7, 8, 9, 10, 11] expected_idx_to_keep = [0, 2, 4, 6, 12] expected_len_sft = 5 assert obtained_idx_to_remove == expected_idx_to_remove assert obtained_idx_to_keep == expected_idx_to_keep assert_( len(sft) == expected_len_sft, msg="The shifted gold standard should have " f"{src_strml_count - expected_len_sft} invalid streamlines", ) def test_invalid_streamlines_epsilon(): sft = load_tractogram(FILEPATH_DIX["gs.trk"], FILEPATH_DIX["gs.nii"]) src_strml_count = len(sft) epsilon = 1e-6 obtained_idx_to_remove, obtained_idx_to_keep = sft.remove_invalid_streamlines( epsilon=epsilon ) expected_idx_to_keep = list(range(src_strml_count)) assert len(obtained_idx_to_remove) == 0 assert expected_idx_to_keep == obtained_idx_to_keep assert_( len(sft) == src_strml_count, msg="A small epsilon should not remove any streamlines", ) epsilon = 1.0 obtained_idx_to_remove, obtained_idx_to_keep = sft.remove_invalid_streamlines( epsilon=epsilon ) expected_idx_to_remove = [0, 1, 2, 3, 4, 5, 6, 7] expected_idx_to_keep = [8, 9, 10, 11, 12] expected_len_sft = 5 expected_removed_strml_count = src_strml_count - expected_len_sft assert obtained_idx_to_remove == expected_idx_to_remove assert obtained_idx_to_keep == expected_idx_to_keep assert_( len(sft) == expected_len_sft, msg=f"Too big of an epsilon ({epsilon} mm) should have removed " f"{expected_removed_strml_count} streamlines " f"({expected_removed_strml_count} corners)", ) def test_create_from_sft(): sft_1 = load_tractogram(FILEPATH_DIX["gs.tck"], FILEPATH_DIX["gs.nii"]) sft_2 = StatefulTractogram.from_sft( sft_1.streamlines, sft_1, data_per_point=sft_1.data_per_point, data_per_streamline=sft_1.data_per_streamline, ) if not ( np.array_equal(sft_1.streamlines, sft_2.streamlines) and sft_1.space_attributes == sft_2.space_attributes and sft_1.space == sft_2.space and sft_1.origin == sft_2.origin and sft_1.data_per_point == sft_2.data_per_point and sft_1.data_per_streamline == sft_2.data_per_streamline ): assert_( True, msg="Streamlines, space attributes, space, origin, " "data_per_point and data_per_streamline 
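# Hedged sketch of the remove_invalid_streamlines() behaviour checked by the
# tests above: it edits the tractogram in place and returns the indices that
# were removed and kept. The paths are hypothetical placeholders; epsilon is
# the tolerance (in mm) around the grid boundary.
from dipy.io.streamline import load_tractogram

sft = load_tractogram("bundle.trk", "ref.nii")
removed, kept = sft.remove_invalid_streamlines(epsilon=1e-3)
print(f"kept {len(kept)} streamlines, dropped {len(removed)} out-of-grid ones")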
should " "be identical", ) # Side effect testing sft_1.streamlines = np.arange(6000).reshape((100, 20, 3)) if np.array_equal(sft_1.streamlines, sft_2.streamlines): assert_( True, msg="Side effect, modifying the original " "StatefulTractogram after creating a new one " "should not modify the new one", ) def test_init_dtype_dict_attributes(): sft = load_tractogram(FILEPATH_DIX["gs.trk"], FILEPATH_DIX["gs.nii"]) dtype_dict = { "positions": np.float32, "offsets": np.int_, "dpp": {"color_x": np.float32, "color_y": np.float32, "color_z": np.float32}, "dps": {"random_coord": np.float32}, } try: recursive_compare(dtype_dict, sft.dtype_dict) except ValueError as e: print(e) assert_(False, msg=e) def test_set_dtype_dict_attributes(): sft = load_tractogram(FILEPATH_DIX["gs.trk"], FILEPATH_DIX["gs.nii"]) dtype_dict = { "positions": np.float16, "offsets": np.int32, "dpp": {"color_x": np.uint8, "color_y": np.uint8, "color_z": np.uint8}, "dps": {"random_coord": np.float64}, } sft.dtype_dict = dtype_dict try: recursive_compare(dtype_dict, sft.dtype_dict) except ValueError: assert_(False, msg="dtype_dict should be identical after set.") def test_set_partial_dtype_dict_attributes(): sft = load_tractogram(FILEPATH_DIX["gs.trk"], FILEPATH_DIX["gs.nii"]) dtype_dict = {"positions": np.float16, "offsets": np.int32} dpp_dtype_dict = { "dpp": {"color_x": np.float32, "color_y": np.float32, "color_z": np.float32} } dps_dtype_dict = {"dps": {"random_coord": np.float32}} # Set only positions and offsets sft.dtype_dict = dtype_dict try: recursive_compare(dtype_dict["positions"], sft.dtype_dict["positions"]) recursive_compare(dtype_dict["offsets"], sft.dtype_dict["offsets"]) recursive_compare(dpp_dtype_dict["dpp"], sft.dtype_dict["dpp"]) recursive_compare(dps_dtype_dict["dps"], sft.dtype_dict["dps"]) except ValueError: assert_( False, msg="Partial use of dtype_dict should apply only to the " "relevant portions.", ) def test_non_existing_dtype_dict_attributes(): sft = load_tractogram(FILEPATH_DIX["gs.trk"], FILEPATH_DIX["gs.nii"]) dtype_dict = { "dpp": { "color_a": np.uint8, # Fake "color_b": np.uint8, # Fake "color_c": np.uint8, }, # Fake "dps": {"random_coordinates": np.float64}, # Fake "dps2": {"random_coord": np.float64}, } # Fake sft.dtype_dict = dtype_dict try: recursive_compare(sft.dtype_dict, dtype_dict) assert_(False, msg="Fake entries in dtype_dict should not work.") except ValueError: assert_(True) def test_from_sft_dtype_dict_attributes(): sft = load_tractogram(FILEPATH_DIX["gs.trk"], FILEPATH_DIX["gs.nii"]) dtype_dict = { "positions": np.float16, "offsets": np.int32, "dpp": {"color_x": np.uint8, "color_y": np.uint8, "color_z": np.uint8}, "dps": {"random_coord": np.float64}, } sft.dtype_dict = dtype_dict new_sft = StatefulTractogram.from_sft( sft.streamlines, sft, data_per_point=sft.data_per_point, data_per_streamline=sft.data_per_streamline, ) try: recursive_compare(new_sft.dtype_dict, dtype_dict) recursive_compare(sft.dtype_dict, dtype_dict) except ValueError: assert_(False, msg="from_sft() should not modify the dtype_dict.") def test_slicing_dtype_dict_attributes(): sft = load_tractogram(FILEPATH_DIX["gs.trk"], FILEPATH_DIX["gs.nii"]) dtype_dict = { "positions": np.float16, "offsets": np.int32, "dpp": {"color_x": np.uint8, "color_y": np.uint8, "color_z": np.uint8}, "dps": {"random_coord": np.float64}, } sft.dtype_dict = dtype_dict new_sft = sft[::2] try: recursive_compare(new_sft.dtype_dict, dtype_dict) recursive_compare(sft.dtype_dict, dtype_dict) except ValueError: assert_(False, msg="Slicing should not 
modify the dtype_dict.") def recursive_compare(d1, d2, level="root"): if isinstance(d1, dict) and isinstance(d2, dict): if d1.keys() != d2.keys(): s1 = set(d1.keys()) s2 = set(d2.keys()) common_keys = s1 & s2 if s1 - s2: raise ValueError(f"Keys {s1 - s2} in d1 but not in d2") else: common_keys = set(d1.keys()) for k in common_keys: recursive_compare(d1[k], d2[k], level=f"{level}.{k}") elif isinstance(d1, list) and isinstance(d2, list): if len(d1) != len(d2): raise ValueError(f"Lists do not have the same length at level {level}") common_len = min(len(d1), len(d2)) for i in range(common_len): recursive_compare(d1[i], d2[i], level=f"{level}[{i}]") else: if np.dtype(d1).itemsize != np.dtype(d2).itemsize: raise ValueError(f"Values {d1}, {d2} do not match at level {level}") dipy-1.11.0/dipy/io/tests/test_streamline.py000066400000000000000000000265721476546756600210770ustar00rootroot00000000000000import os from tempfile import TemporaryDirectory from urllib.error import HTTPError, URLError import numpy as np import numpy.testing as npt import pytest from dipy.data import get_fnames from dipy.io.stateful_tractogram import Space, StatefulTractogram from dipy.io.streamline import load_tractogram, load_trk, save_tractogram, save_trk from dipy.io.utils import create_nifti_header from dipy.io.vtk import load_vtk_streamlines, save_vtk_streamlines from dipy.tracking.streamline import Streamlines from dipy.utils.optpkg import optional_package fury, have_fury, setup_module = optional_package("fury", min_version="0.10.0") FILEPATH_DIX, STREAMLINE, STREAMLINES = None, None, None def setup_module(): global FILEPATH_DIX, STREAMLINE, STREAMLINES try: FILEPATH_DIX, _, _ = get_fnames(name="gold_standard_tracks") except (HTTPError, URLError) as e: FILEPATH_DIX, STREAMLINE, STREAMLINES = None, None, None error_msg = f'"Tests Data failed to download." 
Reason: {e}' pytest.skip(error_msg, allow_module_level=True) return STREAMLINE = np.array( [ [82.20181274, 91.36505891, 43.15737152], [82.38442231, 91.79336548, 43.87036514], [82.48710632, 92.27861023, 44.56298065], [82.53310394, 92.78545381, 45.24635315], [82.53793335, 93.26902008, 45.94785309], [82.48797607, 93.75003815, 46.64939880], [82.35533142, 94.25181581, 47.32533264], [82.15484619, 94.76634216, 47.97451019], [81.90982819, 95.28792572, 48.60244371], [81.63336945, 95.78153229, 49.23971176], [81.35479736, 96.24868011, 49.89558792], [81.08713531, 96.69807434, 50.56812668], [80.81504822, 97.14285278, 51.24193192], [80.52591705, 97.56719971, 51.92168427], [80.26599884, 97.98269653, 52.61848068], [80.04635621, 98.38131714, 53.33855821], [79.84691621, 98.77052307, 54.06955338], [79.57667542, 99.13599396, 54.78985596], [79.23351288, 99.43207551, 55.51065063], [78.84815979, 99.64141846, 56.24016571], [78.47383881, 99.77347565, 56.99299241], [78.12837219, 99.81330872, 57.76969528], [77.80438995, 99.85082245, 58.55574799], [77.49439240, 99.88065338, 59.34777069], [77.21414185, 99.85343933, 60.15090561], [76.96416473, 99.82772827, 60.96406937], [76.74712372, 99.80519104, 61.78676605], [76.52263641, 99.79122162, 62.60765076], [76.03757477, 100.08692169, 63.24152374], [75.44867706, 100.35265351, 63.79513168], [74.78033447, 100.57255554, 64.27278901], [74.11605835, 100.77330781, 64.76428986], [73.51222992, 100.98779297, 65.32373047], [72.97387695, 101.23387146, 65.93502045], [72.47355652, 101.49151611, 66.57343292], [71.99834442, 101.72480774, 67.23979950], [71.56909181, 101.98665619, 67.92664337], [71.18083191, 102.29483795, 68.61888123], [70.81879425, 102.63343048, 69.31127167], [70.47422791, 102.98672485, 70.00532532], [70.10092926, 103.28502655, 70.70999908], [69.69512177, 103.51667023, 71.42147064], [69.27423096, 103.71351624, 72.13452911], [68.91260529, 103.81676483, 72.89796448], [68.60788727, 103.81982422, 73.69258118], [68.34162903, 103.76619720, 74.49915314], [68.08542633, 103.70635223, 75.30856323], [67.83590698, 103.60187531, 76.11553955], [67.56822968, 103.44821930, 76.90870667], [67.28399658, 103.25878906, 77.68825531], [67.00117493, 103.03740692, 78.45989227], [66.72718048, 102.80329895, 79.23099518], [66.46197511, 102.54130554, 79.99622345], [66.20803833, 102.22305298, 80.74387360], [65.96872711, 101.88980865, 81.48987579], [65.72864532, 101.59316254, 82.25085449], [65.47808075, 101.33383942, 83.02194214], [65.21841431, 101.11295319, 83.80186462], [64.95678711, 100.94080353, 84.59326935], [64.71759033, 100.82022095, 85.40114594], [64.48053741, 100.73490143, 86.21411896], [64.24304199, 100.65074158, 87.02709198], [64.01773834, 100.55318451, 87.84204865], [63.83801651, 100.41996765, 88.66333008], [63.70982361, 100.25119019, 89.48779297], [63.60707855, 100.06730652, 90.31262207], [63.46164322, 99.91001892, 91.13648224], [63.26287842, 99.78648376, 91.95485687], [63.03713226, 99.68377686, 92.76905823], [62.81192398, 99.56619263, 93.58140564], [62.57145309, 99.42708588, 94.38592529], [62.32259369, 99.25592804, 95.18167114], [62.07497787, 99.05770111, 95.97154236], [61.82253647, 98.83877563, 96.75438690], [61.59536743, 98.59293365, 97.53706360], [61.46530151, 98.30503845, 98.32772827], [61.39904785, 97.97928619, 99.11172485], [61.33279419, 97.65353394, 99.89572906], [61.26067352, 97.30914307, 100.67123413], [61.19459534, 96.96743011, 101.44847107], [61.19580461, 96.63417053, 102.23215485], [61.26572037, 96.29887391, 103.01185608], [61.39840698, 95.96297455, 103.78307343], [61.57207871, 
95.64262391, 104.55268097], [61.78163528, 95.35540771, 105.32629395], [62.06700134, 95.09746552, 106.08564758], [62.39427185, 94.85724641, 106.83369446], [62.74076462, 94.62278748, 107.57482147], [63.11461639, 94.40107727, 108.30641937], [63.53397751, 94.20418549, 109.02002716], [64.00019836, 94.03809357, 109.71183777], [64.43580627, 93.87523651, 110.42416382], [64.84857941, 93.69993591, 111.14715576], [65.26740265, 93.51858521, 111.86515808], [65.69511414, 93.36718751, 112.58474731], [66.10470581, 93.22719574, 113.31711578], [66.45891571, 93.06028748, 114.07256317], [66.78582001, 92.90560913, 114.84281921], [67.11138916, 92.79004669, 115.62040711], [67.44729614, 92.75711823, 116.40135193], [67.75688171, 92.98265076, 117.16111755], [68.02041626, 93.28012848, 117.91371155], [68.25725555, 93.53466797, 118.69052124], [68.46047974, 93.63263702, 119.51107788], [68.62039948, 93.62007141, 120.34690094], [68.76782227, 93.56475067, 121.18331909], [68.90222168, 93.46326447, 122.01765442], [68.99872589, 93.30039978, 122.84759521], [69.04119873, 93.05428314, 123.66156769], [69.05086517, 92.74394989, 124.45450592], [69.02742004, 92.40427399, 125.23509979], [68.95466614, 92.09059143, 126.02339935], [68.84975433, 91.79674531, 126.81564331], [68.72673798, 91.53726196, 127.61715698], [68.60685731, 91.30300141, 128.42681885], [68.50636292, 91.12481689, 129.25317383], [68.39311218, 91.01572418, 130.08976746], [68.25946808, 90.94654083, 130.92756653], ], dtype=np.float32, ) STREAMLINES = Streamlines( [ STREAMLINE[[0, 10]], STREAMLINE, STREAMLINE[::2], STREAMLINE[::3], STREAMLINE[::5], STREAMLINE[::6], ] ) def teardown_module(): global FILEPATH_DIX, POINTS_DATA, STREAMLINES_DATA, STREA FILEPATH_DIX, POINTS_DATA, STREAMLINES_DATA = None, None, None def io_tractogram(extension): with TemporaryDirectory() as tmp_dir: fname = f"test.{extension}" fpath = os.path.join(tmp_dir, fname) in_affine = np.eye(4) in_dimensions = np.array([50, 50, 50]) in_voxel_sizes = np.array([2, 1.5, 1.5]) in_affine = np.dot(in_affine, np.diag(np.r_[in_voxel_sizes, 1])) nii_header = create_nifti_header(in_affine, in_dimensions, in_voxel_sizes) sft = StatefulTractogram(STREAMLINES, nii_header, space=Space.RASMM) save_tractogram(sft, fpath, bbox_valid_check=False) if extension in ["trk", "trx"]: reference = "same" else: reference = nii_header sft = load_tractogram(fpath, reference, bbox_valid_check=False) affine, dimensions, voxel_sizes, _ = sft.space_attributes npt.assert_array_equal(in_affine, affine) npt.assert_array_equal(in_voxel_sizes, voxel_sizes) npt.assert_array_equal(in_dimensions, dimensions) npt.assert_equal(len(sft), len(STREAMLINES)) npt.assert_array_almost_equal(sft.streamlines[1], STREAMLINE, decimal=4) def test_io_trk(): io_tractogram("trk") def test_io_tck(): io_tractogram("tck") def test_io_trx(): io_tractogram("trx") @pytest.mark.skipif(not have_fury, reason="Requires FURY") def test_io_vtk(): io_tractogram("vtk") @pytest.mark.skipif(not have_fury, reason="Requires FURY") def test_io_vtp(): io_tractogram("vtp") def test_io_dpy(): io_tractogram("dpy") @pytest.mark.skipif(not have_fury, reason="Requires FURY") def test_low_io_vtk(): with TemporaryDirectory() as tmp_dir: fname = os.path.join(tmp_dir, "test.fib") # Test save save_vtk_streamlines(STREAMLINES, fname, binary=True) tracks = load_vtk_streamlines(fname) npt.assert_equal(len(tracks), len(STREAMLINES)) npt.assert_array_almost_equal(tracks[1], STREAMLINE, decimal=4) def trk_loader(filename): try: with TemporaryDirectory() as tmp_dir: load_trk(os.path.join(tmp_dir, 
filename), FILEPATH_DIX["gs.nii"]) return True except ValueError: return False def trk_saver(filename): sft = load_tractogram(FILEPATH_DIX["gs.trk"], FILEPATH_DIX["gs.nii"]) try: with TemporaryDirectory() as tmp_dir: save_trk(sft, os.path.join(tmp_dir, filename)) return True except ValueError: return False def test_io_trk_load(): npt.assert_( trk_loader(FILEPATH_DIX["gs.trk"]), msg="trk_loader should be able to load a trk", ) npt.assert_( not trk_loader("fake_file.TRK"), msg="trk_loader should not be able to load a TRK", ) npt.assert_( not trk_loader(FILEPATH_DIX["gs.tck"]), msg="trk_loader should not be able to load a tck", ) npt.assert_( not trk_loader(FILEPATH_DIX["gs.fib"]), msg="trk_loader should not be able to load a fib", ) npt.assert_( not trk_loader(FILEPATH_DIX["gs.dpy"]), msg="trk_loader should not be able to load a dpy", ) def test_io_trk_save(): npt.assert_( trk_saver(FILEPATH_DIX["gs.trk"]), msg="trk_saver should be able to save a trk" ) npt.assert_( not trk_saver("fake_file.TRK"), msg="trk_saver should not be able to save a TRK" ) npt.assert_( not trk_saver(FILEPATH_DIX["gs.tck"]), msg="trk_saver should not be able to save a tck", ) npt.assert_( not trk_saver(FILEPATH_DIX["gs.fib"]), msg="trk_saver should not be able to save a fib", ) npt.assert_( not trk_saver(FILEPATH_DIX["gs.dpy"]), msg="trk_saver should not be able to save a dpy", ) dipy-1.11.0/dipy/io/tests/test_utils.py000066400000000000000000000175221476546756600200670ustar00rootroot00000000000000import tempfile from urllib.error import HTTPError, URLError import nibabel as nib import numpy as np from numpy.testing import assert_, assert_allclose, assert_array_equal import pytest import trx.trx_file_memmap as tmm from dipy.data import get_fnames from dipy.io.streamline import load_tractogram from dipy.io.utils import ( create_nifti_header, decfa, decfa_to_float, get_reference_info, is_reference_info_valid, read_img_arr_or_path, ) from dipy.testing.decorators import set_random_number_generator FILEPATH_DIX = None def setup_module(): global FILEPATH_DIX try: FILEPATH_DIX, _, _ = get_fnames(name="gold_standard_tracks") except (HTTPError, URLError) as e: FILEPATH_DIX = None error_msg = f'"Tests Data failed to download." 
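# Hedged round-trip sketch of the save/load API that the extension tests
# above rely on: save_tractogram() selects the writer from the file
# extension, and load_tractogram() needs a spatial reference ("same" is only
# valid for formats such as .trk/.trx that embed one). Paths are
# hypothetical placeholders.
import os
from tempfile import TemporaryDirectory

from dipy.io.streamline import load_tractogram, save_tractogram

sft = load_tractogram("bundle.trk", "ref.nii")
with TemporaryDirectory() as tmp_dir:
    out = os.path.join(tmp_dir, "copy.tck")  # .tck stores no spatial header,
    save_tractogram(sft, out)
    copy = load_tractogram(out, "ref.nii")   # so a reference is required here
    assert len(copy) == len(sft)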
Reason: {e}' pytest.skip(error_msg, allow_module_level=True) return def teardown_module(): global FILEPATH_DIX FILEPATH_DIX = (None,) def test_decfa(): data_orig = np.zeros((4, 4, 4, 3)) data_orig[0, 0, 0] = np.array([1, 0, 0]) img_orig = nib.Nifti1Image(data_orig, np.eye(4)) img_new = decfa(img_orig) data_new = np.asanyarray(img_new.dataobj) assert data_new[0, 0, 0] == np.array( (1, 0, 0), dtype=np.dtype([("R", "uint8"), ("G", "uint8"), ("B", "uint8")]) ) assert data_new.dtype == np.dtype([("R", "uint8"), ("G", "uint8"), ("B", "uint8")]) round_trip = decfa_to_float(img_new) data_rt = np.asanyarray(round_trip.dataobj) assert np.all(data_rt == data_orig) data_orig = np.zeros((4, 4, 4, 3)) data_orig[0, 0, 0] = np.array([0.1, 0, 0]) img_orig = nib.Nifti1Image(data_orig, np.eye(4)) img_new = decfa(img_orig, scale=True) data_new = np.asanyarray(img_new.dataobj) assert data_new[0, 0, 0] == np.array( (25, 0, 0), dtype=np.dtype([("R", "uint8"), ("G", "uint8"), ("B", "uint8")]) ) assert data_new.dtype == np.dtype([("R", "uint8"), ("G", "uint8"), ("B", "uint8")]) round_trip = decfa_to_float(img_new) data_rt = np.asanyarray(round_trip.dataobj) assert data_rt.shape == (4, 4, 4, 3) assert np.all(data_rt[0, 0, 0] == np.array([25, 0, 0])) def is_affine_valid(affine): return is_reference_info_valid(affine, [1, 1, 1], [1.0, 1.0, 1.0], "RAS") def is_dimensions_valid(dimensions): return is_reference_info_valid(np.eye(4), dimensions, [1.0, 1.0, 1.0], "RAS") def is_voxel_sizes_valid(voxel_sizes): return is_reference_info_valid(np.eye(4), [1, 1, 1], voxel_sizes, "RAS") def is_voxel_order_valid(voxel_order): return is_reference_info_valid(np.eye(4), [1, 1, 1], [1.0, 1.0, 1.0], voxel_order) def test_reference_info_validity(): assert_(not is_affine_valid(np.eye(3)), msg="3x3 affine is invalid") assert_(not is_affine_valid(np.zeros((4, 4))), msg="All zeroes affine is invalid") assert_(is_affine_valid(np.eye(4)), msg="Identity should be valid") assert_(not is_dimensions_valid([0, 0]), msg="Dimensions of the wrong length") assert_(not is_dimensions_valid([1, 1.0, 1]), msg="Dimensions cannot be float") assert_(not is_dimensions_valid([1, -1, 1]), msg="Dimensions cannot be negative") assert_(is_dimensions_valid([1, 1, 1]), msg="Dimensions of [1,1,1] should be valid") assert_(not is_voxel_sizes_valid([0, 0]), msg="Voxel sizes of the wrong length") assert_(not is_voxel_sizes_valid([1, -1, 1]), msg="Voxel sizes cannot be negative") assert_( is_voxel_sizes_valid([1.0, 1.0, 1.0]), msg="Voxel sizes of [1.0,1.0,1.0] should be valid", ) assert_(not is_voxel_order_valid("RA"), msg="Voxel order of the wrong length") assert_( not is_voxel_order_valid(["RAS"]), msg="List of string is not a valid voxel order", ) assert_( not is_voxel_order_valid(["R", "A", "Z"]), msg="Invalid value for voxel order (Z)", ) assert_(not is_voxel_order_valid("RAZ"), msg="Invalid value for voxel order (Z)") assert_(is_voxel_order_valid("RAS"), msg="RAS should be a valid voxel order") assert_( is_voxel_order_valid(["R", "A", "S"]), msg="RAS should be a valid voxel order" ) def reference_info_zero_affine(): header = create_nifti_header(np.zeros((4, 4)), [10, 10, 10], [1, 1, 1]) try: get_reference_info(header) return True except ValueError: return False def test_reference_trk_file_info_identical(): tuple_1 = get_reference_info(FILEPATH_DIX["gs.trk"]) tuple_2 = get_reference_info(FILEPATH_DIX["gs.nii"]) affine_1, dimensions_1, voxel_sizes_1, voxel_order_1 = tuple_1 affine_2, dimensions_2, voxel_sizes_2, voxel_order_2 = tuple_2 assert_allclose(affine_1, 
affine_2) assert_array_equal(dimensions_1, dimensions_2) assert_allclose(voxel_sizes_1, voxel_sizes_2) assert voxel_order_1 == voxel_order_2 def test_reference_trx_file_info_identical(): tuple_1 = get_reference_info(FILEPATH_DIX["gs.trx"]) tuple_2 = get_reference_info(FILEPATH_DIX["gs.nii"]) affine_1, dimensions_1, voxel_sizes_1, voxel_order_1 = tuple_1 affine_2, dimensions_2, voxel_sizes_2, voxel_order_2 = tuple_2 assert_allclose(affine_1, affine_2) assert_array_equal(dimensions_1, dimensions_2) assert_allclose(voxel_sizes_1, voxel_sizes_2) assert voxel_order_1 == voxel_order_2 def test_reference_obj_info_identical(): sft = load_tractogram(FILEPATH_DIX["gs.trk"], "same") trx = tmm.load(FILEPATH_DIX["gs.trx"]) img = nib.load(FILEPATH_DIX["gs.nii"]) tuple_1 = get_reference_info(sft) tuple_2 = get_reference_info(trx) tuple_3 = get_reference_info(img) affine_1, dimensions_1, voxel_sizes_1, voxel_order_1 = tuple_1 affine_2, dimensions_2, voxel_sizes_2, voxel_order_2 = tuple_2 affine_3, dimensions_3, voxel_sizes_3, voxel_order_3 = tuple_3 assert_allclose(affine_1, affine_2) assert_array_equal(dimensions_1, dimensions_2) assert_allclose(voxel_sizes_1, voxel_sizes_2) assert voxel_order_1 == voxel_order_2 assert_allclose(affine_1, affine_3) assert_array_equal(dimensions_1, dimensions_3) assert_allclose(voxel_sizes_1, voxel_sizes_3) assert voxel_order_1 == voxel_order_3 def test_reference_header_info_identical(): trk = nib.streamlines.load(FILEPATH_DIX["gs.trk"]) trx = tmm.load(FILEPATH_DIX["gs.trx"]) img = nib.load(FILEPATH_DIX["gs.nii"]) tuple_1 = get_reference_info(trk.header) tuple_2 = get_reference_info(trx.header) tuple_3 = get_reference_info(img.header) affine_1, dimensions_1, voxel_sizes_1, voxel_order_1 = tuple_1 affine_2, dimensions_2, voxel_sizes_2, voxel_order_2 = tuple_2 affine_3, dimensions_3, voxel_sizes_3, voxel_order_3 = tuple_3 assert_allclose(affine_1, affine_2) assert_array_equal(dimensions_1, dimensions_2) assert_allclose(voxel_sizes_1, voxel_sizes_2) assert voxel_order_1 == voxel_order_2 assert_allclose(affine_1, affine_3) assert_array_equal(dimensions_1, dimensions_3) assert_allclose(voxel_sizes_1, voxel_sizes_3) assert voxel_order_1 == voxel_order_3 def test_all_zeros_affine(): assert_( not reference_info_zero_affine(), msg="An all zeros affine should not be valid" ) @set_random_number_generator() def test_read_img_arr_or_path(rng): data = rng.random((4, 4, 4, 3)) aff = np.eye(4) aff[:3, :] = rng.standard_normal((3, 4)) img = nib.Nifti1Image(data, aff) path = tempfile.NamedTemporaryFile().name + ".nii.gz" nib.save(img, path) for this in [data, img, path]: dd, aa = read_img_arr_or_path(this, affine=aff) assert np.allclose(dd, data) assert np.allclose(aa, aff) # Tests that if an array is provided, but no affine, an error is raised: with pytest.raises(ValueError): read_img_arr_or_path(data) # Tests that the affine is recovered correctly from path: dd, aa = read_img_arr_or_path(path) assert np.allclose(dd, data) assert np.allclose(aa, aff) dipy-1.11.0/dipy/io/utils.py000066400000000000000000000351271476546756600156670ustar00rootroot00000000000000"""Utility functions for file formats""" import logging import numbers import os import nibabel as nib from nibabel import Nifti1Image from nibabel.streamlines import detect_format import numpy as np from trx import trx_file_memmap import dipy from dipy.testing.decorators import warning_for_keywords from dipy.utils.optpkg import optional_package pd, have_pd, _ = optional_package("pandas") if have_pd: import pandas as pd def 
nifti1_symmat(image_data, *args, **kwargs): """Returns a Nifti1Image with a symmetric matrix intent Parameters ---------- image_data : array-like should have lower triangular elements of a symmetric matrix along the last dimension *args Passed to Nifti1Image **kwargs Passed to Nifti1Image Returns ------- image : Nifti1Image 5d, extra dimensions added before the last. Has symmetric matrix intent code """ image_data = make5d(image_data) last_dim = image_data.shape[-1] n = (np.sqrt(1 + 8 * last_dim) - 1) / 2 if (n % 1) != 0: raise ValueError("input_data does not seem to have matrix elements") image = Nifti1Image(image_data, *args, **kwargs) hdr = image.header hdr.set_intent("symmetric matrix", (n,)) return image def make5d(data): """reshapes the input to have 5 dimensions, adds extra dimensions just before the last dimension """ data = np.asarray(data) if data.ndim > 5: raise ValueError("input is already more than 5d") shape = data.shape shape = shape[:-1] + (1,) * (5 - len(shape)) + shape[-1:] return data.reshape(shape) @warning_for_keywords() def decfa(img_orig, *, scale=False): """ Create a nifti-compliant directional-encoded color FA image. Parameters ---------- img_orig : Nifti1Image class instance. Contains encoding of the DEC FA image with a 4D volume of data, where the elements on the last dimension represent R, G and B components. scale: bool. Whether to scale the incoming data from the 0-1 to the 0-255 range expected in the output. Returns ------- img : Nifti1Image class instance with dtype set to store tuples of uint8 in (R, G, B) order. Notes ----- For a description of this format, see: https://nifti.nimh.nih.gov/nifti-1/documentation/nifti1fields/nifti1fields_pages/datatype.html """ dest_dtype = np.dtype([("R", "uint8"), ("G", "uint8"), ("B", "uint8")]) out_data = np.zeros(img_orig.shape[:3], dtype=dest_dtype) data_orig = np.asanyarray(img_orig.dataobj) if scale: data_orig = (data_orig * 255).astype("uint8") for ii in np.ndindex(img_orig.shape[:3]): val = data_orig[ii] out_data[ii] = (val[0], val[1], val[2]) new_hdr = img_orig.header new_hdr["dim"][4] = 1 new_hdr.set_intent(1001, name="Color FA") new_hdr.set_data_dtype(dest_dtype) return Nifti1Image(out_data, affine=img_orig.affine, header=new_hdr) def decfa_to_float(img_orig): """ Convert a nifti-compliant directional-encoded color FA image into a nifti image with RGB encoded in floating point resolution. Parameters ---------- img_orig : Nifti1Image class instance. Contains encoding of the DEC FA image with a 3D volume of data, where each element is a (R, G, B) tuple in uint8. Returns ------- img : Nifti1Image class instance with float dtype. Notes ----- For a description of this format, see: https://nifti.nimh.nih.gov/nifti-1/documentation/nifti1fields/nifti1fields_pages/datatype.html """ data_orig = np.asanyarray(img_orig.dataobj) out_data = np.zeros(data_orig.shape + (3,), dtype=np.uint8) for ii in np.ndindex(img_orig.shape[:3]): val = data_orig[ii] out_data[ii] = np.array([val[0], val[1], val[2]]) new_hdr = img_orig.header new_hdr["dim"][4] = 3 # Remove the original intent new_hdr.set_intent(0) new_hdr.set_data_dtype(float) return Nifti1Image(out_data, affine=img_orig.affine, header=new_hdr) def is_reference_info_valid(affine, dimensions, voxel_sizes, voxel_order): """Validate basic data type and value of spatial attribute. Does not ensure that voxel_sizes and voxel_order are self-coherent with the affine. 
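# Hedged sketch of the decfa()/decfa_to_float() round trip defined above:
# pack a float RGB FA volume into the NIfTI (R, G, B) uint8 struct dtype and
# unpack it again. The array values are made up for illustration only.
import nibabel as nib
import numpy as np

from dipy.io.utils import decfa, decfa_to_float

data = np.zeros((4, 4, 4, 3))
data[0, 0, 0] = [0.8, 0.1, 0.1]   # mostly-red voxel, values in the 0-1 range
img = nib.Nifti1Image(data, np.eye(4))
rgb_img = decfa(img, scale=True)  # scale=True maps the 0-1 range to 0-255
back = decfa_to_float(rgb_img)    # uint8 (R, G, B) triplets back to an array
print(np.asanyarray(back.dataobj)[0, 0, 0])  # -> [204 25 25], 0-255 scale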
Only verifies the following: - affine is of the right type (float) and dimension (4,4) - affine contain values in the rotation part - dimensions is of right type (int) and length (3) - voxel_sizes is of right type (float) and length (3) - voxel_order is of right type (str) and length (3) The listed parameters are what is expected, provide something else and this function should fail (cover common mistakes). Parameters ---------- affine: ndarray (4,4) Transformation of VOX to RASMM dimensions: ndarray (3,), int16 Volume shape for each axis voxel_sizes: ndarray (3,), float32 Size of voxel for each axis voxel_order: string Typically 'RAS' or 'LPS' Returns ------- output : bool Does the input represent a valid 'state' of spatial attribute """ all_valid = True only_3d_warning = False if not affine.shape == (4, 4): all_valid = False logging.warning("Transformation matrix must be 4x4") if not affine[0:3, 0:3].any(): all_valid = False logging.warning("Rotation matrix cannot be all zeros") if not len(dimensions) >= 3: all_valid = False only_3d_warning = True for i in dimensions: if not isinstance(i, numbers.Integral): all_valid = False logging.warning("Dimensions must be int.") if i <= 0: all_valid = False logging.warning("Dimensions must be above 0.") if not len(voxel_sizes) >= 3: all_valid = False only_3d_warning = True for i in voxel_sizes: if not isinstance(i, numbers.Number): all_valid = False logging.warning("Voxel size must be int/float.") if i <= 0: all_valid = False logging.warning("Voxel size must be above 0.") if not len(voxel_order) >= 3: all_valid = False only_3d_warning = True for i in voxel_order: if not isinstance(i, str): all_valid = False logging.warning("Voxel order must be string/char.") if i not in ["R", "A", "S", "L", "P", "I"]: all_valid = False logging.warning("Voxel order does not follow convention.") if only_3d_warning: logging.warning("Only 3D (and above) reference are considered valid.") return all_valid def split_name_with_gz(filename): """ Returns the clean basename and extension of a file. Means that this correctly manages the ".nii.gz" extensions. Parameters ---------- filename: str The filename to clean Returns ------- base, ext : tuple(str, str) Clean basename and the full extension """ base, ext = os.path.splitext(filename) if ext.lower() == ".gz": # Test if we have a .nii additional extension temp_base, add_ext = os.path.splitext(base) if add_ext.lower() == ".nii" or add_ext.lower() == ".trk": ext = add_ext + ext base = temp_base return base, ext def get_reference_info(reference): """Will compare the spatial attribute of 2 references. Parameters ---------- reference : Nifti or Trk filename, Nifti1Image or TrkFile, Nifti1Header or trk.header (dict), TrxFile or trx.header (dict) Reference that provides the spatial attribute. 
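# Hedged usage sketch for get_reference_info(): the same spatial tuple can be
# extracted from a NIfTI path, a TRK/TRX path, or an already-loaded object.
# "ref.nii" is a hypothetical placeholder path.
from dipy.io.utils import get_reference_info

affine, dimensions, voxel_sizes, voxel_order = get_reference_info("ref.nii")
print(voxel_order)  # e.g. 'RAS'
print(dimensions)   # e.g. [50 50 50]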
Returns ------- output : tuple - affine ndarray (4,4), np.float32, transformation of VOX to RASMM - dimensions ndarray (3,), int16, volume shape for each axis - voxel_sizes ndarray (3,), float32, size of voxel for each axis - voxel_order, string, Typically 'RAS' or 'LPS' """ is_nifti = False is_trk = False is_sft = False is_trx = False if isinstance(reference, str): _, ext = split_name_with_gz(reference) ext = ext.lower() if ext in [".nii", ".nii.gz"]: header = nib.load(reference).header is_nifti = True elif ext == ".trk": header = nib.streamlines.load(reference, lazy_load=True).header is_trk = True elif ext == ".trx": header = trx_file_memmap.load(reference).header is_trx = True elif isinstance(reference, trx_file_memmap.TrxFile): header = reference.header is_trx = True elif isinstance(reference, nib.nifti1.Nifti1Image): header = reference.header is_nifti = True elif isinstance(reference, nib.streamlines.trk.TrkFile): header = reference.header is_trk = True elif isinstance(reference, nib.nifti1.Nifti1Header): header = reference is_nifti = True elif isinstance(reference, dict) and "magic_number" in reference: header = reference is_trk = True elif isinstance(reference, dict) and "NB_VERTICES" in reference: header = reference is_trx = True elif isinstance(reference, dipy.io.stateful_tractogram.StatefulTractogram): is_sft = True if is_nifti: affine = header.get_best_affine() dimensions = header["dim"][1:4] voxel_sizes = header["pixdim"][1:4] if not affine[0:3, 0:3].any(): raise ValueError( "Invalid affine, contains only zeros." "Cannot determine voxel order from transformation" ) voxel_order = "".join(nib.aff2axcodes(affine)) elif is_trk: affine = header["voxel_to_rasmm"] dimensions = header["dimensions"] voxel_sizes = header["voxel_sizes"] voxel_order = header["voxel_order"] elif is_sft: affine, dimensions, voxel_sizes, voxel_order = reference.space_attributes elif is_trx: affine = header["VOXEL_TO_RASMM"] dimensions = header["DIMENSIONS"] voxel_sizes = nib.affines.voxel_sizes(affine) voxel_order = "".join(nib.aff2axcodes(affine)) else: raise TypeError("Input reference is not one of the supported format") if isinstance(voxel_order, np.bytes_): voxel_order = voxel_order.decode("utf-8") is_reference_info_valid(affine, dimensions, voxel_sizes, voxel_order) return affine.astype(np.float32), dimensions, voxel_sizes, voxel_order def is_header_compatible(reference_1, reference_2): """Will compare the spatial attribute of 2 references Parameters ---------- reference_1 : Nifti or Trk filename, Nifti1Image or TrkFile, Nifti1Header or trk.header (dict) Reference that provides the spatial attribute. reference_2 : Nifti or Trk filename, Nifti1Image or TrkFile, Nifti1Header or trk.header (dict) Reference that provides the spatial attribute. 
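# Hedged sketch pairing is_header_compatible() with the docstring above: any
# two supported references (files or in-memory objects) can be compared
# directly. The paths are hypothetical placeholders.
from dipy.io.utils import is_header_compatible

if not is_header_compatible("bundle.trk", "ref.nii"):
    raise ValueError("tractogram and anatomy are not in the same space")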
Returns ------- output : bool Does all the spatial attribute match """ affine_1, dimensions_1, voxel_sizes_1, voxel_order_1 = get_reference_info( reference_1 ) affine_2, dimensions_2, voxel_sizes_2, voxel_order_2 = get_reference_info( reference_2 ) identical_header = True if not np.allclose(affine_1, affine_2, rtol=1e-03, atol=1e-03): logging.error("Affine not equal") identical_header = False if not np.array_equal(dimensions_1, dimensions_2): logging.error("Dimensions not equal") identical_header = False if not np.allclose(voxel_sizes_1, voxel_sizes_2, rtol=1e-03, atol=1e-03): logging.error("Voxel_size not equal") identical_header = False if voxel_order_1 != voxel_order_2: logging.error("Voxel_order not equal") identical_header = False return identical_header def create_tractogram_header( tractogram_type, affine, dimensions, voxel_sizes, voxel_order ): """Write a standard trk/tck header from spatial attribute""" if isinstance(tractogram_type, str): tractogram_type = detect_format(tractogram_type) new_header = tractogram_type.create_empty_header() new_header[nib.streamlines.Field.VOXEL_SIZES] = tuple(voxel_sizes) new_header[nib.streamlines.Field.DIMENSIONS] = tuple(dimensions) new_header[nib.streamlines.Field.VOXEL_TO_RASMM] = affine new_header[nib.streamlines.Field.VOXEL_ORDER] = voxel_order return new_header def create_nifti_header(affine, dimensions, voxel_sizes): """Write a standard nifti header from spatial attribute""" new_header = nib.Nifti1Header() new_header.set_sform(affine) new_header["dim"][1:4] = dimensions new_header["pixdim"][1:4] = voxel_sizes new_header.affine = new_header.get_best_affine() return new_header @warning_for_keywords() def save_buan_profiles_hdf5(fname, dt, *, key=None): """Saves the given input dataframe to .h5 file Parameters ---------- fname : string file name for saving the hdf5 file dt : Pandas DataFrame DataFrame to be saved as .h5 file key : str, optional Key to retrieve the contents in the HDF5 file. The file rootname will be used if not provided. """ df = pd.DataFrame(dt) filename_hdf5 = fname + ".h5" if key is None: base_name_parts, _ = os.path.splitext(os.path.basename(fname)) key = base_name_parts.split(".")[0] store = pd.HDFStore(filename_hdf5, complevel=9) store.append(key, df, data_columns=True, complevel=9) store.close() @warning_for_keywords() def read_img_arr_or_path(data, *, affine=None): """ Helper function that handles inputs that can be paths, nifti img or arrays Parameters ---------- data : array or nib.Nifti1Image or str. Either as a 3D/4D array or as a nifti image object, or as a string containing the full path to a nifti file. affine : 4x4 array, optional. Must be provided for `data` provided as an array. If provided together with Nifti1Image or str `data`, this input will over-ride the affine that is stored in the `data` input. Default: use the affine stored in `data`. 
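# Hedged sketch of the three input flavours read_img_arr_or_path() accepts,
# per the docstring above. "t1.nii.gz" is a hypothetical placeholder path.
import nibabel as nib
import numpy as np

from dipy.io.utils import read_img_arr_or_path

data, affine = read_img_arr_or_path("t1.nii.gz")  # path: affine read from file
img = nib.Nifti1Image(data, affine)
data2, affine2 = read_img_arr_or_path(img)        # Nifti1Image object
data3, affine3 = read_img_arr_or_path(data, affine=affine)  # bare array needs affine
assert np.allclose(affine, affine2) and np.allclose(affine, affine3)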
Returns ------- data, affine : ndarray and 4x4 array """ if isinstance(data, np.ndarray) and affine is None: raise ValueError( "If data is provided as an array, an affine has ", "to be provided as well" ) if isinstance(data, str): data = nib.load(data) if isinstance(data, nib.Nifti1Image): if affine is None: affine = data.affine data = data.get_fdata() return data, affine dipy-1.11.0/dipy/io/vtk.py000066400000000000000000000053261476546756600153310ustar00rootroot00000000000000import numpy as np from dipy.testing.decorators import warning_for_keywords from dipy.tracking.streamline import transform_streamlines from dipy.utils.optpkg import optional_package fury, have_fury, setup_module = optional_package("fury", min_version="0.10.0") if have_fury: import fury.io import fury.utils def load_polydata(file_name): """Load a vtk polydata to a supported format file. Supported file formats are OBJ, VTK, VTP, FIB, PLY, STL and XML Parameters ---------- file_name : string Returns ------- output : vtkPolyData """ return fury.io.load_polydata(file_name) @warning_for_keywords() def save_polydata(polydata, file_name, *, binary=False, color_array_name=None): """Save a vtk polydata to a supported format file. Save formats can be VTK, VTP, FIB, PLY, STL and XML. Parameters ---------- polydata : vtkPolyData file_name : string """ fury.io.save_polydata( polydata=polydata, file_name=file_name, binary=binary, color_array_name=color_array_name, ) @warning_for_keywords() def save_vtk_streamlines(streamlines, filename, *, to_lps=True, binary=False): """Save streamlines as vtk polydata to a supported format file. File formats can be OBJ, VTK, VTP, FIB, PLY, STL and XML Parameters ---------- streamlines : list list of 2D arrays or ArraySequence filename : string output filename (.obj, .vtk, .fib, .ply, .stl and .xml) to_lps : bool Default to True, will follow the vtk file convention for streamlines Will be supported by MITKDiffusion and MI-Brain binary : bool save the file as binary """ if to_lps: # ras (mm) to lps (mm) to_lps = np.eye(4) to_lps[0, 0] = -1 to_lps[1, 1] = -1 streamlines = transform_streamlines(streamlines, to_lps) polydata, _ = fury.utils.lines_to_vtk_polydata(streamlines) save_polydata(polydata, file_name=filename, binary=binary) @warning_for_keywords() def load_vtk_streamlines(filename, *, to_lps=True): """Load streamlines from vtk polydata. 
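# Hedged round-trip sketch for the FURY-backed VTK helpers defined above;
# requires FURY to be installed. "lines.fib" is a hypothetical output path,
# and the streamlines are random data for illustration only.
import numpy as np

from dipy.io.vtk import load_vtk_streamlines, save_vtk_streamlines

streamlines = [np.random.rand(20, 3).astype(np.float32) for _ in range(5)]
save_vtk_streamlines(streamlines, "lines.fib", to_lps=True, binary=True)
loaded = load_vtk_streamlines("lines.fib", to_lps=True)  # back to RAS (mm)
assert np.allclose(streamlines[0], loaded[0], atol=1e-4)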
Load formats can be VTK, FIB Parameters ---------- filename : string input filename (.vtk or .fib) to_lps : bool Default to True, will follow the vtk file convention for streamlines Will be supported by MITKDiffusion and MI-Brain Returns ------- output : list list of 2D arrays """ polydata = load_polydata(filename) lines = fury.utils.get_polydata_lines(polydata) if to_lps: to_lps = np.eye(4) to_lps[0, 0] = -1 to_lps[1, 1] = -1 return transform_streamlines(lines, to_lps) return lines dipy-1.11.0/dipy/meson.build000066400000000000000000000336101476546756600157030ustar00rootroot00000000000000# Platform detection is_windows = host_machine.system() == 'windows' is_mingw = is_windows and cc.get_id() == 'gcc' # ------------------------------------------------------------------------ # Preprocessor flags # ------------------------------------------------------------------------ numpy_nodepr_api_1_9 = '-DNPY_NO_DEPRECATED_API=NPY_1_9_API_VERSION' numpy_nodepr_api_1_7 = '-DNPY_NO_DEPRECATED_API=NPY_1_7_API_VERSION' # ------------------------------------------------------------------------ # Compiler flags # ------------------------------------------------------------------------ # C warning flags Wno_maybe_uninitialized = cc.get_supported_arguments('-Wno-maybe-uninitialized') Wno_discarded_qualifiers = cc.get_supported_arguments('-Wno-discarded-qualifiers') Wno_empty_body = cc.get_supported_arguments('-Wno-empty-body') Wno_implicit_function_declaration = cc.get_supported_arguments('-Wno-implicit-function-declaration') Wno_parentheses = cc.get_supported_arguments('-Wno-parentheses') Wno_switch = cc.get_supported_arguments('-Wno-switch') Wno_unused_label = cc.get_supported_arguments('-Wno-unused-label') Wno_unused_variable = cc.get_supported_arguments('-Wno-unused-variable') # C++ warning flags _cpp_Wno_cpp = cpp.get_supported_arguments('-Wno-cpp') _cpp_Wno_deprecated_declarations = cpp.get_supported_arguments('-Wno-deprecated-declarations') _cpp_Wno_class_memaccess = cpp.get_supported_arguments('-Wno-class-memaccess') _cpp_Wno_format_truncation = cpp.get_supported_arguments('-Wno-format-truncation') _cpp_Wno_format_extra_args = cpp.get_supported_arguments('-Wno-format-extra-args') _cpp_Wno_format = cpp.get_supported_arguments('-Wno-format') _cpp_Wno_non_virtual_dtor = cpp.get_supported_arguments('-Wno-non-virtual-dtor') _cpp_Wno_sign_compare = cpp.get_supported_arguments('-Wno-sign-compare') _cpp_Wno_switch = cpp.get_supported_arguments('-Wno-switch') _cpp_Wno_terminate = cpp.get_supported_arguments('-Wno-terminate') _cpp_Wno_unused_but_set_variable = cpp.get_supported_arguments('-Wno-unused-but-set-variable') _cpp_Wno_unused_function = cpp.get_supported_arguments('-Wno-unused-function') _cpp_Wno_unused_local_typedefs = cpp.get_supported_arguments('-Wno-unused-local-typedefs') _cpp_Wno_unused_variable = cpp.get_supported_arguments('-Wno-unused-variable') _cpp_Wno_int_in_bool_context = cpp.get_supported_arguments('-Wno-int-in-bool-context') cython_c_args = [] if is_windows # For mingw-w64, link statically against the UCRT. # automatic detect lto for now due to some issues. '-fno-use-linker-plugin' gcc_link_args = ['-lucrt', '-static'] if is_mingw add_project_link_arguments(gcc_link_args, language: ['c', 'cpp']) # Force gcc to float64 long doubles for compatibility with MSVC # builds, for C only. 
add_project_arguments('-mlong-double-64', language: 'c') # Make fprintf("%zd") work (see https://github.com/rgommers/scipy/issues/118) add_project_arguments('-D__USE_MINGW_ANSI_STDIO=1', language: ['c', 'cpp']) # Manual add of MS_WIN64 macro when not using MSVC. # https://bugs.python.org/issue28267 if target_machine.cpu_family().to_lower().contains('64') add_project_arguments('-DMS_WIN64', language: ['c', 'cpp']) endif # Silence warnings emitted by PyOS_snprintf for (%zd), see # https://github.com/rgommers/scipy/issues/118. # Use as c_args for extensions containing Cython code cython_c_args += [_cpp_Wno_format_extra_args, _cpp_Wno_format] endif endif # Deal with M_PI & friends; add `use_math_defines` to c_args # Cython doesn't always get this correctly itself # explicitly add the define as a compiler flag for Cython-generated code. if is_windows use_math_defines = ['-D_USE_MATH_DEFINES'] else use_math_defines = [] endif # Suppress warning for deprecated Numpy API. # (Suppress warning messages emitted by #warning directives). # Replace with numpy_nodepr_api after Cython 3.0 is out cython_c_args += [_cpp_Wno_cpp, use_math_defines] cython_cpp_args = cython_c_args # ------------------------------------------------------------------------ # NumPy include directory - needed in all submodules # ------------------------------------------------------------------------ # The chdir is needed because within numpy there's an `import signal` # statement, and we don't want that to pick up scipy's signal module rather # than the stdlib module. The try-except is needed because when things are # split across drives on Windows, there is no relative path and an exception # gets raised. There may be other such cases, so add a catch-all and switch to # an absolute path. Relative paths are needed when for example a virtualenv is # placed inside the source tree; Meson rejects absolute paths to places inside # the source tree. # For cross-compilation it is often not possible to run the Python interpreter # in order to retrieve numpy's include directory. It can be specified in the # cross file instead: # [properties] # numpy-include-dir = /abspath/to/host-pythons/site-packages/numpy/core/include # # This uses the path as is, and avoids running the interpreter. incdir_numpy = meson.get_external_property('numpy-include-dir', 'not-given') if incdir_numpy == 'not-given' incdir_numpy = run_command(py3, [ '-c', ''' import numpy as np try: incdir = os.path.relpath(np.get_include()) except Exception: incdir = np.get_include() print(incdir) ''' ], check: true ).stdout().strip() # We do need an absolute path to feed to `cc.find_library` below _incdir_numpy_abs = run_command(py3, ['-c', 'import os; os.chdir(".."); import numpy; print(numpy.get_include())'], check: true ).stdout().strip() else _incdir_numpy_abs = incdir_numpy endif inc_np = include_directories(incdir_numpy) np_dep = declare_dependency(include_directories: inc_np) # npymath_path = _incdir_numpy_abs / '..' / 'lib' # npyrandom_path = _incdir_numpy_abs / '..' / '..' 
/ 'random' / 'lib' # npymath_lib = cc.find_library('npymath', dirs: npymath_path) # npyrandom_lib = cc.find_library('npyrandom', dirs: npyrandom_path) # ------------------------------------------------------------------------ # Define Optimisation for cython extensions # ------------------------------------------------------------------------ omp = dependency('openmp', required: false) if not omp.found() and meson.get_compiler('c').get_id() == 'clang' # Check for libomp (OpenMP) using Homebrew brew = find_program('brew', required : false) if brew.found() output = run_command(brew, 'list', 'libomp', check: true) output = output.stdout().strip() if output.contains('/libomp/') omp_prefix = fs.parent(output.split('\n')[0]) message('OpenMP Found: YES (Manual search) - ', omp_prefix) omp = declare_dependency(compile_args : ['-Xpreprocessor', '-fopenmp'], link_args : ['-L' + omp_prefix + '/lib', '-lomp'], include_directories : include_directories(omp_prefix / 'include') ) endif endif endif # SSE intrinsics sse2_cflags = [] sse_prog = ''' #if defined(__GNUC__) # if !defined(__amd64__) && !defined(__x86_64__) # error "SSE2 intrinsics are only available on x86_64" # endif #elif defined (_MSC_VER) && !defined (_M_X64) && !defined (_M_AMD64) # error "SSE2 intrinsics not supported on x86 MSVC builds" #endif #if defined(__SSE__) || (_M_X64 > 0) # include # include # include #else # error "No SSE intrinsics available" #endif int main () { __m128i a = _mm_set1_epi32 (0), b = _mm_set1_epi32 (0), c; c = _mm_xor_si128 (a, b); return 0; }''' if cc.get_id() != 'msvc' test_sse2_cflags = ['-mfpmath=sse', '-msse', '-msse2'] # might need to check the processor type here # arm neon flag: -mfpu=neon -mfloat-abi=softfp # see test below # freescale altivec flag: -maltivec -mabi=altivec else test_sse2_cflags = ['/arch:SSE2'] # SSE2 support is only available in 32 bit mode. 
endif if cc.compiles(sse_prog, args: test_sse2_cflags, name: 'SSE intrinsics') sse2_cflags = test_sse2_cflags cython_c_args += test_sse2_cflags cython_cpp_args = cython_c_args endif if host_cpu_family in ['x86', 'x86_64'] x86_intrinsics = [] if cc.get_id() == 'msvc' x86_intrinsics = [ [ 'AVX', 'immintrin.h', '__m256', '_mm256_setzero_ps()', ['/ARCH:AVX'] ], [ 'AVX2', 'immintrin.h', '__m256i', '_mm256_setzero_si256()', ['/ARCH:AVX2'] ], [ 'AVX512', 'immintrin.h', '__m512', '_mm512_setzero_si512()', ['/ARCH:AVX512'] ], ] else x86_intrinsics = [ # [ 'SSE', 'xmmintrin.h', '__m128', '_mm_setzero_ps()', ['-msse'] ], # [ 'SSE2', 'emmintrin.h', '__m128i', '_mm_setzero_si128()', ['-msse2'] ], [ 'SSE4.1', 'smmintrin.h', '__m128i', '_mm_setzero_si128(); mtest = _mm_cmpeq_epi64(mtest, mtest)', ['-msse4.1'] ], [ 'AVX', 'immintrin.h', '__m256', '_mm256_setzero_ps()', ['-mavx'] ], ] endif foreach intrin : x86_intrinsics intrin_check = '''#include <@0@> int main (int argc, char ** argv) { static @1@ mtest; mtest = @2@; return *((unsigned char *) &mtest) != 0; }'''.format(intrin[1],intrin[2],intrin[3]) intrin_name = intrin[0] if cc.links(intrin_check, name : 'compiler supports @0@ intrinsics'.format(intrin_name)) cython_c_args += intrin[4] cython_cpp_args = cython_c_args endif endforeach endif # ARM NEON intrinsics neon_prog = ''' #if !defined (_MSC_VER) || defined (__clang__) # if !defined (_M_ARM64) && !defined (__aarch64__) # ifndef __ARM_EABI__ # error "EABI is required (to be sure that calling conventions are compatible)" # endif # ifndef __ARM_NEON__ # error "No ARM NEON instructions available" # endif # endif #endif #if defined (_MSC_VER) && (_MSC_VER < 1920) && defined (_M_ARM64) # include #else # include #endif int main () { const float32_t __v[4] = { 1, 2, 3, 4 }; \ const unsigned int __umask[4] = { \ 0x80000000, \ 0x80000000, \ 0x80000000, \ 0x80000000 \ }; \ const uint32x4_t __mask = vld1q_u32 (__umask); \ float32x4_t s = vld1q_f32 (__v); \ float32x4_t c = vreinterpretq_f32_u32 (veorq_u32 (vreinterpretq_u32_f32 (s), __mask)); \ return 0; }''' test_neon_cflags = [] if cc.get_id() != 'msvc' and host_cpu_family != 'aarch64' test_neon_cflags += ['-mfpu=neon'] endif if host_system == 'android' # dipy not in android but I keep it just in case test_neon_cflags += ['-mfloat-abi=softfp'] endif if cc.compiles(neon_prog, args: test_neon_cflags, name: 'ARM NEON intrinsics') neon_cflags = test_neon_cflags cython_c_args += neon_cflags cython_cpp_args = cython_c_args endif # ------------------------------------------------------------------------ # include openmp # Copy the main __init__.py and pxd files to the build dir. 
# Needed to trick Cython, it won't do a relative import outside a package # ------------------------------------------------------------------------ _cython_tree = [ fs.copyfile('__init__.py'), fs.copyfile('../src/conditional_omp.h'), fs.copyfile('../src/ctime.pxd'), fs.copyfile('../src/cythonutils.h'), fs.copyfile('../src/dpy_math.h'), fs.copyfile('../src/safe_openmp.pxd'), ] # include some local folder # Todo: need more explicit name incdir_local = meson.current_build_dir() inc_local = include_directories('.') # ------------------------------------------------------------------------ # Manage version file # ------------------------------------------------------------------------ dipy_dir = py3.get_install_dir() / 'dipy' meson.add_dist_script( ['../tools/gitversion.py', '--meson-dist', '--write', 'dipy/version.py'] ) if not fs.exists('version.py') generate_version = custom_target( 'generate-version', install: true, build_always_stale: true, build_by_default: true, output: 'version.py', input: '../tools/gitversion.py', command: ['../tools/gitversion.py', '--meson-dist', '--write', 'dipy/version.py'], install_dir: dipy_dir, install_tag: 'python-runtime', ) else # When building from sdist, version.py exists and should be included py3.install_sources(['version.py'], subdir : 'dipy') endif # ------------------------------------------------------------------------ # Include Python Sources # ------------------------------------------------------------------------ python_sources = [ '__init__.py', 'conftest.py', 'py.typed' ] py3.install_sources( python_sources, pure: false, subdir: 'dipy' ) # ------------------------------------------------------------------------ # Manage datafiles # ------------------------------------------------------------------------ data_install_dir = join_paths(get_option('datadir'), 'doc', meson.project_name()) ex_file_excludes = ['_valid_examples.toml', '.gitignore', 'README.md'] install_subdir('../doc/examples', install_dir: data_install_dir, exclude_files: ex_file_excludes, ) # ------------------------------------------------------------------------ # Custom Meson Command line tools # ------------------------------------------------------------------------ cython_args = ['-3', '--fast-fail', '--warning-errors', '@EXTRA_ARGS@', '--output-file', '@OUTPUT@', '--include-dir', incdir_local, '@INPUT@'] cython_cplus_args = ['--cplus'] + cython_args cython_gen = generator(cython, arguments : cython_args, output : '@BASENAME@.c', depends : _cython_tree) cython_gen_cpp = generator(cython, arguments : cython_cplus_args, output : '@BASENAME@.cpp', depends : [_cython_tree]) # ------------------------------------------------------------------------ # Add subfolders # ------------------------------------------------------------------------ subdir('align') subdir('core') subdir('data') subdir('denoise') subdir('direction') subdir('io') subdir('nn') subdir('reconst') subdir('segment') subdir('sims') subdir('stats') subdir('testing') subdir('tests') subdir('tracking') subdir('utils') subdir('viz') subdir('workflows')dipy-1.11.0/dipy/nn/000077500000000000000000000000001476546756600141515ustar00rootroot00000000000000dipy-1.11.0/dipy/nn/__init__.py000066400000000000000000000055711476546756600162720ustar00rootroot00000000000000# init for nn aka the deep neural network module import os import sys from dipy.utils.deprecator import deprecate_with_version from dipy.utils.optpkg import optional_package def _load_backend(): """Dynamically load the preferred backend based on the environment 
variable.""" preferred_backend = os.getenv("DIPY_NN_BACKEND", "torch").lower() tf, have_tf, _ = optional_package("tensorflow", min_version="2.18.0") torch, have_torch, _ = optional_package("torch", min_version="2.2.0") __all__ = [] if (have_torch and (preferred_backend == "torch" or not have_tf)) or ( not have_tf and not have_torch ): import dipy.nn.torch.deepn4 as deepn4_module import dipy.nn.torch.evac as evac_module import dipy.nn.torch.histo_resdnn as histo_resdnn_module sys.modules["dipy.nn.evac"] = evac_module sys.modules["dipy.nn.histo_resdnn"] = histo_resdnn_module sys.modules["dipy.nn.deepn4"] = deepn4_module globals().update( { "evac": evac_module, "histo_resdnn": histo_resdnn_module, "deepn4": deepn4_module, } ) __all__ += ["evac", "histo_resdnn", "deepn4"] elif have_tf: import dipy.nn.tf.cnn_1d_denoising as cnn_1d_denoising_module import dipy.nn.tf.deepn4 as deepn4_module import dipy.nn.tf.evac as evac_module import dipy.nn.tf.histo_resdnn as histo_resdnn_module import dipy.nn.tf.model as model_module import dipy.nn.tf.synb0 as synb0_module msg = ( "`dipy.nn.tf` module uses TensorFlow, which is deprecated in DIPY 1.10.0. " "Please install PyTorch to use the `dipy.nn.torch` module instead." ) dec = deprecate_with_version(msg, since="1.10.0", until="1.12.0") dec(lambda x=None: x)() sys.modules["dipy.nn.evac"] = evac_module sys.modules["dipy.nn.histo_resdnn"] = histo_resdnn_module sys.modules["dipy.nn.deepn4"] = deepn4_module sys.modules["dipy.nn.cnn_1d_denoising"] = cnn_1d_denoising_module sys.modules["dipy.nn.synb0"] = synb0_module sys.modules["dipy.nn.model"] = model_module globals().update( { "cnn_1d_denoising": cnn_1d_denoising_module, "deepn4": deepn4_module, "evac": evac_module, "histo_resdnn": histo_resdnn_module, "model": model_module, "synb0": synb0_module, } ) __all__ += [ "cnn_1d_denoising", "deepn4", "evac", "histo_resdnn", "model", "synb0", ] else: print( "Warning: Neither TensorFlow nor PyTorch is installed. " "Please install one of these packages." 
) return __all__ __all__ = _load_backend() dipy-1.11.0/dipy/nn/meson.build000066400000000000000000000002631476546756600163140ustar00rootroot00000000000000 python_sources = [ '__init__.py', 'utils.py', ] py3.install_sources( python_sources, pure: false, subdir: 'dipy/nn' ) subdir('tests') subdir('torch') subdir('tf') dipy-1.11.0/dipy/nn/tests/000077500000000000000000000000001476546756600153135ustar00rootroot00000000000000dipy-1.11.0/dipy/nn/tests/__init__.py000066400000000000000000000000251476546756600174210ustar00rootroot00000000000000# tests for ANN code dipy-1.11.0/dipy/nn/tests/meson.build000066400000000000000000000004171476546756600174570ustar00rootroot00000000000000python_sources = [ '__init__.py', 'test_cnn_1denoiser.py', 'test_deepn4.py', 'test_evac.py', 'test_histo_resdnn.py', 'test_synb0.py', 'test_tf.py', 'test_utils.py', ] py3.install_sources( python_sources, pure: false, subdir: 'dipy/nn/tests' ) dipy-1.11.0/dipy/nn/tests/test_cnn_1denoiser.py000066400000000000000000000052441476546756600214600ustar00rootroot00000000000000import importlib import os import sys import warnings import pytest from dipy.testing.decorators import set_random_number_generator from dipy.utils.optpkg import optional_package tf, have_tf, _ = optional_package("tensorflow", min_version="2.18.0") sklearn, have_sklearn, _ = optional_package("sklearn.model_selection") original_backend = os.environ.get("DIPY_NN_BACKEND") cnnden_mod = None def setup_module(module): """Set up environment variable for all tests in this module.""" global cnnden_mod os.environ["DIPY_NN_BACKEND"] = "tensorflow" with warnings.catch_warnings(): msg = ".*uses TensorFlow.*install PyTorch.*" warnings.filterwarnings("ignore", message=msg, category=DeprecationWarning) dipy_nn = importlib.reload(sys.modules["dipy.nn"]) cnnden_mod = dipy_nn.cnn_1d_denoising def teardown_module(module): """Restore the original environment variable after all tests in this module.""" global cnnden_mod if original_backend is not None: os.environ["DIPY_NN_BACKEND"] = original_backend else: del os.environ["DIPY_NN_BACKEND"] cnnden_mod = None @pytest.mark.skipif( not have_tf or not have_sklearn, reason="Requires TensorFlow and scikit-learn" ) @set_random_number_generator() def test_default_Cnn1DDenoiser_sequential(rng=None): # Create dummy data normal_img = rng.random((10, 10, 10, 30)) nos_img = normal_img + rng.normal(loc=0.0, scale=0.1, size=normal_img.shape) x = rng.random((10, 10, 10, 30)) # Test 1D denoiser model = cnnden_mod.Cnn1DDenoiser(30) model.compile(optimizer="adam", loss="mean_squared_error", metrics=["accuracy"]) epochs = 1 hist = model.fit(nos_img, normal_img, epochs=epochs) data = model.predict(x) model.evaluate(nos_img, normal_img, verbose=2) _ = hist.history["accuracy"][0] assert data.shape == x.shape @pytest.mark.skipif( not have_tf or not have_sklearn, reason="Requires TensorFlow and scikit-learn" ) @set_random_number_generator() def test_default_Cnn1DDenoiser_flow(pytestconfig, rng): # Create dummy data normal_img = rng.random((10, 10, 10, 30)) nos_img = normal_img + rng.normal(loc=0.0, scale=0.1, size=normal_img.shape) x = rng.random((10, 10, 10, 30)) # Test 1D denoiser with flow API model = cnnden_mod.Cnn1DDenoiser(30) if pytestconfig.getoption("verbose") > 0: model.summary() model.compile(optimizer="adam", loss="mean_squared_error", metrics=["accuracy"]) epochs = 1 hist = model.fit(nos_img, normal_img, epochs=epochs) _ = model.predict(x) model.evaluate(nos_img, normal_img, verbose=2) accuracy = hist.history["accuracy"][0] assert accuracy > 0 
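# The tests above double as usage documentation. A minimal stand-alone sketch of
# the same workflow (not executed here; assumes TensorFlow and scikit-learn are
# installed, and the array shapes are illustrative only):
#
#   import numpy as np
#   from dipy.nn.tf.cnn_1d_denoising import Cnn1DDenoiser
#
#   rng = np.random.default_rng(42)
#   clean = rng.random((10, 10, 10, 30))                # low-noise reference, 4D
#   noisy = clean + rng.normal(0.0, 0.1, clean.shape)   # matching noisy acquisition
#   model = Cnn1DDenoiser(30)                           # sig_length = volumes per voxel
#   model.compile(optimizer="adam", loss="mean_squared_error", metrics=["accuracy"])
#   model.fit(noisy, clean, epochs=1)                   # 4D inputs are flattened internally
#   denoised = model.predict(noisy)                     # returns an array shaped like noisy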
dipy-1.11.0/dipy/nn/tests/test_deepn4.py000066400000000000000000000026231476546756600201060ustar00rootroot00000000000000import importlib import sys import warnings import numpy as np from numpy.testing import assert_almost_equal import pytest from dipy.data import get_fnames from dipy.utils.optpkg import optional_package tf, have_tf, _ = optional_package("tensorflow", min_version="2.18.0") torch, have_torch, _ = optional_package("torch", min_version="2.2.0") have_nn = have_tf or have_torch BACKENDS = [ backend for backend, available in [("tensorflow", have_tf), ("torch", have_torch)] if available ] @pytest.mark.skipif(not have_nn, reason="Requires TensorFlow or Torch") def test_default_weights(monkeypatch): file_names = get_fnames(name="deepn4_test_data") input_arr = np.load(file_names[0])["img"] input_affine_arr = np.load(file_names[0])["affine"] target_arr = np.load(file_names[1])["corr"] for backend in BACKENDS: monkeypatch.setenv("DIPY_NN_BACKEND", backend) with warnings.catch_warnings(): msg = ".*uses TensorFlow.*install PyTorch.*" warnings.filterwarnings("ignore", message=msg, category=DeprecationWarning) dipy_nn = importlib.reload(sys.modules["dipy.nn"]) deepn4_mod = dipy_nn.deepn4 deepn4_model = deepn4_mod.DeepN4() deepn4_model.fetch_default_weights() results_arr = deepn4_model.predict(input_arr, input_affine_arr) assert_almost_equal(results_arr / 100, target_arr / 100, decimal=1) dipy-1.11.0/dipy/nn/tests/test_evac.py000066400000000000000000000057201476546756600176460ustar00rootroot00000000000000import importlib import sys import warnings import numpy as np import numpy.testing as npt import pytest from dipy.data import get_fnames from dipy.utils.optpkg import optional_package tf, have_tf, _ = optional_package("tensorflow", min_version="2.18.0") torch, have_torch, _ = optional_package("torch", min_version="2.2.0") have_nn = have_tf or have_torch BACKENDS = [ backend for backend, available in [("tensorflow", have_tf), ("torch", have_torch)] if available ] @pytest.mark.skipif(not have_nn, reason="Requires TensorFlow or Torch") def test_default_weights(monkeypatch): file_path = get_fnames(name="evac_test_data") input_arr = np.load(file_path)["input"][0] output_arr = np.load(file_path)["output"][0] for backend in BACKENDS: monkeypatch.setenv("DIPY_NN_BACKEND", backend) with warnings.catch_warnings(): msg = ".*uses TensorFlow.*install PyTorch.*" warnings.filterwarnings("ignore", message=msg, category=DeprecationWarning) dipy_nn = importlib.reload(sys.modules["dipy.nn"]) evac = dipy_nn.evac evac_model = evac.EVACPlus() results_arr = evac_model.predict(input_arr, np.eye(4), return_prob=True) npt.assert_almost_equal(results_arr, output_arr, decimal=2) @pytest.mark.skipif(not have_nn, reason="Requires TensorFlow or Torch") def test_default_weights_batch(monkeypatch): file_path = get_fnames(name="evac_test_data") input_arr = np.load(file_path)["input"] output_arr = np.load(file_path)["output"] input_arr = list(input_arr) for backend in BACKENDS: print(backend) monkeypatch.setenv("DIPY_NN_BACKEND", backend) with warnings.catch_warnings(): msg = ".*uses TensorFlow.*install PyTorch.*" warnings.filterwarnings("ignore", message=msg, category=DeprecationWarning) dipy_nn = importlib.reload(sys.modules["dipy.nn"]) evac = dipy_nn.evac evac_model = evac.EVACPlus() fake_affine = np.array([np.eye(4), np.eye(4)]) fake_voxsize = np.ones((2, 3)) results_arr = evac_model.predict( input_arr, fake_affine, voxsize=fake_voxsize, batch_size=2, return_prob=True ) npt.assert_almost_equal(results_arr, output_arr, 
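# probability outputs from each backend are checked against the stored reference at 2-decimal tolerance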
decimal=2) @pytest.mark.skipif(not have_nn, reason="Requires TensorFlow or Torch") def test_T1_error(monkeypatch): for backend in BACKENDS: print(backend) monkeypatch.setenv("DIPY_NN_BACKEND", backend) with warnings.catch_warnings(): msg = ".*uses TensorFlow.*install PyTorch.*" warnings.filterwarnings("ignore", message=msg, category=DeprecationWarning) dipy_nn = importlib.reload(sys.modules["dipy.nn"]) evac = dipy_nn.evac T1 = np.ones((3, 32, 32, 32)) evac_model = evac.EVACPlus() fake_affine = np.array([np.eye(4), np.eye(4), np.eye(4)]) npt.assert_raises(ValueError, evac_model.predict, T1, fake_affine) dipy-1.11.0/dipy/nn/tests/test_histo_resdnn.py000066400000000000000000000161421476546756600214270ustar00rootroot00000000000000import importlib import sys import warnings import numpy as np from numpy.testing import assert_almost_equal, assert_equal, assert_raises import pytest from dipy.core.gradients import gradient_table from dipy.data import get_fnames from dipy.io.gradients import read_bvals_bvecs from dipy.io.image import load_nifti from dipy.reconst.shm import tournier07_legacy_msg from dipy.utils.optpkg import optional_package tf, have_tf, _ = optional_package("tensorflow", min_version="2.18.0") torch, have_torch, _ = optional_package("torch", min_version="2.2.0") have_nn = have_tf or have_torch BACKENDS = [ backend for backend, available in [("tf", have_tf), ("torch", have_torch)] if available ] @pytest.mark.skipif(not have_nn, reason="Requires TensorFlow or Torch") def test_default_weights(monkeypatch): input_arr = np.expand_dims( np.array( [ 1.15428471, 0.37460899, -0.16743798, -0.02638639, -0.02587842, -0.24743459, -0.11091634, -0.01974129, -0.03463564, 0.04234652, 0.00909119, -0.02181194, -0.01141419, 0.06747056, -0.02881568, 0.0037776, 0.02069041, 0.01655271, -0.00958642, 0.0103591, -0.00579612, 0.00559265, 0.00311974, 0.0067629, 0.00140297, 0.01844978, -0.00551951, 0.02215372, 0.00186543, -0.01057652, 0.00189625, -0.01114438, 0.00509697, -0.00150783, 0.01585437, 0.00256389, 0.00196107, -0.0108544, 0.01143742, -0.00547229, -0.01040528, 0.0114365, -0.02261801, 0.00452243, 0.0015014, ] ), axis=0, ) target_arr = np.expand_dims( np.array( [ 0.17804961, -0.18878266, 0.02026339, -0.0100488, -0.04045521, 0.20264171, -0.16151273, 0.00508221, 0.01158303, 0.03848331, 0.04242867, -0.00493216, -0.05138939, 0.03944791, -0.06210141, -0.01777741, 0.00032369, -0.00781484, 0.02685455, 0.00617174, -0.01357785, 0.0112316, 0.02457713, -0.00307974, -0.00110319, -0.0274653, 0.01606723, -0.05088685, 0.0017358, -0.00533427, -0.00785866, 0.00529946, 0.00624491, 0.00682212, -0.00551173, -0.00760572, -0.00145562, 0.02271283, 0.0238023, -0.01574752, 0.00853913, -0.00715324, 0.02677651, 0.01718479, -0.01433261, ] ), axis=0, ) for backend in BACKENDS: monkeypatch.setenv("DIPY_NN_BACKEND", backend) with warnings.catch_warnings(): msg = ".*uses TensorFlow.*install PyTorch.*" warnings.filterwarnings("ignore", message=msg, category=DeprecationWarning) dipy_nn = importlib.reload(sys.modules["dipy.nn"]) resdnn = dipy_nn.histo_resdnn resdnn_model = resdnn.HistoResDNN() resdnn_model.fetch_default_weights() results_arr = resdnn_model._HistoResDNN__predict(input_arr) assert_almost_equal(results_arr, target_arr, decimal=6) @pytest.mark.skipif(not have_nn, reason="Requires TensorFlow or Torch") def test_predict_shape_and_masking(monkeypatch): dwi_fname, bval_fname, bvec_fname = get_fnames(name="stanford_hardi") data, _ = load_nifti(dwi_fname) data = np.squeeze(data) bvals, bvecs = read_bvals_bvecs(bval_fname, 
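# the bval/bvec pair ships alongside the Stanford HARDI DWI volume fetched above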
bvec_fname) gtab = gradient_table(bvals, bvecs=bvecs) mask = np.zeros(data.shape[0:3], dtype=bool) mask[38:40, 45:50, 35:40] = 1 for backend in BACKENDS: monkeypatch.setenv("DIPY_NN_BACKEND", backend) with warnings.catch_warnings(): msg = ".*uses TensorFlow.*install PyTorch.*" warnings.filterwarnings("ignore", message=msg, category=DeprecationWarning) dipy_nn = importlib.reload(sys.modules["dipy.nn"]) resdnn = dipy_nn.histo_resdnn resdnn_model = resdnn.HistoResDNN() resdnn_model.fetch_default_weights() with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=tournier07_legacy_msg, category=PendingDeprecationWarning, ) results_arr = resdnn_model.predict(data, gtab, mask=mask) results_pos = np.sum(results_arr, axis=-1, dtype=bool) assert_equal(mask, results_pos) assert_equal(results_arr.shape[-1], 45) @pytest.mark.skipif(not have_nn, reason="Requires TensorFlow or Torch") def test_wrong_sh_order_weights(monkeypatch): for backend in BACKENDS: monkeypatch.setenv("DIPY_NN_BACKEND", backend) with warnings.catch_warnings(): msg = ".*uses TensorFlow.*install PyTorch.*" warnings.filterwarnings("ignore", message=msg, category=DeprecationWarning) dipy_nn = importlib.reload(sys.modules["dipy.nn"]) resdnn = dipy_nn.histo_resdnn resdnn_model = resdnn.HistoResDNN(sh_order_max=6) fetch_model_weights_path = get_fnames(name=f"histo_resdnn_{backend}_weights") if backend == "torch": assert_raises( RuntimeError, resdnn_model.load_model_weights, fetch_model_weights_path ) else: # tf assert_raises( ValueError, resdnn_model.load_model_weights, fetch_model_weights_path ) @pytest.mark.skipif(not have_nn, reason="Requires TensorFlow or Torch") def test_wrong_sh_order_input(monkeypatch): for backend in BACKENDS: monkeypatch.setenv("DIPY_NN_BACKEND", backend) with warnings.catch_warnings(): msg = ".*uses TensorFlow.*install PyTorch.*" warnings.filterwarnings("ignore", message=msg, category=DeprecationWarning) dipy_nn = importlib.reload(sys.modules["dipy.nn"]) resdnn = dipy_nn.histo_resdnn resdnn_model = resdnn.HistoResDNN() fetch_model_weights_path = get_fnames(name=f"histo_resdnn_{backend}_weights") resdnn_model.load_model_weights(fetch_model_weights_path) assert_raises(ValueError, resdnn_model._HistoResDNN__predict, np.zeros((1, 28))) dipy-1.11.0/dipy/nn/tests/test_synb0.py000066400000000000000000000041231476546756600177570ustar00rootroot00000000000000import importlib import os import sys import warnings import numpy as np from numpy.testing import assert_almost_equal import pytest from dipy.data import get_fnames from dipy.utils.optpkg import optional_package tf, have_tf, _ = optional_package("tensorflow", min_version="2.18.0") original_backend = os.environ.get("DIPY_NN_BACKEND") synb0_mod = None def setup_module(module): """Set up environment variable for all tests in this module.""" global synb0_mod os.environ["DIPY_NN_BACKEND"] = "tensorflow" with warnings.catch_warnings(): msg = ".*uses TensorFlow.*install PyTorch.*" warnings.filterwarnings("ignore", message=msg, category=DeprecationWarning) dipy_nn = importlib.reload(sys.modules["dipy.nn"]) synb0_mod = dipy_nn.synb0 def teardown_module(module): """Restore the original environment variable after all tests in this module.""" global synb0_mod if original_backend is not None: os.environ["DIPY_NN_BACKEND"] = original_backend else: del os.environ["DIPY_NN_BACKEND"] synb0_mod = None @pytest.mark.skipif(not have_tf, reason="Requires TensorFlow") def test_default_weights(): file_names = get_fnames(name="synb0_test_data") input_arr1 = 
np.load(file_names[0])["b0"][0] input_arr2 = np.load(file_names[0])["T1"][0] target_arr = np.load(file_names[1])["arr_0"][0] synb0_model = synb0_mod.Synb0() synb0_model.fetch_default_weights(0) results_arr = synb0_model.predict(input_arr1, input_arr2, average=False) assert_almost_equal(results_arr, target_arr, decimal=1) @pytest.mark.skipif(not have_tf, reason="Requires TensorFlow") def test_default_weights_batch(): file_names = get_fnames(name="synb0_test_data") input_arr1 = np.load(file_names[0])["b0"] input_arr2 = np.load(file_names[0])["T1"] target_arr = np.load(file_names[1])["arr_0"] synb0_model = synb0_mod.Synb0() synb0_model.fetch_default_weights(0) results_arr = synb0_model.predict( input_arr1, input_arr2, batch_size=2, average=False ) assert_almost_equal(results_arr, target_arr, decimal=1) dipy-1.11.0/dipy/nn/tests/test_tf.py000066400000000000000000000062551476546756600173450ustar00rootroot00000000000000import importlib import os import sys import warnings from numpy.testing import assert_, assert_equal import pytest from dipy.utils.optpkg import optional_package tf, have_tf, _ = optional_package("tensorflow", min_version="2.18.0") original_backend = os.environ.get("DIPY_NN_BACKEND") model_mod = None def setup_module(module): """Set up environment variable for all tests in this module.""" global model_mod os.environ["DIPY_NN_BACKEND"] = "tensorflow" with warnings.catch_warnings(): msg = ".*uses TensorFlow.*install PyTorch.*" warnings.filterwarnings("ignore", message=msg, category=DeprecationWarning) dipy_nn = importlib.reload(sys.modules["dipy.nn"]) model_mod = dipy_nn.model def teardown_module(module): """Restore the original environment variable after all tests in this module.""" global model_mod if original_backend is not None: os.environ["DIPY_NN_BACKEND"] = original_backend else: del os.environ["DIPY_NN_BACKEND"] model_mod = None @pytest.mark.skipif(not have_tf, reason="Requires TensorFlow") def test_default_mnist_sequential(): mnist = tf.keras.datasets.mnist epochs = 5 (x_train, y_train), (x_test, y_test) = mnist.load_data() x_train, x_test = x_train / 255.0, x_test / 255.0 model = tf.keras.models.Sequential( [ tf.keras.layers.Input(shape=(28, 28)), tf.keras.layers.Flatten(), tf.keras.layers.Dense(128, activation="relu"), tf.keras.layers.Dropout(0.2), tf.keras.layers.Dense(10, activation="softmax"), ] ) model.compile( optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"] ) hist = model.fit(x_train, y_train, epochs=epochs) model.evaluate(x_test, y_test, verbose=2) accuracy = hist.history["accuracy"][0] assert_(accuracy > 0.9) @pytest.mark.skipif(not have_tf, reason="Requires TensorFlow") def test_default_mnist_slp(): mnist = tf.keras.datasets.mnist epochs = 5 (x_train, y_train), (x_test, y_test) = mnist.load_data() x_train, x_test = x_train / 255.0, x_test / 255.0 slp = model_mod.SingleLayerPerceptron(input_shape=(28, 28)) hist = slp.fit(x_train, y_train, epochs=epochs) slp.evaluate(x_test, y_test, verbose=2) x_test_prob = slp.predict(x_test) accuracy = hist.history["accuracy"][0] assert_(slp.accuracy > 0.9) assert_(slp.loss < 0.4) assert_equal(slp.accuracy, accuracy) assert_equal(x_test_prob.shape, (10000, 10)) @pytest.mark.skipif(not have_tf, reason="Requires TensorFlow") def test_default_mnist_mlp(): mnist = tf.keras.datasets.mnist epochs = 5 (x_train, y_train), (x_test, y_test) = mnist.load_data() x_train, x_test = x_train / 255.0, x_test / 255.0 mlp = model_mod.MultipleLayerPercepton(input_shape=(28, 28), num_hidden=[128, 128]) hist = 
mlp.fit(x_train, y_train, epochs=epochs) mlp.evaluate(x_test, y_test, verbose=2) x_test_prob = mlp.predict(x_test) accuracy = hist.history["accuracy"][0] assert_(mlp.accuracy > 0.8) assert_(mlp.loss < 0.4) assert_equal(mlp.accuracy, accuracy) assert_equal(x_test_prob.shape, (10000, 10)) dipy-1.11.0/dipy/nn/tests/test_utils.py000066400000000000000000000024651476546756600200730ustar00rootroot00000000000000import warnings import numpy as np from dipy.nn.utils import normalize, recover_img, transform_img, unnormalize from dipy.testing.decorators import set_random_number_generator @set_random_number_generator() def test_norm(rng=None): temp = rng.random((8, 8, 8)) * 10 temp2 = normalize(temp) temp2 = unnormalize(temp2, -1, 1, 0, 10) np.testing.assert_almost_equal(temp, temp2, 1) @set_random_number_generator() def test_transform(rng=None): temp = rng.random((30, 31, 32)) temp2, affine, mid_shape, offset_array, scale, crop_vs, pad_vs = transform_img( temp, np.eye(4), init_shape=(32, 32, 32), voxsize=np.ones(3) * 2 ) with warnings.catch_warnings(): scipy_affine_txfm_msg = ( "The behavior of affine_transform with a 1-D " "array supplied for the matrix parameter has changed in " "SciPy 0.18.0." ) warnings.filterwarnings( "ignore", message=scipy_affine_txfm_msg, category=UserWarning ) temp2 = recover_img( temp2, affine, mid_shape, temp.shape, offset_array, np.ones(3) * 2, scale, crop_vs, pad_vs, ) np.testing.assert_almost_equal(np.array(temp.shape), np.array(temp2.shape)) dipy-1.11.0/dipy/nn/tf/000077500000000000000000000000001476546756600145625ustar00rootroot00000000000000dipy-1.11.0/dipy/nn/tf/__init__.py000066400000000000000000000000001476546756600166610ustar00rootroot00000000000000dipy-1.11.0/dipy/nn/tf/cnn_1d_denoising.py000066400000000000000000000412121476546756600203350ustar00rootroot00000000000000""" Title : Denoising diffusion weighted imaging data using CNN =========================================================== Obtaining tissue microstructure measurements from diffusion weighted imaging (DWI) with multiple, high b-values is crucial. However, the high noise levels present in these images can adversely affect the accuracy of the microstructural measurements. In this context, we suggest a straightforward denoising technique :footcite:p:`Cheng2022` that can be applied to any DWI dataset as long as a low-noise, single-subject dataset is obtained using the same DWI sequence. We created a simple 1D-CNN model with five layers, based on the 1D CNN for denoising speech. The model consists of two convolutional layers followed by max-pooling layers, and a dense layer. The first convolutional layer has 16 one-dimensional filters of size 16, and the second layer has 32 filters of size 8. ReLu activation function is applied to both convolutional layers. The max-pooling layer has a kernel size of 2 and a stride of 2. The dense layer maps the features extracted from the noisy image to the low-noise reference image. Reference --------- .. 
footbibliography:: """ import numpy as np from dipy.testing.decorators import warning_for_keywords from dipy.utils.optpkg import optional_package tf, have_tf, _ = optional_package("tensorflow", min_version="2.18.0") if have_tf: from tensorflow.keras.initializers import Orthogonal from tensorflow.keras.layers import Activation, Conv1D, Input from tensorflow.keras.models import Model sklearn, have_sklearn, _ = optional_package("sklearn.model_selection") class Cnn1DDenoiser: @warning_for_keywords() def __init__( self, sig_length, *, optimizer="adam", loss="mean_squared_error", metrics=("accuracy",), loss_weights=None, ): """Initialize the CNN 1D denoiser with the given parameters. Parameters ---------- sig_length : int Length of the DWI signal. optimizer : str, optional Name of the optimization algorithm to use. Options: 'adam', 'sgd', 'rmsprop', 'adagrad', 'adadelta'. loss : str, optional Name of the loss function to use. Available options are 'mean_squared_error', 'mean_absolute_error', 'mean_absolute_percentage_error', 'mean_squared_logarithmic_error', 'squared_hinge', 'hinge', 'categorical_hinge', 'logcosh', 'categorical_crossentropy', 'sparse_categorical_crossentropy', 'binary_crossentropy', 'kullback_leibler_divergence', 'poisson', 'cosine_similarity'. Suggested to go with 'mean_squared_error'. metrics : tuple of str or function, optional List of metrics to be evaluated by the model during training and testing. Available options are 'accuracy', 'binary_accuracy', 'categorical_accuracy', 'top_k_categorical_accuracy', 'sparse_categorical_accuracy', 'sparse_top_k_categorical_accuracy', and any custom function. loss_weights : float or dict, optional Scalar coefficients to weight the loss contributions of different model outputs. Can be a single float value or a dictionary mapping output names to scalar coefficients. """ if not have_tf: raise ImportError( "TensorFlow is not available. Please install TensorFlow 2+." ) if not have_sklearn: raise ImportError( "scikit-learn is not available. Please install scikit-learn." ) input_layer = Input(shape=(sig_length, 1)) x = Conv1D( filters=16, kernel_size=16, kernel_initializer=Orthogonal(), padding="same", name="Conv1", )(input_layer) x = Activation("relu", name="ReLU1")(x) max_pool_1d = tf.keras.layers.MaxPooling1D( pool_size=2, strides=2, padding="valid" ) pool1 = max_pool_1d(x) x = Conv1D(filters=32, kernel_size=8, padding="same", name="Conv2")(pool1) x = Activation("relu", name="ReLU2")(x) pool2 = max_pool_1d(x) pool2_flat = tf.keras.layers.Flatten()(pool2) logits = tf.keras.layers.Dense(units=sig_length, activation="relu")(pool2_flat) model = Model(inputs=input_layer, outputs=logits) model.compile( optimizer=optimizer, loss=loss, metrics=metrics, loss_weights=loss_weights ) self.model = model @warning_for_keywords() def compile(self, *, optimizer="adam", loss=None, metrics=None, loss_weights=None): """Configure the model for training. Parameters ---------- optimizer : str or optimizer object, optional Name of optimizer or optimizer object. loss : str or objective function, optional Name of objective function or objective function itself. If 'None', the model will be compiled without any loss function and can only be used to predict output. metrics : list of metrics, optional List of metrics to be evaluated by the model during training and testing. loss_weights : list or dict, optional Optional list or dictionary specifying scalar coefficients(floats) to weight the loss contributions of different model outputs. 
The loss value that will be minimized by the model will then be the weighted sum of all individual losses. If a list, it is expected to have a 1:1 mapping to the model's outputs. If a dict, it is expected to map output names (strings) to scalar coefficients. """ self.model.compile( optimizer=optimizer, loss=loss, metrics=metrics, loss_weights=loss_weights ) def summary(self): """Get the summary of the model. The summary is textual and includes information about: The layers and their order in the model. The output shape of each layer. Returns ------- summary : NoneType the summary of the model """ return self.model.summary() @warning_for_keywords() def train_test_split( self, x, y, *, test_size=None, train_size=None, random_state=None, shuffle=True, stratify=None, ): """Split the input data into random train and test subsets. Parameters ---------- x: numpy array input data. y: numpy array target data. test_size: float or int, optional If float, should be between 0.0 and 1.0 and represent the proportion of the dataset to include in the test split. If int, represents the absolute number of test samples. If None, the value is set to the complement of the train size. If train_size is also None, it will be set to 0.25. train_size: float or int, optional If float, should be between 0.0 and 1.0 and represent the proportion of the dataset to include in the train split. If int, represents the absolute number of train samples. If None, the value is automatically set to the complement of the test size. random_state: int, np.random.Generator instance or None, optional Controls the shuffling applied to the data before applying the split. Pass an int for reproducible output across multiple function calls. See Glossary. shuffle: bool, optional Whether or not to shuffle the data before splitting. If shuffle=False then stratify must be None. stratify: array-like, optional If not None, data is split in a stratified fashion, using this as the class labels. Read more in the User Guide. Returns ------- Tuple of four numpy arrays: x_train, x_test, y_train, y_test. """ sz = x.shape if len(sz) == 4: x = np.reshape(x, (sz[0] * sz[1] * sz[2], sz[3])) sz = y.shape if len(sz) == 4: y = np.reshape(y, (sz[0] * sz[1] * sz[2], sz[3])) return sklearn.train_test_split( x, y, test_size=test_size, train_size=train_size, random_state=random_state, shuffle=shuffle, stratify=stratify, ) @warning_for_keywords() def fit( self, x, y, *, batch_size=None, epochs=1, verbose=1, callbacks=None, validation_split=0.0, validation_data=None, shuffle=True, initial_epoch=0, steps_per_epoch=None, validation_steps=None, validation_batch_size=None, validation_freq=1, ): """Train the model on train dataset. The fit method will train the model for a fixed number of epochs (iterations) on a dataset. If given data is 4D it will convert it into 1D. Parameters ---------- x : ndarray The input data, as an ndarray. y : ndarray The target data, as an ndarray. batch_size : int or None, optional Number of samples per batch of computation. epochs : int, optional The number of epochs. verbose : 'auto', 0, 1, or 2, optional Verbosity mode. 0 = silent, 1 = progress bar, 2 = one line per epoch. callbacks : list of keras.callbacks.Callback instances, optional List of callbacks to apply during training. validation_split : float between 0 and 1, optional Fraction of the training data to be used as validation data. validation_data : tuple (x_val, y_val) or None, optional Data on which to evaluate the loss and any model metrics at the end of each epoch. 
shuffle : boolean, optional This argument is ignored when x is a generator or an object of tf.data.Dataset. initial_epoch : int, optional Epoch at which to start training. steps_per_epoch : int or None, optional Total number of steps (batches of samples) before declaring one epoch finished and starting the next epoch. validation_batch_size : int or None, optional Number of samples per validation batch. validation_steps : int or None, optional Only relevant if validation_data is provided and is a tf.data dataset. validation_freq : int or list/tuple/set, optional Only relevant if validation data is provided. If an integer, specifies how many training epochs to run before a new validation run is performed. If a list, tuple, or set, specifies the epochs on which to run validation. max_queue_size : int, optional Used for generator or keras.utils.Sequence input only. workers : integer, optional Used for generator or keras.utils.Sequence input only. use_multiprocessing : boolean, optional Used for generator or keras.utils.Sequence input only. Returns ------- hist : object A History object. Its History.history attribute is a record of training loss values and metrics values at successive epochs. """ sz = x.shape if len(sz) == 4: x = np.reshape(x, (sz[0] * sz[1] * sz[2], sz[3])) sz = y.shape if len(sz) == 4: y = np.reshape(y, (sz[0] * sz[1] * sz[2], sz[3])) return self.model.fit( x=x, y=y, batch_size=batch_size, epochs=epochs, verbose=verbose, callbacks=callbacks, validation_split=validation_split, validation_data=validation_data, shuffle=shuffle, initial_epoch=initial_epoch, steps_per_epoch=steps_per_epoch, validation_steps=validation_steps, validation_batch_size=validation_batch_size, validation_freq=validation_freq, ) @warning_for_keywords() def evaluate( self, x, y, *, batch_size=None, verbose=1, steps=None, callbacks=None, return_dict=False, ): """Evaluate the model on a test dataset. Parameters ---------- x : ndarray Test dataset (high-noise data). If 4D, it will be converted to 1D. y : ndarray Labels of the test dataset (low-noise data). If 4D, it will be converted to 1D. batch_size : int, optional Number of samples per gradient update. verbose : int, optional Verbosity mode. 0 = silent, 1 = progress bar, 2 = one line per epoch. steps : int, optional Total number of steps (batches of samples) before declaring the evaluation round finished. callbacks : list, optional List of callbacks to apply during evaluation. max_queue_size : int, optional Maximum size for the generator queue. workers : int, optional Maximum number of processes to spin up when using process-based threading. use_multiprocessing : bool, optional If `True`, use process-based threading. return_dict : bool, optional If `True`, loss and metric results are returned as a dictionary. Returns ------- List or dict If `return_dict` is `False`, returns a list of [loss, metrics] values on the test dataset. If `return_dict` is `True`, returns a dictionary of metric names and their corresponding values. """ sz = x.shape if len(sz) == 4: x = np.reshape(x, (sz[0] * sz[1] * sz[2], sz[3])) sz = y.shape if len(sz) == 4: y = np.reshape(y, (sz[0] * sz[1] * sz[2], sz[3])) return self.model.evaluate( x=x, y=y, batch_size=batch_size, verbose=verbose, steps=steps, callbacks=callbacks, return_dict=return_dict, ) @warning_for_keywords() def predict(self, x, *, batch_size=None, verbose=0, steps=None, callbacks=None): """Generate predictions for input samples. Parameters ---------- x : ndarray Input samples. 
batch_size : int, optional Number of samples per batch. verbose : int, optional Verbosity mode. steps : int, optional Total number of steps (batches of samples) before declaring the prediction round finished. callbacks : list, optional List of Keras callbacks to apply during prediction. max_queue_size : int, optional Maximum size for the generator queue. workers : int, optional Maximum number of processes to spin up when using process-based threading. use_multiprocessing : bool, optional If `True`, use process-based threading. If `False`, use thread-based threading. Returns ------- ndarray Numpy array of predictions. """ sz = x.shape x = np.reshape(x, (sz[0] * sz[1] * sz[2], sz[3])) predicted_output = self.model.predict( x=x, batch_size=batch_size, verbose=verbose, steps=steps, callbacks=callbacks, ) predicted_output = np.float32( np.reshape(predicted_output, (sz[0], sz[1], sz[2], sz[3])) ) return predicted_output @warning_for_keywords() def save_weights(self, filepath, *, overwrite=True): """Save the weights of the model to HDF5 file format. Parameters ---------- filepath : str The path where the weights should be saved. overwrite : bool,optional If `True`, overwrites the file if it already exists. If `False`, raises an error if the file already exists. """ self.model.save_weights( filepath=filepath, overwrite=overwrite, save_format=None ) def load_weights(self, filepath): """Load the model weights from the specified file path. Parameters ---------- filepath : str The file path from which to load the weights. """ self.model.load_weights(filepath) dipy-1.11.0/dipy/nn/tf/deepn4.py000066400000000000000000000244121476546756600163160ustar00rootroot00000000000000#!/usr/bin/python """Class and helper functions for fitting the DeepN4 model.""" import logging import numpy as np from scipy.ndimage import gaussian_filter from dipy.data import get_fnames from dipy.nn.utils import normalize, recover_img, set_logger_level, transform_img from dipy.testing.decorators import doctest_skip_parser, warning_for_keywords from dipy.utils.optpkg import optional_package tf, have_tf, _ = optional_package("tensorflow", min_version="2.18.0") if have_tf: from tensorflow.keras.layers import ( Concatenate, Conv3D, Conv3DTranspose, GroupNormalization, Layer, LeakyReLU, MaxPool3D, ) from tensorflow.keras.models import Model else: class Model: pass class Layer: pass logging.warning( "This model requires Tensorflow.\ Please install these packages using \ pip. If using mac, please refer to this \ link for installation. 
\ https://github.com/apple/tensorflow_macos" ) logging.basicConfig() logger = logging.getLogger("deepn4") class EncoderBlock(Layer): def __init__(self, out_channels, kernel_size, strides, padding): super(EncoderBlock, self).__init__() self.conv3d = Conv3D( out_channels, kernel_size, strides=strides, padding=padding, use_bias=False ) self.instnorm = GroupNormalization( groups=-1, axis=-1, epsilon=1e-05, center=False, scale=False ) self.activation = LeakyReLU(0.01) def call(self, input): x = self.conv3d(input) x = self.instnorm(x) x = self.activation(x) return x class DecoderBlock(Layer): def __init__(self, out_channels, kernel_size, strides, padding): super(DecoderBlock, self).__init__() self.conv3d = Conv3DTranspose( out_channels, kernel_size, strides=strides, padding=padding, use_bias=False ) self.instnorm = GroupNormalization( groups=-1, axis=-1, epsilon=1e-05, center=False, scale=False ) self.activation = LeakyReLU(0.01) def call(self, input): x = self.conv3d(input) x = self.instnorm(x) x = self.activation(x) return x def UNet3D(input_shape): inputs = tf.keras.Input(input_shape) # Encode x = EncoderBlock(32, kernel_size=3, strides=1, padding="same")(inputs) syn0 = EncoderBlock(64, kernel_size=3, strides=1, padding="same")(x) x = MaxPool3D()(syn0) x = EncoderBlock(64, kernel_size=3, strides=1, padding="same")(x) syn1 = EncoderBlock(128, kernel_size=3, strides=1, padding="same")(x) x = MaxPool3D()(syn1) x = EncoderBlock(128, kernel_size=3, strides=1, padding="same")(x) syn2 = EncoderBlock(256, kernel_size=3, strides=1, padding="same")(x) x = MaxPool3D()(syn2) x = EncoderBlock(256, kernel_size=3, strides=1, padding="same")(x) x = EncoderBlock(512, kernel_size=3, strides=1, padding="same")(x) # Last layer without relu x = Conv3D(512, kernel_size=1, strides=1, padding="same")(x) x = DecoderBlock(512, kernel_size=2, strides=2, padding="valid")(x) x = Concatenate()([x, syn2]) x = DecoderBlock(256, kernel_size=3, strides=1, padding="same")(x) x = DecoderBlock(256, kernel_size=3, strides=1, padding="same")(x) x = DecoderBlock(256, kernel_size=2, strides=2, padding="valid")(x) x = Concatenate()([x, syn1]) x = DecoderBlock(128, kernel_size=3, strides=1, padding="same")(x) x = DecoderBlock(128, kernel_size=3, strides=1, padding="same")(x) x = DecoderBlock(128, kernel_size=2, strides=2, padding="valid")(x) x = Concatenate()([x, syn0]) x = DecoderBlock(64, kernel_size=3, strides=1, padding="same")(x) x = DecoderBlock(64, kernel_size=3, strides=1, padding="same")(x) x = DecoderBlock(1, kernel_size=1, strides=1, padding="valid")(x) # Last layer without relu out = Conv3DTranspose(1, kernel_size=1, strides=1, padding="valid")(x) return Model(inputs, out) class DeepN4: """This class is intended for the DeepN4 model. The DeepN4 model :footcite:p:`Kanakaraj2024` predicts the bias field for magnetic field inhomogeneity correction on T1-weighted images. References ---------- .. footbibliography:: """ @warning_for_keywords() @doctest_skip_parser def __init__(self, *, verbose=False): """Model initialization To obtain the pre-trained model, use fetch_default_weights() like: >>> deepn4_model = DeepN4() # skip if not have_tf >>> deepn4_model.fetch_default_weights() # skip if not have_tf This model is designed to take as input file T1 signal and predict bias field. Effectively, this model is mimicking bias correction. Parameters ---------- verbose : bool, optional Whether to show information about the processing. 
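Once the weights are loaded (see fetch_default_weights below), correction is a single call. A sketch, assuming ``t1`` is a 3D T1-weighted array with affine ``aff`` (both hypothetical names)::

    corrected = deepn4_model.predict(t1, aff, voxsize=(1, 1, 1))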
""" if not have_tf: raise tf() log_level = "INFO" if verbose else "CRITICAL" set_logger_level(log_level, logger) # Synb0 network load self.model = UNet3D(input_shape=(128, 128, 128, 1)) def fetch_default_weights(self): """Load the model pre-training weights to use for the fitting.""" fetch_model_weights_path = get_fnames(name="deepn4_default_tf_weights") self.load_model_weights(fetch_model_weights_path) def load_model_weights(self, weights_path): """Load the custom pre-training weights to use for the fitting. Parameters ---------- weights_path : str Path to the file containing the weights (hdf5, saved by tensorflow) """ try: self.model.load_weights(weights_path) except ValueError as e: raise ValueError( "Expected input for the provided model weights \ do not match the declared model" ) from e def __predict(self, x_test): """Internal prediction function Predict bias field from input T1 signal Parameters ---------- x_test : np.ndarray (128, 128, 128, 1) Image should match the required shape of the model. Returns ------- np.ndarray (128, 128, 128) Predicted bias field """ return self.model.predict(x_test)[..., 0] def pad(self, img, sz): tmp = np.zeros((sz, sz, sz)) diff = int((sz - img.shape[0]) / 2) lx = max(diff, 0) lX = min(img.shape[0] + diff, sz) diff = (img.shape[0] - sz) / 2 rx = max(int(np.floor(diff)), 0) rX = min(img.shape[0] - int(np.ceil(diff)), img.shape[0]) diff = int((sz - img.shape[1]) / 2) ly = max(diff, 0) lY = min(img.shape[1] + diff, sz) diff = (img.shape[1] - sz) / 2 ry = max(int(np.floor(diff)), 0) rY = min(img.shape[1] - int(np.ceil(diff)), img.shape[1]) diff = int((sz - img.shape[2]) / 2) lz = max(diff, 0) lZ = min(img.shape[2] + diff, sz) diff = (img.shape[2] - sz) / 2 rz = max(int(np.floor(diff)), 0) rZ = min(img.shape[2] - int(np.ceil(diff)), img.shape[2]) tmp[lx:lX, ly:lY, lz:lZ] = img[rx:rX, ry:rY, rz:rZ] return tmp, [lx, lX, ly, lY, lz, lZ, rx, rX, ry, rY, rz, rZ] def load_resample(self, subj): input_data, [lx, lX, ly, lY, lz, lZ, rx, rX, ry, rY, rz, rZ] = self.pad( subj, 128 ) in_max = np.percentile(input_data[np.nonzero(input_data)], 99.99) input_data = normalize(input_data, min_v=0, max_v=in_max, new_min=0, new_max=1) input_data = np.squeeze(input_data) input_vols = np.zeros((1, 128, 128, 128, 1)) input_vols[0, :, :, :, 0] = input_data return ( tf.convert_to_tensor(input_vols, dtype=tf.float32), lx, lX, ly, lY, lz, lZ, rx, rX, ry, rY, rz, rZ, in_max, ) def predict(self, img, img_affine, *, voxsize=(1, 1, 1), threshold=0.5): """Wrapper function to facilitate prediction of larger dataset. The function will mask, normalize, split, predict and 're-assemble' the data as a volume. Parameters ---------- img : np.ndarray T1 image to predict and apply bias field img_affine : np.ndarray (4, 4) Affine matrix for the T1 image voxsize : np.ndarray or list or tuple (3,), optional voxel size of the T1 image. threshold : float, optional Threshold for cleaning the final correction field Returns ------- final_corrected : np.ndarray (x, y, z) Predicted bias corrected image. 
The volume has matching shape to the input data """ # Preprocess input data (resample, normalize, and pad) resampled_T1, inv_affine, mid_shape, offset_array, scale, crop_vs, pad_vs = ( transform_img(img, img_affine, voxsize=voxsize) ) (in_features, lx, lX, ly, lY, lz, lZ, rx, rX, ry, rY, rz, rZ, in_max) = ( self.load_resample(resampled_T1) ) # Run the model to get the bias field logfield = self.__predict(in_features) field = np.exp(logfield) field = field.squeeze() # Postprocess predicted field (reshape - unpad, smooth the field, # upsample) final_field = np.zeros( [resampled_T1.shape[0], resampled_T1.shape[1], resampled_T1.shape[2]] ) final_field[rx:rX, ry:rY, rz:rZ] = field[lx:lX, ly:lY, lz:lZ] final_fields = gaussian_filter(final_field, sigma=3) upsample_final_field = recover_img( final_fields, inv_affine, mid_shape, img.shape, offset_array, voxsize, scale, crop_vs, pad_vs, ) # Correct the image below_threshold_mask = np.abs(upsample_final_field) < threshold with np.errstate(divide="ignore", invalid="ignore"): final_corrected = np.where( below_threshold_mask, 0, img / upsample_final_field ) return final_corrected dipy-1.11.0/dipy/nn/tf/evac.py000066400000000000000000000344501476546756600160600ustar00rootroot00000000000000#!/usr/bin/python """Class and helper functions for fitting the EVAC+ model.""" import logging import numpy as np from dipy.align.reslice import reslice from dipy.data import get_fnames from dipy.nn.utils import ( normalize, recover_img, set_logger_level, transform_img, ) from dipy.segment.utils import remove_holes_and_islands from dipy.testing.decorators import doctest_skip_parser, warning_for_keywords from dipy.utils.deprecator import deprecated_params from dipy.utils.optpkg import optional_package tf, have_tf, _ = optional_package("tensorflow", min_version="2.18.0") if have_tf: from tensorflow.keras.layers import ( Add, Concatenate, Conv3D, Conv3DTranspose, Dropout, Layer, LayerNormalization, ReLU, Softmax, ) from tensorflow.keras.models import Model else: class Model: pass class Layer: pass logging.warning( "This model requires Tensorflow.\ Please install these packages using \ pip. If using mac, please refer to this \ link for installation. 
\ https://github.com/apple/tensorflow_macos" ) logging.basicConfig() logger = logging.getLogger("EVAC+") def prepare_img(image): """Function to prepare image for model input Specific to EVAC+ Parameters ---------- image : np.ndarray Input image Returns ------- input_data : dict """ input1 = np.moveaxis(image, -1, 0) input1 = np.expand_dims(input1, -1) input2, _ = reslice(image, np.eye(4), (1, 1, 1), (2, 2, 2)) input2 = np.moveaxis(input2, -1, 0) input2 = np.expand_dims(input2, -1) input3, _ = reslice(image, np.eye(4), (1, 1, 1), (4, 4, 4)) input3 = np.moveaxis(input3, -1, 0) input3 = np.expand_dims(input3, -1) input4, _ = reslice(image, np.eye(4), (1, 1, 1), (8, 8, 8)) input4 = np.moveaxis(input4, -1, 0) input4 = np.expand_dims(input4, -1) input5, _ = reslice(image, np.eye(4), (1, 1, 1), (16, 16, 16)) input5 = np.moveaxis(input5, -1, 0) input5 = np.expand_dims(input5, -1) input_data = { "input_1": input1, "input_2": input2, "input_3": input3, "input_4": input4, "input_5": input5, } return input_data class Block(Layer): @warning_for_keywords() def __init__( self, out_channels, kernel_size, strides, padding, drop_r, n_layers, *, layer_type="down", ): super(Block, self).__init__() self.layer_list = [] self.layer_list2 = [] self.n_layers = n_layers for _ in range(n_layers): self.layer_list.append( Conv3D(out_channels, kernel_size, strides=strides, padding=padding) ) self.layer_list.append(Dropout(drop_r)) self.layer_list.append(LayerNormalization()) self.layer_list.append(ReLU()) if layer_type == "down": self.layer_list2.append(Conv3D(1, 2, strides=2, padding="same")) self.layer_list2.append(ReLU()) elif layer_type == "up": self.layer_list2.append(Conv3DTranspose(1, 2, strides=2, padding="same")) self.layer_list2.append(ReLU()) self.channel_sum = ChannelSum() self.add = Add() def call(self, input, passed): x = input for layer in self.layer_list: x = layer(x) x = self.channel_sum(x) fwd = self.add([x, passed]) x = fwd for layer in self.layer_list2: x = layer(x) return fwd, x class ChannelSum(Layer): def __init__(self): super(ChannelSum, self).__init__() def call(self, inputs): return tf.reduce_sum(inputs, axis=-1, keepdims=True) @warning_for_keywords() def init_model(*, model_scale=16): """Function to create model for EVAC+ Parameters ---------- model_scale : int, optional The scale of the model Should match the saved weights from fetcher Default is 16 Returns ------- model : tf.keras.Model """ inputs = tf.keras.Input(shape=(128, 128, 128, 1), name="input_1") raw_input_2 = tf.keras.Input(shape=(64, 64, 64, 1), name="input_2") raw_input_3 = tf.keras.Input(shape=(32, 32, 32, 1), name="input_3") raw_input_4 = tf.keras.Input(shape=(16, 16, 16, 1), name="input_4") raw_input_5 = tf.keras.Input(shape=(8, 8, 8, 1), name="input_5") # Encode fwd1, x = Block( model_scale, kernel_size=5, strides=1, padding="same", drop_r=0.2, n_layers=1 )(inputs, inputs) x = Concatenate()([x, raw_input_2]) fwd2, x = Block( model_scale * 2, kernel_size=5, strides=1, padding="same", drop_r=0.5, n_layers=2, )(x, x) x = Concatenate()([x, raw_input_3]) fwd3, x = Block( model_scale * 4, kernel_size=5, strides=1, padding="same", drop_r=0.5, n_layers=3, )(x, x) x = Concatenate()([x, raw_input_4]) fwd4, x = Block( model_scale * 8, kernel_size=5, strides=1, padding="same", drop_r=0.5, n_layers=3, )(x, x) x = Concatenate()([x, raw_input_5]) _, up = Block( model_scale * 16, kernel_size=5, strides=1, padding="same", drop_r=0.5, n_layers=3, layer_type="up", )(x, x) x = Concatenate()([fwd4, up]) _, up = Block( model_scale * 8, 
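# decoder path: feature width halves at each upsampling stage (model_scale * 16 -> 8 -> 4 -> 2 -> 1)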
kernel_size=5, strides=1, padding="same", drop_r=0.5, n_layers=3, layer_type="up", )(x, up) x = Concatenate()([fwd3, up]) _, up = Block( model_scale * 4, kernel_size=5, strides=1, padding="same", drop_r=0.5, n_layers=3, layer_type="up", )(x, up) x = Concatenate()([fwd2, up]) _, up = Block( model_scale * 2, kernel_size=5, strides=1, padding="same", drop_r=0.5, n_layers=2, layer_type="up", )(x, up) x = Concatenate()([fwd1, up]) _, pred = Block( model_scale, kernel_size=5, strides=1, padding="same", drop_r=0.5, n_layers=1, layer_type="none", )(x, up) pred = Conv3D(2, 1, padding="same")(pred) output = Softmax(axis=-1)(pred) model = Model( { "input_1": inputs, "input_2": raw_input_2, "input_3": raw_input_3, "input_4": raw_input_4, "input_5": raw_input_5, }, output[..., 0], ) return model class EVACPlus: """This class is intended for the EVAC+ model. The EVAC+ model :footcite:p:`Park2024` is a deep learning neural network for brain extraction. It uses a V-net architecture combined with multi-resolution input data, an additional conditional random field (CRF) recurrent layer and supplementary Dice loss term for this recurrent layer. References ---------- .. footbibliography:: """ @doctest_skip_parser @warning_for_keywords() def __init__(self, *, verbose=False): """Model initialization The model was pre-trained for usage on brain extraction of T1 images. This model is designed to take as input a T1 weighted image. Parameters ---------- verbose : bool, optional Whether to show information about the processing. """ if not have_tf: raise tf() log_level = "INFO" if verbose else "CRITICAL" set_logger_level(log_level, logger) # EVAC+ network load self.model = init_model() self.fetch_default_weights() def fetch_default_weights(self): """Load the model pre-training weights to use for the fitting. While the user can load different weights, the function is mainly intended for the class function 'predict'. """ fetch_model_weights_path = get_fnames(name="evac_default_tf_weights") print(f"fetched {fetch_model_weights_path}") self.load_model_weights(fetch_model_weights_path) def load_model_weights(self, weights_path): """Load the custom pre-training weights to use for the fitting. Parameters ---------- weights_path : str Path to the file containing the weights (hdf5, saved by tensorflow) """ try: self.model.load_weights(weights_path) except ValueError as e: raise ValueError( "Expected input for the provided model weights \ do not match the declared model" ) from e def __predict(self, x_test): """Internal prediction function Parameters ---------- x_test : np.ndarray (batch, 128, 128, 128, 1) Image should match the required shape of the model. Returns ------- np.ndarray (batch, ...) Predicted brain mask """ return self.model.predict(x_test) @deprecated_params( "largest_area", new_name="finalize_mask", since="1.10", until="1.12" ) def predict( self, T1, affine, *, voxsize=(1, 1, 1), batch_size=None, return_affine=False, return_prob=False, finalize_mask=True, ): """Wrapper function to facilitate prediction of larger dataset. Parameters ---------- T1 : np.ndarray or list of np.ndarray For a single image, input should be a 3D array. If multiple images, it should be a a list or tuple. or list of np.ndarrays with len of batch_size affine : np.ndarray (4, 4) or (batch, 4, 4) Affine matrix for the T1 image. Should have batch dimension if T1 has one. voxsize : np.ndarray or list or tuple, optional (3,) or (batch, 3) voxel size of the T1 image. batch_size : int, optional Number of images per prediction pass. 
Only available if data is provided with a batch dimension. Consider lowering it if you get an out of memory error. Increase it if you want it to be faster and have a lot of data. If None, batch_size will be set to 1 if the provided image has a batch dimension. return_affine : bool, optional Whether to return the affine matrix. Useful if the input was a file path. return_prob : bool, optional Whether to return the probability map instead of a binary mask. Useful for testing. finalize_mask : bool, optional Whether to remove potential holes or islands. Useful for solving minor errors. Returns ------- pred_output : np.ndarray (...) or (batch, ...) Predicted brain mask affine : np.ndarray (...) or (batch, ...) Affine matrix of the mask. Only returned if return_affine is True. """ voxsize = np.array(voxsize) affine = np.array(affine) if isinstance(T1, (list, tuple)): dim = 4 T1 = np.array(T1) elif len(T1.shape) == 3: dim = 3 if batch_size is not None: logger.warning( "Batch size specified, but not used due to the input not having a batch dimension" ) T1 = np.expand_dims(T1, 0) affine = np.expand_dims(affine, 0) voxsize = np.expand_dims(voxsize, 0) else: raise ValueError( "T1 data should be a np.ndarray of dimension 3 or a list/tuple of it" ) if batch_size is None: batch_size = 1 input_data = np.zeros((128, 128, 128, len(T1))) affines = np.zeros((len(T1), 4, 4)) mid_shapes = np.zeros((len(T1), 3)).astype(int) offset_arrays = np.zeros((len(T1), 4, 4)).astype(int) scales = np.zeros(len(T1)) crop_vss = np.zeros((len(T1), 3, 2)) pad_vss = np.zeros((len(T1), 3, 2)) # Normalize the data. n_T1 = np.zeros(T1.shape) for i, T1_img in enumerate(T1): n_T1[i] = normalize(T1_img, new_min=0, new_max=1) t_img, t_affine, mid_shape, offset_array, scale, crop_vs, pad_vs = ( transform_img(n_T1[i], affine[i], voxsize=voxsize[i]) ) input_data[..., i] = t_img affines[i] = t_affine mid_shapes[i] = mid_shape offset_arrays[i] = offset_array scales[i] = scale crop_vss[i] = crop_vs pad_vss[i] = pad_vs # Prediction stage prediction = np.zeros((len(T1), 128, 128, 128), dtype=np.float32) for batch_idx in range(batch_size, len(T1) + 1, batch_size): batch = input_data[..., batch_idx - batch_size : batch_idx] temp_input = prepare_img(batch) temp_pred = self.__predict(temp_input) prediction[batch_idx - batch_size : batch_idx] = temp_pred remainder = np.mod(len(T1), batch_size) if remainder != 0: temp_input = prepare_img(input_data[..., -remainder:]) temp_pred = self.__predict(temp_input) prediction[-remainder:] = temp_pred output_mask = [] for i in range(len(T1)): output = recover_img( prediction[i], affines[i], mid_shapes[i], n_T1[i].shape, offset_arrays[i], voxsize=voxsize[i], scale=scales[i], crop_vs=crop_vss[i], pad_vs=pad_vss[i], ) if not return_prob: output = np.where(output >= 0.5, 1, 0) if finalize_mask: output = remove_holes_and_islands(output, slice_wise=True) output_mask.append(output) if dim == 3: output_mask = output_mask[0] affine = affine[0] output_mask = np.array(output_mask) affine = np.array(affine) if return_affine: return output_mask, affine else: return output_mask dipy-1.11.0/dipy/nn/tf/histo_resdnn.py000066400000000000000000000240131476546756600176330ustar00rootroot00000000000000#!/usr/bin/python """Class and helper functions for fitting the Histological ResDNN model.""" import logging import numpy as np from dipy.core.gradients import get_bval_indices, unique_bvals_magnitude from dipy.core.sphere import HemiSphere from dipy.data import get_fnames, get_sphere from dipy.nn.utils import set_logger_level from dipy.reconst.shm import sf_to_sh,
sh_to_sf, sph_harm_ind_list from dipy.testing.decorators import doctest_skip_parser, warning_for_keywords from dipy.utils.deprecator import deprecated_params from dipy.utils.optpkg import optional_package tf, have_tf, _ = optional_package("tensorflow", min_version="2.18.0") if have_tf: from tensorflow.keras.layers import Add, Dense, Input from tensorflow.keras.models import Model else: logging.warning( "This model requires Tensorflow.\ Please install these packages using \ pip. If using mac, please refer to this \ link for installation. \ https://github.com/apple/tensorflow_macos" ) logging.basicConfig() logger = logging.getLogger("histo_resdnn") class HistoResDNN: """This class is intended for the ResDNN Histology Network model. ResDNN :footcite:p:`Nath2019` is a deep neural network that employs residual blocks deep neural network to predict ground truth SH coefficients from SH coefficients computed using DWI data. To this end, authors considered histology FOD-computed SH coefficients (obtained from ex vivo non-human primate acquisitions) as their ground truth, and the DWI-computed SH coefficients as their target. References ---------- .. footbibliography:: """ @deprecated_params("sh_order", new_name="sh_order_max", since="1.9", until="2.0") @warning_for_keywords() @doctest_skip_parser def __init__(self, *, sh_order_max=8, basis_type="tournier07", verbose=False): """Model initialization The model was re-trained for usage with a different basis function ('tournier07') like the proposed model in :footcite:p:`Nath2019`. To obtain the pre-trained model, use:: >>> resdnn_model = HistoResDNN() # skip if not have_tf >>> fetch_model_weights_path = get_fnames(name='histo_resdnn_tf_weights') # skip if not have_tf >>> resdnn_model.load_model_weights(fetch_model_weights_path) # skip if not have_tf This model is designed to take as input raw DWI signal on a sphere (ODF) represented as SH of order 8 in the tournier basis and predict fODF of order 8 in the tournier basis. Effectively, this model is mimicking a CSD fit. Parameters ---------- sh_order_max : int, optional Maximum SH order (l) in the SH fit. For ``sh_order_max``, there will be ``(sh_order_max + 1) * (sh_order_max + 2) / 2`` SH coefficients for a symmetric basis. basis_type : {'tournier07', 'descoteaux07'}, optional ``tournier07`` (default) or ``descoteaux07``. verbose : bool, optional Whether to show information about the processing. References ---------- .. footbibliography:: """ # noqa: E501 if not have_tf: raise tf() self.sh_order_max = sh_order_max self.sh_size = len(sph_harm_ind_list(sh_order_max)[0]) self.basis_type = basis_type log_level = "INFO" if verbose else "CRITICAL" set_logger_level(log_level, logger) if self.basis_type != "tournier07": logger.warning( "Be careful, original weights were obtained " "from training on the tournier07 basis, " "unless you re-trained the network, do not " "change basis!" ) # ResDNN Network Flow num_hidden = self.sh_size inputs = Input(shape=(self.sh_size,)) x1 = Dense(400, activation="relu")(inputs) x2 = Dense(num_hidden, activation="relu")(x1) x3 = Dense(200, activation="relu")(x2) x4 = Dense(num_hidden, activation="linear")(x3) res_add = Add()([x2, x4]) x5 = Dense(200, activation="relu")(res_add) x6 = Dense(num_hidden)(x5) self.model = Model(inputs=inputs, outputs=x6) def fetch_default_weights(self): """Load the model pre-training weights to use for the fitting. Will not work if the declared SH_ORDER does not match the weights expected input. 
""" fetch_model_weights_path = get_fnames(name="histo_resdnn_tf_weights") self.load_model_weights(fetch_model_weights_path) def load_model_weights(self, weights_path): """Load the custom pre-training weights to use for the fitting. Will not work if the declared SH_ORDER does not match the weights expected input. The weights for a sh_order of 8 can be obtained via the function: get_fnames(name='histo_resdnn_tf_weights'). Parameters ---------- weights_path : str Path to the file containing the weights (hdf5, saved by tensorflow) """ try: self.model.load_weights(weights_path) except ValueError as e: raise ValueError( "Expected input for the provided model weights do not match the " f"declared model ({self.sh_size})" ) from e def __predict(self, x_test): """Predict fODF (as SH) from input raw DWI signal (as SH) Parameters ---------- x_test : np.ndarray Array of size (N, M) where M is ``(sh_order_max + 1) * (sh_order_max + 2) / 2``. N should not be too big as to limit memory usage. Returns ------- np.ndarray (N, M) Predicted fODF (as SH) """ if x_test.shape[-1] != self.sh_size: raise ValueError( "Expected input for the provided model weights do not match the " f"declared model ({self.sh_size})" ) return self.model.predict(x_test) @warning_for_keywords() def predict(self, data, gtab, *, mask=None, chunk_size=1000): """Wrapper function to facilitate prediction of larger dataset. The function will mask, normalize, split, predict and 're-assemble' the data as a volume. Parameters ---------- data : np.ndarray DWI signal in a 4D array gtab : GradientTable class instance The acquisition scheme matching the data (must contain at least one b0) mask : np.ndarray, optional Binary mask of the brain to avoid unnecessary computation and unreliable prediction outside the brain. Default: Compute prediction only for nonzero voxels (with at least one nonzero DWI value). chunk_size : int, optional Batch size when running model prediction. Returns ------- pred_sh_coef : np.ndarray (x, y, z, M) Predicted fODF (as SH). The volume has matching shape to the input data, but with ``(sh_order_max + 1) * (sh_order_max + 2) / 2`` as a last dimension. """ if mask is None: logger.warning( "Mask should be provided to accelerate " "computation, and because predictions are " "not reliable outside of the brain." 
) mask = np.sum(data, axis=-1) mask = mask.astype(bool) # Extract B0's and obtain a mean B0 b0_indices = gtab.b0s_mask if not len(b0_indices) > 0: raise ValueError("b0 must be present for DWI normalization.") logger.info(f"b0 indices found are: {np.argwhere(b0_indices).ravel()}") mean_b0 = np.mean(data[..., b0_indices], axis=-1) # Detect number of b-values and extract a single shell of DW-MRI Data unique_shells = np.sort(unique_bvals_magnitude(gtab.bvals)) logger.info(f"Number of b-values: {unique_shells}") # Extract DWI only dw_indices = get_bval_indices(gtab.bvals, unique_shells[1]) dw_data = data[..., dw_indices] dw_bvecs = gtab.bvecs[dw_indices, :] # Normalize the DW-MRI Data with the mean b0 (voxel-wise) norm_dw_data = np.zeros(dw_data.shape) for n in range(len(dw_indices)): norm_dw_data[..., n] = np.divide( dw_data[..., n], mean_b0, where=np.abs(mean_b0) > 0.000001 ) # Fit SH to the raw DWI signal h_sphere = HemiSphere(xyz=dw_bvecs) dw_sh_coef = sf_to_sh( norm_dw_data, h_sphere, smooth=0.0006, basis_type=self.basis_type, sh_order_max=self.sh_order_max, ) # Flatten and mask the data (N, SH_SIZE) to facilitate chunks ori_shape = dw_sh_coef.shape flat_dw_sh_coef = dw_sh_coef[mask > 0] flat_pred_sh_coef = np.zeros(flat_dw_sh_coef.shape) count = len(flat_dw_sh_coef) // chunk_size for i in range(count + 1): if i % 100 == 0 or i == count: logger.info(f"Chunk #{i} out of {count}") tmp_sh = self.__predict( flat_dw_sh_coef[i * chunk_size : (i + 1) * chunk_size] ) # Removing negative values from the SF sphere = get_sphere(name="repulsion724") tmp_sf = sh_to_sf( sh=tmp_sh, sphere=sphere, basis_type=self.basis_type, sh_order_max=self.sh_order_max, ) tmp_sf[tmp_sf < 0] = 0 tmp_sh = sf_to_sh( tmp_sf, sphere, smooth=0.0006, basis_type=self.basis_type, sh_order_max=self.sh_order_max, ) flat_pred_sh_coef[i * chunk_size : (i + 1) * chunk_size] = tmp_sh pred_sh_coef = np.zeros(ori_shape) pred_sh_coef[mask > 0] = flat_pred_sh_coef return pred_sh_coef dipy-1.11.0/dipy/nn/tf/meson.build000066400000000000000000000003371476546756600167270ustar00rootroot00000000000000python_sources = [ '__init__.py', 'cnn_1d_denoising.py', 'deepn4.py', 'evac.py', 'histo_resdnn.py', 'model.py', 'synb0.py', ] py3.install_sources( python_sources, pure: false, subdir: 'dipy/nn/tf' ) dipy-1.11.0/dipy/nn/tf/model.py000066400000000000000000000211531476546756600162360ustar00rootroot00000000000000from dipy.testing.decorators import warning_for_keywords from dipy.utils.optpkg import optional_package tf, have_tf, _ = optional_package("tensorflow", min_version="2.18.0") class SingleLayerPerceptron: @warning_for_keywords() def __init__( self, *, input_shape=(28, 28), num_hidden=128, act_hidden="relu", dropout=0.2, num_out=10, act_out="softmax", optimizer="adam", loss="sparse_categorical_crossentropy", ): """Single Layer Perceptron with Dropout. Parameters ---------- input_shape : tuple Shape of data to be trained num_hidden : int Number of nodes in hidden layer act_hidden : string Activation function used in hidden layer dropout : float Dropout ratio num_out : 10 Number of nodes in output layer act_out : string Activation function used in output layer optimizer : string Select optimizer. Default adam. loss : string Select loss function for measuring accuracy. Default sparse_categorical_crossentropy. 
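
        A minimal construction-and-training sketch (``x_train``, ``y_train``,
        ``x_test`` and ``y_test`` are illustrative arrays, not part of this
        module)::

            slp = SingleLayerPerceptron(num_hidden=128, dropout=0.2)
            history = slp.fit(x_train, y_train, epochs=5)
            loss, accuracy = slp.evaluate(x_test, y_test)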
""" self.accuracy = None self.loss = None model = tf.keras.models.Sequential( [ tf.keras.layers.Input(shape=input_shape), tf.keras.layers.Flatten(), tf.keras.layers.Dense(num_hidden, activation=act_hidden), tf.keras.layers.Dropout(dropout), tf.keras.layers.Dense(num_out, activation=act_out), ] ) model.compile(optimizer=optimizer, loss=loss, metrics=["accuracy"]) self.model = model def summary(self): """Get the summary of the model. The summary is textual and includes information about: The layers and their order in the model. The output shape of each layer. Returns ------- summary : NoneType the summary of the model """ return self.model.summary() @warning_for_keywords() def fit(self, x_train, y_train, *, epochs=5): """Train the model on train dataset. The fit method will train the model for a fixed number of epochs (iterations) on a dataset. Parameters ---------- x_train : ndarray the x_train is the train dataset y_train : ndarray shape=(BatchSize,) the y_train is the labels of the train dataset epochs : int (Default = 5) the number of epochs Returns ------- hist : object A History object. Its History.history attribute is a record of training loss values and metrics values at successive epochs """ hist = self.model.fit(x_train, y_train, epochs=epochs) self.accuracy = hist.history["accuracy"][0] self.loss = hist.history["loss"][0] return hist @warning_for_keywords() def evaluate(self, x_test, y_test, *, verbose=2): """Evaluate the model on test dataset. The evaluate method will evaluate the model on a test dataset. Parameters ---------- x_test : ndarray the x_test is the test dataset y_test : ndarray shape=(BatchSize,) the y_test is the labels of the test dataset verbose : int (Default = 2) By setting verbose 0, 1 or 2 you just say how do you want to 'see' the training progress for each epoch. Returns ------- evaluate : List return list of loss value and accuracy value on test dataset """ return self.model.evaluate(x_test, y_test, verbose=verbose) def predict(self, x_test): """Predict the output from input samples. The predict method will generates output predictions for the input samples. Parameters ---------- x_train : ndarray the x_test is the test dataset or input samples Returns ------- predict : ndarray shape(TestSize,OutputSize) Numpy array(s) of predictions. """ return self.model.predict(x_test) class MultipleLayerPercepton: @warning_for_keywords() def __init__( self, *, input_shape=(28, 28), num_hidden=(128,), act_hidden="relu", dropout=0.2, num_out=10, act_out="softmax", loss="sparse_categorical_crossentropy", optimizer="adam", ): """Multiple Layer Perceptron with Dropout. Parameters ---------- input_shape : tuple Shape of data to be trained num_hidden : array-like List of number of nodes in hidden layers act_hidden : string Activation function used in hidden layer dropout : float Dropout ratio num_out : 10 Number of nodes in output layer act_out : string Activation function used in output layer optimizer : string Select optimizer. Default adam. loss : string Select loss function for measuring accuracy. Default sparse_categorical_crossentropy. 
""" self.input_shape = input_shape self.num_hidden = num_hidden self.act_hidden = act_hidden self.dropout = dropout self.num_out = num_out self.act_out = act_out self.loss = loss self.optimizer = optimizer self.accuracy = None # model building inp = tf.keras.layers.Input(shape=self.input_shape) x = tf.keras.layers.Flatten()(inp) for i in range(len(self.num_hidden)): x = tf.keras.layers.Dense(self.num_hidden[i])(x) x = tf.keras.layers.Dropout(self.dropout)(x) out = tf.keras.layers.Dense(self.num_out, activation=self.act_out)(x) self.model = tf.keras.models.Model(inputs=inp, outputs=out) # compiling the model self.model.compile( optimizer=self.optimizer, loss=self.loss, metrics=["accuracy"] ) def summary(self): """Get the summary of the model. The summary is textual and includes information about: The layers and their order in the model. The output shape of each layer. Returns ------- summary : NoneType the summary of the model """ return self.model.summary() @warning_for_keywords() def fit(self, x_train, y_train, *, epochs=5): """Train the model on train dataset. The fit method will train the model for a fixed number of epochs (iterations) on a dataset. Parameters ---------- x_train : ndarray the x_train is the train dataset y_train : ndarray shape=(BatchSize,) the y_train is the labels of the train dataset epochs : int (Default = 5) the number of epochs Returns ------- hist : object A History object. Its History.history attribute is a record of training loss values and metrics values at successive epochs """ hist = self.model.fit(x_train, y_train, epochs=epochs) self.accuracy = hist.history["accuracy"][0] self.loss = hist.history["loss"][0] return hist @warning_for_keywords() def evaluate(self, x_test, y_test, *, verbose=2): """Evaluate the model on test dataset. The evaluate method will evaluate the model on a test dataset. Parameters ---------- x_test : ndarray the x_test is the test dataset y_test : ndarray shape=(BatchSize,) the y_test is the labels of the test dataset verbose : int (Default = 2) By setting verbose 0, 1 or 2 you just say how do you want to 'see' the training progress for each epoch. Returns ------- evaluate : List return list of loss value and accuracy value on test dataset """ return self.model.evaluate(x_test, y_test, verbose=verbose) def predict(self, x_test): """Predict the output from input samples. The predict method will generates output predictions for the input samples. Parameters ---------- x_train : ndarray the x_test is the test dataset or input samples Returns ------- predict : ndarray shape(TestSize,OutputSize) Numpy array(s) of predictions. """ return self.model.predict(x_test) dipy-1.11.0/dipy/nn/tf/synb0.py000066400000000000000000000270531476546756600161760ustar00rootroot00000000000000#!/usr/bin/python """ Class and helper functions for fitting the Synb0 model. """ import logging import numpy as np from dipy.data import get_fnames from dipy.nn.utils import normalize, set_logger_level, unnormalize from dipy.testing.decorators import doctest_skip_parser, warning_for_keywords from dipy.utils.optpkg import optional_package tf, have_tf, _ = optional_package("tensorflow", min_version="2.18.0") if have_tf: from tensorflow.keras.layers import ( Concatenate, Conv3D, Conv3DTranspose, GroupNormalization, Layer, LeakyReLU, MaxPool3D, ) from tensorflow.keras.models import Model else: class Model: pass class Layer: pass logging.warning( "This model requires Tensorflow.\ Please install these packages using \ pip. 
If using mac, please refer to this \ link for installation. \ https://github.com/apple/tensorflow_macos" ) logging.basicConfig() logger = logging.getLogger("synb0") class EncoderBlock(Layer): def __init__(self, out_channels, kernel_size, strides, padding): super(EncoderBlock, self).__init__() self.conv3d = Conv3D( out_channels, kernel_size, strides=strides, padding=padding, use_bias=False ) self.instnorm = GroupNormalization(groups=-1) self.activation = LeakyReLU(0.01) def call(self, input): x = self.conv3d(input) x = self.instnorm(x) x = self.activation(x) return x class DecoderBlock(Layer): def __init__(self, out_channels, kernel_size, strides, padding): super(DecoderBlock, self).__init__() self.conv3d = Conv3DTranspose( out_channels, kernel_size, strides=strides, padding=padding, use_bias=False ) self.instnorm = GroupNormalization(groups=-1) self.activation = LeakyReLU(0.01) def call(self, input): x = self.conv3d(input) x = self.instnorm(x) x = self.activation(x) return x def UNet3D(input_shape): inputs = tf.keras.Input(input_shape) # Encode x = EncoderBlock(32, kernel_size=3, strides=1, padding="same")(inputs) syn0 = EncoderBlock(64, kernel_size=3, strides=1, padding="same")(x) x = MaxPool3D()(syn0) x = EncoderBlock(64, kernel_size=3, strides=1, padding="same")(x) syn1 = EncoderBlock(128, kernel_size=3, strides=1, padding="same")(x) x = MaxPool3D()(syn1) x = EncoderBlock(128, kernel_size=3, strides=1, padding="same")(x) syn2 = EncoderBlock(256, kernel_size=3, strides=1, padding="same")(x) x = MaxPool3D()(syn2) x = EncoderBlock(256, kernel_size=3, strides=1, padding="same")(x) x = EncoderBlock(512, kernel_size=3, strides=1, padding="same")(x) # Last layer without relu x = Conv3D(512, kernel_size=1, strides=1, padding="same")(x) x = DecoderBlock(512, kernel_size=2, strides=2, padding="valid")(x) x = Concatenate()([x, syn2]) x = DecoderBlock(256, kernel_size=3, strides=1, padding="same")(x) x = DecoderBlock(256, kernel_size=3, strides=1, padding="same")(x) x = DecoderBlock(256, kernel_size=2, strides=2, padding="valid")(x) x = Concatenate()([x, syn1]) x = DecoderBlock(128, kernel_size=3, strides=1, padding="same")(x) x = DecoderBlock(128, kernel_size=3, strides=1, padding="same")(x) x = DecoderBlock(128, kernel_size=2, strides=2, padding="valid")(x) x = Concatenate()([x, syn0]) x = DecoderBlock(64, kernel_size=3, strides=1, padding="same")(x) x = DecoderBlock(64, kernel_size=3, strides=1, padding="same")(x) x = DecoderBlock(1, kernel_size=1, strides=1, padding="valid")(x) # Last layer without relu out = Conv3DTranspose(1, kernel_size=1, strides=1, padding="valid")(x) return Model(inputs, out) class Synb0: """ This class is intended for the Synb0 model. Synb0 :footcite:p:`Schilling2019`, :footcite:p:`Schilling2020` uses a neural network to synthesize a b0 volume for distortion correction in DWI images. The model is the deep learning part of the Synb0-Disco pipeline, thus stand-alone usage is not recommended. References ---------- .. footbibliography:: """ @doctest_skip_parser @warning_for_keywords() def __init__(self, *, verbose=False): r""" The model was pre-trained for usage on pre-processed images following the synb0-disco pipeline. One can load their own weights using load_model_weights. This model is designed to take as input a b0 image and a T1 weighted image. It was designed to predict a b-inf image. Parameters ---------- verbose : bool, optional Whether to show information about the processing. 
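
        A minimal usage sketch (``b0`` and ``t1`` are illustrative,
        pre-processed volumes of shape ``(77, 91, 77)``, as required by
        ``predict``)::

            model = Synb0()
            b_inf = model.predict(b0, t1, average=True)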
""" if not have_tf: raise tf() log_level = "INFO" if verbose else "CRITICAL" set_logger_level(log_level, logger) # Synb0 network load self.model = UNet3D(input_shape=(80, 80, 96, 2)) def fetch_default_weights(self, idx): r""" Load the model pre-training weights to use for the fitting. While the user can load different weights, the function is mainly intended for the class function 'predict'. Parameters ---------- idx : int The idx of the default weights. It can be from 0~4. """ fetch_model_weights_path = get_fnames(name="synb0_default_weights") print(f"fetched {fetch_model_weights_path[idx]}") self.load_model_weights(fetch_model_weights_path[idx]) def load_model_weights(self, weights_path): r""" Load the custom pre-training weights to use for the fitting. Parameters ---------- weights_path : str Path to the file containing the weights (hdf5, saved by tensorflow) """ try: self.model.load_weights(weights_path) except ValueError as e: raise ValueError( "Expected input for the provided model weights \ do not match the declared model" ) from e def __predict(self, x_test): r""" Internal prediction function Parameters ---------- x_test : np.ndarray (batch, 80, 80, 96, 2) Image should match the required shape of the model. Returns ------- np.ndarray (...) or (batch, ...) Reconstructed b-inf image(s) """ return self.model.predict(x_test) @warning_for_keywords() def predict(self, b0, T1, *, batch_size=None, average=True): r""" Wrapper function to facilitate prediction of larger dataset. The function will pad the data to meet the required shape of image. Note that the b0 and T1 image should have the same shape Parameters ---------- b0 : np.ndarray (batch, 77, 91, 77) or (77, 91, 77) For a single image, input should be a 3D array. If multiple images, there should also be a batch dimension. T1 : np.ndarray (batch, 77, 91, 77) or (77, 91, 77) For a single image, input should be a 3D array. If multiple images, there should also be a batch dimension. batch_size : int Number of images per prediction pass. Only available if data is provided with a batch dimension. Consider lowering it if you get an out of memory error. Increase it if you want it to be faster and have a lot of data. If None, batch_size will be set to 1 if the provided image has a batch dimension. Default is None average : bool Whether the function follows the Synb0-Disco pipeline and averages the prediction of 5 different models. If False, it uses the loaded weights for prediction. Default is True. Returns ------- pred_output : np.ndarray (...) or (batch, ...) Reconstructed b-inf image(s) """ # Check if shape is as intended if ( all([b0.shape[1:] != (77, 91, 77), b0.shape != (77, 91, 77)]) or b0.shape != T1.shape ): raise ValueError( "Expected shape (batch, 77, 91, 77) or \ (77, 91, 77) for both inputs" ) dim = len(b0.shape) # Add batch dimension if not provided if dim == 3: T1 = np.expand_dims(T1, 0) b0 = np.expand_dims(b0, 0) shape = b0.shape # Pad the data to match the model's input shape T1 = np.pad(T1, ((0, 0), (2, 1), (3, 2), (2, 1)), "constant") b0 = np.pad(b0, ((0, 0), (2, 1), (3, 2), (2, 1)), "constant") # Normalize the data. 
p99 = np.percentile(b0, 99, axis=(1, 2, 3)) for i in range(shape[0]): T1[i] = normalize(T1[i], min_v=0, max_v=150, new_min=-1, new_max=1) b0[i] = normalize(b0[i], min_v=0, max_v=p99[i], new_min=-1, new_max=1) if dim == 3: if batch_size is not None: logger.warning( "Batch size specified, but not used", "due to the input not having \ a batch dimension", ) batch_size = 1 # Prediction stage if average: mean_pred = np.zeros(shape + (5,), dtype=np.float32) for i in range(5): self.fetch_default_weights(i) temp = np.stack([b0, T1], -1) input_data = np.moveaxis(temp, 3, 1).astype(np.float32) prediction = np.zeros((shape[0], 80, 80, 96, 1), dtype=np.float32) for batch_idx in range(batch_size, shape[0] + 1, batch_size): temp_input = input_data[batch_idx - batch_size : batch_idx] temp_pred = self.__predict(temp_input) prediction[batch_idx - batch_size : batch_idx] = temp_pred remainder = np.mod(shape[0], batch_size) if remainder != 0: temp_pred = self.__predict(input_data[-remainder:]) prediction[-remainder:] = temp_pred for j in range(shape[0]): temp_pred = unnormalize(prediction[j], -1, 1, 0, p99[j]) prediction[j] = temp_pred prediction = prediction[:, 2:-1, 2:-1, 3:-2, 0] prediction = np.moveaxis(prediction, 1, -1) mean_pred[..., i] = prediction prediction = np.mean(mean_pred, axis=-1) else: temp = np.stack([b0, T1], -1) input_data = np.moveaxis(temp, 3, 1).astype(np.float32) prediction = np.zeros((shape[0], 80, 80, 96, 1), dtype=np.float32) for batch_idx in range(batch_size, shape[0] + 1, batch_size): temp_input = input_data[batch_idx - batch_size : batch_idx] temp_pred = self.__predict(temp_input) prediction[:batch_idx] = temp_pred remainder = np.mod(shape[0], batch_size) if remainder != 0: temp_pred = self.__predict(input_data[-remainder:]) prediction[-remainder:] = temp_pred for j in range(shape[0]): prediction[j] = unnormalize(prediction[j], -1, 1, 0, p99[j]) prediction = prediction[:, 2:-1, 2:-1, 3:-2, 0] prediction = np.moveaxis(prediction, 1, -1) if dim == 3: prediction = prediction[0] return prediction dipy-1.11.0/dipy/nn/torch/000077500000000000000000000000001476546756600152705ustar00rootroot00000000000000dipy-1.11.0/dipy/nn/torch/__init__.py000066400000000000000000000000621476546756600173770ustar00rootroot00000000000000# init for nn aka the deep neural network module dipy-1.11.0/dipy/nn/torch/deepn4.py000066400000000000000000000270531476546756600170300ustar00rootroot00000000000000#!/usr/bin/python """Class and helper functions for fitting the DeepN4 model.""" import logging import numpy as np from scipy.ndimage import gaussian_filter from dipy.data import get_fnames from dipy.nn.utils import normalize, recover_img, set_logger_level, transform_img from dipy.testing.decorators import doctest_skip_parser from dipy.utils.optpkg import optional_package torch, have_torch, _ = optional_package("torch", min_version="2.2.0") if have_torch: from torch.nn import ( Conv3d, ConvTranspose3d, InstanceNorm3d, LeakyReLU, MaxPool3d, Module, Sequential, ) else: class Module: pass logging.warning( "This model requires Pytorch.\ Please install these packages using \ pip." 
) logging.basicConfig() logger = logging.getLogger("deepn4") class UNet3D(Module): def __init__(self, n_in, n_out): super(UNet3D, self).__init__() # Encoder c = 32 self.ec0 = self.encoder_block(n_in, c, kernel_size=3, stride=1, padding=1) self.ec1 = self.encoder_block(c, c * 2, kernel_size=3, stride=1, padding=1) self.pool0 = MaxPool3d(2) self.ec2 = self.encoder_block(c * 2, c * 2, kernel_size=3, stride=1, padding=1) self.ec3 = self.encoder_block(c * 2, c * 4, kernel_size=3, stride=1, padding=1) self.pool1 = MaxPool3d(2) self.ec4 = self.encoder_block(c * 4, c * 4, kernel_size=3, stride=1, padding=1) self.ec5 = self.encoder_block(c * 4, c * 8, kernel_size=3, stride=1, padding=1) self.pool2 = MaxPool3d(2) self.ec6 = self.encoder_block(c * 8, c * 8, kernel_size=3, stride=1, padding=1) self.ec7 = self.encoder_block(c * 8, c * 16, kernel_size=3, stride=1, padding=1) self.el = Conv3d(c * 16, c * 16, kernel_size=1, stride=1, padding=0) # Decoder self.dc9 = self.decoder_block( c * 16, c * 16, kernel_size=2, stride=2, padding=0 ) self.dc8 = self.decoder_block( c * 16 + c * 8, c * 8, kernel_size=3, stride=1, padding=1 ) self.dc7 = self.decoder_block(c * 8, c * 8, kernel_size=3, stride=1, padding=1) self.dc6 = self.decoder_block(c * 8, c * 8, kernel_size=2, stride=2, padding=0) self.dc5 = self.decoder_block( c * 8 + c * 4, c * 4, kernel_size=3, stride=1, padding=1 ) self.dc4 = self.decoder_block(c * 4, c * 4, kernel_size=3, stride=1, padding=1) self.dc3 = self.decoder_block(c * 4, c * 4, kernel_size=2, stride=2, padding=0) self.dc2 = self.decoder_block( c * 4 + c * 2, c * 2, kernel_size=3, stride=1, padding=1 ) self.dc1 = self.decoder_block(c * 2, c * 2, kernel_size=3, stride=1, padding=1) self.dc0 = self.decoder_block(c * 2, n_out, kernel_size=1, stride=1, padding=0) self.dl = ConvTranspose3d(n_out, n_out, kernel_size=1, stride=1, padding=0) def encoder_block(self, in_channels, out_channels, kernel_size, stride, padding): layer = Sequential( Conv3d( in_channels, out_channels, kernel_size, stride=stride, padding=padding, bias=False, ), InstanceNorm3d(out_channels), LeakyReLU(), ) return layer def decoder_block(self, in_channels, out_channels, kernel_size, stride, padding): layer = Sequential( ConvTranspose3d( in_channels, out_channels, kernel_size, stride=stride, padding=padding, bias=False, ), InstanceNorm3d(out_channels), LeakyReLU(), ) return layer def forward(self, x): # Encodes e0 = self.ec0(x) syn0 = self.ec1(e0) del e0 e1 = self.pool0(syn0) e2 = self.ec2(e1) syn1 = self.ec3(e2) del e1, e2 e3 = self.pool1(syn1) e4 = self.ec4(e3) syn2 = self.ec5(e4) del e3, e4 e5 = self.pool2(syn2) e6 = self.ec6(e5) e7 = self.ec7(e6) # Last layer without relu el = self.el(e7) del e5, e6, e7 # Decode d9 = torch.cat((self.dc9(el), syn2), 1) del el, syn2 d8 = self.dc8(d9) d7 = self.dc7(d8) del d9, d8 d6 = torch.cat((self.dc6(d7), syn1), 1) del d7, syn1 d5 = self.dc5(d6) d4 = self.dc4(d5) del d6, d5 d3 = torch.cat((self.dc3(d4), syn0), 1) del d4, syn0 d2 = self.dc2(d3) d1 = self.dc1(d2) del d3, d2 d0 = self.dc0(d1) del d1 # Last layer without relu out = self.dl(d0) return out class DeepN4: """This class is intended for the DeepN4 model. The DeepN4 model :footcite:p:`Kanakaraj2024` predicts the bias field for magnetic field inhomogeneity correction on T1-weighted images. References ---------- .. 
footbibliography:: """ @doctest_skip_parser def __init__(self, *, verbose=False, use_cuda=False): """Model initialization To obtain the pre-trained model, use fetch_default_weights() like: >>> deepn4_model = DeepN4() # skip if not have_torch >>> deepn4_model.fetch_default_weights() # skip if not have_torch This model is designed to take as input file T1 signal and predict bias field. Effectively, this model is mimicking bias correction. Parameters ---------- verbose : bool, optional Whether to show information about the processing. use_cuda : bool, optional Whether to use GPU for processing. If False or no CUDA is detected, CPU will be used. """ if not have_torch: raise torch() log_level = "INFO" if verbose else "CRITICAL" set_logger_level(log_level, logger) # DeepN4 network load self.model = UNet3D(1, 1) if use_cuda and torch.cuda.is_available(): self.device = torch.device("cuda") else: self.device = torch.device("cpu") self.model = self.model.to(self.device) def fetch_default_weights(self): """Load the model pre-training weights to use for the fitting.""" fetch_model_weights_path = get_fnames(name="deepn4_default_torch_weights") self.load_model_weights(fetch_model_weights_path) def load_model_weights(self, weights_path): """Load the custom pre-training weights to use for the fitting. Parameters ---------- weights_path : str Path to the file containing the weights """ try: self.model.load_state_dict( torch.load( weights_path, weights_only=True, map_location=self.device, )["model_state_dict"] ) self.model.eval() except ValueError as e: raise ValueError( "Expected input for the provided model weights \ do not match the declared model" ) from e def __predict(self, x_test): """Internal prediction function Predict bias field from input T1 signal Parameters ---------- x_test : np.ndarray Image should match the required shape of the model. Returns ------- np.ndarray (batch, ...) Predicted bias field """ return self.model(x_test)[:, 0].detach().numpy() def pad(self, img, sz): tmp = np.zeros((sz, sz, sz)) diff = int((sz - img.shape[0]) / 2) lx = max(diff, 0) lX = min(img.shape[0] + diff, sz) diff = (img.shape[0] - sz) / 2 rx = max(int(np.floor(diff)), 0) rX = min(img.shape[0] - int(np.ceil(diff)), img.shape[0]) diff = int((sz - img.shape[1]) / 2) ly = max(diff, 0) lY = min(img.shape[1] + diff, sz) diff = (img.shape[1] - sz) / 2 ry = max(int(np.floor(diff)), 0) rY = min(img.shape[1] - int(np.ceil(diff)), img.shape[1]) diff = int((sz - img.shape[2]) / 2) lz = max(diff, 0) lZ = min(img.shape[2] + diff, sz) diff = (img.shape[2] - sz) / 2 rz = max(int(np.floor(diff)), 0) rZ = min(img.shape[2] - int(np.ceil(diff)), img.shape[2]) tmp[lx:lX, ly:lY, lz:lZ] = img[rx:rX, ry:rY, rz:rZ] return tmp, [lx, lX, ly, lY, lz, lZ, rx, rX, ry, rY, rz, rZ] def load_resample(self, subj): input_data, [lx, lX, ly, lY, lz, lZ, rx, rX, ry, rY, rz, rZ] = self.pad( subj, 128 ) in_max = np.percentile(input_data[np.nonzero(input_data)], 99.99) input_data = normalize(input_data, min_v=0, max_v=in_max, new_min=0, new_max=1) input_data = np.squeeze(input_data) input_vols = np.zeros((1, 1, 128, 128, 128)) input_vols[0, 0, :, :, :] = input_data return ( torch.from_numpy(input_vols).float(), lx, lX, ly, lY, lz, lZ, rx, rX, ry, rY, rz, rZ, in_max, ) def predict(self, img, img_affine, *, voxsize=(1, 1, 1), threshold=0.5): """Wrapper function to facilitate prediction of larger dataset. The function will mask, normalize, split, predict and 're-assemble' the data as a volume. 
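
        Internally, the image is resampled to an isotropic grid, padded or
        cropped to ``(128, 128, 128)`` and intensity-normalized before
        inference; the predicted log-field is exponentiated, smoothed with a
        Gaussian filter and resampled back to the input grid before the
        division is applied.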
Parameters ---------- img : np.ndarray T1 image to predict and apply bias field img_affine : np.ndarray (4, 4) Affine matrix for the T1 image voxsize : np.ndarray or list or tuple (3,), optional voxel size of the T1 image. threshold : float, optional Threshold for cleaning the final correction field Returns ------- final_corrected : np.ndarray (x, y, z) Predicted bias corrected image. The volume has matching shape to the input data """ # Preprocess input data (resample, normalize, and pad) resampled_T1, inv_affine, mid_shape, offset_array, scale, crop_vs, pad_vs = ( transform_img(img, img_affine, voxsize=voxsize) ) (in_features, lx, lX, ly, lY, lz, lZ, rx, rX, ry, rY, rz, rZ, in_max) = ( self.load_resample(resampled_T1) ) # Run the model to get the bias field logfield = self.__predict(in_features.to(self.device)) field = np.exp(logfield) field = field.squeeze() # Postprocess predicted field (reshape - unpad, smooth the field, # upsample) final_field = np.zeros( [resampled_T1.shape[0], resampled_T1.shape[1], resampled_T1.shape[2]] ) final_field[rx:rX, ry:rY, rz:rZ] = field[lx:lX, ly:lY, lz:lZ] final_fields = gaussian_filter(final_field, sigma=3) upsample_final_field = recover_img( final_fields, inv_affine, mid_shape, img.shape, offset_array, voxsize, scale, crop_vs, pad_vs, ) # Correct the image below_threshold_mask = np.abs(upsample_final_field) < threshold with np.errstate(divide="ignore", invalid="ignore"): final_corrected = np.where( below_threshold_mask, 0, img / upsample_final_field ) return final_corrected dipy-1.11.0/dipy/nn/torch/evac.py000066400000000000000000000374001476546756600165640ustar00rootroot00000000000000#!/usr/bin/python """Class and helper functions for fitting the EVAC+ model.""" import logging import numpy as np from dipy.align.reslice import reslice from dipy.data import get_fnames from dipy.nn.utils import ( normalize, recover_img, set_logger_level, transform_img, ) from dipy.segment.utils import remove_holes_and_islands from dipy.testing.decorators import doctest_skip_parser from dipy.utils.deprecator import deprecated_params from dipy.utils.optpkg import optional_package torch, have_torch, _ = optional_package("torch", min_version="2.2.0") if have_torch: from torch.nn import ( Conv3d, ConvTranspose3d, Dropout3d, LayerNorm, Module, ModuleList, ReLU, Softmax, ) else: class Module: pass logging.warning( "This model requires Pytorch.\ Please install these packages using \ pip." 
) logging.basicConfig() logger = logging.getLogger("EVAC+") def prepare_img(image): """Function to prepare image for model input Specific to EVAC+ Parameters ---------- image : np.ndarray Input image Returns ------- input_data : dict """ input1 = np.moveaxis(image, -1, 0) input1 = np.expand_dims(input1, 1) input2, _ = reslice(image, np.eye(4), (1, 1, 1), (2, 2, 2)) input2 = np.moveaxis(input2, -1, 0) input2 = np.expand_dims(input2, 1) input3, _ = reslice(image, np.eye(4), (1, 1, 1), (4, 4, 4)) input3 = np.moveaxis(input3, -1, 0) input3 = np.expand_dims(input3, 1) input4, _ = reslice(image, np.eye(4), (1, 1, 1), (8, 8, 8)) input4 = np.moveaxis(input4, -1, 0) input4 = np.expand_dims(input4, 1) input5, _ = reslice(image, np.eye(4), (1, 1, 1), (16, 16, 16)) input5 = np.moveaxis(input5, -1, 0) input5 = np.expand_dims(input5, 1) input_data = [ torch.from_numpy(input1).float(), torch.from_numpy(input2).float(), torch.from_numpy(input3).float(), torch.from_numpy(input4).float(), torch.from_numpy(input5).float(), ] return input_data class MoveDimLayer(Module): def __init__(self, source_dim, dest_dim): super(MoveDimLayer, self).__init__() self.source_dim = source_dim self.dest_dim = dest_dim def forward(self, x): return torch.movedim(x, self.source_dim, self.dest_dim) class ChannelSum(Module): def __init__(self): super(ChannelSum, self).__init__() def forward(self, inputs): return torch.sum(inputs, dim=1, keepdim=True) class Add(Module): def __init__(self): super(Add, self).__init__() def forward(self, x, passed): return x + passed class Block(Module): def __init__( self, in_channels, out_channels, kernel_size, strides, padding, drop_r, n_layers, *, passed_channel=1, layer_type="down", ): super(Block, self).__init__() self.n_layers = n_layers self.layer_list = ModuleList() self.layer_list2 = ModuleList() cur_channel = in_channels for _ in range(n_layers): self.layer_list.append( Conv3d( cur_channel, out_channels, kernel_size, stride=strides, padding=padding, ) ) cur_channel = out_channels self.layer_list.append(Dropout3d(drop_r)) self.layer_list.append(MoveDimLayer(1, -1)) self.layer_list.append(LayerNorm(out_channels)) self.layer_list.append(MoveDimLayer(-1, 1)) self.layer_list.append(ReLU()) if layer_type == "down": self.layer_list2.append(Conv3d(in_channels, 1, 2, stride=2, padding=0)) self.layer_list2.append(ReLU()) elif layer_type == "up": self.layer_list2.append( ConvTranspose3d(passed_channel, 1, 2, stride=2, padding=0) ) self.layer_list2.append(ReLU()) self.channel_sum = ChannelSum() self.add = Add() def forward(self, input, passed): x = input for layer in self.layer_list: x = layer(x) x = self.channel_sum(x) fwd = self.add(x, passed) x = fwd for layer in self.layer_list2: x = layer(x) return fwd, x class Model(Module): def __init__(self, model_scale=16): super(Model, self).__init__() # Block structure self.block1 = Block( 1, model_scale, kernel_size=5, strides=1, padding=2, drop_r=0.2, n_layers=1 ) self.block2 = Block( 2, model_scale * 2, kernel_size=5, strides=1, padding=2, drop_r=0.5, n_layers=2, ) self.block3 = Block( 2, model_scale * 4, kernel_size=5, strides=1, padding=2, drop_r=0.5, n_layers=3, ) self.block4 = Block( 2, model_scale * 8, kernel_size=5, strides=1, padding=2, drop_r=0.5, n_layers=3, ) self.block5 = Block( 2, model_scale * 16, kernel_size=5, strides=1, padding=2, drop_r=0.5, n_layers=3, passed_channel=2, layer_type="up", ) # Upsample/decoder blocks self.up_block1 = Block( 3, model_scale * 8, kernel_size=5, strides=1, padding=2, drop_r=0.5, n_layers=3, passed_channel=1, 
layer_type="up", ) self.up_block2 = Block( 3, model_scale * 4, kernel_size=5, strides=1, padding=2, drop_r=0.5, n_layers=3, passed_channel=1, layer_type="up", ) self.up_block3 = Block( 3, model_scale * 2, kernel_size=5, strides=1, padding=2, drop_r=0.5, n_layers=2, passed_channel=1, layer_type="up", ) self.up_block4 = Block( 2, model_scale, kernel_size=5, strides=1, padding=2, drop_r=0.5, n_layers=1, passed_channel=1, layer_type="none", ) self.conv_pred = Conv3d(1, 2, 1, padding=0) self.softmax = Softmax(dim=1) def forward(self, inputs, raw_input_2, raw_input_3, raw_input_4, raw_input_5): fwd1, x = self.block1(inputs, inputs) x = torch.cat([x, raw_input_2], dim=1) fwd2, x = self.block2(x, x) x = torch.cat([x, raw_input_3], dim=1) fwd3, x = self.block3(x, x) x = torch.cat([x, raw_input_4], dim=1) fwd4, x = self.block4(x, x) x = torch.cat([x, raw_input_5], dim=1) # Decoding path _, up = self.block5(x, x) x = torch.cat([fwd4, up], dim=1) _, up = self.up_block1(x, up) x = torch.cat([fwd3, up], dim=1) _, up = self.up_block2(x, up) x = torch.cat([fwd2, up], dim=1) _, up = self.up_block3(x, up) x = torch.cat([fwd1, up], dim=1) _, pred = self.up_block4(x, up) pred = self.conv_pred(pred) output = self.softmax(pred) return output class EVACPlus: """This class is intended for the EVAC+ model. The EVAC+ model :footcite:p:`Park2024` is a deep learning neural network for brain extraction. It uses a V-net architecture combined with multi-resolution input data, an additional conditional random field (CRF) recurrent layer and supplementary Dice loss term for this recurrent layer. References ---------- .. footbibliography:: """ @doctest_skip_parser def __init__(self, *, verbose=False, use_cuda=False): """Model initialization The model was pre-trained for usage on brain extraction of T1 images. This model is designed to take as input a T1 weighted image. Parameters ---------- verbose : bool, optional Whether to show information about the processing. use_cuda : bool, optional Whether to use GPU for processing. If False or no CUDA is detected, CPU will be used. """ if not have_torch: raise torch() log_level = "INFO" if verbose else "CRITICAL" set_logger_level(log_level, logger) # EVAC+ network load self.model = self.init_model() if use_cuda and torch.cuda.is_available(): self.device = torch.device("cuda") else: self.device = torch.device("cpu") self.model = self.model.to(self.device) self.fetch_default_weights() def init_model(self, model_scale=16): return Model(model_scale) def fetch_default_weights(self): """Load the model pre-training weights to use for the fitting. While the user can load different weights, the function is mainly intended for the class function 'predict'. """ fetch_model_weights_path = get_fnames(name="evac_default_torch_weights") self.load_model_weights(fetch_model_weights_path) def load_model_weights(self, weights_path): """Load the custom pre-training weights to use for the fitting. Parameters ---------- weights_path : str Path to the file containing the weights (pth, saved by Pytorch) """ try: self.model.load_state_dict( torch.load( weights_path, weights_only=True, map_location=self.device, ) ) self.model.eval() except ValueError as e: raise ValueError( "Expected input for the provided model weights \ do not match the declared model" ) from e def __predict(self, x_test): """Internal prediction function Parameters ---------- x_test : list of np.ndarray Image should match the required shape of the model. Returns ------- np.ndarray (batch, ...) 
Predicted brain mask """ return self.model(*x_test)[:, 0].detach().numpy() @deprecated_params( "largest_area", new_name="finalize_mask", since="1.10", until="1.12" ) def predict( self, T1, affine, *, voxsize=(1, 1, 1), batch_size=None, return_affine=False, return_prob=False, finalize_mask=True, ): """Wrapper function to facilitate prediction of larger dataset. Parameters ---------- T1 : np.ndarray or list of np.ndarray For a single image, input should be a 3D array. If multiple images, it should be a a list or tuple. affine : np.ndarray (4, 4) or (batch, 4, 4) or list of np.ndarrays with len of batch Affine matrix for the T1 image. Should have batch dimension if T1 has one. voxsize : np.ndarray or list or tuple, optional (3,) or (batch, 3) voxel size of the T1 image. batch_size : int, optional Number of images per prediction pass. Only available if data is provided with a batch dimension. Consider lowering it if you get an out of memory error. Increase it if you want it to be faster and have a lot of data. If None, batch_size will be set to 1 if the provided image has a batch dimension. return_affine : bool, optional Whether to return the affine matrix. Useful if the input was a file path. return_prob : bool, optional Whether to return the probability map instead of a binary mask. Useful for testing. finalize_mask : bool, optional Whether to remove potential holes or islands. Useful for solving minor errors. Returns ------- pred_output : np.ndarray (...) or (batch, ...) Predicted brain mask affine : np.ndarray (...) or (batch, ...) affine matrix of mask only if return_affine is True """ voxsize = np.array(voxsize) affine = np.array(affine) if isinstance(T1, (list, tuple)): dim = 4 T1 = np.array(T1) elif len(T1.shape) == 3: dim = 3 if batch_size is not None: logger.warning( "Batch size specified, but not used", "due to the input not having \ a batch dimension", ) T1 = np.expand_dims(T1, 0) affine = np.expand_dims(affine, 0) voxsize = np.expand_dims(voxsize, 0) else: raise ValueError( "T1 data should be a np.ndarray of dimension 3 or a list/tuple of it" ) if batch_size is None: batch_size = 1 input_data = np.zeros((128, 128, 128, len(T1))) affines = np.zeros((len(T1), 4, 4)) mid_shapes = np.zeros((len(T1), 3)).astype(int) offset_arrays = np.zeros((len(T1), 4, 4)).astype(int) scales = np.zeros(len(T1)) crop_vss = np.zeros((len(T1), 3, 2)) pad_vss = np.zeros((len(T1), 3, 2)) # Normalize the data. 
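        # Each volume is min-max scaled to [0, 1] and spatially transformed
        # (resliced, recentered and padded/cropped) onto the 128**3 grid the
        # network expects; the per-volume transform parameters are kept so
        # the predicted mask can be mapped back to the original space.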
n_T1 = np.zeros(T1.shape) for i, T1_img in enumerate(T1): n_T1[i] = normalize(T1_img, new_min=0, new_max=1) t_img, t_affine, mid_shape, offset_array, scale, crop_vs, pad_vs = ( transform_img(n_T1[i], affine[i], voxsize=voxsize[i]) ) input_data[..., i] = t_img affines[i] = t_affine mid_shapes[i] = mid_shape offset_arrays[i] = offset_array scales[i] = scale crop_vss[i] = crop_vs pad_vss[i] = pad_vs # Prediction stage prediction = np.zeros((len(T1), 128, 128, 128), dtype=np.float32) for batch_idx in range(batch_size, len(T1) + 1, batch_size): batch = input_data[..., batch_idx - batch_size : batch_idx] temp_input = prepare_img(batch) temp_pred = self.__predict(temp_input) prediction[:batch_idx] = temp_pred remainder = np.mod(len(T1), batch_size) if remainder != 0: temp_input = prepare_img(input_data[..., -remainder:]) temp_pred = self.__predict(temp_input) prediction[-remainder:] = temp_pred output_mask = [] for i in range(len(T1)): output = recover_img( prediction[i], affines[i], mid_shapes[i], n_T1[i].shape, offset_arrays[i], voxsize=voxsize[i], scale=scales[i], crop_vs=crop_vss[i], pad_vs=pad_vss[i], ) if not return_prob: output = np.where(output >= 0.5, 1, 0) if finalize_mask: output = remove_holes_and_islands(output, slice_wise=True) output_mask.append(output) if dim == 3: output_mask = output_mask[0] affine = affine[0] output_mask = np.array(output_mask) affine = np.array(affine) if return_affine: return output_mask, affine else: return output_mask dipy-1.11.0/dipy/nn/torch/histo_resdnn.py000066400000000000000000000250741476546756600203510ustar00rootroot00000000000000#!/usr/bin/python """Class and helper functions for fitting the Histological ResDNN model.""" import logging import numpy as np from dipy.core.gradients import get_bval_indices, unique_bvals_magnitude from dipy.core.sphere import HemiSphere from dipy.data import get_fnames, get_sphere from dipy.nn.utils import set_logger_level from dipy.reconst.shm import sf_to_sh, sh_to_sf, sph_harm_ind_list from dipy.testing.decorators import doctest_skip_parser from dipy.utils.optpkg import optional_package torch, have_torch, _ = optional_package("torch", min_version="2.2.0") if have_torch: from torch.nn import Linear, Module else: class Module: pass logging.warning( "This model requires Pytorch.\ Please install these packages using \ pip." ) logging.basicConfig() logger = logging.getLogger("histo_resdnn") class DenseModel(Module): def __init__(self, sh_size, num_hidden): super(DenseModel, self).__init__() self.fc1 = Linear(sh_size, 400) self.fc2 = Linear(400, num_hidden) self.fc3 = Linear(num_hidden, 200) self.fc4 = Linear(200, num_hidden) self.fc5 = Linear(num_hidden, 200) self.fc6 = Linear(200, num_hidden) def forward(self, x): x1 = torch.relu(self.fc1(x)) x2 = torch.relu(self.fc2(x1)) x3 = torch.relu(self.fc3(x2)) x4 = self.fc4(x3) # Adding x2 and x4 res_add = x2 + x4 x5 = torch.relu(self.fc5(res_add)) x6 = self.fc6(x5) return x6 class HistoResDNN: """This class is intended for the ResDNN Histology Network model. ResDNN :footcite:p:`Nath2019` is a deep neural network that employs residual blocks deep neural network to predict ground truth SH coefficients from SH coefficients computed using DWI data. To this end, authors considered histology FOD-computed SH coefficients (obtained from ex vivo non-human primate acquisitions) as their ground truth, and the DWI-computed SH coefficients as their target. References ---------- .. 
footbibliography:: """ @doctest_skip_parser def __init__( self, *, sh_order_max=8, basis_type="tournier07", verbose=False, use_cuda=False ): """Model initialization The model was re-trained for usage with a different basis function ('tournier07') like the proposed model in :footcite:p:`Nath2019`. To obtain the pre-trained model, use:: >>> resdnn_model = HistoResDNN() # skip if not have_torch >>> fetch_model_weights_path = get_fnames(name='histo_resdnn_torch_weights') # skip if not have_torch >>> resdnn_model.load_model_weights(fetch_model_weights_path) # skip if not have_torch This model is designed to take as input raw DWI signal on a sphere (ODF) represented as SH of order 8 in the tournier basis and predict fODF of order 8 in the tournier basis. Effectively, this model is mimicking a CSD fit. Parameters ---------- sh_order_max : int, optional Maximum SH order (l) in the SH fit. For ``sh_order_max``, there will be ``(sh_order_max + 1) * (sh_order_max + 2) / 2`` SH coefficients for a symmetric basis. basis_type : {'tournier07', 'descoteaux07'}, optional ``tournier07`` (default) or ``descoteaux07``. verbose : bool, optional Whether to show information about the processing. use_cuda : bool, optional Whether to use GPU for processing. If False or no CUDA is detected, CPU will be used. References ---------- .. footbibliography:: """ # noqa: E501 if not have_torch: raise torch() self.sh_order_max = sh_order_max self.sh_size = len(sph_harm_ind_list(sh_order_max)[0]) self.basis_type = basis_type log_level = "INFO" if verbose else "CRITICAL" set_logger_level(log_level, logger) if self.basis_type != "tournier07": logger.warning( "Be careful, original weights were obtained " "from training on the tournier07 basis, " "unless you re-trained the network, do not " "change basis!" ) # ResDNN Network Flow num_hidden = self.sh_size self.model = DenseModel(self.sh_size, num_hidden).type(torch.float64) if use_cuda and torch.cuda.is_available(): self.device = torch.device("cuda") else: self.device = torch.device("cpu") self.model = self.model.to(self.device) def fetch_default_weights(self): """Load the model pre-training weights to use for the fitting. Will not work if the declared SH_ORDER does not match the weights expected input. """ fetch_model_weights_path = get_fnames(name="histo_resdnn_torch_weights") self.load_model_weights(fetch_model_weights_path) def load_model_weights(self, weights_path): """Load the custom pre-training weights to use for the fitting. Will not work if the declared SH_ORDER does not match the weights expected input. The weights for a sh_order of 8 can be obtained via the function: get_fnames('histo_resdnn_torch_weights'). Parameters ---------- weights_path : str Path to the file containing the weights (pth, saved by Pytorch) """ try: self.model.load_state_dict( torch.load( weights_path, weights_only=True, map_location=self.device, ) ) self.model.eval() except RuntimeError as e: raise RuntimeError( "Expected input for the provided model weights do not match the " f"declared model ({self.sh_size})" ) from e def __predict(self, x_test): """Predict fODF (as SH) from input raw DWI signal (as SH) Parameters ---------- x_test : np.ndarray Array of size (N, M) where M is ``(sh_order_max + 1) * (sh_order_max + 2) / 2``. N should not be too big as to limit memory usage. 
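
        The array is converted to a ``torch`` tensor internally before the
        forward pass, so a plain ``np.ndarray`` should be passed.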
        Returns
        -------
        np.ndarray (N, M)
            Predicted fODF (as SH)
        """
        if x_test.shape[-1] != self.sh_size:
            raise ValueError(
                "Expected input for the provided model weights do not match the "
                f"declared model ({self.sh_size})"
            )

        return self.model(torch.from_numpy(x_test)).detach().numpy()

    def predict(self, data, gtab, *, mask=None, chunk_size=1000):
        """Wrapper function to facilitate prediction of larger dataset.

        The function will mask, normalize, split, predict and 're-assemble'
        the data as a volume.

        Parameters
        ----------
        data : np.ndarray
            DWI signal in a 4D array
        gtab : GradientTable class instance
            The acquisition scheme matching the data
            (must contain at least one b0)
        mask : np.ndarray, optional
            Binary mask of the brain to avoid unnecessary computation and
            unreliable prediction outside the brain.
            Default: Compute prediction only for nonzero voxels (with at
            least one nonzero DWI value).
        chunk_size : int, optional
            Batch size when running model prediction.

        Returns
        -------
        pred_sh_coef : np.ndarray (x, y, z, M)
            Predicted fODF (as SH). The volume has matching shape to the
            input data, but with ``(sh_order_max + 1) * (sh_order_max + 2) / 2``
            as a last dimension.
        """
        if mask is None:
            logger.warning(
                "Mask should be provided to accelerate "
                "computation, and because predictions are "
                "not reliable outside of the brain."
            )
            mask = np.sum(data, axis=-1)
        mask = mask.astype(bool)

        # Extract B0's and obtain a mean B0
        b0_indices = gtab.b0s_mask
        if not np.any(b0_indices):
            raise ValueError("b0 must be present for DWI normalization.")
        logger.info(f"b0 indices found are: {np.argwhere(b0_indices).ravel()}")
        mean_b0 = np.mean(data[..., b0_indices], axis=-1)

        # Detect number of b-values and extract a single shell of DW-MRI Data
        unique_shells = np.sort(unique_bvals_magnitude(gtab.bvals))
        logger.info(f"Unique b-values found: {unique_shells}")

        # Extract DWI only
        dw_indices = get_bval_indices(gtab.bvals, unique_shells[1])
        dw_data = data[..., dw_indices]
        dw_bvecs = gtab.bvecs[dw_indices, :]

        # Normalize the DW-MRI Data with the mean b0 (voxel-wise)
        norm_dw_data = np.zeros(dw_data.shape)
        for n in range(len(dw_indices)):
            norm_dw_data[..., n] = np.divide(
                dw_data[..., n], mean_b0, where=np.abs(mean_b0) > 0.000001
            )

        # Fit SH to the raw DWI signal
        h_sphere = HemiSphere(xyz=dw_bvecs)
        dw_sh_coef = sf_to_sh(
            norm_dw_data,
            h_sphere,
            smooth=0.0006,
            basis_type=self.basis_type,
            sh_order_max=self.sh_order_max,
        )

        # Flatten and mask the data (N, SH_SIZE) to facilitate chunks
        ori_shape = dw_sh_coef.shape
        flat_dw_sh_coef = dw_sh_coef[mask > 0]
        flat_pred_sh_coef = np.zeros(flat_dw_sh_coef.shape)

        # The sphere is constant, so fetch it once rather than per chunk
        sphere = get_sphere(name="repulsion724")

        count = len(flat_dw_sh_coef) // chunk_size
        for i in range(count + 1):
            if i % 100 == 0 or i == count:
                logger.info(f"Chunk #{i} out of {count}")
            tmp_sh = self.__predict(
                flat_dw_sh_coef[i * chunk_size : (i + 1) * chunk_size]
            )

            # Removing negative values from the SF
            tmp_sf = sh_to_sf(
                sh=tmp_sh,
                sphere=sphere,
                basis_type=self.basis_type,
                sh_order_max=self.sh_order_max,
            )
            tmp_sf[tmp_sf < 0] = 0
            tmp_sh = sf_to_sh(
                tmp_sf,
                sphere,
                smooth=0.0006,
                basis_type=self.basis_type,
                sh_order_max=self.sh_order_max,
            )
            flat_pred_sh_coef[i * chunk_size : (i + 1) * chunk_size] = tmp_sh

        pred_sh_coef = np.zeros(ori_shape)
        pred_sh_coef[mask > 0] = flat_pred_sh_coef

        return pred_sh_coef
dipy-1.11.0/dipy/nn/torch/meson.build000066400000000000000000000002551476546756600174340ustar00rootroot00000000000000python_sources = [
  '__init__.py',
  'deepn4.py',
  'evac.py',
  'histo_resdnn.py',
]

py3.install_sources(
  python_sources,
  pure: false,
  subdir:
  'dipy/nn/torch'
)
dipy-1.11.0/dipy/nn/utils.py000066400000000000000000000205351476546756600156700ustar00rootroot00000000000000import numpy as np
from scipy.ndimage import affine_transform

from dipy.align.reslice import reslice
from dipy.testing.decorators import warning_for_keywords


@warning_for_keywords()
def normalize(image, *, min_v=None, max_v=None, new_min=-1, new_max=1):
    """Rescale image intensities from ``[min_v, max_v]`` to
    ``[new_min, new_max]``.

    Parameters
    ----------
    image : np.ndarray
        Image to be normalized.
    min_v : int or float, optional
        Minimum of the input value range; intensities below ``min_v`` are
        clipped. If None, the minimum of ``image`` is used.
    max_v : int or float, optional
        Maximum of the input value range; intensities above ``max_v`` are
        clipped. If None, the maximum of ``image`` is used.
    new_min : int or float, optional
        New minimum value after normalization. Default: -1.
    new_max : int or float, optional
        New maximum value after normalization. Default: 1.

    Returns
    -------
    np.ndarray
        Normalized image in the range ``new_min`` to ``new_max``.
    """
    if min_v is None:
        min_v = np.min(image)
    if max_v is None:
        max_v = np.max(image)
    return np.interp(image, (min_v, max_v), (new_min, new_max))


def unnormalize(image, norm_min, norm_max, min_v, max_v):
    """Invert :func:`normalize`, mapping ``[norm_min, norm_max]`` back to
    ``[min_v, max_v]``.

    Parameters
    ----------
    image : np.ndarray
        Image to be unnormalized.
    norm_min : int or float
        Minimum value of the normalized image.
    norm_max : int or float
        Maximum value of the normalized image.
    min_v : int or float
        Minimum value of the unnormalized image.
    max_v : int or float
        Maximum value of the unnormalized image.

    Returns
    -------
    np.ndarray
        Unnormalized image in the range ``min_v`` to ``max_v``.
    """
    return (image - norm_min) / (norm_max - norm_min) * (max_v - min_v) + min_v


def set_logger_level(log_level, logger):
    """Set the logger to one of the following levels:
    DEBUG, INFO, WARNING, ERROR, CRITICAL.

    Parameters
    ----------
    log_level : str
        Log level for the logger.
    logger : logging.Logger
        Logger whose level is updated.
    """
    logger.setLevel(level=log_level)


@warning_for_keywords()
def transform_img(
    image,
    affine,
    *,
    voxsize=None,
    considered_points="corners",
    init_shape=(256, 256, 256),
    scale=2,
):
    """Reshape and resample an image into the grid expected by the model.

    Parameters
    ----------
    image : np.ndarray
        Image to transform to voxelspace.
    affine : np.ndarray
        Affine matrix provided by the file.
    voxsize : np.ndarray (3,), optional
        Voxel size of the image.
    considered_points : str, optional
        Which points to consider when calculating the boundary of the image.
If there is shearing in the affine, 'all' might be more accurate init_shape : list, tuple or numpy array (3,), optional Initial shape to transform the image to scale : float, optional How much we want to scale the image Returns ------- tuple Tuple with variables for recover_img """ if voxsize is not None and np.any(voxsize != np.ones(3)): image2, affine2 = reslice(image, affine, voxsize, (1, 1, 1)) else: image2 = image.copy() affine2 = affine.copy() shape = image2.shape if considered_points == "corners": corners = np.array( [ [0, 0, 0, 1], [shape[0] - 1, 0, 0, 1], [0, shape[1] - 1, 0, 1], [0, 0, shape[2] - 1, 1], [shape[0] - 1, shape[1] - 1, shape[2] - 1, 1], [shape[0] - 1, 0, shape[2] - 1, 1], [0, shape[1] - 1, shape[2] - 1, 1], [shape[0] - 1, shape[1] - 1, 0, 1], ], dtype=np.float64, ) else: temp1 = np.arange(shape[0]) temp2 = np.arange(shape[1]) temp3 = np.arange(shape[2]) grid1, grid2, grid3 = np.meshgrid(temp1, temp2, temp3) corners = np.vstack([grid1.ravel(), grid2.ravel(), grid3.ravel()]).T corners = np.hstack([corners, np.full((corners.shape[0], 1), 1)]) corners = corners.astype(np.float64) transformed_corners = (affine2 @ corners.T).T min_bounds = transformed_corners.min(axis=0)[:3] max_bounds = transformed_corners.max(axis=0)[:3] # Calculate the required offset to ensure # all necessary coordinates are positive offset = np.floor(-min_bounds) new_shape = (np.ceil(max_bounds) + offset).astype(int) offset_array = np.array( [[1, 0, 0, offset[0]], [0, 1, 0, offset[1]], [0, 0, 1, offset[2]], [0, 0, 0, 1]] ) new_affine = affine2.copy() new_affine = np.matmul(offset_array, new_affine) inv_affine = np.linalg.inv(new_affine) new_image = np.zeros(tuple(new_shape)) affine_transform( image2, inv_affine, output_shape=tuple(new_shape), output=new_image ) new_image, pad_vs, crop_vs = pad_crop(new_image, init_shape) if scale != 1: new_image, _ = reslice(new_image, np.eye(4), (1, 1, 1), (scale, scale, scale)) return new_image, inv_affine, image2.shape, offset_array, scale, crop_vs, pad_vs def recover_img( image, inv_affine, mid_shape, ori_shape, offset_array, voxsize, scale, crop_vs, pad_vs, ): """ Function to recover image from transform_img Parameters ---------- image : np.ndarray Image to recover inv_affine : np.ndarray Affine matrix returned from transform_img mid_shape : np.ndarray (3,) shape of image returned from transform_img ori_shape : tuple (3,) original shape of the image offset_array : np.ndarray Affine matrix that was used in transform_img to translate the center voxsize : np.ndarray (3,) Voxel size used in transform_img scale : float Scale used in transform_img crop_vs : np.ndarray (3,2) crop range used in transform_img pad_vs : np.ndarray (3,2) pad range used in transform_img Returns ------- image2 : np.ndarray Recovered image """ new_affine = np.linalg.inv(inv_affine) new_image, _ = reslice(image, np.eye(4), (scale, scale, scale), (1, 1, 1)) crop_vs = crop_vs.astype(int) pad_vs = pad_vs.astype(int) new_image = np.pad( new_image, ( (crop_vs[0, 0], crop_vs[0, 1]), (crop_vs[1, 0], crop_vs[1, 1]), (crop_vs[2, 0], crop_vs[2, 1]), ), ) new_image = new_image[ pad_vs[0, 0] : new_image.shape[0] - pad_vs[0, 1], pad_vs[1, 0] : new_image.shape[1] - pad_vs[1, 1], pad_vs[2, 0] : new_image.shape[2] - pad_vs[2, 1], ] new_image = affine_transform(new_image, new_affine, output_shape=mid_shape) affine = np.matmul(np.linalg.inv(offset_array), new_affine) if voxsize is not None and np.any(voxsize != np.ones(3)): image2, _ = reslice(new_image, affine, (1, 1, 1), voxsize) else: image2 = new_image # 
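a final pad/crop is needed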
because of zoom rounding errors image2, _, _ = pad_crop(image2, ori_shape) return image2 def pad_crop(image, target_shape): """ Function to figure out pad and crop range to fit the target shape with the image Parameters ---------- image : np.ndarray Target image target_shape : (3,) Target shape Returns ------- image : np.ndarray Padded/cropped image pad_vs : np.ndarray (3,2) Pad range used crop_vs : np.ndarray (3,2) Crop range used """ crop_vs = np.zeros((3, 2)).astype(int) pad_vs = np.zeros((3, 2)).astype(int) if image.shape == target_shape: return image, pad_vs, crop_vs pad_crop_v = np.array(target_shape) - np.array(image.shape) for d in range(3): if pad_crop_v[d] < 0: crop_vs[d, 0] = -pad_crop_v[d] // 2 crop_vs[d, 1] = -pad_crop_v[d] - crop_vs[d, 0] elif pad_crop_v[d] > 0: pad_vs[d, 0] = pad_crop_v[d] // 2 pad_vs[d, 1] = pad_crop_v[d] - pad_vs[d, 0] image = np.pad( image, ( (pad_vs[0, 0], pad_vs[0, 1]), (pad_vs[1, 0], pad_vs[1, 1]), (pad_vs[2, 0], pad_vs[2, 1]), ), ) image = image[ crop_vs[0, 0] : image.shape[0] - crop_vs[0, 1], crop_vs[1, 0] : image.shape[1] - crop_vs[1, 1], crop_vs[2, 0] : image.shape[2] - crop_vs[2, 1], ] return image, pad_vs, crop_vs dipy-1.11.0/dipy/py.typed000066400000000000000000000000001476546756600152230ustar00rootroot00000000000000dipy-1.11.0/dipy/reconst/000077500000000000000000000000001476546756600152135ustar00rootroot00000000000000dipy-1.11.0/dipy/reconst/__init__.py000066400000000000000000000000611476546756600173210ustar00rootroot00000000000000# init for reconst aka the reconstruction module dipy-1.11.0/dipy/reconst/base.py000066400000000000000000000021401476546756600164740ustar00rootroot00000000000000""" Base-classes for reconstruction models and reconstruction fits. All the models in the reconst module follow the same template: a Model object is used to represent the abstract properties of the model, that are independent of the specifics of the data . These properties are reused whenever fitting a particular set of data (different voxels, for example). """ from dipy.testing.decorators import warning_for_keywords class ReconstModel: """Abstract class for signal reconstruction models""" def __init__(self, gtab): """Initialization of the abstract class for signal reconstruction models Parameters ---------- gtab : GradientTable class instance Gradient table. """ self.gtab = gtab @warning_for_keywords() def fit(self, data, *, mask=None, **kwargs): return ReconstFit(self, data) class ReconstFit: """Abstract class which holds the fit result of ReconstModel For example that could be holding FA or GFA etc. """ def __init__(self, model, data): self.model = model self.data = data dipy-1.11.0/dipy/reconst/bingham.py000066400000000000000000000567111476546756600172040ustar00rootroot00000000000000"""Tools for fitting Bingham distributions to orientation distribution functions (ODF), as described in :footcite:p:`Riffert2014`. The resulting distributions can further be used to compute ODF-lobe-specific measures such as the fiber density (FD) and fiber spread (FS) :footcite:p:`Riffert2014` and the orientation dispersion index (ODI) :footcite:p:`NetoHenriques2018`. References ---------- .. footbibliography:: """ import numpy as np from dipy.core.ndindex import ndindex from dipy.core.onetime import auto_attr from dipy.core.sphere import unit_icosahedron from dipy.direction import peak_directions from dipy.reconst.shm import calculate_max_order, sh_to_sf def _bingham_fit_peak(sf, peak, sphere, max_search_angle): """ Fit Bingham function on the ODF lobe aligned with peak. 
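The fitted model (the same function sampled by `_single_bingham_to_sf`) is the antipodally symmetric Bingham function .. math:: B(u) = f_0 \exp\left[ -k_1 (\mu_1 \cdot u)^2 - k_2 (\mu_2 \cdot u)^2 \right] where :math:`f_0` is the peak amplitude, :math:`\mu_1` and :math:`\mu_2` are the dispersion axes, and :math:`k_1 \leq k_2` the corresponding concentration parameters.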
Parameters ---------- sf: 1d ndarray The odf spherical function (sf) evaluated on the vertices of `sphere`. peak: ndarray (3,) The peak direction of the lobe to fit. sphere: `Sphere` class instance The Sphere providing the odf's discrete directions max_search_angle: float The maximum angle in degrees of the neighbourhood around the peak to consider for fitting. Returns ------- f0: float Maximum amplitude of the ODF peak. k1: float Concentration parameter along Bingham's major dispersion axis. k2: float Concentration parameter along Bingham's minor dispersion axis. mu0: ndarray (3,) of floats Bingham's main axis. mu1: ndarray (3,) of floats Bingham's major concentration axis. mu2: ndarray (3,) of floats Bingham's minor concentration axis. If no stable fit is possible, zeros are returned for all six parameters. """ # abs for twice the number of pts to fit dot_prod = np.abs(sphere.vertices.dot(peak)) min_dot = np.cos(np.deg2rad(max_search_angle)) # [p] are the selected ODF vertices (N, 3) around the peak of the lobe # within max_search_angle p = sphere.vertices[dot_prod > min_dot] # [v] are the selected ODF amplitudes (N, 1) around the peak of the lobe # within max_search_angle v = sf[dot_prod > min_dot] # Test that the surface along peak direction contains # at least 3 non-zero directions if np.count_nonzero(v) < 3: return 0, 0.0, 0.0, np.zeros(3), np.zeros(3), np.zeros(3) x, y, z = (p[:, 0], p[:, 1], p[:, 2]) # Create an orientation matrix (T) to approximate mu0, mu1 and mu2 T = np.array( [ [x**2 * v, x * y * v, x * z * v], [x * y * v, y**2 * v, y * z * v], [x * z * v, y * z * v, z**2 * v], ] ) T = np.sum(T, axis=-1) / np.sum(v) # eigh better than eig. T will always be symmetric, eigh is faster. evals, evecs = np.linalg.eigh(T) # Not ordering the evals, eigh orders by default. mu0 = evecs[:, 2] mu1 = evecs[:, 1] mu2 = evecs[:, 0] f0 = v.max() # Maximum amplitude of the ODF peak # If no real fit is possible, return null if np.iscomplex(mu1).any() or np.iscomplex(mu2).any(): return 0, 0.0, 0.0, np.zeros(3), np.zeros(3), np.zeros(3) # Calculating the A matrix A = np.zeros((len(v), 2), dtype=float) # (N, 2) A[:, 0] = p.dot(mu1) ** 2 A[:, 1] = p.dot(mu2) ** 2 # Test that AT.A is invertible for pseudo-inverse ATA = A.T.dot(A) if np.linalg.matrix_rank(ATA) != ATA.shape[0]: return 0, 0.0, 0.0, np.zeros(3), np.zeros(3), np.zeros(3) # Calculating the Beta matrix B = np.zeros_like(v) B[v > 0] = np.log(v[v > 0] / f0) # (N, 1) # Calculating the Kappas k = np.abs(np.linalg.pinv(ATA).dot(A.T).dot(B)) k1 = k[0] k2 = k[1] if k1 > k2: k1, k2 = k2, k1 mu1, mu2 = mu2, mu1 return f0, k1, k2, mu0, mu1, mu2 def _single_sf_to_bingham( odf, sphere, max_search_angle, *, npeaks=5, min_sep_angle=60, rel_th=0.1 ): """ Fit a Bingham distribution onto each principal ODF lobe. Parameters ---------- odf: 1d ndarray The ODF function evaluated on the vertices of `sphere` sphere: `Sphere` class instance The Sphere providing the odf's discrete directions max_search_angle: float Maximum angle between a peak and its neighbour directions for fitting the Bingham distribution. Although :footcite:p:`Riffert2014` suggests 6 degrees, tests show that a value around 45 degrees is more stable. npeaks: int, optional Maximum number of peaks to fit (default: 5). min_sep_angle: float, optional Minimum separation angle between two peaks for peak extraction. rel_th: float, optional Relative threshold used for peak extraction. Returns ------- fits: list of tuples Bingham distribution parameters for each ODF peak. n: int Number of Bingham functions fitted (at most ``npeaks``).
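See Also -------- sf_to_bingham : Fit Bingham functions to a volume of ODFs sampled on a sphere. sh_to_bingham : Fit Bingham functions to a volume of ODFs given as SH coefficients.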
Notes ----- Lobes are first found by performing a peak extraction on the input ODF and Bingham distributions are then fitted around each of the extracted peaks using the method described in :footcite:p:`Riffert2014`. References ---------- .. footbibliography:: """ # extract all maxima on the ODF dirs, vals, inds = peak_directions( odf, sphere, relative_peak_threshold=rel_th, min_separation_angle=min_sep_angle ) # n caps the number of lobes to fit in case the ODF has more than # npeaks maxima. n = min(npeaks, vals.shape[0]) # Calculate dispersion on each of the peaks up to 'n'. Initializing # `fits` before the loop guarantees an (empty) list is returned when no # peak was extracted, instead of an undefined value. fits = [] for i in range(n): fit = _bingham_fit_peak(odf, dirs[i], sphere, max_search_angle) fits.append(fit) return fits, n def _single_bingham_to_sf(f0, k1, k2, major_axis, minor_axis, vertices): """ Sample a Bingham function on the directions described by `vertices`. The function assumes that `vertices` are unit length and no checks are performed to validate that this is the case. Parameters ---------- f0: float Maximum value of the Bingham function. k1: float Concentration along major axis. k2: float Concentration along minor axis. major_axis: ndarray (3) Direction of major axis minor_axis: ndarray (3) Direction of minor axis vertices: ndarray (N, 3) Unit sphere directions along which the distribution is evaluated. Returns ------- fn : array (N,) Sampled Bingham function values at each point on directions. Notes ----- Refer to `_bingham_fit_peak` for the definition of the Bingham distribution. """ x = -k1 * vertices.dot(major_axis) ** 2 - k2 * vertices.dot(minor_axis) ** 2 fn = f0 * np.exp(x) return fn.T def bingham_to_sf(bingham_params, sphere, *, mask=None): """ Reconstruct ODFs from fitted Bingham parameters on multiple voxels. Parameters ---------- bingham_params : ndarray (..., nl, 12) ndarray containing the model parameters of the Bingham functions fitted to ODFs in the following order: - Maximum value of the Bingham function (f0, index 0); - concentration parameters k1 and k2 (indexes 1 and 2); - elements of Bingham's main direction (indexes 3-5); - elements of Bingham's dispersion major axis (indexes 6-8); - elements of Bingham's dispersion minor axis (indexes 9-11). sphere: `Sphere` class instance The Sphere providing the odf's discrete directions mask: ndarray, optional Map marking the coordinates in the data that should be analyzed. Default (None) means all voxels in the volume will be analyzed. Returns ------- ODF : ndarray (..., n_directions) The value of the odf on each point of `sphere`. """ n_directions = sphere.vertices.shape[0] shape = bingham_params.shape[0:-2] if mask is None: mask = np.ones(shape) odf = np.zeros(shape + (n_directions,)) for idx in ndindex(shape): if not mask[idx]: continue bpars = bingham_params[idx] f0s = bpars[..., 0] npeaks = np.sum(f0s > 0) this_odf = 0 for li in range(npeaks): f0 = f0s[li] k1 = bpars[li, 1] k2 = bpars[li, 2] mu1 = bpars[li, 6:9] mu2 = bpars[li, 9:12] this_odf += _single_bingham_to_sf(f0, k1, k2, mu1, mu2, sphere.vertices) odf[idx] = this_odf return odf def bingham_fiber_density(bingham_params, *, subdivide=5, mask=None): """ Compute fiber density for each lobe for a given Bingham ODF. Measured in the unit 1/mm^3.
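The surface integral is approximated numerically: the Bingham function is sampled on :math:`N` unit directions :math:`u_i` of a subdivided icosahedron, each carrying the solid-angle element :math:`dA = 4\pi/N`, and the implementation below computes .. math:: fd \approx \sum_{i=1}^{N} B(u_i) \, dA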
Parameters ---------- bingham_params : ndarray (..., nl, 12) ndarray containing the model parameters of the Bingham functions fitted to ODFs in the following order: - Maximum value of the Bingham function (f0, index 0); - Concentration parameters k1 and k2 (indexes 1 and 2); - Elements of Bingham's main direction (indexes 3-5); - Elements of Bingham's dispersion major axis (indexes 6-8); - Elements of Bingham's dispersion minor axis (indexes 9-11). subdivide: int >= 0, optional Number of times the unit icosahedron used for integration should be subdivided. The higher this value the more precise the approximation will be, at the cost of longer execution times. The default results in a sphere of 10242 points. mask: ndarray, optional Map marking the coordinates in the data that should be analyzed. Default (None) means all voxels in the volume will be analyzed. Returns ------- fd: ndarray (..., nl) Fiber density for each Bingham function. Notes ----- Fiber density (fd) is given by the surface integral of the Bingham function :footcite:p:`Riffert2014`. References ---------- .. footbibliography:: """ sphere = unit_icosahedron.subdivide(n=subdivide) # directions for evaluating the integral u = sphere.vertices # area of a single surface element dA = 4.0 * np.pi / len(u) shape = bingham_params.shape[0:-2] if mask is None: mask = np.ones(shape) # loop integral calculation for each image voxel fd = np.zeros(bingham_params.shape[0:-1]) for idx in ndindex(shape): if not mask[idx]: continue bpars = bingham_params[idx] f0s = bpars[..., 0] npeaks = np.sum(f0s > 0) for li in range(npeaks): f0 = f0s[li] k1 = bpars[li, 1] k2 = bpars[li, 2] mu1 = bpars[li, 6:9] mu2 = bpars[li, 9:12] bingham_eval = _single_bingham_to_sf(f0, k1, k2, mu1, mu2, u) fd[idx + (li,)] = np.sum(bingham_eval * dA) return fd def bingham_fiber_spread(f0, fd): """ Compute fiber spread for each lobe for a given Bingham volume. Measured in radians. Parameters ---------- f0: ndarray Peak amplitude (f0) of each Bingham function. fd: ndarray Fiber density (fd) of each Bingham function. Returns ------- fs: ndarray Fiber spread (fs) of each Bingham function. Notes ----- Fiber spread (fs) is defined as fs = fd/f0 and characterizes the spread of the lobe, i.e. the higher the fs, the wider the lobe :footcite:p:`Riffert2014`. References ---------- .. footbibliography:: """ fs = np.zeros(f0.shape) fs[f0 > 0] = fd[f0 > 0] / f0[f0 > 0] return fs def k2odi(k): r""" Convert the Bingham/Watson concentration parameter k to the orientation dispersion index (ODI). Parameters ---------- k: ndarray Watson/Bingham concentration parameter Returns ------- odi: ndarray Orientation Dispersion Index Notes ----- The Orientation Dispersion Index for Watson/Bingham functions is defined as :footcite:p:`NetoHenriques2018`, :footcite:p:`Zhang2012`: .. math:: ODI = \frac{2}{\pi} \arctan\left( \frac{1}{k} \right) References ---------- .. footbibliography:: """ odi = np.zeros(k.shape) odi[k > 0] = 2 / np.pi * np.arctan(1 / k[k > 0]) return odi def odi2k(odi): r""" Convert the orientation dispersion index (ODI) to the Bingham/Watson concentration parameter k. Parameters ---------- odi: ndarray Orientation Dispersion Index Returns ------- k: ndarray Watson/Bingham concentration parameter Notes ----- The Orientation Dispersion Index for Watson/Bingham functions is defined as :footcite:p:`NetoHenriques2018`, :footcite:p:`Zhang2012`: .. math:: ODI = \frac{2}{\pi} \arctan\left( \frac{1}{k} \right) References ---------- ..
footbibliography:: """ k = np.zeros(odi.shape) k[odi > 0] = 1 / np.tan(np.pi / 2 * odi[odi > 0]) return k def _convert_bingham_pars(fits, npeaks): """ Convert the list-of-tuples output of the Bingham fit to an ndarray. Parameters ---------- fits : list of tuples List of nl tuples containing the Bingham function parameters in the following order: - Maximum value of the Bingham function (f0); - concentration parameters (k1 and k2); - elements of Bingham's main direction (mu0); - elements of Bingham's dispersion major axis (mu1); - and elements of Bingham's dispersion minor axis (mu2). npeaks: int Maximum number of fitted Bingham functions, by number of peaks. Returns ------- bingham_params : ndarray (nl, 12) ndarray containing the model parameters of the Bingham functions fitted to ODFs in the following order: - Maximum value of the Bingham function (f0, index 0); - concentration parameters k1 and k2 (indexes 1 and 2); - elements of Bingham's main direction (indexes 3-5); - elements of Bingham's dispersion major axis (indexes 6-8); - elements of Bingham's dispersion minor axis (indexes 9-11). """ n = len(fits) bpars = np.zeros((npeaks, 12)) for ln in range(n): bpars[ln, 0] = fits[ln][0] bpars[ln, 1] = fits[ln][1] bpars[ln, 2] = fits[ln][2] bpars[ln, 3:6] = fits[ln][3] bpars[ln, 6:9] = fits[ln][4] bpars[ln, 9:12] = fits[ln][5] return bpars def weighted_voxel_metric(bmetric, bfd): """ Compute density-weighted scalar maps for metrics of Bingham functions fitted to multiple ODF lobes. The metric is computed as the weighted average of a given metric across the multiple ODF lobes (weights are defined by the Bingham fiber density estimates). Parameters ---------- bmetric: ndarray(..., nl) Any metric with values for nl ODF lobes. bfd: ndarray(..., nl) Bingham's fiber density estimates for the nl ODF lobes Returns ------- wmetric: ndarray(...) Weight-averaged Bingham metric """ return np.sum(bmetric * bfd, axis=-1) / np.sum(bfd, axis=-1) class BinghamMetrics: """ Class for Bingham Metrics. """ def __init__(self, model_params): """Initialization of the Bingham Metrics Class. Parameters ---------- model_params : ndarray (..., nl, 12) ndarray containing Bingham's model parameters fitted to ODFs in the following order: - Maximum value of the Bingham function (f0, index 0); - concentration parameters k1 and k2 (indexes 1 and 2); - elements of Bingham's main direction (indexes 3-5); - elements of Bingham's dispersion major axis (indexes 6-8); - elements of Bingham's dispersion minor axis (indexes 9-11). """ self.model_params = model_params self.peak_dirs = model_params[..., 3:6] @auto_attr def amplitude_lobe(self): """Maximum Bingham Amplitude for each ODF lobe. Measured in the unit 1/mm^3*rad.""" return self.model_params[..., 0] @auto_attr def kappa1_lobe(self): """Concentration parameter k1 for each ODF lobe.""" return self.model_params[..., 1] @auto_attr def kappa2_lobe(self): """Concentration parameter k2 for each ODF lobe.""" return self.model_params[..., 2] @auto_attr def kappa_total_lobe(self): """Overall concentration parameter for an ODF peak. The overall (combined) concentration parameter for each lobe is defined by equation 19 in :footcite:p:`Tariq2016` as .. math:: k_{total} = \sqrt{k_1 k_2} References ---------- ..
footbibliography:: """ return np.sqrt(self.kappa1_lobe * self.kappa2_lobe) @auto_attr def odi1_lobe(self): """Orientation Dispersion index 1 computed for each ODF lobe from concentration parameter kappa1.""" return k2odi(self.kappa1_lobe) @auto_attr def odi2_lobe(self): """Orientation Dispersion index 2 computed for each ODF lobe from concentration parameter kappa2.""" return k2odi(self.kappa2_lobe) @auto_attr def odi_total_lobe(self): """Overall Orientation Dispersion Index (ODI) for an ODF lobe. Overall Orientation Dispersion Index (ODI) computed for an ODF lobe from the overall concentration parameter (k_total). Defined by equation 20 in :footcite:p:`Tariq2016`. References ---------- .. footbibliography:: """ return k2odi(self.kappa_total_lobe) @auto_attr def fd_lobe(self): """Fiber Density computed as the integral of the Bingham functions fitted for each ODF lobe.""" return bingham_fiber_density(self.model_params) @auto_attr def odi1_voxel(self): """Voxel Orientation Dispersion Index 1 (weighted average of odi1 across all lobes where the weights are each lobe's fd estimate). """ return weighted_voxel_metric(self.odi1_lobe, self.fd_lobe) @auto_attr def odi2_voxel(self): """Voxel Orientation Dispersion Index 2 (weighted average of odi2 across all lobes where the weights are each lobe's fd estimate). """ return weighted_voxel_metric(self.odi2_lobe, self.fd_lobe) @auto_attr def odi_total_voxel(self): """Voxel total Orientation Dispersion Index (weighted average of odf_total across all lobes where the weights are each lobe's fd estimate).""" return weighted_voxel_metric(self.odi_total_lobe, self.fd_lobe) @auto_attr def fd_voxel(self): """Voxel fiber density (sum of fd estimates of all ODF lobes).""" return np.sum(self.fd_lobe, axis=-1) @auto_attr def fs_lobe(self): """Fiber spread computed for each ODF lobe. Notes ----- Fiber spread (fs) is defined as fs = fd/f0 and characterizes the spread of the lobe, i.e. the higher the fs, the wider the lobe :footcite:p:`Riffert2014`. References ---------- .. footbibliography:: """ return bingham_fiber_spread(self.amplitude_lobe, self.fd_lobe) @auto_attr def fs_voxel(self): """Voxel fiber spread (weighted average of fiber spread across all lobes where the weights are each lobe's fd estimate).""" return weighted_voxel_metric(self.fs_lobe, self.fd_lobe) def odf(self, sphere): """Reconstruct ODFs from fitted Bingham parameters on multiple voxels. Parameters ---------- sphere: `Sphere` class instance The Sphere providing the discrete directions for ODF reconstruction. Returns ------- ODF : ndarray (..., n_directions) The value of the odf on each point of `sphere`. """ mask = self.fd_voxel > 0 return bingham_to_sf(self.model_params, sphere, mask=mask) def sf_to_bingham( odf, sphere, max_search_angle, *, mask=None, npeaks=5, min_sep_angle=60, rel_th=0.1 ): """ Fit the Bingham function from an image volume of ODFs. Parameters ---------- odf: ndarray (Nx, Ny, Nz, Ndirs) Orientation Distribution Function sampled on the vertices of a sphere. sphere: `Sphere` class instance The Sphere providing the odf's discrete directions. max_search_angle: float. Maximum angle between a peak and its neighbour directions for fitting the Bingham distribution. mask: ndarray, optional Map marking the coordinates in the data that should be analyzed. npeak: int, optional Maximum number of peaks found. min_sep_angle: float, optional Minimum separation angle between two peaks for peak extraction. rel_th: float, optional Relative threshold used for peak extraction. 
Returns ------- BinghamMetrics: class instance Class instance containing metrics computed from Bingham functions fitted to ODF lobes. """ shape = odf.shape[0:-1] if mask is None: mask = np.ones(shape) # Bingham parameters stored in an ndarray with shape: # (Nx, Ny, Nz, n_max_peak, 12). bpars = np.zeros(shape + (npeaks,) + (12,)) for idx in ndindex(shape): if not mask[idx]: continue [fits, npeaks_final] = _single_sf_to_bingham( odf[idx], sphere, max_search_angle, npeaks=npeaks, min_sep_angle=min_sep_angle, rel_th=rel_th, ) bpars[idx] = _convert_bingham_pars(fits, npeaks) return BinghamMetrics(bpars) def sh_to_bingham( sh, sphere, max_search_angle, *, mask=None, sh_basis="descoteaux07", legacy=True, npeaks=5, min_sep_angle=60, rel_th=0.1, ): """ Fit the Bingham function from an image volume of spherical harmonics (SH) representing ODFs. Parameters ---------- sh : ndarray SH coefficients representing a spherical function. sphere : `Sphere` class instance The Sphere providing the odf's discrete directions. max_search_angle: float. Maximum angle between a peak and its neighbour directions for fitting the Bingham distribution. mask: ndarray, optional Map marking the coordinates in the data that should be analyzed. sh_basis: str, optional SH basis. Either `descoteaux07` or `tournier07`. legacy: bool, optional Use legacy SH basis definitions. npeak: int, optional Maximum number of peaks found. min_sep_angle: float, optional Minimum separation angle between two peaks for peak extraction. rel_th: float, optional Relative threshold used for peak extraction. Returns ------- BinghamMetrics: class instance Class instance containing metrics computed from Bingham functions fitted to ODF lobes. """ shape = sh.shape[0:-1] sh_order_max = calculate_max_order(sh.shape[-1]) if mask is None: mask = np.ones(shape) # Bingham parameters saved in an ndarray with shape: # (Nx, Ny, Nz, n_max_peak, 12). bpars = np.zeros(shape + (npeaks,) + (12,)) for idx in ndindex(shape): if not mask[idx]: continue odf = sh_to_sf( sh[idx], sphere, sh_order_max=sh_order_max, basis_type=sh_basis, legacy=legacy, ) [fits, npeaks_final] = _single_sf_to_bingham( odf, sphere, max_search_angle, npeaks=npeaks, min_sep_angle=min_sep_angle, rel_th=rel_th, ) bpars[idx] = _convert_bingham_pars(fits, npeaks) return BinghamMetrics(bpars) dipy-1.11.0/dipy/reconst/cache.py000066400000000000000000000050071476546756600166320ustar00rootroot00000000000000from dipy.core.onetime import auto_attr from dipy.testing.decorators import warning_for_keywords class Cache: """Cache values based on a key object (such as a sphere or gradient table). Notes ----- This class is meant to be used as a mix-in:: class MyModel(Model, Cache): pass class MyModelFit(Fit): pass Inside a method on the fit, typical usage would be:: def odf(sphere): M = self.model.cache_get('odf_basis_matrix', key=sphere) if M is None: M = self._compute_basis_matrix(sphere) self.model.cache_set('odf_basis_matrix', key=sphere, value=M) """ # We use this method instead of __init__ to construct the cache, so # that the class can be used as a mixin, without having to worry about # calling the super-class constructor @auto_attr def _cache(self): return {} def cache_set(self, tag, key, value): """Store a value in the cache. Parameters ---------- tag : str Description of the cached value. key : object Key object used to look up the cached value. value : object Value stored in the cache for each unique combination of ``(tag, key)``. Examples -------- >>> def compute_expensive_matrix(parameters): ... 
# Imagine the following computation is very expensive ... return (p**2 for p in parameters) >>> c = Cache() >>> parameters = (1, 2, 3) >>> X1 = compute_expensive_matrix(parameters) >>> c.cache_set('expensive_matrix', parameters, X1) >>> X2 = c.cache_get('expensive_matrix', parameters) >>> X1 is X2 True """ self._cache[(tag, key)] = value @warning_for_keywords() def cache_get(self, tag, key, *, default=None): """Retrieve a value from the cache. Parameters ---------- tag : str Description of the cached value. key : object Key object used to look up the cached value. default : object Value to be returned if no cached entry is found. Returns ------- v : object Value from the cache associated with ``(tag, key)``. Returns `default` if no cached entry is found. """ return self._cache.get((tag, key), default) def cache_clear(self): """Clear the cache.""" self._cache = {} dipy-1.11.0/dipy/reconst/cross_validation.py000066400000000000000000000131601476546756600211310ustar00rootroot00000000000000""" Cross-validation analysis of diffusion models. """ import numpy as np import dipy.core.gradients as gt from dipy.testing.decorators import warning_for_keywords @warning_for_keywords() def coeff_of_determination(data, model, *, axis=-1): r"""Calculate the coefficient of determination for a model prediction, relative to data. Parameters ---------- data : ndarray The data model : ndarray The predictions of a model for this data. Same shape as the data. axis: int, optional The axis along which different samples are laid out. Returns ------- COD : ndarray The coefficient of determination. This has shape `data.shape[:-1]` Notes ----- See: https://en.wikipedia.org/wiki/Coefficient_of_determination The coefficient of determination is calculated as: .. math:: R^2 = 100 * (1 - \frac{SSE}{SSD}) where SSE is the sum of the squared error between the model and the data (sum of the squared residuals) and SSD is the sum of the squares of the deviations of the data from the mean of the data (variance * N). """ residuals = data - model ss_err = np.sum(residuals**2, axis=axis) demeaned_data = data - np.mean(data, axis=axis)[..., np.newaxis] ss_tot = np.sum(demeaned_data**2, axis=axis) # Don't divide by 0: if np.all(ss_tot == 0.0): return np.nan return 100 * (1 - (ss_err / ss_tot)) def kfold_xval(model, data, folds, *model_args, **model_kwargs): """Perform k-fold cross-validation. It generates out-of-sample predictions for each measurement. See :footcite:p:`Rokem2014` for further details about the method. Parameters ---------- model : Model class instance The type of the model to use for prediction. The corresponding Fit object must have a `predict` function implemented One of the following: `reconst.dti.TensorModel` or `reconst.csdeconv.ConstrainedSphericalDeconvModel`. data : ndarray Diffusion MRI data acquired with the GradientTable of the model. Shape will typically be `(x, y, z, b)` where `xyz` are spatial dimensions and b is the number of bvals/bvecs in the GradientTable. folds : int The number of divisions to apply to the data model_args : list Additional arguments to the model initialization model_kwargs : dict Additional key-word arguments to the model initialization. If contains the kwarg `mask`, this will be used as a key-word argument to the `fit` method of the model object, rather than being used in the initialization of the model object Notes ----- This function assumes that a prediction API is implemented in the Model class for which prediction is conducted. 
That is, the Fit object that gets generated upon fitting the model needs to have a `predict` method, which receives a GradientTable class instance as input and produces a predicted signal as output. It also assumes that the model object has `bval` and `bvec` attributes holding b-values and corresponding unit vectors. References ---------- .. footbibliography:: """ _ = model_kwargs.pop("rng", np.random.default_rng()) # This should always be there, if the model inherits from # dipy.reconst.base.ReconstModel: gtab = model.gtab data_b = data[..., ~gtab.b0s_mask] div_by_folds = np.mod(data_b.shape[-1], folds) # Make sure that an equal number of samples get left out in each fold: if div_by_folds != 0: msg = "The number of folds must divide the diffusion-weighted " msg += "data equally, but " msg = f"np.mod({data_b.shape[-1]}, {folds}) is {div_by_folds}" raise ValueError(msg) data_0 = data[..., gtab.b0s_mask] S0 = np.mean(data_0, -1) n_in_fold = data_b.shape[-1] / folds prediction = np.zeros(data.shape) # We are going to leave out some randomly chosen samples in each iteration: order = np.random.permutation(data_b.shape[-1]) nz_bval = gtab.bvals[~gtab.b0s_mask] nz_bvec = gtab.bvecs[~gtab.b0s_mask] # Pop the mask, if there is one, out here for use in every fold: mask = model_kwargs.pop("mask", None) gtgt = gt.gradient_table # Shorthand for k in range(folds): fold_mask = np.ones(data_b.shape[-1], dtype=bool) fold_idx = order[int(k * n_in_fold) : int((k + 1) * n_in_fold)] fold_mask[fold_idx] = False this_data = np.concatenate([data_0, data_b[..., fold_mask]], -1) this_gtab = gtgt( np.hstack([gtab.bvals[gtab.b0s_mask], nz_bval[fold_mask]]), bvecs=np.concatenate([gtab.bvecs[gtab.b0s_mask], nz_bvec[fold_mask]]), ) left_out_gtab = gtgt( np.hstack([gtab.bvals[gtab.b0s_mask], nz_bval[~fold_mask]]), bvecs=np.concatenate([gtab.bvecs[gtab.b0s_mask], nz_bvec[~fold_mask]]), ) this_model = model.__class__(this_gtab, *model_args, **model_kwargs) this_fit = this_model.fit(this_data, mask=mask) if not hasattr(this_fit, "predict"): err_str = f"Models of type: {this_model.__class__} " err_str += "do not have an implementation of model prediction" err_str += " and do not support cross-validation" raise ValueError(err_str) this_predict = S0[..., None] * this_fit.predict(gtab=left_out_gtab, S0=1) idx_to_assign = np.where(~gtab.b0s_mask)[0][~fold_mask] prediction[..., idx_to_assign] = this_predict[..., np.sum(gtab.b0s_mask) :] # For the b0 measurements prediction[..., gtab.b0s_mask] = S0[..., None] return prediction dipy-1.11.0/dipy/reconst/csdeconv.py000066400000000000000000001214161476546756600173760ustar00rootroot00000000000000import numbers import warnings import numpy as np from scipy.integrate import quad import scipy.linalg as la import scipy.linalg.lapack as ll import scipy.special as sps from dipy.core.geometry import cart2sphere, vec2vec_rotmat from dipy.core.ndindex import ndindex from dipy.data import default_sphere, get_sphere, small_sphere from dipy.direction.peaks import peaks_from_model from dipy.reconst.dti import TensorModel, fractional_anisotropy from dipy.reconst.multi_voxel import multi_voxel_fit from dipy.reconst.shm import ( SphHarmFit, SphHarmModel, forward_sdeconv_mat, lazy_index, real_sh_descoteaux, real_sh_descoteaux_from_index, sh_to_rh, sph_harm_ind_list, sph_harm_lookup, ) from dipy.reconst.utils import _mask_from_roi, _roi_in_volume from dipy.sims.voxel import single_tensor from dipy.testing.decorators import warning_for_keywords from dipy.utils.compatibility import check_max_version from 
dipy.utils.deprecator import deprecated_params class AxSymShResponse: """A simple wrapper for response functions represented using only axially symmetric, even spherical harmonic functions (ie, m == 0 and l is even). Parameters ---------- S0 : float Signal with no diffusion weighting. dwi_response : array Response function signal as coefficients to axially symmetric, even spherical harmonic. """ @warning_for_keywords() def __init__(self, S0, dwi_response, *, bvalue=None): self.S0 = S0 self.dwi_response = dwi_response self.bvalue = bvalue self.m_values = np.zeros(len(dwi_response)) self.sh_order_max = 2 * (len(dwi_response) - 1) self.l_values = np.arange(0, self.sh_order_max + 1, 2) def basis(self, sphere): """A basis that maps the response coefficients onto a sphere.""" theta = sphere.theta[:, None] phi = sphere.phi[:, None] return real_sh_descoteaux_from_index(self.m_values, self.l_values, theta, phi) def on_sphere(self, sphere): """Evaluates the response function on sphere.""" B = self.basis(sphere) return np.dot(self.dwi_response, B.T) class ConstrainedSphericalDeconvModel(SphHarmModel): @deprecated_params("sh_order", new_name="sh_order_max", since="1.9", until="2.0") @warning_for_keywords() def __init__( self, gtab, response, *, reg_sphere=None, sh_order_max=8, lambda_=1, tau=0.1, convergence=50, ): r"""Constrained Spherical Deconvolution (CSD). See :footcite:p:`Tournier2007` for further details about the model. Spherical deconvolution computes a fiber orientation distribution (FOD), also called fiber ODF (fODF) :footcite:p:`Descoteaux2009`, as opposed to a diffusion ODF as the QballModel or the CsaOdfModel. This results in a sharper angular profile with better angular resolution that is the best object to be used for later deterministic and probabilistic tractography :footcite:p:`Cote2013`. A sharp fODF is obtained because a single fiber *response* function is injected as *a priori* knowledge. The response function is often data-driven and is thus provided as input to the ConstrainedSphericalDeconvModel. It will be used as deconvolution kernel, as described in :footcite:p:`Tournier2007`. See also :footcite:p:`Tournier2012`. Parameters ---------- gtab : GradientTable Gradient table. response : tuple or AxSymShResponse object A tuple with two elements. The first is the eigen-values as an (3,) ndarray and the second is the signal value for the response function without diffusion weighting (i.e. S0). This is to be able to generate a single fiber synthetic signal. The response function will be used as deconvolution kernel :footcite:p:`Tournier2007`. reg_sphere : Sphere, optional sphere used to build the regularization B matrix. sh_order_max : int, optional maximal spherical harmonics order (l). lambda_ : float, optional weight given to the constrained-positivity regularization part of the deconvolution equation (see :footcite:p:`Tournier2007`). tau : float, optional threshold controlling the amplitude below which the corresponding fODF is assumed to be zero. Ideally, tau should be set to zero. However, to improve the stability of the algorithm, tau is set to tau*100 % of the mean fODF amplitude (here, 10% by default) (see :footcite:p:`Tournier2007`). convergence : int, optional Maximum number of iterations to allow the deconvolution to converge. References ---------- .. 
footbibliography:: """ # Initialize the parent class: SphHarmModel.__init__(self, gtab) m_values, l_values = sph_harm_ind_list(sh_order_max) self.m_values, self.l_values = m_values, l_values self._where_b0s = lazy_index(gtab.b0s_mask) self._where_dwi = lazy_index(~gtab.b0s_mask) no_params = ((sh_order_max + 1) * (sh_order_max + 2)) / 2 if no_params > np.sum(~gtab.b0s_mask): msg = "Number of parameters required for the fit are more " msg += "than the actual data points" warnings.warn(msg, UserWarning, stacklevel=2) x, y, z = gtab.gradients[self._where_dwi].T r, theta, phi = cart2sphere(x, y, z) # for the gradient sphere self.B_dwi = real_sh_descoteaux_from_index( m_values, l_values, theta[:, None], phi[:, None] ) # for the sphere used in the regularization positivity constraint self.sphere = reg_sphere or small_sphere r, theta, phi = cart2sphere(self.sphere.x, self.sphere.y, self.sphere.z) self.B_reg = real_sh_descoteaux_from_index( m_values, l_values, theta[:, None], phi[:, None] ) self.response = response if isinstance(response, AxSymShResponse): r_sh = response.dwi_response self.response_scaling = response.S0 l_response = response.l_values m_response = response.m_values else: self.S_r = estimate_response(gtab, self.response[0], self.response[1]) r_sh = np.linalg.lstsq(self.B_dwi, self.S_r[self._where_dwi], rcond=-1)[0] l_response = l_values m_response = m_values self.response_scaling = response[1] r_rh = sh_to_rh(r_sh, m_response, l_response) self.R = forward_sdeconv_mat(r_rh, l_values) # scale lambda_ to account for differences in the number of # SH coefficients and number of mapped directions # This is exactly what is done in :footcite:p:`Tournier2012` lambda_ = ( lambda_ * self.R.shape[0] * r_rh[0] / (np.sqrt(self.B_reg.shape[0]) * np.sqrt(362.0)) ) self.B_reg *= lambda_ self.sh_order_max = sh_order_max self.tau = tau self.convergence = convergence self._X = X = self.R.diagonal() * self.B_dwi self._P = np.dot(X.T, X) @multi_voxel_fit def fit(self, data, **kwargs): dwi_data = data[self._where_dwi] shm_coeff, _ = csdeconv( dwi_data, self._X, self.B_reg, tau=self.tau, convergence=self.convergence, P=self._P, ) return SphHarmFit(self, shm_coeff, None) @warning_for_keywords() def predict(self, sh_coeff, *, gtab=None, S0=1.0): """Compute a signal prediction given spherical harmonic coefficients for the provided GradientTable class instance. Parameters ---------- sh_coeff : ndarray The spherical harmonic representation of the FOD from which to make the signal prediction. gtab : GradientTable The gradients for which the signal will be predicted. Uses the model's gradient table by default. S0 : ndarray or float The non diffusion-weighted signal value. Returns ------- pred_sig : ndarray The predicted signal. 
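Examples
--------
A minimal synthetic sketch (the response eigenvalues below are
illustrative, not estimated from data):

>>> import numpy as np
>>> from dipy.core.gradients import gradient_table
>>> from dipy.data import small_sphere
>>> from dipy.reconst.csdeconv import ConstrainedSphericalDeconvModel
>>> from dipy.sims.voxel import single_tensor
>>> bvals = np.hstack(([0], np.full(len(small_sphere.vertices), 1000)))
>>> bvecs = np.vstack(([[0, 0, 0]], small_sphere.vertices))
>>> gtab = gradient_table(bvals, bvecs=bvecs)
>>> csd = ConstrainedSphericalDeconvModel(
...     gtab, (np.array([1.7e-3, 0.3e-3, 0.3e-3]), 100.0)
... )
>>> fit = csd.fit(single_tensor(gtab, 100))
>>> csd.predict(fit.shm_coeff, S0=100.0).shape
(363,)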
""" if gtab is None or gtab is self.gtab: SH_basis = self.B_dwi gtab = self.gtab else: x, y, z = gtab.gradients[~gtab.b0s_mask].T r, theta, phi = cart2sphere(x, y, z) SH_basis, _, _ = real_sh_descoteaux(self.sh_order_max, theta, phi) # Because R is diagonal, the matrix multiply is written as a multiply predict_matrix = SH_basis * self.R.diagonal() S0 = np.asarray(S0)[..., None] scaling = S0 / self.response_scaling # This is the key operation: convolve and multiply by S0: pre_pred_sig = scaling * np.dot(sh_coeff, predict_matrix.T) # Now put everything in its right place: pred_sig = np.zeros(pre_pred_sig.shape[:-1] + (gtab.bvals.shape[0],)) pred_sig[..., ~gtab.b0s_mask] = pre_pred_sig pred_sig[..., gtab.b0s_mask] = S0 return pred_sig class ConstrainedSDTModel(SphHarmModel): @deprecated_params("sh_order", new_name="sh_order_max", since="1.9", until="2.0") @warning_for_keywords() def __init__( self, gtab, ratio, *, reg_sphere=None, sh_order_max=8, lambda_=1.0, tau=0.1 ): r"""Spherical Deconvolution Transform (SDT) :footcite:p:`Descoteaux2009`. The SDT computes a fiber orientation distribution (FOD) as opposed to a diffusion ODF as the QballModel or the CsaOdfModel. This results in a sharper angular profile with better angular resolution. The Constrained SDTModel is similar to the Constrained CSDModel but mathematically it deconvolves the q-ball ODF as opposed to the HARDI signal (see :footcite:p:`Descoteaux2009` for a comparison and a thorough discussion). A sharp fODF is obtained because a single fiber *response* function is injected as *a priori* knowledge. In the SDTModel, this response is a single fiber q-ball ODF as opposed to a single fiber signal function for the CSDModel. The response function will be used as deconvolution kernel. Parameters ---------- gtab : GradientTable Gradient table. ratio : float ratio of the smallest vs the largest eigenvalue of the single prolate tensor response function reg_sphere : Sphere sphere used to build the regularization B matrix sh_order_max : int maximal spherical harmonics order (l) lambda_ : float weight given to the constrained-positivity regularization part of the deconvolution equation tau : float threshold (``tau *mean(fODF)``) controlling the amplitude below which the corresponding fODF is assumed to be zero. References ---------- .. 
footbibliography:: """ SphHarmModel.__init__(self, gtab) m_values, l_values = sph_harm_ind_list(sh_order_max) self.m_values, self.l_values = m_values, l_values self._where_b0s = lazy_index(gtab.b0s_mask) self._where_dwi = lazy_index(~gtab.b0s_mask) no_params = ((sh_order_max + 1) * (sh_order_max + 2)) / 2 if no_params > np.sum(~gtab.b0s_mask): msg = "Number of parameters required for the fit are more " msg += "than the actual data points" warnings.warn(msg, UserWarning, stacklevel=2) x, y, z = gtab.gradients[self._where_dwi].T r, theta, phi = cart2sphere(x, y, z) # for the gradient sphere self.B_dwi = real_sh_descoteaux_from_index( m_values, l_values, theta[:, None], phi[:, None] ) # for the odf sphere if reg_sphere is None: self.sphere = get_sphere(name="symmetric362") else: self.sphere = reg_sphere r, theta, phi = cart2sphere(self.sphere.x, self.sphere.y, self.sphere.z) self.B_reg = real_sh_descoteaux_from_index( m_values, l_values, theta[:, None], phi[:, None] ) self.R, self.P = forward_sdt_deconv_mat(ratio, l_values) # scale lambda_ to account for differences in the number of # SH coefficients and number of mapped directions self.lambda_ = lambda_ * self.R.shape[0] * self.R[0, 0] / self.B_reg.shape[0] self.tau = tau self.sh_order_max = sh_order_max @multi_voxel_fit def fit(self, data, **kwargs): s_sh = np.linalg.lstsq(self.B_dwi, data[self._where_dwi], rcond=-1)[0] # initial ODF estimation odf_sh = np.dot(self.P, s_sh) qball_odf = np.dot(self.B_reg, odf_sh) Z = np.linalg.norm(qball_odf) shm_coeff, num_it = np.zeros_like(odf_sh), 0 if Z: # normalize ODF odf_sh /= Z shm_coeff, num_it = odf_deconv( odf_sh, self.R, self.B_reg, lambda_=self.lambda_, tau=self.tau ) # print 'SDT CSD converged after %d iterations' % num_it return SphHarmFit(self, shm_coeff, None) def estimate_response(gtab, evals, S0): """Estimate single fiber response function Parameters ---------- gtab : GradientTable Gradient table. evals : ndarray Eigenvalues. S0 : float non diffusion weighted Returns ------- S : estimated signal """ evecs = np.array([[0, 0, 1], [0, 1, 0], [1, 0, 0]]) return single_tensor(gtab, S0, evals=evals, evecs=evecs, snr=None) @deprecated_params("n", new_name="l_values", since="1.9", until="2.0") @warning_for_keywords() def forward_sdt_deconv_mat(ratio, l_values, *, r2_term=False): r"""Build forward sharpening deconvolution transform (SDT) matrix Parameters ---------- ratio : float ratio = $\frac{\lambda_2}{\lambda_1}$ of the single fiber response function l_values : ndarray (N,) The order ($l$) of spherical harmonic function associated with each row of the deconvolution matrix. Only even orders are allowed. r2_term : bool True if ODF comes from an ODF computed from a model using the $r^2$ term in the integral. For example, DSI, GQI, SHORE, CSA, Tensor, Multi-tensor ODFs. This results in using the proper analytical response function solution solving from the single-fiber ODF with the r^2 term. This derivation is not published anywhere but is very similar to :footcite:p:`Descoteaux2008b`. Returns ------- R : ndarray (N, N) SDT deconvolution matrix P : ndarray (N, N) Funk-Radon Transform (FRT) matrix References ---------- .. 
footbibliography:: """ if np.any(l_values % 2): raise ValueError("n has odd orders, expecting only even orders") n_orders = l_values.max() // 2 + 1 sdt = np.zeros(n_orders) # SDT matrix frt = np.zeros(n_orders) # FRT (Funk-Radon transform) q-ball matrix lpn = ( sps.lpn if check_max_version("scipy", "1.15.0", strict=True) else sps.legendre_p_all ) for j in np.arange(0, n_orders * 2, 2): if r2_term: sharp = quad( lambda z, j=j: lpn(j, z)[0][-1] * sps.gamma(1.5) * np.sqrt(ratio / (4 * np.pi**3)) / np.power((1 - (1 - ratio) * z**2), 1.5), -1.0, 1.0, ) else: sharp = quad( lambda z, j=j: lpn(j, z)[0][-1] * np.sqrt(1 / (1 - (1 - ratio) * z * z)), -1.0, 1.0, ) sdt[j // 2] = sharp[0] frt[j // 2] = 2 * np.pi * lpn(j, 0)[0][-1] idx = l_values // 2 b = sdt[idx] bb = frt[idx] return np.diag(b), np.diag(bb) potrf, potrs = ll.get_lapack_funcs(("potrf", "potrs")) def _solve_cholesky(Q, z): L, info = potrf(Q, lower=False, overwrite_a=False, clean=False) if info > 0: msg = f"{info}-th leading minor not positive definite" raise la.LinAlgError(msg) if info < 0: msg = f"illegal value in {-info}-th argument of internal potrf" raise ValueError(msg) f, info = potrs(L, z, lower=False, overwrite_b=False) if info != 0: msg = f"illegal value in {-info}-th argument of internal potrs" raise ValueError(msg) return f @warning_for_keywords() def csdeconv(dwsignal, X, B_reg, *, tau=0.1, convergence=50, P=None): r"""Constrained-regularized spherical deconvolution (CSD). Deconvolves the axially symmetric single fiber response function `r_rh` in rotational harmonics coefficients from the diffusion weighted signal in `dwsignal` :footcite:p:`Tournier2007`. Parameters ---------- dwsignal : array Diffusion weighted signals to be deconvolved. X : array Prediction matrix which estimates diffusion weighted signals from FOD coefficients. B_reg : array (N, B) SH basis matrix which maps FOD coefficients to FOD values on the surface of the sphere. B_reg should be scaled to account for lambda. tau : float Threshold controlling the amplitude below which the corresponding fODF is assumed to be zero. Ideally, tau should be set to zero. However, to improve the stability of the algorithm, tau is set to tau*100 % of the max fODF amplitude (here, 10% by default). This is similar to peak detection where peaks below 0.1 amplitude are usually considered noise peaks. Because SDT is based on a q-ball ODF deconvolution, and not signal deconvolution, using the max instead of mean (as in CSD), is more stable. convergence : int Maximum number of iterations to allow the deconvolution to converge. P : ndarray This is an optimization to avoid computing ``dot(X.T, X)`` many times. If the same ``X`` is used many times, ``P`` can be precomputed and passed to this function. Returns ------- fodf_sh : ndarray (``(sh_order_max + 1)*(sh_order_max + 2)/2``,) Spherical harmonics coefficients of the constrained-regularized fiber ODF. _num_it : int Number of iterations in the constrained-regularization used for convergence. Notes ----- This section describes how the fitting of the SH coefficients is done. Problem is to minimise per iteration: $F(f_n) = ||Xf_n - S||^2 + \lambda^2 ||H_{n-1} f_n||^2$ Where $X$ maps current FOD SH coefficients $f_n$ to DW signals $s$ and $H_{n-1}$ maps FOD SH coefficients $f_n$ to amplitudes along set of negative directions identified in previous iteration, i.e. the matrix formed by the rows of $B_{reg}$ for which $Hf_{n-1}<0$ where $B_{reg}$ maps $f_n$ to FOD amplitude on a sphere. 
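(In words: directions where the current FOD estimate is negative are pushed toward zero amplitude at the next iteration.)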
Solve by differentiating and setting to zero: $\Rightarrow \frac{\delta F}{\delta f_n} = 2X^T(Xf_n - S) + 2 \lambda^2 H_{n-1}^TH_{n-1}f_n=0$ Or: $(X^TX + \lambda^2 H_{n-1}^TH_{n-1})f_n = X^Ts$ Define $Q = X^TX + \lambda^2 H_{n-1}^TH_{n-1}$ , which by construction is a square positive definite symmetric matrix of size $n_{SH} by n_{SH}$. If needed, positive definiteness can be enforced with a small minimum norm regulariser (helps a lot with poorly conditioned direction sets and/or superresolution): $Q = X^TX + (\lambda H_{n-1}^T) (\lambda H_{n-1}) + \mu I$ Solve $Qf_n = X^Ts$ using Cholesky decomposition: $Q = LL^T$ where $L$ is lower triangular. Then problem can be solved by back-substitution: $L_y = X^Ts$ $L^Tf_n = y$ To speeds things up further, form $P = X^TX + \mu I$, and update to form $Q$ by rankn update with $H_{n-1}$. The dipy implementation looks like: form initially $P = X^T X + \mu I$ and $\lambda B_{reg}$ for each voxel: form $z = X^Ts$ estimate $f_0$ by solving $Pf_0=z$. We use a simplified $l_{max}=4$ solution here, but it might not make a big difference. Then iterate until no change in rows of $H$ used in $H_n$ form $H_{n}$ given $f_{n-1}$ form $Q = P + (\lambda H_{n-1}^T) (\lambda H_{n-1}$) (this can be done by rankn update, but we currently do not use rankn update). solve $Qf_n = z$ using Cholesky decomposition We would like to thank Donald Tournier for his help with describing and implementing this algorithm. References ---------- .. footbibliography:: """ mu = 1e-5 if P is None: P = np.dot(X.T, X) z = np.dot(X.T, dwsignal) try: fodf_sh = _solve_cholesky(P, z) except la.LinAlgError: P = P + mu * np.eye(P.shape[0]) fodf_sh = _solve_cholesky(P, z) # For the first iteration we use a smooth FOD that only uses SH orders up # to 4 (the first 15 coefficients). fodf = np.dot(B_reg[:, :15], fodf_sh[:15]) # The mean of an fodf can be computed by taking $Y_{0,0} * coeff_{0,0}$ threshold = B_reg[0, 0] * fodf_sh[0] * tau where_fodf_small = (fodf < threshold).nonzero()[0] # If the low-order fodf does not have any values less than threshold, the # full-order fodf is used. if len(where_fodf_small) == 0: fodf = np.dot(B_reg, fodf_sh) where_fodf_small = (fodf < threshold).nonzero()[0] # If the fodf still has no values less than threshold, return the fodf. if len(where_fodf_small) == 0: return fodf_sh, 0 _num_it = 0 for _num_it in range(1, convergence + 1): # This is the super-resolved trick. Wherever there is a negative # amplitude value on the fODF, it concatenates a value to the S vector # so that the estimation can focus on trying to eliminate it. In a # sense, this "adds" a measurement, which can help to better estimate # the fodf_sh, even if you have more SH coefficients to estimate than # actual S measurements. H = B_reg.take(where_fodf_small, axis=0) # We use the Cholesky decomposition to solve for the SH coefficients. Q = P + np.dot(H.T, H) fodf_sh = _solve_cholesky(Q, z) # Sample the FOD using the regularization sphere and compute k. 
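# Convergence is declared when the set of sub-threshold directions # stops changing from one iteration to the next.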
fodf = np.dot(B_reg, fodf_sh) where_fodf_small_last = where_fodf_small where_fodf_small = (fodf < threshold).nonzero()[0] if ( len(where_fodf_small) == len(where_fodf_small_last) and (where_fodf_small == where_fodf_small_last).all() ): break else: msg = "maximum number of iterations exceeded - failed to converge" warnings.warn(msg, stacklevel=2) return fodf_sh, _num_it @warning_for_keywords() def odf_deconv(odf_sh, R, B_reg, *, lambda_=1.0, tau=0.1, r2_term=False): r"""ODF constrained-regularized spherical deconvolution using the Sharpening Deconvolution Transform (SDT). See :footcite:p:`Tuch2004` and :footcite:p:`Descoteaux2009` for further details about the method. Parameters ---------- odf_sh : ndarray (``(sh_order_max + 1)*(sh_order_max + 2)/2``,) ndarray of SH coefficients for the ODF spherical function to be deconvolved R : ndarray (``(sh_order_max + 1)(sh_order_max + 2)/2``, ``(sh_order_max + 1)(sh_order_max + 2)/2``) SDT matrix in SH basis B_reg : ndarray (``(sh_order_max + 1)(sh_order_max + 2)/2``, ``(sh_order_max + 1)(sh_order_max + 2)/2``) SH basis matrix used for deconvolution lambda_ : float, optional lambda parameter in minimization equation tau : float, optional threshold (``tau *max(fODF)``) controlling the amplitude below which the corresponding fODF is assumed to be zero. r2_term : bool, optional True if ODF is computed from model that uses the $r^2$ term in the integral. Recall that Tuch's ODF (used in Q-ball Imaging :footcite:p:`Tuch2004`) and the true normalized ODF definition differ from a $r^2$ term in the ODF integral. The original Sharpening Deconvolution Transform (SDT) technique :footcite:p:`Descoteaux2009` is expecting Tuch's ODF without the $r^2$ (see :footcite:p:`Descoteaux2008b` for the mathematical details). Now, this function supports ODF that have been computed using the $r^2$ term because the proper analytical response function has be derived. For example, models such as DSI, GQI, SHORE, CSA, Tensor, Multi-tensor ODFs, should now be deconvolved with the r2_term=True. Returns ------- fodf_sh : ndarray (``(sh_order_max + 1)(sh_order_max + 2)/2``,) Spherical harmonics coefficients of the constrained-regularized fiber ODF num_it : int Number of iterations in the constrained-regularization used for convergence References ---------- .. footbibliography:: """ # In ConstrainedSDTModel.fit, odf_sh is divided by its norm (Z) and # sometimes the norm is 0 which creates NaNs. 
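# Guard against that degenerate case by returning an all-zero fODF.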
if np.any(np.isnan(odf_sh)): return np.zeros_like(odf_sh), 0 # Generate initial fODF estimate, which is the ODF truncated at SH order 4 fodf_sh = np.linalg.lstsq(R, odf_sh, rcond=-1)[0] fodf_sh[15:] = 0 fodf = np.dot(B_reg, fodf_sh) # if sharpening a q-ball odf (it is NOT properly normalized), we need to # force normalization otherwise, for DSI, CSA, SHORE, Tensor odfs, they are # normalized by construction if not r2_term: Z = np.linalg.norm(fodf) fodf_sh /= Z threshold = tau * np.max(np.dot(B_reg, fodf_sh)) k = np.empty([]) convergence = 50 for num_it in range(1, convergence + 1): A = np.dot(B_reg, fodf_sh) k2 = np.nonzero(A < threshold)[0] if (k2.shape[0] + R.shape[0]) < B_reg.shape[1]: warnings.warn( "too few negative directions identified - failed to converge", stacklevel=2, ) return fodf_sh, num_it if num_it > 1 and k.shape[0] == k2.shape[0]: if (k == k2).all(): return fodf_sh, num_it k = k2 M = np.concatenate((R, lambda_ * B_reg[k, :])) ODF = np.concatenate((odf_sh, np.zeros(k.shape))) try: fodf_sh = np.linalg.lstsq(M, ODF, rcond=-1)[0] except np.linalg.LinAlgError: # SVD did not converge in Linear Least Squares in current # voxel. Proceeding with initial SH estimate for this voxel. pass warnings.warn( "maximum number of iterations exceeded - failed to converge", stacklevel=2 ) return fodf_sh, num_it @deprecated_params("sh_order", new_name="sh_order_max", since="1.9", until="2.0") @warning_for_keywords() def odf_sh_to_sharp( odfs_sh, sphere, *, basis=None, ratio=3 / 15.0, sh_order_max=8, lambda_=1.0, tau=0.1, r2_term=False, ): r"""Sharpen odfs using the sharpening deconvolution transform. This function can be used to sharpen any smooth ODF spherical function. In theory, this should only be used to sharpen QballModel ODFs, but in practice, one can play with the deconvolution ratio and sharpen almost any ODF-like spherical function. The constrained-regularization is stable and will not only sharpen the ODF peaks but also regularize the noisy peaks. See :footcite:p:`Descoteaux2009` for further details about the method. Parameters ---------- odfs_sh : ndarray (``(sh_order_max + 1)*(sh_order_max + 2)/2``, ) array of odfs expressed as spherical harmonics coefficients sphere : Sphere sphere used to build the regularization matrix basis : {None, 'tournier07', 'descoteaux07'}, optional different spherical harmonic basis: ``None`` for the default DIPY basis, ``tournier07`` for the Tournier 2007 :footcite:p:`Tournier2007` basis, and ``descoteaux07`` for the Descoteaux 2007 :footcite:p:`Descoteaux2007` basis (``None`` defaults to ``descoteaux07``). ratio : float, optional ratio of the smallest vs the largest eigenvalue of the single prolate tensor response function (:math:`\frac{\lambda_2}{\lambda_1}`) sh_order_max : int, optional maximal SH order ($l$) of the SH representation lambda_ : float, optional lambda parameter (see odfdeconv) tau : float, optional tau parameter in the L matrix construction (see odfdeconv) r2_term : bool, optional True if ODF is computed from model that uses the $r^2$ term in the integral. Recall that Tuch's ODF (used in Q-ball Imaging :footcite:p:`Tuch2004`) and the true normalized ODF definition differ from a $r^2$ term in the ODF integral. The original Sharpening Deconvolution Transform (SDT) technique :footcite:p:`Descoteaux2009` is expecting Tuch's ODF without the $r^2$ (see :footcite:p:`Descoteaux2007` for the mathematical details). 
        Now, this function supports ODFs that have been computed using the
        $r^2$ term because the proper analytical response function has been
        derived. For example, models such as DSI, GQI, SHORE, CSA, Tensor,
        and Multi-tensor ODFs should now be deconvolved with
        ``r2_term=True``.

    Returns
    -------
    fodf_sh : ndarray
        sharpened odf expressed as spherical harmonics coefficients

    References
    ----------
    .. footbibliography::
    """
    r, theta, phi = cart2sphere(sphere.x, sphere.y, sphere.z)
    real_sym_sh = sph_harm_lookup[basis]

    B_reg, m_values, l_values = real_sym_sh(sh_order_max, theta, phi)

    R, P = forward_sdt_deconv_mat(ratio, l_values, r2_term=r2_term)

    # scale lambda to account for differences in the number of
    # SH coefficients and number of mapped directions
    lambda_ = lambda_ * R.shape[0] * R[0, 0] / B_reg.shape[0]

    fodf_sh = np.zeros(odfs_sh.shape)

    for index in ndindex(odfs_sh.shape[:-1]):
        fodf_sh[index], num_it = odf_deconv(
            odfs_sh[index], R, B_reg, lambda_=lambda_, tau=tau, r2_term=r2_term
        )

    return fodf_sh


@warning_for_keywords()
def mask_for_response_ssst(gtab, data, *, roi_center=None, roi_radii=10, fa_thr=0.7):
    """Computation of mask for single-shell single-tissue (ssst) response
    function using FA.

    Parameters
    ----------
    gtab : GradientTable
        Gradient table.
    data : ndarray
        diffusion data (4D)
    roi_center : array-like, (3,)
        Center of ROI in data. If center is None, it is assumed that it is
        the center of the volume with shape `data.shape[:3]`.
    roi_radii : int or array-like, (3,)
        radii of cuboid ROI
    fa_thr : float
        FA threshold

    Returns
    -------
    mask : ndarray
        Mask of voxels within the ROI and with FA above the FA threshold.

    Notes
    -----
    In CSD there is an important pre-processing step: the estimation of the
    fiber response function. In order to do this, we look for voxels with
    very anisotropic configurations. This function aims to accomplish that by
    returning a mask of voxels within a ROI that have a FA value above a
    given threshold. For example, we can use a ROI (20x20x20) at the center
    of the volume and store the signal values for the voxels with FA values
    higher than 0.7 (see :footcite:p:`Tournier2004`).

    References
    ----------
    .. footbibliography::
    """
    if len(data.shape) < 4:
        msg = """Data must be 4D (3D image + directions). To use a 2D image,
        please reshape it into a (N, N, 1, ndirs) array."""
        raise ValueError(msg)

    if isinstance(roi_radii, numbers.Number):
        roi_radii = (roi_radii, roi_radii, roi_radii)

    if roi_center is None:
        roi_center = np.array(data.shape[:3]) // 2

    roi_radii = _roi_in_volume(
        data.shape, np.asarray(roi_center), np.asarray(roi_radii)
    )

    roi_mask = _mask_from_roi(data.shape[:3], roi_center, roi_radii)

    ten = TensorModel(gtab)
    tenfit = ten.fit(data, mask=roi_mask)
    fa = fractional_anisotropy(tenfit.evals)
    fa[np.isnan(fa)] = 0
    mask = np.zeros(fa.shape, dtype=np.int64)
    mask[fa > fa_thr] = 1

    if np.sum(mask) == 0:
        msg = f"""No voxels with FA higher than {fa_thr} were found.
        Try a larger roi or a lower threshold."""
        warnings.warn(msg, UserWarning, stacklevel=2)

    return mask


def response_from_mask_ssst(gtab, data, mask):
    """Computation of single-shell single-tissue (ssst) response
    function from a given mask.

    Parameters
    ----------
    gtab : GradientTable
        Gradient table.
    data : ndarray
        diffusion data
    mask : ndarray
        mask from where to compute the response function

    Returns
    -------
    response : tuple, (2,)
        (`evals`, `S0`)
    ratio : float
        The ratio between smallest versus largest eigenvalue of the response.

    Notes
    -----
    In CSD there is an important pre-processing step: the estimation of the
    fiber response function.
    In order to do this, we look for voxels with very anisotropic
    configurations. This information can be obtained by using
    csdeconv.mask_for_response_ssst() through a mask of selected voxels
    (see :footcite:p:`Tournier2004`). The present function uses such a mask
    to compute the ssst response function.

    For the response we also need to find the average S0 in the ROI. Using
    `gtab.b0s_mask` we can find all the S0 volumes (which correspond to
    b-values equal to 0) in the dataset.

    The `response` always consists of a prolate tensor created by averaging
    the highest and second highest eigenvalues in the ROI with FA higher
    than threshold. We also include the average S0s.

    We also return the `ratio` which is used for the SDT models.

    References
    ----------
    .. footbibliography::
    """
    ten = TensorModel(gtab)
    indices = np.where(mask > 0)

    if indices[0].size == 0:
        msg = "No voxels with value > 0 were found in the mask."
        warnings.warn(msg, UserWarning, stacklevel=2)
        return (np.nan, np.nan), np.nan

    tenfit = ten.fit(data[indices])
    lambdas = tenfit.evals[:, :2]
    S0s = data[indices][:, np.nonzero(gtab.b0s_mask)[0]]

    return _get_response(S0s, lambdas)


@warning_for_keywords()
def auto_response_ssst(gtab, data, *, roi_center=None, roi_radii=10, fa_thr=0.7):
    """Automatic estimation of single-shell single-tissue (ssst) response
    function using FA.

    Parameters
    ----------
    gtab : GradientTable
        Gradient table.
    data : ndarray
        diffusion data
    roi_center : array-like, (3,)
        Center of ROI in data. If center is None, it is assumed that it is
        the center of the volume with shape `data.shape[:3]`.
    roi_radii : int or array-like, (3,)
        radii of cuboid ROI
    fa_thr : float
        FA threshold

    Returns
    -------
    response : tuple, (2,)
        (`evals`, `S0`)
    ratio : float
        The ratio between smallest versus largest eigenvalue of the response.

    Notes
    -----
    In CSD there is an important pre-processing step: the estimation of the
    fiber response function. In order to do this, we look for voxels with
    very anisotropic configurations. We get this information from
    csdeconv.mask_for_response_ssst(), which returns a mask of selected
    voxels (more details are available in the description of the function).

    With the mask, we compute the response function by using
    csdeconv.response_from_mask_ssst(), which returns the `response` and the
    `ratio` (more details are available in the description of the function).
    """
    mask = mask_for_response_ssst(
        gtab, data, roi_center=roi_center, roi_radii=roi_radii, fa_thr=fa_thr
    )

    response, ratio = response_from_mask_ssst(gtab, data, mask)

    return response, ratio


def _get_response(S0s, lambdas):
    """Compute the (evals, S0) response and the eigenvalue ratio from the
    S0 signals and eigenvalue pairs sampled in the ROI."""
    S0 = np.mean(S0s) if S0s.size else 0

    # Check if lambdas is empty
    if not lambdas.size:
        response = (np.zeros(3), S0)
        ratio = 0
        return response, ratio

    l01 = np.mean(lambdas, axis=0) if S0s.size else 0
    evals = np.array([l01[0], l01[1], l01[1]])
    response = (evals, S0)
    ratio = evals[1] / evals[0]

    return response, ratio


@deprecated_params("sh_order", new_name="sh_order_max", since="1.9", until="2.0")
@warning_for_keywords()
def recursive_response(
    gtab,
    data,
    *,
    mask=None,
    sh_order_max=8,
    peak_thr=0.01,
    init_fa=0.08,
    init_trace=0.0021,
    iter=8,
    convergence=0.001,
    parallel=False,
    num_processes=None,
    sphere=default_sphere,
):
    """Recursive calibration of response function using peak threshold.

    See :footcite:p:`Tax2014` for further details about the method.

    Parameters
    ----------
    gtab : GradientTable
        Gradient table.
    data : ndarray
        diffusion data
    mask : ndarray, optional
        mask for recursive calibration, for example a white matter mask.
        It has shape `data.shape[0:3]` and dtype=bool. Default: use the
        entire data array.
    sh_order_max : int, optional
        maximal spherical harmonics order ($l$).
    peak_thr : float, optional
        peak threshold, how large the second peak can be relative to the
        first peak in order to call it a single fiber population
        :footcite:p:`Tax2014`.
    init_fa : float, optional
        FA of the initial 'fat' response function (tensor).
    init_trace : float, optional
        trace of the initial 'fat' response function (tensor).
    iter : int, optional
        maximum number of iterations for calibration.
    convergence : float, optional
        convergence criterion, maximum relative change of SH coefficients.
    parallel : bool, optional
        Whether to use parallelization in peak-finding during the calibration
        procedure.
    num_processes : int, optional
        If `parallel` is True, the number of subprocesses to use
        (default multiprocessing.cpu_count()). If < 0 the maximal number of
        cores minus ``num_processes + 1`` is used (enter -1 to use as many
        cores as possible). 0 raises an error.
    sphere : Sphere, optional
        The sphere used for peak finding.

    Returns
    -------
    response : ndarray
        response function in SH coefficients

    Notes
    -----
    In CSD there is an important pre-processing step: the estimation of the
    fiber response function. Using an FA threshold is not a very robust
    method. It is dependent on the dataset (non-informed user subjectivity),
    and still depends on the diffusion tensor (FA and first eigenvector),
    which has low accuracy at high b-value. This function recursively
    calibrates the response function, for more information see
    :footcite:p:`Tax2014`.

    References
    ----------
    .. footbibliography::
    """
    S0 = 1.0
    evals = fa_trace_to_lambdas(init_fa, init_trace)
    res_obj = (evals, S0)

    if mask is None:
        data = data.reshape(-1, data.shape[-1])
    else:
        data = data[mask]

    n = np.arange(0, sh_order_max + 1, 2)
    where_dwi = lazy_index(~gtab.b0s_mask)
    response_p = np.ones(len(n))

    for _ in range(iter):
        r_sh_all = np.zeros(len(n))
        csd_model = ConstrainedSphericalDeconvModel(
            gtab, res_obj, sh_order_max=sh_order_max
        )

        csd_peaks = peaks_from_model(
            model=csd_model,
            data=data,
            sphere=sphere,
            relative_peak_threshold=peak_thr,
            min_separation_angle=25,
            parallel=parallel,
            num_processes=num_processes,
        )

        dirs = csd_peaks.peak_dirs
        vals = csd_peaks.peak_values
        single_peak_mask = (vals[:, 1] / vals[:, 0]) < peak_thr
        data = data[single_peak_mask]
        dirs = dirs[single_peak_mask]

        for num_vox in range(data.shape[0]):
            rotmat = vec2vec_rotmat(dirs[num_vox, 0], np.array([0, 0, 1]))

            rot_gradients = np.dot(rotmat, gtab.gradients.T).T

            x, y, z = rot_gradients[where_dwi].T
            r, theta, phi = cart2sphere(x, y, z)
            # for the gradient sphere
            B_dwi = real_sh_descoteaux_from_index(0, n, theta[:, None], phi[:, None])
            r_sh_all += np.linalg.lstsq(B_dwi, data[num_vox, where_dwi], rcond=-1)[0]

        response = r_sh_all / data.shape[0]
        res_obj = AxSymShResponse(data[:, gtab.b0s_mask].mean(), response)

        change = abs((response_p - response) / response_p)
        if all(change < convergence):
            break

        response_p = response

    return res_obj


def fa_trace_to_lambdas(fa=0.08, trace=0.0021):
    """Convert FA and trace values to the eigenvalues of a prolate tensor."""
    lambda1 = (trace / 3.0) * (1 + 2 * fa / (3 - 2 * fa**2) ** (1 / 2.0))
    lambda2 = (trace / 3.0) * (1 - fa / (3 - 2 * fa**2) ** (1 / 2.0))
    evals = np.array([lambda1, lambda2, lambda2])

    return evals
dipy-1.11.0/dipy/reconst/cti.py000066400000000000000000000462711476546756600163540ustar00rootroot00000000000000#!/usr/bin/python
"""Classes and functions for fitting the correlation tensor model"""

import numpy as np

from dipy.core.onetime import auto_attr
from dipy.reconst.base
import ReconstModel from dipy.reconst.dki import ( DiffusionKurtosisFit, ) from dipy.reconst.dti import ( MIN_POSITIVE_SIGNAL, decompose_tensor, from_lower_triangular, lower_triangular, ) from dipy.reconst.multi_voxel import multi_voxel_fit from dipy.reconst.utils import cti_design_matrix as design_matrix from dipy.testing.decorators import warning_for_keywords def from_qte_to_cti(C): """ Rescales the qte C elements to the C elements used in CTI. Parameters ---------- C: array(..., 21) Twenty-one elements of the covariance tensor in voigt notation plus some extra scaling factors. Returns ------- ccti: array(..., 21) Covariance Tensor Elements with no hidden factors. """ const = np.sqrt(2) ccti = np.zeros((21, 1)) ccti[0] = C[0] ccti[1] = C[1] ccti[2] = C[2] ccti[3] = C[3] / const ccti[4] = C[4] / const ccti[5] = C[5] / const ccti[6] = C[6] / 2 ccti[7] = C[7] / 2 ccti[8] = C[8] / 2 ccti[9] = C[9] / 2 ccti[10] = C[10] / 2 ccti[11] = C[11] / 2 ccti[12] = C[12] / 2 ccti[13] = C[13] / 2 ccti[14] = C[14] / 2 ccti[15] = C[15] / 2 ccti[16] = C[16] / 2 ccti[17] = C[17] / 2 ccti[18] = C[18] / (2 * const) ccti[19] = C[19] / (2 * const) ccti[20] = C[20] / (2 * const) return ccti def multi_gaussian_k_from_c(ccti, MD): """ Computes the multiple Gaussian diffusion kurtosis tensor from the covariance tensor. Parameters ---------- ccti: array(..., 21) Covariance Tensor Elements with no hidden factors. MD : ndarray Mean Diffusivity (MD) of a diffusion tensor. Returns ------- K: array (..., 15) Fifteen elements of the kurtosis tensor """ K = np.zeros((15, 1)) K[0] = 3 * ccti[0] / (MD**2) K[1] = 3 * ccti[1] / (MD**2) K[2] = 3 * ccti[2] / (MD**2) K[3] = 3 * ccti[8] / (MD**2) K[4] = 3 * ccti[7] / (MD**2) K[5] = 3 * ccti[11] / (MD**2) K[6] = 3 * ccti[9] / (MD**2) K[7] = 3 * ccti[13] / (MD**2) K[8] = 3 * ccti[12] / (MD**2) K[9] = (ccti[5] + 2 * ccti[17]) / (MD**2) K[10] = (ccti[4] + 2 * ccti[16]) / (MD**2) K[11] = (ccti[3] + 2 * ccti[15]) / (MD**2) K[12] = (ccti[6] + 2 * ccti[19]) / (MD**2) K[13] = (ccti[10] + 2 * ccti[20]) / (MD**2) K[14] = (ccti[14] + 2 * ccti[18]) / (MD**2) return K def split_cti_params(cti_params): r"""Splits CTI params into DTI, DKI, CTI portions. Extract the diffusion tensor eigenvalues, the diffusion tensor eigenvector matrix, and the 21 independent elements of the covariance tensor, and the 15 independent elements of the kurtosis tensor from the model parameters estimated from the CTI model Parameters ---------- cti_params: numpy.ndarray (..., 48) All parameters estimated from the correlation tensor model. Parameters are ordered as follows: 1. Three diffusion tensor's eigenvalues 2. Three lines of the eigenvector matrix each containing the first, second and third coordinates of the eigenvector 3. Fifteen elements of the kurtosis tensor 4. Twenty-One elements of the covariance tensor Returns ------- evals : array (..., 3) Eigenvalues from eigen decomposition of the tensor. evecs : array (..., 3) Associated eigenvectors from eigen decomposition of the tensor. Eigenvectors are columnar (e.g. evecs[:,j] is associated with evals[j]) kt : array (..., 15) Fifteen elements of the kurtosis tensor ct: array(..., 21) Twenty-one elements of the covariance tensor """ evals = cti_params[..., :3] evecs = cti_params[..., 3:12].reshape(cti_params.shape[:-1] + (3, 3)) kt = cti_params[..., 12:27] ct = cti_params[..., 27:48] return evals, evecs, kt, ct @warning_for_keywords() def cti_prediction(cti_params, gtab1, gtab2, *, S0=1): """Predict a signal given correlation tensor imaging parameters. 
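
    For each voxel the prediction follows the forward CTI model implemented
    below: the signal is evaluated as ``np.exp(np.dot(A, X))``, where ``A``
    is the CTI design matrix built from both gradient tables and ``X`` stacks
    the six diffusion tensor elements, the fifteen kurtosis tensor elements
    scaled by the squared mean diffusivity, the twenty-one covariance tensor
    elements, and ``-np.log(S0)``.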
Parameters ---------- cti_params: numpy.ndarray (..., 48) All parameters estimated from the correlation tensor model. Parameters are ordered as follows: 1. Three diffusion tensor's eigenvalues 2. Three lines of the eigenvector matrix each containing the first, second and third coordinates of the eigenvector 3. Fifteen elements of the kurtosis tensor 4. Twenty-One elements of the covariance tensor gtab1: dipy.core.gradients.GradientTable A GradientTable class instance for first DDE diffusion epoch gtab2: dipy.core.gradients.GradientTable A GradientTable class instance for second DDE diffusion epoch S0 : float or ndarray, optional The non diffusion-weighted signal in every voxel, or across all voxels. Default: 1 Returns ------- S : ndarray Simulated signal based on the CTI model """ evals, evecs, kt, ct = split_cti_params(cti_params) A = design_matrix(gtab1, gtab2) fevals = evals.reshape((-1, evals.shape[-1])) fevecs = evecs.reshape((-1,) + evecs.shape[-2:]) fct = ct.reshape((-1, ct.shape[-1])) fkt = kt.reshape((-1, kt.shape[-1])) pred_sig = np.zeros((len(fevals), len(gtab1.bvals))) if isinstance(S0, np.ndarray): S0_vol = np.reshape(S0, (len(fevals))) else: S0_vol = S0 for v in range(len(pred_sig)): DT = np.dot(np.dot(fevecs[v], np.diag(fevals[v])), fevecs[v].T) dt = lower_triangular(DT) MD = (dt[0] + dt[2] + dt[5]) / 3 if isinstance(S0_vol, np.ndarray): this_S0 = S0_vol[v] else: this_S0 = S0_vol X = np.concatenate( (dt, fkt[v] * MD * MD, fct[v], np.array([-np.log(this_S0)])), axis=0 ) pred_sig[v] = np.exp(np.dot(A, X)) pred_sig = pred_sig.reshape(cti_params.shape[:-1] + (pred_sig.shape[-1],)) return pred_sig class CorrelationTensorModel(ReconstModel): """Class for the Correlation Tensor Model""" def __init__(self, gtab1, gtab2, *args, fit_method="WLS", **kwargs): """Correlation Tensor Imaging Model. See :footcite:p:`NetoHenriques2020` for further details about the model. Parameters ---------- gtab1: dipy.core.gradients.GradientTable A GradientTable class instance for first DDE diffusion epoch gtab2: dipy.core.gradients.GradientTable A GradientTable class instance for second DDE diffusion epoch fit_method : str or callable, optional Fitting method. *args Variable length argument list passed to the :func:`fit` method. **kwargs Arbitrary keyword arguments passed to the :func:`fit` method. References ---------- .. footbibliography:: """ self.gtab1 = gtab1 self.gtab2 = gtab2 self.args = args self.kwargs = kwargs self.common_fit_method = not callable(fit_method) if self.common_fit_method: try: self.fit_method = common_fit_methods[fit_method] except KeyError as e: msg = '"' + str(fit_method) + '" is not a known fit method. The' msg += " fit method should either be a function or one of the" msg += " common fit methods." raise ValueError(msg) from e self.args = args self.kwargs = kwargs self.min_signal = self.kwargs.pop("min_signal", None) if self.min_signal is None: self.min_signal = MIN_POSITIVE_SIGNAL elif self.min_signal <= 0: msg = "The `min_signal` key-word argument needs to be strictly" msg += " positive." raise ValueError(msg) self.design_matrix = design_matrix(self.gtab1, self.gtab2) self.inverse_design_matrix = np.linalg.pinv(self.design_matrix) tol = 1e-6 self.min_diffusivity = tol / -self.design_matrix.min() self.weights = fit_method in {"WLS", "WLLS", "UWLLS"} @multi_voxel_fit @warning_for_keywords() def fit(self, data, *, mask=None): """Fit method of the CTI model class. Parameters ---------- data : array 4D array of dMRI data. 
mask : array, optional A boolean array of the same shape as data.shape[-1]. It designates which coordinates in the data should be analyzed. """ data_thres = np.maximum(data, self.min_signal) params = self.fit_method( self.design_matrix, data_thres, self.inverse_design_matrix, weights=self.weights, *self.args, **self.kwargs, ) return CorrelationTensorFit(self, params) @warning_for_keywords() def predict(self, cti_params, *, S0=1): """Predict a signal for the CTI model class instance given parameters Parameters ---------- cti_params: numpy.ndarray (..., 48) All parameters estimated from the correlation tensor model. Parameters are ordered as follows: 1. Three diffusion tensor's eigenvalues 2. Three lines of the eigenvector matrix each containing the first, second and third coordinates of the eigenvector 3. Fifteen elements of the kurtosis tensor 4. Twenty-One elements of the covariance tensor gtab1: dipy.core.gradients.GradientTable A GradientTable class instance for first DDE diffusion epoch gtab2: dipy.core.gradients.GradientTable A GradientTable class instance for second DDE diffusion epoch S0 : float or ndarray, optional The non diffusion-weighted signal in every voxel, or across all voxels. Returns ------- S : numpy.ndarray Predicted signal based on the CTI model """ return cti_prediction(cti_params, self.gtab1, self.gtab2, S0=S0) class CorrelationTensorFit(DiffusionKurtosisFit): """Class for fitting the Correlation Tensor Model""" def __init__(self, model, model_params): """Initialize a CorrelationTensorFit class instance. Parameters ---------- model : CorrelationTensorModel Class instance Class instance containing the Correlation Tensor Model for the fit model_params : ndarray (x, y, z, 48) or (n, 48) All parameters estimated from the diffusion kurtosis model. Parameters are ordered as follows: 1) Three diffusion tensor's eigenvalues 2) Three lines of the eigenvector matrix each containing the first, second and third coordinates of the eigenvector 3) Fifteen elements of the kurtosis tensor 4) Twenty One elements of the covariance tensor """ DiffusionKurtosisFit.__init__(self, model, model_params) @property def ct(self): """ Returns the 21 independent elements of the covariance tensor as an array """ return self.model_params[..., 27:48] @warning_for_keywords() def predict(self, gtab1, gtab2, *, S0=1): """Given a CTI model fit, predict the signal on the vertices of a gradient table Parameters ---------- gtab1: dipy.core.gradients.GradientTable A GradientTable class instance for first DDE diffusion epoch gtab2: dipy.core.gradients.GradientTable A GradientTable class instance for second DDE diffusion epoch S0 : float or ndarray, optional The non diffusion-weighted signal in every voxel, or across all voxels. Default: 1 Returns ------- S : numpy.ndarray Predicted signal based on the CTI model """ return cti_prediction(self.model_params, gtab1, gtab2, S0=S0) @property def K_aniso(self): r"""Returns the anisotropic Source of Kurtosis ($K_{aniso}$) Notes ----- The $K_{aniso}$ is defined as :footcite:p:`NetoHenriques2020`: .. math:: \[K_{aniso} = \frac{6}{5} \cdot \frac{\langle V_{\lambda}(D_c) \rangle}{\overline{D}^2}\] where $K_{aniso}$ is the anisotropic kurtosis, $\langle V_{\lambda}(D_c) \rangle$ represents the mean of the variance of eigenvalues of the diffusion tensor, $\overline{D}$ is the mean of the diffusion tensor. References ---------- .. 
footbibliography::
        """
        C = self.ct
        D = self.quadratic_form
        Variance = (
            2
            / 9
            * (
                C[..., 0]
                + D[..., 0, 0] ** 2
                + C[..., 1]
                + D[..., 1, 1] ** 2
                + C[..., 2]
                + D[..., 2, 2] ** 2
                - C[..., 5]
                - D[..., 0, 0] * D[..., 1, 1]
                - C[..., 4]
                - D[..., 0, 0] * D[..., 2, 2]
                - C[..., 3]
                - D[..., 1, 1] * D[..., 2, 2]
                + 3
                * (
                    C[..., 17]
                    + D[..., 0, 1] ** 2
                    + C[..., 16]
                    + D[..., 0, 2] ** 2
                    + C[..., 15]
                    + D[..., 1, 2] ** 2
                )
            )
        )
        mean_D = np.trace(D) / 3
        if mean_D == 0:
            K_aniso = 0
        else:
            K_aniso = (6 / 5) * (Variance / (mean_D**2))
        return K_aniso

    @property
    def K_iso(self):
        r"""Returns the isotropic Source of Kurtosis ($K_{iso}$)

        Notes
        -----
        The $K_{iso}$ is defined as:

        .. math::

            K_{\text{iso}} = 3 \cdot \frac{V(\overline{D}^c)}{\overline{D}^2}

        where: $K_{\text{iso}}$ is the isotropic kurtosis,
        $V(\overline{D}^c)$ represents the variance across tissue
        compartments of the compartmental mean diffusivity
        $\overline{D}^c$, and $\overline{D}$ is the mean of the diffusion
        tensor.
        """
        C = self.ct
        mean_D = self.md
        Variance = (
            1
            / 9
            * (
                C[..., 0]
                + C[..., 1]
                + C[..., 2]
                + 2 * C[..., 5]
                + 2 * C[..., 4]
                + 2 * C[..., 3]
            )
        )
        if mean_D == 0:
            K_iso = 0
        else:
            K_iso = 3 * (Variance / (mean_D**2))
        return K_iso

    @auto_attr
    def K_total(self):
        r"""
        Returns the total excess kurtosis.

        Notes
        -----
        $K_{total}$ is defined as:

        .. math::

            \Psi = \frac{2}{5} \cdot \frac{D_{11}^2 + D_{22}^2 + D_{33}^2
                + 2D_{12}^2 + 2D_{13}^2 + 2D_{23}^2}{\overline{D}^2}
                - \frac{6}{5}

            \overline{W} = \frac{1}{5} \cdot (W_{1111} + W_{2222} + W_{3333}
                + 2W_{1122} + 2W_{1133} + 2W_{2233})

        where $\Psi$ is a variable representing a part of the total
        excess kurtosis, $D_{ij}$ are elements of the diffusion tensor,
        $\overline{D}$ is the mean of the diffusion tensor, $\overline{W}$
        is the mean kurtosis, and $W_{ijkl}$ are elements of the kurtosis
        tensor.
        """
        mean_K = self.mkt()
        D = self.quadratic_form
        mean_D = self.md
        if mean_D == 0:
            psi = 0
        else:
            psi = 2 / 5 * (
                (
                    D[..., 0, 0] ** 2
                    + D[..., 1, 1] ** 2
                    + D[..., 2, 2] ** 2
                    + 2 * D[..., 0, 1] ** 2
                    + 2 * D[..., 0, 2] ** 2
                    + 2 * D[..., 1, 2] ** 2
                )
                / (mean_D**2)
            ) - (6 / 5)
        return mean_K + psi

    @property
    def K_micro(self):
        r"""Returns Microscopic Source of Kurtosis."""
        K_total = self.K_total
        K_aniso = self.K_aniso
        K_iso = self.K_iso
        micro_K = K_total - K_aniso - K_iso
        return micro_K


@warning_for_keywords()
def params_to_cti_params(result, *, min_diffusivity=0):
    # Extracting the diffusion tensor parameters from solution
    DT_elements = result[:6]
    evals, evecs = decompose_tensor(
        from_lower_triangular(DT_elements), min_diffusivity=min_diffusivity
    )

    # Extracting kurtosis tensor parameters from solution
    MD_square = evals.mean(0) ** 2
    KT_elements = result[6:21] / MD_square if MD_square else 0.0 * result[6:21]

    # Extracting correlation tensor parameters from solution
    CT_elements = result[21:42]

    # Write output
    cti_params = np.concatenate(
        (evals, evecs[0], evecs[1], evecs[2], KT_elements, CT_elements), axis=0
    )

    return cti_params


@warning_for_keywords()
def ls_fit_cti(
    design_matrix, data, inverse_design_matrix, *, weights=True, min_diffusivity=0
):
    r"""Compute the diffusion kurtosis and covariance tensors using an
    ordinary or weighted linear least squares approach

    Parameters
    ----------
    design_matrix : array (g, 43)
        Design matrix holding the covariants used to solve for the regression
        coefficients.
    data : array (g)
        Data or response variables holding the data.
    inverse_design_matrix : array (43, g)
        Inverse of the design matrix.
    weights : bool, optional
        Parameter indicating whether weights are used. If True, a weighted
        linear least squares fit is performed, with weights derived from the
        signals predicted by an initial ordinary least squares solution;
        otherwise the ordinary least squares solution is returned.
min_diffusivity : float, optional Because negative eigenvalues are not physical and small eigenvalues, much smaller than the diffusion weighting, cause quite a lot of noise in metrics such as fa, diffusivity values smaller than `min_diffusivity` are replaced with `min_diffusivity`. Returns ------- cti_params : array (48) All parameters estimated from the diffusion kurtosis model for all N voxels. Parameters are ordered as follows: 1) Three diffusion tensor eigenvalues. 2) Three blocks of three elements, containing the first second and third coordinates of the diffusion tensor eigenvectors. 3) Fifteen elements of the kurtosis tensor. 4) Twenty One elements of the covariance tensor. """ A = design_matrix y = np.log(data) result = np.dot(inverse_design_matrix, y) if weights: W = np.diag(np.exp(2 * np.dot(A, result))) AT_W = np.dot(A.T, W) inv_AT_W_A = np.linalg.pinv(np.dot(AT_W, A)) AT_W_LS = np.dot(AT_W, y) result = np.dot(inv_AT_W_A, AT_W_LS) cti_params = params_to_cti_params(result, min_diffusivity=min_diffusivity) return cti_params common_fit_methods = { "WLS": ls_fit_cti, "OLS": ls_fit_cti, "UWLLS": ls_fit_cti, "ULLS": ls_fit_cti, "WLLS": ls_fit_cti, "OLLS": ls_fit_cti, } dipy-1.11.0/dipy/reconst/dirspeed.pyx000066400000000000000000000170241476546756600175600ustar00rootroot00000000000000# cython: boundscheck=False # cython: cdivision=True # cython: initializedcheck=False # cython: wraparound=False # cython: nonecheck=False # cython: overflowcheck=False import numpy as np cimport numpy as cnp import cython from libc.stdlib cimport malloc, free from libc.string cimport memcpy from libc.math cimport cos, M_PI, INFINITY from dipy.core.math cimport f_max, f_array_min from dipy.reconst.recspeed cimport local_maxima_c, search_descending_c, remove_similar_vertices_c cdef cnp.uint16_t peak_directions_c( double[:] odf, double[:, ::1] sphere_vertices, cnp.uint16_t[:, ::1] sphere_edges, double relative_peak_threshold, double min_separation_angle, bint is_symmetric, double[:, ::1] out_directions, double[::1] out_values, cnp.npy_intp[::1] out_indices, double[:, :] unique_vertices, cnp.uint16_t[:] mapping, cnp.uint16_t[:] index ) noexcept nogil: """Get the directions of odf peaks. Peaks are defined as points on the odf that are greater than at least one neighbor and greater than or equal to all neighbors. Peaks are sorted in descending order by their values then filtered based on their relative size and spacing on the sphere. An odf may have 0 peaks, for example if the odf is perfectly isotropic. Parameters ---------- odf : 1d ndarray The odf function evaluated on the vertices of `sphere` sphere_vertices : (N,3) ndarray The sphere vertices providing discrete directions for evaluation. sphere_edges : (N, 2) ndarray relative_peak_threshold : float in [0., 1.] Only peaks greater than ``min + relative_peak_threshold * scale`` are kept, where ``min = max(0, odf.min())`` and ``scale = odf.max() - min``. min_separation_angle : float in [0, 90] The minimum distance between directions. If two peaks are too close only the larger of the two is returned. is_symmetric : bool, optional If True, v is considered equal to -v. 
out_directions : (N, 3) ndarray The directions of the peaks, N vertices for sphere, one for each peak out_values : (N,) ndarray The peak values out_indices: (N,) ndarray The peak indices of the directions on the sphere unique_vertices : (N, 3) ndarray The unique vertices mapping : (N,) ndarray For each element ``vertices[i]`` ($i \in 0..N-1$), the index $j$ to a vertex in `unique_vertices` that is less than `theta` degrees from ``vertices[i]``. index : (N,) ndarray `index` gives the reverse of `mapping`. For each element ``unique_vertices[j]`` ($j \in 0..M-1$), the index $i$ to a vertex in `vertices` that is less than `theta` degrees from ``unique_vertices[j]``. If there is more than one element of `vertices` that is less than theta degrees from `unique_vertices[j]`, return the first (lowest index) matching value. Returns ------- n_unique : int The number of unique peaks Notes ----- If the odf has any negative values, they will be clipped to zeros. """ cdef cnp.npy_intp i, n, idx cdef long count cdef double odf_min cdef cnp.npy_intp num_vertices = sphere_vertices.shape[0] cdef double* tmp_buffer cdef cnp.uint16_t n_unique = 0 count = local_maxima_c(odf, sphere_edges, out_values, out_indices) # If there is only one peak return if count == 0: return 0 elif count == 1: return 1 odf_min = f_array_min[double](&odf[0], num_vertices) odf_min = f_max[double](odf_min, 0.0) # because of the relative threshold this algorithm will give the same peaks # as if we divide (values - odf_min) with (odf_max - odf_min) or not so # here we skip the division to increase speed tmp_buffer = malloc(count * sizeof(double)) memcpy(tmp_buffer, &out_values[0], count * sizeof(double)) for i in range(count): tmp_buffer[i] -= odf_min # Remove small peaks n = search_descending_c[cython.double](tmp_buffer, count, relative_peak_threshold) for i in range(n): idx = out_indices[i] out_directions[i, :] = sphere_vertices[idx, :] # Remove peaks too close together remove_similar_vertices_c( out_directions[:n, :], min_separation_angle, is_symmetric, 0, # return_mapping 1, # return_index unique_vertices[:n, :], mapping[:n], index[:n], &n_unique ) # Update final results for i in range(n_unique): idx = index[i] out_directions[i, :] = unique_vertices[i, :] out_values[i] = out_values[idx] out_indices[i] = out_indices[idx] free(tmp_buffer) return n_unique def peak_directions( double[:] odf, sphere, *, relative_peak_threshold=0.5, min_separation_angle=25, bint is_symmetric=True, ): """Get the directions of odf peaks. Peaks are defined as points on the odf that are greater than at least one neighbor and greater than or equal to all neighbors. Peaks are sorted in descending order by their values then filtered based on their relative size and spacing on the sphere. An odf may have 0 peaks, for example if the odf is perfectly isotropic. Parameters ---------- odf : 1d ndarray The odf function evaluated on the vertices of `sphere` sphere : Sphere The Sphere providing discrete directions for evaluation. relative_peak_threshold : float in [0., 1.] Only peaks greater than ``min + relative_peak_threshold * scale`` are kept, where ``min = max(0, odf.min())`` and ``scale = odf.max() - min``. min_separation_angle : float in [0, 90] The minimum distance between directions. If two peaks are too close only the larger of the two is returned. is_symmetric : bool, optional If True, v is considered equal to -v. 
Returns ------- directions : (N, 3) ndarray N vertices for sphere, one for each peak values : (N,) ndarray peak values indices : (N,) ndarray peak indices of the directions on the sphere Notes ----- If the odf has any negative values, they will be clipped to zeros. """ cdef double[:, ::1] vertices = sphere.vertices cdef cnp.uint16_t[:, ::1] edges = sphere.edges cdef cnp.npy_intp num_vertices = sphere.vertices.shape[0] cdef cnp.uint16_t n_unique = 0 cdef double[:, ::1] directions_out = np.zeros((num_vertices, 3), dtype=np.float64) cdef double[::1] values_out = np.zeros(num_vertices, dtype=np.float64) cdef cnp.npy_intp[::1] indices_out = np.zeros(num_vertices, dtype=np.intp) cdef cnp.float64_t[:, ::1] unique_vertices = np.empty((num_vertices, 3), dtype=np.float64) cdef cnp.uint16_t[::1] mapping = None cdef cnp.uint16_t[::1] index = np.empty(num_vertices, dtype=np.uint16) n_unique = peak_directions_c( odf, vertices, edges, relative_peak_threshold, min_separation_angle, is_symmetric, directions_out, values_out, indices_out, unique_vertices, mapping, index ) if n_unique == 0: return np.zeros((0, 3)), np.zeros(0), np.zeros(0, dtype=int) elif n_unique == 1: return ( sphere.vertices[np.asarray(indices_out[:n_unique]).astype(int)], np.asarray(values_out[:n_unique]), np.asarray(indices_out[:n_unique]), ) return ( np.asarray(directions_out[:n_unique, :]), np.asarray(values_out[:n_unique]), np.asarray(indices_out[:n_unique]), )dipy-1.11.0/dipy/reconst/dki.py000066400000000000000000003117731476546756600163500ustar00rootroot00000000000000#!/usr/bin/python """ Classes and functions for fitting the diffusion kurtosis model. """ import warnings import numpy as np import scipy.optimize as opt from dipy.core.geometry import cart2sphere, perpendicular_directions, sphere2cart from dipy.core.gradients import check_multi_b from dipy.core.ndindex import ndindex from dipy.core.optimize import PositiveDefiniteLeastSquares import dipy.core.sphere as dps from dipy.data import get_fnames, get_sphere, load_sdp_constraints from dipy.reconst.base import ReconstModel from dipy.reconst.dti import ( MIN_POSITIVE_SIGNAL, TensorFit, decompose_tensor, from_lower_triangular, iterative_fit_tensor, lower_triangular, mean_diffusivity, nlls_fit_tensor, radial_diffusivity, restore_fit_tensor, robust_fit_tensor_nlls, robust_fit_tensor_wls, ) from dipy.reconst.multi_voxel import multi_voxel_fit from dipy.reconst.recspeed import local_maxima from dipy.reconst.utils import dki_design_matrix as design_matrix from dipy.reconst.vec_val_sum import vec_val_vect from dipy.reconst.weights_method import ( weights_method_wls_m_est, ) from dipy.testing.decorators import warning_for_keywords @warning_for_keywords() def _positive_evals(L1, L2, L3, *, er=2e-7): """Helper function that identifies which voxels in an array have all eigenvalues significantly larger than zero Parameters ---------- L1 : ndarray First independent variable of the integral. L2 : ndarray Second independent variable of the integral. L3 : ndarray Third independent variable of the integral. er : float, optional A eigenvalues is classified as larger than zero if it is larger than er Returns ------- ind : boolean (n,) Array that marks the voxels that have all eigenvalues are larger than zero. """ ind = np.logical_and(L1 > er, np.logical_and(L2 > er, L3 > er)) return ind @warning_for_keywords() def carlson_rf(x, y, z, *, errtol=3e-4): r"""Compute the Carlson's incomplete elliptic integral of the first kind. 
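
    The integral is evaluated iteratively with Carlson's duplication
    algorithm, refining the arguments until the relative error falls below
    ``errtol``.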
Carlson's incomplete elliptic integral of the first kind is defined as :footcite:p:`Carlson1995`: .. math:: R_F = \frac{1}{2} \int_{0}^{\infty} \left [(t+x)(t+y)(t+z) \right ] ^{-\frac{1}{2}}dt Parameters ---------- x : ndarray First independent variable of the integral. y : ndarray Second independent variable of the integral. z : ndarray Third independent variable of the integral. errtol : float Error tolerance. Integral is computed with relative error less in magnitude than the defined value Returns ------- RF : ndarray Value of the incomplete first order elliptic integral Notes ----- x, y, and z have to be nonnegative and at most one of them is zero. References ---------- .. footbibliography:: """ xn = x.copy() yn = y.copy() zn = z.copy() An = (xn + yn + zn) / 3.0 Q = (3.0 * errtol) ** (-1 / 6.0) * np.max( np.abs([An - xn, An - yn, An - zn]), axis=0 ) # Convergence has to be done voxel by voxel index = ndindex(x.shape) for v in index: n = 0 # Convergence condition while 4.0 ** (-n) * Q[v] > abs(An[v]): xnroot = np.sqrt(xn[v]) ynroot = np.sqrt(yn[v]) znroot = np.sqrt(zn[v]) lamda = xnroot * (ynroot + znroot) + ynroot * znroot n = n + 1 xn[v] = (xn[v] + lamda) * 0.250 yn[v] = (yn[v] + lamda) * 0.250 zn[v] = (zn[v] + lamda) * 0.250 An[v] = (An[v] + lamda) * 0.250 # post convergence calculation X = 1.0 - xn / An Y = 1.0 - yn / An Z = -X - Y E2 = X * Y - Z * Z E3 = X * Y * Z RF = An ** (-1 / 2.0) * ( 1 - E2 / 10.0 + E3 / 14.0 + (E2**2) / 24.0 - 3 / 44.0 * E2 * E3 ) return RF @warning_for_keywords() def carlson_rd(x, y, z, *, errtol=1e-4): r"""Compute the Carlson's incomplete elliptic integral of the second kind. Carlson's incomplete elliptic integral of the second kind is defined as :footcite:p:`Carlson1995`: .. math:: R_D = \frac{3}{2} \int_{0}^{\infty} (t+x)^{-\frac{1}{2}} (t+y)^{-\frac{1}{2}}(t+z) ^{-\frac{3}{2}} Parameters ---------- x : ndarray First independent variable of the integral. y : ndarray Second independent variable of the integral. z : ndarray Third independent variable of the integral. errtol : float Error tolerance. Integral is computed with relative error less in magnitude than the defined value Returns ------- RD : ndarray Value of the incomplete second order elliptic integral Notes ----- x, y, and z have to be nonnegative and at most x or y is zero. References ---------- .. 
footbibliography:: """ xn = x.copy() yn = y.copy() zn = z.copy() A0 = (xn + yn + 3.0 * zn) / 5.0 An = A0.copy() Q = (errtol / 4.0) ** (-1 / 6.0) * np.max( np.abs([An - xn, An - yn, An - zn]), axis=0 ) sum_term = np.zeros(x.shape, dtype=x.dtype) n = np.zeros(x.shape) # Convergence has to be done voxel by voxel index = ndindex(x.shape) for v in index: # Convergence condition while 4.0 ** (-n[v]) * Q[v] > abs(An[v]): xnroot = np.sqrt(xn[v]) ynroot = np.sqrt(yn[v]) znroot = np.sqrt(zn[v]) lamda = xnroot * (ynroot + znroot) + ynroot * znroot sum_term[v] = sum_term[v] + 4.0 ** (-n[v]) / (znroot * (zn[v] + lamda)) n[v] = n[v] + 1 xn[v] = (xn[v] + lamda) * 0.250 yn[v] = (yn[v] + lamda) * 0.250 zn[v] = (zn[v] + lamda) * 0.250 An[v] = (An[v] + lamda) * 0.250 # post convergence calculation X = (A0 - x) / (4.0**n * An) Y = (A0 - y) / (4.0**n * An) Z = -(X + Y) / 3.0 E2 = X * Y - 6.0 * Z * Z E3 = (3.0 * X * Y - 8.0 * Z * Z) * Z E4 = 3.0 * (X * Y - Z * Z) * Z**2.0 E5 = X * Y * Z**3.0 RD = ( 4 ** (-n) * An ** (-3 / 2.0) * ( 1 - 3 / 14.0 * E2 + 1 / 6.0 * E3 + 9 / 88.0 * (E2**2) - 3 / 22.0 * E4 - 9 / 52.0 * E2 * E3 + 3 / 26.0 * E5 ) + 3 * sum_term ) return RD def _F1m(a, b, c): r""" Helper function that computes function $F_1$ which is required to compute the analytical solution of the Mean kurtosis Parameters ---------- a : ndarray Array containing the values of parameter $\lambda_1$ of function $F_1$ b : ndarray Array containing the values of parameter $\lambda_2$ of function $F_1$ c : ndarray Array containing the values of parameter $\lambda_3$ of function $F_1$ Returns ------- F1 : ndarray Value of the function $F_1$ for all elements of the arrays a, b, and c Notes ----- Function $F_1$ is defined as :footcite:p:`Tabesh2011`: .. math:: F_1(\lambda_1,\lambda_2,\lambda_3)= \frac{(\lambda_1+\lambda_2+\lambda_3)^2} {18(\lambda_1-\lambda_2)(\lambda_1-\lambda_3)} [\frac{\sqrt{\lambda_2\lambda_3}}{\lambda_1} R_F(\frac{\lambda_1}{\lambda_2},\frac{\lambda_1}{\lambda_3},1)+\\ \frac{3\lambda_1^2-\lambda_1\lambda_2-\lambda_2\lambda_3- \lambda_1\lambda_3} {3\lambda_1 \sqrt{\lambda_2 \lambda_3}} R_D(\frac{\lambda_1}{\lambda_2},\frac{\lambda_1}{\lambda_3},1)-1 ] References ---------- .. footbibliography:: """ # Eigenvalues are considered equal if they are not 2.5% different to each # other. This value is adjusted according to the analysis reported in: # https://gsoc2015dipydki.blogspot.com/2015/08/rnh-post-13-start-wrapping-up-test.html er = 2.5e-2 # Initialize F1 F1 = np.zeros(a.shape) # Only computes F1 in voxels that have all eigenvalues larger than zero cond0 = _positive_evals(a, b, c) # Apply formula for non problematic plausible cases, i.e. 
a!=b and a!=c cond1 = np.logical_and( cond0, np.logical_and(abs(a - b) >= a * er, abs(a - c) >= a * er) ) if np.sum(cond1) != 0: L1 = a[cond1] L2 = b[cond1] L3 = c[cond1] RFm = carlson_rf(L1 / L2, L1 / L3, np.ones(len(L1))) RDm = carlson_rd(L1 / L2, L1 / L3, np.ones(len(L1))) F1[cond1] = ( ((L1 + L2 + L3) ** 2) / (18 * (L1 - L2) * (L1 - L3)) * ( np.sqrt(L2 * L3) / L1 * RFm + (3 * L1**2 - L1 * L2 - L1 * L3 - L2 * L3) / (3 * L1 * np.sqrt(L2 * L3)) * RDm - 1 ) ) # Resolve possible singularity a==b cond2 = np.logical_and( cond0, np.logical_and(abs(a - b) < a * er, abs(a - c) > a * er) ) if np.sum(cond2) != 0: L1 = (a[cond2] + b[cond2]) / 2.0 L3 = c[cond2] F1[cond2] = _F2m(L3, L1, L1) / 2.0 # Resolve possible singularity a==c cond3 = np.logical_and( cond0, np.logical_and(abs(a - c) < a * er, abs(a - b) > a * er) ) if np.sum(cond3) != 0: L1 = (a[cond3] + c[cond3]) / 2.0 L2 = b[cond3] F1[cond3] = _F2m(L2, L1, L1) / 2 # Resolve possible singularity a==b and a==c cond4 = np.logical_and( cond0, np.logical_and(abs(a - c) < a * er, abs(a - b) < a * er) ) if np.sum(cond4) != 0: F1[cond4] = 1 / 5.0 return F1 def _F2m(a, b, c): r""" Helper function that computes function $F_2$ which is required to compute the analytical solution of the Mean kurtosis Parameters ---------- a : ndarray Array containing the values of parameter $\lambda_1$ of function $F_2$ b : ndarray Array containing the values of parameter $\lambda_2$ of function $F_2$ c : ndarray Array containing the values of parameter $\lambda_3$ of function $F_2$ Returns ------- F2 : ndarray Value of the function $F_2$ for all elements of the arrays a, b, and c Notes ----- Function $F_2$ is defined as :footcite:p:`Tabesh2011`: .. math:: F_2(\lambda_1,\lambda_2,\lambda_3)= \frac{(\lambda_1+\lambda_2+\lambda_3)^2} {3(\lambda_2-\lambda_3)^2} [\frac{\lambda_2+\lambda_3}{\sqrt{\lambda_2\lambda_3}} R_F(\frac{\lambda_1}{\lambda_2},\frac{\lambda_1}{\lambda_3},1)+\\ \frac{2\lambda_1-\lambda_2-\lambda_3}{3\sqrt{\lambda_2 \lambda_3}} R_D(\frac{\lambda_1}{\lambda_2},\frac{\lambda_1}{\lambda_3},1)-2] References ---------- .. footbibliography:: """ # Eigenvalues are considered equal if they are not 2.5% different to each # other. This value is adjusted according to the analysis reported in: # https://gsoc2015dipydki.blogspot.com/2015/08/rnh-post-13-start-wrapping-up-test.html er = 2.5e-2 # Initialize F2 F2 = np.zeros(a.shape) # Only computes F2 in voxels that have all eigenvalues larger than zero cond0 = _positive_evals(a, b, c) # Apply formula for non problematic plausible cases, i.e. 
b!=c cond1 = np.logical_and(cond0, (abs(b - c) > b * er)) if np.sum(cond1) != 0: L1 = a[cond1] L2 = b[cond1] L3 = c[cond1] RF = carlson_rf(L1 / L2, L1 / L3, np.ones(len(L1))) RD = carlson_rd(L1 / L2, L1 / L3, np.ones(len(L1))) F2[cond1] = (((L1 + L2 + L3) ** 2) / (3.0 * (L2 - L3) ** 2)) * ( ((L2 + L3) / (np.sqrt(L2 * L3))) * RF + ((2.0 * L1 - L2 - L3) / (3.0 * np.sqrt(L2 * L3))) * RD - 2.0 ) # Resolve possible singularity b==c cond2 = np.logical_and( cond0, np.logical_and(abs(b - c) < b * er, abs(a - b) > b * er) ) if np.sum(cond2) != 0: L1 = a[cond2] L3 = (c[cond2] + b[cond2]) / 2.0 # Compute alfa :footcite:p:`Tabesh2011` x = 1.0 - (L1 / L3) alpha = np.zeros(len(L1)) for i in range(len(x)): if x[i] > 0: alpha[i] = 1.0 / np.sqrt(x[i]) * np.arctanh(np.sqrt(x[i])) else: alpha[i] = 1.0 / np.sqrt(-x[i]) * np.arctan(np.sqrt(-x[i])) F2[cond2] = ( 6.0 * ((L1 + 2.0 * L3) ** 2) / (144.0 * L3**2 * (L1 - L3) ** 2) * (L3 * (L1 + 2.0 * L3) + L1 * (L1 - 4.0 * L3) * alpha) ) # Resolve possible singularity a==b and a==c cond3 = np.logical_and( cond0, np.logical_and(abs(b - c) < b * er, abs(a - b) < b * er) ) if np.sum(cond3) != 0: F2[cond3] = 6 / 15.0 return F2 @warning_for_keywords() def directional_diffusion(dt, V, *, min_diffusivity=0): r"""Compute apparent diffusion coefficient (adc). Calculate the apparent diffusion coefficient (adc) in each direction of a sphere for a single voxel :footcite:t:`NetoHenriques2015`. Parameters ---------- dt : array (6,) elements of the diffusion tensor of the voxel. V : array (g, 3) g directions of a Sphere in Cartesian coordinates min_diffusivity : float, optional Because negative eigenvalues are not physical and small eigenvalues cause quite a lot of noise in diffusion-based metrics, diffusivity values smaller than `min_diffusivity` are replaced with `min_diffusivity`. Default = 0 Returns ------- adc : ndarray (g,) Apparent diffusion coefficient (ADC) in all g directions of a sphere for a single voxel. References ---------- .. footbibliography:: """ adc = ( V[:, 0] * V[:, 0] * dt[0] + 2 * V[:, 0] * V[:, 1] * dt[1] + V[:, 1] * V[:, 1] * dt[2] + 2 * V[:, 0] * V[:, 2] * dt[3] + 2 * V[:, 1] * V[:, 2] * dt[4] + V[:, 2] * V[:, 2] * dt[5] ) if min_diffusivity is not None: adc = adc.clip(min=min_diffusivity) return adc def directional_diffusion_variance(kt, V): r"""Calculate the apparent diffusion variance (adv) in each direction of a sphere for a single voxel See :footcite:p:`Jensen2005`, :footcite:p:`NetoHenriques2015`, and :footcite:p:`NetoHenriques2021a` for further details about the method. Parameters ---------- kt : array (15,) elements of the kurtosis tensor of the voxel. V : array (g, 3) g directions of a Sphere in Cartesian coordinates. Returns ------- adv : ndarray (g,) Apparent diffusion variance (adv) in all g directions of a sphere for a single voxel. References ---------- .. 
footbibliography:: """ adv = ( V[:, 0] * V[:, 0] * V[:, 0] * V[:, 0] * kt[0] + V[:, 1] * V[:, 1] * V[:, 1] * V[:, 1] * kt[1] + V[:, 2] * V[:, 2] * V[:, 2] * V[:, 2] * kt[2] + 4 * V[:, 0] * V[:, 0] * V[:, 0] * V[:, 1] * kt[3] + 4 * V[:, 0] * V[:, 0] * V[:, 0] * V[:, 2] * kt[4] + 4 * V[:, 0] * V[:, 1] * V[:, 1] * V[:, 1] * kt[5] + 4 * V[:, 1] * V[:, 1] * V[:, 1] * V[:, 2] * kt[6] + 4 * V[:, 0] * V[:, 2] * V[:, 2] * V[:, 2] * kt[7] + 4 * V[:, 1] * V[:, 2] * V[:, 2] * V[:, 2] * kt[8] + 6 * V[:, 0] * V[:, 0] * V[:, 1] * V[:, 1] * kt[9] + 6 * V[:, 0] * V[:, 0] * V[:, 2] * V[:, 2] * kt[10] + 6 * V[:, 1] * V[:, 1] * V[:, 2] * V[:, 2] * kt[11] + 12 * V[:, 0] * V[:, 0] * V[:, 1] * V[:, 2] * kt[12] + 12 * V[:, 0] * V[:, 1] * V[:, 1] * V[:, 2] * kt[13] + 12 * V[:, 0] * V[:, 1] * V[:, 2] * V[:, 2] * kt[14] ) return adv @warning_for_keywords() def directional_kurtosis( dt, md, kt, V, *, min_diffusivity=0, min_kurtosis=-3 / 7, adc=None, adv=None ): r"""Calculate the apparent kurtosis coefficient (akc) in each direction of a sphere for a single voxel. See :footcite:p:`NetoHenriques2015` and :footcite:p:`NetoHenriques2021a` for further details about the method. Parameters ---------- dt : array (6,) elements of the diffusion tensor of the voxel. md : float mean diffusivity of the voxel kt : array (15,) elements of the kurtosis tensor of the voxel. V : array (g, 3) g directions of a Sphere in Cartesian coordinates min_diffusivity : float, optional Because negative eigenvalues are not physical and small eigenvalues cause quite a lot of noise in diffusion-based metrics, diffusivity values smaller than `min_diffusivity` are replaced with `min_diffusivity`. Default = 0 min_kurtosis : float, optional Because high-amplitude negative values of kurtosis are not physically and biologicaly pluasible, and these cause artefacts in kurtosis-based measures, directional kurtosis values smaller than `min_kurtosis` are replaced with `min_kurtosis`. (theoretical kurtosis limit for regions that consist of water confined to spherical pores :footcite:p:`Jensen2005`). adc : ndarray(g,), optional Apparent diffusion coefficient (ADC) in all g directions of a sphere for a single voxel. adv : ndarray(g,), optional Apparent diffusion variance (advc) in all g directions of a sphere for a single voxel. Returns ------- akc : ndarray (g,) Apparent kurtosis coefficient (AKC) in all g directions of a sphere for a single voxel. References ---------- .. footbibliography:: """ if adc is None: adc = directional_diffusion(dt, V, min_diffusivity=min_diffusivity) if adv is None: adv = directional_diffusion_variance(kt, V) akc = adv * (md / adc) ** 2 if min_kurtosis is not None: akc = akc.clip(min=min_kurtosis) return akc @warning_for_keywords() def apparent_kurtosis_coef( dki_params, sphere, *, min_diffusivity=0, min_kurtosis=-3.0 / 7 ): r"""Calculate the apparent kurtosis coefficient (AKC) in each direction of a sphere. See :footcite:p:`NetoHenriques2015` and :footcite:p:`NetoHenriques2021a` for further details about the method. Parameters ---------- dki_params : ndarray (x, y, z, 27) or (n, 27) All parameters estimated from the diffusion kurtosis model. 
Parameters are ordered as follows: 1) Three diffusion tensor's eigenvalues 2) Three lines of the eigenvector matrix each containing the first, second and third coordinates of the eigenvectors respectively 3) Fifteen elements of the kurtosis tensor sphere : a Sphere class instance The AKC will be calculated for each of the vertices in the sphere min_diffusivity : float, optional Because negative eigenvalues are not physical and small eigenvalues cause quite a lot of noise in diffusion-based metrics, diffusivity values smaller than `min_diffusivity` are replaced with `min_diffusivity`. min_kurtosis : float, optional Because high-amplitude negative values of kurtosis are not physically and biologicaly pluasible, and these cause artefacts in kurtosis-based measures, directional kurtosis values smaller than `min_kurtosis` are replaced with `min_kurtosis`. Default = -3./7 (theoretical kurtosis limit for regions that consist of water confined to spherical pores :footcite:p:`Jensen2005`). Returns ------- akc : ndarray (x, y, z, g) or (n, g) Apparent kurtosis coefficient (AKC) for all g directions of a sphere. Notes ----- For each sphere direction with coordinates $(n_{1}, n_{2}, n_{3})$, the calculation of AKC is done using formula :footcite:p:`NetoHenriques2015`: .. math:: AKC(n)=\frac{MD^{2}}{ADC(n)^{2}}\sum_{i=1}^{3}\sum_{j=1}^{3} \sum_{k=1}^{3}\sum_{l=1}^{3}n_{i}n_{j}n_{k}n_{l}W_{ijkl} where $W_{ijkl}$ are the elements of the kurtosis tensor, MD the mean diffusivity and ADC the apparent diffusion coefficient computed as: .. math:: ADC(n)=\sum_{i=1}^{3}\sum_{j=1}^{3}n_{i}n_{j}D_{ij} where $D_{ij}$ are the elements of the diffusion tensor. References ---------- .. footbibliography:: """ # Flat parameters outshape = dki_params.shape[:-1] dki_params = dki_params.reshape((-1, dki_params.shape[-1])) # Split data evals, evecs, kt = split_dki_param(dki_params) # Initialize AKC matrix V = sphere.vertices akc = np.zeros((len(kt), len(V))) # select relevant voxels to process rel_i = _positive_evals(evals[..., 0], evals[..., 1], evals[..., 2]) kt = kt[rel_i] evecs = evecs[rel_i] evals = evals[rel_i] akci = akc[rel_i] # Compute MD and DT md = mean_diffusivity(evals) dt = lower_triangular(vec_val_vect(evecs, evals)) # loop over all relevant voxels for vox in range(len(kt)): akci[vox] = directional_kurtosis( dt[vox], md[vox], kt[vox], V, min_diffusivity=min_diffusivity, min_kurtosis=min_kurtosis, ) # reshape data according to input data akc[rel_i] = akci return akc.reshape((outshape + (len(V),))) @warning_for_keywords() def mean_kurtosis( dki_params, *, min_kurtosis=-3.0 / 7, max_kurtosis=3, analytical=True ): r""" Compute mean kurtosis (MK) from the kurtosis tensor. See :footcite:p:`Tabesh2011` and :footcite:p:`NetoHenriques2021a` for further details about the method. Parameters ---------- dki_params : ndarray (x, y, z, 27) or (n, 27) All parameters estimated from the diffusion kurtosis model. Parameters are ordered as follows: 1) Three diffusion tensor's eigenvalues 2) Three lines of the eigenvector matrix each containing the first, second and third coordinates of the eigenvector 3) Fifteen elements of the kurtosis tensor min_kurtosis : float, optional To keep kurtosis values within a plausible biophysical range, mean kurtosis values that are smaller than `min_kurtosis` are replaced with `min_kurtosis`. Default = -3./7 (theoretical kurtosis limit for regions that consist of water confined to spherical pores :footcite:p:`Jensen2005`). 
max_kurtosis : float, optional To keep kurtosis values within a plausible biophysical range, mean kurtosis values that are larger than `max_kurtosis` are replaced with `max_kurtosis`. analytical : bool, optional If True, MK is calculated using its analytical solution, otherwise an exact numerical estimator is used (see Notes). Returns ------- mk : array Calculated MK. Notes ----- The MK is defined as the average of directional kurtosis coefficients across all spatial directions, which can be formulated by the following surface integral :footcite:p:`Tabesh2011`, :footcite:p:`NetoHenriques2021a`: .. math:: MK \equiv \frac{1}{4\pi} \int d\Omega_\mathbf{n} K(\mathbf{n}) This integral can be numerically solved by averaging directional kurtosis values sampled for directions of a spherical t-design :footcite:p:`Hardin1996`. Alternatively, MK can be solved from the analytical solution derived by :footcite:p:`Tabesh2011`, :footcite:p:`NetoHenriques2021a`. This solution is given by: .. math:: MK=F_1(\lambda_1,\lambda_2,\lambda_3)\hat{W}_{1111}+ F_1(\lambda_2,\lambda_1,\lambda_3)\hat{W}_{2222}+ F_1(\lambda_3,\lambda_2,\lambda_1)\hat{W}_{3333}+ \\ F_2(\lambda_1,\lambda_2,\lambda_3)\hat{W}_{2233}+ F_2(\lambda_2,\lambda_1,\lambda_3)\hat{W}_{1133}+ F_2(\lambda_3,\lambda_2,\lambda_1)\hat{W}_{1122} where $\hat{W}_{ijkl}$ are the components of the $W$ tensor in the coordinates system defined by the eigenvectors of the diffusion tensor $\mathbf{D}$ and .. math:: F_1(\lambda_1,\lambda_2,\lambda_3)= \frac{(\lambda_1+\lambda_2+\lambda_3)^2} {18(\lambda_1-\lambda_2)(\lambda_1-\lambda_3)} [\frac{\sqrt{\lambda_2\lambda_3}}{\lambda_1} R_F(\frac{\lambda_1}{\lambda_2},\frac{\lambda_1}{\lambda_3},1)+\\ \frac{3\lambda_1^2-\lambda_1\lambda_2-\lambda_2\lambda_3- \lambda_1\lambda_3} {3\lambda_1 \sqrt{\lambda_2 \lambda_3}} R_D(\frac{\lambda_1}{\lambda_2},\frac{\lambda_1}{\lambda_3},1)-1 ] F_2(\lambda_1,\lambda_2,\lambda_3)= \frac{(\lambda_1+\lambda_2+\lambda_3)^2} {3(\lambda_2-\lambda_3)^2} [\frac{\lambda_2+\lambda_3}{\sqrt{\lambda_2\lambda_3}} R_F(\frac{\lambda_1}{\lambda_2},\frac{\lambda_1}{\lambda_3},1)+\\ \frac{2\lambda_1-\lambda_2-\lambda_3}{3\sqrt{\lambda_2 \lambda_3}} R_D(\frac{\lambda_1}{\lambda_2},\frac{\lambda_1}{\lambda_3},1)-2] where $R_f$ and $R_d$ are the Carlson's elliptic integrals. References ---------- .. footbibliography:: """ # Flat parameters. 
For numpy versions more recent than 1.6.0, this step # isn't required outshape = dki_params.shape[:-1] dki_params = dki_params.reshape((-1, dki_params.shape[-1])) if analytical: # Split the model parameters to three variable containing the evals, # evecs, and kurtosis elements evals, evecs, kt = split_dki_param(dki_params) # Rotate the kurtosis tensor from the standard Cartesian coordinate # system to another coordinate system in which the 3 orthonormal # eigenvectors of DT are the base coordinate Wxxxx = Wrotate_element(kt, 0, 0, 0, 0, evecs) Wyyyy = Wrotate_element(kt, 1, 1, 1, 1, evecs) Wzzzz = Wrotate_element(kt, 2, 2, 2, 2, evecs) Wxxyy = Wrotate_element(kt, 0, 0, 1, 1, evecs) Wxxzz = Wrotate_element(kt, 0, 0, 2, 2, evecs) Wyyzz = Wrotate_element(kt, 1, 1, 2, 2, evecs) # Compute MK MK = ( _F1m(evals[..., 0], evals[..., 1], evals[..., 2]) * Wxxxx + _F1m(evals[..., 1], evals[..., 0], evals[..., 2]) * Wyyyy + _F1m(evals[..., 2], evals[..., 1], evals[..., 0]) * Wzzzz + _F2m(evals[..., 0], evals[..., 1], evals[..., 2]) * Wyyzz + _F2m(evals[..., 1], evals[..., 0], evals[..., 2]) * Wxxzz + _F2m(evals[..., 2], evals[..., 1], evals[..., 0]) * Wxxyy ) else: # Numerical Solution using t-design of 45 directions V = np.loadtxt(get_fnames(name="t-design")) sph = dps.Sphere(xyz=V) KV = apparent_kurtosis_coef(dki_params, sph, min_kurtosis=min_kurtosis) MK = np.mean(KV, axis=-1) if min_kurtosis is not None: MK = MK.clip(min=min_kurtosis) if max_kurtosis is not None: MK = MK.clip(max=max_kurtosis) return MK.reshape(outshape) def _G1m(a, b, c): r"""Helper function that computes function $G_1$ which is required to compute the analytical solution of the Radial kurtosis Parameters ---------- a : ndarray Array containing the values of parameter $\lambda_1$ of function $G_1$ b : ndarray Array containing the values of parameter $\lambda_2$ of function $G_1$ c : ndarray Array containing the values of parameter $\lambda_3$ of function $G_1$ Returns ------- G1 : ndarray Value of the function $G_1$ for all elements of the arrays a, b, and c Notes ----- Function $G_1$ is defined as :footcite:p:`Tabesh2011`: .. math:: G_1(\lambda_1,\lambda_2,\lambda_3)= \frac{(\lambda_1+\lambda_2+\lambda_3)^2}{18\lambda_2(\lambda_2- \lambda_3)} \left (2\lambda_2 + \frac{\lambda_3^2-3\lambda_2\lambda_3}{\sqrt{\lambda_2\lambda_3}} \right) References ---------- .. footbibliography:: """ # Float error used to compare two floats, abs(l1 - l2) < er for l1 = l2 # Error is defined as five orders of magnitude larger than system's epslon er = np.finfo(a.ravel()[0]).eps * 1e5 # Initialize G1 G1 = np.zeros(a.shape) # Only computes G1 in voxels that have all eigenvalues larger than zero cond0 = _positive_evals(a, b, c) # Apply formula for non problematic plausible cases, i.e. 
b!=c cond1 = np.logical_and(cond0, (abs(b - c) > er)) if np.sum(cond1) != 0: L1 = a[cond1] L2 = b[cond1] L3 = c[cond1] G1[cond1] = ( (L1 + L2 + L3) ** 2 / (18 * L2 * (L2 - L3) ** 2) * (2.0 * L2 + (L3**2 - 3 * L2 * L3) / np.sqrt(L2 * L3)) ) # Resolve possible singularity b==c cond2 = np.logical_and(cond0, abs(b - c) < er) if np.sum(cond2) != 0: L1 = a[cond2] L2 = b[cond2] G1[cond2] = (L1 + 2.0 * L2) ** 2 / (24.0 * L2**2) return G1 def _G2m(a, b, c): r"""Helper function that computes function $G_2$ which is required to compute the analytical solution of the Radial kurtosis Parameters ---------- a : ndarray Array containing the values of parameter $\lambda_1$ of function $G_2$ b : ndarray Array containing the values of parameter $\lambda_2$ of function $G_2$ c : ndarray (n,) Array containing the values of parameter $\lambda_3$ of function $G_2$ Returns ------- G2 : ndarray Value of the function $G_2$ for all elements of the arrays a, b, and c Notes ----- Function $G_2$ is defined as :footcite:p:`Tabesh2011`: .. math:: G_2(\lambda_1,\lambda_2,\lambda_3)= \frac{(\lambda_1+\lambda_2+\lambda_3)^2}{(\lambda_2-\lambda_3)^2} \left ( \frac{\lambda_2+\lambda_3}{\sqrt{\lambda_2\lambda_3}}-2\right ) References ---------- .. footbibliography:: """ # Float error used to compare two floats, abs(l1 - l2) < er for l1 = l2 # Error is defined as five order of magnitude larger than system's epsilon er = np.finfo(a.ravel()[0]).eps * 1e5 # Initialize G2 G2 = np.zeros(a.shape) # Only computes G2 in voxels that have all eigenvalues larger than zero cond0 = _positive_evals(a, b, c) # Apply formula for non problematic plausible cases, i.e. b!=c cond1 = np.logical_and(cond0, (abs(b - c) > er)) if np.sum(cond1) != 0: L1 = a[cond1] L2 = b[cond1] L3 = c[cond1] G2[cond1] = ( (L1 + L2 + L3) ** 2 / (3 * (L2 - L3) ** 2) * ((L2 + L3) / np.sqrt(L2 * L3) - 2) ) # Resolve possible singularity b==c cond2 = np.logical_and(cond0, abs(b - c) < er) if np.sum(cond2) != 0: L1 = a[cond2] L2 = b[cond2] G2[cond2] = (L1 + 2.0 * L2) ** 2 / (12.0 * L2**2) return G2 @warning_for_keywords() def radial_kurtosis( dki_params, *, min_kurtosis=-3.0 / 7, max_kurtosis=10, analytical=True ): r"""Compute radial kurtosis (RK) of a diffusion kurtosis tensor. See :footcite:p:`Tabesh2011` and :footcite:p:`NetoHenriques2021a` for further details about the method. Parameters ---------- dki_params : ndarray (x, y, z, 27) or (n, 27) All parameters estimated from the diffusion kurtosis model. Parameters are ordered as follows: 1) Three diffusion tensor's eigenvalues 2) Three lines of the eigenvector matrix each containing the first, second and third coordinates of the eigenvector 3) Fifteen elements of the kurtosis tensor min_kurtosis : float, optional To keep kurtosis values within a plausible biophysical range, radial kurtosis values that are smaller than `min_kurtosis` are replaced with `min_kurtosis`. Default = -3./7 (theoretical kurtosis limit for regions that consist of water confined to spherical pores :footcite:p:`Jensen2005`). max_kurtosis : float, optional To keep kurtosis values within a plausible biophysical range, radial kurtosis values that are larger than `max_kurtosis` are replaced with `max_kurtosis`. analytical : bool, optional If True, RK is calculated using its analytical solution, otherwise an exact numerical estimator is used (see Notes). Default is set to True. Returns ------- rk : array Calculated RK. 
@warning_for_keywords()
def radial_kurtosis(
    dki_params, *, min_kurtosis=-3.0 / 7, max_kurtosis=10, analytical=True
):
    r"""Compute radial kurtosis (RK) of a diffusion kurtosis tensor.

    See :footcite:p:`Tabesh2011` and :footcite:p:`NetoHenriques2021a` for
    further details about the method.

    Parameters
    ----------
    dki_params : ndarray (x, y, z, 27) or (n, 27)
        All parameters estimated from the diffusion kurtosis model.
        Parameters are ordered as follows:

            1) Three diffusion tensor's eigenvalues
            2) Three lines of the eigenvector matrix each containing the
               first, second and third coordinates of the eigenvector
            3) Fifteen elements of the kurtosis tensor
    min_kurtosis : float, optional
        To keep kurtosis values within a plausible biophysical range, radial
        kurtosis values that are smaller than `min_kurtosis` are replaced with
        `min_kurtosis`. Default = -3./7 (theoretical kurtosis limit for
        regions that consist of water confined to spherical pores
        :footcite:p:`Jensen2005`).
    max_kurtosis : float, optional
        To keep kurtosis values within a plausible biophysical range, radial
        kurtosis values that are larger than `max_kurtosis` are replaced with
        `max_kurtosis`.
    analytical : bool, optional
        If True, RK is calculated using its analytical solution, otherwise an
        exact numerical estimator is used (see Notes). Default is set to True.

    Returns
    -------
    rk : array
        Calculated RK.

    Notes
    -----
    RK is defined as the average of the directional kurtosis perpendicular to
    the fiber's main direction e1 :footcite:p:`Tabesh2011`,
    :footcite:p:`NetoHenriques2021a`:

    .. math::

        RK \equiv \frac{1}{2\pi} \int d\Omega _\mathbf{\theta}
        K(\mathbf{\theta}) \delta (\mathbf{\theta}\cdot \mathbf{e}_1)

    This equation can be numerically computed by averaging apparent
    directional kurtosis samples for directions perpendicular to e1
    :footcite:p:`NetoHenriques2021a`.

    Otherwise, RK can be calculated from its analytical solution
    :footcite:p:`Tabesh2011`:

    .. math::

        K_{\bot} = G_1(\lambda_1,\lambda_2,\lambda_3)\hat{W}_{2222} +
        G_1(\lambda_1,\lambda_3,\lambda_2)\hat{W}_{3333} +
        G_2(\lambda_1,\lambda_2,\lambda_3)\hat{W}_{2233}

    where:

    .. math::

        G_1(\lambda_1,\lambda_2,\lambda_3)=
        \frac{(\lambda_1+\lambda_2+\lambda_3)^2}
        {18\lambda_2(\lambda_2-\lambda_3)^2}
        \left (2\lambda_2 +
        \frac{\lambda_3^2-3\lambda_2\lambda_3}{\sqrt{\lambda_2\lambda_3}}
        \right)

    and

    .. math::

        G_2(\lambda_1,\lambda_2,\lambda_3)=
        \frac{(\lambda_1+\lambda_2+\lambda_3)^2}{3(\lambda_2-\lambda_3)^2}
        \left ( \frac{\lambda_2+\lambda_3}{\sqrt{\lambda_2\lambda_3}}-2\right )

    References
    ----------
    .. footbibliography::
    """
    outshape = dki_params.shape[:-1]
    dki_params = dki_params.reshape((-1, dki_params.shape[-1]))

    # Split the model parameters into three variables containing the evals,
    # evecs, and kurtosis elements
    evals, evecs, kt = split_dki_param(dki_params)

    if analytical:
        # Rotate the kurtosis tensor from the standard Cartesian coordinate
        # system to another coordinate system in which the 3 orthonormal
        # eigenvectors of DT are the base coordinate
        Wyyyy = Wrotate_element(kt, 1, 1, 1, 1, evecs)
        Wzzzz = Wrotate_element(kt, 2, 2, 2, 2, evecs)
        Wyyzz = Wrotate_element(kt, 1, 1, 2, 2, evecs)

        # Compute RK
        RK = (
            _G1m(evals[..., 0], evals[..., 1], evals[..., 2]) * Wyyyy
            + _G1m(evals[..., 0], evals[..., 2], evals[..., 1]) * Wzzzz
            + _G2m(evals[..., 0], evals[..., 1], evals[..., 2]) * Wyyzz
        )
    else:
        # Numerical solution using 10 sampled perpendicular directions
        npa = 10

        # Initialize RK
        RK = np.zeros(kt.shape[:-1])

        # Select the relevant voxels to process
        rel_i = _positive_evals(evals[..., 0], evals[..., 1], evals[..., 2])
        dki_params = dki_params[rel_i]
        evecs = evecs[rel_i]

        # Loop over all voxels
        KV = np.zeros((dki_params.shape[0], npa))
        for vox in range(len(dki_params)):
            V = perpendicular_directions(
                np.array(evecs[vox, :, 0]), num=npa, half=True
            )
            sph = dps.Sphere(xyz=V)
            KV[vox, :] = apparent_kurtosis_coef(
                dki_params[vox], sph, min_kurtosis=min_kurtosis
            )
        RK[rel_i] = np.mean(KV, axis=-1)

    if min_kurtosis is not None:
        RK = RK.clip(min=min_kurtosis)

    if max_kurtosis is not None:
        RK = RK.clip(max=max_kurtosis)

    return RK.reshape(outshape)
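

# ---------------------------------------------------------------------------
# Illustrative sketch (not part of the public API): for a null kurtosis
# tensor the analytical and the numerical RK estimators must agree (both
# return 0). Eigenvalues are hypothetical diffusivities.
# ---------------------------------------------------------------------------
def _example_radial_kurtosis_estimators():
    """Sketch: analytical and numerical RK agree for a zero kurtosis tensor."""
    dki_params = np.concatenate(
        (np.array([1.7e-3, 0.4e-3, 0.3e-3]), np.eye(3).ravel(), np.zeros(15))
    )
    rk_exact = radial_kurtosis(dki_params, analytical=True)
    rk_numeric = radial_kurtosis(dki_params, analytical=False)
    return rk_exact, rk_numeric  # both ~0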
@warning_for_keywords()
def axial_kurtosis(
    dki_params, *, min_kurtosis=-3.0 / 7, max_kurtosis=10, analytical=True
):
    r"""Compute axial kurtosis (AK) from the kurtosis tensor.

    See :footcite:p:`Tabesh2011` and :footcite:p:`NetoHenriques2021a` for
    further details about the method.

    Parameters
    ----------
    dki_params : ndarray (x, y, z, 27) or (n, 27)
        All parameters estimated from the diffusion kurtosis model.
        Parameters are ordered as follows:

            1) Three diffusion tensor's eigenvalues
            2) Three lines of the eigenvector matrix each containing the
               first, second and third coordinates of the eigenvector
            3) Fifteen elements of the kurtosis tensor
    min_kurtosis : float, optional
        To keep kurtosis values within a plausible biophysical range, axial
        kurtosis values that are smaller than `min_kurtosis` are replaced with
        `min_kurtosis`. Default = -3./7 (theoretical kurtosis limit for
        regions that consist of water confined to spherical pores
        :footcite:p:`Jensen2005`).
    max_kurtosis : float, optional
        To keep kurtosis values within a plausible biophysical range, axial
        kurtosis values that are larger than `max_kurtosis` are replaced with
        `max_kurtosis`.
    analytical : bool, optional
        If True, AK is calculated from the rotated diffusion kurtosis tensor,
        otherwise it is computed from the apparent diffusion kurtosis values
        along the principal axis of the diffusion tensor (see notes).

    Returns
    -------
    ak : array
        Calculated AK.

    Notes
    -----
    AK is defined as the directional kurtosis parallel to the fiber's main
    direction e1 :footcite:p:`Tabesh2011`, :footcite:p:`NetoHenriques2021a`.
    AK can be computed using two approaches:

    1) AK is calculated from the rotated diffusion kurtosis tensor
       :footcite:p:`Tabesh2011`, i.e.:

    .. math::

        AK = \hat{W}_{1111}
        \frac{(\lambda_{1}+\lambda_{2}+\lambda_{3})^2}{(9 \lambda_{1}^2)}

    2) AK can be sampled from the principal axis of the diffusion tensor
       :footcite:p:`NetoHenriques2021a`:

    .. math::

        AK = K(\mathbf{e}_1)

    Although both approaches lead to an exact calculation of AK, the first
    approach will be referred to as the analytical method, while the second
    approach will be referred to as the numerical method, based on their
    analogy to the estimation strategies for MK and RK.

    References
    ----------
    .. footbibliography::
    """
    # Flat parameters
    outshape = dki_params.shape[:-1]
    dki_params = dki_params.reshape((-1, dki_params.shape[-1]))

    # Split data
    evals, evecs, kt = split_dki_param(dki_params)

    # Initialize AK
    AK = np.zeros(kt.shape[:-1])

    # Select the relevant voxels to process
    rel_i = _positive_evals(evals[..., 0], evals[..., 1], evals[..., 2])
    kt = kt[rel_i]
    evecs = evecs[rel_i]
    evals = evals[rel_i]
    AKi = AK[rel_i]

    # Compute mean diffusivity
    md = mean_diffusivity(evals)

    if analytical:
        # Rotate the kurtosis tensor from the standard Cartesian coordinate
        # system to another coordinate system in which the 3 orthonormal
        # eigenvectors of DT are the base coordinate
        Wxxxx = Wrotate_element(kt, 0, 0, 0, 0, evecs)
        AKi = Wxxxx * (md**2) / (evals[..., 0] ** 2)
    else:
        # Compute apparent directional kurtosis along evecs[0]
        dt = lower_triangular(vec_val_vect(evecs, evals))
        for vox in range(len(kt)):
            AKi[vox] = directional_kurtosis(
                dt[vox], md[vox], kt[vox], np.array([evecs[vox, :, 0]])
            ).item()

    # Reshape data according to the input data
    AK[rel_i] = AKi

    if min_kurtosis is not None:
        AK = AK.clip(min=min_kurtosis)

    if max_kurtosis is not None:
        AK = AK.clip(max=max_kurtosis)

    return AK.reshape(outshape)
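

# ---------------------------------------------------------------------------
# Illustrative sketch (not part of the public API): with the eigenvectors
# aligned to the Cartesian axes, the analytical AK reduces to the closed-form
# relation AK = W_1111 * MD**2 / lambda_1**2. All numbers are hypothetical.
# ---------------------------------------------------------------------------
def _example_axial_kurtosis_relation():
    """Sketch: AK equals W_1111 * MD**2 / lambda_1**2 for axis-aligned evecs."""
    evals = np.array([1.7e-3, 0.4e-3, 0.3e-3])
    kt = np.zeros(15)
    kt[0] = 1.0  # only W_xxxx is non-zero
    dki_params = np.concatenate((evals, np.eye(3).ravel(), kt))
    expected = kt[0] * evals.mean() ** 2 / evals[0] ** 2  # ~0.2215
    return axial_kurtosis(dki_params), expected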
def _kt_maximum_converge(ang, dt, md, kt):
    """Helper function that computes the negated directional kurtosis of a
    voxel along a given direction in polar coordinates.

    Parameters
    ----------
    ang : array (2,)
        array containing the two polar angles
    dt : array (6,)
        elements of the diffusion tensor of the voxel.
    md : float
        mean diffusivity of the voxel
    kt : array (15,)
        elements of the kurtosis tensor of the voxel.

    Returns
    -------
    neg_kt : float
        The negated value of the apparent kurtosis for the given direction.

    Notes
    -----
    Since optimizers minimize their objective, returning the negated
    directional kurtosis allows this function to be used to refine the
    kurtosis maximum estimate.

    See Also
    --------
    dipy.reconst.dki.kurtosis_maximum
    """
    n = np.array([sphere2cart(1, ang[0], ang[1])])
    return -1.0 * directional_kurtosis(dt, md, kt, n)


@warning_for_keywords()
def _voxel_kurtosis_maximum(dt, md, kt, sphere, *, gtol=1e-2):
    """Compute the maximum value of a single voxel kurtosis tensor

    Parameters
    ----------
    dt : array (6,)
        elements of the diffusion tensor of the voxel.
    md : float
        mean diffusivity of the voxel
    kt : array (15,)
        elements of the kurtosis tensor of the voxel.
    sphere : Sphere class instance
        The sphere providing sample directions for the initial search of the
        maximum value of kurtosis.
    gtol : float, optional
        This input is to refine the kurtosis maximum under the precision of
        the directions sampled on the sphere class instance. The gradient of
        the convergence procedure must be less than gtol before successful
        termination. If gtol is None, the fiber direction is directly taken
        from the initial sampled directions of the given sphere object

    Returns
    -------
    max_value : float
        kurtosis tensor maximum value
    max_dir : array (3,)
        Cartesian coordinates of the direction of the maximal kurtosis value
    """
    # Estimation of the maximum kurtosis candidates
    akc = directional_kurtosis(dt, md, kt, sphere.vertices)
    max_val, ind = local_maxima(akc, sphere.edges)
    n = len(max_val)

    # Case where no maximum was found (spherical or null kurtosis tensors)
    if n == 0:
        return np.mean(akc), np.zeros(3)

    max_dir = sphere.vertices[ind]

    # Select the maximum from the candidates
    max_value = max(max_val)
    max_direction = max_dir[np.argmax(max_val)]

    # Refine the maximum direction
    if gtol is not None:
        for p in range(n):
            r, theta, phi = cart2sphere(max_dir[p, 0], max_dir[p, 1], max_dir[p, 2])
            ang = np.array([theta, phi])
            ang[:] = opt.fmin_bfgs(
                _kt_maximum_converge,
                ang,
                args=(dt, md, kt),
                disp=False,
                retall=False,
                gtol=gtol,
            )
            k_dir = np.array([sphere2cart(1.0, ang[0], ang[1])])
            k_val = directional_kurtosis(dt, md, kt, k_dir)
            if k_val > max_value:
                max_value = k_val
                max_direction = k_dir

    return max_value.item(), max_direction


@warning_for_keywords()
def kurtosis_maximum(dki_params, *, sphere="repulsion100", gtol=1e-2, mask=None):
    """Compute kurtosis maximum value.

    See :footcite:p:`NetoHenriques2021a` for further details about the method.

    Parameters
    ----------
    dki_params : ndarray (x, y, z, 27) or (n, 27)
        All parameters estimated from the diffusion kurtosis model.
        Parameters are ordered as follows:

            1) Three diffusion tensor's eigenvalues
            2) Three lines of the eigenvector matrix each containing the
               first, second and third coordinates of the eigenvector
            3) Fifteen elements of the kurtosis tensor
    sphere : Sphere class instance, optional
        The sphere providing sample directions for the initial search of the
        maximal value of kurtosis.
    gtol : float, optional
        This input is to refine the kurtosis maximum under the precision of
        the directions sampled on the sphere class instance. The gradient of
        the convergence procedure must be less than gtol before successful
        termination. If gtol is None, the fiber direction is directly taken
        from the initial sampled directions of the given sphere object
    mask : ndarray
        A boolean array used to mark the coordinates in the data that should
        be analyzed that has the shape dki_params.shape[:-1]

    Returns
    -------
    max_value : float
        kurtosis tensor maximum value
    max_dir : array (3,)
        Cartesian coordinates of the direction of the maximal kurtosis value

    References
    ----------
    ..
footbibliography:: """ shape = dki_params.shape[:-1] # load gradient directions if not isinstance(sphere, dps.Sphere): sphere = get_sphere(name="repulsion100") # select voxels where to find fiber directions if mask is None: mask = np.ones(shape, dtype="bool") else: if mask.shape != shape: raise ValueError("Mask is not the same shape as dki_params.") evals, evecs, kt = split_dki_param(dki_params) # select non-zero voxels pos_evals = _positive_evals(evals[..., 0], evals[..., 1], evals[..., 2]) mask = np.logical_and(mask, pos_evals) kt_max = np.zeros(mask.shape) dt = lower_triangular(vec_val_vect(evecs, evals)) md = mean_diffusivity(evals) for idx in ndindex(shape): if not mask[idx]: continue kt_max[idx], da = _voxel_kurtosis_maximum( dt[idx], md[idx], kt[idx], sphere, gtol=gtol ) return kt_max @warning_for_keywords() def mean_kurtosis_tensor(dki_params, *, min_kurtosis=-3.0 / 7, max_kurtosis=10): r"""Compute mean of the kurtosis tensor (MKT). See :footcite:p:`Hansen2013` for further details about the method. Parameters ---------- dki_params : ndarray (x, y, z, 27) or (n, 27) All parameters estimated from the diffusion kurtosis model. Parameters are ordered as follows: 1) Three diffusion tensor's eigenvalues 2) Three lines of the eigenvector matrix each containing the first, second and third coordinates of the eigenvector 3) Fifteen elements of the kurtosis tensor min_kurtosis : float, optional To keep kurtosis values within a plausible biophysical range, mean kurtosis values that are smaller than `min_kurtosis` are replaced with `min_kurtosis`. Default = -3./7 (theoretical kurtosis limit for regions that consist of water confined to spherical pores :footcite:p:`Jensen2005`). max_kurtosis : float, optional To keep kurtosis values within a plausible biophysical range, mean kurtosis values that are larger than `max_kurtosis` are replaced with `max_kurtosis`. Returns ------- mkt : array Calculated mean kurtosis tensor. Notes ----- The MKT is defined as :footcite:p:`Hansen2013`: .. math:: MKT \equiv \frac{1}{4\pi} \int d \Omega_{\mathnbf{n}} n_i n_j n_k n_l W_{ijkl} which can be directly computed from the trace of the kurtosis tensor: .. math:: MKT = \frac{1}{5} Tr(\mathbf{W}) = \frac{1}{5} (W_{1111} + W_{2222} + W_{3333} + 2W_{1122} + 2W_{1133} + 2W_{2233}) References ---------- .. footbibliography:: """ MKT = ( 1 / 5 * ( dki_params[..., 12] + dki_params[..., 13] + dki_params[..., 14] + 2 * dki_params[..., 21] + 2 * dki_params[..., 22] + 2 * dki_params[..., 23] ) ) if min_kurtosis is not None: MKT = MKT.clip(min=min_kurtosis) if max_kurtosis is not None: MKT = MKT.clip(max=max_kurtosis) return MKT def radial_tensor_kurtosis(dki_params, *, min_kurtosis=-3.0 / 7, max_kurtosis=10): r"""Compute the rescaled radial tensor kurtosis (RTK). See :footcite:p:`Hansen2013` for further details about the method. Parameters ---------- dki_params : ndarray (x, y, z, 27) or (n, 27) All parameters estimated from the diffusion kurtosis model. Parameters are ordered as follows: 1) Three diffusion tensor's eigenvalues 2) Three lines of the eigenvector matrix each containing the first, second and third coordinates of the eigenvector 3) Fifteen elements of the kurtosis tensor min_kurtosis : float, optional To keep kurtosis values within a plausible biophysical range, radial kurtosis values that are smaller than `min_kurtosis` are replaced with `min_kurtosis`. 
max_kurtosis : float, optional
        To keep kurtosis values within a plausible biophysical range, radial
        kurtosis values that are larger than `max_kurtosis` are replaced with
        `max_kurtosis`.

    Returns
    -------
    rtk : array
        Calculated rescaled radial tensor kurtosis (RTK).

    Notes
    -----
    Rescaled radial tensor kurtosis (RTK) is defined as
    :footcite:p:`Hansen2013`:

    .. math::

        RTK = \frac{3}{8} \frac{MD^2}{RD^2} (W_{2222} + W_{3333} + 2 W_{2233})

    where W is the kurtosis tensor rotated to a coordinate system in which the
    3 orthonormal eigenvectors of DT are the base coordinate, MD is the mean
    diffusivity, and RD is the radial diffusivity.

    References
    ----------
    .. footbibliography::
    """
    outshape = dki_params.shape[:-1]
    dki_params = dki_params.reshape((-1, dki_params.shape[-1]))

    # Split the model parameters into three variables containing the evals,
    # evecs, and kurtosis elements
    evals, evecs, kt = split_dki_param(dki_params)

    # Initialize RTK
    RTK = np.zeros(kt.shape[:-1])

    # Select the relevant voxels to process
    rel_i = _positive_evals(evals[..., 0], evals[..., 1], evals[..., 2])
    kt = kt[rel_i]
    evecs = evecs[rel_i]
    evals = evals[rel_i]

    # Rotate the kurtosis tensor from the standard Cartesian coordinate
    # system to another coordinate system in which the 3 orthonormal
    # eigenvectors of DT are the base coordinate
    Wyyyy = Wrotate_element(kt, 1, 1, 1, 1, evecs)
    Wzzzz = Wrotate_element(kt, 2, 2, 2, 2, evecs)
    Wyyzz = Wrotate_element(kt, 1, 1, 2, 2, evecs)

    # Compute the radial kurtosis tensor
    WTK = 3 / 8 * (Wyyyy + Wzzzz + 2 * Wyyzz)

    # Rescale the radial kurtosis tensor
    md = mean_diffusivity(evals)
    rd = radial_diffusivity(evals)
    RTK[rel_i] = WTK * md**2 / rd**2

    if min_kurtosis is not None:
        RTK = RTK.clip(min=min_kurtosis)

    if max_kurtosis is not None:
        RTK = RTK.clip(max=max_kurtosis)

    return RTK.reshape(outshape)


def kurtosis_fractional_anisotropy(dki_params):
    r"""Compute the kurtosis fractional anisotropy (KFA) of the kurtosis
    tensor.

    See :footcite:p:`Glenn2015` and :footcite:p:`NetoHenriques2021a` for
    further details about the method.

    Parameters
    ----------
    dki_params : ndarray (x, y, z, 27) or (n, 27)
        All parameters estimated from the diffusion kurtosis model.
        Parameters are ordered as follows:

            1) Three diffusion tensor's eigenvalues
            2) Three lines of the eigenvector matrix each containing the
               first, second and third coordinates of the eigenvector
            3) Fifteen elements of the kurtosis tensor

    Returns
    -------
    kfa : array
        Calculated kurtosis fractional anisotropy (KFA).

    Notes
    -----
    The KFA is defined as :footcite:p:`Glenn2015`:

    .. math::

        KFA \equiv
        \frac{||\mathbf{W} - MKT \mathbf{I}^{(4)}||_F}{||\mathbf{W}||_F}

    where $W$ is the kurtosis tensor, MKT the kurtosis tensor mean, $I^{(4)}$
    is the fully symmetric rank 4 isotropic tensor and $||...||_F$ is the
    tensor's Frobenius norm :footcite:p:`Glenn2015`.

    References
    ----------
    ..
footbibliography:: """ Wxxxx = dki_params[..., 12] Wyyyy = dki_params[..., 13] Wzzzz = dki_params[..., 14] Wxxxy = dki_params[..., 15] Wxxxz = dki_params[..., 16] Wxyyy = dki_params[..., 17] Wyyyz = dki_params[..., 18] Wxzzz = dki_params[..., 19] Wyzzz = dki_params[..., 20] Wxxyy = dki_params[..., 21] Wxxzz = dki_params[..., 22] Wyyzz = dki_params[..., 23] Wxxyz = dki_params[..., 24] Wxyyz = dki_params[..., 25] Wxyzz = dki_params[..., 26] W = 1.0 / 5.0 * (Wxxxx + Wyyyy + Wzzzz + 2 * Wxxyy + 2 * Wxxzz + 2 * Wyyzz) # Compute's equation numerator A = ( (Wxxxx - W) ** 2 + (Wyyyy - W) ** 2 + (Wzzzz - W) ** 2 + 4 * (Wxxxy**2 + Wxxxz**2 + Wxyyy**2 + Wyyyz**2 + Wxzzz**2 + Wyzzz**2) + 6 * ((Wxxyy - W / 3) ** 2 + (Wxxzz - W / 3) ** 2 + (Wyyzz - W / 3) ** 2) + 12 * (Wxxyz**2 + Wxyyz**2 + Wxyzz**2) ) # Compute's equation denominator B = ( Wxxxx**2 + Wyyyy**2 + Wzzzz**2 + 4 * (Wxxxy**2 + Wxxxz**2 + Wxyyy**2 + Wyyyz**2 + Wxzzz**2 + Wyzzz**2) + 6 * (Wxxyy**2 + Wxxzz**2 + Wyyzz**2) + 12 * (Wxxyz**2 + Wxyyz**2 + Wxyzz**2) ) # Compute KFA KFA = np.zeros(A.shape) cond1 = B > 0 # Avoiding Singularity (if B = 0, KFA = 0) # Avoiding overestimating KFA for small MKT values (KFA=0, MKT < tol) cond2 = W > 1e-8 cond = np.logical_and(cond1, cond2) KFA[cond] = np.sqrt(A[cond] / B[cond]) return KFA @warning_for_keywords() def dki_prediction(dki_params, gtab, *, S0=1.0): r"""Predict a signal given diffusion kurtosis imaging parameters The predicted signal is given by: .. math:: S=S_{0}e^{-bD+\frac{1}{6}b^{2}D^{2}K} See :footcite:p:`Jensen2005` and :footcite:p:`NetoHenriques2021a` for further details about the method. Parameters ---------- dki_params : ndarray (x, y, z, 27) or (n, 27) All parameters estimated from the diffusion kurtosis model. Parameters are ordered as follows: 1) Three diffusion tensor's eigenvalues 2) Three lines of the eigenvector matrix each containing the first, second and third coordinates of the eigenvector 3) Fifteen elements of the kurtosis tensor gtab : a GradientTable class instance The gradient table for this prediction S0 : float or ndarray, optional The non diffusion-weighted signal in every voxel, or across all voxels. Default: 1 Returns ------- pred_sig : (..., N) ndarray Simulated signal based on the DKI model. References ---------- .. footbibliography:: """ evals, evecs, kt = split_dki_param(dki_params) # Define DKI design matrix according to given gtab A = design_matrix(gtab) # Flat parameters and initialize pred_sig fevals = evals.reshape((-1, evals.shape[-1])) fevecs = evecs.reshape((-1,) + evecs.shape[-2:]) fkt = kt.reshape((-1, kt.shape[-1])) pred_sig = np.zeros((len(fevals), len(gtab.bvals))) if isinstance(S0, np.ndarray): S0_vol = np.reshape(S0, (len(fevals))) else: S0_vol = S0 # looping for all voxels for v in range(len(pred_sig)): DT = np.dot(np.dot(fevecs[v], np.diag(fevals[v])), fevecs[v].T) dt = lower_triangular(DT) MD = (dt[0] + dt[2] + dt[5]) / 3 if isinstance(S0_vol, np.ndarray): this_S0 = S0_vol[v] else: this_S0 = S0_vol X = np.concatenate((dt, fkt[v] * MD * MD, np.array([-np.log(this_S0)])), axis=0) pred_sig[v] = np.exp(np.dot(A, X)) # Reshape data according to the shape of dki_params pred_sig = pred_sig.reshape(dki_params.shape[:-1] + (pred_sig.shape[-1],)) return pred_sig class DiffusionKurtosisModel(ReconstModel): """Class for the Diffusion Kurtosis Model""" @warning_for_keywords() def __init__(self, gtab, *args, fit_method="WLS", return_S0_hat=False, **kwargs): """Diffusion Kurtosis Tensor Model. 
See :footcite:p:`Jensen2005` and :footcite:p:`NetoHenriques2021a` for further details about the model. Parameters ---------- gtab : GradientTable instance The gradient table for the data set. fit_method : str or callable, optional str be one of the following: - 'OLS' or 'ULLS' for ordinary least squares. - 'WLS', 'WLLS' or 'UWLLS' for weighted ordinary least squares. See func:`dki.ls_fit_dki`. - 'CLS' for LMI constrained ordinary least squares :footcite:p:`DelaHaije2020`. - 'CWLS' for LMI constrained weighted least squares :footcite:p:`DelaHaije2020`. See func:`dki.cls_fit_dki`. callable has to have the signature: ``fit_method(design_matrix, data, *args, **kwargs)`` return_S0_hat : bool Boolean to return (True) or not (False) the S0 values for the fit. *args Variable length argument list passed to the :func:`fit` method. **kwargs Arbitrary keyword arguments passed to the :func:`fit` method. References ---------- .. footbibliography:: """ ReconstModel.__init__(self, gtab) # Check if at least three b-values are given enough_b = check_multi_b(self.gtab, 3, non_zero=False) if not enough_b: msg = "DKI requires at least 3 b-values (which can include b=0)." raise ValueError(msg) self.common_fit_method = not callable(fit_method) if self.common_fit_method: try: self.fit_method = common_fit_methods[fit_method.upper()] except KeyError as e: msg = '"' + str(fit_method) + '" is not a known fit method. ' msg += " The fit method should either be a function or one of " msg += " thecommon fit methods." raise ValueError(msg) from e self.return_S0_hat = return_S0_hat self.args = args self.kwargs = kwargs self.min_signal = self.kwargs.pop("min_signal", None) if self.min_signal is None: self.min_signal = MIN_POSITIVE_SIGNAL elif self.min_signal <= 0: msg = "The `min_signal` key-word argument needs to be strictly" msg += " positive." raise ValueError(msg) self.design_matrix = design_matrix(self.gtab) self.inverse_design_matrix = np.linalg.pinv(self.design_matrix) tol = 1e-6 self.min_diffusivity = tol / -self.design_matrix.min() self.convexity_constraint = fit_method in {"CLS", "CWLS", "NLS"} if self.convexity_constraint: self.cvxpy_solver = self.kwargs.pop("cvxpy_solver", None) self.convexity_level = self.kwargs.pop("convexity_level", "full") msg = "convexity_level must be a positive, even number, or 'full'." if isinstance(self.convexity_level, str): if self.convexity_level == "full": self.sdp_constraints = load_sdp_constraints("dki") else: raise ValueError(msg) elif self.convexity_level < 0 or self.convexity_level % 2: raise ValueError(msg) else: if self.convexity_level > 4: msg = "Maximum convexity_level supported is 4." warnings.warn(msg, stacklevel=2) self.convexity_level = 4 self.sdp_constraints = load_sdp_constraints( "dki", order=self.convexity_level ) self.sdp = PositiveDefiniteLeastSquares(22, A=self.sdp_constraints) self.weights = fit_method in {"WLS", "WLLS", "CWLS"} self.is_multi_method = fit_method in [ "WLS", "WLLS", "LS", "LLS", "OLS", "OLLS", "CLS", "CWLS", ] self.is_iter_method = "weights_method" in self.kwargs if "num_iter" not in self.kwargs and self.is_iter_method: self.kwargs["num_iter"] = 4 self.extra = {} @warning_for_keywords() def fit(self, data, *, mask=None): """Fit method of the DKI model. Parameters ---------- data : array The measured signal. 
mask : array A boolean array used to mark the coordinates in the data that should be analyzed that has the shape data.shape[-1] """ data_thres = np.maximum(data, self.min_signal) if self.is_multi_method and not self.is_iter_method: fit_result, extra = self.multi_fit( data_thres, mask=mask, weights=self.weights, **self.kwargs ) if extra is not None: self.extra = extra return fit_result if self.is_multi_method and self.is_iter_method: fit_result, extra = self.iterative_fit( data_thres, mask=mask, num_iter=self.kwargs["num_iter"], weights_method=self.kwargs["weights_method"], ) if extra is not None: self.extra = extra return fit_result S0_params = None if data.ndim == 1: # NOTE: doesn't use mask... params, extra = self.fit_method( self.design_matrix, data_thres, return_S0_hat=self.return_S0_hat, *self.args, **self.kwargs, ) if self.return_S0_hat: params, S0_params = params if extra is not None: for key in extra: self.extra[key] = extra[key] return DiffusionKurtosisFit(self, params, model_S0=S0_params) if mask is not None: # Check for valid shape of the mask if mask.shape != data.shape[:-1]: raise ValueError("Mask is not the same shape as data.") mask = np.asarray(mask, dtype=bool) data_in_mask = np.reshape(data[mask], (-1, data.shape[-1])) data_in_mask = np.maximum(data_in_mask, self.min_signal) params, extra = self.fit_method( self.design_matrix, data_in_mask, return_S0_hat=self.return_S0_hat, *self.args, **self.kwargs, ) if self.return_S0_hat: params, S0_params = params if mask is None: out_shape = data.shape[:-1] + (-1,) dki_params = params.reshape(out_shape) if self.return_S0_hat: S0_params = S0_params.reshape(out_shape).squeeze(axis=-1) if extra is not None: for key in extra: self.extra[key] = extra[key].reshape(data.shape) else: dki_params = np.zeros(data.shape[:-1] + (27,)) dki_params[mask, :] = params if self.return_S0_hat: S0_params_in_mask = np.zeros(data.shape[:-1]) S0_params_in_mask[mask] = S0_params.squeeze(axis=-1) S0_params = S0_params_in_mask if extra is not None: for key in extra: self.extra[key] = np.zeros(data.shape) self.extra[key][mask, :] = extra[key] return DiffusionKurtosisFit(self, dki_params, model_S0=S0_params) @multi_voxel_fit @warning_for_keywords() def multi_fit(self, data, *, mask=None, **kwargs): """Convenience function for fitting multiple voxels.""" extra_args = ( {} if not self.convexity_constraint else { "cvxpy_solver": self.cvxpy_solver, "sdp": self.sdp, } ) params, extra = self.fit_method( self.design_matrix, data, self.inverse_design_matrix, return_S0_hat=self.return_S0_hat, min_diffusivity=self.min_diffusivity, **extra_args, **kwargs, ) S0_params = None if self.return_S0_hat: params, S0_params = params return DiffusionKurtosisFit(self, params, model_S0=S0_params), extra def iterative_fit( self, data_thres, *, mask=None, num_iter=4, weights_method=weights_method_wls_m_est, ): """Iteratively Reweighted fitting for the DKI model. Parameters ---------- data_thres : array The measured signal. mask : array, optional A boolean array used to mark the coordinates in the data that should be analyzed that has the shape data.shape[-1] num_iter : int, optional Number of times to iterate. weights_method : callable, optional A function with args and returns as follows: (weights, robust) = weights_method(data, pred_sig, design_matrix, leverages, idx, num_iter, robust) Notes ----- On the first iteration, (C)WLS fit is performed. Subsequent iterations will be weighted according to weights_method. 
Outlier rejection should be handled within weights_method by setting the corresponding weights to zero (if weights_method implements outlier rejection). """ if num_iter < 2: # otherwise, weights_method will not be utilized raise ValueError("num_iter must be 2+") w, robust = True, None # w = True ensures WLS (not OLS) tmp, extra, leverages = None, None, None # initialize, for clarity TDX = num_iter for rdx in range(1, TDX + 1): if rdx > 1: # after first iteration, update weights # if using mask, define full-sized arrays for w and robust if rdx == 2 and mask is not None: cond = mask == 1 w = np.ones_like(data_thres, dtype=float) robust = np.zeros_like(data_thres, dtype=bool) robust[cond] = 1 # make prediction of the signal pred_sig = self.predict(tmp.model_params, S0=tmp.model_S0) # define weights for next fit if mask is not None: w[cond], robust_mask = weights_method( data_thres[cond], pred_sig[cond], self.design_matrix, leverages[cond], rdx, TDX, robust[cond], ) if robust_mask is not None: robust[cond] = robust_mask else: w, robust = weights_method( data_thres, pred_sig, self.design_matrix, leverages, rdx, TDX, robust, ) tmp, extra = self.multi_fit( data_thres, mask=mask, weights=w, return_leverages=True ) leverages = extra["leverages"] extra = {"robust": robust} return tmp, extra @warning_for_keywords() def predict(self, dki_params, *, S0=1.0): """Predict a signal for this DKI model class instance given parameters See :footcite:p:`Jensen2005` and :footcite:p:`NetoHenriques2021a` for further details about the method. Parameters ---------- dki_params : ndarray (x, y, z, 27) or (n, 27) All parameters estimated from the diffusion kurtosis model. Parameters are ordered as follows: 1. Three diffusion tensor's eigenvalues 2. Three lines of the eigenvector matrix each containing the first, second and third coordinates of the eigenvector 3. Fifteen elements of the kurtosis tensor S0 : float or ndarray, optional The non diffusion-weighted signal in every voxel, or across all voxels. References ---------- .. footbibliography:: """ return dki_prediction(dki_params, self.gtab, S0=S0) class DiffusionKurtosisFit(TensorFit): """Class for fitting the Diffusion Kurtosis Model""" @warning_for_keywords() def __init__(self, model, model_params, *, model_S0=None): """Initialize a DiffusionKurtosisFit class instance Since DKI is an extension of DTI, class instance is defined as subclass of the TensorFit from dti.py Parameters ---------- model : DiffusionKurtosisModel Class instance Class instance containing the Diffusion Kurtosis Model for the fit model_params : ndarray (x, y, z, 27) or (n, 27) All parameters estimated from the diffusion kurtosis model, not including S0. Parameters are ordered as follows: 1) Three diffusion tensor's eigenvalues 2) Three lines of the eigenvector matrix each containing the first, second and third coordinates of the eigenvector 3) Fifteen elements of the kurtosis tensor model_S0 : ndarray (x, y, z, 1) or (n, 1), optional S0 estimated from the diffusion kurtosis model. """ TensorFit.__init__(self, model, model_params, model_S0=model_S0) @property def kt(self): """ Return the 15 independent elements of the kurtosis tensor as an array """ return self.model_params[..., 12:27] def akc(self, sphere): r"""Calculate the apparent kurtosis coefficient (AKC) in each direction on the sphere for each voxel in the data Parameters ---------- sphere : Sphere class instance Sphere providing sample directions to compute the apparent kurtosis coefficient. 
Returns ------- akc : ndarray The estimates of the apparent kurtosis coefficient in every direction on the input sphere Notes ----- For each sphere direction with coordinates $(n_{1}, n_{2}, n_{3})$, the calculation of AKC is done using formula: .. math:: AKC(n)=\frac{MD^{2}}{ADC(n)^{2}}\sum_{i=1}^{3}\sum_{j=1}^{3} \sum_{k=1}^{3}\sum_{l=1}^{3}n_{i}n_{j}n_{k}n_{l}W_{ijkl} where $W_{ijkl}$ are the elements of the kurtosis tensor, MD the mean diffusivity and ADC the apparent diffusion coefficient computed as: .. math:: ADC(n)=\sum_{i=1}^{3}\sum_{j=1}^{3}n_{i}n_{j}D_{ij} where $D_{ij}$ are the elements of the diffusion tensor. """ return apparent_kurtosis_coef(self.model_params, sphere) @warning_for_keywords() def mk(self, *, min_kurtosis=-3.0 / 7, max_kurtosis=10, analytical=True): r""" Compute mean kurtosis (MK) from the kurtosis tensor. See :footcite:t:`Tabesh2011` and :footcite:p:`NetoHenriques2021a` for further details about the method. Parameters ---------- min_kurtosis : float, optional To keep kurtosis values within a plausible biophysical range, mean kurtosis values that are smaller than `min_kurtosis` are replaced with `min_kurtosis`. Default = -3./7 (theoretical kurtosis limit for regions that consist of water confined to spherical pores :footcite:p:`Jensen2005`). max_kurtosis : float, optional To keep kurtosis values within a plausible biophysical range, mean kurtosis values that are larger than `max_kurtosis` are replaced with `max_kurtosis`. analytical : bool, optional If True, MK is calculated using its analytical solution, otherwise an exact numerical estimator is used (see Notes). Returns ------- mk : array Calculated MK. Notes ----- The MK is defined as the average of directional kurtosis coefficients across all spatial directions, which can be formulated by the following surface integral :footcite:p:`Tabesh2011`, :footcite:p:`NetoHenriques2021a`: .. math:: MK \equiv \frac{1}{4\pi} \int d\Omega_\mathbf{n} K(\mathbf{n}) This integral can be numerically solved by averaging directional kurtosis values sampled for directions of a spherical t-design :footcite:p:`Hardin1996`. Alternatively, MK can be solved from the analytical solution derived by :footcite:t:`Tabesh2011`. This solution is given by: .. math:: MK=F_1(\lambda_1,\lambda_2,\lambda_3)\hat{W}_{1111}+ F_1(\lambda_2,\lambda_1,\lambda_3)\hat{W}_{2222}+ F_1(\lambda_3,\lambda_2,\lambda_1)\hat{W}_{3333}+ \\ F_2(\lambda_1,\lambda_2,\lambda_3)\hat{W}_{2233}+ F_2(\lambda_2,\lambda_1,\lambda_3)\hat{W}_{1133}+ F_2(\lambda_3,\lambda_2,\lambda_1)\hat{W}_{1122} where $\hat{W}_{ijkl}$ are the components of the $W$ tensor in the coordinates system defined by the eigenvectors of the diffusion tensor $\mathbf{D}$ and .. 
math::

            F_1(\lambda_1,\lambda_2,\lambda_3)=
            \frac{(\lambda_1+\lambda_2+\lambda_3)^2}
            {18(\lambda_1-\lambda_2)(\lambda_1-\lambda_3)}
            [\frac{\sqrt{\lambda_2\lambda_3}}{\lambda_1}
            R_F(\frac{\lambda_1}{\lambda_2},\frac{\lambda_1}{\lambda_3},1)+\\
            \frac{3\lambda_1^2-\lambda_1\lambda_2-\lambda_2\lambda_3-
            \lambda_1\lambda_3}
            {3\lambda_1 \sqrt{\lambda_2 \lambda_3}}
            R_D(\frac{\lambda_1}{\lambda_2},\frac{\lambda_1}{\lambda_3},1)-1 ]

            F_2(\lambda_1,\lambda_2,\lambda_3)=
            \frac{(\lambda_1+\lambda_2+\lambda_3)^2}
            {3(\lambda_2-\lambda_3)^2}
            [\frac{\lambda_2+\lambda_3}{\sqrt{\lambda_2\lambda_3}}
            R_F(\frac{\lambda_1}{\lambda_2},\frac{\lambda_1}{\lambda_3},1)+\\
            \frac{2\lambda_1-\lambda_2-\lambda_3}{3\sqrt{\lambda_2 \lambda_3}}
            R_D(\frac{\lambda_1}{\lambda_2},\frac{\lambda_1}{\lambda_3},1)-2]

        where $R_F$ and $R_D$ are Carlson's elliptic integrals.

        References
        ----------
        .. footbibliography::
        """
        return mean_kurtosis(
            self.model_params,
            min_kurtosis=min_kurtosis,
            max_kurtosis=max_kurtosis,
            analytical=analytical,
        )

    @warning_for_keywords()
    def ak(self, *, min_kurtosis=-3.0 / 7, max_kurtosis=10, analytical=True):
        r"""
        Compute axial kurtosis (AK) of a diffusion kurtosis tensor.

        See :footcite:p:`Tabesh2011` and :footcite:p:`NetoHenriques2021a` for
        further details about the method.

        Parameters
        ----------
        min_kurtosis : float, optional
            To keep kurtosis values within a plausible biophysical range,
            axial kurtosis values that are smaller than `min_kurtosis` are
            replaced with `min_kurtosis`. Default = -3./7 (theoretical
            kurtosis limit for regions that consist of water confined to
            spherical pores :footcite:p:`Jensen2005`).
        max_kurtosis : float, optional
            To keep kurtosis values within a plausible biophysical range,
            axial kurtosis values that are larger than `max_kurtosis` are
            replaced with `max_kurtosis`.
        analytical : bool, optional
            If True, AK is calculated from the rotated diffusion kurtosis
            tensor, otherwise it is computed from the apparent diffusion
            kurtosis values along the principal axis of the diffusion tensor
            (see notes).

        Returns
        -------
        ak : array
            Calculated AK.

        Notes
        -----
        AK is defined as the directional kurtosis parallel to the fiber's main
        direction e1 :footcite:p:`Tabesh2011`,
        :footcite:p:`NetoHenriques2021a`. AK can be computed using two
        approaches:

        1) AK is calculated from the rotated diffusion kurtosis tensor, i.e.:

        .. math::

            AK = \hat{W}_{1111}
            \frac{(\lambda_{1}+\lambda_{2}+\lambda_{3})^2}{(9 \lambda_{1}^2)}

        2) AK can be sampled from the principal axis of the diffusion tensor:

        .. math::

            AK = K(\mathbf{e}_1)

        Although both approaches lead to an exact calculation of AK, the first
        approach will be referred to as the analytical method, while the
        second approach will be referred to as the numerical method, based on
        their analogy to the estimation strategies for MK and RK.

        References
        ----------
        .. footbibliography::
        """
        return axial_kurtosis(
            self.model_params,
            min_kurtosis=min_kurtosis,
            max_kurtosis=max_kurtosis,
            analytical=analytical,
        )
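
    # Illustrative usage sketch (hypothetical inputs): on a fitted model, the
    # kurtosis scalar maps are plain method calls, e.g.
    #
    #     dkifit = DiffusionKurtosisModel(gtab).fit(data, mask=brain_mask)
    #     ak_map = dkifit.ak(min_kurtosis=0, max_kurtosis=3)
    #     rk_map = dkifit.rk(min_kurtosis=0, max_kurtosis=3)
    #
    # ``gtab``, ``data`` and ``brain_mask`` are assumed inputs, and the
    # clipping bounds [0, 3] are a common, but not mandatory, choice.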
    @warning_for_keywords()
    def rk(self, *, min_kurtosis=-3.0 / 7, max_kurtosis=10, analytical=True):
        r"""Compute radial kurtosis (RK) of a diffusion kurtosis tensor.

        See :footcite:p:`Tabesh2011` for further details about the method.

        Parameters
        ----------
        min_kurtosis : float, optional
            To keep kurtosis values within a plausible biophysical range,
            radial kurtosis values that are smaller than `min_kurtosis` are
            replaced with `min_kurtosis`. Default = -3./7 (theoretical
            kurtosis limit for regions that consist of water confined to
            spherical pores :footcite:p:`Jensen2005`).
        max_kurtosis : float, optional
            To keep kurtosis values within a plausible biophysical range,
            radial kurtosis values that are larger than `max_kurtosis` are
            replaced with `max_kurtosis`.
        analytical : bool, optional
            If True, RK is calculated using its analytical solution, otherwise
            an exact numerical estimator is used (see Notes).

        Returns
        -------
        rk : array
            Calculated RK.

        Notes
        -----
        RK is defined as the average of the directional kurtosis perpendicular
        to the fiber's main direction e1 :footcite:p:`Tabesh2011`,
        :footcite:p:`NetoHenriques2021a`:

        .. math::

            RK \equiv \frac{1}{2\pi} \int d\Omega _\mathbf{\theta}
            K(\mathbf{\theta}) \delta (\mathbf{\theta}\cdot \mathbf{e}_1)

        This equation can be numerically computed by averaging apparent
        directional kurtosis samples for directions perpendicular to e1
        :footcite:p:`NetoHenriques2021a`.

        Otherwise, RK can be calculated from its analytical solution
        :footcite:p:`Tabesh2011`:

        .. math::

            K_{\bot} = G_1(\lambda_1,\lambda_2,\lambda_3)\hat{W}_{2222} +
            G_1(\lambda_1,\lambda_3,\lambda_2)\hat{W}_{3333} +
            G_2(\lambda_1,\lambda_2,\lambda_3)\hat{W}_{2233}

        where:

        .. math::

            G_1(\lambda_1,\lambda_2,\lambda_3)=
            \frac{(\lambda_1+\lambda_2+\lambda_3)^2}
            {18\lambda_2(\lambda_2-\lambda_3)^2}
            \left (2\lambda_2 +
            \frac{\lambda_3^2-3\lambda_2\lambda_3}{\sqrt{\lambda_2\lambda_3}}
            \right)

        and

        .. math::

            G_2(\lambda_1,\lambda_2,\lambda_3)=
            \frac{(\lambda_1+\lambda_2+\lambda_3)^2}{3(\lambda_2-\lambda_3)^2}
            \left ( \frac{\lambda_2+\lambda_3}{\sqrt{\lambda_2\lambda_3}}-
            2\right )

        References
        ----------
        .. footbibliography::
        """
        return radial_kurtosis(
            self.model_params,
            min_kurtosis=min_kurtosis,
            max_kurtosis=max_kurtosis,
            analytical=analytical,
        )

    @warning_for_keywords()
    def kmax(self, *, sphere="repulsion100", gtol=1e-5, mask=None):
        """Compute the maximum value of a single voxel kurtosis tensor

        See :footcite:p:`NetoHenriques2021a` for further details about the
        method.

        Parameters
        ----------
        sphere : Sphere class instance, optional
            The sphere providing sample directions for the initial search of
            the maximum value of kurtosis.
        gtol : float, optional
            This input is to refine the kurtosis maximum under the precision
            of the directions sampled on the sphere class instance. The
            gradient of the convergence procedure must be less than gtol
            before successful termination. If gtol is None, the fiber
            direction is directly taken from the initial sampled directions of
            the given sphere object
        mask : ndarray, optional
            A boolean array used to mark the coordinates in the data that
            should be analyzed that has the shape model_params.shape[:-1]

        Returns
        -------
        max_value : float
            kurtosis tensor maximum value

        References
        ----------
        .. footbibliography::
        """
        return kurtosis_maximum(self.model_params, sphere=sphere, gtol=gtol, mask=mask)
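
    # Illustrative sketch (hypothetical inputs): mkt() is computed directly
    # from the trace of the kurtosis tensor, so it is much cheaper than the
    # surface integral behind mk() and is often used as a fast surrogate for
    # mean kurtosis, e.g.
    #
    #     dkifit = DiffusionKurtosisModel(gtab).fit(data)
    #     mkt_map = dkifit.mkt(min_kurtosis=0, max_kurtosis=3)
    #
    # ``gtab`` and ``data`` are assumed to be a valid gradient table and a 4D
    # diffusion data set.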
    @warning_for_keywords()
    def mkt(self, *, min_kurtosis=-3.0 / 7, max_kurtosis=10):
        r"""Compute mean of the kurtosis tensor (MKT).

        See :footcite:p:`Hansen2013` for further details about the method.

        Parameters
        ----------
        min_kurtosis : float, optional
            To keep kurtosis values within a plausible biophysical range, mean
            kurtosis values that are smaller than `min_kurtosis` are replaced
            with `min_kurtosis`. Default = -3./7 (theoretical kurtosis limit
            for regions that consist of water confined to spherical pores
            :footcite:p:`Jensen2005`).
        max_kurtosis : float, optional
            To keep kurtosis values within a plausible biophysical range, mean
            kurtosis values that are larger than `max_kurtosis` are replaced
            with `max_kurtosis`.

        Returns
        -------
        mkt : array
            Calculated mean kurtosis tensor.

        Notes
        -----
        The MKT is defined as :footcite:p:`Hansen2013`:

        .. math::

            MKT \equiv \frac{1}{4\pi} \int d
            \Omega_{\mathbf{n}} n_i n_j n_k n_l W_{ijkl}

        which can be directly computed from the trace of the kurtosis tensor:

        .. math::

            MKT = \frac{1}{5} Tr(\mathbf{W}) = \frac{1}{5}
            (W_{1111} + W_{2222} + W_{3333} + 2W_{1122} + 2W_{1133} +
            2W_{2233})

        References
        ----------
        .. footbibliography::
        """
        return mean_kurtosis_tensor(
            self.model_params, min_kurtosis=min_kurtosis, max_kurtosis=max_kurtosis
        )

    def rtk(self, *, min_kurtosis=-3.0 / 7, max_kurtosis=10):
        r"""Compute the rescaled radial tensor kurtosis (RTK).

        See :footcite:p:`Hansen2016b` for further details about the method.

        Parameters
        ----------
        min_kurtosis : float, optional
            To keep kurtosis values within a plausible biophysical range,
            radial kurtosis values that are smaller than `min_kurtosis` are
            replaced with `min_kurtosis`.
        max_kurtosis : float, optional
            To keep kurtosis values within a plausible biophysical range,
            radial kurtosis values that are larger than `max_kurtosis` are
            replaced with `max_kurtosis`.

        Returns
        -------
        rtk : array
            Calculated rescaled radial tensor kurtosis (RTK).

        Notes
        -----
        Rescaled radial tensor kurtosis (RTK) is defined as
        :footcite:p:`Hansen2016b`:

        .. math::

            RTK = \frac{3}{8}\frac{MD^2}{RD^2} (W_{2222} + W_{3333} +
            2 W_{2233})

        where W is the kurtosis tensor rotated to a coordinate system in which
        the 3 orthonormal eigenvectors of DT are the base coordinate, MD is
        the mean diffusivity, and RD is the radial diffusivity.

        References
        ----------
        .. footbibliography::
        """
        return radial_tensor_kurtosis(
            self.model_params, min_kurtosis=min_kurtosis, max_kurtosis=max_kurtosis
        )

    @property
    def kfa(self):
        r"""Return the kurtosis fractional anisotropy (KFA).

        See :footcite:p:`Glenn2015` for further details about the method.

        Notes
        -----
        The KFA is defined as :footcite:p:`Glenn2015`:

        .. math::

            KFA \equiv
            \frac{||\mathbf{W} - MKT \mathbf{I}^{(4)}||_F}{||\mathbf{W}||_F}

        where $W$ is the kurtosis tensor, MKT the kurtosis tensor mean,
        $I^{(4)}$ is the fully symmetric rank 4 isotropic tensor and
        $||...||_F$ is the tensor's Frobenius norm :footcite:p:`Glenn2015`.

        References
        ----------
        .. footbibliography::
        """
        return kurtosis_fractional_anisotropy(self.model_params)

    @warning_for_keywords()
    def predict(self, gtab, *, S0=1.0):
        r"""Given a DKI model fit, predict the signal on the vertices of a
        gradient table

        See :footcite:p:`Jensen2005` and :footcite:p:`NetoHenriques2021a` for
        further details about the method.

        Parameters
        ----------
        gtab : a GradientTable class instance
            The gradient table for this prediction
        S0 : float or ndarray, optional
            The non diffusion-weighted signal in every voxel, or across all
            voxels.

        Notes
        -----
        The predicted signal is given by:

        .. math::

            S(n,b)=S_{0}e^{-bD(n)+\frac{1}{6}b^{2}D(n)^{2}K(n)}

        $\mathbf{D(n)}$ and $\mathbf{K(n)}$ can be computed from the DT and KT
        using the following equations:

        .. math::

            D(n)=\sum_{i=1}^{3}\sum_{j=1}^{3}n_{i}n_{j}D_{ij}

        and

        .. math::

            K(n)=\frac{MD^{2}}{D(n)^{2}}\sum_{i=1}^{3}\sum_{j=1}^{3}
            \sum_{k=1}^{3}\sum_{l=1}^{3}n_{i}n_{j}n_{k}n_{l}W_{ijkl}

        where $D_{ij}$ and $W_{ijkl}$ are the elements of the second-order DT
        and the fourth-order KT tensors, respectively, and $MD$ is the mean
        diffusivity.

        References
        ----------
        ..
footbibliography:: """ return dki_prediction(self.model_params, gtab, S0=S0) @warning_for_keywords() def params_to_dki_params(result, *, min_diffusivity=0): r"""Convert the 21 unique elements of the diffusion and kurtosis tensors to the parameter format adopted in DIPY Parameters ---------- results : array (21) Unique elements of the diffusion and kurtosis tensors in the following order: 1) six unique lower triangular DT elements; and 2) Fifteen unique elements of the kurtosis tensor. min_diffusivity : float, optional Because negative eigenvalues are not physical and small eigenvalues, much smaller than the diffusion weighting, cause quite a lot of noise in metrics such as fa, diffusivity values smaller than `min_diffusivity` are replaced with `min_diffusivity`. Returns ------- dki_params : array (27) All parameters estimated from the diffusion kurtosis model for all N voxels. Parameters are ordered as follows: 1) Three diffusion tensor eigenvalues. 2) Three blocks of three elements, containing the first second and third coordinates of the diffusion tensor eigenvectors. 3) Fifteen elements of the kurtosis tensor. """ # Extracting the diffusion tensor parameters from solution DT_elements = result[:6] evals, evecs = decompose_tensor( from_lower_triangular(DT_elements), min_diffusivity=min_diffusivity ) # Extracting kurtosis tensor parameters from solution MD_square = evals.mean(0) ** 2 KT_elements = result[6:21] / MD_square if MD_square else 0.0 * result[6:21] S0 = np.exp(-result[[-1]]) # Write output dki_params = np.concatenate( (evals, evecs[0], evecs[1], evecs[2], KT_elements, S0), axis=0 ) return dki_params @warning_for_keywords() def ls_fit_dki( design_matrix, data, inverse_design_matrix, *, return_S0_hat=False, weights=True, min_diffusivity=0, return_lower_triangular=False, return_leverages=False, ): r"""Compute the diffusion and kurtosis tensors using an ordinary or weighted linear least squares approach. See :footcite:p:`Veraart2013` for further details about the method. Parameters ---------- design_matrix : array (g, 22) Design matrix holding the covariants used to solve for the regression coefficients. data : array (g) Data or response variables holding the data. inverse_design_matrix : array (22, g) Inverse of the design matrix. return_S0_hat : bool, optional Boolean to return (True) or not (False) the S0 values for the fit. weights : array ([X, Y, Z, ...], g), optional Weights to apply for fitting. These weights must correspond to the squared residuals such that $S = \sum_i w_i r_i^2$. If not provided, weights are estimated as the squared predicted signal from an initial OLS fit. min_diffusivity : float, optional Because negative eigenvalues are not physical and small eigenvalues, much smaller than the diffusion weighting, cause quite a lot of noise in metrics such as fa, diffusivity values smaller than `min_diffusivity` are replaced with `min_diffusivity`. return_lower_triangular : bool, optional Boolean to return (True) or not (False) the coefficients of the fit. return_leverages : bool, optional Boolean to return (True) or not (False) the fitting leverages. Returns ------- dki_params : array (27) All parameters estimated from the diffusion kurtosis model for all N voxels. Parameters are ordered as follows: 1) Three diffusion tensor eigenvalues. 2) Three blocks of three elements, containing the first second and third coordinates of the diffusion tensor eigenvectors. 3) Fifteen elements of the kurtosis tensor. 
leverages : array (g) Leverages of the fitting problem (if return_leverages is True) References ---------- .. footbibliography:: """ # Set up least squares problem A = design_matrix y = np.log(data) # weights correspond to *squared* residuals # define W = sqrt(weights), so that W are weights for residuals # e.g. weights for WLS are diag(fn**2), so W are diag(fn) if weights is not False: if type(weights) is np.ndarray: # user supplied weights W = np.diag(np.sqrt(weights)) else: # Define weights as diag(fn**2) (fn = fitted signal from OLS) result = np.dot(inverse_design_matrix, y) W = np.diag(np.exp(np.dot(A, result))) # W = sqrt(diag(fn**2)) W_A = np.dot(W, A) inv_W_A_W = np.linalg.pinv(W_A).dot(W) result = np.dot(inv_W_A_W, y) if return_leverages: leverages = np.einsum("ij,ji->i", design_matrix, inv_W_A_W) else: # DKI ordinary linear least square solution result = np.dot(inverse_design_matrix, y) if return_leverages: leverages = np.einsum("ij,ji->i", design_matrix, inverse_design_matrix) if return_leverages: leverages = {"leverages": leverages} else: leverages = None if return_lower_triangular: return result, leverages # Write output dki_params = params_to_dki_params(result, min_diffusivity=min_diffusivity) if return_S0_hat: return (dki_params[..., 0:-1], dki_params[..., -1]), leverages else: return dki_params[..., 0:-1], leverages @warning_for_keywords() def cls_fit_dki( design_matrix, data, inverse_design_matrix, sdp, *, return_S0_hat=False, weights=True, min_diffusivity=0, return_lower_triangular=False, return_leverages=False, cvxpy_solver=None, ): r"""Compute the diffusion and kurtosis tensors using a constrained ordinary or weighted linear least squares approach. See :footcite:p:`DelaHaije2020` for further details about the method. Parameters ---------- design_matrix : array (g, 22) Design matrix holding the covariants used to solve for the regression coefficients. data : array (g) Data or response variables holding the data. inverse_design_matrix : array (22, g) Inverse of the design matrix. sdp : PositiveDefiniteLeastSquares instance A CVXPY representation of a regularized least squares optimization problem. return_S0_hat : bool, optional Boolean to return (True) or not (False) the S0 values for the fit. weights : array ([X, Y, Z, ...], g), optional Weights to apply for fitting. These weights must correspond to the squared residuals such that $S = \sum_i w_i r_i^2$. If not provided, weights are estimated as the squared predicted signal from an initial OLS fit. weights : bool, optional Parameter indicating whether weights are used. min_diffusivity : float, optional Because negative eigenvalues are not physical and small eigenvalues, much smaller than the diffusion weighting, cause quite a lot of noise in metrics such as fa, diffusivity values smaller than `min_diffusivity` are replaced with `min_diffusivity`. return_lower_triangular : bool, optional Boolean to return (True) or not (False) the coefficients of the fit. return_leverages : bool, optional Boolean to return (True) or not (False) the fitting leverages. cvxpy_solver : str, optional cvxpy solver name. Optionally optimize the positivity constraint with a particular cvxpy solver. See https://www.cvxpy.org/ for details. Default: None (cvxpy chooses its own solver). Returns ------- dki_params : array (27) All parameters estimated from the diffusion kurtosis model for all N voxels. Parameters are ordered as follows: 1) Three diffusion tensor eigenvalues. 
2) Three blocks of three elements, containing the first second and third coordinates of the diffusion tensor eigenvectors. 3) Fifteen elements of the kurtosis tensor. leverages : array (g) Leverages of the fitting problem (if return_leverages is True) References ---------- .. footbibliography:: """ # Set up least squares problem A = design_matrix y = np.log(data) # weights correspond to *squared* residuals # define W = sqrt(weights), so that W are weights for residuals # e.g. weights for WLS are diag(fn**2), so W are diag(fn) if weights is not False: if type(weights) is np.ndarray: # use supplied weights W = np.diag(np.sqrt(weights)) else: # Define weights as diag(fn**2) (fn = fitted signal from OLS) result = np.dot(inverse_design_matrix, y) W = np.diag(np.exp(np.dot(A, result))) # W = sqrt(diag(fn**2)) A = np.dot(W, A) y = np.dot(W, y) if return_leverages: inv_W_A_W = np.linalg.pinv(A).dot(W) leverages = np.einsum("ij,ji->i", design_matrix, inv_W_A_W) # Solve sdp result = sdp.solve(A, y, check=True, solver=cvxpy_solver) if return_leverages: leverages = {"leverages": leverages} else: leverages = None if return_lower_triangular: return result, leverages # Write output dki_params = params_to_dki_params(result, min_diffusivity=min_diffusivity) if return_S0_hat: return (dki_params[..., 0:-1], dki_params[..., -1]), leverages else: return dki_params[..., 0:-1], leverages def Wrotate(kt, Basis): r""" Rotate a kurtosis tensor from the standard Cartesian coordinate system to another coordinate system basis See :footcite:p:`Hui2008` for further details about the method. Parameters ---------- kt : (15,) Vector with the 15 independent elements of the kurtosis tensor Basis : array (3, 3) Vectors of the basis column-wise oriented inds : array(m, 4), optional Array of vectors containing the four indexes of m specific elements of the rotated kurtosis tensor. If not specified all 15 elements of the rotated kurtosis tensor are computed. Returns ------- Wrot : array (m,) or (15,) Vector with the m independent elements of the rotated kurtosis tensor. If 'indices' is not specified all 15 elements of the rotated kurtosis tensor are computed. Notes ----- The kurtosis tensor elements are assumed to be ordered as follows: .. math:: KT = \begin{pmatrix} W_{xxxx} & W_{yyyy} & W_{zzzz} & W_{xxxy} & W_{xxxz} \\ W_{xyyy} & W_{yyyz} & W_{xzzz} & W_{yzzz} & W_{xxyy} \\ W_{xxzz} & W_{yyzz} & W_{xxyz} & W_{xyyz} & W_{xyzz} \end{pmatrix} References ---------- .. footbibliography:: """ inds = np.array( [ [0, 0, 0, 0], [1, 1, 1, 1], [2, 2, 2, 2], [0, 0, 0, 1], [0, 0, 0, 2], [0, 1, 1, 1], [1, 1, 1, 2], [0, 2, 2, 2], [1, 2, 2, 2], [0, 0, 1, 1], [0, 0, 2, 2], [1, 1, 2, 2], [0, 0, 1, 2], [0, 1, 1, 2], [0, 1, 2, 2], ] ) Wrot = np.zeros(kt.shape) for e in range(len(inds)): Wrot[..., e] = Wrotate_element( kt, inds[e][0], inds[e][1], inds[e][2], inds[e][3], Basis ) return Wrot # Defining keys to select a kurtosis tensor element with indexes (i, j, k, l) # on a kt vector that contains only the 15 independent elements of the kurtosis # tensor: Considering y defined by (i+1) * (j+1) * (k+1) * (l+1). Two elements # of the full 4D kurtosis tensor are equal if y obtain from the indexes of # these two element are equal. Therefore, the possible values of y (1, 16, 81, # 2, 3, 8, 24 27, 54, 4, 9, 36, 6, 12, 18) are used to point each element of # the kurtosis tensor on the format of a vector containing the 15 independent # elements. 
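# Worked example: the element W_xxyz has indexes (i, j, k, l) = (0, 0, 1, 2),
# so y = (0 + 1) * (0 + 1) * (1 + 1) * (2 + 1) = 6 and ind_ele[6] = 12; i.e.,
# W_xxyz is stored at position 12 of the 15-element vector, matching the
# ordering of the `inds` table used in Wrotate above.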
ind_ele = { 1: 0, 16: 1, 81: 2, 2: 3, 3: 4, 8: 5, 24: 6, 27: 7, 54: 8, 4: 9, 9: 10, 36: 11, 6: 12, 12: 13, 18: 14, } def Wrotate_element(kt, indi, indj, indk, indl, B): r"""Compute the specified index element of a kurtosis tensor rotated to the coordinate system basis B See :footcite:p:`Hui2008` for further details about the method. Parameters ---------- kt : ndarray (x, y, z, 15) or (n, 15) Array containing the 15 independent elements of the kurtosis tensor indi : int Rotated kurtosis tensor element index i (0 for x, 1 for y, 2 for z) indj : int Rotated kurtosis tensor element index j (0 for x, 1 for y, 2 for z) indk : int Rotated kurtosis tensor element index k (0 for x, 1 for y, 2 for z) indl: int Rotated kurtosis tensor element index l (0 for x, 1 for y, 2 for z) B: array (x, y, z, 3, 3) or (n, 15) Vectors of the basis column-wise oriented Returns ------- Wre : float rotated kurtosis tensor element of index ind_i, ind_j, ind_k, ind_l Notes ----- It is assumed that initial kurtosis tensor elements are defined on the Cartesian coordinate system. References ---------- .. footbibliography:: """ Wre = 0 xyz = [0, 1, 2] for il in xyz: for jl in xyz: for kl in xyz: for ll in xyz: key = (il + 1) * (jl + 1) * (kl + 1) * (ll + 1) multiplyB = ( B[..., il, indi] * B[..., jl, indj] * B[..., kl, indk] * B[..., ll, indl] ) Wre = Wre + multiplyB * kt[..., ind_ele[key]] return Wre def Wcons(k_elements): r""" Construct the full 4D kurtosis tensors from its 15 independent elements Parameters ---------- k_elements : (15,) elements of the kurtosis tensor in the following order: .. math:: KT = \begin{pmatrix} W_{xxxx} & W_{yyyy} & W_{zzzz} & W_{xxxy} & W_{xxxz} \\ W_{xyyy} & W_{yyyz} & W_{xzzz} & W_{yzzz} & W_{xxyy} \\ W_{xxzz} & W_{yyzz} & W_{xxyz} & W_{xyyz} & W_{xyzz} \end{pmatrix} Returns ------- W : array(3, 3, 3, 3) Full 4D kurtosis tensor """ W = np.zeros((3, 3, 3, 3)) xyz = [0, 1, 2] for ind_i in xyz: for ind_j in xyz: for ind_k in xyz: for ind_l in xyz: key = (ind_i + 1) * (ind_j + 1) * (ind_k + 1) * (ind_l + 1) W[ind_i][ind_j][ind_k][ind_l] = k_elements[ind_ele[key]] return W def split_dki_param(dki_params): r"""Extract the diffusion tensor eigenvalues, the diffusion tensor eigenvector matrix, and the 15 independent elements of the kurtosis tensor from the model parameters estimated from the DKI model Parameters ---------- dki_params : ndarray (x, y, z, 27) or (n, 27) All parameters estimated from the diffusion kurtosis model. Parameters are ordered as follows: 1) Three diffusion tensor's eigenvalues 2) Three lines of the eigenvector matrix each containing the first, second and third coordinates of the eigenvector 3) Fifteen elements of the kurtosis tensor Returns ------- eigvals : array (x, y, z, 3) or (n, 3) Eigenvalues from eigen decomposition of the tensor. eigvecs : array (x, y, z, 3, 3) or (n, 3, 3) Associated eigenvectors from eigen decomposition of the tensor. Eigenvectors are columnar (e.g. 
eigvecs[:,j] is associated with eigvals[j]) kt : array (x, y, z, 15) or (n, 15) Fifteen elements of the kurtosis tensor """ evals = dki_params[..., :3] evecs = dki_params[..., 3:12].reshape(dki_params.shape[:-1] + (3, 3)) kt = dki_params[..., 12:27] return evals, evecs, kt common_fit_methods = { "WLS": ls_fit_dki, "WLLS": ls_fit_dki, "LS": ls_fit_dki, "LLS": ls_fit_dki, "OLS": ls_fit_dki, "OLLS": ls_fit_dki, "NLS": nlls_fit_tensor, "NLLS": nlls_fit_tensor, "RESTORE": restore_fit_tensor, "IRLS": iterative_fit_tensor, "RWLS": robust_fit_tensor_wls, "RNLLS": robust_fit_tensor_nlls, "CLS": cls_fit_dki, "CWLS": cls_fit_dki, } dipy-1.11.0/dipy/reconst/dki_micro.py000066400000000000000000000547151476546756600175410ustar00rootroot00000000000000#!/usr/bin/python """Classes and functions for fitting the DKI-based microstructural model""" import numpy as np from dipy.core.ndindex import ndindex import dipy.core.sphere as dps from dipy.data import get_sphere from dipy.reconst.dki import ( DiffusionKurtosisFit, DiffusionKurtosisModel, _positive_evals, directional_diffusion, directional_kurtosis, kurtosis_maximum, split_dki_param, ) from dipy.reconst.dti import ( MIN_POSITIVE_SIGNAL, axial_diffusivity, decompose_tensor, design_matrix as dti_design_matrix, from_lower_triangular, lower_triangular, mean_diffusivity, radial_diffusivity, trace, ) from dipy.reconst.vec_val_sum import vec_val_vect from dipy.testing.decorators import warning_for_keywords @warning_for_keywords() def axonal_water_fraction(dki_params, *, sphere="repulsion100", gtol=1e-2, mask=None): """Computes the axonal water fraction from DKI. See :footcite:p:`Fieremans2011` for further details about the method. Parameters ---------- dki_params : ndarray (x, y, z, 27) or (n, 27) All parameters estimated from the diffusion kurtosis model. Parameters are ordered as follows: 1) Three diffusion tensor's eigenvalues 2) Three lines of the eigenvector matrix each containing the first, second and third coordinates of the eigenvector 3) Fifteen elements of the kurtosis tensor sphere : Sphere class instance, optional The sphere providing sample directions for the initial search of the maximal value of kurtosis. gtol : float, optional This input is to refine kurtosis maxima under the precision of the directions sampled on the sphere class instance. The gradient of the convergence procedure must be less than gtol before successful termination. If gtol is None, fiber direction is directly taken from the initial sampled directions of the given sphere object mask : ndarray A boolean array used to mark the coordinates in the data that should be analyzed that has the shape dki_params.shape[:-1] Returns ------- awf : ndarray (x, y, z) or (n) Axonal Water Fraction References ---------- .. footbibliography:: """ kt_max = kurtosis_maximum(dki_params, sphere=sphere, gtol=gtol, mask=mask) awf = kt_max / (kt_max + 3) return awf @warning_for_keywords() def diffusion_components(dki_params, *, sphere="repulsion100", awf=None, mask=None): """Extracts the restricted and hindered diffusion tensors of well aligned fibers from diffusion kurtosis imaging parameters. See :footcite:p:`Fieremans2011` for further details about the method. Parameters ---------- dki_params : ndarray (x, y, z, 27) or (n, 27) All parameters estimated from the diffusion kurtosis model. 
Parameters are ordered as follows: 1) Three diffusion tensor's eigenvalues 2) Three lines of the eigenvector matrix each containing the first, second and third coordinates of the eigenvector 3) Fifteen elements of the kurtosis tensor sphere : Sphere class instance, optional The sphere providing sample directions to sample the restricted and hindered cellular diffusion tensors. For more details see :footcite:p:`Fieremans2011`. awf : ndarray, optional Array containing values of the axonal water fraction that has the shape dki_params.shape[:-1]. If not given this will be automatically computed using :func:`axonal_water_fraction` with function's default precision. mask : ndarray, optional A boolean array used to mark the coordinates in the data that should be analyzed that has the shape dki_params.shape[:-1] Returns ------- edt : ndarray (x, y, z, 6) or (n, 6) Parameters of the hindered diffusion tensor. idt : ndarray (x, y, z, 6) or (n, 6) Parameters of the restricted diffusion tensor. Notes ----- In the original article of DKI microstructural model :footcite:p:`Fieremans2011`, the hindered and restricted tensors were defined as the intra-cellular and extra-cellular diffusion compartments respectively. References ---------- .. footbibliography:: """ shape = dki_params.shape[:-1] # load gradient directions if not isinstance(sphere, dps.Sphere): sphere = get_sphere(name=sphere) # select voxels where to apply the single fiber model if mask is None: mask = np.ones(shape, dtype="bool") else: if mask.shape != shape: raise ValueError("Mask is not the same shape as dki_params.") else: mask = np.asarray(mask, dtype=bool) # check or compute awf values if awf is None: awf = axonal_water_fraction(dki_params, sphere=sphere, mask=mask) else: if awf.shape != shape: raise ValueError("awf array is not the same shape as dki_params.") # Initialize hindered and restricted diffusion tensors edt_all = np.zeros(shape + (6,)) idt_all = np.zeros(shape + (6,)) # Generate matrix that converts apparent diffusion coefficients to tensors B = np.zeros((sphere.x.size, 6)) B[:, 0] = sphere.x * sphere.x # Bxx B[:, 1] = sphere.x * sphere.y * 2.0 # Bxy B[:, 2] = sphere.y * sphere.y # Byy B[:, 3] = sphere.x * sphere.z * 2.0 # Bxz B[:, 4] = sphere.y * sphere.z * 2.0 # Byz B[:, 5] = sphere.z * sphere.z # Bzz pinvB = np.linalg.pinv(B) # Compute hindered and restricted diffusion tensors for all voxels evals, evecs, kt = split_dki_param(dki_params) dt = lower_triangular(vec_val_vect(evecs, evals)) md = mean_diffusivity(evals) index = ndindex(mask.shape) for idx in index: if not mask[idx]: continue # sample apparent diffusion and kurtosis values di = directional_diffusion(dt[idx], sphere.vertices) ki = directional_kurtosis( dt[idx], md[idx], kt[idx], sphere.vertices, adc=di, min_kurtosis=0 ) edi = di * (1 + np.sqrt(ki * awf[idx] / (3.0 - 3.0 * awf[idx]))) edt = np.dot(pinvB, edi) edt_all[idx] = edt # We only move on if there is an axonal water fraction. # Otherwise, remaining params are already zero, so move on if awf[idx] == 0: continue # Convert apparent diffusion and kurtosis values to apparent diffusion # values of the hindered and restricted diffusion idi = di * (1 - np.sqrt(ki * (1.0 - awf[idx]) / (3.0 * awf[idx]))) # generate hindered and restricted diffusion tensors idt = np.dot(pinvB, idi) idt_all[idx] = idt return edt_all, idt_all @warning_for_keywords() def dkimicro_prediction(params, gtab, *, S0=1): r"""Signal prediction given the DKI microstructure model parameters. 
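    As a rough standalone sketch of the two-compartment signal equation used
    here (not this function's API; all numbers below are illustrative
    assumptions), the predicted signal for one direction with b-value ``b``,
    axonal water fraction ``f`` and hindered/restricted apparent
    diffusivities ``adc_h``/``adc_r`` is:

    >>> import numpy as np
    >>> b, f, adc_h, adc_r = 1000.0, 0.3, 0.9e-3, 0.2e-3
    >>> S = (1 - f) * np.exp(-b * adc_h) + f * np.exp(-b * adc_r)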
Parameters ---------- params : ndarray (x, y, z, 40) or (n, 40) All parameters estimated from the diffusion kurtosis microstructure model. Parameters are ordered as follows: 1) Three diffusion tensor's eigenvalues 2) Three lines of the eigenvector matrix each containing the first, second and third coordinates of the eigenvector 3) Fifteen elements of the kurtosis tensor 4) Six elements of the hindered diffusion tensor 5) Six elements of the restricted diffusion tensor 6) Axonal water fraction gtab : a GradientTable class instance The gradient table for this prediction S0 : float or ndarray, optional The non diffusion-weighted signal in every voxel, or across all voxels Returns ------- S : (..., N) ndarray Simulated signal based on the DKI microstructure model Notes ----- 1) The predicted signal is given by: .. math:: S(\theta, b) = S_0 * [f * e^{-b ADC_{r}} + (1-f) * e^{-b ADC_{h}] where $ADC_{r}$ and $ADC_{h}$ are the apparent diffusion coefficients of the diffusion hindered and restricted compartment for a given direction $\theta$, $b$ is the b value provided in the GradientTable input for that direction, $f$ is the volume fraction of the restricted diffusion compartment (also known as the axonal water fraction). 2) In the original article of DKI microstructural model :footcite:p:`Fieremans2011`, the hindered and restricted tensors were defined as the intra-cellular and extra-cellular diffusion compartments respectively. References ---------- .. footbibliography:: """ # Initialize pred_sig pred_sig = np.zeros(params.shape[:-1] + (gtab.bvals.shape[0],)) # Define dti design matrix and region to process D = dti_design_matrix(gtab) evals = params[..., :3] mask = _positive_evals(evals[..., 0], evals[..., 1], evals[..., 2]) # Prepare parameters f = params[..., 27] adce = params[..., 28:34] adci = params[..., 34:40] if isinstance(S0, np.ndarray): S0_vol = S0 * np.ones(params.shape[:-1]) else: S0_vol = S0 # Process pred_sig for all data voxels index = ndindex(evals.shape[:-1]) for v in index: if mask[v]: pred_sig[v] = (1.0 - f[v]) * np.exp(np.dot(D[:, :6], adce[v])) + f[ v ] * np.exp(np.dot(D[:, :6], adci[v])) return pred_sig * S0_vol def tortuosity(hindered_ad, hindered_rd): """Computes the tortuosity of the hindered diffusion compartment given its axial and radial diffusivities Parameters ---------- hindered_ad: ndarray Array containing the values of the hindered axial diffusivity. hindered_rd: ndarray Array containing the values of the hindered radial diffusivity. Returns ------- Tortuosity of the hindered diffusion compartment """ if not isinstance(hindered_rd, np.ndarray): hindered_rd = np.array(hindered_rd) if not isinstance(hindered_ad, np.ndarray): hindered_ad = np.array(hindered_ad) tortuosity = np.zeros(hindered_rd.shape) # mask to avoid divisions by zero mask = hindered_rd > 0 # Check single voxel cases. For numpy versions more recent than 1.7, # this if else condition is not required since single voxel can be # processed using the same line of code of multi-voxel if hindered_rd.size == 1: if mask: tortuosity = hindered_ad / hindered_rd else: tortuosity[mask] = hindered_ad[mask] / hindered_rd[mask] return tortuosity def _compartments_eigenvalues(cdt): """Helper function that computes the eigenvalues of a tissue sub compartment given its individual diffusion tensor Parameters ---------- cdt : ndarray (..., 6) Diffusion tensors elements of the tissue compartment stored in lower triangular order. 
    Returns
    -------
    evals : ndarray (..., 3)
        Eigenvalues of the tissue compartment
    """
    evals, evecs = decompose_tensor(from_lower_triangular(cdt))
    return evals


class KurtosisMicrostructureModel(DiffusionKurtosisModel):
    """Class for the Diffusion Kurtosis Microstructural Model"""

    def __init__(self, gtab, *args, fit_method="WLS", **kwargs):
        """Initialize a KurtosisMicrostructureModel class instance.

        See :footcite:p:`Fieremans2011` for further details about the model.

        Parameters
        ----------
        gtab : GradientTable class instance
            Gradient table.
        fit_method : str or callable
            str can be one of the following:

            - 'OLS' or 'OLLS' to fit the diffusion tensor and kurtosis tensor
              using the ordinary linear least squares solution
              :func:`dki.ols_fit_dki`
            - 'WLS' or 'WLLS' to fit the diffusion tensor and kurtosis tensor
              using the weighted linear least squares solution
              :func:`dki.wls_fit_dki`

            callable has to have the signature:
            ``fit_method(design_matrix, data, *args, **kwargs)``
        args, kwargs : arguments and key-word arguments passed to the
            fit_method. See :func:`dki.ols_fit_dki`, :func:`dki.wls_fit_dki`
            for details

        References
        ----------
        .. footbibliography::
        """
        DiffusionKurtosisModel.__init__(
            self, gtab, fit_method=fit_method, *args, **kwargs
        )

    @warning_for_keywords()
    def fit(self, data, *, mask=None, sphere="repulsion100", gtol=1e-2, awf_only=False):
        """Fit method of the Diffusion Kurtosis Microstructural Model

        Parameters
        ----------
        data : array
            A 4D matrix containing the diffusion-weighted data.
        mask : array
            A boolean array used to mark the coordinates in the data that
            should be analyzed that has the shape data.shape[:-1]
        sphere : Sphere class instance, optional
            The sphere providing sample directions for the initial search of
            the maximal value of kurtosis.
        gtol : float, optional
            This input is to refine kurtosis maxima under the precision of
            the directions sampled on the sphere class instance. The gradient
            of the convergence procedure must be less than gtol before
            successful termination. If gtol is None, fiber direction is
            directly taken from the initial sampled directions of the given
            sphere object
        awf_only : bool, optional
            If set to True, only the axonal volume fraction is computed from
            the kurtosis tensor.
Default = False """ if mask is not None: # Check for valid shape of the mask if mask.shape != data.shape[:-1]: raise ValueError("Mask is not the same shape as data.") mask = np.asarray(mask, dtype=bool) data_in_mask = np.reshape(data[mask], (-1, data.shape[-1])) if self.min_signal is None: self.min_signal = MIN_POSITIVE_SIGNAL data_in_mask = np.maximum(data_in_mask, self.min_signal) # DKI fit dki_params = super().fit(data_in_mask).model_params # Computing awf awf = axonal_water_fraction(dki_params, sphere=sphere, gtol=gtol) if awf_only: params_all_mask = np.concatenate((dki_params, np.array([awf]).T), axis=-1) else: # Computing the hindered and restricted diffusion tensors hdt, rdt = diffusion_components(dki_params, sphere=sphere, awf=awf) params_all_mask = np.concatenate( (dki_params, np.array([awf]).T, hdt, rdt), axis=-1 ) if mask is None: out_shape = data.shape[:-1] + (-1,) params = params_all_mask.reshape(out_shape) # if extra is not None: # self.extra = extra.reshape(data.shape) else: params = np.zeros(data.shape[:-1] + (params_all_mask.shape[-1],)) params[mask, :] = params_all_mask # if extra is not None: # self.extra = np.zeros(data.shape) # self.extra[mask, :] = extra return KurtosisMicrostructuralFit(self, params) @warning_for_keywords() def predict(self, params, *, S0=1.0): """Predict a signal for the DKI microstructural model class instance given parameters. Parameters ---------- params : ndarray (x, y, z, 40) or (n, 40) All parameters estimated from the diffusion kurtosis microstructural model. Parameters are ordered as follows: 1) Three diffusion tensor's eigenvalues 2) Three lines of the eigenvector matrix each containing the first, second and third coordinates of the eigenvector 3) Fifteen elements of the kurtosis tensor 4) Six elements of the hindered diffusion tensor 5) Six elements of the restricted diffusion tensor 6) Axonal water fraction S0 : float or ndarray, optional The non diffusion-weighted signal in every voxel, or across all voxels. Notes ----- In the original article of DKI microstructural model :footcite:p:`Fieremans2011`, the hindered and restricted tensors were defined as the intra-cellular and extra-cellular diffusion compartments respectively. References ---------- .. footbibliography:: """ return dkimicro_prediction(params, self.gtab, S0=S0) class KurtosisMicrostructuralFit(DiffusionKurtosisFit): """Class for fitting the Diffusion Kurtosis Microstructural Model""" def __init__(self, model, model_params): """Initialize a KurtosisMicrostructural Fit class instance. Parameters ---------- model : DiffusionKurtosisModel Class instance Class instance containing the Diffusion Kurtosis Model for the fit model_params : ndarray (x, y, z, 40) or (n, 40) All parameters estimated from the diffusion kurtosis microstructural model. Parameters are ordered as follows: 1) Three diffusion tensor's eigenvalues 2) Three lines of the eigenvector matrix each containing the first, second and third coordinates of the eigenvector 3) Fifteen elements of the kurtosis tensor 4) Six elements of the hindered diffusion tensor 5) Six elements of the restricted diffusion tensor 6) Axonal water fraction Notes ----- In the original article of DKI microstructural model :footcite:p:`Fieremans2011`, the hindered and restricted tensors were defined as the intra-cellular and extra-cellular diffusion compartments respectively. References ---------- .. 
footbibliography:: """ DiffusionKurtosisFit.__init__(self, model, model_params) @property def awf(self): """Returns the volume fraction of the restricted diffusion compartment also known as axonal water fraction. Notes ----- The volume fraction of the restricted diffusion compartment can be seen as the volume fraction of the intra-cellular compartment :footcite:p:`Fieremans2011`. References ---------- .. footbibliography:: """ return self.model_params[..., 27] @property def restricted_evals(self): """Returns the eigenvalues of the restricted diffusion compartment. Notes ----- The restricted diffusion tensor can be seen as the tissue's intra-cellular diffusion compartment :footcite:p:`Fieremans2011`. References ---------- .. footbibliography:: """ self._is_awfonly() return _compartments_eigenvalues(self.model_params[..., 34:40]) @property def hindered_evals(self): """Returns the eigenvalues of the hindered diffusion compartment. Notes ----- The hindered diffusion tensor can be seen as the tissue's extra-cellular diffusion compartment :footcite:p:`Fieremans2011`. References ---------- .. footbibliography:: """ self._is_awfonly() return _compartments_eigenvalues(self.model_params[..., 28:34]) @property def axonal_diffusivity(self): """Returns the axonal diffusivity defined as the restricted diffusion tensor trace. See :footcite:p:`Fieremans2011` for further details about the method. References ---------- .. footbibliography:: """ return trace(self.restricted_evals) @property def hindered_ad(self): """Returns the axial diffusivity of the hindered compartment. Notes ----- The hindered diffusion tensor can be seen as the tissue's extra-cellular diffusion compartment :footcite:p:`Fieremans2011`. References ---------- .. footbibliography:: """ return axial_diffusivity(self.hindered_evals) @property def hindered_rd(self): """Returns the radial diffusivity of the hindered compartment. Notes ----- The hindered diffusion tensor can be seen as the tissue's extra-cellular diffusion compartment :footcite:p:`Fieremans2011`. References ---------- .. footbibliography:: """ return radial_diffusivity(self.hindered_evals) @property def tortuosity(self): """Returns the tortuosity of the hindered diffusion which is defined by ADe / RDe, where ADe and RDe are the axial and radial diffusivities of the hindered compartment. See :footcite:p:`Fieremans2011` for further details about the method. Notes ----- The hindered diffusion tensor can be seen as the tissue's extra-cellular diffusion compartment :footcite:p:`Fieremans2011`. References ---------- .. footbibliography:: """ return tortuosity(self.hindered_ad, self.hindered_rd) def _is_awfonly(self): """To raise error if only the axonal water fraction was computed""" if self.model_params.shape[-1] < 39: raise ValueError( "Only the awf was processed! Rerun model fit " "with input parameter awf_only set to False" ) @warning_for_keywords() def predict(self, gtab, *, S0=1.0): r"""Given a DKI microstructural model fit, predict the signal on the vertices of a gradient table gtab : a GradientTable class instance The gradient table for this prediction S0 : float or ndarray, optional The non diffusion-weighted signal in every voxel, or across all voxels. Notes ----- The predicted signal is given by: .. 
math:: S(\theta, b) = S_0 * [f * e^{-b ADC_{r}} + (1-f) * e^{-b ADC_{h}] where $ADC_{r}$ and $ADC_{h}$ are the apparent diffusion coefficients of the diffusion hindered and restricted compartment for a given direction $\theta$, $b$ is the b value provided in the GradientTable input for that direction, $f$ is the volume fraction of the restricted diffusion compartment (also known as the axonal water fraction). """ self._is_awfonly() return dkimicro_prediction(self.model_params, gtab, S0=S0) dipy-1.11.0/dipy/reconst/dsi.py000066400000000000000000000511041476546756600163450ustar00rootroot00000000000000import numpy as np from scipy.fftpack import fftn, fftshift, ifftshift from scipy.ndimage import map_coordinates from dipy.reconst.cache import Cache from dipy.reconst.multi_voxel import multi_voxel_fit from dipy.reconst.odf import OdfFit, OdfModel from dipy.testing.decorators import warning_for_keywords class DiffusionSpectrumModel(OdfModel, Cache): @warning_for_keywords() def __init__( self, gtab, *, qgrid_size=17, r_start=2.1, r_end=6.0, r_step=0.2, filter_width=32, normalize_peaks=False, ): r"""Diffusion Spectrum Imaging The theoretical idea underlying this method is that the diffusion propagator $P(\mathbf{r})$ (probability density function of the average spin displacements) can be estimated by applying 3D FFT to the signal values $S(\mathbf{q})$ .. math:: :nowrap: P(\mathbf{r}) & = & S_{0}^{-1}\int S(\mathbf{q})\exp(-i2\pi\mathbf{q}\cdot\mathbf{r})d\mathbf{r} where $\mathbf{r}$ is the displacement vector and $\mathbf{q}$ is the wave vector which corresponds to different gradient directions. Method used to calculate the ODFs. Here we implement the method proposed by :footcite:t:`Wedeen2005`. The main assumption for this model is fast gradient switching and that the acquisition gradients will sit on a keyhole Cartesian grid in q_space :footcite:p:`Garyfallidis2012b`. See also :footcite:p:`CanalesRodriguez2010`. Parameters ---------- gtab : GradientTable, Gradient directions and bvalues container class qgrid_size : int, has to be an odd number. Sets the size of the q_space grid. For example if qgrid_size is 17 then the shape of the grid will be ``(17, 17, 17)``. r_start : float, ODF is sampled radially in the PDF. This parameters shows where the sampling should start. r_end : float, Radial endpoint of ODF sampling r_step : float, Step size of the ODf sampling from r_start to r_end filter_width : float, Strength of the hanning filter References ---------- .. footbibliography:: Examples -------- In this example where we provide the data, a gradient table and a reconstruction sphere, we calculate generalized FA for the first voxel in the data with the reconstruction performed using DSI. >>> import warnings >>> from dipy.data import dsi_voxels, default_sphere >>> data, gtab = dsi_voxels() >>> from dipy.reconst.dsi import DiffusionSpectrumModel >>> ds = DiffusionSpectrumModel(gtab) >>> dsfit = ds.fit(data) >>> from dipy.reconst.odf import gfa >>> np.round(gfa(dsfit.odf(default_sphere))[0, 0, 0], 2) 0.11 Notes ----- A. Have in mind that DSI expects gradients on both hemispheres. If your gradients span only one hemisphere you need to duplicate the data and project them to the other hemisphere before calling this class. The function dipy.reconst.dsi.half_to_full_qspace can be used for this purpose. B. If you increase the size of the grid (parameter qgrid_size) you will most likely also need to update the r_* parameters. 
This is because the added zero padding from the increase of gqrid_size also introduces a scaling of the PDF. C. We assume that data only one b0 volume is provided. See Also -------- dipy.reconst.gqi.GeneralizedQSampling """ # noqa: E501 self.bvals = gtab.bvals self.bvecs = gtab.bvecs self.normalize_peaks = normalize_peaks # 3d volume for Sq if qgrid_size % 2 == 0: raise ValueError("qgrid_size needs to be an odd integer") self.qgrid_size = qgrid_size # necessary shifting for centering self.origin = self.qgrid_size // 2 # hanning filter width self.filter = hanning_filter(gtab, filter_width, self.origin) # odf sampling radius self.qradius = np.arange(r_start, r_end, r_step) self.qradiusn = len(self.qradius) # create qspace grid self.qgrid = create_qspace(gtab, self.origin) b0 = np.min(self.bvals) self.dn = (self.bvals > b0).sum() self.gtab = gtab @multi_voxel_fit def fit(self, data, **kwargs): return DiffusionSpectrumFit(self, data) class DiffusionSpectrumFit(OdfFit): def __init__(self, model, data): """Calculates PDF and ODF and other properties for a single voxel Parameters ---------- model : object, DiffusionSpectrumModel data : 1d ndarray, signal values """ self.model = model self.data = data self.qgrid_sz = self.model.qgrid_size self.dn = self.model.dn self._gfa = None self.npeaks = 5 self._peak_values = None self._peak_indices = None @warning_for_keywords() def pdf(self, *, normalized=True): """Applies the 3D FFT in the q-space grid to generate the diffusion propagator """ values = self.data * self.model.filter # create the signal volume Sq = np.zeros((self.qgrid_sz, self.qgrid_sz, self.qgrid_sz)) # fill q-space for i in range(len(values)): qx, qy, qz = self.model.qgrid[i] Sq[qx, qy, qz] += values[i] # apply fourier transform Pr = fftshift(np.real(fftn(ifftshift(Sq), 3 * (self.qgrid_sz,)))) # clipping negative values to 0 (ringing artefact) Pr = np.clip(Pr, 0, Pr.max()) # normalize the propagator to obtain a pdf if normalized: Pr /= Pr.sum() return Pr @warning_for_keywords() def rtop_signal(self, *, filtering=True): """Calculates the return to origin probability (rtop) from the signal rtop equals to the sum of all signal values Parameters ---------- filtering : boolean, optional Whether to perform Hanning filtering. Returns ------- rtop : float the return to origin probability """ if filtering: values = self.data * self.model.filter else: values = self.data rtop = values.sum() return rtop @warning_for_keywords() def rtop_pdf(self, *, normalized=True): r"""Calculates the return to origin probability from the propagator, which is the propagator evaluated at zero. rtop = P(0) See :footcite:p:`Descoteaux2011`, :footcite:p:`Tuch2002` and :footcite:p:`Wu2008` for further details about the method. Parameters ---------- normalized : boolean, optional Whether to normalize the propagator by its sum in order to obtain a pdf. Returns ------- rtop : float the return to origin probability References ---------- .. footbibliography:: """ Pr = self.pdf(normalized=normalized) center = self.qgrid_sz // 2 rtop = Pr[center, center, center] return rtop @warning_for_keywords() def msd_discrete(self, *, normalized=True): r"""Calculates the mean squared displacement on the discrete propagator .. math:: :nowrap: MSD:{DSI}=\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} P(\hat{\mathbf{r}}) \cdot \hat{\mathbf{r}}^{2} \ dr_x \ dr_y \ dr_z where $\hat{\mathbf{r}}$ is a point in the 3D Propagator space (see :footcite:p:`Wu2007`). 
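        As a rough standalone sketch of this discrete sum (independent of
        this method; the toy uniform propagator below is an assumption made
        only for illustration):

        >>> import numpy as np
        >>> g = 5
        >>> Pr = np.ones((g, g, g)) / g**3  # toy normalized propagator
        >>> a = np.arange(g) - g // 2
        >>> x, y, z = np.meshgrid(a, a, a, indexing="ij")
        >>> msd = np.sum(Pr * (x**2 + y**2 + z**2)) / float(g**3)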
Parameters ---------- normalized : boolean, optional Whether to normalize the propagator by its sum in order to obtain a pdf. Returns ------- msd : float the mean square displacement References ---------- .. footbibliography:: """ # noqa: E501 Pr = self.pdf(normalized=normalized) # create the r squared 3D matrix gridsize = self.qgrid_sz center = gridsize // 2 a = np.arange(gridsize) - center x = np.tile(a, (gridsize, gridsize, 1)) y = np.tile(a.reshape(gridsize, 1), (gridsize, 1, gridsize)) z = np.tile(a.reshape(gridsize, 1, 1), (1, gridsize, gridsize)) r2 = x**2 + y**2 + z**2 msd = np.sum(Pr * r2) / float((gridsize**3)) return msd def odf(self, sphere): r"""Calculates the real discrete odf for a given discrete sphere .. math:: \psi_{DSI}(\hat{\mathbf{u}})=\int_{0}^{\infty}P(r\hat{\mathbf{u}})r^{2}dr where $\hat{\mathbf{u}}$ is the unit vector which corresponds to a sphere point. """ interp_coords = self.model.cache_get("interp_coords", key=sphere) if interp_coords is None: interp_coords = pdf_interp_coords( sphere, self.model.qradius, self.model.origin ) self.model.cache_set("interp_coords", sphere, interp_coords) Pr = self.pdf() # calculate the orientation distribution function return pdf_odf(Pr, self.model.qradius, interp_coords) def create_qspace(gtab, origin): """create the 3D grid which holds the signal values (q-space) Parameters ---------- gtab : GradientTable Gradient table. origin : (3,) ndarray center of qspace Returns ------- qgrid : ndarray qspace coordinates """ # create the q-table from bvecs and bvals qtable = create_qtable(gtab, origin) # center and index in qspace volume qgrid = qtable + origin return qgrid.astype("i8") def create_qtable(gtab, origin): """create a normalized version of gradients Parameters ---------- gtab : GradientTable Gradient table. origin : (3,) ndarray center of qspace Returns ------- qtable : ndarray """ bv = gtab.bvals bsorted = np.sort(bv[np.bitwise_not(gtab.b0s_mask)]) for i in range(len(bsorted)): bmin = bsorted[i] try: if np.sqrt(bv.max() / bmin) > origin + 1: continue else: break except ZeroDivisionError: continue bv = np.sqrt(bv / bmin) qtable = np.vstack((bv, bv, bv)).T * gtab.bvecs return np.floor(qtable + 0.5) def hanning_filter(gtab, filter_width, origin): """create a hanning window The signal is premultiplied by a Hanning window before Fourier transform in order to ensure a smooth attenuation of the signal at high q values. Parameters ---------- gtab : GradientTable Gradient table. filter_width : int Strength of the Hanning filter. 
origin : (3,) ndarray center of qspace Returns ------- filter : (N,) ndarray where N is the number of non-b0 gradient directions """ qtable = create_qtable(gtab, origin) # calculate r - hanning filter free parameter r = np.sqrt(qtable[:, 0] ** 2 + qtable[:, 1] ** 2 + qtable[:, 2] ** 2) # setting hanning filter width and hanning return 0.5 * np.cos(2 * np.pi * r / filter_width) def pdf_interp_coords(sphere, rradius, origin): """Precompute coordinates for ODF calculation from the PDF Parameters ---------- sphere : object, Sphere rradius : array, shape (N,) line interpolation points origin : array, shape (3,) center of the grid """ interp_coords = rradius * sphere.vertices[np.newaxis].T origin = np.reshape(origin, [-1, 1, 1]) interp_coords = origin + interp_coords return interp_coords def pdf_odf(Pr, rradius, interp_coords): r"""Calculates the real ODF from the diffusion propagator(PDF) Pr Parameters ---------- Pr : array, shape (X, X, X) probability density function rradius : array, shape (N,) interpolation range on the radius interp_coords : array, shape (3, M, N) coordinates in the pdf for interpolating the odf """ PrIs = map_coordinates(Pr, interp_coords, order=1) odf = (PrIs * rradius**2).sum(-1) return odf def half_to_full_qspace(data, gtab): """Half to full Cartesian grid mapping Useful when dMRI data are provided in one qspace hemisphere as DiffusionSpectrum expects data to be in full qspace. Parameters ---------- data : array, shape (X, Y, Z, W) where (X, Y, Z) volume size and W number of gradient directions gtab : GradientTable container for b-values and b-vectors (gradient directions) Returns ------- new_data : array, shape (X, Y, Z, 2 * W -1) DWI data across the full Cartesian space. new_gtab : GradientTable Gradient table. Notes ----- We assume here that only on b0 is provided with the initial data. If that is not the case then you will need to write your own preparation function before providing the gradients and the data to the DiffusionSpectrumModel class. """ bvals = gtab.bvals bvecs = gtab.bvecs bvals = np.append(bvals, bvals[1:]) bvecs = np.append(bvecs, -bvecs[1:], axis=0) data = np.append(data, data[..., 1:], axis=-1) gtab.bvals = bvals.copy() gtab.bvecs = bvecs.copy() return data, gtab def project_hemisph_bvecs(gtab): """Project any near identical bvecs to the other hemisphere Parameters ---------- gtab : object, GradientTable Notes ----- Useful only when working with some types of dsi data. """ bvals = gtab.bvals bvecs = gtab.bvecs bvs = bvals[1:] bvcs = bvecs[1:] b = bvs[:, None] * bvcs bb = np.zeros((len(bvs), len(bvs))) pairs = [] for i, vec in enumerate(b): for j, vec2 in enumerate(b): bb[i, j] = np.sqrt(np.sum((vec - vec2) ** 2)) _I = np.argsort(bb[i]) for j in _I: if j != i: break if (j, i) in pairs: pass else: pairs.append((i, j)) bvecs2 = bvecs.copy() for _, j in pairs: bvecs2[1 + j] = -bvecs2[1 + j] return bvecs2, pairs class DiffusionSpectrumDeconvModel(DiffusionSpectrumModel): @warning_for_keywords() def __init__( self, gtab, *, qgrid_size=35, r_start=4.1, r_end=13.0, r_step=0.4, filter_width=np.inf, normalize_peaks=False, ): r""" Diffusion Spectrum Deconvolution The idea is to remove the convolution on the DSI propagator that is caused by the truncation of the q-space in the DSI sampling. .. 
math:: :nowrap: \begin{eqnarray} P_{dsi}(\mathbf{r}) & = & S_{0}^{-1}\iiint\limits_{\| \mathbf{q} \| \le \mathbf{q_{max}}} S(\mathbf{q})\exp(-i2\pi\mathbf{q}\cdot\mathbf{r})d\mathbf{q} \\ & = & S_{0}^{-1}\iiint\limits_{\mathbf{q}} \left( S(\mathbf{q}) \cdot M(\mathbf{q}) \right) \exp(-i2\pi\mathbf{q}\cdot\mathbf{r})d\mathbf{q} \\ & = & P(\mathbf{r}) \otimes \left( S_{0}^{-1}\iiint\limits_{\mathbf{q}} M(\mathbf{q}) \exp(-i2\pi\mathbf{q}\cdot\mathbf{r})d\mathbf{q} \right) \\ \end{eqnarray} where $\mathbf{r}$ is the displacement vector and $\mathbf{q}$ is the wave vector which corresponds to different gradient directions, $M(\mathbf{q})$ is a mask corresponding to your q-space sampling and $\otimes$ is the convolution operator :footcite:p:`CanalesRodriguez2010`. See also :footcite:p:`Biggs1997`. Parameters ---------- gtab : GradientTable, Gradient directions and bvalues container class qgrid_size : int, optional has to be an odd number. Sets the size of the q_space grid. For example if qgrid_size is 35 then the shape of the grid will be ``(35, 35, 35)``. r_start : float, optional ODF is sampled radially in the PDF. This parameters shows where the sampling should start. r_end : float, optional Radial endpoint of ODF sampling r_step : float, optional Step size of the ODf sampling from r_start to r_end filter_width : float, Strength of the hanning filter References ---------- .. footbibliography:: """ # noqa: E501 DiffusionSpectrumModel.__init__( self, gtab, qgrid_size=qgrid_size, r_start=r_start, r_end=r_end, r_step=r_step, filter_width=filter_width, normalize_peaks=normalize_peaks, ) @multi_voxel_fit def fit(self, data, **kwargs): return DiffusionSpectrumDeconvFit(self, data) class DiffusionSpectrumDeconvFit(DiffusionSpectrumFit): def pdf(self): """Applies the 3D FFT in the q-space grid to generate the DSI diffusion propagator, remove the background noise with a hard threshold and then deconvolve the propagator with the Lucy-Richardson deconvolution algorithm """ values = self.data # create the signal volume Sq = np.zeros((self.qgrid_sz, self.qgrid_sz, self.qgrid_sz)) # fill q-space for i in range(len(values)): qx, qy, qz = self.model.qgrid[i] Sq[qx, qy, qz] += values[i] # get deconvolution PSF DSID_PSF = self.model.cache_get("deconv_psf", key=self.model.gtab) if DSID_PSF is None: DSID_PSF = gen_PSF( self.model.qgrid, self.qgrid_sz, self.qgrid_sz, self.qgrid_sz ) self.model.cache_set("deconv_psf", self.model.gtab, DSID_PSF) # apply fourier transform Pr = fftshift(np.abs(np.real(fftn(ifftshift(Sq), 3 * (self.qgrid_sz,))))) # threshold propagator Pr = threshold_propagator(Pr) # apply LR deconvolution Pr = LR_deconv(Pr, DSID_PSF, numit=5, acc_factor=2) return Pr @warning_for_keywords() def threshold_propagator(P, *, estimated_snr=15.0): """ Applies hard threshold on the propagator to remove background noise for the deconvolution. """ P_thresholded = P.copy() threshold = P_thresholded.max() / float(estimated_snr) P_thresholded[P_thresholded < threshold] = 0 return P_thresholded / P_thresholded.sum() def gen_PSF(qgrid_sampling, siz_x, siz_y, siz_z): """ Generate a PSF for DSI Deconvolution by taking the ifft of the binary q-space sampling mask and truncating it to keep only the center. 
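    Examples
    --------
    A minimal sketch with a hand-made three-point q-space sampling (the
    coordinates and grid size are arbitrary assumptions for illustration):

    >>> import numpy as np
    >>> qgrid = np.array([[2, 2, 2], [1, 2, 2], [3, 2, 2]])
    >>> psf = gen_PSF(qgrid, 5, 5, 5)
    >>> psf.shape
    (5, 5, 5)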
""" Sq = np.zeros((siz_x, siz_y, siz_z)) # fill q-space for i in range(qgrid_sampling.shape[0]): qx, qy, qz = qgrid_sampling[i] Sq[qx, qy, qz] = 1 return Sq * np.real(np.fft.fftshift(np.fft.ifftn(np.fft.ifftshift(Sq)))) @warning_for_keywords() def LR_deconv(prop, psf, *, numit=5, acc_factor=1): r""" Perform Lucy-Richardson deconvolution algorithm on a 3D array. Parameters ---------- prop : 3-D ndarray of dtype float The 3D volume to be deconvolve psf : 3-D ndarray of dtype float The filter that will be used for the deconvolution. numit : int Number of Lucy-Richardson iteration to perform. acc_factor : float Exponential acceleration factor as in :footcite:p:`Biggs1997`. References ---------- .. footbibliography:: """ eps = 1e-16 # Create the otf of the same size as prop otf = np.zeros_like(prop) # prop.ndim==3 otf[ otf.shape[0] // 2 - psf.shape[0] // 2 : otf.shape[0] // 2 + psf.shape[0] // 2 + 1, otf.shape[1] // 2 - psf.shape[1] // 2 : otf.shape[1] // 2 + psf.shape[1] // 2 + 1, otf.shape[2] // 2 - psf.shape[2] // 2 : otf.shape[2] // 2 + psf.shape[2] // 2 + 1, ] = psf otf = np.real(np.fft.fftn(np.fft.ifftshift(otf))) # Enforce Positivity prop = np.clip(prop, 0, np.inf) prop_deconv = prop.copy() for _ in range(numit): # Blur the estimate reBlurred = np.real(np.fft.ifftn(otf * np.fft.fftn(prop_deconv))) reBlurred[reBlurred < eps] = eps # Update the estimate prop_deconv = ( prop_deconv * (np.real(np.fft.ifftn(otf * np.fft.fftn((prop / reBlurred) + eps)))) ** acc_factor ) # Enforce positivity prop_deconv = np.clip(prop_deconv, 0, np.inf) return prop_deconv / prop_deconv.sum() dipy-1.11.0/dipy/reconst/dti.py000077500000000000000000002311611476546756600163540ustar00rootroot00000000000000#!/usr/bin/python """ Classes and functions for fitting tensors. """ import functools import warnings import numpy as np import scipy.optimize as opt from dipy.core.geometry import vector_norm from dipy.core.gradients import gradient_table from dipy.core.onetime import auto_attr from dipy.data import get_sphere from dipy.reconst.base import ReconstModel from dipy.reconst.vec_val_sum import vec_val_vect from dipy.reconst.weights_method import ( weights_method_nlls_m_est, weights_method_wls_m_est, ) from dipy.testing.decorators import warning_for_keywords MIN_POSITIVE_SIGNAL = 0.0001 ols_resort_msg = "Resorted to OLS solution in some voxels" @warning_for_keywords() def _roll_evals(evals, *, axis=-1): """Check evals shape. Helper function to check that the evals provided to functions calculating tensor statistics have the right shape Parameters ---------- evals : array-like Eigenvalues of a diffusion tensor. shape should be (...,3). axis : int, optional The axis of the array which contains the 3 eigenvals. Default: -1 Returns ------- evals : array-like Eigenvalues of a diffusion tensor, rolled so that the 3 eigenvals are the last axis. """ if evals.shape[-1] != 3: msg = f"Expecting 3 eigenvalues, got {evals.shape[-1]}" raise ValueError(msg) evals = np.rollaxis(evals, axis) return evals @warning_for_keywords() def fractional_anisotropy(evals, *, axis=-1): r"""Return Fractional anisotropy (FA) of a diffusion tensor. Parameters ---------- evals : array-like Eigenvalues of a diffusion tensor. axis : int, optional Axis of `evals` which contains 3 eigenvalues. Returns ------- fa : array Calculated FA. Range is 0 <= FA <= 1. Notes ----- FA is calculated using the following equation: .. 
math:: FA = \sqrt{\frac{1}{2}\frac{(\lambda_1-\lambda_2)^2+(\lambda_1- \lambda_3)^2+(\lambda_2-\lambda_3)^2}{\lambda_1^2+ \lambda_2^2+\lambda_3^2}} """ evals = _roll_evals(evals, axis=axis) # Make sure not to get nans all_zero = (evals == 0).all(axis=0) ev1, ev2, ev3 = evals fa = np.sqrt( 0.5 * ((ev1 - ev2) ** 2 + (ev2 - ev3) ** 2 + (ev3 - ev1) ** 2) / ((evals * evals).sum(0) + all_zero) ) return fa @warning_for_keywords() def geodesic_anisotropy(evals, *, axis=-1): r""" Geodesic anisotropy (GA) of a diffusion tensor. Parameters ---------- evals : array-like Eigenvalues of a diffusion tensor. axis : int, optional Axis of `evals` which contains 3 eigenvalues. Returns ------- ga : array Calculated GA. In the range 0 to +infinity Notes ----- GA is calculated using the following equation given in :footcite:p:`Batchelor2005`: .. math:: GA = \sqrt{\sum_{i=1}^3 \log^2{\left ( \lambda_i/<\mathbf{D}> \right )}}, \quad \textrm{where} \quad <\mathbf{D}> = (\lambda_1\lambda_2\lambda_3)^{1/3} Note that the notation, $$, is often used as the mean diffusivity (MD) of the diffusion tensor and can lead to confusions in the literature (see :footcite:p:`Batchelor2005` versus :footcite:p:`Correia2011b` versus :footcite:p:`Lee2008` for example). :footcite:p:`Correia2011b` defines geodesic anisotropy (GA) with $$ as the MD in the denominator of the sum. This is wrong. The original paper :footcite:p:`Batchelor2005` defines GA with $ = det(D)^{1/3}$, as the isotropic part of the distance. This might be an explanation for the confusion. The isotropic part of the diffusion tensor in Euclidean space is the MD whereas the isotropic part of the tensor in log-Euclidean space is $det(D)^{1/3}$. The Appendix of :footcite:p:`Batchelor2005` and log-Euclidean derivations from :footcite:p:`Lee2008` are clear on this. Hence, all that to say that $ = det(D)^{1/3}$ here for the GA definition and not MD. See also :footcite:p:`Arsigny2006`. References ---------- .. footbibliography:: """ evals = _roll_evals(evals, axis=axis) ev1, ev2, ev3 = evals log1 = np.zeros(ev1.shape) log2 = np.zeros(ev1.shape) log3 = np.zeros(ev1.shape) idx = np.nonzero(ev1) # this is the definition in :footcite:p:`Batchelor2005` detD = np.power(ev1 * ev2 * ev3, 1 / 3.0) log1[idx] = np.log(ev1[idx] / detD[idx]) log2[idx] = np.log(ev2[idx] / detD[idx]) log3[idx] = np.log(ev3[idx] / detD[idx]) ga = np.sqrt(log1**2 + log2**2 + log3**2) return ga @warning_for_keywords() def mean_diffusivity(evals, *, axis=-1): r""" Mean Diffusivity (MD) of a diffusion tensor. Parameters ---------- evals : array-like Eigenvalues of a diffusion tensor. axis : int, optional Axis of `evals` which contains 3 eigenvalues. Returns ------- md : array Calculated MD. Notes ----- MD is calculated with the following equation: .. math:: MD = \frac{\lambda_1 + \lambda_2 + \lambda_3}{3} """ evals = _roll_evals(evals, axis=axis) return evals.mean(0) @warning_for_keywords() def axial_diffusivity(evals, *, axis=-1): r""" Axial Diffusivity (AD) of a diffusion tensor. Also called parallel diffusivity. Parameters ---------- evals : array-like Eigenvalues of a diffusion tensor, must be sorted in descending order along `axis`. axis : int, optional Axis of `evals` which contains 3 eigenvalues. Returns ------- ad : array Calculated AD. Notes ----- AD is calculated with the following equation: .. 
math:: AD = \lambda_1 """ evals = _roll_evals(evals, axis=axis) ev1, ev2, ev3 = evals return ev1 @warning_for_keywords() def radial_diffusivity(evals, *, axis=-1): r""" Radial Diffusivity (RD) of a diffusion tensor. Also called perpendicular diffusivity. Parameters ---------- evals : array-like Eigenvalues of a diffusion tensor, must be sorted in descending order along `axis`. axis : int, optional Axis of `evals` which contains 3 eigenvalues. Returns ------- rd : array Calculated RD. Notes ----- RD is calculated with the following equation: .. math:: RD = \frac{\lambda_2 + \lambda_3}{2} """ evals = _roll_evals(evals, axis=axis) return evals[1:].mean(0) @warning_for_keywords() def trace(evals, *, axis=-1): r""" Trace of a diffusion tensor. Parameters ---------- evals : array-like Eigenvalues of a diffusion tensor. axis : int, optional Axis of `evals` which contains 3 eigenvalues. Returns ------- trace : array Calculated trace of the diffusion tensor. Notes ----- Trace is calculated with the following equation: .. math:: Trace = \lambda_1 + \lambda_2 + \lambda_3 """ evals = _roll_evals(evals, axis=axis) return evals.sum(0) def color_fa(fa, evecs): r"""Color fractional anisotropy of diffusion tensor Parameters ---------- fa : array-like Array of the fractional anisotropy (can be 1D, 2D or 3D) evecs : array-like eigen vectors from the tensor model Returns ------- rgb : Array with 3 channels for each color as the last dimension. Colormap of the FA with red for the x value, y for the green value and z for the blue value. Notes ----- It is computed from the clipped FA between 0 and 1 using the following formula .. math:: rgb = abs(max(\vec{e})) \times fa """ if (fa.shape != evecs[..., 0, 0].shape) or ((3, 3) != evecs.shape[-2:]): raise ValueError("Wrong number of dimensions for evecs") return np.abs(evecs[..., 0]) * np.clip(fa, 0, 1)[..., None] # The following are used to calculate the tensor mode: def determinant(q_form): """ The determinant of a tensor, given in quadratic form Parameters ---------- q_form : ndarray The quadratic form of a tensor, or an array with quadratic forms of tensors. Should be of shape (x, y, z, 3, 3) or (n, 3, 3) or (3, 3). Returns ------- det : array The determinant of the tensor in each spatial coordinate """ # Following the conventions used here: # https://en.wikipedia.org/wiki/Determinant aei = q_form[..., 0, 0] * q_form[..., 1, 1] * q_form[..., 2, 2] bfg = q_form[..., 0, 1] * q_form[..., 1, 2] * q_form[..., 2, 0] cdh = q_form[..., 0, 2] * q_form[..., 1, 0] * q_form[..., 2, 1] ceg = q_form[..., 0, 2] * q_form[..., 1, 1] * q_form[..., 2, 0] bdi = q_form[..., 0, 1] * q_form[..., 1, 0] * q_form[..., 2, 2] afh = q_form[..., 0, 0] * q_form[..., 1, 2] * q_form[..., 2, 1] return aei + bfg + cdh - ceg - bdi - afh def isotropic(q_form): r""" Calculate the isotropic part of the tensor. See :footcite:p:`Ennis2006` for further details about the method. Parameters ---------- q_form : ndarray The quadratic form of a tensor, or an array with quadratic forms of tensors. Should be of shape (x,y,z,3,3) or (n, 3, 3) or (3,3). Returns ------- A_hat: ndarray The isotropic part of the tensor in each spatial coordinate Notes ----- The isotropic part of a tensor is defined as (equations 3-5 of :footcite:p:`Ennis2006`): .. math:: \bar{A} = \frac{1}{2} tr(A) I References ---------- .. 
footbibliography:: """ tr_A = q_form[..., 0, 0] + q_form[..., 1, 1] + q_form[..., 2, 2] my_I = np.eye(3) tr_AI = tr_A.reshape(tr_A.shape + (1, 1)) * my_I return (1 / 3.0) * tr_AI def deviatoric(q_form): r""" Calculate the deviatoric (anisotropic) part of the tensor. See :footcite:p:`Ennis2006` for further details about the method. Parameters ---------- q_form : ndarray The quadratic form of a tensor, or an array with quadratic forms of tensors. Should be of shape (x,y,z,3,3) or (n, 3, 3) or (3,3). Returns ------- A_squiggle : ndarray The deviatoric part of the tensor in each spatial coordinate. Notes ----- The deviatoric part of the tensor is defined as (equations 3-5 in :footcite:p:`Ennis2006`): .. math:: \widetilde{A} = A - \bar{A} Where $A$ is the tensor quadratic form and $\bar{A}$ is the anisotropic part of the tensor. References ---------- .. footbibliography:: """ A_squiggle = q_form - isotropic(q_form) return A_squiggle def norm(q_form): r""" Calculate the Frobenius norm of a tensor quadratic form Parameters ---------- q_form: ndarray The quadratic form of a tensor, or an array with quadratic forms of tensors. Should be of shape (x,y,z,3,3) or (n, 3, 3) or (3,3). Returns ------- norm : ndarray The Frobenius norm of the 3,3 tensor q_form in each spatial coordinate. Notes ----- The Frobenius norm is defined as: .. math:: ||A||_F = [\sum_{i,j} abs(a_{i,j})^2]^{1/2} See Also -------- np.linalg.norm """ return np.sqrt(np.sum(np.sum(np.abs(q_form**2), -1), -1)) def mode(q_form): r""" Mode (MO) of a diffusion tensor. See :footcite:p:`Ennis2006` for further details about the method. Parameters ---------- q_form : ndarray The quadratic form of a tensor, or an array with quadratic forms of tensors. Should be of shape (x, y, z, 3, 3) or (n, 3, 3) or (3, 3). Returns ------- mode : array Calculated tensor mode in each spatial coordinate. Notes ----- Mode ranges between -1 (planar anisotropy) and +1 (linear anisotropy) with 0 representing isotropy. Mode is calculated with the following equation (equation 9 in :footcite:p:`Ennis2006`): .. math:: Mode = 3*\sqrt{6}*det(\widetilde{A}/norm(\widetilde{A})) Where $\widetilde{A}$ is the deviatoric part of the tensor quadratic form. References ---------- .. footbibliography:: """ A_squiggle = deviatoric(q_form) A_s_norm = norm(A_squiggle) mode = np.zeros_like(A_s_norm) nonzero = A_s_norm != 0 A_squiggle_nonzero = A_squiggle[nonzero] # Add two dims for the (3,3), so that it can broadcast on A_squiggle A_s_norm_nonzero = A_s_norm[nonzero].reshape(-1, 1, 1) mode_nonzero = 3 * np.sqrt(6) * determinant(A_squiggle_nonzero / A_s_norm_nonzero) mode[nonzero] = mode_nonzero return mode @warning_for_keywords() def linearity(evals, *, axis=-1): r""" The linearity of the tensor. See :footcite:p:`Westin1997` for further details about the method. Parameters ---------- evals : array-like Eigenvalues of a diffusion tensor. axis : int, optional Axis of `evals` which contains 3 eigenvalues. Returns ------- linearity : array Calculated linearity of the diffusion tensor. Notes ----- Linearity is calculated with the following equation: .. math:: Linearity = \frac{\lambda_1-\lambda_2}{\lambda_1+\lambda_2+\lambda_3} References ---------- .. footbibliography:: """ evals = _roll_evals(evals, axis=axis) ev1, ev2, ev3 = evals return (ev1 - ev2) / evals.sum(0) @warning_for_keywords() def planarity(evals, *, axis=-1): r""" The planarity of the tensor. See :footcite:p:`Westin1997` for further details about the method. 
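    As a quick sanity check computed directly from the formula below (not
    via this function), a purely planar tensor with eigenvalues (1, 1, 0)
    gives a planarity of one:

    >>> ev1, ev2, ev3 = 1.0, 1.0, 0.0
    >>> 2 * (ev2 - ev3) / (ev1 + ev2 + ev3)
    1.0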
Parameters ---------- evals : array-like Eigenvalues of a diffusion tensor. axis : int, optional Axis of `evals` which contains 3 eigenvalues. Returns ------- linearity : array Calculated linearity of the diffusion tensor. Notes ----- Planarity is calculated with the following equation: .. math:: Planarity = \frac{2 (\lambda_2-\lambda_3)}{\lambda_1+\lambda_2+\lambda_3} References ---------- .. footbibliography:: """ evals = _roll_evals(evals, axis=axis) ev1, ev2, ev3 = evals return 2 * (ev2 - ev3) / evals.sum(0) @warning_for_keywords() def sphericity(evals, *, axis=-1): r""" The sphericity of the tensor. See :footcite:p:`Westin1997` for further details about the method. Parameters ---------- evals : array-like Eigenvalues of a diffusion tensor. axis : int, optional Axis of `evals` which contains 3 eigenvalues. Returns ------- sphericity : array Calculated sphericity of the diffusion tensor. Notes ----- Sphericity is calculated with the following equation: .. math:: Sphericity = \frac{3 \lambda_3)}{\lambda_1+\lambda_2+\lambda_3} References ---------- .. footbibliography:: """ evals = _roll_evals(evals, axis=axis) ev1, ev2, ev3 = evals return (3 * ev3) / evals.sum(0) def apparent_diffusion_coef(q_form, sphere): r""" Calculate the apparent diffusion coefficient (ADC) in each direction of a sphere. Parameters ---------- q_form : ndarray The quadratic form of a tensor, or an array with quadratic forms of tensors. Should be of shape (..., 3, 3) sphere : a Sphere class instance The ADC will be calculated for each of the vertices in the sphere Notes ----- The calculation of ADC, relies on the following relationship: .. math:: ADC = \vec{b} Q \vec{b}^T Where Q is the quadratic form of the tensor. """ bvecs = sphere.vertices bvals = np.ones(bvecs.shape[0]) gtab = gradient_table(bvals, bvecs=bvecs) D = design_matrix(gtab)[:, :6] return -np.dot(lower_triangular(q_form), D.T) def tensor_prediction(dti_params, gtab, S0): r""" Predict a signal given tensor parameters. Parameters ---------- dti_params : ndarray Tensor parameters. The last dimension should have 12 tensor parameters: 3 eigenvalues, followed by the 3 corresponding eigenvectors. gtab : a GradientTable class instance The gradient table for this prediction S0 : float or ndarray The non diffusion-weighted signal in every voxel, or across all voxels. Default: 1 Notes ----- The predicted signal is given by: .. math:: S(\theta, b) = S_0 * e^{-b ADC} where $ADC = \theta Q \theta^T$, $\theta$ is a unit vector pointing at any direction on the sphere for which a signal is to be predicted, $b$ is the b value provided in the GradientTable input for that direction, $Q$ is the quadratic form of the tensor determined by the input parameters. """ evals = dti_params[..., :3] evecs = dti_params[..., 3:].reshape(dti_params.shape[:-1] + (3, 3)) qform = vec_val_vect(evecs, evals) del evals, evecs lower_tri = lower_triangular(qform, b0=S0) del qform D = design_matrix(gtab) return np.exp(np.dot(lower_tri, D.T)) class TensorModel(ReconstModel): """Diffusion Tensor""" def __init__(self, gtab, *args, fit_method="WLS", return_S0_hat=False, **kwargs): """A Diffusion Tensor Model. See :footcite:p:`Basser1994b` and :footcite:p:`Basser1996` for further details about the model. Parameters ---------- gtab : GradientTable class instance Gradient table. 
fit_method : str or callable, optional str can be one of the following: 'WLS' for weighted least squares :func:`dti.wls_fit_tensor` 'LS' or 'OLS' for ordinary least squares :func:`dti.ols_fit_tensor` 'NLLS' for non-linear least-squares :func:`dti.nlls_fit_tensor` 'RT' or 'restore' or 'RESTORE' for RESTORE robust tensor fitting :footcite:p:`Chang2005` :func:`dti.restore_fit_tensor` callable has to have the signature: ``fit_method(design_matrix, data, *args, **kwargs)`` return_S0_hat : bool, optional Boolean to return (True) or not (False) the S0 values for the fit. args, kwargs : arguments and key-word arguments passed to the fit_method. See :func:`dti.wls_fit_tensor`, :func:`dti.ols_fit_tensor` for details min_signal : float, optional The minimum signal value. Needs to be a strictly positive number. Default: minimal signal in the data provided to `fit`. Notes ----- In order to increase speed of processing, tensor fitting is done simultaneously over many voxels. Many fit_methods use the 'step' parameter to set the number of voxels that will be fit at once in each iteration. This is the chunk size as a number of voxels. A larger step value should speed things up, but it will also take up more memory. It is advisable to keep an eye on memory consumption as this value is increased. E.g., in :func:`iter_fit_tensor` we have a default step value of 1e4 References ---------- .. footbibliography:: """ ReconstModel.__init__(self, gtab) if not callable(fit_method): try: if fit_method.upper() in ["NLS", "NLLS"] and "step" in kwargs: _ = kwargs.pop("step") warnings.warn( "The 'step' parameter can not be used in the " f"{fit_method.upper()} method. It will be ignored.", UserWarning, stacklevel=2, ) fit_method = common_fit_methods[fit_method.upper()] except KeyError as e: e_s = '"' + str(fit_method) + '" is not a known fit ' e_s += "method, the fit method should either be a " e_s += "function or one of the common fit methods" raise ValueError(e_s) from e self.fit_method = fit_method self.return_S0_hat = return_S0_hat self.design_matrix = design_matrix(self.gtab) self.args = args self.kwargs = kwargs self.min_signal = self.kwargs.pop("min_signal", None) if self.min_signal is not None and self.min_signal <= 0: e_s = "The `min_signal` key-word argument needs to be strictly" e_s += " positive." raise ValueError(e_s) self.extra = {} @warning_for_keywords() def fit(self, data, *, mask=None): """Fit method of the DTI model class Parameters ---------- data : array The measured signal from one voxel. 
mask : array, optional A boolean array used to mark the coordinates in the data that should be analyzed that has the shape data.shape[:-1] """ S0_params = None img_shape = data.shape[:-1] if mask is not None: # Check for valid shape of the mask if mask.shape != img_shape: raise ValueError("Mask is not the same shape as data.") mask = np.asarray(mask, dtype=bool) data_in_mask = np.reshape(data[mask], (-1, data.shape[-1])) if self.min_signal is None: min_signal = MIN_POSITIVE_SIGNAL else: min_signal = self.min_signal data_in_mask = np.maximum(data_in_mask, min_signal) params_in_mask, extra = self.fit_method( self.design_matrix, data_in_mask, *self.args, return_S0_hat=self.return_S0_hat, **self.kwargs, ) if self.return_S0_hat: params_in_mask, model_S0 = params_in_mask if mask is None: out_shape = data.shape[:-1] + (-1,) dti_params = params_in_mask.reshape(out_shape) if self.return_S0_hat: S0_params = model_S0.reshape(out_shape[:-1]) if extra is not None: for key in extra: self.extra[key] = extra[key].reshape(data.shape) else: dti_params = np.zeros(data.shape[:-1] + (12,)) dti_params[mask, :] = params_in_mask if self.return_S0_hat: S0_params = np.zeros(data.shape[:-1]) try: S0_params[mask] = model_S0.squeeze(axis=-1) except ValueError: S0_params[mask] = model_S0 if extra is not None: for key in extra: self.extra[key] = np.zeros(data.shape) self.extra[key][mask, :] = extra[key] return TensorFit(self, dti_params, model_S0=S0_params) @warning_for_keywords() def predict(self, dti_params, *, S0=1.0): """ Predict a signal for this TensorModel class instance given parameters. Parameters ---------- dti_params : ndarray The last dimension should have 12 tensor parameters: 3 eigenvalues, followed by the 3 eigenvectors S0 : float or ndarray, optional The non diffusion-weighted signal in every voxel, or across all voxels. 
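        Examples
        --------
        A minimal sketch with synthetic single-voxel parameters (the
        b-values, directions, eigenvalues and S0 below are illustrative
        assumptions, not recommended defaults):

        >>> import numpy as np
        >>> from dipy.core.gradients import gradient_table
        >>> bvals = np.array([0.0, 1000.0, 1000.0, 1000.0])
        >>> bvecs = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
        >>> gtab = gradient_table(bvals, bvecs=bvecs)
        >>> model = TensorModel(gtab)
        >>> evals = np.array([1.7e-3, 0.3e-3, 0.3e-3])
        >>> dti_params = np.concatenate([evals, np.eye(3).ravel()])
        >>> S = model.predict(dti_params, S0=100.0)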
""" return tensor_prediction(dti_params, self.gtab, S0) class TensorFit: @warning_for_keywords() def __init__(self, model, model_params, *, model_S0=None): """Initialize a TensorFit class instance.""" self.model = model self.model_params = model_params self.model_S0 = model_S0 def __getitem__(self, index): model_params = self.model_params model_S0 = self.model_S0 N = model_params.ndim if type(index) is not tuple: index = (index,) elif len(index) >= model_params.ndim: raise IndexError("IndexError: invalid index") index = index + (slice(None),) * (N - len(index)) if model_S0 is not None: model_S0 = model_S0[index[:-1]] return type(self)(self.model, model_params[index], model_S0=model_S0) @property def S0_hat(self): return self.model_S0 @property def shape(self): return self.model_params.shape[:-1] @property def directions(self): """ For tracking - return the primary direction in each voxel """ return self.evecs[..., None, :, 0] @property def evals(self): """ Returns the eigenvalues of the tensor as an array """ return self.model_params[..., :3] @property def evecs(self): """ Returns the eigenvectors of the tensor as an array, columnwise """ evecs = self.model_params[..., 3:12] return evecs.reshape(self.shape + (3, 3)) @property def quadratic_form(self): """Calculates the 3x3 diffusion tensor for each voxel""" # do `evecs * evals * evecs.T` where * is matrix multiply # einsum does this with: # np.einsum('...ij,...j,...kj->...ik', evecs, evals, evecs) return vec_val_vect(self.evecs, self.evals) @warning_for_keywords() def lower_triangular(self, *, b0=None): return lower_triangular(self.quadratic_form, b0=b0) @auto_attr def fa(self): """Fractional anisotropy (FA) calculated from cached eigenvalues.""" return fractional_anisotropy(self.evals) @auto_attr def color_fa(self): """Color fractional anisotropy of diffusion tensor""" return color_fa(self.fa, self.evecs) @auto_attr def ga(self): """Geodesic anisotropy (GA) calculated from cached eigenvalues.""" return geodesic_anisotropy(self.evals) @auto_attr def mode(self): """ Tensor mode calculated from cached eigenvalues. """ return mode(self.quadratic_form) @auto_attr def md(self): r""" Mean diffusivity (MD) calculated from cached eigenvalues. Returns ------- md : array (V, 1) Calculated MD. Notes ----- MD is calculated with the following equation: .. math:: MD = \frac{\lambda_1+\lambda_2+\lambda_3}{3} """ return self.trace / 3.0 @auto_attr def rd(self): r""" Radial diffusivity (RD) calculated from cached eigenvalues. Returns ------- rd : array (V, 1) Calculated RD. Notes ----- RD is calculated with the following equation: .. math:: RD = \frac{\lambda_2 + \lambda_3}{2} """ return radial_diffusivity(self.evals) @auto_attr def ad(self): r""" Axial diffusivity (AD) calculated from cached eigenvalues. Returns ------- ad : array (V, 1) Calculated AD. Notes ----- AD is calculated with the following equation: .. math:: AD = \lambda_1 """ return axial_diffusivity(self.evals) @auto_attr def trace(self): r""" Trace of the tensor calculated from cached eigenvalues. Returns ------- trace : array (V, 1) Calculated trace. Notes ----- The trace is calculated with the following equation: .. math:: trace = \lambda_1 + \lambda_2 + \lambda_3 """ return trace(self.evals) @auto_attr def planarity(self): r""" Returns ------- sphericity : array Calculated sphericity of the diffusion tensor :footcite:p:`Westin1997`. Notes ----- Sphericity is calculated with the following equation: .. 
math:: Sphericity = \frac{2 (\lambda_2 - \lambda_3)}{\lambda_1+\lambda_2+\lambda_3} References ---------- .. footbibliography:: """ return planarity(self.evals) @auto_attr def linearity(self): r""" Returns ------- linearity : array Calculated linearity of the diffusion tensor :footcite:p:`Westin1997`. Notes ----- Linearity is calculated with the following equation: .. math:: Linearity = \frac{\lambda_1-\lambda_2}{\lambda_1+\lambda_2+\lambda_3} References ---------- .. footbibliography:: """ return linearity(self.evals) @auto_attr def sphericity(self): r""" Returns ------- sphericity : array Calculated sphericity of the diffusion tensor :footcite:p:`Westin1997`. Notes ----- Sphericity is calculated with the following equation: .. math:: Sphericity = \frac{3 \lambda_3}{\lambda_1+\lambda_2+\lambda_3} References ---------- .. footbibliography:: """ return sphericity(self.evals) def odf(self, sphere): r""" The diffusion orientation distribution function (dODF). This is an estimate of the diffusion distance in each direction Parameters ---------- sphere : Sphere class instance. The dODF is calculated in the vertices of this input. Returns ------- odf : ndarray The diffusion distance in every direction of the sphere in every voxel in the input data. Notes ----- This is based on equation 3 in :footcite:p:`Aganj2010`. To re-derive it from scratch, follow steps in :footcite:p:`Descoteaux2008b`, Section 7.9 Equation 7.24 but with an $r^2$ term in the integral. References ---------- .. footbibliography:: """ odf = np.zeros((self.evals.shape[:-1] + (sphere.vertices.shape[0],))) if len(self.evals.shape) > 1: mask = np.where( (self.evals[..., 0] > 0) & (self.evals[..., 1] > 0) & (self.evals[..., 2] > 0) ) evals = self.evals[mask] evecs = self.evecs[mask] else: evals = self.evals evecs = self.evecs lower = 4 * np.pi * np.sqrt(np.prod(evals, -1)) projection = np.dot(sphere.vertices, evecs) projection /= np.sqrt(evals) result = ((vector_norm(projection) ** -3) / lower).T if len(self.evals.shape) > 1: odf[mask] = result else: odf = result return odf def adc(self, sphere): r""" Calculate the apparent diffusion coefficient (ADC) in each direction on the sphere for each voxel in the data Parameters ---------- sphere : Sphere class instance Sphere providing sample directions to compute the apparent diffusion coefficient. Returns ------- adc : ndarray The estimates of the apparent diffusion coefficient in every direction on the input sphere Notes ----- The calculation of ADC, relies on the following relationship: .. math:: ADC = \vec{b} Q \vec{b}^T Where Q is the quadratic form of the tensor. """ return apparent_diffusion_coef(self.quadratic_form, sphere) @warning_for_keywords() def predict(self, gtab, *, S0=None, step=None): r""" Given a model fit, predict the signal on the vertices of a sphere Parameters ---------- gtab : a GradientTable class instance This encodes the directions for which a prediction is made S0 : float array, optional The mean non-diffusion weighted signal in each voxel. Default: The fitted S0 value in all voxels if it was fitted. Otherwise 1 in all voxels. step : int, optional The chunk size as a number of voxels. Optional parameter with default value 10,000. In order to increase speed of processing, tensor fitting is done simultaneously over many voxels. This parameter sets the number of voxels that will be fit at once in each iteration. A larger step value should speed things up, but it will also take up more memory. 
It is advisable to keep an eye on memory consumption as this value is increased. Notes ----- The predicted signal is given by: .. math:: S(\theta, b) = S_0 * e^{-b ADC} Where: .. math:: ADC = \theta Q \theta^T $\theta$ is a unit vector pointing at any direction on the sphere for which a signal is to be predicted and $b$ is the b value provided in the GradientTable input for that direction """ if S0 is None: S0 = self.model_S0 if S0 is None: # if we didn't input or estimate S0 just use 1 S0 = 1.0 shape = self.model_params.shape[:-1] size = np.prod(shape) if step is None: step = self.model.kwargs.get("step", size) if step >= size: return tensor_prediction(self.model_params[..., 0:12], gtab, S0=S0) params = np.reshape(self.model_params, (-1, self.model_params.shape[-1])) predict = np.empty((size, gtab.bvals.shape[0])) if isinstance(S0, np.ndarray): S0 = S0.ravel() for i in range(0, size, step): if isinstance(S0, np.ndarray): this_S0 = S0[i : i + step] else: this_S0 = S0 predict[i : i + step] = tensor_prediction( params[i : i + step], gtab, S0=this_S0 ) return predict.reshape(shape + (gtab.bvals.shape[0],)) @warning_for_keywords() def iter_fit_tensor(*, step=1e4): """Wrap a fit_tensor func and iterate over chunks of data with given length Splits data into a number of chunks of specified size and iterates the decorated fit_tensor function over them. This is useful to counteract the temporary but significant memory usage increase in fit_tensor functions that use vectorized operations and need to store large temporary arrays for their vectorized operations. Parameters ---------- step : int, optional The chunk size as a number of voxels. Optional parameter with default value 10,000. In order to increase speed of processing, tensor fitting is done simultaneously over many voxels. This parameter sets the number of voxels that will be fit at once in each iteration. A larger step value should speed things up, but it will also take up more memory. It is advisable to keep an eye on memory consumption as this value is increased. """ def iter_decorator(fit_tensor): """Actual iter decorator returned by iter_fit_tensor dec factory Parameters ---------- fit_tensor : callable A tensor fitting callable (most likely a function). The callable has to have the signature: ``fit_method(design_matrix, data, *args, **kwargs)`` """ @functools.wraps(fit_tensor) def wrapped_fit_tensor( design_matrix, data, *args, return_S0_hat=False, step=step, **kwargs ): """Iterate fit_tensor function over the data chunks Parameters ---------- design_matrix : array (g, 7) Design matrix holding the covariants used to solve for the regression coefficients. data : array ([X, Y, Z, ...], g) Data or response variables holding the data. Note that the last dimension should contain the data. It makes no copies of data. return_S0_hat : bool, optional Boolean to return (True) or not (False) the S0 values for the fit. step : int, optional The chunk size as a number of voxels. Overrides `step` value of `iter_fit_tensor`. args : {list,tuple} Any extra optional positional arguments passed to `fit_tensor`. kwargs : dict Any extra optional keyword arguments passed to `fit_tensor`. 
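Notes
-----
As a rough sketch of the chunking this wrapper performs (``fit_tensor``,
``design_matrix`` and ``data`` stand in for any valid inputs; shapes and
return handling are simplified here)::

    flat = data.reshape(-1, data.shape[-1])
    for i in range(0, flat.shape[0], step):
        chunk_params = fit_tensor(design_matrix, flat[i:i + step])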
""" shape = data.shape[:-1] size = np.prod(shape) step = int(step) or size weights = kwargs["weights"] if "weights" in kwargs else None if weights is None: kwargs.pop("weights", None) if step >= size: return fit_tensor( design_matrix, data, *args, return_S0_hat=return_S0_hat, **kwargs ) data = data.reshape(-1, data.shape[-1]) if weights is not None: weights = weights.reshape(-1, weights.shape[-1]) if design_matrix.shape[-1] == 22: # DKI sz = 22 else: # DTI sz = 7 if kwargs.get("return_lower_triangular", False) else 12 dtiparams = np.empty((size, sz), dtype=np.float64) if return_S0_hat: S0params = np.empty(size, dtype=np.float64) extra = {} for i in range(0, size, step): if weights is not None: kwargs["weights"] = weights[i : i + step] if return_S0_hat: (dtiparams[i : i + step], S0params[i : i + step]), extra_i = ( fit_tensor( design_matrix, data[i : i + step], *args, return_S0_hat=return_S0_hat, **kwargs, ) ) else: dtiparams[i : i + step], extra_i = fit_tensor( design_matrix, data[i : i + step], *args, **kwargs ) if extra_i is not None: for key in extra_i: if i == 0: extra[key] = np.empty(data.shape) extra[key][i : i + step] = extra_i[key] if extra: for key in extra: extra[key] = extra[key].reshape(shape + (-1,)) if return_S0_hat: return ( dtiparams.reshape(shape + (sz,)), S0params.reshape(shape + (1,)), ), extra else: return dtiparams.reshape(shape + (sz,)), extra return wrapped_fit_tensor return iter_decorator @iter_fit_tensor() @warning_for_keywords() def wls_fit_tensor( design_matrix, data, *, weights=None, return_S0_hat=False, return_lower_triangular=False, return_leverages=False, ): r""" Computes weighted least squares (WLS) fit to calculate self-diffusion tensor using a linear regression model. See :footcite:p:`Chung2006` for further details about the method. Parameters ---------- design_matrix : array (g, 7) Design matrix holding the covariants used to solve for the regression coefficients. data : array ([X, Y, Z, ...], g) Data or response variables holding the data. Note that the last dimension should contain the data. It makes no copies of data. weights : array ([X, Y, Z, ...], g), optional Weights to apply for fitting. These weights must correspond to the squared residuals such that $S = \sum_i w_i r_i^2$. If not provided, weights are estimated as the squared predicted signal from an initial OLS fit :footcite:p:`Chung2006`. return_S0_hat : bool, optional Boolean to return (True) or not (False) the S0 values for the fit. return_lower_triangular : bool, optional Boolean to return (True) or not (False) the coefficients of the fit. return_leverages : bool, optional Boolean to return (True) or not (False) the fitting leverages. Returns ------- eigvals : array (..., 3) Eigenvalues from eigen decomposition of the tensor. eigvecs : array (..., 3, 3) Associated eigenvectors from eigen decomposition of the tensor. Eigenvectors are columnar (e.g. eigvecs[:,j] is associated with eigvals[j]) leverages : array (g) Leverages of the fitting problem (if return_leverages is True) See Also -------- decompose_tensor Notes ----- In Chung, et al. 2006, the regression of the WLS fit needed an unbiased preliminary estimate of the weights and therefore the ordinary least squares (OLS) estimates were used. A "two pass" method was implemented: 1. calculate OLS estimates of the data 2. apply the OLS estimates as weights to the WLS fit of the data This ensured heteroscedasticity could be properly modeled for various types of bootstrap resampling (namely residual bootstrap). .. 
math:: y = \mathrm{data} \\ X = \mathrm{design matrix} \\ \hat{\beta}_\mathrm{WLS} = \mathrm{desired regression coefficients (e.g. tensor)}\\ \\ \hat{\beta}_\mathrm{WLS} = (X^T W X)^{-1} X^T W y \\ \\ W = \mathrm{diag}((X \hat{\beta}_\mathrm{OLS})^2), \mathrm{where} \hat{\beta}_\mathrm{OLS} = (X^T X)^{-1} X^T y References ---------- .. footbibliography:: """ tol = 1e-6 data = np.asarray(data) log_s = np.log(data) if weights is None: # calculate weights fit_result, _ = ols_fit_tensor( design_matrix, data, return_lower_triangular=True ) w = np.exp(fit_result @ design_matrix.T) else: w = np.sqrt(weights) # the weighted problem design_matrix * w is much larger (differs per voxel) if return_leverages is False: fit_result = np.einsum( "...ij,...j", np.linalg.pinv(design_matrix * w[..., None]), w * log_s ) leverages = None else: tmp = np.einsum( "...ij,...j->...ij", np.linalg.pinv(design_matrix * w[..., None]), w ) fit_result = np.einsum("...ij,...j", tmp, log_s) leverages = np.einsum("ij,...ji->...i", design_matrix, tmp) if leverages is not None: leverages = {"leverages": leverages} if return_lower_triangular: return fit_result, leverages if return_S0_hat: return ( eig_from_lo_tri(fit_result, min_diffusivity=tol / -design_matrix.min()), np.exp(-fit_result[:, -1]), ), leverages else: return eig_from_lo_tri( fit_result, min_diffusivity=tol / -design_matrix.min() ), leverages @iter_fit_tensor() @warning_for_keywords() def ols_fit_tensor( design_matrix, data, *, return_S0_hat=False, return_lower_triangular=False, return_leverages=False, ): r""" Computes ordinary least squares (OLS) fit to calculate self-diffusion tensor using a linear regression model. See :footcite:p:`Chung2006` for further details about the method. Parameters ---------- design_matrix : array (g, 7) Design matrix holding the covariants used to solve for the regression coefficients. data : array ([X, Y, Z, ...], g) Data or response variables holding the data. Note that the last dimension should contain the data. It makes no copies of data. return_S0_hat : bool, optional Boolean to return (True) or not (False) the S0 values for the fit. return_lower_triangular : bool, optional Boolean to return (True) or not (False) the coefficients of the fit. return_leverages : bool, optional Boolean to return (True) or not (False) the fitting leverages. Returns ------- eigvals : array (..., 3) Eigenvalues from eigen decomposition of the tensor. eigvecs : array (..., 3, 3) Associated eigenvectors from eigen decomposition of the tensor. Eigenvectors are columnar (e.g. eigvecs[:,j] is associated with eigvals[j]) leverages : array (g) Leverages of the fitting problem (if return_leverages is True) See Also -------- WLS_fit_tensor, decompose_tensor, design_matrix Notes ----- .. math:: y = \mathrm{data} \\ X = \mathrm{design matrix} \\ \hat{\beta}_\mathrm{OLS} = (X^T X)^{-1} X^T y References ---------- .. 
footbibliography:: """ tol = 1e-6 data = np.asarray(data) if return_leverages is False: fit_result = np.einsum( "...ij,...j", np.linalg.pinv(design_matrix), np.log(data) ) leverages = None else: tmp = np.linalg.pinv(design_matrix) fit_result = np.einsum("...ij,...j", tmp, np.log(data)) leverages = np.einsum("ij,ji->i", design_matrix, tmp) if leverages is not None: leverages = {"leverages": leverages} if return_lower_triangular: return fit_result, leverages if return_S0_hat: return ( eig_from_lo_tri(fit_result, min_diffusivity=tol / -design_matrix.min()), np.exp(-fit_result[:, -1]), ), leverages else: return eig_from_lo_tri( fit_result, min_diffusivity=tol / -design_matrix.min() ), leverages def _ols_fit_matrix(design_matrix): """ Helper function to calculate the ordinary least squares (OLS) fit as a matrix multiplication. Mainly used to calculate WLS weights. Can be used to calculate regression coefficients in OLS but not recommended. See Also -------- wls_fit_tensor, ols_fit_tensor Examples --------- ols_fit = _ols_fit_matrix(design_mat) ols_data = np.dot(ols_fit, data) """ U, S, V = np.linalg.svd(design_matrix, False) return np.dot(U, U.T) class _NllsHelper: r"""Class with member functions to return nlls error and derivative.""" def err_func(self, tensor, design_matrix, data, weights=None): r""" Error function for the non-linear least-squares fit of the tensor. Parameters ---------- tensor : array (3,3) The 3-by-3 tensor matrix design_matrix : array The design matrix data : array The voxel signal in all gradient directions weights : array ([X, Y, Z, ...], g), optional Weights to apply for fitting. These weights must correspond to the squared residuals such that $S = \sum_i w_i r_i^2$. """ # This is the predicted signal given the params: y = np.exp(np.dot(design_matrix, tensor)) self.y = y # cache the results # Compute the residuals residuals = data - y # Set weights if weights is None: self.sqrt_w = 1 # cache weights for the *non-squared* residuals # And we return the unweighted residuals: return residuals else: # Return the weighted residuals: with warnings.catch_warnings(): warnings.simplefilter("ignore") self.sqrt_w = np.sqrt(weights) ans = self.sqrt_w * residuals if np.iterable(weights): # cache the weights for the *non-squared* residuals self.sqrt_w = self.sqrt_w[:, None] return ans def jacobian_func(self, tensor, design_matrix, data, weights=None): r"""The Jacobian is the first derivative of the error function. Parameters ---------- tensor : array (3,3) The 3-by-3 tensor matrix design_matrix : array The design matrix data : array The voxel signal in all gradient directions weights : array ([X, Y, Z, ...], g), optional Weights to apply for fitting. These weights must correspond to the squared residuals such that $S = \sum_i w_i r_i^2$. Notes ----- This Jacobian correctly accounts for weights on the squared residuals if provided. References ---------- .. footbibliography:: """ # minus sign, because derivative of residuals = data - y # sqrt(w) because w corresponds to the squared residuals if weights is None: return -self.y[:, None] * design_matrix else: return -self.y[:, None] * design_matrix * self.sqrt_w @warning_for_keywords() def _decompose_tensor_nan(tensor, tensor_alternative, *, min_diffusivity=0): """Helper function that expands the function decompose_tensor to deal with tensors with nan elements. Computes tensor eigen decomposition to calculate eigenvalues and eigenvectors (Basser et al., 1994a).
Some fit approaches can produce nan tensor elements in background voxels (particularly non-linear approaches). This function avoids the eigen decomposition errors of nan tensor elements by replacing tensors that contain nan elements with a given alternative tensor estimate. Parameters ---------- tensor : array (3, 3) Hermitian matrix representing a diffusion tensor. tensor_alternative : array (3, 3) Hermitian matrix representing a diffusion tensor obtained from an approach that does not produce nan tensor elements min_diffusivity : float, optional Because negative eigenvalues are not physical and small eigenvalues, much smaller than the diffusion weighting, cause quite a lot of noise in metrics such as fa, diffusivity values smaller than `min_diffusivity` are replaced with `min_diffusivity`. Returns ------- eigvals : array (3) Eigenvalues from eigen decomposition of the tensor. Negative eigenvalues are replaced by zero. Sorted from largest to smallest. eigvecs : array (3, 3) Associated eigenvectors from eigen decomposition of the tensor. Eigenvectors are columnar (e.g. eigvecs[..., :, j] is associated with eigvals[..., j]) """ try: evals, evecs = decompose_tensor(tensor[:6], min_diffusivity=min_diffusivity) except np.linalg.LinAlgError: evals, evecs = decompose_tensor( tensor_alternative[:6], min_diffusivity=min_diffusivity ) return evals, evecs @warning_for_keywords() def nlls_fit_tensor( design_matrix, data, *, weights=None, jac=True, return_S0_hat=False, fail_is_nan=False, return_lower_triangular=False, return_leverages=False, init_params=None, ): r""" Fit the cumulant expansion params (e.g. DTI, DKI) using non-linear least-squares. Parameters ---------- design_matrix : array (g, Npar) Design matrix holding the covariants used to solve for the regression coefficients. First six parameters of design matrix should correspond to the six unique diffusion tensor elements in the lower triangular order (Dxx, Dxy, Dyy, Dxz, Dyz, Dzz), while the last parameter corresponds to -log(S0) data : array ([X, Y, Z, ...], g) Data or response variables holding the data. Note that the last dimension should contain the data. It makes no copies of data. weights : array ([X, Y, Z, ...], g), optional Weights to apply for fitting. These weights must correspond to the squared residuals such that $S = \sum_i w_i r_i^2$. jac : bool, optional Use the Jacobian? return_S0_hat : bool, optional Boolean to return (True) or not (False) the S0 values for the fit. fail_is_nan : bool, optional Boolean to set failed NL fitting to NaN (True) or LS (False, default). return_lower_triangular : bool, optional Boolean to return (True) or not (False) the coefficients of the fit. return_leverages : bool, optional Boolean to return (True) or not (False) the fitting leverages. init_params : array ([X, Y, Z, ...], Npar), optional Parameters in lower triangular form as initial optimization guess. Returns ------- nlls_params: the eigen-values and eigen-vectors of the tensor in each voxel.
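Examples
--------
A minimal sketch of typical usage (``gtab`` is an assumed GradientTable
and ``data`` an assumed ``([X, Y, Z], g)`` array; shapes are illustrative
only)::

    dm = design_matrix(gtab)
    params, extra = nlls_fit_tensor(dm, data)
    evals = params[..., :3]
    evecs = params[..., 3:12].reshape(data.shape[:-1] + (3, 3))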
""" tol = 1e-6 # Detect number of parameters to estimate from design_matrix length plus # 5 due to diffusion tensor conversion to eigenvalue and eigenvectors npa = design_matrix.shape[-1] + 5 # Detect if number of parameters corresponds to dti dti = npa == 12 # Flatten for the iteration over voxels: flat_data = data.reshape((-1, data.shape[-1])) if init_params is None: # Use the OLS method parameters as the starting point for nlls D, extra = ols_fit_tensor( design_matrix, flat_data, return_lower_triangular=True, return_leverages=return_leverages, ) if extra is not None: leverages = extra["leverages"] # Flatten for the iteration over voxels: ols_params = np.reshape(D, (-1, D.shape[-1])) else: # Replace starting guess for opt (usually ols_params) with init_params ols_params = init_params # Initialize parameter matrix params = np.empty((flat_data.shape[0], npa)) # Initialize parameter matrix for storing flattened parameters flat_params = np.empty_like(ols_params) # For warnings resort_to_OLS = False # Instance of _NllsHelper, need for nlls error func and jacobian nlls = _NllsHelper() err_func = nlls.err_func jac_func = nlls.jacobian_func if jac else None if return_S0_hat: model_S0 = np.empty((flat_data.shape[0], 1)) for vox in range(flat_data.shape[0]): if np.all(flat_data[vox] == 0): raise ValueError("The data in this voxel contains only zeros") start_params = ols_params[vox] weights_vox = weights[vox] if weights is not None else None try: # Do the optimization in this voxel: this_param, status = opt.leastsq( err_func, start_params, args=(design_matrix, flat_data[vox], weights_vox), Dfun=jac_func, ) flat_params[vox] = this_param # Convert diffusion tensor parameters to the evals and the evecs: evals, evecs = decompose_tensor( from_lower_triangular(this_param[:6]), min_diffusivity=tol / -design_matrix.min(), ) params[vox, :3] = evals params[vox, 3:12] = evecs.ravel() # If leastsq failed to converge and produced nans, we'll resort to the # OLS solution in this voxel: except (np.linalg.LinAlgError, TypeError): resort_to_OLS = True this_param = start_params flat_params[vox] = this_param # NOTE: ignores fail_is_nan if not fail_is_nan: # Convert diffusion tensor parameters to evals and evecs evals, evecs = decompose_tensor( from_lower_triangular(this_param[:6]), min_diffusivity=tol / -design_matrix.min(), ) params[vox, :3] = evals params[vox, 3:12] = evecs.ravel() else: # Set NaN values this_param[:] = np.nan # so that S0_hat is NaN params[vox, :] = np.nan if return_S0_hat: model_S0[vox] = np.exp(-this_param[-1]) if not dti: md2 = evals.mean(0) ** 2 params[vox, 12:] = this_param[6:-1] / md2 if resort_to_OLS: warnings.warn(ols_resort_msg, UserWarning, stacklevel=2) if return_leverages: leverages = {"leverages": leverages} else: leverages = None if return_lower_triangular: return flat_params, leverages params.shape = data.shape[:-1] + (npa,) if return_S0_hat: model_S0.shape = data.shape[:-1] + (1,) return [params, model_S0], None else: return params, None @warning_for_keywords() def restore_fit_tensor( design_matrix, data, *, sigma=None, jac=True, return_S0_hat=False, fail_is_nan=False ): """Compute a robust tensor fit using the RESTORE algorithm. Note that the RESTORE algorithm defined in :footcite:p:`Chang2005` does not define Geman–McClure M-estimator weights as claimed (instead, Cauchy M-estimator weights are defined), but this function does define correct Geman–McClure M-estimator weights. 
Parameters ---------- design_matrix : array of shape (g, 7) Design matrix holding the covariants used to solve for the regression coefficients. data : array of shape ([X, Y, Z, n_directions], g) Data or response variables holding the data. Note that the last dimension should contain the data. It makes no copies of data. sigma : float, optional An estimate of the variance. :footcite:p:`Chang2005` recommend to use 1.5267 * std(background_noise), where background_noise is estimated from some part of the image known to contain no signal (only noise). If not provided, will be estimated per voxel as: sigma = 1.4826 * sqrt(N / (N - p)) * MAD(residuals) as in :footcite:p:`Chang2012` but with the additional correction factor 1.4826 required to link standard deviation to MAD. jac : bool, optional Whether to use the Jacobian of the tensor to speed the non-linear optimization procedure used to fit the tensor parameters (see also :func:`nlls_fit_tensor`). return_S0_hat : bool, optional Boolean to return (True) or not (False) the S0 values for the fit. fail_is_nan : bool, optional Boolean to set failed NL fitting to NaN (True) or LS (False). Returns ------- restore_params : an estimate of the tensor parameters in each voxel. References ---------- .. footbibliography:: """ # Detect number of parameters to estimate from design_matrix length plus # 5 due to diffusion tensor conversion to eigenvalue and eigenvectors npa = design_matrix.shape[-1] + 5 # Detect if number of parameters corresponds to dti dti = npa == 12 # define some constants p = design_matrix.shape[-1] N = data.shape[-1] factor = 1.4826 * np.sqrt(N / (N - p)) # Flatten for the iteration over voxels: flat_data = data.reshape((-1, data.shape[-1])) # calculate OLS solution D, _ = ols_fit_tensor(design_matrix, flat_data, return_lower_triangular=True) # Flatten for the iteration over voxels: ols_params = np.reshape(D, (-1, D.shape[-1])) # Initialize parameter matrix params = np.empty((flat_data.shape[0], npa)) # For storing whether image is used in final fit for each voxel robust = np.ones(flat_data.shape, dtype=int) # For warnings resort_to_OLS = False # Instance of _NllsHelper, need for nlls error func and jacobian nlls = _NllsHelper() err_func = nlls.err_func jac_func = nlls.jacobian_func if jac else None if return_S0_hat: model_S0 = np.empty((flat_data.shape[0], 1)) for vox in range(flat_data.shape[0]): if np.all(flat_data[vox] == 0): raise ValueError("The data in this voxel contains only zeros") start_params = ols_params[vox] try: # Do unweighted nlls in this voxel: this_param, status = opt.leastsq( err_func, start_params, args=(design_matrix, flat_data[vox]), Dfun=jac_func, ) # Get the residuals: pred_sig = np.exp(np.dot(design_matrix, this_param)) residuals = flat_data[vox] - pred_sig # estimate or set sigma if sigma is not None: C = sigma else: C = factor * np.median(np.abs(residuals - np.median(residuals))) # If any of the residuals are outliers (using 3 sigma as a # criterion following Chang et al., e.g page 1089): test_sigma = np.any(np.abs(residuals) > 3 * C) # test for doing robust reweighting if test_sigma: rdx = 1 while rdx <= 10: # NOTE: capped at 10 iterations # GM weights (original Restore paper used Cauchy weights) C = factor * np.median(np.abs(residuals - np.median(residuals))) denominator = (C**2 + residuals**2) ** 2 gmm = np.divide( C**2, denominator, out=np.zeros_like(denominator), where=denominator != 0, ) # Do nlls with GMM-weighting: this_param, status = opt.leastsq( err_func, start_params, args=(design_matrix, 
flat_data[vox], gmm), Dfun=jac_func, ) # Recalculate residuals given gmm fit pred_sig = np.exp(np.dot(design_matrix, this_param)) residuals = flat_data[vox] - pred_sig perc = ( 100 * np.linalg.norm(this_param - start_params) / np.linalg.norm(this_param) ) start_params = this_param if perc < 0.1: break rdx = rdx + 1 cond = np.abs(residuals) > 3 * C if np.any(cond): # If you still have outliers, refit without those outliers: non_outlier_idx = np.where(np.logical_not(cond)) clean_design = design_matrix[non_outlier_idx] clean_data = flat_data[vox][non_outlier_idx] robust[vox] = np.logical_not(cond) # recalculate OLS solution with clean data new_start, _ = ols_fit_tensor( clean_design, clean_data, return_lower_triangular=True ) this_param, status = opt.leastsq( err_func, new_start, args=(clean_design, clean_data), Dfun=jac_func, ) # Convert diffusion tensor parameters to the evals and the evecs: evals, evecs = decompose_tensor(from_lower_triangular(this_param[:6])) params[vox, :3] = evals params[vox, 3:12] = evecs.ravel() # If leastsq failed to converge and produced nans, we'll resort to the # OLS solution in this voxel: except (np.linalg.LinAlgError, TypeError): resort_to_OLS = True this_param = start_params if not fail_is_nan: # Convert diffusion tensor parameters to evals and evecs: evals, evecs = decompose_tensor(from_lower_triangular(this_param[:6])) params[vox, :3] = evals params[vox, 3:12] = evecs.ravel() else: # Set NaN values this_param[:] = np.nan # so that S0_hat is NaN params[vox, :] = np.nan if return_S0_hat: model_S0[vox] = np.exp(-this_param[-1]) if not dti: md2 = evals.mean(0) ** 2 params[vox, 12:] = this_param[6:-1] / md2 if resort_to_OLS: warnings.warn(ols_resort_msg, UserWarning, stacklevel=2) params.shape = data.shape[:-1] + (npa,) extra = {"robust": robust} if return_S0_hat: model_S0.shape = data.shape[:-1] + (1,) return [params, model_S0], extra else: return params, extra def iterative_fit_tensor( design_matrix, data, *, jac=True, return_S0_hat=False, fit_type=None, num_iter=4, weights_method=None, ): """Iteratively Reweighted fitting for the DTI/DKI model. Parameters ---------- design_matrix : ndarray of shape (g, ...) Design matrix holding the covariants used to solve for the regression coefficients. data : ndarray of shape ([X, Y, Z, n_directions], g) Data or response variables holding the data. Note that the last dimension should contain the data. It makes no copies of data. jac : bool, optional Use the Jacobian for NLLS fitting (does nothing for WLS fitting). return_S0_hat : bool, optional Boolean to return (True) or not (False) the S0 values for the fit. fit_type : str, optional Whether to use NLLS or WLS fitting scheme. num_iter : int, optional Number of times to iterate. weights_method : callable, optional A function with args and returns as follows:: (weights, robust) = weights_method(data, pred_sig, design_matrix, leverages, idx, num_iter, robust) Notes ----- Take care to supply an appropriate weights_method for the fit_type. It is possible to use NLLS fitting with weights designed for WLS fitting, but this is a user error. 
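A minimal sketch of a compatible ``weights_method`` (a hypothetical
identity scheme shown only to illustrate the expected signature; DIPY's
own M-estimator weight functions should be preferred in practice)::

    def unit_weights(data, pred_sig, design_matrix, leverages,
                     idx, total_idx, robust):
        # every measurement keeps unit weight; nothing flagged as outlier
        return np.ones_like(data), None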
""" tol = 1e-6 if fit_type is None: raise ValueError("fit_type must be provided") if weights_method is None: raise ValueError("weights_method must be provided") if num_iter < 2: # otherwise, weights_method will not be utilized raise ValueError("num_iter must be 2+") if fit_type not in ["WLS", "NLLS"]: raise ValueError("fit_type must be 'WLS' or 'NLLS'") # Detect number of parameters to estimate from design_matrix length plus # 5 due to diffusion tensor conversion to eigenvalue and eigenvectors p = design_matrix.shape[-1] N = data.shape[-1] if N <= p: raise ValueError("Fewer data points than parameters.") # Detect if number of parameters corresponds to dti npa = p + 5 dti = npa == 12 w, robust = None, None # w = None means wls_fit_tensor uses WLS weights D, extra, leverages = None, None, None # initialize, for clarity TDX = num_iter for rdx in range(1, TDX + 1): if rdx > 1: log_pred_sig = np.dot(design_matrix, D.T).T pred_sig = np.exp(log_pred_sig) w, robust = weights_method( data, pred_sig, design_matrix, leverages, rdx, TDX, robust ) if fit_type == "WLS": D, extra = wls_fit_tensor( design_matrix, data, weights=w, return_lower_triangular=True, return_leverages=True, ) leverages = extra["leverages"] # for WLS, update leverages if fit_type == "NLLS": D, extra = nlls_fit_tensor( design_matrix, data, weights=w, return_lower_triangular=True, return_leverages=(rdx == 1), jac=jac, init_params=D, ) if rdx == 1: # for NLLS, leverages from OLS, so they never change leverages = extra["leverages"] # Convert diffusion tensor parameters to the evals and the evecs: evals, evecs = decompose_tensor( from_lower_triangular(D[:, :6]), min_diffusivity=tol / -design_matrix.min() ) params = np.empty((data.shape[0:-1] + (npa,))) params[:, :3] = evals params[:, 3:12] = evecs.reshape(params.shape[0:-1] + (-1,)) if return_S0_hat: model_S0 = np.exp(-D[:, -1]) if not dti: md2 = evals.mean(axis=1)[:, None] ** 2 # NOTE: changed from axis=0 params[:, 12:] = D[:, 6:-1] / md2 extra = {"robust": robust} if return_S0_hat: model_S0.shape = data.shape[:-1] + (1,) return [params, model_S0], extra else: return params, extra def robust_fit_tensor_wls(design_matrix, data, *, return_S0_hat=False, num_iter=4): """Iteratively Reweighted fitting for WLS for the DTI/DKI model. Parameters ---------- design_matrix : ndarray of shape (g, ...) Design matrix holding the covariants used to solve for the regression coefficients. data : ndarray of shape ([X, Y, Z, n_directions], g) Data or response variables holding the data. Note that the last dimension should contain the data. It makes no copies of data. return_S0_hat : bool, optional Boolean to return (True) or not (False) the S0 values for the fit. num_iter : int, optional Number of times to iterate. Notes ----- This is a convenience function that does:: iterative_fit_tensor(*args, **kwargs, fit_type="WLS", weights_method=weights_method_wls) """ return iterative_fit_tensor( design_matrix, data, return_S0_hat=return_S0_hat, fit_type="WLS", num_iter=num_iter, weights_method=weights_method_wls_m_est, ) def robust_fit_tensor_nlls( design_matrix, data, *, jac=True, return_S0_hat=False, num_iter=4 ): """Iteratively Reweighted fitting for NLLS for the DTI/DKI model. Parameters ---------- design_matrix : ndarray of shape (g, ...) Design matrix holding the covariants used to solve for the regression coefficients. data : ndarray of shape ([X, Y, Z, n_directions], g) Data or response variables holding the data. Note that the last dimension should contain the data. It makes no copies of data. 
jac : bool, optional Use the Jacobian? return_S0_hat : bool, optional Boolean to return (True) or not (False) the S0 values for the fit. num_iter : int, optional Number of times to iterate. Notes ----- This is a convenience function that does:: iterative_fit_tensor(*args, **kwargs, fit_type="NLLS", weights_method=weights_method_nlls) """ return iterative_fit_tensor( design_matrix, data, jac=jac, return_S0_hat=return_S0_hat, fit_type="NLLS", num_iter=num_iter, weights_method=weights_method_nlls_m_est, ) _lt_indices = np.array([[0, 1, 3], [1, 2, 4], [3, 4, 5]]) def from_lower_triangular(D): """Returns a tensor given the six unique tensor elements Given the six unique tensor elements (in the order: Dxx, Dxy, Dyy, Dxz, Dyz, Dzz) returns a 3 by 3 tensor. All elements after the sixth are ignored. Parameters ---------- D : array_like, (..., >6) Unique elements of the tensors Returns ------- tensor : ndarray (..., 3, 3) 3 by 3 tensors """ return D[..., _lt_indices] _lt_rows = np.array([0, 1, 1, 2, 2, 2]) _lt_cols = np.array([0, 0, 1, 0, 1, 2]) @warning_for_keywords() def lower_triangular(tensor, *, b0=None): """ Returns the six lower triangular values of the tensor ordered as (Dxx, Dxy, Dyy, Dxz, Dyz, Dzz) and a dummy variable if b0 is not None. Parameters ---------- tensor : array_like (..., 3, 3) a collection of 3, 3 diffusion tensors b0 : float, optional if b0 is not none log(b0) is returned as the dummy variable Returns ------- D : ndarray If b0 is none, then the shape will be (..., 6) otherwise (..., 7) """ if tensor.shape[-2:] != (3, 3): raise ValueError("Diffusion tensors should be (..., 3, 3)") if b0 is None: return tensor[..., _lt_rows, _lt_cols] else: D = np.empty(tensor.shape[:-2] + (7,), dtype=tensor.dtype) D[..., 6] = -np.log(b0) D[..., :6] = tensor[..., _lt_rows, _lt_cols] return D @warning_for_keywords() def decompose_tensor(tensor, *, min_diffusivity=0): """Returns eigenvalues and eigenvectors given a diffusion tensor Computes tensor eigen decomposition to calculate eigenvalues and eigenvectors (Basser et al., 1994a). Parameters ---------- tensor : array (..., 3, 3) Hermitian matrix representing a diffusion tensor. min_diffusivity : float, optional Because negative eigenvalues are not physical and small eigenvalues, much smaller than the diffusion weighting, cause quite a lot of noise in metrics such as fa, diffusivity values smaller than `min_diffusivity` are replaced with `min_diffusivity`. Returns ------- eigvals : array (..., 3) Eigenvalues from eigen decomposition of the tensor. Negative eigenvalues are replaced by zero. Sorted from largest to smallest. eigvecs : array (..., 3, 3) Associated eigenvectors from eigen decomposition of the tensor. Eigenvectors are columnar (e.g. 
eigvecs[..., :, j] is associated with eigvals[..., j]) """ # outputs multiplicity as well so need to unique eigenvals, eigenvecs = np.linalg.eigh(tensor) # need to sort the eigenvalues and associated eigenvectors if eigenvals.ndim == 1: # this is a lot faster when dealing with a single voxel order = eigenvals.argsort()[::-1] eigenvecs = eigenvecs[:, order] eigenvals = eigenvals[order] else: # temporarily flatten eigenvals and eigenvecs to make sorting easier shape = eigenvals.shape[:-1] eigenvals = eigenvals.reshape(-1, 3) eigenvecs = eigenvecs.reshape(-1, 3, 3) size = eigenvals.shape[0] order = eigenvals.argsort()[:, ::-1] xi, yi = np.ogrid[:size, :3, :3][:2] eigenvecs = eigenvecs[xi, yi, order[:, None, :]] xi = np.ogrid[:size, :3][0] eigenvals = eigenvals[xi, order] eigenvecs = eigenvecs.reshape(shape + (3, 3)) eigenvals = eigenvals.reshape(shape + (3,)) eigenvals = eigenvals.clip(min=min_diffusivity) # eigenvecs: each vector is columnar return eigenvals, eigenvecs @warning_for_keywords() def design_matrix(gtab, *, dtype=None): """Constructs design matrix for DTI weighted least squares or least squares fitting. (Basser et al., 1994a) Parameters ---------- gtab : A GradientTable class instance dtype : str, optional Parameter to control the dtype of returned designed matrix Returns ------- design_matrix : array (g,7) Design matrix or B matrix assuming Gaussian distributed tensor model design_matrix[j, :] = (Bxx, Byy, Bzz, Bxy, Bxz, Byz, dummy) """ if gtab.btens is None: B = np.zeros((gtab.gradients.shape[0], 7)) B[:, 0] = gtab.bvecs[:, 0] * gtab.bvecs[:, 0] * 1.0 * gtab.bvals # Bxx B[:, 1] = gtab.bvecs[:, 0] * gtab.bvecs[:, 1] * 2.0 * gtab.bvals # Bxy B[:, 2] = gtab.bvecs[:, 1] * gtab.bvecs[:, 1] * 1.0 * gtab.bvals # Byy B[:, 3] = gtab.bvecs[:, 0] * gtab.bvecs[:, 2] * 2.0 * gtab.bvals # Bxz B[:, 4] = gtab.bvecs[:, 1] * gtab.bvecs[:, 2] * 2.0 * gtab.bvals # Byz B[:, 5] = gtab.bvecs[:, 2] * gtab.bvecs[:, 2] * 1.0 * gtab.bvals # Bzz B[:, 6] = np.ones(gtab.gradients.shape[0]) else: B = np.zeros((gtab.gradients.shape[0], 7)) B[:, 0] = gtab.btens[:, 0, 0] # Bxx B[:, 1] = gtab.btens[:, 0, 1] * 2 # Bxy B[:, 2] = gtab.btens[:, 1, 1] # Byy B[:, 3] = gtab.btens[:, 0, 2] * 2 # Bxz B[:, 4] = gtab.btens[:, 1, 2] * 2 # Byz B[:, 5] = gtab.btens[:, 2, 2] # Bzz B[:, 6] = np.ones(gtab.gradients.shape[0]) return -B @warning_for_keywords() def quantize_evecs(evecs, *, odf_vertices=None): """Find the closest orientation of an evenly distributed sphere Parameters ---------- evecs : ndarray Eigenvectors. odf_vertices : ndarray, optional If None, then set vertices from symmetric362 sphere. Otherwise use passed ndarray as vertices Returns ------- IN : ndarray """ max_evecs = evecs[..., :, 0] if odf_vertices is None: odf_vertices = get_sphere(name="symmetric362").vertices tup = max_evecs.shape[:-1] mec = max_evecs.reshape(np.prod(np.array(tup)), 3) IN = np.array([np.argmin(np.dot(odf_vertices, m)) for m in mec]) IN = IN.reshape(tup) return IN @warning_for_keywords() def eig_from_lo_tri(data, *, min_diffusivity=0): """ Calculates tensor eigenvalues/eigenvectors from an array containing the lower diagonal form of the six unique tensor elements. Parameters ---------- data : array_like (..., 6) diffusion tensors elements stored in lower triangular order min_diffusivity : float, optional See decompose_tensor() Returns ------- dti_params : array (..., 12) Eigen-values and eigen-vectors of the same array. 
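Examples
--------
As a quick illustration with a single synthetic, axially symmetric tensor
(values in $mm^2/s$):

>>> import numpy as np
>>> from dipy.reconst.dti import eig_from_lo_tri
>>> D = np.array([0.0017, 0.0, 0.0003, 0.0, 0.0, 0.0003])
>>> eig_from_lo_tri(D).shape
(12,)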
""" data = np.asarray(data) evals, evecs = decompose_tensor( from_lower_triangular(data), min_diffusivity=min_diffusivity ) dti_params = np.concatenate((evals[..., None, :], evecs), axis=-2) return dti_params.reshape(data.shape[:-1] + (12,)) common_fit_methods = { "WLS": wls_fit_tensor, "WLLS": wls_fit_tensor, "LS": ols_fit_tensor, "LLS": ols_fit_tensor, "OLS": ols_fit_tensor, "OLLS": ols_fit_tensor, "NLS": nlls_fit_tensor, "NLLS": nlls_fit_tensor, "RESTORE": restore_fit_tensor, "IRLS": iterative_fit_tensor, "RWLS": robust_fit_tensor_wls, "RNLLS": robust_fit_tensor_nlls, } dipy-1.11.0/dipy/reconst/eudx_direction_getter.pyx000066400000000000000000000104541476546756600223400ustar00rootroot00000000000000# cython: boundscheck=False # cython: initializedcheck=False # cython: wraparound=False # cython: nonecheck=False cimport cython cimport numpy as cnp import numpy as np from dipy.tracking.propspeed cimport _propagation_direction from dipy.tracking.direction_getter cimport DirectionGetter cdef extern from "dpy_math.h" nogil: double dpy_rint(double x) cdef class EuDXDirectionGetter(DirectionGetter): """Deterministic Direction Getter based on peak directions. This class contains the cython portion of the code for PeaksAndMetrics and is not meant to be used on its own. """ cdef: public double qa_thr, ang_thr, total_weight public double[:, :, :, ::1] _qa, _ind public double[:, ::1] _odf_vertices int initialized def __cinit__(self): initialized = False self.qa_thr = 0.0239 self.ang_thr = 60 self.total_weight = .5 self.sphere = None self.peak_indices = None self.peak_values = None self.peak_dirs = None self.gfa = None self.qa = None self.shm_coeff = None self.B = None self.odf = None def _initialize(self): """First time that a PAM instance is used as a direction getter, initialize all the memoryviews. """ if self.peak_values.shape != self.peak_indices.shape: msg = "shapes of peak_values and peak_indices do not match" raise ValueError(msg) self._qa = np.ascontiguousarray(self.peak_values, dtype=np.double) self._ind = np.ascontiguousarray(self.peak_indices, dtype=np.double) self._odf_vertices = np.asarray( self.sphere.vertices, dtype=np.double, order='C' ) self.initialized = True cpdef cnp.ndarray[cnp.float_t, ndim=2] initial_direction(self, double[::1] point): """The best starting directions for fiber tracking from point All the valid peaks in the voxel closest to point are returned as initial directions. """ if not self.initialized: self._initialize() cdef: cnp.npy_intp numpeaks, i cnp.npy_intp ijk[3] # ijk is the closest voxel to point for i in range(3): ijk[i] = dpy_rint(point[i]) if ijk[i] < 0 or ijk[i] >= self._ind.shape[i]: raise IndexError("point outside data") # Check to see how many peaks were found in the voxel numpeaks=0 for i in range(self.peak_values.shape[3]): # Test if the value associated to the peak is > 0 if self._qa[ijk[0], ijk[1], ijk[2], i] > 0: numpeaks = numpeaks + 1 else: break # Create directions array and copy peak directions from vertices res = np.empty((numpeaks, 3)) for i in range(numpeaks): peak_index = self._ind[ijk[0], ijk[1], ijk[2], i] res[i, :] = self._odf_vertices[ peak_index, :] return res @cython.initializedcheck(False) @cython.boundscheck(False) @cython.wraparound(False) cdef int get_direction_c(self, double[::1] point, double[::1] direction): """Interpolate closest peaks to direction from voxels neighboring point Update direction and return 0 if successful. If no tracking direction could be found, return 1. 
""" if not self.initialized: self._initialize() cdef: cnp.npy_intp s double newdirection[3] cnp.npy_intp qa_shape[4] cnp.npy_intp qa_strides[4] for i in range(4): qa_shape[i] = self._qa.shape[i] qa_strides[i] = self._qa.strides[i] s = _propagation_direction(&point[0], &direction[0], &self._qa[0, 0, 0, 0], &self._ind[0, 0, 0, 0], &self._odf_vertices[0, 0], self.qa_thr, self.ang_thr, qa_shape, qa_strides, newdirection, self.total_weight) if s: for i in range(3): direction[i] = newdirection[i] return 0 else: return 1 dipy-1.11.0/dipy/reconst/forecast.py000066400000000000000000000400461476546756600173770ustar00rootroot00000000000000from warnings import warn import numpy as np from scipy.optimize import leastsq from scipy.special import gamma, hyp1f1 from dipy.core.geometry import cart2sphere from dipy.data import default_sphere from dipy.reconst.cache import Cache from dipy.reconst.csdeconv import csdeconv from dipy.reconst.multi_voxel import multi_voxel_fit from dipy.reconst.odf import OdfFit, OdfModel from dipy.reconst.shm import real_sh_descoteaux_from_index from dipy.testing.decorators import warning_for_keywords from dipy.utils.deprecator import deprecated_params from dipy.utils.optpkg import optional_package cvxpy, have_cvxpy, _ = optional_package("cvxpy", min_version="1.4.1") class ForecastModel(OdfModel, Cache): r"""Fiber ORientation Estimated using Continuous Axially Symmetric Tensors (FORECAST). FORECAST :footcite:p:`Anderson2005`, :footcite:p:`Kaden2016a`, :footcite:p:`Zucchelli2017` is a Spherical Deconvolution reconstruction model for multi-shell diffusion data which enables the calculation of a voxel adaptive response function using the Spherical Mean Technique (SMT) :footcite:p:`Kaden2016a`, :footcite:p:`Zucchelli2017`. With FORECAST it is possible to calculate crossing invariant parallel diffusivity, perpendicular diffusivity, mean diffusivity, and fractional anisotropy :footcite:p:`Kaden2016a`. References ---------- .. footbibliography:: Notes ----- The implementation of FORECAST may require CVXPY (https://www.cvxpy.org/). """ @deprecated_params("sh_order", new_name="sh_order_max", since="1.9", until="2.0") @warning_for_keywords() def __init__( self, gtab, *, sh_order_max=8, lambda_lb=1e-3, dec_alg="CSD", sphere=None, lambda_csd=1.0, ): r"""Analytical and continuous modeling of the diffusion signal with respect to the FORECAST basis. This implementation is a modification of the original FORECAST model presented in :footcite:p:`Anderson2005` adapted for multi-shell data as in :footcite:p:`Kaden2016a`, :footcite:p:`Zucchelli2017`. The main idea is to model the diffusion signal as the combination of a single fiber response function $F(\mathbf{b})$ times the fODF $\rho(\mathbf{v})$ .. math:: E(\mathbf{b}) = \int_{\mathbf{v} \in \mathcal{S}^2} \rho(\mathbf{v}) F({\mathbf{b}} | \mathbf{v}) d \mathbf{v} where $\mathbf{b}$ is the b-vector (b-value times gradient direction) and $\mathbf{v}$ is a unit vector representing a fiber direction. In FORECAST $\rho$ is modeled using real symmetric Spherical Harmonics (SH) and $F(\mathbf(b))$ is an axially symmetric tensor. Parameters ---------- gtab : GradientTable, gradient directions and bvalues container class. sh_order_max : unsigned int, optional an even integer that represent the maximal SH order ($l$) of the basis (max 12) lambda_lb: float, optional Laplace-Beltrami regularization weight. dec_alg : str, optional Spherical deconvolution algorithm. 
The possible values are Weighted Least Squares ('WLS'), Positivity Constraints using CVXPY ('POS') and the Constraint Spherical Deconvolution algorithm ('CSD'). Default is 'CSD'. sphere : array, shape (N,3), optional sphere points where to enforce positivity when 'POS' or 'CSD' dec_alg are selected. lambda_csd : float, optional CSD regularization weight. References ---------- .. footbibliography:: Examples -------- In this example, where the data, gradient table and sphere tessellation used for reconstruction are provided, we model the diffusion signal with respect to the FORECAST and compute the fODF, parallel and perpendicular diffusivity. >>> import warnings >>> from dipy.data import default_sphere, get_3shell_gtab >>> gtab = get_3shell_gtab() >>> from dipy.sims.voxel import multi_tensor >>> mevals = np.array(([0.0017, 0.0003, 0.0003], ... [0.0017, 0.0003, 0.0003])) >>> angl = [(0, 0), (60, 0)] >>> data, sticks = multi_tensor(gtab, ... mevals, ... S0=100.0, ... angles=angl, ... fractions=[50, 50], ... snr=None) >>> from dipy.reconst.forecast import ForecastModel >>> from dipy.reconst.shm import descoteaux07_legacy_msg >>> with warnings.catch_warnings(): ... warnings.filterwarnings( ... "ignore", message=descoteaux07_legacy_msg, ... category=PendingDeprecationWarning) ... fm = ForecastModel(gtab, sh_order_max=6) >>> f_fit = fm.fit(data) >>> d_par = f_fit.dpar >>> d_perp = f_fit.dperp >>> with warnings.catch_warnings(): ... warnings.filterwarnings( ... "ignore", message=descoteaux07_legacy_msg, ... category=PendingDeprecationWarning) ... fodf = f_fit.odf(default_sphere) """ # noqa: E501 OdfModel.__init__(self, gtab) # round the bvals in order to avoid numerical errors self.bvals = np.round(gtab.bvals / 100) * 100 self.bvecs = gtab.bvecs if 0 <= sh_order_max <= 12 and not bool(sh_order_max % 2): self.sh_order_max = sh_order_max else: msg = "sh_order_max must be a non-zero even positive number " msg += "between 2 and 12" raise ValueError(msg) if sphere is None: sphere = default_sphere self.vertices = sphere.vertices[0 : int(sphere.vertices.shape[0] / 2), :] else: self.vertices = sphere self.b0s_mask = self.bvals == 0 self.one_0_bvals = np.r_[0, self.bvals[~self.b0s_mask]] self.one_0_bvecs = np.r_[ np.array([0, 0, 0]).reshape(1, 3), self.bvecs[~self.b0s_mask, :] ] self.rho = rho_matrix(self.sh_order_max, self.one_0_bvecs) # signal regularization matrix self.srm = rho_matrix(4, self.one_0_bvecs) self.lb_matrix_signal = lb_forecast(4) self.b_unique = np.sort(np.unique(self.bvals[self.bvals > 0])) self.wls = True self.csd = False self.pos = False if dec_alg.upper() == "POS": if not have_cvxpy: cvxpy.import_error() self.wls = False self.pos = True if dec_alg.upper() == "CSD": self.csd = True self.lb_matrix = lb_forecast(self.sh_order_max) self.lambda_lb = lambda_lb self.lambda_csd = lambda_csd self.fod = rho_matrix(sh_order_max, self.vertices) @multi_voxel_fit def fit(self, data, **kwargs): data_b0 = data[self.b0s_mask].mean() data_single_b0 = np.r_[data_b0, data[~self.b0s_mask]] data_single_b0 = np.divide( data_single_b0, data_b0, out=np.zeros_like(data_single_b0), where=data_b0 != 0, ) # calculates the mean signal at each b_values means = find_signal_means( self.b_unique, data_single_b0, self.one_0_bvals, self.srm, self.lb_matrix_signal, ) # average diffusivity initialization x = np.array([np.pi / 4, np.pi / 4]) x, status = leastsq(forecast_error_func, x, args=(self.b_unique, means)) # transform to bound the diffusivities from 0 to 3e-03 d_par = np.cos(x[0]) ** 2 * 3e-03 d_perp = np.cos(x[1]) 
** 2 * 3e-03 if d_perp >= d_par: d_par, d_perp = d_perp, d_par # round to avoid memory explosion diff_key = str(int(np.round(d_par * 1e05))) + str(int(np.round(d_perp * 1e05))) M_diff = self.cache_get("forecast_matrix", key=diff_key) if M_diff is None: M_diff = forecast_matrix(self.sh_order_max, d_par, d_perp, self.one_0_bvals) self.cache_set("forecast_matrix", key=diff_key, value=M_diff) M = M_diff * self.rho M0 = M[:, 0] c0 = np.sqrt(1.0 / (4 * np.pi)) # coefficients vector initialization n_c = int((self.sh_order_max + 1) * (self.sh_order_max + 2) / 2) coef = np.zeros(n_c) coef[0] = c0 if int(np.round(d_par * 1e05)) > int(np.round(d_perp * 1e05)): if self.wls: data_r = data_single_b0 - M0 * c0 Mr = M[:, 1:] Lr = self.lb_matrix[1:, 1:] pseudo_inv = np.dot( np.linalg.inv(np.dot(Mr.T, Mr) + self.lambda_lb * Lr), Mr.T ) coef = np.dot(pseudo_inv, data_r) coef = np.r_[c0, coef] if self.csd: coef, _ = csdeconv(data_single_b0, M, self.fod, tau=0.1, convergence=50) coef = coef / coef[0] * c0 if self.pos: c = cvxpy.Variable(M.shape[1]) design_matrix = cvxpy.Constant(M) @ c objective = cvxpy.Minimize( cvxpy.sum_squares(design_matrix - data_single_b0) + self.lambda_lb * cvxpy.quad_form(c, self.lb_matrix) ) constraints = [c[0] == c0, self.fod @ c >= 0] prob = cvxpy.Problem(objective, constraints) try: prob.solve(solver=cvxpy.OSQP, eps_abs=1e-05, eps_rel=1e-05) coef = np.asarray(c.value).squeeze() except Exception: warn("Optimization did not find a solution", stacklevel=2) coef = np.zeros(M.shape[1]) coef[0] = c0 return ForecastFit(self, data, coef, d_par, d_perp) class ForecastFit(OdfFit): def __init__(self, model, data, sh_coef, d_par, d_perp): """Calculates diffusion properties for a single voxel Parameters ---------- model : object, AnalyticalModel data : 1d ndarray, fitted data sh_coef : 1d ndarray, forecast sh coefficients d_par : float, parallel diffusivity d_perp : float, perpendicular diffusivity """ OdfFit.__init__(self, model, data) self.model = model self._sh_coef = sh_coef self.gtab = model.gtab self.sh_order_max = model.sh_order_max self.d_par = d_par self.d_perp = d_perp self.rho = None @warning_for_keywords() def odf(self, sphere, *, clip_negative=True): r"""Calculates the fODF for a given discrete sphere. Parameters ---------- sphere : Sphere, the odf sphere clip_negative : boolean, optional if True clip the negative odf values to 0, default True """ if self.rho is None: self.rho = rho_matrix(self.sh_order_max, sphere.vertices) odf = np.dot(self.rho, self._sh_coef) if clip_negative: odf = np.clip(odf, 0, odf.max()) return odf def fractional_anisotropy(self): r"""Calculates the fractional anisotropy.""" fa = np.sqrt( 0.5 * (2 * (self.d_par - self.d_perp) ** 2) / (self.d_par**2 + 2 * self.d_perp**2) ) return fa def mean_diffusivity(self): r"""Calculates the mean diffusivity.""" md = (self.d_par + 2 * self.d_perp) / 3.0 return md @warning_for_keywords() def predict(self, *, gtab=None, S0=1.0): r"""Predicts the diffusion signal for the directions and b-values of a given gradient table. Parameters ---------- gtab : GradientTable, optional gradient directions and bvalues container class.
S0 : float, optional the signal at b-value=0 """ if gtab is None: gtab = self.gtab M_diff = forecast_matrix(self.sh_order_max, self.d_par, self.d_perp, gtab.bvals) rho = rho_matrix(self.sh_order_max, gtab.bvecs) M = M_diff * rho S = S0 * np.dot(M, self._sh_coef) return S @property def sh_coeff(self): """The FORECAST SH coefficients""" return self._sh_coef @property def dpar(self): """The parallel diffusivity""" return self.d_par @property def dperp(self): """The perpendicular diffusivity""" return self.d_perp @warning_for_keywords() def find_signal_means(b_unique, data_norm, bvals, rho, lb_matrix, *, w=1e-03): r"""Calculate the mean signal for each shell. Parameters ---------- b_unique : 1d ndarray, unique b-values in a vector excluding zero data_norm : 1d ndarray, normalized diffusion signal bvals : 1d ndarray, the b-values rho : 2d ndarray, SH basis matrix for fitting the signal on each shell lb_matrix : 2d ndarray, Laplace-Beltrami regularization matrix w : float, weight for the Laplace-Beltrami regularization Returns ------- means : 1d ndarray the average of the signal for each b-values """ lb = len(b_unique) means = np.zeros(lb) for u in range(lb): ind = bvals == b_unique[u] shell = data_norm[ind] if np.sum(ind) > 20: M = rho[ind, :] coef = np.linalg.multi_dot( [np.linalg.inv(np.dot(M.T, M) + w * lb_matrix), M.T, shell] ) means[u] = coef[0] / np.sqrt(4 * np.pi) else: means[u] = shell.mean() return means def forecast_error_func(x, b_unique, E): r"""Calculates the difference between the mean signal calculated using the parameter vector x and the average signal E using FORECAST and SMT """ d_par = np.cos(x[0]) ** 2 * 3e-03 d_perp = np.cos(x[1]) ** 2 * 3e-03 if d_perp >= d_par: d_par, d_perp = d_perp, d_par E_reconst = ( 0.5 * np.exp(-b_unique * d_perp) * psi_l(0, (b_unique * (d_par - d_perp))) ) v = E - E_reconst return v def psi_l(ell, b): n = ell // 2 v = (-b) ** n v *= gamma(n + 1.0 / 2) / gamma(2 * n + 3.0 / 2) v *= hyp1f1(n + 1.0 / 2, 2 * n + 3.0 / 2, -b) return v @deprecated_params("sh_order", new_name="sh_order_max", since="1.9", until="2.0") def forecast_matrix(sh_order_max, d_par, d_perp, bvals): r"""Compute the FORECAST radial matrix""" n_c = int((sh_order_max + 1) * (sh_order_max + 2) / 2) M = np.zeros((bvals.shape[0], n_c)) counter = 0 for ell in range(0, sh_order_max + 1, 2): for _ in range(-ell, ell + 1): M[:, counter] = ( 2 * np.pi * np.exp(-bvals * d_perp) * psi_l(ell, bvals * (d_par - d_perp)) ) counter += 1 return M @deprecated_params("sh_order", new_name="sh_order_max", since="1.9", until="2.0") def rho_matrix(sh_order_max, vecs): r"""Compute the SH matrix $\rho$""" r, theta, phi = cart2sphere(vecs[:, 0], vecs[:, 1], vecs[:, 2]) theta[np.isnan(theta)] = 0 n_c = int((sh_order_max + 1) * (sh_order_max + 2) / 2) rho = np.zeros((vecs.shape[0], n_c)) counter = 0 for l_values in range(0, sh_order_max + 1, 2): for m_values in range(-l_values, l_values + 1): rho[:, counter] = real_sh_descoteaux_from_index( m_values, l_values, theta, phi ) counter += 1 return rho @deprecated_params("sh_order", new_name="sh_order_max", since="1.9", until="2.0") def lb_forecast(sh_order_max): r"""Returns the Laplace-Beltrami regularization matrix for FORECAST""" n_c = int((sh_order_max + 1) * (sh_order_max + 2) / 2) diag_lb = np.zeros(n_c) counter = 0 for j in range(0, sh_order_max + 1, 2): stop = 2 * j + 1 + counter diag_lb[counter:stop] = (j * (j + 1)) ** 2 counter = stop return np.diag(diag_lb) 
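# A rough usage sketch for the helpers above (values are illustrative;
# `vecs` is assumed to be any (N, 3) array of unit vectors and `bvals`
# a 1-D array of b-values):
#
#     reg = lb_forecast(4)           # (15, 15) Laplace-Beltrami regularizer
#     basis = rho_matrix(4, vecs)    # (N, 15) real SH design matrix
#     radial = forecast_matrix(4, 1.7e-3, 0.2e-3, bvals)  # (g, 15)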
dipy-1.11.0/dipy/reconst/fwdti.py000066400000000000000000000760331476546756600167130ustar00rootroot00000000000000"""Classes and functions for fitting tensors without free water contamination""" from functools import partial import warnings import numpy as np import scipy.optimize as opt from dipy.core.gradients import check_multi_b from dipy.core.ndindex import ndindex from dipy.reconst.base import ReconstModel from dipy.reconst.dki import _positive_evals from dipy.reconst.dti import ( TensorFit, _decompose_tensor_nan, decompose_tensor, design_matrix, from_lower_triangular, lower_triangular, ) from dipy.reconst.multi_voxel import multi_voxel_fit from dipy.reconst.vec_val_sum import vec_val_vect from dipy.testing.decorators import warning_for_keywords @warning_for_keywords() def fwdti_prediction(params, gtab, *, S0=1, Diso=3.0e-3): r"""Signal prediction given the free water DTI model parameters. Parameters ---------- params : (..., 13) ndarray Model parameters. The last dimension should have the 12 tensor parameters (3 eigenvalues, followed by the 3 corresponding eigenvectors) and the volume fraction of the free water compartment. gtab : a GradientTable class instance The gradient table for this prediction S0 : float or ndarray The non diffusion-weighted signal in every voxel, or across all voxels. Default: 1 Diso : float, optional Value of the free water isotropic diffusion. Default is set to 3e-3 $mm^{2}.s^{-1}$. Please adjust this value if you are assuming different units of diffusion. Returns ------- S : (..., N) ndarray Simulated signal based on the free water DTI model Notes ----- The predicted signal is given by: $S(\theta, b) = S_0 * [(1-f) * e^{-b ADC} + f * e^{-b D_{iso}]$, where $ADC = \theta Q \theta^T$, $\theta$ is a unit vector pointing at any direction on the sphere for which a signal is to be predicted, $b$ is the b value provided in the GradientTable input for that direction, $Q$ is the quadratic form of the tensor determined by the input parameters, $f$ is the free water diffusion compartment, $D_{iso}$ is the free water diffusivity which is equal to $3 * 10^{-3} mm^{2}s^{-1} :footcite:p:`NetoHenriques2017`. References ---------- .. footbibliography:: """ evals = params[..., :3] evecs = params[..., 3:-1].reshape(params.shape[:-1] + (3, 3)) f = params[..., 12] qform = vec_val_vect(evecs, evals) lower_dt = lower_triangular(qform, b0=S0) lower_diso = lower_dt.copy() lower_diso[..., 0] = lower_diso[..., 2] = lower_diso[..., 5] = Diso lower_diso[..., 1] = lower_diso[..., 3] = lower_diso[..., 4] = 0 D = design_matrix(gtab) pred_sig = np.zeros(f.shape + (gtab.bvals.shape[0],)) mask = _positive_evals(evals[..., 0], evals[..., 1], evals[..., 2]) index = ndindex(f.shape) for v in index: if mask[v]: pred_sig[v] = (1 - f[v]) * np.exp(np.dot(lower_dt[v], D.T)) + f[v] * np.exp( np.dot(lower_diso[v], D.T) ) return pred_sig class FreeWaterTensorModel(ReconstModel): """Class for the Free Water Elimination Diffusion Tensor Model""" def __init__(self, gtab, *args, fit_method="NLS", **kwargs): """Free Water Diffusion Tensor Model. See :footcite:p:`NetoHenriques2017` for further details about the model. Parameters ---------- gtab : GradientTable class instance Gradient table. 
fit_method : str or callable str can be one of the following: - 'WLS' for weighted linear least square fit according to :footcite:p:`NetoHenriques2017` :func:`fwdti.wls_iter` - 'NLS' for non-linear least square fit according to :footcite:p:`NetoHenriques2017` :func:`fwdti.nls_iter` callable has to have the signature: ``fit_method(design_matrix, data, *args, **kwargs)`` args, kwargs : arguments and key-word arguments passed to the fit_method. See fwdti.wls_iter, fwdti.nls_iter for details References ---------- .. footbibliography:: """ ReconstModel.__init__(self, gtab) if not callable(fit_method): try: fit_method = common_fit_methods[fit_method] except KeyError as e: e_s = '"' + str(fit_method) + '" is not a known fit ' e_s += "method, the fit method should either be a " e_s += "function or one of the common fit methods" raise ValueError(e_s) from e self.fit_method = fit_method self.design_matrix = design_matrix(self.gtab) self.args = args self.kwargs = kwargs # Check if at least three b-values are given enough_b = check_multi_b(self.gtab, 3, non_zero=False) if not enough_b: mes = "fwDTI requires at least 3 b-values (which can include b=0)" raise ValueError(mes) @multi_voxel_fit @warning_for_keywords() def fit(self, data, *, mask=None, **kwargs): """Fit method of the free water elimination DTI model class Parameters ---------- data : array The measured signal from one voxel. mask : array A boolean array used to mark the coordinates in the data that should be analyzed that has the shape data.shape[:-1] """ S0 = np.mean(data[self.gtab.b0s_mask]) fwdti_params = self.fit_method( self.design_matrix, data, S0, *self.args, **self.kwargs ) return FreeWaterTensorFit(self, fwdti_params) @warning_for_keywords() def predict(self, fwdti_params, *, S0=1): """Predict a signal for this TensorModel class instance given parameters. Parameters ---------- fwdti_params : (..., 13) ndarray The last dimension should have 13 parameters: the 12 tensor parameters (3 eigenvalues, followed by the 3 corresponding eigenvectors) and the free water volume fraction. S0 : float or ndarray The non diffusion-weighted signal in every voxel, or across all voxels. Default: 1 Returns ------- S : (..., N) ndarray Simulated signal based on the free water DTI model """ return fwdti_prediction(fwdti_params, self.gtab, S0=S0) class FreeWaterTensorFit(TensorFit): """Class for fitting the Free Water Tensor Model""" def __init__(self, model, model_params): """Initialize a FreeWaterTensorFit class instance. Since the free water tensor model is an extension of DTI, class instance is defined as subclass of the TensorFit from dti.py See :footcite:p:`NetoHenriques2017` for further details about the method. Parameters ---------- model : FreeWaterTensorModel Class instance Class instance containing the free water tensor model for the fit model_params : ndarray (x, y, z, 13) or (n, 13) All parameters estimated from the free water tensor model. Parameters are ordered as follows: 1) Three diffusion tensor's eigenvalues 2) Three lines of the eigenvector matrix each containing the first, second and third coordinates of the eigenvector 3) The volume fraction of the free water compartment References ---------- .. 
footbibliography:: """ TensorFit.__init__(self, model, model_params) @property def f(self): """Returns the free water diffusion volume fraction f""" return self.model_params[..., 12] @warning_for_keywords() def predict(self, gtab, *, S0=1): r"""Given a free water tensor model fit, predict the signal on the vertices of a gradient table Parameters ---------- gtab : a GradientTable class instance The gradient table for this prediction S0 : float array The mean non-diffusion weighted signal in each voxel. Default: 1 in all voxels. Returns ------- S : (..., N) ndarray Simulated signal based on the free water DTI model """ return fwdti_prediction(self.model_params, gtab, S0=S0) @warning_for_keywords() def wls_iter( design_matrix, sig, S0, *, Diso=3e-3, mdreg=2.7e-3, min_signal=1.0e-6, piterations=3 ): """Applies weighted linear least squares fit of the water free elimination model to single voxel signals. Parameters ---------- design_matrix : array (g, 7) Design matrix holding the covariants used to solve for the regression coefficients. sig : array (g, ) Diffusion-weighted signal for a single voxel data. S0 : float Non diffusion weighted signal (i.e. signal for b-value=0). Diso : float, optional Value of the free water isotropic diffusion. Default is set to 3e-3 $mm^{2}.s^{-1}$. Please adjust this value if you are assuming different units of diffusion. mdreg : float, optimal DTI's mean diffusivity regularization threshold. If standard DTI diffusion tensor's mean diffusivity is almost near the free water diffusion value, the diffusion signal is assumed to be only free water diffusion (i.e. volume fraction will be set to 1 and tissue's diffusion parameters are set to zero). Default md_reg is 2.7e-3 $mm^{2}.s^{-1}$ (corresponding to 90% of the free water diffusion value). min_signal : float The minimum signal value. Needs to be a strictly positive number. Default: minimal signal in the data provided to `fit`. piterations : inter, optional Number of iterations used to refine the precision of f. Default is set to 3 corresponding to a precision of 0.01. Returns ------- fw_params : ndarray All parameters estimated from the free water tensor model. Parameters are ordered as follows: 1) Three diffusion tensor's eigenvalues 2) Three lines of the eigenvector matrix each containing the first, second and third coordinates of the eigenvector 3) The volume fraction of the free water compartment """ W = design_matrix # DTI ordinary linear least square solution log_s = np.log(np.maximum(sig, min_signal)) # Define weights S2 = np.diag(sig**2) # DTI weighted linear least square solution WTS2 = np.dot(W.T, S2) inv_WT_S2_W = np.linalg.pinv(np.dot(WTS2, W)) invWTS2W_WTS2 = np.dot(inv_WT_S2_W, WTS2) params = np.dot(invWTS2W_WTS2, log_s) md = (params[0] + params[2] + params[5]) / 3 # Process voxel if it has significant signal from tissue if md < mdreg and np.mean(sig) > min_signal and S0 > min_signal: # General free-water signal contribution fwsig = np.exp(np.dot(design_matrix, np.array([Diso, 0, Diso, 0, 0, Diso, 0]))) df = 1 # initialize precision flow = 0 # lower f evaluated fhig = 1 # higher f evaluated ns = 9 # initial number of samples per iteration for _ in range(piterations): df = df * 0.1 fs = np.linspace(flow + df, fhig - df, num=ns) # sampling f SFW = np.array( [ fwsig, ] * ns ) # repeat contributions for all values FS, SI = np.meshgrid(fs, sig) SA = SI - FS * S0 * SFW.T # SA < 0 means that the signal components from the free water # component is larger than the total fiber. 
These cases are present # for inappropriate large volume fractions (given the current S0 # value estimated). To overcome this issue negative SA are replaced # by data's min positive signal. SA[SA <= 0] = min_signal y = np.log(SA / (1 - FS)) all_new_params = np.dot(invWTS2W_WTS2, y) # Select params for lower F2 SIpred = (1 - FS) * np.exp(np.dot(W, all_new_params)) + FS * S0 * SFW.T F2 = np.sum(np.square(SI - SIpred), axis=0) Mind = np.argmin(F2) params = all_new_params[:, Mind] f = fs[Mind] # Updated f flow = f - df # refining precision fhig = f + df ns = 19 evals, evecs = decompose_tensor(from_lower_triangular(params)) fw_params = np.concatenate( (evals, evecs[0], evecs[1], evecs[2], np.array([f])), axis=0 ) else: fw_params = np.zeros(13) if md > mdreg: fw_params[12] = 1.0 return fw_params @warning_for_keywords() def wls_fit_tensor( gtab, data, *, Diso=3e-3, mask=None, min_signal=1.0e-6, piterations=3, mdreg=2.7e-3 ): r"""Computes weighted least squares (WLS) fit to calculate self-diffusion tensor using a linear regression model. See :footcite:p:`NetoHenriques2017` for further details about the method. Parameters ---------- gtab : a GradientTable class instance The gradient table containing diffusion acquisition parameters. data : ndarray ([X, Y, Z, ...], g) Data or response variables holding the data. Note that the last dimension should contain the data. It makes no copies of data. Diso : float, optional Value of the free water isotropic diffusion. Default is set to 3e-3 $mm^{2}.s^{-1}$. Please adjust this value if you are assuming different units of diffusion. mask : array, optional A boolean array used to mark the coordinates in the data that should be analyzed that has the shape data.shape[:-1] min_signal : float The minimum signal value. Needs to be a strictly positive number. piterations : inter, optional Number of iterations used to refine the precision of f. Default is set to 3 corresponding to a precision of 0.01. mdreg : float, optimal DTI's mean diffusivity regularization threshold. If standard DTI diffusion tensor's mean diffusivity is almost near the free water diffusion value, the diffusion signal is assumed to be only free water diffusion (i.e. volume fraction will be set to 1 and tissue's diffusion parameters are set to zero). Default md_reg is 2.7e-3 $mm^{2}.s^{-1}$ (corresponding to 90% of the free water diffusion value). Returns ------- fw_params : ndarray (x, y, z, 13) Matrix containing in the last dimension the free water model parameters in the following order: 1) Three diffusion tensor's eigenvalues 2) Three lines of the eigenvector matrix each containing the first, second and third coordinates of the eigenvector 3) The volume fraction of the free water compartment. References ---------- .. 
footbibliography:: """ fw_params = np.zeros(data.shape[:-1] + (13,)) W = design_matrix(gtab) # Prepare mask if mask is None: mask = np.ones(data.shape[:-1], dtype=bool) else: if mask.shape != data.shape[:-1]: raise ValueError("Mask is not the same shape as data.") mask = np.asarray(mask, dtype=bool) # Prepare S0 S0 = np.mean(data[:, :, :, gtab.b0s_mask], axis=-1) index = ndindex(mask.shape) for v in index: if mask[v]: params = wls_iter( W, data[v], S0[v], min_signal=min_signal, Diso=Diso, piterations=piterations, mdreg=mdreg, ) fw_params[v] = params return fw_params @warning_for_keywords() def _nls_err_func( tensor_elements, design_matrix, data, *, Diso=3e-3, weighting=None, sigma=None, cholesky=False, f_transform=False, ): """Error function for the non-linear least-squares fit of the tensor water elimination model. Parameters ---------- tensor_elements : array (8, ) The six independent elements of the diffusion tensor followed by -log(S0) and the volume fraction f of the water elimination compartment. Note that if cholesky is set to true, tensor elements are assumed to be written as Cholesky's decomposition elements. If f_transform is true, volume fraction f has to be converted to ft = arcsin(2*f - 1) + pi/2 design_matrix : array The design matrix data : array The voxel signal in all gradient directions Diso : float, optional Value of the free water isotropic diffusion. Default is set to 3e-3 $mm^{2}.s^{-1}$. Please adjust this value if you are assuming different units of diffusion. weighting : str, optional Whether to use the Geman-McClure weighting criterion (see :footcite:p:`NetoHenriques2017` for details) sigma : float or float array, optional If 'sigma' weighting is used, we will weight the error function according to the background noise estimated either in aggregate over all directions (when a float is provided), or to an estimate of the noise in each diffusion-weighting direction (if an array is provided). If 'gmm', the Geman-Mclure M-estimator is used for weighting. cholesky : bool, optional If true, the diffusion tensor elements were decomposed using Cholesky decomposition. See fwdti.nls_fit_tensor f_transform : bool, optional If true, the water volume fraction was converted to ft = arcsin(2*f - 1) + pi/2, insuring f estimates between 0 and 1. See fwdti.nls_fit_tensor References ---------- .. footbibliography:: """ tensor = np.copy(tensor_elements) if cholesky: tensor[:6] = cholesky_to_lower_triangular(tensor[:6]) if f_transform: f = 0.5 * (1 + np.sin(tensor[7] - np.pi / 2)) else: f = tensor[7] # This is the predicted signal given the params: y = (1 - f) * np.exp(np.dot(design_matrix, tensor[:7])) + f * np.exp( np.dot(design_matrix, np.array([Diso, 0, Diso, 0, 0, Diso, tensor[6]])) ) # Compute the residuals residuals = data - y # If we don't want to weight the residuals, we are basically done: if weighting is None: # And we return the SSE: return residuals se = residuals**2 # If the user provided a sigma (e.g 1.5267 * std(background_noise), as # suggested by Chang et al.) we will use it: if weighting == "sigma": if sigma is None: e_s = "Must provide sigma value as input to use this weighting" e_s += " method" raise ValueError(e_s) w = 1 / (sigma**2) elif weighting == "gmm": # We use the Geman-McClure M-estimator to compute the weights on the # residuals: C = 1.4826 * np.median(np.abs(residuals - np.median(residuals))) with warnings.catch_warnings(): warnings.simplefilter("ignore") w = 1 / (se + C**2) # The weights are normalized to the mean weight (see p. 
1089): w = w / np.mean(w) # Return the weighted residuals: with warnings.catch_warnings(): warnings.simplefilter("ignore") return np.sqrt(w * se) @warning_for_keywords() def _nls_jacobian_func( tensor_elements, design_matrix, data, *, Diso=3e-3, weighting=None, sigma=None, cholesky=False, f_transform=False, ): """The Jacobian is the first derivative of the least squares error function. Parameters ---------- tensor_elements : array (8, ) The six independent elements of the diffusion tensor followed by -log(S0) and the volume fraction f of the water elimination compartment. Note that if f_transform is true, volume fraction f is converted to ft = arcsin(2*f - 1) + pi/2 design_matrix : array The design matrix Diso : float, optional Value of the free water isotropic diffusion. Default is set to 3e-3 $mm^{2}.s^{-1}$. Please adjust this value if you are assuming different units of diffusion. f_transform : bool, optional If true, the water volume fraction was converted to ft = arcsin(2*f - 1) + pi/2, insuring f estimates between 0 and 1. See fwdti.nls_fit_tensor. """ tensor = np.copy(tensor_elements) if f_transform: f = 0.5 * (1 + np.sin(tensor[7] - np.pi / 2)) else: f = tensor[7] t = np.exp(np.dot(design_matrix, tensor[:7])) s = np.exp(np.dot(design_matrix, np.array([Diso, 0, Diso, 0, 0, Diso, tensor[6]]))) T = (f - 1.0) * t[:, None] * design_matrix S = np.zeros(design_matrix.shape) S[:, 6] = f * s if f_transform: df = (t - s) * (0.5 * np.cos(tensor[7] - np.pi / 2)) else: df = t - s return np.concatenate((T - S, df[:, None]), axis=1) @warning_for_keywords() def nls_iter( design_matrix, sig, S0, *, Diso=3e-3, mdreg=2.7e-3, min_signal=1.0e-6, cholesky=False, f_transform=True, jac=False, weighting=None, sigma=None, ): """Applies non linear least squares fit of the water free elimination model to single voxel signals. Parameters ---------- design_matrix : array (g, 7) Design matrix holding the covariants used to solve for the regression coefficients. sig : array (g, ) Diffusion-weighted signal for a single voxel data. S0 : float Non diffusion weighted signal (i.e. signal for b-value=0). Diso : float, optional Value of the free water isotropic diffusion. Default is set to 3e-3 $mm^{2}.s^{-1}$. Please adjust this value if you are assuming different units of diffusion. mdreg : float, optimal DTI's mean diffusivity regularization threshold. If standard DTI diffusion tensor's mean diffusivity is almost near the free water diffusion value, the diffusion signal is assumed to be only free water diffusion (i.e. volume fraction will be set to 1 and tissue's diffusion parameters are set to zero). Default md_reg is 2.7e-3 $mm^{2}.s^{-1}$ (corresponding to 90% of the free water diffusion value). min_signal : float, optional The minimum signal value. Needs to be a strictly positive number. cholesky : bool, optional If true it uses Cholesky decomposition to ensure that diffusion tensor is positive define. f_transform : bool, optional If true, the water volume fractions is converted during the convergence procedure to ft = arcsin(2*f - 1) + pi/2, insuring f estimates between 0 and 1. jac : bool, optional True to use the Jacobian. weighting: str, optional the weighting scheme to use in considering the squared-error. Default behavior is to use uniform weighting. Other options: 'sigma' 'gmm' sigma: float, optional If the 'sigma' weighting scheme is used, a value of sigma needs to be provided here. 
According to :footcite:t:`Chang2005`, a good value to use is 1.5267 * std(background_noise), where background_noise is estimated from some part of the image known to contain no signal (only noise). Returns ------- All parameters estimated from the free water tensor model. Parameters are ordered as follows: 1) Three diffusion tensor's eigenvalues 2) Three lines of the eigenvector matrix each containing the first, second and third coordinates of the eigenvector 3) The volume fraction of the free water compartment. """ # Initial guess params = wls_iter( design_matrix, sig, S0, min_signal=min_signal, Diso=Diso, mdreg=mdreg ) partial_err_func = partial( _nls_err_func, Diso=Diso, weighting=weighting, sigma=sigma, cholesky=cholesky, f_transform=f_transform, ) # Process voxel if it has significant signal from tissue if params[12] < 0.99 and np.mean(sig) > min_signal and S0 > min_signal: # converting evals and evecs to diffusion tensor elements evals = params[:3] evecs = params[3:12].reshape((3, 3)) dt = lower_triangular(vec_val_vect(evecs, evals)) # Cholesky decomposition if requested if cholesky: dt = lower_triangular_to_cholesky(dt) # f transformation if requested if f_transform: f = np.arcsin(2 * params[12] - 1) + np.pi / 2 else: f = params[12] # Use the Levenberg-Marquardt algorithm wrapped in opt.leastsq start_params = np.concatenate((dt, [-np.log(S0), f]), axis=0) if jac: this_tensor, status = opt.leastsq( partial_err_func, start_params[:8], args=(design_matrix, sig), Dfun=_nls_jacobian_func, ) else: this_tensor, status = opt.leastsq( partial_err_func, start_params[:8], args=(design_matrix, sig) ) # Process tissue diffusion tensor if cholesky: this_tensor[:6] = cholesky_to_lower_triangular(this_tensor[:6]) evals, evecs = _decompose_tensor_nan( from_lower_triangular(this_tensor[:6]), from_lower_triangular(start_params[:6]), ) # Process water volume fraction f f = this_tensor[7] if f_transform: f = 0.5 * (1 + np.sin(f - np.pi / 2)) params = np.concatenate( (evals, evecs[0], evecs[1], evecs[2], np.array([f])), axis=0 ) return params @warning_for_keywords() def nls_fit_tensor( gtab, data, *, mask=None, Diso=3e-3, mdreg=2.7e-3, min_signal=1.0e-6, f_transform=True, cholesky=False, jac=False, weighting=None, sigma=None, ): """ Fit the water elimination tensor model using the non-linear least-squares. Parameters ---------- gtab : a GradientTable class instance The gradient table containing diffusion acquisition parameters. data : ndarray ([X, Y, Z, ...], g) Data or response variables holding the data. Note that the last dimension should contain the data. It makes no copies of data. mask : array, optional A boolean array used to mark the coordinates in the data that should be analyzed that has the shape data.shape[:-1] Diso : float, optional Value of the free water isotropic diffusion. Default is set to 3e-3 $mm^{2}.s^{-1}$. Please adjust this value if you are assuming different units of diffusion. mdreg : float, optimal DTI's mean diffusivity regularization threshold. If standard DTI diffusion tensor's mean diffusivity is almost near the free water diffusion value, the diffusion signal is assumed to be only free water diffusion (i.e. volume fraction will be set to 1 and tissue's diffusion parameters are set to zero). Default md_reg is 2.7e-3 $mm^{2}.s^{-1}$ (corresponding to 90% of the free water diffusion value). min_signal : float, optional The minimum signal value. Needs to be a strictly positive number. 
    f_transform : bool, optional
        If true, the water volume fraction is converted during the
        convergence procedure to ft = arcsin(2*f - 1) + pi/2, ensuring f
        estimates between 0 and 1.
    cholesky : bool, optional
        If true it uses Cholesky decomposition to ensure that the diffusion
        tensor is positive definite.
    jac : bool, optional
        True to use the Jacobian.
    weighting : str, optional
        The weighting scheme to use in considering the squared-error.
        Default behavior is to use uniform weighting. Other options:
        'sigma' 'gmm'
    sigma : float, optional
        If the 'sigma' weighting scheme is used, a value of sigma needs to be
        provided here. According to :footcite:t:`Chang2005`, a good value to
        use is 1.5267 * std(background_noise), where background_noise is
        estimated from some part of the image known to contain no signal
        (only noise).

    Returns
    -------
    fw_params : ndarray (x, y, z, 13)
        Matrix containing, in the last dimension, the free water model
        parameters in the following order:
            1) Three diffusion tensor's eigenvalues
            2) Three lines of the eigenvector matrix each containing the
               first, second and third coordinates of the eigenvector
            3) The volume fraction of the free water compartment

    References
    ----------
    .. footbibliography::
    """
    fw_params = np.zeros(data.shape[:-1] + (13,))
    W = design_matrix(gtab)

    # Prepare mask
    if mask is None:
        mask = np.ones(data.shape[:-1], dtype=bool)
    else:
        if mask.shape != data.shape[:-1]:
            raise ValueError("Mask is not the same shape as data.")
        mask = np.asarray(mask, dtype=bool)

    # Prepare S0
    S0 = np.mean(data[:, :, :, gtab.b0s_mask], axis=-1)

    index = ndindex(mask.shape)
    for v in index:
        if mask[v]:
            params = nls_iter(
                W,
                data[v],
                S0[v],
                Diso=Diso,
                mdreg=mdreg,
                min_signal=min_signal,
                f_transform=f_transform,
                cholesky=cholesky,
                jac=jac,
                weighting=weighting,
                sigma=sigma,
            )
            fw_params[v] = params

    return fw_params


def lower_triangular_to_cholesky(tensor_elements):
    """Performs Cholesky decomposition of the diffusion tensor

    Parameters
    ----------
    tensor_elements : array (6,)
        Array containing the six elements of diffusion tensor's lower
        triangular.

    Returns
    -------
    cholesky_elements : array (6,)
        Array containing the six Cholesky's decomposition elements
        (R0, R1, R2, R3, R4, R5) :footcite:p:`Koay2006b`.

    References
    ----------
    .. footbibliography::
    """
    R0 = np.sqrt(tensor_elements[0])
    R3 = tensor_elements[1] / R0
    R1 = np.sqrt(tensor_elements[2] - R3**2)
    R5 = tensor_elements[3] / R0
    R4 = (tensor_elements[4] - R3 * R5) / R1
    R2 = np.sqrt(tensor_elements[5] - R4**2 - R5**2)

    return np.array([R0, R1, R2, R3, R4, R5])


def cholesky_to_lower_triangular(R):
    """Convert Cholesky decomposition elements to the diffusion tensor elements

    Parameters
    ----------
    R : array (6,)
        Array containing the six Cholesky's decomposition elements
        (R0, R1, R2, R3, R4, R5) :footcite:p:`Koay2006b`.

    Returns
    -------
    tensor_elements : array (6,)
        Array containing the six elements of diffusion tensor's lower
        triangular.

    References
    ----------
    ..
footbibliography:: """ Dxx = R[0] ** 2 Dxy = R[0] * R[3] Dyy = R[1] ** 2 + R[3] ** 2 Dxz = R[0] * R[5] Dyz = R[1] * R[4] + R[3] * R[5] Dzz = R[2] ** 2 + R[4] ** 2 + R[5] ** 2 return np.array([Dxx, Dxy, Dyy, Dxz, Dyz, Dzz]) common_fit_methods = { "WLLS": wls_iter, "WLS": wls_iter, "NLLS": nls_iter, "NLS": nls_iter, } dipy-1.11.0/dipy/reconst/gqi.py000066400000000000000000000223371476546756600163540ustar00rootroot00000000000000"""Classes and functions for generalized q-sampling""" import warnings import numpy as np from dipy.reconst.cache import Cache from dipy.reconst.multi_voxel import multi_voxel_fit from dipy.reconst.odf import OdfFit, OdfModel from dipy.testing.decorators import warning_for_keywords class GeneralizedQSamplingModel(OdfModel, Cache): @warning_for_keywords() def __init__( self, gtab, *, method="gqi2", sampling_length=1.2, normalize_peaks=False ): r"""Generalized Q-Sampling Imaging. See :footcite:t:`Yeh2010` for further details about the model. This model has the same assumptions as the DSI method i.e. Cartesian grid sampling in q-space and fast gradient switching. Implements equations 2.14 from :footcite:p:`Garyfallidis2012b` for standard GQI and equation 2.16 from :footcite:p:`Garyfallidis2012b` for GQI2. You can think of GQI2 as an analytical solution of the DSI ODF. Parameters ---------- gtab : object, GradientTable method : str, optional 'standard' or 'gqi2' sampling_length : float, optional diffusion sampling length (lambda in eq. 2.14 and 2.16) normalize_peaks : bool, optional True to normalize peaks. Notes ----- As of version 0.9, range of the sampling length in GQI2 has changed to match the same scale used in the 'standard' method :footcite:t:`Yeh2010`. This means that the value of `sampling_length` should be approximately 1 - 1.3 (see :footcite:t:`Yeh2010`, pg. 1628). References ---------- .. footbibliography:: Examples -------- Here we create an example where we provide the data, a gradient table and a reconstruction sphere and calculate the ODF for the first voxel in the data. 
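        First, a minimal sketch of the kernel that the fit evaluates (purely
        illustrative: the arrays below are random placeholders, not a real
        acquisition). For the 'standard' method, each ODF value is the signal
        weighted by sinc(q . v * lambda / pi) and summed over the q-space
        samples:

        >>> import numpy as np
        >>> rng = np.random.default_rng(0)
        >>> b_vec = rng.random((10, 3))  # hypothetical scaled q-space vectors
        >>> verts = rng.random((5, 3))  # hypothetical ODF sphere vertices
        >>> signal = rng.random(10)  # hypothetical DWI signal for one voxel
        >>> kernel = np.sinc(np.dot(b_vec, verts.T) * 1.2 / np.pi)
        >>> odf_sketch = np.dot(signal, kernel)  # one ODF value per vertex

        The full example follows: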
>>> from dipy.data import dsi_voxels >>> data, gtab = dsi_voxels() >>> from dipy.core.subdivide_octahedron import create_unit_sphere >>> sphere = create_unit_sphere(recursion_level=5) >>> from dipy.reconst.gqi import GeneralizedQSamplingModel >>> gq = GeneralizedQSamplingModel(gtab, method='gqi2', sampling_length=1.1) >>> voxel_signal = data[0, 0, 0] >>> odf = gq.fit(voxel_signal).odf(sphere) See Also -------- dipy.reconst.dsi.DiffusionSpectrumModel """ OdfModel.__init__(self, gtab) self.method = method self.Lambda = sampling_length self.normalize_peaks = normalize_peaks # 0.01506 = 6*D where D is the free water diffusion coefficient # l_values sqrt(6 D tau) D free water diffusion coefficient and # tau included in the b-value scaling = np.sqrt(self.gtab.bvals * 0.01506) tmp = np.tile(scaling, (3, 1)) gradsT = self.gtab.bvecs.T b_vector = gradsT * tmp # element-wise product self.b_vector = b_vector.T @multi_voxel_fit def fit(self, data, **kwargs): return GeneralizedQSamplingFit(self, data) class GeneralizedQSamplingFit(OdfFit): def __init__(self, model, data): """Calculates PDF and ODF for a single voxel Parameters ---------- model : object, DiffusionSpectrumModel data : 1d ndarray, signal values """ OdfFit.__init__(self, model, data) self._gfa = None self.npeaks = 5 self._peak_values = None self._peak_indices = None self._qa = None def odf(self, sphere): """Calculates the discrete ODF for a given discrete sphere.""" self.gqi_vector = self.model.cache_get("gqi_vector", key=sphere) if self.gqi_vector is None: if self.model.method == "gqi2": H = squared_radial_component # print self.gqi_vector.shape self.gqi_vector = np.real( H( np.dot(self.model.b_vector, sphere.vertices.T) * self.model.Lambda ) ) if self.model.method == "standard": self.gqi_vector = np.real( np.sinc( np.dot(self.model.b_vector, sphere.vertices.T) * self.model.Lambda / np.pi ) ) self.model.cache_set("gqi_vector", sphere, self.gqi_vector) return np.dot(self.data, self.gqi_vector) @warning_for_keywords() def normalize_qa(qa, *, max_qa=None): """Normalize quantitative anisotropy. Used mostly with GQI rather than GQI2. Parameters ---------- qa : array, shape (X, Y, Z, N) where N is the maximum number of peaks stored max_qa : float, maximum qa value. Usually found in the CSF (corticospinal fluid). Returns ------- nqa : array, shape (x, Y, Z, N) normalized quantitative anisotropy Notes ----- Normalized quantitative anisotropy has the very useful property to be very small near gray matter and background areas. Therefore, it can be used to mask out white matter areas. """ if max_qa is None: return qa / qa.max() return qa / max_qa @warning_for_keywords() def squared_radial_component(x, *, tol=0.01): """Part of the GQI2 integral Eq.8 in the referenced paper by :footcite:t:`Yeh2010`. References ---------- .. footbibliography:: """ with warnings.catch_warnings(): warnings.simplefilter("ignore") result = (2 * x * np.cos(x) + (x * x - 2) * np.sin(x)) / (x**3) x_near_zero = (x < tol) & (x > -tol) return np.where(x_near_zero, 1.0 / 3, result) @warning_for_keywords() def npa(self, odf, *, width=5): """non-parametric anisotropy Nimmo-Smith et al. 
ISMRM 2011
    """
    # odf = self.odf(s)
    t0, t1, t2 = triple_odf_maxima(self.odf_vertices, odf, width)
    psi0 = t0[1] ** 2
    psi1 = t1[1] ** 2
    psi2 = t2[1] ** 2
    npa = np.sqrt(
        (psi0 - psi1) ** 2 + (psi1 - psi2) ** 2 + (psi2 - psi0) ** 2
    ) / np.sqrt(2 * (psi0**2 + psi1**2 + psi2**2))
    return t0, t1, t2, npa


@warning_for_keywords()
def equatorial_zone_vertices(vertices, pole, *, width=5):
    """
    finds the 'vertices' in the equatorial zone conjugate
    to 'pole' with width half 'width' degrees
    """
    return [
        i
        for i, v in enumerate(vertices)
        if np.abs(np.dot(v, pole)) < np.abs(np.sin(np.pi * width / 180))
    ]


@warning_for_keywords()
def polar_zone_vertices(vertices, pole, *, width=5):
    """
    finds the 'vertices' in the equatorial band around
    the 'pole' of radius 'width' degrees
    """
    return [
        i
        for i, v in enumerate(vertices)
        if np.abs(np.dot(v, pole)) > np.abs(np.cos(np.pi * width / 180))
    ]


def upper_hemi_map(v):
    """
    maps a 3-vector into the z-upper hemisphere
    """
    return np.sign(v[2]) * v


def equatorial_maximum(vertices, odf, pole, width):
    eqvert = equatorial_zone_vertices(vertices, pole, width)
    # need to test for whether eqvert is empty or not
    if len(eqvert) == 0:
        print(
            f"empty equatorial band at {np.array_str(pole)} pole with width {width:f}"
        )
        return None, None
    eqvals = [odf[i] for i in eqvert]
    eqargmax = np.argmax(eqvals)
    eqvertmax = eqvert[eqargmax]
    eqvalmax = eqvals[eqargmax]
    return eqvertmax, eqvalmax


def patch_vertices(vertices, pole, width):
    """
    find 'vertices' within the cone of 'width' degrees around 'pole'
    """
    return [
        i
        for i, v in enumerate(vertices)
        if np.abs(np.dot(v, pole)) > np.abs(np.cos(np.pi * width / 180))
    ]


def patch_maximum(vertices, odf, pole, width):
    eqvert = patch_vertices(vertices, pole, width)
    # need to test for whether eqvert is empty or not
    if len(eqvert) == 0:
        print(f"empty cone around pole {np.array_str(pole)} with width {width:f}")
        return None, None
    eqvals = [odf[i] for i in eqvert]
    eqargmax = np.argmax(eqvals)
    eqvertmax = eqvert[eqargmax]
    eqvalmax = eqvals[eqargmax]
    return eqvertmax, eqvalmax


def odf_sum(odf):
    return np.sum(odf)


def patch_sum(vertices, odf, pole, width):
    eqvert = patch_vertices(vertices, pole, width)
    # need to test for whether eqvert is empty or not
    if len(eqvert) == 0:
        print(f"empty cone around pole {np.array_str(pole)} with width {width:f}")
        return None
    return np.sum([odf[i] for i in eqvert])


def triple_odf_maxima(vertices, odf, width):
    indmax1 = np.argmax([odf[i] for i, v in enumerate(vertices)])
    odfmax1 = odf[indmax1]
    pole = vertices[indmax1]
    eqvert = equatorial_zone_vertices(vertices, pole, width)
    indmax2, odfmax2 = equatorial_maximum(vertices, odf, pole, width)
    indmax3 = eqvert[
        np.argmin([np.abs(np.dot(vertices[indmax2], vertices[p])) for p in eqvert])
    ]
    odfmax3 = odf[indmax3]
    """
    cross12 = np.cross(vertices[indmax1],vertices[indmax2])
    cross12 = cross12/np.sqrt(np.sum(cross12**2))
    indmax3, odfmax3 = patch_maximum(vertices, odf, cross12, 2*width)
    """
    return [(indmax1, odfmax1), (indmax2, odfmax2), (indmax3, odfmax3)]
dipy-1.11.0/dipy/reconst/ivim.py000066400000000000000000000715141476546756600165410ustar00rootroot00000000000000"""Classes and functions for fitting ivim model"""

import warnings

import numpy as np
from scipy.optimize import differential_evolution, least_squares

from dipy.reconst.base import ReconstModel
from dipy.reconst.multi_voxel import multi_voxel_fit
from dipy.testing.decorators import warning_for_keywords
from dipy.utils.optpkg import optional_package

cvxpy, have_cvxpy, _ = optional_package("cvxpy", min_version="1.4.1")
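# Editorial note: the two-compartment IVIM signal used throughout this module
# is S(b) = S0 * (f * exp(-b * D_star) + (1 - f) * exp(-b * D)).  A minimal
# numerical sketch of that equation (hypothetical parameter values, mirroring
# what `ivim_prediction` below computes):
#
#     import numpy as np
#     bvals = np.array([0.0, 10.0, 100.0, 1000.0])  # hypothetical b-values
#     S0, f, D_star, D = 1000.0, 0.1, 0.01, 0.001
#     S = S0 * (f * np.exp(-bvals * D_star) + (1 - f) * np.exp(-bvals * D))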
optional_package("cvxpy", min_version="1.4.1") # global variable for bounding least_squares in both models BOUNDS = ([0.0, 0.0, 0.0, 0.0], [np.inf, 0.2, 1.0, 1.0]) def ivim_prediction(params, gtab): """The Intravoxel incoherent motion (IVIM) model function. Parameters ---------- params : array An array of IVIM parameters - [S0, f, D_star, D]. gtab : GradientTable class instance Gradient directions and bvalues. S0 : float, optional This has been added just for consistency with the existing API. Unlike other models, IVIM predicts S0 and this is over written by the S0 value in params. Returns ------- S : array An array containing the IVIM signal estimated using given parameters. """ b = gtab.bvals S0, f, D_star, D = params S = S0 * (f * np.exp(-b * D_star) + (1 - f) * np.exp(-b * D)) return S def _ivim_error(params, gtab, signal): """Error function to be used in fitting the IVIM model. Parameters ---------- params : array An array of IVIM parameters - [S0, f, D_star, D] gtab : GradientTable class instance Gradient directions and bvalues. signal : array Array containing the actual signal values. Returns ------- residual : array An array containing the difference between actual and estimated signal. """ residual = signal - ivim_prediction(params, gtab) return residual def f_D_star_prediction(params, gtab, S0, D): """Function used to predict IVIM signal when S0 and D are known by considering f and D_star as the unknown parameters. Parameters ---------- params : array The value of f and D_star. gtab : GradientTable class instance Gradient directions and bvalues. S0 : float The parameters S0 obtained from a linear fit. D : float The parameters D obtained from a linear fit. Returns ------- S : array An array containing the IVIM signal estimated using given parameters. """ f, D_star = params b = gtab.bvals S = S0 * (f * np.exp(-b * D_star) + (1 - f) * np.exp(-b * D)) return S def f_D_star_error(params, gtab, signal, S0, D): """Error function used to fit f and D_star keeping S0 and D fixed Parameters ---------- params : array The value of f and D_star. gtab : GradientTable class instance Gradient directions and bvalues. signal : array Array containing the actual signal values. S0 : float The parameters S0 obtained from a linear fit. D : float The parameters D obtained from a linear fit. Returns ------- residual : array An array containing the difference of actual and estimated signal. """ f, D_star = params return signal - f_D_star_prediction([f, D_star], gtab, S0, D) @warning_for_keywords() def ivim_model_selector(gtab, *, fit_method="trr", **kwargs): """ Selector function to switch between the 2-stage Trust-Region Reflective based NLLS fitting method (also containing the linear fit): `trr` and the Variable Projections based fitting method: `varpro`. Parameters ---------- fit_method : string, optional The value fit_method can either be 'trr' or 'varpro'. default : trr """ bounds_warning = "Bounds for this fit have been set from experiments " bounds_warning += "and literature survey. To change the bounds, please " bounds_warning += "input your bounds in model definition..." if fit_method.lower() == "trr": ivimmodel_trr = IvimModelTRR(gtab, **kwargs) if "bounds" not in kwargs: warnings.warn(bounds_warning, UserWarning, stacklevel=2) return ivimmodel_trr elif fit_method.lower() == "varpro": ivimmodel_vp = IvimModelVP(gtab, **kwargs) if "bounds" not in kwargs: warnings.warn(bounds_warning, UserWarning, stacklevel=2) return ivimmodel_vp else: opt_msg = "The fit_method option chosen was not correct. 
" opt_msg += "Using fit_method: TRR instead..." warnings.warn(opt_msg, UserWarning, stacklevel=2) return IvimModelTRR(gtab, **kwargs) IvimModel = ivim_model_selector class IvimModelTRR(ReconstModel): """Ivim model""" @warning_for_keywords() def __init__( self, gtab, *, split_b_D=400.0, split_b_S0=200.0, bounds=None, two_stage=True, tol=1e-15, x_scale=(1000.0, 0.1, 0.001, 0.0001), gtol=1e-15, ftol=1e-15, eps=1e-15, maxiter=1000, ): r""" Initialize an IVIM model. The IVIM model footcite:p:`LeBihan1988`, :footcite:p:`Federau2012` assumes that biological tissue includes a volume fraction 'f' of water flowing with a pseudo-diffusion coefficient D* and a fraction (1-f) of static (diffusion only), intra and extracellular water, with a diffusion coefficient D. In this model the echo attenuation of a signal in a single voxel can be written as .. math:: S(b) = S_0[f*e^{(-b*D\*)} + (1-f)e^{(-b*D)}] where $S_0$, $f$, $D\*$ and $D$ are the IVIM parameters. Parameters ---------- gtab : GradientTable class instance Gradient directions and bvalues split_b_D : float, optional The b-value to split the data on for two-stage fit. This will be used while estimating the value of D. The assumption is that at higher b values the effects of perfusion is less and hence the signal can be approximated as a mono-exponential decay. split_b_S0 : float, optional The b-value to split the data on for two-stage fit for estimation of S0 and initial guess for D_star. The assumption here is that at low bvalues the effects of perfusion are more. bounds : tuple of arrays with 4 elements, optional Bounds to constrain the fitted model parameters. This is only supported for Scipy version > 0.17. When using a older Scipy version, this function will raise an error if bounds are different from None. This parameter is also used to fill nan values for out of bounds parameters in the `IvimFit` class using the method fill_na. default : ([0., 0., 0., 0.], [np.inf, .3, 1., 1.]) two_stage : bool Argument to specify whether to perform a non-linear fitting of all parameters after the linear fitting by splitting the data based on bvalues. This gives more accurate parameters but takes more time. The linear fit can be used to get a quick estimation of the parameters. tol : float, optional Tolerance for convergence of minimization. x_scale : array-like, optional Scaling for the parameters. This is passed to `least_squares` which is only available for Scipy version > 0.17. gtol : float, optional Tolerance for termination by the norm of the gradient. ftol : float, optional Tolerance for termination by the change of the cost function. eps : float, optional Step size used for numerical approximation of the jacobian. maxiter : int, optional Maximum number of iterations to perform. References ---------- .. footbibliography:: """ if not np.any(gtab.b0s_mask): e_s = "No measured signal at bvalue == 0." e_s += "The IVIM model requires signal measured at 0 bvalue" raise ValueError(e_s) if gtab.b0_threshold > 0: b0_s = "The IVIM model requires a measurement at b==0. As of " b0_s += "version 0.15, the default b0_threshold for the " b0_s += "GradientTable object is set to 50, so if you used the " b0_s += "default settings to initialize the gtab input to the " b0_s += "IVIM model, you may have provided a gtab with " b0_s += "b0_threshold larger than 0. 
Please initialize the gtab " b0_s += "input with b0_threshold=0" raise ValueError(b0_s) ReconstModel.__init__(self, gtab) self.split_b_D = split_b_D self.split_b_S0 = split_b_S0 self.bounds = bounds self.two_stage = two_stage self.tol = tol self.options = {"gtol": gtol, "ftol": ftol, "eps": eps, "maxiter": maxiter} self.x_scale = x_scale self.bounds = bounds or BOUNDS @multi_voxel_fit def fit(self, data, **kwargs): """Fit method of the IvimModelTRR class. The fitting takes place in the following steps: Linear fitting for D (bvals > `split_b_D` (default: 400)) and store S0_prime. Another linear fit for S0 (bvals < split_b_S0 (default: 200)). Estimate f using 1 - S0_prime/S0. Use non-linear least squares to fit D_star and f. We do a final non-linear fitting of all four parameters and select the set of parameters which make sense physically. The criteria for selecting a particular set of parameters is checking the pseudo-perfusion fraction. If the fraction is more than `f_threshold` (default: 25%), we will reject the solution obtained from non-linear least squares fitting and consider only the linear fit. Parameters ---------- data : array The measured signal from one voxel. A multi voxel decorator will be applied to this fit method to scale it and apply it to multiple voxels. Returns ------- IvimFit object """ # Get S0_prime and D - parameters assuming a single exponential decay # for signals for bvals greater than `split_b_D` S0_prime, D = self.estimate_linear_fit(data, self.split_b_D, less_than=False) # Get S0 and D_star_prime - parameters assuming a single exponential # decay for for signals for bvals greater than `split_b_S0`. S0, D_star_prime = self.estimate_linear_fit( data, self.split_b_S0, less_than=True ) # Estimate f f_guess = 1 - S0_prime / S0 # Fit f and D_star using leastsq. params_f_D_star = [f_guess, D_star_prime] f, D_star = self.estimate_f_D_star(params_f_D_star, data, S0, D) params_linear = np.array([S0, f, D_star, D]) # Fit parameters again if two_stage flag is set. if self.two_stage: params_two_stage = self._leastsq(data, params_linear) bounds_violated = ~( np.all(params_two_stage >= self.bounds[0]) and (np.all(params_two_stage <= self.bounds[1])) ) if bounds_violated: warningMsg = "Bounds are violated for leastsq fitting. " warningMsg += "Returning parameters from linear fit" warnings.warn(warningMsg, UserWarning, stacklevel=2) return IvimFit(self, params_linear) else: return IvimFit(self, params_two_stage) else: return IvimFit(self, params_linear) @warning_for_keywords() def estimate_linear_fit(self, data, split_b, *, less_than=True): """Estimate a linear fit by taking log of data. Parameters ---------- data : array An array containing the data to be fit split_b : float The b value to split the data less_than : bool If True, splitting occurs for bvalues less than split_b Returns ------- S0 : float The estimated S0 value. (intercept) D : float The estimated value of D. """ if less_than: bvals_split = self.gtab.bvals[self.gtab.bvals <= split_b] D, neg_log_S0 = np.polyfit( bvals_split, -np.log(data[self.gtab.bvals <= split_b]), 1 ) else: bvals_split = self.gtab.bvals[self.gtab.bvals >= split_b] D, neg_log_S0 = np.polyfit( bvals_split, -np.log(data[self.gtab.bvals >= split_b]), 1 ) S0 = np.exp(-neg_log_S0) return S0, D def estimate_f_D_star(self, params_f_D_star, data, S0, D): """Estimate f and D_star using the values of all the other parameters obtained from a linear fit. Parameters ---------- params_f_D_star: array An array containing the value of f and D_star. 
data : array Array containing the actual signal values. S0 : float The parameters S0 obtained from a linear fit. D : float The parameters D obtained from a linear fit. Returns ------- f : float Perfusion fraction estimated from the fit. D_star : The value of D_star estimated from the fit. """ gtol = self.options["gtol"] ftol = self.options["ftol"] xtol = self.tol maxfev = self.options["maxiter"] try: res = least_squares( f_D_star_error, params_f_D_star, bounds=((0.0, 0.0), (self.bounds[1][1], self.bounds[1][2])), args=(self.gtab, data, S0, D), ftol=ftol, xtol=xtol, gtol=gtol, max_nfev=maxfev, ) f, D_star = res.x return f, D_star except ValueError: warningMsg = "x0 obtained from linear fitting is not feasible" warningMsg += " as initial guess for leastsq while estimating " warningMsg += "f and D_star. Using parameters from the " warningMsg += "linear fit." warnings.warn(warningMsg, UserWarning, stacklevel=2) f, D_star = params_f_D_star return f, D_star @warning_for_keywords() def predict(self, ivim_params, gtab, *, S0=1.0): """ Predict a signal for this IvimModel class instance given parameters. Parameters ---------- ivim_params : array The ivim parameters as an array [S0, f, D_star and D] gtab : GradientTable class instance Gradient directions and bvalues. S0 : float, optional This has been added just for consistency with the existing API. Unlike other models, IVIM predicts S0 and this is over written by the S0 value in params. Returns ------- ivim_signal : array The predicted IVIM signal using given parameters. """ return ivim_prediction(ivim_params, gtab) def _leastsq(self, data, x0): """Use leastsq to find ivim_params Parameters ---------- data : array, (len(bvals)) An array containing the signal from a voxel. If the data was a 3D image of 10x10x10 grid with 21 bvalues, the multi_voxel decorator will run the single voxel fitting on all the 1000 voxels to get the parameters in IvimFit.model_paramters. The shape of the parameter array will be (data[:-1], 4). x0 : array Initial guesses for the parameters S0, f, D_star and D calculated using a linear fitting. Returns ------- x0 : array Estimates of the parameters S0, f, D_star and D. """ gtol = self.options["gtol"] ftol = self.options["ftol"] xtol = self.tol maxfev = self.options["maxiter"] bounds = self.bounds try: res = least_squares( _ivim_error, x0, bounds=bounds, ftol=ftol, xtol=xtol, gtol=gtol, max_nfev=maxfev, args=(self.gtab, data), x_scale=self.x_scale, ) ivim_params = res.x if np.all(np.isnan(ivim_params)): return np.array([-1, -1, -1, -1]) return ivim_params except ValueError: warningMsg = "x0 is unfeasible for leastsq fitting." warningMsg += " Returning x0 values from the linear fit." warnings.warn(warningMsg, UserWarning, stacklevel=2) return x0 class IvimModelVP(ReconstModel): @warning_for_keywords() def __init__(self, gtab, *, bounds=None, maxiter=10, xtol=1e-8): r"""Initialize an IvimModelVP class. See :footcite:p:`LeBihan1988`, :footcite:p:`Federau2012` and :footcite:p:`Fadnavis2019` for further details about the model. The IVIM model assumes that biological tissue includes a volume fraction 'f' of water flowing with a pseudo-diffusion coefficient D* and a fraction (1-f: treated as a separate fraction in the variable projection method) of static (diffusion only), intra and extracellular water, with a diffusion coefficient D. In this model the echo attenuation of a signal in a single voxel can be written as .. math:: S(b) = S_0*[f*e^{(-b*D\*)} + (1-f)e^{(-b*D)}] where $S_0$, $f$, $D\*$ and $D$ are the IVIM parameters. 
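        Parameters
        ----------
        gtab : GradientTable class instance
            Gradient directions and bvalues container class.
        bounds : array-like, optional
            Bounds to constrain the fitted model parameters.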
maxiter: int, optional Maximum number of iterations for the Differential Evolution in SciPy. xtol : float, optional Tolerance for convergence of minimization. References ---------- .. footbibliography:: """ self.maxiter = maxiter self.xtol = xtol self.bvals = gtab.bvals self.yhat_perfusion = np.zeros(self.bvals.shape[0]) self.yhat_diffusion = np.zeros(self.bvals.shape[0]) self.exp_phi1 = np.zeros((self.bvals.shape[0], 2)) self.bounds = bounds or (BOUNDS[0][1:], BOUNDS[1][1:]) @multi_voxel_fit def fit(self, data, bounds_de=None, **kwargs): r"""Fit method of the IvimModelVP model class MicroLearn framework (VarPro) :footcite:p:`Fadnavis2019`. The VarPro computes the IVIM parameters using the MIX approach. This algorithm uses three different optimizers. It starts with a differential evolution algorithm and fits the parameters in the power of exponentials. Then the fitted parameters in the first step are utilized to make a linear convex problem. Using a convex optimization, the volume fractions are determined. Then the last step is non linear least square fitting on all the parameters. The results of the first and second step are utilized as the initial values for the last step of the algorithm. (see :footcite:p:`Fadnavis2019` and :footcite:p:`Farooq2016` for a comparison and a thorough discussion). References ---------- .. footbibliography:: """ data_max = data.max() data = data / data_max b = self.bvals # Setting up the bounds for differential_evolution bounds_de = np.array([(0.005, 0.01), (10**-4, 0.001)]) # Optimizer #1: Differential Evolution res_one = differential_evolution( self.stoc_search_cost, bounds_de, maxiter=self.maxiter, args=(data,), disp=False, polish=True, popsize=28, ) x = res_one.x phi = self.phi(x) # Optimizer #2: Convex Optimizer f = self.cvx_fit(data, phi) x_f = self.x_and_f_to_x_f(x, f) # Setting up the bounds for least_squares bounds = self.bounds # Optimizer #3: Nonlinear-Least Squares res = least_squares( self.nlls_cost, x_f, bounds=bounds, xtol=self.xtol, args=(data,) ) result = res.x f_est = result[0] D_star_est = result[1] D_est = result[2] S0 = data / (f_est * np.exp(-b * D_star_est) + (1 - f_est) * np.exp(-b * D_est)) S0_est = S0 * data_max # final result containing the four fit parameters: S0, f, D* and D result = np.insert(result, 0, np.mean(S0_est), axis=0) return IvimFit(self, result) def stoc_search_cost(self, x, signal): """ Cost function for differential evolution algorithm. Performs a stochastic search for the non-linear parameters 'x'. The objective function is calculated in the :func: `ivim_mix_cost_one`. The function constructs the parameters using :func: `phi`. Parameters ---------- x : array input from the Differential Evolution optimizer. signal : array The signal values measured for this model. Returns ------- :func: `ivim_mix_cost_one` """ phi = self.phi(x) return self.ivim_mix_cost_one(phi, signal) def ivim_mix_cost_one(self, phi, signal): """ Constructs the objective for the :func: `stoc_search_cost`. First calculates the Moore-Penrose inverse of the input `phi` and takes a dot product with the measured signal. The result obtained is again multiplied with `phi` to complete the projection of the variable into a transformed space. (see :footcite:p:`Fadnavis2019` and :footcite:p:`Farooq2016` for thorough discussion on Variable Projections and relevant cost functions). Parameters ---------- phi : array Returns an array calculated from :func: `Phi`. signal : array The signal values measured for this model. 
Returns ------- (signal - S)^T(signal - S) Notes ----- to make cost function for Differential Evolution algorithm: .. math:: (signal - S)^T(signal - S) References ---------- .. footbibliography:: """ # Moore-Penrose phi_mp = np.dot(np.linalg.inv(np.dot(phi.T, phi)), phi.T) f = np.dot(phi_mp, signal) yhat = np.dot(phi, f) # - sigma return np.dot((signal - yhat).T, signal - yhat) def cvx_fit(self, signal, phi): """ Performs the constrained search for the linear parameters `f` after the estimation of `x` is done. Estimation of the linear parameters `f` is a constrained linear least-squares optimization problem solved by using a convex optimizer from cvxpy. The IVIM equation contains two parameters that depend on the same volume fraction. Both are estimated as separately in the convex optimizer. Parameters ---------- phi : array Returns an array calculated from :func: `phi`. signal : array The signal values measured for this model. Returns ------- f1, f2 (volume fractions) Notes ----- cost function for differential evolution algorithm: .. math:: minimize(norm((signal)- (phi*f))) """ # Create four scalar optimization variables. f = cvxpy.Variable(2) # Constraints have been set similar to the MIX paper's # Supplementary Note 2: Synthetic Data Experiments, experiment 2 constraints = [ cvxpy.sum(f) == 1, f[0] >= 0.011, f[1] >= 0.011, f[0] <= self.bounds[1][0], f[1] <= 0.89, ] # Form objective. obj = cvxpy.Minimize(cvxpy.sum(cvxpy.square(phi @ f - signal))) # Form and solve problem. prob = cvxpy.Problem(obj, constraints) prob.solve() # Returns the optimal value. return np.array(f.value) def nlls_cost(self, x_f, signal): """ Cost function for the least square problem. The cost function is used in the Least Squares function of SciPy in :func: `fit`. It guarantees that stopping point of the algorithm is at least a stationary point with reduction in the number of iterations required by the differential evolution optimizer. Parameters ---------- x_f : array Contains the parameters 'x' and 'f' combines in the same array. signal : array The signal values measured for this model. Returns ------- sum{(signal - phi*f)^2} Notes ----- cost function for the least square problem. .. math:: sum{(signal - phi*f)^2} """ x, f = self.x_f_to_x_and_f(x_f) f1 = np.array([f, 1 - f]) phi = self.phi(x) return np.sum((np.dot(phi, f1) - signal) ** 2) def x_f_to_x_and_f(self, x_f): """ Splits the array of parameters in x_f to 'x' and 'f' for performing a search on the both of them independently using the Trust Region Method. Parameters ---------- x_f : array Combined array of parameters 'x' and 'f' parameters. Returns ------- x, f : array Split parameters into two separate arrays """ x = np.zeros(2) f = x_f[0] x = x_f[1:3] return x, f def x_and_f_to_x_f(self, x, f): """ Combines the array of parameters 'x' and 'f' into x_f for performing NLLS on the final stage of optimization. Parameters ---------- x, f : array Split parameters into two separate arrays Returns ------- x_f : array Combined array of parameters 'x' and 'f' parameters. """ x_f = np.zeros(3) x_f[0] = f[0] x_f[1:3] = x return x_f def phi(self, x): """ Creates a structure for the combining the diffusion and pseudo- diffusion by multiplying with the bvals and then exponentiating each of the two components for fitting as per the IVIM- two compartment model. Parameters ---------- x : array input from the Differential Evolution optimizer. Returns ------- exp_phi1 : array Combined array of parameters perfusion/pseudo-diffusion and diffusion parameters. 
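        Examples
        --------
        A minimal sketch of the construction (the b-values are hypothetical;
        `x` holds the pseudo-diffusion and diffusion rates):

        >>> import numpy as np
        >>> bvals = np.array([0.0, 10.0, 100.0, 1000.0])
        >>> x = np.array([0.01, 0.001])
        >>> exp_phi = np.exp(-bvals[:, None] * x)  # columns: perfusion, diffusion
        >>> exp_phi.shape
        (4, 2)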
""" self.yhat_perfusion = self.bvals * x[0] self.yhat_diffusion = self.bvals * x[1] self.exp_phi1[:, 0] = np.exp(-self.yhat_perfusion) self.exp_phi1[:, 1] = np.exp(-self.yhat_diffusion) return self.exp_phi1 class IvimFit: def __init__(self, model, model_params): """Initialize a IvimFit class instance. Parameters ---------- model : Model class model_params : array The parameters of the model. In this case it is an array of ivim parameters. If the fitting is done for multi_voxel data, the multi_voxel decorator will run the fitting on all the voxels and model_params will be an array of the dimensions (data[:-1], 4), i.e., there will be 4 parameters for each of the voxels. """ self.model = model self.model_params = model_params def __getitem__(self, index): model_params = self.model_params N = model_params.ndim if type(index) is not tuple: index = (index,) elif len(index) >= model_params.ndim: raise IndexError("IndexError: invalid index") index = index + (slice(None),) * (N - len(index)) return type(self)(self.model, model_params[index]) @property def S0_predicted(self): return self.model_params[..., 0] @property def perfusion_fraction(self): return self.model_params[..., 1] @property def D_star(self): return self.model_params[..., 2] @property def D(self): return self.model_params[..., 3] @property def shape(self): return self.model_params.shape[:-1] @warning_for_keywords() def predict(self, gtab, *, S0=1.0): """Given a model fit, predict the signal. Parameters ---------- gtab : GradientTable class instance Gradient directions and bvalues S0 : float S0 value here is not necessary and will not be used to predict the signal. It has been added to conform to the structure of the predict method in multi_voxel which requires a keyword argument S0. Returns ------- signal : array The signal values predicted for this model using its parameters. """ return ivim_prediction(self.model_params, gtab) dipy-1.11.0/dipy/reconst/mapmri.py000066400000000000000000002203621476546756600170570ustar00rootroot00000000000000import numpy as np from scipy.special import gamma, genlaguerre, hermite from dipy.reconst.base import ReconstFit, ReconstModel from dipy.reconst.cache import Cache from dipy.reconst.multi_voxel import multi_voxel_fit try: # preferred scipy >= 0.14, required scipy >= 1.0 from scipy.special import factorial as sfactorial, factorial2 except ImportError: from scipy.misc import factorial as sfactorial, factorial2 from math import factorial as mfactorial from warnings import warn from dipy.core.geometry import cart2sphere from dipy.core.gradients import gradient_table from dipy.core.optimize import Optimizer, PositiveDefiniteLeastSquares from dipy.data import load_sdp_constraints import dipy.reconst.dti as dti from dipy.reconst.shm import real_sh_descoteaux_from_index, sph_harm_ind_list from dipy.testing.decorators import warning_for_keywords from dipy.utils.optpkg import optional_package cvxpy, have_cvxpy, _ = optional_package("cvxpy", min_version="1.4.1") class MapmriModel(ReconstModel, Cache): r"""Mean Apparent Propagator MRI (MAPMRI) of the diffusion signal. The main idea in MAPMRI footcite:p:`Ozarslan2013` is to model the diffusion signal as a linear combination of the continuous functions presented in footcite:p:`Ozarslan2008` but extending it in three dimensions. 
    The main difference with the SHORE proposed in :footcite:p:`Merlet2013`
    is that MAPMRI 3D extension is provided using a set of three basis
    functions for the radial part, one for the signal along x, one for y and
    one for z, while :footcite:p:`Merlet2013` uses one basis function to model
    the radial part and real Spherical Harmonics to model the angular part.

    From the MAPMRI coefficients it is possible to use the analytical
    formulae to estimate the ODF.

    See :footcite:p:`Avram2015` for additional tissue microstructure insights
    provided by MAPMRI. See also :footcite:p:`Fick2016b`,
    :footcite:p:`Cheng2012`, :footcite:p:`Hosseinbor2013`,
    :footcite:p:`Craven1979`, and :footcite:p:`DelaHaije2020` for additional
    insight into the model.

    References
    ----------
    .. footbibliography::
    """

    @warning_for_keywords()
    def __init__(
        self,
        gtab,
        *,
        radial_order=6,
        laplacian_regularization=True,
        laplacian_weighting=0.2,
        positivity_constraint=False,
        global_constraints=False,
        pos_grid=15,
        pos_radius="adaptive",
        anisotropic_scaling=True,
        eigenvalue_threshold=1e-04,
        bval_threshold=np.inf,
        dti_scale_estimation=True,
        static_diffusivity=0.7e-3,
        cvxpy_solver=None,
    ):
        r"""Analytical and continuous modeling of the diffusion signal with
        respect to the MAPMRI basis.

        The main idea of the MAPMRI :footcite:p:`Ozarslan2013` is to model
        the diffusion signal as a linear combination of the continuous
        functions presented in :footcite:p:`Ozarslan2008` but extending it in
        three dimensions.

        The main difference with the SHORE proposed in
        :footcite:p:`Ozarslan2009` is that MAPMRI 3D extension is provided
        using a set of three basis functions for the radial part, one for the
        signal along x, one for y and one for z, while
        :footcite:p:`Ozarslan2009` uses one basis function to model the
        radial part and real Spherical Harmonics to model the angular part.

        From the MAPMRI coefficients it is possible to estimate various
        q-space indices, the PDF and the ODF. The fitting procedure can be
        constrained using the positivity constraint proposed in
        :footcite:p:`Ozarslan2013` or :footcite:p:`DelaHaije2020` and/or the
        laplacian regularization proposed in :footcite:p:`Fick2016b`.

        For the estimation of q-space indices we recommend using the
        'regular' anisotropic implementation of MAPMRI. However, it has been
        shown that the ODF estimation in this implementation has a bias which
        'squeezes together' the ODF peaks when there is a crossing at an
        angle smaller than 90 degrees :footcite:p:`Fick2016b`. When you want
        to estimate ODFs for tractography we therefore recommend using the
        isotropic implementation (which is equivalent to
        :footcite:p:`Ozarslan2009`). The switch between isotropic and
        anisotropic can be easily made through the anisotropic_scaling
        option.

        Parameters
        ----------
        gtab : GradientTable,
            Gradient directions and bvalues container class. The gradient
            table has to include b0-images.
        radial_order : unsigned int,
            An even integer that represents the order of the basis.
        laplacian_regularization : bool,
            Regularize using the Laplacian of the MAP-MRI basis.
        laplacian_weighting : string or scalar,
            The string 'GCV' makes it use generalized cross-validation
            :footcite:p:`Craven1979` to find the regularization weight
            :footcite:p:`DelaHaije2020`. A scalar sets the regularization
            weight to that value and an array will make it select the optimal
            weight from the values in the array.
        positivity_constraint : bool,
            Constrain the propagator to be positive.
        global_constraints : bool, optional
            If set to False, positivity is enforced on a grid determined by
            pos_grid and pos_radius.
If set to True, positivity is enforced everywhere using the constraints of :footcite:p:`Merlet2013`. Global constraints are currently supported for anisotropic_scaling=True and for radial_order <= 10. pos_grid : int, optional The number of points in the grid that is used in the local positivity constraint. pos_radius : float or string, optional If set to a float, the maximum distance the local positivity constraint constrains to posivity is that value. If set to 'adaptive', the maximum distance is dependent on the estimated tissue diffusivity. If 'infinity', semidefinite programming constraints are used :footcite:p:`DelaHaije2020`. anisotropic_scaling : bool, optional If True, uses the standard anisotropic MAP-MRI basis. If False, uses the isotropic MAP-MRI basis (equal to 3D-SHORE). eigenvalue_threshold : float, optional Sets the minimum of the tensor eigenvalues in order to avoid stability problem. bval_threshold : float, optional Sets the b-value threshold to be used in the scale factor estimation. In order for the estimated non-Gaussianity to have meaning this value should set to a lower value (b<2000 s/mm^2) such that the scale factors are estimated on signal points that reasonably represent the spins at Gaussian diffusion. dti_scale_estimation : bool, optional Whether or not DTI fitting is used to estimate the isotropic scale factor for isotropic MAP-MRI. When set to False the algorithm presets the isotropic tissue diffusivity to static_diffusivity. This vastly increases fitting speed but at the cost of slightly reduced fitting quality. Can still be used in combination with regularization and constraints. static_diffusivity : float, optional the tissue diffusivity that is used when dti_scale_estimation is set to False. The default is that of typical white matter D=0.7e-3 :footcite:p:`Fick2016b`. cvxpy_solver : str, optional cvxpy solver name. Optionally optimize the positivity constraint with a particular cvxpy solver. See https://www.cvxpy.org/ for details. Default: None (cvxpy chooses its own solver) References ---------- .. footbibliography:: Examples -------- In this example, where the data, gradient table and sphere tessellation used for reconstruction are provided, we model the diffusion signal with respect to the SHORE basis and compute the real and analytical ODF. >>> from dipy.data import dsi_voxels, default_sphere >>> from dipy.core.gradients import gradient_table >>> _, gtab_ = dsi_voxels() >>> gtab = gradient_table(gtab_.bvals, bvecs=gtab_.bvecs, ... b0_threshold=gtab_.bvals.min()) >>> from dipy.sims.voxel import sticks_and_ball >>> data, golden_directions = sticks_and_ball(gtab, d=0.0015, S0=1, ... angles=[(0, 0), ... (90, 0)], ... fractions=[50, 50], ... snr=None) >>> from dipy.reconst.mapmri import MapmriModel >>> radial_order = 4 >>> map_model = MapmriModel(gtab, radial_order=radial_order) >>> mapfit = map_model.fit(data) >>> odf = mapfit.odf(default_sphere) """ if np.sum(gtab.b0s_mask) == 0: raise ValueError( "gtab does not have any b0s, check in the " "gradient_table if b0_threshold needs to be " "increased." ) self.gtab = gtab if radial_order < 0 or radial_order % 2: raise ValueError("radial_order must be a positive, even number.") self.radial_order = radial_order self.bval_threshold = bval_threshold self.dti_scale_estimation = dti_scale_estimation if laplacian_regularization: msg = ( "Laplacian Regularization weighting must be 'GCV'," " a positive float or an array of positive floats." 
) if isinstance(laplacian_weighting, str): if laplacian_weighting != "GCV": raise ValueError(msg) elif isinstance(laplacian_weighting, (float, np.ndarray)): if np.sum(laplacian_weighting < 0) > 0: raise ValueError(msg) self.laplacian_weighting = laplacian_weighting self.laplacian_regularization = laplacian_regularization if positivity_constraint: if not have_cvxpy: raise ImportError("CVXPY package needed to enforce constraints.") if cvxpy_solver is not None: if cvxpy_solver not in cvxpy.installed_solvers(): installed_solvers = ", ".join(cvxpy.installed_solvers()) raise ValueError( f"Input `cvxpy_solver` was set to" f" {cvxpy_solver}. One of" f" {installed_solvers} was expected." ) self.cvxpy_solver = cvxpy_solver if global_constraints: if not anisotropic_scaling: raise ValueError( "Global constraints only available for" " anisotropic_scaling=True." ) if radial_order > 10: self.sdp_constraints = load_sdp_constraints("hermite", order=10) warn( "Global constraints are currently only supported for" " radial_order <= 10; using the radial_order 10" " constraint set.", stacklevel=2, ) else: self.sdp_constraints = load_sdp_constraints( "hermite", order=radial_order ) m = (2 + radial_order) * (4 + radial_order) * (3 + 2 * radial_order) m = m // 24 self.sdp = PositiveDefiniteLeastSquares(m, A=self.sdp_constraints) else: msg = "pos_radius must be 'adaptive' or a positive float." if isinstance(pos_radius, str): if pos_radius != "adaptive": raise ValueError(msg) elif isinstance(pos_radius, (float, int)): if pos_radius <= 0: raise ValueError(msg) self.constraint_grid = create_rspace(pos_grid, pos_radius) if not anisotropic_scaling: self.pos_K_independent = mapmri_isotropic_K_mu_independent( radial_order, self.constraint_grid ) else: raise ValueError(msg) self.pos_grid = pos_grid self.pos_radius = pos_radius self.global_constraints = global_constraints self.positivity_constraint = positivity_constraint self.anisotropic_scaling = anisotropic_scaling if (gtab.big_delta is None) or (gtab.small_delta is None): self.tau = 1 / (4 * np.pi**2) else: # effective diffusion time: big_delta - small_delta / 3 self.tau = gtab.big_delta - gtab.small_delta / 3.0 self.eigenvalue_threshold = eigenvalue_threshold self.cutoff = gtab.bvals < self.bval_threshold gtab_cutoff = gradient_table( bvals=self.gtab.bvals[self.cutoff], bvecs=self.gtab.bvecs[self.cutoff] ) self.tenmodel = dti.TensorModel(gtab_cutoff) if self.anisotropic_scaling: self.ind_mat = mapmri_index_matrix(self.radial_order) self.Bm = b_mat(self.ind_mat) self.S_mat, self.T_mat, self.U_mat = mapmri_STU_reg_matrices(radial_order) else: self.ind_mat = mapmri_isotropic_index_matrix(self.radial_order) self.Bm = b_mat_isotropic(self.ind_mat) self.laplacian_matrix = mapmri_isotropic_laplacian_reg_matrix( radial_order, 1.0 ) qvals = np.sqrt(self.gtab.bvals / self.tau) / (2 * np.pi) q = gtab.bvecs * qvals[:, None] if self.dti_scale_estimation: self.M_mu_independent = mapmri_isotropic_M_mu_independent( self.radial_order, q ) else: D = static_diffusivity mumean = np.sqrt(2 * D * self.tau) self.mu = np.array([mumean, mumean, mumean]) self.M = mapmri_isotropic_phi_matrix(radial_order, mumean, q) if ( self.laplacian_regularization and isinstance(laplacian_weighting, float) and not positivity_constraint ): MMt = ( np.dot(self.M.T, self.M) + laplacian_weighting * mumean * self.laplacian_matrix ) self.MMt_inv_Mt = np.dot(np.linalg.pinv(MMt), self.M.T) @multi_voxel_fit def fit(self, data, **kwargs): errorcode = 0 tenfit = self.tenmodel.fit(data[self.cutoff]) evals = tenfit.evals R = tenfit.evecs evals = np.clip(evals, self.eigenvalue_threshold, evals.max()) qvals = 
np.sqrt(self.gtab.bvals / self.tau) / (2 * np.pi) mu_max = max(np.sqrt(evals * 2 * self.tau)) # used for constraint if self.anisotropic_scaling: mu = np.sqrt(evals * 2 * self.tau) qvecs = np.dot(self.gtab.bvecs, R) q = qvecs * qvals[:, None] M = mapmri_phi_matrix(self.radial_order, mu, q) else: try: # reuse the precomputed self.MMt_inv_Mt if available # (raises AttributeError otherwise) lopt = self.laplacian_weighting coef = np.dot(self.MMt_inv_Mt, data) coef = coef / sum(coef * self.Bm) return MapmriFit(self, coef, self.mu, R, lopt, errorcode=errorcode) except AttributeError: try: M = self.M mu = self.mu except AttributeError: u0 = isotropic_scale_factor(evals * 2 * self.tau) mu = np.array([u0, u0, u0]) M_mu_dependent = mapmri_isotropic_M_mu_dependent( self.radial_order, mu[0], qvals ) M = M_mu_dependent * self.M_mu_independent if self.laplacian_regularization: if self.anisotropic_scaling: laplacian_matrix = mapmri_laplacian_reg_matrix( self.ind_mat, mu, self.S_mat, self.T_mat, self.U_mat ) else: laplacian_matrix = self.laplacian_matrix * mu[0] if ( isinstance(self.laplacian_weighting, str) and self.laplacian_weighting.upper() == "GCV" ): try: lopt = generalized_crossvalidation(data, M, laplacian_matrix) except np.linalg.LinAlgError: lopt = 0.05 errorcode = 1 elif np.isscalar(self.laplacian_weighting): lopt = self.laplacian_weighting else: lopt = generalized_crossvalidation_array( data, M, laplacian_matrix, weights_array=self.laplacian_weighting ) else: lopt = 0.0 laplacian_matrix = np.ones((self.ind_mat.shape[0], self.ind_mat.shape[0])) if self.positivity_constraint: data_norm = np.asarray(data / data[self.gtab.b0s_mask].mean()) if self.global_constraints: coef = self.sdp.solve(M, data_norm, solver=self.cvxpy_solver) else: c = cvxpy.Variable(M.shape[1]) design_matrix = cvxpy.Constant(M) @ c # workaround for the bug on cvxpy 1.0.15 when lopt = 0 # See https://github.com/cvxgrp/cvxpy/issues/672 if not lopt: objective = cvxpy.Minimize( cvxpy.sum_squares(design_matrix - data_norm) ) else: objective = cvxpy.Minimize( cvxpy.sum_squares(design_matrix - data_norm) + lopt * cvxpy.quad_form(c, laplacian_matrix) ) if self.pos_radius == "adaptive": # custom constraint grid based on scale factor [Avram2015] constraint_grid = create_rspace(self.pos_grid, np.sqrt(5) * mu_max) else: constraint_grid = self.constraint_grid if self.anisotropic_scaling: K = mapmri_psi_matrix(self.radial_order, mu, constraint_grid) else: if self.pos_radius == "adaptive": # grid changes per voxel. Recompute entire K matrix. K = mapmri_isotropic_psi_matrix( self.radial_order, mu[0], constraint_grid ) else: # grid is static. Only compute mu-dependent part of K. 
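# For a fixed grid the isotropic K matrix factors into an elementwise
# product of a mu-dependent part and a mu-independent part. The
# mu-independent factor was precomputed in __init__
# (self.pos_K_independent), so only the mu-dependent factor has to be
# rebuilt for each voxel.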
K_dependent = mapmri_isotropic_K_mu_dependent( self.radial_order, mu[0], constraint_grid ) K = K_dependent * self.pos_K_independent M0 = M[self.gtab.b0s_mask, :] constraints = [(M0[0] @ c) == 1, (K @ c) >= -0.1] prob = cvxpy.Problem(objective, constraints) try: prob.solve(solver=self.cvxpy_solver) coef = np.asarray(c.value).squeeze() except Exception: errorcode = 2 warn("Optimization did not find a solution", stacklevel=2) try: coef = np.dot(np.linalg.pinv(M), data) # least squares except np.linalg.LinAlgError: errorcode = 3 coef = np.zeros(M.shape[1]) return MapmriFit(self, coef, mu, R, lopt, errorcode=errorcode) else: try: pseudoInv = np.dot( np.linalg.inv(np.dot(M.T, M) + lopt * laplacian_matrix), M.T ) coef = np.dot(pseudoInv, data) except np.linalg.LinAlgError: errorcode = 1 coef = np.zeros(M.shape[1]) return MapmriFit(self, coef, mu, R, lopt, errorcode=errorcode) coef = coef / sum(coef * self.Bm) return MapmriFit(self, coef, mu, R, lopt, errorcode=errorcode) class MapmriFit(ReconstFit): @warning_for_keywords() def __init__(self, model, mapmri_coef, mu, R, lopt, *, errorcode=0): """Calculates diffusion properties for a single voxel. Parameters ---------- model : object, AnalyticalModel mapmri_coef : 1d ndarray, mapmri coefficients mu : array, shape (3,) scale parameters vector for x, y and z R : array, shape (3,3) rotation matrix lopt : float, regularization weight used for Laplacian regularization errorcode : int provides information on whether errors occurred in the fitting of each voxel. 0 means no problem, 1 means a LinAlgError occurred when trying to invert the design matrix. 2 means the positivity constraint was unable to solve the problem. 3 means that after positivity constraint failed, also matrix inversion failed. """ self.model = model self._mapmri_coef = mapmri_coef self.gtab = model.gtab self.radial_order = model.radial_order self.mu = mu self.R = R self.lopt = lopt self.errorcode = errorcode @property def mapmri_mu(self): """The MAPMRI scale factors""" return self.mu @property def mapmri_R(self): """The MAPMRI rotation matrix""" return self.R @property def mapmri_coeff(self): """The MAPMRI coefficients""" return self._mapmri_coef @warning_for_keywords() def odf(self, sphere, *, s=2): """Calculates the analytical Orientation Distribution Function (ODF) from the signal. See :footcite:p:`Ozarslan2013` Eq. (32). Parameters ---------- sphere : Sphere A Sphere instance with vertices, edges and faces attributes. s : unsigned int radial moment of the ODF References ---------- .. footbibliography:: """ if self.model.anisotropic_scaling: v_ = sphere.vertices v = np.dot(v_, self.R) I_s = mapmri_odf_matrix(self.radial_order, self.mu, s, v) odf = np.dot(I_s, self._mapmri_coef) else: _I = self.model.cache_get("ODF_matrix", key=(sphere, s)) if _I is None: _I = mapmri_isotropic_odf_matrix( self.radial_order, 1, s, sphere.vertices ) self.model.cache_set("ODF_matrix", (sphere, s), _I) odf = self.mu[0] ** s * np.dot(_I, self._mapmri_coef) return odf @warning_for_keywords() def odf_sh(self, *, s=2): """Calculates the real analytical ODF as spherical harmonic coefficients. Computes the ODF design matrix in the spherical harmonic basis for the given radial moment :footcite:p:`Ozarslan2013` eq. (32). The radial moment s acts as a sharpening method. The analytical equation for the spherical ODF basis is given in :footcite:p:`Fick2016b` eq. (C8). References ---------- .. 
footbibliography:: """ if self.model.anisotropic_scaling: raise ValueError( "odf in spherical harmonics not yet implemented " "for anisotropic implementation" ) _I = self.model.cache_get("ODF_sh_matrix", key=(self.radial_order, s)) if _I is None: _I = mapmri_isotropic_odf_sh_matrix(self.radial_order, 1, s) self.model.cache_set("ODF_sh_matrix", (self.radial_order, s), _I) odf = self.mu[0] ** s * np.dot(_I, self._mapmri_coef) return odf def rtpp(self): """Calculates the analytical return to the plane probability (RTPP). RTPP is defined in :footcite:p:`Ozarslan2013` eq. (42). The analytical formula for the isotropic MAP-MRI basis was derived in :footcite:p:`Fick2016b` eq. (C11). References ---------- .. footbibliography:: """ Bm = self.model.Bm ind_mat = self.model.ind_mat if self.model.anisotropic_scaling: sel = Bm > 0.0 # select only relevant coefficients const = 1 / (np.sqrt(2 * np.pi) * self.mu[0]) ind_sum = (-1.0) ** (ind_mat[sel, 0] / 2.0) rtpp_vec = const * Bm[sel] * ind_sum * self._mapmri_coef[sel] rtpp = rtpp_vec.sum() return rtpp else: rtpp_vec = np.zeros((ind_mat.shape[0])) count = 0 for n in range(0, self.model.radial_order + 1, 2): for j in range(1, 2 + n // 2): ell = n + 2 - 2 * j const = (-1 / 2.0) ** (ell / 2) / np.sqrt(np.pi) matsum = 0 for k in range(0, j): matsum += ( (-1) ** k * binomialfloat(j + ell - 0.5, j - k - 1) * gamma(ell / 2 + k + 1 / 2.0) / (sfactorial(k) * 0.5 ** (ell / 2 + 1 / 2.0 + k)) ) for _ in range(-ell, ell + 1): rtpp_vec[count] = const * matsum count += 1 direction = np.array(self.R[:, 0], ndmin=2) r, theta, phi = cart2sphere( direction[:, 0], direction[:, 1], direction[:, 2] ) rtpp = ( self._mapmri_coef * (1 / self.mu[0]) * rtpp_vec * real_sh_descoteaux_from_index( ind_mat[:, 2], ind_mat[:, 1], theta, phi ) ) return rtpp.sum() def rtap(self): """Calculates the analytical return to the axis probability (RTAP). RTAP is defined in :footcite:p:`Ozarslan2013` eq. (40, 44a). The analytical formula for the isotropic MAP-MRI basis was derived in :footcite:p:`Fick2016b` eq. (C11). References ---------- .. footbibliography:: """ Bm = self.model.Bm ind_mat = self.model.ind_mat if self.model.anisotropic_scaling: sel = Bm > 0.0 # select only relevant coefficients const = 1 / (2 * np.pi * np.prod(self.mu[1:])) ind_sum = (-1.0) ** (np.sum(ind_mat[sel, 1:], axis=1) / 2.0) rtap_vec = const * Bm[sel] * ind_sum * self._mapmri_coef[sel] rtap = np.sum(rtap_vec) else: rtap_vec = np.zeros((ind_mat.shape[0])) count = 0 for n in range(0, self.model.radial_order + 1, 2): for j in range(1, 2 + n // 2): ell = n + 2 - 2 * j kappa = ((-1) ** (j - 1) * 2 ** (-(ell + 3) / 2.0)) / np.pi matsum = 0 for k in range(0, j): matsum += ( (-1) ** k * binomialfloat(j + ell - 0.5, j - k - 1) * gamma((ell + 1) / 2.0 + k) ) / (sfactorial(k) * 0.5 ** ((ell + 1) / 2.0 + k)) for _ in range(-ell, ell + 1): rtap_vec[count] = kappa * matsum count += 1 rtap_vec *= 2 direction = np.array(self.R[:, 0], ndmin=2) r, theta, phi = cart2sphere( direction[:, 0], direction[:, 1], direction[:, 2] ) rtap_vec = ( self._mapmri_coef * (1 / self.mu[0] ** 2) * rtap_vec * real_sh_descoteaux_from_index( ind_mat[:, 2], ind_mat[:, 1], theta, phi ) ) rtap = rtap_vec.sum() return rtap def rtop(self): """Calculates the analytical return to the origin probability (RTOP). RTOP is defined in :footcite:p:`Ozarslan2013` eq. (36, 43). The analytical formula for the isotropic MAP-MRI basis was derived in :footcite:p:`Fick2016b` eq. (C11). References ---------- .. 
footbibliography:: """ Bm = self.model.Bm if self.model.anisotropic_scaling: const = 1 / (np.sqrt(8 * np.pi**3) * np.prod(self.mu)) ind_sum = (-1.0) ** (np.sum(self.model.ind_mat, axis=1) / 2) rtop_vec = const * ind_sum * Bm * self._mapmri_coef rtop = rtop_vec.sum() else: const = 1 / (2 * np.sqrt(2.0) * np.pi ** (3 / 2.0)) rtop_vec = const * (-1.0) ** (self.model.ind_mat[:, 0] - 1) * Bm rtop = (1 / self.mu[0] ** 3) * rtop_vec * self._mapmri_coef rtop = rtop.sum() return rtop def msd(self): """Calculates the analytical Mean Squared Displacement (MSD). It is defined as the Laplacian of the estimated signal at the origin of q-space :footcite:p:`Cheng2012`. The analytical formula for the MAP-MRI basis was derived in :footcite:p:`Fick2016b` eq. (C13, D1). References ---------- .. footbibliography:: """ mu = self.mu ind_mat = self.model.ind_mat Bm = self.model.Bm sel = self.model.Bm > 0.0 # select only relevant coefficients mapmri_coef = self._mapmri_coef[sel] if self.model.anisotropic_scaling: ind_sum = np.sum(ind_mat[sel], axis=1) nx, ny, nz = ind_mat[sel].T numerator = ( (-1) ** (0.5 * (-ind_sum)) * np.pi ** (3 / 2.0) * ( (1 + 2 * nx) * mu[0] ** 2 + (1 + 2 * ny) * mu[1] ** 2 + (1 + 2 * nz) * mu[2] ** 2 ) ) denominator = ( np.sqrt( 2.0 ** (-ind_sum) * sfactorial(nx) * sfactorial(ny) * sfactorial(nz) ) * gamma(0.5 - 0.5 * nx) * gamma(0.5 - 0.5 * ny) * gamma(0.5 - 0.5 * nz) ) msd_vec = self._mapmri_coef[sel] * (numerator / denominator) msd = msd_vec.sum() else: msd_vec = (4 * ind_mat[sel, 0] - 1) * Bm[sel] msd = self.mu[0] ** 2 * msd_vec * mapmri_coef msd = msd.sum() return msd def qiv(self): """Calculates the analytical Q-space Inverse Variance (QIV). It is defined as the inverse of the Laplacian of the estimated propagator at the origin :footcite:p:`Hosseinbor2013` eq. (22). The analytical formula for the MAP-MRI basis was derived in :footcite:p:`Fick2016b` eq. (C14, D2). References ---------- .. footbibliography:: """ ux, uy, uz = self.mu ind_mat = self.model.ind_mat if self.model.anisotropic_scaling: sel = self.model.Bm > 0 # select only relevant coefficients nx, ny, nz = ind_mat[sel].T numerator = ( 8 * np.pi**2 * (ux * uy * uz) ** 3 * np.sqrt(sfactorial(nx) * sfactorial(ny) * sfactorial(nz)) * gamma(0.5 - 0.5 * nx) * gamma(0.5 - 0.5 * ny) * gamma(0.5 - 0.5 * nz) ) denominator = np.sqrt(2.0 ** (-1 + nx + ny + nz)) * ( (1 + 2 * nx) * uy**2 * uz**2 + ux**2 * ((1 + 2 * nz) * uy**2 + (1 + 2 * ny) * uz**2) ) qiv_vec = self._mapmri_coef[sel] * (numerator / denominator) qiv = qiv_vec.sum() else: sel = self.model.Bm > 0.0 # select only relevant coefficients j = ind_mat[sel, 0] qiv_vec = (8 * (-1.0) ** (1 - j) * np.sqrt(2) * np.pi ** (7 / 2.0)) / ( (4.0 * j - 1) * self.model.Bm[sel] ) qiv = ux**5 * qiv_vec * self._mapmri_coef[sel] qiv = qiv.sum() return qiv def ng(self): r"""Calculates the analytical non-Gaussianity (NG). For the NG to be meaningful the mapmri scale factors must be estimated only on data representing Gaussian diffusion of spins, i.e., bvals smaller than about 2000 s/mm^2 :footcite:p:`Avram2015`. See :footcite:p:`Ozarslan2013` for a definition of the metric. References ---------- .. footbibliography:: """ if self.model.bval_threshold > 2000.0: warn( "Model bval_threshold must be lower than 2000 for the " "non-Gaussianity to be physically meaningful.", stacklevel=2, ) if not self.model.anisotropic_scaling: raise ValueError( "Non-Gaussianity is not defined using isotropic scaling." 
) coef = self._mapmri_coef return np.sqrt(1 - coef[0] ** 2 / np.sum(coef**2)) def ng_parallel(self): r"""Calculates the analytical parallel non-Gaussianity (NG). For the NG to be meaningful the mapmri scale factors must be estimated only on data representing Gaussian diffusion of spins, i.e., bvals smaller than about 2000 s/mm^2 :footcite:p:`Avram2015`. See :footcite:p:`Ozarslan2013` for a definition of the metric. References ---------- .. footbibliography:: """ if self.model.bval_threshold > 2000.0: warn( "Model bval_threshold must be lower than 2000 for the " "non-Gaussianity to be physically meaningful.", stacklevel=2, ) if not self.model.anisotropic_scaling: raise ValueError( "Parallel non-Gaussianity is not defined using isotropic scaling." ) ind_mat = self.model.ind_mat coef = self._mapmri_coef a_par = np.zeros_like(coef) a0 = np.zeros_like(coef) for i in range(coef.shape[0]): n1, n2, n3 = ind_mat[i] if (n2 % 2 + n3 % 2) == 0: a_par[i] = ( coef[i] * (-1) ** ((n2 + n3) / 2) * np.sqrt(sfactorial(n2) * sfactorial(n3)) / (factorial2(n2) * factorial2(n3)) ) if n1 == 0: a0[i] = a_par[i] return np.sqrt(1 - np.sum(a0**2) / np.sum(a_par**2)) def ng_perpendicular(self): r"""Calculates the analytical perpendicular non-Gaussianity (NG). For the NG to be meaningful the mapmri scale factors must be estimated only on data representing Gaussian diffusion of spins, i.e., bvals smaller than about 2000 s/mm^2 :footcite:p:`Avram2015`. See :footcite:p:`Ozarslan2013` for a definition of the metric. References ---------- .. footbibliography:: """ if self.model.bval_threshold > 2000.0: warn( "Model bval_threshold must be lower than 2000 for the " "non-Gaussianity to be physically meaningful.", stacklevel=2, ) if not self.model.anisotropic_scaling: raise ValueError( "Perpendicular non-Gaussianity is not defined using isotropic scaling." ) ind_mat = self.model.ind_mat coef = self._mapmri_coef a_perp = np.zeros_like(coef) a00 = np.zeros_like(coef) for i in range(coef.shape[0]): n1, n2, n3 = ind_mat[i] if n1 % 2 == 0: if n2 % 2 == 0 and n3 % 2 == 0: a_perp[i] = ( coef[i] * (-1) ** (n1 / 2) * np.sqrt(sfactorial(n1)) / factorial2(n1) ) if n2 == 0 and n3 == 0: a00[i] = a_perp[i] return np.sqrt(1 - np.sum(a00**2) / np.sum(a_perp**2)) def norm_of_laplacian_signal(self): """Calculates the norm of the Laplacian of the fitted signal. This information could be useful to assess if the extrapolation of the fitted signal contains spurious oscillations. A high Laplacian norm may indicate that these are present, and any q-space indices that use integrals of the signal may be corrupted (e.g. RTOP, RTAP, RTPP, QIV). See :footcite:p:`Fick2016b` for a definition of the metric. References ---------- .. footbibliography:: """ if self.model.anisotropic_scaling: laplacian_matrix = mapmri_laplacian_reg_matrix( self.model.ind_mat, self.mu, self.model.S_mat, self.model.T_mat, self.model.U_mat, ) else: laplacian_matrix = self.mu[0] * self.model.laplacian_matrix norm_of_laplacian = np.linalg.multi_dot( [self._mapmri_coef, laplacian_matrix, self._mapmri_coef] ) return norm_of_laplacian @warning_for_keywords() def fitted_signal(self, *, gtab=None): """Recovers the fitted signal for the given gradient table. If no gradient table is given, it recovers the signal for the gtab of the model object. 
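As a usage sketch (assuming a fitted model, e.g. the ``mapfit`` object from the ``MapmriModel`` example above), ``mapfit.fitted_signal()`` returns the normalized signal prediction on the model's own gradient table; it is a thin convenience wrapper around ``predict`` with ``S0=1``.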
""" if gtab is None: E = self.predict(self.model.gtab, S0=1.0) else: E = self.predict(gtab, S0=1.0) return E @warning_for_keywords() def predict(self, qvals_or_gtab, *, S0=100.0): """Recovers the reconstructed signal for any qvalue array or gradient table. """ if isinstance(qvals_or_gtab, np.ndarray): q = qvals_or_gtab # qvals = np.linalg.norm(q, axis=1) else: gtab = qvals_or_gtab qvals = np.sqrt(gtab.bvals / self.model.tau) / (2 * np.pi) q = qvals[:, None] * gtab.bvecs if self.model.anisotropic_scaling: q_rot = np.dot(q, self.R) M = mapmri_phi_matrix(self.radial_order, self.mu, q_rot) else: M = mapmri_isotropic_phi_matrix(self.radial_order, self.mu[0], q) E = S0 * np.dot(M, self._mapmri_coef) return E def pdf(self, r_points): """Diffusion propagator on a given set of real points. if the array r_points is non writeable, then intermediate results are cached for faster recalculation """ if self.model.anisotropic_scaling: r_point_rotated = np.dot(r_points, self.R) K = mapmri_psi_matrix(self.radial_order, self.mu, r_point_rotated) EAP = np.dot(K, self._mapmri_coef) else: if not r_points.flags.writeable: K_independent = self.model.cache_get( "mapmri_matrix_pdf_independent", key=hash(r_points.data) ) if K_independent is None: K_independent = mapmri_isotropic_K_mu_independent( self.radial_order, r_points ) self.model.cache_set( "mapmri_matrix_pdf_independent", hash(r_points.data), K_independent, ) K_dependent = mapmri_isotropic_K_mu_dependent( self.radial_order, self.mu[0], r_points ) K = K_dependent * K_independent else: K = mapmri_isotropic_psi_matrix(self.radial_order, self.mu[0], r_points) EAP = np.dot(K, self._mapmri_coef) return EAP def isotropic_scale_factor(mu_squared): """Estimated isotropic scaling factor. See :footcite:p:`Ozarslan2013` Eq. (49). Parameters ---------- mu_squared : array, shape (N,3) squared scale factors of mapmri basis in x, y, z Returns ------- u0 : float closest isotropic scale factor for the isotropic basis References ---------- .. footbibliography:: """ X, Y, Z = mu_squared coef_array = np.array([-3, -(X + Y + Z), (X * Y + X * Z + Y * Z), 3 * X * Y * Z]) # take the real, positive root of the problem. u0 = np.sqrt(np.real(np.roots(coef_array).max())) return u0 def mapmri_index_matrix(radial_order): """Calculates the indices for the MAPMRI basis in x, y and z. See :footcite:p:`Ozarslan2013` for a definition of MAPMRI. Parameters ---------- radial_order : unsigned int radial order of MAPMRI basis Returns ------- index_matrix : array, shape (N,3) ordering of the basis in x, y, z References ---------- .. footbibliography:: """ index_matrix = [] for n in range(0, radial_order + 1, 2): for i in range(0, n + 1): for j in range(0, n - i + 1): index_matrix.append([n - i - j, j, i]) return np.array(index_matrix) def b_mat(index_matrix): """Calculates the B coefficients from See :footcite:p:`Ozarslan2013` Eq. (27). Parameters ---------- index_matrix : array, shape (N,3) ordering of the basis in x, y, z Returns ------- B : array, shape (N,) B coefficients for the basis References ---------- .. footbibliography:: """ B = np.zeros(index_matrix.shape[0]) for i in range(index_matrix.shape[0]): n1, n2, n3 = index_matrix[i] K = int(not (n1 % 2) and not (n2 % 2) and not (n3 % 2)) B[i] = ( K * np.sqrt(sfactorial(n1) * sfactorial(n2) * sfactorial(n3)) / (factorial2(n1) * factorial2(n2) * factorial2(n3)) ) return B def b_mat_isotropic(index_matrix): """Calculates the isotropic B coefficients. See :footcite:p:`Ozarslan2013` Fig 8. 
Parameters ---------- index_matrix : array, shape (N,3) ordering of the isotropic basis in j, l, m Returns ------- B : array, shape (N,) B coefficients for the isotropic basis References ---------- .. footbibliography:: """ B = np.zeros((index_matrix.shape[0])) for i in range(index_matrix.shape[0]): if index_matrix[i, 1] == 0: B[i] = genlaguerre(index_matrix[i, 0] - 1, 0.5)(0) return B def mapmri_phi_1d(n, q, mu): """One dimensional MAPMRI basis function. See :footcite:p:`Ozarslan2013` Eq. (4). Parameters ---------- n : unsigned int order of the basis q : array, shape (N,) points in the q-space in which to evaluate the basis mu : float scale factor of the basis References ---------- .. footbibliography:: """ qn = 2 * np.pi * mu * q H = hermite(n)(qn) i = complex(0, 1) f = mfactorial(n) k = i ** (-n) / np.sqrt(2**n * f) phi = k * np.exp(-(qn**2) / 2) * H return phi def mapmri_phi_matrix(radial_order, mu, q_gradients): """Compute the MAPMRI phi matrix for the signal. See :footcite:p:`Ozarslan2013` eq. (23). Parameters ---------- radial_order : unsigned int, an even integer that represents the order of the basis mu : array, shape (3,) scale factors of the basis for x, y, z q_gradients : array, shape (N,3) points in the q-space in which to evaluate the basis References ---------- .. footbibliography:: """ ind_mat = mapmri_index_matrix(radial_order) n_elem = ind_mat.shape[0] n_qgrad = q_gradients.shape[0] qx, qy, qz = q_gradients.T mux, muy, muz = mu Mx_storage = np.zeros((n_qgrad, radial_order + 1), dtype=complex) My_storage = np.zeros((n_qgrad, radial_order + 1), dtype=complex) Mz_storage = np.zeros((n_qgrad, radial_order + 1), dtype=complex) M = np.zeros((n_qgrad, n_elem)) for n in range(radial_order + 1): Mx_storage[:, n] = mapmri_phi_1d(n, qx, mux) My_storage[:, n] = mapmri_phi_1d(n, qy, muy) Mz_storage[:, n] = mapmri_phi_1d(n, qz, muz) counter = 0 for nx, ny, nz in ind_mat: M[:, counter] = np.real( Mx_storage[:, nx] * My_storage[:, ny] * Mz_storage[:, nz] ) counter += 1 return M def mapmri_psi_1d(n, x, mu): """One dimensional MAPMRI propagator basis function. See :footcite:p:`Ozarslan2013` Eq. (10). Parameters ---------- n : unsigned int order of the basis x : array, shape (N,) points in the r-space in which to evaluate the basis mu : float scale factor of the basis References ---------- .. footbibliography:: """ H = hermite(n)(x / mu) f = mfactorial(n) k = 1 / (np.sqrt(2 ** (n + 1) * np.pi * f) * mu) psi = k * np.exp(-(x**2) / (2 * mu**2)) * H return psi def mapmri_psi_matrix(radial_order, mu, rgrad): """Compute the MAPMRI psi matrix for the propagator. See :footcite:p:`Ozarslan2013` eq. (22). Parameters ---------- radial_order : unsigned int, an even integer that represents the order of the basis mu : array, shape (3,) scale factors of the basis for x, y, z rgrad : array, shape (N,3) points in the r-space in which to evaluate the EAP References ---------- .. 
footbibliography:: """ ind_mat = mapmri_index_matrix(radial_order) n_elem = ind_mat.shape[0] n_qgrad = rgrad.shape[0] rx, ry, rz = rgrad.T mux, muy, muz = mu Kx_storage = np.zeros((n_qgrad, radial_order + 1)) Ky_storage = np.zeros((n_qgrad, radial_order + 1)) Kz_storage = np.zeros((n_qgrad, radial_order + 1)) K = np.zeros((n_qgrad, n_elem)) for n in range(radial_order + 1): Kx_storage[:, n] = mapmri_psi_1d(n, rx, mux) Ky_storage[:, n] = mapmri_psi_1d(n, ry, muy) Kz_storage[:, n] = mapmri_psi_1d(n, rz, muz) counter = 0 for nx, ny, nz in ind_mat: K[:, counter] = Kx_storage[:, nx] * Ky_storage[:, ny] * Kz_storage[:, nz] counter += 1 return K def mapmri_odf_matrix(radial_order, mu, s, vertices): """Compute the MAPMRI ODF matrix. See :footcite:p:`Ozarslan2013` Eq. (33). Parameters ---------- radial_order : unsigned int, an even integer that represents the order of the basis mu : array, shape (3,) scale factors of the basis for x, y, z s : unsigned int radial moment of the ODF vertices : array, shape (N,3) points of the sphere shell in the r-space in which to evaluate the ODF References ---------- .. footbibliography:: """ ind_mat = mapmri_index_matrix(radial_order) n_vert = vertices.shape[0] n_elem = ind_mat.shape[0] odf_mat = np.zeros((n_vert, n_elem)) mux, muy, muz = mu # Eq. (35a) rho = 1.0 / np.sqrt( (vertices[:, 0] / mux) ** 2 + (vertices[:, 1] / muy) ** 2 + (vertices[:, 2] / muz) ** 2 ) # Eq. (35b) alpha = 2 * rho * (vertices[:, 0] / mux) # Eq. (35c) beta = 2 * rho * (vertices[:, 1] / muy) # Eq. (35d) gamma = 2 * rho * (vertices[:, 2] / muz) const = rho ** (3 + s) / np.sqrt( 2 ** (2 - s) * np.pi**3 * (mux**2 * muy**2 * muz**2) ) for j in range(n_elem): n1, n2, n3 = ind_mat[j] f = np.sqrt(sfactorial(n1) * sfactorial(n2) * sfactorial(n3)) odf_mat[:, j] = const * f * _odf_cfunc(n1, n2, n3, alpha, beta, gamma, s) return odf_mat def _odf_cfunc(n1, n2, n3, a, b, g, s): """Compute the MAPMRI ODF function. See :footcite:p:`Ozarslan2013` Eq. (34). References ---------- .. footbibliography:: """ f = mfactorial f2 = factorial2 sumc = 0 for i in range(0, n1 + 1, 2): for j in range(0, n2 + 1, 2): for k in range(0, n3 + 1, 2): nn = n1 + n2 + n3 - i - j - k gam = (-1) ** ((i + j + k) / 2.0) * gamma((3 + s + nn) / 2.0) num1 = a ** (n1 - i) num2 = b ** (n2 - j) num3 = g ** (n3 - k) num = gam * num1 * num2 * num3 denom = f(n1 - i) * f(n2 - j) * f(n3 - k) * f2(i) * f2(j) * f2(k) sumc += num / denom return sumc def mapmri_isotropic_phi_matrix(radial_order, mu, q): """Three dimensional isotropic MAPMRI signal basis function. See :footcite:p:`Ozarslan2013` Eq. (61). Parameters ---------- radial_order : unsigned int, radial order of the mapmri basis. mu : float, positive isotropic scale factor of the basis q : array, shape (N,3) points in the q-space in which to evaluate the basis References ---------- .. footbibliography:: """ qval, theta, phi = cart2sphere(q[:, 0], q[:, 1], q[:, 2]) theta[np.isnan(theta)] = 0 ind_mat = mapmri_isotropic_index_matrix(radial_order) n_elem = ind_mat.shape[0] n_qgrad = q.shape[0] M = np.zeros((n_qgrad, n_elem)) counter = 0 for n in range(0, radial_order + 1, 2): for j in range(1, 2 + n // 2): l_value = n + 2 - 2 * j const = mapmri_isotropic_radial_signal_basis(j, l_value, mu, qval) for m_value in range(-l_value, l_value + 1): M[:, counter] = const * real_sh_descoteaux_from_index( m_value, l_value, theta, phi ) counter += 1 return M def mapmri_isotropic_radial_signal_basis(j, l_value, mu, qval): r"""Radial part of the isotropic 1D-SHORE signal basis. See :footcite:p:`Ozarslan2013` eq. (61). 
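As a transcription of the implementation below, with :math:`c = 2\pi^2\mu^2 q^2` the radial factor evaluates to

.. math::

    (-1)^{l/2}\,\sqrt{4\pi}\,c^{l/2}\,e^{-c}\,L_{j-1}^{l+1/2}(2c),

where :math:`L_{j-1}^{l+1/2}` is a generalized Laguerre polynomial.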
Parameters ---------- j : unsigned int, a positive integer related to the radial order l_value : unsigned int, the spherical harmonic order (l) mu : float, isotropic scale factor of the basis qval : array, shape (N,) points in the q-space in which to evaluate the basis References ---------- .. footbibliography:: """ pi2_mu2_q2 = 2 * np.pi**2 * mu**2 * qval**2 const = ( (-1) ** (l_value / 2) * np.sqrt(4.0 * np.pi) * pi2_mu2_q2 ** (l_value / 2) * np.exp(-pi2_mu2_q2) * genlaguerre(j - 1, l_value + 0.5)(2 * pi2_mu2_q2) ) return const def mapmri_isotropic_M_mu_independent(radial_order, q): r"""Computes the mu independent part of the signal design matrix.""" ind_mat = mapmri_isotropic_index_matrix(radial_order) qval, theta, phi = cart2sphere(q[:, 0], q[:, 1], q[:, 2]) theta[np.isnan(theta)] = 0 n_elem = ind_mat.shape[0] n_rgrad = theta.shape[0] Q_mu_independent = np.zeros((n_rgrad, n_elem)) counter = 0 for n in range(0, radial_order + 1, 2): for j in range(1, 2 + n // 2): l_value = n + 2 - 2 * j const = ( np.sqrt(4 * np.pi) * (-1) ** (-l_value / 2) * (2 * np.pi**2 * qval**2) ** (l_value / 2) ) for m_value in range(-l_value, l_value + 1): Q_mu_independent[:, counter] = const * real_sh_descoteaux_from_index( m_value, l_value, theta, phi ) counter += 1 return Q_mu_independent def mapmri_isotropic_M_mu_dependent(radial_order, mu, qval): """Computes the mu dependent part of the signal design matrix.""" ind_mat = mapmri_isotropic_index_matrix(radial_order) n_elem = ind_mat.shape[0] n_qgrad = qval.shape[0] Q_u0_dependent = np.zeros((n_qgrad, n_elem)) pi2q2mu2 = 2 * np.pi**2 * mu**2 * qval**2 counter = 0 for n in range(0, radial_order + 1, 2): for j in range(1, 2 + n // 2): l_value = n + 2 - 2 * j const = ( mu**l_value * np.exp(-pi2q2mu2) * genlaguerre(j - 1, l_value + 0.5)(2 * pi2q2mu2) ) for _ in range(-l_value, l_value + 1): Q_u0_dependent[:, counter] = const counter += 1 return Q_u0_dependent def mapmri_isotropic_psi_matrix(radial_order, mu, rgrad): """Three dimensional isotropic MAPMRI propagator basis function. See :footcite:p:`Ozarslan2013` Eq. (61). Parameters ---------- radial_order : unsigned int, radial order of the mapmri basis. mu : float, positive isotropic scale factor of the basis rgrad : array, shape (N,3) points in the r-space in which to evaluate the basis References ---------- .. footbibliography:: """ r, theta, phi = cart2sphere(rgrad[:, 0], rgrad[:, 1], rgrad[:, 2]) theta[np.isnan(theta)] = 0 ind_mat = mapmri_isotropic_index_matrix(radial_order) n_elem = ind_mat.shape[0] n_rgrad = rgrad.shape[0] K = np.zeros((n_rgrad, n_elem)) counter = 0 for n in range(0, radial_order + 1, 2): for j in range(1, 2 + n // 2): l_value = n + 2 - 2 * j const = mapmri_isotropic_radial_pdf_basis(j, l_value, mu, r) for m_value in range(-l_value, l_value + 1): K[:, counter] = const * real_sh_descoteaux_from_index( m_value, l_value, theta, phi ) counter += 1 return K def mapmri_isotropic_radial_pdf_basis(j, l_value, mu, r): """Radial part of the isotropic 1D-SHORE propagator basis. See :footcite:p:`Ozarslan2013` eq. (61). Parameters ---------- j : unsigned int, a positive integer related to the radial order l_value : unsigned int, the spherical harmonic order (l) mu : float, isotropic scale factor of the basis r : array, shape (N,) points in the r-space in which to evaluate the basis References ---------- .. 
footbibliography:: """ r2u2 = r**2 / (2 * mu**2) const = ( (-1) ** (j - 1) / (np.sqrt(2) * np.pi * mu**3) * r2u2 ** (l_value / 2) * np.exp(-r2u2) * genlaguerre(j - 1, l_value + 0.5)(2 * r2u2) ) return const def mapmri_isotropic_K_mu_independent(radial_order, rgrad): """Computes mu independent part of K. Same trick as with M.""" r, theta, phi = cart2sphere(rgrad[:, 0], rgrad[:, 1], rgrad[:, 2]) theta[np.isnan(theta)] = 0 ind_mat = mapmri_isotropic_index_matrix(radial_order) n_elem = ind_mat.shape[0] n_rgrad = rgrad.shape[0] K = np.zeros((n_rgrad, n_elem)) counter = 0 for n in range(0, radial_order + 1, 2): for j in range(1, 2 + n // 2): ell = n + 2 - 2 * j const = ( (-1) ** (j - 1) * (np.sqrt(2) * np.pi) ** (-1) * (r**2 / 2) ** (ell / 2) ) for m in range(-ell, ell + 1): K[:, counter] = const * real_sh_descoteaux_from_index( m, ell, theta, phi ) counter += 1 return K def mapmri_isotropic_K_mu_dependent(radial_order, mu, rgrad): """Computes mu dependent part of K. Same trick as with M.""" r, theta, phi = cart2sphere(rgrad[:, 0], rgrad[:, 1], rgrad[:, 2]) theta[np.isnan(theta)] = 0 ind_mat = mapmri_isotropic_index_matrix(radial_order) n_elem = ind_mat.shape[0] n_rgrad = rgrad.shape[0] K = np.zeros((n_rgrad, n_elem)) r2mu2 = r**2 / (2 * mu**2) counter = 0 for n in range(0, radial_order + 1, 2): for j in range(1, 2 + n // 2): ell = n + 2 - 2 * j const = ( (mu**3) ** (-1) * mu ** (-ell) * np.exp(-r2mu2) * genlaguerre(j - 1, ell + 0.5)(2 * r2mu2) ) for _ in range(-ell, ell + 1): K[:, counter] = const counter += 1 return K def binomialfloat(n, k): """Custom binomial function that accepts non-integer arguments.""" return sfactorial(n) / (sfactorial(n - k) * sfactorial(k)) def mapmri_isotropic_odf_matrix(radial_order, mu, s, vertices): """Compute the isotropic MAPMRI ODF matrix. The computation follows :footcite:p:`Ozarslan2013` Eq. 32, but it is done for the isotropic propagator in :footcite:p:`Ozarslan2013` eq. (60). Analytical derivation in :footcite:p:`Fick2016b` eq. (C8). Parameters ---------- radial_order : unsigned int, an even integer that represents the order of the basis mu : float, isotropic scale factor of the isotropic MAP-MRI basis s : unsigned int radial moment of the ODF vertices : array, shape (N,3) points of the sphere shell in the r-space in which to evaluate the ODF Returns ------- odf_mat : Matrix, shape (N_vertices, N_mapmri_coef) ODF design matrix to discrete sphere function References ---------- .. footbibliography:: """ r, theta, phi = cart2sphere(vertices[:, 0], vertices[:, 1], vertices[:, 2]) theta[np.isnan(theta)] = 0 ind_mat = mapmri_isotropic_index_matrix(radial_order) n_vert = vertices.shape[0] n_elem = ind_mat.shape[0] odf_mat = np.zeros((n_vert, n_elem)) counter = 0 for n in range(0, radial_order + 1, 2): for j in range(1, 2 + n // 2): ell = n + 2 - 2 * j kappa = ((-1) ** (j - 1) * 2 ** (-(ell + 3) / 2.0) * mu**s) / np.pi matsum = 0 for k in range(0, j): matsum += ( (-1) ** k * binomialfloat(j + ell - 0.5, j - k - 1) * gamma((ell + s + 3) / 2.0 + k) ) / (mfactorial(k) * 0.5 ** ((ell + s + 3) / 2.0 + k)) for m in range(-ell, ell + 1): odf_mat[:, counter] = ( kappa * matsum * real_sh_descoteaux_from_index(m, ell, theta, phi) ) counter += 1 return odf_mat def mapmri_isotropic_odf_sh_matrix(radial_order, mu, s): """Compute the isotropic MAPMRI ODF matrix in the spherical harmonic basis. The computation follows :footcite:p:`Ozarslan2013` Eq. 32, but it is done for the isotropic propagator in :footcite:p:`Ozarslan2013` eq. (60). 
Here we do not compute the sphere function but the spherical harmonics by only integrating the radial part of the propagator. We use the same derivation of the ODF in the isotropic implementation as in :footcite:p:`Fick2016b` eq. (C8). Parameters ---------- radial_order : unsigned int, an even integer that represent the order of the basis mu : float, isotropic scale factor of the isotropic MAP-MRI basis s : unsigned int radial moment of the ODF Returns ------- odf_sh_mat : Matrix, shape (N_sh_coef, N_mapmri_coef) ODF design matrix to spherical harmonics References ---------- .. footbibliography:: """ sh_mat = sph_harm_ind_list(radial_order) ind_mat = mapmri_isotropic_index_matrix(radial_order) n_elem_shore = ind_mat.shape[0] n_elem_sh = sh_mat[0].shape[0] odf_sh_mat = np.zeros((n_elem_sh, n_elem_shore)) counter = 0 for n in range(0, radial_order + 1, 2): for j in range(1, 2 + n // 2): ell = n + 2 - 2 * j kappa = ((-1) ** (j - 1) * 2 ** (-(ell + 3) / 2.0) * mu**s) / np.pi matsum = 0 for k in range(0, j): matsum += ( (-1) ** k * binomialfloat(j + ell - 0.5, j - k - 1) * gamma((ell + s + 3) / 2.0 + k) ) / (mfactorial(k) * 0.5 ** ((ell + s + 3) / 2.0 + k)) for m in range(-ell, ell + 1): index_overlap = np.all([ell == sh_mat[1], m == sh_mat[0]], 0) odf_sh_mat[:, counter] = kappa * matsum * index_overlap counter += 1 return odf_sh_mat def mapmri_isotropic_laplacian_reg_matrix(radial_order, mu): """Computes the Laplacian regularization matrix for MAP-MRI's isotropic implementation. See :footcite:p:`Fick2016b` eq. (C7). Parameters ---------- radial_order : unsigned int, an even integer that represent the order of the basis mu : float, isotropic scale factor of the isotropic MAP-MRI basis Returns ------- LR : Matrix, shape (N_coef, N_coef) Laplacian regularization matrix References ---------- .. footbibliography:: """ ind_mat = mapmri_isotropic_index_matrix(radial_order) return mapmri_isotropic_laplacian_reg_matrix_from_index_matrix(ind_mat, mu) def mapmri_isotropic_laplacian_reg_matrix_from_index_matrix(ind_mat, mu): """Computes the Laplacian regularization matrix for MAP-MRI's isotropic implementation. See :footcite:p:`Fick2016b` eq. (C7). Parameters ---------- ind_mat : matrix (N_coef, 3), Basis order matrix mu : float, isotropic scale factor of the isotropic MAP-MRI basis Returns ------- LR : Matrix, shape (N_coef, N_coef) Laplacian regularization matrix References ---------- .. footbibliography:: """ n_elem = ind_mat.shape[0] LR = np.zeros((n_elem, n_elem)) for i in range(n_elem): for k in range(i, n_elem): if ind_mat[i, 1] == ind_mat[k, 1] and ind_mat[i, 2] == ind_mat[k, 2]: ji = ind_mat[i, 0] jk = ind_mat[k, 0] ell = ind_mat[i, 1] if ji == (jk + 2): LR[i, k] = LR[k, i] = ( 2.0 ** (2 - ell) * np.pi**2 * mu * gamma(5 / 2.0 + jk + ell) / gamma(jk) ) elif ji == (jk + 1): LR[i, k] = LR[k, i] = ( 2.0 ** (2 - ell) * np.pi**2 * mu * (-3 + 4 * ji + 2 * ell) * gamma(3 / 2.0 + jk + ell) / gamma(jk) ) elif ji == jk: LR[i, k] = ( 2.0 ** (-ell) * np.pi**2 * mu * ( 3 + 24 * ji**2 + 4 * (-2 + ell) * ell + 12 * ji * (-1 + 2 * ell) ) * gamma(1 / 2.0 + ji + ell) / gamma(ji) ) elif ji == (jk - 1): LR[i, k] = LR[k, i] = ( 2.0 ** (2 - ell) * np.pi**2 * mu * (-3 + 4 * jk + 2 * ell) * gamma(3 / 2.0 + ji + ell) / gamma(ji) ) elif ji == (jk - 2): LR[i, k] = LR[k, i] = ( 2.0 ** (2 - ell) * np.pi**2 * mu * gamma(5 / 2.0 + ji + ell) / gamma(ji) ) return LR def mapmri_isotropic_index_matrix(radial_order): """Calculates the indices for the isotropic MAPMRI basis. See :footcite:p:`Ozarslan2013` Fig 8. 
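For example, for ``radial_order=2`` the basis has seven functions and the returned [j, l, m] triplets are [1, 0, 0], [1, 2, -2], [1, 2, -1], [1, 2, 0], [1, 2, 1], [1, 2, 2] and [2, 0, 0].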
Parameters ---------- radial_order : unsigned int radial order of isotropic MAPMRI basis Returns ------- index_matrix : array, shape (N,3) ordering of the isotropic basis in j, l, m References ---------- .. footbibliography:: """ index_matrix = [] for n in range(0, radial_order + 1, 2): for j in range(1, 2 + n // 2): for m in range(-1 * (n + 2 - 2 * j), (n + 3 - 2 * j)): index_matrix.append([j, n + 2 - 2 * j, m]) return np.array(index_matrix) def create_rspace(gridsize, radius_max): """Create the real space table that contains the points in which to compute the pdf. Parameters ---------- gridsize : unsigned int dimension of the propagator grid radius_max : float maximal radius in which to compute the propagator Returns ------- tab : array, shape (N,3) real space points in which to calculate the pdf """ radius = gridsize // 2 vecs = [] for i in range(-radius, radius + 1): for j in range(-radius, radius + 1): for k in range(0, radius + 1): vecs.append([i, j, k]) vecs = np.array(vecs, dtype=np.float32) # there are points in the corners farther than sphere radius points_inside_sphere = np.sqrt(np.einsum("ij,ij->i", vecs, vecs)) <= radius vecs_inside_sphere = vecs[points_inside_sphere] tab = vecs_inside_sphere / radius tab = tab * radius_max return tab def delta(n, m): """Kronecker delta.""" if n == m: return 1 return 0 def map_laplace_u(n, m): r"""S(n, m) static matrix for Laplacian regularization. See :footcite:p:`Fick2016b` eq. (13). Parameters ---------- n, m : unsigned int basis order of the MAP-MRI basis in different directions Returns ------- U : float, Analytical integral of :math:`\phi_n(q) * \phi_m(q)` References ---------- .. footbibliography:: """ return (-1) ** n * delta(n, m) / (2 * np.sqrt(np.pi)) def map_laplace_t(n, m): r"""L(m, n) static matrix for Laplacian regularization. See :footcite:p:`Fick2016b` eq. (12). Parameters ---------- n, m : unsigned int basis order of the MAP-MRI basis in different directions Returns ------- T : float Analytical integral of :math:`\phi_n(q) * \phi_m''(q)` References ---------- .. footbibliography:: """ a = np.sqrt((m - 1) * m) * delta(m - 2, n) b = np.sqrt((n - 1) * n) * delta(n - 2, m) c = (2 * n + 1) * delta(m, n) return np.pi ** (3 / 2.0) * (-1) ** (n + 1) * (a + b + c) def map_laplace_s(n, m): r"""R(m,n) static matrix for Laplacian regularization. See :footcite:p:`Fick2016b` eq. (11). Parameters ---------- n, m : unsigned int basis order of the MAP-MRI basis in different directions Returns ------- S : float Analytical integral of :math:`\phi_n''(q) * \phi_m''(q)` References ---------- .. footbibliography:: """ k = 2 * np.pi ** (7 / 2.0) * (-1) ** n a0 = 3 * (2 * n**2 + 2 * n + 1) * delta(n, m) sqmn = np.sqrt(gamma(m + 1) / gamma(n + 1)) sqnm = 1 / sqmn an2 = 2 * (2 * n + 3) * sqmn * delta(m, n + 2) an4 = sqmn * delta(m, n + 4) am2 = 2 * (2 * m + 3) * sqnm * delta(m + 2, n) am4 = sqnm * delta(m + 4, n) return k * (a0 + an2 + an4 + am2 + am4) def mapmri_STU_reg_matrices(radial_order): """Generate the static portions of the Laplacian regularization matrix. See :footcite:p:`Fick2016b` eq. (11, 12, 13). Parameters ---------- radial_order : unsigned int, an even integer that represents the order of the basis Returns ------- S, T, U : Matrices, shape (N_coef,N_coef) Regularization submatrices References ---------- .. 
footbibliography:: """ S = np.zeros((radial_order + 1, radial_order + 1)) for i in range(radial_order + 1): for j in range(radial_order + 1): S[i, j] = map_laplace_s(i, j) T = np.zeros((radial_order + 1, radial_order + 1)) for i in range(radial_order + 1): for j in range(radial_order + 1): T[i, j] = map_laplace_t(i, j) U = np.zeros((radial_order + 1, radial_order + 1)) for i in range(radial_order + 1): for j in range(radial_order + 1): U[i, j] = map_laplace_u(i, j) return S, T, U def mapmri_laplacian_reg_matrix(ind_mat, mu, S_mat, T_mat, U_mat): """Put the Laplacian regularization matrix together. See :footcite:p:`Fick2016b` eq. (10). The static parts in S, T and U are multiplied and divided by the voxel-specific scale factors. Parameters ---------- ind_mat : matrix (N_coef, 3), Basis order matrix mu : array, shape (3,) scale factors of the basis for x, y, z S_mat, T_mat, U_mat : matrices, shape (N_coef,N_coef) Regularization submatrices Returns ------- LR : matrix (N_coef, N_coef), Voxel-specific Laplacian regularization matrix References ---------- .. footbibliography:: """ ux, uy, uz = mu x, y, z = ind_mat.T n_elem = ind_mat.shape[0] LR = np.zeros((n_elem, n_elem)) for i in range(n_elem): for j in range(i, n_elem): if ( (x[i] - x[j]) % 2 == 0 and (y[i] - y[j]) % 2 == 0 and (z[i] - z[j]) % 2 == 0 ): LR[i, j] = LR[j, i] = ( (ux**3 / (uy * uz)) * S_mat[x[i], x[j]] * U_mat[y[i], y[j]] * U_mat[z[i], z[j]] + (uy**3 / (ux * uz)) * S_mat[y[i], y[j]] * U_mat[z[i], z[j]] * U_mat[x[i], x[j]] + (uz**3 / (ux * uy)) * S_mat[z[i], z[j]] * U_mat[x[i], x[j]] * U_mat[y[i], y[j]] + 2 * ((ux * uy) / uz) * T_mat[x[i], x[j]] * T_mat[y[i], y[j]] * U_mat[z[i], z[j]] + 2 * ((ux * uz) / uy) * T_mat[x[i], x[j]] * T_mat[z[i], z[j]] * U_mat[y[i], y[j]] + 2 * ((uz * uy) / ux) * T_mat[z[i], z[j]] * T_mat[y[i], y[j]] * U_mat[x[i], x[j]] ) return LR @warning_for_keywords() def generalized_crossvalidation_array(data, M, LR, *, weights_array=None): """Generalized Cross Validation Function. See :footcite:p:`Fick2016b` eq. (15). Here weights_array is a numpy array with all values that should be considered in the GCV. It will run through the weights until the cost function starts to increase, then stop and take the last value as the optimum weight. Parameters ---------- data : array (N), data array M : matrix, shape (N, Ncoef) mapmri observation matrix LR : matrix, shape (N_coef, N_coef) regularization matrix weights_array : array (N_of_weights) array of optional regularization weights References ---------- .. footbibliography:: """ if weights_array is None: lrange = np.linspace(0.05, 1, 20) # reasonably fast standard range else: lrange = weights_array samples = lrange.shape[0] MMt = np.dot(M.T, M) K = len(data) gcvold = gcvnew = 10e10 # set initialization gcv threshold very high i = -1 while gcvold >= gcvnew and i < samples - 2: gcvold = gcvnew i = i + 1 S = np.linalg.multi_dot([M, np.linalg.pinv(MMt + lrange[i] * LR), M.T]) trS = np.trace(S) normyytilde = np.linalg.norm(data - np.dot(S, data), 2) gcvnew = normyytilde / (K - trS) lopt = lrange[i - 1] return lopt @warning_for_keywords() def generalized_crossvalidation(data, M, LR, *, gcv_startpoint=5e-2): """Generalized Cross Validation Function. Finds optimal regularization weight based on generalized cross-validation. See :footcite:p:`Craven1979` eq. (15). 
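Unlike `generalized_crossvalidation_array`, which scans a discrete list of candidate weights, this function minimizes the GCV cost function with a bounded scalar optimizer (bounds 1e-5 to 10), started at `gcv_startpoint`.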
Parameters ---------- data : array (N), data array M : matrix, shape (N, Ncoef) mapmri observation matrix LR : matrix, shape (N_coef, N_coef) regularization matrix gcv_startpoint : float startpoint for the gcv optimization Returns ------- optimal_lambda : float, optimal regularization weight References ---------- .. footbibliography:: """ MMt = np.dot(M.T, M) K = len(data) bounds = ((1e-5, 10),) solver = Optimizer( fun=gcv_cost_function, x0=(gcv_startpoint,), args=((data, M, MMt, K, LR),), bounds=bounds, ) optimal_lambda = solver.xopt return optimal_lambda def gcv_cost_function(weight, args): """The GCV cost function that is iterated. See :footcite:p:`Fick2016b` for further details about the method. References ---------- .. footbibliography:: """ data, M, MMt, K, LR = args S = np.linalg.multi_dot([M, np.linalg.pinv(MMt + weight * LR), M.T]) trS = np.trace(S) normyytilde = np.linalg.norm(data - np.dot(S, data), 2) gcv_value = normyytilde / (K - trS) return gcv_value dipy-1.11.0/dipy/reconst/mcsd.py000066400000000000000000000752121476546756600165220ustar00rootroot00000000000000import numbers import warnings import numpy as np from dipy.core import geometry as geo from dipy.core.gradients import ( GradientTable, get_bval_indices, gradient_table, unique_bvals_tolerance, ) from dipy.data import default_sphere from dipy.reconst import shm from dipy.reconst.csdeconv import response_from_mask_ssst from dipy.reconst.dti import TensorModel, fractional_anisotropy, mean_diffusivity from dipy.reconst.multi_voxel import multi_voxel_fit from dipy.reconst.utils import _mask_from_roi, _roi_in_volume from dipy.sims.voxel import single_tensor from dipy.testing.decorators import warning_for_keywords from dipy.utils.deprecator import deprecated_params from dipy.utils.optpkg import optional_package cvxpy, have_cvxpy, _ = optional_package("cvxpy", min_version="1.4.1") SH_CONST = 0.5 / np.sqrt(np.pi) @deprecated_params("sh_order", new_name="sh_order_max", since="1.9", until="2.0") def multi_tissue_basis(gtab, sh_order_max, iso_comp): """ Builds a basis for multi-shell multi-tissue CSD model. Parameters ---------- gtab : GradientTable Gradient table. sh_order_max : int Maximal spherical harmonics order (l). iso_comp: int Number of tissue compartments for running the MSMT-CSD. Minimum number of compartments required is 2. Returns ------- B : ndarray Matrix of the spherical harmonics model used to fit the data m_values : int ``|m_value| <= l_value`` The phase factor ($m$) of the harmonic. l_values : int ``l_value >= 0`` The order ($l$) of the harmonic. """ if iso_comp < 2: msg = "Multi-tissue CSD requires at least 2 tissue compartments" raise ValueError(msg) r, theta, phi = geo.cart2sphere(*gtab.gradients.T) m_values, l_values = shm.sph_harm_ind_list(sh_order_max) B = shm.real_sh_descoteaux_from_index( m_values, l_values, theta[:, None], phi[:, None] ) B[np.ix_(gtab.b0s_mask, l_values > 0)] = 0.0 iso = np.empty([B.shape[0], iso_comp]) iso[:] = SH_CONST B = np.concatenate([iso, B], axis=1) return B, m_values, l_values class MultiShellResponse: @deprecated_params("sh_order", new_name="sh_order_max", since="1.9", until="2.0") @warning_for_keywords() def __init__(self, response, sh_order_max, shells, *, S0=None): """Estimate Multi Shell response function for multiple tissues and multiple shells. The method `multi_shell_fiber_response` allows to create a multi-shell fiber response with the right format, for a three compartments model. It can be referred to in order to understand the inputs of this class. 
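The number of isotropic compartments is inferred from the shape of `response`: ``iso = response.shape[1] - sh_order_max // 2 - 1``. For example, with ``sh_order_max=8`` a response array with 7 columns per shell yields ``iso == 2`` isotropic compartments (typically GM and CSF).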
Parameters ---------- response : ndarray Multi-shell fiber response. The ordering of the responses should follow the same logic as S0. sh_order_max : int Maximal spherical harmonics order (l). shells : array-like b-values of the shells in the data S0 : array (3,) Signal with no diffusion weighting for each tissue compartment, in the same tissue order as `response`. This S0 can be used for predicting from a fit model later on. """ self.S0 = S0 self.response = response self.sh_order_max = sh_order_max self.l_values = np.arange(0, sh_order_max + 1, 2) self.m_values = np.zeros_like(self.l_values) self.shells = shells if self.iso < 1: raise ValueError("sh_order_max and shape of response do not agree") @property def iso(self): return self.response.shape[1] - (self.sh_order_max // 2) - 1 @deprecated_params("sh_order", new_name="sh_order_max", since="1.9", until="2.0") def _inflate_response(response, gtab, sh_order_max, delta): """Used to inflate the response for the `multiplier_matrix` in the `MultiShellDeconvModel`. Parameters ---------- response : MultiShellResponse object Response function. gtab : GradientTable Gradient table. sh_order_max : int ``>= 0`` The maximal order ($l$) of the harmonic. delta : Delta generated from `_basic_delta` """ if ( any((sh_order_max % 2) != 0) or (sh_order_max.max() // 2) >= response.sh_order_max ): raise ValueError("Response and n do not match") iso = response.iso n_idx = np.empty(len(sh_order_max) + iso, dtype=int) n_idx[:iso] = np.arange(0, iso) n_idx[iso:] = sh_order_max // 2 + iso diff = abs(response.shells[:, None] - gtab.bvals) b_idx = np.argmin(diff, axis=0) kernel = response.response / delta return kernel[np.ix_(b_idx, n_idx)] def _basic_delta(iso, m_value, l_value, theta, phi): """Simple delta function. Parameters ---------- iso : int Number of tissue compartments for running the MSMT-CSD. Minimum number of compartments required is 2. m_value : int ``|m| <= l`` The phase factor ($m$) of the harmonic. l_value : int ``>= 0`` The order ($l$) of the harmonic. theta : array_like inclination or polar angle phi : array_like azimuth angle """ wm_d = shm.gen_dirac(m_value, l_value, theta, phi) iso_d = [SH_CONST] * iso return np.concatenate([iso_d, wm_d]) class MultiShellDeconvModel(shm.SphHarmModel): @deprecated_params("sh_order", new_name="sh_order_max", since="1.9", until="2.0") @warning_for_keywords() def __init__( self, gtab, response, reg_sphere=default_sphere, *, sh_order_max=8, iso=2, tol=20, ): r""" Multi-Shell Multi-Tissue Constrained Spherical Deconvolution (MSMT-CSD) :footcite:p:`Jeurissen2014`. This method extends the CSD model proposed in :footcite:p:`Tournier2007` by the estimation of multiple response functions as a function of multiple b-values and multiple tissue types. Spherical deconvolution computes a fiber orientation distribution (FOD), also called fiber ODF (fODF) :footcite:p:`Tournier2007`. The fODF is derived from different tissue types and thus overcomes the overestimation of WM in GM and CSF areas. The response function is based on the different tissue types and is provided as input to the MultiShellDeconvModel. It will be used as a deconvolution kernel, as described in :footcite:p:`Tournier2007`. Parameters ---------- gtab : GradientTable Gradient table. response : ndarray or MultiShellResponse object Pre-computed multi-shell fiber response function in the form of a MultiShellResponse object, or simple response function as an ndarray. 
The latter must be of shape (3, len(bvals)-1, 4), because it will be converted into a MultiShellResponse object via the `multi_shell_fiber_response` method (important note: the function `unique_bvals_tolerance` is used here to select unique bvalues from gtab as input). Each column (3,) has two elements. The first is the eigenvalues as a (3,) ndarray and the second is the signal value for the response function without diffusion weighting (S0). Note that in order to use more than three compartments, one must create a MultiShellResponse object on the side. reg_sphere : Sphere, optional Sphere used to build the regularization B matrix. sh_order_max : int, optional Maximal spherical harmonics order (l). iso : int, optional Number of tissue compartments for running the MSMT-CSD. Minimum number of compartments required is 2. tol : int, optional Tolerance gap for b-values clustering. References ---------- .. footbibliography:: """ if not iso >= 2: msg = "Multi-tissue CSD requires at least 2 tissue compartments" raise ValueError(msg) super(MultiShellDeconvModel, self).__init__(gtab) if not isinstance(response, MultiShellResponse): bvals = unique_bvals_tolerance(gtab.bvals, tol=tol) if iso > 2: msg = """Too many compartments for this kind of response input. It must be two tissue compartments.""" raise ValueError(msg) if response.shape != (3, len(bvals) - 1, 4): msg = """Response must be of shape (3, len(bvals)-1, 4) or be a MultiShellResponse object.""" raise ValueError(msg) response = multi_shell_fiber_response( sh_order_max, bvals=bvals, wm_rf=response[0], gm_rf=response[1], csf_rf=response[2], ) B, m_values, l_values = multi_tissue_basis(gtab, sh_order_max, iso) delta = _basic_delta( response.iso, response.m_values, response.l_values, 0.0, 0.0 ) self.delta = delta multiplier_matrix = _inflate_response(response, gtab, l_values, delta) r, theta, phi = geo.cart2sphere(*reg_sphere.vertices.T) odf_reg, _, _ = shm.real_sh_descoteaux(sh_order_max, theta, phi) reg = np.zeros([i + iso for i in odf_reg.shape]) reg[:iso, :iso] = np.eye(iso) reg[iso:, iso:] = odf_reg X = B * multiplier_matrix self.fitter = QpFitter(X, reg) self.sh_order_max = sh_order_max self._X = X self.sphere = reg_sphere self.gtab = gtab self.B_dwi = B self.m_values = m_values self.l_values = l_values self.response = response @warning_for_keywords() def predict(self, params, *, gtab=None, S0=None): """Compute a signal prediction given spherical harmonic coefficients for the provided GradientTable class instance. Parameters ---------- params : ndarray The spherical harmonic representation of the FOD from which to make the signal prediction. gtab : GradientTable The gradients for which the signal will be predicted. Use the model's gradient table by default. S0 : ndarray or float The non diffusion-weighted signal value. """ if gtab is None or gtab is self.gtab: X = self._X else: iso = self.response.iso B, m_values, l_values = multi_tissue_basis(gtab, self.sh_order_max, iso) multiplier_matrix = _inflate_response( self.response, gtab, l_values, self.delta ) X = B * multiplier_matrix scaling = 1.0 if S0 and S0 != 1.0: # The S0=1. case comes from fit.predict(). raise NotImplementedError # This case is not implemented yet because it would require access # to volume fractions (vf) from the fit. The following code # gives an idea of how to use this with S0 and vf. It could also be # calculated externally and used as scaling = S0. # response_scaling = np.ndarray(params.shape[0:3]) # response_scaling[...] 
= (vf[..., 0] * self.response.S0[0] # + vf[..., 1] * self.response.S0[1] # + vf[..., 2] * self.response.S0[2]) # scaling = np.where(response_scaling > 1, S0 / response_scaling, 0) # scaling = np.expand_dims(scaling, 3) # scaling = np.repeat(scaling, len(gtab.bvals), axis=3) pred_sig = scaling * np.dot(params, X.T) return pred_sig @multi_voxel_fit def fit(self, data, verbose=True, **kwargs): """Fits the model to diffusion data and returns the model fit. Sometimes the solving process of some voxels can end in a SolverError from cvxpy. This might be attributed to the response functions not being tuned properly, as the solving process is very sensitive to it. The method will fill the problematic voxels with a NaN value, so that it is traceable. The user should check for the number of NaN values and could then fill the problematic voxels with zeros, for example. Running a fit again only on those problematic voxels can also work. Parameters ---------- data : ndarray The diffusion data to fit the model on. verbose : bool, optional Whether to show warnings when a SolverError appears or not. """ coeff = self.fitter(data) if verbose: if np.isnan(coeff[..., 0]): msg = """Voxel could not be solved properly and ended up with a SolverError. Proceeding to fill it with NaN values. """ warnings.warn(msg, UserWarning, stacklevel=2) return MSDeconvFit(self, coeff, None) class MSDeconvFit(shm.SphHarmFit): def __init__(self, model, coeff, mask): """ Abstract class which holds the fit result of MultiShellDeconvModel. Inherits the SphHarmFit which fits the diffusion data to a spherical harmonic model. Parameters ---------- model: object MultiShellDeconvModel coeff : array Spherical harmonic coefficients for the ODF. mask: ndarray Mask for fitting """ self._shm_coef = coeff self.mask = mask self.model = model @property def shm_coeff(self): return self._shm_coef[..., self.model.response.iso :] @property def all_shm_coeff(self): return self._shm_coef @property def volume_fractions(self): tissue_classes = self.model.response.iso + 1 return self._shm_coef[..., :tissue_classes] / SH_CONST def solve_qp(P, Q, G, H): r""" Helper function to set up and solve the Quadratic Program (QP) in CVXPY. A QP problem has the following form: minimize 1/2 x' P x + Q' x subject to G x <= H Here the QP solver is based on CVXPY and uses OSQP. Parameters ---------- P : ndarray n x n matrix for the primal QP objective function. Q : ndarray n x 1 matrix for the primal QP objective function. G : ndarray m x n matrix for the inequality constraint. H : ndarray m x 1 matrix for the inequality constraint. Returns ------- x : array Optimal solution to the QP problem. """ x = cvxpy.Variable(Q.shape[0]) P = cvxpy.Constant(P) objective = cvxpy.Minimize(0.5 * cvxpy.quad_form(x, P, True) + Q @ x) constraints = [G @ x <= H] # setting up the problem prob = cvxpy.Problem(objective, constraints) try: prob.solve() opt = np.array(x.value).reshape((Q.shape[0],)) except cvxpy.error.SolverError: opt = np.empty((Q.shape[0],)) opt[:] = np.nan return opt class QpFitter: def __init__(self, X, reg): r""" Makes use of the quadratic programming solver `solve_qp` to fit the model. The initialization for the model is done using the warm-start by default in `CVXPY`. 
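For orientation, the `solve_qp` helper wrapped here can be exercised on a toy problem. This is a minimal sketch with illustrative matrices (not DIPY data); exact solver output may differ slightly:

>>> import numpy as np
>>> P = 2 * np.eye(2)           # quadratic term
>>> Q = np.array([-2.0, -2.0])  # linear term; unconstrained optimum is [1, 1]
>>> G = np.eye(2)               # constrain each coordinate ...
>>> H = np.array([0.5, 0.5])    # ... to be at most 0.5
>>> solve_qp(P, Q, G, H)  # doctest: +SKIP
array([0.5, 0.5])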
Parameters ---------- X : ndarray Matrix to be fit by the QP solver calculated in `MultiShellDeconvModel` reg : ndarray the regularization B matrix calculated in `MultiShellDeconvModel` """ self._P = P = np.dot(X.T, X) self._X = X self._reg = reg self._P_mat = np.array(P) self._reg_mat = np.array(-reg) self._h_mat = np.array([0]) def __call__(self, signal): Q = np.dot(self._X.T, signal) Q_mat = np.array(-Q) fodf_sh = solve_qp(self._P_mat, Q_mat, self._reg_mat, self._h_mat) return fodf_sh @deprecated_params("sh_order", new_name="sh_order_max", since="1.9", until="2.0") @warning_for_keywords() def multi_shell_fiber_response( sh_order_max, bvals, wm_rf, gm_rf, csf_rf, *, sphere=None, tol=20, btens=None ): """Fiber response function estimation for multi-shell data. Parameters ---------- sh_order_max : int Maximum spherical harmonics order (l). bvals : ndarray Array containing the b-values. Must be unique b-values, like outputted by `dipy.core.gradients.unique_bvals_tolerance`. wm_rf : (N-1, 4) ndarray Response function of the WM tissue, for each bvals, where N is the number of unique b-values including the b0. gm_rf : (N-1, 4) ndarray Response function of the GM tissue, for each bvals. csf_rf : (N-1, 4) ndarray Response function of the CSF tissue, for each bvals. sphere : `dipy.core.Sphere` instance, optional Sphere where the signal will be evaluated. tol : int, optional Tolerance gap for b-values clustering. btens : can be any of two options, optional 1. an array of strings of shape (N,) specifying encoding tensor shape associated with all unique b-values separately. N corresponds to the number of unique b-values, including the b0. Options for elements in array: 'LTE', 'PTE', 'STE', 'CTE' corresponding to linear, planar, spherical, and "cigar-shaped" tensor encoding. 2. an array of shape (N,3,3) specifying the b-tensor of each unique b-values exactly. N corresponds to the number of unique b-values, including the b0. Returns ------- MultiShellResponse MultiShellResponse object. """ bvals = np.array(bvals, copy=True) if btens is None: btens = np.repeat(["LTE"], len(bvals)) elif len(btens) != len(bvals): msg = """bvals and btens parameters must have the same dimension.""" raise ValueError(msg) evecs = np.zeros((3, 3)) z = np.array([0, 0, 1.0]) evecs[:, 0] = z evecs[:2, 1:] = np.eye(2) l_values = np.arange(0, sh_order_max + 1, 2) m_values = np.zeros_like(l_values) if sphere is None: sphere = default_sphere big_sphere = sphere.subdivide() theta, phi = big_sphere.theta, big_sphere.phi B = shm.real_sh_descoteaux_from_index( m_values, l_values, theta[:, None], phi[:, None] ) A = shm.real_sh_descoteaux_from_index(0, 0, 0, 0) response = np.empty([len(bvals), len(l_values) + 2]) if bvals[0] < tol: gtab = GradientTable(big_sphere.vertices * 0, btens=btens[0]) wm_response = single_tensor( gtab, wm_rf[0, 3], evals=wm_rf[0, :3], evecs=evecs, snr=None ) response[0, 2:] = np.linalg.lstsq(B, wm_response, rcond=None)[0] response[0, 1] = gm_rf[0, 3] / A response[0, 0] = csf_rf[0, 3] / A for i, bvalue in enumerate(bvals[1:]): gtab = GradientTable(big_sphere.vertices * bvalue, btens=btens[i + 1]) wm_response = single_tensor( gtab, wm_rf[i, 3], evals=wm_rf[i, :3], evecs=evecs, snr=None ) response[i + 1, 2:] = np.linalg.lstsq(B, wm_response, rcond=None)[0] response[i + 1, 1] = gm_rf[i, 3] * np.exp(-bvalue * gm_rf[i, 0]) / A response[i + 1, 0] = csf_rf[i, 3] * np.exp(-bvalue * csf_rf[i, 0]) / A S0 = [csf_rf[0, 3], gm_rf[0, 3], wm_rf[0, 3]] else: warnings.warn( """No b0 given. 
Proceeding either way.""", UserWarning, stacklevel=2 ) for i, bvalue in enumerate(bvals): gtab = GradientTable(big_sphere.vertices * bvalue, btens=btens[i]) wm_response = single_tensor( gtab, wm_rf[i, 3], evals=wm_rf[i, :3], evecs=evecs, snr=None ) response[i, 2:] = np.linalg.lstsq(B, wm_response, rcond=None)[0] response[i, 1] = gm_rf[i, 3] * np.exp(-bvalue * gm_rf[i, 0]) / A response[i, 0] = csf_rf[i, 3] * np.exp(-bvalue * csf_rf[i, 0]) / A S0 = [csf_rf[0, 3], gm_rf[0, 3], wm_rf[0, 3]] return MultiShellResponse(response, sh_order_max, bvals, S0=S0) @warning_for_keywords() def mask_for_response_msmt( gtab, data, *, roi_center=None, roi_radii=10, wm_fa_thr=0.7, gm_fa_thr=0.2, csf_fa_thr=0.1, gm_md_thr=0.0007, csf_md_thr=0.002, ): """Computation of masks for multi-shell multi-tissue (msmt) response function using FA and MD. Parameters ---------- gtab : GradientTable Gradient table. data : ndarray diffusion data (4D) roi_center : array-like, (3,) Center of ROI in data. If center is None, it is assumed that it is the center of the volume with shape `data.shape[:3]`. roi_radii : int or array-like, (3,) radii of cuboid ROI wm_fa_thr : float FA threshold for WM. gm_fa_thr : float FA threshold for GM. csf_fa_thr : float FA threshold for CSF. gm_md_thr : float MD threshold for GM. csf_md_thr : float MD threshold for CSF. Returns ------- mask_wm : ndarray Mask of voxels within the ROI and with FA above the FA threshold for WM. mask_gm : ndarray Mask of voxels within the ROI and with FA below the FA threshold for GM and with MD below the MD threshold for GM. mask_csf : ndarray Mask of voxels within the ROI and with FA below the FA threshold for CSF and with MD below the MD threshold for CSF. Notes ----- In msmt-CSD there is an important pre-processing step: the estimation of every tissue's response function. In order to do this, we look for voxels corresponding to WM, GM and CSF. This function aims to accomplish that by returning a mask of voxels within a ROI and who respect some threshold constraints, for each tissue. More precisely, the WM mask must have a FA value above a given threshold. The GM mask and CSF mask must have a FA below given thresholds and a MD below other thresholds. To get the FA and MD, we need to fit a Tensor model to the datasets. """ if len(data.shape) < 4: msg = """Data must be 4D (3D image + directions). To use a 2D image, please reshape it into a (N, N, 1, ndirs) array.""" raise ValueError(msg) if isinstance(roi_radii, numbers.Number): roi_radii = (roi_radii, roi_radii, roi_radii) if roi_center is None: roi_center = np.array(data.shape[:3]) // 2 roi_radii = _roi_in_volume( data.shape, np.asarray(roi_center), np.asarray(roi_radii) ) roi_mask = _mask_from_roi(data.shape[:3], roi_center, roi_radii) list_bvals = unique_bvals_tolerance(gtab.bvals) if not np.all(list_bvals <= 1200): msg_bvals = """Some b-values are higher than 1200. 
The DTI fit might be affected.""" warnings.warn(msg_bvals, UserWarning, stacklevel=2) ten = TensorModel(gtab) tenfit = ten.fit(data, mask=roi_mask) fa = fractional_anisotropy(tenfit.evals) fa[np.isnan(fa)] = 0 md = mean_diffusivity(tenfit.evals) md[np.isnan(md)] = 0 mask_wm = np.zeros(fa.shape, dtype=np.int64) mask_wm[fa > wm_fa_thr] = 1 mask_wm *= roi_mask md_mask_gm = np.zeros(md.shape, dtype=np.int64) md_mask_gm[(md < gm_md_thr)] = 1 fa_mask_gm = np.zeros(fa.shape, dtype=np.int64) fa_mask_gm[(fa < gm_fa_thr) & (fa > 0)] = 1 mask_gm = md_mask_gm * fa_mask_gm mask_gm *= roi_mask md_mask_csf = np.zeros(md.shape, dtype=np.int64) md_mask_csf[(md < csf_md_thr) & (md > 0)] = 1 fa_mask_csf = np.zeros(fa.shape, dtype=np.int64) fa_mask_csf[(fa < csf_fa_thr) & (fa > 0)] = 1 mask_csf = md_mask_csf * fa_mask_csf mask_csf *= roi_mask msg = """No voxel with a {0} than {1} were found. Try a larger roi or a {2} threshold for {3}.""" if np.sum(mask_wm) == 0: msg_fa = msg.format("FA higher", str(wm_fa_thr), "lower FA", "WM") warnings.warn(msg_fa, UserWarning, stacklevel=2) if np.sum(mask_gm) == 0: msg_fa = msg.format("FA lower", str(gm_fa_thr), "higher FA", "GM") msg_md = msg.format("MD lower", str(gm_md_thr), "higher MD", "GM") warnings.warn(msg_fa, UserWarning, stacklevel=2) warnings.warn(msg_md, UserWarning, stacklevel=2) if np.sum(mask_csf) == 0: msg_fa = msg.format("FA lower", str(csf_fa_thr), "higher FA", "CSF") msg_md = msg.format("MD lower", str(csf_md_thr), "higher MD", "CSF") warnings.warn(msg_fa, UserWarning, stacklevel=2) warnings.warn(msg_md, UserWarning, stacklevel=2) return mask_wm, mask_gm, mask_csf @warning_for_keywords() def response_from_mask_msmt(gtab, data, mask_wm, mask_gm, mask_csf, *, tol=20): """Computation of multi-shell multi-tissue (msmt) response functions from given tissues masks. Parameters ---------- gtab : GradientTable Gradient table. data : ndarray diffusion data mask_wm : ndarray mask from where to compute the WM response function. mask_gm : ndarray mask from where to compute the GM response function. mask_csf : ndarray mask from where to compute the CSF response function. tol : int tolerance gap for b-values clustering. (Default = 20) Returns ------- response_wm : ndarray, (len(unique_bvals_tolerance(gtab.bvals))-1, 4) (`evals`, `S0`) for WM for each unique bvalues (except b0). response_gm : ndarray, (len(unique_bvals_tolerance(gtab.bvals))-1, 4) (`evals`, `S0`) for GM for each unique bvalues (except b0). response_csf : ndarray, (len(unique_bvals_tolerance(gtab.bvals))-1, 4) (`evals`, `S0`) for CSF for each unique bvalues (except b0). Notes ----- In msmt-CSD there is an important pre-processing step: the estimation of every tissue's response function. In order to do this, we look for voxels corresponding to WM, GM and CSF. This information can be obtained by using mcsd.mask_for_response_msmt() through masks of selected voxels. The present function uses such masks to compute the msmt response functions. For the responses, we base our approach on the function csdeconv.response_from_mask_ssst(), with the added layers of multishell and multi-tissue (see the ssst function for more information about the computation of the ssst response function). This means that for each tissue we use the previously found masks and loop on them. For each mask, we loop on the b-values (clustered using the tolerance gap) to get many responses and then average them to get one response per tissue. 
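Examples
--------
A minimal sketch of the intended workflow, assuming `data` is a 4D diffusion volume and `gtab` its gradient table (not executed here):

>>> mask_wm, mask_gm, mask_csf = mask_for_response_msmt(gtab, data)  # doctest: +SKIP
>>> wm_rf, gm_rf, csf_rf = response_from_mask_msmt(
...     gtab, data, mask_wm, mask_gm, mask_csf)  # doctest: +SKIP
>>> wm_rf.shape  # (len(unique_bvals_tolerance(gtab.bvals)) - 1, 4)  # doctest: +SKIP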
""" bvals = gtab.bvals bvecs = gtab.bvecs btens = gtab.btens list_bvals = unique_bvals_tolerance(bvals, tol=tol) b0_indices = get_bval_indices(bvals, list_bvals[0], tol=tol) b0_map = np.mean(data[..., b0_indices], axis=-1)[..., np.newaxis] masks = [mask_wm, mask_gm, mask_csf] tissue_responses = [] for mask in masks: responses = [] for bval in list_bvals[1:]: indices = get_bval_indices(bvals, bval, tol=tol) bvecs_sub = np.concatenate([[bvecs[b0_indices[0]]], bvecs[indices]]) bvals_sub = np.concatenate([[0], bvals[indices]]) if btens is not None: btens_b0 = btens[b0_indices[0]].reshape((1, 3, 3)) btens_sub = np.concatenate([btens_b0, btens[indices]]) else: btens_sub = None data_conc = np.concatenate([b0_map, data[..., indices]], axis=3) gtab = gradient_table(bvals_sub, bvecs=bvecs_sub, btens=btens_sub) response, _ = response_from_mask_ssst(gtab, data_conc, mask) responses.append(list(np.concatenate([response[0], [response[1]]]))) tissue_responses.append(list(responses)) wm_response = np.asarray(tissue_responses[0]) gm_response = np.asarray(tissue_responses[1]) csf_response = np.asarray(tissue_responses[2]) return wm_response, gm_response, csf_response @warning_for_keywords() def auto_response_msmt( gtab, data, *, tol=20, roi_center=None, roi_radii=10, wm_fa_thr=0.7, gm_fa_thr=0.3, csf_fa_thr=0.15, gm_md_thr=0.001, csf_md_thr=0.0032, ): """Automatic estimation of multi-shell multi-tissue (msmt) response functions using FA and MD. Parameters ---------- gtab : GradientTable Gradient table. data : ndarray diffusion data tol : int, optional Tolerance gap for b-values clustering. roi_center : array-like, (3,) Center of ROI in data. If center is None, it is assumed that it is the center of the volume with shape `data.shape[:3]`. roi_radii : int or array-like, (3,) radii of cuboid ROI wm_fa_thr : float FA threshold for WM. gm_fa_thr : float FA threshold for GM. csf_fa_thr : float FA threshold for CSF. gm_md_thr : float MD threshold for GM. csf_md_thr : float MD threshold for CSF. Returns ------- response_wm : ndarray, (len(unique_bvals_tolerance(gtab.bvals))-1, 4) (`evals`, `S0`) for WM for each unique bvalues (except b0). response_gm : ndarray, (len(unique_bvals_tolerance(gtab.bvals))-1, 4) (`evals`, `S0`) for GM for each unique bvalues (except b0). response_csf : ndarray, (len(unique_bvals_tolerance(gtab.bvals))-1, 4) (`evals`, `S0`) for CSF for each unique bvalues (except b0). Notes ----- In msmt-CSD there is an important pre-processing step: the estimation of every tissue's response function. In order to do this, we look for voxels corresponding to WM, GM and CSF. We get this information from mcsd.mask_for_response_msmt(), which returns masks of selected voxels (more details are available in the description of the function). With the masks, we compute the response functions by using mcsd.response_from_mask_msmt(), which returns the `response` for each tissue (more details are available in the description of the function). """ list_bvals = unique_bvals_tolerance(gtab.bvals) if not np.all(list_bvals <= 1200): msg_bvals = """Some b-values are higher than 1200. The DTI fit might be affected. 
It is advised to use mask_for_response_msmt with bvalues lower than 1200, followed by response_from_mask_msmt with all bvalues to overcome this.""" warnings.warn(msg_bvals, UserWarning, stacklevel=2) mask_wm, mask_gm, mask_csf = mask_for_response_msmt( gtab, data, roi_center=roi_center, roi_radii=roi_radii, wm_fa_thr=wm_fa_thr, gm_fa_thr=gm_fa_thr, csf_fa_thr=csf_fa_thr, gm_md_thr=gm_md_thr, csf_md_thr=csf_md_thr, ) response_wm, response_gm, response_csf = response_from_mask_msmt( gtab, data, mask_wm, mask_gm, mask_csf, tol=tol ) return response_wm, response_gm, response_csf dipy-1.11.0/dipy/reconst/meson.build000066400000000000000000000017071476546756600173620ustar00rootroot00000000000000cython_sources = [ 'dirspeed', 'eudx_direction_getter', 'quick_squash', 'recspeed', 'vec_val_sum', ] cython_headers = [ 'recspeed.pxd', ] foreach ext: cython_sources py3.extension_module(ext, cython_gen.process(ext + '.pyx'), c_args: cython_c_args, include_directories: [incdir_numpy, inc_local], dependencies: [omp], install: true, subdir: 'dipy/reconst' ) endforeach python_sources = ['__init__.py', 'base.py', 'bingham.py', 'cache.py', 'cross_validation.py', 'csdeconv.py', 'cti.py', 'dki_micro.py', 'dki.py', 'dsi.py', 'dti.py', 'forecast.py', 'fwdti.py', 'gqi.py', 'ivim.py', 'mapmri.py', 'mcsd.py', 'msdki.py', 'multi_voxel.py', 'odf.py', 'qtdmri.py', 'qti.py', 'rumba.py', 'sfm.py', 'shm.py', 'shore.py', 'utils.py', 'weights_method.py' ] py3.install_sources( python_sources + cython_headers, pure: false, subdir: 'dipy/reconst' ) subdir('tests') dipy-1.11.0/dipy/reconst/msdki.py000066400000000000000000000475751476546756600167160ustar00rootroot00000000000000#!/usr/bin/python """Classes and functions for fitting the mean signal diffusion kurtosis model""" import numpy as np import scipy.optimize as opt from dipy.core.gradients import check_multi_b, round_bvals, unique_bvals_magnitude from dipy.core.ndindex import ndindex from dipy.core.onetime import auto_attr from dipy.reconst.base import ReconstModel from dipy.reconst.dti import MIN_POSITIVE_SIGNAL from dipy.testing.decorators import warning_for_keywords @warning_for_keywords() def mean_signal_bvalue(data, gtab, *, bmag=None): """ Computes the average signal across different diffusion directions for each unique b-value Parameters ---------- data : ndarray ([X, Y, Z, ...], g) ndarray containing the data signals in its last dimension. gtab : a GradientTable class instance The gradient table containing diffusion acquisition parameters. bmag : The order of magnitude that the bvalues have to differ to be considered an unique b-value. Default: derive this value from the maximal b-value provided: $bmag=log_{10}(max(bvals)) - 1$. Returns ------- msignal : ndarray ([X, Y, Z, ..., nub]) Mean signal along all gradient directions for each unique b-value Note that the last dimension contains the signal means and nub is the number of unique b-values. 
ng : ndarray(nub) Number of gradient directions used to compute the mean signal for all unique b-values Notes ----- This function assumes that directions are evenly sampled on the sphere or on the hemisphere """ bvals = gtab.bvals.copy() # Compute unique and rounded bvals ub, rb = unique_bvals_magnitude(bvals, bmag=bmag, rbvals=True) # Initialize msignal and ng nub = ub.size ng = np.zeros(nub) msignal = np.zeros(data.shape[:-1] + (nub,)) for bi in range(ub.size): msignal[..., bi] = np.mean(data[..., rb == ub[bi]], axis=-1) ng[bi] = np.sum(rb == ub[bi]) return msignal, ng def msk_from_awf(f): """ Computes mean signal kurtosis from axonal water fraction estimates of the SMT2 model Parameters ---------- f : ndarray ([X, Y, Z, ...]) ndarray containing the axonal volume fraction estimate. Returns ------- msk : ndarray(nub) Mean signal kurtosis (msk) Notes ----- Computes mean signal kurtosis using equations 17 of :footcite:p:`NetoHenriques2019`. References ---------- .. footbibliography:: """ msk_num = 216 * f - 504 * f**2 + 504 * f**3 - 180 * f**4 msk_den = 135 - 360 * f + 420 * f**2 - 240 * f**3 + 60 * f**4 msk = msk_num / msk_den return msk def _msk_from_awf_error(f, msk): """Helper function that calculates the error of a predicted mean signal kurtosis from the axonal water fraction of SMT2 model and a measured mean signal kurtosis Parameters ---------- f : float Axonal volume fraction estimate. msk : float Measured mean signal kurtosis. Returns ------- error : float Error computed by subtracting msk with fun(f), where fun is the function described in equation 17 of :footcite:p:`NetoHenriques2019`. Notes ----- This function corresponds to the differential of equations 17 of :footcite:p:`NetoHenriques2019`. References ---------- .. footbibliography:: """ return msk_from_awf(f) - msk def _diff_msk_from_awf(f, msk): """ Helper function that calculates differential of function msk_from_awf Parameters ---------- f : ndarray ([X, Y, Z, ...]) ndarray containing the axonal volume fraction estimate. Returns ------- dkdf : ndarray(nub) Mean signal kurtosis differential msk : float Measured mean signal kurtosis. Notes ----- This function corresponds to the differential of equations 17 of :footcite:p:`NetoHenriques2019`. This function is applicable to both _msk_from_awf and _msk_from_awf_error. References ---------- .. footbibliography:: """ F = 216 * f - 504 * f**2 + 504 * f**3 - 180 * f**4 # Numerator G = 135 - 360 * f + 420 * f**2 - 240 * f**3 + 60 * f**4 # Denominator dF = 216 - 1008 * f + 1512 * f**2 - 720 * f**3 # Num. differential dG = -360 + 840 * f - 720 * f**2 + 240 * f**3 # Den. differential return (G * dF - F * dG) / (G**2) @warning_for_keywords() def awf_from_msk(msk, *, mask=None): """ Computes the axonal water fraction from the mean signal kurtosis assuming the 2-compartmental spherical mean technique model. See :footcite:p:`Kaden2016b` and :footcite:p:`NetoHenriques2019` for further details about the method. Parameters ---------- msk : ndarray ([X, Y, Z, ...]) Mean signal kurtosis (msk) mask : ndarray, optional A boolean array used to mark the coordinates in the data that should be analyzed that has the same shape of the msdki parameters Returns ------- awf : ndarray ([X, Y, Z, ...]) ndarray containing the axonal volume fraction estimate. Notes ----- Computes the axonal water fraction from the mean signal kurtosis MSK using equation 17 of :footcite:p:`NetoHenriques2019`. References ---------- .. 
footbibliography:: """ awf = np.zeros(msk.shape) # Prepare mask if mask is None: mask = np.ones(msk.shape, dtype=bool) else: if mask.shape != msk.shape: raise ValueError("Mask is not the same shape as data.") mask = np.asarray(mask, dtype=bool) # looping voxels index = ndindex(mask.shape) for v in index: # Skip if out of mask if not mask[v]: continue if msk[v] > 2.4: awf[v] = 1 elif msk[v] < 0: awf[v] = 0 else: if np.isnan(msk[v]): awf[v] = np.nan else: mski = msk[v] fini = mski / 2.4 # Initial guess based on linear assumption awf[v] = opt.fsolve( _msk_from_awf_error, fini, args=(mski,), fprime=_diff_msk_from_awf, col_deriv=True, ).item() return awf @warning_for_keywords() def msdki_prediction(msdki_params, gtab, *, S0=1.0): """ Predict the mean signal given the parameters of the mean signal DKI, an GradientTable object and S0 signal. See :footcite:p:`NetoHenriques2018` for further details about the method. Parameters ---------- msdki_params : ndarray ([X, Y, Z, ...], 2) Array containing the mean signal diffusivity and mean signal kurtosis in its last axis gtab : a GradientTable class instance The gradient table for this prediction S0 : float or ndarray, optional The non diffusion-weighted signal in every voxel, or across all voxels. Notes ----- The predicted signal is given by: $MS(b) = S_0 * exp(-bD + 1/6 b^{2} D^{2} K)$, where $D$ and $K$ are the mean signal diffusivity and mean signal kurtosis. References ---------- .. footbibliography:: """ A = design_matrix(round_bvals(gtab.bvals)) params = msdki_params.copy() params[..., 1] = params[..., 1] * params[..., 0] ** 2 if isinstance(S0, (float, int)): pred_sig = S0 * np.exp(np.dot(params, A[:, :2].T)) elif S0.size == 1: pred_sig = S0 * np.exp(np.dot(params, A[:, :2].T)) else: nv = gtab.bvals.size S0r = np.zeros(S0.shape + gtab.bvals.shape) for vi in range(nv): S0r[..., vi] = S0 pred_sig = S0r * np.exp(np.dot(params, A[:, :2].T)) return pred_sig class MeanDiffusionKurtosisModel(ReconstModel): """Mean signal Diffusion Kurtosis Model""" def __init__(self, gtab, *args, bmag=None, return_S0_hat=False, **kwargs): """Mean Signal Diffusion Kurtosis Model. See :footcite:p:`NetoHenriques2018` for further details about the model. Parameters ---------- gtab : GradientTable class instance Gradient table. bmag : int, optional The order of magnitude that the bvalues have to differ to be considered an unique b-value. Default: derive this value from the maximal b-value provided: $bmag=log_{10}(max(bvals)) - 1$. return_S0_hat : bool, optional If True, also return S0 values for the fit. args, kwargs : arguments and keyword arguments passed to the fit_method. See msdki.wls_fit_msdki for details References ---------- .. footbibliography:: """ ReconstModel.__init__(self, gtab) self.return_S0_hat = return_S0_hat self.ubvals = unique_bvals_magnitude(gtab.bvals, bmag=bmag) self.design_matrix = design_matrix(self.ubvals) self.bmag = bmag self.args = args self.kwargs = kwargs self.min_signal = self.kwargs.pop("min_signal", MIN_POSITIVE_SIGNAL) if self.min_signal is not None and self.min_signal <= 0: e_s = "The `min_signal` key-word argument needs to be strictly" e_s += " positive." 
raise ValueError(e_s) # Check if at least three b-values are given enough_b = check_multi_b(self.gtab, 3, non_zero=False, bmag=bmag) if not enough_b: mes = "MSDKI requires at least 3 b-values (which can include b=0)" raise ValueError(mes) @warning_for_keywords() def fit(self, data, *, mask=None): """Fit method of the MSDKI model class Parameters ---------- data : ndarray ([X, Y, Z, ...], g) ndarray containing the data signals in its last dimension. mask : array A boolean array used to mark the coordinates in the data that should be analyzed that has the shape data.shape[:-1] """ S0_params = None # Compute mean signal for each unique b-value mdata, ng = mean_signal_bvalue(data, self.gtab, bmag=self.bmag) # Remove mdata zeros mdata = np.maximum(mdata, self.min_signal) params = wls_fit_msdki( self.design_matrix, mdata, ng, mask=mask, return_S0_hat=self.return_S0_hat, *self.args, **self.kwargs, ) if self.return_S0_hat: params, S0_params = params return MeanDiffusionKurtosisFit(self, params, model_S0=S0_params) @warning_for_keywords() def predict(self, msdki_params, *, S0=1.0): """ Predict a signal for this MeanDiffusionKurtosisModel class instance given parameters. See :footcite:p:`NetoHenriques2018` for further details about the method. Parameters ---------- msdki_params : ndarray The parameters of the mean signal diffusion kurtosis model S0 : float or ndarray, optional The non diffusion-weighted signal in every voxel, or across all voxels. Returns ------- S : (..., N) ndarray Simulated mean signal based on the mean signal diffusion kurtosis model Notes ----- The predicted signal is given by: $MS(b) = S_0 * exp(-bD + 1/6 b^{2} D^{2} K)$, where $D$ and $K$ are the mean signal diffusivity and mean signal kurtosis. References ---------- .. footbibliography:: """ return msdki_prediction(msdki_params, self.gtab, S0=S0) class MeanDiffusionKurtosisFit: @warning_for_keywords() def __init__(self, model, model_params, *, model_S0=None): """Initialize a MeanDiffusionKurtosisFit class instance.""" self.model = model self.model_params = model_params self.model_S0 = model_S0 def __getitem__(self, index): model_params = self.model_params model_S0 = self.model_S0 N = model_params.ndim if type(index) is not tuple: index = (index,) elif len(index) >= model_params.ndim: raise IndexError("IndexError: invalid index") index = index + (slice(None),) * (N - len(index)) if model_S0 is not None: model_S0 = model_S0[index[:-1]] return MeanDiffusionKurtosisFit( self.model, model_params[index], model_S0=model_S0 ) @property def S0_hat(self): return self.model_S0 @auto_attr def msd(self): r""" Mean signal diffusivity (MSD) calculated from the mean signal Diffusion Kurtosis Model. See :footcite:p:`NetoHenriques2018` for further details about the method. Returns ------- msd : ndarray Calculated signal mean diffusivity. References ---------- .. footbibliography:: """ return self.model_params[..., 0] @auto_attr def msk(self): r""" Mean signal kurtosis (MSK) calculated from the mean signal Diffusion Kurtosis Model. See :footcite:p:`NetoHenriques2018` for further details about the method. Returns ------- msk : ndarray Calculated signal mean kurtosis. References ---------- .. footbibliography:: """ return self.model_params[..., 1] @auto_attr def smt2f(self): r""" Computes the axonal water fraction from the mean signal kurtosis assuming the 2-compartmental spherical mean technique model. See :footcite:p:`Kaden2016b` and :footcite:p:`NetoHenriques2019` for further details about the method. 
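Internally, this numerically inverts (via `awf_from_msk`) the polynomial relation implemented above in `msk_from_awf`, i.e. equation 17 of :footcite:p:`NetoHenriques2019`:

.. math::

    MSK(f) = \frac{216f - 504f^{2} + 504f^{3} - 180f^{4}}{135 - 360f + 420f^{2} - 240f^{3} + 60f^{4}}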
Returns ------- ndarray Axonal volume fraction calculated from MSK. Notes ----- Computes the axonal water fraction from the mean signal kurtosis MSK using equation 17 of :footcite:p:`NetoHenriques2019`. References ---------- .. footbibliography:: """ return awf_from_msk(self.msk) @auto_attr def smt2di(self): r""" Computes the intrinsic diffusivity from the mean signal diffusional kurtosis parameters assuming the 2-compartmental spherical mean technique model. See :footcite:p:`Kaden2016b` and :footcite:p:`NetoHenriques2019` for further details about the method. Returns ------- smt2di : ndarray Intrinsic diffusivity computed by converting MSDKI to SMT2. Notes ----- Computes the intrinsic diffusivity using equation 16 of :footcite:p:`NetoHenriques2019`. References ---------- .. footbibliography:: """ return 3 * self.msd / (1 + 2 * (1 - self.smt2f) ** 2) @auto_attr def smt2uFA(self): r""" Computes the microscopic fractional anisotropy from the mean signal diffusional kurtosis parameters assuming the 2-compartmental spherical mean technique model. See :footcite:p:`Kaden2016b` and :footcite:p:`NetoHenriques2019` for further details about the method. Returns ------- smt2uFA : ndarray Microscopic fractional anisotropy computed by converting MSDKI to SMT2. Notes ----- Computes the intrinsic diffusivity using equation 10 of :footcite:p:`NetoHenriques2019`. References ---------- .. footbibliography:: """ fe = 1 - self.smt2f num = 3 * (1 - 2 * fe**2 + fe**3) den = 3 + 2 * fe**3 + 4 * fe**4 return np.sqrt(num / den) @warning_for_keywords() def predict(self, gtab, *, S0=1.0): r""" Given a mean signal diffusion kurtosis model fit, predict the signal on the vertices of a sphere See :footcite:p:`NetoHenriques2018` for further details about the method. Parameters ---------- gtab : a GradientTable class instance This encodes the directions for which a prediction is made S0 : float array The mean non-diffusion weighted signal in each voxel. Default: The fitted S0 value in all voxels if it was fitted. Otherwise 1 in all voxels. Returns ------- S : (..., N) ndarray Simulated mean signal based on the mean signal kurtosis model Notes ----- The predicted signal is given by: $MS(b) = S_0 * exp(-bD + 1/6 b^{2} D^{2} K)$, where $D$ and $k$ are the mean signal diffusivity and mean signal kurtosis. References ---------- .. footbibliography:: """ return msdki_prediction(self.model_params, gtab, S0=S0) @warning_for_keywords() def wls_fit_msdki( design_matrix, msignal, ng, *, mask=None, min_signal=MIN_POSITIVE_SIGNAL, return_S0_hat=False, ): r""" Fits the mean signal diffusion kurtosis imaging based on a weighted least square solution. See :footcite:p:`NetoHenriques2018` for further details about the method. Parameters ---------- design_matrix : array (nub, 3) Design matrix holding the covariants used to solve for the regression coefficients of the mean signal diffusion kurtosis model. Note that nub is the number of unique b-values msignal : ndarray ([X, Y, Z, ..., nub]) Mean signal along all gradient directions for each unique b-value Note that the last dimension should contain the signal means and nub is the number of unique b-values. ng : ndarray(nub) Number of gradient directions used to compute the mean signal for all unique b-values mask : array A boolean array used to mark the coordinates in the data that should be analyzed that has the shape data.shape[:-1] min_signal : float, optional Voxel with mean signal intensities lower than the min positive signal are not processed. 
Default: 0.0001 return_S0_hat : bool If True, also return S0 values for the fit. Returns ------- params : array (..., 2) Containing the mean signal diffusivity and mean signal kurtosis References ---------- .. footbibliography:: """ params = np.zeros(msignal.shape[:-1] + (3,)) # Prepare mask if mask is None: mask = np.ones(msignal.shape[:-1], dtype=bool) else: if mask.shape != msignal.shape[:-1]: raise ValueError("Mask is not the same shape as data.") mask = np.asarray(mask, dtype=bool) index = ndindex(mask.shape) for v in index: # Skip if out of mask if not mask[v]: continue # Skip if no signal is present if np.mean(msignal[v]) <= min_signal: continue # Define weights as diag(ng * yn**2) W = np.diag(ng * msignal[v] ** 2) # WLS fitting BTW = np.dot(design_matrix.T, W) inv_BT_W_B = np.linalg.pinv(np.dot(BTW, design_matrix)) p = np.linalg.multi_dot([inv_BT_W_B, BTW, np.log(msignal[v])]) # Process parameters p[1] = p[1] / (p[0] ** 2) p[2] = np.exp(p[2]) params[v] = p if return_S0_hat: return params[..., :2], params[..., 2] else: return params[..., :2] def design_matrix(ubvals): """Constructs design matrix for the mean signal diffusion kurtosis model Parameters ---------- ubvals : array Containing the unique b-values of the data. Returns ------- design_matrix : array (nb, 3) Design matrix or B matrix for the mean signal diffusion kurtosis model assuming that parameters are in the following order: design_matrix[j, :] = (msd, msk, S0) """ nb = ubvals.shape B = np.zeros(nb + (3,)) B[:, 0] = -ubvals B[:, 1] = 1.0 / 6.0 * ubvals**2 B[:, 2] = np.ones(nb) return B dipy-1.11.0/dipy/reconst/multi_voxel.py000066400000000000000000000207241476546756600201410ustar00rootroot00000000000000"""Tools to easily make multi voxel models""" from functools import partial import multiprocessing import numpy as np from tqdm import tqdm from dipy.core.ndindex import ndindex from dipy.reconst.base import ReconstFit from dipy.reconst.quick_squash import quick_squash as _squash from dipy.utils.parallel import paramap def _parallel_fit_worker(vox_data, single_voxel_fit, **kwargs): """ Works on a chunk of voxel data to create a list of single voxel fits. Parameters ---------- vox_data : ndarray, shape (n_voxels, ...) The data to fit. single_voxel_fit : callable The fit function to use on each voxel. """ vox_weights = kwargs.pop("weights", None) if type(vox_weights) is np.ndarray: return [ single_voxel_fit(data, **(dict({"weights": weights}, **kwargs))) for data, weights in zip(vox_data, vox_weights) ] else: return [single_voxel_fit(data, **kwargs) for data in vox_data] def multi_voxel_fit(single_voxel_fit): """Method decorator to turn a single voxel model fit definition into a multi voxel model fit definition """ def new_fit(self, data, *, mask=None, **kwargs): """Fit method for every voxel in data""" # If only one voxel just return a standard fit, passing through # the functions key-word arguments (no mask needed). 
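# (Illustrative sketch of the decorator pattern this wrapper implements --
# hypothetical model, not part of this module:
#
#     class MyModel(ReconstModel):
#         @multi_voxel_fit
#         def fit(self, single_voxel_data):
#             return MyFit(self, single_voxel_data.mean())
#
# the decorated ``fit`` then accepts a full (X, Y, Z, g) volume plus an
# optional ``mask`` and returns a MultiVoxelFit holding one single-voxel
# fit per masked voxel.)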
if data.ndim == 1: svf = single_voxel_fit(self, data, **kwargs) # If fit method does not return extra, cannot return extra if isinstance(svf, tuple): svf, extra = svf return svf, extra else: return svf # Make a mask if mask is None if mask is None: mask = np.ones(data.shape[:-1], bool) # Check the shape of the mask if mask is not None elif mask.shape != data.shape[:-1]: raise ValueError("mask and data shape do not match") # Get weights from kwargs if provided weights = kwargs["weights"] if "weights" in kwargs else None weights_is_array = True if type(weights) is np.ndarray else False # Fit data where mask is True fit_array = np.empty(data.shape[:-1], dtype=object) return_extra = False # Default to serial execution: engine = kwargs.get("engine", "serial") if engine == "serial": extra_list = [] bar = tqdm( total=np.sum(mask), position=0, disable=kwargs.get("verbose", True) ) bar.set_description("Fitting reconstruction model using serial execution") for ijk in ndindex(data.shape[:-1]): if mask[ijk]: if weights_is_array: kwargs["weights"] = weights[ijk] svf = single_voxel_fit(self, data[ijk], **kwargs) # Not all fit methods return extra, handle this here if isinstance(svf, tuple): fit_array[ijk], extra = svf return_extra = True else: fit_array[ijk], extra = svf, None extra_list.append(extra) bar.update() bar.close() else: data_to_fit = data[np.where(mask)] if weights_is_array: weights_to_fit = weights[np.where(mask)] single_voxel_with_self = partial(single_voxel_fit, self) n_jobs = kwargs.get("n_jobs", multiprocessing.cpu_count() - 1) vox_per_chunk = kwargs.get( "vox_per_chunk", np.max([data_to_fit.shape[0] // n_jobs, 1]) ) chunks = [ data_to_fit[ii : ii + vox_per_chunk] for ii in range(0, data_to_fit.shape[0], vox_per_chunk) ] # func_kwargs : dict or sequence, optional # Keyword arguments to `func` or sequence of keyword arguments # to `func`: one item for each item in the input list. 
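# Illustrative note: with, e.g., 10 masked voxels and vox_per_chunk=4, the
# chunks above cover voxel ranges [0:4], [4:8] and [8:10]; the loop below
# builds one kwargs dict per chunk so that per-voxel `weights` arrays stay
# aligned with their chunk.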
kwargs_chunks = [] for ii in range(0, data_to_fit.shape[0], vox_per_chunk): kw = kwargs.copy() if weights_is_array: kw["weights"] = weights_to_fit[ii : ii + vox_per_chunk] kwargs_chunks.append(kw) parallel_kwargs = {} for kk in ["n_jobs", "vox_per_chunk", "engine", "verbose"]: if kk in kwargs: parallel_kwargs[kk] = kwargs[kk] mvf = paramap( _parallel_fit_worker, chunks, func_args=[single_voxel_with_self], func_kwargs=kwargs_chunks, **parallel_kwargs, ) if isinstance(mvf[0][0], tuple): tmp_fit_array = np.concatenate( [[svf[0] for svf in mvf_ch] for mvf_ch in mvf] ) tmp_extra = np.concatenate( [[svf[1] for svf in mvf_ch] for mvf_ch in mvf] ).tolist() fit_array[np.where(mask)], extra_list = tmp_fit_array, tmp_extra return_extra = True else: tmp_fit_array = np.concatenate(mvf) fit_array[np.where(mask)], extra_list = tmp_fit_array, None # Redefine extra to be a single dictionary if return_extra: if extra_list[0] is not None: extra_mask = { key: np.vstack([e[key] for e in extra_list]) for key in extra_list[0] } extra = {} for key in extra_mask: extra[key] = np.zeros(data.shape) extra[key][mask == 1] = extra_mask[key] else: extra = None # If fit method does not return extra, assume we cannot return extra if return_extra: return MultiVoxelFit(self, fit_array, mask), extra else: return MultiVoxelFit(self, fit_array, mask) return new_fit class MultiVoxelFit(ReconstFit): """Holds an array of fits and allows access to their attributes and methods""" def __init__(self, model, fit_array, mask): self.model = model self.fit_array = fit_array self.mask = mask @property def shape(self): return self.fit_array.shape def __getattr__(self, attr): result = CallableArray(self.fit_array.shape, dtype=object) for ijk in ndindex(result.shape): if self.mask[ijk]: result[ijk] = getattr(self.fit_array[ijk], attr) return _squash(result, self.mask) def __getitem__(self, index): item = self.fit_array[index] if isinstance(item, np.ndarray): return MultiVoxelFit(self.model, item, self.mask[index]) else: return item def predict(self, *args, **kwargs): """ Predict for the multi-voxel object using each single-object's prediction API, with S0 provided from an array. 
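A sketch of typical use, assuming `fit` is a MultiVoxelFit whose underlying single-voxel fits implement `predict` and that each prediction returns one value per gradient direction (not executed here):

>>> pred = fit.predict(gtab, S0=np.ones(fit.shape))  # doctest: +SKIP
>>> pred.shape == fit.shape + (len(gtab.bvals),)  # doctest: +SKIP
True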
""" S0 = kwargs.get("S0", np.ones(self.fit_array.shape)) idx = ndindex(self.fit_array.shape) ijk = next(idx) def gimme_S0(S0, ijk): if isinstance(S0, np.ndarray): return S0[ijk] else: return S0 kwargs["S0"] = gimme_S0(S0, ijk) # If we have a mask, we might have some Nones up front, skip those: while self.fit_array[ijk] is None: ijk = next(idx) if not hasattr(self.fit_array[ijk], "predict"): msg = "This model does not have prediction implemented yet" raise NotImplementedError(msg) first_pred = self.fit_array[ijk].predict(*args, **kwargs) result = np.zeros(self.fit_array.shape + (first_pred.shape[-1],)) result[ijk] = first_pred for ijk in idx: kwargs["S0"] = gimme_S0(S0, ijk) # If it's masked, we predict a 0: if self.fit_array[ijk] is None: result[ijk] *= 0 else: result[ijk] = self.fit_array[ijk].predict(*args, **kwargs) return result class CallableArray(np.ndarray): """An array which can be called like a function""" def __call__(self, *args, **kwargs): result = np.empty(self.shape, dtype=object) for ijk in ndindex(self.shape): item = self[ijk] if item is not None: result[ijk] = item(*args, **kwargs) return _squash(result) dipy-1.11.0/dipy/reconst/odf.py000066400000000000000000000055561476546756600163500ustar00rootroot00000000000000import numpy as np from dipy.reconst.base import ReconstFit, ReconstModel from dipy.testing.decorators import warning_for_keywords # Classes OdfModel and OdfFit are using API ReconstModel and ReconstFit from # .base class OdfModel(ReconstModel): """An abstract class to be sub-classed by specific odf models All odf models should provide a fit method which may take data as it's first and only argument. """ def __init__(self, gtab): ReconstModel.__init__(self, gtab) def fit(self, data): """To be implemented by specific odf models""" raise NotImplementedError("To be implemented in sub classes") class OdfFit(ReconstFit): def odf(self, sphere): """To be implemented but specific odf models""" raise NotImplementedError("To be implemented in sub classes") def gfa(samples): r"""The general fractional anisotropy of a function evaluated on the unit sphere Parameters ---------- samples : ndarray Values of data on the unit sphere. Returns ------- gfa : ndarray GFA evaluated in each entry of the array, along the last dimension. An `np.nan` is returned for coordinates that contain all-zeros in `samples`. Notes ----- The GFA is defined as :footcite:p:`CohenAdad2011`: .. math:: \sqrt{\frac{n \sum_i{(\Psi_i - <\Psi>)^2}}{(n-1) \sum{\Psi_i ^ 2}}} Where $\Psi$ is an orientation distribution function sampled discretely on the unit sphere and angle brackets denote average over the samples on the sphere. References ---------- .. footbibliography:: """ diff = samples - samples.mean(-1)[..., None] n = samples.shape[-1] numer = np.array([n * (diff**2).sum(-1)]) denom = np.array([(n - 1) * (samples**2).sum(-1)]) result = np.ones_like(denom) * np.nan idx = np.where(denom > 0) result[idx] = np.sqrt(numer[idx] / denom[idx]) return result.squeeze() @warning_for_keywords() def minmax_normalize(samples, *, out=None): """Min-max normalization of a function evaluated on the unit sphere Normalizes samples to ``(samples - min(samples)) / (max(samples) - min(samples))`` for each unit sphere. Parameters ---------- samples : ndarray (..., N) N samples on a unit sphere for each point, stored along the last axis of the array. out : ndrray (..., N), optional An array to store the normalized samples. Returns ------- out : ndarray, (..., N) Normalized samples. 
""" if out is None: dtype = np.common_type(np.empty(0, "float32"), samples) out = np.array(samples, dtype=dtype, copy=True) else: out[:] = samples sample_mins = np.min(samples, -1)[..., None] sample_maxes = np.max(samples, -1)[..., None] out -= sample_mins out /= sample_maxes - sample_mins return out dipy-1.11.0/dipy/reconst/qtdmri.py000066400000000000000000002342521476546756600170750ustar00rootroot00000000000000from warnings import warn import numpy as np from scipy import special from scipy.special import gamma, genlaguerre from dipy.core.geometry import cart2sphere from dipy.core.gradients import gradient_table_from_gradient_strength_bvecs from dipy.reconst import mapmri from dipy.reconst.cache import Cache from dipy.reconst.multi_voxel import multi_voxel_fit try: # preferred scipy >= 0.14, required scipy >= 1.0 from scipy.special import factorial except ImportError: from scipy.misc import factorial import random from scipy.optimize import fmin_l_bfgs_b import dipy.reconst.dti as dti from dipy.reconst.shm import real_sh_descoteaux_from_index from dipy.testing.decorators import warning_for_keywords from dipy.utils.optpkg import optional_package cvxpy, have_cvxpy, _ = optional_package("cvxpy", min_version="1.4.1") plt, have_plt, _ = optional_package("matplotlib.pyplot") class QtdmriModel(Cache): r"""The q$\tau$-dMRI model to analytically and continuously represent the q$\tau$ diffusion signal attenuation over diffusion sensitization q and diffusion time $\tau$. The model :footcite:p:`Fick2018` can be seen as an extension of the MAP-MRI basis :footcite:p:`Ozarslan2013` towards different diffusion times. The main idea is to model the diffusion signal over time and space as a linear combination of continuous functions, .. math:: :nowrap: \hat{E}(\textbf{q},\tau;\textbf{c}) = \sum_i^{N_{\textbf{q}}}\sum_k^{N_\tau} \textbf{c}_{ik} \,\Phi_i(\textbf{q})\,T_k(\tau) where $\Phi$ and $T$ are the spatial and temporal basis functions, $N_{\textbf{q}}$ and $N_\tau$ are the maximum spatial and temporal order, and $i,k$ are basis order iterators. The estimation of the coefficients $c_i$ can be regularized using either analytic Laplacian regularization, sparsity regularization using the l1-norm, or both to do a type of elastic net regularization. From the coefficients, there exists an analytical formula to estimate the ODF, RTOP, RTAP, RTPP, QIV and MSD, for any diffusion time. Parameters ---------- gtab : GradientTable, gradient directions and bvalues container class. The bvalues should be in the normal s/mm^2. big_delta and small_delta need to be given in seconds. radial_order : unsigned int, an even integer representing the spatial/radial order of the basis. time_order : unsigned int, an integer larger or equal than zero representing the time order of the basis. laplacian_regularization : bool, Regularize using the Laplacian of the qt-dMRI basis. laplacian_weighting: string or scalar, The string 'GCV' makes it use generalized cross-validation to find the regularization weight :footcite:p:`Craven1979`. A scalar sets the regularization weight to that value. l1_regularization : bool, Regularize by imposing sparsity in the coefficients using the l1-norm. l1_weighting : 'CV' or scalar, The string 'CV' makes it use five-fold cross-validation to find the regularization weight. A scalar sets the regularization weight to that value. cartesian : bool Whether to use the Cartesian or spherical implementation of the qt-dMRI basis, which we first explored in :footcite:p:`Fick2015`. 
anisotropic_scaling : bool Whether to use anisotropic scaling or isotropic scaling. This option can be used to test if the Cartesian implementation is equivalent with the spherical one when using the same scaling. normalization : bool Whether to normalize the basis functions such that their inner product is equal to one. Normalization is only necessary when imposing sparsity in the spherical basis if cartesian=False. constrain_q0 : bool whether to constrain the q0 point to unity along the tau-space. This is necessary to ensure that $E(0,\tau)=1$. bval_threshold : float the threshold b-value to be used, such that only data points below that threshold are used when estimating the scale factors. eigenvalue_threshold : float, Sets the minimum of the tensor eigenvalues in order to avoid stability problem. cvxpy_solver : str, optional cvxpy solver name. Optionally optimize the positivity constraint with a particular cvxpy solver. See https://www.cvxpy.org/ for details. Default: None (cvxpy chooses its own solver) References ---------- .. footbibliography:: """ @warning_for_keywords() def __init__( self, gtab, *, radial_order=6, time_order=2, laplacian_regularization=False, laplacian_weighting=0.2, l1_regularization=False, l1_weighting=0.1, cartesian=True, anisotropic_scaling=True, normalization=False, constrain_q0=True, bval_threshold=1e10, eigenvalue_threshold=1e-04, cvxpy_solver="CLARABEL", ): if radial_order % 2 or radial_order < 0: msg = "radial_order must be zero or an even positive integer." msg += f" radial_order {radial_order} was given." raise ValueError(msg) if time_order < 0: msg = "time_order must be larger or equal than zero integer." msg += f" time_order {time_order} was given." raise ValueError(msg) if not isinstance(laplacian_regularization, bool): msg = "laplacian_regularization must be True or False." msg += f" Input value was {laplacian_regularization}." raise ValueError(msg) if laplacian_regularization: msg = "laplacian_regularization weighting must be 'GCV' " msg += "or a float larger or equal than zero." msg += f" Input value was {laplacian_weighting}." if isinstance(laplacian_weighting, str): if laplacian_weighting != "GCV": raise ValueError(msg) elif isinstance(laplacian_weighting, float): if laplacian_weighting < 0: raise ValueError(msg) else: raise ValueError(msg) if not isinstance(l1_regularization, bool): msg = "l1_regularization must be True or False." msg += f" Input value was {l1_regularization}." raise ValueError(msg) if l1_regularization: msg = "l1_weighting weighting must be 'CV' " msg += "or a float larger or equal than zero." msg += f" Input value was {l1_weighting}." if isinstance(l1_weighting, str): if l1_weighting != "CV": raise ValueError(msg) elif isinstance(l1_weighting, float): if l1_weighting < 0: raise ValueError(msg) else: raise ValueError(msg) if not isinstance(cartesian, bool): msg = "cartesian must be True or False." msg += f" Input value was {cartesian}." raise ValueError(msg) if not isinstance(anisotropic_scaling, bool): msg = "anisotropic_scaling must be True or False." msg += f" Input value was {anisotropic_scaling}." raise ValueError(msg) if not isinstance(constrain_q0, bool): msg = "constrain_q0 must be True or False." msg += f" Input value was {constrain_q0}." raise ValueError(msg) if not isinstance(bval_threshold, float) or bval_threshold < 0: msg = "bval_threshold must be a positive float." msg += f" Input value was {bval_threshold}." 
raise ValueError(msg) if not isinstance(eigenvalue_threshold, float) or eigenvalue_threshold < 0: msg = "eigenvalue_threshold must be a positive float." msg += f" Input value was {eigenvalue_threshold}." raise ValueError(msg) if laplacian_regularization or l1_regularization: if not have_cvxpy: msg = "cvxpy must be installed for Laplacian or l1 " msg += "regularization." raise ImportError(msg) if cvxpy_solver is not None: if cvxpy_solver not in cvxpy.installed_solvers(): msg = f"Input `cvxpy_solver` was set to {cvxpy_solver}." msg += f" One of {', '.join(cvxpy.installed_solvers())}" msg += " was expected." raise ValueError(msg) if l1_regularization and not cartesian and not normalization: msg = "The non-Cartesian implementation must be normalized for the" msg += " l1-norm sparsity regularization to work. Set " msg += "normalization=True to proceed." raise ValueError(msg) self.gtab = gtab self.radial_order = radial_order self.time_order = time_order self.laplacian_regularization = laplacian_regularization self.laplacian_weighting = laplacian_weighting self.l1_regularization = l1_regularization self.l1_weighting = l1_weighting self.cartesian = cartesian self.anisotropic_scaling = anisotropic_scaling self.normalization = normalization self.constrain_q0 = constrain_q0 self.bval_threshold = bval_threshold self.eigenvalue_threshold = eigenvalue_threshold self.cvxpy_solver = cvxpy_solver if self.cartesian: self.ind_mat = qtdmri_index_matrix(radial_order, time_order) else: self.ind_mat = qtdmri_isotropic_index_matrix(radial_order, time_order) # precompute parts of laplacian regularization matrices self.part4_reg_mat_tau = part4_reg_matrix_tau(self.ind_mat, 1.0) self.part23_reg_mat_tau = part23_reg_matrix_tau(self.ind_mat, 1.0) self.part1_reg_mat_tau = part1_reg_matrix_tau(self.ind_mat, 1.0) if self.cartesian: self.S_mat, self.T_mat, self.U_mat = mapmri.mapmri_STU_reg_matrices( radial_order ) else: self.part1_uq_iso_precomp = ( mapmri.mapmri_isotropic_laplacian_reg_matrix_from_index_matrix( self.ind_mat[:, :3], 1.0 ) ) self.tenmodel = dti.TensorModel(gtab) @multi_voxel_fit def fit(self, data, **kwargs): bval_mask = self.gtab.bvals < self.bval_threshold data_norm = data / data[self.gtab.b0s_mask].mean() tau = self.gtab.tau bvecs = self.gtab.bvecs qvals = self.gtab.qvals b0s_mask = self.gtab.b0s_mask if self.cartesian: if self.anisotropic_scaling: us, ut, R = qtdmri_anisotropic_scaling( data_norm[bval_mask], qvals[bval_mask], bvecs[bval_mask], tau[bval_mask], ) tau_scaling = ut / us.mean() tau_scaled = tau * tau_scaling ut /= tau_scaling us = np.clip(us, self.eigenvalue_threshold, np.inf) q = np.dot(bvecs, R) * qvals[:, None] M = _qtdmri_signal_matrix( self.radial_order, self.time_order, us, ut, q, tau_scaled, normalization=self.normalization, ) else: us, ut = qtdmri_isotropic_scaling(data_norm, qvals, tau) tau_scaling = ut / us tau_scaled = tau * tau_scaling ut /= tau_scaling R = np.eye(3) us = np.tile(us, 3) q = bvecs * qvals[:, None] M = _qtdmri_signal_matrix( self.radial_order, self.time_order, us, ut, q, tau_scaled, normalization=self.normalization, ) else: us, ut = qtdmri_isotropic_scaling(data_norm, qvals, tau) tau_scaling = ut / us tau_scaled = tau * tau_scaling ut /= tau_scaling R = np.eye(3) us = np.tile(us, 3) q = bvecs * qvals[:, None] M = _qtdmri_isotropic_signal_matrix( self.radial_order, self.time_order, us[0], ut, q, tau_scaled, normalization=self.normalization, ) b0_indices = np.arange(self.gtab.tau.shape[0])[self.gtab.b0s_mask] tau0_ordered = self.gtab.tau[b0_indices] unique_taus = 
np.unique(self.gtab.tau) first_tau_pos = [] for unique_tau in unique_taus: first_tau_pos.append(np.where(tau0_ordered == unique_tau)[0][0]) M0 = M[b0_indices[first_tau_pos]] lopt = 0.0 alpha = 0.0 if self.laplacian_regularization and not self.l1_regularization: if self.cartesian: laplacian_matrix = qtdmri_laplacian_reg_matrix( self.ind_mat, us, ut, S_mat=self.S_mat, T_mat=self.T_mat, U_mat=self.U_mat, part1_ut_precomp=self.part1_reg_mat_tau, part23_ut_precomp=self.part23_reg_mat_tau, part4_ut_precomp=self.part4_reg_mat_tau, normalization=self.normalization, ) else: laplacian_matrix = qtdmri_isotropic_laplacian_reg_matrix( self.ind_mat, us, ut, part1_uq_iso_precomp=self.part1_uq_iso_precomp, part1_ut_precomp=self.part1_reg_mat_tau, part23_ut_precomp=self.part23_reg_mat_tau, part4_ut_precomp=self.part4_reg_mat_tau, normalization=self.normalization, ) if self.laplacian_weighting == "GCV": try: lopt = generalized_crossvalidation(data_norm, M, laplacian_matrix) except BaseException: msg = "Laplacian GCV failed. lopt defaulted to 2e-4." warn(msg, stacklevel=2) lopt = 2e-4 elif np.isscalar(self.laplacian_weighting): lopt = self.laplacian_weighting c = cvxpy.Variable(M.shape[1]) design_matrix = cvxpy.Constant(M) @ c objective = cvxpy.Minimize( cvxpy.sum_squares(design_matrix - data_norm) + lopt * cvxpy.quad_form(c, laplacian_matrix) ) if self.constrain_q0: # just constraint first and last, otherwise the solver fails constraints = [M0[0] @ c == 1, M0[-1] @ c == 1] else: constraints = [] prob = cvxpy.Problem(objective, constraints) try: prob.solve(solver=self.cvxpy_solver, verbose=False) cvxpy_solution_optimal = prob.status == "optimal" qtdmri_coef = np.asarray(c.value).squeeze() except BaseException: qtdmri_coef = np.zeros(M.shape[1]) cvxpy_solution_optimal = False elif self.l1_regularization and not self.laplacian_regularization: if self.l1_weighting == "CV": alpha = l1_crossvalidation(b0s_mask, data_norm, M) elif np.isscalar(self.l1_weighting): alpha = self.l1_weighting c = cvxpy.Variable(M.shape[1]) design_matrix = cvxpy.Constant(M) @ c objective = cvxpy.Minimize( cvxpy.sum_squares(design_matrix - data_norm) + alpha * cvxpy.norm1(c) ) if self.constrain_q0: # just constraint first and last, otherwise the solver fails constraints = [M0[0] @ c == 1, M0[-1] @ c == 1] else: constraints = [] prob = cvxpy.Problem(objective, constraints) try: prob.solve(solver=self.cvxpy_solver, verbose=False) cvxpy_solution_optimal = prob.status == "optimal" qtdmri_coef = np.asarray(c.value).squeeze() except BaseException: qtdmri_coef = np.zeros(M.shape[1]) cvxpy_solution_optimal = False elif self.l1_regularization and self.laplacian_regularization: if self.cartesian: laplacian_matrix = qtdmri_laplacian_reg_matrix( self.ind_mat, us, ut, S_mat=self.S_mat, T_mat=self.T_mat, U_mat=self.U_mat, part1_ut_precomp=self.part1_reg_mat_tau, part23_ut_precomp=self.part23_reg_mat_tau, part4_ut_precomp=self.part4_reg_mat_tau, normalization=self.normalization, ) else: laplacian_matrix = qtdmri_isotropic_laplacian_reg_matrix( self.ind_mat, us, ut, part1_uq_iso_precomp=self.part1_uq_iso_precomp, part1_ut_precomp=self.part1_reg_mat_tau, part23_ut_precomp=self.part23_reg_mat_tau, part4_ut_precomp=self.part4_reg_mat_tau, normalization=self.normalization, ) if self.laplacian_weighting == "GCV": lopt = generalized_crossvalidation(data_norm, M, laplacian_matrix) elif np.isscalar(self.laplacian_weighting): lopt = self.laplacian_weighting if self.l1_weighting == "CV": alpha = elastic_crossvalidation( b0s_mask, data_norm, M, laplacian_matrix, 
lopt ) elif np.isscalar(self.l1_weighting): alpha = self.l1_weighting c = cvxpy.Variable(M.shape[1]) design_matrix = cvxpy.Constant(M) @ c objective = cvxpy.Minimize( cvxpy.sum_squares(design_matrix - data_norm) + alpha * cvxpy.norm1(c) + lopt * cvxpy.quad_form(c, laplacian_matrix) ) if self.constrain_q0: # just constrain the first and last, otherwise the solver fails constraints = [M0[0] @ c == 1, M0[-1] @ c == 1] else: constraints = [] prob = cvxpy.Problem(objective, constraints) try: prob.solve(solver=self.cvxpy_solver, verbose=False) cvxpy_solution_optimal = prob.status == "optimal" qtdmri_coef = np.asarray(c.value).squeeze() except BaseException: qtdmri_coef = np.zeros(M.shape[1]) cvxpy_solution_optimal = False elif not self.l1_regularization and not self.laplacian_regularization: # just use least squares with the observation matrix pseudoInv = np.linalg.pinv(M) qtdmri_coef = np.dot(pseudoInv, data_norm) # if cvxpy is used to constrain q0 without regularization the # solver often fails, so only the first tau-position is manually # normalized. qtdmri_coef /= np.dot(M0[0], qtdmri_coef) cvxpy_solution_optimal = None if cvxpy_solution_optimal is False: msg = "cvxpy optimization resulted in non-optimal solution. Check " msg += "cvxpy_solution_optimal attribute in fitted object to see " msg += "which voxels are affected." warn(msg, stacklevel=2) return QtdmriFit( self, qtdmri_coef, us, ut, tau_scaling, R, lopt, alpha, cvxpy_solution_optimal, ) class QtdmriFit: def __init__( self, model, qtdmri_coef, us, ut, tau_scaling, R, lopt, alpha, cvxpy_solution_optimal, ): """Calculates diffusion properties for a single voxel. Parameters ---------- model : object, AnalyticalModel qtdmri_coef : 1d ndarray, qtdmri coefficients us : array, 3 x 1 spatial scaling factors ut : float temporal scaling factor tau_scaling : float, the temporal scaling that is used to scale tau to the size of us R : 3x3 numpy array, tensor eigenvectors lopt : float, laplacian regularization weight alpha : float, the l1 regularization weight cvxpy_solution_optimal: bool, indicates whether the cvxpy coefficient estimation reached an optimal solution """ self.model = model self._qtdmri_coef = qtdmri_coef self.us = us self.ut = ut self.tau_scaling = tau_scaling self.R = R self.lopt = lopt self.alpha = alpha self.cvxpy_solution_optimal = cvxpy_solution_optimal def qtdmri_to_mapmri_coef(self, tau): """This function converts the qtdmri coefficients to mapmri coefficients for a given tau. Defined in :footcite:p:`Fick2018`, the conversion is performed by a matrix multiplication that evaluates the time-dependent part of the basis and multiplies it with the coefficients, after which coefficients with the same spatial orders are summed up, resulting in mapmri coefficients. Parameters ---------- tau : float diffusion time (big_delta - small_delta / 3.) in seconds References ---------- ..
footbibliography:: """ if self.model.cartesian: II = self.model.cache_get("qtdmri_to_mapmri_matrix", key=tau) if II is None: II = qtdmri_to_mapmri_matrix( self.model.radial_order, self.model.time_order, self.ut, self.tau_scaling * tau, ) self.model.cache_set("qtdmri_to_mapmri_matrix", tau, II) else: II = self.model.cache_get("qtdmri_isotropic_to_mapmri_matrix", key=tau) if II is None: II = qtdmri_isotropic_to_mapmri_matrix( self.model.radial_order, self.model.time_order, self.ut, self.tau_scaling * tau, ) self.model.cache_set("qtdmri_isotropic_to_mapmri_matrix", tau, II) mapmri_coef = np.dot(II, self._qtdmri_coef) return mapmri_coef @warning_for_keywords() def sparsity_abs(self, *, threshold=0.99): """As a measure of sparsity, calculates the number of largest coefficients whose absolute values are needed to sum up to a given fraction (by default 99%) of the total absolute sum of all coefficients.""" if not 0.0 < threshold < 1.0: msg = "sparsity threshold must be between zero and one" raise ValueError(msg) total_weight = np.sum(abs(self._qtdmri_coef)) absolute_normalized_coef_array = ( np.sort(abs(self._qtdmri_coef))[::-1] / total_weight ) current_weight = 0.0 counter = 0 while current_weight < threshold: current_weight += absolute_normalized_coef_array[counter] counter += 1 return counter @warning_for_keywords() def sparsity_density(self, *, threshold=0.99): """As a measure of sparsity, calculates the number of largest coefficients whose squared values are needed to sum up to a given fraction (by default 99%) of the total squared sum of all coefficients.""" if not 0.0 < threshold < 1.0: msg = "sparsity threshold must be between zero and one" raise ValueError(msg) total_weight = np.sum(self._qtdmri_coef**2) squared_normalized_coef_array = ( np.sort(self._qtdmri_coef**2)[::-1] / total_weight ) current_weight = 0.0 counter = 0 while current_weight < threshold: current_weight += squared_normalized_coef_array[counter] counter += 1 return counter @warning_for_keywords() def odf(self, sphere, tau, *, s=2): r"""Calculates the analytical Orientation Distribution Function (ODF) for a given diffusion time tau from the signal. See :footcite:p:`Ozarslan2013` Eq. (32). The qtdmri coefficients are first converted to mapmri coefficients following :footcite:p:`Fick2018`. Parameters ---------- sphere : dipy sphere object sphere object with vertex orientations to compute the ODF on. tau : float diffusion time (big_delta - small_delta / 3.) in seconds s : unsigned int radial moment of the ODF References ---------- .. footbibliography:: """ mapmri_coef = self.qtdmri_to_mapmri_coef(tau) if self.model.cartesian: v_ = sphere.vertices v = np.dot(v_, self.R) I_s = mapmri.mapmri_odf_matrix(self.model.radial_order, self.us, s, v) odf = np.dot(I_s, mapmri_coef) else: II = self.model.cache_get("ODF_matrix", key=(sphere, s)) if II is None: II = mapmri.mapmri_isotropic_odf_matrix( self.model.radial_order, 1, s, sphere.vertices ) self.model.cache_set("ODF_matrix", (sphere, s), II) odf = self.us[0] ** s * np.dot(II, mapmri_coef) return odf @warning_for_keywords() def odf_sh(self, tau, *, s=2): r"""Calculates the real analytical odf for a given discrete sphere. Computes the design matrix of the ODF for the given sphere vertices and radial moment :footcite:p:`Ozarslan2013` eq. (32). The radial moment s acts as a sharpening method. The analytical equation for the spherical ODF basis is given in :footcite:p:`Fick2016b` eq. (C8). The qtdmri coefficients are first converted to mapmri coefficients following :footcite:p:`Fick2018`. Parameters ---------- tau : float diffusion time (big_delta - small_delta / 3.)
in seconds s : unsigned int radial moment of the ODF References ---------- .. footbibliography:: """ mapmri_coef = self.qtdmri_to_mapmri_coef(tau) if self.model.cartesian: msg = "odf in spherical harmonics not yet implemented for " msg += "cartesian implementation" raise ValueError(msg) II = self.model.cache_get("ODF_sh_matrix", key=(self.model.radial_order, s)) if II is None: II = mapmri.mapmri_isotropic_odf_sh_matrix(self.model.radial_order, 1, s) self.model.cache_set("ODF_sh_matrix", (self.model.radial_order, s), II) odf = self.us[0] ** s * np.dot(II, mapmri_coef) return odf def rtpp(self, tau): r"""Calculates the analytical return to the plane probability (RTPP) for a given diffusion time tau. See :footcite:p:`Ozarslan2013` eq. (42). The analytical formula for the isotropic MAP-MRI basis was derived in :footcite:p:`Fick2016b` eq. (C11). The qtdmri coefficients are first converted to mapmri coefficients following :footcite:p:`Fick2018`. Parameters ---------- tau : float diffusion time (big_delta - small_delta / 3.) in seconds References ---------- .. footbibliography:: """ mapmri_coef = self.qtdmri_to_mapmri_coef(tau) if self.model.cartesian: ind_mat = mapmri.mapmri_index_matrix(self.model.radial_order) Bm = mapmri.b_mat(ind_mat) sel = Bm > 0.0 # select only relevant coefficients const = 1 / (np.sqrt(2 * np.pi) * self.us[0]) ind_sum = (-1.0) ** (ind_mat[sel, 0] / 2.0) rtpp_vec = const * Bm[sel] * ind_sum * mapmri_coef[sel] rtpp = rtpp_vec.sum() return rtpp else: ind_mat = mapmri.mapmri_isotropic_index_matrix(self.model.radial_order) rtpp_vec = np.zeros(int(ind_mat.shape[0])) count = 0 for n in range(0, self.model.radial_order + 1, 2): for j in range(1, 2 + n // 2): ll = n + 2 - 2 * j const = (-1 / 2.0) ** (ll / 2) / np.sqrt(np.pi) matsum = 0 for k in range(0, j): matsum += ( (-1) ** k * mapmri.binomialfloat(j + ll - 0.5, j - k - 1) * gamma(ll / 2 + k + 1 / 2.0) / (factorial(k) * 0.5 ** (ll / 2 + 1 / 2.0 + k)) ) for _ in range(-ll, ll + 1): rtpp_vec[count] = const * matsum count += 1 direction = np.array(self.R[:, 0], ndmin=2) r, theta, phi = cart2sphere( direction[:, 0], direction[:, 1], direction[:, 2] ) rtpp = ( mapmri_coef * (1 / self.us[0]) * rtpp_vec * real_sh_descoteaux_from_index( ind_mat[:, 2], ind_mat[:, 1], theta, phi ) ) return rtpp.sum() def rtap(self, tau): r"""Calculates the analytical return to the axis probability (RTAP) for a given diffusion time tau. See :footcite:p:`Ozarslan2013` eq. (40, 44a). The analytical formula for the isotropic MAP-MRI basis was derived in :footcite:p:`Fick2016b` eq. (C11). The qtdmri coefficients are first converted to mapmri coefficients following :footcite:p:`Fick2018`. Parameters ---------- tau : float diffusion time (big_delta - small_delta / 3.) in seconds References ---------- .. 
footbibliography:: """ mapmri_coef = self.qtdmri_to_mapmri_coef(tau) if self.model.cartesian: ind_mat = mapmri.mapmri_index_matrix(self.model.radial_order) Bm = mapmri.b_mat(ind_mat) sel = Bm > 0.0 # select only relevant coefficients const = 1 / (2 * np.pi * np.prod(self.us[1:])) ind_sum = (-1.0) ** (np.sum(ind_mat[sel, 1:], axis=1) / 2.0) rtap_vec = const * Bm[sel] * ind_sum * mapmri_coef[sel] rtap = np.sum(rtap_vec) else: ind_mat = mapmri.mapmri_isotropic_index_matrix(self.model.radial_order) rtap_vec = np.zeros(int(ind_mat.shape[0])) count = 0 for n in range(0, self.model.radial_order + 1, 2): for j in range(1, 2 + n // 2): ll = n + 2 - 2 * j kappa = ((-1) ** (j - 1) * 2.0 ** (-(ll + 3) / 2.0)) / np.pi matsum = 0 for k in range(0, j): matsum += ( (-1) ** k * mapmri.binomialfloat(j + ll - 0.5, j - k - 1) * gamma((ll + 1) / 2.0 + k) ) / (factorial(k) * 0.5 ** ((ll + 1) / 2.0 + k)) for _ in range(-ll, ll + 1): rtap_vec[count] = kappa * matsum count += 1 rtap_vec *= 2 direction = np.array(self.R[:, 0], ndmin=2) r, theta, phi = cart2sphere( direction[:, 0], direction[:, 1], direction[:, 2] ) rtap_vec = ( mapmri_coef * (1 / self.us[0] ** 2) * rtap_vec * real_sh_descoteaux_from_index( ind_mat[:, 2], ind_mat[:, 1], theta, phi ) ) rtap = rtap_vec.sum() return rtap def rtop(self, tau): r"""Calculates the analytical return to the origin probability (RTOP) for a given diffusion time tau. See :footcite:p:`Ozarslan2013` eq. (36, 43). The analytical formula for the isotropic MAP-MRI basis was derived in :footcite:p:`Fick2016b` eq. (C11). The qtdmri coefficients are first converted to mapmri coefficients following :footcite:p:`Fick2018`. Parameters ---------- tau : float diffusion time (big_delta - small_delta / 3.) in seconds References ---------- .. footbibliography:: """ mapmri_coef = self.qtdmri_to_mapmri_coef(tau) if self.model.cartesian: ind_mat = mapmri.mapmri_index_matrix(self.model.radial_order) Bm = mapmri.b_mat(ind_mat) const = 1 / (np.sqrt(8 * np.pi**3) * np.prod(self.us)) ind_sum = (-1.0) ** (np.sum(ind_mat, axis=1) / 2) rtop_vec = const * ind_sum * Bm * mapmri_coef rtop = rtop_vec.sum() else: ind_mat = mapmri.mapmri_isotropic_index_matrix(self.model.radial_order) Bm = mapmri.b_mat_isotropic(ind_mat) const = 1 / (2 * np.sqrt(2.0) * np.pi ** (3 / 2.0)) rtop_vec = const * (-1.0) ** (ind_mat[:, 0] - 1) * Bm rtop = (1 / self.us[0] ** 3) * rtop_vec * mapmri_coef rtop = rtop.sum() return rtop def msd(self, tau): r"""Calculates the analytical Mean Squared Displacement (MSD) for a given diffusion time tau. It is defined as the Laplacian of the origin of the estimated signal :footcite:t:`Cheng2012`. The analytical formula for the MAP-MRI basis was derived in :footcite:p:`Fick2016b` eq. (C13, D1). The qtdmri coefficients are first converted to mapmri coefficients following :footcite:p:`Fick2018`. Parameters ---------- tau : float diffusion time (big_delta - small_delta / 3.) in seconds References ---------- .. 
footbibliography:: """ mapmri_coef = self.qtdmri_to_mapmri_coef(tau) mu = self.us if self.model.cartesian: ind_mat = mapmri.mapmri_index_matrix(self.model.radial_order) Bm = mapmri.b_mat(ind_mat) sel = Bm > 0.0 # select only relevant coefficients ind_sum = np.sum(ind_mat[sel], axis=1) nx, ny, nz = ind_mat[sel].T numerator = ( (-1) ** (0.5 * (-ind_sum)) * np.pi ** (3 / 2.0) * ( (1 + 2 * nx) * mu[0] ** 2 + (1 + 2 * ny) * mu[1] ** 2 + (1 + 2 * nz) * mu[2] ** 2 ) ) denominator = ( np.sqrt( 2.0 ** (-ind_sum) * factorial(nx) * factorial(ny) * factorial(nz) ) * gamma(0.5 - 0.5 * nx) * gamma(0.5 - 0.5 * ny) * gamma(0.5 - 0.5 * nz) ) msd_vec = mapmri_coef[sel] * (numerator / denominator) msd = msd_vec.sum() else: ind_mat = mapmri.mapmri_isotropic_index_matrix(self.model.radial_order) Bm = mapmri.b_mat_isotropic(ind_mat) sel = Bm > 0.0 # select only relevant coefficients msd_vec = (4 * ind_mat[sel, 0] - 1) * Bm[sel] msd = self.us[0] ** 2 * msd_vec * mapmri_coef[sel] msd = msd.sum() return msd def qiv(self, tau): r"""Calculates the analytical Q-space Inverse Variance (QIV) for given diffusion time tau. It is defined as the inverse of the Laplacian of the origin of the estimated propagator :footcite:p:`Hosseinbor2013` eq. (22). The analytical formula for the MAP-MRI basis was derived in :footcite:p:`Fick2016b` eq. (C14, D2). The qtdmri coefficients are first converted to mapmri coefficients following :footcite:t:`Fick2018`. Parameters ---------- tau : float diffusion time (big_delta - small_delta / 3.) in seconds References ---------- .. footbibliography:: """ mapmri_coef = self.qtdmri_to_mapmri_coef(tau) ux, uy, uz = self.us if self.model.cartesian: ind_mat = mapmri.mapmri_index_matrix(self.model.radial_order) Bm = mapmri.b_mat(ind_mat) sel = Bm > 0 # select only relevant coefficients nx, ny, nz = ind_mat[sel].T numerator = ( 8 * np.pi**2 * (ux * uy * uz) ** 3 * np.sqrt(factorial(nx) * factorial(ny) * factorial(nz)) * gamma(0.5 - 0.5 * nx) * gamma(0.5 - 0.5 * ny) * gamma(0.5 - 0.5 * nz) ) denominator = np.sqrt(2.0 ** (-1 + nx + ny + nz)) * ( (1 + 2 * nx) * uy**2 * uz**2 + ux**2 * ((1 + 2 * nz) * uy**2 + (1 + 2 * ny) * uz**2) ) qiv_vec = mapmri_coef[sel] * (numerator / denominator) qiv = qiv_vec.sum() else: ind_mat = mapmri.mapmri_isotropic_index_matrix(self.model.radial_order) Bm = mapmri.b_mat_isotropic(ind_mat) sel = Bm > 0.0 # select only relevant coefficients j = ind_mat[sel, 0] qiv_vec = (8 * (-1.0) ** (1 - j) * np.sqrt(2) * np.pi ** (7 / 2.0)) / ( (4.0 * j - 1) * Bm[sel] ) qiv = ux**5 * qiv_vec * mapmri_coef[sel] qiv = qiv.sum() return qiv @warning_for_keywords() def fitted_signal(self, *, gtab=None): """Recovers the fitted signal for the given gradient table. If no gradient table is given it recovers the signal for the gtab of the model object. """ if gtab is None: E = self.predict(self.model.gtab) else: E = self.predict(gtab) return E @warning_for_keywords() def predict(self, qvals_or_gtab, *, S0=1.0): """Recovers the reconstructed signal for any qvalue array or gradient table. 
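Parameters ---------- qvals_or_gtab : ndarray or GradientTable either an (N, 4) array whose first three columns hold the q-vector components and whose last column holds the diffusion time, or a gradient table from which qvals and tau are taken. S0 : float, optional non diffusion-weighted signal magnitude used to scale the recovered signal. Returns ------- E : ndarray the signal recovered at the given q-space positions and diffusion times. Examples -------- A minimal sketch (the fitted object name ``qtdmri_fit`` is illustrative): predicting back on the model's own gradient table amounts to ``E = qtdmri_fit.predict(qtdmri_fit.model.gtab)``.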
""" tau_scaling = self.tau_scaling if isinstance(qvals_or_gtab, np.ndarray): q = qvals_or_gtab[:, :3] tau = qvals_or_gtab[:, 3] * tau_scaling else: gtab = qvals_or_gtab qvals = gtab.qvals tau = gtab.tau * tau_scaling q = qvals[:, None] * gtab.bvecs if self.model.cartesian: if self.model.anisotropic_scaling: q_rot = np.dot(q, self.R) M = _qtdmri_signal_matrix( self.model.radial_order, self.model.time_order, self.us, self.ut, q_rot, tau, normalization=self.model.normalization, ) else: M = _qtdmri_signal_matrix( self.model.radial_order, self.model.time_order, self.us, self.ut, q, tau, normalization=self.model.normalization, ) else: M = _qtdmri_isotropic_signal_matrix( self.model.radial_order, self.model.time_order, self.us[0], self.ut, q, tau, normalization=self.model.normalization, ) E = S0 * np.dot(M, self._qtdmri_coef) return E def norm_of_laplacian_signal(self): """Calculates the norm of the laplacian of the fitted signal. This information could be useful to assess if the extrapolation of the fitted signal contains spurious oscillations. A high laplacian norm may indicate that these are present, and any q-space indices that use integrals of the signal may be corrupted (e.g. RTOP, RTAP, RTPP, QIV). In contrast to :footcite:t:`Fick2016b`, the Laplacian now describes oscillations in the 4-dimensional qt-signal :footcite:p:`Fick2018`. See :footcite:p:`Fick2016b` for a definition of the metric. References ---------- .. footbibliography:: """ if self.model.cartesian: lap_matrix = qtdmri_laplacian_reg_matrix( self.model.ind_mat, self.us, self.ut, S_mat=self.model.S_mat, T_mat=self.model.T_mat, U_mat=self.model.U_mat, part1_ut_precomp=self.model.part1_reg_mat_tau, part23_ut_precomp=self.model.part23_reg_mat_tau, part4_ut_precomp=self.model.part4_reg_mat_tau, normalization=self.model.normalization, ) else: lap_matrix = qtdmri_isotropic_laplacian_reg_matrix( self.model.ind_mat, self.us, self.ut, part1_uq_iso_precomp=self.model.part1_uq_iso_precomp, part1_ut_precomp=self.model.part1_reg_mat_tau, part23_ut_precomp=self.model.part23_reg_mat_tau, part4_ut_precomp=self.model.part4_reg_mat_tau, normalization=self.model.normalization, ) norm_laplacian = np.dot( self._qtdmri_coef, np.dot(self._qtdmri_coef, lap_matrix) ) return norm_laplacian def pdf(self, rt_points): """Diffusion propagator on a given set of real points. if the array r_points is non writeable, then intermediate results are cached for faster recalculation """ tau_scaling = self.tau_scaling rt_points_ = rt_points * np.r_[1, 1, 1, tau_scaling] if self.model.cartesian: K = _qtdmri_eap_matrix( self.model.radial_order, self.model.time_order, self.us, self.ut, rt_points_, normalization=self.model.normalization, ) else: K = _qtdmri_isotropic_eap_matrix( self.model.radial_order, self.model.time_order, self.us[0], self.ut, rt_points_, normalization=self.model.normalization, ) eap = np.dot(K, self._qtdmri_coef) return eap def _qtdmri_to_mapmri_matrix(radial_order, time_order, ut, tau, isotropic): """Generate the matrix that maps the spherical qtdmri coefficients to MAP-MRI coefficients. The conversion is done by only evaluating the time basis for a diffusion time tau and summing up coefficients with the same spatial basis orders :footcite:p:`Fick2018`. Parameters ---------- radial_order : unsigned int, an even integer representing the spatial/radial order of the basis. time_order : unsigned int, an integer larger or equal than zero representing the time order of the basis. 
ut : float temporal scaling factor tau : float diffusion time (big_delta - small_delta / 3.) in seconds isotropic : bool `True` if the case is isotropic. References ---------- .. footbibliography:: """ if isotropic: mapmri_ind_mat = mapmri.mapmri_isotropic_index_matrix(radial_order) qtdmri_ind_mat = qtdmri_isotropic_index_matrix(radial_order, time_order) else: mapmri_ind_mat = mapmri.mapmri_index_matrix(radial_order) qtdmri_ind_mat = qtdmri_index_matrix(radial_order, time_order) n_elem_mapmri = int(mapmri_ind_mat.shape[0]) n_elem_qtdmri = int(qtdmri_ind_mat.shape[0]) temporal_storage = np.zeros(time_order + 1) for o in range(time_order + 1): temporal_storage[o] = temporal_basis(o, ut, tau) counter = 0 mapmri_mat = np.zeros((n_elem_mapmri, n_elem_qtdmri)) for j, ll, m, o in qtdmri_ind_mat: index_overlap = np.all( [ j == mapmri_ind_mat[:, 0], ll == mapmri_ind_mat[:, 1], m == mapmri_ind_mat[:, 2], ], 0, ) mapmri_mat[:, counter] = temporal_storage[o] * index_overlap counter += 1 return mapmri_mat def qtdmri_to_mapmri_matrix(radial_order, time_order, ut, tau): """Generate the matrix that maps the qtdmri coefficients to MAP-MRI coefficients for the anisotropic case. The conversion is done by only evaluating the time basis for a diffusion time tau and summing up coefficients with the same spatial basis orders :footcite:p:`Fick2018`. Parameters ---------- radial_order : unsigned int, an even integer representing the spatial/radial order of the basis. time_order : unsigned int, an integer larger than or equal to zero representing the time order of the basis. ut : float temporal scaling factor tau : float diffusion time (big_delta - small_delta / 3.) in seconds References ---------- .. footbibliography:: """ return _qtdmri_to_mapmri_matrix(radial_order, time_order, ut, tau, False) def qtdmri_isotropic_to_mapmri_matrix(radial_order, time_order, ut, tau): """Generate the matrix that maps the spherical qtdmri coefficients to MAP-MRI coefficients for the isotropic case. The conversion is done by only evaluating the time basis for a diffusion time tau and summing up coefficients with the same spatial basis orders :footcite:p:`Fick2018`. Parameters ---------- radial_order : unsigned int, an even integer representing the spatial/radial order of the basis. time_order : unsigned int, an integer larger than or equal to zero representing the time order of the basis. ut : float temporal scaling factor tau : float diffusion time (big_delta - small_delta / 3.) in seconds References ---------- .. footbibliography:: """ return _qtdmri_to_mapmri_matrix(radial_order, time_order, ut, tau, True) def qtdmri_temporal_normalization(ut): """Normalization factor for the temporal basis""" return np.sqrt(ut) def qtdmri_mapmri_normalization(mu): """Normalization factor for Cartesian MAP-MRI basis. The scaling is the same for every basis function depending only on the spatial scaling mu. """ sqrtC = np.sqrt(8 * np.prod(mu)) * np.pi ** (3.0 / 4.0) return sqrtC def qtdmri_mapmri_isotropic_normalization(j, ell, u0): """Normalization factor for Spherical MAP-MRI basis. The normalization for a basis function with orders [j,l,m] depends only on orders j,l and the isotropic scale factor.
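The factor, as implemented below, equals sqrtC = (2 * pi)**(3 / 2) * sqrt(2**l * u0**3 * Gamma(j) / Gamma(j + l + 1 / 2)), with Gamma the gamma function.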
""" sqrtC = (2 * np.pi) ** (3.0 / 2.0) * np.sqrt( 2**ell * u0**3 * gamma(j) / gamma(j + ell + 1.0 / 2.0) ) return sqrtC @warning_for_keywords() def _qtdmri_signal_matrix( radial_order, time_order, us, ut, q, tau, *, normalization=False ): """Function to generate the qtdmri signal basis.""" M = qtdmri_signal_matrix(radial_order, time_order, us, ut, q, tau) if normalization: sqrtC = qtdmri_mapmri_normalization(us) sqrtut = qtdmri_temporal_normalization(ut) sqrtCut = sqrtC * sqrtut M *= sqrtCut return M def qtdmri_signal_matrix(radial_order, time_order, us, ut, q, tau): r"""Constructs the design matrix as a product of 3 separated radial, angular and temporal design matrices. It precomputes the relevant basis orders for each one and finally puts them together according to the index matrix """ ind_mat = qtdmri_index_matrix(radial_order, time_order) n_dat = int(q.shape[0]) n_elem = int(ind_mat.shape[0]) qx, qy, qz = q.T mux, muy, muz = us temporal_storage = np.zeros((n_dat, time_order + 1)) for o in range(time_order + 1): temporal_storage[:, o] = temporal_basis(o, ut, tau) Qx_storage = np.array(np.zeros((n_dat, radial_order + 1 + 4)), dtype=complex) Qy_storage = np.array(np.zeros((n_dat, radial_order + 1 + 4)), dtype=complex) Qz_storage = np.array(np.zeros((n_dat, radial_order + 1 + 4)), dtype=complex) for n in range(radial_order + 1 + 4): Qx_storage[:, n] = mapmri.mapmri_phi_1d(n, qx, mux) Qy_storage[:, n] = mapmri.mapmri_phi_1d(n, qy, muy) Qz_storage[:, n] = mapmri.mapmri_phi_1d(n, qz, muz) counter = 0 Q = np.zeros((n_dat, n_elem)) for nx, ny, nz, o in ind_mat: Q[:, counter] = ( np.real(Qx_storage[:, nx] * Qy_storage[:, ny] * Qz_storage[:, nz]) * temporal_storage[:, o] ) counter += 1 return Q def qtdmri_eap_matrix(radial_order, time_order, us, ut, grid): r"""Constructs the design matrix as a product of 3 separated radial, angular and temporal design matrices. 
It precomputes the relevant basis orders for each one and finally puts them together according to the index matrix """ ind_mat = qtdmri_index_matrix(radial_order, time_order) rx, ry, rz, tau = grid.T n_dat = int(rx.shape[0]) n_elem = int(ind_mat.shape[0]) mux, muy, muz = us temporal_storage = np.zeros((n_dat, time_order + 1)) for o in range(time_order + 1): temporal_storage[:, o] = temporal_basis(o, ut, tau) Kx_storage = np.zeros((n_dat, radial_order + 1)) Ky_storage = np.zeros((n_dat, radial_order + 1)) Kz_storage = np.zeros((n_dat, radial_order + 1)) for n in range(radial_order + 1): Kx_storage[:, n] = mapmri.mapmri_psi_1d(n, rx, mux) Ky_storage[:, n] = mapmri.mapmri_psi_1d(n, ry, muy) Kz_storage[:, n] = mapmri.mapmri_psi_1d(n, rz, muz) counter = 0 K = np.zeros((n_dat, n_elem)) for nx, ny, nz, o in ind_mat: K[:, counter] = ( Kx_storage[:, nx] * Ky_storage[:, ny] * Kz_storage[:, nz] * temporal_storage[:, o] ) counter += 1 return K @warning_for_keywords() def _qtdmri_isotropic_signal_matrix( radial_order, time_order, us, ut, q, tau, *, normalization=False ): M = qtdmri_isotropic_signal_matrix(radial_order, time_order, us, ut, q, tau) if normalization: ind_mat = qtdmri_isotropic_index_matrix(radial_order, time_order) j, ll = ind_mat[:, :2].T sqrtut = qtdmri_temporal_normalization(ut) sqrtC = qtdmri_mapmri_isotropic_normalization(j, ll, us) sqrtCut = sqrtC * sqrtut M = M * sqrtCut[None, :] return M def qtdmri_isotropic_signal_matrix(radial_order, time_order, us, ut, q, tau): ind_mat = qtdmri_isotropic_index_matrix(radial_order, time_order) qvals, theta, phi = cart2sphere(q[:, 0], q[:, 1], q[:, 2]) n_dat = int(qvals.shape[0]) n_elem = int(ind_mat.shape[0]) num_j = int(np.max(ind_mat[:, 0])) num_o = int(time_order + 1) num_l = int(radial_order // 2 + 1) num_m = int(radial_order * 2 + 1) # Radial Basis radial_storage = np.zeros([num_j, num_l, n_dat]) for j in range(1, num_j + 1): for ll in range(0, radial_order + 1, 2): radial_storage[j - 1, ll // 2, :] = radial_basis_opt(j, ll, us, qvals) # Angular Basis angular_storage = np.zeros([num_l, num_m, n_dat]) for ll in range(0, radial_order + 1, 2): for m in range(-ll, ll + 1): angular_storage[ll // 2, m + ll, :] = angular_basis_opt( ll, m, qvals, theta, phi ) # Temporal Basis temporal_storage = np.zeros([num_o + 1, n_dat]) for o in range(0, num_o + 1): temporal_storage[o, :] = temporal_basis(o, ut, tau) # Construct full design matrix M = np.zeros((n_dat, n_elem)) counter = 0 for j, ll, m, o in ind_mat: M[:, counter] = ( radial_storage[j - 1, ll // 2, :] * angular_storage[ll // 2, m + ll, :] * temporal_storage[o, :] ) counter += 1 return M @warning_for_keywords() def _qtdmri_eap_matrix(radial_order, time_order, us, ut, grid, *, normalization=False): sqrtCut = 1.0 if normalization: sqrtC = qtdmri_mapmri_normalization(us) sqrtut = qtdmri_temporal_normalization(ut) sqrtCut = sqrtC * sqrtut K_tau = qtdmri_eap_matrix(radial_order, time_order, us, ut, grid) * sqrtCut return K_tau @warning_for_keywords() def _qtdmri_isotropic_eap_matrix( radial_order, time_order, us, ut, grid, *, normalization=False ): K = qtdmri_isotropic_eap_matrix(radial_order, time_order, us, ut, grid) if normalization: ind_mat = qtdmri_isotropic_index_matrix(radial_order, time_order) j, ll = ind_mat[:, :2].T sqrtut = qtdmri_temporal_normalization(ut) sqrtC = qtdmri_mapmri_isotropic_normalization(j, ll, us) sqrtCut = sqrtC * sqrtut K = K * sqrtCut[None, :] return K def qtdmri_isotropic_eap_matrix(radial_order, time_order, us, ut, grid): r"""Constructs the design matrix as a product of 
3 separate radial, angular and temporal design matrices. It precomputes the relevant basis orders for each one and finally puts them together according to the index matrix. """ rx, ry, rz, tau = grid.T R, theta, phi = cart2sphere(rx, ry, rz) theta[np.isnan(theta)] = 0 ind_mat = qtdmri_isotropic_index_matrix(radial_order, time_order) n_dat = int(R.shape[0]) n_elem = int(ind_mat.shape[0]) num_j = int(np.max(ind_mat[:, 0])) num_o = int(time_order + 1) num_l = int(radial_order / 2 + 1) num_m = int(radial_order * 2 + 1) # Radial Basis radial_storage = np.zeros([num_j, num_l, n_dat]) for j in range(1, num_j + 1): for ll in range(0, radial_order + 1, 2): radial_storage[j - 1, ll // 2, :] = radial_basis_EAP_opt(j, ll, us, R) # Angular Basis angular_storage = np.zeros([num_j, num_l, num_m, n_dat]) for j in range(1, num_j + 1): for ll in range(0, radial_order + 1, 2): for m in range(-ll, ll + 1): angular_storage[j - 1, ll // 2, m + ll, :] = angular_basis_EAP_opt( j, ll, m, R, theta, phi ) # Temporal Basis temporal_storage = np.zeros([num_o + 1, n_dat]) for o in range(0, num_o + 1): temporal_storage[o, :] = temporal_basis(o, ut, tau) # Construct full design matrix M = np.zeros((n_dat, n_elem)) counter = 0 for j, ll, m, o in ind_mat: M[:, counter] = ( radial_storage[j - 1, ll // 2, :] * angular_storage[j - 1, ll // 2, m + ll, :] * temporal_storage[o, :] ) counter += 1 return M def radial_basis_opt(j, ell, us, q): """Spatial basis dependent on spatial scaling factor us""" const = ( us**ell * np.exp(-2 * np.pi**2 * us**2 * q**2) * genlaguerre(j - 1, ell + 0.5)(4 * np.pi**2 * us**2 * q**2) ) return const def angular_basis_opt(ell, m, q, theta, phi): """Angular basis independent of spatial scaling factor us. Though it includes q, it is independent of the data and can be precomputed. """ const = ( (-1) ** (ell / 2) * np.sqrt(4.0 * np.pi) * (2 * np.pi**2 * q**2) ** (ell / 2) * real_sh_descoteaux_from_index(m, ell, theta, phi) ) return const def radial_basis_EAP_opt(j, ell, us, r): radial_part = ( (us**3) ** (-1) / (us**2) ** (ell / 2) * np.exp(-(r**2) / (2 * us**2)) * genlaguerre(j - 1, ell + 0.5)(r**2 / us**2) ) return radial_part def angular_basis_EAP_opt(j, ell, m, r, theta, phi): angular_part = ( (-1) ** (j - 1) * (np.sqrt(2) * np.pi) ** (-1) * (r**2 / 2) ** (ell / 2) * real_sh_descoteaux_from_index(m, ell, theta, phi) ) return angular_part def temporal_basis(o, ut, tau): """Temporal basis dependent on temporal scaling factor ut""" const = np.exp(-ut * tau / 2.0) * special.laguerre(o)(ut * tau) return const def qtdmri_index_matrix(radial_order, time_order): """Computes the SHORE basis order indices of the Cartesian qt-dMRI basis.""" index_matrix = [] for n in range(0, radial_order + 1, 2): for i in range(0, n + 1): for j in range(0, n - i + 1): for o in range(0, time_order + 1): index_matrix.append([n - i - j, j, i, o]) return np.array(index_matrix) def qtdmri_isotropic_index_matrix(radial_order, time_order): """Computes the SHORE basis order indices of the isotropic qt-dMRI basis.""" index_matrix = [] for n in range(0, radial_order + 1, 2): for j in range(1, 2 + n // 2): ll = n + 2 - 2 * j for m in range(-ll, ll + 1): for o in range(0, time_order + 1): index_matrix.append([j, ll, m, o]) return np.array(index_matrix) @warning_for_keywords() def qtdmri_laplacian_reg_matrix( ind_mat, us, ut, *, S_mat=None, T_mat=None, U_mat=None, part1_ut_precomp=None, part23_ut_precomp=None, part4_ut_precomp=None, normalization=False, ): """Computes the cartesian qt-dMRI Laplacian regularization matrix.
If given, uses precomputed matrices for temporal and spatial regularization matrices to speed up computation. Follows the formulation of Appendix B in :footcite:p:`Fick2018`. References ---------- .. footbibliography:: """ if S_mat is None or T_mat is None or U_mat is None: radial_order = ind_mat[:, :3].max() S_mat, T_mat, U_mat = mapmri.mapmri_STU_reg_matrices(radial_order) part1_us = mapmri.mapmri_laplacian_reg_matrix( ind_mat[:, :3], us, S_mat, T_mat, U_mat ) part23_us = part23_reg_matrix_q(ind_mat, U_mat, T_mat, us) part4_us = part4_reg_matrix_q(ind_mat, U_mat, us) if part1_ut_precomp is None: part1_ut = part1_reg_matrix_tau(ind_mat, ut) else: part1_ut = part1_ut_precomp / ut if part23_ut_precomp is None: part23_ut = part23_reg_matrix_tau(ind_mat, ut) else: part23_ut = part23_ut_precomp * ut if part4_ut_precomp is None: part4_ut = part4_reg_matrix_tau(ind_mat, ut) else: part4_ut = part4_ut_precomp * ut**3 regularization_matrix = ( part1_us * part1_ut + part23_us * part23_ut + part4_us * part4_ut ) if normalization: temporal_normalization = qtdmri_temporal_normalization(ut) ** 2 spatial_normalization = qtdmri_mapmri_normalization(us) ** 2 regularization_matrix *= temporal_normalization * spatial_normalization return regularization_matrix @warning_for_keywords() def qtdmri_isotropic_laplacian_reg_matrix( ind_mat, us, ut, *, part1_uq_iso_precomp=None, part1_ut_precomp=None, part23_ut_precomp=None, part4_ut_precomp=None, normalization=False, ): """Computes the spherical qt-dMRI Laplacian regularization matrix. If given, uses precomputed matrices for temporal and spatial regularization matrices to speed up computation. Follows the formulation of Appendix C in :footcite:p:`Fick2018`. References ---------- .. footbibliography:: """ if part1_uq_iso_precomp is None: part1_us = mapmri.mapmri_isotropic_laplacian_reg_matrix_from_index_matrix( ind_mat[:, :3], us[0] ) else: part1_us = part1_uq_iso_precomp * us[0] if part1_ut_precomp is None: part1_ut = part1_reg_matrix_tau(ind_mat, ut) else: part1_ut = part1_ut_precomp / ut if part23_ut_precomp is None: part23_ut = part23_reg_matrix_tau(ind_mat, ut) else: part23_ut = part23_ut_precomp * ut if part4_ut_precomp is None: part4_ut = part4_reg_matrix_tau(ind_mat, ut) else: part4_ut = part4_ut_precomp * ut**3 part23_us = part23_iso_reg_matrix_q(ind_mat, us[0]) part4_us = part4_iso_reg_matrix_q(ind_mat, us[0]) regularization_matrix = ( part1_us * part1_ut + part23_us * part23_ut + part4_us * part4_ut ) if normalization: temporal_normalization = qtdmri_temporal_normalization(ut) ** 2 j, ll = ind_mat[:, :2].T pre_spatial_norm = qtdmri_mapmri_isotropic_normalization(j, ll, us[0]) spatial_normalization = np.outer(pre_spatial_norm, pre_spatial_norm) regularization_matrix *= temporal_normalization * spatial_normalization return regularization_matrix def part23_reg_matrix_q(ind_mat, U_mat, T_mat, us): """Partial cartesian spatial Laplacian regularization matrix. The implementation follows the second line of Eq. (B2) in :footcite:p:`Fick2018`. References ---------- .. 
footbibliography:: """ ux, uy, uz = us x, y, z, _ = ind_mat.T n_elem = int(ind_mat.shape[0]) LR = np.zeros((n_elem, n_elem)) for i in range(n_elem): for k in range(i, n_elem): val = 0 if x[i] == x[k] and y[i] == y[k]: val += ( (uz / (ux * uy)) * U_mat[x[i], x[k]] * U_mat[y[i], y[k]] * T_mat[z[i], z[k]] ) if x[i] == x[k] and z[i] == z[k]: val += ( (uy / (ux * uz)) * U_mat[x[i], x[k]] * T_mat[y[i], y[k]] * U_mat[z[i], z[k]] ) if y[i] == y[k] and z[i] == z[k]: val += ( (ux / (uy * uz)) * T_mat[x[i], x[k]] * U_mat[y[i], y[k]] * U_mat[z[i], z[k]] ) LR[i, k] = LR[k, i] = val return LR def part23_iso_reg_matrix_q(ind_mat, us): """Partial spherical spatial Laplacian regularization matrix. The implementation follows the equation below Eq. (C4) in :footcite:p:`Fick2018`. References ---------- .. footbibliography:: """ n_elem = int(ind_mat.shape[0]) LR = np.zeros((n_elem, n_elem)) for i in range(n_elem): for k in range(i, n_elem): if ind_mat[i, 1] == ind_mat[k, 1] and ind_mat[i, 2] == ind_mat[k, 2]: ji = ind_mat[i, 0] jk = ind_mat[k, 0] ll = ind_mat[i, 1] if ji == (jk + 1): LR[i, k] = LR[k, i] = ( 2.0 ** (-ll) * -gamma(3 / 2.0 + jk + ll) / gamma(jk) ) elif ji == jk: LR[i, k] = LR[k, i] = ( 2.0 ** (-(ll + 1)) * (1 - 4 * ji - 2 * ll) * gamma(1 / 2.0 + ji + ll) / gamma(ji) ) elif ji == (jk - 1): LR[i, k] = LR[k, i] = ( 2.0 ** (-ll) * -gamma(3 / 2.0 + ji + ll) / gamma(ji) ) return LR / us def part4_reg_matrix_q(ind_mat, U_mat, us): """Partial cartesian spatial Laplacian regularization matrix. The implementation follows equation Eq. (B2) in :footcite:p:`Fick2018`. References ---------- .. footbibliography:: """ ux, uy, uz = us x, y, z, _ = ind_mat.T n_elem = int(ind_mat.shape[0]) LR = np.zeros((n_elem, n_elem)) for i in range(n_elem): for k in range(i, n_elem): if x[i] == x[k] and y[i] == y[k] and z[i] == z[k]: LR[i, k] = LR[k, i] = ( (1.0 / (ux * uy * uz)) * U_mat[x[i], x[k]] * U_mat[y[i], y[k]] * U_mat[z[i], z[k]] ) return LR def part4_iso_reg_matrix_q(ind_mat, us): """Partial spherical spatial Laplacian regularization matrix. The implementation follows the equation below Eq. (C4) in :footcite:p:`Fick2018`. References ---------- .. footbibliography:: """ n_elem = int(ind_mat.shape[0]) LR = np.zeros((n_elem, n_elem)) for i in range(n_elem): for k in range(i, n_elem): if ( ind_mat[i, 0] == ind_mat[k, 0] and ind_mat[i, 1] == ind_mat[k, 1] and ind_mat[i, 2] == ind_mat[k, 2] ): ji = ind_mat[i, 0] ll = ind_mat[i, 1] LR[i, k] = LR[k, i] = ( 2.0 ** (-(ll + 2)) * gamma(1 / 2.0 + ji + ll) / (np.pi**2 * gamma(ji)) ) return LR / us**3 def part1_reg_matrix_tau(ind_mat, ut): """Partial temporal Laplacian regularization matrix. The implementation follows Appendix B in :footcite:p:`Fick2018`. References ---------- .. footbibliography:: """ n_elem = int(ind_mat.shape[0]) LD = np.zeros((n_elem, n_elem)) for i in range(n_elem): for k in range(i, n_elem): oi = ind_mat[i, 3] ok = ind_mat[k, 3] if oi == ok: LD[i, k] = LD[k, i] = 1.0 / ut return LD def part23_reg_matrix_tau(ind_mat, ut): """Partial temporal Laplacian regularization matrix. The implementation follows Appendix B in :footcite:p:`Fick2018`. References ---------- .. footbibliography:: """ n_elem = int(ind_mat.shape[0]) LD = np.zeros((n_elem, n_elem)) for i in range(n_elem): for k in range(i, n_elem): oi = ind_mat[i, 3] ok = ind_mat[k, 3] if oi == ok: LD[i, k] = LD[k, i] = 1 / 2.0 else: LD[i, k] = LD[k, i] = np.abs(oi - ok) return ut * LD def part4_reg_matrix_tau(ind_mat, ut): """Partial temporal Laplacian regularization matrix. 
The implementation follows Appendix B in :footcite:p:`Fick2018`. References ---------- .. footbibliography:: """ n_elem = int(ind_mat.shape[0]) LD = np.zeros((n_elem, n_elem)) for i in range(n_elem): for k in range(i, n_elem): oi = ind_mat[i, 3] ok = ind_mat[k, 3] sum1 = 0 for p in range(1, min([ok, oi]) + 1 + 1): sum1 += (oi - p) * (ok - p) * H(min([oi, ok]) - p) sum2 = 0 for p in range(0, min(ok - 2, oi - 1) + 1): sum2 += p sum3 = 0 for p in range(0, min(ok - 1, oi - 2) + 1): sum3 += p LD[i, k] = LD[k, i] = ( 0.25 * np.abs(oi - ok) + (1 / 16.0) * mapmri.delta(oi, ok) + min([oi, ok]) + sum1 + H(oi - 1) * H(ok - 1) * ( oi + ok - 2 + sum2 + sum3 + H(abs(oi - ok) - 1) * (abs(oi - ok) - 1) * min([ok - 1, oi - 1]) ) ) return LD * ut**3 def H(value): """Step function: H(x) = 1 if x >= 0 and zero otherwise. Used for the temporal Laplacian matrix.""" if value >= 0: return 1 return 0 @warning_for_keywords() def generalized_crossvalidation(data, M, LR, *, startpoint=1e-4): r"""Generalized Cross Validation Function. See :footcite:p:`Craven1979` for further details about the method. References ---------- .. footbibliography:: """ MMt = np.dot(M.T, M) K = len(data) input_stuff = (data, M, MMt, K, LR) bounds = ((1e-5, 1),) res = fmin_l_bfgs_b( GCV_cost_function, startpoint, args=(input_stuff,), approx_grad=True, bounds=bounds, disp=False, pgtol=1e-10, factr=10.0, ) return res[0][0] def GCV_cost_function(weight, arguments): r"""Generalized Cross Validation Function that is iterated. See :footcite:p:`Craven1979` for further details about the method. References ---------- .. footbibliography:: """ data, M, MMt, K, LR = arguments S = np.dot(np.dot(M, np.linalg.pinv(MMt + weight * LR)), M.T) trS = np.trace(S) normyytilde = np.linalg.norm(data - np.dot(S, data), 2) gcv_value = normyytilde / (K - trS) return gcv_value def qtdmri_isotropic_scaling(data, q, tau): """Estimates the isotropic spatial and temporal scaling factors us and ut by fitting exponentials to the q-value and diffusion time dependence of the data. """ dataclip = np.clip(data, 1e-05, 1.0) logE = -np.log(dataclip) logE_q = logE / (2 * np.pi**2) logE_tau = logE * 2 B_q = np.array([q * q]) inv_B_q = np.linalg.pinv(B_q) B_tau = np.array([tau]) inv_B_tau = np.linalg.pinv(B_tau) us = np.sqrt(np.dot(logE_q, inv_B_q)).item() ut = np.dot(logE_tau, inv_B_tau).item() return us, ut def qtdmri_anisotropic_scaling(data, q, bvecs, tau): """Estimates the anisotropic spatial scaling factors us, the temporal scaling factor ut and the tensor eigenvectors R by fitting a tensor to the q-space decay and an exponential to the diffusion time decay of the data. """ dataclip = np.clip(data, 1e-05, 10e10) logE = -np.log(dataclip) logE_q = logE / (2 * np.pi**2) logE_tau = logE * 2 B_q = design_matrix_spatial(bvecs, q) inv_B_q = np.linalg.pinv(B_q) A = np.dot(inv_B_q, logE_q) evals, R = dti.decompose_tensor(dti.from_lower_triangular(A)) us = np.sqrt(evals) B_tau = np.array([tau]) inv_B_tau = np.linalg.pinv(B_tau) ut = np.dot(logE_tau, inv_B_tau).item() return us, ut, R def design_matrix_spatial(bvecs, qvals): """Constructs design matrix for DTI weighted least squares or least squares fitting. (Basser et al., 1994a) Parameters ---------- bvecs : array (N x 3) unit b-vectors of the acquisition.
qvals : array (N,) corresponding q-values in 1/mm Returns ------- design_matrix : array (N, 6) Design matrix or B matrix assuming a Gaussian distributed tensor model: design_matrix[j, :] = (Bxx, Bxy, Byy, Bxz, Byz, Bzz) """ B = np.zeros((bvecs.shape[0], 6)) B[:, 0] = bvecs[:, 0] * bvecs[:, 0] * 1.0 * qvals**2 # Bxx B[:, 1] = bvecs[:, 0] * bvecs[:, 1] * 2.0 * qvals**2 # Bxy B[:, 2] = bvecs[:, 1] * bvecs[:, 1] * 1.0 * qvals**2 # Byy B[:, 3] = bvecs[:, 0] * bvecs[:, 2] * 2.0 * qvals**2 # Bxz B[:, 4] = bvecs[:, 1] * bvecs[:, 2] * 2.0 * qvals**2 # Byz B[:, 5] = bvecs[:, 2] * bvecs[:, 2] * 1.0 * qvals**2 # Bzz return B def create_rt_space_grid( grid_size_r, max_radius_r, grid_size_tau, min_radius_tau, max_radius_tau ): """Generates EAP grid (for potential positivity constraint).""" tau_list = np.linspace(min_radius_tau, max_radius_tau, grid_size_tau) constraint_grid_tau = np.c_[0.0, 0.0, 0.0, 0.0] for tau in tau_list: constraint_grid = mapmri.create_rspace(grid_size_r, max_radius_r) constraint_grid_tau = np.vstack( [ constraint_grid_tau, np.c_[constraint_grid, np.zeros(constraint_grid.shape[0]) + tau], ] ) return constraint_grid_tau[1:] def qtdmri_number_of_coefficients(radial_order, time_order): """Computes the total number of coefficients of the qtdmri basis given a radial and temporal order. See the equation below Eq. (9) in :footcite:t:`Fick2018`. References ---------- .. footbibliography:: """ F = np.floor(radial_order / 2.0) Msym = (F + 1) * (F + 2) * (4 * F + 3) / 6 M_total = Msym * (time_order + 1) return M_total @warning_for_keywords() def l1_crossvalidation(b0s_mask, E, M, *, weight_array=None): """Cross-validation function to find the optimal weight of alpha for sparsity regularization.""" if weight_array is None: weight_array = np.linspace(0, 0.4, 21) dwi_mask = ~b0s_mask b0_mask = b0s_mask dwi_indices = np.arange(E.shape[0])[dwi_mask] b0_indices = np.arange(E.shape[0])[b0_mask] random.shuffle(dwi_indices) sub0 = dwi_indices[0::5] sub1 = dwi_indices[1::5] sub2 = dwi_indices[2::5] sub3 = dwi_indices[3::5] sub4 = dwi_indices[4::5] test0 = np.hstack((b0_indices, sub1, sub2, sub3, sub4)) test1 = np.hstack((b0_indices, sub0, sub2, sub3, sub4)) test2 = np.hstack((b0_indices, sub0, sub1, sub3, sub4)) test3 = np.hstack((b0_indices, sub0, sub1, sub2, sub4)) test4 = np.hstack((b0_indices, sub0, sub1, sub2, sub3)) cv_list = ( (sub0, test0), (sub1, test1), (sub2, test2), (sub3, test3), (sub4, test4), ) errorlist = np.zeros((5, 21)) errorlist[:, 0] = 100.0 optimal_alpha_sub = np.zeros(5) for i, (sub, test) in enumerate(cv_list): counter = 1 cv_old = errorlist[i, 0] cv_new = errorlist[i, 0] while cv_old >= cv_new and counter < weight_array.shape[0]: alpha = weight_array[counter] c = cvxpy.Variable(M.shape[1]) design_matrix = cvxpy.Constant(M[test]) @ c recovered_signal = cvxpy.Constant(M[sub]) @ c data = cvxpy.Constant(E[test]) objective = cvxpy.Minimize( cvxpy.sum_squares(design_matrix - data) + alpha * cvxpy.norm1(c) ) constraints = [] prob = cvxpy.Problem(objective, constraints) prob.solve(solver="CLARABEL", verbose=False) errorlist[i, counter] = np.mean( (E[sub] - np.asarray(recovered_signal.value).squeeze()) ** 2 ) cv_old = errorlist[i, counter - 1] cv_new = errorlist[i, counter] counter += 1 optimal_alpha_sub[i] = weight_array[counter - 1] optimal_alpha = optimal_alpha_sub.mean() return optimal_alpha @warning_for_keywords() def elastic_crossvalidation(b0s_mask, E, M, L, lopt, *, weight_array=None): """Cross-validation function to find the optimal weight of alpha for sparsity
regularization when Laplacian regularization is also used.""" if weight_array is None: weight_array = np.linspace(0, 0.2, 21) dwi_mask = ~b0s_mask b0_mask = b0s_mask dwi_indices = np.arange(E.shape[0])[dwi_mask] b0_indices = np.arange(E.shape[0])[b0_mask] random.shuffle(dwi_indices) sub0 = dwi_indices[0::5] sub1 = dwi_indices[1::5] sub2 = dwi_indices[2::5] sub3 = dwi_indices[3::5] sub4 = dwi_indices[4::5] test0 = np.hstack((b0_indices, sub1, sub2, sub3, sub4)) test1 = np.hstack((b0_indices, sub0, sub2, sub3, sub4)) test2 = np.hstack((b0_indices, sub0, sub1, sub3, sub4)) test3 = np.hstack((b0_indices, sub0, sub1, sub2, sub4)) test4 = np.hstack((b0_indices, sub0, sub1, sub2, sub3)) cv_list = ( (sub0, test0), (sub1, test1), (sub2, test2), (sub3, test3), (sub4, test4), ) errorlist = np.zeros((5, 21)) errorlist[:, 0] = 100.0 optimal_alpha_sub = np.zeros(5) for i, (sub, test) in enumerate(cv_list): counter = 1 cv_old = errorlist[i, 0] cv_new = errorlist[i, 0] c = cvxpy.Variable(M.shape[1]) design_matrix = cvxpy.Constant(M[test]) @ c recovered_signal = cvxpy.Constant(M[sub]) @ c data = cvxpy.Constant(E[test]) constraints = [] while cv_old >= cv_new and counter < weight_array.shape[0]: alpha = weight_array[counter] objective = cvxpy.Minimize( cvxpy.sum_squares(design_matrix - data) + alpha * cvxpy.norm1(c) + lopt * cvxpy.quad_form(c, L) ) prob = cvxpy.Problem(objective, constraints) prob.solve(solver="CLARABEL", verbose=False) errorlist[i, counter] = np.mean( (E[sub] - np.asarray(recovered_signal.value).squeeze()) ** 2 ) cv_old = errorlist[i, counter - 1] cv_new = errorlist[i, counter] counter += 1 optimal_alpha_sub[i] = weight_array[counter - 1] optimal_alpha = optimal_alpha_sub.mean() return optimal_alpha @warning_for_keywords() def visualise_gradient_table_G_Delta_rainbow( gtab, *, big_delta_start=None, big_delta_end=None, G_start=None, G_end=None, bval_isolines=np.r_[0, 250, 1000, 2500, 5000, 7500, 10000, 14000], alpha_shading=0.6, ): """This function visualizes a q-tau acquisition scheme as a function of gradient strength and pulse separation (big_delta). It represents every measurement at its G and big_delta position regardless of b-vector, with a background of b-value isolines for reference. It assumes there is only one unique pulse length (small_delta) in the acquisition scheme. Parameters ---------- gtab : GradientTable object constructed gradient table with big_delta and small_delta given as inputs. big_delta_start : float, optional minimum big_delta that is plotted in seconds big_delta_end : float, optional maximum big_delta that is plotted in seconds G_start : float, optional minimum gradient strength that is plotted in T/m G_end : float, optional maximum gradient strength that is plotted in T/m bval_isolines : array, optional array of bvalue isolines that are plotted in the background alpha_shading : float between [0-1], optional shading of the bvalue colors in the background """ Delta = gtab.big_delta # in seconds delta = gtab.small_delta # in seconds G = gtab.gradient_strength * 1e3 # in SI units T/m if len(np.unique(delta)) > 1: msg = "This acquisition has multiple small_delta values. " msg += "This visualization assumes there is only one small_delta."
raise ValueError(msg) if big_delta_start is None: big_delta_start = 0.005 if big_delta_end is None: big_delta_end = Delta.max() + 0.004 if G_start is None: G_start = 0.0 if G_end is None: G_end = G.max() + 0.05 Delta_ = np.linspace(big_delta_start, big_delta_end, 50) G_ = np.linspace(G_start, G_end, 50) Delta_grid, G_grid = np.meshgrid(Delta_, G_) dummy_bvecs = np.tile([0, 0, 1], (len(G_grid.ravel()), 1)) gtab_grid = gradient_table_from_gradient_strength_bvecs( G_grid.ravel() / 1e3, dummy_bvecs, Delta_grid.ravel(), delta[0] ) bvals_ = gtab_grid.bvals.reshape(G_grid.shape) plt.contourf( Delta_, G_, bvals_, levels=bval_isolines, cmap="rainbow", alpha=alpha_shading ) cb = plt.colorbar(spacing="proportional") cb.ax.tick_params(labelsize=16) plt.scatter(Delta, G, c="k", s=25) plt.xlim(big_delta_start, big_delta_end) plt.ylim(G_start, G_end) cb.set_label("b-value ($s$/$mm^2$)", fontsize=18) plt.xlabel(r"Pulse Separation $\Delta$ [sec]", fontsize=18) plt.ylabel("Gradient Strength [T/m]", fontsize=18) return None dipy-1.11.0/dipy/reconst/qti.py000066400000000000000000001065201476546756600163660ustar00rootroot00000000000000"""Classes and functions for fitting the covariance tensor model of q-space trajectory imaging (QTI) by :footcite:t:`Westin2016`. References ---------- .. footbibliography:: """ from warnings import warn import numpy as np from dipy.reconst.base import ReconstModel from dipy.reconst.dti import auto_attr from dipy.testing.decorators import warning_for_keywords from dipy.utils.optpkg import optional_package cp, have_cvxpy, _ = optional_package("cvxpy", min_version="1.4.1") # XXX Eventually to be replaced with `reconst.dti.lower_triangular` def from_3x3_to_6x1(T): """Convert symmetric 3 x 3 matrices into 6 x 1 vectors. Parameters ---------- T : numpy.ndarray An array of size (..., 3, 3). Returns ------- V : numpy.ndarray Converted vectors of size (..., 6, 1). Notes ----- The conversion of a matrix into a vector is defined as .. math:: \\mathbf{V} = \\begin{bmatrix} T_{11} & T_{22} & T_{33} & \\sqrt{2} T_{23} & \\sqrt{2} T_{13} & \\sqrt{2} T_{12} \\end{bmatrix}^T """ if T.shape[-2::] != (3, 3): raise ValueError("The shape of the input array must be (..., 3, 3).") if not np.all(np.isclose(T, np.swapaxes(T, -1, -2))): warn( "All matrices converted to Voigt notation are not symmetric.", stacklevel=2 ) C = np.sqrt(2) V = np.stack( ( T[..., 0, 0], T[..., 1, 1], T[..., 2, 2], C * T[..., 1, 2], C * T[..., 0, 2], C * T[..., 0, 1], ), axis=-1, )[..., np.newaxis] return V def from_6x1_to_3x3(V): """Convert 6 x 1 vectors into symmetric 3 x 3 matrices. Parameters ---------- V : numpy.ndarray An array of size (..., 6, 1). Returns ------- T : numpy.ndarray Converted matrices of size (..., 3, 3). Notes ----- The conversion of a matrix into a vector is defined as .. math:: \\mathbf{V} = \\begin{bmatrix} T_{11} & T_{22} & T_{33} & \\sqrt{2} T_{23} & \\sqrt{2} T_{13} & \\sqrt{2} T_{12} \\end{bmatrix}^T """ if V.shape[-2::] != (6, 1): raise ValueError("The shape of the input array must be (..., 6, 1).") C = 1 / np.sqrt(2) T = np.array( ( [V[..., 0, 0], C * V[..., 5, 0], C * V[..., 4, 0]], [C * V[..., 5, 0], V[..., 1, 0], C * V[..., 3, 0]], [C * V[..., 4, 0], C * V[..., 3, 0], V[..., 2, 0]], ) ) T = np.moveaxis(T, (0, 1), (-2, -1)) return T def from_6x6_to_21x1(T): """Convert symmetric 6 x 6 matrices into 21 x 1 vectors. Parameters ---------- T : numpy.ndarray An array of size (..., 6, 6). Returns ------- V : numpy.ndarray Converted vectors of size (..., 21, 1). 
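See Also -------- from_21x1_to_6x6 : The inverse conversion, recovering the 6 x 6 matrix.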
Notes ----- The conversion of a matrix into a vector is defined as .. math:: \\begin{matrix} \\mathbf{V} = & \\big[ T_{11} & T_{22} & T_{33} \\ & \\sqrt{2} T_{23} & \\sqrt{2} T_{13} & \\sqrt{2} T_{12} \\ & \\sqrt{2} T_{14} & \\sqrt{2} T_{15} & \\sqrt{2} T_{16} \\ & \\sqrt{2} T_{24} & \\sqrt{2} T_{25} & \\sqrt{2} T_{26} \\ & \\sqrt{2} T_{34} & \\sqrt{2} T_{35} & \\sqrt{2} T_{36} \\ & T_{44} & T_{55} & T_{66} \\ & \\sqrt{2} T_{45} & \\sqrt{2} T_{56} & \\sqrt{2} T_{46} \\big]^T \\end{matrix} """ if T.shape[-2::] != (6, 6): raise ValueError("The shape of the input array must be (..., 6, 6).") if not np.all(np.isclose(T, np.swapaxes(T, -1, -2), equal_nan=True)): warn( "All matrices converted to Voigt notation are not symmetric.", stacklevel=2 ) C = np.sqrt(2) V = np.stack( ( [ T[..., 0, 0], T[..., 1, 1], T[..., 2, 2], C * T[..., 1, 2], C * T[..., 0, 2], C * T[..., 0, 1], C * T[..., 0, 3], C * T[..., 0, 4], C * T[..., 0, 5], C * T[..., 1, 3], C * T[..., 1, 4], C * T[..., 1, 5], C * T[..., 2, 3], C * T[..., 2, 4], C * T[..., 2, 5], T[..., 3, 3], T[..., 4, 4], T[..., 5, 5], C * T[..., 3, 4], C * T[..., 4, 5], C * T[..., 3, 5], ] ), axis=-1, )[..., np.newaxis] return V def from_21x1_to_6x6(V): """Convert 21 x 1 vectors into symmetric 6 x 6 matrices. Parameters ---------- V : numpy.ndarray An array of size (..., 21, 1). Returns ------- T : numpy.ndarray Converted matrices of size (..., 6, 6). Notes ----- The conversion of a matrix into a vector is defined as .. math:: \\begin{matrix} \\mathbf{V} = & \\big[ T_{11} & T_{22} & T_{33} \\ & \\sqrt{2} T_{23} & \\sqrt{2} T_{13} & \\sqrt{2} T_{12} \\ & \\sqrt{2} T_{14} & \\sqrt{2} T_{15} & \\sqrt{2} T_{16} \\ & \\sqrt{2} T_{24} & \\sqrt{2} T_{25} & \\sqrt{2} T_{26} \\ & \\sqrt{2} T_{34} & \\sqrt{2} T_{35} & \\sqrt{2} T_{36} \\ & T_{44} & T_{55} & T_{66} \\ & \\sqrt{2} T_{45} & \\sqrt{2} T_{56} & \\sqrt{2} T_{46} \\big]^T \\end{matrix} """ if V.shape[-2::] != (21, 1): raise ValueError("The shape of the input array must be (..., 21, 1).") C = 1 / np.sqrt(2) T = np.array( ( [ V[..., 0, 0], C * V[..., 5, 0], C * V[..., 4, 0], C * V[..., 6, 0], C * V[..., 7, 0], C * V[..., 8, 0], ], [ C * V[..., 5, 0], V[..., 1, 0], C * V[..., 3, 0], C * V[..., 9, 0], C * V[..., 10, 0], C * V[..., 11, 0], ], [ C * V[..., 4, 0], C * V[..., 3, 0], V[..., 2, 0], C * V[..., 12, 0], C * V[..., 13, 0], C * V[..., 14, 0], ], [ C * V[..., 6, 0], C * V[..., 9, 0], C * V[..., 12, 0], V[..., 15, 0], C * V[..., 18, 0], C * V[..., 20, 0], ], [ C * V[..., 7, 0], C * V[..., 10, 0], C * V[..., 13, 0], C * V[..., 18, 0], V[..., 16, 0], C * V[..., 19, 0], ], [ C * V[..., 8, 0], C * V[..., 11, 0], C * V[..., 14, 0], C * V[..., 20, 0], C * V[..., 19, 0], V[..., 17, 0], ], ) ) T = np.moveaxis(T, (0, 1), (-2, -1)) return T def cvxpy_1x6_to_3x3(V): """Convert a 1 x 6 vector into a symmetric 3 x 3 matrix. Parameters ---------- V : numpy.ndarray An array of size (1, 6). Returns ------- T : cvxpy.bmat Converted matrix of size (3, 3). Notes ----- The conversion of a matrix into a vector is defined as .. math:: \\mathbf{V} = \\begin{bmatrix} T_{11} & T_{22} & T_{33} & \\sqrt{2} T_{23} & \\sqrt{2} T_{13} & \\sqrt{2} T_{12} \\end{bmatrix}^T """ if V.shape[0] == 6: V = V.T f = 1 / np.sqrt(2) T = cp.bmat( [ [V[0, 0], f * V[0, 5], f * V[0, 4]], [f * V[0, 5], V[0, 1], f * V[0, 3]], [f * V[0, 4], f * V[0, 3], V[0, 2]], ] ) return T def cvxpy_1x21_to_6x6(V): """Convert 1 x 21 vector into a symmetric 6 x 6 matrix. Parameters ---------- V : numpy.ndarray An array of size (1, 21). 
Returns ------- T : cvxpy.bmat Converted matrices of size (6, 6). Notes ----- The conversion of a matrix into a vector is defined as .. math:: \\begin{matrix} \\mathbf{V} = & \\big[ T_{11} & T_{22} & T_{33} \\ & \\sqrt{2} T_{23} & \\sqrt{2} T_{13} & \\sqrt{2} T_{12} \\ & \\sqrt{2} T_{14} & \\sqrt{2} T_{15} & \\sqrt{2} T_{16} \\ & \\sqrt{2} T_{24} & \\sqrt{2} T_{25} & \\sqrt{2} T_{26} \\ & \\sqrt{2} T_{34} & \\sqrt{2} T_{35} & \\sqrt{2} T_{36} \\ & T_{44} & T_{55} & T_{66} \\ & \\sqrt{2} T_{45} & \\sqrt{2} T_{56} & \\sqrt{2} T_{46} \\big]^T \\end{matrix} """ if V.shape[0] == 21: V = V.T f = 1 / np.sqrt(2) T = cp.bmat( [ [V[0, 0], f * V[0, 5], f * V[0, 4], f * V[0, 6], f * V[0, 7], f * V[0, 8]], [ f * V[0, 5], V[0, 1], f * V[0, 3], f * V[0, 9], f * V[0, 10], f * V[0, 11], ], [ f * V[0, 4], f * V[0, 3], V[0, 2], f * V[0, 12], f * V[0, 13], f * V[0, 14], ], [ f * V[0, 6], f * V[0, 9], f * V[0, 12], V[0, 15], f * V[0, 18], f * V[0, 20], ], [ f * V[0, 7], f * V[0, 10], f * V[0, 13], f * V[0, 18], V[0, 16], f * V[0, 19], ], [ f * V[0, 8], f * V[0, 11], f * V[0, 14], f * V[0, 20], f * V[0, 19], V[0, 17], ], ] ) return T # These tensors are used in the calculation of the QTI parameters e_iso = np.eye(3) / 3 E_iso = np.eye(6) / 3 E_bulk = from_3x3_to_6x1(e_iso) @ from_3x3_to_6x1(e_iso).T E_shear = E_iso - E_bulk E_tsym = E_bulk + 0.4 * E_shear def dtd_covariance(DTD): """Calculate covariance of a diffusion tensor distribution (DTD). Parameters ---------- DTD : numpy.ndarray Diffusion tensor distribution of shape (number of tensors, 3, 3) or (number of tensors, 6, 1). Returns ------- C : numpy.ndarray Covariance tensor of shape (6, 6). Notes ----- The covariance tensor is calculated according to the following equation and converted into a rank-2 tensor :footcite:p:`Westin2016`: .. math:: \\mathbb{C} = \\langle \\mathbf{D} \\otimes \\mathbf{D} \\rangle - \\langle \\mathbf{D} \\rangle \\otimes \\langle \\mathbf{D} \\rangle References ---------- .. footbibliography:: """ dims = DTD.shape if len(dims) != 3 or (dims[1:3] != (3, 3) and dims[1:3] != (6, 1)): raise ValueError( "The shape of DTD must be (number of tensors, 3, 3) or (number of " + "tensors, 6, 1)." ) if dims[1:3] == (3, 3): DTD = from_3x3_to_6x1(DTD) D = np.mean(DTD, axis=0) C = np.mean(DTD @ np.swapaxes(DTD, -2, -1), axis=0) - D @ np.swapaxes(D, -2, -1) return C @warning_for_keywords() def qti_signal(gtab, D, C, *, S0=1): """Generate signals using the covariance tensor signal representation. Parameters ---------- gtab : dipy.core.gradients.GradientTable Gradient table with b-tensors. D : numpy.ndarray Diffusion tensors of shape (..., 3, 3), (..., 6, 1), or (..., 6). C : numpy.ndarray Covariance tensors of shape (..., 6, 6), (..., 21, 1), or (..., 21). S0 : numpy.ndarray, optional Signal magnitudes without diffusion-weighting. Must be a single number or an array of same shape as D and C without the last two dimensions. Returns ------- S : numpy.ndarray Simulated signals. Notes ----- The signal is generated according to .. math:: S = S_0 \\exp \\left(- \\mathbf{b} : \\langle \\mathbf{D} \\rangle + \\frac{1}{2}(\\mathbf{b} \\otimes \\mathbf{b}) : \\mathbb{C} \\right) """ # Validate input and convert to Voigt notation if necessary if gtab.btens is None: raise ValueError("QTI requires b-tensors to be defined in the gradient table.") if D.shape[-2::] != (6, 1): if D.shape[-2::] == (3, 3): D = from_3x3_to_6x1(D) elif D.shape[-1] == 6: D = D[..., np.newaxis] else: raise ValueError( "The shape of D must be (..., 3, 3), (..., 6, 1) or (..., 6)." 
) if C.shape[-2::] != (21, 1): if C.shape[-2::] == (6, 6): C = from_6x6_to_21x1(C) elif C.shape[-1] == 21: C = C[..., np.newaxis] else: raise ValueError( "The shape of C must be (..., 6, 6), (..., 21, 1), or (..., 21)." ) if D.shape[0:-2] != C.shape[0:-2]: raise ValueError("The shapes of C and D are not compatible") if not isinstance(S0, (int, float)): if S0.shape != (1,) and S0.shape != D.shape[0:-2]: raise ValueError( "S0 must be a single number or an array of the same shape " + " compatible with D and C." ) # Generate signals S = np.zeros(D.shape[0:-2] + (gtab.btens.shape[0],)) for i, bten in enumerate(gtab.btens): b = from_3x3_to_6x1(bten) b_sq = from_6x6_to_21x1(b @ np.swapaxes(b, -2, -1)) S[..., i] = ( S0 * np.exp(-np.swapaxes(b, -2, -1) @ D + 0.5 * np.swapaxes(b_sq, -2, -1) @ C)[ ..., 0, 0 ] ) return S def design_matrix(btens): """Calculate the design matrix from the b-tensors. Parameters ---------- btens : numpy.ndarray An array of b-tensors of shape (number of acquisitions, 3, 3). Returns ------- X : numpy.ndarray Design matrix. Notes ----- The design matrix is generated according to .. math:: X = \\begin{pmatrix} 1 & -\\mathbf{b}_1^T & \\frac{1}{2}( \\mathbf{b}_1 \\otimes\\mathbf{b}_1)^T \\ \\vdots & \\vdots & \\vdots \\ 1 & -\\mathbf{b}_n^T & \\frac{1}{2}(\\mathbf{b}_n\\otimes \\mathbf{b}_n)^T \\end{pmatrix} """ X = np.zeros((btens.shape[0], 28)) for i, bten in enumerate(btens): b = from_3x3_to_6x1(bten) b_sq = from_6x6_to_21x1(b @ b.T) X[i] = np.concatenate(([1], (-b.T)[0, :], (0.5 * b_sq.T)[0, :])) return X @warning_for_keywords() def _ols_fit(data, mask, X, *, step=int(1e4)): """Estimate the model parameters using ordinary least squares. Parameters ---------- data : numpy.ndarray Array of shape (..., number of acquisitions). mask : numpy.ndarray Boolean array with the same shape as the data array of a single acquisition. X : numpy.ndarray Design matrix of shape (number of acquisitions, 28). step : int, optional The number of voxels over which the fit is calculated simultaneously. Returns ------- params : numpy.ndarray Array of shape (..., 28) containing the estimated model parameters. Element 0 is the natural logarithm of the estimated signal without diffusion-weighting, elements 1-6 are the estimated diffusion tensor elements in Voigt notation, and elements 7-27 are the estimated covariance tensor elements in Voigt notation. """ params = np.zeros((np.prod(mask.shape), 28)) * np.nan data_masked = data[mask] size = len(data_masked) X_inv = np.linalg.pinv(X.T @ X) # Independent of data if step >= size: # Fit over all data simultaneously S = np.log(data_masked)[..., np.newaxis] params_masked = (X_inv @ X.T @ S)[..., 0] else: # Iterate over data params_masked = np.zeros((size, 28)) for i in range(0, size, step): S = np.log(data_masked[i : i + step])[..., np.newaxis] params_masked[i : i + step] = (X_inv @ X.T @ S)[..., 0] params[np.where(mask.ravel())] = params_masked params = params.reshape((mask.shape + (28,))) return params @warning_for_keywords() def _wls_fit(data, mask, X, *, step=int(1e4)): """Estimate the model parameters using weighted least squares with the signal magnitudes as weights. Parameters ---------- data : numpy.ndarray Array of shape (..., number of acquisitions). mask : numpy.ndarray Array with the same shape as the data array of a single acquisition. X : numpy.ndarray Design matrix of shape (number of acquisitions, 28). step : int, optional The number of voxels over which the fit is calculated simultaneously. 
Returns ------- params : numpy.ndarray Array of shape (..., 28) containing the estimated model parameters. Element 0 is the natural logarithm of the estimated signal without diffusion-weighting, elements 1-6 are the estimated diffusion tensor elements in Voigt notation, and elements 7-27 are the estimated covariance tensor elements in Voigt notation. """ params = np.zeros((np.prod(mask.shape), 28)) * np.nan data_masked = data[mask] size = len(data_masked) if step >= size: # Fit over all data simultaneously S = np.log(data_masked)[..., np.newaxis] C = data_masked[:, np.newaxis, :] B = X.T * C A = np.linalg.pinv(B @ X) params_masked = (A @ B @ S)[..., 0] else: # Iterate over data params_masked = np.zeros((size, 28)) for i in range(0, size, step): S = np.log(data_masked[i : i + step])[..., np.newaxis] C = data_masked[i : i + step][:, np.newaxis, :] B = X.T * C A = np.linalg.pinv(B @ X) params_masked[i : i + step] = (A @ B @ S)[..., 0] params[np.where(mask.ravel())] = params_masked params = params.reshape((mask.shape + (28,))) return params def _sdpdc_fit(data, mask, X, cvxpy_solver): """Estimate the model parameters using Semidefinite Programming (SDP), while enforcing positivity constraints on the D and C tensors (SDPdc). See :footcite:p:`Herberthson2021` for further details about the method. Parameters ---------- data : numpy.ndarray Array of shape (..., number of acquisitions). mask : numpy.ndarray Array with the same shape as the data array of a single acquisition. X : numpy.ndarray Design matrix of shape (number of acquisitions, 28). cvxpy_solver: string, required The name of the SDP solver to be used. Default: 'SCS' Returns ------- params : numpy.ndarray Array of shape (..., 28) containing the estimated model parameters. Element 0 is the natural logarithm of the estimated signal without diffusion-weighting, elements 1-6 are the estimated diffusion tensor elements in Voigt notation, and elements 7-27 are the estimated covariance tensor elements in Voigt notation. References ---------- .. footbibliography:: """ if not have_cvxpy: raise ImportError("CVXPY package needed to enforce constraints") if cvxpy_solver not in cp.installed_solvers(): raise ValueError("The selected solver is not available") params = np.zeros((np.prod(mask.shape), 28)) * np.nan data_masked = data[mask] size, nvols = data_masked.shape scale = np.maximum(np.max(data_masked, axis=1, keepdims=True), 1) data_masked = data_masked / scale data_masked[data_masked < 0] = 0 log_data = np.log(data_masked) params_masked = np.zeros((size, 28)) x = cp.Variable((28, 1)) y = cp.Parameter((nvols, 1)) A = cp.Parameter((nvols, 28)) dc = cvxpy_1x6_to_3x3(x[1:7]) cc = cvxpy_1x21_to_6x6(x[7:]) constraints = [dc >> 0, cc >> 0] objective = cp.Minimize(cp.norm(A @ x - y)) prob = cp.Problem(objective, constraints) unconstrained = cp.Problem(objective) for i in range(0, size, 1): vox_data = data_masked[i : i + 1, :].T vox_log_data = log_data[i : i + 1, :].T vox_log_data[np.isinf(vox_log_data)] = 0 y.value = vox_data * vox_log_data A.value = vox_data * X try: prob.solve(solver=cvxpy_solver, verbose=False) m = x.value except Exception: msg = "Constrained optimization failed, attempting unconstrained" msg += " optimization." warn(msg, stacklevel=2) try: unconstrained.solve(solver=cvxpy_solver) m = x.value except Exception: msg = "Unconstrained optimization failed," msg += " returning zero array." 
warn(msg, stacklevel=2) m = np.zeros(x.shape) params_masked[i : i + 1, :] = m.T params_masked[:, 0] += np.log(scale[:, 0]) params[np.where(mask.ravel())] = params_masked params = params.reshape((mask.shape + (28,))) return params class QtiModel(ReconstModel): @warning_for_keywords() def __init__(self, gtab, *, fit_method="WLS", cvxpy_solver="SCS"): """Covariance tensor model of q-space trajectory imaging. See :footcite:t:`Westin2016` for further details about the model. Parameters ---------- gtab : dipy.core.gradients.GradientTable Gradient table with b-tensors. fit_method : str, optional Must be one of the following: - 'OLS' for ordinary least squares :func:`qti._ols_fit` - 'WLS' for weighted least squares :func:`qti._wls_fit` - 'SDPDc' for semidefinite programming with positivity constraints applied :footcite:p:`Herberthson2021` :func:`qti._sdpdc_fit` cvxpy_solver: str, optionals solver for the SDP formulation. default: 'SCS' References ---------- .. footbibliography:: """ ReconstModel.__init__(self, gtab) if self.gtab.btens is None: raise ValueError( "QTI requires b-tensors to be defined in the gradient table." ) self.X = design_matrix(self.gtab.btens) rank = np.linalg.matrix_rank(self.X.T @ self.X) if rank < 28: warn( "The combination of the b-tensor shapes, sizes, and " + "orientations does not enable all elements of the covariance " + f"tensor to be estimated (rank(X.T @ X) = {rank} < 28).", stacklevel=2, ) try: self.fit_method = common_fit_methods[fit_method] except KeyError as e: raise ValueError( f"Invalid value ({fit_method}) for 'fit_method'." + " Options: 'OLS', 'WLS', 'SDPdc'." ) from e self.cvxpy_solver = cvxpy_solver self.fit_method_name = fit_method @warning_for_keywords() def fit(self, data, *, mask=None): """Fit QTI to data. Parameters ---------- data : numpy.ndarray Array of shape (..., number of acquisitions). mask : numpy.ndarray, optional Array with the same shape as the data array of a single acquisition. Returns ------- qtifit : dipy.reconst.qti.QtiFit The fitted model. """ if mask is None: mask = np.ones(data.shape[:-1], dtype=bool) else: if mask.shape != data.shape[:-1]: raise ValueError("Mask is not the same shape as data.") mask = np.asarray(mask, dtype=bool) if self.fit_method_name == "SDPdc": params = self.fit_method(data, mask, self.X, self.cvxpy_solver) else: params = self.fit_method(data, mask, self.X) return QtiFit(params) def predict(self, params): """Generate signals from this model class instance and given parameters. Parameters ---------- params : numpy.ndarray Array of shape (..., 28) containing the model parameters. Element 0 is the natural logarithm of the signal without diffusion-weighting, elements 1-6 are the diffusion tensor elements in Voigt notation, and elements 7-27 are the covariance tensor elements in Voigt notation. Returns ------- S : numpy.ndarray Signals. """ S0 = np.exp(params[..., 0]) D = params[..., 1:7, np.newaxis] C = params[..., 7::, np.newaxis] S = qti_signal(self.gtab, D, C, S0=S0) return S class QtiFit: def __init__(self, params): """Fitted QTI model. Parameters ---------- params : numpy.ndarray Array of shape (..., 28) containing the model parameters. Element 0 is the natural logarithm of the signal without diffusion-weighting, elements 1-6 are the diffusion tensor elements in Voigt notation, and elements 7-27 are the covariance tensor elements in Voigt notation. """ self.params = params def predict(self, gtab): """Generate signals from this model fit and a given gradient table. 
Parameters ---------- gtab : dipy.core.gradients.GradientTable Gradient table with b-tensors. Returns ------- S : numpy.ndarray Signals. """ if gtab.btens is None: raise ValueError( "QTI requires b-tensors to be defined in the gradient table." ) S0 = self.S0_hat D = self.params[..., 1:7, np.newaxis] C = self.params[..., 7::, np.newaxis] S = qti_signal(gtab, D, C, S0=S0) return S @auto_attr def S0_hat(self): """Estimated signal without diffusion-weighting. Returns ------- S0 : numpy.ndarray """ S0 = np.exp(self.params[..., 0]) return S0 @auto_attr def md(self): """Mean diffusivity. Returns ------- md : numpy.ndarray Notes ----- Mean diffusivity is calculated as .. math:: \\text{MD} = \\langle \\mathbf{D} \\rangle : \\mathbf{E}_\\text{iso} """ md = np.matmul(self.params[..., np.newaxis, 1:7], from_3x3_to_6x1(e_iso))[ ..., 0, 0 ] return md @auto_attr def v_md(self): """Variance of microscopic mean diffusivities. Returns ------- v_md : numpy.ndarray Notes ----- Variance of microscopic mean diffusivities is calculated as .. math:: V_\\text{MD} = \\mathbb{C} : \\mathbb{E}_\\text{bulk} """ v_md = np.matmul(self.params[..., np.newaxis, 7::], from_6x6_to_21x1(E_bulk))[ ..., 0, 0 ] return v_md @auto_attr def v_shear(self): """Shear variance. Returns ------- v_shear : numpy.ndarray Notes ----- Shear variance is calculated as .. math:: V_\\text{shear} = \\mathbb{C} : \\mathbb{E}_\\text{shear} """ v_shear = np.matmul( self.params[..., np.newaxis, 7::], from_6x6_to_21x1(E_shear) )[..., 0, 0] return v_shear @auto_attr def v_iso(self): """Total isotropic variance. Returns ------- v_iso : numpy.ndarray Notes ----- Total isotropic variance is calculated as .. math:: V_\\text{iso} = \\mathbb{C} : \\mathbb{E}_\\text{iso} """ v_iso = np.matmul(self.params[..., np.newaxis, 7::], from_6x6_to_21x1(E_iso))[ ..., 0, 0 ] return v_iso @auto_attr def d_sq(self): """Diffusion tensor's outer product with itself. Returns ------- d_sq : numpy.ndarray """ d_sq = np.matmul( self.params[..., 1:7, np.newaxis], self.params[..., np.newaxis, 1:7] ) return d_sq @auto_attr def mean_d_sq(self): """Average of microscopic diffusion tensors' outer products with themselves. Returns ------- mean_d_sq : numpy.ndarray Notes ----- Average of microscopic diffusion tensors' outer products with themselves is calculated as .. math:: \\langle \\mathbf{D} \\otimes \\mathbf{D} \\rangle = \\mathbb{C} + \\langle \\mathbf{D} \\rangle \\otimes \\langle \\mathbf{D} \\rangle """ mean_d_sq = from_21x1_to_6x6(self.params[..., 7::, np.newaxis]) + self.d_sq return mean_d_sq @auto_attr def c_md(self): """Normalized variance of mean diffusivities. Returns ------- c_md : numpy.ndarray Notes ----- Normalized variance of microscopic mean diffusivities is calculated as .. math:: C_\\text{MD} = \\frac{\\mathbb{C} : \\mathbb{E}_\\text{bulk}} {\\langle \\mathbf{D} \\otimes \\mathbf{D} \\rangle : \\mathbb{E}_\\text{bulk}} """ c_md = ( self.v_md / np.matmul( np.swapaxes(from_6x6_to_21x1(self.mean_d_sq), -1, -2), from_6x6_to_21x1(E_bulk), )[..., 0, 0] ) return c_md @auto_attr def c_mu(self): """Normalized microscopic anisotropy. Returns ------- c_mu : numpy.ndarray Notes ----- Normalized microscopic anisotropy is calculated as .. 
math:: C_\\mu = \\frac{3}{2} \\frac{\\langle \\mathbf{D} \\otimes \\mathbf{D} \\rangle : \\mathbb{E}_\\text{shear}}{\\langle \\mathbf{D} \\otimes \\mathbf{D} \\rangle : \\mathbb{E}_\\text{iso}} """ c_mu = ( 1.5 * np.matmul( np.swapaxes(from_6x6_to_21x1(self.mean_d_sq), -1, -2), from_6x6_to_21x1(E_shear), ) / np.matmul( np.swapaxes(from_6x6_to_21x1(self.mean_d_sq), -1, -2), from_6x6_to_21x1(E_iso), ) )[..., 0, 0] return c_mu @auto_attr def ufa(self): """Microscopic fractional anisotropy. Returns ------- ufa : numpy.ndarray Notes ----- Microscopic fractional anisotropy is calculated as .. math:: \\mu\\text{FA} = \\sqrt{C_\\mu} """ ufa = np.sqrt(self.c_mu) return ufa @auto_attr def c_m(self): """Normalized macroscopic anisotropy. Returns ------- c_m : numpy.ndarray Notes ----- Normalized macroscopic anisotropy is calculated as .. math:: C_\\text{M} = \\frac{3}{2} \\frac{\\langle \\mathbf{D} \\rangle \\otimes \\langle \\mathbf{D} \\rangle : \\mathbb{E}_\\text{shear}} {\\langle \\mathbf{D} \\rangle \\otimes \\langle \\mathbf{D} \\rangle : \\mathbb{E}_\\text{iso}} """ c_m = ( 1.5 * np.matmul( np.swapaxes(from_6x6_to_21x1(self.d_sq), -1, -2), from_6x6_to_21x1(E_shear), ) / np.matmul( np.swapaxes(from_6x6_to_21x1(self.d_sq), -1, -2), from_6x6_to_21x1(E_iso), ) )[..., 0, 0] return c_m @auto_attr def fa(self): """Fractional anisotropy. Returns ------- fa : numpy.ndarray Notes ----- Fractional anisotropy is calculated as .. math:: \\text{FA} = \\sqrt{C_\\text{M}} """ fa = np.sqrt(self.c_m) return fa @auto_attr def c_c(self): """Microscopic orientation coherence. Returns ------- c_c : numpy.ndarray Notes ----- Microscopic orientation coherence is calculated as .. math:: C_c = \\frac{C_\\text{M}}{C_\\mu} """ c_c = self.c_m / self.c_mu return c_c @auto_attr def mk(self): """Mean kurtosis. Returns ------- mk : numpy.ndarray Notes ----- Mean kurtosis is calculated as .. math:: \\text{MK} = K_\\text{bulk} + K_\\text{shear} """ mk = self.k_bulk + self.k_shear return mk @auto_attr def k_bulk(self): """Bulk kurtosis. Returns ------- k_bulk : numpy.ndarray Notes ----- Bulk kurtosis is calculated as .. math:: K_\\text{bulk} = 3 \\frac{\\mathbb{C} : \\mathbb{E}_\\text{bulk}} {\\langle \\mathbf{D} \\rangle \\otimes \\langle \\mathbf{D} \\rangle : \\mathbb{E}_\\text{bulk}} """ k_bulk = ( 3 * np.matmul(self.params[..., np.newaxis, 7::], from_6x6_to_21x1(E_bulk)) / np.matmul( np.swapaxes(from_6x6_to_21x1(self.d_sq), -1, -2), from_6x6_to_21x1(E_bulk), ) )[..., 0, 0] return k_bulk @auto_attr def k_shear(self): """Shear kurtosis. Returns ------- k_shear : numpy.ndarray Notes ----- Shear kurtosis is calculated as .. math:: K_\\text{shear} = \\frac{6}{5} \\frac{\\mathbb{C} : \\mathbb{E}_\\text{shear}}{\\langle \\mathbf{D} \\rangle \\otimes \\langle \\mathbf{D} \\rangle : \\mathbb{E}_\\text{bulk}} """ k_shear = ( 6 / 5 * np.matmul(self.params[..., np.newaxis, 7::], from_6x6_to_21x1(E_shear)) / np.matmul( np.swapaxes(from_6x6_to_21x1(self.d_sq), -1, -2), from_6x6_to_21x1(E_bulk), ) )[..., 0, 0] return k_shear @auto_attr def k_mu(self): """Microscopic kurtosis. Returns ------- k_mu : numpy.ndarray Notes ----- Microscopic kurtosis is calculated as .. 
math:: K_\\mu = \\frac{6}{5} \\frac{\\langle \\mathbf{D} \\otimes \\mathbf{D} \\rangle : \\mathbb{E}_\\text{shear}}{\\langle \\mathbf{D} \\rangle \\otimes \\langle \\mathbf{D} \\rangle : \\mathbb{E}_\\text{bulk}} """ k_mu = ( 6 / 5 * np.matmul( np.swapaxes(from_6x6_to_21x1(self.mean_d_sq), -1, -2), from_6x6_to_21x1(E_shear), ) / np.matmul( np.swapaxes(from_6x6_to_21x1(self.d_sq), -1, -2), from_6x6_to_21x1(E_bulk), ) )[..., 0, 0] return k_mu common_fit_methods = {"OLS": _ols_fit, "WLS": _wls_fit, "SDPdc": _sdpdc_fit} dipy-1.11.0/dipy/reconst/quick_squash.pyx000066400000000000000000000100771476546756600204620ustar00rootroot00000000000000""" Detect common dtype across object array """ from functools import reduce cimport numpy as cnp cimport cython import numpy as np cdef enum: SCALAR, ARRAY SCALAR_TYPES = np.ScalarType @cython.boundscheck(False) @cython.wraparound(False) def quick_squash(obj_arr, mask=None, fill=0): """Try and make a standard array from an object array This function takes an object array and attempts to convert it to a more useful dtype. If array can be converted to a better dtype, Nones are replaced by `fill`. To make the behaviour of this function more clear, here are the most common cases: 1. `obj_arr` is an array of scalars of type `T`. Returns an array like `obj_arr.astype(T)` 2. `obj_arr` is an array of arrays. All items in `obj_arr` have the same shape ``S``. Returns an array with shape ``obj_arr.shape + S`` 3. `obj_arr` is an array of arrays of different shapes. Returns `obj_arr`. 4. Items in `obj_arr` are not ndarrays or scalars. Returns `obj_arr`. Parameters ---------- obj_arr : array, dtype=object The array to be converted. mask : array, dtype=bool, optional mask is nonzero where `obj_arr` has Nones. fill : number, optional Nones are replaced by `fill`. 
Returns ------- result : array Examples -------- >>> arr = np.empty(3, dtype=object) >>> arr.fill(2) >>> quick_squash(arr) array([2, 2, 2]) >>> arr[0] = None >>> quick_squash(arr) array([0, 2, 2]) >>> arr.fill(np.ones(2)) >>> r = quick_squash(arr) >>> r.shape (3, 2) >>> r.dtype dtype('float64') """ cdef: cnp.npy_intp i, j, N, dtypes_i object [:] flat_obj char [:] flat_mask cnp.dtype [:] dtypes int have_mask = not mask is None int search_for cnp.ndarray result cnp.dtype dtype, last_dtype object common_shape if have_mask: flat_mask = np.array(mask.reshape(-1), dtype=np.int8) N = obj_arr.size dtypes = np.empty((N,), dtype=object) flat_obj = obj_arr.reshape((-1)) # Find first valid value for i in range(N): e = flat_obj[i] if ((have_mask and flat_mask[i] == 0) or (not have_mask and e is None)): continue t = type(e) if issubclass(t, np.generic) or t in SCALAR_TYPES: search_for = SCALAR common_shape = () dtype = np.dtype(t) break elif t == cnp.ndarray: search_for = ARRAY common_shape = e.shape dtype = e.dtype break else: # something other than scalar or array return obj_arr else: # Nothing outside mask / all None return obj_arr # Check rest of values to confirm common type / shape, and collect dtypes last_dtype = dtype dtypes[0] = dtype dtypes_i = 1 for j in range(i+1, N): e = flat_obj[j] if ((have_mask and flat_mask[j] == 0) or (not have_mask and e is None)): continue t = type(e) if search_for == SCALAR: if not issubclass(t, np.generic) and not t in SCALAR_TYPES: return obj_arr dtype = np.dtype(t) else: # search_for == ARRAY: if not t == cnp.ndarray: return obj_arr if not e.shape == common_shape: return obj_arr dtype = e.dtype if dtype != last_dtype: last_dtype = dtype dtypes[dtypes_i] = dtype dtypes_i += 1 # Find common dtype unique_dtypes = set(dtypes[:dtypes_i]) tiny_arrs = [np.zeros((1,), dtype=dt) for dt in unique_dtypes] dtype = reduce(np.add, tiny_arrs).dtype # Create and fill output array result = np.empty((N,) + common_shape, dtype=dtype) for i in range(N): e = flat_obj[i] if ((have_mask and flat_mask[i] == 0) or (not have_mask and e is None)): result[i] = fill else: result[i] = e return result.reshape(obj_arr.shape + common_shape) dipy-1.11.0/dipy/reconst/recspeed.pxd000066400000000000000000000011441476546756600175220ustar00rootroot00000000000000cimport cython cimport numpy as cnp cdef void remove_similar_vertices_c( double[:, :] vertices, double theta, int remove_antipodal, int return_mapping, int return_index, double[:, :] unique_vertices, cnp.uint16_t[:] mapping, cnp.uint16_t[:] index, cnp.uint16_t* n_unique ) noexcept nogil cdef cnp.npy_intp search_descending_c( cython.floating* arr, cnp.npy_intp size, double relative_threshold) noexcept nogil cdef long local_maxima_c( double[:] odf, cnp.uint16_t[:, :] edges, double[::1] out_values, cnp.npy_intp[::1] out_indices) noexcept nogildipy-1.11.0/dipy/reconst/recspeed.pyx000066400000000000000000000501231476546756600175500ustar00rootroot00000000000000# Emacs should think this is a -*- python -*- file """ Optimized routines for creating voxel diffusion models """ # cython: profile=True # cython: embedsignature=True cimport cython import numpy as np cimport numpy as cnp from libc.math cimport cos, fabs, M_PI from libc.stdlib cimport malloc, free from libc.string cimport memset, memcpy from dipy.utils.fast_numpy cimport take cdef extern from "dpy_math.h" nogil: double floor(double x) double fabs(double x) double cos(double x) double sin(double x) float acos(float x ) double sqrt(double x) double DPY_PI # initialize numpy runtime 
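# (cnp.import_array() initializes NumPy's C-API for this module; calling any
#  PyArray_* function before it runs would crash the extension)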
cnp.import_array()

# numpy pointers
cdef inline float* asfp(cnp.ndarray pt):
    return <float *> cnp.PyArray_DATA(pt)

cdef inline double* asdp(cnp.ndarray pt):
    return <double *> cnp.PyArray_DATA(pt)

@cython.boundscheck(False)
@cython.wraparound(False)
cdef void remove_similar_vertices_c(
    double[:, :] vertices,
    double theta,
    int remove_antipodal,
    int return_mapping,
    int return_index,
    double[:, :] unique_vertices,
    cnp.uint16_t[:] mapping,
    cnp.uint16_t[:] index,
    cnp.uint16_t* n_unique
) noexcept nogil:
    """ Optimized Cython version to remove vertices that are less than
    `theta` degrees from any other.
    """
    cdef:
        int n = vertices.shape[0]
        int i, j
        int pass_all
        double a, b, c, sim, cos_similarity

    cos_similarity = cos(M_PI / 180 * theta)
    n_unique[0] = 0
    for i in range(n):
        pass_all = 1
        a = vertices[i, 0]
        b = vertices[i, 1]
        c = vertices[i, 2]
        for j in range(n_unique[0]):
            sim = (a * unique_vertices[j, 0] +
                   b * unique_vertices[j, 1] +
                   c * unique_vertices[j, 2])
            if remove_antipodal:
                sim = fabs(sim)
            if sim > cos_similarity:
                pass_all = 0
                if return_mapping:
                    mapping[i] = j
                break
        if pass_all:
            unique_vertices[n_unique[0], 0] = a
            unique_vertices[n_unique[0], 1] = b
            unique_vertices[n_unique[0], 2] = c
            if return_mapping:
                mapping[i] = n_unique[0]
            if return_index:
                index[n_unique[0]] = i
            n_unique[0] += 1


def remove_similar_vertices(
    cnp.float64_t[:, ::1] vertices,
    double theta,
    bint return_mapping=False,
    bint return_index=False,
    bint remove_antipodal=True
):
    """Remove vertices that are less than `theta` degrees from any other

    Returns vertices that are at least theta degrees from any other vertex.
    Vertex v and -v are considered the same so if v and -v are both in
    `vertices` only one is kept. Also if v and w are both in vertices, w must
    be separated by theta degrees from both v and -v to be unique. To disable
    this, set `remove_antipodal` to False to keep both directions.

    Parameters
    ----------
    vertices : (N, 3) ndarray
        N unit vectors.
    theta : float
        The minimum separation between vertices in degrees.
    return_mapping : {False, True}, optional
        If True, return `mapping` as well as `vertices` and maybe `indices`
        (see below).
    return_index : {False, True}, optional
        If True, return `indices` as well as `vertices` and maybe `mapping`
        (see below).
    remove_antipodal : {False, True}, optional
        If True, v and -v are considered equal, and only one will be kept.

    Returns
    -------
    unique_vertices : (M, 3) ndarray
        Vertices sufficiently separated from one another.
    mapping : (N,) ndarray
        For each element ``vertices[i]`` ($i \in 0..N-1$), the index $j$ to a
        vertex in `unique_vertices` that is less than `theta` degrees from
        ``vertices[i]``. Only returned if `return_mapping` is True.
    indices : (M,) ndarray
        `indices` gives the reverse of `mapping`. For each element
        ``unique_vertices[j]`` ($j \in 0..M-1$), the index $i$ to a vertex in
        `vertices` that is less than `theta` degrees from
        ``unique_vertices[j]``. If there is more than one element of
        `vertices` that is less than theta degrees from `unique_vertices[j]`,
        return the first (lowest index) matching value. Only returned if
        `return_index` is True.
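
    Examples
    --------
    A small illustrative sketch (vertices chosen for this example): with
    ``theta=30``, a vertex about 8 degrees from an already-kept vertex is
    dropped, while an orthogonal one is kept.

    >>> import numpy as np
    >>> verts = np.array([[1.0, 0.0, 0.0],
    ...                   [0.99, 0.14, 0.0],
    ...                   [0.0, 1.0, 0.0]])
    >>> remove_similar_vertices(verts, 30).shape
    (2, 3)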
""" if vertices.shape[1] != 3: raise ValueError('Vertices should be 2D with second dim length 3') cdef int n = vertices.shape[0] if n >= 2**16: raise ValueError("Too many vertices") cdef cnp.float64_t[:, ::1] unique_vertices = np.empty((n, 3), dtype=np.float64) cdef cnp.uint16_t[::1] mapping = None cdef cnp.uint16_t[::1] index = None cdef cnp.uint16_t n_unique = 0 if return_mapping: mapping = np.empty(n, dtype=np.uint16) if return_index: index = np.empty(n, dtype=np.uint16) # Call the optimized Cython function remove_similar_vertices_c( vertices, theta, remove_antipodal, return_mapping, return_index, unique_vertices, mapping, index, &n_unique ) # Prepare the outputs verts = np.asarray(unique_vertices[:n_unique]).copy() if not return_mapping and not return_index: return verts out = [verts] if return_mapping: out.append(np.asarray(mapping)) if return_index: out.append(np.asarray(index[:n_unique]).copy()) return out @cython.boundscheck(False) @cython.wraparound(False) cdef cnp.npy_intp search_descending_c(cython.floating* arr, cnp.npy_intp size, double relative_threshold) noexcept nogil: """ Optimized Cython version of the search_descending function. Parameters ---------- arr : floating* 1D contiguous array assumed to be sorted in descending order. size : cnp.npy_intp Number of elements in the array. relative_threshold : double Threshold factor to determine the cutoff index. Returns ------- cnp.npy_intp Largest index `i` such that all(arr[:i] >= T), where T = arr[0] * relative_threshold. """ cdef: cnp.npy_intp left = 0 cnp.npy_intp right = size cnp.npy_intp mid double threshold # Handle edge case of empty array if right == 0: return 0 threshold = relative_threshold * arr[0] # Binary search for the threshold while left != right: mid = (left + right) // 2 if arr[mid] >= threshold: left = mid + 1 else: right = mid return left @cython.boundscheck(False) @cython.wraparound(False) def search_descending(cython.floating[::1] a, double relative_threshold): """`i` in descending array `a` so `a[i] < a[0] * relative_threshold` Call ``T = a[0] * relative_threshold``. Return value `i` will be the smallest index in the descending array `a` such that ``a[i] < T``. Equivalently, `i` will be the largest index such that ``all(a[:i] >= T)``. If all values in `a` are >= T, return the length of array `a`. Parameters ---------- a : ndarray, ndim=1, c-contiguous Array to be searched. We assume `a` is in descending order. relative_threshold : float Applied threshold will be ``T`` with ``T = a[0] * relative_threshold``. Returns ------- i : np.intp If ``T = a[0] * relative_threshold`` then `i` will be the largest index such that ``all(a[:i] >= T)``. If all values in `a` are >= T then `i` will be `len(a)`. Examples -------- >>> a = np.arange(10, 0, -1, dtype=float) >>> np.allclose(a, np.array([10., 9., 8., 7., 6., 5., 4., 3., 2., 1.])) True >>> search_descending(a, 0.5) 6 >>> np.allclose(a < 10 * 0.5, np.array([False, False, False, False, False, ... 
False, True, True, True, True]))
    True
    >>> search_descending(a, 1)
    1
    >>> search_descending(a, 2)
    0
    >>> search_descending(a, 0)
    10

    """
    return search_descending_c(&a[0], a.shape[0], relative_threshold)


@cython.wraparound(False)
@cython.boundscheck(False)
cdef long local_maxima_c(double[:] odf, cnp.uint16_t[:, :] edges,
                         double[::1] out_values,
                         cnp.npy_intp[::1] out_indices) noexcept nogil:
    cdef:
        long count
        cnp.npy_intp* wpeak = <cnp.npy_intp *> malloc(
            odf.shape[0] * sizeof(cnp.npy_intp))

    if not wpeak:
        return -3  # Memory allocation failed
    memset(wpeak, 0, odf.shape[0] * sizeof(cnp.npy_intp))
    count = _compare_neighbors(odf, edges, wpeak)
    if count < 0:
        free(wpeak)
        return count

    memcpy(&out_indices[0], wpeak, count * sizeof(cnp.npy_intp))
    take(&odf[0], &out_indices[0], count, &out_values[0])
    _cosort(out_values, out_indices)
    free(wpeak)
    return count


@cython.wraparound(False)
@cython.boundscheck(False)
def local_maxima(double[:] odf, cnp.uint16_t[:, :] edges):
    """Local maxima of a function evaluated on a discrete set of points.

    If a function is evaluated on some set of points where each pair of
    neighboring points is an edge in edges, find the local maxima.

    Parameters
    ----------
    odf : array, 1d, dtype=double
        The function evaluated on a set of discrete points.
    edges : array (N, 2)
        The set of neighbor relations between the points. Every edge, i.e.
        `edges[i, :]`, is a pair of neighboring points.

    Returns
    -------
    peak_values : ndarray
        Value of odf at a maximum point. Peak values are sorted in
        descending order.
    peak_indices : ndarray
        Indices of maximum points. Sorted in the same order as `peak_values`
        so `odf[peak_indices[i]] == peak_values[i]`.

    Notes
    -----
    A point is a local maximum if it is > at least one neighbor and >= all
    neighbors. If no points meet the above criteria, 1 maximum is returned
    such that `odf[maximum] == max(odf)`.

    See Also
    --------
    dipy.core.sphere
    """
    cdef:
        cnp.ndarray[cnp.npy_intp] wpeak
        double[::1] out_values
        cnp.npy_intp[::1] out_indices

    out_values = np.zeros(odf.shape[0], dtype=float)
    out_indices = np.zeros(odf.shape[0], dtype=np.intp)

    count = local_maxima_c(odf, edges, out_values, out_indices)
    if count == -1:
        raise IndexError("Values in edges must be < len(odf)")
    elif count == -2:
        raise ValueError("odf cannot have NaNs")
    elif count == -3:
        raise MemoryError("Memory allocation failed")

    # Wrap the pointers as NumPy arrays
    values = np.asarray(out_values[:count])
    indices = np.asarray(out_indices[:count])
    return values, indices


@cython.wraparound(False)
@cython.boundscheck(False)
cdef void _cosort(double[::1] A, cnp.npy_intp[::1] B) noexcept nogil:
    """Sorts `A` in-place and applies the same reordering to `B`"""
    cdef:
        cnp.npy_intp n = A.shape[0]
        cnp.npy_intp hole
        double insert_A
        long insert_B

    for i in range(1, n):
        insert_A = A[i]
        insert_B = B[i]
        hole = i
        while hole > 0 and insert_A > A[hole - 1]:
            A[hole] = A[hole - 1]
            B[hole] = B[hole - 1]
            hole -= 1
        A[hole] = insert_A
        B[hole] = insert_B


@cython.wraparound(False)
@cython.boundscheck(False)
cdef long _compare_neighbors(double[:] odf, cnp.uint16_t[:, :] edges,
                             cnp.npy_intp *wpeak_ptr) noexcept nogil:
    """Compares every pair of points in edges

    Parameters
    ----------
    odf : array of double
        values of points on sphere.
    edges : array of uint16
        neighbor relationships on sphere. Every set of neighbors on the
        sphere should be an edge.
    wpeak_ptr : pointer
        pointer to a block of memory which will be updated with the result
        of the comparisons. This block of memory must be large enough to hold
        len(odf) longs.
The first `count` elements of wpeak will be updated with the indices of the peaks. Returns ------- count : long Number of maxima in odf. A value < 0 indicates an error: -1 : value in edges too large, >= than len(odf) -2 : odf contains nans """ cdef: cnp.npy_intp lenedges = edges.shape[0] cnp.npy_intp lenodf = odf.shape[0] cnp.npy_intp i cnp.uint16_t find0, find1 double odf0, odf1 long count = 0 for i in range(lenedges): find0 = edges[i, 0] find1 = edges[i, 1] if find0 >= lenodf or find1 >= lenodf: count = -1 break odf0 = odf[find0] odf1 = odf[find1] """ Here `wpeak_ptr` is used as an indicator array that can take one of three values. If `wpeak_ptr[i]` is: * -1 : point i of the sphere is smaller than at least one neighbor. * 0 : point i is equal to all its neighbors. * 1 : point i is > at least one neighbor and >= all its neighbors. Each iteration of the loop is a comparison between neighboring points (the two point of an edge). At each iteration we update wpeak_ptr in the following way:: wpeak_ptr[smaller_point] = -1 if wpeak_ptr[larger_point] == 0: wpeak_ptr[larger_point] = 1 If the two points are equal, wpeak is left unchanged. """ if odf0 < odf1: wpeak_ptr[find0] = -1 wpeak_ptr[find1] |= 1 elif odf0 > odf1: wpeak_ptr[find0] |= 1 wpeak_ptr[find1] = -1 elif (odf0 != odf0) or (odf1 != odf1): count = -2 break if count < 0: return count # Count the number of peaks and use first count elements of wpeak_ptr to # hold indices of those peaks for i in range(lenodf): if wpeak_ptr[i] > 0: wpeak_ptr[count] = i count += 1 return count @cython.boundscheck(False) @cython.wraparound(False) def le_to_odf(cnp.ndarray[double, ndim=1] odf, cnp.ndarray[double, ndim=1] LEs, cnp.ndarray[double, ndim=1] radius, int odfn, int radiusn, int anglesn): """odf for interpolated Laplacian normalized signal """ cdef int m, i, j with nogil: for m in range(odfn): for i in range(radiusn): for j in range(anglesn): odf[m]=odf[m]-LEs[(m*radiusn+i)*anglesn+j]*radius[i] return @cython.boundscheck(False) @cython.wraparound(False) def sum_on_blocks_1d(cnp.ndarray[double, ndim=1] arr, cnp.ndarray[long, ndim=1] blocks, cnp.ndarray[double, ndim=1] out,int outn): """Summations on blocks of 1d array """ cdef: int m,i,j double blocksum with nogil: j=0 for m in range(outn): blocksum=0 for i in range(j,j+blocks[m]): blocksum+=arr[i] out[m]=blocksum j+=blocks[m] return def argmax_from_adj(vals, vertex_inds, adj_inds): """Indices of local maxima from `vals` given adjacent points Parameters ---------- vals : (N,) array, dtype np.float64 values at all vertices referred to in either of `vertex_inds` or `adj_inds`' vertex_inds : (V,) array indices into `vals` giving vertices that may be local maxima. adj_inds : sequence For every vertex in ``vertex_inds``, the indices (into `vals`) of the neighboring points Returns ------- inds : (M,) array Indices into `vals` giving local maxima of vals, given topology from `adj_inds`, and restrictions from `vertex_inds`. Inds are returned sorted by value at that index - i.e. smallest value (at index) first. 
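
    Examples
    --------
    A small sketch on a 4-point ring where each point neighbors the two
    adjacent points (values and adjacency are illustrative):

    >>> import numpy as np
    >>> vals = np.array([1., 3., 2., 0.])
    >>> vertinds = np.array([0, 1, 2, 3], dtype=np.uint32)
    >>> adj = [[1, 3], [0, 2], [1, 3], [2, 0]]
    >>> argmax_from_adj(vals, vertinds, adj)
    array([1], dtype=uint32)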
""" cvals, cvertinds = proc_reco_args(vals, vertex_inds) cadj_counts, cadj_inds = adj_to_countarrs(adj_inds) return argmax_from_countarrs(cvals, cvertinds, cadj_counts, cadj_inds) def proc_reco_args(vals, vertinds): vals = np.ascontiguousarray(vals.astype(float)) vertinds = np.ascontiguousarray(vertinds.astype(np.uint32)) return vals, vertinds def adj_to_countarrs(adj_inds): """Convert adjacency sequence to counts and flattened indices We use this to provide expected input to ``argmax_from_countarrs`` Parameters ---------- adj_indices : sequence length V sequence of sequences, where sequence ``i`` contains the neighbors of a particular vertex. Returns ------- counts : (V,) array Number of neighbors for each vertex adj_inds : (n,) array flat array containing `adj_indices` unrolled as a vector """ counts = [] all_inds = [] for verts in adj_inds: v = list(verts) all_inds += v counts.append(len(v)) adj_inds = np.array(all_inds, dtype=np.uint32) counts = np.array(counts, dtype=np.uint32) return counts, adj_inds # prefetch argsort for small speedup cdef object argsort = np.argsort def argmax_from_countarrs(cnp.ndarray vals, cnp.ndarray vertinds, cnp.ndarray adj_counts, cnp.ndarray adj_inds): """Indices of local maxima from `vals` from count, array neighbors Parameters ---------- vals : (N,) array, dtype float values at all vertices referred to in either of `vertex_inds` or `adj_inds`' vertinds : (V,) array, dtype uint32 indices into `vals` giving vertices that may be local maxima. adj_counts : (V,) array, dtype uint32 For every vertex ``i`` in ``vertex_inds``, the number of neighbors for vertex ``i`` adj_inds : (P,) array, dtype uint32 Indices for neighbors for each point. ``P=sum(adj_counts)`` Returns ------- inds : (M,) array Indices into `vals` giving local maxima of vals, given topology from `adj_counts` and `adj_inds`, and restrictions from `vertex_inds`. Inds are returned sorted by value at that index - i.e. smallest value (at index) first. 
""" cdef: cnp.ndarray[cnp.float64_t, ndim=1] cvals = vals cnp.ndarray[cnp.uint32_t, ndim=1] cvertinds = vertinds cnp.ndarray[cnp.uint32_t, ndim=1] cadj_counts = adj_counts cnp.ndarray[cnp.uint32_t, ndim=1] cadj_inds = adj_inds # temporary arrays for storing maxes cnp.ndarray[cnp.float64_t, ndim=1] maxes = vals.copy() cnp.ndarray[cnp.uint32_t, ndim=1] maxinds = vertinds.copy() cnp.npy_intp i, j, V, C, n_maxes=0, adj_size, adj_pos=0 int is_max cnp.float64_t *vals_ptr double val cnp.uint32_t vert_ind, ind cnp.uint32_t *vertinds_ptr cnp.uint32_t *counts_ptr cnp.uint32_t *adj_ptr cnp.uint32_t vals_size, vert_size if not (cnp.PyArray_ISCONTIGUOUS(cvals) and cnp.PyArray_ISCONTIGUOUS(cvertinds) and cnp.PyArray_ISCONTIGUOUS(cadj_counts) and cnp.PyArray_ISCONTIGUOUS(cadj_inds)): raise ValueError('Need contiguous arrays as input') vals_size = cnp.PyArray_DIM(cvals, 0) vals_ptr = cnp.PyArray_DATA(cvals) vertinds_ptr = cnp.PyArray_DATA(cvertinds) adj_ptr = cnp.PyArray_DATA(cadj_inds) counts_ptr = cnp.PyArray_DATA(cadj_counts) V = cnp.PyArray_DIM(cadj_counts, 0) adj_size = cnp.PyArray_DIM(cadj_inds, 0) if cnp.PyArray_DIM(cvertinds, 0) < V: raise ValueError('Too few indices for adj arrays') for i in range(V): vert_ind = vertinds_ptr[i] if vert_ind >= vals_size: raise IndexError('Overshoot on vals') val = vals_ptr[vert_ind] C = counts_ptr[i] # check for overshoot adj_pos += C if adj_pos > adj_size: raise IndexError('Overshoot on adj_inds array') is_max = 1 for j in range(C): ind = adj_ptr[j] if ind >= vals_size: raise IndexError('Overshoot on vals') if val <= vals_ptr[ind]: is_max = 0 break if is_max: maxinds[n_maxes] = vert_ind maxes[n_maxes] = val n_maxes +=1 adj_ptr += C if n_maxes == 0: return np.array([]) # fancy indexing always produces a copy return maxinds[argsort(maxes[:n_maxes])] dipy-1.11.0/dipy/reconst/rumba.py000066400000000000000000001127421476546756600167020ustar00rootroot00000000000000"""Robust and Unbiased Model-BAsed Spherical Deconvolution (RUMBA-SD)""" import logging import warnings import numpy as np from dipy.core.geometry import vec2vec_rotmat from dipy.core.gradients import get_bval_indices, gradient_table, unique_bvals_tolerance from dipy.core.onetime import auto_attr from dipy.core.sphere import Sphere from dipy.data import get_sphere from dipy.reconst.csdeconv import AxSymShResponse from dipy.reconst.odf import OdfFit, OdfModel from dipy.reconst.shm import lazy_index, normalize_data from dipy.segment.mask import bounding_box, crop from dipy.sims.voxel import all_tensor_evecs, single_tensor from dipy.testing.decorators import warning_for_keywords # Machine precision for numerical stability in division _EPS = np.finfo(float).eps logger = logging.getLogger(__name__) class RumbaSDModel(OdfModel): @warning_for_keywords() def __init__( self, gtab, *, wm_response=(1.7e-3, 0.2e-3, 0.2e-3), gm_response=0.8e-3, csf_response=3.0e-3, n_iter=600, recon_type="smf", n_coils=1, R=1, voxelwise=True, use_tv=False, sphere=None, verbose=False, ): """ Robust and Unbiased Model-BAsed Spherical Deconvolution (RUMBA-SD). RUMBA-SD :footcite:p:`CanalesRodriguez2015` is a modification of the Richardson-Lucy algorithm accounting for Rician and Noncentral Chi noise distributions, which more accurately represent MRI noise. Computes a maximum likelihood estimation of the fiber orientation density function (fODF) at each voxel. Includes white matter compartments alongside optional GM and CSF compartments to account for partial volume effects. This fit can be performed voxelwise or globally. 
The global fit will proceed more quickly than the voxelwise fit provided that the computer has adequate RAM (>= 16 GB should be sufficient for most datasets). Kernel for deconvolution constructed using a priori knowledge of white matter response function, as well as the mean diffusivity of GM and/or CSF. RUMBA-SD is robust against impulse response imprecision, and thus the default diffusivity values are often adequate :footcite:p:`DellAcqua2007`. Parameters ---------- gtab : GradientTable Gradient table. wm_response : 1d ndarray or 2d ndarray or AxSymShResponse, optional Tensor eigenvalues as a (3,) ndarray, multishell eigenvalues as a (len(unique_bvals_tolerance(gtab.bvals))-1, 3) ndarray in order of smallest to largest b-value, or an AxSymShResponse. Default: np.array([1.7e-3, 0.2e-3, 0.2e-3]) gm_response : float, optional Mean diffusivity for GM compartment. If `None`, then grey matter volume fraction is not computed. Default: 0.8e-3 csf_response : float, optional Mean diffusivity for CSF compartment. If `None`, then CSF volume fraction is not computed. Default: 3.0e-3 n_iter : int, optional Number of iterations for fODF estimation. Must be a positive int. Default: 600 recon_type : {'smf', 'sos'}, optional MRI reconstruction method: spatial matched filter (SMF) or sum-of-squares (SoS). SMF reconstruction generates Rician noise while SoS reconstruction generates Noncentral Chi noise. Default: 'smf' n_coils : int, optional Number of coils in MRI scanner -- only relevant in SoS reconstruction. Must be a positive int. Default: 1 R : int, optional Acceleration factor of the acquisition. For SIEMENS, R = iPAT factor. For GE, R = ASSET factor. For PHILIPS, R = SENSE factor. Typical values are 1 or 2. Must be a positive int. Default: 1 voxelwise : bool, optional If true, performs a voxelwise fit. If false, performs a global fit on the entire brain at once. The global fit requires a 4D brain volume in `fit`. use_tv : bool, optional If true, applies total variation regularization. This only takes effect in a global fit (`voxelwise` is set to `False`). TV can only be applied to 4D brain volumes with no singleton dimensions. sphere : Sphere, optional Sphere on which to construct fODF. If None, uses `repulsion724`. verbose : bool, optional If true, logs updates on estimated signal-to-noise ratio after each iteration. This only takes effect in a global fit (`voxelwise` is set to `False`). References ---------- .. 
footbibliography::
        """
        if not np.any(gtab.b0s_mask):
            raise ValueError("Gradient table has no b0 measurements")

        self.gtab_orig = gtab  # save for prediction

        # Masks to extract b0/non-b0 measurements
        self.where_b0s = lazy_index(gtab.b0s_mask)
        self.where_dwi = lazy_index(~gtab.b0s_mask)

        # Correct gradient table to contain b0 data at the beginning
        bvals_cor = np.concatenate(([0], gtab.bvals[self.where_dwi]))
        bvecs_cor = np.concatenate(([[0, 0, 0]], gtab.bvecs[self.where_dwi]))
        gtab_cor = gradient_table(bvals_cor, bvecs=bvecs_cor)

        # Initialize self.gtab
        OdfModel.__init__(self, gtab_cor)

        if isinstance(wm_response, tuple):
            wm_response = np.asarray(wm_response)

        # Store responses
        self.wm_response = wm_response
        self.gm_response = gm_response
        self.csf_response = csf_response

        # Initializing remaining parameters
        if R < 1 or n_iter < 1 or n_coils < 1:
            raise ValueError(
                f"R, n_iter, and n_coils must be >= 1, but R={R}, "
                + f"n_iter={n_iter}, and n_coils={n_coils}"
            )
        self.R = R
        self.n_iter = n_iter
        self.recon_type = recon_type
        self.n_coils = n_coils

        if voxelwise and use_tv:
            raise ValueError("Total variation has no effect in voxelwise fit")
        if voxelwise and verbose:
            warnings.warn(
                "Verbosity has no effect in voxelwise fit", UserWarning, stacklevel=2
            )

        self.voxelwise = voxelwise
        self.use_tv = use_tv
        self.verbose = verbose

        if sphere is None:
            self.sphere = get_sphere(name="repulsion724")
        else:
            self.sphere = sphere

        if voxelwise:
            self.fit = self._voxelwise_fit
        else:
            self.fit = self._global_fit

        # Fitting parameters
        self.kernel = None

    @warning_for_keywords()
    def _global_fit(self, data, *, mask=None):
        """
        Fit fODF and GM/CSF volume fractions globally.

        Parameters
        ----------
        data : ndarray (x, y, z, N)
            Signal values for each voxel. Must be 4D.
        mask : ndarray (x, y, z), optional
            Binary mask specifying voxels of interest with 1; results will
            only be fit at these voxels (0 elsewhere). If `None`, fits all
            voxels. Default: None.

        Returns
        -------
        model_fit : RumbaFit
            Fit object storing model parameters.
        """
        # Checking data and mask shapes
        if len(data.shape) != 4:
            raise ValueError(f"Data should be 4D, received shape {data.shape}")

        if mask is None:  # default mask includes all voxels
            mask = np.ones(data.shape[:3])

        if data.shape[:3] != mask.shape:
            raise ValueError(
                "Mask shape should match first 3 dimensions of "
                + f"data, but data dimensions are {data.shape} "
                + f"while mask dimensions are {mask.shape}"
            )

        # Signal repair, normalization

        # Normalize data to mean b0 image
        data = normalize_data(data, self.where_b0s, min_signal=_EPS)
        # Rearrange data to match corrected gradient table
        data = np.concatenate(
            (np.ones([*data.shape[:3], 1]), data[..., self.where_dwi]), axis=3
        )
        data[data > 1] = 1  # clip values between 0 and 1

        # All arrays are converted to float32 to reduce memory load
        data = data.astype(np.float32)

        # Generate kernel
        self.kernel = generate_kernel(
            self.gtab,
            self.sphere,
            self.wm_response,
            self.gm_response,
            self.csf_response,
        ).astype(np.float32)

        # Fit fODF
        model_params = rumba_deconv_global(
            data,
            self.kernel,
            mask,
            n_iter=self.n_iter,
            recon_type=self.recon_type,
            n_coils=self.n_coils,
            R=self.R,
            use_tv=self.use_tv,
            verbose=self.verbose,
        )

        model_fit = RumbaFit(self, model_params)
        return model_fit

    @warning_for_keywords()
    def _voxelwise_fit(self, data, *, mask=None):
        """
        Fit fODF and GM/CSF volume fractions voxelwise.

        Parameters
        ----------
        data : ndarray ([x, y, z], N)
            Signal values for each voxel.
        mask : ndarray ([x, y, z]), optional
            Binary mask specifying voxels of interest with 1; results will
            only be fit at these voxels (0 elsewhere). If `None`, fits all
            voxels. Default: None.

        Returns
        -------
        model_fit : RumbaFit
            Fit object storing model parameters.
        """
        if mask is None:  # default mask includes all voxels
            mask = np.ones(data.shape[:-1])

        if data.shape[:-1] != mask.shape:
            raise ValueError(
                "Mask shape should match first dimensions of "
                + f"data, but data dimensions are {data.shape} "
                + f"while mask dimensions are {mask.shape}"
            )

        self.kernel = generate_kernel(
            self.gtab,
            self.sphere,
            self.wm_response,
            self.gm_response,
            self.csf_response,
        )

        model_params = np.zeros(data.shape[:-1] + (len(self.sphere.vertices) + 2,))
        for ijk in np.ndindex(data.shape[:-1]):
            if mask[ijk]:
                vox_data = data[ijk]
                # Normalize data to mean b0 image
                vox_data = normalize_data(vox_data, self.where_b0s, min_signal=_EPS)
                # Rearrange data to match corrected gradient table
                vox_data = np.concatenate(([1], vox_data[self.where_dwi]))
                vox_data[vox_data > 1] = 1  # clip values between 0 and 1

                # Fitting
                model_param = rumba_deconv(
                    vox_data,
                    self.kernel,
                    n_iter=self.n_iter,
                    recon_type=self.recon_type,
                    n_coils=self.n_coils,
                )

                model_params[ijk] = model_param

        model_fit = RumbaFit(self, model_params)
        return model_fit


class RumbaFit(OdfFit):
    def __init__(self, model, model_params):
        """
        Constructs fODF, GM/CSF volume fractions, and other derived results.

        fODF and GM/CSF fractions are normalized to collectively sum to 1
        for each voxel.

        Parameters
        ----------
        model : RumbaSDModel
            RumbaSDModel-SD model.
        model_params : ndarray ([x, y, z], M)
            fODF and GM/CSF volume fractions for each voxel.
        """
        self.model = model
        self.model_params = model_params

    @warning_for_keywords()
    def odf(self, *, sphere=None):
        """
        Constructs fODF at discrete vertices on model sphere for each voxel.

        Parameters
        ----------
        sphere : Sphere, optional
            Sphere on which to construct fODF. If specified, must be the same
            sphere used by the `RumbaSDModel` model. Default: None.

        Returns
        -------
        odf : ndarray ([x, y, z], M-2)
            fODF computed at each vertex on model sphere.
        """
        if sphere is not None and sphere != self.model.sphere:
            raise ValueError(
                "Reconstruction sphere must be the same as used"
                + " in the RUMBA-SD model."
            )

        odf = self.model_params[..., :-2]
        return odf

    @auto_attr
    def f_gm(self):
        """
        Constructs GM volume fraction for each voxel.

        Returns
        -------
        f_gm : ndarray ([x, y, z])
            GM volume fraction.
        """
        f_gm = self.model_params[..., -2]
        return f_gm

    @auto_attr
    def f_csf(self):
        """
        Constructs CSF volume fraction for each voxel.

        Returns
        -------
        f_csf : ndarray ([x, y, z])
            CSF volume fraction.
        """
        f_csf = self.model_params[..., -1]
        return f_csf

    @auto_attr
    def f_wm(self):
        """
        Constructs white matter volume fraction for each voxel. Equivalent to
        sum of fODF.

        Returns
        -------
        f_wm : ndarray ([x, y, z])
            White matter volume fraction.
        """
        f_wm = np.sum(self.odf(), axis=-1)
        return f_wm

    @auto_attr
    def f_iso(self):
        """
        Constructs isotropic volume fraction for each voxel. Equivalent to
        sum of GM and CSF volume fractions.

        Returns
        -------
        f_iso : ndarray ([x, y, z])
            Isotropic volume fraction.
        """
        f_iso = self.f_gm + self.f_csf
        return f_iso

    @auto_attr
    def combined_odf_iso(self):
        """
        Constructs fODF combined with isotropic volume fraction at discrete
        vertices on model sphere.

        Distributes isotropic compartments evenly along each fODF direction.
        Sums to 1.

        Returns
        -------
        combined : ndarray ([x, y, z], M-2)
            fODF combined with isotropic volume fraction.
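
        Examples
        --------
        The combination is plain elementwise arithmetic; a hand-computed
        sketch with 4 fODF directions and a hypothetical ``f_iso = 0.2``:

        >>> import numpy as np
        >>> odf = np.array([0.5, 0.2, 0.1, 0.0])
        >>> odf + 0.2 / odf.shape[-1]
        array([0.55, 0.25, 0.15, 0.05])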
""" odf = self.odf() combined = odf + self.f_iso[..., None] / odf.shape[-1] return combined @warning_for_keywords() def predict(self, *, gtab=None, S0=None): """ Compute signal prediction on model gradient table given given fODF and GM/CSF volume fractions for each voxel. Parameters ---------- gtab : GradientTable, optional The gradients for which the signal will be predicted. Use the model's gradient table if `None`. Default: None S0 : ndarray ([x, y, z]) or float, optional The non diffusion-weighted signal value for each voxel. If a float, the same value is used for each voxel. If `None`, 1 is used for each voxel. Default: None Returns ------- pred_sig : ndarray ([x, y, z], N) The predicted signal. """ model_params = self.model_params if gtab is None: gtab = self.model.gtab_orig pred_kernel = generate_kernel( gtab, self.model.sphere, self.model.wm_response, self.model.gm_response, self.model.csf_response, ) if S0 is None: S0 = np.ones(model_params.shape[:-1] + (1,)) elif isinstance(S0, np.ndarray): S0 = S0[..., None] else: S0 = S0 * np.ones(model_params.shape[:-1] + (1,)) pred_sig = np.empty(model_params.shape[:-1] + (len(gtab.bvals),)) for ijk in np.ndindex(model_params.shape[:-1]): pred_sig[ijk] = S0[ijk] * np.dot(pred_kernel, model_params[ijk]) return pred_sig @warning_for_keywords() def rumba_deconv(data, kernel, *, n_iter=600, recon_type="smf", n_coils=1): r""" Fit fODF and GM/CSF volume fractions for a voxel using RUMBA-SD. Deconvolves the kernel from the diffusion-weighted signal by computing a maximum likelihood estimation of the fODF :footcite:p:`CanalesRodriguez2015`. Minimizes the negative log-likelihood of the data under Rician or Noncentral Chi noise distributions by adapting the iterative technique developed in Richardson-Lucy deconvolution. Parameters ---------- data : 1d ndarray (N,) Signal values for a single voxel. kernel : 2d ndarray (N, M) Deconvolution kernel mapping volume fractions of the M compartments to N-length signal. Last two columns should be for GM and CSF. n_iter : int, optional Number of iterations for fODF estimation. Must be a positive int. Default: 600 recon_type : {'smf', 'sos'}, optional MRI reconstruction method: spatial matched filter (SMF) or sum-of-squares (SoS). SMF reconstruction generates Rician noise while SoS reconstruction generates Noncentral Chi noise. Default: 'smf' n_coils : int, optional Number of coils in MRI scanner -- only relevant in SoS reconstruction. Must be a positive int. Default: 1 Returns ------- fit_vec : 1d ndarray (M,) Vector containing fODF and GM/CSF volume fractions. First M-2 components are fODF while last two are GM and CSF respectively. Notes ----- The diffusion MRI signal measured at a given voxel is a sum of contributions from each intra-voxel compartment, including parallel white matter (WM) fiber populations in a given orientation as well as effects from GM and CSF. The equation governing these contributions is: $S_i = S_0\left(\sum_{j=1}^{M}f_j\exp(-b_i\textbf{v}_i^T\textbf{D}_j \textbf{v}_i) + f_{GM}\exp(-b_iD_{GM})+f_{CSF}\exp(-b_iD_{CSF})\right)$ Where $S_i$ is the resultant signal along the diffusion-sensitizing gradient unit vector $\textbf{v_i}; i = 1, ..., N$ with a b-value of $b_i$. $f_j; j = 1, ..., M$ is the volume fraction of the $j^{th}$ fiber population with an anisotropic diffusion tensor $\textbf{D_j}$. $f_{GM}$ and $f_{CSF}$ are the volume fractions and $D_{GM}$ and $D_{CSF}$ are the mean diffusivities of GM and CSF respectively. 
This equation is linear in $f_j, f_{GM}, f_{CSF}$ and can be simplified to a single matrix multiplication: $\textbf{S} = \textbf{Hf}$ Where $\textbf{S}$ is the signal vector at a certain voxel, $\textbf{H}$ is the deconvolution kernel, and $\textbf{f}$ is the vector of volume fractions for each compartment. Modern MRI scanners produce noise following a Rician or Noncentral Chi distribution, depending on their signal reconstruction technique `footcite:p:`Constantinides1997`. Using this linear model, it can be shown that the likelihood of a signal under a Noncentral Chi noise model is: $P(\textbf{S}|\textbf{H}, \textbf{f}, \sigma^2, n) = \prod_{i=1}^{N}\left( \frac{S_i}{\bar{S_i}}\right)^n\exp\left\{-\frac{1}{2\sigma^2}\left[ S_i^2 + \bar{S}_i^2\right]\right\}I_{n-1}\left(\frac{S_i\bar{S}_i} {\sigma^2}\right)u(S_i)$ Where $S_i$ and $\bar{S}_i = \textbf{Hf}$ are the measured and expected signals respectively, and $n$ is the number of coils in the scanner, and $I_{n-1}$ is the modified Bessel function of first kind of order $n-1$. This gives the likelihood under a Rician distribution when $n$ is set to 1. By taking the negative log of this with respect to $\textbf{f}$ and setting the derivative to 0, the $\textbf{f}$ maximizing likelihood is found to be: $\textbf{f} = \textbf{f} \circ \frac{\textbf{H}^T\left[\textbf{S}\circ \frac{I_n(\textbf{S}\circ \textbf{Hf}/\sigma^2)} {I_{n-1}(\textbf{S} \circ\textbf{Hf}\sigma^2)} \right ]} {\textbf{H}^T\textbf{Hf}}$ The solution can be found using an iterative scheme, just as in the Richardson-Lucy algorithm: $\textbf{f}^{k+1} = \textbf{f}^k \circ \frac{\textbf{H}^T\left[\textbf{S} \circ\frac{I_n(\textbf{S}\circ\textbf{Hf}^k/\sigma^2)} {I_{n-1}(\textbf{S} \circ\textbf{Hf}^k/\sigma^2)} \right ]} {\textbf{H}^T\textbf{Hf}^k}$ In order to apply this, a reasonable estimate of $\sigma^2$ is required. To find this, a separate iterative scheme is found using the derivative of the negative log with respect to $\sigma^2$, and is run in parallel. This is shown here: $\alpha^{k+1} = \frac{1}{nN}\left\{ \frac{\textbf{S}^T\textbf{S} + \textbf{f}^T\textbf{H}^T\textbf{Hf}}{2} - \textbf{1}^T_N\left[(\textbf{S} \circ\textbf{Hf})\circ\frac{I_n(\textbf{S}\circ\textbf{Hf}/\alpha^k)} {I_{n-1}(\textbf{S}\circ\textbf{Hf}/\alpha^k)} \right ]\right \}$ For more details, see :footcite:p:`CanalesRodriguez2015`. References ---------- .. footbibliography:: """ n_comp = kernel.shape[1] # number of compartments n_grad = len(data) # gradient directions fodf = np.ones((n_comp, 1)) # initial guess is iso-probable fodf = fodf / np.sum(fodf, axis=0) # normalize initial guess if recon_type == "smf": n_order = 1 # Rician noise (same as Noncentral Chi with order 1) elif recon_type == "sos": n_order = n_coils # Noncentral Chi noise (order = # of coils) else: raise ValueError( f"Invalid recon_type. 
Should be 'smf' or 'sos', received {recon_type}" ) data = data.reshape(-1, 1) reblurred = np.matmul(kernel, fodf) # For use later kernel_t = kernel.T f_zero = 0 # minimum value allowed in fODF # Initialize variance sigma0 = 1 / 15 sigma2 = sigma0**2 * np.ones(data.shape) # Expand into vector reblurred_s = data * reblurred / sigma2 for _ in range(n_iter): fodf_i = fodf ratio = mbessel_ratio(n_order, reblurred_s) rl_factor = np.matmul(kernel_t, data * ratio) / ( np.matmul(kernel_t, reblurred) + _EPS ) fodf = fodf_i * rl_factor # result of iteration fodf = np.maximum(f_zero, fodf) # apply positivity constraint # Update other variables reblurred = np.matmul(kernel, fodf) reblurred_s = data * reblurred / sigma2 # Iterate variance sigma2_i = (1 / (n_grad * n_order)) * np.sum( (data**2 + reblurred**2) / 2 - (sigma2 * reblurred_s) * ratio, axis=0 ) sigma2_i = np.minimum((1 / 8) ** 2, np.maximum(sigma2_i, (1 / 80) ** 2)) sigma2 = sigma2_i * np.ones(data.shape) # Expand into vector # Normalize final result fit_vec = np.squeeze(fodf / (np.sum(fodf, axis=0) + _EPS)) return fit_vec def mbessel_ratio(n, x): r""" Fast computation of modified Bessel function ratio (first kind). Computes: $I_{n}(x) / I_{n-1}(x)$ using Perron's continued fraction equation where $I_n$ is the modified Bessel function of first kind, order $n$ :footcite:p:`Gautschi1978`. Parameters ---------- n : int Order of Bessel function in numerator (denominator is of order n-1). Must be a positive int. x : float or ndarray Value or array of values with which to compute ratio. Returns ------- y : float or ndarray Result of ratio computation. References ---------- .. footbibliography:: """ y = x / ( (2 * n + x) - ( 2 * x * (n + 1 / 2) / ( 2 * n + 1 + 2 * x - ( 2 * x * (n + 3 / 2) / (2 * n + 2 + 2 * x - (2 * x * (n + 5 / 2) / (2 * n + 3 + 2 * x))) ) ) ) ) return y def generate_kernel(gtab, sphere, wm_response, gm_response, csf_response): """ Generate deconvolution kernel Compute kernel mapping orientation densities of white matter fiber populations (along each vertex of the sphere) and isotropic volume fractions to a diffusion weighted signal. Parameters ---------- gtab : GradientTable Gradient table. sphere : Sphere Sphere with which to sample discrete fiber orientations in order to construct kernel wm_response : 1d ndarray or 2d ndarray or AxSymShResponse, optional Tensor eigenvalues as a (3,) ndarray, multishell eigenvalues as a (len(unique_bvals_tolerance(gtab.bvals))-1, 3) ndarray in order of smallest to largest b-value, or an AxSymShResponse. gm_response : float, optional Mean diffusivity for GM compartment. If `None`, then grey matter compartment set to all zeros. csf_response : float, optional Mean diffusivity for CSF compartment. If `None`, then CSF compartment set to all zeros. Returns ------- kernel : 2d ndarray (N, M) Computed kernel; can be multiplied with a vector consisting of volume fractions for each of M-2 fiber populations as well as GM and CSF fractions to produce a diffusion weighted signal. 
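
    Examples
    --------
    A sketch of the expected dimensions, assuming a small 7-measurement
    gradient table (one b0 plus six directions) and the default RUMBA-SD
    responses; the kernel has one column per sphere vertex plus two
    isotropic columns:

    >>> import numpy as np
    >>> from dipy.core.gradients import gradient_table
    >>> from dipy.data import get_sphere
    >>> bvals = np.concatenate(([0], np.full(6, 1000)))
    >>> bvecs = np.concatenate(([[0, 0, 0]], np.eye(3), -np.eye(3)))
    >>> gtab = gradient_table(bvals, bvecs=bvecs)
    >>> sphere = get_sphere(name="repulsion724")
    >>> kernel = generate_kernel(
    ...     gtab, sphere, np.array([1.7e-3, 0.2e-3, 0.2e-3]), 0.8e-3, 3.0e-3
    ... )
    >>> kernel.shape
    (7, 726)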
""" # Coordinates of sphere vertices sticks = sphere.vertices n_grad = len(gtab.gradients) # number of gradient directions n_wm_comp = sticks.shape[0] # number of fiber populations n_comp = n_wm_comp + 2 # plus isotropic compartments kernel = np.zeros((n_grad, n_comp)) # White matter compartments list_bvals = unique_bvals_tolerance(gtab.bvals) n_bvals = len(list_bvals) - 1 # number of unique b-values if isinstance(wm_response, AxSymShResponse): # Data-driven response where_dwi = lazy_index(~gtab.b0s_mask) gradients = gtab.gradients[where_dwi] gradients = gradients / np.linalg.norm(gradients, axis=1)[..., None] S0 = wm_response.S0 for i in range(n_wm_comp): # Response oriented along [0, 0, 1], so must rotate sticks[i] rot_mat = vec2vec_rotmat(sticks[i], np.array([0, 0, 1])) rot_gradients = np.dot(rot_mat, gradients.T).T rot_sphere = Sphere(xyz=rot_gradients) # Project onto rotated sphere and scale rot_response = wm_response.on_sphere(rot_sphere) / S0 kernel[where_dwi, i] = rot_response # Set b0 components kernel[gtab.b0s_mask, :] = 1 elif wm_response.shape == (n_bvals, 3): # Multi-shell response bvals = gtab.bvals bvecs = gtab.bvecs for n, bval in enumerate(list_bvals[1:]): indices = get_bval_indices(bvals, bval) with warnings.catch_warnings(): # extract relevant b-value warnings.simplefilter("ignore") gtab_sub = gradient_table(bvals[indices], bvecs=bvecs[indices]) for i in range(n_wm_comp): # Signal generated by WM-fiber for each gradient direction S = single_tensor( gtab_sub, evals=wm_response[n], evecs=all_tensor_evecs(sticks[i]) ) kernel[indices, i] = S # Set b0 components b0_indices = get_bval_indices(bvals, list_bvals[0]) kernel[b0_indices, :] = 1 else: # Single-shell response for i in range(n_wm_comp): # Signal generated by WM-fiber for each gradient direction S = single_tensor( gtab, evals=wm_response, evecs=all_tensor_evecs(sticks[i]) ) kernel[:, i] = S # Set b0 components kernel[gtab.b0s_mask, :] = 1 # GM compartment if gm_response is None: S_gm = np.zeros(n_grad) else: S_gm = single_tensor( gtab, evals=np.array([gm_response, gm_response, gm_response]) ) if csf_response is None: S_csf = np.zeros(n_grad) else: S_csf = single_tensor( gtab, evals=np.array([csf_response, csf_response, csf_response]) ) kernel[:, n_comp - 2] = S_gm kernel[:, n_comp - 1] = S_csf return kernel @warning_for_keywords() def rumba_deconv_global( data, kernel, mask, *, n_iter=600, recon_type="smf", n_coils=1, R=1, use_tv=True, verbose=False, ): r""" Fit fODF for all voxels simultaneously using RUMBA-SD. Deconvolves the kernel from the diffusion-weighted signal at each voxel by computing a maximum likelihood estimation of the fODF :footcite:p:`CanalesRodriguez2015`. Global fitting also permits the use of total variation regularization (RUMBA-SD + TV). The spatial dependence introduced by TV promotes smoother solutions (i.e. prevents oscillations), while still allowing for sharp discontinuities :footcite:p:`Rudin1992`. This promotes smoothness and continuity along individual tracts while preventing smoothing of adjacent tracts. Generally, global_fit will proceed more quickly than the voxelwise fit provided that the computer has adequate RAM (>= 16 GB should be more than sufficient). Parameters ---------- data : 4d ndarray (x, y, z, N) Signal values for entire brain. None of the volume dimensions x, y, z can be 1 if TV regularization is required. kernel : 2d ndarray (N, M) Deconvolution kernel mapping volume fractions of the M compartments to N-length signal. Last two columns should be for GM and CSF. 
    mask : 3d ndarray (x, y, z)
        Binary mask specifying voxels of interest with 1; fODF will only be
        fit at these voxels (0 elsewhere).
    n_iter : int, optional
        Number of iterations for fODF estimation. Must be a positive int.
    recon_type : {'smf', 'sos'}, optional
        MRI reconstruction method: spatial matched filter (SMF) or
        sum-of-squares (SoS). SMF reconstruction generates Rician noise while
        SoS reconstruction generates Noncentral Chi noise.
    n_coils : int, optional
        Number of coils in MRI scanner -- only relevant in SoS reconstruction.
        Must be a positive int.
    R : int, optional
        Acceleration factor of the acquisition. If 1, the noise is treated as
        spatially stationary and a single TV regularization strength is used
        for the whole volume; if greater than 1, the strength is updated
        voxelwise. Must be a positive int.
    use_tv : bool, optional
        If true, applies total variation regularization. This requires a
        brain volume with no singleton dimensions.
    verbose : bool, optional
        If true, logs updates on estimated signal-to-noise ratio after each
        iteration.

    Returns
    -------
    fit_array : 4d ndarray (x, y, z, M)
        fODF and GM/CSF volume fractions computed for each voxel. First M-2
        components are fODF, while last two are GM and CSF respectively.

    Notes
    -----
    TV modifies our cost function as follows:

    .. math::

        J(\textbf{f}) = -\log{P(\textbf{S}|\textbf{H}, \textbf{f}, \sigma^2,
        n)} + \alpha_{TV}TV(\textbf{f})

    where the first term is the negative log likelihood described in the notes
    of `rumba_deconv`, and the second term is the TV energy, or the sum of
    gradient absolute values for the fODF across the entire brain. This
    results in a new multiplicative factor in the iterative scheme, now
    becoming:

    .. math::

        \textbf{f}^{k+1} = \textbf{f}^k \circ \frac{\textbf{H}^T\left[
        \textbf{S} \circ\frac{I_n(\textbf{S}\circ\textbf{Hf}^k/\sigma^2)}
        {I_{n-1}(\textbf{S} \circ\textbf{Hf}^k/\sigma^2)} \right ]}
        {\textbf{H}^T\textbf{Hf}^k}\circ \textbf{R}^k

    where $\textbf{R}^k$ is computed voxelwise by:

    .. math::

        (\textbf{R}^k)_j = \frac{1}{1 - \alpha_{TV}div\left(\frac{
        \triangledown[\textbf{f}^k_{3D}]_j}{\lvert\triangledown[
        \textbf{f}^k_{3D}]_j \rvert} \right)\biggr\rvert_{x, y, z}}

    Here, $\triangledown$ is the symbol for the 3D gradient at any voxel.

    The regularization strength, $\alpha_{TV}$, is updated after each
    iteration by the discrepancy principle -- specifically, it is selected to
    match the estimated variance after each iteration
    :footcite:p:`Chambolle2004`.

    References
    ----------
    .. footbibliography::
    """
    # Crop data to reduce memory consumption
    dim_orig = data.shape
    ixmin, ixmax = bounding_box(mask)
    data = crop(data, ixmin, ixmax)
    mask = crop(mask, ixmin, ixmax)

    if np.any(np.array(data.shape[:3]) == 1) and use_tv:
        raise ValueError(
            "Cannot use TV regularization if any spatial dimensions are 1; "
            + f"provided dimensions were {data.shape[:3]}"
        )

    epsilon = 1e-7

    n_grad = kernel.shape[0]  # gradient directions
    n_comp = kernel.shape[1]  # number of compartments
    dim = data.shape
    n_v_tot = np.prod(dim[:3])  # total number of voxels

    # Initial guess is iso-probable
    fodf0 = np.ones((n_comp, 1), dtype=np.float32)
    fodf0 = fodf0 / np.sum(fodf0, axis=0)

    if recon_type == "smf":
        n_order = 1  # Rician noise (same as Noncentral Chi with order 1)
    elif recon_type == "sos":
        n_order = n_coils  # Noncentral Chi noise (order = # of coils)
    else:
        raise ValueError(
            f"Invalid recon_type. 
Should be 'smf' or 'sos', received {recon_type}"
        )

    mask_vec = np.ravel(mask)
    # Indices of target voxels
    index_mask = np.atleast_1d(np.squeeze(np.argwhere(mask_vec)))
    n_v_true = len(index_mask)  # number of target voxels

    data_2d = np.zeros((n_v_true, n_grad), dtype=np.float32)
    for i in range(n_grad):
        data_2d[:, i] = np.ravel(data[:, :, :, i])[
            index_mask
        ]  # only keep voxels of interest

    data_2d = data_2d.T
    fodf = np.tile(fodf0, (1, n_v_true))
    reblurred = np.matmul(kernel, fodf)

    # For use later
    kernel_t = kernel.T
    f_zero = 0

    # Initialize algorithm parameters
    sigma0 = 1 / 15
    sigma2 = sigma0**2
    tv_lambda = sigma2  # initial guess for TV regularization strength

    # Expand into matrix form for iterations
    sigma2 = sigma2 * np.ones(data_2d.shape, dtype=np.float32)
    tv_lambda_aux = np.zeros(n_v_tot, dtype=np.float32)

    reblurred_s = data_2d * reblurred / sigma2

    for i in range(n_iter):
        fodf_i = fodf
        ratio = mbessel_ratio(n_order, reblurred_s).astype(np.float32)
        rl_factor = np.matmul(kernel_t, data_2d * ratio) / (
            np.matmul(kernel_t, reblurred) + _EPS
        )

        if use_tv:  # apply TV regularization
            tv_factor = np.ones(fodf_i.shape, dtype=np.float32)
            fodf_4d = _reshape_2d_4d(fodf_i.T, mask)
            # Compute gradient, divergence
            gr = _grad(fodf_4d)
            d_inv = 1 / np.sqrt(epsilon**2 + np.sum(gr**2, axis=3))
            gr_norm = gr * d_inv[:, :, :, None, :]
            div_f = _divergence(gr_norm)
            g0 = np.abs(1 - tv_lambda * div_f)
            tv_factor_4d = 1 / (g0 + _EPS)

            for j in range(n_comp):
                tv_factor_1d = np.ravel(tv_factor_4d[:, :, :, j])[index_mask]
                tv_factor[j, :] = tv_factor_1d

            # Apply TV regularization to iteration factor
            rl_factor = rl_factor * tv_factor

        fodf = fodf_i * rl_factor  # result of iteration
        fodf = np.maximum(f_zero, fodf)  # positivity constraint

        # Update other variables
        reblurred = np.matmul(kernel, fodf)
        reblurred_s = data_2d * reblurred / sigma2

        # Iterate variance
        sigma2_i = (1 / (n_grad * n_order)) * np.sum(
            (data_2d**2 + reblurred**2) / 2 - (sigma2 * reblurred_s) * ratio, axis=0
        )
        sigma2_i = np.minimum((1 / 8) ** 2, np.maximum(sigma2_i, (1 / 80) ** 2))

        if verbose:
            logger.info("Iteration %d of %d", i + 1, n_iter)

            snr_mean = np.mean(1 / np.sqrt(sigma2_i))
            snr_std = np.std(1 / np.sqrt(sigma2_i))
            logger.info(
                "Mean SNR (S0/sigma) estimated to be %.3f +/- %.3f", snr_mean, snr_std
            )
        # Expand into matrix
        sigma2 = np.tile(sigma2_i[None, :], (data_2d.shape[0], 1))

        # Update TV regularization strength using the discrepancy principle
        if use_tv:
            if R == 1:
                tv_lambda = np.mean(sigma2_i)

                if tv_lambda < (1 / 30) ** 2:
                    tv_lambda = (1 / 30) ** 2
            else:  # different factor for each voxel
                tv_lambda_aux[index_mask] = sigma2_i
                tv_lambda = np.reshape(tv_lambda_aux, (*dim[:3], 1))

    fodf = fodf.astype(np.float64)
    fodf = fodf / (np.sum(fodf, axis=0)[None, ...] 
+ _EPS) # normalize fODF # Extract compartments fit_array = np.zeros((*dim_orig[:3], n_comp)) _reshape_2d_4d( fodf.T, mask, out=fit_array[ixmin[0] : ixmax[0], ixmin[1] : ixmax[1], ixmin[2] : ixmax[2]], ) return fit_array def _grad(M): """ Computes one way first difference """ x_ind = list(range(1, M.shape[0])) + [M.shape[0] - 1] y_ind = list(range(1, M.shape[1])) + [M.shape[1] - 1] z_ind = list(range(1, M.shape[2])) + [M.shape[2] - 1] grad = np.zeros((*M.shape[:3], 3, M.shape[-1]), dtype=np.float32) grad[:, :, :, 0, :] = M[x_ind, :, :, :] - M grad[:, :, :, 1, :] = M[:, y_ind, :, :] - M grad[:, :, :, 2, :] = M[:, :, z_ind, :] - M return grad def _divergence(F): """ Computes divergence of a 3-dimensional vector field (with one way first difference) """ Fx = F[:, :, :, 0, :] Fy = F[:, :, :, 1, :] Fz = F[:, :, :, 2, :] x_ind = [0] + list(range(F.shape[0] - 1)) y_ind = [0] + list(range(F.shape[1] - 1)) z_ind = [0] + list(range(F.shape[2] - 1)) fx = Fx - Fx[x_ind, :, :, :] fx[0, :, :, :] = Fx[0, :, :, :] # edge conditions fx[-1, :, :, :] = -Fx[-2, :, :, :] fy = Fy - Fy[:, y_ind, :, :] fy[:, 0, :, :] = Fy[:, 0, :, :] fy[:, -1, :, :] = -Fy[:, -2, :, :] fz = Fz - Fz[:, :, z_ind, :] fz[:, :, 0, :] = Fz[:, :, 0, :] fz[:, :, -1, :] = -Fz[:, :, -2, :] return fx + fy + fz @warning_for_keywords() def _reshape_2d_4d(M, mask, *, out=None): """ Faster reshape from 2D to 4D. """ if out is None: out = np.zeros((*mask.shape, M.shape[-1]), dtype=M.dtype) n = 0 for i, j, k in np.ndindex(mask.shape): if mask[i, j, k]: out[i, j, k, :] = M[n, :] n += 1 return out dipy-1.11.0/dipy/reconst/sfm.py000066400000000000000000000603121476546756600163540ustar00rootroot00000000000000""" The Sparse Fascicle Model. This is an implementation of the sparse fascicle model described in :footcite:t:`Rokem2015`. The multi b-value version of this model is described in :footcite:t:`Rokem2014`. References ---------- .. footbibliography:: """ from collections import OrderedDict import gc import warnings import numpy as np try: from numpy import nanmean except ImportError: from scipy.stats import nanmean import dipy.core.gradients as grad from dipy.core.onetime import auto_attr import dipy.core.optimize as opt import dipy.data as dpd from dipy.reconst.base import ReconstFit, ReconstModel from dipy.reconst.cache import Cache import dipy.sims.voxel as sims from dipy.testing.decorators import warning_for_keywords from dipy.utils.multiproc import determine_num_processes from dipy.utils.optpkg import optional_package joblib, has_joblib, _ = optional_package("joblib") sklearn, has_sklearn, _ = optional_package("sklearn") lm, _, _ = optional_package("sklearn.linear_model") # Isotropic signal models: these are models of the part of the signal that # changes with b-value, but does not change with direction. 
This collection is
# extensible, by inheriting from IsotropicModel/IsotropicFit below:

# First, a helper function to derive the fit signal for these models:
@warning_for_keywords()
def _to_fit_iso(data, gtab, *, mask=None):
    if mask is None:
        mask = np.ones(data.shape[:-1], dtype=bool)
    # Turn it into a 2D thing:
    if len(mask.shape) > 0:
        data = data[mask]
    else:
        # This handles the corner case of fitting a single voxel:
        data = data.reshape((-1, data.shape[0]))
    data_no_b0 = data[:, ~gtab.b0s_mask]
    nzb0 = data_no_b0 > 0
    nzb0_idx = np.where(nzb0)
    zb0_idx = np.where(~nzb0)
    if np.sum(gtab.b0s_mask) > 0:
        s0 = np.mean(data[:, gtab.b0s_mask], -1)
        to_fit = np.empty(data_no_b0.shape)
        with warnings.catch_warnings():
            warnings.simplefilter("ignore")
            to_fit[nzb0_idx] = data_no_b0[nzb0_idx] / s0[nzb0_idx[0]]
        to_fit[zb0_idx] = 0
    else:
        to_fit = data_no_b0
    return to_fit


class IsotropicModel(ReconstModel):
    """
    A base-class for the representation of isotropic signals.

    The default behavior, suitable for single b-value data is to calculate the
    mean in each voxel as an estimate of the signal that does not depend on
    direction.
    """

    def __init__(self, gtab):
        """Initialize an IsotropicModel.

        Parameters
        ----------
        gtab : a GradientTable class instance
        """
        ReconstModel.__init__(self, gtab)

    @warning_for_keywords()
    def fit(self, data, *, mask=None, **kwargs):
        """Fit an IsotropicModel.

        This boils down to finding the mean diffusion-weighted signal in each
        voxel.

        Parameters
        ----------
        data : ndarray
            The measured signal.
        mask : array, optional
            A boolean array used to mark the coordinates in the data that
            should be analyzed. Has the shape `data.shape[:-1]`. Default:
            None, which implies that all points should be analyzed.

        Returns
        -------
        IsotropicFit class instance.
        """
        # This returns as a 2D thing:
        params = np.mean(_to_fit_iso(data, self.gtab, mask=mask), -1)

        if mask is None:
            params = np.reshape(params, data.shape[:-1])
        else:
            out_params = np.zeros(data.shape[:-1])
            out_params[mask] = params
            params = out_params
        return IsotropicFit(self, params)


class IsotropicFit(ReconstFit):
    """
    A fit object for representing the isotropic signal as the mean of the
    diffusion-weighted signal.
    """

    def __init__(self, model, params):
        """Initialize an IsotropicFit object.

        Parameters
        ----------
        model : IsotropicModel class instance
            Isotropic model.
        params : ndarray
            The mean isotropic model parameters (the mean diffusion-weighted
            signal in each voxel).
        """
        self.model = model
        self.params = params

    @warning_for_keywords()
    def predict(self, *, gtab=None):
        """Predict the isotropic signal.

        Based on a gradient table. In this case, the (naive!) prediction will
        be the mean of the diffusion-weighted signal in the voxels.

        Parameters
        ----------
        gtab : a GradientTable class instance, optional
            Defaults to use the gtab from the IsotropicModel from which this
            fit was derived.
        """
        if gtab is None:
            gtab = self.model.gtab
        if len(self.params.shape) == 0:
            return self.params[..., np.newaxis] + np.zeros(np.sum(~gtab.b0s_mask))
        else:
            return self.params[..., np.newaxis] + np.zeros(
                self.params.shape + (np.sum(~gtab.b0s_mask),)
            )


class ExponentialIsotropicModel(IsotropicModel):
    """
    Representing the isotropic signal as a fit to an exponential decay
    function with b-values
    """

    @warning_for_keywords()
    def fit(self, data, *, mask=None, **kwargs):
        """
        Parameters
        ----------
        data : ndarray

        mask : array, optional
            A boolean array used to mark the coordinates in the data that
            should be analyzed. Has the shape `data.shape[:-1]`. Default:
            None, which implies that all points should be analyzed.

        Returns
        -------
        ExponentialIsotropicFit class instance.
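
        Examples
        --------
        A minimal sketch on a synthetic single voxel; the b-values, b-vectors
        and diffusivity below are illustrative assumptions:

        >>> import numpy as np
        >>> from dipy.core.gradients import gradient_table
        >>> from dipy.reconst.sfm import ExponentialIsotropicModel
        >>> bvals = np.array([0., 1000., 1000., 1000.])
        >>> bvecs = np.array([[0., 0., 0.], [1., 0., 0.],
        ...                   [0., 1., 0.], [0., 0., 1.]])
        >>> gtab = gradient_table(bvals, bvecs=bvecs)
        >>> data = np.exp(-bvals * 0.001)  # S0 = 1, isotropic MD = 1e-3
        >>> fit = ExponentialIsotropicModel(gtab).fit(data)
        >>> np.allclose(fit.params, 0.001)
        True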
""" to_fit = _to_fit_iso(data, self.gtab, mask=mask) # Fitting to the log-transformed relative data is much faster: nz_idx = to_fit > 0 to_fit[nz_idx] = np.log(to_fit[nz_idx]) to_fit[~nz_idx] = -np.inf params = -nanmean(to_fit / self.gtab.bvals[~self.gtab.b0s_mask], -1) if mask is None: params = np.reshape(params, data.shape[:-1]) else: out_params = np.zeros(data.shape[:-1]) out_params[mask] = params params = out_params return ExponentialIsotropicFit(self, params) class ExponentialIsotropicFit(IsotropicFit): """ A fit to the ExponentialIsotropicModel object, based on data. """ @warning_for_keywords() def predict(self, *, gtab=None): """ Predict the isotropic signal, based on a gradient table. In this case, the prediction will be for an exponential decay with the mean diffusivity derived from the data that was fit. Parameters ---------- gtab : a GradientTable class instance, optional Defaults to use the gtab from the IsotropicModel from which this fit was derived. """ if gtab is None: gtab = self.model.gtab if len(self.params.shape) == 0: return np.exp( -gtab.bvals[~gtab.b0s_mask] * (np.zeros(np.sum(~gtab.b0s_mask)) + self.params[..., np.newaxis]) ) else: return np.exp( -gtab.bvals[~gtab.b0s_mask] * ( np.zeros((self.params.shape[0], np.sum(~gtab.b0s_mask))) + self.params[..., np.newaxis] ) ) @warning_for_keywords() def sfm_design_matrix(gtab, sphere, response, *, mode="signal"): """ Construct the SFM design matrix Parameters ---------- gtab : GradientTable or Sphere Sets the rows of the matrix, if the mode is 'signal', this should be a GradientTable. If mode is 'odf' this should be a Sphere. sphere : Sphere Sets the columns of the matrix response : list of 3 elements The eigenvalues of a tensor which will serve as a kernel function. mode : str {'signal' | 'odf'}, optional Choose the (default) 'signal' for a design matrix containing predicted signal in the measurements defined by the gradient table for putative fascicles oriented along the vertices of the sphere. Otherwise, choose 'odf' for an odf convolution matrix, with values of the odf calculated from a tensor with the provided response eigenvalues, evaluated at the b-vectors in the gradient table, for the tensors with principal diffusion directions along the vertices of the sphere. Returns ------- mat : ndarray A design matrix that can be used for one of the following operations: when the 'signal' mode is used, each column contains the putative signal in each of the bvectors of the `gtab` if a fascicle is oriented in the direction encoded by the sphere vertex corresponding to this column. This is used for deconvolution with a measured DWI signal. If the 'odf' mode is chosen, each column instead contains the values of the tensor ODF for a tensor with a principal diffusion direction corresponding to this vertex. This is used to generate odfs from the fits of the SFM for the purpose of tracking. Examples -------- >>> import dipy.data as dpd >>> data, gtab = dpd.dsi_voxels() >>> sphere = dpd.get_sphere() >>> from dipy.reconst.sfm import sfm_design_matrix A canonical tensor approximating corpus-callosum voxels :footcite:p`Rokem2014`: >>> tensor_matrix = sfm_design_matrix(gtab, sphere, ... [0.0015, 0.0005, 0.0005]) A 'stick' function :footcite:p`Behrens2007`: >>> stick_matrix = sfm_design_matrix(gtab, sphere, [0.001, 0, 0]) References ---------- .. 
footbibliography::
    """
    if mode == "signal":
        with warnings.catch_warnings():
            warnings.simplefilter("ignore")
            mat_gtab = grad.gradient_table(
                gtab.bvals[~gtab.b0s_mask], bvecs=gtab.bvecs[~gtab.b0s_mask]
            )
        # Preallocate:
        mat = np.empty((np.sum(~gtab.b0s_mask), sphere.vertices.shape[0]))
    elif mode == "odf":
        mat = np.empty((gtab.x.shape[0], sphere.vertices.shape[0]))

    # Calculate column-wise:
    for ii, this_dir in enumerate(sphere.vertices):
        # Rotate the canonical tensor towards this vertex and calculate the
        # signal you would have gotten in the direction
        if mode == "signal":
            # For regressors based on the single tensor, remove $e^{-bD}$
            mat[:, ii] = sims.single_tensor(
                mat_gtab, evals=response, evecs=sims.all_tensor_evecs(this_dir)
            ) - np.exp(-mat_gtab.bvals * np.mean(response))
        elif mode == "odf":
            # Stick function
            if response[1] == 0 or response[2] == 0:
                mat[sphere.find_closest(sims.all_tensor_evecs(this_dir)[0]), ii] = 1
            else:
                mat[:, ii] = sims.single_tensor_odf(
                    gtab.vertices, evals=response, evecs=sims.all_tensor_evecs(this_dir)
                )
    return mat


class SparseFascicleModel(ReconstModel, Cache):
    @warning_for_keywords()
    def __init__(
        self,
        gtab,
        *,
        sphere=None,
        response=(0.0015, 0.0005, 0.0005),
        solver="ElasticNet",
        l1_ratio=0.5,
        alpha=0.001,
        isotropic=None,
        seed=42,
    ):
        """
        Initialize a Sparse Fascicle Model

        Parameters
        ----------
        gtab : GradientTable class instance
            Gradient table.
        sphere : Sphere class instance, optional
            A sphere on which coefficients will be estimated. Default:
            symmetric sphere with 362 points (from :mod:`dipy.data`).
        response : (3,) array-like, optional
            The eigenvalues of a canonical tensor to be used as the response
            function of single-fascicle signals.
        solver : string, or initialized linear model object.
            This will determine the algorithm used to solve the set of linear
            equations underlying this model. If it is a string it needs to be
            one of the following: {'ElasticNet', 'NNLS'}. Otherwise, it can be
            an object that inherits from `dipy.optimize.SKLearnLinearSolver`
            or an object with a similar interface from Scikit Learn:
            `sklearn.linear_model.ElasticNet`, `sklearn.linear_model.Lasso` or
            `sklearn.linear_model.Ridge` and other objects that inherit from
            `sklearn.base.RegressorMixin`.
        l1_ratio : float, optional
            Sets the balance between L1 and L2 regularization in ElasticNet
            :footcite:p:`Zou2005`.
        alpha : float, optional
            Sets the balance between least-squares error and L1/L2
            regularization in ElasticNet :footcite:p:`Zou2005`.
        isotropic : IsotropicModel class instance
            This is a class that implements the function that calculates the
            value of the isotropic signal. This is a value of the signal that
            is independent of direction, and therefore removed from both sides
            of the SFM equation. The default is an instance of IsotropicModel,
            but other functions can be inherited from IsotropicModel to
            implement other fits to the aspects of the data that depend on
            b-value, but not on direction.
        seed : int, optional
            Seed for the random number generator.

        Notes
        -----
        This is an implementation of the SFM, described in
        :footcite:p:`Rokem2015`.

        References
        ----------
        .. 
footbibliography:: """ ReconstModel.__init__(self, gtab) if sphere is None: sphere = dpd.get_sphere() self.sphere = sphere self.response = np.asarray(response) if isotropic is None: isotropic = IsotropicModel self.isotropic = isotropic if solver == "ElasticNet": self.solver = lm.ElasticNet( l1_ratio=l1_ratio, alpha=alpha, positive=True, warm_start=False, random_state=seed, ) elif solver in ("NNLS", "nnls"): self.solver = opt.NonNegativeLeastSquares() elif ( isinstance(solver, opt.SKLearnLinearSolver) or has_sklearn and isinstance(solver, sklearn.base.RegressorMixin) ): self.solver = solver else: # If sklearn is unavailable, we can fall back on nnls (but we also # warn the user that we are about to do that): if not has_sklearn: w = sklearn._msg + "\nAlternatively, you can use 'nnls' method " w += "to fit the SparseFascicleModel" warnings.warn(w, stacklevel=2) e_s = "The `solver` key-word argument needs to be: " e_s += "'ElasticNet', 'NNLS', or a " e_s += "`dipy.optimize.SKLearnLinearSolver` object" raise ValueError(e_s) @auto_attr def design_matrix(self): """ The design matrix for a SFM. Returns ------- ndarray The design matrix, where each column is a rotated version of the response function. """ return sfm_design_matrix(self.gtab, self.sphere, self.response, mode="signal") @warning_for_keywords() def _fit_solver2voxels(self, isopredict, vox_data, vox, *, parallel=False): # In voxels in which S0 is 0, we just want to keep the # parameters at all-zeros, and avoid nasty sklearn errors: if not (np.any(~np.isfinite(vox_data)) or np.all(vox_data == 0)): with warnings.catch_warnings(): warnings.simplefilter("ignore") if parallel: coef = { vox: self.solver.fit( self.design_matrix, vox_data - isopredict[vox] ).coef_ } else: coef = self.solver.fit( self.design_matrix, vox_data - isopredict[vox] ).coef_ else: if parallel: return {vox: np.zeros(self.design_matrix.shape[-1])} else: return np.zeros(self.design_matrix.shape[-1]) return coef @warning_for_keywords() def fit( self, data, *, mask=None, num_processes=1, parallel_backend="multiprocessing" ): """ Fit the SparseFascicleModel object to data. Parameters ---------- data : array The measured signal. mask : array, optional A boolean array used to mark the coordinates in the data that should be analyzed. Has the shape `data.shape[:-1]`. Default: None, which implies that all points should be analyzed. num_processes : int, optional Split the `fit` calculation to a pool of children processes using joblib. This only applies to 4D `data` arrays. Default is 1, which does not require joblib and will run `fit` serially. If < 0 the maximal number of cores minus ``num_processes + 1`` is used (enter -1 to use as many cores as possible). 0 raises an error. parallel_backend: str, ParallelBackendBase instance or None Specify the parallelization backend implementation. Supported backends are: - "loky" used by default, can induce some communication and memory overhead when exchanging input and output data with the worker Python processes. - "multiprocessing" previous process-based backend based on `multiprocessing.Pool`. Less robust than `loky`. - "threading" is a very low-overhead backend but it suffers from the Python Global Interpreter Lock if the called function relies a lot on Python objects. "threading" is mostly useful when the execution bottleneck is a compiled extension that explicitly releases the GIL (for instance a Cython loop wrapped in a "with nogil" block or an expensive call to a library such as NumPy). 
        Returns
        -------
        SparseFascicleFit object
        """
        if mask is None:
            # Flatten it to 2D either way:
            data_in_mask = np.reshape(data, (-1, data.shape[-1]))
        else:
            # Check for valid shape of the mask
            if mask.shape != data.shape[:-1]:
                raise ValueError("Mask is not the same shape as data.")
            mask = np.asarray(mask, dtype=bool)
            data_in_mask = np.reshape(data[mask], (-1, data.shape[-1]))

        # Fitting is done on the relative signal (S/S0):
        flat_S0 = np.mean(data_in_mask[..., self.gtab.b0s_mask], -1)
        if not flat_S0.size or not flat_S0.max():
            flat_S = np.zeros(data_in_mask[..., ~self.gtab.b0s_mask].shape)
        else:
            flat_S = data_in_mask[..., ~self.gtab.b0s_mask] / flat_S0[..., None]
        isotropic = self.isotropic(self.gtab).fit(data, mask=mask)

        flat_params = np.zeros((data_in_mask.shape[0], self.design_matrix.shape[-1]))
        del data_in_mask
        gc.collect()

        isopredict = isotropic.predict()
        if mask is None:
            isopredict = np.reshape(isopredict, (-1, isopredict.shape[-1]))
        else:
            isopredict = isopredict[mask]

        if not num_processes:
            num_processes = determine_num_processes(num_processes)

        if num_processes > 1 and has_joblib:
            with joblib.Parallel(
                n_jobs=num_processes, backend=parallel_backend, mmap_mode="r+"
            ) as parallel:
                out = parallel(
                    joblib.delayed(self._fit_solver2voxels)(
                        isopredict, vox_data, vox, parallel=True
                    )
                    for vox, vox_data in enumerate(flat_S)
                )
                del parallel

            flat_params_dict = {}
            for d in out:
                flat_params_dict.update(d)
            flat_params = np.concatenate(
                [
                    np.array(i).reshape(1, flat_params.shape[1])
                    for i in list(
                        OrderedDict(
                            sorted(flat_params_dict.items(), key=lambda x: int(x[0]))
                        ).values()
                    )
                ]
            )
        else:
            for vox, vox_data in enumerate(flat_S):
                flat_params[vox] = self._fit_solver2voxels(
                    isopredict, vox_data, vox, parallel=False
                )

        del isopredict, flat_S
        gc.collect()

        if mask is None:
            out_shape = data.shape[:-1] + (-1,)
            beta = flat_params.reshape(out_shape)
            S0 = flat_S0.reshape(data.shape[:-1])
        else:
            beta = np.zeros(data.shape[:-1] + (self.design_matrix.shape[-1],))
            beta[mask, :] = flat_params
            S0 = np.zeros(data.shape[:-1])
            S0[mask] = flat_S0

        return SparseFascicleFit(self, beta, S0, isotropic)


class SparseFascicleFit(ReconstFit):
    def __init__(self, model, beta, S0, iso):
        """
        Initialize a SparseFascicleFit class instance

        Parameters
        ----------
        model : a SparseFascicleModel object.
        beta : ndarray
            The parameters of fit to data.
        S0 : ndarray
            The mean non-diffusion-weighted signal.
        iso : IsotropicFit class instance
            A representation of the isotropic signal, together with parameters
            of the isotropic signal in each voxel, that is capable of
            deriving/predicting an isotropic signal, based on a
            gradient-table.
        """
        self.model = model
        self.beta = beta
        self.S0 = S0
        self.iso = iso

    def odf(self, sphere):
        """
        The orientation distribution function of the SFM

        Parameters
        ----------
        sphere : Sphere
            The points in which the ODF is evaluated

        Returns
        -------
        odf : ndarray of shape (x, y, z, sphere.vertices.shape[0])
        """
        odf_matrix = self.model.cache_get("odf_matrix", key=sphere)
        if odf_matrix is None:
            odf_matrix = sfm_design_matrix(
                sphere, self.model.sphere, self.model.response, mode="odf"
            )
            self.model.cache_set("odf_matrix", key=sphere, value=odf_matrix)

        return np.dot(
            odf_matrix, self.beta.reshape(-1, self.beta.shape[-1]).T
        ).T.reshape(self.beta.shape[:-1] + (odf_matrix.shape[0],))

    @warning_for_keywords()
    def predict(self, *, gtab=None, response=None, S0=None):
        """
        Predict the signal based on the SFM parameters

        Parameters
        ----------
        gtab : GradientTable, optional
            The bvecs/bvals to predict the signal on. 
Default: the gtab from the model object.
        response : list of 3 elements, optional
            The eigenvalues of a tensor which will serve as a kernel
            function. Default: the response of the model object
            (`model.response`).
        S0 : float or array, optional
            The non-diffusion-weighted signal. Default: use the S0 of the data

        Returns
        -------
        pred_sig : ndarray
            The signal predicted in each voxel/direction
        """
        if response is None:
            response = self.model.response
        if gtab is None:
            _matrix = self.model.design_matrix
            gtab = self.model.gtab
        # The only thing we can't change at this point is the sphere we use
        # (which sets the width of our design matrix):
        else:
            _matrix = sfm_design_matrix(gtab, self.model.sphere, response)
        # Get them all at once:
        pred_weighted = np.dot(
            _matrix, self.beta.reshape(-1, self.beta.shape[-1]).T
        ).T.reshape(self.beta.shape[:-1] + (_matrix.shape[0],))

        if S0 is None:
            S0 = self.S0
        if isinstance(S0, np.ndarray):
            S0 = S0[..., None]

        pre_pred_sig = S0 * (
            pred_weighted + self.iso.predict(gtab=gtab).reshape(pred_weighted.shape)
        )
        pred_sig = np.zeros(pre_pred_sig.shape[:-1] + (gtab.bvals.shape[0],))
        pred_sig[..., ~gtab.b0s_mask] = pre_pred_sig
        pred_sig[..., gtab.b0s_mask] = S0

        return pred_sig.squeeze()
dipy-1.11.0/dipy/reconst/shm.py000077500000000000000000001562741476546756600163740ustar00rootroot00000000000000"""Tools for using spherical harmonic models to fit diffusion data.

See :footcite:p:`Aganj2009`, :footcite:p:`Descoteaux2007`,
:footcite:p:`TristanVega2009a`, and :footcite:p:`TristanVega2010`.

References
----------
.. footbibliography::

Note about the Transpose:
In the literature the matrix representation of these methods is often written
as Y = Bx where B is some design matrix and Y and x are column vectors. In our
case the input data, a dwi stored as a nifti file for example, is stored as
row vectors (ndarrays) of the form (x, y, z, n), where n is the number of
diffusion directions. We could transpose and reshape the data to be
(n, x*y*z), so that we could directly plug it into the above equation.
However, I have chosen to keep the data as is and implement the relevant
equations rewritten in the following form: Y.T = x.T B.T, or in python syntax
data = np.dot(sh_coef, B.T) where data is Y.T and sh_coef is x.T.
"""

from warnings import warn

import numpy as np
from numpy.random import randint
import scipy.special as sps

from dipy.core.geometry import cart2sphere
from dipy.core.onetime import auto_attr
from dipy.reconst.cache import Cache
from dipy.reconst.odf import OdfFit, OdfModel
from dipy.testing.decorators import warning_for_keywords
from dipy.utils.compatibility import check_max_version
from dipy.utils.deprecator import deprecate_with_version, deprecated_params

descoteaux07_legacy_msg = (
    "The legacy descoteaux07 SH basis uses absolute values for negative "
    "harmonic phase factors. It is outdated and will be deprecated in a "
    "future DIPY release. Consider using the new descoteaux07 basis by "
    "setting the `legacy` parameter to `False`."
)
tournier07_legacy_msg = (
    "The legacy tournier07 basis is not normalized. It is outdated and will "
    "be deprecated in a future release of DIPY. Consider using the new "
    "tournier07 basis by setting the `legacy` parameter to `False`."
) def _copydoc(obj): def bandit(f): f.__doc__ = obj.__doc__ return f return bandit @deprecated_params("n", new_name="l_values", since="1.9", until="2.0") def forward_sdeconv_mat(r_rh, l_values): """Build forward spherical deconvolution matrix Parameters ---------- r_rh : ndarray Rotational harmonics coefficients for the single fiber response function. Each element ``rh[i]`` is associated with spherical harmonics of order ``2*i``. l_values : ndarray The orders ($l$) of spherical harmonic function associated with each row of the deconvolution matrix. Only even orders are allowed Returns ------- R : ndarray (N, N) Deconvolution matrix with shape (N, N) """ if np.any(l_values % 2): raise ValueError("l_values has odd orders, expecting only even orders") return np.diag(r_rh[l_values // 2]) @deprecated_params( [ "m", "n", ], new_name=["m_values", "l_values"], since="1.9", until="2.0", ) def sh_to_rh(r_sh, m_values, l_values): """Spherical harmonics (SH) to rotational harmonics (RH) Calculate the rotational harmonic decomposition up to harmonic phase factor ``m``, order ``l`` for an axially and antipodally symmetric function. Note that all ``m != 0`` coefficients will be ignored as axial symmetry is assumed. Hence, there will be ``(sh_order/2 + 1)`` non-zero coefficients. See :footcite:p:`Tournier2007` for further details about the method. Parameters ---------- r_sh : ndarray (N,) ndarray of SH coefficients for the single fiber response function. These coefficients must correspond to the real spherical harmonic functions produced by `shm.real_sh_descoteaux_from_index`. m_values : ndarray (N,) The phase factors ($m$) of the spherical harmonic function associated with each coefficient. l_values : ndarray (N,) The orders ($l$) of the spherical harmonic function associated with each coefficient. Returns ------- r_rh : ndarray (``(sh_order + 1)*(sh_order + 2)/2``,) Rotational harmonics coefficients representing the input `r_sh` See Also -------- shm.real_sh_descoteaux_from_index, shm.real_sh_descoteaux References ---------- .. footbibliography:: """ mask = m_values == 0 # The delta function at theta = phi = 0 is known to have zero coefficients # where m != 0, therefore we need only compute the coefficients at m=0. dirac_sh = gen_dirac(0, l_values[mask], 0, 0) r_rh = r_sh[mask] / dirac_sh return r_rh @deprecated_params( [ "m", "n", ], new_name=["m_values", "l_values"], since="1.9", until="2.0", ) @warning_for_keywords() def gen_dirac(m_values, l_values, theta, phi, *, legacy=True): """Generate Dirac delta function orientated in (theta, phi) on the sphere The spherical harmonics (SH) representation of this Dirac is returned as coefficients to spherical harmonic functions produced from ``descoteaux07`` basis. Parameters ---------- m_values : ndarray (N,) The phase factors of the spherical harmonic function associated with each coefficient. l_values : ndarray (N,) The order ($l$) of the spherical harmonic function associated with each coefficient. theta : float [0, pi] The polar (colatitudinal) coordinate. phi : float [0, 2*pi] The azimuthal (longitudinal) coordinate. legacy: bool, optional If true, uses DIPY's legacy descoteaux07 implementation (where $|m|$ is used for m < 0). Else, implements the basis as defined in Descoteaux et al. 2007 (without the absolute value). See Also -------- shm.real_sh_descoteaux_from_index, shm.real_sh_descoteaux Returns ------- dirac : ndarray SH coefficients representing the Dirac function. The shape of this is `(m + 2) * (m + 1) / 2`. 
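
    Examples
    --------
    As a quick sanity check (a sketch): the $l = 0$ coefficient of any Dirac
    on the sphere equals ``1 / (2 * np.sqrt(np.pi))``; the non-legacy basis
    is used here to avoid the pending deprecation warning:

    >>> import numpy as np
    >>> from dipy.reconst.shm import gen_dirac
    >>> sh = gen_dirac(np.array([0]), np.array([0]), 0, 0, legacy=False)
    >>> np.allclose(sh, 1 / (2 * np.sqrt(np.pi)))
    True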
""" return real_sh_descoteaux_from_index(m_values, l_values, theta, phi, legacy=legacy) @deprecated_params( [ "m", "n", ], new_name=["m_values", "l_values"], since="1.9", until="2.0", ) @warning_for_keywords() def spherical_harmonics(m_values, l_values, theta, phi, *, use_scipy=True): """Compute spherical harmonics. This may take scalar or array arguments. The inputs will be broadcast against each other. Parameters ---------- m_values : array of int ``|m| <= l`` The phase factors ($m$) of the harmonics. l_values : array of int ``l >= 0`` The orders ($l$) of the harmonics. theta : float [0, 2*pi] The azimuthal (longitudinal) coordinate. phi : float [0, pi] The polar (colatitudinal) coordinate. use_scipy : bool, optional If True, use scipy implementation. Returns ------- y_mn : complex float The harmonic $Y_l^m$ sampled at ``theta`` and ``phi``. Notes ----- This is a faster implementation of scipy.special.sph_harm for scipy version < 0.15.0. For scipy 0.15 and onwards, we use the scipy implementation of the function. The usual definitions for ``theta` and `phi`` used in DIPY are interchanged in the method definition to agree with the definitions in scipy.special.sph_harm, where `theta` represents the azimuthal coordinate and `phi` represents the polar coordinate. Although scipy uses a naming convention where ``m`` is the order and ``n`` is the degree of the SH, the opposite of DIPY's, their definition for both parameters is the same as ours, with ``l >= 0`` and ``|m| <= l``. """ if use_scipy: if check_max_version("scipy", "1.15.0", strict=True): return sps.sph_harm(m_values, l_values, theta, phi).astype(complex) else: degree = ( l_values.astype(int) if isinstance(l_values, np.ndarray) else int(l_values) ) order = ( m_values.astype(int) if isinstance(m_values, np.ndarray) else int(m_values) ) return sps.sph_harm_y(degree, order, phi, theta).astype(complex) x = np.cos(phi) val = sps.lpmv(m_values, l_values, x).astype(complex) val *= np.sqrt((2 * l_values + 1) / 4.0 / np.pi) val *= np.exp( 0.5 * (sps.gammaln(l_values - m_values + 1) - sps.gammaln(l_values + m_values + 1)) ) val = val * np.exp(1j * m_values * theta) return val @deprecate_with_version( "dipy.reconst.shm.real_sph_harm is deprecated, " "Please use " "dipy.reconst.shm.real_sh_descoteaux_from_index " "instead", since="1.3", until="2.0", ) @deprecated_params( [ "m", "n", ], new_name=["m_values", "l_values"], since="1.9", until="2.0", ) def real_sph_harm(m_values, l_values, theta, phi): r"""Compute real spherical harmonics. Where the real harmonic $Y_l^m$ is defined to be: .. math:: :nowrap: Y_l^m = \begin{cases} \sqrt{2} * \Im(Y_l^m) \; if m > 0 \\ Y^0_l \; if m = 0 \\ \sqrt{2} * \Re(Y_l^{|m|}) \; if m < 0 \\ \end{cases} This may take scalar or array arguments. The inputs will be broadcast against each other. Parameters ---------- m_values : array of int ``|m| <= l`` The phase factors ($m$) of the harmonics. l_values : array of int ``l >= 0`` The orders ($l$) of the harmonics. theta : float [0, pi] The polar (colatitudinal) coordinate. phi : float [0, 2*pi] The azimuthal (longitudinal) coordinate. Returns ------- y_mn : real float The real harmonic $Y_l^m$ sampled at `theta` and `phi`. 
See Also -------- scipy.special.sph_harm """ return real_sh_descoteaux_from_index(m_values, l_values, theta, phi, legacy=True) @deprecated_params( [ "m", "n", ], new_name=["m_values", "l_values"], since="1.9", until="2.0", ) @warning_for_keywords() def real_sh_tournier_from_index(m_values, l_values, theta, phi, *, legacy=True): r"""Compute real spherical harmonics. The SH are computed as initially defined in :footcite:p:`Tournier2007` then updated in MRtrix3 :footcite:p:`Tournier2019`, where the real harmonic $Y_l^m$ is defined to be: .. math:: :nowrap: Y_l^m = \begin{cases} \sqrt{2} * \Re(Y_l^m) \; if m > 0 \\ Y^0_l \; if m = 0 \\ \sqrt{2} * \Im(Y_l^{|m|}) \; if m < 0 \\ \end{cases} This may take scalar or array arguments. The inputs will be broadcast against each other. Parameters ---------- m_values : array of int ``|m| <= l`` The phase factors ($m$) of the harmonics. l_values : array of int ``l >= 0`` The orders ($l$) of the harmonics. theta : float [0, pi] The polar (colatitudinal) coordinate. phi : float [0, 2*pi] The azimuthal (longitudinal) coordinate. legacy: bool, optional If true, uses MRtrix 0.2 SH basis definition, where the ``sqrt(2)`` factor is omitted. Else, uses the MRtrix 3 definition presented above. Returns ------- real_sh : real float The real harmonics $Y_l^m$ sampled at ``theta`` and ``phi``. References ---------- .. footbibliography:: """ # In the m < 0 case, Tournier basis considers |m| sh = spherical_harmonics(np.abs(m_values), l_values, phi, theta) real_sh = np.where(m_values < 0, sh.imag, sh.real) if not legacy: # The Tournier basis from MRtrix3 is normalized real_sh *= np.where(m_values == 0, 1.0, np.sqrt(2)) else: warn(tournier07_legacy_msg, category=PendingDeprecationWarning, stacklevel=2) return real_sh @deprecated_params( [ "m", "n", ], new_name=["m_values", "l_values"], since="1.9", until="2.0", ) @warning_for_keywords() def real_sh_descoteaux_from_index(m_values, l_values, theta, phi, *, legacy=True): r"""Compute real spherical harmonics. The definition adopted here follows :footcite:p:`Descoteaux2007`, where the real harmonic $Y_l^m$ is defined to be: .. math:: :nowrap: Y_l^m = \begin{cases} \sqrt{2} * \Im(Y_l^m) \; if m > 0 \\ Y^0_l \; if m = 0 \\ \sqrt{2} * \Re(Y_l^m) \; if m < 0 \\ \end{cases} This may take scalar or array arguments. The inputs will be broadcast against each other. Parameters ---------- m_values : array of int ``|m| <= l`` The phase factors ($m$) of the harmonics. l_values : array of int ``l >= 0`` The orders ($l$) of the harmonics. theta : float [0, pi] The polar (colatitudinal) coordinate. phi : float [0, 2*pi] The azimuthal (longitudinal) coordinate. legacy: bool, optional If true, uses DIPY's legacy descoteaux07 implementation (where $|m|$ is used for m < 0). Else, implements the basis as defined in Descoteaux et al. 2007 (without the absolute value). Returns ------- real_sh : real float The real harmonic $Y_l^m$ sampled at ``theta`` and ``phi``. References ---------- .. 
footbibliography::
    """
    if legacy:
        # In the case where m < 0, legacy descoteaux basis considers |m|
        warn(descoteaux07_legacy_msg, category=PendingDeprecationWarning, stacklevel=2)
        sh = spherical_harmonics(np.abs(m_values), l_values, phi, theta)
    else:
        # In the cited paper, the basis is defined without the absolute value
        sh = spherical_harmonics(m_values, l_values, phi, theta)

    real_sh = np.where(m_values > 0, sh.imag, sh.real)
    real_sh *= np.where(m_values == 0, 1.0, np.sqrt(2))

    return real_sh


@deprecated_params("sh_order", new_name="sh_order_max", since="1.9", until="2.0")
@warning_for_keywords()
def real_sh_tournier(sh_order_max, theta, phi, *, full_basis=False, legacy=True):
    r"""Compute real spherical harmonics.

    The SH are computed as initially defined in :footcite:p:`Tournier2007`
    then updated in MRtrix3 :footcite:p:`Tournier2019`, where the real
    harmonic $Y_l^m$ is defined to be:

    .. math::
       :nowrap:

        Y_l^m =
        \begin{cases}
            \sqrt{2} * \Re(Y_l^m) \; if m > 0 \\
            Y^0_l \; if m = 0 \\
            \sqrt{2} * \Im(Y_l^{|m|}) \; if m < 0 \\
        \end{cases}

    This may take scalar or array arguments. The inputs will be broadcast
    against each other.

    Parameters
    ----------
    sh_order_max : int
        The maximum order ($l$) of the spherical harmonic basis.
    theta : float [0, pi]
        The polar (colatitudinal) coordinate.
    phi : float [0, 2*pi]
        The azimuthal (longitudinal) coordinate.
    full_basis: bool, optional
        If true, returns a basis including odd order SH functions as well as
        even order SH functions. Else returns only even order SH functions.
    legacy: bool, optional
        If true, uses MRtrix 0.2 SH basis definition, where the ``sqrt(2)``
        factor is omitted. Else, uses the MRtrix 3 definition presented above.

    Returns
    -------
    real_sh : real float
        The real harmonics $Y_l^m$ sampled at ``theta`` and ``phi``.
    m_values : array of int
        The phase factor ($m$) of the harmonics.
    l_values : array of int
        The order ($l$) of the harmonics.

    References
    ----------
    .. footbibliography::
    """
    m_values, l_values = sph_harm_ind_list(sh_order_max, full_basis=full_basis)

    phi = np.reshape(phi, [-1, 1])
    theta = np.reshape(theta, [-1, 1])

    real_sh = real_sh_tournier_from_index(m_values, l_values, theta, phi, legacy=legacy)

    return real_sh, m_values, l_values


@deprecated_params("sh_order", new_name="sh_order_max", since="1.9", until="2.0")
@warning_for_keywords()
def real_sh_descoteaux(sh_order_max, theta, phi, *, full_basis=False, legacy=True):
    r"""Compute real spherical harmonics.

    The definition adopted here follows :footcite:p:`Descoteaux2007`, where
    the real harmonic $Y_l^m$ is defined to be:

    .. math::
       :nowrap:

        Y_l^m =
        \begin{cases}
            \sqrt{2} * \Im(Y_l^m) \; if m > 0 \\
            Y^0_l \; if m = 0 \\
            \sqrt{2} * \Re(Y_l^m) \; if m < 0 \\
        \end{cases}

    This may take scalar or array arguments. The inputs will be broadcast
    against each other.

    Parameters
    ----------
    sh_order_max : int
        The maximum order ($l$) of the spherical harmonic basis.
    theta : float [0, pi]
        The polar (colatitudinal) coordinate.
    phi : float [0, 2*pi]
        The azimuthal (longitudinal) coordinate.
    full_basis: bool, optional
        If true, returns a basis including odd order SH functions as well as
        even order SH functions. Otherwise returns only even order SH
        functions.
    legacy: bool, optional
        If true, uses DIPY's legacy descoteaux07 implementation (where $|m|$
        is used for m < 0). Else, implements the basis as defined in
        Descoteaux et al. 2007.

    Returns
    -------
    real_sh : real float
        The real harmonic $Y_l^m$ sampled at ``theta`` and ``phi``.
    m_values : array of int
        The phase factor ($m$) of the harmonics. 
    l_values : array of int
        The order ($l$) of the harmonics.

    References
    ----------
    .. footbibliography::
    """
    m_value, l_value = sph_harm_ind_list(sh_order_max, full_basis=full_basis)

    phi = np.reshape(phi, [-1, 1])
    theta = np.reshape(theta, [-1, 1])

    real_sh = real_sh_descoteaux_from_index(m_value, l_value, theta, phi, legacy=legacy)

    return real_sh, m_value, l_value


@deprecate_with_version(
    "dipy.reconst.shm.real_sym_sh_mrtrix is deprecated, "
    "Please use dipy.reconst.shm.real_sh_tournier instead",
    since="1.3",
    until="2.0",
)
@deprecated_params("sh_order", new_name="sh_order_max", since="1.9", until="2.0")
def real_sym_sh_mrtrix(sh_order_max, theta, phi):
    r"""
    Compute real symmetric spherical harmonics.

    The SH are computed as initially defined in :footcite:t:`Tournier2007`,
    where the real harmonic $Y_l^m$ is defined to be:

    .. math::
       :nowrap:

        Y_l^m =
        \begin{cases}
            \Re(Y_l^m) \; if m > 0 \\
            Y^0_l \; if m = 0 \\
            \Im(Y_l^{|m|}) \; if m < 0 \\
        \end{cases}

    This may take scalar or array arguments. The inputs will be broadcast
    against each other.

    Parameters
    ----------
    sh_order_max : int
        The maximum order ($l$) of the spherical harmonic basis.
    theta : float [0, pi]
        The polar (colatitudinal) coordinate.
    phi : float [0, 2*pi]
        The azimuthal (longitudinal) coordinate.

    Returns
    -------
    y_mn : real float
        The real harmonic $Y_l^m$ sampled at ``theta`` and ``phi`` as
        implemented in mrtrix. Warning: the basis is
        :footcite:t:`Tournier2007`; :footcite:p:`Tournier2004` is slightly
        different.
    m_values : array
        The phase factor ($m$) of the harmonics.
    l_values : array
        The order ($l$) of the harmonics.

    References
    ----------
    .. footbibliography::
    """
    return real_sh_tournier(sh_order_max, theta, phi, legacy=True)


@deprecate_with_version(
    "dipy.reconst.shm.real_sym_sh_basis is deprecated, "
    "Please use dipy.reconst.shm.real_sh_descoteaux "
    "instead",
    since="1.3",
    until="2.0",
)
@deprecated_params("sh_order", new_name="sh_order_max", since="1.9", until="2.0")
def real_sym_sh_basis(sh_order_max, theta, phi):
    r"""Samples a real symmetric spherical harmonic basis at point on the sphere

    Samples the basis functions up to order `sh_order_max` at points on the
    sphere given by `theta` and `phi`. The basis functions are defined here
    the same way as in :footcite:p:`Descoteaux2007` where the real harmonic
    $Y_l^m$ is defined to be:

    .. math::
       :nowrap:

        Y_l^m =
        \begin{cases}
            \sqrt{2} * \Im(Y_l^m) \; if m > 0 \\
            Y^0_l \; if m = 0 \\
            \sqrt{2} * \Re(Y_l^{|m|}) \; if m < 0 \\
        \end{cases}

    This may take scalar or array arguments. The inputs will be broadcast
    against each other.

    Parameters
    ----------
    sh_order_max : int
        The maximum order ($l$) of the spherical harmonic basis. Even int > 0,
        max spherical harmonic order
    theta : float [0, pi]
        The polar (colatitudinal) coordinate.
    phi : float [0, 2*pi]
        The azimuthal (longitudinal) coordinate.

    Returns
    -------
    y_mn : real float
        The real harmonic $Y_l^m$ sampled at ``theta`` and ``phi``
    m_values : array of int
        The phase factor ($m$) of the harmonics.
    l_values : array of int
        The order ($l$) of the harmonics.

    References
    ----------
    .. 
footbibliography::
    """
    return real_sh_descoteaux(sh_order_max, theta, phi, legacy=True)


sph_harm_lookup = {
    None: real_sh_descoteaux,
    "tournier07": real_sh_tournier,
    "descoteaux07": real_sh_descoteaux,
}


@deprecated_params("sh_order", new_name="sh_order_max", since="1.9", until="2.0")
@warning_for_keywords()
def sph_harm_ind_list(sh_order_max, *, full_basis=False):
    """
    Returns the order (``l``) and phase_factor (``m``) of all the symmetric
    spherical harmonics of order less than or equal to ``sh_order_max``. The
    results, ``m_list`` and ``l_list``, are kx1 arrays, where k depends on
    ``sh_order_max``. They can be passed to
    :func:`real_sh_descoteaux_from_index` and
    :func:`real_sh_tournier_from_index`.

    Parameters
    ----------
    sh_order_max : int
        The maximum order ($l$) of the spherical harmonic basis.
        Even int > 0, max order to return
    full_basis: bool, optional
        True for SH basis with even and odd order terms

    Returns
    -------
    m_list : array of int
        phase factors ($m$) of even spherical harmonics
    l_list : array of int
        orders ($l$) of even spherical harmonics

    See Also
    --------
    shm.real_sh_descoteaux_from_index, shm.real_sh_tournier_from_index
    """
    if full_basis:
        l_range = np.arange(0, sh_order_max + 1, dtype=int)
        ncoef = int((sh_order_max + 1) * (sh_order_max + 1))
    else:
        if sh_order_max % 2 != 0:
            raise ValueError("sh_order_max must be an even integer >= 0")
        l_range = np.arange(0, sh_order_max + 1, 2, dtype=int)
        ncoef = int((sh_order_max + 2) * (sh_order_max + 1) // 2)

    l_list = np.repeat(l_range, l_range * 2 + 1)
    offset = 0
    m_list = np.empty(ncoef, "int")
    for ii in l_range:
        m_list[offset : offset + 2 * ii + 1] = np.arange(-ii, ii + 1)
        offset = offset + 2 * ii + 1

    # makes the arrays ncoef by 1, allows for easy broadcasting later in code
    return m_list, l_list


@warning_for_keywords()
def order_from_ncoef(ncoef, *, full_basis=False):
    """
    Given a number ``ncoef`` of coefficients, calculate back the
    ``sh_order_max``

    Parameters
    ----------
    ncoef: int
        number of coefficients
    full_basis: bool, optional
        True when coefficients are for a full SH basis.

    Returns
    -------
    sh_order_max: int
        maximum order ($l$) of SH basis
    """
    if full_basis:
        # Solve the equation :
        # ncoef = (sh_order_max + 1) * (sh_order_max + 1)
        return int(np.sqrt(ncoef) - 1)

    # Solve the quadratic equation derived from :
    # ncoef = (sh_order_max + 2) * (sh_order_max + 1) / 2
    return -1 + int(np.sqrt(9 - 4 * (2 - 2 * ncoef)) // 2)


def smooth_pinv(B, L):
    """Regularized pseudo-inverse

    Computes a regularized least square inverse of B

    Parameters
    ----------
    B : array_like (n, m)
        Matrix to be inverted
    L : array_like (m,)

    Returns
    -------
    inv : ndarray (m, n)
        regularized least square inverse of B

    Notes
    -----
    In the literature this inverse is often written $(B^{T}B+L^{2})^{-1}B^{T}$.
    However here this inverse is implemented using the pseudo-inverse because
    it is more numerically stable than the direct implementation of the matrix
    product.
    """
    L = np.diag(L)
    inv = np.linalg.pinv(np.concatenate((B, L)))
    return inv[:, : len(B)]


def lazy_index(index):
    """Produces a lazy index

    Returns a slice that can be used for indexing an array, if no slice can be
    made index is returned as is.
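
    Examples
    --------
    A boolean mask over contiguous entries collapses to a slice (a small
    illustrative sketch):

    >>> import numpy as np
    >>> from dipy.reconst.shm import lazy_index
    >>> idx = lazy_index(np.array([False, True, True, True]))
    >>> np.arange(4)[idx]
    array([1, 2, 3])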
""" index = np.array(index) assert index.ndim == 1 if index.dtype.kind == "b": index = index.nonzero()[0] if len(index) == 1: return slice(index[0], index[0] + 1) step = np.unique(np.diff(index)) if len(step) != 1 or step[0] == 0: return index else: return slice(index[0], index[-1] + 1, step[0]) @warning_for_keywords() def _gfa_sh(coef, *, sh0_index=0): """The gfa of the odf, computed from the spherical harmonic coefficients This is a private function because it only works for coefficients of normalized sh bases. Parameters ---------- coef : array The coefficients, using a normalized sh basis, that represent each odf. sh0_index : int, optional The index of the coefficient associated with the 0th order sh harmonic. Returns ------- gfa_values : array The gfa of each odf. """ coef_sq = coef**2 numer = coef_sq[..., sh0_index] denom = coef_sq.sum(-1) # The sum of the square of the coefficients being zero is the same as all # the coefficients being zero allzero = denom == 0 # By adding 1 to numer and denom where both and are 0, we prevent 0/0 numer = numer + allzero denom = denom + allzero return np.sqrt(1.0 - (numer / denom)) class SphHarmModel(OdfModel, Cache): """To be subclassed by all models that return a SphHarmFit when fit.""" def sampling_matrix(self, sphere): """The matrix needed to sample ODFs from coefficients of the model. Parameters ---------- sphere : Sphere Points used to sample ODF. Returns ------- sampling_matrix : array The size of the matrix will be (N, M) where N is the number of vertices on sphere and M is the number of coefficients needed by the model. """ sampling_matrix = self.cache_get("sampling_matrix", sphere) if sampling_matrix is None: sh_order = self.sh_order_max theta = sphere.theta phi = sphere.phi sampling_matrix, m_values, l_values = real_sh_descoteaux( sh_order, theta, phi ) self.cache_set("sampling_matrix", sphere, sampling_matrix) return sampling_matrix class QballBaseModel(SphHarmModel): """To be subclassed by Qball type models.""" @deprecated_params("sh_order", new_name="sh_order_max", since="1.9", until="2.0") @warning_for_keywords() def __init__( self, gtab, sh_order_max, *, smooth=0.006, min_signal=1e-5, assume_normed=False ): """Creates a model that can be used to fit or sample diffusion data Parameters ---------- gtab : GradientTable Diffusion gradients used to acquire data sh_order_max : even int >= 0 the maximal spherical harmonic order ($l$) of the model smooth : float between 0 and 1, optional The regularization parameter of the model min_signal : float, > 0, optional During fitting, all signal values less than `min_signal` are clipped to `min_signal`. This is done primarily to avoid values less than or equal to zero when taking logs. assume_normed : bool, optional If True, clipping and normalization of the data with respect to the mean B0 signal are skipped during mode fitting. This is an advanced feature and should be used with care. 
See Also -------- normalize_data """ SphHarmModel.__init__(self, gtab) lpn = ( sps.lpn if check_max_version("scipy", "1.15.0", strict=True) else sps.legendre_p_all ) self._where_b0s = lazy_index(gtab.b0s_mask) self._where_dwi = lazy_index(~gtab.b0s_mask) self.assume_normed = assume_normed self.min_signal = min_signal x, y, z = gtab.gradients[self._where_dwi].T r, theta, phi = cart2sphere(x, y, z) B, m_values, l_values = real_sh_descoteaux( sh_order_max, theta[:, None], phi[:, None] ) L = -l_values * (l_values + 1) legendre0 = lpn(sh_order_max, 0)[0] F = legendre0[l_values] self.sh_order_max = sh_order_max self.B = B self.m_values = m_values self.l_values = l_values self._set_fit_matrix(B, L, F, smooth) def _set_fit_matrix(self, *args): """Should be set in a subclass and is called by __init__""" msg = "User must implement this method in a subclass" raise NotImplementedError(msg) @warning_for_keywords() def fit(self, data, *, mask=None): """Fits the model to diffusion data and returns the model fit""" # Normalize the data and fit coefficients if not self.assume_normed: data = normalize_data(data, self._where_b0s, min_signal=self.min_signal) # Compute coefficients using abstract method coef = self._get_shm_coef(data) # Apply the mask to the coefficients if mask is not None: mask = np.asarray(mask, dtype=bool) coef *= mask[..., None] return SphHarmFit(self, coef, mask) class SphHarmFit(OdfFit): """Diffusion data fit to a spherical harmonic model""" def __init__(self, model, shm_coef, mask): self.model = model self._shm_coef = shm_coef self.mask = mask @property def shape(self): return self._shm_coef.shape[:-1] def __getitem__(self, index): """Allowing indexing into fit""" # Index shm_coefficients if isinstance(index, tuple): coef_index = index + (Ellipsis,) else: coef_index = index new_coef = self._shm_coef[coef_index] # Index mask if self.mask is not None: new_mask = self.mask[index] assert new_mask.shape == new_coef.shape[:-1] else: new_mask = None return SphHarmFit(self.model, new_coef, new_mask) def odf(self, sphere): """Samples the odf function on the points of a sphere Parameters ---------- sphere : Sphere The points on which to sample the odf. Returns ------- values : ndarray The value of the odf on each point of `sphere`. """ B = self.model.sampling_matrix(sphere) return np.dot(self.shm_coeff, B.T) @auto_attr def gfa(self): return _gfa_sh(self.shm_coeff, sh0_index=0) @property def shm_coeff(self): """The spherical harmonic coefficients of the odf Make this a property for now, if there is a use case for modifying the coefficients we can add a setter or expose the coefficients more directly """ return self._shm_coef @warning_for_keywords() def predict(self, *, gtab=None, S0=1.0): """ Predict the diffusion signal from the model coefficients. Parameters ---------- gtab : a GradientTable class instance The directions and bvalues on which prediction is desired S0 : float array The mean non-diffusion-weighted signal in each voxel. """ if not hasattr(self.model, "predict"): msg = "This model does not have prediction implemented yet" raise NotImplementedError(msg) return self.model.predict(self._shm_coef, gtab=gtab, S0=S0) class CsaOdfModel(QballBaseModel): """Implementation of Constant Solid Angle reconstruction method. See :footcite:p:`Aganj2009` for further details about the method. References ---------- .. 
footbibliography:: """ min = 0.001 max = 0.999 _n0_const = 0.5 / np.sqrt(np.pi) def _set_fit_matrix(self, B, L, F, smooth): """The fit matrix is used by `_get_shm_coef` to return the coefficients of the odf""" invB = smooth_pinv(B, np.sqrt(smooth) * L) L = L[:, None] F = F[:, None] self._fit_matrix = (F * L) / (8 * np.pi) * invB @warning_for_keywords() def _get_shm_coef(self, data, *, mask=None): """Returns the coefficients of the model""" data = data[..., self._where_dwi] data = data.clip(self.min, self.max) loglog_data = np.log(-np.log(data)) sh_coef = np.dot(loglog_data, self._fit_matrix.T) sh_coef[..., 0] = self._n0_const return sh_coef class OpdtModel(QballBaseModel): """Implementation of Orientation Probability Density Transform reconstruction method. See :footcite:p:`TristanVega2009a` and :footcite:p:`TristanVega2010` for further details about the method. References ---------- .. footbibliography:: """ def _set_fit_matrix(self, B, L, F, smooth): invB = smooth_pinv(B, np.sqrt(smooth) * L) L = L[:, None] F = F[:, None] delta_b = F * L * invB delta_q = 4 * F * invB self._fit_matrix = delta_b, delta_q @warning_for_keywords() def _get_shm_coef(self, data, *, mask=None): """Returns the coefficients of the model""" delta_b, delta_q = self._fit_matrix return _slowadc_formula(data[..., self._where_dwi], delta_b, delta_q) def _slowadc_formula(data, delta_b, delta_q): """formula used by OpdtModel""" logd = -np.log(data) return np.dot(logd * (1.5 - logd) * data, delta_q.T) - np.dot(data, delta_b.T) class QballModel(QballBaseModel): """Implementation of regularized Qball reconstruction method. See :footcite:p:`Descoteaux2007` for further details about the method. References ---------- .. footbibliography:: """ def _set_fit_matrix(self, B, L, F, smooth): invB = smooth_pinv(B, np.sqrt(smooth) * L) F = F[:, None] self._fit_matrix = F * invB @warning_for_keywords() def _get_shm_coef(self, data, *, mask=None): """Returns the coefficients of the model""" return np.dot(data[..., self._where_dwi], self._fit_matrix.T) @warning_for_keywords() def normalize_data(data, where_b0, *, min_signal=1e-5, out=None): """Normalizes the data with respect to the mean b0""" if out is None: out = np.array(data, dtype="float32", copy=True) else: if out.dtype.kind != "f": raise ValueError("out must be floating point") out[:] = data out.clip(min_signal, out=out) b0 = out[..., where_b0].mean(-1) out /= b0[..., None] return out def hat(B): """Returns the hat matrix for the design matrix B""" U, S, V = np.linalg.svd(B, False) H = np.dot(U, U.T) return H def lcr_matrix(H): """Returns a matrix for computing leveraged, centered residuals from data. If r = (d - Hd), the leveraged centered residuals are lcr = (r/l) - mean(r/l). Returns the matrix R such that lcr = Rd. """ if H.ndim != 2 or H.shape[0] != H.shape[1]: raise ValueError("H should be a square matrix") leverages = np.sqrt(1 - H.diagonal(), where=H.diagonal() < 1) leverages = leverages[:, None] R = (np.eye(len(H)) - H) / leverages return R - R.mean(0) @warning_for_keywords() def bootstrap_data_array(data, H, R, *, permute=None): """Applies the residual bootstrap to the data given H and R. The data must be normalized, i.e., 0 < data <= 1. This function, and the bootstrap_data_voxel function, calculate residual-bootstrap samples given a Hat matrix and a Residual matrix. These samples can be used for non-parametric statistics or for bootstrap probabilistic tractography; see :footcite:p:`Berman2008`, :footcite:p:`Haroon2009`, and :footcite:p:`Jeurissen2011`. 
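Examples
--------
A minimal, hedged sketch that only exercises the mechanics; the random
design matrix ``B`` below is an arbitrary stand-in, not a meaningful
diffusion model:

>>> import numpy as np
>>> from dipy.reconst.shm import hat, lcr_matrix, bootstrap_data_array
>>> rng = np.random.default_rng(42)
>>> B = rng.standard_normal((10, 4))  # toy design matrix
>>> H = hat(B)  # hat (projection) matrix of B
>>> R = lcr_matrix(H)  # leveraged, centered residual matrix
>>> data = rng.uniform(0.1, 1.0, size=(2, 10))  # normalized signals
>>> bootstrap_data_array(data, H, R).shape  # one bootstrap sample per row
(2, 10)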
References ---------- .. footbibliography:: """ if permute is None: permute = randint(data.shape[-1], size=data.shape[-1]) assert R.shape == H.shape assert len(permute) == R.shape[-1] R = R[permute] data = np.dot(data, (H + R).T) return data @warning_for_keywords() def bootstrap_data_voxel(data, H, R, *, permute=None): """Like bootstrap_data_array but faster for a single voxel. The data must be 1d and normalized. """ if permute is None: permute = randint(data.shape[-1], size=data.shape[-1]) r = np.dot(data, R.T) boot_data = np.dot(data, H.T) boot_data += r[permute] return boot_data class ResidualBootstrapWrapper: """Returns a residual bootstrap sample of the signal_object when indexed Wraps a signal_object; this signal object can be an interpolator. When indexed, the wrapper indexes the signal_object to get the signal. The wrapper then samples the residual bootstrap distribution of the signal and returns that sample. """ @warning_for_keywords() def __init__(self, signal_object, B, where_dwi, *, min_signal=1e-5): """Builds a ResidualBootstrapWrapper Given some linear model described by B, the design matrix, and a signal_object, returns an object which can sample the residual bootstrap distribution of the signal. We assume that the signals are normalized so we clip the bootstrap samples to be between `min_signal` and 1. Parameters ---------- signal_object : some object that can be indexed This object should return diffusion weighted signals when indexed. B : ndarray, ndim=2 The design matrix of the spherical harmonics model used to fit the data. This is the model that will be used to compute the residuals and sample the residual bootstrap distribution where_dwi : indexing object to find diffusion weighted signals from signal min_signal : float The lowest allowable signal. """ self._signal_object = signal_object self._H = hat(B) self._R = lcr_matrix(self._H) self._min_signal = min_signal self._where_dwi = where_dwi self.data = signal_object.data self.voxel_size = signal_object.voxel_size def __getitem__(self, index): """Indexes self._signal_object and bootstraps the result""" signal = self._signal_object[index].copy() dwi_signal = signal[self._where_dwi] boot_signal = bootstrap_data_voxel(dwi_signal, self._H, self._R) boot_signal.clip(self._min_signal, 1.0, out=boot_signal) signal[self._where_dwi] = boot_signal return signal @deprecated_params("sh_order", new_name="sh_order_max", since="1.9", until="2.0") @warning_for_keywords() def sf_to_sh( sf, sphere, *, sh_order_max=4, basis_type=None, full_basis=False, legacy=True, smooth=0.0, ): """Spherical function to spherical harmonics (SH). Parameters ---------- sf : ndarray Values of a function on the given ``sphere``. sphere : Sphere The points on which the sf is defined. sh_order_max : int, optional Maximum SH order (l) in the SH fit. For ``sh_order_max``, there will be ``(sh_order_max + 1) * (sh_order_max + 2) / 2`` SH coefficients for a symmetric basis and ``(sh_order_max + 1) * (sh_order_max + 1)`` coefficients for a full SH basis. basis_type : {None, 'tournier07', 'descoteaux07'}, optional ``None`` for the default DIPY basis, ``tournier07`` for the Tournier 2007 :footcite:p:`Tournier2007`, :footcite:p:`Tournier2019` basis, ``descoteaux07`` for the Descoteaux 2007 :footcite:p:`Descoteaux2007` basis, (``None`` defaults to ``descoteaux07``). full_basis: bool, optional True for using a SH basis containing even and odd order SH functions. False for using a SH basis consisting only of even order SH functions. 
legacy: bool, optional True to use a legacy basis definition for backward compatibility with previous ``tournier07`` and ``descoteaux07`` implementations. smooth : float, optional Lambda-regularization in the SH fit. Returns ------- sh : ndarray SH coefficients representing the input function. References ---------- .. footbibliography:: """ sph_harm_basis = sph_harm_lookup.get(basis_type) if sph_harm_basis is None: raise ValueError("Invalid basis name.") B, m_values, l_values = sph_harm_basis( sh_order_max, sphere.theta, sphere.phi, full_basis=full_basis, legacy=legacy ) L = -l_values * (l_values + 1) invB = smooth_pinv(B, np.sqrt(smooth) * L) sh = np.dot(sf, invB.T) return sh @deprecated_params("sh_order", new_name="sh_order_max", since="1.9", until="2.0") @warning_for_keywords() def sh_to_sf( sh, sphere, *, sh_order_max=4, basis_type=None, full_basis=False, legacy=True ): """Spherical harmonics (SH) to spherical function (SF). Parameters ---------- sh : ndarray SH coefficients representing a spherical function. sphere : Sphere The points on which to sample the spherical function. sh_order_max : int, optional Maximum SH order (l) in the SH fit. For ``sh_order_max``, there will be ``(sh_order_max + 1) * (sh_order_max + 2) / 2`` SH coefficients for a symmetric basis and ``(sh_order_max + 1) * (sh_order_max + 1)`` coefficients for a full SH basis. basis_type : {None, 'tournier07', 'descoteaux07'}, optional ``None`` for the default DIPY basis, ``tournier07`` for the Tournier 2007 :footcite:p:`Tournier2007`, :footcite:p:`Tournier2019` basis, ``descoteaux07`` for the Descoteaux 2007 :footcite:p:`Descoteaux2007` basis, (``None`` defaults to ``descoteaux07``). full_basis: bool, optional True to use a SH basis containing even and odd order SH functions. Else, use a SH basis consisting only of even order SH functions. legacy: bool, optional True to use a legacy basis definition for backward compatibility with previous ``tournier07`` and ``descoteaux07`` implementations. Returns ------- sf : ndarray Spherical function values on the ``sphere``. References ---------- .. footbibliography:: """ sph_harm_basis = sph_harm_lookup.get(basis_type) if sph_harm_basis is None: raise ValueError("Invalid basis name.") B, m_values, l_values = sph_harm_basis( sh_order_max, sphere.theta, sphere.phi, full_basis=full_basis, legacy=legacy ) sf = np.dot(sh, B.T) return sf @deprecated_params("sh_order", new_name="sh_order_max", since="1.9", until="2.0") @warning_for_keywords() def sh_to_sf_matrix( sphere, *, sh_order_max=4, basis_type=None, full_basis=False, legacy=True, return_inv=True, smooth=0, ): """Matrix that transforms Spherical harmonics (SH) to spherical function (SF). Parameters ---------- sphere : Sphere The points on which to sample the spherical function. sh_order_max : int, optional Maximum SH order in the SH fit. For ``sh_order_max``, there will be ``(sh_order_max + 1) * (sh_order_max + 2) / 2`` SH coefficients for a symmetric basis and ``(sh_order_max + 1) * (sh_order_max + 1)`` coefficients for a full SH basis. basis_type : {None, 'tournier07', 'descoteaux07'}, optional ``None`` for the default DIPY basis, ``tournier07`` for the Tournier 2007 :footcite:p:`Tournier2007`, :footcite:p:`Tournier2019` basis, ``descoteaux07`` for the Descoteaux 2007 :footcite:p:`Descoteaux2007` basis, (``None`` defaults to ``descoteaux07``). full_basis: bool, optional If True, uses a SH basis containing even and odd order SH functions. Else, uses a SH basis consisting only of even order SH functions. 
legacy: bool, optional True to use a legacy basis definition for backward compatibility with previous ``tournier07`` and ``descoteaux07`` implementations. return_inv : bool, optional If True then the inverse of the matrix is also returned. smooth : float, optional Lambda-regularization in the SH fit. Returns ------- B : ndarray Matrix that transforms spherical harmonics to spherical function ``sf = np.dot(sh, B)``. invB : ndarray Inverse of B. References ---------- .. footbibliography:: """ sph_harm_basis = sph_harm_lookup.get(basis_type) if sph_harm_basis is None: raise ValueError("Invalid basis name.") B, m_values, l_values = sph_harm_basis( sh_order_max, sphere.theta, sphere.phi, full_basis=full_basis, legacy=legacy ) if return_inv: L = -l_values * (l_values + 1) invB = smooth_pinv(B, np.sqrt(smooth) * L) return B.T, invB.T return B.T @warning_for_keywords() def calculate_max_order(n_coeffs, *, full_basis=False): r"""Calculate the maximal harmonic order (l), given that you know the number of parameters that were estimated. Parameters ---------- n_coeffs : int The number of SH coefficients full_basis: bool, optional True if the used SH basis contains even and odd order SH functions. False if the SH basis consists only of even order SH functions. Returns ------- L : int The maximal SH order (l), given the number of coefficients Notes ----- The calculation in this function for the symmetric SH basis proceeds according to the following logic: .. math:: n = \frac{1}{2} (L+1) (L+2) \rightarrow 2n = L^2 + 3L + 2 \rightarrow L^2 + 3L + 2 - 2n = 0 \rightarrow L^2 + 3L + 2(1-n) = 0 \rightarrow L_{1,2} = \frac{-3 \pm \sqrt{9 - 8 (1-n)}}{2} \rightarrow L_{1,2} = \frac{-3 \pm \sqrt{1 + 8n}}{2} Finally, the positive value is chosen between the two options. For a full SH basis, the calculation consists in solving the equation $n = (L + 1)^2$ for $L$, which gives $L = \sqrt{n} - 1$. """ # L2 is negative for all positive values of n_coeffs, so we don't # bother even computing it: # L2 = (-3 - np.sqrt(1 + 8 * n_coeffs)) / 2 # L1 is always the larger value, so we go with that: if full_basis: L1 = np.sqrt(n_coeffs) - 1 if L1.is_integer(): return int(L1) else: L1 = (-3 + np.sqrt(1 + 8 * n_coeffs)) / 2.0 # Check that it is a whole even number: if L1.is_integer() and not np.mod(L1, 2): return int(L1) # Otherwise, the input didn't make sense: raise ValueError( f"The input to ``calculate_max_order`` was" f" {n_coeffs}, but that is not a valid number" f" of coefficients for a spherical harmonics" f" basis set." ) @warning_for_keywords() def anisotropic_power(sh_coeffs, *, norm_factor=0.00001, power=2, non_negative=True): r"""Calculate anisotropic power map with a given SH coefficient matrix. See :footcite:p:`DellAcqua2014` for further details about the method. Parameters ---------- sh_coeffs : ndarray A ndarray where the last dimension is the SH coefficients estimates for that voxel. norm_factor: float, optional The value to normalize the ap values. power : int, optional The degree to which power maps are calculated. non_negative: bool, optional Whether to rectify the resulting map to be non-negative. Returns ------- log_ap : ndarray The log of the resulting power image. Notes ----- Calculate AP image based on a IxJxKxC SH coefficient matrix based on the equation: .. math:: AP = \sum_{l=2,4,6,...}{\frac{1}{2l+1} \sum_{m=-l}^l{|a_{l,m}|^n}} Where the last dimension, C, is made of a flattened array of $l$x$m$ coefficients, where $l$ are the SH orders and each order contributes $2l+1$ coefficients, so l=0 has 1 coefficient, l=2 has 5, ... l=8 has 17 and so on. 
An l=2 SH coefficient matrix will then be composed of an IxJxKx6 volume. The power, $n$, is usually set to $n=2$. The final AP image is then shifted by -log(norm_factor), to be strictly non-negative. Remaining values < 0 are discarded (set to 0) by default, and this option is controlled through the `non_negative` keyword argument. References ---------- .. footbibliography:: """ dim = sh_coeffs.shape[:-1] n_coeffs = sh_coeffs.shape[-1] max_order = calculate_max_order(n_coeffs) ap = np.zeros(dim) n_start = 1 for L in range(2, max_order + 2, 2): n_stop = n_start + (2 * L + 1) ap_i = np.mean(np.abs(sh_coeffs[..., n_start:n_stop]) ** power, -1) ap += ap_i n_start = n_stop # Shift the map to be mostly non-negative, # only applying the log operation to positive elements # to avoid getting numpy warnings on log(0). # It is impossible to get ap values smaller than 0. # Also avoids getting voxels with -inf when non_negative=False. if ap.ndim < 1: # For the off chance we have a scalar on our hands ap = np.reshape(ap, (1,)) log_ap = np.zeros_like(ap) log_ap[ap > 0] = np.log(ap[ap > 0]) - np.log(norm_factor) # Deal with residual negative values: if non_negative: if isinstance(log_ap, np.ndarray): # zero all values < 0 log_ap[log_ap < 0] = 0 else: # assume this is a singleton float (input was 1D): if log_ap < 0: return 0 return log_ap def convert_sh_to_full_basis(sh_coeffs): """Given an array of SH coeffs from a symmetric basis, returns the coefficients for the full SH basis by filling odd order SH coefficients with zeros Parameters ---------- sh_coeffs: ndarray A ndarray where the last dimension is the SH coefficients estimates for that voxel. Returns ------- full_sh_coeffs: ndarray A ndarray where the last dimension is the SH coefficients estimates for that voxel in a full SH basis. """ sh_order_max = calculate_max_order(sh_coeffs.shape[-1]) _, n = sph_harm_ind_list(sh_order_max, full_basis=True) full_sh_coeffs = np.zeros(np.append(sh_coeffs.shape[:-1], [n.size]).astype(int)) mask = np.mod(n, 2) == 0 full_sh_coeffs[..., mask] = sh_coeffs return full_sh_coeffs @warning_for_keywords() def convert_sh_from_legacy(sh_coeffs, sh_basis, *, full_basis=False): """Convert SH coefficients in legacy SH basis to SH coefficients of the new SH basis for ``descoteaux07`` or ``tournier07`` bases. See :footcite:p:`Descoteaux2007` and :footcite:p:`Tournier2007`, :footcite:p:`Tournier2019` for the ``descoteaux07`` and ``tournier07`` bases, respectively. Parameters ---------- sh_coeffs: ndarray A ndarray where the last dimension is the SH coefficients estimates for that voxel. sh_basis: {'descoteaux07', 'tournier07'} ``tournier07`` for the Tournier 2007 :footcite:p:`Tournier2007`, :footcite:p:`Tournier2019` basis, ``descoteaux07`` for the Descoteaux 2007 :footcite:p:`Descoteaux2007` basis. full_basis: bool, optional True if the input SH basis includes both even and odd order SH functions, else False. Returns ------- out_sh_coeffs: ndarray The array of coefficients expressed in the new SH basis. References ---------- .. 
footbibliography:: """ sh_order_max = calculate_max_order(sh_coeffs.shape[-1], full_basis=full_basis) m_values, l_values = sph_harm_ind_list(sh_order_max, full_basis=full_basis) if sh_basis == "descoteaux07": out_sh_coeffs = sh_coeffs * np.where(m_values < 0, (-1.0) ** m_values, 1.0) elif sh_basis == "tournier07": out_sh_coeffs = sh_coeffs * np.where(m_values == 0, 1.0, 1.0 / np.sqrt(2)) else: raise ValueError("Invalid basis name.") return out_sh_coeffs @warning_for_keywords() def convert_sh_to_legacy(sh_coeffs, sh_basis, *, full_basis=False): """Convert SH coefficients in new SH basis to SH coefficients for the legacy SH basis for ``descoteaux07`` or ``tournier07`` bases. See :footcite:p:`Descoteaux2007` and :footcite:p:`Tournier2007`, :footcite:p:`Tournier2019` for the ``descoteaux07`` and ``tournier07`` bases, respectively. Parameters ---------- sh_coeffs: ndarray A ndarray where the last dimension is the SH coefficients estimates for that voxel. sh_basis: {'descoteaux07', 'tournier07'} ``tournier07`` for the Tournier 2007 :footcite:p:`Tournier2007`, :footcite:p:`Tournier2019` basis, ``descoteaux07`` for the Descoteaux 2007 :footcite:p:`Descoteaux2007` basis. full_basis: bool, optional True if the input SH basis includes both even and odd order SH functions. Returns ------- out_sh_coeffs: ndarray The array of coefficients expressed in the legacy SH basis. References ---------- .. footbibliography:: """ sh_order_max = calculate_max_order(sh_coeffs.shape[-1], full_basis=full_basis) m_values, l_values = sph_harm_ind_list(sh_order_max, full_basis=full_basis) if sh_basis == "descoteaux07": out_sh_coeffs = sh_coeffs * np.where(m_values < 0, (-1.0) ** m_values, 1.0) elif sh_basis == "tournier07": out_sh_coeffs = sh_coeffs * np.where(m_values == 0, 1.0, np.sqrt(2)) else: raise ValueError("Invalid basis name.") return out_sh_coeffs def convert_sh_descoteaux_tournier(sh_coeffs): """Convert SH coefficients between legacy-descoteaux07 and tournier07. Convert SH coefficients between the legacy ``descoteaux07`` SH basis and the non-legacy ``tournier07`` SH basis. Because this conversion is equal to its own inverse, it can be used to convert in either direction: legacy-descoteaux to non-legacy-tournier or non-legacy-tournier to legacy-descoteaux. This can be used to convert SH representations between DIPY and MRtrix3. See :footcite:p:`Descoteaux2007` and :footcite:p:`Tournier2019` for the origin of these SH bases. See [mrtrixbasis]_ for a description of the basis used in MRtrix3. See [mrtrixdipybases]_ for more details on the conversion. Parameters ---------- sh_coeffs: ndarray A ndarray where the last dimension is the SH coefficients estimates for that voxel. Returns ------- out_sh_coeffs: ndarray The array of coefficients expressed in the "other" SH basis. If the input was in the legacy-descoteaux basis then the output will be in the non-legacy-tournier basis, and vice versa. References ---------- .. footbibliography:: .. [mrtrixbasis] https://mrtrix.readthedocs.io/en/latest/concepts/spherical_harmonics.html .. 
[mrtrixdipybases] https://github.com/dipy/dipy/discussions/2959#discussioncomment-7481675 """ # noqa: E501 sh_order_max = calculate_max_order(sh_coeffs.shape[-1]) m_values, l_values = sph_harm_ind_list(sh_order_max) basis_indices = list(zip(l_values, m_values)) # dipy basis ordering basis_indices_permuted = list(zip(l_values, -m_values)) # mrtrix basis ordering permutation = [ basis_indices.index(basis_indices_permuted[i]) for i in range(len(basis_indices)) ] return sh_coeffs[..., permutation] dipy-1.11.0/dipy/reconst/shore.py000066400000000000000000000660531476546756600167170ustar00rootroot00000000000000from math import factorial from warnings import warn import numpy as np from scipy.special import gamma, genlaguerre, hyp2f1 from dipy.core.geometry import cart2sphere from dipy.reconst.cache import Cache from dipy.reconst.multi_voxel import multi_voxel_fit from dipy.reconst.shm import real_sh_descoteaux_from_index from dipy.testing.decorators import warning_for_keywords from dipy.utils.optpkg import optional_package cvxpy, have_cvxpy, _ = optional_package("cvxpy", min_version="1.4.1") class ShoreModel(Cache): r"""Simple Harmonic Oscillator based Reconstruction and Estimation (SHORE) of the diffusion signal. The main idea of SHORE :footcite:p:`Ozarslan2008` is to model the diffusion signal as a linear combination of continuous functions $\phi_i$, .. math:: S(\mathbf{q})= \sum_{i=0}^I c_{i} \phi_{i}(\mathbf{q}) where $\mathbf{q}$ is the wave vector which corresponds to different gradient directions. Numerous continuous functions $\phi_i$ can be used to model $S$. Some are presented in :footcite:p:`Merlet2013`, :footcite:p:`Rathi2011`, and :footcite:p:`Cheng2011`. From the $c_i$ coefficients, there exist analytical formulae to estimate the ODF, the return to the origin probability (RTOP), the mean square displacement (MSD), amongst others :footcite:p:`Ozarslan2013`. References ---------- .. footbibliography:: Notes ----- The implementation of SHORE depends on CVXPY (https://www.cvxpy.org/). """ @warning_for_keywords() def __init__( self, gtab, *, radial_order=6, zeta=700, lambdaN=1e-8, lambdaL=1e-8, tau=1.0 / (4 * np.pi**2), constrain_e0=False, positive_constraint=False, pos_grid=11, pos_radius=20e-03, cvxpy_solver=None, ): r"""Analytical and continuous modeling of the diffusion signal with respect to the SHORE basis. This implementation is a modification of SHORE presented in :footcite:p:`Merlet2013`. The modification was made to obtain the same ordering of the basis presented in :footcite:p:`Cheng2011`, :footcite:p:`Ozarslan2013`. The main idea is to model the diffusion signal as a linear combination of continuous functions $\phi_i$, .. math:: S(\mathbf{q})= \sum_{i=0}^I c_{i} \phi_{i}(\mathbf{q}) where $\mathbf{q}$ is the wave vector which corresponds to different gradient directions. From the $c_i$ coefficients, there exists an analytical formula to estimate the ODF. Parameters ---------- gtab : GradientTable, gradient directions and bvalues container class radial_order : unsigned int, optional an even integer that represent the order of the basis zeta : unsigned int, optional scale factor lambdaN : float, optional radial regularisation constant lambdaL : float, optional angular regularisation constant tau : float, optional diffusion time. By default the value that makes q equal to the square root of the b-value. constrain_e0 : bool, optional Constrain the optimization such that E(0) = 1. positive_constraint : bool, optional Constrain the propagator to be positive. 
pos_grid : int, optional Grid that defines the points of the EAP in which we want to enforce positivity. pos_radius : float, optional Radius of the grid of the EAP in which to enforce positivity, in millimeters. cvxpy_solver : str, optional cvxpy solver name. Optionally optimize the positivity constraint with a particular cvxpy solver. See https://www.cvxpy.org/ for details. Default: None (cvxpy chooses its own solver) References ---------- .. footbibliography:: Examples -------- In this example, where the data, gradient table and sphere tessellation used for reconstruction are provided, we model the diffusion signal with respect to the SHORE basis and compute the real and analytical ODF. >>> import warnings >>> from dipy.data import get_isbi2013_2shell_gtab, default_sphere >>> from dipy.sims.voxel import sticks_and_ball >>> from dipy.reconst.shm import descoteaux07_legacy_msg >>> from dipy.reconst.shore import ShoreModel >>> gtab = get_isbi2013_2shell_gtab() >>> data, golden_directions = sticks_and_ball( ... gtab, d=0.0015, S0=1., angles=[(0, 0), (90, 0)], ... fractions=[50, 50], snr=None) ... >>> radial_order = 4 >>> zeta = 700 >>> asm = ShoreModel(gtab, radial_order=radial_order, zeta=zeta, ... lambdaN=1e-8, lambdaL=1e-8) >>> with warnings.catch_warnings(): ... warnings.filterwarnings( ... "ignore", message=descoteaux07_legacy_msg, ... category=PendingDeprecationWarning) ... asmfit = asm.fit(data) ... odf = asmfit.odf(default_sphere) """ self.bvals = gtab.bvals self.bvecs = gtab.bvecs self.gtab = gtab self.constrain_e0 = constrain_e0 if radial_order > 0 and not (bool(radial_order % 2)): self.radial_order = radial_order else: msg = "radial_order must be a positive, even number." raise ValueError(msg) self.zeta = zeta self.lambdaL = lambdaL self.lambdaN = lambdaN if (gtab.big_delta is None) or (gtab.small_delta is None): self.tau = tau else: self.tau = gtab.big_delta - gtab.small_delta / 3.0 if positive_constraint and not constrain_e0: msg = "constrain_e0 must be True to enforce positivity." raise ValueError(msg) if positive_constraint or constrain_e0: if not have_cvxpy: msg = "cvxpy must be installed for positive_constraint or " msg += "constrain_e0." raise ImportError(msg) if cvxpy_solver is not None: if cvxpy_solver not in cvxpy.installed_solvers(): msg = f"Input `cvxpy_solver` was set to {cvxpy_solver}." msg += f" One of {', '.join(cvxpy.installed_solvers())}" msg += " was expected." 
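# Fail early in __init__: an unavailable solver would otherwise only surface later, inside fit().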
raise ValueError(msg) self.cvxpy_solver = cvxpy_solver self.positive_constraint = positive_constraint self.pos_grid = pos_grid self.pos_radius = pos_radius @multi_voxel_fit def fit(self, data, **kwargs): Lshore = l_shore(self.radial_order) Nshore = n_shore(self.radial_order) # Generate the SHORE basis M = self.cache_get("shore_matrix", key=self.gtab) if M is None: M = shore_matrix(self.radial_order, self.zeta, self.gtab, tau=self.tau) self.cache_set("shore_matrix", self.gtab, M) MpseudoInv = self.cache_get("shore_matrix_reg_pinv", key=self.gtab) if MpseudoInv is None: MpseudoInv = np.dot( np.linalg.inv( np.dot(M.T, M) + self.lambdaN * Nshore + self.lambdaL * Lshore ), M.T, ) self.cache_set("shore_matrix_reg_pinv", self.gtab, MpseudoInv) # Compute the signal coefficients in SHORE basis if not self.constrain_e0: coef = np.dot(MpseudoInv, data) signal_0 = 0 for n in range(int(self.radial_order / 2) + 1): signal_0 += coef[n] * ( genlaguerre(n, 0.5)(0) * ((factorial(n)) / (2 * np.pi * (self.zeta**1.5) * gamma(n + 1.5))) ** 0.5 ) coef = coef / signal_0 else: data_norm = data / data[self.gtab.b0s_mask].mean() M0 = M[self.gtab.b0s_mask, :] c = cvxpy.Variable(M.shape[1]) design_matrix = cvxpy.Constant(M) @ c objective = cvxpy.Minimize( cvxpy.sum_squares(design_matrix - data_norm) + self.lambdaN * cvxpy.quad_form(c, Nshore) + self.lambdaL * cvxpy.quad_form(c, Lshore) ) if not self.positive_constraint: constraints = [M0[0] @ c == 1] else: lg = int(np.floor(self.pos_grid**3 / 2)) v, t = create_rspace(self.pos_grid, self.pos_radius) psi = self.cache_get( "shore_matrix_positive_constraint", key=(self.pos_grid, self.pos_radius), ) if psi is None: psi = shore_matrix_pdf(self.radial_order, self.zeta, t[:lg]) self.cache_set( "shore_matrix_positive_constraint", (self.pos_grid, self.pos_radius), psi, ) constraints = [(M0[0] @ c) == 1.0, (psi @ c) >= 1e-3] prob = cvxpy.Problem(objective, constraints) try: prob.solve(solver=self.cvxpy_solver) coef = np.asarray(c.value).squeeze() except Exception: warn("Optimization did not find a solution", stacklevel=2) coef = np.zeros(M.shape[1]) return ShoreFit(self, coef) class ShoreFit: def __init__(self, model, shore_coef): """Calculates diffusion properties for a single voxel Parameters ---------- model : object, AnalyticalModel shore_coef : 1d ndarray, shore coefficients """ self.model = model self._shore_coef = shore_coef self.gtab = model.gtab self.radial_order = model.radial_order self.zeta = model.zeta def pdf_grid(self, gridsize, radius_max): r"""Applies the analytical FFT on $S$ to generate the diffusion propagator. This is calculated on a discrete 3D grid in order to obtain an EAP similar to that which is obtained with DSI. 
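A hedged sketch of the intended call, assuming ``asmfit`` is a fitted
``ShoreFit`` instance obtained as in the ``ShoreModel`` example (an odd
``gridsize`` keeps the grid centered on the origin):

>>> eap = asmfit.pdf_grid(17, 20e-03)  # doctest: +SKIP
>>> eap.shape  # doctest: +SKIP
(17, 17, 17)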
Parameters ---------- gridsize : unsigned int dimension of the propagator grid radius_max : float maximal radius in which to compute the propagator Returns ------- eap : ndarray the ensemble average propagator in the 3D grid """ # Create the grid in which to compute the pdf rgrid_rtab = self.model.cache_get("pdf_grid", key=(gridsize, radius_max)) if rgrid_rtab is None: rgrid_rtab = create_rspace(gridsize, radius_max) self.model.cache_set("pdf_grid", (gridsize, radius_max), rgrid_rtab) rgrid, rtab = rgrid_rtab psi = self.model.cache_get("shore_matrix_pdf", key=(gridsize, radius_max)) if psi is None: psi = shore_matrix_pdf(self.radial_order, self.zeta, rtab) self.model.cache_set("shore_matrix_pdf", (gridsize, radius_max), psi) propagator = np.dot(psi, self._shore_coef) eap = np.empty((gridsize, gridsize, gridsize), dtype=float) eap[tuple(rgrid.astype(int).T)] = propagator eap *= (2 * radius_max / (gridsize - 1)) ** 3 return eap def pdf(self, r_points): """Diffusion propagator on a given set of real points. If the array r_points is non-writeable, intermediate results are cached for faster recalculation. """ if not r_points.flags.writeable: psi = self.model.cache_get("shore_matrix_pdf", key=hash(r_points.data)) else: psi = None if psi is None: psi = shore_matrix_pdf(self.radial_order, self.zeta, r_points) if not r_points.flags.writeable: self.model.cache_set("shore_matrix_pdf", hash(r_points.data), psi) eap = np.dot(psi, self._shore_coef) return np.clip(eap, 0, eap.max()) def odf_sh(self): r"""Calculates the real analytical ODF in terms of Spherical Harmonics. """ # Number of Spherical Harmonics involved in the estimation J = (self.radial_order + 1) * (self.radial_order + 2) // 2 # Compute the Spherical Harmonics Coefficients c_sh = np.zeros(J) counter = 0 for ell in range(0, self.radial_order + 1, 2): for n in range(ell, int((self.radial_order + ell) / 2) + 1): for m in range(-ell, ell + 1): j = int(ell + m + (2 * np.array(range(0, ell, 2)) + 1).sum()) Cnl = ( ((-1) ** (n - ell / 2)) / (2.0 * (4.0 * np.pi**2 * self.zeta) ** (3.0 / 2.0)) * ( ( 2.0 * (4.0 * np.pi**2 * self.zeta) ** (3.0 / 2.0) * factorial(n - ell) ) / (gamma(n + 3.0 / 2.0)) ) ** (1.0 / 2.0) ) Gnl = ( (gamma(ell / 2 + 3.0 / 2.0) * gamma(3.0 / 2.0 + n)) / (gamma(ell + 3.0 / 2.0) * factorial(n - ell)) * (1.0 / 2.0) ** (-ell / 2 - 3.0 / 2.0) ) Fnl = hyp2f1(-n + ell, ell / 2 + 3.0 / 2.0, ell + 3.0 / 2.0, 2.0) c_sh[j] += self._shore_coef[counter] * Cnl * Gnl * Fnl counter += 1 return c_sh def odf(self, sphere): r"""Calculates the ODF for a given discrete sphere.""" upsilon = self.model.cache_get("shore_matrix_odf", key=sphere) if upsilon is None: upsilon = shore_matrix_odf(self.radial_order, self.zeta, sphere.vertices) self.model.cache_set("shore_matrix_odf", sphere, upsilon) odf = np.dot(upsilon, self._shore_coef) return odf def rtop_signal(self): r"""Calculates the analytical return to origin probability (RTOP) from the signal. See :footcite:p:`Ozarslan2013` for further details about the method. References ---------- .. footbibliography:: """ rtop = 0 c = self._shore_coef for n in range(int(self.radial_order / 2) + 1): rtop += ( c[n] * (-1) ** n * ((16 * np.pi * self.zeta**1.5 * gamma(n + 1.5)) / (factorial(n))) ** 0.5 ) return np.clip(rtop, 0, rtop.max()) def rtop_pdf(self): r"""Calculates the analytical return to origin probability (RTOP) from the pdf. See :footcite:p:`Ozarslan2013` for further details about the method. References ---------- .. 
footbibliography:: """ rtop = 0 c = self._shore_coef for n in range(int(self.radial_order / 2) + 1): rtop += ( c[n] * (-1) ** n * ((4 * np.pi**2 * self.zeta**1.5 * factorial(n)) / (gamma(n + 1.5))) ** 0.5 * genlaguerre(n, 0.5)(0) ) return np.clip(rtop, 0, rtop.max()) def msd(self): r"""Calculates the analytical mean squared displacement (MSD). See :footcite:p:`Wu2007` for a definition of the method. .. math:: :nowrap: MSD_{DSI}=\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} P(\hat{\mathbf{r}}) \cdot \hat{\mathbf{r}}^{2} \ dr_x \ dr_y \ dr_z where $\hat{\mathbf{r}}$ is a point in the 3D propagator space (see :footcite:t:`Wu2007`). References ---------- .. footbibliography:: """ msd = 0 c = self._shore_coef for n in range(int(self.radial_order / 2) + 1): msd += ( c[n] * (-1) ** n * ( 9 * (gamma(n + 1.5)) / (8 * np.pi**6 * self.zeta**3.5 * factorial(n)) ) ** 0.5 * hyp2f1(-n, 2.5, 1.5, 2) ) return np.clip(msd, 0, msd.max()) def fitted_signal(self): """The fitted signal.""" phi = self.model.cache_get("shore_matrix", key=self.model.gtab) return np.dot(phi, self._shore_coef) @property def shore_coeff(self): """The SHORE coefficients""" return self._shore_coef @warning_for_keywords() def shore_matrix(radial_order, zeta, gtab, *, tau=1 / (4 * np.pi**2)): r"""Compute the SHORE matrix for modified Merlet's 3D-SHORE. See :footcite:p:`Merlet2013` for the definition. .. math:: :nowrap: \textbf{E}(q\textbf{u})=\sum_{l=0, even}^{N_{max}} \sum_{n=l}^{(N_{max}+l)/2} \sum_{m=-l}^l c_{nlm} \phi_{nlm}(q\textbf{u}) where $\phi_{nlm}$ is .. math:: :nowrap: \phi_{nlm}^{SHORE}(q\textbf{u})=\Biggl[\dfrac{2(n-l)!} {\zeta^{3/2} \Gamma(n+3/2)} \Biggr]^{1/2} \Biggl(\dfrac{q^2}{\zeta}\Biggr)^{l/2} exp\Biggl(\dfrac{-q^2}{2\zeta}\Biggr) L^{l+1/2}_{n-l} \Biggl(\dfrac{q^2}{\zeta}\Biggr) Y_l^m(\textbf{u}) Parameters ---------- radial_order : unsigned int, an even integer that represents the order of the basis zeta : unsigned int, scale factor gtab : GradientTable, gradient directions and bvalues container class tau : float, optional diffusion time. By default the value that makes q=sqrt(b). References ---------- .. footbibliography:: """ qvals = np.sqrt(gtab.bvals / (4 * np.pi**2 * tau)) qvals[gtab.b0s_mask] = 0 bvecs = gtab.bvecs qgradients = qvals[:, None] * bvecs r, theta, phi = cart2sphere(qgradients[:, 0], qgradients[:, 1], qgradients[:, 2]) theta[np.isnan(theta)] = 0 F = radial_order / 2 n_c = int(np.round(1 / 6.0 * (F + 1) * (F + 2) * (4 * F + 3))) M = np.zeros((r.shape[0], n_c)) counter = 0 for ell in range(0, radial_order + 1, 2): for n in range(ell, int((radial_order + ell) / 2) + 1): for m in range(-ell, ell + 1): M[:, counter] = ( real_sh_descoteaux_from_index(m, ell, theta, phi) * genlaguerre(n - ell, ell + 0.5)(r**2 / zeta) * np.exp(-(r**2) / (2.0 * zeta)) * _kappa(zeta, n, ell) * (r**2 / zeta) ** (ell / 2) ) counter += 1 return M def _kappa(zeta, n, ell): return np.sqrt((2 * factorial(n - ell)) / (zeta**1.5 * gamma(n + 1.5))) def shore_matrix_pdf(radial_order, zeta, rtab): r"""Compute the SHORE propagator matrix. See :footcite:p:`Merlet2013` for the definition. Parameters ---------- radial_order : unsigned int, an even integer that represents the order of the basis zeta : unsigned int, scale factor rtab : array, shape (N,3) real space points in which to calculate the pdf References ---------- .. 
footbibliography:: """ r, theta, phi = cart2sphere(rtab[:, 0], rtab[:, 1], rtab[:, 2]) theta[np.isnan(theta)] = 0 F = radial_order / 2 n_c = int(np.round(1 / 6.0 * (F + 1) * (F + 2) * (4 * F + 3))) psi = np.zeros((r.shape[0], n_c)) counter = 0 for ell in range(0, radial_order + 1, 2): for n in range(ell, int((radial_order + ell) / 2) + 1): for m in range(-ell, ell + 1): psi[:, counter] = ( real_sh_descoteaux_from_index(m, ell, theta, phi) * genlaguerre(n - ell, ell + 0.5)(4 * np.pi**2 * zeta * r**2) * np.exp(-2 * np.pi**2 * zeta * r**2) * _kappa_pdf(zeta, n, ell) * (4 * np.pi**2 * zeta * r**2) ** (ell / 2) * (-1) ** (n - ell / 2) ) counter += 1 return psi def _kappa_pdf(zeta, n, ell): return np.sqrt((16 * np.pi**3 * zeta**1.5 * factorial(n - ell)) / gamma(n + 1.5)) def shore_matrix_odf(radial_order, zeta, sphere_vertices): r"""Compute the SHORE ODF matrix. See :footcite:p:`Merlet2013` for the definition. Parameters ---------- radial_order : unsigned int, an even integer that represents the order of the basis zeta : unsigned int, scale factor sphere_vertices : array, shape (N,3) vertices of the odf sphere References ---------- .. footbibliography:: """ r, theta, phi = cart2sphere( sphere_vertices[:, 0], sphere_vertices[:, 1], sphere_vertices[:, 2] ) theta[np.isnan(theta)] = 0 F = radial_order / 2 n_c = int(np.round(1 / 6.0 * (F + 1) * (F + 2) * (4 * F + 3))) upsilon = np.zeros((len(sphere_vertices), n_c)) counter = 0 for ell in range(0, radial_order + 1, 2): for n in range(ell, int((radial_order + ell) / 2) + 1): for m in range(-ell, ell + 1): upsilon[:, counter] = ( (-1) ** (n - ell / 2.0) * _kappa_odf(zeta, n, ell) * hyp2f1(ell - n, ell / 2.0 + 1.5, ell + 1.5, 2.0) * real_sh_descoteaux_from_index(m, ell, theta, phi) ) counter += 1 return upsilon def _kappa_odf(zeta, n, ell): return np.sqrt( (gamma(ell / 2.0 + 1.5) ** 2 * gamma(n + 1.5) * 2 ** (ell + 3)) / (16 * np.pi**3 * zeta**1.5 * factorial(n - ell) * gamma(ell + 1.5) ** 2) ) def l_shore(radial_order): """Returns the angular regularisation matrix for SHORE basis""" F = radial_order / 2 n_c = int(np.round(1 / 6.0 * (F + 1) * (F + 2) * (4 * F + 3))) diagL = np.zeros(n_c) counter = 0 for ell in range(0, radial_order + 1, 2): for _ in range(ell, int((radial_order + ell) / 2) + 1): for _ in range(-ell, ell + 1): diagL[counter] = (ell * (ell + 1)) ** 2 counter += 1 return np.diag(diagL) def n_shore(radial_order): """Returns the radial regularisation matrix for SHORE basis""" F = radial_order / 2 n_c = int(np.round(1 / 6.0 * (F + 1) * (F + 2) * (4 * F + 3))) diagN = np.zeros(n_c) counter = 0 for ell in range(0, radial_order + 1, 2): for n in range(ell, int((radial_order + ell) / 2) + 1): for _ in range(-ell, ell + 1): diagN[counter] = (n * (n + 1)) ** 2 counter += 1 return np.diag(diagN) def create_rspace(gridsize, radius_max): """Create the real space table that contains the points in which to compute the pdf. 
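A minimal, runnable sketch for a tiny grid (``gridsize=3`` gives
``3**3 = 27`` points):

>>> from dipy.reconst.shore import create_rspace
>>> vecs, tab = create_rspace(3, 20e-03)
>>> vecs.shape, tab.shape
((27, 3), (27, 3))
>>> bool((vecs >= 0).all())  # vecs are shifted to valid grid indices
True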
Parameters ---------- gridsize : unsigned int dimension of the propagator grid radius_max : float maximal radius in which to compute the propagator Returns ------- vecs : array, shape (N,3) positions of the pdf points in a 3D matrix tab : array, shape (N,3) real space points in which to calculate the pdf """ radius = gridsize // 2 vecs = [] for i in range(-radius, radius + 1): for j in range(-radius, radius + 1): for k in range(-radius, radius + 1): vecs.append([i, j, k]) vecs = np.array(vecs, dtype=np.float32) tab = vecs / radius tab = tab * radius_max vecs = vecs + radius return vecs, tab def shore_indices(radial_order, index): r"""Given the basis order and the index, return the shore indices n, l, m for modified Merlet's 3D-SHORE .. math:: :nowrap: \begin{equation} \textbf{E}(q\textbf{u})=\sum_{l=0, even}^{N_{max}} \sum_{n=l}^{(N_{max}+l)/2} \sum_{m=-l}^l c_{nlm} \phi_{nlm}(q\textbf{u}) \end{equation} where $\phi_{nlm}$ is .. math:: :nowrap: \begin{equation} \phi_{nlm}^{SHORE}(q\textbf{u})=\Biggl[\dfrac{2(n-l)!} {\zeta^{3/2} \Gamma(n+3/2)} \Biggr]^{1/2} \Biggl(\dfrac{q^2}{\zeta}\Biggr)^{l/2} exp\Biggl(\dfrac{-q^2}{2\zeta}\Biggr) L^{l+1/2}_{n-l} \Biggl(\dfrac{q^2}{\zeta}\Biggr) Y_l^m(\textbf{u}). \end{equation} Parameters ---------- radial_order : unsigned int an even integer that represents the maximal order of the basis index : unsigned int index of the coefficients, starting from 0 Returns ------- n : unsigned int the index n of the modified shore basis l : unsigned int the index l of the modified shore basis m : int the index m of the modified shore basis """ F = radial_order / 2 n_c = np.round(1 / 6.0 * (F + 1) * (F + 2) * (4 * F + 3)) n_i = 0 l_i = 0 m_i = 0 if n_c < (index + 1): msg = f"The index {index} is higher than the number of" msg += " coefficients of the truncated basis," msg += f" which is {int(n_c - 1)} starting from 0." msg += " Select a lower index." raise ValueError(msg) else: counter = 0 for ell in range(0, radial_order + 1, 2): for n in range(ell, int((radial_order + ell) / 2) + 1): for m in range(-ell, ell + 1): if counter == index: n_i = n l_i = ell m_i = m counter += 1 return n_i, l_i, m_i def shore_order(n, ell, m): r"""Given the indices (n,l,m) of the basis, return the minimum order for those indices and their index for modified Merlet's 3D-SHORE. Parameters ---------- n : unsigned int the index n of the modified shore basis ell : unsigned int the index l of the modified shore basis m : int the index m of the modified shore basis Returns ------- radial_order : unsigned int an even integer that represents the maximal order of the basis index : unsigned int index of the coefficient corresponding to (n,l,m), starting from 0 """ if ell % 2 == 1 or ell > n or ell < 0 or n < 0 or np.abs(m) > ell: msg = "The index l must be even and 0 <= l <= n, the index m must be " msg += "-l <= m <= l. Given values were" msg += f" [n,l,m]=[{','.join([str(n), str(ell), str(m)])}]." 
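# Reject invalid (n, l, m) combinations before scanning the basis ordering below.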
raise ValueError(msg) else: if n % 2 == 1: radial_order = n + 1 else: radial_order = n counter_i = 0 counter = 0 for l_i in range(0, radial_order + 1, 2): for n_i in range(l_i, int((radial_order + l_i) / 2) + 1): for m_i in range(-l_i, l_i + 1): if n == n_i and ell == l_i and m == m_i: counter_i = counter counter += 1 return radial_order, counter_i dipy-1.11.0/dipy/reconst/tests/000077500000000000000000000000001476546756600163555ustar00rootroot00000000000000dipy-1.11.0/dipy/reconst/tests/__init__.py000066400000000000000000000000401476546756600204600ustar00rootroot00000000000000# tests for reconstruction code dipy-1.11.0/dipy/reconst/tests/meson.build000066400000000000000000000015111476546756600205150ustar00rootroot00000000000000python_sources = [ '__init__.py', 'test_bingham.py', 'test_cache.py', 'test_cross_validation.py', 'test_csdeconv.py', 'test_cti.py', 'test_dki.py', 'test_dki_micro.py', 'test_dsi.py', 'test_dsi_deconv.py', 'test_dsi_metrics.py', 'test_dti.py', 'test_eudx_dg.py', 'test_forecast.py', 'test_fwdti.py', 'test_gqi.py', 'test_ivim.py', 'test_mapmri.py', 'test_mcsd.py', 'test_msdki.py', 'test_multi_voxel.py', 'test_odf.py', 'test_peak_finding.py', 'test_qtdmri.py', 'test_qti.py', 'test_reco_utils.py', 'test_rumba.py', 'test_sfm.py', 'test_shm.py', 'test_shore.py', 'test_shore_metrics.py', 'test_shore_odf.py', 'test_utils.py', 'test_vec_val_vect.py', 'test_weights_method.py', ] py3.install_sources( python_sources, pure: false, subdir: 'dipy/reconst/tests' ) dipy-1.11.0/dipy/reconst/tests/test_bingham.py000066400000000000000000000177741476546756600214130ustar00rootroot00000000000000import numpy as np from numpy.testing import ( assert_almost_equal, assert_array_almost_equal, assert_array_less, ) from dipy.data import get_sphere from dipy.reconst.bingham import ( _bingham_fit_peak, _convert_bingham_pars, _single_bingham_to_sf, _single_sf_to_bingham, bingham_fiber_density, bingham_fiber_spread, k2odi, odi2k, sf_to_bingham, sh_to_bingham, ) from dipy.reconst.shm import sf_to_sh def setup_module(): global sphere sphere = get_sphere(name="repulsion724") sphere = sphere.subdivide(n=2) def teardown_module(): global sphere sphere = None def test_bingham_fit(): """Tests for bingham function and single Bingham fit""" peak_dir = np.array([1, 0, 0]) ma_axis = np.array([0, 1, 0]) mi_axis = np.array([0, 0, 1]) k1 = 2 k2 = 6 f0 = 3 # Test if maximum amplitude is in the expected Bingham main direction # which should be perpendicular to both ma_axis and mi_axis odf_test = _single_bingham_to_sf(f0, k1, k2, ma_axis, mi_axis, peak_dir) assert_almost_equal(odf_test, f0) # Test Bingham fit on full sampled GT Bingham function odf_gt = _single_bingham_to_sf(f0, k1, k2, ma_axis, mi_axis, sphere.vertices) a0, c1, c2, mu0, mu1, mu2 = _bingham_fit_peak(odf_gt, peak_dir, sphere, 45) # check scalar parameters assert_almost_equal(a0, f0, decimal=3) assert_almost_equal(c1, k1, decimal=3) assert_almost_equal(c2, k2, decimal=3) # check if measured peak direction and dispersion axis are aligned to their # respective GT Mus = np.array([mu0, mu1, mu2]) Mus_ref = np.array([peak_dir, ma_axis, mi_axis]) assert_array_almost_equal( np.abs(np.diag(np.dot(Mus, Mus_ref))), np.ones(3), decimal=5 ) # check the same for bingham_fit_odf fits, n = _single_sf_to_bingham(odf_gt, sphere, max_search_angle=45) assert_almost_equal(fits[0][0], f0, decimal=3) assert_almost_equal(fits[0][1], k1, decimal=3) assert_almost_equal(fits[0][2], k2, decimal=3) Mus = np.array([fits[0][3], fits[0][4], fits[0][5]]) # I had to decrease the 
precision in the assert below because main peak # direction is now calculated (before the GT direction was given) assert_array_almost_equal( np.abs(np.diag(np.dot(Mus, Mus_ref))), np.ones(3), decimal=5 ) def test_bingham_metrics(): axis0 = np.array([1, 0, 0]) axis1 = np.array([0, 1, 0]) axis2 = np.array([0, 0, 1]) k1 = 2 k2 = 6 f0_lobe1 = 3 f0_lobe2 = 1 # define the parameters of two bingham functions with different amplitudes fits = [(f0_lobe1, k1, k2, axis0, axis1, axis2)] fits.append((f0_lobe2, k1, k2, axis0, axis1, axis2)) # First test just to check right parameter conversion ref_pars = np.zeros((2, 12)) ref_pars[0, 0] = f0_lobe1 ref_pars[1, 0] = f0_lobe2 ref_pars[0, 1] = ref_pars[1, 1] = k1 ref_pars[0, 2] = ref_pars[1, 2] = k2 ref_pars[0, 3:6] = ref_pars[1, 3:6] = axis0 ref_pars[0, 6:9] = ref_pars[1, 6:9] = axis1 ref_pars[0, 9:12] = ref_pars[1, 9:12] = axis2 bpars = _convert_bingham_pars(fits, 2) assert_array_almost_equal(bpars, ref_pars) # TEST: Bingham Fiber density # As the amplitude of the first bingham function is 3 times higher than # the second, its surface integral have to be also 3 times larger. fd = bingham_fiber_density(bpars) assert_almost_equal(fd[0] / fd[1], 3) # Fiber density using the default sphere should be close to the fd obtained # using a high-resolution sphere (2621442 vertices) fd_hires = bingham_fiber_density(bpars, subdivide=9) # We must lower the precision for the test to pass, but still shows that # the fiber density is precise up to 4 decimals using only 0.39% of the # samples (10242 versus 2621442 vertices) assert_array_almost_equal(fd, fd_hires, decimal=4) # Rotating the Bingham distribution should not bias the FD estimate xfits = [ (f0_lobe1, k1, k2, axis0, axis1, axis2), (f0_lobe1, k1, k2, axis1, axis2, axis0), (f0_lobe1, k1, k2, axis2, axis0, axis1), ] xpars = _convert_bingham_pars(xfits, 3) xfd = bingham_fiber_density(xpars) assert_almost_equal(xfd[0], xfd[1]) assert_almost_equal(xfd[0], xfd[2]) # If the Bingham function is a sphere of unit radius, the # fiber density should be 4*np.pi. sphere_fit = [(1.0, 0.0, 0.0, axis0, axis1, axis2)] sphere_pars = _convert_bingham_pars(sphere_fit, 1) fd_sphere = bingham_fiber_density(sphere_pars) assert_almost_equal(fd_sphere[0], 4.0 * np.pi) # TEST: k2odi and odi2k conversions assert_almost_equal(odi2k(k2odi(np.array(k1))), k1) assert_almost_equal(odi2k(k2odi(np.array(k2))), k2) # TEST: Fiber spread f0s = np.array([f0_lobe1, f0_lobe2]) fs = bingham_fiber_spread(f0s, fd) assert_array_almost_equal(fs, fd / f0s) def test_bingham_from_odf(): # Reconstruct multi voxel ODFs to test bingham_from_odf ma_axis = np.array([0, 1, 0]) mi_axis = np.array([0, 0, 1]) k1 = 2 k2 = 6 f0 = 3 odf = _single_bingham_to_sf(f0, k1, k2, ma_axis, mi_axis, sphere.vertices) # Perform Bingham fit in multi-voxel odf multi_odfs = np.zeros((2, 2, 1, len(sphere.vertices))) multi_odfs[...] 
= odf bim = sf_to_bingham(multi_odfs, sphere, npeaks=2, max_search_angle=45) # check model_params assert_almost_equal(bim.model_params[0, 0, 0, 0, 0], f0, decimal=3) assert_almost_equal(bim.model_params[0, 0, 0, 0, 1], k1, decimal=3) assert_almost_equal(bim.model_params[0, 0, 0, 0, 2], k2, decimal=3) # check if estimates for a second lobe are zero (note that a single peak # ODF is assumed here for this test GT) assert_array_almost_equal(bim.model_params[0, 0, 0, 1], np.zeros(12)) # check if we have estimates in the right lobe for all voxels peak_v = bim.model_params[0, 0, 0, 0, 0] assert_array_almost_equal(bim.amplitude_lobe[..., 0], peak_v * np.ones((2, 2, 1))) assert_array_almost_equal(bim.amplitude_lobe[..., 1], np.zeros((2, 2, 1))) # check kappas assert_almost_equal(bim.kappa1_lobe[0, 0, 0, 0], k1, decimal=3) assert_almost_equal(bim.kappa2_lobe[0, 0, 0, 0], k2, decimal=3) assert_almost_equal(bim.kappa_total_lobe[0, 0, 0, 0], np.sqrt(k1 * k2), decimal=3) # check ODI assert_almost_equal(bim.odi1_lobe[0, 0, 0, 0], k2odi(np.array(k1)), decimal=3) assert_almost_equal(bim.odi2_lobe[0, 0, 0, 0], k2odi(np.array(k2)), decimal=3) # ODI2 < ODI total < ODI1 assert_array_less(bim.odi2_lobe[..., 0], bim.odi1_lobe[..., 0]) assert_array_less(bim.odi2_lobe[..., 0], bim.odi_total_lobe[..., 0]) assert_array_less(bim.odi_total_lobe[..., 0], bim.odi1_lobe[..., 0]) # check fiber_density estimates (larger than zero for lobe 0) assert_array_less(np.zeros((2, 2, 1)), bim.fd_lobe[:, :, :, 0]) assert_almost_equal(np.zeros((2, 2, 1)), bim.fd_lobe[:, :, :, 1]) # check global metrics: since this simulations only have one lobe, global # metrics have to give the same values than their counterparts for lobe 1 assert_almost_equal(bim.odi1_voxel, bim.odi1_lobe[..., 0]) assert_almost_equal(bim.odi2_voxel, bim.odi2_lobe[..., 0]) assert_almost_equal(bim.odi_total_voxel, bim.odi_total_lobe[..., 0]) assert_almost_equal(bim.fd_voxel, bim.fd_lobe[..., 0]) # check fiber spread fs_v = bim.fd_lobe[0, 0, 0, 0] / peak_v assert_almost_equal(bim.fs_lobe[..., 0], fs_v) # check reconstructed odf reconst_odf = bim.odf(sphere) assert_almost_equal(reconst_odf[0, 0, 0], odf, decimal=2) def test_bingham_from_sh(): ma_axis = np.array([0, 1, 0]) mi_axis = np.array([0, 0, 1]) k1 = 2 k2 = 6 f0 = 3 odf = _single_bingham_to_sf(f0, k1, k2, ma_axis, mi_axis, sphere.vertices) bim_odf = sf_to_bingham(odf, sphere, npeaks=2, max_search_angle=45) sh = sf_to_sh(odf, sphere, sh_order_max=16, legacy=False) bim_sh = sh_to_bingham(sh, sphere, legacy=False, npeaks=2, max_search_angle=45) assert_array_almost_equal(bim_sh.model_params, bim_odf.model_params, decimal=3) dipy-1.11.0/dipy/reconst/tests/test_cache.py000066400000000000000000000010151476546756600210260ustar00rootroot00000000000000from numpy.testing import assert_, assert_equal from dipy.core.sphere import Sphere from dipy.reconst.cache import Cache class DummyModel(Cache): def __init__(self): pass def test_basic_cache(): t = DummyModel() s = Sphere(theta=[0], phi=[0]) assert_(t.cache_get("design_matrix", s) is None) m = [[1, 0], [0, 1]] t.cache_set("design_matrix", key=s, value=m) assert_equal(t.cache_get("design_matrix", s), m) t.cache_clear() assert_(t.cache_get("design_matrix", s) is None) dipy-1.11.0/dipy/reconst/tests/test_cross_validation.py000066400000000000000000000124171476546756600233360ustar00rootroot00000000000000"""Testing cross-validation analysis.""" import warnings import numpy as np import numpy.testing as npt import dipy.core.gradients as gt import dipy.data as dpd from dipy.io.image 
import load_nifti_data import dipy.reconst.base as base import dipy.reconst.cross_validation as xval import dipy.reconst.csdeconv as csd import dipy.reconst.dti as dti from dipy.reconst.shm import descoteaux07_legacy_msg import dipy.sims.voxel as sims from dipy.testing.decorators import set_random_number_generator # We'll set these globally: fdata, fbval, fbvec = dpd.get_fnames(name="small_64D") @set_random_number_generator() def test_coeff_of_determination(rng): model = rng.standard_normal((10, 10, 10, 150)) data = np.copy(model) # If the model predicts the data perfectly, the COD is all 100s: cod = xval.coeff_of_determination(data, model) npt.assert_array_equal(100, cod) def test_dti_xval(): data = load_nifti_data(fdata) gtab = gt.gradient_table(fbval, bvecs=fbvec) dm = dti.TensorModel(gtab, fit_method="LS") # The data has 102 directions, so will not divide neatly into 10 bits npt.assert_raises(ValueError, xval.kfold_xval, dm, data, 10) # In simulation with no noise, COD should be perfect: psphere = dpd.get_sphere(name="symmetric362") bvecs = np.concatenate(([[0, 0, 0]], psphere.vertices)) bvals = np.zeros(len(bvecs)) + 1000 bvals[0] = 0 gtab = gt.gradient_table(bvals, bvecs=bvecs) mevals = np.array(([0.0015, 0.0003, 0.0001], [0.0015, 0.0003, 0.0003])) mevecs = [ np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]]), np.array([[0, 0, 1], [0, 1, 0], [1, 0, 0]]), ] S = sims.single_tensor(gtab, 100, evals=mevals[0], evecs=mevecs[0], snr=None) dm = dti.TensorModel(gtab, fit_method="LS") kf_xval = xval.kfold_xval(dm, S, 2) cod = xval.coeff_of_determination(S, kf_xval) npt.assert_array_almost_equal(cod, np.ones(kf_xval.shape[:-1]) * 100) # Test with 2D data for use of a mask S = np.array([[S, S], [S, S]]) mask = np.ones(S.shape[:-1], dtype=bool) mask[1, 1] = 0 kf_xval = xval.kfold_xval(dm, S, 2, mask=mask) cod2d = xval.coeff_of_determination(S, kf_xval) npt.assert_array_almost_equal(np.round(cod2d[0, 0]), cod) @set_random_number_generator(12345) def test_csd_xval(rng): # First, let's see that it works with some data: data = load_nifti_data(fdata)[1:3, 1:3, 1:3] # Make it *small* gtab = gt.gradient_table(fbval, bvecs=fbvec) S0 = np.mean(data[..., gtab.b0s_mask]) response = ([0.0015, 0.0003, 0.0001], S0) # In simulation, it should work rather well (high COD): psphere = dpd.get_sphere(name="symmetric362") bvecs = np.concatenate(([[0, 0, 0]], psphere.vertices)) bvals = np.zeros(len(bvecs)) + 1000 bvals[0] = 0 gtab = gt.gradient_table(bvals, bvecs=bvecs) mevals = np.array(([0.0015, 0.0003, 0.0001], [0.0015, 0.0003, 0.0003])) mevecs = [ np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]]), np.array([[0, 0, 1], [0, 1, 0], [1, 0, 0]]), ] S0 = 100 S = sims.single_tensor(gtab, S0, evals=mevals[0], evecs=mevecs[0], snr=None) with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) sm = csd.ConstrainedSphericalDeconvModel(gtab, response) response = ([0.0015, 0.0003, 0.0001], S0) with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) kf_xval = xval.kfold_xval(sm, S, 2, response, sh_order_max=2, rng=rng) # Because of the regularization, COD is not going to be perfect here: cod = xval.coeff_of_determination(S, kf_xval) # We'll just test for regressions: csd_cod = 97 # pre-computed by hand for this random seed # We're going to be really lenient here: npt.assert_array_almost_equal(np.round(cod), csd_cod) # Test for 2D data with more than one voxel for use of a mask: S 
= np.array([[S, S], [S, S]]) mask = np.ones(S.shape[:-1], dtype=bool) mask[1, 1] = 0 with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) kf_xval = xval.kfold_xval( sm, S, 2, response, sh_order_max=2, mask=mask, rng=rng ) cod = xval.coeff_of_determination(S, kf_xval) npt.assert_array_almost_equal(np.round(cod[0]), csd_cod) def test_no_predict(): # Test that if you try to do this with a model that doesn't have a `predict` # method, you get something reasonable. class NoPredictModel(base.ReconstModel): def __init__(self, gtab): base.ReconstModel.__init__(self, gtab) def fit(self, data, mask=None): return NoPredictFit(self, data, mask=mask) class NoPredictFit(base.ReconstFit): def __init__(self, model, data, mask=None): base.ReconstFit.__init__(self, model, data) gtab = gt.gradient_table(fbval, bvecs=fbvec) my_model = NoPredictModel(gtab) data = load_nifti_data(fdata)[1:3, 1:3, 1:3] # Whatever npt.assert_raises(ValueError, xval.kfold_xval, my_model, data, 2) dipy-1.11.0/dipy/reconst/tests/test_csdeconv.py000066400000000000000000000667301476546756600216060ustar00rootroot00000000000000import warnings import numpy as np import numpy.testing as npt from numpy.testing import ( assert_, assert_almost_equal, assert_array_almost_equal, assert_array_equal, assert_equal, ) from dipy.core.gradients import gradient_table from dipy.core.sphere import HemiSphere, Sphere from dipy.core.sphere_stats import angular_similarity from dipy.data import default_sphere, get_fnames, get_sphere, small_sphere from dipy.direction.peaks import peak_directions from dipy.io.gradients import read_bvals_bvecs from dipy.reconst.csdeconv import ( ConstrainedSDTModel, ConstrainedSphericalDeconvModel, auto_response_ssst, forward_sdeconv_mat, mask_for_response_ssst, odf_deconv, odf_sh_to_sharp, recursive_response, response_from_mask_ssst, ) from dipy.reconst.dti import TensorModel, fractional_anisotropy from dipy.reconst.shm import ( QballModel, descoteaux07_legacy_msg, lazy_index, real_sh_descoteaux, sf_to_sh, sh_to_sf, sph_harm_ind_list, ) from dipy.sims.voxel import ( all_tensor_evecs, multi_tensor, multi_tensor_odf, single_tensor, single_tensor_odf, ) from dipy.testing import assert_greater, assert_greater_equal from dipy.testing.decorators import set_random_number_generator def get_test_data(): _, fbvals, fbvecs = get_fnames(name="small_64D") bvals, bvecs = read_bvals_bvecs(fbvals, fbvecs) gtab = gradient_table(bvals, bvecs=bvecs) evals_list = [ np.array([1.7e-3, 0.4e-3, 0.4e-3]), np.array([4.0e-4, 4.0e-4, 4.0e-4]), np.array([3.0e-3, 3.0e-3, 3.0e-3]), ] s0 = [0.8, 1, 4] signals = [single_tensor(gtab, x[0], evals=x[1]) for x in zip(s0, evals_list)] tissues = [0, 0, 2, 0, 1, 0, 0, 1, 2] data = [signals[tissue] for tissue in tissues] data = np.asarray(data).reshape((3, 3, 1, len(signals[0]))) evals = [evals_list[tissue] for tissue in tissues] evals = np.asarray(evals).reshape((3, 3, 1, 3)) tissues = np.asarray(tissues).reshape((3, 3, 1)) mask = np.where(tissues == 0, 1, 0) response = (evals_list[0], s0[0]) fa = fractional_anisotropy(evals) return gtab, data, mask, response, fa def test_recursive_response_calibration(): """ Test the recursive response calibration method. 
""" SNR = 100 S0 = 1 _, fbvals, fbvecs = get_fnames(name="small_64D") bvals, bvecs = read_bvals_bvecs(fbvals, fbvecs) sphere = default_sphere gtab = gradient_table(bvals, bvecs=bvecs) evals = np.array([0.0015, 0.0003, 0.0003]) evecs = np.array([[0, 1, 0], [0, 0, 1], [1, 0, 0]]).T mevals = np.array(([0.0015, 0.0003, 0.0003], [0.0015, 0.0003, 0.0003])) angles = [(0, 0), (90, 0)] where_dwi = lazy_index(~gtab.b0s_mask) S_cross, _ = multi_tensor( gtab, mevals, S0=S0, angles=angles, fractions=[50, 50], snr=SNR ) S_single = single_tensor(gtab, S0, evals=evals, evecs=evecs, snr=SNR) data = np.concatenate((np.tile(S_cross, (8, 1)), np.tile(S_single, (2, 1))), axis=0) odf_gt_cross = multi_tensor_odf(sphere.vertices, mevals, angles, [50, 50]) odf_gt_single = single_tensor_odf(sphere.vertices, evals=evals, evecs=evecs) with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) response = recursive_response( gtab, data, mask=None, sh_order_max=8, peak_thr=0.01, init_fa=0.05, init_trace=0.0021, iter=8, convergence=0.001, parallel=False, ) with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) csd = ConstrainedSphericalDeconvModel(gtab, response) csd_fit = csd.fit(data) assert_equal(np.all(csd_fit.shm_coeff[:, 0] >= 0), True) with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) fodf = csd_fit.odf(sphere) directions_gt_single, _, _ = peak_directions(odf_gt_single, sphere) directions_gt_cross, _, _ = peak_directions(odf_gt_cross, sphere) directions_single, _, _ = peak_directions(fodf[8, :], sphere) directions_cross, _, _ = peak_directions(fodf[0, :], sphere) ang_sim = angular_similarity(directions_cross, directions_gt_cross) assert_equal(ang_sim > 1.9, True) assert_equal(directions_cross.shape[0], 2) assert_equal(directions_gt_cross.shape[0], 2) ang_sim = angular_similarity(directions_single, directions_gt_single) assert_equal(ang_sim > 0.9, True) assert_equal(directions_single.shape[0], 1) assert_equal(directions_gt_single.shape[0], 1) with warnings.catch_warnings(record=True) as w: sphere = Sphere(xyz=gtab.gradients[where_dwi]) npt.assert_equal(len(w), 1) npt.assert_(issubclass(w[0].category, UserWarning)) npt.assert_("Vertices are not on the unit sphere" in str(w[0].message)) with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) sf = response.on_sphere(sphere) S = np.concatenate(([response.S0], sf)) tenmodel = TensorModel(gtab, min_signal=0.001) tenfit = tenmodel.fit(S) FA = fractional_anisotropy(tenfit.evals) FA_gt = fractional_anisotropy(evals) assert_almost_equal(FA, FA_gt, 1) def test_mask_for_response_ssst(): gtab, data, mask_gt, _, _ = get_test_data() mask = mask_for_response_ssst( gtab, data, roi_center=None, roi_radii=(1, 1, 0), fa_thr=0.7 ) # Verifies that mask is not empty: assert_equal(int(np.sum(mask)) != 0, True) assert_array_almost_equal(mask_gt, mask) def test_mask_for_response_ssst_nvoxels(): gtab, data, _, _, _ = get_test_data() mask = mask_for_response_ssst( gtab, data, roi_center=None, roi_radii=(1, 1, 0), fa_thr=0.7 ) nvoxels = np.sum(mask) assert_equal(nvoxels, 5) with warnings.catch_warnings(record=True) as w: mask = mask_for_response_ssst( gtab, data, roi_center=None, roi_radii=(1, 1, 0), fa_thr=1 ) npt.assert_equal(len(w), 1) 
npt.assert_(issubclass(w[0].category, UserWarning)) npt.assert_("No voxel with a FA higher than 1 were found" in str(w[0].message)) nvoxels = np.sum(mask) assert_equal(nvoxels, 0) def test_response_from_mask_ssst(): gtab, data, mask_gt, response_gt, _ = get_test_data() response, _ = response_from_mask_ssst(gtab, data, mask_gt) assert_array_almost_equal(response[0], response_gt[0]) assert_equal(response[1], response_gt[1]) def test_auto_response_ssst(): gtab, data, _, _, _ = get_test_data() response_auto, ratio_auto = auto_response_ssst( gtab, data, roi_center=None, roi_radii=(1, 1, 0), fa_thr=0.7 ) mask = mask_for_response_ssst( gtab, data, roi_center=None, roi_radii=(1, 1, 0), fa_thr=0.7 ) response_from_mask, ratio_from_mask = response_from_mask_ssst(gtab, data, mask) assert_array_equal(response_auto[0], response_from_mask[0]) assert_equal(response_auto[1], response_from_mask[1]) assert_array_equal(ratio_auto, ratio_from_mask) def test_csdeconv(): SNR = 100 S0 = 1 _, fbvals, fbvecs = get_fnames(name="small_64D") bvals, bvecs = read_bvals_bvecs(fbvals, fbvecs) gtab = gradient_table(bvals, bvecs=bvecs, b0_threshold=0) mevals = np.array(([0.0015, 0.0003, 0.0003], [0.0015, 0.0003, 0.0003])) angles = [(0, 0), (60, 0)] S, sticks = multi_tensor( gtab, mevals, S0=S0, angles=angles, fractions=[50, 50], snr=SNR ) sphere = get_sphere(name="symmetric362") odf_gt = multi_tensor_odf(sphere.vertices, mevals, angles, [50, 50]) response = (np.array([0.0015, 0.0003, 0.0003]), S0) with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) csd = ConstrainedSphericalDeconvModel(gtab, response) csd_fit = csd.fit(S) assert_equal(csd_fit.shm_coeff[0] > 0, True) with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) fodf = csd_fit.odf(sphere) directions, _, _ = peak_directions(odf_gt, sphere) directions2, _, _ = peak_directions(fodf, sphere) ang_sim = angular_similarity(directions, directions2) assert_equal(ang_sim > 1.9, True) assert_equal(directions.shape[0], 2) assert_equal(directions2.shape[0], 2) with warnings.catch_warnings(record=True) as w: warnings.simplefilter("always", category=UserWarning) warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) _ = ConstrainedSphericalDeconvModel(gtab, response, sh_order_max=10) assert_greater(len([lw for lw in w if issubclass(lw.category, UserWarning)]), 0) with warnings.catch_warnings(record=True) as w: warnings.simplefilter("always", category=UserWarning) warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) ConstrainedSphericalDeconvModel(gtab, response, sh_order_max=8) assert_equal(len([lw for lw in w if issubclass(lw.category, UserWarning)]), 0) mevecs = [] for s in sticks: mevecs += [all_tensor_evecs(s).T] S2 = single_tensor(gtab, 100, evals=mevals[0], evecs=mevecs[0], snr=None) big_S = np.zeros((10, 10, 10, len(S2))) big_S[:] = S2 aresponse, aratio = auto_response_ssst( gtab, big_S, roi_center=(5, 5, 4), roi_radii=3, fa_thr=0.5 ) assert_array_almost_equal(aresponse[0], response[0]) assert_almost_equal(aresponse[1], 100) assert_almost_equal(aratio, response[0][1] / response[0][0]) auto_response_ssst(gtab, big_S, roi_radii=3, fa_thr=0.5) assert_array_almost_equal(aresponse[0], response[0]) def test_odfdeconv(): SNR = 100 S0 = 1 _, fbvals, fbvecs = get_fnames(name="small_64D") bvals, bvecs = 
read_bvals_bvecs(fbvals, fbvecs) gtab = gradient_table(bvals, bvecs=bvecs) mevals = np.array(([0.0015, 0.0003, 0.0003], [0.0015, 0.0003, 0.0003])) angles = [(0, 0), (90, 0)] S, _ = multi_tensor(gtab, mevals, S0=S0, angles=angles, fractions=[50, 50], snr=SNR) sphere = get_sphere(name="symmetric362") odf_gt = multi_tensor_odf(sphere.vertices, mevals, angles, [50, 50]) e1 = 15.0 e2 = 3.0 ratio = e2 / e1 with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) csd = ConstrainedSDTModel(gtab, ratio, reg_sphere=None) csd_fit = csd.fit(S) with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) fodf = csd_fit.odf(sphere) directions, _, _ = peak_directions(odf_gt, sphere) directions2, _, _ = peak_directions(fodf, sphere) ang_sim = angular_similarity(directions, directions2) assert_equal(ang_sim > 1.9, True) assert_equal(directions.shape[0], 2) assert_equal(directions2.shape[0], 2) with warnings.catch_warnings(record=True) as w: warnings.simplefilter("always", category=PendingDeprecationWarning) ConstrainedSDTModel(gtab, ratio, sh_order_max=10) w_count = len(w) # A warning is expected from the ConstrainedSDTModel constructor # and additional warnings should be raised where legacy SH bases # are used assert_equal(w_count > 1, True) with warnings.catch_warnings(record=True) as w: warnings.simplefilter("always", category=PendingDeprecationWarning) ConstrainedSDTModel(gtab, ratio, sh_order_max=8) # Test that the warning from ConstrainedSDTModel # constructor is no more raised assert_equal(len(w) == w_count - 1, True) csd_fit = csd.fit(np.zeros_like(S)) fodf = csd_fit.odf(sphere) assert_array_equal(fodf, np.zeros_like(fodf)) odf_sh = np.zeros_like(fodf) odf_sh[1] = np.nan fodf, _ = odf_deconv(odf_sh, csd.R, csd.B_reg) assert_array_equal(fodf, np.zeros_like(fodf)) def test_odf_sh_to_sharp(): SNR = None S0 = 1 _, fbvals, fbvecs = get_fnames(name="small_64D") bvals, bvecs = read_bvals_bvecs(fbvals, fbvecs) gtab = gradient_table(bvals, bvecs=bvecs) mevals = np.array(([0.0015, 0.0003, 0.0003], [0.0015, 0.0003, 0.0003])) S, _ = multi_tensor( gtab, mevals, S0=S0, angles=[(10, 0), (100, 0)], fractions=[50, 50], snr=SNR ) sphere = default_sphere with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) qb = QballModel(gtab, sh_order_max=8, assume_normed=True) qbfit = qb.fit(S) with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) odf_gt = qbfit.odf(sphere) Z = np.linalg.norm(odf_gt) odfs_gt = np.zeros((3, 1, 1, odf_gt.shape[0])) odfs_gt[:, :, :] = odf_gt[:] with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) odfs_sh = sf_to_sh(odfs_gt, sphere, sh_order_max=8, basis_type=None) odfs_sh /= Z with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) fodf_sh = odf_sh_to_sharp( odfs_sh, sphere, basis=None, ratio=3 / 15.0, sh_order_max=8, lambda_=1.0, tau=0.1, ) fodf = sh_to_sf(fodf_sh, sphere, sh_order_max=8, basis_type=None) directions2, _, _ = peak_directions(fodf[0, 0, 0], sphere) assert_equal(directions2.shape[0], 2) def test_forward_sdeconv_mat(): _, l_values = sph_harm_ind_list(4) mat = forward_sdeconv_mat(np.array([0, 2, 4]), 
l_values) expected = np.diag([0, 2, 2, 2, 2, 2, 4, 4, 4, 4, 4, 4, 4, 4, 4]) npt.assert_array_equal(mat, expected) sh_order_max = 8 expected_size = (sh_order_max + 1) * (sh_order_max + 2) / 2 r_rh = np.arange(0, sh_order_max + 1, 2) m_values, l_values = sph_harm_ind_list(sh_order_max) mat = forward_sdeconv_mat(r_rh, l_values) npt.assert_equal(mat.shape, (expected_size, expected_size)) npt.assert_array_equal(mat.diagonal(), l_values) # Odd spherical harmonic degrees should raise a ValueError l_values[2] = 3 npt.assert_raises(ValueError, forward_sdeconv_mat, r_rh, l_values) def test_r2_term_odf_sharp(): SNR = None S0 = 1 angle = 45 # 45 degrees is a very tight angle to disentangle _, fbvals, fbvecs = get_fnames(name="small_64D") # get_fnames('small_64D') bvals, bvecs = read_bvals_bvecs(fbvals, fbvecs) sphere = default_sphere gtab = gradient_table(bvals, bvecs=bvecs) mevals = np.array(([0.0015, 0.0003, 0.0003], [0.0015, 0.0003, 0.0003])) angles = [(0, 0), (angle, 0)] S, _ = multi_tensor(gtab, mevals, S0=S0, angles=angles, fractions=[50, 50], snr=SNR) odf_gt = multi_tensor_odf(sphere.vertices, mevals, angles, [50, 50]) with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) odfs_sh = sf_to_sh(odf_gt, sphere, sh_order_max=8, basis_type=None) with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) fodf_sh = odf_sh_to_sharp( odfs_sh, sphere, basis=None, ratio=3 / 15.0, sh_order_max=8, lambda_=1.0, tau=0.1, r2_term=True, ) with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) fodf = sh_to_sf(fodf_sh, sphere, sh_order_max=8, basis_type=None) directions_gt, _, _ = peak_directions(odf_gt, sphere) directions, _, _ = peak_directions(fodf, sphere) ang_sim = angular_similarity(directions_gt, directions) assert_equal(ang_sim > 1.9, True) assert_equal(directions.shape[0], 2) # This should pass as well with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) sdt_model = ConstrainedSDTModel(gtab, ratio=3 / 15.0, sh_order_max=8) sdt_fit = sdt_model.fit(S) with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) fodf = sdt_fit.odf(sphere) directions_gt, _, _ = peak_directions(odf_gt, sphere) directions, _, _ = peak_directions(fodf, sphere) ang_sim = angular_similarity(directions_gt, directions) assert_equal(ang_sim > 1.9, True) assert_equal(directions.shape[0], 2) @set_random_number_generator() def test_csd_predict(rng): """ Test prediction API """ SNR = 100 S0 = 1 _, fbvals, fbvecs = get_fnames(name="small_64D") bvals, bvecs = read_bvals_bvecs(fbvals, fbvecs) gtab = gradient_table(bvals, bvecs=bvecs) mevals = np.array(([0.0015, 0.0003, 0.0003], [0.0015, 0.0003, 0.0003])) angles = [(0, 0), (60, 0)] S, _ = multi_tensor(gtab, mevals, S0=S0, angles=angles, fractions=[50, 50], snr=SNR) sphere = small_sphere multi_tensor_odf(sphere.vertices, mevals, angles, [50, 50]) response = (np.array([0.0015, 0.0003, 0.0003]), S0) with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) csd = ConstrainedSphericalDeconvModel(gtab, response) csd_fit = csd.fit(S) # Predicting from a fit should give the same result as predicting from a # model, S0 is 
1 by default prediction1 = csd_fit.predict() with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) prediction2 = csd.predict(csd_fit.shm_coeff) npt.assert_array_equal(prediction1, prediction2) npt.assert_array_equal(prediction1[..., gtab.b0s_mask], 1.0) # Same with a different S0 prediction1 = csd_fit.predict(S0=123.0) with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) prediction2 = csd.predict(csd_fit.shm_coeff, S0=123.0) npt.assert_array_equal(prediction1, prediction2) npt.assert_array_equal(prediction1[..., gtab.b0s_mask], 123.0) # For "well behaved" coefficients, the model should be able to find the # coefficients from the predicted signal. coeff = rng.random(csd_fit.shm_coeff.shape) - 0.5 coeff[..., 0] = 10.0 with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) S = csd.predict(coeff) csd_fit = csd.fit(S) npt.assert_array_almost_equal(coeff, csd_fit.shm_coeff) # Test predict on nd-data set S_nd = np.zeros((2, 3, 4, S.size)) S_nd[:] = S fit = csd.fit(S_nd) predict1 = fit.predict() with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) predict2 = csd.predict(fit.shm_coeff) npt.assert_array_almost_equal(predict1, predict2) @set_random_number_generator() def test_csd_predict_multi(rng): """ Check that we can predict reasonably from multi-voxel fits: """ S0 = 123.0 _, fbvals, fbvecs = get_fnames(name="small_64D") bvals, bvecs = read_bvals_bvecs(fbvals, fbvecs) gtab = gradient_table(bvals, bvecs=bvecs) response = (np.array([0.0015, 0.0003, 0.0003]), S0) with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) csd = ConstrainedSphericalDeconvModel(gtab, response) coeff = rng.random(45) - 0.5 coeff[..., 0] = 10.0 with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) S = csd.predict(coeff, S0=123.0) multi_S = np.array([[S, S], [S, S]]) csd_fit_multi = csd.fit(multi_S) S0_multi = np.mean(multi_S[..., gtab.b0s_mask], -1) pred_multi = csd_fit_multi.predict(S0=S0_multi) npt.assert_array_almost_equal(pred_multi, multi_S) def test_sphere_scaling_csdmodel(): """Check that mirroring regularization sphere does not change the result of the model""" _, fbvals, fbvecs = get_fnames(name="small_64D") bvals, bvecs = read_bvals_bvecs(fbvals, fbvecs) gtab = gradient_table(bvals, bvecs=bvecs) mevals = np.array(([0.0015, 0.0003, 0.0003], [0.0015, 0.0003, 0.0003])) angles = [(0, 0), (60, 0)] S, _ = multi_tensor( gtab, mevals, S0=100.0, angles=angles, fractions=[50, 50], snr=None ) hemi = small_sphere sphere = hemi.mirror() response = (np.array([0.0015, 0.0003, 0.0003]), 100) with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) model_full = ConstrainedSphericalDeconvModel(gtab, response, reg_sphere=sphere) model_hemi = ConstrainedSphericalDeconvModel(gtab, response, reg_sphere=hemi) csd_fit_full = model_full.fit(S) csd_fit_hemi = model_hemi.fit(S) assert_array_almost_equal(csd_fit_full.shm_coeff, csd_fit_hemi.shm_coeff) def test_default_lambda_csdmodel(): """We check that the default value of lambda is the expected value with the 
symmetric362 sphere. This value has empirically been found to work well and changes to this default value should be discussed with the dipy team. """ expected_lambda = {4: 27.5230088, 8: 82.5713865, 16: 216.0843135} expected_csdmodel_warnings = {4: 0, 8: 0, 16: 1} expected_sh_basis_deprecation_warnings = 3 sphere = default_sphere # Create gradient table _, fbvals, fbvecs = get_fnames(name="small_64D") bvals, bvecs = read_bvals_bvecs(fbvals, fbvecs) gtab = gradient_table(bvals, bvecs=bvecs) # Some response function response = (np.array([0.0015, 0.0003, 0.0003]), 100) for sh_order_max, expected, e_warn in zip( expected_lambda.keys(), expected_lambda.values(), expected_csdmodel_warnings.values(), ): with warnings.catch_warnings(record=True) as w: warnings.simplefilter("always", category=PendingDeprecationWarning) s_o_m = sh_order_max model_full = ConstrainedSphericalDeconvModel( gtab, response, sh_order_max=s_o_m, reg_sphere=sphere ) npt.assert_equal(len(w) - expected_sh_basis_deprecation_warnings, e_warn) if e_warn: npt.assert_(issubclass(w[0].category, UserWarning)) npt.assert_("Number of parameters required " in str(w[0].message)) with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) B_reg, _, _ = real_sh_descoteaux(sh_order_max, sphere.theta, sphere.phi) npt.assert_array_almost_equal(model_full.B_reg, expected * B_reg) def test_csd_superres(): """Check the quality of csdfit with high SH order.""" _, fbvals, fbvecs = get_fnames(name="small_64D") bvals, bvecs = read_bvals_bvecs(fbvals, fbvecs) gtab = gradient_table(bvals, bvecs=bvecs) # img, gtab = read_stanford_hardi() evals = np.array([[1.5, 0.3, 0.3]]) * [[1.0], [1.0]] / 1000.0 S, sticks = multi_tensor(gtab, evals, snr=None, fractions=[55.0, 45.0]) with warnings.catch_warnings(record=True) as w: warnings.filterwarnings( action="always", message="Number of parameters required.*", category=UserWarning, ) warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) model16 = ConstrainedSphericalDeconvModel( gtab, (evals[0], 3.0), sh_order_max=16 ) assert_greater_equal(len(w), 1) npt.assert_(issubclass(w[0].category, UserWarning)) fit16 = model16.fit(S) sphere = HemiSphere.from_sphere(get_sphere(name="symmetric724")) # print local_maxima(fit16.odf(default_sphere), default_sphere.edges) with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) d, v, ind = peak_directions( fit16.odf(sphere), sphere, relative_peak_threshold=0.2, min_separation_angle=0, ) # Check that there are two peaks assert_equal(len(d), 2) # Check that peaks line up with sticks cos_sim = abs((d * sticks).sum(1)) ** 0.5 assert_(all(cos_sim > 0.99)) def test_csd_convergence(): """Check existence of `convergence` keyword in CSD model""" _, fbvals, fbvecs = get_fnames(name="small_64D") bvals, bvecs = read_bvals_bvecs(fbvals, fbvecs) gtab = gradient_table(bvals, bvecs=bvecs) evals = np.array([[1.5, 0.3, 0.3]]) * [[1.0], [1.0]] / 1000.0 S, sticks = multi_tensor(gtab, evals, snr=None, fractions=[55.0, 45.0]) with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) model_w_conv = ConstrainedSphericalDeconvModel( gtab, (evals[0], 3.0), sh_order_max=8, convergence=50, ) with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, 
category=PendingDeprecationWarning, ) model_wo_conv = ConstrainedSphericalDeconvModel( gtab, (evals[0], 3.0), sh_order_max=8 ) assert_equal(model_w_conv.fit(S).shm_coeff, model_wo_conv.fit(S).shm_coeff) dipy-1.11.0/dipy/reconst/tests/test_cti.py000066400000000000000000000402261476546756600205510ustar00rootroot00000000000000import math import numpy as np from numpy.testing import assert_array_almost_equal, assert_raises from dipy.core.gradients import gradient_table from dipy.core.sphere import HemiSphere, disperse_charges import dipy.reconst.cti as cti from dipy.reconst.cti import ( from_qte_to_cti, ls_fit_cti, multi_gaussian_k_from_c, split_cti_params, ) from dipy.reconst.dki import ( axial_kurtosis, kurtosis_fractional_anisotropy, mean_kurtosis, mean_kurtosis_tensor, radial_kurtosis, ) from dipy.reconst.dti import decompose_tensor, mean_diffusivity import dipy.reconst.qti as qti from dipy.reconst.tests.test_qti import _anisotropic_DTD, _isotropic_DTD from dipy.reconst.utils import cti_design_matrix as design_matrix gtab1, gtab2, gtab, DTDs, S0 = None, None, None, None, None def setup_module(): global gtab1, gtab2, gtab, DTDs, S0 # Generating the DDE acquisition parameters (gtab1 and gtab2) for CTI # based on the minimal requirements. # This code then further generates the corresponding QTE gtab to test # simulated signals based on multiple Gaussian components. # In this scenario, CTI should yield analogous Kaniso and Kiso values as # compared to QTE, while Kmicro would be zero. rng = np.random.default_rng(1234) n_pts = 20 theta = np.pi * rng.random(n_pts) phi = 2 * np.pi * rng.random(n_pts) hsph_initial = HemiSphere(theta=theta, phi=phi) hsph_updated, potential = disperse_charges(hsph_initial, 5000) # Generating gtab1 bvecs1 = np.concatenate([hsph_updated.vertices] * 4) bvecs1 = np.append(bvecs1, [[0, 0, 0]], axis=0) bvals1 = np.array([2] * 20 + [1] * 20 + [1] * 20 + [1] * 20 + [0]) gtab1 = gradient_table(bvals1, bvecs=bvecs1) # Generating perpendicular directions to hsph_updated hsph_updated90 = _perpendicular_directions_temp(hsph_updated.vertices) dot_product = np.sum(hsph_updated.vertices * hsph_updated90, axis=1) assert all(np.isclose(dot_product, 0)) # Generating gtab2 bvecs2 = np.concatenate( ([hsph_updated.vertices] * 2) + [hsph_updated90] + ([hsph_updated.vertices]) ) bvecs2 = np.append(bvecs2, [[0, 0, 0]], axis=0) bvals2 = np.array([0] * 20 + [1] * 20 + [1] * 20 + [0] * 20 + [0]) gtab2 = gradient_table(bvals2, bvecs=bvecs2) e1 = bvecs1 e2 = bvecs2 e3 = np.cross(e1, e2) V = np.stack((e1, e2, e3), axis=-1) V_transpose = np.transpose(V, axes=(0, 2, 1)) B = np.zeros((81, 3, 3)) b = np.zeros((3, 3)) for i in range(81): b[0, 0] = bvals1[i] b[1, 1] = bvals2[i] B[i] = np.matmul(V[i], np.matmul(b, V_transpose[i])) gtab = gradient_table(bvals1, bvecs=bvecs1, btens=B) S0 = 100 anisotropic_DTD = _anisotropic_DTD() isotropic_DTD = _isotropic_DTD() DTDs = [ anisotropic_DTD, isotropic_DTD, np.concatenate((anisotropic_DTD, isotropic_DTD)), ] # DTD_labels = ["Anisotropic DTD", "Isotropic DTD", "Combined DTD"] def teardown_module(): global gtab1, gtab2, gtab, DTDs, S0 gtab1, gtab2, gtab, DTDs, S0 = None, None, None, None, None def _perpendicular_directions_temp(v, num=20, half=False): """ Computes a set of perpendicular directions relative to the direction in v. The perpendicular directions are computed by sampling on a unit circumference that is perpendicular to `v`. The computation depends on whether the vector is aligned with the x-axis or not. 
Parameters ---------- v : array (3,) Array containing the three cartesian coordinates of vector v. num : int, optional Number of perpendicular directions to generate. Default is 20. half : bool, optional If True, perpendicular directions are sampled on half of the unit circumference perpendicular to v, otherwise they are sampled on the full circumference. Default is False. Returns ------- psamples : array (n, 3) Array of vectors perpendicular to v. """ v = np.array(v, dtype=np.float64) v = v.T er = np.finfo(v[0].dtype).eps * 1e3 if half is True: a = np.linspace(0.0, math.pi, num=num, endpoint=False) else: a = np.linspace(0.0, 2 * math.pi, num=num, endpoint=False) cosa = np.cos(a) sina = np.sin(a) if np.any(abs(v[0] - 1.0) > er): sq = np.sqrt(v[1] ** 2 + v[2] ** 2) psamples = np.array( [ -sq * sina, (v[0] * v[1] * sina - v[2] * cosa) / sq, (v[0] * v[2] * sina + v[1] * cosa) / sq, ] ) else: sq = np.sqrt(v[0] ** 2 + v[2] ** 2) psamples = np.array( [ -(v[2] * cosa + v[0] * v[1] * sina) / sq, sina * sq, (v[0] * cosa - v[2] * v[1] * sina) / sq, ] ) return psamples.T def construct_cti_params(evals, evecs, kt, fct): """ Combines all components to generate Correlation Tensor Model Parameter. Parameters ---------- evals : array (..., 3) Eigenvalues from eigen decomposition of the tensor. evecs : array (..., 3) Associated eigenvectors from eigen decomposition of the tensor. Eigenvectors are columnar (e.g. evecs[:,j] is associated with evals[j]) kt : array (..., 15) Fifteen elements of the kurtosis tensor fct: array(..., 21) Twenty-one elements of the covariance tensor Returns ------- cti_params : numpy.ndarray (..., 48) All parameters estimated from the correlation tensor model. Parameters are ordered as follows: 1. Three diffusion tensor's eigenvalues 2. Three lines of the eigenvector matrix each containing the first, second and third coordinates of the eigenvector 3. Fifteen elements of the kurtosis tensor 4. Twenty-One elements of the covariance tensor """ fevals = evals.reshape((-1, evals.shape[-1])) fevecs = evecs.reshape((-1,) + evecs.shape[-2:]) fevecs = fevecs.reshape((1, -1)) fkt = kt.reshape((-1, kt.shape[-1])) cti_params = np.concatenate((fevals.T, fevecs.T, fkt, fct), axis=0) return np.squeeze(cti_params) def test_cti_prediction(): ctiM = cti.CorrelationTensorModel(gtab1, gtab2) for DTD in DTDs: D = np.mean(DTD, axis=0) evals, evecs = decompose_tensor(D) C = qti.dtd_covariance(DTD) C = qti.from_6x6_to_21x1(C) ccti = from_qte_to_cti(C) MD = mean_diffusivity(evals) K = multi_gaussian_k_from_c(ccti, MD) cti_params = construct_cti_params(evals, evecs, K, ccti) cti_pred_signals = ctiM.predict(cti_params, S0=S0) qti_pred_signals = qti.qti_signal(gtab, D, C, S0=S0)[np.newaxis, :] assert np.allclose( cti_pred_signals, qti_pred_signals ), "CTI and QTI signals do not match!" 
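# Why these predictions must agree: for a sum of Gaussian compartments, both
# CTI and QTI reduce to the same second-order cumulant expansion of the
# signal, log S(b) = log S0 - <D> : b + (1/2) C : (b (x) b), so any mismatch
# here would point to an error in the parameter conversion
# (from_qte_to_cti / multi_gaussian_k_from_c) rather than in the simulation.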
# check the function predict of the CorrelationTensorFit object ctiF = ctiM.fit(cti_pred_signals) ctiF_pred = ctiF.predict(gtab1, gtab2, S0=S0) assert_array_almost_equal(ctiF_pred, cti_pred_signals) def test_split_cti_param(): ctiM = cti.CorrelationTensorModel(gtab1, gtab2) for DTD in DTDs: D = np.mean(DTD, axis=0) evals, evecs = decompose_tensor(D) C = qti.dtd_covariance(DTD) C = qti.from_6x6_to_21x1(C) ccti = from_qte_to_cti(C) MD = mean_diffusivity(evals) K = multi_gaussian_k_from_c(ccti, MD) cti_params = construct_cti_params(evals, evecs, K, ccti) ctiM = cti.CorrelationTensorModel(gtab1, gtab2) cti_pred_signals = ctiM.predict(cti_params, S0=S0) ctiF = ctiM.fit(cti_pred_signals) evals, evecs, kt, ct = cti.split_cti_params(ctiF.model_params) assert_array_almost_equal(evals, ctiF.evals) assert_array_almost_equal(evecs, ctiF.evecs) assert np.allclose(kt, ctiF.kt), "kt doesn't match in test_split_cti_param " assert np.allclose(ct, ctiF.ct), "ct doesn't match in test_split_cti_param" def test_cti_fits(): ctiM = cti.CorrelationTensorModel(gtab1, gtab2) CTI_data = np.zeros((2, 2, 1, len(gtab1.bvals))) for DTD in DTDs: D = np.mean(DTD, axis=0) evals, evecs = decompose_tensor(D) C = qti.dtd_covariance(DTD) C = qti.from_6x6_to_21x1(C) ccti = from_qte_to_cti(C) MD = mean_diffusivity(evals) K = multi_gaussian_k_from_c(ccti, MD) cti_params = construct_cti_params(evals, evecs, K, ccti) cti_pred_signals = ctiM.predict(cti_params, S0=S0) evals, evecs, kt, ct = split_cti_params(cti_params) # Testing the model with correct min_signal value ctiM = cti.CorrelationTensorModel(gtab1, gtab2, min_signal=1) cti_pred_signals = ctiM.predict(cti_params, S0=S0) ctiF = ctiM.fit(cti_pred_signals) evals, evecs, kt, ct = cti.split_cti_params(ctiF.model_params) assert_array_almost_equal(evals, ctiF.evals) assert_array_almost_equal(evecs, ctiF.evecs) assert np.allclose(kt, ctiF.kt), "kt doesn't match in test_split_cti_param " assert np.allclose(ct, ctiF.ct), "ct doesn't match in test_split_cti_param" # Testing Multi-Voxel Fit CTI_data[0, 0, 0] = cti_pred_signals CTI_data[0, 1, 0] = cti_pred_signals CTI_data[1, 0, 0] = cti_pred_signals CTI_data[1, 1, 0] = cti_pred_signals multi_params = np.zeros((2, 2, 1, 48)) multi_params[0, 0, 0] = multi_params[0, 1, 0] = cti_params multi_params[1, 0, 0] = multi_params[1, 1, 0] = cti_params ctiF_multi = ctiM.fit(CTI_data) multi_evals, _, multi_kt, multi_ct = split_cti_params(ctiF_multi.model_params) assert np.allclose(evals, multi_evals), "Evals don't match" assert np.allclose(kt, multi_kt), "K doesn't match" assert np.allclose(ct, multi_ct), "C doesn't match" # Check that it works with more than one voxel, and with a different S0 # in each voxel: cti_multi_pred_signals = ctiM.predict( multi_params, S0=100 * np.ones(ctiF_multi.shape[:3]) ) CTI_data = cti_multi_pred_signals ctiF_multi_pred_signals = ctiM.fit(CTI_data) multi_evals, _, multi_kt, multi_ct = split_cti_params( ctiF_multi_pred_signals.model_params ) assert np.allclose(evals, multi_evals), "Evals don't match" assert np.allclose(kt, multi_kt), "K doesn't match" assert np.allclose(ct, multi_ct), "C doesn't match" # Testing ls_fit_cti inverse_design_matrix = np.linalg.pinv(design_matrix(gtab1, gtab2)) cti_return = ls_fit_cti( design_matrix(gtab1, gtab2), cti_pred_signals, inverse_design_matrix ) evals_return, _, kt_return, ct_return = split_cti_params(cti_return) assert np.allclose(evals, evals_return), "evals do not match!" assert np.allclose(kt, kt_return), "K do not match!" assert np.allclose(ct, ct_return), "C do not match!" 
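# For reference, ls_fit_cti with the precomputed pseudo-inverse above reduces
# to ordinary least squares on the log-signal (an assumption about its
# internals, so this minimal sketch is kept as a comment and uses only names
# already defined in this test):
#     log_s = np.log(cti_pred_signals)
#     raw_params = np.dot(inverse_design_matrix, log_s.T)
# raw_params is the lower-triangular solution from which the
# eigenvalue/eigenvector packaging unpacked by split_cti_params is derived.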
# OLS fitting ctiM = cti.CorrelationTensorModel(gtab1, gtab2, fit_method="OLS") ctiF = ctiM.fit(cti_pred_signals) ols_evals, _, ols_kt, ols_ct = split_cti_params(ctiF.model_params) assert np.allclose(evals, ols_evals), "evals do not match!" assert np.allclose(kt, ols_kt), "K do not match!" assert np.allclose(ct, ols_ct), "C do not match!" # WLS fitting cti_wlsM = cti.CorrelationTensorModel(gtab1, gtab2, fit_method="WLS") cti_wlsF = cti_wlsM.fit(cti_pred_signals) wls_evals, _, wls_kt, wls_ct = split_cti_params(cti_wlsF.model_params) assert np.allclose(evals, wls_evals), "evals do not match!" assert np.allclose(kt, wls_kt), "K do not match!" assert np.allclose(ct, wls_ct), "C do not match!" # checking Mean Kurtosis Values mk_result = ctiF.mk(min_kurtosis=-3.0 / 7, max_kurtosis=10, analytical=True) mean_kurtosis_result = mean_kurtosis( cti_params, min_kurtosis=-3.0 / 7, max_kurtosis=10, analytical=True ) assert np.allclose(mk_result, mean_kurtosis_result), ( "The results of the mk function from CorrelationTensorFit and the " "mean_kurtosis function from dki.py are not equal." ) # checking Axial Kurtosis Values ak_result = ctiF.ak(min_kurtosis=-3.0 / 7, max_kurtosis=10, analytical=True) axial_kurtosis_result = axial_kurtosis( cti_params, min_kurtosis=-3.0 / 7, max_kurtosis=10, analytical=True ) assert np.allclose(ak_result, axial_kurtosis_result), ( "The results of the ak function from CorrelationTensorFit and the " "axial_kurtosis function from dki.py are not equal." ) # checking Radial Kurtosis Values rk_result = ctiF.rk(min_kurtosis=-3.0 / 7, max_kurtosis=10, analytical=True) radial_kurtosis_result = radial_kurtosis( cti_params, min_kurtosis=-3.0 / 7, max_kurtosis=10, analytical=True ) assert np.allclose(rk_result, radial_kurtosis_result), ( "The results of the rk function from CorrelationTensorFit and the " "radial_kurtosis function from dki.py are not equal." ) # checking kurtosis fractional anisotropy values kfa_result = ctiF.kfa kurtosis_fractional_anisotropy_result = kurtosis_fractional_anisotropy( cti_params ) assert np.allclose(kfa_result, kurtosis_fractional_anisotropy_result), ( "The results of the kfa function from CorrelationTensorFit and the " "kurtosis_fractional_anisotropy function from dki.py are not equal." ) # checking mean kurtosis tensor mkt_result = ctiF.mkt(min_kurtosis=-3.0 / 7, max_kurtosis=10) mean_kurtosis_result = mean_kurtosis_tensor( cti_params, min_kurtosis=-3.0 / 7, max_kurtosis=10 ) assert np.allclose(mkt_result, mean_kurtosis_result), ( "The results of the mkt function from CorrelationTensorFit and the " "mean_kurtosis_tensor function from dki.py are not equal." ) # checking anisotropic source of kurtosis. K_aniso = ctiF.K_aniso variance_of_eigenvalues = [] for tensor in DTD: evals_tensor, _ = decompose_tensor(tensor) variance_of_eigenvalues.append(np.var(evals_tensor)) mean_variance_of_eigenvalues = np.mean(variance_of_eigenvalues) mean_D = np.trace(np.mean(DTD, axis=0)) / 3 ground_truth_K_aniso = (6 / 5) * (mean_variance_of_eigenvalues / (mean_D**2)) error_msg = ( f"Calculated K_aniso {K_aniso} does not match the " f"ground truth {ground_truth_K_aniso}" ) assert np.isclose(K_aniso, ground_truth_K_aniso), error_msg # checking isotropic source of kurtosis.
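# (ground truth below: K_iso = 3 * Var(D_bar) / E[D_bar]**2, where D_bar is
# the mean diffusivity of each DTD compartment -- the isotropic counterpart
# of the (6 / 5) * E[Var(evals)] / MD**2 expression used for K_aniso above)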
K_iso = ctiF.K_iso mean_diffusivities = [] for tensor in DTD: evals_tensor, _ = decompose_tensor(tensor) mean_diffusivities.append(np.mean(evals_tensor)) variance_of_mean_diffusivities = np.var(mean_diffusivities) mean_D = np.mean(mean_diffusivities) ground_truth_K_iso = 3 * variance_of_mean_diffusivities / (mean_D**2) error_msg = ( f"Calculated K_iso {K_iso} does not match the " f"ground truth {ground_truth_K_iso}" ) assert np.isclose(K_iso, ground_truth_K_iso), error_msg # checking microscopic source of kurtosis ground_truth_K_micro = 0 K_micro = ctiF.K_micro assert np.allclose( K_micro, ground_truth_K_micro ), "K_micro values don't match ground truth values" def test_cti_errors(): # The first error of the CTI module is raised if an unknown fit method is given assert_raises(ValueError, cti.CorrelationTensorModel, gtab1, gtab2, fit_method="") # The second error of the CTI module is raised if min_signal is defined as negative assert_raises(ValueError, cti.CorrelationTensorModel, gtab1, gtab2, min_signal=-1) def test_cti_design_matrix(): A1 = design_matrix(gtab1, gtab2) A2 = design_matrix(gtab2, gtab1) # Check if the two matrices are the same assert np.allclose(A1, A2), ( "The design matrices are not symmetric for different " "gradient direction orders." ) dipy-1.11.0/dipy/reconst/tests/test_dki.py000066400000000000000000001173261476546756600205470ustar00rootroot00000000000000"""Testing DKI""" import random import warnings import numpy as np from numpy.testing import ( assert_almost_equal, assert_array_almost_equal, assert_array_equal, assert_raises, ) from dipy.core.geometry import sphere2cart from dipy.core.gradients import gradient_table from dipy.core.sphere import Sphere from dipy.data import default_sphere, get_fnames from dipy.io.gradients import read_bvals_bvecs import dipy.reconst.dki as dki from dipy.reconst.dki import ( MIN_POSITIVE_SIGNAL, _positive_evals, axial_kurtosis, carlson_rd, carlson_rf, kurtosis_fractional_anisotropy, lower_triangular, ls_fit_dki, mean_kurtosis, mean_kurtosis_tensor, radial_kurtosis, radial_tensor_kurtosis, ) import dipy.reconst.dti as dti from dipy.reconst.dti import decompose_tensor, from_lower_triangular from dipy.reconst.utils import dki_design_matrix from dipy.reconst.weights_method import ( weights_method_nlls_m_est, weights_method_wls_m_est, ) from dipy.sims.voxel import multi_tensor_dki from dipy.testing import check_for_warnings from dipy.utils.optpkg import optional_package from dipy.utils.tripwire import TripWireError cvxpy, have_cvxpy, _ = optional_package("cvxpy", min_version="1.4.1") gtab, gtab_2s, crossing_ref, signal_cross = None, None, None, None multi_params, Kref_sphere, DWI = None, None, None mevals_cross, angles_cross, frac_cross, kt_cross = None, None, None, None dt_sph, evals_sph, kt_sph, params_sph = None, None, None, None def setup_module(): global gtab, gtab_2s, crossing_ref, signal_cross global multi_params, Kref_sphere, DWI, S0 global mevals_cross, angles_cross, frac_cross, kt_cross global dt_sph, evals_sph, kt_sph, params_sph fimg, fbvals, fbvecs = get_fnames(name="small_64D") bvals, bvecs = read_bvals_bvecs(fbvals, fbvecs) gtab = gradient_table(bvals, bvecs=bvecs) # 2 shells for techniques that require multishell data bvals_2s = np.concatenate((bvals, bvals * 2), axis=0) bvecs_2s = np.concatenate((bvecs, bvecs), axis=0) gtab_2s = gradient_table(bvals_2s, bvecs=bvecs_2s) # Simulation 1.
Signals of two crossing fibers are simulated mevals_cross = np.array( [ [0.00099, 0, 0], [0.00226, 0.00087, 0.00087], [0.00099, 0, 0], [0.00226, 0.00087, 0.00087], ] ) angles_cross = [(80, 10), (80, 10), (20, 30), (20, 30)] fie = 0.49 frac_cross = [fie * 50, (1 - fie) * 50, fie * 50, (1 - fie) * 50] S0 = 100 # Noise-free simulated signals signal_cross, dt_cross, kt_cross = multi_tensor_dki( gtab_2s, mevals_cross, S0=S0, angles=angles_cross, fractions=frac_cross, snr=None, ) evals_cross, evecs_cross = decompose_tensor(from_lower_triangular(dt_cross)) crossing_ref = np.concatenate( (evals_cross, evecs_cross[0], evecs_cross[1], evecs_cross[2], kt_cross), axis=0 ) # Simulation 2. Spherical kurtosis tensor - for white matter this can be a # biologically implausible scenario; however, this simulation is useful for # testing the estimation of directional apparent kurtosis and mean # kurtosis, since their directional and mean ground-truth values are a # constant that can easily be calculated mathematically. Di = 0.00099 De = 0.00226 mevals_sph = np.array([[Di, Di, Di], [De, De, De]]) frac_sph = [50, 50] signal_sph, dt_sph, kt_sph = multi_tensor_dki( gtab_2s, mevals_sph, S0=S0, fractions=frac_sph, snr=None ) evals_sph, evecs_sph = decompose_tensor(from_lower_triangular(dt_sph)) params_sph = np.concatenate( (evals_sph, evecs_sph[0], evecs_sph[1], evecs_sph[2], kt_sph), axis=0 ) # Compute ground truth - since KT is spherical, the apparent kurtosis # coefficient for all gradient directions and the mean kurtosis have to be # equal to Kref_sphere. f = 0.5 Dg = f * Di + (1 - f) * De Kref_sphere = 3 * f * (1 - f) * ((Di - De) / Dg) ** 2 # Simulation 3. Multi-voxel simulations - a dataset of four voxels is # simulated. Since the objective of this simulation is to check that # procedures are able to work with multi-dimensional data, all voxels # contain the same crossing signal produced in simulation 1.
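# Because the four voxels are identical, the expected multi-voxel fit result
# is simply crossing_ref tiled over the 2x2x1 grid, which is exactly what
# multi_params holds below.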
DWI = np.zeros((2, 2, 1, len(gtab_2s.bvals))) DWI[0, 0, 0] = DWI[0, 1, 0] = DWI[1, 0, 0] = DWI[1, 1, 0] = signal_cross multi_params = np.zeros((2, 2, 1, 27)) multi_params[0, 0, 0] = multi_params[0, 1, 0] = crossing_ref multi_params[1, 0, 0] = multi_params[1, 1, 0] = crossing_ref def teardown_module(): global gtab, gtab_2s, crossing_ref, signal_cross global multi_params, Kref_sphere, DWI global mevals_cross, angles_cross, frac_cross, kt_cross global dt_sph, evals_sph, kt_sph, params_sph gtab, gtab_2s, crossing_ref, signal_cross = None, None, None, None multi_params, Kref_sphere, DWI = None, None, None mevals_cross, angles_cross, frac_cross, kt_cross = None, None, None, None dt_sph, evals_sph, kt_sph, params_sph = None, None, None, None def test_positive_evals(): # Tested evals L1 = np.array([[1e-3, 1e-3, 2e-3], [0, 1e-3, 0]]) L2 = np.array([[3e-3, 0, 2e-3], [1e-3, 1e-3, 0]]) L3 = np.array([[4e-3, 1e-4, 0], [0, 1e-3, 0]]) # only the first voxels have all eigenvalues larger than zero, thus: expected_ind = np.array([[True, False, False], [False, True, False]], dtype=bool) # test function _positive_evals ind = _positive_evals(L1, L2, L3) assert_array_equal(ind, expected_ind) def test_split_dki_param(): dkiM = dki.DiffusionKurtosisModel(gtab_2s, fit_method="OLS") dkiF = dkiM.fit(DWI) evals, evecs, kt = dki.split_dki_param(dkiF.model_params) assert_array_almost_equal(evals, dkiF.evals) assert_array_almost_equal(evecs, dkiF.evecs) assert_array_almost_equal(kt, dkiF.kt) def test_dki_fits(): """DKI fits are tested on noise free crossing fiber simulates""" mask_signal_cross = np.ones_like(signal_cross) mask_signal_cross[..., ::2] = 0 mask_signal_cross[0] = 1 # OLS fitting dkiM = dki.DiffusionKurtosisModel(gtab_2s, fit_method="OLS") dkiF = dkiM.fit(signal_cross) assert_array_almost_equal(dkiF.model_params, crossing_ref) dkiF = dkiM.fit(signal_cross, mask=mask_signal_cross) assert_array_almost_equal(dkiF.model_params, crossing_ref) # OLS fitting with return leverages dkiM = dki.DiffusionKurtosisModel(gtab_2s, fit_method="OLS", return_leverages=True) dkiF = dkiM.fit(signal_cross) leverages = dkiM.extra["leverages"] assert_almost_equal(leverages.sum(), np.array([22.0])) # WLS fitting dki_wlsM = dki.DiffusionKurtosisModel(gtab_2s, fit_method="WLS") dki_wlsF = dki_wlsM.fit(signal_cross) assert_array_almost_equal(dki_wlsF.model_params, crossing_ref) # Test wls given S^2 weights argument, matches default wls design_matrix = dki_design_matrix(gtab_2s) inverse_design_matrix = np.linalg.pinv(design_matrix) YN = signal_cross + 10 * np.random.normal(size=signal_cross.shape) # error YN[YN < MIN_POSITIVE_SIGNAL] = MIN_POSITIVE_SIGNAL # wls calculation D_w, _ = ls_fit_dki( design_matrix, YN, inverse_design_matrix, return_S0_hat=False, return_lower_triangular=True, weights=True, ) # wls calculation, by calculating S^2 from OLS fit, then passing weights D_o, _ = ls_fit_dki( design_matrix, YN, inverse_design_matrix, return_S0_hat=False, return_lower_triangular=True, weights=False, ) pred_s = np.exp(np.dot(design_matrix, D_o.T)).T D_W, _ = ls_fit_dki( design_matrix, YN, inverse_design_matrix, return_S0_hat=False, return_lower_triangular=True, weights=pred_s**2, ) # S^2 weights assert_almost_equal(D_w, D_W) # wls calculation, but passing incorrect weights D_W, _ = ls_fit_dki( design_matrix, YN, inverse_design_matrix, return_S0_hat=False, return_lower_triangular=True, weights=pred_s**1, ) assert_raises(AssertionError, assert_array_equal, D_w, D_W) # Test that wls implementation is correct, by comparison with result here W 
= np.diag(pred_s.squeeze() ** 2) AT_W = np.dot(design_matrix.T, W) inv_AT_W_A = np.linalg.pinv(np.dot(AT_W, design_matrix)) AT_W_LS = np.dot(AT_W, np.log(YN).squeeze()) result = np.dot(inv_AT_W_A, AT_W_LS) assert_almost_equal(D_w.squeeze(), result) if have_cvxpy: # CLS fitting dki_clsM = dki.DiffusionKurtosisModel( gtab_2s, fit_method="CLS", cvxpy_solver=cvxpy.CLARABEL ) dki_clsF = dki_clsM.fit(signal_cross) assert_array_almost_equal(dki_clsF.model_params, crossing_ref) # CWLS fitting dki_cwlsM = dki.DiffusionKurtosisModel( gtab_2s, fit_method="CWLS", cvxpy_solver=cvxpy.CLARABEL ) dki_cwlsF = dki_cwlsM.fit(signal_cross) assert_array_almost_equal(dki_cwlsF.model_params, crossing_ref) else: assert_raises( TripWireError, dki.DiffusionKurtosisModel, gtab_2s, fit_method="CLS" ) assert_raises( TripWireError, dki.DiffusionKurtosisModel, gtab_2s, fit_method="CWLS" ) # NLS fitting if have_cvxpy: dki_nlsM = dki.DiffusionKurtosisModel(gtab_2s, fit_method="NLS") dki_nlsF = dki_nlsM.fit(signal_cross) assert_array_almost_equal(dki_nlsF.model_params, crossing_ref) dki_nlsF = dki_nlsM.fit(signal_cross, mask=mask_signal_cross) assert_array_almost_equal(dki_nlsF.model_params, crossing_ref) else: assert_raises( TripWireError, dki.DiffusionKurtosisModel, gtab_2s, fit_method="NLS" ) # Restore fitting dki_rtM = dki.DiffusionKurtosisModel(gtab_2s, fit_method="RESTORE", sigma=2) dki_rtF = dki_rtM.fit(signal_cross) assert_array_almost_equal(dki_rtF.model_params, crossing_ref) # Test single voxel return_S0 if have_cvxpy: dki_nlsM = dki.DiffusionKurtosisModel( gtab_2s, fit_method="NLS", return_S0_hat=True, cvxpy_solver=cvxpy.CLARABEL ) dki_nlsF = dki_nlsM.fit(signal_cross) assert_array_almost_equal(dki_nlsF.model_S0, S0) else: assert_raises( TripWireError, dki.DiffusionKurtosisModel, gtab_2s, fit_method="NLS" ) # testing multi-voxels mask_signal_multi = np.ones_like(DWI[..., 0]) mask_signal_multi[1, 1, ...] = 0 dkiF_multi = dkiM.fit(DWI) assert_array_almost_equal(dkiF_multi.model_params, multi_params) dkiF_multi = dki_wlsM.fit(DWI) assert_array_almost_equal(dkiF_multi.model_params, multi_params) dkiF_multi = dki_rtM.fit(DWI) assert_array_almost_equal(dkiF_multi.model_params, multi_params) if have_cvxpy: dkiF_multi = dki_nlsM.fit(DWI) assert_array_almost_equal(dkiF_multi.model_params, multi_params) dkiF_multi = dki_nlsM.fit(DWI, mask=mask_signal_multi) masked_multi_params = multi_params.copy() masked_multi_params[1, 1, ...] 
= 0 assert_array_almost_equal(dkiF_multi.model_params, masked_multi_params) # testing return of S0 if have_cvxpy: tested_methods = ["NLLS", "WLLS", "CLS", "CWLS"] else: tested_methods = ["NLLS", "WLLS"] for fit_method in tested_methods: kwargs = {} if fit_method in ["CLS", "CWLS"]: kwargs = {"cvxpy_solver": cvxpy.CLARABEL} dki_S0M = dki.DiffusionKurtosisModel( gtab_2s, fit_method=fit_method, return_S0_hat=True, **kwargs ) dki_S0F = dki_S0M.fit(DWI) dki_S0F_S0 = dki_S0F.model_S0 assert_array_almost_equal(dki_S0F_S0, np.full(dki_S0F_S0.shape, S0)) for fit_method in tested_methods: kwargs = {} if fit_method in ["CLS", "CWLS"]: kwargs = {"cvxpy_solver": cvxpy.CLARABEL} dki_S0M = dki.DiffusionKurtosisModel( gtab_2s, fit_method=fit_method, return_S0_hat=True, **kwargs ) # FIXME: CLS and CWLS generally fail with noisy data - needs fixing # dki_S0F = dki_S0M.fit(DWI + np.random.normal(size=DWI.shape),\ # mask=mask_test) dki_S0F = dki_S0M.fit(DWI) dki_S0F_S0 = dki_S0F.model_S0 assert_array_almost_equal(dki_S0F_S0, np.full(dki_S0F_S0.shape, S0)) # testing robust fitting with WLS and CWLS (needs noisy data) # --------==============----------------------================== YN = DWI + np.random.normal(size=DWI.shape) if have_cvxpy: tested_methods = ["WLS"] # FIXME: CWLS generally failing with noisy data, needs fixing # tested_methods = ['WLS', 'CWLS'] else: tested_methods = ["WLS"] for fit_method in tested_methods: kwargs = {} if fit_method == "CWLS": kwargs = {"cvxpy_solver": cvxpy.CLARABEL} wm = weights_method_wls_m_est # weights method dki_M = dki.DiffusionKurtosisModel( gtab_2s, fit_method=fit_method, return_S0_hat=True, weights_method=wm, fit_type=fit_method, num_iter=4, **kwargs, ) for mask in [None, np.ones(DWI.shape[0:-1])]: # Smoke test _ = dki_M.fit(YN, mask=mask) # test that a single iteration will raise an error dki_M = dki.DiffusionKurtosisModel( gtab_2s, fit_method=fit_method, return_S0_hat=True, weights_method=wm, fit_type=fit_method, num_iter=1, **kwargs, ) assert_raises(ValueError, dki_M.fit, YN) # RWLS and RNLLS are based on the dti fit functions tested_methods = ["RWLS", "RNLLS"] for fit_method in tested_methods: dki_M = dki.DiffusionKurtosisModel( gtab_2s, fit_method=fit_method, return_S0_hat=True ) # Smoke test: check that the fit method runs without error _ = dki_M.fit(YN) _ = dki_M.fit(YN, mask=np.ones(DWI.shape[0:-1])) # test with mask # try wrong mask (these are not 'multi_fit' methods) assert_raises( ValueError, dki_M.fit, YN, mask=np.tile(np.ones(DWI.shape[0:-1]), 2) ) # use RWLS and RNLLS but via explicit call to IRLS tested_methods = ["WLS", "NLLS"] for fm in tested_methods: wm = weights_method_wls_m_est if fm == "WLS" else weights_method_nlls_m_est dki_M = dki.DiffusionKurtosisModel( gtab_2s, fit_method="IRLS", return_S0_hat=True, weights_method=wm, fit_type=fm, ) # num_iter default _ = dki_M.fit(YN) # test that the weights methods fail if num_iter is too low tested_methods = ["WLS", "NLLS"] for fm in tested_methods: wm = weights_method_wls_m_est if fm == "WLS" else weights_method_nlls_m_est dki_M = dki.DiffusionKurtosisModel( gtab_2s, fit_method="IRLS", return_S0_hat=True, weights_method=wm, fit_type=fm, num_iter=2, ) assert_raises(ValueError, dki_M.fit, YN) def test_apparent_kurtosis_coef(): """Apparent kurtosis coefficients are tested for a spherical kurtosis tensor""" sph = Sphere(xyz=gtab.bvecs[gtab.bvals > 0]) AKC = dki.apparent_kurtosis_coef(params_sph, sph) # check all directions for d in range(len(gtab.bvecs[gtab.bvals > 0])): assert_array_almost_equal(AKC[d],
Kref_sphere) def test_dki_predict(): dkiM = dki.DiffusionKurtosisModel(gtab_2s) pred = dkiM.predict(crossing_ref, S0=100) assert_array_almost_equal(pred, signal_cross) # just to check that it works with more than one voxel: pred_multi = dkiM.predict(multi_params, S0=100) assert_array_almost_equal(pred_multi, DWI) # Check that it works with more than one voxel, and with a different S0 # in each voxel: pred_multi = dkiM.predict(multi_params, S0=100 * np.ones(pred_multi.shape[:3])) assert_array_almost_equal(pred_multi, DWI) # check the function predict of the DiffusionKurtosisFit object dkiF = dkiM.fit(DWI) pred_multi = dkiF.predict(gtab_2s, S0=100) assert_array_almost_equal(pred_multi, DWI) dkiF = dkiM.fit(pred_multi) pred_from_fit = dkiF.predict(dkiM.gtab, S0=100) assert_array_almost_equal(pred_from_fit, DWI) # Test the module function: pred = dki.dki_prediction(crossing_ref, gtab_2s, S0=100) assert_array_almost_equal(pred, signal_cross) # Test the module function with S0 volume: pred = dki.dki_prediction( multi_params, gtab_2s, S0=100 * np.ones(multi_params.shape[:3]) ) assert_array_almost_equal(pred, DWI) def test_carlson_rf(): # Define inputs that we know the outputs from: # Carlson, B.C., 1994. Numerical computation of real or complex # elliptic integrals. arXiv:math/9409227 [math.CA] # Real values (test in 2D format) x = np.array([[1.0, 0.5], [2.0, 2.0]]) y = np.array([[2.0, 1.0], [3.0, 3.0]]) z = np.array([[0.0, 0.0], [4.0, 4.0]]) # Define reference outputs RF_ref = np.array( [[1.3110287771461, 1.8540746773014], [0.58408284167715, 0.58408284167715]] ) # Compute integrals RF = carlson_rf(x, y, z) # Compare assert_array_almost_equal(RF, RF_ref) # Complex values x = np.array([1j, 1j - 1, 1j, 1j - 1]) y = np.array([-1j, 1j, -1j, 1j]) z = np.array([0.0, 0.0, 2, 1 - 1j]) # Define reference outputs RF_ref = np.array( [ 1.8540746773014, 0.79612586584234 - 1.2138566698365j, 1.0441445654064, 0.93912050218619 - 0.53296252018635j, ] ) # Compute integrals RF = carlson_rf(x, y, z, errtol=3e-5) # Compare assert_array_almost_equal(RF, RF_ref) def test_carlson_rd(): # Define inputs that we know the outputs from: # Carlson, B.C., 1994. Numerical computation of real or complex # elliptic integrals. 
arXiv:math/9409227 [math.CA] # Real values x = np.array([0.0, 2.0]) y = np.array([2.0, 3.0]) z = np.array([1.0, 4.0]) # Define reference outputs RD_ref = np.array([1.7972103521034, 0.16510527294261]) # Compute integrals RD = carlson_rd(x, y, z, errtol=1e-5) # Compare assert_array_almost_equal(RD, RD_ref) # Complex values (testing in 2D format) x = np.array([[1j, 0.0], [0.0, -2 - 1j]]) y = np.array([[-1j, 1j], [1j - 1, -1j]]) z = np.array([[2.0, -1j], [1j, -1 + 1j]]) # Define reference outputs RD_ref = np.array( [ [0.65933854154220, 1.2708196271910 + 2.7811120159521j], [-1.8577235439239 - 0.96193450888839j, 1.8249027393704 - 1.2218475784827j], ] ) # Compute integrals RD = carlson_rd(x, y, z, errtol=1e-5) # Compare assert_array_almost_equal(RD, RD_ref) def test_Wrotate_single_fiber(): # Rotate the kurtosis tensor of a single-fiber simulation to the diffusion # tensor's diagonal frame and check that it is equal to the kurtosis tensor # of the same single fiber simulated directly along the x-axis # Define the single-fiber simulation mevals = np.array([[0.00099, 0, 0], [0.00226, 0.00087, 0.00087]]) fie = 0.49 frac = [fie * 100, (1 - fie) * 100] # simulate a single fiber not aligned with the x-axis theta = random.uniform(0, 180) phi = random.uniform(0, 320) angles = [(theta, phi), (theta, phi)] signal, dt, kt = multi_tensor_dki( gtab_2s, mevals, angles=angles, fractions=frac, snr=None ) evals, evecs = decompose_tensor(from_lower_triangular(dt)) kt_rotated = dki.Wrotate(kt, evecs) # Now coordinate system has the DT diagonal aligned to the x-axis # Reference simulation in which the DT diagonal is directly aligned to the # x-axis angles = (90, 0), (90, 0) signal, dt_ref, kt_ref = multi_tensor_dki( gtab_2s, mevals, angles=angles, fractions=frac, snr=None ) assert_array_almost_equal(kt_rotated, kt_ref) def test_Wrotate_crossing_fibers(): # Test 2 - simulate crossing fibers intersecting at 70 degrees. # In this case, the diffusion tensor's principal eigenvector will be aligned # in the middle of the crossing fibers. Thus, after rotating the kurtosis # tensor, this will be equal to a simulated kurtosis tensor of crossing # fibers both deviating 35 degrees from the x-axis. Moreover, we know that # the crossing fibers will be aligned to the x-y plane, because the smaller # diffusion eigenvalue, perpendicular to both crossing fibers, will be # aligned to the z-axis.
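# Worked geometry: the two populations below sit at polar angles theta=90
# and theta=20 with the same phi, i.e. 70 degrees apart; the mean tensor's
# principal axis bisects them, so in the rotated frame each fiber lies
# 70 / 2 = 35 degrees from the x-axis -- matching the (90, +/-35) reference
# configuration simulated at the end of this test.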
# Simulate the crossing fiber angles = [(90, 30), (90, 30), (20, 30), (20, 30)] fie = 0.49 frac = [fie * 50, (1 - fie) * 50, fie * 50, (1 - fie) * 50] mevals = np.array( [ [0.00099, 0, 0], [0.00226, 0.00087, 0.00087], [0.00099, 0, 0], [0.00226, 0.00087, 0.00087], ] ) signal, dt, kt = multi_tensor_dki( gtab_2s, mevals, angles=angles, fractions=frac, snr=None ) evals, evecs = decompose_tensor(from_lower_triangular(dt)) kt_rotated = dki.Wrotate(kt, evecs) # Now coordinate system has diffusion tensor diagonal aligned to the x-axis # Simulate the reference kurtosis tensor angles = [(90, 35), (90, 35), (90, -35), (90, -35)] signal, dt, kt_ref = multi_tensor_dki( gtab_2s, mevals, angles=angles, fractions=frac, snr=None ) # Compare rotated with the reference assert_array_almost_equal(kt_rotated, kt_ref) def test_Wcons(): # Construct the 4D kurtosis tensor manually from the crossing-fiber kt # simulation Wfit = np.zeros([3, 3, 3, 3]) # Wxxxx Wfit[0, 0, 0, 0] = kt_cross[0] # Wyyyy Wfit[1, 1, 1, 1] = kt_cross[1] # Wzzzz Wfit[2, 2, 2, 2] = kt_cross[2] # Wxxxy Wfit[0, 0, 0, 1] = Wfit[0, 0, 1, 0] = Wfit[0, 1, 0, 0] = kt_cross[3] Wfit[1, 0, 0, 0] = kt_cross[3] # Wxxxz Wfit[0, 0, 0, 2] = Wfit[0, 0, 2, 0] = Wfit[0, 2, 0, 0] = kt_cross[4] Wfit[2, 0, 0, 0] = kt_cross[4] # Wxyyy Wfit[0, 1, 1, 1] = Wfit[1, 0, 1, 1] = Wfit[1, 1, 1, 0] = kt_cross[5] Wfit[1, 1, 0, 1] = kt_cross[5] # Wyyyz Wfit[1, 1, 1, 2] = Wfit[1, 2, 1, 1] = Wfit[2, 1, 1, 1] = kt_cross[6] Wfit[1, 1, 2, 1] = kt_cross[6] # Wxzzz Wfit[0, 2, 2, 2] = Wfit[2, 2, 2, 0] = Wfit[2, 0, 2, 2] = kt_cross[7] Wfit[2, 2, 0, 2] = kt_cross[7] # Wyzzz Wfit[1, 2, 2, 2] = Wfit[2, 2, 2, 1] = Wfit[2, 1, 2, 2] = kt_cross[8] Wfit[2, 2, 1, 2] = kt_cross[8] # Wxxyy Wfit[0, 0, 1, 1] = Wfit[0, 1, 0, 1] = Wfit[0, 1, 1, 0] = kt_cross[9] Wfit[1, 0, 0, 1] = Wfit[1, 0, 1, 0] = Wfit[1, 1, 0, 0] = kt_cross[9] # Wxxzz Wfit[0, 0, 2, 2] = Wfit[0, 2, 0, 2] = Wfit[0, 2, 2, 0] = kt_cross[10] Wfit[2, 0, 0, 2] = Wfit[2, 0, 2, 0] = Wfit[2, 2, 0, 0] = kt_cross[10] # Wyyzz Wfit[1, 1, 2, 2] = Wfit[1, 2, 1, 2] = Wfit[1, 2, 2, 1] = kt_cross[11] Wfit[2, 1, 1, 2] = Wfit[2, 2, 1, 1] = Wfit[2, 1, 2, 1] = kt_cross[11] # Wxxyz Wfit[0, 0, 1, 2] = Wfit[0, 0, 2, 1] = Wfit[0, 1, 0, 2] = kt_cross[12] Wfit[0, 1, 2, 0] = Wfit[0, 2, 0, 1] = Wfit[0, 2, 1, 0] = kt_cross[12] Wfit[1, 0, 0, 2] = Wfit[1, 0, 2, 0] = Wfit[1, 2, 0, 0] = kt_cross[12] Wfit[2, 0, 0, 1] = Wfit[2, 0, 1, 0] = Wfit[2, 1, 0, 0] = kt_cross[12] # Wxyyz Wfit[0, 1, 1, 2] = Wfit[0, 1, 2, 1] = Wfit[0, 2, 1, 1] = kt_cross[13] Wfit[1, 0, 1, 2] = Wfit[1, 1, 0, 2] = Wfit[1, 1, 2, 0] = kt_cross[13] Wfit[1, 2, 0, 1] = Wfit[1, 2, 1, 0] = Wfit[2, 0, 1, 1] = kt_cross[13] Wfit[2, 1, 0, 1] = Wfit[2, 1, 1, 0] = Wfit[1, 0, 2, 1] = kt_cross[13] # Wxyzz Wfit[0, 1, 2, 2] = Wfit[0, 2, 1, 2] = Wfit[0, 2, 2, 1] = kt_cross[14] Wfit[1, 0, 2, 2] = Wfit[1, 2, 0, 2] = Wfit[1, 2, 2, 0] = kt_cross[14] Wfit[2, 0, 1, 2] = Wfit[2, 0, 2, 1] = Wfit[2, 1, 0, 2] = kt_cross[14] Wfit[2, 1, 2, 0] = Wfit[2, 2, 0, 1] = Wfit[2, 2, 1, 0] = kt_cross[14] # Function to be tested W4D = dki.Wcons(kt_cross) Wfit = Wfit.reshape(-1) W4D = W4D.reshape(-1) assert_array_almost_equal(W4D, Wfit) def test_spherical_dki_statistics(): # tests if MK, AK, RK and MSK are equal to expected values of a spherical # kurtosis tensor # Define multi-voxel spherical kurtosis simulations MParam = np.zeros((2, 2, 2, 27)) MParam[0, 0, 0] = MParam[0, 0, 1] = MParam[0, 1, 0] = params_sph MParam[0, 1, 1] = MParam[1, 1, 0] = params_sph # MParam[1, 1, 1], MParam[1, 0, 0], and MParam[1, 0, 1] remain zero MRef =
    MRef[0, 0, 0] = MRef[0, 0, 1] = MRef[0, 1, 0] = Kref_sphere
    MRef[0, 1, 1] = MRef[1, 1, 0] = Kref_sphere
    MRef[1, 1, 1] = MRef[1, 0, 0] = MRef[1, 0, 1] = 0

    # Mean kurtosis analytical solution
    MK_multi = mean_kurtosis(MParam, analytical=True)
    assert_array_almost_equal(MK_multi, MRef)

    # radial kurtosis analytical solution
    RK_multi = radial_kurtosis(MParam, analytical=True)
    assert_array_almost_equal(RK_multi, MRef)

    # axial kurtosis analytical solution
    AK_multi = axial_kurtosis(MParam, analytical=True)
    assert_array_almost_equal(AK_multi, MRef)

    # mean kurtosis tensor
    MSK_multi = mean_kurtosis_tensor(MParam)
    assert_array_almost_equal(MSK_multi, MRef)

    # radial kurtosis tensor
    RKT_multi = radial_tensor_kurtosis(MParam)
    assert_array_almost_equal(RKT_multi, MRef)

    # kurtosis fractional anisotropy (isotropic case kfa=0)
    KFA_multi = kurtosis_fractional_anisotropy(MParam)
    assert_array_almost_equal(KFA_multi, 0 * MRef)


def test_single_voxel_DKI_stats():
    # tests if DKI metrics are equal to the expected values for a randomly
    # oriented single-fiber simulation
    fie = 0.49
    ADi = 0.00099
    ADe = 0.00226
    RDi = 0
    RDe = 0.00087
    MD = fie * (ADi + 2 * RDi) / 3 + (1 - fie) * (ADe + 2 * RDe) / 3

    # Reference values
    AD = fie * ADi + (1 - fie) * ADe
    AK = 3 * fie * (1 - fie) * ((ADi - ADe) / AD) ** 2
    RD = fie * RDi + (1 - fie) * RDe
    RK = 3 * fie * (1 - fie) * ((RDi - RDe) / RD) ** 2
    MKT = (
        3
        * fie
        * (1 - fie)
        * (RDe**2 + (ADe - ADi - RDe) * (7 * RDe + 3 * (ADe - ADi)) / 15)
        / (MD**2)
    )
    ref_vals = np.array([AD, AK, RD, RK, RK, MKT])

    # simulate a randomly oriented fiber
    theta = random.uniform(0, 180)
    phi = random.uniform(0, 320)
    angles = [(theta, phi), (theta, phi)]
    mevals = np.array([[ADi, RDi, RDi], [ADe, RDe, RDe]])
    frac = [fie * 100, (1 - fie) * 100]
    signal, dt, kt = multi_tensor_dki(
        gtab_2s, mevals, S0=100, angles=angles, fractions=frac, snr=None
    )
    evals, evecs = decompose_tensor(from_lower_triangular(dt))
    dki_par = np.concatenate((evals, evecs[0], evecs[1], evecs[2], kt), axis=0)

    # Estimates using dki functions
    ADe1 = dti.axial_diffusivity(evals)
    RDe1 = dti.radial_diffusivity(evals)
    AKe1 = axial_kurtosis(dki_par)
    RKe1 = radial_kurtosis(dki_par)
    RTKe1 = radial_tensor_kurtosis(dki_par)
    MKTe1 = mean_kurtosis_tensor(dki_par)
    e1_vals = np.array([ADe1, AKe1, RDe1, RKe1, RTKe1, MKTe1])
    assert_array_almost_equal(e1_vals, ref_vals)

    # Estimates using the kurtosis class object
    dkiM = dki.DiffusionKurtosisModel(gtab_2s)
    dkiF = dkiM.fit(signal)
    e2_vals = np.array([dkiF.ad, dkiF.ak(), dkiF.rd, dkiF.rk(), dkiF.rtk(), dkiF.mkt()])
    assert_array_almost_equal(e2_vals, ref_vals)

    # test MK (note this test corresponds to the MK singularity L2 == L3)
    MK_as = dkiF.mk()
    sph = Sphere(xyz=gtab.bvecs[gtab.bvals > 0])
    MK_nm = np.mean(dkiF.akc(sph))

    assert_array_almost_equal(MK_as, MK_nm, decimal=1)


def test_compare_analytical_and_numerical_methods():
    # tests if the analytical solutions of MK/RK/AK produce the same results
    # as their respective numerical methods

    # DKI Model fitting
    dkiM = dki.DiffusionKurtosisModel(gtab_2s)
    dkiF = dkiM.fit(signal_cross)

    # MK analytical and numerical solution
    MK_as = dkiF.mk(analytical=True)
    MK_nm = dkiF.mk(analytical=False)
    assert_array_almost_equal(MK_as, MK_nm)

    # RK analytical and numerical solution
    RK_as = dkiF.rk(analytical=True)
    RK_nm = dkiF.rk(analytical=False)
    assert_array_almost_equal(RK_as, RK_nm)

    # AK analytical and numerical solution
    AK_as = dkiF.ak(analytical=True)
    AK_nm = dkiF.ak(analytical=False)
    assert_array_almost_equal(AK_as, AK_nm)


def test_MK_singularities():
    # Test MK in cases where the analytical solution has a singularity not
    # covered by the other tests
    dkiM = dki.DiffusionKurtosisModel(gtab_2s)

    # test singularity L1 == L2 - this is the case of a prolate diffusion
    # tensor for crossing fibers at 90 degrees
    angles_all = np.array(
        [[(90, 0), (90, 0), (0, 0), (0, 0)], [(89.9, 0), (89.9, 0), (0, 0), (0, 0)]]
    )
    for angles_90 in angles_all:
        s_90, dt_90, kt_90 = multi_tensor_dki(
            gtab_2s,
            mevals_cross,
            S0=100,
            angles=angles_90,
            fractions=frac_cross,
            snr=None,
        )
        dkiF = dkiM.fit(s_90)
        MK_an = dkiF.mk(analytical=True)
        MK_nm = dkiF.mk(analytical=False)

        assert_almost_equal(MK_an, MK_nm, decimal=3)

        # test singularity L1 == L3 and L1 != L2
        # since L1 is defined as the largest eigenvalue and L3 the smallest
        # eigenvalue, this singularity should theoretically never be reached,
        # because L1 == L3 implies that L2 is also equal to L1 and L3.
        # Nevertheless, this test is included since the singularity is
        # relevant for cases in which the eigenvalues are not ordered

        # artificially swap the eigenvalue and eigenvector order
        dki_params = dkiF.model_params.copy()
        dki_params[1] = dkiF.model_params[2]
        dki_params[2] = dkiF.model_params[1]
        dki_params[4] = dkiF.model_params[5]
        dki_params[5] = dkiF.model_params[4]
        dki_params[7] = dkiF.model_params[8]
        dki_params[8] = dkiF.model_params[7]
        dki_params[10] = dkiF.model_params[11]
        dki_params[11] = dkiF.model_params[10]

        MK_an = dki.mean_kurtosis(dki_params, analytical=True)
        MK_nm = dki.mean_kurtosis(dki_params, analytical=False)

        assert_almost_equal(MK_an, MK_nm, decimal=3)


def test_dki_errors():
    # the first error of the DKI module is raised if an unknown fit method is
    # given
    assert_raises(ValueError, dki.DiffusionKurtosisModel, gtab_2s, fit_method="JOANA")

    # the second error of the DKI module is raised if min_signal is defined as
    # negative
    assert_raises(ValueError, dki.DiffusionKurtosisModel, gtab_2s, min_signal=-1)
    # try case with correct min_signal
    dkiM = dki.DiffusionKurtosisModel(gtab_2s, min_signal=1)
    dkiF = dkiM.fit(DWI)
    assert_array_almost_equal(dkiF.model_params, multi_params)

    # the third error is raised if a given mask does not have the same shape
    # as the data
    dkiM = dki.DiffusionKurtosisModel(gtab_2s)

    # test a correct mask
    dkiF = dkiM.fit(DWI)
    mask_correct = dkiF.fa > 0
    mask_correct[1, 1] = False
    multi_params[1, 1] = np.zeros(27)
    dkiF = dkiM.fit(DWI, mask=mask_correct)
    assert_array_almost_equal(dkiF.model_params, multi_params)
    # test incorrect mask ("multi-voxel" fit types)
    mask_not_correct = np.array([[True, True, False], [True, False, False]])
    assert_raises(ValueError, dkiM.fit, DWI, mask=mask_not_correct)

    # test incorrect mask ("single-voxel" fit types)
    if have_cvxpy:
        dkiM = dki.DiffusionKurtosisModel(gtab_2s, fit_method="NLS")
        assert_raises(ValueError, dkiM.fit, DWI, mask=mask_not_correct)

        # error if data with only one non-zero b-value is given
        assert_raises(ValueError, dki.DiffusionKurtosisModel, gtab)
    else:
        assert_raises(
            TripWireError, dki.DiffusionKurtosisModel, gtab_2s, fit_method="NLS"
        )

    # Extra checks for CLS fitting
    if have_cvxpy:
        assert_raises(
            ValueError,
            dki.DiffusionKurtosisModel,
            gtab_2s,
            fit_method="CLS",
            convexity_level="all",
        )
        assert_raises(
            ValueError,
            dki.DiffusionKurtosisModel,
            gtab_2s,
            fit_method="CLS",
            convexity_level=3,
        )

        # Check that the maximum convexity_level is set to 4 when a larger
        # one is given
        with warnings.catch_warnings(record=True) as l_warns:
            dkim = dki.DiffusionKurtosisModel(
                gtab_2s, fit_method="CLS", convexity_level=6
            )
            check_for_warnings(l_warns, "Maximum convexity_level supported is 4.")
            assert_almost_equal(dkim.convexity_level, 4)


def test_kurtosis_maximum():
    # TEST 1
    # simulate crossing fibers intersecting at 70 degrees. The first fiber
    # is aligned to the x-axis while the second fiber is aligned to the x-z
    # plane with an angular deviation of 70 degrees from the first one.
    # According to Neto Henriques et al, 2015 (NeuroImage 111: 85-99), the
    # kurtosis tensor of this simulation will have a maximum aligned to the
    # y-axis
    angles = [(90, 0), (90, 0), (20, 0), (20, 0)]
    signal_70, dt_70, kt_70 = multi_tensor_dki(
        gtab_2s, mevals_cross, S0=100, angles=angles, fractions=frac_cross, snr=None
    )

    # prepare inputs
    dkiM = dki.DiffusionKurtosisModel(gtab_2s, fit_method="WLS")
    dkiF = dkiM.fit(signal_70)
    MD = dkiF.md
    kt = dkiF.kt
    R = dkiF.evecs
    evals = dkiF.evals
    dt = lower_triangular(np.dot(np.dot(R, np.diag(evals)), R.T))
    sphere = default_sphere

    # compute maxima
    k_max_cross, max_dir = dki._voxel_kurtosis_maximum(dt, MD, kt, sphere, gtol=1e-5)

    yaxis = np.array([0.0, 1.0, 0.0])
    cos_angle = np.abs(np.dot(max_dir[0], yaxis))
    assert_almost_equal(cos_angle, 1.0)

    # TEST 2
    # test the function on cases of well-aligned fibers oriented in a randomly
    # defined direction. According to Neto Henriques et al, 2015 (NeuroImage
    # 111: 85-99), the kurtosis maximum lies along any direction perpendicular
    # to the fiber direction. Moreover, according to multicompartmental
    # simulations, kurtosis in this direction has to be equal to:
    fie = 0.49
    ADi = 0.00099
    ADe = 0.00226
    RDi = 0
    RDe = 0.00087
    RD = fie * RDi + (1 - fie) * RDe
    RK = 3 * fie * (1 - fie) * ((RDi - RDe) / RD) ** 2

    # prepare simulation:
    theta = random.uniform(0, 180)
    phi = random.uniform(0, 320)
    angles = [(theta, phi), (theta, phi)]
    mevals = np.array([[ADi, RDi, RDi], [ADe, RDe, RDe]])
    frac = [fie * 100, (1 - fie) * 100]
    signal, dt, kt = multi_tensor_dki(
        gtab_2s, mevals, angles=angles, fractions=frac, snr=None
    )

    # prepare inputs
    dkiM = dki.DiffusionKurtosisModel(gtab_2s, fit_method="WLS")
    dkiF = dkiM.fit(signal)
    MD = dkiF.md
    kt = dkiF.kt
    R = dkiF.evecs
    evals = dkiF.evals
    dt = lower_triangular(np.dot(np.dot(R, np.diag(evals)), R.T))

    # compute maxima
    k_max, max_dir = dki._voxel_kurtosis_maximum(dt, MD, kt, sphere, gtol=1e-5)

    # check if the max direction is perpendicular to the fiber direction
    fdir = np.array([sphere2cart(1.0, np.deg2rad(theta), np.deg2rad(phi))])
    cos_angle = np.abs(np.dot(max_dir[0], fdir[0]))
    assert_almost_equal(cos_angle, 0.0, decimal=5)

    # check if the max value is equal to the expected value
    assert_almost_equal(k_max, RK)
    assert_almost_equal(dkiF.kmax(sphere=sphere, gtol=1e-5), RK)

    # According to Neto Henriques et al., 2015 (NeuroImage 111: 85-99),
    # e.g. see figure 1 of this article, the kurtosis maximum of the first
    # test is also equal to the maximum kurtosis value of well-aligned fibers,
    # since the simulation parameters (apart from the fiber directions) are
    # equal
    assert_almost_equal(k_max_cross, RK)

    # Test 3 - Test performance when kurtosis is spherical - this case can be
    # problematic since a spherical kurtosis tensor does not have a maximum
    k_max, max_dir = dki._voxel_kurtosis_maximum(
        dt_sph, np.mean(evals_sph), kt_sph, sphere, gtol=1e-2
    )
    assert_almost_equal(k_max, Kref_sphere)

    # Test 4 - Test performance when kt has all elements equal to zero - this
    # case can be problematic since it does not have a maximum either
    k_max, max_dir = dki._voxel_kurtosis_maximum(
        dt_sph, np.mean(evals_sph), np.zeros(15), sphere, gtol=1e-2
    )
    assert_almost_equal(k_max, 0.0)


def test_multi_voxel_kurtosis_maximum():
    # Multi-voxel simulations parameters
    FIE = np.array([[[0.30, 0.32], [0.74, 0.51]], [[0.47, 0.21], [0.80, 0.63]]])
    RDI = np.zeros((2, 2, 2))
    ADI = np.array(
        [[[1e-3, 1.3e-3], [0.8e-3, 1e-3]], [[0.9e-3, 0.99e-3], [0.89e-3, 1.1e-3]]]
    )
    ADE = np.array(
        [[[2.2e-3, 2.3e-3], [2.8e-3, 2.1e-3]], [[1.9e-3, 2.5e-3], [1.89e-3, 2.1e-3]]]
    )
    Tor = np.array([[[2.6, 2.4], [2.8, 2.1]], [[2.9, 2.5], [2.7, 2.3]]])
    RDE = ADE / Tor

    # prepare simulation:
    DWIsim = np.zeros((2, 2, 2, gtab_2s.bvals.size))

    for i in range(2):
        for j in range(2):
            for k in range(2):
                ADi = ADI[i, j, k]
                RDi = RDI[i, j, k]
                ADe = ADE[i, j, k]
                RDe = RDE[i, j, k]
                fie = FIE[i, j, k]
                mevals = np.array([[ADi, RDi, RDi], [ADe, RDe, RDe]])
                frac = [fie * 100, (1 - fie) * 100]
                theta = random.uniform(0, 180)
                phi = random.uniform(0, 320)
                angles = [(theta, phi), (theta, phi)]
                signal, dt, kt = multi_tensor_dki(
                    gtab_2s, mevals, angles=angles, fractions=frac, snr=None
                )
                DWIsim[i, j, k, :] = signal

    # Ground truth maximum kurtosis
    RD = FIE * RDI + (1 - FIE) * RDE
    RK = 3 * FIE * (1 - FIE) * ((RDI - RDE) / RD) ** 2

    # prepare inputs
    dkiM = dki.DiffusionKurtosisModel(gtab_2s, fit_method="WLS")
    dkiF = dkiM.fit(DWIsim)

    # TEST - when no sphere is given
    k_max = dki.kurtosis_maximum(dkiF.model_params)
    assert_almost_equal(k_max, RK, decimal=4)

    # TEST - when a sphere is given
    k_max = dki.kurtosis_maximum(dkiF.model_params, sphere=default_sphere)
    assert_almost_equal(k_max, RK, decimal=4)

    # TEST - when a mask is given
    mask = np.ones((2, 2, 2), dtype="bool")
    mask[1, 1, 1] = 0
    RK[1, 1, 1] = 0
    k_max = dki.kurtosis_maximum(dkiF.model_params, mask=mask)
    assert_almost_equal(k_max, RK, decimal=4)

    # TEST - if a wrong mask is given
    mask_not_correct = np.array([[True, True, False], [True, False, False]])
    assert_raises(
        ValueError, dki.kurtosis_maximum, dkiF.model_params, mask=mask_not_correct
    )


def test_kurtosis_fa():
    # KFA = sqrt(4/5) if kurtosis is non-zero only in one direction
    mevals = np.array([[0.002, 0, 0], [0.003, 0, 0]])
    angles = [(45, 0), (45, 0)]
    fie = 0.5
    frac = [fie * 100, (1 - fie) * 100]
    signal, dt, kt = multi_tensor_dki(
        gtab_2s, mevals, S0=100, angles=angles, fractions=frac, snr=None
    )

    dkiM = dki.DiffusionKurtosisModel(gtab_2s)
    dkiF = dkiM.fit(signal)
    assert_almost_equal(dkiF.kfa, np.sqrt(4 / 5))

    # KFA = sqrt(13/15) for systems of two tensors with the same AD and RD
    # values (matching the assertion below). See the appendix of Glenn et al.,
    # 2015, Quantitative assessment of diffusional kurtosis anisotropy.
    # NMR Biomed 28; 448-459.
    # doi:10.1002/nbm.3271
    mevals = np.array([[0.003, 0.001, 0.001], [0.003, 0.001, 0.001]])
    angles = [(40, -10), (-45, 10)]
    fie = 0.5
    frac = [fie * 100, (1 - fie) * 100]
    signal, dt, kt = multi_tensor_dki(
        gtab_2s, mevals, S0=100, angles=angles, fractions=frac, snr=None
    )
    dkiM = dki.DiffusionKurtosisModel(gtab_2s)
    dkiF = dkiM.fit(signal)
    assert_almost_equal(dkiF.kfa, np.sqrt(13 / 15))

    # KFA = 0 if MKT = 0
    mevals = np.array([[0.003, 0.001, 0.001], [0.003, 0.001, 0.001]])
    angles = [(40, -10), (40, -10)]
    fie = 0.5
    frac = [fie * 100, (1 - fie) * 100]
    signal, dt, kt = multi_tensor_dki(
        gtab_2s, mevals, S0=100, angles=angles, fractions=frac, snr=None
    )
    dkiM = dki.DiffusionKurtosisModel(gtab_2s)
    dkiF = dkiM.fit(signal)
    assert_almost_equal(dkiF.kfa, 0)

dipy-1.11.0/dipy/reconst/tests/test_dki_micro.py
"""Testing DKI microstructure"""

import random

import numpy as np
from numpy.testing import (
    assert_,
    assert_allclose,
    assert_almost_equal,
    assert_array_almost_equal,
    assert_raises,
)
import pytest

from dipy.core.gradients import gradient_table
from dipy.data import default_sphere, get_fnames, get_sphere
from dipy.io.gradients import read_bvals_bvecs
from dipy.reconst.dki import common_fit_methods
import dipy.reconst.dki_micro as dki_micro
from dipy.reconst.dti import eig_from_lo_tri
from dipy.sims.voxel import _check_directions, multi_tensor, multi_tensor_dki
from dipy.utils.optpkg import optional_package

cvxpy, have_cvxpy, _ = optional_package("cvxpy", min_version="1.4.1")
needs_cvxpy = pytest.mark.skipif(not have_cvxpy, reason="Requires CVXPY")

gtab_2s, DWIsim, DWIsim_all_taylor = None, None, None
FIE, RDI, ADI, ADE, Tor, RDE = None, None, None, None, None, None


def setup_module():
    global gtab_2s, DWIsim, DWIsim_all_taylor, FIE, RDI, ADI, ADE, Tor, RDE
    fimg, fbvals, fbvecs = get_fnames(name="small_64D")
    bvals, bvecs = read_bvals_bvecs(fbvals, fbvecs)

    # 2 shells for techniques that require multi-shell data
    bvals_2s = np.concatenate((bvals, bvals * 2), axis=0)
    bvecs_2s = np.concatenate((bvecs, bvecs), axis=0)
    gtab_2s = gradient_table(bvals_2s, bvecs=bvecs_2s)

    # single-fiber simulation (which is the assumption of our model)
    FIE = np.array([[[0.30, 0.32], [0.74, 0.51]], [[0.47, 0.21], [0.80, 0.63]]])
    RDI = np.zeros((2, 2, 2))
    ADI = np.array(
        [[[1e-3, 1.3e-3], [0.8e-3, 1e-3]], [[0.9e-3, 0.99e-3], [0.89e-3, 1.1e-3]]]
    )
    ADE = np.array(
        [[[2.2e-3, 2.3e-3], [2.8e-3, 2.1e-3]], [[1.9e-3, 2.5e-3], [1.89e-3, 2.1e-3]]]
    )
    Tor = np.array([[[2.6, 2.4], [2.8, 2.1]], [[2.9, 2.5], [2.7, 2.3]]])
    RDE = ADE / Tor

    # prepare simulation:
    DWIsim = np.zeros((2, 2, 2, gtab_2s.bvals.size))

    # The diffusion microstructural model assumes that the signal has no
    # Taylor-approximation components of order higher than four. Thus,
    # parameter estimates are only equal to the ground-truth values of the
    # simulation if signal Taylor components of order higher than four are
    # removed. Signals without these higher-order Taylor components can be
    # generated using the multi_tensor_dki simulations; we therefore use this
    # function to test the expected estimates of the model.

    DWIsim_all_taylor = np.zeros((2, 2, 2, gtab_2s.bvals.size))

    # Signals with all Taylor components can be simulated using the function
    # multi_tensor. Generating these signals is useful to test the prediction
    # procedures of the DKI-based microstructural model.
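    # A minimal sketch (not part of the original setup) of the fourth-order
    # cumulant expansion that multi_tensor_dki evaluates along a single
    # gradient direction:
    #     S(b) = S0 * exp(-b * D + (b**2 * D**2 * K) / 6)
    # with D the apparent diffusivity and K the apparent kurtosis along that
    # direction. The helper name is a hypothetical illustration only; the
    # simulations below call multi_tensor_dki / multi_tensor directly.
    def _dki_signal_sketch(b, S0, D, K):
        return S0 * np.exp(-b * D + (b**2 * D**2 * K) / 6.0)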
    for i in range(2):
        for j in range(2):
            for k in range(2):
                ADi = ADI[i, j, k]
                RDi = RDI[i, j, k]
                ADe = ADE[i, j, k]
                RDe = RDE[i, j, k]
                fie = FIE[i, j, k]
                mevals = np.array([[ADi, RDi, RDi], [ADe, RDe, RDe]])
                frac = [fie * 100, (1 - fie) * 100]
                theta = random.uniform(0, 180)
                phi = random.uniform(0, 320)
                angles = [(theta, phi), (theta, phi)]
                signal, dt, kt = multi_tensor_dki(
                    gtab_2s, mevals, angles=angles, fractions=frac, snr=None
                )
                DWIsim[i, j, k, :] = signal
                signal, sticks = multi_tensor(
                    gtab_2s, mevals, angles=angles, fractions=frac, snr=None
                )
                DWIsim_all_taylor[i, j, k, :] = signal


def teardown_module():
    global gtab_2s, DWIsim, DWIsim_all_taylor, FIE, RDI, ADI, ADE, Tor, RDE
    gtab_2s, DWIsim, DWIsim_all_taylor = None, None, None
    FIE, RDI, ADI, ADE, Tor, RDE = None, None, None, None, None, None


@needs_cvxpy
def test_fit_selection():
    for model in [
        dki_micro.KurtosisMicrostructureModel,
        dki_micro.DiffusionKurtosisModel,
    ]:
        for name, method in common_fit_methods.items():
            model_instance = model(gtab_2s, fit_method=name)
            assert model_instance.fit_method == method


def test_single_fiber_model():
    # single-fiber simulation (which is the assumption of our model)
    fie = 0.49
    ADi = 0.00099
    ADe = 0.00226
    RDi = 0
    RDe = 0.00087

    # prepare simulation:
    theta = random.uniform(0, 180)
    phi = random.uniform(0, 320)
    angles = [(theta, phi), (theta, phi)]
    mevals = np.array([[ADi, RDi, RDi], [ADe, RDe, RDe]])
    frac = [fie * 100, (1 - fie) * 100]
    signal, dt, kt = multi_tensor_dki(
        gtab_2s, mevals, angles=angles, fractions=frac, snr=None
    )
    # DKI fit
    dkiM = dki_micro.DiffusionKurtosisModel(gtab_2s, fit_method="WLS")
    dkiF = dkiM.fit(signal)

    # Axonal Water Fraction
    AWF = dki_micro.axonal_water_fraction(
        dkiF.model_params, sphere=default_sphere, mask=None, gtol=1e-5
    )
    assert_almost_equal(AWF, fie)

    # Extra-cellular and intra-cellular components
    edt, idt = dki_micro.diffusion_components(dkiF.model_params, sphere=default_sphere)
    EDT = eig_from_lo_tri(edt)
    IDT = eig_from_lo_tri(idt)

    # check eigenvalues
    assert_array_almost_equal(EDT[0:3], np.array([ADe, RDe, RDe]))
    assert_array_almost_equal(IDT[0:3], np.array([ADi, RDi, RDi]))
    # the first eigenvector should be aligned with the fiber direction
    fiber_direction = _check_directions([(theta, phi)])
    f_norm = abs(np.dot(fiber_direction, np.array((EDT[3], EDT[6], EDT[9]))))
    assert_almost_equal(f_norm, 1.0)
    f_norm = abs(np.dot(fiber_direction, np.array((IDT[3], IDT[6], IDT[9]))))
    assert_almost_equal(f_norm, 1.0)

    # Test model and fit objects
    wmtiM = dki_micro.KurtosisMicrostructureModel(gtab_2s, fit_method="WLS")
    wmtiF = wmtiM.fit(signal)
    assert_allclose(wmtiF.awf, AWF, rtol=1e-6)
    assert_array_almost_equal(wmtiF.hindered_evals, np.array([ADe, RDe, RDe]))
    assert_array_almost_equal(wmtiF.restricted_evals, np.array([ADi, RDi, RDi]))
    assert_almost_equal(wmtiF.hindered_ad, ADe)
    assert_almost_equal(wmtiF.hindered_rd, RDe)
    assert_almost_equal(wmtiF.axonal_diffusivity, ADi)
    assert_almost_equal(wmtiF.tortuosity, ADe / RDe, decimal=4)

    # Test diffusion_components when a kurtosis tensor is associated with
    # negative kurtosis values.
    # An example of such a case is given below:
    dkiparams = np.array(
        [
            1.67135726e-03,
            5.03651205e-04,
            9.35365328e-05,
            -7.11167583e-01,
            6.23186820e-01,
            -3.25390313e-01,
            -1.75247376e-02,
            -4.78415563e-01,
            -8.77958674e-01,
            7.02804064e-01,
            6.18673368e-01,
            -3.51154825e-01,
            2.18384153,
            -2.76378153e-02,
            2.22893297,
            -2.68306546e-01,
            -1.28411610,
            -1.56557645e-01,
            -1.80850619e-01,
            -8.33152110e-01,
            -3.62410766e-01,
            1.57775442e-01,
            8.73775381e-01,
            2.77188975e-01,
            -3.67415502e-02,
            -1.56330984e-01,
            -1.62295407e-02,
        ]
    )
    edt, idt = dki_micro.diffusion_components(dkiparams)
    assert_(np.all(np.isfinite(edt)))


def test_wmti_model_multi_voxel():
    # DKI fit
    dkiM = dki_micro.DiffusionKurtosisModel(gtab_2s, fit_method="WLS")
    dkiF = dkiM.fit(DWIsim)

    # Axonal Water Fraction
    sphere = get_sphere()
    AWF = dki_micro.axonal_water_fraction(
        dkiF.model_params, sphere=sphere, mask=None, gtol=1e-5
    )
    assert_almost_equal(AWF, FIE)

    # Extra-cellular and intra-cellular components
    edt, idt = dki_micro.diffusion_components(dkiF.model_params, sphere=sphere)
    EDT = eig_from_lo_tri(edt)
    IDT = eig_from_lo_tri(idt)

    # check eigenvalues
    assert_array_almost_equal(EDT[..., 0], ADE, decimal=3)
    assert_array_almost_equal(EDT[..., 1], RDE, decimal=3)
    assert_array_almost_equal(EDT[..., 2], RDE, decimal=3)
    assert_array_almost_equal(IDT[..., 0], ADI, decimal=3)
    assert_array_almost_equal(IDT[..., 1], RDI, decimal=3)
    assert_array_almost_equal(IDT[..., 2], RDI, decimal=3)

    # Test methods performance when a signal with all zeros is present
    FIEc = FIE.copy()
    RDIc = RDI.copy()
    ADIc = ADI.copy()
    ADEc = ADE.copy()
    Torc = Tor.copy()
    RDEc = RDE.copy()
    DWIsimc = DWIsim.copy()

    FIEc[0, 0, 0] = 0
    RDIc[0, 0, 0] = 0
    ADIc[0, 0, 0] = 0
    ADEc[0, 0, 0] = 0
    Torc[0, 0, 0] = 0
    RDEc[0, 0, 0] = 0
    DWIsimc[0, 0, 0, :] = 0
    mask = np.ones((2, 2, 2))
    mask[0, 0, 0] = 0

    dkiF = dkiM.fit(DWIsimc)
    awf = dki_micro.axonal_water_fraction(dkiF.model_params, sphere=sphere, gtol=1e-5)
    assert_almost_equal(awf, FIEc)

    # Extra-cellular and intra-cellular components
    edt, idt = dki_micro.diffusion_components(dkiF.model_params, sphere=sphere, awf=awf)
    EDT = eig_from_lo_tri(edt)
    IDT = eig_from_lo_tri(idt)

    assert_array_almost_equal(EDT[..., 0], ADEc, decimal=3)
    assert_array_almost_equal(EDT[..., 1], RDEc, decimal=3)
    assert_array_almost_equal(EDT[..., 2], RDEc, decimal=3)
    assert_array_almost_equal(IDT[..., 0], ADIc, decimal=3)
    assert_array_almost_equal(IDT[..., 1], RDIc, decimal=3)
    assert_array_almost_equal(IDT[..., 2], RDIc, decimal=3)

    # Check when mask is given
    dkiF = dkiM.fit(DWIsim)
    awf = dki_micro.axonal_water_fraction(
        dkiF.model_params, sphere=sphere, gtol=1e-5, mask=mask
    )
    assert_almost_equal(awf, FIEc, decimal=3)

    # Extra-cellular and intra-cellular components
    edt, idt = dki_micro.diffusion_components(
        dkiF.model_params, sphere=sphere, awf=awf, mask=mask
    )
    EDT = eig_from_lo_tri(edt)
    IDT = eig_from_lo_tri(idt)

    assert_array_almost_equal(EDT[..., 0], ADEc, decimal=3)
    assert_array_almost_equal(EDT[..., 1], RDEc, decimal=3)
    assert_array_almost_equal(EDT[..., 2], RDEc, decimal=3)
    assert_array_almost_equal(IDT[..., 0], ADIc, decimal=3)
    assert_array_almost_equal(IDT[..., 1], RDIc, decimal=3)
    assert_array_almost_equal(IDT[..., 2], RDIc, decimal=3)

    # Check class object
    wmtiM = dki_micro.KurtosisMicrostructureModel(gtab_2s, fit_method="WLS")
    wmtiF = wmtiM.fit(DWIsim, mask=mask)
    assert_almost_equal(wmtiF.awf, FIEc, decimal=3)
    assert_almost_equal(wmtiF.axonal_diffusivity, ADIc, decimal=3)
    assert_almost_equal(wmtiF.hindered_ad, ADEc, decimal=3)
    assert_almost_equal(wmtiF.hindered_rd, RDEc, decimal=3)
    assert_almost_equal(wmtiF.tortuosity, Torc, decimal=3)


def test_dki_micro_predict_single_voxel():
    # single-fiber simulation (which is the assumption of our model)
    fie = 0.49
    ADi = 0.00099
    ADe = 0.00226
    RDi = 0
    RDe = 0.00087

    # prepare simulation:
    theta = random.uniform(0, 180)
    phi = random.uniform(0, 320)
    angles = [(theta, phi), (theta, phi)]
    mevals = np.array([[ADi, RDi, RDi], [ADe, RDe, RDe]])
    frac = [fie * 100, (1 - fie) * 100]
    signal, dt, kt = multi_tensor_dki(
        gtab_2s, mevals, angles=angles, fractions=frac, snr=None
    )
    signal_gt, da = multi_tensor(
        gtab_2s, mevals, angles=angles, fractions=frac, snr=None
    )

    # Define the DKI microstructural model
    dkiM = dki_micro.KurtosisMicrostructureModel(gtab_2s)

    # Fit single voxel signal
    dkiF = dkiM.fit(signal)

    # Check predict of KurtosisMicrostructureModel
    pred = dkiM.predict(dkiF.model_params)
    assert_array_almost_equal(pred, signal_gt, decimal=4)

    pred = dkiM.predict(dkiF.model_params, S0=100)
    assert_array_almost_equal(pred, signal_gt * 100, decimal=4)

    # Check predict of the kurtosis microstructural fit object
    pred = dkiF.predict(gtab_2s, S0=100)
    assert_array_almost_equal(pred, signal_gt * 100, decimal=4)


def test_dki_micro_predict_multi_voxel():
    dkiM = dki_micro.KurtosisMicrostructureModel(gtab_2s)
    dkiF = dkiM.fit(DWIsim)

    # Check predict of KurtosisMicrostructureModel
    pred = dkiM.predict(dkiF.model_params)
    assert_array_almost_equal(pred, DWIsim_all_taylor, decimal=3)

    pred = dkiM.predict(dkiF.model_params, S0=100)
    assert_array_almost_equal(pred, DWIsim_all_taylor * 100, decimal=3)

    # Check predict of the kurtosis microstructural fit object
    pred = dkiF.predict(gtab_2s, S0=100)
    assert_array_almost_equal(pred, DWIsim_all_taylor * 100, decimal=3)


def _help_test_awf_only(dkimicrofit, string):
    exec(string)


def test_dki_micro_awf_only():
    dkiM = dki_micro.KurtosisMicrostructureModel(gtab_2s)
    dkiF = dkiM.fit(DWIsim, awf_only=True)
    awf = dkiF.awf
    assert_almost_equal(awf, FIE, decimal=3)

    # assert_raises(dkiF.hindered_evals)
    assert_raises(ValueError, _help_test_awf_only, dkiF, "dkimicrofit.hindered_evals")
    assert_raises(ValueError, _help_test_awf_only, dkiF, "dkimicrofit.restricted_evals")
    assert_raises(
        ValueError, _help_test_awf_only, dkiF, "dkimicrofit.axonal_diffusivity"
    )
    assert_raises(ValueError, _help_test_awf_only, dkiF, "dkimicrofit.hindered_ad")
    assert_raises(ValueError, _help_test_awf_only, dkiF, "dkimicrofit.hindered_rd")
    assert_raises(ValueError, _help_test_awf_only, dkiF, "dkimicrofit.tortuosity")


def additional_tortuosity_tests():
    # Test tortuosity when rd is zero
    # single voxel
    t = dki_micro.tortuosity(1.7e-3, 0.0)
    assert_almost_equal(t, 0.0)

    # multi-voxel
    RDEc = RDE.copy()
    Torc = Tor.copy()
    RDEc[1, 1, 1] = 0.0
    Torc[1, 1, 1] = 0.0
    t = dki_micro.tortuosity(ADE, RDEc)
    assert_almost_equal(Torc, t)

dipy-1.11.0/dipy/reconst/tests/test_dsi.py
import numpy as np
from numpy.testing import assert_almost_equal, assert_equal, assert_raises

from dipy.core.gradients import gradient_table
from dipy.core.sphere_stats import angular_similarity
from dipy.core.subdivide_octahedron import create_unit_sphere
from dipy.data import default_sphere, dsi_voxels, get_fnames
from dipy.direction.peaks import peak_directions
from dipy.reconst.dsi import DiffusionSpectrumModel
from dipy.reconst.odf import gfa
from dipy.sims.voxel import sticks_and_ball


def test_dsi():
    # load repulsion 724 sphere
    sphere = default_sphere
    # load icosahedron sphere
    sphere2 = create_unit_sphere(recursion_level=5)
    btable = 
np.loadtxt(get_fnames(name="dsi515btable")) gtab = gradient_table(btable[:, 0], bvecs=btable[:, 1:]) data, golden_directions = sticks_and_ball( gtab, d=0.0015, S0=100, angles=[(0, 0), (90, 0)], fractions=[50, 50], snr=None ) ds = DiffusionSpectrumModel(gtab) # repulsion724 dsfit = ds.fit(data) odf = dsfit.odf(sphere) directions, _, _ = peak_directions(odf, sphere) assert_equal(len(directions), 2) assert_almost_equal(angular_similarity(directions, golden_directions), 2, 1) # 5 subdivisions dsfit = ds.fit(data) odf2 = dsfit.odf(sphere2) directions, _, _ = peak_directions(odf2, sphere2) assert_equal(len(directions), 2) assert_almost_equal(angular_similarity(directions, golden_directions), 2, 1) assert_equal(dsfit.pdf().shape, 3 * (ds.qgrid_size,)) sb_dummies = sticks_and_ball_dummies(gtab) for sbd in sb_dummies: data, golden_directions = sb_dummies[sbd] odf = ds.fit(data).odf(sphere2) directions, _, _ = peak_directions(odf, sphere2) if len(directions) <= 3: assert_equal(len(directions), len(golden_directions)) if len(directions) > 3: assert_equal(gfa(odf) < 0.1, True) assert_raises(ValueError, DiffusionSpectrumModel, gtab, qgrid_size=16) def test_multivox_dsi(): data, gtab = dsi_voxels() DS = DiffusionSpectrumModel(gtab) DSfit = DS.fit(data) PDF = DSfit.pdf() assert_equal(data.shape[:-1] + (17, 17, 17), PDF.shape) assert_equal(np.all(np.isreal(PDF)), True) def test_multib0_dsi(): data, gtab = dsi_voxels() # Create a new data-set with a b0 measurement: new_data = np.concatenate([data, data[..., 0, None]], -1) new_bvecs = np.concatenate([gtab.bvecs, np.zeros((1, 3))]) new_bvals = np.concatenate([gtab.bvals, [0]]) new_gtab = gradient_table(new_bvals, bvecs=new_bvecs) ds = DiffusionSpectrumModel(new_gtab) dsfit = ds.fit(new_data) pdf = dsfit.pdf() dsfit.odf(default_sphere) assert_equal(new_data.shape[:-1] + (17, 17, 17), pdf.shape) assert_equal(np.all(np.isreal(pdf)), True) # And again, with one more b0 measurement (two in total): new_data = np.concatenate([data, data[..., 0, None]], -1) new_bvecs = np.concatenate([gtab.bvecs, np.zeros((1, 3))]) new_bvals = np.concatenate([gtab.bvals, [0]]) new_gtab = gradient_table(new_bvals, bvecs=new_bvecs) ds = DiffusionSpectrumModel(new_gtab) dsfit = ds.fit(new_data) pdf = dsfit.pdf() dsfit.odf(default_sphere) assert_equal(new_data.shape[:-1] + (17, 17, 17), pdf.shape) assert_equal(np.all(np.isreal(pdf)), True) def sticks_and_ball_dummies(gtab): sb_dummies = {} S, sticks = sticks_and_ball( gtab, d=0.0015, S0=100, angles=[(0, 0)], fractions=[100], snr=None ) sb_dummies["1fiber"] = (S, sticks) S, sticks = sticks_and_ball( gtab, d=0.0015, S0=100, angles=[(0, 0), (90, 0)], fractions=[50, 50], snr=None ) sb_dummies["2fiber"] = (S, sticks) S, sticks = sticks_and_ball( gtab, d=0.0015, S0=100, angles=[(0, 0), (90, 0), (90, 90)], fractions=[33, 33, 33], snr=None, ) sb_dummies["3fiber"] = (S, sticks) S, sticks = sticks_and_ball( gtab, d=0.0015, S0=100, angles=[(0, 0), (90, 0), (90, 90)], fractions=[0, 0, 0], snr=None, ) sb_dummies["isotropic"] = (S, sticks) return sb_dummies dipy-1.11.0/dipy/reconst/tests/test_dsi_deconv.py000066400000000000000000000047631476546756600221150ustar00rootroot00000000000000import numpy as np from numpy.testing import assert_almost_equal, assert_equal, assert_raises from dipy.core.gradients import gradient_table from dipy.core.sphere_stats import angular_similarity from dipy.core.subdivide_octahedron import create_unit_sphere from dipy.data import default_sphere, dsi_deconv_voxels, get_fnames from dipy.direction.peaks import peak_directions 
from dipy.reconst.dsi import DiffusionSpectrumDeconvModel from dipy.reconst.odf import gfa from dipy.reconst.tests.test_dsi import sticks_and_ball_dummies from dipy.sims.voxel import sticks_and_ball def test_dsi(): # load repulsion 724 sphere sphere = default_sphere # load icosahedron sphere sphere2 = create_unit_sphere(recursion_level=5) btable = np.loadtxt(get_fnames(name="dsi515btable")) gtab = gradient_table(btable[:, 0], bvecs=btable[:, 1:]) data, golden_directions = sticks_and_ball( gtab, d=0.0015, S0=100, angles=[(0, 0), (90, 0)], fractions=[50, 50], snr=None ) ds = DiffusionSpectrumDeconvModel(gtab) # repulsion724 dsfit = ds.fit(data) odf = dsfit.odf(sphere) directions, _, _ = peak_directions( odf, sphere, relative_peak_threshold=0.35, min_separation_angle=25 ) assert_equal(len(directions), 2) assert_almost_equal(angular_similarity(directions, golden_directions), 2, 1) # 5 subdivisions dsfit = ds.fit(data) odf2 = dsfit.odf(sphere2) directions, _, _ = peak_directions( odf2, sphere2, relative_peak_threshold=0.35, min_separation_angle=25 ) assert_equal(len(directions), 2) assert_almost_equal(angular_similarity(directions, golden_directions), 2, 1) assert_equal(dsfit.pdf().shape, 3 * (ds.qgrid_size,)) sb_dummies = sticks_and_ball_dummies(gtab) for sbd in sb_dummies: data, golden_directions = sb_dummies[sbd] odf = ds.fit(data).odf(sphere2) directions, _, _ = peak_directions( odf, sphere2, relative_peak_threshold=0.35, min_separation_angle=25 ) if len(directions) <= 3: assert_equal(len(directions), len(golden_directions)) if len(directions) > 3: assert_equal(gfa(odf) < 0.1, True) assert_raises(ValueError, DiffusionSpectrumDeconvModel, gtab, qgrid_size=16) def test_multivox_dsi(): data, gtab = dsi_deconv_voxels() DS = DiffusionSpectrumDeconvModel(gtab) DSfit = DS.fit(data) PDF = DSfit.pdf() assert_equal(data.shape[:-1] + (35, 35, 35), PDF.shape) assert_equal(np.all(np.isreal(PDF)), True) dipy-1.11.0/dipy/reconst/tests/test_dsi_metrics.py000066400000000000000000000026231476546756600222760ustar00rootroot00000000000000import numpy as np from numpy.testing import assert_almost_equal from dipy.core.gradients import gradient_table from dipy.data import get_fnames from dipy.reconst.dsi import DiffusionSpectrumModel from dipy.sims.voxel import multi_tensor, sticks_and_ball def test_dsi_metrics(): btable = np.loadtxt(get_fnames(name="dsi4169btable")) gtab = gradient_table(btable[:, 0], bvecs=btable[:, 1:]) data, _ = sticks_and_ball( gtab, d=0.0015, S0=100, angles=[(0, 0), (60, 0)], fractions=[50, 50], snr=None ) dsmodel = DiffusionSpectrumModel(gtab, qgrid_size=21, filter_width=4500) rtop_signal_norm = dsmodel.fit(data).rtop_signal() dsmodel.fit(data).rtop_pdf() rtop_pdf = dsmodel.fit(data).rtop_pdf(normalized=False) assert_almost_equal(rtop_signal_norm, rtop_pdf, 6) dsmodel = DiffusionSpectrumModel(gtab, qgrid_size=21, filter_width=4500) mevals = np.array(([0.0015, 0.0003, 0.0003], [0.0015, 0.0003, 0.0003])) S_0, _ = multi_tensor( gtab, mevals, S0=100, angles=[(0, 0), (60, 0)], fractions=[50, 50], snr=None ) S_1, _ = multi_tensor( gtab, mevals * 2.0, S0=100, angles=[(0, 0), (60, 0)], fractions=[50, 50], snr=None, ) MSD_norm_0 = dsmodel.fit(S_0).msd_discrete(normalized=True) MSD_norm_1 = dsmodel.fit(S_1).msd_discrete(normalized=True) assert_almost_equal(MSD_norm_0, 0.5 * MSD_norm_1, 4) dipy-1.11.0/dipy/reconst/tests/test_dti.py000066400000000000000000001210541476546756600205510ustar00rootroot00000000000000"""Testing DTI.""" import numpy as np import numpy.testing as npt import dipy.core.gradients 
as grad import dipy.core.sphere as dps from dipy.core.subdivide_octahedron import create_unit_sphere from dipy.data import dsi_voxels, get_fnames, get_sphere from dipy.io.gradients import read_bvals_bvecs from dipy.io.image import load_nifti_data import dipy.reconst.dti as dti from dipy.reconst.dti import ( MIN_POSITIVE_SIGNAL, TensorModel, _decompose_tensor_nan, axial_diffusivity, color_fa, decompose_tensor, fractional_anisotropy, from_lower_triangular, geodesic_anisotropy, linearity, lower_triangular, mean_diffusivity, mode, ols_fit_tensor, planarity, radial_diffusivity, sphericity, trace, wls_fit_tensor, ) from dipy.reconst.weights_method import ( weights_method_nlls_m_est, weights_method_wls_m_est, ) from dipy.sims.voxel import single_tensor from dipy.testing.decorators import set_random_number_generator def test_roll_evals(): # Just making sure this never passes through weird_evals = np.array([1, 0.5]) npt.assert_raises(ValueError, dti._roll_evals, weird_evals) @set_random_number_generator() def test_tensor_algebra(rng): # Test that the computation of tensor determinant and norm is correct test_arr = rng.random((10, 3, 3)) t_det = dti.determinant(test_arr) t_norm = dti.norm(test_arr) for i, x in enumerate(test_arr): npt.assert_almost_equal(np.linalg.det(x), t_det[i]) npt.assert_almost_equal(np.linalg.norm(x), t_norm[i]) def test_odf_with_zeros(): fdata, fbval, fbvec = get_fnames(name="small_25") gtab = grad.gradient_table(fbval, bvecs=fbvec) data = load_nifti_data(fdata) dm = dti.TensorModel(gtab) df = dm.fit(data) df.evals[0, 0, 0] = np.array([0, 0, 0]) sphere = create_unit_sphere(recursion_level=4) odf = df.odf(sphere) npt.assert_equal(odf[0, 0, 0], np.zeros(sphere.vertices.shape[0])) def test_mode_with_isotropic(): # mode involves a division by norm, so may be problematic for isotropic # voxels. In the above test, 4 voxels are produced with isotropic tensors # in indexes [0, 0] and [0, 1] in which norm should give mode 0. Voxels # with indexes [1, 0] and [1, 1] should give mode of 1 and -1 respectively. 
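    # A minimal sketch (not part of the original test) of the tensor mode
    # definition that dti.mode evaluates: with
    #     A_tilde = A - (trace(A) / 3) * I
    # the deviatoric part of the tensor A,
    #     mode(A) = 3 * sqrt(6) * det(A_tilde / ||A_tilde||_F)
    # which gives +1 for linear (prolate) and -1 for planar (oblate)
    # anisotropy; for an isotropic tensor A_tilde is zero and the norm in the
    # denominator vanishes, which is the division this test exercises.
    # The helper name is illustrative and is never called by the assertions.
    def _mode_sketch(A):
        A_tilde = A - (np.trace(A) / 3.0) * np.eye(3)
        return 3.0 * np.sqrt(6.0) * np.linalg.det(A_tilde / np.linalg.norm(A_tilde))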
q_form = np.zeros((2, 2, 3, 3)) q_form[0, 1, 0, 0] = 1 q_form[0, 1, 1, 1] = 1 q_form[0, 1, 2, 2] = 1 q_form[1, 0, 0, 0] = 1 q_form[1, 0, 1, 1] = 1 q_form[1, 0, 2, 2] = 2 q_form[1, 1, 0, 0] = 1 q_form[1, 1, 1, 1] = 2 q_form[1, 1, 2, 2] = 2 npt.assert_array_almost_equal(mode(q_form), np.array([[0, 0], [1, -1]])) def test_tensor_model(): fdata, fbval, fbvec = get_fnames(name="small_25") data1 = load_nifti_data(fdata) gtab1 = grad.gradient_table(fbval, bvecs=fbvec) data2, gtab2 = dsi_voxels() for data, gtab in zip([data1, data2], [gtab1, gtab2]): dm = dti.TensorModel(gtab, fit_method="LS") dtifit = dm.fit(data[0, 0, 0]) npt.assert_equal(dtifit.fa < 0.9, True) dm = dti.TensorModel(gtab, fit_method="WLS") dtifit = dm.fit(data[0, 0, 0]) npt.assert_equal(dtifit.fa < 0.9, True) npt.assert_equal(dtifit.fa > 0, True) sphere = create_unit_sphere(recursion_level=4) npt.assert_equal(len(dtifit.odf(sphere)), len(sphere.vertices)) # Check that the multivoxel case works: dtifit = dm.fit(data) # Check that it works on signal that has already been normalized to S0: dm_to_relative = dti.TensorModel(gtab) if np.any(gtab.b0s_mask): relative_data = data[0, 0, 0] / np.mean(data[0, 0, 0, gtab.b0s_mask]) dtifit_to_relative = dm_to_relative.fit(relative_data) npt.assert_almost_equal( dtifit.fa[0, 0, 0], dtifit_to_relative.fa, decimal=3 ) # And smoke-test that all these operations return sensibly-shaped arrays: npt.assert_equal(dtifit.fa.shape, data.shape[:3]) npt.assert_equal(dtifit.ad.shape, data.shape[:3]) npt.assert_equal(dtifit.md.shape, data.shape[:3]) npt.assert_equal(dtifit.rd.shape, data.shape[:3]) npt.assert_equal(dtifit.trace.shape, data.shape[:3]) npt.assert_equal(dtifit.mode.shape, data.shape[:3]) npt.assert_equal(dtifit.linearity.shape, data.shape[:3]) npt.assert_equal(dtifit.planarity.shape, data.shape[:3]) npt.assert_equal(dtifit.sphericity.shape, data.shape[:3]) # Test for the shape of the mask npt.assert_raises(ValueError, dm.fit, np.ones((10, 10, 3)), mask=np.ones((3, 3))) # Make some synthetic data b0 = 1000.0 bvals, bvecs = read_bvals_bvecs(*get_fnames(name="55dir_grad")) gtab = grad.gradient_table_from_bvals_bvecs(bvals, bvecs) # The first b value is 0., so we take the second one: B = bvals[1] # Scale the eigenvalues and tensor by the B value so the units match D = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 1.0, -np.log(b0) * B]) / B evals = np.array([2.0, 1.0, 0.0]) / B md = evals.mean() tensor = from_lower_triangular(D) A_squiggle = tensor - (1 / 3.0) * np.trace(tensor) * np.eye(3) mode = 3 * np.sqrt(6) * np.linalg.det(A_squiggle / np.linalg.norm(A_squiggle)) evals_eigh, evecs_eigh = np.linalg.eigh(tensor) # Sort according to eigen-value from large to small: evecs = evecs_eigh[:, np.argsort(evals_eigh)[::-1]] # Check that eigenvalues and eigenvectors are properly sorted through # that previous operation: for i in range(3): npt.assert_array_almost_equal( np.dot(tensor, evecs[:, i]), evals[i] * evecs[:, i] ) # Design Matrix X = dti.design_matrix(gtab) # Signals Y = np.exp(np.dot(X, D)) npt.assert_almost_equal(Y[0], b0) Y.shape = (-1,) + Y.shape # Test fitting with different methods: for fit_method in ["OLS", "WLS", "NLLS"]: tensor_model = dti.TensorModel(gtab, fit_method=fit_method, return_S0_hat=True) tensor_fit = tensor_model.fit(Y) assert tensor_fit.model is tensor_model npt.assert_equal(tensor_fit.shape, Y.shape[:-1]) npt.assert_array_almost_equal(tensor_fit.evals[0], evals) npt.assert_array_almost_equal(tensor_fit.S0_hat, b0, decimal=3) # Test that the eigenvectors are correct, one-by-one: for i in 
range(3): # Eigenvectors have intrinsic sign ambiguity # (see # http://prod.sandia.gov/techlib/access-control.cgi/2007/076422.pdf) # so we need to allow for sign flips. One of the following should # always be true: npt.assert_( np.all(np.abs(tensor_fit.evecs[0][:, i] - evecs[:, i]) < 10e-6) or np.all(np.abs(-tensor_fit.evecs[0][:, i] - evecs[:, i]) < 10e-6) ) # We set a fixed tolerance of 10e-6, similar to array_almost_equal err_msg = "Calculation of tensor from Y does not compare to " err_msg += "analytical solution" npt.assert_array_almost_equal( tensor_fit.quadratic_form[0], tensor, err_msg=err_msg ) npt.assert_almost_equal(tensor_fit.md[0], md) npt.assert_array_almost_equal(tensor_fit.mode, mode, decimal=5) npt.assert_equal(tensor_fit.directions.shape[-2], 1) npt.assert_equal(tensor_fit.directions.shape[-1], 3) # Test error-handling: npt.assert_raises(ValueError, dti.TensorModel, gtab, fit_method="crazy_method") npt.assert_warns(UserWarning, dti.TensorModel, gtab, fit_method="NLLS", step=1e4) with npt.assert_warns(UserWarning): model = dti.TensorModel(gtab, fit_method="NLS", step=1e4) npt.assert_equal(model.kwargs.get("step", None), None) # Test custom fit tensor method try: model = dti.TensorModel(gtab, fit_method=lambda *args, **kwargs: 42) fit = model.fit_method() except Exception as exc: raise AssertionError( f"TensorModel should accept custom fit methods: {exc}" ) from exc assert fit == 42, f"Custom fit method for TensorModel returned {fit}." # Test multi-voxel data data = np.zeros((3, Y.shape[1])) # Normal voxel data[0] = Y # High diffusion voxel, all diffusing weighted signal equal to zero data[1, gtab.b0s_mask] = b0 data[1, ~gtab.b0s_mask] = 0 # Masked voxel, all data set to zero data[2] = 0.0 tensor_model = dti.TensorModel(gtab) fit = tensor_model.fit(data) npt.assert_array_almost_equal(fit[0].evals, evals) # Return S0_test tensor_model = dti.TensorModel(gtab, return_S0_hat=True) fit = tensor_model.fit(data) npt.assert_array_almost_equal(fit[0].evals, evals) npt.assert_array_almost_equal(fit[0].S0_hat, b0) # Evals should be high for high diffusion voxel assert all(fit[1].evals > evals[0] * 0.9) # Evals should be zero where data is masked npt.assert_array_almost_equal(fit[2].evals, 0.0) def test_indexing_on_tensor_fit(): params = np.zeros([2, 3, 4, 12]) fit = dti.TensorFit(None, params) # Should return a TensorFit of appropriate shape npt.assert_equal(fit.shape, (2, 3, 4)) fit1 = fit[0] npt.assert_equal(fit1.shape, (3, 4)) npt.assert_equal(type(fit1), dti.TensorFit) fit1 = fit[0, 0, 0] npt.assert_equal(fit1.shape, ()) npt.assert_equal(type(fit1), dti.TensorFit) fit1 = fit[[0], slice(None)] npt.assert_equal(fit1.shape, (1, 3, 4)) npt.assert_equal(type(fit1), dti.TensorFit) # Should raise an index error if too many indices are passed npt.assert_raises(IndexError, fit.__getitem__, (0, 0, 0, 0)) def test_fa_of_zero(): evals = np.zeros((4, 3)) fa = fractional_anisotropy(evals) npt.assert_array_equal(fa, 0) def test_ga_of_zero(): evals = np.zeros((4, 3)) ga = geodesic_anisotropy(evals) npt.assert_array_equal(ga, 0) def test_diffusivities(): psphere = get_sphere(name="symmetric362") bvecs = np.concatenate(([[0, 0, 0]], psphere.vertices)) bvals = np.zeros(len(bvecs)) + 1000 bvals[0] = 0 gtab = grad.gradient_table(bvals, bvecs=bvecs) mevals = np.array(([0.0015, 0.0003, 0.0001], [0.0015, 0.0003, 0.0003])) mevecs = [ np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]]), np.array([[0, 0, 1], [0, 1, 0], [1, 0, 0]]), ] S = single_tensor(gtab, 100, evals=mevals[0], evecs=mevecs[0], snr=None) dm = 
dti.TensorModel(gtab, fit_method="LS")
    dmfit = dm.fit(S)

    md = mean_diffusivity(dmfit.evals)
    Trace = trace(dmfit.evals)
    rd = radial_diffusivity(dmfit.evals)
    ad = axial_diffusivity(dmfit.evals)
    lin = linearity(dmfit.evals)
    plan = planarity(dmfit.evals)
    spher = sphericity(dmfit.evals)

    npt.assert_almost_equal(md, (0.0015 + 0.0003 + 0.0001) / 3)
    npt.assert_almost_equal(Trace, (0.0015 + 0.0003 + 0.0001))
    npt.assert_almost_equal(ad, 0.0015)
    npt.assert_almost_equal(rd, (0.0003 + 0.0001) / 2)
    npt.assert_almost_equal(lin, (0.0015 - 0.0003) / Trace)
    npt.assert_almost_equal(plan, 2 * (0.0003 - 0.0001) / Trace)
    npt.assert_almost_equal(spher, (3 * 0.0001) / Trace)


def test_color_fa():
    data, gtab = dsi_voxels()
    dm = dti.TensorModel(gtab, fit_method="LS")
    dmfit = dm.fit(data)
    fa = fractional_anisotropy(dmfit.evals)

    fa = np.ones((3, 3, 3))
    # evecs should be of shape fa.shape + (3, 3)
    evecs = np.zeros(fa.shape + (3, 2))
    npt.assert_raises(ValueError, color_fa, fa, evecs)

    evecs = np.zeros(fa.shape + (3, 3))
    evecs[..., :, :] = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]])
    npt.assert_equal(fa.shape, evecs[..., 0, 0].shape)
    npt.assert_equal((3, 3), evecs.shape[-2:])

    # 3D test case
    fa = np.ones((3, 3, 3))
    evecs = np.zeros(fa.shape + (3, 3))
    evecs[..., :, :] = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]])
    cfa = color_fa(fa, evecs)
    cfa_truth = np.array([1, 0, 0])
    true_cfa = np.reshape(np.tile(cfa_truth, 27), [3, 3, 3, 3])
    npt.assert_array_equal(cfa, true_cfa)

    # 2D test case
    fa = np.ones((3, 3))
    evecs = np.zeros(fa.shape + (3, 3))
    evecs[..., :, :] = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]])
    cfa = color_fa(fa, evecs)
    cfa_truth = np.array([1, 0, 0])
    true_cfa = np.reshape(np.tile(cfa_truth, 9), [3, 3, 3])
    npt.assert_array_equal(cfa, true_cfa)

    # 1D test case
    fa = np.ones(3)
    evecs = np.zeros(fa.shape + (3, 3))
    evecs[..., :, :] = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]])
    cfa = color_fa(fa, evecs)
    cfa_truth = np.array([1, 0, 0])
    true_cfa = np.reshape(np.tile(cfa_truth, 3), [3, 3])
    npt.assert_array_equal(cfa, true_cfa)


def test_wls_and_ls_fit():
    """
    Tests the WLS and LS fitting functions to see if they return the correct
    eigenvalues and eigenvectors.

    Uses data/55dir_grad as the gradient table and 3by3by56.nii as the data.
""" # Defining Test Voxel (avoid nibabel dependency) ### # Recall: D = [Dxx,Dyy,Dzz,Dxy,Dxz,Dyz,log(S_0)] and D ~ 10^-4 mm^2 /s b0 = 1000.0 bval, bvec = read_bvals_bvecs(*get_fnames(name="55dir_grad")) B = bval[1] # Scale the eigenvalues and tensor by the B value so the units match D = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 1.0, -np.log(b0) * B]) / B evals = np.array([2.0, 1.0, 0.0]) / B md = evals.mean() tensor = from_lower_triangular(D) # Design Matrix gtab = grad.gradient_table(bval, bvecs=bvec) X = dti.design_matrix(gtab) # Signals Y = np.exp(np.dot(X, D)) npt.assert_almost_equal(Y[0], b0) Y.shape = (-1,) + Y.shape # Testing WLS Fit on single voxel # If you do something wonky (passing min_signal<0), you should get an # error: npt.assert_raises(ValueError, TensorModel, gtab, fit_method="WLS", min_signal=-1) # Estimate tensor from test signals model = TensorModel(gtab, fit_method="WLS", return_S0_hat=True) tensor_est = model.fit(Y) npt.assert_equal(tensor_est.shape, Y.shape[:-1]) npt.assert_array_almost_equal(tensor_est.evals[0], evals) npt.assert_array_almost_equal( tensor_est.quadratic_form[0], tensor, err_msg="Calculation of tensor from Y does " "not compare to analytical solution", ) npt.assert_almost_equal(tensor_est.md[0], md) npt.assert_array_almost_equal(tensor_est.S0_hat[0], b0, decimal=3) # Test that we can fit a single voxel's worth of data (a 1d array) y = Y[0] tensor_est = model.fit(y) npt.assert_equal(tensor_est.shape, ()) npt.assert_array_almost_equal(tensor_est.evals, evals) npt.assert_array_almost_equal(tensor_est.quadratic_form, tensor) npt.assert_almost_equal(tensor_est.md, md) npt.assert_array_almost_equal(tensor_est.lower_triangular(b0=b0), D) # Test using fit_method='LS' model = TensorModel(gtab, fit_method="LS") tensor_est = model.fit(y) npt.assert_equal(tensor_est.shape, ()) npt.assert_array_almost_equal(tensor_est.evals, evals) npt.assert_array_almost_equal(tensor_est.quadratic_form, tensor) npt.assert_almost_equal(tensor_est.md, md) npt.assert_array_almost_equal(tensor_est.lower_triangular(b0=b0), D) npt.assert_array_almost_equal(tensor_est.linearity, linearity(evals)) npt.assert_array_almost_equal(tensor_est.planarity, planarity(evals)) npt.assert_array_almost_equal(tensor_est.sphericity, sphericity(evals)) # testing that leverages are returned on request for fit_method in ["LS", "WLS"]: # Estimate tensor from test signals, not returning leverages model = TensorModel( gtab, fit_method=fit_method, return_S0_hat=True, return_leverages=False ) tensor_est = model.fit(Y) npt.assert_equal(tensor_est.model.extra, {}) # Estimate tensor from test signals, returning leverages model = TensorModel( gtab, fit_method=fit_method, return_S0_hat=True, return_leverages=True ) tensor_est = model.fit(Y) npt.assert_equal(tensor_est.model.extra["leverages"].shape, Y.shape) # test value of leverage is 7 (in this case, for DTI) leverages = tensor_est.model.extra["leverages"] npt.assert_almost_equal(leverages.sum(axis=1), np.array([7.0])) # Test wls given S^2 weights argument, matches default wls design_matrix = dti.design_matrix(gtab) YN = Y + 10 * np.random.normal(size=Y.shape) # error or weights irrelevant YN[YN < MIN_POSITIVE_SIGNAL] = MIN_POSITIVE_SIGNAL # wls calculation D_w, _ = wls_fit_tensor(design_matrix, YN, return_lower_triangular=True) # wls calculation, by calculating S^2 from OLS fit, then passing weights D_o, _ = ols_fit_tensor(design_matrix, YN, return_lower_triangular=True) pred_s = np.exp(np.dot(design_matrix, D_o.T)).T D_W, _ = wls_fit_tensor( design_matrix, YN, 
return_lower_triangular=True, weights=pred_s**2 ) # weights match WLS default npt.assert_almost_equal(D_w, D_W) # wls calculation, but passing incorrect weights D_W, _ = wls_fit_tensor( design_matrix, YN, return_lower_triangular=True, weights=pred_s**1 ) npt.assert_raises(AssertionError, npt.assert_array_equal, D_w, D_W) # Test that wls implementation is correct, by comparison with result here W = np.diag(pred_s.squeeze() ** 2) AT_W = np.dot(design_matrix.T, W) inv_AT_W_A = np.linalg.pinv(np.dot(AT_W, design_matrix)) AT_W_LS = np.dot(AT_W, np.log(YN).squeeze()) result = np.dot(inv_AT_W_A, AT_W_LS) npt.assert_almost_equal(D_w.squeeze(), result) def test_rwls_rnlls_irls_fit(): # Recall: D = [Dxx,Dyy,Dzz,Dxy,Dxz,Dyz,log(S_0)] and D ~ 10^-4 mm^2 /s b0 = 1000.0 bval, bvec = read_bvals_bvecs(*get_fnames(name="55dir_grad")) B = bval[1] # Scale the eigenvalues and tensor by the B value so the units match D = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 1.0, -np.log(b0) * B]) / B evals = np.array([2.0, 1.0, 0.0]) / B md = evals.mean() tensor = from_lower_triangular(D) # Design Matrix gtab = grad.gradient_table(bval, bvecs=bvec) X = dti.design_matrix(gtab) # Signals Y = np.exp(np.dot(X, D)) npt.assert_almost_equal(Y[0], b0) Y.shape = (-1,) + Y.shape YN = Y + 1 * np.random.normal(size=Y.shape) # error, or weights irrelevant YN[0, -1] *= 10 # note 1D array! for a, ar in zip(["WLS", "NLLS"], ["RWLS", "RNLLS"]): # Estimate tensor from test signals model = TensorModel(gtab, fit_method=a, return_S0_hat=True) tensor_est = model.fit(YN) model = TensorModel(gtab, fit_method=ar, return_S0_hat=True, num_iter=10) tensor_est_R = model.fit(YN) npt.assert_array_less( np.linalg.norm(tensor_est_R.evals[0] - evals), np.linalg.norm(tensor_est.evals[0] - evals), ) npt.assert_array_less( np.linalg.norm(tensor_est_R.quadratic_form[0] - tensor), np.linalg.norm(tensor_est.quadratic_form[0] - tensor), ) npt.assert_array_less( np.linalg.norm(tensor_est_R.md[0] - md), np.linalg.norm(tensor_est.md[0] - md), ) # error is often almost exactly the same, so this test sometimes fails # npt.assert_array_less(np.linalg.norm(tensor_est_R.S0_hat[0] - b0), # np.linalg.norm(tensor_est.S0_hat[0] - b0)) # test RWLS/RNLLS implemented explicitly via IRLS function for wm, fit_type in zip( [weights_method_wls_m_est, weights_method_nlls_m_est], ["WLS", "NLLS"] ): # IRLS implementation model = TensorModel( gtab, fit_method="IRLS", return_S0_hat=True, weights_method=wm, fit_type=fit_type, num_iter=10, ) tensor_est_R1 = model.fit(YN) npt.assert_equal(tensor_est_R1.model.extra["robust"].shape, YN.shape) npt.assert_equal(tensor_est_R1.model.extra["robust"][0, -1], 0) # 'shortcut' method RWLS/RNLLS model = TensorModel( gtab, fit_method="R" + fit_type, return_S0_hat=False, # NOTE increase coverage num_iter=10, ) tensor_est_R2 = model.fit(YN) npt.assert_equal(tensor_est_R2.model.extra["robust"].shape, YN.shape) npt.assert_equal(tensor_est_R2.model.extra["robust"][0, -1], 0) npt.assert_almost_equal(tensor_est_R1.evals[0], tensor_est_R2.evals[0]) npt.assert_almost_equal( tensor_est_R1.quadratic_form[0], tensor_est_R2.quadratic_form[0] ) # test that error is raised if not enough data model = TensorModel(gtab, fit_method="RWLS", num_iter=10) npt.assert_raises(ValueError, model.fit, YN[:, 0:3]) # force use of iter_fit_tensor without making a large test model = TensorModel(gtab, fit_method="RWLS", return_S0_hat=True, num_iter=10) tensor_est_R2 = model.fit(np.repeat(YN, repeats=1e4 + 1, axis=0)) def test_masked_array_with_tensor(): data = np.ones((2, 4, 56)) mask = 
np.array([[True, False, False, True], [True, False, True, False]]) bval, bvec = read_bvals_bvecs(*get_fnames(name="55dir_grad")) gtab = grad.gradient_table_from_bvals_bvecs(bval, bvec) # test self.extra with mask by using return_leverages tensor_model = TensorModel(gtab, return_leverages=True) tensor = tensor_model.fit(data, mask=mask) npt.assert_equal(tensor.shape, (2, 4)) npt.assert_equal(tensor.fa.shape, (2, 4)) npt.assert_equal(tensor.evals.shape, (2, 4, 3)) npt.assert_equal(tensor.evecs.shape, (2, 4, 3, 3)) tensor = tensor[0] npt.assert_equal(tensor.shape, (4,)) npt.assert_equal(tensor.fa.shape, (4,)) npt.assert_equal(tensor.evals.shape, (4, 3)) npt.assert_equal(tensor.evecs.shape, (4, 3, 3)) tensor = tensor[0] npt.assert_equal(tensor.shape, ()) npt.assert_equal(tensor.fa.shape, ()) npt.assert_equal(tensor.evals.shape, (3,)) npt.assert_equal(tensor.evecs.shape, (3, 3)) npt.assert_equal(type(tensor.model_params), np.ndarray) def test_fit_method_error(): bval, bvec = read_bvals_bvecs(*get_fnames(name="55dir_grad")) gtab = grad.gradient_table_from_bvals_bvecs(bval, bvec) # This should work (smoke-testing!): TensorModel(gtab, fit_method="WLS") # This should raise an error because there is no such fit_method npt.assert_raises(ValueError, TensorModel, gtab, min_signal=1e-9, fit_method="s") def test_lower_triangular(): tensor = np.arange(9).reshape((3, 3)) D = lower_triangular(tensor) npt.assert_array_equal(D, [0, 3, 4, 6, 7, 8]) D = lower_triangular(tensor, b0=1) npt.assert_array_equal(D, [0, 3, 4, 6, 7, 8, 0]) npt.assert_raises(ValueError, lower_triangular, np.zeros((2, 3))) shape = (4, 5, 6) many_tensors = np.empty(shape + (3, 3)) many_tensors[:] = tensor result = np.empty(shape + (6,)) result[:] = [0, 3, 4, 6, 7, 8] D = lower_triangular(many_tensors) npt.assert_array_equal(D, result) D = lower_triangular(many_tensors, b0=1) result = np.empty(shape + (7,)) result[:] = [0, 3, 4, 6, 7, 8, 0] npt.assert_array_equal(D, result) def test_from_lower_triangular(): result = np.array([[0, 1, 3], [1, 2, 4], [3, 4, 5]]) D = np.arange(7) tensor = from_lower_triangular(D) npt.assert_array_equal(tensor, result) result = result * np.ones((5, 4, 1, 1)) D = D * np.ones((5, 4, 1)) tensor = from_lower_triangular(D) npt.assert_array_equal(tensor, result) def test_all_constant(): bvals, bvecs = read_bvals_bvecs(*get_fnames(name="55dir_grad")) gtab = grad.gradient_table_from_bvals_bvecs(bvals, bvecs) fit_methods = ["LS", "OLS", "NNLS", "RESTORE"] for _ in fit_methods: dm = dti.TensorModel(gtab) npt.assert_almost_equal(dm.fit(100 * np.ones(bvals.shape[0])).fa, 0) # Doesn't matter if the signal is smaller than 1: npt.assert_almost_equal(dm.fit(0.4 * np.ones(bvals.shape[0])).fa, 0) def test_all_zeros(): bvals, bvecs = read_bvals_bvecs(*get_fnames(name="55dir_grad")) gtab = grad.gradient_table_from_bvals_bvecs(bvals, bvecs) fit_methods = ["LS", "OLS", "NNLS", "RESTORE"] for _ in fit_methods: dm = dti.TensorModel(gtab) evals = dm.fit(np.zeros(bvals.shape[0])).evals npt.assert_array_almost_equal(evals, 0) def test_mask(): data, gtab = dsi_voxels() for fit_type in ["LS", "NLLS"]: dm = dti.TensorModel(gtab, fit_method=fit_type) mask = np.zeros(data.shape[:-1], dtype=bool) mask[0, 0, 0] = True dtifit = dm.fit(data) dtifit_w_mask = dm.fit(data, mask=mask) # Without a mask it has some value assert not np.isnan(dtifit.fa[0, 0, 0]) # Where mask is False, evals, evecs and fa should all be 0 npt.assert_array_equal(dtifit_w_mask.evals[~mask], 0) npt.assert_array_equal(dtifit_w_mask.evecs[~mask], 0) 
        npt.assert_array_equal(dtifit_w_mask.fa[~mask], 0)
        # Except for the one voxel that was selected by the mask:
        npt.assert_almost_equal(dtifit_w_mask.fa[0, 0, 0], dtifit.fa[0, 0, 0])

        # Test with returning S0_hat
        dm = dti.TensorModel(gtab, fit_method=fit_type, return_S0_hat=True)
        mask = np.zeros(data.shape[:-1], dtype=bool)
        mask[0, 0, 0] = True
        for mask_more in [True, False]:
            if mask_more:
                mask[0, 0, 1] = True
            dtifit = dm.fit(data)
            dtifit_w_mask = dm.fit(data, mask=mask)
            # Without a mask it has some value
            assert not np.isnan(dtifit.fa[0, 0, 0])
            # Where mask is False, evals, evecs and fa should all be 0
            npt.assert_array_equal(dtifit_w_mask.evals[~mask], 0)
            npt.assert_array_equal(dtifit_w_mask.evecs[~mask], 0)
            npt.assert_array_equal(dtifit_w_mask.fa[~mask], 0)
            npt.assert_array_equal(dtifit_w_mask.S0_hat[~mask], 0)
            # Except for the one voxel that was selected by the mask:
            npt.assert_almost_equal(dtifit_w_mask.fa[0, 0, 0], dtifit.fa[0, 0, 0])
            npt.assert_almost_equal(
                dtifit_w_mask.S0_hat[0, 0, 0], dtifit.S0_hat[0, 0, 0]
            )


@set_random_number_generator()
def test_nnls_jacobian_func(rng):
    b0 = 1000.0
    bval, bvecs = read_bvals_bvecs(*get_fnames(name="55dir_grad"))
    gtab = grad.gradient_table(bval, bvecs=bvecs)
    B = bval[1]

    # Scale the eigenvalues and tensor by the B value so the units match
    D_orig = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 1.0, -np.log(b0) * B]) / B

    # Design Matrix
    X = dti.design_matrix(gtab)

    # Signals
    Y = np.exp(np.dot(X, D_orig))

    scale = 10
    error = rng.normal(scale=scale, size=Y.shape)
    Y = Y + error

    nlls = dti._NllsHelper()
    sigma_scalar = 1.4826 * np.median(np.abs(error - np.median(error)))
    sigma_array = np.full_like(Y, sigma_scalar)

    for sigma in [sigma_scalar, sigma_array]:
        weights = 1 / sigma**2
        for D in [D_orig, np.zeros_like(D_orig)]:
            # Test Jacobian at D
            args = [D, X, Y, weights]
            # 1. call 'err_func', to set internal stuff in the class
            nlls.err_func(*args)
            # 2. call 'jacobian_func', corresponds to the last err_func call
            # analytical = nlls.jacobian_func(*args)

            # test analytical gradient (needs to be performed per data-point)
            for i in range(len(X)):
                args = [X[i], Y[i], weights]
                # FIXME: this is sometimes failing in tests on Github
                # approx = opt.approx_fprime(D, nlls.err_func, 1e-8, *args)
                #
                # approx_fprime wants nlls.err_func to return a scalar
                # value, which it ought to do if called with a single
                # data point (otherwise, it returns an array, consistent
                # with scipy.opt.leastsq) but something seems broken in
                # some tests, so let's make a function that ensures a
                # scalar is returned.
                # Issue for this *test*, not for
                # nlls.err_func, which works correctly
                # def ef(x):
                #     tmp = nlls.err_func(x, *args)
                #     return tmp if np.isscalar(tmp) else tmp[0]
                # NOTE: approx_fprime not accurate enough to pass this test,
                # even though it will pass if using autograd code
                # to ensure a truly accurate derivative of nlls.err_func
                # approx = opt.approx_fprime(D, ef, 1e-8)
                # assert np.allclose(approx, analytical[i])
                assert True


def test_nlls_fit_tensor():
    """Test the implementation of NLLS and RESTORE"""
    b0 = 1000.0
    bvals, bvecs = read_bvals_bvecs(*get_fnames(name="55dir_grad"))
    gtab = grad.gradient_table(bvals, bvecs=bvecs)
    B = bvals[1]

    # Scale the eigenvalues and tensor by the B value so the units match
    D = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 1.0, -np.log(b0) * B]) / B
    evals = np.array([2.0, 1.0, 0.0]) / B
    md = evals.mean()
    tensor = from_lower_triangular(D)

    # Design Matrix
    X = dti.design_matrix(gtab)

    # Signals
    Y = np.exp(np.dot(X, D))
    Y.shape = (-1,) + Y.shape

    # Estimate tensor from test signals and compare against expected result
    # using non-linear least squares:
    tensor_model = dti.TensorModel(gtab, fit_method="NLLS")
    tensor_est = tensor_model.fit(Y)
    npt.assert_equal(tensor_est.shape, Y.shape[:-1])
    npt.assert_array_almost_equal(tensor_est.evals[0], evals)
    npt.assert_array_almost_equal(tensor_est.quadratic_form[0], tensor)
    npt.assert_almost_equal(tensor_est.md[0], md)

    # You can also do this without the Jacobian (though it's slower):
    tensor_model = dti.TensorModel(gtab, fit_method="NLLS", jac=False)
    tensor_est = tensor_model.fit(Y)
    npt.assert_equal(tensor_est.shape, Y.shape[:-1])
    npt.assert_array_almost_equal(tensor_est.evals[0], evals)
    npt.assert_array_almost_equal(tensor_est.quadratic_form[0], tensor)
    npt.assert_almost_equal(tensor_est.md[0], md)

    # Using weights:
    weights = 2 * np.ones_like(Y, dtype=np.float32)
    tensor_model = dti.TensorModel(gtab, fit_method="NLLS", weights=weights)
    tensor_est = tensor_model.fit(Y)
    npt.assert_equal(tensor_est.shape, Y.shape[:-1])
    npt.assert_array_almost_equal(tensor_est.evals[0], evals)
    npt.assert_array_almost_equal(tensor_est.quadratic_form[0], tensor)
    npt.assert_almost_equal(tensor_est.md[0], md)

    # Use NLLS with some actual 4D data:
    data, bvals, bvecs = get_fnames(name="small_25")
    gtab = grad.gradient_table(bvals, bvecs=bvecs)
    tm1 = dti.TensorModel(gtab, fit_method="NLLS")
    dd = load_nifti_data(data)
    tf1 = tm1.fit(dd)
    tm2 = dti.TensorModel(gtab)
    tf2 = tm2.fit(dd)
    npt.assert_array_almost_equal(tf1.fa, tf2.fa, decimal=1)

    # Reduce amount of data, to cause NLLS to fail
    gtab_less = grad.gradient_table(gtab.bvals[0:3], bvecs=gtab.bvecs[0:3, :])
    Y_less = Y[..., 0:3].copy()

    # Test warning for failure of NLLS method, resort to OLS result
    # (reason for failure: too few data points for NLLS)
    tensor_model = dti.TensorModel(gtab_less, fit_method="NLLS", return_S0_hat=True)
    tmf = npt.assert_warns(UserWarning, tensor_model.fit, Y_less)

    # Test fail_is_nan=True, failed NLLS method gives NaN
    tensor_model = dti.TensorModel(
        gtab_less, fit_method="NLLS", return_S0_hat=True, fail_is_nan=True
    )
    tmf = npt.assert_warns(UserWarning, tensor_model.fit, Y_less)
    npt.assert_equal(tmf[0].S0_hat, np.nan)


def test_restore():
    """Test the implementation of the RESTORE algorithm"""
    b0 = 1000.0
    bval, bvecs = read_bvals_bvecs(*get_fnames(name="55dir_grad"))
    gtab = grad.gradient_table(bval, bvecs=bvecs)
    B = bval[1]

    # Scale the eigenvalues and tensor by the B value so the units match
    D = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 1.0, -np.log(b0) * B]) / B
    evals = np.array([2.0, 1.0, 0.0]) / B
    tensor = from_lower_triangular(D)

    # Design Matrix
    X = dti.design_matrix(gtab)

    # Signals
    Y = np.exp(np.dot(X, D))
    Y = np.vstack([Y[None, :], Y[None, :]])  # two voxels

    for drop_this in range(1, Y.shape[-1]):
        for jac in [True, False]:
            # RESTORE estimates should be robust to dropping this data point:
            this_y = Y.copy()
            this_y[:, drop_this] = 1.0
            for sigma in [
                67.0,
                np.array([67.0]),
                np.ones(this_y.shape[-1]) * 67.0,
                np.array([66.0, 67.0]).reshape((-1, 1)),
            ]:
                tensor_model = dti.TensorModel(
                    gtab, fit_method="restore", jac=jac, sigma=sigma
                )
                tensor_est = tensor_model.fit(this_y)
                npt.assert_array_almost_equal(tensor_est.evals[0], evals, decimal=3)
                npt.assert_array_almost_equal(
                    tensor_est.quadratic_form[0], tensor, decimal=3
                )

    # test recording of robust signals
    npt.assert_equal(tensor_est.model.extra["robust"].shape, Y.shape)

    # If sigma is very small, it still needs to work:
    tensor_model = dti.TensorModel(gtab, fit_method="restore", sigma=0.0001)
    tensor_model.fit(Y.copy())

    # If sigma is None, it still needs to work (it is estimated from the data):
    tensor_model = dti.TensorModel(gtab, fit_method="restore", sigma=None)
    tensor_model.fit(Y.copy() + np.random.normal(size=Y.shape))

    # Test return_S0_hat
    tensor_model = dti.TensorModel(
        gtab, fit_method="restore", sigma=0.0001, return_S0_hat=True
    )
    tmf = tensor_model.fit(Y.copy())
    npt.assert_almost_equal(tmf[0].S0_hat, b0)

    # Test warning for failure of NLLS method, resort to OLS result
    # (reason for failure: too few data points for NLLS, due to negative sigma)
    tensor_model = dti.TensorModel(
        gtab, fit_method="restore", sigma=-1.0, return_S0_hat=True
    )
    tmf = npt.assert_warns(UserWarning, tensor_model.fit, Y.copy())

    # Test fail_is_nan=True, failed NLLS method gives NaN
    tensor_model = dti.TensorModel(
        gtab, fit_method="restore", sigma=-1.0, return_S0_hat=True, fail_is_nan=True
    )
    tmf = npt.assert_warns(UserWarning, tensor_model.fit, Y.copy())
    npt.assert_equal(tmf[0].S0_hat, np.nan)
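# A minimal usage sketch (hypothetical helper, not exercised by the suite):
# typical RESTORE fitting as tested above. `sigma` may be a scalar, an array,
# or None, in which case the noise level is estimated from the data.
def _sketch_restore_usage(gtab, dwi):
    model = dti.TensorModel(gtab, fit_method="restore", sigma=None)
    fit = model.fit(dwi)
    # Robust tensor-derived scalar maps, e.g. fractional anisotropy
    return fit.fa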
def test_adc():
    """Test the implementation of the calculation of the apparent diffusion
    coefficient"""
    data, gtab = dsi_voxels()
    dm = dti.TensorModel(gtab, fit_method="LS")
    mask = np.zeros(data.shape[:-1], dtype=bool)
    mask[0, 0, 0] = True
    dtifit = dm.fit(data)
    # The ADC in the principal diffusion direction should be equal to the AD
    # in each voxel:
    pdd0 = dtifit.evecs[0, 0, 0, 0]
    sphere_pdd0 = dps.Sphere(x=pdd0[0], y=pdd0[1], z=pdd0[2])
    npt.assert_array_almost_equal(
        dtifit.adc(sphere_pdd0)[0, 0, 0], dtifit.ad[0, 0, 0], decimal=4
    )

    # Test that it works for cases in which the data is 1D
    dtifit = dm.fit(data[0, 0, 0])
    sphere_pdd0 = dps.Sphere(x=pdd0[0], y=pdd0[1], z=pdd0[2])
    npt.assert_array_almost_equal(dtifit.adc(sphere_pdd0), dtifit.ad, decimal=4)


def test_predict():
    """Test model prediction API"""
    psphere = get_sphere(name="symmetric362")
    bvecs = np.concatenate(([[1, 0, 0]], psphere.vertices))
    bvals = np.zeros(len(bvecs)) + 1000
    bvals[0] = 0
    gtab = grad.gradient_table(bvals, bvecs=bvecs)
    mevals = np.array(([0.0015, 0.0003, 0.0001], [0.0015, 0.0003, 0.0003]))
    mevecs = [
        np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]]),
        np.array([[0, 0, 1], [0, 1, 0], [1, 0, 0]]),
    ]
    S = single_tensor(gtab, 100, evals=mevals[0], evecs=mevecs[0], snr=None)

    dm = dti.TensorModel(gtab, fit_method="LS", return_S0_hat=True)
    dmfit = dm.fit(S)
    npt.assert_array_almost_equal(dmfit.predict(gtab, S0=100), S)
    npt.assert_array_almost_equal(dmfit.predict(gtab), S)
    npt.assert_array_almost_equal(dm.predict(dmfit.model_params, S0=100), S)

    fdata, fbvals, fbvecs = get_fnames()
    data = load_nifti_data(fdata)
    # Make the data cube a bit larger:
    data = np.tile(data.T, 2).T
    gtab = grad.gradient_table(fbvals, bvecs=fbvecs)
    dtim = dti.TensorModel(gtab)
    dtif = dtim.fit(data)
    S0 = np.mean(data[..., gtab.b0s_mask], -1)
    p = dtif.predict(gtab, S0=S0)
    npt.assert_equal(p.shape, data.shape)

    # Predict using S0_hat:
    dtim = dti.TensorModel(gtab, return_S0_hat=True)
    dtif = dtim.fit(data)
    p = dtif.predict(gtab)
    npt.assert_equal(p.shape, data.shape)
    p = dtif.predict(gtab, S0=S0)
    npt.assert_equal(p.shape, data.shape)

    # Test iter_fit_tensor with S0_hat
    dtim = dti.TensorModel(gtab, step=2, return_S0_hat=True)
    dtif = dtim.fit(data)
    S0 = np.mean(data[..., gtab.b0s_mask], -1)
    p = dtif.predict(gtab, S0=S0)
    npt.assert_equal(p.shape, data.shape)

    # Use a smaller step in predicting:
    dtim = dti.TensorModel(gtab, step=2)
    dtif = dtim.fit(data)
    S0 = np.mean(data[..., gtab.b0s_mask], -1)
    p = dtif.predict(gtab, S0=S0)
    npt.assert_equal(p.shape, data.shape)
    # And with a scalar S0:
    S0 = 1
    p = dtif.predict(gtab, S0=S0)
    npt.assert_equal(p.shape, data.shape)
    # Assign the step through kwarg:
    p = dtif.predict(gtab, S0=S0, step=1)
    npt.assert_equal(p.shape, data.shape)
    # And without S0:
    p = dtif.predict(gtab, step=1)
    npt.assert_equal(p.shape, data.shape)


def test_eig_from_lo_tri():
    psphere = get_sphere(name="symmetric362")
    bvecs = np.concatenate(([[0, 0, 0]], psphere.vertices))
    bvals = np.zeros(len(bvecs)) + 1000
    bvals[0] = 0
    gtab = grad.gradient_table(bvals, bvecs=bvecs)
    mevals = np.array(([0.0015, 0.0003, 0.0001], [0.0015, 0.0003, 0.0003]))
    mevecs = [
        np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]]),
        np.array([[0, 0, 1], [0, 1, 0], [1, 0, 0]]),
    ]
    S = np.array(
        [
            [
                single_tensor(gtab, 100, evals=mevals[0], evecs=mevecs[0], snr=None),
                single_tensor(gtab, 100, evals=mevals[0], evecs=mevecs[0], snr=None),
            ]
        ]
    )

    dm = dti.TensorModel(gtab, fit_method="LS")
    dmfit = dm.fit(S)

    lo_tri = lower_triangular(dmfit.quadratic_form)
    npt.assert_array_almost_equal(dti.eig_from_lo_tri(lo_tri), dmfit.model_params)


def test_min_signal_alone():
    fdata, fbvals, fbvecs = get_fnames()
    data = load_nifti_data(fdata)
    gtab = grad.gradient_table(fbvals, bvecs=fbvecs)

    idx = tuple(np.array(np.where(data == np.min(data)))[:-1, 0])
    ten_model = dti.TensorModel(gtab)
    fit_alone = ten_model.fit(data[idx])
    fit_together = ten_model.fit(data)
    npt.assert_array_almost_equal(
        fit_together.model_params[idx], fit_alone.model_params, decimal=12
    )


def test_decompose_tensor_nan():
    D_fine = np.array([1.7e-3, 0.0, 0.3e-3, 0.0, 0.0, 0.2e-3])
    D_alter = np.array([1.6e-3, 0.0, 0.4e-3, 0.0, 0.0, 0.3e-3])
    D_nan = np.nan * np.ones(6)

    lref, vref = decompose_tensor(from_lower_triangular(D_fine))
    lfine, vfine = _decompose_tensor_nan(
        from_lower_triangular(D_fine), from_lower_triangular(D_alter)
    )
    npt.assert_array_almost_equal(lfine, np.array([1.7e-3, 0.3e-3, 0.2e-3]))
    npt.assert_array_almost_equal(vfine, vref)

    lref, vref = decompose_tensor(from_lower_triangular(D_alter))
    lalter, valter = _decompose_tensor_nan(
        from_lower_triangular(D_nan), from_lower_triangular(D_alter)
    )
    npt.assert_array_almost_equal(lalter, np.array([1.6e-3, 0.4e-3, 0.3e-3]))
    npt.assert_array_almost_equal(valter, vref)


def test_design_matrix_lte():
    _, fbval, fbvec = get_fnames(name="small_25")
    gtab_btens_none = grad.gradient_table(fbval, bvecs=fbvec)
    gtab_btens_lte = grad.gradient_table(fbval, bvecs=fbvec, btens="LTE")

    B_btens_none = dti.design_matrix(gtab_btens_none)
    B_btens_lte = dti.design_matrix(gtab_btens_lte)
    npt.assert_array_almost_equal(B_btens_none, B_btens_lte, decimal=1)


def test_extra_return():
    """Test that the 'extra' dictionary returned by the fitting functions is
    populated properly.

    Uses data/55dir_grad as the gradient table and 3by3by56.nii as the data.
    """
    b0 = 1000.0
    bval, bvecs = read_bvals_bvecs(*get_fnames(name="55dir_grad"))
    gtab = grad.gradient_table(bval, bvecs=bvecs)
    B = bval[1]

    # Scale the eigenvalues and tensor by the B value so the units match
    D = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 1.0, -np.log(b0) * B]) / B

    # Design Matrix
    X = dti.design_matrix(gtab)

    # Signals
    Y = np.exp(np.dot(X, D))
    Y = np.vstack([Y[None, :], Y[None, :]])  # two voxels

    for drop_this in range(1, 3):  # Y.shape[-1]):
        # test specific extra from specific methods
        for method in ["restore"]:
            this_y = Y.copy()
            this_y[:, drop_this] = 1.0
            sigma = 0.0001
            if method == "restore":
                tensor_model = dti.TensorModel(gtab, fit_method=method, sigma=sigma)
            tensor_est = tensor_model.fit(this_y)
            npt.assert_equal(tensor_est.model.extra["robust"].shape, Y.shape)


dipy-1.11.0/dipy/reconst/tests/test_eudx_dg.py

import warnings

import numpy as np
import numpy.testing as npt

from dipy.direction.peaks import default_sphere, peaks_from_model
from dipy.reconst.shm import descoteaux07_legacy_msg
from dipy.testing.decorators import set_random_number_generator


@set_random_number_generator()
def test_EuDXDirectionGetter(rng):
    class SillyModel:
        def fit(self, data, mask=None):
            return SillyFit(self)

    class SillyFit:
        def __init__(self, model):
            self.model = model

        def odf(self, sphere):
            odf = np.zeros(sphere.theta.shape)
            r = rng.integers(0, len(odf))
            odf[r] = 1
            return odf

    def get_direction(dg, point, direction):
        newdir = direction.copy()
        state = dg.get_direction(point, newdir)
        return state, np.array(newdir)

    data = rng.random((3, 4, 5, 2))
    with warnings.catch_warnings():
        warnings.filterwarnings(
            "ignore",
            message=descoteaux07_legacy_msg,
            category=PendingDeprecationWarning,
        )
        peaks = peaks_from_model(
            SillyModel(),
            data,
            default_sphere,
            relative_peak_threshold=0.5,
            min_separation_angle=25,
        )
    peaks._initialize()

    up = np.zeros(3)
    up[2] = 1.0
    down = -up

    for i in range(3 - 1):
        for j in range(4 - 1):
            for k in range(5 - 1):
                point = np.array([i, j, k], dtype=float)

                # Test that the angle threshold rejects points
                peaks.ang_thr = 0.0
                state, nd = get_direction(peaks, point, up)
                npt.assert_equal(state, 1)

                # Here we leverage the fact that we know Hemispheres project
                # all their vertices into the z >= 0 half of the sphere.
                peaks.ang_thr = 90.0
                state, nd = get_direction(peaks, point, up)
                npt.assert_equal(state, 0)
                expected_dir = peaks.peak_dirs[i, j, k, 0]
                npt.assert_array_almost_equal(nd, expected_dir)
                state, nd = get_direction(peaks, point, down)
                npt.assert_array_almost_equal(nd, -expected_dir)

                # Check that we can get directions at non-integer points
                point += rng.random(3)
                state, nd = get_direction(peaks, point, up)
                npt.assert_equal(state, 0)

                # Check that points are rounded to get initial direction
                point -= 0.5
                initial_dir = peaks.initial_direction(point)
                # It should be a (1, 3) array
                npt.assert_array_almost_equal(initial_dir, [expected_dir])

    with warnings.catch_warnings():
        warnings.filterwarnings(
            "ignore",
            message=descoteaux07_legacy_msg,
            category=PendingDeprecationWarning,
        )
        peaks1 = peaks_from_model(
            SillyModel(),
            data,
            default_sphere,
            relative_peak_threshold=0.5,
            min_separation_angle=25,
            npeaks=1,
        )
    peaks1._initialize()
    point = np.array([1, 1, 1], dtype=float)

    # it should have one direction
    npt.assert_array_almost_equal(len(peaks1.initial_direction(point)), 1)
    npt.assert_array_almost_equal(len(peaks.initial_direction(point)), 1)
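# A numpy-only sketch (hypothetical helper, not exercised by the suite) of the
# hemisphere property leveraged above: any unit vector with a negative z
# component is sign-flipped, so an antipodal pair of directions is always
# represented in the z >= 0 half of the sphere.
def _sketch_project_to_upper_hemisphere(vecs):
    # vecs: (N, 3) array of unit vectors; returns z >= 0 representatives
    return vecs * np.where(vecs[:, 2:3] < 0, -1.0, 1.0)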
dipy-1.11.0/dipy/reconst/tests/test_forecast.py

# Tests for FORECAST fitting and metrics

import warnings

import numpy as np
from numpy.testing import assert_almost_equal, assert_equal
import pytest

from dipy.core.sphere_stats import angular_similarity
from dipy.data import default_sphere, get_3shell_gtab, get_sphere
from dipy.direction.peaks import peak_directions
from dipy.reconst.forecast import ForecastModel
from dipy.reconst.shm import descoteaux07_legacy_msg
from dipy.sims.voxel import multi_tensor
from dipy.utils.optpkg import optional_package

cvxpy, have_cvxpy, _ = optional_package("cvxpy", min_version="1.4.1")
needs_cvxpy = pytest.mark.skipif(not have_cvxpy, reason="Requires CVXPY")


# Object to hold module global data
class _C:
    pass


data = _C()


def setup_module():
    global data
    data.gtab = get_3shell_gtab()
    data.mevals = np.array(([0.0017, 0.0003, 0.0003], [0.0017, 0.0003, 0.0003]))
    data.angl = [(0, 0), (60, 0)]
    data.S, data.sticks = multi_tensor(
        data.gtab,
        data.mevals,
        S0=100.0,
        angles=data.angl,
        fractions=[50, 50],
        snr=None,
    )
    data.sh_order_max = 6
    data.lambda_lb = 1e-8
    data.lambda_csd = 1.0
    sphere = get_sphere(name="repulsion100")
    data.sphere = sphere.vertices[0 : int(sphere.vertices.shape[0] / 2), :]


@needs_cvxpy
def test_forecast_positive_constrain():
    with warnings.catch_warnings():
        warnings.filterwarnings(
            "ignore",
            message=descoteaux07_legacy_msg,
            category=PendingDeprecationWarning,
        )
        fm = ForecastModel(
            data.gtab,
            sh_order_max=data.sh_order_max,
            lambda_lb=data.lambda_lb,
            dec_alg="POS",
            sphere=data.sphere,
        )
        f_fit = fm.fit(data.S)

    sphere = get_sphere(name="repulsion100")
    with warnings.catch_warnings():
        warnings.filterwarnings(
            "ignore",
            message=descoteaux07_legacy_msg,
            category=PendingDeprecationWarning,
        )
        fodf = f_fit.odf(sphere, clip_negative=False)
    assert_almost_equal(fodf[fodf < 0].sum(), 0, 2)

    coeff = f_fit.sh_coeff
    c0 = np.sqrt(1.0 / (4 * np.pi))
    assert_almost_equal(coeff[0], c0, 5)


def test_forecast_csd():
    sphere = get_sphere(name="repulsion100")
    with warnings.catch_warnings():
        warnings.filterwarnings(
            "ignore",
            message=descoteaux07_legacy_msg,
            category=PendingDeprecationWarning,
        )
        fm = ForecastModel(
            data.gtab, dec_alg="CSD", sphere=data.sphere, lambda_csd=data.lambda_csd
        )
        f_fit = fm.fit(data.S)
    with warnings.catch_warnings():
        warnings.filterwarnings(
            "ignore",
            message=descoteaux07_legacy_msg,
            category=PendingDeprecationWarning,
        )
        fodf_csd = f_fit.odf(sphere, clip_negative=False)

    with warnings.catch_warnings():
        warnings.filterwarnings(
            "ignore",
            message=descoteaux07_legacy_msg,
            category=PendingDeprecationWarning,
        )
        fm = ForecastModel(
            data.gtab,
            sh_order_max=data.sh_order_max,
            lambda_lb=data.lambda_lb,
            dec_alg="WLS",
        )
        f_fit = fm.fit(data.S)
    with warnings.catch_warnings():
        warnings.filterwarnings(
            "ignore",
            message=descoteaux07_legacy_msg,
            category=PendingDeprecationWarning,
        )
        fodf_wls = f_fit.odf(sphere, clip_negative=False)

    value = fodf_wls[fodf_wls < 0].sum() < fodf_csd[fodf_csd < 0].sum()
    assert_equal(value, 1)


def test_forecast_odf():
    # check FORECAST fODF at different SH order
    with warnings.catch_warnings():
        warnings.filterwarnings(
            "ignore",
            message=descoteaux07_legacy_msg,
            category=PendingDeprecationWarning,
        )
        fm = ForecastModel(data.gtab, sh_order_max=4, dec_alg="CSD", sphere=data.sphere)
        f_fit = fm.fit(data.S)
    sphere = default_sphere
    with warnings.catch_warnings():
        warnings.filterwarnings(
            "ignore",
            message=descoteaux07_legacy_msg,
            category=PendingDeprecationWarning,
        )
        fodf = f_fit.odf(sphere)
    directions, _, _ = peak_directions(
        fodf, sphere, relative_peak_threshold=0.35, min_separation_angle=25
    )
    assert_equal(len(directions), 2)
    assert_almost_equal(angular_similarity(directions, data.sticks), 2, 1)

    with warnings.catch_warnings():
        warnings.filterwarnings(
            "ignore",
            message=descoteaux07_legacy_msg,
            category=PendingDeprecationWarning,
        )
        fm = ForecastModel(data.gtab, sh_order_max=6, dec_alg="CSD", sphere=data.sphere)
        f_fit = fm.fit(data.S)
    with warnings.catch_warnings():
        warnings.filterwarnings(
            "ignore",
            message=descoteaux07_legacy_msg,
            category=PendingDeprecationWarning,
        )
        fodf = f_fit.odf(sphere)
    directions, _, _ = peak_directions(
        fodf, sphere, relative_peak_threshold=0.35, min_separation_angle=25
    )
    assert_equal(len(directions), 2)
    assert_almost_equal(angular_similarity(directions, data.sticks), 2, 1)

    with warnings.catch_warnings():
        warnings.filterwarnings(
            "ignore",
            message=descoteaux07_legacy_msg,
            category=PendingDeprecationWarning,
        )
        fm = ForecastModel(data.gtab, sh_order_max=8, dec_alg="CSD", sphere=data.sphere)
        f_fit = fm.fit(data.S)
    with warnings.catch_warnings():
        warnings.filterwarnings(
            "ignore",
            message=descoteaux07_legacy_msg,
            category=PendingDeprecationWarning,
        )
        fodf = f_fit.odf(sphere)
    directions, _, _ = peak_directions(
        fodf, sphere, relative_peak_threshold=0.35, min_separation_angle=25
    )
    assert_equal(len(directions), 2)
    assert_almost_equal(angular_similarity(directions, data.sticks), 2, 1)

    # stronger regularization is required for high order SH
    with warnings.catch_warnings():
        warnings.filterwarnings(
            "ignore",
            message=descoteaux07_legacy_msg,
            category=PendingDeprecationWarning,
        )
        fm = ForecastModel(
            data.gtab, sh_order_max=10, dec_alg="CSD", sphere=sphere.vertices
        )
        f_fit = fm.fit(data.S)
    with warnings.catch_warnings():
        warnings.filterwarnings(
            "ignore",
            message=descoteaux07_legacy_msg,
            category=PendingDeprecationWarning,
        )
        fodf = f_fit.odf(sphere)
    directions, _, _ = peak_directions(
        fodf, sphere, relative_peak_threshold=0.35, min_separation_angle=25
    )
    assert_equal(len(directions), 2)
    assert_almost_equal(angular_similarity(directions, data.sticks), 2, 1)

    with warnings.catch_warnings():
        warnings.filterwarnings(
            "ignore",
            message=descoteaux07_legacy_msg,
            category=PendingDeprecationWarning,
        )
        fm = ForecastModel(
            data.gtab, sh_order_max=12, dec_alg="CSD", sphere=sphere.vertices
        )
        f_fit = fm.fit(data.S)
    with warnings.catch_warnings():
        warnings.filterwarnings(
            "ignore",
            message=descoteaux07_legacy_msg,
            category=PendingDeprecationWarning,
        )
        fodf = f_fit.odf(sphere)
    directions, _, _ = peak_directions(
        fodf, sphere, relative_peak_threshold=0.35, min_separation_angle=25
    )
    assert_equal(len(directions), 2)
    assert_almost_equal(angular_similarity(directions, data.sticks), 2, 1)


def test_forecast_indices():
    # check anisotropic tensor
    with warnings.catch_warnings():
        warnings.filterwarnings(
            "ignore",
            message=descoteaux07_legacy_msg,
            category=PendingDeprecationWarning,
        )
        fm = ForecastModel(
            data.gtab, sh_order_max=2, lambda_lb=data.lambda_lb, dec_alg="WLS"
        )
        f_fit = fm.fit(data.S)

    d_par = f_fit.dpar
    d_perp = f_fit.dperp
    assert_almost_equal(d_par, data.mevals[0, 0], 5)
    assert_almost_equal(d_perp, data.mevals[0, 1], 5)

    gt_fa = np.sqrt(
        0.5
        * (2 * (data.mevals[0, 0] - data.mevals[0, 1]) ** 2)
        / (data.mevals[0, 0] ** 2 + 2 * data.mevals[0, 1] ** 2)
    )
    gt_md = (data.mevals[0, 0] + 2 * data.mevals[0, 1]) / 3.0
    assert_almost_equal(f_fit.fractional_anisotropy(), gt_fa, 2)
    assert_almost_equal(f_fit.mean_diffusivity(), gt_md, 5)

    # check isotropic tensor
    mevals = np.array(([0.003, 0.003, 0.003], [0.003, 0.003, 0.003]))
    data.angl = [(0, 0), (60, 0)]
    S, sticks = multi_tensor(
        data.gtab, mevals, S0=100.0, angles=data.angl, fractions=[50, 50], snr=None
    )
    with warnings.catch_warnings():
        warnings.filterwarnings(
            "ignore",
            message=descoteaux07_legacy_msg,
            category=PendingDeprecationWarning,
        )
        fm = ForecastModel(
            data.gtab,
            sh_order_max=data.sh_order_max,
            lambda_lb=data.lambda_lb,
            dec_alg="WLS",
        )
        f_fit = fm.fit(S)

    d_par = f_fit.dpar
    d_perp = f_fit.dperp
    assert_almost_equal(d_par, 3e-03, 5)
    assert_almost_equal(d_perp, 3e-03, 5)
    assert_almost_equal(f_fit.fractional_anisotropy(), 0.0, 5)
    assert_almost_equal(f_fit.mean_diffusivity(), 3e-03, 10)
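# A sketch (hypothetical helper, not exercised by the suite) of the general
# fractional anisotropy definition; for an axially symmetric tensor
# (l2 == l3) it reduces to the `gt_fa` expression used in
# test_forecast_indices above.
def _sketch_fa_general(evals):
    l1, l2, l3 = evals
    num = (l1 - l2) ** 2 + (l2 - l3) ** 2 + (l3 - l1) ** 2
    den = l1**2 + l2**2 + l3**2
    return np.sqrt(0.5 * num / den)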
def test_forecast_predict():
    # check anisotropic tensor
    with warnings.catch_warnings():
        warnings.filterwarnings(
            "ignore",
            message=descoteaux07_legacy_msg,
            category=PendingDeprecationWarning,
        )
        fm = ForecastModel(data.gtab, sh_order_max=8, dec_alg="CSD", sphere=data.sphere)
        f_fit = fm.fit(data.S)
    with warnings.catch_warnings():
        warnings.filterwarnings(
            "ignore",
            message=descoteaux07_legacy_msg,
            category=PendingDeprecationWarning,
        )
        S = f_fit.predict(S0=1.0)

    mse = np.sum((S - data.S / 100.0) ** 2) / len(S)
    assert_almost_equal(mse, 0.0, 3)


def test_multivox_forecast():
    gtab = get_3shell_gtab()
    mevals = np.array(([0.0017, 0.0003, 0.0003], [0.0017, 0.0003, 0.0003]))
    angl1 = [(0, 0), (60, 0)]
    angl2 = [(90, 0), (45, 90)]
    angl3 = [(0, 0), (90, 0)]
    S = np.zeros((3, 1, 1, len(gtab.bvals)))
    S[0, 0, 0], _ = multi_tensor(
        gtab, mevals, S0=1.0, angles=angl1, fractions=[50, 50], snr=None
    )
    S[1, 0, 0], _ = multi_tensor(
        gtab, mevals, S0=1.0, angles=angl2, fractions=[50, 50], snr=None
    )
    S[2, 0, 0], _ = multi_tensor(
        gtab, mevals, S0=1.0, angles=angl3, fractions=[50, 50], snr=None
    )

    with warnings.catch_warnings():
        warnings.filterwarnings(
            "ignore",
            message=descoteaux07_legacy_msg,
            category=PendingDeprecationWarning,
        )
        fm = ForecastModel(gtab, sh_order_max=8, dec_alg="CSD")
        f_fit = fm.fit(S)
    with warnings.catch_warnings():
        warnings.filterwarnings(
            "ignore",
            message=descoteaux07_legacy_msg,
            category=PendingDeprecationWarning,
        )
        S_predict = f_fit.predict()

    assert_equal(S_predict.shape, S.shape)
    mse1 = np.sum((S_predict[0, 0, 0] - S[0, 0, 0]) ** 2) / len(gtab.bvals)
    assert_almost_equal(mse1, 0.0, 3)
    mse2 = np.sum((S_predict[1, 0, 0] - S[1, 0, 0]) ** 2) / len(gtab.bvals)
    assert_almost_equal(mse2, 0.0, 3)
    mse3 = np.sum((S_predict[2, 0, 0] - S[2, 0, 0]) ** 2) / len(gtab.bvals)
    assert_almost_equal(mse3, 0.0, 3)


dipy-1.11.0/dipy/reconst/tests/test_fwdti.py

"""Testing Free Water Elimination Model"""

import numpy as np
from numpy.testing import assert_almost_equal, assert_array_almost_equal, assert_raises

from dipy.core.gradients import gradient_table
from dipy.data import get_fnames
from dipy.io.gradients import read_bvals_bvecs
import dipy.reconst.dti as dti
from dipy.reconst.dti import (
    decompose_tensor,
    fractional_anisotropy,
    from_lower_triangular,
)
import dipy.reconst.fwdti as fwdti
from dipy.reconst.fwdti import (
    cholesky_to_lower_triangular,
    fwdti_prediction,
    lower_triangular_to_cholesky,
    nls_fit_tensor,
    wls_fit_tensor,
)
from dipy.sims.voxel import (
    all_tensor_evecs,
    multi_tensor,
    multi_tensor_dki,
    single_tensor,
)


def setup_module():
    """Module-level setup"""
    global gtab, gtab_2s, mevals, model_params_mv
    global DWI, FAref, GTF, MDref, FAdti, MDdti
    _, fbvals, fbvecs = get_fnames(name="small_64D")
    bvals, bvecs = read_bvals_bvecs(fbvals, fbvecs)
    gtab = gradient_table(bvals, bvecs=bvecs)

    # FW model requires multishell data
    bvals_2s = np.concatenate((bvals, bvals * 1.5), axis=0)
    bvecs_2s = np.concatenate((bvecs, bvecs), axis=0)
    gtab_2s = gradient_table(bvals_2s, bvecs=bvecs_2s)

    # Simulate a typical DT and DW signal with no water contamination
    # S0 = np.array(100)
    dt = np.array([0.0017, 0, 0.0003, 0, 0, 0.0003])
    evals, evecs = decompose_tensor(from_lower_triangular(dt))
    S_tissue = single_tensor(gtab_2s, S0=100, evals=evals, evecs=evecs, snr=None)
    dm = dti.TensorModel(gtab_2s, fit_method="WLS")
    dtifit = dm.fit(S_tissue)
    FAdti = dtifit.fa
    MDdti = dtifit.md

    # Simulation of 8 voxels tested
    DWI = np.zeros((2, 2, 2, len(gtab_2s.bvals)))
    FAref = np.zeros((2, 2, 2))
    MDref = np.zeros((2, 2, 2))
    # Diffusion of tissue and water compartments is constant for all voxels
    mevals = np.array([[0.0017, 0.0003, 0.0003], [0.003, 0.003, 0.003]])
    # volume fractions
    GTF = np.array([[[0.06, 0.71], [0.33, 0.91]], [[0.0, 0.0], [0.0, 0.0]]])
    # S0 multivoxel
    # S0m = 100 * np.ones((2, 2, 2))
    # model_params ground truth (to be filled)
    model_params_mv = np.zeros((2, 2, 2, 13))
    for i in range(2):
        for j in range(2):
            gtf = GTF[0, i, j]
            S, p = multi_tensor(
                gtab_2s,
                mevals,
                S0=100,
                angles=[(90, 0), (90, 0)],
                fractions=[(1 - gtf) * 100, gtf * 100],
                snr=None,
            )
            DWI[0, i, j] = S
            FAref[0, i, j] = FAdti
            MDref[0, i, j] = MDdti
            R = all_tensor_evecs(p[0])
            R = R.reshape(9)
            model_params_mv[0, i, j] = np.concatenate(
                ([0.0017, 0.0003, 0.0003], R, [gtf]), axis=0
            )


def test_fwdti_singlevoxel():
    # Simulation when water contamination is added
    gtf = 0.44444  # ground truth volume fraction
    mevals = np.array([[0.0017, 0.0003, 0.0003], [0.003, 0.003, 0.003]])
    S_conta, peaks = multi_tensor(
        gtab_2s,
        mevals,
        S0=100,
        angles=[(90, 0), (90, 0)],
        fractions=[(1 - gtf) * 100, gtf * 100],
        snr=None,
    )
    fwdm = fwdti.FreeWaterTensorModel(gtab_2s, fit_method="WLS")
    fwefit = fwdm.fit(S_conta)
    FAfwe = fwefit.fa
    Ffwe = fwefit.f
    MDfwe = fwefit.md

    assert_almost_equal(FAdti, FAfwe, decimal=3)
    assert_almost_equal(Ffwe, gtf, decimal=3)
    assert_almost_equal(MDfwe, MDdti, decimal=3)

    # Test non-linear fit
    fwdm = fwdti.FreeWaterTensorModel(gtab_2s, fit_method="NLS", cholesky=False)
    fwefit = fwdm.fit(S_conta)
    FAfwe = fwefit.fa
    Ffwe = fwefit.f
    MDfwe = fwefit.md

    assert_almost_equal(FAdti, FAfwe)
    assert_almost_equal(Ffwe, gtf)
    assert_almost_equal(MDfwe, MDdti)

    # Test cholesky
fwdti.FreeWaterTensorModel(gtab_2s, fit_method="NLS", cholesky=True) fwefit = fwdm.fit(S_conta) FAfwe = fwefit.fa Ffwe = fwefit.f MDfwe = fwefit.md assert_almost_equal(FAdti, FAfwe) assert_almost_equal(Ffwe, gtf) assert_almost_equal(MDfwe, MDfwe) def test_fwdti_precision(): # Simulation when water contamination is added gtf = 0.63416 # ground truth volume fraction mevals = np.array([[0.0017, 0.0003, 0.0003], [0.003, 0.003, 0.003]]) S_conta, peaks = multi_tensor( gtab_2s, mevals, S0=100, angles=[(90, 0), (90, 0)], fractions=[(1 - gtf) * 100, gtf * 100], snr=None, ) fwdm = fwdti.FreeWaterTensorModel(gtab_2s, fit_method="WLS", piterations=5) fwefit = fwdm.fit(S_conta) FAfwe = fwefit.fa Ffwe = fwefit.f MDfwe = fwefit.md assert_almost_equal(FAdti, FAfwe, decimal=3) assert_almost_equal(Ffwe, gtf, decimal=3) assert_almost_equal(MDfwe, MDdti, decimal=5) def test_fwdti_multi_voxel(): fwdm = fwdti.FreeWaterTensorModel(gtab_2s, fit_method="NLS", cholesky=False) fwefit = fwdm.fit(DWI) FAfwe = fwefit.fa Ffwe = fwefit.f MDfwe = fwefit.md assert_almost_equal(FAfwe, FAref) assert_almost_equal(Ffwe, GTF) assert_almost_equal(MDfwe, MDref) # Test Cholesky fwdm = fwdti.FreeWaterTensorModel(gtab_2s, fit_method="NLS", cholesky=True) fwefit = fwdm.fit(DWI) FAfwe = fwefit.fa Ffwe = fwefit.f MDfwe = fwefit.md assert_almost_equal(FAfwe, FAref) assert_almost_equal(Ffwe, GTF) assert_almost_equal(MDfwe, MDref) def test_fwdti_predictions(): # single voxel case gtf = 0.50 # ground truth volume fraction angles = [(90, 0), (90, 0)] mevals = np.array([[0.0017, 0.0003, 0.0003], [0.003, 0.003, 0.003]]) S_conta, peaks = multi_tensor( gtab_2s, mevals, S0=100, angles=angles, fractions=[(1 - gtf) * 100, gtf * 100], snr=None, ) R = all_tensor_evecs(peaks[0]) R = R.reshape(9) model_params = np.concatenate(([0.0017, 0.0003, 0.0003], R, [gtf]), axis=0) S_pred1 = fwdti_prediction(model_params, gtab_2s, S0=100) assert_array_almost_equal(S_pred1, S_conta) # Testing in model class fwdm = fwdti.FreeWaterTensorModel(gtab_2s) S_pred2 = fwdm.predict(model_params, S0=100) assert_array_almost_equal(S_pred2, S_conta) # Testing in fit class fwefit = fwdm.fit(S_conta) S_pred3 = fwefit.predict(gtab_2s, S0=100) assert_array_almost_equal(S_pred3, S_conta, decimal=5) # Multi voxel simulation S_pred1 = fwdti_prediction(model_params_mv, gtab_2s, S0=100) # function assert_array_almost_equal(S_pred1, DWI) S_pred2 = fwdm.predict(model_params_mv, S0=100) # Model class assert_array_almost_equal(S_pred2, DWI) fwefit = fwdm.fit(DWI) # Fit class S_pred3 = fwefit.predict(gtab_2s, S0=100) assert_array_almost_equal(S_pred3, DWI) def test_fwdti_errors(): # 1st error - if a unknown fit method is given to the FWTM assert_raises(ValueError, fwdti.FreeWaterTensorModel, gtab_2s, fit_method="pKT") # 2nd error - if incorrect mask is given fwdtiM = fwdti.FreeWaterTensorModel(gtab_2s) incorrect_mask = np.array([[True, True, False], [True, False, False]]) assert_raises(ValueError, fwdtiM.fit, DWI, mask=incorrect_mask) # 3rd error - if data with only one non zero b-value is given assert_raises(ValueError, fwdti.FreeWaterTensorModel, gtab) # Testing the correct usage fwdtiM = fwdti.FreeWaterTensorModel(gtab_2s, min_signal=1) correct_mask = np.zeros((2, 2, 2)) correct_mask[0, :, :] = 1 correct_mask = correct_mask > 0 fwdtiF = fwdtiM.fit(DWI, mask=correct_mask) assert_array_almost_equal(fwdtiF.fa, FAref) assert_array_almost_equal(fwdtiF.f, GTF) # 4th error - if a sigma is selected by no value of sigma is given fwdm = fwdti.FreeWaterTensorModel(gtab_2s, fit_method="NLS", 
weighting="sigma") assert_raises(ValueError, fwdm.fit, DWI) def test_fwdti_restore(): # Restore has to work well even in nonproblematic cases # Simulate a signal corrupted by free water diffusion contamination gtf = 0.50 # ground truth volume fraction mevals = np.array([[0.0017, 0.0003, 0.0003], [0.003, 0.003, 0.003]]) S_conta, peaks = multi_tensor( gtab_2s, mevals, S0=100, angles=[(90, 0), (90, 0)], fractions=[(1 - gtf) * 100, gtf * 100], snr=None, ) fwdm = fwdti.FreeWaterTensorModel( gtab_2s, fit_method="NLS", weighting="sigma", sigma=4 ) fwdtiF = fwdm.fit(S_conta) assert_array_almost_equal(fwdtiF.fa, FAdti) assert_array_almost_equal(fwdtiF.f, gtf) fwdm2 = fwdti.FreeWaterTensorModel(gtab_2s, fit_method="NLS", weighting="gmm") fwdtiF2 = fwdm2.fit(S_conta) assert_array_almost_equal(fwdtiF2.fa, FAdti) assert_array_almost_equal(fwdtiF2.f, gtf) def test_cholesky_functions(): S, dt, kt = multi_tensor_dki( gtab, mevals, S0=100, angles=[(45.0, 45.0), (45.0, 45.0)], fractions=[80, 20] ) R = lower_triangular_to_cholesky(dt) tensor = cholesky_to_lower_triangular(R) assert_array_almost_equal(dt, tensor) def test_fwdti_jac_multi_voxel(): fwdm = fwdti.FreeWaterTensorModel(gtab_2s, fit_method="WLS") fwdm.fit(DWI[0, :, :]) # no f transform fwdm = fwdti.FreeWaterTensorModel( gtab_2s, fit_method="NLS", f_transform=False, jac=True ) fwefit = fwdm.fit(DWI[0, :, :]) Ffwe = fwefit.f assert_array_almost_equal(Ffwe, GTF[0, :]) # with f transform fwdm = fwdti.FreeWaterTensorModel( gtab_2s, fit_method="NLS", f_transform=True, jac=True ) fwefit = fwdm.fit(DWI[0, :, :]) Ffwe = fwefit.f assert_array_almost_equal(Ffwe, GTF[0, :]) def test_standalone_functions(): # WLS procedure params = wls_fit_tensor(gtab_2s, DWI) assert_array_almost_equal(params[..., 12], GTF) fa = fractional_anisotropy(params[..., :3]) assert_array_almost_equal(fa, FAref) # NLS procedure params = nls_fit_tensor(gtab_2s, DWI) assert_array_almost_equal(params[..., 12], GTF) fa = fractional_anisotropy(params[..., :3]) assert_array_almost_equal(fa, FAref) def test_md_regularization(): # single voxel gtf = 0.97 # for this ground truth value, md is larger than 2.7e-3 mevals = np.array([[0.0017, 0.0003, 0.0003], [0.003, 0.003, 0.003]]) S_conta, peaks = multi_tensor( gtab_2s, mevals, S0=100, angles=[(90, 0), (90, 0)], fractions=[(1 - gtf) * 100, gtf * 100], snr=None, ) fwdm = fwdti.FreeWaterTensorModel(gtab_2s, fit_method="NLS") fwefit = fwdm.fit(S_conta) assert_array_almost_equal(fwefit.fa, 0.0) assert_array_almost_equal(fwefit.md, 0.0) assert_array_almost_equal(fwefit.f, 1.0) # multi voxel DWI[0, 1, 1] = S_conta GTF[0, 1, 1] = 1 FAref[0, 1, 1] = 0 MDref[0, 1, 1] = 0 fwefit = fwdm.fit(DWI) assert_array_almost_equal(fwefit.fa, FAref) assert_array_almost_equal(fwefit.md, MDref) assert_array_almost_equal(fwefit.f, GTF) def test_negative_s0(): # single voxel gtf = 0.55 mevals = np.array([[0.0017, 0.0003, 0.0003], [0.003, 0.003, 0.003]]) S_conta, peaks = multi_tensor( gtab_2s, mevals, S0=100, angles=[(90, 0), (90, 0)], fractions=[(1 - gtf) * 100, gtf * 100], snr=None, ) S_conta[gtab_2s.bvals == 0] = -100 fwdm = fwdti.FreeWaterTensorModel(gtab_2s, fit_method="NLS") fwefit = fwdm.fit(S_conta) assert_array_almost_equal(fwefit.fa, 0.0) assert_array_almost_equal(fwefit.md, 0.0) assert_array_almost_equal(fwefit.f, 0.0) # multi voxel DWI[0, 0, 1, gtab_2s.bvals == 0] = -100 GTF[0, 0, 1] = 0 FAref[0, 0, 1] = 0 MDref[0, 0, 1] = 0 fwefit = fwdm.fit(DWI) assert_array_almost_equal(fwefit.fa, FAref) assert_array_almost_equal(fwefit.md, MDref) 
def test_fwdti_jac_multi_voxel():
    fwdm = fwdti.FreeWaterTensorModel(gtab_2s, fit_method="WLS")
    fwdm.fit(DWI[0, :, :])

    # no f transform
    fwdm = fwdti.FreeWaterTensorModel(
        gtab_2s, fit_method="NLS", f_transform=False, jac=True
    )
    fwefit = fwdm.fit(DWI[0, :, :])
    Ffwe = fwefit.f
    assert_array_almost_equal(Ffwe, GTF[0, :])

    # with f transform
    fwdm = fwdti.FreeWaterTensorModel(
        gtab_2s, fit_method="NLS", f_transform=True, jac=True
    )
    fwefit = fwdm.fit(DWI[0, :, :])
    Ffwe = fwefit.f
    assert_array_almost_equal(Ffwe, GTF[0, :])


def test_standalone_functions():
    # WLS procedure
    params = wls_fit_tensor(gtab_2s, DWI)
    assert_array_almost_equal(params[..., 12], GTF)
    fa = fractional_anisotropy(params[..., :3])
    assert_array_almost_equal(fa, FAref)

    # NLS procedure
    params = nls_fit_tensor(gtab_2s, DWI)
    assert_array_almost_equal(params[..., 12], GTF)
    fa = fractional_anisotropy(params[..., :3])
    assert_array_almost_equal(fa, FAref)


def test_md_regularization():
    # single voxel
    gtf = 0.97  # for this ground truth value, md is larger than 2.7e-3
    mevals = np.array([[0.0017, 0.0003, 0.0003], [0.003, 0.003, 0.003]])
    S_conta, peaks = multi_tensor(
        gtab_2s,
        mevals,
        S0=100,
        angles=[(90, 0), (90, 0)],
        fractions=[(1 - gtf) * 100, gtf * 100],
        snr=None,
    )
    fwdm = fwdti.FreeWaterTensorModel(gtab_2s, fit_method="NLS")
    fwefit = fwdm.fit(S_conta)

    assert_array_almost_equal(fwefit.fa, 0.0)
    assert_array_almost_equal(fwefit.md, 0.0)
    assert_array_almost_equal(fwefit.f, 1.0)

    # multi voxel
    DWI[0, 1, 1] = S_conta
    GTF[0, 1, 1] = 1
    FAref[0, 1, 1] = 0
    MDref[0, 1, 1] = 0
    fwefit = fwdm.fit(DWI)
    assert_array_almost_equal(fwefit.fa, FAref)
    assert_array_almost_equal(fwefit.md, MDref)
    assert_array_almost_equal(fwefit.f, GTF)


def test_negative_s0():
    # single voxel
    gtf = 0.55
    mevals = np.array([[0.0017, 0.0003, 0.0003], [0.003, 0.003, 0.003]])
    S_conta, peaks = multi_tensor(
        gtab_2s,
        mevals,
        S0=100,
        angles=[(90, 0), (90, 0)],
        fractions=[(1 - gtf) * 100, gtf * 100],
        snr=None,
    )
    S_conta[gtab_2s.bvals == 0] = -100
    fwdm = fwdti.FreeWaterTensorModel(gtab_2s, fit_method="NLS")
    fwefit = fwdm.fit(S_conta)

    assert_array_almost_equal(fwefit.fa, 0.0)
    assert_array_almost_equal(fwefit.md, 0.0)
    assert_array_almost_equal(fwefit.f, 0.0)

    # multi voxel
    DWI[0, 0, 1, gtab_2s.bvals == 0] = -100
    GTF[0, 0, 1] = 0
    FAref[0, 0, 1] = 0
    MDref[0, 0, 1] = 0
    fwefit = fwdm.fit(DWI)
    assert_array_almost_equal(fwefit.fa, FAref)
    assert_array_almost_equal(fwefit.md, MDref)
    assert_array_almost_equal(fwefit.f, GTF)


dipy-1.11.0/dipy/reconst/tests/test_gqi.py

import numpy as np
from numpy.testing import assert_almost_equal, assert_equal

from dipy.core.gradients import gradient_table
from dipy.core.sphere_stats import angular_similarity
from dipy.core.subdivide_octahedron import create_unit_sphere
from dipy.data import default_sphere, dsi_voxels, get_fnames, get_sphere
from dipy.direction.peaks import peak_directions
from dipy.reconst.gqi import GeneralizedQSamplingModel
from dipy.reconst.odf import gfa
from dipy.reconst.tests.test_dsi import sticks_and_ball_dummies
from dipy.sims.voxel import sticks_and_ball


def test_gqi():
    # load repulsion 724 sphere
    sphere = default_sphere
    # load icosahedron sphere
    sphere2 = create_unit_sphere(recursion_level=5)
    btable = np.loadtxt(get_fnames(name="dsi515btable"))
    bvals = btable[:, 0]
    bvecs = btable[:, 1:]
    gtab = gradient_table(bvals, bvecs=bvecs)
    data, golden_directions = sticks_and_ball(
        gtab, d=0.0015, S0=100, angles=[(0, 0), (90, 0)], fractions=[50, 50], snr=None
    )
    gq = GeneralizedQSamplingModel(gtab, method="gqi2", sampling_length=1.4)

    # repulsion724
    gqfit = gq.fit(data)
    odf = gqfit.odf(sphere)
    directions, values, indices = peak_directions(
        odf, sphere, relative_peak_threshold=0.35, min_separation_angle=25
    )
    assert_equal(len(directions), 2)
    assert_almost_equal(angular_similarity(directions, golden_directions), 2, 1)

    # 5 subdivisions
    gqfit = gq.fit(data)
    odf2 = gqfit.odf(sphere2)
    directions, values, indices = peak_directions(
        odf2, sphere2, relative_peak_threshold=0.35, min_separation_angle=25
    )
    assert_equal(len(directions), 2)
    assert_almost_equal(angular_similarity(directions, golden_directions), 2, 1)

    sb_dummies = sticks_and_ball_dummies(gtab)
    for sbd in sb_dummies:
        data, golden_directions = sb_dummies[sbd]
        odf = gq.fit(data).odf(sphere2)
        directions, values, indices = peak_directions(
            odf, sphere2, relative_peak_threshold=0.35, min_separation_angle=25
        )
        if len(directions) <= 3:
            assert_equal(len(directions), len(golden_directions))
        if len(directions) > 3:
            assert_equal(gfa(odf) < 0.1, True)


def test_mvoxel_gqi():
    data, gtab = dsi_voxels()
    sphere = get_sphere(name="symmetric724")
    gq = GeneralizedQSamplingModel(gtab, method="standard")
    gqfit = gq.fit(data)
    all_odfs = gqfit.odf(sphere)

    # Check that the first and last voxels each have 2 peaks
    odf = all_odfs[0, 0, 0]
    directions, values, indices = peak_directions(
        odf, sphere, relative_peak_threshold=0.35, min_separation_angle=25
    )
    assert_equal(directions.shape[0], 2)
    odf = all_odfs[-1, -1, -1]
    directions, values, indices = peak_directions(
        odf, sphere, relative_peak_threshold=0.35, min_separation_angle=25
    )
    assert_equal(directions.shape[0], 2)


dipy-1.11.0/dipy/reconst/tests/test_ivim.py

"""
Testing the Intravoxel incoherent motion module

The values of the various parameters used in the tests are inspired by
the study of the IVIM model applied to MR images of the brain by
:footcite:t:`Federau2012`.

References
----------
.. footbibliography::
"""

import warnings

import numpy as np
from numpy.testing import (
    assert_,
    assert_array_almost_equal,
    assert_array_equal,
    assert_array_less,
    assert_equal,
    assert_raises,
)
import pytest

from dipy.core.gradients import generate_bvecs, gradient_table
from dipy.reconst.ivim import IvimModel, ivim_prediction
from dipy.sims.voxel import multi_tensor
from dipy.testing import assert_greater_equal
from dipy.utils.optpkg import optional_package

cvxpy, have_cvxpy, _ = optional_package("cvxpy", min_version="1.4.1")
needs_cvxpy = pytest.mark.skipif(not have_cvxpy, reason="REQUIRES CVXPY")


def setup_module():
    global \
        gtab, \
        ivim_fit_single, \
        ivim_model_trr, \
        data_single, \
        params_trr, \
        data_multi, \
        ivim_params_trr, \
        D_star, \
        D, \
        f, \
        S0, \
        gtab_with_multiple_b0, \
        noisy_single, \
        mevals, \
        gtab_no_b0, \
        ivim_fit_multi, \
        ivim_model_VP, \
        f_VP, \
        D_star_VP, \
        D_VP, \
        params_VP

    # Let us generate some data for testing.
    bvals = np.array(
        [
            0.0, 10.0, 20.0, 30.0, 40.0, 60.0, 80.0,
            100.0, 120.0, 140.0, 160.0, 180.0, 200.0,
            300.0, 400.0, 500.0, 600.0, 700.0, 800.0, 900.0, 1000.0,
        ]
    )
    N = len(bvals)
    bvecs = generate_bvecs(N)
    gtab = gradient_table(bvals, bvecs=bvecs.T, b0_threshold=0)

    S0, f, D_star, D = 1000.0, 0.132, 0.00885, 0.000921
    # params for a single voxel
    params_trr = np.array([S0, f, D_star, D])

    mevals = np.array(([D_star, D_star, D_star], [D, D, D]))
    # This gives an isotropic signal.
    signal = multi_tensor(
        gtab, mevals, snr=None, S0=S0, fractions=[f * 100, 100 * (1 - f)]
    )
    # Single voxel data
    data_single = signal[0]
    data_multi = np.zeros((2, 2, 1, len(gtab.bvals)))
    data_multi[0, 0, 0] = data_multi[0, 1, 0] = data_multi[1, 0, 0] = data_multi[
        1, 1, 0
    ] = data_single

    ivim_params_trr = np.zeros((2, 2, 1, 4))
    ivim_params_trr[0, 0, 0] = ivim_params_trr[0, 1, 0] = params_trr
    ivim_params_trr[1, 0, 0] = ivim_params_trr[1, 1, 0] = params_trr

    msg = "Bounds for this fit have been set from experiments .*"
    with warnings.catch_warnings():
        warnings.filterwarnings("ignore", message=msg, category=UserWarning)
        ivim_model_trr = IvimModel(gtab, fit_method="trr")
        ivim_model_one_stage = IvimModel(gtab, fit_method="trr")
    ivim_fit_single = ivim_model_trr.fit(data_single)
    ivim_fit_multi = ivim_model_trr.fit(data_multi)
    ivim_model_one_stage.fit(data_single)
    ivim_model_one_stage.fit(data_multi)

    bvals_no_b0 = np.array(
        [
            5.0, 10.0, 20.0, 30.0, 40.0, 60.0, 80.0,
            100.0, 120.0, 140.0, 160.0, 180.0, 200.0,
            300.0, 400.0, 500.0, 600.0, 700.0, 800.0, 900.0, 1000.0,
        ]
    )
    gtab_no_b0 = gradient_table(bvals_no_b0, bvecs=bvecs.T, b0_threshold=0)

    bvals_with_multiple_b0 = np.array(
        [
            0.0, 0.0, 0.0, 0.0, 40.0, 60.0, 80.0,
            100.0, 120.0, 140.0, 160.0, 180.0, 200.0,
            300.0, 400.0, 500.0, 600.0, 700.0, 800.0, 900.0, 1000.0,
        ]
    )
    bvecs_with_multiple_b0 = generate_bvecs(N)
    gtab_with_multiple_b0 = gradient_table(
        bvals_with_multiple_b0, bvecs=bvecs_with_multiple_b0.T, b0_threshold=0
    )

    noisy_single = np.array(
        [
            4243.71728516, 4317.81298828, 4244.35693359, 4439.36816406,
            4420.06201172, 4152.30078125, 4114.34912109, 4104.59375,
            4151.61914062, 4003.58374023, 4013.68408203, 3906.39428711,
            3909.06079102, 3495.27197266, 3402.57006836, 3163.10180664,
            2896.04003906, 2663.7253418, 2614.87695312, 2316.55371094,
            2267.7722168,
        ]
    )

    noisy_multi = np.zeros((2, 2, 1, len(gtab.bvals)))
    noisy_multi[0, 1, 0] = noisy_multi[1, 0, 0] = noisy_multi[1, 1, 0] = noisy_single
    noisy_multi[0, 0, 0] = data_single

    msg = "Bounds for this fit have been set from experiments .*"
    with warnings.catch_warnings():
        warnings.filterwarnings("ignore", message=msg, category=UserWarning)
        ivim_model_VP = IvimModel(gtab, fit_method="VarPro")

    f_VP, D_star_VP, D_VP = 0.13, 0.0088, 0.000921
    # params for a single voxel
    params_VP = np.array([f, D_star, D])
    ivim_params_VP = np.zeros((2, 2, 1, 3))
    ivim_params_VP[0, 0, 0] = ivim_params_VP[0, 1, 0] = params_VP
    ivim_params_VP[1, 0, 0] = ivim_params_VP[1, 1, 0] = params_VP


def single_exponential(S0, D, bvals):
    return S0 * np.exp(-bvals * D)
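# The bi-exponential IVIM signal that these tests fit, written out as a sketch
# (hypothetical helper, not exercised by the suite; `ivim_prediction` is the
# function actually under test): a perfusion compartment with pseudo-diffusion
# D_star and a tissue compartment with diffusion D, mixed by the perfusion
# fraction f.
def _sketch_ivim_biexponential(params, bvals):
    S0, f, D_star, D = params
    return S0 * (f * np.exp(-bvals * D_star) + (1 - f) * np.exp(-bvals * D))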
""" # params for a single voxel S0_2 = 1000.0 params2 = np.array([S0_2, f, D_star, D]) mevals2 = np.array(([D_star, D_star, D_star], [D, D, D])) # This gives an isotropic signal. signal2 = multi_tensor( gtab, mevals2, snr=None, S0=S0_2, fractions=[f * 100, 100 * (1 - f)] ) # Single voxel data data_single2 = signal2[0] ivim_fit = ivim_model_trr.fit(data_single2) est_signal = ivim_fit.predict(gtab) assert_array_equal(est_signal.shape, data_single2.shape) assert_array_almost_equal(est_signal, data_single2) assert_array_almost_equal(ivim_fit.model_params, params2) def test_b0_threshold_greater_than0(): """ Added test case for default b0_threshold set to 50. Checks if error is thrown correctly. """ bvals_b0t = np.array( [ 50.0, 10.0, 20.0, 30.0, 40.0, 60.0, 80.0, 100.0, 120.0, 140.0, 160.0, 180.0, 200.0, 300.0, 400.0, 500.0, 600.0, 700.0, 800.0, 900.0, 1000.0, ] ) N = len(bvals_b0t) bvecs = generate_bvecs(N) gtab = gradient_table(bvals_b0t, bvecs=bvecs.T) with assert_raises(ValueError) as vae: _ = IvimModel(gtab, fit_method="trr") b0_s = "The IVIM model requires a measurement at b==0. As of " assert b0_s in vae.exception def test_bounds_x0(): """ Test to check if setting bounds for signal where initial value is higher than subsequent values works. These values are from the IVIM dataset which can be obtained by using the `read_ivim` function from dipy.data.fetcher. These are values from the voxel [160, 98, 33] which can be obtained by : .. code-block:: python from dipy.data.fetcher import read_ivim img, gtab = read_ivim() data = load_nifti_data(img) signal = data[160, 98, 33, :] """ x0_test = np.array([1.0, 0.13, 0.001, 0.0001]) test_signal = ivim_prediction(x0_test, gtab) ivim_fit = ivim_model_trr.fit(test_signal) est_signal = ivim_fit.predict(gtab) assert_array_equal(est_signal.shape, test_signal.shape) def test_predict(): """ Test the model prediction API. The predict method is already used in previous tests for estimation of the signal. But here, we will test is separately. """ assert_array_almost_equal(ivim_fit_single.predict(gtab), data_single) assert_array_almost_equal( ivim_model_trr.predict(ivim_fit_single.model_params, gtab), data_single ) ivim_fit_multi = ivim_model_trr.fit(data_multi) assert_array_almost_equal(ivim_fit_multi.predict(gtab), data_multi) def test_fit_object(): """ Test the method of IvimFit class """ assert_raises(IndexError, ivim_fit_single.__getitem__, (-0.1, 0, 0)) # Check if the S0 called is matching assert_array_almost_equal(ivim_fit_single.__getitem__(0).model_params, 1000.0) ivim_fit_multi = ivim_model_trr.fit(data_multi) # Should raise a TypeError if the arguments are not passed as tuple assert_raises(TypeError, ivim_fit_multi.__getitem__, -0.1, 0) # Should return IndexError if invalid indices are passed assert_raises(IndexError, ivim_fit_multi.__getitem__, (100, -0)) assert_raises(IndexError, ivim_fit_multi.__getitem__, (100, -0, 2)) assert_raises(IndexError, ivim_fit_multi.__getitem__, (-100, 0)) assert_raises(IndexError, ivim_fit_multi.__getitem__, [-100, 0]) assert_raises(IndexError, ivim_fit_multi.__getitem__, (1, 0, 0, 3, 4)) # Check if the get item returns the S0 value for voxel (1,0,0) assert_array_almost_equal( ivim_fit_multi.__getitem__((1, 0, 0)).model_params[0], data_multi[1, 0, 0][0] ) def test_shape(): """ Test if `shape` in `IvimFit` class gives the correct output. 
""" assert_array_equal(ivim_fit_single.shape, ()) ivim_fit_multi = ivim_model_trr.fit(data_multi) assert_array_equal(ivim_fit_multi.shape, (2, 2, 1)) def test_multiple_b0(): # Generate a signal with multiple b0 # This gives an isotropic signal. signal = multi_tensor( gtab_with_multiple_b0, mevals, snr=None, S0=S0, fractions=[f * 100, 100 * (1 - f)], ) # Single voxel data data_single = signal[0] msg = "Bounds for this fit have been set from experiments .*" with warnings.catch_warnings(): warnings.filterwarnings("ignore", message=msg, category=UserWarning) ivim_model_multiple_b0 = IvimModel(gtab_with_multiple_b0, fit_method="trr") ivim_model_multiple_b0.fit(data_single) # Test if all signals are positive def test_no_b0(): assert_raises(ValueError, IvimModel, gtab_no_b0) def test_noisy_fit(): """ Test fitting for noisy signals. This tests whether the threshold condition applies correctly and returns the linear fitting parameters. For older scipy versions, the returned value of `f` from a linear fit is around 135 and D and D_star values are equal. Hence doing a test based on Scipy version. """ msg = "Bounds for this fit have been set from experiments .*" with warnings.catch_warnings(): warnings.filterwarnings("ignore", message=msg, category=UserWarning) model_one_stage = IvimModel(gtab, fit_method="trr") with warnings.catch_warnings(record=True) as w: fit_one_stage = model_one_stage.fit(noisy_single) assert_equal(len(w), 3) for l_w in w: assert_(issubclass(l_w.category, UserWarning)) assert_("" in str(w[0].message)) assert_("x0 obtained from linear fitting is not feasible" in str(w[0].message)) assert_("x0 is unfeasible" in str(w[1].message)) assert_("Bounds are violated for leastsq fitting" in str(w[2].message)) assert_array_less(fit_one_stage.model_params, [10000.0, 0.3, 0.01, 0.001]) def test_S0(): """ Test if the `IvimFit` class returns the correct S0 """ assert_array_almost_equal(ivim_fit_single.S0_predicted, S0) assert_array_almost_equal(ivim_fit_multi.S0_predicted, ivim_params_trr[..., 0]) def test_perfusion_fraction(): """ Test if the `IvimFit` class returns the correct f """ assert_array_almost_equal(ivim_fit_single.perfusion_fraction, f) assert_array_almost_equal( ivim_fit_multi.perfusion_fraction, ivim_params_trr[..., 1] ) def test_D_star(): """ Test if the `IvimFit` class returns the correct D_star """ assert_array_almost_equal(ivim_fit_single.D_star, D_star) assert_array_almost_equal(ivim_fit_multi.D_star, ivim_params_trr[..., 2]) def test_D(): """ Test if the `IvimFit` class returns the correct D """ assert_array_almost_equal(ivim_fit_single.D, D) assert_array_almost_equal(ivim_fit_multi.D, ivim_params_trr[..., 3]) def test_estimate_linear_fit(): """ Test the linear estimates considering a single exponential fit. """ data_single_exponential_D = single_exponential(S0, D, gtab.bvals) assert_array_almost_equal( ivim_model_trr.estimate_linear_fit( data_single_exponential_D, split_b=500.0, less_than=False ), (S0, D), ) data_single_exponential_D_star = single_exponential(S0, D_star, gtab.bvals) assert_array_almost_equal( ivim_model_trr.estimate_linear_fit( data_single_exponential_D_star, split_b=100.0, less_than=True ), (S0, D_star), ) def test_estimate_f_D_star(): """ Test if the `estimate_f_D_star` returns the correct parameters after a non-linear fit. """ params_f_D = f + 0.001, D + 0.0001 assert_array_almost_equal( ivim_model_trr.estimate_f_D_star(params_f_D, data_single, S0, D), (f, D_star) ) def test_fit_one_stage(): """ Test to check the results for the one_stage linear fit. 
""" msg = "Bounds for this fit have been set from experiments .*" with warnings.catch_warnings(): warnings.filterwarnings("ignore", message=msg, category=UserWarning) model = IvimModel(gtab, two_stage=False) fit = model.fit(data_single) linear_fit_params = [9.88834140e02, 1.19707191e-01, 7.91176970e-03, 9.30095210e-04] linear_fit_signal = [ 988.83414044, 971.77122546, 955.46786293, 939.87125905, 924.93258982, 896.85182201, 870.90346447, 846.81187693, 824.34108781, 803.28900104, 783.48245048, 764.77297789, 747.03322866, 669.54798887, 605.03328304, 549.00852235, 499.21077611, 454.40299244, 413.83192296, 376.98072773, 343.45531017, ] assert_array_almost_equal(fit.model_params, linear_fit_params) assert_array_almost_equal(fit.predict(gtab), linear_fit_signal) def test_leastsq_failing(): """ Test for cases where leastsq fitting fails and the results from a linear fit is returned. """ with warnings.catch_warnings(record=True) as w: warnings.simplefilter("always", category=UserWarning) fit_single = ivim_model_trr.fit(noisy_single) assert_greater_equal(len(w), 3) u_warn = [l_w for l_w in w if issubclass(l_w.category, UserWarning)] assert_greater_equal(len(u_warn), 3) message = [ "x0 obtained from linear fitting is not feasible", "x0 is unfeasible", "Bounds are violated for leastsq fitting", ] assert_greater_equal( len([lw for lw in u_warn for m in message if m in str(lw.message)]), 3 ) # Test for the S0 and D values assert_array_almost_equal(fit_single.S0_predicted, 4356.268901117833) assert_array_almost_equal(fit_single.D, 6.936684e-04) def test_leastsq_error(): """ Test error handling of the `_leastsq` method works when unfeasible x0 is passed. If an unfeasible x0 value is passed using which leastsq fails, the x0 value is returned as it is. """ with warnings.catch_warnings(record=True) as w: warnings.simplefilter("always", category=UserWarning) fit = ivim_model_trr._leastsq(data_single, [-1, -1, -1, -1]) assert_greater_equal(len(w), 1) assert_(issubclass(w[-1].category, UserWarning)) assert_("" in str(w[-1].message)) assert_("x0 is unfeasible" in str(w[-1].message)) assert_array_almost_equal(fit, [-1, -1, -1, -1]) @needs_cvxpy def test_perfusion_fraction_vp(): """ Test if the `IvimFit` class returns the correct f """ ivim_fit_VP = ivim_model_VP.fit(data_single) assert_array_almost_equal(ivim_fit_VP.perfusion_fraction, f_VP, decimal=2) @needs_cvxpy def test_D_star_vp(): """ Test if the `IvimFit` class returns the correct D_star """ ivim_fit_VP = ivim_model_VP.fit(data_single) assert_array_almost_equal(ivim_fit_VP.D_star, D_star_VP, decimal=4) @needs_cvxpy def test_D_vp(): """ Test if the `IvimFit` class returns the correct D """ ivim_fit_VP = ivim_model_VP.fit(data_single) assert_array_almost_equal(ivim_fit_VP.D, D_VP, decimal=4) dipy-1.11.0/dipy/reconst/tests/test_mapmri.py000066400000000000000000001157261476546756600212670ustar00rootroot00000000000000from math import factorial import platform import time import warnings import numpy as np from numpy.testing import ( assert_, assert_almost_equal, assert_array_almost_equal, assert_equal, assert_raises, ) import pytest import scipy.integrate as integrate from scipy.special import gamma from dipy.core.sphere_stats import angular_similarity from dipy.core.subdivide_octahedron import create_unit_sphere from dipy.data import default_sphere, get_gtab_taiwan_dsi from dipy.direction.peaks import peak_directions from dipy.reconst import dti, mapmri from dipy.reconst.mapmri import MapmriModel, mapmri_index_matrix from dipy.reconst.odf import gfa from 
from dipy.reconst.shm import descoteaux07_legacy_msg, sh_to_sf
from dipy.reconst.tests.test_dsi import sticks_and_ball_dummies
from dipy.sims.voxel import (
    add_noise,
    cylinders_and_ball_soderman,
    multi_tensor,
    multi_tensor_pdf,
    single_tensor,
)
from dipy.testing.decorators import set_random_number_generator


def int_func(n):
    f = (
        np.sqrt(2)
        * factorial(n)
        / float(((gamma(1 + n / 2.0)) * np.sqrt(2 ** (n + 1) * factorial(n))))
    )
    return f


def generate_signal_crossing(gtab, lambda1, lambda2, lambda3, angle2=60):
    mevals = np.array(([lambda1, lambda2, lambda3], [lambda1, lambda2, lambda3]))
    angl = [(0, 0), (angle2, 0)]
    S, sticks = multi_tensor(
        gtab, mevals, S0=100.0, angles=angl, fractions=[50, 50], snr=None
    )
    return S, sticks


def test_orthogonality_basis_functions():
    # numerical integration parameters
    diffusivity = 0.0015
    qmin = 0
    qmax = 1000

    int1 = integrate.quad(
        lambda x: np.real(mapmri.mapmri_phi_1d(0, x, diffusivity))
        * np.real(mapmri.mapmri_phi_1d(2, x, diffusivity)),
        qmin,
        qmax,
    )[0]
    int2 = integrate.quad(
        lambda x: np.real(mapmri.mapmri_phi_1d(2, x, diffusivity))
        * np.real(mapmri.mapmri_phi_1d(4, x, diffusivity)),
        qmin,
        qmax,
    )[0]
    int3 = integrate.quad(
        lambda x: np.real(mapmri.mapmri_phi_1d(4, x, diffusivity))
        * np.real(mapmri.mapmri_phi_1d(6, x, diffusivity)),
        qmin,
        qmax,
    )[0]
    int4 = integrate.quad(
        lambda x: np.real(mapmri.mapmri_phi_1d(6, x, diffusivity))
        * np.real(mapmri.mapmri_phi_1d(8, x, diffusivity)),
        qmin,
        qmax,
    )[0]

    # checking for first 5 basis functions if they are indeed orthogonal
    assert_almost_equal(int1, 0.0)
    assert_almost_equal(int2, 0.0)
    assert_almost_equal(int3, 0.0)
    assert_almost_equal(int4, 0.0)

    # do the same for the isotropic mapmri basis functions
    # we already know the spherical harmonics are orthonormal
    # only check j>0, l=0 basis functions
    with warnings.catch_warnings():
        warnings.filterwarnings("ignore", category=integrate.IntegrationWarning)
        int1 = integrate.quad(
            lambda q: mapmri.mapmri_isotropic_radial_signal_basis(1, 0, diffusivity, q)
            * mapmri.mapmri_isotropic_radial_signal_basis(2, 0, diffusivity, q)
            * q**2,
            qmin,
            qmax,
        )[0]
        int2 = integrate.quad(
            lambda q: mapmri.mapmri_isotropic_radial_signal_basis(2, 0, diffusivity, q)
            * mapmri.mapmri_isotropic_radial_signal_basis(3, 0, diffusivity, q)
            * q**2,
            qmin,
            qmax,
        )[0]
        int3 = integrate.quad(
            lambda q: mapmri.mapmri_isotropic_radial_signal_basis(3, 0, diffusivity, q)
            * mapmri.mapmri_isotropic_radial_signal_basis(4, 0, diffusivity, q)
            * q**2,
            qmin,
            qmax,
        )[0]
        int4 = integrate.quad(
            lambda q: mapmri.mapmri_isotropic_radial_signal_basis(4, 0, diffusivity, q)
            * mapmri.mapmri_isotropic_radial_signal_basis(5, 0, diffusivity, q)
            * q**2,
            qmin,
            qmax,
        )[0]

    # checking for first 5 basis functions if they are indeed orthogonal
    assert_almost_equal(int1, 0.0)
    assert_almost_equal(int2, 0.0)
    assert_almost_equal(int3, 0.0)
    assert_almost_equal(int4, 0.0)


def test_mapmri_number_of_coefficients(radial_order=6):
    indices = mapmri_index_matrix(radial_order)
    n_c = indices.shape[0]
    F = radial_order / 2
    n_gt = np.round(1 / 6.0 * (F + 1) * (F + 2) * (4 * F + 3))
    assert_equal(n_c, n_gt)
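# A sketch (hypothetical helper, not exercised by the suite) that reproduces
# the closed form checked above by direct enumeration: count the non-negative
# integer triples (n1, n2, n3) whose even total order N = n1 + n2 + n3 does
# not exceed `radial_order`; there are (N + 1)(N + 2) / 2 triples for each N.
def _sketch_count_mapmri_indices(radial_order):
    count = 0
    for N in range(0, radial_order + 1, 2):
        count += (N + 1) * (N + 2) // 2
    # equals (F + 1)(F + 2)(4F + 3) / 6 with F = radial_order // 2
    return count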
assert_raises(ValueError, MapmriModel, gtab, laplacian_weighting="notGCV") def test_mapmri_initialize_pos_radius(): """ Test initialization conditions """ gtab = get_gtab_taiwan_dsi() # When string is provided it has to be "adaptive" ErrorType = ImportError if not mapmri.have_cvxpy else ValueError assert_raises( ErrorType, MapmriModel, gtab, positivity_constraint=True, pos_radius="notadaptive", ) # When a number is provided it has to be positive assert_raises( ErrorType, MapmriModel, gtab, positivity_constraint=True, pos_radius=-1 ) def test_mapmri_signal_fitting(radial_order=6): gtab = get_gtab_taiwan_dsi() l1, l2, l3 = [0.0015, 0.0003, 0.0003] S, _ = generate_signal_crossing(gtab, l1, l2, l3) mapm = MapmriModel(gtab, radial_order=radial_order, laplacian_weighting=0.02) mapfit = mapm.fit(S) S_reconst = mapfit.predict(gtab, S0=1.0) # test the signal reconstruction S = S / S[0] nmse_signal = np.sqrt(np.sum((S - S_reconst) ** 2)) / (S.sum()) assert_almost_equal(nmse_signal, 0.0, 3) # Test with multidimensional signals: mapm = MapmriModel(gtab, radial_order=radial_order, laplacian_weighting=0.02) # Each voxel is identical: mapfit = mapm.fit(S[:, None, None].T * np.ones((3, 3, 3, S.shape[0]))) # Predict back with an array of ones or a single value: for S0 in [S[0], np.ones((3, 3, 3, 203))]: S_reconst = mapfit.predict(gtab, S0=S0) # test the signal reconstruction for one voxel: nmse_signal = np.sqrt(np.sum((S - S_reconst[0, 0, 0]) ** 2)) / (S.sum()) assert_almost_equal(nmse_signal, 0.0, 3) # do the same for isotropic implementation with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) mapm = MapmriModel( gtab, radial_order=radial_order, laplacian_weighting=0.0001, anisotropic_scaling=False, ) mapfit = mapm.fit(S) with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) S_reconst = mapfit.predict(gtab, S0=1.0) # test the signal reconstruction S = S / S[0] nmse_signal = np.sqrt(np.sum((S - S_reconst) ** 2)) / (S.sum()) assert_almost_equal(nmse_signal, 0.0, 3) # do the same without the positivity constraint: with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) mapm = MapmriModel( gtab, radial_order=radial_order, laplacian_weighting=0.0001, positivity_constraint=False, anisotropic_scaling=False, ) mapfit = mapm.fit(S) with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) S_reconst = mapfit.predict(gtab, S0=1.0) # test the signal reconstruction S = S / S[0] nmse_signal = np.sqrt(np.sum((S - S_reconst) ** 2)) / (S.sum()) assert_almost_equal(nmse_signal, 0.0, 3) # Repeat with a gtab with big_delta and small_delta: gtab.big_delta = 5 gtab.small_delta = 3 with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) mapm = MapmriModel( gtab, radial_order=radial_order, laplacian_weighting=0.0001, positivity_constraint=False, anisotropic_scaling=False, ) mapfit = mapm.fit(S) with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) S_reconst = mapfit.predict(gtab, S0=1.0) # test the signal reconstruction S = S / S[0] nmse_signal = np.sqrt(np.sum((S - S_reconst) ** 2)) / (S.sum()) assert_almost_equal(nmse_signal, 0.0, 3) if 
mapmri.have_cvxpy: # Positivity constraint and anisotropic scaling: with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) mapm = MapmriModel( gtab, radial_order=radial_order, laplacian_weighting=0.0001, positivity_constraint=True, anisotropic_scaling=False, pos_radius=2, ) mapfit = mapm.fit(S) with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) S_reconst = mapfit.predict(gtab, S0=1.0) # test the signal reconstruction S = S / S[0] nmse_signal = np.sqrt(np.sum((S - S_reconst) ** 2)) / (S.sum()) assert_almost_equal(nmse_signal, 0.0, 3) # Positivity constraint and anisotropic scaling: with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) mapm = MapmriModel( gtab, radial_order=radial_order, laplacian_weighting=None, positivity_constraint=True, anisotropic_scaling=False, pos_radius=2, ) mapfit = mapm.fit(S) with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) S_reconst = mapfit.predict(gtab, S0=1.0) # test the signal reconstruction S = S / S[0] nmse_signal = np.sqrt(np.sum((S - S_reconst) ** 2)) / (S.sum()) assert_almost_equal(nmse_signal, 0.0, 2) @set_random_number_generator(1234) def test_mapmri_isotropic_static_scale_factor(radial_order=6, rng=None): gtab = get_gtab_taiwan_dsi() D = 0.7e-3 tau = 1 / (4 * np.pi**2) mu = np.sqrt(D * 2 * tau) l1, l2, l3 = [D, D, D] S = single_tensor(gtab, evals=np.r_[l1, l2, l3], rng=rng) S_array = np.tile(S, (5, 1)) stat_weight = 0.1 with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) mapm_scale_stat_reg_stat = MapmriModel( gtab, radial_order=radial_order, anisotropic_scaling=False, dti_scale_estimation=False, static_diffusivity=D, laplacian_regularization=True, laplacian_weighting=stat_weight, ) mapm_scale_adapt_reg_stat = MapmriModel( gtab, radial_order=radial_order, anisotropic_scaling=False, dti_scale_estimation=True, laplacian_regularization=True, laplacian_weighting=stat_weight, ) start = time.time() mapf_scale_stat_reg_stat = mapm_scale_stat_reg_stat.fit(S_array) time_scale_stat_reg_stat = time.time() - start start = time.time() mapf_scale_adapt_reg_stat = mapm_scale_adapt_reg_stat.fit(S_array) time_scale_adapt_reg_stat = time.time() - start # test if indeed the scale factor is fixed now assert_equal(np.all(mapf_scale_stat_reg_stat.mu == mu), True) # test if computation time is shorter (except on Windows): if not platform.system() == "Windows": assert_equal( time_scale_stat_reg_stat < time_scale_adapt_reg_stat, True, f"mapf_scale_stat_reg_stat ({time_scale_stat_reg_stat}s) slower " f"than mapf_scale_adapt_reg_stat ({time_scale_adapt_reg_stat}s). 
It " "should be the opposite.", ) # check if the fitted signal is the same with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) assert_almost_equal( mapf_scale_stat_reg_stat.fitted_signal(), mapf_scale_adapt_reg_stat.fitted_signal(), ) def test_mapmri_signal_fitting_over_radial_order(order_max=8): gtab = get_gtab_taiwan_dsi() l1, l2, l3 = [0.0012, 0.0003, 0.0003] S, _ = generate_signal_crossing(gtab, l1, l2, l3, angle2=60) # take radial order 0, 4 and 8 orders = [0, 4, 8] error_array = np.zeros(len(orders)) for i, order in enumerate(orders): mapm = MapmriModel(gtab, radial_order=order, laplacian_regularization=False) mapfit = mapm.fit(S) S_reconst = mapfit.predict(gtab, S0=100.0) error_array[i] = np.mean((S - S_reconst) ** 2) # check if the fitting error decreases as radial order increases assert_equal(np.diff(error_array) < 0.0, True) def test_mapmri_pdf_integral_unity(radial_order=6): gtab = get_gtab_taiwan_dsi() l1, l2, l3 = [0.0015, 0.0003, 0.0003] S, _ = generate_signal_crossing(gtab, l1, l2, l3) sphere = default_sphere # test MAPMRI fitting mapm = MapmriModel(gtab, radial_order=radial_order, laplacian_weighting=0.02) mapfit = mapm.fit(S) c_map = mapfit.mapmri_coeff # test if the analytical integral of the pdf is equal to one indices = mapmri_index_matrix(radial_order) integral = 0 for i in range(indices.shape[0]): n1, n2, n3 = indices[i] integral += c_map[i] * int_func(n1) * int_func(n2) * int_func(n3) assert_almost_equal(integral, 1.0, 3) # test if numerical integral of odf is equal to one odf = mapfit.odf(sphere, s=0) odf_sum = odf.sum() / sphere.vertices.shape[0] * (4 * np.pi) assert_almost_equal(odf_sum, 1.0, 2) # do the same for isotropic implementation radius_max = 0.04 # 40 microns gridsize = 17 r_points = mapmri.create_rspace(gridsize, radius_max) with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) mapm = MapmriModel( gtab, radial_order=radial_order, laplacian_weighting=0.02, anisotropic_scaling=False, ) mapfit = mapm.fit(S) with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) pdf = mapfit.pdf(r_points) pdf[r_points[:, 2] == 0.0] /= 2 # for antipodal symmetry on z-plane point_volume = (radius_max / (gridsize // 2)) ** 3 integral = pdf.sum() * point_volume * 2 assert_almost_equal(integral, 1.0, 3) with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) odf = mapfit.odf(sphere, s=0) odf_sum = odf.sum() / sphere.vertices.shape[0] * (4 * np.pi) assert_almost_equal(odf_sum, 1.0, 2) def test_mapmri_compare_fitted_pdf_with_multi_tensor(radial_order=6): gtab = get_gtab_taiwan_dsi() l1, l2, l3 = [0.0015, 0.0003, 0.0003] S, _ = generate_signal_crossing(gtab, l1, l2, l3) radius_max = 0.02 # 40 microns gridsize = 10 r_points = mapmri.create_rspace(gridsize, radius_max) # test MAPMRI fitting mapm = MapmriModel(gtab, radial_order=radial_order, laplacian_weighting=0.0001) mapfit = mapm.fit(S) # compare the mapmri pdf with the ground truth multi_tensor pdf mevals = np.array(([l1, l2, l3], [l1, l2, l3])) angl = [(0, 0), (60, 0)] pdf_mt = multi_tensor_pdf(r_points, mevals=mevals, angles=angl, fractions=[50, 50]) pdf_map = mapfit.pdf(r_points) nmse_pdf = np.sqrt(np.sum((pdf_mt - pdf_map) ** 2)) / (pdf_mt.sum()) assert_almost_equal(nmse_pdf, 0.0, 2) def 
test_mapmri_metrics_anisotropic(radial_order=6): gtab = get_gtab_taiwan_dsi() l1, l2, l3 = [0.0015, 0.0003, 0.0003] S, _ = generate_signal_crossing(gtab, l1, l2, l3, angle2=0) # test MAPMRI q-space indices mapm = MapmriModel(gtab, radial_order=radial_order, laplacian_regularization=False) mapfit = mapm.fit(S) tau = 1 / (4 * np.pi**2) # ground truth indices estimated from the DTI tensor rtpp_gt = 1.0 / (2 * np.sqrt(np.pi * l1 * tau)) rtap_gt = ( 1.0 / (2 * np.sqrt(np.pi * l2 * tau)) * 1.0 / (2 * np.sqrt(np.pi * l3 * tau)) ) rtop_gt = rtpp_gt * rtap_gt msd_gt = 2 * (l1 + l2 + l3) * tau qiv_gt = (64 * np.pi ** (7 / 2.0) * (l1 * l2 * l3 * tau**3) ** (3 / 2.0)) / ( (l2 * l3 + l1 * (l2 + l3)) * tau**2 ) assert_almost_equal(mapfit.rtap(), rtap_gt, 5) assert_almost_equal(mapfit.rtpp(), rtpp_gt, 5) assert_almost_equal(mapfit.rtop(), rtop_gt, 5) with warnings.catch_warnings(record=True) as w: ng = mapfit.ng() ng_parallel = mapfit.ng_parallel() ng_perpendicular = mapfit.ng_perpendicular() assert_equal(len(w), 3) for l_w in w: assert_(issubclass(l_w.category, UserWarning)) assert_( "model bval_threshold must be lower than 2000".lower() in str(l_w.message).lower() ) assert_almost_equal(ng, 0.0, 5) assert_almost_equal(ng_parallel, 0.0, 5) assert_almost_equal(ng_perpendicular, 0.0, 5) assert_almost_equal(mapfit.msd(), msd_gt, 5) assert_almost_equal(mapfit.qiv(), qiv_gt, 5) def test_mapmri_metrics_isotropic(radial_order=6): gtab = get_gtab_taiwan_dsi() l1, l2, l3 = [0.0003, 0.0003, 0.0003] # isotropic diffusivities S = single_tensor(gtab, evals=np.r_[l1, l2, l3]) # test MAPMRI q-space indices with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) mapm = MapmriModel( gtab, radial_order=radial_order, laplacian_regularization=False, anisotropic_scaling=False, ) mapfit = mapm.fit(S) tau = 1 / (4 * np.pi**2) # ground truth indices estimated from the DTI tensor rtpp_gt = 1.0 / (2 * np.sqrt(np.pi * l1 * tau)) rtap_gt = ( 1.0 / (2 * np.sqrt(np.pi * l2 * tau)) * 1.0 / (2 * np.sqrt(np.pi * l3 * tau)) ) rtop_gt = rtpp_gt * rtap_gt msd_gt = 2 * (l1 + l2 + l3) * tau qiv_gt = (64 * np.pi ** (7 / 2.0) * (l1 * l2 * l3 * tau**3) ** (3 / 2.0)) / ( (l2 * l3 + l1 * (l2 + l3)) * tau**2 ) with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) assert_almost_equal(mapfit.rtap(), rtap_gt, 5) assert_almost_equal(mapfit.rtpp(), rtpp_gt, 5) assert_almost_equal(mapfit.rtop(), rtop_gt, 4) assert_almost_equal(mapfit.msd(), msd_gt, 5) assert_almost_equal(mapfit.qiv(), qiv_gt, 5) def test_mapmri_laplacian_anisotropic(radial_order=6): gtab = get_gtab_taiwan_dsi() l1, l2, l3 = [0.0015, 0.0003, 0.0003] S = single_tensor(gtab, evals=np.r_[l1, l2, l3]) mapm = MapmriModel(gtab, radial_order=radial_order, laplacian_regularization=False) mapfit = mapm.fit(S) tau = 1 / (4 * np.pi**2) # ground truth norm of laplacian of tensor norm_of_laplacian_gt = ( (3 * (l1**2 + l2**2 + l3**2) + 2 * l2 * l3 + 2 * l1 * (l2 + l3)) * (np.pi ** (5 / 2.0) * tau) / (np.sqrt(2 * l1 * l2 * l3 * tau)) ) # check if estimated laplacian corresponds with ground truth laplacian_matrix = mapmri.mapmri_laplacian_reg_matrix( mapm.ind_mat, mapfit.mu, mapm.S_mat, mapm.T_mat, mapm.U_mat ) coef = mapfit._mapmri_coef norm_of_laplacian = np.dot(np.dot(coef, laplacian_matrix), coef) assert_almost_equal(norm_of_laplacian, norm_of_laplacian_gt) def test_mapmri_laplacian_isotropic(radial_order=6): gtab = 
get_gtab_taiwan_dsi() l1, l2, l3 = [0.0003, 0.0003, 0.0003] # isotropic diffusivities S = single_tensor(gtab, evals=np.r_[l1, l2, l3]) with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) mapm = MapmriModel( gtab, radial_order=radial_order, laplacian_regularization=False, anisotropic_scaling=False, ) mapfit = mapm.fit(S) tau = 1 / (4 * np.pi**2) # ground truth norm of laplacian of tensor norm_of_laplacian_gt = ( (3 * (l1**2 + l2**2 + l3**2) + 2 * l2 * l3 + 2 * l1 * (l2 + l3)) * (np.pi ** (5 / 2.0) * tau) / (np.sqrt(2 * l1 * l2 * l3 * tau)) ) # check if estimated laplacian corresponds with ground truth laplacian_matrix = mapmri.mapmri_isotropic_laplacian_reg_matrix( radial_order, mapfit.mu[0] ) coef = mapfit._mapmri_coef norm_of_laplacian = np.dot(np.dot(coef, laplacian_matrix), coef) assert_almost_equal(norm_of_laplacian, norm_of_laplacian_gt) def test_signal_fitting_equality_anisotropic_isotropic(radial_order=6): gtab = get_gtab_taiwan_dsi() l1, l2, l3 = [0.0015, 0.0003, 0.0003] S, _ = generate_signal_crossing(gtab, l1, l2, l3, angle2=60) gridsize = 17 radius_max = 0.02 r_points = mapmri.create_rspace(gridsize, radius_max) tenmodel = dti.TensorModel(gtab) evals = tenmodel.fit(S).evals tau = 1 / (4 * np.pi**2) # estimate isotropic scale factor u0 = mapmri.isotropic_scale_factor(evals * 2 * tau) mu = np.array([u0, u0, u0]) qvals = np.sqrt(gtab.bvals / tau) / (2 * np.pi) q = gtab.bvecs * qvals[:, None] M_aniso = mapmri.mapmri_phi_matrix(radial_order, mu, q) K_aniso = mapmri.mapmri_psi_matrix(radial_order, mu, r_points) with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) M_iso = mapmri.mapmri_isotropic_phi_matrix(radial_order, u0, q) K_iso = mapmri.mapmri_isotropic_psi_matrix(radial_order, u0, r_points) coef_aniso = np.dot(np.linalg.pinv(M_aniso), S) coef_iso = np.dot(np.linalg.pinv(M_iso), S) # test if anisotropic and isotropic implementation produce equal results # if the same isotropic scale factors are used s_fitted_aniso = np.dot(M_aniso, coef_aniso) s_fitted_iso = np.dot(M_iso, coef_iso) assert_array_almost_equal(s_fitted_aniso, s_fitted_iso) # the same test for the PDF pdf_fitted_aniso = np.dot(K_aniso, coef_aniso) pdf_fitted_iso = np.dot(K_iso, coef_iso) assert_array_almost_equal( pdf_fitted_aniso / pdf_fitted_iso, np.ones_like(pdf_fitted_aniso), 3 ) # test if the implemented version also produces the same result with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) mapm = MapmriModel( gtab, radial_order=radial_order, laplacian_regularization=False, anisotropic_scaling=False, ) s_fitted_implemented_isotropic = mapm.fit(S).fitted_signal() # normalize non-implemented fitted signal with b0 value s_fitted_aniso_norm = s_fitted_aniso / s_fitted_aniso.max() assert_array_almost_equal(s_fitted_aniso_norm, s_fitted_implemented_isotropic) # test if norm of signal laplacians are the same laplacian_matrix_iso = mapmri.mapmri_isotropic_laplacian_reg_matrix( radial_order, mu[0] ) ind_mat = mapmri.mapmri_index_matrix(radial_order) S_mat, T_mat, U_mat = mapmri.mapmri_STU_reg_matrices(radial_order) laplacian_matrix_aniso = mapmri.mapmri_laplacian_reg_matrix( ind_mat, mu, S_mat, T_mat, U_mat ) norm_aniso = np.dot(coef_aniso, np.dot(coef_aniso, laplacian_matrix_aniso)) norm_iso = np.dot(coef_iso, np.dot(coef_iso, laplacian_matrix_iso)) 
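    # Both values are the quadratic form coef.T @ L @ coef of the same signal
    # Laplacian; only the basis (anisotropic vs isotropic) differs, so the
    # two norms should agree.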
assert_almost_equal(norm_iso, norm_aniso) def test_mapmri_isotropic_design_matrix_separability(radial_order=6): gtab = get_gtab_taiwan_dsi() tau = 1 / (4 * np.pi**2) qvals = np.sqrt(gtab.bvals / tau) / (2 * np.pi) q = gtab.bvecs * qvals[:, None] mu = 0.0003 # random value with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) M = mapmri.mapmri_isotropic_phi_matrix(radial_order, mu, q) M_independent = mapmri.mapmri_isotropic_M_mu_independent(radial_order, q) M_dependent = mapmri.mapmri_isotropic_M_mu_dependent(radial_order, mu, qvals) M_reconstructed = M_independent * M_dependent assert_array_almost_equal(M, M_reconstructed) def test_estimate_radius_with_rtap(radius_gt=5e-3): gtab = get_gtab_taiwan_dsi() tau = 1 / (4 * np.pi**2) # we estimate the infinite diffusion time case for a perfectly reflecting # cylinder using the Callaghan model E = cylinders_and_ball_soderman( gtab, tau, radii=[radius_gt], snr=None, angles=[(0, 90)], fractions=[100] )[0] # estimate radius using anisotropic MAP-MRI. mapmod = mapmri.MapmriModel( gtab, radial_order=6, laplacian_regularization=True, laplacian_weighting=0.01 ) mapfit = mapmod.fit(E) radius_estimated = np.sqrt(1 / (np.pi * mapfit.rtap())) assert_almost_equal(radius_estimated, radius_gt, 5) # estimate radius using isotropic MAP-MRI. # note that the radial order is higher and the precision is lower due to # less accurate signal extrapolation. with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) mapmod = mapmri.MapmriModel( gtab, radial_order=8, laplacian_regularization=True, laplacian_weighting=0.01, anisotropic_scaling=False, ) mapfit = mapmod.fit(E) with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) radius_estimated = np.sqrt(1 / (np.pi * mapfit.rtap())) assert_almost_equal(radius_estimated, radius_gt, 4) @pytest.mark.skipif(not mapmri.have_cvxpy, reason="Requires CVXPY") @set_random_number_generator(1234) def test_positivity_constraint(radial_order=6, rng=None): gtab = get_gtab_taiwan_dsi() l1, l2, l3 = [0.0015, 0.0003, 0.0003] S, _ = generate_signal_crossing(gtab, l1, l2, l3, angle2=60) S_noise = add_noise(S, snr=20, S0=100.0, rng=rng) gridsize = 20 max_radius = 15e-3 # 20 microns maximum radius r_grad = mapmri.create_rspace(gridsize, max_radius) # The positivity constraint does not make the pdf completely positive # but greatly decreases the amount of negativity in the constrained points. 
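    # Negativity is quantified below as the sum of all negative pdf values
    # sampled on the r_grad evaluation grid.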
# We test if the amount of negative pdf has decreased more than 90% mapmod_no_constraint = MapmriModel( gtab, radial_order=radial_order, laplacian_regularization=False, positivity_constraint=False, ) mapfit_no_constraint = mapmod_no_constraint.fit(S_noise) pdf = mapfit_no_constraint.pdf(r_grad) pdf_negative_no_constraint = pdf[pdf < 0].sum() # Set the cvxpy solver to CLARABEL as the one picked otherwise for this # problem (OSQP) triggers a `Solution may be inaccurate` UserWarning with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=".*Solution may be inaccurate.*", category=UserWarning, ) mapmod_constraint = MapmriModel( gtab, radial_order=radial_order, laplacian_regularization=False, positivity_constraint=True, pos_grid=gridsize, pos_radius="adaptive", cvxpy_solver=mapmri.cvxpy.CLARABEL, ) mapfit_constraint = mapmod_constraint.fit(S_noise) pdf = mapfit_constraint.pdf(r_grad) pdf_negative_constraint = pdf[pdf < 0].sum() assert_equal((pdf_negative_constraint / pdf_negative_no_constraint) < 0.1, True) # the same for isotropic scaling with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) mapmod_no_constraint = MapmriModel( gtab, radial_order=radial_order, laplacian_regularization=False, positivity_constraint=False, anisotropic_scaling=False, ) mapfit_no_constraint = mapmod_no_constraint.fit(S_noise) with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) pdf = mapfit_no_constraint.pdf(r_grad) pdf_negative_no_constraint = pdf[pdf < 0].sum() with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) mapmod_constraint = MapmriModel( gtab, radial_order=radial_order, laplacian_regularization=False, positivity_constraint=True, anisotropic_scaling=False, pos_grid=gridsize, pos_radius="adaptive", ) mapfit_constraint = mapmod_constraint.fit(S_noise) pdf = mapfit_constraint.pdf(r_grad) pdf_negative_constraint = pdf[pdf < 0].sum() assert_equal((pdf_negative_constraint / pdf_negative_no_constraint) < 0.1, True) @pytest.mark.skipif(not mapmri.have_cvxpy, reason="Requires CVXPY") @set_random_number_generator(1234) def test_plus_constraint(radial_order=6, rng=None): gtab = get_gtab_taiwan_dsi() l1, l2, l3 = [0.0015, 0.0003, 0.0003] S, _ = generate_signal_crossing(gtab, l1, l2, l3, angle2=60) S_noise = add_noise(S, snr=20, S0=100.0, rng=rng) gridsize = 50 max_radius = 25e-3 # 25 microns maximum radius r_grad = mapmri.create_rspace(gridsize, max_radius) # The positivity constraint should make the pdf positive everywhere mapmod_constraint = MapmriModel( gtab, radial_order=radial_order, laplacian_regularization=False, positivity_constraint=True, global_constraints=True, ) mapfit_constraint = mapmod_constraint.fit(S_noise) pdf = mapfit_constraint.pdf(r_grad) pdf_negative_constraint = pdf[pdf < 0].sum() assert_equal(pdf_negative_constraint == 0.0, True) @set_random_number_generator(1234) def test_laplacian_regularization(radial_order=6, rng=None): gtab = get_gtab_taiwan_dsi() l1, l2, l3 = [0.0015, 0.0003, 0.0003] S, _ = generate_signal_crossing(gtab, l1, l2, l3, angle2=60) S_noise = add_noise(S, snr=20, S0=100.0, rng=rng) weight_array = np.linspace(0, 0.3, 301) mapmod_unreg = MapmriModel( gtab, radial_order=radial_order, laplacian_regularization=False, laplacian_weighting=weight_array, ) mapmod_laplacian_array = MapmriModel( gtab, 
radial_order=radial_order, laplacian_regularization=True, laplacian_weighting=weight_array, ) mapmod_laplacian_gcv = MapmriModel( gtab, radial_order=radial_order, laplacian_regularization=True, laplacian_weighting="GCV", ) # test the Generalized Cross Validation # test if GCV gives very low if there is no noise mapfit_laplacian_array = mapmod_laplacian_array.fit(S) assert_equal(mapfit_laplacian_array.lopt < 0.01, True) # test if GCV gives higher values if there is noise mapfit_laplacian_array = mapmod_laplacian_array.fit(S_noise) lopt_array = mapfit_laplacian_array.lopt assert_equal(lopt_array > 0.01, True) # test if continuous GCV gives the same the one based on an array mapfit_laplacian_gcv = mapmod_laplacian_gcv.fit(S_noise) lopt_gcv = mapfit_laplacian_gcv.lopt assert_almost_equal(lopt_array, lopt_gcv, 2) # test if laplacian reduced the norm of the laplacian in the reconstruction mu = mapfit_laplacian_gcv.mu laplacian_matrix = mapmri.mapmri_laplacian_reg_matrix( mapmod_laplacian_gcv.ind_mat, mu, mapmod_laplacian_gcv.S_mat, mapmod_laplacian_gcv.T_mat, mapmod_laplacian_gcv.U_mat, ) coef_unreg = mapmod_unreg.fit(S_noise)._mapmri_coef coef_laplacian = mapfit_laplacian_gcv._mapmri_coef laplacian_norm_unreg = np.dot(coef_unreg, np.dot(coef_unreg, laplacian_matrix)) laplacian_norm_laplacian = np.dot( coef_laplacian, np.dot(coef_laplacian, laplacian_matrix) ) assert_equal(laplacian_norm_laplacian < laplacian_norm_unreg, True) # the same for isotropic scaling with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) mapmod_unreg = MapmriModel( gtab, radial_order=radial_order, laplacian_regularization=False, laplacian_weighting=weight_array, anisotropic_scaling=False, ) mapmod_laplacian_array = MapmriModel( gtab, radial_order=radial_order, laplacian_regularization=True, laplacian_weighting=weight_array, anisotropic_scaling=False, ) mapmod_laplacian_gcv = MapmriModel( gtab, radial_order=radial_order, laplacian_regularization=True, laplacian_weighting="GCV", anisotropic_scaling=False, ) # test the Generalized Cross Validation # test if GCV gives zero if there is no noise mapfit_laplacian_array = mapmod_laplacian_array.fit(S) assert_equal(mapfit_laplacian_array.lopt < 0.01, True) # test if GCV gives higher values if there is noise mapfit_laplacian_array = mapmod_laplacian_array.fit(S_noise) lopt_array = mapfit_laplacian_array.lopt assert_equal(lopt_array > 0.01, True) # test if continuous GCV gives the same the one based on an array mapfit_laplacian_gcv = mapmod_laplacian_gcv.fit(S_noise) lopt_gcv = mapfit_laplacian_gcv.lopt assert_almost_equal(lopt_array, lopt_gcv, 2) # test if laplacian reduced the norm of the laplacian in the reconstruction mu = mapfit_laplacian_gcv.mu laplacian_matrix = mapmri.mapmri_isotropic_laplacian_reg_matrix(radial_order, mu[0]) coef_unreg = mapmod_unreg.fit(S_noise)._mapmri_coef coef_laplacian = mapfit_laplacian_gcv._mapmri_coef laplacian_norm_unreg = np.dot(coef_unreg, np.dot(coef_unreg, laplacian_matrix)) laplacian_norm_laplacian = np.dot( coef_laplacian, np.dot(coef_laplacian, laplacian_matrix) ) assert_equal(laplacian_norm_laplacian < laplacian_norm_unreg, True) def test_mapmri_odf(radial_order=6): gtab = get_gtab_taiwan_dsi() # load repulsion 724 sphere sphere = default_sphere # load icosahedron sphere l1, l2, l3 = [0.0015, 0.0003, 0.0003] data, golden_directions = generate_signal_crossing(gtab, l1, l2, l3, angle2=90) mapmod = MapmriModel( gtab, radial_order=radial_order, 
laplacian_regularization=True, laplacian_weighting=0.01, ) # repulsion724 sphere2 = create_unit_sphere(recursion_level=5) mapfit = mapmod.fit(data) odf = mapfit.odf(sphere) directions, _, _ = peak_directions( odf, sphere, relative_peak_threshold=0.35, min_separation_angle=25 ) assert_equal(len(directions), 2) assert_almost_equal(angular_similarity(directions, golden_directions), 2, 1) # 5 subdivisions odf = mapfit.odf(sphere2) directions, _, _ = peak_directions( odf, sphere2, relative_peak_threshold=0.35, min_separation_angle=25 ) assert_equal(len(directions), 2) assert_almost_equal(angular_similarity(directions, golden_directions), 2, 1) sb_dummies = sticks_and_ball_dummies(gtab) for sbd in sb_dummies: data, golden_directions = sb_dummies[sbd] asmfit = mapmod.fit(data) odf = asmfit.odf(sphere2) directions, _, _ = peak_directions( odf, sphere2, relative_peak_threshold=0.35, min_separation_angle=25 ) if len(directions) <= 3: assert_equal(len(directions), len(golden_directions)) if len(directions) > 3: assert_equal(gfa(odf) < 0.1, True) # for the isotropic implementation check if the odf spherical harmonics # actually represent the discrete sphere function. with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) mapmod = MapmriModel( gtab, radial_order=radial_order, laplacian_regularization=True, laplacian_weighting=0.01, anisotropic_scaling=False, ) mapfit = mapmod.fit(data) with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) odf = mapfit.odf(sphere) odf_sh = mapfit.odf_sh() with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) odf_from_sh = sh_to_sf( odf_sh, sphere, sh_order_max=radial_order, basis_type=None, legacy=True ) assert_almost_equal(odf, odf_from_sh, 10) dipy-1.11.0/dipy/reconst/tests/test_mcsd.py000066400000000000000000000367511476546756600207300ustar00rootroot00000000000000import warnings import numpy as np import numpy.testing as npt import pytest from dipy.core.gradients import GradientTable from dipy.data import default_sphere, get_3shell_gtab from dipy.reconst import shm from dipy.reconst.mcsd import ( MultiShellDeconvModel, auto_response_msmt, mask_for_response_msmt, multi_shell_fiber_response, response_from_mask_msmt, ) from dipy.sims.voxel import add_noise, multi_tensor, single_tensor from dipy.testing.decorators import set_random_number_generator from dipy.utils.optpkg import optional_package cvx, have_cvxpy, _ = optional_package("cvxpy", min_version="1.4.1") needs_cvxpy = pytest.mark.skipif(not have_cvxpy, reason="Requires CVXPY") wm_response = np.array( [ [1.7e-3, 0.4e-3, 0.4e-3, 25.0], [1.7e-3, 0.4e-3, 0.4e-3, 25.0], [1.7e-3, 0.4e-3, 0.4e-3, 25.0], ] ) csf_response = np.array( [ [3.0e-3, 3.0e-3, 3.0e-3, 100.0], [3.0e-3, 3.0e-3, 3.0e-3, 100.0], [3.0e-3, 3.0e-3, 3.0e-3, 100.0], ] ) gm_response = np.array( [ [4.0e-4, 4.0e-4, 4.0e-4, 40.0], [4.0e-4, 4.0e-4, 4.0e-4, 40.0], [4.0e-4, 4.0e-4, 4.0e-4, 40.0], ] ) def get_test_data(rng): gtab = get_3shell_gtab() evals_list = [ np.array([1.7e-3, 0.4e-3, 0.4e-3]), np.array([6.0e-4, 4.0e-4, 4.0e-4]), np.array([3.0e-3, 3.0e-3, 3.0e-3]), ] s0 = [0.8, 1, 4] signals = [single_tensor(gtab, x[0], evals=x[1]) for x in zip(s0, evals_list)] tissues = [0, 0, 2, 0, 1, 0, 0, 1, 2] # wm=0, gm=1, csf=2 data = [add_noise(signals[tissue], 80, s0[0], rng=rng) for tissue in tissues] data = 
np.asarray(data).reshape((3, 3, 1, len(signals[0]))) tissues = np.asarray(tissues).reshape((3, 3, 1)) masks = [np.where(tissues == x, 1, 0) for x in range(3)] responses = [np.concatenate((x[0], [x[1]])) for x in zip(evals_list, s0)] return gtab, data, masks, responses def _expand(m, iso, coeff): params = np.zeros(len(m)) params[m == 0] = coeff[iso:] params = np.concatenate([coeff[:iso], params]) return params @needs_cvxpy def test_mcsd_model_delta(): sh_order_max = 8 gtab = get_3shell_gtab() with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=shm.descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) response = multi_shell_fiber_response( sh_order_max, [0, 1000, 2000, 3500], wm_response, gm_response, csf_response ) with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=shm.descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) model = MultiShellDeconvModel(gtab, response) iso = response.iso theta, phi = default_sphere.theta, default_sphere.phi with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=shm.descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) B = shm.real_sh_descoteaux_from_index( response.m_values, response.l_values, theta[:, None], phi[:, None] ) wm_delta = model.delta.copy() # set isotropic components to zero wm_delta[:iso] = 0.0 wm_delta = _expand(model.m_values, iso, wm_delta) with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=shm.descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) for i, s in enumerate([0, 1000, 2000, 3500]): g = GradientTable(default_sphere.vertices * s) signal = model.predict(wm_delta, gtab=g) expected = np.dot(response.response[i, iso:], B.T) npt.assert_array_almost_equal(signal, expected) with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=shm.descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) signal = model.predict(wm_delta, gtab=gtab) fit = model.fit(signal) m = model.m_values npt.assert_array_almost_equal(fit.shm_coeff[m != 0], 0.0, 2) @needs_cvxpy def test_MultiShellDeconvModel_response(): gtab = get_3shell_gtab() sh_order_max = 8 with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=shm.descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) response = multi_shell_fiber_response( sh_order_max, [0, 1000, 2000, 3500], wm_response, gm_response, csf_response ) with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=shm.descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) model_1 = MultiShellDeconvModel(gtab, response, sh_order_max=sh_order_max) responses = np.array([wm_response, gm_response, csf_response]) with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=shm.descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) model_2 = MultiShellDeconvModel(gtab, responses, sh_order_max=sh_order_max) response_1 = model_1.response.response response_2 = model_2.response.response npt.assert_array_almost_equal(response_1, response_2, 0) npt.assert_raises(ValueError, MultiShellDeconvModel, gtab, np.ones((4, 3, 4))) npt.assert_raises( ValueError, MultiShellDeconvModel, gtab, np.ones((3, 3, 4)), iso=3 ) @needs_cvxpy def test_MultiShellDeconvModel(): gtab = get_3shell_gtab() mevals = np.array([wm_response[0, :3], wm_response[0, :3]]) angles = [(0, 0), (60, 0)] S_wm, sticks = multi_tensor( gtab, mevals, S0=wm_response[0, 3], angles=angles, fractions=[30.0, 70.0], snr=None, ) S_gm = gm_response[0, 3] * np.exp(-gtab.bvals * 
gm_response[0, 0]) S_csf = csf_response[0, 3] * np.exp(-gtab.bvals * csf_response[0, 0]) sh_order_max = 8 with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=shm.descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) response = multi_shell_fiber_response( sh_order_max, [0, 1000, 2000, 3500], wm_response, gm_response, csf_response ) model = MultiShellDeconvModel(gtab, response) vf = [0.325, 0.2, 0.475] signal = sum(i * j for i, j in zip(vf, [S_csf, S_gm, S_wm])) fit = model.fit(signal) # Testing both ways to predict S_pred_fit = fit.predict() with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=shm.descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) S_pred_model = model.predict(fit.all_shm_coeff) npt.assert_array_almost_equal(S_pred_fit, S_pred_model, 0) npt.assert_array_almost_equal(S_pred_fit, signal, 0) @needs_cvxpy def test_MSDeconvFit(): gtab = get_3shell_gtab() mevals = np.array([wm_response[0, :3], wm_response[0, :3]]) angles = [(0, 0), (60, 0)] S_wm, sticks = multi_tensor( gtab, mevals, S0=wm_response[0, 3], angles=angles, fractions=[30.0, 70.0], snr=None, ) S_gm = gm_response[0, 3] * np.exp(-gtab.bvals * gm_response[0, 0]) S_csf = csf_response[0, 3] * np.exp(-gtab.bvals * csf_response[0, 0]) sh_order_max = 8 with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=shm.descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) response = multi_shell_fiber_response( sh_order_max, [0, 1000, 2000, 3500], wm_response, gm_response, csf_response ) model = MultiShellDeconvModel(gtab, response) vf = [0.325, 0.2, 0.475] signal = sum(i * j for i, j in zip(vf, [S_csf, S_gm, S_wm])) fit = model.fit(signal) # Testing volume fractions npt.assert_array_almost_equal(fit.volume_fractions, vf, 1) def test_multi_shell_fiber_response(): sh_order_max = 8 with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=shm.descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) response = multi_shell_fiber_response( sh_order_max, [0, 1000, 2000, 3500], wm_response, gm_response, csf_response ) npt.assert_equal(response.response.shape, (4, 7)) btens = ["LTE", "PTE", "STE", "CTE"] with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=shm.descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) response = multi_shell_fiber_response( sh_order_max, [0, 1000, 2000, 3500], wm_response, gm_response, csf_response, btens=btens, ) npt.assert_equal(response.response.shape, (4, 7)) with warnings.catch_warnings(record=True) as w: warnings.simplefilter("always", category=PendingDeprecationWarning) response = multi_shell_fiber_response( sh_order_max, [1000, 2000, 3500], wm_response, gm_response, csf_response ) # Test that the number of warnings raised is greater than 1, with # deprecation warnings being raised from using legacy SH bases as well # as a warning from multi_shell_fiber_response npt.assert_(len(w) > 1) # The last warning in list is the one from multi_shell_fiber_response npt.assert_(issubclass(w[-1].category, UserWarning)) npt.assert_("""No b0 given. 
Proceeding either way.""" in str(w[-1].message)) npt.assert_equal(response.response.shape, (3, 7)) @set_random_number_generator() def test_mask_for_response_msmt(rng): gtab, data, masks_gt, _ = get_test_data(rng) with warnings.catch_warnings(record=True) as w: wm_mask, gm_mask, csf_mask = mask_for_response_msmt( gtab, data, roi_center=None, roi_radii=(1, 1, 0), wm_fa_thr=0.7, gm_fa_thr=0.3, csf_fa_thr=0.15, gm_md_thr=0.001, csf_md_thr=0.0032, ) npt.assert_equal(len(w), 1) npt.assert_(issubclass(w[0].category, UserWarning)) npt.assert_("""Some b-values are higher than 1200.""" in str(w[0].message)) # Verifies that masks are not empty: masks_sum = int(np.sum(wm_mask) + np.sum(gm_mask) + np.sum(csf_mask)) npt.assert_equal(masks_sum != 0, True) npt.assert_array_almost_equal(masks_gt[0], wm_mask) npt.assert_array_almost_equal(masks_gt[1], gm_mask) npt.assert_array_almost_equal(masks_gt[2], csf_mask) @set_random_number_generator() def test_mask_for_response_msmt_nvoxels(rng): gtab, data, _, _ = get_test_data(rng) with warnings.catch_warnings(record=True) as w: wm_mask, gm_mask, csf_mask = mask_for_response_msmt( gtab, data, roi_center=None, roi_radii=(1, 1, 0), wm_fa_thr=0.7, gm_fa_thr=0.3, csf_fa_thr=0.15, gm_md_thr=0.001, csf_md_thr=0.0032, ) npt.assert_equal(len(w), 1) npt.assert_(issubclass(w[0].category, UserWarning)) npt.assert_("""Some b-values are higher than 1200.""" in str(w[0].message)) wm_nvoxels = np.sum(wm_mask) gm_nvoxels = np.sum(gm_mask) csf_nvoxels = np.sum(csf_mask) npt.assert_equal(wm_nvoxels, 5) npt.assert_equal(gm_nvoxels, 2) npt.assert_equal(csf_nvoxels, 2) with warnings.catch_warnings(record=True) as w: wm_mask, gm_mask, csf_mask = mask_for_response_msmt( gtab, data, roi_center=None, roi_radii=(1, 1, 0), wm_fa_thr=1, gm_fa_thr=0, csf_fa_thr=0, gm_md_thr=0, csf_md_thr=0, ) npt.assert_equal(len(w), 6) npt.assert_(issubclass(w[0].category, UserWarning)) npt.assert_("""Some b-values are higher than 1200.""" in str(w[0].message)) npt.assert_("No voxel with a FA higher than 1 were found" in str(w[1].message)) npt.assert_("No voxel with a FA lower than 0 were found" in str(w[2].message)) npt.assert_("No voxel with a MD lower than 0 were found" in str(w[3].message)) npt.assert_("No voxel with a FA lower than 0 were found" in str(w[4].message)) npt.assert_("No voxel with a MD lower than 0 were found" in str(w[5].message)) wm_nvoxels = np.sum(wm_mask) gm_nvoxels = np.sum(gm_mask) csf_nvoxels = np.sum(csf_mask) npt.assert_equal(wm_nvoxels, 0) npt.assert_equal(gm_nvoxels, 0) npt.assert_equal(csf_nvoxels, 0) @set_random_number_generator() def test_response_from_mask_msmt(rng): gtab, data, masks_gt, responses_gt = get_test_data(rng) response_wm, response_gm, response_csf = response_from_mask_msmt( gtab, data, masks_gt[0], masks_gt[1], masks_gt[2], tol=20 ) # Verifying that csf's response is greater than gm's npt.assert_equal(np.sum(response_csf[:, :3]) > np.sum(response_gm[:, :3]), True) # Verifying that csf and gm are described by spheres npt.assert_almost_equal(response_csf[:, 1], response_csf[:, 2]) npt.assert_allclose(response_csf[:, 0], response_csf[:, 1], rtol=1, atol=0) npt.assert_almost_equal(response_gm[:, 1], response_gm[:, 2]) npt.assert_allclose(response_gm[:, 0], response_gm[:, 1], rtol=1, atol=0) # Verifying that wm is anisotropic in one direction npt.assert_almost_equal(response_wm[:, 1], response_wm[:, 2]) npt.assert_equal(response_wm[:, 0] > 2.5 * response_wm[:, 1], True) # Verifying with ground truth for the first bvalue npt.assert_array_almost_equal(response_wm[0], 
responses_gt[0], 1) npt.assert_array_almost_equal(response_gm[0], responses_gt[1], 1) npt.assert_array_almost_equal(response_csf[0], responses_gt[2], 1) @set_random_number_generator() def test_auto_response_msmt(rng): gtab, data, _, _ = get_test_data(rng) with warnings.catch_warnings(record=True) as w: response_auto_wm, response_auto_gm, response_auto_csf = auto_response_msmt( gtab, data, tol=20, roi_center=None, roi_radii=(1, 1, 0), wm_fa_thr=0.7, gm_fa_thr=0.3, csf_fa_thr=0.15, gm_md_thr=0.001, csf_md_thr=0.0032, ) npt.assert_(issubclass(w[0].category, UserWarning)) npt.assert_( """Some b-values are higher than 1200. The DTI fit might be affected. It is advised to use mask_for_response_msmt with bvalues lower than 1200, followed by response_from_mask_msmt with all bvalues to overcome this.""" in str(w[0].message) ) mask_wm, mask_gm, mask_csf = mask_for_response_msmt( gtab, data, roi_center=None, roi_radii=(1, 1, 0), wm_fa_thr=0.7, gm_fa_thr=0.3, csf_fa_thr=0.15, gm_md_thr=0.001, csf_md_thr=0.0032, ) response_from_mask_wm, response_from_mask_gm, response_from_mask_csf = ( response_from_mask_msmt(gtab, data, mask_wm, mask_gm, mask_csf, tol=20) ) npt.assert_array_equal(response_auto_wm, response_from_mask_wm) npt.assert_array_equal(response_auto_gm, response_from_mask_gm) npt.assert_array_equal(response_auto_csf, response_from_mask_csf) dipy-1.11.0/dipy/reconst/tests/test_msdki.py000066400000000000000000000272431476546756600211050ustar00rootroot00000000000000"""Testing Mean Signal DKI (MSDKI)""" import random import numpy as np from numpy.testing import ( assert_, assert_almost_equal, assert_array_almost_equal, assert_raises, ) from dipy.core.gradients import gradient_table, round_bvals, unique_bvals_magnitude from dipy.data import get_fnames from dipy.io.gradients import read_bvals_bvecs import dipy.reconst.msdki as msdki from dipy.reconst.msdki import awf_from_msk, msk_from_awf from dipy.sims.voxel import multi_tensor_dki, single_tensor gtab_3s, signal_sph, msignal_sph, DWI, MDWI = None, None, None, None, None gtab, params_single, params_multi, MKgt_multi = None, None, None, None bvals_3s, MDgt_multi, S0gt_multi, MKgt, MDgt = None, None, None, None, None bvecs = None def setup_module(): global gtab_3s, signal_sph, msignal_sph, DWI, MDWI global bvecs, gtab, params_single, params_multi, MKgt_multi global bvals_3s, MDgt_multi, S0gt_multi, MKgt, MDgt fimg, fbvals, fbvecs = get_fnames(name="small_64D") bvals, bvecs = read_bvals_bvecs(fbvals, fbvecs) bvals = round_bvals(bvals) gtab = gradient_table(bvals, bvecs=bvecs) # 2 shells for techniques that requires multishell data bvals_3s = np.concatenate((bvals, bvals * 1.5, bvals * 2), axis=0) bvecs_3s = np.concatenate((bvecs, bvecs, bvecs), axis=0) gtab_3s = gradient_table(bvals_3s, bvecs=bvecs_3s) # Simulation 1. Spherical kurtosis tensor - MSK and MSD from the MSDKI # model should be equal to the MK and MD of the DKI tensor for cases of # spherical kurtosis tensors Di = 0.00099 De = 0.00226 mevals_sph = np.array([[Di, Di, Di], [De, De, De]]) f = 0.5 frac_sph = [f * 100, (1.0 - f) * 100] signal_sph, dt_sph, kt_sph = multi_tensor_dki( gtab_3s, mevals_sph, S0=100, fractions=frac_sph, snr=None ) # Compute ground truth values MDgt = f * Di + (1 - f) * De MKgt = 3 * f * (1 - f) * ((Di - De) / MDgt) ** 2 params_single = np.array([MDgt, MKgt]) msignal_sph = np.zeros(4) msignal_sph[0] = signal_sph[0] msignal_sph[1] = signal_sph[1] msignal_sph[2] = signal_sph[100] msignal_sph[3] = signal_sph[180] # Simulation 2. 
Multi-voxel simulations DWI = np.zeros((2, 2, 2, len(gtab_3s.bvals))) MDWI = np.zeros((2, 2, 2, 4)) MDgt_multi = np.zeros((2, 2, 2)) MKgt_multi = np.zeros((2, 2, 2)) S0gt_multi = np.zeros((2, 2, 2)) params_multi = np.zeros((2, 2, 2, 2)) for i in range(2): for j in range(2): for k in range(1): # Only one k to have some zero voxels f = random.uniform(0.0, 1) frac = [f * 100, (1.0 - f) * 100] signal_i, dt_i, kt_i = multi_tensor_dki( gtab_3s, mevals_sph, S0=100, fractions=frac, snr=None ) DWI[i, j, k] = signal_i md_i = f * Di + (1 - f) * De mk_i = 3 * f * (1 - f) * ((Di - De) / md_i) ** 2 MDgt_multi[i, j, k] = md_i MKgt_multi[i, j, k] = mk_i S0gt_multi[i, j, k] = 100 params_multi[i, j, k, 0] = md_i params_multi[i, j, k, 1] = mk_i MDWI[i, j, k, 0] = signal_i[0] MDWI[i, j, k, 1] = signal_i[1] MDWI[i, j, k, 2] = signal_i[100] MDWI[i, j, k, 3] = signal_i[180] def teardown_module(): global gtab_3s, signal_sph, msignal_sph, DWI, MDWI global bvecs, gtab, params_single, params_multi, MKgt_multi global bvals_3s, MDgt_multi, S0gt_multi, MKgt, MDgt gtab_3s, signal_sph, msignal_sph, DWI, MDWI = None, None, None, None, None gtab, params_single, params_multi, MKgt_multi = None, None, None, None bvals_3s, MDgt_multi, S0gt_multi, MKgt, MDgt = None, None, None, None, None bvecs = None def test_msdki_predict(): dkiM = msdki.MeanDiffusionKurtosisModel(gtab_3s) # single voxel pred = dkiM.predict(params_single, S0=100) assert_array_almost_equal(pred, signal_sph) # multi-voxel pred = dkiM.predict(params_multi, S0=100) assert_array_almost_equal(pred[:, :, 0, :], DWI[:, :, 0, :]) # check the function predict of the DiffusionKurtosisFit object dkiF = dkiM.fit(signal_sph) pred_single = dkiF.predict(gtab_3s, S0=100) assert_array_almost_equal(pred_single, signal_sph) dkiF = dkiM.fit(DWI) pred_multi = dkiF.predict(gtab_3s, S0=100) assert_array_almost_equal(pred_multi[:, :, 0, :], DWI[:, :, 0, :]) # No S0 dkiF = dkiM.fit(signal_sph) pred_single = dkiF.predict(gtab_3s) assert_array_almost_equal(100 * pred_single, signal_sph) dkiF = dkiM.fit(DWI) pred_multi = dkiF.predict(gtab_3s) assert_array_almost_equal(100 * pred_multi[:, :, 0, :], DWI[:, :, 0, :]) # SO volume dkiF = dkiM.fit(DWI) pred_multi = dkiF.predict(gtab_3s, S0=100 * np.ones(DWI.shape[:-1])) assert_array_almost_equal(pred_multi[:, :, 0, :], DWI[:, :, 0, :]) def test_errors(): # first error raises if MeanDiffusionKurtosisModel is called for # data will only one non-zero b-value assert_raises(ValueError, msdki.MeanDiffusionKurtosisModel, gtab) # second error raises if negative signal is given to MeanDiffusionKurtosis # model assert_raises(ValueError, msdki.MeanDiffusionKurtosisModel, gtab_3s, min_signal=-1) # third error raises if wrong mask is given to fit mask_wrong = np.ones((2, 3, 1)) msdki_model = msdki.MeanDiffusionKurtosisModel(gtab_3s) assert_raises(ValueError, msdki_model.fit, DWI, mask=mask_wrong) # fourth error raises if an given index point to more dimensions that data # does not contain # define auxiliary function for the assert raises def aux_test_fun(ob, ind): met = ob[ind].msk return met mdkiF = msdki_model.fit(DWI) assert_raises(IndexError, aux_test_fun, mdkiF, (0, 0, 0, 0)) # checking if aux_test_fun runs fine met = aux_test_fun(mdkiF, (0, 0, 0)) assert_array_almost_equal(MKgt_multi[0, 0, 0], met) # Fifth error rises if wrong mask is given to awf_from_msk assert_raises(ValueError, awf_from_msk, MKgt_multi, mask=mask_wrong) def test_design_matrix(): ub = unique_bvals_magnitude(bvals_3s) D = msdki.design_matrix(ub) Dgt = np.ones((4, 3)) Dgt[:, 0] = 
-ub Dgt[:, 1] = 1.0 / 6 * ub**2 assert_array_almost_equal(D, Dgt) def test_msignal(): # Multi-voxel case ms, ng = msdki.mean_signal_bvalue(DWI, gtab_3s) assert_array_almost_equal(ms, MDWI) assert_array_almost_equal(ng, np.array([3, 64, 64, 64])) # Single-voxel case ms, ng = msdki.mean_signal_bvalue(signal_sph, gtab_3s) assert_array_almost_equal(ng, np.array([3, 64, 64, 64])) assert_array_almost_equal(ms, msignal_sph) def test_msdki_statistics(): # tests if MD and MK are equal to expected values of a spherical # tensors # Multi-tensors ub = unique_bvals_magnitude(bvals_3s) design_matrix = msdki.design_matrix(ub) msignal, ng = msdki.mean_signal_bvalue(DWI, gtab_3s, bmag=None) params = msdki.wls_fit_msdki(design_matrix, msignal, ng) assert_array_almost_equal(params[..., 1], MKgt_multi) assert_array_almost_equal(params[..., 0], MDgt_multi) mdkiM = msdki.MeanDiffusionKurtosisModel(gtab_3s) mdkiF = mdkiM.fit(DWI) mk = mdkiF.msk md = mdkiF.msd assert_array_almost_equal(MKgt_multi, mk) assert_array_almost_equal(MDgt_multi, md) # Single-tensors mdkiF = mdkiM.fit(signal_sph) mk = mdkiF.msk md = mdkiF.msd assert_array_almost_equal(MKgt, mk) assert_array_almost_equal(MDgt, md) # Test with given mask mask = np.ones(DWI.shape[:-1]) v = (0, 0, 0) mask[1, 1, 1] = 0 mdkiF = mdkiM.fit(DWI, mask=mask) mk = mdkiF.msk md = mdkiF.msd assert_array_almost_equal(MKgt_multi, mk) assert_array_almost_equal(MDgt_multi, md) assert_array_almost_equal(MKgt_multi[v], mdkiF[v].msk) # tuple case assert_array_almost_equal(MDgt_multi[v], mdkiF[v].msd) # tuple case assert_array_almost_equal(MKgt_multi[0], mdkiF[0].msk) # not tuple case assert_array_almost_equal(MDgt_multi[0], mdkiF[0].msd) # not tuple case # Test returned S0 mdkiM = msdki.MeanDiffusionKurtosisModel(gtab_3s, return_S0_hat=True) mdkiF = mdkiM.fit(DWI) assert_array_almost_equal(S0gt_multi, mdkiF.S0_hat) assert_array_almost_equal(MKgt_multi[v], mdkiF[v].msk) def test_kurtosis_to_smt2_conversion(): # 1. Check conversion of smt2 awf to kurtosis # When awf = 0 kurtosis was to be 0 awf0 = 0 kexp0 = 0 kest0 = msk_from_awf(awf0) assert_almost_equal(kest0, kexp0) # When awf = 1 kurtosis was to be 2.4 awf1 = 1 kexp1 = 2.4 kest1 = msk_from_awf(awf1) assert_almost_equal(kest1, kexp1) # Check the inversion of msk_from_awf awf_test_array = np.linspace(0, 1, 100) k_exp = msk_from_awf(awf_test_array) awf_from_k = awf_from_msk(k_exp) assert_array_almost_equal(awf_from_k, awf_test_array) # Check the awf_from_msk estimates when kurtosis is out of expected # interval ranges - note that under SMT2 assumption MSK is never lower # than 0 and never higher than 2.4. Since SMT2 assumptions are commonly not # met kurtosis can be out of this expected range. So, if MSK is lower than # 0, f is set to 0 (avoiding negative f). On the other hand, if MSK is # higher than 2.4, f is set to the maximum value of 1. 
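    # The assertion below exercises both clipping branches at once:
    # MSK = -0.1 should map to f = 0 and MSK = 2.5 should map to f = 1.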
assert_array_almost_equal(awf_from_msk(np.array([-0.1, 2.5])), np.array([0.0, 1.0])) # if msk = np.nan, function outputs awf=np.nan assert_(np.isnan(awf_from_msk(np.array(np.nan)))) def test_smt2_metrics(): # Just checking if parameters can be retrieved from MSDKI's fit class obj # Based on the multi-voxel simulations above (computes gt for SMT2 params) AWFgt = awf_from_msk(MKgt_multi) DIgt = 3 * MDgt_multi / (1 + 2 * (1 - AWFgt) ** 2) # General microscopic anisotropy estimation (Eq 8 Henriques et al MRM 2019) RDe = DIgt * (1 - AWFgt) # tortuosity assumption VarD = 2 / 9 * (AWFgt * DIgt**2 + (1 - AWFgt) * (DIgt - RDe) ** 2) MD = (AWFgt * DIgt + (1 - AWFgt) * (DIgt + 2 * RDe)) / 3 uFAgt = np.sqrt(3 / 2 * VarD[MD > 0] / (VarD[MD > 0] + MD[MD > 0] ** 2)) mdkiM = msdki.MeanDiffusionKurtosisModel(gtab_3s) mdkiF = mdkiM.fit(DWI) assert_array_almost_equal(mdkiF.smt2f, AWFgt) assert_array_almost_equal(mdkiF.smt2di, DIgt) assert_array_almost_equal(mdkiF.smt2uFA[MD > 0], uFAgt) # Check if awf_from_msk when mask is given mask = MKgt_multi > 0 AWF = awf_from_msk(MKgt_multi, mask=mask) assert_array_almost_equal(AWF, AWFgt) def test_smt2_specific_cases(): mdkiM = msdki.MeanDiffusionKurtosisModel(gtab_3s) # Check smt2 is specific cases with known g.t: # 1) Intrinsic diffusion is equal MSD for single Gaussian isotropic # diffusion (i.e. awf=0) sig_gaussian = single_tensor(gtab_3s, evals=np.array([2e-3, 2e-3, 2e-3])) mdkiF = mdkiM.fit(sig_gaussian) assert_almost_equal(mdkiF.msk, 0.0) assert_almost_equal(mdkiF.msd, 2.0e-3) assert_almost_equal(mdkiF.smt2f, 0) assert_almost_equal(mdkiF.smt2di, 2.0e-3) # 2) Intrinsic diffusion is equal to MSD/3 for single powder-averaged stick # compartment Da = 2.0e-3 mevals = np.zeros((64, 3)) mevals[:, 0] = Da fracs = np.ones(64) * 100 / 64 signal_pa, dt_sph, kt_sph = multi_tensor_dki( gtab_3s, mevals, angles=bvecs[1:, :], fractions=fracs, snr=None ) mdkiF = mdkiM.fit(signal_pa) # decimal is set to 1 because of finite number of directions for powder # average calculation assert_almost_equal(mdkiF.msk, 2.4, decimal=1) assert_almost_equal(mdkiF.msd * 1000, Da / 3 * 1000, decimal=1) assert_almost_equal(mdkiF.smt2f, 1, decimal=1) assert_almost_equal(mdkiF.smt2di, mdkiF.msd * 3, decimal=1) dipy-1.11.0/dipy/reconst/tests/test_multi_voxel.py000066400000000000000000000203111476546756600223320ustar00rootroot00000000000000from functools import reduce import numpy as np import numpy.testing as npt from dipy.core.sphere import unit_icosahedron from dipy.reconst.multi_voxel import CallableArray, _squash, multi_voxel_fit from dipy.testing.decorators import set_random_number_generator, warning_for_keywords from dipy.utils.optpkg import optional_package joblib, has_joblib, _ = optional_package("joblib") dask, has_dask, _ = optional_package("dask") ray, has_ray, _ = optional_package("ray") def test_squash(): A = np.ones((3, 3), dtype=float) B = np.asarray(A, object) npt.assert_array_equal(A, _squash(B)) npt.assert_equal(_squash(B).dtype, A.dtype) B[2, 2] = None A[2, 2] = 0 npt.assert_array_equal(A, _squash(B)) npt.assert_equal(_squash(B).dtype, A.dtype) for ijk in np.ndindex(*B.shape): B[ijk] = np.ones((2,)) A = np.ones((3, 3, 2)) npt.assert_array_equal(A, _squash(B)) npt.assert_equal(_squash(B).dtype, A.dtype) B[2, 2] = None A[2, 2] = 0 npt.assert_array_equal(A, _squash(B)) npt.assert_equal(_squash(B).dtype, A.dtype) # sub-arrays have different shapes ( (3,) and (2,) ) B[0, 0] = np.ones((3,)) npt.assert_(_squash(B) is B) # Check dtypes for arrays and scalars arr_arr = np.zeros((2,), 
dtype=object) scalar_arr = np.zeros((2,), dtype=object) numeric_types = [ getattr(np, dtype) for dtype in ( "int8", "byte", "int16", "short", "int32", "intc", "int_", "int64", "longlong", "uint8", "ubyte", "uint16", "ushort", "uint32", "uintc", "uint", "uint64", "ulonglong", "float16", "half", "float32", "single", "float64", "double", "float96", "float128", "longdouble", "complex64", "csingle", "complex128", "cdouble", "complex192", "complex256", "clongdouble", ) if hasattr(np, dtype) ] + [bool] for dt0 in numeric_types: arr_arr[0] = np.zeros((3,), dtype=dt0) scalar_arr[0] = dt0(0) for dt1 in numeric_types: arr_arr[1] = np.zeros((3,), dtype=dt1) npt.assert_equal(_squash(arr_arr).dtype, reduce(np.add, arr_arr).dtype) scalar_arr[1] = dt0(1) npt.assert_equal( _squash(scalar_arr).dtype, reduce(np.add, scalar_arr).dtype ) # Check masks and Nones arr = np.ones((3, 4), dtype=float) obj_arr = arr.astype(object) arr[1, 1] = 99 obj_arr[1, 1] = None npt.assert_array_equal(_squash(obj_arr, mask=None, fill=99), arr) msk = arr == 1 npt.assert_array_equal(_squash(obj_arr, mask=msk, fill=99), arr) msk[1, 1] = 1 # unmask None - object array back npt.assert_array_equal(_squash(obj_arr, mask=msk, fill=99), obj_arr) msk[1, 1] = 0 # remask, back to fill again npt.assert_array_equal(_squash(obj_arr, mask=msk, fill=99), arr) obj_arr[2, 3] = None # add another unmasked None, object again npt.assert_array_equal(_squash(obj_arr, mask=msk, fill=99), obj_arr) # Check array of arrays obj_arrs = np.zeros((3,), dtype=object) for i in range(3): obj_arrs[i] = np.ones((4, 5)) arr_arrs = np.ones((3, 4, 5)) # No Nones npt.assert_array_equal(_squash(obj_arrs, mask=None, fill=99), arr_arrs) # None, implicit masking obj_masked = obj_arrs.copy() obj_masked[1] = None arr_masked = arr_arrs.copy() arr_masked[1] = 99 npt.assert_array_equal(_squash(obj_masked, mask=None, fill=99), arr_masked) msk = np.array([1, 0, 1], dtype=bool) # explicit mask npt.assert_array_equal(_squash(obj_masked, mask=msk, fill=99), arr_masked) def test_CallableArray(): callarray = CallableArray((2, 3), dtype=object) # Test without Nones callarray[:] = np.arange expected = np.empty([2, 3, 4]) expected[:] = range(4) npt.assert_array_equal(callarray(4), expected) # Test with Nones callarray[0, 0] = None expected[0, 0] = 0 npt.assert_array_equal(callarray(4), expected) @set_random_number_generator() def test_multi_voxel_fit(rng): class SillyModel: @warning_for_keywords() @multi_voxel_fit def fit( self, data, *, mask=None, another_kwarg=None, kwarg_untouched=True, **kwargs ): # We want to make sure that all kwargs are passed through to the # the fitting procedure assert another_kwarg is not None # Make sure that an argument that is not passed is still # usable in the fitting procedure: assert kwarg_untouched return SillyFit(model, data) def predict(self, S0): return np.ones(10) * S0 class SillyFit: def __init__(self, model, data): self.model = model self.data = data model_attr = 2.0 def odf(self, sphere): return np.ones(len(sphere.phi)) @property def directions(self): n = rng.integers(0, 10) return np.zeros((n, 3)) def predict(self, S0): return np.ones(self.data.shape) * S0 # Test the single voxel case model = SillyModel() single_voxel = np.zeros(64) fit = model.fit(single_voxel, another_kwarg="foo") npt.assert_equal(type(fit), SillyFit) # Test without a mask many_voxels = np.zeros((2, 3, 4, 64)) for verbose in [True, False]: fit = model.fit(many_voxels, verbose=verbose, another_kwarg="foo") expected = np.empty((2, 3, 4)) expected[:] = 2.0 
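    # SillyFit.model_attr is the class attribute 2.0, so the multi-voxel fit
    # is expected to broadcast it over the whole (2, 3, 4) voxel grid.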
npt.assert_array_equal(fit.model_attr, expected) expected = np.ones((2, 3, 4, 12)) npt.assert_array_equal(fit.odf(unit_icosahedron), expected) npt.assert_equal(fit.directions.shape, (2, 3, 4)) S0 = 100.0 npt.assert_equal(fit.predict(S0=S0), np.ones(many_voxels.shape) * S0) # Test with parallelization (using the "serial" dummy engine) fit = model.fit(many_voxels, another_kwarg="foo", engine="serial") for verbose in [True, False]: # Test with single value kwarg, or sequence of kwarg values for another_kwarg in ["foo", len(many_voxels) * ["foo"]]: # If parallelization engines are installed use them to test: if has_joblib: fit = model.fit( many_voxels, verbose=verbose, another_kwarg=another_kwarg, engine="joblib", ) npt.assert_equal(fit.predict(S0=S0), np.ones(many_voxels.shape) * S0) if has_dask: fit = model.fit( many_voxels, verbose=verbose, another_kwarg=another_kwarg, engine="dask", ) npt.assert_equal(fit.predict(S0=S0), np.ones(many_voxels.shape) * S0) if has_ray: fit = model.fit( many_voxels, verbose=verbose, another_kwarg=another_kwarg, engine="ray", ) npt.assert_equal(fit.predict(S0=S0), np.ones(many_voxels.shape) * S0) # Test with a mask mask = np.zeros((3, 3, 3)).astype("bool") mask[0, 0] = 1 mask[1, 1] = 1 mask[2, 2] = 1 data = np.zeros((3, 3, 3, 64)) fit = model.fit(data, mask=mask, another_kwarg="foo") expected = np.zeros((3, 3, 3)) expected[0, 0] = 2 expected[1, 1] = 2 expected[2, 2] = 2 npt.assert_array_equal(fit.model_attr, expected) odf = fit.odf(unit_icosahedron) npt.assert_equal(odf.shape, (3, 3, 3, 12)) npt.assert_array_equal(odf[~mask], 0) npt.assert_array_equal(odf[mask], 1) predicted = np.zeros(data.shape) predicted[mask] = S0 npt.assert_equal(fit.predict(S0=S0), predicted) # Test fit.shape npt.assert_equal(fit.shape, (3, 3, 3)) # Test indexing into a fit npt.assert_equal(type(fit[0, 0, 0]), SillyFit) npt.assert_equal(fit[:2, :2, :2].shape, (2, 2, 2)) dipy-1.11.0/dipy/reconst/tests/test_odf.py000066400000000000000000000045001476546756600205350ustar00rootroot00000000000000import numpy as np from numpy.testing import assert_, assert_almost_equal, assert_equal from dipy.core.gradients import GradientTable, gradient_table from dipy.core.subdivide_octahedron import create_unit_hemisphere from dipy.data import get_sphere from dipy.reconst.odf import OdfFit, OdfModel, gfa, minmax_normalize from dipy.sims.voxel import multi_tensor, multi_tensor_odf _sphere = create_unit_hemisphere(recursion_level=4) _odf = (_sphere.vertices * [1, 2, 3]).sum(-1) _gtab = GradientTable(np.ones((64, 3))) class SimpleOdfModel(OdfModel): sphere = _sphere def fit(self, data): fit = SimpleOdfFit(self, data) return fit class SimpleOdfFit(OdfFit): def odf(self, sphere=None): if sphere is None: sphere = self.model.sphere # Use ascontiguousarray to work around a bug in NumPy return np.ascontiguousarray((sphere.vertices * [1, 2, 3]).sum(-1)) def test_OdfFit(): m = SimpleOdfModel(_gtab) f = m.fit(None) odf = f.odf(_sphere) assert_equal(len(odf), len(_sphere.theta)) def test_minmax_normalize(): bvalue = 3000 S0 = 1 SNR = 100 sphere = get_sphere(name="symmetric362") bvecs = np.concatenate(([[0, 0, 0]], sphere.vertices)) bvals = np.zeros(len(bvecs)) + bvalue bvals[0] = 0 gtab = gradient_table(bvals, bvecs=bvecs) evals = np.array(([0.0017, 0.0003, 0.0003], [0.0017, 0.0003, 0.0003])) multi_tensor( gtab, evals, S0=S0, angles=[(0, 0), (90, 0)], fractions=[50, 50], snr=SNR ) odf = multi_tensor_odf( sphere.vertices, evals, angles=[(0, 0), (90, 0)], fractions=[50, 50] ) odf2 = minmax_normalize(odf) 
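# minmax_normalize rescales along the last axis as (x - min) / (max - min), so
# the extrema checked below are exact by construction. A minimal plain-NumPy
# sketch of the same rescaling (illustrative, not the library implementation):
odf_manual = (odf - odf.min(-1)) / (odf.max(-1) - odf.min(-1))
assert_almost_equal(odf2, odf_manual)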
assert_equal(odf2.max(), 1) assert_equal(odf2.min(), 0) odf3 = np.empty(odf.shape) odf3 = minmax_normalize(odf, out=odf3) assert_equal(odf3.max(), 1) assert_equal(odf3.min(), 0) def test_gfa(): g = gfa(np.ones(100)) assert_equal(g, 0) g = gfa(np.ones((2, 100))) assert_equal(g, np.array([0, 0])) # The following series follows the rule (sqrt(n-1)/((n-1)^2)) g = gfa(np.hstack([np.ones(9), [0]])) assert_almost_equal(g, np.sqrt(9.0 / 81)) g = gfa(np.hstack([np.ones(99), [0]])) assert_almost_equal(g, np.sqrt(99.0 / (99.0**2))) # All-zeros returns a nan with no warning: g = gfa(np.zeros(10)) assert_(np.isnan(g)) dipy-1.11.0/dipy/reconst/tests/test_peak_finding.py000066400000000000000000000116271476546756600224130ustar00rootroot00000000000000import numpy as np import numpy.testing as npt from dipy.core.sphere import HemiSphere, unique_edges from dipy.data import default_sphere from dipy.reconst.recspeed import ( local_maxima, remove_similar_vertices, search_descending, ) def test_local_maxima(): sphere = default_sphere vertices, faces = sphere.vertices, sphere.faces edges = unique_edges(faces) # Check that the first peak is == max(odf) odf = abs(vertices.sum(-1)) peak_values, peak_index = local_maxima(odf, edges) npt.assert_equal(max(odf), peak_values[0]) npt.assert_equal(max(odf), odf[peak_index[0]]) # Create an artificial odf with a few peaks odf = np.zeros(len(vertices)) odf[1] = 1.0 odf[143] = 143.0 odf[361] = 361.0 peak_values, peak_index = local_maxima(odf, edges) npt.assert_array_equal(peak_values, [361, 143, 1]) npt.assert_array_equal(peak_index, [361, 143, 1]) # Check that neighboring points can both be peaks odf = np.zeros(len(vertices)) point1, point2 = edges[0] odf[[point1, point2]] = 1.0 peak_values, peak_index = local_maxima(odf, edges) npt.assert_array_equal(peak_values, [1.0, 1.0]) npt.assert_(point1 in peak_index) npt.assert_(point2 in peak_index) # Repeat with a hemisphere hemisphere = HemiSphere(xyz=vertices, faces=faces) vertices, edges = hemisphere.vertices, hemisphere.edges # Check that the first peak is == max(odf) odf = abs(vertices.sum(-1)) peak_values, peak_index = local_maxima(odf, edges) npt.assert_equal(max(odf), peak_values[0]) npt.assert_equal(max(odf), odf[peak_index[0]]) # Create an artificial odf with a few peaks odf = np.zeros(len(vertices)) odf[1] = 1.0 odf[143] = 143.0 odf[300] = 300.0 peak_value, peak_index = local_maxima(odf, edges) npt.assert_array_equal(peak_value, [300, 143, 1]) npt.assert_array_equal(peak_index, [300, 143, 1]) # Check that neighboring points can both be peaks odf = np.zeros(len(vertices)) point1, point2 = edges[0] odf[[point1, point2]] = 1.0 peak_values, peak_index = local_maxima(odf, edges) npt.assert_array_equal(peak_values, [1.0, 1.0]) npt.assert_(point1 in peak_index) npt.assert_(point2 in peak_index) # Should raise an error if odf has nans odf[20] = np.nan npt.assert_raises(ValueError, local_maxima, odf, edges) # Should raise an error if edge values are too large to index odf edges[0, 0] = 9999 odf[20] = 0 npt.assert_raises(IndexError, local_maxima, odf, edges) def test_remove_similar_peaks(): vertices = np.array( [ [1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0], [1.1, 1.0, 0.0], [0.0, 2.0, 1.0], [2.0, 1.0, 0.0], [1.0, 0.0, 0.0], ] ) norms = np.sqrt((vertices * vertices).sum(-1)) vertices = vertices / norms[:, None] # Return unique vertices uv = remove_similar_vertices(vertices, 0.01) npt.assert_array_equal(uv, vertices[:6]) # Return vertices with mapping and indices uv, mapping, index = remove_similar_vertices( vertices, 0.01, 
return_mapping=True, return_index=True ) npt.assert_array_equal(uv, vertices[:6]) npt.assert_array_equal(mapping, list(range(6)) + [0]) npt.assert_array_equal(index, range(6)) # Test mapping with different angles uv, mapping = remove_similar_vertices(vertices, 0.01, return_mapping=True) npt.assert_array_equal(uv, vertices[:6]) npt.assert_array_equal(mapping, list(range(6)) + [0]) uv, mapping = remove_similar_vertices(vertices, 30, return_mapping=True) npt.assert_array_equal(uv, vertices[:4]) npt.assert_array_equal(mapping, list(range(4)) + [1, 0, 0]) uv, mapping = remove_similar_vertices(vertices, 60, return_mapping=True) npt.assert_array_equal(uv, vertices[:3]) npt.assert_array_equal(mapping, list(range(3)) + [0, 1, 0, 0]) # Test index with different angles uv, index = remove_similar_vertices(vertices, 0.01, return_index=True) npt.assert_array_equal(uv, vertices[:6]) npt.assert_array_equal(index, range(6)) uv, index = remove_similar_vertices(vertices, 30, return_index=True) npt.assert_array_equal(uv, vertices[:4]) npt.assert_array_equal(index, range(4)) uv, index = remove_similar_vertices(vertices, 60, return_index=True) npt.assert_array_equal(uv, vertices[:3]) npt.assert_array_equal(index, range(3)) def test_search_descending(): a = np.linspace(10.0, 1.0, 10) npt.assert_equal(search_descending(a, 1.0), 1) npt.assert_equal(search_descending(a, 0.89), 2) npt.assert_equal(search_descending(a, 0.79), 3) # Test small array npt.assert_equal(search_descending(a[:1], 1.0), 1) npt.assert_equal(search_descending(a[:1], 0.0), 1) npt.assert_equal(search_descending(a[:1], 0.5), 1) # Test very small array npt.assert_equal(search_descending(a[:0], 1.0), 0) dipy-1.11.0/dipy/reconst/tests/test_qtdmri.py000066400000000000000000000620051476546756600212710ustar00rootroot00000000000000import warnings import numpy as np from numpy.testing import ( assert_, assert_almost_equal, assert_array_almost_equal, assert_equal, assert_raises, ) import pytest import scipy.integrate as integrate from dipy.core.gradients import gradient_table_from_qvals_bvecs from dipy.data import get_gtab_taiwan_dsi, get_sphere from dipy.reconst import mapmri, qtdmri from dipy.reconst.shm import descoteaux07_legacy_msg from dipy.sims.voxel import add_noise, multi_tensor from dipy.testing.decorators import set_random_number_generator needs_cvxpy = pytest.mark.skipif(not qtdmri.have_cvxpy, reason="REQUIRES CVXPY") def generate_gtab4D(number_of_tau_shells=4, delta=0.01): """Generates testing gradient table for 4D qt-dMRI scheme""" gtab = get_gtab_taiwan_dsi() qvals = np.tile(gtab.bvals / 100.0, number_of_tau_shells) bvecs = np.tile(gtab.bvecs, (number_of_tau_shells, 1)) pulse_separation = [] for ps in np.linspace(0.02, 0.05, number_of_tau_shells): pulse_separation = np.append(pulse_separation, np.tile(ps, gtab.bvals.shape[0])) pulse_duration = np.tile(delta, qvals.shape[0]) gtab_4d = gradient_table_from_qvals_bvecs( qvals=qvals, bvecs=bvecs, big_delta=pulse_separation, small_delta=pulse_duration, b0_threshold=0, ) return gtab_4d def generate_signal_crossing(gtab, lambda1, lambda2, lambda3, angle2=60): mevals = np.array(([lambda1, lambda2, lambda3], [lambda1, lambda2, lambda3])) angl = [(0, 0), (angle2, 0)] S, _ = multi_tensor(gtab, mevals, S0=1.0, angles=angl, fractions=[50, 50], snr=None) return S def test_input_parameters(): gtab_4d = generate_gtab4D() # uneven radial order assert_raises(ValueError, qtdmri.QtdmriModel, gtab_4d, radial_order=3) # negative radial order assert_raises(ValueError, qtdmri.QtdmriModel, gtab_4d, radial_order=-1) # 
negative time order assert_raises(ValueError, qtdmri.QtdmriModel, gtab_4d, time_order=-1) # non-bool laplacian_regularization assert_raises( ValueError, qtdmri.QtdmriModel, gtab_4d, laplacian_regularization="test" ) # non-"GCV" string for laplacian_weighting assert_raises( ValueError, qtdmri.QtdmriModel, gtab_4d, laplacian_regularization=True, laplacian_weighting="test", ) # negative laplacian_weighting assert_raises( ValueError, qtdmri.QtdmriModel, gtab_4d, laplacian_regularization=True, laplacian_weighting=-1.0, ) # non-bool l1_regularization assert_raises(ValueError, qtdmri.QtdmriModel, gtab_4d, l1_regularization="test") # non-"CV" string for l1_weighting assert_raises( ValueError, qtdmri.QtdmriModel, gtab_4d, l1_regularization=True, l1_weighting="test", ) # negative l1_weighting is caught assert_raises( ValueError, qtdmri.QtdmriModel, gtab_4d, l1_regularization=True, l1_weighting=-1.0, ) # non-bool cartesian is caught assert_raises(ValueError, qtdmri.QtdmriModel, gtab_4d, cartesian="test") # non-bool anisotropic_scaling is caught assert_raises(ValueError, qtdmri.QtdmriModel, gtab_4d, anisotropic_scaling="test") # non-bool constrain_q0 is caught assert_raises(ValueError, qtdmri.QtdmriModel, gtab_4d, constrain_q0="test") # negative bval_threshold is caught assert_raises(ValueError, qtdmri.QtdmriModel, gtab_4d, bval_threshold=-1) # negative eigenvalue_threshold is caught assert_raises(ValueError, qtdmri.QtdmriModel, gtab_4d, eigenvalue_threshold=-1) error = ValueError if qtdmri.have_cvxpy else ImportError # unavailable cvxpy solver is caught assert_raises( error, qtdmri.QtdmriModel, gtab_4d, laplacian_regularization=True, cvxpy_solver="test", ) # non-normalized non-cartesian l1-regularization is caught assert_raises( error, qtdmri.QtdmriModel, gtab_4d, l1_regularization=True, cartesian=False, normalization=False, ) def test_orthogonality_temporal_basis_functions(): # numerical integration parameters ut = 10 tmin = 0 tmax = 100 int1 = integrate.quad( lambda t: qtdmri.temporal_basis(1, ut, t) * qtdmri.temporal_basis(2, ut, t), tmin, tmax, ) int2 = integrate.quad( lambda t: qtdmri.temporal_basis(2, ut, t) * qtdmri.temporal_basis(3, ut, t), tmin, tmax, ) int3 = integrate.quad( lambda t: qtdmri.temporal_basis(3, ut, t) * qtdmri.temporal_basis(4, ut, t), tmin, tmax, ) int4 = integrate.quad( lambda t: qtdmri.temporal_basis(4, ut, t) * qtdmri.temporal_basis(5, ut, t), tmin, tmax, ) assert_almost_equal(int1, 0.0) assert_almost_equal(int2, 0.0) assert_almost_equal(int3, 0.0) assert_almost_equal(int4, 0.0) def test_normalization_time(): ut = 10 tmin = 0 tmax = 100 int0 = integrate.quad( lambda t: qtdmri.qtdmri_temporal_normalization(ut) ** 2 * qtdmri.temporal_basis(0, ut, t) * qtdmri.temporal_basis(0, ut, t), tmin, tmax, )[0] int1 = integrate.quad( lambda t: qtdmri.qtdmri_temporal_normalization(ut) ** 2 * qtdmri.temporal_basis(1, ut, t) * qtdmri.temporal_basis(1, ut, t), tmin, tmax, )[0] int2 = integrate.quad( lambda t: qtdmri.qtdmri_temporal_normalization(ut) ** 2 * qtdmri.temporal_basis(2, ut, t) * qtdmri.temporal_basis(2, ut, t), tmin, tmax, )[0] assert_almost_equal(int0, 1.0) assert_almost_equal(int1, 1.0) assert_almost_equal(int2, 1.0) def test_anisotropic_isotropic_equivalence(radial_order=4, time_order=2): # generate qt-scheme and arbitrary synthetic crossing data.
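# The crossing simulated below mixes two identical prolate tensors
# (eigenvalues 1.5e-3, 0.3e-3, 0.3e-3, in the usual mm^2/s convention) at the
# default 60 degree angle. The point of the test: the Cartesian and spherical
# qt-dMRI implementations parametrize the same functional basis, so their
# fitted signals, PDFs, Laplacian norms, q-space indices
# (RTOP/RTAP/RTPP/MSD/QIV), and ODFs must all agree to numerical precision.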
gtab_4d = generate_gtab4D() l1, l2, l3 = [0.0015, 0.0003, 0.0003] S = generate_signal_crossing(gtab_4d, l1, l2, l3) # initialize both cartesian and spherical models without any kind of # regularization qtdmri_mod_aniso = qtdmri.QtdmriModel( gtab_4d, radial_order=radial_order, time_order=time_order, cartesian=True, anisotropic_scaling=False, ) qtdmri_mod_iso = qtdmri.QtdmriModel( gtab_4d, radial_order=radial_order, time_order=time_order, cartesian=False, anisotropic_scaling=False, ) # both implementations fit the same signal with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) qtdmri_fit_cart = qtdmri_mod_aniso.fit(S) qtdmri_fit_sphere = qtdmri_mod_iso.fit(S) # same signal fit with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) assert_array_almost_equal( qtdmri_fit_cart.fitted_signal(), qtdmri_fit_sphere.fitted_signal() ) # same PDF reconstruction rt_grid = qtdmri.create_rt_space_grid(5, 20e-3, 5, 0.02, 0.05) pdf_aniso = qtdmri_fit_cart.pdf(rt_grid) with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) pdf_iso = qtdmri_fit_sphere.pdf(rt_grid) assert_array_almost_equal(pdf_aniso / pdf_aniso.max(), pdf_iso / pdf_aniso.max()) # same norm of the Laplacian norm_laplacian_aniso = qtdmri_fit_cart.norm_of_laplacian_signal() norm_laplacian_iso = qtdmri_fit_sphere.norm_of_laplacian_signal() assert_almost_equal( norm_laplacian_aniso / norm_laplacian_aniso, norm_laplacian_iso / norm_laplacian_aniso, ) # all q-space index is the same for arbitrary tau tau = 0.02 assert_almost_equal(qtdmri_fit_cart.rtop(tau), qtdmri_fit_sphere.rtop(tau)) with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) assert_almost_equal(qtdmri_fit_cart.rtap(tau), qtdmri_fit_sphere.rtap(tau)) assert_almost_equal(qtdmri_fit_cart.rtpp(tau), qtdmri_fit_sphere.rtpp(tau)) assert_almost_equal(qtdmri_fit_cart.msd(tau), qtdmri_fit_sphere.msd(tau)) assert_almost_equal(qtdmri_fit_cart.qiv(tau), qtdmri_fit_sphere.qiv(tau)) # ODF estimation is the same sphere = get_sphere() with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) assert_array_almost_equal( qtdmri_fit_cart.odf(sphere, tau, s=0), qtdmri_fit_sphere.odf(sphere, tau, s=0), ) def test_cartesian_normalization(radial_order=4, time_order=2): gtab_4d = generate_gtab4D() l1, l2, l3 = [0.0015, 0.0003, 0.0003] S = generate_signal_crossing(gtab_4d, l1, l2, l3) qtdmri_mod_aniso = qtdmri.QtdmriModel( gtab_4d, radial_order=radial_order, time_order=time_order, cartesian=True, normalization=False, ) qtdmri_mod_aniso_norm = qtdmri.QtdmriModel( gtab_4d, radial_order=radial_order, time_order=time_order, cartesian=True, normalization=True, ) qtdmri_fit_aniso = qtdmri_mod_aniso.fit(S) qtdmri_fit_aniso_norm = qtdmri_mod_aniso_norm.fit(S) assert_array_almost_equal( qtdmri_fit_aniso.fitted_signal(), qtdmri_fit_aniso_norm.fitted_signal() ) rt_grid = qtdmri.create_rt_space_grid(5, 20e-3, 5, 0.02, 0.05) pdf_aniso = qtdmri_fit_aniso.pdf(rt_grid) pdf_aniso_norm = qtdmri_fit_aniso_norm.pdf(rt_grid) assert_array_almost_equal( pdf_aniso / pdf_aniso.max(), pdf_aniso_norm / pdf_aniso.max() ) norm_laplacian = qtdmri_fit_aniso.norm_of_laplacian_signal() norm_laplacian_norm = 
qtdmri_fit_aniso_norm.norm_of_laplacian_signal() assert_array_almost_equal( norm_laplacian / norm_laplacian, norm_laplacian_norm / norm_laplacian ) def test_spherical_normalization(radial_order=4, time_order=2): gtab_4d = generate_gtab4D() l1, l2, l3 = [0.0015, 0.0003, 0.0003] S = generate_signal_crossing(gtab_4d, l1, l2, l3) qtdmri_mod_aniso = qtdmri.QtdmriModel( gtab_4d, radial_order=radial_order, time_order=time_order, cartesian=False, normalization=False, ) qtdmri_mod_aniso_norm = qtdmri.QtdmriModel( gtab_4d, radial_order=radial_order, time_order=time_order, cartesian=False, normalization=True, ) with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) qtdmri_fit = qtdmri_mod_aniso.fit(S) qtdmri_fit_norm = qtdmri_mod_aniso_norm.fit(S) assert_array_almost_equal( qtdmri_fit.fitted_signal(), qtdmri_fit_norm.fitted_signal() ) rt_grid = qtdmri.create_rt_space_grid(5, 20e-3, 5, 0.02, 0.05) with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) pdf = qtdmri_fit.pdf(rt_grid) pdf_norm = qtdmri_fit_norm.pdf(rt_grid) assert_array_almost_equal(pdf / pdf.max(), pdf_norm / pdf.max()) norm_laplacian = qtdmri_fit.norm_of_laplacian_signal() norm_laplacian_norm = qtdmri_fit_norm.norm_of_laplacian_signal() assert_array_almost_equal( norm_laplacian / norm_laplacian, norm_laplacian_norm / norm_laplacian ) def test_anisotropic_reduced_MSE(radial_order=0, time_order=0): gtab_4d = generate_gtab4D() l1, l2, l3 = [0.0015, 0.0003, 0.0003] S = generate_signal_crossing(gtab_4d, l1, l2, l3) qtdmri_mod_aniso = qtdmri.QtdmriModel( gtab_4d, radial_order=radial_order, time_order=time_order, cartesian=True, anisotropic_scaling=True, ) qtdmri_mod_iso = qtdmri.QtdmriModel( gtab_4d, radial_order=radial_order, time_order=time_order, cartesian=True, anisotropic_scaling=False, ) qtdmri_fit_aniso = qtdmri_mod_aniso.fit(S) qtdmri_fit_iso = qtdmri_mod_iso.fit(S) mse_aniso = np.mean((S - qtdmri_fit_aniso.fitted_signal()) ** 2) mse_iso = np.mean((S - qtdmri_fit_iso.fitted_signal()) ** 2) assert_(mse_aniso < mse_iso) def test_number_of_coefficients(radial_order=4, time_order=2): gtab_4d = generate_gtab4D() l1, l2, l3 = [0.0015, 0.0003, 0.0003] S = generate_signal_crossing(gtab_4d, l1, l2, l3) qtdmri_mod = qtdmri.QtdmriModel( gtab_4d, radial_order=radial_order, time_order=time_order ) qtdmri_fit = qtdmri_mod.fit(S) number_of_coef_model = qtdmri_fit._qtdmri_coef.shape[0] number_of_coef_analytic = qtdmri.qtdmri_number_of_coefficients( radial_order, time_order ) assert_equal(number_of_coef_model, number_of_coef_analytic) def test_calling_cartesian_laplacian_with_precomputed_matrices( radial_order=4, time_order=2, ut=2e-3, us=np.r_[1e-3, 2e-3, 3e-3] ): ind_mat = qtdmri.qtdmri_index_matrix(radial_order, time_order) part4_reg_mat_tau = qtdmri.part4_reg_matrix_tau(ind_mat, 1.0) part23_reg_mat_tau = qtdmri.part23_reg_matrix_tau(ind_mat, 1.0) part1_reg_mat_tau = qtdmri.part1_reg_matrix_tau(ind_mat, 1.0) S_mat, T_mat, U_mat = mapmri.mapmri_STU_reg_matrices(radial_order) laplacian_matrix_precomputed = qtdmri.qtdmri_laplacian_reg_matrix( ind_mat, us, ut, S_mat=S_mat, T_mat=T_mat, U_mat=U_mat, part1_ut_precomp=part1_reg_mat_tau, part23_ut_precomp=part23_reg_mat_tau, part4_ut_precomp=part4_reg_mat_tau, ) laplacian_matrix_regular = qtdmri.qtdmri_laplacian_reg_matrix(ind_mat, us, ut) assert_array_almost_equal(laplacian_matrix_precomputed, laplacian_matrix_regular) def 
test_calling_spherical_laplacian_with_precomputed_matrices( radial_order=4, time_order=2, ut=2e-3, us=np.r_[2e-3, 2e-3, 2e-3] ): ind_mat = qtdmri.qtdmri_isotropic_index_matrix(radial_order, time_order) part4_reg_mat_tau = qtdmri.part4_reg_matrix_tau(ind_mat, 1.0) part23_reg_mat_tau = qtdmri.part23_reg_matrix_tau(ind_mat, 1.0) part1_reg_mat_tau = qtdmri.part1_reg_matrix_tau(ind_mat, 1.0) part1_uq_iso_precomp = ( mapmri.mapmri_isotropic_laplacian_reg_matrix_from_index_matrix( ind_mat[:, :3], 1.0 ) ) laplacian_matrix_precomp = qtdmri.qtdmri_isotropic_laplacian_reg_matrix( ind_mat, us, ut, part1_uq_iso_precomp=part1_uq_iso_precomp, part1_ut_precomp=part1_reg_mat_tau, part23_ut_precomp=part23_reg_mat_tau, part4_ut_precomp=part4_reg_mat_tau, ) laplacian_matrix_regular = qtdmri.qtdmri_isotropic_laplacian_reg_matrix( ind_mat, us, ut ) assert_array_almost_equal(laplacian_matrix_precomp, laplacian_matrix_regular) @needs_cvxpy def test_q0_constraint_and_unity_of_ODFs(radial_order=6, time_order=2): gtab_4d = generate_gtab4D() tau = gtab_4d.tau l1, l2, l3 = [0.0015, 0.0003, 0.0003] S = generate_signal_crossing(gtab_4d, l1, l2, l3) # first test without regularization qtdmri_mod_ls = qtdmri.QtdmriModel( gtab_4d, radial_order=radial_order, time_order=time_order ) qtdmri_fit_ls = qtdmri_mod_ls.fit(S) fitted_signal = qtdmri_fit_ls.fitted_signal() # only first tau_point is normalized with least squares. E_q0_first_tau = fitted_signal[ np.all([tau == tau.min(), gtab_4d.b0s_mask], axis=0) ].item() assert_almost_equal(float(E_q0_first_tau), 1.0) # now with cvxpy regularization cartesian qtdmri_mod_lap = qtdmri.QtdmriModel( gtab_4d, radial_order=radial_order, time_order=time_order, laplacian_regularization=True, laplacian_weighting=1e-4, ) qtdmri_fit_lap = qtdmri_mod_lap.fit(S) fitted_signal = qtdmri_fit_lap.fitted_signal() E_q0_first_tau = fitted_signal[ np.all([tau == tau.min(), gtab_4d.b0s_mask], axis=0) ].item() E_q0_last_tau = fitted_signal[ np.all([tau == tau.max(), gtab_4d.b0s_mask], axis=0) ].item() assert_almost_equal(E_q0_first_tau, 1.0) assert_almost_equal(E_q0_last_tau, 1.0) # check if odf in spherical harmonics for cartesian raises an error try: qtdmri_fit_lap.odf_sh(tau=tau.max()) assert_equal(True, False) except ValueError: print("missing spherical harmonics cartesian ODF caught.") # now with cvxpy regularization spherical qtdmri_mod_lap = qtdmri.QtdmriModel( gtab_4d, radial_order=radial_order, time_order=time_order, laplacian_regularization=True, laplacian_weighting=1e-4, cartesian=False, ) with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) qtdmri_fit_lap = qtdmri_mod_lap.fit(S) fitted_signal = qtdmri_fit_lap.fitted_signal() E_q0_first_tau = fitted_signal[ np.all([tau == tau.min(), gtab_4d.b0s_mask], axis=0) ].item() E_q0_last_tau = fitted_signal[ np.all([tau == tau.max(), gtab_4d.b0s_mask], axis=0) ].item() assert_almost_equal(float(E_q0_first_tau), 1.0) assert_almost_equal(float(E_q0_last_tau), 1.0) # test if marginal ODF integral in sh is equal to one # Integral of Y00 spherical harmonic is 1 / (2 * np.sqrt(np.pi)) # division with this results in normalization odf_sh = qtdmri_fit_lap.odf_sh(s=0, tau=tau.max()) odf_integral = odf_sh[0] * (2 * np.sqrt(np.pi)) assert_almost_equal(odf_integral, 1.0) @needs_cvxpy def test_laplacian_reduces_laplacian_norm(radial_order=4, time_order=2): gtab_4d = generate_gtab4D() l1, l2, l3 = [0.0015, 0.0003, 0.0003] S = generate_signal_crossing(gtab_4d, l1, l2, l3) 
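# Laplacian regularization penalizes the norm of the Laplacian of the fitted
# signal, so raising the weight (1e-4 vs 0.0 in the two models below) must
# shrink norm_of_laplacian_signal(). Quick sanity check on the synthetic input
# (a sketch; noise-free multi_tensor signals are expected finite and positive):
assert_(np.all(np.isfinite(S)) and np.all(S > 0))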
qtdmri_mod_no_laplacian = qtdmri.QtdmriModel( gtab_4d, radial_order=radial_order, time_order=time_order, laplacian_regularization=True, laplacian_weighting=0.0, ) qtdmri_mod_laplacian = qtdmri.QtdmriModel( gtab_4d, radial_order=radial_order, time_order=time_order, laplacian_regularization=True, laplacian_weighting=1e-4, ) qtdmri_fit_no_laplacian = qtdmri_mod_no_laplacian.fit(S) qtdmri_fit_laplacian = qtdmri_mod_laplacian.fit(S) laplacian_norm_no_reg = qtdmri_fit_no_laplacian.norm_of_laplacian_signal() laplacian_norm_reg = qtdmri_fit_laplacian.norm_of_laplacian_signal() assert_(laplacian_norm_no_reg > laplacian_norm_reg) @needs_cvxpy def test_spherical_laplacian_reduces_laplacian_norm(radial_order=4, time_order=2): gtab_4d = generate_gtab4D() l1, l2, l3 = [0.0015, 0.0003, 0.0003] S = generate_signal_crossing(gtab_4d, l1, l2, l3) qtdmri_mod_no_laplacian = qtdmri.QtdmriModel( gtab_4d, radial_order=radial_order, time_order=time_order, cartesian=False, laplacian_regularization=True, laplacian_weighting=0.0, ) qtdmri_mod_laplacian = qtdmri.QtdmriModel( gtab_4d, radial_order=radial_order, time_order=time_order, cartesian=False, laplacian_regularization=True, laplacian_weighting=1e-4, ) with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) qtdmri_fit_no_laplacian = qtdmri_mod_no_laplacian.fit(S) qtdmri_fit_laplacian = qtdmri_mod_laplacian.fit(S) laplacian_norm_no_reg = qtdmri_fit_no_laplacian.norm_of_laplacian_signal() laplacian_norm_reg = qtdmri_fit_laplacian.norm_of_laplacian_signal() assert_(laplacian_norm_no_reg > laplacian_norm_reg) @needs_cvxpy @set_random_number_generator(1234) def test_laplacian_GCV_higher_weight_with_noise(radial_order=4, time_order=2, rng=None): gtab_4d = generate_gtab4D() l1, l2, l3 = [0.0015, 0.0003, 0.0003] S = generate_signal_crossing(gtab_4d, l1, l2, l3) S_noise = add_noise(S, S0=1.0, snr=10, rng=rng) qtdmri_mod_laplacian_GCV = qtdmri.QtdmriModel( gtab_4d, radial_order=radial_order, time_order=time_order, laplacian_regularization=True, laplacian_weighting="GCV", ) qtdmri_fit_no_noise = qtdmri_mod_laplacian_GCV.fit(S) qtdmri_fit_noise = qtdmri_mod_laplacian_GCV.fit(S_noise) assert_(qtdmri_fit_noise.lopt > qtdmri_fit_no_noise.lopt) @needs_cvxpy def test_l1_increases_sparsity(radial_order=4, time_order=2): gtab_4d = generate_gtab4D() l1, l2, l3 = [0.0015, 0.0003, 0.0003] S = generate_signal_crossing(gtab_4d, l1, l2, l3) qtdmri_mod_no_l1 = qtdmri.QtdmriModel( gtab_4d, radial_order=radial_order, time_order=time_order, l1_regularization=True, l1_weighting=0.0, ) qtdmri_mod_l1 = qtdmri.QtdmriModel( gtab_4d, radial_order=radial_order, time_order=time_order, l1_regularization=True, l1_weighting=0.1, ) qtdmri_fit_no_l1 = qtdmri_mod_no_l1.fit(S) qtdmri_fit_l1 = qtdmri_mod_l1.fit(S) sparsity_abs_no_reg = qtdmri_fit_no_l1.sparsity_abs() sparsity_abs_reg = qtdmri_fit_l1.sparsity_abs() assert_(sparsity_abs_no_reg > sparsity_abs_reg) sparsity_density_no_reg = qtdmri_fit_no_l1.sparsity_density() sparsity_density_reg = qtdmri_fit_l1.sparsity_density() assert_(sparsity_density_no_reg > sparsity_density_reg) @needs_cvxpy def test_spherical_l1_increases_sparsity(radial_order=4, time_order=2): gtab_4d = generate_gtab4D() l1, l2, l3 = [0.0015, 0.0003, 0.0003] S = generate_signal_crossing(gtab_4d, l1, l2, l3) qtdmri_mod_no_l1 = qtdmri.QtdmriModel( gtab_4d, radial_order=radial_order, time_order=time_order, l1_regularization=True, cartesian=False, normalization=True, l1_weighting=0.0, constrain_q0=False, ) 
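# The companion model below is identical except that l1_weighting is 0.1
# instead of 0.0; the l1 penalty drives small coefficients to zero, so both
# sparsity_abs() and sparsity_density() are expected to drop relative to the
# unregularized fit, which is what the assertions at the end verify.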
qtdmri_mod_l1 = qtdmri.QtdmriModel( gtab_4d, radial_order=radial_order, time_order=time_order, l1_regularization=True, cartesian=False, normalization=True, l1_weighting=0.1, constrain_q0=False, ) with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) warnings.filterwarnings( "ignore", message="cvxpy optimization resulted in .*", category=UserWarning ) warnings.filterwarnings( "ignore", message="Solution may be inaccurate..*", category=UserWarning ) qtdmri_fit_no_l1 = qtdmri_mod_no_l1.fit(S) qtdmri_fit_l1 = qtdmri_mod_l1.fit(S) sparsity_abs_no_reg = qtdmri_fit_no_l1.sparsity_abs() sparsity_abs_reg = qtdmri_fit_l1.sparsity_abs() assert_equal(sparsity_abs_no_reg > sparsity_abs_reg, True) sparsity_density_no_reg = qtdmri_fit_no_l1.sparsity_density() sparsity_density_reg = qtdmri_fit_l1.sparsity_density() assert_(sparsity_density_no_reg > sparsity_density_reg) @needs_cvxpy @set_random_number_generator(1234) def test_l1_CV(radial_order=4, time_order=2, rng=None): gtab_4d = generate_gtab4D() l1, l2, l3 = [0.0015, 0.0003, 0.0003] S = generate_signal_crossing(gtab_4d, l1, l2, l3) S_noise = add_noise(S, S0=1.0, snr=10, rng=rng) qtdmri_mod_l1_cv = qtdmri.QtdmriModel( gtab_4d, radial_order=radial_order, time_order=time_order, l1_regularization=True, l1_weighting="CV", ) qtdmri_fit_noise = qtdmri_mod_l1_cv.fit(S_noise) assert_(qtdmri_fit_noise.alpha >= 0) @needs_cvxpy @set_random_number_generator(1234) def test_elastic_GCV_CV(radial_order=4, time_order=2, rng=None): gtab_4d = generate_gtab4D() l1, l2, l3 = [0.0015, 0.0003, 0.0003] S = generate_signal_crossing(gtab_4d, l1, l2, l3) S_noise = add_noise(S, S0=1.0, snr=10, rng=rng) qtdmri_mod_elastic = qtdmri.QtdmriModel( gtab_4d, radial_order=radial_order, time_order=time_order, l1_regularization=True, l1_weighting="CV", laplacian_regularization=True, laplacian_weighting="GCV", ) qtdmri_fit_noise = qtdmri_mod_elastic.fit(S_noise) assert_(qtdmri_fit_noise.lopt >= 0) assert_(qtdmri_fit_noise.alpha >= 0) @pytest.mark.skipif(not qtdmri.have_plt, reason="Requires Matplotlib") def test_visualise_gradient_table_G_Delta_rainbow(): gtab_4d = generate_gtab4D() qtdmri.visualise_gradient_table_G_Delta_rainbow(gtab_4d) gtab_4d.small_delta[4] += 0.001 # so now the gtab has multiple small_delta assert_raises(ValueError, qtdmri.visualise_gradient_table_G_Delta_rainbow, gtab_4d) dipy-1.11.0/dipy/reconst/tests/test_qti.py000066400000000000000000000453471476546756600206000ustar00rootroot00000000000000"""Tests for dipy.reconst.qti module""" import numpy as np import numpy.testing as npt from dipy.core.gradients import gradient_table from dipy.core.sphere import HemiSphere, disperse_charges from dipy.reconst.dti import fractional_anisotropy import dipy.reconst.qti as qti from dipy.sims.voxel import vec2vec_rotmat from dipy.testing.decorators import set_random_number_generator from dipy.utils.optpkg import optional_package cp, have_cvxpy, _ = optional_package("cvxpy", min_version="1.4.1") def test_from_3x3_to_6x1(): """Test conversion to Voigt notation.""" V = np.arange(1, 7)[:, np.newaxis].astype(float) T = np.array( ( [1, 4.24264069, 3.53553391], [4.24264069, 2, 2.82842712], [3.53553391, 2.82842712, 3], ) ) npt.assert_array_almost_equal(qti.from_3x3_to_6x1(T), V) npt.assert_array_almost_equal(qti.from_3x3_to_6x1(qti.from_6x1_to_3x3(V)), V) npt.assert_raises(ValueError, qti.from_3x3_to_6x1, T[0:1]) npt.assert_warns(Warning, qti.from_3x3_to_6x1, T + np.arange(3)) def test_from_6x1_to_3x3(): 
"""Test conversion from Voigt notation.""" V = np.arange(1, 7)[:, np.newaxis].astype(float) T = np.array( ( [1, 4.24264069, 3.53553391], [4.24264069, 2, 2.82842712], [3.53553391, 2.82842712, 3], ) ) npt.assert_array_almost_equal(qti.from_6x1_to_3x3(V), T) npt.assert_array_almost_equal(qti.from_6x1_to_3x3(qti.from_3x3_to_6x1(T)), T) npt.assert_raises(ValueError, qti.from_6x1_to_3x3, T) def test_from_6x6_to_21x1(): """Test conversion to Voigt notation.""" V = np.arange(1, 22)[:, np.newaxis].astype(float) T = np.array( ( [1, 4.24264069, 3.53553391, 4.94974747, 5.65685425, 6.36396103], [4.24264069, 2, 2.82842712, 7.07106781, 7.77817459, 8.48528137], [3.53553391, 2.82842712, 3, 9.19238816, 9.89949494, 10.60660172], [4.94974747, 7.07106781, 9.19238816, 16, 13.43502884, 14.8492424], [5.65685425, 7.77817459, 9.89949494, 13.43502884, 17, 14.14213562], [6.36396103, 8.48528137, 10.60660172, 14.8492424, 14.14213562, 18], ) ) npt.assert_array_almost_equal(qti.from_6x6_to_21x1(T), V) npt.assert_array_almost_equal(qti.from_6x6_to_21x1(qti.from_21x1_to_6x6(V)), V) npt.assert_raises(ValueError, qti.from_6x6_to_21x1, T[0:1]) npt.assert_warns(Warning, qti.from_6x6_to_21x1, T + np.arange(6)) def test_from_21x1_to_6x6(): """Test conversion from Voigt notation.""" V = np.arange(1, 22)[:, np.newaxis].astype(float) T = np.array( ( [1, 4.24264069, 3.53553391, 4.94974747, 5.65685425, 6.36396103], [4.24264069, 2, 2.82842712, 7.07106781, 7.77817459, 8.48528137], [3.53553391, 2.82842712, 3, 9.19238816, 9.89949494, 10.60660172], [4.94974747, 7.07106781, 9.19238816, 16, 13.43502884, 14.8492424], [5.65685425, 7.77817459, 9.89949494, 13.43502884, 17, 14.14213562], [6.36396103, 8.48528137, 10.60660172, 14.8492424, 14.14213562, 18], ) ) npt.assert_array_almost_equal(qti.from_21x1_to_6x6(V), T) npt.assert_array_almost_equal(qti.from_21x1_to_6x6(qti.from_6x6_to_21x1(T)), T) npt.assert_raises(ValueError, qti.from_21x1_to_6x6, T) def test_cvxpy_1x6_to_3x3(): """Test conversion from Voigt notation.""" if have_cvxpy: V = np.arange(1, 7)[:, np.newaxis].astype(float) T = np.array( ( [1, 4.24264069, 3.53553391], [4.24264069, 2, 2.82842712], [3.53553391, 2.82842712, 3], ) ) npt.assert_array_almost_equal(qti.cvxpy_1x6_to_3x3(V).value, T) npt.assert_array_almost_equal( qti.cvxpy_1x6_to_3x3(qti.from_3x3_to_6x1(T)).value, T ) def test_cvxpy_1x21_to_6x6(): """Test conversion from Voigt notation.""" if have_cvxpy: V = np.arange(1, 22)[:, np.newaxis].astype(float) T = np.array( ( [1, 4.24264069, 3.53553391, 4.94974747, 5.65685425, 6.36396103], [4.24264069, 2, 2.82842712, 7.07106781, 7.77817459, 8.48528137], [3.53553391, 2.82842712, 3, 9.19238816, 9.89949494, 10.60660172], [4.94974747, 7.07106781, 9.19238816, 16, 13.43502884, 14.8492424], [5.65685425, 7.77817459, 9.89949494, 13.43502884, 17, 14.14213562], [6.36396103, 8.48528137, 10.60660172, 14.8492424, 14.14213562, 18], ) ) npt.assert_array_almost_equal(qti.cvxpy_1x21_to_6x6(V).value, T) npt.assert_array_almost_equal( qti.cvxpy_1x21_to_6x6(qti.from_6x6_to_21x1(T)).value, T ) def test_helper_tensors(): """Test the helper tensors.""" npt.assert_array_equal(qti.e_iso, np.eye(3) / 3) npt.assert_array_equal(qti.E_iso, np.eye(6) / 3) npt.assert_array_equal( qti.E_bulk, np.matmul(qti.from_3x3_to_6x1(qti.e_iso), qti.from_3x3_to_6x1(qti.e_iso).T), ) npt.assert_array_equal(qti.E_shear, qti.E_iso - qti.E_bulk) npt.assert_array_equal(qti.E_tsym, qti.E_bulk + 0.4 * qti.E_shear) def _anisotropic_DTD(): """Return a distribution of six fully anisotropic diffusion tensors whose directions are uniformly 
distributed around the surface of a sphere.""" evals = np.array([1, 0, 0]) phi = (1 + np.sqrt(5)) / 2 directions = np.array( [ [0, 1, phi], [0, 1, -phi], [1, phi, 0], [1, -phi, 0], [phi, 0, 1], [phi, 0, -1], ] ) / np.linalg.norm([0, 1, phi]) DTD = np.zeros((6, 3, 3)) for i in range(6): R = vec2vec_rotmat(np.array([1, 0, 0]), directions[i]) DTD[i] = np.matmul(R, np.matmul(np.eye(3) * evals, R.T)) return DTD def _isotropic_DTD(): """Return a distribution of six isotropic diffusion tensors with varying sizes.""" evals = np.linspace(0.1, 3, 6) DTD = np.array([np.eye(3) * i for i in evals]) return DTD def test_dtd_covariance(): """Test diffusion tensor distribution covariance calculation.""" # Input validation npt.assert_raises(ValueError, qti.dtd_covariance, np.arange(2)) npt.assert_raises(ValueError, qti.dtd_covariance, np.zeros((1, 1, 1))) # Covariance of isotropic tensors (Figure 1 in Westin's paper) DTD = _isotropic_DTD() C = np.zeros((6, 6)) C[0:3, 0:3] = 0.98116667 npt.assert_almost_equal(qti.dtd_covariance(DTD), C) # Covariance of anisotropic tensors (Figure 1 in Westin's paper) DTD = _anisotropic_DTD() C = np.eye(6) * 2 / 15 C[0:3, 0:3] = np.array( [ [4 / 45, -2 / 45, -2 / 45], [-2 / 45, 4 / 45, -2 / 45], [-2 / 45, -2 / 45, 4 / 45], ] ) npt.assert_almost_equal(qti.dtd_covariance(DTD), C) def test_qti_signal(): """Test QTI signal generation.""" # Input validation bvals = np.ones(6) phi = (1 + np.sqrt(5)) / 2 bvecs = np.array( [ [0, 1, phi], [0, 1, -phi], [1, phi, 0], [1, -phi, 0], [phi, 0, 1], [phi, 0, -1], ] ) / np.linalg.norm([0, 1, phi]) gtab = gradient_table(bvals, bvecs=bvecs) npt.assert_raises(ValueError, qti.qti_signal, gtab, np.eye(3), np.eye(6)) gtab = gradient_table(bvals, bvecs=bvecs, btens="LTE") npt.assert_raises(ValueError, qti.qti_signal, gtab, np.eye(2), np.eye(6)) npt.assert_raises(ValueError, qti.qti_signal, gtab, np.eye(3), np.eye(5)) npt.assert_raises( ValueError, qti.qti_signal, gtab, np.stack((np.eye(3), np.eye(3))), np.eye(5) ) npt.assert_raises( ValueError, qti.qti_signal, gtab, np.eye(3)[np.newaxis, :], np.eye(6) ) npt.assert_raises( ValueError, qti.qti_signal, gtab, np.eye(3), np.eye(6), S0=np.ones(2) ) qti.qti_signal( gradient_table(bvals, bvecs=bvecs, btens="LTE"), np.zeros((5, 6)), np.zeros((5, 21)), ) # Isotropic diffusion and no 2nd order effects D = np.eye(3) C = np.zeros((6, 6)) npt.assert_almost_equal( qti.qti_signal(gradient_table(bvals, bvecs=bvecs, btens="LTE"), D, C), np.ones(6) * np.exp(-1), ) npt.assert_almost_equal( qti.qti_signal(gradient_table(bvals, bvecs=bvecs, btens="LTE"), D, C), qti.qti_signal(gradient_table(bvals, bvecs=bvecs, btens="PTE"), D, C), ) npt.assert_almost_equal( qti.qti_signal(gradient_table(bvals, bvecs=bvecs, btens="LTE"), D, C), qti.qti_signal(gradient_table(bvals, bvecs=bvecs, btens="STE"), D, C), ) # Anisotropic sticks aligned with the bvecs DTD = _anisotropic_DTD() D = np.mean(DTD, axis=0) C = qti.dtd_covariance(DTD) npt.assert_almost_equal( qti.qti_signal(gradient_table(bvals, bvecs=bvecs, btens="LTE"), D, C), np.ones(6) * 0.7490954, ) npt.assert_almost_equal( qti.qti_signal(gradient_table(bvals, bvecs=bvecs, btens="PTE"), D, C), np.ones(6) * 0.72453716, ) npt.assert_almost_equal( qti.qti_signal(gradient_table(bvals, bvecs=bvecs, btens="STE"), D, C), np.ones(6) * 0.71653131, ) def test_design_matrix(): """Test QTI design matrix calculation.""" btens = np.array([np.eye(3, 3) for i in range(3)]) btens[0, 1, 1] = 0 btens[0, 2, 2] = 0 btens[1, 0, 0] = 0 X = qti.design_matrix(btens) npt.assert_almost_equal( X, np.array( [ 
[1.0, 1.0, 1.0], [-1.0, -0.0, -1.0], [-0.0, -1.0, -1.0], [-0.0, -1.0, -1.0], [-0.0, -0.0, -0.0], [-0.0, -0.0, -0.0], [-0.0, -0.0, -0.0], [0.5, 0.0, 0.5], [0.0, 0.5, 0.5], [0.0, 0.5, 0.5], [0.0, 0.70710678, 0.70710678], [0.0, 0.0, 0.70710678], [0.0, 0.0, 0.70710678], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0], ] ).T, ) def _qti_gtab(rng): """Return a gradient table with b0, 2 shells, 30 directions, and linear and planar tensor encoding for fitting QTI.""" n_dir = 30 hsph_initial = HemiSphere( theta=np.pi * rng.random(n_dir), phi=2 * np.pi * rng.random(n_dir) ) hsph_updated, _ = disperse_charges(hsph_initial, 100) directions = hsph_updated.vertices bvecs = np.vstack([np.zeros(3)] + [directions for _ in range(4)]) bvals = np.concatenate( ( np.zeros(1), np.ones(n_dir), np.ones(n_dir) * 2, np.ones(n_dir), np.ones(n_dir) * 2, ) ) btens = np.array( ["LTE" for i in range(1 + n_dir * 2)] + ["PTE" for i in range(n_dir * 2)] ) gtab = gradient_table(bvals, bvecs=bvecs, btens=btens) return gtab @set_random_number_generator(123) def test_ls_sdp_fits(rng): """Test ordinary and weighted least squares and semidefinite programming QTI fits by comparing the estimated parameters to the ground-truth values.""" gtab = _qti_gtab(rng) X = qti.design_matrix(gtab.btens) DTDs = [ _anisotropic_DTD(), _isotropic_DTD(), np.concatenate((_anisotropic_DTD(), _isotropic_DTD())), ] for DTD in DTDs: D = np.mean(DTD, axis=0) C = qti.dtd_covariance(DTD) params = np.concatenate( ( np.log(1)[np.newaxis, np.newaxis], qti.from_3x3_to_6x1(D), qti.from_6x6_to_21x1(C), ) ).T data = qti.qti_signal(gtab, D, C)[np.newaxis, :] mask = np.ones(1).astype(bool) npt.assert_almost_equal(qti._ols_fit(data, mask, X), params) npt.assert_almost_equal(qti._wls_fit(data, mask, X), params) data = np.vstack((data, data)) mask = np.ones(2).astype(bool) params = np.vstack((params, params)) npt.assert_almost_equal(qti._ols_fit(data, mask, X, step=1), params) npt.assert_almost_equal(qti._wls_fit(data, mask, X, step=1), params) if have_cvxpy: npt.assert_almost_equal( qti._sdpdc_fit(data, mask, X, cvxpy_solver="SCS"), params, decimal=1 ) @set_random_number_generator(123) def test_qti_model(rng): """Test the QTI model class.""" # Input validation gtab = gradient_table(np.ones(1), bvecs=np.array([[1, 0, 0]])) npt.assert_raises(ValueError, qti.QtiModel, gtab) gtab = gradient_table(np.ones(1), bvecs=np.array([[1, 0, 0]]), btens="LTE") npt.assert_warns(UserWarning, qti.QtiModel, gtab) npt.assert_raises(ValueError, qti.QtiModel, _qti_gtab(rng), fit_method="non-linear") # Design matrix calculation gtab = _qti_gtab(rng) qtimodel = qti.QtiModel(gtab) npt.assert_almost_equal(qtimodel.X, qti.design_matrix(gtab.btens)) @set_random_number_generator(4321) def test_qti_fit(rng): """Test the QTI fit class.""" # Generate a diffusion tensor distribution DTD = np.concatenate( ( _isotropic_DTD(), _anisotropic_DTD(), np.array([[[3, 0, 0], [0, 0, 0], [0, 0, 0]]]), ) ) # Calculate the ground-truth parameter values S0 = 1000 D = np.mean(DTD, axis=0) C = qti.dtd_covariance(DTD) params = np.concatenate( ( np.log(S0)[np.newaxis, np.newaxis], qti.from_3x3_to_6x1(D), qti.from_6x6_to_21x1(C), ) ).T evals, evecs = np.linalg.eig(DTD) avg_eval_var = np.mean(np.var(evals, axis=1)) md = np.mean(evals) fa = fractional_anisotropy(np.linalg.eig(D)[0]) v_md = np.var(np.mean(evals, 
axis=1)) v_shear = avg_eval_var - np.var(np.linalg.eig(D)[0]) v_iso = v_md + v_shear d_sq = qti.from_3x3_to_6x1(D) @ qti.from_3x3_to_6x1(D).T mean_d_sq = np.mean( np.matmul( qti.from_3x3_to_6x1(DTD), np.swapaxes(qti.from_3x3_to_6x1(DTD), -2, -1) ), axis=0, ) c_md = v_md / np.mean(np.mean(evals, axis=1) ** 2) c_m = fa**2 c_mu = 1.5 * avg_eval_var / np.mean(evals**2) ufa = np.sqrt(c_mu) c_c = c_m / c_mu k_bulk = ( 3 * np.matmul( np.swapaxes(qti.from_6x6_to_21x1(C), -1, -2), qti.from_6x6_to_21x1(qti.E_bulk), ) / np.matmul( np.swapaxes(qti.from_6x6_to_21x1(d_sq), -1, -2), qti.from_6x6_to_21x1(qti.E_bulk), ) )[0, 0] k_shear = ( 6 / 5 * np.matmul( np.swapaxes(qti.from_6x6_to_21x1(C), -1, -2), qti.from_6x6_to_21x1(qti.E_shear), ) / np.matmul( np.swapaxes(qti.from_6x6_to_21x1(d_sq), -1, -2), qti.from_6x6_to_21x1(qti.E_bulk), ) )[0, 0] mk = k_bulk + k_shear k_mu = ( 6 / 5 * np.matmul( np.swapaxes(qti.from_6x6_to_21x1(mean_d_sq), -1, -2), qti.from_6x6_to_21x1(qti.E_shear), ) / np.matmul( np.swapaxes(qti.from_6x6_to_21x1(d_sq), -1, -2), qti.from_6x6_to_21x1(qti.E_bulk), ) )[0, 0] # Fit QTI gtab = _qti_gtab(rng) if have_cvxpy: for fit_method in ["OLS", "WLS", "SDPdc"]: qtimodel = qti.QtiModel(gtab, fit_method=fit_method) data = qtimodel.predict(params) npt.assert_raises(ValueError, qtimodel.fit, data, mask=np.ones(2)) npt.assert_raises(ValueError, qtimodel.fit, data, mask=np.ones(data.shape)) for mask in [None, np.ones(data.shape[0:-1])]: qtifit = qtimodel.fit(data, mask=mask) npt.assert_raises( ValueError, qtifit.predict, gradient_table(np.zeros(3), bvecs=np.zeros((3, 3))), ) npt.assert_almost_equal(qtifit.predict(gtab), data, decimal=1) npt.assert_almost_equal(qtifit.S0_hat, S0, decimal=2) npt.assert_almost_equal(qtifit.md, md, decimal=2) npt.assert_almost_equal(qtifit.v_md, v_md, decimal=2) npt.assert_almost_equal(qtifit.v_shear, v_shear, decimal=2) npt.assert_almost_equal(qtifit.v_iso, v_iso, decimal=2) npt.assert_almost_equal(qtifit.c_md, c_md, decimal=2) npt.assert_almost_equal(qtifit.c_mu, c_mu, decimal=2) npt.assert_almost_equal(qtifit.ufa, ufa, decimal=2) npt.assert_almost_equal(qtifit.c_m, c_m, decimal=2) npt.assert_almost_equal(qtifit.fa, fa, decimal=2) npt.assert_almost_equal(qtifit.c_c, c_c, decimal=2) npt.assert_almost_equal(qtifit.mk, mk, decimal=2) npt.assert_almost_equal(qtifit.k_bulk, k_bulk, decimal=2) npt.assert_almost_equal(qtifit.k_shear, k_shear, decimal=2) npt.assert_almost_equal(qtifit.k_mu, k_mu, decimal=2) else: for fit_method in ["OLS", "WLS"]: qtimodel = qti.QtiModel(gtab, fit_method=fit_method) data = qtimodel.predict(params) npt.assert_raises(ValueError, qtimodel.fit, data, mask=np.ones(2)) npt.assert_raises(ValueError, qtimodel.fit, data, mask=np.ones(data.shape)) for mask in [None, np.ones(data.shape[0:-1])]: qtifit = qtimodel.fit(data, mask=mask) npt.assert_raises( ValueError, qtifit.predict, gradient_table(np.zeros(3), bvecs=np.zeros((3, 3))), ) npt.assert_almost_equal(qtifit.predict(gtab), data) npt.assert_almost_equal(qtifit.S0_hat, S0) npt.assert_almost_equal(qtifit.md, md) npt.assert_almost_equal(qtifit.v_md, v_md) npt.assert_almost_equal(qtifit.v_shear, v_shear) npt.assert_almost_equal(qtifit.v_iso, v_iso) npt.assert_almost_equal(qtifit.c_md, c_md) npt.assert_almost_equal(qtifit.c_mu, c_mu) npt.assert_almost_equal(qtifit.ufa, ufa) npt.assert_almost_equal(qtifit.c_m, c_m) npt.assert_almost_equal(qtifit.fa, fa) npt.assert_almost_equal(qtifit.c_c, c_c) npt.assert_almost_equal(qtifit.mk, mk) npt.assert_almost_equal(qtifit.k_bulk, k_bulk) 
npt.assert_almost_equal(qtifit.k_shear, k_shear) npt.assert_almost_equal(qtifit.k_mu, k_mu) dipy-1.11.0/dipy/reconst/tests/test_reco_utils.py000066400000000000000000000042151476546756600221400ustar00rootroot00000000000000"""Testing reconstruction utilities.""" import numpy as np from numpy.testing import assert_array_equal, assert_equal from dipy.reconst.recspeed import adj_to_countarrs, argmax_from_countarrs def test_adj_countarrs(): adj = [[0, 1, 2], [2, 3], [4, 5, 6, 7]] counts, inds = adj_to_countarrs(adj) assert_array_equal(counts, [3, 2, 4]) assert_equal(counts.dtype.type, np.uint32) assert_array_equal(inds, [0, 1, 2, 2, 3, 4, 5, 6, 7]) assert_equal(inds.dtype.type, np.uint32) def test_argmax_from_countarrs(): # basic case vals = np.arange(10, dtype=float) vertinds = np.arange(10, dtype=np.uint32) adj_counts = np.ones((10,), dtype=np.uint32) adj_inds_raw = np.arange(10, dtype=np.uint32)[::-1] # when contiguous - OK adj_inds = adj_inds_raw.copy() argmax_from_countarrs(vals, vertinds, adj_counts, adj_inds) # yield assert_array_equal(inds, [5, 6, 7, 8, 9]) # test for errors - first - not contiguous # # The tests below cause odd errors and segfaults with numpy SVN # vintage June 2010 (sometime after 1.4.0 release) - see # http://groups.google.com/group/cython-users/browse_thread/thread/624c696293b7fe44?pli=1 """ yield assert_raises(ValueError, argmax_from_countarrs, vals, vertinds, adj_counts, adj_inds_raw) # too few vertices yield assert_raises(ValueError, argmax_from_countarrs, vals, vertinds[:-1], adj_counts, adj_inds) # adj_inds too short yield assert_raises(IndexError, argmax_from_countarrs, vals, vertinds, adj_counts, adj_inds[:-1]) # vals too short yield assert_raises(IndexError, argmax_from_countarrs, vals[:-1], vertinds, adj_counts, adj_inds) """ dipy-1.11.0/dipy/reconst/tests/test_rumba.py000066400000000000000000000361301476546756600210770ustar00rootroot00000000000000import warnings import numpy as np from numpy.testing import ( assert_, assert_allclose, assert_almost_equal, assert_array_equal, assert_equal, assert_raises, ) from dipy.core.geometry import cart2sphere from dipy.core.gradients import gradient_table, unique_bvals_tolerance from dipy.core.sphere_stats import angular_similarity from dipy.data import default_sphere, dsi_voxels, get_fnames, get_sphere from dipy.direction.peaks import peak_directions from dipy.reconst.csdeconv import AxSymShResponse from dipy.reconst.rumba import RumbaSDModel, generate_kernel from dipy.reconst.shm import descoteaux07_legacy_msg from dipy.reconst.tests.test_dsi import sticks_and_ball_dummies from dipy.sims.voxel import multi_tensor, single_tensor, sticks_and_ball def test_rumba(): # Test fODF results from ideal examples. 
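# The test builds a 515-point DSI gradient table, simulates a 90 degree
# two-stick crossing, and requires RUMBA-SD (under SMF and SoS noise models)
# to recover exactly two fODF peaks. angular_similarity sums the best-match
# cosines between two direction sets, so a perfect match scores
# len(directions); a small illustrative check of that convention (an
# assumption about the helper, not part of the original test):
assert_almost_equal(angular_similarity(np.eye(3), np.eye(3)), 3)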
sphere = default_sphere # repulsion 724 sphere2 = get_sphere(name="symmetric362") btable = np.loadtxt(get_fnames(name="dsi515btable")) bvals = btable[:, 0] bvecs = btable[:, 1:] gtab = gradient_table(bvals, bvecs=bvecs) data, golden_directions = sticks_and_ball( gtab, d=0.0015, S0=100, angles=[(0, 0), (90, 0)], fractions=[50, 50], snr=None ) # Testing input validation msg = "b0_threshold .*" with warnings.catch_warnings(): warnings.filterwarnings("ignore", message=msg, category=UserWarning) gtab_broken = gradient_table(bvals[~gtab.b0s_mask], bvecs=bvecs[~gtab.b0s_mask]) assert_raises(ValueError, RumbaSDModel, gtab_broken) with warnings.catch_warnings(record=True) as w: _ = RumbaSDModel(gtab, verbose=True) assert_equal(len(w), 1) assert_(w[0].category, UserWarning) assert_raises(ValueError, RumbaSDModel, gtab, use_tv=True) assert_raises(ValueError, RumbaSDModel, gtab, n_iter=0) rumba_broken = RumbaSDModel(gtab, recon_type="test") assert_raises(ValueError, rumba_broken.fit, data) # Models to validate rumba_smf = RumbaSDModel( gtab, n_iter=20, recon_type="smf", n_coils=1, sphere=sphere ) rumba_sos = RumbaSDModel( gtab, n_iter=20, recon_type="sos", n_coils=32, sphere=sphere ) model_list = [rumba_smf, rumba_sos] # Test on repulsion724 sphere for model in model_list: model_fit = model.fit(data) # Verify only works on original sphere assert_raises(ValueError, model_fit.odf, sphere=sphere2) odf = model_fit.odf(sphere=sphere) directions, _, _ = peak_directions( odf, sphere, relative_peak_threshold=0.35, min_separation_angle=25 ) assert_equal(len(directions), 2) assert_almost_equal(angular_similarity(directions, golden_directions), 2, 1) # Test on data with 1, 2, 3, or no peaks sb_dummies = sticks_and_ball_dummies(gtab) for model in model_list: for sbd in sb_dummies: data, golden_directions = sb_dummies[sbd] model_fit = model.fit(data) odf = model_fit.odf(sphere=sphere) directions, _, _ = peak_directions( odf, sphere, relative_peak_threshold=0.35, min_separation_angle=25 ) if len(directions) <= 3: # Verify small isotropic fraction in anisotropic case assert_equal(model_fit.f_iso < 0.1, True) assert_equal(len(directions), len(golden_directions)) if len(directions) > 3: # Verify large isotropic fraction in isotropic case assert_equal(model_fit.f_iso > 0.8, True) def test_predict(): # Test signal reconstruction on ideal example sphere = default_sphere btable = np.loadtxt(get_fnames(name="dsi515btable")) bvals = btable[:, 0] bvecs = btable[:, 1:] gtab = gradient_table(bvals, bvecs=bvecs) rumba = RumbaSDModel(gtab, n_iter=600, sphere=sphere) # Simulated data data = single_tensor(gtab, S0=1, evals=rumba.wm_response) rumba_fit = rumba.fit(data) data_pred = rumba_fit.predict() assert_allclose(data_pred, data, atol=0.01, rtol=0.05) def test_recursive_rumba(): # Test with recursive data-driven response sphere = default_sphere # repulsion 724 btable = np.loadtxt(get_fnames(name="dsi515btable")) bvals = btable[:, 0] bvecs = btable[:, 1:] gtab = gradient_table(bvals, bvecs=bvecs) data, golden_directions = sticks_and_ball( gtab, d=0.0015, S0=100, angles=[(0, 0), (90, 0)], fractions=[50, 50], snr=None ) wm_response = AxSymShResponse( 480, np.array([570.35065982, -262.81741086, 80.23104069, -16.93940972, 2.57628738]), ) model = RumbaSDModel(gtab, wm_response=wm_response, n_iter=20, sphere=sphere) with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) model_fit = model.fit(data) # Test peaks odf = model_fit.odf(sphere=sphere) 
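# Peak extraction below keeps maxima above 35% of the largest peak that are
# separated by at least 25 degrees, which retains the two simulated fiber
# directions while discarding spurious local maxima on the 724-vertex sphere.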
directions, _, _ = peak_directions( odf, sphere, relative_peak_threshold=0.35, min_separation_angle=25 ) assert_equal(len(directions), 2) assert_almost_equal(angular_similarity(directions, golden_directions), 2, 1) def test_multishell_rumba(): # Test with multi-shell response sphere = default_sphere # repulsion 724 btable = np.loadtxt(get_fnames(name="dsi515btable")) bvals = btable[:, 0] bvecs = btable[:, 1:] gtab = gradient_table(bvals, bvecs=bvecs) data, golden_directions = sticks_and_ball( gtab, d=0.0015, S0=100, angles=[(0, 0), (90, 0)], fractions=[50, 50], snr=None ) ms_eigenval_count = len(unique_bvals_tolerance(gtab.bvals)) - 1 wm_response = np.tile(np.array([1.7e-3, 0.2e-3, 0.2e-3]), (ms_eigenval_count, 1)) model = RumbaSDModel(gtab, wm_response=wm_response, n_iter=20, sphere=sphere) model_fit = model.fit(data) # Test peaks odf = model_fit.odf(sphere=sphere) directions, _, _ = peak_directions( odf, sphere, relative_peak_threshold=0.35, min_separation_angle=25 ) assert_equal(len(directions), 2) assert_almost_equal(angular_similarity(directions, golden_directions), 2, 1) def test_mvoxel_rumba(): # Verify form of results in multi-voxel situation. data, gtab = dsi_voxels() # multi-voxel data sphere = default_sphere # repulsion 724 # Models to validate rumba_smf = RumbaSDModel(gtab, n_iter=5, recon_type="smf", n_coils=1, sphere=sphere) rumba_sos = RumbaSDModel( gtab, n_iter=5, recon_type="sos", n_coils=32, sphere=sphere ) model_list = [rumba_smf, rumba_sos] msg = "There is overlap in clustering of b-value.*" for model in model_list: with warnings.catch_warnings(): warnings.filterwarnings("ignore", message=msg, category=UserWarning) model_fit = model.fit(data) odf = model_fit.odf(sphere=sphere) f_iso = model_fit.f_iso f_wm = model_fit.f_wm f_gm = model_fit.f_gm f_csf = model_fit.f_csf combined = model_fit.combined_odf_iso # Verify prediction properties with warnings.catch_warnings(): warnings.filterwarnings("ignore", message=msg, category=UserWarning) pred_sig_1 = model_fit.predict() pred_sig_2 = model_fit.predict(S0=1) pred_sig_3 = model_fit.predict(S0=np.ones(odf.shape[:-1])) pred_sig_4 = model_fit.predict(gtab=gtab) assert_equal(pred_sig_1, pred_sig_2) assert_equal(pred_sig_3, pred_sig_4) assert_equal(pred_sig_1, pred_sig_3) assert_equal(data.shape, pred_sig_1.shape) assert_equal(np.all(np.isreal(pred_sig_1)), True) assert_equal(np.all(pred_sig_1 > 0), True) # Verify shape, positivity, realness of results assert_equal(data.shape[:-1] + (len(sphere.vertices),), odf.shape) assert_equal(np.all(np.isreal(odf)), True) assert_equal(np.all(odf > 0), True) assert_equal(data.shape[:-1], f_iso.shape) assert_equal(np.all(np.isreal(f_iso)), True) assert_equal(np.all(f_iso > 0), True) # Verify properties of fODF and volume fractions assert_equal(f_iso, f_gm + f_csf) assert_equal(combined, odf + f_iso[..., None] / len(sphere.vertices)) assert_almost_equal(f_iso + f_wm, np.ones(f_iso.shape)) assert_almost_equal(np.sum(combined, axis=3), np.ones(f_iso.shape)) assert_equal(np.sum(odf, axis=3), f_wm) def test_global_fit(): # Test fODF results on ideal examples in global fitting paradigm. 
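# Global (voxelwise=False) fitting reconstructs the whole volume at once, so
# the input must be 4D, and TV spatial regularization additionally needs a
# non-singleton size along every volume dimension -- hence the
# data[None, None, None, :] reshape and the (2, 2, 2, 1) tiling prepared below.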
sphere = default_sphere # repulsion 724 btable = np.loadtxt(get_fnames(name="dsi515btable")) bvals = btable[:, 0] bvecs = btable[:, 1:] gtab = gradient_table(bvals, bvecs=bvecs) data, golden_directions = sticks_and_ball( gtab, d=0.0015, S0=100, angles=[(0, 0), (90, 0)], fractions=[50, 50], snr=None ) # global_fit requires 4D argument data = data[None, None, None, :] # TV requires non-singleton size in all volume dimensions data_mvoxel = np.tile(data, (2, 2, 2, 1)) # Model to validate rumba = RumbaSDModel( gtab, n_iter=20, recon_type="smf", n_coils=1, R=2, voxelwise=False, sphere=sphere, ) rumba_tv = RumbaSDModel( gtab, n_iter=20, recon_type="smf", n_coils=1, R=2, voxelwise=False, use_tv=True, sphere=sphere, ) # Testing input validation assert_raises(ValueError, rumba.fit, data[:, :, :, 0]) # Must be 4D # TV can't work with singleton dimensions in data volume assert_raises(ValueError, rumba_tv.fit, data) # Mask must match first 3 dimensions of data assert_raises(ValueError, rumba.fit, data, mask=np.ones(data.shape)) # Recon type validation rumba_broken = RumbaSDModel(gtab, recon_type="test", voxelwise=False) assert_raises(ValueError, rumba_broken.fit, data) # Test on repulsion 724 sphere, with/without TV regularization for ix, model in enumerate([rumba, rumba_tv]): if ix: model_fit = model.fit(data_mvoxel) else: model_fit = model.fit(data) odf = model_fit.odf(sphere=sphere) directions, _, _ = peak_directions( odf[0, 0, 0], sphere, relative_peak_threshold=0.35, min_separation_angle=25 ) assert_equal(len(directions), 2) assert_almost_equal(angular_similarity(directions, golden_directions), 2, 1) # Test on data with 1, 2, 3, or no peaks sb_dummies = sticks_and_ball_dummies(gtab) for sbd in sb_dummies: data, golden_directions = sb_dummies[sbd] data = data[None, None, None, :] # make 4D rumba_fit = rumba.fit(data) odf = rumba_fit.odf(sphere=sphere) f_iso = rumba_fit.f_iso directions, _, _ = peak_directions( odf[0, 0, 0], sphere, relative_peak_threshold=0.35, min_separation_angle=25 ) if len(directions) <= 3: # Verify small isotropic fraction in anisotropic case assert_equal(f_iso[0, 0, 0] < 0.1, True) assert_equal(len(directions), len(golden_directions)) if len(directions) > 3: # Verify large isotropic fraction in isotropic case assert_equal(f_iso[0, 0, 0] > 0.8, True) def test_mvoxel_global_fit(): # Verify form of results in global fitting paradigm. 
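# The fraction identities asserted below follow from the compartment
# decomposition of the model: f_iso = f_gm + f_csf, f_iso + f_wm = 1, the fODF
# sums over vertices to f_wm, and the combined ODF sums to 1 in every voxel.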
data, gtab = dsi_voxels() # multi-voxel data sphere = default_sphere # repulsion 724 # Models to validate rumba_sos = RumbaSDModel( gtab, recon_type="sos", n_iter=5, n_coils=32, R=1, voxelwise=False, verbose=True, sphere=sphere, ) rumba_sos_tv = RumbaSDModel( gtab, recon_type="sos", n_iter=5, n_coils=32, R=1, voxelwise=False, use_tv=True, sphere=sphere, ) rumba_r = RumbaSDModel( gtab, recon_type="smf", n_iter=5, n_coils=1, R=2, voxelwise=False, sphere=sphere ) rumba_r_tv = RumbaSDModel( gtab, recon_type="smf", n_iter=5, n_coils=1, R=2, voxelwise=False, use_tv=True, sphere=sphere, ) model_list = [rumba_sos, rumba_sos_tv, rumba_r, rumba_r_tv] # Test each model with/without TV regularization for model in model_list: msg = "There is overlap in clustering of b-value.*" with warnings.catch_warnings(): warnings.filterwarnings("ignore", message=msg, category=UserWarning) model_fit = model.fit(data) odf = model_fit.odf(sphere=sphere) f_iso = model_fit.f_iso f_wm = model_fit.f_wm f_gm = model_fit.f_gm f_csf = model_fit.f_csf combined = model_fit.combined_odf_iso # Verify shape, positivity, realness of results assert_equal(data.shape[:-1] + (len(sphere.vertices),), odf.shape) assert_equal(np.all(np.isreal(odf)), True) assert_equal(np.all(odf > 0), True) assert_equal(data.shape[:-1], f_iso.shape) assert_equal(np.all(np.isreal(f_iso)), True) assert_equal(np.all(f_iso > 0), True) # Verify normalization assert_equal(f_iso, f_gm + f_csf) assert_equal(combined, odf + f_iso[..., None] / len(sphere.vertices)) assert_almost_equal(f_iso + f_wm, np.ones(f_iso.shape)) assert_almost_equal(np.sum(combined, axis=3), np.ones(f_iso.shape)) assert_equal(np.sum(odf, axis=3), f_wm) def test_generate_kernel(): # Test form and content of kernel generation result. # load repulsion 724 sphere sphere = default_sphere btable = np.loadtxt(get_fnames(name="dsi515btable")) bvals = btable[:, 0] bvecs = btable[:, 1:] gtab = gradient_table(bvals, bvecs=bvecs) # Kernel parameters wm_response = np.array([1.7e-3, 0.2e-3, 0.2e-3]) gm_response = 0.2e-4 csf_response = 3.0e-3 # Test kernel shape kernel = generate_kernel(gtab, sphere, wm_response, gm_response, csf_response) assert_equal(kernel.shape, (len(gtab.bvals), len(sphere.vertices) + 2)) # Verify first column of kernel _, theta, phi = cart2sphere(sphere.x, sphere.y, sphere.z) S0 = 1 # S0 assumed to be 1 fi = 100 # volume fraction assumed to be 100% S, _ = multi_tensor( gtab, np.array([wm_response]), S0=S0, angles=[[theta[0] * 180 / np.pi, phi[0] * 180 / np.pi]], fractions=[fi], snr=None, ) assert_almost_equal(kernel[:, 0], S) # Multi-shell version ms_eigenval_count = len(unique_bvals_tolerance(gtab.bvals)) - 1 wm_response_multi = np.tile(wm_response, (ms_eigenval_count, 1)) kernel_multi = generate_kernel( gtab, sphere, wm_response_multi, gm_response, csf_response ) assert_equal(kernel.shape, (len(gtab.bvals), len(sphere.vertices) + 2)) assert_almost_equal(kernel, kernel_multi) # Test optional isotropic compartment; should cause last column of zeroes kernel = generate_kernel( gtab, sphere, wm_response, gm_response=None, csf_response=None ) assert_array_equal(kernel[:, -2], np.zeros(len(gtab.bvals))) assert_array_equal(kernel[:, -1], np.zeros(len(gtab.bvals))) dipy-1.11.0/dipy/reconst/tests/test_sfm.py000066400000000000000000000212221476546756600205520ustar00rootroot00000000000000import warnings import numpy as np import numpy.testing as npt import pytest import dipy.core.gradients as grad import dipy.core.optimize as opt import dipy.data as dpd from dipy.io.gradients import 
read_bvals_bvecs from dipy.io.image import load_nifti_data import dipy.reconst.cross_validation as xval import dipy.reconst.sfm as sfm import dipy.sims.voxel as sims from dipy.utils.optpkg import optional_package warnings.filterwarnings("ignore") sklearn, has_sklearn, _ = optional_package("sklearn") needs_sklearn = pytest.mark.skipif(not has_sklearn, reason="Requires sklearn") def test_design_matrix(): data, gtab = dpd.dsi_voxels() sphere = dpd.get_sphere() # Make it with NNLS, so that it gets tested regardless of sklearn sparse_fascicle_model = sfm.SparseFascicleModel(gtab, sphere=sphere, solver="NNLS") npt.assert_equal( sparse_fascicle_model.design_matrix.shape, (np.sum(~gtab.b0s_mask), sphere.vertices.shape[0]), ) @needs_sklearn def test_sfm(): fdata, fbvals, fbvecs = dpd.get_fnames() data = load_nifti_data(fdata) gtab = grad.gradient_table(fbvals, bvecs=fbvecs) for n_procs in [1, 2]: for iso in [sfm.ExponentialIsotropicModel, None]: sfmodel = sfm.SparseFascicleModel(gtab, isotropic=iso) sffit1 = sfmodel.fit(data[0, 0, 0], num_processes=n_procs) sphere = dpd.get_sphere() odf1 = sffit1.odf(sphere) pred1 = sffit1.predict(gtab=gtab) mask = np.ones(data.shape[:-1]) sffit2 = sfmodel.fit(data, mask=mask, num_processes=n_procs) pred2 = sffit2.predict(gtab=gtab) odf2 = sffit2.odf(sphere) sffit3 = sfmodel.fit(data, num_processes=n_procs) pred3 = sffit3.predict(gtab=gtab) odf3 = sffit3.odf(sphere) npt.assert_almost_equal(pred3, pred2, decimal=2) npt.assert_almost_equal(pred3[0, 0, 0], pred1, decimal=2) npt.assert_almost_equal(odf3[0, 0, 0], odf1, decimal=2) npt.assert_almost_equal(odf3[0, 0, 0], odf2[0, 0, 0], decimal=2) # Fit zeros and you will get back zeros npt.assert_almost_equal( sfmodel.fit(np.zeros(data[0, 0, 0].shape), num_processes=n_procs).beta, np.zeros(sfmodel.design_matrix[0].shape[-1]), ) @needs_sklearn def test_predict(): SNR = 1000 S0 = 100 _, fbvals, fbvecs = dpd.get_fnames(name="small_64D") bvals, bvecs = read_bvals_bvecs(fbvals, fbvecs) gtab = grad.gradient_table(bvals, bvecs=bvecs) mevals = np.array(([0.0015, 0.0003, 0.0003], [0.0015, 0.0003, 0.0003])) angles = [(0, 0), (60, 0)] S, sticks = sims.multi_tensor( gtab, mevals, S0=S0, angles=angles, fractions=[10, 90], snr=SNR ) sfmodel = sfm.SparseFascicleModel(gtab, response=[0.0015, 0.0003, 0.0003]) sffit = sfmodel.fit(S) pred = sffit.predict() npt.assert_(xval.coeff_of_determination(pred, S) > 97) # Should be possible to predict using a different gtab: new_gtab = grad.gradient_table(bvals[::2], bvecs=bvecs[::2]) new_pred = sffit.predict(gtab=new_gtab) npt.assert_(xval.coeff_of_determination(new_pred, S[::2]) > 97) # Should be possible to predict for a single direction: with warnings.catch_warnings(): warnings.simplefilter("ignore", category=UserWarning) new_gtab = grad.gradient_table(bvals[1][None], bvecs=bvecs[1][None, :]) new_pred = sffit.predict(gtab=new_gtab) # Fitting and predicting with a volume of data: fdata, fbval, fbvec = dpd.get_fnames(name="small_25") gtab = grad.gradient_table(fbval, bvecs=fbvec) data = load_nifti_data(fdata) sfmodel = sfm.SparseFascicleModel(gtab, response=[0.0015, 0.0003, 0.0003]) sffit = sfmodel.fit(data) pred = sffit.predict() # Should be possible to predict using a different gtab: new_gtab = grad.gradient_table(bvals[::2], bvecs=bvecs[::2]) new_pred = sffit.predict(gtab=new_gtab) npt.assert_equal( new_pred.shape, data.shape[:-1] + bvals[::2].shape, ) # Should be possible to predict for a single direction: with warnings.catch_warnings(): warnings.simplefilter("ignore", category=UserWarning) 
new_gtab = grad.gradient_table(bvals[1][None], bvecs=bvecs[1][None, :]) new_pred = sffit.predict(gtab=new_gtab) npt.assert_equal(new_pred.shape, data.shape[:-1]) # Fitting and predicting with masked data: mask = np.zeros(data.shape[:3]) mask[2:5, 2:5, :] = 1 sffit = sfmodel.fit(data, mask=mask) pred = sffit.predict() npt.assert_equal(pred.shape, data.shape) # Should be possible to predict using a different gtab: new_gtab = grad.gradient_table(bvals[::2], bvecs=bvecs[::2]) new_pred = sffit.predict(gtab=new_gtab) npt.assert_equal( new_pred.shape, data.shape[:-1] + bvals[::2].shape, ) npt.assert_equal(new_pred[0, 0, 0], 0) # Should be possible to predict for a single direction: with warnings.catch_warnings(): warnings.simplefilter("ignore", category=UserWarning) new_gtab = grad.gradient_table(bvals[1][None], bvecs=bvecs[1][None, :]) new_pred = sffit.predict(gtab=new_gtab) npt.assert_equal(new_pred.shape, data.shape[:-1]) npt.assert_equal(new_pred[0, 0, 0], 0) def test_sfm_background(): fdata, fbvals, fbvecs = dpd.get_fnames() data = load_nifti_data(fdata) gtab = grad.gradient_table(fbvals, bvecs=fbvecs) to_fit = data[0, 0, 0] to_fit[gtab.b0s_mask] = 0 sfmodel = sfm.SparseFascicleModel(gtab, solver="NNLS") sffit = sfmodel.fit(to_fit) npt.assert_equal(sffit.beta, np.zeros_like(sffit.beta)) def test_sfm_stick(): fdata, fbvals, fbvecs = dpd.get_fnames() data = load_nifti_data(fdata) gtab = grad.gradient_table(fbvals, bvecs=fbvecs) sfmodel = sfm.SparseFascicleModel(gtab, solver="NNLS", response=[0.001, 0, 0]) sffit1 = sfmodel.fit(data[0, 0, 0]) sphere = dpd.get_sphere() sffit1.odf(sphere) sffit1.predict(gtab=gtab) SNR = 1000 S0 = 100 mevals = np.array(([0.001, 0, 0], [0.001, 0, 0])) angles = [(0, 0), (60, 0)] S, sticks = sims.multi_tensor( gtab, mevals, S0=S0, angles=angles, fractions=[50, 50], snr=SNR ) sfmodel = sfm.SparseFascicleModel(gtab, solver="NNLS", response=[0.001, 0, 0]) sffit = sfmodel.fit(S) pred = sffit.predict() npt.assert_(xval.coeff_of_determination(pred, S) > 96) @needs_sklearn def test_sfm_sklearnlinearsolver(): class SillySolver(opt.SKLearnLinearSolver): def fit(self, X, y): self.coef_ = np.ones(X.shape[-1]) class EvenSillierSolver: def fit(self, X, y): self.coef_ = np.ones(X.shape[-1]) fdata, fbvals, fbvecs = dpd.get_fnames() gtab = grad.gradient_table(fbvals, bvecs=fbvecs) sfmodel = sfm.SparseFascicleModel(gtab, solver=SillySolver()) npt.assert_(isinstance(sfmodel.solver, SillySolver)) npt.assert_raises( ValueError, sfm.SparseFascicleModel, gtab, solver=EvenSillierSolver() ) @needs_sklearn def test_exponential_iso(): fdata, fbvals, fbvecs = dpd.get_fnames() data_dti = load_nifti_data(fdata) gtab_dti = grad.gradient_table(fbvals, bvecs=fbvecs) data_multi, gtab_multi = dpd.dsi_deconv_voxels() for data, gtab in zip([data_dti, data_multi], [gtab_dti, gtab_multi]): sfmodel = sfm.SparseFascicleModel(gtab, isotropic=sfm.ExponentialIsotropicModel) sffit1 = sfmodel.fit(data[0, 0, 0]) sphere = dpd.get_sphere() odf = sffit1.odf(sphere) pred = sffit1.predict(gtab=gtab) npt.assert_equal(pred.shape, data[0, 0, 0].shape) npt.assert_equal(odf.shape, data[0, 0, 0].shape[:-1] + (sphere.x.shape[0],)) sffit2 = sfmodel.fit(data) sphere = dpd.get_sphere() odf = sffit2.odf(sphere) pred = sffit2.predict(gtab=gtab) npt.assert_equal(pred.shape, data.shape) npt.assert_equal(odf.shape, data.shape[:-1] + (sphere.x.shape[0],)) mask = np.zeros(data.shape[:3]) mask[2:5, 2:5, :] = 1 sffit3 = sfmodel.fit(data, mask=mask) sphere = dpd.get_sphere() odf = sffit3.odf(sphere) pred = sffit3.predict(gtab=gtab) 
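# Aside: the coeff_of_determination thresholds used in this module (> 96,
# > 97) read naturally as R^2 expressed as a percentage,
# 100 * (1 - SS_err / SS_tot); this is an interpretation of the thresholds,
# not a statement of the exact implementation. Toy numbers (made up):
#
#     data = np.array([1.0, 2.0, 3.0, 4.0])
#     model = np.array([1.1, 1.9, 3.2, 3.9])
#     ss_err = np.sum((data - model) ** 2)        # 0.07
#     ss_tot = np.sum((data - data.mean()) ** 2)  # 5.0
#     100 * (1 - ss_err / ss_tot)                 # 98.6 -> passes > 96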
npt.assert_equal(pred.shape, data.shape) npt.assert_equal(odf.shape, data.shape[:-1] + (sphere.x.shape[0],)) SNR = 1000 S0 = 100 mevals = np.array(([0.0015, 0.0005, 0.0005], [0.0015, 0.0005, 0.0005])) angles = [(0, 0), (60, 0)] S, sticks = sims.multi_tensor( gtab, mevals, S0=S0, angles=angles, fractions=[50, 50], snr=SNR ) sffit = sfmodel.fit(S) pred = sffit.predict() npt.assert_(xval.coeff_of_determination(pred, S) > 96) dipy-1.11.0/dipy/reconst/tests/test_shm.py000066400000000000000000001205111476546756600205550ustar00rootroot00000000000000"""Test spherical harmonic models and the tools associated with those models.""" import warnings import numpy as np import numpy.linalg as npl import numpy.testing as npt from numpy.testing import ( assert_array_almost_equal, assert_array_equal, assert_equal, assert_raises, ) from dipy.core.gradients import gradient_table from dipy.core.interpolation import NearestNeighborInterpolator from dipy.core.sphere import Sphere, hemi_icosahedron from dipy.data import mrtrix_spherical_functions from dipy.direction.peaks import peak_directions from dipy.reconst import odf from dipy.reconst.shm import ( CsaOdfModel, OpdtModel, QballModel, ResidualBootstrapWrapper, SphHarmFit, anisotropic_power, bootstrap_data_array, bootstrap_data_voxel, calculate_max_order, convert_sh_descoteaux_tournier, convert_sh_from_legacy, convert_sh_to_full_basis, convert_sh_to_legacy, descoteaux07_legacy_msg, gen_dirac, hat, lcr_matrix, normalize_data, order_from_ncoef, real_sh_descoteaux, real_sh_descoteaux_from_index, real_sh_tournier, real_sym_sh_basis, real_sym_sh_mrtrix, sf_to_sh, sh_to_sf, sh_to_sf_matrix, smooth_pinv, sph_harm_ind_list, spherical_harmonics, tournier07_legacy_msg, ) from dipy.sims.voxel import multi_tensor_odf, single_tensor from dipy.testing import assert_true def test_order_from_ncoeff(): # Just try some out: for sh_order_max in [2, 4, 6, 8, 12, 24]: m_values, l_values = sph_harm_ind_list(sh_order_max) n_coef = m_values.shape[0] assert_equal(order_from_ncoef(n_coef), sh_order_max) # Try out full basis m_full, l_full = sph_harm_ind_list(sh_order_max, full_basis=True) n_coef_full = m_full.shape[0] assert_equal(order_from_ncoef(n_coef_full, full_basis=True), sh_order_max) def test_sph_harm_ind_list(): m_list, l_list = sph_harm_ind_list(8) assert_equal(m_list.shape, l_list.shape) assert_equal(m_list.shape, (45,)) assert_true(np.all(np.abs(m_list) <= l_list)) assert_array_equal(l_list % 2, 0) assert_raises(ValueError, sph_harm_ind_list, 1) # Test for a full basis m_list, l_list = sph_harm_ind_list(8, full_basis=True) assert_equal(m_list.shape, l_list.shape) # There are (sh_order + 1) * (sh_order + 1) coefficients assert_equal(m_list.shape, (81,)) assert_true(np.all(np.abs(m_list) <= l_list)) def test_real_sh_descoteaux_from_index(): # Tests derived from tables in # https://en.wikipedia.org/wiki/Table_of_spherical_harmonics # where real spherical harmonic $Y_l^m$ is defined to be: # Real($Y_l^m$) * sqrt(2) if m > 0 # $Y_l^m$ if m == 0 # Imag($Y_l^m$) * sqrt(2) if m < 0 rsh = real_sh_descoteaux_from_index pi = np.pi sqrt = np.sqrt sin = np.sin cos = np.cos with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) assert_array_almost_equal(rsh(0, 0, 0, 0), 0.5 / sqrt(pi)) assert_array_almost_equal( rsh(-2, 2, pi / 5, pi / 3), 0.25 * sqrt(15.0 / (2.0 * pi)) * (sin(pi / 5.0)) ** 2.0 * cos(0 + 2.0 * pi / 3) * sqrt(2), ) assert_array_almost_equal( rsh(2, 2, pi / 5, pi / 3), -1 * 0.25 * sqrt(15.0 
/ (2.0 * pi)) * (sin(pi / 5.0)) ** 2.0 * sin(0 - 2.0 * pi / 3) * sqrt(2), ) assert_array_almost_equal( rsh(-2, 2, pi / 2, pi), 0.25 * sqrt(15 / (2.0 * pi)) * cos(2.0 * pi) * sin(pi / 2.0) ** 2.0 * sqrt(2), ) assert_array_almost_equal( rsh(2, 4, pi / 3.0, pi / 4.0), -1 * (3.0 / 8.0) * sqrt(5.0 / (2.0 * pi)) * sin(0 - 2.0 * pi / 4.0) * sin(pi / 3.0) ** 2.0 * (7.0 * cos(pi / 3.0) ** 2.0 - 1) * sqrt(2), ) assert_array_almost_equal( rsh(-4, 4, pi / 6.0, pi / 8.0), (3.0 / 16.0) * sqrt(35.0 / (2.0 * pi)) * cos(0 + 4.0 * pi / 8.0) * sin(pi / 6.0) ** 4.0 * sqrt(2), ) assert_array_almost_equal( rsh(4, 4, pi / 6.0, pi / 8.0), -1 * (3.0 / 16.0) * sqrt(35.0 / (2.0 * pi)) * sin(0 - 4.0 * pi / 8.0) * sin(pi / 6.0) ** 4.0 * sqrt(2), ) aa = np.ones((3, 1, 1, 1)) bb = np.ones((1, 4, 1, 1)) cc = np.ones((1, 1, 5, 1)) dd = np.ones((1, 1, 1, 6)) with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) assert_equal(rsh(aa, bb, cc, dd).shape, (3, 4, 5, 6)) def test_gen_dirac(): with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) sh = gen_dirac(np.array([0]), np.array([0]), np.array([0]), np.array([0])) assert_true(np.abs(sh[0] - 1.0 / np.sqrt(4.0 * np.pi)) < 0.0001) def test_real_sym_sh_mrtrix(): coef, expected, sphere = mrtrix_spherical_functions() with warnings.catch_warnings(record=True) as w: warnings.simplefilter("always") basis, m_values, l_values = real_sym_sh_mrtrix(8, sphere.theta, sphere.phi) npt.assert_equal(len(w), 2) npt.assert_(issubclass(w[0].category, DeprecationWarning)) npt.assert_( "dipy.reconst.shm.real_sym_sh_mrtrix is deprecated, Please use " "dipy.reconst.shm.real_sh_tournier instead" in str(w[0].message) ) npt.assert_(issubclass(w[1].category, PendingDeprecationWarning)) npt.assert_(tournier07_legacy_msg in str(w[1].message)) func = np.dot(coef, basis.T) assert_array_almost_equal(func, expected, 4) def test_real_sym_sh_basis(): new_order = [0, 5, 4, 3, 2, 1, 14, 13, 12, 11, 10, 9, 8, 7, 6] sphere = hemi_icosahedron.subdivide(n=2) with warnings.catch_warnings(): warnings.filterwarnings("ignore") basis, m_values, l_values = real_sym_sh_mrtrix(4, sphere.theta, sphere.phi) expected = basis[:, new_order] expected *= np.where(m_values == 0, 1.0, np.sqrt(2)) with warnings.catch_warnings(record=True) as w: descoteaux07_basis, m_values, l_values = real_sym_sh_basis( 4, sphere.theta, sphere.phi ) npt.assert_equal(len(w), 2) npt.assert_(issubclass(w[0].category, DeprecationWarning)) npt.assert_( "dipy.reconst.shm.real_sym_sh_basis is deprecated, Please use " "dipy.reconst.shm.real_sh_descoteaux instead" in str(w[0].message) ) npt.assert_(issubclass(w[1].category, PendingDeprecationWarning)) npt.assert_(descoteaux07_legacy_msg in str(w[1].message)) assert_array_almost_equal(descoteaux07_basis, expected) def test_real_sh_descoteaux1(): # This test should do for now # The tournier07 basis should be the same as re-ordering and re-scaling the # descoteaux07 basis new_order = [0, 5, 4, 3, 2, 1, 14, 13, 12, 11, 10, 9, 8, 7, 6] sphere = hemi_icosahedron.subdivide(n=2) with warnings.catch_warnings(): warnings.filterwarnings("ignore") basis, m_values, l_values = real_sym_sh_mrtrix(4, sphere.theta, sphere.phi) expected = basis[:, new_order] expected *= np.where(m_values == 0, 1.0, np.sqrt(2)) with warnings.catch_warnings(record=True) as w: descoteaux07_basis, m_values, l_values = real_sh_descoteaux( 4, sphere.theta, sphere.phi ) 
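# Aside: the `new_order` used in the two tests above is the per-degree
# m <-> -m flip -- within the l=2 block (positions 1-5) and the l=4 block
# (positions 6-14) the coefficient order is simply reversed -- which is the
# same index permutation exercised by test_convert_sh_descoteaux_tournier
# further below. The np.where(m_values == 0, 1.0, np.sqrt(2)) factor then
# accounts for the sqrt(2) rescaling of the m != 0 terms between the two
# bases.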
npt.assert_equal(len(w), 1) npt.assert_(issubclass(w[0].category, PendingDeprecationWarning)) npt.assert_(descoteaux07_legacy_msg in str(w[0].message)) assert_array_almost_equal(descoteaux07_basis, expected) def test_real_sh_tournier(): vertices = hemi_icosahedron.subdivide(n=2).vertices mevals = np.array([[0.0015, 0.0003, 0.0003], [0.0015, 0.0003, 0.0003]]) angles = [(0, 0), (60, 0)] odf = multi_tensor_odf(vertices, mevals, angles, [50, 50]) mevals = np.array([[0.0015, 0.0003, 0.0003]]) angles = [(0, 0)] odf2 = multi_tensor_odf(-vertices, mevals, angles, [100]) sphere = Sphere(xyz=np.vstack((vertices, -vertices))) # Asymmetric spherical function with 162 coefficients sf = np.append(odf, odf2) # In order for our approximation to be precise enough, we # will use a SH basis of orders up to 10 (121 coefficients) with warnings.catch_warnings(record=True) as w: B, m_values, l_values = real_sh_tournier( 10, sphere.theta, sphere.phi, full_basis=True ) npt.assert_equal(len(w), 1) npt.assert_(issubclass(w[0].category, PendingDeprecationWarning)) npt.assert_(tournier07_legacy_msg in str(w[0].message)) invB = smooth_pinv(B, L=np.zeros_like(l_values)) sh_coefs = np.dot(invB, sf) sf_approx = np.dot(B, sh_coefs) assert_array_almost_equal(sf_approx, sf, 2) def test_real_sh_descoteaux2(): vertices = hemi_icosahedron.subdivide(n=2).vertices mevals = np.array([[0.0015, 0.0003, 0.0003], [0.0015, 0.0003, 0.0003]]) angles = [(0, 0), (60, 0)] odf = multi_tensor_odf(vertices, mevals, angles, [50, 50]) mevals = np.array([[0.0015, 0.0003, 0.0003]]) angles = [(0, 0)] odf2 = multi_tensor_odf(-vertices, mevals, angles, [100]) sphere = Sphere(xyz=np.vstack((vertices, -vertices))) # Asymmetric spherical function with 162 coefficients sf = np.append(odf, odf2) # In order for our approximation to be precise enough, we # will use a SH basis of orders up to 10 (121 coefficients) with warnings.catch_warnings(record=True) as w: B, m_values, l_values = real_sh_descoteaux( 10, sphere.theta, sphere.phi, full_basis=True ) npt.assert_equal(len(w), 1) npt.assert_(issubclass(w[0].category, PendingDeprecationWarning)) npt.assert_(descoteaux07_legacy_msg in str(w[0].message)) invB = smooth_pinv(B, L=np.zeros_like(l_values)) sh_coefs = np.dot(invB, sf) sf_approx = np.dot(B, sh_coefs) assert_array_almost_equal(sf_approx, sf, 2) def test_sh_to_sf_matrix(): sphere = Sphere(xyz=hemi_icosahedron.vertices) with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) B1, invB1 = sh_to_sf_matrix(sphere) B2, m_values, l_values = real_sh_descoteaux(4, sphere.theta, sphere.phi) invB2 = smooth_pinv(B2, L=np.zeros_like(l_values)) with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) B3 = sh_to_sf_matrix(sphere, return_inv=False) assert_array_almost_equal(B1, B2.T) assert_array_almost_equal(invB1, invB2.T) assert_array_almost_equal(B3, B1) assert_raises(ValueError, sh_to_sf_matrix, sphere, basis_type="") def test_smooth_pinv(): hemi = hemi_icosahedron.subdivide(n=2) m_values, l_values = sph_harm_ind_list(4) with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) B = real_sh_descoteaux_from_index( m_values, l_values, hemi.theta[:, None], hemi.phi[:, None] ) L = np.zeros(len(m_values)) C = smooth_pinv(B, L) D = np.dot(npl.inv(np.dot(B.T, B)), B.T) assert_array_almost_equal(C, D) L = l_values * (l_values + 1) * 
0.05 C = smooth_pinv(B, L) L = np.diag(L) D = np.dot(npl.inv(np.dot(B.T, B) + L * L), B.T) assert_array_almost_equal(C, D) L = np.arange(len(l_values)) * 0.05 C = smooth_pinv(B, L) L = np.diag(L) D = np.dot(npl.inv(np.dot(B.T, B) + L * L), B.T) assert_array_almost_equal(C, D)
def test_normalize_data(): sig = np.arange(1, 66)[::-1] where_b0 = np.zeros(65, "bool") where_b0[0] = True assert_raises(ValueError, normalize_data, sig, where_b0, out=sig) norm_sig = normalize_data(sig, where_b0, min_signal=1) assert_array_almost_equal(norm_sig, sig / 65.0) norm_sig = normalize_data(sig, where_b0, min_signal=5) assert_array_almost_equal(norm_sig[-5:], 5 / 65.0) where_b0[[0, 1]] = [True, True] norm_sig = normalize_data(sig, where_b0, min_signal=1) assert_array_almost_equal(norm_sig, sig / 64.5) norm_sig = normalize_data(sig, where_b0, min_signal=5) assert_array_almost_equal(norm_sig[-5:], 5 / 64.5) sig = sig * np.ones((2, 3, 1)) where_b0[[0, 1]] = [True, False] norm_sig = normalize_data(sig, where_b0, min_signal=1) assert_array_almost_equal(norm_sig, sig / 65.0) norm_sig = normalize_data(sig, where_b0, min_signal=5) assert_array_almost_equal(norm_sig[..., -5:], 5 / 65.0) where_b0[[0, 1]] = [True, True] norm_sig = normalize_data(sig, where_b0, min_signal=1) assert_array_almost_equal(norm_sig, sig / 64.5) norm_sig = normalize_data(sig, where_b0, min_signal=5) assert_array_almost_equal(norm_sig[..., -5:], 5 / 64.5)
def make_fake_signal(): hemisphere = hemi_icosahedron.subdivide(n=2) bvecs = np.concatenate(([[0, 0, 0]], hemisphere.vertices)) bvals = np.zeros(len(bvecs)) + 2000 bvals[0] = 0 gtab = gradient_table(bvals, bvecs=bvecs) evals = np.array([[2.1, 0.2, 0.2], [0.2, 2.1, 0.2]]) * 10**-3 evecs0 = np.eye(3) evecs1 = evecs0 a = evecs0[0] b = evecs1[1] S1 = single_tensor(gtab, 0.55, evals=evals[0], evecs=evecs0) S2 = single_tensor(gtab, 0.45, evals=evals[1], evecs=evecs1) return S1 + S2, gtab, np.vstack([a, b])
class TestQballModel: model = QballModel def test_single_voxel_fit(self): signal, gtab, expected = make_fake_signal() sphere = hemi_icosahedron.subdivide(n=4) with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) model = self.model( gtab, sh_order_max=4, min_signal=1e-5, assume_normed=True ) fit = model.fit(signal) with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) odf = fit.odf(sphere) assert_equal(odf.shape, sphere.phi.shape) directions, _, _ = peak_directions(odf, sphere) # Check the same number of directions n = len(expected) assert_equal(len(directions), n) # Check directions are unit vectors cos_similarity = (directions * directions).sum(-1) assert_array_almost_equal(cos_similarity, np.ones(n)) # Check the directions == expected or -expected cos_similarity = (directions * expected).sum(-1) assert_array_almost_equal(abs(cos_similarity), np.ones(n)) # Test normalize data with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) model = self.model( gtab, sh_order_max=4, min_signal=1e-5, assume_normed=False ) fit = model.fit(signal * 5) with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) odf_with_norm = fit.odf(sphere) assert_array_almost_equal(odf, odf_with_norm) def test_multi_voxel_fit(self): signal, gtab, expected = make_fake_signal() sphere = hemi_icosahedron
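# Aside, two facts the surrounding tests lean on:
# 1) smooth_pinv (test_smooth_pinv above) is the regularized pseudoinverse
#    C = (B^T B + diag(L)^2)^{-1} B^T, which the manual npl.inv computations
#    there re-derive; with L = 0 it reduces to the ordinary least-squares
#    inverse (B^T B)^{-1} B^T.
# 2) the coefficient counts asserted in test_sh_order below follow from the
#    basis sizes: a symmetric even-order basis of maximum degree l has
#    (l + 1)(l + 2) / 2 terms (15 for l = 4, 28 for l = 6, 45 for l = 8)
#    and a full basis has (l + 1)^2 terms -- the same tables exercised by
#    test_calculate_max_order further down.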
nd_signal = np.vstack([signal, signal]) with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) model = self.model( gtab, sh_order_max=4, min_signal=1e-5, assume_normed=True ) fit = model.fit(nd_signal) with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) odf = fit.odf(sphere) assert_equal(odf.shape, (2,) + sphere.phi.shape) # Test fitting with mask, where mask is False odf should be 0 fit = model.fit(nd_signal, mask=[False, True]) with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) odf = fit.odf(sphere) assert_array_equal(odf[0], 0.0) def test_sh_order(self): signal, gtab, expected = make_fake_signal() with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) model = self.model(gtab, sh_order_max=4, min_signal=1e-5) assert_equal(model.B.shape[1], 15) assert_equal(max(model.l_values), 4) with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) model = self.model(gtab, sh_order_max=6, min_signal=1e-5) assert_equal(model.B.shape[1], 28) assert_equal(max(model.l_values), 6) def test_gfa(self): signal, gtab, expected = make_fake_signal() signal = np.ones((2, 3, 4, 1)) * signal sphere = hemi_icosahedron.subdivide(n=3) with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) model = self.model(gtab, 6, min_signal=1e-5) fit = model.fit(signal) gfa_shm = fit.gfa with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) gfa_odf = odf.gfa(fit.odf(sphere)) assert_array_almost_equal(gfa_shm, gfa_odf, 3) # gfa should be 0 if all coefficients are 0 (masked areas) mask = np.zeros(signal.shape[:-1]) fit = model.fit(signal, mask=mask) assert_array_equal(fit.gfa, 0) def test_min_signal_default(self): signal, gtab, expected = make_fake_signal() with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) model_default = self.model(gtab, 4) shm_default = model_default.fit(signal).shm_coeff with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) model_correct = self.model(gtab, 4, min_signal=1e-5) shm_correct = model_correct.fit(signal).shm_coeff assert_equal(shm_default, shm_correct) def test_SphHarmFit(): coef = np.zeros((3, 4, 5, 45)) mask = np.zeros((3, 4, 5), dtype=bool) fit = SphHarmFit(None, coef, mask) item = fit[0, 0, 0] assert_equal(item.shape, ()) data = fit[0] assert_equal(data.shape, (4, 5)) data = fit[:, :, 0] assert_equal(data.shape, (3, 4)) class TestOpdtModel(TestQballModel): model = OpdtModel class TestCsaOdfModel(TestQballModel): model = CsaOdfModel def test_hat_and_lcr(): hemi = hemi_icosahedron.subdivide(n=3) m_values, l_values = sph_harm_ind_list(8) with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) B = real_sh_descoteaux_from_index( m_values, l_values, hemi.theta[:, None], hemi.phi[:, None] ) H = hat(B) B_hat = np.dot(H, B) assert_array_almost_equal(B, B_hat) R = lcr_matrix(H) d = 
np.arange(len(hemi.theta)) r = d - np.dot(H, d) lev = np.sqrt(1 - H.diagonal()) r /= lev r -= r.mean() r2 = np.dot(R, d) assert_array_almost_equal(r, r2) r3 = np.dot(d, R.T) assert_array_almost_equal(r, r3) def test_bootstrap_array(): B = np.array([[4, 5, 7, 4, 2.0], [4, 6, 2, 3, 6.0]]) H = hat(B.T) R = np.zeros((5, 5)) d = np.arange(1, 6) dhat = np.dot(H, d) assert_array_almost_equal(bootstrap_data_voxel(dhat, H, R), dhat) assert_array_almost_equal(bootstrap_data_array(dhat, H, R), dhat) def test_ResidualBootstrapWrapper(): B = np.array([[4, 5, 7, 4, 2.0], [4, 6, 2, 3, 6.0]]) B = B.T H = hat(B) d = np.arange(10) / 8.0 d.shape = (2, 5) dhat = np.dot(d, H) signal_object = NearestNeighborInterpolator(dhat, (1,)) ms = 0.2 where_dwi = np.ones(len(H), dtype=bool) boot_obj = ResidualBootstrapWrapper(signal_object, B, where_dwi, min_signal=ms) assert_array_almost_equal(boot_obj[0], dhat[0].clip(ms, 1)) assert_array_almost_equal(boot_obj[1], dhat[1].clip(ms, 1)) dhat = np.column_stack([[0.6, 0.7], dhat]) signal_object = NearestNeighborInterpolator(dhat, (1,)) where_dwi = np.concatenate([[False], where_dwi]) boot_obj = ResidualBootstrapWrapper(signal_object, B, where_dwi, min_signal=ms) assert_array_almost_equal(boot_obj[0], dhat[0].clip(ms, 1)) assert_array_almost_equal(boot_obj[1], dhat[1].clip(ms, 1)) def test_sf_to_sh(): # Subdividing a hemi_icosahedron twice produces 81 unique points, which # is more than enough to fit a order 8 (45 coefficients) spherical harmonic hemisphere = hemi_icosahedron.subdivide(n=2) mevals = np.array([[0.0015, 0.0003, 0.0003], [0.0015, 0.0003, 0.0003]]) angles = [(0, 0), (60, 0)] odf = multi_tensor_odf(hemisphere.vertices, mevals, angles, [50, 50]) # 1D case with the 2 symmetric bases functions # Tournier basis with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=tournier07_legacy_msg, category=PendingDeprecationWarning ) odf_sh = sf_to_sh(odf, hemisphere, sh_order_max=8, basis_type="tournier07") odf_reconst = sh_to_sf( odf_sh, hemisphere, sh_order_max=8, basis_type="tournier07" ) assert_array_almost_equal(odf, odf_reconst, 2) # Legacy definition with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=tournier07_legacy_msg, category=PendingDeprecationWarning ) odf_sh = sf_to_sh( odf, hemisphere, sh_order_max=8, basis_type="tournier07", legacy=True ) odf_reconst = sh_to_sf( odf_sh, hemisphere, sh_order_max=8, basis_type="tournier07", legacy=True ) assert_array_almost_equal(odf, odf_reconst, 2) # Descoteaux basis with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) odf_sh = sf_to_sh(odf, hemisphere, sh_order_max=8, basis_type="descoteaux07") odf_reconst = sh_to_sf( odf_sh, hemisphere, sh_order_max=8, basis_type="descoteaux07" ) assert_array_almost_equal(odf, odf_reconst, 2) # Legacy definition with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) odf_sh = sf_to_sh( odf, hemisphere, sh_order_max=8, basis_type="descoteaux07", legacy=True ) odf_reconst = sh_to_sf( odf_sh, hemisphere, sh_order_max=8, basis_type="descoteaux07", legacy=True ) assert_array_almost_equal(odf, odf_reconst, 2) # We now create an asymmetric signal # to try out our full SH basis mevals = np.array([[0.0015, 0.0003, 0.0003]]) angles = [(0, 0)] odf2 = multi_tensor_odf(hemisphere.vertices, mevals, angles, [100]) # We simulate our asymmetric signal by using a different ODF # per hemisphere. 
The sphere used is a concatenation of the # vertices of our hemisphere, for a total of 162 vertices. sphere = Sphere(xyz=np.vstack((hemisphere.vertices, -hemisphere.vertices))) asym_odf = np.append(odf, odf2) # Try out full bases with order 10 (121 coefficients) # Tournier basis with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=tournier07_legacy_msg, category=PendingDeprecationWarning ) odf_sh = sf_to_sh( asym_odf, sphere, sh_order_max=10, basis_type="tournier07", full_basis=True ) odf_reconst = sh_to_sf( odf_sh, sphere, sh_order_max=10, basis_type="tournier07", full_basis=True ) assert_array_almost_equal(odf_reconst, asym_odf, 2) # Legacy definition with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=tournier07_legacy_msg, category=PendingDeprecationWarning ) odf_sh = sf_to_sh( asym_odf, sphere, sh_order_max=10, basis_type="tournier07", full_basis=True, legacy=True, ) odf_reconst = sh_to_sf( odf_sh, sphere, sh_order_max=10, basis_type="tournier07", full_basis=True, legacy=True, ) assert_array_almost_equal(odf_reconst, asym_odf, 2) # Descoteaux basis with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) odf_sh = sf_to_sh( asym_odf, sphere, sh_order_max=10, basis_type="descoteaux07", full_basis=True, ) odf_reconst = sh_to_sf( odf_sh, sphere, sh_order_max=10, basis_type="descoteaux07", full_basis=True ) assert_array_almost_equal(odf_reconst, asym_odf, 2) # Legacy definition with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) odf_sh = sf_to_sh( asym_odf, sphere, sh_order_max=10, basis_type="descoteaux07", full_basis=True, legacy=True, ) odf_reconst = sh_to_sf( odf_sh, sphere, sh_order_max=10, basis_type="descoteaux07", full_basis=True, legacy=True, ) assert_array_almost_equal(odf_reconst, asym_odf, 2) # An invalid basis name should raise an error assert_raises(ValueError, sh_to_sf, odf, hemisphere, basis_type="") assert_raises(ValueError, sf_to_sh, odf_sh, hemisphere, basis_type="") # 2D case odf2d = np.vstack((odf, odf)) with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) odf2d_sh = sf_to_sh(odf2d, hemisphere, sh_order_max=8) odf2d_sf = sh_to_sf(odf2d_sh, hemisphere, sh_order_max=8) assert_array_almost_equal(odf2d, odf2d_sf, 2) def test_faster_sph_harm(): sh_order_max = 8 m_values, l_values = sph_harm_ind_list(sh_order_max) theta = np.array( [ 1.61491146, 0.76661665, 0.11976141, 1.20198246, 1.74066314, 1.5925956, 2.13022055, 0.50332859, 1.19868988, 0.78440679, 0.50686938, 0.51739718, 1.80342999, 0.73778957, 2.28559395, 1.29569064, 1.86877091, 0.39239191, 0.54043037, 1.61263047, 0.72695314, 1.90527318, 1.58186125, 0.23130073, 2.51695237, 0.99835604, 1.2883426, 0.48114057, 1.50079318, 1.07978624, 1.9798903, 2.36616966, 2.49233299, 2.13116602, 1.36801518, 1.32932608, 0.95926683, 1.070349, 0.76355762, 2.07148422, 1.50113501, 1.49823314, 0.89248164, 0.22187079, 1.53805373, 1.9765295, 1.13361568, 1.04908355, 1.68737368, 1.91732452, 1.01937457, 1.45839, 0.49641525, 0.29087155, 0.52824641, 1.29875871, 1.81023541, 1.17030475, 2.24953206, 1.20280498, 0.76399964, 2.16109722, 0.79780421, 0.87154509, ] ) phi = np.array( [ -1.5889514, -3.11092733, -0.61328674, -2.4485381, 2.88058822, 2.02165946, -1.99783366, 2.71235211, 1.41577992, -2.29413676, -2.24565773, -1.55548635, 2.59318232, -1.84672472, -2.33710739, 
2.12111948, 1.87523722, -1.05206575, -2.85381987, -2.22808984, 2.3202034, -2.19004474, -1.90358372, 2.14818373, 3.1030696, -2.86620183, -2.19860123, -0.45468447, -3.0034923, 1.73345011, -2.51716288, 2.49961525, -2.68782986, 2.69699056, 1.78566133, -1.59119705, -2.53378963, -2.02476738, 1.36924987, 2.17600517, 2.38117241, 2.99021511, -1.4218007, -2.44016802, -2.52868164, 3.01531658, 2.50093627, -1.70745826, -2.7863931, -2.97359741, 2.17039906, 2.68424643, 1.77896086, 0.45476215, 0.99734418, -2.73107896, 2.28815009, 2.86276506, 3.09450274, -3.09857384, -1.06955885, -2.83826831, 1.81932195, 2.81296654, ] ) sh = spherical_harmonics( m_values, l_values, theta[:, None], phi[:, None], use_scipy=False ) sh2 = spherical_harmonics( m_values, l_values, theta[:, None], phi[:, None], use_scipy=True ) assert_array_almost_equal(sh, sh2, 8) sh = spherical_harmonics( m_values, l_values, theta[:, None], phi[:, None], use_scipy=False ) assert_array_almost_equal(sh, sh2, 8) def test_anisotropic_power(): for n_coeffs in [6, 15, 28, 45, 66, 91]: for norm_factor in [0.0005, 0.00001]: # Create some really simple cases: coeffs = np.ones((3, n_coeffs)) max_order = calculate_max_order(coeffs.shape[-1]) # For the case where all coeffs == 1, the ap is simply log of the # number of even orders up to the maximal order: analytic = np.log(len(range(2, max_order + 2, 2))) - np.log(norm_factor) answers = [analytic] * 3 apvals = anisotropic_power(coeffs, norm_factor=norm_factor) assert_array_almost_equal(apvals, answers) # Test that this works for single voxel arrays as well: assert_array_almost_equal( anisotropic_power(coeffs[1], norm_factor=norm_factor), answers[1] ) # Test that even when we look at an all-zeros voxel, this # avoids a log-of-zero warning: with warnings.catch_warnings(record=True) as w: assert_equal(anisotropic_power(np.zeros(6)), 0) assert len(w) == 0 def test_calculate_max_order(): """Based on the table in: https://jdtournier.github.io/mrtrix-0.2/tractography/preprocess.html """ orders = [2, 4, 6, 8, 10, 12] n_coeffs_sym = [6, 15, 28, 45, 66, 91] # n = (R + 1)^2 for a full basis n_coeffs_full = [9, 25, 49, 81, 121, 169] for o, n_sym, n_full in zip(orders, n_coeffs_sym, n_coeffs_full): assert_equal(calculate_max_order(n_sym), o) assert_equal(calculate_max_order(n_full, full_basis=True), o) assert_raises(ValueError, calculate_max_order, 29) assert_raises(ValueError, calculate_max_order, 29, full_basis=True) def test_convert_sh_to_full_basis(): hemisphere = hemi_icosahedron.subdivide(n=2) mevals = np.array([[0.0015, 0.0003, 0.0003], [0.0015, 0.0003, 0.0003]]) angles = [(0, 0), (60, 0)] odf = multi_tensor_odf(hemisphere.vertices, mevals, angles, [50, 50]) with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) sh_coeffs = sf_to_sh(odf, hemisphere, sh_order_max=8) full_sh_coeffs = convert_sh_to_full_basis(sh_coeffs) with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) odf_reconst = sh_to_sf( full_sh_coeffs, hemisphere, sh_order_max=8, full_basis=True ) assert_array_almost_equal(odf, odf_reconst, 2) def test_convert_sh_from_legacy(): hemisphere = hemi_icosahedron.subdivide(n=2) mevals = np.array([[0.0015, 0.0003, 0.0003], [0.0015, 0.0003, 0.0003]]) angles = [(0, 0), (60, 0)] odf = multi_tensor_odf(hemisphere.vertices, mevals, angles, [50, 50]) with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, 
category=PendingDeprecationWarning, ) sh_coeffs = sf_to_sh(odf, hemisphere, sh_order_max=8, legacy=True) converted_coeffs = convert_sh_from_legacy(sh_coeffs, "descoteaux07") expected_coeffs = sf_to_sh(odf, hemisphere, sh_order_max=8, legacy=False) assert_array_almost_equal(converted_coeffs, expected_coeffs, 2) with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=tournier07_legacy_msg, category=PendingDeprecationWarning ) sh_coeffs = sf_to_sh( odf, hemisphere, sh_order_max=8, basis_type="tournier07", legacy=True ) converted_coeffs = convert_sh_from_legacy(sh_coeffs, "tournier07") expected_coeffs = sf_to_sh( odf, hemisphere, sh_order_max=8, basis_type="tournier07", legacy=False ) assert_array_almost_equal(converted_coeffs, expected_coeffs, 2) # 2D case odfs = np.array([odf, odf]) with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=tournier07_legacy_msg, category=PendingDeprecationWarning ) sh_coeffs = sf_to_sh( odfs, hemisphere, sh_order_max=8, basis_type="tournier07", legacy=True, full_basis=True, ) converted_coeffs = convert_sh_from_legacy(sh_coeffs, "tournier07", full_basis=True) expected_coeffs = sf_to_sh( odfs, hemisphere, sh_order_max=8, basis_type="tournier07", full_basis=True, legacy=False, ) assert_array_almost_equal(converted_coeffs, expected_coeffs, 2) assert_raises(ValueError, convert_sh_from_legacy, sh_coeffs, "", full_basis=True) def test_convert_sh_to_legacy(): hemisphere = hemi_icosahedron.subdivide(n=2) mevals = np.array([[0.0015, 0.0003, 0.0003], [0.0015, 0.0003, 0.0003]]) angles = [(0, 0), (60, 0)] odf = multi_tensor_odf(hemisphere.vertices, mevals, angles, [50, 50]) sh_coeffs = sf_to_sh(odf, hemisphere, sh_order_max=8, legacy=False) converted_coeffs = convert_sh_to_legacy(sh_coeffs, "descoteaux07") with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) expected_coeffs = sf_to_sh(odf, hemisphere, sh_order_max=8, legacy=True) assert_array_almost_equal(converted_coeffs, expected_coeffs, 2) sh_coeffs = sf_to_sh( odf, hemisphere, sh_order_max=8, basis_type="tournier07", legacy=False ) converted_coeffs = convert_sh_to_legacy(sh_coeffs, "tournier07") with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=tournier07_legacy_msg, category=PendingDeprecationWarning ) expected_coeffs = sf_to_sh( odf, hemisphere, sh_order_max=8, basis_type="tournier07", legacy=True ) assert_array_almost_equal(converted_coeffs, expected_coeffs, 2) # 2D case odfs = np.array([odf, odf]) sh_coeffs = sf_to_sh( odfs, hemisphere, sh_order_max=8, basis_type="tournier07", full_basis=True, legacy=False, ) converted_coeffs = convert_sh_to_legacy(sh_coeffs, "tournier07", full_basis=True) with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=tournier07_legacy_msg, category=PendingDeprecationWarning ) expected_coeffs = sf_to_sh( odfs, hemisphere, sh_order_max=8, basis_type="tournier07", legacy=True, full_basis=True, ) assert_array_almost_equal(converted_coeffs, expected_coeffs, 2) assert_raises(ValueError, convert_sh_to_legacy, sh_coeffs, "", full_basis=True) def test_convert_sh_descoteaux_tournier(): # case: max degree zero sh_coeffs = np.array([1.54]) # there is only an l=0,m=0 coefficient converted_coeffs = convert_sh_descoteaux_tournier(sh_coeffs) assert_array_equal(converted_coeffs, sh_coeffs) # case: max degree 4 sh_coeffs = np.array( [ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, ], dtype=float, ) # expected result is that there is a 
swap m <--> -m expected_coeffs = np.array( [ 1, 6, 5, 4, 3, 2, 15, 14, 13, 12, 11, 10, 9, 8, 7, ], dtype=float, ) converted_coeffs = convert_sh_descoteaux_tournier(sh_coeffs) assert_array_equal(converted_coeffs, expected_coeffs) # case: max degree 4 but with more axes dim0 = 2 dim1 = 3 sh_coeffs_grid = np.array( [np.linspace(10 * i, 10 * i + 1, 6) for i in range(6)] ).reshape(dim0, dim1, 6) converted_coeffs_grid = convert_sh_descoteaux_tournier(sh_coeffs_grid) assert_equal(sh_coeffs_grid.shape, converted_coeffs_grid.shape) for i0 in range(dim0): for i1 in range(dim1): shc = sh_coeffs_grid[i0, i1] # shc is short for "sh_coeffs" converted_coeffs = converted_coeffs_grid[i0, i1] expected_coeffs = np.array( [ shc[0], shc[5], shc[4], shc[3], shc[2], shc[1], ] ) assert_array_equal(converted_coeffs, expected_coeffs) dipy-1.11.0/dipy/reconst/tests/test_shore.py000066400000000000000000000065131476546756600211130ustar00rootroot00000000000000# Tests for shore fitting from math import factorial import warnings import numpy as np import numpy.testing as npt import pytest from scipy.special import gamma, genlaguerre from dipy.data import get_gtab_taiwan_dsi from dipy.reconst.shm import descoteaux07_legacy_msg from dipy.reconst.shore import ShoreModel from dipy.sims.voxel import multi_tensor from dipy.utils.optpkg import optional_package cvxpy, have_cvxpy, _ = optional_package("cvxpy", min_version="1.4.1") needs_cvxpy = pytest.mark.skipif(not have_cvxpy, reason="Requires CVXPY") # Object to hold module global data class _C: pass data = _C() def setup_module(): data.gtab = get_gtab_taiwan_dsi() data.mevals = np.array(([0.0015, 0.0003, 0.0003], [0.0015, 0.0003, 0.0003])) data.angl = [(0, 0), (60, 0)] data.S, sticks = multi_tensor( data.gtab, data.mevals, S0=100.0, angles=data.angl, fractions=[50, 50], snr=None ) data.radial_order = 6 data.zeta = 700 data.lambdaN = 1e-12 data.lambdaL = 1e-12 def test_shore_error(): data.gtab = get_gtab_taiwan_dsi() npt.assert_raises(ValueError, ShoreModel, data.gtab, radial_order=-4) npt.assert_raises(ValueError, ShoreModel, data.gtab, radial_order=7) npt.assert_raises( ValueError, ShoreModel, data.gtab, constrain_e0=False, positive_constraint=True ) @needs_cvxpy def test_shore_positive_constrain(): asm = ShoreModel( data.gtab, radial_order=data.radial_order, zeta=data.zeta, lambdaN=data.lambdaN, lambdaL=data.lambdaL, constrain_e0=True, positive_constraint=True, pos_grid=11, pos_radius=20e-03, ) with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) asmfit = asm.fit(data.S) eap = asmfit.pdf_grid(11, 20e-03) npt.assert_almost_equal(eap[eap < 0].sum(), 0, 3) def test_shore_fitting_no_constrain_e0(): asm = ShoreModel( data.gtab, radial_order=data.radial_order, zeta=data.zeta, lambdaN=data.lambdaN, lambdaL=data.lambdaL, ) with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) asmfit = asm.fit(data.S) npt.assert_almost_equal(compute_e0(asmfit), 1) @needs_cvxpy def test_shore_fitting_constrain_e0(): asm = ShoreModel( data.gtab, radial_order=data.radial_order, zeta=data.zeta, lambdaN=data.lambdaN, lambdaL=data.lambdaL, constrain_e0=True, ) with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) asmfit = asm.fit(data.S) npt.assert_almost_equal(compute_e0(asmfit), 1) def compute_e0(shorefit): signal_0 = 0 for n in 
range(int(shorefit.model.radial_order / 2) + 1): signal_0 += shorefit.shore_coeff[n] * ( genlaguerre(n, 0.5)(0) * ( (factorial(n)) / (2 * np.pi * (shorefit.model.zeta**1.5) * gamma(n + 1.5)) ) ** 0.5 ) return signal_0 dipy-1.11.0/dipy/reconst/tests/test_shore_metrics.py000066400000000000000000000102131476546756600226310ustar00rootroot00000000000000import warnings import numpy as np import numpy.testing as npt from scipy.special import genlaguerre from dipy.data import get_gtab_taiwan_dsi, get_sphere from dipy.reconst.shm import descoteaux07_legacy_msg from dipy.reconst.shore import ShoreModel, shore_indices, shore_matrix, shore_order from dipy.sims.voxel import ( multi_tensor, multi_tensor_msd, multi_tensor_pdf, multi_tensor_rtop, ) def test_shore_metrics(): gtab = get_gtab_taiwan_dsi() mevals = np.array(([0.0015, 0.0003, 0.0003], [0.0015, 0.0003, 0.0003])) angl = [(0, 0), (60, 0)] S, _ = multi_tensor( gtab, mevals, S0=100.0, angles=angl, fractions=[50, 50], snr=None ) # test shore_indices n = 7 ell = 6 m = -4 radial_order, c = shore_order(n, ell, m) n2, l2, m2 = shore_indices(radial_order, c) npt.assert_equal(n, n2) npt.assert_equal(ell, l2) npt.assert_equal(m, m2) radial_order = 6 c = 41 n, ell, m = shore_indices(radial_order, c) radial_order2, c2 = shore_order(n, ell, m) npt.assert_equal(radial_order, radial_order2) npt.assert_equal(c, c2) npt.assert_raises(ValueError, shore_indices, 6, 100) npt.assert_raises(ValueError, shore_order, m, n, ell) # since we are testing without noise we can use higher order and lower # lambdas, with respect to the default. radial_order = 8 zeta = 700 lambdaN = 1e-12 lambdaL = 1e-12 asm = ShoreModel( gtab, radial_order=radial_order, zeta=zeta, lambdaN=lambdaN, lambdaL=lambdaL ) with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) asmfit = asm.fit(S) c_shore = asmfit.shore_coeff with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) cmat = shore_matrix(radial_order, zeta, gtab) S_reconst = np.dot(cmat, c_shore) # test the signal reconstruction S = S / S[0] nmse_signal = np.sqrt(np.sum((S - S_reconst) ** 2)) / (S.sum()) npt.assert_almost_equal(nmse_signal, 0.0, 4) # test if the analytical integral of the pdf is equal to one integral = 0 for n in range(int(radial_order / 2 + 1)): integral += ( c_shore[n] * (np.pi ** (-1.5) * zeta ** (-1.5) * genlaguerre(n, 0.5)(0)) ** 0.5 ) npt.assert_almost_equal(integral, 1.0, 10) # test if the integral of the pdf calculated on a discrete grid is # equal to one with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) pdf_discrete = asmfit.pdf_grid(17, 40e-3) integral = pdf_discrete.sum() npt.assert_almost_equal(integral, 1.0, 1) # compare the shore pdf with the ground truth multi_tensor pdf sphere = get_sphere(name="symmetric724") v = sphere.vertices radius = 10e-3 with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) pdf_shore = asmfit.pdf(v * radius) pdf_mt = multi_tensor_pdf( v * radius, mevals=mevals, angles=angl, fractions=[50, 50] ) nmse_pdf = np.sqrt(np.sum((pdf_mt - pdf_shore) ** 2)) / (pdf_mt.sum()) npt.assert_almost_equal(nmse_pdf, 0.0, 2) # compare the shore rtop with the ground truth multi_tensor rtop rtop_shore_signal = asmfit.rtop_signal() rtop_shore_pdf = asmfit.rtop_pdf() 
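# Aside: rtop_signal integrates the normalized signal E(q) over q-space,
# while rtop_pdf evaluates the propagator at zero displacement, P(0); since
# the propagator is the Fourier transform of E(q), the two quantities are
# analytically identical, hence the agreement demanded to 9 decimals in the
# next assertion.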
npt.assert_almost_equal(rtop_shore_signal, rtop_shore_pdf, 9) rtop_mt = multi_tensor_rtop([0.5, 0.5], mevals=mevals) npt.assert_(rtop_mt / rtop_shore_signal > 0.95) npt.assert_(rtop_mt / rtop_shore_signal < 1.10) # compare the shore msd with the ground truth multi_tensor msd msd_mt = multi_tensor_msd([0.5, 0.5], mevals=mevals) msd_shore = asmfit.msd() npt.assert_(msd_mt / msd_shore > 0.95) npt.assert_(msd_mt / msd_shore < 1.05) dipy-1.11.0/dipy/reconst/tests/test_shore_odf.py000066400000000000000000000100621476546756600217350ustar00rootroot00000000000000import warnings import numpy as np import numpy.testing as npt from dipy.core.sphere_stats import angular_similarity from dipy.core.subdivide_octahedron import create_unit_sphere from dipy.data import default_sphere, get_3shell_gtab, get_isbi2013_2shell_gtab from dipy.direction.peaks import peak_directions from dipy.reconst.odf import gfa from dipy.reconst.shm import descoteaux07_legacy_msg, sh_to_sf from dipy.reconst.shore import ShoreModel, shore_matrix from dipy.reconst.tests.test_dsi import sticks_and_ball_dummies from dipy.sims.voxel import sticks_and_ball from dipy.testing.decorators import set_random_number_generator def test_shore_odf(): gtab = get_isbi2013_2shell_gtab() # load repulsion 724 sphere sphere = default_sphere # load icosahedron sphere sphere2 = create_unit_sphere(recursion_level=5) data, golden_directions = sticks_and_ball( gtab, d=0.0015, S0=100, angles=[(0, 0), (90, 0)], fractions=[50, 50], snr=None ) asm = ShoreModel(gtab, radial_order=6, zeta=700, lambdaN=1e-8, lambdaL=1e-8) # repulsion724 with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) asmfit = asm.fit(data) odf = asmfit.odf(sphere) odf_sh = asmfit.odf_sh() with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) odf_from_sh = sh_to_sf( odf_sh, sphere, sh_order_max=6, basis_type=None, legacy=True ) npt.assert_almost_equal(odf, odf_from_sh, 10) with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) expected_phi = shore_matrix(radial_order=6, zeta=700, gtab=gtab) npt.assert_array_almost_equal( np.dot(expected_phi, asmfit.shore_coeff), asmfit.fitted_signal() ) directions, _, _ = peak_directions( odf, sphere, relative_peak_threshold=0.35, min_separation_angle=25 ) npt.assert_equal(len(directions), 2) npt.assert_almost_equal(angular_similarity(directions, golden_directions), 2, 1) # 5 subdivisions with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) odf = asmfit.odf(sphere2) directions, _, _ = peak_directions( odf, sphere2, relative_peak_threshold=0.35, min_separation_angle=25 ) npt.assert_equal(len(directions), 2) npt.assert_almost_equal(angular_similarity(directions, golden_directions), 2, 1) sb_dummies = sticks_and_ball_dummies(gtab) for sbd in sb_dummies: data, golden_directions = sb_dummies[sbd] asmfit = asm.fit(data) odf = asmfit.odf(sphere2) directions, _, _ = peak_directions( odf, sphere2, relative_peak_threshold=0.35, min_separation_angle=25 ) if len(directions) <= 3: npt.assert_equal(len(directions), len(golden_directions)) if len(directions) > 3: npt.assert_equal(gfa(odf) < 0.1, True) @set_random_number_generator() def test_multivox_shore(rng): gtab = get_3shell_gtab() data = rng.random([20, 30, 1, gtab.gradients.shape[0]]) 
radial_order = 4 zeta = 700 asm = ShoreModel( gtab, radial_order=radial_order, zeta=zeta, lambdaN=1e-8, lambdaL=1e-8 ) with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) asmfit = asm.fit(data) c_shore = asmfit.shore_coeff npt.assert_equal(c_shore.shape[0:3], data.shape[0:3]) npt.assert_equal(np.all(np.isreal(c_shore)), True) dipy-1.11.0/dipy/reconst/tests/test_utils.py000066400000000000000000000075701476546756600211370ustar00rootroot00000000000000import numpy as np import numpy.testing as npt from dipy.reconst.utils import _mask_from_roi, _roi_in_volume, convert_tensors def test_roi_in_volume(): data_shape = (11, 11, 11, 64) roi_center = np.array([5, 5, 5]) roi_radii = np.array([5, 5, 5]) roi_radii_out = _roi_in_volume(data_shape, roi_center, roi_radii) npt.assert_array_equal(roi_radii_out, np.array([5, 5, 5])) roi_radii = np.array([6, 6, 6]) roi_radii_out = _roi_in_volume(data_shape, roi_center, roi_radii) npt.assert_array_equal(roi_radii_out, np.array([5, 5, 5])) roi_center = np.array([4, 4, 4]) roi_radii = np.array([5, 5, 5]) roi_radii_out = _roi_in_volume(data_shape, roi_center, roi_radii) npt.assert_array_equal(roi_radii_out, np.array([4, 4, 4])) data_shape = (11, 11, 1, 64) roi_center = np.array([5, 5, 0]) roi_radii = np.array([5, 5, 0]) roi_radii_out = _roi_in_volume(data_shape, roi_center, roi_radii) npt.assert_array_equal(roi_radii_out, np.array([5, 5, 0])) roi_center = np.array([2, 5, 0]) roi_radii = np.array([5, 10, 2]) roi_radii_out = _roi_in_volume(data_shape, roi_center, roi_radii) npt.assert_array_equal(roi_radii_out, np.array([2, 5, 0])) def test_mask_from_roi(): data_shape = (5, 5, 5) roi_center = (2, 2, 2) roi_radii = (2, 2, 2) mask_gt = np.ones(data_shape) roi_mask = _mask_from_roi(data_shape, roi_center, roi_radii) npt.assert_array_equal(roi_mask, mask_gt) roi_radii = (1, 2, 2) mask_gt = np.zeros(data_shape) mask_gt[1:4, 0:5, 0:5] = 1 roi_mask = _mask_from_roi(data_shape, roi_center, roi_radii) npt.assert_array_equal(roi_mask, mask_gt) roi_radii = (0, 2, 2) mask_gt = np.zeros(data_shape) mask_gt[2, 0:5, 0:5] = 1 roi_mask = _mask_from_roi(data_shape, roi_center, roi_radii) npt.assert_array_equal(roi_mask, mask_gt) def test_convert_tensor(): # Test case 1: Convert from 'dipy' to 'mrtrix' tensor = np.array([[[[1, 2, 3, 4, 5, 6]]]]) converted_tensor = convert_tensors(tensor, "dipy", "mrtrix") expected_tensor = np.array([[[[1, 3, 6, 2, 4, 5]]]]) npt.assert_array_equal(converted_tensor, expected_tensor) # Test case 2: Convert from 'mrtrix' to 'ants' tensor = np.array([[[[1, 2, 3, 4, 5, 6]]]]) converted_tensor = convert_tensors(tensor, "mrtrix", "ants") expected_tensor = np.array([[[[[1, 4, 2, 5, 6, 3]]]]]) npt.assert_array_equal(converted_tensor, expected_tensor) # Test case 3: Convert from 'ants' to 'fsl' tensor = np.array([[[[1, 2, 3, 4, 5, 6]]]]) converted_tensor = convert_tensors(tensor, "ants", "fsl") expected_tensor = np.array([[[[1, 2, 4, 3, 5, 6]]]]) npt.assert_array_equal(converted_tensor, expected_tensor) # Test case 4: Convert from 'fsl' to 'dipy' tensor = np.array([[[[1, 2, 3, 4, 5, 6]]]]) converted_tensor = convert_tensors(tensor, "fsl", "dipy") expected_tensor = np.array([[[[1, 2, 4, 3, 5, 6]]]]) npt.assert_array_equal(converted_tensor, expected_tensor) # Test case 5: Convert from 'dipy' to 'ants' tensor = np.array([[[[1, 2, 3, 4, 5, 6]]]]) converted_tensor = convert_tensors(tensor, "dipy", "ants") expected_tensor = np.array([[[[[1, 2, 3, 4, 5, 6]]]]]) 
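# Aside: the element orderings exercised by these conversion cases, as
# implied by the expected permutations (worth double-checking against each
# tool's documentation before relying on them elsewhere):
#
#     dipy / ants : Dxx, Dxy, Dyy, Dxz, Dyz, Dzz   (ants adds a 5th axis)
#     mrtrix      : Dxx, Dyy, Dzz, Dxy, Dxz, Dyz
#     fsl         : Dxx, Dxy, Dxz, Dyy, Dyz, Dzz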
npt.assert_array_equal(converted_tensor, expected_tensor) # Test case 6: Convert from 'ants' to 'dipy' tensor = np.array([[[[[1, 2, 3, 4, 5, 6]]]]]) converted_tensor = convert_tensors(tensor, "ants", "dipy") expected_tensor = np.array([1, 2, 3, 4, 5, 6]) npt.assert_array_equal(converted_tensor, expected_tensor) # Test case 7: Convert from 'dipy' to 'dipy' tensor = np.array([[[[[1, 2, 3, 4, 5, 6]]]]]) converted_tensor = convert_tensors(tensor, "dipy", "dipy") npt.assert_array_equal(converted_tensor, tensor) npt.assert_raises(ValueError, convert_tensors, tensor, "amico", "dipy") npt.assert_raises(ValueError, convert_tensors, tensor, "dipy", "amico") dipy-1.11.0/dipy/reconst/tests/test_vec_val_vect.py000066400000000000000000000021571476546756600224330ustar00rootroot00000000000000import numpy as np from numpy.random import randn from numpy.testing import assert_almost_equal from dipy.reconst.vec_val_sum import vec_val_vect def make_vecs_vals(shape): return randn(*shape), randn(*(shape[:-2] + shape[-1:])) def test_vec_val_vect(): for shape0 in ((10,), (100,), (10, 12), (12, 10, 5)): for shape1 in ((3, 3), (4, 3), (3, 4)): shape = shape0 + shape1 evecs, evals = make_vecs_vals(shape) res1 = np.einsum("...ij,...j,...kj->...ik", evecs, evals, evecs) assert_almost_equal(res1, vec_val_vect(evecs, evals)) def dumb_sum(vecs, vals): N, rows, cols = vecs.shape res2 = np.zeros((N, rows, rows)) for i in range(N): Q = vecs[i] L = vals[i] res2[i] = np.dot(Q, np.dot(np.diag(L), Q.T)) return res2 def test_vec_val_vect_dumber(): for shape0 in ((10,), (100,)): for shape1 in ((3, 3), (4, 3), (3, 4)): shape = shape0 + shape1 evecs, evals = make_vecs_vals(shape) res1 = dumb_sum(evecs, evals) assert_almost_equal(res1, vec_val_vect(evecs, evals)) dipy-1.11.0/dipy/reconst/tests/test_weights_method.py000066400000000000000000000140521476546756600230020ustar00rootroot00000000000000"""Testing weights methods""" import numpy as np from numpy.testing import ( assert_almost_equal, assert_equal, assert_raises, ) import dipy.core.gradients as grad from dipy.data import get_fnames from dipy.io.gradients import read_bvals_bvecs import dipy.reconst.dti as dti from dipy.reconst.weights_method import ( simple_cutoff, two_eyes_cutoff, weights_method_nlls_m_est, weights_method_wls_m_est, ) from dipy.testing.decorators import set_random_number_generator MIN_POSITIVE_SIGNAL = 0.0001 @set_random_number_generator() def test_outlier_funcs(rng): """Test functions that define outliers.""" # true signal b0 = 1000.0 bval, bvecs = read_bvals_bvecs(*get_fnames(name="55dir_grad")) gtab = grad.gradient_table(bval, bvecs=bvecs) B = bval[1] D_orig = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 1.0, -np.log(b0) * B]) / B design_matrix = dti.design_matrix(gtab) log_pred_sig = np.dot(design_matrix, D_orig) pred_sig = np.exp(log_pred_sig) # noisy signal scale = 1 error = rng.normal(scale=scale, size=pred_sig.shape) Y = pred_sig + error Y[Y < MIN_POSITIVE_SIGNAL] = MIN_POSITIVE_SIGNAL Y[0] = Y[0] * 100 # make 1 signal into a super outlier # other args residuals = Y - pred_sig # may differ from residuals due to signal clip log_residuals = np.log(Y) - log_pred_sig # make some fake leverages (they should sum to npar=7) leverages = np.ones_like(Y) * D_orig.shape[0] / Y.shape[0] C = scale # since we added noise, just set C to scale for outlier_func in [simple_cutoff, two_eyes_cutoff]: # use extremely generous cutoff of 6; find only the super outlier outlier = outlier_func( residuals, log_residuals, pred_sig, design_matrix, leverages, C, cutoff=6 ) 
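# Aside: the "fake leverages" above stand in for the diagonal of the hat
# matrix H = X (X^T X)^{-1} X^T, whose trace equals the number of fitted
# parameters (7 here: six tensor elements plus the log-S0 term), hence the
# uniform value npar / nobs. Quick numeric check (illustrative, numpy only):
#
#     rng = np.random.default_rng(0)
#     X = rng.standard_normal((55, 7))
#     H = X @ np.linalg.inv(X.T @ X) @ X.T
#     np.isclose(np.trace(H), 7.0)  # -> True for full-rank X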
assert_equal(outlier[0], True)
        assert_equal(outlier[1:], np.zeros_like(outlier[1:], dtype=bool))

        # make everything an outlier, by using a cut-off of zero
        outlier = outlier_func(
            residuals, log_residuals, pred_sig, design_matrix, leverages, C, cutoff=0.0
        )
        assert_equal(outlier.sum(), Y.shape[0])


@set_random_number_generator()
def test_weights_funcs(rng):
    """Test functions that define weights."""
    # true signal
    b0 = 1000.0
    bval, bvecs = read_bvals_bvecs(*get_fnames(name="55dir_grad"))
    gtab = grad.gradient_table(bval, bvecs=bvecs)
    B = bval[1]
    D_orig = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 1.0, -np.log(b0) * B]) / B
    design_matrix = dti.design_matrix(gtab)
    log_pred_sig = np.dot(design_matrix, D_orig)
    pred_sig = np.exp(log_pred_sig)

    # noisy signal
    scale = 1
    error = rng.normal(scale=scale, size=pred_sig.shape)
    Y = pred_sig + error
    Y[Y < MIN_POSITIVE_SIGNAL] = MIN_POSITIVE_SIGNAL
    Y[0] = Y[0] * 100  # make 1 signal into a super outlier

    # make some fake leverages (they should sum to npar=7)
    leverages = np.ones_like(Y) * D_orig.shape[0] / Y.shape[0]

    # last_robust starts as None: it is currently only used on the last
    # iteration of WLS, because a clean OLS fit must be performed first.
    # In general, weight functions for WLS need to do this, so it makes sense
    # that they have access to 'last_robust' from previous iterations.
    # TODO: test the number of iterations, to make sure errors are raised
    # TODO: the robust return should be None except on the last iteration for
    # NLLS, and on the second-to-last and last iterations for WLS
    NUM_ITER = 4
    for fit_type, weights_func in zip(
        ["wls", "nlls"], [weights_method_wls_m_est, weights_method_nlls_m_est]
    ):
        # invalid estimator choice
        assert_raises(
            ValueError,
            weights_func,
            Y,
            pred_sig,
            design_matrix,
            leverages,
            1,
            10,
            None,
            m_est="unknown_estimator",
        )

        # test error if <4 iters for wls, or <3 iters for nlls
        assert_raises(
            ValueError,
            weights_func,
            Y,
            pred_sig,
            design_matrix,
            leverages,
            1,
            3 if fit_type == "wls" else 2,
            None,
        )

        # test error if not enough data points
        assert_raises(
            ValueError,
            weights_func,
            Y[0:6],
            pred_sig[0:6],
            design_matrix[0:6],
            leverages[0:6],
            1,
            10,
            None,
        )

        for mest in ["gm", "cauchy"]:
            for outlier_func in [simple_cutoff, two_eyes_cutoff]:
                last_robust = None
                for idx in range(1, NUM_ITER + 1):
                    # neither last, nor second-to-last, iteration
                    weights, robust = weights_func(
                        Y,
                        pred_sig,
                        design_matrix,
                        leverages,
                        idx=idx,
                        total_idx=NUM_ITER,
                        last_robust=last_robust,
                        m_est=mest,
                        cutoff=3,
                        outlier_condition_func=outlier_func,
                    )
                    last_robust = robust

                    if idx >= NUM_ITER - 1 and fit_type == "wls":
                        # for second-to-last and last iter, robust not None
                        assert_equal(robust.shape, Y.shape)
                        # check that the super-outlier has 0 weight
                        assert_almost_equal(weights[0], 0)

                    if idx == NUM_ITER and fit_type == "nlls":
                        # for last iter, robust not None
                        assert_equal(robust.shape, Y.shape)
                        # check that the super-outlier has 0 weight
                        assert_almost_equal(weights[0], 0)

                    if idx < NUM_ITER - 1:
                        # iter okay for both wls & nlls
                        assert_equal(robust, None)
                        assert_equal(np.all(weights > 0), True)
dipy-1.11.0/dipy/reconst/utils.py000066400000000000000000000251741476546756600167320ustar00rootroot00000000000000import numpy as np


def dki_design_matrix(gtab):
    r"""Construct B design matrix for DKI.

    Parameters
    ----------
    gtab : GradientTable
        Measurement directions.

    Returns
    -------
    B : array (N, 22)
        Design matrix or B matrix for the DKI model

    ..
math:: B[j, :] = (Bxx, Bxy, Byy, Bxz, Byz, Bzz, Bxxxx, Byyyy, Bzzzz, Bxxxy, Bxxxz, Bxyyy, Byyyz, Bxzzz, Byzzz, Bxxyy, Bxxzz, Byyzz, Bxxyz, Bxyyz, Bxyzz, BlogS0) """ b = gtab.bvals bvec = gtab.bvecs B = np.zeros((len(b), 22)) B[:, 0] = -b * bvec[:, 0] * bvec[:, 0] B[:, 1] = -2 * b * bvec[:, 0] * bvec[:, 1] B[:, 2] = -b * bvec[:, 1] * bvec[:, 1] B[:, 3] = -2 * b * bvec[:, 0] * bvec[:, 2] B[:, 4] = -2 * b * bvec[:, 1] * bvec[:, 2] B[:, 5] = -b * bvec[:, 2] * bvec[:, 2] B[:, 6] = b * b * bvec[:, 0] ** 4 / 6 B[:, 7] = b * b * bvec[:, 1] ** 4 / 6 B[:, 8] = b * b * bvec[:, 2] ** 4 / 6 B[:, 9] = 4 * b * b * bvec[:, 0] ** 3 * bvec[:, 1] / 6 B[:, 10] = 4 * b * b * bvec[:, 0] ** 3 * bvec[:, 2] / 6 B[:, 11] = 4 * b * b * bvec[:, 1] ** 3 * bvec[:, 0] / 6 B[:, 12] = 4 * b * b * bvec[:, 1] ** 3 * bvec[:, 2] / 6 B[:, 13] = 4 * b * b * bvec[:, 2] ** 3 * bvec[:, 0] / 6 B[:, 14] = 4 * b * b * bvec[:, 2] ** 3 * bvec[:, 1] / 6 B[:, 15] = b * b * bvec[:, 0] ** 2 * bvec[:, 1] ** 2 B[:, 16] = b * b * bvec[:, 0] ** 2 * bvec[:, 2] ** 2 B[:, 17] = b * b * bvec[:, 1] ** 2 * bvec[:, 2] ** 2 B[:, 18] = 2 * b * b * bvec[:, 0] ** 2 * bvec[:, 1] * bvec[:, 2] B[:, 19] = 2 * b * b * bvec[:, 1] ** 2 * bvec[:, 0] * bvec[:, 2] B[:, 20] = 2 * b * b * bvec[:, 2] ** 2 * bvec[:, 0] * bvec[:, 1] B[:, 21] = -np.ones(len(b)) return B def cti_design_matrix(gtab1, gtab2): r"""Construct B design matrix for CTI. Parameters ---------- gtab1: dipy.core.gradients.GradientTable A GradientTable class instance for first DDE diffusion epoch gtab2: dipy.core.gradients.GradientTable A GradientTable class instance for second DDE diffusion epoch Returns ------- B: array(N, 43) Design matrix or B matrix for the CTI model assuming multiple Gaussian Components """ b1 = gtab1.bvals b2 = gtab2.bvals n1 = gtab1.bvecs n2 = gtab2.bvecs B = np.zeros((len(b1), 43)) B[:, 0] = -b1 * (n1[:, 0] ** 2) - b2 * (n2[:, 0] ** 2) B[:, 1] = -2 * b1 * n1[:, 0] * n1[:, 1] - 2 * b2 * n2[:, 0] * n2[:, 1] B[:, 2] = -b1 * n1[:, 1] ** 2 - b2 * n2[:, 1] ** 2 B[:, 3] = -2 * b1 * n1[:, 0] * n1[:, 2] - 2 * b2 * n2[:, 0] * n2[:, 2] B[:, 4] = -2 * b1 * n1[:, 1] * n1[:, 2] - 2 * b2 * n2[:, 1] * n2[:, 2] B[:, 5] = -b1 * n1[:, 2] ** 2 - b2 * n2[:, 2] ** 2 B[:, 6] = (b1 * b1 * n1[:, 0] ** 4 + b2 * b2 * n2[:, 0] ** 4) / 6 B[:, 7] = (b1 * b1 * n1[:, 1] ** 4 + b2 * b2 * n2[:, 1] ** 4) / 6 B[:, 8] = (b1 * b1 * n1[:, 2] ** 4 + b2 * b2 * n2[:, 2] ** 4) / 6 B[:, 9] = ( 2 * b1 * b1 * n1[:, 0] ** 3 * n1[:, 1] + 2 * b2 * b2 * n2[:, 0] ** 3 * n2[:, 1] ) / 3 B[:, 10] = ( 2 * b1 * b1 * n1[:, 0] ** 3 * n1[:, 2] + 2 * b2 * b2 * n2[:, 0] ** 3 * n2[:, 2] ) / 3 B[:, 11] = ( 2 * b1 * b1 * n1[:, 1] ** 3 * n1[:, 0] + 2 * b2 * b2 * n2[:, 1] ** 3 * n2[:, 0] ) / 3 B[:, 12] = ( 2 * b1 * b1 * n1[:, 1] ** 3 * n1[:, 2] + 2 * b2 * b2 * n2[:, 1] ** 3 * n2[:, 2] ) / 3 B[:, 13] = ( 2 * b1 * b1 * n1[:, 2] ** 3 * n1[:, 0] + 2 * b2 * b2 * n2[:, 2] ** 3 * n2[:, 0] ) / 3 B[:, 14] = ( 2 * b1 * b1 * n1[:, 2] ** 3 * n1[:, 1] + 2 * b2 * b2 * n2[:, 2] ** 3 * n2[:, 1] ) / 3 B[:, 15] = ( b1 * b1 * n1[:, 0] ** 2 * n1[:, 1] ** 2 + b2 * b2 * n2[:, 0] ** 2 * n2[:, 1] ** 2 ) B[:, 16] = ( b1 * b1 * n1[:, 0] ** 2 * n1[:, 2] ** 2 + b2 * b2 * n2[:, 0] ** 2 * n2[:, 2] ** 2 ) B[:, 17] = ( b1 * b1 * n1[:, 1] ** 2 * n1[:, 2] ** 2 + b2 * b2 * n2[:, 1] ** 2 * n2[:, 2] ** 2 ) B[:, 18] = ( 2 * b1 * b1 * n1[:, 0] ** 2 * n1[:, 1] * n1[:, 2] + 2 * b2 * b2 * n2[:, 0] ** 2 * n2[:, 1] * n2[:, 2] ) B[:, 19] = ( 2 * b1 * b1 * n1[:, 1] ** 2 * n1[:, 0] * n1[:, 2] + 2 * b2 * b2 * n2[:, 1] ** 2 * n2[:, 0] * n2[:, 2] ) B[:, 20] = ( 2 * b1 * b1 * n1[:, 2] ** 2 
* n1[:, 0] * n1[:, 1] + 2 * b2 * b2 * n2[:, 2] ** 2 * n2[:, 0] * n2[:, 1] ) B[:, 21] = b1 * n1[:, 0] ** 2 * b2 * n2[:, 0] ** 2 B[:, 22] = b1 * n1[:, 1] ** 2 * b2 * n2[:, 1] ** 2 B[:, 23] = b1 * n1[:, 2] ** 2 * b2 * n2[:, 2] ** 2 B[:, 24] = ( b1 * n1[:, 1] ** 2 * b2 * n2[:, 2] ** 2 + b1 * n1[:, 2] ** 2 * b2 * n2[:, 1] ** 2 ) B[:, 25] = ( b1 * n1[:, 0] ** 2 * b2 * n2[:, 2] ** 2 + b1 * n1[:, 2] ** 2 * b2 * n2[:, 0] ** 2 ) B[:, 26] = ( b1 * n1[:, 0] ** 2 * b2 * n2[:, 1] ** 2 + b1 * n1[:, 1] ** 2 * b2 * n2[:, 0] ** 2 ) B[:, 27] = 2 * ( b1 * n1[:, 0] ** 2 * b2 * n2[:, 1] * n2[:, 2] + b1 * n1[:, 1] * n1[:, 2] * b2 * n2[:, 0] ** 2 ) B[:, 28] = 2 * ( b1 * n1[:, 0] ** 2 * b2 * n2[:, 0] * n2[:, 2] + b1 * n1[:, 0] * n1[:, 2] * b2 * n2[:, 0] ** 2 ) B[:, 29] = 2 * ( b1 * n1[:, 0] ** 2 * b2 * n2[:, 0] * n2[:, 1] + b1 * n1[:, 0] * n1[:, 1] * b2 * n2[:, 0] ** 2 ) B[:, 30] = 2 * ( b1 * n1[:, 1] ** 2 * b2 * n2[:, 1] * n2[:, 2] + b1 * n1[:, 1] * n1[:, 2] * b2 * n2[:, 1] ** 2 ) B[:, 31] = 2 * ( b1 * n1[:, 1] ** 2 * b2 * n2[:, 0] * n2[:, 2] + b1 * n1[:, 0] * n1[:, 2] * b2 * n2[:, 1] ** 2 ) B[:, 32] = 2 * ( b1 * n1[:, 1] ** 2 * b2 * n2[:, 1] * n2[:, 0] + b1 * n1[:, 1] * n1[:, 0] * b2 * n2[:, 1] ** 2 ) B[:, 33] = 2 * ( b1 * n1[:, 2] ** 2 * b2 * n2[:, 2] * n2[:, 1] + b1 * n1[:, 2] * n1[:, 1] * b2 * n2[:, 2] ** 2 ) B[:, 34] = 2 * ( b1 * n1[:, 2] ** 2 * b2 * n2[:, 2] * n2[:, 0] + b1 * n1[:, 2] * n1[:, 0] * b2 * n2[:, 2] ** 2 ) B[:, 35] = 2 * ( b1 * n1[:, 2] ** 2 * b2 * n2[:, 0] * n2[:, 1] + b1 * n1[:, 0] * n1[:, 1] * b2 * n2[:, 2] ** 2 ) B[:, 36] = 4 * (b1 * n1[:, 1] * n1[:, 2] * b2 * n2[:, 1] * n2[:, 2]) B[:, 37] = 4 * (b1 * n1[:, 0] * n1[:, 2] * b2 * n2[:, 0] * n2[:, 2]) B[:, 38] = 4 * (b1 * n1[:, 0] * n1[:, 1] * b2 * n2[:, 0] * n2[:, 1]) B[:, 39] = 4 * ( b1 * n1[:, 0] * n1[:, 2] * b2 * n2[:, 1] * n2[:, 2] + b1 * n1[:, 1] * n1[:, 2] * b2 * n2[:, 0] * n2[:, 2] ) B[:, 40] = 4 * ( b1 * n1[:, 0] * n1[:, 1] * b2 * n2[:, 0] * n2[:, 2] + b1 * n1[:, 0] * n1[:, 2] * b2 * n2[:, 0] * n2[:, 1] ) B[:, 41] = 4 * ( b1 * n1[:, 0] * n1[:, 1] * b2 * n2[:, 1] * n2[:, 2] + b1 * n1[:, 1] * n1[:, 2] * b2 * n2[:, 0] * n2[:, 1] ) B[:, 42] = -np.ones(len(b1)) return B def _roi_in_volume(data_shape, roi_center, roi_radii): """Ensures that a cuboid ROI is in a data volume. Parameters ---------- data_shape : ndarray Shape of the data roi_center : ndarray, (3,) Center of ROI in data. roi_radii : ndarray, (3,) Radii of cuboid ROI Returns ------- roi_radii : ndarray, (3,) Truncated radii of cuboid ROI. It remains unchanged if the ROI was already contained inside the data volume. """ for i in range(len(roi_center)): inf_lim = int(roi_center[i] - roi_radii[i]) sup_lim = int(roi_center[i] + roi_radii[i]) if inf_lim < 0 or sup_lim >= data_shape[i]: roi_radii[i] = min(int(roi_center[i]), int(data_shape[i] - roi_center[i])) return roi_radii def _mask_from_roi(data_shape, roi_center, roi_radii): """Produces a mask from a cuboid ROI defined by center and radii. Parameters ---------- data_shape : array-like, (3,) Shape of the data from which the ROI is taken. roi_center : array-like, (3,) Center of ROI in data. roi_radii : array-like, (3,) Radii of cuboid ROI. Returns ------- mask : ndarray Mask of the cuboid ROI. 
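
    Examples
    --------
    A minimal illustration (editor's sketch, not part of the original
    docstring): a zero radius collapses the corresponding axis to the center
    slice, so only one i-j plane of 9 voxels is selected here.

    >>> mask = _mask_from_roi((3, 3, 3), (1, 1, 1), (1, 1, 0))
    >>> int(mask.sum())
    9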
""" ci, cj, ck = roi_center wi, wj, wk = roi_radii interval_i = slice(int(ci - wi), int(ci + wi) + 1) interval_j = slice(int(cj - wj), int(cj + wj) + 1) interval_k = slice(int(ck - wk), int(ck + wk) + 1) if wi == 0: interval_i = ci elif wj == 0: interval_j = cj elif wk == 0: interval_k = ck mask = np.zeros(data_shape, dtype=np.int64) mask[interval_i, interval_j, interval_k] = 1 return mask def convert_tensors(tensor, from_format, to_format): """Convert tensors from one format to another. Parameters ---------- tensor : ndarray Input tensor. from_format : str Format of the input tensor. Options: 'dipy', 'mrtrix', 'ants', 'fsl'. to_format : str Format of the output tensor. Options: 'dipy', 'mrtrix', 'ants', 'fsl'. Notes ----- - DIPY order: [Dxx, Dxy, Dyy, Dxz, Dyz, Dzz]. Shape: [i, j , k, 6]. See: https://github.com/dipy/dipy/blob/master/dipy/reconst/dti.py#L1639 - MRTRIX order: [Dxx, Dyy, Dzz, Dxy, Dxz, Dyz] Shape: [i, j , k, 6]. See: https://mrtrix.readthedocs.io/en/dev/reference/commands/dwi2tensor.html # noqa - ANTS: [Dxx, Dxy, Dyy, Dxz, Dyz, Dzz]. Shape: [i, j , k, 1, 6] - Note the extra dimension (5D) See: https://github.com/ANTsX/ANTs/wiki/Importing-diffusion-tensor-data-from-other-software # noqa - FSL: [Dxx, Dxy, Dxz, Dyy, Dyz, Dzz] Shape: [i, j , k, 6]. (Also used for the Fibernavigator) Ref: https://fsl.fmrib.ox.ac.uk/fsl/fslwiki/FDT/UserGuide """ # noqa: E501 tensor_order = { "fsl": [[0, 1, 3, 2, 4, 5], [0, 1, 3, 2, 4, 5]], "mrtrix": [[0, 3, 1, 4, 5, 2], [0, 2, 5, 1, 3, 4]], "dipy": [[0, 1, 2, 3, 4, 5], [0, 1, 2, 3, 4, 5]], "ants": [[0, 1, 2, 3, 4, 5], [0, 1, 2, 3, 4, 5]], } if from_format.lower() not in tensor_order.keys(): raise ValueError(f"Unknown tensor format: {from_format}") if to_format.lower() not in tensor_order.keys(): raise ValueError(f"Unknown tensor format: {to_format}") if from_format.lower() == to_format.lower(): return tensor if from_format.lower() in ["ants", "dipy"]: tensor = np.squeeze(tensor) if tensor.ndim == 5 else tensor tensor_dipy = tensor[..., tensor_order[from_format.lower()][0]] if to_format.lower() == "ants": tensor_dipy = tensor_dipy[:, :, :, np.newaxis, :] return tensor_dipy elif to_format.lower() == "dipy": return tensor_dipy tensor_reordered = tensor_dipy[..., tensor_order[to_format.lower()][1]] return tensor_reordered dipy-1.11.0/dipy/reconst/vec_val_sum.pyx000066400000000000000000000053541476546756600202670ustar00rootroot00000000000000import numpy as np cimport numpy as cnp cimport cython @cython.boundscheck(False) @cython.wraparound(False) def vec_val_vect(vecs, vals): """ Vectorize `vecs`.diag(`vals`).`vecs`.T for last 2 dimensions of `vecs` Parameters ---------- vecs : shape (..., M, N) array containing tensor in last two dimensions; M, N usually equal to (3, 3) vals : shape (..., N) array diagonal values carried in last dimension, ``...`` shape above must match that for `vecs` Returns ------- res : shape (..., M, M) array For all the dimensions ellided by ``...``, loops to get (M, N) ``vec`` matrix, and (N,) ``vals`` vector, and calculates ``vec.dot(np.diag(val).dot(vec.T)``. 
Raises ------ ValueError : non-matching ``...`` dimensions of `vecs`, `vals` ValueError : non-matching ``N`` dimensions of `vecs`, `vals` Examples -------- Make a 3D array where the first dimension is only 1 >>> vecs = np.arange(9).reshape((1, 3, 3)) >>> vals = np.arange(3).reshape((1, 3)) >>> vec_val_vect(vecs, vals) array([[[ 9., 24., 39.], [ 24., 66., 108.], [ 39., 108., 177.]]]) That's the same as the 2D case (apart from the float casting): >>> vecs = np.arange(9).reshape((3, 3)) >>> vals = np.arange(3) >>> np.dot(vecs, np.dot(np.diag(vals), vecs.T)) array([[ 9, 24, 39], [ 24, 66, 108], [ 39, 108, 177]]) """ vecs = np.asarray(vecs) vals = np.asarray(vals) cdef: cnp.npy_intp t, N, ndim, rows, cols, r, c, in_r_out_c double [:, :, :] vecr double [:, :] valr double [:, :] vec double [:, :] out_vec double [:] val double [:, :, :] out double row_c # Avoid negative indexing to avoid errors with False boundscheck decorator # and Cython > 0.18 ndim = vecs.ndim common_shape = vecs.shape[:(ndim-2)] rows, cols = vecs.shape[ndim-2], vecs.shape[ndim-1] if vals.shape != common_shape + (cols,): raise ValueError('dimensions do not match') N = np.prod(common_shape, dtype=np.int64) vecr = np.array(vecs.reshape((N, rows, cols)), dtype=float) valr = np.array(vals.reshape((N, cols)), dtype=float) out = np.zeros((N, rows, rows)) with nogil: for t in range(N): # loop over the early dimensions vec = vecr[t] val = valr[t] out_vec = out[t] for r in range(rows): for c in range(cols): row_c = vec[r, c] * val[c] for in_r_out_c in range(rows): out_vec[r, in_r_out_c] += row_c * vec[in_r_out_c, c] return np.reshape(out, (common_shape + (rows, rows))) dipy-1.11.0/dipy/reconst/weights_method.py000066400000000000000000000226761476546756600206140ustar00rootroot00000000000000#!/usr/bin/python """Functions for defining weights for iterative fitting methods.""" import numpy as np MIN_POSITIVE_SIGNAL = 0.0001 def simple_cutoff( residuals, log_residuals, pred_sig, design_matrix, leverages, C, cutoff ): """Define outliers based on the signal (rather than the log-signal). Parameters ---------- residuals : ndarray Residuals of the signal (observed signal - fitted signal). log_residuals : ndarray Residuals of the log signal (log observed signal - fitted log signal). pred_sig : ndarray The predicted signal, given a previous fit. design_matrix : ndarray (g, ...) Design matrix holding the covariants used to solve for the regression coefficients. leverages : ndarray The leverages (diagonal of the 'hat matrix') of the fit. C : float Estimate of the standard deviation of the error. cutoff : float, optional Cut-off value for defining outliers based on fitting residuals. Here the condition is:: |residuals| > cut_off x C x HAT_factor where HAT_factor = sqrt(1 - leverages) adjusts for leverage effects. """ leverages[np.isclose(leverages, 1.0)] = 0.99 # avoids rare issues HAT_factor = np.sqrt(1 - leverages) cond = np.abs(residuals) > +cutoff * C * HAT_factor return cond def two_eyes_cutoff( residuals, log_residuals, pred_sig, design_matrix, leverages, C, cutoff ): """Define outliers with two-eyes approach. see :footcite:p:`Collier2015` for more details. Parameters ---------- residuals : ndarray Residuals of the signal (observed signal - fitted signal). log_residuals : ndarray Residuals of the log signal (log observed signal - fitted log signal). pred_sig : ndarray The predicted signal, given a previous fit. design_matrix : ndarray (g, ...) Design matrix holding the covariants used to solve for the regression coefficients. 
    leverages : ndarray
        The leverages (diagonal of the 'hat matrix') of the fit.
    C : float
        Estimate of the standard deviation of the error.
    cutoff : float, optional
        Cut-off value for defining outliers based on fitting residuals,
        see :footcite:p:`Collier2015` for the two-eyes approach used here.

    References
    ----------
    .. footbibliography::
    """
    leverages[np.isclose(leverages, 1.0)] = 0.99  # avoids rare issues
    HAT_factor = np.sqrt(1 - leverages)

    cond = (residuals > +cutoff * C * HAT_factor) | (
        log_residuals < -cutoff * C * HAT_factor / pred_sig
    )

    return cond


def weights_method_wls_m_est(
    data,
    pred_sig,
    design_matrix,
    leverages,
    idx,
    total_idx,
    last_robust,
    *,
    m_est="gm",
    cutoff=3,
    outlier_condition_func=simple_cutoff,
):
    """Calculate M-estimator weights for WLS model.

    Parameters
    ----------
    data : ndarray
        The measured signal.
    pred_sig : ndarray
        The predicted signal, given a previous fit. Has the same shape
        as data.
    design_matrix : ndarray (g, ...)
        Design matrix holding the covariants used to solve for the
        regression coefficients.
    leverages : ndarray
        The leverages (diagonal of the 'hat matrix') of the fit.
    idx : int
        The current iteration number.
    total_idx : int
        The total number of iterations.
    last_robust : ndarray
        True for inlier indices and False for outlier indices. Must have
        the same shape as data.
    m_est : str, optional
        M-estimator weighting scheme to use. Currently, 'gm' (Geman-McClure)
        and 'cauchy' are provided.
    cutoff : float, optional
        Cut-off value for defining outliers based on fitting residuals.
        Will be passed to the outlier_condition_func. Typical example:
        ``|residuals| > cut_off x standard_deviation``
    outlier_condition_func : callable, optional
        A function with args and returns as follows::

            is_an_outlier = outlier_condition_func(residuals, log_residuals,
                pred_sig, design_matrix, leverages, C, cutoff)

    Notes
    -----
    Robust weights are calculated specifically for the WLS problem, i.e. the
    usual form of the WLS problem is accounted for when defining these new
    weights, see :footcite:p:`Collier2015`. On the second-to-last iteration,
    OLS is performed without outliers. On the last iteration, WLS is
    performed without outliers.

    References
    ----------
    .. footbibliography::
    """
    # check if M-estimator is valid (defined in this function)
    if m_est not in ["gm", "cauchy"]:
        raise ValueError("unknown M-estimator choice")

    # min 4 iters: (1) WLS (2) WLS with M-weights (3) clean OLS (4) clean WLS
    if total_idx < 4:
        raise ValueError("require 4+ iterations")

    p, N = design_matrix.shape[-1], data.shape[-1]
    if N <= p:
        raise ValueError("Fewer data points than parameters.")
    factor = 1.4826 * np.sqrt(N / (N - p))

    # handle potential zeros
    pred_sig[pred_sig < MIN_POSITIVE_SIGNAL] = MIN_POSITIVE_SIGNAL

    # calculate quantities needed for C and w
    log_pred_sig = np.log(pred_sig)
    residuals = data - pred_sig
    log_data = np.log(data)
    log_residuals = log_data - log_pred_sig
    z = pred_sig * log_residuals

    # IRLLS paper eq9 corrected (weights for log_residuals^2 are pred_sig^2)
    C = (
        factor
        * np.median(np.abs(z - np.median(z, axis=-1)[..., None]), axis=-1)[..., None]
    )
    C[C == 0] = np.nanmedian(C)  # C could be 0, if all signals = min_signal

    # NOTE: if more M-estimators are added, please update the docs!
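    # Editor's sketch of the forms implemented below (adapted to the
    # log-domain WLS problem): with r = log_residuals and s = C / pred_sig,
    # Geman-McClure uses w = s**2 / (s**2 + r**2)**2 and Cauchy uses
    # w = C**2 / (s**2 + r**2).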
if m_est == "gm": w = (C / pred_sig) ** 2 / ((C / pred_sig) ** 2 + log_residuals**2) ** 2 if m_est == "cauchy": w = C**2 / ((C / pred_sig) ** 2 + log_residuals**2) robust = None if idx == total_idx - 1: # OLS without outliers cond = outlier_condition_func( residuals, log_residuals, pred_sig, design_matrix, leverages, C, cutoff ) robust = np.logical_not(cond) w[~robust] = 0.0 w[robust] = 1.0 if idx == total_idx: # WLS without outliers robust = last_robust w[~robust] = 0.0 w[robust] = pred_sig[robust == 1] ** 2 w[np.isinf(w)] = 0 w[np.isnan(w)] = 0 return w, robust def weights_method_nlls_m_est( data, pred_sig, design_matrix, leverages, idx, total_idx, last_robust, *, m_est="gm", cutoff=3, outlier_condition_func=simple_cutoff, ): """Calculate M-estimator weights for NLLS model. Parameters ---------- data : ndarray The measured signal. pred_sig : ndarray The predicted signal, given a previous fit. Has the same shape as data. design_matrix : ndarray (g, ...) Design matrix holding the covariants used to solve for the regression coefficients. leverages : ndarray The leverages (diagonal of the 'hat matrix') of the fit. idx : int The current iteration number. total_idx : int The total number of iterations. last_robust : ndarray True for inlier indices and False for outlier indices. Must have the same shape as data. m_est : str, optional. M-estimator weighting scheme to use. Currently, 'gm' (Geman-McClure) and 'cauchy' are provided. cutoff : float, optional Cut-off value for defining outliers based on fitting residuals. Will be passed to the outlier_condition_func. Typical example: ``|residuals| > cut_off x standard_deviation`` outlier_condition_func : callable, optional A function with args and returns as follows:: is_an_outlier = outlier_condition_func(residuals, log_residuals, pred_sig, design_matrix, leverages, C, cutoff) Notes ----- Robust weights are calculated specifically for the NLLS problem. On the last iteration, NLLS is performed without outliers. """ # check if M-estimator is valid (defined in this function) if m_est not in ["gm", "cauchy"]: raise ValueError("unknown M-estimator choice") # min 3 iters: (1) NLLS (2) NLLS with M-weights (3) clean NLLS if total_idx < 3: raise ValueError("require 3+ iterations") p, N = design_matrix.shape[-1], data.shape[-1] if N <= p: raise ValueError("Fewer data points than parameters.") factor = 1.4826 * np.sqrt(N / (N - p)) # handle potential zeros pred_sig[pred_sig < MIN_POSITIVE_SIGNAL] = MIN_POSITIVE_SIGNAL # calculate quantities needed for C and w log_pred_sig = np.log(pred_sig) residuals = data - pred_sig log_data = np.log(data) log_residuals = log_data - log_pred_sig C = ( factor * np.median(np.abs(residuals - np.median(residuals)[..., None]), axis=-1)[ ..., None ] ) C[C == 0] = np.nanmedian(C) # C could be 0, if all signals = min_signal # NOTE: if more M-estimators are added, please update the docs! 
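    # Editor's sketch of the forms implemented below (signal domain): with
    # r = residuals, Geman-McClure uses w = C**2 / (C**2 + r**2)**2 and
    # Cauchy uses w = C**2 / (C**2 + r**2).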
if m_est == "gm": w = C**2 / (C**2 + residuals**2) ** 2 if m_est == "cauchy": w = C**2 / (C**2 + residuals**2) robust = None if idx == total_idx: cond = outlier_condition_func( residuals, log_residuals, pred_sig, design_matrix, leverages, C, cutoff ) robust = np.logical_not(cond) w[~robust] = 0.0 w[robust] = 1.0 w[np.isinf(w)] = 0 w[np.isnan(w)] = 0 return w, robust dipy-1.11.0/dipy/segment/000077500000000000000000000000001476546756600152005ustar00rootroot00000000000000dipy-1.11.0/dipy/segment/__init__.py000066400000000000000000000000001476546756600172770ustar00rootroot00000000000000dipy-1.11.0/dipy/segment/bundles.py000066400000000000000000000756361476546756600172270ustar00rootroot00000000000000from itertools import chain import logging from time import time from nibabel.affines import apply_affine import numpy as np from dipy.align.streamlinear import ( BundleMinDistanceAsymmetricMetric, BundleMinDistanceMetric, BundleSumDistanceMatrixMetric, StreamlineLinearRegistration, ) from dipy.segment.clustering import qbx_and_merge from dipy.testing.decorators import warning_for_keywords from dipy.tracking.distances import bundles_distances_mam, bundles_distances_mdf from dipy.tracking.streamline import ( Streamlines, length, nbytes, select_random_set_of_streamlines, set_number_of_points, ) def check_range(streamline, gt, lt): length_s = length(streamline) if (length_s > gt) & (length_s < lt): return True else: return False logger = logging.getLogger(__name__) def bundle_adjacency(dtracks0, dtracks1, threshold): """Find bundle adjacency between two given tracks/bundles See :footcite:p:`Garyfallidis2012a` for further details about the method. Parameters ---------- dtracks0 : Streamlines White matter tract from one subject dtracks1 : Streamlines White matter tract from another subject threshold : float Threshold controls how much strictness user wants while calculating bundle adjacency between two bundles. Smaller threshold means bundles should be strictly adjacent to get higher BA score. Returns ------- res : Float Bundle adjacency score between two tracts References ---------- .. footbibliography:: """ d01 = bundles_distances_mdf(dtracks0, dtracks1) pair12 = [] for i in range(len(dtracks0)): if np.min(d01[i, :]) < threshold: j = np.argmin(d01[i, :]) pair12.append((i, j)) pair12 = np.array(pair12) pair21 = [] # solo2 = [] for i in range(len(dtracks1)): if np.min(d01[:, i]) < threshold: j = np.argmin(d01[:, i]) pair21.append((i, j)) pair21 = np.array(pair21) A = len(pair12) / float(len(dtracks0)) B = len(pair21) / float(len(dtracks1)) res = 0.5 * (A + B) return res @warning_for_keywords() def ba_analysis(recognized_bundle, expert_bundle, *, nb_pts=20, threshold=6.0): """Calculates bundle adjacency score between two given bundles See :footcite:p:`Garyfallidis2012a` for further details about the method. Parameters ---------- recognized_bundle : Streamlines Extracted bundle from the whole brain tractogram (eg: AF_L) expert_bundle : Streamlines Model bundle used as reference while extracting similar type bundle from input tractogram nb_pts : integer, optional Discretizing streamlines to have nb_pts number of points threshold : float, optional Threshold used for in computing bundle adjacency. Threshold controls how much strictness user wants while calculating bundle adjacency between two bundles. Smaller threshold means bundles should be strictly adjacent to get higher BA score. Returns ------- Bundle adjacency score between two tracts References ---------- .. 
footbibliography:: """ recognized_bundle = set_number_of_points(recognized_bundle, nb_points=nb_pts) expert_bundle = set_number_of_points(expert_bundle, nb_points=nb_pts) return bundle_adjacency(recognized_bundle, expert_bundle, threshold) @warning_for_keywords() def cluster_bundle(bundle, clust_thr, rng, *, nb_pts=20, select_randomly=500000): """Clusters bundles See :footcite:p:`Garyfallidis2012a` for further details about the method. Parameters ---------- bundle : Streamlines White matter tract clust_thr : float clustering threshold used in quickbundlesX rng : np.random.Generator numpy's random generator for generating random values. nb_pts: integer, optional Discretizing streamlines to have nb_points number of points select_randomly: integer, optional Randomly select streamlines from the input bundle Returns ------- centroids : Streamlines clustered centroids of the input bundle References ---------- .. footbibliography:: """ model_cluster_map = qbx_and_merge( bundle, clust_thr, nb_pts=nb_pts, select_randomly=select_randomly, rng=rng ) centroids = model_cluster_map.centroids return centroids @warning_for_keywords() def bundle_shape_similarity( bundle1, bundle2, rng, *, clust_thr=(5, 3, 1.5), threshold=6 ): """Calculates bundle shape similarity between two given bundles using bundle adjacency (BA) metric See :footcite:p:`Garyfallidis2012a`, :footcite:p:`Chandio2020a` for further details about the method. Parameters ---------- bundle1 : Streamlines White matter tract from one subject (eg: AF_L) bundle2 : Streamlines White matter tract from another subject (eg: AF_L) rng : np.random.Generator Random number generator. clust_thr : array-like, optional list of clustering thresholds used in quickbundlesX threshold : float, optional Threshold used for in computing bundle adjacency. Threshold controls how much strictness user wants while calculating shape similarity between two bundles. Smaller threshold means bundles should be strictly similar to get higher shape similarity score. Returns ------- ba_value : float Bundle similarity score between two tracts References ---------- .. footbibliography:: """ if len(bundle1) == 0 or len(bundle2) == 0: return 0 bundle1_centroids = cluster_bundle(bundle1, clust_thr=clust_thr, rng=rng) bundle2_centroids = cluster_bundle(bundle2, clust_thr=clust_thr, rng=rng) bundle1_centroids = Streamlines(bundle1_centroids) bundle2_centroids = Streamlines(bundle2_centroids) ba_value = ba_analysis( recognized_bundle=bundle1_centroids, expert_bundle=bundle2_centroids, threshold=threshold, ) return ba_value class RecoBundles: @warning_for_keywords() def __init__( self, streamlines, *, greater_than=50, less_than=1000000, cluster_map=None, clust_thr=15, nb_pts=20, rng=None, verbose=False, ): """Recognition of bundles Extract bundles from a participants' tractograms using model bundles segmented from a different subject or an atlas of bundles. See :footcite:p:`Garyfallidis2018` for the details. Parameters ---------- streamlines : Streamlines The tractogram in which you want to recognize bundles. greater_than : int, optional Keep streamlines that have length greater than this value (default 50) less_than : int, optional Keep streamlines have length less than this value (default 1000000) cluster_map : QB map, optional. Provide existing clustering to start RB faster (default None). clust_thr : float, optional. Distance threshold in mm for clustering `streamlines`. Default: 15. nb_pts : int, optional. 
Number of points per streamline (default 20) rng : np.random.Generator If None define generator in initialization function. Default: None verbose: bool, optional. If True, log information. Notes ----- Make sure that before creating this class that the streamlines and the model bundles are roughly in the same space. Also default thresholds are assumed in RAS 1mm^3 space. You may want to adjust those if your streamlines are not in world coordinates. References ---------- .. footbibliography:: """ map_ind = np.zeros(len(streamlines)) for i in range(len(streamlines)): map_ind[i] = check_range(streamlines[i], greater_than, less_than) map_ind = map_ind.astype(bool) self.orig_indices = np.array(list(range(0, len(streamlines)))) self.filtered_indices = np.array(self.orig_indices[map_ind]) self.streamlines = Streamlines(streamlines[map_ind]) self.nb_streamlines = len(self.streamlines) self.verbose = verbose if self.verbose: logger.info(f"target brain streamlines length = {len(streamlines)}") logger.info( f"After refining target brain streamlines" f" length = {len(self.streamlines)}" ) self.start_thr = [40, 25, 20] if rng is None: self.rng = np.random.default_rng() else: self.rng = rng if cluster_map is None: self._cluster_streamlines(clust_thr=clust_thr, nb_pts=nb_pts) else: if self.verbose: t = time() self.cluster_map = cluster_map self.cluster_map.refdata = self.streamlines self.centroids = self.cluster_map.centroids self.nb_centroids = len(self.centroids) self.indices = [cluster.indices for cluster in self.cluster_map] if self.verbose: logger.info(f" Streamlines have {self.nb_centroids} centroids") logger.info(f" Total loading duration {time() - t:0.3f} s\n") def _cluster_streamlines(self, clust_thr, nb_pts): if self.verbose: t = time() logger.info("# Cluster streamlines using QBx") logger.info(f" Tractogram has {len(self.streamlines)} streamlines") logger.info(f" Size is {nbytes(self.streamlines):0.3f} MB") logger.info(f" Distance threshold {clust_thr:0.3f}") # TODO this needs to become a default parameter thresholds = self.start_thr + [clust_thr] merged_cluster_map = qbx_and_merge( self.streamlines, thresholds, nb_pts=nb_pts, select_randomly=None, rng=self.rng, verbose=self.verbose, ) self.cluster_map = merged_cluster_map self.centroids = merged_cluster_map.centroids self.nb_centroids = len(self.centroids) self.indices = [cluster.indices for cluster in self.cluster_map] if self.verbose: logger.info(f" Streamlines have {self.nb_centroids} centroids") logger.info(f" Total duration {time() - t:0.3f} s\n") @warning_for_keywords() def recognize( self, model_bundle, model_clust_thr, *, reduction_thr=10, reduction_distance="mdf", slr=True, num_threads=None, slr_metric=None, slr_x0=None, slr_bounds=None, slr_select=(400, 600), slr_method="L-BFGS-B", pruning_thr=5, pruning_distance="mdf", ): """Recognize the model_bundle in self.streamlines See :footcite:p:`Garyfallidis2018` for further details about the method. Parameters ---------- model_bundle : Streamlines model bundle streamlines used as a reference to extract similar streamlines from input tractogram model_clust_thr : float MDF distance threshold for the model bundles reduction_thr : float, optional Reduce search space in the target tractogram by (mm) (default 10) reduction_distance : string, optional Reduction distance type can be mdf or mam (default mdf) slr : bool, optional Use Streamline-based Linear Registration (SLR) locally (default True) num_threads : int, optional Number of threads to be used for OpenMP parallelization. 
If None (default) the value of OMP_NUM_THREADS environment variable is used if it is set, otherwise all available threads are used. If < 0 the maximal number of threads minus $|num_threads + 1|$ is used (enter -1 to use as many threads as possible). 0 raises an error. slr_metric : BundleMinDistanceMetric slr_x0 : array or int or str, optional Transformation allowed. translation, rigid, similarity or scaling Initial parametrization for the optimization. If 1D array with: a) 6 elements then only rigid registration is performed with the 3 first elements for translation and 3 for rotation. b) 7 elements also isotropic scaling is performed (similarity). c) 12 elements then translation, rotation (in degrees), scaling and shearing are performed (affine). Here is an example of x0 with 12 elements: ``x0=np.array([0, 10, 0, 40, 0, 0, 2., 1.5, 1, 0.1, -0.5, 0])`` This has translation (0, 10, 0), rotation (40, 0, 0) in degrees, scaling (2., 1.5, 1) and shearing (0.1, -0.5, 0). If int: a) 6 ``x0 = np.array([0, 0, 0, 0, 0, 0])`` b) 7 ``x0 = np.array([0, 0, 0, 0, 0, 0, 1.])`` c) 12 ``x0 = np.array([0, 0, 0, 0, 0, 0, 1., 1., 1, 0, 0, 0])`` If str: a) "rigid" ``x0 = np.array([0, 0, 0, 0, 0, 0])`` b) "similarity" ``x0 = np.array([0, 0, 0, 0, 0, 0, 1.])`` c) "affine" ``x0 = np.array([0, 0, 0, 0, 0, 0, 1., 1., 1, 0, 0, 0])`` slr_bounds : array, optional SLR bounds. slr_select : tuple, optional Select the number of streamlines from model to neighborhood of model to perform the local SLR. slr_method : string, optional Optimization method 'L_BFGS_B' or 'Powell' optimizers can be used. (default 'L-BFGS-B') pruning_thr : float, optional Pruning after reducing the search space. pruning_distance : string, optional Pruning distance type can be mdf or mam. Returns ------- recognized_transf : Streamlines Recognized bundle in the space of the model tractogram recognized_labels : array Indices of recognized bundle in the original tractogram References ---------- .. footbibliography:: """ if self.verbose: t = time() logger.info("## Recognize given bundle ## \n") model_centroids = self._cluster_model_bundle( model_bundle, model_clust_thr=model_clust_thr ) neighb_streamlines, neighb_indices = self._reduce_search_space( model_centroids, reduction_thr=reduction_thr, reduction_distance=reduction_distance, ) if len(neighb_streamlines) == 0: return Streamlines([]), [] if slr: transf_streamlines, slr1_bmd = self._register_neighb_to_model( model_bundle, neighb_streamlines, metric=slr_metric, x0=slr_x0, bounds=slr_bounds, select_model=slr_select[0], select_target=slr_select[1], method=slr_method, num_threads=num_threads, ) else: transf_streamlines = neighb_streamlines pruned_streamlines, labels = self._prune_what_not_in_model( model_centroids, transf_streamlines, neighb_indices, pruning_thr=pruning_thr, pruning_distance=pruning_distance, ) if self.verbose: logger.info( f"Total duration of recognition time" f" is {time()-t:0.3f} s\n" ) return pruned_streamlines, self.filtered_indices[labels] @warning_for_keywords() def refine( self, model_bundle, pruned_streamlines, model_clust_thr, *, reduction_thr=14, reduction_distance="mdf", slr=True, slr_metric=None, slr_x0=None, slr_bounds=None, slr_select=(400, 600), slr_method="L-BFGS-B", pruning_thr=6, pruning_distance="mdf", ): """Refine and recognize the model_bundle in self.streamlines This method expects once pruned streamlines as input. It refines the first output of RecoBundles by applying second local slr (optional), and second pruning. 
This method is useful when we are dealing with noisy data or when we want to extract small tracks from tractograms. This time, search space is created using pruned bundle and not model bundle. See :footcite:p:`Garyfallidis2018`, :footcite:p:`Chandio2020a` for further details about the method. Parameters ---------- model_bundle : Streamlines model bundle streamlines used as a reference to extract similar streamlines from input tractogram pruned_streamlines : Streamlines Recognized bundle from target tractogram by RecoBundles. model_clust_thr : float MDF distance threshold for the model bundles reduction_thr : float Reduce search space by (mm) (default 14) reduction_distance : string Reduction distance type can be mdf or mam (default mdf) slr : bool Use Streamline-based Linear Registration (SLR) locally. slr_metric : BundleMinDistanceMetric Bundle distance metric. slr_x0 : array or int or str Transformation allowed. translation, rigid, similarity or scaling Initial parametrization for the optimization. If 1D array with: a) 6 elements then only rigid registration is performed with the 3 first elements for translation and 3 for rotation. b) 7 elements also isotropic scaling is performed (similarity). c) 12 elements then translation, rotation (in degrees), scaling and shearing are performed (affine). Here is an example of x0 with 12 elements: ``x0=np.array([0, 10, 0, 40, 0, 0, 2., 1.5, 1, 0.1, -0.5, 0])`` This has translation (0, 10, 0), rotation (40, 0, 0) in degrees, scaling (2., 1.5, 1) and shearing (0.1, -0.5, 0). If int: a) 6 ``x0 = np.array([0, 0, 0, 0, 0, 0])`` b) 7 ``x0 = np.array([0, 0, 0, 0, 0, 0, 1.])`` c) 12 ``x0 = np.array([0, 0, 0, 0, 0, 0, 1., 1., 1, 0, 0, 0])`` If str: a) "rigid" ``x0 = np.array([0, 0, 0, 0, 0, 0])`` b) "similarity" ``x0 = np.array([0, 0, 0, 0, 0, 0, 1.])`` c) "affine" ``x0 = np.array([0, 0, 0, 0, 0, 0, 1., 1., 1, 0, 0, 0])`` slr_bounds : array SLR bounds. slr_select : tuple Select the number of streamlines from model to neighborhood of model to perform the local SLR. slr_method : string Optimization method 'L_BFGS_B' or 'Powell' optimizers can be used. pruning_thr : float Pruning after reducing the search space. pruning_distance : string Pruning distance type can be mdf or mam. Returns ------- recognized_transf : Streamlines Recognized bundle in the space of the model tractogram recognized_labels : array Indices of recognized bundle in the original tractogram References ---------- .. 
footbibliography:: """ if self.verbose: t = time() logger.info("## Refine recognize given bundle ## \n") model_centroids = self._cluster_model_bundle( model_bundle, model_clust_thr=model_clust_thr ) pruned_model_centroids = self._cluster_model_bundle( pruned_streamlines, model_clust_thr=model_clust_thr ) neighb_streamlines, neighb_indices = self._reduce_search_space( pruned_model_centroids, reduction_thr=reduction_thr, reduction_distance=reduction_distance, ) if len(neighb_streamlines) == 0: # if no streamlines recognized return Streamlines([]), [] if self.verbose: logger.info("2nd local Slr") if slr: transf_streamlines, slr2_bmd = self._register_neighb_to_model( model_bundle, neighb_streamlines, metric=slr_metric, x0=slr_x0, bounds=slr_bounds, select_model=slr_select[0], select_target=slr_select[1], method=slr_method, ) if self.verbose: logger.info("pruning after 2nd local Slr") pruned_streamlines, labels = self._prune_what_not_in_model( model_centroids, transf_streamlines, neighb_indices, pruning_thr=pruning_thr, pruning_distance=pruning_distance, ) if self.verbose: logger.info( f"Total duration of recognition time" f" is {time()-t:0.3f} s\n" ) return pruned_streamlines, self.filtered_indices[labels] def evaluate_results(self, model_bundle, pruned_streamlines, slr_select): """Compare the similarity between two given bundles, model bundle, and extracted bundle. Parameters ---------- model_bundle : Streamlines Model bundle streamlines. pruned_streamlines : Streamlines Pruned bundle streamlines. slr_select : tuple Select the number of streamlines from model to neighborhood of model to perform the local SLR. Returns ------- ba_value : float bundle adjacency value between model bundle and pruned bundle bmd_value : float bundle minimum distance value between model bundle and pruned bundle """ spruned_streamlines = Streamlines(pruned_streamlines) recog_centroids = self._cluster_model_bundle( spruned_streamlines, model_clust_thr=1.25 ) mod_centroids = self._cluster_model_bundle(model_bundle, model_clust_thr=1.25) recog_centroids = Streamlines(recog_centroids) model_centroids = Streamlines(mod_centroids) ba_value = bundle_adjacency( set_number_of_points(recog_centroids, nb_points=20), set_number_of_points(model_centroids, nb_points=20), threshold=10, ) BMD = BundleMinDistanceMetric() static = select_random_set_of_streamlines(model_bundle, slr_select[0]) moving = select_random_set_of_streamlines(pruned_streamlines, slr_select[1]) nb_pts = 20 static = set_number_of_points(static, nb_points=nb_pts) moving = set_number_of_points(moving, nb_points=nb_pts) BMD.setup(static, moving) x0 = np.array([0, 0, 0, 0, 0, 0, 1.0, 1.0, 1, 0, 0, 0]) # affine bmd_value = BMD.distance(x0.tolist()) return ba_value, bmd_value @warning_for_keywords() def _cluster_model_bundle( self, model_bundle, model_clust_thr, *, nb_pts=20, select_randomly=500000 ): if self.verbose: t = time() logger.info("# Cluster model bundle using QBX") logger.info(f" Model bundle has {len(model_bundle)} streamlines") logger.info(f" Distance threshold {model_clust_thr:0.3f}") thresholds = self.start_thr + [model_clust_thr] model_cluster_map = qbx_and_merge( model_bundle, thresholds, nb_pts=nb_pts, select_randomly=select_randomly, rng=self.rng, ) model_centroids = model_cluster_map.centroids nb_model_centroids = len(model_centroids) if self.verbose: logger.info(f" Model bundle has {nb_model_centroids} centroids") logger.info(f" Duration {time() - t:0.3f} s\n") return model_centroids @warning_for_keywords() def _reduce_search_space( self, 
model_centroids, *, reduction_thr=20, reduction_distance="mdf" ): if self.verbose: t = time() logger.info("# Reduce search space") logger.info(f" Reduction threshold {reduction_thr:0.3f}") logger.info(f" Reduction distance {reduction_distance}") if reduction_distance.lower() == "mdf": if self.verbose: logger.info(" Using MDF") centroid_matrix = bundles_distances_mdf(model_centroids, self.centroids) elif reduction_distance.lower() == "mam": if self.verbose: logger.info(" Using MAM") centroid_matrix = bundles_distances_mam(model_centroids, self.centroids) else: raise ValueError("Given reduction distance not known") centroid_matrix[centroid_matrix > reduction_thr] = np.inf mins = np.min(centroid_matrix, axis=0) close_clusters_indices = list(np.where(mins != np.inf)[0]) close_clusters = self.cluster_map[close_clusters_indices] neighb_indices = [cluster.indices for cluster in close_clusters] neighb_streamlines = Streamlines(chain(*close_clusters)) nb_neighb_streamlines = len(neighb_streamlines) if nb_neighb_streamlines == 0: if self.verbose: logger.info("You have no neighbor streamlines... No bundle recognition") return Streamlines([]), [] if self.verbose: logger.info(f" Number of neighbor streamlines" f" {nb_neighb_streamlines}") logger.info(f" Duration {time() - t:0.3f} s\n") return neighb_streamlines, neighb_indices @warning_for_keywords() def _register_neighb_to_model( self, model_bundle, neighb_streamlines, *, metric=None, x0=None, bounds=None, select_model=400, select_target=600, method="L-BFGS-B", nb_pts=20, num_threads=None, ): if self.verbose: logger.info("# Local SLR of neighb_streamlines to model") t = time() if metric is None or metric == "symmetric": metric = BundleMinDistanceMetric(num_threads=num_threads) if metric == "asymmetric": metric = BundleMinDistanceAsymmetricMetric() if metric == "diagonal": metric = BundleSumDistanceMatrixMetric() if x0 is None: x0 = "similarity" if bounds is None: bounds = [ (-30, 30), (-30, 30), (-30, 30), (-45, 45), (-45, 45), (-45, 45), (0.8, 1.2), ] # TODO this can be speeded up by using directly the centroids static = select_random_set_of_streamlines( model_bundle, select_model, rng=self.rng ) moving = select_random_set_of_streamlines( neighb_streamlines, select_target, rng=self.rng ) static = set_number_of_points(static, nb_points=nb_pts) moving = set_number_of_points(moving, nb_points=nb_pts) slr = StreamlineLinearRegistration( metric=metric, x0=x0, bounds=bounds, method=method ) slm = slr.optimize(static, moving) transf_streamlines = neighb_streamlines.copy() transf_streamlines._data = apply_affine(slm.matrix, transf_streamlines._data) transf_matrix = slm.matrix slr_bmd = slm.fopt slr_iterations = slm.iterations if self.verbose: logger.info(f" Square-root of BMD is {np.sqrt(slr_bmd):.3f}") if slr_iterations is not None: logger.info(f" Number of iterations {slr_iterations}") logger.info(f" Matrix size {slm.matrix.shape}") original = np.get_printoptions() np.set_printoptions(3, suppress=True) logger.info(transf_matrix) logger.info(slm.xopt) np.set_printoptions(**original) logger.info(f" Duration {time() - t:0.3f} s\n") return transf_streamlines, slr_bmd @warning_for_keywords() def _prune_what_not_in_model( self, model_centroids, transf_streamlines, neighb_indices, *, mdf_thr=5, pruning_thr=10, pruning_distance="mdf", ): if self.verbose: if pruning_thr < 0: logger.info("Pruning_thr has to be greater or equal to 0") logger.info("# Prune streamlines using the MDF distance") logger.info(f" Pruning threshold {pruning_thr:0.3f}") logger.info(f" Pruning 
distance {pruning_distance}") t = time() thresholds = [40, 30, 20, 10, mdf_thr] rtransf_cluster_map = qbx_and_merge( transf_streamlines, thresholds, nb_pts=20, select_randomly=500000, rng=self.rng, ) if self.verbose: logger.info(f" QB Duration {time() - t:0.3f} s\n") rtransf_centroids = rtransf_cluster_map.centroids if pruning_distance.lower() == "mdf": if self.verbose: logger.info(" Using MDF") dist_matrix = bundles_distances_mdf(model_centroids, rtransf_centroids) elif pruning_distance.lower() == "mam": if self.verbose: logger.info(" Using MAM") dist_matrix = bundles_distances_mam(model_centroids, rtransf_centroids) else: raise ValueError("Given pruning distance is not available") dist_matrix[np.isnan(dist_matrix)] = np.inf dist_matrix[dist_matrix > pruning_thr] = np.inf pruning_matrix = dist_matrix.copy() if self.verbose: logger.info(f" Pruning matrix size is {pruning_matrix.shape}") mins = np.min(pruning_matrix, axis=0) pruned_indices = [ rtransf_cluster_map[i].indices for i in np.where(mins != np.inf)[0] ] pruned_indices = list(chain(*pruned_indices)) idx = np.array(pruned_indices) if len(idx) == 0: if self.verbose: logger.info(" You have removed all streamlines") return Streamlines([]), [] pruned_streamlines = transf_streamlines[idx] initial_indices = list(chain(*neighb_indices)) final_indices = [initial_indices[i] for i in pruned_indices] labels = final_indices if self.verbose: logger.info(f" Number of centroids: {len(rtransf_centroids)}") logger.info( f" Number of streamlines after pruning:" f" {len(pruned_streamlines)}" ) logger.info(f" Duration {time() - t:0.3f} s\n") return pruned_streamlines, labels dipy-1.11.0/dipy/segment/clustering.py000066400000000000000000000607041476546756600177400ustar00rootroot00000000000000from abc import ABCMeta, abstractmethod import logging import operator from time import time import numpy as np from dipy.segment.featurespeed import ResampleFeature from dipy.segment.metricspeed import ( AveragePointwiseEuclideanMetric, Metric, MinimumAverageDirectFlipMetric, ) from dipy.testing.decorators import warning_for_keywords from dipy.tracking.streamline import nbytes, set_number_of_points logger = logging.getLogger(__name__) class Identity: """Provides identity indexing functionality. This can replace any class supporting indexing used for referencing (e.g. list, tuple). Indexing an instance of this class will return the index provided instead of the element. It does not support slicing. """ def __getitem__(self, idx): return idx class Cluster: """Provides functionalities for interacting with a cluster. Useful container to retrieve index of elements grouped together. If a reference to the data is provided to `cluster_map`, elements will be returned instead of their index when possible. Parameters ---------- cluster_map : `ClusterMap` object Reference to the set of clusters this cluster is being part of. id : int, optional Id of this cluster in its associated `cluster_map` object. refdata : list, optional Actual elements that clustered indices refer to. Notes ----- A cluster does not contain actual data but instead knows how to retrieve them using its `ClusterMap` object. """ @warning_for_keywords() def __init__(self, *, id=0, indices=None, refdata=None): if refdata is None: refdata = Identity() self.id = id self.refdata = refdata self.indices = indices if indices is not None else [] def __len__(self): return len(self.indices) def __getitem__(self, idx): """Gets element(s) through indexing. 
If a reference to the data was provided (via refdata property) elements will be returned instead of their index. Parameters ---------- idx : int, slice or list Index of the element(s) to get. Returns ------- `Cluster` object(s) When `idx` is a int, returns a single element. When `idx` is either a slice or a list, returns a list of elements. """ if isinstance(idx, (int, np.integer)): return self.refdata[self.indices[idx]] elif isinstance(idx, slice): return [self.refdata[i] for i in self.indices[idx]] elif isinstance(idx, list): return [self[i] for i in idx] msg = f"Index must be a int or a slice! Not '{type(idx)}'" raise TypeError(msg) def __iter__(self): return (self[i] for i in range(len(self))) def __str__(self): return "[" + ", ".join(map(str, self.indices)) + "]" def __repr__(self): return f"Cluster({str(self)})" def __eq__(self, other): return isinstance(other, Cluster) and self.indices == other.indices def __ne__(self, other): return not self == other def __cmp__(self, other): raise TypeError("Cannot compare Cluster objects.") def assign(self, *indices): """Assigns indices to this cluster. Parameters ---------- *indices : list of indices Indices to add to this cluster. """ self.indices += indices class ClusterCentroid(Cluster): """Provides functionalities for interacting with a cluster. Useful container to retrieve the indices of elements grouped together and the cluster's centroid. If a reference to the data is provided to `cluster_map`, elements will be returned instead of their index when possible. Parameters ---------- cluster_map : `ClusterMapCentroid` object Reference to the set of clusters this cluster is being part of. id : int, optional Id of this cluster in its associated `cluster_map` object. refdata : list, optional Actual elements that clustered indices refer to. Notes ----- A cluster does not contain actual data but instead knows how to retrieve them using its `ClusterMapCentroid` object. """ @warning_for_keywords() def __init__(self, centroid, *, id=0, indices=None, refdata=None): if refdata is None: refdata = Identity() super(ClusterCentroid, self).__init__(id=id, indices=indices, refdata=refdata) self.centroid = centroid.copy() self.new_centroid = centroid.copy() def __eq__(self, other): return ( isinstance(other, ClusterCentroid) and np.all(self.centroid == other.centroid) and super(ClusterCentroid, self).__eq__(other) ) def assign(self, id_datum, features): """Assigns a data point to this cluster. Parameters ---------- id_datum : int Index of the data point to add to this cluster. features : 2D array Data point's features to modify this cluster's centroid. """ N = len(self) self.new_centroid = ((self.new_centroid * N) + features) / (N + 1.0) super(ClusterCentroid, self).assign(id_datum) def update(self): """Update centroid of this cluster. Returns ------- converged : bool Tells if the centroid has moved. """ converged = np.equal(self.centroid, self.new_centroid) self.centroid = self.new_centroid.copy() return converged class ClusterMap: """Provides functionalities for interacting with clustering outputs. Useful container to create, remove, retrieve and filter clusters. If `refdata` is given, elements will be returned instead of their index when using `Cluster` objects. Parameters ---------- refdata : list Actual elements that clustered indices refer to. 
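
    Examples
    --------
    A minimal sketch (editor's illustration, not from the original docs):

    >>> cm = ClusterMap()
    >>> cm.add_cluster(Cluster(indices=[0, 2]), Cluster(indices=[1]))
    >>> cm.clusters_sizes()
    [2, 1]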
""" @warning_for_keywords() def __init__(self, *, refdata=None): if refdata is None: refdata = Identity() self._clusters = [] self.refdata = refdata @property def clusters(self): return self._clusters @property def refdata(self): return self._refdata @refdata.setter def refdata(self, value): if value is None: value = Identity() self._refdata = value for cluster in self.clusters: cluster.refdata = self._refdata def __len__(self): return len(self.clusters) def __getitem__(self, idx): """Gets cluster(s) through indexing. Parameters ---------- idx : int, slice, list or boolean array Index of the element(s) to get. Returns ------- `Cluster` object(s) When `idx` is an int, returns a single `Cluster` object. When `idx`is either a slice, list or boolean array, returns a list of `Cluster` objects. """ if isinstance(idx, np.ndarray) and idx.dtype == bool: return [self.clusters[i] for i, take_it in enumerate(idx) if take_it] elif isinstance(idx, slice): return [self.clusters[i] for i in range(*idx.indices(len(self)))] elif isinstance(idx, list): return [self.clusters[i] for i in idx] return self.clusters[idx] def __iter__(self): return iter(self.clusters) def __str__(self): return "[" + ", ".join(map(str, self)) + "]" def __repr__(self): return f"ClusterMap({str(self)})" def _richcmp(self, other, op): """Compares this cluster map with another cluster map or an integer. Two `ClusterMap` objects are equal if they contain the same clusters. When comparing a `ClusterMap` object with an integer, the comparison will be performed on the size of the clusters instead. Parameters ---------- other : `ClusterMap` object or int Object to compare to. op : rich comparison operators (see module `operator`) Valid operators are: lt, le, eq, ne, gt or ge. Returns ------- bool or 1D array (bool) When comparing to another `ClusterMap` object, it returns whether the two `ClusterMap` objects contain the same clusters or not. When comparing to an integer the comparison is performed on the clusters sizes, it returns an array of boolean. """ if isinstance(other, ClusterMap): if op is operator.eq: return ( isinstance(other, ClusterMap) and len(self) == len(other) and self.clusters == other.clusters ) elif op is operator.ne: return not self == other raise NotImplementedError( "Can only check if two ClusterMap instances are equal or not." ) elif isinstance(other, int): return np.array([op(len(cluster), other) for cluster in self]) msg = ( "ClusterMap only supports comparison with a int or another" " instance of Clustermap." ) raise NotImplementedError(msg) def __eq__(self, other): return self._richcmp(other, operator.eq) def __ne__(self, other): return self._richcmp(other, operator.ne) def __lt__(self, other): return self._richcmp(other, operator.lt) def __le__(self, other): return self._richcmp(other, operator.le) def __gt__(self, other): return self._richcmp(other, operator.gt) def __ge__(self, other): return self._richcmp(other, operator.ge) def add_cluster(self, *clusters): """Adds one or multiple clusters to this cluster map. Parameters ---------- *clusters : `Cluster` object, ... Cluster(s) to be added in this cluster map. """ for cluster in clusters: self.clusters.append(cluster) cluster.refdata = self.refdata def remove_cluster(self, *clusters): """Remove one or multiple clusters from this cluster map. Parameters ---------- *clusters : `Cluster` object, ... Cluster(s) to be removed from this cluster map. 
""" for cluster in clusters: self.clusters.remove(cluster) def clear(self): """Remove all clusters from this cluster map.""" del self.clusters[:] def size(self): """Gets number of clusters contained in this cluster map.""" return len(self) def clusters_sizes(self): """Gets the size of every cluster contained in this cluster map. Returns ------- list of int Sizes of every cluster in this cluster map. """ return list(map(len, self)) def get_large_clusters(self, min_size): """Gets clusters which contains at least `min_size` elements. Parameters ---------- min_size : int Minimum number of elements a cluster needs to have to be selected. Returns ------- list of `Cluster` objects Clusters having at least `min_size` elements. """ return self[self >= min_size] def get_small_clusters(self, max_size): """Gets clusters which contains at most `max_size` elements. Parameters ---------- max_size : int Maximum number of elements a cluster can have to be selected. Returns ------- list of `Cluster` objects Clusters having at most `max_size` elements. """ return self[self <= max_size] class ClusterMapCentroid(ClusterMap): """Provides functionalities for interacting with clustering outputs that have centroids. Allows to retrieve easily the centroid of every cluster. Also, it is a useful container to create, remove, retrieve and filter clusters. If `refdata` is given, elements will be returned instead of their index when using `ClusterCentroid` objects. Parameters ---------- refdata : list Actual elements that clustered indices refer to. """ @property def centroids(self): return [cluster.centroid for cluster in self.clusters] class Clustering: __metaclass__ = ABCMeta @abstractmethod @warning_for_keywords() def cluster(self, data, *, ordering=None): """Clusters `data`. Subclasses will perform their clustering algorithm here. Parameters ---------- data : list of N-dimensional arrays Each array represents a data point. ordering : iterable of indices, optional Specifies the order in which data points will be clustered. Returns ------- `ClusterMap` object Result of the clustering. """ msg = "Subclass has to define method 'cluster(data, ordering)'!" raise NotImplementedError(msg) class QuickBundles(Clustering): r"""Clusters streamlines using QuickBundles. Given a list of streamlines, the QuickBundles algorithm :footcite:p:`Garyfallidis2012a` sequentially assigns each streamline to its closest bundle in $\mathcal{O}(Nk)$ where $N$ is the number of streamlines and $k$ is the final number of bundles. If for a given streamline its closest bundle is farther than `threshold`, a new bundle is created and the streamline is assigned to it except if the number of bundles has already exceeded `max_nb_clusters`. Parameters ---------- threshold : float The maximum distance from a bundle for a streamline to be still considered as part of it. metric : str or `Metric` object, optional The distance metric to use when comparing two streamlines. By default, the Minimum average Direct-Flip (MDF) distance :footcite:p:`Garyfallidis2012a` is used and streamlines are automatically resampled so they have 12 points. max_nb_clusters : int, optional Limits the creation of bundles. Examples -------- >>> from dipy.segment.clustering import QuickBundles >>> from dipy.data import get_fnames >>> from dipy.io.streamline import load_tractogram >>> from dipy.tracking.streamline import Streamlines >>> fname = get_fnames(name='fornix') >>> fornix = load_tractogram(fname, 'same', ... 
class ClusterMapCentroid(ClusterMap):
    """Provides functionalities for interacting with clustering outputs
    that have centroids.

    Allows one to easily retrieve the centroid of every cluster. Also, it is
    a useful container to create, remove, retrieve and filter clusters.
    If `refdata` is given, elements will be returned instead of their
    index when using `ClusterCentroid` objects.

    Parameters
    ----------
    refdata : list
        Actual elements that clustered indices refer to.
    """

    @property
    def centroids(self):
        return [cluster.centroid for cluster in self.clusters]


class Clustering:
    __metaclass__ = ABCMeta

    @abstractmethod
    @warning_for_keywords()
    def cluster(self, data, *, ordering=None):
        """Clusters `data`.

        Subclasses will perform their clustering algorithm here.

        Parameters
        ----------
        data : list of N-dimensional arrays
            Each array represents a data point.
        ordering : iterable of indices, optional
            Specifies the order in which data points will be clustered.

        Returns
        -------
        `ClusterMap` object
            Result of the clustering.
        """
        msg = "Subclass has to define method 'cluster(data, ordering)'!"
        raise NotImplementedError(msg)


class QuickBundles(Clustering):
    r"""Clusters streamlines using QuickBundles.

    Given a list of streamlines, the QuickBundles algorithm
    :footcite:p:`Garyfallidis2012a` sequentially assigns each streamline to
    its closest bundle in $\mathcal{O}(Nk)$ where $N$ is the number of
    streamlines and $k$ is the final number of bundles. If for a given
    streamline its closest bundle is farther than `threshold`, a new bundle
    is created and the streamline is assigned to it, unless the number of
    bundles has already reached `max_nb_clusters`.

    Parameters
    ----------
    threshold : float
        The maximum distance from a bundle for a streamline to be still
        considered as part of it.
    metric : str or `Metric` object, optional
        The distance metric to use when comparing two streamlines. By default,
        the Minimum average Direct-Flip (MDF) distance
        :footcite:p:`Garyfallidis2012a` is used and streamlines are
        automatically resampled so they have 12 points.
    max_nb_clusters : int, optional
        Limits the creation of bundles.

    Examples
    --------
    >>> from dipy.segment.clustering import QuickBundles
    >>> from dipy.data import get_fnames
    >>> from dipy.io.streamline import load_tractogram
    >>> from dipy.tracking.streamline import Streamlines
    >>> fname = get_fnames(name='fornix')
    >>> fornix = load_tractogram(fname, 'same',
    ...                          bbox_valid_check=False).streamlines
    >>> streamlines = Streamlines(fornix)
    >>> # Segment fornix with a threshold of 10mm and streamlines resampled
    >>> # to 12 points.
    >>> qb = QuickBundles(threshold=10.)
    >>> clusters = qb.cluster(streamlines)
    >>> len(clusters)
    4
    >>> list(map(len, clusters))
    [61, 191, 47, 1]
    >>> # Resampling streamlines differently is done explicitly as follows.
    >>> # Note this has an impact on the speed and the accuracy (tradeoff).
    >>> from dipy.segment.featurespeed import ResampleFeature
    >>> from dipy.segment.metricspeed import AveragePointwiseEuclideanMetric
    >>> feature = ResampleFeature(nb_points=2)
    >>> metric = AveragePointwiseEuclideanMetric(feature)
    >>> qb = QuickBundles(threshold=10., metric=metric)
    >>> clusters = qb.cluster(streamlines)
    >>> len(clusters)
    4
    >>> list(map(len, clusters))
    [58, 142, 72, 28]

    References
    ----------
    .. footbibliography::
    """

    @warning_for_keywords()
    def __init__(self, threshold, *, metric="MDF_12points", max_nb_clusters=None):
        if max_nb_clusters is None:
            max_nb_clusters = np.iinfo("i4").max
        self.threshold = threshold
        self.max_nb_clusters = max_nb_clusters

        if isinstance(metric, MinimumAverageDirectFlipMetric):
            raise ValueError("Use AveragePointwiseEuclideanMetric instead")

        if isinstance(metric, Metric):
            self.metric = metric
        elif metric == "MDF_12points":
            feature = ResampleFeature(nb_points=12)
            self.metric = AveragePointwiseEuclideanMetric(feature)
        else:
            raise ValueError(f"Unknown metric: {metric}")

    @warning_for_keywords()
    def cluster(self, streamlines, *, ordering=None):
        """Clusters `streamlines` into bundles.

        Performs the QuickBundles algorithm using the predefined metric and
        threshold.

        Parameters
        ----------
        streamlines : list of 2D arrays
            Each 2D array represents a sequence of 3D points (points, 3).
        ordering : iterable of indices, optional
            Specifies the order in which data points will be clustered.

        Returns
        -------
        `ClusterMapCentroid` object
            Result of the clustering.
        """
        from dipy.segment.clustering_algorithms import quickbundles

        cluster_map = quickbundles(
            streamlines,
            self.metric,
            threshold=self.threshold,
            max_nb_clusters=self.max_nb_clusters,
            ordering=ordering,
        )
        cluster_map.refdata = streamlines
        return cluster_map

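# A hedged usage sketch for the hierarchical variant defined below (the
# threshold schedule is illustrative only). `QuickBundlesX.cluster` returns a
# `TreeClusterMap`; `get_clusters(level)` flattens one level of the hierarchy
# into a `ClusterMapCentroid`, the last level being the finest:
#
#     thresholds = [40., 30., 20., 10.]
#     qbx = QuickBundlesX(thresholds)
#     tree = qbx.cluster(streamlines)
#     clusters = tree.get_clusters(len(thresholds))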
class QuickBundlesX(Clustering):
    r"""Clusters streamlines using QuickBundlesX.

    See :footcite:p:`Garyfallidis2016` for further details about the method.

    Parameters
    ----------
    thresholds : list of float
        Thresholds to use for each clustering layer. A threshold represents
        the maximum distance from a cluster for a streamline to be still
        considered as part of it.
    metric : str or `Metric` object, optional
        The distance metric to use when comparing two streamlines. By default,
        the Minimum average Direct-Flip (MDF) distance
        :footcite:p:`Garyfallidis2012a` is used and streamlines are
        automatically resampled so they have 12 points.

    References
    ----------
    .. footbibliography::
    """

    @warning_for_keywords()
    def __init__(self, thresholds, *, metric="MDF_12points"):
        self.thresholds = thresholds

        if isinstance(metric, MinimumAverageDirectFlipMetric):
            raise ValueError("Use AveragePointwiseEuclideanMetric instead")

        if isinstance(metric, Metric):
            self.metric = metric
        elif metric == "MDF_12points":
            feature = ResampleFeature(nb_points=12)
            self.metric = AveragePointwiseEuclideanMetric(feature)
        else:
            raise ValueError(f"Unknown metric: {metric}")

    @warning_for_keywords()
    def cluster(self, streamlines, *, ordering=None):
        """Clusters `streamlines` into bundles.

        Performs QuickBundlesX using a predefined metric and thresholds.

        Parameters
        ----------
        streamlines : list of 2D arrays
            Each 2D array represents a sequence of 3D points (points, 3).
        ordering : iterable of indices, optional
            Specifies the order in which data points will be clustered.

        Returns
        -------
        `TreeClusterMap` object
            Result of the clustering.
        """
        from dipy.segment.clustering_algorithms import quickbundlesx

        tree = quickbundlesx(
            streamlines, self.metric, thresholds=self.thresholds, ordering=ordering
        )
        tree.refdata = streamlines
        return tree


class TreeCluster(ClusterCentroid):
    @warning_for_keywords()
    def __init__(self, threshold, centroid, *, indices=None):
        super(TreeCluster, self).__init__(centroid=centroid, indices=indices)
        self.threshold = threshold
        self.parent = None
        self.children = []

    def add(self, child):
        child.parent = self
        self.children.append(child)

    @property
    def is_leaf(self):
        return len(self.children) == 0

    def return_indices(self):
        return self.children


class TreeClusterMap(ClusterMap):
    def __init__(self, root):
        self.root = root
        self.leaves = []

        def _retrieves_leaves(node):
            if node.is_leaf:
                self.leaves.append(node)

        self.traverse_postorder(self.root, _retrieves_leaves)

    @property
    def refdata(self):
        return self._refdata

    @refdata.setter
    def refdata(self, value):
        if value is None:
            value = Identity()

        self._refdata = value

        def _set_refdata(node):
            node.refdata = self._refdata

        self.traverse_postorder(self.root, _set_refdata)

    def traverse_postorder(self, node, visit):
        for child in node.children:
            self.traverse_postorder(child, visit)

        visit(node)

    def iter_preorder(self, node):
        parent_stack = []
        while len(parent_stack) > 0 or node is not None:
            if node is not None:
                yield node
                if len(node.children) > 0:
                    parent_stack += node.children[1:]
                    node = node.children[0]
                else:
                    node = None
            else:
                node = parent_stack.pop()

    def __iter__(self):
        return self.iter_preorder(self.root)

    def get_clusters(self, wanted_level):
        clusters = ClusterMapCentroid()

        def _traverse(node, level=0):
            if level == wanted_level:
                clusters.add_cluster(node)
                return

            for child in node.children:
                _traverse(child, level + 1)

        _traverse(self.root)
        return clusters


@warning_for_keywords()
def qbx_and_merge(
    streamlines, thresholds, *, nb_pts=20, select_randomly=None, rng=None, verbose=False
):
    """Run QuickBundlesX and then run again on the centroids of the last layer.

    Running QuickBundles again on a layer has the effect of merging clusters
    that may originally have been divided because of branching. This function
    helps obtain a result of QuickBundles quality at QuickBundlesX speed. The
    merging phase has low cost because it is applied only to the centroids
    rather than the entire dataset.

    See :footcite:p:`Garyfallidis2012a` and :footcite:p:`Garyfallidis2016`
    for further details about the method.

    Parameters
    ----------
    streamlines : Streamlines
        Streamlines.
    thresholds : sequence
        List of distance thresholds for QuickBundlesX.
    nb_pts : int, optional
        Number of points for discretizing each streamline.
    select_randomly : int, optional
        Randomly select a specific number of streamlines. If None all the
        streamlines are used.
    rng : numpy.random.Generator, optional
        If None then generator is initialized internally.
    verbose : bool, optional
        If True, log information. Default False.

    Returns
    -------
    clusters : `ClusterMapCentroid` object
        Contains the clusters of the last layer of QuickBundlesX after
        merging.
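    Examples
    --------
    A minimal sketch; the threshold schedule below is illustrative, not
    prescriptive.

    >>> from dipy.segment.clustering import qbx_and_merge
    >>> from dipy.data import get_fnames
    >>> from dipy.io.streamline import load_tractogram
    >>> fname = get_fnames(name='fornix')
    >>> streamlines = load_tractogram(fname, 'same',
    ...                               bbox_valid_check=False).streamlines
    >>> clusters = qbx_and_merge(streamlines, [40., 30., 20., 10.])

    References
    ----------
    ..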
footbibliography:: """ t = time() len_s = len(streamlines) if select_randomly is None: select_randomly = len_s if rng is None: rng = np.random.default_rng() indices = rng.choice(len_s, min(select_randomly, len_s), replace=False) sample_streamlines = set_number_of_points(streamlines, nb_points=nb_pts) if verbose: logger.info(f" Resampled to {nb_pts} points") logger.info(f" Size is {nbytes(sample_streamlines):0.3f} MB") logger.info(f" Duration of resampling is {time() - t:0.3f} s") logger.info(" QBX phase starting...") qbx = QuickBundlesX(thresholds, metric=AveragePointwiseEuclideanMetric()) t1 = time() qbx_clusters = qbx.cluster(sample_streamlines, ordering=indices) if verbose: logger.info(" Merging phase starting ...") qbx_merge = QuickBundlesX( [thresholds[-1]], metric=AveragePointwiseEuclideanMetric() ) final_level = len(thresholds) len_qbx_fl = len(qbx_clusters.get_clusters(final_level)) qbx_ordering_final = rng.choice(len_qbx_fl, len_qbx_fl, replace=False) qbx_merged_cluster_map = qbx_merge.cluster( qbx_clusters.get_clusters(final_level).centroids, ordering=qbx_ordering_final ).get_clusters(1) qbx_cluster_map = qbx_clusters.get_clusters(final_level) merged_cluster_map = ClusterMapCentroid() for cluster in qbx_merged_cluster_map: merged_cluster = ClusterCentroid(centroid=cluster.centroid) for i in cluster.indices: merged_cluster.indices.extend(qbx_cluster_map[i].indices) merged_cluster_map.add_cluster(merged_cluster) merged_cluster_map.refdata = streamlines if verbose: logger.info(f" QuickBundlesX time for {select_randomly} random streamlines") logger.info(f" Duration {time() - t1:0.3f} s\n") return merged_cluster_map dipy-1.11.0/dipy/segment/clustering_algorithms.pyx000066400000000000000000000136111476546756600223540ustar00rootroot00000000000000# cython: wraparound=False, cdivision=True, boundscheck=False, initializedcheck=False import itertools import numpy as np from dipy.segment.cythonutils cimport Data2D, shape2tuple from dipy.segment.metricspeed cimport Metric from dipy.segment.clusteringspeed cimport ClustersCentroid, QuickBundles, QuickBundlesX from dipy.segment.clustering import ClusterMapCentroid, ClusterCentroid cdef extern from "stdlib.h" nogil: ctypedef unsigned long size_t void free(void *ptr) void *calloc(size_t nelem, size_t elsize) void *realloc(void *ptr, size_t elsize) void *memset(void *ptr, int value, size_t num) DTYPE = np.float32 DEF BIGGEST_DOUBLE = 1.7976931348623157e+308 # np.finfo('f8').max DEF BIGGEST_FLOAT = 3.4028235e+38 # np.finfo('f4').max DEF BIGGEST_INT = 2147483647 # np.iinfo('i4').max def clusters_centroid2clustermap_centroid(ClustersCentroid clusters_list): """ Converts a `ClustersCentroid` object (Cython) to a `ClusterMapCentroid` object (Python). Only basic functionalities are provided with a `Clusters` object. To have more flexibility, one should use `ClusterMap` object, hence this conversion function. Parameters ---------- clusters_list : `ClustersCentroid` object Result of the clustering contained in a Cython's object. Returns ------- `ClusterMapCentroid` object Result of the clustering contained in a Python's object. 
""" clusters = ClusterMapCentroid() cdef Data2D features shape = clusters_list._centroid_shape for i in range(clusters_list._nb_clusters): features = \ &clusters_list.centroids[i].features[0][0,0] centroid = np.asarray(features) indices = np.asarray( clusters_list.clusters_indices[i]).tolist() clusters.add_cluster(ClusterCentroid(id=i, centroid=centroid, indices=indices)) return clusters def peek(iterable): """ Returns the first element of an iterable and the iterator. """ iterable = iter(iterable) first = next(iterable, None) iterator = itertools.chain([first], iterable) return first, iterator def quickbundles(streamlines, Metric metric, double threshold, long max_nb_clusters=BIGGEST_INT, ordering=None): """ Clusters streamlines using QuickBundles. See :footcite:p:`Garyfallidis2012a` for further details about the method. Parameters ---------- streamlines : list of 2D arrays List of streamlines to cluster. metric : `Metric` object Tells how to compute the distance between two streamlines. threshold : double The maximum distance from a cluster for a streamline to be still considered as part of it. max_nb_clusters : int, optional Limits the creation of bundles. (Default: inf) ordering : iterable of indices, optional Iterate through `data` using the given ordering. Returns ------- `ClusterMapCentroid` object Result of the clustering. References ---------- .. footbibliography:: """ # Threshold of np.inf is not supported, set it to 'biggest_double' threshold = min(threshold, BIGGEST_DOUBLE) # Threshold of -np.inf is not supported, set it to 0 threshold = max(threshold, 0) if ordering is None: ordering = range(len(streamlines)) # Check if `ordering` or `streamlines` are empty first_idx, ordering = peek(ordering) if first_idx is None or len(streamlines) == 0: return ClusterMapCentroid() features_shape = shape2tuple(metric.feature.c_infer_shape(streamlines[first_idx].astype(DTYPE))) cdef QuickBundles qb = QuickBundles(features_shape, metric, threshold, max_nb_clusters) cdef int idx for idx in ordering: streamline = streamlines[idx] if not streamline.flags.writeable or streamline.dtype != DTYPE: streamline = streamline.astype(DTYPE) cluster_id = qb.assignment_step(streamline, idx) # The update step is performed right after the assignment step instead # of after all streamlines have been assigned like k-means algorithm. qb.update_step(cluster_id) return clusters_centroid2clustermap_centroid(qb.clusters) def quickbundlesx(streamlines, Metric metric, thresholds, ordering=None): """ Clusters streamlines using QuickBundlesX. See :footcite:p:`Garyfallidis2012a` and :footcite:p:`Garyfallidis2016` for further details about the method. Parameters ---------- streamlines : list of 2D arrays List of streamlines to cluster. metric : `Metric` object Tells how to compute the distance between two streamlines. thresholds : list of double Thresholds to use for each clustering layer. A threshold represents the maximum distance from a cluster for a streamline to be still considered as part of it. ordering : iterable of indices, optional Iterate through `data` using the given ordering. Returns ------- `TreeClusterMap` object Result of the clustering. Use get_clusters() to get the clusters at a specific level of the hierarchy. References ---------- .. 
footbibliography:: """ if ordering is None: ordering = range(len(streamlines)) # Check if `ordering` or `streamlines` are empty first_idx, ordering = peek(ordering) if first_idx is None or len(streamlines) == 0: return ClusterMapCentroid() features_shape = shape2tuple(metric.feature.c_infer_shape(streamlines[first_idx].astype(DTYPE))) cdef QuickBundlesX qbx = QuickBundlesX(features_shape, thresholds, metric) cdef int idx for idx in ordering: streamline = streamlines[idx] if not streamline.flags.writeable or streamline.dtype != DTYPE: streamline = streamline.astype(DTYPE) qbx.insert(streamline, idx) return qbx.get_tree_cluster_map() dipy-1.11.0/dipy/segment/clusteringspeed.pxd000066400000000000000000000061221476546756600211160ustar00rootroot00000000000000from dipy.segment.cythonutils cimport Data2D, Shape from dipy.segment.metricspeed cimport Metric cimport numpy as cnp cdef struct QuickBundlesStats: long nb_mdf_calls long nb_aabb_calls cdef struct QuickBundlesXStatsLayer: long nb_mdf_calls long nb_aabb_calls cdef struct QuickBundlesXStats: QuickBundlesXStatsLayer* stats_per_layer cdef struct StreamlineInfos: Data2D* features Data2D* features_flip float[6] aabb int idx int use_flip cdef struct Centroid: Data2D* features int size float[6] aabb cdef struct NearestCluster: int id double dist int flip cdef struct Test: Data2D* centroid cdef struct CentroidNode: CentroidNode* father CentroidNode** children int nb_children Data2D* centroid float[6] aabb float threshold int* indices int size Shape centroid_shape int level cdef class Clusters: cdef int _nb_clusters cdef int** clusters_indices cdef int* clusters_size cdef void c_assign(Clusters self, int id_cluster, int id_element, Data2D element) noexcept nogil cdef int c_create_cluster(Clusters self) except -1 nogil cdef int c_size(Clusters self) noexcept nogil cdef class ClustersCentroid(Clusters): cdef Centroid* centroids cdef Centroid* _updated_centroids cdef Shape _centroid_shape cdef float eps cdef void c_assign(ClustersCentroid self, int id_cluster, int id_element, Data2D element) noexcept nogil cdef int c_create_cluster(ClustersCentroid self) except -1 nogil cdef int c_update(ClustersCentroid self, cnp.npy_intp id_cluster) except -1 nogil cdef class QuickBundles: cdef Shape features_shape cdef Data2D features cdef Data2D features_flip cdef ClustersCentroid clusters cdef Metric metric cdef double threshold cdef double aabb_pad cdef int max_nb_clusters cdef int bvh cdef QuickBundlesStats stats cdef NearestCluster find_nearest_cluster(QuickBundles self, Data2D features) noexcept nogil cdef int assignment_step(QuickBundles self, Data2D datum, int datum_id) except -1 nogil cdef void update_step(QuickBundles self, int cluster_id) noexcept nogil cdef object _build_clustermap(self) cdef class QuickBundlesX: cdef CentroidNode* root cdef Metric metric cdef Shape features_shape cdef Data2D features cdef Data2D features_flip cdef double* thresholds cdef int nb_levels cdef object level cdef object clusters cdef QuickBundlesXStats stats cdef StreamlineInfos* current_streamline cdef int _add_child(self, CentroidNode* node) noexcept nogil cdef void _update_node(self, CentroidNode* node, StreamlineInfos* streamline_infos) noexcept nogil cdef void _insert_in(self, CentroidNode* node, StreamlineInfos* streamline_infos, int[:] path) noexcept nogil cpdef object insert(self, Data2D datum, int datum_idx) cdef void traverse_postorder(self, CentroidNode* node, void (*visit)(QuickBundlesX, CentroidNode*)) cdef void _dealloc_node(self, CentroidNode* node) cdef object 
_build_tree_clustermap(self, CentroidNode* node)dipy-1.11.0/dipy/segment/clusteringspeed.pyx000066400000000000000000000605051476546756600211470ustar00rootroot00000000000000# cython: wraparound=False, cdivision=True, boundscheck=False, initializedcheck=False

import numpy as np
cimport numpy as cnp

from dipy.segment.clustering import ClusterCentroid, ClusterMapCentroid
from dipy.segment.clustering import TreeCluster, TreeClusterMap

from libc.math cimport fabs

from dipy.segment.cythonutils cimport Data2D, Shape,\
    tuple2shape, same_shape, create_memview_2d, free_memview_2d

cdef extern from "math.h" nogil:
    double fabs(double x)

cdef extern from "stdlib.h" nogil:
    ctypedef unsigned long size_t
    void free(void *ptr)
    void *malloc(size_t elsize)
    void *calloc(size_t nelem, size_t elsize)
    void *realloc(void *ptr, size_t elsize)
    void *memset(void *ptr, int value, size_t num)

DTYPE = np.float32
DEF BIGGEST_DOUBLE = 1.7976931348623157e+308  # np.finfo('f8').max
DEF BIGGEST_INT = 2147483647  # np.iinfo('i4').max
DEF BIGGEST_FLOAT = 3.4028235e+38  # np.finfo('f4').max
DEF SMALLEST_FLOAT = -3.4028235e+38  # -np.finfo('f4').max


cdef print_node(CentroidNode* node, prepend=""):
    if node == NULL:
        return ""

    cdef Data2D centroid
    centroid = node.centroid[0]
    txt = "{}".format(np.asarray(centroid).tolist())
    txt += " {" + ",".join(map(str, np.asarray(<int[:node.size]> node.indices))) + "}"
    txt += " children({})".format(node.nb_children)
    txt += " count({})".format(node.size)
    txt += " thres({})".format(node.threshold)
    txt += "\n"

    cdef cnp.npy_intp i
    for i in range(node.nb_children):
        txt += prepend
        if i == node.nb_children-1:  # Last child
            txt += "`-- " + print_node(node.children[i], prepend + "    ")
        else:
            txt += "|-- " + print_node(node.children[i], prepend + "|   ")

    return txt


cdef void aabb_creation(Data2D streamline, float* aabb) noexcept nogil:
    """ Creates an AABB enveloping the given streamline.

    Notes
    -----
    This currently assumes the streamline is made of 3D points.
    """
    cdef:
        int N = streamline.shape[0], D = streamline.shape[1]
        int n, d
        float min_[3]
        float max_[3]

    for d in range(D):
        min_[d] = BIGGEST_FLOAT
        max_[d] = SMALLEST_FLOAT
        for n in range(N):
            if max_[d] < streamline[n, d]:
                max_[d] = streamline[n, d]

            if min_[d] > streamline[n, d]:
                min_[d] = streamline[n, d]

        aabb[d + 3] = (max_[d] - min_[d]) / 2.0  # radius
        aabb[d] = min_[d] + aabb[d + 3]  # center


cdef inline int aabb_overlap(float* aabb1, float* aabb2, float padding=0.) noexcept nogil:
    """ Branchless AABB-AABB overlap test.

    Written without conditional branches so that compilers can emit
    vectorized (SIMD-friendly) code for it.
    """
    cdef:
        int x = fabs(aabb1[0] - aabb2[0]) <= (aabb1[3] + aabb2[3] + padding)
        int y = fabs(aabb1[1] - aabb2[1]) <= (aabb1[4] + aabb2[4] + padding)
        int z = fabs(aabb1[2] - aabb2[2]) <= (aabb1[5] + aabb2[5] + padding)

    return x & y & z


cdef CentroidNode* create_empty_node(Shape centroid_shape, float threshold) nogil:
    # Important: because the CentroidNode structure contains an uninitialized memview,
    # we need to zero-initialize the allocated memory (calloc or via memset),
    # otherwise during assignment CPython will try to call _PYX_XDEC_MEMVIEW on it and segfault.
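    # Layout note (descriptive only): the float[6] aabb field stores the box
    # center in aabb[0..2] and the per-axis half-extents ("radius") in
    # aabb[3..5], matching what aabb_creation() writes and aabb_overlap() reads.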
    cdef CentroidNode* node = <CentroidNode*> calloc(1, sizeof(CentroidNode))
    node.centroid = create_memview_2d(centroid_shape.size, centroid_shape.dims)
    #node.updated_centroid = <float*> calloc(centroid_shape.size, sizeof(float))
    node.father = NULL
    node.children = NULL
    node.nb_children = 0
    node.aabb[0] = 0
    node.aabb[1] = 0
    node.aabb[2] = 0
    node.aabb[3] = BIGGEST_FLOAT
    node.aabb[4] = BIGGEST_FLOAT
    node.aabb[5] = BIGGEST_FLOAT
    node.threshold = threshold
    node.indices = NULL
    node.size = 0
    node.centroid_shape = centroid_shape
    return node


cdef class QuickBundlesX:
    def __init__(self, features_shape, levels_thresholds, Metric metric):
        self.metric = metric
        self.features_shape = tuple2shape(features_shape)
        self.nb_levels = len(levels_thresholds)
        self.thresholds = <double*> malloc(self.nb_levels*sizeof(double))

        cdef cnp.npy_intp i
        for i in range(self.nb_levels):
            self.thresholds[i] = levels_thresholds[i]

        self.root = create_empty_node(self.features_shape, self.thresholds[0])
        self.level = None
        self.clusters = None
        self.stats.stats_per_layer = <QuickBundlesXStatsLayer*> calloc(self.nb_levels, sizeof(QuickBundlesXStatsLayer))

        # Important: because the StreamlineInfos structure contains uninitialized memviews,
        # we need to zero-initialize the allocated memory (calloc or via memset),
        # otherwise during assignment CPython will try to call _PYX_XDEC_MEMVIEW on it and segfault.
        self.current_streamline = <StreamlineInfos*> calloc(1, sizeof(StreamlineInfos))
        self.current_streamline.features = create_memview_2d(self.features_shape.size, self.features_shape.dims)
        self.current_streamline.features_flip = create_memview_2d(self.features_shape.size, self.features_shape.dims)

    def __dealloc__(self):
        self.traverse_postorder(self.root, self._dealloc_node)
        self.root = NULL

        if self.thresholds != NULL:
            free(self.thresholds)
            self.thresholds = NULL

        if self.stats.stats_per_layer != NULL:
            free(self.stats.stats_per_layer)
            self.stats.stats_per_layer = NULL

        if self.current_streamline != NULL:
            free(self.current_streamline)
            self.current_streamline = NULL

    cdef int _add_child(self, CentroidNode* node) noexcept nogil:
        cdef double threshold = 0.0  # Leaf node doesn't need threshold.
        if node.level+1 < self.nb_levels:
            threshold = self.thresholds[node.level+1]

        cdef CentroidNode* child = create_empty_node(self.features_shape, threshold)
        child.level = node.level+1

        # Add new child.
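        # The children array grows by one slot per insertion via realloc;
        # assuming the small per-node fan-out typical of QBX trees, the
        # repeated copies stay cheap in practice.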
        child.father = node
        node.children = <CentroidNode**> realloc(node.children, (node.nb_children+1)*sizeof(CentroidNode*))
        node.children[node.nb_children] = child
        node.nb_children += 1
        return node.nb_children-1

    cdef void _update_node(self, CentroidNode* node, StreamlineInfos* streamline_infos) noexcept nogil:
        cdef Data2D element = streamline_infos.features[0]
        cdef int C = node.size
        cdef cnp.npy_intp n, d

        if streamline_infos.use_flip:
            element = streamline_infos.features_flip[0]

        # Update centroid
        cdef Data2D centroid = node.centroid[0]
        cdef cnp.npy_intp N = centroid.shape[0], D = centroid.shape[1]
        for n in range(N):
            for d in range(D):
                centroid[n, d] = ((centroid[n, d] * C) + element[n, d]) / (C+1)

        # Update list of indices
        node.indices = <int*> realloc(node.indices, (C+1)*sizeof(int))
        node.indices[C] = streamline_infos.idx
        node.size += 1

        # Update AABB
        aabb_creation(centroid, node.aabb)

    cdef void _insert_in(self, CentroidNode* node, StreamlineInfos* streamline_infos, int[:] path) noexcept nogil:
        cdef:
            float dist, dist_flip
            cnp.npy_intp k
            NearestCluster nearest_cluster

        self._update_node(node, streamline_infos)

        if node.level == self.nb_levels:
            return

        nearest_cluster.id = -1
        nearest_cluster.dist = BIGGEST_DOUBLE
        nearest_cluster.flip = 0

        for k in range(node.nb_children):
            # Check whether the streamline's AABB collides with the current child's.
            self.stats.stats_per_layer[node.level].nb_aabb_calls += 1
            if aabb_overlap(node.children[k].aabb, streamline_infos.aabb, node.threshold):
                self.stats.stats_per_layer[node.level].nb_mdf_calls += 1
                dist = self.metric.c_dist(node.children[k].centroid[0], streamline_infos.features[0])

                # Keep track of the nearest cluster
                if dist < nearest_cluster.dist:
                    nearest_cluster.dist = dist
                    nearest_cluster.id = k
                    nearest_cluster.flip = 0

                self.stats.stats_per_layer[node.level].nb_mdf_calls += 1
                dist_flip = self.metric.c_dist(node.children[k].centroid[0], streamline_infos.features_flip[0])

                if dist_flip < nearest_cluster.dist:
                    nearest_cluster.dist = dist_flip
                    nearest_cluster.id = k
                    nearest_cluster.flip = 1

        if nearest_cluster.dist > node.threshold:
            # No near cluster, create a new one.
            nearest_cluster.id = self._add_child(node)

        streamline_infos.use_flip = nearest_cluster.flip
        path[node.level] = nearest_cluster.id
        self._insert_in(node.children[nearest_cluster.id], streamline_infos, path)

    cpdef object insert(self, Data2D datum, int datum_idx):
        self.metric.feature.c_extract(datum, self.current_streamline.features[0])
        self.metric.feature.c_extract(datum[::-1], self.current_streamline.features_flip[0])
        self.current_streamline.idx = datum_idx
        aabb_creation(self.current_streamline.features[0], self.current_streamline.aabb)

        path = -1 * np.ones(self.nb_levels, dtype=np.int32)
        self._insert_in(self.root, self.current_streamline, path)
        return path

    def __str__(self):
        return print_node(self.root)

    cdef void traverse_postorder(self, CentroidNode* node, void (*visit)(QuickBundlesX, CentroidNode*)):
        cdef cnp.npy_intp i
        for i in range(node.nb_children):
            self.traverse_postorder(node.children[i], visit)

        visit(self, node)

    cdef void _dealloc_node(self, CentroidNode* node):
        free_memview_2d(node.centroid)

        if node.children != NULL:
            free(node.children)
            node.children = NULL

        free(node.indices)
        node.indices = NULL

        # No need to free node.father, only the current node.
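        # By this point every child has already been released: _dealloc_node
        # is only invoked through traverse_postorder(), which visits (and
        # frees) children before their parent.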
free(node) cdef object _build_tree_clustermap(self, CentroidNode* node): cdef Data2D centroid centroid = &node.centroid[0][0,0] tree_cluster = TreeCluster(threshold=node.threshold, centroid=np.asarray(centroid), indices=np.asarray( node.indices).copy()) cdef cnp.npy_intp i for i in range(node.nb_children): tree_cluster.add(self._build_tree_clustermap(node.children[i])) return tree_cluster def get_tree_cluster_map(self): return TreeClusterMap(self._build_tree_clustermap(self.root)) def get_stats(self): stats_per_level = [] for i in range(self.nb_levels): stats_per_level.append({'nb_mdf_calls': self.stats.stats_per_layer[i].nb_mdf_calls, 'nb_aabb_calls': self.stats.stats_per_layer[i].nb_aabb_calls, 'threshold': self.thresholds[i]}) stats = {'stats_per_level': stats_per_level} return stats cdef class Clusters: """ Provides Cython functionalities to interact with clustering outputs. This class allows one to create clusters and assign elements to them. Assignments of a cluster are represented as a list of element indices. """ def __init__(Clusters self): self._nb_clusters = 0 self.clusters_indices = NULL self.clusters_size = NULL def __dealloc__(Clusters self): """ Deallocates memory created with `c_create_cluster` and `c_assign`. """ for i in range(self._nb_clusters): free(self.clusters_indices[i]) self.clusters_indices[i] = NULL free(self.clusters_indices) self.clusters_indices = NULL free(self.clusters_size) self.clusters_size = NULL cdef int c_size(Clusters self) noexcept nogil: """ Returns the number of clusters. """ return self._nb_clusters cdef void c_assign(Clusters self, int id_cluster, int id_element, Data2D element) noexcept nogil: """ Assigns an element to a cluster. Parameters ---------- id_cluster : int Index of the cluster to which the element will be assigned. id_element : int Index of the element to assign. element : 2d array (float) Data of the element to assign. """ cdef cnp.npy_intp C = self.clusters_size[id_cluster] self.clusters_indices[id_cluster] = realloc(self.clusters_indices[id_cluster], (C+1)*sizeof(int)) self.clusters_indices[id_cluster][C] = id_element self.clusters_size[id_cluster] += 1 cdef int c_create_cluster(Clusters self) except -1 nogil: """ Creates a cluster and adds it at the end of the list. Returns ------- id_cluster : int Index of the new cluster. """ self.clusters_indices = realloc(self.clusters_indices, (self._nb_clusters+1)*sizeof(int*)) self.clusters_indices[self._nb_clusters] = calloc(0, sizeof(int)) self.clusters_size = realloc(self.clusters_size, (self._nb_clusters+1)*sizeof(int)) self.clusters_size[self._nb_clusters] = 0 self._nb_clusters += 1 return self._nb_clusters - 1 cdef class ClustersCentroid(Clusters): """ Provides Cython functionalities to interact with clustering outputs having the notion of cluster's centroid. This class allows one to create clusters, assign elements to them and update their centroid. Parameters ---------- centroid_shape : int, tuple of int Information about the shape of the centroid. eps : float, optional Consider the centroid has not changed if the changes per dimension are less than this epsilon. 
(Default: 1e-6) """ def __init__(ClustersCentroid self, centroid_shape, float eps=1e-6, *args, **kwargs): Clusters.__init__(self, *args, **kwargs) if isinstance(centroid_shape, int): centroid_shape = (1, centroid_shape) if not isinstance(centroid_shape, tuple): raise ValueError("'centroid_shape' must be a tuple or a int.") self._centroid_shape = tuple2shape(centroid_shape) self.centroids = NULL self._updated_centroids = NULL self.eps = eps def __dealloc__(ClustersCentroid self): """ Deallocates memory created with `c_create_cluster` and `c_assign`. Notes ----- The `__dealloc__` method of the superclass is automatically called: https://docs.cython.org/src/userguide/special_methods.html#finalization-method-dealloc """ cdef cnp.npy_intp i for i in range(self._nb_clusters): free_memview_2d(self.centroids[i].features) free_memview_2d(self._updated_centroids[i].features) free(self.centroids) self.centroids = NULL free(self._updated_centroids) self._updated_centroids = NULL cdef void c_assign(ClustersCentroid self, int id_cluster, int id_element, Data2D element) noexcept nogil: """ Assigns an element to a cluster. In addition of keeping element's index, an updated version of the cluster's centroid is computed. The centroid is the average of all elements in a cluster. Parameters ---------- id_cluster : int Index of the cluster to which the element will be assigned. id_element : int Index of the element to assign. element : 2d array (float) Data of the element to assign. """ cdef Data2D updated_centroid = self._updated_centroids[id_cluster].features[0] cdef cnp.npy_intp C = self.clusters_size[id_cluster] cdef cnp.npy_intp n, d cdef cnp.npy_intp N = updated_centroid.shape[0], D = updated_centroid.shape[1] for n in range(N): for d in range(D): updated_centroid[n, d] = ((updated_centroid[n, d] * C) + element[n, d]) / (C+1) Clusters.c_assign(self, id_cluster, id_element, element) cdef int c_update(ClustersCentroid self, cnp.npy_intp id_cluster) except -1 nogil: """ Update the centroid of a cluster. Parameters ---------- id_cluster : int Index of the cluster of which its centroid will be updated. Returns ------- int Tells whether the centroid has changed or not, i.e. converged. """ cdef Data2D centroid = self.centroids[id_cluster].features[0] cdef Data2D updated_centroid = self._updated_centroids[id_cluster].features[0] cdef cnp.npy_intp N = updated_centroid.shape[0], D = centroid.shape[1] cdef cnp.npy_intp n, d cdef int converged = 1 for n in range(N): for d in range(D): converged &= fabs(centroid[n, d] - updated_centroid[n, d]) < self.eps centroid[n, d] = updated_centroid[n, d] #cdef float * aabb = &self.centroids[id_cluster].aabb[0] aabb_creation(centroid, self.centroids[id_cluster].aabb) return converged cdef int c_create_cluster(ClustersCentroid self) except -1 nogil: """ Creates a cluster and adds it at the end of the list. Returns ------- id_cluster : int Index of the new cluster. 
""" self.centroids = realloc(self.centroids, (self._nb_clusters+1) * sizeof(Centroid)) # Zero-initialize the Centroid structure memset(&self.centroids[self._nb_clusters], 0, sizeof(Centroid)) self._updated_centroids = realloc(self._updated_centroids, (self._nb_clusters+1) * sizeof(Centroid)) # Zero-initialize the new Centroid structure memset(&self._updated_centroids[self._nb_clusters], 0, sizeof(Centroid)) self.centroids[self._nb_clusters].features = create_memview_2d(self._centroid_shape.size, self._centroid_shape.dims) self._updated_centroids[self._nb_clusters].features = create_memview_2d(self._centroid_shape.size, self._centroid_shape.dims) aabb_creation(self.centroids[self._nb_clusters].features[0], self.centroids[self._nb_clusters].aabb) return Clusters.c_create_cluster(self) cdef class QuickBundles: def __init__(QuickBundles self, features_shape, Metric metric, double threshold, int max_nb_clusters=BIGGEST_INT): self.metric = metric self.features_shape = tuple2shape(features_shape) self.threshold = threshold self.max_nb_clusters = max_nb_clusters self.clusters = ClustersCentroid(features_shape) self.features = np.empty(features_shape, dtype=DTYPE) self.features_flip = np.empty(features_shape, dtype=DTYPE) self.stats.nb_mdf_calls = 0 self.stats.nb_aabb_calls = 0 cdef NearestCluster find_nearest_cluster(QuickBundles self, Data2D features) noexcept nogil: """ Finds the nearest cluster of a datum given its `features` vector. Parameters ---------- features : 2D array Features of a datum. Returns ------- `NearestCluster` object Nearest cluster to `features` according to the given metric. """ cdef: cnp.npy_intp k double dist NearestCluster nearest_cluster float aabb[6] nearest_cluster.id = -1 nearest_cluster.dist = BIGGEST_DOUBLE nearest_cluster.flip = 0 for k in range(self.clusters.c_size()): self.stats.nb_mdf_calls += 1 dist = self.metric.c_dist(self.clusters.centroids[k].features[0], features) # Keep track of the nearest cluster if dist < nearest_cluster.dist: nearest_cluster.dist = dist nearest_cluster.id = k return nearest_cluster cdef int assignment_step(QuickBundles self, Data2D datum, int datum_id) except -1 nogil: """ Compute the assignment step of the QuickBundles algorithm. It will assign a datum to its closest cluster according to a given metric. If the distance between the datum and its closest cluster is greater than the specified threshold, a new cluster is created and the datum is assigned to it. Parameters ---------- datum : 2D array The datum to assign. datum_id : int ID of the datum, usually its index. Returns ------- int Index of the cluster the datum has been assigned to. """ cdef: Data2D features_to_add = self.features NearestCluster nearest_cluster, nearest_cluster_flip Shape features_shape = self.metric.feature.c_infer_shape(datum) # Check if datum is compatible with the metric if not same_shape(features_shape, self.features_shape): with gil: raise ValueError("All features do not have the same shape! 
QuickBundles requires this to compute centroids!") # Check if datum is compatible with the metric if not self.metric.c_are_compatible(features_shape, self.features_shape): with gil: raise ValueError("Data features' shapes must be compatible according to the metric used!") # Find nearest cluster to datum self.metric.feature.c_extract(datum, self.features) nearest_cluster = self.find_nearest_cluster(self.features) # Find nearest cluster to s_i_flip if metric is not order invariant if not self.metric.feature.is_order_invariant: self.metric.feature.c_extract(datum[::-1], self.features_flip) nearest_cluster_flip = self.find_nearest_cluster(self.features_flip) # If we found a lower distance using a flipped datum, # add the flipped version instead if nearest_cluster_flip.dist < nearest_cluster.dist: nearest_cluster.id = nearest_cluster_flip.id nearest_cluster.dist = nearest_cluster_flip.dist features_to_add = self.features_flip # Check if distance with the nearest cluster is below some threshold # or if we already have the maximum number of clusters. # If the former or the latter is true, assign datum to its nearest cluster # otherwise create a new cluster and assign the datum to it. if not (nearest_cluster.dist < self.threshold or self.clusters.c_size() >= self.max_nb_clusters): nearest_cluster.id = self.clusters.c_create_cluster() self.clusters.c_assign(nearest_cluster.id, datum_id, features_to_add) return nearest_cluster.id cdef void update_step(QuickBundles self, int cluster_id) noexcept nogil: """ Compute the update step of the QuickBundles algorithm. It will update the centroid of a cluster given its index. Parameters ---------- cluster_id : int ID of the cluster to update. """ self.clusters.c_update(cluster_id) def get_stats(self): stats = {'nb_mdf_calls': self.stats.nb_mdf_calls, 'nb_aabb_calls': self.stats.nb_aabb_calls} return stats cdef object _build_clustermap(self): clusters = ClusterMapCentroid() cdef int k for k in range(self.clusters.c_size()): cluster = ClusterCentroid(np.asarray(self.clusters.centroids[k].features[0]).copy()) cluster.indices = np.asarray( self.clusters.clusters_indices[k]).copy() clusters.add_cluster(cluster) return clusters def get_cluster_map(self): return self._build_clustermap() def evaluate_aabb_checks(): cdef: Data2D feature1 = np.array([[1, 0, 0], [1, 1, 0], [1 + np.sqrt(2)/2., 1 + np.sqrt(2)/2., 0]], dtype='f4') Data2D feature2 = np.array([[1, 0, 0], [1, 1, 0], [1 + np.sqrt(2)/2., 1 + np.sqrt(2)/2., 0]], dtype='f4') + np.array([0.5, 0, 0], dtype='f4') float[6] aabb1 float[6] aabb2 int res aabb_creation(feature1, &aabb1[0]) aabb_creation(feature2, &aabb2[0]) res = aabb_overlap(&aabb1[0], &aabb2[0]) return np.asarray(aabb1), np.asarray(aabb2), res dipy-1.11.0/dipy/segment/cythonutils.pxd000066400000000000000000000015501476546756600203030ustar00rootroot00000000000000# cython: wraparound=False, cdivision=True, boundscheck=False cdef extern from "cythonutils.h": enum: MAX_NDIM ctypedef float[:] Data1D ctypedef float[:,:] Data2D ctypedef float[:,:,:] Data3D ctypedef float[:,:,:,:] Data4D ctypedef float[:,:,:,:,:] Data5D ctypedef float[:,:,:,:,:,:] Data6D ctypedef float[:,:,:,:,:,:,:] Data7D ctypedef fused Data: Data1D Data2D Data3D Data4D Data5D Data6D Data7D cdef struct Shape: Py_ssize_t ndim Py_ssize_t dims[MAX_NDIM] Py_ssize_t size cdef Shape shape_from_memview(Data data) noexcept nogil cdef Shape tuple2shape(dims) except * cdef shape2tuple(Shape shape) cdef int same_shape(Shape shape1, Shape shape2) noexcept nogil cdef Data2D* create_memview_2d(Py_ssize_t 
buffer_size, Py_ssize_t dims[MAX_NDIM]) noexcept nogil
    cdef void free_memview_2d(Data2D* memview) noexcept nogil
dipy-1.11.0/dipy/segment/cythonutils.pyx000066400000000000000000000072131476546756600203320ustar00rootroot00000000000000# cython: wraparound=False, cdivision=True, boundscheck=False

import numpy as np
cimport numpy as cnp

cdef extern from "stdlib.h" nogil:
    ctypedef unsigned long size_t
    void free(void *ptr)
    void *calloc(size_t nelem, size_t elsize)

cdef Py_ssize_t sizeof_memviewslice = 2 * sizeof(cnp.npy_intp) + 3 * sizeof(cnp.npy_intp) * 8


cdef Shape shape_from_memview(Data data) noexcept nogil:
    """ Retrieves shape from a memoryview object.

    Parameters
    ----------
    data : memoryview object (float)
        array for which the shape information is retrieved

    Returns
    -------
    shape : `Shape` struct
        structure containing information about the shape of `data`
    """
    cdef Shape shape
    cdef cnp.npy_intp i
    shape.ndim = 0
    shape.size = 1
    for i in range(MAX_NDIM):
        shape.dims[i] = data.shape[i]
        if shape.dims[i] > 0:
            shape.size *= shape.dims[i]
            shape.ndim += 1

    return shape


cdef Shape tuple2shape(dims) except *:
    """ Converts a Python tuple into a Cython `Shape` struct.

    Parameters
    ----------
    dims : tuple of int
        size of each dimension

    Returns
    -------
    shape : `Shape` struct
        structure containing shape information obtained from `dims`
    """
    assert len(dims) < MAX_NDIM
    cdef Shape shape
    cdef cnp.npy_intp i
    shape.ndim = len(dims)
    shape.size = np.prod(dims)
    for i in range(shape.ndim):
        shape.dims[i] = dims[i]

    return shape


cdef shape2tuple(Shape shape):
    """ Converts a Cython `Shape` struct into a Python tuple.

    Parameters
    ----------
    shape : `Shape` struct
        structure containing shape information

    Returns
    -------
    dims : tuple of int
        size of each dimension
    """
    cdef cnp.npy_intp i
    dims = []
    for i in range(shape.ndim):
        dims.append(shape.dims[i])

    return tuple(dims)


cdef int same_shape(Shape shape1, Shape shape2) noexcept nogil:
    """ Checks if two shapes are the same.

    Two shapes are equal if they have the same number of dimensions
    and each dimension's size matches.

    Parameters
    ----------
    shape1 : `Shape` struct
        structure containing shape information
    shape2 : `Shape` struct
        structure containing shape information

    Returns
    -------
    same_shape : int (0 or 1)
        tells whether the shapes are equal
    """
    cdef cnp.npy_intp i
    cdef int same_shape = True
    same_shape &= shape1.ndim == shape2.ndim

    for i in range(shape1.ndim):
        same_shape &= shape1.dims[i] == shape2.dims[i]

    return same_shape


cdef Data2D* create_memview_2d(Py_ssize_t buffer_size, Py_ssize_t dims[MAX_NDIM]) noexcept nogil:
    """ Create a lightweight version of a Cython memory view.
    Parameters
    ----------
    buffer_size : int
        data size
    dims : array
        desired memory view shape

    Returns
    -------
    Data2D* : memview pointer
        pointer to the allocated float memory view
    """
    cdef Data2D* memview
    memview = <Data2D*> calloc(1, sizeof_memviewslice)
    memview.shape[0] = dims[0]
    memview.shape[1] = dims[1]
    memview.strides[0] = dims[1] * sizeof(float)
    memview.strides[1] = sizeof(float)
    memview.suboffsets[0] = -1
    memview.suboffsets[1] = -1
    memview._data = <char*> calloc(buffer_size, sizeof(float))
    return memview


cdef void free_memview_2d(Data2D* memview) noexcept nogil:
    """ Free a lightweight version of a Cython memory view.

    Parameters
    ----------
    memview : Data2D*
        pointer to the memory view to free
    """
    free(&(memview[0][0, 0]))
    memview[0] = None  # Necessary to decrease refcount
    free(memview)
dipy-1.11.0/dipy/segment/featurespeed.pxd000066400000000000000000000015121476546756600203700ustar00rootroot00000000000000from dipy.segment.cythonutils cimport Data2D, Shape

cimport numpy as cnp


cdef class Feature:
    cdef int is_order_invariant

    cdef Shape c_infer_shape(Feature self, Data2D datum) noexcept nogil
    cdef void c_extract(Feature self, Data2D datum, Data2D out) noexcept nogil

    cpdef infer_shape(Feature self, datum)
    cpdef extract(Feature self, datum)


cdef class CythonFeature(Feature):
    pass


# The IdentityFeature class returns the datum as-is. This is useful for
# metrics that do not require any pre-processing.
cdef class IdentityFeature(CythonFeature):
    pass


# The ResampleFeature class returns the datum resampled. This is useful for
# metrics like SumPointwiseEuclideanMetric that require a consistent
# number of points across data.
cdef class ResampleFeature(CythonFeature):
    cdef cnp.npy_intp nb_points
dipy-1.11.0/dipy/segment/featurespeed.pyx000066400000000000000000000313131476546756600204170ustar00rootroot00000000000000# cython: wraparound=False, cdivision=True, boundscheck=False

import numpy as np
cimport numpy as cnp

from dipy.segment.cythonutils cimport tuple2shape, shape2tuple, shape_from_memview
from dipy.tracking.streamlinespeed cimport c_set_number_of_points, c_length


cdef class Feature:
    """ Extracts features from a sequential datum.

    A sequence of N-dimensional points is represented as a 2D array with
    shape (nb_points, nb_dimensions).

    Parameters
    ----------
    is_order_invariant : bool (optional)
        tells if this feature is invariant to the sequence's ordering. This
        means starting from either extremity produces the same features.
        (Default: True)

    Notes
    -----
    When subclassing `Feature`, one only needs to override the `extract` and
    `infer_shape` methods.
    """
    def __init__(Feature self, is_order_invariant=True):
        # By default every feature is order invariant.
        self.is_order_invariant = is_order_invariant

    property is_order_invariant:
        """ Is this feature invariant to the sequence's ordering """
        def __get__(Feature self):
            return bool(self.is_order_invariant)

        def __set__(self, int value):
            self.is_order_invariant = bool(value)

    cdef Shape c_infer_shape(Feature self, Data2D datum) noexcept nogil:
        """ Cython version of `Feature.infer_shape`. """
        with gil:
            shape = self.infer_shape(np.asarray(datum))
            if np.asarray(shape).ndim == 0:
                return tuple2shape((1, shape))
            elif len(shape) == 1:
                return tuple2shape((1,) + shape)
            elif len(shape) == 2:
                return tuple2shape(shape)
            else:
                raise TypeError("Only scalar, 1D or 2D array features are supported!")

    cdef void c_extract(Feature self, Data2D datum, Data2D out) noexcept nogil:
        """ Cython version of `Feature.extract`.
""" cdef Data2D c_features with gil: features = np.asarray(self.extract(np.asarray(datum))).astype(np.float32) if features.ndim == 0: features = features[np.newaxis, np.newaxis] elif features.ndim == 1: features = features[np.newaxis] elif features.ndim == 2: pass else: raise TypeError("Only scalar, 1D or 2D array features are supported!") c_features = features out[:] = c_features cpdef infer_shape(Feature self, datum): """ Infers the shape of features extracted from a sequential datum. Parameters ---------- datum : 2D array Sequence of N-dimensional points. Returns ------- int, 1-tuple or 2-tuple Shape of the features. """ raise NotImplementedError("Feature's subclasses must implement method `infer_shape(self, datum)`!") cpdef extract(Feature self, datum): """ Extracts features from a sequential datum. Parameters ---------- datum : 2D array Sequence of N-dimensional points. Returns ------- 2D array Features extracted from `datum`. """ raise NotImplementedError("Feature's subclasses must implement method `extract(self, datum)`!") cdef class CythonFeature(Feature): """ Extracts features from a sequential datum. A sequence of N-dimensional points is represented as a 2D array with shape (nb_points, nb_dimensions). Parameters ---------- is_order_invariant : bool, optional Tells if this feature is invariant to the sequence's ordering (Default: True). Notes ----- By default, when inheriting from `CythonFeature`, Python methods will call their C version (e.g. `CythonFeature.extract` -> `self.c_extract`). """ cpdef infer_shape(CythonFeature self, datum): """ Infers the shape of features extracted from a sequential datum. Parameters ---------- datum : 2D array Sequence of N-dimensional points. Returns ------- tuple Shape of the features. Notes ----- This method calls its Cython version `self.c_infer_shape` accordingly. """ if not datum.flags.writeable or datum.dtype is not np.float32: datum = datum.astype(np.float32) return shape2tuple(self.c_infer_shape(datum)) cpdef extract(CythonFeature self, datum): """ Extracts features from a sequential datum. Parameters ---------- datum : 2D array Sequence of N-dimensional points. Returns ------- 2D array Features extracted from `datum`. Notes ----- This method calls its Cython version `self.c_extract` accordingly. """ if not datum.flags.writeable or datum.dtype is not np.float32: datum = datum.astype(np.float32) shape = shape2tuple(self.c_infer_shape(datum)) cdef Data2D out = np.empty(shape, dtype=datum.dtype) self.c_extract(datum, out) return np.asarray(out) cdef class IdentityFeature(CythonFeature): """ Extracts features from a sequential datum. A sequence of N-dimensional points is represented as a 2D array with shape (nb_points, nb_dimensions). The features being extracted are the actual sequence's points. This is useful for metric that does not require any pre-processing. """ def __init__(IdentityFeature self): super(IdentityFeature, self).__init__(is_order_invariant=False) cdef Shape c_infer_shape(IdentityFeature self, Data2D datum) noexcept nogil: return shape_from_memview(datum) cdef void c_extract(IdentityFeature self, Data2D datum, Data2D out) noexcept nogil: cdef: int N = datum.shape[0], D = datum.shape[1] int n, d for n in range(N): for d in range(D): out[n, d] = datum[n, d] cdef class ResampleFeature(CythonFeature): """Extract features from a sequential datum. A sequence of N-dimensional points is represented as a 2D array with shape (nb_points, nb_dimensions). The features being extracted are the points of the sequence once resampled. 
This is useful for metrics requiring a constant number of points for all streamlines. """ def __init__(ResampleFeature self, cnp.npy_intp nb_points): super(ResampleFeature, self).__init__(is_order_invariant=False) self.nb_points = nb_points if nb_points <= 0: raise ValueError("ResampleFeature: `nb_points` must be strictly positive: {0}".format(nb_points)) cdef Shape c_infer_shape(ResampleFeature self, Data2D datum) noexcept nogil: cdef Shape shape = shape_from_memview(datum) shape.dims[0] = self.nb_points return shape cdef void c_extract(ResampleFeature self, Data2D datum, Data2D out) noexcept nogil: c_set_number_of_points(datum, out) cdef class CenterOfMassFeature(CythonFeature): """Extract features from a sequential datum. A sequence of N-dimensional points is represented as a 2D array with shape (nb_points, nb_dimensions). The feature being extracted consists of one N-dimensional point representing the mean of the points, i.e. the center of mass. """ def __init__(CenterOfMassFeature self): super(CenterOfMassFeature, self).__init__(is_order_invariant=True) cdef Shape c_infer_shape(CenterOfMassFeature self, Data2D datum) noexcept nogil: cdef Shape shape = shape_from_memview(datum) shape.ndim = 2 shape.dims[0] = 1 shape.dims[1] = datum.shape[1] shape.size = datum.shape[1] return shape cdef void c_extract(CenterOfMassFeature self, Data2D datum, Data2D out) noexcept nogil: cdef int N = datum.shape[0], D = datum.shape[1] cdef cnp.npy_intp i, d for d in range(D): out[0, d] = 0 for i in range(N): for d in range(D): out[0, d] += datum[i, d] for d in range(D): out[0, d] /= N cdef class MidpointFeature(CythonFeature): r"""Extract features from a sequential datum. A sequence of N-dimensional points is represented as a 2D array with shape (nb_points, nb_dimensions). The feature being extracted consists of one N-dimensional point representing the middle point of the sequence (i.e. `nb_points//2` th point). """ def __init__(MidpointFeature self): super(MidpointFeature, self).__init__(is_order_invariant=False) cdef Shape c_infer_shape(MidpointFeature self, Data2D datum) noexcept nogil: cdef Shape shape = shape_from_memview(datum) shape.ndim = 2 shape.dims[0] = 1 shape.dims[1] = datum.shape[1] shape.size = datum.shape[1] return shape cdef void c_extract(MidpointFeature self, Data2D datum, Data2D out) noexcept nogil: cdef: int N = datum.shape[0], D = datum.shape[1] int mid = N/2 int d for d in range(D): out[0, d] = datum[mid, d] cdef class ArcLengthFeature(CythonFeature): """ Extracts features from a sequential datum. A sequence of N-dimensional points is represented as a 2D array with shape (nb_points, nb_dimensions). The feature being extracted consists of one scalar representing the arc length of the sequence (i.e. the sum of the length of all segments). """ def __init__(ArcLengthFeature self): super(ArcLengthFeature, self).__init__(is_order_invariant=True) cdef Shape c_infer_shape(ArcLengthFeature self, Data2D datum) noexcept nogil: cdef Shape shape = shape_from_memview(datum) shape.ndim = 2 shape.dims[0] = 1 shape.dims[1] = 1 shape.size = 1 return shape cdef void c_extract(ArcLengthFeature self, Data2D datum, Data2D out) noexcept nogil: out[0, 0] = c_length(datum) cdef class VectorOfEndpointsFeature(CythonFeature): """ Extracts features from a sequential datum. A sequence of N-dimensional points is represented as a 2D array with shape (nb_points, nb_dimensions). 
    The feature being extracted consists of one vector in the N-dimensional
    space pointing from one end-point of the sequence to the other
    (i.e. `S[-1]-S[0]`).
    """
    def __init__(VectorOfEndpointsFeature self):
        super(VectorOfEndpointsFeature, self).__init__(is_order_invariant=False)

    cdef Shape c_infer_shape(VectorOfEndpointsFeature self, Data2D datum) noexcept nogil:
        cdef Shape shape = shape_from_memview(datum)
        shape.ndim = 2
        shape.dims[0] = 1
        shape.dims[1] = datum.shape[1]
        shape.size = datum.shape[1]
        return shape

    cdef void c_extract(VectorOfEndpointsFeature self, Data2D datum, Data2D out) noexcept nogil:
        cdef:
            int N = datum.shape[0], D = datum.shape[1]
            int d

        for d in range(D):
            out[0, d] = datum[N-1, d] - datum[0, d]


cpdef infer_shape(Feature feature, data):
    """ Infers shape of the features extracted from data.

    Parameters
    ----------
    feature : `Feature` object
        Tells how to infer shape of the features.
    data : list of 2D arrays
        List of sequences of N-dimensional points.

    Returns
    -------
    list of tuples
        Shapes of the features inferred from `data`.
    """
    single_datum = False
    if type(data) is np.ndarray:
        single_datum = True
        data = [data]

    if len(data) == 0:
        return []

    shapes = []
    cdef cnp.npy_intp i
    for i in range(0, len(data)):
        datum = data[i] if data[i].flags.writeable else data[i].astype(np.float32)
        shapes.append(shape2tuple(feature.c_infer_shape(datum)))

    if single_datum:
        return shapes[0]
    else:
        return shapes


cpdef extract(Feature feature, data):
    """ Extracts features from data.

    Parameters
    ----------
    feature : `Feature` object
        Tells how to extract features from the data.
    data : list of 2D arrays
        List of sequences of N-dimensional points.

    Returns
    -------
    list of 2D arrays
        List of features extracted from `data`.
    """
    single_datum = False
    if type(data) is np.ndarray:
        single_datum = True
        data = [data]

    if len(data) == 0:
        return []

    shapes = infer_shape(feature, data)
    features = [np.empty(shape, dtype=np.float32) for shape in shapes]

    cdef cnp.npy_intp i
    for i in range(len(data)):
        datum = data[i] if data[i].flags.writeable else data[i].astype(np.float32)
        feature.c_extract(datum, features[i])

    if single_datum:
        return features[0]
    else:
        return features
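# A hedged usage sketch of the module-level helpers above (the array values
# are illustrative):
#
#     import numpy as np
#     from dipy.segment.featurespeed import ResampleFeature, extract
#
#     streamline = np.array([[0., 0., 0.], [1., 0., 0.], [3., 0., 0.]],
#                           dtype=np.float32)
#     feature = ResampleFeature(nb_points=12)
#     extract(feature, streamline).shape   # -> (12, 3)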
dipy-1.11.0/dipy/segment/fss.py000066400000000000000000000267161476546756600163550ustar00rootroot00000000000000import warnings

import numpy as np
from scipy.sparse import coo_array
from scipy.spatial import cKDTree

from dipy.io.stateful_tractogram import StatefulTractogram
from dipy.segment.metric import mean_euclidean_distance
from dipy.testing.decorators import warning_for_keywords
from dipy.tracking.streamline import set_number_of_points


class FastStreamlineSearch:
    @warning_for_keywords()
    def __init__(
        self,
        ref_streamlines,
        max_radius,
        *,
        nb_mpts=4,
        bin_size=20.0,
        resampling=24,
        bidirectional=True,
    ):
        """Fast Streamline Search (FSS)

        Generate the Binned K-D Tree structure with reference streamlines,
        using streamlines barycenter and mean-points.

        See :footcite:p:`StOnge2022` for further details.

        Parameters
        ----------
        ref_streamlines : Streamlines
            Streamlines (ref) to generate the tree structure.
        max_radius : float
            The maximum radius (distance) for subsequent streamline search.
            Used to compute the overlap in-between bins.
        nb_mpts : int, optional
            Number of mean points used to improve computation speed.
            (this only changes computation time)
        bin_size : float, optional
            The bin size used to separate streamlines into groups.
            (this only changes computation time)
        resampling : int, optional
            Number of points used to reshape each streamline.
        bidirectional : bool, optional
            Compute the smallest distance with and without flip.

        Notes
        -----
        Make sure that streamlines are aligned in the same space.
        Preferably in millimeter space (voxmm or rasmm).

        References
        ----------
        .. footbibliography::
        """
        if max_radius <= 0.0:
            raise ValueError("max_radius needs to be a positive value")

        if resampling < 20:
            warnings.warn(
                "For accurate results, resampling should be"
                " at least 10 and preferably 20 or more",
                stacklevel=2,
            )

        if resampling % nb_mpts != 0:
            raise ValueError("nb_mpts needs to be a factor of resampling")

        if isinstance(ref_streamlines, StatefulTractogram):
            ref_streamlines = ref_streamlines.streamlines

        self.nb_mpts = nb_mpts
        self.bin_size = bin_size
        self.bidirectional = bidirectional
        self.resampling = resampling
        self.max_radius = max_radius

        # Resample streamlines
        self.ref_slines = self._resample(ref_streamlines)
        self.ref_nb_slines = len(self.ref_slines)
        if self.bidirectional:
            self.ref_slines = np.concatenate(
                [self.ref_slines, np.flip(self.ref_slines, axis=1)]
            )

        # Compute streamlines barycenter
        barycenters = self._slines_barycenters(self.ref_slines)

        # Compute bin shape (min, max, shape)
        bin_overlap = max_radius
        self.min_box = np.min(barycenters, axis=0) - bin_overlap
        self.max_box = np.max(barycenters, axis=0) + bin_overlap
        box_length = self.max_box - self.min_box
        self.bin_shape = (box_length // bin_size).astype(int) + 1

        # Compute the center of each bin
        bin_list = np.arange(np.prod(self.bin_shape))
        all_bins = np.vstack(np.unravel_index(bin_list, self.bin_shape)).T
        bins_center = all_bins * bin_size + self.min_box + bin_size / 2.0

        # Assign a list of streamlines to each bin
        baryc_tree = cKDTree(barycenters)
        center_dist = bin_size / 2.0 + bin_overlap
        baryc_bins = baryc_tree.query_ball_point(bins_center, center_dist, p=np.inf)

        # Compute streamlines mean-points
        meanpts = self._slines_mean_points(self.ref_slines)

        # Compute bin indices, streamlines + mean-points tree
        self.bin_dict = {}
        for i, baryc_b in enumerate(baryc_bins):
            if baryc_b:
                slines_id = np.asarray(baryc_b)
                self.bin_dict[i] = (slines_id, cKDTree(meanpts[slines_id]))

    @warning_for_keywords()
    def radius_search(self, streamlines, radius, *, use_negative=True):
        """Radius Search using Fast Streamline Search

        For each given streamline, return all reference streamlines
        within the given radius.

        See :footcite:p:`StOnge2022` for further details.

        Parameters
        ----------
        streamlines : Streamlines
            Query streamlines to search for.
        radius : float
            Search radius (with MDF / average L2 distance);
            must be smaller than the `max_radius` given when FSS was
            initialized.
        use_negative : bool, optional
            When used with bidirectional, negative values are returned for
            reversed-order neighbors.

        Returns
        -------
        res : scipy sparse COO array (nb_slines x nb_slines_ref)
            Adjacency matrix containing all neighbors within the given radius.

        Notes
        -----
        Given streamlines should be already aligned with ref streamlines.
        Preferably in millimeter space (voxmm or rasmm).
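        Examples
        --------
        A minimal sketch; the streamlines and radii below are illustrative
        only.

        >>> import numpy as np
        >>> from dipy.segment.fss import FastStreamlineSearch
        >>> rng = np.random.default_rng(42)
        >>> slines = [rng.random((30, 3)).astype(np.float32) * 20
        ...           for _ in range(100)]
        >>> fss = FastStreamlineSearch(slines, max_radius=10.)
        >>> coo = fss.radius_search(slines[:5], radius=5.)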
footbibliography:: """ if radius > self.max_radius: raise ValueError( "radius should be smaller or equal to the given" "\n 'max_radius' in FastStreamlineSearch init" ) if isinstance(streamlines, StatefulTractogram): streamlines = streamlines.streamlines # Resample query streamlines q_slines = self._resample(streamlines) q_nb_slines = len(q_slines) # Compute streamlines barycenter q_baryc = self._slines_barycenters(q_slines) # Verify if each barycenter are inside the min max box u_bin, binned_slines_ids = self._barycenters_binning(q_baryc) # Adapting radius for L1 query: sqrt(3) = 1.73205080756887729.. # Rounded up for float32 precision to avoid error / false negative l1_sum_dist = 1.73205081 * radius * self.nb_mpts # Search for all similar streamlines list_id = [] list_id_ref = [] list_dist = [] for i, bin_id in enumerate(u_bin): if bin_id in self.bin_dict: slines_id_ref, ref_tree = self.bin_dict[bin_id] slines_id = binned_slines_ids[i] mpts = self._slines_mean_points(q_slines[slines_id]) # Compute Tree L1 Query with mean-points res = ref_tree.query_ball_point(mpts, l1_sum_dist, p=1) # Refine distance with the complete for s, ref_ids in enumerate(res): if ref_ids: s_id = slines_id[s] rs_ids = slines_id_ref[ref_ids] d = mean_euclidean_distance( q_slines[s_id], self.ref_slines[rs_ids] ) # Return all pairs within the radius in_dist_max = d < radius id_ref = rs_ids[in_dist_max] id_s = np.full_like(id_ref, s_id) list_id.append(id_s) list_id_ref.append(id_ref) list_dist.append(d[in_dist_max]) # Combine all results in a coup sparse matrix if len(list_id) > 0: ids_in = np.hstack(list_id) ids_ref = np.hstack(list_id_ref) dist = np.hstack(list_dist) if self.bidirectional: flipped = ids_ref >= self.ref_nb_slines ids_ref[flipped] -= self.ref_nb_slines if use_negative: dist[flipped] *= -1.0 return coo_array( (dist, (ids_in, ids_ref)), shape=(q_nb_slines, self.ref_nb_slines) ) # No results, return an empty sparse matrix return coo_array((q_nb_slines, self.ref_nb_slines)) def _resample(self, streamlines): """Resample streamlines""" s = np.zeros([len(streamlines), self.resampling, 3], dtype=np.float32) for i, sline in enumerate(streamlines): if len(sline) < 2: s[i] = sline else: s[i] = set_number_of_points(sline, nb_points=self.resampling) return s def _slines_barycenters(self, slines_arr): """Compute streamlines barycenter""" return np.mean(slines_arr, axis=1) def _slines_mean_points(self, slines_arr): """Compute streamlines mean-points""" r_arr = slines_arr.reshape((len(slines_arr), self.nb_mpts, -1, 3)) mpts = np.mean(r_arr, axis=2) return mpts.reshape(len(slines_arr), -1) def _barycenters_binning(self, barycenters): """Bin indices in a list according to their barycenter position""" in_bin = np.logical_and( np.all(barycenters >= self.min_box, axis=1), np.all(barycenters <= self.max_box, axis=1), ) baryc_to_box = barycenters[in_bin] - self.min_box baryc_bins_id = (baryc_to_box // self.bin_size).astype(int) baryc_multiid = np.ravel_multi_index(baryc_bins_id.T, self.bin_shape) sort_id = np.argsort(baryc_multiid) u_bin, mapping = np.unique(baryc_multiid[sort_id], return_index=True) slines_ids = np.split(np.flatnonzero(in_bin)[sort_id], mapping[1:]) return u_bin, slines_ids def nearest_from_matrix_row(coo_array): """ Return the nearest (smallest) for each row given an coup sparse matrix Parameters ---------- coo_array : scipy COOrdinates sparse array (nb_slines x nb_slines_ref) Adjacency matrix containing all neighbors within the given radius Returns ------- non_zero_ids : numpy array (nb_non_empty_row x 1) 
        Indices of each non-empty streamline (row)
    nearest_id : numpy array (nb_non_empty_row x 1)
        Indices of the nearest reference match (column)
    nearest_dist : numpy array (nb_non_empty_row x 1)
        Distance for each nearest match
    """
    non_zero_ids = np.unique(coo_array.row)
    sparse_matrix = np.abs(coo_array.tocsr())
    upper_limit = np.max(sparse_matrix.data) + 1.0
    sparse_matrix.data = upper_limit - sparse_matrix.data
    nearest_id = np.squeeze(sparse_matrix.argmax(axis=1).data)[non_zero_ids]
    nearest_dist = upper_limit - np.squeeze(sparse_matrix.max(axis=1).data)
    return non_zero_ids, nearest_id, nearest_dist


def nearest_from_matrix_col(coo_array):
    """
    Return the nearest (smallest) for each column given a COO sparse matrix

    Parameters
    ----------
    coo_array : scipy COOrdinates sparse matrix (nb_slines x nb_slines_ref)
        Adjacency matrix containing all neighbors within the given radius

    Returns
    -------
    non_zero_ids : numpy array (nb_non_empty_col x 1)
        Indices of each non-empty reference (column)
    nearest_id : numpy array (nb_non_empty_col x 1)
        Indices of the nearest streamline match (row)
    nearest_dist : numpy array (nb_non_empty_col x 1)
        Distance for each nearest match
    """
    non_zero_ids = np.unique(coo_array.col)
    sparse_matrix = np.abs(coo_array.tocsc())
    upper_limit = np.max(sparse_matrix.data) + 1.0
    sparse_matrix.data = upper_limit - sparse_matrix.data
    nearest_id = np.squeeze(sparse_matrix.argmax(axis=0).data)[non_zero_ids]
    nearest_dist = upper_limit - np.squeeze(sparse_matrix.max(axis=0).data)
    return non_zero_ids, nearest_id, nearest_dist
dipy-1.11.0/dipy/segment/mask.py000066400000000000000000000234571476546756600165140ustar00rootroot00000000000000from warnings import warn

import numpy as np
from scipy.ndimage import binary_dilation, generate_binary_structure, median_filter

try:
    from skimage.filters import threshold_otsu as otsu
except Exception:
    from dipy.segment.threshold import otsu

from dipy.reconst.dti import color_fa, fractional_anisotropy
from dipy.segment.utils import remove_holes_and_islands
from dipy.testing.decorators import warning_for_keywords


def multi_median(data, median_radius, numpass):
    """Applies median filter multiple times on input data.

    Parameters
    ----------
    data : ndarray
        The input volume to apply filter on.
    median_radius : int
        Radius (in voxels) of the applied median filter.
    numpass : int
        Number of passes of the median filter.

    Returns
    -------
    data : ndarray
        Filtered input volume.
    """
    # Array representing the size of the median window in each dimension.
    medarr = np.ones_like(data.shape) * ((median_radius * 2) + 1)

    if numpass > 1:
        # ensure the input array is not modified
        data = data.copy()

    # Multi pass
    output = np.empty_like(data)
    for _ in range(0, numpass):
        median_filter(data, medarr, output=output)
        data, output = output, data
    return data


def applymask(vol, mask):
    """Mask vol with mask.

    Parameters
    ----------
    vol : ndarray
        Array with $V$ dimensions
    mask : ndarray
        Binary mask.  Has $M$ dimensions where $M <= V$. When $M < V$, we
        append $V - M$ dimensions with axis length 1 to `mask` so that `mask`
        will broadcast against `vol`.  In the typical case `vol` can be 4D,
        `mask` can be 3D, and we append a 1 to the mask shape which (via numpy
        broadcasting) has the effect of applying the 3D mask to each 3D slice
        in `vol` (``vol[..., 0]`` to ``vol[..., -1]``).
Returns ------- masked_vol : ndarray `vol` multiplied by `mask` where `mask` may have been extended to match extra dimensions in `vol` """ mask = mask.reshape(mask.shape + (vol.ndim - mask.ndim) * (1,)) return vol * mask def bounding_box(vol): """Compute the bounding box of nonzero intensity voxels in the volume. Parameters ---------- vol : ndarray Volume to compute bounding box on. Returns ------- npmins : list Array containing minimum index of each dimension npmaxs : list Array containing maximum index of each dimension """ # Find bounds on first dimension temp = vol for _ in range(vol.ndim - 1): temp = temp.any(-1) mins = [temp.argmax()] maxs = [len(temp) - temp[::-1].argmax()] # Check that vol is not all 0 if mins[0] == 0 and temp[0] == 0: warn( "No data found in volume to bound. Returning empty bounding box.", stacklevel=2, ) return [0] * vol.ndim, [0] * vol.ndim # Find bounds on remaining dimensions if vol.ndim > 1: a, b = bounding_box(vol.any(0)) mins.extend(a) maxs.extend(b) return mins, maxs def crop(vol, mins, maxs): """Crops the input volume. Parameters ---------- vol : ndarray Volume to crop. mins : array Array containing minimum index of each dimension. maxs : array Array containing maximum index of each dimension. Returns ------- vol : ndarray The cropped volume. """ return vol[tuple(slice(i, j) for i, j in zip(mins, maxs))] @warning_for_keywords() def median_otsu( input_volume, *, vol_idx=None, median_radius=4, numpass=4, autocrop=False, dilate=None, finalize_mask=False, ): """Simple brain extraction tool method for images from DWI data. It uses a median filter smoothing of the input_volumes `vol_idx` and an automatic histogram Otsu thresholding technique, hence the name *median_otsu*. This function is inspired from Mrtrix's bet which has default values ``median_radius=3``, ``numpass=2``. However, from tests on multiple 1.5T and 3T data from GE, Philips, Siemens, the most robust choice is ``median_radius=4``, ``numpass=4``. Parameters ---------- input_volume : ndarray 3D or 4D array of the brain volume. vol_idx : None or array, optional 1D array representing indices of ``axis=3`` of a 4D `input_volume`. None is only an acceptable input if ``input_volume`` is 3D. median_radius : int, optional Radius (in voxels) of the applied median filter. numpass: int, optional Number of pass of the median filter. autocrop: bool, optional if True, the masked input_volume will also be cropped using the bounding box defined by the masked data. Should be on if DWI is upsampled to 1x1x1 resolution. dilate : None or int, optional number of iterations for binary dilation finalize_mask : bool, optional Whether to remove potential holes or islands. Useful for solving minor errors. Returns ------- maskedvolume : ndarray Masked input_volume mask : 3D ndarray The binary brain mask Notes ----- Copyright (C) 2011, the scikit-image team All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 3. Neither the name of skimage nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. 
THIS SOFTWARE IS PROVIDED BY THE AUTHOR "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. """ if len(input_volume.shape) == 4: if vol_idx is not None: b0vol = np.mean(input_volume[..., tuple(vol_idx)], axis=3) else: raise ValueError("For 4D images, must provide vol_idx input") else: b0vol = input_volume # Make a mask using a multiple pass median filter and histogram # thresholding. mask = multi_median(b0vol, median_radius, numpass) thresh = otsu(mask) mask = mask > thresh if dilate is not None: cross = generate_binary_structure(3, 1) mask = binary_dilation(mask, cross, iterations=dilate) # Correct mask by removing islands and holes if finalize_mask: mask = remove_holes_and_islands(mask) # Auto crop the volumes using the mask as input_volume for bounding box # computing. if autocrop: mins, maxs = bounding_box(mask) mask = crop(mask, mins, maxs) croppedvolume = crop(input_volume, mins, maxs) maskedvolume = applymask(croppedvolume, mask) else: maskedvolume = applymask(input_volume, mask) return maskedvolume, mask @warning_for_keywords() def segment_from_cfa(tensor_fit, roi, threshold, *, return_cfa=False): """ Segment the cfa inside roi using the values from threshold as bounds. Parameters ---------- tensor_fit : TensorFit object TensorFit object roi : ndarray A binary mask, which contains the bounding box for the segmentation. threshold : array-like An iterable that defines the min and max values to use for the thresholding. The values are specified as (R_min, R_max, G_min, G_max, B_min, B_max) return_cfa : bool, optional If True, the cfa is also returned. Returns ------- mask : ndarray Binary mask of the segmentation. cfa : ndarray, optional Array with shape = (..., 3), where ... is the shape of tensor_fit. The color fractional anisotropy, ordered as a nd array with the last dimension of size 3 for the R, G and B channels. """ FA = fractional_anisotropy(tensor_fit.evals) FA[np.isnan(FA)] = 0 FA = np.clip(FA, 0, 1) # Clamp the FA to remove degenerate tensors cfa = color_fa(FA, tensor_fit.evecs) roi = np.asarray(roi, dtype=bool) include = (cfa >= threshold[0::2]) & (cfa <= threshold[1::2]) & roi[..., None] mask = np.all(include, axis=-1) if return_cfa: return mask, cfa return mask def clean_cc_mask(mask): """ Cleans a segmentation of the corpus callosum so no random pixels are included. Parameters ---------- mask : ndarray Binary mask of the coarse segmentation. Returns ------- new_cc_mask : ndarray Binary mask of the cleaned segmentation. """ from scipy.ndimage import label new_cc_mask = np.zeros(mask.shape) # Flood fill algorithm to find contiguous regions. 
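    # scipy.ndimage.label assigns a distinct integer (1..numL) to each
    # face-connected component of the mask; the largest component by voxel
    # count is then kept and every smaller island is discarded.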
labels, numL = label(mask) volumes = [len(labels[np.where(labels == l_idx + 1)]) for l_idx in np.arange(numL)] biggest_vol = np.arange(numL)[np.where(volumes == np.max(volumes))] + 1 new_cc_mask[np.where(labels == biggest_vol)] = 1 return new_cc_mask dipy-1.11.0/dipy/segment/meson.build000066400000000000000000000015621476546756600173460ustar00rootroot00000000000000cython_sources = [ 'clustering_algorithms', 'clusteringspeed', 'cythonutils', 'featurespeed', 'metricspeed', 'mrf', ] cython_headers = [ 'clusteringspeed.pxd', 'cythonutils.pxd', 'featurespeed.pxd', 'metricspeed.pxd', ] foreach ext: cython_sources if fs.exists(ext + '.pxd') extra_args += ['--depfile', meson.current_source_dir() +'/'+ ext + '.pxd', ] endif py3.extension_module(ext, cython_gen.process(ext + '.pyx'), c_args: cython_c_args, include_directories: [incdir_numpy, inc_local], dependencies: [omp], install: true, subdir: 'dipy/segment' ) endforeach python_sources = ['__init__.py', 'bundles.py', 'clustering.py', 'fss.py', 'mask.py', 'metric.py', 'threshold.py', 'tissue.py', 'utils.py' ] py3.install_sources( python_sources, pure: false, subdir: 'dipy/segment' ) subdir('tests')dipy-1.11.0/dipy/segment/metric.py000066400000000000000000000047461476546756600170500ustar00rootroot00000000000000__all__ = [ "MinimumAverageDirectFlipMetric", "Metric", "CosineMetric", "AveragePointwiseEuclideanMetric", "EuclideanMetric", "dist", "mdf", "mean_manhattan_distance", "mean_euclidean_distance", ] import numpy as np from dipy.segment.metricspeed import ( AveragePointwiseEuclideanMetric, CosineMetric, Metric, MinimumAverageDirectFlipMetric, SumPointwiseEuclideanMetric, dist, ) # Creates aliases EuclideanMetric = SumPointwiseEuclideanMetric def mdf(s1, s2): """Computes the MDF (Minimum average Direct-Flip) distance between two streamlines. Streamlines must have the same number of points. See :footcite:p:`Garyfallidis2012a` for a definition of the distance. Parameters ---------- s1 : 2D array A streamline (sequence of N-dimensional points). s2 : 2D array A streamline (sequence of N-dimensional points). Returns ------- double Distance between two streamlines. References ---------- .. footbibliography:: """ return dist(MinimumAverageDirectFlipMetric(), s1, s2) def mean_manhattan_distance(a, b): """Compute the average Manhattan-L1 distance (MDF without flip) Arrays are representing a single streamline or a list of streamlines that have the same number of N-dimensional points (two last axis). Parameters ---------- a : 2D or 3D array A streamline or concatenated streamlines (array of S streamlines by P points in N dimension). b : 2D or 3D array A streamline or concatenated streamlines (array of S streamlines by P points in N dimension). Returns ------- 1D array Distance between each S streamlines """ return np.mean(np.sum(np.abs(a - b), axis=-1), axis=-1) def mean_euclidean_distance(a, b): """Compute the average Euclidean-L2 distance (MDF without flip) Arrays are representing a single streamline or a list of streamlines that have the same number of N-dimensional points (two last axis). Parameters ---------- a : 2D or 3D array A streamline or concatenated streamlines (array of S streamlines by P points in N dimension). b : 2D or 3D array A streamline or concatenated streamlines (array of S streamlines by P points in N dimension). 
Returns ------- 1D array Distance between each S streamlines """ return np.mean(np.sqrt(np.sum(np.square(a - b), axis=-1)), axis=-1) dipy-1.11.0/dipy/segment/metricspeed.pxd000066400000000000000000000007331476546756600202240ustar00rootroot00000000000000from dipy.segment.cythonutils cimport Data2D, Shape from dipy.segment.featurespeed cimport Feature cdef class Metric: cdef Feature feature cdef int is_order_invariant cdef double c_dist(Metric self, Data2D features1, Data2D features2) except -1 nogil cdef int c_are_compatible(Metric self, Shape shape1, Shape shape2) except -1 nogil cpdef double dist(Metric self, features1, features2) except -1 cpdef are_compatible(Metric self, shape1, shape2) dipy-1.11.0/dipy/segment/metricspeed.pyx000066400000000000000000000365211476546756600202550ustar00rootroot00000000000000# cython: wraparound=False, cdivision=True, boundscheck=False import numpy as np cimport numpy as cnp from libc.math cimport sqrt, acos from dipy.segment.cythonutils cimport tuple2shape, shape2tuple, same_shape from dipy.segment.featurespeed cimport IdentityFeature DEF biggest_double = 1.7976931348623157e+308 # np.finfo('f8').max import math cdef double PI = math.pi cdef class Metric: """ Computes a distance between two sequential data. A sequence of N-dimensional points is represented as a 2D array with shape (nb_points, nb_dimensions). A `feature` object can be specified in order to calculate the distance between extracted features, rather than directly between the sequential data. Parameters ---------- feature : `Feature` object, optional It is used to extract features before computing the distance. Notes ----- When subclassing `Metric`, one only needs to override the `dist` and `are_compatible` methods. """ def __init__(Metric self, Feature feature=IdentityFeature()): self.feature = feature self.is_order_invariant = self.feature.is_order_invariant property feature: """ `Feature` object used to extract features from sequential data """ def __get__(Metric self): return self.feature property is_order_invariant: """ Is this metric invariant to the sequence's ordering """ def __get__(Metric self): return bool(self.is_order_invariant) cdef int c_are_compatible(Metric self, Shape shape1, Shape shape2) except -1 nogil: """ Cython version of `Metric.are_compatible`. """ with gil: return self.are_compatible(shape2tuple(shape1), shape2tuple(shape2)) cdef double c_dist(Metric self, Data2D features1, Data2D features2) except -1 nogil: """ Cython version of `Metric.dist`. """ with gil: _features1 = np.asarray( features1._data) _features2 = np.asarray( features2._data) return self.dist(_features1, _features2) cpdef are_compatible(Metric self, shape1, shape2): """ Checks if features can be used by `metric.dist` based on their shape. Basically this method exists so we don't have to do this check inside the `metric.dist` function (speedup). Parameters ---------- shape1 : int, 1-tuple or 2-tuple shape of the first data point's features shape2 : int, 1-tuple or 2-tuple shape of the second data point's features Returns ------- are_compatible : bool whether or not shapes are compatible """ raise NotImplementedError("Metric's subclasses must implement method `are_compatible(self, shape1, shape2)`!") cpdef double dist(Metric self, features1, features2) except -1: """ Computes a distance between two data points based on their features. Parameters ---------- features1 : 2D array Features of the first data point. features2 : 2D array Features of the second data point. 
Returns ------- double Distance between two data points. """ raise NotImplementedError("Metric's subclasses must implement method `dist(self, features1, features2)`!") cdef class CythonMetric(Metric): """ Computes a distance between two sequential data. A sequence of N-dimensional points is represented as a 2D array with shape (nb_points, nb_dimensions). A `feature` object can be specified in order to calculate the distance between extracted features, rather than directly between the sequential data. Parameters ---------- feature : `Feature` object, optional It is used to extract features before computing the distance. Notes ----- When subclassing `CythonMetric`, one only needs to override the `c_dist` and `c_are_compatible` methods. """ cpdef are_compatible(CythonMetric self, shape1, shape2): """ Checks if features can be used by `metric.dist` based on their shape. Basically this method exists so we don't have to do this check inside method `dist` (speedup). Parameters ---------- shape1 : int, 1-tuple or 2-tuple Shape of the first data point's features. shape2 : int, 1-tuple or 2-tuple Shape of the second data point's features. Returns ------- bool Whether or not shapes are compatible. Notes ----- This method calls its Cython version `self.c_are_compatible` accordingly. """ if np.asarray(shape1).ndim == 0: shape1 = (1, shape1) elif len(shape1) == 1: shape1 = (1,) + shape1 if np.asarray(shape2).ndim == 0: shape2 = (1, shape2) elif len(shape2) == 1: shape2 = (1,) + shape2 return self.c_are_compatible(tuple2shape(shape1), tuple2shape(shape2)) == 1 cpdef double dist(CythonMetric self, features1, features2) except -1: """ Computes a distance between two data points based on their features. Parameters ---------- features1 : 2D array Features of the first data point. features2 : 2D array Features of the second data point. Returns ------- double Distance between two data points. Notes ----- This method calls its Cython version `self.c_dist` accordingly. """ # If needed, we convert features to 2D arrays. features1 = np.asarray(features1) if features1.ndim == 0: features1 = features1[np.newaxis, np.newaxis] elif features1.ndim == 1: features1 = features1[np.newaxis] elif features1.ndim == 2: pass else: raise TypeError("Only scalar, 1D or 2D array features are" " supported for parameter 'features1'!") features2 = np.asarray(features2) if features2.ndim == 0: features2 = features2[np.newaxis, np.newaxis] elif features2.ndim == 1: features2 = features2[np.newaxis] elif features2.ndim == 2: pass else: raise TypeError("Only scalar, 1D or 2D array features are" " supported for parameter 'features2'!") if not self.are_compatible(features1.shape, features2.shape): raise ValueError("Features are not compatible according to this metric!") return self.c_dist(features1, features2) cdef class SumPointwiseEuclideanMetric(CythonMetric): r""" Computes the sum of pointwise Euclidean distances between two sequential data. A sequence of N-dimensional points is represented as a 2D array with shape (nb_points, nb_dimensions). A `feature` object can be specified in order to calculate the distance between the features, rather than directly between the sequential data. Parameters ---------- feature : `Feature` object, optional It is used to extract features before computing the distance. 
Notes ----- The distance between two 2D sequential data:: s1 s2 0* a *0 \ | \ | 1* | | b *1 | \ 2* \ c *2 is equal to $a+b+c$ where $a$ is the Euclidean distance between s1[0] and s2[0], $b$ between s1[1] and s2[1] and $c$ between s1[2] and s2[2]. """ cdef double c_dist(SumPointwiseEuclideanMetric self, Data2D features1, Data2D features2) except -1 nogil: cdef : int N = features1.shape[0], D = features1.shape[1] int n, d double dd, dist_n, dist = 0.0 for n in range(N): dist_n = 0.0 for d in range(D): dd = features1[n, d] - features2[n, d] dist_n += dd*dd dist += sqrt(dist_n) return dist cdef int c_are_compatible(SumPointwiseEuclideanMetric self, Shape shape1, Shape shape2) except -1 nogil: return same_shape(shape1, shape2) cdef class AveragePointwiseEuclideanMetric(SumPointwiseEuclideanMetric): r""" Computes the average of pointwise Euclidean distances between two sequential data. A sequence of N-dimensional points is represented as a 2D array with shape (nb_points, nb_dimensions). A `feature` object can be specified in order to calculate the distance between the features, rather than directly between the sequential data. Parameters ---------- feature : `Feature` object, optional It is used to extract features before computing the distance. Notes ----- The distance between two 2D sequential data:: s1 s2 0* a *0 \ | \ | 1* | | b *1 | \ 2* \ c *2 is equal to $(a+b+c)/3$ where $a$ is the Euclidean distance between s1[0] and s2[0], $b$ between s1[1] and s2[1] and $c$ between s1[2] and s2[2]. """ cdef double c_dist(AveragePointwiseEuclideanMetric self, Data2D features1, Data2D features2) except -1 nogil: cdef int N = features1.shape[0] cdef double dist = SumPointwiseEuclideanMetric.c_dist(self, features1, features2) return dist / N cdef class MinimumAverageDirectFlipMetric(AveragePointwiseEuclideanMetric): r""" Computes the MDF distance (minimum average direct-flip) between two sequential data. A sequence of N-dimensional points is represented as a 2D array with shape (nb_points, nb_dimensions). Notes ----- The distance between two 2D sequential data:: s1 s2 0* a *0 \ | \ | 1* | | b *1 | \ 2* \ c *2 is equal to $\min((a+b+c)/3, (a'+b'+c')/3)$ where $a$ is the Euclidean distance between s1[0] and s2[0], $b$ between s1[1] and s2[1], $c$ between s1[2] and s2[2], $a'$ between s1[0] and s2[2], $b'$ between s1[1] and s2[1] and $c'$ between s1[2] and s2[0]. """ property is_order_invariant: """ Is this metric invariant to the sequence's ordering """ def __get__(MinimumAverageDirectFlipMetric self): return True # Ordering is handled in the distance computation cdef double c_dist(MinimumAverageDirectFlipMetric self, Data2D features1, Data2D features2) except -1 nogil: cdef double dist_direct = AveragePointwiseEuclideanMetric.c_dist(self, features1, features2) cdef double dist_flipped = AveragePointwiseEuclideanMetric.c_dist(self, features1, features2[::-1]) return min(dist_direct, dist_flipped) cdef class CosineMetric(CythonMetric): r""" Computes the cosine distance between two vectors. A vector (i.e. a N-dimensional point) is represented as a 2D array with shape (1, nb_dimensions). Notes ----- The distance between two vectors $v_1$ and $v_2$ is equal to $\frac{1}{\pi} \arccos\left(\frac{v_1 \cdot v_2}{\|v_1\| \|v_2\|}\right)$ and is bounded within $[0,1]$. 
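    Examples
    --------
    A minimal sketch, assuming `VectorOfEndpointsFeature` is importable from
    `dipy.segment.featurespeed`:

    >>> import numpy as np
    >>> from dipy.segment.featurespeed import VectorOfEndpointsFeature
    >>> from dipy.segment.metric import CosineMetric, dist
    >>> metric = CosineMetric(VectorOfEndpointsFeature())
    >>> s1 = np.array([[0, 0, 0], [1, 0, 0]], dtype=np.float32)
    >>> s2 = np.array([[0, 0, 0], [0, 1, 0]], dtype=np.float32)
    >>> dist(metric, s1, s2)  # orthogonal endpoint vectors: acos(0) / pi
    0.5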
""" def __init__(CosineMetric self, Feature feature): super(CosineMetric, self).__init__(feature=feature) cdef int c_are_compatible(CosineMetric self, Shape shape1, Shape shape2) except -1 nogil: return same_shape(shape1, shape2) != 0 and shape1.dims[0] == 1 cdef double c_dist(CosineMetric self, Data2D features1, Data2D features2) except -1 nogil: cdef : int d, D = features1.shape[1] double sqr_norm_features1 = 0.0, sqr_norm_features2 = 0.0 double cos_theta = 0.0 for d in range(D): cos_theta += features1[0, d] * features2[0, d] sqr_norm_features1 += features1[0, d] * features1[0, d] sqr_norm_features2 += features2[0, d] * features2[0, d] if sqr_norm_features1 == 0.: if sqr_norm_features2 == 0.: return 0. else: return 1. cos_theta /= sqrt(sqr_norm_features1) * sqrt(sqr_norm_features2) # Make sure it's in [-1, 1], i.e. within domain of arccosine cos_theta = min(cos_theta, 1.) cos_theta = max(cos_theta, -1.) return acos(cos_theta) / PI # Normalized cosine distance cpdef distance_matrix(Metric metric, data1, data2=None): """ Computes the distance matrix between two lists of sequential data. The distance matrix is obtained by computing the pairwise distance of all tuples spawn by the Cartesian product of `data1` with `data2`. If `data2` is not provided, the Cartesian product of `data1` with itself is used instead. A sequence of N-dimensional points is represented as a 2D array with shape (nb_points, nb_dimensions). Parameters ---------- metric : `Metric` object Tells how to compute the distance between two sequential data. data1 : list of 2D arrays List of sequences of N-dimensional points. data2 : list of 2D arrays Llist of sequences of N-dimensional points. Returns ------- 2D array (double) Distance matrix. """ cdef cnp.npy_intp i, j if data2 is None: data2 = data1 shape = metric.feature.infer_shape(data1[0].astype(np.float32)) distance_matrix = np.zeros((len(data1), len(data2)), dtype=np.float64) cdef: Data2D features1 = np.empty(shape, np.float32) Data2D features2 = np.empty(shape, np.float32) for i in range(len(data1)): datum1 = data1[i] if data1[i].flags.writeable and data1[i].dtype is np.float32 else data1[i].astype(np.float32) metric.feature.c_extract(datum1, features1) for j in range(len(data2)): datum2 = data2[j] if data2[j].flags.writeable and data2[j].dtype is np.float32 else data2[j].astype(np.float32) metric.feature.c_extract(datum2, features2) distance_matrix[i, j] = metric.c_dist(features1, features2) return distance_matrix cpdef double dist(Metric metric, datum1, datum2) except -1: """ Computes a distance between `datum1` and `datum2`. A sequence of N-dimensional points is represented as a 2D array with shape (nb_points, nb_dimensions). Parameters ---------- metric : `Metric` object Tells how to compute the distance between `datum1` and `datum2`. datum1 : 2D array Sequence of N-dimensional points. datum2 : 2D array Sequence of N-dimensional points. Returns ------- double Distance between two data points. 
""" datum1 = datum1 if datum1.flags.writeable and datum1.dtype is np.float32 else datum1.astype(np.float32) datum2 = datum2 if datum2.flags.writeable and datum2.dtype is np.float32 else datum2.astype(np.float32) cdef: Shape shape1 = metric.feature.c_infer_shape(datum1) Shape shape2 = metric.feature.c_infer_shape(datum2) Data2D features1 = np.empty(shape2tuple(shape1), np.float32) Data2D features2 = np.empty(shape2tuple(shape2), np.float32) metric.feature.c_extract(datum1, features1) metric.feature.c_extract(datum2, features2) return metric.c_dist(features1, features2) dipy-1.11.0/dipy/segment/mrf.pyx000066400000000000000000000536661476546756600165460ustar00rootroot00000000000000#!python #cython: boundscheck=False #cython: wraparound=False #cython: cdivision=True import numpy as np cimport numpy as cnp cdef extern from "dpy_math.h" nogil: cdef double NPY_PI cdef double NPY_INFINITY double sqrt(double) double log(double) double exp(double) double fabs(double) class ConstantObservationModel: r""" Observation model assuming that the intensity of each class is constant. The model parameters are the means $\mu_{k}$ and variances $\sigma_{k}$ associated with each tissue class. According to this model, the observed intensity at voxel $x$ is given by $I(x) = \mu_{k} + \eta_{k}$ where $k$ is the tissue class of voxel $x$, and $\eta_{k}$ is a Gaussian random variable with zero mean and variance $\sigma_{k}^{2}$. The observation model is responsible for computing the negative log-likelihood of observing any given intensity $z$ at each voxel $x$ assuming the voxel belongs to each class $k$. It also provides a default parameter initialization. """ def __init__(self): r""" Initializes an instance of the ConstantObservationModel class """ pass def initialize_param_uniform(self, image, nclasses): r""" Initializes the means and variances uniformly The means are initialized uniformly along the dynamic range of `image`. The variances are set to 1 for all classes Parameters ---------- image : array 3D structural image nclasses : int number of desired classes Returns ------- mu : array 1 x nclasses, mean for each class sigma : array 1 x nclasses, standard deviation for each class. Set up to 1.0 for all classes. 
""" cdef: double[:] mu = np.empty((nclasses,), dtype=np.float64) double[:] sigma = np.empty((nclasses,), dtype=np.float64) _initialize_param_uniform(image, mu, sigma) return np.array(mu), np.array(sigma) def seg_stats(self, input_image, seg_image, cnp.npy_intp nclass): r""" Mean and standard variation for N desired tissue classes Parameters ---------- input_image : ndarray 3D structural image seg_image : ndarray 3D segmented image nclass : int number of classes (3 in most cases) Returns ------- mu, var : ndarrays 1 x nclasses dimension Mean and variance for each class """ cdef: cnp.npy_intp i, j cnp.npy_int16 s double v cnp.npy_intp size = input_image.size double [::1] input_view short [::1] seg_view double [::1] mu_view double [::1] var_view cnp.npy_intp [::1] num_vox_view # ravel C-contiguous array so we can use 1D indexing in the loop below input_view = np.ascontiguousarray(input_image, dtype=np.double).ravel() seg_view = np.ascontiguousarray(seg_image, dtype=np.int16).ravel() mu = np.zeros(nclass, dtype=np.float64) var = np.zeros(nclass, dtype=np.float64) num_vox = np.zeros(nclass, dtype=np.intp) mu_view = mu var_view = var num_vox_view = num_vox for j in range(size): s = seg_view[j] v = input_view[j] for i in range(nclass): if s == i: mu_view[i] += v var_view[i] += v * v num_vox_view[i] += 1 mu = mu / num_vox var = var / num_vox - mu * mu return mu, var def negloglikelihood(self, image, mu, sigmasq, cnp.npy_intp nclasses): r""" Computes the gaussian negative log-likelihood of each class at each voxel of `image` assuming a gaussian distribution with means and variances given by `mu` and `sigmasq`, respectively (constant models along the full volume). The negative log-likelihood will be written in `nloglike`. Parameters ---------- image : ndarray 3D gray scale structural image mu : ndarray mean of each class sigmasq : ndarray variance of each class nclasses : int number of classes Returns ------- nloglike : ndarray 4D negloglikelihood for each class in each volume """ cdef int l nloglike = np.zeros(image.shape + (nclasses,), dtype=np.float64) for l in range(nclasses): _negloglikelihood(image, mu, sigmasq, l, nloglike) return nloglike def prob_image(self, img, cnp.npy_intp nclasses, mu, sigmasq, P_L_N): r""" Conditional probability of the label given the image Parameters ---------- img : ndarray 3D structural gray-scale image nclasses : int number of tissue classes mu : ndarray 1 x nclasses, current estimate of the mean of each tissue class sigmasq : ndarray 1 x nclasses, current estimate of the variance of each tissue class P_L_N : ndarray 4D probability map of the label given the neighborhood. Previously computed by function prob_neighborhood Returns ------- P_L_Y : ndarray 4D probability of the label given the input image """ cdef int l P_L_Y = np.zeros_like(P_L_N) P_L_Y_norm = np.zeros_like(img) g = np.empty_like(img) for l in range(nclasses): _prob_image(img, g, mu, sigmasq, l, P_L_N, P_L_Y) P_L_Y_norm[:, :, :] += P_L_Y[:, :, :, l] for l in range(nclasses): P_L_Y[:, :, :, l] = P_L_Y[:, :, :, l] / P_L_Y_norm return P_L_Y def update_param(self, image, P_L_Y, mu, cnp.npy_intp nclasses): r""" Updates the means and the variances in each iteration for all the labels. This is for equations 25 and 26 of Zhang et. al., IEEE Trans. Med. Imag, Vol. 20, No. 1, Jan 2001. 
Parameters ---------- image : ndarray 3D structural gray-scale image P_L_Y : ndarray 4D probability map of the label given the input image computed by the expectation maximization (EM) algorithm mu : ndarray 1 x nclasses, current estimate of the mean of each tissue class. nclasses : int number of tissue classes Returns ------- mu_upd : ndarray 1 x nclasses, updated mean of each tissue class var_upd : ndarray 1 x nclasses, updated variance of each tissue class """ cdef int l mu_upd = np.zeros(nclasses, dtype=np.float64) var_upd = np.zeros(nclasses, dtype=np.float64) mu_num = np.zeros(image.shape + (nclasses,), dtype=np.float64) var_num = np.zeros(image.shape + (nclasses,), dtype=np.float64) for l in range(nclasses): mu_num[..., l] = P_L_Y[..., l] * image var_num[..., l] = P_L_Y[..., l] * ((image - mu[l]) ** 2) mu_upd[l] = np.sum(mu_num[..., l]) / np.sum(P_L_Y[..., l]) var_upd[l] = np.sum(var_num[..., l]) / np.sum(P_L_Y[..., l]) return mu_upd, var_upd cdef void _initialize_param_uniform(double[:, :, :] image, double[:] mu, double[:] var) noexcept nogil: r""" Initializes the means and standard deviations uniformly The means are initialized uniformly along the dynamic range of `image`. The standard deviations are set to 1 for all classes. Parameters ---------- image : array 3D structural gray-scale image mu : array buffer array for the mean of each tissue class var : array buffer array for the variance of each tissue class Returns ------- mu : array 1 x nclasses, mean of each class var : array 1 x nclasses, variance of each class """ cdef: cnp.npy_intp nx = image.shape[0] cnp.npy_intp ny = image.shape[1] cnp.npy_intp nz = image.shape[2] cnp.npy_intp nclasses = mu.shape[0] cnp.npy_intp i double min_val double max_val min_val = image[0, 0, 0] max_val = image[0, 0, 0] for x in range(nx): for y in range(ny): for z in range(nz): if image[x, y, z] < min_val: min_val = image[x, y, z] if image[x, y, z] > max_val: max_val = image[x, y, z] for i in range(nclasses): var[i] = 1.0 mu[i] = min_val + i * (max_val - min_val)/nclasses cdef void _negloglikelihood(double[:, :, :] image, double[:] mu, double[:] sigmasq, int classid, double[:, :, :, :] neglogl) noexcept nogil: r""" Computes the gaussian negative log-likelihood of each class at each voxel of `image` assuming a gaussian distribution with means and variances given by `mu` and `sigmasq`, respectively (constant models along the full volume). The negative log-likelihood will be written in `neglogl`. Parameters ---------- image : array 3D structural gray-scale image mu : array mean of each class sigmasq : array variance of each class classid : int class identifier neglogl : buffer for the neg-loglikelihood Returns ------- neglogl : array neg-loglikelihood for the class (l = classid) """ cdef: cnp.npy_intp nx = image.shape[0] cnp.npy_intp ny = image.shape[1] cnp.npy_intp nz = image.shape[2] cnp.npy_intp l = classid cnp.npy_intp x, y, z double eps = 1e-8 # We assume images normalized to 0-1 double eps_sq = 1e-16 # Maximum precision for double. 
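    # Degenerate case: a (numerically) zero variance collapses the Gaussian
    # to a point mass, so voxels whose intensity matches mu[l] within eps get
    # a finite value and every other voxel gets an infinite negative
    # log-likelihood.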
for x in range(nx): for y in range(ny): for z in range(nz): if sigmasq[l] < eps_sq: if fabs(image[x, y, z] - mu[l]) < eps: neglogl[x, y, z, l] = 1 + log(sqrt(2.0 * NPY_PI * sigmasq[l])) else: neglogl[x, y, z, l] = NPY_INFINITY else: neglogl[x, y, z, l] = (((image[x, y, z] - mu[l])**2.0) / (2.0 * sigmasq[l])) neglogl[x, y, z, l] += log(sqrt(2.0 * NPY_PI * sigmasq[l])) cdef void _prob_image(double[:, :, :] image, double[:, :, :] gaussian, double[:] mu, double[:] sigmasq, int classid, double[:, :, :, :] P_L_N, double[:, :, :, :] P_L_Y) noexcept nogil: r""" Conditional probability of the label given the image Parameters ---------- image : array 3D structural gray-scale image gaussian : array 3D buffer for the gaussian distribution that is multiplied by P_L_N to make P_L_Y mu : array current estimate of the mean of each tissue class sigmasq : array current estimate of the variance of each tissue class classid : int tissue class identifier P_L_N : array 4D probability map of the label given the neighborhood. Previously computed by function prob_neighborhood P_L_Y : array 4D buffer to hold P(L|Y) Returns ------- P_L_Y : array 4D probability of the label given the input image P(L|Y) """ cdef: cnp.npy_intp nx = image.shape[0] cnp.npy_intp ny = image.shape[1] cnp.npy_intp nz = image.shape[2] cnp.npy_intp l = classid cnp.npy_intp x, y, z double eps = 1e-8 double eps_sq = 1e-16 for x in range(nx): for y in range(ny): for z in range(nz): if sigmasq[l] < eps_sq: if fabs(image[x, y, z] - mu[l]) < eps: gaussian[x, y, z] = 1 else: gaussian[x, y, z] = 0 else: gaussian[x, y, z] = ( (exp(-((image[x, y, z] - mu[l]) ** 2) / (2 * sigmasq[l]))) / (sqrt(2 * NPY_PI * sigmasq[l]))) P_L_Y[x, y, z, l] = gaussian[x, y, z] * P_L_N[x, y, z, l] class IteratedConditionalModes: def __init__(self): pass def initialize_maximum_likelihood(self, nloglike): r""" Initializes the segmentation of an image with given neg-loglikelihood Initializes the segmentation of an image with neglog-likelihood field given by `nloglike`. The class of each voxel is selected as the one with the minimum neglog-likelihood (i.e. maximum-likelihood segmentation). Parameters ---------- nloglike : ndarray 4D shape, nloglike[x, y, z, k] is the likelihhood of class k for voxel (x, y, z) Returns ------- seg : ndarray 3D initial segmentation """ seg = np.zeros(nloglike.shape[:3]).astype(np.int16) _initialize_maximum_likelihood(nloglike, seg) return seg def icm_ising(self, nloglike, beta, seg): r""" Executes one iteration of the ICM algorithm for MRF MAP estimation. The prior distribution of the MRF is a Gibbs distribution with the Potts/Ising model with parameter `beta`: https://en.wikipedia.org/wiki/Potts_model Parameters ---------- nloglike : ndarray 4D shape, nloglike[x, y, z, k] is the negative log likelihood of class k at voxel (x, y, z) beta : float positive scalar, it is the parameter of the Potts/Ising model. Determines the smoothness of the output segmentation. seg : ndarray 3D initial segmentation. This segmentation will change by one iteration of the ICM algorithm Returns ------- new_seg : ndarray 3D final segmentation energy : ndarray 3D final energy """ energy = np.zeros(nloglike.shape[:3]).astype(np.float64) new_seg = np.zeros_like(seg) _icm_ising(nloglike, beta, seg, energy, new_seg) return new_seg, energy def prob_neighborhood(self, seg, beta, cnp.npy_intp nclasses): r""" Conditional probability of the label given the neighborhood Equation 2.18 of the Stan Z. Li book (Stan Z. 
Li, Markov Random Field Modeling in Image Analysis, 3rd ed., Advances in Pattern Recognition Series, Springer Verlag 2009.) Parameters ---------- seg : ndarray 3D tissue segmentation derived from the ICM model beta : float scalar that determines the importance of the neighborhood and the spatial smoothness of the segmentation. Usually between 0 to 0.5 nclasses : int number of tissue classes Returns ------- PLN : ndarray 4D probability map of the label given the neighborhood of the voxel. """ cdef: double[:, :, :] P_L_N = np.zeros(seg.shape, dtype=np.float64) cnp.npy_intp classid = 0 PLN_norm = np.zeros(seg.shape, dtype=np.float64) PLN = np.zeros(seg.shape + (nclasses,), dtype=np.float64) for classid in range(nclasses): P_L_N = np.zeros(seg.shape, dtype=np.float64) _prob_class_given_neighb(seg, beta, classid, P_L_N) PLN[:, :, :, classid] = np.array(P_L_N) PLN[:, :, :, classid] = np.exp(- PLN[:, :, :, classid]) PLN_norm += PLN[:, :, :, classid] for l in range(nclasses): PLN[:, :, :, l] = PLN[:, :, :, l] / PLN_norm return PLN cdef void _initialize_maximum_likelihood(double[:,:,:,:] nloglike, cnp.npy_short[:,:,:] seg) noexcept nogil: r""" Initializes the segmentation of an image with given neg-log-likelihood. Initializes the segmentation of an image with neg-log-likelihood field given by `nloglike`. The class of each voxel is selected as the one with the minimum neg-log-likelihood (i.e. the maximum-likelihood segmentation). Parameters ---------- nloglike : array 4D nloglike[x, y, z, k] is the likelihhood of class k for voxel (x, y, z) seg : array 3D buffer for the initial segmentation Returns ------- seg : array, 3D initial segmentation """ cdef: cnp.npy_intp nx = nloglike.shape[0] cnp.npy_intp ny = nloglike.shape[1] cnp.npy_intp nz = nloglike.shape[2] cnp.npy_intp nclasses = nloglike.shape[3] double min_energy cnp.npy_short best_class for x in range(nx): for y in range(ny): for z in range(nz): best_class = -1 for k in range(nclasses): if (best_class == -1) or (nloglike[x, y, z, k] < min_energy): best_class = k min_energy = nloglike[x, y, z, k] seg[x, y, z] = best_class cdef void _icm_ising(double[:,:,:,:] nloglike, double beta, cnp.npy_short[:,:,:] seg, double[:,:,:] energy, cnp.npy_short[:,:,:] new_seg) noexcept nogil: r""" Executes one iteration of the ICM algorithm for MRF MAP estimation The prior distribution of the MRF is a Gibbs distribution with the Potts/Ising model with parameter `beta`: https://en.wikipedia.org/wiki/Potts_model Parameters ---------- nloglike : array 4D nloglike[x, y, z, k] is the negative log likelihood of class k at voxel (x, y, z) beta : float positive scalar, it is the parameter of the Potts/Ising model. Determines the smoothness of the output segmentation seg : array 3D initial segmentation. This segmentation will change by one iteration of the ICM algorithm energy : array 3D buffer for the energy new_seg : array 3D buffer for the final segmentation Returns ------- energy : array 3D map of the energy for every voxel new_seg : array 3D new final segmentation (there is a new one after each iteration). 
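    Notes
    -----
    For every voxel, the energy of class k is its negative log-likelihood
    plus an Ising penalty over the 6 face neighbors: minus `beta` for each
    neighbor currently labeled k and plus `beta` otherwise. The class with
    the minimum energy is written to `new_seg`, and that energy to `energy`.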
""" cdef: cnp.npy_intp nneigh = 6 cnp.npy_intp* dX = [-1, 0, 0, 0, 0, 1] cnp.npy_intp* dY = [0, -1, 0, 1, 0, 0] cnp.npy_intp* dZ = [0, 0, 1, 0, -1, 0] cnp.npy_intp nx = nloglike.shape[0] cnp.npy_intp ny = nloglike.shape[1] cnp.npy_intp nz = nloglike.shape[2] cnp.npy_intp nclasses = nloglike.shape[3] cnp.npy_intp x, y, z, xx, yy, zz, i, j, k double min_energy = NPY_INFINITY double this_energy = NPY_INFINITY cnp.npy_short best_class for x in range(nx): for y in range(ny): for z in range(nz): best_class = -1 min_energy = NPY_INFINITY for k in range(nclasses): this_energy = nloglike[x, y, z, k] for i in range(nneigh): xx = x + dX[i] if xx < 0 or xx >= nx: continue yy = y + dY[i] if yy < 0 or yy >= ny: continue zz = z + dZ[i] if zz < 0 or zz >= nz: continue if seg[xx, yy, zz] == k: this_energy -= beta else: this_energy += beta if this_energy < min_energy: min_energy = this_energy best_class = k new_seg[x, y, z] = best_class energy[x, y, z] = min_energy cdef void _prob_class_given_neighb(cnp.npy_short[:, :, :] seg, double beta, int classid, double[:, :, :] P_L_N) noexcept nogil: r""" Conditional probability of the label given the neighborhood Equation 2.18 of the Stan Z. Li book. Parameters ---------- image : array 3D structural gray-scale image seg : array 3D tissue segmentation derived from the ICM model beta : float scalar that determines the importance of the neighborhood and the spatial smoothness of the segmentation. Usually between 0 to 0.5 classid : int tissue class identifier P_L_N : array buffer array for P(L|N) Returns ------- P_L_N : array 3D map of the probability of the label (l) given the neighborhood of the voxel P(L|N) """ cdef: cnp.npy_intp nx = seg.shape[0] cnp.npy_intp ny = seg.shape[1] cnp.npy_intp nz = seg.shape[2] cnp.npy_intp nneigh = 6 cnp.npy_intp l = classid cnp.npy_intp x, y, z, xx, yy, zz double vox_prob cnp.npy_intp* dX = [-1, 0, 0, 0, 0, 1] cnp.npy_intp* dY = [0, -1, 0, 1, 0, 0] cnp.npy_intp* dZ = [0, 0, 1, 0, -1, 0] for x in range(nx): for y in range(ny): for z in range(nz): vox_prob = 0 for i in range(nneigh): xx = x + dX[i] if xx < 0 or xx >= nx: continue yy = y + dY[i] if yy < 0 or yy >= ny: continue zz = z + dZ[i] if zz < 0 or zz >= nz: continue if seg[xx, yy, zz] == l: vox_prob -= beta else: vox_prob += beta P_L_N[x, y, z] = vox_prob dipy-1.11.0/dipy/segment/tests/000077500000000000000000000000001476546756600163425ustar00rootroot00000000000000dipy-1.11.0/dipy/segment/tests/__init__.py000066400000000000000000000000001476546756600204410ustar00rootroot00000000000000dipy-1.11.0/dipy/segment/tests/meson.build000066400000000000000000000005671476546756600205140ustar00rootroot00000000000000python_sources = [ '__init__.py', 'test_adjustment.py', 'test_bundles.py', 'test_clustering.py', 'test_feature.py', 'test_fss.py', 'test_mask.py', 'test_metric.py', 'test_mrf.py', 'test_qbx.py', 'test_quickbundles.py', 'test_tissue.py', 'test_utils.py', ] py3.install_sources( python_sources, pure: false, subdir: 'dipy/segment/tests' ) dipy-1.11.0/dipy/segment/tests/test_adjustment.py000066400000000000000000000031761476546756600221400ustar00rootroot00000000000000import numpy as np from numpy import zeros from numpy.testing import assert_equal from dipy.segment.threshold import upper_bound_by_percent, upper_bound_by_rate def test_adjustment(): imga = zeros([128, 128]) for y in range(128): for x in range(128): if 10 < y < 115 and 10 < x < 115: imga[x, y] = 100 if 39 < y < 88 and 39 < x < 88: imga[x, y] = 150 if 59 < y < 69 and 59 < x < 69: imga[x, y] = 255 high_1 = 
upper_bound_by_rate(imga) high_2 = upper_bound_by_percent(imga) vol1 = np.interp(imga, xp=[imga.min(), high_1], fp=[0, 255]) vol2 = np.interp(imga, xp=[imga.min(), high_2], fp=[0, 255]) count2 = (88 - 40) * (88 - 40) count1 = (114 - 10) * (114 - 10) count1_test = 0 count2_test = 0 count2_upper = (88 - 40) * (88 - 40) count1_upper = (114 - 10) * (114 - 10) count1_upper_test = 0 count2_upper_test = 0 value1 = np.unique(vol1) value2 = np.unique(vol2) for i in range(128): for j in range(128): if vol1[i][j] > value1[1]: count2_test = count2_test + 1 if vol1[i][j] > 0: count1_test = count1_test + 1 for i in range(128): for j in range(128): if vol2[i][j] > value2[1]: count2_upper_test = count2_upper_test + 1 if vol2[i][j] > 0: count1_upper_test = count1_upper_test + 1 assert_equal(count2, count2_test) assert_equal(count1, count1_test) assert_equal(count2_upper, count2_upper_test) assert_equal(count1_upper, count1_upper_test) dipy-1.11.0/dipy/segment/tests/test_bundles.py000066400000000000000000000177021476546756600214160ustar00rootroot00000000000000import warnings import numpy as np from numpy.testing import assert_almost_equal, assert_equal from dipy.data import get_fnames from dipy.io.streamline import load_tractogram from dipy.segment.bundles import RecoBundles from dipy.segment.clustering import qbx_and_merge from dipy.testing.decorators import set_random_number_generator from dipy.tracking.distances import bundles_distances_mam from dipy.tracking.streamline import Streamlines def setup_module(): global f, f1, f2, f3, fornix fname = get_fnames(name="fornix") fornix = load_tractogram(fname, "same", bbox_valid_check=False).streamlines f = Streamlines(fornix) f1 = f.copy() f2 = f1[:20].copy() f2._data += np.array([50, 0, 0]) f3 = f1[200:].copy() f3._data += np.array([100, 0, 0]) f.extend(f2) f.extend(f3) def test_rb_check_defaults(): rb = RecoBundles(f, greater_than=0, clust_thr=10) rec_trans, rec_labels = rb.recognize( model_bundle=f2, model_clust_thr=5.0, reduction_thr=10 ) msg = "Streamlines do not have the same number of points. *" with warnings.catch_warnings(): warnings.filterwarnings("ignore", message=msg, category=UserWarning) D = bundles_distances_mam(f2, f[rec_labels]) # check if the bundle is recognized correctly if len(f2) == len(rec_labels): for row in D: assert_equal(row.min(), 0) refine_trans, refine_labels = rb.refine( model_bundle=f2, pruned_streamlines=rec_trans, model_clust_thr=5.0, reduction_thr=10, ) with warnings.catch_warnings(): warnings.filterwarnings("ignore", message=msg, category=UserWarning) D = bundles_distances_mam(f2, f[refine_labels]) # check if the bundle is recognized correctly for row in D: assert_equal(row.min(), 0) def test_rb_disable_slr(): rb = RecoBundles(f, greater_than=0, clust_thr=10) rec_trans, rec_labels = rb.recognize( model_bundle=f2, model_clust_thr=5.0, reduction_thr=10, slr=False ) msg = "Streamlines do not have the same number of points. 
*" with warnings.catch_warnings(): warnings.filterwarnings("ignore", message=msg, category=UserWarning) D = bundles_distances_mam(f2, f[rec_labels]) # check if the bundle is recognized correctly if len(f2) == len(rec_labels): for row in D: assert_equal(row.min(), 0) refine_trans, refine_labels = rb.refine( model_bundle=f2, pruned_streamlines=rec_trans, model_clust_thr=5.0, reduction_thr=10, ) with warnings.catch_warnings(): warnings.filterwarnings("ignore", message=msg, category=UserWarning) D = bundles_distances_mam(f2, f[refine_labels]) # check if the bundle is recognized correctly for row in D: assert_equal(row.min(), 0) @set_random_number_generator(42) def test_rb_slr_threads(rng): rb_multi = RecoBundles(f, greater_than=0, clust_thr=10, rng=rng) rec_trans_multi_threads, _ = rb_multi.recognize( model_bundle=f2, model_clust_thr=5.0, reduction_thr=10, slr=True, num_threads=None, ) rb_single = RecoBundles( f, greater_than=0, clust_thr=10, rng=np.random.default_rng(42) ) rec_trans_single_thread, _ = rb_single.recognize( model_bundle=f2, model_clust_thr=5.0, reduction_thr=10, slr=True, num_threads=1 ) msg = "Streamlines do not have the same number of points. *" with warnings.catch_warnings(): warnings.filterwarnings("ignore", message=msg, category=UserWarning) D = bundles_distances_mam(rec_trans_multi_threads, rec_trans_single_thread) # check if the bundle is recognized correctly # multi-threading prevent an exact match for row in D: assert_almost_equal(row.min(), 0, decimal=4) def test_rb_no_verbose_and_mam(): rb = RecoBundles(f, greater_than=0, clust_thr=10, verbose=False) rec_trans, rec_labels = rb.recognize( model_bundle=f2, model_clust_thr=5.0, reduction_thr=10, slr=True, pruning_distance="mam", ) msg = "Streamlines do not have the same number of points. *" with warnings.catch_warnings(): warnings.filterwarnings("ignore", message=msg, category=UserWarning) D = bundles_distances_mam(f2, f[rec_labels]) # check if the bundle is recognized correctly if len(f2) == len(rec_labels): for row in D: assert_equal(row.min(), 0) refine_trans, refine_labels = rb.refine( model_bundle=f2, pruned_streamlines=rec_trans, model_clust_thr=5.0, reduction_thr=10, ) with warnings.catch_warnings(): warnings.filterwarnings("ignore", message=msg, category=UserWarning) D = bundles_distances_mam(f2, f[refine_labels]) # check if the bundle is recognized correctly for row in D: assert_equal(row.min(), 0) def test_rb_clustermap(): cluster_map = qbx_and_merge(f, thresholds=[40, 25, 20, 10]) rb = RecoBundles( f, greater_than=0, less_than=1000000, cluster_map=cluster_map, clust_thr=10 ) rec_trans, rec_labels = rb.recognize( model_bundle=f2, model_clust_thr=5.0, reduction_thr=10 ) msg = "Streamlines do not have the same number of points. *" with warnings.catch_warnings(): warnings.filterwarnings("ignore", message=msg, category=UserWarning) D = bundles_distances_mam(f2, f[rec_labels]) # check if the bundle is recognized correctly if len(f2) == len(rec_labels): for row in D: assert_equal(row.min(), 0) refine_trans, refine_labels = rb.refine( model_bundle=f2, pruned_streamlines=rec_trans, model_clust_thr=5.0, reduction_thr=10, ) with warnings.catch_warnings(): warnings.filterwarnings("ignore", message=msg, category=UserWarning) D = bundles_distances_mam(f2, f[refine_labels]) # check if the bundle is recognized correctly for row in D: assert_equal(row.min(), 0) def test_rb_no_neighb(): # what if no neighbors are found? 
No recognition b = Streamlines(fornix) b1 = b.copy() b2 = b1[:20].copy() b2._data += np.array([100, 0, 0]) b3 = b1[:20].copy() b3._data += np.array([300, 0, 0]) b.extend(b3) rb = RecoBundles(b, greater_than=0, clust_thr=10) rec_trans, rec_labels = rb.recognize( model_bundle=b2, model_clust_thr=5.0, reduction_thr=10 ) if len(rec_trans) > 0: refine_trans, refine_labels = rb.refine( model_bundle=b2, pruned_streamlines=rec_trans, model_clust_thr=5.0, reduction_thr=10, ) assert_equal(len(refine_labels), 0) assert_equal(len(refine_trans), 0) else: assert_equal(len(rec_labels), 0) assert_equal(len(rec_trans), 0) def test_rb_reduction_mam(): rb = RecoBundles(f, greater_than=0, clust_thr=10, verbose=True) rec_trans, rec_labels = rb.recognize( model_bundle=f2, model_clust_thr=5.0, reduction_thr=10, reduction_distance="mam", slr=True, slr_metric="asymmetric", pruning_distance="mam", ) msg = "Streamlines do not have the same number of points. *" with warnings.catch_warnings(): warnings.filterwarnings("ignore", message=msg, category=UserWarning) D = bundles_distances_mam(f2, f[rec_labels]) # check if the bundle is recognized correctly if len(f2) == len(rec_labels): for row in D: assert_equal(row.min(), 0) refine_trans, refine_labels = rb.refine( model_bundle=f2, pruned_streamlines=rec_trans, model_clust_thr=5.0, reduction_thr=10, ) with warnings.catch_warnings(): warnings.filterwarnings("ignore", message=msg, category=UserWarning) D = bundles_distances_mam(f2, f[refine_labels]) # check if the bundle is recognized correctly for row in D: assert_equal(row.min(), 0) dipy-1.11.0/dipy/segment/tests/test_clustering.py000066400000000000000000000624151476546756600221420ustar00rootroot00000000000000import copy import itertools import numpy as np from numpy.testing import assert_array_equal, assert_equal, assert_raises from dipy.segment.clustering import ( Cluster, ClusterCentroid, ClusterMap, ClusterMapCentroid, Clustering, ) from dipy.testing import assert_arrays_equal, assert_false, assert_true from dipy.testing.decorators import set_random_number_generator features_shape = (1, 10) dtype = "float32" features = np.ones(features_shape, dtype=dtype) data = [ np.arange(3 * 5, dtype=dtype).reshape((-1, 3)), np.arange(3 * 10, dtype=dtype).reshape((-1, 3)), np.arange(3 * 15, dtype=dtype).reshape((-1, 3)), np.arange(3 * 17, dtype=dtype).reshape((-1, 3)), np.arange(3 * 20, dtype=dtype).reshape((-1, 3)), ] expected_clusters = [[2, 4], [0, 3], [1]] def test_cluster_attributes_and_constructor(): cluster = Cluster() assert_equal(type(cluster), Cluster) assert_equal(cluster.id, 0) assert_array_equal(cluster.indices, []) assert_equal(len(cluster), 0) # Duplicate assert_true( cluster == Cluster(id=cluster.id, indices=cluster.indices, refdata=cluster.refdata) ) assert_false( cluster != Cluster(id=cluster.id, indices=cluster.indices, refdata=cluster.refdata) ) # Invalid comparison assert_raises(TypeError, cluster.__cmp__, cluster) def test_cluster_assign(): cluster = Cluster() indices = [] for idx in range(1, 10): cluster.assign(idx) indices.append(idx) assert_equal(len(cluster), idx) assert_equal(type(cluster.indices), list) assert_array_equal(cluster.indices, indices) # Test add multiples indices at the same time cluster = Cluster() cluster.assign(*range(1, 10)) assert_array_equal(cluster.indices, indices) @set_random_number_generator() def test_cluster_iter(rng): indices = list(range(len(data))) rng.shuffle(indices) # None trivial ordering # Test without specifying refdata cluster = Cluster() cluster.assign(*indices) 
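    # Without refdata, a Cluster yields raw integer indices; both the
    # `indices` attribute and plain iteration must preserve assignment order.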
assert_array_equal(cluster.indices, indices) assert_array_equal(list(cluster), indices) # Test with specifying refdata in ClusterMap cluster.refdata = data assert_arrays_equal(list(cluster), [data[i] for i in indices]) @set_random_number_generator() def test_cluster_getitem(rng): indices = list(range(len(data))) rng.shuffle(indices) # Non-trivial ordering advanced_indices = indices + [0, 1, 2, -1, -2, -3] # Test without specifying refdata in ClusterMap cluster = Cluster() cluster.assign(*indices) # Test indexing for i in advanced_indices: assert_equal(cluster[i], indices[i]) # Test advanced indexing assert_array_equal( cluster[advanced_indices], [indices[i] for i in advanced_indices] ) # Test index out of bounds assert_raises(IndexError, cluster.__getitem__, len(cluster)) assert_raises(IndexError, cluster.__getitem__, -len(cluster) - 1) # Test slicing and negative indexing assert_equal(cluster[-1], indices[-1]) assert_array_equal(cluster[::2], indices[::2]) assert_arrays_equal(cluster[::-1], indices[::-1]) assert_arrays_equal(cluster[:-1], indices[:-1]) assert_arrays_equal(cluster[1:], indices[1:]) # Test with wrong indexing object assert_raises(TypeError, cluster.__getitem__, "wrong") # Test with specifying refdata in ClusterMap cluster.refdata = data # Test indexing for i in advanced_indices: assert_array_equal(cluster[i], data[indices[i]]) # Test advanced indexing assert_arrays_equal( cluster[advanced_indices], [data[indices[i]] for i in advanced_indices] ) # Test index out of bounds assert_raises(IndexError, cluster.__getitem__, len(cluster)) assert_raises(IndexError, cluster.__getitem__, -len(cluster) - 1) # Test slicing and negative indexing assert_array_equal(cluster[-1], data[indices[-1]]) assert_arrays_equal(cluster[::2], [data[i] for i in indices[::2]]) assert_arrays_equal(cluster[::-1], [data[i] for i in indices[::-1]]) assert_arrays_equal(cluster[:-1], [data[i] for i in indices[:-1]]) assert_arrays_equal(cluster[1:], [data[i] for i in indices[1:]]) # Test with wrong indexing object assert_raises(TypeError, cluster.__getitem__, "wrong") @set_random_number_generator() def test_cluster_str_and_repr(rng): indices = list(range(len(data))) rng.shuffle(indices) # Non-trivial ordering # Test without specifying refdata in ClusterMap cluster = Cluster() cluster.assign(*indices) assert_equal(str(cluster), "[" + ", ".join(map(str, indices)) + "]") assert_equal(repr(cluster), "Cluster([" + ", ".join(map(str, indices)) + "])") # Test with specifying refdata in ClusterMap cluster.refdata = data assert_equal(str(cluster), "[" + ", ".join(map(str, indices)) + "]") assert_equal(repr(cluster), "Cluster([" + ", ".join(map(str, indices)) + "])") def test_cluster_centroid_attributes_and_constructor(): centroid = np.zeros(features_shape) cluster = ClusterCentroid(centroid) assert_equal(type(cluster), ClusterCentroid) assert_equal(cluster.id, 0) assert_array_equal(cluster.indices, []) assert_array_equal(cluster.centroid, np.zeros(features_shape)) assert_equal(len(cluster), 0) # Duplicate assert_true(cluster == ClusterCentroid(centroid)) assert_false(cluster != ClusterCentroid(centroid)) assert_false(cluster == ClusterCentroid(centroid + 1)) # Invalid comparison assert_raises(TypeError, cluster.__cmp__, cluster) def test_cluster_centroid_assign(): centroid = np.zeros(features_shape) cluster = ClusterCentroid(centroid) indices = [] centroid = np.zeros(features_shape, dtype=dtype) for idx in range(1, 10): cluster.assign(idx, (idx + 1) * features) cluster.update() indices.append(idx) centroid = (centroid * (idx - 1) + (idx + 1) * features) / idx assert_equal(len(cluster), idx) assert_equal(type(cluster.indices), list) assert_array_equal(cluster.indices, indices) assert_equal(type(cluster.centroid), np.ndarray) assert_array_equal(cluster.centroid, centroid) @set_random_number_generator() def test_cluster_centroid_iter(rng): indices = list(range(len(data))) rng.shuffle(indices) # Non-trivial ordering # Test without specifying refdata in ClusterCentroid centroid = np.zeros(features_shape) cluster = ClusterCentroid(centroid) for idx in indices: cluster.assign(idx, (idx + 1) * features) assert_array_equal(cluster.indices, indices) assert_array_equal(list(cluster), indices) # Test with specifying refdata in ClusterCentroid cluster.refdata = data assert_arrays_equal(list(cluster), [data[i] for i in indices]) @set_random_number_generator() def test_cluster_centroid_getitem(rng): indices = list(range(len(data))) rng.shuffle(indices) # Non-trivial ordering advanced_indices = indices + [0, 1, 2, -1, -2, -3] # Test without specifying refdata in ClusterCentroid centroid = np.zeros(features_shape) cluster = ClusterCentroid(centroid) for idx in indices: cluster.assign(idx, (idx + 1) * features) # Test indexing for i in advanced_indices: assert_equal(cluster[i], indices[i]) # Test advanced indexing assert_array_equal( cluster[advanced_indices], [indices[i] for i in advanced_indices] ) # Test index out of bounds assert_raises(IndexError, cluster.__getitem__, len(cluster)) assert_raises(IndexError, cluster.__getitem__, -len(cluster) - 1) # Test slicing and negative indexing assert_equal(cluster[-1], indices[-1]) assert_array_equal(cluster[::2], indices[::2]) assert_arrays_equal(cluster[::-1], indices[::-1]) assert_arrays_equal(cluster[:-1], indices[:-1]) assert_arrays_equal(cluster[1:], indices[1:]) # Test with specifying refdata in ClusterCentroid cluster.refdata = data # Test indexing for i in advanced_indices: assert_array_equal(cluster[i], data[indices[i]]) # Test advanced indexing assert_arrays_equal( cluster[advanced_indices], [data[indices[i]] for i in advanced_indices] ) # Test index out of bounds assert_raises(IndexError, cluster.__getitem__, len(cluster)) assert_raises(IndexError, cluster.__getitem__, -len(cluster) - 1) # Test slicing and negative indexing assert_array_equal(cluster[-1], data[indices[-1]]) assert_arrays_equal(cluster[::2], [data[i] for i in indices[::2]]) assert_arrays_equal(cluster[::-1], [data[i] for i in indices[::-1]]) assert_arrays_equal(cluster[:-1], [data[i] for i in indices[:-1]]) assert_arrays_equal(cluster[1:], [data[i] for i in indices[1:]]) def test_cluster_map_attributes_and_constructor(): clusters = ClusterMap() assert_equal(len(clusters), 0) assert_array_equal(clusters.clusters, []) assert_array_equal(list(clusters), []) assert_raises(IndexError, clusters.__getitem__, 0) assert_raises(AttributeError, setattr, clusters, "clusters", []) def test_cluster_map_add_cluster(): clusters = ClusterMap() list_of_cluster_objects = [] list_of_indices = [] for i in range(3): cluster = Cluster() list_of_cluster_objects.append(cluster) list_of_indices.append([]) for id_data in range(2 * i): list_of_indices[-1].append(id_data) cluster.assign(id_data) clusters.add_cluster(cluster) assert_equal(type(cluster), Cluster) assert_equal(len(clusters), i + 1) assert_true(cluster == clusters[-1]) assert_array_equal( list(itertools.chain(*clusters)), list(itertools.chain(*list_of_indices)) ) # Test adding multiple clusters at once.
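# (Like `Cluster.assign`, `add_cluster` is variadic, so an existing sequence
# of clusters can be spread into it with the * operator, as done below.)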
clusters = ClusterMap() clusters.add_cluster(*list_of_cluster_objects) assert_array_equal( list(itertools.chain(*clusters)), list(itertools.chain(*list_of_indices)) ) def test_cluster_map_remove_cluster(): clusters = ClusterMap() cluster1 = Cluster(indices=[1]) clusters.add_cluster(cluster1) cluster2 = Cluster(indices=[1, 2]) clusters.add_cluster(cluster2) cluster3 = Cluster(indices=[1, 2, 3]) clusters.add_cluster(cluster3) assert_equal(len(clusters), 3) clusters.remove_cluster(cluster2) assert_equal(len(clusters), 2) assert_array_equal( list(itertools.chain(*clusters)), list(itertools.chain(*[cluster1, cluster3])) ) assert_equal(clusters[0], cluster1) assert_equal(clusters[1], cluster3) clusters.remove_cluster(cluster3) assert_equal(len(clusters), 1) assert_array_equal(list(itertools.chain(*clusters)), list(cluster1)) assert_equal(clusters[0], cluster1) clusters.remove_cluster(cluster1) assert_equal(len(clusters), 0) assert_array_equal(list(itertools.chain(*clusters)), []) # Test removing multiple clusters at once. clusters = ClusterMap() clusters.add_cluster(cluster1, cluster2, cluster3) clusters.remove_cluster(cluster3, cluster2) assert_equal(len(clusters), 1) assert_array_equal(list(itertools.chain(*clusters)), list(cluster1)) assert_equal(clusters[0], cluster1) clusters = ClusterMap() clusters.add_cluster(cluster2, cluster1, cluster3) clusters.remove_cluster(cluster1, cluster3, cluster2) assert_equal(len(clusters), 0) assert_array_equal(list(itertools.chain(*clusters)), []) def test_cluster_map_clear(): nb_clusters = 11 clusters = ClusterMap() for i in range(nb_clusters): new_cluster = Cluster(indices=range(i)) clusters.add_cluster(new_cluster) clusters.clear() assert_equal(len(clusters), 0) assert_array_equal(list(itertools.chain(*clusters)), []) @set_random_number_generator(42) def test_cluster_map_iter(rng): nb_clusters = 11 # Test without specifying refdata in ClusterMap cluster_map = ClusterMap() clusters = [] for _ in range(nb_clusters): new_cluster = Cluster(indices=rng.integers(0, len(data), size=10)) cluster_map.add_cluster(new_cluster) clusters.append(new_cluster) assert_true(all(c1 is c2 for c1, c2 in zip(cluster_map.clusters, clusters))) assert_array_equal(cluster_map, clusters) assert_array_equal(cluster_map.clusters, clusters) assert_array_equal(cluster_map, [cluster.indices for cluster in clusters]) # Set refdata cluster_map.refdata = data for c1, c2 in zip(cluster_map, clusters): assert_arrays_equal(c1, [data[i] for i in c2.indices]) # Remove refdata, i.e. 
back to indices cluster_map.refdata = None assert_array_equal(cluster_map, [cluster.indices for cluster in clusters]) @set_random_number_generator() def test_cluster_map_getitem(rng): nb_clusters = 11 indices = list(range(nb_clusters)) rng.shuffle(indices) # Non-trivial ordering advanced_indices = indices + [0, 1, 2, -1, -2, -3] cluster_map = ClusterMap() clusters = [] for i in range(nb_clusters): new_cluster = Cluster(indices=range(i)) cluster_map.add_cluster(new_cluster) clusters.append(new_cluster) # Test indexing for i in advanced_indices: assert_true(cluster_map[i] == clusters[i]) # Test advanced indexing assert_arrays_equal( cluster_map[advanced_indices], [clusters[i] for i in advanced_indices] ) # Test index out of bounds assert_raises(IndexError, cluster_map.__getitem__, len(clusters)) assert_raises(IndexError, cluster_map.__getitem__, -len(clusters) - 1) # Test slicing and negative indexing assert_equal(cluster_map[-1], clusters[-1]) assert_array_equal( np.array(cluster_map[::2], dtype=object), np.array(clusters[::2], dtype=object) ) assert_arrays_equal(cluster_map[::-1], clusters[::-1]) assert_arrays_equal(cluster_map[:-1], clusters[:-1]) assert_arrays_equal(cluster_map[1:], clusters[1:]) def test_cluster_map_str_and_repr(): nb_clusters = 11 cluster_map = ClusterMap() clusters = [] for i in range(nb_clusters): new_cluster = Cluster(indices=range(i)) cluster_map.add_cluster(new_cluster) clusters.append(new_cluster) expected_str = "[" + ", ".join(map(str, clusters)) + "]" assert_equal(str(cluster_map), expected_str) assert_equal(repr(cluster_map), f"ClusterMap({expected_str})") def test_cluster_map_size(): nb_clusters = 11 cluster_map = ClusterMap() clusters = [Cluster() for _ in range(nb_clusters)] cluster_map.add_cluster(*clusters) assert_equal(len(cluster_map), nb_clusters) assert_equal(cluster_map.size(), nb_clusters) @set_random_number_generator(42) def test_cluster_map_clusters_sizes(rng): nb_clusters = 11 # Generate random indices indices = [range(rng.integers(1, 10)) for _ in range(nb_clusters)] cluster_map = ClusterMap() clusters = [Cluster(indices=indices[i]) for i in range(nb_clusters)] cluster_map.add_cluster(*clusters) assert_equal(cluster_map.clusters_sizes(), list(map(len, indices))) @set_random_number_generator(42) def test_cluster_map_get_small_and_large_clusters(rng): nb_clusters = 11 cluster_map = ClusterMap() # Randomly generate small clusters indices = [rng.integers(0, 10, size=i) for i in range(1, nb_clusters + 1)] small_clusters = [Cluster(indices=indices[i]) for i in range(nb_clusters)] cluster_map.add_cluster(*small_clusters) # Randomly generate large clusters indices = [ rng.integers(0, 10, size=i) for i in range(nb_clusters + 1, 2 * nb_clusters + 1) ] large_clusters = [Cluster(indices=indices[i]) for i in range(nb_clusters)] cluster_map.add_cluster(*large_clusters) assert_equal(len(cluster_map), 2 * nb_clusters) assert_equal(len(cluster_map.get_small_clusters(nb_clusters)), len(small_clusters)) assert_arrays_equal(cluster_map.get_small_clusters(nb_clusters), small_clusters) assert_equal( len(cluster_map.get_large_clusters(nb_clusters + 1)), len(large_clusters) ) assert_arrays_equal(cluster_map.get_large_clusters(nb_clusters + 1), large_clusters) def test_cluster_map_comparison_with_int(): clusters1_indices = range(10) clusters2_indices = range(10, 15) clusters3_indices = [15] # Build a test ClusterMap clusters = ClusterMap() cluster1 = Cluster() cluster1.assign(*clusters1_indices) clusters.add_cluster(cluster1) cluster2 = Cluster()
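# Aside (illustrative sketch): comparing a ClusterMap against an int operates
# on the *sizes* of its clusters and yields a boolean mask with one entry per
# cluster, as the assertions below exercise. Hypothetical `_cm`, for
# illustration only:
_cm = ClusterMap()
_cm.add_cluster(Cluster(indices=[0]), Cluster(indices=[0, 1, 2]))
assert list(_cm < 2) == [True, False]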
cluster2.assign(*clusters2_indices) clusters.add_cluster(cluster2) cluster3 = Cluster() cluster3.assign(*clusters3_indices) clusters.add_cluster(cluster3) subset = clusters < 5 assert_equal(subset.sum(), 1) assert_array_equal(list(clusters[subset][0]), clusters3_indices) subset = clusters <= 5 assert_equal(subset.sum(), 2) assert_array_equal(list(clusters[subset][0]), clusters2_indices) assert_array_equal(list(clusters[subset][1]), clusters3_indices) subset = clusters == 5 assert_equal(subset.sum(), 1) assert_array_equal(list(clusters[subset][0]), clusters2_indices) subset = clusters != 5 assert_equal(subset.sum(), 2) assert_array_equal(list(clusters[subset][0]), clusters1_indices) assert_array_equal(list(clusters[subset][1]), clusters3_indices) subset = clusters > 5 assert_equal(subset.sum(), 1) assert_array_equal(list(clusters[subset][0]), clusters1_indices) subset = clusters >= 5 assert_equal(subset.sum(), 2) assert_array_equal(list(clusters[subset][0]), clusters1_indices) assert_array_equal(list(clusters[subset][1]), clusters2_indices) def test_cluster_map_comparison_with_object(): nb_clusters = 4 cluster_map = ClusterMap() # clusters = [] for i in range(nb_clusters): new_cluster = Cluster(indices=range(i)) cluster_map.add_cluster(new_cluster) # clusters.append(new_cluster) # Comparison with another ClusterMap object other_cluster_map = copy.deepcopy(cluster_map) assert_equal( np.array(cluster_map, dtype=object), np.array(other_cluster_map, dtype=object) ) other_cluster_map = copy.deepcopy(cluster_map) assert_false(cluster_map != other_cluster_map) other_cluster_map = copy.deepcopy(cluster_map) assert_raises(NotImplementedError, cluster_map.__le__, other_cluster_map) # Comparison with an object that is not a ClusterMap or int assert_raises(NotImplementedError, cluster_map.__le__, float(42)) def test_cluster_map_centroid_attributes_and_constructor(): clusters = ClusterMapCentroid() assert_array_equal(clusters.centroids, []) assert_raises(AttributeError, setattr, clusters, "centroids", []) def test_cluster_map_centroid_add_cluster(): clusters = ClusterMapCentroid() centroids = [] for i in range(3): cluster = ClusterCentroid(centroid=np.zeros_like(features)) centroids.append(np.zeros_like(features)) for id_data in range(2 * i): centroids[-1] = (centroids[-1] * id_data + (id_data + 1) * features) / ( id_data + 1 ) cluster.assign(id_data, (id_data + 1) * features) cluster.update() clusters.add_cluster(cluster) assert_array_equal(cluster.centroid, centroids[-1]) assert_equal(type(cluster), ClusterCentroid) assert_true(cluster == clusters[-1]) assert_equal(type(clusters.centroids), list) assert_array_equal( list(itertools.chain(*clusters.centroids)), list(itertools.chain(*centroids)) ) # Check adding features of different sizes (shorter and longer) features_shape_short = (1, features_shape[1] - 3) features_too_short = np.ones(features_shape_short, dtype=dtype) assert_raises(ValueError, cluster.assign, 123, features_too_short) features_shape_long = (1, features_shape[1] + 3) features_too_long = np.ones(features_shape_long, dtype=dtype) assert_raises(ValueError, cluster.assign, 123, features_too_long) @set_random_number_generator() def test_cluster_map_centroid_remove_cluster(rng): clusters = ClusterMapCentroid() centroid1 = rng.random(features_shape).astype(dtype) cluster1 = ClusterCentroid(centroid1, indices=[1]) clusters.add_cluster(cluster1) centroid2 = rng.random(features_shape).astype(dtype) cluster2 = ClusterCentroid(centroid2, indices=[1, 2]) clusters.add_cluster(cluster2) centroid3 = 
rng.random(features_shape).astype(dtype) cluster3 = ClusterCentroid(centroid3, indices=[1, 2, 3]) clusters.add_cluster(cluster3) assert_equal(len(clusters), 3) clusters.remove_cluster(cluster2) assert_equal(len(clusters), 2) assert_array_equal( list(itertools.chain(*clusters)), list(itertools.chain(*[cluster1, cluster3])) ) assert_array_equal(clusters.centroids, np.array([centroid1, centroid3])) assert_equal(clusters[0], cluster1) assert_equal(clusters[1], cluster3) clusters.remove_cluster(cluster3) assert_equal(len(clusters), 1) assert_array_equal(list(itertools.chain(*clusters)), list(cluster1)) assert_array_equal(clusters.centroids, np.array([centroid1])) assert_equal(clusters[0], cluster1) clusters.remove_cluster(cluster1) assert_equal(len(clusters), 0) assert_array_equal(list(itertools.chain(*clusters)), []) assert_array_equal(clusters.centroids, []) @set_random_number_generator(42) def test_cluster_map_centroid_iter(rng): nb_clusters = 11 cluster_map = ClusterMapCentroid() clusters = [] for _ in range(nb_clusters): new_centroid = np.zeros_like(features) new_cluster = ClusterCentroid( new_centroid, indices=rng.integers(0, len(data), size=10) ) cluster_map.add_cluster(new_cluster) clusters.append(new_cluster) assert_true(all(c1 is c2 for c1, c2 in zip(cluster_map.clusters, clusters))) assert_array_equal(cluster_map, clusters) assert_array_equal(cluster_map.clusters, clusters) assert_array_equal(cluster_map, [cluster.indices for cluster in clusters]) # Set refdata cluster_map.refdata = data for c1, c2 in zip(cluster_map, clusters): assert_arrays_equal(c1, [data[i] for i in c2.indices]) @set_random_number_generator() def test_cluster_map_centroid_getitem(rng): nb_clusters = 11 indices = list(range(len(data))) rng.shuffle(indices) # Non-trivial ordering advanced_indices = indices + [0, 1, 2, -1, -2, -3] cluster_map = ClusterMapCentroid() clusters = [] for _ in range(nb_clusters): centroid = np.zeros_like(features) cluster = ClusterCentroid(centroid) cluster.id = cluster_map.add_cluster(cluster) clusters.append(cluster) # Test indexing for i in advanced_indices: assert_true(cluster_map[i] == clusters[i]) # Test advanced indexing assert_arrays_equal( cluster_map[advanced_indices], [clusters[i] for i in advanced_indices] ) # Test index out of bounds assert_raises(IndexError, cluster_map.__getitem__, len(clusters)) assert_raises(IndexError, cluster_map.__getitem__, -len(clusters) - 1) # Test slicing and negative indexing assert_true(cluster_map[-1] == clusters[-1]) assert_array_equal(cluster_map[::2], clusters[::2]) assert_arrays_equal(cluster_map[::-1], clusters[::-1]) assert_arrays_equal(cluster_map[:-1], clusters[:-1]) assert_arrays_equal(cluster_map[1:], clusters[1:]) def test_cluster_map_centroid_comparison_with_int(): clusters1_indices = range(10) clusters2_indices = range(10, 15) clusters3_indices = [15] # Build a test ClusterMapCentroid centroid = np.zeros_like(features) cluster1 = ClusterCentroid(centroid.copy()) for i in clusters1_indices: cluster1.assign(i, features) cluster2 = ClusterCentroid(centroid.copy()) for i in clusters2_indices: cluster2.assign(i, features) cluster3 = ClusterCentroid(centroid.copy()) for i in clusters3_indices: cluster3.assign(i, features) # Update centroids cluster1.update() cluster2.update() cluster3.update() clusters = ClusterMapCentroid() clusters.add_cluster(cluster1) clusters.add_cluster(cluster2) clusters.add_cluster(cluster3) subset = clusters < 5 assert_equal(subset.sum(), 1) assert_array_equal(list(clusters[subset][0]), clusters3_indices) subset
= clusters <= 5 assert_equal(subset.sum(), 2) assert_array_equal(list(clusters[subset][0]), clusters2_indices) assert_array_equal(list(clusters[subset][1]), clusters3_indices) subset = clusters == 5 assert_equal(subset.sum(), 1) assert_array_equal(list(clusters[subset][0]), clusters2_indices) subset = clusters != 5 assert_equal(subset.sum(), 2) assert_array_equal(list(clusters[subset][0]), clusters1_indices) assert_array_equal(list(clusters[subset][1]), clusters3_indices) subset = clusters > 5 assert_equal(subset.sum(), 1) assert_array_equal(list(clusters[subset][0]), clusters1_indices) subset = clusters >= 5 assert_equal(subset.sum(), 2) assert_array_equal(list(clusters[subset][0]), clusters1_indices) assert_array_equal(list(clusters[subset][1]), clusters2_indices) def test_subclassing_clustering(): class SubClustering(Clustering): def cluster(self, data, ordering=None): pass clustering_algo = SubClustering() assert_raises( NotImplementedError, super(SubClustering, clustering_algo).cluster, None ) dipy-1.11.0/dipy/segment/tests/test_feature.py000066400000000000000000000266551476546756600214240ustar00rootroot00000000000000import numpy as np from numpy.testing import ( assert_array_almost_equal, assert_array_equal, assert_equal, assert_raises, ) import dipy.segment.featurespeed as dipysfeature from dipy.segment.featurespeed import extract import dipy.segment.metric as dipymetric import dipy.segment.metricspeed as dipysmetric from dipy.testing import assert_false, assert_true from dipy.testing.decorators import set_random_number_generator dtype = "float32" rng = np.random.default_rng() s1 = np.array([np.arange(10, dtype=dtype)] * 3).T # 10x3 s2 = np.arange(3 * 10, dtype=dtype).reshape((-1, 3))[::-1] # 10x3 s3 = rng.random((5, 4), dtype=dtype) # 5x4 s4 = rng.random((5, 3), dtype=dtype) # 5x3 def test_identity_feature(): # Test subclassing Feature class IdentityFeature(dipysfeature.Feature): def __init__(self): super(IdentityFeature, self).__init__(is_order_invariant=False) def infer_shape(self, streamline): return streamline.shape def extract(self, streamline): return streamline for feature in [dipysfeature.IdentityFeature(), IdentityFeature()]: for s in [s1, s2, s3, s4]: # Test method infer_shape assert_equal(feature.infer_shape(s), s.shape) # Test method extract features = feature.extract(s) assert_equal(features.shape, s.shape) assert_array_equal(features, s) # This feature type is not order invariant assert_false(feature.is_order_invariant) for s in [s1, s2, s3, s4]: features = feature.extract(s) features_flip = feature.extract(s[::-1]) assert_array_equal(features_flip, s[::-1]) assert_true(np.any(np.not_equal(features, features_flip))) def test_feature_resample(): from dipy.tracking.streamline import set_number_of_points # Test subclassing Feature class ResampleFeature(dipysfeature.Feature): def __init__(self, nb_points): super(ResampleFeature, self).__init__(is_order_invariant=False) self.nb_points = nb_points if nb_points <= 0: msg = ( "ResampleFeature: `nb_points` must be strictly positive: " f"{nb_points}" ) raise ValueError(msg) def infer_shape(self, streamline): return self.nb_points, streamline.shape[1] def extract(self, streamline): return set_number_of_points(streamline, nb_points=self.nb_points) assert_raises(ValueError, dipysfeature.ResampleFeature, nb_points=0) assert_raises(ValueError, ResampleFeature, nb_points=0) max_points = max(map(len, [s1, s2, s3, s4])) for nb_points in [2, 5, 2 * max_points]: for feature in [ dipysfeature.ResampleFeature(nb_points), 
ResampleFeature(nb_points), ]: for s in [s1, s2, s3, s4]: # Test method infer_shape assert_equal(feature.infer_shape(s), (nb_points, s.shape[1])) # Test method extract features = feature.extract(s) assert_equal(features.shape, (nb_points, s.shape[1])) assert_array_almost_equal( features, set_number_of_points(s, nb_points=nb_points) ) # This feature type is not order invariant assert_false(feature.is_order_invariant) for s in [s1, s2, s3, s4]: features = feature.extract(s) features_flip = feature.extract(s[::-1]) assert_array_equal( features_flip, set_number_of_points(s[::-1], nb_points=nb_points) ) assert_true(np.any(np.not_equal(features, features_flip))) def test_feature_center_of_mass(): # Test subclassing Feature class CenterOfMassFeature(dipysfeature.Feature): def __init__(self): super(CenterOfMassFeature, self).__init__(is_order_invariant=True) def infer_shape(self, streamline): return 1, streamline.shape[1] def extract(self, streamline): return np.mean(streamline, axis=0)[None, :] for feature in [dipysfeature.CenterOfMassFeature(), CenterOfMassFeature()]: for s in [s1, s2, s3, s4]: # Test method infer_shape assert_equal(feature.infer_shape(s), (1, s.shape[1])) # Test method extract features = feature.extract(s) assert_equal(features.shape, (1, s.shape[1])) assert_array_almost_equal(features, np.mean(s, axis=0)[None, :]) # This feature type is order invariant assert_true(feature.is_order_invariant) for s in [s1, s2, s3, s4]: features = feature.extract(s) features_flip = feature.extract(s[::-1]) assert_array_almost_equal(features, features_flip) def test_feature_midpoint(): # Test subclassing Feature class MidpointFeature(dipysfeature.Feature): def __init__(self): super(MidpointFeature, self).__init__(is_order_invariant=False) def infer_shape(self, streamline): return 1, streamline.shape[1] def extract(self, streamline): return streamline[[len(streamline) // 2]] for feature in [dipysfeature.MidpointFeature(), MidpointFeature()]: for s in [s1, s2, s3, s4]: # Test method infer_shape assert_equal(feature.infer_shape(s), (1, s.shape[1])) # Test method extract features = feature.extract(s) assert_equal(features.shape, (1, s.shape[1])) assert_array_almost_equal(features, s[len(s) // 2][None, :]) # This feature type is not order invariant assert_false(feature.is_order_invariant) for s in [s1, s2, s3, s4]: features = feature.extract(s) features_flip = feature.extract(s[::-1]) if len(s) % 2 == 0: assert_true(np.any(np.not_equal(features, features_flip))) else: assert_array_equal(features, features_flip) def test_feature_arclength(): from dipy.tracking.streamline import length # Test subclassing Feature class ArcLengthFeature(dipysfeature.Feature): def __init__(self): super(ArcLengthFeature, self).__init__(is_order_invariant=True) def infer_shape(self, streamline): return 1, 1 def extract(self, streamline): return length(streamline)[None, None] for feature in [dipysfeature.ArcLengthFeature(), ArcLengthFeature()]: for s in [s1, s2, s3, s4]: # Test method infer_shape assert_equal(feature.infer_shape(s), (1, 1)) # Test method extract features = feature.extract(s) assert_equal(features.shape, (1, 1)) assert_array_almost_equal(features, length(s)[None, None]) # This feature type is order invariant assert_true(feature.is_order_invariant) for s in [s1, s2, s3, s4]: features = feature.extract(s) features_flip = feature.extract(s[::-1]) assert_array_almost_equal(features, features_flip) def test_feature_vector_of_endpoints(): # Test subclassing Feature class 
VectorOfEndpointsFeature(dipysfeature.Feature): def __init__(self): super(VectorOfEndpointsFeature, self).__init__(False) def infer_shape(self, streamline): return 1, streamline.shape[1] def extract(self, streamline): return streamline[[-1]] - streamline[[0]] feature_types = [ dipysfeature.VectorOfEndpointsFeature(), VectorOfEndpointsFeature(), ] for feature in feature_types: for s in [s1, s2, s3, s4]: # Test method infer_shape assert_equal(feature.infer_shape(s), (1, s.shape[1])) # Test method extract features = feature.extract(s) assert_equal(features.shape, (1, s.shape[1])) assert_array_almost_equal(features, s[[-1]] - s[[0]]) # This feature type is not order invariant assert_false(feature.is_order_invariant) for s in [s1, s2, s3, s4]: features = feature.extract(s) features_flip = feature.extract(s[::-1]) # The flip features are simply the negative of the features. assert_array_almost_equal(features, -features_flip) @set_random_number_generator(1234) def test_feature_extract(rng): # Test that features are automatically cast into float32 when # coming from Python space class CenterOfMass64bit(dipysfeature.Feature): def infer_shape(self, streamline): return streamline.shape[1] def extract(self, streamline): return np.mean(streamline.astype(np.float64), axis=0) nb_streamlines = 100 feature_shape = (1, 3) # One N-dimensional point feature = CenterOfMass64bit() nb_points = rng.integers(20, 30, size=(nb_streamlines,)) * 3 streamlines = [ np.arange(nb).reshape((-1, 3)).astype(np.float32) for nb in nb_points ] features = extract(feature, streamlines) assert_equal(len(features), len(streamlines)) assert_equal(features[0].shape, feature_shape) # Test that scalar features are returned as 2D arrays class ArcLengthFeature(dipysfeature.Feature): def infer_shape(self, streamline): return 1 def extract(self, streamline): square_norms = np.sum((streamline[1:] - streamline[:-1]) ** 2) return np.sum(np.sqrt(square_norms)) feature_shape = (1, 1) # One scalar represented as a 2D array feature = ArcLengthFeature() features = extract(feature, streamlines) assert_equal(len(features), len(streamlines)) assert_equal(features[0].shape, feature_shape) # Check that streamlines can be flagged read-only for s in streamlines: s.setflags(write=False) def test_subclassing_feature(): class EmptyFeature(dipysfeature.Feature): pass feature = EmptyFeature() assert_raises(NotImplementedError, feature.infer_shape, None) assert_raises(NotImplementedError, feature.extract, None) def test_using_python_feature_with_cython_metric(): class Identity(dipysfeature.Feature): def infer_shape(self, streamline): return streamline.shape def extract(self, streamline): return streamline # Test using Python Feature with Cython Metric feature = Identity() metric = dipysmetric.AveragePointwiseEuclideanMetric(feature) d1 = dipymetric.dist(metric, s1, s2) features1 = metric.feature.extract(s1) features2 = metric.feature.extract(s2) d2 = metric.dist(features1, features2) assert_equal(d1, d2) # Python 2.7 on Windows 64 bits uses long type instead of int for # integer constants. We make sure the code is robust to such behaviour # by explicitly testing it.
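# For clarity, the invariant exercised above (and again below) can be
# restated as:
#     dipymetric.dist(metric, s1, s2) ==
#         metric.dist(metric.feature.extract(s1), metric.feature.extract(s2))
# i.e. `dist` is a convenience wrapper that extracts the features itself.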
class ArcLengthFeature(dipysfeature.Feature): def infer_shape(self, streamline): return 1 def extract(self, streamline): square_norms = np.sum((streamline[1:] - streamline[:-1]) ** 2) return np.sum(np.sqrt(square_norms)) # Test using Python Feature with Cython Metric feature = ArcLengthFeature() metric = dipymetric.EuclideanMetric(feature) d1 = dipymetric.dist(metric, s1, s2) features1 = metric.feature.extract(s1) features2 = metric.feature.extract(s2) d2 = metric.dist(features1, features2) assert_equal(d1, d2) dipy-1.11.0/dipy/segment/tests/test_fss.py000066400000000000000000000124131476546756600205470ustar00rootroot00000000000000import numpy as np from numpy.testing import assert_almost_equal, assert_raises from dipy.data import get_fnames from dipy.io.streamline import load_tractogram from dipy.segment.fss import ( FastStreamlineSearch, nearest_from_matrix_col, nearest_from_matrix_row, ) from dipy.segment.metric import mean_euclidean_distance from dipy.testing import ( assert_arrays_equal, assert_greater, assert_greater_equal, assert_true, ) def setup_module(): global f1, f2 fname = get_fnames(name="fornix") fornix = load_tractogram(fname, "same", bbox_valid_check=False) # Should work with both StatefulTractogram and streamlines (list of array) f1 = fornix[:200] f2 = fornix.streamlines[200:] def test_fss_radius_search(): r = 4.0 nb_pts = 24 # For each "f1" streamlines search all in radius of "f2" fss_f1 = FastStreamlineSearch( f1, max_radius=r, nb_mpts=2, bin_size=20.0, resampling=nb_pts, bidirectional=True, ) rs_f2_in_f1 = fss_f1.radius_search(f2, radius=r, use_negative=True) # For each "f2" streamlines search all in radius of "f1" fss_f2 = FastStreamlineSearch( f2, max_radius=r, nb_mpts=8, bin_size=10.0, resampling=nb_pts, bidirectional=True, ) rs_f1_in_f2 = fss_f2.radius_search(f1, radius=r, use_negative=False) # Verify there is results assert_greater(rs_f2_in_f1.nnz, 0) assert_greater(rs_f1_in_f2.nnz, 0) # Verify The number of results are the same assert_true(rs_f2_in_f1.nnz == rs_f1_in_f2.nnz) # Verify that both search results are equivalent (transposed) assert_arrays_equal(np.sort(rs_f2_in_f1.row), np.sort(rs_f1_in_f2.col)) assert_arrays_equal(np.sort(rs_f2_in_f1.col), np.sort(rs_f1_in_f2.row)) # Verify if resulting distances are the same assert_almost_equal(np.sort(np.abs(rs_f2_in_f1.data)), np.sort(rs_f1_in_f2.data)) # Verify that minimum are the same from f1 to f2 r1_a, r1_b, r1_d = nearest_from_matrix_row(rs_f2_in_f1) r2_a, r2_b, r2_d = nearest_from_matrix_col(rs_f1_in_f2) assert_arrays_equal(r1_a, r2_a) assert_arrays_equal(r1_b, r2_b) assert_arrays_equal(r1_d, r2_d) # Verify that minimum are the same from f2 to f1 r3_a, r3_b, r3_d = nearest_from_matrix_col(rs_f2_in_f1) r4_a, r4_b, r4_d = nearest_from_matrix_row(rs_f1_in_f2) assert_arrays_equal(r3_a, r4_a) assert_arrays_equal(r3_b, r4_b) assert_almost_equal(r3_d, r4_d) # Test with unidirectional search (bidirectional=False) fss_sd = FastStreamlineSearch( f1, max_radius=r, nb_mpts=6, bin_size=80.0, resampling=nb_pts, bidirectional=False, ) rs_f1_sd = fss_sd.radius_search(f2, radius=r, use_negative=True) # Single direction should be a subset of bidirectional assert_greater_equal(rs_f1_in_f2.nnz, rs_f1_sd.nnz) assert_true(np.all(np.isin(rs_f1_sd.row, rs_f2_in_f1.row))) assert_true(np.all(np.isin(rs_f1_sd.col, rs_f2_in_f1.col))) def test_fss_varying_radius(): # For each "f1" streamlines search all in radius of "f2" fss = FastStreamlineSearch( f1, max_radius=10.0, nb_mpts=5, bin_size=20.0, resampling=25, bidirectional=True ) rs_6 = 
fss.radius_search(f2, radius=6.0, use_negative=True) rs_4 = fss.radius_search(f2, radius=4.0, use_negative=True) rs_2 = fss.radius_search(f2, radius=2.0, use_negative=True) # smaller radius should be a subset or equal of a bigger radius assert_greater_equal(rs_6.nnz, rs_4.nnz) assert_true(np.all(np.isin(rs_4.row, rs_6.row))) assert_true(np.all(np.isin(rs_4.col, rs_6.col))) assert_greater_equal(rs_4.nnz, rs_2.nnz) assert_true(np.all(np.isin(rs_2.row, rs_4.row))) assert_true(np.all(np.isin(rs_2.col, rs_4.col))) def test_fss_single_point_slines(): slines = [np.array([[1.0, 1.0, 1.0]]), np.array([[0.0, 1.0, 2.0]])] fss = FastStreamlineSearch( slines, max_radius=4.0, nb_mpts=4, bin_size=20.0, resampling=24, bidirectional=False, ) res = fss.radius_search(slines, radius=4.0) # 2x2 matrix with 4 element assert_true(res.nnz == 4) mat = res.toarray() dist = mean_euclidean_distance(slines[0], slines[1]) assert_almost_equal(mat[0, 0], 0.0) assert_almost_equal(mat[1, 1], 0.0) assert_almost_equal(mat[1, 0], dist) assert_almost_equal(mat[0, 1], dist) def test_fss_empty_results(): fss = FastStreamlineSearch( f1, max_radius=2.0, nb_mpts=2, bin_size=20.0, resampling=22, bidirectional=True ) res = fss.radius_search(f2, radius=0.01, use_negative=True) assert_true(res.nnz == 0) def test_fss_invalid_max_radius(): assert_raises(ValueError, FastStreamlineSearch, f1, max_radius=0.0) assert_raises(ValueError, FastStreamlineSearch, f1, max_radius=-1.0) def test_fss_invalid_radius(): fss = FastStreamlineSearch(f1, max_radius=2.0) assert_raises(ValueError, fss.radius_search, f2, radius=5.0) def test_fss_invalid_mpts(): assert_raises( ValueError, FastStreamlineSearch, f1, max_radius=4.0, nb_mpts=4, resampling=23 ) assert_raises( ZeroDivisionError, FastStreamlineSearch, f1, max_radius=4.0, nb_mpts=0, resampling=24, ) dipy-1.11.0/dipy/segment/tests/test_mask.py000066400000000000000000000067551476546756600207230ustar00rootroot00000000000000import warnings import numpy as np from numpy.testing import assert_almost_equal, assert_equal, assert_raises from scipy.ndimage import binary_dilation, generate_binary_structure, median_filter from dipy.data import get_fnames from dipy.io.image import load_nifti_data from dipy.segment.mask import ( applymask, bounding_box, crop, median_otsu, multi_median, otsu, ) def test_mask(): vol = np.zeros((30, 30, 30)) vol[15, 15, 15] = 1 struct = generate_binary_structure(3, 1) voln = binary_dilation(vol, structure=struct, iterations=4).astype("f4") initial = np.sum(voln > 0) mask = voln.copy() thresh = otsu(mask) mask = mask > thresh initial_otsu = np.sum(mask > 0) assert_equal(initial_otsu, initial) mins, maxs = bounding_box(mask) voln_crop = crop(mask, mins, maxs) initial_crop = np.sum(voln_crop > 0) assert_equal(initial_crop, initial) applymask(voln, mask) final = np.sum(voln > 0) assert_equal(final, initial) # Test multi_median. 
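# (The reference result below assumes multi_median is equivalent to chaining
# scipy's median_filter `numpass` times with a (2 * medianradius + 1) kernel,
# which is exactly the equality this test asserts.)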
img = np.arange(25).reshape(5, 5) img_copy = img.copy() medianradius = 2 median_test = multi_median(img, medianradius, 3) assert_equal(img, img_copy) medarr = np.ones_like(img.shape) * ((medianradius * 2) + 1) median_control = median_filter(img, medarr) median_control = median_filter(median_control, medarr) median_control = median_filter(median_control, medarr) assert_equal(median_test, median_control) def test_bounding_box(): vol = np.zeros((100, 100, 50), dtype=int) # Check the more usual case vol[10:90, 11:40, 5:33] = 3 mins, maxs = bounding_box(vol) assert_equal(mins, [10, 11, 5]) assert_equal(maxs, [90, 40, 33]) # Check a 2d case mins, maxs = bounding_box(vol[10]) assert_equal(mins, [11, 5]) assert_equal(maxs, [40, 33]) vol[:] = 0 with warnings.catch_warnings(record=True) as w: warnings.simplefilter("always") # Trigger a warning. num_warns = len(w) mins, maxs = bounding_box(vol) # Assert number of warnings has gone up by 1 assert_equal(len(w), num_warns + 1) # Check that an empty array returns zeros for both min & max assert_equal(mins, [0, 0, 0]) assert_equal(maxs, [0, 0, 0]) # Check the 2d case mins, maxs = bounding_box(vol[0]) assert_equal(len(w), num_warns + 2) assert_equal(mins, [0, 0]) assert_equal(maxs, [0, 0]) def test_median_otsu(): fname = get_fnames(name="S0_10") data = load_nifti_data(fname) data = np.squeeze(data.astype("f8")) dummy_mask = data > data.mean() data_masked, mask = median_otsu( data, median_radius=3, numpass=2, autocrop=False, vol_idx=None, dilate=None ) assert_equal(mask.sum() < dummy_mask.sum(), True) data2 = np.zeros(data.shape + (2,)) data2[..., 0] = data data2[..., 1] = data data2_masked, mask2 = median_otsu( data2, median_radius=3, numpass=2, autocrop=False, vol_idx=[0, 1], dilate=None ) assert_almost_equal(mask.sum(), mask2.sum()) _, mask3 = median_otsu( data2, median_radius=3, numpass=2, autocrop=False, vol_idx=[0, 1], dilate=1 ) assert_equal(mask2.sum() < mask3.sum(), True) _, mask4 = median_otsu( data2, median_radius=3, numpass=2, autocrop=False, vol_idx=[0, 1], dilate=2 ) assert_equal(mask3.sum() < mask4.sum(), True) # For 4D volumes, can't call without vol_idx input: assert_raises(ValueError, median_otsu, data2) dipy-1.11.0/dipy/segment/tests/test_metric.py000066400000000000000000000251701476546756600212430ustar00rootroot00000000000000import itertools import numpy as np from numpy.testing import ( assert_almost_equal, assert_array_equal, assert_equal, assert_raises, ) import dipy.segment.featurespeed as dipysfeature import dipy.segment.metric as dipymetric import dipy.segment.metricspeed as dipysmetric from dipy.testing import ( assert_false, assert_greater_equal, assert_less_equal, assert_true, ) from dipy.testing.decorators import set_random_number_generator def norm(x, order=None, axis=None): if axis is not None: return np.apply_along_axis(np.linalg.norm, axis, x.astype(np.float64), order) return np.linalg.norm(x.astype(np.float64), ord=order) dtype = "float32" # Create wiggling streamline nb_points = 18 rng = np.random.default_rng(42) x = np.linspace(0, 10, nb_points) y = rng.random(nb_points) z = np.sin(np.linspace(0, np.pi, nb_points)) # Bending s = np.array([x, y, z], dtype=dtype).T # Create trivial streamlines s1 = np.array([np.arange(10, dtype=dtype)] * 3).T # 10x3 s2 = np.arange(3 * 10, dtype=dtype).reshape((-1, 3))[::-1] # 10x3 s3 = np.array([np.arange(5, dtype=dtype)] * 4) # 5x4 s4 = np.array([np.arange(5, dtype=dtype)] * 3) # 5x3 streamlines = [s, s1, s2, s3, s4] def test_metric_minimum_average_direct_flip(): feature = 
dipysfeature.IdentityFeature() class MinimumAverageDirectFlipMetric(dipysmetric.Metric): def __init__(self, feature): super(MinimumAverageDirectFlipMetric, self).__init__(feature=feature) @property def is_order_invariant(self): return True # Ordering is handled in the distance computation def are_compatible(self, shape1, shape2): return shape1[0] == shape2[0] def dist(self, v1, v2): def average_euclidean(x, y): return np.mean(norm(x - y, axis=1)) dist_direct = average_euclidean(v1, v2) dist_flipped = average_euclidean(v1, v2[::-1]) return min(dist_direct, dist_flipped) for metric in [ MinimumAverageDirectFlipMetric(feature), dipysmetric.MinimumAverageDirectFlipMetric(feature), ]: # Test special cases of the MDF distance. assert_equal(metric.dist(s, s), 0.0) assert_equal(metric.dist(s, s[::-1]), 0.0) # Translation offset = np.array([0.8, 1.3, 5], dtype=dtype) assert_almost_equal(metric.dist(s, s + offset), norm(offset), 5) # Scaling M_scaling = np.diag([1.2, 2.8, 3]).astype(dtype) s_mean = np.mean(s, axis=0) s_zero_mean = s - s_mean s_scaled = np.dot(M_scaling, s_zero_mean.T).T + s_mean d = np.mean(norm((np.diag(M_scaling) - 1) * s_zero_mean, axis=1)) assert_almost_equal(metric.dist(s, s_scaled), d, 5) # Rotation from dipy.core.geometry import rodrigues_axis_rotation rot_axis = np.array([1, 2, 3], dtype=dtype) M_rotation = rodrigues_axis_rotation(rot_axis, 60.0).astype(dtype) s_mean = np.mean(s, axis=0) s_zero_mean = s - s_mean s_rotated = np.dot(M_rotation, s_zero_mean.T).T + s_mean opposite = norm(np.cross(rot_axis, s_zero_mean), axis=1) / norm(rot_axis) distances = np.sqrt( 2 * opposite**2 * (1 - np.cos(60.0 * np.pi / 180.0)) ).astype(dtype) d = np.mean(distances) assert_almost_equal(metric.dist(s, s_rotated), d, 5) # All possible pairs for s1, s2 in itertools.product(*[streamlines] * 2): # Extract features since metric doesn't work # directly on streamlines f1 = metric.feature.extract(s1) f2 = metric.feature.extract(s2) # Test method are_compatible same_nb_points = f1.shape[0] == f2.shape[0] assert_equal(metric.are_compatible(f1.shape, f2.shape), same_nb_points) # Test method dist if features are compatible if metric.are_compatible(f1.shape, f2.shape): distance = metric.dist(f1, f2) if np.all(f1 == f2): assert_equal(distance, 0.0) assert_almost_equal(distance, dipysmetric.dist(metric, s1, s2)) assert_almost_equal(distance, dipymetric.mdf(s1, s2)) assert_greater_equal(distance, 0.0) # This metric type is order invariant assert_true(metric.is_order_invariant) # All possible pairs for s1, s2 in itertools.product(*[streamlines] * 2): f1 = metric.feature.extract(s1) f2 = metric.feature.extract(s2) if not metric.are_compatible(f1.shape, f2.shape): continue f1_flip = metric.feature.extract(s1[::-1]) f2_flip = metric.feature.extract(s2[::-1]) distance = metric.dist(f1, f2) assert_almost_equal(metric.dist(f1_flip, f2_flip), distance) if not np.all(f1_flip == f2_flip): assert_true(np.allclose(metric.dist(f1, f2_flip), distance)) assert_true(np.allclose(metric.dist(f1_flip, f2), distance)) def test_metric_cosine(): feature = dipysfeature.VectorOfEndpointsFeature() class CosineMetric(dipysmetric.Metric): def __init__(self, feature): super(CosineMetric, self).__init__(feature=feature) def are_compatible(self, shape1, shape2): # Cosine metric works on vectors. 
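# Both features must therefore be single row vectors of the same
# dimension, i.e. have shape (1, d):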
return shape1 == shape2 and shape1[0] == 1 def dist(self, v1, v2): # Check if we have null vectors if norm(v1) == 0: return 0.0 if norm(v2) == 0 else 1.0 v1_normed = v1.astype(np.float64) / norm(v1.astype(np.float64)) v2_normed = v2.astype(np.float64) / norm(v2.astype(np.float64)) cos_theta = np.dot(v1_normed, v2_normed.T).item() # Make sure it's in [-1, 1], i.e. within domain of arccosine cos_theta = np.minimum(cos_theta, 1.0) cos_theta = np.maximum(cos_theta, -1.0) return np.arccos(cos_theta) / np.pi # Normalized cosine distance for metric in [CosineMetric(feature), dipysmetric.CosineMetric(feature)]: # Test special cases of the cosine distance. v0 = np.array([[0, 0, 0]], dtype=np.float32) v1 = np.array([[1, 2, 3]], dtype=np.float32) v2 = np.array([[1, -1.0 / 2, 0]], dtype=np.float32) v3 = np.array([[-1, -2, -3]], dtype=np.float32) assert_equal(metric.dist(v0, v0), 0.0) # dot-dot assert_equal(metric.dist(v0, v1), 1.0) # dot-line assert_equal(metric.dist(v1, v1), 0.0) # collinear assert_equal(metric.dist(v1, v2), 0.5) # orthogonal assert_equal(metric.dist(v1, v3), 1.0) # opposite # All possible pairs for s1, s2 in itertools.product(*[streamlines] * 2): # Extract features since metric doesn't # work directly on streamlines f1 = metric.feature.extract(s1) f2 = metric.feature.extract(s2) # Test method are_compatible are_vectors = f1.shape[0] == 1 and f2.shape[0] == 1 same_dimension = f1.shape[1] == f2.shape[1] assert_equal( metric.are_compatible(f1.shape, f2.shape), are_vectors and same_dimension, ) # Test method dist if features are compatible if metric.are_compatible(f1.shape, f2.shape): distance = metric.dist(f1, f2) if np.all(f1 == f2): assert_almost_equal(distance, 0.0) assert_almost_equal(distance, dipysmetric.dist(metric, s1, s2)) assert_greater_equal(distance, 0.0) assert_less_equal(distance, 1.0) # This metric type is not order invariant assert_false(metric.is_order_invariant) # All possible pairs for s1, s2 in itertools.product(*[streamlines] * 2): f1 = metric.feature.extract(s1) f2 = metric.feature.extract(s2) if not metric.are_compatible(f1.shape, f2.shape): continue f1_flip = metric.feature.extract(s1[::-1]) f2_flip = metric.feature.extract(s2[::-1]) distance = metric.dist(f1, f2) assert_almost_equal(metric.dist(f1_flip, f2_flip), distance) if not np.all(f1_flip == f2_flip): assert_false(metric.dist(f1, f2_flip) == distance) assert_false(metric.dist(f1_flip, f2) == distance) def test_subclassing_metric(): class EmptyMetric(dipysmetric.Metric): pass metric = EmptyMetric() assert_raises(NotImplementedError, metric.are_compatible, None, None) assert_raises(NotImplementedError, metric.dist, None, None) @set_random_number_generator() def test_distance_matrix(rng): metric = dipysmetric.SumPointwiseEuclideanMetric() for dtype in [np.int32, np.int64, np.float32, np.float64]: # Compute distances of all tuples spawned by the Cartesian product # of `data` with itself. data = (rng.random((4, 10, 3)) * 10).astype(dtype) D = dipysmetric.distance_matrix(metric, data) assert_equal(D.shape, (len(data), len(data))) assert_array_equal(np.diag(D), np.zeros(len(data))) if metric.is_order_invariant: # Distance matrix should be symmetric assert_array_equal(D, D.T) for i in range(len(data)): for j in range(len(data)): assert_equal(D[i, j], dipysmetric.dist(metric, data[i], data[j])) # Compute distances of all tuples spawned by the Cartesian product # of `data` with `data2`.
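# This time the matrix is rectangular: len(data) rows by len(data2) columns.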
data2 = (rng.random((3, 10, 3)) * 10).astype(dtype) D = dipysmetric.distance_matrix(metric, data, data2) assert_equal(D.shape, (len(data), len(data2))) for i in range(len(data)): for j in range(len(data2)): assert_equal(D[i, j], dipysmetric.dist(metric, data[i], data2[j])) @set_random_number_generator() def test_mean_distances(rng): nb_slines = 10 nb_pts = 22 dim = 3 a = rng.random((nb_slines, nb_pts, dim)) b = rng.random((nb_slines, nb_pts, dim)) diff = a - b # Test Euclidean distance (L2) mean_l2_dist = dipymetric.mean_euclidean_distance(a, b) diff_norm = np.linalg.norm(diff.reshape((-1, dim)), ord=2, axis=-1) mean_norm = np.mean(diff_norm.reshape((nb_slines, -1)), axis=-1) assert_almost_equal(mean_l2_dist, mean_norm) # Test Manhattan distance (L1) mean_l1_dist = dipymetric.mean_manhattan_distance(a, b) diff_norm = np.linalg.norm(diff.reshape((-1, dim)), ord=1, axis=-1) mean_norm = np.mean(diff_norm.reshape((nb_slines, -1)), axis=-1) assert_almost_equal(mean_l1_dist, mean_norm) dipy-1.11.0/dipy/segment/tests/test_mrf.py000066400000000000000000000370301476546756600205420ustar00rootroot00000000000000import numpy as np import numpy.testing as npt from dipy.data import get_fnames from dipy.segment.mrf import ConstantObservationModel, IteratedConditionalModes from dipy.segment.tissue import TissueClassifierHMRF from dipy.sims.voxel import add_noise from dipy.testing.decorators import set_random_number_generator def create_image(): # Load a coronal slice from a T1-weighted MRI fname = get_fnames(name="t1_coronal_slice") single_slice = np.load(fname) # Stack a few copies to form a 3D volume nslices = 5 image = np.zeros(shape=single_slice.shape + (nslices,)) image[..., :nslices] = single_slice[..., None] return image # Making squares def create_square(): square = np.zeros((256, 256, 3), dtype=np.int16) square[42:213, 42:213, :] = 1 square[71:185, 71:185, :] = 2 square[99:157, 99:157, :] = 3 return square def create_square_uniform(rng): square_1 = np.zeros((256, 256, 3)) + 0.001 square_1 = add_noise(square_1, 10000, 1, noise_type="gaussian", rng=rng) temp_1 = rng.integers(1, 21, size=(171, 171, 3)) temp_1 = np.where(temp_1 < 20, 1, 3) square_1[42:213, 42:213, :] = temp_1 temp_2 = rng.integers(1, 21, size=(114, 114, 3)) temp_2 = np.where(temp_2 < 19, 2, 1) square_1[71:185, 71:185, :] = temp_2 temp_3 = rng.integers(1, 21, size=(58, 58, 3)) temp_3 = np.where(temp_3 < 20, 3, 1) square_1[99:157, 99:157, :] = temp_3 return square_1 def create_square_gauss(rng): square_gauss = np.zeros((256, 256, 3)) + 0.001 square_gauss = add_noise(square_gauss, 10000, 1, noise_type="gaussian", rng=rng) square_gauss[42:213, 42:213, :] = 1 noise_1 = rng.normal(1.001, 0.0001, size=square_gauss[42:213, 42:213, :].shape) square_gauss[42:213, 42:213, :] = square_gauss[42:213, 42:213, :] + noise_1 square_gauss[71:185, 71:185, :] = 2 noise_2 = rng.normal(2.001, 0.0001, size=square_gauss[71:185, 71:185, :].shape) square_gauss[71:185, 71:185, :] = square_gauss[71:185, 71:185, :] + noise_2 square_gauss[99:157, 99:157, :] = 3 noise_3 = rng.normal(3.001, 0.0001, size=square_gauss[99:157, 99:157, :].shape) square_gauss[99:157, 99:157, :] = square_gauss[99:157, 99:157, :] + noise_3 return square_gauss def test_grayscale_image(): nclasses = 4 beta = np.float64(0.0) image = create_image() com = ConstantObservationModel() icm = IteratedConditionalModes() mu, sigmasq = com.initialize_param_uniform(image, nclasses) npt.assert_array_almost_equal(mu, np.array([0.0, 0.25, 0.5, 0.75])) npt.assert_array_almost_equal(sigmasq, np.array([1.0, 
1.0, 1.0, 1.0])) neglogl = com.negloglikelihood(image, mu, sigmasq, nclasses) npt.assert_(neglogl[100, 100, 1, 0] != neglogl[100, 100, 1, 1]) npt.assert_(neglogl[100, 100, 1, 1] != neglogl[100, 100, 1, 2]) npt.assert_(neglogl[100, 100, 1, 2] != neglogl[100, 100, 1, 3]) npt.assert_(neglogl[100, 100, 1, 1] != neglogl[100, 100, 1, 3]) initial_segmentation = icm.initialize_maximum_likelihood(neglogl) npt.assert_(initial_segmentation.max() == nclasses - 1) npt.assert_(initial_segmentation.min() == 0) PLN = icm.prob_neighborhood(initial_segmentation, beta, nclasses) print(PLN.shape) npt.assert_(np.all((PLN >= 0) & (PLN <= 1.0))) if beta == 0.0: npt.assert_almost_equal(PLN[50, 50, 1, 0], 0.25, True) npt.assert_almost_equal(PLN[50, 50, 1, 1], 0.25, True) npt.assert_almost_equal(PLN[50, 50, 1, 2], 0.25, True) npt.assert_almost_equal(PLN[50, 50, 1, 3], 0.25, True) npt.assert_almost_equal(PLN[147, 129, 1, 0], 0.25, True) npt.assert_almost_equal(PLN[147, 129, 1, 1], 0.25, True) npt.assert_almost_equal(PLN[147, 129, 1, 2], 0.25, True) npt.assert_almost_equal(PLN[147, 129, 1, 3], 0.25, True) npt.assert_almost_equal(PLN[61, 152, 1, 0], 0.25, True) npt.assert_almost_equal(PLN[61, 152, 1, 1], 0.25, True) npt.assert_almost_equal(PLN[61, 152, 1, 2], 0.25, True) npt.assert_almost_equal(PLN[61, 152, 1, 3], 0.25, True) npt.assert_almost_equal(PLN[100, 100, 1, 0], 0.25, True) npt.assert_almost_equal(PLN[100, 100, 1, 1], 0.25, True) npt.assert_almost_equal(PLN[100, 100, 1, 2], 0.25, True) npt.assert_almost_equal(PLN[100, 100, 1, 3], 0.25, True) PLY = com.prob_image(image, nclasses, mu, sigmasq, PLN) print(PLY) npt.assert_(np.all((PLY >= 0) & (PLY <= 1.0))) mu_upd, sigmasq_upd = com.update_param(image, PLY, mu, nclasses) print(mu) print(mu_upd) npt.assert_(mu_upd[0] != mu[0]) npt.assert_(mu_upd[1] != mu[1]) npt.assert_(mu_upd[2] != mu[2]) npt.assert_(mu_upd[3] != mu[3]) print(sigmasq) print(sigmasq_upd) npt.assert_(sigmasq_upd[0] != sigmasq[0]) npt.assert_(sigmasq_upd[1] != sigmasq[1]) npt.assert_(sigmasq_upd[2] != sigmasq[2]) npt.assert_(sigmasq_upd[3] != sigmasq[3]) icm_segmentation, energy = icm.icm_ising(neglogl, beta, initial_segmentation) npt.assert_(np.abs(np.sum(icm_segmentation)) != 0) npt.assert_(icm_segmentation.max() == nclasses - 1) npt.assert_(icm_segmentation.min() == 0) def test_grayscale_iter(): nclasses = 4 beta = np.float64(0.1) max_iter = 15 background_noise = True image = create_image() com = ConstantObservationModel() icm = IteratedConditionalModes() mu, sigmasq = com.initialize_param_uniform(image, nclasses) neglogl = com.negloglikelihood(image, mu, sigmasq, nclasses) initial_segmentation = icm.initialize_maximum_likelihood(neglogl) npt.assert_(initial_segmentation.max() == nclasses - 1) npt.assert_(initial_segmentation.min() == 0) mu, sigmasq = com.seg_stats(image, initial_segmentation, nclasses) npt.assert_(mu[0] >= 0.0) npt.assert_(mu[1] >= 0.0) npt.assert_(mu[2] >= 0.0) npt.assert_(mu[3] >= 0.0) npt.assert_(sigmasq[0] >= 0.0) npt.assert_(sigmasq[1] >= 0.0) npt.assert_(sigmasq[2] >= 0.0) npt.assert_(sigmasq[3] >= 0.0) if background_noise: zero = np.zeros_like(image) + 0.001 zero_noise = add_noise(zero, 10000, 1, noise_type="gaussian") image_gauss = np.where(image == 0, zero_noise, image) else: image_gauss = image final_segmentation = np.empty_like(image) seg_init = initial_segmentation energies = [] for _ in range(max_iter): PLN = icm.prob_neighborhood(initial_segmentation, beta, nclasses) npt.assert_(np.all((PLN >= 0) & (PLN <= 1.0))) if beta == 0.0: npt.assert_almost_equal(PLN[50, 50, 
1, 0], 0.25, True) npt.assert_almost_equal(PLN[50, 50, 1, 1], 0.25, True) npt.assert_almost_equal(PLN[50, 50, 1, 2], 0.25, True) npt.assert_almost_equal(PLN[50, 50, 1, 3], 0.25, True) npt.assert_almost_equal(PLN[147, 129, 1, 0], 0.25, True) npt.assert_almost_equal(PLN[147, 129, 1, 1], 0.25, True) npt.assert_almost_equal(PLN[147, 129, 1, 2], 0.25, True) npt.assert_almost_equal(PLN[147, 129, 1, 3], 0.25, True) npt.assert_almost_equal(PLN[61, 152, 1, 0], 0.25, True) npt.assert_almost_equal(PLN[61, 152, 1, 1], 0.25, True) npt.assert_almost_equal(PLN[61, 152, 1, 2], 0.25, True) npt.assert_almost_equal(PLN[61, 152, 1, 3], 0.25, True) npt.assert_almost_equal(PLN[100, 100, 1, 0], 0.25, True) npt.assert_almost_equal(PLN[100, 100, 1, 1], 0.25, True) npt.assert_almost_equal(PLN[100, 100, 1, 2], 0.25, True) npt.assert_almost_equal(PLN[100, 100, 1, 3], 0.25, True) PLY = com.prob_image(image_gauss, nclasses, mu, sigmasq, PLN) npt.assert_(np.all((PLY >= 0) & (PLY <= 1.0))) npt.assert_(PLY[50, 50, 1, 0] > PLY[50, 50, 1, 1]) npt.assert_(PLY[50, 50, 1, 0] > PLY[50, 50, 1, 2]) npt.assert_(PLY[50, 50, 1, 0] > PLY[50, 50, 1, 3]) npt.assert_(PLY[100, 100, 1, 3] > PLY[100, 100, 1, 0]) npt.assert_(PLY[100, 100, 1, 3] > PLY[100, 100, 1, 1]) npt.assert_(PLY[100, 100, 1, 3] > PLY[100, 100, 1, 2]) mu_upd, sigmasq_upd = com.update_param(image_gauss, PLY, mu, nclasses) npt.assert_(mu_upd[0] >= 0.0) npt.assert_(mu_upd[1] >= 0.0) npt.assert_(mu_upd[2] >= 0.0) npt.assert_(mu_upd[3] >= 0.0) npt.assert_(sigmasq_upd[0] >= 0.0) npt.assert_(sigmasq_upd[1] >= 0.0) npt.assert_(sigmasq_upd[2] >= 0.0) npt.assert_(sigmasq_upd[3] >= 0.0) negll = com.negloglikelihood(image_gauss, mu_upd, sigmasq_upd, nclasses) npt.assert_(negll[50, 50, 1, 0] < negll[50, 50, 1, 1]) npt.assert_(negll[50, 50, 1, 0] < negll[50, 50, 1, 2]) npt.assert_(negll[50, 50, 1, 0] < negll[50, 50, 1, 3]) npt.assert_(negll[100, 100, 1, 3] < negll[100, 100, 1, 0]) npt.assert_(negll[100, 100, 1, 3] < negll[100, 100, 1, 1]) npt.assert_(negll[100, 100, 1, 3] < negll[100, 100, 1, 2]) final_segmentation, energy = icm.icm_ising(negll, beta, initial_segmentation) print(energy[energy > -np.inf].sum()) energies.append(energy[energy > -np.inf].sum()) initial_segmentation = final_segmentation mu = mu_upd sigmasq = sigmasq_upd npt.assert_(energies[-1] < energies[0]) difference_map = np.abs(seg_init - final_segmentation) npt.assert_(np.abs(np.sum(difference_map)) != 0) @set_random_number_generator() def test_square_iter(rng): nclasses = 4 beta = np.float64(0.0) max_iter = 10 com = ConstantObservationModel() icm = IteratedConditionalModes() initial_segmentation = create_square() square_gauss = create_square_gauss(rng) mu, sigmasq = com.seg_stats(square_gauss, initial_segmentation, nclasses) npt.assert_(mu[0] >= 0.0) npt.assert_(mu[1] >= 0.0) npt.assert_(mu[2] >= 0.0) npt.assert_(mu[3] >= 0.0) npt.assert_(sigmasq[0] >= 0.0) npt.assert_(sigmasq[1] >= 0.0) npt.assert_(sigmasq[2] >= 0.0) npt.assert_(sigmasq[3] >= 0.0) final_segmentation = np.empty_like(square_gauss) seg_init = initial_segmentation energies = [] for i in range(max_iter): print("\n") print(f">> Iteration: {i}") print("\n") PLN = icm.prob_neighborhood(initial_segmentation, beta, nclasses) npt.assert_(np.all((PLN >= 0) & (PLN <= 1.0))) if beta == 0.0: npt.assert_(PLN[25, 25, 1, 0] == 0.25) npt.assert_(PLN[25, 25, 1, 1] == 0.25) npt.assert_(PLN[25, 25, 1, 2] == 0.25) npt.assert_(PLN[25, 25, 1, 3] == 0.25) npt.assert_(PLN[50, 50, 1, 0] == 0.25) npt.assert_(PLN[50, 50, 1, 1] == 0.25) npt.assert_(PLN[50, 50, 1, 2] == 0.25) 
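# Sanity note (illustrative): with beta == 0 the Ising prior carries no
# spatial coupling, so the neighbourhood probabilities are uniform over the
# classes, hence the expected value 1 / nclasses == 0.25 checked here:
assert 1.0 / nclasses == 0.25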
npt.assert_(PLN[50, 50, 1, 3] == 0.25) npt.assert_(PLN[90, 90, 1, 0] == 0.25) npt.assert_(PLN[90, 90, 1, 1] == 0.25) npt.assert_(PLN[90, 90, 1, 2] == 0.25) npt.assert_(PLN[90, 90, 1, 3] == 0.25) npt.assert_(PLN[125, 125, 1, 0] == 0.25) npt.assert_(PLN[125, 125, 1, 1] == 0.25) npt.assert_(PLN[125, 125, 1, 2] == 0.25) npt.assert_(PLN[125, 125, 1, 3] == 0.25) PLY = com.prob_image(square_gauss, nclasses, mu, sigmasq, PLN) npt.assert_(np.all((PLY >= 0) & (PLY <= 1.0))) npt.assert_(PLY[25, 25, 1, 0] > PLY[25, 25, 1, 1]) npt.assert_(PLY[25, 25, 1, 0] > PLY[25, 25, 1, 2]) npt.assert_(PLY[25, 25, 1, 0] > PLY[25, 25, 1, 3]) npt.assert_(PLY[125, 125, 1, 3] > PLY[125, 125, 1, 0]) npt.assert_(PLY[125, 125, 1, 3] > PLY[125, 125, 1, 1]) npt.assert_(PLY[125, 125, 1, 3] > PLY[125, 125, 1, 2]) mu_upd, sigmasq_upd = com.update_param(square_gauss, PLY, mu, nclasses) npt.assert_(mu_upd[0] >= 0.0) npt.assert_(mu_upd[1] >= 0.0) npt.assert_(mu_upd[2] >= 0.0) npt.assert_(mu_upd[3] >= 0.0) npt.assert_(sigmasq_upd[0] >= 0.0) npt.assert_(sigmasq_upd[1] >= 0.0) npt.assert_(sigmasq_upd[2] >= 0.0) npt.assert_(sigmasq_upd[3] >= 0.0) negll = com.negloglikelihood(square_gauss, mu_upd, sigmasq_upd, nclasses) npt.assert_(negll[25, 25, 1, 0] < negll[25, 25, 1, 1]) npt.assert_(negll[25, 25, 1, 0] < negll[25, 25, 1, 2]) npt.assert_(negll[25, 25, 1, 0] < negll[25, 25, 1, 3]) npt.assert_(negll[100, 100, 1, 3] < negll[125, 125, 1, 0]) npt.assert_(negll[100, 100, 1, 3] < negll[125, 125, 1, 1]) npt.assert_(negll[100, 100, 1, 3] < negll[125, 125, 1, 2]) final_segmentation, energy = icm.icm_ising(negll, beta, initial_segmentation) energies.append(energy[energy > -np.inf].sum()) initial_segmentation = final_segmentation mu = mu_upd sigmasq = sigmasq_upd difference_map = np.abs(seg_init - final_segmentation) npt.assert_(np.abs(np.sum(difference_map)) == 0.0) @set_random_number_generator() def test_icm_square(rng): nclasses = 4 max_iter = 10 com = ConstantObservationModel() icm = IteratedConditionalModes() initial_segmentation = create_square() square_1 = create_square_uniform(rng) mu, sigma = com.seg_stats(square_1, initial_segmentation, nclasses) sigmasq = sigma**2 npt.assert_(mu[0] >= 0.0) npt.assert_(mu[1] >= 0.0) npt.assert_(mu[2] >= 0.0) npt.assert_(mu[3] >= 0.0) npt.assert_(sigmasq[0] >= 0.0) npt.assert_(sigmasq[1] >= 0.0) npt.assert_(sigmasq[2] >= 0.0) npt.assert_(sigmasq[3] >= 0.0) negll = com.negloglikelihood(square_1, mu, sigmasq, nclasses) final_segmentation_1 = np.empty_like(square_1) final_segmentation_2 = np.empty_like(square_1) beta = 0.0 for i in range(max_iter): print("\n") print(f">> Iteration: {i}") print("\n") final_segmentation_1, energy_1 = icm.icm_ising( negll, beta, initial_segmentation ) initial_segmentation = final_segmentation_1 beta = 2 initial_segmentation = create_square() for j in range(max_iter): print("\n") print(f">> Iteration: {j}") print("\n") final_segmentation_2, energy_2 = icm.icm_ising( negll, beta, initial_segmentation ) initial_segmentation = final_segmentation_2 difference_map = np.abs(final_segmentation_1 - final_segmentation_2) npt.assert_(not np.isclose(np.abs(np.sum(difference_map)), 0)) def test_classify(): imgseg = TissueClassifierHMRF() nclasses = 4 beta = 0.1 tolerance = 0.0001 max_iter = 10 image = create_image() npt.assert_(image.max() == 1.0) npt.assert_(image.min() == 0.0) # First we test without setting iterations and tolerance seg_init, seg_final, PVE = imgseg.classify(image, nclasses, beta) npt.assert_(seg_final.max() == nclasses) npt.assert_(seg_final.min() == 0.0) # Second we 
test it with just changing the tolerance seg_init, seg_final, PVE = imgseg.classify( image, nclasses, beta, tolerance=tolerance ) npt.assert_(seg_final.max() == nclasses) npt.assert_(seg_final.min() == 0.0) # Third we test it with just the iterations seg_init, seg_final, PVE = imgseg.classify(image, nclasses, beta, max_iter=max_iter) npt.assert_(seg_final.max() == nclasses) npt.assert_(seg_final.min() == 0.0) # Next we test saving the history of accumulated energies from ICM imgseg = TissueClassifierHMRF(save_history=True) seg_init, seg_final, PVE = imgseg.classify( 200 * image, nclasses, beta, tolerance=tolerance ) npt.assert_(seg_final.max() == nclasses) npt.assert_(seg_final.min() == 0.0) npt.assert_(imgseg.energies_sum[0] > imgseg.energies_sum[-1]) dipy-1.11.0/dipy/segment/tests/test_qbx.py000066400000000000000000000151121476546756600205450ustar00rootroot00000000000000import itertools import numpy as np from numpy.testing import assert_array_equal, assert_equal, assert_raises from dipy.segment.clustering import QuickBundles, QuickBundlesX, qbx_and_merge from dipy.segment.featurespeed import ResampleFeature from dipy.segment.metricspeed import ( AveragePointwiseEuclideanMetric, MinimumAverageDirectFlipMetric, ) from dipy.testing.decorators import set_random_number_generator from dipy.tracking.streamline import Streamlines, set_number_of_points def straight_bundle(nb_streamlines=1, nb_pts=30, step_size=1, radius=1, rng=None): if rng is None: rng = np.random.default_rng(42) bundle = [] bundle_length = step_size * nb_pts Z = -np.linspace(0, bundle_length, nb_pts) for _ in range(nb_streamlines): theta = rng.random() * (2 * np.pi) r = radius * rng.random() Xk = np.ones(nb_pts) * (r * np.cos(theta)) Yk = np.ones(nb_pts) * (r * np.sin(theta)) Zk = Z.copy() bundle.append(np.c_[Xk, Yk, Zk]) return bundle def bearing_bundles(nb_balls=6, bearing_radius=2): bundles = [] for theta in np.linspace(0, 2 * np.pi, nb_balls, endpoint=False): x = bearing_radius * np.cos(theta) y = bearing_radius * np.sin(theta) bundle = np.array(straight_bundle(nb_streamlines=100)) bundle += (x, y, 0) bundles.append(bundle) return bundles def streamlines_in_circle(nb_streamlines=1, nb_pts=30, step_size=1, radius=1): bundle = [] bundle_length = step_size * nb_pts Z = np.linspace(0, bundle_length, nb_pts) for theta in np.linspace(0, 2 * np.pi, nb_streamlines, endpoint=False): Xk = np.ones(nb_pts) * (radius * np.cos(theta)) Yk = np.ones(nb_pts) * (radius * np.sin(theta)) Zk = Z.copy() bundle.append(np.c_[Xk, Yk, Zk]) return bundle def streamlines_parallel(nb_streamlines=1, nb_pts=30, step_size=1, delta=1): bundle = [] bundle_length = step_size * nb_pts Z = np.linspace(0, bundle_length, nb_pts) for x in delta * np.arange(0, nb_streamlines): Xk = np.ones(nb_pts) * x Yk = np.zeros(nb_pts) Zk = Z.copy() bundle.append(np.c_[Xk, Yk, Zk]) return bundle def simulated_bundle(no_streamlines=10, waves=False, no_pts=12): t = np.linspace(-10, 10, 200) # parallel waves or parallel lines bundle = [] for i in np.linspace(-5, 5, no_streamlines): if waves: pts = np.vstack((np.cos(t), t, i * np.ones(t.shape))).T else: pts = np.vstack((np.zeros(t.shape), t, i * np.ones(t.shape))).T pts = set_number_of_points(pts, nb_points=no_pts) bundle.append(pts) return bundle def test_3D_points(): points = np.array( [[[1, 0, 0]], [[3, 0, 0]], [[2, 0, 0]], [[5, 0, 0]], [[5.5, 0, 0]]], dtype="f4" ) thresholds = [4, 2, 1] metric = AveragePointwiseEuclideanMetric() qbx = QuickBundlesX(thresholds, metric=metric) tree = qbx.cluster(points) clusters_2 = 
tree.get_clusters(2) assert_array_equal(clusters_2.clusters_sizes(), [3, 2]) clusters_0 = tree.get_clusters(0) assert_array_equal(clusters_0.clusters_sizes(), [5]) def test_3D_segments(): points = np.array( [ [[1, 0, 0], [1, 1, 0]], [[3, 1, 0], [3, 0, 0]], [[2, 0, 0], [2, 1, 0]], [[5, 1, 0], [5, 0, 0]], [[5.5, 0, 0], [5.5, 1, 0]], ], dtype="f4", ) thresholds = [4, 2, 1] feature = ResampleFeature(nb_points=20) metric = AveragePointwiseEuclideanMetric(feature) qbx = QuickBundlesX(thresholds, metric=metric) tree = qbx.cluster(points) clusters_0 = tree.get_clusters(0) clusters_1 = tree.get_clusters(1) clusters_2 = tree.get_clusters(2) assert_equal(len(clusters_0.centroids), len(clusters_1.centroids)) assert_equal(len(clusters_2.centroids) > len(clusters_1.centroids), True) assert_array_equal(clusters_2[1].indices, np.array([3, 4], dtype=np.int32)) def test_with_simulated_bundles(): streamlines = simulated_bundle(3, False, 2) thresholds = [10, 3, 1] qbx_class = QuickBundlesX(thresholds) tree = qbx_class.cluster(streamlines) for level in range(len(thresholds) + 1): clusters = tree.get_clusters(level) assert_equal(tree.leaves[0].indices[0], 0) assert_equal(tree.leaves[2][0], 2) clusters.refdata = streamlines assert_array_equal( clusters[0][0], np.array([[0.0, -10.0, -5.0], [0.0, 10.0, -5.0]]) ) def test_with_simulated_bundles2(): # Generate synthetic streamlines bundles = bearing_bundles(4, 2) bundles.append(straight_bundle(1)) streamlines = list(itertools.chain(*bundles)) thresholds = [10, 2, 1] qbx_class = QuickBundlesX(thresholds) tree = qbx_class.cluster(streamlines) # By default `refdata` refers to data being clustered. assert_equal(tree.refdata, streamlines) def test_circle_parallel_fornix(): circle = streamlines_in_circle(100, step_size=2) parallel = streamlines_parallel(100) thresholds = [1, 0.1] qbx_class = QuickBundlesX(thresholds) tree = qbx_class.cluster(circle) clusters = tree.get_clusters(0) assert_equal(len(clusters), 1) clusters = tree.get_clusters(1) assert_equal(len(clusters), 3) clusters = tree.get_clusters(2) assert_equal(len(clusters), 34) thresholds = [0.5] qbx_class = QuickBundlesX(thresholds) tree = qbx_class.cluster(parallel) clusters = tree.get_clusters(0) assert_equal(len(clusters), 1) clusters = tree.get_clusters(1) assert_equal(len(clusters), 100) def test_raise_mdf(): thresholds = [1, 0.1] metric = MinimumAverageDirectFlipMetric() assert_raises(ValueError, QuickBundlesX, thresholds, metric=metric) assert_raises(ValueError, QuickBundles, thresholds[1], metric=metric) @set_random_number_generator(42) def test_qbx_and_merge(rng): # Generate synthetic streamlines bundles = bearing_bundles(4, 2) bundles.append(straight_bundle(1, rng=rng)) streamlines = Streamlines(list(itertools.chain(*bundles))) thresholds = [10, 2, 1] qbxm = qbx_and_merge(streamlines, thresholds, rng=rng) qbxm_centroids = qbxm.centroids qbxm_clusters = qbxm.clusters qbx = QuickBundlesX(thresholds) tree = qbx.cluster(streamlines) qbx_centroids = tree.get_clusters(3).centroids assert_equal(len(qbx_centroids) > len(qbxm_centroids), True) # check that refdata clusters return streamlines in qbx_and_merge streamline_idx = qbxm_clusters[0].indices[0] assert_array_equal(qbxm_clusters[0][0], streamlines[streamline_idx]) dipy-1.11.0/dipy/segment/tests/test_quickbundles.py000066400000000000000000000176201476546756600224520ustar00rootroot00000000000000import itertools import numpy as np from numpy.testing import assert_array_equal, assert_equal, assert_raises from dipy.segment.clustering import QuickBundles from 
dipy.segment.clustering_algorithms import quickbundles import dipy.segment.featurespeed as dipysfeature import dipy.segment.metricspeed as dipysmetric from dipy.testing import assert_arrays_equal from dipy.testing.decorators import set_random_number_generator from dipy.testing.memory import get_type_refcount import dipy.tracking.streamline as streamline_utils dtype = "float32" threshold = 7 data = [ np.arange(3 * 5, dtype=dtype).reshape((-1, 3)) + 2 * threshold, np.arange(3 * 10, dtype=dtype).reshape((-1, 3)) + 0 * threshold, np.arange(3 * 15, dtype=dtype).reshape((-1, 3)) + 8 * threshold, np.arange(3 * 17, dtype=dtype).reshape((-1, 3)) + 2 * threshold, np.arange(3 * 20, dtype=dtype).reshape((-1, 3)) + 8 * threshold, ] clusters_truth = [[0, 1], [2, 4], [3]] def test_quickbundles_empty_data(): threshold = 10 metric = dipysmetric.SumPointwiseEuclideanMetric() clusters = quickbundles([], metric, threshold) assert_equal(len(clusters), 0) assert_equal(len(clusters.centroids), 0) clusters = quickbundles([], metric, threshold, ordering=[]) assert_equal(len(clusters), 0) assert_equal(len(clusters.centroids), 0) def test_quickbundles_wrong_metric(): assert_raises(ValueError, QuickBundles, threshold=10.0, metric="WrongMetric") def test_quickbundles_shape_incompatibility(): # QuickBundles' old default metric (AveragePointwiseEuclideanMetric, # aka MDF) requires that all streamlines have the same number of points. metric = dipysmetric.AveragePointwiseEuclideanMetric() qb = QuickBundles(threshold=20.0, metric=metric) assert_raises(ValueError, qb.cluster, data) # QuickBundles' new default metric (AveragePointwiseEuclideanMetric, # aka MDF combined with ResampleFeature) will automatically resample # streamlines so they all have 18 points. qb = QuickBundles(threshold=20.0) clusters1 = qb.cluster(data) feature = dipysfeature.ResampleFeature(nb_points=18) metric = dipysmetric.AveragePointwiseEuclideanMetric(feature) qb = QuickBundles(threshold=20.0, metric=metric) clusters2 = qb.cluster(data) assert_arrays_equal( list(itertools.chain(*clusters1)), list(itertools.chain(*clusters2)) ) @set_random_number_generator(7) def test_quickbundles_2D(rng): # Test quickbundles clustering using 2D points and the Eulidean metric. data = [] data += [rng.standard_normal((1, 2)) + np.array([0, 0]) for _ in range(1)] data += [rng.standard_normal((1, 2)) + np.array([10, 10]) for _ in range(2)] data += [rng.standard_normal((1, 2)) + np.array([-10, 10]) for _ in range(3)] data += [rng.standard_normal((1, 2)) + np.array([10, -10]) for _ in range(4)] data += [rng.standard_normal((1, 2)) + np.array([-10, -10]) for _ in range(5)] data = np.array(data, dtype=dtype) clusters_truth = [[0], [1, 2], [3, 4, 5], [6, 7, 8, 9], [10, 11, 12, 13, 14]] # # Uncomment the following to visualize this test # import pylab as plt # plt.plot(*zip(*data[0:1, 0]), linestyle='None', marker='s') # plt.plot(*zip(*data[1:3, 0]), linestyle='None', marker='o') # plt.plot(*zip(*data[3:6, 0]), linestyle='None', marker='+') # plt.plot(*zip(*data[6:10, 0]), linestyle='None', marker='.') # plt.plot(*zip(*data[10:, 0]), linestyle='None', marker='*') # plt.show() # Theoretically, using a threshold above the following value will not # produce expected results. 
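    # A reading of that bound (annotation added here, not from the original
    # authors): the closest pair of cluster centres is (0, 0) and one of the
    # (+/-10, +/-10) corners, i.e. sqrt(2 * 10**2) = 10 * sqrt(2) apart, and
    # every point carries unit-variance Gaussian jitter in 2D, worth roughly
    # sqrt(2) in Euclidean distance. A threshold at or below
    # 10 * sqrt(2) - sqrt(2) thus keeps each jittered point closer to its
    # own centre than to a neighbouring one.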
threshold = np.sqrt(2 * (10**2)) - np.sqrt(2) metric = dipysmetric.SumPointwiseEuclideanMetric() ordering = np.arange(len(data)) for _ in range(100): rng.shuffle(ordering) clusters = quickbundles(data, metric, threshold, ordering=ordering) # Check if clusters are the same as 'clusters_truth' for cluster in clusters: # Find the corresponding cluster in 'clusters_truth' for cluster_truth in clusters_truth: if cluster_truth[0] in cluster.indices: assert_equal(sorted(cluster.indices), sorted(cluster_truth)) # Cluster each cluster again using a small threshold for cluster in clusters: subclusters = quickbundles(data, metric, threshold=0, ordering=cluster.indices) assert_equal(len(subclusters), len(cluster)) assert_equal(sorted(itertools.chain(*subclusters)), sorted(cluster.indices)) # A very large threshold should produce only 1 cluster clusters = quickbundles(data, metric, threshold=np.inf) assert_equal(len(clusters), 1) assert_equal(len(clusters[0]), len(data)) assert_array_equal(clusters[0].indices, range(len(data))) # A very small threshold should produce only N clusters where N=len(data) clusters = quickbundles(data, metric, threshold=0) assert_equal(len(clusters), len(data)) assert_array_equal(list(map(len, clusters)), np.ones(len(data))) assert_array_equal( [idx for cluster in clusters for idx in cluster.indices], range(len(data)) ) def test_quickbundles_streamlines(): rdata = streamline_utils.set_number_of_points(data, nb_points=10) qb = QuickBundles(threshold=2 * threshold) clusters = qb.cluster(rdata) # By default `refdata` refers to data being clustered. assert_equal(clusters.refdata, rdata) # Set `refdata` to return indices instead of actual data points. clusters.refdata = None assert_array_equal( list(itertools.chain(*clusters)), list(itertools.chain(*clusters_truth)) ) # Cluster read-only data for datum in rdata: datum.setflags(write=False) # Cluster data with different dtype (should be converted into float32) for datatype in [np.float64, np.int32, np.int64]: newdata = [datum.astype(datatype) for datum in rdata] clusters = qb.cluster(newdata) assert_equal(clusters.centroids[0].dtype, np.float32) def test_quickbundles_with_python_metric(): class MDFpy(dipysmetric.Metric): def are_compatible(self, shape1, shape2): return shape1 == shape2 def dist(self, features1, features2): dist = np.sqrt(np.sum((features1 - features2) ** 2, axis=1)) dist = np.sum(dist / len(features1)) return dist rdata = streamline_utils.set_number_of_points(data, nb_points=10) qb = QuickBundles(threshold=2 * threshold, metric=MDFpy()) clusters = qb.cluster(rdata) # By default `refdata` refers to data being clustered. assert_equal(clusters.refdata, rdata) # Set `refdata` to return indices instead of actual data points. 
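    # (With `refdata` unset, iterating a cluster yields its stored integer
    # indices rather than `rdata[index]`; the flattened comparison against
    # `clusters_truth` below relies on exactly that behaviour.)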
clusters.refdata = None assert_array_equal( list(itertools.chain(*clusters)), list(itertools.chain(*clusters_truth)) ) # Cluster read-only data for datum in rdata: datum.setflags(write=False) # Cluster data with different dtype (should be converted into float32) for datatype in [np.float64, np.int32, np.int64]: newdata = [datum.astype(datatype) for datum in rdata] clusters = qb.cluster(newdata) assert_equal(clusters.centroids[0].dtype, np.float32) def test_quickbundles_with_not_order_invariant_metric(): metric = dipysmetric.AveragePointwiseEuclideanMetric() qb = QuickBundles(threshold=np.inf, metric=metric) streamline = np.arange(10 * 3, dtype=dtype).reshape((-1, 3)) streamlines = [streamline, streamline[::-1]] clusters = qb.cluster(streamlines) assert_equal(len(clusters), 1) assert_array_equal(clusters[0].centroid, streamline) def test_quickbundles_memory_leaks(): qb = QuickBundles(threshold=2 * threshold) type_name_pattern = "memoryview" initial_types_refcount = get_type_refcount(type_name_pattern) qb.cluster(data) # At this point, all memoryviews created during clustering should be freed. assert_equal(get_type_refcount(type_name_pattern), initial_types_refcount) dipy-1.11.0/dipy/segment/tests/test_tissue.py000066400000000000000000000041501476546756600212670ustar00rootroot00000000000000import numpy as np from numpy.testing import assert_, assert_equal, assert_raises import pytest from dipy.segment.tissue import compute_directional_average, dam_classifier from dipy.utils.optpkg import optional_package sklearn, has_sklearn, _ = optional_package("sklearn") needs_sklearn = pytest.mark.skipif(not has_sklearn, reason="Requires sklearn") @needs_sklearn def test_compute_directional_average_valid(): data = np.array([100, 80, 60, 50, 40, 20, 10]) bvals = np.array([0, 100, 500, 1000, 1500, 2000, 3000]) P, V = compute_directional_average(data, bvals) assert_(isinstance(P, float), "P should be a float") assert_(isinstance(V, float), "V should be a float") assert_(P != 0, "P should not be zero") assert_(V != 0, "V should not be zero") @needs_sklearn def test_compute_directional_average_low_signal(): data = np.array([20, 10, 5, 3, 2, 1, 1]) # Very low signal bvals = np.array([0, 100, 500, 1000, 1500, 2000, 3000]) P, V = compute_directional_average(data, bvals, low_signal_threshold=50) assert_equal(P, 0, "P should be 0 when low signal") assert_equal(V, 0, "V should be 0 when low signal") @needs_sklearn def test_compute_directional_average_div_by_zero(): data = np.array([100, 100, 100, 100, 100, 100, 100]) bvals = np.array([0, 100, 500, 1000, 1500, 2000, 3000]) P, V = compute_directional_average(data, bvals) assert_(np.allclose(P, 0)) @needs_sklearn def test_dam_classifier_valid(): data = np.random.rand(3, 3, 3, 7) * 100 # Simulated random data bvals = np.array([0, 100, 500, 1000, 1500, 2000, 3000]) assert_equal( data.shape[-1], bvals.shape[0], "The number of bvals must match the last dimension of data", ) wm_mask, gm_mask = dam_classifier(data, bvals, wm_threshold=0.5) assert_equal(wm_mask.shape, (3, 3, 3), "Shape of wm_mask should be (3, 3, 3)") assert_equal(gm_mask.shape, (3, 3, 3), "Shape of gm_mask should be (3, 3, 3)") data = np.array([100, 80, 60, 50]) bvals = np.array([0, 0, 100, 100]) assert_raises(ValueError, dam_classifier, data, bvals, wm_threshold=0.5) dipy-1.11.0/dipy/segment/tests/test_utils.py000066400000000000000000000023411476546756600211130ustar00rootroot00000000000000import numpy as np from dipy.segment.utils import remove_holes_and_islands from dipy.testing.decorators import 
set_random_number_generator @set_random_number_generator() def test_remove_holes_and_islands(rng=None): # inefficient but more readable temp = rng.choice([0, 1], size=(40, 40, 40), p=[0.8, 0.2]) temp[6:34, 6:34, 6:34] = 0 temp[7:33, 7:33, 7:33] = 1 temp[8:32, 8:32, 8:32] = rng.choice([0, 1], size=(24, 24, 24), p=[0.2, 0.8]) output = remove_holes_and_islands(temp) ground_truth = np.zeros((40, 40, 40)) ground_truth[7:33, 7:33, 7:33] = 1 np.testing.assert_equal(output, ground_truth) def test_remove_holes_and_islands_warnings(): # Not binary test non_binary_img = np.concatenate( [np.zeros((30, 30, 10)), np.ones((30, 30, 10)), np.ones((30, 30, 10)) * 2], axis=-1, ) np.testing.assert_warns(UserWarning, remove_holes_and_islands, non_binary_img) # No background test no_background_img = np.ones((40, 40, 40)) np.testing.assert_warns(UserWarning, remove_holes_and_islands, no_background_img) # No foreground test no_foreground_img = np.zeros((40, 40, 40)) np.testing.assert_warns(UserWarning, remove_holes_and_islands, no_foreground_img) dipy-1.11.0/dipy/segment/threshold.py000066400000000000000000000061361476546756600175540ustar00rootroot00000000000000import numpy as np from dipy.testing.decorators import warning_for_keywords @warning_for_keywords() def otsu(image, *, nbins=256): """ Return threshold value based on Otsu's method. Copied from scikit-image to remove dependency. Parameters ---------- image : array Input image. nbins : int Number of bins used to calculate histogram. This value is ignored for integer arrays. Returns ------- threshold : float Threshold value. """ hist, bin_centers = np.histogram(image, nbins) hist = hist.astype(float) # class probabilities for all possible thresholds weight1 = np.cumsum(hist) weight2 = np.cumsum(hist[::-1])[::-1] # class means for all possible thresholds mean1 = np.cumsum(hist * bin_centers[1:]) / weight1 mean2 = (np.cumsum((hist * bin_centers[1:])[::-1]) / weight2[::-1])[::-1] # Clip ends to align class 1 and class 2 variables: # The last value of `weight1`/`mean1` should pair with zero values in # `weight2`/`mean2`, which do not exist. variance12 = weight1[:-1] * weight2[1:] * (mean1[:-1] - mean2[1:]) ** 2 idx = np.argmax(variance12) threshold = bin_centers[:-1][idx] return threshold @warning_for_keywords() def upper_bound_by_rate(data, *, rate=0.05): r"""Adjusts upper intensity boundary using rates It calculates the image intensity histogram, and based on the rate value it decide what is the upperbound value for intensity normalization, usually lower bound is 0. The rate is the ratio between the amount of pixels in every bins and the bins with highest pixel amount Parameters ---------- data : float Input intensity value data rate : float representing the threshold whether a specific histogram bin that should be count in the normalization range Returns ------- high : float the upper_bound value for normalization """ g, h = np.histogram(data) m = np.zeros((10, 3)) high = data.max() for i in np.array(range(10)): m[i, 0] = g[i] m[i, 1] = h[i] m[i, 2] = h[i + 1] g = sorted(g, reverse=True) sz = np.size(g) Index = 0 for i in np.array(range(sz)): if g[i] / g[0] > rate: Index = Index + 1 for i in np.array(range(10)): for j in np.array(range(Index)): if g[j] == m[i, 0]: high = m[i, 2] return high @warning_for_keywords() def upper_bound_by_percent(data, *, percent=1): """Find the upper bound for visualization of medical images Calculate the histogram of the image and go right to left until you find the bound that contains more than a percentage of the image. 
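    In effect this approximates the ``100 - percent`` intensity percentile:
    with the default ``percent=1``, roughly one percent of the voxels fall
    above the returned bound, making it a serviceable display maximum.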
Parameters ---------- data : ndarray percent : float Returns ------- upper_bound : float """ percent = percent / 100.0 values, bounds = np.histogram(data, 20) total_voxels = np.prod(data.shape) agg = 0 for i in range(len(values) - 1, 0, -1): agg += values[i] if agg / float(total_voxels) > percent: return bounds[i] dipy-1.11.0/dipy/segment/tissue.py000066400000000000000000000216101476546756600170660ustar00rootroot00000000000000import numpy as np from dipy.segment.mrf import ConstantObservationModel, IteratedConditionalModes from dipy.sims.voxel import add_noise from dipy.testing.decorators import warning_for_keywords from dipy.utils.optpkg import optional_package sklearn, has_sklearn, _ = optional_package("sklearn") linear_model, _, _ = optional_package("sklearn.linear_model") class TissueClassifierHMRF: """ This class contains the methods for tissue classification using the Markov Random Fields modeling approach. """ @warning_for_keywords() def __init__(self, *, save_history=False, verbose=True): self.save_history = save_history self.segmentations = [] self.pves = [] self.energies = [] self.energies_sum = [] self.verbose = verbose @warning_for_keywords() def classify(self, image, nclasses, beta, *, tolerance=1e-05, max_iter=100): """ This method uses the Maximum a posteriori - Markov Random Field approach for segmentation by using the Iterative Conditional Modes and Expectation Maximization to estimate the parameters. Parameters ---------- image : ndarray, 3D structural image. nclasses : int, Number of desired classes. beta : float, Smoothing parameter, the higher this number the smoother the output will be. tolerance: float, optional Value that defines the percentage of change tolerated to prevent the ICM loop to stop. Default is 1e-05. If you want tolerance check to be disabled put 'tolerance = 0'. max_iter : int, optional Fixed number of desired iterations. Default is 100. This parameter defines the maximum number of iterations the algorithm will perform. The loop may terminate early if the change in energy sum between iterations falls below the threshold defined by `tolerance`. However, if `tolerance` is explicitly set to 0, this early stopping mechanism is disabled, and the algorithm will run for the specified number of iterations unless another stopping criterion is met. Returns ------- initial_segmentation : ndarray, 3D segmented image with all tissue types specified in nclasses. final_segmentation : ndarray, 3D final refined segmentation containing all tissue types. PVE : ndarray, 3D probability map of each tissue type. 
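        Notes
        -----
        As implemented below, each iteration alternates an ICM update of
        the label field with an EM update of the class means and variances,
        accumulating the sum of ICM energies along the way. Once more than
        five iterations have run (and ``tolerance > 0``), the loop stops
        early when the spread of the last five energy sums falls below
        ``tolerance`` times the overall energy range observed so far.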
""" nclasses += 1 # One extra class for the background energy_sum = [1e-05] com = ConstantObservationModel() icm = IteratedConditionalModes() if image.max() > 1: image = np.interp(image, [0, image.max()], [0.0, 1.0]) mu, sigmasq = com.initialize_param_uniform(image, nclasses) p = np.argsort(mu) mu = mu[p] sigmasq = sigmasq[p] neglogl = com.negloglikelihood(image, mu, sigmasq, nclasses) seg_init = icm.initialize_maximum_likelihood(neglogl) mu, sigmasq = com.seg_stats(image, seg_init, nclasses) zero = np.zeros_like(image) + 0.001 zero_noise = add_noise(zero, 10000, 1, noise_type="gaussian") image_gauss = np.where(image == 0, zero_noise, image) final_segmentation = np.empty_like(image) initial_segmentation = seg_init for i in range(max_iter): if self.verbose: print(f">> Iteration: {i}") PLN = icm.prob_neighborhood(seg_init, beta, nclasses) PVE = com.prob_image(image_gauss, nclasses, mu, sigmasq, PLN) mu_upd, sigmasq_upd = com.update_param(image_gauss, PVE, mu, nclasses) ind = np.argsort(mu_upd) mu_upd = mu_upd[ind] sigmasq_upd = sigmasq_upd[ind] negll = com.negloglikelihood(image_gauss, mu_upd, sigmasq_upd, nclasses) final_segmentation, energy = icm.icm_ising(negll, beta, seg_init) energy_sum.append(energy[energy > -np.inf].sum()) if self.save_history: self.segmentations.append(final_segmentation) self.pves.append(PVE) self.energies.append(energy) self.energies_sum.append(energy_sum[-1]) if tolerance > 0 and i > 5: e_sum = np.asarray(energy_sum) tol = tolerance * (np.amax(e_sum) - np.amin(e_sum)) e_end = e_sum[-5:] test_dist = np.abs(np.amax(e_end) - np.amin(e_end)) if test_dist < tol: break seg_init = final_segmentation mu = mu_upd sigmasq = sigmasq_upd PVE = PVE[..., 1:] return initial_segmentation, final_segmentation, PVE def compute_directional_average( data, bvals, *, s0_map=None, masks=None, b0_mask=None, b0_threshold=50, low_signal_threshold=50, ): """ Compute the mean signal for each unique b-value shell and fit a linear model. Parameters ---------- data : ndarray The diffusion MRI data. bvals : ndarray The b-values corresponding to the diffusion data. s0_map : ndarray, optional Precomputed mean signal map for b=0 images. masks : ndarray, optional Precomputed masks for each unique b-value shell. b0_mask : ndarray, optional Precomputed mask for b=0 images. b0_threshold : float, optional The intensity threshold for a b=0 image. low_signal_threshold : float, optional The threshold below which a voxel is considered to have low signal. Returns ------- P : float The slope of the linear model. V : float The intercept of the linear model. 
""" if b0_mask is None: b0_mask = bvals < b0_threshold if masks is None: unique_bvals = np.unique(bvals) masks = bvals[:, np.newaxis] == unique_bvals[np.newaxis, 1:] if s0_map is None: s0_map = data[..., b0_mask].mean(axis=-1) if s0_map < low_signal_threshold: return 0, 0 # Calculate the mean for each mask means = np.sum(data[:, np.newaxis] * masks, axis=0) / np.sum(masks, axis=0) # Normalize by s0, avoiding division by zero by adding 0.01 for stable division s_bvals = means / (s0_map[..., np.newaxis] + 0.01) # Avoid log(0) by adding 0.001 for stable linear regression fit s_bvals[s_bvals == 0] = 0.001 s_log = y = np.log(s_bvals) xb = -np.log(np.arange(1, s_log.shape[-1] + 1)) # Reshape xb for linear regression X = xb.reshape(-1, 1) # Fit linear model model = linear_model.LinearRegression() model.fit(X, y) P = model.coef_[0] V = model.intercept_ return P, V def dam_classifier( data, bvals, wm_threshold, *, b0_threshold=50, low_signal_threshold=50 ): """Computes the P-map (fitting slope) on data to extract white and grey matter. See :footcite:p:`Cheng2020` for further details about the method. Parameters ---------- data : ndarray The diffusion MRI data. bvals : ndarray The b-values corresponding to the diffusion data. wm_threshold : float The threshold below which a voxel is considered white matter. b0_threshold : float, optional The intensity threshold for a b=0 image. low_signal_threshold : float, optional The threshold below which a voxel is considered to have low signal. Returns ------- wm_mask : ndarray A binary mask for white matter. gm_mask : ndarray A binary mask for grey matter. References ---------- .. footbibliography:: """ # Precompute unique b-values, masks, and b=0 mask unique_bvals = np.unique(bvals) if len(unique_bvals) <= 2: raise ValueError("Insufficient unique b-values for fitting.") b0_mask = bvals < b0_threshold masks = bvals[:, np.newaxis] == unique_bvals[np.newaxis, 1:] # Precompute s0 (mean signal for b=0) s0_map = data[..., b0_mask].mean(axis=-1) # If the mean signal for b=0 is too low, set those voxels to 0 for both P and V valid_voxels = s0_map >= low_signal_threshold P_map = np.zeros(data.shape[:-1]) for idx in range(data.shape[0] * data.shape[1] * data.shape[2]): i, j, k = np.unravel_index(idx, P_map.shape) if valid_voxels[i, j, k]: P, _ = compute_directional_average( data[i, j, k, :], bvals, masks=masks, b0_mask=b0_mask, s0_map=s0_map[i, j, k], low_signal_threshold=low_signal_threshold, ) P_map[i, j, k] = P # Adding a small slope threshold for P_map to avoid 0 sloped background voxels wm_mask = (P_map <= wm_threshold) & (P_map > 0.01) # Grey matter has a higher P value than white matter gm_mask = P_map > wm_threshold return wm_mask, gm_mask dipy-1.11.0/dipy/segment/utils.py000066400000000000000000000046441476546756600167220ustar00rootroot00000000000000import warnings import numpy as np from scipy.ndimage import label def remove_holes_and_islands(binary_img, *, slice_wise=False): """ Remove any small mask chunks or holes that could be in the segmentation output. Parameters ---------- binary_img : np.ndarray Binary image slice_wise : bool, optional Whether to run slice wise background correction as well Returns ------- largest_img : np.ndarray """ if len(np.unique(binary_img)) != 2: warnings.warn( "The mask is not binary. 
\ Returning the original mask", UserWarning, stacklevel=2, ) return binary_img largest_img = np.zeros_like(binary_img) chunks, _ = label(np.abs(1 - binary_img)) u, c = np.unique(chunks[chunks != 0], return_counts=True) if len(u) == 0: warnings.warn( "The mask has no background. \ Returning the original mask", UserWarning, stacklevel=2, ) return binary_img target = u[np.argmax(c)] largest_img = np.where(chunks == target, 0, 1) chunks, _ = label(largest_img) u, c = np.unique(chunks[chunks != 0], return_counts=True) if len(u) == 0: warnings.warn( "The mask has no foreground. \ Returning the original mask", UserWarning, stacklevel=2, ) return binary_img target = u[np.argmax(c)] largest_img = np.where(chunks == target, 1, 0) if slice_wise: for x in range(largest_img.shape[0]): chunks, n_chunk = label(np.abs(1 - largest_img[x])) u, c = np.unique(chunks[chunks != 0], return_counts=True) target = u[np.argmax(c)] largest_img[x] = np.where(chunks == target, 0, 1) for y in range(largest_img.shape[1]): chunks, n_chunk = label(np.abs(1 - largest_img[:, y])) u, c = np.unique(chunks[chunks != 0], return_counts=True) target = u[np.argmax(c)] largest_img[:, y] = np.where(chunks == target, 0, 1) for z in range(largest_img.shape[2]): chunks, n_chunk = label(np.abs(1 - largest_img[..., z])) u, c = np.unique(chunks[chunks != 0], return_counts=True) target = u[np.argmax(c)] largest_img[..., z] = np.where(chunks == target, 0, 1) return largest_img dipy-1.11.0/dipy/sims/000077500000000000000000000000001476546756600145115ustar00rootroot00000000000000dipy-1.11.0/dipy/sims/__init__.py000066400000000000000000000000271476546756600166210ustar00rootroot00000000000000# init for simulations dipy-1.11.0/dipy/sims/meson.build000066400000000000000000000002471476546756600166560ustar00rootroot00000000000000python_sources = [ '__init__.py', 'phantom.py', 'voxel.py', ] py3.install_sources( python_sources, pure: false, subdir: 'dipy/sims' ) subdir('tests')dipy-1.11.0/dipy/sims/phantom.py000066400000000000000000000163141476546756600165360ustar00rootroot00000000000000import numpy as np from dipy.core.geometry import vec2vec_rotmat from dipy.core.gradients import gradient_table from dipy.data import get_fnames import dipy.sims.voxel as vox # import scipy.stats as stats from dipy.sims.voxel import diffusion_evals, single_tensor from dipy.testing.decorators import warning_for_keywords @warning_for_keywords() def add_noise(vol, *, snr=1.0, S0=None, noise_type="rician", rng=None): """Add noise of specified distribution to a 4D array. Parameters ---------- vol : array, shape (X,Y,Z,W) Diffusion measurements in `W` directions at each ``(X, Y, Z)`` voxel position. snr : float, optional The desired signal-to-noise ratio. (See notes below.) S0 : float, optional Reference signal for specifying `snr`. noise_type : string, optional The distribution of noise added. Can be either 'gaussian' for Gaussian distributed noise, 'rician' for Rice-distributed noise (default) or 'rayleigh' for a Rayleigh distribution. rng : numpy.random.Generator class, optional Numpy's random generator for setting seed values when needed. Returns ------- vol : array, same shape as vol Volume with added noise. Notes ----- SNR is defined here, following :footcite:p:`Descoteaux2007`, as ``S0 / sigma``, where ``sigma`` is the standard deviation of the two Gaussian distributions forming the real and imaginary components of the Rician noise distribution (see :footcite:p:`Gudbjartsson1995`). References ---------- .. 
footbibliography:: Examples -------- >>> signal = np.arange(800).reshape(2, 2, 2, 100) >>> signal_w_noise = add_noise(signal, snr=10, noise_type='rician') """ orig_shape = vol.shape vol_flat = np.reshape(vol.copy(), (-1, vol.shape[-1])) if S0 is None: S0 = np.max(vol) for vox_idx, signal in enumerate(vol_flat): vol_flat[vox_idx] = vox.add_noise( signal, snr=snr, S0=S0, noise_type=noise_type, rng=rng ) return np.reshape(vol_flat, orig_shape) def diff2eigenvectors(dx, dy, dz): """numerical derivatives 2 eigenvectors""" basis = np.eye(3) u = np.array([dx, dy, dz]) u = u / np.linalg.norm(u) R = vec2vec_rotmat(basis[:, 0], u) eig0 = u eig1 = np.dot(R, basis[:, 1]) eig2 = np.dot(R, basis[:, 2]) eigs = np.zeros((3, 3)) eigs[:, 0] = eig0 eigs[:, 1] = eig1 eigs[:, 2] = eig2 return eigs, R @warning_for_keywords() def orbital_phantom( *, gtab=None, evals=diffusion_evals, func=None, t=None, datashape=(64, 64, 64, 65), origin=(32, 32, 32), scale=(25, 25, 25), angles=None, radii=None, S0=100.0, snr=None, rng=None, ): """Create a phantom based on a 3-D orbit ``f(t) -> (x,y,z)``. Parameters ---------- gtab : GradientTable Gradient table of measurement directions. evals : array, shape (3,) Tensor eigenvalues. func : user defined function f(t)->(x,y,z) It could be desirable for ``-1=>> def f(t): ... x = np.sin(t) ... y = np.cos(t) ... z = np.linspace(-1, 1, len(x)) ... return x, y, z >>> data = orbital_phantom(func=f) """ if t is None: t = np.linspace(0, 2 * np.pi, 1000) if angles is None: angles = np.linspace(0, 2 * np.pi, 32) if radii is None: radii = np.linspace(0.2, 2, 6) if gtab is None: fimg, fbvals, fbvecs = get_fnames(name="small_64D") gtab = gradient_table(fbvals, bvecs=fbvecs) if func is None: x = np.sin(t) y = np.cos(t) z = np.zeros(t.shape) else: x, y, z = func(t) dx = np.diff(x) dy = np.diff(y) dz = np.diff(z) x = scale[0] * x + origin[0] y = scale[1] * y + origin[1] z = scale[2] * z + origin[2] bx = np.zeros(len(angles)) by = np.sin(angles) bz = np.cos(angles) # The entire volume is considered to be inside the brain. # Voxels without a fiber crossing through them are taken # to be isotropic with signal = S0. 
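    # (Each fibre segment below then adds its single-tensor signal on top of
    # this baseline; the volume is afterwards divided by its per-voxel
    # maximum over gradients and rescaled to S0.)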
vol = np.zeros(datashape) + S0 for i in range(len(dx)): evecs, R = diff2eigenvectors(dx[i], dy[i], dz[i]) S = single_tensor(gtab, S0, evals=evals, evecs=evecs, snr=None) vol[int(x[i]), int(y[i]), int(z[i]), :] += S for r in radii: for j in range(len(angles)): rb = np.dot(R, np.array([bx[j], by[j], bz[j]])) ix = int(x[i] + r * rb[0]) iy = int(y[i] + r * rb[1]) iz = int(z[i] + r * rb[2]) vol[ix, iy, iz] = vol[ix, iy, iz] + S vol = vol / np.max(vol, axis=-1)[..., np.newaxis] vol *= S0 if snr is not None: vol = add_noise(vol, snr=snr, S0=S0, noise_type="rician", rng=rng) return vol if __name__ == "__main__": # TODO: this can become a nice tutorial for generating phantoms def f(t): x = np.sin(t) y = np.cos(t) # z=np.zeros(t.shape) z = np.linspace(-1, 1, len(x)) return x, y, z # helix vol = orbital_phantom(func=f) def f2(t): x = np.linspace(-1, 1, len(t)) y = np.linspace(-1, 1, len(t)) z = np.zeros(x.shape) return x, y, z # first direction vol2 = orbital_phantom(func=f2) def f3(t): x = np.linspace(-1, 1, len(t)) y = -np.linspace(-1, 1, len(t)) z = np.zeros(x.shape) return x, y, z # second direction vol3 = orbital_phantom(func=f3) # double crossing vol23 = vol2 + vol3 # """ def f4(t): x = np.zeros(t.shape) y = np.zeros(t.shape) z = np.linspace(-1, 1, len(t)) return x, y, z # triple crossing vol4 = orbital_phantom(func=f4) vol234 = vol23 + vol4 # unknown function # voln = add_rician_noise(vol234) # """ # from dipy.viz import window, actor # scene = window.Scene() # scene.add(actor.volume(vol234[...,0])) # window.show(scene) # vol234n=add_rician_noise(vol234,20) dipy-1.11.0/dipy/sims/tests/000077500000000000000000000000001476546756600156535ustar00rootroot00000000000000dipy-1.11.0/dipy/sims/tests/__init__.py000066400000000000000000000000201476546756600177540ustar00rootroot00000000000000# Test callable dipy-1.11.0/dipy/sims/tests/meson.build000066400000000000000000000002501476546756600200120ustar00rootroot00000000000000python_sources = [ '__init__.py', 'test_phantom.py', 'test_voxel.py', ] py3.install_sources( python_sources, pure: false, subdir: 'dipy/sims/tests' ) dipy-1.11.0/dipy/sims/tests/test_phantom.py000066400000000000000000000041441476546756600207350ustar00rootroot00000000000000import numpy as np from numpy.testing import assert_, assert_array_almost_equal from dipy.core.gradients import gradient_table from dipy.data import get_fnames from dipy.io.gradients import read_bvals_bvecs from dipy.reconst.dti import TensorModel from dipy.sims.phantom import orbital_phantom from dipy.testing.decorators import set_random_number_generator fimg, fbvals, fbvecs = get_fnames(name="small_64D") bvals, bvecs = read_bvals_bvecs(fbvals, fbvecs) bvecs[np.isnan(bvecs)] = 0 gtab = gradient_table(bvals, bvecs=bvecs) def f(t): """ Helper function used to define a mapping time => xyz """ x = np.linspace(-1, 1, len(t)) y = np.linspace(-1, 1, len(t)) z = np.linspace(-1, 1, len(t)) return x, y, z def test_phantom(): N = 50 vol = orbital_phantom( gtab=gtab, func=f, t=np.linspace(0, 2 * np.pi, N), datashape=(10, 10, 10, len(bvals)), origin=(5, 5, 5), scale=(3, 3, 3), angles=np.linspace(0, 2 * np.pi, 16), radii=np.linspace(0.2, 2, 6), S0=100, ) m = TensorModel(gtab) t = m.fit(vol) FA = t.fa # print vol FA[np.isnan(FA)] = 0 # 686 -> expected FA given diffusivities of [1500, 400, 400] l1, l2, l3 = 1500e-6, 400e-6, 400e-6 expected_fa = ( np.sqrt(0.5) * np.sqrt((l1 - l2) ** 2 + (l2 - l3) ** 2 + (l3 - l1) ** 2) / np.sqrt(l1**2 + l2**2 + l3**2) ) assert_array_almost_equal(FA.max(), expected_fa, decimal=2) 
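# Worked check of the expected-FA value used above (annotation added here,
# not part of the original test): with l1 = 1500e-6 and l2 = l3 = 400e-6,
#   FA = sqrt(1/2) * sqrt(2 * (l1 - l2)**2) / sqrt(l1**2 + 2 * l2**2)
#      = 1100e-6 / sqrt(2.57e-6) ~= 0.686,
# which matches the "686" noted in the comment inside test_phantom().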
@set_random_number_generator(1980) def test_add_noise(rng): N = 50 S0 = 100 options = { "func": f, "t": np.linspace(0, 2 * np.pi, N), "datashape": (10, 10, 10, len(bvals)), "origin": (5, 5, 5), "scale": (3, 3, 3), "angles": np.linspace(0, 2 * np.pi, 16), "radii": np.linspace(0.2, 2, 6), "S0": S0, "rng": rng, } vol = orbital_phantom(gtab=gtab, **options) for snr in [10, 20, 30, 50]: vol_noise = orbital_phantom(gtab=gtab, snr=snr, **options) sigma = S0 / snr assert_(np.abs(np.var(vol_noise - vol) - sigma**2) < 1) dipy-1.11.0/dipy/sims/tests/test_voxel.py000066400000000000000000000343771476546756600204370ustar00rootroot00000000000000import numpy as np from numpy.testing import ( assert_, assert_almost_equal, assert_array_almost_equal, assert_array_equal, ) from dipy.core.gradients import gradient_table # from dipy.core.geometry import vec2vec_rotmat from dipy.data import get_fnames from dipy.io.gradients import read_bvals_bvecs from dipy.sims.voxel import ( _check_directions, add_noise, all_tensor_evecs, dki_signal, kurtosis_element, multi_tensor, multi_tensor_dki, single_tensor, sticks_and_ball, ) from dipy.testing.decorators import set_random_number_generator def setup_module(): """Module-level setup""" global gtab, gtab_2s _, fbvals, fbvecs = get_fnames(name="small_64D") bvals, bvecs = read_bvals_bvecs(fbvals, fbvecs) gtab = gradient_table(bvals, bvecs=bvecs) # 2 shells for techniques that requires multishell data bvals_2s = np.concatenate((bvals, bvals * 2), axis=0) bvecs_2s = np.concatenate((bvecs, bvecs), axis=0) gtab_2s = gradient_table(bvals_2s, bvecs=bvecs_2s) # Unused with missing references to basis # def diff2eigenvectors(dx, dy, dz): # """ numerical derivatives 2 eigenvectors # """ # u = np.array([dx, dy, dz]) # u = u / np.linalg.norm(u) # R = vec2vec_rotmat(basis[:, 0], u) # eig0 = u # eig1 = np.dot(R, basis[:, 1]) # eig2 = np.dot(R, basis[:, 2]) # eigs = np.zeros((3, 3)) # eigs[:, 0] = eig0 # eigs[:, 1] = eig1 # eigs[:, 2] = eig2 # return eigs, R def test_check_directions(): # Testing spherical angles for two principal coordinate axis angles = [(0, 0)] # axis z sticks = _check_directions(angles) assert_array_almost_equal(sticks, [[0, 0, 1]]) angles = [(0, 90)] # axis z again (phi can be anything it theta is zero) sticks = _check_directions(angles) assert_array_almost_equal(sticks, [[0, 0, 1]]) angles = [(90, 0)] # axis x sticks = _check_directions(angles) assert_array_almost_equal(sticks, [[1, 0, 0]]) # Testing if directions are already given in cartesian coordinates angles = [(0, 0, 1)] sticks = _check_directions(angles) assert_array_almost_equal(sticks, [[0, 0, 1]]) # Testing more than one direction simultaneously angles = np.array([[90, 0], [30, 0]]) sticks = _check_directions(angles) ref_vec = [np.sin(np.pi * 30 / 180), 0, np.cos(np.pi * 30 / 180)] assert_array_almost_equal(sticks, [[1, 0, 0], ref_vec]) # Testing directions not aligned to planes x = 0, y = 0, or z = 0 the1 = 0 phi1 = 90 the2 = 30 phi2 = 45 angles = np.array([(the1, phi1), (the2, phi2)]) sticks = _check_directions(angles) ref_vec1 = ( np.sin(np.pi * the1 / 180) * np.cos(np.pi * phi1 / 180), np.sin(np.pi * the1 / 180) * np.sin(np.pi * phi1 / 180), np.cos(np.pi * the1 / 180), ) ref_vec2 = ( np.sin(np.pi * the2 / 180) * np.cos(np.pi * phi2 / 180), np.sin(np.pi * the2 / 180) * np.sin(np.pi * phi2 / 180), np.cos(np.pi * the2 / 180), ) assert_array_almost_equal(sticks, [ref_vec1, ref_vec2]) def test_sticks_and_ball(): d = 0.0015 S, sticks = sticks_and_ball( gtab, d=d, S0=1, angles=[ (0, 0), ], fractions=[100], 
snr=None, ) assert_array_equal(sticks, [[0, 0, 1]]) S_st = single_tensor( gtab, 1, evals=[d, 0, 0], evecs=[[0, 0, 0], [0, 0, 0], [1, 0, 0]] ) assert_array_almost_equal(S, S_st) def test_single_tensor(): evals = np.array([1.4, 0.35, 0.35]) * 10 ** (-3) evecs = np.eye(3) S = single_tensor(gtab, 100, evals=evals, evecs=evecs, snr=None) assert_array_almost_equal(S[gtab.b0s_mask], 100) assert_(np.mean(S[~gtab.b0s_mask]) < 100) from dipy.reconst.dti import TensorModel m = TensorModel(gtab) t = m.fit(S) assert_array_almost_equal(t.fa, 0.707, decimal=3) def test_multi_tensor(): mevals = np.array(([0.0015, 0.0003, 0.0003], [0.0015, 0.0003, 0.0003])) e0 = np.array([np.sqrt(2) / 2.0, np.sqrt(2) / 2.0, 0]) e1 = np.array([0, np.sqrt(2) / 2.0, np.sqrt(2) / 2.0]) mevecs = [all_tensor_evecs(e0), all_tensor_evecs(e1)] # odf = multi_tensor_odf(vertices, [0.5, 0.5], mevals, mevecs) # assert_(odf.shape == (len(vertices),)) # assert_(np.all(odf <= 1) & np.all(odf >= 0)) fimg, fbvals, fbvecs = get_fnames(name="small_101D") bvals, bvecs = read_bvals_bvecs(fbvals, fbvecs) gtab = gradient_table(bvals, bvecs=bvecs) s1 = single_tensor(gtab, 100, evals=mevals[0], evecs=mevecs[0], snr=None) s2 = single_tensor(gtab, 100, evals=mevals[1], evecs=mevecs[1], snr=None) Ssingle = 0.5 * s1 + 0.5 * s2 S, _ = multi_tensor( gtab, mevals, S0=100, angles=[(90, 45), (45, 90)], fractions=[50, 50], snr=None ) assert_array_almost_equal(S, Ssingle) @set_random_number_generator(2000) def test_snr(rng=None): s = single_tensor(gtab) # For reasonably large SNR, var(signal) ~= sigma**2, where sigma = 1/SNR for snr in [5, 10, 20]: sigma = 1.0 / snr for _ in range(1000): s_noise = add_noise(s, snr, 1, noise_type="rician", rng=rng) assert_array_almost_equal(np.var(s_noise - s), sigma**2, decimal=2) def test_all_tensor_evecs(): e0 = np.array([1 / np.sqrt(2), 1 / np.sqrt(2), 0]) # Vectors are returned column-wise! desired = np.array( [ [1 / np.sqrt(2), 1 / np.sqrt(2), 0], [-1 / np.sqrt(2), 1 / np.sqrt(2), 0], [0, 0, 1], ] ).T assert_array_almost_equal(all_tensor_evecs(e0), desired) def test_kurtosis_elements(): """Testing symmetry of the elements of the KT As an 4th order tensor, KT has 81 elements. However, due to diffusion symmetry the KT is fully characterized by 15 independent elements. This test checks for this property. """ # two fiber not aligned to planes x = 0, y = 0, or z = 0 mevals = np.array( [ [0.00099, 0, 0], [0.00226, 0.00087, 0.00087], [0.00099, 0, 0], [0.00226, 0.00087, 0.00087], ] ) angles = [(80, 10), (80, 10), (20, 30), (20, 30)] fie = 0.49 # intra axonal water fraction frac = [fie * 50, (1 - fie) * 50, fie * 50, (1 - fie) * 50] sticks = _check_directions(angles) mD = np.zeros((len(frac), 3, 3)) for i in range(len(frac)): R = all_tensor_evecs(sticks[i]) mD[i] = np.dot(np.dot(R, np.diag(mevals[i])), R.T) # compute global DT D = np.zeros((3, 3)) for i in range(len(frac)): D = D + frac[i] * mD[i] # compute voxel's MD MD = (D[0][0] + D[1][1] + D[2][2]) / 3 # Reference dictionary with the 15 independent elements. # Note: The multiplication of the indexes (i+1) * (j+1) * (k+1) * (l+1) # for of an elements is only equal to this multiplication for another # element if an only if the element corresponds to an symmetry element. 
# Thus indexes multiplication is used as key of the reference dictionary kt_ref = { 1: kurtosis_element(mD, frac, 0, 0, 0, 0), 16: kurtosis_element(mD, frac, 1, 1, 1, 1), 81: kurtosis_element(mD, frac, 2, 2, 2, 2), 2: kurtosis_element(mD, frac, 0, 0, 0, 1), 3: kurtosis_element(mD, frac, 0, 0, 0, 2), 8: kurtosis_element(mD, frac, 0, 1, 1, 1), 24: kurtosis_element(mD, frac, 1, 1, 1, 2), 27: kurtosis_element(mD, frac, 0, 2, 2, 2), 54: kurtosis_element(mD, frac, 1, 2, 2, 2), 4: kurtosis_element(mD, frac, 0, 0, 1, 1), 9: kurtosis_element(mD, frac, 0, 0, 2, 2), 36: kurtosis_element(mD, frac, 1, 1, 2, 2), 6: kurtosis_element(mD, frac, 0, 0, 1, 2), 12: kurtosis_element(mD, frac, 0, 1, 1, 2), 18: kurtosis_element(mD, frac, 0, 1, 2, 2), } # Testing all 81 possible elements xyz = [0, 1, 2] for i in xyz: for j in xyz: for k in xyz: for ell in xyz: key = (i + 1) * (j + 1) * (k + 1) * (ell + 1) assert_almost_equal( kurtosis_element(mD, frac, i, k, j, ell), kt_ref[key] ) # Testing optional function inputs assert_almost_equal( kurtosis_element(mD, frac, i, k, j, ell), kurtosis_element(mD, frac, i, k, j, ell, DT=D, MD=MD), ) def test_DKI_simulations_aligned_fibers(): """ Testing DKI simulations when aligning the same fiber to different axis. If biological parameters don't change, kt[0] of a fiber aligned to axis x has to be equal to kt[1] of a fiber aligned to the axis y and equal to kt[2] of a fiber aligned to axis z. The same is applicable for dt """ # Defining parameters based on Neto Henriques et al., 2015. NeuroImage 111 mevals = np.array( [ [0.00099, 0, 0], # Intra-cellular [0.00226, 0.00087, 0.00087], ] ) # Extra-cellular frac = [49, 51] # Compartment volume fraction # axis x angles = [(90, 0), (90, 0)] signal_fx, dt_fx, kt_fx = multi_tensor_dki( gtab_2s, mevals, angles=angles, fractions=frac ) # axis y angles = [(90, 90), (90, 90)] signal_fy, dt_fy, kt_fy = multi_tensor_dki( gtab_2s, mevals, angles=angles, fractions=frac ) # axis z angles = [(0, 0), (0, 0)] signal_fz, dt_fz, kt_fz = multi_tensor_dki( gtab_2s, mevals, angles=angles, fractions=frac ) assert_array_equal([kt_fx[0], kt_fx[1], kt_fx[2]], [kt_fy[1], kt_fy[0], kt_fy[2]]) assert_array_equal([kt_fx[0], kt_fx[1], kt_fx[2]], [kt_fz[2], kt_fz[0], kt_fz[1]]) assert_array_equal([dt_fx[0], dt_fx[2], dt_fx[5]], [dt_fy[2], dt_fy[0], dt_fy[5]]) assert_array_equal([dt_fx[0], dt_fx[2], dt_fx[5]], [dt_fz[5], dt_fz[0], dt_fz[2]]) # testing S signal along axis x, y and z bvals = np.array([0, 0, 0, 1000, 1000, 1000, 2000, 2000, 2000]) bvecs = np.asarray( [ [1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 0, 0], [0, 1, 0], [0, 0, 1], ] ) gtab_axis = gradient_table(bvals, bvecs=bvecs) # axis x S_fx = dki_signal(gtab_axis, dt_fx, kt_fx, S0=100) assert_array_almost_equal(S_fx[0:3], [100, 100, 100]) # test S f0r b=0 # axis y S_fy = dki_signal(gtab_axis, dt_fy, kt_fy, S0=100) assert_array_almost_equal(S_fy[0:3], [100, 100, 100]) # test S f0r b=0 # axis z S_fz = dki_signal(gtab_axis, dt_fz, kt_fz, S0=100) assert_array_almost_equal(S_fz[0:3], [100, 100, 100]) # test S f0r b=0 # test S for b = 1000 assert_array_almost_equal([S_fx[3], S_fx[4], S_fx[5]], [S_fy[4], S_fy[3], S_fy[5]]) assert_array_almost_equal([S_fx[3], S_fx[4], S_fx[5]], [S_fz[5], S_fz[3], S_fz[4]]) # test S for b = 2000 assert_array_almost_equal([S_fx[6], S_fx[7], S_fx[8]], [S_fy[7], S_fy[6], S_fy[8]]) assert_array_almost_equal([S_fx[6], S_fx[7], S_fx[8]], [S_fz[8], S_fz[6], S_fz[7]]) def test_DKI_crossing_fibers_simulations(): """Testing DKI simulations of a crossing fiber""" # 
two fiber not aligned to planes x = 0, y = 0, or z = 0 mevals = np.array( [ [0.00099, 0, 0], [0.00226, 0.00087, 0.00087], [0.00099, 0, 0], [0.00226, 0.00087, 0.00087], ] ) angles = [(80, 10), (80, 10), (20, 30), (20, 30)] fie = 0.49 frac = [fie * 50, (1 - fie) * 50, fie * 50, (1 - fie) * 50] signal, dt, kt = multi_tensor_dki( gtab_2s, mevals, angles=angles, fractions=frac, snr=None ) # in this simulations dt and kt cannot have zero elements for i in range(len(dt)): assert dt[i] != 0 for i in range(len(kt)): assert kt[i] != 0 # test S, dt and kt relative to the expected values computed from another # DKI package - UDKI (Neto Henriques et al., 2015) dt_ref = [ 1.0576161e-3, 0.1292542e-3, 0.4786179e-3, 0.2667081e-3, 0.1136643e-3, 0.9888660e-3, ] kt_ref = [ 2.3529944, 0.8226448, 2.3011221, 0.2017312, -0.0437535, 0.0404011, 0.0355281, 0.2449859, 0.2157668, 0.3495910, 0.0413366, 0.3461519, -0.0537046, 0.0133414, -0.017441, ] assert_array_almost_equal(dt, dt_ref) assert_array_almost_equal(kt, kt_ref) assert_array_almost_equal( signal, dki_signal(gtab_2s, dt_ref, kt_ref, S0=1.0, snr=None), decimal=5 ) def test_single_tensor_btens(): """Testing single tensor simulations when a btensor is given""" gtab_lte = gradient_table(gtab.bvals, bvecs=gtab.bvecs, btens="LTE") gtab_ste = gradient_table(gtab.bvals, bvecs=gtab.bvecs, btens="STE") # Check if Signals produced with LTE btensor gives same results as # previous simulations not specifying b-tensor evecs = np.eye(3) evals = np.array([1.4, 0.35, 0.35]) * 10 ** (-3) S_ref = single_tensor(gtab, 100, evals=evals, evecs=evecs, snr=None) S_btens = single_tensor(gtab_lte, 100, evals=evals, evecs=evecs, snr=None) assert_array_almost_equal(S_ref, S_btens) # Check if signals produced with STE btensor gives signals that matches # the signal decay for mean diffusivity md = np.sum(evals) / 3 S_ref = 100 * np.exp(-gtab.bvals * md) S_btens = single_tensor(gtab_ste, 100, evals=evals, evecs=evecs, snr=None) assert_array_almost_equal(S_ref, S_btens) def test_multi_tensor_btens(): """Testing multi tensor simulations when a btensor is given""" mevals = np.array(([0.003, 0.0002, 0.0002], [0.0015, 0.0003, 0.0003])) e0 = np.array([np.sqrt(2) / 2.0, np.sqrt(2) / 2.0, 0]) e1 = np.array([0, np.sqrt(2) / 2.0, np.sqrt(2) / 2.0]) mevecs = [all_tensor_evecs(e0), all_tensor_evecs(e1)] gtab_ste = gradient_table(gtab.bvals, bvecs=gtab.bvecs, btens="STE") s1 = single_tensor(gtab_ste, 100, evals=mevals[0], evecs=mevecs[0], snr=None) s2 = single_tensor(gtab_ste, 100, evals=mevals[1], evecs=mevecs[1], snr=None) Ssingle = 0.5 * s1 + 0.5 * s2 S, _ = multi_tensor( gtab_ste, mevals, S0=100, angles=[(90, 45), (45, 90)], fractions=[50, 50], snr=None, ) assert_array_almost_equal(S, Ssingle) dipy-1.11.0/dipy/sims/voxel.py000066400000000000000000000733661476546756600162370ustar00rootroot00000000000000import numpy as np from numpy import dot from scipy.special import jn from dipy.core.geometry import sphere2cart, vec2vec_rotmat from dipy.reconst.utils import dki_design_matrix from dipy.testing.decorators import warning_for_keywords # Diffusion coefficients for white matter tracts, in mm^2/s # # Based roughly on values from: # # Pierpaoli, Basser, "Towards a Quantitative Assessment of Diffusion # Anisotropy", Magnetic Resonance in Medicine, 1996; 36(6):893-906. 
# diffusion_evals = np.array([1500e-6, 400e-6, 400e-6]) def _check_directions(angles): """ Helper function to check if direction ground truth have the right format and are in cartesian coordinates Parameters ---------- angles : array (K,2) or (K, 3) List of K polar angles (in degrees) for the sticks or array of K sticks as unit vectors. Returns ------- sticks : (K,3) Sticks in cartesian coordinates. """ angles = np.array(angles) if angles.shape[-1] == 3: sticks = angles else: sticks = [ sphere2cart(1, np.deg2rad(pair[0]), np.deg2rad(pair[1])) for pair in angles ] sticks = np.array(sticks) return sticks def _add_gaussian(sig, noise1, noise2): """ Helper function to add_noise This one simply adds one of the Gaussians to the sig and ignores the other one. """ return sig + noise1 def _add_rician(sig, noise1, noise2): """ Helper function to add_noise. This does the same as abs(sig + complex(noise1, noise2)) """ return np.sqrt((sig + noise1) ** 2 + noise2**2) def _add_rayleigh(sig, noise1, noise2): r"""Helper function to add_noise. The Rayleigh distribution is $\sqrt\{Gauss_1^2 + Gauss_2^2}$. """ return sig + np.sqrt(noise1**2 + noise2**2) @warning_for_keywords() def add_noise(signal, snr, S0, *, noise_type="rician", rng=None): r"""Add noise of specified distribution to the signal from a single voxel. Parameters ---------- signal : 1-d ndarray The signal in the voxel. snr : float The desired signal-to-noise ratio. (See notes below.) If `snr` is None, return the signal as-is. S0 : float Reference signal for specifying `snr`. noise_type : string, optional The distribution of noise added. Can be either 'gaussian' for Gaussian distributed noise, 'rician' for Rice-distributed noise (default) or 'rayleigh' for a Rayleigh distribution. rng : numpy.random.Generator, optional Random number generator for the noise. If ``None``, uses NumPy's default random generator. Returns ------- signal : array, same shape as the input Signal with added noise. Notes ----- SNR is defined here, following :footcite:p:`Descoteaux2007`, as ``S0 / sigma``, where ``sigma`` is the standard deviation of the two Gaussian distributions forming the real and imaginary components of the Rician noise distribution (see :footcite:p:`Gudbjartsson1995`). References ---------- .. footbibliography:: Examples -------- >>> signal = np.arange(800).reshape(2, 2, 2, 100) >>> signal_w_noise = add_noise(signal, 10., 100., noise_type='rician') """ if snr is None: return signal if rng is None: rng = np.random.default_rng() sigma = S0 / snr noise_adder = { "gaussian": _add_gaussian, "rician": _add_rician, "rayleigh": _add_rayleigh, } noise1 = rng.normal(0, sigma, size=signal.shape) if noise_type == "gaussian": noise2 = None else: noise2 = rng.normal(0, sigma, size=signal.shape) return noise_adder[noise_type](signal, noise1, noise2) @warning_for_keywords() def sticks_and_ball( gtab, *, d=0.0015, S0=1.0, angles=((0, 0), (90, 0)), fractions=(35, 35), snr=20 ): """Simulate the signal for a Sticks & Ball model. See :footcite:p:`Behrens2007` for a definition of the Sticks & Ball model. Parameters ---------- gtab : GradientTable Signal measurement directions. d : float, optional Diffusivity value. S0 : float, optional Unweighted signal value. angles : array (K, 2) or (K, 3), optional List of K polar angles (in degrees) for the sticks or array of K sticks as unit vectors. fractions : array-like, optional Percentage of each stick. Remainder to 100 specifies isotropic component. snr : float, optional Signal to noise ratio, assuming Rician noise. 
If set to None, no noise is added. Returns ------- S : (N,) ndarray Simulated signal. sticks : (M,3) Sticks in cartesian coordinates. References ---------- .. footbibliography:: """ fractions = [f / 100.0 for f in fractions] f0 = 1 - np.sum(fractions) S = np.zeros(len(gtab.bvals)) sticks = _check_directions(angles) for i, g in enumerate(gtab.bvecs[1:]): S[i + 1] = f0 * np.exp(-gtab.bvals[i + 1] * d) + np.sum( [ fractions[j] * np.exp(-gtab.bvals[i + 1] * d * np.dot(s, g) ** 2) for (j, s) in enumerate(sticks) ] ) S[i + 1] = S0 * S[i + 1] S[gtab.b0s_mask] = S0 S = add_noise(S, snr, S0) return S, sticks def callaghan_perpendicular(q, radius): """Calculates the perpendicular diffusion signal E(q) in a cylinder of radius R using the Soderman model. Assumes that the pulse length is infinitely short and the diffusion time is infinitely long. See :footcite:p:`Soderman1995` for details about the Soderman model. Parameters ---------- q : array, shape (N,) q-space value in 1/mm radius : float cylinder radius in mm Returns ------- E : array, shape (N,) signal attenuation References ---------- .. footbibliography:: """ # Eq. [6] in the paper numerator = (2 * jn(1, 2 * np.pi * q * radius)) ** 2 denom = (2 * np.pi * q * radius) ** 2 E = np.divide(numerator, denom, out=np.zeros_like(q), where=denom != 0) return E @warning_for_keywords() def gaussian_parallel(q, tau, *, D=0.7e-3): r"""Calculates the parallel Gaussian diffusion signal. Parameters ---------- q : array, shape (N,) q-space value in 1/mm tau : float diffusion time in s D : float, optional diffusion constant Returns ------- E : array, shape (N,) signal attenuation """ return np.exp(-((2 * np.pi * q) ** 2) * tau * D) @warning_for_keywords() def cylinders_and_ball_soderman( gtab, tau, *, radii=(5e-3, 5e-3), D=0.7e-3, S0=1.0, angles=((0, 0), (90, 0)), fractions=(35, 35), snr=20, ): """Calculates the three-dimensional signal attenuation E(q) originating from within a cylinder of radius R using the Soderman approximation. The diffusion signal is assumed to be separable perpendicular and parallel to the cylinder axis :footcite:p:`Assaf2004`. This function is basically an extension of the ball and stick model. Setting the radius to zero makes them equivalent. See :footcite:p:`Soderman1995` for details about the Soderman model. Parameters ---------- gtab : GradientTable Signal measurement directions. tau : float diffusion time in s radii : array-like, optional cylinder radius in mm D : float, optional diffusion constant S0 : float, optional Unweighted signal value. angles : array (K, 2) or (K, 3), optional List of K polar angles (in degrees) for the sticks or array of K sticks as unit vectors. fractions : array-like, optional Percentage of each stick. Remainder to 100 specifies isotropic component. snr : float, optional Signal to noise ratio, assuming Rician noise. If set to None, no noise is added. Returns ------- E : array, shape (N,) signal attenuation References ---------- .. 
footbibliography:: """ qvals = np.sqrt(gtab.bvals / tau) / (2 * np.pi) qvecs = qvals[:, None] * gtab.bvecs q_norm = np.sqrt(np.einsum("ij,ij->i", qvecs, qvecs)) fractions = [f / 100.0 for f in fractions] f0 = 1 - np.sum(fractions) S = np.zeros(len(gtab.bvals)) sticks = _check_directions(angles) for i, f in enumerate(fractions): q_par = abs(np.dot(qvecs, sticks[i])) q_perp = np.sqrt(q_norm**2 - q_par**2) S_cylinder = callaghan_perpendicular(q_perp, radii[i]) * gaussian_parallel( q_par, tau, D=D ) S += f * S_cylinder S += f0 * np.exp(-gtab.bvals * D) S *= S0 S[gtab.b0s_mask] = S0 S = add_noise(S, snr, S0) return S, sticks @warning_for_keywords() def single_tensor(gtab, S0=1, *, evals=None, evecs=None, snr=None, rng=None): """Simulate diffusion-weighted signals with a single tensor. See :footcite:p:`Stejskal1965`, :footcite:p:`Descoteaux2008b`. Parameters ---------- gtab : GradientTable Table with information of b-values and gradient directions g. Note that if gtab has a btens attribute, simulations will be performed according to the given b-tensor B information. S0 : double, optional Strength of signal in the presence of no diffusion gradient (also called the ``b=0`` value). evals : (3,) ndarray, optional Eigenvalues of the diffusion tensor. By default, values typical for prolate white matter are used. evecs : (3, 3) ndarray, optional Eigenvectors of the tensor. You can also think of this as a rotation matrix that transforms the direction of the tensor. The eigenvectors need to be column wise. snr : float, optional Signal to noise ratio, assuming Rician noise. None implies no noise. rng : numpy.random.Generator, optional Random number generator for the noise. If ``None``, uses NumPy's default random generator. Returns ------- S : (N,) ndarray Simulated signal: ``S(b, g) = S_0 e^(-b g^T R D R.T g)``, if gtab.tens=None ``S(B) = S_0 e^(-B:D)``, if gtab.tens information is given References ---------- .. footbibliography:: """ if rng is None: rng = np.random.default_rng() if evals is None: evals = diffusion_evals if evecs is None: evecs = np.eye(3) out_shape = gtab.bvecs.shape[: gtab.bvecs.ndim - 1] gradients = gtab.bvecs.reshape(-1, 3) R = np.asarray(evecs) S = np.zeros(len(gradients)) D = dot(dot(R, np.diag(evals)), R.T) if gtab.btens is None: for i, g in enumerate(gradients): S[i] = S0 * np.exp(-gtab.bvals[i] * dot(dot(g.T, D), g)) else: for i, b in enumerate(gtab.btens): S[i] = S0 * np.exp(-np.sum(b * D)) S = add_noise(S, snr, S0, rng=rng) return S.reshape(out_shape) @warning_for_keywords() def multi_tensor( gtab, mevals, *, S0=1.0, angles=((0, 0), (90, 0)), fractions=(50, 50), snr=20, rng=None, ): r"""Simulate a Multi-Tensor signal. Parameters ---------- gtab : GradientTable Table with information of b-values and gradient directions. Note that if gtab has a btens attribute, simulations will be performed according to the given b-tensor information. mevals : array (K, 3) each tensor's eigenvalues in each row S0 : float, optional Unweighted signal value (b0 signal). angles : array (K, 2) or (K, 3), optional List of K tensor directions in polar angles (in degrees) or unit vectors fractions : array-like, optional Percentage of the contribution of each tensor. The sum of fractions should be equal to 100%. snr : float, optional Signal to noise ratio, assuming Rician noise. If set to None, no noise is added. rng : numpy.random.Generator, optional Random number generator for the noise. If ``None``, uses NumPy's default random generator. Returns ------- S : (N,) ndarray Simulated signal. 
sticks : (M,3) Sticks in cartesian coordinates. Examples -------- >>> import numpy as np >>> from dipy.sims.voxel import multi_tensor >>> from dipy.data import get_fnames >>> from dipy.core.gradients import gradient_table >>> from dipy.io.gradients import read_bvals_bvecs >>> fimg, fbvals, fbvecs = get_fnames(name='small_101D') >>> bvals, bvecs = read_bvals_bvecs(fbvals, fbvecs) >>> gtab = gradient_table(bvals, bvecs=bvecs) >>> mevals=np.array(([0.0015, 0.0003, 0.0003],[0.0015, 0.0003, 0.0003])) >>> e0 = np.array([1, 0, 0.]) >>> e1 = np.array([0., 1, 0]) >>> S = multi_tensor(gtab, mevals) """ if rng is None: rng = np.random.default_rng() if np.round(np.sum(fractions), 2) != 100.0: raise ValueError("Fractions should sum to 100") fractions = [f / 100.0 for f in fractions] S = np.zeros(len(gtab.bvals)) sticks = _check_directions(angles) for i in range(len(fractions)): S = S + fractions[i] * single_tensor( gtab, S0=S0, evals=mevals[i], evecs=all_tensor_evecs(sticks[i]), snr=None ) return add_noise(S, snr, S0, rng=rng), sticks @warning_for_keywords() def multi_tensor_dki( gtab, mevals, *, S0=1.0, angles=((90.0, 0.0), (90.0, 0.0)), fractions=(50, 50), snr=20, ): r"""Simulate the diffusion-weighted signal, diffusion tensor and kurtosis tensor based on the DKI model See :footcite:p:`NetoHenriques2015` for further details. Parameters ---------- gtab : GradientTable Gradient table. mevals : array (K, 3) eigenvalues of the diffusion tensor for each individual compartment S0 : float, optional Unweighted signal value (b0 signal). angles : array (K,2) or (K,3), optional List of K tensor directions of the diffusion tensor of each compartment in polar angles (in degrees) or unit vectors fractions : float (K,), optional Percentage of the contribution of each tensor. The sum of fractions should be equal to 100%. snr : float, optional Signal to noise ratio, assuming Rician noise. If set to None, no noise is added. Returns ------- S : (N,) ndarray Simulated signal based on the DKI model. dt : (6,) elements of the diffusion tensor. kt : (15,) elements of the kurtosis tensor. Notes ----- Simulations are based on multicompartmental models, which assume that tissue is well described by impermeable diffusion compartments, each characterized by its own diffusion tensor. Since simulations are based on the DKI model, coefficients larger than the fourth order of the signal's Taylor expansion approximation are neglected. Examples -------- >>> import numpy as np >>> from dipy.sims.voxel import multi_tensor_dki >>> from dipy.data import get_fnames >>> from dipy.core.gradients import gradient_table >>> from dipy.io.gradients import read_bvals_bvecs >>> fimg, fbvals, fbvecs = get_fnames(name='small_64D') >>> bvals, bvecs = read_bvals_bvecs(fbvals, fbvecs) >>> bvals_2s = np.concatenate((bvals, bvals * 2), axis=0) >>> bvecs_2s = np.concatenate((bvecs, bvecs), axis=0) >>> gtab = gradient_table(bvals_2s, bvecs=bvecs_2s) >>> mevals = np.array([[0.00099, 0, 0],[0.00226, 0.00087, 0.00087]]) >>> S, dt, kt = multi_tensor_dki(gtab, mevals) References ---------- ..
footbibliography:: """ if np.round(np.sum(fractions), 2) != 100.0: raise ValueError("Fractions should sum to 100") fractions = [f / 100.0 for f in fractions] # S = np.zeros(len(gtab.bvals)) sticks = _check_directions(angles) # computing a 3D matrix containing the individual DT components D_comps = np.zeros((len(fractions), 3, 3)) for i in range(len(fractions)): R = all_tensor_evecs(sticks[i]) D_comps[i] = dot(dot(R, np.diag(mevals[i])), R.T) # compute voxel's DT DT = np.zeros((3, 3)) for i in range(len(fractions)): DT = DT + fractions[i] * D_comps[i] dt = np.array([DT[0][0], DT[0][1], DT[1][1], DT[0][2], DT[1][2], DT[2][2]]) # compute voxel's MD MD = (DT[0][0] + DT[1][1] + DT[2][2]) / 3 # compute voxel's KT kt = np.zeros(15) kt[0] = kurtosis_element(D_comps, fractions, 0, 0, 0, 0, DT=DT, MD=MD) kt[1] = kurtosis_element(D_comps, fractions, 1, 1, 1, 1, DT=DT, MD=MD) kt[2] = kurtosis_element(D_comps, fractions, 2, 2, 2, 2, DT=DT, MD=MD) kt[3] = kurtosis_element(D_comps, fractions, 0, 0, 0, 1, DT=DT, MD=MD) kt[4] = kurtosis_element(D_comps, fractions, 0, 0, 0, 2, DT=DT, MD=MD) kt[5] = kurtosis_element(D_comps, fractions, 0, 1, 1, 1, DT=DT, MD=MD) kt[6] = kurtosis_element(D_comps, fractions, 1, 1, 1, 2, DT=DT, MD=MD) kt[7] = kurtosis_element(D_comps, fractions, 0, 2, 2, 2, DT=DT, MD=MD) kt[8] = kurtosis_element(D_comps, fractions, 1, 2, 2, 2, DT=DT, MD=MD) kt[9] = kurtosis_element(D_comps, fractions, 0, 0, 1, 1, DT=DT, MD=MD) kt[10] = kurtosis_element(D_comps, fractions, 0, 0, 2, 2, DT=DT, MD=MD) kt[11] = kurtosis_element(D_comps, fractions, 1, 1, 2, 2, DT=DT, MD=MD) kt[12] = kurtosis_element(D_comps, fractions, 0, 0, 1, 2, DT=DT, MD=MD) kt[13] = kurtosis_element(D_comps, fractions, 0, 1, 1, 2, DT=DT, MD=MD) kt[14] = kurtosis_element(D_comps, fractions, 0, 1, 2, 2, DT=DT, MD=MD) # compute S based on the DT and KT S = dki_signal(gtab, dt, kt, S0=S0, snr=snr) return S, dt, kt @warning_for_keywords() def kurtosis_element(D_comps, frac, ind_i, ind_j, ind_k, ind_l, *, DT=None, MD=None): r"""Computes the diffusion kurtosis tensor element (with indexes i, j, k and l) based on the individual diffusion tensor components of a multicompartmental model. Parameters ---------- D_comps : (K,3,3) ndarray Diffusion tensors for all K individual compartment of the multicompartmental model. frac : [float] Percentage of the contribution of each tensor. The sum of fractions should be equal to 100%. ind_i : int Element's index i (0 for x, 1 for y, 2 for z) ind_j : int Element's index j (0 for x, 1 for y, 2 for z) ind_k : int Element's index k (0 for x, 1 for y, 2 for z) ind_l: int Elements index l (0 for x, 1 for y, 2 for z) DT : (3,3) ndarray, optional Voxel's global diffusion tensor. MD : float, optional Voxel's global mean diffusivity. Returns ------- wijkl : float kurtosis tensor element of index i, j, k, l Notes ----- wijkl is calculated using equation 8 given in :footcite:p:`NetoHenriques2015`. References ---------- .. 
footbibliography:: """ if DT is None: DT = np.zeros((3, 3)) for i in range(len(frac)): DT = DT + frac[i] * D_comps[i] if MD is None: MD = (DT[0][0] + DT[1][1] + DT[2][2]) / 3 wijkl = 0 for f in range(len(frac)): wijkl = wijkl + frac[f] * ( D_comps[f][ind_i][ind_j] * D_comps[f][ind_k][ind_l] + D_comps[f][ind_i][ind_k] * D_comps[f][ind_j][ind_l] + D_comps[f][ind_i][ind_l] * D_comps[f][ind_j][ind_k] ) wijkl = ( wijkl - DT[ind_i][ind_j] * DT[ind_k][ind_l] - DT[ind_i][ind_k] * DT[ind_j][ind_l] - DT[ind_i][ind_l] * DT[ind_j][ind_k] ) / (MD**2) return wijkl @warning_for_keywords() def dki_signal(gtab, dt, kt, *, S0=150, snr=None): r"""Simulated signal based on the diffusion and diffusion kurtosis tensors of a single voxel. Simulations are performed assuming the DKI model. See :footcite:p:`NetoHenriques2015` for further details. Parameters ---------- gtab : GradientTable Measurement directions. dt : (6,) ndarray Elements of the diffusion tensor. kt : (15, ) ndarray Elements of the diffusion kurtosis tensor. S0 : float, optional Strength of signal in the presence of no diffusion gradient. snr : float, optional Signal to noise ratio, assuming Rician noise. None implies no noise. Returns ------- S : (N,) ndarray Simulated signal based on the DKI model: .. math:: S=S_{0}e^{-bD+\frac{1}{6}b^{2}D^{2}K} References ---------- .. footbibliography:: """ dt = np.array(dt) kt = np.array(kt) A = dki_design_matrix(gtab) # define vector of DKI parameters MD = (dt[0] + dt[2] + dt[5]) / 3 X = np.concatenate((dt, kt * MD * MD, np.array([-np.log(S0)])), axis=0) # Compute signals based on the DKI model S = np.exp(dot(A, X)) S = add_noise(S, snr, S0) return S @warning_for_keywords() def single_tensor_odf(r, *, evals=None, evecs=None): """Simulated ODF with a single tensor. See :footcite:p:`Aganj2010` for further details. Parameters ---------- r : (N,3) or (M,N,3) ndarray Measurement positions in (x, y, z), either as a list or on a grid. evals : (3,) Eigenvalues of diffusion tensor. By default, use values typical for prolate white matter. evecs : (3, 3) ndarray Eigenvectors of the tensor, written column-wise. You can also think of these as the rotation matrix that determines the orientation of the diffusion tensor. Returns ------- ODF : (N,) ndarray The diffusion probability at ``r`` after time ``tau``. References ---------- .. footbibliography:: """ if evals is None: evals = diffusion_evals if evecs is None: evecs = np.eye(3) out_shape = r.shape[: r.ndim - 1] R = np.asarray(evecs) D = dot(dot(R, np.diag(evals)), R.T) Di = np.linalg.inv(D) r = r.reshape(-1, 3) P = np.zeros(len(r)) for i, u in enumerate(r): P[i] = (dot(dot(u.T, Di), u)) ** (3 / 2) return (1 / (4 * np.pi * np.prod(evals) ** (1 / 2) * P)).reshape(out_shape) def all_tensor_evecs(e0): """Given the principle tensor axis, return the array of all eigenvectors column-wise (or, the rotation matrix that orientates the tensor). Parameters ---------- e0 : (3,) ndarray Principle tensor axis. Returns ------- evecs : (3,3) ndarray Tensor eigenvectors, arranged column-wise. """ axes = np.eye(3) mat = vec2vec_rotmat(axes[0], e0) e1 = np.dot(mat, axes[1]) e2 = np.dot(mat, axes[2]) # Return the eigenvectors column-wise: return np.array([e0, e1, e2]).T def multi_tensor_odf(odf_verts, mevals, angles, fractions): """Simulate a Multi-Tensor ODF. Parameters ---------- odf_verts : (N,3) ndarray Vertices of the reconstruction sphere. mevals : sequence of 1D arrays, Eigen-values for each tensor. 
angles : sequence of 2d tuples, Sequence of principal directions for each tensor in polar angles or cartesian unit coordinates. fractions : sequence of floats, Percentages of the fractions for each tensor. Returns ------- ODF : (N,) ndarray Orientation distribution function. Examples -------- Simulate a MultiTensor ODF with two peaks and calculate its exact ODF. >>> import numpy as np >>> from dipy.sims.voxel import multi_tensor_odf, all_tensor_evecs >>> from dipy.data import default_sphere >>> vertices, faces = default_sphere.vertices, default_sphere.faces >>> mevals = np.array(([0.0015, 0.0003, 0.0003],[0.0015, 0.0003, 0.0003])) >>> angles = [(0, 0), (90, 0)] >>> odf = multi_tensor_odf(vertices, mevals, angles, [50, 50]) """ mf = [f / 100.0 for f in fractions] sticks = _check_directions(angles) odf = np.zeros(len(odf_verts)) mevecs = [] for s in sticks: mevecs += [all_tensor_evecs(s)] for j, f in enumerate(mf): odf += f * single_tensor_odf(odf_verts, evals=mevals[j], evecs=mevecs[j]) return odf @warning_for_keywords() def single_tensor_rtop(*, evals=None, tau=1.0 / (4 * np.pi**2)): """Simulate a Single-Tensor rtop. See :footcite:p:`Cheng2012` for further details. Parameters ---------- evals : 1D arrays, optional Eigen-values for the tensor. By default, values typical for prolate white matter are used. tau : float, optional diffusion time. By default the value that makes q=sqrt(b). Returns ------- rtop : float, Return to origin probability. References ---------- .. footbibliography:: """ if evals is None: evals = diffusion_evals rtop = 1.0 / np.sqrt((4 * np.pi * tau) ** 3 * np.prod(evals)) return rtop @warning_for_keywords() def multi_tensor_rtop(mf, *, mevals=None, tau=1 / (4 * np.pi**2)): """Simulate a Multi-Tensor rtop. See :footcite:p:`Cheng2012` for further details. Parameters ---------- mf : sequence of floats, bounded [0,1] Fractions of the contribution of each tensor. mevals : sequence of 1D arrays, optional Eigen-values for each tensor. By default, values typical for prolate white matter are used. tau : float, optional diffusion time. By default the value that makes q=sqrt(b). Returns ------- rtop : float, Return to origin probability. References ---------- .. footbibliography:: """ rtop = 0 if mevals is None: mevals = [ None, ] * len(mf) for j, f in enumerate(mf): rtop += f * single_tensor_rtop(evals=mevals[j], tau=tau) return rtop @warning_for_keywords() def single_tensor_pdf(r, *, evals=None, evecs=None, tau=1 / (4 * np.pi**2)): """Simulated PDF with a single tensor. See :footcite:p:`Cheng2012` for further details. Parameters ---------- r : (N,3) or (M,N,3) ndarray Measurement positions in (x, y, z), either as a list or on a grid. evals : (3,), optional Eigenvalues of diffusion tensor. By default, use values typical for prolate white matter. evecs : (3, 3) ndarray, optional Eigenvectors of the tensor. You can also think of these as the rotation matrix that determines the orientation of the diffusion tensor. tau : float, optional diffusion time. By default the value that makes q=sqrt(b). Returns ------- pdf : (N,) ndarray The diffusion probability at ``r`` after time ``tau``. References ---------- ..
footbibliography:: """ if evals is None: evals = diffusion_evals if evecs is None: evecs = np.eye(3) out_shape = r.shape[: r.ndim - 1] R = np.asarray(evecs) D = dot(dot(R, np.diag(evals)), R.T) Di = np.linalg.inv(D) r = r.reshape(-1, 3) P = np.zeros(len(r)) for i, u in enumerate(r): P[i] = (-dot(dot(u.T, Di), u)) / (4 * tau) pdf = (1 / np.sqrt((4 * np.pi * tau) ** 3 * np.prod(evals))) * np.exp(P) return pdf.reshape(out_shape) @warning_for_keywords() def multi_tensor_pdf(pdf_points, mevals, angles, fractions, *, tau=1 / (4 * np.pi**2)): """Simulate a Multi-Tensor ODF. See :footcite:p:`Cheng2012` for further details. Parameters ---------- pdf_points : (N, 3) ndarray Points to evaluate the PDF. mevals : sequence of 1D arrays, Eigen-values for each tensor. By default, values typical for prolate white matter are used. angles : sequence, Sequence of principal directions for each tensor in polar angles or cartesian unit coordinates. fractions : sequence of floats, Percentages of the fractions for each tensor. tau : float, optional diffusion time. By default the value that makes q=sqrt(b). Returns ------- pdf : (N,) ndarray, Probability density function of the water displacement. References ---------- .. footbibliography:: """ mf = [f / 100.0 for f in fractions] sticks = _check_directions(angles) pdf = np.zeros(len(pdf_points)) mevecs = [] for s in sticks: mevecs += [all_tensor_evecs(s)] for j, f in enumerate(mf): pdf += f * single_tensor_pdf( pdf_points, evals=mevals[j], evecs=mevecs[j], tau=tau ) return pdf @warning_for_keywords() def single_tensor_msd(*, evals=None, tau=1 / (4 * np.pi**2)): """Simulate a Multi-Tensor rtop. See :footcite:p:`Cheng2012` for further details. Parameters ---------- evals : 1D arrays, optional Eigen-values for the tensor. By default, values typical for prolate white matter are used. tau : float, optional diffusion time. By default the value that makes q=sqrt(b). Returns ------- msd : float, Mean square displacement. References ---------- .. footbibliography:: """ if evals is None: evals = diffusion_evals msd = 2 * tau * np.sum(evals) return msd @warning_for_keywords() def multi_tensor_msd(mf, *, mevals=None, tau=1 / (4 * np.pi**2)): """Simulate a Multi-Tensor rtop. See :footcite:p:`Cheng2012` for further details. Parameters ---------- mf : sequence of floats, bounded [0,1] Percentages of the fractions for each tensor. mevals : sequence of 1D arrays, optional Eigen-values for each tensor. By default, values typical for prolate white matter are used. tau : float, optional diffusion time. By default the value that makes q=sqrt(b). Returns ------- msd : float, Mean square displacement. References ---------- .. 
footbibliography:: """ msd = 0 if mevals is None: mevals = [ None, ] * len(mf) for j, f in enumerate(mf): msd += f * single_tensor_msd(evals=mevals[j], tau=tau) return msd dipy-1.11.0/dipy/stats/000077500000000000000000000000001476546756600146745ustar00rootroot00000000000000dipy-1.11.0/dipy/stats/__init__.py000066400000000000000000000000721476546756600170040ustar00rootroot00000000000000# code support tractometric statistical analysis for dipy dipy-1.11.0/dipy/stats/analysis.py000066400000000000000000000247301476546756600170770ustar00rootroot00000000000000import os import numpy as np from scipy.ndimage import map_coordinates from scipy.spatial import cKDTree from scipy.spatial.distance import mahalanobis from dipy.io.utils import save_buan_profiles_hdf5 from dipy.segment.clustering import QuickBundles from dipy.segment.metricspeed import AveragePointwiseEuclideanMetric from dipy.testing.decorators import warning_for_keywords from dipy.tracking.streamline import ( Streamlines, orient_by_streamline, set_number_of_points, values_from_volume, ) def peak_values(bundle, peaks, dt, pname, bname, subject, group_id, ind, dir_name): """Peak_values function finds the generalized fractional anisotropy (gfa) and quantitative anisotropy (qa) values from peaks object (eg: csa) for every point on a streamline used while tracking and saves it in hd5 file. Parameters ---------- bundle : string Name of bundle being analyzed peaks : peaks contains peak directions and values dt : DataFrame DataFrame to be populated pname : string Name of the dti metric bname : string Name of bundle being analyzed. subject : string subject number as a string (e.g. 10001) group_id : integer which group subject belongs to 1 patient and 0 for control ind : integer list ind tells which disk number a point belong. dir_name : string path of output directory """ gfa = peaks.gfa anatomical_measures( bundle, gfa, dt, pname + "_gfa", bname, subject, group_id, ind, dir_name ) qa = peaks.qa[..., 0] anatomical_measures( bundle, qa, dt, pname + "_qa", bname, subject, group_id, ind, dir_name ) def anatomical_measures( bundle, metric, dt, pname, bname, subject, group_id, ind, dir_name ): """Calculates dti measure (eg: FA, MD) per point on streamlines and save it in hd5 file. Parameters ---------- bundle : string Name of bundle being analyzed metric : matrix of float values dti metric e.g. FA, MD dt : DataFrame DataFrame to be populated pname : string Name of the dti metric bname : string Name of bundle being analyzed. subject : string subject number as a string (e.g. 10001) group_id : integer which group subject belongs to 1 for patient and 0 control ind : integer list ind tells which disk number a point belong. dir_name : string path of output directory """ dt["streamline"] = [] dt["disk"] = [] dt["subject"] = [] dt[pname] = [] dt["group"] = [] values = map_coordinates(metric, bundle._data.T, order=1) dt["disk"].extend(ind[list(range(len(values)))] + 1) dt["subject"].extend([subject] * len(values)) dt["group"].extend([group_id] * len(values)) dt[pname].extend(values) for st_i in range(len(bundle)): st = bundle[st_i] dt["streamline"].extend([st_i] * len(st)) file_name = f"{bname}_{pname}" save_buan_profiles_hdf5(os.path.join(dir_name, file_name), dt) def assignment_map(target_bundle, model_bundle, no_disks): """ Calculates assignment maps of the target bundle with reference to model bundle centroids. See :footcite:p:`Chandio2020a` for further details about the method. 
Parameters ---------- target_bundle : streamlines target bundle extracted from subject data in common space model_bundle : streamlines atlas bundle used as reference no_disks : integer Number of disks used for dividing the bundle into disks. Returns ------- indx : ndarray Assignment map of the target bundle streamline point indices to the model bundle centroid points. References ---------- .. footbibliography:: """ mbundle_streamlines = set_number_of_points(model_bundle, nb_points=no_disks) metric = AveragePointwiseEuclideanMetric() qb = QuickBundles(threshold=85.0, metric=metric) clusters = qb.cluster(mbundle_streamlines) centroids = Streamlines(clusters.centroids) _, indx = cKDTree(centroids.get_data(), 1, copy_data=True).query( target_bundle.get_data(), k=1 ) return indx @warning_for_keywords() def gaussian_weights(bundle, *, n_points=100, return_mahalnobis=False, stat=np.mean): """ Calculate weights for each streamline/node in a bundle, based on the Mahalanobis distance from the core of the bundle, at that node (the mean, by default). Parameters ---------- bundle : Streamlines The streamlines to weight. n_points : int, optional The number of points to resample to. *If the `bundle` is an array, this input is ignored*. return_mahalnobis : bool, optional Whether to return the Mahalanobis distance instead of the weights. stat : callable, optional The statistic used to calculate the central tendency of streamlines in each node. Can be one of {`np.mean`, `np.median`} or other functions that have a similar API. Returns ------- w : array of shape (n_streamlines, n_points) Weights for each node in each streamline, calculated as the inverse of the node's Mahalanobis distance relative to the distribution of coordinates at that node position across streamlines.
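
Examples
--------
A minimal usage sketch on a synthetic two-streamline bundle (here both
streamlines are equidistant from the core, so every node of every
streamline receives the same weight, 0.5):

>>> import numpy as np
>>> from dipy.tracking.streamline import Streamlines
>>> xyz = np.stack([np.arange(10.)] * 3, axis=-1)
>>> bundle = Streamlines([xyz + 1, xyz - 1])
>>> w = gaussian_weights(bundle, n_points=10)
>>> bool(np.allclose(w, 0.5))
True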
""" # Resample to same length for each streamline: bundle = set_number_of_points(bundle, nb_points=n_points) # This is the output w = np.zeros((len(bundle), n_points)) # If there's only one fiber here, it gets the entire weighting: if len(bundle) == 1: if return_mahalnobis: return np.array([np.nan]) else: return np.array([1]) for node in range(n_points): # This should come back as a 3D covariance matrix with the spatial # variance covariance of this node across the different streamlines # This is a 3-by-3 array: node_coords = bundle._data[node::n_points] c = np.cov(node_coords.T, ddof=0) # Reorganize as an upper diagonal matrix for expected Mahalanobis # input: c = np.array( [[c[0, 0], c[0, 1], c[0, 2]], [0, c[1, 1], c[1, 2]], [0, 0, c[2, 2]]] ) # Calculate the mean or median of this node as well # delta = node_coords - np.mean(node_coords, 0) m = stat(node_coords, 0) # Weights are the inverse of the Mahalanobis distance for fn in range(len(bundle)): # In the special case where all the streamlines have the exact same # coordinate in this node, the covariance matrix is all zeros, so # we can't calculate the Mahalanobis distance, we will instead give # each streamline an identical weight, equal to the number of # streamlines: if np.allclose(c, 0): w[:, node] = len(bundle) break # Otherwise, go ahead and calculate Mahalanobis for node on # fiber[fn]: w[fn, node] = mahalanobis(node_coords[fn], m, np.linalg.inv(c)) if return_mahalnobis: return w # weighting is inverse to the distance (the further you are, the less you # should be weighted) w = 1 / w # Normalize before returning, so that the weights in each node sum to 1: return w / np.sum(w, 0) @warning_for_keywords() def afq_profile( data, bundle, affine, *, n_points=100, profile_stat=np.average, orient_by=None, weights=None, **weights_kwarg, ): """ Calculates a summarized profile of data for a bundle or tract along its length. Follows the approach outlined in :footcite:p:`Yeatman2012`. Parameters ---------- data : 3D volume The statistic to sample with the streamlines. bundle : StreamLines class instance The collection of streamlines (possibly already resampled into an array for each to have the same length) with which we are resampling. See Note below about orienting the streamlines. affine : array_like (4, 4) The mapping from voxel coordinates to streamline points. The voxel_to_rasmm matrix, typically from a NIFTI file. n_points: int, optional The number of points to sample along the bundle. Default: 100. orient_by: streamline, optional A streamline to use as a standard to orient all of the streamlines in the bundle according to. weights : 1D array or 2D array or callable, optional Weight each streamline (1D) or each node (2D) when calculating the tract-profiles. Must sum to 1 across streamlines (in each node if relevant). If callable, this is a function that calculates weights. profile_stat : callable, optional The statistic used to average the profile across streamlines. If weights is not None, this must take weights as a keyword argument. The default, np.average, is the same as np.mean but takes weights as a keyword argument. weights_kwarg : key-word arguments Additional key-word arguments to pass to the weight-calculating function. Only to be used if weights is a callable. 
Returns ------- ndarray : a 1D array with the profile of `data` along the length of `bundle` Notes ----- Before providing a bundle as input to this function, you will need to make sure that the streamlines in the bundle are all oriented in the same orientation relative to the bundle (use :func:`orient_by_streamline`). References ---------- .. footbibliography:: """ if orient_by is not None: bundle = orient_by_streamline(bundle, orient_by) if affine is None: affine = np.eye(4) if len(bundle) == 0: raise ValueError("The bundle contains no streamlines") # Resample each streamline to the same number of points: fgarray = set_number_of_points(bundle, nb_points=n_points) # Extract the values values = np.array(values_from_volume(data, fgarray, affine)) if weights is not None: if callable(weights): weights = weights(bundle, **weights_kwarg) else: # We check that weights *always sum to 1 across streamlines*: if not np.allclose(np.sum(weights, 0), np.ones(n_points)): raise ValueError( "The sum of weights across streamlines must be equal to 1" ) return profile_stat(values, weights=weights, axis=0) else: return profile_stat(values, axis=0) dipy-1.11.0/dipy/stats/meson.build000066400000000000000000000003121476546756600170320ustar00rootroot00000000000000python_sources = [ '__init__.py', 'analysis.py', 'resampling.py', 'qc.py', 'sketching.py' ] py3.install_sources( python_sources, pure: false, subdir: 'dipy/stats' ) subdir('tests')dipy-1.11.0/dipy/stats/qc.py000066400000000000000000000064151476546756600156570ustar00rootroot00000000000000import numpy as np from dipy.core.geometry import cart_distance from dipy.testing.decorators import warning_for_keywords def find_qspace_neighbors(gtab): """Create a mapping of dwi volume index to its nearest neighbor. An approximate q-space is used (the deltas are not included). Note that the neighborhood is not necessarily bijective. One neighbor is found per dwi volume. Parameters ---------- gtab: dipy.core.gradients.GradientTable Gradient table. Returns ------- neighbors: list of tuple A list of 2-tuples indicating the nearest q-space neighbor of each dwi volume. Examples -------- >>> from dipy.core.gradients import gradient_table >>> import numpy as np >>> gtab = gradient_table( ... np.array([0, 1000, 1000, 2000]), ... bvecs=np.array([ ... [1, 0, 0], ... [1, 0, 0], ... [0.99, 0.0001, 0.0001], ... [1, 0, 0]])) >>> find_qspace_neighbors(gtab) [(1, 2), (2, 1), (3, 1)] """ dwi_neighbors = [] # Only correlate the b>0 images dwi_mask = np.logical_not(gtab.b0s_mask) dwi_indices = np.flatnonzero(dwi_mask) # Get a pseudo-qspace value for b>0s qvecs = np.sqrt(gtab.bvals)[:, np.newaxis] * gtab.bvecs for dwi_index in dwi_indices: qvec = qvecs[dwi_index] # Calculate distance in q-space, accounting for symmetry pos_dist = cart_distance(qvec[np.newaxis, :], qvecs) neg_dist = cart_distance(qvec[np.newaxis, :], -qvecs) distances = np.min(np.column_stack([pos_dist, neg_dist]), axis=1) # Be sure we don't select the image as its own neighbor distances[dwi_index] = np.inf # Or a b=0 distances[gtab.b0s_mask] = np.inf neighbor_index = np.argmin(distances) dwi_neighbors.append((dwi_index, neighbor_index)) return dwi_neighbors @warning_for_keywords() def neighboring_dwi_correlation(dwi_data, gtab, *, mask=None): """Calculate the Neighboring DWI Correlation (NDC) from dMRI data. Using a mask is highly recommended, otherwise the FOV will influence the correlations. According to :footcite:t:`Yeh2019`, an NDC less than 0.4 indicates a low-quality image.
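In short, each b>0 volume is paired with its nearest neighbor in q-space (as computed by :func:`find_qspace_neighbors`), the Pearson correlation between each such pair of (optionally masked and flattened) volumes is computed, and the NDC is the mean of those correlations.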
Parameters ---------- dwi_data : 4D ndarray DWI data on which to calculate the NDC gtab : dipy.core.gradients.GradientTable Gradient table. mask : 3D ndarray, optional Mask of voxels to include in the NDC calculation Returns ------- ndc : float The neighboring DWI correlation References ---------- .. footbibliography:: """ neighbor_indices = find_qspace_neighbors(gtab) neighbor_correlations = [] if mask is not None: binary_mask = mask > 0 for from_index, to_index in neighbor_indices: # Flatten the dwi images if mask is not None: flat_from_image = dwi_data[..., from_index][binary_mask] flat_to_image = dwi_data[..., to_index][binary_mask] else: flat_from_image = dwi_data[..., from_index].flatten() flat_to_image = dwi_data[..., to_index].flatten() neighbor_correlations.append(np.corrcoef(flat_from_image, flat_to_image)[0, 1]) return np.mean(neighbor_correlations) dipy-1.11.0/dipy/stats/resampling.py000066400000000000000000000235541476546756600174160ustar00rootroot00000000000000#!/usr/bin/python import numpy as np import scipy as sp from dipy.testing.decorators import warning_for_keywords def bs_se(bs_pdf): """Calculate the bootstrap standard error estimate of a statistic.""" N = len(bs_pdf) return np.std(bs_pdf) * np.sqrt(N / (N - 1)) def bootstrap(x, *, statistic=bs_se, B=1000, alpha=0.95, rng=None): """Bootstrap resampling to accurately estimate the standard error and confidence interval of a desired statistic of a probability distribution function (pdf). See :footcite:p:`Efron1979` for further details about the method. Parameters ---------- x : ndarray (N, 1) Observable sample to resample. N should be reasonably large. statistic : method, optional Method to calculate the desired statistic. (Default: calculate bootstrap standard error) B : integer, optional Total number of bootstrap resamples in bootstrap pdf. (Default: 1000) alpha : float, optional Confidence level for the confidence interval of the statistic. (Default: 0.95) rng : numpy.random.Generator, optional Random number generator to use for sampling. If None, the generator is initialized using the default BitGenerator. Returns ------- bs_pdf : ndarray (B, 1) Bootstrap probability distribution function of the statistic. se : float Standard error of the statistic. ci : ndarray (2, 1) Confidence interval of the statistic. See Also -------- numpy.std, numpy.random.random Notes ----- Bootstrap resampling is non-parametric. It is quite powerful in determining the standard error and the confidence interval of a sample distribution. The key characteristics of bootstrap are: 1) uniform weighting among all samples (1/n) 2) resampling with replacement In general, the sample size should be large to ensure accuracy of the estimates. The number of bootstrap resamples should be large as well, as that will also influence the accuracy of the estimate. References ---------- .. footbibliography:: """ N = len(x) bs_pdf = np.empty((B,)) rng = rng or np.random.default_rng() for ii in range(0, B): # resample with replacement rand_index = np.int16(np.round(rng.random(N) * (N - 1))) bs_pdf[ii] = statistic(x[rand_index]) return bs_pdf, bs_se(bs_pdf), abc(x, statistic=statistic, alpha=alpha) def abc(x, *, statistic=bs_se, alpha=0.05, eps=1e-5): """Calculate the bootstrap confidence interval by approximating the BCa. See :footcite:p:`DiCiccio1996` for further details about the method. Parameters ---------- x : np.ndarray Observed data (e.g.
chosen gold standard estimate used for bootstrap) statistic : method Method to calculate the desired statistic given x and probability proportions (flat probability densities vector) alpha : float (0, 1) Desired confidence interval initial endpoint (Default: 0.05) eps : float, optional Specifies step size in calculating numerical derivative T' and T''. Default: 1e-5 See Also -------- __tt, __tt_dot, __tt_dot_dot, __calc_z0 Notes ----- Unlike the BCa method of calculating the bootstrap confidence interval, the ABC method is computationally less demanding (about 3% computational power needed) and is fairly accurate (sometimes outperforming BCa!). It does not require any bootstrap resampling and instead uses numerical derivatives via Taylor series to approximate the BCa calculation. However, the ABC method requires the statistic to be smooth and follow a multinomial distribution. References ---------- .. footbibliography:: """ # define base variables -- n, p_0, sigma_hat, delta_hat n = len(x) p_0 = np.ones(x.shape) / n sigma_hat = np.zeros(x.shape) delta_hat = np.zeros(x.shape) for i in range(0, n): sigma_hat[i] = __tt_dot(i, x, p_0, statistic, eps) ** 2 delta_hat[i] = __tt_dot(i, x, p_0, statistic, eps) sigma_hat = (sigma_hat / n**2) ** 0.5 # estimate the bias (z_0) and the acceleration (a_hat) a_num = np.zeros(x.shape) a_dem = np.zeros(x.shape) for i in range(0, n): a_num[i] = __tt_dot(i, x, p_0, statistic, eps) ** 3 a_dem[i] = __tt_dot(i, x, p_0, statistic, eps) ** 2 a_hat = 1 / 6 * a_num / a_dem**1.5 z_0 = __calc_z0(x, p_0, statistic, eps, a_hat, sigma_hat) # define helper variables -- w and ell w = z_0 + __calc_z_alpha(1 - alpha) ell = w / (1 - a_hat * w) ** 2 return __tt(x, p_0 + ell * delta_hat / sigma_hat, statistic=statistic) def __calc_z_alpha(alpha): """Calculate inverse of cdf of standard normal (quantile function).""" return 2**0.5 * sp.special.erfinv(2 * alpha - 1) def __calc_z0(x, p_0, statistic, eps, a_hat, sigma_hat): """Calculate the bias z_0 for the abc method. See Also -------- abc, __tt, __tt_dot, __tt_dot_dot """ n = len(x) b_hat = np.ones(x.shape) tt_dot = np.ones(x.shape) for i in range(0, n): b_hat[i] = __tt_dot_dot(i, x, p_0, statistic, eps) tt_dot[i] = __tt_dot(i, x, p_0, statistic, eps) b_hat = b_hat / (2 * n**2) c_q_hat = ( __tt( x, ((1 - eps) * p_0 + eps * tt_dot / (n**2 * sigma_hat)), statistic=statistic, ) + __tt( x, ((1 - eps) * p_0 - eps * tt_dot / (n**2 * sigma_hat)), statistic=statistic, ) - 2 * __tt(x, p_0, statistic=statistic) ) / eps**2 return a_hat - (b_hat / sigma_hat - c_q_hat) @warning_for_keywords() def __tt(x, p_0, *, statistic=bs_se): """Calculate desired statistic from observable data and a given proportional weighting. Parameters ---------- x : np.ndarray Observable data (e.g. from gold standard). p_0 : np.ndarray Proportional weighting vector (Default: uniform weighting 1/n) Returns ------- theta_hat : float Desired statistic of the observable data.
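
Examples
--------
A minimal sketch (illustrative only; this is a private helper used by
`abc`). With uniform weights 1/n, the reweighted sample ``x / p_0`` is
simply ``n * x``:

>>> import numpy as np
>>> x = np.arange(4.)
>>> p_0 = np.ones(4) / 4
>>> print(__tt(x, p_0, statistic=np.mean))
6.0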
See Also -------- abc, __tt_dot, __tt_dot_dot """ return statistic(x / p_0) def __tt_dot(i, x, p_0, statistic, eps): """First numerical derivative of __tt.""" e = np.zeros(x.shape) e[i] = 1 return ( __tt(x, ((1 - eps) * p_0 + eps * e[i]), statistic=statistic) - __tt(x, p_0, statistic=statistic) ) / eps def __tt_dot_dot(i, x, p_0, statistic, eps): """Second numerical derivative of __tt.""" e = np.zeros(x.shape) e[i] = 1 return ( __tt_dot(i, x, p_0, statistic, eps) / eps + ( __tt(x, ((1 - eps) * p_0 - eps * e[i]), statistic=statistic) - __tt(x, p_0, statistic=statistic) ) / eps**2 ) def jackknife(pdf, *, statistic=np.std, M=None, rng=None): """Jackknife resampling to quickly estimate the bias and standard error of a desired statistic in a probability distribution function (pdf). See :footcite:p:`Efron1979` for further details about the method. Parameters ---------- pdf : ndarray (N, 1) Probability distribution function to resample. N should be reasonably large. statistic : method, optional Method to calculate the desired statistic. (Default: calculate standard deviation) M : integer (M < N) Total number of samples in jackknife pdf. (Default: M == N) rng : numpy.random.Generator, optional Random number generator to use for sampling. If None, the generator is initialized using the default BitGenerator. Returns ------- jk_pdf : ndarray (M, 1) Jackknife probability distribution function of the statistic. bias : float Bias of the jackknife pdf of the statistic. se : float Standard error of the statistic. See Also -------- numpy.std, numpy.mean, numpy.random.random Notes ----- Jackknife resampling, like bootstrap resampling, is non-parametric. However, it requires a large distribution to be accurate and in some ways can be considered deterministic (if one removes the same set of samples, then one will get the same estimates of the bias and variance). In the context of this implementation, the sample size should be at least larger than the asymptotic convergence of the statistic (ACstat); preferably, larger than ACstat + np.greater(ACbias, ACvar) The clear benefit of using jackknife is its ability to estimate the bias of the statistic. The most powerful application of this is estimating the bias of a bootstrap-estimated standard error. In fact, one could "bootstrap the bootstrap" (nested bootstrap) of the estimated standard error, but the inaccuracy of the bootstrap to characterize the true mean would incur a poor estimate of the bias (recall: bias = mean[sample_est] - mean[true population]) References ---------- ..
footbibliography:: """ N = len(pdf) pdf_mask = np.ones((N,), dtype="int16") # keeps track of all n - 1 indexes mask_index = np.copy(pdf_mask) if M is None: M = N M = np.minimum(M, N - 1) jk_pdf = np.empty((M,)) rng = rng or np.random.default_rng() for ii in range(0, M): rand_index = np.round(rng.random() * (N - 1)) # choose a unique random sample to remove while pdf_mask[int(rand_index)] == 0: rand_index = np.round(rng.random() * (N - 1)) # set mask to zero for chosen random index so not to choose again pdf_mask[int(rand_index)] = 0 mask_index[int(rand_index)] = 0 jk_pdf[ii] = statistic(pdf[mask_index > 0]) # compute n-1 statistic mask_index[int(rand_index)] = 1 return ( jk_pdf, (N - 1) * (np.mean(jk_pdf) - statistic(pdf)), np.sqrt(N - 1) * np.std(jk_pdf), ) def residual_bootstrap(data): pass def repetition_bootstrap(data): pass dipy-1.11.0/dipy/stats/sketching.py000066400000000000000000000042151476546756600172270ustar00rootroot00000000000000import os import tempfile import numpy as np def count_sketch(matrixa_name, matrixa_dtype, matrixa_shape, sketch_rows, tmp_dir): """Count Sketching algorithm to reduce the size of the matrix. Parameters ---------- matrixa_name : str The name of the memmap file containing the matrix A. matrixa_dtype : dtype The dtype of the matrix A. matrixa_shape : tuple The shape of the matrix A. sketch_rows : int The number of rows in the sketch matrix. tmp_dir : str The directory to save the temporary files. Returns ------- matrixc_file.name : str The name of the memmap file containing the sketch matrix. matrixc.dtype : dtype The dtype of the sketch matrix. matrixc.shape : tuple The shape of the sketch matrix. """ matrixa = np.squeeze( np.memmap(matrixa_name, dtype=matrixa_dtype, mode="r+", shape=matrixa_shape) ).reshape(np.prod(matrixa_shape[:-1]), matrixa_shape[-1]) with tempfile.NamedTemporaryFile( delete=False, dir=tmp_dir, suffix="matrix_t" ) as matrixt_file: matrixt = np.memmap( matrixt_file.name, dtype=matrixa_dtype, mode="w+", shape=matrixa.shape ) hashed_indices = np.random.choice(sketch_rows, matrixa.shape[0], replace=True) rand_signs = np.random.choice(2, matrixa.shape[0], replace=True) * 2 - 1 for i in range(0, matrixa.shape[0], matrixa.shape[0] // 20): end_index = min(i + matrixa.shape[0] // 20, matrixa.shape[0]) matrixt[i:end_index, :] = ( matrixa[i:end_index, :] * rand_signs[i:end_index, np.newaxis] ) with tempfile.NamedTemporaryFile( delete=False, dir=tmp_dir, suffix="matrix_C" ) as matrixc_file: matrixc = np.memmap( matrixc_file.name, dtype=matrixa_dtype, mode="w+", shape=(sketch_rows, matrixa.shape[1]), ) np.add.at(matrixc, hashed_indices, matrixt) matrixc.flush() matrixt.flush() del matrixt os.unlink(matrixt_file.name) return matrixc_file.name, matrixc.dtype, matrixc.shape dipy-1.11.0/dipy/stats/tests/000077500000000000000000000000001476546756600160365ustar00rootroot00000000000000dipy-1.11.0/dipy/stats/tests/__init__.py000066400000000000000000000000001476546756600201350ustar00rootroot00000000000000dipy-1.11.0/dipy/stats/tests/meson.build000066400000000000000000000003251476546756600202000ustar00rootroot00000000000000python_sources = [ '__init__.py', 'test_analysis.py', 'test_resampling.py', 'test_qc.py', 'test_sketching.py' ] py3.install_sources( python_sources, pure: false, subdir: 'dipy/stats/tests' ) dipy-1.11.0/dipy/stats/tests/test_analysis.py000066400000000000000000000110451476546756600212730ustar00rootroot00000000000000import numpy as np import numpy.testing as npt from dipy.stats.analysis import afq_profile, gaussian_weights from 
dipy.tracking.streamline import Streamlines def test_gaussian_weights(): # Some bogus x,y,z coordinates x = np.arange(10).astype(float) y = np.arange(10).astype(float) z = np.arange(10).astype(float) # Create a distribution for which we can predict the weights we would # expect to get: bundle = Streamlines([np.array([x, y, z]).T + 1, np.array([x, y, z]).T - 1]) # In this case, all nodes receives an equal weight of 0.5: w = gaussian_weights(bundle, n_points=10) npt.assert_almost_equal(w, np.ones((len(bundle), 10)) * 0.5) # Test when asked to return Mahalanobis, instead of weights w = gaussian_weights(bundle, n_points=10, return_mahalnobis=True) npt.assert_almost_equal(w, np.ones((len(bundle), 10))) # Here, some nodes are twice as far from the mean as others bundle = Streamlines( [ np.array([x, y, z]).T + 2, np.array([x, y, z]).T + 1, np.array([x, y, z]).T - 1, np.array([x, y, z]).T - 2, ] ) w = gaussian_weights(bundle, n_points=10) # And their weights should be halved: npt.assert_almost_equal(w[0], w[1] / 2) npt.assert_almost_equal(w[-1], w[2] / 2) # Test the situation where all the streamlines have an identical node: arr1 = np.array([x, y, z]).T + 2 arr2 = np.array([x, y, z]).T + 1 arr3 = np.array([x, y, z]).T - 1 arr4 = np.array([x, y, z]).T - 2 arr1[0] = np.array([1, 1, 1]) arr2[0] = np.array([1, 1, 1]) arr3[0] = np.array([1, 1, 1]) arr4[0] = np.array([1, 1, 1]) bundle_w_id_node = Streamlines([arr1, arr2, arr3, arr4]) w = gaussian_weights(Streamlines(bundle_w_id_node), n_points=10) # For this case, the result should be a weight of 1/n_streamlines in that # node for all streamlines: npt.assert_equal( w[:, 0], np.ones(len(bundle_w_id_node)) * 1 / len(bundle_w_id_node) ) # Test the situation where all the streamlines are copies of each other: bundle_w_copies = Streamlines([bundle[0], bundle[0], bundle[0], bundle[0]]) w = gaussian_weights(bundle_w_copies, n_points=10) # In this case, the entire array should be equal to 1/n_streamlines: npt.assert_equal(w, np.ones(w.shape) * 1 / len(bundle_w_id_node)) # Test with bundle of length 1: bundle_len_1 = Streamlines([bundle[0]]) w = gaussian_weights(bundle_len_1, n_points=10) npt.assert_equal(w, np.ones(w.shape)) bundle_len_1 = Streamlines([bundle[0]]) w = gaussian_weights(bundle_len_1, n_points=10, return_mahalnobis=True) npt.assert_equal(w, np.ones(w.shape) * np.nan) def test_afq_profile(): data = np.ones((10, 10, 10)) bundle = Streamlines() bundle.extend(np.array([[[0, 0.0, 0], [1, 0.0, 0.0], [2, 0.0, 0.0]]])) bundle.extend(np.array([[[0, 0.0, 0.0], [1, 0.0, 0], [2, 0, 0.0]]])) profile = afq_profile(data, bundle, np.eye(4)) npt.assert_equal(profile, np.ones(100)) profile = afq_profile(data, bundle, np.eye(4), n_points=10, weights=None) npt.assert_equal(profile, np.ones(10)) profile = afq_profile( data, bundle, np.eye(4), weights=gaussian_weights, stat=np.median ) npt.assert_equal(profile, np.ones(100)) profile = afq_profile( data, bundle, np.eye(4), orient_by=bundle[0], weights=gaussian_weights, stat=np.median, ) npt.assert_equal(profile, np.ones(100)) profile = afq_profile(data, bundle, np.eye(4), n_points=10, weights=None) npt.assert_equal(profile, np.ones(10)) profile = afq_profile( data, bundle, np.eye(4), n_points=10, weights=np.ones((2, 10)) * 0.5 ) npt.assert_equal(profile, np.ones(10)) profile = afq_profile(data, bundle, np.eye(4), n_points=10, stat=np.median) npt.assert_equal(profile, np.ones(10)) # Disallow setting weights that don't sum to 1 across fibers/nodes: npt.assert_raises( ValueError, afq_profile, data, bundle, np.eye(4), 
n_points=10, weights=np.ones((2, 10)) * 0.6, ) # Test using an affine: affine = np.eye(4) affine[:, 3] = [-1, 100, -20, 1] # Transform the streamlines: bundle._data = bundle._data + affine[:3, 3] profile = afq_profile(data, bundle, affine, n_points=10, weights=None) npt.assert_equal(profile, np.ones(10)) # Test for error-handling: empty_bundle = Streamlines([]) npt.assert_raises(ValueError, afq_profile, data, empty_bundle, np.eye(4)) dipy-1.11.0/dipy/stats/tests/test_qc.py000066400000000000000000000105461476546756600200600ustar00rootroot00000000000000import numpy as np from dipy.core.geometry import normalized_vector from dipy.core.gradients import gradient_table from dipy.stats.qc import neighboring_dwi_correlation rng = np.random.default_rng() def create_test_data(test_r, cube_size, mask_size, num_dwi_vols, num_b0s): """Create testing data with a known neighbor structure and a known NDC. The b>0 images have 2 images per shell, separated by a very small angle, guaranteeing they will be neighbors. The within-mask data is filled with random data with correlation value of approximately ``test_r``. Parameters ---------- test_r: float The approximate NDC that the simulated data should have cube_size: int The simulated data will be a cube with this many voxels per dim mask_size: int A cubic "brain" is this size per side and filled with data. Must be less than ``cube_size`` num_dwi_vols: int The number of b>0 images to simulate. Must be even to ensure we can make known neighbors num_b0s: int The number of b=0 images to prepend to the b>0 images Returns ------- real_r: float The ground-truth neighbor correlation of the simulated data dwi_data: np.ndarray A 4D array containing simulated data mask_data: np.ndarray A 3D array indicating which voxels in ``dwi_data`` contain brain data gtab: dipy.core.gradients.GradientTable Gradient table with known neighbors """ if not num_dwi_vols % 2 == 0: raise Exception("Needs an even number of dwi vols to ensure known neighbors") # Create a volume mask test_mask = np.zeros((cube_size, cube_size, cube_size)) test_mask[:mask_size, :mask_size, :mask_size] = 1 n_voxels_in_mask = mask_size**3 # 4D Data array dwi_data = np.zeros((cube_size, cube_size, cube_size, num_b0s + num_dwi_vols)) # Create a sampling scheme where we know what volumes will be neighbors n_known = num_dwi_vols // 2 dwi_bvals = ( np.column_stack([np.arange(n_known) + 1] * 2).flatten(order="C").tolist() ) bvals = np.array([0] * num_b0s + dwi_bvals) * 1000 # The bvecs will be a straight line with a minor perturbance every other ref_vec = np.array([1.0, 0.0, 0.0]) nbr_vec = normalized_vector(ref_vec + 0.00001) bvecs = np.vstack([ref_vec] * num_b0s + [np.vstack([ref_vec, nbr_vec])] * n_known) cor = np.ones((2, 2)) * test_r np.fill_diagonal(cor, 1) L = np.linalg.cholesky(cor) known_correlations = [] for starting_vol in np.arange(n_known) * 2 + num_b0s: uncorrelated = rng.standard_normal((2, n_voxels_in_mask)) correlated = np.dot(L, uncorrelated) dwi_data[:, :, :, starting_vol][test_mask > 0] = correlated[0] dwi_data[:, :, :, starting_vol + 1][test_mask > 0] = correlated[1] known_correlations += [np.corrcoef(correlated)[0, 1]] * 2 gtab = gradient_table(bvals, bvecs=bvecs, b0_threshold=50) return np.mean(known_correlations), dwi_data, test_mask, gtab def test_neighboring_dwi_correlation(): """Test NDC under various conditions.""" # Test data with b=0s, low correlation, using mask real_r, dwi_data, mask, gtab = create_test_data( test_r=0.3, cube_size=10, mask_size=6, num_dwi_vols=10, num_b0s=2 ) 
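    # Added sanity check on the simulated data (shapes follow from the
    # create_test_data arguments above: cube_size=10, mask_size=6,
    # num_dwi_vols=10, num_b0s=2):
    assert dwi_data.shape == (10, 10, 10, 12)
    assert int(mask.sum()) == 6**3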
estimated_ndc = neighboring_dwi_correlation(dwi_data, gtab, mask=mask) assert np.allclose(real_r, estimated_ndc) maskless_ndc = neighboring_dwi_correlation(dwi_data, gtab) assert maskless_ndc != real_r # Try with no b=0s real_r, dwi_data, mask, gtab = create_test_data( test_r=0.3, cube_size=10, mask_size=6, num_dwi_vols=10, num_b0s=0 ) estimated_ndc = neighboring_dwi_correlation(dwi_data, gtab, mask=mask) assert np.allclose(real_r, estimated_ndc) # Try with realistic correlation value real_r, dwi_data, mask, gtab = create_test_data( test_r=0.8, cube_size=10, mask_size=6, num_dwi_vols=10, num_b0s=2 ) estimated_ndc = neighboring_dwi_correlation(dwi_data, gtab, mask=mask) assert np.allclose(real_r, estimated_ndc) # Try with a bigger volume, lower correlation real_r, dwi_data, mask, gtab = create_test_data( test_r=0.5, cube_size=100, mask_size=49, num_dwi_vols=160, num_b0s=2 ) estimated_ndc = neighboring_dwi_correlation(dwi_data, gtab, mask=mask) assert np.allclose(real_r, estimated_ndc) dipy-1.11.0/dipy/stats/tests/test_resampling.py000066400000000000000000000021201476546756600216030ustar00rootroot00000000000000import numpy as np import pytest from dipy.stats.resampling import bootstrap, jackknife from dipy.testing.decorators import set_random_number_generator @set_random_number_generator(1741332) def test_bootstrap(rng): # Generate a sample data x = rng.standard_normal(size=100) # Test bootstrap function with default parameters bs_pdf, se, ci = bootstrap(x, rng=rng) assert len(bs_pdf) == 1000 assert se > 0 # Test bootstrap function with custom parameters bs_pdf, se, ci = bootstrap(x, statistic=np.mean, B=500, alpha=0.90) assert len(bs_pdf) == 500 assert se > 0 @set_random_number_generator(1741333) def test_jackknife(rng): # Generate a sample data pdf = rng.standard_normal(size=100) # Test jackknife function with default parameters jk_pdf, bias, se = jackknife(pdf, rng=rng) assert len(jk_pdf) == 99 assert bias == pytest.approx(0, abs=1e-1) assert se > 0 # Test jackknife function with custom parameters jk_pdf, bias, se = jackknife(pdf, statistic=np.median, M=50) assert len(jk_pdf) == 50 assert se > 0 dipy-1.11.0/dipy/stats/tests/test_sketching.py000066400000000000000000000113471476546756600214340ustar00rootroot00000000000000import os import tempfile import numpy as np import pytest from dipy.stats.sketching import count_sketch @pytest.fixture def setup_matrices(): """Fixture to set up matrix A as a memmap file and return its details.""" # Matrix A details matrixa_dtype = np.float64 matrixa_shape = (100, 50) matrixa = np.random.rand(*matrixa_shape) # Create a temporary file to store matrix A with tempfile.NamedTemporaryFile(delete=False, suffix=".mmap") as matrixa_file: np.memmap( matrixa_file.name, dtype=matrixa_dtype, mode="w+", shape=matrixa_shape )[:] = matrixa matrixa_file_name = matrixa_file.name yield matrixa_file_name, matrixa_dtype, matrixa_shape # Cleanup: Remove the matrix file after the test os.unlink(matrixa_file_name) def test_count_sketch_basic_functionality(setup_matrices): """Test the basic functionality of the count_sketch function.""" matrixa_name, matrixa_dtype, matrixa_shape = setup_matrices sketch_rows = 10 # Create a temporary directory for the output files with tempfile.TemporaryDirectory() as tmp_dir: matrixc_file_name, matrixc_dtype, matrixc_shape = count_sketch( matrixa_name, matrixa_dtype, matrixa_shape, sketch_rows, tmp_dir ) # Load the resulting sketch matrix to check properties matrixc = np.memmap( matrixc_file_name, dtype=matrixc_dtype, mode="r+", 
shape=matrixc_shape ) # Verify the output sketch matrix shape and dtype assert matrixc.shape == (sketch_rows, matrixa_shape[1]), "Output shape mismatch" assert matrixc.dtype == matrixa_dtype, "Output dtype mismatch" # Clean up del matrixc os.unlink(matrixc_file_name) def test_count_sketch_randomness(setup_matrices): """Test if count_sketch produces different results with different random seeds.""" matrixa_name, matrixa_dtype, matrixa_shape = setup_matrices sketch_rows = 10 with tempfile.TemporaryDirectory() as tmp_dir: np.random.seed(42) matrixc_file_name_1, _, _ = count_sketch( matrixa_name, matrixa_dtype, matrixa_shape, sketch_rows, tmp_dir ) matrixc_1 = np.memmap(matrixc_file_name_1, dtype=matrixa_dtype, mode="r+") np.random.seed(43) matrixc_file_name_2, _, _ = count_sketch( matrixa_name, matrixa_dtype, matrixa_shape, sketch_rows, tmp_dir ) matrixc_2 = np.memmap(matrixc_file_name_2, dtype=matrixa_dtype, mode="r+") # Verify that the two resulting sketches are different assert not np.allclose( matrixc_1, matrixc_2 ), "Sketches should differ with different random seeds" # Clean up del matrixc_1 del matrixc_2 os.unlink(matrixc_file_name_1) os.unlink(matrixc_file_name_2) def test_count_sketch_preserves_shape_and_dtype(setup_matrices): """Test if the count_sketch preserves the shape and dtype of the input matrix A.""" matrixa_name, matrixa_dtype, matrixa_shape = setup_matrices sketch_rows = 20 with tempfile.TemporaryDirectory() as tmp_dir: matrixc_file_name, matrixc_dtype, matrixc_shape = count_sketch( matrixa_name, matrixa_dtype, matrixa_shape, sketch_rows, tmp_dir ) # Check the resulting matrix properties assert matrixc_shape == ( sketch_rows, matrixa_shape[1], ), "Sketch matrix shape is incorrect" assert matrixc_dtype == matrixa_dtype, "Sketch matrix dtype is incorrect" # Clean up os.unlink(matrixc_file_name) def test_count_sketch_with_large_matrix(): """Test count_sketch function with a large matrix to check performance.""" matrixa_dtype = np.float64 matrixa_shape = (1000, 500) matrixa = np.random.rand(*matrixa_shape) with tempfile.NamedTemporaryFile(delete=False, suffix=".mmap") as matrixa_file: np.memmap( matrixa_file.name, dtype=matrixa_dtype, mode="w+", shape=matrixa_shape )[:] = matrixa matrixa_name = matrixa_file.name sketch_rows = 50 with tempfile.TemporaryDirectory() as tmp_dir: matrixc_file_name, matrixc_dtype, matrixc_shape = count_sketch( matrixa_name, matrixa_dtype, matrixa_shape, sketch_rows, tmp_dir ) # Load the resulting sketch matrix matrixc = np.memmap( matrixc_file_name, dtype=matrixc_dtype, mode="r+", shape=matrixc_shape ) # Verify that the sketch was successfully computed assert matrixc.shape == ( sketch_rows, matrixa_shape[1], ), "Output shape mismatch for large matrix" # Clean up del matrixc os.unlink(matrixa_name) os.unlink(matrixc_file_name) dipy-1.11.0/dipy/testing/000077500000000000000000000000001476546756600152135ustar00rootroot00000000000000dipy-1.11.0/dipy/testing/__init__.py000066400000000000000000000106561476546756600173340ustar00rootroot00000000000000"""Utilities for testing.""" from functools import partial import operator from os.path import abspath, dirname, join as pjoin import warnings import numpy.testing as npt from numpy.testing import assert_array_equal # set path to example data IO_DATA_PATH = abspath(pjoin(dirname(__file__), "..", "io", "tests", "data")) def assert_operator(value1, value2, msg="", op=operator.eq): """Check Boolean statement.""" try: if op == operator.is_: value1 = bool(value1) assert op(value1, value2) except AssertionError as e: 
        raise AssertionError(msg.format(str(value2), str(value1))) from e


assert_greater_equal = partial(assert_operator, op=operator.ge, msg="{0} >= {1}")
assert_greater = partial(assert_operator, op=operator.gt, msg="{0} > {1}")
assert_less_equal = partial(assert_operator, op=operator.le, msg="{0} <= {1}")
assert_less = partial(assert_operator, op=operator.lt, msg="{0} < {1}")
assert_true = partial(
    assert_operator, value2=True, op=operator.is_, msg="False is not true"
)
assert_false = partial(
    assert_operator, value2=False, op=operator.is_, msg="True is not false"
)
assert_not_equal = partial(assert_operator, op=operator.ne)


def assert_arrays_equal(arrays1, arrays2):
    for arr1, arr2 in zip(arrays1, arrays2):
        assert_array_equal(arr1, arr2)


class clear_and_catch_warnings(warnings.catch_warnings):
    """Context manager that resets warning registry for catching warnings.

    Warnings can be slippery, because, whenever a warning is triggered,
    Python adds a ``__warningregistry__`` member to the *calling* module.
    This makes it impossible to retrigger the warning in this module,
    whatever you put in the warnings filters.

    This context manager accepts a sequence of `modules` as a keyword
    argument to its constructor and:

    * stores and removes any ``__warningregistry__`` entries in given
      `modules` on entry;
    * resets ``__warningregistry__`` to its previous state on exit.

    This makes it possible to trigger any warning afresh inside the context
    manager without disturbing the state of warnings outside.

    For compatibility with Python 3.0, please consider all arguments to be
    keyword-only.

    Parameters
    ----------
    record : bool, optional
        Specifies whether warnings should be captured by a custom
        implementation of ``warnings.showwarning()`` and be appended to a
        list returned by the context manager. Otherwise None is returned by
        the context manager. The objects appended to the list are arguments
        whose attributes mirror the arguments to ``showwarning()``.

        NOTE: nibabel difference from numpy: default is True

    modules : sequence, optional
        Sequence of modules for which to reset warnings registry on entry
        and restore on exit

    Examples
    --------
    >>> import warnings
    >>> import numpy as np
    >>> with clear_and_catch_warnings(modules=[np.lib.scimath]):
    ...     warnings.simplefilter('always')
    ...     # do something that raises a warning in np.lib.scimath
    ...     _ = np.arccos(90)

    Notes
    -----
    This class is copied (with minor modifications) from Nibabel.
    https://github.com/nipy/nibabel. See COPYING file distributed along with
    the Nibabel package for the copyright and license terms.
""" class_modules = () def __init__(self, record=True, modules=()): self.modules = set(modules).union(self.class_modules) self._warnreg_copies = {} super(clear_and_catch_warnings, self).__init__(record=record) def __enter__(self): for mod in self.modules: if hasattr(mod, "__warningregistry__"): mod_reg = mod.__warningregistry__ self._warnreg_copies[mod] = mod_reg.copy() mod_reg.clear() return super(clear_and_catch_warnings, self).__enter__() def __exit__(self, *exc_info): super(clear_and_catch_warnings, self).__exit__(*exc_info) for mod in self.modules: if hasattr(mod, "__warningregistry__"): mod.__warningregistry__.clear() if mod in self._warnreg_copies: mod.__warningregistry__.update(self._warnreg_copies[mod]) def check_for_warnings(warn_printed, w_msg): selected_w = [w for w in warn_printed if issubclass(w.category, UserWarning)] assert len(selected_w) >= 1 msg = [str(m.message) for m in selected_w] npt.assert_equal(w_msg in msg, True) dipy-1.11.0/dipy/testing/decorators.py000066400000000000000000000166221476546756600177410ustar00rootroot00000000000000""" Decorators for dipy tests """ from functools import wraps import inspect from inspect import Parameter, signature import os import platform import re import warnings import numpy as np from packaging import version import dipy SKIP_RE = re.compile(r"(\s*>>>.*?)(\s*)#\s*skip\s+if\s+(.*)$") def doctest_skip_parser(func): """Decorator replaces custom skip test markup in doctests. Say a function has a docstring:: >>> something # skip if not HAVE_AMODULE >>> something + else >>> something # skip if HAVE_BMODULE This decorator will evaluate the expression after ``skip if``. If this evaluates to True, then the comment is replaced by ``# doctest: +SKIP``. If False, then the comment is just removed. The expression is evaluated in the ``globals`` scope of `func`. For example, if the module global ``HAVE_AMODULE`` is False, and module global ``HAVE_BMODULE`` is False, the returned function will have docstring:: >>> something # doctest: +SKIP >>> something + else >>> something """ lines = func.__doc__.split("\n") new_lines = [] for line in lines: match = SKIP_RE.match(line) if match is None: new_lines.append(line) continue code, space, expr = match.groups() if eval(expr, func.__globals__): code = code + space + "# doctest: +SKIP" new_lines.append(code) func.__doc__ = "\n".join(new_lines) return func ### # In some cases (e.g., on Travis), we want to use a virtual frame-buffer for # testing. The following decorator runs the tests under xvfb (mediated by # xvfbwrapper) conditioned on an environment variable (that we set in # .travis.yml for these cases): use_xvfb = os.environ.get("TEST_WITH_XVFB", False) is_windows = platform.system().lower() == "windows" is_macOS = platform.system().lower() == "darwin" is_linux = platform.system().lower() == "linux" def xvfb_it(my_test): """Run a test with xvfbwrapper.""" # When we use verbose testing we want the name: fname = my_test.__name__ def test_with_xvfb(*args, **kwargs): if use_xvfb: from xvfbwrapper import Xvfb display = Xvfb(width=1920, height=1080) display.start() my_test(*args, **kwargs) if use_xvfb: display.stop() # Plant it back in and return the new function: test_with_xvfb.__name__ = fname return test_with_xvfb if not is_windows else my_test def set_random_number_generator(seed_v=1234): """Decorator to use a fixed value for the random generator seed. This will make the tests that use random functions reproducible. 
""" def _set_random_number_generator(func): def _set_random_number_generator_wrapper(pytestconfig, *args, **kwargs): rng = np.random.default_rng(seed_v) kwargs["rng"] = rng signature = inspect.signature(func) if pytestconfig and "pytestconfig" in signature.parameters.keys(): output = func(pytestconfig, *args, **kwargs) else: output = func(*args, **kwargs) return output return _set_random_number_generator_wrapper return _set_random_number_generator def warning_for_keywords(from_version="1.10.0", until_version="2.0.0"): """ Decorator to warn about keyword arguments passed as positional arguments and handle version-based deprecation of functions. Parameters ---------- from_version : str, optional The version from which the warning should start. until_version : str, optional The version until which the warning is applicable. Returns ------- function The wrapped function that will issue a warning based on the version. """ def decorator(func): @wraps(func) def wrapper(*args, **kwargs): current_version = dipy.__version__ parsed_version = version.parse(current_version) current_version = parsed_version.base_version def convert_positional_to_keyword(func, args, kwargs): """ Converts excess positional arguments to keyword arguments. Parameters ---------- func : function The original function to be called. args : tuple The positional arguments passed to the function. kwargs : dict The keyword arguments passed to the function. Returns ------- result The result of the function call with corrected arguments. """ sig = signature(func) params = sig.parameters max_positional_args = sum( 1 for param in params.values() if param.kind in (Parameter.POSITIONAL_ONLY, Parameter.POSITIONAL_OR_KEYWORD) ) if len(args) > max_positional_args: # Split positional arguments into valid positional and # those needing conversion positional_args = list(args[:max_positional_args]) corrected_kwargs = dict(kwargs) positionally_passed_kwonly_args = [] for param, arg in zip( list(params.values())[max_positional_args:], args[max_positional_args:], ): corrected_kwargs[param.name] = arg positionally_passed_kwonly_args.append(param.name) if positionally_passed_kwonly_args and version.parse( from_version ) <= version.parse(current_version) <= version.parse(until_version): warnings.warn( f"Pass {positionally_passed_kwonly_args} as keyword args. " f"From version {until_version} passing these as positional " f"arguments will result in an error. 
", UserWarning, stacklevel=3, ) return func(*positional_args, **corrected_kwargs) return func(*args, **kwargs) # Check if the current version is within the warning range if ( version.parse(from_version) <= version.parse(current_version) <= version.parse(until_version) ): # Convert positional to keyword arguments and issue a warning return convert_positional_to_keyword(func, args, kwargs) # If the version is greater than the until_version, # pass the arguments as they are elif version.parse(current_version) > version.parse(until_version): return func(*args, **kwargs) # Convert positional to keyword arguments if # current version is less than from_version without warning elif version.parse(current_version) < version.parse(from_version): return convert_positional_to_keyword(func, args, kwargs) # Default case: call the function with the original arguments return func(*args, **kwargs) return wrapper return decorator dipy-1.11.0/dipy/testing/memory.py000066400000000000000000000013171476546756600170770ustar00rootroot00000000000000from collections import defaultdict import gc def get_type_refcount(pattern=None): """ Retrieves refcount of types for which their name matches `pattern`. Parameters ---------- pattern : str Consider only types that have `pattern` in their name. Returns ------- dict The key is the type name and the value is the refcount. """ gc.collect() refcounts_per_type = defaultdict(int) for obj in gc.get_objects(): obj_type_name = type(obj).__name__ # If `pattern` is not None, keep only matching types. if pattern is None or pattern in obj_type_name: refcounts_per_type[obj_type_name] += 1 return refcounts_per_type dipy-1.11.0/dipy/testing/meson.build000066400000000000000000000003031476546756600173510ustar00rootroot00000000000000python_sources = [ '__init__.py', 'decorators.py', 'memory.py', 'spherepoints.py', ] py3.install_sources( python_sources, pure: false, subdir: 'dipy/testing' ) subdir('tests')dipy-1.11.0/dipy/testing/spherepoints.py000066400000000000000000000010511476546756600203050ustar00rootroot00000000000000"""Create example sphere points""" import numpy as np def _make_pts(): """Make points around sphere quadrants""" thetas = np.arange(1, 4) * np.pi / 4 phis = np.arange(8) * np.pi / 4 north_pole = (0, 0, 1) south_pole = (0, 0, -1) points = [north_pole, south_pole] for theta in thetas: for phi in phis: x = np.sin(theta) * np.cos(phi) y = np.sin(theta) * np.sin(phi) z = np.cos(theta) points.append((x, y, z)) return np.array(points) sphere_points = _make_pts() dipy-1.11.0/dipy/testing/tests/000077500000000000000000000000001476546756600163555ustar00rootroot00000000000000dipy-1.11.0/dipy/testing/tests/__init__.py000066400000000000000000000000311476546756600204600ustar00rootroot00000000000000# Init for testing/tests dipy-1.11.0/dipy/testing/tests/meson.build000066400000000000000000000003041476546756600205140ustar00rootroot00000000000000python_sources = [ '__init__.py', 'test_decorators.py', 'test_memory.py', 'test_testing.py', ] py3.install_sources( python_sources, pure: false, subdir: 'dipy/testing/tests' ) dipy-1.11.0/dipy/testing/tests/test_decorators.py000066400000000000000000000110441476546756600221330ustar00rootroot00000000000000"""Testing decorators module.""" import warnings from numpy.testing import assert_equal, assert_raises import dipy from dipy.testing import assert_true from dipy.testing.decorators import doctest_skip_parser, warning_for_keywords def test_skipper(): def f(): pass docstring = """ Header >>> something # skip if not HAVE_AMODULE >>> something + else 
>>> a = 1 # skip if not HAVE_BMODULE >>> something2 # skip if HAVE_AMODULE """ f.__doc__ = docstring global HAVE_AMODULE, HAVE_BMODULE HAVE_AMODULE = False HAVE_BMODULE = True f2 = doctest_skip_parser(f) assert_true(f is f2) assert_equal( f2.__doc__, """ Header >>> something # doctest: +SKIP >>> something + else >>> a = 1 >>> something2 """, ) HAVE_AMODULE = True HAVE_BMODULE = False f.__doc__ = docstring f2 = doctest_skip_parser(f) assert_true(f is f2) assert_equal( f2.__doc__, """ Header >>> something >>> something + else >>> a = 1 # doctest: +SKIP >>> something2 # doctest: +SKIP """, ) del HAVE_AMODULE f.__doc__ = docstring assert_raises(NameError, doctest_skip_parser, f) def test_warning_for_keywords(): original_version = dipy.__version__ @warning_for_keywords(from_version="1.0.0", until_version="10.0.0") def func_with_kwonly_args(a, b, *, c=3): return a + b + c # Case 1: Version is 0.9.0, no warnings expected dipy.__version__ = "0.9.0" assert func_with_kwonly_args(1, 2, c=3) == 6 with warnings.catch_warnings(record=True) as w: warnings.simplefilter("always") result = func_with_kwonly_args(1, 2, 3) assert result == 6, "Expected result to be 6" assert len(w) == 0, "Expected no warnings for version 0.9.0" # Case 2: Version is 1.10.0, warnings expected dipy.__version__ = "1.10.0" assert func_with_kwonly_args(1, 2, c=3) == 6 with warnings.catch_warnings(record=True) as w: warnings.simplefilter("always") result = func_with_kwonly_args(1, 2, 3) assert result == 6, "Expected result to be 6" assert len(w) == 1, "Expected warning for version 1.10.0" assert ( "Pass ['c'] as keyword args. From version 10.0.0 passing these as " "positional arguments will result in an error" in str(w[-1].message) ) # Case 3: Version is 1.15.0, warnings expected dipy.__version__ = "1.15.0" assert func_with_kwonly_args(1, 2, c=3) == 6 with warnings.catch_warnings(record=True) as w: warnings.simplefilter("always") result = func_with_kwonly_args(1, 2, 3) assert result == 6, "Expected result to be 6" assert len(w) == 1, "Expected warning for version 1.15.0" assert ( "Pass ['c'] as keyword args. From version 10.0.0 passing these as " "positional arguments will result in an error" in str(w[-1].message) ) # Case 4: Version is 10.1.0, arguments should pass as they are, # expecting TypeError from system dipy.__version__ = "10.1.0" assert func_with_kwonly_args(1, 2, c=3) == 6 try: result = func_with_kwonly_args(1, 2, 3) except TypeError: pass # Case 5: Version is a pre-release like '2.0.0rc1', warnings expected dipy.__version__ = "2.0.0rc1" assert func_with_kwonly_args(1, 2, c=3) == 6 with warnings.catch_warnings(record=True) as w: warnings.simplefilter("always") result = func_with_kwonly_args(1, 2, 3) assert result == 6, "Expected result to be 6" assert len(w) == 1, "Expected warning for version 2.0.0rc1" assert ( "Pass ['c'] as keyword args. From version 10.0.0 passing these as " "positional arguments will result in an error" in str(w[-1].message) ) # Case 6: Version is a dev release like '1.10.0.dev1', warnings expected dipy.__version__ = "1.10.0.dev1" assert func_with_kwonly_args(1, 2, c=3) == 6 with warnings.catch_warnings(record=True) as w: warnings.simplefilter("always") result = func_with_kwonly_args(1, 2, 3) assert result == 6, "Expected result to be 6" assert len(w) == 1, "Expected warning for version 1.10.0.dev1" assert ( "Pass ['c'] as keyword args. 
From version 10.0.0 passing these as " "positional arguments will result in an error" in str(w[-1].message) ) # Restore the original version dipy.__version__ = original_version dipy-1.11.0/dipy/testing/tests/test_memory.py000066400000000000000000000005321476546756600212760ustar00rootroot00000000000000from numpy.testing import assert_equal from dipy.testing.memory import get_type_refcount def test_get_type_refcount(): list_ref_count = get_type_refcount("list") A = [] assert_equal(get_type_refcount("list")["list"], list_ref_count["list"] + 1) del A assert_equal(get_type_refcount("list")["list"], list_ref_count["list"]) dipy-1.11.0/dipy/testing/tests/test_testing.py000066400000000000000000000050771476546756600214540ustar00rootroot00000000000000"""Testing file unit test.""" import sys import warnings import numpy as np import numpy.testing as npt import dipy.testing as dt def test_assert(): npt.assert_raises(AssertionError, dt.assert_false, True) npt.assert_raises(AssertionError, dt.assert_true, False) npt.assert_raises(AssertionError, dt.assert_less, 2, 1) npt.assert_raises(AssertionError, dt.assert_less_equal, 2, 1) npt.assert_raises(AssertionError, dt.assert_greater, 1, 2) npt.assert_raises(AssertionError, dt.assert_greater_equal, 1, 2) npt.assert_raises(AssertionError, dt.assert_not_equal, 5, 5) npt.assert_raises(AssertionError, dt.assert_operator, 2, 1) arr = [np.arange(k) for k in range(2, 12, 3)] arr2 = [np.arange(k) for k in range(2, 12, 4)] npt.assert_raises(AssertionError, dt.assert_arrays_equal, arr, arr2) def assert_warn_len_equal(mod, n_in_context): mod_warns = mod.__warningregistry__ # Python 3 appears to clear any pre-existing warnings of the same type, # when raising warnings inside a catch_warnings block. So, there is a # warning generated by the tests within the context manager, but no # previous warnings. 
if "version" in mod_warns: # including 'version' npt.assert_equal(len(mod_warns), 2) else: npt.assert_equal(len(mod_warns), n_in_context) def test_clear_and_catch_warnings(): warnings.simplefilter("default", category=UserWarning) # Initial state of module, no warnings my_mod = sys.modules[__name__] try: my_mod.__warningregistry__.clear() except AttributeError: pass npt.assert_equal(getattr(my_mod, "__warningregistry__", {}), {}) with dt.clear_and_catch_warnings(modules=[my_mod]): warnings.simplefilter("ignore") warnings.warn("Some warning", stacklevel=1) npt.assert_equal(my_mod.__warningregistry__, {}) # Without specified modules, don't clear warnings during context with dt.clear_and_catch_warnings(): warnings.warn("Some warning", stacklevel=1) assert_warn_len_equal(my_mod, 1) # Confirm that specifying module keeps old warning, does not add new with dt.clear_and_catch_warnings(modules=[my_mod]): warnings.warn("Another warning", stacklevel=1) assert_warn_len_equal(my_mod, 1) # Another warning, no module spec does add to warnings dict, except on # Python 3 (see comments in `assert_warn_len_equal`) with dt.clear_and_catch_warnings(): warnings.warn("Another warning", stacklevel=1) assert_warn_len_equal(my_mod, 2) warnings.simplefilter("always", category=UserWarning) dipy-1.11.0/dipy/tests/000077500000000000000000000000001476546756600147005ustar00rootroot00000000000000dipy-1.11.0/dipy/tests/__init__.py000066400000000000000000000000341476546756600170060ustar00rootroot00000000000000# Make dipy.tests a package dipy-1.11.0/dipy/tests/meson.build000066400000000000000000000002451476546756600170430ustar00rootroot00000000000000python_sources = [ '__init__.py', 'scriptrunner.py', 'test_scripts.py', ] py3.install_sources( python_sources, pure: false, subdir: 'dipy/tests' ) dipy-1.11.0/dipy/tests/scriptrunner.py000066400000000000000000000142041476546756600200110ustar00rootroot00000000000000"""Module to help tests check script output Provides class to be instantiated in tests that check scripts. Usually works something like this in a test module:: from dipy.tests.scriptrunner import ScriptRunner runner = ScriptRunner() Then, in the tests, something like:: code, stdout, stderr = runner.run_command(['my-script', my_arg]) assert_equal(code, 0) assert_equal(stdout, b'This script ran OK') """ import os from os.path import dirname, isdir, isfile, join as pjoin, pathsep, realpath from subprocess import PIPE, Popen import sys try: # Python 2 string_types = (basestring,) except NameError: # Python 3 string_types = (str,) def _get_package(): """Workaround for missing ``__package__`` in Python 3.2""" if "__package__" in globals() and __package__ is not None: return __package__ return __name__.split(".", 1)[0] # Same as __package__ for Python 2.6, 2.7 and >= 3.3 MY_PACKAGE = _get_package() def local_script_dir(script_sdir): """Get local script directory if running in development dir, else None""" # Check for presence of scripts in development directory. ``realpath`` # allows for the situation where the development directory has been linked # into the path. 
    package_path = dirname(__import__(MY_PACKAGE).__file__)
    above_us = realpath(pjoin(package_path, ".."))
    devel_script_dir = pjoin(above_us, script_sdir)
    if isfile(pjoin(above_us, "setup.py")) and isdir(devel_script_dir):
        return devel_script_dir
    return None


def local_module_dir(module_name):
    """Get local module directory if running in development dir, else None"""
    mod = __import__(module_name)
    containing_path = dirname(dirname(realpath(mod.__file__)))
    if containing_path == realpath(os.getcwd()):
        return containing_path
    return None


class ScriptRunner:
    """Class to run scripts and return output

    Finds local scripts and local modules if running in the development
    directory, otherwise finds system scripts and modules.
    """

    def __init__(
        self,
        script_sdir="scripts",
        module_sdir=MY_PACKAGE,
        debug_print_var=None,
        output_processor=lambda x: x,
    ):
        """Init ScriptRunner instance

        Parameters
        ----------
        script_sdir : str, optional
            Name of subdirectory in top-level directory (directory containing
            setup.py), to find scripts in development tree.  Typically
            'scripts', but might be 'bin'.
        module_sdir : str, optional
            Name of subdirectory in top-level directory (directory containing
            setup.py), to find main package directory.
        debug_print_var : str, optional
            Name of environment variable that indicates whether to do debug
            printing or not.
        output_processor : callable
            Callable to run on the stdout, stderr outputs before returning
            them.  Use this to convert bytes to unicode, strip whitespace,
            etc.
        """
        self.local_script_dir = local_script_dir(script_sdir)
        self.local_module_dir = local_module_dir(module_sdir)
        if debug_print_var is None:
            debug_print_var = f"{module_sdir.upper()}_DEBUG_PRINT"
        self.debug_print = os.environ.get(debug_print_var, False)
        self.output_processor = output_processor

    def run_command(self, cmd, check_code=True):
        """Run command sequence `cmd` returning exit code, stdout, stderr

        Parameters
        ----------
        cmd : str or sequence
            string with command name or sequence of strings defining command
        check_code : {True, False}, optional
            If True, raise error for non-zero return code

        Returns
        -------
        returncode : int
            return code from execution of `cmd`
        stdout : bytes (python 3) or str (python 2)
            stdout from `cmd`
        stderr : bytes (python 3) or str (python 2)
            stderr from `cmd`
        """
        if isinstance(cmd, string_types):
            cmd = [cmd]
        else:
            cmd = list(cmd)
        if self.local_script_dir is not None:
            # Windows can't run script files without extensions
            # natively so we need to run local scripts (no extensions)
            # via the Python interpreter.  On Unix, we might have the
            # wrong incantation for the Python interpreter
            # in the hash bang first line in the source file.  So, either way,
            # run the script through the Python interpreter
            cmd = [sys.executable, pjoin(self.local_script_dir, cmd[0])] + cmd[1:]
        elif os.name == "nt":
            # Need .bat file extension for windows
            cmd[0] += ".bat"
        if os.name == "nt":
            # Quote any arguments with spaces. The quotes delimit the arguments
            # on Windows, and the arguments might be file paths with spaces.
            # On Unix the list elements are each separate arguments.
            cmd = [f'"{c}"' if " " in c else c for c in cmd]
        if self.debug_print:
            print(f"Running command '{cmd}'")
        env = os.environ
        if self.local_module_dir is not None:
            # module likely comes from the current working directory.
# We might need that directory on the path if we're running # the scripts from a temporary directory env = env.copy() pypath = env.get("PYTHONPATH", None) if pypath is None: env["PYTHONPATH"] = self.local_module_dir else: env["PYTHONPATH"] = self.local_module_dir + pathsep + pypath proc = Popen(cmd, stdout=PIPE, stderr=PIPE, env=env) stdout, stderr = proc.communicate() if proc.poll() is None: proc.terminate() if check_code and proc.returncode != 0: raise RuntimeError( f"""Command "{cmd}" failed with stdout ------ {stdout} stderr ------ {stderr} """ ) opp = self.output_processor return proc.returncode, opp(stdout), opp(stderr) dipy-1.11.0/dipy/tests/test_scripts.py000066400000000000000000000116051476546756600200030ustar00rootroot00000000000000"""Test scripts Run scripts and check outputs """ """ import glob import os import shutil from tempfile import TemporaryDirectory from os.path import (dirname, join as pjoin, abspath) from dipy.testing import assert_true, assert_false import numpy.testing as nt import pytest import nibabel as nib from dipy.data import get_fnames # Quickbundles command-line requires matplotlib: try: import matplotlib no_mpl = False except ImportError: no_mpl = True from dipy.tests.scriptrunner import ScriptRunner runner = ScriptRunner( script_sdir='bin', debug_print_var='NIPY_DEBUG_PRINT') run_command = runner.run_command DATA_PATH = abspath(pjoin(dirname(__file__), 'data')) def test_dipy_peak_extraction(): # test dipy_peak_extraction script cmd = 'dipy_peak_extraction' code, stdout, stderr = run_command(cmd, check_code=False) npt.assert_equal(code, 2) def test_dipy_fit_tensor(): # test dipy_fit_tensor script cmd = 'dipy_fit_tensor' code, stdout, stderr = run_command(cmd, check_code=False) npt.assert_equal(code, 2) def test_dipy_sh_estimate(): # test dipy_sh_estimate script cmd = 'dipy_sh_estimate' code, stdout, stderr = run_command(cmd, check_code=False) npt.assert_equal(code, 2) def assert_image_shape_affine(filename, shape, affine): assert_true(os.path.isfile(filename)) image = nib.load(filename) npt.assert_equal(image.shape, shape) nt.assert_array_almost_equal(image.affine, affine) def test_dipy_fit_tensor_again(): with TemporaryDirectory(): dwi, bval, bvec = get_fnames(name="small_25") # Copy data to tmp directory shutil.copyfile(dwi, "small_25.nii.gz") shutil.copyfile(bval, "small_25.bval") shutil.copyfile(bvec, "small_25.bvec") # Call script cmd = ["dipy_fit_tensor", "--mask=none", "small_25.nii.gz"] out = run_command(cmd) npt.assert_equal(out[0], 0) # Get expected values img = nib.load("small_25.nii.gz") affine = img.affine shape = img.shape[:-1] # Check expected outputs assert_image_shape_affine("small_25_fa.nii.gz", shape, affine) assert_image_shape_affine("small_25_t2di.nii.gz", shape, affine) assert_image_shape_affine("small_25_dirFA.nii.gz", shape, affine) assert_image_shape_affine("small_25_ad.nii.gz", shape, affine) assert_image_shape_affine("small_25_md.nii.gz", shape, affine) assert_image_shape_affine("small_25_rd.nii.gz", shape, affine) with TemporaryDirectory(): dwi, bval, bvec = get_fnames(name="small_25") # Copy data to tmp directory shutil.copyfile(dwi, "small_25.nii.gz") shutil.copyfile(bval, "small_25.bval") shutil.copyfile(bvec, "small_25.bvec") # Call script cmd = ["dipy_fit_tensor", "--save-tensor", "--mask=none", "small_25.nii.gz"] out = run_command(cmd) npt.assert_equal(out[0], 0) # Get expected values img = nib.load("small_25.nii.gz") affine = img.affine shape = img.shape[:-1] # Check expected outputs 
        assert_image_shape_affine("small_25_fa.nii.gz", shape, affine)
        assert_image_shape_affine("small_25_t2di.nii.gz", shape, affine)
        assert_image_shape_affine("small_25_dirFA.nii.gz", shape, affine)
        assert_image_shape_affine("small_25_ad.nii.gz", shape, affine)
        assert_image_shape_affine("small_25_md.nii.gz", shape, affine)
        assert_image_shape_affine("small_25_rd.nii.gz", shape, affine)

        # small_25_tensor saves the tensor as a symmetric matrix following
        # the nifti standard.
        ten_shape = shape + (1, 6)
        assert_image_shape_affine("small_25_tensor.nii.gz", ten_shape, affine)


@pytest.mark.skipif(no_mpl)
def test_qb_commandline():
    with TemporaryDirectory():
        tracks_file = get_fnames(name='fornix')
        cmd = ["dipy_quickbundles", tracks_file, '--pkl_file', 'mypickle.pkl',
               '--out_file', 'tracks300.trk']
        out = run_command(cmd)
        npt.assert_equal(out[0], 0)


@pytest.mark.skipif(no_mpl)
def test_qb_commandline_output_path_handling():
    with TemporaryDirectory():
        # Create temporary subdirectory for input and for output
        os.mkdir('work')
        os.mkdir('output')
        os.chdir('work')
        tracks_file = get_fnames(name='fornix')
        # Need to specify an output directory with a "../" style path
        # to trigger old bug.
        cmd = ["dipy_quickbundles", tracks_file, '--pkl_file', 'mypickle.pkl',
               '--out_file', os.path.join('..', 'output', 'tracks300.trk')]
        out = run_command(cmd)
        npt.assert_equal(out[0], 0)

        # Make sure the files were created in the output directory
        os.chdir('../')
        output_files_list = glob.glob('output/tracks300_*.trk')
        assert_true(output_files_list)
"""
dipy-1.11.0/dipy/tracking/000077500000000000000000000000001476546756600153405ustar00rootroot00000000000000dipy-1.11.0/dipy/tracking/__init__.py000066400000000000000000000002131476546756600174550ustar00rootroot00000000000000# Init for tracking module
"""Tracking objects"""

from nibabel.streamlines import ArraySequence as Streamlines

__all__ = ["Streamlines"]
dipy-1.11.0/dipy/tracking/_utils.py000066400000000000000000000031431476546756600172120ustar00rootroot00000000000000"""This is a helper module for dipy.tracking.utils."""

import numpy as np


def _mapping_to_voxel(affine):
    """Inverts the affine and returns a mapping to voxel coordinates.

    This function is an implementation detail and only meant to be used with
    ``_to_voxel_coordinates``.

    Parameters
    ----------
    affine : array_like (4, 4)
        The mapping from voxel indices, [i, j, k], to real world coordinates.
        The inverse of this mapping is used unless `affine` is None.

    Returns
    -------
    lin_T : array (3, 3)
        Transpose of the linear part of the mapping to voxel space, (ie
        ``inv(affine)[:3, :3].T``)
    offset : array or scalar
        Offset part of the mapping (ie, ``inv(affine)[:3, 3]``) + ``.5``. The
        half voxel shift is so that truncating the result of this mapping
        will give the correct integer voxel coordinate.

    Raises
    ------
    ValueError
        If `affine` is None.
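
    Examples
    --------
    A small sketch, assuming an identity affine (illustrative only, not part
    of the public API)::

        lin_T, offset = _mapping_to_voxel(np.eye(4))
        # lin_T is np.eye(3) and offset is [0.5, 0.5, 0.5]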
""" if affine is None: raise ValueError("no affine specified") affine = np.array(affine, dtype=float) inv_affine = np.linalg.inv(affine) lin_T = inv_affine[:3, :3].T.copy() offset = inv_affine[:3, 3] + 0.5 return lin_T, offset def _to_voxel_coordinates(streamline, lin_T, offset): """Applies a mapping from streamline coordinates to voxel_coordinates, raises an error for negative voxel values.""" inds = np.dot(streamline, lin_T) inds += offset if inds.min().round(decimals=6) < 0: raise IndexError("streamline has points that map to negative voxel" " indices") return inds.astype(np.intp) dipy-1.11.0/dipy/tracking/direction_getter.pxd000066400000000000000000000015601476546756600214110ustar00rootroot00000000000000 from dipy.tracking.stopping_criterion cimport (StreamlineStatus, StoppingCriterion) cimport numpy as cnp cdef class DirectionGetter: cpdef cnp.ndarray[cnp.float_t, ndim=2] initial_direction( self, double[::1] point) cpdef tuple generate_streamline(self, double[::1] seed, double[::1] dir, double[::1] voxel_size, double step_size, StoppingCriterion stopping_criterion, cnp.float_t[:, :] streamline, StreamlineStatus stream_status, int fixedstep) cdef int get_direction_c(self, double[::1] point, double[::1] direction) dipy-1.11.0/dipy/tracking/direction_getter.pyx000066400000000000000000000115141476546756600214360ustar00rootroot00000000000000cimport cython from dipy.tracking.stopping_criterion cimport (StreamlineStatus, StoppingCriterion, TRACKPOINT, ENDPOINT, OUTSIDEIMAGE, INVALIDPOINT,) from dipy.utils.fast_numpy cimport copy_point cimport numpy as cnp cdef extern from "dpy_math.h" nogil: int dpy_signbit(double x) double dpy_rint(double x) double fabs(double) @cython.cdivision(True) cdef inline double _stepsize(double point, double increment) noexcept nogil: """Compute the step size to the closest boundary in units of increment.""" cdef: double dist dist = dpy_rint(point) + .5 - dpy_signbit(increment) - point if dist == 0: # Point is on an edge, return step size to next edge. This is most # likely to come up if overstep is set to 0. return 1. / fabs(increment) else: return dist / increment cdef void _step_to_boundary(double * point, double * direction, double overstep) noexcept nogil: """Takes a step from point in along direction just past a voxel boundary. Parameters ---------- direction : c-pointer to double[3] The direction along which the step should be taken. point : c-pointer to double[3] The tracking point which will be updated by this function. overstep : double It's often useful to have the points of a streamline lie inside of a voxel instead of having them lie on the boundary. For this reason, each step will overshoot the boundary by ``overstep * direction``. This should not be negative. """ cdef: double step_sizes[3] double smallest_step for i in range(3): step_sizes[i] = _stepsize(point[i], direction[i]) smallest_step = step_sizes[0] for i in range(1, 3): if step_sizes[i] < smallest_step: smallest_step = step_sizes[i] smallest_step += overstep for i in range(3): point[i] += smallest_step * direction[i] cdef void _fixed_step(double * point, double * direction, double step_size) noexcept nogil: """Updates point by stepping in direction. Parameters ---------- direction : c-pointer to double[3] The direction along which the step should be taken. point : c-pointer to double[3] The tracking point which will be updated by this function. step_size : double The size of step in units of direction. 
""" for i in range(3): point[i] += direction[i] * step_size cdef class DirectionGetter: cpdef cnp.ndarray[cnp.float_t, ndim=2] initial_direction( self, double[::1] point): pass @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cpdef tuple generate_streamline(self, double[::1] seed, double[::1] direction, double[::1] voxel_size, double step_size, StoppingCriterion stopping_criterion, cnp.float_t[:, :] streamline, StreamlineStatus stream_status, int fixedstep ): cdef: cnp.npy_intp i cnp.npy_intp len_streamlines = streamline.shape[0] double point[3] double voxdir[3] void (*step)(double*, double*, double) noexcept nogil if fixedstep > 0: step = _fixed_step else: step = _step_to_boundary copy_point(&seed[0], point) copy_point(&seed[0], &streamline[0,0]) stream_status = TRACKPOINT for i in range(1, len_streamlines): if self.get_direction_c(point, direction): break for j in range(3): voxdir[j] = direction[j] / voxel_size[j] step(point, voxdir, step_size) copy_point(point, &streamline[i, 0]) stream_status = stopping_criterion.check_point_c(point) if stream_status == TRACKPOINT: continue elif (stream_status == ENDPOINT or stream_status == INVALIDPOINT or stream_status == OUTSIDEIMAGE): break else: # maximum length of streamline has been reached, return everything i = streamline.shape[0] return i, stream_status def get_direction(self, double[::1] point, double[::1] direction): return self.get_direction_c(point, direction) cdef int get_direction_c(self, double[::1] point, double[::1] direction): pass dipy-1.11.0/dipy/tracking/distances.pyx000066400000000000000000002042261476546756600200650ustar00rootroot00000000000000 # A type of -*- python -*- file """ Optimized track distances, similarities and distanch clustering algorithms """ # cython: profile=True # cython: embedsignature=True cimport cython from libc.stdlib cimport calloc, realloc, free import numpy as np from warnings import warn cimport numpy as cnp cdef extern from "dpy_math.h" nogil: double floor(double x) float sqrt(float x) float fabs(float x) float acos(float x ) bint dpy_isnan(double x) double dpy_log2(double x) #@cython.boundscheck(False) #@cython.wraparound(False) DEF biggest_double = 1.79769e+308 #np.finfo('f8').max DEF biggest_float = 3.4028235e+38 #np.finfo('f4').max cdef inline cnp.ndarray[cnp.float32_t, ndim=1] as_float_3vec(object vec): """ Utility function to convert object to 3D float vector """ return np.squeeze(np.asarray(vec, dtype=np.float32)) cdef inline float* asfp(cnp.ndarray pt): return cnp.PyArray_DATA(pt) def normalized_3vec(vec): """ Return normalized 3D vector Vector divided by Euclidean (L2) norm Parameters ---------- vec : array-like shape (3,) Returns ------- vec_out : array shape (3,) """ cdef cnp.ndarray[cnp.float32_t, ndim=1] vec_in = as_float_3vec(vec) cdef cnp.ndarray[cnp.float32_t, ndim=1] vec_out = np.zeros((3,), np.float32) cnormalized_3vec( cnp.PyArray_DATA(vec_in), cnp.PyArray_DATA(vec_out)) return vec_out def norm_3vec(vec): """ Euclidean (L2) norm of length 3 vector Parameters ---------- vec : array-like shape (3,) Returns ------- norm : float Euclidean norm """ cdef cnp.ndarray[cnp.float32_t, ndim=1] vec_in = as_float_3vec(vec) return cnorm_3vec( cnp.PyArray_DATA(vec_in)) cdef inline float cnorm_3vec(float *vec): """ Calculate Euclidean norm of input vector Parameters ---------- vec : float * length 3 float vector Returns ------- norm : float Euclidean norm """ cdef float v0, v1, v2 v0 = vec[0] v1 = vec[1] v2 = vec[2] return sqrt(v0 * v0 + v1*v1 + v2*v2) cdef inline void 
cnormalized_3vec(float *vec_in, float *vec_out): """ Calculate and fill normalized 3D vector Parameters ---------- vec_in : float * Length 3 vector to normalize vec_out : float * Memory into which to write normalized length 3 vector """ cdef float norm = cnorm_3vec(vec_in) cdef cnp.npy_intp i for i in range(3): vec_out[i] = vec_in[i] / norm def inner_3vecs(vec1, vec2): cdef cnp.ndarray[cnp.float32_t, ndim=1] fvec1 = as_float_3vec(vec1) cdef cnp.ndarray[cnp.float32_t, ndim=1] fvec2 = as_float_3vec(vec2) return cinner_3vecs( cnp.PyArray_DATA(fvec1), cnp.PyArray_DATA(fvec2)) cdef inline float cinner_3vecs(float *vec1, float *vec2) noexcept nogil: cdef cnp.npy_intp i cdef float ip = 0 for i from 0<=i<3: ip += vec1[i]*vec2[i] return ip def sub_3vecs(vec1, vec2): cdef cnp.ndarray[cnp.float32_t, ndim=1] fvec1 = as_float_3vec(vec1) cdef cnp.ndarray[cnp.float32_t, ndim=1] fvec2 = as_float_3vec(vec2) cdef cnp.ndarray[cnp.float32_t, ndim=1] vec_out = np.zeros((3,), np.float32) csub_3vecs( cnp.PyArray_DATA(fvec1), cnp.PyArray_DATA(fvec2), cnp.PyArray_DATA(vec_out)) return vec_out cdef inline void csub_3vecs(float *vec1, float *vec2, float *vec_out) noexcept nogil: cdef cnp.npy_intp i for i from 0<=i<3: vec_out[i] = vec1[i]-vec2[i] def add_3vecs(vec1, vec2): cdef cnp.ndarray[cnp.float32_t, ndim=1] fvec1 = as_float_3vec(vec1) cdef cnp.ndarray[cnp.float32_t, ndim=1] fvec2 = as_float_3vec(vec2) cdef cnp.ndarray[cnp.float32_t, ndim=1] vec_out = np.zeros((3,), np.float32) cadd_3vecs( cnp.PyArray_DATA(fvec1), cnp.PyArray_DATA(fvec2), cnp.PyArray_DATA(vec_out)) return vec_out cdef inline void cadd_3vecs(float *vec1, float *vec2, float *vec_out) noexcept nogil: cdef cnp.npy_intp i for i from 0<=i<3: vec_out[i] = vec1[i]+vec2[i] def mul_3vecs(vec1, vec2): cdef cnp.ndarray[cnp.float32_t, ndim=1] fvec1 = as_float_3vec(vec1) cdef cnp.ndarray[cnp.float32_t, ndim=1] fvec2 = as_float_3vec(vec2) cdef cnp.ndarray[cnp.float32_t, ndim=1] vec_out = np.zeros((3,), np.float32) cmul_3vecs( cnp.PyArray_DATA(fvec1), cnp.PyArray_DATA(fvec2), cnp.PyArray_DATA(vec_out)) return vec_out cdef inline void cmul_3vecs(float *vec1, float *vec2, float *vec_out) noexcept nogil: cdef cnp.npy_intp i for i from 0<=i<3: vec_out[i] = vec1[i]*vec2[i] def mul_3vec(a, vec): cdef cnp.ndarray[cnp.float32_t, ndim=1] fvec = as_float_3vec(vec) cdef cnp.ndarray[cnp.float32_t, ndim=1] vec_out = np.zeros((3,), np.float32) cmul_3vec(a, cnp.PyArray_DATA(fvec), cnp.PyArray_DATA(vec_out)) return vec_out cdef inline void cmul_3vec(float a, float *vec, float *vec_out) noexcept nogil: cdef cnp.npy_intp i for i from 0<=i<3: vec_out[i] = a*vec[i] # float 32 dtype for casting cdef cnp.dtype f32_dt = np.dtype(np.float32) def cut_plane(tracks, ref): """ Extract divergence vectors and points of intersection between planes normal to the reference fiber and other tracks Parameters ---------- tracks : sequence of tracks as arrays, shape (N1,3) .. (Nm,3) ref : array, shape (N,3) reference track Returns ------- hits : sequence list of points and rcds (radial coefficient of divergence) Notes ----- The orthogonality relationship ``np.inner(hits[p][q][0:3]-ref[p+1],ref[p+2]-ref[r][p+1])`` will hold throughout for every point q in the hits plane at point (p+1) on the reference track. Examples -------- >>> refx = np.array([[0,0,0],[1,0,0],[2,0,0],[3,0,0]],dtype='float32') >>> bundlex = [np.array([[0.5,1,0],[1.5,2,0],[2.5,3,0]],dtype='float32')] >>> res = cut_plane(bundlex,refx) >>> len(res) 2 >>> np.allclose(res[0], np.array([[1., 1.5, 0., 0.70710683, 0. 
]])) True >>> np.allclose(res[1], np.array([[2., 2.5, 0., 0.70710677, 0. ]])) True """ cdef: cnp.npy_intp n_hits, hit_no, max_hit_len float alpha,beta,lrq,rcd,lhp,ld cnp.ndarray[cnp.float32_t, ndim=2] ref32 cnp.ndarray[cnp.float32_t, ndim=2] track object hits cnp.ndarray[cnp.float32_t, ndim=1] one_hit float *hit_ptr cnp.ndarray[cnp.float32_t, ndim=2] hit_arr object Hit=[] # make reference fiber usable type ref32 = np.ascontiguousarray(ref, f32_dt) # convert all the tracks to something we can work with. Get track # lengths cdef: cnp.npy_intp N_tracks=len(tracks) cnp.ndarray[cnp.uint64_t, ndim=1] track_lengths cnp.npy_intp t_no, N_track cdef object tracks32 = [] track_lengths = np.empty((N_tracks,), dtype=np.uint64) for t_no in range(N_tracks): track = np.ascontiguousarray(tracks[t_no], f32_dt) track_lengths[t_no] = cnp.PyArray_DIM(track, 0) tracks32.append(track) # set up loop across reference fiber points cdef: cnp.npy_intp N_ref = cnp.PyArray_DIM(ref32, 0) cnp.npy_intp p_no, q_no float *this_ref_p float *next_ref_p float *this_trk_p float *next_trk_p float along[3] float normal[3] float qMp[3] float rMp[3] float rMq[3] float pMq[3] float hit[3] float hitMp[3] float *delta normal[:] = [0, 0, 0] # List used for storage of hits. We will fill this with lots of # small numpy arrays, and reuse them over the reference track point # loops. max_hit_len = 0 hits = [] # for every point along the reference track next_ref_p = asfp(ref32[0]) for p_no in range(N_ref-1): # extract point to point vector into `along` this_ref_p = next_ref_p next_ref_p = asfp(ref32[p_no+1]) csub_3vecs(next_ref_p, this_ref_p, along) # normalize cnormalized_3vec(along, normal) # initialize index for hits hit_no = 0 # for every track for t_no in range(N_tracks): track=tracks32[t_no] N_track = track_lengths[t_no] # for every point on the track next_trk_p = asfp(track[0]) for q_no in range(N_track-1): # p = ref32[p_no] # q = track[q_no] # r = track[q_no+1] # float* versions of above: p == this_ref_p this_trk_p = next_trk_p # q next_trk_p = asfp(track[q_no+1]) # r #if np.inner(normal,q-p)*np.inner(normal,r-p) <= 0: csub_3vecs(this_trk_p, this_ref_p, qMp) # q-p csub_3vecs(next_trk_p, this_ref_p, rMp) # r-p if (cinner_3vecs(normal, qMp) * cinner_3vecs(normal, rMp)) <=0: #if np.inner((r-q),normal) != 0: csub_3vecs(next_trk_p, this_trk_p, rMq) beta = cinner_3vecs(rMq, normal) if beta !=0: #alpha = np.inner((p-q),normal)/np.inner((r-q),normal) csub_3vecs(this_ref_p, this_trk_p, pMq) alpha = (cinner_3vecs(pMq, normal) / cinner_3vecs(rMq, normal)) if alpha < 1: # hit = q+alpha*(r-q) hit[0] = this_trk_p[0]+alpha*rMq[0] hit[1] = this_trk_p[1]+alpha*rMq[1] hit[2] = this_trk_p[2]+alpha*rMq[2] # h-p csub_3vecs(hit, this_ref_p, hitMp) # |h-p| lhp = cnorm_3vec(hitMp) delta = rMq # just renaming # |r-q| == |delta| ld = cnorm_3vec(delta) """ # Summary of stuff in comments # divergence =((r-q)-inner(r-q,normal)*normal)/|r-q| div[0] = (rMq[0]-beta*normal[0]) / ld div[1] = (rMq[1]-beta*normal[1]) / ld div[2] = (rMq[2]-beta*normal[2]) / ld # radial coefficient of divergence d.(h-p)/|h-p| """ # radial divergence # np.inner(delta, (hit-p)) / (ld * lhp) if lhp > 0: rcd = fabs(cinner_3vecs(delta, hitMp) / (ld*lhp)) else: rcd=0 # hit data into array if hit_no >= max_hit_len: one_hit = np.empty((5,), dtype=f32_dt) hits.append(one_hit) else: one_hit = hits[hit_no] hit_ptr = cnp.PyArray_DATA(one_hit) hit_ptr[0] = hit[0] hit_ptr[1] = hit[1] hit_ptr[2] = hit[2] hit_ptr[3] = rcd hit_ptr[4] = t_no hit_no += 1 # convert hits list to hits array n_hits = hit_no if 
n_hits > max_hit_len: max_hit_len = n_hits hit_arr = np.empty((n_hits,5), dtype=f32_dt) for hit_no in range(n_hits): hit_arr[hit_no] = hits[hit_no] Hit.append(hit_arr) #Div.append(divs[1:]) return Hit[1:] def most_similar_track_mam(tracks,metric='avg'): """ Find the most similar track in a bundle using distances calculated from Zhang et. al 2008. Parameters ---------- tracks : sequence of tracks as arrays, shape (N1,3) .. (Nm,3) metric : str 'avg', 'min', 'max' Returns ------- si : int index of the most similar track in tracks. This can be used as a reference track for a bundle. s : array, shape (len(tracks),) similarities between tracks[si] and the rest of the tracks in the bundle Notes ----- A vague description of this function is given below: for (i,j) in tracks_combinations_of_2: calculate the mean_closest_distance from i to j (mcd_i) calculate the mean_closest_distance from j to i (mcd_j) if 'avg': s holds the average similarities if 'min': s holds the minimum similarities if 'max': s holds the maximum similarities si holds the index of the track with min {avg,min,max} average metric """ cdef: cnp.npy_intp i, j, lent int metric_type if metric=='avg': metric_type = 0 elif metric == 'min': metric_type = 1 elif metric == 'max': metric_type = 2 else: raise ValueError('Metric should be one of avg, min, max') # preprocess tracks cdef: cnp.npy_intp longest_track_len = 0, track_len cnp.ndarray[object, ndim=1] tracks32 lent = len(tracks) tracks32 = np.zeros((lent,), dtype=object) # process tracks to predictable memory layout, find longest track for i in range(lent): tracks32[i] = np.ascontiguousarray(tracks[i], dtype=f32_dt) track_len = tracks32[i].shape[0] if track_len > longest_track_len: longest_track_len = track_len # buffer for distances of found track to other tracks cdef: cnp.ndarray[cnp.double_t, ndim=1] track2others track2others = np.zeros((lent,), dtype=np.double) # use this buffer also for working space containing summed distances # of candidate track to all other tracks cdef cnp.double_t *sum_track2others = cnp.PyArray_DATA(track2others) # preallocate buffer array for track distance calculations cdef: cnp.ndarray [cnp.float32_t, ndim=1] distances_buffer cnp.float32_t *t1_ptr cnp.float32_t *t2_ptr cnp.float32_t *min_buffer cnp.float32_t distance distances_buffer = np.zeros((longest_track_len*2,), dtype=np.float32) min_buffer = cnp.PyArray_DATA(distances_buffer) # cycle over tracks cdef: cnp.ndarray [cnp.float32_t, ndim=2] t1, t2 cnp.npy_intp t1_len, t2_len for i from 0 <= i < lent-1: t1 = tracks32[i] t1_len = cnp.PyArray_DIM(t1, 0) t1_ptr = cnp.PyArray_DATA(t1) for j from i+1 <= j < lent: t2 = tracks32[j] t2_len = cnp.PyArray_DIM(t2, 0) t2_ptr = cnp.PyArray_DATA(t2) distance = czhang(t1_len, t1_ptr, t2_len, t2_ptr, min_buffer, metric_type) # get metric sum_track2others[i]+=distance sum_track2others[j]+=distance # find track with smallest summed metric with other tracks cdef double mn = sum_track2others[0] cdef cnp.npy_intp si = 0 for i in range(lent): if sum_track2others[i] < mn: si = i mn = sum_track2others[i] # recalculate distance of this track from the others t1 = tracks32[si] t1_len = cnp.PyArray_DIM(t1, 0) t1_ptr = cnp.PyArray_DATA(t1) for j from 0 <= j < lent: t2 = tracks32[j] t2_len = cnp.PyArray_DIM(t2, 0) t2_ptr = cnp.PyArray_DATA(t2) track2others[j] = czhang(t1_len, t1_ptr, t2_len, t2_ptr, min_buffer, metric_type) return si, track2others @cython.boundscheck(False) @cython.wraparound(False) def bundles_distances_mam(tracksA, tracksB, metric='avg'): """ Calculate distances 
between list of tracks A and list of tracks B Parameters ---------- tracksA : sequence of tracks as arrays, shape (N1,3) .. (Nm,3) tracksB : sequence of tracks as arrays, shape (N1,3) .. (Nm,3) metric : str 'avg', 'min', 'max' Returns ------- DM : array, shape (len(tracksA), len(tracksB)) distances between tracksA and tracksB according to metric See Also -------- dipy.tracking.streamline.set_number_of_points """ cdef: cnp.npy_intp i, j, lentA, lentB int metric_type if metric=='avg': metric_type = 0 elif metric == 'min': metric_type = 1 elif metric == 'max': metric_type = 2 else: raise ValueError('Metric should be one of avg, min, max') # preprocess tracks cdef: cnp.npy_intp longest_track_len = 0, track_len cnp.npy_intp longest_track_lenA = 0, longest_track_lenB = 0 cnp.ndarray[object, ndim=1] tracksA32 cnp.ndarray[object, ndim=1] tracksB32 cnp.ndarray[cnp.double_t, ndim=2] DM lentA = len(tracksA) lentB = len(tracksB) # for performance issue, we just check the first streamline if len(tracksA[0]) != len(tracksB[0]): w_s = "Streamlines do not have the same number of points. " w_s += "All streamlines need to have the same number of points. " w_s += "Use dipy.tracking.streamline.set_number_of_points to adjust " w_s += "your streamlines" warn(w_s) tracksA32 = np.zeros((lentA,), dtype=object) tracksB32 = np.zeros((lentB,), dtype=object) DM = np.zeros((lentA,lentB), dtype=np.double) # process tracks to predictable memory layout, find longest track for i in range(lentA): tracksA32[i] = np.ascontiguousarray(tracksA[i], dtype=f32_dt) track_len = tracksA32[i].shape[0] if track_len > longest_track_lenA: longest_track_lenA = track_len for i in range(lentB): tracksB32[i] = np.ascontiguousarray(tracksB[i], dtype=f32_dt) track_len = tracksB32[i].shape[0] if track_len > longest_track_lenB: longest_track_lenB = track_len if longest_track_lenB > longest_track_lenA: longest_track_lenA = longest_track_lenB # preallocate buffer array for track distance calculations cdef: cnp.ndarray [cnp.float32_t, ndim=1] distances_buffer cnp.float32_t *t1_ptr cnp.float32_t *t2_ptr cnp.float32_t *min_buffer distances_buffer = np.zeros((longest_track_lenA*2,), dtype=np.float32) min_buffer = cnp.PyArray_DATA(distances_buffer) # cycle over tracks cdef: cnp.ndarray [cnp.float32_t, ndim=2] t1, t2 cnp.npy_intp t1_len, t2_len for i from 0 <= i < lentA: t1 = tracksA32[i] t1_len = cnp.PyArray_DIM(t1, 0) t1_ptr = cnp.PyArray_DATA(t1) for j from 0 <= j < lentB: t2 = tracksB32[j] t2_len = cnp.PyArray_DIM(t2, 0) t2_ptr = cnp.PyArray_DATA(t2) DM[i,j] = czhang(t1_len, t1_ptr, t2_len, t2_ptr, min_buffer, metric_type) return DM @cython.boundscheck(False) @cython.wraparound(False) def bundles_distances_mdf(tracksA, tracksB): """ Calculate distances between list of tracks A and list of tracks B All tracks need to have the same number of points Parameters ---------- tracksA : sequence of tracks as arrays, [(N,3) .. (N,3)] tracksB : sequence of tracks as arrays, [(N,3) .. 
       (N,3)]

    Returns
    -------
    DM : array, shape (len(tracksA), len(tracksB))
        distances between tracksA and tracksB according to metric

    See Also
    --------
    dipy.tracking.streamline.set_number_of_points

    """
    cdef:
        cnp.npy_intp i, j, lentA, lentB

    # preprocess tracks
    cdef:
        cnp.npy_intp longest_track_len = 0, track_len
        longest_track_lenA, longest_track_lenB
        cnp.ndarray[object, ndim=1] tracksA32
        cnp.ndarray[object, ndim=1] tracksB32
        cnp.ndarray[cnp.double_t, ndim=2] DM

    lentA = len(tracksA)
    lentB = len(tracksB)

    # for performance issue, we just check the first streamline
    if len(tracksA[0]) != len(tracksB[0]):
        w_s = "Streamlines do not have the same number of points. "
        w_s += "All streamlines need to have the same number of points. "
        w_s += "Use dipy.tracking.streamline.set_number_of_points to adjust "
        w_s += "your streamlines"
        warn(w_s)

    tracksA32 = np.zeros((lentA,), dtype=object)
    tracksB32 = np.zeros((lentB,), dtype=object)
    DM = np.zeros((lentA,lentB), dtype=np.double)

    # process tracks to predictable memory layout
    for i in range(lentA):
        tracksA32[i] = np.ascontiguousarray(tracksA[i], dtype=f32_dt)
    for i in range(lentB):
        tracksB32[i] = np.ascontiguousarray(tracksB[i], dtype=f32_dt)

    # preallocate buffer array for track distance calculations
    cdef:
        cnp.float32_t *t1_ptr
        cnp.float32_t *t2_ptr
        cnp.float32_t *min_buffer

    # cycle over tracks
    cdef:
        cnp.ndarray [cnp.float32_t, ndim=2] t1, t2
        cnp.npy_intp t1_len, t2_len
        float d[2]

    t_len = tracksA32[0].shape[0]

    for i from 0 <= i < lentA:
        t1 = tracksA32[i]
        #t1_len = t1.shape[0]
        t1_ptr = cnp.PyArray_DATA(t1)
        for j from 0 <= j < lentB:
            t2 = tracksB32[j]
            #t2_len = t2.shape[0]
            t2_ptr = cnp.PyArray_DATA(t2)
            #DM[i,j] = czhang(t1_len, t1_ptr, t2_len, t2_ptr, min_buffer, metric_type)
            track_direct_flip_dist(t1_ptr, t2_ptr,t_len,d)
            if d[0]<d[1]:
                DM[i,j]=d[0]
            else:
                DM[i,j]=d[1]
    return DM


cdef inline cnp.float32_t czhang(cnp.npy_intp t1_len,
                                 cnp.float32_t *track1_ptr,
                                 cnp.npy_intp t2_len,
                                 cnp.float32_t *track2_ptr,
                                 cnp.float32_t *min_buffer,
                                 int metric_type) noexcept nogil:
    cdef:
        cnp.float32_t *min_t2t1
        cnp.float32_t *min_t1t2
    # min_buffer holds both min-distance arrays back to back: the first
    # t2_len entries are the min distances to track2 points, the remaining
    # t1_len entries the min distances to track1 points
    min_t2t1 = min_buffer
    min_t1t2 = min_buffer + t2_len
    min_distances(t1_len, track1_ptr, t2_len, track2_ptr,
                  min_t2t1, min_t1t2)
    cdef:
        cnp.npy_intp t1_pi, t2_pi
        cnp.float32_t mean_t2t1 = 0, mean_t1t2 = 0, dist_val = 0
    for t1_pi from 0<= t1_pi < t1_len:
        mean_t1t2+=min_t1t2[t1_pi]
    mean_t1t2=mean_t1t2/t1_len
    for t2_pi from 0<= t2_pi < t2_len:
        mean_t2t1+=min_t2t1[t2_pi]
    mean_t2t1=mean_t2t1/t2_len
    if metric_type == 0:
        # 'avg' metric
        dist_val=(mean_t2t1+mean_t1t2)/2.0
    elif metric_type == 1:
        # 'min' metric
        if mean_t2t1 < mean_t1t2:
            dist_val=mean_t2t1
        else:
            dist_val=mean_t1t2
    elif metric_type == 2:
        # 'max' metric
        if mean_t2t1 > mean_t1t2:
            dist_val=mean_t2t1
        else:
            dist_val=mean_t1t2
    return dist_val


@cython.cdivision(True)
cdef inline void min_distances(cnp.npy_intp t1_len,
                               cnp.float32_t *track1_ptr,
                               cnp.npy_intp t2_len,
                               cnp.float32_t *track2_ptr,
                               cnp.float32_t *min_t2t1,
                               cnp.float32_t *min_t1t2) noexcept nogil:
    cdef:
        cnp.float32_t *t1_pt
        cnp.float32_t *t2_pt
        cnp.float32_t d0, d1, d2
        cnp.float32_t delta2
        cnp.npy_intp t1_pi, t2_pi
    for t2_pi from 0<= t2_pi < t2_len:
        min_t2t1[t2_pi] = inf
    for t1_pi from 0<= t1_pi < t1_len:
        min_t1t2[t1_pi] = inf
    # pointer to current point in track 1
    t1_pt = track1_ptr
    # calculate min squared distance between each point in the two
    # lines.  Squared distance to delay doing the sqrt until after this
    # speed-critical loop
    for t1_pi from 0<= t1_pi < t1_len:
        # pointer to current point in track 2
        t2_pt = track2_ptr
        for t2_pi from 0<= t2_pi < t2_len:
            d0 = t1_pt[0] - t2_pt[0]
            d1 = t1_pt[1] - t2_pt[1]
            d2 = t1_pt[2] - t2_pt[2]
            delta2 = d0*d0 + d1*d1 + d2*d2
            if delta2 < min_t2t1[t2_pi]:
                min_t2t1[t2_pi]=delta2
            if delta2 < min_t1t2[t1_pi]:
                min_t1t2[t1_pi]=delta2
            t2_pt += 3 # to next point in track 2
        t1_pt += 3 # to next point in track 1
    # sqrt to get Euclidean distance from squared distance
    for t1_pi from 0<= t1_pi < t1_len:
        min_t1t2[t1_pi]=sqrt(min_t1t2[t1_pi])
    for t2_pi from 0<= t2_pi < t2_len:
        min_t2t1[t2_pi]=sqrt(min_t2t1[t2_pi])


def mam_distances(xyz1,xyz2,metric='all'):
    """ Min/Max/Mean Average Minimum Distance between tracks xyz1 and xyz2

    Based on the metrics in Zhang, Correia, Laidlaw 2008
    https://ieeexplore.ieee.org/document/4479455
    which in turn are based on those of Corouge et al.
2004.

    Parameters
    ----------
    xyz1 : array, shape (N1,3), dtype float32
    xyz2 : array, shape (N2,3), dtype float32
        arrays representing x,y,z of the N1 and N2 points of two tracks
    metric : {'avg','min','max','all'}
        Metric to calculate. {'avg','min','max'} return a scalar,
        'all' returns a tuple.

    Returns
    -------
    avg_mcd : float
        average_mean_closest_distance
    min_mcd : float
        minimum_mean_closest_distance
    max_mcd : float
        maximum_mean_closest_distance

    Notes
    -----
    Algorithmic description:

    Let's say we have curves A and B.

    For every point in A calculate the minimum distance from every point
    in B, stored in minAB.

    For every point in B calculate the minimum distance from every point
    in A, stored in minBA.

    Find the average of minAB, stored as avg_minAB, and the average of
    minBA, stored as avg_minBA.

    if metric is 'avg' then return (avg_minAB + avg_minBA)/2.0
    if metric is 'min' then return min(avg_minAB, avg_minBA)
    if metric is 'max' then return max(avg_minAB, avg_minBA)

    """
    cdef:
        cnp.ndarray[cnp.float32_t, ndim=2] track1
        cnp.ndarray[cnp.float32_t, ndim=2] track2
        cnp.npy_intp t1_len, t2_len
    track1 = np.ascontiguousarray(xyz1, dtype=f32_dt)
    t1_len = cnp.PyArray_DIM(track1, 0)
    track2 = np.ascontiguousarray(xyz2, dtype=f32_dt)
    t2_len = cnp.PyArray_DIM(track2, 0)
    # preallocate buffer array for track distance calculations
    cdef:
        cnp.float32_t *min_t2t1
        cnp.float32_t *min_t1t2
        cnp.ndarray[cnp.float32_t, ndim=1] distances_buffer
    distances_buffer = np.zeros((t1_len + t2_len,), dtype=np.float32)
    min_t2t1 = <cnp.float32_t *> cnp.PyArray_DATA(distances_buffer)
    min_t1t2 = min_t2t1 + t2_len
    min_distances(t1_len, <cnp.float32_t *> cnp.PyArray_DATA(track1),
                  t2_len, <cnp.float32_t *> cnp.PyArray_DATA(track2),
                  min_t2t1, min_t1t2)
    cdef:
        cnp.npy_intp t1_pi, t2_pi
        cnp.float32_t mean_t2t1 = 0, mean_t1t2 = 0
    for t1_pi from 0 <= t1_pi < t1_len:
        mean_t1t2 += min_t1t2[t1_pi]
    mean_t1t2 = mean_t1t2 / t1_len
    for t2_pi from 0 <= t2_pi < t2_len:
        mean_t2t1 += min_t2t1[t2_pi]
    mean_t2t1 = mean_t2t1 / t2_len
    if metric == 'all':
        return ((mean_t2t1 + mean_t1t2) / 2.0,
                np.min((mean_t2t1, mean_t1t2)),
                np.max((mean_t2t1, mean_t1t2)))
    elif metric == 'avg':
        return (mean_t2t1 + mean_t1t2) / 2.0
    elif metric == 'min':
        return np.min((mean_t2t1, mean_t1t2))
    elif metric == 'max':
        return np.max((mean_t2t1, mean_t1t2))
    else:
        raise ValueError('Wrong argument for metric')


def minimum_closest_distance(xyz1, xyz2):
    """ Find the minimum distance between two curves xyz1, xyz2

    Parameters
    ----------
    xyz1 : array, shape (N1,3), dtype float32
    xyz2 : array, shape (N2,3), dtype float32
        arrays representing x,y,z of the N1 and N2 points of two tracks

    Returns
    -------
    md : minimum distance

    Notes
    -----
    Algorithmic description:

    Let's say we have curves A and B.

    For every point in A calculate the minimum distance from every point
    in B, stored in minAB.

    For every point in B calculate the minimum distance from every point
    in A, stored in minBA.

    Find the min of minAB, stored in min_minAB, and the min of minBA,
    stored in min_minBA.

    Then return (min_minAB + min_minBA)/2.0

    """
    cdef:
        cnp.ndarray[cnp.float32_t, ndim=2] track1
        cnp.ndarray[cnp.float32_t, ndim=2] track2
        cnp.npy_intp t1_len, t2_len
    track1 = np.ascontiguousarray(xyz1, dtype=f32_dt)
    t1_len = cnp.PyArray_DIM(track1, 0)
    track2 = np.ascontiguousarray(xyz2, dtype=f32_dt)
    t2_len = cnp.PyArray_DIM(track2, 0)
    # preallocate buffer array for track distance calculations
    cdef:
        cnp.float32_t *min_t2t1
        cnp.float32_t *min_t1t2
        cnp.ndarray[cnp.float32_t, ndim=1] distances_buffer
    distances_buffer = np.zeros((t1_len + t2_len,), dtype=np.float32)
    min_t2t1 = <cnp.float32_t *> cnp.PyArray_DATA(distances_buffer)
    min_t1t2 = min_t2t1 + t2_len
    min_distances(t1_len,
cnp.PyArray_DATA(track1), t2_len, cnp.PyArray_DATA(track2), min_t2t1, min_t1t2) cdef: cnp.npy_intp t1_pi, t2_pi double min_min_t2t1 = inf double min_min_t1t2 = inf for t1_pi in range(t1_len): if min_min_t1t2 > min_t1t2[t1_pi]: min_min_t1t2 = min_t1t2[t1_pi] for t2_pi in range(t2_len): if min_min_t2t1 > min_t2t1[t2_pi]: min_min_t2t1 = min_t2t1[t2_pi] return (min_min_t1t2+min_min_t2t1)/2.0 def lee_perpendicular_distance(start0, end0, start1, end1): """ Calculates perpendicular distance metric for the distance between two line segments Based on Lee , Han & Whang SIGMOD07. This function assumes that norm(end0-start0)>norm(end1-start1) i.e. that the first segment will be bigger than the second one. Parameters ---------- start0 : float array(3,) end0 : float array(3,) start1 : float array(3,) end1 : float array(3,) Returns ------- perpendicular_distance: float Notes ----- l0 = np.inner(end0-start0,end0-start0) l1 = np.inner(end1-start1,end1-start1) k0=end0-start0 u1 = np.inner(start1-start0,k0)/l0 u2 = np.inner(end1-start0,k0)/l0 ps = start0+u1*k0 pe = start0+u2*k0 lperp1 = np.sqrt(np.inner(ps-start1,ps-start1)) lperp2 = np.sqrt(np.inner(pe-end1,pe-end1)) if lperp1+lperp2 > 0.: return (lperp1**2+lperp2**2)/(lperp1+lperp2) else: return 0. Examples -------- >>> d = lee_perpendicular_distance([0,0,0],[1,0,0],[3,4,5],[5,4,3]) >>> print('%.6f' % d) 5.787888 """ cdef cnp.ndarray[cnp.float32_t, ndim=1] fvec1,fvec2,fvec3,fvec4 fvec1 = as_float_3vec(start0) fvec2 = as_float_3vec(end0) fvec3 = as_float_3vec(start1) fvec4 = as_float_3vec(end1) return clee_perpendicular_distance( cnp.PyArray_DATA(fvec1), cnp.PyArray_DATA(fvec2), cnp.PyArray_DATA(fvec3), cnp.PyArray_DATA(fvec4)) cdef float clee_perpendicular_distance(float *start0, float *end0,float *start1, float *end1): """ This function assumes that norm(end0-start0)>norm(end1-start1) """ cdef: float l0,l1,ltmp,u1,u2,lperp1,lperp2 float *s_tmp float *e_tmp float k0[3] float ps[3] float pe[3] float ps1[3] float pe1[3] float tmp[3] csub_3vecs(end0,start0,k0) l0 = cinner_3vecs(k0,k0) csub_3vecs(end1,start1,tmp) l1 = cinner_3vecs(tmp, tmp) #csub_3vecs(end0,start0,k0) #u1 = np.inner(start1-start0,k0)/l0 #u2 = np.inner(end1-start0,k0)/l0 csub_3vecs(start1,start0,tmp) u1 = cinner_3vecs(tmp,k0)/l0 csub_3vecs(end1,start0,tmp) u2 = cinner_3vecs(tmp,k0)/l0 cmul_3vec(u1,k0,tmp) cadd_3vecs(start0,tmp,ps) cmul_3vec(u2,k0,tmp) cadd_3vecs(start0,tmp,pe) #lperp1 = np.sqrt(np.inner(ps-start1,ps-start1)) #lperp2 = np.sqrt(np.inner(pe-end1,pe-end1)) csub_3vecs(ps,start1,ps1) csub_3vecs(pe,end1,pe1) lperp1 = sqrt(cinner_3vecs(ps1,ps1)) lperp2 = sqrt(cinner_3vecs(pe1,pe1)) if lperp1+lperp2 > 0.: return (lperp1*lperp1+lperp2*lperp2)/(lperp1+lperp2) else: return 0. def lee_angle_distance(start0, end0, start1, end1): """ Calculates angle distance metric for the distance between two line segments Based on Lee , Han & Whang SIGMOD07. This function assumes that norm(end0-start0)>norm(end1-start1) i.e. that the first segment will be bigger than the second one. 
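    Geometrically, the returned value is the length of the component of the
    shorter segment that is perpendicular to the longer one, i.e.
    ``norm(end1 - start1) * sin(theta)`` where ``theta`` is the angle
    between the two segments.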
Parameters ---------- start0 : float array(3,) end0 : float array(3,) start1 : float array(3,) end1 : float array(3,) Returns ------- angle_distance : float Notes ----- l_0 = np.inner(end0-start0,end0-start0) l_1 = np.inner(end1-start1,end1-start1) cos_theta_squared = np.inner(end0-start0,end1-start1)**2/ (l_0*l_1) return np.sqrt((1-cos_theta_squared)*l_1) Examples -------- >>> lee_angle_distance([0,0,0],[1,0,0],[3,4,5],[5,4,3]) 2.0 """ cdef cnp.ndarray[cnp.float32_t, ndim=1] fvec1,fvec2,fvec3,fvec4 fvec1 = as_float_3vec(start0) fvec2 = as_float_3vec(end0) fvec3 = as_float_3vec(start1) fvec4 = as_float_3vec(end1) return clee_angle_distance( cnp.PyArray_DATA(fvec1), cnp.PyArray_DATA(fvec2), cnp.PyArray_DATA(fvec3), cnp.PyArray_DATA(fvec4)) cdef float clee_angle_distance(float *start0, float *end0,float *start1, float *end1): """ This function assumes that norm(end0-start0)>norm(end1-start1) """ cdef: float l0,l1,ltmp,cos_theta_squared float *s_tmp float *e_tmp float k0[3] float k1[3] float tmp[3] csub_3vecs(end0,start0,k0) l0 = cinner_3vecs(k0,k0) #print l0 csub_3vecs(end1,start1,k1) l1 = cinner_3vecs(k1, k1) #print l1 ltmp=cinner_3vecs(k0,k1) cos_theta_squared = (ltmp*ltmp)/ (l0*l1) #print cos_theta_squared return sqrt((1-cos_theta_squared)*l1) def approx_polygon_track(xyz,alpha=0.392): """ Fast and simple trajectory approximation algorithm by Eleftherios and Ian It will reduce the number of points of the track by keeping intact the start and endpoints of the track and trying to remove as many points as possible without distorting much the shape of the track Parameters ---------- xyz : array(N,3) initial trajectory alpha : float smoothing parameter (<0.392 smoother, >0.392 rougher) if the trajectory was a smooth circle then with alpha =0.393 ~=pi/8. the circle would be approximated with an decahexagon if alpha = 0.7853 ~=pi/4. with an octagon. Returns ------- characteristic_points: list of M array(3,) points Examples -------- Approximating a helix: >>> t=np.linspace(0,1.75*2*np.pi,100) >>> x = np.sin(t) >>> y = np.cos(t) >>> z = t >>> xyz=np.vstack((x,y,z)).T >>> xyza = approx_polygon_track(xyz) >>> len(xyza) < len(xyz) True Notes ----- Assuming that a good approximation for a circle is an octagon then that means that the points of the octagon will have angle alpha = 2*pi/8 = pi/4 . We calculate the angle between every two neighbour segments of a trajectory and if the angle is higher than pi/4 we choose that point as a characteristic point otherwise we move at the next point. """ cdef : cnp.npy_intp mid_index cnp.ndarray[cnp.float32_t, ndim=2] track float *fvec0 float *fvec1 float *fvec2 object characteristic_points cnp.npy_intp t_len double angle,tmp, denom float vec0[3] float vec1[3] angle=alpha track = np.ascontiguousarray(xyz, dtype=f32_dt) t_len=len(track) characteristic_points=[track[0]] mid_index = 1 angle=0 while mid_index < t_len-1: #fvec0 = as_float_3vec(track[mid_index-1]) # cnp.PyArray_DATA(track[0]) fvec0 = asfp(track[mid_index-1]) fvec1 = asfp(track[mid_index]) fvec2 = asfp(track[mid_index+1]) #csub_3vecs( cnp.PyArray_DATA(fvec1), cnp.PyArray_DATA(fvec0),vec0) csub_3vecs(fvec1,fvec0,vec0) csub_3vecs(fvec2,fvec1,vec1) denom = cnorm_3vec(vec0)*cnorm_3vec(vec1) tmp=fabs(acos(cinner_3vecs(vec0,vec1)/ denom)) if denom else 0 if dpy_isnan(tmp) : angle+=0. 
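            # (acos() can return NaN through rounding when consecutive
            # segments are nearly parallel and the cosine argument drifts
            # just past +/-1; the guard above treats that as a zero angle)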
else: angle+=tmp if angle > alpha: characteristic_points.append(track[mid_index]) angle=0 mid_index+=1 characteristic_points.append(track[-1]) return np.array(characteristic_points) def approximate_mdl_trajectory(xyz, alpha=1.): """ Implementation of Lee et al Approximate Trajectory Partitioning Algorithm This is base on the minimum description length principle Parameters ---------- xyz : array(N,3) initial trajectory alpha : float smoothing parameter (>1 smoother, <1 rougher) Returns ------- characteristic_points : list of M array(3,) points """ cdef : int start_index,length,current_index, i double cost_par,cost_nopar,alphac object characteristic_points cnp.npy_intp t_len cnp.ndarray[cnp.float32_t, ndim=2] track float tmp[3] cnp.ndarray[cnp.float32_t, ndim=1] fvec1,fvec2,fvec3,fvec4 track = np.ascontiguousarray(xyz, dtype=f32_dt) t_len=len(track) alphac=alpha characteristic_points=[xyz[0]] start_index = 0 length = 2 #print t_len while start_index+length < t_len-1: current_index = start_index+length fvec1 = as_float_3vec(track[start_index]) fvec2 = as_float_3vec(track[current_index]) # L(H) csub_3vecs( cnp.PyArray_DATA(fvec2), cnp.PyArray_DATA(fvec1), tmp) cost_par=dpy_log2(sqrt(cinner_3vecs(tmp,tmp))) cost_nopar=0 #print start_index,current_index # L(D|H) #for i in range(start_index+1,current_index):#+1): for i in range(start_index,current_index+1): #print i fvec3 = as_float_3vec(track[i]) fvec4 = as_float_3vec(track[i+1]) cost_par += dpy_log2(clee_perpendicular_distance( cnp.PyArray_DATA(fvec3), cnp.PyArray_DATA(fvec4), cnp.PyArray_DATA(fvec1), cnp.PyArray_DATA(fvec2))) cost_par += dpy_log2(clee_angle_distance( cnp.PyArray_DATA(fvec3), cnp.PyArray_DATA(fvec4), cnp.PyArray_DATA(fvec1), cnp.PyArray_DATA(fvec2))) csub_3vecs( cnp.PyArray_DATA(fvec4), cnp.PyArray_DATA(fvec3), tmp) cost_nopar += dpy_log2(cinner_3vecs(tmp,tmp)) cost_nopar /= 2 #print cost_par, cost_nopar, start_index,length if alphac*cost_par>cost_nopar: characteristic_points.append(track[current_index-1]) start_index = current_index-1 length = 2 else: length+=1 characteristic_points.append(track[-1]) return np.array(characteristic_points) def intersect_segment_cylinder(sa,sb,p,q,r): """ Intersect Segment S(t) = sa +t(sb-sa), 0 <=t<= 1 against cylinder specified by p,q and r See p.197 from Real Time Collision Detection by C. Ericson Examples -------- Define cylinder using a segment defined by >>> p=np.array([0,0,0],dtype=np.float32) >>> q=np.array([1,0,0],dtype=np.float32) >>> r=0.5 Define segment >>> sa=np.array([0.5,1 ,0],dtype=np.float32) >>> sb=np.array([0.5,-1,0],dtype=np.float32) Intersection >>> intersect_segment_cylinder(sa, sb, p, q, r) (1.0, 0.25, 0.75) """ cdef: float *csa float *csb float *cp float *cq float cr float ct[2] csa = asfp(sa) csb = asfp(sb) cp = asfp(p) cq = asfp(q) cr=r ct[0]=-100 ct[1]=-100 tmp = cintersect_segment_cylinder(csa,csb,cp, cq, cr, ct) return tmp, ct[0], ct[1] cdef float cintersect_segment_cylinder(float *sa,float *sb,float *p, float *q, float r, float *t): """ Intersect Segment S(t) = sa +t(sb-sa), 0 <=t<= 1 against cylinder specified by p,q and r Look p.197 from Real Time Collision Detection C. Ericson Returns ------- inter : bool 0 no intersection 1 intersection """ cdef: float d[3] float m[3] float n[3] float md,nd,dd, nn, mn, a, k, c,b, discr float epsilon_float=5.96e-08 csub_3vecs(q,p,d) csub_3vecs(sa,p,m) csub_3vecs(sb,sa,n) md=cinner_3vecs(m,d) nd=cinner_3vecs(n,d) dd=cinner_3vecs(d,d) #test if segment fully outside either endcap of cylinder if md < 0. 
and md + nd < 0.: return 0 #segment outside p side if md > dd and md + nd > dd: return 0 #segment outside q side nn=cinner_3vecs(n,n) mn=cinner_3vecs(m,n) a=dd*nn-nd*nd k=cinner_3vecs(m,m) -r*r c=dd*k-md*md if fabs(a) < epsilon_float: #segment runs parallel to cylinder axis if c>0.: return 0. # segment lies outside cylinder if md < 0.: t[0]=-mn/nn # intersect against p endcap elif md > dd : t[0]=(nd-mn)/nn # intersect against q endcap else: t[0]=0. # lies inside cylinder return 1 b=dd*mn -nd*md discr=b*b-a*c if discr < 0.: return 0. # no real roots ; no intersection t[0]=(-b-sqrt(discr))/a t[1]=(-b+sqrt(discr))/a if t[0]<0. or t[0] > 1. : return 0. # intersection lies outside segment if md + t[0]* nd < 0.: #intersection outside cylinder on 'p' side if nd <= 0. : return 0. # segment pointing away from endcap t[0]=-md/nd #keep intersection if Dot(S(t)-p,S(t)-p) <= r^2 if k+2*t[0]*(mn+t[0]*nn) <=0.: return 1. elif md+t[0]*nd > dd : #intersection outside cylinder on 'q' side if nd >= 0.: return 0. # segment pointing away from endcap t[0]= (dd-md)/nd #keep intersection if Dot(S(t)-q,S(t)-q) <= r^2 if k+dd-2*md+t[0]*(2*(mn-nd)+t[0]*nn) <= 0.: return 1. # segment intersects cylinder between the endcaps; t is correct return 1. def point_segment_sq_distance(a, b, c): """ Calculate the squared distance from a point c to a finite line segment ab. Examples -------- >>> a=np.array([0,0,0], dtype=np.float32) >>> b=np.array([1,0,0], dtype=np.float32) >>> c=np.array([0,1,0], dtype=np.float32) >>> point_segment_sq_distance(a, b, c) 1.0 >>> c = np.array([0,3,0], dtype=np.float32) >>> point_segment_sq_distance(a,b,c) 9.0 >>> c = np.array([-1,1,0], dtype=np.float32) >>> point_segment_sq_distance(a, b, c) 2.0 """ cdef: float *ca float *cb float *cc float cr float ct[2] ca = asfp(a) cb = asfp(b) cc = asfp(c) return cpoint_segment_sq_dist(ca, cb, cc) @cython.cdivision(True) cdef inline float cpoint_segment_sq_dist(float * a, float * b, float * c) noexcept nogil: """ Calculate the squared distance from a point c to a line segment ab. 
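    The point is projected onto the segment's supporting line and the
    projection is clamped to the segment's endpoints. A rough NumPy
    equivalent of the same logic (for reference only; names are
    illustrative)::

        import numpy as np

        def point_segment_sq_dist_np(a, b, c):
            ab, ac, bc = b - a, c - a, c - b
            e = np.dot(ac, ab)
            if e <= 0:        # c projects before a
                return np.dot(ac, ac)
            f = np.dot(ab, ab)
            if e >= f:        # c projects past b
                return np.dot(bc, bc)
            # c projects onto the interior of ab
            return np.dot(ac, ac) - e * e / f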
""" cdef: float ab[3] float ac[3] float bc[3] float e,f csub_3vecs(b,a,ab) csub_3vecs(c,a,ac) csub_3vecs(c,b,bc) e = cinner_3vecs(ac, ab) #Handle cases where c projects outside ab if e <= 0.: return cinner_3vecs(ac, ac) f = cinner_3vecs(ab, ab) if e >= f : return cinner_3vecs(bc, bc) #Handle case where c projects onto ab return cinner_3vecs(ac, ac) - e * e / f def track_dist_3pts(tracka,trackb): """ Calculate the euclidean distance between two 3pt tracks Both direct and flip distances are calculated but only the smallest is returned Parameters ---------- a : array, shape (3,3) a three point track b : array, shape (3,3) a three point track Returns ------- dist :float Examples -------- >>> a = np.array([[0,0,0],[1,0,0,],[2,0,0]]) >>> b = np.array([[3,0,0],[3.5,1,0],[4,2,0]]) >>> c = track_dist_3pts(a, b) >>> print('%.6f' % c) 2.721573 """ cdef cnp.ndarray[cnp.float32_t, ndim=2] a,b cdef float d[2] a=np.ascontiguousarray(tracka,dtype=f32_dt) b=np.ascontiguousarray(trackb,dtype=f32_dt) track_direct_flip_3dist(asfp(a[0]),asfp(a[1]),asfp(a[2]), asfp(b[0]),asfp(b[1]),asfp(b[2]),d) if d[0]dist/rows out[1]=distf/rows @cython.cdivision(True) cdef inline void track_direct_flip_3dist(float *a1, float *b1,float *c1,float *a2, float *b2, float *c2, float *out) noexcept nogil: """ Calculate the euclidean distance between two 3pt tracks both direct and flip are given as output Parameters ---------- a1,b1,c1 : 3 float[3] arrays representing the first track a2,b2,c2 : 3 float[3] arrays representing the second track Returns ------- out : a float[2] array having the euclidean distance and the flipped euclidean distance """ cdef: cnp.npy_intp i float tmp1=0,tmp2=0,tmp3=0,tmp1f=0,tmp3f=0 #for i in range(3): for i from 0<=i<3: tmp1=tmp1+(a1[i]-a2[i])*(a1[i]-a2[i]) tmp2=tmp2+(b1[i]-b2[i])*(b1[i]-b2[i]) tmp3=tmp3+(c1[i]-c2[i])*(c1[i]-c2[i]) tmp1f=tmp1f+(a1[i]-c2[i])*(a1[i]-c2[i]) tmp3f=tmp3f+(c1[i]-a2[i])*(c1[i]-a2[i]) out[0]=(sqrt(tmp1)+sqrt(tmp2)+sqrt(tmp3))/3.0 out[1]=(sqrt(tmp1f)+sqrt(tmp2)+sqrt(tmp3f))/3.0 #out[0]=(tmp1+tmp2+tmp3)/3.0 #out[1]=(tmp1f+tmp2+tmp3f)/3.0 ctypedef struct LSC_Cluster: long *indices float *hidden long N @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def local_skeleton_clustering(tracks, d_thr=10): r"""Efficient tractography clustering Every track can needs to have the same number of points. Use `dipy.tracking.metrics.downsample` to restrict the number of points Parameters ---------- tracks : sequence of tracks as arrays, shape (N,3) .. (N,3) where N=points d_thr : float average euclidean distance threshold Returns ------- C : dict Clusters. Examples -------- >>> tracks=[np.array([[0,0,0],[1,0,0,],[2,0,0]]), ... np.array([[3,0,0],[3.5,1,0],[4,2,0]]), ... np.array([[3.2,0,0],[3.7,1,0],[4.4,2,0]]), ... np.array([[3.4,0,0],[3.9,1,0],[4.6,2,0]]), ... np.array([[0,0.2,0],[1,0.2,0],[2,0.2,0]]), ... np.array([[2,0.2,0],[1,0.2,0],[0,0.2,0]]), ... np.array([[0,0,0],[0,1,0],[0,2,0]])] >>> C = local_skeleton_clustering(tracks, d_thr=0.5) Notes ----- The distance calculated between two tracks:: t_1 t_2 0* a *0 \ | \ | 1* | | b *1 | \ 2* \ c *2 is equal to $(a+b+c)/3$ where $a$ the euclidean distance between ``t_1[0]`` and ``t_2[0]``, $b$ between ``t_1[1]`` and ``t_2[1]`` and $c$ between ``t_1[2]`` and ``t_2[2]``. Also the same with t2 flipped (so ``t_1[0]`` compared to ``t_2[2]`` etc). 
Visualization: It is possible to visualize the clustering C from the example above using the dipy.viz module:: from dipy.viz import window, actor scene = window.Scene() for c in C: color=np.random.rand(3) for i in C[c]['indices']: scene.add(actor.line(tracks[i],color)) window.show(scene) See Also -------- dipy.tracking.metrics.downsample """ cdef: cnp.ndarray[cnp.float32_t, ndim=2] track LSC_Cluster *cluster long lent = 0,lenC = 0, dim = 0, points=0 long i=0, j=0, c=0, i_k=0, rows=0 ,cit=0 float *ptr float *hid float *alld float d[2] float m_d, cd_thr long *flip points=len(tracks[0]) dim = points*3 rows = points cd_thr = d_thr #Allocate and copy memory for first cluster cluster=realloc(NULL,sizeof(LSC_Cluster)) cluster[0].indices=realloc(NULL,sizeof(long)) cluster[0].hidden=realloc(NULL,dim*sizeof(float)) cluster[0].indices[0]=0 track=np.ascontiguousarray(tracks[0],dtype=f32_dt) ptr= cnp.PyArray_DATA(track) for i from 0<=irealloc(NULL,dim*sizeof(float)) #Work with the rest of the tracks lent=len(tracks) for it in range(1,lent): track=np.ascontiguousarray(tracks[it],dtype=f32_dt) ptr= cnp.PyArray_DATA(track) cit=it with nogil: alld=calloc(lenC,sizeof(float)) flip=calloc(lenC,sizeof(long)) for k from 0<=kcluster[k].N #track_direct_flip_3dist(&ptr[0],&ptr[3],&ptr[6],&hid[0],&hid[3],&hid[6],d) #track_direct_flip_3dist(ptr,ptr+3,ptr+6,hid,hid+3,hid+6,d) track_direct_flip_dist(ptr, hid,rows,d) if d[1]realloc(cluster[i_k].indices,cluster[i_k].N*sizeof(long)) cluster[i_k].indices[cluster[i_k].N-1]=cit else:#New cluster added lenC+=1 cluster=realloc(cluster,lenC*sizeof(LSC_Cluster)) cluster[lenC-1].indices=realloc(NULL,sizeof(long)) cluster[lenC-1].hidden=realloc(NULL,dim*sizeof(float)) cluster[lenC-1].indices[0]=cit for i from 0<=i>> tracks=[np.array([[0,0,0],[1,0,0,],[2,0,0]]), ... np.array([[3,0,0],[3.5,1,0],[4,2,0]]), ... np.array([[3.2,0,0],[3.7,1,0],[4.4,2,0]]), ... np.array([[3.4,0,0],[3.9,1,0],[4.6,2,0]]), ... np.array([[0,0.2,0],[1,0.2,0],[2,0.2,0]]), ... np.array([[2,0.2,0],[1,0.2,0],[0,0.2,0]]), ... np.array([[0,0,0],[0,1,0],[0,2,0]])] >>> C=local_skeleton_clustering_3pts(tracks,d_thr=0.5) Notes ----- It is possible to visualize the clustering C from the example above using the fvtk module:: r=fvtk.ren() for c in C: color=np.random.rand(3) for i in C[c]['indices']: fvtk.add(r,fos.line(tracks[i],color)) fvtk.show(r) """ cdef : cnp.ndarray[cnp.float32_t, ndim=2] track cnp.ndarray[cnp.float32_t, ndim=2] h int lent,k,it float d[2] #float d_sq=d_thr**2 lent=len(tracks) #Network C C={0:{'indices':[0],'hidden':tracks[0].copy(),'N':1}} ts=np.zeros((3,3),dtype=np.float32) #for (it,t) in enumerate(tracks[1:]): for it in range(1,lent): track=np.ascontiguousarray(tracks[it],dtype=f32_dt) lenC=len(C.keys()) #if it%1000==0: # print it,lenC alld=np.zeros(lenC) flip=np.zeros(lenC) for k in range(lenC): h=np.ascontiguousarray(C[k]['hidden']/C[k]['N'],dtype=f32_dt) #print track #print h track_direct_flip_3dist( asfp(track[0]),asfp(track[1]),asfp(track[2]), asfp(h[0]), asfp(h[1]),asfp(h[2]),d) #d=np.sum(np.sqrt(np.sum((t-h)**2,axis=1)))/3.0 #ts[0]=t[-1];ts[1]=t[1];ts[-1]=t[0] #ds=np.sum(np.sqrt(np.sum((ts-h)**2,axis=1)))/3.0 #print d[0],d[1] if d[1]>> tracks=[np.array([[0,0,0],[1,0,0,],[2,0,0]],dtype=np.float32), ... np.array([[3,0,0],[3.5,1,0],[4,2,0]],dtype=np.float32), ... np.array([[3.2,0,0],[3.7,1,0],[4.4,2,0]],dtype=np.float32), ... np.array([[3.4,0,0],[3.9,1,0],[4.6,2,0]],dtype=np.float32), ... np.array([[0,0.2,0],[1,0.2,0],[2,0.2,0]],dtype=np.float32), ... 
np.array([[2,0.2,0],[1,0.2,0],[0,0.2,0]],dtype=np.float32), ... np.array([[0,0,0],[0,1,0],[0,2,0]],dtype=np.float32), ... np.array([[0.2,0,0],[0.2,1,0],[0.2,2,0]],dtype=np.float32), ... np.array([[-0.2,0,0],[-0.2,1,0],[-0.2,2,0]],dtype=np.float32)] >>> C = larch_3split(tracks, None, 0.5) Here is an example of how to visualize the clustering above:: from dipy.viz import window, actor scene = window.Scene() scene.add(actor.line(tracks,window.colors.red)) window.show(scene) for c in C: color=np.random.rand(3) for i in C[c]['indices']: scene.add(actor.line(tracks[i],color)) window.show(scene) for c in C: scene.add(actor.line(C[c]['rep3']/C[c]['N'], window.colors.white)) window.show(scene) """ cdef: cnp.ndarray[cnp.float32_t, ndim=2] track cnp.ndarray[cnp.float32_t, ndim=2] h int lent,k,it float d[2] lent=len(tracks) if indices is None: C={0:{'indices':[0],'rep3':tracks[0].copy(),'N':1}} itrange=range(1,lent) else: C={0:{'indices':[indices[0]],'rep3':tracks[indices[0]].copy(),'N':1}} itrange=indices[1:] ts=np.zeros((3,3),dtype=np.float32) for it in itrange: track=np.ascontiguousarray(tracks[it],dtype=f32_dt) lenC=len(C.keys()) alld=np.zeros(lenC) flip=np.zeros(lenC) for k in range(lenC): h=np.ascontiguousarray(C[k]['rep3']/C[k]['N'],dtype=f32_dt) track_direct_flip_3dist(asfp(track[0]),asfp(track[1]),asfp(track[2]), asfp(h[0]), asfp(h[1]), asfp(h[2]),d) if d[1]>> t=np.random.rand(10,3).astype(np.float32) >>> p=np.array([0.5,0.5,0.5],dtype=np.float32) >>> point_track_sq_distance_check(t,p,2**2) True >>> t=np.array([[0,0,0],[1,1,1],[2,2,2]],dtype='f4') >>> p=np.array([-1,-1.,-1],dtype='f4') >>> point_track_sq_distance_check(t,p,.2**2) False >>> point_track_sq_distance_check(t,p,2**2) True """ cdef: float *t= cnp.PyArray_DATA(track) float *p= cnp.PyArray_DATA(point) float a[3] float b[3] int tlen = len(track) int curr = 0 float dist = 0 cnp.npy_intp i int intersects = 0 with nogil: for i from 0<=ia,b,p) if dist<=sq_dist_thr: intersects=1 break if intersects==1: return True else: return False def track_roi_intersection_check(cnp.ndarray[float,ndim=2] track, cnp.ndarray[float,ndim=2] roi, double sq_dist_thr): """ Check if a track is intersecting a region of interest Parameters ---------- track: array,float32, shape (N,3) roi: array,float32, shape (M,3) sq_dist_thr: double, threshold, check squared euclidean distance from every roi point Returns ------- bool: True, if sq_distance <= sq_dist_thr, otherwise False. 
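    Notes
    -----
    The test is segment-wise: every consecutive pair of track points is
    checked against every ROI point using the squared point-to-segment
    distance, so a track can intersect the ROI between two of its points.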
Examples -------- >>> roi=np.array([[0,0,0],[1,0,0],[2,0,0]],dtype='f4') >>> t=np.array([[0,0,0],[1,1,1],[2,2,2]],dtype='f4') >>> track_roi_intersection_check(t,roi,1) True >>> track_roi_intersection_check(t,np.array([[10,0,0]],dtype='f4'),1) False """ cdef: float *t= cnp.PyArray_DATA(track) float *r= cnp.PyArray_DATA(roi) float a[3] float b[3] float p[3] int tlen = len(track) int rlen = len(roi) int curr = 0 int currp = 0 float dist = 0 cnp.npy_intp i,j int intersects=0 with nogil: for i from 0<=ia,b,p) if dist<=sq_dist_thr: intersects=1 break if intersects==1: break if intersects==1: return True else: return False dipy-1.11.0/dipy/tracking/fbcmeasures.pyx000066400000000000000000000345441476546756600204130ustar00rootroot00000000000000import numpy as np cimport numpy as cnp cimport cython from safe_openmp cimport have_openmp from cython.parallel import parallel, prange, threadid from scipy.spatial import KDTree from scipy.interpolate import interp1d from dipy.utils.omp import determine_num_threads from dipy.utils.omp cimport set_num_threads, restore_default_num_threads cdef class FBCMeasures: cdef int [:] streamline_length cdef double [:, :, :] streamline_points cdef double [:, :] streamlines_lfbc cdef double [:] streamlines_rfbc def __init__(self, streamlines, kernel, min_fiberlength=10, max_windowsize=7, num_threads=None, verbose=False): """ Compute the fiber to bundle coherence measures for a set of streamlines. See :footcite:p`Meesters2016b` and :footcite:p:`Portegies2015b` for further details about the method. Parameters ---------- streamlines : list A collection of streamlines, each n by 3, with n being the number of nodes in the fiber. kernel : Kernel object A diffusion kernel object created from EnhancementKernel. min_fiberlength : int Fibers with fewer points than minimum_length are excluded from FBC computation. max_windowsize : int The maximal window size used to calculate the average LFBC region num_threads : int, optional Number of threads to be used for OpenMP parallelization. If None (default) the value of OMP_NUM_THREADS environment variable is used if it is set, otherwise all available threads are used. If < 0 the maximal number of threads minus |num_threads + 1| is used (enter -1 to use as many threads as possible). 0 raises an error. verbose : boolean Enable verbose mode. References ---------- .. footbibliography:: """ self.compute(streamlines, kernel, min_fiberlength=min_fiberlength, max_windowsize=max_windowsize, num_threads=num_threads, verbose=verbose) def get_points_rfbc_thresholded(self, threshold, emphasis=.5, verbose=False): """ Set a threshold on the RFBC to remove spurious fibers. Parameters ---------- threshold : float The threshold to set on the RFBC, should be within 0 and 1. emphasis : float Enhances the coloring of the fibers by LFBC. Increasing emphasis will stress spurious fibers by logarithmic weighting. verbose : boolean Prints info about the found RFBC for the set of fibers such as median, mean, min and max values. 
Returns ------- output : tuple with 3 elements The output contains: 1) a collection of streamlines, each n by 3, with n being the number of nodes in the fiber that remain after filtering 2) the r,g,b values of the local fiber to bundle coherence (LFBC) 3) the relative fiber to bundle coherence (RFBC) """ if verbose: print("median RFBC: " + str(np.median(self.streamlines_rfbc))) print("mean RFBC: " + str(np.mean(self.streamlines_rfbc))) print("min RFBC: " + str(np.min(self.streamlines_rfbc))) print("max RFBC: " + str(np.max(self.streamlines_rfbc))) # logarithmic transform of color values to emphasize spurious fibers minval = np.nanmin(self.streamlines_lfbc) maxval = np.nanmax(self.streamlines_lfbc) lfbc_log = np.log((self.streamlines_lfbc - minval) / (maxval - minval + 10e-10) + emphasis) minval = np.nanmin(lfbc_log) maxval = np.nanmax(lfbc_log) lfbc_log = (lfbc_log - minval) / (maxval - minval) # define color interpolation functions x = np.linspace(0, 1, num=4, endpoint=True) r = np.array([1, 1, 0, 0]) g = np.array([1, 0, 0, 1]) b = np.array([0, 0, 1, 1]) fr = interp1d(x, r, bounds_error=False, fill_value=0) fg = interp1d(x, g, bounds_error=False, fill_value=0) fb = interp1d(x, b, bounds_error=False, fill_value=0) # select fibers above the RFBC threshold streamline_out = [] color_out = [] rfbc_out = [] for i in range(self.streamlines_rfbc.shape[0]): rfbc = self.streamlines_rfbc[i] lfbc = lfbc_log[i] if rfbc > threshold: fiber = np.array(self.streamline_points[i]) fiber = fiber[0:self.streamline_length[i] - 1] streamline_out.append(fiber) rfbc_out.append(rfbc) lfbc = lfbc[0:self.streamline_length[i] - 1] lfbc_colors = np.transpose([fr(lfbc), fg(lfbc), fb(lfbc)]) color_out.append(lfbc_colors.tolist()) return streamline_out, color_out, rfbc_out @cython.wraparound(False) @cython.boundscheck(False) @cython.nonecheck(False) @cython.cdivision(True) cdef void compute(self, py_streamlines, kernel, min_fiberlength, max_windowsize, num_threads=None, verbose=False): """ Compute the fiber to bundle coherence measures for a set of streamlines. Parameters ---------- py_streamlines : list A collection of streamlines, each n by 3, with n being the number of nodes in the fiber. kernel : Kernel object A diffusion kernel object created from EnhancementKernel. min_fiberlength : int Fibers with fewer points than minimum_length are excluded from FBC computation. max_windowsize : int The maximal window size used to calculate the average LFBC region num_threads : int, optional Number of threads to be used for OpenMP parallelization. If None (default) the value of OMP_NUM_THREADS environment variable is used if it is set, otherwise all available threads are used. If < 0 the maximal number of threads minus |num_threads + 1| is used (enter -1 to use as many threads as possible). 0 raises an error. verbose : boolean Enable verbose mode. """ cdef: cnp.npy_intp num_fibers, max_length, dim double [:, :, :] streamlines int [:] streamlines_length double [:, :, :] streamlines_tangent int [:, :] streamlines_nearestp double [:, :] streamline_scores cnp.npy_intp line_id = 0 cnp.npy_intp point_id = 0 cnp.npy_intp line_id2 = 0 cnp.npy_intp point_id2 = 0 cnp.npy_intp dims double score double [:] score_mp int [:] xd_mp, yd_mp, zd_mp cnp.npy_intp xd, yd, zd, N, hn double [:, :, :, :, ::1] lut cnp.npy_intp threads_to_use = -1 threads_to_use = determine_num_threads(num_threads) set_num_threads(threads_to_use) # if the fibers are too short FBC measures cannot be applied, # remove these. 
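        # (lengths are recomputed after filtering so that every per-fiber
        # array allocated below matches the retained set of streamlines)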
streamlines_length = np.array([x.shape[0] for x in py_streamlines], dtype=np.int32) min_length = min(streamlines_length) if min_length < min_fiberlength: print("The minimum fiber length is 10 points. \ Shorter fibers were found and removed.") py_streamlines = [x for x in py_streamlines if x.shape[0] >= min_fiberlength] streamlines_length = np.array([x.shape[0] for x in py_streamlines], dtype=np.int32) num_fibers = len(py_streamlines) self.streamline_length = streamlines_length max_length = max(streamlines_length) dim = 3 # get lookup table info lut = kernel.get_lookup_table() N = lut.shape[2] hn = (N-1) / 2 # prepare numpy arrays for speed streamlines = np.zeros((num_fibers, max_length, dim), dtype=np.float64) * np.nan streamlines_tangents = np.zeros((num_fibers, max_length, dim), dtype=np.float64) streamlines_nearestp = np.zeros((num_fibers, max_length), dtype=np.int32) streamline_scores = np.zeros((num_fibers, max_length), dtype=np.float64) * np.nan # copy python streamlines into numpy array for line_id in range(num_fibers): for point_id in range(streamlines_length[line_id]): for dims in range(3): streamlines[line_id, point_id, dims] = \ py_streamlines[line_id][point_id][dims] self.streamline_points = streamlines # compute tangents for line_id in range(num_fibers): for point_id in range(streamlines_length[line_id] - 1): tangent = np.subtract(streamlines[line_id, point_id + 1], streamlines[line_id, point_id]) streamlines_tangents[line_id, point_id] = \ np.divide(tangent, np.sqrt(np.dot(tangent, tangent))) # estimate which kernel LUT index corresponds to angles tree = KDTree(kernel.get_orientations()) for line_id in range(num_fibers): for point_id in range(streamlines_length[line_id] - 1): streamlines_nearestp[line_id, point_id] = \ tree.query(streamlines[line_id, point_id])[1] # arrays for parallel computing score_mp = np.zeros(num_fibers) xd_mp = np.zeros(num_fibers, dtype=np.int32) yd_mp = np.zeros(num_fibers, dtype=np.int32) zd_mp = np.zeros(num_fibers, dtype=np.int32) if verbose: if have_openmp: print("Running in parallel!") else: print("No OpenMP...") # compute fiber LFBC measures with nogil: for line_id in prange(num_fibers, schedule='guided'): for point_id in range(streamlines_length[line_id] - 1): score_mp[line_id] = 0.0 for line_id2 in range(num_fibers): # skip lfbc computation with itself if line_id == line_id2: continue for point_id2 in range(streamlines_length[line_id2] - 1): # compute displacement xd_mp[line_id] = \ int(streamlines[line_id, point_id, 0] - streamlines[line_id2, point_id2, 0] + 0.5) yd_mp[line_id] = \ int(streamlines[line_id, point_id, 1] - streamlines[line_id2, point_id2, 1] + 0.5) zd_mp[line_id] = \ int(streamlines[line_id, point_id, 2] - streamlines[line_id2, point_id2, 2] + 0.5) # if position is outside the kernel bounds, skip if xd_mp[line_id] > hn or -xd_mp[line_id] > hn or \ yd_mp[line_id] > hn or -yd_mp[line_id] > hn or \ zd_mp[line_id] > hn or -zd_mp[line_id] > hn: continue # grab kernel value from LUT score_mp[line_id] += \ lut[streamlines_nearestp[line_id, point_id], streamlines_nearestp[line_id2, point_id2], hn+xd_mp[line_id], hn+yd_mp[line_id], hn+zd_mp[line_id]] # ang_v, ang_r, x, y, z streamline_scores[line_id, point_id] = score_mp[line_id] # Reset number of OpenMP cores to default if num_threads is not None: restore_default_num_threads() # Save LFBC as class member self.streamlines_lfbc = streamline_scores # compute RFBC for each fiber self.streamlines_rfbc = compute_rfbc(streamlines_length, streamline_scores, max_windowsize=max_windowsize) def 
compute_rfbc(streamlines_length, streamline_scores, max_windowsize=7): """ Compute the relative fiber to bundle coherence (RFBC) Parameters ---------- streamlines_length : 1D int array Contains the length of each streamline streamlines_scores : 2D double array Contains the local fiber to bundle coherence (LFBC) for each streamline element. max_windowsize : int The maximal window size used to calculate the average LFBC region Returns ------- output: normalized lowest average LFBC region along the fiber """ # finds the region of the fiber with maximal length of max_windowsize in # which the LFBC is the lowest int_length = min(np.amin(streamlines_length), max_windowsize) int_value = np.apply_along_axis(lambda x: min_moving_average(x[~np.isnan(x)], int_length), 1, streamline_scores) avg_total = np.mean( np.apply_along_axis( lambda x: np.mean(np.extract(x[~np.isnan(x)] >= 0, x[~np.isnan(x)])), 1, streamline_scores)) if not avg_total == 0: return int_value / avg_total else: return int_value def min_moving_average(a, n): """ Return the lowest cumulative sum for the score of a streamline segment Parameters ---------- a : array Input array n : int Length of the segment Returns ------- output: normalized lowest average LFBC region along the fiber """ ret = np.cumsum(np.extract(a >= 0, a)) ret[n:] = ret[n:] - ret[:-n] return np.amin(ret[n - 1:] / n) dipy-1.11.0/dipy/tracking/learning.py000066400000000000000000000071141476546756600175140ustar00rootroot00000000000000"""Learning algorithms for tractography""" import numpy as np import dipy.tracking.distances as pf def detect_corresponding_tracks(indices, tracks1, tracks2): """Detect corresponding tracks from list tracks1 to list tracks2 where tracks1 & tracks2 are lists of tracks Parameters ---------- indices : sequence of indices of tracks1 that are to be detected in tracks2 tracks1 : sequence of tracks as arrays, shape (N1,3) .. (Nm,3) tracks2 : sequence of tracks as arrays, shape (M1,3) .. (Mm,3) Returns ------- track2track : array (N,2) where N is len(indices) of int it shows the correspondence in the following way: the first column is the current index in tracks1 the second column is the corresponding index in tracks2 Examples -------- >>> import numpy as np >>> import dipy.tracking.learning as tl >>> A = np.array([[0, 0, 0], [1, 1, 1], [2, 2, 2]]) >>> B = np.array([[1, 0, 0], [2, 0, 0], [3, 0, 0]]) >>> C = np.array([[0, 0, -1], [0, 0, -2], [0, 0, -3]]) >>> bundle1 = [A, B, C] >>> bundle2 = [B, A] >>> indices = [0, 1] >>> arr = tl.detect_corresponding_tracks(indices, bundle1, bundle2) Notes ----- To find the corresponding tracks we use mam_distances with 'avg' option. Then we calculate the argmin of all the calculated distances and return it for every index. (See 3rd column of arr in the example given below.) """ li = len(indices) track2track = np.zeros((li, 2)) cnt = 0 for i in indices: rt = [pf.mam_distances(tracks1[i], t, "avg") for t in tracks2] rt = np.array(rt) track2track[cnt] = np.array([i, rt.argmin()]) cnt += 1 return track2track.astype(int) def detect_corresponding_tracks_plus(indices, tracks1, indices2, tracks2): """Detect corresponding tracks from 1 to 2 where tracks1 & tracks2 are sequences of tracks Parameters ---------- indices : sequence of indices of tracks1 that are to be detected in tracks2 tracks1 : sequence of tracks as arrays, shape (N1,3) .. (Nm,3) indices2 : sequence of indices of tracks2 in the initial brain tracks2 : sequence of tracks as arrays, shape (M1,3) .. 
(Mm,3) Returns ------- track2track : array (N,2) where N is len(indices) of int showing the correspondence in th following way the first column is the current index of tracks1 the second column is the corresponding index in tracks2 Examples -------- >>> import numpy as np >>> import dipy.tracking.learning as tl >>> A = np.array([[0, 0, 0], [1, 1, 1], [2, 2, 2]]) >>> B = np.array([[1, 0, 0], [2, 0, 0], [3, 0, 0]]) >>> C = np.array([[0, 0, -1], [0, 0, -2], [0, 0, -3]]) >>> bundle1 = [A, B, C] >>> bundle2 = [B, A] >>> indices = [0, 1] >>> indices2 = indices >>> arr = tl.detect_corresponding_tracks_plus(indices, bundle1, indices2, bundle2) Notes ----- To find the corresponding tracks we use mam_distances with 'avg' option. Then we calculate the argmin of all the calculated distances and return it for every index. (See 3rd column of arr in the example given below.) See Also -------- distances.mam_distances """ li = len(indices) track2track = np.zeros((li, 2)) cnt = 0 for i in indices: rt = [pf.mam_distances(tracks1[i], t, "avg") for t in tracks2] rt = np.array(rt) track2track[cnt] = np.array([i, indices2[rt.argmin()]]) cnt += 1 return track2track.astype(int) dipy-1.11.0/dipy/tracking/life.py000066400000000000000000000470311476546756600166360ustar00rootroot00000000000000""" This is an implementation of the Linear Fascicle Evaluation (LiFE) algorithm described in :footcite:p:`Pestilli2014`. References ---------- .. footbibliography:: """ import numpy as np import scipy.linalg as la import scipy.sparse as sps import dipy.core.optimize as opt import dipy.data as dpd from dipy.reconst.base import ReconstFit, ReconstModel from dipy.testing.decorators import warning_for_keywords from dipy.tracking.streamline import transform_streamlines from dipy.tracking.utils import unique_rows from dipy.tracking.vox2track import _voxel2streamline def gradient(f): """ Return the gradient of an N-dimensional array. The gradient is computed using central differences in the interior and first differences at the boundaries. The returned gradient hence has the same shape as the input array. Parameters ---------- f : array_like An N-dimensional array containing samples of a scalar function. Returns ------- gradient : ndarray N arrays of the same shape as `f` giving the derivative of `f` with respect to each dimension. Examples -------- >>> x = np.array([1, 2, 4, 7, 11, 16], dtype=float) >>> gradient(x) array([ 1. , 1.5, 2.5, 3.5, 4.5, 5. ]) >>> gradient(np.array([[1, 2, 6], [3, 4, 5]], dtype=float)) [array([[ 2., 2., -1.], [ 2., 2., -1.]]), array([[ 1. , 2.5, 4. ], [ 1. , 1. , 1. ]])] Notes ----- This is a simplified implementation of gradient that is part of numpy 1.8. In order to mitigate the effects of changes added to this implementation in version 1.9 of numpy, we include this implementation here. 
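    For 1-D input with unit spacing this is expected to agree with
    ``np.gradient`` at its default first-order edge handling, e.g.::

        x = np.array([1, 2, 4, 7, 11, 16], dtype=float)
        np.allclose(gradient(x), np.gradient(x))  # expected: True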
""" f = np.asanyarray(f) N = len(f.shape) # number of dimensions dx = [1.0] * N # use central differences on interior and first differences on endpoints outvals = [] # create slice objects --- initially all are [:, :, ..., :] slice1 = [slice(None)] * N slice2 = [slice(None)] * N slice3 = [slice(None)] * N for axis in range(N): # select out appropriate parts for this dimension out = np.empty_like(f) slice1[axis] = slice(1, -1) slice2[axis] = slice(2, None) slice3[axis] = slice(None, -2) # 1D equivalent -- out[1:-1] = (f[2:] - f[:-2])/2.0 out[tuple(slice1)] = (f[tuple(slice2)] - f[tuple(slice3)]) / 2.0 slice1[axis] = 0 slice2[axis] = 1 slice3[axis] = 0 # 1D equivalent -- out[0] = (f[1] - f[0]) out[tuple(slice1)] = f[tuple(slice2)] - f[tuple(slice3)] slice1[axis] = -1 slice2[axis] = -1 slice3[axis] = -2 # 1D equivalent -- out[-1] = (f[-1] - f[-2]) out[tuple(slice1)] = f[tuple(slice2)] - f[tuple(slice3)] # divide by step size outvals.append(out / dx[axis]) # reset the slice object in this dimension to ":" slice1[axis] = slice(None) slice2[axis] = slice(None) slice3[axis] = slice(None) if N == 1: return outvals[0] else: return outvals def streamline_gradients(streamline): """ Calculate the gradients of the streamline along the spatial dimension Parameters ---------- streamline : array-like of shape (n, 3) The 3d coordinates of a single streamline Returns ------- Array of shape (3, n): Spatial gradients along the length of the streamline. """ return np.array(gradient(np.asarray(streamline))[0]) def grad_tensor(grad, evals): """ Calculate the 3 by 3 tensor for a given spatial gradient, given a canonical tensor shape (also as a 3 by 3), pointing at [1,0,0] Parameters ---------- grad : 1d array of shape (3,) The spatial gradient (e.g between two nodes of a streamline). evals: 1d array of shape (3,) The eigenvalues of a canonical tensor to be used as a response function. """ # This is the rotation matrix from [1, 0, 0] to this gradient of the sl: R = la.svd([grad], overwrite_a=True)[2] # This is the 3 by 3 tensor after rotation: T = np.linalg.multi_dot([R, np.diag(evals), R.T]) return T @warning_for_keywords() def streamline_tensors(streamline, *, evals=(0.001, 0, 0)): """ The tensors generated by this fiber. Parameters ---------- streamline : array-like of shape (n, 3) The 3d coordinates of a single streamline evals : iterable with three entries, optional The estimated eigenvalues of a single fiber tensor. Returns ------- An n_nodes by 3 by 3 array with the tensor for each node in the fiber. Notes ----- Estimates of the radial/axial diffusivities may rely on empirical measurements (for example, the AD in the Corpus Callosum), or may be based on a biophysical model of some kind. """ grad = streamline_gradients(streamline) # Preallocate: tensors = np.empty((grad.shape[0], 3, 3)) for grad_idx, this_grad in enumerate(grad): tensors[grad_idx] = grad_tensor(this_grad, evals) return tensors @warning_for_keywords() def streamline_signal(streamline, gtab, *, evals=(0.001, 0, 0)): """ The signal from a single streamline estimate along each of its nodes. Parameters ---------- streamline : a single streamline Streamline data. gtab : GradientTable class instance Gradient table. evals : array-like of length 3, optional The eigenvalues of the canonical tensor used as an estimate of the signal generated by each node of the streamline. 
""" # Gotta have those tensors: tensors = streamline_tensors(streamline, evals=evals) sig = np.empty((len(streamline), np.sum(~gtab.b0s_mask))) # Extract them once: bvecs = gtab.bvecs[~gtab.b0s_mask] bvals = gtab.bvals[~gtab.b0s_mask] for ii, tensor in enumerate(tensors): ADC = np.diag(np.linalg.multi_dot([bvecs, tensor, bvecs.T])) # Use the Stejskal-Tanner equation with the ADC as input, and S0 = 1: sig[ii] = np.exp(-bvals * ADC) return sig - np.mean(sig) class LifeSignalMaker: """ A class for generating signals from streamlines in an efficient and speedy manner. """ @warning_for_keywords() def __init__(self, gtab, *, evals=(0.001, 0, 0), sphere=None): """ Initialize a signal maker Parameters ---------- gtab : GradientTable class instance The gradient table on which the signal is calculated. evals : array-like of 3 items, optional The eigenvalues of the canonical tensor to use in calculating the signal. sphere : `dipy.core.Sphere` class instance, optional The discrete sphere to use as an approximation for the continuous sphere on which the signal is represented. If integer - we will use an instance of one of the symmetric spheres cached in `dps.get_sphere`. If a 'dipy.core.Sphere' class instance is provided, we will use this object. Default: the :mod:`dipy.data` symmetric sphere with 724 vertices """ self.sphere = sphere or dpd.default_sphere self.gtab = gtab self.evals = evals # Initialize an empty dict to fill with signals for each of the sphere # vertices: self.signal = np.empty((self.sphere.vertices.shape[0], np.sum(~gtab.b0s_mask))) # We'll need to keep track of what we've already calculated: self._calculated = [] def calc_signal(self, xyz): idx = self.sphere.find_closest(xyz) if idx not in self._calculated: bvecs = self.gtab.bvecs[~self.gtab.b0s_mask] bvals = self.gtab.bvals[~self.gtab.b0s_mask] tensor = grad_tensor(self.sphere.vertices[idx], self.evals) ADC = np.diag(np.linalg.multi_dot([bvecs, tensor, bvecs.T])) sig = np.exp(-bvals * ADC) sig = sig - np.mean(sig) self.signal[idx] = sig self._calculated.append(idx) return self.signal[idx] def streamline_signal(self, streamline): """ Approximate the signal for a given streamline """ grad = streamline_gradients(streamline) sig_out = np.zeros((grad.shape[0], self.signal.shape[-1])) for ii, g in enumerate(grad): sig_out[ii] = self.calc_signal(g) return sig_out @warning_for_keywords() def voxel2streamline(streamline, affine, *, unique_idx=None): """ Maps voxels to streamlines and streamlines to voxels, for setting up the LiFE equations matrix Parameters ---------- streamline : list A collection of streamlines, each n by 3, with n being the number of nodes in the fiber. affine : array_like (4, 4) The mapping from voxel coordinates to streamline points. The voxel_to_rasmm matrix, typically from a NIFTI file. unique_idx : array, optional. The unique indices in the streamlines Returns ------- v2f, v2fn : tuple of dicts The first dict in the tuple answers the question: Given a voxel (from the unique indices in this model), which fibers pass through it? The second answers the question: Given a streamline, for each voxel that this streamline passes through, which nodes of that streamline are in that voxel? 
""" transformed_streamline = transform_streamlines(streamline, affine) if unique_idx is None: all_coords = np.concatenate(transformed_streamline) unique_idx = unique_rows(np.round(all_coords)) return _voxel2streamline(transformed_streamline, unique_idx.astype(np.intp)) class FiberModel(ReconstModel): """ A class for representing and solving predictive models based on tractography solutions. Notes ----- This is an implementation of the LiFE model described in :footcite:p:`Pestilli2014`. References ---------- .. footbibliography:: """ def __init__(self, gtab): """ Parameters ---------- gtab : a GradientTable class instance """ # Initialize the super-class: ReconstModel.__init__(self, gtab) @warning_for_keywords() def setup(self, streamline, affine, *, evals=(0.001, 0, 0), sphere=None): """ Set up the necessary components for the LiFE model: the matrix of fiber-contributions to the DWI signal, and the coordinates of voxels for which the equations will be solved Parameters ---------- streamline : list Streamlines, each is an array of shape (n, 3) affine : array_like (4, 4) The mapping from voxel coordinates to streamline points. The voxel_to_rasmm matrix, typically from a NIFTI file. evals : array-like (3 items, optional) The eigenvalues of the canonical tensor used as a response function. Default:[0.001, 0, 0]. sphere: `dipy.core.Sphere` instance, optional Whether to approximate (and cache) the signal on a discrete sphere. This may confer a significant speed-up in setting up the problem, but is not as accurate. If `False`, we use the exact gradients along the streamlines to calculate the matrix, instead of an approximation. Defaults to use the 724-vertex symmetric sphere from :mod:`dipy.data` """ if sphere is not False: SignalMaker = LifeSignalMaker(self.gtab, evals=evals, sphere=sphere) streamline = transform_streamlines(streamline, affine) # Assign some local variables, for shorthand: all_coords = np.concatenate(streamline) vox_coords = unique_rows(np.round(all_coords).astype(np.intp)) del all_coords # We only consider the diffusion-weighted signals: n_bvecs = self.gtab.bvals[~self.gtab.b0s_mask].shape[0] v2f, v2fn = voxel2streamline(streamline, np.eye(4), unique_idx=vox_coords) # How many fibers in each voxel (this will determine how many # components are in the matrix): n_unique_f = len(np.hstack(list(v2f.values()))) # Preallocate these, which will be used to generate the sparse # matrix: f_matrix_sig = np.zeros(n_unique_f * n_bvecs, dtype=float) f_matrix_row = np.zeros(n_unique_f * n_bvecs, dtype=np.intp) f_matrix_col = np.zeros(n_unique_f * n_bvecs, dtype=np.intp) fiber_signal = [] for _, s in enumerate(streamline): if sphere is not False: fiber_signal.append(SignalMaker.streamline_signal(s)) else: fiber_signal.append(streamline_signal(s, self.gtab, evals=evals)) del streamline if sphere is not False: del SignalMaker keep_ct = 0 range_bvecs = np.arange(n_bvecs).astype(int) # In each voxel: for v_idx in range(vox_coords.shape[0]): mat_row_idx = (range_bvecs + v_idx * n_bvecs).astype(np.intp) # For each fiber in that voxel: for f_idx in v2f[v_idx]: # For each fiber-voxel combination, store the row/column # indices in the pre-allocated linear arrays f_matrix_row[keep_ct : keep_ct + n_bvecs] = mat_row_idx f_matrix_col[keep_ct : keep_ct + n_bvecs] = f_idx vox_fiber_sig = np.zeros(n_bvecs) for node_idx in v2fn[f_idx][v_idx]: # Sum the signal from each node of the fiber in that voxel: vox_fiber_sig += fiber_signal[f_idx][node_idx] # And add the summed thing into the corresponding rows: 
f_matrix_sig[keep_ct : keep_ct + n_bvecs] += vox_fiber_sig keep_ct = keep_ct + n_bvecs del v2f, v2fn # Allocate the sparse matrix, using the more memory-efficient 'csr' # format: life_matrix = sps.csr_array((f_matrix_sig, [f_matrix_row, f_matrix_col])) return life_matrix, vox_coords def _signals(self, data, vox_coords): """ Helper function to extract and separate all the signals we need to fit and evaluate a fit of this model Parameters ---------- data : 4D array vox_coords: n by 3 array The coordinates into the data array of the fiber nodes. """ # Fitting is done on the S0-normalized-and-demeaned diffusion-weighted # signal: idx_tuple = (vox_coords[:, 0], vox_coords[:, 1], vox_coords[:, 2]) # We'll look at a 2D array, extracting the data from the voxels: vox_data = data[idx_tuple] weighted_signal = vox_data[:, ~self.gtab.b0s_mask] b0_signal = np.mean(vox_data[:, self.gtab.b0s_mask], -1) relative_signal = weighted_signal / b0_signal[:, None] # The mean of the relative signal across directions in each voxel: mean_sig = np.mean(relative_signal, -1) to_fit = (relative_signal - mean_sig[:, None]).ravel() to_fit[np.isnan(to_fit)] = 0 return (to_fit, weighted_signal, b0_signal, relative_signal, mean_sig, vox_data) @warning_for_keywords() def fit(self, data, streamline, affine, *, evals=(0.001, 0, 0), sphere=None): """ Fit the LiFE FiberModel for data and a set of streamlines associated with this data Parameters ---------- data : 4D array Diffusion-weighted data streamline : list A bunch of streamlines affine : array_like (4, 4) The mapping from voxel coordinates to streamline points. The voxel_to_rasmm matrix, typically from a NIFTI file. evals : array-like, optional The eigenvalues of the tensor response function used in constructing the model signal. Default: [0.001, 0, 0] sphere: `dipy.core.Sphere` instance or False, optional Whether to approximate (and cache) the signal on a discrete sphere. This may confer a significant speed-up in setting up the problem, but is not as accurate. If `False`, we use the exact gradients along the streamlines to calculate the matrix, instead of an approximation. Returns ------- FiberFit class instance """ if affine is None: affine = np.eye(4) sl_len = np.array([len(s) for s in streamline]) if np.any(sl_len < 2): raise ValueError( "Input contains streamlines with only one node." " The LiFE model cannot be fit with these streamlines included." ) life_matrix, vox_coords = self.setup( streamline, affine, evals=evals, sphere=sphere ) (to_fit, weighted_signal, b0_signal, relative_signal, mean_sig, vox_data) = ( self._signals(data, vox_coords) ) beta = opt.sparse_nnls(to_fit, life_matrix) return FiberFit( self, life_matrix, vox_coords, to_fit, beta, weighted_signal, b0_signal, relative_signal, mean_sig, vox_data, streamline, affine, evals, ) class FiberFit(ReconstFit): """ A fit of the LiFE model to diffusion data """ def __init__( self, fiber_model, life_matrix, vox_coords, to_fit, beta, weighted_signal, b0_signal, relative_signal, mean_sig, vox_data, streamline, affine, evals, ): """ Parameters ---------- fiber_model : A FiberModel class instance params : the parameters derived from a fit of the model to the data. 
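        Notes
        -----
        Instances are normally produced by :meth:`FiberModel.fit` rather
        than constructed directly; a typical round trip (illustrative
        names)::

            model = FiberModel(gtab)
            fit = model.fit(data, streamlines, np.eye(4))
            predicted = fit.predict()  # shape (n_voxels, len(gtab.bvals))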
""" ReconstFit.__init__(self, fiber_model, vox_data) self.life_matrix = life_matrix self.vox_coords = vox_coords self.fit_data = to_fit self.beta = beta self.weighted_signal = weighted_signal self.b0_signal = b0_signal self.relative_signal = relative_signal self.mean_signal = mean_sig self.streamline = streamline self.affine = affine self.evals = evals @warning_for_keywords() def predict(self, *, gtab=None, S0=None): """ Predict the signal Parameters ---------- gtab : GradientTable Default: use self.gtab S0 : float or array The non-diffusion-weighted signal in the voxels for which a prediction is made. Default: use self.b0_signal Returns ------- prediction : ndarray of shape (voxels, bvecs) An array with a prediction of the signal in each voxel/direction """ # We generate the prediction and in each voxel, we add the # offset, according to the isotropic part of the signal, which was # removed prior to fitting: if gtab is None: _matrix = self.life_matrix gtab = self.model.gtab else: _model = FiberModel(gtab) _matrix, _ = _model.setup(self.streamline, self.affine, evals=self.evals) pred_weighted = np.reshape( opt.spdot(_matrix, self.beta), (self.vox_coords.shape[0], np.sum(~gtab.b0s_mask)), ) pred = np.empty((self.vox_coords.shape[0], gtab.bvals.shape[0])) if S0 is None: S0 = self.b0_signal pred[..., gtab.b0s_mask] = S0[:, None] pred[..., ~gtab.b0s_mask] = (pred_weighted + self.mean_signal[:, None]) * S0[ :, None ] return pred dipy-1.11.0/dipy/tracking/local_tracking.py000066400000000000000000000434071476546756600206760ustar00rootroot00000000000000from collections.abc import Iterable import random from warnings import warn import numpy as np from dipy.testing.decorators import warning_for_keywords from dipy.tracking import utils from dipy.tracking.localtrack import local_tracker, pft_tracker from dipy.tracking.stopping_criterion import ( AnatomicalStoppingCriterion, StreamlineStatus, ) from dipy.utils import fast_numpy class LocalTracking: @staticmethod def _get_voxel_size(affine): """Computes the voxel sizes of an image from the affine. Checks that the affine does not have any shear because local_tracker assumes that the data is sampled on a regular grid. """ lin = affine[:3, :3] dotlin = np.dot(lin.T, lin) # Check that the affine is well behaved if not np.allclose(np.triu(dotlin, 1), 0.0, atol=1e-5): msg = ( "The affine provided seems to contain shearing, data must " "be acquired or interpolated on a regular grid to be used " "with `LocalTracking`." ) raise ValueError(msg) return np.sqrt(dotlin.diagonal()) @warning_for_keywords() def __init__( self, direction_getter, stopping_criterion, seeds, affine, step_size, *, max_cross=None, maxlen=500, minlen=2, fixedstep=True, return_all=True, random_seed=None, save_seeds=False, unidirectional=False, randomize_forward_direction=False, initial_directions=None, ): """Creates streamlines by using local fiber-tracking. Parameters ---------- direction_getter : instance of DirectionGetter Used to get directions for fiber tracking. stopping_criterion : instance of StoppingCriterion Identifies endpoints and invalid points to inform tracking. seeds : array (N, 3), optional Points to seed the tracking. Seed points should be given in point space of the track (see ``affine``). affine : array (4, 4), optional Coordinate space for the streamline point with respect to voxel indices of input data. This affine can contain scaling, rotational, and translational components but should not contain any shearing. 
An identity matrix can be used to generate streamlines in "voxel coordinates" as long as isotropic voxels were used to acquire the data. step_size : float, optional Step size used for tracking. max_cross : int or None, optional The maximum number of direction to track from each seed in crossing voxels. By default all initial directions are tracked. maxlen : int, optional Maximum length of generated streamlines. Longer streamlines will be discarted if `return_all=False`. minlen : int, optional Minimum length of generated streamlines. Shorter streamlines will be discarted if `return_all=False`. fixedstep : bool, optional If true, a fixed stepsize is used, otherwise a variable step size is used. return_all : bool, optional If true, return all generated streamlines, otherwise only streamlines reaching end points or exiting the image. random_seed : int, optional The seed for the random seed generator (numpy.random.seed and random.seed). save_seeds : bool, optional If True, return seeds alongside streamlines unidirectional : bool, optional If true, the tracking is performed only in the forward direction. The seed position will be the first point of all streamlines. randomize_forward_direction : bool, optional If true, the forward direction is randomized (multiplied by 1 or -1). Otherwise, the provided forward direction is used. initial_directions: array (N, npeaks, 3), optional Initial direction to follow from the ``seed`` position. If ``max_cross`` is None, one streamline will be generated per peak per voxel. If None, `direction_getter.initial_direction` is used. """ self.direction_getter = direction_getter self.stopping_criterion = stopping_criterion self.seeds = seeds self.unidirectional = unidirectional self.randomize_forward_direction = randomize_forward_direction self.initial_directions = initial_directions if affine.shape != (4, 4): raise ValueError("affine should be a (4, 4) array.") if step_size <= 0: raise ValueError("step_size must be greater than 0.") if maxlen < 1: raise ValueError("maxlen must be greater than 0.") if minlen > maxlen: raise ValueError("maxlen must be greater than or equal to minlen") if not isinstance(seeds, Iterable): raise ValueError("seeds should be (N,3) array.") if ( initial_directions is not None and seeds.shape[0] != initial_directions.shape[0] ): raise ValueError( "initial_directions and seeds must have the same shape[0]." ) if ( initial_directions is None and unidirectional and not self.randomize_forward_direction ): warn( "Unidirectional tractography will be performed " + "without providing initial directions nor " + "randomizing extracted initial forward " + "directions. This may introduce directional " + "biases in the reconstructed streamlines. 
" + "See ``initial_directions`` and " + "``randomize_forward_direction`` parameters.", stacklevel=2, ) self.affine = affine self._voxel_size = np.ascontiguousarray( self._get_voxel_size(affine), dtype=float ) self.step_size = step_size self.fixed_stepsize = fixedstep self.max_cross = max_cross self.max_length = maxlen self.min_length = minlen self.return_all = return_all self.random_seed = random_seed self.save_seeds = save_seeds def _tracker(self, seed, first_step, streamline): return local_tracker( self.direction_getter, self.stopping_criterion, seed, first_step, self._voxel_size, streamline, self.step_size, self.fixed_stepsize, ) def __iter__(self): # Make tracks, move them to point space and return track = self._generate_tractogram() return utils.transform_tracking_output( track, self.affine, save_seeds=self.save_seeds ) def _generate_tractogram(self): """A streamline generator""" # Get inverse transform (lin/offset) for seeds inv_A = np.linalg.inv(self.affine) lin = inv_A[:3, :3] offset = inv_A[:3, 3] F = np.empty((self.max_length + 1, 3), dtype=float) B = F.copy() for i, s in enumerate(self.seeds): s = np.dot(lin, s) + offset # Set the random seed in numpy, random and fast_numpy (lic.stdlib) if self.random_seed is not None: s_random_seed = hash(np.abs((np.sum(s)) + self.random_seed)) % ( np.iinfo(np.uint32).max - 1 ) random.seed(s_random_seed) np.random.seed(s_random_seed) fast_numpy.seed(s_random_seed) if self.initial_directions is None: directions = self.direction_getter.initial_direction(s) else: # normalize the initial directions. # initial directions with norm 0 are removed. d_ns = np.linalg.norm(self.initial_directions[i, :, :], axis=1) directions = ( self.initial_directions[i, d_ns > 0, :] / d_ns[d_ns > 0, np.newaxis] ) if len(directions) == 0 and self.return_all: # only the seed position if self.save_seeds: yield [s], s else: yield [s] if self.randomize_forward_direction: directions = [d * random.choice([1, -1]) for d in directions] directions = directions[: self.max_cross] for first_step in directions: stepsF = stepsB = 1 stepsF, stream_status = self._tracker(s, first_step, F) if not ( self.return_all or stream_status in (StreamlineStatus.ENDPOINT, StreamlineStatus.OUTSIDEIMAGE) ): continue if not self.unidirectional: first_step = -first_step if stepsF > 1: # Use the opposite of the first selected orientation for # the backward tracking segment opposite_step = F[0] - F[1] opposite_step_norm = np.linalg.norm(opposite_step) if opposite_step_norm > 0: first_step = opposite_step / opposite_step_norm stepsB, stream_status = self._tracker(s, first_step, B) if not ( self.return_all or stream_status in (StreamlineStatus.ENDPOINT, StreamlineStatus.OUTSIDEIMAGE) ): continue if stepsB == 1: streamline = F[:stepsF].copy() else: parts = (B[stepsB - 1 : 0 : -1], F[:stepsF]) streamline = np.concatenate(parts, axis=0) # move to the next streamline if only the seed position # and not return all len_sl = len(streamline) if ( len_sl >= self.min_length and len_sl <= self.max_length or self.return_all ): if self.save_seeds: yield streamline, s else: yield streamline class ParticleFilteringTracking(LocalTracking): @warning_for_keywords() def __init__( self, direction_getter, stopping_criterion, seeds, affine, step_size, *, max_cross=None, maxlen=500, minlen=2, pft_back_tracking_dist=2, pft_front_tracking_dist=1, pft_max_trial=20, particle_count=15, return_all=True, random_seed=None, save_seeds=False, min_wm_pve_before_stopping=0, unidirectional=False, randomize_forward_direction=False, 
initial_directions=None, ): """A streamline generator using the particle filtering tractography method. See :footcite:p:`Girard2014` for further details about the method. Parameters ---------- direction_getter : instance of ProbabilisticDirectionGetter Used to get directions for fiber tracking. stopping_criterion : instance of AnatomicalStoppingCriterion Identifies endpoints and invalid points to inform tracking. seeds : array (N, 3) Points to seed the tracking. Seed points should be given in point space of the track (see ``affine``). affine : array (4, 4) Coordinate space for the streamline point with respect to voxel indices of input data. This affine can contain scaling, rotational, and translational components but should not contain any shearing. An identity matrix can be used to generate streamlines in "voxel coordinates" as long as isotropic voxels were used to acquire the data. step_size : float Step size used for tracking. max_cross : int or None, optional The maximum number of direction to track from each seed in crossing voxels. By default all initial directions are tracked. maxlen : int, optional Maximum length of generated streamlines. Longer streamlines will be discarted if `return_all=False`. minlen : int, optional Minimum length of generated streamlines. Shorter streamlines will be discarted if `return_all=False`. pft_back_tracking_dist : float Distance in mm to back track before starting the particle filtering tractography. The total particle filtering tractography distance is equal to back_tracking_dist + front_tracking_dist. By default this is set to 2 mm. pft_front_tracking_dist : float, optional Distance in mm to run the particle filtering tractography after the the back track distance. The total particle filtering tractography distance is equal to back_tracking_dist + front_tracking_dist. By default this is set to 1 mm. pft_max_trial : int, optional Maximum number of trial for the particle filtering tractography (Prevents infinite loops). particle_count : int, optional Number of particles to use in the particle filter. return_all : bool, optional If true, return all generated streamlines, otherwise only streamlines reaching end points or exiting the image. random_seed : int, optional The seed for the random seed generator (numpy.random.seed and random.seed). save_seeds : bool, optional If True, return seeds alongside streamlines min_wm_pve_before_stopping : int, optional Minimum white matter pve (1 - stopping_criterion.include_map - stopping_criterion.exclude_map) to reach before allowing the tractography to stop. unidirectional : bool, optional If true, the tracking is performed only in the forward direction. The seed position will be the first point of all streamlines. randomize_forward_direction : bool, optional If true, the forward direction is randomized (multiplied by 1 or -1). Otherwise, the provided forward direction is used. initial_directions: array (N, npeaks, 3), optional Initial direction to follow from the ``seed`` position. If ``max_cross`` is None, one streamline will be generated per peak per voxel. If None, `direction_getter.initial_direction` is used. References ---------- .. 
footbibliography:: """ if not isinstance(stopping_criterion, AnatomicalStoppingCriterion): raise ValueError("expecting AnatomicalStoppingCriterion") self.pft_max_nbr_back_steps = int(np.ceil(pft_back_tracking_dist / step_size)) self.pft_max_nbr_front_steps = int(np.ceil(pft_front_tracking_dist / step_size)) pft_max_steps = self.pft_max_nbr_back_steps + self.pft_max_nbr_front_steps if ( self.pft_max_nbr_front_steps < 0 or self.pft_max_nbr_back_steps < 0 or pft_max_steps < 1 ): raise ValueError("The number of PFT steps must be greater than 0.") if particle_count <= 0: raise ValueError("The particle count must be greater than 0.") if not 0 <= min_wm_pve_before_stopping <= 1: raise ValueError( "The min_wm_pve_before_stopping value must be between 0 and 1." ) self.min_wm_pve_before_stopping = min_wm_pve_before_stopping self.directions = np.empty((maxlen + 1, 3), dtype=float) self.pft_max_trial = pft_max_trial self.particle_count = particle_count self.particle_paths = np.empty( (2, self.particle_count, pft_max_steps + 1, 3), dtype=float ) self.particle_weights = np.empty(self.particle_count, dtype=float) self.particle_dirs = np.empty( (2, self.particle_count, pft_max_steps + 1, 3), dtype=float ) self.particle_steps = np.empty((2, self.particle_count), dtype=np.intp) self.particle_stream_statuses = np.empty( (2, self.particle_count), dtype=np.intp ) super(ParticleFilteringTracking, self).__init__( direction_getter=direction_getter, stopping_criterion=stopping_criterion, seeds=seeds, affine=affine, step_size=step_size, max_cross=max_cross, maxlen=maxlen, minlen=minlen, fixedstep=True, return_all=return_all, random_seed=random_seed, save_seeds=save_seeds, unidirectional=unidirectional, randomize_forward_direction=randomize_forward_direction, initial_directions=initial_directions, ) def _tracker(self, seed, first_step, streamline): return pft_tracker( self.direction_getter, self.stopping_criterion, seed, first_step, self._voxel_size, streamline, self.directions, self.step_size, self.pft_max_nbr_back_steps, self.pft_max_nbr_front_steps, self.pft_max_trial, self.particle_count, self.particle_paths, self.particle_dirs, self.particle_weights, self.particle_steps, self.particle_stream_statuses, self.min_wm_pve_before_stopping, ) dipy-1.11.0/dipy/tracking/localtrack.pyx000066400000000000000000000415541476546756600202320ustar00rootroot00000000000000 from random import random cimport cython cimport numpy as cnp from dipy.tracking.direction_getter cimport DirectionGetter from dipy.tracking.stopping_criterion cimport( StreamlineStatus, StoppingCriterion, AnatomicalStoppingCriterion, TRACKPOINT, ENDPOINT, OUTSIDEIMAGE, INVALIDPOINT, PYERROR) from dipy.utils.fast_numpy cimport cumsum, where_to_insert, copy_point def local_tracker( DirectionGetter dg, StoppingCriterion sc, double[::1] seed_pos, double[::1] first_step, double[::1] voxel_size, cnp.float_t[:, :] streamline, double step_size, int fixedstep): """Tracks one direction from a seed. This function is the main workhorse of the ``LocalTracking`` class defined in ``dipy.tracking.local_tracking``. Parameters ---------- dg : DirectionGetter Used to choosing tracking directions. sc : StoppingCriterion Used to check the streamline status (e.g. endpoint) along path. seed_pos : array, float, 1d, (3,) First point of the (partial) streamline. first_step : array, float, 1d, (3,) Initial seeding direction. Used as ``prev_dir`` for selecting the step direction from the seed point. voxel_size : array, float, 1d, (3,) Size of voxels in the data set. 
streamline : array, float, 2d, (N, 3) Output of tracking will be put into this array. The length of this array, ``N``, will set the maximum allowable length of the streamline. step_size : float Size of tracking steps in mm if ``fixed_step``. fixedstep : int If greater than 0, a fixed step_size is used, otherwise a variable step size is used. Returns ------- end : int Length of the tracked streamline stream_status : StreamlineStatus Ending state of the streamlines as determined by the StoppingCriterion. """ cdef: cnp.npy_intp i StreamlineStatus stream_status double input_direction[3] double input_voxel_size[3] double input_seed_pos[3] if (seed_pos.shape[0] != 3 or first_step.shape[0] != 3 or voxel_size.shape[0] != 3 or streamline.shape[1] != 3): raise ValueError('Invalid input parameter dimensions.') copy_point(&first_step[0], input_direction) copy_point(&voxel_size[0], input_voxel_size) copy_point(&seed_pos[0], input_seed_pos) stream_status = TRACKPOINT i, stream_status = dg.generate_streamline(input_seed_pos, input_direction, input_voxel_size, step_size, sc, streamline, stream_status, fixedstep) return i, stream_status def pft_tracker( DirectionGetter dg, AnatomicalStoppingCriterion sc, cnp.float_t[:] seed_pos, cnp.float_t[:] first_step, cnp.float_t[:] voxel_size, cnp.float_t[:, :] streamline, cnp.float_t[:, :] directions, double step_size, int pft_max_nbr_back_steps, int pft_max_nbr_front_steps, int pft_max_trials, int particle_count, cnp.float_t[:, :, :, :] particle_paths, cnp.float_t[:, :, :, :] particle_dirs, cnp.float_t[:] particle_weights, cnp.npy_intp[:, :] particle_steps, cnp.npy_intp[:, :] particle_stream_statuses, int min_wm_pve_before_stopping): """Tracks one direction from a seed using the particle filtering algorithm. This function is the main workhorse of the ``ParticleFilteringTracking`` class defined in ``dipy.tracking.local_tracking``. Parameters ---------- dg : DirectionGetter Used to choosing tracking directions. sc : AnatomicalStoppingCriterion Used to check the streamline status (e.g. endpoint) along path. seed_pos : array, float, 1d, (3,) First point of the (partial) streamline. first_step : array, float, 1d, (3,) Initial seeding direction. Used as ``prev_dir`` for selecting the step direction from the seed point. voxel_size : array, float, 1d, (3,) Size of voxels in the data set. streamline : array, float, 2d, (N, 3) Output of tracking will be put into this array. The length of this array, ``N``, will set the maximum allowable length of the streamline. directions : array, float, 2d, (N, 3) Output of tracking directions will be put into this array. The length of this array, ``N``, will set the maximum allowable length of the streamline. step_size : float Size of tracking steps in mm if ``fixed_step``. pft_max_nbr_back_steps : int Number of tracking steps to back track before starting the particle filtering tractography. pft_max_nbr_front_steps : int Number of additional tracking steps to track. pft_max_trials : int Maximum number of trials for the particle filtering tractography (Prevents infinite loops). particle_count : int Number of particles to use in the particle filter. particle_paths : array, float, 4d, (2, particle_count, pft_max_steps, 3) Temporary array for paths followed by all particles. particle_dirs : array, float, 4d, (2, particle_count, pft_max_steps, 3) Temporary array for directions followed by particles. particle_weights : array, float, 1d (particle_count) Temporary array for the weights of particles. 
particle_steps : array, float, (2, particle_count) Temporary array for the number of steps of particles. particle_stream_statuses : array, float, (2, particle_count) Temporary array for the stream status of particles. min_wm_pve_before_stopping : int, optional Minimum white matter pve (1 - sc.include_map - sc.exclude_map) to reach before allowing the tractography to stop. Returns ------- end : int Length of the tracked streamline stream_status : StreamlineStatus Ending state of the streamlines as determined by the StoppingCriterion. """ cdef: cnp.npy_intp i StreamlineStatus stream_status double input_direction[3] double input_voxel_size[3] double input_seed_pos[3] if (seed_pos.shape[0] != 3 or first_step.shape[0] != 3 or voxel_size.shape[0] != 3 or streamline.shape[1] != 3): raise ValueError('Invalid input parameter dimensions.') copy_point(&first_step[0], input_direction) copy_point(&voxel_size[0], input_voxel_size) copy_point(&seed_pos[0], input_seed_pos) i = _pft_tracker(dg, sc, input_seed_pos, input_direction, input_voxel_size, streamline, directions, step_size, &stream_status, pft_max_nbr_back_steps, pft_max_nbr_front_steps, pft_max_trials, particle_count, particle_paths, particle_dirs, particle_weights, particle_steps, particle_stream_statuses, min_wm_pve_before_stopping) return i, stream_status @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef _pft_tracker(DirectionGetter dg, AnatomicalStoppingCriterion sc, double* seed, double[::1] direction, double* voxel_size, cnp.float_t[:, :] streamline, cnp.float_t[:, :] directions, double step_size, StreamlineStatus * stream_status, int pft_max_nbr_back_steps, int pft_max_nbr_front_steps, int pft_max_trials, int particle_count, cnp.float_t[:, :, :, :] particle_paths, cnp.float_t[:, :, :, :] particle_dirs, cnp.float_t[:] particle_weights, cnp.npy_intp[:, :] particle_steps, cnp.npy_intp[:, :] particle_stream_statuses, double min_wm_pve_before_stopping): cdef: cnp.npy_intp i, j int pft_trial, back_steps, front_steps int strl_array_len double max_wm_pve, current_wm_pve double point[3] void (*step)(double* , double*, double) noexcept nogil copy_point(seed, point) copy_point(seed, &streamline[0,0]) copy_point(&direction[0], &directions[0, 0]) stream_status[0] = TRACKPOINT pft_trial = 0 max_wm_pve = 0 i = 1 strl_array_len = streamline.shape[0] while i < strl_array_len: if dg.get_direction_c(point, direction): # no valid diffusion direction to follow stream_status[0] = INVALIDPOINT else: for j in range(3): # step forward point[j] += direction[j] / voxel_size[j] * step_size copy_point(point, &streamline[i, 0]) copy_point(&direction[0], &directions[i, 0]) stream_status[0] = sc.check_point_c(point) i += 1 current_wm_pve = 1.0 - sc.get_include(point) - sc.get_exclude(point) if current_wm_pve > max_wm_pve: max_wm_pve = current_wm_pve if stream_status[0] == TRACKPOINT: # The tracking continues normally continue elif (stream_status[0] == ENDPOINT and max_wm_pve < min_wm_pve_before_stopping and current_wm_pve > 0): # The tracking stopped before reaching the wm continue elif stream_status[0] == INVALIDPOINT: if pft_trial < pft_max_trials and i > 1 and i < strl_array_len: back_steps = min(i - 1, pft_max_nbr_back_steps) front_steps = min(strl_array_len - i - back_steps - 1, pft_max_nbr_front_steps) front_steps = max(0, front_steps) i = _pft(streamline, i - back_steps, directions, dg, sc, voxel_size, step_size, stream_status, back_steps + front_steps, particle_count, particle_paths, particle_dirs, particle_weights, particle_steps, 
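                         # (_pft rewrites the streamline from index
                         #  i - back_steps onward, over at most
                         #  back_steps + front_steps points, using the
                         #  path of a particle selected by weighted
                         #  resampling, and returns the new current
                         #  index)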
particle_stream_statuses) pft_trial += 1 # update the current point with the PFT results copy_point(&streamline[i-1, 0], point) copy_point(&directions[i-1, 0], &direction[0]) # update max_wm_pve following pft for j in range(i): current_wm_pve = (1.0 - sc.get_include_c(&streamline[j, 0]) - sc.get_exclude_c(&streamline[j, 0])) if current_wm_pve > max_wm_pve: max_wm_pve = current_wm_pve if stream_status[0] != TRACKPOINT: # The tracking stops. PFT returned a valid stopping point # (ENDPOINT, OUTSIDEIMAGE) or failed to find one # (INVALIDPOINT, PYERROR) break else: # PFT was run more times than `pft_max_trials` without finding # a valid stopping point. The tracking stops with INVALIDPOINT. break else: # The tracking stops with a valid point (ENDPOINT, OUTSIDEIMAGE) # or an invalid point (PYERROR) break if ((stream_status[0] == OUTSIDEIMAGE or stream_status[0] == PYERROR) and i > 1): i -= 1 return i @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef _pft(cnp.float_t[:, :] streamline, int streamline_i, cnp.float_t[:, :] directions, DirectionGetter dg, AnatomicalStoppingCriterion sc, double* voxel_size, double step_size, StreamlineStatus * stream_status, int pft_nbr_steps, int particle_count, cnp.float_t[:, :, :, :] particle_paths, cnp.float_t[:, :, :, :] particle_dirs, cnp.float_t[:] particle_weights, cnp.npy_intp[:, :] particle_steps, cnp.npy_intp[:, :] particle_stream_statuses): cdef: double sum_weights, sum_squared, N_effective, rdm_sample double point[3] double dir[3] double eps = 1e-16 cnp.npy_intp s, p, j if pft_nbr_steps <= 0: return streamline_i for p in range(particle_count): copy_point(&streamline[streamline_i, 0], &particle_paths[0, p, 0, 0]) copy_point(&directions[streamline_i, 0], &particle_dirs[0, p, 0, 0]) particle_weights[p] = 1. / particle_count particle_stream_statuses[0, p] = TRACKPOINT particle_steps[0, p] = 0 for s in range(pft_nbr_steps): for p in range(particle_count): if particle_stream_statuses[0, p] != TRACKPOINT: for j in range(3): particle_paths[0, p, s, j] = 0 particle_dirs[0, p, s, j] = 0 continue # move to the next particle copy_point(&particle_paths[0, p, s, 0], point) copy_point(&particle_dirs[0, p, s, 0], dir) if dg.get_direction_c(point, dir): particle_stream_statuses[0, p] = INVALIDPOINT particle_weights[p] = 0 else: for j in range(3): # step forward point[j] += dir[j] / voxel_size[j] * step_size copy_point(point, &particle_paths[0, p, s + 1, 0]) copy_point(dir, &particle_dirs[0, p, s + 1, 0]) particle_stream_statuses[0, p] = sc.check_point_c(point) particle_steps[0, p] = s + 1 particle_weights[p] *= 1 - sc.get_exclude_c(point) if particle_weights[p] < eps: particle_weights[p] = 0 if (particle_stream_statuses[0, p] == INVALIDPOINT and particle_weights[p] > 0): particle_stream_statuses[0, p] = TRACKPOINT sum_weights = 0 for p in range(particle_count): sum_weights += particle_weights[p] if sum_weights > 0: sum_squared = 0 for p in range(particle_count): particle_weights[p] = particle_weights[p] / sum_weights sum_squared += particle_weights[p] * particle_weights[p] # Resample the particles if the weights are too uneven. # Particles with negligible weights are replaced by duplicates of # those with high weights through resampling N_effective = 1. 
/ sum_squared if N_effective < particle_count / 10.: # copy data in the temp arrays for pp in range(particle_count): for ss in range(pft_nbr_steps): copy_point(&particle_paths[0, pp, ss, 0], &particle_paths[1, pp, ss, 0]) copy_point(&particle_dirs[0, pp, ss, 0], &particle_dirs[1, pp, ss, 0]) particle_stream_statuses[1, pp] = \ particle_stream_statuses[0, pp] particle_steps[1, pp] = particle_steps[0, pp] # sample N new particle cumsum(&particle_weights[0], &particle_weights[0], particle_count) for pp in range(particle_count): rdm_sample = random() * particle_weights[particle_count - 1] p_source = where_to_insert(&particle_weights[0], rdm_sample, particle_count) for ss in range(pft_nbr_steps): copy_point(&particle_paths[1, p_source, ss, 0], &particle_paths[0, pp, ss, 0]) copy_point(&particle_dirs[1, p_source, ss, 0], &particle_dirs[0, pp, ss, 0]) particle_stream_statuses[0, pp] = \ particle_stream_statuses[1, p_source] particle_steps[0, pp] = particle_steps[1, p_source] for pp in range(particle_count): particle_weights[pp] = 1. / particle_count # update the streamline with the trajectory of one particle cumsum(&particle_weights[0], &particle_weights[0], particle_count) if particle_weights[particle_count - 1] > 0: rdm_sample = random() * particle_weights[particle_count - 1] p = where_to_insert(&particle_weights[0], rdm_sample, particle_count) else: p = 0 for s in range(1, particle_steps[0, p]): copy_point(&particle_paths[0, p, s, 0], &streamline[streamline_i + s, 0]) copy_point(&particle_dirs[0, p, s, 0], &directions[streamline_i + s, 0]) stream_status[0] = particle_stream_statuses[0, p] return streamline_i + particle_steps[0, p] dipy-1.11.0/dipy/tracking/mesh.py000066400000000000000000000111061476546756600166450ustar00rootroot00000000000000import numpy as np from dipy.testing.decorators import warning_for_keywords @warning_for_keywords() def random_coordinates_from_surface( nb_triangles, nb_seed, *, triangles_mask=None, triangles_weight=None, rand_gen=None ): """Generate random triangles_indices and trilinear_coord Triangles_indices probability are weighted by triangles_weight, for each triangles inside the given triangles_mask Parameters ---------- nb_triangles : int (n) The amount of triangles in the mesh nb_seed : int The number of random indices and coordinates generated. triangles_mask : [n] numpy array Specifies which triangles should be chosen (or not) triangles_weight : [n] numpy array Specifies the weight/probability of choosing each triangle rand_gen : int The seed for the random seed generator (numpy.random.seed). 
Returns
    -------
    triangles_idx : [s] array
        Randomly chosen triangle indices
    trilin_coord : [s, 3] array
        Randomly chosen trilinear coordinates

    See Also
    --------
    seeds_from_surface_coordinates, random_seeds_from_mask
    """
    # Compute triangles_weight in vts_mask
    if triangles_mask is not None:
        if triangles_weight is None:
            triangles_weight = triangles_mask.astype(float)
        else:
            triangles_weight *= triangles_mask.astype(float)

    # Normalize weights to have probability (sum to 1)
    if triangles_weight is not None:
        triangles_weight /= triangles_weight.sum()

    # Set the random seed generator
    rng = np.random.default_rng(rand_gen)

    # Choose random triangles
    triangles_idx = rng.choice(nb_triangles, size=nb_seed, p=triangles_weight)

    # Choose random trilinear coordinates
    # https://mathworld.wolfram.com/TrianglePointPicking.html
    trilin_coord = rng.random((nb_seed, 3))
    is_upper = trilin_coord[:, 1:].sum(axis=-1) > 1.0
    trilin_coord[is_upper] = 1.0 - trilin_coord[is_upper]
    trilin_coord[:, 0] = 1.0 - (trilin_coord[:, 1] + trilin_coord[:, 2])
    return triangles_idx, trilin_coord


def seeds_from_surface_coordinates(
    triangles, vts_values, triangles_idx, trilinear_coord
):
    """Compute points from triangles_indices and trilinear_coord

    Parameters
    ----------
    triangles : [n, 3] -> m array
        A list of triangles from a mesh
    vts_values : [m, .] array
        List of values to interpolate from coordinates along vertices,
        (vertices, vertices_normal, vertices_colors ...)
    triangles_idx : [s] array
        Indices of the triangles in which each point lies
        (e.g. as returned by ``random_coordinates_from_surface``)
    trilinear_coord : [s, 3] array
        Trilinear (barycentric) coordinates of each point within its
        triangle

    Returns
    -------
    pts : [s, ...] array
        Interpolated values of vertices with triangles_idx and
        trilinear_coord

    See Also
    --------
    random_coordinates_from_surface
    """
    if vts_values.ndim == 1:
        vts_values = np.reshape(vts_values, (-1, 1))
    # Compute the vertices for each chosen triangle
    tris_vals = vts_values[triangles[triangles_idx]]
    # Interpolate values for each trilinear coordinate
    return np.squeeze(np.einsum("ijk,ij...->ik", tris_vals, trilinear_coord))


def triangles_area(triangles, vts):
    """Compute the local area of each triangle

    Parameters
    ----------
    triangles : [n, 3] -> m array
        A list of triangles from a mesh
    vts : [m, .] array
        List of vertices

    Returns
    -------
    triangles_area : [m] array
        Area of each triangle in the mesh

    See Also
    --------
    random_coordinates_from_surface
    """
    e1 = vts[triangles[:, 1]] - vts[triangles[:, 0]]
    e2 = vts[triangles[:, 2]] - vts[triangles[:, 0]]
    # Area of each triangle is 0.5 * ||e1 x e2||; Lagrange's identity
    # (||e1||^2 ||e2||^2 - (e1 . e2)^2 = ||e1 x e2||^2) avoids forming
    # the cross product explicitly
    sqr_norms = np.sum(np.square(e1), axis=-1) * np.sum(np.square(e2), axis=-1)
    u_dot_v = np.sum(e1 * e2, axis=-1)
    tri_area = 0.5 * np.sqrt(sqr_norms - np.square(u_dot_v))
    return tri_area


def vertices_to_triangles_values(triangles, vts_values):
    """Change from values per vertex to values per triangle

    Parameters
    ----------
    triangles : [n, 3] -> m array
        A list of triangles from a mesh
    vts_values : [m, .] array
        List of values to interpolate from coordinates along vertices,
        (vertices, vertices_normal, vertices_colors ...)
Returns ------- triangles_values : [m] array List of values for each triangle in the mesh See Also -------- random_coordinates_from_surface """ return np.mean(vts_values[triangles], axis=1) dipy-1.11.0/dipy/tracking/meson.build000066400000000000000000000020421476546756600175000ustar00rootroot00000000000000cython_sources = [ 'direction_getter', 'distances', 'tractogen', 'fbcmeasures', 'localtrack', 'propspeed', 'stopping_criterion', 'streamlinespeed', 'tracker_parameters', 'vox2track', ] cython_headers = [ 'direction_getter.pxd', 'tractogen.pxd', 'fbcmeasures.pxd', 'propspeed.pxd', 'stopping_criterion.pxd', 'streamlinespeed.pxd', 'tracker_parameters.pxd' ] foreach ext: cython_sources if fs.exists(ext + '.pxd') extra_args += ['--depfile', meson.current_source_dir() +'/'+ ext + '.pxd', ] endif py3.extension_module(ext, cython_gen.process(ext + '.pyx'), c_args: cython_c_args, include_directories: [incdir_numpy, inc_local], dependencies: [omp], install: true, subdir: 'dipy/tracking' ) endforeach python_sources = ['__init__.py', '_utils.py', 'learning.py', 'life.py', 'local_tracking.py', 'mesh.py', 'metrics.py', 'streamline.py', 'tracker.py', 'utils.py', ] py3.install_sources( python_sources, pure: false, subdir: 'dipy/tracking' ) subdir('tests')dipy-1.11.0/dipy/tracking/metrics.py000066400000000000000000000553061476546756600173710ustar00rootroot00000000000000"""Metrics for tracks, where tracks are arrays of points""" import numpy as np from scipy.interpolate import splev, splprep from dipy.testing.decorators import warning_for_keywords def winding(xyz): """Total turning angle projected. Project space curve to best fitting plane. Calculate the cumulative signed angle between each line segment and the previous one. Parameters ---------- xyz : array-like shape (N,3) Array representing x,y,z of N points in a track. Returns ------- a : scalar Total turning angle in degrees. """ U, s, V = np.linalg.svd(xyz - np.mean(xyz, axis=0), 0) proj = np.dot(U[:, 0:2], np.diag(s[0:2])) turn = 0 for j in range(len(xyz) - 1): v0 = proj[j] v1 = proj[j + 1] v = np.dot(v0, v1) / (np.linalg.norm(v0) * np.linalg.norm(v1)) v = np.clip(v, -1, 1) tmp = np.arccos(v) turn += tmp return np.rad2deg(turn) @warning_for_keywords() def length(xyz, *, along=False): """Euclidean length of track line This will give length in mm if tracks are expressed in world coordinates. Parameters ---------- xyz : array-like shape (N,3) array representing x,y,z of N points in a track along : bool, optional If True, return array giving cumulative length along track, otherwise (default) return scalar giving total length. Returns ------- L : scalar or array shape (N-1,) scalar in case of `along` == False, giving total length, array if `along` == True, giving cumulative lengths. Examples -------- >>> from dipy.tracking.metrics import length >>> xyz = np.array([[1,1,1],[2,3,4],[0,0,0]]) >>> expected_lens = np.sqrt([1+2**2+3**2, 2**2+3**2+4**2]) >>> length(xyz) == expected_lens.sum() True >>> len_along = length(xyz, along=True) >>> np.allclose(len_along, expected_lens.cumsum()) True >>> length([]) 0 >>> length([[1, 2, 3]]) 0 >>> length([], along=True) array([0]) """ xyz = np.asarray(xyz) if xyz.shape[0] < 2: if along: return np.array([0]) return 0 dists = np.sqrt((np.diff(xyz, axis=0) ** 2).sum(axis=1)) if along: return np.cumsum(dists) return np.sum(dists) def bytes(xyz): """Size of track in bytes. Parameters ---------- xyz : array-like shape (N,3) Array representing x,y,z of N points in a track. Returns ------- b : int Number of bytes. 
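Examples
    --------
    >>> import numpy as np
    >>> from dipy.tracking.metrics import bytes
    >>> xyz = np.zeros((10, 3), dtype=np.float64)
    >>> bytes(xyz)  # 10 points * 3 coordinates * 8 bytes each
    240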
""" return xyz.nbytes def midpoint(xyz): """Midpoint of track Parameters ---------- xyz : array-like shape (N,3) array representing x,y,z of N points in a track Returns ------- mp : array shape (3,) Middle point of line, such that, if L is the line length then `np` is the point such that the length xyz[0] to `mp` and from `mp` to xyz[-1] is L/2. If the middle point is not a point in `xyz`, then we take the interpolation between the two nearest `xyz` points. If `xyz` is empty, return a ValueError Examples -------- >>> from dipy.tracking.metrics import midpoint >>> midpoint([]) Traceback (most recent call last): ... ValueError: xyz array cannot be empty >>> midpoint([[1, 2, 3]]) array([1, 2, 3]) >>> xyz = np.array([[1,1,1],[2,3,4]]) >>> midpoint(xyz) array([ 1.5, 2. , 2.5]) >>> xyz = np.array([[0,0,0],[1,1,1],[2,2,2]]) >>> midpoint(xyz) array([ 1., 1., 1.]) >>> xyz = np.array([[0,0,0],[1,0,0],[3,0,0]]) >>> midpoint(xyz) array([ 1.5, 0. , 0. ]) >>> xyz = np.array([[0,9,7],[1,9,7],[3,9,7]]) >>> midpoint(xyz) array([ 1.5, 9. , 7. ]) """ xyz = np.asarray(xyz) n_pts = xyz.shape[0] if n_pts == 0: raise ValueError("xyz array cannot be empty") if n_pts == 1: return xyz.copy().squeeze() cumlen = np.zeros(n_pts) cumlen[1:] = length(xyz, along=True) midlen = cumlen[-1] / 2.0 ind = np.where((cumlen - midlen) > 0)[0][0] len0 = cumlen[ind - 1] len1 = cumlen[ind] Ds = midlen - len0 Lambda = Ds / (len1 - len0) return Lambda * xyz[ind] + (1 - Lambda) * xyz[ind - 1] def center_of_mass(xyz): """Center of mass of streamline Parameters ---------- xyz : array-like shape (N,3) array representing x,y,z of N points in a track Returns ------- com : array shape (3,) center of mass of streamline Examples -------- >>> from dipy.tracking.metrics import center_of_mass >>> center_of_mass([]) Traceback (most recent call last): ... ValueError: xyz array cannot be empty >>> center_of_mass([[1,1,1]]) array([ 1., 1., 1.]) >>> xyz = np.array([[0,0,0],[1,1,1],[2,2,2]]) >>> center_of_mass(xyz) array([ 1., 1., 1.]) """ xyz = np.asarray(xyz) if xyz.size == 0: raise ValueError("xyz array cannot be empty") return np.mean(xyz, axis=0) @warning_for_keywords() def magn(xyz, *, n=1): """magnitude of vector""" mag = np.sum(xyz**2, axis=1) ** 0.5 imag = np.where(mag == 0) mag[imag] = np.finfo(float).eps if n > 1: return np.tile(mag, (n, 1)).T return mag.reshape(len(mag), 1) def frenet_serret(xyz): r"""Frenet-Serret Space Curve Invariants Calculates the 3 vector and 2 scalar invariants of a space curve defined by vectors r = (x,y,z). If z is omitted (i.e. the array xyz has shape (N,2)), then the curve is only 2D (planar), but the equations are still valid. Similar to https://www.mathworks.com/matlabcentral/fileexchange/11169-frenet In the following equations the prime ($'$) indicates differentiation with respect to the parameter $s$ of a parametrised curve $\mathbf{r}(s)$. 
- $\mathbf{T}=\mathbf{r'}/|\mathbf{r'}|\qquad$ (Tangent vector)} - $\mathbf{N}=\mathbf{T'}/|\mathbf{T'}|\qquad$ (Normal vector) - $\mathbf{B}=\mathbf{T}\times\mathbf{N}\qquad$ (Binormal vector) - $\kappa=|\mathbf{T'}|\qquad$ (Curvature) - $\mathrm{\tau}=-\mathbf{B'}\cdot\mathbf{N}$ (Torsion) Parameters ---------- xyz : array-like shape (N,3) array representing x,y,z of N points in a track Returns ------- T : array shape (N,3) array representing the tangent of the curve xyz N : array shape (N,3) array representing the normal of the curve xyz B : array shape (N,3) array representing the binormal of the curve xyz k : array shape (N,1) array representing the curvature of the curve xyz t : array shape (N,1) array representing the torsion of the curve xyz Examples -------- Create a helix and calculate its tangent, normal, binormal, curvature and torsion >>> from dipy.tracking import metrics as tm >>> import numpy as np >>> theta = 2*np.pi*np.linspace(0,2,100) >>> x=np.cos(theta) >>> y=np.sin(theta) >>> z=theta/(2*np.pi) >>> xyz=np.vstack((x,y,z)).T >>> T,N,B,k,t=tm.frenet_serret(xyz) """ xyz = np.asarray(xyz) n_pts = xyz.shape[0] if n_pts == 0: raise ValueError("xyz array cannot be empty") dxyz = np.gradient(xyz)[0] ddxyz = np.gradient(dxyz)[0] # Tangent T = np.divide(dxyz, magn(dxyz, n=3)) # Derivative of Tangent dT = np.gradient(T)[0] # Normal N = np.divide(dT, magn(dT, n=3)) # Binormal B = np.cross(T, N) # Curvature k = magn(np.cross(dxyz, ddxyz), n=1) / (magn(dxyz, n=1) ** 3) # Torsion # (In matlab was t=dot(-B,N,2)) t = np.sum(-B * N, axis=1) # return T,N,B,k,t,dxyz,ddxyz,dT return T, N, B, k, t def mean_curvature(xyz): """Calculates the mean curvature of a curve Parameters ---------- xyz : array-like shape (N,3) array representing x,y,z of N points in a curve Returns ------- m : float Mean curvature. Examples -------- Create a straight line and a semi-circle and print their mean curvatures >>> from dipy.tracking import metrics as tm >>> import numpy as np >>> x=np.linspace(0,1,100) >>> y=0*x >>> z=0*x >>> xyz=np.vstack((x,y,z)).T >>> m=tm.mean_curvature(xyz) #mean curvature straight line >>> theta=np.pi*np.linspace(0,1,100) >>> x=np.cos(theta) >>> y=np.sin(theta) >>> z=0*x >>> xyz=np.vstack((x,y,z)).T >>> _= tm.mean_curvature(xyz) #mean curvature for semi-circle """ xyz = np.asarray(xyz) n_pts = xyz.shape[0] if n_pts == 0: raise ValueError("xyz array cannot be empty") dxyz = np.gradient(xyz)[0] ddxyz = np.gradient(dxyz)[0] # Curvature k = magn(np.cross(dxyz, ddxyz), n=1) / (magn(dxyz, n=1) ** 3) return np.mean(k) def mean_orientation(xyz): """ Calculates the mean orientation of a curve Parameters ---------- xyz : array-like shape (N,3) array representing x,y,z of N points in a curve Returns ------- m : float Mean orientation. 
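Examples
    --------
    >>> import numpy as np
    >>> from dipy.tracking.metrics import mean_orientation
    >>> xyz = np.array([[0, 0, 0], [1, 0, 0], [2, 0, 0]], dtype=float)
    >>> np.allclose(mean_orientation(xyz), [1, 0, 0])
    True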
""" xyz = np.asarray(xyz) n_pts = xyz.shape[0] if n_pts == 0: raise ValueError("xyz array cannot be empty") dxyz = np.gradient(xyz)[0] return np.mean(dxyz, axis=0) def generate_combinations(items, n): """Combine sets of size n from items Parameters ---------- items : sequence n : int Returns ------- ic : iterator Examples -------- >>> from dipy.tracking.metrics import generate_combinations >>> ic=generate_combinations(range(3),2) >>> for i in ic: print(i) [0, 1] [0, 2] [1, 2] """ if n == 0: yield [] elif n == 2: # if n=2 non_recursive for i in range(len(items) - 1): for j in range(i + 1, len(items)): yield [i, j] else: # if n>2 uses recursion for i in range(len(items)): for cc in generate_combinations(items[i + 1 :], n - 1): yield [items[i]] + cc @warning_for_keywords() def longest_track_bundle(bundle, *, sort=False): """Return longest track or length sorted track indices in `bundle` If `sort` == True, return the indices of the sorted tracks in the bundle, otherwise return the longest track. Parameters ---------- bundle : sequence of tracks as arrays, shape (N1,3) ... (Nm,3) sort : bool, optional If False (default) return longest track. If True, return length sorted indices for tracks in bundle Returns ------- longest_or_indices : array longest track - shape (N,3) - (if `sort` is False), or indices of length sorted tracks (if `sort` is True) Examples -------- >>> from dipy.tracking.metrics import longest_track_bundle >>> import numpy as np >>> bundle = [np.array([[0,0,0],[2,2,2]]),np.array([[0,0,0],[4,4,4]])] >>> longest_track_bundle(bundle) array([[0, 0, 0], [4, 4, 4]]) >>> longest_track_bundle(bundle, sort=True) #doctest: +ELLIPSIS array([0, 1]...) """ alllengths = [length(t) for t in bundle] alllengths = np.array(alllengths) if sort: ilongest = alllengths.argsort() return ilongest else: ilongest = alllengths.argmax() return bundle[ilongest] def intersect_sphere(xyz, center, radius): """If any segment of the track is intersecting with a sphere of specific center and radius return True otherwise False Parameters ---------- xyz : array, shape (N,3) representing x,y,z of the N points of the track center : array, shape (3,) center of the sphere radius : float radius of the sphere Returns ------- tf : {True, False} True if track `xyz` intersects sphere >>> from dipy.tracking.metrics import intersect_sphere >>> line=np.array(([0,0,0],[1,1,1],[2,2,2])) >>> sph_cent=np.array([1,1,1]) >>> sph_radius = 1 >>> intersect_sphere(line,sph_cent,sph_radius) True Notes ----- The ray to sphere intersection method used here is similar with https://paulbourke.net/geometry/circlesphere/ https://paulbourke.net/geometry/circlesphere/source.cpp we just applied it for every segment neglecting the intersections where the intersecting points are not inside the segment """ center = np.array(center) # print center lt = xyz.shape[0] for i in range(lt - 1): # first point x1 = xyz[i] # second point x2 = xyz[i + 1] # do the calculations as given in the Notes x = x2 - x1 a = np.inner(x, x) x1c = x1 - center b = 2 * np.inner(x, x1c) c = ( np.inner(center, center) + np.inner(x1, x1) - 2 * np.inner(center, x1) - radius**2 ) bb4ac = b * b - 4 * a * c # print 'bb4ac',bb4ac if abs(a) < np.finfo(float).eps or bb4ac < 0: # too small segment or # no intersection continue if bb4ac == 0: # one intersection point p mu = -b / 2 * a p = x1 + mu * x # check if point is inside the segment # print 'p',p if np.inner(p - x1, p - x1) <= a: return True if bb4ac > 0: # two intersection points p1 and p2 mu = (-b + np.sqrt(bb4ac)) / (2 * a) p1 = 
x1 + mu * x
            mu = (-b - np.sqrt(bb4ac)) / (2 * a)
            p2 = x1 + mu * x
            # check if points are inside the line segment
            if np.inner(p1 - x1, p1 - x1) <= a or np.inner(p2 - x1, p2 - x1) <= a:
                return True
    return False


def inside_sphere(xyz, center, radius):
    r"""If any point of the track is inside a sphere of a specified
    center and radius return True otherwise False.

    Mathematically this can be simply described by $|x-c|\le r$ where $x$
    is a point, $c$ the center of the sphere and $r$ the radius of the
    sphere.

    Parameters
    ----------
    xyz : array, shape (N,3)
        representing x,y,z of the N points of the track
    center : array, shape (3,)
        center of the sphere
    radius : float
        radius of the sphere

    Returns
    -------
    tf : {True, False}
        Whether any point of the track is inside the sphere.

    Examples
    --------
    >>> from dipy.tracking.metrics import inside_sphere
    >>> line=np.array(([0,0,0],[1,1,1],[2,2,2]))
    >>> sph_cent=np.array([1,1,1])
    >>> sph_radius = 1
    >>> inside_sphere(line,sph_cent,sph_radius)
    True
    """
    return (np.sqrt(np.sum((xyz - center) ** 2, axis=1)) <= radius).any()


def inside_sphere_points(xyz, center, radius):
    r"""If a track intersects with a sphere of a specified center and
    radius return the points that are inside the sphere otherwise False.

    Mathematically this can be simply described by $|x-c| \le r$ where $x$
    is a point, $c$ the center of the sphere and $r$ the radius of the
    sphere.

    Parameters
    ----------
    xyz : array, shape (N,3)
        representing x,y,z of the N points of the track
    center : array, shape (3,)
        center of the sphere
    radius : float
        radius of the sphere

    Returns
    -------
    xyzn : array, shape(M,3)
        array representing x,y,z of the M points inside the sphere

    Examples
    --------
    >>> from dipy.tracking.metrics import inside_sphere_points
    >>> line=np.array(([0,0,0],[1,1,1],[2,2,2]))
    >>> sph_cent=np.array([1,1,1])
    >>> sph_radius = 1
    >>> inside_sphere_points(line,sph_cent,sph_radius)
    array([[1, 1, 1]])
    """
    return xyz[(np.sqrt(np.sum((xyz - center) ** 2, axis=1)) <= radius)]


@warning_for_keywords()
def spline(xyz, *, s=3, k=2, nest=-1):
    """Generate B-splines as documented in
    https://scipy-cookbook.readthedocs.io/items/Interpolation.html

    The scipy.interpolate package wraps the netlib FITPACK routines
    (Dierckx) for calculating smoothing splines for various kinds of data
    and geometries. Although the data is evenly spaced in this example, it
    need not be so to use this routine.

    Parameters
    ----------
    xyz : array, shape (N,3)
        array representing x,y,z of N points in 3d space
    s : float, optional
        A smoothing condition. The amount of smoothness is determined by
        satisfying the conditions: sum((w * (y - g))**2,axis=0) <= s where
        g(x) is the smoothed interpolation of (x,y). The user can use s to
        control the tradeoff between closeness and smoothness of fit.
        Larger s means more smoothing while smaller values of s indicate
        less smoothing. Recommended values of s depend on the weights, w.
        If the weights represent the inverse of the standard-deviation of
        y, then a good s value should be found in the range
        (m-sqrt(2*m),m+sqrt(2*m)) where m is the number of datapoints in
        x, y, and w.
    k : int, optional
        Degree of the spline. Cubic splines are recommended. Even values
        of k should be avoided especially with a small s-value.
nest : None or int, optional An over-estimate of the total number of knots of the spline to help in determining the storage space. None results in value m+2*k. -1 results in m+k+1. Always large enough is nest=m+k+1. Default is -1. Returns ------- xyzn : array, shape (M,3) array representing x,y,z of the M points inside the sphere Examples -------- >>> import numpy as np >>> t=np.linspace(0,1.75*2*np.pi,100)# make ascending spiral in 3-space >>> x = np.sin(t) >>> y = np.cos(t) >>> z = t >>> x+= np.random.normal(scale=0.1, size=x.shape) # add noise >>> y+= np.random.normal(scale=0.1, size=y.shape) >>> z+= np.random.normal(scale=0.1, size=z.shape) >>> xyz=np.vstack((x,y,z)).T >>> xyzn=spline(xyz,s=3,k=2,nest=-1) >>> len(xyzn) > len(xyz) True See Also -------- scipy.interpolate.splprep scipy.interpolate.splev """ # find the knot points tckp, u = splprep([xyz[:, 0], xyz[:, 1], xyz[:, 2]], s=s, k=k, nest=nest) # evaluate spline, including interpolated points xnew, ynew, znew = splev(np.linspace(0, 1, 400), tckp) return np.vstack((xnew, ynew, znew)).T def startpoint(xyz): """First point of the track Parameters ---------- xyz : array, shape(N,3) Track. Returns ------- sp : array, shape(3,) First track point. Examples -------- >>> from dipy.tracking.metrics import startpoint >>> import numpy as np >>> theta=np.pi*np.linspace(0,1,100) >>> x=np.cos(theta) >>> y=np.sin(theta) >>> z=0*x >>> xyz=np.vstack((x,y,z)).T >>> sp=startpoint(xyz) >>> sp.any()==xyz[0].any() True """ return xyz[0] def endpoint(xyz): """ Parameters ---------- xyz : array, shape(N,3) Track. Returns ------- ep : array, shape(3,) First track point. Examples -------- >>> from dipy.tracking.metrics import endpoint >>> import numpy as np >>> theta=np.pi*np.linspace(0,1,100) >>> x=np.cos(theta) >>> y=np.sin(theta) >>> z=0*x >>> xyz=np.vstack((x,y,z)).T >>> ep=endpoint(xyz) >>> ep.any()==xyz[-1].any() True """ return xyz[-1] def arbitrarypoint(xyz, distance): """Select an arbitrary point along distance on the track (curve) Parameters ---------- xyz : array-like shape (N,3) array representing x,y,z of N points in a track distance : float float representing distance travelled from the xyz[0] point of the curve along the curve. Returns ------- ap : array shape (3,) Arbitrary point of line, such that, if the arbitrary point is not a point in `xyz`, then we take the interpolation between the two nearest `xyz` points. 
If `xyz` is empty, return a ValueError Examples -------- >>> import numpy as np >>> from dipy.tracking.metrics import arbitrarypoint, length >>> theta=np.pi*np.linspace(0,1,100) >>> x=np.cos(theta) >>> y=np.sin(theta) >>> z=0*x >>> xyz=np.vstack((x,y,z)).T >>> ap=arbitrarypoint(xyz,length(xyz)/3) """ xyz = np.asarray(xyz) n_pts = xyz.shape[0] if n_pts == 0: raise ValueError("xyz array cannot be empty") if n_pts == 1: return xyz.copy().squeeze() cumlen = np.zeros(n_pts) cumlen[1:] = length(xyz, along=True) if cumlen[-1] < distance: raise ValueError("Given distance is bigger than the length of the curve") ind = np.where((cumlen - distance) > 0)[0][0] len0 = cumlen[ind - 1] len1 = cumlen[ind] Ds = distance - len0 Lambda = Ds / (len1 - len0) return Lambda * xyz[ind] + (1 - Lambda) * xyz[ind - 1] def _extrap(xyz, cumlen, distance): """Helper function for extrapolate""" ind = np.where((cumlen - distance) > 0)[0][0] len0 = cumlen[ind - 1] len1 = cumlen[ind] Ds = distance - len0 Lambda = Ds / (len1 - len0) return Lambda * xyz[ind] + (1 - Lambda) * xyz[ind - 1] def principal_components(xyz): """We use PCA to calculate the 3 principal directions for a track Parameters ---------- xyz : array-like shape (N,3) array representing x,y,z of N points in a track Returns ------- va : array_like eigenvalues ve : array_like eigenvectors Examples -------- >>> import numpy as np >>> from dipy.tracking.metrics import principal_components >>> theta=np.pi*np.linspace(0,1,100) >>> x=np.cos(theta) >>> y=np.sin(theta) >>> z=0*x >>> xyz=np.vstack((x,y,z)).T >>> va, ve = principal_components(xyz) >>> np.allclose(va, [0.51010101, 0.09883545, 0]) True """ C = np.cov(xyz.T) va, ve = np.linalg.eig(C) return va, ve def midpoint2point(xyz, p): """Calculate distance from midpoint of a curve to arbitrary point p Parameters ---------- xyz : array-like shape (N,3) array representing x,y,z of N points in a track p : array shape (3,) array representing an arbitrary point with x,y,z coordinates in space. 
Returns
    -------
    d : float
        the Euclidean distance between the midpoint of `xyz` and the
        point `p`

    Examples
    --------
    >>> import numpy as np
    >>> from dipy.tracking.metrics import midpoint2point, midpoint
    >>> theta=np.pi*np.linspace(0,1,100)
    >>> x=np.cos(theta)
    >>> y=np.sin(theta)
    >>> z=0*x
    >>> xyz=np.vstack((x,y,z)).T
    >>> dist=midpoint2point(xyz,np.array([0,0,0]))
    """
    mid = midpoint(xyz)
    # Euclidean distance from the curve midpoint to the point p
    return np.sqrt(np.sum((p - mid) ** 2))
dipy-1.11.0/dipy/tracking/propspeed.pxd000066400000000000000000000033531476546756600200620ustar00rootroot00000000000000cimport numpy as cnp

from dipy.direction.pmf cimport PmfGen
from dipy.tracking.tracker_parameters cimport TrackerParameters, TrackerStatus
from dipy.utils.fast_numpy cimport RNGState


cdef TrackerStatus deterministic_propagator(double* point,
                                            double* direction,
                                            TrackerParameters params,
                                            double* stream_data,
                                            PmfGen pmf_gen,
                                            RNGState* rng) noexcept nogil

cdef TrackerStatus probabilistic_propagator(double* point,
                                            double* direction,
                                            TrackerParameters params,
                                            double* stream_data,
                                            PmfGen pmf_gen,
                                            RNGState* rng) noexcept nogil

cdef TrackerStatus parallel_transport_propagator(double* point,
                                                 double* direction,
                                                 TrackerParameters params,
                                                 double* stream_data,
                                                 PmfGen pmf_gen,
                                                 RNGState* rng) noexcept nogil

cdef cnp.npy_intp _propagation_direction(double *point, double* prev,
                                         double* qa, double *ind,
                                         double *odf_vertices, double qa_thr,
                                         double ang_thr,
                                         cnp.npy_intp *qa_shape,
                                         cnp.npy_intp* strides,
                                         double *direction,
                                         double total_weight) noexcept nogil
dipy-1.11.0/dipy/tracking/propspeed.pyx000066400000000000000000001035461476546756600201140ustar00rootroot00000000000000# cython: boundscheck=False
# cython: initializedcheck=False
# cython: wraparound=False
# cython: nonecheck=False

"""Track propagation performance functions."""

# cython: profile=True
# cython: embedsignature=True

cimport cython
from cython.parallel import prange

import numpy as np
cimport numpy as cnp

from dipy.core.interpolation cimport _trilinear_interpolation_iso, offset
from dipy.direction.pmf cimport PmfGen
from dipy.utils.fast_numpy cimport (
    copy_point,
    cross,
    cumsum,
    norm,
    normalize,
    random_float,
    random_perpendicular_vector,
    random_point_within_circle,
    RNGState,
    where_to_insert,
)
from dipy.tracking.tractogen cimport prepare_pmf
from dipy.tracking.tracker_parameters cimport (TrackerParameters,
                                               TrackerStatus,
                                               SUCCESS, FAIL)

from libc.stdlib cimport malloc, free
from libc.math cimport M_PI, pow, sin, cos, fabs
from libc.stdio cimport printf

cdef extern from "dpy_math.h" nogil:
    double floor(double x)
    float acos(float x)
    double sqrt(double x)
    double DPY_PI

DEF PEAK_NO=5

# initialize numpy runtime
cnp.import_array()


def ndarray_offset(cnp.ndarray[cnp.npy_intp, ndim=1] indices,
                   cnp.ndarray[cnp.npy_intp, ndim=1] strides,
                   int lenind, int typesize):
    """ Find offset in an N-dimensional ndarray using strides

    Parameters
    ----------
    indices : array, npy_intp shape (N,)
        Indices of the array which we want to find the offset.
    strides : array, shape (N,)
        Strides of array.
    lenind : int
        Length of the `indices` array.
    typesize : int
        Number of bytes for data type e.g.
if 8 for double, 4 for int32 Returns ------- offset : integer Index position in flattened array Examples -------- >>> import numpy as np >>> from dipy.tracking.propspeed import ndarray_offset >>> I=np.array([1,1]) >>> A=np.array([[1,0,0],[0,2,0],[0,0,3]]) >>> S=np.array(A.strides) >>> ndarray_offset(I,S,2,A.dtype.itemsize) 4 >>> A.ravel()[4]==A[1,1] True """ if not cnp.PyArray_CHKFLAGS(indices, cnp.NPY_ARRAY_C_CONTIGUOUS): raise ValueError("indices is not C contiguous") if not cnp.PyArray_CHKFLAGS(strides, cnp.NPY_ARRAY_C_CONTIGUOUS): raise ValueError("strides is not C contiguous") return offset( cnp.PyArray_DATA(indices), cnp.PyArray_DATA(strides), lenind, typesize) cdef cnp.npy_intp _nearest_direction(double* dx, double* qa, double *ind, cnp.npy_intp peaks, double *odf_vertices, double qa_thr, double ang_thr, double *direction) noexcept nogil: """ Give the nearest direction to a point, checking threshold and angle Parameters ---------- dx : double array shape (3,) Moving direction of the current tracking. qa : double array shape (Np,) Quantitative anisotropy matrix, where ``Np`` is the number of peaks. ind : array, float64 shape(x, y, z, Np) Index of the track orientation. peaks : npy_intp odf_vertices : double array shape (N, 3) Sampling directions on the sphere. qa_thr : float Threshold for QA, we want everything higher than this threshold. ang_thr : float Angle threshold, we only select fiber orientation within this range. direction : double array shape (3,) The fiber orientation to be considered in the interpolation. The array gets modified in-place. Returns ------- delta : bool Delta function: if 1 we give it weighting, if it is 0 we don't give any weighting. """ cdef: double max_dot = 0 double angl,curr_dot double odfv[3] cnp.npy_intp i, j, max_doti = 0 # calculate the cos with radians angl = cos((DPY_PI * ang_thr) / 180.) 
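    # ang_thr is given in degrees; because cos is monotonically
    # decreasing on [0, pi], comparing the |dot| products against
    # cos(ang_thr) below is equivalent to comparing the angles against
    # ang_thr directly, and avoids calling acos for every peak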
# if the maximum peak is lower than the threshold then there is no point # continuing tracking if qa[0] <= qa_thr: return 0 # for all peaks find the minimum angle between odf_vertices and dx for i from 0 <= i < peaks: # if the current peak is smaller than the threshold then jump out if qa[i] <= qa_thr: break # copy odf_vertices for j from 0 <= j < 3: odfv[j]=odf_vertices[3 * ind[i] + j] # calculate the absolute dot product between dx and odf_vertices curr_dot = dx[0] * odfv[0] + dx[1] * odfv[1] + dx[2] * odfv[2] if curr_dot < 0: #abs check curr_dot = -curr_dot # maximum dot means minimum angle # store the maximum dot and the corresponding index from the # neighboring voxel in maxdoti if curr_dot > max_dot: max_dot=curr_dot max_doti = i # if maxdot smaller than our angular *dot* threshold stop tracking if max_dot < angl: return 0 # copy the odf_vertices for the voxel qa indices which have the smaller # angle for j from 0 <= j < 3: odfv[j] = odf_vertices[3 * ind[max_doti] + j] # if the dot product is negative then return the opposite direction # otherwise return the same direction if dx[0] * odfv[0] + dx[1] * odfv[1] + dx[2] * odfv[2] < 0: for j from 0 <= j < 3: direction[j] = -odf_vertices[3 * ind[max_doti] + j] return 1 for j from 0 <= j < 3: direction[j]= odf_vertices[3 * ind[max_doti] + j] return 1 @cython.cdivision(True) cdef cnp.npy_intp _propagation_direction(double *point, double* dx, double* qa, double *ind, double *odf_vertices, double qa_thr, double ang_thr, cnp.npy_intp *qa_shape, cnp.npy_intp* strides, double *direction, double total_weight) noexcept nogil: cdef: double total_w = 0 # total weighting useful for interpolation double delta = 0 # store delta function (stopping function) result double new_direction[3] # new propagation direction double w[8] double qa_tmp[PEAK_NO] double ind_tmp[PEAK_NO] cnp.npy_intp index[24] cnp.npy_intp xyz[4] cnp.npy_intp i, j, m double normd # number of allowed peaks e.g. for fa is 1 for gqi.qa is 5 cnp.npy_intp peaks = qa_shape[3] # Calculate qa & ind of each of the 8 neighboring voxels. # To do that we use trilinear interpolation and return the weights and the # indices for the weights i.e. 
xyz in qa[x,y,z] _trilinear_interpolation_iso(point, w, index) # check if you are outside of the volume for i from 0 <= i < 3: new_direction[i] = 0 if index[7 * 3 + i] >= qa_shape[i] or index[i] < 0: return 0 # for every weight sum the total weighting for m from 0 <= m < 8: for i from 0 <= i < 3: xyz[i]=index[m * 3 + i] # fill qa_tmp and ind_tmp for j from 0 <= j < peaks: xyz[3] = j off = offset( xyz, strides, 4, 8) qa_tmp[j] = qa[off] ind_tmp[j] = ind[off] # return the nearest direction by searching in all peaks delta=_nearest_direction(dx, qa_tmp, ind_tmp, peaks, odf_vertices, qa_thr, ang_thr, direction) # if delta is 0 then that means that there was no good direction # (obeying the thresholds) from that neighboring voxel, so this voxel # is not adding to the total weight if delta == 0: continue # add in total total_w += w[m] for i from 0 <= i < 3: new_direction[i] += w[m] * direction[i] # if less than half the volume is time to stop propagating if total_w < total_weight: # termination return 0 # all good return normalized weighted next direction normd = new_direction[0]**2 + new_direction[1]**2 + new_direction[2]**2 normd = 1 / sqrt(normd) for i from 0 <= i < 3: direction[i] = new_direction[i] * normd return 1 cdef cnp.npy_intp _initial_direction(double* seed,double *qa, double* ind, double* odf_vertices, double qa_thr, cnp.npy_intp* strides, cnp.npy_intp ref, double* direction) noexcept nogil: """ First direction that we get from a seeding point """ cdef: cnp.npy_intp point[4] cnp.npy_intp off cnp.npy_intp i double qa_tmp,ind_tmp # Very tricky/cool addition/flooring that helps create a valid neighborhood # (grid) for the trilinear interpolation to run smoothly. # Find the index for qa for i from 0 <= i < 3: point[i] = floor(seed[i] + .5) point[3] = ref # Find the offset in memory to access the qa value off = offset(point,strides, 4, 8) qa_tmp = qa[off] # Check for scalar threshold if qa_tmp < qa_thr: return 0 # Find the correct direction from the indices ind_tmp = ind[off] # similar to ind[point] in numpy syntax # Return initial direction through odf_vertices by ind for i from 0 <= i < 3: direction[i] = odf_vertices[3 * ind_tmp + i] return 1 def eudx_both_directions(cnp.ndarray[double, ndim=1] seed, cnp.npy_intp ref, cnp.ndarray[double, ndim=4] qa, cnp.ndarray[double, ndim=4] ind, cnp.ndarray[double, ndim=2] odf_vertices, double qa_thr, double ang_thr, double step_sz, double total_weight, cnp.npy_intp max_points): """ Parameters ---------- seed : array, float64 shape (3,) Point where the tracking starts. ref : cnp.npy_intp int Index of peak to follow first. qa : array, float64 shape (X, Y, Z, Np) Anisotropy matrix, where ``Np`` is the number of maximum allowed peaks. ind : array, float64 shape(x, y, z, Np) Index of the track orientation. odf_vertices : double array shape (N, 3) Sampling directions on the sphere. qa_thr : float Threshold for QA, we want everything higher than this threshold. ang_thr : float Angle threshold, we only select fiber orientation within this range. 
step_sz : double total_weight : double max_points : cnp.npy_intp Returns ------- track : array, shape (N,3) """ cdef: double *ps = cnp.PyArray_DATA(seed) double *pqa = cnp.PyArray_DATA(qa) double *pin = cnp.PyArray_DATA(ind) double *pverts = cnp.PyArray_DATA(odf_vertices) cnp.npy_intp *pstr = cnp.PyArray_STRIDES(qa) cnp.npy_intp *qa_shape = cnp.PyArray_DIMS(qa) cnp.npy_intp *pvstr = cnp.PyArray_STRIDES(odf_vertices) cnp.npy_intp d, i, j, cnt double direction[3] double dx[3] double idirection[3] double ps2[3] double tmp, ftmp if not cnp.PyArray_CHKFLAGS(seed, cnp.NPY_ARRAY_C_CONTIGUOUS): raise ValueError("seed is not C contiguous") if not cnp.PyArray_CHKFLAGS(qa, cnp.NPY_ARRAY_C_CONTIGUOUS): raise ValueError("qa is not C contiguous") if not cnp.PyArray_CHKFLAGS(ind, cnp.NPY_ARRAY_C_CONTIGUOUS): raise ValueError("ind is not C contiguous") if not cnp.PyArray_CHKFLAGS(odf_vertices, cnp.NPY_ARRAY_C_CONTIGUOUS): raise ValueError("odf_vertices is not C contiguous") cnt = 0 d = _initial_direction(ps, pqa, pin, pverts, qa_thr, pstr, ref, idirection) if d == 0: return None for i from 0 <= i < 3: # store the initial direction dx[i] = idirection[i] # ps2 is for downwards and ps for upwards propagation ps2[i] = ps[i] point = seed.copy() track = [point.copy()] # track towards one direction while d: d = _propagation_direction(ps, dx, pqa, pin, pverts, qa_thr, ang_thr, qa_shape, pstr, direction, total_weight) if d == 0: break if cnt > max_points: break # update the track for i from 0 <= i < 3: dx[i] = direction[i] # check for boundaries tmp = ps[i] + step_sz * dx[i] if tmp > qa_shape[i] - 1 or tmp < 0.: d = 0 break # propagate ps[i] = tmp point[i] = ps[i] if d == 1: track.append(point.copy()) cnt += 1 d = 1 for i from 0 <= i < 3: dx[i] = -idirection[i] cnt = 0 # track towards the opposite direction while d: d = _propagation_direction(ps2, dx, pqa, pin, pverts, qa_thr, ang_thr, qa_shape, pstr, direction, total_weight) if d == 0: break if cnt > max_points: break # update the track for i from 0 <= i < 3: dx[i] = direction[i] # check for boundaries tmp=ps2[i] + step_sz*dx[i] if tmp > qa_shape[i] - 1 or tmp < 0.: d = 0 break # propagate ps2[i] = tmp point[i] = ps2[i] # to be changed # add track point if d == 1: track.insert(0, point.copy()) cnt += 1 # prepare to return final track for the current seed tmp_track = np.array(track, dtype=np.float32) # Sometimes one of the ends takes small negative values; needs to be # investigated further # Return track for the current seed point and ref return tmp_track cdef int initialize_ptt(TrackerParameters params, double* stream_data, PmfGen pmf_gen, double* seed_point, double* seed_direction, RNGState* rng) noexcept nogil: """Sample an initial curve by rejection sampling. Parameters ---------- params : TrackerParameters PTT tracking parameters. stream_data : double* Streamline data persitant across tracking steps. pmf_gen : PmfGen Orientation data. seed_point : double[3] Initial point seed_direction : double[3] Initial direction rng : RNGState* Random number generator state. (Threadsafe) Returns ------- status : int Returns 0 if the initialization was successful, or 1 otherwise. 
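
    Notes
    -----
    Initialization uses two-stage rejection sampling: a calibration loop
    estimates the maximum posterior from ``rejection_sampling_nbr_sample``
    random candidates, and candidates are then drawn until one passes a
    uniform test against that (compensated) maximum. A minimal pure-Python
    sketch of the same pattern; all names here are illustrative and not part
    of this module::

        def rejection_sample(propose, support, n_calib, max_try, rng):
            # calibration: estimate the maximum of the target density
            max_post = max(support(propose()) for _ in range(n_calib))
            for _ in range(max_try):
                cand = propose()
                # accept with probability support(cand) / max_post
                if rng.random() * max_post <= support(cand):
                    return cand
            return None  # no suitable candidate within the trial limit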
""" cdef double data_support = 0 cdef double max_posterior = 0 cdef int tries # position stream_data[19] = seed_point[0] stream_data[20] = seed_point[1] stream_data[21] = seed_point[2] for tries in range(params.ptt.rejection_sampling_nbr_sample): initialize_ptt_candidate(params, stream_data, pmf_gen, seed_direction, rng) data_support = calculate_ptt_data_support(params, stream_data, pmf_gen) if data_support > max_posterior: max_posterior = data_support # Compensation for underestimation of max posterior estimate max_posterior = pow(2.0 * max_posterior, params.ptt.data_support_exponent) # Initialization is successful if a suitable candidate can be sampled # within the trial limit for tries in range(params.ptt.rejection_sampling_max_try): initialize_ptt_candidate(params, stream_data, pmf_gen, seed_direction, rng) if (random_float(rng) * max_posterior <= calculate_ptt_data_support(params, stream_data, pmf_gen)): stream_data[22] = stream_data[23] # last_val = last_val_cand return 0 return 1 cdef void initialize_ptt_candidate(TrackerParameters params, double* stream_data, PmfGen pmf_gen, double* init_dir, RNGState* rng) noexcept nogil: """ Initialize the parallel transport frame. After initial position is set, a parallel transport frame is set using the initial direction (a walking frame, i.e., 3 orthonormal vectors, plus 2 scalars, i.e. k1 and k2). A point and parallel transport frame parametrizes a curve that is named the "probe". Using probe parameters (probe_length, probe_radius, probe_quality, probe_count), a short fiber bundle segment is modelled. Parameters ---------- params : TrackerParameters PTT tracking parameters. stream_data : double* Streamline data persitant across tracking steps. pmf_gen : PmfGen Orientation data. init_dir : double[3] Initial tracking direction (tangent) rng : RNGState* Random number generator state. (Threadsafe) Returns ------- """ cdef double[3] position cdef int count cdef int i cdef double* pmf # Initialize Frame stream_data[1] = init_dir[0] stream_data[2] = init_dir[1] stream_data[3] = init_dir[2] random_perpendicular_vector(&stream_data[7], &stream_data[1], rng) # frame2, frame0 cross(&stream_data[4], &stream_data[7], &stream_data[1]) # frame1, frame2, frame0 stream_data[24], stream_data[25] = \ random_point_within_circle(params.max_curvature, rng) stream_data[22] = 0 # last_val if params.ptt.probe_count == 1: stream_data[22] = pmf_gen.get_pmf_value_c(&stream_data[19], &stream_data[1]) # position, frame[0] else: for count in range(params.ptt.probe_count): for i in range(3): position[i] = (stream_data[19 + i] + stream_data[4 + i] * params.ptt.probe_radius * cos(count * params.ptt.angular_separation) * params.inv_voxel_size[i] + stream_data[7 + i] * params.ptt.probe_radius * sin(count * params.ptt.angular_separation) * params.inv_voxel_size[i]) stream_data[22] += pmf_gen.get_pmf_value_c(&stream_data[19], &stream_data[1]) cdef void prepare_ptt_propagator(TrackerParameters params, double* stream_data, double arclength) noexcept nogil: """Prepare the propagator. The propagator used for transporting the moving frame forward. Parameters ---------- params : TrackerParameters PTT tracking parameters. stream_data : double* Streamline data persitant across tracking steps. 
arclength : double Arclenth, which is equivalent to step size along the arc """ cdef double tmp_arclength stream_data[10] = arclength # propagator[0] if (fabs(stream_data[24]) < params.ptt.k_small and fabs(stream_data[25]) < params.ptt.k_small): stream_data[11] = 0 stream_data[12] = 0 stream_data[13] = 1 stream_data[14] = 0 stream_data[15] = 0 stream_data[16] = 0 stream_data[17] = 0 stream_data[18] = 1 else: if fabs(stream_data[24]) < params.ptt.k_small: # k1 stream_data[24] = params.ptt.k_small if fabs(stream_data[25]) < params.ptt.k_small: # k2 stream_data[25] = params.ptt.k_small tmp_arclength = arclength * arclength / 2.0 # stream_data[10:18] -> propagator stream_data[11] = stream_data[24] * tmp_arclength stream_data[12] = stream_data[25] * tmp_arclength stream_data[13] = (1 - stream_data[25] * stream_data[25] * tmp_arclength - stream_data[24] * stream_data[24] * tmp_arclength) stream_data[14] = stream_data[24] * arclength stream_data[15] = stream_data[25] * arclength stream_data[16] = -stream_data[25] * arclength stream_data[17] = (-stream_data[24] * stream_data[25] * tmp_arclength) stream_data[18] = (1 - stream_data[25] * stream_data[25] * tmp_arclength) cdef double calculate_ptt_data_support(TrackerParameters params, double* stream_data, PmfGen pmf_gen) noexcept nogil: """Calculates data support for the candidate probe. Parameters ---------- params : TrackerParameters PTT tracking parameters. stream_data : double* Streamline data persitant across tracking steps. pmf_gen : PmfGen Orientation data. """ cdef double fod_amp cdef double[3] position cdef double[3][3] frame cdef double[3] tangent cdef double[3] normal cdef double[3] binormal cdef double[3] new_position cdef double likelihood cdef int c, i, j, q cdef double* pmf prepare_ptt_propagator(params, stream_data, params.ptt.probe_step_size) for i in range(3): position[i] = stream_data[19+i] for j in range(3): frame[i][j] = stream_data[1 + i * 3 + j] likelihood = stream_data[22] for q in range(1, params.ptt.probe_quality): for i in range(3): # stream_data[10:18] : propagator position[i] = \ (stream_data[10] * frame[0][i] * params.voxel_size[i] + stream_data[11] * frame[1][i] * params.voxel_size[i] + stream_data[12] * frame[2][i] * params.voxel_size[i] + position[i]) tangent[i] = (stream_data[13] * frame[0][i] + stream_data[14] * frame[1][i] + stream_data[15] * frame[2][i]) normalize(&tangent[0]) if q < (params.ptt.probe_quality - 1): for i in range(3): binormal[i] = (stream_data[16] * frame[0][i] + stream_data[17] * frame[1][i] + stream_data[18] * frame[2][i]) cross(&normal[0], &binormal[0], &tangent[0]) copy_point(&tangent[0], &frame[0][0]) copy_point(&normal[0], &frame[1][0]) copy_point(&binormal[0], &frame[2][0]) if params.ptt.probe_count == 1: fod_amp = pmf_gen.get_pmf_value_c(position, tangent) fod_amp = fod_amp if fod_amp > params.sh.pmf_threshold else 0 stream_data[23] = fod_amp # last_val_cand likelihood += stream_data[23] # last_val_cand else: stream_data[23] = 0 # last_val_cand if q == params.ptt.probe_quality - 1: for i in range(3): binormal[i] = (stream_data[16] * frame[0][i] + stream_data[17] * frame[1][i] + stream_data[18] * frame[2][i]) cross(&normal[0], &binormal[0], &tangent[0]) for c in range(params.ptt.probe_count): for i in range(3): new_position[i] = (position[i] + normal[i] * params.ptt.probe_radius * cos(c * params.ptt.angular_separation) * params.inv_voxel_size[i] + binormal[i] * params.ptt.probe_radius * sin(c * params.ptt.angular_separation) * params.inv_voxel_size[i]) fod_amp = 
pmf_gen.get_pmf_value_c(new_position, tangent)
                fod_amp = fod_amp if fod_amp > params.sh.pmf_threshold else 0
                stream_data[23] += fod_amp  # last_val_cand
            likelihood += stream_data[23]  # last_val_cand

    likelihood *= params.ptt.probe_normalizer
    if params.ptt.data_support_exponent != 1:
        likelihood = pow(likelihood, params.ptt.data_support_exponent)

    return likelihood


cdef TrackerStatus deterministic_propagator(double* point,
                                            double* direction,
                                            TrackerParameters params,
                                            double* stream_data,
                                            PmfGen pmf_gen,
                                            RNGState* rng) noexcept nogil:
    """Propagate the position by step_size amount.

    The propagation uses the sphere direction with the highest probability
    mass function (pmf) value.

    Parameters
    ----------
    point : double[3]
        Current tracking position.
    direction : double[3]
        Previous tracking direction.
    params : TrackerParameters
        Deterministic tractography parameters.
    stream_data : double*
        Streamline data persistent across tracking steps.
    pmf_gen : PmfGen
        Orientation data.
    rng : RNGState*
        Random number generator state. (thread safe)

    Returns
    -------
    status : TrackerStatus
        Returns SUCCESS if the propagation was successful, or FAIL otherwise.

    """
    cdef:
        cnp.npy_intp i, max_idx
        double max_value = 0
        double* newdir
        double* pmf
        double cos_sim
        cnp.npy_intp len_pmf = pmf_gen.pmf.shape[0]

    if norm(direction) == 0:
        return FAIL
    normalize(direction)

    pmf = <double*> malloc(len_pmf * sizeof(double))
    prepare_pmf(pmf, point, pmf_gen, params.sh.pmf_threshold, len_pmf)

    for i in range(len_pmf):
        cos_sim = pmf_gen.vertices[i][0] * direction[0] \
                + pmf_gen.vertices[i][1] * direction[1] \
                + pmf_gen.vertices[i][2] * direction[2]
        if cos_sim < 0:
            cos_sim = cos_sim * -1
        if cos_sim > params.cos_similarity and pmf[i] > max_value:
            max_idx = i
            max_value = pmf[i]

    if max_value <= 0:
        free(pmf)
        return FAIL

    newdir = &pmf_gen.vertices[max_idx][0]
    # Update direction
    if (direction[0] * newdir[0]
            + direction[1] * newdir[1]
            + direction[2] * newdir[2] > 0):
        copy_point(newdir, direction)
    else:
        copy_point(newdir, direction)
        direction[0] = direction[0] * -1
        direction[1] = direction[1] * -1
        direction[2] = direction[2] * -1

    free(pmf)
    return SUCCESS


cdef TrackerStatus probabilistic_propagator(double* point,
                                            double* direction,
                                            TrackerParameters params,
                                            double* stream_data,
                                            PmfGen pmf_gen,
                                            RNGState* rng) noexcept nogil:
    """Propagate the position by step_size amount.

    The propagation direction is randomly sampled on the sphere, weighted by
    the probability mass function (pmf).

    Parameters
    ----------
    point : double[3]
        Current tracking position.
    direction : double[3]
        Previous tracking direction.
    params : TrackerParameters
        Probabilistic tractography parameters.
    stream_data : double*
        Streamline data persistent across tracking steps.
    pmf_gen : PmfGen
        Orientation data.
    rng : RNGState*
        Random number generator state. (thread safe)

    Returns
    -------
    status : TrackerStatus
        Returns SUCCESS if the propagation was successful, or FAIL otherwise.
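
    Notes
    -----
    The direction is drawn by inverse-transform sampling on the discrete
    pmf: directions outside the allowed cone (``cos_similarity``) are
    zeroed, the pmf is turned into a cumulative sum, and a uniform draw in
    ``[0, cdf[-1])`` is mapped back to a sphere vertex. An equivalent NumPy
    sketch, with illustrative names only::

        import numpy as np

        def sample_direction(pmf, vertices, prev_dir, cos_thr, rng):
            admissible = np.abs(vertices @ prev_dir) >= cos_thr
            cdf = np.cumsum(np.where(admissible, pmf, 0.0))
            if cdf[-1] == 0:
                return None  # no admissible direction: stop tracking
            idx = np.searchsorted(cdf, rng.random() * cdf[-1])
            new_dir = vertices[idx]
            # keep the new direction consistent with the previous one
            return new_dir if new_dir @ prev_dir > 0 else -new_dir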
""" cdef: cnp.npy_intp i, idx double* newdir double* pmf double last_cdf, cos_sim cnp.npy_intp len_pmf=pmf_gen.pmf.shape[0] if norm(direction) == 0: return FAIL normalize(direction) pmf = malloc(len_pmf * sizeof(double)) prepare_pmf(pmf, point, pmf_gen, params.sh.pmf_threshold, len_pmf) for i in range(len_pmf): cos_sim = pmf_gen.vertices[i][0] * direction[0] \ + pmf_gen.vertices[i][1] * direction[1] \ + pmf_gen.vertices[i][2] * direction[2] if cos_sim < 0: cos_sim = cos_sim * -1 if cos_sim < params.cos_similarity: pmf[i] = 0 cumsum(pmf, pmf, len_pmf) last_cdf = pmf[len_pmf - 1] if last_cdf == 0: free(pmf) return FAIL idx = where_to_insert(pmf, random_float(rng) * last_cdf, len_pmf) newdir = &pmf_gen.vertices[idx][0] # Update direction if (direction[0] * newdir[0] + direction[1] * newdir[1] + direction[2] * newdir[2] > 0): copy_point(newdir, direction) else: copy_point(newdir, direction) direction[0] = direction[0] * -1 direction[1] = direction[1] * -1 direction[2] = direction[2] * -1 free(pmf) return SUCCESS cdef TrackerStatus parallel_transport_propagator(double* point, double* direction, TrackerParameters params, double* stream_data, PmfGen pmf_gen, RNGState* rng) noexcept nogil: """ Propagates the position by step_size amount. The propagation is using the parameters of the last candidate curve. Then, randomly generate curve parametrization from the current position. The walking frame is the same, only the k1 and k2 parameters are randomly picked. Rejection sampling is used to pick the next curve using the data support (likelihood). stream_data: 0 : initialized 1-10 : frame1,2,3 10-19: propagator 19-22: position 22 : last_val 23 : last_val_cand 24 : k1 25 : k2 Parameters ---------- point : double[3] Current tracking position. direction : double[3] Previous tracking direction. params : TrackerParameters Parallel Transport Tractography (PTT) parameters. stream_data : double* Streamline data persitant across tracking steps. pmf_gen : PmfGen Orientation data. rng : RNGState* Random number generator state. (thread safe) Returns ------- status : TrackerStatus Returns SUCCESS if the propagation was successful, or FAIL otherwise. 
""" cdef double max_posterior = 0 cdef double data_support = 0 cdef double[3] tangent cdef int tries cdef int i if stream_data[0] == 0: initialize_ptt(params, stream_data, pmf_gen, point, direction, rng) stream_data[0] = 1 # initialized prepare_ptt_propagator(params, stream_data, params.step_size) for i in range(3): # position stream_data[19 + i] = (stream_data[10] * stream_data[1 + i] * params.inv_voxel_size[i] + stream_data[11] * stream_data[4 + i] * params.inv_voxel_size[i] + stream_data[12] * stream_data[7 + i] * params.inv_voxel_size[i] + stream_data[19 + i]) tangent[i] = (stream_data[13] * stream_data[1 + i] + stream_data[14] * stream_data[4 + i] + stream_data[15] * stream_data[7 + i]) stream_data[7 + i] = \ (stream_data[16] * stream_data[1 + i] + stream_data[17] * stream_data[4 + i] + stream_data[18] * stream_data[7 + i]) normalize(&tangent[0]) cross(&stream_data[4], &stream_data[7], &tangent[0]) # frame1, frame2 normalize(&stream_data[4]) # frame1 cross(&stream_data[7], &tangent[0], &stream_data[4]) # frame2, tangent, frame1 stream_data[1] = tangent[0] stream_data[2] = tangent[1] stream_data[3] = tangent[2] for tries in range(params.ptt.rejection_sampling_nbr_sample): # k1, k2 stream_data[24], stream_data[25] = \ random_point_within_circle(params.max_curvature, rng) data_support = calculate_ptt_data_support(params, stream_data, pmf_gen) if data_support > max_posterior: max_posterior = data_support # Compensation for underestimation of max posterior estimate max_posterior = pow(2.0 * max_posterior, params.ptt.data_support_exponent) for tries in range(params.ptt.rejection_sampling_max_try): # k1, k2 stream_data[24], stream_data[25] = \ random_point_within_circle(params.max_curvature, rng) if random_float(rng) * max_posterior < calculate_ptt_data_support(params, stream_data, pmf_gen): stream_data[22] = stream_data[23] # last_val = last_val_cand # Propagation is successful if a suitable candidate can be sampled # within the trial limit # update the point and return copy_point(&stream_data[19], point) return SUCCESS return FAIL dipy-1.11.0/dipy/tracking/stopping_criterion.pxd000066400000000000000000000024521476546756600220010ustar00rootroot00000000000000from dipy.utils.fast_numpy cimport RNGState cpdef enum StreamlineStatus: PYERROR = -2 OUTSIDEIMAGE = -1 INVALIDPOINT = 0 TRACKPOINT = 1 ENDPOINT = 2 VALIDSTREAMLIME = 100 INVALIDSTREAMLIME = -100 cdef class StoppingCriterion: cdef: double interp_out_double[1] double[::1] interp_out_view cpdef StreamlineStatus check_point(self, double[::1] point) cdef StreamlineStatus check_point_c(self, double* point, RNGState* rng=*) noexcept nogil cdef class BinaryStoppingCriterion(StoppingCriterion): cdef: unsigned char [:, :, :] mask pass cdef class ThresholdStoppingCriterion(StoppingCriterion): cdef: double threshold double[:, :, :] metric_map pass cdef class AnatomicalStoppingCriterion(StoppingCriterion): cdef: double[:, :, :] include_map, exclude_map cpdef double get_exclude(self, double[::1] point) cdef get_exclude_c(self, double* point) cpdef double get_include(self, double[::1] point) cdef get_include_c(self, double* point) pass cdef class ActStoppingCriterion(AnatomicalStoppingCriterion): pass cdef class CmcStoppingCriterion(AnatomicalStoppingCriterion): cdef: double step_size double average_voxel_size double correction_factor pass dipy-1.11.0/dipy/tracking/stopping_criterion.pyx000066400000000000000000000250241476546756600220260ustar00rootroot00000000000000# cython: boundscheck=False # cython: cdivision=True # cython: initializedcheck=False # 
cython: wraparound=False

cdef extern from "dpy_math.h" nogil:
    int dpy_rint(double)

from dipy.utils.fast_numpy cimport random, RNGState, random_float
from dipy.core.interpolation cimport trilinear_interpolate4d_c

import numpy as np


cdef class StoppingCriterion:
    cpdef StreamlineStatus check_point(self, double[::1] point):
        if point.shape[0] != 3:
            raise ValueError("Point has wrong shape")

        return self.check_point_c(&point[0])

    cdef StreamlineStatus check_point_c(self, double* point,
                                        RNGState* rng=NULL) noexcept nogil:
        pass


cdef class BinaryStoppingCriterion(StoppingCriterion):
    """
    cdef:
        unsigned char[:, :, :] mask
    """

    def __cinit__(self, mask):
        self.interp_out_view = self.interp_out_double
        self.mask = (mask > 0).astype('uint8')

    cdef StreamlineStatus check_point_c(self, double* point,
                                        RNGState* rng=NULL) noexcept nogil:
        cdef:
            unsigned char result
            int err
            int voxel[3]

        voxel[0] = int(dpy_rint(point[0]))
        voxel[1] = int(dpy_rint(point[1]))
        voxel[2] = int(dpy_rint(point[2]))

        if (voxel[0] < 0 or voxel[0] >= self.mask.shape[0]
                or voxel[1] < 0 or voxel[1] >= self.mask.shape[1]
                or voxel[2] < 0 or voxel[2] >= self.mask.shape[2]):
            return OUTSIDEIMAGE

        result = self.mask[voxel[0], voxel[1], voxel[2]]

        if result > 0:
            return TRACKPOINT
        else:
            return ENDPOINT


cdef class ThresholdStoppingCriterion(StoppingCriterion):

    def __cinit__(self, metric_map, double threshold):
        self.interp_out_view = self.interp_out_double
        self.metric_map = np.asarray(metric_map, 'float64')
        self.threshold = threshold

    cdef StreamlineStatus check_point_c(self, double* point,
                                        RNGState* rng=NULL) noexcept nogil:
        cdef:
            double result
            int err

        err = trilinear_interpolate4d_c(self.metric_map[..., None], point,
                                        &self.interp_out_view[0])
        if err == -1:
            return OUTSIDEIMAGE
        elif err != 0:
            # This should never happen
            raise RuntimeError(
                "Unexpected interpolation error (code:%i)" % err)

        result = self.interp_out_view[0]

        if result > self.threshold:
            return TRACKPOINT
        else:
            return ENDPOINT


cdef class AnatomicalStoppingCriterion(StoppingCriterion):
    r"""
    Abstract class that takes as input included and excluded tissue maps.
    The 'include_map' defines when the streamline reached a 'valid' stopping
    region (e.g. gray matter partial volume estimation (PVE) map) and the
    'exclude_map' defines when the streamline reached an 'invalid' stopping
    region (e.g. cerebrospinal fluid (CSF) PVE map). The background of the
    anatomical image should be added to the 'include_map' to keep
    streamlines exiting the brain (e.g. through the brain stem).

    cdef:
        double interp_out_double[1]
        double[:] interp_out_view = interp_out_view
        double[:, :, :] include_map, exclude_map

    """
    def __cinit__(self, include_map, exclude_map, *args, **kw):
        self.interp_out_view = self.interp_out_double
        self.include_map = np.asarray(include_map, 'float64')
        self.exclude_map = np.asarray(exclude_map, 'float64')

    @classmethod
    def from_pve(cls, wm_map, gm_map, csf_map, **kw):
        """AnatomicalStoppingCriterion from partial volume estimation (PVE)
        maps.

        Parameters
        ----------
        wm_map : array
            The partial volume fraction of white matter at each voxel.
        gm_map : array
            The partial volume fraction of gray matter at each voxel.
        csf_map : array
            The partial volume fraction of cerebrospinal fluid at each
            voxel.
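
        Notes
        -----
        The include map is the gray-matter PVE with the image background set
        to one, so that streamlines may leave the brain (e.g. through the
        brain stem); the exclude map is the CSF PVE. A minimal illustration
        with hypothetical arrays:

        >>> import numpy as np
        >>> wm = np.zeros((2, 2, 2)); gm = np.zeros_like(wm)
        >>> csf = np.zeros_like(wm)
        >>> gm[0, 0, 0] = 1.0  # a pure gray-matter voxel
        >>> include_map = np.copy(gm)
        >>> include_map[(wm + gm + csf) == 0] = 1  # background is kept
        >>> float(include_map[1, 1, 1])  # background voxel
        1.0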
""" # include map = gray matter + image background include_map = np.copy(gm_map) include_map[(wm_map + gm_map + csf_map) == 0] = 1 # exclude map = csf exclude_map = np.copy(csf_map) return cls(include_map, exclude_map, **kw) cpdef double get_exclude(self, double[::1] point): if point.shape[0] != 3: raise ValueError("Point has wrong shape") return self.get_exclude_c(&point[0]) cdef get_exclude_c(self, double* point): exclude_err = trilinear_interpolate4d_c(self.exclude_map[..., None], point, &self.interp_out_view[0]) if exclude_err != 0: return 0 return self.interp_out_view[0] cpdef double get_include(self, double[::1] point): if point.shape[0] != 3: raise ValueError("Point has wrong shape") return self.get_include_c(&point[0]) cdef get_include_c(self, double* point): include_err = trilinear_interpolate4d_c(self.include_map[..., None], point, &self.interp_out_view[0]) if include_err != 0: return 0 return self.interp_out_view[0] cdef class ActStoppingCriterion(AnatomicalStoppingCriterion): """Anatomically-Constrained Tractography (ACT) stopping criterion. See :footcite:p:`Smith2012` for further details about the method. This implements the use of partial volume fraction (PVE) maps to determine when the tracking stops. The proposed method :footcite:p:`Smith2012` that cuts streamlines going through subcortical gray matter regions is not implemented here. The backtracking technique for streamlines reaching INVALIDPOINT is not implemented either. References ---------- .. footbibliography:: """ def __cinit__(self, include_map, exclude_map): self.interp_out_view = self.interp_out_double self.include_map = np.asarray(include_map, 'float64') self.exclude_map = np.asarray(exclude_map, 'float64') cdef StreamlineStatus check_point_c(self, double* point, RNGState* rng=NULL) noexcept nogil: cdef: double include_result, exclude_result int include_err, exclude_err include_err = trilinear_interpolate4d_c(self.include_map[..., None], point, &self.interp_out_view[0]) include_result = self.interp_out_view[0] exclude_err = trilinear_interpolate4d_c(self.exclude_map[..., None], point, &self.interp_out_view[0]) exclude_result = self.interp_out_view[0] if include_err == -1 or exclude_err == -1: return OUTSIDEIMAGE elif include_err != 0: # This should never happen raise RuntimeError("Unexpected interpolation error " + "(include_map - code:%i)" % include_err) elif exclude_err != 0: # This should never happen raise RuntimeError("Unexpected interpolation error " + "(exclude_map - code:%i)" % exclude_err) if include_result > 0.5: return ENDPOINT elif exclude_result > 0.5: return INVALIDPOINT else: return TRACKPOINT cdef class CmcStoppingCriterion(AnatomicalStoppingCriterion): r""" Continuous map criterion (CMC) stopping criterion. This implements the use of partial volume fraction (PVE) maps to determine when the tracking stops :footcite:p:`Girard2014`. cdef: double interp_out_double[1] double[:] interp_out_view = interp_out_view double[:, :, :] include_map, exclude_map double step_size double average_voxel_size double correction_factor References ---------- .. 
footbibliography:: """ def __cinit__(self, include_map, exclude_map, step_size, average_voxel_size): self.step_size = step_size self.average_voxel_size = average_voxel_size self.correction_factor = step_size / average_voxel_size cdef StreamlineStatus check_point_c(self, double* point, RNGState* rng=NULL) noexcept nogil: cdef: double include_result, exclude_result int include_err, exclude_err double p include_err = trilinear_interpolate4d_c(self.include_map[..., None], point, &self.interp_out_view[0]) include_result = self.interp_out_view[0] exclude_err = trilinear_interpolate4d_c(self.exclude_map[..., None], point, &self.interp_out_view[0]) exclude_result = self.interp_out_view[0] if include_err == -1 or exclude_err == -1: return OUTSIDEIMAGE elif include_err == -2 or exclude_err == -2: raise ValueError("Point has wrong shape") elif include_err != 0: # This should never happen raise RuntimeError("Unexpected interpolation error " + "(include_map - code:%i)" % include_err) elif exclude_err != 0: # This should never happen raise RuntimeError("Unexpected interpolation error " + "(exclude_map - code:%i)" % exclude_err) # rng = np.random.default_rng() # test if the tracking continues if include_result + exclude_result <= 0: return TRACKPOINT num = max(0, (1 - include_result - exclude_result)) den = num + include_result + exclude_result p = (num / den) ** self.correction_factor if rng != NULL: if random_float(rng) < p: return TRACKPOINT # test if the tracking stopped in the include tissue map p = (include_result / (include_result + exclude_result)) if random_float(rng) < p: return ENDPOINT else: if random() < p: return TRACKPOINT # test if the tracking stopped in the include tissue map p = (include_result / (include_result + exclude_result)) if random() < p: return ENDPOINT # the tracking stopped in the exclude tissue map return INVALIDPOINT dipy-1.11.0/dipy/tracking/streamline.py000066400000000000000000000616431476546756600200670ustar00rootroot00000000000000from copy import deepcopy import types from warnings import warn from nibabel.affines import apply_affine from nibabel.streamlines import ArraySequence as Streamlines import numpy as np from scipy.spatial.distance import cdist from dipy.core.geometry import dist_to_corner from dipy.core.interpolation import interpolate_scalar_3d, interpolate_vector_3d from dipy.testing.decorators import warning_for_keywords from dipy.tracking.distances import bundles_distances_mdf from dipy.tracking.streamlinespeed import length, set_number_of_points import dipy.tracking.utils as ut def unlist_streamlines(streamlines): """Return the streamlines not as a list but as an array and an offset Parameters ---------- streamlines: sequence Streamlines. Returns ------- points : array Streamlines' points. offsets : array Streamlines' offsets. """ points = np.concatenate(streamlines, axis=0) offsets = np.zeros(len(streamlines), dtype="i8") curr_pos = 0 for i, s in enumerate(streamlines): prev_pos = curr_pos curr_pos += s.shape[0] points[prev_pos:curr_pos] = s offsets[i] = curr_pos return points, offsets def relist_streamlines(points, offsets): """Given a representation of a set of streamlines as a large array and an offsets array return the streamlines as a list of shorter arrays. Parameters ---------- points : array Streamlines' points. offsets : array Streamlines' offsets. Returns ------- streamlines: sequence Streamlines. 
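
    Examples
    --------
    Round-trip with :func:`unlist_streamlines` (illustrative arrays):

    >>> import numpy as np
    >>> sl = [np.zeros((3, 3)), np.ones((2, 3))]
    >>> points, offsets = unlist_streamlines(sl)
    >>> [len(s) for s in relist_streamlines(points, offsets)]
    [3, 2]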
""" streamlines = [points[0 : offsets[0]]] for i in range(len(offsets) - 1): streamlines.append(points[offsets[i] : offsets[i + 1]]) return streamlines def center_streamlines(streamlines): """Move streamlines to the origin Parameters ---------- streamlines : list List of 2D ndarrays of shape[-1]==3 Returns ------- new_streamlines : list List of 2D ndarrays of shape[-1]==3 inv_shift : ndarray Translation in x,y,z to go back in the initial position """ center = np.mean(np.concatenate(streamlines, axis=0), axis=0) return [s - center for s in streamlines], center def deform_streamlines( streamlines, deform_field, stream_to_current_grid, current_grid_to_world, stream_to_ref_grid, ref_grid_to_world, ): """Apply deformation field to streamlines Parameters ---------- streamlines : list List of 2D ndarrays of shape[-1]==3 deform_field : 4D numpy array x,y,z displacements stored in volume, shape[-1]==3 stream_to_current_grid : array, (4, 4) transform matrix voxmm space to original grid space current_grid_to_world : array (4, 4) transform matrix original grid space to world coordinates stream_to_ref_grid : array (4, 4) transform matrix voxmm space to new grid space ref_grid_to_world : array(4, 4) transform matrix new grid space to world coordinates Returns ------- new_streamlines : list List of the transformed 2D ndarrays of shape[-1]==3 """ if deform_field.shape[-1] != 3: raise ValueError("Last dimension of deform_field needs shape==3") stream_in_curr_grid = transform_streamlines(streamlines, stream_to_current_grid) displacements = values_from_volume(deform_field, stream_in_curr_grid, np.eye(4)) stream_in_world = transform_streamlines(stream_in_curr_grid, current_grid_to_world) new_streams_in_world = [ np.add(d, s) for d, s in zip(displacements, stream_in_world) ] new_streams_grid = transform_streamlines( new_streams_in_world, np.linalg.inv(ref_grid_to_world) ) new_streamlines = transform_streamlines( new_streams_grid, np.linalg.inv(stream_to_ref_grid) ) return new_streamlines @warning_for_keywords() def transform_streamlines(streamlines, mat, *, in_place=False): """Apply affine transformation to streamlines Parameters ---------- streamlines : Streamlines Streamlines object mat : array, (4, 4) transformation matrix in_place : bool If True then change data in place. Be careful changes input streamlines. Returns ------- new_streamlines : Streamlines Sequence transformed 2D ndarrays of shape[-1]==3 """ # using new Streamlines API if isinstance(streamlines, Streamlines): old_data_dtype = streamlines._data.dtype old_offsets_dtype = streamlines._offsets.dtype if in_place: streamlines._data = apply_affine(mat, streamlines._data).astype( old_data_dtype ) return streamlines new_streamlines = streamlines.copy() new_streamlines._offsets = new_streamlines._offsets.astype(old_offsets_dtype) if new_streamlines._data.size: new_streamlines._data = apply_affine(mat, new_streamlines._data).astype( old_data_dtype ) return new_streamlines # supporting old data structure of streamlines return [apply_affine(mat, s) for s in streamlines] @warning_for_keywords() def select_random_set_of_streamlines(streamlines, select, *, rng=None): """Select a random set of streamlines Parameters ---------- streamlines : Streamlines Object of 2D ndarrays of shape[-1]==3 select : int Number of streamlines to select. If there are less streamlines than ``select`` then ``select=len(streamlines)``. rng : np.random.Generator Default None. Returns ------- selected_streamlines : list Notes ----- The same streamline will not be selected twice. 
""" len_s = len(streamlines) if rng is None: rng = np.random.default_rng() index = rng.choice(len_s, min(select, len_s), replace=False) if isinstance(streamlines, Streamlines): return streamlines[index] return [streamlines[i] for i in index] @warning_for_keywords() def select_by_rois(streamlines, affine, rois, include, *, mode=None, tol=None): """Select streamlines based on logical relations with several regions of interest (ROIs). For example, select streamlines that pass near ROI1, but only if they do not pass near ROI2. Parameters ---------- streamlines : list A list of candidate streamlines for selection affine : array_like (4, 4) The mapping from voxel coordinates to streamline points. The voxel_to_rasmm matrix, typically from a NIFTI file. rois : list or ndarray A list of 3D arrays, each with shape (x, y, z) corresponding to the shape of the brain volume, or a 4D array with shape (n_rois, x, y, z). Non-zeros in each volume are considered to be within the region include : array or list A list or 1D array of boolean values marking inclusion or exclusion criteria. If a streamline is near any of the inclusion ROIs, it should evaluate to True, unless it is also near any of the exclusion ROIs. mode : string, optional One of {"any", "all", "either_end", "both_end"}, where a streamline is associated with an ROI if: "any" : any point is within tol from ROI. Default. "all" : all points are within tol from ROI. "either_end" : either of the end-points is within tol from ROI "both_end" : both end points are within tol from ROI. tol : float Distance (in the units of the streamlines, usually mm). If any coordinate in the streamline is within this distance from the center of any voxel in the ROI, the filtering criterion is set to True for this streamline, otherwise False. Defaults to the distance between the center of each voxel and the corner of the voxel. Notes ----- The only operation currently possible is "(A or B or ...) and not (X or Y or ...)", where A, B are inclusion regions and X, Y are exclusion regions. Returns ------- generator Generates the streamlines to be included based on these criteria. See also -------- :func:`dipy.tracking.utils.near_roi` :func:`dipy.tracking.utils.reduce_rois` Examples -------- >>> streamlines = [np.array([[0, 0., 0.9], ... [1.9, 0., 0.]]), ... np.array([[0., 0., 0], ... [0, 1., 1.], ... [0, 2., 2.]]), ... np.array([[2, 2, 2], ... [3, 3, 3]])] >>> mask1 = np.zeros((4, 4, 4), dtype=bool) >>> mask2 = np.zeros_like(mask1) >>> mask1[0, 0, 0] = True >>> mask2[1, 0, 0] = True >>> selection = select_by_rois(streamlines, np.eye(4), [mask1, mask2], ... [True, True], ... tol=1) >>> list(selection) # The result is a generator [array([[ 0. , 0. , 0.9], [ 1.9, 0. , 0. ]]), array([[ 0., 0., 0.], [ 0., 1., 1.], [ 0., 2., 2.]])] >>> selection = select_by_rois(streamlines, np.eye(4), [mask1, mask2], ... [True, False], ... tol=0.87) >>> list(selection) [array([[ 0., 0., 0.], [ 0., 1., 1.], [ 0., 2., 2.]])] >>> selection = select_by_rois(streamlines, np.eye(4), [mask1, mask2], ... [True, True], ... mode="both_end", ... tol=1.0) >>> list(selection) [array([[ 0. , 0. , 0.9], [ 1.9, 0. , 0. ]])] >>> mask2[0, 2, 2] = True >>> selection = select_by_rois(streamlines, np.eye(4), [mask1, mask2], ... [True, True], ... mode="both_end", ... tol=1.0) >>> list(selection) [array([[ 0. , 0. , 0.9], [ 1.9, 0. , 0. 
]]), array([[ 0., 0., 0.], [ 0., 1., 1.], [ 0., 2., 2.]])] """ # This calculates the maximal distance to a corner of the voxel: dtc = dist_to_corner(affine) if tol is None: tol = dtc elif tol < dtc: w_s = "Tolerance input provided would create gaps in your" w_s += f" inclusion ROI. Setting to: {dist_to_corner}" warn(w_s, stacklevel=2) tol = dtc include_roi, exclude_roi = ut.reduce_rois(rois, include) include_roi_coords = np.array(np.where(include_roi)).T x_include_roi_coords = apply_affine(affine, include_roi_coords) exclude_roi_coords = np.array(np.where(exclude_roi)).T x_exclude_roi_coords = apply_affine(affine, exclude_roi_coords) if mode is None: mode = "any" for sl in streamlines: include = ut.streamline_near_roi(sl, x_include_roi_coords, tol=tol, mode=mode) exclude = ut.streamline_near_roi(sl, x_exclude_roi_coords, tol=tol, mode=mode) if include and not exclude: yield sl @warning_for_keywords() def cluster_confidence( streamlines, *, max_mdf=5, subsample=12, power=1, override=False ): """Computes the cluster confidence index (cci), which is an estimation of the support a set of streamlines gives to a particular pathway. Ex: A single streamline with no others in the dataset following a similar pathway has a low cci. A streamline in a bundle of 100 streamlines that follow similar pathways has a high cci. See :footcite:p:`Jordan2018` (based on the streamline MDF distance from :footcite:t:`Garyfallidis2012a`). Parameters ---------- streamlines : list of 2D (N, 3) arrays A sequence of streamlines of length N (# streamlines) max_mdf : int The maximum MDF distance (mm) that will be considered a "supporting" streamline and included in cci calculation subsample: int The number of points that are considered for each streamline in the calculation. To save on calculation time, each streamline is subsampled to subsampleN points. power: int The power to which the MDF distance for each streamline will be raised to determine how much it contributes to the cci. High values of power make the contribution value degrade much faster. E.g., a streamline with 5mm MDF similarity contributes 1/5 to the cci if power is 1, but only contributes 1/5^2 = 1/25 if power is 2. override: bool, False by default override means that the cci calculation will still occur even though there are short streamlines in the dataset that may alter expected behaviour. Returns ------- Returns an array of CCI scores References ---------- .. footbibliography:: """ # error if any streamlines are shorter than 20mm lengths = list(length(streamlines)) if min(lengths) < 20 and not override: raise ValueError( "Short streamlines found. We recommend removing them." " To continue without removing short streamlines set" " override=True" ) # calculate the pairwise MDF distance between all streamlines in dataset subsamp_sls = set_number_of_points(streamlines, nb_points=subsample) cci_score_mtrx = np.zeros([len(subsamp_sls)]) for i, _ in enumerate(subsamp_sls): mdf_mx = bundles_distances_mdf([subsamp_sls[i]], subsamp_sls) if (mdf_mx == 0).sum() > 1: raise ValueError("Identical streamlines. CCI calculation invalid") mdf_mx_oi = (mdf_mx > 0) & (mdf_mx < max_mdf) & ~np.isnan(mdf_mx) mdf_mx_oi_only = mdf_mx[mdf_mx_oi] cci_score = np.sum(np.divide(1, np.power(mdf_mx_oi_only, power))) cci_score_mtrx[i] = cci_score return cci_score_mtrx def _orient_by_roi_generator(out, roi1, roi2): """ Helper function to `orient_by_rois` Performs the inner loop separately. 
This is needed, because functions with `yield` always return a generator """ for sl in out: dist1 = cdist(sl, roi1, "euclidean") dist2 = cdist(sl, roi2, "euclidean") min1 = np.argmin(dist1, 0) min2 = np.argmin(dist2, 0) if min1[0] > min2[0]: yield sl[::-1] else: yield sl def _orient_by_roi_list(out, roi1, roi2): """ Helper function to `orient_by_rois` Performs the inner loop separately. This is needed, because functions with `yield` always return a generator. Flips the streamlines in place (as needed) and returns a reference to the updated list. """ for idx, sl in enumerate(out): dist1 = cdist(sl, roi1, "euclidean") dist2 = cdist(sl, roi2, "euclidean") min1 = np.argmin(dist1, 0) min2 = np.argmin(dist2, 0) if min1[0] > min2[0]: out[idx][:, 0] = sl[::-1][:, 0] out[idx][:, 1] = sl[::-1][:, 1] out[idx][:, 2] = sl[::-1][:, 2] return out @warning_for_keywords() def orient_by_rois( streamlines, affine, roi1, roi2, *, in_place=False, as_generator=False ): """Orient a set of streamlines according to a pair of ROIs Parameters ---------- streamlines : list or generator List or generator of 2d arrays of 3d coordinates. Each array contains the xyz coordinates of a single streamline. affine : array_like (4, 4) The mapping from voxel coordinates to streamline points. The voxel_to_rasmm matrix, typically from a NIFTI file. roi1, roi2 : ndarray Binary masks designating the location of the regions of interest, or coordinate arrays (n-by-3 array with ROI coordinate in each row). in_place : bool Whether to make the change in-place in the original list (and return a reference to the list), or to make a copy of the list and return this copy, with the relevant streamlines reoriented. as_generator : bool Whether to return a generator as output. Returns ------- streamlines : list or generator The same 3D arrays as a list or generator, but reoriented with respect to the ROIs Examples -------- >>> streamlines = [np.array([[0, 0., 0], ... [1, 0., 0.], ... [2, 0., 0.]]), ... np.array([[2, 0., 0.], ... [1, 0., 0], ... 
[0, 0, 0.]])] >>> roi1 = np.zeros((4, 4, 4), dtype=bool) >>> roi2 = np.zeros_like(roi1) >>> roi1[0, 0, 0] = True >>> roi2[1, 0, 0] = True >>> orient_by_rois(streamlines, np.eye(4), roi1, roi2) [array([[ 0., 0., 0.], [ 1., 0., 0.], [ 2., 0., 0.]]), array([[ 0., 0., 0.], [ 1., 0., 0.], [ 2., 0., 0.]])] """ # If we don't already have coordinates on our hands: if len(roi1.shape) == 3: roi1 = np.asarray(np.where(roi1.astype(bool))).T if len(roi2.shape) == 3: roi2 = np.asarray(np.where(roi2.astype(bool))).T roi1 = apply_affine(affine, roi1) roi2 = apply_affine(affine, roi2) if as_generator: if in_place: w_s = "Cannot return a generator when in_place is set to True" raise ValueError(w_s) return _orient_by_roi_generator(streamlines, roi1, roi2) # If it's a generator on input, we may as well generate it # here and now: if isinstance(streamlines, types.GeneratorType): out = Streamlines(streamlines) elif in_place: out = streamlines else: # Make a copy, so you don't change the output in place: out = deepcopy(streamlines) return _orient_by_roi_list(out, roi1, roi2) def _orient_by_sl_generator(out, std_array, fgarray): """Helper function that implements the generator version of this""" for idx, sl in enumerate(fgarray): dist_direct = np.sum(np.sqrt(np.sum((sl - std_array) ** 2, -1))) dist_flipped = np.sum(np.sqrt(np.sum((sl[::-1] - std_array) ** 2, -1))) if dist_direct > dist_flipped: yield out[idx][::-1] else: yield out[idx] def _orient_by_sl_list(out, std_array, fgarray): """Helper function that implements the sequence version of this""" for idx, sl in enumerate(fgarray): dist_direct = np.sum(np.sqrt(np.sum((sl - std_array) ** 2, -1))) dist_flipped = np.sum(np.sqrt(np.sum((sl[::-1] - std_array) ** 2, -1))) if dist_direct > dist_flipped: out[idx][:, 0] = out[idx][::-1][:, 0] out[idx][:, 1] = out[idx][::-1][:, 1] out[idx][:, 2] = out[idx][::-1][:, 2] return out @warning_for_keywords() def orient_by_streamline( streamlines, standard, *, n_points=12, in_place=False, as_generator=False ): """ Orient a bundle of streamlines to a standard streamline. Parameters ---------- streamlines : Streamlines, list The input streamlines to orient. standard : Streamlines, list, or ndarrray This provides the standard orientation according to which the streamlines in the provided bundle should be reoriented. n_points: int, optional The number of samples to apply to each of the streamlines. in_place : bool Whether to make the change in-place in the original input (and return a reference), or to make a copy of the list and return this copy, with the relevant streamlines reoriented. as_generator : bool Whether to return a generator as output. Returns ------- Streamlines : with each individual array oriented to be as similar as possible to the standard. 
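
    Notes
    -----
    After both inputs are resampled to ``n_points``, a streamline is flipped
    when the summed point-to-point distance to the standard is smaller for
    the reversed ordering. Sketch of the criterion (illustrative arrays):

    >>> import numpy as np
    >>> sl = np.array([[2., 0, 0], [1, 0, 0], [0, 0, 0]])
    >>> std = np.array([[0., 0, 0], [1, 0, 0], [2, 0, 0]])
    >>> direct = np.sum(np.sqrt(np.sum((sl - std) ** 2, -1)))
    >>> flipped = np.sum(np.sqrt(np.sum((sl[::-1] - std) ** 2, -1)))
    >>> bool(direct > flipped)  # True: this streamline would be flipped
    True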
""" # Start by resampling, so that distance calculation is easy: fgarray = set_number_of_points(streamlines, nb_points=n_points) std_array = set_number_of_points([standard], nb_points=n_points) if as_generator: if in_place: w_s = "Cannot return a generator when in_place is set to True" raise ValueError(w_s) return _orient_by_sl_generator(streamlines, std_array, fgarray) # If it's a generator on input, we may as well generate it # here and now: if isinstance(streamlines, types.GeneratorType): out = Streamlines(streamlines) elif in_place: out = streamlines else: # Make a copy, so you don't change the output in place: out = deepcopy(streamlines) return _orient_by_sl_list(out, std_array, fgarray) @warning_for_keywords() def _extract_vals(data, streamlines, affine, *, threedvec=False): """ Helper function for use with `values_from_volume`. Parameters ---------- data : 3D or 4D array Scalar (for 3D) and vector (for 4D) values to be extracted. For 4D data, interpolation will be done on the 3 spatial dimensions in each volume. streamlines : ndarray or list If array, of shape (n_streamlines, n_nodes, 3) If list, len(n_streamlines) with (n_nodes, 3) array in each element of the list. affine : array_like (4, 4) The mapping from voxel coordinates to streamline points. The voxel_to_rasmm matrix, typically from a NIFTI file. threedvec : bool Whether the last dimension has length 3. This is a special case in which we can use :func:`dipy.core.interpolate.interpolate_vector_3d` for the interpolation of 4D volumes without looping over the elements of the last dimension. Returns --------- array or list (depending on the input) : values interpolate to each coordinate along the length of each streamline """ data = data.astype(float) if isinstance(streamlines, (list, types.GeneratorType, Streamlines)): streamlines = transform_streamlines(streamlines, np.linalg.inv(affine)) vals = [] for sl in streamlines: if threedvec: vals.append(list(interpolate_vector_3d(data, sl.astype(float))[0])) else: vals.append(list(interpolate_scalar_3d(data, sl.astype(float))[0])) elif isinstance(streamlines, np.ndarray): sl_shape = streamlines.shape sl_cat = streamlines.reshape(sl_shape[0] * sl_shape[1], 3).astype(float) inv_affine = np.linalg.inv(affine) sl_cat = np.dot(sl_cat, inv_affine[:3, :3]) + inv_affine[:3, 3] # So that we can index in one operation: if threedvec: vals = np.array(interpolate_vector_3d(data, sl_cat)[0]) else: vals = np.array(interpolate_scalar_3d(data, sl_cat)[0]) vals = np.reshape(vals, (sl_shape[0], sl_shape[1], -1)) if vals.shape[-1] == 1: vals = np.reshape(vals, vals.shape[:-1]) else: raise RuntimeError( "Extracting values from a volume ", "requires streamlines input as an array, ", "a list of arrays, or a streamline generator.", ) return vals def values_from_volume(data, streamlines, affine): """Extract values of a scalar/vector along each streamline from a volume. Parameters ---------- data : 3D or 4D array Scalar (for 3D) and vector (for 4D) values to be extracted. For 4D data, interpolation will be done on the 3 spatial dimensions in each volume. streamlines : ndarray or list If array, of shape (n_streamlines, n_nodes, 3) If list, len(n_streamlines) with (n_nodes, 3) array in each element of the list. affine : array_like (4, 4) The mapping from voxel coordinates to streamline points. The voxel_to_rasmm matrix, typically from a NIFTI file. Returns ------- array or list (depending on the input) : values interpolate to each coordinate along the length of each streamline. 
Notes ----- Values are extracted from the image based on the 3D coordinates of the nodes that comprise the points in the streamline, without any interpolation into segments between the nodes. Using this function with streamlines that have been resampled into a very small number of nodes will result in very few values. """ data = np.asarray(data) if len(data.shape) == 4: if data.shape[-1] == 3: return _extract_vals(data, streamlines, affine, threedvec=True) if isinstance(streamlines, types.GeneratorType): streamlines = Streamlines(streamlines) vals = [] for ii in range(data.shape[-1]): vals.append(_extract_vals(data[..., ii], streamlines, affine)) if isinstance(vals[-1], np.ndarray): return np.swapaxes(np.array(vals), 2, 1).T else: new_vals = [] for sl_idx in range(len(streamlines)): sl_vals = [] for ii in range(data.shape[-1]): sl_vals.append(vals[ii][sl_idx]) new_vals.append(np.array(sl_vals).T) return new_vals elif len(data.shape) == 3: return _extract_vals(data, streamlines, affine) else: raise ValueError("Data needs to have 3 or 4 dimensions") def nbytes(streamlines): return streamlines._data.nbytes / 1024.0**2 dipy-1.11.0/dipy/tracking/streamlinespeed.pxd000066400000000000000000000006211476546756600212400ustar00rootroot00000000000000# cython: wraparound=False, cdivision=True, boundscheck=False ctypedef float[:, :] float2d ctypedef double[:, :] double2d ctypedef fused Streamline: float2d double2d cdef double c_length(Streamline streamline) noexcept nogil cdef void c_arclengths(Streamline streamline, double * out) noexcept nogil cdef void c_set_number_of_points(Streamline streamline, Streamline out) noexcept nogil dipy-1.11.0/dipy/tracking/streamlinespeed.pyx000066400000000000000000000567521476546756600213050ustar00rootroot00000000000000# cython: wraparound=False, cdivision=True, boundscheck=False import numpy as np from libc.math cimport sqrt from libc.stdlib cimport malloc, free cimport numpy as cnp from dipy.tracking import Streamlines from dipy.utils.arrfuncs import as_native_array cdef extern from "dpy_math.h" nogil: bint dpy_isnan(double x) cdef double c_length(Streamline streamline) noexcept nogil: cdef: cnp.npy_intp i double out = 0.0 double dn, sum_dn_sqr for i in range(1, streamline.shape[0]): sum_dn_sqr = 0.0 for j in range(streamline.shape[1]): dn = streamline[i, j] - streamline[i-1, j] sum_dn_sqr += dn*dn out += sqrt(sum_dn_sqr) return out cdef void c_arclengths_from_arraysequence(Streamline points, cnp.npy_intp[:] offsets, cnp.npy_intp[:] lengths, double[:] arclengths) noexcept nogil: cdef: cnp.npy_intp i, j, k cnp.npy_intp offset double dn, sum_dn_sqr for i in range(offsets.shape[0]): offset = offsets[i] arclengths[i] = 0 for j in range(1, lengths[i]): sum_dn_sqr = 0.0 for k in range(points.shape[1]): dn = points[offset+j, k] - points[offset+j-1, k] sum_dn_sqr += dn*dn arclengths[i] += sqrt(sum_dn_sqr) def length(streamlines): """ Euclidean length of streamlines Length is in mm only if streamlines are expressed in world coordinates. Parameters ---------- streamlines : ndarray or a list or :class:`dipy.tracking.Streamlines` If ndarray, must have shape (N,3) where N is the number of points of the streamline. If list, each item must be ndarray shape (Ni,3) where Ni is the number of points of streamline i. If :class:`dipy.tracking.Streamlines`, its `common_shape` must be 3. Returns ------- lengths : scalar or ndarray shape (N,) If there is only one streamline, a scalar representing the length of the streamline. 
If there are several streamlines, ndarray containing the length of every streamline. Examples -------- >>> from dipy.tracking.streamline import length >>> import numpy as np >>> streamline = np.array([[1, 1, 1], [2, 3, 4], [0, 0, 0]]) >>> expected_length = np.sqrt([1+2**2+3**2, 2**2+3**2+4**2]).sum() >>> length(streamline) == expected_length True >>> streamlines = [streamline, np.vstack([streamline, streamline[::-1]])] >>> expected_lengths = [expected_length, 2*expected_length] >>> lengths = [length(streamlines[0]), length(streamlines[1])] >>> np.allclose(lengths, expected_lengths) True >>> length([]) 0.0 >>> length(np.array([[1, 2, 3]])) 0.0 """ if isinstance(streamlines, Streamlines): if len(streamlines) == 0: return 0.0 arclengths = np.zeros(len(streamlines), dtype=np.float64) if as_native_array(streamlines._data).dtype == np.float32: c_arclengths_from_arraysequence[float2d]( as_native_array(streamlines._data), streamlines._offsets.astype(np.intp), streamlines._lengths.astype(np.intp), arclengths) else: c_arclengths_from_arraysequence[double2d]( as_native_array(streamlines._data), streamlines._offsets.astype(np.intp), streamlines._lengths.astype(np.intp), arclengths) return arclengths only_one_streamlines = False if type(streamlines) is cnp.ndarray: only_one_streamlines = True streamlines = [streamlines] if len(streamlines) == 0: return 0.0 dtype = streamlines[0].dtype for streamline in streamlines: if streamline.dtype != dtype: dtype = None break # Allocate memory for each streamline length. streamlines_length = np.empty(len(streamlines), dtype=np.float64) cdef cnp.npy_intp i if dtype is None: # List of streamlines having different dtypes for i in range(len(streamlines)): dtype = streamlines[i].dtype # HACK: To avoid memleaks we have to recast with astype(dtype). streamline = streamlines[i].astype(dtype) if dtype != np.float32 and dtype != np.float64: is_integer = dtype == np.int64 or dtype == np.uint64 dtype = np.float64 if is_integer else np.float32 streamline = streamlines[i].astype(dtype) if dtype == np.float32: streamlines_length[i] = c_length[float2d](streamline) else: streamlines_length[i] = c_length[double2d](streamline) elif dtype == np.float32: # All streamlines have composed of float32 points for i in range(len(streamlines)): # HACK: To avoid memleaks we have to recast with astype(dtype). streamline = streamlines[i].astype(dtype) streamlines_length[i] = c_length[float2d](streamline) elif dtype == np.float64: # All streamlines are composed of float64 points for i in range(len(streamlines)): # HACK: To avoid memleaks we have to recast with astype(dtype). streamline = streamlines[i].astype(dtype) streamlines_length[i] = c_length[double2d](streamline) elif dtype == np.int64 or dtype == np.uint64: # All streamlines are composed of int64 or uint64 points so convert # them in float64 one at the time. for i in range(len(streamlines)): streamline = streamlines[i].astype(np.float64) streamlines_length[i] = c_length[double2d](streamline) else: # All streamlines are composed of points with a dtype fitting in # 32 bits so convert them in float32 one at the time. 
for i in range(len(streamlines)): streamline = streamlines[i].astype(np.float32) streamlines_length[i] = c_length[float2d](streamline) if only_one_streamlines: return streamlines_length[0] else: return streamlines_length cdef void c_arclengths(Streamline streamline, double* out) noexcept nogil: cdef cnp.npy_intp i = 0 cdef double dn out[0] = 0.0 for i in range(1, streamline.shape[0]): out[i] = 0.0 for j in range(streamline.shape[1]): dn = streamline[i, j] - streamline[i-1, j] out[i] += dn*dn out[i] = out[i-1] + sqrt(out[i]) cdef void c_set_number_of_points(Streamline streamline, Streamline out) noexcept nogil: cdef: cnp.npy_intp N = streamline.shape[0] cnp.npy_intp D = streamline.shape[1] cnp.npy_intp new_N = out.shape[0] double ratio, step, next_point, delta cnp.npy_intp i, j, k, dim # Get arclength at each point. arclengths = malloc(streamline.shape[0] * sizeof(double)) c_arclengths(streamline, arclengths) step = arclengths[N-1] / (new_N-1) next_point = 0.0 i = 0 j = 0 k = 0 while next_point < arclengths[N-1]: if next_point == arclengths[k]: for dim in range(D): out[i, dim] = streamline[j, dim] next_point += step i += 1 j += 1 k += 1 elif next_point < arclengths[k]: ratio = 1 - ((arclengths[k]-next_point) / (arclengths[k]-arclengths[k-1])) for dim in range(D): delta = streamline[j, dim] - streamline[j-1, dim] out[i, dim] = streamline[j-1, dim] + ratio * delta next_point += step i += 1 else: j += 1 k += 1 # Last resampled point always the one from original streamline. for dim in range(D): out[new_N-1, dim] = streamline[N-1, dim] free(arclengths) cdef void c_set_number_of_points_from_arraysequence(Streamline points, cnp.npy_intp[:] offsets, cnp.npy_intp[:] lengths, long nb_points, Streamline out) noexcept nogil: cdef: cnp.npy_intp i, j, k cnp.npy_intp offset, length cnp.npy_intp offset_out = 0 double dn, sum_dn_sqr for i in range(offsets.shape[0]): offset = offsets[i] length = lengths[i] c_set_number_of_points(points[offset:offset+length, :], out[offset_out:offset_out+nb_points, :]) offset_out += nb_points def set_number_of_points(streamlines, nb_points=3): """ Change the number of points of streamlines (either by downsampling or upsampling) Change the number of points of streamlines in order to obtain `nb_points`-1 segments of equal length. Points of streamlines will be modified along the curve. Parameters ---------- streamlines : ndarray or a list or :class:`dipy.tracking.Streamlines` If ndarray, must have shape (N,3) where N is the number of points of the streamline. If list, each item must be ndarray shape (Ni,3) where Ni is the number of points of streamline i. If :class:`dipy.tracking.Streamlines`, its `common_shape` must be 3. nb_points : int integer representing number of points wanted along the curve. Returns ------- new_streamlines : ndarray or a list or :class:`dipy.tracking.Streamlines` Results of the downsampling or upsampling process. 
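
    Notes
    -----
    Resampling is done in arc length: target points are placed at
    ``k * L / (nb_points - 1)`` for ``k = 0, ..., nb_points - 1``, where
    ``L`` is the total streamline length, and each target is linearly
    interpolated between the two original points that bracket it. A NumPy
    equivalent for one coordinate of one streamline (illustrative):

    >>> import numpy as np
    >>> sl = np.array([[0., 0, 0], [1, 0, 0], [3, 0, 0]])
    >>> seg = np.linalg.norm(np.diff(sl, axis=0), axis=1)
    >>> arc = np.concatenate(([0], np.cumsum(seg)))  # [0, 1, 3]
    >>> targets = np.linspace(0, arc[-1], 4)  # 4 points, 3 equal arcs
    >>> np.interp(targets, arc, sl[:, 0]).tolist()
    [0.0, 1.0, 2.0, 3.0]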
Examples -------- >>> from dipy.tracking.streamline import set_number_of_points >>> import numpy as np One streamline, a semi-circle: >>> theta = np.pi*np.linspace(0, 1, 100) >>> x = np.cos(theta) >>> y = np.sin(theta) >>> z = 0 * x >>> streamline = np.vstack((x, y, z)).T >>> modified_streamline = set_number_of_points(streamline, 3) >>> len(modified_streamline) 3 Multiple streamlines: >>> streamlines = [streamline, streamline[::2]] >>> new_streamlines = set_number_of_points(streamlines, 10) >>> [len(s) for s in streamlines] [100, 50] >>> [len(s) for s in new_streamlines] [10, 10] """ if isinstance(streamlines, Streamlines): if len(streamlines) == 0: return Streamlines() nb_streamlines = len(streamlines) dtype = as_native_array(streamlines._data).dtype new_streamlines = Streamlines() new_streamlines._data = np.zeros((nb_streamlines * nb_points, 3), dtype=dtype) new_streamlines._offsets = nb_points * np.arange(nb_streamlines, dtype=np.intp) new_streamlines._lengths = nb_points * np.ones(nb_streamlines, dtype=np.intp) if dtype == np.float32: c_set_number_of_points_from_arraysequence[float2d]( as_native_array(streamlines._data), streamlines._offsets.astype(np.intp), streamlines._lengths.astype(np.intp), nb_points, new_streamlines._data) else: c_set_number_of_points_from_arraysequence[double2d]( as_native_array(streamlines._data), streamlines._offsets.astype(np.intp), streamlines._lengths.astype(np.intp), nb_points, new_streamlines._data) return new_streamlines only_one_streamlines = False if type(streamlines) is cnp.ndarray: only_one_streamlines = True streamlines = [streamlines] if len(streamlines) == 0: return [] if nb_points < 2: raise ValueError("nb_points must be at least 2") dtype = streamlines[0].dtype for streamline in streamlines: if streamline.dtype != dtype: dtype = None if len(streamline) < 2: raise ValueError("All streamlines must have at least 2 points.") # Allocate memory for each modified streamline new_streamlines = [] cdef cnp.npy_intp i if dtype is None: # List of streamlines having different dtypes for i in range(len(streamlines)): dtype = streamlines[i].dtype # HACK: To avoid memleaks we have to recast with astype(dtype). streamline = streamlines[i].astype(dtype) if dtype != np.float32 and dtype != np.float64: dtype = np.float32 if dtype == np.int64 or dtype == np.uint64: dtype = np.float64 streamline = streamline.astype(dtype) new_streamline = np.empty((nb_points, streamline.shape[1]), dtype=dtype) if dtype == np.float32: c_set_number_of_points[float2d](streamline, new_streamline) else: c_set_number_of_points[double2d](streamline, new_streamline) # HACK: To avoid memleaks we have to recast with astype(dtype). new_streamlines.append(new_streamline.astype(dtype)) elif dtype == np.float32: # All streamlines have composed of float32 points for i in range(len(streamlines)): streamline = streamlines[i].astype(dtype) modified_streamline = np.empty((nb_points, streamline.shape[1]), dtype=streamline.dtype) c_set_number_of_points[float2d](streamline, modified_streamline) # HACK: To avoid memleaks we have to recast with astype(dtype). new_streamlines.append(modified_streamline.astype(dtype)) elif dtype == np.float64: # All streamlines are composed of float64 points for i in range(len(streamlines)): streamline = streamlines[i].astype(dtype) modified_streamline = np.empty((nb_points, streamline.shape[1]), dtype=streamline.dtype) c_set_number_of_points[double2d](streamline, modified_streamline) # HACK: To avoid memleaks we have to recast with astype(dtype). 
            new_streamlines.append(modified_streamline.astype(dtype))

    elif dtype == np.int64 or dtype == np.uint64:
        # All streamlines are composed of int64 or uint64 points so convert
        # them to float64 one at a time.
        for i in range(len(streamlines)):
            streamline = streamlines[i].astype(np.float64)
            modified_streamline = np.empty((nb_points, streamline.shape[1]),
                                           dtype=streamline.dtype)
            c_set_number_of_points[double2d](streamline, modified_streamline)
            # HACK: To avoid memleaks we've to recast with astype(np.float64).
            new_streamlines.append(modified_streamline.astype(np.float64))
    else:
        # All streamlines are composed of points with a dtype fitting in
        # 32 bits so convert them to float32 one at a time.
        for i in range(len(streamlines)):
            streamline = streamlines[i].astype(np.float32)
            modified_streamline = np.empty((nb_points, streamline.shape[1]),
                                           dtype=streamline.dtype)
            c_set_number_of_points[float2d](streamline, modified_streamline)
            # HACK: To avoid memleaks we've to recast with astype(np.float32).
            new_streamlines.append(modified_streamline.astype(np.float32))

    if only_one_streamlines:
        return new_streamlines[0]
    else:
        return new_streamlines


cdef double c_norm_of_cross_product(double bx, double by, double bz,
                                    double cx, double cy, double cz) noexcept nogil:
    """ Computes the norm of the cross-product in 3D. """
    cdef double ax, ay, az
    ax = by*cz - bz*cy
    ay = bz*cx - bx*cz
    az = bx*cy - by*cx
    return sqrt(ax*ax + ay*ay + az*az)


cdef double c_dist_to_line(Streamline streamline, cnp.npy_intp p1,
                           cnp.npy_intp p2, cnp.npy_intp p0) noexcept nogil:
    """ Computes the shortest Euclidean distance between a point `p0` and the
        line passing through `p1` and `p2`. """

    cdef:
        double dn, norm1, norm2
        cnp.npy_intp D = streamline.shape[1]

    # Compute cross product of next-prev and curr-next
    norm1 = c_norm_of_cross_product(streamline[p2, 0]-streamline[p1, 0],
                                    streamline[p2, 1]-streamline[p1, 1],
                                    streamline[p2, 2]-streamline[p1, 2],
                                    streamline[p0, 0]-streamline[p2, 0],
                                    streamline[p0, 1]-streamline[p2, 1],
                                    streamline[p0, 2]-streamline[p2, 2])

    # Norm of next-prev
    norm2 = 0.0
    for d in range(D):
        dn = streamline[p2, d]-streamline[p1, d]
        norm2 += dn*dn
    norm2 = sqrt(norm2)

    return norm1 / norm2


cdef double c_segment_length(Streamline streamline,
                             cnp.npy_intp start, cnp.npy_intp end) noexcept nogil:
    """ Computes the length of the segment going from `start` to `end`. """
    cdef:
        cnp.npy_intp D = streamline.shape[1]
        cnp.npy_intp d
        double segment_length = 0.0
        double dn

    for d in range(D):
        dn = streamline[end, d] - streamline[start, d]
        segment_length += dn*dn

    return sqrt(segment_length)
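# Editor's sketch (hedged, not part of the compiled API): `c_dist_to_line`
# uses the classic point-to-line distance |(p2 - p1) x (p0 - p2)| / |p2 - p1|.
# The hypothetical helper below mirrors it in NumPy, for testing and clarity:
def _dist_to_line_numpy(p1, p2, p0):
    """Shortest distance from point `p0` to the line through `p1` and `p2`."""
    import numpy as np
    p1, p2, p0 = (np.asarray(p, dtype=float) for p in (p1, p2, p0))
    # Norm of the cross product divided by the norm of the direction vector.
    return np.linalg.norm(np.cross(p2 - p1, p0 - p2)) / np.linalg.norm(p2 - p1)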
cdef cnp.npy_intp c_compress_streamline(Streamline streamline, Streamline out,
                                        double tol_error,
                                        double max_segment_length) noexcept nogil:
    """ Compresses a streamline (see function `compress_streamlines`)."""
    cdef:
        cnp.npy_intp N = streamline.shape[0]
        cnp.npy_intp D = streamline.shape[1]
        cnp.npy_intp nb_points = 0
        cnp.npy_intp d, prev, next, curr
        double segment_length

    # Copy first point since it is always kept.
    for d in range(D):
        out[0, d] = streamline[0, d]

    nb_points = 1
    prev = 0

    # Loop through the points of the streamline checking if we can use the
    # linearized segment: next-prev. We start with next=2 (the third point)
    # since we already added point 0 and the segment between the first two
    # points is, by definition, linear.
    for next in range(2, N):
        # Euclidean distance between the last added point and the current
        # point.
        if c_segment_length(streamline, prev, next) > max_segment_length:
            for d in range(D):
                out[nb_points, d] = streamline[next-1, d]

            nb_points += 1
            prev = next-1
            continue

        # Check that each point is not offset by more than `tol_error` mm.
        for curr in range(prev+1, next):
            if c_segment_length(streamline, curr, prev) == 0:
                continue

            dist = c_dist_to_line(streamline, prev, next, curr)

            if dpy_isnan(dist) or dist > tol_error:
                for d in range(D):
                    out[nb_points, d] = streamline[next-1, d]

                nb_points += 1
                prev = next-1
                break

    # Copy last point since it is always kept.
    for d in range(D):
        out[nb_points, d] = streamline[N-1, d]

    nb_points += 1
    return nb_points
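# Editor's sketch (hedged): a cheap way to eyeball the `tol_error` guarantee
# of the routine above is to measure how far the discarded points ended up
# from the retained ones. `_max_deviation` is a hypothetical NumPy helper; it
# uses point-to-point distances as a conservative proxy for the true
# point-to-segment criterion.
def _max_deviation(original, compressed):
    """Max distance from each original point to the nearest retained point."""
    import numpy as np
    original = np.asarray(original, dtype=float)
    compressed = np.asarray(compressed, dtype=float)
    # Pairwise distances between every original and every retained point.
    d = np.linalg.norm(original[:, None, :] - compressed[None, :, :], axis=-1)
    return d.min(axis=1).max()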
def compress_streamlines(streamlines, tol_error=0.01, max_segment_length=10):
    """ Compress streamlines by linearization.

    The compression :footcite:p:`Presseau2015` consists in merging
    consecutive segments that are nearly collinear. The merging is achieved
    by removing the point the two segments have in common.

    The linearization process :footcite:p:`Presseau2015` ensures that every
    point being removed is within a certain margin (in mm) of the resulting
    streamline. Recommendations for setting this margin can be found in
    :footcite:p:`Presseau2015` (in which they called it tolerance error).

    The compression also ensures that two consecutive points won't be too far
    from each other (precisely, no farther apart than `max_segment_length`
    mm). This is a tradeoff to speed up the linearization process
    :footcite:p:`Rheault2015`. A low value will result in a faster
    linearization but low compression, whereas a high value will result in a
    slower linearization but high compression.

    Parameters
    ----------
    streamlines : one or a list of array-like of shape (N,3)
        Array representing x,y,z of N points in a streamline.
    tol_error : float, optional
        Tolerance error in mm. A rule of thumb is to set it to 0.01mm for
        deterministic streamlines and 0.1mm for probabilistic streamlines.
    max_segment_length : float, optional
        Maximum length in mm of any given segment produced by the
        compression. The default is 10mm. (In :footcite:p:`Presseau2015` they
        used a value of `np.inf`).

    Returns
    -------
    compressed_streamlines : one or a list of array-like
        Results of the linearization process.

    Examples
    --------
    >>> from dipy.tracking.streamlinespeed import compress_streamlines
    >>> import numpy as np
    >>> # One streamline: a wiggling line
    >>> rng = np.random.default_rng(42)
    >>> streamline = np.linspace(0, 10, 100*3).reshape((100, 3))
    >>> streamline += 0.2 * rng.random((100, 3))
    >>> c_streamline = compress_streamlines(streamline, tol_error=0.2)
    >>> len(streamline)
    100
    >>> len(c_streamline)
    10
    >>> # Multiple streamlines
    >>> streamlines = [streamline, streamline[::2]]
    >>> c_streamlines = compress_streamlines(streamlines, tol_error=0.2)
    >>> [len(s) for s in streamlines]
    [100, 50]
    >>> [len(s) for s in c_streamlines]
    [10, 7]

    Notes
    -----
    Be aware that compressed streamlines have variable step sizes. One needs
    to be careful when computing streamlines-based metrics
    :footcite:p:`Houde2015`.

    References
    ----------
    .. footbibliography::

    """
    only_one_streamlines = False
    if type(streamlines) is cnp.ndarray:
        only_one_streamlines = True
        streamlines = [streamlines]

    if len(streamlines) == 0:
        return []

    compressed_streamlines = []
    cdef cnp.npy_intp i
    for i in range(len(streamlines)):
        dtype = streamlines[i].dtype
        # HACK: To avoid memleaks we have to recast with astype(dtype).
        streamline = streamlines[i].astype(dtype)
        shape = streamline.shape

        if dtype != np.float32 and dtype != np.float64:
            dtype = np.float64 if dtype == np.int64 or dtype == np.uint64 \
                else np.float32
            streamline = streamline.astype(dtype)

        if shape[0] <= 2:
            compressed_streamlines.append(streamline.copy())
            continue

        compressed_streamline = np.empty(shape, dtype)

        if dtype == np.float32:
            nb_points = c_compress_streamline[float2d](
                streamline, compressed_streamline, tol_error,
                max_segment_length)
        else:
            nb_points = c_compress_streamline[double2d](
                streamline, compressed_streamline, tol_error,
                max_segment_length)

        compressed_streamline.resize((nb_points, streamline.shape[1]))
        # HACK: To avoid memleaks we have to recast with astype(dtype).
        compressed_streamlines.append(compressed_streamline.astype(dtype))

    if only_one_streamlines:
        return compressed_streamlines[0]
    else:
        return compressed_streamlines
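# Editor's sketch (hedged): a minimal end-to-end use of the public functions
# in this module, assuming only `set_number_of_points`, `compress_streamlines`
# and `length`. Resampling and then compressing a smooth curve should roughly
# preserve its total length. The helper name is hypothetical.
def _example_resample_then_compress():
    import numpy as np
    from dipy.tracking.streamlinespeed import (compress_streamlines, length,
                                               set_number_of_points)
    theta = np.linspace(0, np.pi, 100)
    streamline = np.c_[np.cos(theta), np.sin(theta), np.zeros_like(theta)]
    resampled = set_number_of_points(streamline.astype(np.float32),
                                     nb_points=20)
    compressed = compress_streamlines(resampled, tol_error=0.1)
    # The three lengths should agree within a few percent for this curve.
    return length(streamline), length(resampled), length(compressed)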
dipy-1.11.0/dipy/tracking/tests/
dipy-1.11.0/dipy/tracking/tests/__init__.py
dipy-1.11.0/dipy/tracking/tests/meson.build
cython_sources = [
  'test_tractogen',
  'test_propspeed',
]

foreach ext: cython_sources
  if fs.exists(ext + '.pxd')
    extra_args += ['--depfile', meson.current_source_dir() +'/'+ ext + '.pxd', ]
  endif
  py3.extension_module(ext,
    cython_gen.process(ext + '.pyx'),
    c_args: cython_c_args,
    include_directories: [incdir_numpy, inc_local],
    dependencies: [omp],
    install: true,
    subdir: 'dipy/tracking/tests'
  )
endforeach

python_sources = [
  '__init__.py',
  'test_distances.py',
  'test_fbc.py',
  'test_learning.py',
  'test_life.py',
  'test_mesh.py',
  'test_metrics.py',
  'test_stopping_criterion.py',
  'test_streamline.py',
  'test_tracker.py',
  'test_track_volumes.py',
  'test_tracking.py',
  'test_utils.py',
]

py3.install_sources(
  python_sources,
  pure: false,
  subdir: 'dipy/tracking/tests'
)

dipy-1.11.0/dipy/tracking/tests/test_distances.py
import warnings

import numpy as np
from numpy.testing import (
    assert_almost_equal,
    assert_array_almost_equal,
    assert_array_equal,
    assert_equal,
)

from dipy.data import get_fnames
from dipy.io.streamline import load_tractogram
from dipy.testing import assert_true
from dipy.testing.decorators import set_random_number_generator
from dipy.tracking import distances as pf
from dipy.tracking.streamline import set_number_of_points


@set_random_number_generator()
def test_LSCv2(verbose=False, rng=None):
    xyz1 = np.array([[1, 0, 0], [2, 0, 0], [3, 0, 0]], dtype="float32")
    xyz2 = np.array([[1, 0, 0], [1, 2, 0], [1, 3, 0]], dtype="float32")
    xyz3 = np.array([[1.1, 0, 0], [1, 2, 0], [1, 3, 0]], dtype="float32")
    xyz4 = np.array([[1, 0, 0], [2.1, 0, 0], [3, 0, 0]], dtype="float32")
    xyz5 = np.array([[100, 0, 0], [200, 0, 0], [300, 0, 0]], dtype="float32")
    xyz6 = np.array([[0, 20, 0], [0, 40, 0], [300, 50, 0]], dtype="float32")
    T = [xyz1, xyz2, xyz3, xyz4, xyz5, xyz6]
    pf.local_skeleton_clustering(T, 0.2)
    pf.local_skeleton_clustering_3pts(T, 0.2)

    for _ in range(40):
        xyz = rng.random((3, 3), dtype="f4")
        T.append(xyz)

    from time import time

    t1 = time()
    C3 = pf.local_skeleton_clustering(T, 0.5)
    t2 = time()
    if verbose:
        print(t2 - t1)
        print(len(C3))

    t1 = time()
    C4 = pf.local_skeleton_clustering_3pts(T, 0.5)
    t2 = time()
    if verbose:
        print(t2 - t1)
        print(len(C4))

    for c in C3:
        assert_equal(np.sum(C3[c]["hidden"] - C4[c]["hidden"]), 0)

    T2 = []
    for _ in range(10**4):
        xyz = rng.random((10, 3), dtype="f4")
        T2.append(xyz)

    t1 = time()
    C5 = pf.local_skeleton_clustering(T2, 0.5)
    t2 = time()
    if verbose:
        print(t2 - t1)
        print(len(C5))

    fname = get_fnames(name="fornix")
    fornix = load_tractogram(fname, "same", bbox_valid_check=False).streamlines

    T3 = set_number_of_points(fornix, nb_points=6)

    if verbose:
        print("lenT3", len(T3))

    C = pf.local_skeleton_clustering(T3, 10.0)

    if verbose:
        print("lenC", len(C))

    """
    try:
        from dipy.viz import window, actor
    except ImportError as e:
        raise pytest.skip('Fails to import dipy.viz due to %s' % str(e))

    scene = window.Scene()
    colors = np.zeros((len(C), 3))
    for c in C:
        color = np.random.rand(3)
        for i in C[c]['indices']:
            scene.add(actor.line(T3[i], color))
        colors[c] = color
    window.show(scene)
    scene.clear()

    skeleton = []

    def width(w):
        if w < 1:
            return 1
        else:
            return w

    for c in C:
        bundle = [T3[i] for i in C[c]['indices']]
        si, s = pf.most_similar_track_mam(bundle, 'avg')
        skeleton.append(bundle[si])
        actor.label(r, text=str(len(bundle)), pos=(bundle[si][-1]),
                    scale=(2, 2, 2))

    scene.add(actor.line(skeleton, colors, opacity=1,
                         linewidth=width(len(bundle) / 10.)))
    window.show(scene)
    """
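# Editor's sketch (hedged, not an actual test): `local_skeleton_clustering`
# returns a dict keyed by cluster id, where each value carries the member
# `indices` (and a running `hidden` representative), so nearby tracks end up
# under the same key. The helper name below is hypothetical.
def _example_local_skeleton_clustering():
    tracks = [
        np.array([[0, 0, 0], [1, 0, 0], [2, 0, 0]], dtype="f4"),
        np.array([[0, 0.1, 0], [1, 0.1, 0], [2, 0.1, 0]], dtype="f4"),
        np.array([[0, 5, 0], [1, 5, 0], [2, 5, 0]], dtype="f4"),
    ]
    clusters = pf.local_skeleton_clustering(tracks, 0.5)
    # Expect the first two tracks in one cluster and the third in its own.
    return {c: clusters[c]["indices"] for c in clusters}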
def test_bundles_distances_mam():
    xyz1A = np.array([[0, 0, 0], [1, 0, 0], [2, 0, 0], [3, 0, 0]],
                     dtype="float32")
    xyz2A = np.array([[0, 1, 1], [1, 0, 1], [2, 3, -2]], dtype="float32")
    xyz1B = np.array([[-1, 0, 0], [2, 0, 0], [2, 3, 0], [3, 0, 0]],
                     dtype="float32")
    tracksA = [xyz1A, xyz2A]
    tracksB = [xyz1B, xyz1A, xyz2A]
    for metric in ("avg", "min", "max"):
        pf.bundles_distances_mam(tracksA, tracksB, metric=metric)

    with warnings.catch_warnings(record=True) as w:
        warnings.simplefilter("always", category=UserWarning)
        tracksC = [xyz2A, xyz1A]
        _ = pf.bundles_distances_mam(tracksA, tracksC)

    print(w)
    assert_true(len(w) == 1)
    assert_true(issubclass(w[0].category, UserWarning))
    assert_true("not have the same number of points" in str(w[0].message))


def test_bundles_distances_mdf(verbose=False):
    xyz1A = np.array([[0, 0, 0], [1, 0, 0], [2, 0, 0]], dtype="float32")
    xyz2A = np.array([[0, 1, 1], [1, 0, 1], [2, 3, -2]], dtype="float32")
    xyz3A = np.array([[0, 0, 0], [1, 0, 0], [3, 0, 0]], dtype="float32")
    xyz1B = np.array([[-1, 0, 0], [2, 0, 0], [2, 3, 0]], dtype="float32")
    xyz1C = np.array([[-1, 0, 0], [2, 0, 0], [2, 3, 0], [3, 0, 0]],
                     dtype="float32")

    tracksA = [xyz1A, xyz2A]
    tracksB = [xyz1B, xyz1A, xyz2A]
    dist = pf.bundles_distances_mdf(tracksA, tracksA)
    assert_equal(dist[0, 0], 0)
    assert_equal(dist[1, 1], 0)
    assert_equal(dist[1, 0], dist[0, 1])
    pf.bundles_distances_mdf(tracksA, tracksB)

    tracksA = [xyz1A, xyz1A]
    tracksB = [xyz1A, xyz1A]
    DM2 = pf.bundles_distances_mdf(tracksA, tracksB)
    assert_array_almost_equal(DM2, np.zeros((2, 2)))

    tracksA = [xyz1A, xyz3A]
    tracksB = [xyz2A]
    DM2 = pf.bundles_distances_mdf(tracksA, tracksB)
    if verbose:
        print(DM2)

    # assert_array_almost_equal(DM2,np.zeros((2,2)))
    DM = np.zeros(DM2.shape)
    for a, ta in enumerate(tracksA):
        for b, tb in enumerate(tracksB):
            md = np.sum(np.sqrt(np.sum((ta - tb) ** 2, axis=1))) / 3.0
            md2 = np.sum(np.sqrt(np.sum((ta - tb[::-1]) ** 2, axis=1))) / 3.0
            DM[a, b] = np.min((md, md2))

    if verbose:
        print(DM)
        print("--------------")
        for t in tracksA:
            print(t)
        print("--------------")
        for t in tracksB:
            print(t)

    assert_array_almost_equal(DM, DM2, 4)

    with warnings.catch_warnings(record=True) as w:
        warnings.simplefilter("always", category=UserWarning)
        tracksC = [xyz1C, xyz1A]
        _ = pf.bundles_distances_mdf(tracksA, tracksC)

    print(w)
    assert_true(len(w) == 1)
    assert_true(issubclass(w[0].category, UserWarning))
    assert_true("not have the same number of points" in str(w[0].message))


def test_mam_distances():
    xyz1 = np.array([[0, 0, 0], [1, 0, 0], [2, 0, 0], [3, 0, 0]])
    xyz2 = np.array([[0, 1, 1], [1, 0, 1], [2, 3, -2]])
    # dm=array([[ 2,  2, 17], [ 3,  1, 14], [6, 2, 13], [11, 5, 14]])
    # this is the distance matrix between points of xyz1
    # and points of xyz2
    xyz1 = xyz1.astype("float32")
    xyz2 = xyz2.astype("float32")
    zd2 = pf.mam_distances(xyz1, xyz2)
    assert_almost_equal(zd2[0], 1.76135602742)


def test_approx_ei_traj():
    segs = 100
    t = np.linspace(0, 1.75 * 2 * np.pi, segs)
    x = t
    y = 5 * np.sin(5 * t)
    z = np.zeros(x.shape)
    xyz = np.vstack((x, y, z)).T
    xyza = pf.approx_polygon_track(xyz)
    assert_equal(len(xyza), 27)

    # test repeated point
    track = np.array(
        [[1.0, 0.0, 0.0], [1.0, 0.0, 0.0], [3.0, 0.0, 0.0], [4.0, 0.0, 0.0]]
    )
    xyza = pf.approx_polygon_track(track)
    assert_array_equal(xyza, np.array([[1.0, 0.0, 0.0], [4.0, 0.0, 0.0]]))


def test_approx_mdl_traj():
    t = np.linspace(0, 1.75 * 2 * np.pi, 100)
    x = np.sin(t)
    y = np.cos(t)
    z = t
    xyz = np.vstack((x, y, z)).T
    xyza1 = pf.approximate_mdl_trajectory(xyz, alpha=1.0)
    xyza2 = pf.approximate_mdl_trajectory(xyz, alpha=2.0)

    assert_equal(len(xyza1), 10)
    assert_equal(len(xyza2), 8)

    assert_array_almost_equal(
        xyza1,
        np.array(
            [
                [0.00000000e00, 1.00000000e00, 0.00000000e00],
                [9.39692621e-01, 3.42020143e-01, 1.22173048e00],
                [6.42787610e-01, -7.66044443e-01, 2.44346095e00],
                [-5.00000000e-01, -8.66025404e-01, 3.66519143e00],
                [-9.84807753e-01, 1.73648178e-01, 4.88692191e00],
                [-1.73648178e-01, 9.84807753e-01, 6.10865238e00],
                [8.66025404e-01, 5.00000000e-01, 7.33038286e00],
                [7.66044443e-01, -6.42787610e-01, 8.55211333e00],
                [-3.42020143e-01, -9.39692621e-01, 9.77384381e00],
                [-1.00000000e00, -4.28626380e-16, 1.09955743e01],
            ]
        ),
    )

    assert_array_almost_equal(
        xyza2,
        np.array(
            [
                [0.00000000e00, 1.00000000e00, 0.00000000e00],
                [9.95471923e-01, -9.50560433e-02, 1.66599610e00],
                [-1.89251244e-01, -9.81928697e-01, 3.33199221e00],
                [-9.59492974e-01, 2.81732557e-01, 4.99798831e00],
                [3.71662456e-01, 9.28367933e-01, 6.66398442e00],
                [8.88835449e-01, -4.58226522e-01, 8.32998052e00],
                [-5.40640817e-01, -8.41253533e-01, 9.99597663e00],
                [-1.00000000e00, -4.28626380e-16, 1.09955743e01],
            ]
        ),
    )


def test_point_track_sq_distance():
    t = np.array([[0, 0, 0], [1, 1, 1], [2, 2, 2]], dtype="f4")
    p = np.array([-1, -1.0, -1], dtype="f4")
    assert_equal(pf.point_track_sq_distance_check(t, p, 0.2**2), False)
    assert_equal(pf.point_track_sq_distance_check(t, p, 2**2), True)

    t = np.array([[0, 0, 0], [1, 0, 0], [2, 2, 0]], dtype="f4")
    p = np.array([0.5, 0, 0], dtype="f4")
    assert_equal(pf.point_track_sq_distance_check(t, p, 0.2**2), True)
    p = np.array([0.5, 1, 0], dtype="f4")
    assert_equal(pf.point_track_sq_distance_check(t, p, 0.2**2), False)


def test_track_roi_intersection_check():
    roi = np.array([[0, 0, 0], [1, 0, 0], [2, 0, 0]], dtype="f4")

    t = np.array([[0, 0, 0], [1, 1, 1], [2, 2, 2]], dtype="f4")
    assert_equal(pf.track_roi_intersection_check(t, roi, 1), True)

    t = np.array([[0, 0, 0], [1, 0, 0], [2, 2, 2]], dtype="f4")
    assert_equal(pf.track_roi_intersection_check(t, roi, 1), True)

    t = np.array([[1, 1, 0], [1, 0, 0], [1, -1, 0]], dtype="f4")
    assert_equal(pf.track_roi_intersection_check(t, roi, 1), True)

    t = np.array([[4, 0, 0], [4, 1, 1], [4, 2, 0]], dtype="f4")
    assert_equal(pf.track_roi_intersection_check(t, roi, 1), False)
def test_minimum_distance():
    xyz1 = np.array([[1, 0, 0], [2, 0, 0]], dtype="float32")
    xyz2 = np.array([[3, 0, 0], [4, 0, 0]], dtype="float32")
    assert_equal(pf.minimum_closest_distance(xyz1, xyz2), 1.0)


def test_most_similar_mam():
    xyz1 = np.array([[0, 0, 0], [1, 0, 0], [2, 0, 0], [3, 0, 0]],
                    dtype="float32")
    xyz2 = np.array([[0, 1, 1], [1, 0, 1], [2, 3, -2]], dtype="float32")
    xyz3 = np.array([[-1, 0, 0], [2, 0, 0], [2, 3, 0], [3, 0, 0]],
                    dtype="float32")
    tracks = [xyz1, xyz2, xyz3]
    for metric in ("avg", "min", "max"):
        # pf should be much faster and the results equivalent
        pf.most_similar_track_mam(tracks, metric=metric)


def test_cut_plane():
    dt = np.dtype(np.float32)
    refx = np.array([[0, 0, 0], [1, 0, 0], [2, 0, 0], [3, 0, 0]], dtype=dt)
    bundlex = [
        np.array([[0.5, 1, 0], [1.5, 2, 0], [2.5, 3, 0]], dtype=dt),
        np.array([[0.5, 2, 0], [1.5, 3, 0], [2.5, 4, 0]], dtype=dt),
        np.array([[0.5, 1, 1], [1.5, 2, 2], [2.5, 3, 3]], dtype=dt),
        np.array([[-0.5, 2, -1], [-1.5, 3, -2], [-2.5, 4, -3]], dtype=dt),
    ]
    expected_hit0 = [
        [1.0, 1.5, 0.0, 0.70710683, 0.0],
        [1.0, 2.5, 0.0, 0.70710677, 1.0],
        [1.0, 1.5, 1.5, 0.81649661, 2.0],
    ]
    expected_hit1 = [
        [2.0, 2.5, 0.0, 0.70710677, 0.0],
        [2.0, 3.5, 0.0, 0.70710677, 1.0],
        [2.0, 2.5, 2.5, 0.81649655, 2.0],
    ]
    hitx = pf.cut_plane(bundlex, refx)
    assert_array_almost_equal(hitx[0], expected_hit0)
    assert_array_almost_equal(hitx[1], expected_hit1)

    # check that algorithm allows types other than float32
    bundlex[0] = np.asarray(bundlex[0], dtype=np.float64)
    hitx = pf.cut_plane(bundlex, refx)
    assert_array_almost_equal(hitx[0], expected_hit0)
    assert_array_almost_equal(hitx[1], expected_hit1)

    refx = np.asarray(refx, dtype=np.float64)
    hitx = pf.cut_plane(bundlex, refx)
    assert_array_almost_equal(hitx[0], expected_hit0)
    assert_array_almost_equal(hitx[1], expected_hit1)
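# Editor's sketch (hedged, not an actual test): for two tracks with the same
# number of points, the MDF distance returned by `bundles_distances_mdf` is
# the mean pointwise distance, minimized over the direct and flipped
# orderings, as the manual computation in `test_bundles_distances_mdf` shows:
def _example_mdf_by_hand(ta, tb):
    direct = np.mean(np.sqrt(((ta - tb) ** 2).sum(axis=1)))
    flipped = np.mean(np.sqrt(((ta - tb[::-1]) ** 2).sum(axis=1)))
    return min(direct, flipped)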
dipy-1.11.0/dipy/tracking/tests/test_fbc.py
import numpy as np
import numpy.testing as npt

from dipy.core.sphere import Sphere
from dipy.denoise.enhancement_kernel import EnhancementKernel
from dipy.tracking.fbcmeasures import FBCMeasures


def test_fbc():
    """Test the FBC measures on a set of fibers"""

    # Generate two fibers of 10 points
    streamlines = []
    for i in range(2):
        fiber = np.zeros((10, 3))
        for j in range(10):
            fiber[j, 0] = j
            fiber[j, 1] = i * 0.2
            fiber[j, 2] = 0
        streamlines.append(fiber)

    # Create lookup table.
    # A fixed set of orientations is used to guarantee deterministic results
    D33 = 1.0
    D44 = 0.04
    t = 1
    sphere = Sphere(
        xyz=np.array(
            [
                [0.82819078, 0.51050355, 0.23127074],
                [-0.10761926, -0.95554309, 0.27450957],
                [0.4101745, -0.07154038, 0.90919682],
                [-0.75573448, 0.64854889, 0.09082809],
                [-0.56874549, 0.01377562, 0.8223982],
            ]
        )
    )
    k = EnhancementKernel(D33, D44, t, orientations=sphere,
                          force_recompute=True)

    # run FBC
    fbc = FBCMeasures(streamlines, k, verbose=True)

    # get FBC values
    fbc_sl_orig, clrs_orig, rfbc_orig = fbc.get_points_rfbc_thresholded(
        0, emphasis=0.01
    )

    # check mean RFBC against tested value
    npt.assert_almost_equal(np.mean(rfbc_orig), 1.0500466494329224, decimal=4)

dipy-1.11.0/dipy/tracking/tests/test_learning.py
"""Testing track_metrics module"""

import numpy as np
from numpy.testing import assert_array_equal

from dipy.tracking import learning as tl


def test_det_corr_tracks():
    A = np.array([[0, 0, 0], [1, 1, 1], [2, 2, 2]])
    B = np.array([[1, 0, 0], [2, 0, 0], [3, 0, 0]])
    C = np.array([[0, 0, -1], [0, 0, -2], [0, 0, -3]])

    bundle1 = [A, B, C]
    bundle2 = [B, A]
    indices = [0, 1]
    print(A)
    print(B)
    print(C)
    arr = tl.detect_corresponding_tracks(indices, bundle1, bundle2)
    print(arr)
    assert_array_equal(arr, np.array([[0, 1], [1, 0]]))

    indices2 = [0, 1]
    arr2 = tl.detect_corresponding_tracks_plus(indices, bundle1, indices2,
                                               bundle2)
    print(arr2)
    assert_array_equal(arr, arr2)

dipy-1.11.0/dipy/tracking/tests/test_life.py
import os.path as op

import nibabel as nib
import numpy as np
import numpy.testing as npt
import scipy.linalg as la

import dipy.core.gradients as grad
import dipy.core.optimize as opt
import dipy.data as dpd
from dipy.io.gradients import read_bvals_bvecs
from dipy.io.image import load_nifti_data
from dipy.io.stateful_tractogram import Space, StatefulTractogram
import dipy.tracking.life as life

THIS_DIR = op.dirname(__file__)


def test_streamline_gradients():
    streamline = [[1, 2, 3], [4, 5, 6], [5, 6, 7], [8, 9, 10]]
    grads = np.array([[3, 3, 3], [2, 2, 2], [2, 2, 2], [3, 3, 3]])
    npt.assert_array_equal(life.streamline_gradients(streamline), grads)


def test_streamline_tensors():
    # Small streamline
    streamline = [[1, 2, 3], [4, 5, 3], [5, 6, 3]]
    # Non-default eigenvalues:
    evals = [0.0012, 0.0006, 0.0004]
    streamline_tensors = life.streamline_tensors(streamline, evals=evals)
    npt.assert_array_almost_equal(
        streamline_tensors[0],
        np.array([[0.0009, 0.0003, 0.0],
                  [0.0003, 0.0009, 0.0],
                  [0.0, 0.0, 0.0004]]),
    )

    # Get the eigenvalues/eigenvectors:
    eigvals, eigvecs = la.eig(streamline_tensors[0])
    eigvecs = eigvecs[np.argsort(eigvals)[::-1]]
    eigvals = eigvals[np.argsort(eigvals)[::-1]]

    npt.assert_array_almost_equal(eigvals, np.array([0.0012, 0.0006, 0.0004]))
    npt.assert_array_almost_equal(eigvecs[0],
                                  np.array([0.70710678, -0.70710678, 0.0]))

    # Another small streamline
    streamline = [[1, 0, 0], [2, 0, 0], [3, 0, 0]]
    streamline_tensors = life.streamline_tensors(streamline, evals=evals)

    for t in streamline_tensors:
        eigvals, eigvecs = la.eig(t)
        eigvecs = eigvecs[np.argsort(eigvals)[::-1]]
        # This one has no rotations - all tensors are simply the canonical:
        npt.assert_almost_equal(
            np.rad2deg(np.arccos(np.dot(eigvecs[0], [1, 0, 0]))), 0)
        npt.assert_almost_equal(
            np.rad2deg(np.arccos(np.dot(eigvecs[1], [0, 1, 0]))), 0)
        npt.assert_almost_equal(
            np.rad2deg(np.arccos(np.dot(eigvecs[2], [0, 0, 1]))), 0)
def test_streamline_signal():
    data_file, bval_file, bvec_file = dpd.get_fnames(name="small_64D")
    gtab = grad.gradient_table(bval_file, bvecs=bvec_file)
    evals = [0.0015, 0.0005, 0.0005]
    streamline1 = [
        [[1, 2, 3], [4, 5, 3], [5, 6, 3], [6, 7, 3]],
        [[1, 2, 3], [4, 5, 3], [5, 6, 3]],
    ]

    [life.streamline_signal(s, gtab, evals=evals) for s in streamline1]

    streamline2 = [[[1, 2, 3], [4, 5, 3], [5, 6, 3], [6, 7, 3]]]

    [life.streamline_signal(s, gtab, evals=evals) for s in streamline2]

    npt.assert_array_equal(streamline2[0], streamline1[0])


def test_voxel2streamline():
    streamline = [
        [[1.1, 2.4, 2.9], [4, 5, 3], [5, 6, 3], [6, 7, 3]],
        [[1, 2, 3], [4, 5, 3], [5, 6, 3]],
    ]
    affine = np.eye(4)
    v2f, v2fn = life.voxel2streamline(streamline, affine)
    npt.assert_equal(v2f, {0: [0, 1], 1: [0, 1], 2: [0, 1], 3: [0]})
    npt.assert_equal(
        v2fn, {0: {0: [0], 1: [1], 2: [2], 3: [3]},
               1: {0: [0], 1: [1], 2: [2]}}
    )
    affine = np.array(
        [[0.9, 0, 0, 10], [0, 0.9, 0, -100], [0, 0, 0.9, 2], [0, 0, 0, 1]]
    )

    xform_sl = life.transform_streamlines(streamline, np.linalg.inv(affine))
    v2f, v2fn = life.voxel2streamline(xform_sl, affine)
    npt.assert_equal(v2f, {0: [0, 1], 1: [0, 1], 2: [0, 1], 3: [0]})
    npt.assert_equal(
        v2fn, {0: {0: [0], 1: [1], 2: [2], 3: [3]},
               1: {0: [0], 1: [1], 2: [2]}}
    )


def test_FiberModel_init():
    # Get some small amount of data:
    data_file, bval_file, bvec_file = dpd.get_fnames(name="small_64D")
    bvals, bvecs = read_bvals_bvecs(bval_file, bvec_file)
    gtab = grad.gradient_table(bvals, bvecs=bvecs)
    FM = life.FiberModel(gtab)

    streamline_cases = [
        [
            [[1, 2, 3], [4, 5, 3], [5, 6, 3], [6, 7, 3]],
            [[1, 2, 3], [4, 5, 3], [5, 6, 3]],
        ],
        [[[1, 2, 3]], [[1, 2, 3], [4, 5, 3], [5, 6, 3]]],
    ]

    affine = np.eye(4)

    for sphere in [None, False, dpd.get_sphere(name="symmetric362")]:
        fiber_matrix, vox_coords = FM.setup(streamline_cases[0], affine,
                                            sphere=sphere)
        npt.assert_array_equal(
            np.array(vox_coords),
            np.array([[1, 2, 3], [4, 5, 3], [5, 6, 3], [6, 7, 3]])
        )
        npt.assert_equal(
            fiber_matrix.shape,
            (len(vox_coords) * 64, len(streamline_cases[0]))
        )

        npt.assert_raises(
            IndexError, FM.setup, streamline_cases[1], affine, sphere=sphere
        )


def test_FiberFit():
    data_file, bval_file, bvec_file = dpd.get_fnames(name="small_64D")
    data = load_nifti_data(data_file)
    bvals, bvecs = read_bvals_bvecs(bval_file, bvec_file)
    gtab = grad.gradient_table(bvals, bvecs=bvecs)
    FM = life.FiberModel(gtab)
    evals = [0.0015, 0.0005, 0.0005]

    streamline = [
        [[1, 2, 3], [4, 5, 3], [5, 6, 3], [6, 7, 3]],
        [[1, 2, 3], [4, 5, 3], [5, 6, 3]],
    ]

    fiber_matrix, vox_coords = FM.setup(streamline, np.eye(4), evals=evals)

    w = np.array([0.5, 0.5])
    sig = opt.spdot(fiber_matrix, w) + 1.0  # Add some isotropic stuff
    S0 = data[..., gtab.b0s_mask]
    this_data = np.zeros((10, 10, 10, 64))
    this_data[vox_coords[:, 0], vox_coords[:, 1], vox_coords[:, 2]] = (
        sig.reshape((4, 64))
        * S0[vox_coords[:, 0], vox_coords[:, 1], vox_coords[:, 2]]
    )

    # Grab some realistic S0 values:
    this_data = np.concatenate([data[..., gtab.b0s_mask], this_data], -1)

    fit = FM.fit(this_data, streamline, np.eye(4))
    npt.assert_almost_equal(fit.predict()[1], fit.data[1], decimal=-1)

    # Predict with an input GradientTable
    npt.assert_almost_equal(fit.predict(gtab=gtab)[1], fit.data[1],
                            decimal=-1)

    npt.assert_almost_equal(
        this_data[vox_coords[:, 0], vox_coords[:, 1], vox_coords[:, 2]],
        fit.data
    )
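# Editor's sketch (hedged, not an actual test): the LiFE workflow exercised
# above boils down to building a model from a gradient table, fitting it to
# the diffusion data along given streamlines, and inspecting the fitted
# weights and predicted signal. The helper name is hypothetical and the data
# set choice is only illustrative.
def _example_life_workflow():
    data_file, bval_file, bvec_file = dpd.get_fnames(name="small_64D")
    data = load_nifti_data(data_file)
    gtab = grad.gradient_table(bval_file, bvecs=bvec_file)
    streamlines = [[[1, 2, 3], [4, 5, 3], [5, 6, 3], [6, 7, 3]]]
    model = life.FiberModel(gtab)
    fit = model.fit(data, streamlines, np.eye(4))
    # `fit.beta` holds one weight per streamline; `fit.predict()` returns the
    # modeled signal in the voxels traversed by the streamlines.
    return fit.beta, fit.predict().shape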
def test_fit_data():
    fdata, fbval, fbvec = dpd.get_fnames(name="small_25")
    fstreamlines = dpd.get_fnames(name="small_25_streamlines")
    gtab = grad.gradient_table(fbval, bvecs=fbvec)
    ni_data = nib.load(fdata)
    data = np.asarray(ni_data.dataobj)
    tensor_streamlines = nib.streamlines.load(fstreamlines).streamlines
    sft = StatefulTractogram(tensor_streamlines, ni_data, Space.RASMM)
    sft.to_vox()
    tensor_streamlines_vox = sft.streamlines

    life_model = life.FiberModel(gtab)
    life_fit = life_model.fit(data, tensor_streamlines_vox, np.eye(4))
    model_error = life_fit.predict() - life_fit.data
    model_rmse = np.sqrt(np.mean(model_error**2, -1))
    matlab_rmse, matlab_weights = dpd.matlab_life_results()
    # Lower error than the matlab implementation for these data:
    npt.assert_(np.median(model_rmse) < np.median(matlab_rmse))
    # And a moderate correlation with the Matlab implementation weights:
    npt.assert_(np.corrcoef(matlab_weights, life_fit.beta)[0, 1] > 0.6)

dipy-1.11.0/dipy/tracking/tests/test_mesh.py
import numpy as np
import numpy.testing as npt

from dipy.tracking.mesh import (
    random_coordinates_from_surface,
    seeds_from_surface_coordinates,
    triangles_area,
    vertices_to_triangles_values,
)


def create_cube():
    cube_tri = np.array(
        [
            [0, 6, 4],
            [0, 2, 6],
            [0, 3, 2],
            [0, 1, 3],
            [2, 7, 6],
            [2, 3, 7],
            [4, 6, 7],
            [4, 7, 5],
            [0, 4, 5],
            [0, 5, 1],
            [1, 5, 7],
            [1, 7, 3],
        ]
    )
    cube_vts = np.array(
        [
            [0.0, 0.0, 0.0],
            [0.0, 0.0, 1.0],
            [0.0, 1.0, 0.0],
            [0.0, 1.0, 1.0],
            [1.0, 0.0, 0.0],
            [1.0, 0.0, 1.0],
            [1.0, 1.0, 0.0],
            [1.0, 1.0, 1.0],
        ]
    )
    return cube_tri, cube_vts


def test_triangles_area():
    cube_tri, cube_vts = create_cube()

    area = triangles_area(cube_tri, cube_vts)
    npt.assert_array_equal(area, 0.5)

    cube_vts *= 2.0
    area = triangles_area(cube_tri, cube_vts)
    npt.assert_array_equal(area, 2.0)


def test_surface_seeds():
    eps = 1e-12
    cube_tri, cube_vts = create_cube()
    nb_tri = len(cube_tri)
    nb_seed = 1000

    tri_idx, trilin_coord = random_coordinates_from_surface(nb_tri, nb_seed)

    # test min-max triangles indices
    npt.assert_array_less(-1, tri_idx)
    npt.assert_array_less(tri_idx, nb_tri)

    # test min-max trilin coordinates
    npt.assert_array_less(-eps, trilin_coord)
    npt.assert_array_less(trilin_coord, 1.0 + eps)

    seed_pts = seeds_from_surface_coordinates(cube_tri, cube_vts, tri_idx,
                                              trilin_coord)

    # test min-max seeds positions
    npt.assert_array_less(-eps, seed_pts)
    npt.assert_array_less(seed_pts, 1.0 + eps)

    for i in range(len(cube_tri)):
        tri_mask = np.zeros([len(cube_tri)])
        tri_mask[i] = 1.0
        tri_maskb = tri_mask.astype(bool)

        t_idx, _ = random_coordinates_from_surface(
            nb_tri, nb_seed, triangles_mask=tri_maskb
        )
        npt.assert_array_equal(t_idx, i)

        t_idx, _ = random_coordinates_from_surface(
            nb_tri, nb_seed, triangles_weight=tri_mask
        )
        npt.assert_array_equal(t_idx, i)


def test_vertices_to_triangles():
    cube_tri, cube_vts = create_cube()

    vts_w = np.ones([len(cube_vts)])
    tri_w = vertices_to_triangles_values(cube_tri, vts_w)
    npt.assert_array_equal(tri_w, 1.0)

    # weight each vts individually and test triangles weights
    for i in range(len(cube_vts)):
        vts_w = np.zeros([len(cube_vts)])
        vts_w[i] = 3.0
        tri_w_func = vertices_to_triangles_values(cube_tri, vts_w)

        tri_w_manual = np.zeros([len(cube_tri)])
        # if the current weighted vts is in the current triangle add 1
        for j in range(len(cube_tri)):
            if i in cube_tri[j]:
                tri_w_manual[j] += 1.0

        npt.assert_array_equal(tri_w_func, tri_w_manual)

dipy-1.11.0/dipy/tracking/tests/test_metrics.py
"""Testing track_metrics module"""

import numpy as np
from numpy.testing import assert_array_almost_equal, assert_equal

from dipy.testing.decorators import set_random_number_generator
from dipy.tracking import distances as pf, metrics as tm
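# Editor's sketch (hedged, not an actual test): `tm.spline` wraps B-spline
# smoothing of an (N, 3) track, where `s` trades smoothness against fidelity
# and `k` is the spline degree, exactly as exercised by `test_splines` below.
# The helper name is hypothetical.
def _example_spline_smoothing():
    t = np.linspace(0, 2 * np.pi, 50)
    xyz = np.vstack((np.sin(t), np.cos(t), t)).T
    return tm.spline(xyz, s=3, k=2, nest=-1)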
@set_random_number_generator()
def test_splines(rng):
    # create a helix
    t = np.linspace(0, 1.75 * 2 * np.pi, 100)
    x = np.sin(t)
    y = np.cos(t)
    z = t

    # add noise
    x += rng.normal(scale=0.1, size=x.shape)
    y += rng.normal(scale=0.1, size=y.shape)
    z += rng.normal(scale=0.1, size=z.shape)

    xyz = np.vstack((x, y, z)).T
    # get the B-splines smoothed result
    tm.spline(xyz, s=3, k=2, nest=-1)


def test_segment_intersection():
    xyz = np.array([[1, 1, 1], [2, 2, 2], [2, 2, 2]])
    center = [10, 4, 10]
    radius = 1
    assert_equal(tm.intersect_sphere(xyz, center, radius), False)

    xyz = np.array([[1, 1, 1], [2, 2, 2], [3, 3, 3], [4, 4, 4]])
    center = [10, 10, 10]
    radius = 2
    assert_equal(tm.intersect_sphere(xyz, center, radius), False)

    xyz = np.array([[1, 1, 1], [2, 2, 2], [3, 3, 3], [4, 4, 4]])
    center = [2.1, 2, 2.2]
    radius = 2
    assert_equal(tm.intersect_sphere(xyz, center, radius), True)


def test_normalized_3vec():
    vec = [1, 2, 3]
    l2n = np.sqrt(np.dot(vec, vec))
    assert_array_almost_equal(l2n, pf.norm_3vec(vec))
    nvec = pf.normalized_3vec(vec)
    assert_array_almost_equal(np.array(vec) / l2n, nvec)
    vec = np.array([[1, 2, 3]])
    assert_equal(vec.shape, (1, 3))
    assert_equal(pf.normalized_3vec(vec).shape, (3,))


def test_inner_3vecs():
    vec1 = [1, 2.3, 3]
    vec2 = [2, 3, 4.3]
    assert_array_almost_equal(np.inner(vec1, vec2),
                              pf.inner_3vecs(vec1, vec2))
    vec2 = [2, -3, 4.3]
    assert_array_almost_equal(np.inner(vec1, vec2),
                              pf.inner_3vecs(vec1, vec2))


def test_add_sub_3vecs():
    vec1 = np.array([1, 2.3, 3])
    vec2 = np.array([2, 3, 4.3])
    assert_array_almost_equal(vec1 - vec2, pf.sub_3vecs(vec1, vec2))
    assert_array_almost_equal(vec1 + vec2, pf.add_3vecs(vec1, vec2))
    vec2 = [2, -3, 4.3]
    assert_array_almost_equal(vec1 - vec2, pf.sub_3vecs(vec1, vec2))
    assert_array_almost_equal(vec1 + vec2, pf.add_3vecs(vec1, vec2))


def test_winding():
    t = np.array(
        [
            [63.90763092, 66.25634766, 74.84692383],
            [63.19578171, 65.95800018, 74.77872467],
            [61.79797363, 64.91297913, 75.04083252],
            [60.22916412, 64.11988068, 75.12763214],
            [59.47861481, 63.50800323, 75.25228882],
            [58.29077911, 62.88838959, 75.59411621],
            [57.40341568, 62.48369217, 75.46385193],
            [56.08355713, 61.64668274, 75.50260162],
            [54.88656616, 60.34751129, 75.49420929],
            [52.57548523, 58.3325882, 76.18450928],
            [50.99916077, 56.06463623, 76.07842255],
            [50.2379303, 54.92457962, 76.14080811],
            [49.29185867, 54.21960449, 76.04216003],
            [48.56259918, 53.58783722, 75.95063782],
            [48.13407516, 53.19916534, 75.91035461],
            [47.29430389, 52.12264252, 76.05912018],
        ],
        dtype=np.float32,
    )

    assert_equal(np.isnan(tm.winding(t)), False)

dipy-1.11.0/dipy/tracking/tests/test_propspeed.pyx
import warnings

import numpy as np
import numpy.testing as npt

from dipy.core.sphere import unit_octahedron
from dipy.data import default_sphere
from dipy.direction.pmf import SHCoeffPmfGen, SimplePmfGen
from dipy.reconst.shm import (
    SphHarmFit,
    SphHarmModel,
    descoteaux07_legacy_msg,
)
from dipy.tracking.propspeed import ndarray_offset, eudx_both_directions
from dipy.tracking.propspeed cimport (
    deterministic_propagator,
    probabilistic_propagator,
    parallel_transport_propagator,
)
from dipy.tracking.tracker_parameters import generate_tracking_parameters, FAIL
from dipy.tracking.tests.test_tractogen import get_fast_tracking_performances

from dipy.utils.fast_numpy cimport RNGState, seed_rng
def test_tracker_deterministic():
    # Test the deterministic tracker function
    cdef double[:] stream_data = np.zeros(3, dtype=float)
    cdef double[:] point
    cdef double[:] direction
    cdef RNGState rng
    seed_rng(&rng, 12345)

    class SillyModel(SphHarmModel):
        sh_order_max = 4

        def fit(self, data, mask=None):
            coeff = np.zeros(data.shape[:-1] + (15,))
            return SphHarmFit(self, coeff, mask=None)

    model = SillyModel(gtab=None)
    data = np.zeros((3, 3, 3, 7))
    sphere = unit_octahedron

    params = generate_tracking_parameters("det",
                                          max_len=500,
                                          step_size=0.2,
                                          voxel_size=np.ones(3),
                                          max_angle=20)

    # Test if the tracking works on different dtype of the same data.
    for dtype in [np.float32, np.float64]:
        with warnings.catch_warnings():
            warnings.filterwarnings(
                "ignore",
                message=descoteaux07_legacy_msg,
                category=PendingDeprecationWarning,
            )
            fit = model.fit(data.astype(dtype))
        sh_pmf_gen = SHCoeffPmfGen(fit.shm_coeff, sphere, 'descoteaux07')
        sf_pmf_gen = SimplePmfGen(fit.odf(sphere), sphere)

        point = np.zeros(3)
        direction = unit_octahedron.vertices[0].copy()

        # Test using SH pmf
        status = deterministic_propagator(&point[0],
                                          &direction[0],
                                          params,
                                          &stream_data[0],
                                          sh_pmf_gen,
                                          &rng)
        npt.assert_equal(status, FAIL)

        # Test using SF pmf
        status = deterministic_propagator(&point[0],
                                          &direction[0],
                                          params,
                                          &stream_data[0],
                                          sf_pmf_gen,
                                          &rng)
        npt.assert_equal(status, FAIL)


def test_deterministic_performances():
    # Test deterministic tracker on the DiSCo dataset
    params = generate_tracking_parameters("det",
                                          max_len=500,
                                          step_size=0.2,
                                          voxel_size=np.ones(3),
                                          max_angle=20)
    r = get_fast_tracking_performances(params, nbr_seeds=5000)
    npt.assert_(r > 0.85, msg="Deterministic tracker has a low performance "
                              "score: " + str(r))


def test_tracker_probabilistic():
    # Test the probabilistic tracker function
    cdef double[:] stream_data = np.zeros(3, dtype=float)
    cdef double[:] point
    cdef double[:] direction
    cdef RNGState rng
    seed_rng(&rng, 12345)

    class SillyModel(SphHarmModel):
        sh_order_max = 4

        def fit(self, data, mask=None):
            coeff = np.zeros(data.shape[:-1] + (15,))
            return SphHarmFit(self, coeff, mask=None)

    model = SillyModel(gtab=None)
    data = np.zeros((3, 3, 3, 7))
    sphere = unit_octahedron

    params = generate_tracking_parameters("prob",
                                          max_len=500,
                                          step_size=0.2,
                                          voxel_size=np.ones(3),
                                          max_angle=20)

    # Test if the tracking works on different dtype of the same data.
    for dtype in [np.float32, np.float64]:
        with warnings.catch_warnings():
            warnings.filterwarnings(
                "ignore",
                message=descoteaux07_legacy_msg,
                category=PendingDeprecationWarning,
            )
            fit = model.fit(data.astype(dtype))
        sh_pmf_gen = SHCoeffPmfGen(fit.shm_coeff, sphere, 'descoteaux07')
        sf_pmf_gen = SimplePmfGen(fit.odf(sphere), sphere)

        point = np.zeros(3)
        direction = unit_octahedron.vertices[0].copy()

        # Test using SH pmf
        status = probabilistic_propagator(&point[0],
                                          &direction[0],
                                          params,
                                          &stream_data[0],
                                          sh_pmf_gen,
                                          &rng)
        npt.assert_equal(status, FAIL)

        # Test using SF pmf
        status = probabilistic_propagator(&point[0],
                                          &direction[0],
                                          params,
                                          &stream_data[0],
                                          sf_pmf_gen,
                                          &rng)
        npt.assert_equal(status, FAIL)
def test_probabilistic_performances():
    # Test probabilistic tracker on the DiSCo dataset
    params = generate_tracking_parameters("prob",
                                          max_len=500,
                                          step_size=0.2,
                                          voxel_size=np.ones(3),
                                          max_angle=20)
    r = get_fast_tracking_performances(params, nbr_seeds=10000)
    npt.assert_(r > 0.85, msg="Probabilistic tracker has a low performance "
                              "score: " + str(r))


def test_tracker_ptt():
    # Test the parallel transport tracker function
    cdef double[:] stream_data = np.zeros(100, dtype=float)
    cdef double[:] point
    cdef double[:] direction
    cdef RNGState rng
    seed_rng(&rng, 12345)

    class SillyModel(SphHarmModel):
        sh_order_max = 4

        def fit(self, data, mask=None):
            coeff = np.zeros(data.shape[:-1] + (15,))
            return SphHarmFit(self, coeff, mask=None)

    model = SillyModel(gtab=None)
    data = np.zeros((3, 3, 3, 7))
    sphere = unit_octahedron

    params = generate_tracking_parameters("ptt",
                                          max_len=500,
                                          step_size=0.2,
                                          voxel_size=np.ones(3),
                                          max_angle=20,
                                          probe_quality=3)

    # Test if the tracking works on different dtype of the same data.
    for dtype in [np.float32, np.float64]:
        with warnings.catch_warnings():
            warnings.filterwarnings(
                "ignore",
                message=descoteaux07_legacy_msg,
                category=PendingDeprecationWarning,
            )
            fit = model.fit(data.astype(dtype))
        sh_pmf_gen = SHCoeffPmfGen(fit.shm_coeff, sphere, 'descoteaux07')
        sf_pmf_gen = SimplePmfGen(fit.odf(sphere), sphere)

        point = np.zeros(3)
        direction = unit_octahedron.vertices[0].copy()

        # Test using SH pmf
        status = parallel_transport_propagator(&point[0],
                                               &direction[0],
                                               params,
                                               &stream_data[0],
                                               sh_pmf_gen,
                                               &rng)
        npt.assert_equal(status, FAIL)

        # Test using SF pmf
        status = parallel_transport_propagator(&point[0],
                                               &direction[0],
                                               params,
                                               &stream_data[0],
                                               sf_pmf_gen,
                                               &rng)
        npt.assert_equal(status, FAIL)


def test_ptt_performances():
    # Test ptt tracker on the DiSCo dataset
    params = generate_tracking_parameters("ptt",
                                          max_len=500,
                                          step_size=0.2,
                                          voxel_size=np.ones(3),
                                          max_angle=15,
                                          probe_quality=4)
    r = get_fast_tracking_performances(params, nbr_seeds=5000)
    npt.assert_(r > 0.85, msg="PTT tracker has a low performance "
                              "score: " + str(r))


def stepped_1d(arr_1d):
    # Make a version of `arr_1d` which is not contiguous
    return np.vstack((arr_1d, arr_1d)).ravel(order="F")[::2]


def test_offset():
    # Test ndarray_offset function
    for dt in (np.int32, np.float64):
        index = np.array([1, 1], dtype=np.intp)
        A = np.array([[1, 0, 0], [0, 2, 0], [0, 0, 3]], dtype=dt)
        strides = np.array(A.strides, np.intp)
        i_size = A.dtype.itemsize
        npt.assert_equal(ndarray_offset(index, strides, 2, i_size), 4)
        npt.assert_equal(A.ravel()[4], A[1, 1])

    # Index and strides arrays must be C-continuous. Test this is enforced
    # by using non-contiguous versions of the input arrays.
    npt.assert_raises(ValueError, ndarray_offset,
                      stepped_1d(index), strides, 2, i_size)
    npt.assert_raises(ValueError, ndarray_offset,
                      index, stepped_1d(strides), 2, i_size)


def test_eudx_both_directions_errors():
    # Test error conditions for both directions function
    sphere = default_sphere
    seed = np.zeros(3, np.float64)
    qa = np.zeros((4, 5, 6, 7), np.float64)
    ind = qa.copy()
    # All of seed, qa, ind, odf_vertices must be C-contiguous. Check by
    # passing in versions that aren't C contiguous
    npt.assert_raises(
        ValueError, eudx_both_directions, stepped_1d(seed),
        0, qa, ind, sphere.vertices, 0.5, 0.1, 1.0, 1.0, 2,
    )
    npt.assert_raises(
        ValueError, eudx_both_directions, seed,
        0, qa[..., ::2], ind, sphere.vertices, 0.5, 0.1, 1.0, 1.0, 2,
    )
    npt.assert_raises(
        ValueError, eudx_both_directions, seed,
        0, qa, ind[..., ::2], sphere.vertices, 0.5, 0.1, 1.0, 1.0, 2,
    )
    npt.assert_raises(
        ValueError, eudx_both_directions, seed,
        0, qa, ind, sphere.vertices[::2], 0.5, 0.1, 1.0, 1.0, 2,
    )

dipy-1.11.0/dipy/tracking/tests/test_stopping_criterion.py
import numpy as np
import numpy.testing as npt
import scipy.ndimage

from dipy.core.ndindex import ndindex
from dipy.testing.decorators import set_random_number_generator
from dipy.tracking.stopping_criterion import (
    ActStoppingCriterion,
    BinaryStoppingCriterion,
    CmcStoppingCriterion,
    StreamlineStatus,
    ThresholdStoppingCriterion,
)


@set_random_number_generator()
def test_binary_stopping_criterion(rng):
    """This tests that the binary stopping criterion returns expected
    streamline statuses.
    """
    mask = rng.random((4, 4, 4))
    mask[mask < 0.4] = 0.0

    btc_boolean = BinaryStoppingCriterion(mask > 0)
    btc_float64 = BinaryStoppingCriterion(mask)

    # Test voxel center
    for ind in ndindex(mask.shape):
        pts = np.array(ind, dtype="float64")
        state_boolean = btc_boolean.check_point(pts)
        state_float64 = btc_float64.check_point(pts)
        if mask[ind] > 0:
            npt.assert_equal(state_boolean, int(StreamlineStatus.TRACKPOINT))
            npt.assert_equal(state_float64, int(StreamlineStatus.TRACKPOINT))
        else:
            npt.assert_equal(state_boolean, int(StreamlineStatus.ENDPOINT))
            npt.assert_equal(state_float64, int(StreamlineStatus.ENDPOINT))

    # Test random points in voxel
    for ind in ndindex(mask.shape):
        for _ in range(50):
            pts = np.array(ind, dtype="float64") + rng.random(3) - 0.5
            state_boolean = btc_boolean.check_point(pts)
            state_float64 = btc_float64.check_point(pts)
            if mask[ind] > 0:
                npt.assert_equal(state_boolean,
                                 int(StreamlineStatus.TRACKPOINT))
                npt.assert_equal(state_float64,
                                 int(StreamlineStatus.TRACKPOINT))
            else:
                npt.assert_equal(state_boolean,
                                 int(StreamlineStatus.ENDPOINT))
                npt.assert_equal(state_float64,
                                 int(StreamlineStatus.ENDPOINT))

    # Test outside points
    outside_pts = [
        [100, 100, 100],
        [0, -1, 1],
        [0, 10, 2],
        [0, 0.5, -0.51],
        [0, -0.51, 0.1],
        [4, 0, 0],
    ]
    for pts in outside_pts:
        pts = np.array(pts, dtype="float64")
        state_boolean = btc_boolean.check_point(pts)
        state_float64 = btc_float64.check_point(pts)
        npt.assert_equal(state_boolean, int(StreamlineStatus.OUTSIDEIMAGE))
        npt.assert_equal(state_float64, int(StreamlineStatus.OUTSIDEIMAGE))
""" tissue_map = rng.random((4, 4, 4)) ttc = ThresholdStoppingCriterion(tissue_map.astype("float32"), 0.5) # Test voxel center for ind in ndindex(tissue_map.shape): pts = np.array(ind, dtype="float64") state = ttc.check_point(pts) if tissue_map[ind] > 0.5: npt.assert_equal(state, int(StreamlineStatus.TRACKPOINT)) else: npt.assert_equal(state, int(StreamlineStatus.ENDPOINT)) # Test random points in voxel inds = [ [0, 1.4, 2.2], [0, 2.3, 2.3], [0, 2.2, 1.3], [0, 0.9, 2.2], [0, 2.8, 1.1], [0, 1.1, 3.3], [0, 2.1, 1.9], [0, 3.1, 3.1], [0, 0.1, 0.1], [0, 0.9, 0.5], [0, 0.9, 0.5], [0, 2.9, 0.1], ] for pts in inds: pts = np.array(pts, dtype="float64") state = ttc.check_point(pts) res = scipy.ndimage.map_coordinates( tissue_map, np.reshape(pts, (3, 1)), order=1, mode="nearest" ) if res > 0.5: npt.assert_equal(state, int(StreamlineStatus.TRACKPOINT)) else: npt.assert_equal(state, int(StreamlineStatus.ENDPOINT)) # Test outside points outside_pts = [ [100, 100, 100], [0, -1, 1], [0, 10, 2], [0, 0.5, -0.51], [0, -0.51, 0.1], ] for pts in outside_pts: pts = np.array(pts, dtype="float64") state = ttc.check_point(pts) npt.assert_equal(state, int(StreamlineStatus.OUTSIDEIMAGE)) @set_random_number_generator() def test_act_stopping_criterion(rng): """This tests that the act stopping criterion returns expected streamline statuses. """ gm = rng.random((4, 4, 4)) wm = rng.random((4, 4, 4)) csf = rng.random((4, 4, 4)) tissue_sum = gm + wm + csf gm /= tissue_sum wm /= tissue_sum csf /= tissue_sum act_tc = ActStoppingCriterion(include_map=gm, exclude_map=csf) # Test voxel center for ind in ndindex(wm.shape): pts = np.array(ind, dtype="float64") state = act_tc.check_point(pts) if csf[ind] > 0.5: npt.assert_equal(state, int(StreamlineStatus.INVALIDPOINT)) elif gm[ind] > 0.5: npt.assert_equal(state, int(StreamlineStatus.ENDPOINT)) else: npt.assert_equal(state, int(StreamlineStatus.TRACKPOINT)) # Test random points in voxel inds = [ [0, 1.4, 2.2], [0, 2.3, 2.3], [0, 2.2, 1.3], [0, 0.9, 2.2], [0, 2.8, 1.1], [0, 1.1, 3.3], [0, 2.1, 1.9], [0, 3.1, 3.1], [0, 0.1, 0.1], [0, 0.9, 0.5], [0, 0.9, 0.5], [0, 2.9, 0.1], ] for pts in inds: pts = np.array(pts, dtype="float64") state = act_tc.check_point(pts) gm_res = scipy.ndimage.map_coordinates( gm, np.reshape(pts, (3, 1)), order=1, mode="nearest" ) csf_res = scipy.ndimage.map_coordinates( csf, np.reshape(pts, (3, 1)), order=1, mode="nearest" ) if csf_res > 0.5: npt.assert_equal(state, int(StreamlineStatus.INVALIDPOINT)) elif gm_res > 0.5: npt.assert_equal(state, int(StreamlineStatus.ENDPOINT)) else: npt.assert_equal(state, int(StreamlineStatus.TRACKPOINT)) # Test outside points outside_pts = [ [100, 100, 100], [0, -1, 1], [0, 10, 2], [0, 0.5, -0.51], [0, -0.51, 0.1], ] for pts in outside_pts: pts = np.array(pts, dtype="float64") state = act_tc.check_point(pts) npt.assert_equal(state, int(StreamlineStatus.OUTSIDEIMAGE)) def test_cmc_stopping_criterion(): """This tests that the cmc stopping criterion returns expected streamline statuses. 
""" gm = np.array([[[1, 1], [0, 0], [0, 0]]]) wm = np.array([[[0, 0], [1, 1], [0, 0]]]) csf = np.array([[[0, 0], [0, 0], [1, 1]]]) include_map = gm exclude_map = csf cmc_tc = CmcStoppingCriterion( include_map=include_map, exclude_map=exclude_map, step_size=1, average_voxel_size=1, ) cmc_tc_from_pve = CmcStoppingCriterion.from_pve( wm_map=wm, gm_map=gm, csf_map=csf, step_size=1, average_voxel_size=1 ) # Test constructors for idx in np.ndindex(wm.shape): idx = np.asarray(idx, dtype="float64") npt.assert_almost_equal( cmc_tc.get_include(idx), cmc_tc_from_pve.get_include(idx) ) npt.assert_almost_equal( cmc_tc.get_exclude(idx), cmc_tc_from_pve.get_exclude(idx) ) # Test voxel center for ind in ndindex(wm.shape): pts = np.array(ind, dtype="float64") state = cmc_tc.check_point(pts) if csf[ind] == 1: npt.assert_equal(state, int(StreamlineStatus.INVALIDPOINT)) elif gm[ind] == 1: npt.assert_equal(state, int(StreamlineStatus.ENDPOINT)) else: npt.assert_equal(state, int(StreamlineStatus.TRACKPOINT)) # Test outside points outside_pts = [ [100, 100, 100], [0, -1, 1], [0, 10, 2], [0, 0.5, -0.51], [0, -0.51, 0.1], ] for pts in outside_pts: pts = np.array(pts, dtype="float64") npt.assert_equal(cmc_tc.check_point(pts), int(StreamlineStatus.OUTSIDEIMAGE)) npt.assert_equal(cmc_tc.get_exclude(pts), 0) npt.assert_equal(cmc_tc.get_include(pts), 0) dipy-1.11.0/dipy/tracking/tests/test_streamline.py000066400000000000000000001414211476546756600222610ustar00rootroot00000000000000import types import warnings import numpy as np from numpy.linalg import norm import numpy.testing as npt from numpy.testing import ( assert_allclose, assert_almost_equal, assert_array_almost_equal, assert_array_equal, assert_equal, assert_raises, ) from dipy.testing import assert_arrays_equal, assert_true from dipy.testing.decorators import set_random_number_generator from dipy.testing.memory import get_type_refcount from dipy.tracking.streamline import ( Streamlines, center_streamlines, cluster_confidence, deform_streamlines, orient_by_rois, orient_by_streamline, relist_streamlines, select_by_rois, select_random_set_of_streamlines, transform_streamlines, unlist_streamlines, values_from_volume, ) from dipy.tracking.streamlinespeed import ( compress_streamlines, length, set_number_of_points, ) streamline = np.array( [ [82.20181274, 91.36505890, 43.15737152], [82.38442230, 91.79336548, 43.87036514], [82.48710632, 92.27861023, 44.56298065], [82.53310394, 92.78545380, 45.24635315], [82.53793335, 93.26902008, 45.94785309], [82.48797607, 93.75003815, 46.64939880], [82.35533142, 94.25181580, 47.32533264], [82.15484619, 94.76634216, 47.97451019], [81.90982819, 95.28792572, 48.60244370], [81.63336945, 95.78153229, 49.23971176], [81.35479736, 96.24868011, 49.89558792], [81.08713531, 96.69807434, 50.56812668], [80.81504822, 97.14285278, 51.24193192], [80.52591705, 97.56719971, 51.92168427], [80.26599884, 97.98269653, 52.61848068], [80.04635620, 98.38131714, 53.33855820], [79.84691620, 98.77052307, 54.06955338], [79.57667542, 99.13599396, 54.78985596], [79.23351288, 99.43207550, 55.51065063], [78.84815979, 99.64141846, 56.24016571], [78.47383881, 99.77347565, 56.99299240], [78.12837219, 99.81330872, 57.76969528], [77.80438995, 99.85082245, 58.55574799], [77.49439240, 99.88065338, 59.34777069], [77.21414185, 99.85343933, 60.15090561], [76.96416473, 99.82772827, 60.96406937], [76.74712372, 99.80519104, 61.78676605], [76.52263641, 99.79122162, 62.60765076], [76.03757477, 100.08692169, 63.24152374], [75.44867706, 100.35265350, 63.79513168], [74.78033447, 
100.57255554, 64.27278900], [74.11605835, 100.77330780, 64.76428986], [73.51222992, 100.98779297, 65.32373047], [72.97387695, 101.23387146, 65.93502045], [72.47355652, 101.49151611, 66.57343292], [71.99834442, 101.72480774, 67.23979950], [71.56909180, 101.98665619, 67.92664337], [71.18083191, 102.29483795, 68.61888123], [70.81879425, 102.63343048, 69.31127167], [70.47422791, 102.98672485, 70.00532532], [70.10092926, 103.28502655, 70.70999908], [69.69512177, 103.51667023, 71.42147064], [69.27423096, 103.71351624, 72.13452911], [68.91260529, 103.81676483, 72.89796448], [68.60788727, 103.81982422, 73.69258118], [68.34162903, 103.76619720, 74.49915314], [68.08542633, 103.70635223, 75.30856323], [67.83590698, 103.60187531, 76.11553955], [67.56822968, 103.44821930, 76.90870667], [67.28399658, 103.25878906, 77.68825531], [67.00117493, 103.03740692, 78.45989227], [66.72718048, 102.80329895, 79.23099518], [66.46197510, 102.54130554, 79.99622345], [66.20803833, 102.22305298, 80.74387360], [65.96872711, 101.88980865, 81.48987579], [65.72864532, 101.59316254, 82.25085449], [65.47808075, 101.33383942, 83.02194214], [65.21841431, 101.11295319, 83.80186462], [64.95678711, 100.94080353, 84.59326935], [64.71759033, 100.82022095, 85.40114594], [64.48053741, 100.73490143, 86.21411896], [64.24304199, 100.65074158, 87.02709198], [64.01773834, 100.55318451, 87.84204865], [63.83801651, 100.41996765, 88.66333008], [63.70982361, 100.25119019, 89.48779297], [63.60707855, 100.06730652, 90.31262207], [63.46164322, 99.91001892, 91.13648224], [63.26287842, 99.78648376, 91.95485687], [63.03713226, 99.68377686, 92.76905823], [62.81192398, 99.56619263, 93.58140564], [62.57145309, 99.42708588, 94.38592529], [62.32259369, 99.25592804, 95.18167114], [62.07497787, 99.05770111, 95.97154236], [61.82253647, 98.83877563, 96.75438690], [61.59536743, 98.59293365, 97.53706360], [61.46530151, 98.30503845, 98.32772827], [61.39904785, 97.97928619, 99.11172485], [61.33279419, 97.65353394, 99.89572906], [61.26067352, 97.30914307, 100.67123413], [61.19459534, 96.96743011, 101.44847107], [61.19580460, 96.63417053, 102.23215485], [61.26572037, 96.29887390, 103.01185608], [61.39840698, 95.96297455, 103.78307343], [61.57207870, 95.64262390, 104.55268097], [61.78163528, 95.35540771, 105.32629395], [62.06700134, 95.09746552, 106.08564758], [62.39427185, 94.85724640, 106.83369446], [62.74076462, 94.62278748, 107.57482147], [63.11461639, 94.40107727, 108.30641937], [63.53397751, 94.20418549, 109.02002716], [64.00019836, 94.03809357, 109.71183777], [64.43580627, 93.87523651, 110.42416382], [64.84857941, 93.69993591, 111.14715576], [65.26740265, 93.51858521, 111.86515808], [65.69511414, 93.36718750, 112.58474731], [66.10470581, 93.22719574, 113.31711578], [66.45891571, 93.06028748, 114.07256317], [66.78582001, 92.90560913, 114.84281921], [67.11138916, 92.79004669, 115.62040710], [67.44729614, 92.75711823, 116.40135193], [67.75688171, 92.98265076, 117.16111755], [68.02041626, 93.28012848, 117.91371155], [68.25725555, 93.53466797, 118.69052124], [68.46047974, 93.63263702, 119.51107788], [68.62039948, 93.62007141, 120.34690094], [68.76782227, 93.56475067, 121.18331909], [68.90222168, 93.46326447, 122.01765442], [68.99872589, 93.30039978, 122.84759521], [69.04119873, 93.05428314, 123.66156769], [69.05086517, 92.74394989, 124.45450592], [69.02742004, 92.40427399, 125.23509979], [68.95466614, 92.09059143, 126.02339935], [68.84975433, 91.79674530, 126.81564331], [68.72673798, 91.53726196, 127.61715698], [68.60685730, 91.30300140, 128.42681885], 
[68.50636292, 91.12481689, 129.25317383], [68.39311218, 91.01572418, 130.08976746], [68.25946808, 90.94654083, 130.92756653], ], dtype=np.float32, ) streamline_64bit = streamline.astype(np.float64) streamlines = [ streamline[[0, 10]], streamline, streamline[::2], streamline[::3], streamline[::5], streamline[::6], ] streamlines_64bit = [ streamline_64bit[[0, 10]], streamline_64bit, streamline_64bit[::2], streamline_64bit[::3], streamline_64bit[::4], streamline_64bit[::5], ] heterogeneous_streamlines = [ streamline_64bit, streamline_64bit.reshape((-1, 6)), streamline_64bit.reshape((-1, 2)), ] def length_python(xyz, along=False): xyz = np.asarray(xyz, dtype=np.float64) if xyz.shape[0] < 2: if along: return np.array([0]) return 0 dists = np.sqrt((np.diff(xyz, axis=0) ** 2).sum(axis=1)) if along: return np.cumsum(dists) return np.sum(dists) def set_number_of_points_python(xyz, n_pols=3): def _extrap(xyz, cumlen, distance): """Helper function for extrapolate""" ind = np.where((cumlen - distance) > 0)[0][0] len0 = cumlen[ind - 1] len1 = cumlen[ind] Ds = distance - len0 Lambda = Ds / (len1 - len0) return Lambda * xyz[ind] + (1 - Lambda) * xyz[ind - 1] cumlen = np.zeros(xyz.shape[0]) cumlen[1:] = length_python(xyz, along=True) step = cumlen[-1] / (n_pols - 1) ar = np.arange(0, cumlen[-1], step) if np.abs(ar[-1] - cumlen[-1]) < np.finfo("f4").eps: ar = ar[:-1] xyz2 = [_extrap(xyz, cumlen, distance) for distance in ar] return np.vstack((np.array(xyz2), xyz[-1])) def test_set_number_of_points(): # Test resampling of only one streamline nb_points = 12 new_streamline_cython = set_number_of_points(streamline, nb_points=nb_points) new_streamline_python = set_number_of_points_python(streamline, nb_points) assert_equal(len(new_streamline_cython), nb_points) # Using a 5 digits precision because of streamline is in float32. assert_array_almost_equal(new_streamline_cython, new_streamline_python, 5) new_streamline_cython = set_number_of_points(streamline_64bit, nb_points=nb_points) new_streamline_python = set_number_of_points_python(streamline_64bit, nb_points) assert_equal(len(new_streamline_cython), nb_points) assert_array_almost_equal(new_streamline_cython, new_streamline_python) res = [] simple_streamline = np.array([[0, 0, 0], [1, 1, 1], [2, 2, 2]], "f4") for nb_points in range(2, 200): new_streamline_cython = set_number_of_points( simple_streamline, nb_points=nb_points ) res.append(nb_points - len(new_streamline_cython)) assert_equal(np.sum(res), 0) # Test resampling of multiple streamlines of different nb_points nb_points = 12 new_streamlines_cython = set_number_of_points(streamlines, nb_points=nb_points) for i, s in enumerate(streamlines): new_streamline_python = set_number_of_points_python(s, nb_points) # Using a 5 digits precision because of streamline is in float32. 
assert_array_almost_equal(new_streamlines_cython[i], new_streamline_python, 5) # ArraySequence arrseq = Streamlines(streamlines) new_streamlines_as_seq_cython = set_number_of_points(arrseq, nb_points=nb_points) assert_array_almost_equal(new_streamlines_as_seq_cython, new_streamlines_cython) new_streamlines_cython = set_number_of_points( streamlines_64bit, nb_points=nb_points ) for i, s in enumerate(streamlines_64bit): new_streamline_python = set_number_of_points_python(s, nb_points) assert_array_almost_equal(new_streamlines_cython[i], new_streamline_python) # ArraySequence arrseq = Streamlines(streamlines_64bit) new_streamlines_as_seq_cython = set_number_of_points(arrseq, nb_points=nb_points) assert_array_almost_equal(new_streamlines_as_seq_cython, new_streamlines_cython) # Test streamlines with mixed dtype streamlines_mixed_dtype = [ streamline, streamline.astype(np.float64), streamline.astype(np.int32), streamline.astype(np.int64), ] nb_points_mixed_dtype = [ len(s) for s in set_number_of_points(streamlines_mixed_dtype, nb_points=nb_points) ] assert_array_equal( nb_points_mixed_dtype, [nb_points] * len(streamlines_mixed_dtype) ) # Test streamlines with different shape new_streamlines_cython = set_number_of_points( heterogeneous_streamlines, nb_points=nb_points ) for i, s in enumerate(heterogeneous_streamlines): new_streamline_python = set_number_of_points_python(s, nb_points) assert_array_almost_equal(new_streamlines_cython[i], new_streamline_python) # Test streamline with integer dtype new_streamline = set_number_of_points(streamline.astype(np.int32)) assert_equal(new_streamline.dtype, np.float32) new_streamline = set_number_of_points(streamline.astype(np.int64)) assert_equal(new_streamline.dtype, np.float64) # Test empty list assert_equal(set_number_of_points([]), []) # Test streamline having only one point assert_raises(ValueError, set_number_of_points, np.array([[1, 2, 3]])) # We do not support list of lists, it should be numpy ndarray. 
    # We do not support lists of lists; the input should be a numpy ndarray.
    streamline_unsupported = [[1, 2, 3], [4, 5, 5], [2, 1, 3], [4, 2, 1]]
    assert_raises(AttributeError, set_number_of_points, streamline_unsupported)

    # Test resampling a read-only numpy array (flag WRITEABLE=False)
    streamline_readonly = streamline.copy()
    streamline_readonly.setflags(write=False)
    assert_equal(len(set_number_of_points(streamline_readonly, nb_points=42)), 42)

    # Test resampling lists of read-only numpy arrays (flag WRITEABLE=False)
    streamlines_readonly = []
    for s in streamlines:
        streamlines_readonly.append(s.copy())
        streamlines_readonly[-1].setflags(write=False)

    assert_equal(
        len(set_number_of_points(streamlines_readonly, nb_points=42)),
        len(streamlines_readonly),
    )

    streamlines_readonly = []
    for s in streamlines_64bit:
        streamlines_readonly.append(s.copy())
        streamlines_readonly[-1].setflags(write=False)

    assert_equal(
        len(set_number_of_points(streamlines_readonly, nb_points=42)),
        len(streamlines_readonly),
    )

    # Test that nb_points less than 2 raises
    assert_raises(
        ValueError,
        set_number_of_points,
        [np.ones((10, 3)), np.ones((10, 3))],
        nb_points=1,
    )


@set_random_number_generator(1234)
def test_set_number_of_points_memory_leaks(rng):
    # Test some dtypes
    dtypes = [np.float32, np.float64, np.int32, np.int64]
    for dtype in dtypes:
        s_rng = np.random.default_rng(1234)
        NB_STREAMLINES = 10000
        streamlines = [
            s_rng.standard_normal((s_rng.integers(10, 100), 3)).astype(dtype)
            for _ in range(NB_STREAMLINES)
        ]

        list_refcount_before = get_type_refcount()["list"]

        rstreamlines = set_number_of_points(streamlines, nb_points=2)

        list_refcount_after = get_type_refcount()["list"]
        # Delete `rstreamlines` because it holds a reference to the `list`.
        del rstreamlines  # noqa: F841

        # Calling `set_number_of_points` should increase the refcount of
        # `list` by one since we kept the returned value.
        assert_equal(list_refcount_after, list_refcount_before + 1)

    # Test mixed dtypes
    NB_STREAMLINES = 10000
    streamlines = []
    for i in range(NB_STREAMLINES):
        dtype = dtypes[i % len(dtypes)]
        streamlines.append(
            rng.standard_normal((rng.integers(10, 100), 3)).astype(dtype)
        )

    list_refcount_before = get_type_refcount()["list"]

    rstreamlines = set_number_of_points(streamlines, nb_points=2)  # noqa: F841

    list_refcount_after = get_type_refcount()["list"]
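    # Editor's illustrative aside (a sketch, not part of the original suite):
    # `get_type_refcount` tallies live objects per type name, so binding one
    # extra list should raise the "list" count by exactly one.
    probe_before = get_type_refcount()["list"]
    probe_list = []  # one additional live list object
    assert_equal(get_type_refcount()["list"], probe_before + 1)
    del probe_list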
    # Calling `set_number_of_points` should increase the refcount of `list`
    # by one since we kept the returned value.
    assert_equal(list_refcount_after, list_refcount_before + 1)


def test_length():
    # Test length of only one streamline
    length_streamline_cython = length(streamline)
    length_streamline_python = length_python(streamline)
    assert_almost_equal(length_streamline_cython, length_streamline_python)

    length_streamline_cython = length(streamline_64bit)
    length_streamline_python = length_python(streamline_64bit)
    assert_almost_equal(length_streamline_cython, length_streamline_python)

    # Test computing the length of multiple streamlines of different nb_points
    length_streamlines_cython = length(streamlines)

    for i, s in enumerate(streamlines):
        length_streamline_python = length_python(s)
        assert_array_almost_equal(
            length_streamlines_cython[i], length_streamline_python
        )

    length_streamlines_cython = length(streamlines_64bit)

    for i, s in enumerate(streamlines_64bit):
        length_streamline_python = length_python(s)
        assert_array_almost_equal(
            length_streamlines_cython[i], length_streamline_python
        )

    # ArraySequence
    # Test length of only one streamline
    length_streamline_cython = length(streamline)
    length_streamline_arrseq = length(Streamlines([streamline]))
    assert_almost_equal(length_streamline_arrseq, length_streamline_cython)

    length_streamline_cython = length(streamline_64bit)
    length_streamline_arrseq = length(Streamlines([streamline_64bit]))
    assert_almost_equal(length_streamline_arrseq, length_streamline_cython)

    # Test computing the length of multiple streamlines of different nb_points
    length_streamlines_cython = length(streamlines)
    length_streamlines_arrseq = length(Streamlines(streamlines))
    assert_array_almost_equal(length_streamlines_arrseq, length_streamlines_cython)

    length_streamlines_cython = length(streamlines_64bit)
    length_streamlines_arrseq = length(Streamlines(streamlines_64bit))
    assert_array_almost_equal(length_streamlines_arrseq, length_streamlines_cython)

    # Test on a sliced ArraySequence
    length_streamlines_cython = length(streamlines_64bit[::2])
    length_streamlines_arrseq = length(Streamlines(streamlines_64bit)[::2])
    assert_array_almost_equal(length_streamlines_arrseq, length_streamlines_cython)
    length_streamlines_cython = length(streamlines[::-1])
    length_streamlines_arrseq = length(Streamlines(streamlines)[::-1])
    assert_array_almost_equal(length_streamlines_arrseq, length_streamlines_cython)

    # Test streamlines having mixed dtype
    streamlines_mixed_dtype = [
        streamline,
        streamline.astype(np.float64),
        streamline.astype(np.int32),
        streamline.astype(np.int64),
    ]
    lengths_mixed_dtype = [length(s) for s in streamlines_mixed_dtype]
    assert_array_equal(length(streamlines_mixed_dtype), lengths_mixed_dtype)

    # Test streamlines with different shape
    length_streamlines_cython = length(heterogeneous_streamlines)

    for i, s in enumerate(heterogeneous_streamlines):
        length_streamline_python = length_python(s)
        assert_array_almost_equal(
            length_streamlines_cython[i], length_streamline_python
        )

    # Test streamline having integer dtype
    length_streamline = length(streamline.astype("int"))
    assert_equal(length_streamline.dtype, np.float64)

    # Test empty list
    assert_equal(length([]), 0.0)

    # Test a streamline that has only one point
    assert_equal(length(np.array([[1, 2, 3]])), 0.0)
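    # Editor's illustrative aside (a sketch, not part of the original suite):
    # `length` should agree with a direct NumPy computation that sums the
    # Euclidean norms of consecutive segments.
    expected_len = np.sum(np.linalg.norm(np.diff(streamline_64bit, axis=0), axis=1))
    assert_almost_equal(length(streamline_64bit), expected_len)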
    # We do not support lists of lists; the input should be a numpy ndarray.
    streamline_unsupported = [[1, 2, 3], [4, 5, 5], [2, 1, 3], [4, 2, 1]]
    assert_raises(AttributeError, length, streamline_unsupported)

    # Test computing the length of read-only numpy arrays (flag WRITEABLE=False)
    streamlines_readonly = []
    for s in streamlines:
        streamlines_readonly.append(s.copy())
        streamlines_readonly[-1].setflags(write=False)

    assert_array_almost_equal(
        length(streamlines_readonly), [length_python(s) for s in streamlines_readonly]
    )

    streamlines_readonly = []
    for s in streamlines_64bit:
        streamlines_readonly.append(s.copy())
        streamlines_readonly[-1].setflags(write=False)

    assert_array_almost_equal(
        length(streamlines_readonly), [length_python(s) for s in streamlines_readonly]
    )


@set_random_number_generator(1234)
def test_length_memory_leaks(rng):
    # Test some dtypes
    dtypes = [np.float32, np.float64, np.int32, np.int64]
    for dtype in dtypes:
        s_rng = np.random.default_rng(1234)
        NB_STREAMLINES = 10000
        streamlines = [
            s_rng.standard_normal((s_rng.integers(10, 100), 3)).astype(dtype)
            for _ in range(NB_STREAMLINES)
        ]

        list_refcount_before = get_type_refcount()["list"]

        lengths = length(streamlines)  # noqa: F841

        list_refcount_after = get_type_refcount()["list"]

        # Calling `length` shouldn't increase the refcount of `list`
        # since the return value is a numpy array.
        assert_equal(list_refcount_after, list_refcount_before)

    # Test mixed dtypes
    NB_STREAMLINES = 10000
    streamlines = []
    for i in range(NB_STREAMLINES):
        dtype = dtypes[i % len(dtypes)]
        streamlines.append(
            rng.standard_normal((rng.integers(10, 100), 3)).astype(dtype)
        )

    list_refcount_before = get_type_refcount()["list"]

    lengths = length(streamlines)  # noqa: F841

    list_refcount_after = get_type_refcount()["list"]

    # Calling `length` shouldn't increase the refcount of `list`
    # since the return value is a numpy array.
    assert_equal(list_refcount_after, list_refcount_before)


@set_random_number_generator()
def test_unlist_relist_streamlines(rng):
    streamlines = [rng.random((10, 3)), rng.random((20, 3)), rng.random((5, 3))]

    points, offsets = unlist_streamlines(streamlines)
    assert_equal(offsets.dtype, np.dtype("i8"))
    assert_equal(points.shape, (35, 3))
    assert_equal(len(offsets), len(streamlines))

    streamlines2 = relist_streamlines(points, offsets)
    assert_equal(len(streamlines), len(streamlines2))

    for i in range(len(streamlines)):
        assert_array_equal(streamlines[i], streamlines2[i])


def test_transform_streamlines_dtype_in_place():
    identity = np.eye(4)
    streamlines = Streamlines([streamline])
    streamlines._data = streamlines._data.astype(np.float16)
    data_dtype = streamlines._data.dtype
    offsets_dtype = streamlines._offsets.dtype
    transform_streamlines(streamlines, identity, in_place=True)
    assert_equal(data_dtype, streamlines._data.dtype)
    assert_equal(offsets_dtype, streamlines._offsets.dtype)


def test_transform_streamlines_dtype():
    identity = np.eye(4)
    streamlines = Streamlines([streamline])
    streamlines._data = streamlines._data.astype(np.float16)
    data_dtype = streamlines._data.dtype
    offsets_dtype = streamlines._offsets.dtype
    streamlines = transform_streamlines(streamlines, identity, in_place=False)
    assert_equal(data_dtype, streamlines._data.dtype)
    assert_equal(offsets_dtype, streamlines._offsets.dtype)


def test_transform_empty_streamlines():
    identity = np.eye(4)
    streamlines = Streamlines([])
    streamlines = transform_streamlines(streamlines, identity, in_place=False)
    assert_equal(len(streamlines), 0)


@set_random_number_generator()
def test_deform_streamlines(rng):
    # Create a random deformation field
    deformation_field = rng.standard_normal((200, 200, 200, 3))

    stream2grid = np.array(
        [
[-0.13152201, -0.52553149, -0.06759869, -0.80014208], [1.01579851, 0.19840874, 0.18875411, 0.81826065], [-0.07047617, -0.9290094, -0.55623385, 0.55165017], [0.0, 0.0, 0.0, 1.0], ] ) grid2world = np.array( [ [0.83354727, 1.33876877, 1.0218087, 0.12809569], [0.83571344, 0.63824941, 0.20564267, 0.82740437], [-0.26574668, -0.66695577, 0.11636694, -0.02620037], [0.0, 0.0, 0.0, 1.0], ] ) stream2world = np.dot(stream2grid, grid2world) # Deform streamlines (let two grid spaces be the same for simplicity) new_streamlines = deform_streamlines( streamlines, deformation_field, stream2grid, grid2world, stream2grid, grid2world ) # Interpolate displacements onto original streamlines streamlines_in_grid = transform_streamlines(streamlines, stream2grid) disps = values_from_volume(deformation_field, streamlines_in_grid, np.eye(4)) # Put new_streamlines into world space new_streamlines_world = transform_streamlines(new_streamlines, stream2world) # Subtract disps from new_streamlines in world space orig_streamlines_world = np.subtract( np.array(new_streamlines_world, dtype=object), np.array(disps, dtype=object) ) # Put orig_streamlines_world into voxmm orig_streamlines = transform_streamlines( orig_streamlines_world, np.linalg.inv(stream2world) ) # All close because of floating pt imprecision for o, s in zip(orig_streamlines, streamlines): assert_allclose(s, o.astype(np.float32), rtol=1e-6, atol=1e-6) def test_center_and_transform(): A = np.array([[1, 2, 3], [1, 2, 3.0]]) streamlines = [A for _ in range(10)] streamlines2, center = center_streamlines(streamlines) B = np.zeros((2, 3)) assert_array_equal(streamlines2[0], B) assert_array_equal(center, A[0]) affine = np.eye(4) affine[0, 0] = 2 affine[:3, -1] = -np.array([2, 1, 1]) * center streamlines3 = transform_streamlines(streamlines, affine) assert_array_equal(streamlines3[0], B) @set_random_number_generator() def test_select_random_streamlines(rng): streamlines = [rng.random((10, 3)), rng.random((20, 3)), rng.random((5, 3))] new_streamlines = select_random_set_of_streamlines(streamlines, 2) assert_equal(len(new_streamlines), 2) new_streamlines = select_random_set_of_streamlines(streamlines, 4) assert_equal(len(new_streamlines), 3) def compress_streamlines_python(streamline, tol_error=0.01, max_segment_length=10): """ Python version of the FiberCompression found on https://github.com/scilus/FiberCompression. """ if streamline.shape[0] <= 2: return streamline.copy() # Euclidean distance def segment_length(p1, p2): return np.sqrt(((p1 - p2) ** 2).sum()) # Projection of a 3D point on a 3D line, minimal distance def dist_to_line(p1, p2, p0): return norm(np.cross(p2 - p1, p0 - p2)) / norm(p2 - p1) nb_points = 0 compressed_streamline = np.zeros_like(streamline) # Copy first point since it is always kept. compressed_streamline[0, :] = streamline[0, :] nb_points += 1 p1 = streamline[0] prev_id = 0 for next_id, p2 in enumerate(streamline[2:], start=2): # Euclidean distance between last added point and current point. if segment_length(p1, p2) > max_segment_length: compressed_streamline[nb_points, :] = streamline[next_id - 1, :] nb_points += 1 p1 = streamline[next_id - 1] prev_id = next_id - 1 continue # Check that each point is not offset by more than `tol_error` mm. 
for _, p0 in enumerate(streamline[prev_id + 1 : next_id], start=prev_id + 1): dist = dist_to_line(p1, p2, p0) if np.isnan(dist) or dist > tol_error: compressed_streamline[nb_points, :] = streamline[next_id - 1, :] nb_points += 1 p1 = streamline[next_id - 1] prev_id = next_id - 1 break # Copy last point since it is always kept. compressed_streamline[nb_points, :] = streamline[-1, :] nb_points += 1 # Make sure the array have the correct size return compressed_streamline[:nb_points] def test_compress_streamlines(): for compress_func in [compress_streamlines_python, compress_streamlines]: # Small streamlines (less than two points) are incompressible. for small_streamline in [ np.array([[]]), np.array([[1, 1, 1]]), np.array([[1, 1, 1], [2, 2, 2]]), ]: c_streamline = compress_func(small_streamline) assert_equal(len(c_streamline), len(small_streamline)) assert_array_equal(c_streamline, small_streamline) # Compressing a straight streamline that is less than 10mm long # should output a two points streamline. linear_streamline = np.linspace(0, 5, 100 * 3).reshape((100, 3)) c_streamline = compress_func(linear_streamline) assert_equal(len(c_streamline), 2) assert_array_equal(c_streamline, [linear_streamline[0], linear_streamline[-1]]) # The distance of consecutive points must be less or equal than some # value. max_segment_length = 10 linear_streamline = np.linspace(0, 100, 100 * 3).reshape((100, 3)) linear_streamline[:, 1:] = 0.0 c_streamline = compress_func( linear_streamline, max_segment_length=max_segment_length ) segments_length = np.sqrt((np.diff(c_streamline, axis=0) ** 2).sum(axis=1)) assert_true(np.all(segments_length <= max_segment_length)) assert_equal(len(c_streamline), 12) assert_array_equal(c_streamline, linear_streamline[::9]) # A small `max_segment_length` should keep all points. c_streamline = compress_func(linear_streamline, max_segment_length=0.01) assert_array_equal(c_streamline, linear_streamline) # Test we can set `max_segment_length` to infinity # (like the C++ version) compress_func(streamline, max_segment_length=np.inf) # Incompressible streamline when `tol_error` == 1. simple_streamline = np.array( [[0, 0, 0], [1, 1, 0], [1.5, np.inf, 0], [2, 2, 0], [2.5, 20, 0], [3, 3, 0]] ) # Because of np.inf, compressing that streamline causes a warning. with np.errstate(invalid="ignore"): c_streamline = compress_func(simple_streamline, tol_error=1) assert_array_equal(c_streamline, simple_streamline) # Create a special streamline where every other point is increasingly # farther from a straight line formed by the streamline endpoints. tol_errors = np.linspace(0, 10, 21) orthogonal_line = np.array([[-np.sqrt(2) / 2, np.sqrt(2) / 2, 0]], dtype=np.float32) special_streamline = np.array( [range(len(tol_errors) * 2 + 1)] * 3, dtype=np.float32 ).T special_streamline[1::2] += orthogonal_line * tol_errors[:, None] # # Uncomment to see the streamline. # import pylab as plt # plt.plot(special_streamline[:, 0], special_streamline[:, 1], '.-') # plt.axis('equal'); plt.show() # Test different values for `tol_error`. for i, tol_error in enumerate(tol_errors): cspecial_streamline = compress_streamlines( special_streamline, tol_error=tol_error + 1e-4, max_segment_length=np.inf ) # First and last points should always be the same as the original ones. 
assert_array_equal(cspecial_streamline[0], special_streamline[0]) assert_array_equal(cspecial_streamline[-1], special_streamline[-1]) assert_equal(len(cspecial_streamline), len(special_streamline) - ((i * 2) + 1)) # Make sure Cython and Python versions are the same. cstreamline_python = compress_streamlines_python( special_streamline, tol_error=tol_error + 1e-4, max_segment_length=np.inf ) assert_equal(len(cspecial_streamline), len(cstreamline_python)) assert_array_almost_equal(cspecial_streamline, cstreamline_python) def test_compress_streamlines_identical_points(): sl_1 = np.array([[1, 1, 1], [1, 1, 1], [2, 2, 2], [3, 3, 3], [3, 3, 3]]) sl_2 = np.array([[1, 1, 1], [1, 1, 1], [1, 1, 1], [2, 2, 2]]) sl_3 = np.array( [ [1, 1, 1], [1, 1, 1], [1, 1, 1], [1, 1, 1], [2, 2, 2], [2, 2, 2], [2, 2, 2], [3, 3, 3], [3, 3, 3], ] ) sl_4 = np.array( [ [1, 1, 1], [1, 1, 1], [1, 1, 1], [1, 1, 1], [2, 2, 2], [3, 3, 3], [3, 3, 3], [1, 1, 1], ] ) new_sl_1 = compress_streamlines(sl_1) new_sl_2 = compress_streamlines(sl_2) new_sl_3 = compress_streamlines(sl_3) new_sl_4 = compress_streamlines(sl_4) npt.assert_array_equal(new_sl_1, np.array([[1, 1, 1], [3, 3, 3]])) npt.assert_array_equal(new_sl_2, np.array([[1, 1, 1], [2, 2, 2]])) npt.assert_array_equal(new_sl_3, new_sl_1) npt.assert_array_equal(new_sl_4, np.array([[1, 1, 1], [3, 3, 3], [1, 1, 1]])) @set_random_number_generator(1234) def test_compress_streamlines_memory_leaks(rng): # Test some dtypes dtypes = [np.float32, np.float64, np.int32, np.int64] for dtype in dtypes: s_rng = np.random.default_rng(1234) NB_STREAMLINES = 10000 streamlines = [ s_rng.standard_normal((s_rng.integers(10, 100), 3)).astype(dtype) for _ in range(NB_STREAMLINES) ] list_refcount_before = get_type_refcount()["list"] cstreamlines = compress_streamlines(streamlines) list_refcount_after = get_type_refcount()["list"] del ( cstreamlines ) # Delete `cstreamlines` because it holds a reference to `list` # noqa: F841 # Calling `compress_streamlines` should increase the refcount of `list` # by one since we kept the returned value. assert_equal(list_refcount_after, list_refcount_before + 1) # Test mixed dtypes NB_STREAMLINES = 10000 streamlines = [] for i in range(NB_STREAMLINES): dtype = dtypes[i % len(dtypes)] streamlines.append( rng.standard_normal((rng.integers(10, 100), 3)).astype(dtype) ) list_refcount_before = get_type_refcount()["list"] cstreamlines = compress_streamlines(streamlines) # noqa: F841 list_refcount_after = get_type_refcount()["list"] # Calling `compress_streamlines` should increase the refcount of `list` by # one since we kept the returned value. 
assert_equal(list_refcount_after, list_refcount_before + 1) def generate_sl(streamlines): """ Helper function that takes a sequence and returns a generator Parameters ---------- streamlines : sequence Usually, this would be a list of 2D arrays, representing streamlines Returns ------- generator """ for sl in streamlines: yield sl def test_select_by_rois(): streamlines = [ np.array([[0, 0.0, 0.9], [1.9, 0.0, 0.0]]), np.array([[0.1, 0.0, 0], [0, 1.0, 1.0], [0, 2.0, 2.0]]), np.array([[2, 2, 2], [3, 3, 3]]), ] # Make two ROIs: mask1 = np.zeros((4, 4, 4), dtype=bool) mask2 = np.zeros_like(mask1) mask1[0, 0, 0] = True mask2[1, 0, 0] = True selection = select_by_rois(streamlines, np.eye(4), [mask1], [True], tol=1) assert_arrays_equal(list(selection), [streamlines[0], streamlines[1]]) selection = select_by_rois( streamlines, np.eye(4), [mask1, mask2], [True, True], tol=1 ) assert_arrays_equal(list(selection), [streamlines[0], streamlines[1]]) selection = select_by_rois(streamlines, np.eye(4), [mask1, mask2], [True, False]) assert_arrays_equal(list(selection), [streamlines[1]]) # Setting tolerance too low gets overridden: with warnings.catch_warnings(): warnings.simplefilter("ignore") selection = select_by_rois( streamlines, np.eye(4), [mask1, mask2], [True, False], tol=0.1 ) assert_arrays_equal(list(selection), [streamlines[1]]) selection = select_by_rois( streamlines, np.eye(4), [mask1, mask2], [True, True], tol=0.87 ) assert_arrays_equal(list(selection), [streamlines[1]]) mask3 = np.zeros_like(mask1) mask3[0, 2, 2] = 1 selection = select_by_rois( streamlines, np.eye(4), [mask1, mask2, mask3], [True, True, False], tol=1.0 ) assert_arrays_equal(list(selection), [streamlines[0]]) # Select using only one ROI selection = select_by_rois(streamlines, np.eye(4), [mask1], [True], tol=0.87) assert_arrays_equal(list(selection), [streamlines[1]]) selection = select_by_rois(streamlines, np.eye(4), [mask1], [True], tol=1.0) assert_arrays_equal(list(selection), [streamlines[0], streamlines[1]]) # Use different modes: selection = select_by_rois( streamlines, np.eye(4), [mask1, mask2, mask3], [True, True, False], mode="all", tol=1.0, ) assert_arrays_equal(list(selection), [streamlines[0]]) selection = select_by_rois( streamlines, np.eye(4), [mask1, mask2, mask3], [True, True, False], mode="either_end", tol=1.0, ) assert_arrays_equal(list(selection), [streamlines[0]]) selection = select_by_rois( streamlines, np.eye(4), [mask1, mask2, mask3], [True, True, False], mode="both_end", tol=1.0, ) assert_arrays_equal(list(selection), [streamlines[0]]) mask2[0, 2, 2] = True selection = select_by_rois( streamlines, np.eye(4), [mask1, mask2, mask3], [True, True, False], mode="both_end", tol=1.0, ) assert_arrays_equal(list(selection), [streamlines[0], streamlines[1]]) # Test with generator input: selection = select_by_rois( generate_sl(streamlines), np.eye(4), [mask1], [True], tol=1.0 ) assert_arrays_equal(list(selection), [streamlines[0], streamlines[1]]) def test_orient_by_rois(): streamlines = Streamlines( [ np.array([[0, 0.0, 0], [1, 0.0, 0.0], [2, 0.0, 0.0]]), np.array([[2, 0.0, 0.0], [1, 0.0, 0], [0, 0, 0.0]]), ] ) # Make two ROIs: mask1_vol = np.zeros((4, 4, 4), dtype=bool) mask2_vol = np.zeros_like(mask1_vol) mask1_vol[0, 0, 0] = True mask2_vol[1, 0, 0] = True mask1_coords = np.array(np.where(mask1_vol)).T mask2_coords = np.array(np.where(mask2_vol)).T # If there is an affine, we'll use it: affine = np.eye(4) affine[:, 3] = [-1, 100, -20, 1] # Transform the streamlines: x_streamlines = Streamlines([sl + affine[:3, 3] 
for sl in streamlines]) # After reorientation, this should be the answer: flipped_sl = Streamlines([streamlines[0], streamlines[1][::-1]]) new_streamlines = orient_by_rois( streamlines, np.eye(4), mask1_vol, mask2_vol, in_place=False, as_generator=False ) npt.assert_array_equal(new_streamlines, flipped_sl) npt.assert_(new_streamlines is not streamlines) # Test with affine: x_flipped_sl = Streamlines([s + affine[:3, 3] for s in flipped_sl]) new_streamlines = orient_by_rois( x_streamlines, affine, mask1_vol, mask2_vol, in_place=False, as_generator=False ) npt.assert_array_equal(new_streamlines, x_flipped_sl) npt.assert_(new_streamlines is not x_streamlines) # Test providing coord ROIs instead of vol ROIs: new_streamlines = orient_by_rois( x_streamlines, affine, mask1_coords, mask2_coords, in_place=False, as_generator=False, ) npt.assert_array_equal(new_streamlines, x_flipped_sl) # Test with as_generator set to True new_streamlines = orient_by_rois( streamlines, np.eye(4), mask1_vol, mask2_vol, in_place=False, as_generator=True ) npt.assert_(isinstance(new_streamlines, types.GeneratorType)) ll = Streamlines(new_streamlines) npt.assert_array_equal(ll, flipped_sl) # Test with as_generator set to True and with the affine new_streamlines = orient_by_rois( x_streamlines, affine, mask1_vol, mask2_vol, in_place=False, as_generator=True ) npt.assert_(isinstance(new_streamlines, types.GeneratorType)) ll = Streamlines(new_streamlines) npt.assert_array_equal(ll, x_flipped_sl) # Test with generator input: new_streamlines = orient_by_rois( generate_sl(streamlines), np.eye(4), mask1_vol, mask2_vol, in_place=False, as_generator=True, ) npt.assert_(isinstance(new_streamlines, types.GeneratorType)) ll = Streamlines(new_streamlines) npt.assert_array_equal(ll, flipped_sl) # Generator output cannot take a True `in_place` kwarg: npt.assert_raises( ValueError, orient_by_rois, *[generate_sl(streamlines), np.eye(4), mask1_vol, mask2_vol], **{"in_place": True, "as_generator": True}, ) # But you can input a generator and get a non-generator as output: new_streamlines = orient_by_rois( generate_sl(streamlines), np.eye(4), mask1_vol, mask2_vol, in_place=False, as_generator=False, ) npt.assert_(not isinstance(new_streamlines, types.GeneratorType)) npt.assert_array_equal(new_streamlines, flipped_sl) # Modify in-place: new_streamlines = orient_by_rois( streamlines, np.eye(4), mask1_vol, mask2_vol, in_place=True, as_generator=False ) npt.assert_array_equal(new_streamlines, flipped_sl) # The two objects are one and the same: npt.assert_(new_streamlines is streamlines) def test_orient_by_streamline(): streamlines = Streamlines( [ np.array([[0, 0.0, 0], [1, 0.0, 0.0], [2, 0.0, 0.0]]), np.array([[2, 0.0, 0.0], [1, 0.0, 0], [0, 0, 0.0]]), ] ) # If there is an affine, we'll use it: affine = np.eye(4) affine[:, 3] = [-1, 100, -20, 1] # Transform the streamlines: x_streamlines = Streamlines([sl + affine[:3, 3] for sl in streamlines]) standard_streamline = streamlines[0] # After reorientation, this should be the answer: flipped_sl = Streamlines([streamlines[0], streamlines[1][::-1]]) new_streamlines = orient_by_streamline( streamlines, standard_streamline, n_points=12, in_place=False ) npt.assert_array_equal(new_streamlines, flipped_sl) npt.assert_(new_streamlines is not streamlines) # Test with affine: x_flipped_sl = Streamlines([s + affine[:3, 3] for s in flipped_sl]) new_streamlines = orient_by_streamline( x_streamlines, standard_streamline, in_place=False ) npt.assert_array_equal(new_streamlines, x_flipped_sl) 
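    # Editor's illustrative aside (a sketch, not part of the original suite):
    # the reorientation criterion amounts to flipping a streamline when its
    # reversed form is closer to the standard; here the flip brings the mean
    # point-wise distance down to zero (equal point counts assumed).
    d_direct = np.mean(np.linalg.norm(streamlines[1] - standard_streamline, axis=1))
    d_flipped = np.mean(
        np.linalg.norm(streamlines[1][::-1] - standard_streamline, axis=1)
    )
    npt.assert_(d_flipped < d_direct)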
npt.assert_(new_streamlines is not x_streamlines) # Test with as_generator set to True new_streamlines = orient_by_streamline( streamlines, standard_streamline, in_place=False, as_generator=True ) npt.assert_(isinstance(new_streamlines, types.GeneratorType)) ll = Streamlines(new_streamlines) npt.assert_array_equal(ll, flipped_sl) # Test with as_generator set to True and with the affine new_streamlines = orient_by_streamline( x_streamlines, standard_streamline, in_place=False, as_generator=True ) npt.assert_(isinstance(new_streamlines, types.GeneratorType)) ll = Streamlines(new_streamlines) npt.assert_array_equal(ll, x_flipped_sl) # Modify in-place: new_streamlines = orient_by_streamline( streamlines, standard_streamline, in_place=True ) npt.assert_array_equal(new_streamlines, flipped_sl) # The two objects are one and the same: npt.assert_(new_streamlines is streamlines) def test_values_from_volume(): decimal = 4 data3d = np.arange(2000).reshape(20, 10, 10) # Test two cases of 4D data (handled differently) # One where the last dimension is length 3: data4d_3vec = np.arange(6000).reshape(20, 10, 10, 3) # The other where the last dimension is not 3: data4d_2vec = np.arange(4000).reshape(20, 10, 10, 2) for dt in [np.float32, np.float64]: for data in [data3d, data4d_3vec, data4d_2vec]: sl1 = [ np.array([[1, 0, 0], [1.5, 0, 0], [2, 0, 0], [2.5, 0, 0]]).astype(dt), np.array([[2, 0, 0], [3.1, 0, 0], [3.9, 0, 0], [4.1, 0, 0]]).astype(dt), ] ans1 = [ [ data[1, 0, 0], data[1, 0, 0] + (data[2, 0, 0] - data[1, 0, 0]) / 2, data[2, 0, 0], data[2, 0, 0] + (data[3, 0, 0] - data[2, 0, 0]) / 2, ], [ data[2, 0, 0], data[3, 0, 0] + (data[4, 0, 0] - data[3, 0, 0]) * 0.1, data[3, 0, 0] + (data[4, 0, 0] - data[3, 0, 0]) * 0.9, data[4, 0, 0] + (data[5, 0, 0] - data[4, 0, 0]) * 0.1, ], ] vv = values_from_volume(data, sl1, np.eye(4)) npt.assert_almost_equal(vv, ans1, decimal=decimal) vv = values_from_volume(data, np.array(sl1), np.eye(4)) npt.assert_almost_equal(vv, ans1, decimal=decimal) vv = values_from_volume(data, Streamlines(sl1), np.eye(4)) npt.assert_almost_equal(vv, ans1, decimal=decimal) affine = np.eye(4) affine[:, 3] = [-100, 10, 1, 1] x_sl1 = transform_streamlines(sl1, affine) x_sl2 = transform_streamlines(sl1, affine) vv = values_from_volume(data, x_sl1, affine) npt.assert_almost_equal(vv, ans1, decimal=decimal) x_sl1 = transform_streamlines(sl1, affine) vv = values_from_volume(data, x_sl1, affine) npt.assert_almost_equal(vv, ans1, decimal=decimal) # Test that the streamlines haven't mutated: l_sl2 = list(x_sl2) npt.assert_equal(x_sl1, l_sl2) vv = values_from_volume(data, np.array(x_sl1), affine) npt.assert_almost_equal(vv, ans1, decimal=decimal) npt.assert_equal(np.array(x_sl1), np.array(l_sl2)) # Test for lists of streamlines with different numbers of nodes: sl2 = [sl1[0][:-1], sl1[1]] ans2 = [ans1[0][:-1], ans1[1]] vv = values_from_volume(data, sl2, np.eye(4)) for ii, v in enumerate(vv): npt.assert_almost_equal(v, ans2[ii], decimal=decimal) # We raise an error if the streamlines fed don't make sense. 
    # In this case, a tuple instead of a list, generator or array.
    nonsense_sl = (
        np.array([[1, 0, 0], [1.5, 0, 0], [2, 0, 0], [2.5, 0, 0]]),
        np.array([[2, 0, 0], [3.1, 0, 0], [3.9, 0, 0], [4.1, 0, 0]]),
    )

    npt.assert_raises(RuntimeError, values_from_volume, data, nonsense_sl, np.eye(4))

    # For some use-cases we might have singleton streamlines (with only one
    # node each):
    data3D = np.ones((2, 2, 2))
    streamlines = np.ones((10, 1, 3))
    npt.assert_equal(values_from_volume(data3D, streamlines, np.eye(4)).shape, (10, 1))

    data4D = np.ones((2, 2, 2, 2))
    streamlines = np.ones((10, 1, 3))
    npt.assert_equal(
        values_from_volume(data4D, streamlines, np.eye(4)).shape, (10, 1, 2)
    )


def test_streamlines_generator():
    # Test generator
    streamlines_generator = Streamlines(generate_sl(streamlines))
    npt.assert_equal(len(streamlines_generator), len(streamlines))

    # Nothing should change
    streamlines_generator.append(np.array([]))
    npt.assert_equal(len(streamlines_generator), len(streamlines))

    # Test append error
    npt.assert_raises(
        ValueError, streamlines_generator.append, np.array(streamlines, dtype=object)
    )

    # Test empty streamlines
    streamlines_generator = Streamlines(np.array([]))
    npt.assert_equal(len(streamlines_generator), 0)


def test_cluster_confidence():
    mysl = np.array([np.arange(10)] * 3, "float").T

    # A short streamline (<20 mm) should raise an error unless override=True
    test_streamlines = Streamlines()
    test_streamlines.append(mysl)
    assert_raises(ValueError, cluster_confidence, test_streamlines)
    cci = cluster_confidence(test_streamlines, override=True)

    # Two identical streamlines should raise an error
    test_streamlines = Streamlines()
    test_streamlines.append(mysl, cache_build=True)
    test_streamlines.append(mysl)
    test_streamlines.finalize_append()
    assert_raises(ValueError, cluster_confidence, test_streamlines)

    # 3 offset collinear streamlines
    test_streamlines = Streamlines()
    test_streamlines.append(mysl, cache_build=True)
    test_streamlines.append(mysl + 1)
    test_streamlines.append(mysl + 2)
    test_streamlines.finalize_append()
    cci = cluster_confidence(test_streamlines, override=True)
    assert_almost_equal(cci[0], cci[2])
    assert_true(cci[1] > cci[0])

    # 3 parallel streamlines
    mysl = np.zeros([10, 3])
    mysl[:, 0] = np.arange(10)
    mysl2 = mysl.copy()
    mysl2[:, 1] = 1
    mysl3 = mysl.copy()
    mysl3[:, 1] = 2
    mysl4 = mysl.copy()
    mysl4[:, 1] = 4
    mysl5 = mysl.copy()
    mysl5[:, 1] = 5000

    test_streamlines_p1 = Streamlines()
    test_streamlines_p1.append(mysl, cache_build=True)
    test_streamlines_p1.append(mysl2)
    test_streamlines_p1.append(mysl3)
    test_streamlines_p1.finalize_append()
    test_streamlines_p2 = Streamlines()
    test_streamlines_p2.append(mysl, cache_build=True)
    test_streamlines_p2.append(mysl3)
    test_streamlines_p2.append(mysl4)
    test_streamlines_p2.finalize_append()
    test_streamlines_p3 = Streamlines()
    test_streamlines_p3.append(mysl, cache_build=True)
    test_streamlines_p3.append(mysl2)
    test_streamlines_p3.append(mysl3)
    test_streamlines_p3.append(mysl5)
    test_streamlines_p3.finalize_append()

    cci_p1 = cluster_confidence(test_streamlines_p1, override=True)
    cci_p2 = cluster_confidence(test_streamlines_p2, override=True)

    # Test relative distance
    assert_array_equal(cci_p1, cci_p2 * 2)

    # Test simple cci calculation
    expected_p1 = np.array([1.0 / 1 + 1.0 / 2, 1.0 / 1 + 1.0 / 1, 1.0 / 1 + 1.0 / 2])
    expected_p2 = np.array([1.0 / 2 + 1.0 / 4, 1.0 / 2 + 1.0 / 2, 1.0 / 2 + 1.0 / 4])
    assert_array_equal(expected_p1, cci_p1)
    assert_array_equal(expected_p2, cci_p2)

    # Test power variable calculation (dropoff with distance)
    cci_p1_pow2 =
cluster_confidence(test_streamlines_p1, power=2, override=True) expected_p1_pow2 = np.array( [ np.power(1.0 / 1, 2) + np.power(1.0 / 2, 2), np.power(1.0 / 1, 2) + np.power(1.0 / 1, 2), np.power(1.0 / 1, 2) + np.power(1.0 / 2, 2), ] ) assert_array_equal(cci_p1_pow2, expected_p1_pow2) # test max distance (ignore distant sls) cci_dist = cluster_confidence(test_streamlines_p3, max_mdf=5, override=True) expected_cci_dist = np.concatenate([cci_p1, np.zeros(1)]) assert_array_equal(cci_dist, expected_cci_dist) dipy-1.11.0/dipy/tracking/tests/test_track_volumes.py000066400000000000000000000043121476546756600227710ustar00rootroot00000000000000import numpy as np from numpy.testing import assert_array_equal import dipy.tracking.vox2track as tvo def tracks_to_expected(tracks, vol_dims): # simulate expected behavior of module vol_dims = np.array(vol_dims, dtype=np.int32) counts = np.zeros(vol_dims, dtype=np.int32) elements = {} for t_no, t in enumerate(tracks): u_ps = set() ti = np.round(t).astype(np.int32) for p in ti: if np.any(p < 0): p[p < 0] = 0 too_high = p >= vol_dims if np.any(too_high): p[too_high] = vol_dims[too_high] - 1 p = tuple(p) if p in u_ps: continue u_ps.add(p) val = t_no if counts[p]: elements[p].append(val) else: elements[p] = [val] counts[p] += 1 return counts, elements def test_track_volumes(): # simplest case vol_dims = (1, 2, 3) tracks = ([[0, 0, 0], [0, 1, 1]],) tracks = [np.array(t) for t in tracks] ex_counts, ex_els = tracks_to_expected(tracks, vol_dims) tcs, tes = tvo.track_counts(tracks, vol_dims, vox_sizes=[1, 1, 1]) assert_array_equal(tcs, ex_counts) assert_array_equal(tes, ex_els) # check only counts returned for return_elements=False tcs = tvo.track_counts(tracks, vol_dims, vox_sizes=[1, 1, 1], return_elements=False) assert_array_equal(tcs, ex_counts) # non-unique points, non-integer points, points outside vol_dims = (5, 10, 15) tracks = ( [[-1, 0, 1], [0, 0.1, 0], [1, 1, 1], [1, 1, 1], [2, 2, 2]], [[0.7, 0, 0], [1, 1, 1], [1, 2, 2], [1, 11, 0]], ) tracks = [np.array(t) for t in tracks] ex_counts, ex_els = tracks_to_expected(tracks, vol_dims) tcs, tes = tvo.track_counts(tracks, vol_dims, vox_sizes=[1, 1, 1]) assert_array_equal(tcs, ex_counts) assert_array_equal(tes, ex_els) # points with non-unit voxel sizes vox_sizes = [1.4, 2.1, 3.7] float_tracks = [] for t in tracks: float_tracks.append(t * vox_sizes) tcs, tes = tvo.track_counts(float_tracks, vol_dims, vox_sizes=vox_sizes) assert_array_equal(tcs, ex_counts) assert_array_equal(tes, ex_els) dipy-1.11.0/dipy/tracking/tests/test_tracker.py000066400000000000000000000114471476546756600215550ustar00rootroot00000000000000import warnings import nibabel as nib import numpy as np import numpy.testing as npt from dipy.core.sphere import HemiSphere from dipy.data import get_fnames, get_sphere from dipy.reconst.shm import descoteaux07_legacy_msg, sh_to_sf from dipy.tracking import tracker from dipy.tracking.stopping_criterion import BinaryStoppingCriterion from dipy.tracking.streamline import Streamlines from dipy.tracking.utils import random_seeds_from_mask def track(method, **kwargs): """This tests that the number of streamlines equals the number of seeds when return_all=True. 
""" fnames = get_fnames(name="disco1", include_optional=True) sphere = HemiSphere.from_sphere(get_sphere(name="repulsion724")) sh = nib.load(fnames[20]).get_fdata() fODFs = sh_to_sf(sh, sphere, sh_order_max=12, basis_type="tournier07", legacy=False) fODFs[fODFs < 0] = 0 # seeds position and initial directions mask = nib.load(fnames[25]).get_fdata() sc = BinaryStoppingCriterion(mask) affine = nib.load(fnames[25]).affine seed_mask = np.ones(mask.shape) seeds = random_seeds_from_mask( seed_mask, affine, seeds_count=100, seed_count_per_voxel=False ) directions = np.random.random(seeds.shape) directions = np.array([v / np.linalg.norm(v) for v in directions]) use_sf = kwargs.get("use_sf", False) use_directions = kwargs.get("use_dirs", False) # test return_all=True params = { "max_len": 500, "min_len": 0, "step_size": 0.5, "voxel_size": np.ones(3), "max_angle": 20, "random_seed": 0, "return_all": True, "sf": fODFs if use_sf else None, "sh": sh if not use_sf else None, "seed_directions": directions if use_directions else None, "sphere": sphere, } stream_gen = method(seeds, sc, affine, **params) streamlines = Streamlines(stream_gen) if use_directions: npt.assert_equal(len(streamlines), len(seeds)) else: npt.assert_array_less(len(streamlines), len(seeds)) # test return_all=False params = { "max_len": 500, "min_len": 10, "step_size": 0.5, "voxel_size": np.ones(3), "max_angle": 20, "random_seed": 0, "return_all": False, "sf": fODFs if use_sf else None, "sh": sh if not use_sf else None, "seed_directions": directions if use_directions else None, "sphere": sphere, } stream_gen = method(seeds, sc, affine, **params) streamlines = Streamlines(stream_gen) npt.assert_array_less(len(streamlines), len(seeds)) def test_deterministic_tracking(): with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) track(tracker.deterministic_tracking, use_dirs=True) track(tracker.deterministic_tracking, use_sf=True, use_dirs=True) def test_probabilistic_tracking(): with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) track(tracker.probabilistic_tracking, use_dirs=True) track(tracker.probabilistic_tracking, use_sf=True, use_dirs=True) def test_ptt_tracking(): with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) track(tracker.ptt_tracking) track(tracker.ptt_tracking, use_sf=True, use_dirs=True) def test_tracking_error(): sh = np.array([(64, 61, 57)]) seeds = [np.array([0.0, 0.0, 0.0], "float"), np.array([1.0, 2.0, 3.0], "float")] mask = np.ones((10, 10, 10)) sc = BinaryStoppingCriterion(mask) npt.assert_raises( ValueError, tracker.deterministic_tracking, seeds, sc, np.eye(4), sf=sh, sh=sh ) npt.assert_raises(ValueError, tracker.deterministic_tracking, seeds, sc, np.eye(4)) npt.assert_raises( NotImplementedError, tracker.deterministic_tracking, seeds, sc, np.eye(4), peaks=sh, ) npt.assert_raises( ValueError, tracker.deterministic_tracking, seeds, sc, np.eye(4), sf=sh ) npt.assert_raises( ValueError, tracker.deterministic_tracking, seeds, sc, np.eye(4), sh=sh, seed_directions=1, ) npt.assert_raises( ValueError, tracker.deterministic_tracking, seeds, sc, np.eye(4), sh=sh, seed_directions=[1], ) npt.assert_raises( ValueError, tracker.deterministic_tracking, seeds, sc, np.eye(4), sf=sh, seed_directions=[1], ) 
dipy-1.11.0/dipy/tracking/tests/test_tracking.py000066400000000000000000001300521476546756600217160ustar00rootroot00000000000000import warnings

import nibabel as nib
import numpy as np
import numpy.testing as npt

from dipy.core.gradients import gradient_table
from dipy.core.sphere import HemiSphere, unit_octahedron
from dipy.data import get_fnames, get_sphere
from dipy.direction import (
    BootDirectionGetter,
    ClosestPeakDirectionGetter,
    DeterministicMaximumDirectionGetter,
    PTTDirectionGetter,
    PeaksAndMetrics,
    ProbabilisticDirectionGetter,
)
from dipy.reconst.csdeconv import ConstrainedSphericalDeconvModel
from dipy.reconst.shm import descoteaux07_legacy_msg
from dipy.sims.voxel import multi_tensor, single_tensor
from dipy.testing.decorators import set_random_number_generator
from dipy.tracking.local_tracking import LocalTracking, ParticleFilteringTracking
from dipy.tracking.stopping_criterion import (
    ActStoppingCriterion,
    BinaryStoppingCriterion,
    StreamlineStatus,
    ThresholdStoppingCriterion,
)
from dipy.tracking.streamline import Streamlines
from dipy.tracking.utils import seeds_from_mask


def allclose(x, y, atol=None):
    if atol is not None:
        return x.shape == y.shape and np.allclose(x, y, atol=atol)
    else:
        return x.shape == y.shape and np.allclose(x, y)


def test_stop_conditions():
    """This tests that the Local Tracker behaves as expected for the
    following tissue types.
    """
    # StreamlineStatus.TRACKPOINT = 1
    # StreamlineStatus.ENDPOINT = 2
    # StreamlineStatus.INVALIDPOINT = 0
    tissue = np.array(
        [
            [2, 1, 1, 2, 1],
            [2, 2, 1, 1, 2],
            [1, 1, 1, 1, 1],
            [1, 1, 1, 2, 2],
            [0, 1, 1, 1, 2],
            [0, 1, 1, 0, 2],
            [1, 0, 1, 1, 1],
            [2, 1, 2, 0, 0],
        ]
    )
    tissue = tissue[None]

    sphere = HemiSphere.from_sphere(unit_octahedron)
    pmf_lookup = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
    pmf = pmf_lookup[(tissue > 0).astype("int")]

    # Create seeds along a line
    x = np.array([0.0, 0, 0, 0, 0, 0, 0, 0])
    y = np.array([0.0, 1, 2, 3, 4, 5, 6, 7])
    z = np.array([1.0, 1, 1, 0, 1, 1, 1, 1])
    seeds = np.column_stack([x, y, z])

    # Set up tracking
    endpoint_mask = tissue == StreamlineStatus.ENDPOINT
    invalidpoint_mask = tissue == StreamlineStatus.INVALIDPOINT
    sc = ActStoppingCriterion(endpoint_mask, invalidpoint_mask)
    dg = ProbabilisticDirectionGetter.from_pmf(pmf, 60, sphere)

    # Valid streamlines only
    streamlines_generator = LocalTracking(
        direction_getter=dg,
        stopping_criterion=sc,
        seeds=seeds,
        affine=np.eye(4),
        step_size=1.0,
        return_all=False,
    )
    streamlines_not_all = iter(streamlines_generator)

    # All streamlines
    streamlines_all_generator = LocalTracking(
        direction_getter=dg,
        stopping_criterion=sc,
        seeds=seeds,
        affine=np.eye(4),
        step_size=1.0,
        return_all=True,
    )
    streamlines_all = iter(streamlines_all_generator)

    # Check that the first streamline stops at 1 and 2 (ENDPOINT)
    y = 0
    sl = next(streamlines_not_all)
    npt.assert_equal(sl[0], [0, y, 1])
    npt.assert_equal(sl[-1], [0, y, 2])
    npt.assert_equal(len(sl), 2)

    sl = next(streamlines_all)
    npt.assert_equal(sl[0], [0, y, 1])
    npt.assert_equal(sl[-1], [0, y, 2])
    npt.assert_equal(len(sl), 2)

    # Check that the next streamline stops at 1 and 3 (ENDPOINT)
    y = 1
    sl = next(streamlines_not_all)
    npt.assert_equal(sl[0], [0, y, 1])
    npt.assert_equal(sl[-1], [0, y, 3])
    npt.assert_equal(len(sl), 3)

    sl = next(streamlines_all)
    npt.assert_equal(sl[0], [0, y, 1])
    npt.assert_equal(sl[-1], [0, y, 3])
    npt.assert_equal(len(sl), 3)
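    # Editor's illustrative aside (a sketch, not part of the original suite):
    # the pmf lookup above assigns every in-mask voxel all of its probability
    # mass to a single sphere vertex, which is what constrains these test
    # streamlines to march along one axis; out-of-mask voxels get no mass.
    npt.assert_equal(pmf[0, 0, 0], [0.0, 0.0, 1.0])
    npt.assert_equal(pmf[0, 4, 0], [0.0, 0.0, 0.0])  # tissue == 0 here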
    # This streamline should be the same as above. This row does not have
    # ENDPOINTs, but the streamline should stop at the edge and not include
    # OUTSIDEIMAGE points.
    y = 2
    sl = next(streamlines_not_all)
    npt.assert_equal(sl[0], [0, y, 0])
    npt.assert_equal(sl[-1], [0, y, 4])
    npt.assert_equal(len(sl), 5)

    sl = next(streamlines_all)
    npt.assert_equal(sl[0], [0, y, 0])
    npt.assert_equal(sl[-1], [0, y, 4])
    npt.assert_equal(len(sl), 5)

    # If we seed on the edge, the first (or last) point in the streamline
    # should be the seed.
    y = 3
    sl = next(streamlines_not_all)
    npt.assert_equal(sl[0], seeds[y])

    sl = next(streamlines_all)
    npt.assert_equal(sl[0], seeds[y])

    # The last 3 seeds should not produce streamlines;
    # INVALIDPOINT streamlines are rejected (return_all=False).
    npt.assert_equal(len(list(streamlines_not_all)), 0)

    # The last 3 seeds should produce invalid streamlines;
    # INVALIDPOINT streamlines are kept (return_all=True).
    # The streamline stops at 1 (INVALIDPOINT) and 3 (ENDPOINT)
    y = 4
    sl = next(streamlines_all)
    npt.assert_equal(sl[0], [0, y, 1])
    npt.assert_equal(sl[-1], [0, y, 3])
    npt.assert_equal(len(sl), 3)

    # The streamline stops at 0 (INVALIDPOINT) and 2 (INVALIDPOINT)
    y = 5
    sl = next(streamlines_all)
    npt.assert_equal(sl[0], [0, y, 1])
    npt.assert_equal(sl[-1], [0, y, 2])
    npt.assert_equal(len(sl), 2)

    # The streamline should contain only one point, the seed point,
    # because no valid initial direction was returned.
    y = 6
    sl = next(streamlines_all)
    npt.assert_equal(sl[0], seeds[y])
    npt.assert_equal(sl[-1], seeds[y])
    npt.assert_equal(len(sl), 1)

    # The streamline should contain only one point, the seed point,
    # because there is no valid neighboring voxel (ENDPOINT).
    y = 7
    sl = next(streamlines_all)
    npt.assert_equal(sl[0], seeds[y])
    npt.assert_equal(sl[-1], seeds[y])
    npt.assert_equal(len(sl), 1)


def test_save_seeds():
    tissue = np.array(
        [
            [2, 1, 1, 2, 1],
            [2, 2, 1, 1, 2],
            [1, 1, 1, 1, 1],
            [1, 1, 1, 2, 2],
            [0, 1, 1, 1, 2],
            [0, 1, 1, 0, 2],
            [1, 0, 1, 1, 1],
        ]
    )
    tissue = tissue[None]

    sphere = HemiSphere.from_sphere(unit_octahedron)
    pmf_lookup = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
    pmf = pmf_lookup[(tissue > 0).astype("int")]

    # Create seeds along a line
    x = np.array([0.0, 0, 0, 0, 0, 0, 0])
    y = np.array([0.0, 1, 2, 3, 4, 5, 6])
    z = np.array([1.0, 1, 1, 0, 1, 1, 1])
    seeds = np.column_stack([x, y, z])

    # Set up tracking
    endpoint_mask = tissue == StreamlineStatus.ENDPOINT
    invalidpoint_mask = tissue == StreamlineStatus.INVALIDPOINT
    sc = ActStoppingCriterion(endpoint_mask, invalidpoint_mask)
    dg = ProbabilisticDirectionGetter.from_pmf(pmf, 60, sphere)

    # Valid streamlines only
    streamlines_generator = LocalTracking(
        direction_getter=dg,
        stopping_criterion=sc,
        seeds=seeds,
        affine=np.eye(4),
        step_size=1.0,
        return_all=False,
        save_seeds=True,
    )
    streamlines_not_all = iter(streamlines_generator)

    # Verify that seeds are returned by the LocalTracker
    _, seed = next(streamlines_not_all)
    npt.assert_equal(seed, seeds[0])
    _, seed = next(streamlines_not_all)
    npt.assert_equal(seed, seeds[1])

    # Verify that seeds are also returned by the PFT tracker
    pft_streamlines = ParticleFilteringTracking(
        direction_getter=dg,
        stopping_criterion=sc,
        seeds=seeds,
        affine=np.eye(4),
        step_size=1.0,
        max_cross=1,
        return_all=False,
        save_seeds=True,
    )
    streamlines = iter(pft_streamlines)
    _, seed = next(streamlines)
    npt.assert_equal(seed, seeds[0])
    _, seed = next(streamlines)
    npt.assert_equal(seed, seeds[1])


@set_random_number_generator(0)
def test_tracking_max_angle(rng):
    """This tests that the angle between streamline points is always smaller
    than the input `max_angle` parameter.
""" def get_min_cos_similarity(streamlines): min_cos_sim = 1 for sl in streamlines: if len(sl) > 1: v = sl[:-1] - sl[1:] # vectors have norm of 1 for i in range(len(v) - 1): cos_sim = np.dot(v[i], v[i + 1]) if cos_sim < min_cos_sim: min_cos_sim = cos_sim return min_cos_sim for sphere in [ get_sphere(name="repulsion100"), HemiSphere.from_sphere(get_sphere(name="repulsion100")), ]: shape_img = [5, 5, 5] shape_img.extend([sphere.vertices.shape[0]]) mask = np.ones(shape_img[:3]) affine = np.eye(4) random_pmf = rng.random(shape_img) seeds = seeds_from_mask(mask, affine, density=1) sc = ActStoppingCriterion.from_pve( mask, np.zeros(shape_img[:3]), np.zeros(shape_img[:3]) ) max_angle = 20 step_size = 1 dg = ProbabilisticDirectionGetter.from_pmf( random_pmf, max_angle, sphere, pmf_threshold=0.1 ) # local tracking streamlines = Streamlines(LocalTracking(dg, sc, seeds, affine, step_size)) min_cos_sim = get_min_cos_similarity(streamlines) npt.assert_(np.arccos(min_cos_sim) <= np.deg2rad(max_angle)) # PFT tracking streamlines = Streamlines(ParticleFilteringTracking(dg, sc, seeds, affine, 1.0)) min_cos_sim = get_min_cos_similarity(streamlines) npt.assert_(np.arccos(min_cos_sim) <= np.deg2rad(max_angle)) def test_probabilistic_odf_weighted_tracker(): """This tests that the Probabilistic Direction Getter plays nice LocalTracking and produces reasonable streamlines in a simple example. """ sphere = HemiSphere.from_sphere(unit_octahedron) # A simple image with three possible configurations, a vertical tract, # a horizontal tract and a crossing pmf_lookup = np.array( [[0.0, 0.0, 1.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.6, 0.4, 0.0]] ) simple_image = np.array( [ [0, 1, 0, 0, 0, 0], [0, 1, 0, 0, 0, 0], [0, 3, 2, 2, 2, 0], [0, 1, 0, 0, 0, 0], [0, 1, 0, 0, 0, 0], ] ) simple_image = simple_image[..., None] pmf = pmf_lookup[simple_image] seeds = [np.array([1.0, 1.0, 0.0])] * 30 mask = (simple_image > 0).astype(float) sc = ThresholdStoppingCriterion(mask, 0.5) dg = ProbabilisticDirectionGetter.from_pmf(pmf, 90, sphere, pmf_threshold=0.1) streamlines = LocalTracking(dg, sc, seeds, np.eye(4), 1.0) expected = [ np.array( [ [0.0, 1.0, 0.0], [1.0, 1.0, 0.0], [2.0, 1.0, 0.0], [2.0, 2.0, 0.0], [2.0, 3.0, 0.0], [2.0, 4.0, 0.0], ] ), np.array( [ [0.0, 1.0, 0.0], [1.0, 1.0, 0.0], [2.0, 1.0, 0.0], [3.0, 1.0, 0.0], [4.0, 1.0, 0.0], ] ), ] path = [False, False] for sl in streamlines: if allclose(sl, expected[0]): path[0] = True elif allclose(sl, expected[1]): path[1] = True else: raise AssertionError() npt.assert_(all(path)) # The first path is not possible if 90 degree turns are excluded dg = ProbabilisticDirectionGetter.from_pmf(pmf, 80, sphere, pmf_threshold=0.1) streamlines = LocalTracking(dg, sc, seeds, np.eye(4), 1.0) for sl in streamlines: npt.assert_(np.allclose(sl, expected[1])) # The first path is not possible if pmf_threshold > 0.67 # 0.4/0.6 < 2/3, multiplying the pmf should not change the ratio dg = ProbabilisticDirectionGetter.from_pmf(10 * pmf, 90, sphere, pmf_threshold=0.67) streamlines = LocalTracking(dg, sc, seeds, np.eye(4), 1.0) for sl in streamlines: npt.assert_(np.allclose(sl, expected[1])) # Test non WM seed position seeds = [[0, 0, 0], [5, 5, 5]] streamlines = LocalTracking( dg, sc, seeds, np.eye(4), 0.2, max_cross=1, return_all=True ) streamlines = Streamlines(streamlines) npt.assert_(len(streamlines[0]) == 1) # INVALIDPOINT npt.assert_(len(streamlines[1]) == 1) # OUTSIDEIMAGE # Test that all points are within the image volume seeds = seeds_from_mask(np.ones(mask.shape), np.eye(4), density=2) 
streamline_generator = LocalTracking(dg, sc, seeds, np.eye(4), 0.5, return_all=True) streamlines = Streamlines(streamline_generator) for s in streamlines: npt.assert_(np.all((s + 0.5).astype(int) >= 0)) npt.assert_(np.all((s + 0.5).astype(int) < mask.shape)) # Test that the number of streamline return with return_all=True equal the # number of seeds places npt.assert_(np.array([len(streamlines) == len(seeds)])) # Test reproducibility tracking_1 = Streamlines( LocalTracking(dg, sc, seeds, np.eye(4), 0.5, random_seed=0) )._data tracking_2 = Streamlines( LocalTracking(dg, sc, seeds, np.eye(4), 0.5, random_seed=0) )._data npt.assert_equal(tracking_1, tracking_2) @set_random_number_generator(0) def test_particle_filtering_tractography(rng): """This tests that the ParticleFilteringTracking produces more streamlines connecting the gray matter than LocalTracking. """ sphere = get_sphere(name="repulsion100") step_size = 0.2 # Simple tissue masks simple_wm = np.array( [ [0, 0, 0, 0, 0, 0], [0, 0, 1, 0, 0, 0], [0, 1, 1, 1, 0, 0], [0, 1, 1, 1, 0, 0], [0, 0, 0, 0, 0, 0], ] ) simple_wm = np.dstack( [ np.zeros(simple_wm.shape), simple_wm, simple_wm, simple_wm, np.zeros(simple_wm.shape), ] ) simple_gm = np.array( [ [1, 1, 0, 0, 0, 0], [1, 1, 0, 0, 0, 0], [0, 1, 0, 0, 1, 0], [0, 0, 0, 0, 1, 0], [0, 0, 0, 0, 0, 0], ] ) simple_gm = np.dstack( [ np.zeros(simple_gm.shape), simple_gm, simple_gm, simple_gm, np.zeros(simple_gm.shape), ] ) simple_csf = np.ones(simple_wm.shape) - simple_wm - simple_gm sc = ActStoppingCriterion.from_pve(simple_wm, simple_gm, simple_csf) seeds = seeds_from_mask(simple_wm, np.eye(4), density=2) # Random pmf in every voxel shape_img = list(simple_wm.shape) shape_img.extend([sphere.vertices.shape[0]]) pmf = rng.random(shape_img) # Test that PFT recover equal or more streamlines than localTracking dg = ProbabilisticDirectionGetter.from_pmf(pmf, 60, sphere) local_streamlines_generator = LocalTracking( dg, sc, seeds, np.eye(4), step_size, max_cross=1, return_all=False ) local_streamlines = Streamlines(local_streamlines_generator) pft_streamlines_generator = ParticleFilteringTracking( dg, sc, seeds, np.eye(4), step_size, max_cross=1, return_all=False, pft_back_tracking_dist=1, pft_front_tracking_dist=0.5, ) pft_streamlines = Streamlines(pft_streamlines_generator) npt.assert_(np.array([len(pft_streamlines) > 0])) npt.assert_(np.array([len(pft_streamlines) >= len(local_streamlines)])) # Test PFT with a PTT direction getter dg_ptt = PTTDirectionGetter.from_pmf(pmf, 60, sphere) pft_ptt_streamlines = Streamlines( ParticleFilteringTracking( dg_ptt, sc, seeds, np.eye(4), step_size, max_cross=1, return_all=False, pft_back_tracking_dist=1, pft_front_tracking_dist=0.5, ) ) npt.assert_(np.array([len(pft_ptt_streamlines) > 0])) npt.assert_(np.array([len(pft_ptt_streamlines) >= len(local_streamlines)])) # Test that all points are equally spaced for ell in [2, 3, 5, 10, 100]: pft_streamlines = ParticleFilteringTracking( dg, sc, seeds, np.eye(4), step_size, max_cross=1, return_all=True, maxlen=ell, ) for s in pft_streamlines: for i in range(len(s) - 1): npt.assert_almost_equal(np.linalg.norm(s[i] - s[i + 1]), step_size) # Test that all points are within the image volume seeds = seeds_from_mask(np.ones(simple_wm.shape), np.eye(4), density=1) pft_streamlines_generator = ParticleFilteringTracking( dg, sc, seeds, np.eye(4), step_size, max_cross=1, return_all=True ) pft_streamlines = Streamlines(pft_streamlines_generator) for s in pft_streamlines: npt.assert_(np.all((s + 0.5).astype(int) >= 0)) 
npt.assert_(np.all((s + 0.5).astype(int) < simple_wm.shape)) # Test that the number of streamline return with return_all=True equal the # number of seeds places npt.assert_(np.array([len(pft_streamlines) == len(seeds)])) # Test min and max length pft_streamlines_generator = ParticleFilteringTracking( dg, sc, seeds, np.eye(4), step_size, maxlen=20, minlen=3, return_all=False ) pft_streamlines = Streamlines(pft_streamlines_generator) for s in pft_streamlines: npt.assert_(len(s) >= 3) npt.assert_(len(s) <= 20) # Test non WM seed position seeds = [[0, 5, 4], [0, 0, 1], [50, 50, 50]] pft_streamlines_generator = ParticleFilteringTracking( dg, sc, seeds, np.eye(4), step_size, max_cross=1, return_all=True ) pft_streamlines = Streamlines(pft_streamlines_generator) npt.assert_equal(len(pft_streamlines[0]), 3) # INVALIDPOINT npt.assert_equal(len(pft_streamlines[1]), 3) # ENDPOINT npt.assert_equal(len(pft_streamlines[2]), 1) # OUTSIDEIMAGE # Test with wrong StoppingCriterion type sc_bin = BinaryStoppingCriterion(simple_wm) npt.assert_raises( ValueError, lambda: ParticleFilteringTracking(dg, sc_bin, seeds, np.eye(4), step_size), ) # Test with invalid back/front tracking distances npt.assert_raises( ValueError, lambda: ParticleFilteringTracking( dg, sc, seeds, np.eye(4), step_size, pft_back_tracking_dist=0, pft_front_tracking_dist=0, ), ) npt.assert_raises( ValueError, lambda: ParticleFilteringTracking( dg, sc, seeds, np.eye(4), step_size, pft_back_tracking_dist=-1 ), ) npt.assert_raises( ValueError, lambda: ParticleFilteringTracking( dg, sc, seeds, np.eye(4), step_size, pft_back_tracking_dist=0, pft_front_tracking_dist=-2, ), ) # Test with invalid affine shape npt.assert_raises( ValueError, lambda: ParticleFilteringTracking(dg, sc, seeds, np.eye(3), step_size), ) # Test with invalid maxlen npt.assert_raises( ValueError, lambda: ParticleFilteringTracking( dg, sc, seeds, np.eye(4), step_size, maxlen=0 ), ) npt.assert_raises( ValueError, lambda: ParticleFilteringTracking( dg, sc, seeds, np.eye(4), step_size, maxlen=-1 ), ) # Test with invalid particle count npt.assert_raises( ValueError, lambda: ParticleFilteringTracking( dg, sc, seeds, np.eye(4), step_size, particle_count=0 ), ) npt.assert_raises( ValueError, lambda: ParticleFilteringTracking( dg, sc, seeds, np.eye(4), step_size, particle_count=-1 ), ) # Test reproducibility tracking1 = Streamlines( ParticleFilteringTracking(dg, sc, seeds, np.eye(4), step_size, random_seed=0) )._data tracking2 = Streamlines( ParticleFilteringTracking(dg, sc, seeds, np.eye(4), step_size, random_seed=0) )._data npt.assert_equal(tracking1, tracking2) # Test min_wm_pve_before_stopping parameter expected = [ np.array([[1.0, 0.0, 1.0], [1.0, 1.0, 1.0], [1.0, 2.0, 1.0]]), np.array( [ [1.0, 0.0, 1.0], [1.0, 1.0, 1.0], [1.0, 2.0, 1.0], [1.0, 3.0, 1.0], [1.0, 4.0, 1.0], ] ), ] simple_wm = np.array( [[0, 0, 0, 0, 0, 0], [0, 0.4, 0.4, 1, 0.4, 0], [0, 0, 0, 0, 0, 0]] ) simple_wm = np.dstack( [ np.zeros(simple_wm.shape), simple_wm, simple_wm, simple_wm, np.zeros(simple_wm.shape), ] ) simple_gm = np.array( [[0, 0, 0, 0, 0, 0], [1, 0.6, 0.6, 0, 0.6, 1], [0, 0, 0, 0, 0, 0]] ) simple_gm = np.dstack( [ np.zeros(simple_gm.shape), simple_gm, simple_gm, simple_gm, np.zeros(simple_gm.shape), ] ) simple_csf = np.ones(simple_wm.shape) - simple_wm - simple_gm sc = ActStoppingCriterion.from_pve(simple_wm, simple_gm, simple_csf) seeds = np.array([[1, 1, 1]]) sphere = HemiSphere.from_sphere(unit_octahedron) pmf = np.zeros(list(simple_gm.shape) + [3]) pmf[:, :, :, 1] = 1 # horizontal bundle dg = 
ProbabilisticDirectionGetter.from_pmf(pmf, 30, sphere) pft_streamlines_generator = ParticleFilteringTracking( dg, sc, seeds, np.eye(4), step_size=1, max_cross=1, return_all=True, min_wm_pve_before_stopping=0, ) pft_streamlines = Streamlines(pft_streamlines_generator) npt.assert_(np.allclose(pft_streamlines[0], expected[0])) pft_streamlines_generator = ParticleFilteringTracking( dg, sc, seeds, np.eye(4), step_size=1, max_cross=1, return_all=True, min_wm_pve_before_stopping=1, ) pft_streamlines = Streamlines(pft_streamlines_generator) npt.assert_(np.allclose(pft_streamlines[0], expected[1])) # Test invalid min_wm_pve_before_stopping parameters npt.assert_raises( ValueError, lambda: ParticleFilteringTracking( dg, sc, seeds, np.eye(4), step_size, min_wm_pve_before_stopping=-1 ), ) npt.assert_raises( ValueError, lambda: ParticleFilteringTracking( dg, sc, seeds, np.eye(4), step_size, min_wm_pve_before_stopping=2 ), ) def test_maximum_deterministic_tracker(): """This tests that the Maximum Deterministic Direction Getter plays nice LocalTracking and produces reasonable streamlines in a simple example. """ sphere = HemiSphere.from_sphere(unit_octahedron) # A simple image with three possible configurations, a vertical tract, # a horizontal tract and a crossing pmf_lookup = np.array( [[0.0, 0.0, 1.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.4, 0.6, 0.0]] ) simple_image = np.array( [ [0, 1, 0, 0, 0, 0], [0, 1, 0, 0, 0, 0], [0, 3, 2, 2, 2, 0], [0, 1, 0, 0, 0, 0], [0, 1, 0, 0, 0, 0], ] ) simple_image = simple_image[..., None] pmf = pmf_lookup[simple_image] seeds = [np.array([1.0, 1.0, 0.0])] * 30 mask = (simple_image > 0).astype(float) sc = ThresholdStoppingCriterion(mask, 0.5) dg = DeterministicMaximumDirectionGetter.from_pmf( pmf, max_angle=100, sphere=sphere, pmf_threshold=0.1 ) streamlines = LocalTracking(dg, sc, seeds, np.eye(4), 1.0) expected = [ np.array( [ [0.0, 1.0, 0.0], [1.0, 1.0, 0.0], [2.0, 1.0, 0.0], [2.0, 2.0, 0.0], [2.0, 3.0, 0.0], [2.0, 4.0, 0.0], ] ), np.array( [ [0.0, 1.0, 0.0], [1.0, 1.0, 0.0], [2.0, 1.0, 0.0], [3.0, 1.0, 0.0], [4.0, 1.0, 0.0], ] ), np.array([[0.0, 1.0, 0.0], [1.0, 1.0, 0.0], [2.0, 1.0, 0.0]]), ] for sl in streamlines: if not allclose(sl, expected[0]): raise AssertionError() # The first path is not possible if 90 degree turns are excluded dg = DeterministicMaximumDirectionGetter.from_pmf( pmf, 80, sphere, pmf_threshold=0.1 ) streamlines = LocalTracking(dg, sc, seeds, np.eye(4), 1.0) for sl in streamlines: npt.assert_(np.allclose(sl, expected[1])) # Both path are not possible if 90 degree turns are exclude and # if pmf_threshold is larger than 0.67. Streamlines should stop at # the crossing. # 0.4/0.6 < 2/3, multiplying the pmf should not change the ratio dg = DeterministicMaximumDirectionGetter.from_pmf( 10 * pmf, 80, sphere, pmf_threshold=0.67 ) streamlines = LocalTracking(dg, sc, seeds, np.eye(4), 1.0) for sl in streamlines: npt.assert_(np.allclose(sl, expected[2])) def test_bootstap_peak_tracker(): """This tests that the Bootstrap Peak Direction Getter plays nice LocalTracking and produces reasonable streamlines in a simple example. 
""" sphere = get_sphere(name="repulsion100") # A simple image with three possible configurations, a vertical tract, # a horizontal tract and a crossing simple_image = np.array( [ [0, 1, 0, 0, 0, 0], [0, 1, 0, 0, 0, 0], [2, 3, 2, 2, 2, 0], [0, 1, 0, 0, 0, 0], [0, 1, 0, 0, 0, 0], ] ) simple_image = simple_image[..., None] bvecs = sphere.vertices bvals = np.ones(len(bvecs)) * 1000 bvecs = np.insert(bvecs, 0, np.array([0, 0, 0]), axis=0) bvals = np.insert(bvals, 0, 0) gtab = gradient_table(bvals, bvecs=bvecs) angles = [(90, 90), (90, 0)] fracs = [50, 50] mevals = np.array([[1.5, 0.4, 0.4], [1.5, 0.4, 0.4]]) * 1e-3 mevecs = [ np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]]), np.array([[0, 1, 0], [1, 0, 0], [0, 0, 1]]), ] voxel1 = single_tensor(gtab, 1, evals=mevals[0], evecs=mevecs[0], snr=None) voxel2 = single_tensor(gtab, 1, evals=mevals[0], evecs=mevecs[1], snr=None) voxel3, _ = multi_tensor(gtab, mevals, fractions=fracs, angles=angles, snr=None) data = np.tile(voxel3, [5, 6, 1, 1]) data[simple_image == 1] = voxel1 data[simple_image == 2] = voxel2 response = (np.array(mevals[1]), 1) with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) csd_model = ConstrainedSphericalDeconvModel(gtab, response, sh_order_max=6) seeds = [np.array([0.0, 1.0, 0.0]), np.array([2.0, 4.0, 0.0])] sc = BinaryStoppingCriterion((simple_image > 0).astype(float)) sphere = HemiSphere.from_sphere(get_sphere(name="symmetric724")) with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) boot_dg = BootDirectionGetter.from_data(data, csd_model, 60, sphere=sphere) streamlines_generator = LocalTracking(boot_dg, sc, seeds, np.eye(4), 1.0) with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) streamlines = Streamlines(streamlines_generator) expected = [ np.array( [ [0.0, 1.0, 0.0], [1.0, 1.0, 0.0], [2.0, 1.0, 0.0], [3.0, 1.0, 0.0], [4.0, 1.0, 0.0], ] ), np.array( [ [2.0, 4.0, 0.0], [2.0, 3.0, 0.0], [2.0, 2.0, 0.0], [2.0, 1.0, 0.0], [2.0, 0.0, 0.0], ] ), ] if not allclose(streamlines[0], expected[0], atol=0.5): raise AssertionError() if not allclose(streamlines[1], expected[1], atol=0.5): raise AssertionError() def test_closest_peak_tracker(): """This tests that the Closest Peak Direction Getter plays nice LocalTracking and produces reasonable streamlines in a simple example. 
""" sphere = HemiSphere.from_sphere(unit_octahedron) # A simple image with three possible configurations, a vertical tract, # a horizontal tract and a crossing pmf_lookup = np.array( [[0.0, 0.0, 1.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.5, 0.5, 0.0]] ) simple_image = np.array( [ [0, 1, 0, 0, 0, 0], [0, 1, 0, 0, 0, 0], [2, 3, 2, 2, 2, 0], [0, 1, 0, 0, 0, 0], [0, 1, 0, 0, 0, 0], ] ) simple_image = simple_image[..., None] pmf = pmf_lookup[simple_image] seeds = [np.array([1.0, 1.0, 0.0]), np.array([2.0, 4.0, 0.0])] mask = (simple_image > 0).astype(float) sc = BinaryStoppingCriterion(mask) dg = ClosestPeakDirectionGetter.from_pmf(pmf, 90, sphere, pmf_threshold=0.1) streamlines = Streamlines(LocalTracking(dg, sc, seeds, np.eye(4), 1.0)) expected = [ np.array( [ [0.0, 1.0, 0.0], [1.0, 1.0, 0.0], [2.0, 1.0, 0.0], [3.0, 1.0, 0.0], [4.0, 1.0, 0.0], ] ), np.array( [ [2.0, 0.0, 0.0], [2.0, 1.0, 0.0], [2.0, 2.0, 0.0], [2.0, 3.0, 0.0], [2.0, 4.0, 0.0], ] ), ] if not allclose(streamlines[0], expected[0]): raise AssertionError() if not allclose(streamlines[1], expected[1]): raise AssertionError() def test_eudx_tracker(): """This tests that the Peaks And Metrics Direction Getter plays nice LocalTracking and produces reasonable streamlines in a simple example. """ sphere = HemiSphere.from_sphere(unit_octahedron) # A simple image with three possible configurations, a vertical tract, # a horizontal tract and a crossing peaks_values_lookup = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 0.0], [0.5, 0.5]]) peaks_indices_lookup = np.array([[-1, -1], [0, -1], [1, -1], [0, 1]]) # EuDXDirectionGetter needs at 3 slices on each axis to work simple_image = np.zeros([5, 6, 3], dtype=int) simple_image[:, :, 1] = np.array( [ [0, 1, 0, 1, 0, 0], [0, 1, 0, 1, 0, 0], [0, 3, 2, 2, 2, 0], [0, 1, 0, 0, 0, 0], [0, 1, 0, 0, 0, 0], ] ) dg = PeaksAndMetrics() dg.sphere = sphere dg.peak_values = peaks_values_lookup[simple_image] dg.peak_indices = peaks_indices_lookup[simple_image] dg.ang_thr = 90 mask = (simple_image >= 0).astype(float) sc = ThresholdStoppingCriterion(mask, 0.5) seeds = [ np.array([1.0, 1.0, 1.0]), np.array([2.0, 4.0, 1.0]), np.array([1.0, 3.0, 1.0]), np.array([4.0, 4.0, 1.0]), ] streamlines = LocalTracking(dg, sc, seeds, np.eye(4), 1.0) expected = [ np.array( [ [0.0, 1.0, 1.0], [1.0, 1.0, 1.0], [2.0, 1.0, 1.0], [3.0, 1.0, 1.0], [4.0, 1.0, 1.0], ] ), np.array( [ [2.0, 0.0, 1.0], [2.0, 1.0, 1.0], [2.0, 2.0, 1.0], [2.0, 3.0, 1.0], [2.0, 4.0, 1.0], [2.0, 5.0, 1.0], ] ), np.array( [ [0.0, 3.0, 1.0], [1.0, 3.0, 1.0], [2.0, 3.0, 1.0], [2.0, 4.0, 1.0], [2.0, 5.0, 1.0], ] ), np.array([[4.0, 4.0, 1.0]]), ] for i, sl in enumerate(streamlines): npt.assert_(np.allclose(sl, expected[i])) def test_affine_transformations(): """This tests that the input affine is properly handled by LocalTracking and produces reasonable streamlines in a simple example. 
""" sphere = HemiSphere.from_sphere(unit_octahedron) # A simple image with three possible configurations, a vertical tract, # a horizontal tract and a crossing pmf_lookup = np.array( [[0.0, 0.0, 1.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.4, 0.6, 0.0]] ) simple_image = np.array( [ [0, 0, 0, 0, 0, 0], [0, 1, 0, 0, 0, 0], [0, 3, 2, 2, 2, 0], [0, 1, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0], ] ) simple_image = simple_image[..., None] pmf = pmf_lookup[simple_image] seeds = [np.array([1.0, 1.0, 0.0]), np.array([2.0, 4.0, 0.0])] expected = [ np.array([[1.0, 1.0, 0.0], [2.0, 1.0, 0.0], [3.0, 1.0, 0.0]]), np.array([[2.0, 1.0, 0.0], [2.0, 2.0, 0.0], [2.0, 3.0, 0.0], [2.0, 4.0, 0.0]]), ] mask = (simple_image > 0).astype(float) sc = BinaryStoppingCriterion(mask) dg = DeterministicMaximumDirectionGetter.from_pmf( pmf, 60, sphere, pmf_threshold=0.1 ) # TST- bad affine wrong shape bad_affine = np.eye(3) npt.assert_raises(ValueError, LocalTracking, dg, sc, seeds, bad_affine, 1.0) # TST - bad affine with shearing bad_affine = np.eye(4) bad_affine[0, 1] = 1.0 npt.assert_raises(ValueError, LocalTracking, dg, sc, seeds, bad_affine, 1.0) # TST - bad seeds bad_seeds = 1000 npt.assert_raises(ValueError, LocalTracking, dg, sc, bad_seeds, np.eye(4), 1.0) # TST - identity a0 = np.eye(4) # TST - affines with positive/negative offsets a1 = np.eye(4) a1[:3, 3] = [1, 2, 3] a2 = np.eye(4) a2[:3, 3] = [-2, 0, -1] # TST - affine with scaling a3 = np.eye(4) a3[0, 0] = a3[1, 1] = a3[2, 2] = 8 # TST - affine with axes inverting (negative value) a4 = np.eye(4) a4[1, 1] = a4[2, 2] = -1 # TST - combined affines a5 = a1 + a2 + a3 a5[3, 3] = 1 # TST - in vivo affine example # Sometimes data have affines with tiny shear components. # For example, the small_101D data-set has some of that: fdata, _, _ = get_fnames(name="small_101D") a6 = nib.load(fdata).affine for affine in [a0, a1, a2, a3, a4, a5, a6]: lin = affine[:3, :3] offset = affine[:3, 3] seeds_trans = [np.dot(lin, s) + offset for s in seeds] # We compute the voxel size to adjust the step size to one voxel voxel_size = np.mean(np.sqrt(np.dot(lin, lin).diagonal())) streamlines = LocalTracking( direction_getter=dg, stopping_criterion=sc, seeds=seeds_trans, affine=affine, step_size=voxel_size, return_all=True, ) # We apply the inverse affine transformation to the generated # streamlines. It should be equals to the expected streamlines # (generated with the identity affine matrix). affine_inv = np.linalg.inv(affine) lin = affine_inv[:3, :3] offset = affine_inv[:3, 3] streamlines_inv = [] for line in streamlines: streamlines_inv.append([np.dot(pts, lin) + offset for pts in line]) npt.assert_equal(len(streamlines_inv[0]), len(expected[0])) npt.assert_(np.allclose(streamlines_inv[0], expected[0], atol=0.3)) npt.assert_equal(len(streamlines_inv[1]), len(expected[1])) npt.assert_(np.allclose(streamlines_inv[1], expected[1], atol=0.3)) @set_random_number_generator() def test_random_seed_initialization(rng): """Test that the random generator can be initialized correctly with the tracking seeds. 
""" sphere = HemiSphere.from_sphere(unit_octahedron) pmf = np.zeros((4, 4, 4, 3)) x = np.array([0.0, 0, 0, 57.421434502602544]) y = np.array([0.0, 1, 2, 21.566539227085478]) z = np.array([1.0, 1, 1, 51.67881720942744]) seeds = np.vstack([np.column_stack([x, y, z]), rng.random((10, 3))]) sc = BinaryStoppingCriterion(np.ones((4, 4, 4))) dg = ProbabilisticDirectionGetter.from_pmf(pmf, 60, sphere) randoms_seeds = ( [None, 0, 1, -1, np.iinfo(np.uint32).max + 1] + list(rng.random(10)) + list(rng.integers(0, np.iinfo(np.int32).max, 10)) ) for rdm_seed in randoms_seeds: _ = Streamlines( LocalTracking( direction_getter=dg, stopping_criterion=sc, seeds=seeds, affine=np.eye(4), step_size=1.0, random_seed=rdm_seed, ) ) def test_tracking_with_initial_directions(): """This tests that tractography play well with using seeding directions.""" def allclose(x, y): return x.shape == y.shape and np.allclose(x, y) sphere = HemiSphere.from_sphere(unit_octahedron) # A simple image with three possible configurations, a vertical tract, # a horizontal tract and a crossing pmf_lookup = np.array( [[0.0, 0.0, 1.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.6, 0.4, 0.0]] ) simple_image = np.array( [ [0, 1, 0, 0, 0, 0], [0, 1, 0, 0, 0, 0], [2, 3, 2, 2, 2, 2], [0, 1, 0, 0, 0, 0], [0, 1, 0, 0, 0, 0], ] ) simple_image = simple_image[..., None] simple_wm = np.array( [ [0, 1, 0, 0, 0, 0], [0, 1, 0, 0, 0, 0], [1, 1, 1, 1, 1, 1], [0, 1, 0, 0, 0, 0], [0, 1, 0, 0, 0, 0], ] ) simple_wm = simple_wm[..., None] simple_csf = np.ones(simple_wm.shape) - simple_wm simple_gm = np.zeros(simple_wm.shape) pmf = pmf_lookup[simple_image] mask = (simple_image > 0).astype(float) expected = [ np.array( [ [0.0, 1.0, 0.0], [1.0, 1.0, 0.0], [2.0, 1.0, 0.0], [2.0, 2.0, 0.0], [2.0, 3.0, 0.0], [2.0, 4.0, 0.0], [2.0, 5.0, 0.0], ] ), np.array( [ [0.0, 1.0, 0.0], [1.0, 1.0, 0.0], [2.0, 1.0, 0.0], [3.0, 1.0, 0.0], [4.0, 1.0, 0.0], ] ), np.array( [ [2.0, 0.0, 0.0], [2.0, 1.0, 0.0], [2.0, 2.0, 0.0], [2.0, 3.0, 0.0], [2.0, 4.0, 0.0], [2.0, 5.0, 0.0], ] ), ] # Test LocalTracking with initial directions sc = ThresholdStoppingCriterion(mask, 0.5) dg = ProbabilisticDirectionGetter.from_pmf(pmf, 45, sphere, pmf_threshold=0.1) crossing_pos = np.array([2, 1, 0]) seeds = np.array([crossing_pos, crossing_pos, crossing_pos]) initial_directions = np.array( [ [sphere.vertices[0], [0, 0, 0]], [sphere.vertices[1], sphere.vertices[0]], [[0, 0, 0], [0, 0, 0]], ] ) # with max_cross=1 streamline_generator = LocalTracking( dg, sc, seeds, np.eye(4), 1, max_cross=1, return_all=True, initial_directions=initial_directions, ) streamlines = Streamlines(streamline_generator) npt.assert_(allclose(streamlines[0], expected[1])) npt.assert_(allclose(streamlines[1], expected[2])) npt.assert_(allclose(streamlines[2], np.array([crossing_pos]))) # with max_cross=2 streamline_generator = LocalTracking( dg, sc, seeds, np.eye(4), 1, max_cross=2, return_all=True, initial_directions=initial_directions, ) streamlines = Streamlines(streamline_generator) npt.assert_(allclose(streamlines[0], expected[1])) npt.assert_(allclose(streamlines[1], expected[2])) npt.assert_(allclose(streamlines[2], expected[1])) npt.assert_(allclose(streamlines[3], np.array([crossing_pos]))) # Test initial_directions with norm != 1 and not sphere vertices initial_directions = np.array( [ [[0, 0, 0], [2, 0, 0]], [[0.1, 0.8, 0], [-0.4, 0, 0]], [[0, 0, 0], [0.7, 0.6, -0.1]], ] ) streamline_generator = LocalTracking( dg, sc, seeds, np.eye(4), 1, max_cross=2, return_all=False, initial_directions=initial_directions, ) streamlines = 
Streamlines(streamline_generator) npt.assert_(allclose(streamlines[0], expected[1])) npt.assert_(allclose(streamlines[1], expected[2])) npt.assert_(allclose(streamlines[2], expected[1][::-1])) npt.assert_(allclose(streamlines[3], expected[1])) # Test dimension mismatch between seeds and initial_directions npt.assert_raises( ValueError, lambda: LocalTracking( dg, sc, seeds, np.eye(4), 0.2, max_cross=1, return_all=True, initial_directions=initial_directions[:1, :, :], ), ) # Test warning is raised for possible directional biases npt.assert_warns( Warning, lambda: LocalTracking( dg, sc, seeds, np.eye(4), 1, max_cross=2, return_all=False, unidirectional=True, randomize_forward_direction=False, initial_directions=None, ), ) # Test ParticleFilteringTracking with initial directions dg = ProbabilisticDirectionGetter.from_pmf(pmf, 45, sphere, pmf_threshold=0.1) sc = ActStoppingCriterion.from_pve(simple_wm, simple_gm, simple_csf) crossing_pos = np.array([2, 1, 0]) seeds = np.array([crossing_pos, crossing_pos, crossing_pos]) initial_directions = np.array( [ [sphere.vertices[0], [0, 0, 0]], [sphere.vertices[1], sphere.vertices[0]], [[0, 0, 0], [0, 0, 0]], ] ) streamline_generator = ParticleFilteringTracking( dg, sc, seeds, np.eye(4), 1, return_all=True, initial_directions=initial_directions, ) streamlines = Streamlines(streamline_generator) npt.assert_(allclose(streamlines[0], expected[1])) npt.assert_(allclose(streamlines[1], expected[2])) npt.assert_(allclose(streamlines[2], expected[1])) npt.assert_(allclose(streamlines[3], np.array([crossing_pos]))) # Test unidirectional tracking with initial directions initial_directions = np.array( [[[1, 0, 0], [0, 0, 0]], [[-1, 0, 0], [0, 0, 0]], [[0, -1, 0], [0, 1, 0]]] ) streamline_generator = LocalTracking( dg, sc, seeds, np.eye(4), 1, max_cross=2, return_all=False, unidirectional=True, initial_directions=initial_directions, ) streamlines = Streamlines(streamline_generator) npt.assert_(allclose(streamlines[0], expected[1][2:])) npt.assert_(allclose(streamlines[1], expected[1][:3][::-1])) npt.assert_(allclose(streamlines[2], expected[2][:2][::-1])) npt.assert_(allclose(streamlines[3], expected[2][1:])) streamline_generator = ParticleFilteringTracking( dg, sc, seeds, np.eye(4), 1, return_all=True, unidirectional=True, initial_directions=initial_directions, ) streamlines = Streamlines(streamline_generator) npt.assert_(allclose(streamlines[0], expected[1][2:])) npt.assert_(allclose(streamlines[1], expected[1][:3][::-1])) npt.assert_(allclose(streamlines[2], expected[2][:2][::-1])) npt.assert_(allclose(streamlines[3], expected[2][1:])) # Test randomized initial forward direction seeds = np.array([crossing_pos] * 30) initial_directions = np.array([np.array([1, 0, 0])] * 30)[:, np.newaxis, :] streamline_generator = LocalTracking( dg, sc, seeds, np.eye(4), 1, max_cross=2, return_all=False, unidirectional=True, randomize_forward_direction=True, initial_directions=initial_directions, ) streamlines = Streamlines(streamline_generator) for sl in streamlines: npt.assert_( np.allclose(sl, expected[1][2:]) or np.allclose(sl, expected[1][:3][::-1]) ) dipy-1.11.0/dipy/tracking/tests/test_tractogen.pyx000066400000000000000000000343061476546756600222770ustar00rootroot00000000000000import nibabel as nib import numpy as np import numpy.testing as npt from scipy.ndimage import binary_erosion from scipy.stats import pearsonr from dipy.core.sphere import HemiSphere from dipy.data import get_fnames, get_sphere from dipy.direction.peaks import peaks_from_positions from 
dipy.direction.pmf import SimplePmfGen, SHCoeffPmfGen from dipy.reconst.shm import sh_to_sf from dipy.tracking.tractogen import generate_tractogram from dipy.tracking.stopping_criterion import BinaryStoppingCriterion from dipy.tracking.streamline import Streamlines from dipy.tracking.tracker_parameters import generate_tracking_parameters from dipy.tracking.utils import (connectivity_matrix, random_seeds_from_mask, seeds_from_mask, seeds_directions_pairs) def get_fast_tracking_performances(params, *, nbr_seeds=1000, nbr_threads=0): """ Generate streamlines and return performances on the DiSCo dataset. """ s = generate_disco_streamlines(params, nbr_seeds=nbr_seeds, nbr_threads=nbr_threads) return get_disco_performances(s) def generate_disco_streamlines(params, *, nbr_seeds=1000, nbr_threads=0, sphere=None): """ Return streamlines generated on the DiSCo dataset using the fast tracking module with the input tracking params """ # fetch the disco data fnames = get_fnames(name="disco1", include_optional=True) # prepare ODFs if sphere is None: sphere = HemiSphere.from_sphere(get_sphere(name="repulsion724")) sh = nib.load(fnames[20]).get_fdata() fODFs = sh_to_sf( sh, sphere, sh_order_max=12, basis_type='tournier07', legacy=False ) fODFs[fODFs<0] = 0 pmf_gen = SimplePmfGen(np.asarray(fODFs, dtype=float), sphere) # seeds position and initial directions mask = nib.load(fnames[25]).get_fdata() affine = nib.load(fnames[25]).affine seed_mask = nib.load(fnames[34]).get_fdata() seed_mask = binary_erosion(seed_mask * mask, iterations=1) seed_positions = seeds_from_mask(seed_mask, affine, density=1)[:nbr_seeds] peaks = peaks_from_positions(seed_positions, fODFs, sphere, npeaks=1, affine=affine) seeds, directions = seeds_directions_pairs(seed_positions, peaks, max_cross=1) # stopping criterion sc = BinaryStoppingCriterion(mask) streamlines = generate_tractogram(seeds, directions, sc, params, pmf_gen, affine=affine, nbr_threads=nbr_threads) return streamlines def get_disco_performances(streamlines): """ Return the streamlines connectivity performance compared to the GT DiSCo connectom. 
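    The score is the Pearson correlation between the lower triangles of the
    ground-truth and estimated connectivity matrices, as sketched below with
    stand-in matrices (illustrative only):

        import numpy as np
        from scipy.stats import pearsonr

        rng = np.random.default_rng(0)
        gt = rng.random((5, 5))                    # stand-in GT connectome
        est = gt + 0.1 * rng.random((5, 5))        # stand-in estimate
        tril = np.tril(np.ones(gt.shape), -1) > 0  # lower triangle, no diag
        r, _ = pearsonr(gt[tril], est[tril])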
""" # fetch the disco data fnames = get_fnames(name="disco1", include_optional=True) # prepare the GT connectome data GT_connectome = np.loadtxt(fnames[35]) connectome_mask = np.tril(np.ones(GT_connectome.shape), -1) > 0 labels_img = nib.load(fnames[23]) affine = labels_img.affine labels = np.round(labels_img.get_fdata()).astype(int) connectome = connectivity_matrix(streamlines, affine, labels)[1:,1:] r, _ = pearsonr(GT_connectome[connectome_mask].flatten(), connectome[connectome_mask].flatten()) return r def test_tractogram_reproducibility(): # Test tractogram reproducibility params0 = generate_tracking_parameters("prob", max_len=500, step_size=0.2, voxel_size=np.ones(3), max_angle=20, random_seed=0) params1 = generate_tracking_parameters("prob", max_len=500, step_size=0.2, voxel_size=np.ones(3), max_angle=20, random_seed=1) params2 = generate_tracking_parameters("prob", max_len=500, step_size=0.2, voxel_size=np.ones(3), max_angle=20, random_seed=2) # Same random generator seed r1 = get_fast_tracking_performances(params1, nbr_seeds=100, nbr_threads=1) r2 = get_fast_tracking_performances(params1, nbr_seeds=100, nbr_threads=1) npt.assert_equal(r1, r2) # Random random generator seed r1 = get_fast_tracking_performances(params0, nbr_seeds=100, nbr_threads=1) r2 = get_fast_tracking_performances(params0, nbr_seeds=100, nbr_threads=1) npt.assert_(not r1 == r2, msg="Tractograms are identical (random seed).") # Different random generator seeds r1 = get_fast_tracking_performances(params1, nbr_seeds=100, nbr_threads=1) r2 = get_fast_tracking_performances(params2, nbr_seeds=100, nbr_threads=1) npt.assert_(not r1 == r2, msg="Tractograms are identical (different seeds).") def test_tracking_max_angle(): """This tests that the angle between streamline points is always smaller then the input `max_angle` parameter. """ def get_min_cos_similarity(streamlines): min_cos_sim = 1 for sl in streamlines: if len(sl) > 1: v = sl[:-1] - sl[1:] # vectors have norm of 1 for i in range(len(v) - 1): cos_sim = np.dot(v[i], v[i + 1]) if cos_sim < min_cos_sim: min_cos_sim = cos_sim return min_cos_sim for sph in [ get_sphere(name="repulsion100"), HemiSphere.from_sphere(get_sphere(name="repulsion100")), ]: for max_angle in [20,45]: params = generate_tracking_parameters("det", max_len=500, step_size=1, voxel_size=np.ones(3), max_angle=max_angle, random_seed=0) streamlines = generate_disco_streamlines(params, nbr_seeds=100, sphere=sph) min_cos_sim = get_min_cos_similarity(streamlines) npt.assert_(np.arccos(min_cos_sim) <= np.deg2rad(max_angle)) def test_tracking_step_size(): """This tests that the distance between streamline points is equal to the input `step_size` parameter. """ def get_points_distance(streamlines): dists = [] for sl in streamlines: dists.extend(np.linalg.norm(sl[0:-1] - sl[1:], axis=1)) return dists for step_size in [0.02, 0.5, 1]: params = generate_tracking_parameters("det", max_len=500, step_size=step_size, voxel_size=np.ones(3), max_angle=20, random_seed=0) streamlines = generate_disco_streamlines(params, nbr_seeds=100) dists = get_points_distance(streamlines) npt.assert_almost_equal(np.min(dists), step_size) npt.assert_almost_equal(np.max(dists), step_size) def test_return_all(): """This tests that the number of streamlines equals the number of seeds when return_all=True. 
""" fnames = get_fnames(name="disco1", include_optional=True) sphere = HemiSphere.from_sphere(get_sphere(name="repulsion724")) sh = nib.load(fnames[20]).get_fdata() fODFs = sh_to_sf(sh, sphere, sh_order_max=12, basis_type='tournier07', legacy=False) fODFs[fODFs<0] = 0 pmf_gen = SimplePmfGen(np.asarray(fODFs, dtype=float), sphere) # seeds position and initial directions mask = nib.load(fnames[25]).get_fdata() sc = BinaryStoppingCriterion(mask) affine = nib.load(fnames[25]).affine seed_mask = np.ones(mask.shape) seeds = random_seeds_from_mask(seed_mask, affine, seeds_count=100, seed_count_per_voxel=False) directions = np.random.random(seeds.shape) directions = np.array([v/np.linalg.norm(v) for v in directions]) # test return_all=True params = generate_tracking_parameters("det", max_len=500, min_len=0, step_size=0.5, voxel_size=np.ones(3), max_angle=20, random_seed=0, return_all=True) stream_gen = generate_tractogram(seeds, directions, sc, params, pmf_gen, affine=affine) streamlines = Streamlines(stream_gen) npt.assert_equal(len(streamlines), len(seeds)) # test return_all=False params = generate_tracking_parameters("det", max_len=500, min_len=10, step_size=0.5, voxel_size=np.ones(3), max_angle=20, random_seed=0, return_all=False) stream_gen = generate_tractogram(seeds, directions, sc, params, pmf_gen, affine=affine) streamlines = Streamlines(stream_gen) npt.assert_array_less(len(streamlines), len(seeds)) def test_max_min_length(): """This tests that the returned streamlines respect the length criterion. """ fnames = get_fnames(name="disco1", include_optional=True) sphere = HemiSphere.from_sphere(get_sphere(name="repulsion724")) sh = nib.load(fnames[20]).get_fdata() fODFs = sh_to_sf( sh, sphere, sh_order_max=12, basis_type='tournier07', legacy=False ) fODFs[fODFs<0] = 0 pmf_gen = SimplePmfGen(np.asarray(fODFs, dtype=float), sphere) # seeds position and initial directions mask = nib.load(fnames[25]).get_fdata() sc = BinaryStoppingCriterion(mask) affine = nib.load(fnames[25]).affine seed_mask = np.ones(mask.shape) seeds = random_seeds_from_mask(seed_mask, affine, seeds_count=1000, seed_count_per_voxel=False) directions = np.random.random(seeds.shape) directions = np.array([v/np.linalg.norm(v) for v in directions]) min_len=10 max_len=100 params = generate_tracking_parameters("det", max_len=max_len, min_len=min_len, step_size=0.5, voxel_size=np.ones(3), max_angle=20, random_seed=0, return_all=False) stream_gen = generate_tractogram(seeds, directions, sc, params, pmf_gen, affine=affine) streamlines = Streamlines(stream_gen) errors = np.array([len(s) < min_len and len(s) > max_len for s in streamlines]) npt.assert_(np.sum(errors) == 0) def test_buffer_frac(): """This tests that the buffer fraction for generate tractogram plays well. 
""" fnames = get_fnames(name="disco1", include_optional=True) sphere = HemiSphere.from_sphere(get_sphere(name="repulsion724")) sh = nib.load(fnames[20]).get_fdata() fODFs = sh_to_sf( sh, sphere, sh_order_max=12, basis_type='tournier07', legacy=False ) fODFs[fODFs<0] = 0 pmf_gen = SimplePmfGen(np.asarray(fODFs, dtype=float), sphere) # seeds position and initial directions mask = nib.load(fnames[25]).get_fdata() sc = BinaryStoppingCriterion(mask) affine = nib.load(fnames[25]).affine seed_mask = np.ones(mask.shape) seeds = random_seeds_from_mask(seed_mask, affine, seeds_count=500, seed_count_per_voxel=False) directions = np.random.random(seeds.shape) directions = np.array([v/np.linalg.norm(v) for v in directions]) params = generate_tracking_parameters("prob", max_len=500, min_len=0, step_size=0.5, voxel_size=np.ones(3), max_angle=20, random_seed=0, return_all=True) streams = Streamlines(generate_tractogram(seeds, directions, sc, params, pmf_gen, affine=affine, buffer_frac=1.0)) # test the results are identical with various buffer fractions for frac in [0.01,0.1,0.5]: frac_streams = Streamlines(generate_tractogram(seeds, directions, sc, params, pmf_gen, affine=affine, buffer_frac=frac)) npt.assert_equal(len(frac_streams), len(streams)) dipy-1.11.0/dipy/tracking/tests/test_utils.py000066400000000000000000000625651476546756600212710ustar00rootroot00000000000000import warnings import numpy as np import numpy.testing as npt import pytest from dipy.testing import assert_true from dipy.testing.decorators import set_random_number_generator from dipy.tracking import metrics from dipy.tracking._utils import _to_voxel_coordinates from dipy.tracking.streamline import transform_streamlines from dipy.tracking.utils import ( _min_at, connectivity_matrix, density_map, length, max_angle_from_curvature, min_radius_curvature_from_angle, ndbincount, near_roi, path_length, random_seeds_from_mask, reduce_labels, reduce_rois, seeds_directions_pairs, seeds_from_mask, target, target_line_based, unique_rows, ) from dipy.tracking.vox2track import streamline_mapping def make_streamlines(return_seeds=False): streamlines = [ np.array([[0, 0, 0], [1, 1, 1], [2, 2, 2], [5, 10, 12]], "float"), np.array([[1, 2, 3], [3, 2, 0], [5, 20, 33], [40, 80, 120]], "float"), ] seeds = [np.array([0.0, 0.0, 0.0], "float"), np.array([1.0, 2.0, 3.0], "float")] if return_seeds: return streamlines, seeds else: return streamlines def test_density_map(): # One streamline diagonal in volume streamlines = [np.array([np.arange(10)] * 3).T] shape = (10, 10, 10) x = np.arange(10) expected = np.zeros(shape) expected[x, x, x] = 1.0 dm = density_map(streamlines, np.eye(4), shape) npt.assert_array_equal(dm, expected) # add streamline, make voxel_size smaller. 
Each streamline should only be # counted once, even if multiple points lie in a voxel streamlines.append(np.ones((5, 3))) shape = (5, 5, 5) x = np.arange(5) expected = np.zeros(shape) expected[x, x, x] = 1.0 expected[0, 0, 0] += 1 affine = np.eye(4) * 2 affine[:3, 3] = 0.05 dm = density_map(streamlines, affine, shape) npt.assert_array_equal(dm, expected) # should work with a generator dm = density_map(iter(streamlines), affine, shape) npt.assert_array_equal(dm, expected) # Test passing affine affine = np.diag([2, 2, 2, 1.0]) affine[:3, 3] = 1.0 dm = density_map(streamlines, affine, shape) npt.assert_array_equal(dm, expected) # Shift the image by 2 voxels, ie 4mm affine[:3, 3] -= 4.0 expected_old = expected new_shape = [i + 2 for i in shape] expected = np.zeros(new_shape) expected[2:, 2:, 2:] = expected_old dm = density_map(streamlines, affine, new_shape) npt.assert_array_equal(dm, expected) def test_to_voxel_coordinates_precision(): # To simplify tests, use an identity affine. This would be the result of # a call to _mapping_to_voxel with another identity affine. transfo = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]) # Offset is computed by _mapping_to_voxel. With a 1x1x1 dataset # having no translation, the offset is half the voxel size, i.e. 0.5. offset = np.array([0.5, 0.5, 0.5]) # Without the added tolerance in _to_voxel_coordinates, this streamline # should raise an Error in the call to _to_voxel_coordinates. failing_strl = [ np.array([[-0.5000001, 0.0, 0.0], [0.0, 1.0, 0.0]], dtype=np.float32) ] indices = _to_voxel_coordinates(failing_strl, transfo, offset) expected_indices = np.array([[[0, 0, 0], [0, 1, 0]]]) npt.assert_array_equal(indices, expected_indices) def test_connectivity_matrix(): label_volume = np.array([[[3, 0, 0], [0, 0, 5], [0, 0, 4]]]) streamlines = [ np.array([[0, 0, 0], [0, 1, 2], [0, 2, 2]], "float"), np.array([[0, 0, 0], [0, 1, 1], [0, 2, 2]], "float"), np.array([[0, 2, 2], [0, 1, 1], [0, 0, 0]], "float"), ] expected = np.zeros((6, 6), "int") expected[3, 4] = 2 expected[4, 3] = 1 # Check basic Case matrix = connectivity_matrix(streamlines, np.eye(4), label_volume, symmetric=False) npt.assert_array_equal(matrix, expected) # Test mapping matrix, mapping = connectivity_matrix( streamlines, np.eye(4), label_volume, symmetric=False, return_mapping=True ) npt.assert_array_equal(matrix, expected) npt.assert_equal(mapping[3, 4], [0, 1]) npt.assert_equal(mapping[4, 3], [2]) npt.assert_equal(mapping.get((0, 0)), None) # Test mapping and symmetric matrix, mapping = connectivity_matrix( streamlines, np.eye(4), label_volume, symmetric=True, return_mapping=True ) npt.assert_equal(mapping[3, 4], [0, 1, 2]) # When symmetric only (3,4) is a key, not (4, 3) npt.assert_equal(mapping.get((4, 3)), None) # expected output matrix is symmetric version of expected expected = expected + expected.T npt.assert_array_equal(matrix, expected) # Test mapping_as_streamlines, mapping dict has lists of streamlines matrix, mapping = connectivity_matrix( streamlines, np.eye(4), label_volume, symmetric=False, return_mapping=True, mapping_as_streamlines=True, ) assert_true(mapping[3, 4][0] is streamlines[0]) assert_true(mapping[3, 4][1] is streamlines[1]) assert_true(mapping[4, 3][0] is streamlines[2]) # Test Inclusive streamline analysis # Check basic Case (inclusive) expected = np.zeros((6, 6), "int") expected[3, 4] = 2 expected[4, 3] = 1 expected[3, 5] = 1 expected[5, 4] = 1 expected[0, 3:5] = 1 expected[3:5, 0] = 1 matrix = connectivity_matrix( streamlines, np.eye(4), label_volume, 
symmetric=False, inclusive=True ) npt.assert_array_equal(matrix, expected) # Test mapping matrix, mapping = connectivity_matrix( streamlines, np.eye(4), label_volume, inclusive=True, symmetric=False, return_mapping=True, ) npt.assert_array_equal(matrix, expected) npt.assert_equal(mapping[3, 4], [0, 1]) npt.assert_equal(mapping[4, 3], [2]) npt.assert_equal(mapping[0, 4], [1]) npt.assert_equal(mapping[3, 5], [0]) npt.assert_equal(mapping.get((0, 0)), None) # Test mapping and symmetric matrix, mapping = connectivity_matrix( streamlines, np.eye(4), label_volume, inclusive=True, symmetric=True, return_mapping=True, ) npt.assert_equal(mapping[3, 4], [0, 1, 2]) npt.assert_equal(mapping[0, 3], [1, 2]) npt.assert_equal(mapping[0, 4], [1, 2]) npt.assert_equal(mapping[3, 5], [0]) npt.assert_equal(mapping[4, 5], [0]) # When symmetric only (3,4) is a key, not (4,3) npt.assert_equal(mapping.get((4, 3)), None) # expected output matrix is symmetric version of expected expected = expected + expected.T npt.assert_array_equal(matrix, expected) # Test mapping_as_streamlines, mapping dict has lists of streamlines matrix, mapping = connectivity_matrix( streamlines, np.eye(4), label_volume, symmetric=False, inclusive=True, return_mapping=True, mapping_as_streamlines=True, ) assert_true(mapping[0, 3][0] is streamlines[2]) assert_true(mapping[0, 4][0] is streamlines[1]) assert_true(mapping[3, 0][0] is streamlines[1]) assert_true(mapping[3, 4][0] is streamlines[0]) assert_true(mapping[3, 4][1] is streamlines[1]) assert_true(mapping[3, 5][0] is streamlines[0]) assert_true(mapping[4, 0][0] is streamlines[2]) assert_true(mapping[4, 3][0] is streamlines[2]) assert_true(mapping[5, 4][0] is streamlines[0]) # Test passing affine to connectivity_matrix affine = np.diag([-1, -1, -1, 1.0]) streamlines = [-i for i in streamlines] matrix = connectivity_matrix(streamlines, affine, label_volume) # In the symmetrical case, the matrix should be, well, symmetric: npt.assert_equal(matrix[4, 3], matrix[4, 3]) def test_ndbincount(): def check(expected): npt.assert_equal(bc[0, 0], expected[0]) npt.assert_equal(bc[0, 1], expected[1]) npt.assert_equal(bc[1, 0], expected[2]) npt.assert_equal(bc[2, 2], expected[3]) x = np.array([[0, 0], [0, 0], [0, 1], [0, 1], [1, 0], [2, 2]]).T expected = [2, 2, 1, 1] # count occurrences in x bc = ndbincount(x) npt.assert_equal(bc.shape, (3, 3)) check(expected) # pass in shape bc = ndbincount(x, shape=(4, 5)) npt.assert_equal(bc.shape, (4, 5)) check(expected) # pass in weights weights = np.arange(6.0) weights[-1] = 1.23 expected = [1.0, 5.0, 4.0, 1.23] bc = ndbincount(x, weights=weights) check(expected) # raises an error if shape is too small npt.assert_raises(ValueError, ndbincount, x, weights=None, shape=(2, 2)) def test_reduce_labels(): shape = (4, 5, 6) # labels from 100 to 220 labels = np.arange(100, np.prod(shape) + 100).reshape(shape) # new labels form 0 to 120, and lookup maps range(0,120) to range(100, 220) new_labels, lookup = reduce_labels(labels) npt.assert_array_equal(new_labels, labels - 100) npt.assert_array_equal(lookup, labels.ravel()) def test_target(): streamlines = [ np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.0, 0.0]]), np.array([[0.0, 0.0, 0], [0, 1.0, 1.0], [0, 2.0, 2.0]]), ] _target(target, streamlines, (0, 0, 0), (1, 0, 0), True) def test_target_lb(): streamlines = [ np.array([[0.0, 1.0, 1.0], [3.0, 1.0, 1.0]]), np.array([[0.0, 0.0, 0.0], [2.0, 2.0, 2.0]]), np.array([[1.0, 1.0, 1.0]]), ] # Single-point streamline _target(target_line_based, streamlines, (1, 1, 1), (2, 1, 
1), False) def _target(target_f, streamlines, voxel_both_true, voxel_one_true, test_bad_points): affine = np.eye(4) mask = np.zeros((4, 4, 4), dtype=bool) # Both pass though mask[voxel_both_true] = True new = list(target_f(streamlines, affine, mask)) npt.assert_equal(len(new), 2) new = list(target_f(streamlines, affine, mask, include=False)) npt.assert_equal(len(new), 0) # only first mask[:] = False mask[voxel_one_true] = True new = list(target_f(streamlines, affine, mask)) npt.assert_equal(len(new), 1) assert_true(new[0] is streamlines[0]) new = list(target_f(streamlines, affine, mask, include=False)) npt.assert_equal(len(new), 1) assert_true(new[0] is streamlines[1]) # Test that bad points raise a value error if test_bad_points: bad_sl = streamlines + [np.array([[10.0, 10.0, 10.0]])] new = target_f(bad_sl, affine, mask) npt.assert_raises(ValueError, list, new) bad_sl = streamlines + [-np.array([[10.0, 10.0, 10.0]])] new = target_f(bad_sl, affine, mask) npt.assert_raises(ValueError, list, new) # Test smaller voxels affine = np.array([[0.3, 0, 0, 0], [0, 0.2, 0, 0], [0, 0, 0.4, 0], [0, 0, 0, 1]]) streamlines = transform_streamlines(streamlines, affine) new = list(target_f(streamlines, affine, mask)) npt.assert_equal(len(new), 1) assert_true(new[0] is streamlines[0]) new = list(target_f(streamlines, affine, mask, include=False)) npt.assert_equal(len(new), 1) assert_true(new[0] is streamlines[1]) # Test that changing mask or affine does not break target/target_line_based include = target_f(streamlines, affine, mask) exclude = target_f(streamlines, affine, mask, include=False) affine[:] = np.eye(4) mask[:] = False include = list(include) exclude = list(exclude) npt.assert_equal(len(include), 1) assert_true(include[0] is streamlines[0]) npt.assert_equal(len(exclude), 1) assert_true(exclude[0] is streamlines[1]) def test_target_line_based_out_of_bounds(): test_cases = [ (np.array([[-10, 0, 0], [0, 0, 0]]), 0), (np.array([[-10, 0, 0], [1, -10, 0]]), 0), (np.array([[0, 0, 0], [10, 10, 10]]), 0), (np.array([[-2, -0.6, -0.6], [2, -0.6, -0.6]]), 0), (np.array([[-10000, 0, 0], [0, 0, 0]]), 0), (np.array([[-10, 0, 0], [10, 0, 0]]), 1), (np.array([[1, -10, 0], [1, 10, 0]]), 1), ] for streamline, expected_matched in test_cases: mask = np.zeros((2, 1, 1), dtype=np.uint8) mask[1, 0, 0] = 1 matched = list(target_line_based([streamline], np.eye(4), mask)) assert len(matched) == expected_matched def test_near_roi(): streamlines = [ np.array([[0.0, 0.0, 0.9], [1.9, 0.0, 0.0], [3, 2.0, 2.0]]), np.array([[0.1, 0.0, 0], [0, 1.0, 1.0], [0, 2.0, 2.0]]), np.array([[2, 2, 2], [3, 3, 3]]), ] mask = np.zeros((4, 4, 4), dtype=bool) mask[0, 0, 0] = True mask[1, 0, 0] = True npt.assert_array_equal( near_roi(streamlines, np.eye(4), mask, tol=1), np.array([True, True, False]) ) npt.assert_array_equal( near_roi(streamlines, np.eye(4), mask), np.array([False, True, False]) ) # test for handling of various forms of null streamlines # including a streamline from previous test because near_roi / tol # can't handle completely empty streamline collections streamlines_null = [ np.array([[0.0, 0.0, 0.9], [1.9, 0.0, 0.0], [3, 2.0, 2.0]]), np.array([[], [], []]).T, np.array([]), [], ] npt.assert_array_equal( near_roi(streamlines_null, np.eye(4), mask, tol=1), np.array([True, False, False, False]), ) npt.assert_array_equal( near_roi(streamlines_null, np.eye(4), mask), np.array([False, False, False, False]), ) # If there is an affine, we need to use it: affine = np.eye(4) affine[:, 3] = [-1, 100, -20, 1] # Transform the streamlines: 
x_streamlines = [sl + affine[:3, 3] for sl in streamlines] npt.assert_array_equal( near_roi(x_streamlines, affine, mask, tol=1), np.array([True, True, False]) ) npt.assert_array_equal( near_roi(x_streamlines, affine, mask, tol=None), np.array([False, True, False]) ) # Test for use of the 'all' mode: npt.assert_array_equal( near_roi(x_streamlines, affine, mask, tol=None, mode="all"), np.array([False, False, False]), ) mask[0, 1, 1] = True mask[0, 2, 2] = True # Test for use of the 'all' mode, also testing that setting the tolerance # to a very small number gets overridden: with warnings.catch_warnings(): warnings.simplefilter("ignore") npt.assert_array_equal( near_roi(x_streamlines, affine, mask, tol=0.1, mode="all"), np.array([False, True, False]), ) mask[2, 2, 2] = True mask[3, 3, 3] = True npt.assert_array_equal( near_roi(x_streamlines, affine, mask, tol=None, mode="all"), np.array([False, True, True]), ) # Test for use of endpoints as selection criteria: mask = np.zeros((4, 4, 4), dtype=bool) mask[0, 1, 1] = True mask[3, 2, 2] = True npt.assert_array_equal( near_roi(streamlines, np.eye(4), mask, tol=0.87, mode="either_end"), np.array([True, False, False]), ) npt.assert_array_equal( near_roi(streamlines, np.eye(4), mask, tol=0.87, mode="both_end"), np.array([False, False, False]), ) mask[0, 0, 0] = True mask[0, 2, 2] = True npt.assert_array_equal( near_roi(streamlines, np.eye(4), mask, mode="both_end"), np.array([False, True, False]), ) # Test with a generator input: def generate_sl(streamlines): for sl in streamlines: yield sl npt.assert_array_equal( near_roi(generate_sl(streamlines), np.eye(4), mask, mode="both_end"), np.array([False, True, False]), ) def test_streamline_mapping(): streamlines = [ np.array([[0, 0, 0], [0, 0, 0], [0, 2, 2]], "float"), np.array([[0, 0, 0], [0, 1, 1], [0, 2, 2]], "float"), np.array([[0, 2, 2], [0, 1, 1], [0, 0, 0]], "float"), ] mapping = streamline_mapping(streamlines, affine=np.eye(4)) expected = {(0, 0, 0): [0, 1, 2], (0, 2, 2): [0, 1, 2], (0, 1, 1): [1, 2]} npt.assert_equal(mapping, expected) mapping = streamline_mapping( streamlines, affine=np.eye(4), mapping_as_streamlines=True ) expected = {k: [streamlines[i] for i in indices] for k, indices in expected.items()} npt.assert_equal(mapping, expected) # Test passing affine affine = np.eye(4) affine[:3, 3] = 0.5 mapping = streamline_mapping( streamlines, affine=affine, mapping_as_streamlines=True ) npt.assert_equal(mapping, expected) # Make the voxel size smaller affine = np.diag([0.5, 0.5, 0.5, 1.0]) affine[:3, 3] = 0.25 expected = {tuple(i * 2 for i in key): value for key, value in expected.items()} mapping = streamline_mapping( streamlines, affine=affine, mapping_as_streamlines=True ) npt.assert_equal(mapping, expected) @set_random_number_generator() def test_length(rng): # Generate a simulated bundle of fibers: n_streamlines = 50 n_pts = 100 t = np.linspace(-10, 10, n_pts) bundle = [] for i in np.linspace(3, 5, n_streamlines): pts = np.vstack((np.cos(2 * t / np.pi), np.zeros(t.shape) + i, t)).T bundle.append(pts) start = rng.integers(10, 30, n_streamlines) end = rng.integers(60, 100, n_streamlines) bundle = [ 10 * streamline[start[i] : end[i]] for (i, streamline) in enumerate(bundle) ] bundle_lengths = length(bundle) for idx, this_length in enumerate(bundle_lengths): npt.assert_equal(this_length, metrics.length(bundle[idx])) @set_random_number_generator() def test_seeds_from_mask(rng): mask = rng.integers(0, 1, size=(10, 10, 10)) seeds = seeds_from_mask(mask, np.eye(4), density=1) 
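    # With density=1 and an identity affine, seeds_from_mask is expected to
    # place one seed at the centre of each nonzero voxel, so the seed count
    # equals mask.sum() and the coordinates match np.argwhere(mask) (both
    # checked just below). With density=[a, b, c] it places an a*b*c grid of
    # seeds inside every nonzero voxel instead.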
npt.assert_equal(mask.sum(), len(seeds)) npt.assert_array_equal(np.argwhere(mask), seeds) mask[:] = False mask[3, 3, 3] = True seeds = seeds_from_mask(mask, np.eye(4), density=[3, 4, 5]) npt.assert_equal(len(seeds), 3 * 4 * 5) assert_true(np.all((seeds > 2.5) & (seeds < 3.5))) mask[4, 4, 4] = True seeds = seeds_from_mask(mask, np.eye(4), density=[3, 4, 5]) npt.assert_equal(len(seeds), 2 * 3 * 4 * 5) assert_true(np.all((seeds > 2.5) & (seeds < 4.5))) in_333 = ((seeds > 2.5) & (seeds < 3.5)).all(1) npt.assert_equal(in_333.sum(), 3 * 4 * 5) in_444 = ((seeds > 3.5) & (seeds < 4.5)).all(1) npt.assert_equal(in_444.sum(), 3 * 4 * 5) @set_random_number_generator() def test_random_seeds_from_mask(rng): mask = rng.integers(0, 1, size=(4, 6, 3)) seeds = random_seeds_from_mask( mask, np.eye(4), seeds_count=24, seed_count_per_voxel=True ) npt.assert_equal(mask.sum() * 24, len(seeds)) seeds = random_seeds_from_mask( mask, np.eye(4), seeds_count=0, seed_count_per_voxel=True ) npt.assert_equal(0, len(seeds)) mask[:] = False mask[2, 2, 2] = True seeds = random_seeds_from_mask( mask, np.eye(4), seeds_count=8, seed_count_per_voxel=True ) npt.assert_equal(mask.sum() * 8, len(seeds)) assert_true(np.all((seeds > 1.5) & (seeds < 2.5))) seeds = random_seeds_from_mask( mask, np.eye(4), seeds_count=24, seed_count_per_voxel=False ) npt.assert_equal(24, len(seeds)) seeds = random_seeds_from_mask( mask, np.eye(4), seeds_count=0, seed_count_per_voxel=False ) npt.assert_equal(0, len(seeds)) mask[:] = False mask[2, 2, 2] = True seeds = random_seeds_from_mask( mask, np.eye(4), seeds_count=100, seed_count_per_voxel=False ) npt.assert_equal(100, len(seeds)) assert_true(np.all((seeds > 1.5) & (seeds < 2.5))) mask = np.zeros((15, 15, 15)) mask[2:14, 2:14, 2:14] = 1 seeds_npv_2 = random_seeds_from_mask( mask, np.eye(4), seeds_count=2, seed_count_per_voxel=True, random_seed=0 )[:150] seeds_npv_3 = random_seeds_from_mask( mask, np.eye(4), seeds_count=3, seed_count_per_voxel=True, random_seed=0 )[:150] assert_true(np.all(seeds_npv_2 == seeds_npv_3)) seeds_nt_150 = random_seeds_from_mask( mask, np.eye(4), seeds_count=150, seed_count_per_voxel=False, random_seed=0 )[:150] seeds_nt_500 = random_seeds_from_mask( mask, np.eye(4), seeds_count=500, seed_count_per_voxel=False, random_seed=0 )[:150] assert_true(np.all(seeds_nt_150 == seeds_nt_500)) def test_connectivity_matrix_shape(): # Labels: z-planes have labels 0,1,2 labels = np.zeros((3, 3, 3), dtype=int) labels[:, :, 1] = 1 labels[:, :, 2] = 2 # Streamline set, only moves between first two z-planes. 
streamlines = [ np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 0.5], [0.0, 0.0, 1.0]]), np.array([[0.0, 1.0, 1.0], [0.0, 1.0, 0.5], [0.0, 1.0, 0.0]]), ] matrix = connectivity_matrix(streamlines, np.eye(4), labels) npt.assert_equal(matrix.shape, (3, 3)) def test_unique_rows(): """ Testing the function unique_coords """ arr = np.array([[1, 2, 3], [1, 2, 3], [2, 3, 4], [3, 4, 5]]) arr_w_unique = np.array([[1, 2, 3], [2, 3, 4], [3, 4, 5]]) npt.assert_array_equal(unique_rows(arr), arr_w_unique) # Should preserve order: arr = np.array([[2, 3, 4], [1, 2, 3], [1, 2, 3], [3, 4, 5]]) arr_w_unique = np.array([[2, 3, 4], [1, 2, 3], [3, 4, 5]]) npt.assert_array_equal(unique_rows(arr), arr_w_unique) # Should work even with longer arrays: arr = np.array( [[2, 3, 4], [1, 2, 3], [1, 2, 3], [3, 4, 5], [6, 7, 8], [0, 1, 0], [1, 0, 1]] ) arr_w_unique = np.array( [[2, 3, 4], [1, 2, 3], [3, 4, 5], [6, 7, 8], [0, 1, 0], [1, 0, 1]] ) npt.assert_array_equal(unique_rows(arr), arr_w_unique) def test_reduce_rois(): roi1 = np.zeros((4, 4, 4), dtype=bool) roi2 = np.zeros((4, 4, 4), dtype=bool) roi1[1, 1, 1] = 1 roi2[2, 2, 2] = 1 include_roi, exclude_roi = reduce_rois([roi1, roi2], [True, True]) npt.assert_equal(include_roi, roi1 + roi2) npt.assert_equal(exclude_roi, np.zeros((4, 4, 4))) include_roi, exclude_roi = reduce_rois([roi1, roi2], [True, False]) npt.assert_equal(include_roi, roi1) npt.assert_equal(exclude_roi, roi2) # Array input: include_roi, exclude_roi = reduce_rois(np.array([roi1, roi2]), [True, True]) npt.assert_equal(include_roi, roi1 + roi2) npt.assert_equal(exclude_roi, np.zeros((4, 4, 4))) # Int and float input roi1 = np.zeros((4, 4, 4), dtype=int) roi2 = np.zeros((4, 4, 4), dtype=float) npt.assert_warns(UserWarning, reduce_rois, [roi1], [True]) npt.assert_warns(UserWarning, reduce_rois, [roi2], [True]) @set_random_number_generator() def test_path_length(rng): aoi = np.zeros((20, 20, 20), dtype=bool) aoi[0, 0, 0] = 1 # A few tests for basic usage x = np.arange(20) streamlines = [np.array([x, x, x]).T] pl = path_length(streamlines, np.eye(4), aoi) expected = x.copy() * np.sqrt(3) # expected[0] = np.inf npt.assert_array_almost_equal(pl[x, x, x], expected) aoi[19, 19, 19] = 1 pl = path_length(streamlines, np.eye(4), aoi) expected = np.minimum(expected, expected[::-1]) npt.assert_array_almost_equal(pl[x, x, x], expected) aoi[19, 19, 19] = 0 aoi[1, 1, 1] = 1 pl = path_length(streamlines, np.eye(4), aoi) expected = (x - 1) * np.sqrt(3) expected[0] = 0 npt.assert_array_almost_equal(pl[x, x, x], expected) z = np.zeros(x.shape, x.dtype) streamlines.append(np.array([x, z, z]).T) pl = path_length(streamlines, np.eye(4), aoi) npt.assert_array_almost_equal(pl[x, x, x], expected) npt.assert_array_almost_equal(pl[x, 0, 0], x) # Only streamlines that pass through aoi contribute to path length so if # all streamlines are duds, plm will be all inf. 
aoi[:] = 0 aoi[0, 0, 0] = 1 streamlines = [] for _ in range(1000): rando = rng.random(size=(100, 3)) * 19 + 0.5 assert (rando > 0.5).all() assert (rando < 19.5).all() streamlines.append(rando) pl = path_length(streamlines, np.eye(4), aoi) npt.assert_array_almost_equal(pl, -1) pl = path_length(streamlines, np.eye(4), aoi, fill_value=-12.0) npt.assert_array_almost_equal(pl, -12.0) def test_min_at(): k = np.array([3, 2, 2, 2, 1, 1, 1]) values = np.array([10.0, 1, 2, 3, 31, 21, 11]) i = np.zeros(k.shape, int) j = np.zeros(k.shape, int) a = np.zeros([1, 1, 4]) + 100.0 _min_at(a, (i, j, k), values) npt.assert_array_equal(a, [[[100, 11, 1, 10]]]) def test_curvature_angle(): angle = [0.0000001, np.pi / 3, np.pi / 2.01] step_size = [0.2, 0.5, 1.5] curvature = [2000000.0, 0.5, 1.064829060280437] for theta, step, curve in zip(angle, step_size, curvature): res_angle = max_angle_from_curvature(curve, step) npt.assert_almost_equal(res_angle, theta) res_curvature = min_radius_curvature_from_angle(theta, step) npt.assert_almost_equal(res_curvature, curve) # special case with pytest.warns(UserWarning): npt.assert_equal( min_radius_curvature_from_angle(0, 1), min_radius_curvature_from_angle(np.pi / 2, 1), ) @set_random_number_generator() def test_seeds_directions_pairs(rng): positions = rng.random(size=(10, 3)) peaks = rng.random(size=(10, 5, 3)) # test various max_cross value for i in range(-1, 7): seeds, directions = seeds_directions_pairs(positions, peaks, max_cross=i) npt.assert_equal(seeds.shape, directions.shape) # test it raise and error if the input array shapes don't match npt.assert_raises(ValueError, seeds_directions_pairs, positions[:, :2], peaks) npt.assert_raises(ValueError, seeds_directions_pairs, positions[:5, :], peaks) npt.assert_raises(ValueError, seeds_directions_pairs, positions, peaks[:, :, :2]) npt.assert_raises(ValueError, seeds_directions_pairs, positions, peaks.flatten()) dipy-1.11.0/dipy/tracking/tracker.py000066400000000000000000000713401476546756600173520ustar00rootroot00000000000000from nibabel.affines import voxel_sizes import numpy as np from dipy.data import default_sphere from dipy.direction import ( BootDirectionGetter, ClosestPeakDirectionGetter, ProbabilisticDirectionGetter, ) from dipy.direction.peaks import peaks_from_positions from dipy.direction.pmf import SHCoeffPmfGen, SimplePmfGen from dipy.tracking.local_tracking import LocalTracking, ParticleFilteringTracking from dipy.tracking.tracker_parameters import generate_tracking_parameters from dipy.tracking.tractogen import generate_tractogram from dipy.tracking.utils import seeds_directions_pairs def generic_tracking( seed_positions, seed_directions, sc, params, *, affine=None, sh=None, peaks=None, sf=None, sphere=None, basis_type=None, legacy=True, nbr_threads=0, seed_buffer_fraction=1.0, save_seeds=False, ): affine = affine if affine is not None else np.eye(4) pmf_type = [ {"name": "sh", "value": sh, "cls": SHCoeffPmfGen}, {"name": "peaks", "value": peaks, "cls": SimplePmfGen}, {"name": "sf", "value": sf, "cls": SimplePmfGen}, ] initialized_pmf = [ d_selected for d_selected in pmf_type if d_selected["value"] is not None ] if len(initialized_pmf) > 1: selected_pmf = ", ".join([p["name"] for p in initialized_pmf]) raise ValueError( "Only one pmf type should be initialized. " f"Variables initialized: {', '.join(selected_pmf)}" ) if len(initialized_pmf) == 0: available_pmf = ", ".join([d["name"] for d in pmf_type]) raise ValueError( f"No PMF found. One of this variable ({available_pmf}) should be" " initialized." 
) selected_pmf = initialized_pmf[0] if selected_pmf["name"] == "sf" and sphere is None: raise ValueError("A sphere should be defined when using SF (an ODF).") if selected_pmf["name"] == "peaks": raise NotImplementedError("Peaks are not yet implemented.") sphere = sphere or default_sphere kwargs = {} if selected_pmf["name"] == "sh": kwargs = {"basis_type": basis_type, "legacy": legacy} pmf_gen = selected_pmf["cls"]( np.asarray(selected_pmf["value"], dtype=float), sphere, **kwargs ) if seed_directions is not None: if not isinstance(seed_directions, (np.ndarray, list)): raise ValueError("seed_directions should be a numpy array or a list.") elif isinstance(seed_directions, list): seed_directions = np.array(seed_directions) if not np.array_equal(seed_directions.shape, seed_positions.shape): raise ValueError( "seed_directions and seed_positions should have the same shape." ) else: peaks = peaks_from_positions( seed_positions, None, None, npeaks=1, affine=affine, pmf_gen=pmf_gen ) seed_positions, seed_directions = seeds_directions_pairs( seed_positions, peaks, max_cross=1 ) return generate_tractogram( seed_positions, seed_directions, sc, params, pmf_gen, affine=affine, nbr_threads=nbr_threads, buffer_frac=seed_buffer_fraction, save_seeds=save_seeds, ) def probabilistic_tracking( seed_positions, sc, affine, *, seed_directions=None, sh=None, peaks=None, sf=None, min_len=2, max_len=500, step_size=0.2, voxel_size=None, max_angle=20, pmf_threshold=0.1, sphere=None, basis_type=None, legacy=True, nbr_threads=0, random_seed=0, seed_buffer_fraction=1.0, return_all=True, save_seeds=False, ): """Probabilistic tracking algorithm. Parameters ---------- seed_positions : ndarray Seed positions in world space. sc : StoppingCriterion Stopping criterion. affine : ndarray Affine matrix. seed_directions : ndarray, optional Seed directions. sh : ndarray, optional Spherical Harmonics (SH). peaks : ndarray, optional Peaks array. sf : ndarray, optional Spherical Function (SF). min_len : int, optional Minimum length (Nb points) of the streamlines. max_len : int, optional Maximum length (Nb points) of the streamlines. step_size : float, optional Step size of the tracking. voxel_size : ndarray, optional Voxel size. max_angle : float, optional Maximum angle. pmf_threshold : float, optional PMF threshold. sphere : Sphere, optional Sphere. basis_type : name of basis The basis that ``shcoeff`` are associated with. ``dipy.reconst.shm.real_sh_descoteaux`` is used by default. legacy: bool, optional True to use a legacy basis definition for backward compatibility with previous ``tournier07`` and ``descoteaux07`` implementations. nbr_threads: int, optional Number of threads to use for the processing. By default, all available threads will be used. random_seed: int, optional Seed for the random number generator, must be >= 0. A value of greater than 0 will all produce the same streamline trajectory for a given seed coordinate. A value of 0 may produces various streamline tracjectories for a given seed coordinate. seed_buffer_fraction: float, optional Fraction of the seed buffer to use. A value of 1.0 will use the entire seed buffer. A value of 0.5 will use half of the seed buffer then the other half. a way to reduce memory usage. return_all: bool, optional True to return all the streamlines, False to return only the streamlines that reached the stopping criterion. save_seeds: bool, optional True to return the seeds with the associated streamline. 
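    A minimal usage sketch (illustrative, not a doctest; assumes ``sh``
    holds spherical harmonic coefficients, ``mask`` a binary tracking mask,
    ``seeds`` seed positions in world space and ``affine`` the image
    affine)::

        from dipy.tracking.stopping_criterion import BinaryStoppingCriterion

        sc = BinaryStoppingCriterion(mask)
        streamlines = probabilistic_tracking(seeds, sc, affine, sh=sh)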
Returns ------- Tractogram """ voxel_size = voxel_size if voxel_size is not None else voxel_sizes(affine) params = generate_tracking_parameters( "prob", min_len=min_len, max_len=max_len, step_size=step_size, voxel_size=voxel_size, max_angle=max_angle, pmf_threshold=pmf_threshold, random_seed=random_seed, return_all=return_all, ) return generic_tracking( seed_positions, seed_directions, sc, params, affine=affine, sh=sh, peaks=peaks, sf=sf, sphere=sphere, basis_type=basis_type, legacy=legacy, nbr_threads=nbr_threads, seed_buffer_fraction=seed_buffer_fraction, save_seeds=save_seeds, ) def deterministic_tracking( seed_positions, sc, affine, *, seed_directions=None, sh=None, peaks=None, sf=None, min_len=2, max_len=500, step_size=0.2, voxel_size=None, max_angle=20, pmf_threshold=0.1, sphere=None, basis_type=None, legacy=True, nbr_threads=0, random_seed=0, seed_buffer_fraction=1.0, return_all=True, save_seeds=False, ): """Deterministic tracking algorithm. Parameters ---------- seed_positions : ndarray Seed positions in world space. sc : StoppingCriterion Stopping criterion. affine : ndarray Affine matrix. seed_directions : ndarray, optional Seed directions. sh : ndarray, optional Spherical Harmonics (SH). peaks : ndarray, optional Peaks array. sf : ndarray, optional Spherical Function (SF). min_len : int, optional Minimum length (nb points) of the streamlines. max_len : int, optional Maximum length (nb points) of the streamlines. step_size : float, optional Step size of the tracking. voxel_size : ndarray, optional Voxel size. max_angle : float, optional Maximum angle. pmf_threshold : float, optional PMF threshold. sphere : Sphere, optional Sphere. basis_type : name of basis The basis that ``shcoeff`` are associated with. ``dipy.reconst.shm.real_sh_descoteaux`` is used by default. legacy: bool, optional True to use a legacy basis definition for backward compatibility with previous ``tournier07`` and ``descoteaux07`` implementations. nbr_threads: int, optional Number of threads to use for the processing. By default, all available threads will be used. random_seed: int, optional Seed for the random number generator, must be >= 0. A value of greater than 0 will all produce the same streamline trajectory for a given seed coordinate. A value of 0 may produces various streamline tracjectories for a given seed coordinate. seed_buffer_fraction: float, optional Fraction of the seed buffer to use. A value of 1.0 will use the entire seed buffer. A value of 0.5 will use half of the seed buffer then the other half. a way to reduce memory usage. return_all: bool, optional True to return all the streamlines, False to return only the streamlines that reached the stopping criterion. save_seeds: bool, optional True to return the seeds with the associated streamline. 
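    This shares its interface with :func:`probabilistic_tracking`; the
    difference is that, at each step, the highest-probability direction
    within ``max_angle`` is followed rather than a direction sampled from
    the pmf. Conceptually (hypothetical names, illustration only)::

        direction = candidates[np.argmax(pmf_values)]  # deterministic
        # probabilistic: sample candidates with probability ~ pmf_values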
Returns ------- Tractogram """ voxel_size = voxel_size if voxel_size is not None else voxel_sizes(affine) params = generate_tracking_parameters( "det", min_len=min_len, max_len=max_len, step_size=step_size, voxel_size=voxel_size, max_angle=max_angle, pmf_threshold=pmf_threshold, random_seed=random_seed, return_all=return_all, ) return generic_tracking( seed_positions, seed_directions, sc, params, affine=affine, sh=sh, peaks=peaks, sf=sf, sphere=sphere, basis_type=basis_type, legacy=legacy, nbr_threads=nbr_threads, seed_buffer_fraction=seed_buffer_fraction, save_seeds=save_seeds, ) def ptt_tracking( seed_positions, sc, affine, *, seed_directions=None, sh=None, peaks=None, sf=None, min_len=2, max_len=500, step_size=0.5, voxel_size=None, max_angle=10, pmf_threshold=0.1, probe_length=1.5, probe_radius=0, probe_quality=7, probe_count=1, data_support_exponent=1, sphere=None, basis_type=None, legacy=True, nbr_threads=0, random_seed=0, seed_buffer_fraction=1.0, return_all=True, save_seeds=False, ): """Parallel Transport Tractography (PTT) tracking algorithm. Parameters ---------- seed_positions : ndarray Seed positions in world space. sc : StoppingCriterion Stopping criterion. affine : ndarray Affine matrix. seed_directions : ndarray, optional Seed directions. sh : ndarray, optional Spherical Harmonics (SH) data. peaks : ndarray, optional Peaks array sf : ndarray, optional Spherical Function (SF). min_len : int, optional Minimum length of the streamlines. max_len : int, optional Maximum length of the streamlines. step_size : float, optional Step size of the tracking. voxel_size : ndarray, optional Voxel size. max_angle : float, optional Maximum angle. pmf_threshold : float, optional PMF threshold. probe_length : float, optional Probe length. probe_radius : float, optional Probe radius. probe_quality : int, optional Probe quality. probe_count : int, optional Probe count. data_support_exponent : int, optional Data support exponent. sphere : Sphere, optional Sphere. basis_type : name of basis The basis that ``shcoeff`` are associated with. ``dipy.reconst.shm.real_sh_descoteaux`` is used by default. legacy: bool, optional True to use a legacy basis definition for backward compatibility with previous ``tournier07`` and ``descoteaux07`` implementations. nbr_threads: int, optional Number of threads to use for the processing. By default, all available threads will be used. random_seed: int, optional Seed for the random number generator, must be >= 0. A value of greater than 0 will all produce the same streamline trajectory for a given seed coordinate. A value of 0 may produces various streamline tracjectories for a given seed coordinate. seed_buffer_fraction: float, optional Fraction of the seed buffer to use. A value of 1.0 will use the entire seed buffer. A value of 0.5 will use half of the seed buffer then the other half. a way to reduce memory usage. return_all: bool, optional True to return all the streamlines, False to return only the streamlines that reached the stopping criterion. save_seeds: bool, optional True to return the seeds with the associated streamline. 
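    A minimal usage sketch (illustrative, not a doctest; assumes ``sh``,
    ``mask``, ``seeds`` and ``affine`` as in
    :func:`probabilistic_tracking`)::

        from dipy.tracking.stopping_criterion import BinaryStoppingCriterion

        sc = BinaryStoppingCriterion(mask)
        streamlines = ptt_tracking(seeds, sc, affine, sh=sh, probe_count=3)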
Returns ------- Tractogram """ voxel_size = voxel_size if voxel_size is not None else voxel_sizes(affine) params = generate_tracking_parameters( "ptt", min_len=min_len, max_len=max_len, step_size=step_size, voxel_size=voxel_size, max_angle=max_angle, pmf_threshold=pmf_threshold, random_seed=random_seed, probe_length=probe_length, probe_radius=probe_radius, probe_quality=probe_quality, probe_count=probe_count, data_support_exponent=data_support_exponent, return_all=return_all, ) return generic_tracking( seed_positions, seed_directions, sc, params, affine=affine, sh=sh, peaks=peaks, sf=sf, sphere=sphere, basis_type=basis_type, legacy=legacy, nbr_threads=nbr_threads, seed_buffer_fraction=seed_buffer_fraction, save_seeds=save_seeds, ) def closestpeak_tracking( seed_positions, sc, affine, *, seed_directions=None, sh=None, peaks=None, sf=None, min_len=2, max_len=500, step_size=0.5, voxel_size=None, max_angle=60, pmf_threshold=0.1, sphere=None, basis_type=None, legacy=True, nbr_threads=0, random_seed=0, seed_buffer_fraction=1.0, return_all=True, save_seeds=False, ): """Closest peak tracking algorithm. Parameters ---------- seed_positions : ndarray Seed positions in world space. sc : StoppingCriterion Stopping criterion. affine : ndarray Affine matrix. seed_directions : ndarray, optional Seed directions. sh : ndarray, optional Spherical Harmonics (SH). peaks : ndarray, optional Peaks array. sf : ndarray, optional Spherical Function (SF). min_len : int, optional Minimum length (nb points) of the streamlines. max_len : int, optional Maximum length (nb points) of the streamlines. step_size : float, optional Step size of the tracking. voxel_size : ndarray, optional Voxel size. max_angle : float, optional Maximum angle. pmf_threshold : float, optional PMF threshold. sphere : Sphere, optional Sphere. basis_type : name of basis The basis that ``shcoeff`` are associated with. ``dipy.reconst.shm.real_sh_descoteaux`` is used by default. legacy: bool, optional True to use a legacy basis definition for backward compatibility with previous ``tournier07`` and ``descoteaux07`` implementations. nbr_threads: int, optional Number of threads to use for the processing. By default, all available threads will be used. random_seed: int, optional Seed for the random number generator, must be >= 0. A value greater than 0 will always produce the same streamline trajectory for a given seed coordinate. A value of 0 may produce different streamline trajectories for a given seed coordinate. seed_buffer_fraction: float, optional Fraction of the seed buffer to use. A value of 1.0 will use the entire seed buffer. A value of 0.5 will use half of the seed buffer first, then the other half, as a way to reduce memory usage. return_all: bool, optional True to return all the streamlines, False to return only the streamlines that reached the stopping criterion. save_seeds: bool, optional True to return the seeds with the associated streamline. Returns ------- Tractogram """ dg = None sphere = sphere if sphere is not None else default_sphere if sh is not None: dg = ClosestPeakDirectionGetter.from_shcoeff( sh, sphere=sphere, max_angle=max_angle, pmf_threshold=pmf_threshold, basis_type=basis_type, legacy=legacy, ) elif sf is not None: dg = ClosestPeakDirectionGetter.from_pmf( sf, sphere=sphere, max_angle=max_angle, pmf_threshold=pmf_threshold ) else: raise ValueError("SH or SF should be defined. 
Not implemented yet for peaks.") return LocalTracking( dg, sc, seed_positions, affine, step_size=step_size, minlen=min_len, maxlen=max_len, random_seed=random_seed, return_all=return_all, initial_directions=seed_directions, save_seeds=save_seeds, ) def bootstrap_tracking( seed_positions, sc, affine, *, seed_directions=None, data=None, model=None, sh=None, peaks=None, sf=None, min_len=2, max_len=500, step_size=0.5, voxel_size=None, max_angle=60, pmf_threshold=0.1, sphere=None, basis_type=None, legacy=True, nbr_threads=0, random_seed=0, seed_buffer_fraction=1.0, return_all=True, save_seeds=False, ): """Bootstrap tracking algorithm. seed_positions : ndarray Seed positions in world space. sc : StoppingCriterion Stopping criterion. affine : ndarray Affine matrix. seed_directions : ndarray, optional Seed directions. data : ndarray, optional Diffusion data. model : Model, optional Reconstruction model. sh : ndarray, optional Spherical Harmonics (SH). peaks : ndarray, optional Peaks array. sf : ndarray, optional Spherical Function (SF). min_len : int, optional Minimum length (nb points) of the streamlines. max_len : int, optional Maximum length (nb points) of the streamlines. step_size : float, optional Step size of the tracking. voxel_size : ndarray, optional Voxel size. max_angle : float, optional Maximum angle. pmf_threshold : float, optional PMF threshold. sphere : Sphere, optional Sphere. basis_type : name of basis The basis that ``shcoeff`` are associated with. ``dipy.reconst.shm.real_sh_descoteaux`` is used by default. legacy: bool, optional True to use a legacy basis definition for backward compatibility with previous ``tournier07`` and ``descoteaux07`` implementations. nbr_threads: int, optional Number of threads to use for the processing. By default, all available threads will be used. random_seed: int, optional Seed for the random number generator, must be >= 0. A value of greater than 0 will all produce the same streamline trajectory for a given seed coordinate. A value of 0 may produces various streamline tracjectories for a given seed coordinate. seed_buffer_fraction: float, optional Fraction of the seed buffer to use. A value of 1.0 will use the entire seed buffer. A value of 0.5 will use half of the seed buffer then the other half. a way to reduce memory usage. return_all: bool, optional True to return all the streamlines, False to return only the streamlines that reached the stopping criterion. save_seeds: bool, optional True to return the seeds with the associated streamline. Returns ------- Tractogram """ sphere = sphere if sphere is not None else default_sphere if data is None or model is None: raise ValueError("Data and model should be defined.") dg = BootDirectionGetter.from_data( data, model, max_angle=max_angle, ) return LocalTracking( dg, sc, seed_positions, affine, step_size=step_size, minlen=min_len, maxlen=max_len, random_seed=random_seed, return_all=return_all, initial_directions=seed_directions, save_seeds=save_seeds, ) def eudx_tracking( seed_positions, sc, affine, *, seed_directions=None, sh=None, peaks=None, sf=None, pam=None, min_len=2, max_len=500, step_size=0.5, voxel_size=None, max_angle=60, pmf_threshold=0.1, sphere=None, basis_type=None, legacy=True, nbr_threads=0, random_seed=0, seed_buffer_fraction=1.0, return_all=True, save_seeds=False, ): """EuDX tracking algorithm. seed_positions : ndarray Seed positions in world space. sc : StoppingCriterion Stopping criterion. affine : ndarray Affine matrix. seed_directions : ndarray, optional Seed directions. 
sh : ndarray, optional Spherical Harmonics (SH). peaks : ndarray, optional Peaks array. sf : ndarray, optional Spherical Function (SF). pam : PeakAndMetrics, optional Peaks and Metrics object. min_len : int, optional Minimum length (nb points) of the streamlines. max_len : int, optional Maximum length (nb points) of the streamlines. step_size : float, optional Step size of the tracking. voxel_size : ndarray, optional Voxel size. max_angle : float, optional Maximum angle. pmf_threshold : float, optional PMF threshold. sphere : Sphere, optional Sphere. basis_type : name of basis The basis that ``shcoeff`` are associated with. ``dipy.reconst.shm.real_sh_descoteaux`` is used by default. legacy: bool, optional True to use a legacy basis definition for backward compatibility with previous ``tournier07`` and ``descoteaux07`` implementations. nbr_threads: int, optional Number of threads to use for the processing. By default, all available threads will be used. random_seed: int, optional Seed for the random number generator, must be >= 0. A value greater than 0 will always produce the same streamline trajectory for a given seed coordinate. A value of 0 may produce different streamline trajectories for a given seed coordinate. seed_buffer_fraction: float, optional Fraction of the seed buffer to use. A value of 1.0 will use the entire seed buffer. A value of 0.5 will use half of the seed buffer first, then the other half, as a way to reduce memory usage. return_all: bool, optional True to return all the streamlines, False to return only the streamlines that reached the stopping criterion. save_seeds: bool, optional True to return the seeds with the associated streamline. Returns ------- Tractogram """ sphere = sphere if sphere is not None else default_sphere if pam is None: raise ValueError("PAM should be defined.") return LocalTracking( pam, sc, seed_positions, affine, step_size=step_size, minlen=min_len, maxlen=max_len, random_seed=random_seed, return_all=return_all, initial_directions=seed_directions, save_seeds=save_seeds, ) def pft_tracking( seed_positions, sc, affine, *, seed_directions=None, sh=None, peaks=None, sf=None, pam=None, max_cross=None, min_len=2, max_len=500, step_size=0.2, voxel_size=None, max_angle=20, pmf_threshold=0.1, sphere=None, basis_type=None, legacy=True, nbr_threads=0, random_seed=0, seed_buffer_fraction=1.0, return_all=True, pft_back_tracking_dist=2, pft_front_tracking_dist=1, pft_max_trial=20, particle_count=15, save_seeds=False, min_wm_pve_before_stopping=0, unidirectional=False, randomize_forward_direction=False, ): """Particle Filtering Tracking (PFT) tracking algorithm. Parameters ---------- seed_positions : ndarray Seed positions in world space. sc : StoppingCriterion Stopping criterion. affine : ndarray Affine matrix. seed_directions : ndarray, optional Seed directions. sh : ndarray, optional Spherical Harmonics (SH). peaks : ndarray, optional Peaks array. sf : ndarray, optional Spherical Function (SF). pam : PeakAndMetrics, optional Peaks and Metrics object. max_cross : int, optional Maximum number of crossing fibers. min_len : int, optional Minimum length (nb points) of the streamlines. max_len : int, optional Maximum length (nb points) of the streamlines. step_size : float, optional Step size of the tracking. voxel_size : ndarray, optional Voxel size. max_angle : float, optional Maximum angle. pmf_threshold : float, optional PMF threshold. sphere : Sphere, optional Sphere. basis_type : name of basis The basis that ``shcoeff`` are associated with. 
``dipy.reconst.shm.real_sh_descoteaux`` is used by default. legacy: bool, optional True to use a legacy basis definition for backward compatibility with previous ``tournier07`` and ``descoteaux07`` implementations. nbr_threads: int, optional Number of threads to use for the processing. By default, all available threads will be used. random_seed: int, optional Seed for the random number generator, must be >= 0. A value greater than 0 will always produce the same streamline trajectory for a given seed coordinate. A value of 0 may produce different streamline trajectories for a given seed coordinate. seed_buffer_fraction: float, optional Fraction of the seed buffer to use. A value of 1.0 will use the entire seed buffer. A value of 0.5 will use half of the seed buffer first, then the other half, as a way to reduce memory usage. return_all: bool, optional True to return all the streamlines, False to return only the streamlines that reached the stopping criterion. pft_back_tracking_dist : float, optional Back tracking distance. pft_front_tracking_dist : float, optional Front tracking distance. pft_max_trial : int, optional Maximum number of trials. particle_count : int, optional Number of particles. save_seeds: bool, optional True to return the seeds with the associated streamline. min_wm_pve_before_stopping : float, optional Minimum white matter partial volume estimation before stopping. unidirectional : bool, optional True to use unidirectional tracking. randomize_forward_direction : bool, optional True to randomize the forward direction. Returns ------- Tractogram """ sphere = sphere if sphere is not None else default_sphere dg = None if sh is not None: dg = ProbabilisticDirectionGetter.from_shcoeff( sh, max_angle=max_angle, sphere=sphere, sh_to_pmf=True, pmf_threshold=pmf_threshold, basis_type=basis_type, legacy=legacy, ) elif sf is not None: dg = ProbabilisticDirectionGetter.from_pmf( sf, max_angle=max_angle, sphere=sphere, pmf_threshold=pmf_threshold ) elif pam is not None and sh is None: sh = pam.shm_coeff # build the direction getter from the PAM SH coefficients (mirrors the `sh` branch; without this, `dg` would remain None) dg = ProbabilisticDirectionGetter.from_shcoeff( sh, max_angle=max_angle, sphere=sphere, sh_to_pmf=True, pmf_threshold=pmf_threshold, basis_type=basis_type, legacy=legacy, ) else: msg = "SH, SF or PAM should be defined. Not implemented yet for peaks." 
raise ValueError(msg) return ParticleFilteringTracking( dg, sc, seed_positions, affine, max_cross=max_cross, step_size=step_size, minlen=min_len, maxlen=max_len, pft_back_tracking_dist=pft_back_tracking_dist, pft_front_tracking_dist=pft_front_tracking_dist, particle_count=particle_count, pft_max_trial=pft_max_trial, return_all=return_all, random_seed=random_seed, initial_directions=seed_directions, save_seeds=save_seeds, min_wm_pve_before_stopping=min_wm_pve_before_stopping, unidirectional=unidirectional, randomize_forward_direction=randomize_forward_direction, ) dipy-1.11.0/dipy/tracking/tracker_parameters.pxd000066400000000000000000000031371476546756600217370ustar00rootroot00000000000000from dipy.direction.pmf cimport PmfGen from dipy.utils.fast_numpy cimport RNGState cpdef enum TrackerStatus: SUCCESS = 1 FAIL = -1 ctypedef TrackerStatus (*func_ptr)(double* point, double* direction, TrackerParameters params, double* stream_data, PmfGen pmf_gen, RNGState* rng) noexcept nogil cdef class ParallelTransportTrackerParameters: cdef public double angular_separation cdef public double data_support_exponent cdef public double k_small cdef public int probe_count cdef public double probe_length cdef public double probe_normalizer cdef public int probe_quality cdef public double probe_radius cdef public double probe_step_size cdef public int rejection_sampling_max_try cdef public int rejection_sampling_nbr_sample cdef class ShTrackerParameters: cdef public double pmf_threshold cdef class TrackerParameters: cdef func_ptr tracker cdef public double cos_similarity cdef public double max_angle cdef public double max_curvature cdef public int max_len cdef public int min_len cdef public int random_seed cdef public double step_size cdef public double average_voxel_size cdef public double[3] voxel_size cdef public double[3] inv_voxel_size cdef public bint return_all cdef public ShTrackerParameters sh cdef public ParallelTransportTrackerParameters ptt cdef void set_tracker_c(self, func_ptr tracker) noexcept nogil dipy-1.11.0/dipy/tracking/tracker_parameters.pyx000066400000000000000000000122371476546756600217650ustar00rootroot00000000000000# cython: boundscheck=False # cython: initializedcheck=False # cython: wraparound=False # cython: nonecheck=False from dipy.tracking.propspeed cimport ( deterministic_propagator, probabilistic_propagator, parallel_transport_propagator, ) from dipy.tracking.utils import min_radius_curvature_from_angle import numpy as np cimport numpy as cnp def generate_tracking_parameters(algo_name, *, int max_len=500, int min_len=2, double step_size=0.2, double[:] voxel_size, double max_angle=20, bint return_all=True, double pmf_threshold=0.1, double probe_length=0.5, double probe_radius=0, int probe_quality=3, int probe_count=1, double data_support_exponent=1, int random_seed=0): cdef TrackerParameters params algo_name = algo_name.lower() if algo_name in ['deterministic', 'det']: params = TrackerParameters(max_len=max_len, min_len=min_len, step_size=step_size, voxel_size=voxel_size, pmf_threshold=pmf_threshold, max_angle=max_angle, random_seed=random_seed, return_all=return_all) params.set_tracker_c(deterministic_propagator) return params elif algo_name in ['probabilistic', 'prob']: params = TrackerParameters(max_len=max_len, min_len=min_len, step_size=step_size, voxel_size=voxel_size, pmf_threshold=pmf_threshold, max_angle=max_angle, random_seed=random_seed, return_all=return_all) params.set_tracker_c(probabilistic_propagator) return params elif algo_name == 'ptt': params = 
TrackerParameters(max_len=max_len, min_len=min_len, step_size=step_size, voxel_size=voxel_size, pmf_threshold=pmf_threshold, max_angle=max_angle, probe_length=probe_length, probe_radius=probe_radius, probe_quality=probe_quality, probe_count=probe_count, data_support_exponent=data_support_exponent, random_seed=random_seed, return_all=return_all) params.set_tracker_c(parallel_transport_propagator) return params else: raise ValueError("Invalid algorithm name") cdef class TrackerParameters: def __init__(self, max_len, min_len, step_size, voxel_size, max_angle, return_all, pmf_threshold=None, probe_length=None, probe_radius=None, probe_quality=None, probe_count=None, data_support_exponent=None, random_seed=None): cdef cnp.npy_intp i self.max_len = max_len self.min_len = min_len self.return_all = return_all self.random_seed = random_seed self.step_size = step_size self.average_voxel_size = 0 for i in range(3): self.voxel_size[i] = voxel_size[i] self.inv_voxel_size[i] = 1. / voxel_size[i] self.average_voxel_size += voxel_size[i] / 3 self.max_angle = np.deg2rad(max_angle) self.cos_similarity = np.cos(self.max_angle) self.max_curvature = 1 / min_radius_curvature_from_angle( self.max_angle, self.step_size / self.average_voxel_size) self.sh = None self.ptt = None if pmf_threshold is not None: self.sh = ShTrackerParameters(pmf_threshold) if probe_length is not None and probe_radius is not None and probe_quality is not None and probe_count is not None and data_support_exponent is not None: self.ptt = ParallelTransportTrackerParameters(probe_length, probe_radius, probe_quality, probe_count, data_support_exponent) cdef void set_tracker_c(self, func_ptr tracker) noexcept nogil: self.tracker = tracker cdef class ShTrackerParameters: def __init__(self, pmf_threshold): self.pmf_threshold = pmf_threshold cdef class ParallelTransportTrackerParameters: def __init__(self, double probe_length, double probe_radius, int probe_quality, int probe_count, double data_support_exponent): self.probe_length = probe_length self.probe_radius = probe_radius self.probe_quality = probe_quality self.probe_count = probe_count self.data_support_exponent = data_support_exponent self.probe_step_size = self.probe_length / (self.probe_quality - 1) self.probe_normalizer = 1.0 / (self.probe_quality * self.probe_count) self.k_small = 0.0001 # Adaptively set in Trekker self.rejection_sampling_nbr_sample = 10 self.rejection_sampling_max_try = 100 dipy-1.11.0/dipy/tracking/tractogen.pxd000066400000000000000000000024361476546756600200500ustar00rootroot00000000000000cimport numpy as cnp from dipy.tracking.stopping_criterion cimport StoppingCriterion, StreamlineStatus from dipy.direction.pmf cimport PmfGen from dipy.tracking.tracker_parameters cimport TrackerParameters cdef void generate_tractogram_c(double[:,::1] seed_positions, double[:,::1] seed_directions, int nbr_threads, StoppingCriterion sc, TrackerParameters params, PmfGen pmf_gen, double** streamlines, int* length, StreamlineStatus* status) cdef StreamlineStatus generate_local_streamline(double* seed, double* position, double* stream, int* stream_idx, StoppingCriterion sc, TrackerParameters params, PmfGen pmf_gen) noexcept nogil cdef void prepare_pmf(double* pmf, double* point, PmfGen pmf_gen, double pmf_threshold, int pmf_len) noexcept nogil dipy-1.11.0/dipy/tracking/tractogen.pyx000066400000000000000000000306431476546756600200760ustar00rootroot00000000000000# cython: boundscheck=False # cython: initializedcheck=False # cython: wraparound=False # cython: nonecheck=False from 
libc.stdio cimport printf cimport ctime cimport cython from cython.parallel import prange import numpy as np cimport numpy as cnp from dipy.direction.pmf cimport PmfGen from dipy.utils cimport fast_numpy from dipy.tracking.stopping_criterion cimport (StreamlineStatus, StoppingCriterion, TRACKPOINT, ENDPOINT, OUTSIDEIMAGE, INVALIDPOINT, VALIDSTREAMLIME, INVALIDSTREAMLIME) from dipy.tracking.tracker_parameters cimport (TrackerParameters, func_ptr, SUCCESS, FAIL) from nibabel.streamlines import ArraySequence as Streamlines from libc.stdlib cimport malloc, free from libc.string cimport memcpy, memset from libc.math cimport ceil def generate_tractogram(double[:,::1] seed_positions, double[:,::1] seed_directions, StoppingCriterion sc, TrackerParameters params, PmfGen pmf_gen, affine, int nbr_threads=0, float buffer_frac=1.0, bint save_seeds=0): """Generate a tractogram from a set of seed points and directions. Parameters ---------- seed_positions : ndarray Seed positions for the streamlines. seed_directions : ndarray Seed directions for the streamlines. sc : StoppingCriterion Stopping criterion for the streamlines. params : TrackerParameters Parameters for the streamline generation. pmf_gen : PmfGen Probability mass function generator. affine : ndarray Affine transformation for the streamlines. nbr_threads : int, optional Number of threads to use for streamline generation. buffer_frac : float, optional Fraction of the seed points to process in each iteration. save_seeds : bool, optional If True, return seeds alongside streamlines. Yields ------ streamlines : Streamlines Streamlines generated from the seed points. seeds : ndarray, optional Seed points associated with the generated streamlines. 
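Examples -------- A minimal sketch of the expected call pattern, not a runnable recipe: ``seeds``, ``dirs``, ``sc``, ``params``, ``pmf_gen`` and ``affine`` are assumed to have been prepared beforehand (e.g. ``params`` via :func:`~dipy.tracking.tracker_parameters.generate_tracking_parameters`), and the variable names are illustrative only:: streamlines = list(generate_tractogram(seeds, dirs, sc, params, pmf_gen, affine, nbr_threads=4)) 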
""" cdef: cnp.npy_intp _len = seed_positions.shape[0] cnp.npy_intp _plen = int(ceil(_len * buffer_frac)) cnp.npy_intp i, seed_start, seed_end double** streamlines_arr int* length_arr StreamlineStatus* status_arr if buffer_frac <=0 or buffer_frac > 1: raise ValueError("buffer_frac must > 0 and <= 1.") lin_T = affine[:3, :3].T.copy() offset = affine[:3, 3].copy() inv_affine = np.linalg.inv(affine) seed_positions = np.dot(seed_positions, inv_affine[:3, :3].T.copy()) seed_positions += inv_affine[:3, 3] seed_start = 0 seed_end = _plen while seed_start < _len: streamlines_arr = malloc(_plen * sizeof(double*)) length_arr = malloc(_plen * sizeof(int)) status_arr = malloc(_plen * sizeof(int)) if streamlines_arr == NULL or length_arr == NULL or status_arr == NULL: raise MemoryError("Memory allocation failed") generate_tractogram_c(seed_positions[seed_start:seed_end], seed_directions[seed_start:seed_end], nbr_threads, sc, params, pmf_gen, streamlines_arr, length_arr, status_arr) for i in range(seed_end - seed_start): if ((status_arr[i] == VALIDSTREAMLIME or params.return_all) and (length_arr[i] >= params.min_len and length_arr[i] <= params.max_len)): s = np.asarray( streamlines_arr[i]) track = s.copy().reshape((-1,3)) if save_seeds: yield np.dot(track, lin_T) + offset, np.dot(seed_positions[seed_start + i], lin_T) + offset else: yield np.dot(track, lin_T) + offset free(streamlines_arr[i]) free(streamlines_arr) free(length_arr) free(status_arr) seed_start += _plen seed_end += _plen if seed_end > _len: seed_end = _len cdef void generate_tractogram_c(double[:,::1] seed_positions, double[:,::1] seed_directions, int nbr_threads, StoppingCriterion sc, TrackerParameters params, PmfGen pmf_gen, double** streamlines, int* lengths, StreamlineStatus* status): """Generate a tractogram from a set of seed points and directions. This is the C implementation of the generate_tractogram function. Parameters ---------- seed_positions : ndarray Seed positions for the streamlines. seed_directions : ndarray Seed directions for the streamlines. nbr_threads : int Number of threads to use for streamline generation. sc : StoppingCriterion Stopping criterion for the streamlines. params : TrackerParameters Parameters for the streamline generation. pmf_gen : PmfGen Probability mass function generator. streamlines : list List to store the generated streamlines. lengths : list List to store the lengths of the generated streamlines. status : list List to store the status of the generated streamlines. """ cdef: cnp.npy_intp _len=seed_positions.shape[0] cnp.npy_intp i if nbr_threads<= 0: nbr_threads = 0 for i in prange(_len, nogil=True, num_threads=nbr_threads): stream = malloc((params.max_len * 3 * 2 + 1) * sizeof(double)) stream_idx = malloc(2 * sizeof(int)) status[i] = generate_local_streamline(&seed_positions[i][0], &seed_directions[i][0], stream, stream_idx, sc, params, pmf_gen) # copy the streamlines points from the buffer to a 1d vector of the streamline length lengths[i] = stream_idx[1] - stream_idx[0] + 1 streamlines[i] = malloc(lengths[i] * 3 * sizeof(double)) memcpy(&streamlines[i][0], &stream[stream_idx[0] * 3], lengths[i] * 3 * sizeof(double)) free(stream) free(stream_idx) cdef StreamlineStatus generate_local_streamline(double* seed, double* direction, double* stream, int* stream_idx, StoppingCriterion sc, TrackerParameters params, PmfGen pmf_gen) noexcept nogil: """Generate a unique streamline from a seed point and direction. This is the C implementation. 
Parameters ---------- seed : ndarray Seed point for the streamline. direction : ndarray Seed direction for the streamline. stream : ndarray Buffer to store the generated streamline. stream_idx : ndarray Buffer to store the indices of the generated streamline. sc : StoppingCriterion Stopping criterion for the streamline. params : TrackerParameters Parameters for the streamline generation. pmf_gen : PmfGen Probability mass function generator. """ cdef: cnp.npy_intp i, j cnp.npy_uint32 s_random_seed double[3] point double[3] voxdir double voxdir_norm double* stream_data StreamlineStatus status_forward, status_backward fast_numpy.RNGState rng # set the random generator if params.random_seed > 0: s_random_seed = int( (seed[0] * 2 + seed[1] * 3 + seed[2] * 5) * params.random_seed ) else: s_random_seed = ctime.time_ns() fast_numpy.seed_rng(&rng, s_random_seed) # set the initial position fast_numpy.copy_point(seed, point) fast_numpy.copy_point(direction, voxdir) fast_numpy.copy_point(seed, &stream[params.max_len * 3]) stream_idx[0] = stream_idx[1] = params.max_len # reject an invalid (non-unit) input direction voxdir_norm = fast_numpy.norm(voxdir) if voxdir_norm < 0.99 or voxdir_norm > 1.01: return INVALIDSTREAMLIME # forward tracking stream_data = <double*> malloc(100 * sizeof(double)) memset(stream_data, 0, 100 * sizeof(double)) status_forward = TRACKPOINT for i in range(1, params.max_len): if params.tracker(&point[0], &voxdir[0], params, stream_data, pmf_gen, &rng) == FAIL: break # update position for j in range(3): point[j] += voxdir[j] * params.inv_voxel_size[j] * params.step_size fast_numpy.copy_point(point, &stream[(params.max_len + i) * 3]) status_forward = sc.check_point_c(point, &rng) if (status_forward == ENDPOINT or status_forward == INVALIDPOINT or status_forward == OUTSIDEIMAGE): break stream_idx[1] = params.max_len + i - 1 free(stream_data) # backward tracking stream_data = <double*> malloc(100 * sizeof(double)) memset(stream_data, 0, 100 * sizeof(double)) fast_numpy.copy_point(seed, point) fast_numpy.copy_point(direction, voxdir) if i > 1: # Use the first selected orientation for the backward tracking segment for j in range(3): voxdir[j] = stream[(params.max_len + 1) * 3 + j] - stream[params.max_len * 3 + j] fast_numpy.normalize(voxdir) # flip the initial direction for backward streamline segment for j in range(3): voxdir[j] = voxdir[j] * -1 status_backward = TRACKPOINT for i in range(1, params.max_len): if params.tracker(&point[0], &voxdir[0], params, stream_data, pmf_gen, &rng) == FAIL: break # update position for j in range(3): point[j] += voxdir[j] * params.inv_voxel_size[j] * params.step_size fast_numpy.copy_point(point, &stream[(params.max_len - i) * 3]) status_backward = sc.check_point_c(point, &rng) if (status_backward == ENDPOINT or status_backward == INVALIDPOINT or status_backward == OUTSIDEIMAGE): break stream_idx[0] = params.max_len - i + 1 free(stream_data) # check for valid streamline ending status if ((status_backward == ENDPOINT or status_backward == OUTSIDEIMAGE) and (status_forward == ENDPOINT or status_forward == OUTSIDEIMAGE)): return VALIDSTREAMLIME return INVALIDSTREAMLIME cdef void prepare_pmf(double* pmf, double* point, PmfGen pmf_gen, double pmf_threshold, int pmf_len) noexcept nogil: """Prepare the probability mass function for streamline generation. Parameters ---------- pmf : ndarray Probability mass function. point : ndarray Current tracking position. pmf_gen : PmfGen Probability mass function generator. pmf_threshold : float Threshold for the probability mass function. 
pmf_len : int Length of the probability mass function. """ cdef: cnp.npy_intp i double absolute_pmf_threshold double max_pmf = 0 pmf = pmf_gen.get_pmf_c(point, pmf) for i in range(pmf_len): if pmf[i] > max_pmf: max_pmf = pmf[i] absolute_pmf_threshold = pmf_threshold * max_pmf for i in range(pmf_len): if pmf[i] < absolute_pmf_threshold: pmf[i] = 0.0 dipy-1.11.0/dipy/tracking/utils.py000066400000000000000000001063571476546756600170640ustar00rootroot00000000000000"""Various tools related to creating and working with streamlines. This module provides tools for targeting streamlines using ROIs, for making connectivity matrices from whole brain fiber tracking and some other tools that allow streamlines to interact with image data. Important Notes --------------- Dipy uses affine matrices to represent the relationship between streamline points, which are defined as points in a continuous 3d space, and image voxels, which are typically arranged in a discrete 3d grid. Dipy uses a convention similar to nifti files to interpret these affine matrices. This convention is that the point at the center of voxel ``[i, j, k]`` is represented by the point ``[x, y, z]`` where ``[x, y, z, 1] = affine * [i, j, k, 1]``. Also when the phrase "voxel coordinates" is used, it is understood to be the same as ``affine = eye(4)``. As an example, let's take a 2d image where the affine is:: [[1., 0., 0.], [0., 2., 0.], [0., 0., 1.]] The pixels of an image with this affine would look something like:: A------------ | | | | | C | | | | | | | ----B-------- | | | | | | | | | | | | ------------- | | | | | | | | | | | | ------------D And the letters A-D represent the following points in "real world coordinates":: A = [-.5, -1.] B = [ .5, 1.] C = [ 0., 0.] D = [ 2.5, 5.] """ from collections import defaultdict from functools import wraps from itertools import combinations from warnings import warn from nibabel.affines import apply_affine import numpy as np from scipy.spatial.distance import cdist from dipy.core.geometry import dist_to_corner from dipy.testing.decorators import warning_for_keywords from dipy.tracking import metrics # Import helper functions shared with vox2track from dipy.tracking._utils import _mapping_to_voxel, _to_voxel_coordinates from dipy.tracking.vox2track import _streamlines_in_mask def density_map(streamlines, affine, vol_dims): """Count the number of unique streamlines that pass through each voxel. Parameters ---------- streamlines : iterable A sequence of streamlines. affine : array_like (4, 4) The mapping from voxel coordinates to streamline points. The voxel_to_rasmm matrix, typically from a NIFTI file. vol_dims : 3 ints The shape of the volume to be returned containing the streamline counts. Returns ------- image_volume : ndarray, shape=vol_dims The number of unique streamlines that pass through each voxel of the volume. Raises ------ IndexError When the points of the streamlines lie outside of the return volume. Notes ----- A streamline can pass through a voxel even if one of the points of the streamline does not lie in the voxel. For example a step from [0,0,0] to [0,0,2] passes through [0,0,1]. Consider subsegmenting the streamlines when the edges of the voxels are smaller than the steps of the streamlines. 
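Examples -------- A minimal sketch of a typical call (``streamlines`` in world coordinates and a reference nibabel image ``img`` are assumed to exist; the names are illustrative):: dm = density_map(streamlines, img.affine, img.shape[:3]) 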
""" lin_T, offset = _mapping_to_voxel(affine) counts = np.zeros(vol_dims, "int") for sl in streamlines: inds = _to_voxel_coordinates(sl, lin_T, offset) i, j, k = inds.T # this takes advantage of the fact that numpy's += operator only # acts once even if there are repeats in inds counts[i, j, k] += 1 return counts @warning_for_keywords() def connectivity_matrix( streamlines, affine, label_volume, *, inclusive=False, symmetric=True, return_mapping=False, mapping_as_streamlines=False, ): """Count the streamlines that start and end at each label pair. Parameters ---------- streamlines : sequence A sequence of streamlines. affine : array_like (4, 4) The mapping from voxel coordinates to streamline coordinates. The voxel_to_rasmm matrix, typically from a NIFTI file. label_volume : ndarray An image volume with an integer data type, where the intensities in the volume map to anatomical structures. inclusive: bool Whether to analyze the entire streamline, as opposed to just the endpoints. False by default. symmetric : bool, True by default Symmetric means we don't distinguish between start and end points. If symmetric is True, ``matrix[i, j] == matrix[j, i]``. return_mapping : bool, False by default If True, a mapping is returned which maps matrix indices to streamlines. mapping_as_streamlines : bool, False by default If True voxel indices map to lists of streamline objects. Otherwise voxel indices map to lists of integers. Returns ------- matrix : ndarray The number of connection between each pair of regions in `label_volume`. mapping : defaultdict(list) ``mapping[i, j]`` returns all the streamlines that connect region `i` to region `j`. If `symmetric` is True mapping will only have one key for each start end pair such that if ``i < j`` mapping will have key ``(i, j)`` but not key ``(j, i)``. 
""" # Error checking on label_volume kind = label_volume.dtype.kind labels_positive = (kind == "u") or ((kind == "i") and (label_volume.min() >= 0)) valid_label_volume = labels_positive and label_volume.ndim == 3 if not valid_label_volume: raise ValueError( "label_volume must be a 3d integer array with" "non-negative label values" ) matrix = np.zeros( (np.max(label_volume) + 1, np.max(label_volume) + 1), dtype=np.int64 ) mapping = defaultdict(list) lin_T, offset = _mapping_to_voxel(affine) if inclusive: for i, sl in enumerate(streamlines): sl = _to_voxel_coordinates(sl, lin_T, offset) x, y, z = sl.T if symmetric: crossed_labels = np.unique(label_volume[x, y, z]) else: crossed_labels = np.unique(label_volume[x, y, z], return_index=True) crossed_labels = crossed_labels[0][np.argsort(crossed_labels[1])] for comb in combinations(crossed_labels, 2): matrix[comb] += 1 if return_mapping: if mapping_as_streamlines: mapping[comb].append(streamlines[i]) else: mapping[comb].append(i) else: streamlines_end = np.array([sl[0 :: len(sl) - 1] for sl in streamlines]) streamlines_end = _to_voxel_coordinates(streamlines_end, lin_T, offset) x, y, z = streamlines_end.T if symmetric: end_labels = np.sort(label_volume[x, y, z], axis=0) else: end_labels = label_volume[x, y, z] np.add.at(matrix, (end_labels[0].T, end_labels[1].T), 1) if return_mapping: if mapping_as_streamlines: for i, (a, b) in enumerate(end_labels.T): mapping[a, b].append(streamlines[i]) else: for i, (a, b) in enumerate(end_labels.T): mapping[a, b].append(i) if symmetric: matrix = np.maximum(matrix, matrix.T) if return_mapping: return (matrix, mapping) else: return matrix @warning_for_keywords() def ndbincount(x, *, weights=None, shape=None): """Like bincount, but for nd-indices. Parameters ---------- x : array_like (N, M) M indices to a an Nd-array weights : array_like (M,), optional Weights associated with indices shape : optional the shape of the output """ x = np.asarray(x) if shape is None: shape = x.max(1) + 1 x = np.ravel_multi_index(x, shape) out = np.bincount(x, weights, minlength=np.prod(shape)) out.shape = shape return out def reduce_labels(label_volume): """Reduce an array of labels to the integers from 0 to n with smallest possible n. Examples -------- >>> labels = np.array([[1, 3, 9], ... [1, 3, 8], ... [1, 3, 7]]) >>> new_labels, lookup = reduce_labels(labels) >>> lookup array([1, 3, 7, 8, 9]) >>> new_labels #doctest: +ELLIPSIS array([[0, 1, 4], [0, 1, 3], [0, 1, 2]]...) >>> (lookup[new_labels] == labels).all() True """ lookup_table = np.unique(label_volume) label_volume = lookup_table.searchsorted(label_volume) return label_volume, lookup_table def subsegment(streamlines, max_segment_length): """Split the segments of the streamlines into small segments. Replaces each segment of each of the streamlines with the smallest possible number of equally sized smaller segments such that no segment is longer than max_segment_length. Among other things, this can useful for getting streamline counts on a grid that is smaller than the length of the streamline segments. Parameters ---------- streamlines : sequence of ndarrays The streamlines to be subsegmented. max_segment_length : float The longest allowable segment length. Returns ------- output_streamlines : generator A set of streamlines. Notes ----- Segments of 0 length are removed. 
Streamlines whose segments are all shorter than ``max_segment_length`` are yielded unchanged. Examples -------- >>> streamlines = [np.array([[0,0,0],[2,0,0],[5,0,0]])] >>> list(subsegment(streamlines, 3.)) [array([[ 0., 0., 0.], [ 2., 0., 0.], [ 5., 0., 0.]])] >>> list(subsegment(streamlines, 1)) [array([[ 0., 0., 0.], [ 1., 0., 0.], [ 2., 0., 0.], [ 3., 0., 0.], [ 4., 0., 0.], [ 5., 0., 0.]])] >>> list(subsegment(streamlines, 1.6)) [array([[ 0. , 0. , 0. ], [ 1. , 0. , 0. ], [ 2. , 0. , 0. ], [ 3.5, 0. , 0. ], [ 5. , 0. , 0. ]])] """ for sl in streamlines: diff = sl[1:] - sl[:-1] dist = np.sqrt((diff * diff).sum(-1)) num_segments = np.ceil(dist / max_segment_length).astype("int") output_sl = np.empty((num_segments.sum() + 1, 3), "float") output_sl[0] = sl[0] count = 1 for ii in range(len(num_segments)): ns = num_segments[ii] if ns == 1: output_sl[count] = sl[ii + 1] count += 1 elif ns > 1: small_d = diff[ii] / ns point = sl[ii] for _ in range(ns): point = point + small_d output_sl[count] = point count += 1 elif ns == 0: pass # repeated point else: # this should never happen because ns should be a positive # int assert ns >= 0 yield output_sl @warning_for_keywords() def seeds_from_mask(mask, affine, *, density=(1, 1, 1)): """Create seeds for fiber tracking from a binary mask. Seed points are placed evenly distributed in all voxels of ``mask`` which are ``True``. Parameters ---------- mask : binary 3d array_like A binary array specifying where to place the seeds for fiber tracking. affine : array, (4, 4) The mapping between voxel indices and the point space for seeds. The voxel_to_rasmm matrix, typically from a NIFTI file. A seed point at the center of the voxel ``[i, j, k]`` will be represented as ``[x, y, z]`` where ``[x, y, z, 1] == np.dot(affine, [i, j, k , 1])``. density : int or array_like (3,) Specifies the number of seeds to place along each dimension. A ``density`` of `2` is the same as ``[2, 2, 2]`` and will result in a total of 8 seeds per voxel. See Also -------- random_seeds_from_mask Raises ------ ValueError When ``mask`` is not a three-dimensional array Examples -------- >>> mask = np.zeros((3,3,3), 'bool') >>> mask[0,0,0] = 1 >>> seeds_from_mask(mask, np.eye(4), density=[1,1,1]) array([[ 0., 0., 0.]]) """ mask = np.asarray(mask, dtype=bool) if mask.ndim < 1: mask = np.reshape(mask, (3,)) if mask.ndim != 3: raise ValueError("mask cannot be more than 3d") density = np.asarray(density, int) if density.size == 1: d = density density = np.empty(3, dtype=int) density.fill(d) elif density.shape != (3,): raise ValueError("density should be an integer array of shape (3,)") # Grid of points between -.5 and .5, centered at 0, with given density grid = np.mgrid[0 : density[0], 0 : density[1], 0 : density[2]] grid = grid.T.reshape((-1, 3)) grid = grid / density grid += 0.5 / density - 0.5 where = np.argwhere(mask) # Add the grid of points to each voxel in mask seeds = where[:, np.newaxis, :] + grid[np.newaxis, :, :] seeds = seeds.reshape((-1, 3)) # Apply the spatial transform if seeds.any(): # Use affine to move seeds into real world coordinates seeds = np.dot(seeds, affine[:3, :3].T) seeds += affine[:3, 3] return seeds @warning_for_keywords() def random_seeds_from_mask( mask, affine, *, seeds_count=1, seed_count_per_voxel=True, random_seed=None ): """Create randomly placed seeds for fiber tracking from a binary mask. Seed points are placed randomly distributed in voxels of ``mask`` which are ``True``. 
If ``seed_count_per_voxel`` is ``True``, this function is similar to ``seeds_from_mask()``, with the difference that instead of evenly distributing the seeds, it randomly places the seeds within the voxels specified by the ``mask``. Parameters ---------- mask : binary 3d array_like A binary array specifying where to place the seeds for fiber tracking. affine : array, (4, 4) The mapping between voxel indices and the point space for seeds. The voxel_to_rasmm matrix, typically from a NIFTI file. A seed point at the center the voxel ``[i, j, k]`` will be represented as ``[x, y, z]`` where ``[x, y, z, 1] == np.dot(affine, [i, j, k , 1])``. seeds_count : int The number of seeds to generate. If ``seed_count_per_voxel`` is True, specifies the number of seeds to place in each voxel. Otherwise, specifies the total number of seeds to place in the mask. seed_count_per_voxel: bool If True, seeds_count is per voxel, else seeds_count is the total number of seeds. random_seed : int The seed for the random seed generator (numpy.random.Generator). See Also -------- seeds_from_mask Raises ------ ValueError When ``mask`` is not a three-dimensional array Examples -------- >>> mask = np.zeros((3,3,3), 'bool') >>> mask[0,0,0] = 1 >>> random_seeds_from_mask(mask, np.eye(4), seeds_count=1, ... seed_count_per_voxel=True, random_seed=1) array([[-0.23838787, -0.20150886, 0.31422574]]) >>> random_seeds_from_mask(mask, np.eye(4), seeds_count=6, ... seed_count_per_voxel=True, random_seed=1) array([[-0.23838787, -0.20150886, 0.31422574], [-0.41435083, -0.26318949, 0.30127447], [ 0.44305611, 0.01132755, 0.47624371], [ 0.30500292, 0.30794079, 0.01532556], [ 0.03816435, -0.15672913, -0.13093276], [ 0.12509547, 0.3972138 , 0.27568569]]) >>> mask[0,1,2] = 1 >>> random_seeds_from_mask(mask, np.eye(4), ... seeds_count=2, seed_count_per_voxel=True, random_seed=1) array([[ 0.30500292, 1.30794079, 2.01532556], [-0.23838787, -0.20150886, 0.31422574], [ 0.3702492 , 0.78681721, 2.10314815], [-0.41435083, -0.26318949, 0.30127447]]) """ mask = np.asarray(mask, dtype=bool) if mask.ndim < 1: mask = np.reshape(mask, (3,)) if mask.ndim != 3: raise ValueError("mask cannot be more than 3d") # Randomize the voxels rng = np.random.default_rng(random_seed) shape = mask.shape mask = mask.flatten() indices = np.arange(len(mask)) rng.shuffle(indices) where = [np.unravel_index(i, shape) for i in indices if mask[i] == 1] num_voxels = len(where) if not seed_count_per_voxel: # Generate enough seeds per voxel seeds_per_voxel = seeds_count // num_voxels + 1 else: seeds_per_voxel = seeds_count seeds = [] for i in range(1, seeds_per_voxel + 1): for s in where: # Set the random seed with the current seed, the current value of # seeds per voxel and the global random seed. if random_seed is not None: s_random_seed = hash((np.sum(s) + 1) * i + random_seed) % (2**32 - 1) rng = np.random.default_rng(s_random_seed) # Generate random triplet grid = rng.random(3) seed = s + grid - 0.5 seeds.append(seed) seeds = np.asarray(seeds) if not seed_count_per_voxel: # Select the requested amount seeds = seeds[:seeds_count] # Apply the spatial transform if seeds.any(): # Use affine to move seeds into real world coordinates seeds = np.dot(seeds, affine[:3, :3].T) seeds += affine[:3, 3] return seeds def _with_initialize(generator): """Allow one to write a generator with initialization code. All code up to the first yield is run as soon as the generator function is called and the first yield value is ignored. 
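For example, the following sketch (illustrative only; :func:`target` below uses the same pattern) validates its inputs eagerly at call time rather than on the first iteration:: @_with_initialize def my_filter(streamlines, mask): mask = np.asarray(mask, dtype=bool) # runs immediately yield # End of initialization for sl in streamlines: yield sl 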
""" @wraps(generator) def helper(*args, **kwargs): gen = generator(*args, **kwargs) next(gen) return gen return helper @_with_initialize @warning_for_keywords() def target(streamlines, affine, target_mask, *, include=True): """Filter streamlines based on whether or not they pass through an ROI. Parameters ---------- streamlines : iterable A sequence of streamlines. Each streamline should be a (N, 3) array, where N is the length of the streamline. affine : array (4, 4) The mapping between voxel indices and the point space for seeds. The voxel_to_rasmm matrix, typically from a NIFTI file. target_mask : array-like A mask used as a target. Non-zero values are considered to be within the target region. include : bool, default True If True, streamlines passing through `target_mask` are kept. If False, the streamlines not passing through `target_mask` are kept. Returns ------- streamlines : generator A sequence of streamlines that pass through `target_mask`. Raises ------ ValueError When the points of the streamlines lie outside of the `target_mask`. See Also -------- density_map """ target_mask = np.array(target_mask, dtype=bool, copy=True) lin_T, offset = _mapping_to_voxel(affine) yield # End of initialization for sl in streamlines: try: ind = _to_voxel_coordinates(sl, lin_T, offset) i, j, k = ind.T state = target_mask[i, j, k] except IndexError as e: raise ValueError("streamlines points are outside of target_mask") from e if state.any() == include: yield sl @_with_initialize @warning_for_keywords() def target_line_based(streamlines, affine, target_mask, *, include=True): """Filter streamlines based on whether or not they pass through a ROI, using a line-based algorithm. Mostly used as a replacement of `target` for compressed streamlines. This function never returns single-point streamlines, whatever the value of `include`. See :footcite:`Bresenham1965` and :footcite:p:`Houde2015` for further details about the method. Parameters ---------- streamlines : iterable A sequence of streamlines. Each streamline should be a (N, 3) array, where N is the length of the streamline. affine : array (4, 4) The mapping between voxel indices and the point space for seeds. The voxel_to_rasmm matrix, typically from a NIFTI file. target_mask : array-like A mask used as a target. Non-zero values are considered to be within the target region. include : bool, default True If True, streamlines passing through `target_mask` are kept. If False, the streamlines not passing through `target_mask` are kept. Returns ------- streamlines : generator A sequence of streamlines that pass through `target_mask`. References ---------- .. footbibliography:: See Also -------- dipy.tracking.utils.density_map dipy.tracking.streamline.compress_streamlines """ target_mask = np.array(target_mask, dtype=np.uint8, copy=True) lin_T, offset = _mapping_to_voxel(affine) streamline_index = _streamlines_in_mask(streamlines, target_mask, lin_T, offset) yield # End of initialization for idx in np.where(streamline_index == [0, 1][include])[0]: yield streamlines[idx] @warning_for_keywords() def streamline_near_roi(streamline, roi_coords, tol, *, mode="any"): """Is a streamline near an ROI. Implements the inner loops of the :func:`near_roi` function. Parameters ---------- streamline : array, shape (N, 3) A single streamline roi_coords : array, shape (M, 3) ROI coordinates transformed to the streamline coordinate frame. tol : float Distance (in the units of the streamlines, usually mm). 
If any coordinate in the streamline is within this distance from the center of any voxel in the ROI, this function returns True. mode : string One of {"any", "all", "either_end", "both_end"}, where return True if: "any" : any point is within tol from ROI. "all" : all points are within tol from ROI. "either_end" : either of the end-points is within tol from ROI. "both_end" : both end points are within tol from ROI. Returns ------- out : boolean """ if not np.any(streamline): return False if len(roi_coords) == 0: return False if mode in ("any", "all"): s = streamline elif mode in ("either_end", "both_end"): # 'end' modes, use a streamline with 2 nodes: s = np.vstack([streamline[0], streamline[-1]]) else: e_s = "For determining relationship to an array, you can use " e_s += "one of the following modes: 'any', 'all', 'both_end', " e_s += f"'either_end', but you entered: {mode}." raise ValueError(e_s) dist = cdist(s, roi_coords, "euclidean") if mode in ("any", "either_end"): return np.min(dist) <= tol else: return np.all(np.min(dist, -1) <= tol) @warning_for_keywords() def near_roi(streamlines, affine, region_of_interest, *, tol=None, mode="any"): """Provide filtering criteria for a set of streamlines based on whether they fall within a tolerance distance from an ROI. Parameters ---------- streamlines : list or generator A sequence of streamlines. Each streamline should be a (N, 3) array, where N is the length of the streamline. affine : array (4, 4) The mapping between voxel indices and the point space for seeds. The voxel_to_rasmm matrix, typically from a NIFTI file. region_of_interest : ndarray A mask used as a target. Non-zero values are considered to be within the target region. tol : float Distance (in the units of the streamlines, usually mm). If any coordinate in the streamline is within this distance from the center of any voxel in the ROI, the filtering criterion is set to True for this streamline, otherwise False. Defaults to the distance between the center of each voxel and the corner of the voxel. mode : string, optional One of {"any", "all", "either_end", "both_end"}, where return True if: "any" : any point is within tol from ROI. Default. "all" : all points are within tol from ROI. "either_end" : either of the end-points is within tol from ROI. "both_end" : both end points are within tol from ROI. Returns ------- 1D array of boolean dtype, shape (len(streamlines), ) This contains `True` for indices corresponding to each streamline that passes within a tolerance distance from the target ROI, `False` otherwise. """ dtc = dist_to_corner(affine) if tol is None: tol = dtc elif tol < dtc: w_s = "Tolerance input provided would create gaps in your" w_s += f" inclusion ROI. Setting to: {dtc}" warn(w_s, stacklevel=2) tol = dtc roi_coords = np.array(np.where(region_of_interest)).T x_roi_coords = apply_affine(affine, roi_coords) # If it's already a list, we can save time by pre-allocating the output if isinstance(streamlines, list): out = np.zeros(len(streamlines), dtype=bool) for ii, sl in enumerate(streamlines): out[ii] = streamline_near_roi(sl, x_roi_coords, tol=tol, mode=mode) return out # If it's a generator, we'll need to generate the output into a list else: out = [] for sl in streamlines: out.append(streamline_near_roi(sl, x_roi_coords, tol=tol, mode=mode)) return np.array(out, dtype=bool) def length(streamlines): """Calculate the lengths of many streamlines in a bundle. Parameters ---------- streamlines : list Each item in the list is an array with 3D coordinates of a streamline. 
Returns ------- Iterator object which then computes the length of each streamline in the bundle, upon iteration. """ return map(metrics.length, streamlines) @warning_for_keywords() def unique_rows(in_array, *, dtype="f4"): """Find the unique rows in an array. Parameters ---------- in_array: ndarray The array for which the unique rows should be found dtype: str, optional This determines the intermediate representation used for the values. Should at least preserve the values of the input array. Returns ------- u_return: ndarray Array with the unique rows of the original array. """ # Sort input array order = np.lexsort(in_array.T) # Apply sort and compare neighbors x = in_array[order] diff_x = np.ones(len(x), dtype=bool) diff_x[1:] = (x[1:] != x[:-1]).any(-1) # Reverse sort and return unique rows un_order = order.argsort() diff_in_array = diff_x[un_order] return in_array[diff_in_array] @_with_initialize @warning_for_keywords() def transform_tracking_output(tracking_output, affine, *, save_seeds=False): """Apply a linear transformation, given by affine, to streamlines. Parameters ---------- tracking_output : Streamlines generator Either streamlines (list, ArraySequence) or a tuple with streamlines and seeds together affine : array (4, 4) The mapping between voxel indices and the point space for seeds. The voxel_to_rasmm matrix, typically from a NIFTI file. save_seeds : bool, optional If set, seeds associated to streamlines will be also moved and returned Returns ------- streamlines : generator A generator for the sequence of transformed streamlines. If save_seeds is True, also return a generator for the transformed seeds. """ lin_T = affine[:3, :3].T.copy() offset = affine[:3, 3].copy() yield # End of initialization if save_seeds: for sl, seed in tracking_output: yield np.dot(sl, lin_T) + offset, np.dot(seed, lin_T) + offset else: for sl in tracking_output: yield np.dot(sl, lin_T) + offset def reduce_rois(rois, include): """Reduce multiple ROIs to one inclusion and one exclusion ROI. Parameters ---------- rois : list or ndarray A list of 3D arrays, each with shape (x, y, z) corresponding to the shape of the brain volume, or a 4D array with shape (n_rois, x, y, z). Non-zeros in each volume are considered to be within the region. include : array or list A list or 1D array of boolean marking inclusion or exclusion criteria. Returns ------- include_roi : boolean 3D array An array marking the inclusion mask. exclude_roi : boolean 3D array An array marking the exclusion mask Notes ----- The include_roi and exclude_roi can be used to perform the operation: "(A or B or ...) and not (X or Y or ...)", where A, B are inclusion regions and X, Y are exclusion regions. """ # throw warning if non bool roi detected if not np.all([irois.dtype == bool for irois in rois]): warn( "Non-boolean input mask detected. 
Treating all nonzeros as True.", stacklevel=2, ) include_roi = np.zeros(rois[0].shape, dtype=bool) exclude_roi = np.zeros(rois[0].shape, dtype=bool) for i in range(len(rois)): if include[i]: include_roi |= rois[i] != 0 else: exclude_roi |= rois[i] != 0 return include_roi, exclude_roi def _min_at(a, index, value): index = np.asarray(index) sort_keys = [value] + list(index) order = np.lexsort(sort_keys) index = index[:, order] value = value[order] uniq = np.ones(index.shape[1], dtype=bool) uniq[1:] = (index[:, 1:] != index[:, :-1]).any(axis=0) index = index[:, uniq] value = value[uniq] a[tuple(index)] = np.minimum(a[tuple(index)], value) try: minimum_at = np.minimum.at except AttributeError: minimum_at = _min_at @warning_for_keywords() def path_length(streamlines, affine, aoi, *, fill_value=-1): """Compute the shortest path, along any streamline, between aoi and each voxel. Parameters ---------- streamlines : seq of (N, 3) arrays A sequence of streamlines, path length is given in mm along the curve of the streamline. aoi : array, 3d A mask (binary array) of voxels from which to start computing distance. affine : array (4, 4) The mapping between voxel indices and the point space for seeds. The voxel_to_rasmm matrix, typically from a NIFTI file. fill_value : float The value of voxel in the path length map that are not connected to the aoi. Returns ------- plm : array Same shape as aoi. The minimum distance between every point and aoi along the path of a streamline. """ aoi = np.asarray(aoi, dtype=bool) # path length map plm = np.empty(aoi.shape, dtype=float) plm[:] = np.inf lin_T, offset = _mapping_to_voxel(affine) for sl in streamlines: seg_ind = _to_voxel_coordinates(sl, lin_T, offset) i, j, k = seg_ind.T # Get where streamlines passes through aoi breaks = aoi[i, j, k] # Where streamline passes aoi, dist is zero i, j, k = seg_ind[breaks].T plm[i, j, k] = 0 # If a streamline crosses aoi >1, re-start counting distance for each for seg in _as_segments(sl, breaks): i, j, k = _to_voxel_coordinates(seg[1:], lin_T, offset).T # Get the distance, in mm, between streamline points segment_length = np.sqrt(((seg[1:] - seg[:-1]) ** 2).sum(1)) dist = segment_length.cumsum() # Updates path length map with shorter distances minimum_at(plm, (i, j, k), dist) if fill_value != np.inf: plm = np.where(plm == np.inf, fill_value, plm) return plm def _part_segments(streamline, break_points): segments = np.split(streamline, break_points.nonzero()[0]) # Skip first segment, all points before first break # first segment is empty when break_points[0] == 0 segments = segments[1:] for each in segments: if len(each) > 1: yield each def _as_segments(streamline, break_points): for seg in _part_segments(streamline, break_points): yield seg for seg in _part_segments(streamline[::-1], break_points[::-1]): yield seg def max_angle_from_curvature(min_radius_curvature, step_size): """Get the maximum deviation angle from the minimum radius curvature. See :footcite:p:`Tournier2012` for further details about the method. Parameters ---------- min_radius_curvature: float Minimum radius of curvature in mm. step_size: float The tracking step size in mm. Returns ------- max_angle: float The maximum deviation angle in radian, given the radius curvature and the step size. References ---------- .. footbibliography:: """ max_angle = 2.0 * np.arcsin(step_size / (2.0 * min_radius_curvature)) if np.isnan(max_angle) or max_angle > np.pi / 2 or max_angle <= 0: w_msg = "The max_angle found is outside the interval [0 ; pi/2]." 
        w_msg += " max_angle will be set to the default value pi/2."
        warn(w_msg, stacklevel=2)
        max_angle = np.pi / 2.0
    return max_angle


def min_radius_curvature_from_angle(max_angle, step_size):
    """Get minimum radius of curvature from a deviation angle.

    See :footcite:p:`Tournier2012` for further details about the method.

    Parameters
    ----------
    max_angle: float
        The maximum deviation angle in radians. max_angle should be within
        [0, pi/2]; otherwise the default value pi/2 will be used.
    step_size: float
        The tracking step size in mm.

    Returns
    -------
    min_radius_curvature: float
        Minimum radius of curvature in mm, given the maximum deviation angle
        theta and the step size.

    References
    ----------
    .. footbibliography::

    """
    if np.isnan(max_angle) or max_angle > np.pi / 2 or max_angle <= 0:
        w_msg = "The max_angle found is outside the interval [0 ; pi/2]."
        w_msg += " max_angle will be set to the default value pi/2."
        warn(w_msg, stacklevel=2)
        max_angle = np.pi / 2.0
    min_radius_curvature = step_size / 2 / np.sin(max_angle / 2)
    return min_radius_curvature


def seeds_directions_pairs(positions, peaks, *, max_cross=-1):
    """
    Pair each seed to the corresponding peaks. If multiple peaks are
    available the seed is repeated for each.

    Parameters
    ----------
    positions : array, (N, 3)
        Coordinates of the N positions.
    peaks : array (N, M, 3)
        Peaks at each position
    max_cross : int, optional
        The maximum number of directions to track from each seed in crossing
        voxels. By default all voxel peaks are used.

    Returns
    -------
    seeds : array (K, 3)
    directions : array (K, 3)
    """
    if (
        not len(positions.shape) == 2
        or not len(peaks.shape) == 3
        or not positions.shape[0] == peaks.shape[0]
        or not positions.shape[1] == 3
        or not peaks.shape[2] == 3
        or not peaks.shape[1] > 0
    ):
        raise ValueError(
            "The array shapes of the positions and peaks should"
            " be (N,3) and (N,M,3), respectively."
        )

    # A max_cross of -1 would silently drop the last peak when slicing below,
    # so map it to None to honor the documented default of using all peaks.
    max_cross = None if max_cross == -1 else max_cross

    seeds = []
    directions = []
    for i, s in enumerate(positions):
        voxel_dirs_norm = np.linalg.norm(peaks[i, :, :], axis=1)
        voxel_dirs = (
            peaks[i, voxel_dirs_norm > 0, :]
            / voxel_dirs_norm[voxel_dirs_norm > 0, np.newaxis]
        )
        for d in voxel_dirs[:max_cross, :]:
            seeds.append(s)
            directions.append(d)

    return np.array(seeds), np.array(directions)
dipy-1.11.0/dipy/tracking/vox2track.pyx000066400000000000000000000416661476546756600200420ustar00rootroot00000000000000# A type of -*- python -*- file
"""This module contains the parts of dipy.tracking.utils that need to be
implemented in cython.
"""
import cython

cdef extern from "dpy_math.h" nogil:
    double fmin(double x, double y)

from libc.math cimport ceil, floor, fabs, sqrt

import numpy as np
cimport numpy as cnp

from dipy.tracking._utils import _mapping_to_voxel, _to_voxel_coordinates


@cython.boundscheck(False)
@cython.wraparound(False)
@cython.profile(False)
def _voxel2streamline(sl,
                      cnp.ndarray[cnp.npy_intp, ndim=2] unique_idx):
    """
    Maps voxels to streamlines and streamlines to voxels, for setting up
    the LiFE equations matrix

    Parameters
    ----------
    sl : list
        A collection of streamlines, each n by 3, with n being the number of
        nodes in the fiber.
    unique_idx : array
        The unique indices in the streamlines

    Returns
    -------
    v2f, v2fn : tuple of dicts

    The first dict in the tuple answers the question: Given a voxel (from
    the unique indices in this model), which fibers pass through it?

    The second answers the question: Given a streamline, for each voxel that
    this streamline passes through, which nodes of that streamline are in
    that voxel?
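    Examples
    --------
    A minimal sketch of the returned structure. The two-point streamline
    below is hypothetical, and the doctest is skipped because this is a
    compiled helper:

    >>> sl = [np.array([[0., 0., 0.], [1., 1., 1.]])]  # doctest: +SKIP
    >>> unique_idx = np.array([[0, 0, 0], [1, 1, 1]], dtype=np.intp)  # doctest: +SKIP
    >>> v2f, v2fn = _voxel2streamline(sl, unique_idx)  # doctest: +SKIP
    >>> v2f  # voxel id -> streamline indices  # doctest: +SKIP
    {0: [0], 1: [0]}
    >>> v2fn  # streamline index -> {voxel id: node indices}  # doctest: +SKIP
    {0: {0: [0], 1: [1]}}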
""" # Define local counters: cdef int s_idx, node_idx, voxel_id, ii cdef dict vox_dict = {} for ii in range(len(unique_idx)): vox = unique_idx[ii] vox_dict[vox[0], vox[1], vox[2]] = ii # Outputs are these dicts: cdef dict v2f = {} cdef dict v2fn = {} # In each fiber: for s_idx in range(len(sl)): sl_as_idx = np.round(sl[s_idx]).astype(int) v2fn[s_idx] = {} # In each voxel present in there: for node_idx in range(len(sl_as_idx)): node = sl_as_idx[node_idx] # What serial number is this voxel in the unique voxel indices: voxel_id = vox_dict[node[0], node[1], node[2]] # Add that combination to the dict: if voxel_id in v2f: if s_idx not in v2f[voxel_id]: v2f[voxel_id].append(s_idx) else: v2f[voxel_id] = [s_idx] # All the nodes going through this voxel get its number: if voxel_id in v2fn[s_idx]: v2fn[s_idx][voxel_id].append(node_idx) else: v2fn[s_idx][voxel_id] = [node_idx] return v2f ,v2fn def streamline_mapping(streamlines, affine=None, mapping_as_streamlines=False): """Creates a mapping from voxel indices to streamlines. Returns a dictionary where each key is a 3d voxel index and the associated value is a list of the streamlines that pass through that voxel. Parameters ---------- streamlines : sequence A sequence of streamlines. affine : array_like (4, 4), optional The mapping from voxel coordinates to streamline coordinates. The streamline values are assumed to be in voxel coordinates. IE ``[0, 0, 0]`` is the center of the first voxel and the voxel size is ``[1, 1, 1]``. mapping_as_streamlines : bool, optional, False by default If True voxel indices map to lists of streamline objects. Otherwise voxel indices map to lists of integers. Returns ------- mapping : defaultdict(list) A mapping from voxel indices to the streamlines that pass through that voxel. Examples -------- >>> streamlines = [np.array([[0., 0., 0.], ... [1., 1., 1.], ... [2., 3., 4.]]), ... np.array([[0., 0., 0.], ... [1., 2., 3.]])] >>> mapping = streamline_mapping(streamlines, affine=np.eye(4)) >>> mapping[0, 0, 0] [0, 1] >>> mapping[1, 1, 1] [0] >>> mapping[1, 2, 3] [1] >>> mapping.get((3, 2, 1), 'no streamlines') 'no streamlines' >>> mapping = streamline_mapping(streamlines, affine=np.eye(4), ... mapping_as_streamlines=True) >>> mapping[1, 2, 3][0] is streamlines[1] True """ cdef: cnp.npy_intp[:, :] voxel_indices lin, offset = _mapping_to_voxel(affine) if mapping_as_streamlines: streamlines = list(streamlines) mapping = {} for i, sl in enumerate(streamlines): voxel_indices = _to_voxel_coordinates(sl, lin, offset) # Get the unique voxels every streamline passes through uniq_points = set() for j in range(voxel_indices.shape[0]): point = (voxel_indices[j, 0], voxel_indices[j, 1], voxel_indices[j, 2]) uniq_points.add(point) # Add the index of this streamline for each unique voxel for point in uniq_points: if point in mapping: mapping[point].append(i) else: mapping[point] = [i] # If mapping_as_streamlines replace ids with streamlines if mapping_as_streamlines: for key in mapping: mapping[key] = [streamlines[i] for i in mapping[key]] return mapping @cython.boundscheck(False) @cython.wraparound(False) cdef inline cnp.double_t norm(cnp.double_t x, cnp.double_t y, cnp.double_t z) noexcept nogil: cdef cnp.double_t val = sqrt(x*x + y*y + z*z) return val # Changing this to a memview was slower. @cython.boundscheck(False) @cython.wraparound(False) cdef inline void c_get_closest_edge(cnp.double_t* p, cnp.double_t* direction, cnp.double_t* edge, double eps=1.) 
cdef inline void c_get_closest_edge(cnp.double_t* p,
                                    cnp.double_t* direction,
                                    cnp.double_t* edge,
                                    double eps=1.) noexcept nogil:
    edge[0] = floor(p[0] + eps) if direction[0] >= 0.0 else ceil(p[0] - eps)
    edge[1] = floor(p[1] + eps) if direction[1] >= 0.0 else ceil(p[1] - eps)
    edge[2] = floor(p[2] + eps) if direction[2] >= 0.0 else ceil(p[2] - eps)


@cython.boundscheck(False)
@cython.wraparound(False)
@cython.cdivision(True)
def _streamlines_in_mask(list streamlines,
                         cnp.uint8_t[:,:,:] mask,
                         lin_T, offset):
    """Filters streamlines based on whether or not they pass through a ROI,
    using a line-based algorithm for compressed streamlines.

    This function is private because it's supposed to be called only by
    tracking.utils.target_line_based.

    Parameters
    ----------
    streamlines : sequence
        A sequence of streamlines.
    mask : array-like (uint8)
        A mask used as a target. Non-zero values are considered to be within
        the target region.
    lin_T : array (3, 3)
        Transpose of the linear part of the mapping to voxel space. Obtained
        with `_mapping_to_voxel`.
    offset : array or scalar
        Mapping to voxel space. Obtained with `_mapping_to_voxel`.

    Returns
    -------
    in_mask : 1D array of bool (uint8), one for each streamline.
        1 if the streamline passes through the mask, 0 otherwise
        (2 for a single-point streamline)
    """
    cdef cnp.double_t[:,:] voxel_indices
    cdef cnp.npy_intp nb_streamlines = len(streamlines)
    cdef cnp.uint8_t[:] in_mask = np.zeros(nb_streamlines, dtype=np.uint8)

    cdef cnp.npy_intp streamline_idx
    for streamline_idx in range(nb_streamlines):
        # Can't call _to_voxel_coordinates because it casts to int
        voxel_indices = np.dot(streamlines[streamline_idx], lin_T) + offset
        in_mask[streamline_idx] = _streamline_in_mask(voxel_indices, mask)

    return np.asarray(in_mask)


@cython.boundscheck(False)
@cython.wraparound(False)
@cython.cdivision(True)
cdef cnp.npy_intp _streamline_in_mask(
        cnp.double_t[:,:] streamline,
        cnp.uint8_t[:,:,:] mask) noexcept nogil:
    """ Check if a single streamline is passing through a mask.

    This is a utility function to make streamlines_in_mask() more readable.
    """
    cdef cnp.double_t *current_pt = [0.0, 0.0, 0.0]
    cdef cnp.double_t *next_pt = [0.0, 0.0, 0.0]
    cdef cnp.double_t *direction = [0.0, 0.0, 0.0]
    cdef cnp.double_t *current_edge = [0.0, 0.0, 0.0]
    cdef cnp.double_t direction_norm, remaining_distance
    cdef cnp.double_t length_ratio, half_ratio
    cdef cnp.npy_intp point_idx, dim_idx
    cdef cnp.npy_intp x, y, z

    if streamline.shape[0] <= 1:
        return 2  # Single-point streamline

    # This loop is time-critical
    # Changed to -1 because we get the next point in the loop
    for point_idx in range(streamline.shape[0] - 1):
        # Assign current and next point, find vector between both,
        # and use the current point as nearest edge for testing.
        for dim_idx in range(3):
            current_pt[dim_idx] = streamline[point_idx, dim_idx]
            next_pt[dim_idx] = streamline[point_idx + 1, dim_idx]
            direction[dim_idx] = next_pt[dim_idx] - current_pt[dim_idx]
            current_edge[dim_idx] = current_pt[dim_idx]

        # Set the "remaining_distance" var to compute remaining length of
        # vector to process
        direction_norm = norm(direction[0], direction[1], direction[2])
        remaining_distance = direction_norm

        # If consecutive coordinates are the same, skip one.
        if direction_norm == 0:
            continue

        # Check if it's already a real edge. If not, find the closest edge.
        if floor(current_edge[0]) != current_edge[0] and \
           floor(current_edge[1]) != current_edge[1] and \
           floor(current_edge[2]) != current_edge[2]:
            # All coordinates are not "integers", and therefore, not on the
            # edge. Fetch the closest edge.
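            # (c_get_closest_edge picks, per axis, the next voxel boundary
            # in the direction of travel, so the edge-walking loop below
            # always advances toward a boundary the segment will cross.)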
            c_get_closest_edge(current_pt, direction, current_edge)

        while True:
            # Compute the smallest ratio of direction's length to get to an
            # edge. This effectively means we find the first edge
            # encountered

            # Set large value for length_ratio
            length_ratio = 10000
            for dim_idx in range(3):
                if direction[dim_idx] != 0:
                    length_ratio = fmin(
                        fabs((current_edge[dim_idx] - current_pt[dim_idx])
                             / direction[dim_idx]),
                        length_ratio)

            # Check if last point is already on an edge
            remaining_distance -= length_ratio * direction_norm
            if remaining_distance < 0 and not fabs(remaining_distance) < 1e-8:
                break

            # Find the coordinates of voxel containing current point, to
            # tag it in the map
            half_ratio = 0.5 * length_ratio
            x = floor(current_pt[0] + half_ratio * direction[0])
            y = floor(current_pt[1] + half_ratio * direction[1])
            z = floor(current_pt[2] + half_ratio * direction[2])

            if 0 <= x < mask.shape[0] and \
               0 <= y < mask.shape[1] and \
               0 <= z < mask.shape[2]:
                if mask[x, y, z]:
                    return 1

            # current_pt is moved to the closest edge
            for dim_idx in range(3):
                current_pt[dim_idx] = \
                    length_ratio * direction[dim_idx] + current_pt[dim_idx]
                # Snap really small values to 0.
                if fabs(current_pt[dim_idx]) <= 1e-16:
                    current_pt[dim_idx] = 0.0

            c_get_closest_edge(current_pt, direction, current_edge)

    # Check last point
    x = floor(next_pt[0])
    y = floor(next_pt[1])
    z = floor(next_pt[2])
    if 0 <= x < mask.shape[0] and \
       0 <= y < mask.shape[1] and \
       0 <= z < mask.shape[2]:
        if mask[x, y, z]:
            return 1

    return 0


@cython.boundscheck(False)
@cython.wraparound(False)
@cython.profile(False)
def track_counts(tracks, vol_dims, vox_sizes=(1,1,1), return_elements=True):
    """ Counts of points in `tracks` that pass through voxels in volume

    We find whether a track passed through a voxel by rounding the mm point
    values to voxels. For a track that passes through a voxel more than once,
    we only record counts and elements for the first point in the line that
    enters the voxel.

    Parameters
    ----------
    tracks : sequence
        sequence of T tracks. One track is an ndarray of shape (N, 3), where
        N is the number of points in that track, and ``tracks[t][n]`` is the
        n-th point in the t-th track. Points are of form x, y, z in *voxel
        mm* coordinates. That is, if ``i, j, k`` is the possibly non-integer
        voxel coordinate of the track point, and `vox_sizes` are 3 floats
        giving voxel sizes of dimensions 0, 1, 2 respectively, then the voxel
        mm coordinates ``x, y, z`` are simply ``i * vox_sizes[0],
        j * vox_sizes[1], k * vox_sizes[2]``. This convention derives from
        trackvis. To pass in tracks as voxel coordinates, just pass
        ``vox_sizes=(1, 1, 1)`` (see below).
    vol_dims : sequence length 3
        volume dimensions in voxels, x, y, z.
    vox_sizes : optional, sequence length 3
        voxel sizes in mm. Default is (1,1,1)
    return_elements : {True, False}, optional
        If True, also return a dict with one entry per voxel giving the
        indices of the tracks passing through that voxel (see below)

    Returns
    -------
    tcs : ndarray shape `vol_dim`
       An array where entry ``tcs[x, y, z]`` is the number of tracks that
       passed through voxel at voxel coordinate x, y, z
    tes : dict
       If `return_elements` is True, we also return a dict keyed by the
       ``(x, y, z)`` coordinate tuples of voxels with nonzero counts. The
       value at each key is the list of indices of the tracks that passed
       through the voxel.

    Examples
    --------
    Imagine you have a volume (voxel) space of dimension ``(10,20,30)``.
    Imagine you had voxel coordinate tracks in ``vs``.
To just fill an array with the counts of how many tracks pass through each voxel: >>> vox_track0 = np.array([[0, 0, 0], [1.1, 2.2, 3.3], [2.2, 4.4, 6.6]]) >>> vox_track1 = np.array([[0, 0, 0], [0, 0, 1], [0, 0, 2]]) >>> vs = (vox_track0, vox_track1) >>> vox_dim = (10, 20, 30) # original voxel array size >>> tcs = track_counts(vs, vox_dim, (1, 1, 1), False) >>> tcs.shape (10, 20, 30) >>> tcs[0, 0, 0:4] array([2, 1, 1, 0]) >>> tcs[1, 2, 3], tcs[2, 4, 7] (1, 1) You can also use the routine to count into larger-than-voxel boxes. To do this, increase the voxel size and decrease the ``vox_dim`` accordingly: >>> tcs=track_counts(vs, (10/2., 20/2., 30/2.), (2,2,2), False) >>> tcs.shape (5, 10, 15) >>> tcs[1,1,2], tcs[1,2,3] (1, 1) """ vol_dims = np.asarray(vol_dims).astype(int) vox_sizes = np.asarray(vox_sizes).astype(np.double) n_voxels = np.prod(vol_dims) # output track counts array, flattened cdef cnp.ndarray[cnp.npy_intp, ndim=1] tcs = \ np.zeros((n_voxels,), dtype=np.intp) # pointer to output track indices cdef cnp.npy_intp i if return_elements: el_inds = {} # cython numpy pointer to individual track array cdef cnp.ndarray[cnp.float_t, ndim=2] t # cython numpy pointer to point in track array cdef cnp.ndarray[cnp.float_t, ndim=1] in_pt # processed point cdef int out_pt[3] # various temporary loop and working variables cdef int tno, pno, cno cdef cnp.npy_intp el_no, v # fill native C arrays from inputs cdef int vd[3] cdef double vxs[3] for cno in range(3): vd[cno] = vol_dims[cno] vxs[cno] = vox_sizes[cno] # return_elements to C native cdef int ret_elf = return_elements # x slice size (C array ordering) cdef cnp.npy_intp yz = vd[1] * vd[2] for tno in range(len(tracks)): t = tracks[tno].astype(float) # set to find unique voxel points in track in_inds = set() # the loop below is time-critical for pno in range(cnp.PyArray_DIM(t, 0)): in_pt = t[pno] # Round to voxel coordinates, and set coordinates outside # volume to volume edges for cno in range(3): v = floor(in_pt[cno] / vxs[cno] + 0.5) if v < 0: v = 0 elif v >= vd[cno]: v = vd[cno]-1 # last index for this dimension out_pt[cno] = v # calculate element number in flattened tcs array el_no = out_pt[0] * yz + out_pt[1] * vd[2] + out_pt[2] # discard duplicates if el_no in in_inds: continue in_inds.add(el_no) # set elements into object array if ret_elf: key = (out_pt[0], out_pt[1], out_pt[2]) val = tno if tcs[el_no]: el_inds[key].append(val) else: el_inds[key] = [val] # set value into counts tcs[el_no] += 1 if ret_elf: return tcs.reshape(vol_dims), el_inds return tcs.reshape(vol_dims) dipy-1.11.0/dipy/utils/000077500000000000000000000000001476546756600146765ustar00rootroot00000000000000dipy-1.11.0/dipy/utils/__init__.py000066400000000000000000000000421476546756600170030ustar00rootroot00000000000000# code support utilities for dipy dipy-1.11.0/dipy/utils/arrfuncs.py000066400000000000000000000034771476546756600171060ustar00rootroot00000000000000"""Utilities to manipulate numpy arrays""" from nibabel.volumeutils import endian_codes, native_code import numpy as np from dipy.testing.decorators import warning_for_keywords def as_native_array(arr): """Return `arr` as native byteordered array If arr is already native byte ordered, return unchanged. If it is opposite endian, then make a native byte ordered copy and return that Parameters ---------- arr : ndarray Returns ------- native_arr : ndarray If `arr` was native order, this is just `arr`. Otherwise it's a new array such that ``np.all(native_arr == arr)``, with native byte ordering. 
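    Examples
    --------
    A small round-trip check (works the same on little- and big-endian
    machines):

    >>> import numpy as np
    >>> arr = np.arange(5)
    >>> as_native_array(arr) is arr
    True
    >>> swapped = arr.astype(arr.dtype.newbyteorder())
    >>> bool(np.all(as_native_array(swapped) == arr))
    True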
""" if endian_codes[arr.dtype.byteorder] == native_code: return arr return arr.view(arr.dtype.newbyteorder()).byteswap() @warning_for_keywords() def pinv(a, *, rcond=1e-15): """Vectorized version of `numpy.linalg.pinv` If numpy version is less than 1.8, it falls back to iterating over `np.linalg.pinv` since there isn't a vectorized version of `np.linalg.svd` available. Parameters ---------- a : array_like (..., M, N) Matrix to be pseudo-inverted. rcond : float Cutoff for small singular values. Returns ------- B : ndarray (..., N, M) The pseudo-inverse of `a`. Raises ------ LinAlgError If the SVD computation does not converge. See Also -------- np.linalg.pinv """ a = np.asarray(a) swap = np.arange(a.ndim) swap[[-2, -1]] = swap[[-1, -2]] u, s, v = np.linalg.svd(a, full_matrices=False) cutoff = np.maximum.reduce(s, axis=-1, keepdims=True) * rcond mask = s > cutoff s[mask] = 1.0 / s[mask] s[~mask] = 0 return np.einsum( "...ij,...jk", np.transpose(v, swap) * s[..., None, :], np.transpose(u, swap) ) dipy-1.11.0/dipy/utils/compatibility.py000066400000000000000000000040731476546756600201250ustar00rootroot00000000000000"""Utility functions for checking different stuffs.""" import operator from packaging import version def check_version(package_name, ver, *, operator=operator.ge): """Check if the package is installed and the version satisfies the operator. Parameters ---------- package_name : str The name of the package to check. ver : str The version to check against. operator : callable, optional The operator to use for the comparison. Returns ------- bool True if the package is installed and the version satisfies the operator. """ try: pkg = __import__(package_name) except ImportError: return False if hasattr(pkg, "__version__"): return operator(version.parse(pkg.__version__), version.parse(ver)) return False def check_min_version(package_name, min_version, *, strict=False): """Check if the package is installed and the version is at least min_version. Parameters ---------- package_name : str The name of the package to check. min_version : str The minimum version required. strict : bool, optional If True, the version must be strictly greater than min_version. Returns ------- bool True if the package is installed and the version is at least min_version. """ op = operator.gt if strict else operator.ge return check_version(package_name, min_version, operator=op) def check_max_version(package_name, max_version, *, strict=False): """Check if the package is installed and the version is at most max_version. Parameters ---------- package_name : str The name of the package to check. max_version : str The maximum version required. strict : bool, optional If True, the version must be strictly less than max_version. Returns ------- bool True if the package is installed and the version is at most max_version. """ op = operator.lt if strict else operator.le return check_version(package_name, max_version, operator=op) dipy-1.11.0/dipy/utils/convert.py000066400000000000000000000014131476546756600167270ustar00rootroot00000000000000"""Module for different conversion functions.""" def expand_range(range_str): """Convert a string with ranges to a list of integers. Parameters ---------- range_str : str string with ranges, e.g. 
        '1,2,3-5,7'

    Returns
    -------
    list(int)

    """
    range_str = range_str.replace(" ", "").strip()
    range_str = range_str[:-1] if range_str.endswith(",") else range_str
    result = []
    for part in range_str.split(","):
        if not part.replace("-", "").isdigit():
            raise ValueError(f"Incorrect character found in input: {part}")
        if "-" in part:
            start, end = map(int, part.split("-"))
            result += list(range(start, end + 1))
        else:
            result.append(int(part))
    return result
dipy-1.11.0/dipy/utils/deprecator.py000066400000000000000000000366111476546756600174070ustar00rootroot00000000000000"""Functions for recording and reporting deprecations.

Note
----
This file is copied (with minor modifications) from Nibabel.
https://github.com/nipy/nibabel. See COPYING file distributed along with
the Nibabel package for the copyright and license terms.

"""

import functools
from inspect import signature
import re
import warnings

from packaging.version import parse as version_cmp

from dipy import __version__
from dipy.testing.decorators import warning_for_keywords

_LEADING_WHITE = re.compile(r"^(\s*)")


class ExpiredDeprecationError(RuntimeError):
    """Error for expired deprecation.

    Error raised when a called function or method has passed out of its
    deprecation period.

    """

    pass


class ArgsDeprecationWarning(DeprecationWarning):
    """Warning for args deprecation.

    Warning raised when a function or method argument has changed or been
    removed.

    """

    pass


def _ensure_cr(text):
    """Remove trailing whitespace and add carriage return.

    Ensures that `text` always ends with a carriage return
    """
    return text.rstrip() + "\n"


def _add_dep_doc(old_doc, dep_doc):
    """Add deprecation message `dep_doc` to docstring in `old_doc`.

    Parameters
    ----------
    old_doc : str
        Docstring from some object.
    dep_doc : str
        Deprecation warning to add to top of docstring, after initial line.

    Returns
    -------
    new_doc : str
        `old_doc` with `dep_doc` inserted after any first lines of docstring.
    """
    dep_doc = _ensure_cr(dep_doc)
    if not old_doc:
        return dep_doc
    old_doc = _ensure_cr(old_doc)
    old_lines = old_doc.splitlines()
    new_lines = []
    for _line_no, line in enumerate(old_lines):
        if line.strip():
            new_lines.append(line)
        else:
            break
    next_line = _line_no + 1
    if next_line >= len(old_lines):
        # nothing following first paragraph, just append message
        return f"{old_doc}\n{dep_doc}"
    indent = _LEADING_WHITE.match(old_lines[next_line]).group()
    dep_lines = [indent + L for L in [""] + dep_doc.splitlines() + [""]]
    return "\n".join(new_lines + dep_lines + old_lines[next_line:]) + "\n"


@warning_for_keywords()
def cmp_pkg_version(version_str, *, pkg_version_str=__version__):
    """Compare `version_str` to current package version.

    Parameters
    ----------
    version_str : str
        Version string to compare to current package version
    pkg_version_str : str, optional
        Version of our package. Optional, set from ``__version__`` by
        default.

    Returns
    -------
    version_cmp : int
        1 if `version_str` is a later version than `pkg_version_str`, 0 if
        same, -1 if earlier.
    Examples
    --------
    >>> cmp_pkg_version('1.2.1', pkg_version_str='1.2.0')
    1
    >>> cmp_pkg_version('1.2.0dev', pkg_version_str='1.2.0')
    -1

    """
    if any(re.match(r"^[a-z, A-Z]", v) for v in [version_str, pkg_version_str]):
        msg = f"Invalid version {version_str} or {pkg_version_str}"
        raise ValueError(msg)
    elif version_cmp(version_str) > version_cmp(pkg_version_str):
        return 1
    elif version_cmp(version_str) == version_cmp(pkg_version_str):
        return 0
    else:
        return -1


@warning_for_keywords()
def is_bad_version(version_str, *, version_comparator=cmp_pkg_version):
    """Return True if `version_str` is earlier than the current package
    version, i.e., if the deprecation period it marks has expired."""
    return version_comparator(version_str) == -1


@warning_for_keywords()
def deprecate_with_version(
    message,
    *,
    since="",
    until="",
    version_comparator=cmp_pkg_version,
    warn_class=DeprecationWarning,
    error_class=ExpiredDeprecationError,
):
    """Return a decorator function for deprecation warnings / errors.

    The decorated function / method will:

    * Raise the given `warn_class` warning when the function / method gets
      called, up to (and including) version `until` (if specified);

    * Raise the given `error_class` error when the function / method gets
      called, when the package version is greater than version `until` (if
      specified).

    Parameters
    ----------
    message : str
        Message explaining deprecation, giving possible alternatives.
    since : str, optional
        Released version at which object was first deprecated.
    until : str, optional
        Last released version at which this function will still raise a
        deprecation warning. Versions higher than this will raise an error.
    version_comparator : callable
        Callable accepting string as argument, and return 1 if string
        represents a higher version than encoded in the
        `version_comparator`, 0 if the version is equal, and -1 if the
        version is lower. For example, the `version_comparator` may compare
        the input version string to the current package version string.
    warn_class : class, optional
        Class of warning to generate for deprecation.
    error_class : class, optional
        Class of error to generate when `version_comparator` returns 1 for a
        given argument of ``until``.

    Returns
    -------
    deprecator : func
        Function returning a decorator.
    """
    messages = [message]
    if (since, until) != ("", ""):
        messages.append("")
    if since:
        messages.append(f"* deprecated from version: {since}")
    if until:
        raise_will_raise = "Raises" if is_bad_version(until) else "Will raise"
        messages.append(f"* {raise_will_raise} {error_class} as of version: {until}")
    message = "\n".join(messages)

    def deprecator(func):
        @functools.wraps(func)
        def deprecated_func(*args, **kwargs):
            if until and is_bad_version(until, version_comparator=version_comparator):
                raise error_class(message)
            warnings.warn(message, warn_class, stacklevel=2)
            return func(*args, **kwargs)

        deprecated_func.__doc__ = _add_dep_doc(deprecated_func.__doc__, message)
        return deprecated_func

    return deprecator


@warning_for_keywords()
def deprecated_params(
    old_name,
    *,
    new_name=None,
    since="",
    until="",
    version_comparator=cmp_pkg_version,
    arg_in_kwargs=False,
    warn_class=ArgsDeprecationWarning,
    error_class=ExpiredDeprecationError,
    alternative="",
):
    """Deprecate a *renamed* or *removed* function argument.

    The decorator assumes that the argument with the ``old_name`` was removed
    from the function signature and the ``new_name`` replaced it at the
    **same position** in the signature. If the ``old_name`` argument is given
    when calling the decorated function, the decorator will catch it, issue a
    deprecation warning, and pass it on as the ``new_name`` argument.
    Parameters
    ----------
    old_name : str or list/tuple thereof
        The old name of the argument.
    new_name : str or list/tuple thereof or ``None``, optional
        The new name of the argument. Set this to `None` to remove the
        argument ``old_name`` instead of renaming it.
    since : str or number or list/tuple thereof, optional
        The release at which the old argument became deprecated.
    until : str or number or list/tuple thereof, optional
        Last released version at which this function will still raise a
        deprecation warning. Versions higher than this will raise an error.
    version_comparator : callable
        Callable accepting string as argument, and return 1 if string
        represents a higher version than encoded in the
        ``version_comparator``, 0 if the version is equal, and -1 if the
        version is lower. For example, the ``version_comparator`` may compare
        the input version string to the current package version string.
    arg_in_kwargs : bool or list/tuple thereof, optional
        If the argument is not a named argument (for example it was meant to
        be consumed by ``**kwargs``) set this to ``True``. Otherwise the
        decorator will throw an Exception if the ``new_name`` cannot be found
        in the signature of the decorated function. Default is ``False``.
    warn_class : warning, optional
        Warning to be issued.
    error_class : Exception, optional
        Error to be issued.
    alternative : str, optional
        An alternative function or class name that the user may use in place
        of the deprecated object if ``new_name`` is None. The deprecation
        warning will tell the user about this alternative if provided.

    Raises
    ------
    TypeError
        If the new argument name cannot be found in the function signature
        and arg_in_kwargs was False, or if it is used to deprecate the name
        of the ``*args``-, ``**kwargs``-like arguments. At runtime such an
        error is also raised if both the new_name and old_name were specified
        when calling the function.

    Notes
    -----
    This function is based on Astropy (with major modifications).
    https://github.com/astropy/astropy. See COPYING file distributed along
    with the astropy package for the copyright and license terms.

    Examples
    --------
    The deprecation warnings are not shown in the following examples.

    To deprecate a positional or keyword argument::

        >>> from dipy.utils.deprecator import deprecated_params
        >>> @deprecated_params('sig', new_name='sigma', since='0.3')
        ... def test(sigma):
        ...     return sigma
        >>> test(2)
        2
        >>> test(sigma=2)
        2
        >>> test(sig=2)  # doctest: +SKIP
        2

    It is also possible to replace multiple arguments. The ``old_name``,
    ``new_name`` and ``since`` have to be `tuple` or `list` and contain the
    same number of entries::

        >>> @deprecated_params(['a', 'b'], new_name=['alpha', 'beta'],
        ...                    since=['0.2', 0.4])
        ... def test(alpha, beta):
        ...     return alpha, beta
        >>> test(a=2, b=3)  # doctest: +SKIP
        (2, 3)

    """
    if isinstance(old_name, (list, tuple)):
        # Normalize input parameters
        if not isinstance(arg_in_kwargs, (list, tuple)):
            arg_in_kwargs = [arg_in_kwargs] * len(old_name)
        if not isinstance(since, (list, tuple)):
            since = [since] * len(old_name)
        if not isinstance(until, (list, tuple)):
            until = [until] * len(old_name)
        if not isinstance(new_name, (list, tuple)):
            new_name = [new_name] * len(old_name)

        if (
            len(
                {
                    len(old_name),
                    len(new_name),
                    len(since),
                    len(until),
                    len(arg_in_kwargs),
                }
            )
            != 1
        ):
            raise ValueError("All parameters should have the same length")
    else:
        # To allow a uniform approach later on, wrap all arguments in lists.
        old_name = [old_name]
        new_name = [new_name]
        since = [since]
        until = [until]
        arg_in_kwargs = [arg_in_kwargs]

    def deprecator(function):
        # The named arguments of the function.
        arguments = signature(function).parameters
        positions = [None] * len(old_name)

        for i, (o_name, n_name, in_keywords) in enumerate(
            zip(old_name, new_name, arg_in_kwargs)
        ):
            # Determine the position of the argument.
            if in_keywords:
                continue

            if n_name is not None and n_name not in arguments:
                # In case the argument is not found in the list of arguments
                # the only remaining possibility is that it should be caught
                # by some kind of **kwargs argument.
                msg = f'"{n_name}" was not specified in the function '
                msg += "signature. If it was meant to be part of "
                msg += '"**kwargs" then set "arg_in_kwargs" to "True"'
                raise TypeError(msg)

            key = o_name if n_name is None else n_name
            param = arguments[key]

            if param.kind == param.POSITIONAL_OR_KEYWORD:
                positions[i] = list(arguments.keys()).index(key)
            elif param.kind == param.KEYWORD_ONLY:
                # These cannot be specified by position.
                positions[i] = None
            else:
                # positional-only argument, varargs, varkwargs or some
                # unknown type:
                msg = f'cannot replace argument "{n_name}" '
                msg += f"of kind {repr(param.kind)}."
                raise TypeError(msg)

        @functools.wraps(function)
        def wrapper(*args, **kwargs):
            for i, (o_name, n_name) in enumerate(zip(old_name, new_name)):
                messages = [
                    f'"{o_name}" was deprecated',
                ]
                if (since[i], until[i]) != ("", ""):
                    messages.append("")
                if since[i]:
                    messages.append(f"* deprecated from version: {since[i]}")
                if until[i]:
                    raise_will_raise = (
                        "Raises" if is_bad_version(until[i]) else "Will raise"
                    )
                    messages.append(
                        f"* {raise_will_raise} {error_class} as of version: {until[i]}"
                    )
                messages.append("")
                message = "\n".join(messages)

                # The only way to have oldkeyword inside the function is
                # that it is passed as kwarg because the oldkeyword
                # parameter was renamed to newkeyword.
                if o_name in kwargs:
                    value = kwargs.pop(o_name)
                    # Check if the newkeyword was given as well.
                    newarg_in_args = (
                        positions[i] is not None and len(args) > positions[i]
                    )
                    newarg_in_kwargs = n_name in kwargs

                    if newarg_in_args or newarg_in_kwargs:
                        msg = f'cannot specify both "{o_name}"'
                        msg += " (deprecated parameter) and "
                        msg += f'"{n_name}" (new parameter name).'
                        raise TypeError(msg)

                    # Pass the value of the old argument with the
                    # name of the new argument to the function
                    key = n_name or o_name
                    kwargs[key] = value

                    if n_name is not None:
                        message += f'* Use argument "{n_name}" instead.'
                    elif alternative:
                        message += f"* Use {alternative} instead."

                    if until[i] and is_bad_version(
                        until[i], version_comparator=version_comparator
                    ):
                        raise error_class(message)
                    warnings.warn(message, warn_class, stacklevel=2)

                # Deprecated keyword without replacement is given as
                # positional argument. (Compare against None explicitly:
                # position 0 is a valid, falsy index.)
                elif (
                    not n_name
                    and positions[i] is not None
                    and len(args) > positions[i]
                ):
                    if alternative:
                        message += f"* Use {alternative} instead."
if until[i] and is_bad_version( until[i], version_comparator=version_comparator ): raise error_class(message) warnings.warn(message, warn_class, stacklevel=2) return function(*args, **kwargs) return wrapper return deprecator dipy-1.11.0/dipy/utils/fast_numpy.pxd000066400000000000000000000033151476546756600176020ustar00rootroot00000000000000# cython: boundscheck=False # cython: initializedcheck=False # cython: wraparound=False cimport numpy as cnp from libc.stdlib cimport rand, srand, RAND_MAX from libc.math cimport sqrt cdef void take( double* odf, cnp.npy_intp* indices, cnp.npy_intp n_indices, double* values_out) noexcept nogil # Replaces a numpy.searchsorted(arr, number, 'right') cdef int where_to_insert( cnp.float_t* arr, cnp.float_t number, int size) noexcept nogil cdef void cumsum( cnp.float_t* arr_in, cnp.float_t* arr_out, int N) noexcept nogil cdef void copy_point( double * a, double * b) noexcept nogil cdef void scalar_muliplication_point( double * a, double scalar) noexcept nogil cdef double norm( double * v) noexcept nogil cdef double dot( double * v1, double * v2) noexcept nogil cdef void normalize( double * v) noexcept nogil cdef void cross( double * out, double * v1, double * v2) noexcept nogil cdef void random_vector( double * out, RNGState* rng=*) noexcept nogil cdef void random_perpendicular_vector( double * out, double * v, RNGState* rng=*) noexcept nogil cdef (double, double) random_point_within_circle( double r, RNGState* rng=*) noexcept nogil cpdef double random() noexcept nogil cpdef void seed(cnp.npy_uint32 s) noexcept nogil cdef struct RNGState: cnp.npy_uint64 state cnp.npy_uint64 inc cdef void seed_rng(RNGState* rng_state, cnp.npy_uint64 seed) noexcept nogil cdef cnp.npy_uint32 next_rng(RNGState* rng_state) noexcept nogil cdef double random_float(RNGState* rng_state) noexcept nogil dipy-1.11.0/dipy/utils/fast_numpy.pyx000066400000000000000000000146121476546756600176310ustar00rootroot00000000000000# cython: boundscheck=False # cython: initializedcheck=False # cython: wraparound=False from libc.stdio cimport printf cimport numpy as cnp cdef void take( double* odf, cnp.npy_intp* indices, cnp.npy_intp n_indices, double* values_out) noexcept nogil: """ Mimic np.take(odf, indices) in Cython using pointers. Parameters ---------- odf : double* Pointer to the input array from which values are taken. indices : npy_intp* Pointer to the array of indices specifying which elements to take. n_indices : npy_intp Number of indices to process. values_out : double* Pointer to the output array where the selected values will be stored. """ cdef int i for i in range(n_indices): values_out[i] = odf[indices[i]] cdef int where_to_insert(cnp.float_t* arr, cnp.float_t number, int size) noexcept nogil: cdef: int idx cnp.float_t current for idx in range(size - 1, -1, -1): current = arr[idx] if number >= current: return idx + 1 return 0 cdef void cumsum(cnp.float_t* arr_in, cnp.float_t* arr_out, int N) noexcept nogil: cdef: cnp.npy_intp i = 0 cnp.float_t csum = 0 for i in range(N): csum += arr_in[i] arr_out[i] = csum cdef void copy_point(double * a, double * b) noexcept nogil: cdef: cnp.npy_intp i = 0 for i in range(3): b[i] = a[i] cdef void scalar_muliplication_point(double * a, double scalar) noexcept nogil: cdef: cnp.npy_intp i = 0 for i in range(3): a[i] *= scalar cdef double norm(double * v) noexcept nogil: """Compute the vector norm. Parameters ---------- v : double[3] input vector. 
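    Returns
    -------
    _ : double
        Euclidean norm of the input vector.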
""" return sqrt(v[0] * v[0] + v[1] * v[1] + v[2] * v[2]) cdef double dot(double * v1, double * v2) noexcept nogil: """Compute vectors dot product. Parameters ---------- v1 : double[3] input vector 1. v2 : double[3] input vector 2. Returns ------- _ : double dot product of input vectors. """ return v1[0] * v2[0] + v1[1] * v2[1] + v1[2] * v2[2] cdef void normalize(double * v) noexcept nogil: """Normalize the vector. Parameters ---------- v : double[3] input vector Notes ----- Overwrites the first argument. """ cdef double scale = 1.0 / norm(v) v[0] = v[0] * scale v[1] = v[1] * scale v[2] = v[2] * scale cdef void cross(double * out, double * v1, double * v2) noexcept nogil: """Compute vectors cross product. Parameters ---------- out : double[3] output vector. v1 : double[3] input vector 1. v2 : double[3] input vector 2. Notes ----- Overwrites the first argument. """ out[0] = v1[1] * v2[2] - v1[2] * v2[1] out[1] = v1[2] * v2[0] - v1[0] * v2[2] out[2] = v1[0] * v2[1] - v1[1] * v2[0] cdef void random_vector(double * out, RNGState* rng=NULL) noexcept nogil: """Generate a unit random vector Parameters ---------- out : double[3] output vector Notes ----- Overwrites the input """ if rng == NULL: out[0] = 2.0 * random() - 1.0 out[1] = 2.0 * random() - 1.0 out[2] = 2.0 * random() - 1.0 else: out[0] = 2.0 * random_float(rng) - 1.0 out[1] = 2.0 * random_float(rng) - 1.0 out[2] = 2.0 * random_float(rng) - 1.0 normalize(out) cdef void random_perpendicular_vector(double * out, double * v, RNGState* rng=NULL) noexcept nogil: """Generate a random perpendicular vector Parameters ---------- out : double[3] output vector v : double[3] input vector Notes ----- Overwrites the first argument """ cdef double[3] tmp random_vector(tmp, rng) cross(out, v, tmp) normalize(out) cdef (double, double) random_point_within_circle(double r, RNGState* rng=NULL) noexcept nogil: """Generate a random point within a circle Parameters ---------- r : double The radius of the circle Returns ------- x : double x coordinate of the random point y : double y coordinate of the random point """ cdef double x = 1.0 cdef double y = 1.0 if rng == NULL: while (x * x + y * y) > 1: x = 2.0 * random() - 1.0 y = 2.0 * random() - 1.0 else: while (x * x + y * y) > 1: x = 2.0 * random_float(rng) - 1.0 y = 2.0 * random_float(rng) - 1.0 return (r * x, r * y) cpdef double random() noexcept nogil: """Sample a random number between 0 and 1. Returns ------- _ : double random number. """ return rand() / float(RAND_MAX) cpdef void seed(cnp.npy_uint32 s) noexcept nogil: """Set the random seed of stdlib. Parameters ---------- s : int random seed. """ srand(s) cdef void print_c_array_pointer(double* arr, int size) noexcept nogil: cdef int i for i in range(size): printf("%f, ", arr[i]) printf("\n\n\n") cdef void seed_rng(RNGState* rng_state, cnp.npy_uint64 seed) noexcept nogil: """ Seed the RNG state (thread-safe). Parameters ---------- rng_state : RNGState* The RNG state. seed : int The seed value. """ rng_state.state = 0 rng_state.inc = (seed << 1) | 1 next_rng(rng_state) # Advance the state rng_state.state += seed next_rng(rng_state) cdef cnp.npy_uint32 next_rng(RNGState* rng_state) noexcept nogil: """ Generate the next random number (thread-safe, PCG algorithm). Parameters ---------- rng_state : RNGState* The RNG state. Returns ------- _ : int The next random number. 
""" cdef cnp.npy_uint64 oldstate = rng_state.state rng_state.state = oldstate * 6364136223846793005ULL + rng_state.inc cdef cnp.npy_uint32 xorshifted = ((oldstate >> 18) ^ oldstate) >> 27 cdef cnp.npy_uint32 rot = oldstate >> 59 return (xorshifted >> rot) | (xorshifted << ((-rot) & 31)) cdef double random_float(RNGState* rng_state) noexcept nogil: """ Generate a random float in [0, 1) (thread-safe). Parameters ---------- rng_state : RNGState* The RNG state. Returns ------- _ : float The random float. """ return next_rng(rng_state) / 0xffffffff dipy-1.11.0/dipy/utils/meson.build000066400000000000000000000014341476546756600170420ustar00rootroot00000000000000cython_sources = [ 'fast_numpy', 'omp', ] cython_headers = [ 'fast_numpy.pxd', 'omp.pxd', ] foreach ext: cython_sources if fs.exists(ext + '.pxd') extra_args += ['--depfile', meson.current_source_dir() +'/'+ ext + '.pxd', ] endif py3.extension_module(ext, cython_gen.process(ext + '.pyx'), c_args: cython_c_args, include_directories: [incdir_numpy, inc_local], dependencies: [omp], install: true, subdir: 'dipy/utils' ) endforeach python_sources = [ '__init__.py', 'arrfuncs.py', 'compatibility.py', 'convert.py', 'deprecator.py', 'multiproc.py', 'optpkg.py', 'parallel.py', 'tractogram.py', 'tripwire.py', 'volume.py', ] py3.install_sources( python_sources, pure: false, subdir: 'dipy/utils' ) subdir('tests')dipy-1.11.0/dipy/utils/multiproc.py000066400000000000000000000024311476546756600172660ustar00rootroot00000000000000"""Function for determining the effective number of processes to be used.""" from multiprocessing import cpu_count from warnings import warn def determine_num_processes(num_processes): """Determine the effective number of processes for parallelization. - For `num_processes = None`` return the maximum number of cores retrieved by cpu_count(). - For ``num_processes > 0``, return this value. - For ``num_processes < 0``, return the maximal number of cores minus ``num_processes + 1``. In particular ``num_processes = -1`` will use as many cores as possible. - For ``num_processes = 0`` a ValueError is raised. Parameters ---------- num_processes : int or None Desired number of processes to be used. """ if not isinstance(num_processes, int) and num_processes is not None: raise TypeError("num_processes must be an int or None") if num_processes == 0: raise ValueError("num_processes cannot be 0") try: if num_processes is None: return cpu_count() if num_processes < 0: return max(1, cpu_count() + num_processes + 1) except NotImplementedError: warn("Cannot determine number of cores. Using only 1.", stacklevel=2) return 1 return num_processes dipy-1.11.0/dipy/utils/omp.pxd000066400000000000000000000001311476546756600162010ustar00rootroot00000000000000#!python cdef void set_num_threads(num_threads) cdef void restore_default_num_threads() dipy-1.11.0/dipy/utils/omp.pyx000066400000000000000000000060711476546756600162370ustar00rootroot00000000000000#!python import os cimport safe_openmp as openmp have_openmp = openmp.have_openmp __all__ = ['have_openmp', 'default_threads', 'cpu_count', 'thread_count'] def cpu_count(): """Return number of cpus as determined by omp_get_num_procs.""" if have_openmp: return openmp.omp_get_num_procs() else: return 1 def thread_count(): """Return number of threads as determined by omp_get_max_threads.""" if have_openmp: return openmp.omp_get_max_threads() else: return 1 def _get_default_threads(): """Default number of threads for OpenMP. This function prioritizes the OMP_NUM_THREADS environment variable. 
""" if have_openmp: try: default_threads = int(os.environ.get('OMP_NUM_THREADS', None)) if default_threads < 1: raise ValueError("invalid number of threads") except (ValueError, TypeError): default_threads = openmp.omp_get_num_procs() return default_threads else: return 1 default_threads = _get_default_threads() def determine_num_threads(num_threads): """Determine the effective number of threads to be used for OpenMP calls - For ``num_threads = None``, - if the ``OMP_NUM_THREADS`` environment variable is set, return that value - otherwise, return the maximum number of cores retrieved by ``openmp.opm_get_num_procs()``. - For ``num_threads > 0``, return this value. - For ``num_threads < 0``, return the maximal number of threads minus ``|num_threads + 1|``. In particular ``num_threads = -1`` will use as many threads as there are available cores on the machine. - For ``num_threads = 0`` a ValueError is raised. Parameters ---------- num_threads : int or None Desired number of threads to be used. """ if not isinstance(num_threads, int) and num_threads is not None: raise TypeError("num_threads must be an int or None") if num_threads == 0: raise ValueError("num_threads cannot be 0") if num_threads is None: return default_threads if num_threads < 0: return max(1, cpu_count() + num_threads + 1) return num_threads cdef void set_num_threads(num_threads): """Set the number of threads to be used by OpenMP This function does nothing if OpenMP is not available. Parameters ---------- num_threads : int Effective number of threads for OpenMP accelerated code. """ if openmp.have_openmp: openmp.omp_set_dynamic(0) openmp.omp_set_num_threads(num_threads) cdef void restore_default_num_threads(): """Restore OpenMP to using the default number of threads. This function does nothing if OpenMP is not available """ if openmp.have_openmp: openmp.omp_set_num_threads( default_threads) def _set_omp_threads(num_threads): """Function for testing set_num_threads.""" set_num_threads(num_threads) def _restore_omp_threads(): """Function for testing restore_default_num_threads.""" restore_default_num_threads() dipy-1.11.0/dipy/utils/optpkg.py000066400000000000000000000063211476546756600165560ustar00rootroot00000000000000"""Routines to support optional packages""" import importlib from packaging.version import Version try: import pytest except ImportError: have_pytest = False else: have_pytest = True from dipy.utils.tripwire import TripWire def optional_package(name, *, trip_msg=None, min_version=None): """Return package-like thing and module setup for package `name` Parameters ---------- name : str package name trip_msg : None or str message to give when someone tries to use the return package, but we could not import it, and have returned a TripWire object instead. Default message if None. min_version : None or str If not None, require that the imported package be at least this version. If the package has no ``__version__`` attribute, or if the version is not parseable, raise an error. Returns ------- pkg_like : module or ``TripWire`` instance If we can import the package, return it. Otherwise return an object raising an error when accessed have_pkg : bool True if import for package was successful, false otherwise module_setup : function callable usually set as ``setup_module`` in calling namespace, to allow skipping tests. 
    Examples
    --------
    Typical use would be something like this at the top of a module using an
    optional package:

    >>> from dipy.utils.optpkg import optional_package
    >>> pkg, have_pkg, setup_module = optional_package('not_a_package')

    Of course in this case the package doesn't exist, and so, in the module:

    >>> have_pkg
    False

    and

    >>> pkg.some_function() #doctest: +IGNORE_EXCEPTION_DETAIL
    Traceback (most recent call last):
        ...
    TripWireError: We need package not_a_package for these functions,
        but ``import not_a_package`` raised an ImportError

    If the module does exist - we get the module

    >>> pkg, _, _ = optional_package('os')
    >>> hasattr(pkg, 'path')
    True

    Or a submodule if that's what we asked for

    >>> subpkg, _, _ = optional_package('os.path')
    >>> hasattr(subpkg, 'dirname')
    True
    """
    try:
        pkg = importlib.import_module(name)
    except ImportError:
        pass
    else:  # import worked
        # top level module
        if not min_version:
            return pkg, True, lambda: None

        current_version = getattr(pkg, "__version__", "0.0.0")
        if Version(current_version) >= Version(min_version):
            return pkg, True, lambda: None

        if trip_msg is None:
            trip_msg = (
                f"We need at least version {min_version} of "
                f"package {name}, but ``import {name}`` "
                f"found version {current_version}."
            )
            if current_version == "0.0.0":
                trip_msg += " Your installation might be incomplete or corrupted."

    if trip_msg is None:
        trip_msg = (
            f"We need package {name} for these functions, but "
            f"``import {name}`` raised an ImportError"
        )
    pkg = TripWire(trip_msg)

    def setup_module():
        if have_pytest:
            pytest.mark.skip(f"No {name} for these tests")

    return pkg, False, setup_module
dipy-1.11.0/dipy/utils/parallel.py000066400000000000000000000137051476546756600170510ustar00rootroot00000000000000from collections.abc import Sequence
import json
import multiprocessing
import shutil
import tempfile

import numpy as np
from tqdm.auto import tqdm

from dipy.testing.decorators import warning_for_keywords
from dipy.utils.optpkg import optional_package

ray, has_ray, _ = optional_package("ray")
joblib, has_joblib, _ = optional_package("joblib")
dask, has_dask, _ = optional_package("dask")


@warning_for_keywords()
def paramap(
    func,
    in_list,
    *,
    out_shape=None,
    n_jobs=-1,
    engine="ray",
    backend=None,
    func_args=None,
    clean_spill=True,
    func_kwargs=None,
    **kwargs,
):
    # FIXME: several fitting functions return "extra" as well -- is this
    # being handled properly, or not? Perhaps the 'func' docs below are
    # misleading?
    """
    Maps a function to a list of inputs in parallel.

    Parameters
    ----------
    func : callable
        The function to apply to each item in the array. Must have the form:
        ``func(arr, idx, *args, *kwargs)`` where arr is an ndarray and idx is
        an index into that array (a tuple). The return of `func` needs to be
        one item (e.g. float, int) per input item.
    in_list : list
        A sequence of items each of which can be an input to ``func``.
    out_shape : tuple, optional
        The shape of the output array. If not specified, the output shape
        will be `(len(in_list),)`.
    n_jobs : integer, optional
        The number of jobs to perform in parallel. -1 (default) to use all
        but one cpu.
    engine : str, optional
        {"dask", "joblib", "ray", "serial"}
        The last one is useful for debugging -- runs the code without any
        parallelization.
    backend : str, optional
        What joblib or dask backend to use. For joblib, the default is
        "loky". For dask the default is "threading".
    clean_spill : bool, optional
        If True, clean up the spill directory after the computation is done.
        Only applies to the "ray" engine; otherwise it has no effect.
func_args : list, optional Positional arguments to `func`. func_kwargs : dict or sequence, optional Keyword arguments to `func` or sequence of keyword arguments to `func`: one item for each item in the input list. kwargs : dict, optional Additional arguments to pass to either joblib.Parallel or dask.compute, or ray.remote depending on the engine used. Returns ------- ndarray of identical shape to `arr` """ func_args = func_args or [] func_kwargs = func_kwargs or {} # Check if the func_kwargs are a sequence: if isinstance(func_kwargs, Sequence): if len(func_kwargs) != len(in_list): raise ValueError( "The length of func_kwargs should be the same as the length of in_list" ) func_kwargs_sequence = True else: func_kwargs_sequence = False if engine == "joblib": if not has_joblib: raise joblib() if backend is None: backend = "loky" pp = joblib.Parallel(n_jobs=n_jobs, backend=backend, **kwargs) dd = joblib.delayed(func) if func_kwargs_sequence: d_l = [dd(ii, *func_args, **fk) for ii, fk in zip(in_list, func_kwargs)] else: d_l = [dd(ii, *func_args, **func_kwargs) for ii in in_list] results = pp(tqdm(d_l)) elif engine == "dask": if not has_dask: raise dask() if backend is None: backend = "threading" if n_jobs == -1: n_jobs = multiprocessing.cpu_count() n_jobs = n_jobs - 1 def partial(func, *args, **keywords): def newfunc(in_arg): return func(in_arg, *args, **keywords) return newfunc delayed_func = dask.delayed(func) if func_kwargs_sequence: dd = [ delayed_func(ii, *func_args, **fk) for ii, fk in zip(in_list, func_kwargs) ] else: pp = dask.delayed(partial(func, *func_args, **func_kwargs)) dd = [pp(ii) for ii in in_list] if backend == "multiprocessing": results = dask.compute(*dd, scheduler="processes", workers=n_jobs, **kwargs) elif backend == "threading": results = dask.compute(*dd, scheduler="threads", workers=n_jobs, **kwargs) else: raise ValueError(f"{backend} is not a backend for dask") elif engine == "ray": if not has_ray: raise ray() if clean_spill: tmp_dir = tempfile.TemporaryDirectory() if not ray.is_initialized(): ray.init( _system_config={ "object_spilling_config": json.dumps( { "type": "filesystem", "params": {"directory_path": tmp_dir.name}, }, ) }, ) func = ray.remote(func) if func_kwargs_sequence: results = ray.get( [ func.remote(ii, *func_args, **fk) for ii, fk in zip(in_list, func_kwargs) ] ) else: results = ray.get( [func.remote(ii, *func_args, **func_kwargs) for ii in in_list] ) if clean_spill: shutil.rmtree(tmp_dir.name) elif engine == "serial": results = [] if func_kwargs_sequence: for in_element, fk in zip(in_list, func_kwargs): results.append(func(in_element, *func_args, **fk)) else: for in_element in in_list: results.append(func(in_element, *func_args, **func_kwargs)) else: raise ValueError("%s is not a valid engine" % engine) if out_shape is not None: return np.array(results).reshape(out_shape) else: return results dipy-1.11.0/dipy/utils/tests/000077500000000000000000000000001476546756600160405ustar00rootroot00000000000000dipy-1.11.0/dipy/utils/tests/__init__.py000066400000000000000000000000431476546756600201460ustar00rootroot00000000000000# Tests for utilities - as package dipy-1.11.0/dipy/utils/tests/meson.build000066400000000000000000000014361476546756600202060ustar00rootroot00000000000000cython_sources = [ 'test_fast_numpy', ] foreach ext: cython_sources if fs.exists(ext + '.pxd') extra_args += ['--depfile', meson.current_source_dir() +'/'+ ext + '.pxd', ] endif py3.extension_module(ext, cython_gen.process(ext + '.pyx'), c_args: cython_c_args, include_directories: 
[incdir_numpy, inc_local], dependencies: [omp], install: true, subdir: 'dipy/utils/tests' ) endforeach python_sources = [ '__init__.py', 'test_arrfuncs.py', 'test_compatibility.py', 'test_convert.py', 'test_deprecator.py', 'test_multiproc.py', 'test_omp.py', 'test_optpkg.py', 'test_parallel.py', 'test_tractogram.py', 'test_tripwire.py', 'test_volume.py', ] py3.install_sources( python_sources, pure: false, subdir: 'dipy/utils/tests' ) dipy-1.11.0/dipy/utils/tests/test_arrfuncs.py000066400000000000000000000022261476546756600212760ustar00rootroot00000000000000"""Testing array utilities.""" import sys import numpy as np from numpy.testing import assert_array_almost_equal, assert_array_equal, assert_equal from dipy.testing import assert_false, assert_true from dipy.testing.decorators import set_random_number_generator from dipy.utils.arrfuncs import as_native_array, pinv NATIVE_ORDER = "<" if sys.byteorder == "little" else ">" SWAPPED_ORDER = ">" if sys.byteorder == "little" else "<" def test_as_native(): arr = np.arange(5) # native assert_equal(arr.dtype.byteorder, "=") narr = as_native_array(arr) assert_true(arr is narr) sdt = arr.view(arr.dtype.newbyteorder("s")) barr = arr.astype(sdt.dtype) assert_equal(barr.dtype.byteorder, SWAPPED_ORDER) narr = as_native_array(barr) assert_false(barr is narr) assert_array_equal(barr, narr) assert_equal(narr.dtype.byteorder, NATIVE_ORDER) @set_random_number_generator() def test_pinv(rng): arr = rng.standard_normal((4, 4, 4, 3, 7)) _pinv = pinv(arr) for i in range(4): for j in range(4): for k in range(4): assert_array_almost_equal(_pinv[i, j, k], np.linalg.pinv(arr[i, j, k])) dipy-1.11.0/dipy/utils/tests/test_compatibility.py000066400000000000000000000022321476546756600223210ustar00rootroot00000000000000from numpy.testing import assert_equal from dipy.utils.compatibility import check_max_version, check_min_version def test_check_min_version(): assert_equal(check_min_version("dipy", "15.0.0"), False) assert_equal(check_min_version("dipy", "0.8.0"), True) assert_equal(check_min_version("numpy", "15.0.0"), False) assert_equal(check_min_version("numpy", "1.8.0"), True) assert_equal(check_min_version("scipy", "3.0.0"), False) assert_equal(check_min_version("scipy", "1.8.0"), True) assert_equal(check_min_version("fakepackage", "3.0.0"), False) assert_equal(check_min_version("fakepackage", "0.8.0"), False) def test_check_max_version(): assert_equal(check_max_version("dipy", "15.0.0"), True) assert_equal(check_max_version("dipy", "0.8.0"), False) assert_equal(check_max_version("numpy", "15.0.0"), True) assert_equal(check_max_version("numpy", "1.8.0"), False) assert_equal(check_max_version("scipy", "3.0.0"), True) assert_equal(check_max_version("scipy", "1.8.0"), False) assert_equal(check_max_version("fakepackage", "3.0.0"), False) assert_equal(check_max_version("fakepackage", "0.8.0"), False) dipy-1.11.0/dipy/utils/tests/test_convert.py000066400000000000000000000021321476546756600211270ustar00rootroot00000000000000"""Testing convert utilities.""" import numpy as np from dipy.utils.convert import expand_range def test_expand_range(): test_cases = [ ("1,2,3,4", np.array([1, 2, 3, 4])), ("5-7", np.array([5, 6, 7])), ("0", np.array([0])), ("0,", np.array([0])), ("0, ", np.array([0])), ("1,2,5-7,10", np.array([1, 2, 5, 6, 7, 10])), ("1,2,5-7,10", [1, 2, 5, 6, 7, 10]), ("1,2,5-7,10", (1, 2, 5, 6, 7, 10)), ("1,a,3", ValueError), ("1-2-3", ValueError), ] for input_string, expected_output in test_cases: # print(f"Running test for input: {input_string}") if not 
isinstance(expected_output, (np.ndarray, list, tuple)): try: expand_range(input_string) raise AssertionError(f"Expected ValueError for input: {input_string}") except ValueError: assert True else: result = expand_range(input_string) np.testing.assert_array_equal(result, expected_output) # print(f"Test passed for input: {input_string}") dipy-1.11.0/dipy/utils/tests/test_deprecator.py000066400000000000000000000326721476546756600216130ustar00rootroot00000000000000"""Module to test ``dipy.utils.deprecator`` module. Notes ----- this file is copied (with minor modifications) from Nibabel. https://github.com/nipy/nibabel. See COPYING file distributed along with the Nibabel package for the copyright and license terms. """ import sys import warnings import numpy.testing as npt import pytest import dipy from dipy.testing import assert_true, clear_and_catch_warnings from dipy.utils.deprecator import ( ArgsDeprecationWarning, ExpiredDeprecationError, _add_dep_doc, _ensure_cr, cmp_pkg_version, deprecate_with_version, deprecated_params, ) def test_cmp_pkg_version(): # Test version comparator npt.assert_equal(cmp_pkg_version(dipy.__version__), 0) npt.assert_equal(cmp_pkg_version("0.0"), -1) npt.assert_equal(cmp_pkg_version("1000.1000.1"), 1) npt.assert_equal( cmp_pkg_version(dipy.__version__, pkg_version_str=dipy.__version__), 0 ) for test_ver, pkg_ver, exp_out in ( ("1.0", "1.0", 0), ("1.0.0", "1.0", 0), ("1.0", "1.0.0", 0), ("1.1", "1.1", 0), ("1.2", "1.1", 1), ("1.1", "1.2", -1), ("1.1.1", "1.1.1", 0), ("1.1.2", "1.1.1", 1), ("1.1.1", "1.1.2", -1), ("1.1", "1.1dev", 1), ("1.1dev", "1.1", -1), ("1.2.1", "1.2.1rc1", 1), ("1.2.1rc1", "1.2.1", -1), ("1.2.1rc1", "1.2.1rc", 1), ("1.2.1rc", "1.2.1rc1", -1), ("1.2.1rc1", "1.2.1rc", 1), ("1.2.1rc", "1.2.1rc1", -1), ("1.2.1b", "1.2.1a", 1), ("1.2.1a", "1.2.1b", -1), ): npt.assert_equal(cmp_pkg_version(test_ver, pkg_version_str=pkg_ver), exp_out) npt.assert_raises(ValueError, cmp_pkg_version, "foo.2") npt.assert_raises(ValueError, cmp_pkg_version, "foo.2", pkg_version_str="1.0") npt.assert_raises(ValueError, cmp_pkg_version, "1.0", pkg_version_str="foo.2") npt.assert_raises(ValueError, cmp_pkg_version, "foo") def test__ensure_cr(): # Make sure text ends with carriage return npt.assert_equal(_ensure_cr(" foo"), " foo\n") npt.assert_equal(_ensure_cr(" foo\n"), " foo\n") npt.assert_equal(_ensure_cr(" foo "), " foo\n") npt.assert_equal(_ensure_cr("foo "), "foo\n") npt.assert_equal(_ensure_cr("foo \n bar"), "foo \n bar\n") npt.assert_equal(_ensure_cr("foo \n\n"), "foo\n") def test__add_dep_doc(): # Test utility function to add deprecation message to docstring npt.assert_equal(_add_dep_doc("", "foo"), "foo\n") npt.assert_equal(_add_dep_doc("bar", "foo"), "bar\n\nfoo\n") npt.assert_equal(_add_dep_doc(" bar", "foo"), " bar\n\nfoo\n") npt.assert_equal(_add_dep_doc(" bar", "foo\n"), " bar\n\nfoo\n") npt.assert_equal(_add_dep_doc("bar\n\n", "foo"), "bar\n\nfoo\n") npt.assert_equal(_add_dep_doc("bar\n \n", "foo"), "bar\n\nfoo\n") npt.assert_equal( _add_dep_doc(" bar\n\nSome explanation", "foo\nbaz"), " bar\n\nfoo\nbaz\n\nSome explanation\n", ) npt.assert_equal( _add_dep_doc(" bar\n\n Some explanation", "foo\nbaz"), " bar\n \n foo\n baz\n \n Some explanation\n", ) def test_deprecate_with_version(): def func_no_doc(): pass def func_doc(i): """Fake docstring.""" def func_doc_long(i, j): """Fake docstring.\n\n Some text.""" class CustomError(Exception): """Custom error class for testing expired deprecation errors.""" my_mod = sys.modules[__name__] dec = deprecate_with_version 
func = dec("foo")(func_no_doc) with clear_and_catch_warnings(modules=[my_mod]) as w: warnings.simplefilter("always") npt.assert_equal(func(), None) npt.assert_equal(len(w), 1) assert_true(w[0].category is DeprecationWarning) npt.assert_equal(func.__doc__, "foo\n") func = dec("foo")(func_doc) with clear_and_catch_warnings(modules=[my_mod]) as w: warnings.simplefilter("always") npt.assert_equal(func(1), None) npt.assert_equal(len(w), 1) npt.assert_equal(func.__doc__, "Fake docstring.\n\nfoo\n") func = dec("foo")(func_doc_long) with clear_and_catch_warnings(modules=[my_mod]) as w: warnings.simplefilter("always") npt.assert_equal(func(1, 2), None) npt.assert_equal(len(w), 1) # Python 3.13 strips indents from docstrings if sys.version_info < (3, 13): npt.assert_equal( func.__doc__, "Fake docstring.\n \n foo\n \n Some text.\n" ) else: npt.assert_equal(func.__doc__, "Fake docstring.\n\nfoo\n\nSome text.\n") # Try some since and until versions func = dec("foo", since="0.2")(func_no_doc) npt.assert_equal(func.__doc__, "foo\n\n* deprecated from version: 0.2\n") with clear_and_catch_warnings(modules=[my_mod]) as w: warnings.simplefilter("always") npt.assert_equal(func(), None) npt.assert_equal(len(w), 1) func = dec("foo", until="100.6")(func_no_doc) with clear_and_catch_warnings(modules=[my_mod]) as w: warnings.simplefilter("always") npt.assert_equal(func(), None) npt.assert_equal(len(w), 1) npt.assert_equal( func.__doc__, f"foo\n\n* Will raise {ExpiredDeprecationError} as of version: 100.6\n", ) func = dec("foo", until="0.3")(func_no_doc) npt.assert_raises(ExpiredDeprecationError, func) npt.assert_equal( func.__doc__, f"foo\n\n* Raises {ExpiredDeprecationError} as of version: 0.3\n", ) func = dec("foo", since="0.2", until="0.3")(func_no_doc) npt.assert_raises(ExpiredDeprecationError, func) npt.assert_equal( func.__doc__, "foo\n\n* deprecated from version: 0.2\n" f"* Raises {ExpiredDeprecationError} as of version: 0.3\n", ) func = dec("foo", since="0.2", until="0.3")(func_doc_long) # Python 3.13 strips indents from docstrings if sys.version_info < (3, 13): npt.assert_equal( func.__doc__, "Fake docstring.\n \n foo\n \n" " * deprecated from version: 0.2\n" f" * Raises {ExpiredDeprecationError} as of version: 0.3\n \n" " Some text.\n", ) npt.assert_raises(ExpiredDeprecationError, func) else: npt.assert_equal( func.__doc__, "Fake docstring.\n\nfoo\n\n" "* deprecated from version: 0.2\n" f"* Raises {ExpiredDeprecationError} as of version: 0.3\n\n" "Some text.\n", ) npt.assert_raises(ExpiredDeprecationError, func) # Check different warnings and errors func = dec("foo", warn_class=UserWarning)(func_no_doc) with clear_and_catch_warnings(modules=[my_mod]) as w: warnings.simplefilter("always") npt.assert_equal(func(), None) npt.assert_equal(len(w), 1) assert_true(w[0].category is UserWarning) func = dec("foo", error_class=CustomError)(func_no_doc) with clear_and_catch_warnings(modules=[my_mod]) as w: warnings.simplefilter("always") npt.assert_equal(func(), None) npt.assert_equal(len(w), 1) assert_true(w[0].category is DeprecationWarning) func = dec("foo", until="0.3", error_class=CustomError)(func_no_doc) npt.assert_raises(CustomError, func) def test_deprecated_argument(): # Tests the decorator with function, method, staticmethod and classmethod. 
class CustomActor: @classmethod @deprecated_params("height", new_name="scale", since="0.3") def test1(cls, scale): return scale @staticmethod @deprecated_params("height", new_name="scale", since="0.3") def test2(scale): return scale @deprecated_params("height", new_name="scale", since="0.3") def test3(self, scale): return scale @deprecated_params("height", new_name="scale", since="0.3", until="0.5") def test4(self, scale): return scale @deprecated_params("height", new_name="scale", since="0.3", until="10.0.0") def test5(self, scale): return scale @deprecated_params("height", new_name="scale", since="0.3") def custom_actor(scale): return scale @deprecated_params("height", new_name="scale", since="0.3", until="0.5") def custom_actor_2(scale): return scale for method in [ CustomActor().test1, CustomActor().test2, CustomActor().test3, CustomActor().test4, custom_actor, custom_actor_2, ]: # As positional argument only npt.assert_equal(method(1), 1) # As new keyword argument npt.assert_equal(method(scale=1), 1) # As old keyword argument if method.__name__ not in ["test4", "custom_actor_2"]: res = npt.assert_warns(ArgsDeprecationWarning, method, height=1) npt.assert_equal(res, 1) else: npt.assert_raises(ExpiredDeprecationError, method, height=1) # Using both. Both keyword npt.assert_raises(TypeError, method, height=2, scale=1) # One positional, one keyword npt.assert_raises(TypeError, method, 1, scale=2) with warnings.catch_warnings(record=True) as w_record: res = CustomActor().test5(4) # Select only UserWarning selected_w = [w for w in w_record if issubclass(w.category, UserWarning)] npt.assert_equal(len(selected_w), 0) npt.assert_equal(res, 4) def test_deprecated_argument_in_kwargs(): # To rename an argument that is consumed by "kwargs" the "arg_in_kwargs" # parameter is used. @deprecated_params("height", new_name="scale", since="0.3", arg_in_kwargs=True) def test(**kwargs): return kwargs["scale"] @deprecated_params( "height", new_name="scale", since="0.3", until="0.5", arg_in_kwargs=True ) def test2(**kwargs): return kwargs["scale"] # As positional argument only npt.assert_raises(TypeError, test, 1) # As new keyword argument npt.assert_equal(test(scale=1), 1) # Using the deprecated name res = npt.assert_warns(ArgsDeprecationWarning, test, height=1) npt.assert_equal(res, 1) npt.assert_raises(ExpiredDeprecationError, test2, height=1) # Using both. Both keyword npt.assert_raises(TypeError, test, height=2, scale=1) # One positional, one keyword npt.assert_raises(TypeError, test, 1, scale=2) def test_deprecated_argument_multi_deprecation(): @deprecated_params(["x", "y", "z"], new_name=["a", "b", "c"], since=[0.3, 0.2, 0.4]) def test(a, b, c): return a, b, c @deprecated_params(["x", "y", "z"], new_name=["a", "b", "c"], since="0.3") def test2(a, b, c): return a, b, c with pytest.warns(ArgsDeprecationWarning) as w: npt.assert_equal(test(x=1, y=2, z=3), (1, 2, 3)) npt.assert_equal(test2(x=1, y=2, z=3), (1, 2, 3)) npt.assert_equal(len(w), 6) npt.assert_raises(TypeError, test, x=1, y=2, z=3, b=3) npt.assert_raises(TypeError, test, x=1, y=2, z=3, a=3) def test_deprecated_argument_not_allowed_use(): # If the argument is supposed to be inside the kwargs one needs to set the # arg_in_kwargs parameter. Without it it raises a TypeError. with pytest.raises(TypeError): @deprecated_params("height", new_name="scale", since="0.3") def test1(**kwargs): return kwargs["scale"] # Cannot replace "*args". 
with pytest.raises(TypeError): @deprecated_params("scale", new_name="args", since="0.3") def test2(*args): return args # Cannot replace "**kwargs". with pytest.raises(TypeError): @deprecated_params("scale", new_name="kwargs", since="0.3") def test3(**kwargs): return kwargs # wrong number of arguments with pytest.raises(ValueError): @deprecated_params(["a", "b", "c"], new_name=["x", "y"], since="0.3") def test4(**kwargs): return kwargs def test_deprecated_argument_remove(): @deprecated_params("x", new_name=None, since="0.3", alternative="test2.y") def test(dummy=11, x=3): return dummy, x @deprecated_params( "x", new_name=None, since="0.3", until="0.5", alternative="test2.y" ) def test2(dummy=11, x=3): return dummy, x @deprecated_params( ["dummy", "x"], new_name=None, since="0.3", alternative="test2.y" ) def test3(dummy=11, x=3): return dummy, x @deprecated_params( ["dummy", "x"], new_name=None, since="0.3", until="0.5", alternative="test2.y" ) def test4(dummy=11, x=3): return dummy, x with pytest.warns(ArgsDeprecationWarning, match=r"Use test2\.y instead") as w: npt.assert_equal(test(x=1), (11, 1)) npt.assert_equal(len(w), 1) with pytest.warns(ArgsDeprecationWarning, match=r"Use test2\.y instead") as w: npt.assert_equal(test(x=1, dummy=10), (10, 1)) npt.assert_equal(len(w), 1) with pytest.warns(ArgsDeprecationWarning, match=r"Use test2\.y instead"): npt.assert_equal(test(121, 1), (121, 1)) with pytest.warns(ArgsDeprecationWarning, match=r"Use test2\.y instead") as w: npt.assert_equal(test3(121, 1), (121, 1)) npt.assert_raises(ExpiredDeprecationError, test4, 121, 1) npt.assert_raises(ExpiredDeprecationError, test4, dummy=121, x=1) npt.assert_raises(ExpiredDeprecationError, test4, 121, x=1) npt.assert_raises(ExpiredDeprecationError, test2, x=1) npt.assert_equal(test(), (11, 3)) npt.assert_equal(test(121), (121, 3)) npt.assert_equal(test(dummy=121), (121, 3)) dipy-1.11.0/dipy/utils/tests/test_fast_numpy.pyx000066400000000000000000000105051476546756600220270ustar00rootroot00000000000000import timeit cimport numpy as cnp import numpy as np from numpy.testing import (assert_, assert_almost_equal, assert_raises, assert_array_equal) from dipy.utils.fast_numpy import random, seed from dipy.utils.fast_numpy cimport ( cross, dot, norm, normalize, random_vector, random_perpendicular_vector, take, random_point_within_circle, take, ) def test_norm(): # Test that the norm equal numpy norm. cdef double[:] vec_view for _ in range(10): vec_view = vec = np.random.random(3) assert_almost_equal(norm(&vec_view[0]), np.linalg.norm(vec)) def test_normalize(): # Test that the normalize vector as a norm of 1. cdef double[:] vec_view for _ in range(10): vec_view = vec = np.random.random(3) normalize(&vec_view[0]) assert_almost_equal(np.linalg.norm(vec), 1) def test_dot(): # Test that dot is faster and equal to numpy.dot. cdef double[:] vec_view1 cdef double[:] vec_view2 for _ in range(10): vec_view1 = vec1 = np.random.random(3) vec_view2 = vec2 = np.random.random(3) assert_almost_equal(dot(&vec_view1[0], &vec_view2[0]), np.dot(vec1, vec2)) vec_view1 = vec1 = np.random.random(3) vec_view2 = vec2 = np.random.random(3) def __dot(): dot(&vec_view1[0], &vec_view2[0]) def __npdot(): np.dot(vec1, vec2) number = 100000 time_dot = timeit.timeit(__dot, number=number) time_npdot = timeit.timeit(__npdot, number=number) assert_(time_dot < time_npdot) # TODO: Check why this test fails # def test_cross(): # # Test that cross is faster and equal to numpy.cross. 
# cdef double[:] vec1 # cdef double[:] vec2 # cdef double[:] out # out = np.zeros(3, dtype=float) # for _ in range(10): # vec1 = np.random.random(3) # vec2 = np.random.random(3) # cross(&out[0], &vec1[0], &vec2[0]) # assert_array_equal(out, np.cross(vec1, vec2)) # vec1 = np.random.random(3) # vec2 = np.random.random(3) # def __cross(): # cross(&out[0], &vec1[0], &vec2[0]) # def __npcross(): # np.cross(vec1, vec2) # number = 10000 # time_cross = timeit.timeit(__cross, number=number) # time_npcross = timeit.timeit(__npcross, number=number) # assert_(time_cross < time_npcross) def test_random_vector(): # Test that that the random vector is of norm 1 cdef double[:] test = np.zeros(3, dtype=float) for _ in range(10): random_vector(&test[0]) assert_almost_equal(np.linalg.norm(test), 1) assert_(np.all(test >= np.double(-1))) assert_(np.all(test <= np.double(1))) def test_random_perpendicular_vector(): # Test that the random vector is of norm 1 and perpendicular cdef double[:] test = np.zeros(3, dtype=float) cdef double[:] vec = np.zeros(3, dtype=float) for _ in range(10): random_vector(&vec[0]) random_perpendicular_vector(&test[0], &vec[0]) assert_almost_equal(np.linalg.norm(test), 1) assert_(np.all(test >= np.double(-1))) assert_(np.all(test <= np.double(1))) assert_almost_equal(np.dot(vec, test), 0) def test_random(): # Test that random numbers are between 0 and 1 and that the mean is 0.5. for _ in range(10): vec = random() assert_(vec < 1 and vec > 0) random_number_list = [] for _ in range(10000): random_number_list.append(random()) assert_almost_equal(np.mean(random_number_list), 0.5, decimal=1) # Test the random generator seed input for s in [0, 1, np.iinfo(np.uint32).max]: seed(s) for s in [np.iinfo(np.uint32).max + 1, -1]: assert_raises(OverflowError, seed, s) def test_random_point_within_circle(): # Test that the random point is within the circle for r in np.arange(0, 5, 0.2): pts = random_point_within_circle(r) assert_(np.linalg.norm(pts) <= r) def test_take(): # Test that the take function is equal to numpy.take cdef int n_indices = 5 cdef double[:] odf = np.random.random(10) cdef cnp.npy_intp[:] indices = np.random.randint(0, 10, n_indices, dtype=np.intp) cdef double[:] values_out = np.zeros(n_indices, dtype=float) take(&odf[0], &indices[0], n_indices, &values_out[0]) assert_array_equal(values_out, np.take(odf, indices))dipy-1.11.0/dipy/utils/tests/test_multiproc.py000066400000000000000000000017711476546756600214750ustar00rootroot00000000000000"""Testing multiproc utilities""" from numpy.testing import assert_equal, assert_raises from dipy.utils.multiproc import determine_num_processes def test_determine_num_processes(): # Test that the correct number of effective num_processes is returned # 0 should raise an error assert_raises(ValueError, determine_num_processes, 0) # A string should raise an error assert_raises(TypeError, determine_num_processes, "0") # 1 should be 1 assert_equal(determine_num_processes(1), 1) # A positive integer should not change assert_equal(determine_num_processes(4), 4) # None and -1 should be equal (all cores) assert_equal(determine_num_processes(None), determine_num_processes(-1)) # A big negative number should be 1 assert_equal(determine_num_processes(-10000), 1) # -2 should be one less than -1 (if there are more than 1 cores) if determine_num_processes(-1) > 1: assert_equal(determine_num_processes(-1), determine_num_processes(-2) + 1) 
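

# A minimal, illustrative sketch (not an upstream test) of how the paramap
# helper from dipy/utils/parallel.py is meant to be driven. Only the
# always-available "serial" engine is assumed here; the joblib/dask/ray
# engines are optional extras. The `square` helper below is a toy function
# introduced purely for this example.
def _paramap_usage_sketch():
    import numpy as np

    from dipy.utils.parallel import paramap

    def square(x, n=2):
        return x**n

    data = list(range(6))
    # out_shape reshapes the collected results into an ndarray.
    out = paramap(square, data, engine="serial", out_shape=(2, 3))
    assert np.array_equal(out, np.arange(6).reshape(2, 3) ** 2)
    # A sequence of dicts supplies per-item keyword arguments; without
    # out_shape the plain list of results is returned.
    powers = [{"n": i} for i in range(6)]
    out = paramap(square, data, engine="serial", func_kwargs=powers)
    assert out == [x**n for x, n in zip(data, range(6))]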
dipy-1.11.0/dipy/utils/tests/test_omp.py000066400000000000000000000037631476546756600202550ustar00rootroot00000000000000"""Testing OpenMP utilities""" import os from numpy.testing import assert_equal, assert_raises import pytest from dipy.utils.omp import ( _restore_omp_threads, _set_omp_threads, cpu_count, default_threads, determine_num_threads, have_openmp, thread_count, ) from dipy.utils.parallel import has_joblib, has_ray def test_set_omp_threads(): if have_openmp: # set threads to default _restore_omp_threads() assert_equal(thread_count(), default_threads) # change number of threads nthreads_new = default_threads + 1 _set_omp_threads(nthreads_new) assert_equal(thread_count(), nthreads_new) # restore back to default _restore_omp_threads() assert_equal(thread_count(), default_threads) else: assert_equal(thread_count(), 1) assert_equal(cpu_count(), 1) @pytest.mark.skipif(has_joblib or has_ray, reason="joblib and/or ray are installed") def test_default_threads(): if have_openmp: try: expected_threads = int(os.environ.get("OMP_NUM_THREADS", None)) if expected_threads < 1: raise ValueError("invalid number of threads") except (ValueError, TypeError): expected_threads = cpu_count() else: expected_threads = 1 assert_equal(default_threads, expected_threads) def test_determine_num_threads(): # 0 should raise an error assert_raises(ValueError, determine_num_threads, 0) # A string should raise an error assert_raises(TypeError, determine_num_threads, "1") # 1 should be 1 assert_equal(determine_num_threads(1), 1) # A positive integer should not change assert_equal(determine_num_threads(4), 4) # A big negative number should be 1 assert_equal(determine_num_threads(-10000), 1) # -2 should be one less than -1 (if there are more than 1 cores) if determine_num_threads(-1) > 1: assert_equal(determine_num_threads(-1), determine_num_threads(-2) + 1) dipy-1.11.0/dipy/utils/tests/test_optpkg.py000066400000000000000000000025021476546756600207540ustar00rootroot00000000000000import os from tempfile import TemporaryDirectory import pytest from dipy.testing import assert_false, assert_true from dipy.utils.optpkg import optional_package from dipy.utils.tripwire import TripWireError def test_optional_package(): pkg, have_pkg, setup_module = optional_package("os") assert_true(have_pkg) assert_true(hasattr(pkg, "path")) pkg, have_pkg, setup_module = optional_package("not_a_package") assert_false(have_pkg) with pytest.raises(TripWireError): pkg.some_function() pkg, have_pkg, setup_module = optional_package("dipy", min_version="10.0.0") assert_false(have_pkg) with pytest.raises(TripWireError): pkg.some_function() pkg, have_pkg, setup_module = optional_package("dipy", min_version="1.0.0") assert_true(have_pkg) with TemporaryDirectory() as tmpdir: os.makedirs(os.path.join(tmpdir, "fake_package")) open(os.path.join(tmpdir, "fake_package", "__init__.py"), "a").close() current_dir = os.getcwd() os.chdir(tmpdir) pkg, have_pkg, setup_module = optional_package( "fake_package", min_version="1.0.0" ) assert_false(have_pkg) assert_false(hasattr(pkg, "__version__")) with pytest.raises(TripWireError): pkg.some_function() os.chdir(current_dir) dipy-1.11.0/dipy/utils/tests/test_parallel.py000066400000000000000000000034411476546756600212470ustar00rootroot00000000000000import numpy as np import numpy.testing as npt import dipy.utils.parallel as para def power_it(num, n=2): # We define a function of the right form for parallelization return num**n ENGINES = ["serial"] if para.has_dask: ENGINES.append("dask") if para.has_joblib: 
ENGINES.append("joblib") if para.has_ray: ENGINES.append("ray") def test_paramap(): my_array = np.arange(100).reshape(10, 10) my_list = list(my_array.ravel()) for engine in ENGINES: for backend in ["threading", "multiprocessing"]: npt.assert_array_equal( para.paramap( power_it, my_list, engine=engine, backend=backend, out_shape=my_array.shape, ), power_it(my_array), ) # If it's not reshaped, the first item should be the item 0, 0: npt.assert_equal( para.paramap(power_it, my_list, engine=engine, backend=backend)[0], power_it(my_array[0, 0]), ) def test_paramap_sequence_kwargs(): my_array = np.arange(10) kwargs_sequence = [{"n": i} for i in range(len(my_array))] my_list = list(my_array.ravel()) for engine in ENGINES: for backend in ["threading", "multiprocessing"]: npt.assert_array_equal( para.paramap( power_it, my_list, engine=engine, backend=backend, out_shape=my_array.shape, func_kwargs=kwargs_sequence, ), [ power_it(ii, **kwargs) for ii, kwargs in zip(my_array, kwargs_sequence) ], ) dipy-1.11.0/dipy/utils/tests/test_tractogram.py000066400000000000000000000012221476546756600216110ustar00rootroot00000000000000import sys import pytest import trx.trx_file_memmap as tmm from dipy.data import get_fnames from dipy.io.streamline import load_tractogram from dipy.utils.tractogram import concatenate_tractogram is_big_endian = "big" in sys.byteorder.lower() @pytest.mark.skipif(is_big_endian, reason="Little Endian architecture required") def test_concatenate(): filepath_dix, _, _ = get_fnames(name="gold_standard_tracks") sft = load_tractogram(filepath_dix["gs.trk"], filepath_dix["gs.nii"]) trx = tmm.load(filepath_dix["gs.trx"]) concat = concatenate_tractogram([sft, trx]) assert len(concat) == 2 * len(trx) trx.close() concat.close() dipy-1.11.0/dipy/utils/tests/test_tripwire.py000066400000000000000000000014611476546756600213200ustar00rootroot00000000000000"""Testing tripwire module.""" from numpy.testing import assert_raises from dipy.testing import assert_false, assert_true from dipy.utils.tripwire import TripWire, TripWireError, is_tripwire def test_is_tripwire(): assert_false(is_tripwire(object())) assert_true(is_tripwire(TripWire("some message"))) def test_tripwire(): # Test tripwire object silly_module_name = TripWire("We do not have silly_module_name") assert_raises(TripWireError, getattr, silly_module_name, "do_silly_thing") assert_raises(TripWireError, silly_module_name) # Check AttributeError can be checked too try: _ = silly_module_name.__wrapped__ except TripWireError as err: assert_true(isinstance(err, AttributeError)) else: raise RuntimeError("No error raised, but expected") dipy-1.11.0/dipy/utils/tests/test_volume.py000066400000000000000000000023551476546756600207650ustar00rootroot00000000000000"""Testing volumes""" import numpy as np import numpy.testing as npt from dipy.utils.volume import adjacency_calc def test_adjacency_calc(): """ Test adjacency_calc function, which calculates indices of adjacent voxels """ cutoff = 1.99 for img_shape in [(50, 50), (50, 50, 5)]: mask = None adj = adjacency_calc(img_shape, mask=mask, cutoff=cutoff) # check that adj in the first voxel is correct adj[0].sort() if len(img_shape) == 2: npt.assert_equal(adj[0], [0, 1, 50, 51]) if len(img_shape) == 3: npt.assert_equal(adj[0], [0, 1, 5, 6, 250, 251, 255, 256]) mask = np.zeros(img_shape, dtype=int) if len(img_shape) == 2: mask[10:40, 20:30] = 1 if len(img_shape) == 3: mask[10:40, 20:30, :] = 1 adj = adjacency_calc(img_shape, mask=mask, cutoff=cutoff) # check that adj in the first voxel is correct if 
len(img_shape) == 2: npt.assert_equal(adj[0], [0, 1, 10, 11]) if len(img_shape) == 3: npt.assert_equal(adj[0], [0, 1, 5, 6, 50, 51, 55, 56]) # test that adj only corresponds to flattened and masked data npt.assert_equal(len(adj), mask.sum()) dipy-1.11.0/dipy/utils/tractogram.py000066400000000000000000000163741476546756600174260ustar00rootroot00000000000000"""This module is dedicated to the handling of tractograms.""" import logging import os import numpy as np import trx.trx_file_memmap as tmm def concatenate_tractogram( tractogram_list, *, delete_dpv=False, delete_dps=False, delete_groups=False, check_space_attributes=True, preallocation=False, ): """Concatenate multiple tractograms into one. If the data_per_point or data_per_streamline is not the same for all tractograms, the data must be deleted first. Parameters ---------- tractogram_list : List[StatefulTractogram or TrxFile] The stateful tractogram to concatenate delete_dpv: bool, optional Delete dpv keys that do not exist in all the provided TrxFiles delete_dps: bool, optional Delete dps keys that do not exist in all the provided TrxFile delete_groups: bool, optional Delete all the groups that currently exist in the TrxFiles check_space_attributes: bool, optional Verify that dimensions and size of data are similar between all the TrxFiles preallocation: bool, optional Preallocated TrxFile has already been generated and is the first element in trx_list (Note: delete_groups must be set to True as well) Returns ------- new_trx: TrxFile TrxFile representing the concatenated data """ trx_list = [] for sft in tractogram_list: if not isinstance(sft, tmm.TrxFile): sft = tmm.TrxFile.from_sft(sft) elif len(sft.groups): delete_groups = True trx_list.append(sft) trx_list = [ curr_trx for curr_trx in trx_list if curr_trx.header["NB_STREAMLINES"] > 0 ] if not trx_list: logging.warning("Inputs of concatenation were empty.") return tmm.TrxFile() if len(trx_list) == 1: if len(tractogram_list) > 1: logging.warning("Only 1 valid tractogram returned.") return trx_list[0] ref_trx = trx_list[0] all_dps = [] all_dpv = [] for curr_trx in trx_list: all_dps.extend(list(curr_trx.data_per_streamline.keys())) all_dpv.extend(list(curr_trx.data_per_vertex.keys())) all_dps, all_dpv = set(all_dps), set(all_dpv) if check_space_attributes: for curr_trx in trx_list[1:]: if not np.allclose( ref_trx.header["VOXEL_TO_RASMM"], curr_trx.header["VOXEL_TO_RASMM"] ) or not np.array_equal( ref_trx.header["DIMENSIONS"], curr_trx.header["DIMENSIONS"] ): raise ValueError("Wrong space attributes.") if preallocation and not delete_groups: raise ValueError("Groups are variables, cannot be handled with preallocation") # Verifying the validity of fixed-size arrays, coherence between inputs for curr_trx in trx_list: for key in all_dpv: if ( key not in ref_trx.data_per_vertex.keys() or key not in curr_trx.data_per_vertex.keys() ): if not delete_dpv: logging.debug(f"{key} dpv key does not exist in all TrxFile.") raise ValueError("TrxFile must be sharing identical dpv keys.") elif ( ref_trx.data_per_vertex[key]._data.dtype != curr_trx.data_per_vertex[key]._data.dtype ): logging.debug( f"{key} dpv key is not declared with the same dtype in all TrxFile." 
) raise ValueError("Shared dpv key, has different dtype.") for curr_trx in trx_list: for key in all_dps: if ( key not in ref_trx.data_per_streamline.keys() or key not in curr_trx.data_per_streamline.keys() ): if not delete_dps: logging.debug(f"{key} dps key does not exist in all TrxFile.") raise ValueError("TrxFile must be sharing identical dps keys.") elif ( ref_trx.data_per_streamline[key].dtype != curr_trx.data_per_streamline[key].dtype ): logging.debug( f"{key} dps key is not declared with the same dtype in all TrxFile." ) raise ValueError("Shared dps key, has different dtype.") all_groups_len = {} all_groups_dtype = {} # Variable-size arrays do not have to exist in all TrxFile if not delete_groups: for trx_1 in trx_list: for group_key in trx_1.groups.keys(): # Concatenating groups together if group_key in all_groups_len: all_groups_len[group_key] += len(trx_1.groups[group_key]) else: all_groups_len[group_key] = len(trx_1.groups[group_key]) if ( group_key in all_groups_dtype and trx_1.groups[group_key].dtype != all_groups_dtype[group_key] ): raise ValueError("Shared group key, has different dtype.") else: all_groups_dtype[group_key] = trx_1.groups[group_key].dtype # Once the checks are done, actually concatenate to_concat_list = trx_list[1:] if preallocation else trx_list if not preallocation: nb_vertices = 0 nb_streamlines = 0 for curr_trx in to_concat_list: curr_strs_len, curr_pts_len = curr_trx._get_real_len() nb_streamlines += curr_strs_len nb_vertices += curr_pts_len new_trx = tmm.TrxFile( nb_vertices=nb_vertices, nb_streamlines=nb_streamlines, init_as=ref_trx ) if delete_dps: new_trx.data_per_streamline = {} if delete_dpv: new_trx.data_per_vertex = {} if delete_groups: new_trx.groups = {} tmp_dir = new_trx._uncompressed_folder_handle.name # When memory is allocated on the spot, groups and data_per_group can # be concatenated together for group_key in all_groups_len.keys(): if not os.path.isdir(os.path.join(tmp_dir, "groups/")): os.mkdir(os.path.join(tmp_dir, "groups/")) dtype = all_groups_dtype[group_key] group_filename = os.path.join(tmp_dir, f"groups/{group_key}.{dtype.name}") group_len = all_groups_len[group_key] new_trx.groups[group_key] = tmm._create_memmap( group_filename, mode="w+", shape=(group_len,), dtype=dtype ) if delete_groups: continue pos = 0 count = 0 for curr_trx in trx_list: curr_len = len(curr_trx.groups[group_key]) new_trx.groups[group_key][pos : pos + curr_len] = ( curr_trx.groups[group_key] + count ) pos += curr_len count += curr_trx.header["NB_STREAMLINES"] strs_end, pts_end = 0, 0 else: new_trx = ref_trx strs_end, pts_end = new_trx._get_real_len() for curr_trx in to_concat_list: # Copy the TrxFile fixed-size info (the right chunk) strs_end, pts_end = new_trx._copy_fixed_arrays_from( curr_trx, strs_start=strs_end, pts_start=pts_end ) return new_trx dipy-1.11.0/dipy/utils/tripwire.py000066400000000000000000000025131476546756600171160ustar00rootroot00000000000000"""Class to raise error for missing modules or other misfortunes""" class TripWireError(AttributeError): """Exception if trying to use TripWire object""" def is_tripwire(obj): """Returns True if `obj` appears to be a TripWire object Examples -------- >>> is_tripwire(object()) False >>> is_tripwire(TripWire('some message')) True """ try: _ = obj.any_attribute except TripWireError: return True except Exception: pass return False class TripWire: """Class raising error if used Standard use is to proxy modules that we could not import Examples -------- >>> try: ... import silly_module_name ... 
except ImportError: ... silly_module_name = TripWire('We do not have silly_module_name') >>> silly_module_name.do_silly_thing('with silly string') #doctest: +IGNORE_EXCEPTION_DETAIL Traceback (most recent call last): ... TripWireError: We do not have silly_module_name """ # noqa: E501 def __init__(self, msg): self._msg = msg def __getattr__(self, attr_name): """Raise informative error accessing attributes""" raise TripWireError(self._msg) def __call__(self, *args, **kwargs): """Raise informative error while calling""" raise TripWireError(self._msg) dipy-1.11.0/dipy/utils/volume.py000066400000000000000000000035321476546756600165620ustar00rootroot00000000000000"""Utilities for calculating voxel adjacencies and neighbourhoods""" import numpy as np from scipy.spatial.distance import pdist, squareform from dipy.testing.decorators import warning_for_keywords @warning_for_keywords() def adjacency_calc(img_shape, *, mask=None, cutoff=1.99): """Create adjacency list for voxels, accounting for the mask. Parameters ---------- img_shape : list Spatial shape of the image data. mask : array, optional A boolean array used to mark the coordinates in the data that should be analyzed that should have the same shape as the images. cutoff : float, optional Cutoff distance in voxel coordinates for finding adjacent voxels. e.g. cutoff=2 gives a 3x3 patch, cutoff=2.1 gives a 3x3 patch with additional 4 voxels above, below, left, right of this patch. Returns ------- adj : list List, one entry per voxel, giving the indices of adjacent voxels. The list will correspond to the data array once masked and flattened. """ # Image-space coordinates of voxels, flattened XYZ = np.meshgrid(*[range(ds) for ds in img_shape], indexing="ij") XYZ = np.column_stack([xyz.ravel() for xyz in XYZ]) dists = squareform(pdist(XYZ)) dists = dists < cutoff # adjacency list contains current voxel adj = [] if mask is not None: flat_mask = mask.reshape(-1) for idx in range(dists.shape[0]): if flat_mask[idx]: cond = dists[idx, :] cond = cond * flat_mask cond = cond[flat_mask == 1] # so indices will match masked array adj.append(np.argwhere(cond).flatten().tolist()) else: for idx in range(dists.shape[0]): cond = dists[idx, :] adj.append(np.argwhere(cond).flatten().tolist()) return adj dipy-1.11.0/dipy/viz/000077500000000000000000000000001476546756600143465ustar00rootroot00000000000000dipy-1.11.0/dipy/viz/__init__.py000066400000000000000000000021521476546756600164570ustar00rootroot00000000000000# Init file for visualization package import warnings from dipy.utils.optpkg import optional_package from dipy.viz.horizon.app import horizon fury_pckg_msg = ( "You do not have FURY installed. Some visualization functions might not " "work for you. Please install or upgrade FURY using pip install -U fury. " "For detailed installation instructions visit: https://fury.gl/" ) # Allow import, but disable doctests if we don't have fury fury, has_fury, _ = optional_package( "fury", trip_msg=fury_pckg_msg, min_version="0.10.0" ) if has_fury: from fury import actor, colormap, interactor, lib, shaders, ui, utils, window from fury.data import DATA_DIR as FURY_DATA_DIR, fetch_viz_icons, read_viz_icons else: warnings.warn(fury_pckg_msg, stacklevel=2) # We make the visualization requirements optional imports: _, has_mpl, _ = optional_package( "matplotlib", trip_msg="You do not have Matplotlib installed. Some visualization " "functions might not work for you. For installation instructions, " "please visit: https://matplotlib.org/", ) if has_mpl: from . 
import projections dipy-1.11.0/dipy/viz/gmem.py000066400000000000000000000024301476546756600156440ustar00rootroot00000000000000# Shared objects across Horizon's systems class GlobalHorizon: def __init__(self): # window level sharing self.window_timer_cnt = 0 # slicer level sharing self.slicer_opacity = 1 self.slicer_colormap = "gray" self.slicer_colormaps = [ "gray", "magma", "viridis", "jet", "Pastel1", "disting", ] self.slicer_colormap_cnt = 0 self.slicer_axes = ["x", "y", "z"] self.slicer_curr_x = None self.slicer_curr_y = None self.slicer_curr_z = None self.slicer_curr_actor_x = None self.slicer_curr_actor_y = None self.slicer_curr_actor_z = None self.slicer_orig_shape = None self.slicer_resliced_shape = None self.slicer_vol_idx = None self.slicer_vol = None self.slicer_peaks_actor_z = None self.slicer_rgb = False self.slicer_grid = False # tractogram level sharing self.cluster_thr = 15 # self.cluster_lengths = [] # not used # self.cluster_sizes = [] # not used # self.cluster_thr_min_max = [] # not used self.streamline_actors = [] self.centroid_actors = [] self.cluster_actors = [] dipy-1.11.0/dipy/viz/horizon/000077500000000000000000000000001476546756600160365ustar00rootroot00000000000000dipy-1.11.0/dipy/viz/horizon/__init__.py000066400000000000000000000000001476546756600201350ustar00rootroot00000000000000dipy-1.11.0/dipy/viz/horizon/app.py000066400000000000000000000774141476546756600172050ustar00rootroot00000000000000import logging from warnings import warn import numpy as np from packaging.version import Version from dipy import __version__ as horizon_version from dipy.io.stateful_tractogram import Space, StatefulTractogram from dipy.io.streamline import save_tractogram from dipy.testing.decorators import warning_for_keywords from dipy.tracking.streamline import Streamlines from dipy.utils.optpkg import optional_package from dipy.viz.gmem import GlobalHorizon from dipy.viz.horizon.tab import ( ClustersTab, PeaksTab, ROIsTab, SlicesTab, SurfaceTab, TabManager, build_label, ) from dipy.viz.horizon.util import ( check_img_dtype, check_img_shapes, check_peak_size, unpack_image, unpack_surface, ) from dipy.viz.horizon.visualizer import ( ClustersVisualizer, PeaksVisualizer, SlicesVisualizer, SurfaceVisualizer, ) fury, has_fury, setup_module = optional_package("fury", min_version="0.10.0") if has_fury: from fury import __version__ as fury_version, actor, ui, window from fury.colormap import distinguishable_colormap # TODO: Re-enable >> right click: see menu HELP_MESSAGE = """ >> left click: select centroid >> e: expand centroids >> r: collapse all clusters >> h: hide unselected centroids >> i: invert selection >> a: select all centroids >> s: save in file >> y: new window >> o: hide/show this panel """ SHORTCUT_MESSAGE = """ >> shift + drag: move >> ctrl/cmd + drag: rotate >> ctrl/cmd + r: reset >> o: hide/show this panel """ class Horizon: @warning_for_keywords() def __init__( self, *, tractograms=None, images=None, pams=None, surfaces=None, cluster=False, rgb=False, cluster_thr=15.0, random_colors=None, length_gt=0, length_lt=1000, clusters_gt=0, clusters_lt=10000, world_coords=True, interactive=True, out_png="tmp.png", recorded_events=None, return_showm=False, bg_color=(0, 0, 0), order_transparent=True, buan=False, buan_colors=None, roi_images=False, roi_colors=(1, 0, 0), surface_colors=((1, 0, 0),), ): """Interactive medical visualization - Invert the Horizon! :footcite:p:`Garyfallidis2019`. 
        Parameters
        ----------
        tractograms : sequence of StatefulTractograms
            StatefulTractograms are used for making sure that the coordinate
            systems are correct
        images : sequence of tuples
            Each tuple contains data and affine
        pams : sequence of PeakAndMetrics
            Contains peak directions and spherical harmonic coefficients
        surfaces : sequence of tuples
            Each tuple contains vertices and faces
        cluster : bool
            Enable QuickBundlesX clustering
        rgb : bool, optional
            Enable the color image (rgb only, alpha channel will be ignored).
        cluster_thr : float
            Distance threshold used for clustering. The default value is
            15.0; for small animal data you may need to use something
            smaller, such as 2.0. The threshold is in mm. For this parameter
            to be active ``cluster`` should be enabled.
        random_colors : string, optional
            Given multiple tractograms and/or ROIs, each tractogram and/or
            ROI will be shown with a different color. If no value is
            provided, both the tractograms and the ROIs will have a
            different random color generated from a distinguishable
            colormap. If the effect should only be applied to one of the 2
            types, then use the options 'tracts' and 'rois' for the
            tractograms and the ROIs respectively.
        length_gt : float
            Clusters with average length greater than ``length_gt`` amount
            in mm will be shown.
        length_lt : float
            Clusters with average length less than ``length_lt`` amount in
            mm will be shown.
        clusters_gt : int
            Clusters with size greater than ``clusters_gt`` will be shown.
        clusters_lt : int
            Clusters with size less than ``clusters_lt`` will be shown.
        world_coords : bool
            Show data in their world coordinates (not native voxel
            coordinates). Default True.
        interactive : bool
            Allow user interaction. If False then Horizon goes on stealth
            mode and just saves pictures.
        out_png : string
            Filename of saved picture.
        recorded_events : string
            File path to replay recorded events
        return_showm : bool
            Return ShowManager object. Used only at Python level. Can be
            used for extending Horizon's capabilities externally and for
            testing purposes.
        bg_color : ndarray or list or tuple
            Define the background color of the scene.
            Default is black (0, 0, 0)
        order_transparent : bool
            Default True. Use depth peeling to sort transparent objects.
            If True also enables anti-aliasing.
        buan : bool, optional
            Enables BUAN framework visualization. Default is False.
        buan_colors : list, optional
            List of colors for bundles.
        roi_images : bool, optional
            Displays binary images as contours. Default is False.
        roi_colors : ndarray or list or tuple, optional
            Define the colors of the roi images. Default is red (1, 0, 0)

        References
        ----------
        .. footbibliography::
        """
        if not has_fury:
            raise ImportError(
                "Horizon requires FURY. Please install it with pip install fury"
            )

        if Version(fury_version) < Version("0.10.0"):
            raise ValueError(
                "Horizon requires FURY version 0.10.0 or higher."
                " Please upgrade FURY with pip install -U fury."
) self.cluster = cluster self.rgb = rgb self.cluster_thr = cluster_thr self.random_colors = random_colors self.length_lt = length_lt self.length_gt = length_gt self.clusters_lt = clusters_lt self.clusters_gt = clusters_gt self.world_coords = world_coords self.interactive = interactive self.prng = np.random.default_rng(27) self.tractograms = tractograms or [] self.out_png = out_png self.images = images or [] self.pams = pams or [] self._surfaces = surfaces or [] self.cea = {} # holds centroid actors self.cla = {} # holds cluster actors self.recorded_events = recorded_events self.show_m = None self._scene = None self.return_showm = return_showm self.bg_color = bg_color self.order_transparent = order_transparent self.buan = buan self.buan_colors = buan_colors self.__roi_images = roi_images self.__roi_colors = roi_colors self._surface_colors = surface_colors self.color_gen = distinguishable_colormap() if self.random_colors is not None: if not self.random_colors: self.random_colors = ["tracts", "rois"] else: self.random_colors = [] self.__clusters_visualizer = None self.__tabs = [] self.__tab_mgr = None self.__help_visible = True self._shortcut_help_visible = True # TODO: Move to another class/module self.__hide_centroids = True self.__select_all = False self._win_size = (0, 0) # TODO: Move to another class/module def __expand(self): centroid_actors = self.__clusters_visualizer.centroid_actors lengths = self.__clusters_visualizer.lengths sizes = self.__clusters_visualizer.sizes min_length = np.min(lengths) min_size = np.min(sizes) for cent in centroid_actors: if centroid_actors[cent]["selected"]: if not centroid_actors[cent]["expanded"]: len_ = centroid_actors[cent]["length"] sz_ = centroid_actors[cent]["size"] if len_ >= min_length and sz_ >= min_size: centroid_actors[cent]["actor"].VisibilityOn() cent.VisibilityOff() centroid_actors[cent]["expanded"] = 1 self.show_m.render() # TODO: Move to another class/module def __hide(self): centroid_actors = self.__clusters_visualizer.centroid_actors lengths = self.__clusters_visualizer.lengths sizes = self.__clusters_visualizer.sizes min_length = np.min(lengths) min_size = np.min(sizes) for cent in centroid_actors: valid_length = centroid_actors[cent]["length"] >= min_length valid_size = centroid_actors[cent]["size"] >= min_size if self.__hide_centroids: if valid_length or valid_size: if centroid_actors[cent]["selected"] == 0: cent.VisibilityOff() else: if valid_length and valid_size: if centroid_actors[cent]["selected"] == 0: cent.VisibilityOn() self.__hide_centroids = not self.__hide_centroids self.show_m.render() # TODO: Move to another class/module def __invert(self): centroid_actors = self.__clusters_visualizer.centroid_actors cluster_actors = self.__clusters_visualizer.cluster_actors lengths = self.__clusters_visualizer.lengths sizes = self.__clusters_visualizer.sizes min_length = np.min(lengths) min_size = np.min(sizes) for cent in centroid_actors: valid_length = centroid_actors[cent]["length"] >= min_length valid_size = centroid_actors[cent]["size"] >= min_size if valid_length and valid_size: centroid_actors[cent]["selected"] = not centroid_actors[cent][ "selected" ] clus = centroid_actors[cent]["actor"] cluster_actors[clus]["selected"] = centroid_actors[cent]["selected"] self.show_m.render() def __key_press_events(self, obj, event): key = obj.GetKeySym() if key in ("r", "R") and obj.GetControlKey(): self._scene.reset_camera_tight() self.show_m.render() if key in ("o", "O"): if self._shortcut_help_visible: 
self._scene.rm(*self.shortcut_panel.actors) self._shortcut_help_visible = False else: self._scene.add(*self.shortcut_panel.actors) self._shortcut_help_visible = True self.show_m.render() # TODO: Move to another class/module if self.cluster: # retract help panel if key in ("o", "O"): if self.__help_visible: self._scene.rm(*self.help_panel.actors) self.__help_visible = False else: self._scene.add(*self.help_panel.actors) self.__help_visible = True self.show_m.render() if key in ("a", "A"): self.__show_all() if key in ("e", "E"): self.__expand() # hide on/off unselected centroids if key in ("h", "H"): self.__hide() # invert selection if key in ("i", "I"): self.__invert() if key in ("r", "R"): self.__reset() # save current result if key in ("s", "S"): self.__save() if key in ("y", "Y"): self.__new_window() # TODO: Move to another class/module def __new_window(self): cluster_actors = self.__clusters_visualizer.cluster_actors tractogram_clusters = self.__clusters_visualizer.tractogram_clusters active_streamlines = Streamlines() for bundle in cluster_actors.keys(): if bundle.GetVisibility(): t = cluster_actors[bundle]["tractogram"] c = cluster_actors[bundle]["cluster"] indices = tractogram_clusters[t][c] active_streamlines.extend(Streamlines(indices)) # Using the header of the first of the tractograms active_sft = StatefulTractogram( active_streamlines, self.tractograms[0], Space.RASMM ) hz2 = Horizon( tractograms=[active_sft], images=self.images, cluster=True, cluster_thr=self.cluster_thr / 2.0, random_colors=self.random_colors, length_lt=np.inf, length_gt=0, clusters_lt=np.inf, clusters_gt=0, world_coords=True, interactive=True, ) ren2 = hz2.build_scene() hz2.build_show(ren2) # TODO: Move to another class/module def __reset(self): centroid_actors = self.__clusters_visualizer.centroid_actors lengths = self.__clusters_visualizer.lengths sizes = self.__clusters_visualizer.sizes min_length = np.min(lengths) min_size = np.min(sizes) for cent in centroid_actors: valid_length = centroid_actors[cent]["length"] >= min_length valid_size = centroid_actors[cent]["size"] >= min_size if valid_length and valid_size: centroid_actors[cent]["actor"].VisibilityOff() cent.VisibilityOn() centroid_actors[cent]["expanded"] = 0 self.show_m.render() # TODO: Move to another class/module def __save(self): cluster_actors = self.__clusters_visualizer.cluster_actors tractogram_clusters = self.__clusters_visualizer.tractogram_clusters saving_streamlines = Streamlines() for bundle in cluster_actors.keys(): if bundle.GetVisibility(): t = cluster_actors[bundle]["tractogram"] c = cluster_actors[bundle]["cluster"] indices = tractogram_clusters[t][c] saving_streamlines.extend(Streamlines(indices)) logging.info("Saving result in tmp.trk") # Using the header of the first of the tractograms sft_new = StatefulTractogram( saving_streamlines, self.tractograms[0], Space.RASMM ) save_tractogram(sft_new, "tmp.trk", bbox_valid_check=False) logging.info("Saved!") # TODO: Move to another class/module def __show_all(self): centroid_actors = self.__clusters_visualizer.centroid_actors cluster_actors = self.__clusters_visualizer.cluster_actors lengths = self.__clusters_visualizer.lengths sizes = self.__clusters_visualizer.sizes min_length = np.min(lengths) min_size = np.min(sizes) if self.__select_all: for cent in centroid_actors: valid_length = centroid_actors[cent]["length"] >= min_length valid_size = centroid_actors[cent]["size"] >= min_size if valid_length and valid_size: centroid_actors[cent]["selected"] = 0 clus = 
centroid_actors[cent]["actor"] cluster_actors[clus]["selected"] = centroid_actors[cent]["selected"] self.__select_all = False else: for cent in centroid_actors: valid_length = centroid_actors[cent]["length"] >= min_length valid_size = centroid_actors[cent]["size"] >= min_size if valid_length and valid_size: centroid_actors[cent]["selected"] = 1 clus = centroid_actors[cent]["actor"] cluster_actors[clus]["selected"] = centroid_actors[cent]["selected"] self.__select_all = True self.show_m.render() def __win_callback(self, obj, _event): if self._win_size != obj.GetSize(): self._win_size = obj.GetSize() if len(self.__tabs) > 0: self.__tab_mgr.reposition(self._win_size) panel_size = self.shortcut_panel._get_size() self.shortcut_panel._set_position( (5, self._win_size[1] - panel_size[1] - 5) ) if self.cluster: panel_size = self.help_panel._get_size() new_pos = np.array(self._win_size) - panel_size - 5 self.help_panel._set_position(new_pos) def build_scene(self): self.mem = GlobalHorizon() scene = window.Scene() scene.background(self.bg_color) return scene def _show_force_render(self, _element): """ Callback function for lower level elements to force render. """ self.show_m.render() def _update_actors(self, actors): """Update actors in the scene. It essentially brings them forward in the stack. Parameters ---------- actors : list list of FURY actors. """ self._scene.rm(*actors) self._scene.add(*actors) def build_show(self, scene): self._scene = scene self._win_size = self._scene.GetSize() title = f"Horizon {horizon_version}" self.show_m = window.ShowManager( scene=scene, title=title, size=(1920, 1080), reset_camera=False, order_transparent=self.order_transparent, ) if len(self.tractograms) > 0: if self.cluster: self.__clusters_visualizer = ClustersVisualizer( self.show_m, scene, self.tractograms ) color_ind = 0 for t, sft in enumerate(self.tractograms): streamlines = sft.streamlines if "tracts" in self.random_colors: colors = next(self.color_gen) else: colors = None if not self.world_coords: # TODO: Get affine from a StatefullTractogram raise ValueError( "Currently native coordinates are not supported for " "streamlines." ) if self.cluster: self.__clusters_visualizer.add_cluster_actors( t, streamlines, self.cluster_thr, colors ) else: if self.buan: colors = self.buan_colors[color_ind] streamline_actor = actor.line(streamlines, colors=colors) streamline_actor.GetProperty().SetEdgeVisibility(1) streamline_actor.GetProperty().SetRenderLinesAsTubes(1) streamline_actor.GetProperty().SetLineWidth(6) streamline_actor.GetProperty().SetOpacity(1) scene.add(streamline_actor) color_ind += 1 if self.cluster: # Information panel # It will be changed once all the elements wrapped in horizon # elements. 
text_block = build_label(HELP_MESSAGE, font_size=18) self.help_panel = ui.Panel2D( size=(300, 200), position=(1615, 875), color=(0.8, 0.8, 1.0), opacity=0.2, align="left", ) self.help_panel.add_element(text_block.obj, coords=(0.02, 0.01)) scene.add(self.help_panel) self.__tabs.append( ClustersTab(self.__clusters_visualizer, self.cluster_thr) ) text_block = build_label(SHORTCUT_MESSAGE, font_size=18) self.shortcut_panel = ui.Panel2D( size=(260, 115), position=(5, 960), color=(0.8, 0.8, 1.0), opacity=0.2, align="left", ) self.shortcut_panel.add_element(text_block.obj, coords=(0.02, 0.01)) scene.add(self.shortcut_panel) sync_slices = sync_vol = False self.images = check_img_dtype(self.images) if len(self.images) > 0: if self.__roi_images: roi_color = self.__roi_colors roi_actors = [] img_count = 0 sync_slices, sync_vol = check_img_shapes(self.images) for img in self.images: title = f"Image {img_count + 1}" data, affine, fname = unpack_image(img) self.vox2ras = affine if self.__roi_images: if "rois" in self.random_colors: roi_color = next(self.color_gen) roi_actor = actor.contour_from_roi( data, affine=affine, color=roi_color ) scene.add(roi_actor) roi_actors.append(roi_actor) else: slices_viz = SlicesVisualizer( self.show_m.iren, scene, data, affine=affine, world_coords=self.world_coords, rgb=self.rgb, fname=fname, ) self.__tabs.append( SlicesTab( slices_viz, title, fname, force_render=self._show_force_render, ) ) img_count += 1 if len(roi_actors) > 0: self.__tabs.append(ROIsTab(roi_actors)) sync_peaks = False if len(self.pams) > 0: if self.images: sync_peaks = check_peak_size( self.pams, ref_img_shape=self.images[0][0].shape[:3], sync_imgs=sync_slices, ) else: sync_peaks = check_peak_size(self.pams) for pam in self.pams: peak_viz = PeaksVisualizer( (pam.peak_dirs, pam.affine), self.world_coords ) scene.add(peak_viz.actors[0]) self.__tabs.append(PeaksTab(peak_viz.actors[0])) if len(self._surfaces) > 0: for idx, surface in enumerate(self._surfaces): try: vertices, faces, fname = unpack_surface(surface) except ValueError as e: warn(str(e), stacklevel=2) continue color = next(self.color_gen) title = f"Surface {idx + 1}" surf_viz = SurfaceVisualizer((vertices, faces), scene, color) surf_tab = SurfaceTab(surf_viz, title, fname) self.__tabs.append(surf_tab) if len(self.__tabs) > 0: self.__tab_mgr = TabManager( tabs=self.__tabs, win_size=scene.GetSize(), on_tab_changed=self._update_actors, add_to_scene=self._scene.add, remove_from_scene=self._scene.rm, sync_slices=sync_slices, sync_volumes=sync_vol, sync_peaks=sync_peaks, ) scene.add(self.__tab_mgr.tab_ui) self.__tab_mgr.handle_text_overflows() self.__tabs[-1].on_tab_selected() self.show_m.initialize() options = [ r"un\hide centroids", "invert selection", r"un\select all", "expand clusters", "collapse clusters", "save streamlines", "recluster", ] listbox = ui.ListBox2D( values=options, position=(10, 300), size=(200, 270), multiselection=False, font_size=18, ) def display_element(): action = listbox.selected[0] if action == r"un\hide centroids": self.__hide() if action == "invert selection": self.__invert() if action == r"un\select all": self.__show_all() if action == "expand clusters": self.__expand() if action == "collapse clusters": self.__reset() if action == "save streamlines": self.__save() if action == "recluster": self.__new_window() listbox.on_change = display_element listbox.panel.opacity = 0.2 listbox.set_visibility(0) self.show_m.scene.add(listbox) def left_click_centroid_callback(obj, event): self.cea[obj]["selected"] = not 
self.cea[obj]["selected"] self.cla[self.cea[obj]["cluster_actor"]]["selected"] = self.cea[obj][ "selected" ] self.show_m.render() def right_click_centroid_callback(obj, event): for lactor in listbox._get_actors(): lactor.SetVisibility(not lactor.GetVisibility()) listbox.scroll_bar.set_visibility(False) self.show_m.render() def left_click_cluster_callback(obj, event): if self.cla[obj]["selected"]: self.cla[obj]["centroid_actor"].VisibilityOn() ca = self.cla[obj]["centroid_actor"] self.cea[ca]["selected"] = 0 obj.VisibilityOff() self.cea[ca]["expanded"] = 0 self.show_m.render() def right_click_cluster_callback(obj, event): logging.info("Cluster Area Selected") self.show_m.render() for cl in self.cla: cl.AddObserver("LeftButtonPressEvent", left_click_cluster_callback, 1.0) cl.AddObserver("RightButtonPressEvent", right_click_cluster_callback, 1.0) self.cla[cl]["centroid_actor"].AddObserver( "LeftButtonPressEvent", left_click_centroid_callback, 1.0 ) self.cla[cl]["centroid_actor"].AddObserver( "RightButtonPressEvent", right_click_centroid_callback, 1.0 ) self.mem.window_timer_cnt = 0 def timer_callback(obj, event): self.mem.window_timer_cnt += 1 # TODO possibly add automatic rotation option # self.show_m.scene.azimuth(0.01 * self.mem.window_timer_cnt) # self.show_m.render() scene.reset_camera() scene.zoom(1.5) scene.reset_clipping_range() if self.interactive: self.show_m.add_window_callback(self.__win_callback) self.show_m.add_timer_callback(True, 200, timer_callback) self.show_m.iren.AddObserver("KeyPressEvent", self.__key_press_events) if self.return_showm: return self.show_m if self.recorded_events is None: self.show_m.render() self.show_m.start() else: # set to True if event recorded file needs updating recording = False recording_filename = self.recorded_events if recording: self.show_m.record_events_to_file(recording_filename) else: self.show_m.play_events_from_file(recording_filename) else: window.record( scene=scene, out_path=self.out_png, size=(1200, 900), reset_camera=False ) @warning_for_keywords() def horizon( *, tractograms=None, images=None, pams=None, surfaces=None, cluster=False, rgb=False, cluster_thr=15.0, random_colors=None, bg_color=(0, 0, 0), order_transparent=True, length_gt=0, length_lt=1000, clusters_gt=0, clusters_lt=10000, world_coords=True, interactive=True, buan=False, buan_colors=None, roi_images=False, roi_colors=(1, 0, 0), out_png="tmp.png", recorded_events=None, return_showm=False, ): """Interactive medical visualization - Invert the Horizon! See :footcite:p:`Garyfallidis2019` for further details about Horizon. Parameters ---------- tractograms : sequence of StatefulTractograms StatefulTractograms are used for making sure that the coordinate systems are correct images : sequence of tuples Each tuple contains data and affine pams : sequence of PeakAndMetrics Contains peak directions and spherical harmonic coefficients surfaces : sequence of tuples Each tuple contains vertices and faces cluster : bool Enable QuickBundlesX clustering rgb: bool, optional Enable the color image. cluster_thr : float Distance threshold used for clustering. Default value 15.0 for small animal data you may need to use something smaller such as 2.0. The threshold is in mm. For this parameter to be active ``cluster`` should be enabled. random_colors : string Given multiple tractograms and/or ROIs then each tractogram and/or ROI will be shown with different color. 
If no value is provided both the tractograms and the ROIs will have a different random color generated from a distinguishable colormap. If the effect should only be applied to one of the 2 objects, then use the options 'tracts' and 'rois' for the tractograms and the ROIs respectively. bg_color : ndarray or list or tuple Define the background color of the scene. Default is black (0, 0, 0) order_transparent : bool Default True. Use depth peeling to sort transparent objects. If True also enables anti-aliasing. length_gt : float Clusters with average length greater than ``length_gt`` amount in mm will be shown. length_lt : float Clusters with average length less than ``length_lt`` amount in mm will be shown. clusters_gt : int Clusters with size greater than ``clusters_gt`` will be shown. clusters_lt : int Clusters with size less than ``clusters_lt`` will be shown. world_coords : bool Show data in their world coordinates (not native voxel coordinates) Default True. interactive : bool Allow user interaction. If False then Horizon goes on stealth mode and just saves pictures. buan : bool, optional Enables BUAN framework visualization. Default is False. buan_colors : list, optional List of colors for bundles. roi_images : bool, optional Displays binary images as contours. Default is False. roi_colors : ndarray or list or tuple, optional Define the color of the roi images. Default is red (1, 0, 0) out_png : string Filename of saved picture. recorded_events : string File path to replay recorded events return_showm : bool Return ShowManager object. Used only at Python level. Can be used for extending Horizon's capabilities externally and for testing purposes. References ---------- .. footbibliography:: """ hz = Horizon( tractograms=tractograms, images=images, pams=pams, surfaces=surfaces, cluster=cluster, rgb=rgb, cluster_thr=cluster_thr, random_colors=random_colors, length_gt=length_gt, length_lt=length_lt, clusters_gt=clusters_gt, clusters_lt=clusters_lt, world_coords=world_coords, interactive=interactive, out_png=out_png, recorded_events=recorded_events, return_showm=return_showm, bg_color=bg_color, order_transparent=order_transparent, buan=buan, buan_colors=buan_colors, roi_images=roi_images, roi_colors=roi_colors, ) scene = hz.build_scene() if return_showm: return hz.build_show(scene) hz.build_show(scene) dipy-1.11.0/dipy/viz/horizon/meson.build000066400000000000000000000002731476546756600202020ustar00rootroot00000000000000python_sources = [ '__init__.py', 'app.py', 'util.py', ] py3.install_sources( python_sources, pure: false, subdir: 'dipy/viz/horizon' ) subdir('tab') subdir('visualizer')dipy-1.11.0/dipy/viz/horizon/tab/000077500000000000000000000000001476546756600166045ustar00rootroot00000000000000dipy-1.11.0/dipy/viz/horizon/tab/__init__.py000066400000000000000000000012301476546756600207110ustar00rootroot00000000000000from dipy.viz.horizon.tab.base import ( HorizonTab, TabManager, build_checkbox, build_label, build_radio_button, build_slider, build_switcher, ) from dipy.viz.horizon.tab.cluster import ClustersTab from dipy.viz.horizon.tab.peak import PeaksTab from dipy.viz.horizon.tab.roi import ROIsTab from dipy.viz.horizon.tab.slice import SlicesTab from dipy.viz.horizon.tab.surface import SurfaceTab __all__ = [ "HorizonTab", "TabManager", "ClustersTab", "PeaksTab", "ROIsTab", "SlicesTab", "build_label", "build_slider", "build_checkbox", "build_switcher", "SurfaceTab", "build_radio_button", ] 
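
# --- Hedged usage sketch (illustrative, not part of the packaged sources) ---
# Minimal example of driving the high-level `horizon` entry point defined in
# app.py above. The file names "bundle.trk" and "t1.nii.gz" are placeholder
# assumptions; any tractogram/NIfTI pair should work.
from dipy.io.image import load_nifti
from dipy.io.streamline import load_tractogram
from dipy.viz.horizon.app import horizon

sft = load_tractogram("bundle.trk", "same")  # StatefulTractogram
data, affine = load_nifti("t1.nii.gz")  # image data and affine
horizon(
    tractograms=[sft],
    images=[(data, affine, "t1.nii.gz")],  # optional fname as third member
    cluster=True,  # enable QuickBundlesX clustering
    cluster_thr=15.0,  # clustering threshold in mm
    interactive=True,
)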
dipy-1.11.0/dipy/viz/horizon/tab/base.py000066400000000000000000000470001476546756600200710ustar00rootroot00000000000000
from abc import ABC, abstractmethod
from dataclasses import dataclass
import logging
from typing import Any
import warnings

import numpy as np

from dipy.testing.decorators import warning_for_keywords
from dipy.utils.optpkg import optional_package
from dipy.viz.horizon.util import show_ellipsis

fury, has_fury, setup_module = optional_package("fury", min_version="0.10.0")

if has_fury:
    from fury import ui
    from fury.data import read_viz_icons


@dataclass
class HorizonUIElement:
    """Dataclass to define properties of horizon ui elements."""

    visibility: bool
    selected_value: Any
    obj: Any
    position = (0, 0)
    size = ("auto", "auto")


class HorizonTab(ABC):
    """Base for different tabs available in horizon."""

    def __init__(self):
        self._elements = []
        self.hide = lambda *args: None
        self.show = lambda *args: None

    @abstractmethod
    def build(self, tab_id):
        """Build all the elements under the tab.

        Parameters
        ----------
        tab_id : int
            Id of the tab.
        """

    def _register_elements(self, *args):
        """Register elements for rendering.

        Parameters
        ----------
        *args : HorizonUIElement(s)
            Elements to be registered for rendering.
        """
        self._elements += list(args)

    def _toggle_actors(self, checkbox):
        """Toggle actors in the scene. Removing an actor from the scene makes
        it possible to interact with the actors behind it.

        Parameters
        ----------
        checkbox : CheckBox2D
        """
        if "" in checkbox.checked_labels:
            self.show(*self.actors)
        else:
            self.hide(*self.actors)

    def on_tab_selected(self):
        """Implement this if something needs to be updated when the tab
        becomes active.
        """
        if hasattr(self, "_actor_toggle"):
            self._toggle_actors(self._actor_toggle.obj)

    @property
    @abstractmethod
    def name(self):
        """Name of the tab."""

    @property
    @abstractmethod
    def actors(self):
        """Actors controlled by the tab."""

    @property
    def tab_id(self):
        """Id of the tab. Reference for Tab Manager to identify the tab.

        Returns
        -------
        int
        """
        return self._tab_id

    @property
    def elements(self):
        """list of underlying FURY ui elements in the tab."""
        return self._elements


class TabManager:
    """A Manager for tabs of the tab panel.

    Attributes
    ----------
    tab_ui : TabUI
        Underlying FURY TabUI object.
""" @warning_for_keywords() def __init__( self, tabs, win_size, on_tab_changed, add_to_scene, *, remove_from_scene, sync_slices=False, sync_volumes=False, sync_peaks=False, ): num_tabs = len(tabs) self._tabs = tabs self._add_to_scene = add_to_scene self._remove_from_scene = remove_from_scene self._synchronize_slices = sync_slices self._synchronize_volumes = sync_volumes self._synchronize_peaks = sync_peaks win_width, _win_height = win_size self._tab_size = (1280, 240) x_pad = np.rint((win_width - self._tab_size[0]) / 2) self._active_tab_id = num_tabs - 1 self._tab_ui = ui.TabUI( position=(x_pad, 5), size=self._tab_size, nb_tabs=num_tabs, active_color=(1, 1, 1), inactive_color=(0.5, 0.5, 0.5), draggable=True, startup_tab_id=self._active_tab_id, ) self._tab_ui.on_change = self._tab_selected self.tab_changed = on_tab_changed slices_tabs = list( filter(lambda x: x.__class__.__name__ == "SlicesTab", self._tabs) ) if not self._synchronize_slices and slices_tabs: msg = ( "Images are of different dimensions, " + "synchronization of slices will not work" ) logging.warning(msg) for tab_id, tab in enumerate(tabs): self._tab_ui.tabs[tab_id].title_font_size = 18 tab.hide = self._hide_elements tab.show = self._show_elements tab.build(tab_id) if tab.__class__.__name__ == "SlicesTab": tab.on_volume_change = self.synchronize_volumes if tab.__class__.__name__ in ["SlicesTab", "PeaksTab"]: tab.on_slice_change = self.synchronize_slices self._render_tab_elements(tab.tab_id, tab.elements) def handle_text_overflows(self): for tab_id, tab in enumerate(self._tabs): self._handle_title_overflow(tab.name, self._tab_ui.tabs[tab_id].text_block) if tab.__class__.__name__ == "SlicesTab": self._handle_label_text_overflow(tab.elements) def _handle_label_text_overflow(self, elements): for element in elements: if ( not element.size[0] == "auto" and element.obj.__class__.__name__ == "TextBlock2D" and isinstance(element.position, tuple) ): element.obj.message = show_ellipsis( element.selected_value, element.obj.size[0], element.size[0] ) def _handle_title_overflow(self, title_text, title_block): """Handle overflow of the tab title and show ellipsis if required. Parameters ---------- title_text : str Text to be shown on the tab. title_block : TextBlock2D Fury UI element for holding the title of the tab. """ tab_text = title_text.split(".", 1)[0] title_block.message = tab_text available_space, _ = self._tab_size text_size = title_block.size[0] max_width = (available_space / len(self._tabs)) - 15 title_block.message = show_ellipsis(tab_text, text_size, max_width) def _render_tab_elements(self, tab_id, elements): for element in elements: if isinstance(element.position, list): for i, position in enumerate(element.position): self._tab_ui.add_element(tab_id, element.obj[i], position) else: self._tab_ui.add_element(tab_id, element.obj, element.position) def _hide_elements(self, *args): """Hide elements from the scene. Parameters ---------- *args : HorizonUIElement or FURY actors Elements to be hidden. """ self._remove_from_scene(*self._get_vtkActors(*args)) def _show_elements(self, *args): """Show elements in the scene. Parameters ---------- *args : HorizonUIElement or FURY actors Elements to be hidden. 
""" self._add_to_scene(*self._get_vtkActors(*args)) def _get_vtkActors(self, *args): elements = [] vtk_actors = [] for element in args: if element.__class__.__name__ == "HorizonUIElement": if isinstance(element.obj, list): for obj in element.obj: elements.append(obj) else: elements.append(element.obj) else: elements.append(element) for element in elements: if hasattr(element, "_get_actors") and callable(element._get_actors): vtk_actors += element.actors else: vtk_actors.append(element) return vtk_actors def _tab_selected(self, tab_ui): if self._active_tab_id == tab_ui.active_tab_idx: self._active_tab_id = -1 return self._active_tab_id = tab_ui.active_tab_idx current_tab = self._tabs[self._active_tab_id] self.tab_changed(current_tab.actors) current_tab.on_tab_selected() def reposition(self, win_size): """ Reposition the tabs panel. Parameters ---------- win_size : (float, float) size of the horizon window. """ win_width, _win_height = win_size x_pad = np.rint((win_width - self._tab_size[0]) / 2) self._tab_ui.position = (x_pad, 5) def synchronize_slices(self, active_tab_id, x_value, y_value, z_value): """ Synchronize slicers for all the images and peaks. Parameters ---------- active_tab_id: int tab_id of the action performing tab x_value: float x-value of the active slicer y_value: float y-value of the active slicer z_value: float z-value of the active slicer """ if not self._synchronize_slices and not self._synchronize_peaks: return for tab in self._get_non_active_tabs(active_tab_id, ["SlicesTab", "PeaksTab"]): tab.update_slices(x_value, y_value, z_value) def synchronize_volumes(self, active_tab_id, value): """Synchronize volumes for all the images with volumes. Parameters ---------- active_tab_id : int tab_id of the action performing tab value : float volume value of the active volume slider """ if not self._synchronize_volumes: return for slices_tab in self._get_non_active_tabs(active_tab_id): slices_tab.update_volume(value) def _get_non_active_tabs(self, active_tab_id, types=("SlicesTab",)): """Get tabs which are not active and slice tabs. Parameters ---------- active_tab_id : int types : list(str), optional Returns ------- list """ return list( filter( lambda x: x.__class__.__name__ in types and not x.tab_id == active_tab_id, self._tabs, ) ) @property def tab_ui(self): """FURY TabUI object.""" return self._tab_ui @warning_for_keywords() def build_label(text, *, font_size=16, bold=False): """Simple utility function to build labels. Parameters ---------- text : str font_size : int, optional bold : bool, optional Returns ------- label : TextBlock2D """ label = ui.TextBlock2D() label.message = text label.font_size = font_size label.font_family = "Arial" label.justification = "left" label.bold = bold label.italic = False label.shadow = False label.actor.GetTextProperty().SetBackgroundColor(0, 0, 0) label.actor.GetTextProperty().SetBackgroundOpacity(0.0) label.color = (0.7, 0.7, 0.7) return HorizonUIElement(True, text, label) @warning_for_keywords() def build_slider( initial_value, max_value, *, min_value=0, length=450, line_width=3, radius=8, font_size=16, text_template="{value:.1f} ({ratio:.0%})", on_moving_slider=lambda _slider: None, on_value_changed=lambda _slider: None, on_change=lambda _slider: None, on_handle_released=lambda _istyle, _obj, _slider: None, label="", label_font_size=16, label_style_bold=False, is_double_slider=False, ): """Create a horizon theme based disk-knob slider. Parameters ---------- initial_value : float, (float, float) Initial value(s) of the slider. 
max_value : float Maximum value of the slider. min_value : float, optional Minimum value of the slider. length : int, optional Length of the slider. line_width : int, optional Width of the line on which the disk will slide. radius : int, optional Radius of the disk handle. font_size : int, optional Size of the text to display alongside the slider (pt). text_template : str, callable, optional If str, text template can contain one or multiple of the replacement fields: `{value:}`, `{ratio:}`. If callable, this instance of `:class:LineSlider2D` will be passed as argument to the text template function. on_moving_slider : callable, optional When the slider is interacted by the user. on_value_changed : callable, optional When value of the slider changed programmatically. on_change : callable, optional When value of the slider changed. on_handle_released: callable, optional When handle released. label : str, optional Label to ui element for slider label_font_size : int, optional Size of label text to display with slider label_style_bold : bool, optional Is label should have bold style. is_double_slider : bool, optional True if the slider allows to adjust two values. Returns ------- label : HorizonUIElement Slider label. HorizonUIElement Slider. """ if is_double_slider and "ratio" in text_template: warnings.warn("Double slider only support values and not ratio", stacklevel=2) return slider_label = build_label(label, font_size=label_font_size, bold=label_style_bold) if not is_double_slider: slider = ui.LineSlider2D( initial_value=initial_value, max_value=max_value, min_value=min_value, length=length, line_width=line_width, outer_radius=radius, font_size=font_size, text_template=text_template, ) else: slider = ui.LineDoubleSlider2D( initial_values=initial_value, max_value=max_value, min_value=min_value, length=length, line_width=line_width, outer_radius=radius, font_size=font_size, text_template=text_template, ) slider.on_moving_slider = on_moving_slider slider.on_value_changed = on_value_changed slider.on_change = on_change if not is_double_slider: slider.handle_events(slider.handle.actor) slider.on_left_mouse_button_released = on_handle_released slider.default_color = (1.0, 0.5, 0.0) slider.track.color = (0.8, 0.3, 0.0) slider.active_color = (0.9, 0.4, 0.0) if not is_double_slider: slider.handle.color = (1.0, 0.5, 0.0) else: slider.handles[0].color = (1.0, 0.5, 0.0) slider.handles[1].color = (1.0, 0.5, 0.0) return slider_label, HorizonUIElement(True, initial_value, slider) @warning_for_keywords() def build_checkbox( *, labels=None, checked_labels=None, padding=1, font_size=16, on_change=lambda _checkbox: None, ): """Create horizon theme checkboxes. Parameters ---------- labels : list(str), optional List of labels of each option. checked_labels: list(str), optional List of labels that are checked on setting up. padding : float, optional The distance between two adjacent options element font_size : int, optional Size of the text font. 
on_change : callback, optional When checkbox value changed Returns ------- checkbox : HorizonUIElement """ if labels is None or not labels: warnings.warn( "At least one label needs to be to create checkboxes", stacklevel=2 ) return if checked_labels is None: checked_labels = () checkboxes = ui.Checkbox( labels=labels, checked_labels=checked_labels, padding=padding, font_size=font_size, ) checkboxes.on_change = on_change return HorizonUIElement(True, checked_labels, checkboxes) @warning_for_keywords() def build_radio_button( *, labels=None, checked_labels=None, padding=1, font_size=16, on_change=lambda _checkbox: None, ): """Create horizon theme radio buttons. Parameters ---------- labels : list(str), optional List of labels of each option. checked_labels: list(str), optional List of labels that are checked on setting up. padding : float, optional The distance between two adjacent options element font_size : int, optional Size of the text font. on_change : callback, optional When radio button value changed Returns ------- radio : HorizonUIElement """ if labels is None or not labels: warnings.warn( "At least one label needs to be to create radio buttons", stacklevel=2 ) return if checked_labels is None: checked_labels = () radio = ui.RadioButton( labels=labels, checked_labels=checked_labels, padding=padding, font_size=font_size, ) radio.on_change = on_change return HorizonUIElement(True, checked_labels, radio) @warning_for_keywords() def build_switcher( *, items=None, label="", initial_selection=0, on_prev_clicked=lambda _selected_value: None, on_next_clicked=lambda _selected_value: None, on_value_changed=lambda _selected_idx, _selected_value: None, ): """Create horizon theme switcher. Parameters ---------- items : list, optional dictionaries with keys 'label' and 'value'. Label will be used to show it to user and value will be used for selection. label : str, optional label for the switcher. initial_selection : int, optional index of the selected item initially. on_prev_clicked : callback, optional method providing a callback when prev value is selected in switcher. on_next_clicked : callback, optional method providing a callback when next value is selected in switcher. on_value_changed : callback, optional method providing a callback when either prev or next value selected in switcher. Returns ------- HorizonCombineElement( label: HorizonUIElement, element(switcher): HorizonUIElement) Notes ----- switcher: consists 'obj' which is an array providing FURY UI elements used. 
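
    Examples
    --------
    A minimal sketch; the items mirror the colormap switcher used by
    ``SlicesTab``, and ``on_changed`` is a hypothetical callback name::

        def on_changed(idx, value):
            print(idx, value)

        switch_label, switcher = build_switcher(
            items=[
                {"label": "Gray", "value": "gray"},
                {"label": "Bone", "value": "bone"},
            ],
            label="Colormap",
            on_value_changed=on_changed,
        )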
""" if items is None: warnings.warn("No items passed in switcher", stacklevel=2) return num_items = len(items) if initial_selection >= num_items: initial_selection = 0 switch_label = build_label(text=label) selection_label = build_label(text=items[initial_selection]["label"]).obj left_button = ui.Button2D( icon_fnames=[("left", read_viz_icons(fname="circle-left.png"))], size=(25, 25) ) right_button = ui.Button2D( icon_fnames=[("right", read_viz_icons(fname="circle-right.png"))], size=(25, 25) ) switcher = HorizonUIElement( True, [initial_selection, items[initial_selection]["value"]], [left_button, selection_label, right_button], ) def left_clicked(_i_ren, _obj, _button): selected_id = switcher.selected_value[0] - 1 if selected_id < 0: selected_id = num_items - 1 value_changed(selected_id) on_prev_clicked(items[selected_id]["value"]) on_value_changed(selected_id, items[selected_id]["value"]) def right_clicked(_i_ren, _obj, _button): selected_id = switcher.selected_value[0] + 1 if selected_id >= num_items: selected_id = 0 value_changed(selected_id) on_next_clicked(items[selected_id]["value"]) on_value_changed(selected_id, items[selected_id]["value"]) def value_changed(selected_id): switcher.selected_value[0] = selected_id switcher.selected_value[1] = items[selected_id]["value"] selection_label.message = items[selected_id]["label"] left_button.on_left_mouse_button_clicked = left_clicked right_button.on_left_mouse_button_clicked = right_clicked return (switch_label, switcher) dipy-1.11.0/dipy/viz/horizon/tab/cluster.py000066400000000000000000000137271476546756600206510ustar00rootroot00000000000000import numpy as np from dipy.viz.horizon.tab import HorizonTab, build_slider class ClustersTab(HorizonTab): def __init__(self, clusters_visualizer, threshold): """Initialize Interaction tab for cluster visualization. Parameters ---------- clusters_visualizer : ClusterVisualizer threshold : float """ super().__init__() self._visualizer = clusters_visualizer self._name = "Clusters" self._tab_id = 0 sizes = self._visualizer.sizes self._size_slider_label, self._size_slider = build_slider( initial_value=np.percentile(sizes, 50), min_value=np.min(sizes), max_value=np.percentile(sizes, 98), text_template="{value:.0f}", label="Size", on_change=self._change_size, ) lengths = self._visualizer.lengths self._length_slider_label, self._length_slider = build_slider( initial_value=np.percentile(lengths, 25), min_value=np.min(lengths), max_value=np.percentile(lengths, 98), text_template="{value:.0f}", label="Length", on_change=self._change_length, ) self._threshold_slider_label, self._threshold_slider = build_slider( initial_value=threshold, min_value=5, max_value=25, text_template="{value:.0f}", label="Threshold", on_handle_released=self._change_threshold, ) self._register_elements( self._size_slider_label, self._size_slider, self._length_slider_label, self._length_slider, self._threshold_slider_label, self._threshold_slider, ) def _change_length(self, slider): """Change the length threshold for visibility. Parameters ---------- slider : LineSlider2D FURY object for slider. """ self._length_slider.selected_value = int(np.rint(slider.value)) self._update_clusters() def _change_size(self, slider): """Change the size threshold for visibility. Parameters ---------- slider : LineSlider2D FURY object for slider. """ self._size_slider.selected_value = int(np.rint(slider.value)) self._update_clusters() def _change_threshold(self, _istyle, _obj, slider): """Re-cluster the tractograms according to the new threshold set. 
It will also update the size and length slider for corresponding threshold. Parameters ---------- _istyle : vtkInteractor Should not be used. _obj : vtkObject Should not be used. slider : LineSlider2D FURY object for slider. """ value = int(np.rint(slider.value)) if value != self._threshold_slider.selected_value: self._visualizer.recluster_tractograms(value) sizes = self._visualizer.sizes self._size_slider.selected_value = np.percentile(sizes, 50) lengths = self._visualizer.lengths self._length_slider.selected_value = np.percentile(lengths, 25) self._update_clusters() self._size_slider.obj.min_value = np.min(sizes) self._size_slider.obj.max_value = np.percentile(sizes, 98) self._size_slider.obj.value = self._size_slider.selected_value self._size_slider.obj.update() self._length_slider.obj.min_value = np.min(lengths) self._length_slider.obj.max_value = np.percentile(lengths, 98) self._length_slider.obj.value = self._length_slider.selected_value self._length_slider.obj.update() self._threshold_slider.selected_value = value def _update_clusters(self): """Updates the clusters according to set size and length.""" for k, cluster in self.cluster_actors.items(): length_validation = cluster["length"] < self._length_slider.selected_value size_validation = cluster["size"] < self._size_slider.selected_value if length_validation or size_validation: cluster["actor"].SetVisibility(False) if k.GetVisibility(): k.SetVisibility(False) else: cluster["actor"].SetVisibility(True) def build(self, tab_id): """Position the elements in the tab. Parameters ---------- tab_id : int Id of the tab. """ self._tab_id = tab_id x_pos = 0.02 self._size_slider_label.position = (x_pos, 0.85) self._length_slider_label.position = (x_pos, 0.62) self._threshold_slider_label.position = (x_pos, 0.38) x_pos = 0.12 self._size_slider.position = (x_pos, 0.85) self._length_slider.position = (x_pos, 0.62) self._threshold_slider.position = (x_pos, 0.38) @property def name(self): """Title of the tab. Returns ------- str """ return self._name @property def cluster_actors(self): """Cluster actors of the tractograms. Returns ------- dict various properties of clusters. """ return self._visualizer.cluster_actors @property def centroid_actors(self): """Centroid actors of the tractograms. Returns ------- dict various properties of centroids. """ return self._visualizer.centroid_actors @property def actors(self): """All the actors in the visualizer. Returns ------- list """ actors = [] for cluster_actor in self.cluster_actors.values(): actors.append(cluster_actor["actor"]) for centroid_actor in self.centroid_actors.values(): actors.append(centroid_actor["actor"]) return actors dipy-1.11.0/dipy/viz/horizon/tab/meson.build000066400000000000000000000003461476546756600207510ustar00rootroot00000000000000python_sources = [ '__init__.py', 'base.py', 'cluster.py', 'peak.py', 'roi.py', 'slice.py', 'surface.py', ] py3.install_sources( python_sources, pure: false, subdir: 'dipy/viz/horizon/tab' ) subdir('tests')dipy-1.11.0/dipy/viz/horizon/tab/peak.py000066400000000000000000000273021476546756600201020ustar00rootroot00000000000000from functools import partial import numpy as np from dipy.testing.decorators import warning_for_keywords from dipy.viz.horizon.tab import ( HorizonTab, build_checkbox, build_label, build_radio_button, build_slider, ) class PeaksTab(HorizonTab): def __init__(self, peak_actor): """Initialize Interaction tab for peaks visualization. Parameters ---------- peak_actor : PeaksActor Horizon PeaksActor visualize peaks. 
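
    Notes
    -----
    A hedged construction sketch, mirroring how app.py wires this tab up
    (``pam`` stands for a loaded PeaksAndMetrics object)::

        peak_viz = PeaksVisualizer((pam.peak_dirs, pam.affine), world_coords)
        tab = PeaksTab(peak_viz.actors[0])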
""" super().__init__() self._actor = peak_actor self._name = "Peaks" self._tab_id = 0 self._actor_toggle = build_checkbox( labels=[""], checked_labels=[""], on_change=self._toggle_actors ) self._opacity_label, self._opacity = build_slider( initial_value=1.0, max_value=1.0, text_template="{ratio:.0%}", on_change=self._change_opacity, label="Opacity", ) min_centers = self._actor.min_centers max_centers = self._actor.max_centers cross_section = self._actor.cross_section self._slice_x_label, self._slice_x = build_slider( initial_value=cross_section[0], min_value=min_centers[0], max_value=max_centers[0], text_template="{value:.0f}", label="X Slice", ) self._change_slice_x = partial(self._change_slice, selected_slice=self._slice_x) self._adjust_slice_x = partial(self._change_slice_x, sync_slice=True) self._slice_x.obj.on_moving_slider = self._change_slice_x self._slice_x.obj.on_value_changed = self._adjust_slice_x self._slice_y_label, self._slice_y = build_slider( initial_value=cross_section[1], min_value=min_centers[1], max_value=max_centers[1], text_template="{value:.0f}", label="Y Slice", ) self._change_slice_y = partial(self._change_slice, selected_slice=self._slice_y) self._adjust_slice_y = partial(self._change_slice_y, sync_slice=True) self._slice_y.obj.on_moving_slider = self._change_slice_y self._slice_y.obj.on_value_changed = self._adjust_slice_y self._slice_z_label, self._slice_z = build_slider( initial_value=cross_section[2], min_value=min_centers[2], max_value=max_centers[2], text_template="{value:.0f}", label="Z Slice", ) self._change_slice_z = partial(self._change_slice, selected_slice=self._slice_z) self._adjust_slice_z = partial(self._change_slice_z, sync_slice=True) self._slice_z.obj.on_moving_slider = self._change_slice_z self._slice_z.obj.on_value_changed = self._adjust_slice_z low_ranges = self._actor.low_ranges high_ranges = self._actor.high_ranges self._range_x_label, self._range_x = build_slider( initial_value=(low_ranges[0], high_ranges[0]), min_value=low_ranges[0], max_value=high_ranges[0], text_template="{value:.0f}", label="X Range", is_double_slider=True, ) self._change_range_x = partial(self._change_range, selected_range=self._range_x) self._range_x.obj.on_change = self._change_range_x self._range_y_label, self._range_y = build_slider( initial_value=(low_ranges[1], high_ranges[1]), min_value=low_ranges[1], max_value=high_ranges[1], text_template="{value:.0f}", label="Y Range", is_double_slider=True, ) self._change_range_y = partial(self._change_range, selected_range=self._range_y) self._range_y.obj.on_change = self._change_range_y self._range_z_label, self._range_z = build_slider( initial_value=(low_ranges[2], high_ranges[2]), min_value=low_ranges[2], max_value=high_ranges[2], text_template="{value:.0f}", label="Z Range", is_double_slider=True, ) self._change_range_z = partial(self._change_range, selected_range=self._range_z) self._range_z.obj.on_change = self._change_range_z self._view_mode_label = build_label(text="View Mode") self._view_modes = ["Cross section", "Range"] self._view_mode_toggler = build_radio_button( labels=self._view_modes, checked_labels=[self._view_modes[0]], padding=1.5, on_change=self._toggle_view_mode, ) self._register_elements( self._opacity_label, self._opacity, self._actor_toggle, self._slice_x_label, self._slice_x, self._slice_y_label, self._slice_y, self._slice_z_label, self._slice_z, self._range_x_label, self._range_x, self._range_y_label, self._range_y, self._range_z_label, self._range_z, self._view_mode_label, self._view_mode_toggler, 
) def _change_opacity(self, slider): """Update opacity of the peaks actor by adjusting the slider to suitable value. Parameters ---------- slider : LineSlider2D FURY slider UI element. """ self._actor.global_opacity = slider.value def _change_range(self, slider, selected_range): """Update the range of peaks actor by adjusting the double slider. Only usable in Range view mode. Parameters ---------- slider : LineDoubleSlider2D FURY two side slider UI element. selected_range : HorizonUIElement Selected horizon ui element intended to change. """ selected_range.selected_value = ( slider.left_disk_value, slider.right_disk_value, ) self._actor.display_extent( self._range_x.selected_value[0], self._range_x.selected_value[1], self._range_y.selected_value[0], self._range_y.selected_value[1], self._range_z.selected_value[0], self._range_z.selected_value[1], ) @warning_for_keywords() def _change_slice(self, slider, selected_slice, *, sync_slice=False): """Update the slice value of peaks actor by adjusting the slider. Only usable in Cross Section view mode. Parameters ---------- slider : LineSlider2D FURY slider UI element. selected_slice : HorizonUIElement Selected horizon ui element intended to change. sync_slice : bool, optional Whether the slice is getting synchronized some other change, by default False """ value = int(np.rint(slider.value)) selected_slice.selected_value = value if not sync_slice: self.on_slice_change( self._tab_id, self._slice_x.selected_value, self._slice_y.selected_value, self._slice_z.selected_value, ) if self._view_mode_toggler.selected_value[0] == self._view_modes[0]: self._actor.display_cross_section( self._slice_x.selected_value, self._slice_y.selected_value, self._slice_z.selected_value, ) def _show_cross_section(self): """Show Cross Section view mode. Hide the Range sliders and labels.""" self.hide( self._range_x_label, self._range_x, self._range_y_label, self._range_y, self._range_z_label, self._range_z, ) self.show( self._slice_x_label, self._slice_x, self._slice_y_label, self._slice_y, self._slice_z_label, self._slice_z, ) self._change_slice(self._slice_x.obj, self._slice_x, sync_slice=True) def _show_range(self): """Show Range view mode. Hide Cross Section sliders and labels.""" self.hide( self._slice_x_label, self._slice_x, self._slice_y_label, self._slice_y, self._slice_z_label, self._slice_z, ) self.show( self._range_x_label, self._range_x, self._range_y_label, self._range_y, self._range_z_label, self._range_z, ) def _toggle_view_mode(self, radio): self._view_mode_toggler.selected_value = radio.checked_labels if radio.checked_labels[0] == self._view_modes[0]: self._actor.display_cross_section( self._actor.cross_section[0], self._actor.cross_section[1], self._actor.cross_section[2], ) self._show_cross_section() else: self._actor.display_extent( self._actor.low_ranges[0], self._actor.high_ranges[0], self._actor.low_ranges[1], self._actor.high_ranges[1], self._actor.low_ranges[2], self._actor.high_ranges[2], ) self._show_range() def on_tab_selected(self): """Trigger when tab becomes active.""" super().on_tab_selected() self._toggle_view_mode(self._view_mode_toggler.obj) def update_slices(self, x_slice, y_slice, z_slice): """Updates slicer positions. 
Parameters ---------- x_slice: float x-value where the slicer should be placed y_slice: float y-value where the slicer should be placed z_slice: float z-value where the slicer should be placed """ if not self._slice_x.obj.value == x_slice: self._slice_x.obj.value = x_slice if not self._slice_y.obj.value == y_slice: self._slice_y.obj.value = y_slice if not self._slice_z.obj.value == z_slice: self._slice_z.obj.value = z_slice def build(self, tab_id): """Build all the elements under the tab. Parameters ---------- tab_id : int Id of the tab. """ self._tab_id = tab_id x_pos = 0.02 self._actor_toggle.position = (x_pos, 0.85) x_pos = 0.04 self._opacity_label.position = (x_pos, 0.85) self._slice_x_label.position = (x_pos, 0.62) self._slice_y_label.position = (x_pos, 0.38) self._slice_z_label.position = (x_pos, 0.15) self._range_x_label.position = (x_pos, 0.62) self._range_y_label.position = (x_pos, 0.38) self._range_z_label.position = (x_pos, 0.15) x_pos = 0.10 self._opacity.position = (x_pos, 0.85) self._slice_x.position = (x_pos, 0.62) self._slice_y.position = (x_pos, 0.38) self._slice_z.position = (x_pos, 0.15) x_pos = 0.105 self._range_x.position = (x_pos, 0.66) self._range_y.position = (x_pos, 0.42) self._range_z.position = (x_pos, 0.19) x_pos = 0.52 self._view_mode_label.position = (x_pos, 0.85) x_pos = 0.62 self._view_mode_toggler.position = (x_pos, 0.80) cross_section = self._actor.cross_section self._actor.display_cross_section( cross_section[0], cross_section[1], cross_section[2] ) @property def name(self): """Name of the tab. Returns ------- str """ return self._name @property def actors(self): """actors controlled by tab. Returns ------- list List of actors. """ return [self._actor] dipy-1.11.0/dipy/viz/horizon/tab/roi.py000066400000000000000000000037511476546756600177550ustar00rootroot00000000000000from dipy.viz.horizon.tab import HorizonTab, build_checkbox, build_slider class ROIsTab(HorizonTab): def __init__(self, contour_actors): """Initialize interaction tab for ROIs visualization. Parameters ---------- contour_actors : list list of vtkActor. """ super().__init__() self._actors = contour_actors self._name = "ROIs" self._tab_id = 0 self._actor_toggle = build_checkbox( labels=[""], checked_labels=[""], on_change=self._toggle_actors ) self._opacity_slider_label, self._opacity_slider = build_slider( initial_value=1, max_value=1.0, text_template="{ratio:.0%}", on_change=self._change_opacity, label="Opacity", ) self._register_elements( self._actor_toggle, self._opacity_slider_label, self._opacity_slider ) def _change_opacity(self, slider): """Change opacity of all ROIs. Parameters ---------- slider : LineSlider2D """ opacity = slider.value self._opacity_slider.selected_value = slider.value for contour in self._actors: contour.GetProperty().SetOpacity(opacity) def build(self, tab_id): """Position the elements in the tab. Parameters ---------- tab_id : int Identifier for the tab. Index of the tab in TabUI. """ self._tab_id = tab_id y_pos = 0.85 self._actor_toggle.position = (0.02, y_pos) self._opacity_slider_label.position = (0.05, y_pos) self._opacity_slider.position = (0.12, y_pos) @property def name(self): """Title of the tab. Returns ------- str """ return self._name @property def actors(self): """Actors controlled by tab. 
Returns ------- list """ return self._actors dipy-1.11.0/dipy/viz/horizon/tab/slice.py000066400000000000000000000414671476546756600202710ustar00rootroot00000000000000from functools import partial from pathlib import Path import warnings import numpy as np from dipy.testing.decorators import is_macOS, warning_for_keywords from dipy.utils.optpkg import optional_package from dipy.viz.horizon.tab import ( HorizonTab, build_checkbox, build_label, build_slider, build_switcher, ) fury, has_fury, setup_module = optional_package("fury", min_version="0.10.0") if has_fury: from fury import colormap class SlicesTab(HorizonTab): """Interaction tab for slice visualization. Attributes ---------- name : str Name of the tab. """ @warning_for_keywords() def __init__( self, slices_visualizer, tab_name, file_name, *, force_render=lambda _element: None, ): super().__init__() self._visualizer = slices_visualizer self._name = tab_name self._file_name = Path(file_name or tab_name).name self._force_render = force_render self._tab_id = 0 self.on_slice_change = lambda _tab_id, _x, _y, _z: None self.on_volume_change = lambda _tab_id, v: None self._actor_toggle = build_checkbox( labels=[""], checked_labels=[""], on_change=self._toggle_actors ) self._slice_opacity_label, self._slice_opacity = build_slider( initial_value=1.0, max_value=1.0, text_template="{ratio:.0%}", on_change=self._change_opacity, label="Opacity", ) self._slice_x_label, self._slice_x = build_slider( initial_value=self._visualizer.selected_slices[0], max_value=self._visualizer.data_shape[0] - 1, text_template="{value:.0f}", label="X Slice", ) self._change_slice_x = partial( self._change_slice_value, selected_slice=self._slice_x ) self._adjust_slice_x = partial( self._change_slice_value, selected_slice=self._slice_x, sync_slice=True ) self._slice_x.obj.on_value_changed = self._adjust_slice_x self._slice_x.obj.on_moving_slider = self._change_slice_x self._change_slice_visibility_x = partial( self._update_slice_visibility, selected_slice=self._slice_x, actor_idx=0 ) self._slice_x_toggle = build_checkbox( labels=[""], checked_labels=[""], on_change=self._change_slice_visibility_x ) self._slice_y_label, self._slice_y = build_slider( initial_value=self._visualizer.selected_slices[1], max_value=self._visualizer.data_shape[1] - 1, text_template="{value:.0f}", label="Y Slice", ) self._change_slice_y = partial( self._change_slice_value, selected_slice=self._slice_y ) self._adjust_slice_y = partial( self._change_slice_value, selected_slice=self._slice_y, sync_slice=True ) self._slice_y.obj.on_value_changed = self._adjust_slice_y self._slice_y.obj.on_moving_slider = self._change_slice_y self._change_slice_visibility_y = partial( self._update_slice_visibility, selected_slice=self._slice_y, actor_idx=1 ) self._slice_y_toggle = build_checkbox( labels=[""], checked_labels=[""], on_change=self._change_slice_visibility_y ) self._slice_z_label, self._slice_z = build_slider( initial_value=self._visualizer.selected_slices[2], max_value=self._visualizer.data_shape[2] - 1, text_template="{value:.0f}", label="Z Slice", ) self._change_slice_z = partial( self._change_slice_value, selected_slice=self._slice_z ) self._adjust_slice_z = partial( self._change_slice_value, selected_slice=self._slice_z, sync_slice=True ) self._slice_z.obj.on_value_changed = self._adjust_slice_z self._slice_z.obj.on_moving_slider = self._change_slice_z self._change_slice_visibility_z = partial( self._update_slice_visibility, selected_slice=self._slice_z, actor_idx=2 ) self._slice_z_toggle = 
build_checkbox( labels=[""], checked_labels=[""], on_change=self._change_slice_visibility_z ) self._voxel_label = build_label(text="Voxel") self._voxel_data = build_label(text="") self._visualizer.register_picker_callback(self._change_picked_voxel) self._register_visualize_partials() self._file_label = build_label(text="Filename") self._file_name_label = build_label(text=self._file_name) self._register_elements( self._actor_toggle, self._slice_opacity_label, self._slice_opacity, self._slice_x_toggle, self._slice_x_label, self._slice_x, self._slice_y_toggle, self._slice_y_label, self._slice_y, self._slice_z_toggle, self._slice_z_label, self._slice_z, self._voxel_label, self._voxel_data, self._file_label, self._file_name_label, ) if not self._visualizer.rgb: self._intensities_label, self._intensities = build_slider( initial_value=self._visualizer.intensities_range, min_value=self._visualizer.volume_min, max_value=self._visualizer.volume_max, text_template="{value:.0f}", on_change=self._change_intensity, label="Intensities", is_double_slider=True, ) self._supported_colormap = [ {"label": "Gray", "value": "gray"}, {"label": "Bone", "value": "bone"}, {"label": "Cividis", "value": "cividis"}, {"label": "Inferno", "value": "inferno"}, {"label": "Magma", "value": "magma"}, {"label": "Viridis", "value": "viridis"}, {"label": "Jet", "value": "jet"}, {"label": "Pastel 1", "value": "Pastel1"}, {"label": "Distinct", "value": "dist"}, ] self._colormap_switcher_label, self._colormap_switcher = build_switcher( items=self._supported_colormap, label="Colormap", on_value_changed=self._change_color_map, ) self._register_elements( self._intensities_label, self._intensities, self._colormap_switcher_label, self._colormap_switcher, ) if len(self._visualizer.data_shape) == 4: self._adjust_volume = partial(self._change_volume, sync_vol=True) self._volume_label, self._volume = build_slider( initial_value=0, max_value=self._visualizer.data_shape[-1] - 1, on_moving_slider=self._change_volume, on_value_changed=self._adjust_volume, text_template="{value:.0f}", label="Volume", ) self._register_elements(self._volume_label, self._volume) def _register_visualize_partials(self): self._visualize_slice_x = partial( self._visualizer.slice_actors[0].display_extent, y1=0, y2=self._visualizer.data_shape[1] - 1, z1=0, z2=self._visualizer.data_shape[2] - 1, ) self._visualize_slice_y = partial( self._visualizer.slice_actors[1].display_extent, x1=0, x2=self._visualizer.data_shape[0] - 1, z1=0, z2=self._visualizer.data_shape[2] - 1, ) self._visualize_slice_z = partial( self._visualizer.slice_actors[2].display_extent, x1=0, x2=self._visualizer.data_shape[0] - 1, y1=0, y2=self._visualizer.data_shape[1] - 1, ) def _change_color_map(self, _idx, _value): self._update_colormap() self._force_render(self) def _change_intensity(self, slider): self._intensities.selected_value[0] = slider.left_disk_value self._intensities.selected_value[1] = slider.right_disk_value self._update_colormap() def _change_opacity(self, slider): self._slice_opacity.selected_value = slider.value self._update_opacities() def _change_picked_voxel(self, message): self._voxel_data.obj.message = message self._voxel_data.selected_value = message @warning_for_keywords() def _change_slice_value(self, slider, selected_slice, *, sync_slice=False): selected_slice.selected_value = int(np.rint(slider.value)) if not sync_slice: self.on_slice_change( self._tab_id, self._slice_x.selected_value, self._slice_y.selected_value, self._slice_z.selected_value, ) self._visualize_slice_x( 
x1=self._slice_x.selected_value, x2=self._slice_x.selected_value ) self._visualize_slice_y( y1=self._slice_y.selected_value, y2=self._slice_y.selected_value ) self._visualize_slice_z( z1=self._slice_z.selected_value, z2=self._slice_z.selected_value ) @warning_for_keywords() def _update_slice_visibility( self, checkboxes, selected_slice, actor_idx, *, visibility=None ): if checkboxes is not None and "" in checkboxes.checked_labels: visibility = True elif visibility is None: visibility = False selected_slice.visibility = visibility selected_slice.obj.set_visibility(visibility) self._visualizer.slice_actors[actor_idx].SetVisibility(visibility) def _change_volume(self, slider, sync_vol=False): value = int(np.rint(slider.value)) if value != self._volume.selected_value: if not sync_vol: self.on_volume_change(self._tab_id, value) visible_slices = ( self._slice_x.selected_value, self._slice_y.selected_value, self._slice_z.selected_value, ) valid_vol, default_range = self._visualizer.change_volume( value, np.asarray(self._intensities.obj._ratio), visible_slices, ) self._register_visualize_partials() if not valid_vol: warnings.warn( f"Volume N°{value} does not have any contrast. Please, " "check the value ranges of your data. Returning to " "previous volume.", stacklevel=2, ) self._volume.obj.value = self._volume.selected_value else: intensities_range = self._visualizer.intensities_range # Updating the colormap self._intensities.selected_value[0] = intensities_range[0] self._intensities.selected_value[1] = intensities_range[1] self._update_colormap() # Updating intensities slider if default_range: self._intensities.obj.left_disk_value = self._visualizer.volume_min self._intensities.obj.right_disk_value = self._visualizer.volume_max self._intensities.obj.min_value = self._visualizer.volume_min self._intensities.obj.max_value = self._visualizer.volume_max self._intensities.obj.update(0) self._intensities.obj.update(1) # Updating opacities self._update_opacities() # Updating visibilities slices = [self._slice_x, self._slice_y, self._slice_z] for i, s in enumerate(slices): self._update_slice_visibility(None, s, i, visibility=s.visibility) self._volume.selected_value = value self._force_render(self) def _update_colormap(self): if self._colormap_switcher.selected_value[1] == "dist": rgb = colormap.distinguishable_colormap(nb_colors=256) rgb = np.asarray(rgb) else: rgb = colormap.create_colormap( np.linspace( self._intensities.selected_value[0], self._intensities.selected_value[1], 256, ), name=self._colormap_switcher.selected_value[1], auto=True, ) num_lut = rgb.shape[0] lut = colormap.LookupTable() lut.SetNumberOfTableValues(num_lut) lut.SetRange( self._intensities.selected_value[0], self._intensities.selected_value[1] ) for i in range(num_lut): r, g, b = rgb[i] lut.SetTableValue(i, r, g, b) lut.SetRampToLinear() lut.Build() for slice_actor in self._visualizer.slice_actors: slice_actor.output.SetLookupTable(lut) slice_actor.output.Update() def _update_opacities(self): for slice_actor in self._visualizer.slice_actors: slice_actor.GetProperty().SetOpacity(self._slice_opacity.selected_value) def on_tab_selected(self): """Trigger when tab becomes active.""" super().on_tab_selected() self._slice_x.obj.set_visibility(self._slice_x.visibility) self._slice_y.obj.set_visibility(self._slice_y.visibility) self._slice_z.obj.set_visibility(self._slice_z.visibility) def update_slices(self, x_slice, y_slice, z_slice): """Updates slicer positions. 
Parameters ---------- x_slice: float x-value where the slicer should be placed y_slice: float y-value where the slicer should be placed z_slice: float z-value where the slicer should be placed """ if not self._slice_x.obj.value == x_slice: self._slice_x.obj.value = x_slice if not self._slice_y.obj.value == y_slice: self._slice_y.obj.value = y_slice if not self._slice_z.obj.value == z_slice: self._slice_z.obj.value = z_slice def update_volume(self, volume): """Updates volume based on passed volume. Parameters ---------- volume : float value of where the volume slider should be placed """ if not hasattr(self, "_volume"): return if not self._volume.obj.value == volume: self._volume.obj.value = volume def build(self, tab_id): """Build all the elements under the tab. Parameters ---------- tab_id : int Id of the tab. """ self._tab_id = tab_id x_pos = 0.02 self._actor_toggle.position = (x_pos, 0.85) self._slice_x_toggle.position = (x_pos, 0.62) self._slice_y_toggle.position = (x_pos, 0.38) self._slice_z_toggle.position = (x_pos, 0.15) x_pos = 0.05 self._slice_opacity_label.position = (x_pos, 0.85) self._slice_x_label.position = (x_pos, 0.62) self._slice_y_label.position = (x_pos, 0.38) self._slice_z_label.position = (x_pos, 0.15) x_pos = 0.10 self._slice_opacity.position = (x_pos, 0.85) self._slice_x.position = (x_pos, 0.62) self._slice_y.position = (x_pos, 0.38) self._slice_z.position = (x_pos, 0.15) x_pos = 0.52 self._voxel_label.position = (x_pos, 0.42) self._file_label.position = (x_pos, 0.28) x_pos = 0.60 self._voxel_data.position = (x_pos, 0.42) self._file_name_label.position = (x_pos, 0.28) if is_macOS: self._file_name_label.size = (800, "auto") else: self._file_name_label.size = (400, "auto") if not self._visualizer.rgb: x_pos = 0.52 self._intensities_label.position = (x_pos, 0.85) self._colormap_switcher_label.position = (x_pos, 0.56) x_pos = 0.60 self._intensities.position = (x_pos, 0.85) self._colormap_switcher.position = [ (x_pos, 0.54), (0.63, 0.54), (0.69, 0.54), ] if len(self._visualizer.data_shape) == 4: x_pos = 0.52 self._volume_label.position = (x_pos, 0.15) x_pos = 0.60 self._volume.position = (x_pos, 0.15) @property def name(self): """Name of the tab.""" return self._name @property def file_name(self): """Name of the file opened in the tab.""" return self._file_name @property def actors(self): """visualization actors controlled by tab.""" return self._visualizer.slice_actors dipy-1.11.0/dipy/viz/horizon/tab/surface.py000066400000000000000000000047751476546756600206230ustar00rootroot00000000000000from pathlib import Path from dipy.viz.horizon.tab import HorizonTab, build_checkbox, build_label, build_slider class SurfaceTab(HorizonTab): def __init__(self, visualizer, tab_name, file_name): """Surface Tab. 
Parameters ---------- visualizer : SurfaceVisualizer id : int """ super().__init__() self._visualizer = visualizer self._tab_id = 0 self._name = tab_name self._file_name = Path(file_name or tab_name).name self._actor_toggle = build_checkbox( labels=[""], checked_labels=[""], on_change=self._toggle_actors ) self._surface_opacity_label, self._surface_opacity = build_slider( initial_value=1.0, max_value=1.0, text_template="{ratio:.0%}", on_change=self._change_opacity, label="Opacity", ) self._file_label = build_label( text="Filename", ) self._file_name_label = build_label(text=self._file_name) self._register_elements( self._actor_toggle, self._surface_opacity_label, self._surface_opacity, self._file_label, self._file_name_label, ) def _change_opacity(self, slider): """Change opacity value according to slider changed. Parameters ---------- slider : LineSlider2D """ self._surface_opacity.selected_value = slider.value self._update_opacities() def _update_opacities(self): """Update opacities of visible actors based on selected values.""" for actor in self.actors: actor.GetProperty().SetOpacity(self._surface_opacity.selected_value) def build(self, tab_id): """Build all the elements under the tab. Parameters ---------- tab_id : int Id of the tab. """ self._tab_id = tab_id y_pos = 0.85 self._actor_toggle.position = (0.02, y_pos) self._surface_opacity_label.position = (0.05, y_pos) self._surface_opacity.position = (0.10, y_pos) y_pos = 0.65 self._file_label.position = (0.05, y_pos) self._file_name_label.position = (0.13, y_pos) @property def name(self): """Name of the tab. Returns ------- str """ return self._name @property def actors(self): """Actors controlled by this tab.""" return self._visualizer.actors dipy-1.11.0/dipy/viz/horizon/tab/tests/000077500000000000000000000000001476546756600177465ustar00rootroot00000000000000dipy-1.11.0/dipy/viz/horizon/tab/tests/__init__.py000066400000000000000000000000001476546756600220450ustar00rootroot00000000000000dipy-1.11.0/dipy/viz/horizon/tab/tests/meson.build000066400000000000000000000002311476546756600221040ustar00rootroot00000000000000python_sources = [ '__init__.py', 'test_base.py', ] py3.install_sources( python_sources, pure: false, subdir: 'dipy/viz/horizon/tab/tests' )dipy-1.11.0/dipy/viz/horizon/tab/tests/test_base.py000066400000000000000000000163751476546756600223050ustar00rootroot00000000000000import warnings import numpy.testing as npt import pytest from dipy.testing import check_for_warnings from dipy.testing.decorators import use_xvfb from dipy.utils.optpkg import optional_package from dipy.viz.horizon.tab.base import ( build_checkbox, build_label, build_radio_button, build_slider, build_switcher, ) fury, has_fury, setup_module = optional_package("fury", min_version="0.10.0") skip_it = use_xvfb == "skip" def check_label(label): npt.assert_equal(label.font_family, "Arial") npt.assert_equal(label.justification, "left") npt.assert_equal(label.italic, False) npt.assert_equal(label.shadow, False) npt.assert_equal( label.actor.GetTextProperty().GetBackgroundColor(), (0.0, 0.0, 0.0) ) npt.assert_equal(label.actor.GetTextProperty().GetBackgroundOpacity(), 0.0) npt.assert_equal(label.color, (0.7, 0.7, 0.7)) @pytest.mark.skipif(skip_it or not has_fury, reason="Needs xvfb") def test_build_label(): horizon_label = build_label(text="Hello") npt.assert_equal(horizon_label.obj.message, "Hello") npt.assert_equal(horizon_label.obj.font_size, 16) npt.assert_equal(horizon_label.obj.bold, False) check_label(horizon_label.obj) @pytest.mark.skipif(skip_it or not 
has_fury, reason="Needs xvfb") def test_build_slider(): single_slider_label, single_slider = build_slider(5, 100) npt.assert_equal(single_slider_label.obj.message, "") npt.assert_equal(single_slider_label.obj.font_size, 16) npt.assert_equal(single_slider_label.obj.bold, False) npt.assert_equal(single_slider.obj.value, 5) npt.assert_equal(single_slider.obj.max_value, 100) npt.assert_equal(single_slider.obj.min_value, 0) npt.assert_equal(single_slider.obj.track.width, 450) npt.assert_equal(single_slider.obj.track.height, 3) npt.assert_equal(single_slider.obj.track.color, (0.8, 0.3, 0.0)) npt.assert_equal(single_slider.obj.default_color, (1.0, 0.5, 0.0)) npt.assert_equal(single_slider.obj.active_color, (0.9, 0.4, 0.0)) npt.assert_equal(single_slider.obj.handle.color, (1.0, 0.5, 0.0)) npt.assert_equal(single_slider.obj.handle.outer_radius, 8) npt.assert_equal(single_slider.obj.text.font_size, 16) npt.assert_equal(single_slider.obj.text_template, "{value:.1f} ({ratio:.0%})") npt.assert_equal(single_slider.selected_value, 5) double_slider_label, double_slider = build_slider( (4, 5), 100, text_template="{value:.1f}", is_double_slider=True ) npt.assert_equal(double_slider_label.obj.message, "") npt.assert_equal(double_slider_label.obj.font_size, 16) npt.assert_equal(double_slider_label.obj.bold, False) npt.assert_equal(double_slider.obj.max_value, 100) npt.assert_equal(double_slider.obj.min_value, 0) npt.assert_equal(double_slider.obj.track.width, 450) npt.assert_equal(double_slider.obj.track.height, 3) npt.assert_equal(double_slider.obj.track.color, (0.8, 0.3, 0.0)) npt.assert_equal(double_slider.obj.default_color, (1.0, 0.5, 0.0)) npt.assert_equal(double_slider.obj.active_color, (0.9, 0.4, 0.0)) npt.assert_equal(double_slider.obj.handles[0].color, (1.0, 0.5, 0.0)) npt.assert_equal(double_slider.obj.handles[1].color, (1.0, 0.5, 0.0)) npt.assert_equal(double_slider.obj.handles[0].outer_radius, 8) npt.assert_equal(double_slider.obj.handles[1].outer_radius, 8) npt.assert_equal(double_slider.obj.text[0].font_size, 16) npt.assert_equal(double_slider.obj.text[1].font_size, 16) npt.assert_equal(double_slider.selected_value, (4, 5)) def on_handle_released(a, b, c, d): pass single_slider_label, single_slider = build_slider( 5, 100, on_handle_released=on_handle_released ) npt.assert_equal(single_slider_label.obj.message, "") npt.assert_equal(single_slider_label.obj.font_size, 16) npt.assert_equal(single_slider_label.obj.bold, False) npt.assert_equal(single_slider.obj.value, 5) npt.assert_equal(single_slider.obj.max_value, 100) npt.assert_equal(single_slider.obj.min_value, 0) npt.assert_equal(single_slider.obj.track.width, 450) npt.assert_equal(single_slider.obj.track.height, 3) npt.assert_equal(single_slider.obj.track.color, (0.8, 0.3, 0.0)) npt.assert_equal(single_slider.obj.default_color, (1.0, 0.5, 0.0)) npt.assert_equal(single_slider.obj.active_color, (0.9, 0.4, 0.0)) npt.assert_equal(single_slider.obj.handle.color, (1.0, 0.5, 0.0)) npt.assert_equal(single_slider.obj.handle.outer_radius, 8) npt.assert_equal(single_slider.obj.text.font_size, 16) npt.assert_equal(single_slider.obj.text_template, "{value:.1f} ({ratio:.0%})") npt.assert_equal(single_slider.selected_value, 5) npt.assert_equal( single_slider.obj.on_left_mouse_button_released, on_handle_released ) @pytest.mark.skipif(skip_it or not has_fury, reason="Needs xvfb") def test_build_checkbox(): checkbox = build_checkbox(labels=["Hello", "Hi"], checked_labels=["Hello"]) npt.assert_equal(len(checkbox.obj.checked_labels), 1) 
    npt.assert_equal(len(checkbox.obj.labels), 2)
    npt.assert_equal(checkbox.selected_value, ["Hello"])

    with warnings.catch_warnings(record=True) as l_warns:
        checkbox = build_checkbox()
        npt.assert_equal(checkbox, None)
        check_for_warnings(
            l_warns, "At least one label needs to be to create checkboxes"
        )

    with warnings.catch_warnings(record=True) as l_warns:
        checkbox = build_checkbox([])
        npt.assert_equal(checkbox, None)
        check_for_warnings(
            l_warns, "At least one label needs to be to create checkboxes"
        )


@pytest.mark.skipif(skip_it or not has_fury, reason="Needs xvfb")
def test_build_radio():
    radio = build_radio_button(labels=["Hello", "Hi"], checked_labels=["Hello"])
    npt.assert_equal(len(radio.obj.checked_labels), 1)
    npt.assert_equal(len(radio.obj.labels), 2)
    npt.assert_equal(radio.selected_value, ["Hello"])

    with warnings.catch_warnings(record=True) as l_warns:
        radio = build_radio_button()
        npt.assert_equal(radio, None)
        check_for_warnings(
            l_warns, "At least one label needs to be to create radio buttons"
        )

    with warnings.catch_warnings(record=True) as l_warns:
        radio = build_radio_button([])
        npt.assert_equal(radio, None)
        check_for_warnings(
            l_warns, "At least one label needs to be to create radio buttons"
        )


@pytest.mark.skipif(skip_it or not has_fury, reason="Needs xvfb")
def test_build_switcher():
    with warnings.catch_warnings(record=True) as l_warns:
        none_switcher = build_switcher()
        npt.assert_equal(none_switcher, None)
        check_for_warnings(l_warns, "No items passed in switcher")

    switcher_label, switcher = build_switcher(
        items=[{"label": "Hello", "value": "hello"}]
    )
    npt.assert_equal(switcher.selected_value[0], 0)
    npt.assert_equal(switcher.selected_value[1], "hello")
    npt.assert_equal(switcher_label.selected_value, "")

    switcher_label, switcher = build_switcher(
        items=[{"label": "Hello", "value": "hello"}],
        label="Greeting",
        initial_selection=1,
    )
    npt.assert_equal(switcher.selected_value[0], 0)
    npt.assert_equal(switcher.selected_value[1], "hello")
    npt.assert_equal(switcher_label.selected_value, "Greeting")
dipy-1.11.0/dipy/viz/horizon/util.py000066400000000000000000000114071476546756600173700ustar00rootroot00000000000000
import logging

import numpy as np

from dipy.testing.decorators import warning_for_keywords


def check_img_shapes(images):
    """Check if the images have the same shape.

    It also reports whether the volumes (4th dimension) match. If the shapes
    are not equal it will return False for both shape and volume.

    Parameters
    ----------
    images : list

    Returns
    -------
    tuple
        tuple[0] = True, if shapes are equal.
        tuple[1] = True, if volumes are equal.
    """
    if len(images) < 2:
        return (True, False)
    base_shape = images[0][0].shape[:3]
    volumed_data_shapes = []
    for img in images:
        data, _, _ = unpack_image(img)
        data_shape = data.shape[:3]
        if data.ndim == 4:
            volumed_data_shapes.append(data.shape[3])
        if base_shape != data_shape:
            return (False, False)
    return (True, len(set(volumed_data_shapes)) == 1)


def check_img_dtype(images):
    """Check the supplied image dtype.

    If it is not a supported numerical type, fall back to a supported
    numerical type (either int32 or float32). If it is a non-numerical type,
    skip the data.

    Parameters
    ----------
    images : list
        Each image is a tuple of (data, affine).

    Returns
    -------
    list
        Valid images from the provided images.
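
    Examples
    --------
    A minimal sketch (assumes ``numpy`` is imported as ``np``; float16 input
    is treated as unsupported and promoted to float32)::

        img = (np.zeros((2, 2, 2), dtype=np.float16), np.eye(4), "a.nii")
        ((data, affine, fname),) = check_img_dtype([img])
        assert data.dtype == np.float32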
""" valid_images = [] for idx, img in enumerate(images): data, affine, fname = unpack_image(img) if np.issubdtype(data.dtype, np.integer): if data.dtype != np.int32: msg = f"{data.dtype} is not supported, falling back to int32" logging.warning(msg) img = (data.astype(np.int32), affine, fname) valid_images.append(img) elif np.issubdtype(data.dtype, np.floating): if data.dtype != np.float64 and data.dtype != np.float32: msg = f"{data.dtype} is not supported, falling back to float32" logging.warning(msg) img = (data.astype(np.float32), affine, fname) valid_images.append(img) else: msg = f"skipping image {idx + 1}, passed image is not in numerical format" logging.warning(msg) return valid_images def show_ellipsis(text, text_size, available_size): """Apply ellipsis to the text. Parameters ---------- text : string Text required to be check for ellipsis. text_size : float Current size of the text in pixels. available_size : float Size available to fit the text. This will be used to truncate the text and show ellipsis. Returns ------- string Text after processing for ellipsis. """ if text_size > available_size: max_chars = int((available_size / text_size) * len(text)) ellipsis_text = f"...{text[-(max_chars - 3) :]}" return ellipsis_text return text def unpack_image(img): """Unpack provided image data. Standard way to handle different value images. Parameters ---------- img : tuple An image can contain either (data, affine) or (data, affine, fname). Returns ------- tuple If img with (data, affine) it will convert to (data, affine, None). Otherwise it will be passed as it is. """ return _unpack_data(img) def unpack_surface(surface): """Unpack surface data. Parameters ---------- surface : tuple It either contains (vertices, faces) or (vertices, faces, fname). Returns ------- tuple If surface with (vertices, faces) it will convert to (vertices, faces, None). Otherwise it will be passed as it is. """ data = _unpack_data(surface) if data[0].shape[-1] != 3: raise ValueError(f"Vertices do not have correct shape: {data[0].shape}") if data[1].shape[-1] != 3: raise ValueError(f"Faces do not have correct shape: {data[1].shape}") return data def _unpack_data(data, return_size=3): result = [*data] if len(data) < return_size: result += (return_size - len(data)) * [None] return result @warning_for_keywords() def check_peak_size(pams, *, ref_img_shape=None, sync_imgs=False): """Check shape of peaks. Parameters ---------- pams : PeaksAndMetrics Peaks and metrics. ref_img_shape : tuple, optional 3D shape of the image, by default None. sync_imgs : bool, optional True if the images are synchronized, by default False. Returns ------- bool If the peaks are aligned with images and other peaks. 
""" base_shape = pams[0].peak_dirs.shape[:3] for pam in pams: if pam.peak_dirs.shape[:3] != base_shape: return False if not ref_img_shape: return True return base_shape == ref_img_shape and sync_imgs dipy-1.11.0/dipy/viz/horizon/visualizer/000077500000000000000000000000001476546756600202335ustar00rootroot00000000000000dipy-1.11.0/dipy/viz/horizon/visualizer/__init__.py000066400000000000000000000005621476546756600223470ustar00rootroot00000000000000from dipy.viz.horizon.visualizer.cluster import ClustersVisualizer from dipy.viz.horizon.visualizer.peak import PeaksVisualizer from dipy.viz.horizon.visualizer.slice import SlicesVisualizer from dipy.viz.horizon.visualizer.surface import SurfaceVisualizer __all__ = [ "ClustersVisualizer", "SlicesVisualizer", "SurfaceVisualizer", "PeaksVisualizer", ] dipy-1.11.0/dipy/viz/horizon/visualizer/cluster.py000066400000000000000000000147331476546756600222760ustar00rootroot00000000000000import logging import numpy as np from dipy.segment.clustering import qbx_and_merge from dipy.testing.decorators import warning_for_keywords from dipy.tracking.streamline import length from dipy.utils.optpkg import optional_package fury, has_fury, setup_module = optional_package("fury", min_version="0.10.0") if has_fury: from fury import actor from fury.lib import VTK_OBJECT, calldata_type from fury.shaders import add_shader_callback, shader_to_actor class ClustersVisualizer: @warning_for_keywords() def __init__(self, show_manager, scene, tractograms, *, enable_callbacks=True): # TODO: Avoid passing the entire show manager to the visualizer self.__show_man = show_manager self.__scene = scene self.__tractograms = tractograms self.__enable_callbacks = enable_callbacks self.__tractogram_clusters = {} self.__first_time = True self.__tractogram_colors = [] self.__centroid_actors = {} self.__cluster_actors = {} self.__lengths = [] self.__sizes = [] def __apply_shader(self, dict_element): decl = """ uniform float selected; """ impl = """ if (selected == 1) { fragOutput0 += vec4(.2, .2, .2, 0); } """ shader_to_actor(dict_element["actor"], "fragment", decl_code=decl) shader_to_actor( dict_element["actor"], "fragment", impl_code=impl, block="light" ) @calldata_type(VTK_OBJECT) def uniform_selected_callback(caller, event, calldata=None): program = calldata if program is not None: program.SetUniformf("selected", dict_element["selected"]) add_shader_callback( dict_element["actor"], uniform_selected_callback, priority=100 ) def __left_click_centroid_callback(self, obj, event): self.__centroid_actors[obj]["selected"] = not self.__centroid_actors[obj][ "selected" ] self.__cluster_actors[self.__centroid_actors[obj]["actor"]]["selected"] = ( self.__centroid_actors[obj]["selected"] ) # TODO: Find another way to rerender self.__show_man.render() def __left_click_cluster_callback(self, obj, event): if self.__cluster_actors[obj]["selected"]: self.__cluster_actors[obj]["actor"].VisibilityOn() ca = self.__cluster_actors[obj]["actor"] self.__centroid_actors[ca]["selected"] = 0 obj.VisibilityOff() self.__centroid_actors[ca]["expanded"] = 0 # TODO: Find another way to rerender self.__show_man.render() def add_cluster_actors(self, tract_idx, streamlines, thr, colors): # Saving the tractogram colors in case of reclustering if self.__first_time: self.__tractogram_colors.append(colors) logging.info(f"Clustering threshold {thr}") clusters = qbx_and_merge(streamlines, [40, 30, 25, 20, thr]) self.__tractogram_clusters[tract_idx] = clusters centroids = clusters.centroids logging.info(f"Total number of 
centroids = {len(centroids)}") lengths = [length(c) for c in centroids] self.__lengths.extend(lengths) lengths = np.array(lengths) sizes = [len(c) for c in clusters] self.__sizes.extend(sizes) sizes = np.array(sizes) linewidths = np.interp(sizes, [np.min(sizes), np.max(sizes)], [0.1, 2.0]) logging.info(f"Minimum number of streamlines in cluster {np.min(sizes)}") logging.info(f"Maximum number of streamlines in cluster {np.max(sizes)}") logging.info("Building cluster actors\n") for idx, cent in enumerate(centroids): centroid_actor = actor.streamtube( [cent], colors=colors, linewidth=linewidths[idx], lod=False ) self.__scene.add(centroid_actor) cluster_actor = actor.line(clusters[idx][:], lod=False) cluster_actor.GetProperty().SetRenderLinesAsTubes(1) cluster_actor.GetProperty().SetLineWidth(6) cluster_actor.GetProperty().SetOpacity(1) cluster_actor.VisibilityOff() self.__scene.add(cluster_actor) # Every centroid actor is paired to a cluster actor self.__centroid_actors[centroid_actor] = { "actor": cluster_actor, "cluster": idx, "tractogram": tract_idx, "size": sizes[idx], "length": lengths[idx], "selected": 0, "expanded": 0, } self.__cluster_actors[cluster_actor] = { "actor": centroid_actor, "cluster": idx, "tractogram": tract_idx, "size": sizes[idx], "length": lengths[idx], "selected": 0, "highlighted": 0, } self.__apply_shader(self.__centroid_actors[centroid_actor]) self.__apply_shader(self.__cluster_actors[cluster_actor]) if self.__enable_callbacks: centroid_actor.AddObserver( "LeftButtonPressEvent", self.__left_click_centroid_callback, 1.0 ) cluster_actor.AddObserver( "LeftButtonPressEvent", self.__left_click_cluster_callback, 1.0 ) def recluster_tractograms(self, thr): for cent in self.__centroid_actors: self.__scene.rm(self.__centroid_actors[cent]["actor"]) for clus in self.__cluster_actors: self.__scene.rm(self.__cluster_actors[clus]["actor"]) self.__tractogram_clusters = {} self.__centroid_actors = {} self.__cluster_actors = {} self.__lengths = [] self.__sizes = [] # Keeping states of some attributes self.__first_time = False for t, sft in enumerate(self.__tractograms): streamlines = sft.streamlines self.add_cluster_actors(t, streamlines, thr, self.__tractogram_colors[t]) @property def centroid_actors(self): return self.__centroid_actors @property def cluster_actors(self): return self.__cluster_actors @property def lengths(self): return np.array(self.__lengths) @property def sizes(self): return np.array(self.__sizes) @property def tractogram_clusters(self): return self.__tractogram_clusters dipy-1.11.0/dipy/viz/horizon/visualizer/meson.build000066400000000000000000000003051476546756600223730ustar00rootroot00000000000000python_sources = [ '__init__.py', 'cluster.py', 'slice.py', 'surface.py', 'peak.py', ] py3.install_sources( python_sources, pure: false, subdir: 'dipy/viz/horizon/visualizer' ) dipy-1.11.0/dipy/viz/horizon/visualizer/peak.py000066400000000000000000000437201476546756600215330ustar00rootroot00000000000000from os.path import join as pjoin import warnings import numpy as np from dipy.testing.decorators import warning_for_keywords from dipy.utils.optpkg import optional_package fury, has_fury, setup_module = optional_package("fury", min_version="0.9.0") if has_fury: from fury.colormap import colormap_lookup_table from fury.lib import ( VTK_OBJECT, Actor, CellArray, Command, PolyData, PolyDataMapper, calldata_type, numpy_support, ) from fury.shaders import ( attribute_to_actor, compose_shader, import_fury_shader, shader_to_actor, ) from fury.utils import apply_affine, 
numpy_to_vtk_colors, numpy_to_vtk_points else: class Actor: pass def calldata_type(func): def wrapper(*args, **kwargs): return func(*args, **kwargs) return wrapper def VTK_OBJECT(*args): pass class PeakActor(Actor): """FURY actor for visualizing DWI peaks. Parameters ---------- directions : ndarray Peak directions. The shape of the array should be (X, Y, Z, D, 3). indices : tuple Indices given in tuple(x_indices, y_indices, z_indices) format for mapping 2D ODF array to 3D voxel grid. values : ndarray, optional Peak values. The shape of the array should be (X, Y, Z, D). affine : array, optional 4x4 transformation array from native coordinates to world coordinates. colors : None or string ('rgb_standard') or tuple (3D or 4D) or array/ndarray (N, 3 or 4) or (K, 3 or 4) or (N, ) or (K, ) If None a standard orientation colormap is used for every line. If one tuple of color is used. Then all streamlines will have the same color. If an array (N, 3 or 4) is given, where N is equal to the number of points. Then every point is colored with a different RGB(A) color. If an array (K, 3 or 4) is given, where K is equal to the number of lines. Then every line is colored with a different RGB(A) color. If an array (N, ) is given, where N is the number of points then these are considered as the values to be used by the colormap. If an array (K,) is given, where K is the number of lines then these are considered as the values to be used by the colormap. lookup_colormap : vtkLookupTable, optional Add a default lookup table to the colormap. Look at :func:`fury.actor.colormap_lookup_table` for more information. linewidth : float, optional Line thickness. symmetric: bool, optional If True, peaks are drawn for both peaks_dirs and -peaks_dirs. Else, peaks are only drawn for directions given by peaks_dirs. 
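Examples
--------
A sketch of how ``indices`` is typically built from the directions
array before constructing the actor (the data is illustrative; only
voxels with a non-zero peak are kept, mirroring what ``peak`` does):

>>> import numpy as np
>>> dirs = np.zeros((2, 2, 2, 1, 3))
>>> dirs[0, 0, 0, 0] = (1, 0, 0)
>>> indices = np.nonzero(np.abs(dirs).max(axis=(-2, -1)) > 0)
>>> [idx.item() for idx in indices]
[0, 0, 0]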
""" # noqa: E501 @warning_for_keywords() def __init__( self, directions, indices, *, values=None, affine=None, colors=None, lookup_colormap=None, linewidth=1, symmetric=True, ): if affine is not None: w_pos = apply_affine(affine, np.asarray(indices).T) valid_dirs = directions[indices] num_dirs = len(np.nonzero(np.abs(valid_dirs).max(axis=-1) > 0)[0]) pnts_per_line = 2 points_array = np.empty((num_dirs * pnts_per_line, 3)) centers_array = np.empty_like(points_array, dtype=int) diffs_array = np.empty_like(points_array) line_count = 0 for idx, center in enumerate(zip(indices[0], indices[1], indices[2])): if affine is None: xyz = np.asarray(center) else: xyz = w_pos[idx, :] valid_peaks = np.nonzero(np.abs(valid_dirs[idx, :, :]).max(axis=-1) > 0.0)[ 0 ] for direction in valid_peaks: if values is not None: pv = values[center][direction] else: pv = 1.0 if symmetric: point_i = directions[center][direction] * pv + xyz point_e = -directions[center][direction] * pv + xyz else: point_i = directions[center][direction] * pv + xyz point_e = xyz diff = point_e - point_i points_array[line_count * pnts_per_line, :] = point_e points_array[line_count * pnts_per_line + 1, :] = point_i centers_array[line_count * pnts_per_line, :] = center centers_array[line_count * pnts_per_line + 1, :] = center diffs_array[line_count * pnts_per_line, :] = diff diffs_array[line_count * pnts_per_line + 1, :] = diff line_count += 1 vtk_points = numpy_to_vtk_points(points_array) vtk_cells = _points_to_vtk_cells(points_array) colors_tuple = _peaks_colors_from_points(points_array, colors=colors) vtk_colors, colors_are_scalars, self.__global_opacity = colors_tuple poly_data = PolyData() poly_data.SetPoints(vtk_points) poly_data.SetLines(vtk_cells) poly_data.GetPointData().SetScalars(vtk_colors) self.__mapper = PolyDataMapper() self.__mapper.SetInputData(poly_data) self.__mapper.ScalarVisibilityOn() self.__mapper.SetScalarModeToUsePointFieldData() self.__mapper.SelectColorArray("colors") self.__mapper.Update() self.SetMapper(self.__mapper) attribute_to_actor(self, centers_array, "center") attribute_to_actor(self, diffs_array, "diff") vs_var_dec = """ in vec3 center; in vec3 diff; flat out vec3 centerVertexMCVSOutput; """ fs_var_dec = """ flat in vec3 centerVertexMCVSOutput; uniform bool isRange; uniform vec3 crossSection; uniform vec3 lowRanges; uniform vec3 highRanges; """ orient_to_rgb = import_fury_shader(pjoin("utils", "orient_to_rgb.glsl")) visible_cross_section = import_fury_shader( pjoin("interaction", "visible_cross_section.glsl") ) visible_range = import_fury_shader(pjoin("interaction", "visible_range.glsl")) vs_dec = compose_shader([vs_var_dec, orient_to_rgb]) fs_dec = compose_shader([fs_var_dec, visible_cross_section, visible_range]) vs_impl = """ centerVertexMCVSOutput = center; if (vertexColorVSOutput.rgb == vec3(0)) { vertexColorVSOutput.rgb = orient2rgb(diff); } """ fs_impl = """ if (isRange) { if (!inVisibleRange(centerVertexMCVSOutput)) discard; } else { if (!inVisibleCrossSection(centerVertexMCVSOutput)) discard; } """ shader_to_actor(self, "vertex", decl_code=vs_dec, impl_code=vs_impl) shader_to_actor(self, "fragment", decl_code=fs_dec) shader_to_actor(self, "fragment", impl_code=fs_impl, block="light") # Color scale with a lookup table if colors_are_scalars: if lookup_colormap is None: lookup_colormap = colormap_lookup_table() self.__mapper.SetLookupTable(lookup_colormap) self.__mapper.UseLookupTableScalarRangeOn() self.__mapper.Update() self.__lw = linewidth self.GetProperty().SetLineWidth(self.__lw) if 
self.__global_opacity >= 0: self.GetProperty().SetOpacity(self.__global_opacity) self.__min_centers = np.zeros(shape=(3,)) self.__max_centers = np.array(directions.shape[:3]) self.__is_range = True self.__low_ranges = self.__min_centers self.__high_ranges = self.__max_centers self.__cross_section = self.__high_ranges // 2 self.__mapper.AddObserver( Command.UpdateShaderEvent, self.__display_peaks_vtk_callback ) @calldata_type(VTK_OBJECT) def __display_peaks_vtk_callback(self, caller, event, calldata=None): if calldata is not None: calldata.SetUniformi("isRange", self.__is_range) calldata.SetUniform3f("highRanges", self.__high_ranges) calldata.SetUniform3f("lowRanges", self.__low_ranges) calldata.SetUniform3f("crossSection", self.__cross_section) def display_cross_section(self, x, y, z): if self.__is_range: self.__is_range = False self.__cross_section = [x, y, z] def display_extent(self, x1, x2, y1, y2, z1, z2): if not self.__is_range: self.__is_range = True self.__low_ranges = [x1, y1, z1] self.__high_ranges = [x2, y2, z2] @property def cross_section(self): return self.__cross_section @property def global_opacity(self): return self.__global_opacity @global_opacity.setter def global_opacity(self, opacity): self.__global_opacity = opacity self.GetProperty().SetOpacity(self.__global_opacity) @property def high_ranges(self): return self.__high_ranges @property def is_range(self): return self.__is_range @property def low_ranges(self): return self.__low_ranges @property def linewidth(self): return self.__lw @linewidth.setter def linewidth(self, linewidth): self.__lw = linewidth self.GetProperty().SetLineWidth(self.__lw) @property def max_centers(self): return self.__max_centers @property def min_centers(self): return self.__min_centers @warning_for_keywords() def peak( peaks_dirs, *, peaks_values=None, mask=None, affine=None, colors=None, linewidth=1, lookup_colormap=None, symmetric=True, ): """Visualize peak directions as given from ``peaks_from_model`` function. Parameters ---------- peaks_dirs : ndarray Peak directions. The shape of the array should be (X, Y, Z, D, 3). peaks_values : ndarray, optional Peak values. The shape of the array should be (X, Y, Z, D). affine : array, optional 4x4 transformation array from native coordinates to world coordinates. mask : ndarray, optional 3D mask colors : tuple or None, optional Default None. If None then every peak gets an orientation color in similarity to a DEC map. lookup_colormap : vtkLookupTable, optional Add a default lookup table to the colormap. Look at :func:`fury.actor.colormap_lookup_table` for more information. linewidth : float, optional Line thickness. Default is 1. symmetric : bool, optional If True, peaks are drawn for both peaks_dirs and -peaks_dirs. Else, peaks are only drawn for directions given by peaks_dirs. Default is True. Returns ------- peak_actor : PeakActor Actor or LODActor representing the peaks directions and/or magnitudes. """ if peaks_dirs.ndim != 5: raise ValueError( "Invalid peak directions. The shape of the structure " f"must be (XxYxZxDx3). Your data has {peaks_dirs.ndim} dimensions." "" ) if peaks_dirs.shape[4] != 3: raise ValueError( "Invalid peak directions. The shape of the last " "dimension must be 3. Your data has a last dimension " f"of {peaks_dirs.shape[4]}." ) dirs_shape = peaks_dirs.shape if peaks_values is not None: if peaks_values.ndim != 4: raise ValueError( "Invalid peak values. The shape of the structure " f"must be (XxYxZxD). Your data has {peaks_values.ndim} dimensions." 
) vals_shape = peaks_values.shape if vals_shape != dirs_shape[:4]: raise ValueError( "Invalid peak values. The shape of the values " "must coincide with the shape of the directions." ) valid_mask = np.abs(peaks_dirs).max(axis=(-2, -1)) > 0 if mask is not None: if mask.ndim != 3: warnings.warn( "Invalid mask. The mask must be a 3D array. The " f"passed mask has {mask.ndim} dimensions. Ignoring passed " "mask.", UserWarning, stacklevel=2, ) elif mask.shape != dirs_shape[:3]: warnings.warn( "Invalid mask. The shape of the mask must coincide " "with the shape of the directions. Ignoring passed " "mask.", UserWarning, stacklevel=2, ) else: valid_mask = np.logical_and(valid_mask, mask) indices = np.where(valid_mask) return PeakActor( peaks_dirs, indices, values=peaks_values, affine=affine, colors=colors, lookup_colormap=lookup_colormap, linewidth=linewidth, symmetric=symmetric, ) @warning_for_keywords() def _peaks_colors_from_points(points, *, colors=None, points_per_line=2): """Return a VTK scalar array containing colors information for each one of the peaks according to the policy defined by the parameter colors. Parameters ---------- points : (N, 3) array or ndarray points coordinates array. colors : None or string ('rgb_standard') or tuple (3D or 4D) or array/ndarray (N, 3 or 4) or array/ndarray (K, 3 or 4) or array/ndarray(N, ) or array/ndarray (K, ) If None a standard orientation colormap is used for every line. If one tuple of color is used. Then all streamlines will have the same color. If an array (N, 3 or 4) is given, where N is equal to the number of points. Then every point is colored with a different RGB(A) color. If an array (K, 3 or 4) is given, where K is equal to the number of lines. Then every line is colored with a different RGB(A) color. If an array (N, ) is given, where N is the number of points then these are considered as the values to be used by the colormap. If an array (K,) is given, where K is the number of lines then these are considered as the values to be used by the colormap. points_per_line : int (1 or 2), optional number of points per peak direction. Returns ------- color_array : vtkDataArray vtk scalar array with name 'colors'. colors_are_scalars : bool indicates whether or not the colors are scalars to be interpreted by a colormap. global_opacity : float returns 1 if the colors array doesn't contain opacity otherwise -1. 
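Examples
--------
A minimal sketch of the scalar-per-line policy (requires FURY/VTK to
run; the values are illustrative only)::

    import numpy as np
    points = np.zeros((4, 3))  # two peaks, two points per peak
    vtk_colors, is_scalar, opacity = _peaks_colors_from_points(
        points, colors=np.array([0.2, 0.8]))
    # is_scalar is True here: one scalar per line, to be mapped
    # through a lookup table by the caller.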
""" num_pnts = len(points) num_lines = num_pnts // points_per_line colors_are_scalars = False global_opacity = 1 if colors is None or colors == "rgb_standard": # Automatic RGB colors colors = np.asarray((0, 0, 0)) color_array = numpy_to_vtk_colors(np.tile(255 * colors, (num_pnts, 1))) elif type(colors) is tuple: global_opacity = 1 if len(colors) == 3 else -1 colors = np.asarray(colors) color_array = numpy_to_vtk_colors(np.tile(255 * colors, (num_pnts, 1))) else: colors = np.asarray(colors) if len(colors) == num_lines: pnts_colors = np.repeat(colors, points_per_line, axis=0) if colors.ndim == 1: # Scalar per line color_array = numpy_support.numpy_to_vtk(pnts_colors, deep=True) colors_are_scalars = True elif colors.ndim == 2: # RGB(A) color per line global_opacity = 1 if colors.shape[1] == 3 else -1 color_array = numpy_to_vtk_colors(255 * pnts_colors) elif len(colors) == num_pnts: if colors.ndim == 1: # Scalar per point color_array = numpy_support.numpy_to_vtk(colors, deep=True) colors_are_scalars = True elif colors.ndim == 2: # RGB(A) color per point global_opacity = 1 if colors.shape[1] == 3 else -1 color_array = numpy_to_vtk_colors(255 * colors) color_array.SetName("colors") return color_array, colors_are_scalars, global_opacity @warning_for_keywords() def _points_to_vtk_cells(points, *, points_per_line=2): """Return the VTK cell array for the peaks given the set of points coordinates. Parameters ---------- points : (N, 3) array or ndarray points coordinates array. points_per_line : int (1 or 2), optional number of points per peak direction. Returns ------- cell_array : vtkCellArray connectivity + offset information. """ num_pnts = len(points) num_cells = num_pnts // points_per_line cell_array = CellArray() """ Connectivity is an array that contains the indices of the points that need to be connected in the visualization. The indices start from 0. """ connectivity = np.asarray(list(range(0, num_pnts)), dtype=int) """ Offset is an array that contains the indices of the first point of each line. The indices start from 0 and given the known geometry of this actor the creation of this array requires a 2 points padding between indices. 
""" offset = np.asarray(list(range(0, num_pnts + 1, points_per_line)), dtype=int) vtk_array_type = numpy_support.get_vtk_array_type(connectivity.dtype) cell_array.SetData( numpy_support.numpy_to_vtk(offset, deep=True, array_type=vtk_array_type), numpy_support.numpy_to_vtk(connectivity, deep=True, array_type=vtk_array_type), ) cell_array.SetNumberOfCells(num_cells) return cell_array class PeaksVisualizer: def __init__(self, pam, world_coords): self._peak_dirs, self._affine = pam if world_coords: self._peak_actor = peak(self._peak_dirs, affine=self._affine) else: self._peak_actor = peak(self._peak_dirs) @property def actors(self): return [self._peak_actor] dipy-1.11.0/dipy/viz/horizon/visualizer/slice.py000066400000000000000000000201171476546756600217050ustar00rootroot00000000000000import logging from pathlib import Path import warnings import numpy as np from dipy.testing.decorators import warning_for_keywords from dipy.utils.optpkg import optional_package fury, has_fury, setup_module = optional_package("fury", min_version="0.10.0") if has_fury: from fury import actor from fury.utils import apply_affine class SlicesVisualizer: @warning_for_keywords() def __init__( self, interactor, scene, data, *, affine=None, world_coords=False, percentiles=(0, 100), rgb=False, fname=None, ): self._interactor = interactor self._scene = scene self._data = data self._affine = affine if not world_coords: self._affine = np.eye(4) self._slice_actors = [None] * 3 if len(self._data.shape) == 4 and self._data.shape[-1] == 1: self._data = self._data[:, :, :, 0] self._data_ndim = self._data.ndim self._data_shape = self._data.shape self._rgb = False self._percentiles = percentiles vol_data = self._data if ( self._data_ndim == 4 and rgb and self._data_shape[-1] == 3 ) or self._data_ndim == 3: self._rgb = True and not self._data_ndim == 3 self._int_range = np.percentile(vol_data, self._percentiles) _evaluate_intensities_range(self._int_range) else: if self._data_ndim == 4 and rgb and self._data_shape[-1] != 3: warnings.warn( "The rgb flag is enabled but the color " + "channel information is not provided", stacklevel=2, ) vol_data = self._volume_calculations(self._percentiles) self._vol_max = np.max(vol_data) self._vol_min = np.min(vol_data) self._resliced_vol = None np.set_printoptions(3, suppress=True) fname = "" if fname is None else Path(fname).name logging.info(f"-------------------{len(fname) * '-'}") logging.info(f"Applying affine to {fname}") logging.info(f"-------------------{len(fname) * '-'}") logging.info(f"Affine Native to RAS matrix \n{affine}") logging.info(f"Original shape: {self._data_shape}") self._create_and_resize_actors(vol_data, self._int_range) logging.info(f"Resized to RAS shape: {self._data_shape} \n") np.set_printoptions() self._sel_slices = np.rint(np.asarray(self._data_shape[:3]) / 2).astype(int) self._add_slice_actors_to_scene(self._sel_slices) self._picker_callback = None self._picked_voxel_actor = None def _volume_calculations(self, percentiles): for i in range(self._data.shape[-1]): vol_data = self._data[..., i] self._int_range = np.percentile(vol_data, percentiles) if np.sum(np.diff(self._int_range)) != 0: break else: if i < self._data_shape[-1] - 1: warnings.warn( f"Volume N°{i} does not have any contrast. " "Please, check the value ranges of your data. 
" "Moving to the next volume.", stacklevel=2, ) else: _evaluate_intensities_range(self._int_range) return vol_data def _add_slice_actors_to_scene(self, visible_slices): self._slice_actors[0].display_extent( visible_slices[0], visible_slices[0], 0, self._data_shape[1] - 1, 0, self._data_shape[2] - 1, ) self._slice_actors[1].display_extent( 0, self._data_shape[0] - 1, visible_slices[1], visible_slices[1], 0, self._data_shape[2] - 1, ) self._slice_actors[2].display_extent( 0, self._data_shape[0] - 1, 0, self._data_shape[1] - 1, visible_slices[2], visible_slices[2], ) for act in self._slice_actors: self._scene.add(act) def _create_and_resize_actors(self, vol_data, value_range): self._slice_actors[0] = actor.slicer( vol_data, affine=self._affine, value_range=value_range, interpolation="nearest", ) self._resliced_vol = self._slice_actors[0].resliced_array() self._slice_actors[1] = self._slice_actors[0].copy() self._slice_actors[2] = self._slice_actors[0].copy() for slice_actor in self._slice_actors: slice_actor.AddObserver( "LeftButtonPressEvent", self._left_click_picker_callback, 1.0 ) if self._data_ndim == 4 and not self._rgb: self._data_shape = self._resliced_vol.shape + (self._data.shape[-1],) else: self._data_shape = self._resliced_vol.shape def _left_click_picker_callback(self, obj, event): # TODO: Find out why this is not triggered when opacity < 1 event_pos = self._interactor.GetEventPosition() obj.picker.Pick(event_pos[0], event_pos[1], 0, self._scene) i, j, k = obj.picker.GetPointIJK() res = self._resliced_vol[i, j, k] try: message = f"{res:.2f}" except TypeError: message = f"{res[0]:.2f} {res[1]:.2f} {res[2]:.2f}" message = f"({i}, {j}, {k}) = {message}" self._picker_callback(message) # TODO: Fix this # self._replace_picked_voxel_actor(i, j, k) def _replace_picked_voxel_actor(self, x, y, z): if self._picked_voxel_actor: self._scene.rm(self._picked_voxel_actor) pnt = np.asarray([[x, y, z]]) pnt = apply_affine(self._affine, pnt) self._picked_voxel_actor = actor.dot(pnt, colors=(0.9, 0.4, 0.0), dot_size=10) self._scene.add(self._picked_voxel_actor) def _adaptive_percentile(self, vol_data, intensity_ratios, idx): value_range = np.percentile(np.ravel(vol_data), intensity_ratios * 100) default_range = False if np.sum(np.diff(value_range)) == 0: warnings.warn( f"The selected intensity range have no contrast for Volume N°{idx}." 
"The selection of intensities will be ignored and changed to default.", stacklevel=2, ) value_range = np.asarray((np.min(vol_data), np.max(vol_data))) default_range = True return (value_range, default_range) def change_volume(self, next_idx, intensity_ratios, visible_slices): vol_data = self._data[..., next_idx] value_range, default_range = self._adaptive_percentile( vol_data, intensity_ratios, next_idx ) if np.sum(np.diff(value_range)) == 0: return False, default_range self._int_range = value_range self._vol_max = np.max(vol_data) self._vol_min = np.min(vol_data) for slice_actor in self._slice_actors: self._scene.rm(slice_actor) self._create_and_resize_actors(vol_data, self._int_range) self._add_slice_actors_to_scene(visible_slices) return True, default_range def register_picker_callback(self, callback): self._picker_callback = callback @property def data_shape(self): return self._data_shape @property def intensities_range(self): return self._int_range @property def selected_slices(self): return self._sel_slices @property def slice_actors(self): return self._slice_actors @property def volume_max(self): return self._vol_max @property def volume_min(self): return self._vol_min @property def rgb(self): return self._rgb def _evaluate_intensities_range(intensities_range): if np.sum(np.diff(intensities_range)) == 0: raise ValueError( "Your data does not have any contrast. Please, check the " "value range of your data." ) dipy-1.11.0/dipy/viz/horizon/visualizer/surface.py000066400000000000000000000011621476546756600222350ustar00rootroot00000000000000import numpy as np from dipy.utils.optpkg import optional_package fury, has_fury, setup_module = optional_package("fury", min_version="0.10.0") if has_fury: from fury.actor import surface as surface_actor class SurfaceVisualizer: def __init__(self, surface, scene, color): self._vertices, self._faces = surface self._surface_actor = surface_actor( self._vertices, faces=self._faces, colors=np.full((self._vertices.shape[0], 3), color), ) scene.add(self._surface_actor) @property def actors(self): return [self._surface_actor] dipy-1.11.0/dipy/viz/meson.build000066400000000000000000000003761476546756600165160ustar00rootroot00000000000000python_sources = [ '__init__.py', 'gmem.py', 'panel.py', 'plotting.py', 'projections.py', 'regtools.py', 'streamline.py', ] py3.install_sources( python_sources, pure: false, subdir: 'dipy/viz' ) subdir('horizon') subdir('tests')dipy-1.11.0/dipy/viz/panel.py000066400000000000000000000402041476546756600160170ustar00rootroot00000000000000import itertools import warnings import numpy as np from dipy.testing.decorators import warning_for_keywords from dipy.utils.optpkg import optional_package from dipy.viz.gmem import GlobalHorizon fury, have_fury, setup_module = optional_package("fury", min_version="0.10.0") if have_fury: from dipy.viz import actor, colormap, ui @warning_for_keywords() def build_label(text, *, font_size=18, bold=False): """Simple utility function to build labels Parameters ---------- text : str font_size : int bold : bool Returns ------- label : TextBlock2D """ label = ui.TextBlock2D() label.message = text label.font_size = font_size label.font_family = "Arial" label.justification = "left" label.bold = bold label.italic = False label.shadow = False label.actor.GetTextProperty().SetBackgroundColor(0, 0, 0) label.actor.GetTextProperty().SetBackgroundOpacity(0.0) label.color = (0.7, 0.7, 0.7) return label def _color_slider(slider): slider.default_color = (1, 0.5, 0) slider.track.color = (0.8, 0.3, 0) 
slider.active_color = (0.9, 0.4, 0) slider.handle.color = (1, 0.5, 0) def _color_dslider(slider): slider.default_color = (1, 0.5, 0) slider.track.color = (0.8, 0.3, 0) slider.active_color = (0.9, 0.4, 0) slider.handles[0].color = (1, 0.5, 0) slider.handles[1].color = (1, 0.5, 0) @warning_for_keywords() def slicer_panel( scene, iren, *, data=None, affine=None, world_coords=False, pam=None, mask=None, mem=None, ): """Slicer panel with slicer included Parameters ---------- scene : Scene Scene. iren : Interactor Interactor. data : 3d ndarray Data to be sliced. affine : 4x4 ndarray Affine matrix. world_coords : bool If True then the affine is applied. peaks : PeaksAndMetrics Default None mem : Returns ------- panel : Panel """ if mem is None: mem = GlobalHorizon() orig_shape = data.shape print("Original shape", orig_shape) ndim = data.ndim tmp = data if ndim == 4: if orig_shape[-1] > 3: orig_shape = orig_shape[:3] # Sometimes, first volume is null, so we try the next one. for i in range(orig_shape[-1]): tmp = data[..., i] value_range = np.percentile(data[..., i], q=[2, 98]) if np.sum(np.diff(value_range)) != 0: break if orig_shape[-1] == 3: value_range = (0, 1.0) mem.slicer_rgb = True if ndim == 3: value_range = np.percentile(tmp, q=[2, 98]) if np.sum(np.diff(value_range)) == 0: msg = "Your data does not have any contrast. " msg += "Please, check the value range of your data." warnings.warn(msg, stacklevel=2) if not world_coords: affine = np.eye(4) image_actor_z = actor.slicer( tmp, affine=affine, value_range=value_range, interpolation="nearest", picking_tol=0.025, ) tmp_new = image_actor_z.resliced_array() if len(data.shape) == 4: if data.shape[-1] == 3: print("Resized to RAS shape ", tmp_new.shape) else: print("Resized to RAS shape ", tmp_new.shape + (data.shape[-1],)) else: print("Resized to RAS shape ", tmp_new.shape) shape = tmp_new.shape if pam is not None: peaks_actor_z = actor.peak_slicer( pam.peak_dirs, peaks_values=None, mask=mask, affine=affine, colors=None ) slicer_opacity = 1.0 image_actor_z.opacity(slicer_opacity) image_actor_x = image_actor_z.copy() x_midpoint = int(np.round(shape[0] / 2)) image_actor_x.display_extent( x_midpoint, x_midpoint, 0, shape[1] - 1, 0, shape[2] - 1 ) image_actor_y = image_actor_z.copy() y_midpoint = int(np.round(shape[1] / 2)) image_actor_y.display_extent( 0, shape[0] - 1, y_midpoint, y_midpoint, 0, shape[2] - 1 ) scene.add(image_actor_z) scene.add(image_actor_x) scene.add(image_actor_y) if pam is not None: scene.add(peaks_actor_z) line_slider_z = ui.LineSlider2D( min_value=0, max_value=shape[2] - 1, initial_value=shape[2] / 2, text_template="{value:.0f}", length=140, ) _color_slider(line_slider_z) def change_slice_z(slider): z = int(np.round(slider.value)) mem.slicer_curr_actor_z.display_extent(0, shape[0] - 1, 0, shape[1] - 1, z, z) if pam is not None: mem.slicer_peaks_actor_z.display_extent( 0, shape[0] - 1, 0, shape[1] - 1, z, z ) mem.slicer_curr_z = z scene.reset_clipping_range() line_slider_x = ui.LineSlider2D( min_value=0, max_value=shape[0] - 1, initial_value=shape[0] / 2, text_template="{value:.0f}", length=140, ) _color_slider(line_slider_x) def change_slice_x(slider): x = int(np.round(slider.value)) mem.slicer_curr_actor_x.display_extent(x, x, 0, shape[1] - 1, 0, shape[2] - 1) scene.reset_clipping_range() mem.slicer_curr_x = x mem.window_timer_cnt += 100 line_slider_y = ui.LineSlider2D( min_value=0, max_value=shape[1] - 1, initial_value=shape[1] / 2, text_template="{value:.0f}", length=140, ) _color_slider(line_slider_y) def 
change_slice_y(slider): y = int(np.round(slider.value)) mem.slicer_curr_actor_y.display_extent(0, shape[0] - 1, y, y, 0, shape[2] - 1) scene.reset_clipping_range() mem.slicer_curr_y = y # TODO there is some small bug when starting the app the handles # are sitting a bit low double_slider = ui.LineDoubleSlider2D( length=140, initial_values=value_range, min_value=tmp.min(), max_value=tmp.max(), shape="square", ) _color_dslider(double_slider) def apply_colormap(r1, r2): if mem.slicer_rgb: return if mem.slicer_colormap == "disting": # use distinguishable colors rgb = colormap.distinguishable_colormap(nb_colors=256) rgb = np.asarray(rgb) else: # use matplotlib colormaps rgb = colormap.create_colormap( np.linspace(r1, r2, 256), name=mem.slicer_colormap, auto=True ) N = rgb.shape[0] lut = colormap.LookupTable() lut.SetNumberOfTableValues(N) lut.SetRange(r1, r2) for i in range(N): r, g, b = rgb[i] lut.SetTableValue(i, r, g, b) lut.SetRampToLinear() lut.Build() mem.slicer_curr_actor_z.output.SetLookupTable(lut) mem.slicer_curr_actor_z.output.Update() def on_change_ds(slider): values = slider._values r1, r2 = values apply_colormap(r1, r2) # TODO trying to see why there is a small bug in double slider # double_slider.left_disk_value = 0 # double_slider.right_disk_value = 98 # double_slider.update(0) # double_slider.update(1) double_slider.on_change = on_change_ds opacity_slider = ui.LineSlider2D( min_value=0.0, max_value=1.0, initial_value=slicer_opacity, length=140, text_template="{ratio:.0%}", ) _color_slider(opacity_slider) def change_opacity(slider): slicer_opacity = slider.value mem.slicer_curr_actor_x.opacity(slicer_opacity) mem.slicer_curr_actor_y.opacity(slicer_opacity) mem.slicer_curr_actor_z.opacity(slicer_opacity) volume_slider = ui.LineSlider2D( min_value=0, max_value=data.shape[-1] - 1, initial_value=0, length=140, text_template="{value:.0f}", shape="square", ) _color_slider(volume_slider) def change_volume(istyle, obj, slider): vol_idx = int(np.round(slider.value)) mem.slicer_vol_idx = vol_idx scene.rm(mem.slicer_curr_actor_x) scene.rm(mem.slicer_curr_actor_y) scene.rm(mem.slicer_curr_actor_z) tmp = data[..., vol_idx] image_actor_z = actor.slicer( tmp, affine=affine, value_range=value_range, interpolation="nearest", picking_tol=0.025, ) tmp_new = image_actor_z.resliced_array() mem.slicer_vol = tmp_new z = mem.slicer_curr_z image_actor_z.display_extent(0, shape[0] - 1, 0, shape[1] - 1, z, z) mem.slicer_curr_actor_z = image_actor_z mem.slicer_curr_actor_x = image_actor_z.copy() if pam is not None: mem.slicer_peaks_actor_z = peaks_actor_z x = mem.slicer_curr_x mem.slicer_curr_actor_x.display_extent(x, x, 0, shape[1] - 1, 0, shape[2] - 1) mem.slicer_curr_actor_y = image_actor_z.copy() y = mem.slicer_curr_y mem.slicer_curr_actor_y.display_extent(0, shape[0] - 1, y, y, 0, shape[2] - 1) mem.slicer_curr_actor_z.AddObserver( "LeftButtonPressEvent", left_click_picker_callback, 1.0 ) mem.slicer_curr_actor_x.AddObserver( "LeftButtonPressEvent", left_click_picker_callback, 1.0 ) mem.slicer_curr_actor_y.AddObserver( "LeftButtonPressEvent", left_click_picker_callback, 1.0 ) scene.add(mem.slicer_curr_actor_z) scene.add(mem.slicer_curr_actor_x) scene.add(mem.slicer_curr_actor_y) if pam is not None: scene.add(mem.slicer_peaks_actor_z) r1, r2 = double_slider._values apply_colormap(r1, r2) istyle.force_render() def left_click_picker_callback(obj, ev): """Get the value of the clicked voxel and show it in the panel.""" event_pos = iren.GetEventPosition() obj.picker.Pick(event_pos[0], event_pos[1], 0, scene) 
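# Map the picked display position back to voxel indices (i, j, k).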
i, j, k = obj.picker.GetPointIJK() res = mem.slicer_vol[i, j, k] try: message = f"{res:.3f}" except TypeError: message = f"{res[0]:.3f} {res[1]:.3f} {res[2]:.3f}" picker_label.message = f"({str(i)}, {str(j)}, {str(k)}) {message}" mem.slicer_vol_idx = 0 mem.slicer_vol = tmp_new mem.slicer_curr_actor_x = image_actor_x mem.slicer_curr_actor_y = image_actor_y mem.slicer_curr_actor_z = image_actor_z if pam is not None: # change_volume.peaks_actor_z = peaks_actor_z mem.slicer_peaks_actor_z = peaks_actor_z mem.slicer_curr_actor_x.AddObserver( "LeftButtonPressEvent", left_click_picker_callback, 1.0 ) mem.slicer_curr_actor_y.AddObserver( "LeftButtonPressEvent", left_click_picker_callback, 1.0 ) mem.slicer_curr_actor_z.AddObserver( "LeftButtonPressEvent", left_click_picker_callback, 1.0 ) if pam is not None: mem.slicer_peaks_actor_z.AddObserver( "LeftButtonPressEvent", left_click_picker_callback, 1.0 ) mem.slicer_curr_x = int(np.round(shape[0] / 2)) mem.slicer_curr_y = int(np.round(shape[1] / 2)) mem.slicer_curr_z = int(np.round(shape[2] / 2)) line_slider_x.on_change = change_slice_x line_slider_y.on_change = change_slice_y line_slider_z.on_change = change_slice_z double_slider.on_change = on_change_ds opacity_slider.on_change = change_opacity volume_slider.handle_events(volume_slider.handle.actor) volume_slider.on_left_mouse_button_released = change_volume line_slider_label_x = build_label(text="X Slice") line_slider_label_x.visibility = True x_counter = itertools.count() def label_callback_x(obj, event): line_slider_label_x.visibility = not line_slider_label_x.visibility line_slider_x.set_visibility(line_slider_label_x.visibility) cnt = next(x_counter) if line_slider_label_x.visibility and cnt > 0: scene.add(mem.slicer_curr_actor_x) else: scene.rm(mem.slicer_curr_actor_x) iren.Render() line_slider_label_x.actor.AddObserver("LeftButtonPressEvent", label_callback_x, 1.0) line_slider_label_y = build_label(text="Y Slice") line_slider_label_y.visibility = True y_counter = itertools.count() def label_callback_y(obj, event): line_slider_label_y.visibility = not line_slider_label_y.visibility line_slider_y.set_visibility(line_slider_label_y.visibility) cnt = next(y_counter) if line_slider_label_y.visibility and cnt > 0: scene.add(mem.slicer_curr_actor_y) else: scene.rm(mem.slicer_curr_actor_y) iren.Render() line_slider_label_y.actor.AddObserver("LeftButtonPressEvent", label_callback_y, 1.0) line_slider_label_z = build_label(text="Z Slice") line_slider_label_z.visibility = True z_counter = itertools.count() def label_callback_z(obj, event): line_slider_label_z.visibility = not line_slider_label_z.visibility line_slider_z.set_visibility(line_slider_label_z.visibility) cnt = next(z_counter) if line_slider_label_z.visibility and cnt > 0: scene.add(mem.slicer_curr_actor_z) else: scene.rm(mem.slicer_curr_actor_z) iren.Render() line_slider_label_z.actor.AddObserver("LeftButtonPressEvent", label_callback_z, 1.0) opacity_slider_label = build_label(text="Opacity") volume_slider_label = build_label(text="Volume") picker_label = build_label(text="") double_slider_label = build_label(text="Colormap") slicer_panel_label = build_label(text="Slicer panel", bold=True) def label_colormap_callback(obj, event): if mem.slicer_colormap_cnt == len(mem.slicer_colormaps) - 1: mem.slicer_colormap_cnt = 0 else: mem.slicer_colormap_cnt += 1 cnt = mem.slicer_colormap_cnt mem.slicer_colormap = mem.slicer_colormaps[cnt] double_slider_label.message = mem.slicer_colormap values = double_slider._values r1, r2 = values apply_colormap(r1, 
r2) iren.Render() double_slider_label.actor.AddObserver( "LeftButtonPressEvent", label_colormap_callback, 1.0 ) # volume_slider.on_right_mouse_button_released = change_volume2 def label_opacity_callback(obj, event): if opacity_slider.value == 0: opacity_slider.value = 100 opacity_slider.update() slicer_opacity = 1 else: opacity_slider.value = 0 opacity_slider.update() slicer_opacity = 0 mem.slicer_curr_actor_x.opacity(slicer_opacity) mem.slicer_curr_actor_y.opacity(slicer_opacity) mem.slicer_curr_actor_z.opacity(slicer_opacity) iren.Render() opacity_slider_label.actor.AddObserver( "LeftButtonPressEvent", label_opacity_callback, 1.0 ) if data.ndim == 4: panel_size = (320, 400 + 100) if data.ndim == 3: panel_size = (320, 300 + 100) panel = ui.Panel2D( size=panel_size, position=(870, 10), color=(1, 1, 1), opacity=0.1, align="right" ) ys = np.linspace(0, 1, 10) panel.add_element(line_slider_z, coords=(0.42, ys[1])) panel.add_element(line_slider_y, coords=(0.42, ys[2])) panel.add_element(line_slider_x, coords=(0.42, ys[3])) panel.add_element(opacity_slider, coords=(0.42, ys[4])) panel.add_element(double_slider, coords=(0.42, (ys[7] + ys[8]) / 2.0)) if data.ndim == 4: if data.shape[-1] > 3: panel.add_element(volume_slider, coords=(0.42, ys[6])) panel.add_element(line_slider_label_z, coords=(0.1, ys[1])) panel.add_element(line_slider_label_y, coords=(0.1, ys[2])) panel.add_element(line_slider_label_x, coords=(0.1, ys[3])) panel.add_element(opacity_slider_label, coords=(0.1, ys[4])) panel.add_element(double_slider_label, coords=(0.1, (ys[7] + ys[8]) / 2.0)) if data.ndim == 4: if data.shape[-1] > 3: panel.add_element(volume_slider_label, coords=(0.1, ys[6])) panel.add_element(picker_label, coords=(0.2, ys[5])) panel.add_element(slicer_panel_label, coords=(0.05, 0.9)) scene.add(panel) # initialize colormap r1, r2 = value_range apply_colormap(r1, r2) return panel dipy-1.11.0/dipy/viz/plotting.py000066400000000000000000000216651476546756600165720ustar00rootroot00000000000000""" plotting functions """ from warnings import warn import numpy as np from dipy.testing.decorators import warning_for_keywords from dipy.utils.optpkg import optional_package plt, have_plt, _ = optional_package("matplotlib.pyplot") @warning_for_keywords() def compare_maps( fits, maps, *, transpose=None, fit_labels=None, map_labels=None, fit_kwargs=None, map_kwargs=None, filename=None, ): """Compare one or more scalar maps for different fits or models. Parameters ---------- fits : list List of fits to be compared. maps : list Names of attributes to be compared. Default: 'rtop'. transpose : bool, optional If False, different fits are placed on different rows and different maps on different columns. If True, the order is transposed. If None, the figures are placed such that there are more columns than rows. Default: None. fit_labels : list, optional Labels for the different fitting routines. If None the fits are labeled by number. Default: None. map_labels : list, optional Labels for the different attributes. If None the attribute names are used. Default: None. fit_kwargs : list or dict, optional A dict or list of dicts with imshow options for each fitting routine. The dicts are passed to imshow as keyword-argument pairs. Default: {}. map_kwargs : list or dict, optional A dict or list of dicts with imshow options for each MAP-MRI scalar. The dicts are passed to imshow as keyword-argument pairs. Default: {}. filename : string, optional Filename where the image will be saved. Default: None. 
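Examples
--------
A minimal sketch with stand-in "fit" objects (``SimpleNamespace`` and
the output file name are illustrative only)::

    import numpy as np
    from types import SimpleNamespace
    fits = [SimpleNamespace(rtop=np.random.rand(8, 8)),
            SimpleNamespace(rtop=np.random.rand(8, 8))]
    compare_maps(fits, ["rtop"], fit_labels=["Fit A", "Fit B"],
                 filename="rtop_comparison.png")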
""" fit_kwargs = fit_kwargs or {} map_kwargs = map_kwargs or {} if not have_plt: raise ValueError("matplotlib package needed for visualization.") fontsize = "large" xscale, yscale = 12, 10 m = len(fits) n = len(maps) if transpose is None: transpose = m > n if fit_labels is None: fit_labels = [f"Fit {i + 1}" for i in range(m)] if map_labels is None: map_labels = maps if isinstance(fit_kwargs, dict): fit_kwargs = [fit_kwargs] * m if isinstance(map_kwargs, dict): map_kwargs = [map_kwargs] * n if transpose: fig, ax = plt.subplots(n, m, figsize=(xscale, yscale / m * n), squeeze=False) ax = ax.T for i in range(m): ax[i, 0].set_title(fit_labels[i], fontsize=fontsize) for j in range(n): ax[0, j].set_ylabel(map_labels[j], fontsize=fontsize) else: fig, ax = plt.subplots(m, n, figsize=(xscale, yscale / n * m), squeeze=False) for i in range(m): ax[i, 0].set_ylabel(fit_labels[i], fontsize=fontsize) for j in range(n): ax[0, j].set_title(map_labels[j], fontsize=fontsize) for i in range(m): for j in range(n): try: attr = getattr(fits[i], maps[j]) if callable(attr): attr = attr() except AttributeError: warn(f"Could not recover attribute {maps[j]}.", stacklevel=2) attr = np.zeros((2, 2)) data = np.squeeze(np.array(attr, dtype=float)).T ax[i, j].imshow( data, interpolation="nearest", origin="lower", cmap="gray", **fit_kwargs[i], **map_kwargs[j], ) ax[i, j].set_xticks([]) ax[i, j].set_yticks([]) ax[i, j].spines["top"].set_visible(False) ax[i, j].spines["right"].set_visible(False) ax[i, j].spines["bottom"].set_visible(False) ax[i, j].spines["left"].set_visible(False) fig.tight_layout() if filename: plt.savefig(filename) else: plt.show() @warning_for_keywords() def compare_qti_maps( gt, fit1, fit2, mask, *, maps=("fa", "ufa"), fitname=("QTI", "QTI+"), xlimits=([0, 1], [0.4, 1.5]), disprange=([0, 1], [0, 1]), slice=13, ): """Compare one or more qti derived maps obtained with different fitting routines. 
Parameters ---------- gt : qti fit object The qti fit to be considered as ground truth fit1 : qti fit object First qti fit to be compared fit2 : qti fit object Second qti fit to be compared mask : np.ndarray Boolean array indicating which voxels to retain for comparing the values maps : array-like, optional QTI invariants to be compared fitname : array-like, optional Names of the used QTI fitting routines xlimits : array-like, optional X-Axis limits for the histograms visualization disprange : array-like, optional Display range for maps slice : int, optional Axial brain slice to be visualized """ if not have_plt: raise ValueError("matplotlib package needed for visualization") n = len(maps) fig, ax = plt.subplots(n, 4, figsize=(12, 9)) background = np.zeros(gt.S0_hat.shape[0:2]) for i in range(n): for j in range(3): ax[i, j].imshow(background, cmap="gray") ax[i, j].set_xticks([]) ax[i, j].set_yticks([]) for k in range(n): ax[k, 0].imshow( np.rot90(getattr(gt, maps[k])[:, :, slice]), cmap="gray", vmin=disprange[k][0], vmax=disprange[k][1], ) ax[k, 0].set_title("GROUND TRUTH") ax[k, 0].set_ylabel(maps[k], fontsize=20) ax[k, 1].imshow( np.rot90(getattr(fit1, maps[k])[:, :, slice]), cmap="gray", vmin=disprange[k][0], vmax=disprange[k][1], ) ax[k, 1].set_title(fitname[0]) ax[k, 2].imshow( np.rot90(getattr(fit2, maps[k])[:, :, slice]), cmap="gray", vmin=disprange[k][0], vmax=disprange[k][1], ) ax[k, 2].set_title(fitname[1]) ax[k, 3].hist( (getattr(fit1, maps[k])[mask, slice]).flatten(), density=True, bins=40, label=fitname[0], ) ax[k, 3].hist( (getattr(fit2, maps[k])[mask, slice]).flatten(), density=True, bins=40, label=fitname[1], alpha=0.7, ) ax[k, 3].hist( (getattr(gt, maps[k])[mask, slice]).flatten(), histtype="stepfilled", density=True, bins=40, label="GT", ec="k", alpha=1, linewidth=1.5, fc="None", ) ax[k, 3].legend() ax[k, 3].set_title("VALUE DISTRIBUTION") ax[k, 3].set_xlim(xlimits[k]) fig.tight_layout() plt.show() def bundle_shape_profile(x, shape_profile, std): """Plot bundlewarp bundle shape profile. Parameters ---------- x : np.ndarray Integer array containing x-axis shape_profile : np.ndarray Float array containing bundlewarp displacement magnitudes along the length of the bundle std : np.ndarray Float array containing standard deviations """ fig, ax = plt.subplots(figsize=(8, 6), dpi=300) std_1 = shape_profile + std std_2 = shape_profile - std ax.plot( x, shape_profile, "-", label="Mean", color="Purple", linewidth=3, markersize=12 ) ax.fill_between(x, std_1, std_2, alpha=0.2, label="Std", color="Purple") plt.xticks(x) plt.ylim(0, max(std_1) + 2) plt.ylabel("Average Displacement") plt.xlabel("Segment Number") plt.title("Bundle Shape Profile") plt.legend(loc=2) plt.show() def image_mosaic( images, *, ax_labels=None, ax_kwargs=None, figsize=None, filename=None ): """ Draw a mosaic of 2D images using pyplot.imshow(). A colorbar is drawn beside each image. Parameters ---------- images: list of ndarray Images to render. ax_labels: list of str, optional Label for each image. ax_kwargs: list of dictionaries, optional keyword arguments passed to imshow for each image. One dictionary per image. figsize: tuple of ints, optional Figure size. filename: str, optional When given, figure is saved to disk under this name. Returns ------- fig: pyplot.Figure The figure. ax: pyplot.Axes or array of Axes The subplots for each image. 
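Examples
--------
A minimal sketch with two random images (the file name is illustrative
only)::

    import numpy as np
    images = [np.random.rand(16, 16), np.random.rand(16, 16)]
    fig, ax = image_mosaic(images, ax_labels=["A", "B"],
                           ax_kwargs=[{"cmap": "gray"}] * 2,
                           figsize=(8, 4), filename="mosaic.png")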
""" fig, ax = plt.subplots(1, len(images), figsize=figsize) aximages = [] for it, (im, axe, kw) in enumerate(zip(images, ax, ax_kwargs)): aximages.append(axe.imshow(im, **kw)) if ax_labels is not None: axe.set_title(ax_labels[it]) for it, aximage in enumerate(aximages): fig.colorbar(aximage, ax=ax[it]) if filename is not None: plt.savefig(filename) else: plt.show() return fig, ax dipy-1.11.0/dipy/viz/projections.py000066400000000000000000000100451476546756600172570ustar00rootroot00000000000000""" Visualization tools for 2D projections of 3D functions on the sphere, such as ODFs. """ import numpy as np import scipy.interpolate as interp import dipy.core.geometry as geo from dipy.testing.decorators import doctest_skip_parser, warning_for_keywords from dipy.utils.optpkg import optional_package matplotlib, has_mpl, setup_module = optional_package("matplotlib") plt, _, _ = optional_package("matplotlib.pyplot") bm, has_basemap, _ = optional_package("mpl_toolkits.basemap") @warning_for_keywords() @doctest_skip_parser def sph_project( vertices, val, *, ax=None, vmin=None, vmax=None, cmap=None, cbar=True, tri=False, boundary=False, **basemap_args, ): """Draw a signal on a 2D projection of the sphere. Parameters ---------- vertices : (N,3) ndarray Unit vector points of the sphere val : (N) ndarray Function values. ax : mpl axis, optional If specified, draw onto this existing axis instead. vmin : float, optional Minimum value to cut the z. vmax : float, optional Minimum value to cut the z. cmap : matplotlib.colors.Colormap, optional Colormap. cbar : bool, optional Whether to add the color-bar to the figure. tri : bool, optional Whether to display the plot triangulated as a pseudo-color plot. boundary : bool, optional Whether to draw the boundary around the projection in a black line. 
Returns ------- ax : axis Matplotlib figure axis Examples -------- >>> from dipy.data import default_sphere >>> verts = default_sphere.vertices >>> _ax = sph_project(verts.T, np.random.rand(len(verts.T))) # skip if not has_basemap """ # noqa: E501 if ax is None: fig, ax = plt.subplots(1) else: fig = ax.figure if cmap is None: cmap = matplotlib.cm.hot basemap_args.setdefault("projection", "ortho") basemap_args.setdefault("lat_0", 0) basemap_args.setdefault("lon_0", 0) basemap_args.setdefault("resolution", "c") m = bm.Basemap(**basemap_args) if boundary: m.drawmapboundary() # Rotate the coordinate system so that you are looking from the north pole: verts_rot = np.array(np.dot(np.array([[0, 0, -1], [0, 1, 0], [1, 0, 0]]), vertices)) # To get the orthographic projection, when the first coordinate is # positive: neg_idx = np.where(verts_rot[0] > 0) # rotate the entire b-vector around to point in the other direction: verts_rot[:, neg_idx] *= -1 _, theta, phi = geo.cart2sphere(verts_rot[0], verts_rot[1], verts_rot[2]) lat, lon = geo.sph2latlon(theta, phi) x, y = m(lon, lat) my_min = np.nanmin(val) if vmin is not None: my_min = vmin my_max = np.nanmax(val) if vmax is not None: my_max = vmax if tri: m.pcolor(x, y, val, vmin=my_min, vmax=my_max, tri=True, cmap=cmap) else: cmap_data = cmap._segmentdata red_interp, blue_interp, green_interp = ( interp.interp1d( np.array(cmap_data[gun])[:, 0], np.array(cmap_data[gun])[:, 1] ) for gun in ["red", "blue", "green"] ) r = (val - my_min) / float(my_max - my_min) # Enforce the maximum and minimum boundaries, if there are values # outside those boundaries: r[r < 0] = 0 r[r > 1] = 1 for this_x, this_y, this_r in zip(x, y, r): red = red_interp(this_r) blue = blue_interp(this_r) green = green_interp(this_r) m.plot(this_x, this_y, "o", c=[red.item(), green.item(), blue.item()]) if cbar: mappable = matplotlib.cm.ScalarMappable(cmap=cmap) mappable.set_array([my_min, my_max]) # setup colorbar axes instance. pos = ax.get_position() ell, b, w, h = pos.bounds # setup colorbar axes cax = fig.add_axes([ell + w + 0.075, b, 0.05, h], frameon=False) fig.colorbar(mappable, cax=cax) # draw colorbar return ax dipy-1.11.0/dipy/viz/regtools.py000066400000000000000000000422241476546756600165620ustar00rootroot00000000000000import numpy as np from dipy.testing.decorators import warning_for_keywords from dipy.utils.optpkg import optional_package matplotlib, has_mpl, setup_module = optional_package("matplotlib") plt, _, _ = optional_package("matplotlib.pyplot") def _tile_plot(imgs, titles, **kwargs): """ Helper function """ # Create a new figure and plot the three images fig, ax = plt.subplots(1, len(imgs)) for ii, a in enumerate(ax): a.set_axis_off() a.imshow(imgs[ii], **kwargs) a.set_title(titles[ii]) return fig def simple_plot(file_name, title, x, y, xlabel, ylabel): """Save a simple plot of the given x and y values to a file. Parameters ---------- file_name : string file name for saving the plot title : string title of the plot x : integer list x-axis values to be plotted y : integer list y-axis values to be plotted xlabel : string label for x-axis ylabel : string label for y-axis """ plt.plot(x, y) axes = plt.gca() axes.set_ylim([0, 4]) plt.xlabel(xlabel) plt.ylabel(ylabel) plt.title(title) plt.savefig(file_name) plt.clf() @warning_for_keywords() def overlay_images( img0, img1, *, title0="", title_mid="", title1="", fname=None, **fig_kwargs ): r"""Plot two images one on top of the other using red and green channels.
    Creates a figure containing three images: the first image to the left
    plotted on the red channel of a color image, the second to the right
    plotted on the green channel of a color image and the two given images on
    top of each other using the red channel for the first image and the green
    channel for the second one. It is assumed that both images have the same
    shape. The intended use of this function is to visually assess the
    quality of a registration result.

    Parameters
    ----------
    img0 : array, shape(R, C)
        the image to be plotted on the red channel, to the left of the figure
    img1 : array, shape(R, C)
        the image to be plotted on the green channel, to the right of the
        figure
    title0 : string, optional
        the title to be written on top of the image to the left. By default,
        no title is displayed.
    title_mid : string, optional
        the title to be written on top of the middle image. By default, no
        title is displayed.
    title1 : string, optional
        the title to be written on top of the image to the right. By default,
        no title is displayed.
    fname : string, optional
        the file name to write the resulting figure. If None (default), the
        image is not saved.
    fig_kwargs : dict
        Extra parameters for saving figure, e.g. `dpi=300`.
    """
    # Normalize the input images to [0,255]
    img0 = 255 * ((img0 - img0.min()) / (img0.max() - img0.min()))
    img1 = 255 * ((img1 - img1.min()) / (img1.max() - img1.min()))

    # Create the color images
    img0_red = np.zeros(shape=img0.shape + (3,), dtype=np.uint8)
    img1_green = np.zeros(shape=img0.shape + (3,), dtype=np.uint8)
    overlay = np.zeros(shape=img0.shape + (3,), dtype=np.uint8)

    # Copy the normalized intensities into the appropriate channels of the
    # color images
    img0_red[..., 0] = img0
    img1_green[..., 1] = img1
    overlay[..., 0] = img0
    overlay[..., 1] = img1

    fig = _tile_plot([img0_red, overlay, img1_green], [title0, title_mid, title1])

    # If a file name was given, save the figure
    if fname is not None:
        fig.savefig(fname, bbox_inches="tight", **fig_kwargs)

    return fig


def draw_lattice_2d(nrows, ncols, delta):
    r"""Create a regular lattice of nrows x ncols squares.

    Creates an image (2D array) of a regular lattice of nrows x ncols squares.
    The size of each square is delta x delta pixels (not counting the
    separation lines). The lines are one pixel width.

    Parameters
    ----------
    nrows : int
        the number of squares to be drawn vertically
    ncols : int
        the number of squares to be drawn horizontally
    delta : int
        the size of each square of the grid. Each square is delta x delta
        pixels

    Returns
    -------
    lattice : array, shape (R, C)
        the image (2D array) of the regular lattice. The shape (R, C) of the
        array is given by
        R = 1 + (delta + 1) * nrows
        C = 1 + (delta + 1) * ncols
    """
    lattice = np.ndarray(
        (1 + (delta + 1) * nrows, 1 + (delta + 1) * ncols), dtype=np.float64
    )

    # Fill the lattice with "white"
    lattice[...] = 127

    # Draw the horizontal lines in "black"
    for i in range(nrows + 1):
        lattice[i * (delta + 1), :] = 0

    # Draw the vertical lines in "black"
    for j in range(ncols + 1):
        lattice[:, j * (delta + 1)] = 0

    return lattice


@warning_for_keywords()
def plot_2d_diffeomorphic_map(
    mapping,
    *,
    delta=10,
    fname=None,
    direct_grid_shape=None,
    direct_grid2world=-1,
    inverse_grid_shape=None,
    inverse_grid2world=-1,
    show_figure=True,
    **fig_kwargs,
):
    r"""Draw the effect of warping a regular lattice by a diffeomorphic map.

    Draws a diffeomorphic map by showing the effect of the deformation on a
    regular grid.
The resulting figure contains two images: the direct transformation is plotted to the left, and the inverse transformation is plotted to the right. Parameters ---------- mapping : DiffeomorphicMap object the diffeomorphic map to be drawn delta : int, optional the size (in pixels) of the squares of the regular lattice to be used to plot the warping effects. Each square will be delta x delta pixels. By default, the size will be 10 pixels. fname : string, optional the name of the file the figure will be written to. If None (default), the figure will not be saved to disk. direct_grid_shape : tuple, shape (2,), optional the shape of the grid image after being deformed by the direct transformation. By default, the shape of the deformed grid is the same as the grid of the displacement field, which is by default equal to the shape of the fixed image. In other words, the resulting deformed grid (deformed by the direct transformation) will normally have the same shape as the fixed image. direct_grid2world : array, shape (3, 3), optional the affine transformation mapping the direct grid's coordinates to physical space. By default, this transformation will correspond to the image-to-world transformation corresponding to the default direct_grid_shape (in general, if users specify a direct_grid_shape, they should also specify direct_grid2world). inverse_grid_shape : tuple, shape (2,), optional the shape of the grid image after being deformed by the inverse transformation. By default, the shape of the deformed grid under the inverse transform is the same as the image used as "moving" when the diffeomorphic map was generated by a registration algorithm (so it corresponds to the effect of warping the static image towards the moving). inverse_grid2world : array, shape (3, 3), optional the affine transformation mapping inverse grid's coordinates to physical space. By default, this transformation will correspond to the image-to-world transformation corresponding to the default inverse_grid_shape (in general, if users specify an inverse_grid_shape, they should also specify inverse_grid2world). show_figure : bool, optional if True (default), the deformed grids will be plotted using matplotlib, else the grids are just returned fig_kwargs : dict Extra parameters for saving figure, e.g. `dpi=300`. Returns ------- warped_forward : array Image with the grid showing the effect of transforming the moving image to the static image. The shape will be `direct_grid_shape` if specified, otherwise the shape of the static image. warped_backward : array Image with the grid showing the effect of transforming the static image to the moving image. Shape will be `inverse_grid_shape` if specified, otherwise the shape of the moving image. Notes ----- The default value for the affine transformation is "-1" to handle the case in which the user provides "None" as input meaning "identity". If we used None as default, we wouldn't know if the user specifically wants to use the identity (specifically passing None) or if it was left unspecified, meaning to use the appropriate default matrix. 
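
    Examples
    --------
    A minimal sketch, assuming ``mapping`` is a ``DiffeomorphicMap`` obtained
    from a previous call to ``SymmetricDiffeomorphicRegistration.optimize``
    (the output file name is illustrative)::

        warped_forward, warped_backward = plot_2d_diffeomorphic_map(
            mapping, delta=10, fname="diffeo_grids.png"
        )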
""" if mapping.is_inverse: # By default, direct_grid_shape is the codomain grid if direct_grid_shape is None: direct_grid_shape = mapping.codomain_shape if direct_grid2world == -1: direct_grid2world = mapping.codomain_grid2world # By default, the inverse grid is the domain grid if inverse_grid_shape is None: inverse_grid_shape = mapping.domain_shape if inverse_grid2world == -1: inverse_grid2world = mapping.domain_grid2world else: # Now by default, direct_grid_shape is the mapping's input grid if direct_grid_shape is None: direct_grid_shape = mapping.domain_shape if direct_grid2world == -1: direct_grid2world = mapping.domain_grid2world # By default, the output grid is the mapping's domain grid if inverse_grid_shape is None: inverse_grid_shape = mapping.codomain_shape if inverse_grid2world == -1: inverse_grid2world = mapping.codomain_grid2world # The world-to-image (image = drawn lattice on the output grid) # transformation is the inverse of the output affine world_to_image = None if inverse_grid2world is not None: world_to_image = np.linalg.inv(inverse_grid2world) # Draw the squares on the output grid lattice_out = draw_lattice_2d( (inverse_grid_shape[0] + delta) // (delta + 1), (inverse_grid_shape[1] + delta) // (delta + 1), delta, ) lattice_out = lattice_out[0 : inverse_grid_shape[0], 0 : inverse_grid_shape[1]] # Warp in the forward direction (sampling it on the input grid) warped_forward = mapping.transform( lattice_out, interpolation="linear", image_world2grid=world_to_image, out_shape=direct_grid_shape, out_grid2world=direct_grid2world, ) # Now, the world-to-image (image = drawn lattice on the input grid) # transformation is the inverse of the input affine world_to_image = None if direct_grid2world is not None: world_to_image = np.linalg.inv(direct_grid2world) # Draw the squares on the input grid lattice_in = draw_lattice_2d( (direct_grid_shape[0] + delta) // (delta + 1), (direct_grid_shape[1] + delta) // (delta + 1), delta, ) lattice_in = lattice_in[0 : direct_grid_shape[0], 0 : direct_grid_shape[1]] # Warp in the backward direction (sampling it on the output grid) warped_backward = mapping.transform_inverse( lattice_in, interpolation="linear", image_world2grid=world_to_image, out_shape=inverse_grid_shape, out_grid2world=inverse_grid2world, ) # Now plot the grids if show_figure: plt.figure() plt.subplot(1, 2, 1).set_axis_off() plt.imshow(warped_forward, cmap=plt.cm.gray) plt.title("Direct transform") plt.subplot(1, 2, 2).set_axis_off() plt.imshow(warped_backward, cmap=plt.cm.gray) plt.title("Inverse transform") # Finally, save the figure to disk if fname is not None: plt.savefig(fname, bbox_inches="tight", **fig_kwargs) # Return the deformed grids return warped_forward, warped_backward @warning_for_keywords() def plot_slices(V, *, slice_indices=None, fname=None, **fig_kwargs): r"""Plot 3 slices from the given volume: 1 sagittal, 1 coronal and 1 axial Creates a figure showing the axial, coronal and sagittal slices at the requested positions of the given volume. The requested slices are specified by slice_indices. Parameters ---------- V : array, shape (S, R, C) the 3D volume to extract the slices from slice_indices : array, shape (3,), optional the indices of the sagittal (slice_indices[0]), coronal (slice_indices[1]) and axial (slice_indices[2]) slices to be displayed. If None, the middle slices along each direction are displayed. fname : string, optional the name of the file to save the figure to. If None (default), the figure is not saved to disk. 
    fig_kwargs : dict
        Extra parameters for saving figure, e.g. `dpi=300`.
    """
    if slice_indices is None:
        slice_indices = np.array(V.shape) // 2

    # Normalize the intensities to [0, 255]
    V = np.asarray(V, dtype=np.float64)
    V = 255 * (V - V.min()) / (V.max() - V.min())

    # Extract the requested slices
    axial = np.asarray(V[:, :, slice_indices[2]]).astype(np.uint8).T
    coronal = np.asarray(V[:, slice_indices[1], :]).astype(np.uint8).T
    sagittal = np.asarray(V[slice_indices[0], :, :]).astype(np.uint8).T

    fig = _tile_plot(
        [axial, coronal, sagittal],
        ["Axial", "Coronal", "Sagittal"],
        cmap=plt.cm.gray,
        origin="lower",
    )

    # Save the figure if requested
    if fname is not None:
        fig.savefig(fname, bbox_inches="tight", **fig_kwargs)

    return fig


@warning_for_keywords()
def overlay_slices(
    L,
    R,
    *,
    slice_index=None,
    slice_type=1,
    ltitle="Left",
    rtitle="Right",
    fname=None,
    **fig_kwargs,
):
    r"""Plot three overlaid slices from the given volumes.

    Creates a figure containing three images: the gray scale k-th slice of
    the first volume (L) to the left, where k=slice_index, the k-th slice of
    the second volume (R) to the right and the k-th slices of the two given
    images on top of each other using the red channel for the first volume
    and the green channel for the second one. It is assumed that both volumes
    have the same shape. The intended use of this function is to visually
    assess the quality of a registration result.

    Parameters
    ----------
    L : array, shape (S, R, C)
        the first volume to extract the slice from, plotted to the left
    R : array, shape (S, R, C)
        the second volume to extract the slice from, plotted to the right
    slice_index : int, optional
        the index of the slices (along the axis given by slice_type) to be
        overlaid. If None, the middle slice along the specified axis is used.
    slice_type : int, optional
        the type of slice to be extracted: 0=sagittal, 1=coronal (default),
        2=axial.
    ltitle : string, optional
        the string to be written as the title of the left image. Default is
        'Left'.
    rtitle : string, optional
        the string to be written as the title of the right image. Default is
        'Right'.
    fname : string, optional
        the name of the file to write the image to. If None (default), the
        figure is not saved to disk.
    fig_kwargs : dict
        Extra parameters for saving figure, e.g. `dpi=300`.
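
    Examples
    --------
    A minimal sketch, assuming ``static`` and ``warped`` are two registered
    volumes of the same shape (the output file name is illustrative)::

        overlay_slices(static, warped, slice_type=2,
                       ltitle="Static", rtitle="Warped",
                       fname="overlay_axial.png")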
""" # Normalize the intensities to [0,255] sh = L.shape L = np.asarray(L, dtype=np.float64) R = np.asarray(R, dtype=np.float64) L = 255 * (L - L.min()) / (L.max() - L.min()) R = 255 * (R - R.min()) / (R.max() - R.min()) # Create the color image to draw the overlapped slices into, and extract # the slices (note the transpositions) if slice_type == 0: if slice_index is None: slice_index = sh[0] // 2 colorImage = np.zeros(shape=(sh[2], sh[1], 3), dtype=np.uint8) ll = np.asarray(L[slice_index, :, :]).astype(np.uint8).T rr = np.asarray(R[slice_index, :, :]).astype(np.uint8).T elif slice_type == 1: if slice_index is None: slice_index = sh[1] // 2 colorImage = np.zeros(shape=(sh[2], sh[0], 3), dtype=np.uint8) ll = np.asarray(L[:, slice_index, :]).astype(np.uint8).T rr = np.asarray(R[:, slice_index, :]).astype(np.uint8).T elif slice_type == 2: if slice_index is None: slice_index = sh[2] // 2 colorImage = np.zeros(shape=(sh[1], sh[0], 3), dtype=np.uint8) ll = np.asarray(L[:, :, slice_index]).astype(np.uint8).T rr = np.asarray(R[:, :, slice_index]).astype(np.uint8).T else: print("Slice type must be 0, 1 or 2.") return # Draw the intensity images to the appropriate channels of the color image # The "(ll > ll[0, 0])" condition is just an attempt to eliminate the # background when its intensity is not exactly zero (the [0,0] corner is # usually background) colorImage[..., 0] = ll * (ll > ll[0, 0]) colorImage[..., 1] = rr * (rr > rr[0, 0]) fig = _tile_plot( [ll, colorImage, rr], [ltitle, "Overlay", rtitle], cmap=plt.cm.gray, origin="lower", ) # Save the figure to disk, if requested if fname is not None: fig.savefig(fname, bbox_inches="tight", **fig_kwargs) return fig dipy-1.11.0/dipy/viz/streamline.py000066400000000000000000000153461476546756600170740ustar00rootroot00000000000000import warnings from dipy.testing.decorators import warning_for_keywords from dipy.utils.optpkg import optional_package plt, have_plt, _ = optional_package("matplotlib.pyplot") fury, has_fury, _ = optional_package("fury", min_version="0.10.0") if has_fury: from dipy.viz import actor, window sagittal_deprecation_warning_msg = ( "The view argument value `sagital` is deprecated and " # codespell:ignore sagital "its support will be removed in a future version. Please, use `sagittal` instead." ) @warning_for_keywords() def show_bundles( bundles, *, interactive=True, view="sagittal", colors=None, linewidth=0.3, save_as=None, ): """Render bundles to visualize them interactively or save them into a png. The function allows to just render the bundles in an interactive plot or to export them into a png file. Parameters ---------- bundles : list Bundles to be rendered. interactive : boolean, optional If True a 3D interactive rendering is created. Default is True. view : str, optional Viewing angle. Supported options: 'sagittal', 'axial' and 'coronal'. colors : list, optional Colors to be used for each bundle. If None default colors are used. linewidth : float, optional Width of each rendered streamline. Default is 0.3. save_as : str, optional If not None rendered scene is stored in a png file with that name. Default is None. 
""" if view == "sagital": # codespell:ignore sagital warnings.warn( sagittal_deprecation_warning_msg, category=DeprecationWarning, stacklevel=2, ) view = "sagittal" scene = window.Scene() scene.SetBackground(1.0, 1, 1) for i, bundle in enumerate(bundles): if colors is None: lines_actor = actor.streamtube(bundle, linewidth=linewidth) else: lines_actor = actor.streamtube( bundle, linewidth=linewidth, colors=colors[i] ) if view == "sagittal": lines_actor.RotateX(-90) lines_actor.RotateZ(90) elif view == "axial": pass elif view == "coronal": lines_actor.RotateX(-90) else: raise ValueError( "Invalid view argument value. Use 'sagittal', 'axial' or 'coronal'." ) scene.add(lines_actor) if interactive: window.show(scene) if save_as is not None: window.record(scene=scene, n_frames=1, out_path=save_as, size=(900, 900)) @warning_for_keywords() def viz_two_bundles(b1, b2, fname, *, c1=(1, 0, 0), c2=(0, 1, 0), interactive=False): """Render and plot two bundles to visualize them. Parameters ---------- b1 : Streamlines Bundle one to be rendered. b2 : Streamlines Bundle two to be rendered. fname: str Rendered scene is stored in a png file with that name. C1 : tuple, optional Color to be used for first bundle. Default red. C2 : tuple, optional Color to be used for second bundle. Default green. interactive : boolean, optional If True a 3D interactive rendering is created. Default is True. """ ren = window.Scene() ren.SetBackground(1, 1, 1) actor1 = actor.line(b1, colors=c1) actor1.GetProperty().SetEdgeVisibility(1) actor1.GetProperty().SetRenderLinesAsTubes(1) actor1.GetProperty().SetLineWidth(6) actor1.GetProperty().SetOpacity(1) actor1.RotateX(-70) actor1.RotateZ(90) ren.add(actor1) actor2 = actor.line(b2, colors=c2) actor2.GetProperty().SetEdgeVisibility(1) actor2.GetProperty().SetRenderLinesAsTubes(1) actor2.GetProperty().SetLineWidth(6) actor2.GetProperty().SetOpacity(1) actor2.RotateX(-70) actor2.RotateZ(90) ren.add(actor2) if interactive: window.show(ren) window.record(scene=ren, n_frames=1, out_path=fname, size=(1200, 1200)) if interactive: im = plt.imread(fname) plt.figure(figsize=(10, 10)) plt.imshow(im) @warning_for_keywords() def viz_vector_field( points_aligned, directions, colors, offsets, fname, *, bundle=None, interactive=False, ): """Render and plot vector field. Parameters ---------- points_aligned : List List containing starting positions of vectors. directions : List List containing unitary directions of vectors. colors : List List containing colors for each vector. offsets : List List containing vector field modules. fname: str Rendered scene is stored in a png file with that name. bundle : Streamlines, optional Bundle to be rendered with vector field (Default None). interactive : boolean, optional If True a 3D interactive rendering is created. Default is True. """ scene = window.Scene() scene.SetBackground(1.0, 1, 1) arrows = actor.arrow(points_aligned, directions, colors, scales=offsets) arrows.RotateX(-70) arrows.RotateZ(90) scene.add(arrows) if bundle: actor1 = actor.line(bundle, colors=(0, 0, 1)) actor1.RotateX(-70) actor1.RotateZ(90) scene.add(actor1) if interactive: window.show(scene) window.record(scene=scene, n_frames=1, out_path=fname, size=(1200, 1200)) if interactive: im = plt.imread(fname) plt.figure(figsize=(10, 10)) plt.imshow(im) @warning_for_keywords() def viz_displacement_mag(bundle, offsets, fname, *, interactive=False): """Render and plot displacement magnitude over the bundle. Parameters ---------- bundle : Streamlines, Bundle to be rendered. 
    offsets : List
        List containing displacement magnitudes per point on the bundle.
    fname : str
        Rendered scene is stored in a png file with that name.
    interactive : boolean, optional
        If True a 3D interactive rendering is created. Default is False.

    """
    scene = window.Scene()

    hue = (0.9, 0.3)
    saturation = (0.5, 1)
    scene.background((1, 1, 1))

    lut_cmap = actor.colormap_lookup_table(
        scale_range=(offsets.min(), offsets.max()),
        hue_range=hue,
        saturation_range=saturation,
    )

    stream_actor = actor.line(
        bundle, colors=offsets, linewidth=7, lookup_colormap=lut_cmap
    )
    stream_actor.RotateX(-70)
    stream_actor.RotateZ(90)
    scene.add(stream_actor)

    bar = actor.scalar_bar(lookup_table=lut_cmap)
    scene.add(bar)

    if interactive:
        window.show(scene)

    window.record(scene=scene, n_frames=1, out_path=fname, size=(2000, 1500))
    if interactive:
        im = plt.imread(fname)
        plt.figure(figsize=(10, 10))
        plt.imshow(im)
dipy-1.11.0/dipy/viz/tests/000077500000000000000000000000001476546756600155105ustar00rootroot00000000000000dipy-1.11.0/dipy/viz/tests/__init__.py000066400000000000000000000000001476546756600176070ustar00rootroot00000000000000dipy-1.11.0/dipy/viz/tests/meson.build000066400000000000000000000003431476546756600176520ustar00rootroot00000000000000python_sources = [
  '__init__.py',
  'test_apps.py',
  'test_fury.py',
  'test_regtools.py',
  'test_streamline.py',
  'test_util.py',
]

py3.install_sources(
  python_sources,
  pure: false,
  subdir: 'dipy/viz/tests'
)
dipy-1.11.0/dipy/viz/tests/test_apps.py000066400000000000000000000222261476546756600200700ustar00rootroot00000000000000import os
from os.path import join as pjoin
from tempfile import TemporaryDirectory
import warnings

import numpy as np
import numpy.testing as npt
import pytest

from dipy.data import DATA_DIR
from dipy.direction.peaks import PeaksAndMetrics
from dipy.io.stateful_tractogram import Space, StatefulTractogram
from dipy.io.utils import create_nifti_header
from dipy.testing import check_for_warnings
from dipy.testing.decorators import set_random_number_generator, use_xvfb
from dipy.tracking.streamline import Streamlines
from dipy.utils.optpkg import optional_package

fury, has_fury, setup_module = optional_package("fury", min_version="0.10.0")

if has_fury:
    from fury import io, window

    from dipy.viz.horizon.app import horizon

skip_it = use_xvfb == "skip"


@pytest.mark.skipif(skip_it or not has_fury, reason="Needs xvfb")
@set_random_number_generator()
def test_horizon_events(rng):
    # using here MNI template affine 2009a
    affine = np.array(
        [
            [1.0, 0.0, 0.0, -98.0],
            [0.0, 1.0, 0.0, -134.0],
            [0.0, 0.0, 1.0, -72.0],
            [0.0, 0.0, 0.0, 1.0],
        ]
    )

    data = 255 * rng.random((197, 233, 189))
    vox_size = (1.0, 1.0, 1.0)

    img = np.zeros((197, 233, 189))
    img[0:25, :, :] = 1
    images = [(data, affine, "/test/filename.nii.gz"), (img, affine)]

    peak_dirs = 255 * rng.random((5, 5, 5, 5, 3))
    pam = PeaksAndMetrics()
    pam.peak_dirs = peak_dirs
    pam.affine = affine
    pams = [pam]

    from dipy.segment.tests.test_bundles import setup_module

    setup_module()
    from dipy.segment.tests.test_bundles import f1

    streamlines = f1.copy()
    streamlines._data += np.array([-98.0, -134.0, -72.0])

    header = create_nifti_header(affine, data.shape, vox_size)
    sft = StatefulTractogram(streamlines, header, Space.RASMM)
    tractograms = [sft]

    # select all centroids and expand and click everything else
    # do not press the key shortcuts as vtk generates warning that
    # blocks recording
    fname = os.path.join(DATA_DIR, "record_horizon.log.gz")

    with TemporaryDirectory() as out_dir:
        horizon(
            tractograms=tractograms,
            images=images,
            pams=pams,
cluster=True, cluster_thr=5.0, roi_images=True, random_colors=False, length_gt=0, length_lt=np.inf, clusters_gt=0, clusters_lt=np.inf, world_coords=True, interactive=True, out_png=pjoin(out_dir, "horizon-event.png"), recorded_events=fname, ) @pytest.mark.skipif(skip_it or not has_fury, reason="Needs xvfb") @set_random_number_generator() def test_horizon(rng): s1 = 10 * np.array( [[0, 0, 0], [1, 0, 0], [2, 0, 0], [3, 0, 0], [4, 0, 0]], dtype="f8" ) s2 = 10 * np.array( [[0, 0, 0], [0, 1, 0], [0, 2, 0], [0, 3, 0], [0, 4, 0]], dtype="f8" ) s3 = 10 * np.array( [[0, 0, 0], [1, 0.2, 0], [2, 0.2, 0], [3, 0.2, 0], [4, 0.2, 0]], dtype="f8" ) streamlines = Streamlines() streamlines.append(s1) streamlines.append(s2) streamlines.append(s3) streamlines.shrink_data() affine = np.array( [ [1.0, 0.0, 0.0, -98.0], [0.0, 1.0, 0.0, -134.0], [0.0, 0.0, 1.0, -72.0], [0.0, 0.0, 0.0, 1.0], ] ) data = 255 * rng.random((197, 233, 189)) vox_size = (1.0, 1.0, 1.0) streamlines._data += np.array([-98.0, -134.0, -72.0]) header = create_nifti_header(affine, data.shape, vox_size) sft = StatefulTractogram(streamlines, header, Space.RASMM) # only tractograms tractograms = [sft] images = None with TemporaryDirectory() as out_dir: horizon( tractograms=tractograms, images=images, cluster=True, cluster_thr=5, random_colors=False, length_lt=np.inf, length_gt=0, clusters_lt=np.inf, clusters_gt=0, world_coords=True, interactive=False, out_png=pjoin(out_dir, "only-tractograms.png"), ) images = [(data, affine, "/test/filename.nii.gz")] # tractograms in native coords (not supported for now) with npt.assert_raises(ValueError) as ve: horizon( tractograms=tractograms, images=images, cluster=True, cluster_thr=5, random_colors=False, length_lt=np.inf, length_gt=0, clusters_lt=np.inf, clusters_gt=0, world_coords=False, interactive=False, out_png=pjoin(out_dir, "native-tractograms.png"), ) msg = "Currently native coordinates are not supported for streamlines." npt.assert_(msg in str(ve.exception)) # only images tractograms = None horizon( tractograms=tractograms, images=images, cluster=True, cluster_thr=5, random_colors=False, length_lt=np.inf, length_gt=0, clusters_lt=np.inf, clusters_gt=0, world_coords=True, interactive=False, out_png=pjoin(out_dir, "only-images.png"), ) # no clustering tractograms and images horizon( tractograms=tractograms, images=images, cluster=False, cluster_thr=5, random_colors=False, length_lt=np.inf, length_gt=0, clusters_lt=np.inf, clusters_gt=0, world_coords=True, interactive=False, out_png=pjoin(out_dir, "no-clusting-tractograms-and-images.png"), ) @pytest.mark.skipif(skip_it or not has_fury, reason="Needs xvfb") def test_horizon_wrong_dtype_images(): affine = np.array( [ [1.0, 0.0, 0.0, -98.0], [0.0, 1.0, 0.0, -134.0], [0.0, 0.0, 1.0, -72.0], [0.0, 0.0, 0.0, 1.0], ] ) data = np.random.rand(197, 233, 189).astype(np.bool_) images = [(data, affine)] with TemporaryDirectory() as out_dir: horizon( images=images, interactive=False, out_png=pjoin(out_dir, "wrong-dtype.png"), ) # Asserting the image will not get added and the image will be black. 
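        # A single unique value in the snapshot means an all-black render,
        # i.e. the boolean-typed volume was rejected rather than displayed.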
assert len(np.unique(io.load_image(pjoin(out_dir, "wrong-dtype.png")))) == 1 @pytest.mark.skipif(skip_it or not has_fury, reason="Needs xvfb") @set_random_number_generator(42) def test_roi_images(rng): img1 = rng.random((5, 5, 5)) img2 = np.zeros((5, 5, 5)) img2[2, 2, 2] = 1 img3 = np.zeros((5, 5, 5)) img3[0, :, :] = 1 images = [ (img1, np.eye(4)), (img2, np.eye(4), "/test/filename.nii.gz"), (img3, np.eye(4), "/test/filename.nii.gz"), ] show_m = horizon(images=images, return_showm=True) analysis = window.analyze_scene(show_m.scene) npt.assert_equal(analysis.actors, 0) arr = window.snapshot(show_m.scene) report = window.analyze_snapshot(arr, colors=[(0, 0, 0), (255, 255, 255)]) npt.assert_array_equal(report.colors_found, [True, True]) show_m = horizon(images=images, roi_images=True, return_showm=True) analysis = window.analyze_scene(show_m.scene) npt.assert_equal(analysis.actors, 3) @pytest.mark.skipif(skip_it or not has_fury, reason="Needs xvfb") @set_random_number_generator(42) def test_surfaces(rng): vertices = rng.random((100, 3)) faces = rng.integers(0, 100, size=(100, 3)) surfaces = [ (vertices, faces), (vertices, faces, "/test/filename.pial"), (vertices, faces, "/test/filename.pial"), ] show_m = horizon(surfaces=surfaces, return_showm=True) analysis = window.analyze_scene(show_m.scene) npt.assert_equal(analysis.actors, 3) vertices = rng.random((100, 4)) faces = rng.integers(0, 100, size=(100, 3)) surfaces = [ (vertices, faces), (vertices, faces, "/test/filename.pial"), (vertices, faces, "/test/filename.pial"), ] with warnings.catch_warnings(record=True) as l_warns: show_m = horizon(surfaces=surfaces, return_showm=True) analysis = window.analyze_scene(show_m.scene) npt.assert_equal(analysis.actors, 0) check_for_warnings(l_warns, "Vertices do not have correct shape: (100, 4)") vertices = rng.random((100, 3)) faces = rng.integers(0, 100, size=(100, 4)) surfaces = [ (vertices, faces), (vertices, faces, "/test/filename.pial"), (vertices, faces, "/test/filename.pial"), ] with warnings.catch_warnings(record=True) as l_warns: show_m = horizon(surfaces=surfaces, return_showm=True) analysis = window.analyze_scene(show_m.scene) npt.assert_equal(analysis.actors, 0) check_for_warnings(l_warns, "Faces do not have correct shape: (100, 4)") @pytest.mark.skipif(skip_it, reason="Needs xvfb") def test_small_horizon_import(): from dipy.viz import horizon as Horizon if has_fury: assert Horizon == horizon else: npt.assert_raises(ImportError, Horizon) dipy-1.11.0/dipy/viz/tests/test_fury.py000066400000000000000000000362561476546756600201220ustar00rootroot00000000000000from os.path import join as pjoin from tempfile import TemporaryDirectory import warnings import numpy as np import numpy.testing as npt import pytest from dipy.align.reslice import reslice from dipy.align.tests.test_streamlinear import fornix_streamlines from dipy.data import default_sphere, get_sphere, read_stanford_labels from dipy.direction import peaks_from_model from dipy.reconst.dti import color_fa, fractional_anisotropy from dipy.reconst.shm import CsaOdfModel, descoteaux07_legacy_msg, sh_to_sf_matrix from dipy.testing.decorators import set_random_number_generator, use_xvfb from dipy.tracking import utils from dipy.tracking.local_tracking import LocalTracking from dipy.tracking.stopping_criterion import ThresholdStoppingCriterion from dipy.tracking.streamline import center_streamlines, transform_streamlines from dipy.utils.optpkg import optional_package fury, has_fury, setup_module = optional_package("fury", min_version="0.10.0") if 
has_fury: from dipy.viz import actor, colormap, window skip_it = use_xvfb == "skip" @pytest.mark.skipif(skip_it or not has_fury, reason="Needs xvfb") @set_random_number_generator() def test_slicer(rng): scene = window.Scene() data = 255 * rng.random((50, 50, 50)) affine = np.diag([1, 3, 2, 1]) data2, affine2 = reslice(data, affine, zooms=(1, 3, 2), new_zooms=(1, 1, 1)) slicer = actor.slicer(data2, affine=affine2, interpolation="linear") slicer.display(x=None, y=None, z=25) scene.add(slicer) scene.reset_camera() scene.reset_clipping_range() # window.show(scene, reset_camera=False) arr = window.snapshot(scene, offscreen=True) report = window.analyze_snapshot(arr, find_objects=True) npt.assert_equal(report.objects, 1) npt.assert_array_equal([1, 3, 2] * np.array(data.shape), np.array(slicer.shape)) @pytest.mark.skipif(skip_it or not has_fury, reason="Needs xvfb") def test_contour_from_roi(): hardi_img, gtab, labels_img = read_stanford_labels() data = np.asanyarray(hardi_img.dataobj) labels = np.asanyarray(labels_img.dataobj) affine = hardi_img.affine white_matter = (labels == 1) | (labels == 2) with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) csa_model = CsaOdfModel(gtab, sh_order_max=6) csa_peaks = peaks_from_model( csa_model, data, default_sphere, relative_peak_threshold=0.8, min_separation_angle=45, mask=white_matter, ) classifier = ThresholdStoppingCriterion(csa_peaks.gfa, 0.25) seed_mask = labels == 2 seeds = utils.seeds_from_mask(seed_mask, density=[1, 1, 1], affine=affine) # Initialization of LocalTracking. # The computation happens in the next step. streamlines = LocalTracking(csa_peaks, classifier, seeds, affine, step_size=2) # Compute streamlines and store as a list. streamlines = list(streamlines) # Prepare the display objects. streamlines_actor = actor.line( streamlines, colors=colormap.line_colors(streamlines) ) seedroi_actor = actor.contour_from_roi( seed_mask, affine=affine, color=[0, 1, 1], opacity=0.5 ) with TemporaryDirectory() as out_dir: # Create the 3d display. 
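        # Two scenes are compared below: one with the streamlines alone and
        # one with the seed-ROI contour added, so a misplaced ROI (affine
        # error) would change the detected object count between snapshots.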
sc = window.Scene() sc2 = window.Scene() sc.add(streamlines_actor) arr3 = window.snapshot( sc, fname=pjoin(out_dir, "test_surface3.png"), offscreen=True ) report3 = window.analyze_snapshot(arr3, find_objects=True) sc2.add(streamlines_actor) sc2.add(seedroi_actor) arr4 = window.snapshot( sc2, fname=pjoin(out_dir, "test_surface4.png"), offscreen=True ) report4 = window.analyze_snapshot(arr4, find_objects=True) # assert that the seed ROI rendering is not far # away from the streamlines (affine error) npt.assert_equal(report3.objects, report4.objects) # window.show(sc) # window.show(sc2) @pytest.mark.skipif(skip_it or not has_fury, reason="Needs xvfb") @set_random_number_generator() def test_bundle_maps(rng): scene = window.Scene() bundle = fornix_streamlines() bundle, _ = center_streamlines(bundle) mat = np.array([[1, 0, 0, 100], [0, 1, 0, 100], [0, 0, 1, 100], [0, 0, 0, 1.0]]) bundle = transform_streamlines(bundle, mat) # metric = np.random.rand(*(200, 200, 200)) metric = 100 * np.ones((200, 200, 200)) # add lower values metric[100, :, :] = 100 * 0.5 # create a nice orange-red colormap lut = actor.colormap_lookup_table( scale_range=(0.0, 100.0), hue_range=(0.0, 0.1), saturation_range=(1, 1), value_range=(1.0, 1), ) line = actor.line(bundle, colors=metric, linewidth=0.1, lookup_colormap=lut) scene.add(line) scene.add(actor.scalar_bar(lookup_table=lut)) report = window.analyze_scene(scene) npt.assert_almost_equal(report.actors, 1) # window.show(scene) scene.clear() nb_points = np.sum([len(b) for b in bundle]) values = 100 * rng.random((nb_points)) # values[:nb_points/2] = 0 line = actor.streamtube(bundle, colors=values, linewidth=0.1, lookup_colormap=lut) scene.add(line) # window.show(scene) report = window.analyze_scene(scene) npt.assert_equal(report.actors_classnames[0], "vtkLODActor") scene.clear() colors = rng.random((nb_points, 3)) # values[:nb_points/2] = 0 line = actor.line(bundle, colors=colors, linewidth=2) scene.add(line) # window.show(scene) report = window.analyze_scene(scene) npt.assert_equal(report.actors_classnames[0], "vtkLODActor") # window.show(scene) arr = window.snapshot(scene) report2 = window.analyze_snapshot(arr) npt.assert_equal(report2.objects, 1) # try other input options for colors scene.clear() actor.line(bundle, colors=(1.0, 0.5, 0)) actor.line(bundle, colors=np.arange(len(bundle))) actor.line(bundle) colors = [rng.random(b.shape) for b in bundle] actor.line(bundle, colors=colors) @pytest.mark.skipif(skip_it or not has_fury, reason="Needs xvfb") def test_odf_slicer(interactive=False): # Prepare our data sphere = get_sphere(name="repulsion100") shape = (11, 11, 11, sphere.vertices.shape[0]) odfs = np.ones(shape) affine = np.array( [ [2.0, 0.0, 0.0, 3.0], [0.0, 2.0, 0.0, 3.0], [0.0, 0.0, 2.0, 1.0], [0.0, 0.0, 0.0, 1.0], ] ) mask = np.ones(odfs.shape[:3], bool) mask[:4, :4, :4] = False # Test that affine and mask work odf_actor = actor.odf_slicer( odfs, sphere=sphere, affine=affine, mask=mask, scale=0.25, colormap="blues" ) k = 2 _I, _J, _ = odfs.shape[:3] odf_actor.display_extent(0, _I - 1, 0, _J - 1, k, k) scene = window.Scene() scene.add(odf_actor) scene.reset_camera() scene.reset_clipping_range() if interactive: window.show(scene, reset_camera=False) arr = window.snapshot(scene) report = window.analyze_snapshot(arr, find_objects=True) npt.assert_equal(report.objects, 11 * 11 - 16) # Test that global colormap works odf_actor = actor.odf_slicer( odfs, sphere=sphere, mask=mask, scale=0.25, colormap="blues", norm=False, global_cm=True, ) scene.clear() 
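    # Re-populate the scene with the globally colormapped ODF actor.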
scene.add(odf_actor) scene.reset_camera() scene.reset_clipping_range() if interactive: window.show(scene) # Test that the most basic odf_slicer instantiation works odf_actor = actor.odf_slicer(odfs) scene.clear() scene.add(odf_actor) scene.reset_camera() scene.reset_clipping_range() if interactive: window.show(scene) # Test that odf_slicer.display works properly scene.clear() scene.add(odf_actor) scene.add(actor.axes(scale=(11, 11, 11))) for i in range(11): odf_actor.display(x=i, y=None, z=None) if interactive: window.show(scene) for j in range(11): odf_actor.display(x=None, y=j, z=None) if interactive: window.show(scene) # With mask equal to zero everything should be black mask = np.zeros(odfs.shape[:3]) odf_actor = actor.odf_slicer( odfs, sphere=sphere, mask=mask, scale=0.25, colormap="blues", norm=False, global_cm=True, ) scene.clear() scene.add(odf_actor) scene.reset_camera() scene.reset_clipping_range() if interactive: window.show(scene) # global_cm=True with colormap=None should raise an error npt.assert_raises( OSError, actor.odf_slicer, odfs, sphere=sphere, mask=None, scale=0.25, colormap=None, norm=False, global_cm=True, ) # Dimension mismatch between sphere vertices and number # of SF coefficients will raise an error. npt.assert_raises( ValueError, actor.odf_slicer, odfs, mask=None, sphere=get_sphere(name="repulsion200"), scale=0.25, ) # colormap=None and global_cm=False results in directionally encoded colors odf_actor = actor.odf_slicer( odfs, sphere=sphere, mask=None, scale=0.25, colormap=None, norm=False, global_cm=False, ) scene.clear() scene.add(odf_actor) scene.reset_camera() scene.reset_clipping_range() if interactive: window.show(scene) # Test that SH coefficients input works with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) B = sh_to_sf_matrix(sphere, sh_order_max=4, return_inv=False) odfs = np.zeros((11, 11, 11, B.shape[0])) odfs[..., 0] = 1.0 odf_actor = actor.odf_slicer(odfs, sphere=sphere, B_matrix=B) scene.clear() scene.add(odf_actor) scene.reset_camera() scene.reset_clipping_range() if interactive: window.show(scene) # Dimension mismatch between sphere vertices and dimension of # B matrix will raise an error. npt.assert_raises( ValueError, actor.odf_slicer, odfs, mask=None, sphere=get_sphere(name="repulsion200"), ) # Test that constant colormap color works. Also test that sphere # normals are oriented correctly. Will show purple spheres with # a white contour. 
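    # With front-face culling enabled on the larger white glyphs, only their
    # back faces render, leaving a white outline behind the purple spheres.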
odf_contour = actor.odf_slicer( odfs, sphere=sphere, B_matrix=B, colormap=(255, 255, 255) ) odf_contour.GetProperty().SetAmbient(1.0) odf_contour.GetProperty().SetFrontfaceCulling(True) odf_actor = actor.odf_slicer( odfs, sphere=sphere, B_matrix=B, colormap=(255, 0, 255), scale=0.4 ) scene.clear() scene.add(odf_contour) scene.add(odf_actor) scene.reset_camera() scene.reset_clipping_range() if interactive: window.show(scene) # Test that we can change the sphere on an active actor new_sphere = get_sphere(name="symmetric362") with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) new_B = sh_to_sf_matrix(new_sphere, sh_order_max=4, return_inv=False) odf_actor.update_sphere(new_sphere.vertices, new_sphere.faces, new_B) if interactive: window.show(scene) del odf_actor del odfs @pytest.mark.skipif(skip_it or not has_fury, reason="Needs xvfb") def test_tensor_slicer(interactive=False): evals = np.array([1.4, 0.35, 0.35]) * 10 ** (-3) evecs = np.eye(3) mevals = np.zeros((3, 2, 4, 3)) mevecs = np.zeros((3, 2, 4, 3, 3)) mevals[..., :] = evals mevecs[..., :, :] = evecs sphere = get_sphere(name="symmetric724") affine = np.eye(4) scene = window.Scene() tensor_actor = actor.tensor_slicer( mevals, mevecs, affine=affine, sphere=sphere, scale=0.3, opacity=0.4 ) _, J, K = mevals.shape[:3] scene.add(tensor_actor) scene.reset_camera() scene.reset_clipping_range() tensor_actor.display_extent(0, 1, 0, J, 0, K) if interactive: window.show(scene, reset_camera=False) tensor_actor.GetProperty().SetOpacity(1.0) if interactive: window.show(scene, reset_camera=False) npt.assert_equal(scene.GetActors().GetNumberOfItems(), 1) # Test extent big_extent = scene.GetActors().GetLastActor().GetBounds() big_extent_x = abs(big_extent[1] - big_extent[0]) tensor_actor.display(x=2) if interactive: window.show(scene, reset_camera=False) small_extent = scene.GetActors().GetLastActor().GetBounds() small_extent_x = abs(small_extent[1] - small_extent[0]) npt.assert_equal(big_extent_x > small_extent_x, True) # Test empty mask empty_actor = actor.tensor_slicer( mevals, mevecs, affine=affine, mask=np.zeros(mevals.shape[:3]), sphere=sphere, scale=0.3, ) npt.assert_equal(empty_actor.GetMapper(), None) # Test mask mask = np.ones(mevals.shape[:3]) mask[:2, :3, :3] = 0 cfa = color_fa(fractional_anisotropy(mevals), mevecs) tensor_actor = actor.tensor_slicer( mevals, mevecs, affine=affine, mask=mask, scalar_colors=cfa, sphere=sphere, scale=0.3, ) scene.clear() scene.add(tensor_actor) scene.reset_camera() scene.reset_clipping_range() if interactive: window.show(scene, reset_camera=False) mask_extent = scene.GetActors().GetLastActor().GetBounds() mask_extent_x = abs(mask_extent[1] - mask_extent[0]) npt.assert_equal(big_extent_x > mask_extent_x, True) # test display tensor_actor.display() current_extent = scene.GetActors().GetLastActor().GetBounds() current_extent_x = abs(current_extent[1] - current_extent[0]) npt.assert_equal(big_extent_x > current_extent_x, True) if interactive: window.show(scene, reset_camera=False) tensor_actor.display(y=1) current_extent = scene.GetActors().GetLastActor().GetBounds() current_extent_y = abs(current_extent[3] - current_extent[2]) big_extent_y = abs(big_extent[3] - big_extent[2]) npt.assert_equal(big_extent_y > current_extent_y, True) if interactive: window.show(scene, reset_camera=False) tensor_actor.display(z=1) current_extent = scene.GetActors().GetLastActor().GetBounds() current_extent_z = abs(current_extent[5] - 
current_extent[4]) big_extent_z = abs(big_extent[5] - big_extent[4]) npt.assert_equal(big_extent_z > current_extent_z, True) if interactive: window.show(scene, reset_camera=False) # Test error handling of the method when # incompatible dimension of mevals and evecs are passed. mevals = np.zeros((3, 2, 3)) mevecs = np.zeros((3, 2, 4, 3, 3)) with npt.assert_raises(RuntimeError): tensor_actor = actor.tensor_slicer( mevals, mevecs, affine=affine, mask=mask, scalar_colors=cfa, sphere=sphere, scale=0.3, ) dipy-1.11.0/dipy/viz/tests/test_regtools.py000066400000000000000000000026141476546756600207620ustar00rootroot00000000000000import numpy.testing as npt import pytest from dipy.align.imwarp import SymmetricDiffeomorphicRegistration from dipy.align.metrics import SSDMetric from dipy.testing.decorators import set_random_number_generator # Conditional import machinery for matplotlib from dipy.utils.optpkg import optional_package from dipy.viz import regtools _, have_matplotlib, _ = optional_package("matplotlib") @pytest.mark.skipif(not have_matplotlib, reason="Requires Matplotlib") @set_random_number_generator() def test_plot_2d_diffeomorphic_map(rng): # Test the regtools plotting interface (lightly). mv_shape = (11, 12) moving = rng.random(mv_shape) st_shape = (13, 14) static = rng.random(st_shape) dim = static.ndim metric = SSDMetric(dim) level_iters = [200, 100, 50, 25] sdr = SymmetricDiffeomorphicRegistration( metric=metric, level_iters=level_iters, inv_iter=50 ) mapping = sdr.optimize(static, moving) # Smoke testing of plots ff = regtools.plot_2d_diffeomorphic_map(mapping, delta=10) # Default shape is static shape, moving shape npt.assert_equal(ff[0].shape, st_shape) npt.assert_equal(ff[1].shape, mv_shape) # Can specify shape ff = regtools.plot_2d_diffeomorphic_map( mapping, delta=10, direct_grid_shape=(7, 8), inverse_grid_shape=(9, 10) ) npt.assert_equal(ff[0].shape, (7, 8)) npt.assert_equal(ff[1].shape, (9, 10)) dipy-1.11.0/dipy/viz/tests/test_streamline.py000066400000000000000000000072551476546756600212750ustar00rootroot00000000000000import os import tempfile import warnings from numpy.testing import assert_equal, assert_raises import pytest from dipy.align.streamwarp import bundlewarp, bundlewarp_vector_filed from dipy.data import read_five_af_bundles, two_cingulum_bundles from dipy.tracking.streamline import ( Streamlines, set_number_of_points, unlist_streamlines, ) from dipy.utils.optpkg import optional_package _, have_matplotlib, _ = optional_package("matplotlib") fury, have_fury, _ = optional_package("fury", min_version="0.10.0") pd, have_pd, _ = optional_package("pandas") if have_fury: from dipy.viz import window from dipy.viz.streamline import ( sagittal_deprecation_warning_msg, show_bundles, viz_displacement_mag, viz_two_bundles, viz_vector_field, ) bundles = read_five_af_bundles() @pytest.mark.skipif( not have_fury or not have_matplotlib, reason="Requires FURY and Matplotlib" ) def test_output_created(): colors = [ [0.91, 0.26, 0.35], [0.99, 0.50, 0.38], [0.99, 0.88, 0.57], [0.69, 0.85, 0.64], [0.51, 0.51, 0.63], ] with tempfile.TemporaryDirectory() as temp_dir: with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=sagittal_deprecation_warning_msg, category=DeprecationWarning, ) view = "sagital" # codespell:ignore sagital fname = os.path.join(temp_dir, f"test_{view}.png") show_bundles(bundles, interactive=False, view=view, save_as=fname) assert_equal(os.path.exists(fname), True) views = ["axial", "sagittal", "coronal"] for view in views: fname = 
os.path.join(temp_dir, f"test_{view}.png") show_bundles(bundles, interactive=False, view=view, save_as=fname) assert_equal(os.path.exists(fname), True) fname = os.path.join(temp_dir, "test_colors.png") show_bundles(bundles, interactive=False, colors=colors, save_as=fname) assert_equal(os.path.exists(fname), True) # Check rendered image is not empty report = window.analyze_snapshot(fname, find_objects=True) assert_equal(report.objects > 0, True) cb1, cb2 = two_cingulum_bundles() fname = os.path.join(temp_dir, "test_two_bundles.png") viz_two_bundles(cb1, cb2, fname=fname) assert_equal(os.path.exists(fname), True) @pytest.mark.skipif(not have_fury, reason="Requires FURY") def test_incorrect_view(): assert_raises( ValueError, show_bundles, bundles, interactive=False, view="wrong_view" ) @pytest.mark.skipif( not have_fury or not have_matplotlib or not have_pd, reason="Requires FURY, Matplotlib and Pandas", ) def test_bundlewarp_viz(): with tempfile.TemporaryDirectory() as temp_dir: cingulum_bundles = two_cingulum_bundles() cb1 = cingulum_bundles[0] cb1 = Streamlines(set_number_of_points(cb1, nb_points=20)) cb2 = cingulum_bundles[1] cb2 = Streamlines(set_number_of_points(cb2, nb_points=20)) deformed_bundle, affine_bundle, _, _, _ = bundlewarp(cb1, cb2) offsets, directions, colors = bundlewarp_vector_filed( affine_bundle, deformed_bundle ) points_aligned, _ = unlist_streamlines(affine_bundle) fname = os.path.join(temp_dir, "test_vector_field.png") viz_vector_field(points_aligned, directions, colors, offsets, fname) assert_equal(os.path.exists(fname), True) fname = os.path.join(temp_dir, "test_mag_viz.png") viz_displacement_mag(affine_bundle, offsets, fname) assert_equal(os.path.exists(fname), True) dipy-1.11.0/dipy/viz/tests/test_util.py000066400000000000000000000114301476546756600200750ustar00rootroot00000000000000import numpy as np import numpy.testing as npt from dipy.direction.peaks import PeaksAndMetrics from dipy.testing.decorators import set_random_number_generator from dipy.viz.horizon.util import ( check_img_dtype, check_img_shapes, check_peak_size, show_ellipsis, unpack_surface, ) @set_random_number_generator() def test_check_img_shapes(rng): affine = np.array( [ [1.0, 0.0, 0.0, -98.0], [0.0, 1.0, 0.0, -134.0], [0.0, 0.0, 1.0, -72.0], [0.0, 0.0, 0.0, 1.0], ] ) data = 255 * rng.random((197, 233, 189)) data1 = 255 * rng.random((197, 233, 189)) images = [(data, affine), (data1, affine)] npt.assert_equal(check_img_shapes(images), (True, False)) data1 = 255 * rng.random((200, 233, 189)) images = [(data, affine), (data1, affine)] npt.assert_equal(check_img_shapes(images), (False, False)) data = 255 * rng.random((197, 233, 189, 10)) data1 = 255 * rng.random((197, 233, 189)) images = [(data, affine), (data1, affine)] npt.assert_equal(check_img_shapes(images), (True, True)) data = 255 * rng.random((197, 233, 189, 15)) data1 = 255 * rng.random((197, 233, 189, 15)) images = [(data, affine), (data1, affine)] npt.assert_equal(check_img_shapes(images), (True, True)) data = 255 * rng.random((198, 233, 189, 14)) data1 = 255 * rng.random((198, 233, 189, 15)) images = [(data, affine), (data1, affine)] npt.assert_equal(check_img_shapes(images), (True, False)) data = 255 * rng.random((197, 233, 189, 15)) data1 = 255 * rng.random((198, 233, 189, 14)) images = [(data, affine), (data1, affine)] npt.assert_equal(check_img_shapes(images), (False, False)) data = 255 * rng.random((197, 233, 189, 15)) data1 = 255 * rng.random((198, 233, 189, 15)) images = [(data, affine), (data1, affine)] 
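    # Same number of volumes but mismatched spatial shapes: the images can
    # neither be aligned nor synced.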
npt.assert_equal(check_img_shapes(images), (False, False)) @set_random_number_generator() def test_check_img_dtype(rng): affine = np.array( [ [1.0, 0.0, 0.0, -98.0], [0.0, 1.0, 0.0, -134.0], [0.0, 0.0, 1.0, -72.0], [0.0, 0.0, 0.0, 1.0], ] ) data = 255 * rng.random((197, 233, 189)) images = [ (data, affine), ] npt.assert_equal(check_img_dtype(images)[0], images[0]) data = rng.random((5, 5, 5)).astype(np.int64) images = [ (data, affine), ] npt.assert_equal(check_img_dtype(images)[0][0].dtype, np.int32) assert len(check_img_dtype(images)) == 1 data = rng.random((5, 5, 5)).astype(np.float16) images = [ (data, affine), ] npt.assert_equal(check_img_dtype(images)[0][0].dtype, np.float32) assert len(check_img_dtype(images)) == 1 data = rng.random((5, 5, 5)).astype(np.bool_) images = [ (data, affine), ] assert len(check_img_dtype(images)) == 0 def test_show_ellipsis(): text = "IAmALongFileName" text_size = 10 available_size = 5 result_text = f"...{text[-5:]}" npt.assert_equal(show_ellipsis(text, text_size, available_size), result_text) available_size = 12 npt.assert_equal(show_ellipsis(text, text_size, available_size), text) @set_random_number_generator() def test_unpack_surface(rng): vertices = rng.random((100, 4)) faces = rng.integers(0, 100, size=(100, 3)) with npt.assert_raises(ValueError): unpack_surface((vertices, faces)) vertices = rng.random((100, 3)) faces = rng.integers(0, 100, size=(100, 4)) with npt.assert_raises(ValueError): unpack_surface((vertices, faces, "/test/filename.pial")) vertices = rng.random((100, 3)) faces = rng.integers(0, 100, size=(100, 3)) v, f, fname = unpack_surface((vertices, faces, "/test/filename.pial")) npt.assert_equal(vertices, v) npt.assert_equal(faces, f) npt.assert_equal("/test/filename.pial", fname) @set_random_number_generator() def test_check_peak_size(rng): peak_dirs = rng.random((100, 100, 100, 10, 6)) pam = PeaksAndMetrics() pam.peak_dirs = peak_dirs npt.assert_equal(True, check_peak_size([pam])) npt.assert_equal(True, check_peak_size([pam, pam])) npt.assert_equal( False, check_peak_size([pam], ref_img_shape=(100, 100, 1), sync_imgs=True) ) npt.assert_equal( False, check_peak_size([pam], ref_img_shape=(100, 100, 100), sync_imgs=False) ) pam1 = PeaksAndMetrics() peak_dirs_1 = rng.random((100, 100, 50, 10, 6)) pam1.peak_dirs = peak_dirs_1 npt.assert_equal(False, check_peak_size([pam, pam1])) npt.assert_equal( False, check_peak_size([pam, pam1], ref_img_shape=(100, 100, 100), sync_imgs=True), ) npt.assert_equal( False, check_peak_size([pam, pam1], ref_img_shape=(100, 100, 100), sync_imgs=False), ) dipy-1.11.0/dipy/workflows/000077500000000000000000000000001476546756600155735ustar00rootroot00000000000000dipy-1.11.0/dipy/workflows/__init__.py000066400000000000000000000000001476546756600176720ustar00rootroot00000000000000dipy-1.11.0/dipy/workflows/align.py000066400000000000000000001151741476546756600172500ustar00rootroot00000000000000import logging from os.path import join as pjoin from warnings import warn import nibabel as nib import numpy as np from dipy.align import affine_registration, motion_correction from dipy.align.imaffine import AffineMap from dipy.align.imwarp import DiffeomorphicMap, SymmetricDiffeomorphicRegistration from dipy.align.metrics import CCMetric, EMMetric, SSDMetric from dipy.align.reslice import reslice from dipy.align.streamlinear import slr_with_qbx from dipy.align.streamwarp import bundlewarp from dipy.core.gradients import gradient_table, mask_non_weighted_bvals from dipy.io.gradients import read_bvals_bvecs from dipy.io.image 
import load_nifti, save_nifti, save_qa_metric
from dipy.tracking.streamline import set_number_of_points, transform_streamlines
from dipy.utils.optpkg import optional_package
from dipy.workflows.utils import handle_vol_idx
from dipy.workflows.workflow import Workflow

pd, have_pd, _ = optional_package("pandas")


def check_dimensions(static, moving):
    """Check the dimensions of the input images.

    Parameters
    ----------
    static : 2D or 3D array
        the image to be used as reference during optimization.
    moving : 2D or 3D array
        the image to be used as "moving" during optimization. It is
        necessary to pre-align the moving image to ensure its domain lies
        inside the domain of the deformation fields. This is assumed to be
        accomplished by "pre-aligning" the moving image towards the static
        using an affine transformation given by the 'starting_affine' matrix.

    """
    if len(static.shape) != len(moving.shape):
        raise ValueError(
            "Dimension mismatch: The input images must "
            "have same number of dimensions."
        )

    if len(static.shape) > 3 and len(moving.shape) > 3:
        raise ValueError(
            "Dimension mismatch: At least one of the inputs should be 2D or 3D."
        )


class ResliceFlow(Workflow):
    @classmethod
    def get_short_name(cls):
        return "reslice"

    def run(
        self,
        input_files,
        new_vox_size,
        order=1,
        mode="constant",
        cval=0,
        num_processes=1,
        out_dir="",
        out_resliced="resliced.nii.gz",
    ):
        """Reslice data with new voxel resolution defined by ``new_vox_size``.

        Parameters
        ----------
        input_files : string
            Path to the input volumes. This path may contain wildcards to
            process multiple inputs at once.
        new_vox_size : variable float
            new voxel size.
        order : int, optional
            Order of interpolation, from 0 to 5, for resampling/reslicing:
            0 is nearest-neighbor, 1 is trilinear, etc. If you do not want
            any smoothing, 0 is the option you need.
        mode : string, optional
            Points outside the boundaries of the input are filled according
            to the given mode 'constant', 'nearest', 'reflect' or 'wrap'.
        cval : float, optional
            Value used for points outside the boundaries of the input if
            mode='constant'.
        num_processes : int, optional
            Split the calculation to a pool of children processes. This only
            applies to 4D `data` arrays. Default is 1. If < 0 the maximal
            number of cores minus ``num_processes + 1`` is used (enter -1 to
            use as many cores as possible). 0 raises an error.
        out_dir : string, optional
            Output directory.
        out_resliced : string, optional
            Name of the resliced dataset to be saved.
        """
        io_it = self.get_io_iterator()

        for inputfile, outpfile in io_it:
            data, affine, vox_sz = load_nifti(inputfile, return_voxsize=True)
            logging.info(f"Processing {inputfile}")
            new_data, new_affine = reslice(
                data,
                affine,
                vox_sz,
                new_vox_size,
                order=order,
                mode=mode,
                cval=cval,
                num_processes=num_processes,
            )
            save_nifti(outpfile, new_data, new_affine)
            logging.info(f"Resliced file saved in {outpfile}")


class SlrWithQbxFlow(Workflow):
    @classmethod
    def get_short_name(cls):
        return "slrwithqbx"

    def run(
        self,
        static_files,
        moving_files,
        x0="affine",
        rm_small_clusters=50,
        qbx_thr=(40, 30, 20, 15),
        num_threads=None,
        greater_than=50,
        less_than=250,
        nb_pts=20,
        progressive=True,
        out_dir="",
        out_moved="moved.trk",
        out_affine="affine.txt",
        out_stat_centroids="static_centroids.trk",
        out_moving_centroids="moving_centroids.trk",
        out_moved_centroids="moved_centroids.trk",
    ):
        """Streamline-based linear registration.

        For efficiency we apply the registration on cluster centroids and
        remove small clusters.
        See :footcite:p:`Garyfallidis2014b`, :footcite:p:`Garyfallidis2015`,
        :footcite:p:`Garyfallidis2018` for further details.

        Parameters
        ----------
        static_files : string
            List of reference/fixed bundle tractograms.
        moving_files : string
            List of target bundle tractograms that will be moved/registered
            to match the static bundles.
        x0 : string, optional
            rigid, similarity or affine transformation model.
        rm_small_clusters : int, optional
            Remove clusters that have less than `rm_small_clusters`.
        qbx_thr : variable int, optional
            Thresholds for QuickBundlesX.
        num_threads : int, optional
            Number of threads to be used for OpenMP parallelization. If None
            (default) the value of OMP_NUM_THREADS environment variable is
            used if it is set, otherwise all available threads are used. If
            < 0 the maximal number of threads minus ``num_threads + 1`` is
            used (enter -1 to use as many threads as possible). 0 raises an
            error. Only metrics using OpenMP will use this variable.
        greater_than : int, optional
            Keep streamlines that have length greater than this value.
        less_than : int, optional
            Keep streamlines that have length less than this value.
        nb_pts : int, optional
            Number of points for discretizing each streamline.
        progressive : boolean, optional
            True to enable progressive registration.
        out_dir : string, optional
            Output directory.
        out_moved : string, optional
            Filename of moved tractogram.
        out_affine : string, optional
            Filename of affine for SLR transformation.
        out_stat_centroids : string, optional
            Filename of static centroids.
        out_moving_centroids : string, optional
            Filename of moving centroids.
        out_moved_centroids : string, optional
            Filename of moved centroids.

        Notes
        -----
        The order of operations is the following. First short or long
        streamlines are removed. Second the tractogram or a random selection
        of the tractogram is clustered with QuickBundlesX. Then SLR
        :footcite:p:`Garyfallidis2015` is applied.

        References
        ----------
        ..
footbibliography:: """ io_it = self.get_io_iterator() logging.info("QuickBundlesX clustering is in use") logging.info(f"QBX thresholds {qbx_thr}") for ( static_file, moving_file, out_moved_file, out_affine_file, static_centroids_file, moving_centroids_file, moved_centroids_file, ) in io_it: logging.info(f"Loading static file {static_file}") logging.info(f"Loading moving file {moving_file}") static_obj = nib.streamlines.load(static_file) moving_obj = nib.streamlines.load(moving_file) static, static_header = static_obj.streamlines, static_obj.header moving, moving_header = moving_obj.streamlines, moving_obj.header moved, affine, centroids_static, centroids_moving = slr_with_qbx( static, moving, x0=x0, rm_small_clusters=rm_small_clusters, greater_than=greater_than, less_than=less_than, qbx_thr=qbx_thr, ) logging.info(f"Saving output file {out_moved_file}") new_tractogram = nib.streamlines.Tractogram( moved, affine_to_rasmm=np.eye(4) ) nib.streamlines.save(new_tractogram, out_moved_file, header=moving_header) logging.info(f"Saving output file {out_affine_file}") np.savetxt(out_affine_file, affine) logging.info(f"Saving output file {static_centroids_file}") new_tractogram = nib.streamlines.Tractogram( centroids_static, affine_to_rasmm=np.eye(4) ) nib.streamlines.save( new_tractogram, static_centroids_file, header=static_header ) logging.info(f"Saving output file {moving_centroids_file}") new_tractogram = nib.streamlines.Tractogram( centroids_moving, affine_to_rasmm=np.eye(4) ) nib.streamlines.save( new_tractogram, moving_centroids_file, header=moving_header ) centroids_moved = transform_streamlines(centroids_moving, affine) logging.info(f"Saving output file {moved_centroids_file}") new_tractogram = nib.streamlines.Tractogram( centroids_moved, affine_to_rasmm=np.eye(4) ) nib.streamlines.save( new_tractogram, moved_centroids_file, header=moving_header ) class ImageRegistrationFlow(Workflow): """ The registration workflow allows the user to use only one type of registration (such as center of mass or rigid body registration only). Alternatively, a registration can be done in a progressive manner. For example, using affine registration with progressive set to 'True' will involve center of mass, translation, rigid body and full affine registration. Whereas, when progressive is False the registration will include only center of mass and affine registration. The progressive registration will be slower but will improve the quality. This can be controlled by using the progressive flag (True by default). """ def run( self, static_image_files, moving_image_files, transform="affine", nbins=32, sampling_prop=None, metric="mi", level_iters=(10000, 1000, 100), sigmas=(3.0, 1.0, 0.0), factors=(4, 2, 1), progressive=True, save_metric=False, static_vol_idx=None, moving_vol_idx=None, out_dir="", out_moved="moved.nii.gz", out_affine="affine.txt", out_quality="quality_metric.txt", ): """ Parameters ---------- static_image_files : string Path to the static image file. moving_image_files : string Path to the moving image file. transform : string, optional ``'com'``: center of mass; ``'trans'``: translation; ``'rigid'``: rigid body; ``'rigid_isoscaling'``: rigid body + isotropic scaling, ``'rigid_scaling'``: rigid body + scaling; ``'affine'``: full affine including translation, rotation, shearing and scaling. nbins : int, optional Number of bins to discretize the joint and marginal PDF. sampling_prop : int, optional Number ([0-100]) of voxels for calculating the PDF. None implies all voxels. 
metric : string, optional Similarity metric for gathering mutual information. level_iters : variable int, optional The number of iterations at each scale of the scale space. `level_iters[0]` corresponds to the coarsest scale and `level_iters[-1]` to the finest. sigmas : variable floats, optional Custom smoothing parameter to build the scale space (one parameter for each scale). factors : variable floats, optional Custom scale factors to build the scale space (one factor for each scale). progressive : boolean, optional Enable/Disable the progressive registration. save_metric : boolean, optional If true, the quality assessment metric is saved in 'quality_metric.txt'. static_vol_idx : str, optional 1D array representing indices of ``axis=-1`` of a 4D `static` input volume. From the command line use something like `3 4 5 6`. From script use something like `[3, 4, 5, 6]`. This input is required for 4D volumes. moving_vol_idx : str, optional 1D array representing indices of ``axis=-1`` of a 4D `moving` input volume. From the command line use something like `3 4 5 6`. From script use something like `[3, 4, 5, 6]`. This input is required for 4D volumes. out_dir : string, optional Directory to save the transformed image and the affine matrix. out_moved : string, optional Name for the saved transformed image. out_affine : string, optional Name for the saved affine matrix. out_quality : string, optional Name of the file containing the saved quality metric. """ io_it = self.get_io_iterator() transform = transform.lower() metric = metric.upper() if metric != "MI": raise ValueError( "Invalid similarity metric: Please provide a valid metric." ) if progressive: pipeline_opt = { "com": ["center_of_mass"], "trans": ["center_of_mass", "translation"], "rigid": ["center_of_mass", "translation", "rigid"], "rigid_isoscaling": [ "center_of_mass", "translation", "rigid_isoscaling", ], "rigid_scaling": ["center_of_mass", "translation", "rigid_scaling"], "affine": ["center_of_mass", "translation", "rigid", "affine"], } else: pipeline_opt = { "com": ["center_of_mass"], "trans": ["center_of_mass", "translation"], "rigid": ["center_of_mass", "rigid"], "rigid_isoscaling": ["center_of_mass", "rigid_isoscaling"], "rigid_scaling": ["center_of_mass", "rigid_scaling"], "affine": ["center_of_mass", "affine"], } static_vol_idx = handle_vol_idx(static_vol_idx) moving_vol_idx = handle_vol_idx(moving_vol_idx) pipeline = pipeline_opt.get(transform) if pipeline is None: raise ValueError( "Invalid transformation:" " Please see program's help" " for allowed values of" " transformation." ) for static_img, mov_img, moved_file, affine_matrix_file, qual_val_file in io_it: # Load the data from the input files and store into objects.
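# Note: 4D inputs are collapsed to 3D below by averaging the volumes selected via static_vol_idx/moving_vol_idx along the last axis.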
static, static_grid2world = load_nifti(static_img) moving, moving_grid2world = load_nifti(mov_img) if static_vol_idx is not None: static = static[..., static_vol_idx].mean(axis=-1) if moving_vol_idx is not None: moving = moving[..., moving_vol_idx].mean(axis=-1) check_dimensions(static, moving) starting_affine = None # If only center of mass is selected do not return metric if pipeline == ["center_of_mass"]: moved_image, affine_matrix = affine_registration( moving, static, moving_affine=moving_grid2world, static_affine=static_grid2world, pipeline=pipeline, starting_affine=starting_affine, metric=metric, level_iters=level_iters, sigmas=sigmas, factors=factors, nbins=nbins, sampling_proportion=sampling_prop, ) else: moved_image, affine_matrix, xopt, fopt = affine_registration( moving, static, moving_affine=moving_grid2world, static_affine=static_grid2world, pipeline=pipeline, starting_affine=starting_affine, metric=metric, level_iters=level_iters, sigmas=sigmas, factors=factors, ret_metric=True, nbins=nbins, sampling_proportion=sampling_prop, ) """ Saving the moved image file and the affine matrix. """ logging.info(f"Optimal parameters: {str(xopt)}") logging.info(f"Similarity metric: {str(fopt)}") if save_metric: save_qa_metric(qual_val_file, xopt, fopt) save_nifti(moved_file, moved_image, static_grid2world) np.savetxt(affine_matrix_file, affine_matrix) class ApplyTransformFlow(Workflow): def run( self, static_image_files, moving_image_files, transform_map_file, transform_type="affine", out_dir="", out_file="transformed.nii.gz", ): """ Parameters ---------- static_image_files : string Path of the static image file. moving_image_files : string Path of the moving image(s). It can be a single image or a folder containing multiple images. transform_map_file : string For the affine case, it should be a text (``*.txt``) file containing the affine matrix. For the diffeomorphic case, it should be a nifti file containing the mapping displacement field in each voxel with this shape (x, y, z, 3, 2). transform_type : string, optional Select the transformation type to apply between 'affine' or 'diffeomorphic'. out_dir : string, optional Directory to save the transformed files. out_file : string, optional Name of the transformed file. It is recommended to use the flag --mix_names to prevent the output files from being overwritten. """ if transform_type.lower() not in ["affine", "diffeomorphic"]: raise ValueError( "Invalid transformation type: Please" " provide a valid transform like 'affine'" " or 'diffeomorphic'" ) io = self.get_io_iterator() for static_image_file, moving_image_file, transform_file, out_file in io: # Loading the image data from the input files into objects. static_image, static_grid2world = load_nifti(static_image_file) moving_image, moving_grid2world = load_nifti(moving_image_file) # Doing a sanity check for validating the dimensions of the input # images. if static_image.ndim > moving_image.ndim: static_image = static_image[..., 0] if static_image.ndim < moving_image.ndim: moving_image_full = moving_image moving_image = moving_image[..., 0] else: moving_image_full = None check_dimensions(static_image, moving_image) if transform_type.lower() == "affine": # Loading the affine matrix. affine_matrix = np.loadtxt(transform_file) # Setting up the affine transformation object.
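# The static image defines the domain (output grid) and the moving image the codomain, so mapping.transform(moving_image) below resamples the moving image onto the static grid.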
mapping = AffineMap( affine=affine_matrix, domain_grid_shape=static_image.shape, domain_grid2world=static_grid2world, codomain_grid_shape=moving_image.shape, codomain_grid2world=moving_grid2world, ) elif transform_type.lower() == "diffeomorphic": # Loading the diffeomorphic map. disp_data, disp_affine = load_nifti(transform_file) mapping = DiffeomorphicMap( 3, disp_data.shape[:3], disp_grid2world=np.linalg.inv(disp_affine), domain_shape=static_image.shape, domain_grid2world=static_grid2world, codomain_shape=moving_image.shape, codomain_grid2world=moving_grid2world, ) mapping.forward = disp_data[..., 0] mapping.backward = disp_data[..., 1] mapping.is_inverse = True # Transforming the image if moving_image_full is None: transformed = mapping.transform(moving_image) else: transformed = np.concatenate( [ mapping.transform(moving_image)[..., None] for moving_image in moving_image_full ], axis=-1, ) save_nifti(out_file, transformed, affine=static_grid2world) class SynRegistrationFlow(Workflow): def run( self, static_image_files, moving_image_files, prealign_file="", inv_static=False, level_iters=(10, 10, 5), metric="cc", mopt_sigma_diff=2.0, mopt_radius=4, mopt_smooth=0.0, mopt_inner_iter=0, mopt_q_levels=256, mopt_double_gradient=True, mopt_step_type="", step_length=0.25, ss_sigma_factor=0.2, opt_tol=1e-5, inv_iter=20, inv_tol=1e-3, out_dir="", out_warped="warped_moved.nii.gz", out_inv_static="inc_static.nii.gz", out_field="displacement_field.nii.gz", ): """ Parameters ---------- static_image_files : string Path of the static image file. moving_image_files : string Path to the moving image file. prealign_file : string, optional The text file containing pre alignment information via an affine matrix. inv_static : boolean, optional Apply the inverse mapping to the static image. level_iters : variable int, optional The number of iterations at each level of the gaussian pyramid. metric : string, optional The metric to be used. metric available: cc (Cross Correlation), ssd (Sum Squared Difference), em (Expectation-Maximization). mopt_sigma_diff : float, optional Metric option applied on Cross correlation (CC). The standard deviation of the Gaussian smoothing kernel to be applied to the update field at each iteration. mopt_radius : int, optional Metric option applied on Cross correlation (CC). the radius of the squared (cubic) neighborhood at each voxel to be considered to compute the cross correlation. mopt_smooth : float, optional Metric option applied on Sum Squared Difference (SSD) and Expectation Maximization (EM). Smoothness parameter, the larger the value the smoother the deformation field. (default 1.0 for EM, 4.0 for SSD) mopt_inner_iter : int, optional Metric option applied on Sum Squared Difference (SSD) and Expectation Maximization (EM). This is number of iterations to be performed at each level of the multi-resolution Gauss-Seidel optimization algorithm (this is not the number of steps per Gaussian Pyramid level, that parameter must be set for the optimizer, not the metric). Default 5 for EM, 10 for SSD. mopt_q_levels : int, optional Metric option applied on Expectation Maximization (EM). Number of quantization levels (Default: 256 for EM) mopt_double_gradient : bool, optional Metric option applied on Expectation Maximization (EM). if True, the gradient of the expected static image under the moving modality will be added to the gradient of the moving image, similarly, the gradient of the expected moving image under the static modality will be added to the gradient of the static image. 
mopt_step_type : string, optional Metric option applied on Sum Squared Difference (SSD) and Expectation Maximization (EM). The optimization schedule to be used in the multi-resolution Gauss-Seidel optimization algorithm (not used if Demons Step is selected). Possible value: ('gauss_newton', 'demons'). default: 'gauss_newton' for EM, 'demons' for SSD. step_length : float, optional the length of the maximum displacement vector of the update displacement field at each iteration. ss_sigma_factor : float, optional parameter of the scale-space smoothing kernel. For example, the std. dev. of the kernel will be factor*(2^i) in the isotropic case where i = 0, 1, ..., n_scales is the scale. opt_tol : float, optional the optimization will stop when the estimated derivative of the energy profile w.r.t. time falls below this threshold. inv_iter : int, optional the number of iterations to be performed by the displacement field inversion algorithm. inv_tol : float, optional the displacement field inversion algorithm will stop iterating when the inversion error falls below this threshold. out_dir : string, optional Directory to save the transformed files. out_warped : string, optional Name of the warped file. out_inv_static : string, optional Name of the file to save the static image after applying the inverse mapping. out_field : string, optional Name of the file to save the diffeomorphic map. """ io_it = self.get_io_iterator() metric = metric.lower() if metric not in ["ssd", "cc", "em"]: raise ValueError( "Invalid similarity metric: Please" " provide a valid metric like 'ssd', 'cc', 'em'" ) logging.info("Starting Diffeomorphic Registration") logging.info(f"Using {metric.upper()} Metric") # Init parameter if they are not setup init_param = { "ssd": { "mopt_smooth": 4.0, "mopt_inner_iter": 10, "mopt_step_type": "demons", }, "em": { "mopt_smooth": 1.0, "mopt_inner_iter": 5, "mopt_step_type": "gauss_newton", }, } mopt_smooth = ( mopt_smooth if mopt_smooth or metric == "cc" else init_param[metric]["mopt_smooth"] ) mopt_inner_iter = ( mopt_inner_iter if mopt_inner_iter or metric == "cc" else init_param[metric]["mopt_inner_iter"] ) # If using the 'cc' metric, force the `mopt_step_type` parameter to an # empty value since the 'cc' metric does not use it; for the rest of # the metrics, the `step_type` parameter will be initialized to their # corresponding default values in `init_param`. if metric == "cc": mopt_step_type = "" for ( static_file, moving_file, owarped_file, oinv_static_file, omap_file, ) in io_it: logging.info(f"Loading static file {static_file}") logging.info(f"Loading moving file {moving_file}") # Loading the image data from the input files into object. static_image, static_grid2world = load_nifti(static_file) moving_image, moving_grid2world = load_nifti(moving_file) # Sanity check for the input image dimensions. check_dimensions(static_image, moving_image) # Loading the affine matrix. prealign = np.loadtxt(prealign_file) if prealign_file else None # Note that `step_type` is initialized to the default value in # `init_param` for the metric that was not specified as a # parameter or if the `mopt_step_type` is empty. 
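# Build one candidate metric per supported name; only the entry matching `metric` is used. For 'ssd' and 'em', an empty mopt_step_type falls back to the metric's default step type from init_param above.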
l_metric = { "ssd": SSDMetric( static_image.ndim, smooth=mopt_smooth, inner_iter=mopt_inner_iter, step_type=mopt_step_type if (mopt_step_type and mopt_step_type.strip()) and metric == "ssd" else init_param["ssd"]["mopt_step_type"], ), "cc": CCMetric( static_image.ndim, sigma_diff=mopt_sigma_diff, radius=mopt_radius ), "em": EMMetric( static_image.ndim, smooth=mopt_smooth, inner_iter=mopt_inner_iter, step_type=mopt_step_type if (mopt_step_type and mopt_step_type.strip()) and metric == "em" else init_param["em"]["mopt_step_type"], q_levels=mopt_q_levels, double_gradient=mopt_double_gradient, ), } current_metric = l_metric.get(metric.lower()) sdr = SymmetricDiffeomorphicRegistration( metric=current_metric, level_iters=level_iters, step_length=step_length, ss_sigma_factor=ss_sigma_factor, opt_tol=opt_tol, inv_iter=inv_iter, inv_tol=inv_tol, ) mapping = sdr.optimize( static_image, moving_image, static_grid2world=static_grid2world, moving_grid2world=moving_grid2world, prealign=prealign, ) mapping_data = np.array([mapping.forward.T, mapping.backward.T]).T warped_moving = mapping.transform(moving_image) mapping.is_inverse = True inv_static = mapping.transform(static_image) # Saving logging.info(f"Saving warped {owarped_file}") save_nifti(owarped_file, warped_moving, static_grid2world) logging.info(f"Saving inverse transformed static {oinv_static_file}") save_nifti(oinv_static_file, inv_static, static_grid2world) logging.info(f"Saving Diffeomorphic map {omap_file}") save_nifti(omap_file, mapping_data, mapping.codomain_world2grid) class MotionCorrectionFlow(Workflow): """ The Motion Correction workflow allows the user to align the volumes of a DWI dataset, correcting for between-volume motion. """ def run( self, input_files, bvalues_files, bvectors_files, b0_threshold=50, bvecs_tol=0.01, out_dir="", out_moved="moved.nii.gz", out_affine="affine.txt", ): """ Parameters ---------- input_files : string Path to the input volumes. This path may contain wildcards to process multiple inputs at once. bvalues_files : string Path to the bvalues files. This path may contain wildcards to use multiple bvalues files at once. bvectors_files : string Path to the bvectors files. This path may contain wildcards to use multiple bvectors files at once. b0_threshold : float, optional Threshold used to find b0 volumes. bvecs_tol : float, optional Threshold used to check that norm(bvec) = 1 +/- bvecs_tol, i.e., that b-vectors are unit vectors. out_dir : string, optional Directory to save the transformed image and the affine matrix. out_moved : string, optional Name for the saved transformed image. out_affine : string, optional Name for the saved affine matrix. """ io_it = self.get_io_iterator() for dwi, bval, bvec, omoved, oaffine in io_it: # Load the data from the input files and store into objects. logging.info(f"Loading {dwi}") data, affine = load_nifti(dwi) bvals, bvecs = read_bvals_bvecs(bval, bvec) # If all b-values are smaller than or equal to the b0 threshold, it is # assumed that no thresholding is requested if any(mask_non_weighted_bvals(bvals, b0_threshold)): if b0_threshold < bvals.min(): warn( f"b0_threshold (value: {b0_threshold}) is too low, " "increase your b0_threshold. 
It should be higher than the " f"lowest b0 value ({bvals.min()}).", stacklevel=2, ) gtab = gradient_table( bvals, bvecs=bvecs, b0_threshold=b0_threshold, atol=bvecs_tol ) reg_img, reg_affines = motion_correction( data=data, gtab=gtab, affine=affine ) # Saving the corrected image file save_nifti(omoved, reg_img.get_fdata(), affine) # Write the affine matrix array to disk with open(oaffine, "w") as outfile: outfile.write(f"# Array shape: {reg_affines.shape}\n") for affine_slice in reg_affines: np.savetxt(outfile, affine_slice, fmt="%-7.2f") outfile.write("# New slice\n") class BundleWarpFlow(Workflow): @classmethod def get_short_name(cls): return "bundlewarp" def run( self, static_file, moving_file, dist=None, alpha=0.3, beta=20, max_iter=15, affine=True, out_dir="", out_linear_moved="linearly_moved.trk", out_nonlinear_moved="nonlinearly_moved.trk", out_warp_transform="warp_transform.npy", out_warp_kernel="warp_kernel.npy", out_dist="distance_matrix.npy", out_matched_pairs="matched_pairs.npy", ): """BundleWarp: streamline-based nonlinear registration. BundleWarp :footcite:p:`Chandio2023` is a nonrigid method for the deformable registration of white matter tracts. Parameters ---------- static_file : string Path to the static (reference) .trk file. moving_file : string Path to the moving (target to be registered) .trk file. dist : string, optional Path to the precalculated distance matrix file. alpha : float, optional Represents the trade-off between regularizing the deformation and having points match very closely. A lower value of alpha allows larger deformations. It is represented by λ in the BundleWarp paper. NOTE: setting alpha<=0.01 will result in highly deformable registration that could severely modify the original anatomy of the moving bundle. beta : int, optional Represents the strength of the interaction between points, i.e., the Gaussian kernel size. max_iter : int, optional Maximum number of iterations for the deformation process in the ml-CPD method. affine : boolean, optional If False, use rigid registration as the starting point. (default True) out_dir : string, optional Output directory. out_linear_moved : string, optional Filename of linearly moved bundle. out_nonlinear_moved : string, optional Filename of nonlinearly moved (warped) bundle. out_warp_transform : string, optional Filename of warp transformations generated by BundleWarp. out_warp_kernel : string, optional Filename of regularization Gaussian kernel generated by BundleWarp. out_dist : string, optional Filename of MDF distance matrix. out_matched_pairs : string, optional Filename of matched pairs; streamline correspondences between two bundles. References ---------- ..
footbibliography:: """ logging.info(f"Loading static file {static_file}") logging.info(f"Loading moving file {moving_file}") static_obj = nib.streamlines.load(static_file) moving_obj = nib.streamlines.load(moving_file) static, _ = static_obj.streamlines, static_obj.header moving, moving_header = moving_obj.streamlines, moving_obj.header static = set_number_of_points(static, nb_points=20) moving = set_number_of_points(moving, nb_points=20) deformed_bundle, affine_bundle, dists, mp, warp = bundlewarp( static, moving, dist=dist, alpha=alpha, beta=beta, max_iter=max_iter, affine=affine, ) logging.info(f"Saving output file {out_linear_moved}") new_tractogram = nib.streamlines.Tractogram( affine_bundle, affine_to_rasmm=np.eye(4) ) nib.streamlines.save( new_tractogram, pjoin(out_dir, out_linear_moved), header=moving_header ) logging.info(f"Saving output file {out_nonlinear_moved}") new_tractogram = nib.streamlines.Tractogram( deformed_bundle, affine_to_rasmm=np.eye(4) ) nib.streamlines.save( new_tractogram, pjoin(out_dir, out_nonlinear_moved), header=moving_header ) logging.info(f"Saving output file {out_warp_transform}") np.save(pjoin(out_dir, out_warp_transform), np.array(warp["transforms"])) logging.info(f"Saving output file {out_warp_kernel}") np.save(pjoin(out_dir, out_warp_kernel), np.array(warp["gaussian_kernel"])) logging.info(f"Saving output file {out_dist}") np.save(pjoin(out_dir, out_dist), dist) logging.info(f"Saving output file {out_matched_pairs}") np.save(pjoin(out_dir, out_matched_pairs), mp) dipy-1.11.0/dipy/workflows/base.py000066400000000000000000000302111476546756600170540ustar00rootroot00000000000000import argparse import inspect from dipy.workflows.docstring_parser import NumpyDocString def add_default_args_to_docstring(npds, func): """Add default arguments to the docstring of a function. Parameters ---------- npds : list List of parameters from the docstring. func : function Function to inspect. """ signature = inspect.signature(func) defaults = { param.name: param.default for param in signature.parameters.values() if param.default is not param.empty } if defaults.get("out_dir") in ["", " ", "."]: defaults["out_dir"] = " current directory" for i, param in enumerate(npds["Parameters"]): if param[0] not in defaults: continue param[2].append(f"(default: {defaults[param[0]]})") npds["Parameters"][i] = (param[0], param[1], param[2]) def get_args_default(func): sig_object = inspect.signature(func) params = sig_object.parameters.values() names = [param.name for param in params if param.name != "self"] defaults = [ param.default for param in params if param.default is not inspect._empty ] return names, defaults def none_or_dtype(dtype): """Check None presence before type casting.""" local_type = dtype def inner(value): if value in ["None", "none"]: return "None" return local_type(value) return inner class IntrospectiveArgumentParser(argparse.ArgumentParser): def __init__( self, prog=None, usage=None, description=None, epilog=None, parents=(), formatter_class=argparse.RawTextHelpFormatter, prefix_chars="-", fromfile_prefix_chars=None, argument_default=None, conflict_handler="resolve", add_help=True, ): """Augmenting the argument parser to allow automatic creation of arguments from workflows Parameters ---------- prog : None The name of the program. (default: sys.argv[0]) usage : None A usage message. (default: auto-generated from arguments) description : str A description of what the program does. epilog : str Text following the argument descriptions. 
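(When a workflow whose docstring has a References section is added via add_workflow, those references are appended to this epilog.)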
parents : list Parsers whose arguments should be copied into this one. formatter_class : obj HelpFormatter class for printing help messages. prefix_chars : str Characters that prefix optional arguments. fromfile_prefix_chars : None Characters that prefix files containing additional arguments. argument_default : None The default value for all arguments. conflict_handler : str String indicating how to handle conflicts. add_help : bool Add a -h/--help option. """ iap = IntrospectiveArgumentParser if epilog is None: epilog = ( "References: \n" "Garyfallidis, E., M. Brett, B. Amirbekian, A. Rokem," " S. Van Der Walt, M. Descoteaux, and I. Nimmo-Smith. Dipy, a" " library for the analysis of diffusion MRI data. Frontiers" " in Neuroinformatics, 1-18, 2014." ) super(iap, self).__init__( prog=prog, usage=usage, description=description, epilog=epilog, parents=parents, formatter_class=formatter_class, prefix_chars=prefix_chars, fromfile_prefix_chars=fromfile_prefix_chars, argument_default=argument_default, conflict_handler=conflict_handler, add_help=add_help, ) self.doc = None def add_workflow(self, workflow): """Take a workflow object and use introspection to extract the parameters, types and docstrings of its run method. Then add these parameters to the current argparser's own params to parse. If the workflow is of type combined_workflow, the optional input parameters of its sub workflows will also be added. Parameters ---------- workflow : dipy.workflows.workflow.Workflow Workflow from which to infer parameters. Returns ------- sub_flow_optionals : dictionary of all sub workflow optional parameters """ doc = inspect.getdoc(workflow.run) npds = NumpyDocString(doc) add_default_args_to_docstring(npds, workflow.run) self.doc = npds["Parameters"] self.description = ( f"{' '.join(npds['Summary'])}\n\n{' '.join(npds['Extended Summary'])}" ) if npds["References"]: ref_text = [text or "\n" for text in npds["References"]] ref_idx = self.epilog.find("References: \n") + len("References: \n") self.epilog = ( f"{self.epilog[:ref_idx]}{''.join(ref_text)}\n\n{self.epilog[ref_idx:]}" ) self._output_params = [ param for param in npds["Parameters"] if "out_" in param[0] ] self._positional_params = [ param for param in npds["Parameters"] if "optional" not in param[1] and "out_" not in param[0] ] self._optional_params = [ param for param in npds["Parameters"] if "optional" in param[1] ] args, defaults = get_args_default(workflow.run) output_args = self.add_argument_group("output arguments(optional)") len_args = len(args) len_defaults = len(defaults) nb_positional_variable = 0 if len_args != len(self.doc): raise ValueError( self.prog + ": Number of parameters in the " "doc string and run method does not match. " "Please ensure that the number of parameters " "in the run method is the same as in the doc string. 
" ) for i, arg in enumerate(args): prefix = "" is_optional = i >= len_args - len_defaults if is_optional: prefix = "--" typestr = self.doc[i][1] dtype, isnarg = self._select_dtype(typestr) help_msg = " ".join(self.doc[i][2]) _args = [f"{prefix}{arg}"] _kwargs = {"help": help_msg, "type": dtype, "action": "store"} if is_optional: _kwargs["metavar"] = dtype.__name__ if dtype is bool: _kwargs["action"] = "store_true" default_ = {arg: False} self.set_defaults(**default_) del _kwargs["type"] del _kwargs["metavar"] elif dtype is bool: _kwargs["type"] = int _kwargs["choices"] = [0, 1] if dtype is tuple: _kwargs["type"] = str if isnarg: if is_optional: _kwargs["nargs"] = "*" else: _kwargs["nargs"] = "+" nb_positional_variable += 1 if "out_" in arg: output_args.add_argument(*_args, **_kwargs) else: if _kwargs["action"] != "store_true": _kwargs["type"] = none_or_dtype(_kwargs["type"]) self.add_argument(*_args, **_kwargs) if nb_positional_variable > 1: raise ValueError( self.prog + " : All positional arguments present" " are gathered into a list. It does not make " "much sense to have more than one positional" " argument with 'variable string' as dtype." " Please, ensure that 'variable (type)'" " appears only once as a positional argument." ) return self.add_sub_flow_args(workflow.get_sub_runs()) def add_sub_flow_args(self, sub_flows): """Take an array of workflow objects and use introspection to extract the parameters, types and docstrings of their run method. Only the optional input parameters are extracted for these as they are treated as sub workflows. Parameters ---------- sub_flows : array of dipy.workflows.workflow.Workflow Workflows to inspect. Returns ------- sub_flow_optionals : dictionary of all sub workflow optional parameters """ sub_flow_optionals = {} for name, flow, short_name in sub_flows: sub_flow_optionals[name] = {} doc = inspect.getdoc(flow) npds = NumpyDocString(doc) _doc = npds["Parameters"] args, defaults = get_args_default(flow) len_args = len(args) len_defaults = len(defaults) flow_args = self.add_argument_group(f"{name} arguments(optional)") for i, arg_name in enumerate(args): is_not_optional = i < len_args - len_defaults if "out_" in arg_name or is_not_optional: continue arg_name = f"{short_name}.{arg_name}" sub_flow_optionals[name][arg_name] = None prefix = "--" typestr = _doc[i][1] dtype, isnarg = self._select_dtype(typestr) help_msg = "".join(_doc[i][2]) _args = [f"{prefix}{arg_name}"] _kwargs = { "help": help_msg, "type": dtype, "action": "store", "metavar": dtype.__name__, } if dtype is bool: _kwargs["action"] = "store_true" default_ = {arg_name: False} self.set_defaults(**default_) del _kwargs["type"] del _kwargs["metavar"] elif dtype is bool: _kwargs["type"] = int _kwargs["choices"] = [0, 1] if dtype is tuple: _kwargs["type"] = str if isnarg: _kwargs["nargs"] = "*" if _kwargs["action"] != "store_true": _kwargs["type"] = none_or_dtype(_kwargs["type"]) flow_args.add_argument(*_args, **_kwargs) return sub_flow_optionals def _select_dtype(self, text): """Analyses a docstring parameter line and returns the appropriate argparse type. Parameters ---------- text : string Parameter text line to inspect. Returns ------- arg_type : The type found by inspecting the text line. 
is_nargs : Whether or not this argument is nargs (argparse's multiple values argument) """ text = text.lower() nargs_str = "variable" is_nargs = nargs_str in text arg_type = None if "str" in text: arg_type = str if "int" in text: arg_type = int if "float" in text: arg_type = float if "bool" in text: arg_type = bool if "tuple" in text: arg_type = tuple return arg_type, is_nargs def get_flow_args(self, args=None, namespace=None): """Return the parsed arguments as a dictionary that will be used as a workflow's run method arguments. """ ns_args = self.parse_args(args, namespace) dct = vars(ns_args) res = {k: v for k, v in dct.items() if v is not None} res.update({k: None for k, v in res.items() if v == "None"}) return res def update_argument(self, *args, **kargs): self.add_argument(*args, **kargs) def show_argument(self, dest): for act in self._actions[1:]: if act.dest == dest: print(act) def add_epilogue(self): pass def add_description(self): pass @property def output_parameters(self): return self._output_params @property def positional_parameters(self): return self._positional_params @property def optional_parameters(self): return self._optional_params dipy-1.11.0/dipy/workflows/cli.py000066400000000000000000000130431476546756600167150ustar00rootroot00000000000000#!python import logging import os import sys from dipy.utils.optpkg import optional_package from dipy.workflows.flow_runner import run_flow cli_flows = { "dipy_align_affine": ("dipy.workflows.align", "ImageRegistrationFlow"), "dipy_align_syn": ("dipy.workflows.align", "SynRegistrationFlow"), "dipy_apply_transform": ("dipy.workflows.align", "ApplyTransformFlow"), "dipy_buan_lmm": ("dipy.workflows.stats", "LinearMixedModelsFlow"), "dipy_buan_shapes": ("dipy.workflows.stats", "BundleShapeAnalysis"), "dipy_buan_profiles": ("dipy.workflows.stats", "BundleAnalysisTractometryFlow"), "dipy_bundlewarp": ("dipy.workflows.align", "BundleWarpFlow"), "dipy_classify_tissue": ("dipy.workflows.segment", "ClassifyTissueFlow"), "dipy_correct_motion": ("dipy.workflows.align", "MotionCorrectionFlow"), "dipy_correct_biasfield": ("dipy.workflows.nn", "BiasFieldCorrectionFlow"), "dipy_concatenate_tractograms": ("dipy.workflows.io", "ConcatenateTractogramFlow"), "dipy_convert_tractogram": ("dipy.workflows.io", "ConvertTractogramFlow"), "dipy_convert_tensors": ("dipy.workflows.io", "ConvertTensorsFlow"), "dipy_convert_sh": ("dipy.workflows.io", "ConvertSHFlow"), "dipy_denoise_nlmeans": ("dipy.workflows.denoise", "NLMeansFlow"), "dipy_denoise_lpca": ("dipy.workflows.denoise", "LPCAFlow"), "dipy_denoise_mppca": ("dipy.workflows.denoise", "MPPCAFlow"), "dipy_denoise_patch2self": ("dipy.workflows.denoise", "Patch2SelfFlow"), "dipy_evac_plus": ("dipy.workflows.nn", "EVACPlusFlow"), "dipy_fetch": ("dipy.workflows.io", "FetchFlow"), "dipy_fit_csa": ("dipy.workflows.reconst", "ReconstQBallBaseFlow"), "dipy_fit_csd": ("dipy.workflows.reconst", "ReconstCSDFlow"), "dipy_fit_dki": ("dipy.workflows.reconst", "ReconstDkiFlow"), "dipy_fit_dti": ("dipy.workflows.reconst", "ReconstDtiFlow"), "dipy_fit_dsi": ("dipy.workflows.reconst", "ReconstDsiFlow"), "dipy_fit_dsid": ("dipy.workflows.reconst", "ReconstDsiFlow"), "dipy_fit_forecast": ("dipy.workflows.reconst", "ReconstForecastFlow"), "dipy_fit_gqi": ("dipy.workflows.reconst", "ReconstGQIFlow"), "dipy_fit_ivim": ("dipy.workflows.reconst", "ReconstIvimFlow"), "dipy_fit_mapmri": ("dipy.workflows.reconst", "ReconstMAPMRIFlow"), "dipy_fit_opdt": ("dipy.workflows.reconst", "ReconstQBallBaseFlow"), "dipy_fit_qball": 
("dipy.workflows.reconst", "ReconstQBallBaseFlow"), "dipy_fit_sdt": ("dipy.workflows.reconst", "ReconstSDTFlow"), "dipy_fit_sfm": ("dipy.workflows.reconst", "ReconstSFMFlow"), "dipy_gibbs_ringing": ("dipy.workflows.denoise", "GibbsRingingFlow"), "dipy_horizon": ("dipy.workflows.viz", "HorizonFlow"), "dipy_info": ("dipy.workflows.io", "IoInfoFlow"), "dipy_extract_b0": ("dipy.workflows.io", "ExtractB0Flow"), "dipy_extract_shell": ("dipy.workflows.io", "ExtractShellFlow"), "dipy_extract_volume": ("dipy.workflows.io", "ExtractVolumeFlow"), "dipy_labelsbundles": ("dipy.workflows.segment", "LabelsBundlesFlow"), "dipy_math": ("dipy.workflows.io", "MathFlow"), "dipy_mask": ("dipy.workflows.mask", "MaskFlow"), "dipy_median_otsu": ("dipy.workflows.segment", "MedianOtsuFlow"), "dipy_nifti2pam": ("dipy.workflows.io", "NiftisToPamFlow"), "dipy_pam2nifti": ("dipy.workflows.io", "PamToNiftisFlow"), "dipy_recobundles": ("dipy.workflows.segment", "RecoBundlesFlow"), "dipy_reslice": ("dipy.workflows.align", "ResliceFlow"), "dipy_sh_convert_mrtrix": ("dipy.workflows.io", "ConvertSHFlow"), "dipy_slr": ("dipy.workflows.align", "SlrWithQbxFlow"), "dipy_snr_in_cc": ("dipy.workflows.stats", "SNRinCCFlow"), "dipy_split": ("dipy.workflows.io", "SplitFlow"), "dipy_tensor2pam": ("dipy.workflows.io", "TensorToPamFlow"), "dipy_track": ("dipy.workflows.tracking", "LocalFiberTrackingPAMFlow"), "dipy_track_pft": ("dipy.workflows.tracking", "PFTrackingPAMFlow"), } def run(): """Run scripts located in pyproject.toml.""" script_name = os.path.basename(sys.argv[0]) mod_name, flow_name = cli_flows.get(script_name, (None, None)) if mod_name is None: print(f"Flow: {script_name} not Found in DIPY") print(f"Available flows: {', '.join(cli_flows.keys())}") sys.exit(1) mod, _, _ = optional_package(mod_name) if script_name in ["dipy_sh_convert_mrtrix"]: logging.warning( "`dipy_sh_convert_mrtrix` CLI is deprecated since DIPY 1.11.0. It will be " "removed on later release. Please use the `dipy_convert_sh` CLI instead.", ) extra_args = {} if script_name == "dipy_fit_dsid": extra_args = { "remove_convolution": { "dest": "remove_convolution", "action": "store_true", "default": True, } } elif script_name == "dipy_fit_csa": extra_args = { "method": { "action": "store", "dest": "method", "metavar": "string", "default": "csa", } } elif script_name == "dipy_fit_opdt": extra_args = { "method": { "action": "store", "dest": "method", "metavar": "string", "default": "opdt", } } elif script_name == "dipy_fit_qball": extra_args = { "method": { "action": "store", "dest": "method", "metavar": "string", "default": "qball", } } run_flow(getattr(mod, flow_name)(), extra_args=extra_args) dipy-1.11.0/dipy/workflows/combined_workflow.py000066400000000000000000000042401476546756600216570ustar00rootroot00000000000000from dipy.testing.decorators import warning_for_keywords from dipy.workflows.workflow import Workflow class CombinedWorkflow(Workflow): @warning_for_keywords() def __init__( self, *, output_strategy="append", mix_names=False, force=False, skip=False ): """Workflow that combines multiple workflows. The workflow combined together are referred as sub flows in this class. """ self._optionals = {} super(CombinedWorkflow, self).__init__( output_strategy=output_strategy, mix_names=mix_names, force=force, skip=skip ) def get_sub_runs(self): """Returns a list of tuples (sub flow name, sub flow run method, sub flow short name) to be used in the sub flow parameters extraction. 
""" sub_runs = [] for flow in self._get_sub_flows(): sub_runs.append((flow.__name__, flow.run, flow.get_short_name())) return sub_runs def _get_sub_flows(self): """Returns a list of sub flows used in the combined_workflow. Needs to be implemented in every new combined_workflow. """ raise AttributeError( f"Error: _get_sub_flows() has to be defined for {self.__class__}" ) def set_sub_flows_optionals(self, opts): """Sets the self._optionals variable with all sub flow arguments that were passed in the commandline. """ self._optionals = {} for key, sub_dict in opts.items(): self._optionals[key] = {k: v for k, v in sub_dict.items() if v is not None} def get_optionals(self, flow, **kwargs): """Returns the sub flow's optional arguments merged with those passed as params in kwargs. """ opts = self._optionals[flow.__name__] opts.update(kwargs) return opts def run_sub_flow(self, flow, *args, **kwargs): """Runs the sub flow with the optional parameters passed via the command line. This is a convenience method to make sub flow running more intuitive on the concrete CombinedWorkflow side. """ return flow.run(*args, **self.get_optionals(type(flow), **kwargs)) dipy-1.11.0/dipy/workflows/denoise.py000066400000000000000000000367771476546756600176170ustar00rootroot00000000000000import logging import shutil import numpy as np from dipy.core.gradients import gradient_table from dipy.denoise.gibbs import gibbs_removal from dipy.denoise.localpca import localpca, mppca from dipy.denoise.nlmeans import nlmeans from dipy.denoise.noise_estimate import estimate_sigma from dipy.denoise.patch2self import patch2self from dipy.denoise.pca_noise_estimate import pca_noise_estimate from dipy.io.gradients import read_bvals_bvecs from dipy.io.image import load_nifti, save_nifti from dipy.workflows.workflow import Workflow class Patch2SelfFlow(Workflow): @classmethod def get_short_name(cls): return "patch2self" def run( self, input_files, bval_files, model="ols", b0_threshold=50, alpha=1.0, verbose=False, patch_radius=0, skip_b0_denoising=False, clip_negative_vals=False, skip_shift_intensity=False, ver=3, out_dir="", out_denoised="dwi_patch2self.nii.gz", ): """Workflow for Patch2Self denoising method. See :footcite:p:`Fadnavis2020` for further details about the method. See :footcite:p:`Fadnavis2024` for further details about the new method. It applies patch2self denoising :footcite:p:`Fadnavis2020` on each file found by 'globing' ``input_file`` and ``bval_file``. It saves the results in a directory specified by ``out_dir``. Parameters ---------- input_files : string Path to the input volumes. This path may contain wildcards to process multiple inputs at once. bval_files : string bval file associated with the diffusion data. model : string, or initialized linear model object. This will determine the algorithm used to solve the set of linear equations underlying this model. If it is a string it needs to be one of the following: {'ols', 'ridge', 'lasso'}. Otherwise, it can be an object that inherits from `dipy.optimize.SKLearnLinearSolver` or an object with a similar interface from Scikit-Learn: `sklearn.linear_model.LinearRegression`, `sklearn.linear_model.Lasso` or `sklearn.linear_model.Ridge` and other objects that inherit from `sklearn.base.RegressorMixin`. Default: 'ols'. b0_threshold : int, optional Threshold for considering volumes as b0. alpha : float, optional Regularization parameter only for ridge regression model. verbose : bool, optional Show progress of Patch2Self and time taken. 
patch_radius : variable int, optional The radius of the local patch to be taken around each voxel skip_b0_denoising : bool, optional Skips denoising b0 volumes if set to True. clip_negative_vals : bool, optional Sets negative values after denoising to 0 using `np.clip`. skip_shift_intensity : bool, optional Skips shifting the distribution of intensities per volume to give non-negative values if set to True. ver : int, optional Version of the Patch2Self algorithm to use between 1 or 3. out_dir : string, optional Output directory. out_denoised : string, optional Name of the resulting denoised volume (default: dwi_patch2self.nii.gz) References ---------- .. footbibliography:: """ b0_denoising = not skip_b0_denoising shift_intensity = not skip_shift_intensity io_it = self.get_io_iterator() if isinstance(patch_radius, list) and len(patch_radius) == 1: patch_radius = int(patch_radius[0]) for fpath, bvalpath, odenoised in io_it: if self._skip: shutil.copy(fpath, odenoised) logging.warning("Denoising skipped for now.") else: logging.info("Denoising %s", fpath) data, affine, image = load_nifti(fpath, return_img=True) bvals = np.loadtxt(bvalpath) extra_args = {"patch_radius": patch_radius} if ver == 1 else {} denoised_data = patch2self( data, bvals, model=model, b0_threshold=b0_threshold, alpha=alpha, verbose=verbose, b0_denoising=b0_denoising, clip_negative_vals=clip_negative_vals, shift_intensity=shift_intensity, version=ver, **extra_args, ) save_nifti(odenoised, denoised_data, affine, hdr=image.header) logging.info("Denoised volumes saved as %s", odenoised) class NLMeansFlow(Workflow): @classmethod def get_short_name(cls): return "nlmeans" def run( self, input_files, sigma=0, patch_radius=1, block_radius=5, rician=True, out_dir="", out_denoised="dwi_nlmeans.nii.gz", ): """Workflow wrapping the nlmeans denoising method. It applies nlmeans denoise :footcite:p:`Descoteaux2008a` on each file found by 'globing' ``input_files`` and saves the results in a directory specified by ``out_dir``. Parameters ---------- input_files : string Path to the input volumes. This path may contain wildcards to process multiple inputs at once. sigma : float, optional Sigma parameter to pass to the nlmeans algorithm. patch_radius : int, optional patch size is ``2 x patch_radius + 1``. block_radius : int, optional block size is ``2 x block_radius + 1``. rician : bool, optional If True the noise is estimated as Rician, otherwise Gaussian noise is assumed. out_dir : string, optional Output directory. out_denoised : string, optional Name of the resulting denoised volume. References ---------- .. footbibliography:: """ io_it = self.get_io_iterator() for fpath, odenoised in io_it: if self._skip: shutil.copy(fpath, odenoised) logging.warning("Denoising skipped for now.") else: logging.info("Denoising %s", fpath) data, affine, image = load_nifti(fpath, return_img=True) if sigma == 0: logging.info("Estimating sigma") sigma = estimate_sigma(data) logging.debug(f"Found sigma {sigma}") denoised_data = nlmeans( data, sigma=sigma, patch_radius=patch_radius, block_radius=block_radius, rician=rician, ) save_nifti(odenoised, denoised_data, affine, hdr=image.header) logging.info("Denoised volume saved as %s", odenoised) class LPCAFlow(Workflow): @classmethod def get_short_name(cls): return "lpca" def run( self, input_files, bvalues_files, bvectors_files, sigma=0, b0_threshold=50, bvecs_tol=0.01, patch_radius=2, pca_method="eig", tau_factor=2.3, out_dir="", out_denoised="dwi_lpca.nii.gz", ): r"""Workflow wrapping LPCA denoising method. 
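A hypothetical command-line invocation could look like ``dipy_denoise_lpca dwi.nii.gz dwi.bval dwi.bvec`` (paths are illustrative only).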
See :footcite:p:`Veraart2016a` for further details about the method. Parameters ---------- input_files : string Path to the input volumes. This path may contain wildcards to process multiple inputs at once. bvalues_files : string Path to the bvalues files. This path may contain wildcards to use multiple bvalues files at once. bvectors_files : string Path to the bvectors files. This path may contain wildcards to use multiple bvectors files at once. sigma : float, optional Standard deviation of the noise estimated from the data. 0 means sigma value estimation following the algorithm in :footcite:t:`Manjon2013`. b0_threshold : float, optional Threshold used to find b0 volumes. bvecs_tol : float, optional Threshold used to check that norm(bvec) = 1 +/- bvecs_tol, i.e., that b-vectors are unit vectors. patch_radius : int, optional The radius of the local patch to be taken around each voxel (in voxels). For example, for a patch radius with value 2, and assuming the input image is a 3D image, the denoising will take place in blocks of 5x5x5 voxels. pca_method : string, optional Use either eigenvalue decomposition ('eig') or singular value decomposition ('svd') for principal component analysis. The default method is 'eig' which is faster. However, occasionally 'svd' might be more accurate. tau_factor : float, optional Thresholding of PCA eigenvalues is done by nulling out eigenvalues that are smaller than: .. math:: \tau = (\tau_{factor} \sigma)^2 $\tau_{factor}$ can be changed to adjust the relationship between the noise standard deviation and the threshold $\tau$. If $\tau_{factor}$ is set to None, it will be automatically calculated using the Marcenko-Pastur distribution :footcite:p:`Veraart2016b`. out_dir : string, optional Output directory. out_denoised : string, optional Name of the resulting denoised volume. References ---------- .. footbibliography:: """ io_it = self.get_io_iterator() if isinstance(patch_radius, list) and len(patch_radius) == 1: patch_radius = int(patch_radius[0]) for dwi, bval, bvec, odenoised in io_it: logging.info("Denoising %s", dwi) data, affine, image = load_nifti(dwi, return_img=True) if not sigma: logging.info("Estimating sigma") bvals, bvecs = read_bvals_bvecs(bval, bvec) gtab = gradient_table( bvals, bvecs=bvecs, b0_threshold=b0_threshold, atol=bvecs_tol ) sigma = pca_noise_estimate(data, gtab, correct_bias=True, smooth=3) logging.debug("Found sigma %s", sigma) denoised_data = localpca( data, sigma=sigma, patch_radius=patch_radius, pca_method=pca_method, tau_factor=tau_factor, ) save_nifti(odenoised, denoised_data, affine, hdr=image.header) logging.info("Denoised volume saved as %s", odenoised) class MPPCAFlow(Workflow): @classmethod def get_short_name(cls): return "mppca" def run( self, input_files, patch_radius=2, pca_method="eig", return_sigma=False, out_dir="", out_denoised="dwi_mppca.nii.gz", out_sigma="dwi_sigma.nii.gz", ): r"""Workflow wrapping Marcenko-Pastur PCA denoising method. See :footcite:p:`Veraart2016a` for further details about the method. Parameters ---------- input_files : string Path to the input volumes. This path may contain wildcards to process multiple inputs at once. patch_radius : variable int, optional The radius of the local patch to be taken around each voxel (in voxels). For example, for a patch radius with value 2, and assuming the input image is a 3D image, the denoising will take place in blocks of 5x5x5 voxels. 
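Being a 'variable int', multiple values may be passed from the command line, e.g. ``--patch_radius 2 2 3`` (values illustrative).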
pca_method : string, optional Use either eigenvalue decomposition ('eig') or singular value decomposition ('svd') for principal component analysis. The default method is 'eig' which is faster. However, occasionally 'svd' might be more accurate. return_sigma : bool, optional If true, a noise standard deviation estimate based on the Marcenko-Pastur distribution is returned :footcite:p:`Veraart2016b`. out_dir : string, optional Output directory. out_denoised : string, optional Name of the resulting denoised volume. out_sigma : string, optional Name of the resulting sigma volume. References ---------- .. footbibliography:: """ io_it = self.get_io_iterator() if isinstance(patch_radius, list) and len(patch_radius) == 1: patch_radius = int(patch_radius[0]) for dwi, odenoised, osigma in io_it: logging.info("Denoising %s", dwi) data, affine, image = load_nifti(dwi, return_img=True) denoised_data, sigma = mppca( data, patch_radius=patch_radius, pca_method=pca_method, return_sigma=True, ) save_nifti(odenoised, denoised_data, affine, hdr=image.header) logging.info("Denoised volume saved as %s", odenoised) if return_sigma: save_nifti(osigma, sigma, affine, hdr=image.header) logging.info("Sigma volume saved as %s", osigma) class GibbsRingingFlow(Workflow): @classmethod def get_short_name(cls): return "gibbs_ringing" def run( self, input_files, slice_axis=2, n_points=3, num_processes=1, out_dir="", out_unring="dwi_unring.nii.gz", ): r"""Workflow for applying Gibbs Ringing method. See :footcite:p:`Kellner2016` and :footcite:p:`NetoHenriques2018` for further details about the method. Parameters ---------- input_files : string Path to the input volumes. This path may contain wildcards to process multiple inputs at once. slice_axis : int, optional Data axis corresponding to the number of acquired slices. Could be (0, 1, or 2): for example, a value of 2 would mean the third axis. n_points : int, optional Number of neighbour points to access local TV (see note). num_processes : int or None, optional Split the calculation to a pool of children processes. Only applies to 3D or 4D `data` arrays. Default is 1. If < 0 the maximal number of cores minus ``num_processes + 1`` is used (enter -1 to use as many cores as possible). 0 raises an error. out_dir : string, optional Output directory. out_unring : string, optional Name of the resulting denoised volume. References ---------- .. footbibliography:: """ io_it = self.get_io_iterator() for dwi, ounring in io_it: logging.info("Unringing %s", dwi) data, affine, image = load_nifti(dwi, return_img=True) unring_data = gibbs_removal( data, slice_axis=slice_axis, n_points=n_points, num_processes=num_processes, ) save_nifti(ounring, unring_data, affine, hdr=image.header) logging.info("Denoised volume saved as %s", ounring) dipy-1.11.0/dipy/workflows/docstring_parser.py000066400000000000000000000326011476546756600215170ustar00rootroot00000000000000""" This was taken directly from the file docscrape.py of numpydoc package. Copyright (C) 2008 Stefan van der Walt , Pauli Virtanen Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 
THIS SOFTWARE IS PROVIDED BY THE AUTHOR "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. """ import re import textwrap from warnings import warn from dipy.testing.decorators import warning_for_keywords class Reader: """A line-based string reader.""" def __init__(self, data): """ Parameters ---------- data : str String with lines separated by '\n'. """ if isinstance(data, list): self._str = data else: self._str = data.split("\n") # store string as list of lines self.reset() def __getitem__(self, n): return self._str[n] def reset(self): self._l = 0 # current line nr def read(self): if not self.eof(): out = self[self._l] self._l += 1 return out else: return "" def seek_next_non_empty_line(self): for ell in self[self._l :]: if ell.strip(): break else: self._l += 1 def eof(self): return self._l >= len(self._str) def read_to_condition(self, condition_func): start = self._l for line in self[start:]: if condition_func(line): return self[start : self._l] self._l += 1 if self.eof(): return self[start : self._l + 1] return [] def read_to_next_empty_line(self): self.seek_next_non_empty_line() def is_empty(line): return not line.strip() return self.read_to_condition(is_empty) def read_to_next_unindented_line(self): def is_unindented(line): return line.strip() and (len(line.lstrip()) == len(line)) return self.read_to_condition(is_unindented) def peek(self, n=0): if self._l + n < len(self._str): return self[self._l + n] else: return "" def is_empty(self): return not "".join(self._str).strip() def dedent_lines(lines): """Deindent a list of lines maximally""" return textwrap.dedent("\n".join(lines)).split("\n") class NumpyDocString: @warning_for_keywords() def __init__(self, docstring, *, config=None): docstring = textwrap.dedent(docstring).split("\n") self._doc = Reader(docstring) self._parsed_data = { "Signature": "", "Summary": [""], "Extended Summary": [], "Parameters": [], "Outputs": [], "Returns": [], "Yields": [], "Raises": [], "Warns": [], "Other Parameters": [], "Attributes": [], "Methods": [], "See Also": [], "Notes": [], "Warnings": [], "References": "", "Examples": "", "index": {}, } self._parse() def __getitem__(self, key): return self._parsed_data[key] def __setitem__(self, key, val): if key not in self._parsed_data: warn(f"Unknown section {key}", stacklevel=2) else: self._parsed_data[key] = val def _is_at_section(self): self._doc.seek_next_non_empty_line() if self._doc.eof(): return False l1 = self._doc.peek().strip() # e.g. Parameters if l1.startswith(".. 
index::"): return True l2 = self._doc.peek(1).strip() # ---------- or ========== return l2.startswith("-" * len(l1)) or l2.startswith("=" * len(l1)) def _strip(self, doc): _i = 0 _j = 0 for _i, line in enumerate(doc): if line.strip(): break for _j, line in enumerate(doc[::-1]): if line.strip(): break return doc[_i : len(doc) - _j] def _read_to_next_section(self): section = self._doc.read_to_next_empty_line() while not self._is_at_section() and not self._doc.eof(): if not self._doc.peek(-1).strip(): # previous line was empty section += [""] section += self._doc.read_to_next_empty_line() return section def _read_sections(self): while not self._doc.eof(): data = self._read_to_next_section() name = data[0].strip() if name.startswith(".."): # index section yield name, data[1:] elif len(data) < 2: yield StopIteration else: yield name, self._strip(data[2:]) def _parse_param_list(self, content): r = Reader(content) params = [] while not r.eof(): header = r.read().strip() if " : " in header: arg_name, arg_type = header.split(" : ")[:2] else: arg_name, arg_type = header, "" desc = r.read_to_next_unindented_line() desc = dedent_lines(desc) params.append((arg_name, arg_type, desc)) return params _name_rgx = re.compile( r"^\s*(:(?P\w+):`(?P[a-zA-Z0-9_.-]+)`|" r" (?P[a-zA-Z0-9_.-]+))\s*", re.X, ) def _parse_see_also(self, content): """ func_name : Descriptive text continued text another_func_name : Descriptive text func_name1, func_name2, :meth:`func_name`, func_name3 """ items = [] def parse_item_name(text): """Match ':role:`name`' or 'name'""" m = self._name_rgx.match(text) if m: g = m.groups() if g[1] is None: return g[3], None else: return g[2], g[1] raise ValueError(f"{text} is not a item name") def push_item(name, rest): if not name: return name, role = parse_item_name(name) items.append((name, list(rest), role)) del rest[:] current_func = None rest = [] for line in content: if not line.strip(): continue m = self._name_rgx.match(line) if m and line[m.end() :].strip().startswith(":"): push_item(current_func, rest) current_func, line = line[: m.end()], line[m.end() :] rest = [line.split(":", 1)[1].strip()] if not rest[0]: rest = [] elif not line.startswith(" "): push_item(current_func, rest) current_func = None if "," in line: for func in line.split(","): if func.strip(): push_item(func, []) elif line.strip(): current_func = line elif current_func is not None: rest.append(line.strip()) push_item(current_func, rest) return items def _parse_index(self, section, content): """ .. 
index: default :refguide: something, else, and more """ def strip_each_in(lst): return [s.strip() for s in lst] out = {} section = section.split("::") if len(section) > 1: out["default"] = strip_each_in(section[1].split(","))[0] for line in content: line = line.split(":") if len(line) > 2: out[line[1]] = strip_each_in(line[2].split(",")) return out def _parse_summary(self): """Grab signature (if given) and summary""" if self._is_at_section(): return # If several signatures present, take the last one while True: summary = self._doc.read_to_next_empty_line() summary_str = " ".join([s.strip() for s in summary]).strip() if re.compile(r"^([\w., ]+=)?\s*[\w\.]+\(.*\)$").match(summary_str): self["Signature"] = summary_str if not self._is_at_section(): continue break if summary is not None: self["Summary"] = summary if not self._is_at_section(): self["Extended Summary"] = self._read_to_next_section() def _parse(self): self._doc.reset() self._parse_summary() for section, content in self._read_sections(): if not section.startswith(".."): section = " ".join([s.capitalize() for s in section.split(" ")]) if section in ( "Parameters", "Outputs", "Returns", "Raises", "Warns", "Other Parameters", "Attributes", "Methods", ): self[section] = self._parse_param_list(content) elif section.startswith(".. index::"): self["index"] = self._parse_index(section, content) elif section == "See Also": self["See Also"] = self._parse_see_also(content) else: self[section] = content # string conversion routines def _str_header(self, name, symbol="-"): return [name, len(name) * symbol] def _str_indent(self, doc, indent=4): out = [] for line in doc: out += [" " * indent + line] return out def _str_signature(self): if self["Signature"]: return [self["Signature"].replace("*", r"\*")] + [""] else: return [""] def _str_summary(self): if self["Summary"]: return self["Summary"] + [""] else: return [] def _str_extended_summary(self): if self["Extended Summary"]: return self["Extended Summary"] + [""] else: return [] def _str_param_list(self, name): out = [] if self[name]: out += self._str_header(name) for param, param_type, desc in self[name]: if param_type: out += [f"{param} : {param_type}"] else: out += [param] out += self._str_indent(desc) out += [""] return out def _str_section(self, name): out = [] if self[name]: out += self._str_header(name) out += self[name] out += [""] return out def _str_see_also(self, func_role): if not self["See Also"]: return [] out = [] out += self._str_header("See Also") last_had_desc = True for func, desc, role in self["See Also"]: if role: link = f":{role}:`{func}`" elif func_role: link = f":{func_role}:`{func}`" else: link = f"`{func}`_" if desc or last_had_desc: out += [""] out += [link] else: out[-1] += f", {link}" if desc: out += self._str_indent([" ".join(desc)]) last_had_desc = True else: last_had_desc = False out += [""] return out def _str_index(self): idx = self["index"] out = [] out += [f".. 
index:: {idx.get('default', '')}"]
        for section, references in idx.items():
            if section == "default":
                continue
            out += [f"   :{section}: {', '.join(references)}"]
        return out

    def __str__(self, func_role=""):
        out = []
        out += self._str_signature()
        out += self._str_summary()
        out += self._str_extended_summary()
        for param_list in (
            "Parameters",
            "Returns",
            "Other Parameters",
            "Raises",
            "Warns",
        ):
            out += self._str_param_list(param_list)
        out += self._str_section("Warnings")
        out += self._str_see_also(func_role)
        for s in ("Notes", "References", "Examples"):
            out += self._str_section(s)
        for param_list in ("Attributes", "Methods"):
            out += self._str_param_list(param_list)
        out += self._str_index()
        return "\n".join(out)
dipy-1.11.0/dipy/workflows/flow_runner.py000066400000000000000000000065761476546756600205170ustar00rootroot00000000000000import logging
import warnings

from dipy import __version__ as dipy_version
from dipy.workflows.base import IntrospectiveArgumentParser

# Disabling the FutureWarning from h5py below.
# This disables the FutureWarning warning for all the workflows.
warnings.simplefilter(action="ignore", category=FutureWarning)


def get_level(lvl):
    """Transforms the logging level passed on the commandline into a proper
    logging level.
    """
    try:
        # ``logging._nameToLevel`` maps level names to their numeric values on
        # Python 3 (``logging._levelNames`` existed only on Python 2) and
        # raises ``KeyError`` for unknown names, triggering the fallback.
        return logging._nameToLevel[lvl]
    except Exception:
        return logging.INFO


def run_flow(flow, *, extra_args=None):
    """Wraps the process of building an argparser that reflects the workflow
    that we want to run along with some generic parameters like logging,
    force and output strategies. The resulting parameters are then fed to
    the workflow's run method.
    """
    parser = IntrospectiveArgumentParser()
    sub_flows_dicts = parser.add_workflow(flow)

    # Common workflow arguments
    parser.add_argument(
        "--force",
        dest="force",
        action="store_true",
        default=False,
        help="Force overwriting output files.",
    )

    parser.add_argument("--version", action="version", version=f"DIPY {dipy_version}")

    parser.add_argument(
        "--out_strat",
        action="store",
        dest="out_strat",
        metavar="string",
        required=False,
        default="absolute",
        help="Strategy to manage output creation.",
    )

    parser.add_argument(
        "--mix_names",
        dest="mix_names",
        action="store_true",
        default=False,
        help="Prepend mixed input names to output names.",
    )

    # Add logging parameters common to all workflows
    msg = "Log messages display level. Accepted options include CRITICAL,"
    msg += " ERROR, WARNING, INFO, DEBUG and NOTSET (default INFO)."
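
    # Hypothetical end-to-end sketch (illustrative, not executed): console
    # entry points simply instantiate a workflow and hand it to ``run_flow``,
    # after which the generic flags above and the logging flags added below
    # become available on the command line, e.g.::
    #
    #     from dipy.workflows.flow_runner import run_flow
    #     from dipy.workflows.mask import MaskFlow
    #
    #     if __name__ == "__main__":
    #         run_flow(MaskFlow())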
parser.add_argument( "--log_level", action="store", dest="log_level", metavar="string", required=False, default="INFO", help=msg, ) parser.add_argument( "--log_file", action="store", dest="log_file", metavar="string", required=False, default="", help="Log file to be saved.", ) # Add predefined arguments to the parser if extra_args and isinstance(extra_args, dict): for key, value in extra_args.items(): parser.add_argument(f"--{key}", **value) args = parser.get_flow_args() logging.basicConfig( filename=args["log_file"], format="%(levelname)s:%(message)s", level=get_level(args["log_level"]), ) # Output management parameters flow._force_overwrite = args["force"] flow._output_strategy = args["out_strat"] flow._mix_names = args["mix_names"] # Keep only workflow related parameters del args["force"] del args["log_level"] del args["log_file"] del args["out_strat"] del args["mix_names"] # Remove subflows related params for params_dict in list(sub_flows_dicts.values()): for key in list(params_dict.keys()): if key in args.keys(): params_dict[key] = args.pop(key) # Rename dictionary key to the original param name params_dict[key.split(".")[1]] = params_dict.pop(key) if sub_flows_dicts: flow.set_sub_flows_optionals(sub_flows_dicts) return flow.run(**args) dipy-1.11.0/dipy/workflows/io.py000066400000000000000000001325261476546756600165650ustar00rootroot00000000000000import importlib from inspect import getmembers, isfunction import logging import os import sys import warnings import numpy as np import trx.trx_file_memmap as tmm from dipy.core.gradients import ( extract_b0, extract_dwi_shell, gradient_table, mask_non_weighted_bvals, ) from dipy.core.sphere import Sphere from dipy.data import get_sphere from dipy.io.gradients import read_bvals_bvecs from dipy.io.image import load_nifti, save_nifti from dipy.io.peaks import ( load_pam, niftis_to_pam, pam_to_niftis, tensor_to_pam, ) from dipy.io.streamline import load_tractogram, save_tractogram from dipy.reconst.shm import convert_sh_descoteaux_tournier from dipy.reconst.utils import convert_tensors from dipy.tracking.streamlinespeed import length from dipy.utils.optpkg import optional_package from dipy.utils.tractogram import concatenate_tractogram from dipy.workflows.utils import handle_vol_idx from dipy.workflows.workflow import Workflow ne, have_ne, _ = optional_package("numexpr") class IoInfoFlow(Workflow): @classmethod def get_short_name(cls): return "io_info" def run( self, input_files, b0_threshold=50, bvecs_tol=0.01, bshell_thr=100, reference=None, ): """Provides useful information about different files used in medical imaging. Any number of input files can be provided. The program identifies the type of file by its extension. Parameters ---------- input_files : variable string Any number of Nifti1, bvals or bvecs files. b0_threshold : float, optional Threshold used to find b0 volumes. bvecs_tol : float, optional Threshold used to check that norm(bvec) = 1 +/- bvecs_tol b-vectors are unit vectors. bshell_thr : float, optional Threshold for distinguishing b-values in different shells. reference : string, optional Reference anatomy for tck/vtk/fib/dpy file. support (.nii or .nii.gz). 
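
        Examples
        --------
        A minimal command-line sketch (file names are placeholders; assumes
        the ``dipy_info`` console entry point bound to this workflow)::

            dipy_info dwi.nii.gz dwi.bval dwi.bvec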
""" np.set_printoptions(3, suppress=True) io_it = self.get_io_iterator() for input_path in io_it: mult_ = len(input_path) logging.info(f"-----------{mult_ * '-'}") logging.info(f"Looking at {input_path}") logging.info(f"-----------{mult_ * '-'}") ipath_lower = input_path.lower() extension = os.path.splitext(ipath_lower)[1] if ipath_lower.endswith(".nii") or ipath_lower.endswith(".nii.gz"): data, affine, img, vox_sz, affcodes = load_nifti( input_path, return_img=True, return_voxsize=True, return_coords=True ) logging.info(f"Data size {data.shape}") logging.info(f"Data type {data.dtype}") if data.ndim == 3: logging.info( f"Data min {data.min()} max {data.max()} avg {data.mean()}" ) logging.info( f"2nd percentile {np.percentile(data, 2)} " f"98th percentile {np.percentile(data, 98)}" ) if data.ndim == 4: logging.info( f"Data min {data[..., 0].min()} " f"max {data[..., 0].max()} " f"avg {data[..., 0].mean()} of vol 0" ) msg = ( f"2nd percentile {np.percentile(data[..., 0], 2)} " f"98th percentile {np.percentile(data[..., 0], 98)} " f"of vol 0" ) logging.info(msg) logging.info(f"Native coordinate system {''.join(affcodes)}") logging.info(f"Affine Native to RAS matrix \n{affine}") logging.info(f"Voxel size {np.array(vox_sz)}") if np.sum(np.abs(np.diff(vox_sz))) > 0.1: msg = "Voxel size is not isotropic. Please reslice.\n" logging.warning(msg, stacklevel=2) if os.path.basename(input_path).lower().find("bval") > -1: bvals = np.loadtxt(input_path) logging.info(f"b-values \n{bvals}") logging.info(f"Total number of b-values {len(bvals)}") shells = np.sum(np.diff(np.sort(bvals)) > bshell_thr) logging.info(f"Number of gradient shells {shells}") logging.info( f"Number of b0s {np.sum(bvals <= b0_threshold)} " f"(b0_thr {b0_threshold})\n" ) if os.path.basename(input_path).lower().find("bvec") > -1: bvecs = np.loadtxt(input_path) logging.info(f"Bvectors shape on disk is {bvecs.shape}") rows, cols = bvecs.shape if rows < cols: bvecs = bvecs.T logging.info(f"Bvectors are \n{bvecs}") norms = np.array([np.linalg.norm(bvec) for bvec in bvecs]) res = np.where((norms <= 1 + bvecs_tol) & (norms >= 1 - bvecs_tol)) ncl1 = np.sum(norms < 1 - bvecs_tol) logging.info(f"Total number of unit bvectors {len(res[0])}") logging.info(f"Total number of non-unit bvectors {ncl1}\n") if extension in [".trk", ".tck", ".trx", ".vtk", ".vtp", ".fib", ".dpy"]: sft = None if extension in [".trk", ".trx"]: sft = load_tractogram(input_path, "same", bbox_valid_check=False) else: sft = load_tractogram(input_path, reference, bbox_valid_check=False) lengths_mm = list(length(sft.streamlines)) sft.to_voxmm() lengths, steps = [], [] for streamline in sft.streamlines: lengths += [len(streamline)] steps += [np.sqrt(np.sum(np.diff(streamline, axis=0) ** 2, axis=1))] steps = np.hstack(steps) logging.info(f"Number of streamlines: {len(sft)}") logging.info(f"min_length_mm: {float(np.min(lengths_mm))}") logging.info(f"mean_length_mm: {float(np.mean(lengths_mm))}") logging.info(f"max_length_mm: {float(np.max(lengths_mm))}") logging.info(f"std_length_mm: {float(np.std(lengths_mm))}") logging.info(f"min_length_nb_points: {float(np.min(lengths))}") logging.info("mean_length_nb_points: " f"{float(np.mean(lengths))}") logging.info(f"max_length_nb_points: {float(np.max(lengths))}") logging.info(f"std_length_nb_points: {float(np.std(lengths))}") logging.info(f"min_step_size: {float(np.min(steps))}") logging.info(f"mean_step_size: {float(np.mean(steps))}") logging.info(f"max_step_size: {float(np.max(steps))}") logging.info(f"std_step_size: 
{float(np.std(steps))}") logging.info( "data_per_point_keys: " f"{list(sft.data_per_point.keys())}" ) logging.info( "data_per_streamline_keys: " f"{list(sft.data_per_streamline.keys())}" ) np.set_printoptions() class FetchFlow(Workflow): @classmethod def get_short_name(cls): return "fetch" @staticmethod def get_fetcher_datanames(): """Gets available dataset and function names. Returns ------- available_data: dict Available dataset and function names. """ fetcher_module = FetchFlow.load_module("dipy.data.fetcher") available_data = dict( { (name.replace("fetch_", ""), func) for name, func in getmembers(fetcher_module, isfunction) if name.lower().startswith("fetch_") and func is not fetcher_module.fetch_data } ) return available_data @staticmethod def load_module(module_path): """Load / reload an external module. Parameters ---------- module_path: string the path to the module relative to the main script Returns ------- module: module object """ if module_path in sys.modules: return importlib.reload(sys.modules[module_path]) else: return importlib.import_module(module_path) def run( self, data_names, subjects=None, include_optional=False, include_afq=False, hcp_bucket="hcp-openaccess", hcp_profile_name="hcp", hcp_study="HCP_1200", hcp_aws_access_key_id=None, hcp_aws_secret_access_key=None, out_dir="", ): """Download files to folder and check their md5 checksums. To see all available datasets, please type "list" in data_names. Parameters ---------- data_names : variable string Any number of Nifti1, bvals or bvecs files. subjects : variable string, optional Identifiers of the subjects to download. Used only by the HBN & HCP dataset. For example with HBN dataset: --subject NDARAA948VFH NDAREK918EC2 include_optional : bool, optional Include optional datasets. include_afq : bool, optional Whether to include pyAFQ derivatives. Used only by the HBN dataset. hcp_bucket : string, optional The name of the HCP S3 bucket. hcp_profile_name : string, optional The name of the AWS profile used for access. hcp_study : string, optional Which HCP study to grab. hcp_aws_access_key_id : string, optional AWS credentials to HCP AWS S3. Will only be used if `profile_name` is set to False. hcp_aws_secret_access_key : string, optional AWS credentials to HCP AWS S3. Will only be used if `profile_name` is set to False. out_dir : string, optional Output directory. """ if out_dir: dipy_home = os.environ.get("DIPY_HOME", None) os.environ["DIPY_HOME"] = out_dir available_data = FetchFlow.get_fetcher_datanames() data_names = [name.lower() for name in data_names] if "all" in data_names: logging.warning("Skipping HCP and HBN datasets.") available_data.pop("hcp", None) available_data.pop("hbn", None) for name, fetcher_function in available_data.items(): if name in ["hcp", "hbn"]: continue logging.info("------------------------------------------") logging.info(f"Fetching at {name}") logging.info("------------------------------------------") fetcher_function(include_optional=include_optional) elif "list" in data_names: logging.info( "Please, select between the following data names: \n" f"{', '.join(available_data.keys())}" ) else: skipped_names = [] for data_name in data_names: if data_name not in available_data.keys(): skipped_names.append(data_name) continue logging.info("------------------------------------------") logging.info(f"Fetching at {data_name}") logging.info("------------------------------------------") if data_name == "hcp": if not subjects: logging.error( "Please provide the subjects to download the HCP dataset." 
                        )
                        continue
                    try:
                        available_data[data_name](
                            subjects=subjects,
                            bucket=hcp_bucket,
                            profile_name=hcp_profile_name,
                            study=hcp_study,
                            aws_access_key_id=hcp_aws_access_key_id,
                            aws_secret_access_key=hcp_aws_secret_access_key,
                        )
                    except Exception as e:
                        logging.error(
                            f"Error while fetching HCP dataset: {str(e)}", exc_info=True
                        )
                elif data_name == "hbn":
                    if not subjects:
                        logging.error(
                            "Please provide the subjects to download the HBN dataset."
                        )
                        continue
                    try:
                        available_data[data_name](
                            subjects=subjects, include_afq=include_afq
                        )
                    except Exception as e:
                        logging.error(
                            f"Error while fetching HBN dataset: {str(e)}", exc_info=True
                        )
                else:
                    available_data[data_name](include_optional=include_optional)

            nb_success = len(data_names) - len(skipped_names)
            print("\n")
            logging.info(f"Fetched {nb_success} / {len(data_names)} Files ")

            if skipped_names:
                logging.warning(f"Skipped data name(s): {' '.join(skipped_names)}")
                logging.warning(
                    "Please, select between the following data names: "
                    f"{', '.join(available_data.keys())}"
                )

        if out_dir:
            if dipy_home:
                os.environ["DIPY_HOME"] = dipy_home
            else:
                os.environ.pop("DIPY_HOME", None)

            # We load the module again so that if we run another one of these
            # in the same process, we don't have the env variable pointing
            # to the wrong place
            self.load_module("dipy.data.fetcher")


class SplitFlow(Workflow):
    @classmethod
    def get_short_name(cls):
        return "split"

    def run(self, input_files, vol_idx=0, out_dir="", out_split="split.nii.gz"):
        """Splits the input 4D file and extracts the required 3D volume.

        Parameters
        ----------
        input_files : variable string
            Any number of Nifti1 files
        vol_idx : int, optional
            Index of the 3D volume to extract.
        out_dir : string, optional
            Output directory.
        out_split : string, optional
            Name of the resulting split volume

        """
        io_it = self.get_io_iterator()
        for fpath, osplit in io_it:
            logging.info(f"Splitting {fpath}")
            data, affine, image = load_nifti(fpath, return_img=True)

            if vol_idx == 0:
                logging.info("Splitting and extracting 1st b0")

            split_vol = data[..., vol_idx]
            save_nifti(osplit, split_vol, affine, hdr=image.header)

            logging.info(f"Split volume saved as {osplit}")


class ExtractB0Flow(Workflow):
    @classmethod
    def get_short_name(cls):
        return "extract_b0"

    def run(
        self,
        input_files,
        bvalues_files,
        b0_threshold=50,
        group_contiguous_b0=False,
        strategy="mean",
        out_dir="",
        out_b0="b0.nii.gz",
    ):
        """Extract one or multiple b0 volumes from the input 4D file.

        Parameters
        ----------
        input_files : string
            Path to the input volumes. This path may contain wildcards to
            process multiple inputs at once.
        bvalues_files : string
            Path to the bvalues files. This path may contain wildcards to use
            multiple bvalues files at once.
        b0_threshold : float, optional
            Threshold used to find b0 volumes.
        group_contiguous_b0 : bool, optional
            If True, contiguous b0 volumes are grouped together.
        strategy : str, optional
            The extraction strategy, one of:

            - first: select the first b0 found.
            - all: select them all.
            - mean: average them.

            When used in conjunction with ``group_contiguous_b0`` set to
            True, the strategy is applied individually on each contiguous
            set found.
        out_dir : string, optional
            Output directory.
        out_b0 : string, optional
            Name of the resulting b0 volume.
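
        Examples
        --------
        A minimal command-line sketch (file names are placeholders; assumes
        the standard ``dipy_extract_b0`` entry point). With the default
        ``mean`` strategy a single averaged b0 volume is written::

            dipy_extract_b0 dwi.nii.gz dwi.bval --strategy mean --out_b0 b0.nii.gz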
""" io_it = self.get_io_iterator() for dwi, bval, ob0 in io_it: logging.info("Extracting b0 from {0}".format(dwi)) data, affine, image = load_nifti(dwi, return_img=True) bvals, bvecs = read_bvals_bvecs(bval, None) # If all b-values are smaller or equal to the b0 threshold, it is # assumed that no thresholding is requested if any(mask_non_weighted_bvals(bvals, b0_threshold)): if b0_threshold < bvals.min(): warnings.warn( f"b0_threshold (value: {b0_threshold}) is too low, " "increase your b0_threshold. It should be higher than the " f"first b0 value ({bvals.min()}).", stacklevel=2, ) bvecs = np.random.randn(bvals.shape[0], 3) norms = np.linalg.norm(bvecs, axis=1, keepdims=True) bvecs = bvecs / norms gtab = gradient_table(bvals, bvecs=bvecs, b0_threshold=b0_threshold) b0s_result = extract_b0( data, gtab.b0s_mask, group_contiguous_b0=group_contiguous_b0, strategy=strategy, ) if b0s_result.ndim == 3: save_nifti(ob0, b0s_result, affine, hdr=image.header) logging.info("b0 saved as {0}".format(ob0)) elif b0s_result.ndim == 4: for i in range(b0s_result.shape[-1]): save_nifti( ob0.replace(".nii", f"_{i}.nii"), b0s_result[..., i], affine, hdr=image.header, ) logging.info( "b0 saved as {0}".format(ob0.replace(".nii", f"_{i}.nii")) ) else: logging.error("No b0 volumes found") class ExtractShellFlow(Workflow): @classmethod def get_short_name(cls): return "extract_shell" def run( self, input_files, bvalues_files, bvectors_files, bvals_to_extract=None, b0_threshold=50, bvecs_tol=0.01, tol=20, group_shells=True, out_dir="", out_shell="shell.nii.gz", ): """Extract shells from the input 4D file. Parameters ---------- input_files : string Path to the input volumes. This path may contain wildcards to process multiple inputs at once. bvalues_files : string Path to the bvalues files. This path may contain wildcards to use multiple bvalues files at once. bvectors_files : string Path to the bvectors files. This path may contain wildcards to use multiple bvectors files at once. bvals_to_extract : string, optional List of b-values to extract. You can provide a single b-values or a range of b-values separated by a dash. For example, to extract b-values 0, 1, and 2, you can use '0-2'. You can also provide a list of b-values separated by a comma. For example, to extract b-values 0, 1, 2, 8, 10, 11 and 12, you can use '0-2,8,10-12'. b0_threshold : float, optional Threshold used to find b0 volumes. bvecs_tol : float, optional Threshold used to check that norm(bvec) = 1 +/- bvecs_tol tol : int, optional Tolerance range for b-value selection. A value of 20 means volumes with b-values within ±20 units of the specified b-values will be extracted. group_shells : bool, optional If True, extracted volumes are grouped into a single array. If False, returns a list of separate volumes. out_dir : string, optional Output directory. out_shell : string, optional Name of the resulting shell volume. """ io_it = self.get_io_iterator() if bvals_to_extract is None: logging.error( "Please provide a list of b-values to extract." 
" e.g: --bvals_to_extract 1000 2000 3000" ) sys.exit(1) bvals_to_extract = handle_vol_idx(bvals_to_extract) for dwi, bval, bvec, oshell in io_it: logging.info("Extracting shell from {0}".format(dwi)) data, affine, image = load_nifti(dwi, return_img=True) bvals, bvecs = read_bvals_bvecs(bval, bvec) # If all b-values are smaller or equal to the b0 threshold, it is # assumed that no thresholding is requested if any(mask_non_weighted_bvals(bvals, b0_threshold)): if b0_threshold < bvals.min(): warnings.warn( f"b0_threshold (value: {b0_threshold}) is too low, " "increase your b0_threshold. It should be higher than the " f"first b0 value ({bvals.min()}).", stacklevel=2, ) gtab = gradient_table( bvals, bvecs=bvecs, b0_threshold=b0_threshold, atol=bvecs_tol ) indices, shell_data, output_bvals, output_bvecs = extract_dwi_shell( data, gtab, bvals_to_extract, tol=tol, group_shells=group_shells, ) for i, shell in enumerate(shell_data): shell_value = np.unique(output_bvals[i]).astype(int).astype(str) shell_value = "_".join(shell_value.tolist()) save_nifti( oshell.replace(".nii", f"_{shell_value}.nii"), shell, affine, hdr=image.header, ) logging.info( "b0 saved as {0}".format( oshell.replace(".nii", f"_{shell_value}.nii") ) ) class ExtractVolumeFlow(Workflow): @classmethod def get_short_name(cls): return "extract_volume" def run( self, input_files, vol_idx=0, grouped=True, out_dir="", out_vol="volume.nii.gz" ): """Extracts the required volume from the input 4D file. Parameters ---------- input_files : string Any number of Nifti1 files vol_idx : string, optional Indexes of the 3D volume to extract. Index start from 0. You can provide a single index or a range of indexes separated by a dash. For example, to extract volumes 0, 1, and 2, you can use '0-2'. You can also provide a list of indexes separated by a comma. For example, to extract volumes 0, 1, 2, 8, 10, 11 and 12 , you can use '0-2,8,10-12'. grouped : bool, optional If True, extracted volumes are grouped into a single array. If False, save a list of separate volumes. out_dir : string, optional Output directory. out_vol : string, optional Name of the resulting volume. """ io_it = self.get_io_iterator() vol_idx = handle_vol_idx(vol_idx) for fpath, ovol in io_it: logging.info("Extracting volume from {0}".format(fpath)) data, affine, image = load_nifti(fpath, return_img=True) if grouped: split_vol = data[..., vol_idx] save_nifti(ovol, split_vol, affine, hdr=image.header) logging.info("Volume saved as {0}".format(ovol)) else: for i in vol_idx: fname = ovol.replace(".nii", f"_{i}.nii") split_vol = data[..., i] save_nifti(fname, split_vol, affine, hdr=image.header) logging.info("Volume saved as {0}".format(fname)) class ConcatenateTractogramFlow(Workflow): @classmethod def get_short_name(cls): return "concatracks" def run( self, tractogram_files, reference=None, delete_dpv=False, delete_dps=False, delete_groups=False, check_space_attributes=True, preallocation=False, out_dir="", out_extension="trx", out_tractogram="concatenated_tractogram", ): """Concatenate multiple tractograms into one. Parameters ---------- tractogram_list : variable string The stateful tractogram filenames to concatenate reference : string, optional Reference anatomy for tck/vtk/fib/dpy file. support (.nii or .nii.gz). 
delete_dpv : bool, optional Delete dpv keys that do not exist in all the provided TrxFiles delete_dps : bool, optional Delete dps keys that do not exist in all the provided TrxFile delete_groups : bool, optional Delete all the groups that currently exist in the TrxFiles check_space_attributes : bool, optional Verify that dimensions and size of data are similar between all the TrxFiles preallocation : bool, optional Preallocated TrxFile has already been generated and is the first element in trx_list (Note: delete_groups must be set to True as well) out_dir : string, optional Output directory. out_extension : string, optional Extension of the resulting tractogram out_tractogram : string, optional Name of the resulting tractogram """ io_it = self.get_io_iterator() trx_list = [] has_group = False for fpath, _, _ in io_it: if fpath.lower().endswith(".trx") or fpath.lower().endswith(".trk"): reference = "same" if not reference: raise ValueError( "No reference provided. It is needed for tck," "fib, dpy or vtk files" ) tractogram_obj = load_tractogram(fpath, reference, bbox_valid_check=False) if not isinstance(tractogram_obj, tmm.TrxFile): tractogram_obj = tmm.TrxFile.from_sft(tractogram_obj) elif len(tractogram_obj.groups): has_group = True trx_list.append(tractogram_obj) trx = concatenate_tractogram( trx_list, delete_dpv=delete_dpv, delete_dps=delete_dps, delete_groups=delete_groups or not has_group, check_space_attributes=check_space_attributes, preallocation=preallocation, ) valid_extensions = ["trk", "trx", "tck", "fib", "dpy", "vtk"] if out_extension.lower() not in valid_extensions: raise ValueError( f"Invalid extension. Valid extensions are: {valid_extensions}" ) out_fpath = os.path.join(out_dir, f"{out_tractogram}.{out_extension}") save_tractogram(trx.to_sft(), out_fpath, bbox_valid_check=False) class ConvertSHFlow(Workflow): @classmethod def get_short_name(cls): return "convert_dipy_mrtrix" def run( self, input_files, out_dir="", out_file="sh_convert_dipy_mrtrix_out.nii.gz", ): """Converts SH basis representation between DIPY and MRtrix3 formats. Because this conversion is equal to its own inverse, it can be used to convert in either direction: DIPY to MRtrix3 or vice versa. Parameters ---------- input_files : string Path to the input files. This path may contain wildcards to process multiple inputs at once. out_dir : string, optional Where the resulting file will be saved. (default '') out_file : string, optional Name of the result file to be saved. (default 'sh_convert_dipy_mrtrix_out.nii.gz') """ io_it = self.get_io_iterator() for in_file, out_file in io_it: data, affine, image = load_nifti(in_file, return_img=True) data = convert_sh_descoteaux_tournier(data) save_nifti(out_file, data, affine, hdr=image.header) class ConvertTensorsFlow(Workflow): @classmethod def get_short_name(cls): return "convert_tensors" def run( self, tensor_files, from_format="mrtrix", to_format="dipy", out_dir=".", out_tensor="converted_tensor", ): """Converts tensor representation between different formats. Parameters ---------- tensor_files : variable string Any number of tensor files from_format : string, optional Format of the input tensor files. Valid options are 'dipy', 'mrtrix', 'ants', 'fsl'. to_format : string, optional Format of the output tensor files. Valid options are 'dipy', 'mrtrix', 'ants', 'fsl'. out_dir : string, optional Output directory. 
out_tensor : string, optional Name of the resulting tensor file """ io_it = self.get_io_iterator() for fpath, otensor in io_it: logging.info(f"Converting {fpath}") data, affine, image = load_nifti(fpath, return_img=True) data = convert_tensors(data, from_format, to_format) save_nifti(otensor, data, affine, hdr=image.header) class ConvertTractogramFlow(Workflow): @classmethod def get_short_name(cls): return "convert_tractogram" def run( self, input_files, reference=None, pos_dtype="float32", offsets_dtype="uint32", out_dir="", out_tractogram="converted_tractogram.trk", ): """Converts tractogram between different formats. Parameters ---------- input_files : variable string Any number of tractogram files reference : string, optional Reference anatomy for tck/vtk/fib/dpy file. support (.nii or .nii.gz). pos_dtype : string, optional Data type of the tractogram points, used for vtk files. offsets_dtype : string, optional Data type of the tractogram offsets, used for vtk files. out_dir : string, optional Output directory. out_tractogram : string, optional Name of the resulting tractogram """ io_it = self.get_io_iterator() for fpath, otracks in io_it: in_extension = fpath.lower().split(".")[-1] out_extension = otracks.lower().split(".")[-1] if in_extension == out_extension: warnings.warn( "Input and output are the same file format. Skipping...", stacklevel=2, ) continue if not reference and in_extension in ["trx", "trk"]: reference = "same" if not reference and in_extension not in ["trx", "trk"]: raise ValueError( "No reference provided. It is needed for tck," "fib, dpy or vtk files" ) sft = load_tractogram(fpath, reference, bbox_valid_check=False) if out_extension != "trx": if out_extension == "vtk": if sft.streamlines._data.dtype.name != pos_dtype: sft.streamlines._data = sft.streamlines._data.astype(pos_dtype) if offsets_dtype == "uint64" or offsets_dtype == "uint32": offsets_dtype = offsets_dtype[1:] if sft.streamlines._offsets.dtype.name != offsets_dtype: sft.streamlines._offsets = sft.streamlines._offsets.astype( offsets_dtype ) save_tractogram(sft, otracks, bbox_valid_check=False) else: trx = tmm.TrxFile.from_sft(sft) if trx.streamlines._data.dtype.name != pos_dtype: trx.streamlines._data = trx.streamlines._data.astype(pos_dtype) if trx.streamlines._offsets.dtype.name != offsets_dtype: trx.streamlines._offsets = trx.streamlines._offsets.astype( offsets_dtype ) tmm.save(trx, otracks) trx.close() class NiftisToPamFlow(Workflow): @classmethod def get_short_name(cls): return "niftis_to_pam" def run( self, peaks_dir_files, peaks_values_files, peaks_indices_files, shm_files=None, gfa_files=None, sphere_files=None, default_sphere_name="repulsion724", out_dir="", out_pam="peaks.pam5", ): """Convert multiple nifti files to a single pam5 file. Parameters ---------- peaks_dir_files : string Path to the input peaks directions volume. This path may contain wildcards to process multiple inputs at once. peaks_values_files : string Path to the input peaks values volume. This path may contain wildcards to process multiple inputs at once. peaks_indices_files : string Path to the input peaks indices volume. This path may contain wildcards to process multiple inputs at once. shm_files : string, optional Path to the input spherical harmonics volume. This path may contain wildcards to process multiple inputs at once. gfa_files : string, optional Path to the input generalized FA volume. This path may contain wildcards to process multiple inputs at once. 
        sphere_files : string, optional
            Path to the input sphere vertices. This path may contain
            wildcards to process multiple inputs at once. If it is not
            defined, the default_sphere option will be used.
        default_sphere_name : string, optional
            Specify default sphere to use for spherical harmonics
            representation. This option can be superseded by
            sphere_files option. Possible options: ['symmetric362',
            'symmetric642', 'symmetric724', 'repulsion724', 'repulsion100',
            'repulsion200'].
        out_dir : string, optional
            Output directory (default input file directory).
        out_pam : string, optional
            Name of the peaks volume to be saved.

        """
        io_it = self.get_io_iterator()

        msg = f"pam5 files saved in {out_dir or 'current directory'}"
        for fpeak_dirs, fpeak_values, fpeak_indices, opam in io_it:
            logging.info("Converting nifti files to pam5")
            peak_dirs, affine = load_nifti(fpeak_dirs)
            peak_values, _ = load_nifti(fpeak_values)
            peak_indices, _ = load_nifti(fpeak_indices)

            if sphere_files:
                xyz = np.loadtxt(sphere_files)
                sphere = Sphere(xyz=xyz)
            else:
                sphere = get_sphere(name=default_sphere_name)

            niftis_to_pam(
                affine=affine,
                peak_dirs=peak_dirs,
                sphere=sphere,
                peak_values=peak_values,
                peak_indices=peak_indices,
                pam_file=opam,
            )
            logging.info(msg.replace("pam5", opam))


class TensorToPamFlow(Workflow):
    @classmethod
    def get_short_name(cls):
        return "tensor_to_niftis"

    def run(
        self,
        evals_files,
        evecs_files,
        sphere_files=None,
        default_sphere_name="repulsion724",
        out_dir="",
        out_pam="peaks.pam5",
    ):
        """Convert multiple tensor files (evals, evecs) to pam5 files.

        Parameters
        ----------
        evals_files : string
            Path to the input eigen values volumes. This path may contain
            wildcards to process multiple inputs at once.
        evecs_files : string
            Path to the input eigen vectors volumes. This path may contain
            wildcards to process multiple inputs at once.
        sphere_files : string, optional
            Path to the input sphere vertices. This path may contain
            wildcards to process multiple inputs at once. If it is not
            defined, the default_sphere option will be used.
        default_sphere_name : string, optional
            Specify default sphere to use for spherical harmonics
            representation. This option can be superseded by
            sphere_files option. Possible options: ['symmetric362',
            'symmetric642', 'symmetric724', 'repulsion724', 'repulsion100',
            'repulsion200'].
        out_dir : string, optional
            Output directory (default input file directory).
        out_pam : string, optional
            Name of the peaks volume to be saved.

        """
        io_it = self.get_io_iterator()

        msg = f"pam5 files saved in {out_dir or 'current directory'}"
        for fevals, fevecs, opam in io_it:
            logging.info("Converting tensor files to pam5...")
            evals, affine = load_nifti(fevals)
            evecs, _ = load_nifti(fevecs)

            if sphere_files:
                xyz = np.loadtxt(sphere_files)
                sphere = Sphere(xyz=xyz)
            else:
                sphere = get_sphere(name=default_sphere_name)

            tensor_to_pam(evals, evecs, affine, sphere=sphere, pam_file=opam)
            logging.info(msg.replace("pam5", opam))


class PamToNiftisFlow(Workflow):
    @classmethod
    def get_short_name(cls):
        return "pam_to_niftis"

    def run(
        self,
        pam_files,
        out_dir="",
        out_peaks_dir="peaks_dirs.nii.gz",
        out_peaks_values="peaks_values.nii.gz",
        out_peaks_indices="peaks_indices.nii.gz",
        out_shm="shm.nii.gz",
        out_gfa="gfa.nii.gz",
        out_sphere="sphere.txt",
        out_b="B.nii.gz",
        out_qa="qa.nii.gz",
    ):
        """Convert pam5 files to multiple nifti files.

        Parameters
        ----------
        pam_files : string
            Path to the input peaks volumes. This path may contain wildcards
            to process multiple inputs at once.
        out_dir : string, optional
            Output directory (default input file directory).
out_peaks_dir : string, optional Name of the peaks directions volume to be saved. out_peaks_values : string, optional Name of the peaks values volume to be saved. out_peaks_indices : string, optional Name of the peaks indices volume to be saved. out_shm : string, optional Name of the spherical harmonics volume to be saved. out_gfa : string, optional Generalized FA volume name to be saved. out_sphere : string, optional Sphere vertices name to be saved. out_b : string, optional Name of the B Matrix to be saved. out_qa : string, optional Name of the Quantitative Anisotropy file to be saved. """ io_it = self.get_io_iterator() msg = f"Nifti files saved in {out_dir or 'current directory'}" for ( ipam, opeaks_dir, opeaks_values, opeaks_indices, oshm, ogfa, osphere, ob, oqa, ) in io_it: logging.info("Converting %s file to niftis...", ipam) pam = load_pam(ipam) pam_to_niftis( pam, fname_peaks_dir=opeaks_dir, fname_shm=oshm, fname_peaks_values=opeaks_values, fname_peaks_indices=opeaks_indices, fname_sphere=osphere, fname_gfa=ogfa, fname_b=ob, fname_qa=oqa, ) logging.info(msg) class MathFlow(Workflow): @classmethod def get_short_name(cls): return "math_flow" def run( self, operation, input_files, dtype=None, out_dir="", out_file="math_out.nii.gz" ): """Perform mathematical operations on volume input files. This workflow allows the user to perform mathematical operations on multiple input files. e.g. to add two volumes together, subtract one: ``dipy_math "vol1 + vol2 - vol3" t1.nii.gz t1_a.nii.gz t1_b.nii.gz`` The input files must be in Nifti format and have the same shape. Parameters ---------- operation : string Mathematical operation to perform. supported operators are: - Bitwise operators (and, or, not, xor): ``&, |, ~, ^`` - Comparison operators: ``<, <=, ==, !=, >=, >`` - Unary arithmetic operators: ``-`` - Binary arithmetic operators: ``+, -, *, /, **, <<, >>`` Supported functions are: - ``where(bool, number1, number2) -> number``: number1 if the bool condition is true, number2 otherwise. - ``{sin,cos,tan}(float|complex) -> float|complex``: trigonometric sine, cosine or tangent. - ``{arcsin,arccos,arctan}(float|complex) -> float|complex``: trigonometric inverse sine, cosine or tangent. - ``arctan2(float1, float2) -> float``: trigonometric inverse tangent of float1/float2. - ``{sinh,cosh,tanh}(float|complex) -> float|complex``: hyperbolic sine, cosine or tangent. - ``{arcsinh,arccosh,arctanh}(float|complex) -> float|complex``: hyperbolic inverse sine, cosine or tangent. - ``{log,log10,log1p}(float|complex) -> float|complex``: natural, base-10 and log(1+x) logarithms. - ``{exp,expm1}(float|complex) -> float|complex``: exponential and exponential minus one. - ``sqrt(float|complex) -> float|complex``: square root. - ``abs(float|complex) -> float|complex``: absolute value. - ``conj(complex) -> complex``: conjugate value. - ``{real,imag}(complex) -> float``: real or imaginary part of complex. - ``complex(float, float) -> complex``: complex from real and imaginary parts. - ``contains(np.str, np.str) -> bool``: returns True for every string in op1 that contains op2. input_files : variable string Any number of Nifti1 files dtype : string, optional Data type of the resulting file. out_dir : string, optional Output directory out_file : string, optional Name of the resulting file to be saved. 
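
        Examples
        --------
        A short sketch (placeholder file names): threshold a volume into a
        binary mask and cast the boolean result to ``uint8``::

            dipy_math "vol1 > 500" dwi.nii.gz --dtype uint8 --out_file mask.nii.gz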
""" vol_dict = {} ref_affine = None ref_shape = None info_msg = "" have_errors = False for i, fname in enumerate(input_files, start=1): if not os.path.isfile(fname): logging.error(f"Input file {fname} does not exist.") raise SystemExit() if not (fname.endswith(".nii.gz") or fname.endswith(".nii")): msg = ( f"Wrong volume type: {fname}. Only Nifti files are supported" " (*.nii or *.nii.gz)." ) logging.error(msg) raise SystemExit() data, affine = load_nifti(fname) vol_dict[f"vol{i}"] = data info_msg += f"{fname}:\n- vol index: {i}\n- shape: {data.shape}" info_msg += f"\n- affine:\n{affine}\n" if ref_affine is None: ref_affine = affine ref_shape = data.shape continue have_errors = ( have_errors or not np.all(np.isclose(ref_affine, affine, rtol=1e-05, atol=1e-08)) or not np.array_equal(ref_shape, data.shape) ) if have_errors: logging.warning(info_msg) msg = "All input files must have the same shape and affine matrix." logging.error(msg) raise SystemExit() try: res = ne.evaluate(operation, local_dict=vol_dict) except KeyError as e: msg = ( f"Impossible key {e} in the operation. You have {len(input_files)}" f" volumes available with the following keys: {list(vol_dict.keys())}" ) logging.error(msg) raise SystemExit() from e if dtype: try: res = res.astype(dtype) except TypeError as e: msg = ( f"Impossible to cast to {dtype}. Check possible numpy type here:" "https://numpy.org/doc/stable/reference/arrays.interface.html" ) logging.error(msg) raise SystemExit() from e out_fname = os.path.join(out_dir, out_file) logging.info(f"Saving result to {out_fname}") save_nifti(out_fname, res, affine) dipy-1.11.0/dipy/workflows/mask.py000066400000000000000000000024601476546756600171020ustar00rootroot00000000000000#!/usr/bin/env python3 import logging import numpy as np from dipy.io.image import load_nifti, save_nifti from dipy.workflows.workflow import Workflow class MaskFlow(Workflow): @classmethod def get_short_name(cls): return "mask" def run(self, input_files, lb, ub=np.inf, out_dir="", out_mask="mask.nii.gz"): """Workflow for creating a binary mask Parameters ---------- input_files : string Path to image to be masked. lb : float Lower bound value. ub : float, optional Upper bound value. out_dir : string, optional Output directory. out_mask : string, optional Name of the masked file. """ if lb >= ub: logging.error( "The upper bound(less than) should be greater" " than the lower bound (greater_than)." 
) return io_it = self.get_io_iterator() for input_path, out_mask_path in io_it: logging.info(f"Creating mask of {input_path}") data, affine = load_nifti(input_path) mask = np.bitwise_and(data > lb, data < ub) save_nifti(out_mask_path, mask.astype(np.ubyte), affine) logging.info(f"Mask saved at {out_mask_path}") dipy-1.11.0/dipy/workflows/meson.build000066400000000000000000000007141476546756600177370ustar00rootroot00000000000000python_sources = [ '__init__.py', 'align.py', 'base.py', 'cli.py', 'combined_workflow.py', 'denoise.py', 'docstring_parser.py', 'flow_runner.py', 'io.py', 'mask.py', 'multi_io.py', 'nn.py', 'reconst.py', 'segment.py', 'stats.py', 'tracking.py', 'utils.py', 'viz.py', 'workflow.py', ] py3.install_sources( python_sources, pure: false, subdir: 'dipy/workflows' ) subdir('tests')dipy-1.11.0/dipy/workflows/multi_io.py000066400000000000000000000213011476546756600177630ustar00rootroot00000000000000from glob import glob import inspect import itertools import os import numpy as np from dipy.testing.decorators import warning_for_keywords from dipy.workflows.base import get_args_default def common_start(sa, sb): """Return the longest common substring from the beginning of sa and sb.""" def _iter(): for a, b in zip(sa, sb): if a == b: yield a else: return return "".join(_iter()) def slash_to_under(dir_str): return "".join(dir_str.replace("/", "_")) @warning_for_keywords() def connect_output_paths( inputs, out_dir, out_files, *, output_strategy="absolute", mix_names=True ): """Generate a list of output files paths based on input files and output strategies. Parameters ---------- inputs : array List of input paths. out_dir : string The output directory. out_files : array List of output files. output_strategy : string, optional Which strategy to use to generate the output paths. 'append': Add out_dir to the path of the input. 'prepend': Add the input path directory tree to out_dir. 'absolute': Put directly in out_dir. mix_names : bool, optional Whether or not prepend a string composed of a mix of the input names to the final output name. Returns ------- A list of output file paths. 
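
    Examples
    --------
    Illustrative behaviour (a sketch, not a doctest) for a single input
    ``/data/subj1/dwi.nii.gz`` with ``out_dir='out'`` and
    ``out_files=['fa.nii.gz']``:

    - 'absolute' writes ``out/fa.nii.gz``
    - 'append' writes ``/data/subj1/out/fa.nii.gz``
    - 'prepend' writes ``<cwd>/out/data/subj1/fa.nii.gz`` (the input
      directory tree appended to ``out_dir``)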
""" outputs = [] if isinstance(inputs, str): inputs = [inputs] if isinstance(out_files, str): out_files = [out_files] sizes_of_inputs = [len(inp) for inp in inputs] max_size = np.max(sizes_of_inputs) min_size = np.min(sizes_of_inputs) if min_size > 1 and min_size != max_size: raise ImportError("Size of input issue") elif min_size == 1: for i, sz in enumerate(sizes_of_inputs): if sz == min_size: inputs[i] = max_size * inputs[i] if mix_names: mixing_prefixes = concatenate_inputs(inputs) else: mixing_prefixes = [""] * len(inputs[0]) for mix_pref, inp in zip(mixing_prefixes, inputs[0]): inp_dirname = os.path.dirname(inp) if output_strategy == "prepend": if os.path.isabs(out_dir): dname = out_dir + inp_dirname if not os.path.isabs(out_dir): dname = os.path.join(os.getcwd(), out_dir + inp_dirname) elif output_strategy == "append": dname = os.path.join(inp_dirname, out_dir) else: dname = out_dir updated_out_files = [] for out_file in out_files: updated_out_files.append(os.path.join(dname, mix_pref + out_file)) outputs.append(updated_out_files) return inputs, outputs def concatenate_inputs(multi_inputs): """Concatenate list of inputs.""" mixing_names = [] for inps in zip(*multi_inputs): mixing_name = "" for inp in inps: mixing_name += basename_without_extension(inp) + "_" mixing_names.append(mixing_name + "_") return mixing_names def basename_without_extension(fname): base = os.path.basename(fname) result = base.split(".")[0] if result[-4:] == ".nii": result = result.split(".")[0] return result @warning_for_keywords() def io_iterator( inputs, out_dir, fnames, *, output_strategy="absolute", mix_names=False, out_keys=None, ): """Create an IOIterator from the parameters. Parameters ---------- inputs : array List of input files. out_dir : string Output directory. fnames : array File names of all outputs to be created. output_strategy : string, optional Controls the behavior of the IOIterator for output paths. mix_names : bool, optional Whether or not to append a mix of input names at the beginning. out_keys : list, optional Output parameter names. Returns ------- Properly instantiated IOIterator object. """ io_it = IOIterator(output_strategy=output_strategy, mix_names=mix_names) io_it.set_inputs(*inputs) io_it.set_out_dir(out_dir) io_it.set_out_fnames(*fnames) io_it.create_outputs() if out_keys: io_it.set_output_keys(*out_keys) return io_it @warning_for_keywords() def _io_iterator(frame, fnc, *, output_strategy="absolute", mix_names=False): """Create an IOIterator using introspection. Parameters ---------- frame : frameobject Contains the info about the current local variables values. fnc : function The function to inspect output_strategy : string, optional Controls the behavior of the IOIterator for output paths. mix_names : bool, optional Whether or not to append a mix of input names at the beginning. Returns ------- Properly instantiated IOIterator object. """ # Create a new object that does not contain the ``self`` dict item def _selfless_dict(_values): return {key: val for key, val in _values.items() if key != "self"} args, _, _, values = inspect.getargvalues(frame) args.remove("self") # Create a new object that does not contain the ``self`` dict item from the # provided copy of the local symbol table returned by ``getargvalues``. # Avoids attempting to remove it from the object returned by # ``getargvalues``. 
values = _selfless_dict(values) spargs, defaults = get_args_default(fnc) len_args = len(spargs) len_defaults = len(defaults) split_at = len_args - len_defaults inputs = [] outputs = [] out_dir = "" # inputs for arv in args[:split_at]: inputs.append(values[arv]) # defaults out_keys = [] for arv in args[split_at:]: if arv == "out_dir": out_dir = values[arv] elif "out_" in arv: out_keys.append(arv) outputs.append(values[arv]) return io_iterator( inputs, out_dir, outputs, output_strategy=output_strategy, mix_names=mix_names, out_keys=out_keys, ) class IOIterator: """Create output filenames that work nicely with multiple input files from multiple directories (processing multiple subjects with one command) Use information from input files, out_dir and out_fnames to generate correct outputs which can come from long lists of multiple or single inputs. """ @warning_for_keywords() def __init__(self, *, output_strategy="absolute", mix_names=False): self.output_strategy = output_strategy self.mix_names = mix_names self.inputs = [] self.out_keys = None def set_inputs(self, *args): self.file_existence_check(args) self.input_args = list(args) for inp in self.input_args: if isinstance(inp, str): self.inputs.append(sorted(glob(inp))) if isinstance(inp, list) and all(isinstance(s, str) for s in inp): nested = [sorted(glob(i)) for i in inp if isinstance(i, str)] self.inputs.append(list(itertools.chain.from_iterable(nested))) def set_out_dir(self, out_dir): self.out_dir = out_dir def set_out_fnames(self, *args): self.out_fnames = list(args) def set_output_keys(self, *args): self.out_keys = list(args) def create_outputs(self): if len(self.inputs) >= 1: self.updated_inputs, self.outputs = connect_output_paths( self.inputs, self.out_dir, self.out_fnames, output_strategy=self.output_strategy, mix_names=self.mix_names, ) self.create_directories() else: raise ImportError("No inputs") def create_directories(self): for outputs in self.outputs: for output in outputs: directory = os.path.dirname(output) if not (directory == "" or os.path.exists(directory)): os.makedirs(directory) def __iter__(self): ins = np.array(self.inputs).T out = np.array(self.outputs) IO = np.concatenate([ins, out], axis=1) for i_o in IO: if len(i_o) == 1: yield str(*i_o) else: yield i_o def file_existence_check(self, args): input_args = [] for fname in args: if isinstance(fname, str): input_args.append(fname) # unpack variable string if isinstance(fname, list) and all(isinstance(s, str) for s in fname): input_args += fname for path in input_args: if len(glob(path)) == 0: raise OSError(f"File not found: {path}") dipy-1.11.0/dipy/workflows/nn.py000066400000000000000000000134331476546756600165640ustar00rootroot00000000000000import logging import sys import numpy as np from dipy.core.gradients import extract_b0, gradient_table from dipy.io.gradients import read_bvals_bvecs from dipy.io.image import load_nifti, save_nifti from dipy.nn.deepn4 import DeepN4 from dipy.nn.evac import EVACPlus from dipy.workflows.workflow import Workflow class EVACPlusFlow(Workflow): @classmethod def get_short_name(cls): return "evacplus" def run( self, input_files, save_masked=False, out_dir="", out_mask="brain_mask.nii.gz", out_masked="dwi_masked.nii.gz", ): """Extract brain using EVAC+. See :footcite:p:`Park2024` for further details about EVAC+. Parameters ---------- input_files : string Path to the input volumes. This path may contain wildcards to process multiple inputs at once. save_masked : bool, optional Save mask. out_dir : string, optional Output directory. 
out_mask : string, optional Name of the mask volume to be saved. out_masked : string, optional Name of the masked volume to be saved. References ---------- .. footbibliography:: """ io_it = self.get_io_iterator() empty_flag = True for fpath, mask_out_path, masked_out_path in io_it: logging.info(f"Applying evac+ brain extraction on {fpath}") data, affine, img, voxsize = load_nifti( fpath, return_img=True, return_voxsize=True ) evac = EVACPlus() mask_volume = evac.predict(data, affine, voxsize=voxsize) masked_volume = mask_volume * data save_nifti(mask_out_path, mask_volume.astype(np.float64), affine) logging.info(f"Mask saved as {mask_out_path}") if save_masked: save_nifti(masked_out_path, masked_volume, affine, hdr=img.header) logging.info(f"Masked volume saved as {masked_out_path}") empty_flag = False if empty_flag: raise ValueError( "All output paths exists." " If you want to overwrite " "please use the --force option." ) return io_it class BiasFieldCorrectionFlow(Workflow): @classmethod def get_short_name(cls): return "bias_field_correction" def run( self, input_files, bval=None, bvec=None, method="n4", threshold=0.5, use_cuda=False, verbose=False, out_dir="", out_corrected="biasfield_corrected.nii.gz", ): """Correct bias field. Parameters ---------- input_files : string Path to the input volumes. This path may contain wildcards to process multiple inputs at once. bval : string, optional Path to the b-value file. bvec : string, optional Path to the b-vector file. method : string, optional Bias field correction method. Choose from: - 'n4': DeepN4 bias field correction. See :footcite:p:`Kanakaraj2024` for more details. - 'b0': B0 bias field correction via normalization. 'n4' method is recommended for T1-weighted images where 'b0' method is recommended for diffusion-weighted images. threshold : float, optional Threshold for cleaning the final correction field in DeepN4 method. use_cuda : bool, optional Use CUDA for DeepN4 bias field correction. verbose : bool, optional Print verbose output. out_dir : string, optional Output directory. out_corrected : string, optional Name of the corrected volume to be saved. References ---------- .. footbibliography:: """ io_it = self.get_io_iterator() if method.lower() not in ["n4", "b0"]: logging.error("Unknown bias field correction method. 
Choose from 'n4, b0'.") sys.exit(1) prefix = "t1" if method.lower() == "n4" else "dwi" for i, name in enumerate(self.flat_outputs): if name.endswith("biasfield_corrected.nii.gz"): self.flat_outputs[i] = name.replace( "biasfield_corrected.nii.gz", f"{prefix}_biasfield_corrected.nii.gz" ) self.update_flat_outputs(self.flat_outputs, io_it) for fpath, corrected_out_path in io_it: logging.info(f"Applying bias field correction on {fpath}") data, affine, img, voxsize = load_nifti( fpath, return_img=True, return_voxsize=True ) corrected_data = None if method.lower() == "b0": bvals, bvecs = read_bvals_bvecs(bval, bvec) gtab = gradient_table(bvals, bvecs=bvecs) b0 = extract_b0(data, gtab.b0s_mask) for i in range(data.shape[-1]): data[..., i] = np.divide( data[..., i], b0, out=np.zeros_like(data[..., i]).astype(float), where=b0 != 0, ) corrected_data = data elif method.lower() == "n4": deepn4_model = DeepN4(verbose=verbose, use_cuda=use_cuda) deepn4_model.fetch_default_weights() corrected_data = deepn4_model.predict( data, affine, voxsize=voxsize, threshold=threshold ) save_nifti(corrected_out_path, corrected_data, affine, hdr=img.header) logging.info(f"Corrected volume saved as {corrected_out_path}") return io_it dipy-1.11.0/dipy/workflows/reconst.py000066400000000000000000003130531476546756600176270ustar00rootroot00000000000000from ast import literal_eval import logging import os.path from warnings import warn import nibabel as nib import numpy as np from dipy.core.gradients import gradient_table, mask_non_weighted_bvals from dipy.core.ndindex import ndindex from dipy.data import default_sphere, get_sphere from dipy.direction.peaks import peak_directions, peaks_from_model from dipy.io.gradients import read_bvals_bvecs from dipy.io.image import load_nifti, load_nifti_data, save_nifti from dipy.io.peaks import niftis_to_pam, pam_to_niftis, save_pam, tensor_to_pam from dipy.io.utils import nifti1_symmat from dipy.reconst import mapmri from dipy.reconst.csdeconv import ( ConstrainedSDTModel, ConstrainedSphericalDeconvModel, auto_response_ssst, ) from dipy.reconst.dki import DiffusionKurtosisModel, split_dki_param from dipy.reconst.dsi import DiffusionSpectrumDeconvModel, DiffusionSpectrumModel from dipy.reconst.dti import ( TensorModel, axial_diffusivity, color_fa, fractional_anisotropy, geodesic_anisotropy, lower_triangular, mean_diffusivity, mode as get_mode, radial_diffusivity, ) from dipy.reconst.forecast import ForecastModel from dipy.reconst.gqi import GeneralizedQSamplingModel from dipy.reconst.ivim import IvimModel from dipy.reconst.rumba import RumbaSDModel from dipy.reconst.sfm import SparseFascicleModel from dipy.reconst.shm import CsaOdfModel, OpdtModel, QballModel from dipy.testing.decorators import warning_for_keywords from dipy.utils.deprecator import deprecated_params from dipy.workflows.workflow import Workflow class ReconstMAPMRIFlow(Workflow): @classmethod def get_short_name(cls): return "mapmri" def run( self, data_files, bvals_files, bvecs_files, small_delta, big_delta, b0_threshold=50.0, laplacian=True, positivity=True, bval_threshold=2000, save_metrics=(), laplacian_weighting=0.05, radial_order=6, sphere_name=None, relative_peak_threshold=0.5, min_separation_angle=25, npeaks=5, normalize_peaks=False, extract_pam_values=False, out_dir="", out_rtop="rtop.nii.gz", out_lapnorm="lapnorm.nii.gz", out_msd="msd.nii.gz", out_qiv="qiv.nii.gz", out_rtap="rtap.nii.gz", out_rtpp="rtpp.nii.gz", out_ng="ng.nii.gz", out_perng="perng.nii.gz", out_parng="parng.nii.gz", out_pam="mapmri_peaks.pam5", 
out_peaks_dir="mapmri_peaks_dirs.nii.gz", out_peaks_values="mapmri_peaks_values.nii.gz", out_peaks_indices="mapmri_peaks_indices.nii.gz", ): """Workflow for fitting the MAPMRI model (with optional Laplacian regularization). Generates rtop, lapnorm, msd, qiv, rtap, rtpp, non-gaussian (ng), parallel ng, perpendicular ng saved in a nifti format in input files provided by `data_files` and saves the nifti files to an output directory specified by `out_dir`. In order for the MAPMRI workflow to work in the way intended either the Laplacian or positivity or both must be set to True. Parameters ---------- data_files : string Path to the input volume. bvals_files : string Path to the bval files. bvecs_files : string Path to the bvec files. small_delta : float Small delta value used in generation of gradient table of provided bval and bvec. big_delta : float Big delta value used in generation of gradient table of provided bval and bvec. b0_threshold : float, optional Threshold used to find b0 volumes. laplacian : bool, optional Regularize using the Laplacian of the MAP-MRI basis. positivity : bool, optional Constrain the propagator to be positive. bval_threshold : float, optional Sets the b-value threshold to be used in the scale factor estimation. In order for the estimated non-Gaussianity to have meaning this value should set to a lower value (b<2000 s/mm^2) such that the scale factors are estimated on signal points that reasonably represent the spins at Gaussian diffusion. save_metrics : variable string, optional List of metrics to save. Possible values: rtop, laplacian_signal, msd, qiv, rtap, rtpp, ng, perng, parng laplacian_weighting : float, optional Weighting value used in fitting the MAPMRI model in the Laplacian and both model types. radial_order : unsigned int, optional Even value used to set the order of the basis. sphere_name : string, optional Sphere name on which to reconstruct the fODFs. relative_peak_threshold : float, optional Only return peaks greater than ``relative_peak_threshold * m`` where m is the largest peak. min_separation_angle : float, optional The minimum distance between directions. If two peaks are too close only the larger of the two is returned. npeaks : int, optional Maximum number of peaks found. normalize_peaks : bool, optional If true, all peak values are calculated relative to `max(odf)`. extract_pam_values : bool, optional Save or not to save pam volumes as single nifti files. out_dir : string, optional Output directory. out_rtop : string, optional Name of the rtop to be saved. out_lapnorm : string, optional Name of the norm of Laplacian signal to be saved. out_msd : string, optional Name of the msd to be saved. out_qiv : string, optional Name of the qiv to be saved. out_rtap : string, optional Name of the rtap to be saved. out_rtpp : string, optional Name of the rtpp to be saved. out_ng : string, optional Name of the Non-Gaussianity to be saved. out_perng : string, optional Name of the Non-Gaussianity perpendicular to be saved. out_parng : string, optional Name of the Non-Gaussianity parallel to be saved. out_pam : string, optional Name of the peaks volume to be saved. out_peaks_dir : string, optional Name of the peaks directions volume to be saved. out_peaks_values : string, optional Name of the peaks values volume to be saved. out_peaks_indices : string, optional Name of the peaks indices volume to be saved. 
""" io_it = self.get_io_iterator() for ( dwi, bval, bvec, out_rtop, out_lapnorm, out_msd, out_qiv, out_rtap, out_rtpp, out_ng, out_perng, out_parng, opam, opeaks_dir, opeaks_values, opeaks_indices, ) in io_it: logging.info(f"Computing MAPMRI metrics for {dwi}") data, affine = load_nifti(dwi) bvals, bvecs = read_bvals_bvecs(bval, bvec) # If all b-values are smaller or equal to the b0 threshold, it is # assumed that no thresholding is requested if any(mask_non_weighted_bvals(bvals, b0_threshold)): if b0_threshold < bvals.min(): warn( f"b0_threshold (value: {b0_threshold}) is too low, " "increase your b0_threshold. It should be higher than the " f"first b0 value ({bvals.min()}).", stacklevel=2, ) gtab = gradient_table( bvals=bvals, bvecs=bvecs, small_delta=small_delta, big_delta=big_delta, b0_threshold=b0_threshold, ) if not save_metrics: save_metrics = [ "rtop", "laplacian_signal", "msd", "qiv", "rtap", "rtpp", "ng", "perng", "parng", ] kwargs = { "laplacian_regularization": laplacian, "positivity_constraint": positivity, } map_model_aniso = mapmri.MapmriModel( gtab, radial_order=radial_order, laplacian_weighting=laplacian_weighting, bval_threshold=bval_threshold, **kwargs, ) mapfit_aniso = map_model_aniso.fit(data) for name, fname, func in [ ("rtop", out_rtop, mapfit_aniso.rtop), ( "laplacian_signal", out_lapnorm, mapfit_aniso.norm_of_laplacian_signal, ), ("msd", out_msd, mapfit_aniso.msd), ("qiv", out_qiv, mapfit_aniso.qiv), ("rtap", out_rtap, mapfit_aniso.rtap), ("rtpp", out_rtpp, mapfit_aniso.rtpp), ("ng", out_ng, mapfit_aniso.ng), ("perng", out_perng, mapfit_aniso.ng_perpendicular), ("parng", out_parng, mapfit_aniso.ng_parallel), ]: if name in save_metrics: r = func() save_nifti(fname, r.astype(np.float32), affine) logging.info(f"MAPMRI saved in {os.path.abspath(out_dir)}") sphere = default_sphere if sphere_name: sphere = get_sphere(sphere_name) shape = data.shape[:-1] peak_dirs = np.zeros((shape + (npeaks, 3))) peak_values = np.zeros((shape + (npeaks,))) peak_indices = np.zeros((shape + (npeaks,)), dtype=np.int32) peak_indices.fill(-1) odf = mapfit_aniso.odf(sphere) for idx in ndindex(shape): # Get peaks of odf direction, pk, ind = peak_directions( odf[idx], sphere, relative_peak_threshold=relative_peak_threshold, min_separation_angle=min_separation_angle, ) # Calculate peak metrics if pk.shape[0] != 0: n = min(npeaks, pk.shape[0]) peak_dirs[idx][:n] = direction[:n] peak_indices[idx][:n] = ind[:n] peak_values[idx][:n] = pk[:n] if normalize_peaks: peak_values[idx][:n] /= pk[0] peak_dirs[idx] *= peak_values[idx][:, None] pam = niftis_to_pam( affine, peak_dirs, peak_values, peak_indices, odf=odf, sphere=sphere ) save_pam(opam, pam) if extract_pam_values: pam_to_niftis( pam, fname_peaks_dir=opeaks_dir, fname_peaks_values=opeaks_values, fname_peaks_indices=opeaks_indices, reshape_dirs=True, ) class ReconstDtiFlow(Workflow): @classmethod def get_short_name(cls): return "dti" def run( self, input_files, bvalues_files, bvectors_files, mask_files, fit_method="WLS", b0_threshold=50, bvecs_tol=0.01, npeaks=1, sigma=None, save_metrics=None, nifti_tensor=True, extract_pam_values=False, out_dir="", out_tensor="tensors.nii.gz", out_fa="fa.nii.gz", out_ga="ga.nii.gz", out_rgb="rgb.nii.gz", out_md="md.nii.gz", out_ad="ad.nii.gz", out_rd="rd.nii.gz", out_mode="mode.nii.gz", out_evec="evecs.nii.gz", out_eval="evals.nii.gz", out_pam="peaks.pam5", out_peaks_dir="peaks_dirs.nii.gz", out_peaks_values="peaks_values.nii.gz", out_peaks_indices="peaks_indices.nii.gz", out_sphere="sphere.txt", out_qa="qa.nii.gz", 
): """Workflow for tensor reconstruction and for computing DTI metrics using Weighted Least-Squares. Performs a tensor reconstruction :footcite:p:`Basser1994b`, :footcite:p:`Basser1996` on the files by 'globing' ``input_files`` and saves the DTI metrics in a directory specified by ``out_dir``. Parameters ---------- input_files : string Path to the input volumes. This path may contain wildcards to process multiple inputs at once. bvalues_files : string Path to the bvalues files. This path may contain wildcards to use multiple bvalues files at once. bvectors_files : string Path to the bvectors files. This path may contain wildcards to use multiple bvectors files at once. mask_files : string Path to the input masks. This path may contain wildcards to use multiple masks at once. fit_method : string, optional can be one of the following: 'WLS' for weighted least squares :footcite:p:`Chung2006` 'LS' or 'OLS' for ordinary least squares :footcite:p:`Chung2006` 'NLLS' for non-linear least-squares 'RT' or 'restore' or 'RESTORE' for RESTORE robust tensor fitting :footcite:p:`Chang2005`. b0_threshold : float, optional Threshold used to find b0 volumes. bvecs_tol : float, optional Threshold used to check that norm(bvec) = 1 +/- bvecs_tol npeaks : int, optional Number of peaks/eigen vectors to save in each voxel. DTI generates 3 eigen values and eigen vectors. The principal eigenvector is saved by default. sigma : float, optional An estimate of the variance. :footcite:t:`Chang2005` recommend to use 1.5267 * std(background_noise), where background_noise is estimated from some part of the image known to contain no signal (only noise) b-vectors are unit vectors. save_metrics : variable string, optional List of metrics to save. Possible values: fa, ga, rgb, md, ad, rd, mode, tensor, evec, eval nifti_tensor : bool, optional Whether the tensor is saved in the standard Nifti format or in an alternate format that is used by other software (e.g., FSL): a 4-dimensional volume (shape (i, j, k, 6)) with Dxx, Dxy, Dxz, Dyy, Dyz, Dzz on the last dimension. extract_pam_values : bool, optional Save or not to save pam volumes as single nifti files. out_dir : string, optional Output directory. out_tensor : string, optional Name of the tensors volume to be saved. Per default, this will be saved following the nifti standard: with the tensor elements as Dxx, Dxy, Dyy, Dxz, Dyz, Dzz on the last (5th) dimension of the volume (shape: (i, j, k, 1, 6)). If `nifti_tensor` is False, this will be saved in an alternate format that is used by other software (e.g., FSL): a 4-dimensional volume (shape (i, j, k, 6)) with Dxx, Dxy, Dxz, Dyy, Dyz, Dzz on the last dimension. out_fa : string, optional Name of the fractional anisotropy volume to be saved. out_ga : string, optional Name of the geodesic anisotropy volume to be saved. out_rgb : string, optional Name of the color fa volume to be saved. out_md : string, optional Name of the mean diffusivity volume to be saved. out_ad : string, optional Name of the axial diffusivity volume to be saved. out_rd : string, optional Name of the radial diffusivity volume to be saved. out_mode : string, optional Name of the mode volume to be saved. out_evec : string, optional Name of the eigenvectors volume to be saved. out_eval : string, optional Name of the eigenvalues to be saved. out_pam : string, optional Name of the peaks volume to be saved. out_peaks_dir : string, optional Name of the peaks directions volume to be saved. 
out_peaks_values : string, optional Name of the peaks values volume to be saved. out_peaks_indices : string, optional Name of the peaks indices volume to be saved. out_sphere : string, optional Sphere vertices name to be saved. out_qa : string, optional Name of the Quantitative Anisotropy to be saved. References ---------- .. footbibliography:: """ save_metrics = save_metrics or [] io_it = self.get_io_iterator() for ( dwi, bval, bvec, mask, otensor, ofa, oga, orgb, omd, oad, orad, omode, oevecs, oevals, opam, opeaks_dir, opeaks_values, opeaks_indices, osphere, oqa, ) in io_it: logging.info(f"Computing DTI metrics for {dwi}") data, affine = load_nifti(dwi) if mask is not None: mask = load_nifti_data(mask).astype(bool) optional_args = {} if fit_method in ["RT", "restore", "RESTORE", "NLLS"]: optional_args["sigma"] = sigma tenfit, tenmodel, _ = self.get_fitted_tensor( data, mask, bval, bvec, b0_threshold=b0_threshold, bvecs_tol=bvecs_tol, fit_method=fit_method, optional_args=optional_args, ) if not save_metrics: save_metrics = [ "fa", "md", "rd", "ad", "ga", "rgb", "mode", "evec", "eval", "tensor", ] FA = fractional_anisotropy(tenfit.evals) FA[np.isnan(FA)] = 0 FA = np.clip(FA, 0, 1) if "tensor" in save_metrics: tensor_vals = lower_triangular(tenfit.quadratic_form) if nifti_tensor: ten_img = nifti1_symmat(tensor_vals, affine=affine) else: alt_order = [0, 1, 3, 2, 4, 5] ten_img = nib.Nifti1Image( tensor_vals[..., alt_order].astype(np.float32), affine ) nib.save(ten_img, otensor) if "fa" in save_metrics: save_nifti(ofa, FA.astype(np.float32), affine) if "ga" in save_metrics: GA = geodesic_anisotropy(tenfit.evals) save_nifti(oga, GA.astype(np.float32), affine) if "rgb" in save_metrics: RGB = color_fa(FA, tenfit.evecs) save_nifti(orgb, np.array(255 * RGB, "uint8"), affine) if "md" in save_metrics: MD = mean_diffusivity(tenfit.evals) save_nifti(omd, MD.astype(np.float32), affine) if "ad" in save_metrics: AD = axial_diffusivity(tenfit.evals) save_nifti(oad, AD.astype(np.float32), affine) if "rd" in save_metrics: RD = radial_diffusivity(tenfit.evals) save_nifti(orad, RD.astype(np.float32), affine) if "mode" in save_metrics: MODE = get_mode(tenfit.quadratic_form) save_nifti(omode, MODE.astype(np.float32), affine) if "evec" in save_metrics: save_nifti(oevecs, tenfit.evecs.astype(np.float32), affine) if "eval" in save_metrics: save_nifti(oevals, tenfit.evals.astype(np.float32), affine) if save_metrics: msg = f"DTI metrics saved to {os.path.abspath(out_dir)}" logging.info(msg) for metric in save_metrics: logging.info(self.last_generated_outputs[f"out_{metric}"]) pam = tensor_to_pam( tenfit.evals.astype(np.float32), tenfit.evecs.astype(np.float32), affine, sphere=default_sphere, generate_peaks_indices=False, npeaks=npeaks, ) save_pam(opam, pam) if extract_pam_values: pam_to_niftis( pam, fname_peaks_dir=opeaks_dir, fname_peaks_values=opeaks_values, fname_peaks_indices=opeaks_indices, fname_sphere=osphere, fname_qa=oqa, reshape_dirs=True, ) def get_fitted_tensor( self, data, mask, bval, bvec, b0_threshold=50, bvecs_tol=0.01, fit_method="WLS", optional_args=None, ): logging.info("Tensor estimation...") bvals, bvecs = read_bvals_bvecs(bval, bvec) gtab = gradient_table( bvals, bvecs=bvecs, b0_threshold=b0_threshold, atol=bvecs_tol ) tenmodel = TensorModel(gtab, fit_method=fit_method, **optional_args) tenfit = tenmodel.fit(data, mask=mask) return tenfit, tenmodel, gtab class ReconstDsiFlow(Workflow): @classmethod def get_short_name(cls): return "dsi" def run( self, input_files, bvalues_files, bvectors_files, 
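        # A minimal usage sketch, analogous to the other reconstruction
        # workflows (hypothetical paths; remove_convolution=True switches to
        # the deconvolved DSI variant):
        #
        #     ReconstDsiFlow().run("dwi.nii.gz", "dwi.bval", "dwi.bvec",
        #                          "mask.nii.gz", remove_convolution=True,
        #                          out_dir="dsi_out")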
        mask_files,
        qgrid_size=17,
        r_start=2.1,
        r_end=6.0,
        r_step=0.2,
        filter_width=32,
        remove_convolution=False,
        normalize_peaks=False,
        sphere_name=None,
        relative_peak_threshold=0.5,
        min_separation_angle=25,
        sh_order_max=8,
        extract_pam_values=False,
        parallel=False,
        num_processes=None,
        out_dir="",
        out_pam="peaks.pam5",
        out_shm="shm.nii.gz",
        out_peaks_dir="peaks_dirs.nii.gz",
        out_peaks_values="peaks_values.nii.gz",
        out_peaks_indices="peaks_indices.nii.gz",
        out_gfa="gfa.nii.gz",
        out_sphere="sphere.txt",
        out_b="B.nii.gz",
        out_qa="qa.nii.gz",
    ):
        """Diffusion Spectrum Imaging (DSI) reconstruction workflow.

        In DSI, the diffusion signal is sampled on a Cartesian grid in
        q-space. When using ``remove_convolution=True``, the convolution on
        the DSI propagator that is caused by the truncation of the q-space in
        the DSI sampling is removed.

        Parameters
        ----------
        input_files : string
            Path to the input volumes. This path may contain wildcards to
            process multiple inputs at once.
        bvalues_files : string
            Path to the bvalues files. This path may contain wildcards to use
            multiple bvalues files at once.
        bvectors_files : string
            Path to the bvectors files. This path may contain wildcards to
            use multiple bvectors files at once.
        mask_files : string
            Path to the input masks. This path may contain wildcards to use
            multiple masks at once.
        qgrid_size : int, optional
            Has to be an odd number. Sets the size of the q-space grid. For
            example, if qgrid_size is 17 then the shape of the grid will be
            ``(17, 17, 17)``.
        r_start : float, optional
            ODF is sampled radially in the PDF. This parameter sets where the
            sampling should start.
        r_end : float, optional
            Radial endpoint of ODF sampling.
        r_step : float, optional
            Step size of the ODF sampling from r_start to r_end.
        filter_width : float, optional
            Strength of the Hanning filter.
        remove_convolution : bool, optional
            Whether to remove the convolution on the DSI propagator that is
            caused by the truncation of the q-space in the DSI sampling.
        normalize_peaks : bool, optional
            Whether to normalize the peaks.
        sphere_name : string, optional
            Sphere name on which to reconstruct the fODFs.
        relative_peak_threshold : float, optional
            Only return peaks greater than ``relative_peak_threshold * m``
            where m is the largest peak.
        min_separation_angle : float, optional
            The minimum distance between directions. If two peaks are too
            close only the larger of the two is returned.
        sh_order_max : int, optional
            Maximum spherical harmonics order (l) used in the spherical
            harmonics fit.
        extract_pam_values : bool, optional
            Whether or not to save pam volumes as single nifti files.
        parallel : bool, optional
            Whether to use parallelization in peak-finding during the
            calibration procedure.
        num_processes : int, optional
            If `parallel` is True, the number of subprocesses to use (default
            multiprocessing.cpu_count()). If < 0 the maximal number of cores
            minus ``num_processes + 1`` is used (enter -1 to use as many
            cores as possible). 0 raises an error.
        out_dir : string, optional
            Output directory.
        out_pam : string, optional
            Name of the peaks volume to be saved.
        out_shm : string, optional
            Name of the spherical harmonics volume to be saved.
        out_peaks_dir : string, optional
            Name of the peaks directions volume to be saved.
        out_peaks_values : string, optional
            Name of the peaks values volume to be saved.
        out_peaks_indices : string, optional
            Name of the peaks indices volume to be saved.
        out_gfa : string, optional
            Name of the generalized FA volume to be saved.
        out_sphere : string, optional
            Sphere vertices name to be saved.
out_b : string, optional Name of the B Matrix to be saved. out_qa : string, optional Name of the Quantitative Anisotropy to be saved. """ io_it = self.get_io_iterator() if remove_convolution: filter_width = np.inf for ( dwi, bval, bvec, mask, opam, oshm, opeaks_dir, opeaks_values, opeaks_indices, ogfa, osphere, ob, oqa, ) in io_it: logging.info(f"Computing DSI Model for {dwi}") data, affine = load_nifti(dwi) bvals, bvecs = read_bvals_bvecs(bval, bvec) gtab = gradient_table(bvals, bvecs=bvecs) mask = load_nifti_data(mask).astype(bool) DSIModel = ( DiffusionSpectrumDeconvModel if remove_convolution else DiffusionSpectrumModel ) dsi_model = DSIModel( gtab, qgrid_size=qgrid_size, r_start=r_start, r_end=r_end, r_step=r_step, filter_width=filter_width, normalize_peaks=normalize_peaks, ) peaks_sphere = default_sphere if sphere_name is not None: peaks_sphere = get_sphere(name=sphere_name) peaks_dsi = peaks_from_model( model=dsi_model, data=data, sphere=peaks_sphere, relative_peak_threshold=relative_peak_threshold, min_separation_angle=min_separation_angle, mask=mask, return_sh=True, sh_order_max=sh_order_max, normalize_peaks=normalize_peaks, parallel=parallel, num_processes=num_processes, ) peaks_dsi.affine = affine save_pam(opam, peaks_dsi) logging.info("DSI computation completed.") if extract_pam_values: pam_to_niftis( peaks_dsi, fname_shm=oshm, fname_peaks_dir=opeaks_dir, fname_peaks_values=opeaks_values, fname_peaks_indices=opeaks_indices, fname_gfa=ogfa, fname_sphere=osphere, fname_b=ob, fname_qa=oqa, reshape_dirs=True, ) logging.info(f"DSI metrics saved to {os.path.abspath(out_dir)}") class ReconstCSDFlow(Workflow): @classmethod def get_short_name(cls): return "csd" def run( self, input_files, bvalues_files, bvectors_files, mask_files, b0_threshold=50.0, bvecs_tol=0.01, roi_center=None, roi_radii=10, fa_thr=0.7, frf=None, sphere_name=None, relative_peak_threshold=0.5, min_separation_angle=25, sh_order_max=8, parallel=False, extract_pam_values=False, num_processes=None, out_dir="", out_pam="peaks.pam5", out_shm="shm.nii.gz", out_peaks_dir="peaks_dirs.nii.gz", out_peaks_values="peaks_values.nii.gz", out_peaks_indices="peaks_indices.nii.gz", out_gfa="gfa.nii.gz", out_sphere="sphere.txt", out_b="B.nii.gz", out_qa="qa.nii.gz", ): """Constrained spherical deconvolution. See :footcite:p:`Tournier2007` for further details about the method. Parameters ---------- input_files : string Path to the input volumes. This path may contain wildcards to process multiple inputs at once. bvalues_files : string Path to the bvalues files. This path may contain wildcards to use multiple bvalues files at once. bvectors_files : string Path to the bvectors files. This path may contain wildcards to use multiple bvectors files at once. mask_files : string Path to the input masks. This path may contain wildcards to use multiple masks at once. (default: No mask used) b0_threshold : float, optional Threshold used to find b0 volumes. bvecs_tol : float, optional Bvecs should be unit vectors. roi_center : variable int, optional Center of ROI in data. If center is None, it is assumed that it is the center of the volume with shape `data.shape[:3]`. roi_radii : int or array-like, optional radii of cuboid ROI in voxels. fa_thr : float, optional FA threshold for calculating the response function. frf : variable float, optional Fiber response function can be for example inputted as 15 4 4 (from the command line) or [15, 4, 4] from a Python script to be converted to float and multiplied by 10**-4 . 
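            For instance (mirroring the conversion in the code below), an
            input of 15 4 4 becomes the eigenvalue triple
            (0.0015, 0.0004, 0.0004), with a smallest-to-largest ratio of
            4 / 15.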
            If None the fiber response function will be computed
            automatically.
        sphere_name : string, optional
            Sphere name on which to reconstruct the fODFs.
        relative_peak_threshold : float, optional
            Only return peaks greater than ``relative_peak_threshold * m``
            where m is the largest peak.
        min_separation_angle : float, optional
            The minimum distance between directions. If two peaks are too
            close only the larger of the two is returned.
        sh_order_max : int, optional
            Spherical harmonics order (l) used in the CSD fit.
        parallel : bool, optional
            Whether to use parallelization in peak-finding during the
            calibration procedure.
        extract_pam_values : bool, optional
            Whether or not to save pam volumes as single nifti files.
        num_processes : int, optional
            If `parallel` is True, the number of subprocesses to use (default
            multiprocessing.cpu_count()). If < 0 the maximal number of cores
            minus ``num_processes + 1`` is used (enter -1 to use as many
            cores as possible). 0 raises an error.
        out_dir : string, optional
            Output directory.
        out_pam : string, optional
            Name of the peaks volume to be saved.
        out_shm : string, optional
            Name of the spherical harmonics volume to be saved.
        out_peaks_dir : string, optional
            Name of the peaks directions volume to be saved.
        out_peaks_values : string, optional
            Name of the peaks values volume to be saved.
        out_peaks_indices : string, optional
            Name of the peaks indices volume to be saved.
        out_gfa : string, optional
            Name of the generalized FA volume to be saved.
        out_sphere : string, optional
            Sphere vertices name to be saved.
        out_b : string, optional
            Name of the B Matrix to be saved.
        out_qa : string, optional
            Name of the Quantitative Anisotropy to be saved.

        References
        ----------
        .. footbibliography::

        """
        io_it = self.get_io_iterator()

        for (
            dwi,
            bval,
            bvec,
            maskfile,
            opam,
            oshm,
            opeaks_dir,
            opeaks_values,
            opeaks_indices,
            ogfa,
            osphere,
            ob,
            oqa,
        ) in io_it:
            logging.info(f"Loading {dwi}")
            data, affine = load_nifti(dwi)
            bvals, bvecs = read_bvals_bvecs(bval, bvec)

            # If all b-values are smaller or equal to the b0 threshold, it is
            # assumed that no thresholding is requested
            if any(mask_non_weighted_bvals(bvals, b0_threshold)):
                if b0_threshold < bvals.min():
                    warn(
                        f"b0_threshold (value: {b0_threshold}) is too low, "
                        "increase your b0_threshold. It should be higher than the "
                        f"first b0 value ({bvals.min()}).",
                        stacklevel=2,
                    )
            gtab = gradient_table(
                bvals, bvecs=bvecs, b0_threshold=b0_threshold, atol=bvecs_tol
            )
            mask_vol = load_nifti_data(maskfile).astype(bool)

            n_params = ((sh_order_max + 1) * (sh_order_max + 2)) / 2
            if data.shape[-1] < n_params:
                raise ValueError(
                    f"You need at least {n_params} unique DWI volumes to "
                    f"compute fiber odfs. You currently have: {data.shape[-1]}"
                    " DWI volumes."
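                    # Worked example for the check above: the number of
                    # even-order spherical harmonic coefficients is
                    # (l_max + 1) * (l_max + 2) / 2, so the default
                    # sh_order_max=8 requires 9 * 10 / 2 = 45 DWI volumes.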
) if frf is None: logging.info("Computing response function") if roi_center is not None: logging.info(f"Response ROI center:\n{roi_center}") logging.info(f"Response ROI radii:\n{roi_radii}") response, ratio = auto_response_ssst( gtab, data, roi_center=roi_center, roi_radii=roi_radii, fa_thr=fa_thr, ) response = list(response) else: logging.info("Using response function") if isinstance(frf, str): l01 = np.array(literal_eval(frf), dtype=np.float64) else: l01 = np.array(frf, dtype=np.float64) l01 *= 10**-4 response = np.array([l01[0], l01[1], l01[1]]) ratio = l01[1] / l01[0] response = (response, ratio) logging.info( f"Eigenvalues for the frf of the input data are :{response[0]}" ) logging.info(f"Ratio for smallest to largest eigen value is {ratio}") peaks_sphere = default_sphere if sphere_name is not None: peaks_sphere = get_sphere(name=sphere_name) logging.info("CSD computation started.") csd_model = ConstrainedSphericalDeconvModel( gtab, response, sh_order_max=sh_order_max ) peaks_csd = peaks_from_model( model=csd_model, data=data, sphere=peaks_sphere, relative_peak_threshold=relative_peak_threshold, min_separation_angle=min_separation_angle, mask=mask_vol, return_sh=True, sh_order_max=sh_order_max, normalize_peaks=True, parallel=parallel, num_processes=num_processes, ) peaks_csd.affine = affine save_pam(opam, peaks_csd) logging.info("CSD computation completed.") if extract_pam_values: pam_to_niftis( peaks_csd, fname_shm=oshm, fname_peaks_dir=opeaks_dir, fname_peaks_values=opeaks_values, fname_peaks_indices=opeaks_indices, fname_gfa=ogfa, fname_sphere=osphere, fname_b=ob, fname_qa=oqa, reshape_dirs=True, ) dname_ = os.path.dirname(opam) if dname_ == "": logging.info("Pam5 file saved in current directory") else: logging.info(f"Pam5 file saved in {dname_}") return io_it class ReconstQBallBaseFlow(Workflow): @classmethod def get_short_name(cls): return "qballbase" @deprecated_params("sh_order", new_name="sh_order_max", since="1.9", until="2.0") def run( self, input_files, bvalues_files, bvectors_files, mask_files, *, method="csa", smooth=0.006, min_signal=1e-5, assume_normed=False, b0_threshold=50.0, bvecs_tol=0.01, sphere_name=None, relative_peak_threshold=0.5, min_separation_angle=25, sh_order_max=8, parallel=False, extract_pam_values=False, num_processes=None, out_dir="", out_pam="peaks.pam5", out_shm="shm.nii.gz", out_peaks_dir="peaks_dirs.nii.gz", out_peaks_values="peaks_values.nii.gz", out_peaks_indices="peaks_indices.nii.gz", out_sphere="sphere.txt", out_gfa="gfa.nii.gz", out_b="B.nii.gz", out_qa="qa.nii.gz", ): """Constant Solid Angle. See :footcite:p:`Aganj2009` for further details about the method. Parameters ---------- input_files : string Path to the input volumes. This path may contain wildcards to process multiple inputs at once. bvalues_files : string Path to the bvalues files. This path may contain wildcards to use multiple bvalues files at once. bvectors_files : string Path to the bvectors files. This path may contain wildcards to use multiple bvectors files at once. mask_files : string Path to the input masks. This path may contain wildcards to use multiple masks at once. (default: No mask used) method : string, optional Method to use for the reconstruction. Can be one of the following: 'csa' for Constant Solid Angle reconstruction 'qball' for Q-Ball reconstruction 'opdt' for Orientation Probability Density Transform reconstruction smooth : float, optional The regularization parameter of the model. 
min_signal : float, optional During fitting, all signal values less than `min_signal` are clipped to `min_signal`. This is done primarily to avoid values less than or equal to zero when taking logs. assume_normed : bool, optional If True, clipping and normalization of the data with respect to the mean B0 signal are skipped during mode fitting. This is an advanced feature and should be used with care. b0_threshold : float, optional Threshold used to find b0 volumes. bvecs_tol : float, optional Threshold used so that norm(bvec)=1. sphere_name : string, optional Sphere name on which to reconstruct the fODFs. relative_peak_threshold : float, optional Only return peaks greater than ``relative_peak_threshold * m`` where m is the largest peak. min_separation_angle : float, optional The minimum distance between directions. If two peaks are too close only the larger of the two is returned. sh_order_max : int, optional Spherical harmonics order (l) used in the CSA fit. parallel : bool, optional Whether to use parallelization in peak-finding during the calibration procedure. extract_pam_values : bool, optional Whether or not to save pam volumes as single nifti files. num_processes : int, optional If `parallel` is True, the number of subprocesses to use (default multiprocessing.cpu_count()). If < 0 the maximal number of cores minus ``num_processes + 1`` is used (enter -1 to use as many cores as possible). 0 raises an error. out_dir : string, optional Output directory. out_pam : string, optional Name of the peaks volume to be saved. out_shm : string, optional Name of the spherical harmonics volume to be saved. out_peaks_dir : string, optional Name of the peaks directions volume to be saved. out_peaks_values : string, optional Name of the peaks values volume to be saved. out_peaks_indices : string, optional Name of the peaks indices volume to be saved. out_sphere : string, optional Sphere vertices name to be saved. out_gfa : string, optional Name of the generalized FA volume to be saved. out_b : string, optional Name of the B Matrix to be saved. out_qa : string, optional Name of the Quantitative Anisotropy to be saved. References ---------- .. footbibliography:: """ io_it = self.get_io_iterator() if method.lower() not in ["csa", "qball", "opdt"]: raise ValueError( f"Method {method} not recognized. " "Please choose between 'csa', 'qball', 'opdt'." ) model_list = { "csa": CsaOdfModel, "qball": QballModel, "opdt": OpdtModel, } for ( dwi, bval, bvec, maskfile, opam, oshm, opeaks_dir, opeaks_values, opeaks_indices, osphere, ogfa, ob, oqa, ) in io_it: logging.info(f"Loading {dwi}") data, affine = load_nifti(dwi) bvals, bvecs = read_bvals_bvecs(bval, bvec) # If all b-values are smaller or equal to the b0 threshold, it is # assumed that no thresholding is requested if any(mask_non_weighted_bvals(bvals, b0_threshold)): if b0_threshold < bvals.min(): warn( f"b0_threshold (value: {b0_threshold}) is too low, " "increase your b0_threshold. 
                        It should be higher than the "
                        f"first b0 value ({bvals.min()}).",
                        stacklevel=2,
                    )
            gtab = gradient_table(
                bvals, bvecs=bvecs, b0_threshold=b0_threshold, atol=bvecs_tol
            )
            mask_vol = load_nifti_data(maskfile).astype(bool)

            peaks_sphere = default_sphere
            if sphere_name is not None:
                peaks_sphere = get_sphere(name=sphere_name)

            logging.info(f"Starting {method.upper()} computations {dwi}")

            qball_base_model = model_list[method.lower()](
                gtab,
                sh_order_max,
                smooth=smooth,
                min_signal=min_signal,
                assume_normed=assume_normed,
            )

            peaks_qballbase = peaks_from_model(
                model=qball_base_model,
                data=data,
                sphere=peaks_sphere,
                relative_peak_threshold=relative_peak_threshold,
                min_separation_angle=min_separation_angle,
                mask=mask_vol,
                return_sh=True,
                sh_order_max=sh_order_max,
                normalize_peaks=True,
                parallel=parallel,
                num_processes=num_processes,
            )
            peaks_qballbase.affine = affine

            save_pam(opam, peaks_qballbase)

            logging.info(f"Finished {method.upper()} {dwi}")

            if extract_pam_values:
                pam_to_niftis(
                    peaks_qballbase,
                    fname_shm=oshm,
                    fname_peaks_dir=opeaks_dir,
                    fname_peaks_values=opeaks_values,
                    fname_peaks_indices=opeaks_indices,
                    fname_sphere=osphere,
                    fname_gfa=ogfa,
                    fname_b=ob,
                    fname_qa=oqa,
                    reshape_dirs=True,
                )

            dname_ = os.path.dirname(opam)
            if dname_ == "":
                logging.info("Pam5 file saved in current directory")
            else:
                logging.info(f"Pam5 file saved in {dname_}")

        return io_it


class ReconstDkiFlow(Workflow):
    @classmethod
    def get_short_name(cls):
        return "dki"

    def run(
        self,
        input_files,
        bvalues_files,
        bvectors_files,
        mask_files,
        fit_method="WLS",
        b0_threshold=50.0,
        sigma=None,
        save_metrics=None,
        extract_pam_values=False,
        npeaks=5,
        out_dir="",
        out_dt_tensor="dti_tensors.nii.gz",
        out_fa="fa.nii.gz",
        out_ga="ga.nii.gz",
        out_rgb="rgb.nii.gz",
        out_md="md.nii.gz",
        out_ad="ad.nii.gz",
        out_rd="rd.nii.gz",
        out_mode="mode.nii.gz",
        out_evec="evecs.nii.gz",
        out_eval="evals.nii.gz",
        out_dk_tensor="dki_tensors.nii.gz",
        out_mk="mk.nii.gz",
        out_ak="ak.nii.gz",
        out_rk="rk.nii.gz",
        out_pam="peaks.pam5",
        out_peaks_dir="peaks_dirs.nii.gz",
        out_peaks_values="peaks_values.nii.gz",
        out_peaks_indices="peaks_indices.nii.gz",
        out_sphere="sphere.txt",
    ):
        """Workflow for Diffusion Kurtosis reconstruction and for computing
        DKI metrics.

        Performs a DKI reconstruction :footcite:p:`Tabesh2011`,
        :footcite:p:`Jensen2005` on the files by 'globbing' ``input_files``
        and saves the DKI metrics in a directory specified by ``out_dir``.

        Parameters
        ----------
        input_files : string
            Path to the input volumes. This path may contain wildcards to
            process multiple inputs at once.
        bvalues_files : string
            Path to the bvalues files. This path may contain wildcards to use
            multiple bvalues files at once.
        bvectors_files : string
            Path to the bvectors files. This path may contain wildcards to
            use multiple bvectors files at once.
        mask_files : string
            Path to the input masks. This path may contain wildcards to use
            multiple masks at once. (default: No mask used)
        fit_method : string, optional
            Can be one of the following:
            'OLS' or 'ULLS' for ordinary least squares
            'WLS' or 'UWLLS' for weighted least squares
        b0_threshold : float, optional
            Threshold used to find b0 volumes.
        sigma : float, optional
            An estimate of the variance. :footcite:t:`Chang2005` recommend
            using 1.5267 * std(background_noise), where background_noise is
            estimated from some part of the image known to contain no signal
            (only noise).
        save_metrics : variable string, optional
            List of metrics to save.
            Possible values: fa, ga, rgb, md, ad, rd, mode, tensor, evec, eval
        extract_pam_values : bool, optional
            Whether or not to save pam volumes as single nifti files.
        npeaks : int, optional
            Number of peaks to fit in each voxel.
        out_dir : string, optional
            Output directory.
        out_dt_tensor : string, optional
            Name of the diffusion tensor volume to be saved.
        out_dk_tensor : string, optional
            Name of the kurtosis tensor volume to be saved.
        out_fa : string, optional
            Name of the fractional anisotropy volume to be saved.
        out_ga : string, optional
            Name of the geodesic anisotropy volume to be saved.
        out_rgb : string, optional
            Name of the color fa volume to be saved.
        out_md : string, optional
            Name of the mean diffusivity volume to be saved.
        out_ad : string, optional
            Name of the axial diffusivity volume to be saved.
        out_rd : string, optional
            Name of the radial diffusivity volume to be saved.
        out_mode : string, optional
            Name of the mode volume to be saved.
        out_evec : string, optional
            Name of the eigenvectors volume to be saved.
        out_eval : string, optional
            Name of the eigenvalues to be saved.
        out_mk : string, optional
            Name of the mean kurtosis to be saved.
        out_ak : string, optional
            Name of the axial kurtosis to be saved.
        out_rk : string, optional
            Name of the radial kurtosis to be saved.
        out_pam : string, optional
            Name of the peaks volume to be saved.
        out_peaks_dir : string, optional
            Name of the peaks directions volume to be saved.
        out_peaks_values : string, optional
            Name of the peaks values volume to be saved.
        out_peaks_indices : string, optional
            Name of the peaks indices volume to be saved.
        out_sphere : string, optional
            Sphere vertices name to be saved.

        References
        ----------
        .. footbibliography::

        """
        save_metrics = save_metrics or []

        io_it = self.get_io_iterator()

        for (
            dwi,
            bval,
            bvec,
            mask,
            otensor,
            ofa,
            oga,
            orgb,
            omd,
            oad,
            orad,
            omode,
            oevecs,
            oevals,
            odk_tensor,
            omk,
            oak,
            ork,
            opam,
            opeaks_dir,
            opeaks_values,
            opeaks_indices,
            osphere,
        ) in io_it:
            logging.info(f"Computing DKI metrics for {dwi}")
            data, affine = load_nifti(dwi)

            if mask is not None:
                mask = load_nifti_data(mask).astype(bool)

            optional_args = {}
            if fit_method in ["RT", "restore", "RESTORE", "NLLS"]:
                optional_args["sigma"] = sigma

            dkfit, dkmodel, _ = self.get_fitted_tensor(
                data,
                mask,
                bval,
                bvec,
                b0_threshold=b0_threshold,
                fit_method=fit_method,
                optional_args=optional_args,
            )

            if not save_metrics:
                save_metrics = [
                    "mk",
                    "rk",
                    "ak",
                    "fa",
                    "md",
                    "rd",
                    "ad",
                    "ga",
                    "rgb",
                    "mode",
                    "evec",
                    "eval",
                    "dt_tensor",
                    "dk_tensor",
                ]

            evals, evecs, kt = split_dki_param(dkfit.model_params)
            FA = fractional_anisotropy(evals)
            FA[np.isnan(FA)] = 0
            FA = np.clip(FA, 0, 1)

            if "dt_tensor" in save_metrics:
                tensor_vals = lower_triangular(dkfit.quadratic_form)
                correct_order = [0, 1, 3, 2, 4, 5]
                tensor_vals_reordered = tensor_vals[..., correct_order]
                save_nifti(otensor, tensor_vals_reordered.astype(np.float32), affine)

            if "dk_tensor" in save_metrics:
                save_nifti(odk_tensor, dkfit.kt.astype(np.float32), affine)

            if "fa" in save_metrics:
                save_nifti(ofa, FA.astype(np.float32), affine)

            if "ga" in save_metrics:
                GA = geodesic_anisotropy(dkfit.evals)
                save_nifti(oga, GA.astype(np.float32), affine)

            if "rgb" in save_metrics:
                RGB = color_fa(FA, dkfit.evecs)
                save_nifti(orgb, np.array(255 * RGB, "uint8"), affine)

            if "md" in save_metrics:
                MD = mean_diffusivity(dkfit.evals)
                save_nifti(omd, MD.astype(np.float32), affine)

            if "ad" in save_metrics:
                AD = axial_diffusivity(dkfit.evals)
                save_nifti(oad, AD.astype(np.float32), affine)

            if "rd" in save_metrics:
                RD = radial_diffusivity(dkfit.evals)
                save_nifti(orad,
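                # RD (radial diffusivity) is cast to float32 before saving,
                # matching the other scalar maps written above.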
                           RD.astype(np.float32), affine)

            if "mode" in save_metrics:
                MODE = get_mode(dkfit.quadratic_form)
                save_nifti(omode, MODE.astype(np.float32), affine)

            if "evec" in save_metrics:
                save_nifti(oevecs, dkfit.evecs.astype(np.float32), affine)

            if "eval" in save_metrics:
                save_nifti(oevals, dkfit.evals.astype(np.float32), affine)

            if "mk" in save_metrics:
                save_nifti(omk, dkfit.mk().astype(np.float32), affine)

            if "ak" in save_metrics:
                save_nifti(oak, dkfit.ak().astype(np.float32), affine)

            if "rk" in save_metrics:
                save_nifti(ork, dkfit.rk().astype(np.float32), affine)

            logging.info(f"DKI metrics saved in {os.path.dirname(oevals)}")

            pam = tensor_to_pam(
                dkfit.evals.astype(np.float32),
                dkfit.evecs.astype(np.float32),
                affine,
                sphere=default_sphere,
                generate_peaks_indices=False,
                npeaks=npeaks,
            )
            save_pam(opam, pam)

            if extract_pam_values:
                pam_to_niftis(
                    pam,
                    fname_peaks_dir=opeaks_dir,
                    fname_peaks_values=opeaks_values,
                    fname_peaks_indices=opeaks_indices,
                    fname_sphere=osphere,
                    reshape_dirs=True,
                )

    def get_fitted_tensor(
        self,
        data,
        mask,
        bval,
        bvec,
        b0_threshold=50,
        fit_method="WLS",
        optional_args=None,
    ):
        logging.info("Diffusion kurtosis estimation...")
        bvals, bvecs = read_bvals_bvecs(bval, bvec)

        # If all b-values are smaller or equal to the b0 threshold, it is
        # assumed that no thresholding is requested
        if any(mask_non_weighted_bvals(bvals, b0_threshold)):
            if b0_threshold < bvals.min():
                warn(
                    f"b0_threshold (value: {b0_threshold}) is too low, "
                    "increase your b0_threshold. It should be higher than the "
                    f"first b0 value ({bvals.min()}).",
                    stacklevel=2,
                )

        gtab = gradient_table(bvals, bvecs=bvecs, b0_threshold=b0_threshold)
        dkmodel = DiffusionKurtosisModel(gtab, fit_method=fit_method, **optional_args)
        dkfit = dkmodel.fit(data, mask=mask)

        return dkfit, dkmodel, gtab


class ReconstIvimFlow(Workflow):
    @classmethod
    def get_short_name(cls):
        return "ivim"

    def run(
        self,
        input_files,
        bvalues_files,
        bvectors_files,
        mask_files,
        split_b_D=400,
        split_b_S0=200,
        b0_threshold=0,
        save_metrics=None,
        out_dir="",
        out_S0_predicted="S0_predicted.nii.gz",
        out_perfusion_fraction="perfusion_fraction.nii.gz",
        out_D_star="D_star.nii.gz",
        out_D="D.nii.gz",
    ):
        """Workflow for Intra-voxel Incoherent Motion reconstruction and for
        computing IVIM metrics.

        Performs an IVIM reconstruction :footcite:p:`LeBihan1988`,
        :footcite:p:`Stejskal1965` on the files by 'globbing' ``input_files``
        and saves the IVIM metrics in a directory specified by ``out_dir``.

        Parameters
        ----------
        input_files : string
            Path to the input volumes. This path may contain wildcards to
            process multiple inputs at once.
        bvalues_files : string
            Path to the bvalues files. This path may contain wildcards to use
            multiple bvalues files at once.
        bvectors_files : string
            Path to the bvectors files. This path may contain wildcards to
            use multiple bvectors files at once.
        mask_files : string
            Path to the input masks. This path may contain wildcards to use
            multiple masks at once. (default: No mask used)
        split_b_D : int, optional
            Value to split the bvals to estimate D for the two-stage process
            of fitting.
        split_b_S0 : int, optional
            Value to split the bvals to estimate S0 for the two-stage process
            of fitting.
        b0_threshold : int, optional
            Threshold value for the b0 bval.
        save_metrics : variable string, optional
            List of metrics to save.
            Possible values: S0_predicted, perfusion_fraction, D_star, D
        out_dir : string, optional
            Output directory.
        out_S0_predicted : string, optional
            Name of the S0 signal estimated to be saved.
out_perfusion_fraction : string, optional Name of the estimated volume fractions to be saved. out_D_star : string, optional Name of the estimated pseudo-diffusion parameter to be saved. out_D : string, optional Name of the estimated diffusion parameter to be saved. References ---------- .. footbibliography:: """ save_metrics = save_metrics or [] io_it = self.get_io_iterator() for ( dwi, bval, bvec, mask, oS0_predicted, operfusion_fraction, oD_star, oD, ) in io_it: logging.info(f"Computing IVIM metrics for {dwi}") data, affine = load_nifti(dwi) if mask is not None: mask = load_nifti_data(mask).astype(bool) ivimfit, _ = self.get_fitted_ivim( data, mask, bval, bvec, b0_threshold=b0_threshold ) if not save_metrics: save_metrics = ["S0_predicted", "perfusion_fraction", "D_star", "D"] if "S0_predicted" in save_metrics: save_nifti( oS0_predicted, ivimfit.S0_predicted.astype(np.float32), affine ) if "perfusion_fraction" in save_metrics: save_nifti( operfusion_fraction, ivimfit.perfusion_fraction.astype(np.float32), affine, ) if "D_star" in save_metrics: save_nifti(oD_star, ivimfit.D_star.astype(np.float32), affine) if "D" in save_metrics: save_nifti(oD, ivimfit.D.astype(np.float32), affine) logging.info(f"IVIM metrics saved in {os.path.dirname(oD)}") @warning_for_keywords() def get_fitted_ivim(self, data, mask, bval, bvec, *, b0_threshold=50): logging.info("Intra-Voxel Incoherent Motion Estimation...") bvals, bvecs = read_bvals_bvecs(bval, bvec) # If all b-values are smaller or equal to the b0 threshold, it is # assumed that no thresholding is requested if any(mask_non_weighted_bvals(bvals, b0_threshold)): if b0_threshold < bvals.min(): warn( f"b0_threshold (value: {b0_threshold}) is too low, " "increase your b0_threshold. It should be higher than the " f"first b0 value ({bvals.min()}).", stacklevel=2, ) gtab = gradient_table(bvals, bvecs=bvecs, b0_threshold=b0_threshold) ivimmodel = IvimModel(gtab) ivimfit = ivimmodel.fit(data, mask=mask) return ivimfit, gtab class ReconstRUMBAFlow(Workflow): @classmethod def get_short_name(cls): return "rumba" @deprecated_params("sh_order", new_name="sh_order_max", since="1.9", until="2.0") def run( self, input_files, bvalues_files, bvectors_files, mask_files, *, b0_threshold=50.0, bvecs_tol=0.01, roi_center=None, roi_radii=10, fa_thr=0.7, extract_pam_values=False, sh_order_max=8, parallel=True, num_processes=None, gm_response=0.8e-3, csf_response=3.0e-3, n_iter=600, recon_type="smf", n_coils=1, R=1, voxelwise=True, use_tv=False, sphere_name="repulsion724", verbose=False, relative_peak_threshold=0.5, min_separation_angle=25, out_dir="", out_pam="peaks.pam5", out_shm="shm.nii.gz", out_peaks_dir="peaks_dirs.nii.gz", out_peaks_values="peaks_values.nii.gz", out_peaks_indices="peaks_indices.nii.gz", out_gfa="gfa.nii.gz", out_sphere="sphere.txt", out_b="B.nii.gz", out_qa="qa.nii.gz", ): """Reconstruct the fiber local orientations using the Robust and Unbiased Model-BAsed Spherical Deconvolution (RUMBA-SD) model. The fiber response function (FRF) is computed using the single-shell, single-tissue model, and the voxel-wise fitting procedure is used for RUMBA-SD :footcite:p:`CanalesRodriguez2015`. Parameters ---------- input_files : string Path to the input volumes. This path may contain wildcards to process multiple inputs at once. bvalues_files : string Path to the bvalues files. This path may contain wildcards to use multiple bvalues files at once. bvectors_files : string Path to the bvectors files. 
            This path may contain wildcards to use multiple bvectors files at
            once.
        mask_files : string
            Path to the input masks. This path may contain wildcards to use
            multiple masks at once.
        b0_threshold : float, optional
            Threshold used to find b0 volumes.
        bvecs_tol : float, optional
            Bvecs should be unit vectors.
        roi_center : variable int, optional
            Center of ROI in data. If center is None, it is assumed that it
            is the center of the volume with shape `data.shape[:3]`.
        roi_radii : variable int, optional
            Radii of cuboid ROI in voxels.
        fa_thr : float, optional
            FA threshold to compute the WM response function.
        extract_pam_values : bool, optional
            Whether or not to save pam volumes as single nifti files.
        sh_order_max : int, optional
            Spherical harmonics order (l) used in the RUMBA fit.
        parallel : bool, optional
            Whether to use parallelization in peak-finding during the
            calibration procedure.
        num_processes : int, optional
            If `parallel` is True, the number of subprocesses to use (default
            multiprocessing.cpu_count()). If < 0 the maximal number of cores
            minus ``num_processes + 1`` is used (enter -1 to use as many
            cores as possible). 0 raises an error.
        gm_response : float, optional
            Mean diffusivity for GM compartment. If `None`, then grey matter
            volume fraction is not computed.
        csf_response : float, optional
            Mean diffusivity for CSF compartment. If `None`, then CSF volume
            fraction is not computed.
        n_iter : int, optional
            Number of iterations for fODF estimation. Must be a positive int.
        recon_type : str, optional
            MRI reconstruction method type: spatial matched filter (`smf`) or
            sum-of-squares (`sos`). SMF reconstruction generates Rician noise
            while SoS reconstruction generates Noncentral Chi noise.
        n_coils : int, optional
            Number of coils in MRI scanner -- only relevant in SoS
            reconstruction. Must be a positive int. Default: 1
        R : int, optional
            Acceleration factor of the acquisition. For SIEMENS,
            R = iPAT factor. For GE, R = ASSET factor. For PHILIPS,
            R = SENSE factor. Typical values are 1 or 2. Must be a positive
            integer.
        voxelwise : bool, optional
            If True, performs a voxelwise fit. If False, performs a global
            fit on the entire brain at once. The global fit requires a 4D
            brain volume in `fit`.
        use_tv : bool, optional
            If True, applies total variation regularization. This only takes
            effect in a global fit (`voxelwise` is set to `False`). TV can
            only be applied to 4D brain volumes with no singleton dimensions.
        sphere_name : str, optional
            Sphere name on which to reconstruct the fODFs.
        verbose : bool, optional
            If True, logs updates on estimated signal-to-noise ratio after
            each iteration. This only takes effect in a global fit
            (`voxelwise` is set to `False`).
        relative_peak_threshold : float, optional
            Only return peaks greater than ``relative_peak_threshold * m``
            where m is the largest peak.
        min_separation_angle : float, optional
            The minimum distance between directions. If two peaks are too
            close only the larger of the two is returned.
        out_dir : string, optional
            Output directory.
        out_pam : string, optional
            Name of the peaks volume to be saved.
        out_shm : string, optional
            Name of the spherical harmonics volume to be saved.
        out_peaks_dir : string, optional
            Name of the peaks directions volume to be saved.
        out_peaks_values : string, optional
            Name of the peaks values volume to be saved.
        out_peaks_indices : string, optional
            Name of the peaks indices volume to be saved.
        out_gfa : string, optional
            Name of the generalized FA volume to be saved.
        out_sphere : string, optional
            Sphere vertices name to be saved.
        out_b : string, optional
            Name of the B Matrix to be saved.
        out_qa : string, optional
            Name of the Quantitative Anisotropy to be saved.

        References
        ----------
        .. footbibliography::

        """
        io_it = self.get_io_iterator()

        for (
            dwi,
            bval,
            bvec,
            maskfile,
            opam,
            oshm,
            opeaks_dir,
            opeaks_values,
            opeaks_indices,
            ogfa,
            osphere,
            ob,
            oqa,
        ) in io_it:
            # Read the data
            logging.info(f"Loading {dwi}")
            data, affine = load_nifti(dwi)
            bvals, bvecs = read_bvals_bvecs(bval, bvec)
            mask_vol = load_nifti_data(maskfile).astype(bool)

            # If all b-values are smaller or equal to the b0 threshold, it is
            # assumed that no thresholding is requested
            if any(mask_non_weighted_bvals(bvals, b0_threshold)):
                if b0_threshold < bvals.min():
                    warn(
                        f"b0_threshold (value: {b0_threshold}) is too low, "
                        "increase your b0_threshold. It should be higher than the "
                        f"first b0 value ({bvals.min()}).",
                        stacklevel=2,
                    )
            gtab = gradient_table(
                bvals, bvecs=bvecs, b0_threshold=b0_threshold, atol=bvecs_tol
            )

            sphere = get_sphere(name=sphere_name)

            # Compute the FRF
            wm_response, _ = auto_response_ssst(
                gtab, data, roi_center=roi_center, roi_radii=roi_radii, fa_thr=fa_thr
            )

            # Instantiate the RUMBA-SD reconstruction model
            rumba = RumbaSDModel(
                gtab,
                wm_response=wm_response[0],
                gm_response=gm_response,
                csf_response=csf_response,
                n_iter=n_iter,
                recon_type=recon_type,
                n_coils=n_coils,
                R=R,
                voxelwise=voxelwise,
                use_tv=use_tv,
                sphere=sphere,
                verbose=verbose,
            )

            rumba_peaks = peaks_from_model(
                model=rumba,
                data=data,
                sphere=sphere,
                relative_peak_threshold=relative_peak_threshold,
                min_separation_angle=min_separation_angle,
                mask=mask_vol,
                return_sh=True,
                sh_order_max=sh_order_max,
                normalize_peaks=True,
                parallel=parallel,
                num_processes=num_processes,
            )

            logging.info("Peak computation completed.")

            rumba_peaks.affine = affine

            save_pam(opam, rumba_peaks)

            if extract_pam_values:
                pam_to_niftis(
                    rumba_peaks,
                    fname_shm=oshm,
                    fname_peaks_dir=opeaks_dir,
                    fname_peaks_values=opeaks_values,
                    fname_peaks_indices=opeaks_indices,
                    fname_gfa=ogfa,
                    fname_sphere=osphere,
                    fname_b=ob,
                    fname_qa=oqa,
                    reshape_dirs=True,
                )

            dname_ = os.path.dirname(opam)
            if dname_ == "":
                logging.info("Pam5 file saved in current directory")
            else:
                logging.info(f"Pam5 file saved in {dname_}")

        return io_it


class ReconstSDTFlow(Workflow):
    @classmethod
    def get_short_name(cls):
        return "sdt"

    def run(
        self,
        input_files,
        bvalues_files,
        bvectors_files,
        mask_files,
        *,
        ratio=None,
        roi_center=None,
        roi_radii=10,
        fa_thr=0.7,
        sphere_name=None,
        sh_order_max=8,
        lambda_=1.0,
        tau=0.1,
        b0_threshold=50.0,
        bvecs_tol=0.01,
        relative_peak_threshold=0.5,
        min_separation_angle=25,
        parallel=False,
        extract_pam_values=False,
        num_processes=None,
        out_dir="",
        out_pam="peaks.pam5",
        out_shm="shm.nii.gz",
        out_peaks_dir="peaks_dirs.nii.gz",
        out_peaks_values="peaks_values.nii.gz",
        out_peaks_indices="peaks_indices.nii.gz",
        out_gfa="gfa.nii.gz",
        out_sphere="sphere.txt",
        out_b="B.nii.gz",
        out_qa="qa.nii.gz",
    ):
        """Workflow for the Spherical Deconvolution Transform (SDT).

        See :footcite:p:`Descoteaux2009` for further details about the
        method.

        Parameters
        ----------
        input_files : string
            Path to the input volumes. This path may contain wildcards to
            process multiple inputs at once.
        bvalues_files : string
            Path to the bvalues files. This path may contain wildcards to use
            multiple bvalues files at once.
        bvectors_files : string
            Path to the bvectors files. This path may contain wildcards to
            use multiple bvectors files at once.
        mask_files : string
            Path to the input masks. This path may contain wildcards to use
            multiple masks at once.
(default: No mask used) ratio : float, optional Ratio of the smallest to largest eigenvalue used in the response function estimation. If None, the response function will be estimated automatically. roi_center : variable int, optional Center of ROI in data. If center is None, it is assumed that it is the center of the volume with shape `data.shape[:3]`. roi_radii : variable int, optional radii of cuboid ROI in voxels. fa_thr : float, optional FA threshold to compute the WM response function. sphere_name : str, optional Sphere name on which to reconstruct the fODFs. sh_order_max : int, optional Maximum spherical harmonics order (l) used in the SDT fit. lambda_ : float, optional Regularization parameter. tau : float, optional Diffusion time. b0_threshold : float, optional Threshold used to find b0 volumes. bvecs_tol : float, optional Bvecs should be unit vectors. relative_peak_threshold : float, optional Only return peaks greater than ``relative_peak_threshold * m`` where m is the largest peak. min_separation_angle : float, optional The angle tolerance between directions. parallel : bool, optional Whether to use parallelization in peak-finding. extract_pam_values : bool, optional Save or not to save pam volumes as single nifti files. num_processes : int, optional If `parallel` is True, the number of subprocesses to use out_dir : string, optional Output directory. out_pam : string, optional Name of the peaks volume to be saved. out_shm : string, optional Name of the spherical harmonics volume to be saved. out_peaks_dir : string, optional Name of the peaks directions volume to be saved. out_peaks_values : string, optional Name of the peaks values volume to be saved. out_peaks_indices : string, optional Name of the peaks indices volume to be saved. out_gfa : string, optional Name of the generalized FA volume to be saved. out_sphere : string, optional Sphere vertices name to be saved. out_b : string, optional Name of the B Matrix to be saved. out_qa : string, optional Name of the Quantitative Anisotropy to be saved. References ---------- .. footbibliography:: """ io_it = self.get_io_iterator() for ( dwi, bval, bvec, maskfile, opam, oshm, opeaks_dir, opeaks_values, opeaks_indices, ogfa, osphere, ob, oqa, ) in io_it: logging.info(f"Loading {dwi}") data, affine = load_nifti(dwi) bvals, bvecs = read_bvals_bvecs(bval, bvec) # If all b-values are smaller or equal to the b0 threshold, it is # assumed that no thresholding is requested if any(mask_non_weighted_bvals(bvals, b0_threshold)): if b0_threshold < bvals.min(): warn( f"b0_threshold (value: {b0_threshold}) is too low, " "increase your b0_threshold. It should be higher than the " f"first b0 value ({bvals.min()}).", stacklevel=2, ) gtab = gradient_table( bvals, bvecs=bvecs, b0_threshold=b0_threshold, atol=bvecs_tol ) mask_vol = load_nifti_data(maskfile).astype(bool) n_params = ((sh_order_max + 1) * (sh_order_max + 2)) / 2 if data.shape[-1] < n_params: raise ValueError( f"You need at least {n_params} unique DWI volumes to " f"compute fiber odfs. You currently have: {data.shape[-1]}" " DWI volumes." 
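                    # Same minimum-volume check as in the CSD workflow above;
                    # see the worked example there (45 volumes are needed for
                    # the default sh_order_max=8).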
                )
            if ratio is None:
                logging.info("Computing response function")
                _, ratio = auto_response_ssst(
                    gtab,
                    data,
                    roi_center=roi_center,
                    roi_radii=roi_radii,
                    fa_thr=fa_thr,
                )
            logging.info(f"Ratio for smallest to largest eigenvalue is {ratio}")

            peaks_sphere = default_sphere
            if sphere_name is not None:
                peaks_sphere = get_sphere(name=sphere_name)

            logging.info("SDT computation started.")
            sdt_model = ConstrainedSDTModel(
                gtab,
                ratio,
                sh_order_max=sh_order_max,
                reg_sphere=peaks_sphere,
                lambda_=lambda_,
                tau=tau,
            )

            peaks_sdt = peaks_from_model(
                model=sdt_model,
                data=data,
                sphere=peaks_sphere,
                relative_peak_threshold=relative_peak_threshold,
                min_separation_angle=min_separation_angle,
                mask=mask_vol,
                return_sh=True,
                sh_order_max=sh_order_max,
                normalize_peaks=True,
                parallel=parallel,
                num_processes=num_processes,
            )
            peaks_sdt.affine = affine

            save_pam(opam, peaks_sdt)

            logging.info("SDT computation completed.")

            if extract_pam_values:
                pam_to_niftis(
                    peaks_sdt,
                    fname_shm=oshm,
                    fname_peaks_dir=opeaks_dir,
                    fname_peaks_values=opeaks_values,
                    fname_peaks_indices=opeaks_indices,
                    fname_gfa=ogfa,
                    fname_sphere=osphere,
                    fname_b=ob,
                    fname_qa=oqa,
                    reshape_dirs=True,
                )

            dname_ = os.path.dirname(opam)
            if dname_ == "":
                logging.info("Pam5 file saved in current directory")
            else:
                logging.info(f"Pam5 file saved in {dname_}")

        return io_it


class ReconstSFMFlow(Workflow):
    @classmethod
    def get_short_name(cls):
        return "sfm"

    def run(
        self,
        input_files,
        bvalues_files,
        bvectors_files,
        mask_files,
        *,
        sphere_name=None,
        response=None,
        solver="ElasticNet",
        l1_ratio=0.5,
        alpha=0.001,
        seed=42,
        b0_threshold=50.0,
        bvecs_tol=0.01,
        sh_order_max=8,
        relative_peak_threshold=0.5,
        min_separation_angle=25,
        parallel=False,
        extract_pam_values=False,
        num_processes=None,
        out_dir="",
        out_pam="peaks.pam5",
        out_shm="shm.nii.gz",
        out_peaks_dir="peaks_dirs.nii.gz",
        out_peaks_values="peaks_values.nii.gz",
        out_peaks_indices="peaks_indices.nii.gz",
        out_gfa="gfa.nii.gz",
        out_sphere="sphere.txt",
        out_b="B.nii.gz",
        out_qa="qa.nii.gz",
    ):
        """Workflow for the Sparse Fascicle Model (SFM).

        See :footcite:p:`Rokem2015` for further details about the method.

        Parameters
        ----------
        input_files : string
            Path to the input volumes. This path may contain wildcards to
            process multiple inputs at once.
        bvalues_files : string
            Path to the bvalues files. This path may contain wildcards to use
            multiple bvalues files at once.
        bvectors_files : string
            Path to the bvectors files. This path may contain wildcards to
            use multiple bvectors files at once.
        mask_files : string
            Path to the input masks. This path may contain wildcards to use
            multiple masks at once.
        sphere_name : string, optional
            Sphere name on which to reconstruct the fODFs.
        response : variable int, optional
            Response function to use. If None, the response function will be
            defined automatically.
        solver : str, optional
            This will determine the algorithm used to solve the set of linear
            equations underlying this model. It needs to be one of the
            following: {'ElasticNet', 'NNLS'}
        l1_ratio : float, optional
            The ElasticNet mixing parameter, with 0 <= l1_ratio <= 1. For
            l1_ratio = 0 the penalty is an L2 penalty. For l1_ratio = 1 it is
            an L1 penalty. For 0 < l1_ratio < 1, the penalty is a combination
            of L1 and L2.
        alpha : float, optional
            Sets the balance between least-squares error and L1/L2
            regularization in ElasticNet :footcite:p:`Zou2005`.
        seed : int, optional
            Seed for the random number generator.
        b0_threshold : float, optional
            Threshold used to find b0 volumes.
        bvecs_tol : float, optional
            Bvecs should be unit vectors.
        sh_order_max : int, optional
            Maximum spherical harmonics order (l) used in the SFM fit.
relative_peak_threshold : float, optional Only return peaks greater than ``relative_peak_threshold * m`` where m is the largest peak. min_separation_angle : float, optional The angle tolerance between directions. parallel : bool, optional Whether to use parallelization in peak-finding. extract_pam_values : bool, optional Save or not to save pam volumes as single nifti files. num_processes : int, optional If `parallel` is True, the number of subprocesses to use out_dir : string, optional Output directory. out_pam : string, optional Name of the peaks volume to be saved. out_shm : string, optional Name of the spherical harmonics volume to be saved. out_peaks_dir : string, optional Name of the peaks directions volume to be saved. out_peaks_values : string, optional Name of the peaks values volume to be saved. out_peaks_indices : string, optional Name of the peaks indices volume to be saved. out_gfa : string, optional Name of the generalized FA volume to be saved. out_sphere : string, optional Sphere vertices name to be saved. out_b : string, optional Name of the B Matrix to be saved. out_qa : string, optional Name of the Quantitative Anisotropy to be saved. References ---------- .. footbibliography:: """ io_it = self.get_io_iterator() response = response or (0.0015, 0.0005, 0.0005) for ( dwi, bval, bvec, maskfile, opam, oshm, opeaks_dir, opeaks_values, opeaks_indices, ogfa, osphere, ob, oqa, ) in io_it: logging.info(f"Loading {dwi}") data, affine = load_nifti(dwi) bvals, bvecs = read_bvals_bvecs(bval, bvec) # If all b-values are smaller or equal to the b0 threshold, it is # assumed that no thresholding is requested if any(mask_non_weighted_bvals(bvals, b0_threshold)): if b0_threshold < bvals.min(): warn( f"b0_threshold (value: {b0_threshold}) is too low, " "increase your b0_threshold. It should be higher than the " f"first b0 value ({bvals.min()}).", stacklevel=2, ) gtab = gradient_table( bvals, bvecs=bvecs, b0_threshold=b0_threshold, atol=bvecs_tol ) mask_vol = load_nifti_data(maskfile).astype(bool) n_params = ((sh_order_max + 1) * (sh_order_max + 2)) / 2 if data.shape[-1] < n_params: raise ValueError( f"You need at least {n_params} unique DWI volumes to " f"compute fiber odfs. You currently have: {data.shape[-1]}" " DWI volumes." 
            )

            peaks_sphere = (
                default_sphere if sphere_name is None else get_sphere(name=sphere_name)
            )

            logging.info("SFM computation started.")
            sfm_model = SparseFascicleModel(
                gtab,
                sphere=peaks_sphere,
                response=response,
                solver=solver,
                l1_ratio=l1_ratio,
                alpha=alpha,
                seed=seed,
            )

            peaks_sfm = peaks_from_model(
                model=sfm_model,
                data=data,
                sphere=peaks_sphere,
                relative_peak_threshold=relative_peak_threshold,
                min_separation_angle=min_separation_angle,
                mask=mask_vol,
                return_sh=True,
                sh_order_max=sh_order_max,
                normalize_peaks=True,
                parallel=parallel,
                num_processes=num_processes,
            )
            peaks_sfm.affine = affine

            save_pam(opam, peaks_sfm)

            logging.info("SFM computation completed.")

            if extract_pam_values:
                pam_to_niftis(
                    peaks_sfm,
                    fname_shm=oshm,
                    fname_peaks_dir=opeaks_dir,
                    fname_peaks_values=opeaks_values,
                    fname_peaks_indices=opeaks_indices,
                    fname_gfa=ogfa,
                    fname_sphere=osphere,
                    fname_b=ob,
                    fname_qa=oqa,
                    reshape_dirs=True,
                )

            dname_ = os.path.dirname(opam)
            msg = (
                "Pam5 file saved in current directory"
                if dname_ == ""
                else f"Pam5 file saved in {dname_}"
            )
            logging.info(msg)

        return io_it


class ReconstGQIFlow(Workflow):
    @classmethod
    def get_short_name(cls):
        return "gqi"

    def run(
        self,
        input_files,
        bvalues_files,
        bvectors_files,
        mask_files,
        *,
        method="gqi2",
        sampling_length=1.2,
        normalize_peaks=False,
        sphere_name=None,
        b0_threshold=50.0,
        bvecs_tol=0.01,
        sh_order_max=8,
        relative_peak_threshold=0.5,
        min_separation_angle=25,
        parallel=False,
        extract_pam_values=False,
        num_processes=None,
        out_dir="",
        out_pam="peaks.pam5",
        out_shm="shm.nii.gz",
        out_peaks_dir="peaks_dirs.nii.gz",
        out_peaks_values="peaks_values.nii.gz",
        out_peaks_indices="peaks_indices.nii.gz",
        out_gfa="gfa.nii.gz",
        out_sphere="sphere.txt",
        out_b="B.nii.gz",
        out_qa="qa.nii.gz",
    ):
        """Workflow for Generalized Q-Sampling Imaging (GQI).

        See :footcite:p:`Yeh2010` for further details about the method.

        Parameters
        ----------
        input_files : string
            Path to the input volumes. This path may contain wildcards to
            process multiple inputs at once.
        bvalues_files : string
            Path to the bvalues files. This path may contain wildcards to use
            multiple bvalues files at once.
        bvectors_files : string
            Path to the bvectors files. This path may contain wildcards to
            use multiple bvectors files at once.
        mask_files : string
            Path to the input masks. This path may contain wildcards to use
            multiple masks at once.
        method : str, optional
            Method used to compute the ODFs. It can be 'standard' or 'gqi2'.
        sampling_length : float, optional
            The maximum length of the sampling fibers.
        normalize_peaks : bool, optional
            If True, the peaks are normalized to 1.
        sphere_name : str, optional
            Sphere name on which to reconstruct the fODFs.
        b0_threshold : float, optional
            Threshold used to find b0 volumes.
        bvecs_tol : float, optional
            Bvecs should be unit vectors.
        sh_order_max : int, optional
            Maximum spherical harmonics order (l) used in the GQI fit.
        relative_peak_threshold : float, optional
            Only return peaks greater than ``relative_peak_threshold * m``
            where m is the largest peak.
        min_separation_angle : float, optional
            The angle tolerance between directions.
        parallel : bool, optional
            Whether to use parallelization in peak-finding.
        extract_pam_values : bool, optional
            Whether or not to save pam volumes as single nifti files.
        num_processes : int, optional
            If `parallel` is True, the number of subprocesses to use.
        out_dir : string, optional
            Output directory.
        out_pam : string, optional
            Name of the peaks volume to be saved.
        out_shm : string, optional
            Name of the spherical harmonics volume to be saved.
out_peaks_dir : string, optional Name of the peaks directions volume to be saved. out_peaks_values : string, optional Name of the peaks values volume to be saved. out_peaks_indices : string, optional Name of the peaks indices volume to be saved. out_gfa : string, optional Name of the generalized FA volume to be saved. out_sphere : string, optional Sphere vertices name to be saved. out_b : string, optional Name of the B Matrix to be saved. out_qa : string, optional Name of the Quantitative Anisotropy to be saved. References ---------- .. footbibliography:: """ io_it = self.get_io_iterator() for ( dwi, bval, bvec, maskfile, opam, oshm, opeaks_dir, opeaks_values, opeaks_indices, ogfa, osphere, ob, oqa, ) in io_it: logging.info(f"Loading {dwi}") data, affine = load_nifti(dwi) bvals, bvecs = read_bvals_bvecs(bval, bvec) # If all b-values are smaller or equal to the b0 threshold, it is # assumed that no thresholding is requested if any(mask_non_weighted_bvals(bvals, b0_threshold)): if b0_threshold < bvals.min(): warn( f"b0_threshold (value: {b0_threshold}) is too low, " "increase your b0_threshold. It should be higher than the " f"first b0 value ({bvals.min()}).", stacklevel=2, ) gtab = gradient_table( bvals, bvecs=bvecs, b0_threshold=b0_threshold, atol=bvecs_tol ) mask_vol = load_nifti_data(maskfile).astype(bool) n_params = ((sh_order_max + 1) * (sh_order_max + 2)) / 2 if data.shape[-1] < n_params: raise ValueError( f"You need at least {n_params} unique DWI volumes to " f"compute fiber odfs. You currently have: {data.shape[-1]}" " DWI volumes." ) peaks_sphere = ( default_sphere if sphere_name is None else get_sphere(name=sphere_name) ) logging.info("GQI computation started.") gqi_model = GeneralizedQSamplingModel( gtab, method=method, sampling_length=sampling_length, normalize_peaks=normalize_peaks, ) peaks_gqi = peaks_from_model( model=gqi_model, data=data, sphere=peaks_sphere, relative_peak_threshold=relative_peak_threshold, min_separation_angle=min_separation_angle, mask=mask_vol, return_sh=True, sh_order_max=sh_order_max, normalize_peaks=normalize_peaks, parallel=parallel, num_processes=num_processes, ) peaks_gqi.affine = affine save_pam(opam, peaks_gqi) logging.info("GQI computation completed.") if extract_pam_values: pam_to_niftis( peaks_gqi, fname_shm=oshm, fname_peaks_dir=opeaks_dir, fname_peaks_values=opeaks_values, fname_peaks_indices=opeaks_indices, fname_gfa=ogfa, fname_sphere=osphere, fname_b=ob, fname_qa=oqa, reshape_dirs=True, ) dname_ = os.path.dirname(opam) msg = ( "Pam5 file saved in current directory" if dname_ == "" else f"Pam5 file saved in {dname_}" ) logging.info(msg) return io_it class ReconstForecastFlow(Workflow): @classmethod def get_short_name(cls): return "forecast" def run( self, input_files, bvalues_files, bvectors_files, mask_files, *, lambda_lb=1e-3, dec_alg="CSD", lambda_csd=1.0, sphere_name=None, b0_threshold=50.0, bvecs_tol=0.01, sh_order_max=8, relative_peak_threshold=0.5, min_separation_angle=25, parallel=False, extract_pam_values=False, num_processes=None, out_dir="", out_pam="peaks.pam5", out_shm="shm.nii.gz", out_peaks_dir="peaks_dirs.nii.gz", out_peaks_values="peaks_values.nii.gz", out_peaks_indices="peaks_indices.nii.gz", out_gfa="gfa.nii.gz", out_sphere="sphere.txt", out_b="B.nii.gz", out_qa="qa.nii.gz", ): """Workflow for Fiber ORientation Estimated using Continuous Axially Symmetric Tensors (FORECAST). 
FORECAST :footcite:p:`Anderson2005`, :footcite:p:`Kaden2016a`, :footcite:p:`Zucchelli2017` is a Spherical Deconvolution reconstruction model for multi-shell diffusion data which enables the calculation of a voxel adaptive response function using the Spherical Mean Technique (SMT) :footcite:p:`Kaden2016a`, :footcite:p:`Zucchelli2017`. Parameters ---------- input_files : string Path to the input volumes. This path may contain wildcards to process multiple inputs at once. bvalues_files : string Path to the bvalues files. This path may contain wildcards to use multiple bvalues files at once. bvectors_files : string Path to the bvectors files. This path may contain wildcards to use multiple bvectors files at once. mask_files : string Path to the input masks. This path may contain wildcards to use multiple masks at once. (default: No mask used) lambda_lb : float, optional Regularization parameter for the Laplacian-Beltrami operator. dec_alg : str, optional Spherical deconvolution algorithm. The possible values are Weighted Least Squares ('WLS'), Positivity Constraints using CVXPY ('POS') and the Constrained Spherical Deconvolution algorithm ('CSD'). lambda_csd : float, optional Regularization parameter for the CSD algorithm. sphere_name : str, optional Sphere name on which to reconstruct the fODFs. b0_threshold : float, optional Threshold used to find b0 volumes. bvecs_tol : float, optional Tolerance used to check that the b-vectors are unit vectors. sh_order_max : int, optional Maximum spherical harmonics order (l) used in the FORECAST fit. relative_peak_threshold : float, optional Only return peaks greater than ``relative_peak_threshold * m`` where m is the largest peak. min_separation_angle : float, optional The angle tolerance between directions. parallel : bool, optional Whether to use parallelization in peak-finding. extract_pam_values : bool, optional Whether to save pam volumes as single nifti files. num_processes : int, optional If `parallel` is True, the number of subprocesses to use. out_dir : string, optional Output directory. out_pam : string, optional Name of the peaks volume to be saved. out_shm : string, optional Name of the spherical harmonics volume to be saved. out_peaks_dir : string, optional Name of the peaks directions volume to be saved. out_peaks_values : string, optional Name of the peaks values volume to be saved. out_peaks_indices : string, optional Name of the peaks indices volume to be saved. out_gfa : string, optional Name of the generalized FA volume to be saved. out_sphere : string, optional Sphere vertices name to be saved. out_b : string, optional Name of the B Matrix to be saved. out_qa : string, optional Name of the Quantitative Anisotropy to be saved. References ---------- .. footbibliography:: """ io_it = self.get_io_iterator() for ( dwi, bval, bvec, maskfile, opam, oshm, opeaks_dir, opeaks_values, opeaks_indices, ogfa, osphere, ob, oqa, ) in io_it: logging.info(f"Loading {dwi}") data, affine = load_nifti(dwi) bvals, bvecs = read_bvals_bvecs(bval, bvec) # If all b-values are smaller or equal to the b0 threshold, it is # assumed that no thresholding is requested if any(mask_non_weighted_bvals(bvals, b0_threshold)): if b0_threshold < bvals.min(): warn( f"b0_threshold (value: {b0_threshold}) is too low, " "increase your b0_threshold.
It should be higher than the " f"first b0 value ({bvals.min()}).", stacklevel=2, ) gtab = gradient_table( bvals, bvecs=bvecs, b0_threshold=b0_threshold, atol=bvecs_tol ) mask_vol = load_nifti_data(maskfile).astype(bool) n_params = ((sh_order_max + 1) * (sh_order_max + 2)) / 2 if data.shape[-1] < n_params: raise ValueError( f"You need at least {n_params} unique DWI volumes to " f"compute fiber odfs. You currently have: {data.shape[-1]}" " DWI volumes." ) peaks_sphere = ( default_sphere if sphere_name is None else get_sphere(name=sphere_name) ) logging.info("FORECAST computation started.") forecast_model = ForecastModel( gtab, sh_order_max=sh_order_max, lambda_lb=lambda_lb, dec_alg=dec_alg, sphere=peaks_sphere.vertices, lambda_csd=lambda_csd, ) peaks_forecast = peaks_from_model( model=forecast_model, data=data, sphere=peaks_sphere, relative_peak_threshold=relative_peak_threshold, min_separation_angle=min_separation_angle, mask=mask_vol, return_sh=True, sh_order_max=sh_order_max, normalize_peaks=True, parallel=parallel, num_processes=num_processes, ) peaks_forecast.affine = affine save_pam(opam, peaks_forecast) logging.info("FORECAST computation completed.") if extract_pam_values: pam_to_niftis( peaks_forecast, fname_shm=oshm, fname_peaks_dir=opeaks_dir, fname_peaks_values=opeaks_values, fname_peaks_indices=opeaks_indices, fname_gfa=ogfa, fname_sphere=osphere, fname_b=ob, fname_qa=oqa, reshape_dirs=True, ) dname_ = os.path.dirname(opam) msg = ( "Pam5 file saved in current directory" if dname_ == "" else f"Pam5 file saved in {dname_}" ) logging.info(msg) return io_it dipy-1.11.0/dipy/workflows/segment.py000066400000000000000000000451221476546756600176130ustar00rootroot00000000000000import logging import os import sys from time import time import numpy as np from dipy.io.gradients import read_bvals_bvecs from dipy.io.image import load_nifti, save_nifti from dipy.io.stateful_tractogram import Space, StatefulTractogram from dipy.io.streamline import load_tractogram, save_tractogram from dipy.segment.bundles import RecoBundles from dipy.segment.mask import median_otsu from dipy.segment.tissue import TissueClassifierHMRF, dam_classifier from dipy.tracking import Streamlines from dipy.workflows.utils import handle_vol_idx from dipy.workflows.workflow import Workflow class MedianOtsuFlow(Workflow): @classmethod def get_short_name(cls): return "medotsu" def run( self, input_files, save_masked=False, median_radius=2, numpass=5, autocrop=False, vol_idx=None, dilate=None, finalize_mask=False, out_dir="", out_mask="brain_mask.nii.gz", out_masked="dwi_masked.nii.gz", ): """Workflow wrapping the median_otsu segmentation method. Applies median_otsu segmentation on each file found by 'globing' ``input_files`` and saves the results in a directory specified by ``out_dir``. Parameters ---------- input_files : string Path to the input volumes. This path may contain wildcards to process multiple inputs at once. save_masked : bool, optional Save mask. median_radius : int, optional Radius (in voxels) of the applied median filter. numpass : int, optional Number of pass of the median filter. autocrop : bool, optional If True, the masked input_volumes will also be cropped using the bounding box defined by the masked data. For example, if diffusion images are of 1x1x1 (mm^3) or higher resolution auto-cropping could reduce their size in memory and speed up some of the analysis. vol_idx : str, optional 1D array representing indices of ``axis=-1`` of a 4D `input_volume`. 
From the command line use something like '1,2,3-5,7'. This input is required for 4D volumes. dilate : int, optional Number of iterations for binary dilation. finalize_mask : bool, optional Whether to remove potential holes or islands. Useful for solving minor errors. out_dir : string, optional Output directory. out_mask : string, optional Name of the mask volume to be saved. out_masked : string, optional Name of the masked volume to be saved. """ io_it = self.get_io_iterator() vol_idx = handle_vol_idx(vol_idx) for fpath, mask_out_path, masked_out_path in io_it: logging.info(f"Applying median_otsu segmentation on {fpath}") data, affine, img = load_nifti(fpath, return_img=True) masked_volume, mask_volume = median_otsu( data, vol_idx=vol_idx, median_radius=median_radius, numpass=numpass, autocrop=autocrop, dilate=dilate, finalize_mask=finalize_mask, ) save_nifti(mask_out_path, mask_volume.astype(np.float64), affine) logging.info(f"Mask saved as {mask_out_path}") if save_masked: save_nifti(masked_out_path, masked_volume, affine, hdr=img.header) logging.info(f"Masked volume saved as {masked_out_path}") return io_it class RecoBundlesFlow(Workflow): @classmethod def get_short_name(cls): return "recobundles" def run( self, streamline_files, model_bundle_files, greater_than=50, less_than=1000000, no_slr=False, clust_thr=15.0, reduction_thr=15.0, reduction_distance="mdf", model_clust_thr=2.5, pruning_thr=8.0, pruning_distance="mdf", slr_metric="symmetric", slr_transform="similarity", slr_matrix="small", refine=False, r_reduction_thr=12.0, r_pruning_thr=6.0, no_r_slr=False, out_dir="", out_recognized_transf="recognized.trk", out_recognized_labels="labels.npy", ): """Recognize bundles. See :footcite:p:`Garyfallidis2018` and :footcite:p:`Chandio2020a` for further details about the method. Parameters ---------- streamline_files : string The path of streamline files where you want to recognize bundles. model_bundle_files : string The path of model bundle files. greater_than : int, optional Keep streamlines that have length greater than this value in mm. less_than : int, optional Keep streamlines that have length less than this value in mm. no_slr : bool, optional Don't enable local Streamline-based Linear Registration. clust_thr : float, optional MDF distance threshold for all streamlines. reduction_thr : float, optional Reduce search space by (mm). reduction_distance : string, optional Reduction distance type can be mdf or mam. model_clust_thr : float, optional MDF distance threshold for the model bundles. pruning_thr : float, optional Pruning after matching. pruning_distance : string, optional Pruning distance type can be mdf or mam. slr_metric : string, optional Options are None, symmetric, asymmetric or diagonal. slr_transform : string, optional Transformation allowed: translation, rigid, similarity or scaling. slr_matrix : string, optional Options are 'nano', 'tiny', 'small', 'medium', 'large', 'huge'. refine : bool, optional Enable refinement of the recognized bundle. r_reduction_thr : float, optional Reduce search space by (mm) during refinement. r_pruning_thr : float, optional Pruning after matching during refinement. no_r_slr : bool, optional Don't enable local Streamline-based Linear Registration during refinement. out_dir : string, optional Output directory. out_recognized_transf : string, optional Recognized bundle in the space of the model bundle. out_recognized_labels : string, optional Indices of recognized bundle in the original tractogram. References ---------- ..
footbibliography:: """ slr = not no_slr r_slr = not no_r_slr bounds = [ (-30, 30), (-30, 30), (-30, 30), (-45, 45), (-45, 45), (-45, 45), (0.8, 1.2), (0.8, 1.2), (0.8, 1.2), ] slr_matrix = slr_matrix.lower() if slr_matrix == "nano": slr_select = (100, 100) if slr_matrix == "tiny": slr_select = (250, 250) if slr_matrix == "small": slr_select = (400, 400) if slr_matrix == "medium": slr_select = (600, 600) if slr_matrix == "large": slr_select = (800, 800) if slr_matrix == "huge": slr_select = (1200, 1200) slr_transform = slr_transform.lower() if slr_transform == "translation": bounds = bounds[:3] if slr_transform == "rigid": bounds = bounds[:6] if slr_transform == "similarity": bounds = bounds[:7] if slr_transform == "scaling": bounds = bounds[:9] logging.info("### RecoBundles ###") io_it = self.get_io_iterator() t = time() logging.info(streamline_files) input_obj = load_tractogram(streamline_files, "same", bbox_valid_check=False) streamlines = input_obj.streamlines logging.info(f" Loading time {time() - t:0.3f} sec") rb = RecoBundles(streamlines, greater_than=greater_than, less_than=less_than) for _, mb, out_rec, out_labels in io_it: t = time() logging.info(mb) model_bundle = load_tractogram( mb, "same", bbox_valid_check=False ).streamlines logging.info(f" Loading time {time() - t:0.3f} sec") logging.info("model file = ") logging.info(mb) recognized_bundle, labels = rb.recognize( model_bundle, model_clust_thr=model_clust_thr, reduction_thr=reduction_thr, reduction_distance=reduction_distance, pruning_thr=pruning_thr, pruning_distance=pruning_distance, slr=slr, slr_metric=slr_metric, slr_x0=slr_transform, slr_bounds=bounds, slr_select=slr_select, slr_method="L-BFGS-B", ) if refine: if len(recognized_bundle) > 1: # affine x0 = np.array([0, 0, 0, 0, 0, 0, 1.0, 1.0, 1, 0, 0, 0]) affine_bounds = [ (-30, 30), (-30, 30), (-30, 30), (-45, 45), (-45, 45), (-45, 45), (0.8, 1.2), (0.8, 1.2), (0.8, 1.2), (-10, 10), (-10, 10), (-10, 10), ] recognized_bundle, labels = rb.refine( model_bundle, recognized_bundle, model_clust_thr=model_clust_thr, reduction_thr=r_reduction_thr, reduction_distance=reduction_distance, pruning_thr=r_pruning_thr, pruning_distance=pruning_distance, slr=r_slr, slr_metric=slr_metric, slr_x0=x0, slr_bounds=affine_bounds, slr_select=slr_select, slr_method="L-BFGS-B", ) if len(labels) > 0: ba, bmd = rb.evaluate_results( model_bundle, recognized_bundle, slr_select ) logging.info(f"Bundle adjacency Metric {ba}") logging.info(f"Bundle Min Distance Metric {bmd}") new_tractogram = StatefulTractogram( recognized_bundle, streamline_files, Space.RASMM ) save_tractogram(new_tractogram, out_rec, bbox_valid_check=False) logging.info("Saving output files ...") np.save(out_labels, np.array(labels)) logging.info(out_rec) logging.info(out_labels) class LabelsBundlesFlow(Workflow): @classmethod def get_short_name(cls): return "labelsbundles" def run( self, streamline_files, labels_files, out_dir="", out_bundle="recognized_orig.trk", ): """Extract bundles using existing indices (labels) See :footcite:p:`Garyfallidis2018` for further details about the method. Parameters ---------- streamline_files : string The path of streamline files where you want to recognize bundles. labels_files : string The path of model bundle files. out_dir : string, optional Output directory. out_bundle : string, optional Recognized bundle in the space of the model bundle. References ---------- .. 
footbibliography:: """ logging.info("### Labels to Bundles ###") io_it = self.get_io_iterator() for f_streamlines, f_labels, out_bundle in io_it: logging.info(f_streamlines) sft = load_tractogram(f_streamlines, "same", bbox_valid_check=False) streamlines = sft.streamlines logging.info(f_labels) location = np.load(f_labels) if len(location) < 1: bundle = Streamlines([]) else: bundle = streamlines[location] logging.info("Saving output files ...") new_sft = StatefulTractogram(bundle, sft, Space.RASMM) save_tractogram(new_sft, out_bundle, bbox_valid_check=False) logging.info(out_bundle) class ClassifyTissueFlow(Workflow): @classmethod def get_short_name(cls): return "extracttissue" def run( self, input_files, bvals_file=None, method=None, wm_threshold=0.5, b0_threshold=50, low_signal_threshold=50, nclass=None, beta=0.1, tolerance=1e-05, max_iter=100, out_dir="", out_tissue="tissue_classified.nii.gz", out_pve="tissue_classified_pve.nii.gz", ): """Extract tissue from a volume. Parameters ---------- input_files : string Path to the input volumes. This path may contain wildcards to process multiple inputs at once. bvals_file : string, optional Path to the b-values file. Required for 'dam' method. method : string, optional Method to use for tissue extraction. Options are: - 'hmrf': Markov Random Fields modeling approach. - 'dam': Directional Average Maps, proposed by :footcite:p:`Cheng2020`. 'hmrf' method is recommended for T1w images, while 'dam' method is recommended for multi-shell DWI images (single-shell data are not recommended). wm_threshold : float, optional The threshold below which a voxel is considered white matter. For data like HCP, threshold of 0.5 proves to be a good choice. For data like cfin, higher threshold values like 0.7 or 0.8 are more suitable. Used for 'dam' method. b0_threshold : float, optional The intensity threshold for a b=0 image. Used only for 'dam' method. low_signal_threshold : float, optional The threshold below which a voxel is considered to have low signal. Used only for 'dam' method. nclass : int, optional Number of desired classes. Used only for 'hmrf' method. beta : float, optional Smoothing parameter, the higher this number the smoother the output will be. Used only for 'hmrf' method. tolerance : float, optional Value that defines the percentage of change tolerated before the ICM loop stops. Default is 1e-05. If you want tolerance check to be disabled put 'tolerance = 0'. Used only for 'hmrf' method. max_iter : int, optional Fixed number of desired iterations. Default is 100. This parameter defines the maximum number of iterations the algorithm will perform. The loop may terminate early if the change in energy sum between iterations falls below the threshold defined by `tolerance`. However, if `tolerance` is explicitly set to 0, this early stopping mechanism is disabled, and the algorithm will run for the specified number of iterations unless another stopping criterion is met. Used only for 'hmrf' method. out_dir : string, optional Output directory. out_tissue : string, optional Name of the tissue volume to be saved. out_pve : string, optional Name of the pve volume to be saved. References ---------- .. footbibliography:: """ io_it = self.get_io_iterator() if not method or method.lower() not in ["hmrf", "dam"]: logging.error( f"Unknown method '{method}' for tissue extraction.
" "Choose '--method hmrf' (for T1w) or '--method dam' (for DWI)" ) sys.exit(1) prefix = "t1" if method.lower() == "hmrf" else "dwi" for i, name in enumerate(self.flat_outputs): if name.endswith("tissue_classified.nii.gz"): self.flat_outputs[i] = name.replace( "tissue_classified.nii.gz", f"{prefix}_tissue_classified.nii.gz" ) if name.endswith("tissue_classified_pve.nii.gz"): self.flat_outputs[i] = name.replace( "tissue_classified_pve.nii.gz", f"{prefix}_tissue_classified_pve.nii.gz", ) self.update_flat_outputs(self.flat_outputs, io_it) for fpath, tissue_out_path, opve in io_it: logging.info(f"Extracting tissue from {fpath}") data, affine = load_nifti(fpath) if method.lower() == "hmrf": if nclass is None: logging.error( "Number of classes is required for 'hmrf' method. " "For example, Use '--nclass 4' to specify the number of " "classes." ) sys.exit(1) classifier = TissueClassifierHMRF() _, segmentation_final, PVE = classifier.classify( data, nclass, beta, tolerance=tolerance, max_iter=max_iter ) save_nifti(tissue_out_path, segmentation_final, affine) save_nifti(opve, PVE, affine) elif method.lower() == "dam": if bvals_file is None or not os.path.isfile(bvals_file): logging.error("'--bvals filename' is required for 'dam' method") sys.exit(1) bvals, _ = read_bvals_bvecs(bvals_file, None) wm_mask, gm_mask = dam_classifier( data, bvals, wm_threshold=wm_threshold, b0_threshold=b0_threshold, low_signal_threshold=low_signal_threshold, ) result = np.zeros(wm_mask.shape) result[wm_mask] = 1 result[gm_mask] = 2 save_nifti(tissue_out_path, result, affine) save_nifti( opve, np.stack([wm_mask, gm_mask], axis=-1).astype(np.int32), affine ) logging.info(f"Tissue saved as {tissue_out_path} and PVE as {opve}") return io_it dipy-1.11.0/dipy/workflows/stats.py000077500000000000000000000520011476546756600173040ustar00rootroot00000000000000from glob import glob import json import logging import os from time import time import warnings import numpy as np from scipy.ndimage import binary_dilation from dipy.core.gradients import gradient_table from dipy.io import read_bvals_bvecs from dipy.io.image import load_nifti, save_nifti from dipy.io.peaks import load_peaks from dipy.io.streamline import load_tractogram from dipy.reconst.dti import TensorModel from dipy.segment.bundles import bundle_shape_similarity from dipy.segment.mask import bounding_box, segment_from_cfa from dipy.stats.analysis import anatomical_measures, assignment_map, peak_values from dipy.testing.decorators import warning_for_keywords from dipy.tracking.streamline import transform_streamlines from dipy.utils.optpkg import optional_package from dipy.workflows.workflow import Workflow pd, have_pd, _ = optional_package("pandas") smf, have_smf, _ = optional_package("statsmodels.formula.api") matplt, have_matplotlib, _ = optional_package("matplotlib") plt, have_plt, _ = optional_package("matplotlib.pyplot") class SNRinCCFlow(Workflow): @classmethod def get_short_name(cls): return "snrincc" def run( self, data_files, bvals_files, bvecs_files, mask_file, bbox_threshold=(0.6, 1, 0, 0.1, 0, 0.1), out_dir="", out_file="product.json", out_mask_cc="cc.nii.gz", out_mask_noise="mask_noise.nii.gz", ): """Compute the signal-to-noise ratio in the corpus callosum. Parameters ---------- data_files : string Path to the dwi.nii.gz file. This path may contain wildcards to process multiple inputs at once. bvals_files : string Path of bvals. bvecs_files : string Path of bvecs. mask_file : string Path of a brain mask file. 
bbox_threshold : variable float, optional Threshold for bounding box, values separated with commas for ex. [0.6,1,0,0.1,0,0.1]. out_dir : string, optional Where the resulting file will be saved. out_file : string, optional Name of the result file to be saved. out_mask_cc : string, optional Name of the CC mask volume to be saved. out_mask_noise : string, optional Name of the mask noise volume to be saved. """ io_it = self.get_io_iterator() for ( dwi_path, bvals_path, bvecs_path, mask_path, out_path, cc_mask_path, mask_noise_path, ) in io_it: data, affine = load_nifti(dwi_path) bvals, bvecs = read_bvals_bvecs(bvals_path, bvecs_path) gtab = gradient_table(bvals=bvals, bvecs=bvecs) mask, affine = load_nifti(mask_path) logging.info("Computing tensors...") tenmodel = TensorModel(gtab) tensorfit = tenmodel.fit(data, mask=mask) logging.info("Computing worst-case/best-case SNR using the CC...") if np.ndim(data) == 4: CC_box = np.zeros_like(data[..., 0]) elif np.ndim(data) == 3: CC_box = np.zeros_like(data) else: raise OSError("DWI data has invalid dimensions") mins, maxs = bounding_box(mask) mins = np.array(mins) maxs = np.array(maxs) diff = (maxs - mins) // 4 bounds_min = mins + diff bounds_max = maxs - diff CC_box[ bounds_min[0] : bounds_max[0], bounds_min[1] : bounds_max[1], bounds_min[2] : bounds_max[2], ] = 1 if len(bbox_threshold) != 6: raise OSError("bbox_threshold should have 6 float values") mask_cc_part, cfa = segment_from_cfa( tensorfit, CC_box, bbox_threshold, return_cfa=True ) if not np.count_nonzero(mask_cc_part.astype(np.uint8)): logging.warning( "Empty mask: corpus callosum not found." " Update your data or your threshold" ) save_nifti(cc_mask_path, mask_cc_part.astype(np.uint8), affine) logging.info(f"CC mask saved as {cc_mask_path}") masked_data = data[mask_cc_part] mean_signal = 0 if masked_data.size: mean_signal = np.mean(masked_data, axis=0) mask_noise = binary_dilation(mask, iterations=10) mask_noise[..., : mask_noise.shape[-1] // 2] = 1 mask_noise = ~mask_noise save_nifti(mask_noise_path, mask_noise.astype(np.uint8), affine) logging.info(f"Mask noise saved as {mask_noise_path}") noise_std = 0 if np.count_nonzero(mask_noise.astype(np.uint8)): noise_std = np.std(data[mask_noise, :]) logging.info(f"Noise standard deviation sigma= {noise_std}") idx = np.sum(gtab.bvecs, axis=-1) == 0 gtab.bvecs[idx] = np.inf axis_X = np.argmin(np.sum((gtab.bvecs - np.array([1, 0, 0])) ** 2, axis=-1)) axis_Y = np.argmin(np.sum((gtab.bvecs - np.array([0, 1, 0])) ** 2, axis=-1)) axis_Z = np.argmin(np.sum((gtab.bvecs - np.array([0, 0, 1])) ** 2, axis=-1)) SNR_output = [] SNR_directions = [] for direction in ["b0", axis_X, axis_Y, axis_Z]: if direction == "b0": SNR = mean_signal[0] / noise_std if noise_std else 0 logging.info(f"SNR for the b=0 image is : {SNR}") else: # compute the SNR for this direction before logging it, so the # reported value is not the stale SNR of the previous direction SNR = mean_signal[direction] / noise_std if noise_std else 0 SNR_directions.append(direction) logging.info( f"SNR for direction {direction} {gtab.bvecs[direction]} is: " f"{SNR}" ) SNR_output.append(SNR) snr_str = f"{SNR_output[0]} {SNR_output[1]} {SNR_output[2]} {SNR_output[3]}" dir_str = f"b0 {SNR_directions[0]} {SNR_directions[1]} {SNR_directions[2]}" data = [{"data": snr_str, "directions": dir_str}] with open(os.path.join(out_dir, out_path), "w") as myfile: json.dump(data, myfile) @warning_for_keywords() def buan_bundle_profiles( model_bundle_folder, bundle_folder, orig_bundle_folder, metric_folder, group_id, subject, *, no_disks=100, out_dir="", ): """ Applies statistical analysis on bundles and saves the results in a directory
specified by ``out_dir``. See :footcite:p:`Chandio2020a` for further details about the method. Parameters ---------- model_bundle_folder : string Path to the input model bundle files. This path may contain wildcards to process multiple inputs at once. bundle_folder : string Path to the input bundle files in common space. This path may contain wildcards to process multiple inputs at once. orig_bundle_folder : string Path to the input bundle files in native space. This path may contain wildcards to process multiple inputs at once. metric_folder : string Path to the input dti metric or/and peak files. It will be used as metric for statistical analysis of bundles. group_id : integer what group subject belongs to either 0 for control or 1 for patient. subject : string subject id e.g. 10001. no_disks : integer, optional Number of disks used for dividing bundle into disks. out_dir : string, optional Output directory. References ---------- .. footbibliography:: """ t = time() mb = glob(os.path.join(model_bundle_folder, "*.trk")) print(mb) mb.sort() bd = glob(os.path.join(bundle_folder, "*.trk")) bd.sort() print(bd) org_bd = glob(os.path.join(orig_bundle_folder, "*.trk")) org_bd.sort() print(org_bd) n = len(mb) for io in range(n): mbundles = load_tractogram( mb[io], reference="same", bbox_valid_check=False ).streamlines bundles = load_tractogram( bd[io], reference="same", bbox_valid_check=False ).streamlines orig_bundles = load_tractogram( org_bd[io], reference="same", bbox_valid_check=False ).streamlines if len(orig_bundles) > 5: indx = assignment_map(bundles, mbundles, no_disks) ind = np.array(indx) metric_files_names_dti = glob(os.path.join(metric_folder, "*.nii.gz")) metric_files_names_csa = glob(os.path.join(metric_folder, "*.pam5")) _, affine = load_nifti(metric_files_names_dti[0]) affine_r = np.linalg.inv(affine) transformed_orig_bundles = transform_streamlines(orig_bundles, affine_r) for mn in range(len(metric_files_names_dti)): ab = os.path.split(metric_files_names_dti[mn]) metric_name = ab[1] fm = metric_name[:-7] bm = os.path.split(mb[io])[1][:-4] logging.info(f"bm = {bm}") dt = {} logging.info(f"metric = {metric_files_names_dti[mn]}") metric, _ = load_nifti(metric_files_names_dti[mn]) anatomical_measures( transformed_orig_bundles, metric, dt, fm, bm, subject, group_id, ind, out_dir, ) for mn in range(len(metric_files_names_csa)): ab = os.path.split(metric_files_names_csa[mn]) metric_name = ab[1] fm = metric_name[:-5] bm = os.path.split(mb[io])[1][:-4] logging.info(f"bm = {bm}") logging.info(f"metric = {metric_files_names_csa[mn]}") dt = {} metric = load_peaks(metric_files_names_csa[mn]) peak_values( transformed_orig_bundles, metric, dt, fm, bm, subject, group_id, ind, out_dir, ) print("total time taken in minutes = ", (-t + time()) / 60) class BundleAnalysisTractometryFlow(Workflow): @classmethod def get_short_name(cls): return "ba" @warning_for_keywords() def run(self, model_bundle_folder, subject_folder, *, no_disks=100, out_dir=""): """Workflow of bundle analytics. Applies statistical analysis on bundles of subjects and saves the results in a directory specified by ``out_dir``. See :footcite:p:`Chandio2020a` for further details about the method. Parameters ---------- model_bundle_folder : string Path to the input model bundle files. This path may contain wildcards to process multiple inputs at once. subject_folder : string Path to the input subject folder. This path may contain wildcards to process multiple inputs at once. 
no_disks : integer, optional Number of disks used for dividing bundle into disks. out_dir : string, optional Output directory. References ---------- .. footbibliography:: """ if os.path.isdir(subject_folder) is False: raise ValueError("Invalid path to subjects") groups = os.listdir(subject_folder) groups.sort() for group in groups: if os.path.isdir(os.path.join(subject_folder, group)): logging.info(f"group = {group}") all_subjects = os.listdir(os.path.join(subject_folder, group)) all_subjects.sort() logging.info(all_subjects) if group.lower() == "patient": group_id = 1 # 1 means patient elif group.lower() == "control": group_id = 0 # 0 means control else: print(group) raise ValueError("Invalid group. Neither patient nor control") for sub in all_subjects: logging.info(sub) pre = os.path.join(subject_folder, group, sub) logging.info(pre) b = os.path.join(pre, "rec_bundles") c = os.path.join(pre, "org_bundles") d = os.path.join(pre, "anatomical_measures") buan_bundle_profiles( model_bundle_folder, b, c, d, group_id, sub, no_disks=no_disks, out_dir=out_dir, ) class LinearMixedModelsFlow(Workflow): @classmethod def get_short_name(cls): return "lmm" def get_metric_name(self, path): """Splits the path string and returns name of anatomical measure (eg: fa), bundle name eg(AF_L) and bundle name with metric name (eg: AF_L_fa) Parameters ---------- path : string Path to the input metric files. This path may contain wildcards to process multiple inputs at once. """ head_tail = os.path.split(path) name = head_tail[1] count = 0 i = len(name) - 1 while i > 0: if name[i] == ".": count = i break i = i - 1 for j in range(len(name)): if name[j] == "_": if name[j + 1] != "L" and name[j + 1] != "R" and name[j + 1] != "F": return name[j + 1 : count], name[:j], name[:count] return " ", " ", " " def save_lmm_plot(self, plot_file, title, bundle_name, x, y): """Saves LMM plot with segment/disk number on x-axis and -log10(pvalues) on y-axis in out_dir folder. Parameters ---------- plot_file : string Path to the plot file. This path may contain wildcards to process multiple inputs at once. title : string Title for the plot. bundle_name : string Bundle name. x : list list containing segment/disk number for x-axis. y : list list containing -log10(pvalues) per segment/disk number for y-axis. """ n = len(x) dotted = np.ones(n) dotted[:] = 2 c1 = np.random.rand(1, 3) y_pos = np.arange(n) (l1,) = plt.plot( y_pos, dotted, color="red", marker=".", linestyle="solid", linewidth=0.6, markersize=0.7, label="p-value < 0.01", ) (l2,) = plt.plot( y_pos, dotted + 1, color="black", marker=".", linestyle="solid", linewidth=0.4, markersize=0.4, label="p-value < 0.001", ) first_legend = plt.legend(handles=[l1, l2], loc="upper right") axes = plt.gca() axes.add_artist(first_legend) axes.set_ylim([0, 6]) l3 = plt.bar(y_pos, y, color=c1, alpha=0.5, label=bundle_name) plt.legend(handles=[l3], loc="upper left") plt.title(title.upper()) plt.xlabel("Segment Number") plt.ylabel("-log10(Pvalues)") plt.savefig(plot_file) plt.clf() @warning_for_keywords() def run(self, h5_files, *, no_disks=100, out_dir=""): """Workflow of linear Mixed Models. Applies linear Mixed Models on bundles of subjects and saves the results in a directory specified by ``out_dir``. Parameters ---------- h5_files : string Path to the input metric files. This path may contain wildcards to process multiple inputs at once. no_disks : integer, optional Number of disks used for dividing bundle into disks. out_dir : string, optional Output directory. 
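Notes
-----
Each disk is tested independently: the fit performed below is conceptually
equivalent to the following sketch, where ``df`` is the per-disk dataframe
and the metric column name (``fa`` here) is a placeholder for whatever the
input HDF5 file actually contains::

    import statsmodels.formula.api as smf

    md = smf.mixedlm("fa ~ group", df, groups=df["subject"])
    mdf = md.fit()
    pvalue = mdf.pvalues[1]  # p-value of the group fixed effect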
""" io_it = self.get_io_iterator() for file_path in io_it: logging.info(f"Applying metric {file_path}") file_name, bundle_name, save_name = self.get_metric_name(file_path) logging.info(f" file name = {file_name}") logging.info(f"file path = {file_path}") pvalues = np.zeros(no_disks) warnings.filterwarnings("ignore") # run mixed linear model for every disk for i in range(no_disks): disk_count = i + 1 df = pd.read_hdf(file_path, where="disk=disk_count") logging.info(f"read the dataframe for disk number {disk_count}") # check if data has significant data to perform LMM if len(df) < 10: raise ValueError("Dataset for Linear Mixed Model is too small") criteria = file_name + " ~ group" md = smf.mixedlm(criteria, df, groups=df["subject"]) mdf = md.fit() pvalues[i] = mdf.pvalues[1] x = list(range(1, len(pvalues) + 1)) y = -1 * np.log10(pvalues) save_file = os.path.join(out_dir, save_name + "_pvalues.npy") np.save(save_file, pvalues) save_file = os.path.join(out_dir, save_name + "_pvalues_log.npy") np.save(save_file, y) save_file = os.path.join(out_dir, save_name + ".png") self.save_lmm_plot(save_file, file_name, bundle_name, x, y) class BundleShapeAnalysis(Workflow): @classmethod def get_short_name(cls): return "BS" @warning_for_keywords() def run(self, subject_folder, *, clust_thr=(5, 3, 1.5), threshold=6, out_dir=""): """Workflow of bundle analytics. Applies bundle shape similarity analysis on bundles of subjects and saves the results in a directory specified by ``out_dir``. See :footcite:p:`Chandio2020a` for further details about the method. Parameters ---------- subject_folder : string Path to the input subject folder. This path may contain wildcards to process multiple inputs at once. clust_thr : variable float, optional list of bundle clustering thresholds used in QuickBundlesX. threshold : float, optional Bundle shape similarity threshold. out_dir : string, optional Output directory. References ---------- .. 
footbibliography:: """ rng = np.random.default_rng() all_subjects = [] if os.path.isdir(subject_folder): groups = os.listdir(subject_folder) groups.sort() else: raise ValueError("Not a directory") for group in groups: if os.path.isdir(os.path.join(subject_folder, group)): subjects = os.listdir(os.path.join(subject_folder, group)) subjects.sort() logging.info( "first " + str(len(subjects)) + " subjects in matrix belong to " + group + " group" ) for sub in subjects: dpath = os.path.join(subject_folder, group, sub) if os.path.isdir(dpath): all_subjects.append(dpath) N = len(all_subjects) bundles = os.listdir(os.path.join(all_subjects[0], "rec_bundles")) for bun in bundles: # bundle shape similarity matrix ba_matrix = np.zeros((N, N)) i = 0 logging.info(bun) for sub in all_subjects: j = 0 bundle1 = load_tractogram( os.path.join(sub, "rec_bundles", bun), reference="same", bbox_valid_check=False, ).streamlines for subi in all_subjects: logging.info(subi) bundle2 = load_tractogram( os.path.join(subi, "rec_bundles", bun), reference="same", bbox_valid_check=False, ).streamlines ba_value = bundle_shape_similarity( bundle1, bundle2, rng, clust_thr=clust_thr, threshold=threshold ) ba_matrix[i][j] = ba_value j += 1 i += 1 logging.info("saving BA score matrix") np.save(os.path.join(out_dir, bun[:-4] + ".npy"), ba_matrix) cmap = matplt.colormaps["Blues"] plt.title(bun[:-4]) plt.imshow(ba_matrix, cmap=cmap) plt.colorbar() plt.clim(0, 1) plt.savefig(os.path.join(out_dir, f"SM_{bun[:-4]}")) plt.clf() dipy-1.11.0/dipy/workflows/tests/000077500000000000000000000000001476546756600167355ustar00rootroot00000000000000dipy-1.11.0/dipy/workflows/tests/__init__.py000066400000000000000000000000001476546756600210340ustar00rootroot00000000000000dipy-1.11.0/dipy/workflows/tests/meson.build000066400000000000000000000011311476546756600210730ustar00rootroot00000000000000python_sources = [ '__init__.py', 'test_align.py', 'test_denoise.py', 'test_docstring_parser.py', 'test_iap.py', 'test_io.py', 'test_masking.py', 'test_nn.py', 'test_reconst_csa_csd.py', 'test_reconst_dki.py', 'test_reconst_dti.py', 'test_reconst_dsi.py', 'test_reconst_ivim.py', 'test_reconst_mapmri.py', 'test_reconst.py', 'test_segment.py', 'test_stats.py', 'test_tracking.py', 'test_utils.py', 'test_viz.py', 'test_workflow.py', 'workflow_tests_utils.py', ] py3.install_sources( python_sources, pure: false, subdir: 'dipy/workflows/tests' ) dipy-1.11.0/dipy/workflows/tests/test_align.py000066400000000000000000000514511476546756600214460ustar00rootroot00000000000000import os.path from os.path import join as pjoin from tempfile import TemporaryDirectory import nibabel as nib import numpy as np import numpy.testing as npt import pytest from dipy.align.tests.test_imwarp import get_synthetic_warped_circle from dipy.align.tests.test_parzenhist import setup_random_transform from dipy.align.transforms import regtransforms from dipy.data import get_fnames from dipy.io.image import load_nifti_data, save_nifti from dipy.io.stateful_tractogram import Space, StatefulTractogram from dipy.io.streamline import load_tractogram, save_tractogram from dipy.testing.decorators import set_random_number_generator from dipy.tracking.streamline import Streamlines from dipy.utils.optpkg import optional_package from dipy.workflows.align import ( ApplyTransformFlow, BundleWarpFlow, ImageRegistrationFlow, MotionCorrectionFlow, ResliceFlow, SlrWithQbxFlow, SynRegistrationFlow, ) _, have_pd, _ = optional_package("pandas") def test_reslice(): with TemporaryDirectory() as out_dir: data_path, 
_, _ = get_fnames(name="small_25") volume = load_nifti_data(data_path) reslice_flow = ResliceFlow() reslice_flow.run(data_path, [1.5, 1.5, 1.5], out_dir=out_dir) out_path = reslice_flow.last_generated_outputs["out_resliced"] resliced = load_nifti_data(out_path) npt.assert_equal(resliced.shape[0] > volume.shape[0], True) npt.assert_equal(resliced.shape[1] > volume.shape[1], True) npt.assert_equal(resliced.shape[2] > volume.shape[2], True) npt.assert_equal(resliced.shape[-1], volume.shape[-1]) def test_slr_flow(): with TemporaryDirectory() as out_dir: data_path = get_fnames(name="fornix") fornix = load_tractogram(data_path, "same", bbox_valid_check=False).streamlines f = Streamlines(fornix) f1 = f.copy() f1_path = pjoin(out_dir, "f1.trk") sft = StatefulTractogram(f1, data_path, Space.RASMM) save_tractogram(sft, f1_path, bbox_valid_check=False) f2 = f1.copy() f2._data += np.array([50, 0, 0]) f2_path = pjoin(out_dir, "f2.trk") sft = StatefulTractogram(f2, data_path, Space.RASMM) save_tractogram(sft, f2_path, bbox_valid_check=False) slr_flow = SlrWithQbxFlow(force=True) slr_flow.run(f1_path, f2_path, out_dir=out_dir) out_path = slr_flow.last_generated_outputs["out_moved"] npt.assert_equal(os.path.isfile(out_path), True) @set_random_number_generator(1234) def test_image_registration(rng): with TemporaryDirectory() as temp_out_dir: static, moving, static_g2w, moving_g2w, smask, mmask, M = ( setup_random_transform( transform=regtransforms[("AFFINE", 3)], rfactor=0.1, rng=rng ) ) save_nifti(pjoin(temp_out_dir, "b0.nii.gz"), data=static, affine=static_g2w) save_nifti(pjoin(temp_out_dir, "t1.nii.gz"), data=moving, affine=moving_g2w) # simulate three direction DWI by repeating b0 three times save_nifti( pjoin(temp_out_dir, "dwi.nii.gz"), data=np.repeat(static[..., None], 3, axis=-1), affine=static_g2w, ) static_image_file = pjoin(temp_out_dir, "b0.nii.gz") moving_image_file = pjoin(temp_out_dir, "t1.nii.gz") dwi_image_file = pjoin(temp_out_dir, "dwi.nii.gz") image_registration_flow = ImageRegistrationFlow() apply_trans = ApplyTransformFlow() def read_distance(qual_fname): with open(pjoin(temp_out_dir, qual_fname), "r") as f: return float(f.readlines()[-1]) def test_com(): out_moved = pjoin(temp_out_dir, "com_moved.nii.gz") out_affine = pjoin(temp_out_dir, "com_affine.txt") image_registration_flow._force_overwrite = True image_registration_flow.run( static_image_file, moving_image_file, transform="com", out_dir=temp_out_dir, out_moved=out_moved, out_affine=out_affine, ) check_existence(out_moved, out_affine) def test_translation(): out_moved = pjoin(temp_out_dir, "trans_moved.nii.gz") out_affine = pjoin(temp_out_dir, "trans_affine.txt") image_registration_flow._force_overwrite = True image_registration_flow.run( static_image_file, moving_image_file, transform="trans", out_dir=temp_out_dir, out_moved=out_moved, out_affine=out_affine, save_metric=True, level_iters=[100, 10, 1], out_quality="trans_q.txt", ) dist = read_distance("trans_q.txt") npt.assert_almost_equal(float(dist), -0.42097809101318934, 1) check_existence(out_moved, out_affine) def test_rigid(): out_moved = pjoin(temp_out_dir, "rigid_moved.nii.gz") out_affine = pjoin(temp_out_dir, "rigid_affine.txt") image_registration_flow._force_overwrite = True image_registration_flow.run( static_image_file, moving_image_file, transform="rigid", out_dir=temp_out_dir, out_moved=out_moved, out_affine=out_affine, save_metric=True, level_iters=[100, 10, 1], out_quality="rigid_q.txt", ) dist = read_distance("rigid_q.txt") npt.assert_almost_equal(dist, 
-0.6900534794005155, 1) check_existence(out_moved, out_affine) def test_rigid_isoscaling(): out_moved = pjoin(temp_out_dir, "rigid_isoscaling_moved.nii.gz") out_affine = pjoin(temp_out_dir, "rigid_isoscaling_affine.txt") image_registration_flow._force_overwrite = True image_registration_flow.run( static_image_file, moving_image_file, transform="rigid_isoscaling", out_dir=temp_out_dir, out_moved=out_moved, out_affine=out_affine, save_metric=True, level_iters=[100, 10, 1], out_quality="rigid_isoscaling_q.txt", ) dist = read_distance("rigid_isoscaling_q.txt") npt.assert_almost_equal(dist, -0.6960044668271375, 1) check_existence(out_moved, out_affine) def test_rigid_scaling(): out_moved = pjoin(temp_out_dir, "rigid_scaling_moved.nii.gz") out_affine = pjoin(temp_out_dir, "rigid_scaling_affine.txt") image_registration_flow._force_overwrite = True image_registration_flow.run( static_image_file, moving_image_file, transform="rigid_scaling", out_dir=temp_out_dir, out_moved=out_moved, out_affine=out_affine, save_metric=True, level_iters=[100, 10, 1], out_quality="rigid_scaling_q.txt", ) dist = read_distance("rigid_scaling_q.txt") npt.assert_almost_equal(dist, -0.698688892993124, 1) check_existence(out_moved, out_affine) def test_affine(): out_moved = pjoin(temp_out_dir, "affine_moved.nii.gz") out_affine = pjoin(temp_out_dir, "affine_affine.txt") image_registration_flow._force_overwrite = True image_registration_flow.run( static_image_file, moving_image_file, transform="affine", out_dir=temp_out_dir, out_moved=out_moved, out_affine=out_affine, save_metric=True, level_iters=[100, 10, 1], out_quality="affine_q.txt", ) dist = read_distance("affine_q.txt") npt.assert_almost_equal(dist, -0.7670650775914811, 1) check_existence(out_moved, out_affine) # Creating the erroneous behavior def test_err(): image_registration_flow._force_overwrite = True npt.assert_raises( ValueError, image_registration_flow.run, static_image_file, moving_image_file, transform="notransform", ) image_registration_flow._force_overwrite = True npt.assert_raises( ValueError, image_registration_flow.run, static_image_file, moving_image_file, metric="wrong_metric", ) def check_existence(movedfile, affine_mat_file): assert os.path.exists(movedfile) assert os.path.exists(affine_mat_file) return True def test_4D_static(): out_moved = pjoin(temp_out_dir, "trans_moved.nii.gz") out_affine = pjoin(temp_out_dir, "trans_affine.txt") image_registration_flow._force_overwrite = True kwargs = { "static_image_files": dwi_image_file, "moving_image_files": moving_image_file, "transform": "trans", "out_dir": temp_out_dir, "out_moved": out_moved, "out_affine": out_affine, "save_metric": True, "level_iters": [100, 10, 1], "out_quality": "trans_q.txt", } with pytest.raises(ValueError, match="Dimension mismatch"): image_registration_flow.run(**kwargs) image_registration_flow.run(static_vol_idx=0, **kwargs) dist = read_distance("trans_q.txt") npt.assert_almost_equal(float(dist), -0.42097809101318934, 1) check_existence(out_moved, out_affine) apply_trans.run( static_image_files=dwi_image_file, moving_image_files=moving_image_file, out_dir=temp_out_dir, transform_map_file=out_affine, ) # Checking for the transformed volume shape volume = load_nifti_data(pjoin(temp_out_dir, "transformed.nii.gz")) assert volume.ndim == 3 def test_4D_moving(): out_moved = pjoin(temp_out_dir, "trans_moved.nii.gz") out_affine = pjoin(temp_out_dir, "trans_affine.txt") image_registration_flow._force_overwrite = True kwargs = { "static_image_files": static_image_file, 
"moving_image_files": dwi_image_file, "transform": "trans", "out_dir": temp_out_dir, "out_moved": out_moved, "out_affine": out_affine, "save_metric": True, "level_iters": [100, 10, 1], "out_quality": "trans_q.txt", } with pytest.raises(ValueError, match="Dimension mismatch"): image_registration_flow.run(**kwargs) image_registration_flow.run(moving_vol_idx=0, **kwargs) dist = read_distance("trans_q.txt") npt.assert_almost_equal(float(dist), -1.0002607616786339, 1) check_existence(out_moved, out_affine) apply_trans.run( static_image_files=static_image_file, moving_image_files=dwi_image_file, out_dir=temp_out_dir, transform_map_file=out_affine, out_file="transformed2.nii.gz", ) # Checking for the transformed volume shape volume = load_nifti_data(pjoin(temp_out_dir, "transformed2.nii.gz")) assert volume.ndim == 4 test_com() test_translation() test_rigid() test_rigid_isoscaling() test_rigid_scaling() test_affine() test_err() test_4D_static() test_4D_moving() def test_apply_transform_error(): flow = ApplyTransformFlow() npt.assert_raises( ValueError, flow.run, "my_fake_static.nii.gz", "my_fake_moving.nii.gz", "my_fake_map.nii.gz", transform_type="wrong_type", ) def test_apply_affine_transform(): with TemporaryDirectory() as temp_out_dir: factors = { ("TRANSLATION", 3): (2.0, None, np.array([2.3, 4.5, 1.7])), ("RIGID", 3): (0.1, None, np.array([0.1, 0.15, -0.11, 2.3, 4.5, 1.7])), ("RIGIDISOSCALING", 3): ( 0.1, None, np.array([0.1, 0.15, -0.11, 2.3, 4.5, 1.7, 0.8]), ), ("RIGIDSCALING", 3): ( 0.1, None, np.array([0.1, 0.15, -0.11, 2.3, 4.5, 1.7, 0.8, 0.9, 1.1]), ), ("AFFINE", 3): ( 0.1, None, np.array( [ 0.99, -0.05, 0.03, 1.3, 0.05, 0.99, -0.10, 2.5, -0.07, 0.10, 0.99, -1.4, ] ), ), } image_registration_flow = ImageRegistrationFlow() apply_trans = ApplyTransformFlow() for i in factors.keys(): static, moving, static_g2w, moving_g2w, smask, mmask, M = ( setup_random_transform( transform=regtransforms[i], rfactor=factors[i][0] ) ) stat_file = str(i[0]) + "_static.nii.gz" mov_file = str(i[0]) + "_moving.nii.gz" save_nifti(pjoin(temp_out_dir, stat_file), data=static, affine=static_g2w) save_nifti(pjoin(temp_out_dir, mov_file), data=moving, affine=moving_g2w) static_image_file = pjoin(temp_out_dir, str(i[0]) + "_static.nii.gz") moving_image_file = pjoin(temp_out_dir, str(i[0]) + "_moving.nii.gz") out_moved = pjoin(temp_out_dir, str(i[0]) + "_moved.nii.gz") out_affine = pjoin(temp_out_dir, str(i[0]) + "_affine.txt") if str(i[0]) == "TRANSLATION": transform_type = "trans" elif str(i[0]) == "RIGIDISOSCALING": transform_type = "rigid_isoscaling" elif str(i[0]) == "RIGIDSCALING": transform_type = "rigid_scaling" else: transform_type = str(i[0]).lower() image_registration_flow.run( static_image_file, moving_image_file, transform=transform_type, out_dir=temp_out_dir, out_moved=out_moved, out_affine=out_affine, level_iters=[1, 1, 1], save_metric=False, ) # Checking for the created moved file. assert os.path.exists(out_moved) assert os.path.exists(out_affine) images = pjoin(temp_out_dir, "*moving*") apply_trans.run( static_image_file, images, out_dir=temp_out_dir, transform_map_file=out_affine, ) # Checking for the transformed file. 
assert os.path.exists(pjoin(temp_out_dir, "transformed.nii.gz")) def test_motion_correction(): data_path, fbvals_path, fbvecs_path = get_fnames(name="small_64D") with TemporaryDirectory() as out_dir: # Use an abbreviated data-set: img = nib.load(data_path) data = img.get_fdata()[..., :10] nib.save( nib.Nifti1Image(data, img.affine), os.path.join(out_dir, "data.nii.gz") ) # Save a subset: bvals = np.loadtxt(fbvals_path) bvecs = np.loadtxt(fbvecs_path) np.savetxt(os.path.join(out_dir, "bvals.txt"), bvals[:10]) np.savetxt(os.path.join(out_dir, "bvecs.txt"), bvecs[:10]) motion_correction_flow = MotionCorrectionFlow() motion_correction_flow._force_overwrite = True motion_correction_flow.run( os.path.join(out_dir, "data.nii.gz"), os.path.join(out_dir, "bvals.txt"), os.path.join(out_dir, "bvecs.txt"), out_dir=out_dir, ) out_path = motion_correction_flow.last_generated_outputs["out_moved"] corrected = load_nifti_data(out_path) npt.assert_equal(corrected.shape, data.shape) npt.assert_equal(corrected.min(), data.min()) npt.assert_equal(corrected.max(), data.max()) def test_syn_registration_flow(): moving_data, static_data = get_synthetic_warped_circle(40) moving_data[..., :10] = 0 moving_data[..., -1:-11:-1] = 0 static_data[..., :10] = 0 static_data[..., -1:-11:-1] = 0 syn_flow = SynRegistrationFlow() with TemporaryDirectory() as out_dir: static_img = nib.Nifti1Image(static_data.astype(float), np.eye(4)) fname_static = pjoin(out_dir, "tmp_static.nii.gz") nib.save(static_img, fname_static) moving_img = nib.Nifti1Image(moving_data.astype(float), np.eye(4)) fname_moving = pjoin(out_dir, "tmp_moving.nii.gz") nib.save(moving_img, fname_moving) positional_args = [fname_static, fname_moving] # Test the cc metric metric_optional_args = { "metric": "cc", "mopt_sigma_diff": 2.0, "mopt_radius": 4, } optimizer_optional_args = { "level_iters": [10, 10, 5], "step_length": 0.25, "opt_tol": 1e-5, "inv_iter": 20, "inv_tol": 1e-3, "ss_sigma_factor": 0.2, } all_args = dict(metric_optional_args, **optimizer_optional_args) syn_flow.run(*positional_args, out_dir=out_dir, **all_args) warped_path = syn_flow.last_generated_outputs["out_warped"] npt.assert_equal(os.path.isfile(warped_path), True) warped_map_path = syn_flow.last_generated_outputs["out_field"] npt.assert_equal(os.path.isfile(warped_map_path), True) # Test the ssd metric metric_optional_args = { "metric": "ssd", "mopt_smooth": 4.0, "mopt_inner_iter": 5, "mopt_step_type": "demons", } optimizer_optional_args = { "level_iters": [200, 100, 50, 25], "step_length": 0.5, "opt_tol": 1e-4, "inv_iter": 40, "inv_tol": 1e-3, "ss_sigma_factor": 0.2, } all_args = dict(metric_optional_args, **optimizer_optional_args) syn_flow.run(*positional_args, out_dir=out_dir, **all_args) warped_path = syn_flow.last_generated_outputs["out_warped"] npt.assert_equal(os.path.isfile(warped_path), True) warped_map_path = syn_flow.last_generated_outputs["out_field"] npt.assert_equal(os.path.isfile(warped_map_path), True) # Test the em metric metric_optional_args = { "metric": "em", "mopt_smooth": 1.0, "mopt_inner_iter": 5, "mopt_step_type": "gauss_newton", } optimizer_optional_args = { "level_iters": [200, 100, 50, 25], "step_length": 0.5, "opt_tol": 1e-4, "inv_iter": 40, "inv_tol": 1e-3, "ss_sigma_factor": 0.2, } all_args = dict(metric_optional_args, **optimizer_optional_args) syn_flow.run(*positional_args, out_dir=out_dir, **all_args) warped_path = syn_flow.last_generated_outputs["out_warped"] npt.assert_equal(os.path.isfile(warped_path), True) warped_map_path = 
syn_flow.last_generated_outputs["out_field"] npt.assert_equal(os.path.isfile(warped_map_path), True) @pytest.mark.skipif(not have_pd, reason="Requires pandas") def test_bundlewarp_flow(): with TemporaryDirectory() as out_dir: data_path = get_fnames(name="fornix") fornix = load_tractogram(data_path, "same", bbox_valid_check=False).streamlines f = Streamlines(fornix) f1 = f.copy() f1_path = pjoin(out_dir, "f1.trk") sft = StatefulTractogram(f1, data_path, Space.RASMM) save_tractogram(sft, f1_path, bbox_valid_check=False) f2 = f1.copy() f2._data += np.array([50, 0, 0]) f2_path = pjoin(out_dir, "f2.trk") sft = StatefulTractogram(f2, data_path, Space.RASMM) save_tractogram(sft, f2_path, bbox_valid_check=False) bw_flow = BundleWarpFlow(force=True) bw_flow.run(f1_path, f2_path, out_dir=out_dir) out_linearly_moved = pjoin(out_dir, "linearly_moved.trk") out_nonlinearly_moved = pjoin(out_dir, "nonlinearly_moved.trk") assert os.path.exists(out_linearly_moved) assert os.path.exists(out_nonlinearly_moved) dipy-1.11.0/dipy/workflows/tests/test_denoise.py000066400000000000000000000112771476546756600220040ustar00rootroot00000000000000import os from tempfile import TemporaryDirectory import numpy as np import numpy.testing as npt import pytest from dipy.data import get_fnames from dipy.io.image import load_nifti, load_nifti_data, save_nifti from dipy.testing import assert_false, assert_greater, assert_less, assert_true from dipy.testing.decorators import set_random_number_generator from dipy.utils.optpkg import optional_package from dipy.workflows.denoise import ( GibbsRingingFlow, LPCAFlow, MPPCAFlow, NLMeansFlow, Patch2SelfFlow, ) sklearn, has_sklearn, _ = optional_package("sklearn") needs_sklearn = pytest.mark.skipif( not has_sklearn, reason=sklearn._msg if not has_sklearn else "" ) def test_nlmeans_flow(): with TemporaryDirectory() as out_dir: data_path, _, _ = get_fnames() volume, affine = load_nifti(data_path) nlmeans_flow = NLMeansFlow() nlmeans_flow.run(data_path, out_dir=out_dir) assert_true(os.path.isfile(nlmeans_flow.last_generated_outputs["out_denoised"])) nlmeans_flow._force_overwrite = True nlmeans_flow.run(data_path, sigma=4, out_dir=out_dir) denoised_path = nlmeans_flow.last_generated_outputs["out_denoised"] assert_true(os.path.isfile(denoised_path)) denoised_data, denoised_affine = load_nifti(denoised_path) npt.assert_equal(denoised_data.shape, volume.shape) npt.assert_array_almost_equal(denoised_affine, affine) @needs_sklearn def test_patch2self_flow(): with TemporaryDirectory() as out_dir: data_path, fbvals, fbvecs = get_fnames() patch2self_flow = Patch2SelfFlow() patch2self_flow.run( data_path, fbvals, patch_radius=(0, 0, 0), out_dir=out_dir, ver=1 ) assert_true( os.path.isfile(patch2self_flow.last_generated_outputs["out_denoised"]) ) patch2self_flow = Patch2SelfFlow() patch2self_flow.run( data_path, fbvals, patch_radius=(0, 0, 0), out_dir=out_dir, ver=3 ) assert_true( os.path.isfile(patch2self_flow.last_generated_outputs["out_denoised"]) ) def test_lpca_flow(): with TemporaryDirectory() as out_dir: data_path, fbvals, fbvecs = get_fnames() lpca_flow = LPCAFlow() lpca_flow.run(data_path, fbvals, fbvecs, out_dir=out_dir) assert_true(os.path.isfile(lpca_flow.last_generated_outputs["out_denoised"])) @set_random_number_generator() def test_mppca_flow(rng): with TemporaryDirectory() as out_dir: S0 = 100 + 2 * rng.standard_normal((22, 23, 30, 20)) data_path = os.path.join(out_dir, "random_noise.nii.gz") save_nifti(data_path, S0, np.eye(4)) mppca_flow = MPPCAFlow() mppca_flow.run(data_path, 
out_dir=out_dir) assert_true(os.path.isfile(mppca_flow.last_generated_outputs["out_denoised"])) assert_false(os.path.isfile(mppca_flow.last_generated_outputs["out_sigma"])) mppca_flow._force_overwrite = True mppca_flow.run(data_path, return_sigma=True, pca_method="svd", out_dir=out_dir) assert_true(os.path.isfile(mppca_flow.last_generated_outputs["out_denoised"])) assert_true(os.path.isfile(mppca_flow.last_generated_outputs["out_sigma"])) denoised_path = mppca_flow.last_generated_outputs["out_denoised"] denoised_data = load_nifti_data(denoised_path) assert_greater(denoised_data.min(), S0.min()) assert_less(denoised_data.max(), S0.max()) npt.assert_equal(np.round(denoised_data.mean()), 100) def test_gibbs_flow(): def generate_slice(): Nori = 32 image = np.zeros((6 * Nori, 6 * Nori)) image[Nori : 2 * Nori, Nori : 2 * Nori] = 1 image[Nori : 2 * Nori, 4 * Nori : 5 * Nori] = 1 image[2 * Nori : 3 * Nori, Nori : 3 * Nori] = 1 image[3 * Nori : 4 * Nori, 2 * Nori : 3 * Nori] = 2 image[3 * Nori : 4 * Nori, 4 * Nori : 5 * Nori] = 1 image[4 * Nori : 5 * Nori, 3 * Nori : 5 * Nori] = 3 # Corrupt image with gibbs ringing c = np.fft.fft2(image) c = np.fft.fftshift(c) c_crop = c[48:144, 48:144] image_gibbs = abs(np.fft.ifft2(c_crop) / 4) return image_gibbs with TemporaryDirectory() as out_dir: image4d = np.zeros((96, 96, 2, 2)) image4d[:, :, 0, 0] = generate_slice() image4d[:, :, 1, 0] = generate_slice() image4d[:, :, 0, 1] = generate_slice() image4d[:, :, 1, 1] = generate_slice() data_path = os.path.join(out_dir, "random_noise.nii.gz") save_nifti(data_path, image4d, np.eye(4)) gibbs_flow = GibbsRingingFlow() gibbs_flow.run(data_path, out_dir=out_dir) assert_true(os.path.isfile(gibbs_flow.last_generated_outputs["out_unring"])) dipy-1.11.0/dipy/workflows/tests/test_docstring_parser.py000066400000000000000000000317421476546756600237250ustar00rootroot00000000000000""" This was taken directly from the file test_docscrape.py of numpydoc package. Copyright (C) 2008 Stefan van der Walt , Pauli Virtanen Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. """ # -*- encoding:utf-8 -*- import textwrap import numpy.testing as npt from dipy.workflows.docstring_parser import NumpyDocString doc_txt = """\ numpy.multivariate_normal(mean, cov, shape=None, spam=None) Draw values from a multivariate normal distribution with specified mean and covariance. 
The multivariate normal or Gaussian distribution is a generalisation of the one-dimensional normal distribution to higher dimensions. Parameters ---------- mean : (N,) ndarray Mean of the N-dimensional distribution. .. math:: (1+2+3)/3 cov : (N, N) ndarray Covariance matrix of the distribution. shape : tuple of ints Given a shape of, for example, (m,n,k), m*n*k samples are generated, and packed in an m-by-n-by-k arrangement. Because each sample is N-dimensional, the output shape is (m,n,k,N). Returns ------- out : ndarray The drawn samples, arranged according to `shape`. If the shape given is (m,n,...), then the shape of `out` is is (m,n,...,N). In other words, each entry ``out[i,j,...,:]`` is an N-dimensional value drawn from the distribution. list of str This is not a real return value. It exists to test anonymous return values. Other Parameters ---------------- spam : parrot A parrot off its mortal coil. Raises ------ RuntimeError Some error Warns ----- RuntimeWarning Some warning Warnings -------- Certain warnings apply. Notes ----- Instead of specifying the full covariance matrix, popular approximations include: - Spherical covariance (`cov` is a multiple of the identity matrix) - Diagonal covariance(`cov` has non-negative elements only on the diagonal) This geometrical property can be seen in two dimensions by plotting generated data-points: >>> mean = [0,0] >>> cov = [[1,0],[0,100]] # diagonal covariance, points lie on x or y-axis >>> x,y = multivariate_normal(mean,cov,5000).T >>> plt.plot(x,y,'x'); plt.axis('equal'); plt.show() Note that the covariance matrix must be symmetric and non-negative definite. References ---------- .. [1] A. Papoulis, "Probability, Random Variables, and Stochastic Processes," 3rd ed., McGraw-Hill Companies, 1991 .. [2] R.O. Duda, P.E. Hart, and D.G. Stork, "Pattern Classification," 2nd ed., Wiley, 2001. See Also -------- some, other, funcs otherfunc : relationship Examples -------- >>> mean = (1,2) >>> cov = [[1,0],[1,0]] >>> x = multivariate_normal(mean,cov,(3,3)) >>> print x.shape (3, 3, 2) The following is probably true, given that 0.6 is roughly twice the standard deviation: >>> print list( (x[0,0,:] - mean) < 0.6 ) [True, True] .. index:: random :refguide: random;distributions, random;gauss """ doc = NumpyDocString(doc_txt) doc_yields_txt = """ Test generator Yields ------ a : int The number of apples. b : int The number of bananas. int The number of unknowns. 
""" doc_yields = NumpyDocString(doc_yields_txt) def test_signature(): npt.assert_(doc["Signature"].startswith("numpy.multivariate_normal(")) npt.assert_(doc["Signature"].endswith("spam=None)")) def test_summary(): npt.assert_(doc["Summary"][0].startswith("Draw values")) npt.assert_(doc["Summary"][-1].endswith("covariance.")) def test_extended_summary(): npt.assert_(doc["Extended Summary"][0].startswith("The multivariate normal")) def test_parameters(): npt.assert_equal(len(doc["Parameters"]), 3) npt.assert_equal([n for n, _, _ in doc["Parameters"]], ["mean", "cov", "shape"]) arg, arg_type, desc = doc["Parameters"][1] npt.assert_equal(arg_type, "(N, N) ndarray") npt.assert_(desc[0].startswith("Covariance matrix")) npt.assert_equal(doc["Parameters"][0][-1][-2], " (1+2+3)/3") def test_other_parameters(): npt.assert_equal(len(doc["Other Parameters"]), 1) npt.assert_equal([n for n, _, _ in doc["Other Parameters"]], ["spam"]) arg, arg_type, desc = doc["Other Parameters"][0] npt.assert_equal(arg_type, "parrot") npt.assert_(desc[0].startswith("A parrot off its mortal coil")) def test_returns(): npt.assert_equal(len(doc["Returns"]), 2) arg, arg_type, desc = doc["Returns"][0] npt.assert_equal(arg, "out") npt.assert_equal(arg_type, "ndarray") npt.assert_(desc[0].startswith("The drawn samples")) npt.assert_(desc[-1].endswith("distribution.")) arg, arg_type, desc = doc["Returns"][1] npt.assert_equal(arg, "list of str") npt.assert_equal(arg_type, "") npt.assert_(desc[0].startswith("This is not a real")) npt.assert_(desc[-1].endswith("anonymous return values.")) def test_notes(): npt.assert_(doc["Notes"][0].startswith("Instead")) npt.assert_(doc["Notes"][-1].endswith("definite.")) npt.assert_equal(len(doc["Notes"]), 17) def test_references(): npt.assert_(doc["References"][0].startswith("..")) npt.assert_(doc["References"][-1].endswith("2001.")) def test_examples(): npt.assert_(doc["Examples"][0].startswith(">>>")) npt.assert_(doc["Examples"][-1].endswith("True]")) def test_index(): npt.assert_equal(doc["index"]["default"], "random") npt.assert_equal(len(doc["index"]), 2) npt.assert_equal(len(doc["index"]["refguide"]), 2) def non_blank_line_by_line_compare(a, b): a = textwrap.dedent(a) b = textwrap.dedent(b) a = [ell.rstrip() for ell in a.split("\n") if ell.strip()] b = [ell.rstrip() for ell in b.split("\n") if ell.strip()] for n, line in enumerate(a): if not line == b[n]: raise AssertionError( f"Lines {n} of a and b differ: \n>>> {line}\n<<< {b[n]}\n" ) def test_str(): # doc_txt has the order of Notes and See Also sections flipped. # This should be handled automatically, and so, one thing this test does # is to make sure that See Also precedes Notes in the output. non_blank_line_by_line_compare( str(doc), """numpy.multivariate_normal(mean, cov, shape=None, spam=None) Draw values from a multivariate normal distribution with specified mean and covariance. The multivariate normal or Gaussian distribution is a generalisation of the one-dimensional normal distribution to higher dimensions. Parameters ---------- mean : (N,) ndarray Mean of the N-dimensional distribution. .. math:: (1+2+3)/3 cov : (N, N) ndarray Covariance matrix of the distribution. shape : tuple of ints Given a shape of, for example, (m,n,k), m*n*k samples are generated, and packed in an m-by-n-by-k arrangement. Because each sample is N-dimensional, the output shape is (m,n,k,N). Returns ------- out : ndarray The drawn samples, arranged according to `shape`. If the shape given is (m,n,...), then the shape of `out` is is (m,n,...,N). 
In other words, each entry ``out[i,j,...,:]`` is an N-dimensional value drawn from the distribution. list of str This is not a real return value. It exists to test anonymous return values. Other Parameters ---------------- spam : parrot A parrot off its mortal coil. Raises ------ RuntimeError Some error Warns ----- RuntimeWarning Some warning Warnings -------- Certain warnings apply. See Also -------- `some`_, `other`_, `funcs`_ `otherfunc`_ relationship Notes ----- Instead of specifying the full covariance matrix, popular approximations include: - Spherical covariance (`cov` is a multiple of the identity matrix) - Diagonal covariance(`cov` has non-negative elements only on the diagonal) This geometrical property can be seen in two dimensions by plotting generated data-points: >>> mean = [0,0] >>> cov = [[1,0],[0,100]] # diagonal covariance, points lie on x or y-axis >>> x,y = multivariate_normal(mean,cov,5000).T >>> plt.plot(x,y,'x'); plt.axis('equal'); plt.show() Note that the covariance matrix must be symmetric and non-negative definite. References ---------- .. [1] A. Papoulis, "Probability, Random Variables, and Stochastic Processes," 3rd ed., McGraw-Hill Companies, 1991 .. [2] R.O. Duda, P.E. Hart, and D.G. Stork, "Pattern Classification," 2nd ed., Wiley, 2001. Examples -------- >>> mean = (1,2) >>> cov = [[1,0],[1,0]] >>> x = multivariate_normal(mean,cov,(3,3)) >>> print x.shape (3, 3, 2) The following is probably true, given that 0.6 is roughly twice the standard deviation: >>> print list( (x[0,0,:] - mean) < 0.6 ) [True, True] .. index:: random :refguide: random;distributions, random;gauss""", ) doc2 = NumpyDocString(""" Returns array of indices of the maximum values of along the given axis. Parameters ---------- a : {array_like} Array to look in. axis : {None, integer} If None, the index is into the flattened array, otherwise along the specified axis""") def test_parameters_without_extended_description(): npt.assert_equal(len(doc2["Parameters"]), 2) doc3 = NumpyDocString(""" my_signature(*params, **kwds) Return this and that. """) doc5 = NumpyDocString( """ a.something() Raises ------ LinAlgException If array is singular. Warns ----- SomeWarning If needed """ ) def test_raises(): npt.assert_equal(len(doc5["Raises"]), 1) name, _, desc = doc5["Raises"][0] npt.assert_equal(name, "LinAlgException") npt.assert_equal(desc, ["If array is singular."]) def test_warns(): npt.assert_equal(len(doc5["Warns"]), 1) name, _, desc = doc5["Warns"][0] npt.assert_equal(name, "SomeWarning") npt.assert_equal(desc, ["If needed"]) def test_see_also(): doc6 = NumpyDocString( """ z(x,theta) See Also -------- func_a, func_b, func_c func_d : some equivalent func foo.func_e : some other func over multiple lines func_f, func_g, :meth:`func_h`, func_j, func_k :obj:`baz.obj_q` :class:`class_j`: fubar foobar """ ) npt.assert_equal(len(doc6["See Also"]), 12) for func, desc, role in doc6["See Also"]: if func in ( "func_a", "func_b", "func_c", "func_f", "func_g", "func_h", "func_j", "func_k", "baz.obj_q", ): assert not desc else: assert desc if func == "func_h": assert role == "meth" elif func == "baz.obj_q": assert role == "obj" elif func == "class_j": assert role == "class" else: assert role is None if func == "func_d": assert desc == ["some equivalent func"] elif func == "foo.func_e": assert desc == ["some other func over", "multiple lines"] elif func == "class_j": assert desc == ["fubar", "foobar"] doc7 = NumpyDocString(""" Doc starts on second line. 
""") def test_empty_first_line(): assert doc7["Summary"][0].startswith("Doc starts") def test_duplicate_signature(): # Duplicate function signatures occur e.g. in ufuncs, when the # automatic mechanism adds one, and a more detailed comes from the # docstring itself. doc = NumpyDocString( """ z(x1, x2) z(a, theta) """ ) assert doc["Signature"].strip() == "z(a, theta)" class_doc_txt = """ Foo Parameters ---------- f : callable ``f(t, y, *f_args)`` Aaa. jac : callable ``jac(t, y, *jac_args)`` Bbb. Attributes ---------- t : float Current time. y : ndarray Current variable values. x : float Some parameter Methods ------- a b c Examples -------- For usage examples, see `ode`. """ dipy-1.11.0/dipy/workflows/tests/test_iap.py000066400000000000000000000225241476546756600211240ustar00rootroot00000000000000from os.path import join as pjoin import sys from tempfile import TemporaryDirectory import numpy.testing as npt from dipy.workflows.base import ( IntrospectiveArgumentParser, add_default_args_to_docstring, none_or_dtype, ) from dipy.workflows.flow_runner import run_flow from dipy.workflows.tests.workflow_tests_utils import ( DummyCombinedWorkflow, DummyFlow, DummyVariableTypeErrorWorkflow, DummyVariableTypeWorkflow, DummyWorkflow1, DummyWorkflowOptionalStr, ) def test_none_or_dtype(): # test None for typ in [int, float, str, tuple, list]: dec = none_or_dtype(typ) npt.assert_equal(dec("None"), "None") npt.assert_equal(dec("none"), "None") dec = none_or_dtype(int) npt.assert_raises(ValueError, dec, "my value") npt.assert_equal(dec(4), 4) dec = none_or_dtype(str) npt.assert_equal(dec(4), "4") npt.assert_equal(dec("my value"), "my value") npt.assert_equal(dec([4]), "[4]") dec = none_or_dtype(float) npt.assert_raises(ValueError, dec, "my value") for val in [ (4,), [ 4, ], ]: npt.assert_raises(TypeError, dec, val) npt.assert_equal(dec(True), 1.0) npt.assert_equal(dec(4), 4.0) npt.assert_equal(dec(4.0), 4.0) dec = none_or_dtype(bool) dec = none_or_dtype(tuple) def test_variable_type(): with TemporaryDirectory() as out_dir: open(pjoin(out_dir, "test"), "w").close() open(pjoin(out_dir, "test1"), "w").close() open(pjoin(out_dir, "test2"), "w").close() sys.argv = [sys.argv[0]] pos_results = [ pjoin(out_dir, "test"), pjoin(out_dir, "test1"), pjoin(out_dir, "test2"), 12, ] inputs = inputs_from_results(pos_results) sys.argv.extend(inputs) dcwf = DummyVariableTypeWorkflow() _, positional_res, positional_res2 = run_flow(dcwf) npt.assert_equal(positional_res2, 12) for k, v in zip(positional_res, pos_results[:-1]): npt.assert_equal(k, v) dcwf = DummyVariableTypeErrorWorkflow() npt.assert_raises(ValueError, run_flow, dcwf) def test_iap(): sys.argv = [sys.argv[0]] pos_keys = [ "positional_str", "positional_bool", "positional_int", "positional_float", ] opt_keys = [ "optional_str", "optional_bool", "optional_int", "optional_float", "optional_int_2", "optional_float_2", ] pos_results = ["test", 0, 10, 10.2] opt_results = ["opt_test", True, 20, 20.2, None, None] inputs = inputs_from_results(opt_results, opt_keys, optional=True) inputs.extend(inputs_from_results(pos_results)) sys.argv.extend(inputs) parser = IntrospectiveArgumentParser() dummy_flow = DummyFlow() parser.add_workflow(dummy_flow) args = parser.get_flow_args() all_keys = pos_keys + opt_keys all_results = pos_results + opt_results # Test if types and order are respected for k, v in zip(all_keys, all_results): npt.assert_equal(args[k], v) # Test if **args really fits dummy_flow's arguments return_values = dummy_flow.run(**args) 
npt.assert_array_equal(return_values, all_results + [2.0]) def test_optional_str(): # Test optional and variable str argument exists but does not have a value sys.argv = [sys.argv[0]] inputs = ["--optional_str_1"] sys.argv.extend(inputs) parser = IntrospectiveArgumentParser() dummy_flow = DummyWorkflowOptionalStr() parser.add_workflow(dummy_flow) args = parser.get_flow_args() all_keys = ["optional_str_1"] all_results = [[]] # Test if types and order are respected for k, v in zip(all_keys, all_results): npt.assert_equal(args[k], v) # Test if **args really fits dummy_flow's arguments return_values = dummy_flow.run(**args) npt.assert_equal(return_values, all_results + ["default"]) # Test optional and variable str argument exists and has a value sys.argv = [sys.argv[0]] inputs = ["--optional_str_1", "test"] sys.argv.extend(inputs) parser = IntrospectiveArgumentParser() dummy_flow = DummyWorkflowOptionalStr() parser.add_workflow(dummy_flow) args = parser.get_flow_args() all_keys = ["optional_str_1"] all_results = [["test"]] # Test if types and order are respected for k, v in zip(all_keys, all_results): npt.assert_equal(args[k], v) # Test if **args really fits dummy_flow's arguments return_values = dummy_flow.run(**args) npt.assert_equal(return_values, all_results + ["default"]) # Test optional str empty arguments sys.argv = [sys.argv[0]] inputs = ["--optional_str_2"] sys.argv.extend(inputs) parser = IntrospectiveArgumentParser() dummy_flow = DummyWorkflowOptionalStr() parser.add_workflow(dummy_flow) with npt.assert_raises(SystemExit) as cm: parser.get_flow_args() npt.assert_equal(cm.exception.code, 2) def test_iap_epilog_and_description(): parser = IntrospectiveArgumentParser() dummy_flow = DummyWorkflow1() parser.add_workflow(dummy_flow) assert "dummy references" in parser.epilog assert "Workflow used to test combined" in parser.description def test_flow_runner(): old_argv = sys.argv sys.argv = [sys.argv[0]] opt_keys = [ "param_combined", "dwf1.param1", "dwf2.param2", "force", "out_strat", "mix_names", ] pos_results = ["dipy.txt"] opt_results = [30, 10, 20, True, "absolute", True] inputs = inputs_from_results(opt_results, opt_keys, optional=True) inputs.extend(inputs_from_results(pos_results)) sys.argv.extend(inputs) dcwf = DummyCombinedWorkflow() param1, param2, combined = run_flow(dcwf) # generic flow params assert dcwf._force_overwrite assert dcwf._output_strategy == "absolute" assert dcwf._mix_names # sub flow params assert param1 == 10 assert param2 == 20 # parent flow param assert combined == 30 sys.argv = old_argv def inputs_from_results(results, keys=None, optional=False): prefix = "--" inputs = [] for idx, result in enumerate(results): if keys is not None: inputs.append(prefix + keys[idx]) if optional and str(result) in ["True", "False"]: continue inputs.append(str(result)) return inputs def test_add_default_args_to_docstring(): # --- Test 1: Adds default values correctly --- def func1(a, b=10, c="hello"): """Sample function.""" pass npds1 = {"Parameters": [("a", "int", []), ("b", "int", []), ("c", "str", [])]} add_default_args_to_docstring(npds1, func1) assert npds1["Parameters"][1][2] == ["(default: 10)"] assert npds1["Parameters"][2][2] == ["(default: hello)"] # --- Test 2: Ignores non-default parameters --- def func2(x, y): """Function without defaults.""" pass npds2 = {"Parameters": [("x", "int", []), ("y", "int", [])]} add_default_args_to_docstring(npds2, func2) assert npds2["Parameters"][0][2] == [] assert npds2["Parameters"][1][2] == [] # --- Test 3: Handles 'out_dir' special 
case --- def func3(out_dir="."): """Handles out_dir default properly.""" pass npds3 = {"Parameters": [("out_dir", "str", [])]} add_default_args_to_docstring(npds3, func3) assert npds3["Parameters"][0][2] == ["(default: current directory)"] # --- Test 4: Handles empty docstring parameters --- def func4(a=42): """Function with an empty parameter list in docstring.""" pass npds4 = {"Parameters": []} # No parameters in docstring add_default_args_to_docstring(npds4, func4) assert npds4["Parameters"] == [] # Should remain unchanged # --- Test 5: Missing parameter in docstring --- def func5(a=42, b="test"): """Function with some missing parameters in docstring.""" pass npds5 = { "Parameters": [ ("a", "int", []) # 'b' is missing ] } add_default_args_to_docstring(npds5, func5) assert npds5["Parameters"][0][2] == ["(default: 42)"] # Only 'a' updated assert len(npds5["Parameters"]) == 1 # 'b' should not be added automatically # --- Test 6: Handles multi-line parameter descriptions --- def func6(alpha=0.5, beta=0.3): """Function with multi-line parameter descriptions. Parameters ---------- alpha : float Learning rate parameter. This value controls step size. beta : float Another parameter with multiple lines of description. """ pass npds6 = { "Parameters": [ ( "alpha", "float", ["Learning rate parameter.", "This value controls step size."], ), ( "beta", "float", ["Another parameter with", "multiple lines of description."], ), ] } add_default_args_to_docstring(npds6, func6) assert npds6["Parameters"][0][2] == [ "Learning rate parameter.", "This value controls step size.", "(default: 0.5)", ] assert npds6["Parameters"][1][2] == [ "Another parameter with", "multiple lines of description.", "(default: 0.3)", ] dipy-1.11.0/dipy/workflows/tests/test_io.py000066400000000000000000000414621476546756600207640ustar00rootroot00000000000000import importlib from inspect import getmembers, isfunction import logging import os from os.path import join as pjoin import shutil import sys from tempfile import TemporaryDirectory, mkstemp import numpy as np import numpy.testing as npt import pytest import dipy.core.gradients as grad from dipy.data import default_sphere, get_fnames from dipy.data.fetcher import dipy_home from dipy.direction.peaks import PeaksAndMetrics from dipy.io.image import load_nifti, save_nifti from dipy.io.peaks import load_pam, save_pam from dipy.io.streamline import load_tractogram from dipy.io.utils import nifti1_symmat from dipy.reconst import dti, utils as reconst_utils from dipy.reconst.shm import convert_sh_descoteaux_tournier from dipy.testing import assert_true from dipy.utils.optpkg import optional_package from dipy.utils.tripwire import TripWireError from dipy.workflows.io import ( ConcatenateTractogramFlow, ConvertSHFlow, ConvertTensorsFlow, ConvertTractogramFlow, ExtractB0Flow, ExtractShellFlow, ExtractVolumeFlow, FetchFlow, IoInfoFlow, MathFlow, NiftisToPamFlow, PamToNiftisFlow, SplitFlow, TensorToPamFlow, ) ne, have_ne, _ = optional_package("numexpr") fname_log = mkstemp()[1] logging.basicConfig( level=logging.INFO, format="%(levelname)s %(message)s", filename=fname_log, filemode="w", ) is_big_endian = "big" in sys.byteorder.lower() def test_io_info(): fimg, fbvals, fbvecs = get_fnames(name="small_101D") io_info_flow = IoInfoFlow() io_info_flow.run([fimg, fbvals, fbvecs]) fimg, fbvals, fvecs = get_fnames(name="small_25") io_info_flow = IoInfoFlow() io_info_flow.run([fimg, fbvals, fvecs]) io_info_flow = IoInfoFlow() io_info_flow.run([fimg, fbvals, fvecs], b0_threshold=20, 
bvecs_tol=0.001) filepath_dix, _, _ = get_fnames(name="gold_standard_tracks") if not is_big_endian: io_info_flow = IoInfoFlow() io_info_flow.run(filepath_dix["gs.trx"]) io_info_flow = IoInfoFlow() io_info_flow.run(filepath_dix["gs.trk"]) io_info_flow = IoInfoFlow() npt.assert_raises(TypeError, io_info_flow.run, filepath_dix["gs.tck"]) io_info_flow = IoInfoFlow() io_info_flow.run(filepath_dix["gs.tck"], reference=filepath_dix["gs.nii"]) with open(fname_log, "r") as file: lines = file.readlines() try: npt.assert_equal(lines[-3], "INFO Total number of unit bvectors 25\n") except IndexError: # logging maybe disabled in IDE setting pass def test_io_fetch(): fetch_flow = FetchFlow() with TemporaryDirectory() as out_dir: fetch_flow.run(["bundle_fa_hcp"]) npt.assert_equal(os.path.isdir(os.path.join(dipy_home, "bundle_fa_hcp")), True) fetch_flow.run(["bundle_fa_hcp"], out_dir=out_dir) npt.assert_equal(os.path.isdir(os.path.join(out_dir, "bundle_fa_hcp")), True) def test_io_fetch_fetcher_datanames(): available_data = FetchFlow.get_fetcher_datanames() module_path = "dipy.data.fetcher" if module_path in sys.modules: fetcher_module = importlib.reload(sys.modules[module_path]) else: fetcher_module = importlib.import_module(module_path) ignored_fetchers = ["fetch_data"] fetcher_list = { name.replace("fetch_", ""): func for name, func in getmembers(fetcher_module, isfunction) if name.lower().startswith("fetch_") and name.lower() not in ignored_fetchers } num_expected_fetch_methods = len(fetcher_list) npt.assert_equal(len(available_data), num_expected_fetch_methods) npt.assert_equal( all(dataset_name in available_data.keys() for dataset_name in fetcher_list), True, ) def test_split_flow(): with TemporaryDirectory() as out_dir: split_flow = SplitFlow() data_path, _, _ = get_fnames() volume, affine = load_nifti(data_path) split_flow.run(data_path, out_dir=out_dir) assert_true(os.path.isfile(split_flow.last_generated_outputs["out_split"])) split_flow._force_overwrite = True split_flow.run(data_path, vol_idx=0, out_dir=out_dir) split_path = split_flow.last_generated_outputs["out_split"] assert_true(os.path.isfile(split_path)) split_data, split_affine = load_nifti(split_path) npt.assert_equal(split_data.shape, volume[..., 0].shape) npt.assert_array_almost_equal(split_affine, affine) def test_concatenate_flow(): with TemporaryDirectory() as out_dir: concatenate_flow = ConcatenateTractogramFlow() data_path, _, _ = get_fnames(name="gold_standard_tracks") input_files = [ v for k, v in data_path.items() if k in ["gs.trk", "gs.tck", "gs.trx", "gs.fib"] ] concatenate_flow.run(*input_files, out_dir=out_dir) assert_true( concatenate_flow.last_generated_outputs["out_extension"].endswith("trx") ) assert_true( os.path.isfile( concatenate_flow.last_generated_outputs["out_tractogram"] + ".trx" ) ) trk = load_tractogram( concatenate_flow.last_generated_outputs["out_tractogram"] + ".trx", "same" ) npt.assert_equal(len(trk), 13) def test_convert_sh_flow(): with TemporaryDirectory() as out_dir: filepath_in = os.path.join(out_dir, "sh_coeff_img.nii.gz") filename_out = "sh_coeff_img_converted.nii.gz" filepath_out = os.path.join(out_dir, filename_out) # Create an input image dim0, dim1, dim2 = 2, 3, 3 # spatial dimensions of array num_sh_coeffs = 15 # 15 sh coeffs means l_max is 4 img_in = np.arange(dim0 * dim1 * dim2 * num_sh_coeffs, dtype=float).reshape( dim0, dim1, dim2, num_sh_coeffs ) save_nifti(filepath_in, img_in, np.eye(4)) # Compute expected result to compare against later expected_img_out = 
convert_sh_descoteaux_tournier(img_in) # Run the workflow and load the output workflow = ConvertSHFlow() workflow.run( filepath_in, out_dir=out_dir, out_file=filename_out, ) img_out, _ = load_nifti(filepath_out) # Compare npt.assert_array_almost_equal(img_out, expected_img_out) def test_convert_tractogram_flow(): with TemporaryDirectory() as out_dir: data_path, _, _ = get_fnames(name="gold_standard_tracks") input_files = [ v for k, v in data_path.items() if k in [ "gs.tck", ] ] convert_tractogram_flow = ConvertTractogramFlow(mix_names=True) convert_tractogram_flow.run( input_files, reference=data_path["gs.nii"], out_dir=out_dir ) convert_tractogram_flow._force_overwrite = True npt.assert_raises( ValueError, convert_tractogram_flow.run, input_files, out_dir=out_dir ) if not is_big_endian: npt.assert_warns( UserWarning, convert_tractogram_flow.run, data_path["gs.trx"], out_dir=out_dir, out_tractogram="gs_converted.trx", ) def test_convert_tensors_flow(): with TemporaryDirectory() as out_dir: filepath_in = os.path.join(out_dir, "tensors_img.nii.gz") filename_out = "tensors_converted.nii.gz" filepath_out = os.path.join(out_dir, filename_out) # Create an input image fdata, fbval, fbvec = get_fnames(name="small_25") data, affine = load_nifti(fdata) gtab = grad.gradient_table(fbval, bvecs=fbvec) tenmodel = dti.TensorModel(gtab) tenfit = tenmodel.fit(data) tensor_vals = dti.lower_triangular(tenfit.quadratic_form) ten_img = nifti1_symmat(tensor_vals, affine=affine) save_nifti(filepath_in, ten_img.get_fdata().squeeze(), affine) # Compute expected result to compare against later expected_img_out = reconst_utils.convert_tensors( ten_img.get_fdata(), "dipy", "mrtrix" ) # Run the workflow and load the output workflow = ConvertTensorsFlow() workflow.run( filepath_in, from_format="dipy", to_format="mrtrix", out_dir=out_dir, out_tensor=filename_out, ) img_out, _ = load_nifti(filepath_out) npt.assert_array_almost_equal(img_out, expected_img_out) def generate_random_pam(): pam = PeaksAndMetrics() pam.affine = np.eye(4) pam.peak_dirs = np.random.rand(15, 15, 15, 5, 3) pam.peak_values = np.zeros((15, 15, 15, 5)) pam.peak_indices = np.zeros((15, 15, 15, 5)) pam.shm_coeff = np.zeros((15, 15, 15, 45)) pam.sphere = default_sphere pam.B = np.zeros((45, default_sphere.vertices.shape[0])) pam.total_weight = 0.5 pam.ang_thr = 60 pam.gfa = np.zeros((10, 10, 10)) pam.qa = np.zeros((10, 10, 10, 5)) pam.odf = np.zeros((10, 10, 10, default_sphere.vertices.shape[0])) return pam def test_niftis_to_pam_flow(): pam = generate_random_pam() with TemporaryDirectory() as out_dir: fname = pjoin(out_dir, "test.pam5") save_pam(fname, pam) args = [fname, out_dir] flow = PamToNiftisFlow() flow.run(*args) args = [ flow.last_generated_outputs["out_peaks_dir"], flow.last_generated_outputs["out_peaks_values"], flow.last_generated_outputs["out_peaks_indices"], ] flow2 = NiftisToPamFlow() flow2.run(*args, out_dir=out_dir) pam_file = flow2.last_generated_outputs["out_pam"] assert_true(os.path.isfile(pam_file)) res_pam = load_pam(pam_file) npt.assert_array_equal(pam.affine, res_pam.affine) npt.assert_array_almost_equal(pam.peak_dirs, res_pam.peak_dirs) npt.assert_array_almost_equal(pam.peak_values, res_pam.peak_values) npt.assert_array_almost_equal(pam.peak_indices, res_pam.peak_indices) def test_tensor_to_pam_flow(): fdata, fbval, fbvec = get_fnames(name="small_25") gtab = grad.gradient_table(fbval, bvecs=fbvec) data, affine = load_nifti(fdata) dm = dti.TensorModel(gtab) df = dm.fit(data) df.evals[0, 0, 0] = np.array([0, 0, 0]) with 
TemporaryDirectory() as out_dir: f_mevals, f_mevecs = ( pjoin(out_dir, "evals.nii.gz"), pjoin(out_dir, "evecs.nii.gz"), ) save_nifti(f_mevals, df.evals, affine) save_nifti(f_mevecs, df.evecs, affine) args = [f_mevals, f_mevecs] flow = TensorToPamFlow() flow.run(*args, out_dir=out_dir) pam_file = flow.last_generated_outputs["out_pam"] assert_true(os.path.isfile(pam_file)) pam = load_pam(pam_file) npt.assert_array_equal(pam.affine, affine) npt.assert_array_almost_equal(pam.peak_dirs[..., :3, :], df.evecs) npt.assert_array_almost_equal(pam.peak_values[..., :3], df.evals) def test_pam_to_niftis_flow(): pam = generate_random_pam() with TemporaryDirectory() as out_dir: fname = pjoin(out_dir, "test.pam5") save_pam(fname, pam) args = [fname, out_dir] flow = PamToNiftisFlow() flow.run(*args) assert_true(os.path.isfile(flow.last_generated_outputs["out_peaks_dir"])) assert_true(os.path.isfile(flow.last_generated_outputs["out_peaks_values"])) assert_true(os.path.isfile(flow.last_generated_outputs["out_peaks_indices"])) assert_true(os.path.isfile(flow.last_generated_outputs["out_shm"])) assert_true(os.path.isfile(flow.last_generated_outputs["out_gfa"])) assert_true(os.path.isfile(flow.last_generated_outputs["out_sphere"])) def test_math(): with TemporaryDirectory() as out_dir: data_path, _, _ = get_fnames(name="small_101D") data_path_a = pjoin(out_dir, "data_a.nii.gz") data_path_b = pjoin(out_dir, "data_b.nii.gz") shutil.copy(data_path, data_path_a) shutil.copy(data_path, data_path_b) data, _ = load_nifti(data_path) operations = ["vol1*3", "vol1+vol2+vol3", "5*vol1-vol2-vol3", "vol3*2 + vol2"] kwargs = [{"dtype": "i"}, {"dtype": "float32"}, {}, {}] if have_ne: for op, kwarg in zip(operations, kwargs): math_flow = MathFlow() math_flow.run( op, [data_path_a, data_path_b, data_path], out_dir=out_dir, **kwarg ) out_path = pjoin(out_dir, "math_out.nii.gz") out_data, _ = load_nifti(out_path) npt.assert_array_equal(out_data, data * 3) if kwarg: npt.assert_equal(out_data.dtype, np.dtype(kwarg["dtype"])) else: math_flow = MathFlow() npt.assert_raises(TripWireError, math_flow.run, "vol1*3", [data_path_a]) @pytest.mark.skipif(not have_ne, reason="numexpr not installed") def test_math_error(): with TemporaryDirectory() as out_dir: data_path, _, _ = get_fnames(name="small_101D") data_path_2, _, _ = get_fnames(name="small_64D") data_path_a = pjoin(out_dir, "data_a.nii.gz") data_path_b = pjoin(out_dir, "data_b.gz") data_path_c = pjoin(out_dir, "data_c.nii") shutil.copy(data_path, data_path_a) shutil.copy(data_path, data_path_b) math_flow = MathFlow() npt.assert_raises( SyntaxError, math_flow.run, "vol1*", [data_path_a], out_dir=out_dir ) npt.assert_raises( SystemExit, math_flow.run, "vol1*2", [data_path_a], dtype="k", out_dir=out_dir, ) npt.assert_raises( SystemExit, math_flow.run, "vol1*2", [data_path_b], out_dir=out_dir ) npt.assert_raises( SystemExit, math_flow.run, "vol1*2", [data_path_c], out_dir=out_dir ) npt.assert_raises( SystemExit, math_flow.run, "vol1*vol3", [data_path_a], out_dir=out_dir ) npt.assert_raises( SystemExit, math_flow.run, "vol1*vol2", [data_path, data_path_2], out_dir=out_dir, ) def test_extract_b0_flow(): with TemporaryDirectory() as out_dir: fdata, fbval, fbvec = get_fnames(name="small_25") data, affine = load_nifti(fdata) b0_data = data[..., 0] b0_path = pjoin(out_dir, "b0_expected.nii.gz") save_nifti(b0_path, b0_data, affine) extract_b0_flow = ExtractB0Flow() extract_b0_flow.run(fdata, fbval, out_dir=out_dir, strategy="first") npt.assert_equal( 
extract_b0_flow.last_generated_outputs["out_b0"], pjoin(out_dir, "b0.nii.gz"), ) res, _ = load_nifti(extract_b0_flow.last_generated_outputs["out_b0"]) npt.assert_array_equal(res, b0_data) def test_extract_shell_flow(): with TemporaryDirectory() as out_dir: fdata, fbval, fbvec = get_fnames(name="small_25") data, affine = load_nifti(fdata) extract_shell_flow = ExtractShellFlow() extract_shell_flow.run( fdata, fbval, fbvec, bvals_to_extract="2000", out_dir=out_dir ) res, _ = load_nifti(pjoin(out_dir, "shell_2000.nii.gz")) npt.assert_array_equal(res, data[..., 1:]) extract_shell_flow._force_overwrite = True extract_shell_flow.run( fdata, fbval, fbvec, bvals_to_extract="0, 2000", group_shells=False, out_dir=out_dir, ) npt.assert_equal(os.path.isfile(pjoin(out_dir, "shell_0.nii.gz")), True) npt.assert_equal(os.path.isfile(pjoin(out_dir, "shell_2000.nii.gz")), True) res0, _ = load_nifti(pjoin(out_dir, "shell_0.nii.gz")) res2000, _ = load_nifti(pjoin(out_dir, "shell_2000.nii.gz")) npt.assert_array_equal(np.squeeze(res0), data[..., 0]) npt.assert_array_equal(res2000[..., 9], data[..., 10]) def test_extract_volume_flow(): with TemporaryDirectory() as out_dir: fdata, _, _ = get_fnames(name="small_25") data, affine = load_nifti(fdata) extract_volume_flow = ExtractVolumeFlow() extract_volume_flow.run(fdata, vol_idx="0-3,5", out_dir=out_dir) res, _ = load_nifti(extract_volume_flow.last_generated_outputs["out_vol"]) npt.assert_equal(res.shape[-1], 5) extract_volume_flow._force_overwrite = True extract_volume_flow.run(fdata, vol_idx="0-3,5", grouped=False, out_dir=out_dir) npt.assert_equal(os.path.isfile(pjoin(out_dir, "volume_2.nii.gz")), True) npt.assert_equal(os.path.isfile(pjoin(out_dir, "volume_5.nii.gz")), True) res2, _ = load_nifti(pjoin(out_dir, "volume_2.nii.gz")) res5, _ = load_nifti(pjoin(out_dir, "volume_5.nii.gz")) npt.assert_array_equal(res2, data[..., 2]) npt.assert_array_equal(res5, data[..., 5]) dipy-1.11.0/dipy/workflows/tests/test_masking.py000066400000000000000000000014631476546756600220030ustar00rootroot00000000000000from tempfile import TemporaryDirectory import numpy.testing as npt from dipy.data import get_fnames from dipy.io.image import load_nifti from dipy.testing import assert_false from dipy.workflows.mask import MaskFlow def test_mask(): with TemporaryDirectory() as out_dir: data_path, _, _ = get_fnames(name="small_25") volume, affine = load_nifti(data_path) mask_flow = MaskFlow() mask_flow.run(data_path, 10, out_dir=out_dir, ub=9) assert_false(mask_flow.last_generated_outputs) mask_flow.run(data_path, 10, out_dir=out_dir) mask_path = mask_flow.last_generated_outputs["out_mask"] mask_data, mask_affine = load_nifti(mask_path) npt.assert_equal(mask_data.shape, volume.shape) npt.assert_array_almost_equal(mask_affine, affine) dipy-1.11.0/dipy/workflows/tests/test_nn.py000066400000000000000000000062721476546756600207700ustar00rootroot00000000000000from os.path import join as pjoin from tempfile import TemporaryDirectory import numpy as np import numpy.testing as npt import pytest from dipy.data import get_fnames from dipy.io.image import load_nifti_data, save_nifti from dipy.nn.evac import EVACPlus from dipy.utils.optpkg import optional_package from dipy.workflows.nn import BiasFieldCorrectionFlow, EVACPlusFlow tf, have_tf, _ = optional_package("tensorflow", min_version="2.18.0") torch, have_torch, _ = optional_package("torch", min_version="2.2.0") have_nn = have_tf or have_torch @pytest.mark.skipif(not have_nn, reason="Requires TensorFlow or Torch") def test_evac_plus_flow(): 
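# EVAC+ is DIPY's deep-learning brain-extraction network; the workflow output
# is checked against a direct call to EVACPlus().predict() on the same volume.
# A minimal sketch of the equivalent direct API usage, mirroring what this
# test does (`t1_path` is a hypothetical NIfTI file path, not part of the test):
#     from dipy.io.image import load_nifti
#     from dipy.nn.evac import EVACPlus
#     vol, affine = load_nifti(t1_path)
#     mask = EVACPlus().predict(vol, affine)  # boolean brain mask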
with TemporaryDirectory() as out_dir: file_path = get_fnames(name="evac_test_data") volume = np.load(file_path)["input"][0] temp_affine = np.eye(4) temp_path = pjoin(out_dir, "temp.nii.gz") save_nifti(temp_path, volume, temp_affine) save_masked = True evac_flow = EVACPlusFlow() evac_flow.run(temp_path, out_dir=out_dir, save_masked=save_masked) mask_name = evac_flow.last_generated_outputs["out_mask"] masked_name = evac_flow.last_generated_outputs["out_masked"] evac = EVACPlus() mask = evac.predict(volume, temp_affine) masked = volume * mask result_mask_data = load_nifti_data(pjoin(out_dir, mask_name)) npt.assert_array_equal(result_mask_data.astype(np.uint8), mask) result_masked_data = load_nifti_data(pjoin(out_dir, masked_name)) npt.assert_array_equal(result_masked_data, masked) @pytest.mark.skipif(not have_nn, reason="Requires TensorFlow or Torch") def test_correct_biasfield_flow(): # Test with T1 data if have_nn: with TemporaryDirectory() as out_dir: file_path = get_fnames(name="deepn4_test_data") volume = np.load(file_path[0])["img"] temp_affine = np.load(file_path[0])["affine"] temp_path = pjoin(out_dir, "temp.nii.gz") save_nifti(temp_path, volume, temp_affine) bias_flow = BiasFieldCorrectionFlow() bias_flow.run(temp_path, out_dir=out_dir) corrected_name = bias_flow.last_generated_outputs["out_corrected"] corrected_data = load_nifti_data(pjoin(out_dir, corrected_name)) npt.assert_almost_equal( corrected_data.mean(), 119.03902876428222, decimal=4 ) # Test with DWI data with TemporaryDirectory() as out_dir: fdata, fbval, fbvec = get_fnames(name="small_25") args = { "input_files": fdata, "bval": fbval, "bvec": fbvec, "method": "b0", "out_dir": out_dir, } bias_flow = BiasFieldCorrectionFlow() bias_flow.run(**args) corrected_name = bias_flow.last_generated_outputs["out_corrected"] corrected_data = load_nifti_data(pjoin(out_dir, corrected_name)) npt.assert_almost_equal(corrected_data.mean(), 0.0384615384615, decimal=5) args = { "input_files": fdata, "bval": fbval, "bvec": fbvec, "method": "random", "out_dir": out_dir, } npt.assert_raises(SystemExit, bias_flow.run, **args) dipy-1.11.0/dipy/workflows/tests/test_reconst.py000066400000000000000000000141731476546756600220310ustar00rootroot00000000000000import logging import os from os.path import join as pjoin from tempfile import TemporaryDirectory import warnings import numpy as np import numpy.testing as npt import pytest from dipy.core.gradients import gradient_table from dipy.data import get_fnames from dipy.io.gradients import read_bvals_bvecs from dipy.io.image import load_nifti, load_nifti_data, save_nifti from dipy.io.peaks import load_pam from dipy.reconst.shm import descoteaux07_legacy_msg, sph_harm_ind_list from dipy.sims.voxel import multi_tensor from dipy.utils.optpkg import optional_package from dipy.workflows.reconst import ( ReconstForecastFlow, ReconstGQIFlow, ReconstRUMBAFlow, ReconstSFMFlow, ) _, has_sklearn, _ = optional_package("sklearn") logging.getLogger().setLevel(logging.INFO) def simulate_multitensor_data(gtab): dwi = np.zeros((2, 2, 2, len(gtab.bvals))) # Diffusion of tissue and water compartments are constant for all voxel mevals = np.array([[0.0017, 0.0003, 0.0003], [0.003, 0.003, 0.003]]) # volume fractions GTF = np.array([[[0.06, 0.71], [0.33, 0.91]], [[0.0, 0.0], [0.0, 0.0]]]) for i in range(2): for j in range(2): gtf = GTF[0, i, j] S, p = multi_tensor( gtab, mevals, S0=100, angles=[(90, 0), (90, 0)], fractions=[(1 - gtf) * 100, gtf * 100], snr=None, ) dwi[0, i, j] = S return dwi def test_reconst_rumba(): with 
warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) reconst_flow_core(ReconstRUMBAFlow) @pytest.mark.skipif(not has_sklearn, reason="Requires sklearn") def test_reconst_sfm(): with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) reconst_flow_core(ReconstSFMFlow) def test_reconst_gqi(): with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) reconst_flow_core(ReconstGQIFlow) def test_reconst_forecast(): with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) reconst_flow_core(ReconstForecastFlow, use_multishell_data=True) def reconst_flow_core(flow, *, use_multishell_data=None, **kwargs): with TemporaryDirectory() as out_dir: data_path, bval_path, bvec_path = get_fnames(name="small_64D") if use_multishell_data: bvals, bvecs = read_bvals_bvecs(bval_path, bvec_path) # FW model requires multishell data bvals_2s = np.concatenate((bvals, bvals * 1.5), axis=0) bvecs_2s = np.concatenate((bvecs, bvecs), axis=0) gtab_2s = gradient_table(bvals_2s, bvecs=bvecs_2s) bval_path = pjoin(out_dir, os.path.basename(bval_path)) bvec_path = pjoin(out_dir, os.path.basename(bvec_path)) np.savetxt(bval_path, bvals_2s) np.savetxt(bvec_path, bvecs_2s) volume = simulate_multitensor_data(gtab_2s) data_path = pjoin(out_dir, "dwi.nii.gz") mask = np.ones_like(volume[..., 0], dtype=bool) mask_path = pjoin(out_dir, "mask.nii.gz") save_nifti(mask_path, mask.astype(np.int32), np.eye(4)) save_nifti(data_path, volume, np.eye(4)) else: volume, affine = load_nifti(data_path) mask = np.ones_like(volume[:, :, :, 0]) mask_path = pjoin(out_dir, "tmp_mask.nii.gz") save_nifti(mask_path, mask.astype(np.uint8), affine) reconst_flow = flow() for sh_order_max in [ 8, ]: reconst_flow.run( data_path, bval_path, bvec_path, mask_path, sh_order_max=sh_order_max, out_dir=out_dir, extract_pam_values=True, **kwargs, ) gfa_path = reconst_flow.last_generated_outputs["out_gfa"] gfa_data = load_nifti_data(gfa_path) npt.assert_equal(gfa_data.shape, volume.shape[:-1]) peaks_dir_path = reconst_flow.last_generated_outputs["out_peaks_dir"] peaks_dir_data = load_nifti_data(peaks_dir_path) npt.assert_equal(peaks_dir_data.shape[-1], 15) npt.assert_equal(peaks_dir_data.shape[:-1], volume.shape[:-1]) peaks_idx_path = reconst_flow.last_generated_outputs["out_peaks_indices"] peaks_idx_data = load_nifti_data(peaks_idx_path) npt.assert_equal(peaks_idx_data.shape[-1], 5) npt.assert_equal(peaks_idx_data.shape[:-1], volume.shape[:-1]) peaks_vals_path = reconst_flow.last_generated_outputs["out_peaks_values"] peaks_vals_data = load_nifti_data(peaks_vals_path) npt.assert_equal(peaks_vals_data.shape[-1], 5) npt.assert_equal(peaks_vals_data.shape[:-1], volume.shape[:-1]) shm_path = reconst_flow.last_generated_outputs["out_shm"] shm_data = load_nifti_data(shm_path) # Test that the number of coefficients is what you would expect # given the order of the sh basis: npt.assert_equal( shm_data.shape[-1], sph_harm_ind_list(sh_order_max)[0].shape[0] ) npt.assert_equal(shm_data.shape[:-1], volume.shape[:-1]) pam = load_pam(reconst_flow.last_generated_outputs["out_pam"]) npt.assert_allclose( pam.peak_dirs.reshape(peaks_dir_data.shape), peaks_dir_data ) npt.assert_allclose(pam.peak_values, peaks_vals_data) npt.assert_allclose(pam.peak_indices, peaks_idx_data) 
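# The shm_coeff comparison below uses a small absolute tolerance, presumably
# to absorb precision lost in the round trip through the .pam5 file.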
npt.assert_allclose(pam.shm_coeff, shm_data, atol=1e-7) npt.assert_allclose(pam.gfa, gfa_data) dipy-1.11.0/dipy/workflows/tests/test_reconst_csa_csd.py000066400000000000000000000166351476546756600235150ustar00rootroot00000000000000import logging from os.path import join as pjoin from tempfile import TemporaryDirectory import warnings import numpy as np import numpy.testing as npt from dipy.core.gradients import generate_bvecs from dipy.data import get_fnames from dipy.io.gradients import read_bvals_bvecs from dipy.io.image import load_nifti, load_nifti_data, save_nifti from dipy.io.peaks import load_pam from dipy.reconst.shm import descoteaux07_legacy_msg, sph_harm_ind_list from dipy.workflows.reconst import ReconstCSDFlow, ReconstQBallBaseFlow, ReconstSDTFlow logging.getLogger().setLevel(logging.INFO) def test_reconst_csa(): with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) reconst_flow_core(ReconstQBallBaseFlow) def test_reconst_opdt(): with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) reconst_flow_core(ReconstQBallBaseFlow, method="opdt") def test_reconst_qball(): with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) reconst_flow_core(ReconstQBallBaseFlow, method="qball") def test_reconst_csd(): with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) reconst_flow_core(ReconstCSDFlow) def test_reconst_sdt(): with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) reconst_flow_core(ReconstSDTFlow) def reconst_flow_core(flow, **kwargs): with TemporaryDirectory() as out_dir: data_path, bval_path, bvec_path = get_fnames(name="small_64D") volume, affine = load_nifti(data_path) mask = np.ones_like(volume[:, :, :, 0]) mask_path = pjoin(out_dir, "tmp_mask.nii.gz") save_nifti(mask_path, mask.astype(np.uint8), affine) reconst_flow = flow() for sh_order in [4, 6, 8]: reconst_flow.run( data_path, bval_path, bvec_path, mask_path, sh_order_max=sh_order, out_dir=out_dir, extract_pam_values=True, **kwargs, ) gfa_path = reconst_flow.last_generated_outputs["out_gfa"] gfa_data = load_nifti_data(gfa_path) npt.assert_equal(gfa_data.shape, volume.shape[:-1]) peaks_dir_path = reconst_flow.last_generated_outputs["out_peaks_dir"] peaks_dir_data = load_nifti_data(peaks_dir_path) npt.assert_equal(peaks_dir_data.shape[-1], 15) npt.assert_equal(peaks_dir_data.shape[:-1], volume.shape[:-1]) peaks_idx_path = reconst_flow.last_generated_outputs["out_peaks_indices"] peaks_idx_data = load_nifti_data(peaks_idx_path) npt.assert_equal(peaks_idx_data.shape[-1], 5) npt.assert_equal(peaks_idx_data.shape[:-1], volume.shape[:-1]) peaks_vals_path = reconst_flow.last_generated_outputs["out_peaks_values"] peaks_vals_data = load_nifti_data(peaks_vals_path) npt.assert_equal(peaks_vals_data.shape[-1], 5) npt.assert_equal(peaks_vals_data.shape[:-1], volume.shape[:-1]) shm_path = reconst_flow.last_generated_outputs["out_shm"] shm_data = load_nifti_data(shm_path) # Test that the number of coefficients is what you would expect # given the order of the sh basis: npt.assert_equal( shm_data.shape[-1], sph_harm_ind_list(sh_order)[0].shape[0] ) npt.assert_equal(shm_data.shape[:-1], volume.shape[:-1]) pam = 
load_pam(reconst_flow.last_generated_outputs["out_pam"]) npt.assert_allclose( pam.peak_dirs.reshape(peaks_dir_data.shape), peaks_dir_data ) npt.assert_allclose(pam.peak_values, peaks_vals_data) npt.assert_allclose(pam.peak_indices, peaks_idx_data) npt.assert_allclose(pam.shm_coeff, shm_data) npt.assert_allclose(pam.gfa, gfa_data) bvals, bvecs = read_bvals_bvecs(bval_path, bvec_path) bvals[0] = 5.0 bvecs = generate_bvecs(len(bvals)) tmp_bval_path = pjoin(out_dir, "tmp.bval") tmp_bvec_path = pjoin(out_dir, "tmp.bvec") np.savetxt(tmp_bval_path, bvals) np.savetxt(tmp_bvec_path, bvecs.T) reconst_flow._force_overwrite = True if flow.get_short_name() == "csd": reconst_flow = flow() reconst_flow._force_overwrite = True reconst_flow.run( data_path, bval_path, bvec_path, mask_path, out_dir=out_dir, frf=[15, 5, 5], **kwargs, ) reconst_flow = flow() reconst_flow._force_overwrite = True reconst_flow.run( data_path, bval_path, bvec_path, mask_path, out_dir=out_dir, frf="15, 5, 5", **kwargs, ) reconst_flow = flow() reconst_flow._force_overwrite = True reconst_flow.run( data_path, bval_path, bvec_path, mask_path, out_dir=out_dir, frf=None, **kwargs, ) reconst_flow2 = flow() reconst_flow2._force_overwrite = True reconst_flow2.run( data_path, bval_path, bvec_path, mask_path, out_dir=out_dir, frf=None, roi_center=[5, 5, 5], **kwargs, ) else: with npt.assert_raises(BaseException): npt.assert_warns( UserWarning, reconst_flow.run, data_path, tmp_bval_path, tmp_bvec_path, mask_path, out_dir=out_dir, extract_pam_values=True, **kwargs, ) # test parallel implementation # Avoid SDT for now, as it is quite slow, something to introspect if flow.get_short_name() != "sdt": reconst_flow = flow() reconst_flow._force_overwrite = True reconst_flow.run( data_path, bval_path, bvec_path, mask_path, out_dir=out_dir, parallel=True, num_processes=2, **kwargs, ) dipy-1.11.0/dipy/workflows/tests/test_reconst_dki.py000066400000000000000000000127641476546756600226640ustar00rootroot00000000000000from os.path import join as pjoin from tempfile import TemporaryDirectory import warnings import numpy as np from numpy.testing import assert_allclose, assert_equal from dipy.core.gradients import generate_bvecs from dipy.data import get_fnames from dipy.io.gradients import read_bvals_bvecs from dipy.io.image import load_nifti, load_nifti_data, save_nifti from dipy.io.peaks import load_pam from dipy.reconst.shm import descoteaux07_legacy_msg from dipy.workflows.reconst import ReconstDkiFlow def test_reconst_dki(): with TemporaryDirectory() as out_dir: data_path, bval_path, bvec_path = get_fnames(name="small_101D") volume, affine = load_nifti(data_path) mask = np.ones_like(volume[:, :, :, 0]) mask_path = pjoin(out_dir, "tmp_mask.nii.gz") save_nifti(mask_path, mask.astype(np.uint8), affine) dki_flow = ReconstDkiFlow() args = [data_path, bval_path, bvec_path, mask_path] with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) dki_flow.run(*args, out_dir=out_dir, extract_pam_values=True) fa_path = dki_flow.last_generated_outputs["out_fa"] fa_data = load_nifti_data(fa_path) assert_equal(fa_data.shape, volume.shape[:-1]) tensor_path = dki_flow.last_generated_outputs["out_dt_tensor"] tensor_data = load_nifti_data(tensor_path) assert_equal(tensor_data.shape[-1], 6) assert_equal(tensor_data.shape[:-1], volume.shape[:-1]) ga_path = dki_flow.last_generated_outputs["out_ga"] ga_data = load_nifti_data(ga_path) assert_equal(ga_data.shape, volume.shape[:-1]) rgb_path = 
dki_flow.last_generated_outputs["out_rgb"] rgb_data = load_nifti_data(rgb_path) assert_equal(rgb_data.shape[-1], 3) assert_equal(rgb_data.shape[:-1], volume.shape[:-1]) md_path = dki_flow.last_generated_outputs["out_md"] md_data = load_nifti_data(md_path) assert_equal(md_data.shape, volume.shape[:-1]) ad_path = dki_flow.last_generated_outputs["out_ad"] ad_data = load_nifti_data(ad_path) assert_equal(ad_data.shape, volume.shape[:-1]) rd_path = dki_flow.last_generated_outputs["out_rd"] rd_data = load_nifti_data(rd_path) assert_equal(rd_data.shape, volume.shape[:-1]) mk_path = dki_flow.last_generated_outputs["out_mk"] mk_data = load_nifti_data(mk_path) assert_equal(mk_data.shape, volume.shape[:-1]) ak_path = dki_flow.last_generated_outputs["out_ak"] ak_data = load_nifti_data(ak_path) assert_equal(ak_data.shape, volume.shape[:-1]) rk_path = dki_flow.last_generated_outputs["out_rk"] rk_data = load_nifti_data(rk_path) assert_equal(rk_data.shape, volume.shape[:-1]) kt_path = dki_flow.last_generated_outputs["out_dk_tensor"] kt_data = load_nifti_data(kt_path) assert_equal(kt_data.shape[-1], 15) assert_equal(kt_data.shape[:-1], volume.shape[:-1]) mode_path = dki_flow.last_generated_outputs["out_mode"] mode_data = load_nifti_data(mode_path) assert_equal(mode_data.shape, volume.shape[:-1]) evecs_path = dki_flow.last_generated_outputs["out_evec"] evecs_data = load_nifti_data(evecs_path) assert_equal(evecs_data.shape[-2:], (3, 3)) assert_equal(evecs_data.shape[:-2], volume.shape[:-1]) evals_path = dki_flow.last_generated_outputs["out_eval"] evals_data = load_nifti_data(evals_path) assert_equal(evals_data.shape[-1], 3) assert_equal(evals_data.shape[:-1], volume.shape[:-1]) peaks_dir_path = dki_flow.last_generated_outputs["out_peaks_dir"] peaks_dir_data = load_nifti_data(peaks_dir_path) assert_equal(peaks_dir_data.shape[-1], 9) assert_equal(peaks_dir_data.shape[:-1], volume.shape[:-1]) peaks_idx_path = dki_flow.last_generated_outputs["out_peaks_indices"] peaks_idx_data = load_nifti_data(peaks_idx_path) assert_equal(peaks_idx_data.shape[-1], 3) assert_equal(peaks_idx_data.shape[:-1], volume.shape[:-1]) peaks_vals_path = dki_flow.last_generated_outputs["out_peaks_values"] peaks_vals_data = load_nifti_data(peaks_vals_path) assert_equal(peaks_vals_data.shape[-1], 3) assert_equal(peaks_vals_data.shape[:-1], volume.shape[:-1]) pam = load_pam(dki_flow.last_generated_outputs["out_pam"]) assert_allclose(pam.peak_dirs.reshape(peaks_dir_data.shape), peaks_dir_data) assert_allclose(pam.peak_values, peaks_vals_data) assert_allclose(pam.peak_indices, peaks_idx_data) bvals, bvecs = read_bvals_bvecs(bval_path, bvec_path) bvals[0] = 5.0 bvecs = generate_bvecs(len(bvals)) tmp_bval_path = pjoin(out_dir, "tmp.bval") tmp_bvec_path = pjoin(out_dir, "tmp.bvec") np.savetxt(tmp_bval_path, bvals) np.savetxt(tmp_bvec_path, bvecs.T) dki_flow._force_overwrite = True with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) dki_flow.run( data_path, tmp_bval_path, tmp_bvec_path, mask_path, out_dir=out_dir, b0_threshold=0, ) dipy-1.11.0/dipy/workflows/tests/test_reconst_dsi.py000066400000000000000000000052261476546756600226670ustar00rootroot00000000000000from os.path import join as pjoin from tempfile import TemporaryDirectory import warnings import numpy as np import numpy.testing as npt from dipy.data import get_fnames from dipy.io.image import load_nifti, load_nifti_data, save_nifti from dipy.reconst.shm import descoteaux07_legacy_msg from 
dipy.workflows.reconst import ReconstDsiFlow def test_reconst_dsi(): with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) reconst_flow_core(ReconstDsiFlow) def test_reconst_dsid(): with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) reconst_flow_core(ReconstDsiFlow, remove_convolution=True) def reconst_flow_core(flow, **kwargs): with TemporaryDirectory() as out_dir: data_path, bval_path, bvec_path = get_fnames(name="small_64D") volume, affine = load_nifti(data_path) mask = np.ones_like(volume[:, :, :, 0]) mask_path = pjoin(out_dir, "tmp_mask.nii.gz") save_nifti(mask_path, mask.astype(np.uint8), affine) dsi_flow = ReconstDsiFlow() with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) dsi_flow.run( data_path, bval_path, bvec_path, mask_path, out_dir=out_dir, extract_pam_values=True, **kwargs, ) peaks_dir_path = dsi_flow.last_generated_outputs["out_peaks_dir"] peaks_dir_data = load_nifti_data(peaks_dir_path) npt.assert_equal(peaks_dir_data.shape[-1], 15) npt.assert_equal(peaks_dir_data.shape[:-1], volume.shape[:-1]) peaks_idx_path = dsi_flow.last_generated_outputs["out_peaks_indices"] peaks_idx_data = load_nifti_data(peaks_idx_path) npt.assert_equal(peaks_idx_data.shape[-1], 5) npt.assert_equal(peaks_idx_data.shape[:-1], volume.shape[:-1]) peaks_vals_path = dsi_flow.last_generated_outputs["out_peaks_values"] peaks_vals_data = load_nifti_data(peaks_vals_path) npt.assert_equal(peaks_vals_data.shape[-1], 5) npt.assert_equal(peaks_vals_data.shape[:-1], volume.shape[:-1]) gfa_path = dsi_flow.last_generated_outputs["out_gfa"] gfa_data = load_nifti_data(gfa_path) npt.assert_equal(gfa_data.shape, volume.shape[:-1]) dipy-1.11.0/dipy/workflows/tests/test_reconst_dti.py000066400000000000000000000111331476546756600226620ustar00rootroot00000000000000from os.path import join from tempfile import TemporaryDirectory import warnings import numpy as np from numpy.testing import assert_allclose, assert_equal from dipy.data import get_fnames from dipy.io.image import load_nifti, load_nifti_data, save_nifti from dipy.io.peaks import load_pam from dipy.reconst.shm import descoteaux07_legacy_msg from dipy.workflows.reconst import ReconstDtiFlow def test_reconst_dti_wls(): with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) reconst_flow_core(ReconstDtiFlow) def test_reconst_dti_nlls(): with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) reconst_flow_core(ReconstDtiFlow, extra_args=[], extra_kwargs={}) def test_reconst_dti_alt_tensor(): with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) reconst_flow_core( ReconstDtiFlow, extra_args=[], extra_kwargs={"nifti_tensor": False} ) def reconst_flow_core(flow, extra_args=None, extra_kwargs=None): extra_args = extra_args or [] extra_kwargs = extra_kwargs or {} with TemporaryDirectory() as out_dir: data_path, bval_path, bvec_path = get_fnames(name="small_25") volume, affine = load_nifti(data_path) mask = np.ones_like(volume[:, :, :, 0], dtype=np.uint8) mask_path = join(out_dir, "tmp_mask.nii.gz") save_nifti(mask_path, mask, affine) dti_flow = flow() args = [data_path, 
bval_path, bvec_path, mask_path] args.extend(extra_args) kwargs = {"out_dir": out_dir, "extract_pam_values": True} kwargs.update(extra_kwargs) dti_flow.run(*args, **kwargs) fa_path = dti_flow.last_generated_outputs["out_fa"] fa_data = load_nifti_data(fa_path) assert_equal(fa_data.shape, volume.shape[:-1]) tensor_path = dti_flow.last_generated_outputs["out_tensor"] tensor_data = load_nifti_data(tensor_path) # Per default, tensor data is 5D, with six tensor elements on the last # dimension, except if nifti_tensor is set to False: if extra_kwargs.get("nifti_tensor", True): assert_equal(tensor_data.shape[-1], 6) assert_equal(tensor_data.shape[:-2], volume.shape[:-1]) else: assert_equal(tensor_data.shape[-1], 6) assert_equal(tensor_data.shape[:-1], volume.shape[:-1]) for out_name in ["out_ga", "out_md", "out_ad", "out_rd", "out_mode"]: out_path = dti_flow.last_generated_outputs[out_name] out_data = load_nifti_data(out_path) assert_equal(out_data.shape, volume.shape[:-1]) rgb_path = dti_flow.last_generated_outputs["out_rgb"] rgb_data = load_nifti_data(rgb_path) assert_equal(rgb_data.shape[-1], 3) assert_equal(rgb_data.shape[:-1], volume.shape[:-1]) evecs_path = dti_flow.last_generated_outputs["out_evec"] evecs_data = load_nifti_data(evecs_path) assert_equal(evecs_data.shape[-2:], (3, 3)) assert_equal(evecs_data.shape[:-2], volume.shape[:-1]) evals_path = dti_flow.last_generated_outputs["out_eval"] evals_data = load_nifti_data(evals_path) assert_equal(evals_data.shape[-1], 3) assert_equal(evals_data.shape[:-1], volume.shape[:-1]) peaks_dir_path = dti_flow.last_generated_outputs["out_peaks_dir"] peaks_dir_data = load_nifti_data(peaks_dir_path) assert_equal(peaks_dir_data.shape[-1], 3) assert_equal(peaks_dir_data.shape[:-1], volume.shape[:-1]) peaks_idx_path = dti_flow.last_generated_outputs["out_peaks_indices"] peaks_idx_data = load_nifti_data(peaks_idx_path) assert_equal(peaks_idx_data.shape[-1], 1) assert_equal(peaks_idx_data.shape[:-1], volume.shape[:-1]) peaks_vals_path = dti_flow.last_generated_outputs["out_peaks_values"] peaks_vals_data = load_nifti_data(peaks_vals_path) assert_equal(peaks_vals_data.shape[-1], 1) assert_equal(peaks_vals_data.shape[:-1], volume.shape[:-1]) pam = load_pam(dti_flow.last_generated_outputs["out_pam"]) assert_allclose(pam.peak_dirs.reshape(peaks_dir_data.shape), peaks_dir_data) assert_allclose(pam.peak_values, peaks_vals_data) assert_allclose(pam.peak_indices, peaks_idx_data) dipy-1.11.0/dipy/workflows/tests/test_reconst_ivim.py000066400000000000000000000061701476546756600230530ustar00rootroot00000000000000from os.path import join as pjoin from tempfile import TemporaryDirectory import warnings import numpy as np from numpy.testing import assert_equal from dipy.core.gradients import generate_bvecs, gradient_table from dipy.io.image import load_nifti_data, save_nifti from dipy.sims.voxel import multi_tensor from dipy.workflows.reconst import ReconstIvimFlow def test_reconst_ivim(): with TemporaryDirectory() as out_dir: bvals = np.array( [ 0.0, 10.0, 20.0, 30.0, 40.0, 60.0, 80.0, 100.0, 120.0, 140.0, 160.0, 180.0, 200.0, 300.0, 400.0, 500.0, 600.0, 700.0, 800.0, 900.0, 1000.0, ] ) N = len(bvals) bvecs = generate_bvecs(N) temp_bval_path = pjoin(out_dir, "temp.bval") np.savetxt(temp_bval_path, bvals) temp_bvec_path = pjoin(out_dir, "temp.bvec") np.savetxt(temp_bvec_path, bvecs) gtab = gradient_table(bvals, bvecs=bvecs) S0, f, D_star, D = 1000.0, 0.132, 0.00885, 0.000921 mevals = np.array(([D_star, D_star, D_star], [D, D, D])) # This gives an isotropic signal. 
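# Two isotropic compartments reproduce the IVIM bi-exponential signal,
# S(b) = S0 * (f * exp(-b * D_star) + (1 - f) * exp(-b * D)),
# where f is the perfusion fraction and D_star the pseudo-diffusion
# coefficient. multi_tensor() returns a (signal, sticks) tuple, hence the
# data[0] indexing just below.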
data = multi_tensor( gtab, mevals, snr=None, S0=S0, fractions=[f * 100, 100 * (1 - f)] ) # Single voxel data data_single = data[0] temp_affine = np.eye(4) data_multi = np.zeros((2, 2, 1, len(gtab.bvals)), dtype=int) data_multi[0, 0, 0] = data_multi[0, 1, 0] = data_multi[1, 0, 0] = data_multi[ 1, 1, 0 ] = data_single data_path = pjoin(out_dir, "tmp_data.nii.gz") save_nifti(data_path, data_multi, temp_affine, dtype=data_multi.dtype) mask = np.ones_like(data_multi[..., 0], dtype=np.uint8) mask_path = pjoin(out_dir, "tmp_mask.nii.gz") save_nifti(mask_path, mask, temp_affine) ivim_flow = ReconstIvimFlow() args = [data_path, temp_bval_path, temp_bvec_path, mask_path] msg = "Bounds for this fit have been set from experiments and * " with warnings.catch_warnings(): warnings.filterwarnings("ignore", message=msg, category=UserWarning) ivim_flow.run(*args, out_dir=out_dir) S0_path = ivim_flow.last_generated_outputs["out_S0_predicted"] S0_data = load_nifti_data(S0_path) assert_equal(S0_data.shape, data_multi.shape[:-1]) f_path = ivim_flow.last_generated_outputs["out_perfusion_fraction"] f_data = load_nifti_data(f_path) assert_equal(f_data.shape, data_multi.shape[:-1]) D_star_path = ivim_flow.last_generated_outputs["out_D_star"] D_star_data = load_nifti_data(D_star_path) assert_equal(D_star_data.shape, data_multi.shape[:-1]) D_path = ivim_flow.last_generated_outputs["out_D"] D_data = load_nifti_data(D_path) assert_equal(D_data.shape, data_multi.shape[:-1]) dipy-1.11.0/dipy/workflows/tests/test_reconst_mapmri.py000066400000000000000000000054421476546756600233750ustar00rootroot00000000000000from os.path import join as pjoin from tempfile import TemporaryDirectory import warnings import numpy as np import numpy.testing as npt import pytest from dipy.core.gradients import generate_bvecs from dipy.data import get_fnames from dipy.io.gradients import read_bvals_bvecs from dipy.io.image import load_nifti_data from dipy.reconst import mapmri from dipy.workflows.reconst import ReconstMAPMRIFlow def test_reconst_mmri_laplacian(): reconst_mmri_core(ReconstMAPMRIFlow, lap=True, pos=False) def test_reconst_mmri_none(): reconst_mmri_core(ReconstMAPMRIFlow, lap=False, pos=False) @pytest.mark.skipif(not mapmri.have_cvxpy, reason="Requires CVXPY") def test_reconst_mmri_both(): reconst_mmri_core(ReconstMAPMRIFlow, lap=True, pos=True) @pytest.mark.skipif(not mapmri.have_cvxpy, reason="Requires CVXPY") def test_reconst_mmri_positivity(): reconst_mmri_core(ReconstMAPMRIFlow, lap=True, pos=False) def reconst_mmri_core(flow, lap, pos): with TemporaryDirectory() as out_dir: data_path, bval_path, bvec_path = get_fnames(name="small_25") volume = load_nifti_data(data_path) mmri_flow = flow() msg = "Optimization did not find a solution" with warnings.catch_warnings(): warnings.filterwarnings("ignore", message=msg, category=UserWarning) mmri_flow.run( data_files=data_path, bvals_files=bval_path, bvecs_files=bvec_path, small_delta=0.0129, big_delta=0.0218, laplacian=lap, positivity=pos, out_dir=out_dir, ) for out_name in [ "out_rtop", "out_lapnorm", "out_msd", "out_qiv", "out_rtap", "out_rtpp", "out_ng", "out_parng", "out_perng", ]: out_path = mmri_flow.last_generated_outputs[out_name] out_data = load_nifti_data(out_path) npt.assert_equal(out_data.shape, volume.shape[:-1]) bvals, bvecs = read_bvals_bvecs(bval_path, bvec_path) bvals[0] = 5.0 bvecs = generate_bvecs(len(bvals)) tmp_bval_path = pjoin(out_dir, "tmp.bval") tmp_bvec_path = pjoin(out_dir, "tmp.bvec") np.savetxt(tmp_bval_path, bvals) np.savetxt(tmp_bvec_path, bvecs.T) 
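# Illustrative check, an assumption rather than part of the original test: the perturbed gradient files written above could be read back with read_bvals_bvecs (already imported in this module) to confirm the corrupted b0 value reached disk before the failing run below: # bvals_rt, bvecs_rt = read_bvals_bvecs(tmp_bval_path, tmp_bvec_path) # assert bvals_rt[0] == 5.0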
mmri_flow._force_overwrite = True with npt.assert_raises(BaseException): npt.assert_warns( UserWarning, mmri_flow.run, data_path, tmp_bval_path, tmp_bvec_path, small_delta=0.0129, big_delta=0.0218, laplacian=lap, positivity=pos, out_dir=out_dir, ) dipy-1.11.0/dipy/workflows/tests/test_segment.py000066400000000000000000000150531476546756600220140ustar00rootroot00000000000000from os.path import join as pjoin from tempfile import TemporaryDirectory import nibabel as nib import numpy as np import numpy.testing as npt from dipy.align.streamlinear import BundleMinDistanceMetric from dipy.data import get_fnames from dipy.io.image import load_nifti_data from dipy.io.stateful_tractogram import Space, StatefulTractogram from dipy.io.streamline import load_tractogram, save_tractogram from dipy.segment.mask import median_otsu from dipy.segment.tests.test_mrf import create_image from dipy.tracking.streamline import Streamlines, set_number_of_points from dipy.utils.optpkg import optional_package from dipy.workflows.segment import ( ClassifyTissueFlow, LabelsBundlesFlow, MedianOtsuFlow, RecoBundlesFlow, ) sklearn, has_sklearn, _ = optional_package("sklearn") def test_median_otsu_flow(): with TemporaryDirectory() as out_dir: data_path, _, _ = get_fnames(name="small_25") volume = load_nifti_data(data_path) save_masked = True median_radius = 3 numpass = 3 autocrop = False vol_idx = "0," dilate = 0 finalize_mask = False mo_flow = MedianOtsuFlow() mo_flow.run( data_path, out_dir=out_dir, save_masked=save_masked, median_radius=median_radius, numpass=numpass, autocrop=autocrop, vol_idx=vol_idx, dilate=dilate, finalize_mask=finalize_mask, ) mask_name = mo_flow.last_generated_outputs["out_mask"] masked_name = mo_flow.last_generated_outputs["out_masked"] vol_idx = [ 0, ] masked, mask = median_otsu( volume, vol_idx=vol_idx, median_radius=median_radius, numpass=numpass, autocrop=autocrop, dilate=dilate, ) result_mask_data = load_nifti_data(pjoin(out_dir, mask_name)) npt.assert_array_equal(result_mask_data.astype(np.uint8), mask) result_masked = nib.load(pjoin(out_dir, masked_name)) result_masked_data = np.asanyarray(result_masked.dataobj) npt.assert_array_equal(np.round(result_masked_data), masked) def test_recobundles_flow(): with TemporaryDirectory() as out_dir: data_path = get_fnames(name="fornix") fornix = load_tractogram(data_path, "same", bbox_valid_check=False).streamlines f = Streamlines(fornix) f1 = f.copy() f2 = f1[:15].copy() f2._data += np.array([40, 0, 0]) f.extend(f2) f2_path = pjoin(out_dir, "f2.trk") sft = StatefulTractogram(f2, data_path, Space.RASMM) save_tractogram(sft, f2_path, bbox_valid_check=False) f1_path = pjoin(out_dir, "f1.trk") sft = StatefulTractogram(f, data_path, Space.RASMM) save_tractogram(sft, f1_path, bbox_valid_check=False) rb_flow = RecoBundlesFlow(force=True) rb_flow.run( f1_path, f2_path, greater_than=0, clust_thr=10, model_clust_thr=5.0, reduction_thr=10, out_dir=out_dir, ) labels = rb_flow.last_generated_outputs["out_recognized_labels"] recog_trk = rb_flow.last_generated_outputs["out_recognized_transf"] rec_bundle = load_tractogram( recog_trk, "same", bbox_valid_check=False ).streamlines npt.assert_equal(len(rec_bundle) == len(f2), True) label_flow = LabelsBundlesFlow(force=True) label_flow.run(f1_path, labels, out_dir=out_dir) recog_bundle = label_flow.last_generated_outputs["out_bundle"] rec_bundle_org = load_tractogram( recog_bundle, "same", bbox_valid_check=False ).streamlines BMD = BundleMinDistanceMetric() nb_pts = 20 static = set_number_of_points(f2, nb_points=nb_pts) 
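# Both bundles are resampled to the same number of points (nb_pts) so that BundleMinDistanceMetric can compare them point-wise; the x0 vector below encodes an identity transform (zero translation, rotation and shear, unit scaling), so the reported distance reflects the raw bundle mismatch only.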
moving = set_number_of_points(rec_bundle_org, nb_points=nb_pts) BMD.setup(static, moving) x0 = np.array([0, 0, 0, 0, 0, 0, 1.0, 1.0, 1, 0, 0, 0]) # affine bmd_value = BMD.distance(x0.tolist()) npt.assert_equal(bmd_value < 1, True) def test_classify_tissue_flow(): with TemporaryDirectory() as out_dir: data = create_image() data_path = pjoin(out_dir, "data.nii.gz") nib.save(nib.Nifti1Image(data, np.eye(4)), data_path) args = { "input_files": data_path, "method": "hmrf", "nclass": 4, "beta": 0.1, "tolerance": 0.0001, "max_iter": 10, "out_dir": out_dir, } flow = ClassifyTissueFlow() flow.run(**args) tissue = flow.last_generated_outputs["out_tissue"] pve = flow.last_generated_outputs["out_pve"] tissue_data = load_nifti_data(tissue) pve_data = load_nifti_data(pve) npt.assert_equal(tissue_data.shape, data.shape) npt.assert_equal(tissue_data.max(), 4) npt.assert_equal(tissue_data.min(), 0) npt.assert_equal(pve_data.shape, (data.shape) + (4,)) npt.assert_equal(pve_data.max(), 1) npt.assert_raises(SystemExit, flow.run, data_path) npt.assert_raises(SystemExit, flow.run, data_path, method="random") npt.assert_raises(SystemExit, flow.run, data_path, method="dam") npt.assert_raises(SystemExit, flow.run, data_path, method="hmrf") if has_sklearn: with TemporaryDirectory() as out_dir: data = np.random.rand(3, 3, 3, 7) * 100 # Simulated random data bvals = np.array([0, 100, 500, 1000, 1500, 2000, 3000]) data_path = pjoin(out_dir, "data.nii.gz") bvals_path = pjoin(out_dir, "bvals") np.savetxt(bvals_path, bvals) nib.save(nib.Nifti1Image(data, np.eye(4)), data_path) args = { "input_files": data_path, "bvals_file": bvals_path, "method": "dam", "wm_threshold": 0.5, "out_dir": out_dir, } flow = ClassifyTissueFlow() flow.run(**args) tissue = flow.last_generated_outputs["out_tissue"] pve = flow.last_generated_outputs["out_pve"] tissue_data = load_nifti_data(tissue) pve_data = load_nifti_data(pve) npt.assert_equal(tissue_data.shape, data.shape[:-1]) npt.assert_equal(tissue_data.max(), 2) npt.assert_equal(tissue_data.min(), 0) npt.assert_equal(pve_data.shape, (data.shape[:-1]) + (2,)) npt.assert_equal(pve_data.max(), 1) dipy-1.11.0/dipy/workflows/tests/test_stats.py000077500000000000000000000271121476546756600215120ustar00rootroot00000000000000#!/usr/bin/env python3 import os from os.path import join from tempfile import TemporaryDirectory import numpy as np import numpy.testing as npt import pytest from dipy.data import get_fnames from dipy.io.image import load_nifti, save_nifti from dipy.io.stateful_tractogram import Space, StatefulTractogram from dipy.io.streamline import load_tractogram, save_tractogram from dipy.testing import assert_true from dipy.testing.decorators import set_random_number_generator from dipy.tracking.streamline import Streamlines from dipy.utils.optpkg import optional_package from dipy.workflows.stats import ( BundleAnalysisTractometryFlow, BundleShapeAnalysis, LinearMixedModelsFlow, SNRinCCFlow, buan_bundle_profiles, ) pd, have_pandas, _ = optional_package("pandas") _, have_statsmodels, _ = optional_package("statsmodels", min_version="0.14.0") _, have_tables, _ = optional_package("tables") _, have_matplotlib, _ = optional_package("matplotlib") def test_stats(): with TemporaryDirectory() as out_dir: data_path, bval_path, bvec_path = get_fnames(name="small_101D") volume, affine = load_nifti(data_path) mask = np.ones_like(volume[:, :, :, 0], dtype=np.uint8) mask_path = join(out_dir, "tmp_mask.nii.gz") save_nifti(mask_path, mask, affine) snr_flow = SNRinCCFlow(force=True) args = [data_path, 
bval_path, bvec_path, mask_path] snr_flow.run(*args, out_dir=out_dir) assert_true(os.path.exists(os.path.join(out_dir, "product.json"))) assert_true(os.stat(os.path.join(out_dir, "product.json")).st_size != 0) assert_true(os.path.exists(os.path.join(out_dir, "cc.nii.gz"))) assert_true(os.stat(os.path.join(out_dir, "cc.nii.gz")).st_size != 0) assert_true(os.path.exists(os.path.join(out_dir, "mask_noise.nii.gz"))) assert_true(os.stat(os.path.join(out_dir, "mask_noise.nii.gz")).st_size != 0) snr_flow._force_overwrite = True snr_flow.run(*args, out_dir=out_dir) assert_true(os.path.exists(os.path.join(out_dir, "product.json"))) assert_true(os.stat(os.path.join(out_dir, "product.json")).st_size != 0) assert_true(os.path.exists(os.path.join(out_dir, "cc.nii.gz"))) assert_true(os.stat(os.path.join(out_dir, "cc.nii.gz")).st_size != 0) assert_true(os.path.exists(os.path.join(out_dir, "mask_noise.nii.gz"))) assert_true(os.stat(os.path.join(out_dir, "mask_noise.nii.gz")).st_size != 0) snr_flow._force_overwrite = True snr_flow.run(*args, bbox_threshold=(0.5, 1, 0, 0.15, 0, 0.2), out_dir=out_dir) assert_true(os.path.exists(os.path.join(out_dir, "product.json"))) assert_true(os.stat(os.path.join(out_dir, "product.json")).st_size != 0) assert_true(os.path.exists(os.path.join(out_dir, "cc.nii.gz"))) assert_true(os.stat(os.path.join(out_dir, "cc.nii.gz")).st_size != 0) assert_true(os.path.exists(os.path.join(out_dir, "mask_noise.nii.gz"))) assert_true(os.stat(os.path.join(out_dir, "mask_noise.nii.gz")).st_size != 0) @pytest.mark.skipif( not have_pandas or not have_statsmodels or not have_tables or not have_matplotlib, reason="Requires Pandas, StatsModels, PyTables, and matplotlib", ) @set_random_number_generator() def test_buan_bundle_profiles(rng): with TemporaryDirectory() as dirpath: data_path = get_fnames(name="fornix") fornix = load_tractogram(data_path, "same", bbox_valid_check=False).streamlines f = Streamlines(fornix) mb = os.path.join(dirpath, "model_bundles") os.mkdir(mb) sft = StatefulTractogram(f, data_path, Space.RASMM) save_tractogram(sft, os.path.join(mb, "temp.trk"), bbox_valid_check=False) rb = os.path.join(dirpath, "rec_bundles") os.mkdir(rb) sft = StatefulTractogram(f, data_path, Space.RASMM) save_tractogram(sft, os.path.join(rb, "temp.trk"), bbox_valid_check=False) ob = os.path.join(dirpath, "org_bundles") os.mkdir(ob) sft = StatefulTractogram(f, data_path, Space.RASMM) save_tractogram(sft, os.path.join(ob, "temp.trk"), bbox_valid_check=False) dt = os.path.join(dirpath, "anatomical_measures") os.mkdir(dt) fa = rng.random((255, 255, 255)) save_nifti(os.path.join(dt, "fa.nii.gz"), fa, affine=np.eye(4)) out_dir = os.path.join(dirpath, "output") os.mkdir(out_dir) buan_bundle_profiles( mb, rb, ob, dt, group_id=1, subject="10001", no_disks=100, out_dir=out_dir ) assert_true(os.path.exists(os.path.join(out_dir, "temp_fa.h5"))) @pytest.mark.skipif( not have_pandas or not have_statsmodels or not have_tables or not have_matplotlib, reason="Requires Pandas, StatsModels, PyTables, and matplotlib", ) @set_random_number_generator() def test_bundle_analysis_tractometry_flow(rng): with TemporaryDirectory() as dirpath: data_path = get_fnames(name="fornix") fornix = load_tractogram(data_path, "same", bbox_valid_check=False).streamlines f = Streamlines(fornix) mb = os.path.join(dirpath, "model_bundles") sub = os.path.join(dirpath, "subjects") os.mkdir(mb) sft = StatefulTractogram(f, data_path, Space.RASMM) save_tractogram(sft, os.path.join(mb, "temp.trk"), bbox_valid_check=False) os.mkdir(sub)
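# The code below builds the directory layout that BundleAnalysisTractometryFlow expects, sketched as: # model_bundles/temp.trk # subjects/patient/10001/{rec_bundles, org_bundles, anatomical_measures} # subjects/control/20002/{rec_bundles, org_bundles, anatomical_measures}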
os.mkdir(os.path.join(sub, "patient")) os.mkdir(os.path.join(sub, "control")) p = os.path.join(sub, "patient", "10001") os.mkdir(p) c = os.path.join(sub, "control", "20002") os.mkdir(c) for pre in [p, c]: os.mkdir(os.path.join(pre, "rec_bundles")) sft = StatefulTractogram(f, data_path, Space.RASMM) save_tractogram( sft, os.path.join(pre, "rec_bundles", "temp.trk"), bbox_valid_check=False, ) os.mkdir(os.path.join(pre, "org_bundles")) sft = StatefulTractogram(f, data_path, Space.RASMM) save_tractogram( sft, os.path.join(pre, "org_bundles", "temp.trk"), bbox_valid_check=False, ) os.mkdir(os.path.join(pre, "anatomical_measures")) fa = rng.random((255, 255, 255)) save_nifti( os.path.join(pre, "anatomical_measures", "fa.nii.gz"), fa, affine=np.eye(4), ) out_dir = os.path.join(dirpath, "output") os.mkdir(out_dir) ba_flow = BundleAnalysisTractometryFlow() ba_flow.run(mb, sub, out_dir=out_dir) assert_true(os.path.exists(os.path.join(out_dir, "temp_fa.h5"))) dft = pd.read_hdf(os.path.join(out_dir, "temp_fa.h5")) # assert_true(dft.bundle.unique() == "temp") assert_true(set(dft.subject.unique()) == {"10001", "20002"}) @pytest.mark.skipif( not have_pandas or not have_statsmodels or not have_tables or not have_matplotlib, reason="Requires Pandas, StatsModels, PyTables, and matplotlib", ) def test_linear_mixed_models_flow(): with TemporaryDirectory() as dirpath: out_dir = os.path.join(dirpath, "output") os.mkdir(out_dir) d = { "disk": [1, 2, 3, 4, 5, 1, 2, 3, 4, 5] * 10, "fa": [0.21, 0.234, 0.44, 0.44, 0.5, 0.23, 0.55, 0.34, 0.76, 0.34] * 10, "subject": [ "10001", "10001", "10001", "10001", "10001", "20002", "20002", "20002", "20002", "20002", ] * 10, "group": [0, 0, 0, 0, 0, 1, 1, 1, 1, 1] * 10, } df = pd.DataFrame(data=d) store = pd.HDFStore(os.path.join(out_dir, "temp_fa.h5")) store.append("fa", df, data_columns=True) store.close() lmm_flow = LinearMixedModelsFlow() out_dir2 = os.path.join(dirpath, "output2") os.mkdir(out_dir2) input_path = os.path.join(out_dir, "*") lmm_flow.run(input_path, no_disks=5, out_dir=out_dir2) assert_true(os.path.exists(os.path.join(out_dir2, "temp_fa_pvalues.npy"))) assert_true(os.path.exists(os.path.join(out_dir2, "temp_fa.png"))) # test error d2 = { "disk": [1, 2, 3, 4, 5, 1, 2, 3, 4, 5] * 1, "fa": [0.21, 0.234, 0.44, 0.44, 0.5, 0.23, 0.55, 0.34, 0.76, 0.34] * 1, "subject": [ "10001", "10001", "10001", "10001", "10001", "20002", "20002", "20002", "20002", "20002", ] * 1, "group": [0, 0, 0, 0, 0, 1, 1, 1, 1, 1] * 1, } df = pd.DataFrame(data=d2) out_dir3 = os.path.join(dirpath, "output3") os.mkdir(out_dir3) store = pd.HDFStore(os.path.join(out_dir3, "temp_fa.h5")) store.append("fa", df, data_columns=True) store.close() out_dir4 = os.path.join(dirpath, "output4") os.mkdir(out_dir4) input_path = os.path.join(out_dir3, "f*") # OS error raised if path is wrong or file does not exist npt.assert_raises( OSError, lmm_flow.run, input_path, no_disks=5, out_dir=out_dir4 ) input_path = os.path.join(out_dir3, "*") # value error raised if length of data frame is less than 100 npt.assert_raises( ValueError, lmm_flow.run, input_path, no_disks=5, out_dir=out_dir4 ) @pytest.mark.skipif( not have_pandas or not have_statsmodels or not have_tables or not have_matplotlib, reason="Requires Pandas, StatsModels, PyTables, and matplotlib", ) @set_random_number_generator() def test_bundle_shape_analysis_flow(rng): with TemporaryDirectory() as dirpath: data_path = get_fnames(name="fornix") fornix = load_tractogram(data_path, "same", bbox_valid_check=False).streamlines f = Streamlines(fornix) mb = 
os.path.join(dirpath, "model_bundles") sub = os.path.join(dirpath, "subjects") os.mkdir(mb) sft = StatefulTractogram(f, data_path, Space.RASMM) save_tractogram(sft, os.path.join(mb, "temp.trk"), bbox_valid_check=False) os.mkdir(sub) os.mkdir(os.path.join(sub, "patient")) os.mkdir(os.path.join(sub, "control")) p = os.path.join(sub, "patient", "10001") os.mkdir(p) c = os.path.join(sub, "control", "20002") os.mkdir(c) for pre in [p, c]: os.mkdir(os.path.join(pre, "rec_bundles")) sft = StatefulTractogram(f, data_path, Space.RASMM) save_tractogram( sft, os.path.join(pre, "rec_bundles", "temp.trk"), bbox_valid_check=False, ) os.mkdir(os.path.join(pre, "org_bundles")) sft = StatefulTractogram(f, data_path, Space.RASMM) save_tractogram( sft, os.path.join(pre, "org_bundles", "temp.trk"), bbox_valid_check=False, ) os.mkdir(os.path.join(pre, "anatomical_measures")) fa = rng.random((255, 255, 255)) save_nifti( os.path.join(pre, "anatomical_measures", "fa.nii.gz"), fa, affine=np.eye(4), ) out_dir = os.path.join(dirpath, "output") os.mkdir(out_dir) sm_flow = BundleShapeAnalysis() sm_flow.run(sub, out_dir=out_dir) assert_true(os.path.exists(os.path.join(out_dir, "temp.npy"))) dipy-1.11.0/dipy/workflows/tests/test_tracking.py000066400000000000000000000303001476546756600221440ustar00rootroot00000000000000from os.path import join from tempfile import TemporaryDirectory import warnings import nibabel as nib import numpy as np from numpy.testing import assert_equal from dipy.data import get_fnames from dipy.io.image import load_nifti, save_nifti from dipy.io.streamline import load_tractogram from dipy.reconst.shm import descoteaux07_legacy_msg from dipy.testing import assert_false, assert_true from dipy.workflows.mask import MaskFlow from dipy.workflows.reconst import ReconstCSDFlow from dipy.workflows.tracking import LocalFiberTrackingPAMFlow, PFTrackingPAMFlow def test_particle_filtering_tracking_workflows(): with TemporaryDirectory() as out_dir: dwi_path, bval_path, bvec_path = get_fnames(name="small_64D") volume, affine = load_nifti(dwi_path) # Create some mask mask = np.ones_like(volume[:, :, :, 0], dtype=np.uint8) mask_path = join(out_dir, "tmp_mask.nii.gz") save_nifti(mask_path, mask, affine) simple_wm = np.array( [ [0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 1, 1, 1, 0, 0, 0, 0], [0, 0, 1, 1, 1, 1, 0, 1, 0, 0], [0, 0, 1, 0, 1, 0, 1, 0, 0, 0], [0, 0, 1, 0, 1, 1, 0, 1, 0, 0], [0, 0, 0, 1, 1, 0, 1, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0], ] ) simple_wm = np.dstack( [ np.zeros(simple_wm.shape), np.zeros(simple_wm.shape), simple_wm, simple_wm, simple_wm, simple_wm, simple_wm, simple_wm, np.zeros(simple_wm.shape), np.zeros(simple_wm.shape), ] ) simple_gm = np.array( [ [0, 1, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 1, 1, 0, 0, 1, 1, 1, 0], [0, 1, 0, 0, 0, 0, 0, 0, 0, 1], [0, 1, 0, 0, 0, 0, 0, 0, 1, 0], [0, 1, 0, 0, 0, 0, 0, 0, 1, 0], [0, 1, 0, 0, 0, 0, 0, 0, 0, 1], [1, 0, 0, 0, 0, 0, 0, 0, 1, 0], [0, 1, 0, 0, 0, 0, 1, 0, 1, 0], [0, 1, 1, 0, 1, 1, 1, 0, 1, 1], [0, 0, 0, 1, 0, 0, 0, 1, 1, 0], ] ) simple_gm = np.dstack( [ np.zeros(simple_gm.shape), np.zeros(simple_gm.shape), simple_gm, simple_gm, simple_gm, simple_gm, simple_gm, simple_gm, np.zeros(simple_gm.shape), np.zeros(simple_gm.shape), ] ) simple_csf = np.ones(simple_wm.shape) - simple_wm - simple_gm wm_path = join(out_dir, "tmp_wm.nii.gz") gm_path = join(out_dir, "tmp_gm.nii.gz") csf_path = join(out_dir, "tmp_csf.nii.gz") for path, arr in zip( [wm_path, gm_path, 
csf_path], [simple_wm, simple_gm, simple_csf], ): save_nifti(path, arr.astype(np.uint8), affine) # CSD Reconstruction reconst_csd_flow = ReconstCSDFlow() with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) reconst_csd_flow.run( dwi_path, bval_path, bvec_path, mask_path, out_dir=out_dir, extract_pam_values=True, ) pam_path = reconst_csd_flow.last_generated_outputs["out_pam"] gfa_path = reconst_csd_flow.last_generated_outputs["out_gfa"] # Create seeding mask by thresholding the gfa mask_flow = MaskFlow() mask_flow.run(gfa_path, 0.8, out_dir=out_dir) seeds_path = mask_flow.last_generated_outputs["out_mask"] # Test tracking pf_track_pam = PFTrackingPAMFlow() assert_equal(pf_track_pam.get_short_name(), "track_pft") with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) pf_track_pam.run( pam_path, wm_path, gm_path, csf_path, seeds_path, out_dir=out_dir ) tractogram_path = pf_track_pam.last_generated_outputs["out_tractogram"] assert_false(is_tractogram_empty(tractogram_path)) # Test that tracking returns seeds pf_track_pam = PFTrackingPAMFlow() pf_track_pam._force_overwrite = True with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) pf_track_pam.run( pam_path, wm_path, gm_path, csf_path, seeds_path, save_seeds=True, out_dir=out_dir, ) tractogram_path = pf_track_pam.last_generated_outputs["out_tractogram"] assert_true(tractogram_has_seeds(tractogram_path)) assert_true(seeds_are_same_space_as_streamlines(tractogram_path)) def test_local_fiber_tracking_workflow(): with TemporaryDirectory() as out_dir: data_path, bval_path, bvec_path = get_fnames(name="small_64D") volume, affine = load_nifti(data_path) mask = np.ones_like(volume[:, :, :, 0], dtype=np.uint8) mask_path = join(out_dir, "tmp_mask.nii.gz") save_nifti(mask_path, mask, affine) reconst_csd_flow = ReconstCSDFlow() with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) reconst_csd_flow.run( data_path, bval_path, bvec_path, mask_path, out_dir=out_dir, extract_pam_values=True, ) pam_path = reconst_csd_flow.last_generated_outputs["out_pam"] gfa_path = reconst_csd_flow.last_generated_outputs["out_gfa"] # Create seeding mask by thresholding the gfa mask_flow = MaskFlow() mask_flow.run(gfa_path, 0.8, out_dir=out_dir) seeds_path = mask_flow.last_generated_outputs["out_mask"] mask_path = mask_flow.last_generated_outputs["out_mask"] gfa_img, gfa_affine = load_nifti(gfa_path) save_nifti(gfa_path, gfa_img, gfa_affine) # Test tracking with pam no sh lf_track_pam = LocalFiberTrackingPAMFlow() lf_track_pam._force_overwrite = True assert_equal(lf_track_pam.get_short_name(), "track_local") with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) lf_track_pam.run(pam_path, gfa_path, seeds_path, out_dir=out_dir) tractogram_path = lf_track_pam.last_generated_outputs["out_tractogram"] assert_false(is_tractogram_empty(tractogram_path)) # Test tracking with binary stopping criterion lf_track_pam = LocalFiberTrackingPAMFlow() lf_track_pam._force_overwrite = True with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) lf_track_pam.run( pam_path, mask_path, seeds_path, 
use_binary_mask=True, out_dir=out_dir ) tractogram_path = lf_track_pam.last_generated_outputs["out_tractogram"] assert_false(is_tractogram_empty(tractogram_path)) # Test tracking with pam with sh lf_track_pam = LocalFiberTrackingPAMFlow() lf_track_pam._force_overwrite = True lf_track_pam.run( pam_path, gfa_path, seeds_path, tracking_method="eudx", out_dir=out_dir ) tractogram_path = lf_track_pam.last_generated_outputs["out_tractogram"] assert_false(is_tractogram_empty(tractogram_path)) # Test tracking with pam with sh and deterministic getter lf_track_pam = LocalFiberTrackingPAMFlow() lf_track_pam._force_overwrite = True with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) lf_track_pam.run( pam_path, gfa_path, seeds_path, tracking_method="deterministic", out_dir=out_dir, ) tractogram_path = lf_track_pam.last_generated_outputs["out_tractogram"] assert_false(is_tractogram_empty(tractogram_path)) # Test tracking with pam with sh and probabilistic getter lf_track_pam = LocalFiberTrackingPAMFlow() lf_track_pam._force_overwrite = True with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) lf_track_pam.run( pam_path, gfa_path, seeds_path, tracking_method="probabilistic", out_dir=out_dir, ) tractogram_path = lf_track_pam.last_generated_outputs["out_tractogram"] assert_false(is_tractogram_empty(tractogram_path)) # Test tracking with pam with sh and closest peaks getter lf_track_pam = LocalFiberTrackingPAMFlow() lf_track_pam._force_overwrite = True with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) lf_track_pam.run( pam_path, gfa_path, seeds_path, tracking_method="closestpeaks", out_dir=out_dir, ) tractogram_path = lf_track_pam.last_generated_outputs["out_tractogram"] assert_false(is_tractogram_empty(tractogram_path)) # Test that tracking returns seeds lf_track_pam = LocalFiberTrackingPAMFlow() lf_track_pam._force_overwrite = True with warnings.catch_warnings(): warnings.filterwarnings( "ignore", message=descoteaux07_legacy_msg, category=PendingDeprecationWarning, ) lf_track_pam.run( pam_path, gfa_path, seeds_path, tracking_method="deterministic", save_seeds=True, out_dir=out_dir, ) tractogram_path = lf_track_pam.last_generated_outputs["out_tractogram"] assert_true(tractogram_has_seeds(tractogram_path)) assert_true(seeds_are_same_space_as_streamlines(tractogram_path)) def is_tractogram_empty(tractogram_path): tractogram_file = nib.streamlines.load(tractogram_path) return len(tractogram_file.tractogram) == 0 def tractogram_has_seeds(tractogram_path): tractogram = nib.streamlines.load(tractogram_path).tractogram return len(tractogram.data_per_streamline["seeds"]) > 0 def seeds_are_same_space_as_streamlines(tractogram_path): sft = load_tractogram(tractogram_path, "same", bbox_valid_check=False) seeds = sft.data_per_streamline["seeds"] streamlines = sft.streamlines for seed, streamline in zip(seeds, streamlines): map_res = [np.allclose(seed, s, atol=1e-2, rtol=1e-4) for s in streamline] # If no point is close to the seed, it likely means that the seed is # not in the same space as the streamline if not np.any(map_res): return False return True dipy-1.11.0/dipy/workflows/tests/test_utils.py000066400000000000000000000006041476546756600215060ustar00rootroot00000000000000"""Testing utilities.""" import numpy as np from dipy.workflows.utils import 
handle_vol_idx def test_handle_vol_idx(): test_cases = [ ("1,2,5-7,10", np.array([1, 2, 5, 6, 7, 10])), (3, [3]), ([3, "2", 1], [3, 2, 1]), ] for input_val, expected_output in test_cases: np.testing.assert_array_equal(handle_vol_idx(input_val), expected_output) dipy-1.11.0/dipy/workflows/tests/test_viz.py000066400000000000000000000121611476546756600211570ustar00rootroot00000000000000import os from os.path import join as pjoin from tempfile import TemporaryDirectory import numpy as np import numpy.testing as npt import pytest from dipy.io.image import save_nifti from dipy.io.stateful_tractogram import Space, StatefulTractogram from dipy.io.streamline import save_tractogram from dipy.io.utils import create_nifti_header from dipy.testing.decorators import set_random_number_generator, use_xvfb from dipy.tracking.streamline import Streamlines from dipy.utils.optpkg import optional_package fury, has_fury, setup_module = optional_package("fury", min_version="0.10.0") if has_fury: from dipy.viz.horizon.app import horizon from dipy.workflows.viz import HorizonFlow skip_it = use_xvfb == "skip" @pytest.mark.skipif(skip_it or not has_fury, reason="Requires FURY") @set_random_number_generator() def test_horizon_flow(rng): s1 = 10 * np.array( [[0, 0, 0], [1, 0, 0], [2, 0, 0], [3, 0, 0], [4, 0, 0]], dtype="f8" ) s2 = 10 * np.array( [[0, 0, 0], [0, 1, 0], [0, 2, 0], [0, 3, 0], [0, 4, 0]], dtype="f8" ) s3 = 10 * np.array( [[0, 0, 0], [1, 0.2, 0], [2, 0.2, 0], [3, 0.2, 0], [4, 0.2, 0]], dtype="f8" ) affine = np.array( [ [1.0, 0.0, 0.0, -98.0], [0.0, 1.0, 0.0, -134.0], [0.0, 0.0, 1.0, -72.0], [0.0, 0.0, 0.0, 1.0], ] ) data = 255 * rng.random((197, 233, 189)) vox_size = (1.0, 1.0, 1.0) streamlines = Streamlines() streamlines.append(s1) streamlines.append(s2) streamlines.append(s3) header = create_nifti_header(affine, data.shape, vox_size) sft = StatefulTractogram(streamlines, header, Space.RASMM) tractograms = [sft] images = None with TemporaryDirectory() as out_dir: horizon( tractograms=tractograms, images=images, cluster=True, cluster_thr=5, random_colors=False, length_lt=np.inf, length_gt=0, clusters_lt=np.inf, clusters_gt=0, world_coords=True, interactive=False, out_png=pjoin(out_dir, "horizon-flow.png"), ) buan_colors = np.ones(streamlines.get_data().shape) horizon( tractograms=tractograms, buan=True, buan_colors=buan_colors, world_coords=True, interactive=False, out_png=pjoin(out_dir, "buan.png"), ) data = 255 * rng.random((197, 233, 189)) images = [(data, affine, "test/test.nii.gz")] horizon( tractograms=tractograms, images=images, cluster=True, cluster_thr=5, random_colors=False, length_lt=np.inf, length_gt=0, clusters_lt=np.inf, clusters_gt=0, world_coords=True, interactive=False, out_png=pjoin(out_dir, "horizon-flow-nii-images.png"), ) fimg = os.path.join(out_dir, "test.nii.gz") ftrk = os.path.join(out_dir, "test.trk") fnpy = os.path.join(out_dir, "test.npy") save_nifti(fimg, data, affine) dimensions = data.shape nii_header = create_nifti_header(affine, dimensions, vox_size) sft = StatefulTractogram(streamlines, nii_header, space=Space.RASMM) save_tractogram(sft, ftrk, bbox_valid_check=False) pvalues = rng.uniform(low=0, high=1, size=(10,)) np.save(fnpy, pvalues) input_files = [ftrk, fimg] npt.assert_equal(len(input_files), 2) hz_flow = HorizonFlow() hz_flow.run( input_files=input_files, stealth=True, out_dir=out_dir, out_stealth_png="tmp_x.png", ) npt.assert_equal(os.path.exists(os.path.join(out_dir, "tmp_x.png")), True) npt.assert_raises( ValueError, hz_flow.run, input_files=input_files, 
bg_color=(0.2, 0.2) ) hz_flow.run( input_files=input_files, stealth=True, bg_color=[ 0.5, ], out_dir=out_dir, out_stealth_png="tmp_x.png", ) npt.assert_equal(os.path.exists(os.path.join(out_dir, "tmp_x.png")), True) input_files = [ftrk, fnpy] npt.assert_equal(len(input_files), 2) hz_flow.run( input_files=input_files, stealth=True, bg_color=[ 0.5, ], buan=True, buan_thr=0.5, buan_highlight=(1, 1, 0), out_dir=out_dir, out_stealth_png="tmp_x.png", ) npt.assert_equal(os.path.exists(os.path.join(out_dir, "tmp_x.png")), True) npt.assert_raises( ValueError, hz_flow.run, input_files=input_files, roi_colors=(0.2, 0.2) ) hz_flow.run( input_files=input_files, stealth=True, roi_colors=[ 0.5, ], out_dir=out_dir, out_stealth_png="tmp_x.png", ) npt.assert_equal(os.path.exists(os.path.join(out_dir, "tmp_x.png")), True) dipy-1.11.0/dipy/workflows/tests/test_workflow.py000066400000000000000000000044271476546756600222270ustar00rootroot00000000000000import os from os.path import join as pjoin from tempfile import TemporaryDirectory import time import numpy.testing as npt from dipy.data import get_fnames from dipy.workflows.segment import MedianOtsuFlow from dipy.workflows.workflow import Workflow def test_force_overwrite(): with TemporaryDirectory() as out_dir: data_path, _, _ = get_fnames(name="small_25") mo_flow = MedianOtsuFlow(output_strategy="absolute") # Generate the first results mo_flow.run(data_path, out_dir=out_dir, vol_idx=[0]) mask_file = mo_flow.last_generated_outputs["out_mask"] first_time = os.path.getmtime(mask_file) # re-run with no force overwrite, modified time should not change mo_flow.run(data_path, out_dir=out_dir) mask_file = mo_flow.last_generated_outputs["out_mask"] second_time = os.path.getmtime(mask_file) assert first_time == second_time # re-run with force overwrite, modified time should change mo_flow = MedianOtsuFlow(output_strategy="absolute", force=True) # Make sure that at least one second elapsed, so that time-stamp is # different (sometimes measured in whole seconds) time.sleep(1) mo_flow.run(data_path, out_dir=out_dir, vol_idx=[0]) mask_file = mo_flow.last_generated_outputs["out_mask"] third_time = os.path.getmtime(mask_file) assert third_time != second_time def test_get_sub_runs(): wf = Workflow() assert len(wf.get_sub_runs()) == 0 def test_run(): wf = Workflow() npt.assert_raises(Exception, wf.run, None) def test_missing_file(): # The function is invoking a dummy workflow with a non-existent file. # So, an OSError will be raised. class TestMissingFile(Workflow): def run(self, filename, out_dir=""): """Dummy Workflow used to test if input file is absent. Parameters ---------- filename : string path of the first input file. out_dir: string, optional folder path to save the results. """ _ = self.get_io_iterator() dummyflow = TestMissingFile() with TemporaryDirectory() as tempdir: npt.assert_raises(OSError, dummyflow.run, pjoin(tempdir, "dummy_file.txt")) dipy-1.11.0/dipy/workflows/tests/workflow_tests_utils.py000066400000000000000000000135451476546756600236330ustar00rootroot00000000000000from dipy.workflows.combined_workflow import CombinedWorkflow from dipy.workflows.workflow import Workflow class DummyWorkflow1(Workflow): @classmethod def get_short_name(cls): return "dwf1" def run(self, inputs, param1=1, out_dir="", output_1="out1.txt"): """Workflow used to test combined workflows in general. 
Parameters ---------- inputs : string fake input string param param1 : int fake positional param (default 1) out_dir : string fake output directory (default '') output_1 : string fake out file (default out1.txt) References ---------- dummy references """ return param1 class DummyWorkflow2(Workflow): @classmethod def get_short_name(cls): return "dwf2" def run(self, inputs, param2=2, out_dir="", output_1="out2.txt"): """Workflow used to test combined workflows in general. Parameters ---------- inputs : string fake input string param param2 : int fake positional param (default 2) out_dir : string fake output directory (default '') output_1 : string fake out file (default out2.txt) """ return param2 class DummyCombinedWorkflow(CombinedWorkflow): def _get_sub_flows(self): return [DummyWorkflow1, DummyWorkflow2] def run( self, inputs, param_combined=3, out_dir="", out_combined="out_combined.txt" ): """Workflow used to test combined workflows in general. Parameters ---------- inputs : string fake input string param param_combined : int fake positional param (default 3) out_dir : string fake output directory (default '') out_combined : string fake out file (default out_combined.txt) """ dwf1 = DummyWorkflow1() param1 = self.run_sub_flow(dwf1, inputs) dwf2 = DummyWorkflow2() param2 = self.run_sub_flow(dwf2, inputs) return param1, param2, param_combined class DummyFlow(Workflow): def run( self, positional_str, positional_bool, positional_int, positional_float, optional_str="default", optional_bool=False, optional_int=0, optional_float=1.0, optional_float_2=2.0, optional_int_2=5, optional_float_3=2.0, out_dir="", ): """Workflow used to test the introspective argument parser. Parameters ---------- positional_str : string positional string argument positional_bool : bool positional bool argument positional_int : int positional int argument positional_float : float positional float argument optional_str : string, optional optional string argument (default 'default') optional_bool : bool, optional optional bool argument (default False) optional_int : int, optional optional int argument (default 0) optional_float : float, optional optional float argument (default 1.0) optional_float_2 : float, optional optional float argument #2 (default 2.0) optional_int_2 : int, optional optional int argument #2 (default 5) optional_float_3 : float, optional optional float argument #3 (default 2.0) out_dir : string output directory (default '') """ return ( positional_str, positional_bool, positional_int, positional_float, optional_str, optional_bool, optional_int, optional_float, optional_int_2, optional_float_2, optional_float_3, ) class DummyWorkflowOptionalStr(Workflow): def run(self, optional_str_1=None, optional_str_2="default"): """Workflow used to test optional string parameters in the introspective argument parser. Parameters ---------- optional_str_1 : variable str, optional optional string argument 1 optional_str_2 : str, optional optional string argument 2 (default 'default') """ return optional_str_1, optional_str_2 class DummyVariableTypeWorkflow(Workflow): @classmethod def get_short_name(cls): return "tvtwf" def run(self, positional_variable_str, positional_int, out_dir=""): """Workflow used to test variable string in general.
Parameters ---------- positional_variable_str : variable string fake input string param positional_int : int fake positional param out_dir : string fake output directory (default '') """ result = [] io_it = self.get_io_iterator() for variable1 in io_it: result.append(variable1) return result, positional_variable_str, positional_int class DummyVariableTypeErrorWorkflow(Workflow): @classmethod def get_short_name(cls): return "tvtwfe" def run(self, positional_variable_str, positional_variable_int, out_dir=""): """Workflow used to test variable string error. Parameters ---------- positional_variable_str : variable string fake input string param positional_variable_int : variable int fake positional param (default 2) out_dir : string fake output directory (default '') """ result = [] io_it = self.get_io_iterator() for variable1, variable2 in io_it: result.append((variable1, variable2)) return result dipy-1.11.0/dipy/workflows/tracking.py000066400000000000000000000416111476546756600177520ustar00rootroot00000000000000#!/usr/bin/env python3 import logging import sys from dipy.data import SPHERE_FILES, get_sphere from dipy.io.image import load_nifti from dipy.io.peaks import load_pam from dipy.io.stateful_tractogram import Space, StatefulTractogram from dipy.io.streamline import save_tractogram from dipy.tracking import utils from dipy.tracking.stopping_criterion import ( BinaryStoppingCriterion, CmcStoppingCriterion, ThresholdStoppingCriterion, ) from dipy.tracking.tracker import ( closestpeak_tracking, deterministic_tracking, eudx_tracking, pft_tracking, probabilistic_tracking, ptt_tracking, ) from dipy.workflows.workflow import Workflow class LocalFiberTrackingPAMFlow(Workflow): @classmethod def get_short_name(cls): return "track_local" def run( self, pam_files, stopping_files, seeding_files, use_binary_mask=False, stopping_thr=0.2, seed_density=1, minlen=2, maxlen=500, step_size=0.5, tracking_method="deterministic", pmf_threshold=0.1, max_angle=30.0, sphere_name=None, save_seeds=False, nbr_threads=0, random_seed=1, seed_buffer_fraction=1.0, out_dir="", out_tractogram="tractogram.trk", ): """Workflow for Local Fiber Tracking. This workflow uses a saved peaks and metrics (PAM) file as input. See :footcite:p:`Garyfallidis2012b` and :footcite:p:`Amirbekian2016` for further details about the method. Parameters ---------- pam_files : string Path to the peaks and metrics files. This path may contain wildcards to use multiple masks at once. stopping_files : string Path to images (e.g. FA) used for stopping criterion for tracking. seeding_files : string A binary image showing where we need to seed for tracking. use_binary_mask : bool, optional If True, uses a binary stopping criterion. If the provided `stopping_files` are not binary, `stopping_thr` will be used to binarize the images. stopping_thr : float, optional Threshold applied to stopping volume's data to identify where tracking has to stop. seed_density : int, optional Number of seeds per dimension inside voxel. For example, seed_density of 2 means 8 regularly distributed points in the voxel. A seed_density of 1 means 1 point at the center of the voxel. minlen : int, optional Minimum length (nb points) of the streamlines. maxlen : int, optional Maximum length (nb points) of the streamlines. step_size : float, optional Step size (in mm) used for tracking.
tracking_method : string, optional Select direction getter strategy: - "eudx" (Uses the peaks saved in the pam_files) - "deterministic" or "det" for a deterministic tracking - "probabilistic" or "prob" for a Probabilistic tracking - "closestpeaks" or "cp" for a ClosestPeaks tracking - "ptt" for Parallel Transport Tractography By default, the sh coefficients saved in the pam_files are used. pmf_threshold : float, optional Threshold for ODF functions. max_angle : float, optional Maximum angle between streamline segments (range [0, 90]). sphere_name : string, optional The sphere used for tracking. If None, the sphere saved in the pam_files is used. For faster tracking, use a smaller sphere (e.g. 'repulsion200'). save_seeds : bool, optional If true, save the seeds associated to their streamline in the 'data_per_streamline' Tractogram dictionary using 'seeds' as the key. nbr_threads : int, optional Number of threads to use for the processing. By default, all available threads will be used. random_seed : int, optional Seed for the random number generator, must be >= 0. A value greater than 0 will always produce the same streamline trajectory for a given seed coordinate. A value of 0 may produce varying streamline trajectories for a given seed coordinate. seed_buffer_fraction : float, optional Fraction of the seed buffer to use. A value of 1.0 will use the entire seed buffer. A value of 0.5 will use half of the seed buffer, then the other half, as a way to reduce memory usage. out_dir : string, optional Output directory. out_tractogram : string, optional Name of the tractogram file to be saved. References ---------- .. footbibliography:: """ io_it = self.get_io_iterator() tracking_method = tracking_method.lower() sphere_name = SPHERE_FILES.get(sphere_name, None) for pams_path, stopping_path, seeding_path, out_tract in io_it: if tracking_method == "eudx": logging.info(f"EuDX Deterministic tracking on {pams_path}") else: logging.info(f"{tracking_method.title()} tracking on {pams_path}") pam = load_pam(pams_path, verbose=False) sphere = pam.sphere if sphere_name is None else get_sphere(name=sphere_name) logging.info("Loading stopping criterion") stop, affine = load_nifti(stopping_path) if use_binary_mask: stopping_criterion = BinaryStoppingCriterion(stop > stopping_thr) else: stopping_criterion = ThresholdStoppingCriterion(stop, stopping_thr) logging.info("Loading seeds") seed_mask, _ = load_nifti(seeding_path) seeds = utils.seeds_from_mask( seed_mask, affine, density=[seed_density, seed_density, seed_density] ) logging.info("Starting to track") if tracking_method in ["closestpeaks", "cp"]: tracking_result = closestpeak_tracking( seeds, stopping_criterion, affine, sh=pam.shm_coeff, random_seed=random_seed, sphere=sphere, max_angle=max_angle, min_len=minlen, max_len=maxlen, step_size=step_size, pmf_threshold=pmf_threshold, save_seeds=save_seeds, nbr_threads=nbr_threads, seed_buffer_fraction=seed_buffer_fraction, ) elif tracking_method in [ "eudx", ]: tracking_result = eudx_tracking( seeds, stopping_criterion, affine, sh=pam.shm_coeff, pam=pam, random_seed=random_seed, sphere=sphere, max_angle=max_angle, min_len=minlen, max_len=maxlen, step_size=step_size, pmf_threshold=pmf_threshold, save_seeds=save_seeds, nbr_threads=nbr_threads, seed_buffer_fraction=seed_buffer_fraction, ) elif tracking_method in ["deterministic", "det"]: tracking_result = deterministic_tracking( seeds, stopping_criterion, affine, sh=pam.shm_coeff, random_seed=random_seed, sphere=sphere, max_angle=max_angle, min_len=minlen,
max_len=maxlen, step_size=step_size, pmf_threshold=pmf_threshold, save_seeds=save_seeds, nbr_threads=nbr_threads, seed_buffer_fraction=seed_buffer_fraction, ) elif tracking_method in ["probabilistic", "prob"]: tracking_result = probabilistic_tracking( seeds, stopping_criterion, affine, sh=pam.shm_coeff, random_seed=random_seed, sphere=sphere, max_angle=max_angle, min_len=minlen, max_len=maxlen, step_size=step_size, pmf_threshold=pmf_threshold, save_seeds=save_seeds, nbr_threads=nbr_threads, seed_buffer_fraction=seed_buffer_fraction, ) elif tracking_method in ["ptt"]: tracking_result = ptt_tracking( seeds, stopping_criterion, affine, sh=pam.shm_coeff, random_seed=random_seed, sphere=sphere, max_angle=max_angle, min_len=minlen, max_len=maxlen, step_size=step_size, pmf_threshold=pmf_threshold, save_seeds=save_seeds, nbr_threads=nbr_threads, seed_buffer_fraction=seed_buffer_fraction, ) else: logging.error( f"Unknown tracking method: {tracking_method}. " f"Please use one of the following: " f"'eudx', 'deterministic', 'probabilistic', 'closestpeaks', 'ptt'" ) sys.exit(1) if save_seeds: streamlines, seeds = zip(*tracking_result) seeds = {"seeds": seeds} else: streamlines = list(tracking_result) seeds = {} sft = StatefulTractogram( streamlines, seeding_path, Space.RASMM, data_per_streamline=seeds ) save_tractogram(sft, out_tract, bbox_valid_check=False) logging.info(f"Saved {out_tract}") class PFTrackingPAMFlow(Workflow): @classmethod def get_short_name(cls): return "track_pft" def run( self, pam_files, wm_files, gm_files, csf_files, seeding_files, step_size=0.2, seed_density=1, pmf_threshold=0.1, max_angle=20.0, sphere_name=None, pft_back=2, pft_front=1, pft_count=15, pft_max_trial=20, save_seeds=False, min_wm_pve_before_stopping=0, nbr_threads=0, random_seed=1, seed_buffer_fraction=1.0, out_dir="", out_tractogram="tractogram.trk", ): """Workflow for Particle Filtering Tracking. This workflow uses a saved peaks and metrics (PAM) file as input. See :footcite:p:`Girard2014` for further details about the method. Parameters ---------- pam_files : string Path to the peaks and metrics files. This path may contain wildcards to use multiple masks at once. wm_files : string Path to white matter partial volume estimate for tracking (CMC). gm_files : string Path to grey matter partial volume estimate for tracking (CMC). csf_files : string Path to cerebrospinal fluid partial volume estimate for tracking (CMC). seeding_files : string A binary image showing where we need to seed for tracking. step_size : float, optional Step size (in mm) used for tracking. seed_density : int, optional Number of seeds per dimension inside voxel. For example, seed_density of 2 means 8 regularly distributed points in the voxel. A seed_density of 1 means 1 point at the center of the voxel. pmf_threshold : float, optional Threshold for ODF functions. max_angle : float, optional Maximum angle between streamline segments (range [0, 90]). sphere_name : string, optional The sphere used for tracking. If None, the sphere saved in the pam_files is used. For faster tracking, use a smaller sphere (e.g. 'repulsion200'). pft_back : float, optional Distance in mm to back track before starting the particle filtering tractography. The total particle filtering tractography distance is equal to back_tracking_dist + front_tracking_dist. pft_front : float, optional Distance in mm to run the particle filtering tractography after the back track distance.
The total particle filtering tractography distance is equal to back_tracking_dist + front_tracking_dist. pft_count : int, optional Number of particles to use in the particle filter. pft_max_trial : int, optional Maximum number of trials to run the particle filtering tractography. save_seeds : bool, optional If true, save the seeds associated to their streamline in the 'data_per_streamline' Tractogram dictionary using 'seeds' as the key. min_wm_pve_before_stopping : int, optional Minimum white matter pve (1 - stopping_criterion.include_map - stopping_criterion.exclude_map) to reach before allowing the tractography to stop. nbr_threads : int, optional Number of threads to use for the processing. By default, all available threads will be used. random_seed : int, optional Seed for the random number generator, must be >= 0. A value greater than 0 will always produce the same streamline trajectory for a given seed coordinate. A value of 0 may produce varying streamline trajectories for a given seed coordinate. seed_buffer_fraction : float, optional Fraction of the seed buffer to use. A value of 1.0 will use the entire seed buffer. A value of 0.5 will use half of the seed buffer, then the other half, as a way to reduce memory usage. out_dir : string, optional Output directory. out_tractogram : string, optional Name of the tractogram file to be saved. References ---------- .. footbibliography:: """ io_it = self.get_io_iterator() sphere_name = SPHERE_FILES.get(sphere_name, None) for pams_path, wm_path, gm_path, csf_path, seeding_path, out_tract in io_it: logging.info(f"Particle Filtering tracking on {pams_path}") pam = load_pam(pams_path, verbose=False) sphere = pam.sphere if sphere_name is None else get_sphere(name=sphere_name) wm, affine, voxel_size = load_nifti(wm_path, return_voxsize=True) gm, _ = load_nifti(gm_path) csf, _ = load_nifti(csf_path) avs = sum(voxel_size) / len(voxel_size) # average_voxel_size logging.info("Preparing stopping criterion") stopping_criterion = CmcStoppingCriterion.from_pve( wm, gm, csf, step_size=step_size, average_voxel_size=avs ) logging.info("Seeding in mask") seed_mask, _ = load_nifti(seeding_path) seeds = utils.seeds_from_mask( seed_mask, affine, density=[seed_density, seed_density, seed_density] ) logging.info("Start tracking") tracking_result = pft_tracking( seeds, stopping_criterion, affine, sh=pam.shm_coeff, sphere=sphere, step_size=step_size, pft_back_tracking_dist=pft_back, pft_front_tracking_dist=pft_front, pft_max_trial=pft_max_trial, particle_count=pft_count, save_seeds=save_seeds, min_wm_pve_before_stopping=min_wm_pve_before_stopping, random_seed=random_seed, max_angle=max_angle, pmf_threshold=pmf_threshold, nbr_threads=nbr_threads, seed_buffer_fraction=seed_buffer_fraction, ) if save_seeds: streamlines, seeds = zip(*tracking_result) seeds = {"seeds": seeds} else: streamlines = list(tracking_result) seeds = {} sft = StatefulTractogram( streamlines, seeding_path, Space.RASMM, data_per_streamline=seeds ) save_tractogram(sft, out_tract, bbox_valid_check=False) logging.info(f"Saved {out_tract}") dipy-1.11.0/dipy/workflows/utils.py000066400000000000000000000012171476546756600173060ustar00rootroot00000000000000"""Module for utility functions.""" from dipy.utils.convert import expand_range def handle_vol_idx(vol_idx): """Handle user input for volume index. Parameters ---------- vol_idx : int, str, list, tuple Volume index or range of volume indices. Returns ------- vol_idx : list List of volume indices.
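Examples -------- Illustrative usage; the expected values mirror ``test_handle_vol_idx`` in the workflow tests:: handle_vol_idx("1,2,5-7,10") # -> [1, 2, 5, 6, 7, 10] handle_vol_idx(3) # -> [3] handle_vol_idx([3, "2", 1]) # -> [3, 2, 1]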
""" if vol_idx is not None: if isinstance(vol_idx, str): vol_idx = expand_range(vol_idx) elif isinstance(vol_idx, int): vol_idx = [vol_idx] elif isinstance(vol_idx, (list, tuple)): vol_idx = [int(idx) for idx in vol_idx] return vol_idx dipy-1.11.0/dipy/workflows/viz.py000066400000000000000000000244161476546756600167640ustar00rootroot00000000000000import logging from os.path import join as pjoin from warnings import warn import numpy as np from dipy.io.image import load_nifti from dipy.io.peaks import load_pam from dipy.io.streamline import load_tractogram from dipy.io.surface import load_gifti, load_pial from dipy.io.utils import create_nifti_header from dipy.stats.analysis import assignment_map from dipy.utils.optpkg import optional_package from dipy.viz import horizon from dipy.workflows.workflow import Workflow fury, has_fury, setup_module = optional_package("fury", min_version="0.10.0") if has_fury: from fury.colormap import line_colors from fury.lib import numpy_support from fury.utils import numpy_to_vtk_colors class HorizonFlow(Workflow): @classmethod def get_short_name(cls): return "horizon" def run( self, input_files, cluster=False, rgb=False, cluster_thr=15.0, random_colors=None, length_gt=0, length_lt=1000, clusters_gt=0, clusters_lt=10**8, native_coords=False, stealth=False, emergency_header="icbm_2009a", bg_color=(0, 0, 0), disable_order_transparency=False, buan=False, buan_thr=0.5, buan_highlight=(1, 0, 0), roi_images=False, roi_colors=(1, 0, 0), out_dir="", out_stealth_png="tmp.png", ): """Interactive medical visualization - Invert the Horizon! See :footcite:p:`Garyfallidis2019` for further details about Horizon. Interact with any number of .trk, .tck or .dpy tractograms and anatomy files .nii or .nii.gz. Cluster streamlines on loading. Parameters ---------- input_files : variable string Filenames. cluster : bool, optional Enable QuickBundlesX clustering. rgb : bool, optional Enable the color image (rgb only, alpha channel will be ignored). cluster_thr : float, optional Distance threshold used for clustering. Default value 15.0 for small animal brains you may need to use something smaller such as 2.0. The distance is in mm. For this parameter to be active ``cluster`` should be enabled. random_colors : variable str, optional Given multiple tractograms and/or ROIs then each tractogram and/or ROI will be shown with different color. If no value is provided, both the tractograms and the ROIs will have a different random color generated from a distinguishable colormap. If the effect should only be applied to one of the 2 types, then use the options 'tracts' and 'rois' for the tractograms and the ROIs respectively. length_gt : float, optional Clusters with average length greater than ``length_gt`` amount in mm will be shown. length_lt : float, optional Clusters with average length less than ``length_lt`` amount in mm will be shown. clusters_gt : int, optional Clusters with size greater than ``clusters_gt`` will be shown. clusters_lt : int, optional Clusters with size less than ``clusters_gt`` will be shown. native_coords : bool, optional Show results in native coordinates. stealth : bool, optional Do not use interactive mode just save figure. emergency_header : str, optional If no anatomy reference is provided an emergency header is provided. Current options 'icbm_2009a' and 'icbm_2009c'. bg_color : variable float, optional Define the background color of the scene. Colors can be defined with 1 or 3 values and should be between [0-1]. 
disable_order_transparency : bool, optional Use depth peeling to sort transparent objects. If True, also enables anti-aliasing. buan : bool, optional Enables BUAN framework visualization. buan_thr : float, optional Uses the threshold value to highlight segments on the bundle which have pvalues less than this threshold. buan_highlight : variable float, optional Define the bundle highlight area color. Colors can be defined with 1 or 3 values and should be between [0-1]. For example, a value of (1, 0, 0) would mean the red color. roi_images : bool, optional Displays binary images as contours. roi_colors : variable float, optional Define the color for the roi images. Colors can be defined with 1 or 3 values and should be between [0-1]. For example, a value of (1, 0, 0) would mean the red color. out_dir : str, optional Output directory. out_stealth_png : str, optional Filename of saved picture. References ---------- .. footbibliography:: """ super(HorizonFlow, self).__init__(force=True) verbose = True tractograms = [] images = [] pams = [] surfaces = [] numpy_files = [] interactive = not stealth world_coords = not native_coords bundle_colors = None mni_2009a = { "affine": np.array( [ [1.0, 0.0, 0.0, -98.0], [0.0, 1.0, 0.0, -134.0], [0.0, 0.0, 1.0, -72.0], [0.0, 0.0, 0.0, 1.0], ] ), "dims": (197, 233, 189), "vox_size": (1.0, 1.0, 1.0), "vox_space": "RAS", } mni_2009c = { "affine": np.array( [ [1.0, 0.0, 0.0, -96.0], [0.0, 1.0, 0.0, -132.0], [0.0, 0.0, 1.0, -78.0], [0.0, 0.0, 0.0, 1.0], ] ), "dims": (193, 229, 193), "vox_size": (1.0, 1.0, 1.0), "vox_space": "RAS", } if emergency_header == "icbm_2009a": hdr = mni_2009a else: hdr = mni_2009c emergency_ref = create_nifti_header(hdr["affine"], hdr["dims"], hdr["vox_size"]) io_it = self.get_io_iterator() for input_output in io_it: fname = input_output[0] if verbose: logging.info(f"Loading file ... 
\n {fname}\n") fl = fname.lower() ends = fl.endswith if ends(".trk") or ends(".trx"): sft = load_tractogram(fname, "same", bbox_valid_check=False) tractograms.append(sft) if ends((".dpy", ".tck", ".vtk", ".vtp", ".fib")): sft = load_tractogram(fname, emergency_ref) tractograms.append(sft) if ends(".nii.gz") or ends(".nii"): data, affine = load_nifti(fname) images.append((data, affine, fname)) if ends(".pial"): surface = load_pial(fname) if surface: vertices, faces = surface surfaces.append((vertices, faces, fname)) if ends(".gii.gz") or ends(".gii"): surface = load_gifti(fname) vertices, faces = surface if len(vertices) and len(faces): vertices, faces = surface surfaces.append((vertices, faces, fname)) else: warn(f"{fname} does not have any surface geometry.", stacklevel=2) if ends(".pam5"): pam = load_pam(fname) pams.append(pam) if verbose: logging.info(f"Peak_dirs shape \n {pam.peak_dirs.shape}\n") if ends(".npy"): data = np.load(fname) numpy_files.append(data) if verbose: logging.info(f"numpy array length \n {len(data)}\n") if buan: bundle_colors = [] for i in range(len(numpy_files)): n = len(numpy_files[i]) pvalues = numpy_files[i] bundle = tractograms[i].streamlines indx = assignment_map(bundle, bundle, n) ind = np.array(indx) nb_lines = len(bundle) lines_range = range(nb_lines) points_per_line = [len(bundle[i]) for i in lines_range] points_per_line = np.array(points_per_line, np.intp) cols_arr = line_colors(bundle) colors_mapper = np.repeat(lines_range, points_per_line, axis=0) vtk_colors = numpy_to_vtk_colors(255 * cols_arr[colors_mapper]) colors = numpy_support.vtk_to_numpy(vtk_colors) colors = (colors - np.min(colors)) / np.ptp(colors) for j in range(n): if pvalues[j] < buan_thr: colors[ind == j] = buan_highlight bundle_colors.append(colors) if len(bg_color) == 1: bg_color *= 3 elif len(bg_color) != 3: raise ValueError( "You need 3 values to set up background color. " "e.g --bg_color 0.5 0.5 0.5" ) if len(roi_colors) == 1: roi_colors *= 3 elif len(roi_colors) != 3: raise ValueError( "You need 3 values to set up ROI color. e.g. --roi_colors 0.5 0.5 0.5" ) order_transparent = not disable_order_transparency horizon( tractograms=tractograms, images=images, pams=pams, surfaces=surfaces, cluster=cluster, rgb=rgb, cluster_thr=cluster_thr, random_colors=random_colors, bg_color=bg_color, order_transparent=order_transparent, length_gt=length_gt, length_lt=length_lt, clusters_gt=clusters_gt, clusters_lt=clusters_lt, world_coords=world_coords, interactive=interactive, buan=buan, buan_colors=bundle_colors, roi_images=roi_images, roi_colors=roi_colors, out_png=pjoin(out_dir, out_stealth_png), ) dipy-1.11.0/dipy/workflows/workflow.py000066400000000000000000000122561476546756600200250ustar00rootroot00000000000000import inspect import logging import os from dipy.testing.decorators import warning_for_keywords from dipy.workflows.multi_io import _io_iterator class Workflow: @warning_for_keywords() def __init__( self, *, output_strategy="absolute", mix_names=False, force=False, skip=False ): """Initialize the basic workflow object. This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class. """ self._output_strategy = output_strategy self._mix_names = mix_names self.last_generated_outputs = None self._force_overwrite = force self._skip = skip def get_io_iterator(self): """Create an iterator for IO. 
dipy-1.11.0/dipy/workflows/workflow.py000066400000000000000000000122561476546756600200250ustar00rootroot00000000000000import inspect
import logging
import os

from dipy.testing.decorators import warning_for_keywords
from dipy.workflows.multi_io import _io_iterator


class Workflow:
    @warning_for_keywords()
    def __init__(
        self, *, output_strategy="absolute", mix_names=False, force=False, skip=False
    ):
        """Initialize the basic workflow object.

        This object takes care of any workflow operation that is common to
        all the workflows. Every new workflow should extend this class.
        """
        self._output_strategy = output_strategy
        self._mix_names = mix_names
        self.last_generated_outputs = None
        self._force_overwrite = force
        self._skip = skip

    def get_io_iterator(self):
        """Create an iterator for IO.

        Use a couple of inspection tricks to build an IOIterator using the
        previous frame (values of local variables and other contextuals) and
        the run method's docstring.
        """
        # To manage different python versions.
        frame = inspect.stack()[1]
        if isinstance(frame, tuple):
            frame = frame[0]
        else:
            frame = frame.frame

        io_it = _io_iterator(
            frame,
            self.run,
            output_strategy=self._output_strategy,
            mix_names=self._mix_names,
        )

        # Make a flat list out of a list of lists.
        self.flat_outputs = [item for sublist in io_it.outputs for item in sublist]

        if io_it.out_keys:
            self.last_generated_outputs = dict(zip(io_it.out_keys, self.flat_outputs))
        else:
            self.last_generated_outputs = self.flat_outputs
        if self.manage_output_overwrite():
            return io_it
        else:
            return []

    def update_flat_outputs(self, new_flat_outputs, io_it):
        """Update the flat outputs with new values.

        This method is useful when a workflow needs to update the
        flat_outputs with new values that were generated in the run method.

        Parameters
        ----------
        new_flat_outputs : list
            List of new values to update the flat_outputs.
        io_it : IOIterator
            The IOIterator object that was returned by get_io_iterator.
        """
        if len(new_flat_outputs) != len(self.flat_outputs):
            raise ValueError(
                "The new flat outputs must have the same length as the "
                "current flat outputs."
            )
        self.flat_outputs = new_flat_outputs
        if io_it.outputs:
            size = len(io_it.outputs[-1])
            io_it.outputs = [
                self.flat_outputs[i : i + size]
                for i in range(0, len(self.flat_outputs), size)
            ]
        if io_it.out_keys:
            self.last_generated_outputs = dict(zip(io_it.out_keys, self.flat_outputs))
        else:
            self.last_generated_outputs = self.flat_outputs
        self.manage_output_overwrite()

    def manage_output_overwrite(self):
        """Check if a file will be overwritten upon processing the inputs.

        If it is bound to happen, an action is taken depending on
        self._force_overwrite (or --force via command line). A log message is
        output independently of the outcome to tell the user something
        happened.
        """
        duplicates = []
        for output in self.flat_outputs:
            if os.path.isfile(output):
                duplicates.append(output)

        if len(duplicates) > 0:
            if self._force_overwrite:
                logging.info("The following output files are about to be overwritten.")
            else:
                logging.info(
                    "The following output files already exist, the "
                    "workflow will not continue processing any "
                    "further. Add the --force flag to allow output "
                    "files overwrite."
                )
            for dup in duplicates:
                logging.info(dup)

            return self._force_overwrite
        return True

    def run(self, *args, **kwargs):
        """Execute the workflow.

        Since this is an abstract class, raise an exception if this code is
        reached (i.e. it is not implemented in a child class or is literally
        called on this class).
        """
        raise Exception(f"Error: {self.__class__} does not have a run method.")

    def get_sub_runs(self):
        """Return no sub runs, since this is a simple workflow."""
        return []

    @classmethod
    def get_short_name(cls):
        """Return a short name for the workflow, used to subdivide parameters.

        The short name is used by CombinedWorkflows and the argparser to
        subdivide the commandline parameters, avoiding the trouble of having
        subworkflow parameters with the same name. For example, a combined
        workflow with dti reconstruction and csd reconstruction might end up
        with the b0_threshold parameter twice. Using short names, we will
        have dti.b0_threshold and csd.b0_threshold available.

        Returns the class name by default, but it is strongly advised to set
        it to something shorter and easier to write on the command line.
        """
        return cls.__name__
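
# --- Usage sketch (illustrative, not part of the upstream module) ----------
# A minimal concrete workflow. The ``run`` parameter names double as CLI
# flags, and ``get_io_iterator`` pairs each input file with the generated
# output path. ``TextAppendFlow`` and its parameters are hypothetical
# examples, not part of this module:
#
#     class TextAppendFlow(Workflow):
#         @classmethod
#         def get_short_name(cls):
#             # Short prefix used by CombinedWorkflows, e.g. append.text_to_add
#             return "append"
#
#         def run(self, input_files, text_to_add="dipy", out_dir="",
#                 out_file="append.txt"):
#             """Append some text to a copy of each input file."""
#             io_it = self.get_io_iterator()
#             for in_file, out_path in io_it:
#                 with open(out_path, "w") as fobj:
#                     fobj.write(text_to_add)
# ---------------------------------------------------------------------------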
""" return cls.__name__ dipy-1.11.0/doc/000077500000000000000000000000001476546756600133365ustar00rootroot00000000000000dipy-1.11.0/doc/.gitignore000066400000000000000000000000131476546756600153200ustar00rootroot00000000000000reference/ dipy-1.11.0/doc/Makefile000066400000000000000000000102341476546756600147760ustar00rootroot00000000000000# Makefile for Sphinx documentation # # You can set these variables from the command line. SPHINXOPTS = -j auto SPHINXBUILD = sphinx-build PAPER = # Internal variables. PAPEROPT_a4 = -D latex_paper_size=a4 PAPEROPT_letter = -D latex_paper_size=letter ALLSPHINXOPTS = -d _build/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . .PHONY: help clean html dirhtml pickle json htmlhelp qthelp latex changes linkcheck doctest help: @echo "Please use \`make ' where is one of" @echo " html to make standalone HTML files" @echo " api to make the auto-generated API files" @echo " dirhtml to make HTML files named index.html in directories" @echo " pickle to make pickle files" @echo " json to make JSON files" @echo " htmlhelp to make HTML files and a HTML help project" @echo " qthelp to make HTML files and a qthelp project" @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter" @echo " changes to make an overview of all changed/added/deprecated items" @echo " linkcheck to check all external links for integrity" @echo " doctest to run all doctests embedded in the documentation (if enabled)" clean: api-clean examples-clean -rm -rf _build/* -rm *-stamp api-clean: rm -rf reference/*.rst rm -rf reference_cmd/*.rst api: @mkdir -p reference $(PYTHON) tools/build_modref_templates.py dipy reference @mkdir -p reference_cmd $(PYTHON) tools/docgen_cmd.py dipy reference_cmd @echo "Build API docs...done." examples-clean: rm -rf examples_revamped -cd examples_built && find . -not -name "README" -not -name ".gitignore" -exec rm -rfd {} \; || true gitwash-update: python3 ../tools/gitwash_dumper.py devel dipy --repo-name=dipy --github-user=dipy \ --project-url=https://dipy.org \ --project-ml-url=https://mail.python.org/mailman/listinfo/neuroimaging html: api # Standard html build after examples have been prepared $(SPHINXBUILD) -b html $(ALLSPHINXOPTS) _build/html @echo @echo "Build finished. The HTML pages are in _build/html." html-no-examples: api # Standard html build after examples have been prepared $(SPHINXBUILD) -D plot_gallery=0 -b html $(ALLSPHINXOPTS) _build/html @echo @echo "Build finished. The HTML pages are in _build/html." dirhtml: $(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) _build/dirhtml @echo @echo "Build finished. The HTML pages are in _build/dirhtml." pickle: $(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) _build/pickle @echo @echo "Build finished; now you can process the pickle files." json: $(SPHINXBUILD) -b json $(ALLSPHINXOPTS) _build/json @echo @echo "Build finished; now you can process the JSON files." htmlhelp: $(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) _build/htmlhelp @echo @echo "Build finished; now you can run HTML Help Workshop with the" \ ".hhp project file in _build/htmlhelp." qthelp: $(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) _build/qthelp @echo @echo "Build finished; now you can run "qcollectiongenerator" with the" \ ".qhcp project file in _build/qthelp, like this:" @echo "# qcollectiongenerator _build/qthelp/dipy.qhcp" @echo "To view the help file:" @echo "# assistant -collectionFile _build/qthelp/dipy.qhc" latex: api $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) _build/latex @echo @echo "Build finished; the LaTeX files are in _build/latex." 
@echo "Run \`make all-pdf' or \`make all-ps' in that directory to" \ "run these through (pdf)latex." changes: $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) _build/changes @echo @echo "The overview file is in _build/changes." linkcheck: $(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) _build/linkcheck @echo @echo "Link check complete; look for any errors in the above output " \ "or in _build/linkcheck/output.txt." doctest: $(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) _build/doctest @echo "Testing of doctests in the sources finished, look at the " \ "results in _build/doctest/output.txt." pdf: pdf-stamp pdf-stamp: latex cd _build/latex && make all-pdf touch $@ upload: html ./upload-gh-pages.sh _build/html/ dipy dipy xvfb: export TEST_WITH_XVFB=true && make html memory_profile: export TEST_WITH_MEMPROF=true && make html dipy-1.11.0/doc/README.md000066400000000000000000000016431476546756600146210ustar00rootroot00000000000000# Documentation Generation ## Index - `_static`: Contains images, css, js for Sphinx to look at - `_templates`: Contains html layout for custom Sphinx design - `build`: Contains the generated documentation - `devel`: Contains `*.rst` files for the Developer's Guide - `examples`: DIPY application showcases. Add any tutorial here - `examples_built`: Keep it empty. Only for example generation - `releases_notes`: Contains all API changes / PRs, issues resolved for a specific release - `sphinxext`: Sphinx custom plugins - `theory`: Diffusion theory + FAQ files - `tools`: Scripts to generate some parts of the documentation, like the API ## Doc generation steps: ### Installing requirements ```bash $ pip install -U -r doc-requirements.txt ``` ### Generate all the Documentation #### Under Linux and OSX ```bash $ make -C . clean && make -C . html ``` #### Under Windows ```bash $ ./make.bat clean $ ./make.bat html ``` dipy-1.11.0/doc/_static/000077500000000000000000000000001476546756600147645ustar00rootroot00000000000000dipy-1.11.0/doc/_static/colorfa.png000066400000000000000000001540521476546756600171260ustar00rootroot00000000000000PNG  IHDR6XsRGBbKGD pHYs,r,rMvtIME3ktEXtCommentCreated with GIMPW IDATxݒH5YU[3++(tWWfFYˮ̚ꞄDfxBqc_j3W8ԯܗů@?/gKϗ?/n?w,Ŀ\}5CgZ&Gn }}jߟϧ9s/_p^>uKCsKזn. C_1W [w} ׀}~עkQwU ?%JS+mv_3 J v_)OU_Sw? \s"_ym)^ r+u'~H;X_/~&33K엾'88?}~)+~.>ϸ>W~- A:E"XR*I'i tcx^JIU/_(x=6jrο7,F]%"-‘0 DdΛ1(LL Ҵ/Rk+oKH>\rh .߬HKY Q EP"@*5Q9p< 3s8w 9B_C}gVڟGgDE6h bc*ሐB6 g63篠?q]y<'qtl,Vm0󠌃%Y1 BM嘟)匑WȆ.K_q-}}+_O) /2&;i7P Aw` J8qxb/5O+?gLL' ]f7Dd؀jh(g6x܄ Vjvqk`h30~ CK|4x|~gM!o6~1tS(pB$ۃPbUz܋!.mW:³+2!}<6–-خ*[`-K0b(JJ_ў?'%ʿM̸PTpBIh @ یLB=?4sY$lHdT;*"X#0 ы00ysqwWNd<3333p@JX*|li,c*NϬ&*wN7xI`Ojj="Ϡ&=2sYD-G"O4K 3idpPe| | X^ Lb$|8Ax23#8t=Ef` 3P}mC kD3*TDSy )Cr1>6QX , x3xk+REw娧8i$reǤDH;rfL*2&\4Оg3 @1iL9e'E Ft P ɮ17˄ V\,Cj"Rljм'aǜr"ޏ|7娟A|%_ϡ?uLSl+@W $Xe>7a2 :Q %fNA$&@`U2,bf7;A!,w{Ln-AtQ )U bIQpGω,fўO<t|I 2:T?scR_DͰ'*,^`*ypyNIjE1*j0f!Kq:r=CGBQ"R(pƩBE$r=9wTNJ9KZdQkX5LpX"8m{dH#4 STlN,rϠ5sKcr,wMQrX 5"M6=}h-OH+SH"- <2 1d ĖG"VQĵD͐yӅ.Q:t(r2&ekBU;zP1Q[[~[&U2oN\nd(DŔ* vMAPAqdhՈ1Pt LD&g1I|%U_@{q4CJARQNq܈)b,BX'HYWN#Y`R31nԀS]'F̢H=5Z%p1tF^(aP(! e@4#Yr~n蚁G9iE*9?&M 9XhʁW0UD`,ߤg&V7N HQ:jP&@-TRDZQ%Sq63VR``yI5 OmEp)X Y =a#31̏꼥WL.PjtZ0oIw šXhx;$͏pP03P?bp@׳iTDfc ޽S7 a -Y=I!'5۽/RGx@}0Y045e!Ty;Ӄ@[HebOqkQO5rOW݌K1gGoA# HQg)QNk+_#hYBh Dsz{pq|n=WU%UY헖JX3cIAvvTu "%NZdk*>obM~l즏4CVY)RhL-R A=_y@C?t}s>iw6N$7[%}a4m 4f xi7̋ZW.܎6o( /ܟ{}]uZآx! 
dipy-1.11.0/doc/_static/css/000077500000000000000000000000001476546756600155545ustar00rootroot00000000000000
dipy-1.11.0/doc/_static/css/common/000077500000000000000000000000001476546756600170445ustar00rootroot00000000000000
dipy-1.11.0/doc/_static/css/common/override.css000066400000000000000000000002341476546756600213740ustar00rootroot00000000000000/* This CSS file represents all the style changes to override the pydata-theme and sphinx. */
.navbar-brand.logo {
  padding: 1.1rem 1.1rem 1.1rem 0;
}
dipy-1.11.0/doc/_static/css/common/variables.css000066400000000000000000000005731476546756600215330ustar00rootroot00000000000000html[data-theme="light"] {
  --pst-color-primary: #FD8D25;
  --pst-color-secondary: #1B8BF4;
  --pst-color-link-hover: var(--pst-color-secondary);
  --pst-color-border: rgb(229,229,229);
}

html[data-theme="dark"] {
  --pst-color-primary: #FD8D25;
  --pst-color-secondary: #1B8BF4;
  --pst-color-link-hover: var(--pst-color-secondary);
  --pst-color-border: rgb(229,229,229);
}
dipy-1.11.0/doc/_static/css/dipy.css000066400000000000000000000002161476546756600172320ustar00rootroot00000000000000/* Index CSS file */
/* Do not add any css directly here.... */
@import url("./common/override.css");
@import url("./common/variables.css");
dipy-1.11.0/doc/_static/dipy-banner.png000066400000000000000000001134361476546756600177100ustar00rootroot00000000000000[binary PNG image data omitted]
`y"733333TY2ջ'1͵XܑXΈmfffffff6d0x,.qDR8Y끷P;sNBtI+olpܟh"`-S14N\t+fffff6fIZ X2阮:# kx3333333i)` `Rv\|Y{Us i2pNC`O',aReH +BG;P2ė۴SSVm2tyDJLqrD27"?u&ĈO6'_Ϸ៏}y>,ox5kC[_|l!=On"=k(EZ8Ie{ҍH p pYL(9Z`6$Y4:tsWOIz1p1z!"ToI :,")  Fu8pBo$F> lCkqX({ǢUL I]("v<"~Y`",.V"3݈j)$-0.[PQ[8@G33333i<)OW?L$UlM"UϘP2NTDlm[H݁e*^`R>;*i:tM#Co&9g$REmud1kޤB)dq\^ ֒^FQ zGA?5wJ]E~k伈{/+"{H86S # I5v713333$mЅ66wwFĕ4Ty#R2ˤß̬2Dj2,E_>hC`mmk;^ҶL?^L"V0O5RRȕOy#YZADe8ԪgJų GE7:mYɜ-vJGe-u?RQ @oy񣈸%] ЃI/n;F:;2ؚy xADN?$.E N/IJK5rhD񘙙Ymy;e A:aw[}sSH'6cɊ,GĵݎP(>s+dHR9twc+U+dlbh_+i}yK:8pZ^E7lS37E͘bp,&@*d:8#ftSEı'{WNj5q%WǀR+8w$l\|{ IHܚyӱYcy{G<89"~7:q"x8oS_|# W[8bF+0F* nR[^%J^52xIM*Jeش 2R T4M*>i*{RME׬Pk1QyeHDBۥy>J*W;nAz%+qM\*OwW$:,"fn$8!"qJN\13333Y&?1d;Uz||'Uײ ]E3n |N।D*N ӤLj/I\1#eJ]e7"+FIgIOIz=%Mm3qGxo-0nt.ie.jvjǹmYȓ$zu@~̺#![g;dke|*r xpd8cE3LhR{hơ\cO>UtAx\~W.>LebE`+RVٻ/'̕JoJ$(iͼ%,R+"T (DsRD iI'V.p 5"jn\`̜_vffffffO8`$i^dfffff<{}?dY| xp~Wp2&vxFsJt_ gT0T0OmY|VELU4Wdz-P4ad&+1T9oeSH}&@xxVΗt7)0 5I'g;q# 4">  N/@3333>'iI:KnXx%-Ø̬n;,d1  ~Lܼ-N^4Y'uy6%K+Q%{d*r?G.86yaac6y%78Ah"^v0/8 xPoS5X'GĕmKEG ; 酤H6%ye|TI/5 ݓōdn`K/Zewb2L N^DY<|َh%!J1`^Aq3}V+<1ܽowuiIVnWENW̬B&Jz )qi#:ض ۺ. &ijB23333.K]dSCXat2-Ӂ+E:o WW0ϛɴlԖZ\\l'] |x-Y2c:y%"Ou,%sc?&"]Sܹ33336IZ 8TurKF%hWIt',333339/e{R{_t`݀_߮/[yR ,8V` WəVZ>.JIk*"n*{Em;mwm\-fsf+R۠Zt=pyDk=̬u}+Yt5-F;-e \];^PrmUQ}Y lV`ܱdqJ|JnZn'\ Xtk}`Ά}#R?ʆp 3333IZxOV<;> <[$g49yyb쵌6t~Dѭ̺.i:{ڿ9GKb;[X9 QWHSYޡ5[I:yGq;33331KҺH&p9peDĈ}/vhs XrʔJ.a=R$fG|\D 6^FJ`e9`?I'DĬhfffffmY\@W8t YެgR>ye*)$tpTYv WMPi@E)J+~M)"f4_x`mLj 3s9oUSdN:Kz]""n:fffff&iKR5zg? 5"h _"+\MSty_Kt"hGj҉+fffff6&dq.+'iYw. ){+G=KE2oo08S#o&Jo96Z4K^90"K#ՈxD6"UHwmKʰp"p!p'\kf3F`2wνEE~n:V;/~p-:Y'IZNҮ>n7 W|%e2%]r " gWt?*33333`o g݋-s%gYL"BBE|*i+MFpxtI˒2y.[D^ɒm7~I&gJZ%"u iՑ+W@#s̬iVʷ@K I'X"%#W?z˜܀&sDD'*:RJ"Yղw^"`ժf%/bPnHz,%ӧd Y\YUXeOj솯EB #4>wv?{?oi"/|SUh'/wWJ: x;(E?X$qGҦĕ=&pX̬H0X Ċ"`I\g<=3sxosU[tRв#ehp_6dW`#I77H2ꩈxHүu"|Rƨ,*7泱ђSI*RA;'N}ԂKYZ6֓W dF> l @ؙ~i 49z#g7P'qeȚ,RYIZ ֦*GĜD3cJD\SoI+WSϽA|F҅Q.̬8L>PьGd泱lJogyB 6gxjW7(p0Yfޒ2G}[ݭ@:%"o~Xةȋ=O&ĕ [u*%irC+wbEs13"#8%"~^jT2xiy{^333333o%]lec`&dI8 IDATplYLUmDJitngpY<؉lI*"ft+N.;]CN{H xN eM)3_r!WNj!Ufffffy6\+ :amoffffff(f8."N"bVODĂ8]޲&4M Y33333xaEl)rY\SY$| ww&Žvm!pY&R1Bz\#:5#k/HZvVpC+|Z8̬R{_Eq~D zuND̍󁓀 +iE{o $icfffffVT4ccO*,-9Wȴr44Y 4Ld]$- t[ xk33 MƼ<Dģ*r7Jo5 ie``"Lqb,"ݵhvVD;lmiffff&*>EkEe""t2]vxѕ'i{`d5H7j5"~Tffffffb~ !ӸNQ&D`6^h<2xk3AJ(`uAՐiinJ&c.yEҲ+ KtWzaIgYΉ2:'AIBr!^ 1M'Vqfu[W6 xj;l)qei>= R4>]ңI#=+6{":ID] ̬j'SMʪWU0{˷yd}ٝ#n04X3˜F1{\ KD܈x[t}IVvɿ8Kt\pp"/V#>K:wkD."zT[U>\Fj/ Iwہg+i-q^`NTW2EK%8؝T"{J!yY%7333F,*irsL*IFx8^rT[IgFIe߇ͳ}EXiWZN1*éTj|L+Vѐ~d3oTf!c펠lq-FČn<Q3#/MV"UHy /UY8"Bu8KEn^RbEjEGrW+w_o}`*![MjeawX8G4nTBg"-I/&~b63"攈̬%y]'4~pUDԻ8_k CU\t U[aQ&/EN"=k\[ 8HiQ潌Y]FnS6!E2MV~_lSwF YB c UM<ڗPS=yM׍7?jZIBa C#"s'NNOfgwaVCV*"ik`o`BqsNjRJׁ@["b^r6@÷RE+0 Kf,_133*-UR,"ici,ؾ)63k?nټƐɤ233333,l'<–WLc }GI_ڭV2O7"*DV9yԕ(:,o[UzUEBIwНK&tiVAw i}E5 =PtZ/p=pMDZcR8XGNh2qaCT^Yq".}m4 -WDcƚy"7p~D/;k%\ƨ y{R33333l2-5d'ؒ2mHrd,~N#=ڜCd:,n. mR1B+eW5sQՊda"iJD<<@D<1l?F-~U݀$}("~^ D\I?k.7+"ux-&3;OD',"˷)5.,i `R?MJ7"̬xDo# PGijc6pbfffff,ʷo 2֟[ꙣHEZ5.@Dc6q{LO>N^tptaoP%Ķ#+#{Sk#/TIJWΉ+%"EC1 Z]s9Tsn}RbZ$}{Kz5KmfffuT*ݰƮ{#▪ֱI8I;HںױaUeqK2#I ZM\9x(K\,n!h42;?tYҩ5Z E\༼*QoK4Td(%J.Y&yޒkY puñQ$"XF\ҥQoT333333k3AiG.x<ژiȹ,. 
m_d9Y,i"K`B>@֟7I+{74".?tFD;HLjiLIR<epX~ry,'vb&ݍxpnDkCA]UI 0m G|=x`mE)P{*"-3$ x"fe\ycuHc`tpY]< 8y{3- ]ÙGJXѺ3'2xL?osW dq*^Fn!ǶNW74߷YEHZq)"х\N*#6ɸs ̷fǾDL?,80I[KZ.߆{hַ"≒S?V$(%i:pSD<]r 33poD \C%4rG0낍<" *k%M_G$mx:-Cj39jIZxx`ݮY4Q2(Un~M[ŏHm5 Y@,Fs啽I(9?"nS#m2f.p@m5pI+v^t~OD<\plS~X՜]df }"xFMo33{y=/[]x6DLI3G uI7 h"zI'GDDWW0uAţ=zeq+|idQơ2WFU I{4"NSGv@43+8/""iprDdľeapVbRh8d)Eniff})/7K-#w ݎɬne䕩GZ 9+íN"buTW2X~ \ loW%%3Mʷ|{l.?Z|8شyxΎ IO %ӺNi'^9>"BH5EtIk5OH:8`w,20OL9~O_N*_UI=NnqՆ,HҚٍ: @Ծk|FۀF<4uШ+9OK:6htV53333.+[yqxH;Xܜo+Y#Q|&Yu]K:i\!i7[k"E3O9 du;QS)0,p\u7 iUI~@J\tݒVtE(|[lD3CH ,{ql⋓WlL"lu#͸Z0%+1\w6=Y㱑muFmxncfffffcF͇4ucseW4\ଊ8'1I>Z`q{)b YbIK p;:[Tǁ#1`D,'#!Ri룁WQf< | x/ O <r6Eļ."~MbuɮfY̺NԮVp}1ZY+t=.[ٽͺYS+a6veC&+uUv=-i*$ FD !!WKNU$\ఈ?AI.G=GE4s_#i[פh[,IWؑ) v#Rmw٨;Gҕv⟙!IQ܈^v4Z'TFu!qE2lI̬V^ *1~A*n0<2Fg76UW$-Xп|D-uP!$tkHm{:E%_8J6~7OI󡈸$"ND!SDFjYdf#䕑.v~:_ک6#+8J]@&4ᘙSʿw+06e1T?W0G׌2M<[^DĿ˻Թw$(hohkG*lmgEĢ<ămL p 5̬G$mB;r8fGM?h;jL&wU%ժdfffٻ8K*003H(J0.*"꺻ouM׸]WQW](* &b  CN&O5soU_zz9}ުS瘙$`GP (*4T+ xI_oc#^SЩJ(2=bSYI"bp$A :'WJڹ̬s_nU%)i['1OD< ״fഈXTCHfffff6v,0i%bfp_E'$ Lkg{Q1TWv_'`$^!Fffc4UޒkuWVFG(UYK3XDLzaDߏ]r IDATlhXpOi8ެDy-;䕺Hz㗲zp}` yJtg Gr@DXդq))!W] |Kҏ%Qndf6+}|G9*+dIUZo;̬?̆^(8~ѯIG1c"bN!""Y,JE8 x/eQ[$/6VmVD7DDҠX?gGaf:gM{;CiY@)ye [I:'"u<=pw= >"^LD| ؟TiS[IzwQ6Md_fffɒXn k auN^1333333f <[V(fV,1H^Zx]&I8 bS"[T;~40;"Ψ;"JR3*p3%]ndfSVue*133W& ] \V f6q E p8ώnڰOzw"Oo""cwFu5\¬\_r6>h͊233+Z'Ϫ:3333333HC8/iCecfװ$b9DљOIpzeu1u0/pG7(齥e6q+gj+%eff̏GbfffffffU9X˱|Xl#ٶ8@!U"^S)u1Y%tj$%))uB|R;fWY^+ꊙp{s?זMh"iyrYD̩*V$mAJP]gVRU&[8 @D̋tqÁmʍl≈E@^;$PqHfffeY+gGafffffffhhK.G1 nWh2JhՀ3['V[ݱcz+"á{لyIWYIe:؟ш &6'T$`< !"TOIz pci+{~+s/."'"nc󁒶233?\Vyfffffff{ me@`$U(,x~}`E͒Š~N?لK%l~ yD̮/233彆w>ՁuOF*=Bzme㐴2$Jz0"),33333G M!݄mՕ ̬|/y3/g&p u'p2Cݱsݺ襈Xo>WoTfCD<#H~;̬ eY-.:+O^%錈p%33333`.ǞA#.33B8PCIGn"u1J'i:Kgw< |ا񝺨+#IWs9!lD|I?Eucff%*q,k"] Jz]>8qYκm[588̬4ݷ hV0+G~N%QA&@;>%"u< PR-;׀ʒWzp+z&hp_szDļc03 -:A1lbZnުcTY+׳!Iz,"!433333 MNVbtory=Es+",="T*IK:8Wf2 \CWEIV0ٳ;5^XpN^b,ff2yx9^gՕ[xcD$3333+Y^/mUbo$m lUW EY3974 33333VCSh[g<4WNa5pK^^bd`*Mһ[p|D, I!鿁/q>N.@;G;QDe|I,F23333[I֚9UcևvY7@QD~CzlIyl5wy'hUSE[N-%(m`p?J^G+e&5$i-v x&F_ih'%[ee?FD^o8';v8G;FXfJX̬)MԒlBeΦkU,g,13333^kh'R.$rNGb|/ fM@%ڻ1d$M~b_WRW$m(;8f:OZTmxEԽͬ$MNqYO윳xHHv3YRR#nq%+*8=}̆FCv(#+WX%^^JQ4K^LƮzKкM++|Xá*t#AʈN.b33kHK$껌̆r6¼Yz/x-jn4.\3333sZttp45 Üc9y [UI;Gbקc#GI[^йOʈJcwmcہ `.xf%i@V#ODdff=WueвEٰef ooc%%Nw F&v<Hc`3aڬ "[2mtBR y_  2I^kD -U$;Zx-uI֭$Հmr6~fEÒ~DJ'"0s⊙ &y76J:giIJWa?_6~D0$W~:NHzm`|'`ۂa^}ŀfyggI: `c$U13+W5 \X}8ffVIG[Fuf#"n133333kƬ؂t85TgH#~V124_6~iϳ<3I}A S{2k灩CtD|1`yn4-ۈ]Ѥ$1UW̬g"⮬LMv[}TffVŤAf ISMHG@ ue_i=:~,8EDEcZ+ۑшDSO^$,yEҧHvqIDU,t$u %BуHZƮo-2؆eIk.}_~f6p.eI>o*JI ,7Hz)zꌤ餞#gHY\"%,uf_\hhKChH4GCIw؇֠k갂W^ WB,`_ګ1'+>|!_E!?wX|>}On 9]|=QR;EcT-Y&"t+h/tUD̪#.33+WD< <\wȪ6=g&}cffffff\CT H )7=|ZbM' 18z}HwZ±t;bL^p4V.HRu=YTI"脤ЋwoT0%[#3Z+g" I?~ J:;b(zY.avn>333%=մi7bffffffuih ]_ܖ~HJ X%mZis4⡺WtW+(8N^KJca:SjRVd?G$m|8ásWv/!gc#%Y^ bcDDG9_y;G ?~3!IbgDăcffHU0s&'U?Pi@UHC|-^7-[VF\± L*8^IGAH`I:pK.v$ϯ"⨲bh)HA:I6Hs&?N 3ϽI"33349ڈ]ل*'MmH[W3 tN19䬭Iڏ,'wЈz0uՁHρ#75n ߏ,r2".q`2`Lj!iO߹?{I_Wm`6𪈸 kݚőwhOEĒy6C%'ٳ$/ߎ5dffV/j*&iK`󈸨Xlih79~6t)4⊺JC/"XkԲ.Ϩ!ݤ. 
OdQ.wnhE]k:lʭ$p0iu56R!^ب C#~[ePE{K;β&t<%IKg=_!UyW6j+aK:^Vu c$IK0"n`^{Iv5Ddff?5'HjjIF֠$`?ֹfGD7333aԐP^ˌasp>p6p.xxzpBAal<ƶ7+5 >_"]GAԏV%]n}Bv^*Ö!Itb)^D*}C%?UG}%[kߙ?w7]̹]`.ƚ $M* 78yjKجincV"IV*iaDVwL֝mV(ш褵Y7&;>p1pp>xzoW$=/RV~ÈJH 4YIe~NC%[kϓOG>~l7z왙ufV#bV(Wf_}8֏$mO*;)[uyPaYvfH?#%("Dq/F-7҈ŵFefVM^+PBIp2EnEYtw+7&.ۯcV;g&q͂Tff3tR[bff5 !텒jKeثidN'+̊؄XftfD,>,3333,,Y7tlJq޶2.6-VnP-,aşk;g:=o Uy>w-%k{I2:/aߴvӣ%ED1m Y LXáFěI}ΞLK|~x_D,igI$2HVo J&8I;wfffV&IGlz8_SZoe7ؕ+gGĽEeeZG~UӿF%dfffffffCꌱN3 &',>Hz!鮧< l< 7EĿTyc\K+?vv⊙%"nۜbff$"f_t^҆5c;q) ' XK:셒mkfffffff/W$ml[ NKwjDZpH"c]>~AwEĩ5?c,%%+EI6Xyq7$|x_D̯`.3333Ys6ȫ`mgǀ_GĜ Dē^yIY&'HZ 8t&=AYIȿk72$ ׀u8Ow_m-xoD<ܣu鮯^y8s`Vf,"t1MG"nIW{7o֨!$3333333"=I^p<)Ye^3:;:5]وxAo> lżӲ؍T,W Km̊&l#"[E]+`ڨ3XS$yEg7 4AV򡈸@Kͺ<"|;_b IDATX>"jc]I툶$% <<}\uW$m \ BR~AY1^鬈x̬zO;)gQG$"n.0m5qENޖ}:r K?wugso,iZK2bIvs|tĤyFsFꊑ1qe_G꺡1vuD\弓ww7pID˹,,.{Mҵc-J%=yHIOYv0&"bm"JD\\W`=^Ps8ZWdߑffff6=s6bffffffIY'%4L!%qLq޺#/]'Nrl"]ckd,'Ϝw3qV݌͜ {'`ӱH|qD|@{;v9`f6rr2k0XwtG*kU= dQGxN||`Ak "%؍bu9hffffikfW333333$mlBJ4'4?< |v9H *ɢE.PCJ,VWݚJ6ms<+\)2~݃|req%pw"+WoI3ҝG=3$mBʰڗB%U76+fցٲj,Yff6IΊwVq8'K ඈxh&ƀHdZ%[6}lg/ G؄!:UF"{&Q4h"y6 $ESqxiۿ^ bE\M1f֐݁}k#bN I^;jy^Au`fglb13'%6t ]$:";DvOW!i4`N;e*˱E:MHe%V꼰?!5&qy9w#"DVn]>yEG6rG"ਈxq̀}X93TƉ7̬odUW@DRu>HB}D^ v@sf1ş7rK:b6a"yLv'wJV_(ޯ w ڬt,OTxN heu׽')L/|df})wޞffV@D+T;0"1IGs}jd˶6jsXwe|:"iM`Xl8IFJRY +f &TY}ls#anǛ{LT /^e=KwI'񾒎Sfy>u\@ zR?VN[a,ޙ-% p~DRZ̆NޛGaff+JxRsEē^M~s7JMD[mthڿdppV RIz&"Zz! /@R2M~xfn6331V O<""nY1{(FD|cJ^,7IU~-e.^|kޒ-U< 5rt338@ҙ9LtpEͭN^)}EZ]$-^&in̬S|XDtU $"JZ@I(u9ndŤ< IXº7f7Yt]=̬_`f٤c*ie#K:^a#e"TUّʠ.˖+}8bYݚ l3&Ť2]YQ&/% =j6ւՁr Ց@Ff6mHk?q% 33+DGE{ao_{jDfIEĜJ0MU Yw]Z xx^i% Ȼv݂S'M0W-c1Y^gn5IQ^aye@1-">}/agc?|HN$1wKY+ff6t"I3݁k+Y) H *#*L6!67jq?e33a:qgӵS|zäʄNZ133HZ xŊY?"I}F-ۓjw1p(XR2Q'EĵMڊToIw=lO6bE:13VFēubIڌtQ<ǽUw8uqqY;I;lZ%ubGR:qUDy_L!`&)aވXlT^y-N\)#,icl9XDZ)J'\R1е%̲̿|1'̬dͥ!ݙmff%WJXْgyߎ,GyL$-VfY F%LVЪM$`liV)NDY"jIwEܺc13ec'*5>+K]v9'hffffUqJg&JۀEug> l~ˀ6o/~*(ҧ* i0{fKpʄ#';3[W̬"'\,r333|N^qY0Q݁ Zw̠`6v-iL7333Bi𫈘WqHfffff&IVf4m'"~Q}TffffkSqJ+"h/q=y+I~I{>&Ɉ67333޺X tԺ_GzB23333HZh{ƿ^i̬*~A*mX|#".;A%"mzzD 201"ffff#V\!"ިlDhSRđ@$"ڳ̬ @&'G캃dIlyH?#ʓV1̬$l7+]df'' IkV`2`&pMD߃̬F#mggD-u3D>~_$79"N&6=8‰+ffff#";33+Ivs :>W?{+焕z J:xgDoPD qv{8,"m7n3nIKl"48^`'I"5effI=}}7E#Ueffff{y\-WFă4L{m}Ez NJ g}!q[ga'+`lY )@aUNhyM0"!,33PD=7K+j@M,@>΋sEUqo=NLJWtgu1鎬jGĢ}xUwW l3H`Fj̬6Yba}wjf6$ }:9"/*3333Hʉ'] |8=";[izo#qp &Јxx̺!i?EMv 3333hG<"0$33뒤"⡺c1333M>(׋HmffKT$!i3 %,Z?wc]$~+%N\1ܞn:qՁA꒦oHm%#,z@zDğbffff#4rӶ+= g,>>-Ž59$Fb[U<#Y$޹u =IZzl)k玱my̆ɤW;.+k ̆Uo8Ck)p)RK|;hcM<1z@`41a-78#23~#iU(N`87uOsH +ےnmnD\}Tffff6F'|pt'I' HةC"5Ubی۝lPIԪsj ̬IZ xKCnngzMHSWDo%x{SIec3ay2 ˀ&ʫ<ЍY͉+fyu8־V]\f+Do΃8II^%:,AT$Y.rْcVlUc%+U)WRXgRJWKsX % oD"Ww-4Uenx5_%IkWեYO&>_}j1O)>ڦ@cזkKDl :..G&WQ"I둤xl2lן$I+" R+OdZ $IҌ2ax "> |o^fDVk~jߧhזP "vYd'K4ZnHR"bQ`I;( S${CY|d$I?x93_I$LM+'S!"ohtkǰ:Hll2soE5V|EHnqpc:/]tgזcꮒkG$I$PD"F&$KD,^O2yZ`I$iP^yuO,ZEݵLL{vltm<-I$I$CD|co;$"b.y-X <#=+L$Iꃎ+jONFQ]Jڵ)5#I$IZ)g#e]JLW5X=J$I꽎+(D8+mgB^,oV>vVO$IwqfI:̚`A\ӿ$4q&+c> , UI$I׵ʫX| 'Ļ }fER\}&~V7̃}8$MDuI4ISn-ti%IԱXXdzן$4JSJ$iu=r#V7E-cEhwyW$I xrSey8˧$MGx'oFoS$"VSPqfqY$M1x3eN"5yKUIj)ioaW2sI$ICbIii𵺋ǭW$MWDDl5ӁFijK}+Pthv-*?%I4}T75[( % jQB+s@yx֒$IyE)"(7Yf7.B24xKqߏ$/"cwOQBm\$IT pNG'(ػ$Id <~Xns[X$D!,`hR4sd(V$I$< )~H$IVD< w_7O3."I=)7_O uGf>ߪ$-"M;Rw=$ ]-C$i'!Ѝ#z$I:g ~ˇ^ Kumow2q[\$\5YpD,2z} XUWmju$ܛ#l2.F2sD<\YueU&I$ ;hE[:B{K:1x#G,o8 AZD21\WsI("K2skݵHUǓyκ$/"NOtUf$I$i(^(7Z'֗j8^ \ J$3w]DD+ˀK]z$ISl33"IDΣK$IexE/"> Lyi,3ƾ$I=s4r˜gf>$IR3q"enbORB,{V$I$I53sg'JS<ս3sd5JLSuX{E6U緕նuys$IsEMuNj_5ox537]$I^P~!3;" 8Sl^ۭQf x74(R"bJw;|$IOuCgdJ_ DD,,u5ؘ]oU$I1(_F̝q (O;5sxgfy4+α!v.."V+){2s{/j$I3S 
})LPq:J9c>vfnU$IZaxEC#"+Nf~ggBvJI~xtcu\DeUOk2sWo$ITM\dAz-+"~ҭpgMJ$IT^Ј?ia<Ұ{oql(OdjVu]- AƲn,"\'X :@ >lJ/%I % U[c>NxLK(fA6?zQ$I^PR(53og̵].Qf+(cZ5'1;8 3ȇ)I8x&2J) ԓ$I$u y- 23WOp2ٮ'I3TDXZƲ&3o}URoTKgKY q㮝$&\F $''3p7j\ ]P$"b)p(3]Ŕʓ~$IW4" `U C.)I_DuID\Pp=lyu*"jsX$IR"0Z`Ufn$@ 'gu#I$izRfۀwM2<}Uf=9X^moF"b lu'"<ċSlmW$#U _/.6V/Eĉ%@T?=I$I켢л2=.GC|5]RY(Y6GD,V?_~vph蟧H$uSD r] c!"p4rGW$I+f7OO2mq_%iEp?PQDbJdIv_RO[;"ISU˚ t=8SlG}Uvp.N$Iҫ hhe摈"6$IꝤ,r*U)rr~p@YK:<-ycH$)J` XS]>Q3D,XI.6gG";`J$IAj՗荔e/*IRD|Jѭ /=#"g; xl2n6w?_jܒ$I*"N\C],®k3{J."><Pz:$I$ $;heΈI.~ %I<@yBt=@D,Φ~:pF:\ຆG"b%ȲN`Gf\BD,s5}4ӸDăt0~^2?$i] p%r1 umhkk[wxXy&3(I$I+zq {_$.b9hٕtp끫AZ^ h1P22sgx_e~53wyK6rJevJWvN+I4t.yWRt(4hqEiG&4x63GV$Id \73M/}Udhamyϣ,gt*v[ᕪJ86RB4 @[Je/TvUE$5UGG#J2J=WLfۀe[(zۢ$If9j[z[fq3T(I$i3!33"S-p%ICwG0KO'W ?Xm^s4 "I""Sk,˔ g>:TpxLpq$I h&&˙y_HOf7^YDȱhMl7$Ir Z{b]ǻ]԰s}qDiGbI$Iu3#3IT5]mw鰷G8A O$IT#b %(1p1qډyB`73wy\0"I$VW$ITf>Qw $I86S:IYBhʹ^,I7"b*!JХqeyY` ~jmzG4"vOI$Iu2"I$I$ke&U1 88 8Nc%0Tqk8DljsnIfL}%]pЎv`I$Iu3"I$I$g2Re FYgok;BRNFYD p33G:<$I$I$I$I}U5Vmkxix%WFUXH$II$I$Ibei,Q/^9H ̗:_$IfxE$I$IPF_GlJeᘟ yeNlp4r8@ zKH$IDբS$I$I$I$IYu I$I$I$I$I$I$I$IjcxE$I$I$I$I1"I$I$I$I^$I$I$I$IRm H$I$I$I$6HIENDB`dipy-1.11.0/doc/_static/dipy-ws-header.png000066400000000000000000006212411476546756600203220ustar00rootroot00000000000000PNG  IHDRQsBITOtEXtSoftwaregnome-screenshot> IDATxחdIr"t,Z!?|YY3;Ŵ.)B^Œ]]] 1`T"<ƽ~?737X?nO:\vZË?({N ?y\YCq6Jܨ5OIS*}VGzC9!0fKܩIK?Gni Eo*c[f-h3̀bOWB)596fGX)il|?>-b;~oD_N\zy3F#:2}>n)dxS]8 ԍ5}sϜ Wa > hmnĊGk68jc %BH!8x{J#ϰDO2Ni\sP" /wҳq%zz]_]m|6|s3?~9Y"Xa|<+B%ч #8Pah+h!VlL%O+kۘ;ke;/B+\rQ,˗#F Ǟfm0Z;$nMY˿* 1}1[XŶF?ݡ}k?P\0`~S*0+<J^nۛoCҐBxs}YﬤLCKSFhT Shng,iux* ^U!۞ )k WK2҇l̓^]U D)JYlwE:Qe;S~|AnrKJ9tzXxlE5TO Ra-E 8^֬zR=De/,mogR/V<$n!D7iӢMK_}2lfe.:nt99ƅۧ͵~$o]BtBhޘ9;֪BJΛ`yO,Bȶ2Fr"8WprRlٴ>OȚksP`+ҭN ٱӳ-5EZo=+ɁA*R# |. !K;RvwOO^Lj9c -)OޫO'<v6mBh8`~7olM!`*Q< TezƎg_sj' =il W2bY#h !IUM⩫ݫ ]CP~_̋.+LOP7"Ьdd7J5zMHV,…<%v&iO?ZXcV۹xmTw7hk#w37*cv}u8~#Jv2M4X%9'(FbTAU1CF2(V0YI;ЅN~mKZ~om  bo~ŗ]u<nb57,BDž'6#M x0Kܖ+~ qwAu<f)HO}[xz dψy%E ,7 uJ-,BIF.C5x&R !%?=PMh{D@*TSOgW_?>_9yJf+/ n)@4Y\bj HDD)-% SdiȴlI䮉VH1e[/k wnRd/rCvy`$b`cZfczV˻_{wZ J"Y8 Ly)!%5)#+[>B8jSZ6ݾsÐ~WOB;B66igͣ&U16/f?9X5?0EQMCI[.F_O^Nqwv2) c?9+U_*O/K]e)U30=k[C.tZr^(?XkX?kKsT3H4YxS ڌf(٩X+I^z; fD&4J (SC a|L˔q?՜Ivu(fTmi&qf!~ݻXnx"6-rY9LR,Oo?!z$6M8uQXRyqKaf"m:; 'yӨ4tڶb3<,+x_no&Z=^,3əJmH!-cQN4 Z qEeUIbȊ( 'J>DZf$'{-@ 9w:0D υ4I"myM bd\Ԉi͘Va!D㾔:J@0AO:vUi<>/A+ՕIiK<!gOl^i};šzAv)cf6E߫!|{mTAi$|P+/uJF+ Z6ePRmIp31髾 DÝFGe&E-is2>׆: ZZ&,M(~{"qO hY:Z Ht~'r0mGVm4f/BҁwIeC\e緉L>KV \PNV:2z"FK:Ϥ2+,6%Iͤ x#E*eɢҋbZf XMᖖÆ4 {}GvZ$ ґ5;`'+r]-XrREe@f D`o'fѭz.t™`5_ĚJ[ifaYaHq|D?㏪; N4 \nMhח=;98v]s n(jF7QI*ÝˣOK͜YmLT:O;K-H G28N~+U0-80Lb2vME#K+KyY|W\IQ\Z4AY%Ir/+cA*r]DI 1Xqxh2wdh4y}AeAdG(׷)5Deó462H)rm$dݛT%gXHC ?{omlxxE+YfWYw]K )aɛ}s=\"t< _6wC IR )d".W%6 JF$sЀ3&2&ܞ8qU$T\"ΚӅ2"Tų(o$2}i2-vo~WT[yKxѬ"ĈZFٹ5lgm4( Ǻ]jZ|NԔ:VVE~#F5⥔Y=ݬZ2,buXϊj/\,Y&q̯Tq}⧞ () B-u̸<e CͫTiLZ ;fKZ6 Dڥ(iS뫷:SH7Ն >JݾDqdz)4lꭾw#mvu_IW~ 6z국,w\Own˔nyYN,2.l+\b^+~d\Sd< ekX5qAE9kOn>ǵ[izF3ĸHh&WYռB4E]R(FСD?zٻp2 g{ j&3' 3!Y%NJhJȸrj()촍OW->)%(rý4;\,) %X4K,WxPo\j$:V̴v'ycwGWpvNֆnֶOTՑB@@3 ciU+ZFU *tV-s3ԲAMtbQT, K_N{YU&B,QSXs IDAT Sy)ƭ_ VF)j0ؠ&G˭IPVzhD3ДHyTVʒbE$\:2GĆ IBE%y#SD*c:__jFAχ}oak%ڊ'g̟W(R7Rd6ȳ8}>u;uFǴV:xlF3-Ѿmn~޵n%}lSm옭_Rj~ɔ fSff& @#^O" $Y_ml7YRvv6_p􊼚hQg;VEѳog;N|Cn(Kxo7gqɹ R|_ A4O*Jb'ƴr' -o8jU='qĬ9@)>Y=k5xItˮDŰ얓*&vϨuܢt)﮴EqhM^Wj-lC*M0jcDgt:cWYazJ9>l{}dLYj7owoM!6?i8,Ku[P)N]QsF׭gu4Ѣovwdo}P0Tdj銮KOHuKÊJzeuvW{Q+V'M [w:[iw'A"ETbJ 2LMf{ʦZ%DWc1} OY0'EcP9im 4ljߟp9--z)# ;iIEsaXˢ l:T1t;yzh\oF* . >masF&x]Hr&K , PCm0Mq5Pu~-YV$\̄ik-5mO NDґ-Ox5xz()r#cZȶIz&KU32W8 (srS^m`s'B! 
1%zRZ';x]= yMlxZטTR@$(1^/yĪ,kErXjlt: jt'WۢpwajN7_`߽6%zjr*",x`կEreg7;7~Es'EY(S5$׏҇^{zS!LpϷ)X#v5$fQa$cSRʣy͂W D{͊&6Za@p_gQ

xضVZFF4s6hwVZ V` 0.@l4'3\PRcUP; :̨\X@ )H [FZa{-\[YӋ܃բAh#t^8n5um7ڼYb 7&7mteĢg[nNh*L΋z)%@|e  E+lj@XF6\Wji%b~ÕeHCaưJԠ2E^cIK߳?h5JOhc#zq˭&wԛ|kT @AbWP!kf {Uy ̇hd풚RVͳod>O4\Ӄ0R5ň+V ~ ͆ 4#[fMVx1Mbr?im~CJ;kYfcvƦTkY"K},Q89#D-NYVp*@Jb+2e }|LZriJY۰<^!11V8Ύ7I'Q-aO-RI| $6z;= 74SHM޽5Hw6|t>Xa2tj/FxhiS.E2U-*0骞Rp31IXXE7z8(3iįiᶀiD-ɲO1BqHg8HwTDj:YӜLκ][:U1h2#( .P^@98L{9dxvSټ#ޣA=wU*տkIoH'țk;j I]8§S>[ї ;"Z3RtŚMѭVX" N rZ5dDkgP;1H*DjnIMX7ApT (Gc8,cUOem''F6 ԕIU8E @KqcsI3*ˊTF3j^8g\Nwꖏ+3{] OB/ _=XVٸ!9; MSY\ed~ykHQXDލ)l, \dA#x^{FsZni~گ= KD$k hpJn+%~+3ݟ-5 -ӍJoD ^>I:ړ+͵Ϫ4 lcQ_X~<ۤ9~]~GykL!;đQU:Kh4ᗁW͕B 2>_ޫeI2 Nps^%ZĠ@=WZx&Y_߶&!رυnUZmDћznu*v'(*B|Sq`Ƴ1=XISw}sxPLuFu^~'wT>>\i h:˥\{2eL7>%x`xpfķ5gt2 txX;ŐC5^i--C@F4`RkݠH/[:V[IQјot;YנVCx*GY 3+li?mqe@tXڦVs)@F~J"rL{SϟWw6-Yr^imUyf6艢qUGJE*ֺ(;EYŘ@@-sђfETEnLdTo %nAco>rXfڸI."nlo'|Nn~$Kv OkTk)TGUgs,OQeZq%Pt[o\$SoINq+|E 5n3v< 7v{U)̝;{z!%R" (~PSb(BD~TKB }lSo=%$3w@_y;sڽs9yS3FB!E nu3p:+3,Ԏ4!cVGu#J!ʊ~(f aA/B5QWDIIA[wi@MUQt`@!^WIU夿1 HNrevʍyPbB|gڝNuXC BIYyX Y(հśv);2fW.;IH{`g`0ݯPZ?:3`YNv˘27=C}\32+K=Km V܁XkrgwR@dKsI/iRG mqՕA)% d1`L@A'aVcQaJ!`(XkI)]GGP>E3]STWVC -i )M@iޙ"zUe""\`FQc}Xѷ'R`}fy)]nx)Tʄa<&D1!8`?ƌx:X 0 G\J$&][aE: @[G k HQt)#>\)2SXJM7zR)<,Pb\X*8G9e < & Rm%9C8VqiXbb,d"J5.F2pLpsmpęj/ŭſ:,KCyq Eu YS`ϵ3q*g6B,g>}<?zܯJN@v[jX݅M)0E^ٚ,r-bC%(Cҷb$a5$V\Z0% \jP5@cR6p"CX‘10 &)P)<&Řu#8RKu 14)TӖ8hȸDGU.P @BzP)AXJ\HgDtxT#"H۔XHk5d3KwR:]d[eQEb}NETc !fxGV5RnT#< $bs(p]x"4jeeD2m#cܬ8P[65Ns:Uc@U,)B hT(`QUAWqFg(7B% h# dPXID!@pD!$"qMcOŒ`)p!="q% bşfz|S4,eev9i;R˚(/PI{=CL:Ζ&ſ-qkhr&>q %J-sl7Xd0d.wۇ:T>g7{v+]*W)qviZ" )B0cDs/8F:*I KVC+ pB2 cK % ArGHPDb,ѧGDI4 a$p+Eg<)5@Z!"ؑ$cCų \ U!Di߷n2J^'_ެ[T#sfdnH:EB[ 9]ɸ(ּbv#wyi#W>ӊ !OYXd1H@}) rMa A(i7,ӕfVĤɴa !Gᔕ#rl  P0X2*)!bB4!P9 u(Ӗ6: ρYT'k#V~0ݬTˁ zx Ѽl4k3s/="rFv6'SS[9'Q=U,eUH+,(RQI@4%Ɣ90@XU+S@*mV})Ÿ2Ӱ$LcN ƨapS #j]L@.QQF%E#Pa$=m!LA%HU!lKۑ,Fp"iH.yb`?EdɻH2IKչ-gSKs6p\q^Lw02D2'ܺpdpptQA$kb $pYDU2(52.a#d( caHRB*NLM 4Je c mV$L9'ge0W7X򯗎 u8džnF]JcZ IDATMFtL`UP+-"*D1K*GERHlzB0UA(B"gK&zE6R!5$9wyXX3k&Y@܃)(AL8F".T*T*c.0Mc+w* #B)[c: ID]S;Nqwzoc, cCTfMqؗ6L^ u} Qҏ5*ڒY{! 
)T(@Gߢ I6wg]z."SUNEJG%% `RQ`Z?Tn9GŊN)H)DT}| L9U >r9!ORj"V\]!R{p*qHaHzI.M`9T@R2}@ٰFTMa!]a@rCV&@@ )!7k R)a`cc^_K[aoFq@C gJ+ν-n۞VՂ h|",aRcU06<)SVEp2]5֢iFES~e89λ0.tM|#e8˽RYf}YY$hfʳ UDŽʰ@w "ˆ 4AV^w9Z )< ~yA.A+0) UA3d$g$He° q<xc^6ߐ#%@Wt5V}>|%%%_f8PS,}OCۯڽ{75<;J-CTBPݧPu:Vmqv !ղg޿"\R-o<h8 ۶;ZZZ֭[z5kl0JԠQc:ԩS7l؀-rM7[TW33澾~oSkhhD7pGLrnۧ38o&L7f:1ak_#""R )eٞ۷o۶^zh?G~b3*SY1w\]rC=tȗ m۲m۶oVPmÊ~$=Db`OQèÓBcoQ+Xպ]=SXP1=ua02Yq:H=s̳>SNY`A<)un_?8Τ:!51 ؐ4]ǥJ+]&yN}?|̃y5W\"2yL{9^i Ds'=[8ҟn׾Vrƕ!7mw:n.ST AFzu(kg@5uA0*|^W )8cs1gmi#IYH'sqIl1`JbAɸ25\TEz:8A_QMFKȔ>LU6n(4{n9j4wn#~Rd2yw/Yd&X:Bg-{w}> >0UMhp8L&}r-?=!:VN0~{.N;ZRFLԛjv>R~_>3g%\+;wi6=k0b*?*&?(//kxTя```[o]a\쥑5QʦyuxBaxk?]0WnW+:/:XZb(V\2N_tvv˗4ė5.74@N;j5 %艹pzٌj51_@O €(Ɠ+BStJaXK0BߊZ4n:TfM -^<1rJ60 d93FΜY:!1~BmEu3ZYF*[*"c!3n$;FСKZc&  #YכLrBI'}o˖-O=g=6`m!m33a+[s&^y/O?`^ Vݶ[7 >VR>sg}>wozO|"rz,P/,?cƌ]鵙< N>d".MUzr\ 4aͺ Ym$b!_*+b7MMc~_ݻBihDVXO[nGtJI xG!?<Gq-p{O<Ȼ|hL4|ǫ8jkk͛7_ve>8{^:S Nq5t'jhv{0yT**G6<ٟ_ㅭw+#jsIHܕ.drqz݅.iaZ/k~tH^^x" TK2O{o?7lvܹ \g6_xMm|}5rbB:#ܔdO~K9-(%1ȁu/n[  UUUsO *tC4 洅,&ͱ9#t\Rס)I18JL 4t^ӄ鉒9S's" TTLUUke%8$2%(SqejMIZWW!-Mk6ƫ⸒ $TVx9iU" D(cME Z(NÕ&1 B&((ͅjheC9A;EeӍZI|2c[]`.)H N^Ґ́}b-Foz_׍7 "%rYғD2`0#3e)SN9_wK)A ;6: }{gUU_ץǻL;G>lww$t:tFMC%n n_gӧQ`rSBͯ;%K2!76et3K6iӦ<dzOnu(uӞ)@Y]]׿w]}c0뮻njg3g|'|A;bi vؓq#XA_Wy L?!3XnݢETr>lܸK/i3y_ߦeo4ҊnVZϴ~!\pon|K/_r0U -9LXڌYm>[}9S̋-Zzu]]]26 e^YѲgXIwOLrX0 3?SSeB?+X_v?=.<& Ԧ*x:1w-پ7;K Y= 5ב: ˡ5mP7QJБAV&FS E  5LT1հ%1T-@4 kp8$ D0`#N0 ťAH[xYz27{ Bj*#Zʅ>#8%Zޝ zL Z̟NR 0U`aw:uxe-sHA!مo %]xoiYUSa&Iȇҟ7#R`:/u Q5F REe Ku=kM;=,o,o)rBGPjCX߇ B#<lɄ }ى'fD.`b޼yzϦ{Eyc۳_m5k=*`azC~u'ƍ{g<iUj&'+jjf;妛nw3ַ>!P}J uzbb@#lSyϏ~S~c}kA ّ /HeeO?04"Y^FN\D<*FTN׾ nb*&<9 A dve9Vg,YM᪈׵(-g>z&x-j5n;Js8RB J̓돧R1e:)Raudbk1p ٴLg߂J^YcuwM-@1@e. iL=893!q2G'!Adp\Q؆ƈG T$˄LV;ƀƇTu V4Hx)))yU|p$PaN;_57麾.SL9s !zN˔뗿%<1\01W IDATXp\d~SiƦ5o) eR 6@K8o㻚r1Z!)%M6-n#hyOH@"xF98LڰbzHԻeY~oo#gL=q$>~cɓ_~3gBc=67E֨V1V\Ns",k2#_җF=jxQs$6kٚ 6 WWUU{y_>7p Kn DOxp8M( gaEd@%)ٻc_B%:HfQ)D Wŕa@Qi{W27ivnv6[ U=$\Ժե ABe۳trwgUlu!Ǡ7Tۺ[Z|cW;;2V{6aZvOByWdd:#9i:yXbu=43Xߠ 2`cPt :(h* Mp41}Ku9>?Y(_u7}ܕ(G+?z.:#KŶ> V~greYA<)L۹vhoj \[׮Y~KPUTE{'!ˆX0~$? g`[oGÂL_2\p pXdɟ'ǞGM %ذ!nx<ϟW2m4UUU?x}6AɃ(V:s27,{tz@֕ШNlllM?^G < ogFŋ`9`{{k 6۞n&BaG%:!'(,='|wy%\2b{`xVnF7~Gz2?7pQt1a<}x@ Oyj]OzRp{vex' e3!aŠXM͢E.#?Xx X''>{m]ҥpN5$RCQFnL18c \#%:Vkyx8F3, ] 9BJ5Խgكwݖ4^ۮ]ɬڷ |\ 뎵;083'mAޛџ1nMt[26{*` /$gd;}"7)Dq/L#1!TW(}@PQS5Ճ;%oŠs/8JWϘj*c+kDU1,־o;\BGP|4$jvCWR/Kj\IGмkr-)GB^z7xܹsc#KW\q׿#]O"aٲeַ|/})rޓ{K.YhѲeˮ暧~g]0 k_]ΪU͛t=lVqqћG7.R=iJQ Ce$1uTʖuً٢ ;7TTH 6!Ȅ(r4Ht,Q)?{N>kɒ%{7߼iF/ls`@6:63GO^mMo;q a'+K`L[Z;x[ل)eժPBF;I6P KpT zTww1 3S%lЗd“4,%HSR ƌIsċc˒}Y-܏[93mڴM6U`gkuommo1+Xٖ^v8ZĴ|Om~K,Yr 'PUU4ܾ}>9C<ɼ?s=_.뮻ŗ^+/ٿYbBձ?UW?si%%%SL;wnQ?O|c}V cpaaBgg'!dɗ]vʕ+}Z~|թr~O#3lm/ "!!鲁a'Zy BzQyN '䜽` G׿K. 
ohhXlHάsБDv1Qsh\!@G?#c"%Gx."|qϼM17PKjj'M1nCo%vEy:ŵL# aؖ{ZwgNT,(ip9)CX8SN[pwo?B݄)3ys,do_\wuA} _c y 7ַxDyӆl3qDUUzi[W)/~cÚ5kx^{mݩTs f̘qi-_ܧ\5wyeˤ6?c=9@tΜ9/r^x4$3J_@y睗J \p\wu&{cm槑 &L:u>;}wRZZzghh{niYi&LXx-tsWOc]pԧJ8eqq۹s%K圯_n͚5kdrQ* f-=uα r5Qxux=D P 4\n/<՗OGrd6V)c=̙S>8y5ϫ[1vb,Yﯞuv5L x'}^o%L&uDz竫s]O#;waz{PpQl͚5~\vv>oٶmBZ&f6-?op@SSӺu|l֭^8{\<~t]7O }oA١N;LKK˕W^y!tgKEk׮#xW:::|0Le1MnVC?~H-+3gUNO'^Yy)<L>?cW0) OĚ4667sgyS.=+Nxj_k27^0޼06qqtڂiZՆ ]#3"Q#O+WljT#biغS[ D^}noMqsV:wr… >wW'hCHm4gLbn+>-!aS{elձimPTFWUɲn_<4t ШL]0#ɰ6Hӝ_QύO$#>1MsZ_Qfs>l`r_mLi ///~%ݻN+ =_>}zQM7t޽3SxĉGoӧZ*J)OZ@sstHGݻwpB#@kki#,|]קwBf?477ZʧC='Vy0nܸ#; 7s+.k?Of#Un3<3>ncSy/ɓ'y,/Ϸ7ntl_Z m*AzRϡ=i7{!vDU* Gē&dbL[") T0⾇N` LmDk_{<kLzåxHƇaLUN;-0JJ<bUJQsא'\ۊg+cF6MDz*pUQ=1EcACVT?VV]MT$HFN=883W(kSO7 45/*GX˗/ϟZbTЇ>_:q~VWWNu]}f\n]>)oqsU?я,|嗿!^pҟɓ;C99~>ϸ^yLxy͟+dOޛUoU$}$,!lUDT \* r^6YAI +I&=YT}LL&3}'Wt:y}c,9Kjb8_W ۶{=p!Kp +HBmL1S/~Q_;Dwu`AWCiZ|G>)SL=Y L,2|RS6QEBo"!L*+LcKl ֞gb)8k.D'Uam":6)Rd-rU0]Q d㋌KU!'sUiy JV>U輇UΌE*q@SpT.SٱTF?䓞ZCkߒ05BӔtN!6qd$seXvcCt:- ׿>"V˖-UjR|a1ᬊGM ǂ?l0VY,G}80б;[A p?jdL0|PCdAU{W_i9</|@U,wE'ʨK.$ QJ?bO!?Mw|>ǧ;믿^PO|%~9=&i+ JUaȬN)M$ 7cceB2CcA ,SyGro6B*Jʺc.u-iƘ`j鰹UYB6 9#c( I$9NtJ.PTeGd$Lixo+y*Uhl2sEw`DSN]zT*%1_Z]Xjh^#c* sIsv?C^mX]V<Zr)aa,y75Ln޼YSPkوqQtRNRz䩧ڵkHKHe F1o#ԓ9oJXii ` j  pj#jb=dwҤR(2jjq"@~] iҍ)EDP\PL VVYQllN,R r#cDrۈP@#+%SHa(g!ӂ7DWOS.,YSaYjٹ꫕ L];xnχA3 AM=չVyfy/oޚ.]3|_t~6ce'K)HT)x7=`ܹR{SݭV/ce$$+HPZo2ΊFwYhD<@KSLOz꩞k/?WqMN${/_+//7yXf XBy/Q$ـ3@_ nK.o~' Os=P]kHjt~mUVVz}u:iB@;>oUty,fZkrovyIeJɆ)ZT3``=xG!IA3.V(lɄ saorݺu"}7(~Fsɻ )IA9Hm"wq/pQL "E.w8_{XQQ)!\5hw36R({;ƋpB,#'JbxFgz AY-k~O0c1rYCyZwB<3Ò|nhPܚnM^vu}w\a}@[Z05xoۺrvc< RQAZdijEyePdZW'?$ϙ: M{oP|3XwUZq8{~Էt/n[8LdGA1Tc S=T% '7Y(gPeE"5nYy^ `.+ÕosO,hfJ$w(L=`8Gݻ\ʶMl#ܓDb_>Wd#P sy'E\P((N 9aڙ y9yQy \L:Ӳysgu8lY/|u L BΈKnΔUM>>bPFkƘZhZUI#0M"a >/㰤'N$VCg{( 龆ɓCyZT_;XuV x@`T¦'fV&GJB)ZM9)@@h<OXT?:h BS[zV|ޒ{>KJc8)hjڵZbk Ӄ[7itMdX!zgg`J> "2L(U?:$SpAxʥlv=N6~1W:Ñom".\nQ~q予֝BVԆaa0عsy_ωhD RAUW]/~QeggҥKGLOFλBCJ飏>:Abj˖-η5-a|J qP02ܰm6* [L@o}/9Y`uyKnΦ]@5gܾ<ߕ/c˦3}БE(Lle^ugs"zH>;5ՠ[5K{|l^+EޔmINQ*&BYg5}’sr)9n\hֿ59Z`@(HdEGe+ask"}>)>9쳇.؞ 1pXb}TGѕjCSD-\&OX[2ƓY2c6U> AAM\U"#DUJ4 ru PʕPIRi N%PWgϓ:梚 RW&GCReH27Mk'S|@R!Z*$r]R5'!TW/UURu UPL`PJ`Jה栏l"]ĭϺ:UjZL 覑,pbܐTYe@w~6 X XoƑG`DPD2@%i3 | )?<.xs_̉͢=[ &׬Y׿uwٞ-b$@yN1;ԗI7|31ga$5ӰW6 vٝׯݰB`>_TZN@FOd5. $mN$5o֡y zlvh)4M C K,6%i__eYCt-|(++n:/^x_|G yO!M2uy3BzAnxsiZ͢ l@h>vZͿӭg mי#>oVjY2]apF+DpWmTZ+cu^BUW^="Zx}=u} $޾AIYQG@yU{Xjײ[no^#;L YN> 2[\p]ASdҮnYg=SEI*QM3q"2 qVw5XHaXPTœ D/Z(peM-SG!:!ےmt:d#hA 8*a%%lrE\qP!iG* JlYs3H,a# pa%&x4ǥW{X3 > A 3IL(%pHV+KTS]2$@ >S0JJYBQ:`PEy睂(Ox('F[Xz>.!p t>!bTadaj%UbIHd\X|WQViY@ۓJWdֿgd<=Q±:KΘ3[n_Q[G=0]aܴT}5d+H Au! x,X' @ eJKKyA#o}[c` @*w 1 TU]tҥK|ǽיp-e_Z (0)7r|s*.T30v^__yފ[Q9 b _ ޝGfp !g}WںaÆ?KTŇzҘ3 7hLi4k(dl4wSg ~{. Ge`ہ4?bV_Z6h 6 b={;U U/NMӞxoۏ?ĺŻXi#k .(Zn(Nis&{`[w[]rP[<H*LHHΡtcT ib~BJaN5NY$! Uʢy aŲD$tq" Sn;U q6kfUٔq c$I͈.!@8\@8qA_J@ut,d'T=rӂATBH%Xrr(QJänu0\C!cf\*cMƥ$ha `Pi(Bqmmp I8_Xu1ef8<) ~xI]wј'V^]3k9Xuu/rRQK{{4d 32K{ PbHoz{z欹ޑGTeڀmHlZII9d骰S,|/  XhFΝ{i]z饂l^xa'/뮻W) G$彲CXbŭH}|_o~[oU\_X4Q1>=TE9<\S]Bl 1=XlcDi``@$2n(#g^rvY@><7?٩W%̜9c-ʼf*kJ6 @Glajr l=!0nur2(S-^1$0?֐,C0)t  +75 p (kR?eO̦_ `U&0,@ Bm۲sl!ݱIA,bӯI{38/ER#8WJ@qWw›9Wr:%ppL8 Vօ<3:X y bAe v&q9jP ? KE4Rb)v*_¶-$˙ ul2JeYA%PJAmO^\Uak>ޚ^ffFf#Y]fY닔ҖWSO LJ$Ig,CѴ_% 6q( 㪘ԩSމ:cw>V$ ץ&@-w-?u1 em9+V⍪hKIlذ!z:Ο7o5֔mS.\W֯u@ `IĦpa9ɑild/H+P=+ӥZ[l RtQr =@_M(@(iy8 x]~7o߫b[{.ˌR:%憑09f'VJf϶**ǯw駜tu9ߙg} P$,xk"W/WTPU駟_?'xx_! 
[... binary PNG image data omitted ...]
dipy-1.11.0/doc/_static/dipy_paper_logo.jpg
[... binary JPEG image data omitted ...]
dipy-1.11.0/doc/_static/hbm2015_exhibitors.jpg
[... binary JPEG image data omitted ...]
܅93KmOi_(zr«?l_%[)('guqW'Q,"p|[cZ}ܾP1S5mdQTrb ~1ߗSu,O;oh'6==/ʦ&j1H!sW͌.ggy}jnLTSVHKufVTO3Y$UҏO*~Yz R-qٞsc}Ek]+ eiq/=bW{,A\tl429C7[}3C^ 5q}ks,Ea'#.3f.s[}쵻W7BW&Nn)rH)GLB2l M-͙zѿQo}vN<'y1 Zހ7O.QV3%:-ދU#ט Xo{ "U!{}JY⠦GRsk l5nc۪N8 n"FW2qp`>;jg;=/خbw-w"ҷwiX Κws]\s2Y"7a d<.5WI> d;G0qiSUT&yuT5S<.} Y7*9qПbonS)ZPWU zbu)lb^ʢmuQ[WUww|+|WB!nQ3T|!zKw߱|~%UWZATU?[U^#3-Dp쾞ƶh)!,nQKY hBı\Es@7f7TulZY n NqWtum*Kx}dٍU3C7}İ3Ϳ)K4dٜrOTx/DwnmcOJOLǎ#Bq NA-\G .}L9*kcN57Ni(v},= pQJߔztӻ3ݼ`!N騈poMQ̏7c'O]̗9s Bi<Áa'nuef`k!@Hsp7ѕt'}Iv+YeV<溺ຨtV NrQu,`:% #cr1[({kDX#%˱m\/m9űtk EjS[suU\fn\[=K zpC,ٕg,9s_]aee-UĠZqfcV!q5 苕f!\S[<UfSѷ.xm$taќgKERIB6N[Qap6=Qt,a?_C.Wj8aW2w<3`u{LJj<#GwaX->p_7~ t,JX: Oͯ8%8pW!fUɿ]ke=,[x JٔB+qBZ!v|6ӞTU34H_>N[[XKdv`nU'm+v Hߎjjj ~ !]]ee!J +zb̮0WĔ ]9@2oB}]_]]]_]_:f}A?N '9]R {OblMщޟ];LގkƬmԱ|ZcܪA;Ck"|"f amEE\FAV4dga(,5[.wqV]Yp>urg}]]_{"0jly8PXcvk1)6&:WMFvffaE Mgxݘ#<SGRGqtPDs0kU2*vSf&"iɫouu%3nڛ69Nh tXlw+Nk ;pW eVW fp@=0)PY ɣ# WzL q )D'5"1ZEd1XBbYn vfȀYh@qj}! Q!3]Z=#rE1.vM7/+Boyb!Ǜ(g# pN Y\qOCzެ,Pކ{J!1Q"Aq 2Ra#34B0r%Sb$56s&@C?kʹ{Q%\f+:̳Ve\rSx|SAoi@\\(kY7v{f ,|XPqr u qzd $^[}Qpn?t-~[PQzs%G-~[QqznV.Ku%3[r[r[-|H-|>K$|HPEꬑznAn.Anb,LB3[TֶM6[k\B]]]]d,F†Ĭ"Ȧ>ҋqY[haB!"[żYeutoUeUY[ff]yW+1WUZ;Xt\mu[je,p L8!<jt:-\djγfA] 器vtVr,f]t{RUeebUEʿupQ6@l:"rA]{dYDvktnA[ci@9ke5LtɠUxL}V>+6lٍ8u*N436MBAuu(}TRc0Fhn˝."֞,A 񬱕ZVRg Y5E &^)[9$^Y=e*<`67FVD첲 FnIeYEődYEdYVEnt OoEudgX][18Ue: *W7W?=A-,%g泅 ~+;exPK$[:I 4^G-#yar!'x+Ȧ-G?: 75[9TH;P.I%ɱDIz^м"HHGjW'bIv[=rUa&Q܁gl%}.'P]jPOxgA*ʬ*ʲFHq |F U~f? ~ItsEb5T!KrEFΌa5[ ̯&'R0SV*hu~*܍~ <]n>O7Y#SP UL{,)g"E[];O\yOQ4׹ 1F_[X[[kM jt lkB KdeS{t?/߱5yT1E hnNt?MY[H:T0]u?d@YQb݅$MNNٙo\[ⷷ=Ro~Ya?unUP|Wti4' DJ;]oɰyN-L{P߹gqto][^ߚF]\JʁC`Mm W+ apvћNO96'56DCVJiZt[o=^>#Tu@%)ȴ9>$:;"[,'4g :hHNLսj2NM iIM <&m!$\i=97)qMb1:2-WMlf5+|r5$cҥ,}2g;le9[R' [oz5xv,ñ>6TG#XnT-mĩuTUx<Tży)؄ǀn#' 3&ff5r( NmsMOvQur߹o$٨@]uy.@«w~hQZ{쉟xm.%1=]}]WnY[eo%$b wdX~Ge;M6S |c6A=)rVz=ԮH$P'7`AK cddW=\nv77}5AcUKwCoor1QLa;G|/$5a;fpPB L(ŚE$nU&ߗP\frЋfjmh[,}cK.BJY_ePv#4eɬsE6O9ctQ 5*鞒]kXeO::Ƿ` Ѹ5G4QE7+ eP wYUS ٧ZgX// y*]r7^+)ʙz)d2 BgS`DwVV vc -68b\o^(۴<@4 Vj UV8<h+0U􂥛x(Lam*M榙RgݨT1ϳx)2n MJ1oY)uz wsOO= H$Ñs܋!Wzi}D(ZnKO ޘ_iڙO>]ZemU5-^AЍʏM=|x-ۓ@] h6@7)ꮮW7n-K<{|}&pS9ZجTsgM4AL<K,4Tq E`UO8ꃚDj tw[Y]WWWeO=Qtumvw%bb9hJ()7*]F&xbXto1D3;ta$b|AGvhGf|6vA39QTA іmb) ~I&ZXQA&JXs#$`{ X?SU ?5uUCc_[?;_ Aeku n|#uh9oQT:9]VT:Iŭ']tw e cؤU  KXMŊ7%CU G鮎SnnlC VŤ^v{FcN56 ~ =/ |5첂VԴ\/Қk46;{= }Oph<~OԾ5kzQW/H7.b/tHZ_a50ت^,^/uOo\Rvam-:+[!:)D,eO㱜QsmhzpUCHaÞƝ_ߊ&~+eUS\$1M2XKJM?21 ?Vsa.ɜC:Xp9W .%[k-;߆bOj̺_TN6?XY wdJI7`G>|vbWF{;֠uXvOQD$4\~Sl8 >'1`S>Fǟ0,{rcmtY,oG/jwr{*"ⱊ#OQLo?oNl_ ܕRpbj9ͮ fOw{*swɢ]A7 n!oZXd'ѷg~CV'N:/=1QHɣl7Rχ/G4?i>x6'_Pwr _xŻm^KIRVɽ mhQkJ-t\[V2Ȏa%"˪q]O xpv k]V_{\|B7(fe'tlXB7Q1bq1=g//'i xbqrxVEB#ljX>"2߈#WK$- St\b5I=&:W=,pT'9O+- +s3=|X=] O`#oUf D~볣u4Gr/rI.soTq]:<"39W.GKXhy>9aeWJ.O<V`{A1-uQ#]3u2ffQ~9`ҏz=S8 3p6pJMPt"->A{Zv WA/+ +Z/OT_ pQ.iVķB'RG᯳ZaTM, WZ"!b2]+5-PQS5<3$=8b" X♹d/cӱMEMPsJkhGJ`]KECRo4` -/0 °[࿅aXbٌCFDܬ zjjonk^ |==5(;ߒ}35+C,2vYXpV%LLς>V:>j=\8e<-& ySKK[㎺EF)|v7~g>ꖒY]Samx{Uu0p n-O4OO \F)U-%-2S0{ {rTfDhy &xsj|DUP4609`Q 4pOOCr&F'G5XڇX V}VZ+x VY[eڻvq⊲ -T ն V2fγ"m]gRL,2xRLcm$%,9fWW}W}k++yYeqJ*Ⱥ̃6A@&F 2-ux&obQJ_N;f4]/.UǩPٛqUT> eNChF)%dE!SY9!ūlYYb_^;?8y1Ee{Żug ;|ð(mr4 Lz3 WZ2 KB7ʙf44)x #sFDt;mYYxeo:-tBIioU4A(bnJ}Dq ]y*5!Ru*K+jʨH)*7Pz@k>;5VVVE-]_̺iYRLsNTykE _#dnQޣޛaYR2!ecd|lÁL6 ھƆ#K Vo2~?@֫1AlWYͲ#d }FNJt <Ψf=`S"]@. 
{١Q<fv9j ]]fA2`(uut ]_e2n+(P";w{PD5Ulh;FMh!])ɞ60lybhߚ[  !"1Q2Aaq #345BRr$bst0Ccu%6@STDPUeEd?Ig37sd|nivH[:n fRޙx(c;l2|X@ 1=xjc X;%kjrz!zu%pKݱZUqZU =YIGκE`2HJX|{}CK+i䮑|)*%Kf\\1Y8'HԷ9=#[|6^ZΫz׾U_;ֳ5v!޵W}zZVJw{X?;֭VֳҵKk-+VֽOwz׾u7!޵SֳUwehw{YSkZ\ҕ?;ֳuGlwԟ׹\ 5g|hwl:|hwgjmKk*]^V.}k?hwgk?irKU~Z֬Jwgir5Z,irY ]Y*!޵o~I[:Zڇ/|juKjw$ /+ˏEb|9 Kpn:*yil܅׵X;iYszC#SX(yZu;ZB]1h;(kLi*bH BPNүαE]O! cqtrA <UB64u)df, 8fݫXy!F..d|o+;W'Coحkr쀷}$yr]n˽go#etE^El N>ex*VG-! ' 1[xSմʉhF]5ޅ>BC&xj8_+zc/zHTO9&ȈUqt)J tj4՞*|W?oZptR+0bp]p-T۟BMH-ހ-mN&9xϭ3 | F҉2Y#r" u]X:Ufd[YwG]ypWT|]Mή je˫ʾ0W,Vos1,U r%<On-DЁ ~] aX:!0 [BDžHkwy[N8 Y޷LQ_#2SZbhbTĭ>qłMI[L:ZQEHQalp1E_Ώ7{l!5aiZreqwHEUdhf1QG3UKmV/hZsua޼[1ՊSE3zM|!,/|gqi,%e[(N~>Qp>0jcMC/3W'kZf۩/nxue&8&@tO9 "kSYYrbT-dTQu2Sί{ + Tڈ{V}xAiX*c&k3>)b`a[Xx`aɅgKa״? uF-ܼ&c3w;q,;ݶ忐-rlF\ng8s*-m>7AR9ukjaYiV`8ܘj(^sx6['%D޷H QC$'Jm1ug>'ɫ]"rX鶔Սﳑq.):Xxc|lͫmg-k4moSNASuL&v@ JgRsfx\:ZJcn҅dp~b:շES,M+>%{0: ^CA6(>W{dz>L\+oVdak6Yx.>4oAu}6 6\%2-%D?^ 2L.jqQAŮzJ0׍2y%$qRS2y"skYmʦZyg7zS zip㐎krqT+ѕ'ӫQh+w|nȰd>-kSU@t|ahxyrdtox-{dv##eU97źb#jgWy״DTKB]UǫxϱÈWȬ1Sj43yx/ku誃8K׃b W BI.iGe]]utհCbs0݆Zqn0Fa˽:~}TI2ޅDEyZFQE 肶i&Z&Jk-\=A{vݸ[P<44[9KmaﵑD))98M@O{xD/TS?nCD"0!6?9+l=ZHͣ]a@>;!tU7G%q\YWK|?K?e8cn:M"TMw '9ȹy̋!92w4GێӓPA3zTsFlBp~wi6xdPk &gipqc=KoNzQk6aŃEpcFJR͛ 5}[\4م3zӝW3QWͤZ9̃VX:";u)wJ)\1˖܍7nMP)̡kdDZNUU[^t{TP?z,i# K[KџmUFG,nֽfO%>y Z,UVrwxܟ>7Y!fNuQ,Gp:tS.Yͷh9MV>Z NlkrHXzTI{O÷uf6;ߝ)˱Ȗ{ֳZb6Òm={OhȟBJVSnrve3nq4i*8y Λ6}vFcnT2Q, ←T1퉭xM!Hg"|/h[ts#{# CJjI2K T ;O64U`7D8KqzRo p*g wfղ<gTrr 3!Ym{Fc0ZQ21J:Vj2[ܬFگɖyv^B*`g8o E#dd;`,L1[Er]trX8IG ax_3^偭w(Z"?#~w؋YrU~pzL1B.| ,T7.~ Us՝d&mB Sbb¤VRE{~#Y<D-y 1Ux>D&Y PIoL)<ܷ+_>(sgNM<3aXЄ%qt7s$zޅ5,=+54-6 ZN޲b/9b;O9vQNO(E-rVtRisͻ,8cq6,\ݮ=m`?%0۱oYd?'l;rk-_z7]}WR|7\f9䏤/q]+vI+CHG ñnLޟЫ9ܙAskbZ.n0+%>.;{r7y^&iᨷ6KUe;h:~Pе*M4La Y|`)tvdiA@%hz`)[ST;7~SZ] wl5DC0^ "tinl/e<q/6aG.RJ) ŇTGU6w)*%:L>2L;8 #Qڲmz/3셒DagS%ϣ׏!c/`.>7!SVO ᮿbr>>$񃬅{ q[ʷ[ާGxwxr+}nsD77*ڃ퍋$k'r{y+I;8_.]L3si1o'K+=G] 2˥py Ύ[YU~vGZ? _W"8HpF}Dz=g:_Hf֒dµk^Klڇ $܌Q7[( 'H_:M;Š8ǵr32?p(Qۇ[~,ѼAhV ;[ԁf.@XoMz&e܍2ױ y]BTrwʁ1u I%D[ ]GVLK鈾'w brsFh0X\KPoj+nM{ #>HֱH7_q[MBv_kRP7_QT "s&60~]KWJQ2}c%Ld{ ~'OװTkqtD1b+bUodwhuq[-KM mB*71c n$s@qZZFA/O?rj/*')ʇ#7ru$s~)vA nge{90S\Q5G4:[ZfhOX$->Uqe`WH5֯q[dU怖2;n6blӆCU'wjs"{&Fs#(ߢk" ?&F[<&ґV5T^q#up0 /p+HFzGI˰Eݼ䪄vYZupQ؁j=3{v9'֡}3ErdPM?)S{tf8EDns[be<1rlGҹ ;*d2ոT $&3Vne Ԝ׷ 64D`AD&]k,\U,E/S[k6jBrzܸVk?zr6[Sc|gY,m}t$u8!WE,7xTZIcFEO!i΂BN-#6|P clbQ_nA\p[(Nwß'T̗횽#Y+\2 !>ۮ(~ mlF#$\ݦz%>Kp]e#eGu&%amlv[Gw෬*,o۾|`fH$";:ur%]k*&G`[2*Ҍ0WiO]C2Jl.=DpL/c 9KT*?SO4ڶ6=Cp'3DE+(esbs Zꩫ%h|`fYebg]qeY~vf ;LJ<_T‹Xu~Gv Îذjuߍ~n_6Xa&IWaqu,,q?;7l z@mԅN2捞&7;d=TNiLr-mFT쪤3:68InҶ!jL{w>iYgkqu# 2b-ՔΑu5tP 6inӚ! ڢ1=vmP+DNE\zml9^4UuQ:[smls`fc!j, Czu-#<qCSpnwREO셭Sdwz!Ojx 0؝Sk6];;W9@:WYT\8NjZw[R-Xk^|UW #yw|^ۧESj4-eu8tf+06YdU?;"Nd'd%אh|hn h:Z]\HmPsgr-SIE-4o?S Ah,*M+\#=t?HGU7a|npZ[؛Ǎdž'8jmm#p@w/XZv``G޴|?݄)(_,e \GԬњR1K.XX֌11Ćߤۭ9_ӛGQ}6v?(<^nmoJU貂L@70Gch *x)K.4}s]N|f W~!̳3[+5perYcbaOVM{yW4aj#:caұ-MFD5C*Fw"7OVbJv)? 
{WLmҒj+O+?ϛOeV8r{*Uz_[1 %=:\f1m:]ih=ϙPR\NͲӿ#Z1 IwjN/s<^:Cv{Q$Y!ڂ0[Z..MtmD.%ٴy%CgK ^Gxf92+1u\<=O-l>X!:MCm:8 ^a+R70OM잱RwU9^p3 rA󣥴~IO+3 ͣYUtę6 J $5k7Y0WEk}=lR:J7-him-PDxrɼV3^G34Zv]Ò5:݇v~LYrn1a ۹ra<6dg~"̎&.ԴͦŰ~Rm~D>gdu5-lh|L$F:#C"1e'!1AQaq ?!j} #…iJ#w2YJ`֟nh)OwSE_&`;h2?J] m)#$u*H`?s4S`?b9_ !\_Z^#dQ "u+Z_xo߽ƍOxps3Xύ\)U,\s )3O)b3̈f 5_Y='ɿcH."g;0n#[[".}ȹjHbQMح9[?kw¬O_5Ƅw o1Ue10@JTpLaĽ99 mf~+oIr%\˕>?.?D+gܗw/+'jH3\ce?6gXe(8{V29  f7fG/.l\M2RkFzf.)Þ& E,Dm(]ޒvN% jufdnn];@ 0~T<M4Ү]^Jllw>?sr Dukݽ>,返T> 浜E߰(\2*ƥp}!0ҋ](ڀJE3ơły-Lv gm{%lxKl[}L ¥FY])9#AG1[#2*h!~BR>bAe̮UAt󉲔Grsޘ:助}Gi˔fxm9+dLE[Ae ;FWE]4{c`GW+w_\=aɮm3qcrŲTy;K6 <nedM[apn8-;Pʥ6y ȨFt(>X֎Y3Ia2fVvi*5G %M  (#բKPO}G@ >ɯwǖ,|" -}etOKQ]oSw :,l*Ŋƹ53)ʎoj4MAHXh~6]F O|$qP1EE9l@T4.3]h04%4Łvk=T(<#N^Sk*̤OQVpR/PȽEz4jQL❦=Dh.튘w+d~Qۼ(A]:}Mt.(h<{FP *`:spἚտ> AvC=5>,iq+ A2m1>#l,BAM^[*ʄZLbt72=fQSj/nAt&u g=sN{#;'$VLAfPS; M3nHEm*Y5 o֐fuNly p7N'qNJ+bA1D*̻KJJdƶSi:~e:t9/ᙄ 9#4cin }`\j<Hn ٖx)(2`[/bD,pM {e +|)2Djt|&^ }i\:gyVrA]>Au䪕 jJcʑpu3;tsej?h4'P F#bȘj?bEY݀:~߭;ArM&bJw,u`s9yNp5)hCO8|ş]\?7M3$–zdP;֧UpA۩.fs1@6oԷӥeF[Rrb}l*.d;7> |y~+썏zM:5Awöp~E31,jJ$\oR]P薩K=͑J <r]B[)j+3}:H8o=3zT<#̀,Y $Fs@vb&U^36iI ]8-#NכH@^TK~6a09¶Hrat|u21|gk!̣/PB)4shbg-N P^vgṊW( V.aƐdal5ȸRm3@Z1xܨ%N#N>G(gg)n#UVgW3p>#(,<*c˧:2ȕը}ݰ@ʡ= z\̥ :NE@rا1kJ&oAtSjS#X5+|cFb4r^L<"q|԰W4PiaqB\]%~#y{8Uⱷ@-^i/x8quµE>%=%!:=Ӵyf CW2ĵ(qp$,?V[P|J~}#D%"0^g3 Wz{#IьX*-8?83$ ojt Qr< 80ĔV"*vC<& J\a WT##fbVi 1SqASaKgG.AVJ^~R?' L d,5:wUE$Z4npwuF%P3N¡ܾO:[O5>u.Tgb:vBu5 i)?1<~fMdRaI}M,]sԿ9jtz;q#^~$z>dBYߎҀZ`x )tfu8|52﬉oFIg0 Ah)>A" 1vx.<Ֆ&M^4(&괚k,LP#<qU1-L*V.Q˞=Rj0m٫14Q\K@0**\߮edt9ie[H5z_X :)\6?L1W I =PO/.6#Qj&P˯@Ǧ^"})ev]).ü6 x`6}jv),S &OcģXGVיh )8!5aRb@[,cnOu1]!]&NILK890ܬӬL+=&YwIBP4éҀbWJWѦb3kOU= 9j(nV lCD΋rSYJfRzB*jŏF`hL a*xINJL{1:w,s%ݵ( \ 8IX ګ+ R\2petPUjݽ5_b^#3C0OsA C g̒g 48SqND$z݃s!0׃ӻ8s Ĭ9?Mzi.itErTJ h-ne2ԖXX_2-#֢Js {LUjwP_#WމʐB⺔mPǨ-VF۷PeņA")A[9"Q1]aURʹj. 
Aa`vwe6΢4jPU.^*KO E}h`5x)O2⼈PN & B̤A"60o3c+scO(AXx}.O+ӭQ-`d,Sb쟲-Vb}{JO+WpT͚bkG$%]CŤb'|Ɛ%ר_xVCJE尦>.71T1(jz<$$Q~S&`UvDH| 4"[E\ʆ* ^?Gj}hKV2.LXfĴLSb{`*Oaؽe@%63JiO26MLS㋕O!.vHC!P,oQNQ ͭ{ǁaqLˬRʍSTDBd/^LuvӠ-kiku ̢AQbiĺй\o9bJ8k^`b|S _'55N>bKORE2>y;J l[ iDE678bNJ*|~OxpJ3DYnm+ :ܮ&6o1p&M(.UpcOĮ 4#rm 2k)JGq13Z YqA*0 KSHP_WKgZ't4(.LaUdG+O Zx'@fp+ crTlHlmy>rQk'z^PĵX+u^#YCXC'1y.7t\Lgeųs{2xĻwP9-vyŒRս`SDAE&.XטE`@stSbiN{QBכ^Z/6Aamkgy+bh KV +u cL6Qh-R9~BJBĘ1W Vk/3uyS;p,D \7wY1J"S;Giq[-q+J/n] i@UIAs+F?J*WԩRJ+U &izZY+N旑 Y%2 >XI:8JڿGE?K\|QT;BXε*KCfhe:;ӢS )#{qL^wit>!{BnPuh3W¢fWP #mRF}F5%C3J+.?KQ\S]:Ԗ*%̽:ՊCVkeġ6edVm]{90ƹJ/'ŪO 9"M_|.?0Zg!vv{`6*X5f,#SՀpuJqRBl-n5TOF@L6W&_0|Iv[&NQ~ *4MȵWufBݛ>'=^Ș`0F/5 HZ^c߸w \],b\Mȍ=Q4GL~xȭ0Sx}/`SEJ$ˮls Gau~_}3ccD WaT#-⽇$dTɳކ\RtN\nQ0Z@OLe^kx8HlebHߎ GJ.GuٽƦGQ &/2g ee WGGJQYt[bA.?c05LO1fӒ>D׶@[+HظZ7\`#6YN=a v@4d:?5[F"%CFK9HU]G6׮:,w)U-@_)_t1Rl]XW'hco\Zn]!{Yce  4V>z]MS*mPv5kCn„s/ELX oI:g,Vq F;x]h"OC N2<0\琫:82"OqiʁuMMV/y9Kzgfrjh"!䥓ڜTc)\:)H FNǚtyָۼdjR>sژA: V1ٖ* O`"=esEJE?Kh'4QL$_@tKO9.]-Ur5?rnDGl4Ab;LϨ|eDDl31]vFٿٓAXAXzPyNͭ'!1AQaq ?ת5οeî?W ?Cт=HHq;Aw>_ovHt_v{?ă,ӨJL/ywݙ?sb>Wyb;ض<Z_//}t=QOgg Yo`=kf➯-?џqP+F=??}yƬ<_؝{g;_ٚuo}W!G;N5m]u~K@3(ֈʈ,/a̗`_@t_x?|f j0$ 8>8pb:YP溍.XD&&ڂ@v hF5wLVB!l+&b5ÈԳ)6Tn-pEؗʮ{ ╙DTR'*Lʸ#c-TTQdS /s ir+N"p*c ܈=3՚B1̬;Lf+|-_k+9 ^<>e&4\@`^XY37T`Jĭl(S n lRg}ETS@NYp%CQFNҦ?1e5s&R3ƜB7=U1/.u4p ,J#&3=ElwR d[1jK7>8nj NbJ{MO TNA[s9 %(̠ L l%`UD`m ܢvs-]n=l+0c A',4ZůL%3s@f d81k1!*(R\PٿTDb ZERaUCWy}' K+EH{4>k{h#ŏS ;hBke>w)\,^1O.y\8-b=nZ{yAEys.ߤ@Ԡ!uACy]ȖtDP inT.a?`.X; g1Z/Mj*wq<kx?$єf`PO)Q0@i&[l%8|[78%?)gŮhO e%H1z"cڿz;FK}EHaqƵM&Wmt`UNjq`_,btmS)sק:sDn2C( %`t}JpJCa `t*|YyUfCO <뾳 *}+~fVஓOg'eI!(TH=qM=IDl RKK +z%+"o>z%b 0kA#a,6>QiX]OIV"0r73(EXvG=nΘ9󸘂e&r"xyϷF=9G< N#lJޫᏞ Kd0+lrET KX%א½l=~01snZh]]8\ԡMgHU<ϧYrق{:!9#RWm~ 4Y#4 J^u,pkmJ@+jC-̡Na1%]%*iw,_7BN4F|B?d6$Jen :T-UTnىr@X(phŚkq:~$"o %5X6d1W?K9}LKe,06#~y{lCJ|V<9C޼q)mHz%:x\x)` {ٳ={ gz~s+Գ>ُzi.yPγVZPCeATӕk9PJtkJF:]rSM>rp}#PnX"KROS85%z/T!FT}SnP1Iw2G]+Sj@Y/Ɣ5dZ1]=`$? /uھަB^PF1VUMK%)]fG:Ois׈4"hUDǧzj!6[COj==it|?QP:et/Ĩyz Q' yQn(c,u0ˣwۼ*P=neE(RјX ^P1`!Q9an8e"ξx?nD&=L1а,hG )@Iw(j=ȗ%b+7r'P)1$+ꢘ ,*,ҠiB"%%, Fo8b,{n&l^НQmPy|Sr>=e/Gi4bJȥ}@&WP?ޖ;%z԰nGoFWptz%,HZ£a0p}?p5{Zv'rׄU+ɟK-sv"_DesD -7^{:{ K:2 4e80]`G G<?:p 0YXשxS<{bvXY=CIfC(w3Yy_GWΏ C:=\ar-\ E 04' }iGeGyJY''Ǥ+gKhJEWA]=Q6`7֮\Sjͨl \  fZ.2b+#G;ERc' !gjco#\=k]Hԭk&qı|Y11P%kG|5zaQeuP&M .洒߈)dxX;3j\䩅еwr7xJrrw]s1_ #znaFڪ- NWEGz3*~%?h;%ֻ%6"GTJ&1&buA#|Mkl 0Q;b2ExdQF X)neBH!oA?C>&+.)Lm ~ZcdGӏ$uwAc>N#GAN\jnA\'d@(6Y{`z{ZVCۥT\<>Z-K;aV o< A0\nնl|=jUsM=jlYuߍ (z%jX  _FV_eΌS;Az2 RKq 9j5iaW(=93A /a=Ue+ :rӴ:Te1!@*h/-S\d7j 3x)}fH uIF'ƺLd½lFv(g5EʱNrC  ,sg/]=M[8NOL)䔌d;!apӂ0Ҋ5 [u[2c0\-e6El ec6V ؘD"abL/$2 7 f!4,5C%[o3hj9)uP[Qw }7sh] 4cg,`ݨDj^` ڼ2uxwҩh{a\' WUP=f{&<' ᥚm^z60Z\}+,ꛊhi \nSYyY̡)|UʽgK5;Qv3QG4e^*"٭؊ȕ;^99|0A SWc*zs>Z a_Y|{ZGQ|ɨ`أ BU>.ajm}t (1+-`m tKbAEwt[bj4))$TP1CE6,nQ- 9FSa z$cY`XT@H۾[1ƅ-*YLDgh:<ZP怼JcP_C0a6{EX'09Z6K]mw1*ZZ0J^ <KYXnS%I֦I9;%Rg,4YlJҘ hPXt(|cZ4§DYTYzȑm\vWR-BUm3v{ LPh^=,h"g! 
rzK*3 X;K@Tf)=lp enK4gʀ7M*/ P {LHžO`ʬj٫weLAЬkH;-6J2地ZAãޏ M>&-E 1Qj"'h߷qF-,Znܶ6!`ejUU"Q873OpBS>I+]-&bPdToQ;T-TZ)ST uܸQ wX@1o_.0C2ɵ,%EQbLzK kYzRhl)δRMV-/L^JR NeȁCFAЊ/JqcB=Qj$5ikD%zt<1gdW1˧X'}S4Z (:XڬߐΫe`ӟUZ E1CZKJG#hӿYe#K-yâC|\&7%Cs&J{FKɊh?uzs,1AǁǦ[Ip /m+/bt 45/卖J+Kl^(TH/ی)r#|>JceYbt!}bcĢbM2ʽhJ KĿ!%$K_vߘo*\}.[^ca-^,O;l n4䆵b<@2DȏR#ٰ\Yzc։)Y' u c e=V`AдulCGf)t=؁fe[k UW*X@Yʪ͜{rJz,F%x`r!Z/E]U#HFaXK.JKkC*;@zkINZpzF͆b+ I|Df[ rSY;/uUǬIfFfJs9n3L ,48AkC5 f]1 q*Zm87( )24KԼ_hvQt;1ds IUV^ 6xuzClW:b!bĘ1Dp@xfkPP؍>Dqv]2d;"ȷ*k,(Zi]|Ftqy``\dW\1X2%ŗf?1 Β5B1p m3ƻn6KnW%(!K_-97Kq]uʷ)]+R 3)Tc.{.ǡb ]WYj3yx)xgCs}r>6#tJ*!(2I( 1R`ᶢ˙`~/%7;%f.Ļ!C(r K#QZSTY9K` NRS]3_%woYzLwKS&`2:w_騢oKS1xF{ç!i!lBK<.l\ᄊwBvC1s1o尗d$\rK-eRr bFA}jY ߘ}q ̽ п2& 7n>.y2ܯ'U}E-TMXvG&XK NT5JCJ(F0Švdi.3dYl U; Pi`j+S#01R[Ql3  %`vCO%k?SQ uO9_g:_<[?& x"x>TCG#Vod#Nx 'CDN? "-tx`C2vǙC'I`>^>0A(كXqDxt- ML< kt@Փ>T*K E" NUK`a +4`˖˗SpDUɄ-AF]lbj,KB.Ys#1e#_3'q,٘TSHdKo"6n[N b^' ZCrl`b($-]FcLl^Jw/M>ߖ\)v\K=:b@LBՃBTau3Ks!*Q-+,.@o(.Z2znQ^rZlcZvnu3( 'Q1r)&dC" o=>la>`ws=0.ZtO|b+z E11B?2TdP7tˋTg=!f"PnxZq*"XЋpmd0BR1񙆵p=d4t~;X@e]3竫[:vUpwrpo dSK&B. :i,w,G~ Gy 3 O'_w)X /+tǨLr Tb\4yDԁƱek#ܸ`Rٵ0wbiW߂7TE"0(T-$ A@gQL:&s'OD&s|{/`@/R-1xsf jB;QyMiԀbbp0M]LJXLBU8x:]'G$81F? _eq:E{J~`NFOh*-y϶QoѺƱ$QʾfcG2'%J'oDe#SL 8v*HٵӾoK(Nsgju]2(>Շz$ǴFg@#Oh;YCKHVM r 2EO$Ю ς[pbԯP%c ^3,*? C"ưR:-C%3%#E\pwMR%Loņ=4Jf&4ʂnd T*bOȾH8WaqÌDe)Vtfl.~s UvlfB]="-K#SXi&rS2=QAjDybzeN5p }V܁v*0`{27OR!$9qG/hO`1-t#2 @4)Z![K**(KBNegDm:DrȆxb @֫bB]uks^+@lۥ zgC҃RN)Oc_c:=<-KjCvKu'R0l~xb]N0!,Ŀ +YihWYaplctZ0ŽML˂C,-kRZDa Ep"F(,<ufWnRab)C5%:AxmwS'd~ jՁ{Z얯zQ~ l:B6XȗNn1׵8yƧ4qhB]T1b8x8ﰤ,C. l&ʷ2nzGUI'HmM2}^Hu UjKrer*9ωPG dVf#w5\2EO(8;@.1Ktux9lW T )X2{ I6"n^1o[GS =1eϾ~"Bо7/C@D8M#f3օօhQwҜg򖾁SQCs)&eho;E(ӱ|S/7V{_s]]H_ycЪp&f}@Y#*TfdneRMy:IbNȐ8[̏okf/n/Q\ CF>[}x3]v7o oϼPAhSz?0wASX۾odj :Y4K&zQ0BWCH+N+w:ReG&x߼Ϭ萔8CB Mmw Fe )A[twggץ+X&X5uOlyAGOf`=Y%_A_]]T8%KT9L&i3a P%SR"ЉWim. -]`qbZKGc{3e.N4+fQ%ӊe.z,b;&f 0 tkP4E'|[?/B7(.o0YsGWݕB/b'jIo[˫1]fގŠz\2nU}x(v(ЬX=؆uf7( gzܵ\l4דTseH- 7*dEqaCŵ"atCCR %PɶBMūCm:P Ad\nH!U"6/=#g,zSp8&Ar{3]V ft y9y]9X;,4z{TVuxl< |}GdpqypHDίW19h(H"u1/)<_xEb%E%Po(i4cz %`khZ Pq(ì5&BVXPhVR\wvm\%^hӣ$6yV}0e:g:GP.0t Nx4IvoK^V߂Lp<̚`BA&veęZ˜JD#+)5*q+})8,p m CPJTc`-#Kĥ"d/n  .뜜57 nIvmżԾ>B2תlLL:y;\oV%_ݯnCE:){44G"G++!wMQWzNM~XѪMmpa 0c(ܶ^ E7U*Ր]dbSxk0e@f0 mqC)ޥ;>{A7r٭%y%=5k86zX,3mf>F|Z:tFjk4]D^DԕZRbUl*Qj]F>8ǐEP %vZ`FЖ-[c J @*nP(J\ZQv^3fY ňm'$ h=y'HxkZoraU|5059HcA7DXʚKWnq٘`m:" Z-U7 i}b>!zNYF(Y;U}^Ƙu={7 r7ՠx+U,/hSV.3ZhG% 4.,􊸲3?׉Ѿ 2/W;W@$8YaPd˺TpSwpTW2PF_l@Z%o_}$=1:2;2]/6Do]wa상xjt3IO Tyr^ZJ["r>7Pk /g+uۣ*j]"mwU{#L ̙e V1-*E-mek Q5z9%NК_amE2ݰgNɪxp~"ijuvuƏ-;kGJC0:Nj\NL&0ۭWk)u巰ZNJCTr/@]E湕5T([{Ģ(vp)zth6!$Mq7q4˺`RkhG%FQ.Nr{- ?(׸b`5aKCyX VR1m\Mm/wUenS(hBû]XJk`AfuFfHvѡZ/V^YfƊޮY~|N,Rk[V.R~ξB vc7G2e0! %(F !H|%PY(Jˡe '|–wdTP q#E٣}(?t r: a?%a> v M GoxF֗PY;: ~˓0PHu{:夹4{B (hwH58jh%ӽ)(qp8(+:[Q5`ԳlR/P`pږlAk lFZ kz2GDu0)$R&mp=@p`7D,\"ࢪVsʔ,U-%%\//LJc 3+i&xm1e12TBRf0ZPХT\,e:0d2a PV0o54`q3Q  A"KŲQ)̯G CDTV㬣DC S+ d-4c,371l%lb!Oiz Tzix .]e*:0PJ9>qƔqn h}zz![jsYf؍DiՖhuA(6K S,UB h |H%I9G;auyjP%x#>9݀Jp7(F7Ͽ\C/A0a[KC2S)*+1.]fAHQLoDW11*VeJfcTRN2QjA8GV-:~ :D cu.I,'@5!@ˎgQ KҢ21& `IP iX!B+a7^a|_Vd,%mX/Xnr(.QoBJ `תX0F4e.e"f_0MʯF|?e02B`5Bu4IL"}+(^QfWƩL@EKH#%GnlmE ۆe3/MH,mσ'!1AQaq ?y?x[sJƝ MF5W _~y|F8k~hUZ׾}a2?*XsGṞzb>-j L| D |iTB?x,#(s-_ G\~eؐ\n Ari{r#j3=|ߌP^Cd?[ÂO\Iϼp!`eaҁ0kUޕˆCba1%-݇nx?B%Nxg?mE,a%5JS }3YZ&%«r&|}HB]+0r%LᤛX#Cip(c켢9V&*׬<›߼Cp3N \`Øe_y `pQCZ5g#s6zO2nc" jAR+7xWok'I8:}? 
=P;*fʃ]Tc]>F&a|,9ʳ0/۳(0FQ - &mѕ/3D>| JLCal ꏼN*i("hz(mu2ʼn!8fć5kpC (~ˌ6=DIְ>!.bAKY Nf}bwX`F{ígSvjGDq jHHwaXm!:{GPK|IYvUƯ\1{aϥ(az@Y ~^5*xqF=b8c_8`η0čXσ 0pT'ZQu/o1d=k2Oβ4t0izl'KB18c+ѷŶ#)-m0憴+hR"9”M5ͥ{ǫ#ܩK*Vf09:ȄGo tr5vlQPZiG(EP%Q:* F"8RB l~$4 C\TxA1Mȭ(8v7P`3"+#)ѫAƆݔdڗM%Ot)r|cX!`$<3w1q %8͞?cM_ ,;|͚fg Sy 9ӝ&L~EOO*y~䣯,Ȏ^8 5+CO^Lo0Ϝcޔrv~0ހp ɼ Te7d&? $Z>Z0ZW-x&ZA>0*D=fTQ9Hy+4hz;:d]@xT߲8Hh'#aڅAҎpKC+dc.~PSg |Ⓣ?lqC,1=jdG[gUJ1HJx͞of44mLxiTo#"xʜW\9qˏ?*6 zfo!QF`^p Yn`z@&md3r<{ye$iVǨ-ZԄBzSu~17v7h8,&vDy-/f+'Q48:}^rm? F9n<#z0OËWXoV)q[2o! :Su쇦 {zT:,Nɾo1ZneWDx=IZ#~=ʁ|:?>"ZuXzp qK_C`'077e kE/1iEϧ7%:estZ'h ؀zc8 % `pd^Sl<%'\H@*O@}k+=NJBk Z3S#J$bٴT|цTXϔ>lbs'8!c7:c3lFʩPf慟+w짎'_ppq r㇒UE±z& DZZ,e96dO2F/ƀֱ8vzçIrxa~1qAާ76" JƧK:I4vRh931眥Fr2Ã81P X@A #WTOS4RD@`ʘ$`y '$AH,4C{#yb~O7[#9CfF aB).%A" _8hhs6LoP,&L  f5:po0S +uB6.?b~doP v\Xw"3@*Mk1oOyk0'_Ȳ~CJe81l/\LMDO/X}IvG]&&@Hi=ONl1RM2 A׾X;x #};{Ð Cc8C!%#*KXI8gSuo81e"t:?f\qLH~qxoo f+6 -*!&:`B`K98 2D} T~qGd;  x0O#VOav'1r50=pjy`G*:q2(.r <'Y' unl&i[U~MC%-vrGܳ)bo` `hmECo $ >qFr w c@1TIuih (UwK?9ڔ>{qH .'9Pӊa(ayȖgOcb\up-7m'n] OKxt.BU`'^'Iᆸ*J8>-m$ jVxfԳ[$a#kbQv ڮؔ;񗯻I4;MIJ胙]cVFO#ٍ;H`u?Lyc %ΡBb̳x5~ +ւoBTi""s+ `u +5kpֈםE/&h؊@""8STNOzqb7x%!L5\3 j Aq{tvb⧧"]} G R½2f uNYl%y`4rXC!J2Y<;?iC*$P4G 84%Xi- {Âm`0z?@hB"s_ǯ~i2bmfP߳ pWMM> S-E  HhK2DS9)/4UlI 푋}q94  N}hTv`#(T 3(V$" q]ƙ0 GXJ[~"pȋ6 Bh ?G/j ̺T>YkPخ'$~8$?#҆"f6 Z8@4 upm\[0nViU,T * _ ŷ)2GN[9Ј Ʋ3i)^@t'Ȋ>'wseZw~!Jy,u.ˉm¡"~05dCBj?y{0kj vALJ5 9P1fĚi=FVs\Leb_&=lseUo AdmA3梬Fmz켝t@__O}; ?gKTz~k^p?&'A/ Jf.GM^ʊy7~ĥUW4?Gq~NW(&`i,.?#cNt t? C*u%u7 c(7LEHag~זZQA (hdj j<(; #qx⦇}0#ćv&2@yT*<TuUDc҃f&\Z2@'0k}9No†+#9 .y`@i KOiv|~0$/ -4 \maY `nNQI:AIlh SBB ICP*b&"i!'~HEΜ?:lW4F1QGJӌ@0r;| UOOZY)9A(f/)KO?#%i>4BYOd$Ԁ2;0p(b*'7qM[ӎf$ITESh "?'A'@؍Lm?V?9!hčM.S"GL+Ymː$u3RncXj&>#!"͒cmNUyPB ɳW8Ω @Bh=;)ez]JD>:q!R0PLRP,A3ǎ(0]8 cޝ^#U}x B W 7P7O m?C QzƳk!tUơ[ VhTwף~`|cz8\B =\S߼X $pd*r"T-U˪ҜX{KM&4NtH֊K ^/,@ Dfnbk`t cH~-I G\gwn$dB40ҕ*$6vDaco/#\4ؾ-Jij] 8\TLNm3o_ه%]TF& ;՘4=lwVʨ8.cx pߡu2k o ۈ8Z{?SZ@?n#uʹя xj\Dlٛ[ZCa9^g:yʈi3:\B:e!aX1Od ZPRjW)RP0 L#]6XrU-WPfhT1S@r;œM3@uy)>=:*9A]fiQlISmB̗uPd; Dbg_`ჰ-~ryv-qcprh=g^kxֱ$+by4T`4T|P5V_x7? )/!c4S 7F%FQ@s$6@ p{HO`0tm7lF;1 [Œ87~7 ~(-렚C+`ɂ2_`)DZ0MJM瓋9- 5-#&8+XgG@Z)"&zp,%p1˄Io 2,8e ^di|Я 1 %ѶmC}cKk\!z­sÀO˱˳eƝ1e=r?(A2tJLI.TP:t=p_Ԇl4#d{'(pyRl1/˱PE~t8-߮ssL~GE@w%X->+8>B "-Qd]:L N1ɗG&'Cc4/3Q'ЎnoLu]f.jYtR8Y23Uh)Į!ry2|囥^'(-Jdbu=2wVii@9WHwʑo1svqcɓYأģqB9a}ǽԑ>3#ŝ1 CP#`7~LQo h7z2n<'^~s~P׮pb7w_x9qD1siA I7*}K Hx>1"q̬7]طkO(XA*1>et1`QBl?6 _d,֧ )Ye蟟%ƭV Fn>.UF85xٜPGN$-#(? to I/~ ,~X}%N%OP]BD-#^&!5M [fp8LԘ<P5_*rH1jGŲ ࡟8F1 ^AQ @+~h P{G<)`'mڍwaa!Rn lMg*uPaK~)I (4E^UHF6p8sԙN2}w0S!W*~ƈI`]bq W[Ґ4P/8#ĹpT_8I<vp??  ~241YK=y F!ؤϮ0Z~qTg pN”וq|&;AXQ1o0.P mA9>)8@OD c_aʩ?_ ޠ<~ b=z"QxFUʤ{ʦ*+d:o@eU#|qd0`8ԬUJo8ebJ-,ơSjr3WP(J٠'-VHMb 1[ D WS%B bې5Xaƺ< (+G]':7z1 +M}ed n)IpE~q.u?3KuJ5(d6t' $}cCoY2<CN"p8rb|(4UUI} 2:\U,2xf$7q}Lf!IW$[?6 ѴndѣƆhi_6QӜ4 Ü;Mmšלƒ&S #\VϜx>&uD]pPDLFxacWk(E1zqD [`eOna6)}V3oL Ekn+Z֟Cw@ 0rI< :ϣK ,8įZ=+*(WoX_-8ZE;AO>}(jS:J6ݏYu$ p0@ZdJAuZ7v&@ٰ4UE28bD -|Vkp˒V6ta 1m'wf5zw^Goi6l*;~f=dJ>DoU4So.s뛍rqb*cC\/xE)Xcn Dm/~v;111ҕzh (D nDH "`(rmB.$:nWӡBT&.p e~.Ly!"/P^s `#aW([.94In%L(ED`iLNS7tT;:r7`qpԲ4 )TqҜtd0,h;P]LTpv( MvAm_4ՙ:q< <`UpBC4$,aƇ?1f{b 1@\a0OF :. M܀08a XW8i\u19Y$k}cȍ^Zpϙ5N{/=bh/SA?K(#oe3y#;kʴP˜hOk"DŽCy>I%i5w:5Td(:0s;#v|xH702?-Pn%uQf5UH? Z( }Uః t]II͏ xN09q Fw-kjL@aሞqAva5?x]o :5P)ŦQSsgDy=\ķ ;0ȴ'+`~0ӟLtezq7LpM|e;92] AYcInarc/Tx b\Df\1L4emxr YYnU@l%(EiN @m`{T(mp]3`A(`RH^&m47s]wq|R(I%AԴt bq`\\4 qupŁ+v4b@ 0@)`>]*%iSv69hKfr%ـjN /[ 8B/T)((jl6f&=z(ܙ>f9x;0* 2cbh9!9$9W@ D0H]Y^Rp%J#8SR/'KD1rE9aYP0EPå9)C>3%Q%Ͻ_3̑X,IX+(rqCT4'1b(JF{ BBx(D> !E! 
13:][H@8̚&\(9|*dBamEtKкh7ڋiڝCو/#ک]d<*I-Q)Wu܁R )gJ9RNX/Dxc8.N@jZ:gJB—XQ3]_b\sc@͗K8IJb405hC0  X$" SC t˱۱? p +.p]3 `*O=Bx~D(MiDM1$VmD-3q8E':[xI| )MjF9"(2BfEd)YF.%ב5^3CaȰgcD2RٌrFc',d2f39d>bX,}=˗Ⲥrz^IeV7띂BBHaB ( (+Z*SU+T\CIb‒RRuJ*UzlҢ̵xZj:խԶ֎ծI]OGSM'YgNq.]JJ7R7_wAF %xԾQG'ЫۯwM>G?T?O~=`tg zGk]1ۆa,mFFFbFzusW012369i'GFLY%-4ov\<|yyDu--,s,ZZJުꩵuuu]"L[/k16}슱c?;;;pSY[pϲ:NV׳wގB~jo~p@vϟ3~~0`!C[xDHE=Q?!1脣-M>MG~qe1cu/;tғS٧Lks:3δ:{װ_O<{q^pb}{ۑ;^RcwGS/r׫W/^y=ot|z+֋%̿˸[qO^}5ˣmy{?>v=Vy\ISzz:ggi֗BbWvv{M~Cʇ'?>ds练b YY]J(_AoxhGx@t38 ]]G.)rubKƻWF $48i@>`[?ٱls;;iTXtXML:com.adobe.xmp Adobe Photoshop CC 2015 (Macintosh) 2016-01-06T12:09:40+01:00 2016-05-30T11:05:01+02:00 2016-05-30T11:05:01+02:00 xmp.iid:59de9650-86dc-496a-9b82-d40903827ef7 adobe:docid:photoshop:eff4f437-66d3-1179-9d05-d5477665b218 xmp.did:7e073c12-aa97-416b-8e35-c87e21007702 created xmp.iid:7e073c12-aa97-416b-8e35-c87e21007702 2016-01-06T12:09:40+01:00 Adobe Photoshop CC 2015 (Macintosh) saved xmp.iid:e6697ffb-b577-430f-831f-1774c07531a6 2016-01-06T12:09:40+01:00 Adobe Photoshop CC 2015 (Macintosh) / saved xmp.iid:59de9650-86dc-496a-9b82-d40903827ef7 2016-05-30T11:05:01+02:00 Adobe Photoshop CC 2015 (Macintosh) / image/png 3 Display 1 1440000/10000 1440000/10000 2 65535 640 169 ] cHRMmusdph0>tIDATxydYޏ>CwyԒjɖYMb81$$dXVC `ck`Y[-<ΰ}Nչuvt^gUݪSgBk} fI"E)RHq'\b)RH"E1.\٧CyM+-wˢ)RH"E;Ͽg 5)l0E)RH,"9H"E)RImg QY8:aJ"E)R>e^ELR)RH"-jI)RH"E;*(¦TJSH"E)<*3A ! H"E)Rj͡ϟ?ϟo߾W+-4¾oJSH"E)n__5Ξ=I~??}~9x }_5xWb!kH"E)R}Ο?)O}ض KZHyTK)ohLTb YNsSH"E)nlRm/ORJ"|0 ,o ݽ`ݹ9jb۶B DJr92B߯&þM!H"E)RP,Z*ȸ,}رcTU/|jJOO2<255 pO/}!lq~׿Κ5k8tKċ/2TRHB6Сq|`R0E)RH.###ʯ Ay.];m.l1>ϲsN~w6># \.STXnMx'>ƗeN>v}C/E)RHq C"??ʶmۘq^__PZUbGGGc)*Q*'>Mؾ};k׮ի/{[h6|BjZu%yٗ%E)RHq3y>]]]Oپ}{ y*B&5rJ|A/300@GGʏȏpחZyAT+"JBS>uDO[,A SH"Eoh(}J Y㌍p^z%>O |Ao˶(,b#!׎%uZVd%`}.die0%)RH" ~@ ^} P!aXrIvJM6d߭Ta! eR, [ZaT| _r~+0Y.Eq ҋL`)RH:!<[DȔRxOV}6m͛ .]LNN200//w )%0|P }B9!6!HW"[4kI _+—$V{:z/uI"E)^kOS(,Pu!#4Bk=z#HT-IR%?qԾxK)RHVooK_y{>Ì_%UM.* clt Pk&}͹C¿u`h] uOH"E7wb__#|={kJb`adF Ds4ԭlaXHL|Ykڥ#R""E)R$<#Wݻ,.~G}5$!qP8@ u\%u%>\'IP'PVc3d׊'d~)RH&GG[$+_k 1E q`IN Њ $V"O/"V #VJ|fqX-)RH"ד X"R脣_5a:pW@XMi\,ADXFN< 1%)RH"ŝ X*W(MF Hm`Z|c3 @08v(nrKkV/Ѣph&x*A-$dJSH"E;*&@GGP `1c50&]YF^ỏ?(jZ#3 Kٿ4aM=b CͤSFo'!=)RH"ŝiQ^crip_Tu&&~"}r{3gΰjR61 /~^~m)ON"E)4]y4`+-ɟj"K&Fc΍64Jm]@-*V`Zf0iy[J&/IE6)RH"ŝC5qtE"[He71LVV"A ,-e@qEf)UKfzUIDawO&-&DӾ7J`)RHqG@ !V4kM[TD0Lf'L!FxHpܘdiN8xJ%[)9{,""'ixʝvNF,Y) L"E)nsHOv=KL(K+xsV/#h"aD|ޑR-322N9zMu, kaqXݳ$-N/w=$9ǰ9o0E)RHqAhp"^"tP_ §į9], l&}0"a"}FG%4ࠁJcfڣ\ qbi OP%h:1LU)RηzO@X+ v7} 3Iq@q:цEfil4~ 50YO.g#C2Jl[Q Ke$#Wik/lY F9}.^VjD.#33cc._)\O#Ш {ǤO0E*`%sέjڣȇ`z P)/)D`EH؄X-ia6dG+/LEFe pdQPz1? pff_D)o\<˓?8D>2m,|m<]]x^s?qW'ٲen&ؾⓤ%No)R n>z).!J[Fn6(n0 vp:⎂[*JT[*ț@Y[s89l$RK8LLCB?6fᓟ| 6۬];@XdtKv観n|XGdTQJו];RxE7$Ⴥ17bY"E7 ߂p0v Ͱ~.z0:=)(HqG#Uk%ø/ D/''C -#lS'N{!]b~VpႠlT`xk@F._p$"Bvm{&"$^&`yJE ntE>}h(:?9K3< PefJqOG )>sxJq*:;'8|yMTM4 "E[:DOK]zu~C? ލlں j`klh;,VTE+5ٲu#/p˲Xv s"B)RDLBOF ޘ3ۅzv6!E_zS:;JkZ\&&F8t$lɺ "Oگ&,S״6#TV5 RϚ*쟷(c?T*5\L"eNb]vnY(_8"E ѽP\D+# k6" T8o;"} [ݬFR9DGO=}Mz·&@  JD1&`@(<4ZA y|UCۂBg:Q|Ж6Dc@hۧ˖'OY:;S"EzED*tsm5Tg#nyIq! C,Z>Z Zհ##;$p^1keDʟ˷wm[?LLM<70/A%QeV07k] vNMW.ѓ5n꧛Upo3KJVè) L"}KWK/;oҷ {Mg)n!>}??ŋM1vtMcqPH}G-OslSg(/煃?fdBFTQUjT ӳ =.S,Wz AT82 S!gCelz9 Q3!~n/ZӺ7IH4G\A߱~\yD)R1[bsNA_ړ?u"%^,(|GdGg]/"JQ{ok.Hue&'A*J9غx**a zN7L ",)T4G&zd ,v 4vIE^+GD ? 
&6ogΜ!-I"{d9f{/%T*>qƤ2Tf10S`sk5N$Cdhv >&-whz`HaEg 1;Hښ8> ́M&/K[qk94X@r={կ|ڃ,nQ(XjΝcjj"HAe=zѷ}ֶ廠;z(bϟ_e ")>%/=u6_QV^/>Jzut`>\hʕkm_|qp3Nǔ5ql۶}p_^)^{L_髈!Sr+Cy=v ѳ&=)P)_Wtwws6lٳgߟ<|<ۄ+HB$vVcdx7-+$}}9;fk|JRf+G#F.aWkG溬_nHV]j.slY<j^G?>VTd244@PؼCbCCÜ@P?tt|`K"^bD%ݎ!doڗlT!`uŒcp>u3lw-#W(#O-&"Q 'c,+pq$,B!i '&>dȹe^ h2YNv?J "OrEx{ߎht-˲شi# )Rz7u .#w:/7 a#V܃> @KOhwЇ9.&''|';wܻp^lai'nKY\ zjߗyJEb:|k Ljw^( P%CYgb'X7o,ZvAA8"AoHgR!hA۵sqdr<=l۶=*FIΖJ%8<]]]Q8E)^\^&۹܇Bt'clz^SxxGRRTMOI|iH qTmFCy=C. 1>|C(Q[7.U)ΐIzdf(Ru'NLkS Bf<+JҒ)jTboM5}<ʡΙ}'?Zb||3gΰu"EW>Blmbp#z2‹'6)  P֍UCW5QѷS`so24>7ţ"<htt\[|}4x~b򧰘qAΈ\l+S\/sw9\rUe),FeПˁ2A-2*HhERFWW-#vI"Za`۶Yz5\z5HafFO!6`4zlznS,a:X y)-N 7c灨a^L=e>"ť>.3U%k5:9:[ CL֦Ryld(iys:r9 YrzD`㕬$QDS6ꢯ3g[H"ōCPꁞw.!W/!i7>2"@N60K{^j 2ǘ=ǽw#Ǖ9"\B2GK pxΨ)&Km8sNgjd><˸.Q8>0뱡g˲Э`Eg;w2XqX:+6f6Lwh9rH+K" jw~ m݋s▘hYX:VhFtP `3k V$0h"qlC?T"2)_DDB@0CtsQ,礵3UUp^2.snx^ы:eɶjTR/ev4>Y BJ:9> ,+WJk)R>*SF˵YdapBz_0=5}}}- '&&x"Z !J) k׮%066V߶m\СCeV\ڵk_ZXuR00ہ.]rYl\gxqP%"7UZ&쫑L6΋!.X+8)pbn^WB311Mo 1(~Y&.UpELƵVB6`FobDTŵ"ePr#* ccc\p)RHn~Hѽ?ѽ=y}qb;v;Ϳ __Kw.$o_җ(H)jlڴdŊ_7AOwJ+~\}_?ϙ3gm\.˻n~gunnHa9Qt[&PbR葦[M\5_ ڄ~ˆI6cts^ s5Km^Cv }xA7'h2 gϜ䁕@6kjLPJ:rͽ9,g+r821"k͚U<">@hsV\.LDEqWؠ|[V4/6l߾=KHb1.o03bBOBt "X8tttœ/~9s??׿<5YUHjkR2l۪G p|"_vk;8p<羻V ]}ц |C&⸵=6P2hi34N.|RO6~ᵠa5rnqHCDj@RF/h ج Fÿɋx>|1zzz;A)@e[nvo X3ehL;Y(G!QU)Ķm,ĉ|[BJ׿u.\~M!DG/1oVyq_+ d8u$__|nH*?5}뇌sM芀EJB3M'm]<Z-vIZA6[ftyr*l;hk_橔Df]@-"&@mGL^%.~_n8<\)DaHd2`aȵ#z׼vs9L-C_>hO Bpd;>Oy=NPqx AC=cjjRZ5F U\Br R_D|?ޝ]y`ٙ ]"´w¾i&1oXx*ɹ"ġBJ2a8)ɬ`xJ٣ 08!ŏ*/y /_CmNBf@k BT Ců Wf޽9seHq;A)|cq,uG?iޔH,ێ>8\9 lܸu]j+W$1>>?HXdbb|3#Ze}5>e `Th稵O.$cB{:A(˗ovV &DQ(#ơG˓ 3O$_T,\ YDِS'ˢ|f&'Z}-_app3gGTJ)H8Á?S,KDzPZUc97'^&=ѣ]+ Ӗ^wjR>;vhYRÐ/}/_WyG[/^dÆ ZPPQ $HMíEbb4- (+yy/yCi>0B[.êv #˗KEX!>y[xBt7–O;y&ֶMh5ʱc{пF) )P!GH6 //y$>/_gww1#j^BIEf r:u*K|2ssG>M6G}!(nYk-óqKjj+&E"`YA`r|r`YFFF,4YM(\\BX[yY0z<~ܵc%0?dgY=iNE0RxhaqZͳ^Ow+,u%HeO0l^#aD X gy<9]n:M(DE ENBWSO{x7m9;aVK{uRvZI)Xy|ӟӃ` ޵w JF07Sk6Esϱo>yyfffWTpjcǎ~A裏 G?Q/s?s<WB ?B2DW1<_u\x\.swk׮d8h+Бx lΟ;v, *̩)sY:2Y7jX!^:qDD$#>3_c=+VAA8Gƺ+AX!6, !'SMҮQ59H( S/6oyx衍s!`+D_D-k+^QN:ƍ;G;DaM68O>$?%|~}pFOAX+PX ~}b(LjjFFFP*Dk֚[ƿ۶Ŵ3??MVC)E6e۶mwvvۿOO=uVz;ȸ7Q>!"V[dTбT"LM(0\[2;sFG&! Q:b97JW(tu#Oi-Zez aC(2"r Gx6$ղ`_59LBYb Μ9 mmis; b0 QJi&0駟~PZ?xs,Il_5B.61Fhkkc{|ΝK1!Xf"sk~+(GDKSǂf&Ԫey2 +79FO> P5`DÈ0kw򴼟2; .bZLGGFEkC,yKNMH Sp]PaR).XSJzaͮ^u D^@A{9*5>))ZsvltɿI'^;<<4gΜa۶m$)^ \•+WZ:꿑=N8a&xJf˖-(xgO<Mw D_m50uɨߞ+^K%ɼ{5u"`xo?™I#}&%晪 IVl4XjӜ9Jm^~ǖgt(gv,mɸE<_Ed]EKM5 +F T\a)LEH {K|򓏙u\IҪG2ʕطo/_f1+S!_-|߿F1W >,R|x󜨙QXu8m)ǟ3л*=&)^K m1FЭ&-_7{CE$J[]K|? q1;޾6;SF)ys0\k6~m/ W A[8ULM#T>0@ h.w}N=$D(tٽFȼEgO!nS xY={+˖ F0&'$[%@^_Ep]]]riS ###;w!7l\ZT*" Clڴ !< _ J0Ds:PG`eRUBD|*vYRMSy:Aa>G𢂏2ku2_Vl f{(Ϭns+J4÷Μ*BÚB AhlAʫj Bf CC O I>1+3煮um44pW__ϳy* a|{'¿4)X LLLp2śf&Bvņ ܞW!ؼy3B~i?>GYҋF-3cudo0T%iVE9Co:z>1kȟ plʍmd 0mUX#XQ]GFvpDž3ԼiG+WWݛKCCE@G~>{*e vvƍ:kY!t+Oׯc)!'N:EWB-¶m7hQ` j득~O8$\9Û WL/1-{g%t$냘CHc2 `K˗>Ev/T!=i,_fX ~lZSJˢZgll/J4H@Hr/܁@iKd3+Uro%f3*u ^/3Oۨu]]]J-&{Ǘ?ǏNSbaH׭B6mT/ ԧ>w5!(v#O/З0z }b/bo^)^Y3`iD)!ozf%0U7^/V9_k]aa,_ƀ1 "ēy~TMcЯF!,|뤘ÖW (]IJ2ju-J==Sc8Cbè[&|4r0`GD}3+HrY-~|_ߪaT*Μ90S0 ټy3A_EN|QG'3c:Ky-1|z/!Y #WO tMW[Kmg7o7LիWsʕz) $pu{5] `y~IkKAPSщȰyS+L)Uzj e4RqJB!T8w3g{ Q$kZZt6Z'Ro ?J@? 
I_N@.V0 ;t9MgN~ `bxxNN88SiHRwm ԧ>7}89p}U:p#{Ӄ5t4H | `27iV/*"~5_HhM4H >4\C z` @h6㔰JVRTP; k3WXN*(  :4^D|-¤@EaVO4+͡[lhhN:x֒"u ={pqx~~Ufjcz_KXm胻*So}+x;رcZ9TAn xIRnVS_s{شiarXZμ1A?ϑ#Gعs'WTok^C40d/O~ aӤ+*h bE$ʋTHjaMp9/9`*Ȅ3 :12 ĆuPz93sdz|lϋ(%6ؑ;0Ryh1EҨ|q7#_Q@&lɰos跹Y &I୭lڴ{.k֤S\mdnz'qo.{n;Ν;|[oߋ+ȷ JzPD}?߃\qkzLR,=aB@PXb%r_' C?$zI0IS$3ʟO4?* u2T;=,Pt1<6.Dz6yX @N@EGCsml\ghx5[4|>,Zfzޢhoog?~v_t79뢔bll/t#h!md2md2 /cرcGtۓ?G<Eze<ԎX6/VD%[@EՈyn Y)i2"l(q_Mf焥q*93]u:ɺVUqV.XCfύG)ЖdEqnVAY&P>/¹ S%xV:;;տzZ`ʕxGرcb^k:ko {֜4h "=zHBYv-A`Yj^xGr=''l۶HND9;vt`}#usCٕ/?&bڇj U_B 1 ɉ)v>αcShO5}Ľ~'Я4c<W.&'C x8!BXw1FD/ E`S*]186b #&2\;jZEmI|ǖ!d.2znU%C<:]~=r^x]vf"Iz^ Wk|kI [Zsk(Bv},]RJ@(@5W Eɿ+hLq9w4sXwd.fs5(l_&50]B+b+Q" oI^d#/nkqmUf8<De%,QN ]T\H&[2N ZV.Mہ%j8n{{_dl۶ѹR o@ :JU oX xw#ג`i77YuSfy8y<?qm|3aq2P~ݴ`'JA ,!;l 6aKp x%MlkϹs3wtP,c뷹ȣϭ5/Dpuټy/GeK3k!qDIZXBP6]k8&OܛVa6Asq1vE}a}JnF ߇={oyKzLR9^^* KJnUGn;޻;Y@gzX{ zBìn Ɉcs-ÿD2#n77@%lz:4p=& #@bρ9t-ovA䘚򙜨u]o"yv /ޭ:&[\F[[}9v 6lXvGDZH\G%[*F_ 7k(}[ҫ\KIKa;?ϦMnG@s7b]U :;oAWj=׹h[/V@׊® _lE8dN`T| d6(&oĶ)dL6@4UJ%õ"DJm#AkR[8q-tQr̺f|̳p}Xaʁs\8 ,[SjײGJ$S-Hnz{n1Ek/x<6nZPȍRΖ A $CrүWfbwb(R %j,)$"z5vz!>/"aL߭ށs9ܛט:@5J<7&r|N-\Յsgs|2Ǐ{aK */&f!`5ݵ*c_lq.95?z OR$Pp%ɝn4;Wnر|3lٲyܷ>X}oYQgrGGQMyVW[?I䫭N*9!~Z6Hek /pU!{;w}m\U0b7x pȌl;a6( "0125Bpq;DR* ֕Fx~B(aA7wz9ڻѾeL!\83萆Kgﯕ߭DjAR)}8j8A:m+BF$|o"eE߭<^\L^jj_ZA3Tdo wZw߼_8;GV#cu~slw.J]AIzi`~Ƥq, n*6oK_<׾u{73;:ƙSeo߱ftM8t,=<\䨈~(СqlJ˗k)P…'8O4L,2KV'_$BІoND2<,6mo|3"1ioԖ5GI M_OŻ^G>bXaXcnB.\!h:v咾3(\̕+T>B == v!kjRIu(bgQ222KP,06^ EJIM FG ә;ad.<",'=01<N"WĀ6ky %Q_'hS[YB%YB1j{Zj<_!ZpdPEHlLb`98{٬ Q0Hl4d*`2UBP,q+1TM}]\,_kII2h/Ą@RBQkr9η p({_<\E!+LLdlzKd2ZC[P̲S?7֪Χr,}tunP̐Dz#5X]w(}YL&q6nbkv,Z%`zbb={066iTAaQ,_ºd) |W<g(to[qs'oaœ\|ݾ@ѨɋoÉ[$|Bj  zO^__=LQ+WGNҿ~V\~H 4cZk8NW(-Yu/)8z}Ҩ{"g(CY eIA\Ґ mU{79*DtFqcԼ=ZGX4'*Qa5r!u;5N2;; d29Z軟?K/ra~یR%sLLLo@Zcaz۱\1:2Ņ 8qG ,J,V~ cG3YΜ=ͷ=m]AFmӂ@122ѣؽ 33MJ_Cpo#q] 27_?CwwOtۉ_AHz`8wĖ|it fFX d)gq۵@ur `nSosyeRͣP|Xx8ikvlPd^,tu*efYqNV@K.ǟaa"mM4=Hտatt$"XǤ9 \ěgm?aR̀U}dJ\w=L6#d:zs@C+jc[i/pK{r :IH^(QQAL<.H>=B.y@J %A6̗-ї"GB1o<;Z}9VӴ3m٦9~E5$Pd4رKqb`s!G6JGNX!dߜG|˓\8cǟ|ౝ n.UYW+ m۳=;ω8lXq`ky400{edt'8p{veǎaM=(…Q&'Fyѻ(0HųmIg-MHf{zz ÇOe`yDj4A0i֬颧)ON ;zA޺!hvfr_ǷCzX? LCU/ˌJ+mНِi˱훸|2_ VܵN$c8"VLQ/U$:^GQm aO^Ժ.ngPB&[V[fgv֏Re.\iNRغum=@϶VyO&ӀGͥVr2+W󩓗9{2Ss_\.nFıI҉q.vjW7;7ҵ/s 8GsyRg6dķfu_.\!J F4D3w O RLi25Ew9mR+i2,MFY:2<3;Vuk)OܿȀJ7@6D^;r jDF$a]<(O$ > ?nv7ⷨs Gܷk=_w_}{2ܹ|OP,`*~dNa4Ѝ bm)τjS<{B&Dv]o=zy j9Z -zϲ|wuKl`t 21^flL SenmULuwmqLLxqyBe:nFxx)u!S_ϰUxe3ҩI2ERwK/b^ "bPLOP"r.-oY /ټky}J.mC1+),s#ĺ9%XŪnC4BZ}W0kZ".h < )VF*x a/_"*:WeEʣ4aaĭ h;r-0QfBFT|ܔG ]ep'm]iۯR$BKq-@Sj235:sdoV ^ΓU3'G4$2)9@0"Ř=wP`Qn ~ɶXͪ ١k5{=MaF "=E>/!D6|P$x6K&ukZM҈i?$3 Y(DׯOQ*/&!T@Aה4#>qf\Ȼ$H2d3 "=fj TM^h:-hDce|?)mEj:lDŽgcbnZơgS!^MR,j-e8l\q\,ŶM { !¡dᛳ*4ssɀ iWdn5n:x mAݡؾ6HkCEk"5ѫ5۶(ٔJ]vLЅ!Tʊ|Hy>RVTUJ%uu-2YI6/p 2Yal0d'5j5RaH*lKV !Raۋv *Z~yUWRVTkmCޖu9tB>,૆V(usR`rS=2Vic^ <9dࣚsb (Fy~y4N2yڍ]E_aqR~mE9xb5-E״B8@5/jN?Z7n݆t#Lz@ OI!:h/3)KeʾbzF1i*RϘ`K Tj_EB,CZ$^ȩbR+y[g5{!@)+M5JMSy,), rAkPkthB+ccI͒fd]L͒f-m-ޛ;!0K}*!2 i u5RH, u )vs7!L-~yE!yE4&X"KO@*)n$mC6g~=!4a`CRi52)WCFGB<߈Z-mo+cฦaC%LڶeHe$\HE ݀&+n'dhh~nffROA Z# u:(hGc0334=y:;J@ ". Bڴ| #=&YUK$BT7eTBuQ8ZJ]þE6}.s,_ƏV&k|(_Il"{b U%{ 7QKvsh+61&6hS/MZ@HShzSaYm݈fgs9ܜfjRIdh~:GrA6k~+,Qe4~?RDuf Y0P|YSB  d21^vA (˞ aFhUMZ 3S-4^*%mA?K,::Z8s/izk8[A6l~R: $FhTH#TС}gYK60 0֨*eYױF$*նJ ;$ a2Tyܜ̏//ض VPD+Lִ?0\,kɫeilP6(cx۶Xh&fB&D bA>dD(o,j0 VkXd2izţl+7m[e[xGxGtF)u!Q(,A%) A a.P@-0\QqJD_'!)Ntx;WpHjrSsY:;;""la Qm{|RRk]A=Ϩ֍N+>xO!R,VQei}(BGrTsB o`$VxH_`=H#\@\jDR0KR ME-jV@xx8hej O9dOкH46(W@Q 9Qe2*%]#ԆŚԊQ,m&3@ "Fbkp58 P #e.چ[WjU `䱩00'D4Ԭ'< /T #⧴!`aY˭5hƤ! 
R,DŽH)*RJ8C.6x% ̍=,mrLD]Hط _9tla$ |}C(UւZ;!A`r^qJD}6keF%ńjŐP5Wv,L.>V9JOF0R, s0"ٌN2w4Mses#[GpHCGD9WAn ˺2"JQ rm)uz͈ȸfY!e5+e ̱뚼C˺0lG7ׂWyɚuIbQ:P̓Lm1w1 50ހDZs._R̮5Wd]]%Nկd_u5 f2IT(.]2XjBahFg&' 2(ą|ʋS=4yC5CXQ ye&f'dsfcuq k qX42WzM+̄L~mj, !Ţٞd2f<$Pcyѱj UN7a6dse A/,C-i_zE& Fm#|}EZc 5e:hƱ0PCRk5_O6:=8KVxIŸnU!,sn56/t:S# KHbWck9E*iQbBt2MSaH9!q&-ZG RFJ#GM+8 4N'M0 ! #:-rhYP.@жQ6y|d.|ĥU:XnydrDBY] `ɁyP{yO:4ٻߢ>㱰7ZsV1joLt%ZLi eg\B_{@۔R!EfE ͘fJ3ڙs{j`n(ʏ!t ),` U)M! 2#MrLbL(GV,4z1墂+*}!B@ooTO+3[`ˮp; $3R&e K2K**["*g;#mwEl=تp!ÒEEI$b"ALy3i佉Dyܛw8{ﳾ!Wܣr(\1&n)li&` `}V)i0嬯g`Hg0#Rn6FNpďkf$:zF0bU]],Ia˨NiYH&ZDgPT؈sϦPikrő#PYDaZy6Ǎ_tҝZ6{'wFzi& MGWRψyG6+Z4|_\c4 $ @@pQםNo ׄkLiݍ$1 kt#T")mP':{SsPuOu)\ iYkR!Gܐ#~׬0V:h#u d {5h,]74qu7+Q4rZUu< ydJog]g JlFr 8'2:3NY ˭+Qu XkÇo9WJu긲 ~(1ecmv)J%$8u3@A-w9΅SdK v +CL,@;v&juuՕFá{h5vi*4#K{S|ڑev,խ5MutQ +F6c"Pmo0?qDs_tOePYE.0 x>r࢑'I95z,9 fhȑ?+8uRF Vݢ"}Ş:;ꪋGq>q8FF(My \iҁs{݋ Tnc+گqΜ@A&+vNgQ̃ci0)ŵb~,¹R8\`<>dr,OqڹJF>8I8W}FqYT\=67_5oJ8 cVɲ|(YNk4j_Vp<Bΰϝ6y}jC.m^f{z rcf}8d50zs˜tc㦑DIW>ҐVȑ̤|i#l&U)բhˍBN5j J%A*Aye)$-,22q[]{E4H}9 u̙݁ ]ǜ3ݮgP<wƼ ~44MٙmF0bMF\})r BU8V#s1M-?9wiE#8Ζ fp{r}m`x=8zߤ/;ŶI pb}x.DO doOc&l|gy?IW(l't? l .=Bwԍdy'GGRU,pkx04&* @ L2Κ߷Jlc 2Is,kGqK 3/}]8$Zt_1 [ų[v~ʕ f[54AH5Yd:ahZTZG Q’1U6s6=浳9 #ƣIbP6s:[7iC!ZuYz p.s_"3>0Y~4]E$cR;bv+xWaLI*++w1Y$4q@,Te˵C7H-yJiCESsq6pGc8ăG7ׯms]e wqNzC(F6L$/ HfVZ/%.$&TUr.o|pffxy=hZĘCkdHKe{> ӾOqi("{5>kn@6 ;;%!vL_w<v -cYbAYPqAm3#=mMWH>ށԍ*E)!Q7y/_exES-Fk|i*lj;ѩuQKr5TfFobx`O0APj Z?#ZO2W=kͩ;>ɡqV9p9\bSa4:;;y8| i: Y_5NA@ὥi4^0_a>sg,w!]rKSNd EU!c9]/<×ᅬa2UhYјG-͏$zo\c#Đ â윿*Ӫ`XP0wábe%>& Wd8'T>wd,tI _Ɋb4Ɋ8T#) ѸLùGg*tNec_؇IkpA8"cJUj-8pmUbJV&]t` -&fA3!a߾uWۨ6QoS{F(˜)^I0*K|޽@Jwgpjd~=8u=* }kꨮU P,WUUQeD]`>,F 0I6,$&1MSQa\18"`|dIw_=XlV4Ggg? M#X h#I8 !isN&΃(lI>nYd:0Ƣ(ڱX\ܒ$!˲@"#ƱIa7سK:e+̣DYUjʢi n<%Ijc֍9O cwsdOY!;)yr'y9~ ǏS7yۂ֮o+WOrOqC4ہ͖(SڹCYY9vE:O^Sώ:+|W`1a>33a0`eb1i/>?̉cn7$~xS iá0J10 ϥ?O<㑡1?0a3Ks*ܰ%zenlU|#NHyJxq7јWguuBhIJ6Xә&J|PUԵk]JgdYh4j7'ƜeX.KEhqh]F4Bgi , Ҭ31$I{ ƛ1|ů]W"eאuji,֡$6Gu@U:.e@6SngG@U]as%]xpr4&+6Xlߘpnʷb;ShjѠ-hD;"㱀̓Tah9pi:rlň?Xlq= ~8zK~,Y^.Zj#J_ۅwy1u>H |khّ5tMw)X]>3<|ӟ;|1~M4C&Hjuzq]pX[n:,6\a6BE# >`AjDk&s EM\r#t~NH.op_vJJqFFTY<Ĥf2%wI66BAyýn`F fGx\/<ʩ!" 
qg t:?G'O%>MSb%TFڠtB欮eC!y.NGܳ5aړr!J EGe*ł뱻PT,F]Ct{mm"pcrX@#7 Z#:bm1:}ɏ451+,41Qq#oD:֑͝T)z0Fc8$#sMr;s[֬ L_c PՋb  ȋhIѭϳ&]_wl=~W|W@-p $Ew6zPx,m!5U4uImKmp4 (>`ihr}c~鿹aF3[w8FTQkO;ЍtZ/#ԅdZaҢ2tN@c0FTsK#e;3 2ƴ"~auR}(kYzHh@JmɈ =YÒQMICUy^aI\Yl̶jl)U{“ZFZex9a-9y- !~G⃲l*E MkܠQZJ5(Y:b0JUϋ^7FǓ&40z<o щ*R N^pPxq7{o$Jeɶh ʏpͦ!M`N|)6i6)_Q*K9ujDg] e+lS/0*-uiGU8}43 G'k #fb`VF{ (tg Q_4NX+#T!zD/mH bgdA],$c{G|Y[" Xlb-ʮ#KEQb0cxVㅽHաA+`8 F#ǣ@MN>]{(P^xnqFSKFC$Ѣ N @hcv7B*=z|^'^ꅂQ.`4oJ1kT}}M bxVRYU#^֊NM-qnJU q㑐&NEnLֱ-u[V31/ߋ^P9{UU~sF-h4z|{%_%MSXԔrKi#PC)O|9~|Q͂8\^@Q;5WUXH=`@Fip5%;jmHR֎?a8HPZ$j}U H_}+W^ѿ6iJE%vxr$kMbsx}`U+4aD54f 5/Sע׹vkf^pŕ CBt Tkưhj㺯JTоyXUl:̵\~*uCTP]i e:=Ǎ)L!WW 9kb|Qqo dYNdK  zwhwt8.3w4PQ6T774hȮ_-V]C,2\(Nyq#%VmG5}X1,%r::<gey!q_mDql-K=^w}2l Ñj@qb3Uee"%vKm1yu,l{!(|tΓdE\hz ,elmG-) c{4kUjU0td>-v4a`62_]m/䢫 d3t`u/vjli\4cȢ-yUxJi'( ^lTԎ41|۵.[Oo?!s_xҗĹs{o~Œ|W7g7s3lmcFf/IĩK?^|2DT4&n׺n43 lnw;O eA>YYs_O;s(Q蠡 x73^z^f5y>^S_[S@\"@䓌=z?˗WX]]{qH255g^#Ksw]a*8Z㽦qo5AY\Sk=k>^J ߌ.0FDsr.ڣ4<}p asjB_Z DƁyN|(Gj97`6ae wޡXP XbzY?6"/ZmdBmM_yQܴ$P s݈i ōQ zфBhn֛ AR/֖%!t9Ȇ;[I&y&Qd,ݶ6p< `,ZU 0R,=eD}H ;sQ]͏KѨiy<.i`k fstRQfөg{ǷJk?gÇsx |_wſݿc=_D$-@z ֚'Or(, 8 z' RsZ Y Ag0QGVMuXPU5nbg4N0pؘG9c)1+pk:_w5>k_]Q%lj*3v$IFd绗#I ۣk"[^wŢ1ZxE`6jv8#I5y2IIRh4$i?G0J8u:_y;x-^9{j G R>|| *UiZ l2fUjk(So0>~]._y1l5+8;,oPWidi=݇Wq]#'C7{_& wqm9ӧ`^aĊBQ :x|UjRr[枭9\ޱ.JuEǹͦcUbI PC8rT`oe"^l& u;vtס lxQt@4ePrPvKF@Vu 19'*l"4jj7{</ /rwo|__'?I~~__k_g~:w?yOOx?`$nim իWŘXO0L&kFy1t6MIU4uIUAWfV$._ '< I#)_…M/=##~5[.U ہwޠ1s%lIҽ0;-ftcccc?Tx_hݦR$ڿ.43 ڱEEm5T nIKMڤ->0>of7nxMBxS?I(s,yx+ o{'9dII2$MW"M' u|$58W(?'M*W1/9<(e}gRl9c*Pg kwqgVRyȍP Vb8> AζQ8`#.%֪u{Gq: b4}nhC+)(,NV 0P,d$>$b6]γh:6Dq&H7>1wq|~?׿u^|E}㩧bkk .s?d0xoC\w"Y[خ:%uaEQX̩76" +;L3&9+9kfzZrח`m}ӧ/ȅ[4uͥK;3 AB/Y? 4C`1/ciHic:lrLSq,mA9p/HN,/ƗpnX:_!&A* EUȗTQDQBU+ck]4Ue#c9.Q }N29vre3ƹ)'{m&hKzFR+idQj mDՐ kX,(2e7!IWHi6 KDGE/` D2YqJuNc)9GiIX)͢2Є,wLSP$"M:TcHR [ HL9JCt}/m,NlJrܸPxOiJ4oE^bǏj4y`(\N%cADCml(m)amg\E&+>`PmfR;{M-]QLh\'4ٍ\Y&} ^$K;W f8$t{jہ4lvVV`2*я<;wiG?& vc[Kuo|iN>׹9Ō|t:e69ž*X Œ8c44$i%v΅T'?(OlpիV<ݜ<5a>o  6'$3,ty6-hMtU6Y^yuXIPڴNgNEd+ ko+L3ct7> 4at' I8֊:~rSHkğx%6nQ/~mAq8 x{pUu%Tf}[Xr;:?1]VgcQh8;Q'Xbk{j7 p(5X*QT455B =7H[.,Z S{NdÜ!' w8⯳7Yez 4&|Z_,RZJ BAzl*γS J\.Kur/$ #pV("F{rEɄGu"H}˞7Q o^)FEL&Dr; уRнǫIME|s5!n:6G5RT h9i;Q[[tfsL@x,Ѩk]nr/,3q|^QǛprӤ3I\&VV%Zg<NGQ@kzHc{{/ /9'xkW?&ƾ,ʅҴReY5J,KIYt49~<2|%)wZlb00 I*GUؠK} m#qY3OOqX;16 $v,Kzn#O1b&I" N]- Mcp.upVaGU; Su-KIB֦|}o\i bsdmUso/Fkmu:NaCv7CBű+?~+V649T`mI0Hvd6!:L*3vH,ZkP_*|o'zjh ƣCF}+s$8B1=1 3s!?č|B*+)PaSc6׳}%KqX땪SBE)졮P~<2Dɤw% =?2l3b*  w'8Yb8aTB(JəwQ eBByqKP 4/͐$IIc2M 7čPt}f=h݅ual#cJa. fd/CQJ@qHXKpz_\yUy>}[<2;PftwIE1"[aԡȐ02#QZTjlк~ϯoYc(ĠVC&wmtӄuUApŷuƇ{FCU"Bj^`:/2ZyvPiVFu5LR9ӠK.O5KӮ'M t=fvI꽋j7f.\O<6__s˗.~&F"@o:iMjjU7cK2*tSl̕GwD0od. 
SjpSkaWU++Egb4Pd#`HG 3TLxW AҒ'xLiFUtaMюBCJUyi8]>NyݓyD~| _`kkhGgϞG_?_c'N:yyG{ٟY8q|#o~ 3.`&(Њ))ŜŢ`QUeo 4 OBW8Od4ueqXkQ8uV>GD'OiΟwdsk:};{"u7S7ގL\YEZ8l9@1~}qߘ-\%0Q#OkL{F;(DfLYaTd Ͼk`S8gP.u%c!aH+HEjPNIF{j4W| ͜O(IYȨ2Sؠd¢RMhv ey<۟jwb``,H)lY&3`iC(yy|C萆%Ǐ\IB"<1C0 b`M,| Q7v }}qYRUMkDJ^Qҿ_ ii2L8}4ֿMGQ2&8{#P86N-IuKX0-zoј׷a<fdu1Nُ!ib~^&z2J I}IE+aO#C:HW]KNSɸ1"dԎi-@!p(eCdDXtmo tV߈Z@vȨ4XҨ,+z3@\!yLM"sH]5[@`(xU6va:~Gl4u\ dB:_3ƫU4NJ6/^dggSNY Ֆ5c:b<l;+$\YV>:;o954uEY.U)fD0F#au%i6KD Ћ%I0F`]_\d7;O'9~^}%ϾG9vh`zYo.8Z.r#Ə>㿵y@<'xyn9.NJT$_x9 O1wT Oǖ[We(g2(rYqR#@d\ X|B唤T>9J$| Ն Q.}EJF,ȩ}З^ؽKCaCKH{<ܩ` uN>' H@=PH[yyMጘ@,`ZpR"+JI.6N>^ڈŏ߾"^FJ"s~{36"blX8׫fYbXrb:p0a,(|=[ׁ„lH_;&ͤ" gG;FF+mth#K mU蠶N9u=vr*BqniAQh0}{n &|0lt:<6G;{̺mo6~u-\ѽWa cbbQؚMJY}cro]ce¥n !R/pO}c?v`G a;{-!J _첣h+rWe\WtڋŢ(- MFU|2% Á4P1 YnI2&Nt FA*T{\1^btW(FlCJ9/gJȖ QjiS `:m9.MRj,A+8RUch0օ`mVϮ{4b4u *sz;3#F!`OaQz '_Fϡ%Zi&tJT*>4h?t@0FC7*x7q !EY䘕Bʢ93s{E[P,Q`qCqNF&bQΆ(id b/,kkTpQtm/AƔឪJr=4.4Lm2CdIR^. ~F#ZeVw^w~$)E_diy[ܿ *T-(9yRU%eQCv;FiT# qlZݲU)& Mg|xyFQ+d<嗯2/,Z7ZvfiduDv96/}R?,2|ѥ~Rvx+JދR`ITUIXaT'UL y.F1oȽ ! 5Pn XLH֚aչQWYxZ ir{t;Uw.bQB x˨npE*ctO_|Oы MOn=rTֳb*:3 *I$$!1u| OBWHeeYl/pSH-,nj<Q5ZwQ(pln-.rߊ+Og$2~ dJK:fxL"SŋkZ%g0:x.oJwD}LjZ G(ѣy%,^82 6F#cT9sٶ1-MuAĢkgB>9Ȓw t+%5t9`3ILjMgv䷓=y]msG&Q-cQ^=bl7Kb4 FHc m'ZČ5͘{]QŢEC][f:bƦoCY޳X,HGޖZpQtiqtdJGѝLtȴlKӝ;[^DQV&.z#3 Ƿ,黽%Ţm+rOEM D̉YZO\IT:d#_6:E)LRTг P&$Qm0 $?xk iޖ̻TŅ _xs/:Tj4s A  'ky}3.T,)ik=Y&ߧi!T*h¹/ia+ {BDȮHckP*vq?vb4[|@lbuQޅ7R9OJ:Mti8Y][am}2/ˊX0_)El*Q1KXR#(0h\hY֣cq^HLaRO6dJI6$haXiSh9 ȋ[yX9Wk4N{ח޽"сy014ʵ#S^SW2ԂC"c(k""lO2 +X1*gʞzIgps㸁UhN1 a}=ݗUBhf*˲\7V?s/#?O8tm/]9gv5]U H2vj-y~GVCȊbuIқ#9\XgsLL{e*X2!Du Ub;HZ 8G,XLիϤÂGoI< i""bUd%0yz FU"@~7 $ Jt|+%Kc-zBMUNNCݍFRBX&h>ZY&nʍF@N}ƅ7~Ll~Qѻ-@]U)mַ5b"JbqyG6WeCOG]ctiQD_l.hX[^ \[S$@TR$IJ@c4uƤ$1Z*dnڴHctrz)Yqbmbِ=MV"6Td|pI8  ZE뱶 V&c;;S?‹/>0X_/z;#'&QXMaLul[OE{13UU2DH9R}blTReMI .~gPbhdLpTYC4 } ax'dDbN }{-SQAOF~O1_Rz͆7L:GF߾qJ`-6NoIQ]HI<Hr_tIR QpbfV]'iL.rֆ5bDsu-3~K@wyoZ4=V8REhyoQ̌~`?O(CEu[ GJޟs.ĕ Vr"Nż{V61m0yˤ3p5v ҎA}]K嫲``0 9 ݒGw$0jq,8|=qs=RM#_!UBo)njZ`R"6L6Z| ͯ]}6Әr %f$&7zdݱ6:RE0 ȿvVa1ĨզEխ 0&vH*\L]r צ>rNDT\M1ZGw(\Ų.x.]kkpo 3&Q!tG1p i*:3k$I"ڶ` dԈ8uMQdYV-~ E8ZR{0"@Y,Y]_gxeiN2$F:T?f8T -+E!9Hg狖~M\@]b^r2[#\^A8Ʒi { h8_=đ Yf(ʆ[ ~ku~8?w>X`VTZ4U Y}H!XEQ4l$ؑS'7X[] FE|k66<`襨gg^/aˡCz8sr#$ήZI-ig0`>[(kk$Ʉ.s=z=rn8DpyхKYqUdB0$h- 3$&5OAp"g= ~=4a<^N0vEOf qM=GoNYfB |@U3oT|^Ʒ3W)tM}l66I݋ \]{l(Yʢ(j9EQPU5]Uم6VJ"[=w{Q*~ޞc4 +UfPW^㟸|NfEÍ˗v|ysg9{n̄,qB%E\O|LF/nSs*wӧ1f (S}9|yqIN@%\:/ʧ?0znbFQꡌ3wl7YfM;3_ w}-^򽗮s_9ƣ˼'yN>S6C-@=`R`3Oğs>ƙ'8*]O4J? >P{KݻƖ?8ލA#8V;vzoZv?G0-IZb Y@nY}Ph9I)zH2Z׹Ά笭1 'ūs9c9D ` #VUt`C.q,3^5E.^S?+{$J)jȩ;1ϴ0 ._CJ{b|\|A6̙ӽp#GO˗y;?;ϕ+$LnƕS&+̇Wiq '5.^FuP90VM^go©Sx{H;-<.TƩSgYW'Y[?{ zI{/{wق<[;udc,ߛo_qpwMX/({}x} &Ο;SO@]7?]w3W 3g]_<ݳ GָQsI8͒V+~ GULb(ˊkW7yu68 Ys}Ga 79Nd3'{7A666z׺g6 y;|%Oԅ甀!ر<8uY6XXϾëş9qpǏ1>v2ӸlQK̿և={Cs@k|=^>uX׮r&рNb6k:{y$n\Ex\z Y 2>}388! .{/ ;Gnh~7T]Feָq5"Hdfз AoXn#u;>ߦ߯Zv^zn\NH<~qp+zǃn G}p/^=G-77 ]fhzn/^.wpqF2 =;_/V9myYex?o,mPUޥ{<`88x/)sro GُV>vc|wp9V1>IENDB`dipy-1.11.0/doc/_static/images/examples/stochastic_process.jpg000066400000000000000000002675711476546756600244750ustar00rootroot00000000000000ExifMM*t|(1$2>?̂%i$PStochastic interpretation of the contour enhancement kernels.A. Accumulation of 300 sample paths drawn from the underlying stochastic process of the contour enhancement PDE in ℝ2 ⋊ S1, projected on the xy-plane. B. The contour enhancement kernel arises from the accumulation of infinitely many sample paths. The gray-scale contours indicate the marginal of the kernel, obtained by integration over S1, the red glyphs are polar graphs representing the kernel at each grid point. C. 
The contour enhancement kernel oriented in the positive z-direction in ℝ3 ⋊ S2 can be visualized on a grid with glyphs that in this case are spherical graphs.-'-'Adobe Photoshop CC 2015 (Macintosh)2016:01:25 10:34:34P T9X TzL&ff\(Creative Commons Attribution License(1HH Adobe_CMAdobed            " ?   3!1AQa"q2B#$Rb34rC%Scs5&DTdE£t6UeuF'Vfv7GWgw5!1AQaq"2B#R3$brCScs4%&5DTdEU6teuFVfv'7GWgw ?Tzʱ3kp _)oE_JLu1fNEV yǵ겳phJ2׏nfk~MX۫uc8VZ_{z/}+}M$UO6ʯ嬢۫ȩ ?gk=̲}WWf~ecmF~߱E?(S֙*gͺ[6_UW}v+F2-ssͱ v5?ĔcoT=5olkOzI1 ))tLHO )Չ/ wW#KyWjI2Rzxo[蹍 ǮG%ߥfl?R{+:z0n ?5zU}c5S@;6ɏ7'y9`7Lh?䤧7uLم\_7m}qeu,.G^?[(sdcN]9o:cDP.7.#ihG[}`$LJOŷ}u`"WYfc[5ϳp}b8[+a05ƭo*Uli)ȯ'Mc2^ں_Mgf[Ɇݙ__Z_Wwٽ,5p1en\ѳk}IVŅK/7яԱFF\ б֛2Ge̪Z?Sǽu<8=.k`,o$%+_!G+u]cF VYlwŻe6A;Z䐛p@I~CEAm%$sDQL#$05{xp |?"JAՈ _|b:R{ u5`j`tK})bDuH6Z"Ѝ(?"~CFyk~_Ψߔju.->m.q-i`-=L-k4d t>E֖e3YfsZ\꟏uP59ik\;'ٿGkqqݎ4h>ק~}'Ȑ;n)l "dIr6 @`Lr>^NH$Zݮwʻf`8ɬIkߤ7+erN{R&>QË"95T5OG0lpcvpSܨt39ۜ_-ṋ֟^ ;h& c^T,_4vAo;\etnݻ#]dM9`эj亞EYxo~ X0%Q4vo椧2:nĵ3p5߶mچMOl3זbfza۶?;gY+5atݵK_{SߵjsGNo1FޥN>/mlU˭ʨ۞@c0jT &l]/[?c\8K;n{j椧(;, KD?8oޮW{KAxA}./v@{Q7?ooobߝmbKl}?}籉)]a1?,Ř7XF%ƹg֣>LcIt;o)t굌sf^K {[3+?Q̯bko-ֲkskQ9;麆کrqq:vFFc,z&<j3ql=]KP9omc>˒v55WGFP/sN˽ץoeWڿI>8OѼ Mh5wTz?L=/ h%\m/o~oH]EIӹ힯ߣ)3D`e!EȃJd\KD 0 =yβ-m,sn}:͵Ŧք_K*q]uI Wa Pyhѵ?\ݔ^\N{`n$?:Mϙ6',ԙ'p0jh#mmϏ]FOo@vێth?q{')Ռ\]x;P47t{WѶvLf[LhO]Xmy66hZ~=}45.fkG>nIO.C-h 56OӇ5bWAªawC} cC=[K\Iv85@& gU98to!p\FYc]RA-kwњL{rJhec6 KXuaTzv٤ i\b5,$"DpRY]m.Iq@B>kU X\6赾?p]a5ͧuwx4kdc9 :tm?}uztM,O'+*6_E֖EuoՎce^z7au1}d|26-*N4{?@}?#UĖL6}NZ_JQf;h0Z,=oQ34){!#R ,~' zke|'Zvծ_Xe ko w_癷YXFSwy9BY-^?V+wc߉NF0.cmM׏Qg%[t3^>Kݡcje~ԳԱ4~̠b=\8~Y~']"9c\ ~Ü;c!U|Ө4b`s,wkg{K#)9Lub?auN&Nkή>ޥ_ezg\|S$)9H%05ܨVXfec\cAZwX )s \8xy#l`p=$:/`\=pGu1tp\یWN֯>1ZsZp4A )Rk-4UX촵kus v;U~{=dfEeoSS\}&kjmsgϭ_+ @׵k^KZY؝c-o_CC]EQY6d`1w]j[mr_?uNmL1l?XnWD/c2w_gum~ا#8pa)KҶ@pNsy"N`ÛNּݏsG^Ϊy]{at179T.xmqǵγ}{~[י`2XuO5͟glVzlK^87Fw Nqː9Ie#9|w*]v·e_&mlێXLX o[\mc  6^UE{ ha X=CbؗTۛlI:jRR@.0{ݧܓւ5$P6qG^Խ_?$W7̃[[Ԁ cD)r'}Ɔ>ZvoRSh9 o#Hd*G$~kD$ :jL 0~⦒}[_|b t VW"{*5cMncm,'oj7cb N;c[Vd]7ٽuz=Q<5ֻeM& c?=N{ {!͐%m/fgzӲz7$6Ⱦ4fd:7V߆܏*[:^GZ2qqO@!4;k}X>k-DԮϷYVNEḙ-Ǿ[6͖,A_zV]%h "ĺ78 ^Fّt`r T!A/{700v+10_`Iw+:}S#se8`ӘX[nΫ'_gz[^EZ#d6Cqr?i'hJ_VuԿ+ZnqKĽqkK0;{YmRfY26iaI|r۽V_I#δ?/unn-4fl}Yq\-p?9je);OIzq?I:J/tGٴ~zDoos_r_SI\KϹ/t̚0'k*xLc[oDͲ \~#w&dF9KpqΛ')Gޅ V?a{-% 5=VάIFnc}= S;%8qKLxm.u1KZ=.{}A5H$x pv>G~I{6'^f:Kdx?ꬬj][[UӸ0CWc>o_c\{o3?~~_=[_|bf sVfFE=ײRC[{5Vtº8Zk?SCMo% eogM? 
~I"CoNֶ$I&.>-X7e41V kګkZL?ּ0Wn+z'#f7nqvTcrkȮG_UOf33rexAŮ[ֻ>,rTD7,<וcHy_5hm,%}TSuFf-es.qs{?7lo csYk{\f֚1s׶}jNy-c֗ 'n-j.;qK`% k7uea>6ǣ!R^x-֝kNoz%7=0yv#ՏHMnkX>}kl}U\,KsT~$y[\[-l}=vkz{\Klp] Zq~o'WSZ\aNN@?j5 \]CXd ۘ;ץWKKZ5=GG% cCfd7ۿ#ƶk {/eDWI%j@$rӤ84I׎yI=v>oCjc[:f˝BrY[{5̝I%tjS@όG+eoJbHH `DJnZw4#kx 9 |EK9ki "zw{ ikvS-~h\|  noVqUa;s]KIX\-w%P8980ypnwzuЭD ; 6`oIL i Б#cEu1h5)(i=ݩ?zGHǙ_Pn( lhsocQtTSRKTSRKTSRKTSRKTSUwWB:ײ2ݓ_lU:kȶg]4pI%?TcI|W$ywk鯕IOJ-؉m.q-i`-=L-k4d t>E֖e3YfsZ\꟏uP59ik\;'ٿGkqqݎ4h>ק~}'Ȑ;n)l "dIr6 @`Lr>^NH$Zݮwʻf`8ɬIkߤ7+erN{R&>QË"95T5OG0lpcvpSܨt39ۜ_-ṋ֟^ ;h& c^T,_4vAo;\etnݻ#]dM9`эj亞EYxo~ X0%Q4vo椧2:nĵ3p5߶mچMOl3זbfza۶?;gY+5atݵK_{SߵjsGNo1FޥN>/mlU˭ʨ۞@c0jT &l]/[?c\8K;n{j椧(;, KD?8oޮW{KAxA}./v@{Q7?ooobߝmbKl}?}籉)]a1?,Ř7XF%ƹg֣>LcIt;o)t굌sf^K {[3+?Q̯bko-ֲkskQ9;麆کrqq:vFFc,z&<j3ql=]KP9omc>˒v55WGFP/sN˽ץoeWڿI>8OѼ Mh5wTz?L=/ h%\m/o~oH]EIӹ힯ߣ)3D`e!EȃJd\KD 0 =yβ-m,sn}:͵Ŧք_K*q]uI Wa Pyhѵ?\ݔ^\N{`n$?:Mϙ6',ԙ'p0jh#mmϏ]FOo@vێth?q{')Ռ\]x;P47t{WѶvLf[LhO]Xmy66hZ~=}45.fkG>nIO.C-h 56OӇ5bWAªawC} cC=[K\Iv85@& gU98to!p\FYc]RA-kwњL{rJhec6 KXuaTzv٤ i\b5,$"DpRY]m.Iq@B>kU X\6赾?p]a5ͧuwx4kdc9 :tm?}uztM,O'+*6_E֖EuoՎce^z7au1}d|26-*N4{?@}?#UĖL6}NZ_JQf;h0Z,=oQ34){!#R ,~' zke|'Zvծ_Xe ko w_癷YXFSwy9BY-^?V+wc߉NF0.cmM׏Qg%[t3^>Kݡcje~ԳԱ4~̠b=\8~Y~']"9c\ ~Ü;c!U|Ө4b`s,wkg{K#)9Lub?auN&Nkή>ޥ_ezg\|S$)9H%05ܨVXfec\cAZwX )s \8xy#l`p=$:/`\=pGu1tp\یWN֯>1ZsZp4A )Rk-4UX촵kus v;U~{=dfEeoSS\}&kjmsgϭ_+ @׵k^KZY؝c-o_CC]EQY6d`1w]j[mr_?uNmL1l?XnWD/c2w_gum~ا#8pa)KҶ@pNsy"N`ÛNּݏsG^Ϊy]{at179T.xmqǵγ}{~[י`2XuO5͟glVzlK^87Fw Nqː9Ie#9|w*]v·e_&mlێXLX o[\mc  6^UE{ ha X=CbؗTۛlI:jRR@.0{ݧܓւ5$P6qG^Խ_?$W7̃[[Ԁ cD)r'}Ɔ>ZvoRSh9 o#Hd*G$~kD$ :jL 0~⦒}[_|b t VW"{*5cMncm,'oj7cb N;c[Vd]7ٽuz=Q<5ֻeM& c?=N{ {!͐%m/fgzӲz7$6Ⱦ4fd:7V߆܏*[:^GZ2qqO@!4;k}X>k-DԮϷYVNEḙ-Ǿ[6͖,A_zV]%h "ĺ78 ^Fّt`r T!A/{700v+10_`Iw+:}S#se8`ӘX[nΫ'_gz[^EZ#d6Cqr?i'hJ_VuԿ+ZnqKĽqkK0;{YmRfY26iaI|r۽V_I#δ?/unn-4fl}Yq\-p?9je);OIzq?I:J/tGٴ~zDoos_r_SI\KϹ/t̚0'k*xLc[oDͲ \~#w&dF9KpqΛ')Gޅ V?a{-% 5=VάIFnc}= S;%8qKLxm.u1KZ=.{}A5H$x pv>G~I{6'^f:Kdx?ꬬj][[UӸ0CWc>o_c\{o3?~~_=[_|bf sVfFE=ײRC[{5Vtº8Zk?SCMo% eogM? ~I"CoNֶ$I&.>-X7e41V kګkZL?ּ0Wn+z'#f7nqvTcrkȮG_UOf33rexAŮ[ֻ>,rTD7,<וcHy_5hm,%}TSuFf-es.qs{?7lo csYk{\f֚1s׶}jNy-c֗ 'n-j.;qK`% k7uea>6ǣ!R^x-֝kNoz%7=0yv#ՏHMnkX>}kl}U\,KsT~$y[\[-l}=vkz{\Klp] Zq~o'WSZ\aNN@?j5 \]CXd ۘ;ץWKKZ5=GG% cCfd7ۿ#ƶk {/eDWI%j@$rӤ84I׎yI=v>oCjc[:f˝BrY[{5̝I%tjS@όG+eoJbHH `DJnZw4#kx 9 |EK9ki "zw{ ikvS-~h\|  noVqUa;s]KIX\-w%P8980ypnwzuЭD ; 6`oIL i Б#cEu1h5)(i=ݩ?zGHǙ_Pn( lhsocQtTSRKTSRKTSRKTSRKTSUwWB:ײ2ݓ_lU:kȶg]4pI%?TcI|W$ywk鯕IOJ-؉ 2015-10-14 Stochastic interpretation of the contour enhancement kernels.A. Accumulation of 300 sample paths drawn from the underlying stochastic process of the contour enhancement PDE in ℝ2 ⋊ S1, projected on the xy-plane. B. The contour enhancement kernel arises from the accumulation of infinitely many sample paths. The gray-scale contours indicate the marginal of the kernel, obtained by integration over S1, the red glyphs are polar graphs representing the kernel at each grid point. C. The contour enhancement kernel oriented in the positive z-direction in ℝ3 ⋊ S2 can be visualized on a grid with glyphs that in this case are spherical graphs. 
Public Library of Science Fig 2 Creative Commons Attribution License Adobed         P  s!1AQa"q2B#R3b$r%C4Scs5D'6Tdt& EFVU(eufv7GWgw8HXhx)9IYiy*:JZjzm!1AQa"q2#BRbr3$4CS%cs5DT &6E'dtU7()󄔤euFVfvGWgw8HXhx9IYiy*:JZjz ?N*UثWb]v*UثWb]v*}}gag5oinIUE$3/ʗ?qe^>bPUNE>*qWb]v*U ׼ T!hm%3(v⸪v*UثWb]v*UثWbXN~qkCܻihw0pY]iTbv*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثN*UثWb]v*UثWb]Kzy榵EP0n&ёOb%N23*5s󧖯4 wwc-pAyf2Q*/>qlU4-8PѢr`74^v*UثWb]v*UثWb^{Wb]v*UثWb]c>qZk+xMmgqs6`<(H4ogHȑ5Xcۆ#Wm[j\G'Ŋ,^gkP2CT,Y#bX ]ƽh|ϩy'˺ug ΫbA*1sqY9zi/eu?$hgsl_q(UBP2vd8WbTm--`Fy` 3QUT 8gEuZO/ś OL$\ſ?Bj>qE 7h~وC`[yF<}c<fA:ƿ~n59T#7GF5ORoY?ozn*UثWb]v*UثN*UثWb]v*UثWb][w{?C=+WڸOY^k4 L(Ӯ+Z,7? U6Gm;ygGn,.fY X駸qpjzһyS|%KӬ_Z|Hmc_F4QU|OCH| _2X¿XOլ$Ώ״u$n?|U1j-{jm7z2$ƳX!%丝]xC*7ˌ~M6^Tl .Zo I Gz gs՝bF$I4rN*=~SyRy̱ioz‰-.5UBDTκ?$biyC5 KP R}?Qwn/>ӣ"pY?//6MjIa\_Cm R7iddJYNo[Cv/VD!I$MKۮ*]v*UثWb]v*UثW>5?*|UXWb]v*UثWbX_9?JcZG4[̏ig s$|U*T]N6Q. @u* 9|<>LY'NѵPRHJǏUp?f6|U 5QW:=γ}l;;iWbM*3OT\CGgzƥf>a5eeq8y!OVg%#*UثWb]v*UثWN*UثWb]v*UثWb]Oz_y極}ȋX,k9M'"ܒ)œd\U=o16tǫdu-\nZL('J]K/?WU}j bYUrǒ}U3nk9a[֝LHܘ2ڧ*|o3cA]JQҭaHH,1+$- n?\$\?QG򦬞`8[Ca caoSIpSNn6]v*UثWb]v*UثWb^{Wb]v*UثWb]a4[;=}}vKu3}>nl'kgu\,|~x1d&W_Z *o@oETr(>a_o~?n*iQӦ*UثէK*,UثWb]v*UثWb{?PTyOk_T R x輁V?*?$ysS}OERYӹpԯ1$0务}XjF_d.3$j4vax/]Aum- 9n880XX@VV(DHQUTl6.%OD[ޕE)?^,T|83bWKIo=ֳoz&B7eEOGQ^t+u+tW=)u[建1W]8&*qWb]v*UثWb]N*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*Uثt>*,Ug'?)[{oLU*w=W|^ث?lU*w1W|^8?*7*?p b7ኸou]Fou*7~U>8⮡8c]1]Q]0۩[zVWu_k2 v+K-`;r"MXQ{{kTWuUR]׿gVKwg^EZ,U{{kTWuUR]׿gVKwg^EZ,U{{kTWuUR]׿gVKwg^EZ,U/C\-О+B^M 5bS\R/?58yS֓{dd{y֏ lXdjW_C?:L*b7o4ށb+C Q>bνS=XXu'Z2kwOG n)4F2⬿S- ,(X_0/kO,&RU-[tYmC?/>N[3CK őӟS\PnKQk>Tv*UثWb\H`:Up\UUثWb]P2En_wW>5?*|UXKxx\GmXáuXuv*K]v*UثWb]v*UثWb]Q㊻]Q㊻UK1Wr_UK1UÏQ~UvZ ؟_7Qbv*UثWb3*f!Tu'a2ky^cև4wr@}N&F& քOi`fNMA-Ҹ˻eҐFe1 >UV?͏kKYc.q_nL͊g_/@{}T~akS5fJ.,^D@hG语UNmvze񶱂;h8P"QXWb]q IN*9\0ccUWb]v*@&bFy.[{B>⨘X֋Uv*UgOwz*tO[]v*Uk*=[y]bY&pS7ޘDIW]|1Wq/* ኡӠd`cG;SVL-U嶎HwDbD1B`$)NZE mp0`EdCM|UȪu|1Wq_xbq_ :Kh[ګ g-R8TSҪ£ҴѫJ+0+t\Wb<1U+[HD9d!TTTs eIUEZ`8߯nN:|*7Qbv*UثT.Yiv2"v>ث0_/y.ZYFŶR HӵwSD"yۢOtF'-!薞Id<qAu$6p~g|R4O([֙}SVfZY!0!~[h>zHl5mBi9HE/CQu˿61Ma`\ 6Y# MNJ'B ;j)}m$uK*_⼓ *~XCiW*n. -@kr<ٚ;~UL@j3Nbڎ?1%J_?I17? &,czꤗP@bm^ت_,iw_ZY*#eə%DKtyWb][$ą`8<2ޞU]v*UثLꢧ8)AHf>LU|2sf) GR=LUWv*UgOwz*tN**=SOo33fx»*PPhѿk r)["&MNh.LD(dcibVYğH}ޯH K:j"Whe=(6@#nYlD7#G][Zn.H rTr'cf U"67][,QJ9Qd0 7pJjfj7^Cv6Eç[,1inS9f-D__,ɐJ)ثg)?.S`NzuǒogX+xxʟIz)-5y.ٝᶂ4y\# |NRfYN#8J[M*+qKtʼn Ʊ-4_)DDE"v*Uԯ\Uuj)y-n-+vEKHv*Uت坵ksD$QR(e;цJ31 Wȥ_7Qbv*Uث? 0Uѩ{vT/-|l$>l3*`9~5ה5nnmH~cʾPu{[i&[4[j2TBodo"a%8_NqG ){/KLܐ#OR|0^/-) +;'}УRÎ(KyFfGFu۟Ն-̟ޖ*O4ȫ "FEqJeo5 Ayw튢˧ED#`4b4' G'|UQ!!&튨[IV! Pӵ6'MNPt ˸ۮ%,Vi%I6 hDv[[ewm}=JCs @ËqTæ*Uتe],D>6=? 
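The recovered caption above describes a Monte Carlo picture: the contour enhancement kernel emerges from accumulating sample paths of a stochastic process on ℝ² ⋊ S¹. The sketch below shows how such paths could be generated — it assumes the standard direction-process formulation (dx = cos θ dt, dy = sin θ dt, dθ = σ dW) and is not DIPY's actual implementation; the step size, diffusion strength sigma, and step count are illustrative choices, not values taken from the figure.

    # Hedged sketch: sample paths of a contour-enhancement-style direction
    # process on R^2 x S^1. Each particle drifts along its current
    # orientation theta while theta undergoes Brownian motion
    # (Euler-Maruyama). All numeric values are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    n_paths, n_steps, dt, sigma = 300, 200, 0.05, 0.4  # 300 paths, as in panel A

    x = np.zeros((n_paths, n_steps))
    y = np.zeros((n_paths, n_steps))
    theta = np.zeros(n_paths)  # every path starts aligned with the x-axis

    for k in range(1, n_steps):
        # Drift: advance each particle along its current orientation.
        x[:, k] = x[:, k - 1] + np.cos(theta) * dt
        y[:, k] = y[:, k - 1] + np.sin(theta) * dt
        # Diffusion: Brownian increment on the orientation angle.
        theta += sigma * np.sqrt(dt) * rng.standard_normal(n_paths)

    # Binning the trajectories on a spatial grid approximates the kernel of
    # panel B; binning (x, y) alone, ignoring theta, corresponds to the
    # marginal obtained by integration over S^1 (the gray-scale contours).
    density, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=64)

Accumulating the full trajectories rather than only their endpoints mirrors panel A's projection onto the xy-plane; as the number of paths grows, the binned density approaches the kernel's spatial marginal shown in panel B.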
1U0'߻?h1WHE'e6*k^OWb]v*5@BБ**rK+VS\U#AQܜUl19#U1Wb]L=aQSBZNqVWb]ixWb]v*Uت]ծt}&, 7'"@itgUc sXqJlM_!ɪ1s[ ]Uʏ4l߻oVk'D h СIcXվO|U6]v*UثWb]v*UkGWb7W7Qbv*UW|S9={CrZ"sW儤_hyȖK9oB62\$dK4q?l{䋏1̺p{$G-_)on?ung' 6$2a@}X幇SѬ5տ`p$p>>.rbZ?Lno|ڒ*#3 {dsIXw2ڛ"[z$3@~4RbF>/a76N:I.m'9>vYux"1˓Ê/˚D'5@c`QȿSKۛKErQ@4:&&G Ӆz9b3E\k6_2yKsJrw_GqeBxFb$"_@`G ,q+A40_1\$SXF3w U)u1P⬃v*=jLa?ڦ튵t4⪱dv?I]#O* X*OӊWVZw[]Z^Ԛ*w ,;c b$~hzzbd[asOOӊUS|O*U(TWx߄uTUWy]z^gov*UثVVWb]v*UثWbR bV6zka)Ӣ_(ԈD^roOWb]v*UثWb]i}#oig犿7Qbv*U 8*o7wī5zktkRxeY+s'n X=oZBcW3H'?%yQfK2w]nmQ-23|p_86mY[Zdr "SbpUfb9"~*8|q'G=ɾh_Kl"ERܱW+f&i0qėZUѥݏo*Y[C?*GW^KW*[$>RkY>~?*XyǠ U&OB#JUd8w*4dlU,#cҿ?8++(kFt8W>5?*|UXKqVWb]i犷v*UثWb]v*:YHDT$ʞirNmv*UثWb]v*UثMoi|W7Qbv*Uث/ʏ.ZG!~_<%%8#IgukTuCPO<늇~sju冓MmnP?e}cց/,!N$Zӧ[H?o4ξc_bAt=VRA[oWeeS/Pro1NdXySB~(ͱW>Z%5G%IR!LȞ28┋Ơv$E 1~+?qRy'>'r"r{xߘ-:V:|:~YۯA@S$XWbXי'=Uv*UgOwz*]v*u4jra)\ǿ4?u ت,f̾GQ)e L^j-Տ=60^4oG^Cq oS2vK(רMuwV+*]̠*FOSJ37tC~C038w gI--IW RvuKf6A=.A OOf`v*jkwH.ݖ\3-%I@ɇ9K<S>\ L wW~e5YG×y'FfPEw-67]5M{H簂`S($|9v,/>$iKG[TZgdS9j_?\>߮98?C)ˉ~_s1+;lCumcEUeK gjoͮ1&/>Mxe#2ҽ+{ȫ\ēG^dP¿Ac.3 qV# hq_c͒ZW犿7QbNMe%f 44]{ϟ8,j:{j6z().d41b8>TY]C) ***0y|xR17׎ғ|_g#ʞXӣmSY)}aXȝJJ f4`/Å(%2iz}Ƥl՗%OE)4GT/t9!ek# hL D 7Ô?b?&H_OQoexai1U +_OFV.n*?hUs,v֮$_8yU] ,.6s^ܰfhw=M]eYT~*UثWbgϘsĞRJWLMzq$O+*wR/+6pU=%[X0_*f7DF6OH*fg_S㊢1U9'E@qUYV203с>C6aݓ>.>UX֥i]˱?5SH9 y-`rܥ#QOfZ|W=P!#sp0|UA0G|e'c?U0Te78 BqDJ*B>X+Ya)W8'D,fWaVU (]L=aQSBZxx_h|I,o- qry ERRo+y^˖wvr;w{s~C{ OR~>Yk#cJZmoݢ1]@nWv8%U(c1gLlܩ?/,j,wOe%~]Hh0`ÉS>84,rxdNcRXi(&$ Zwz"OMJ;Q8YD8cߩj^[cIQEyŁ?I(FQl>%K>UӼP,ё-j9+.83K@"q*}"M&DBݭ1d..S"/ʖUŎm#O e iffj9vS,3X<  }_&/[\\J4Bʧ؏Wz2GExEW)d,JEq3IC&)SB<0=?>IbNWkIu+XfkM~|=2fc(Hv:扬+ Y%@ReN$> "y?U yAӮ4<ʕ;^:7,Kp'Ԧ$z_mXKʽxusNJBW[R?ɬ}d DAЂaZR&(c>Ws"9!X@?a,`̆?+^jb,ӣA>ZTq)USL(=s8f*]@z&iϧj1m1Z2嘲jAZCYwZ^[QMfڃ*beeu1pLɬ}OTӵ9-/쮯IJS~/ a 'b|6I&7QbBze֐][LAIYXjS*5ː\:ͅqzinEdY{UE8ƍ,|X1q[:Ns,imwGrz喽nAk涷XsReI̚1<$! j E$S = IVk C[5f>qߵ[{8+Y.|go<`d6Ѡ?%(cb6pZ\JH*O/mHF(C[(^cM=&;=J(X^1<w?*W_L\FCg~_F*敦UY]=%@S8cv*Uث]?WK\TCɸSXÂ=!&_ /CD,3?ՠ%4I*4|Q0ߏO|U(yN㢯Q,-բ<'o*+P8!6`z޸˫¦=Nتk)IYHdN Uמ~cM SUT@=}⪇xķG)~vV$ ڽ?dxW\R#%^N*\G# r!{k聹h'*K(#USM"5hG~x\9v٩QQQ,lh?UDf<GUqWb]L=aQSBZ^b)i$gc )Tx\QZnivM-oD+Fb/bHV[o5BOIj~ \V9Ȍ/n#xDNe]'TNWTN ;Pҍ4WA,gT'߷[u5 W]eg:Mr74WE`68Ѷ]kA>)>4xmbQcTP>X'v*UثV)"F?[\h W_7 +k;@:ۯ*,U"zFwZ1Vb[VpG҄Dr+\EN6j8عٰQ# N?h7.*ճ۫[]>7,jqT6!24! ,} GEU59QOOW b ׮S=ݛޞdK+]0BxՏW.辥3? }U n XfR0?iYiVwH$7\O?ڙHV7w`" P6*4d2ޣuUm,#>-:E)~,U3=qULUتBU#T%%5#pKQثt>*,U[]v*UxWb]v*UثWbF=4mTLtLXF(xҸov*UثWb]v*UثM1VVWryF-w]CpPTtE8b6Pǧ}G~l}'労d[,>oQľjoU3%9lM8\g 7!x?& S/N[TKyjJ, *T^ۓO*<N_nU*v'ӊ<Β,O2!㊷/<,7ԋs7yK^&E*{1UVO3y!YjJr0-پx/0xJ+/NkbeƷƏw&+}:id)Bq7.MpDI&P^1Ia7#b-$5}tڷ ,rU,Up77@H[~C G\UrxۨO⪑F#brg}?⨬UgOwz*o**UثWb7o[]v*UثWb]cޡ}*lAq$M*/¿Viݞ;(Zs&QCQ3OU^t[Un*@Dbk'@&Ԛv_lU#7Y$籒Vab}I?ؾ*kZށzIsj_g޲9e_Yxf.(p>6S?}TK_W<+Wb.DWv}D ,dӗr@!:FR-?:Ű[.,d+;NK4V|%Oͳg`9%. 7'ub_@y{\״;fķ5RxC0+Ň/l5zi`,r˅ cWb]ݾcoh2A=i44꒿_UyKKַ,BUV8[[3=)U !^\r),MQ"x-8cDBO2MZH+Td?RZܲ=HDeS T=A/d$I'9>&RmYx'Ŋ|Ʋ-n16Ufoy3~*I~aMQ{w5uNC8|1;bk1khX914uؼ9/΍[Vս'3@Tw@"~:I£4oK[ȮEoFʁv@}URM{Jng7~rڏXŸN8ukimfA4p%`7xPw^qR3]8M[AQ#b/'eT2#*ZiNkgxT~)1U$S2Mfr? U1"c紒½}Ixxgk5olde7$bɊKkLKyO[𵥜m̏<#\Uq5 ˪J+qΪX*N֯neJWfVQqTJZOZEeINE_@ ^-xy"Ya{';o9bW^\K=fNaJ.ncx|KzK*s30j~Pf&xdՆߒY&T%~,U3nmg{4@սfPx^_T }'XmhSTJ4b*3'ƻEO i~|**)RM|qWbcȾzl:Ei%UE#1 #?!+hxP5&%u2H?6MzZ (I"ŹӚQwdhq ')|~RT_2C}b,. K#,3FY Y1(Y4G]"[ }%MtP J❾*YDq $7 Xo0C ud%1,!aR۴<Xs|_P/G#Pou$ON,U0 Q+Uy}d"(2ɽ>K!\LV:.?3?'w-ZGik%m_8O&HY:Ǵ'?OIoP.8|^(kAsA.x˓(iA_㧔5o/hKwm}7LcvcoLHjF0?ӥa ciu[^YP/_v^p'_Œ?@߼dļ{͚.m}u-S 2F gK><<|?wVd@=៟^^? 
dipy-1.11.0/doc/_static/images/logos/
dipy-1.11.0/doc/_static/images/logos/dipy-favicon.png  [binary PNG image data omitted]
dipy-1.11.0/doc/_static/images/logos/dipy-logo.png  [binary PNG image data omitted]
dipy-1.11.0/doc/_static/images/logos/dipy_full_logo.png  [binary PNG image data omitted]
=tvIs|czG?-~.\s g`}om|:]f#_+lDV?rZlɒ"2^d*Vy*7hV0`!J3^"ZHoBd4j:rSR>]Э V&\'ÀVe"9A^C\h! 8О[[Zʻ?{#mw랽(%EḤbsRB0-q#/ѠГ)qh*E۵EփV!M{BaVZ[*Oy۫-x/J/.h/D Wk}ތGufɯZjk?KDgyOf^ÍҟNTf+vNBp ; GxGg%VOOR݋n;3%T5^q#I_dI:K S9kLR6ok;/& % I>9Kq>鈠B0((\lʬrtsr>21S#Z;jgVןX9~Ч^KM?r`mPY3`{YQ^1m&>KBu#~tH h}$K 8#SjGs5cZj%s mibv\7\gjF,]olCK6z,?H1VUO]n ;a"<(fZc؛nBӑ7 _#k!<$RF1#_S`=\U"kd%zu9:B9)jeGT>TEr\!cxNP:OY9y(| IK_-} lW~[}X;#Mp-J$SQtNRs~õjq dCp5ٝsr)?C!w(iHޑz Kљ.bY9 (*߿`} >?!|DU6 V6%)#2:7]NlyKX cfr ¯9CSUq̳"$%-IL*g$>U=&Xb⸎j~@ӥ$Uȫrkr^O]xTVV G< O D%q:}wzKG'5; >gQsTnT~9}#-q&nˆ}q)yDhFs %XK>k_yI^ڸCOpVtNˍ$x'W)D(b #*.^ʑDHOTj.Hd*C`*T0.UI8JaU{Oi*Y..b.^rھt#~7.x KJw.`o\fXSt@4ΆlGi|vG5>YJtԎ5C.ZfnCfDJjWVU+A5URAǠAMke$YX2q@#(r8r*-K7L%U0&שXy`xP 5k|v[ t+ynU^oGZ ( )Xo<eEٹ*N {E$)eyYY$*FUepi"5H%֣cjC]Rgf:Z/X V?{d.\&i$n$I<[kr#xz0 FeNTdUvN-1&U '0 *VM)”z[$?Q!2kUT9_0 Vy<'DyWVI,XSVpm h!';l~aǤg"֑,* $A^9ވ"d<:{FΎ|teWֹѹy-ZǏO kWTc}h [v38%>(hkO"&Tetڛ*5j9cPȱuض|V 1y*b:;N5/U xx1T ll{Fnǐ'XN}6%?FzwQD- x;H_$?'84>gK_{ cbP1+_>WΪ.HO ]y& a;WBfHIO 8i] å!NBasߠso.wFA}Hg'3|k/1hnVƑ,K% z89Qu&X*| ˜uA뜢v70g윞 W/yZC;MW۷>>G'l{\?gwݽ yD`L9)%[s?*1<:WϸKC.Z5˅q/dk=ZrI^A 8"IsVFtS6hgك/5 Jǰ$.J:G=8zf[WYUD|w~'Kq?qvŃsߙ#ޞpDoO;S=y{Stk*ڽV(F&\ЂedcaBKY%Wggc?=+ ~vwg>~}]cQw0{)ˌ><"d;L$aV4qn pS"O GUu>"W9BPT 庯LE#-/|_1&!cZQsxU%Kc\H:Iӯ4jW'T WJj7I>|mEJ8N573Uj?A5ծjdYAMe1,] J#g}}#ocSqAm3d]Ϳz-Cň!Whbj3fs2o\ލ)6̱(/DXL>ՒqAN,PI ǸpZ|9)KeIAQ0è΃ߤ7+I`UU|$y I,NbQ[A|/by&na fOqstFaǐQ@pN;XLpl妹'M/NY 2' [pj53lXug΀|΀|΀7E6>c0EF=i iX#]5i%"JU*P8+cJRhJΨϗho#aʎ*QI Z%40v^M;0hw ɖlϝ鏯_`K4m3 iyIֿtwl}uT3rIňħc DEʺ_8ʬ /J)K( @/*f_v~{ oޓr2=|Ӿԑŏ?RՆ.QBmKA4X7FʑG"6#z)JIA"JFő{}/7rjCq_(8W6[!T#t}K0C  g %#9S1KbRN%4:ʋl" -M9;>+[!9X$~u{~.q*FC-xސRN/DOXf[6SUO%(Aiš۝D罨VsvK4k!?SoI"lΟg剧(]IGZ8+OkʜƳֈ4DAtjN-֦TG%ko"Y##1! D$&DZ9iD`Rbkf@pN2›TԷ SɌD.]0__:OJF ? > x%KXìp)婚r8ݐU IDAT~6b͗e/qiyh#S]QJԖ"LA+ ֑zH TQ;'WIZ1֒9G-ճgsߧT`%v`LT e3BokarMzjPyܭ2YP$sICsÐu~mYKSy7%])I*.I~ ex|'I_sYgHoxfnr~r]\NJ7%á42W~n0RXtkLypGyګ1 ^>I&zm>6xN% -s:ˌscJ…,vxd8b KӞ9pq!]je",;_ۑY\rPT=3]!X[ $cdr雜-U`,%Qj&OpDj)KC骲])PJ` O&ri8E`iob L'݂ѐǷ;b3ۢ5Y_lk$"aR&F)F[{n郁(pY8 eƶ4}ZGQksagV² mW9?`m7aT})g`ş8A/^;1R^뒿ӥ֥6yu׺W{ț=ގ5G2GWϰǷsIA^KD )c!c#$4%u@J&11x[FlʆcAT`d5=JM*I]6 2{FIKΦ RC$, |r{tYccR`ՓCI8JSƒgW 7-W|AItqlFiI#u_:yJ)@DjQ/[)f[O_kwG~AT?FҒ{чOPм)_s0MJLBTKTRbIņV 0W!2*2bBaX C+8 '9YYVaeC'R lmǢEXC-aEf8i 3DQT,^oLP@mS9!Sek^U*8a"e/*>1 <}4q0r1}gvXkRL z=amYb#* gM*Q^Ь^\s+w+^q@G_Wt?lO~}:oN4_ҁk9c]TUG by._a+9_ؾ>;ʓO.PkFk,^Jx~ «=yhؗTa )lunhXDâ:AC/0 EQra$R`$!^ H \HXaTҤQ4ygMlߓ,'ϑےt)cz"5e[zb >##uFa 8U7c{9_~E ӕIʨlNaRqc| U$Ju>TQ\4a;7xlau뻤RYzoK|tAeyረQCft"֑z 嫜!+ |'x>]>󛄵HJXjXhzbj_ٜoIo`*bO>OE:Ͼ_ >Η/n7<|>Y° MҁrQ,|6g0CI yΏ;m 0Ǖ^1$|K`5rYO䦅`H#O/HۤALQР̰r$Ğ3wtz7(kU9c/̲6ՠ>JY3 ^t?j^AOy qݡ\=NDE|0*Kf=96B 7>DiNn1Fb-T 2D8usȓ[C͍HGY+ތ9_ePG.Tիx8#llCLzHFMIÑNmY-r{637.RX/,H#^([ڋE[pݻ$21Z#үI=-gt_OyU7U?vQA2ϙ>;MZ*D`&6&UEOMY*j X0\qq#!Wn+gYΫR=gNβ<|pab2 »I #ı*B^TDB! kUkDV/T*oNOK~Y[ qՏ;Y',Qŏ'k.?e]6Cѿ~b2+Pd<ѵ.2Q"Lʹh2scH,[{=a(°NК^ 9H\A:C\Ia'+Tb y D3GvP,CNڰA$Ht,۫ ֢DŽDQd*V 2KwaO}8 pkhk0f#p퍷ibkK$! ToPcJ3eFۻ\aT̐r2E+:EY,W79}3@(f9ܓ% sL]Rk֒AҟRM2Fs VȢJ"CA9f>LuT 1S#)9 0fsmg .mJDhYvזx[#Tojb '޺e y(@@'Lu8(- 96JLv@P:7 (V+T#vGY^  0a+Ѿ2?JJݯ6 6 RSxy6XCmI\`|(R$zGQ龶M-©1O$tTC~)ҩR{lXi,eR2h mO8p8?LB.+Cq4|4|6 ޒćaL@ExD^294ĪA]32#~2@.#OPF9N QQ%%F0 q@_(jdt]_H*3&J- 8[ >0X pRPH%`.|Ht[N͖&2$A&"8BQ59תBX%8([%GI%#kHCQQ;JOn;l]N3_ڠL8KB JaB ,c'4JeA! 
aG99Ox]jiXawb7ce"" ?8Id-bLxqM H*WnݼʼnfU6!IN/ HܔFOnn!BıJ˲OwkCG챴 /]dx-gC90M ØCß-`W'E#:֨~%cGg̞%徥w4k1&eqq_ Ֆ(J=ZON@vvW{@N4Qc֎+W=djN[)JC-*Xw./`ƃop.wkom[;f7r ܈%-e|٤J(Uj!RUC7wrtوczuz?<2BE4 j*A{CG Tb3-UZT, Ӆ`\Лi*2G8y8AfwXs飮(P~ IDATąCn^Z?$/;dD͓sl̲Y!ecCT.mw4ӂp(AJd$gI  +RV2T1"b%!TEAJ_ <'ILLP1~ݱg)(LW":X$ D"aȟJJ#|0,G:tNrGe9#w2t*ovI,?{ {zv{ gwV*mzݔq n3N1ď 3(0 ՕiY]&Sґ$(F![$w3o y1 Fe)UC) Cn%a:0$6+W2)E$C#\"6fw* #y; j@3@%>)v9ٚ !"gg{'.s1\s/m'W9 PO0S!bRj OĊQkU~r#ݎ×^KWx7r'_㷾G{ߝT?&|pT[㤔w,0)  'ZD6>9z[uD;6@bB3N>B7T XzyfFvșzDy6{8ϟ8RF&eũcyEU<>43t !NJR R qDcr(qIڌHFiTUp/9̄440*jeyeqII^x{]8BڱOhjJq9Q$# Z8E\50j[hlq^ߧí($H?8]\YU/ :O?5{g9n/p|G)=NG|sK\A_hG̞Rmc4FΣ@LD8?bCK-'ٸWC5 8L2^;q{7p1ՠ;np@U% ITs-f5N JIOTx|gWS"1</U>x0i5Fw)xHE L){Ct{REC1}4C dL0)oYΑZG 0KmX⍐EJ5bqXJJi8j1 Hw>!6b'f%z:~o@\EY;zd[vN0jVh3nqjAr٥28}ml{,Kh-SKǞ* MLnqCh#m7ŎwZ%UeN9ACT*[.hL`}jCx q*A1"NQDhۨJu|0:WHw /B+qԂQ+L% RҏP&)n2,f QcX/ǜ̚>ǵǔJ1wD9ZgWVC̙l-y|kb$b+м8Og& ^2zeBdž4=W{ %oR+* _NSFz3!qJYD 0J1(a{]) 40LRWTڣI bx1^as&'᱅7s_NKhM&3qB:ac>sZsֿbu"f)k]d !]#8Ueѐ%6QmvƊS &/'\%n|N91*us_arUy~|6_{?n?&073u_GeF-9 ?o>o“tְqDu?FX T":h恆GSZ@ iycKEy+k-{Xٲ5W{Dbs4iRdJ" H` yH8z ; $ǖG`ɤlf7ӽ}svaalZ쁒"}[k׮k>{qI+WS9;~HGxX"Z~Srh2n7*Z#Z$= nhڗ,3gW/iQ@Mz+%:UAZ-!P+T(vhbv~<$JnYnm`|͆,lhܖG9^ApQgmw;D#V ~vCpԑVsتtMYu|xa]N'>]'7_y[?m]Wmߑ_ >$kȆ'u!cnS1U ƨXj'-OrĀ6[JteINZmSgcZe2]~y!Ib1`{S?r?xA|/RB""/ZL'~h6~օ"tE~{_ @GW)XuӛOx1h:4>5+I2*h3etbZ1$緛2UXki;4AFeQUӡtʊ|y"w恵*1Q%RG(p-MR'Jvp~BKmH{R0TwaU\#&hG4h݌\Vnq6 n^Ycxs!A S{`ATT Fv6,UV h{VqGbKFc'P9#D%IR5 ƈ@j$ngGLXATT#0"΂7cHЍQIE赵#wJvznY+siK~sA~qjY,yp)mfㅾ;wd82Ħ >dx7_/( R]\+ˬ|zDyXtۿmHШn(3|&G_Iu{} X|X}tYܫFE;NX+,\5K }ATTW%IIWju4o&6(** Хl2MHTwn̫.b@]}n-+S~pDAGN6f?յ~-9ε6뇊jV۲ݮ DK˷ouyr/?4 (^LP uKU=CЂ˃*7e_ʧ>1\fhz,kW*- AŇE pfJb0R*`րi. j=qjae23~?rwWxU<$Y_ݯj^$P;cM!m] ![PW'Q]h@$w{Omlx/S":?[[Ar#uJC+ZB$guZ7yŤ9=N׈1ѣ:xg(Uqb0(io u`(Cb Ż9M-+T|upJe(Iٕ?3w6nRʘk$x0pN%S|+rWвgg7I,M XHRVƽ$+,.W?<ڌg=`jsnU;V|OG'>lCN8&` E w98ZbycS^se=03 }X[mqD]޿ 7vC6>Q[O~r-n2뱃v8u+\wώ1 o2TViJK1Ղ!1g Kcڝ `Z+-&WdR.P1oj~Hj?| ^O\'+Ǭdhv-pFmR4Is US?"ff?i*dL䄨M WͲk7WQgo|FN $$l9HLn_>ihN|A͘JMStU zGG.5P⨺{ B '7Gm Y;WwHL3x#ܝ#6vOxn❥*[Dv$bM#4Eb m,Iиa c`9͑XSeOS5YE!=B*Fj5H\!ŠEeNao6 mEX>r,9%*°YtjrMS:ASHg?/>{ nu68mHҦqQYۑ$) )ӝb󷨿?'`vBPkZ\#]%-GcꨤO WRC75XF"Dgn]0'LH R;yY=Ccٓ8qmlK̫rĶ82,0Wf,]1JgVia@6䰚8&s" /gь$Iج={ ~.2{LWMJ3el;)Q(LsBl˘%&~K՟SNcK_X?7I3alJ0[T_ӊvˡoϊ!yGqE^'3 "|/*BYGT'tFZKAe)E DRʓY#u잔/PNb=>Ac"DJB PGJu0$,̄*Ԋ5Ӻ;9сX>67k--S9ՒXΰfZظr[VlR8O޽E٬䲅*uI/҂RZpN#XKRq KΈVI%>Gqsr N=uUֈ3`jlc} > >4d6u=G[|S8͹5ԩpZ9>/̈́%+D/'P9^`h6!HNB(labJ~ak*{^LM68͙ېR_A(W9}^t*hq=.gh3I{g~]Qfs`\3􇴝Jw0 &m]p}^զNA5B!uotm;Alڪrsߧ7S[˵<ʋ7_?LgXAZ2.93rA\]nJ3c[Kq*PR B'*ADmKW$!4FҢ"-$)1z. McCJ"HܨXSW̫#UEm"O X =霛W䰛18.1,뙴ʀShDۗIjxFg^2:]0:IJT&4be`E`aKtҙZcbM5/BamTQ\,(JBb$EdX*D%FjbrHUE4m"ltZ SݲnKQUĶ:7}Ӓ`6oJdIY2+x=I'~WFv<=;m9Y1{Z1|wL}cL9%0n}lU+ZÖX+PxaJkRF0HڸDg εhDԀ /bTI,Ƀ$=ꝩ` Aiq3a`;1)Y$%|p8_dKJ)ěduGhV{bÄ,42qA^JFeWrxΰ9h:*Ezl?'ٳL'Ϡa{t9I.f^蟞RB{§.//+<|2[\^Fw[VZYio1guƵ.zn5W2a=%j_-S9D@LƆ \A!U`R "#~|ƨr8ި[O] fP=6et׮\ڈJvmS/}aVd I$M%!?р)րSX m_6ESٔCH0D jiS=ߚ`PzwJ'M|S~P3fT"&am FIˈRz)黄]^a>c533?x3V`.ݶaXZzJ=3C~rngÂo_b}U9׻7G/ѩWE#dI :քM0Ҕ(BD5, IYZ=>I| zZ6聃VcXoIQ@GUi$FB@ b3\}?'&=kĈK4?}VY]FCxJr0 1iEUO@*OHepV)FCVm.cS1.\FЊ6 y04c}ڱ3lA}0ꗟK)*yE"ц&(n#(as <{J^'?J=w#ҚI`gB/"qy1c`AX+# w &J].ʟs-D~t";9ɏɃ{sG(M >˻]' IDATvZa|.Z?kFXI;#j%h˸5kU^I"u EF:&".g?H$ G`ET "QN &/\QglL .fϽZ z! 
✪)[&y)&qRt3Xue%ݖcO䟽.Ari}֕YU\J;uϐ!ui*C:QD%,ʲ%]:&IkO.]f1)i}Iz瀞7p!tO4NEĠ9ޞ#YlH}d7&:k-Ik]=5AnGo:n'b.2.¿{9o룯jYs_W}u$؍kdUaM@PfR"~O-̅THQJ_}jE!Hi~f.oV;TXS%jT"͗OX&Ҙ4WRzH *]TD$D`\4>@ƩȢ#Z&GҲx,1utGvGb­wwڄY.jP]U9D:P9kI#pwQcƞ]yFz  ^XJ˩yF!BoA\0/gH+ʍ7_nUo=+ǪFeck ia֥EBL4a30QrD41P 1ۺ'8QDZH:Fl(>ODq*|/ ϕ/>vG:+ ksX^^2" {\ZIOUm4:Jަn~T^Evܻհ6AךFE!p y%y Ļ|P׻ѦCQ-8ڻ%v^o0O .HZEWVYm <ս$~g١'r/gZ3ܽ!2i w3pP0ee)eY:`{uLcMݮ1sw:Q\:~ϴzx^k/ :{W%"gl.e&RaQWEL3J%=8*،$r4_`HG ;AA4qG%2=: r_R=e`ix63WNd>2%^N˸ PR3 ŝş Ɗ')gQG`hS̕]b3&H [ F9 sK75[jx֖D`<\emuAopOč1uA2?Eb5DP`%+rZłv`y)"nǎzIJS赩E"Vdn~eHt͝{tfcI 2Ӝ*$)WmHSAG v bY>*K[tRbU4.;AAT+cLV}HHR0j K}WQׂ ^yL Po%1vXTu \\q hj܇[Hj 6)/]XanZiAF㛬3r LW/> K "VJ'#`xk{^swHo^(4X{J-FԷGMC²WYYĂiA?ݴi8ޓ_ܸ7ve꼼ƕw(wQj^;߽CGv]5IU Dn=x>˿b*f(FV҄3q*gw玴m?w>:ֺT|cz7 F)NSƺ{~OsbZc.gN[2J,dE鄯CEL>9Ã+VÄOl*M%=֣E$;J^?H*psrmBŎ<$یlt s\ۏÚ9zTʉ5yRԖ.$:Ti1 "j*!sh O|0!BN} X?~y Py iDƠVN f9R)[#zG秸au |_sCi=+e9@ ^8# MʽWXꡱ,|Ik<ړhT7Ō0RDƦUňHJgыQjbH@%~Yt_8սH{cKU+O5"Vj4Vŀ1DUu"$F",P C  1J3W5H>C5U_g( kU"T ݞi﯀*Z*v1GW*+:DuhIz&EсjBdbV64-$)>FSr(+1Yq\A){{׮5=He8?e+]ӽV!'\V'0O溺pJaXVeס7%wdKsʿs8 vՊi&K5j"AdiF*mRi짪JPUQHUFCTH#U )rsHԨZsfmfDE%*'IഥЮQ1)ԉ\^P{asRS+UD&hBO1K[\G^:k3IT+ՙ$?AkKt?3Xd"$?C`&O*QFlnSYܝKK Y'0UIʣ!"A5@NwgL](144]i7AujEAC֜ Ƴ;k:9*QKBc>"xk-:G7K؝chK+rpt,&q;>iqV籋2=c&Or7'"jAuhJ-dg:6|bXi#jҠu^i[%$Q=S 2k L+wfG)@ e5zDESWHTX ZYM2kO*> r5DOiJQW9AF/J'{w_啶)=:Yk_Y%2wQ߭qǚSa_?;owcjw (|LvnV`4zOv xp_3 1F?!ڼQECMc򘓹.c_LKKK7)LZHK?_HI7OCc-ˮ͇|g#ULM")q2ŠUi;ᥣwNLŝ#LBY,XobyhE'^".[ָk;}i*iK;"gbH=Uh{5n)5Ox/1P+iI)*Ϫ|>K:v)*d7H!1&],-RLWϵᕓS~23D+$`k8?"#4 b I viXb1P x%>IpmMmI!v|:&. sٯAǁWLe@HJH *?=՘3Y@_6W**w_Dw{h۾S|ˑPb&2Dz8 1`cLI'G =@q>eG)uXL ܣ7-JD 21az}Kt fe=ca%`P(;) tg%[7^gx@R7NTX/ʠ,$e h)A,ĊS7\>w}w59;+=bDŝMt$Bl>=[jLufp z8i$I!n2jOeb}&}eYZFHhdWlL.s({$53퓥FԞQ!ڏ-QF~Y؛>αܿ褦W71'?sVEP [9;6q8uu< Î̿Gq᧯`{) hqbdmK"k-*G7~xְm9 3..y+꥿Eyy"ĄfQydkUTD^~;G~7s2Gp}m>9$X>k|^0h gb/G! s~qÖ^ iDbb tAb3@8'Xp'UfU2KIVa2Ve^,&mZAy(N3ZxQcEbRp8P/c?\0{t;x PU?)GmzXoGǿ ڧ?#֕CiWrk^ȹ^_z Q4_71re+KXTw$Gb/4͎</]nWDDZɹVGr맻8\L>>\ϭlɓ5y!Vx"A nGU$J!EhIT$R+sJƕQ"F*Y^ڹ\n/dyc.z^;L#O\? rk,gS 1VWWKu+hk]V\U.KDdb1ADD1! *f,2 Bb8$I)B^-x!FGJ؋*N̢Bt~ 1FNFAQ ^ /Zd 2,њd+w$3)F])s9H@K·"X1KTs9=S9s9m&3KXHɩb" Ѫ F "֙FA"#|PΚjL$"gHqHQ4껿x*h_:]$n,TD""*: *缓+dFؓxQZ!i$uou8$:xD#!v[GkDrgd]_DʬOl2`J32vV,Q)$VAHr#sTK3܉[$=ӓڦJlbw~[he=פ*:I m?-(o(F?N$ƋW@L:!SL&/w%MчTSTo7)?/HN(,r8mZE-gAO2kŨ21rΫh m"(CDU|TRdd+9 $" qe&ׅ_=/Uߪf~~];`I"?8EwzggH4Yɜ/}P7U}[o0ApHVlޙcJzyw]}^<'lҌa$EiHX /zV$mOyJAo V8TT4E5-"QGbp: 4^.EJ@fn~W3:uq=&`A%∝xb*L~t[^B)Q% :c8%Du V.^Eb(jM86tߓUDbi#Xј<@@g[>o.^OoeUU)m6h"XS@0ۅq'شh}"aqؤc|uTKI&f &ETCJ:[p-텹XiLAP޿&ax.K}꓆ݣٛuo%+ va[O%#.&Do[+]NJ#Ƃd~Glyt'U ",۸OGQF2(*}NLIw[1[5*\9S5F|o610 XD #Gk3*Sr@L-ۯ_g`ˏ\ahU/Mojem1\_Z%"FqNr=E$NdPN8DPS;ڡV]kg1׹bHh$F1Fk*b&fYZ+"5F4TcHDdG'A[b8TXbFG85&N+cUq)c#&q -N DKlmϲkn1)+e}yAz#'%Ǔ# +uJJiйZJY IDATT2W%JRlz5TT+\z~~|wGj l 2=Vw~&E]qMhi!߃voWw={Tt=ޙ >^k4_}f6Ø%/"^*{_2b#]NDu2o;I0{zh>o7ϏVR rO>g6~_J|b*PcH]t5SRZd!c- F-*J錢ZդK-%Qj2 lchjOBBOacJ 2354aOk$bIiFN -VbA4cNN IգFo|#a^ Рp!Oabxa'}4垯fzk5/ `8R#t΂UiBz|LORk%ZqBtitFh""]HYu%)2.->bK7>WԈcC% :F0uJiyZ͖~b)a]1 "!# )YɏOX{unEvVrhjn:09F1s>s{L-Vƞ(κJU9ߖc(鎿!.bNOtdJ/]&֣;h׷.C5>U Q1"6I^qڹǩJq1Z#(m1T)([lS3ξ&&4ŀjA'cFc\ 19$Oƣ=85-|._8st3FQxV$ -i7[Z$ЊcLL0`=iÄ N!8oXVp̒nUc>;KCeNR 9d&wmV1R pLBZF#*F'X,YPa--8''EOnn}/_vOxrAI4NFOR.b5dޥ*/h j[1߼N~ |c\i8mZoyn߳'86_sy~QW6oE/ eq{rz$28z6gVOL̶QWɛwy;,;?w"Ӭ&~?E$Ե{+L#J/85~O5wni ?͇9?XuY6S%AxSm8.]Z3JoxV4#bT ur*,=Q>!͹[:"i@j&AO1i0 5 }0KԠf@ EH APFi{!1 ],! 
b5&2i)ꢑSTkAg0Oi)Ŋ̟p$-i&t2i=7O&j|m,[;'Vڹ>ϯӳ ζy54rjLmږGU~_b+ܺRS!er"f28胕΀>hL;%sUC*AD L6Dzy׍ ^e>+,%λ&TLؘb|Kzz b0{`k$l&boˮm/Y%^Ku6MlONX=8s42%sЃV Iv[ ʪÒW^#/HP+)AQ'5JaH~.bHzts^HTqIBpD7Bf12.I4_?#UUE_t~YFTR;새hP|5`&:dQ [J ”3L5%$)`U,d"9Gؿv{[ vh߸%lJ70m5ϾMu]fcsaا7>%ifi[elTl0kRI ssQKJWjM vRc2 Q ai%b9F2.Tw%H'Mx)J}Te?u/ҨWWԌr-lQ&s7dXNE$.-];(uӒ{V><]~-w&z\3h߫XՄs +X}}NY[4 NJv>fЌ'{VlwrâR*BFF);NKnj@UsljĂ,]rÊjPf/=fsH 0W$;c3h+7.{ZR'^D DӠdTYv__30k+48D'bPPT#Nj9F=EB$,Hw{~v]I$K,.8#ZFCOd˾Iq2%On.8 (iFNUɵ,q1Ō"$Ԙy~EPi vH[AO[>~!HM[ H9]2ZTshPOGr02Jfޓ5FٜE3gFbB5AQ͏U~s F ;}/KPB?@@~^! RHWWЮ*"gֳOu½Յt'1bcPqvՀ꒹ ]'1mQUgDDPR^hBg犔Uk:9g\"O}" =rI)0A A%.v< tL DQE>݌2-i#-*ʨ3gL)d]zmY_쐈% _k:D^9l_/2ϔ$u|t/Rգ҈X7§}bv/ЖiR$J'tmДѰ+nii[k Bu2ynzP >˻D#!ohɊ1((I t[Vk&.w 8u g^4|Lx{g6 U?{,<@:]( Dx"4(Eb$:f.!]nVtoFH% %T[OO[N.iG3D;c]rzU@c@TcgcSVɊF,.q\#h]\v;N>p, bJ:>Ljo0>ѥKTOzrk5H=oۊE09w;]Ta$g:G-%awGCXQRIF4 .Ć1jLg9P AH$0# dl;Bv5R{t#>+ #)uQp*FbxRwQ7oGCo}[+E -R{;d)a^S ?53WpУ*]_Ȗzj:(ML\Yn^7Cܾ9VⰇ]P KDvUb H҄EfUM[6S2ce)&D0iyG]VB_P,,`Q'8z_oŸ /{v'[솿E |1tEmTUZ6z+W#Шq4hTi Fl鸚RJC]XzUΧd+\Nxmy-PPVz0JpQ-SAUGֲs &k̤4cՐHkzs+d>gNeR?е5z(.O*mH@UTY =)NF̲GSgcJ b+<Y3ƴ: K/=Ozx@HUF!ּ~ uZO苧W{."X 1RhH_̈x<nKd),DQitȈОEM쾹^֠f٬`50,WeYF+'wm#QDAt懈JTQ4zԢAU*ǧ޼[UFc9:]4 G=&dY}r'7gO+e<ʄeb'UM& `[Q#Qa9?pDL+T٣oJ.־yGEH7ȃ~JP lI-ҡ^!5*M ؤqI17c i^ՖNGU@ QrP[$9=ĖS蜄/"R ۇ{ NN;r==bpr,s- RzxӍ3dR_?pR>׏dz8teƒ*<.?w J#cj6XWdqҲ}^ㅿ1Uơ29-U26~ Ҟ ϝ;cL\y?0%0l?H:eN)!_ Uk_̨orH;DPĄo'.2nEhDo[X>ȀENޞ_\pFPSY͟ѭy2ߐO6&$czYpe"ՠl^bz5\,%C4:Tl0&?݆KZUB^ܖY}-[L};D^[Wd&>*TtۯV\XRsm2kC#V8vO+tǬ*"Ty QZAqqpq M(& :HeV_e29PIam7?Wըߍ ~5෻?s]ԺIß 3D.l4gXQUsFrmC+~2ёgژMdP^^{y`gͺòIbHDYgk %ޢpk6؅;GTrѱz轼^yZe[c:n A6]pmufBJ&  L{d'Sɜ6J],I0Ci,8z{!g.?8DJiOpv|V%7Y~ʋ_<4G{nyff\!uև SGxԐpIZRoaz=5=0]|%Ф/5` R/-z@Yc>]1cx[+[|e2FޞebU| T[4*4|:c!;ߡGCN65?Mb}Mx2wpglqV~qF9ipIܒrgY")(eg]3+0OҬHo|yЮ$4i`w3DNpsOnخV2m$;h0&* bB"_$8lG47XXי>DxxJuyʯQ*Ki*RIGriΪXLGu1 sޢh.Ʌ I`2ߧ<9UAxk珎x@!b }і3UMUnTƀY2Q*UYj opa9%tbd;=)*KJ\ .g4m'p n]NĂP% (uxuLbf$v& D#&ZG/mV/߻E= :}}f'0k0}@$ BHmgɟXCrK*'f4wet-={3l3ʸYhmT_R5FeK)Daܴd`y Jdyl:ҫaMub611Y10ߟrOq}ҹ:X4yD}/yDDQ `XY;'&̧9Tgor9tOUR7b>-fȬ9@Ck+ܤu˅n'GݻwqIx%u}sU~?+#٧(ncLC uuۙ{)H}/ݜя[pN `k>J-뽚zrd mӶՄ{#RU)>>FBh'(%)f^$i՝bhYr:?`8ApvCj[*˟b=Ҁ} ȷlgwFu u{>dT,>4$e.d`2bPZKXU5,*b62C&yr_"M@{ɀTZRSYO.KBq6qry&y>tDpXdslSb›T_y K؛8.s^"/VZ{ɍ+rTOB0M$k Қą(-$:u$WQ#'#i6}mB낢VT+Л8󇺲Y˕QJzvp8-dx=me]edytA 0B@;E IDAT( Qj5D:rL;9B8PɒH3[$Pg),8DRÕ:7È`aZg5 HϏO? 
kԯs_a[78SlDD7lKwn΋ITE{SiƑO/YD^ %Fa'SDORe.*dcb9{tIg3mb%k:Q,̊ՠD|'% F!i_{PD"MY+<~D {Wryw'r{YI4 zoրBzݠ>鯯-4|r_WAxFƮ1N)uX~Y%>K/tSM 5ү*~nmW?u>ANYמu:JJE[ m~.}L !ǻ w}Nٷ>&ϷBMݿhqV8庥hflX^ШC2+UdKEoFQSOdsIBzȭTLVݙWr1&$4J|m dg#9RfէLj ݤ Md5p 2}au?|7^SϠU%&+褐&$DJvnIVi`Qw, oE+%wI o@DQ߹]2LYxPTWJRWIwa-^7nR/ƴpnssSOd/H6U\ޙkb_<ğ#"bUeBtyȲ/2*H:`܉$/,kriu:<3FrW{beL5%RT1TWXF:OK f_mv/QԢHBm-Rk 1V5ԡۀEU#3NDTՕ}bSs{SXkҶEhjtj鉑hKD*4((F;NMKx#1+G/XL~e,ƢD>LQ%y4CQIskƥQ36yۚ Vo0^{o?&~Q^y~;/ ?Eܟ~gӻ 1Dީםk+>J5DiҺ~{U#@mhFTIBxcLNç3L/=??2(AEqZdr}k\Z)i'b*mh[L3X.j\'C,1fÜꈶ]H:z@|NX+ŹBՂ-I&e#:?Kbh (X?`}/)XcZ~V rM|#]NAൣ}v7/rye~NNy~s:o|U޳vSNc˹R6ScQaΣ_vHd6u-Jn\WlLָ TԄ7 :o>snHKlɒ8RrUIr;W*N*R*q,'RiQ) Hh1|֗n@" "YaWEw}>{}}xxNgKޮkNl[we% tHy  O8f|a9W, 1P& ulEi"&ogڤ7( ֧dU8s_FmdM=;ƁE a`aFFNoŁ\>52?Qj()ƈi-8IBD?-4M7;Cɧ9̬GLk]' y래.8w;rq{{IBR7~)ˆ|qnnYB8wFl1w1b;F$.1ڜЊ]#T&d)nqu6Iŀ?`~=eK {p{7nUk h̞*{w- j r7L\eL@L s4v*9۔_CŐEa^%W!Q Qڮ_Y!|bD#ƿ<='ɻS75)[1~ VŊJ8ײ4{p@ V9p:S Y|zG_ C՛2H6 lΉXv/o<+ӎn  ;,/p\G{3^ɘJýϯ8)# 񃚍,e7d =.MiU0g{Q*E(/) ᔲ-)W$8li'7G4lU ԡ ͝Z1^:|7o(s aA3)q~XdC}!y5޼wJlnmDB)ixzcܦ3n.% 26GVm['OC>&Mfl'5WQF/K"u6zLPKŜR=u ȝOvC-Uju S(/J-h(HG Q(#7CͼbJW|hf"om.hy0ވoWYHѽ)n4A'^T`18U7q~.ɲ.iK(r  7͂xL0m4/fe1g1M1A&q+@ UYxY|%x|L˩û3CT7"Uf"%hPBf1aMJ KaZI#!mbfj!Op16E.6] AVӠLqi~YHny*$9`L_-=Co9'!HGK c@ WزVU%X6A1 .,N_ٔgaQњ@jdlȼς(+f`( [yIC<ro bsyjh. 0׊iLN˶ZDŽT{(ZU,ڶ#[L~#c3L&7Z4Mp9T42B%#6ӑJ6꣭R5vN6N4tR<0 Z%|)&q)1+6z]LkIQ0FH\2*nVФ8btێ#n Q;8LV4x~M)^I{E*HGX% Md5Ak8A-+\\9\\% Fii:-gBl875MtQQ7."XTO o:k\;/sЀPkMO~K\\\j.GF:{ &)Z~ӯ29>HA ЭC YחFV /΃j"Gr]bQyvhGw})VPlr1ɓ#/'j*+W˔&YFkU;4t+JgրzQDCi TG3=\)>5_ɤ3̀LP|ȵw :MT*Q:1ŨAUi4Jњ((]U*(mpFM-9̌/^'ԣAnE)=m>j@$Q Wф5Vm1Aj;ڙ_!TDQ#*Iw/UFB).hZ.'t#HO ImOy-vG|b+r}'('֓391T\eՌo^'coiyO;~SџlE@Jmi&#m)%~\$c9okZjs𩭔ͼ2=%Ъ4@cM[ ˘%.ŚȲ-iIǚgiZT#  ôAc-USچK0NW X):X)}72|{?`k.r#~"a̖y"=NG3^ͅz??2&E}}Z;Y =_pu OpiK%PPqV(Z@bC>ѓzi"';j3W<n<䞬Za9f"+ŨU L_l܊:;IEQf {;j |b$g)bl!# %JԀ$ݓۧx i% /6V;"tkhbm7L{K'2 &A)5%4%H/j&ҟAGVy&>sQ@Fe';#ru8"YIۀNC 3oy5;]Q0"bj]>ԗJEL^, d7xwC!)WLɌQiY$Vd ,mӠB> {j ےRh-^×ށ9Cbwi(|6=YoC"$7RĨWGIJ&륢j'j/M)5i ;L2_N-;G{%5 V4҄ p0GHXң;۔416:61Do^'Dh0-6=F$C T8E"x6 bpѰWeVX4sU*uRL+!2j.pIeg5@TFrJmMt+X6Ԓa%pJkFzVKzns|/Y% *Sܰx(#{t!FwHiK'ę2:>, hrq,#7*~e=*Oyd_N4_z/|AzUN.cI`szB!]V:7S֐RfG PFEO^e2e"UW2=pc8QOen8VQ|/L&jLbLmW:a\[T:u@QCC )QmK)[.Dʖl! 
גj9굔ab4\<^_ѣ z.AY䧞yp|opxO<-;W<>)ȹ Gw-_ >c}^ɊJ狂$=Jӈ1S bOwy)`lMk<#/cG\o;Cn=G6pM"W>GЈ3̋ \ V"*E[ \T|K nEJg <(z&PéoZeF0&"zRQpsi;|櫷˯~Cmנi߫ihJhji4C?1|?w:rQϺM~# k9mJ;|x;KCCjiټQEE]pssn ;b'˚ OGJ60Jc67߼+v~'xosB'h5{޴dR 9*$q  `l`uAmo yܯP5E j _R_sJe{[% )2n_a>IBdxQ97/)2Z5I !t6V6D\А3 $`7pG2GʲQ_Rs|I[J v2 mZ)`P?GlqgE^GBj~Yx]~ELYqRĮthH!QH׆6DA`7-x~s&tOζ!@%wGjжǨp NZbAA#;);aѐVj'gu>X4d iɦ &vo6( U]҆~rCȊLZ0Xjtzj>.I秘X} WOQ̦ 8 R y*#2%JE!iE(8z҈EP#ڗopizc£Iie;Qqr;ʅsr+vq/|4b*m;C[t7SX_\cjYpRz\٨,UT @\>M0*("FށⓂbl86hhAЯ-1%"hCʳ\9%^ OXA6 |'oCL $I<ϴMWbq4PAbvBIDGNe|dR7+$Cڡq޺Ƚ{E7p:Dĭ75+Z霰 (I7D]йq6%`I .4D]ȃEB$:/jHk3c#h-n^ @d6d8'>9fë]PN Jq(X*Խ-&ĮG]W>Y!"m`O1{i#丢dR^Q˨*Hq$Y"K0gܢZP /OH,鱗cʸĸt2X8ؿl2yh [HW rl4❛ g 4EޢըFDLYuL#\}7o]zplL( IIF֊~]OL0Aɚ#Uu*B"+Řt`q(]Qi}@/s +>MZStP }|,^4GkK;߰λ"B,T|"_͆ g3LkSkxm)-9~$-F>pVW#/Ns`eM/*kNXwޢjFScv_\G'<{!rlg_/4ʏ\0gyɭ~pr{vzW"5h,Wpt$;):WJ7"#(>x\ʎ1r'-=giaXAqn%i0ri64nrz89SnMBM/4d/v*JD1DSs} f"qpdW7ra^"M|ZRzl5JmD\$ZjiVQWESַ+W_~1k!&GO[e)Fa4rH˦{^vCkeOwSd#jJac4PL!'Mu):n$c/T>)A{lƊgTFyES7ozc.ږQo~j{{~뼯z޼knQs@˫Eȃ}TXeI'M|_Ni "*L4f vMU)FR|5" D( $ӤZgyDB6yYFr/+gV( ;/ǫV'7MV/ML6J&Fy&d8L5lfIzTvz|zh-!qk=qhrPҪ+QB"c|w8GB$TBDňu5Y6M%iyUI k8IN1>"LGUZ;]S~JݒTw}MW#f)ty$'%n!N:}Y/)[KjY%łP:\RJb i,XʌVb.ϹضlNʽ(==}o\۔ 7Ox)wkQx` jyUuu脛^ҝ}d\td髨ADhň\]YpLr[ޭP' aZ;G0^Rd;<{^VEQQQҽ?z͟yS}}nGE̒cGFBOuUH{(Æ Mi杊ٳA  G ?Q|1VKlր ^S;p%Ľ̵K;{' je#6uN}z_i}*EkDqҎ|餦 =# N*}X_ \ڪ8ze&ΓۤdJ:MOdR̈́:QhtHcFZ'6ÒW+On-E9NY2VI]LR-Y0[E'[m`I b3ZkQ3H ʶQxyzEy̯_`F񦀾}ǭewI p\+^,1,@'||7*bq XCzTc4Eh.!d֞M1jUg"{QgZ,ǂVzo3oYK=4'lC"a)cm$7Ѽ L Tz}{9#Ɵ} ʊ,I/0X MMĶU6(+3ܺ2ҡ-& JЄO * FK*iu56*vխe0!M1mC:?3>U?N;' ^NYXE*>:琧aDשB*o;PwT35Y&NN'$ (] 9[%9.Qof,.Y^,$;mɪ,*=I>$8hO)Q.έJfYlGŞOuE=y 'WKҦE ^% wxA6.Gfrnl%vg>Z%$]&Юq&Bvf2>;l^٠x h1]X*V:in = N>kOa{ vwg.L Rj#! \4L"FЪZY4+^KR$3~16`[ToqHR1/z|+\HPk ]O!Wf=XXh<4/rP` bpMNg-6Rܾ+79:'W-kl挲 lHS9+\[vCBiI-<,:.z4c W9vwV8xyK,&.dDWQx!e[Szg |kGIYvýw#['sW_Y&䄖 u$0y@&DBl*!` %6m)BCN'7 9#^**0'G])8y|T*7%3FQUOKՐ"h8Rg~Ǩ:E9KMk1h` /K0퓹;٧?$L!JZԦgvK 5 bdl!֨8c PUC oXLPdԙ2QDr}QqA5jYJRג/ZMbMSAF˜6gnhAKIr~axp(U-R晉I! GJǬi$""ŏR $Vy%2J0B^ei8b[Ϫr6lttu_L٨: Q R'4 bCT ]'NId E[QUH-teُE1* Y'x?0j$81bD-Rڪ%uFa:*hhZVnQ5 b1"&&c&H1'=(iri72>9$Y)+.%-=oEۨmiWsXSWH$Z65&:B] I\{Z׆ Y*RNcS\і>Yrxvp>'[@Rk7t0Lj;Z,2a{-vuPFU)ŪTJEU@!k?QU:ZDy!"b!^D*ID1jTS=JzaވߊZe`͆q"oB6v HDQV%Zŧov@͙ߢ5hlOկ[ N\.]NeXUy_Y@E5f~J1b9ceVzgVeΠ6hPN38՞ymo-tI:7 ,io̪B!2%Dɐ4O:Ӽ'^U!5ŢFebֲ$͹:A+Cl4!CF(HBc1b#יmi$p'4gtlqO!rJ unJ#Q4Ҡݖ,&*e2[e2_DeZZV\0XjlQK:!| Tq_E}!AQ:Ol*Q !u0〗Gc^N Q9nSmD?.D^BID|n0Ar kӒm&&CrIuKCK$Em8K"ݽŃǸ0UL+)S_t=WO2IBm gY`)i-'AH]=cل]rr#f#k]eWާa2c%=B`p^f rk/>!![rCOvyoѿG*"\YY)'N4R}т/"<@&,jKQ=9=~Hx̙Cͼ`:7D9UD 4NT!s5HPPPv/GO_rN]3ɏtm,UemiM xrs?J^;w\.ڬ@suqGz/PT M 7 iw8> Q_.1QztN'<[TE?Mp |ON˅5lvHR\ء>i< U%uNqRLg9M@1"wWFglwH+484JDW&S٘eͮT(bbR"VVD앤b).P kimߥןG?{sO/Qt=6n*a'!t n3#bd%->.<A#Tf!JCdy\+g&Rpқĥ# hd譵)!y5+Q hbMND9^FdS S!CM7bjkB%1BA%NA`b8jZ4lDofTiusBA,؅x2k=& gz"> ΓIT+ s1͝VL,=!E딬J^#IDQGak*I ~DI9쑎NH&2/f, tF'MFWIO yFk66 ֒-A K{{x#fp, g{ j2 ^bm2*!&:I^BIPHG?bԞ' 1/#t4N-4钢A/%O\̓TMOau; 0/QW%㫬\_%c@]yg"qz,=½)+/41m96ʨq"Ȕo=)Tnhd8@ 5f3֥֩k2J7@8win:eVKO/(y>qvP@eذ+Ɍ1 e3dQĒL~+"cZӈHHʚZ`zu $UI3q!nh$ .ZR.<ALGe|Q_y.?Wx#=h\]k'p `"|b\Ήn =GW#! 
8wRL*95T 2_5,b#*ZT(^ eݍ%R!M- SL 2w>!sF)6n`d;n*4KE}"# FMNM36;'Vw '6+-7G&5͈Qa0&&K.nmckKqdv]LX}E}ݿuߊֻ UD(2iKFB9@DjW_ƇK?a֞BLD]}1wvοл*mItnԶ囸+;=NhrFuXeܖ)n~UzO,QU" k+`HlD L4]åV"G``ys㵖ǯzybK/s,dpʝC:$[$<;lS@c͛!z-UWTsvڦ%9k5WfZ^D0K kLZKk%G="u/zܴÐAhlV $x'& ʲ̖:dF#ijnR.EzLFo9b_>AݢXnH 1U| "M XԒR4pV}bRU7畸nJz\Pdvu].`Ki$tJHIє[@uy;KL3^fD/&Kԉe0aHZ2Ev1Tq"(x;Xx~I6s$:+.J5Y$ dт431ZuAՋZ06ƊQQX)R)RՒ֊Rn6_o\Fs{F+`cT?xyHI\ BpXkE5h3T:3Jbfx|J?&ㄪ$[^eXӑe-!gx60xHfs\,)(](s9Z]ͷi*Y흰;ŃDkfk8yrPфf$Њ5rLR%DѢS==%ͅ9t.CcJ=_ƕ^(2c)!uC(/z̍$QExfk̢{ _:`!bLt*Eds)>pI gR^ycKgo~^s%I PJ4yj=Ƈr6;>}rJUVD<C~Eh>|kY8"Xz] $RS:唼$yg889ԁC漵SsKJqT2)7RHc KWN)iȥ47_ř^B444ezV/>ar_+ ^3Dnھ\0'Dnj9gIj-tf拖xUzǞ 5B :'"]B`!3%VM#ATF k+RADC0UefVD'ʼ7D+TbSm8ZIMH 9:9+X֚W=YF\I{TU`Ĉ,D$ZhnǛ}Ͼz(KwХbٟ<{D XNG+"|yŏTDtć情 V81 EUrW9ƫr/ʹ>Q8}.d~;t<{=W'>U}DD E 1.Aibԫ.K"I0?:@'c阜k"4Ҋ0WsS"1[mI#ܸCz$u%ҵ 뾭2:ToJ< IÒVe ':p|DV\jqxbY,?l |=pjhkC.D9ȕ YƿE|gM K-,pP0{,1j!>9j|q. yUgYP#rq>roʡ(x=܌\ j6M+b5X,2UQaz# FqyUls$DDɼc8ѓ FGMQٙIlr9ER? 0/Q"! 7n< *1{r߈QA=urVOKY}jb#tR&-$k /t=dU9qɝ!>NO&C0*jLbj"hm;tEPIm6**@ޮ. zhpOȳrb+/6 5؉,Qli+Yl.LKE8 0›kR%A%=͙\PqI1̋ĂIшxUsN5!ĨV-%&&i-P@cLKbdw6;EP/]5">A;46TA0 .N¼b" o斸(Pֹw ZT:0V(P? *alDu2cр'\E+Ά01p &Jpe`=C=b8W_z6*&pqhP Ĩ"P_~{A+WKh1ݏ\$^o'M.2 )͟4똆zF5nKYÉѣZtjtVCl²OSlZD*X#r{:ֽm1b6$ /gATg3?M~o1 E7bߡ̰Q$ }[?N+KVwU_C=݁?qV[YYTLC֮]RG gfR&j Y^`D8R],9#C?yΘ&)2pߞx Di248c|,plW+~OD2cPjL,kuIGE$;ʼn ꊽ=cޖ *zFIUI-#+K4܊V6!$lE|0Oy!IHL—{w$( L#4CB'56S ao.\_Z^de=u_i0{K$MoD;_JEqr'6 v{P"(45Wy$>P V Ѹ u"!~ 0ƖS^ݥ=L/v]P7bR{u;WRn_\{3MRե&4"kF4+dm. I7ԗŵZ /t( CwoS7/êUgŝl o?3u9hS6%1k 5Tb3#YiJUJ]vxi"Eʑ:EFI=@ǫHE#B,FBX ap_^~z+晋Z ޴bi5Ѝ[CS[+PI+]kraam4QkYq1D`jD I\jvfr!]!Ju]A*H"5RfMH &tt ˇ?KkpU>KONirs\r k=TLDuqQjc&=L <( .JNkfxr1 GNjhBN%Nð4f.q_k9Ӝ"!AUn>GDūGg+B#0Alf$d!IeX,|W>wHL+ICveP뇖L, Ҍ)j&8H#/yj|5~[v*Tn̼֗' q!4?K{mܸ>{{}Gy]>jFţ5/?uyFJCUWL@=J|F46iZ,_{q+[q#j(Pur&8dHa}We`jGPׂԓ9Co_.؋1p Gfv:+ &ҎXT>Sp.@UP (РlzGQ]r5dgijo(^P\f5ฮcKM1Yq$9Bx6jebvUTK>Isf?1`-\T*xY.؈4w$m!•1:&^,')/n.D7e͔|?揳X-C(&/[;|W_$ڜtN6p_WkĪת-|H5cNew

    !Mť T欱-w*fc ;mH7q)I/\AX]1j9'w?q)^P!p:2jxUaSƑHم0[)*09k{'"NTjE*5Br[Bjim4;*%W 1"+YTڬAZ!a\hŢ:N taR,ڄBqOlQ]UgmkGrr$DgN7@yX'KT1qN,9EU۝W(J]95FIsZ,%MuW!bVh"]o~r/.[]S^<x7hsĢIM ̈́'D@%Je~4 LjJھL}F.`LP{Tx\{SVk]GY]߬`ҒB>噛UI?54𳮅Bk v5)˯: #(~ѸUoti)뀫ʋ1݇M%{}B8ZOrƔ{=uVWuXesC9N䊎,Z>T,.Ef)yj{j-dQunqo|Eٯ|P2Z<\Yc}YD|yJyA6bbKj(hq.ƒmpء-{\'PHA4[dI*ɲPɨ:F@YUU/(K[FvT+u#̂" E3(*t#JlVY?[ֈ󪪈 .\Ug@[J $\W_?7"O?7-'tPK\m{g|嵿R .'7_r0p9M(wka?>WFN߯69ޭ Qx3_LA\0 IDATDnG?juR9|SOrt>Hy!eOHnPڣBcg + ;76tz}M ZF̟8/UҸsL^OI,ѐ:VrF'vX4y:bnIJh(恆8ݦ1UmUY+ ?V+7Rԥx_ѻ'}{fSg3tp;$UH0w oKZOj9aN1k2$jb*喎 /aΑ"0BZIӈmRXXbM@  ZŔA~!QL.5eѾ3!?(YPoAb ޏϧTS$˛jL VYtZYQ 4K˘\⃝o Axxm*DsD$DZ)Uww_w$6O-$kVh>T5NDl ?=LP}F1b<x%^ S2+jZ%./IcYbti%Mj{x_.h3)%XU4bS;e(%5FGFja:B>{"O^[,7h6FDgNDN|ωWQr|3-ݟcc0h1WZRiҖ֌%Rd< )3B0: Z*UD*Ir盻 =oqɗX Dj]Th4Bl=The ONPFg" [ȥ+b'%iNNϿ O_巾#q>}ȗƱaN"eXZ#_~#z%UV֍Ɓ/=mkeVj}ˤ/ߟs|G<qFV]`qzg7P ' ;L8w}g2t4e+prlg.IHYs?39֟7_Tgؖ{wqebl[\t>C9=M$JdɱdKNIlsĹ #0|N؉ȑ㈲dQ&%I6uΩj׮=Z_.dKljw}g0);bWlbZR`#$n c%ܬ - Gy+;Fi$di@/HFRKj2aP8<ɕ6"Uβ[Ul1Q)%uwk\Ǘ(7Yf:.sZqpeɹ^VHzq1hɂ6@l-wE9('' ED#ZNgڜnČS~q{GgKX֥wn?m3n3*'l_ސޯS U|{@)F-E$ 2Bƒ]F, cތEp$ܶS327duhO'E)|6s?DVjQGRω${Jۢ Ӎ&E0}Q>FS/V_4&t;Y'XZ̃;Hᨛ!N䤕Rf(e$f0n*:3"BBδfu*AŤ0&}HZYjuLFAsVN).l6/&. UvEUdX+x}yY~0WT` 3ct'|t>Mէˊ\CP$s(t1a(I#KlDq6XÒ|3-\L-;xVK6_g3~FKTeS{]yvIu%`?r!şƂV<pZ29 IÐ2dg*la.Vw_Tj% AN^IH Djf 1*1"btk ~W}VZDk0C !⊒5Jr9f 5ݩ,%gOY0 xA?$ʂ|!ȷ7R.C5ӳ<>~SNYꈟ9(h=i5NZÀ܆yRCj/|a0> "đ~_oڂ|L[־\ևW_epO1^Yy*N}U39qU9_|Bw߹!;{LyuG}.bT\]Sp_x_yN߽?t?"n,T%Ki/SٜziϿx箲p 7 14ʌn6WZ8܈vc|)Qkd*LzQeh+=Њ ElÀi5ej᪊XOl +q2SjLTW\MxeߎCiI*6%#r9<u]19dLz˗ #PJTvZCiUgݾ_~xG^>ꓗkҪ-zPC.]:Y)׿}ʲqBx Z'NuJ/'=U;Hƙw4VHih*RU;R40+hT^ow/VųGG|R]Kzwk5]Yzh'" T1C&j(D4fVjT :-c̼N,e>59hl>Dc8 Q3)DEll/9.Q^ZU>{{MrrKT.N.VTIG"2LwoP3ܣ쒈 VPT=ѱ*ʽw9b>!OLh}#;B8Λ[Ap6 #XS4֔y s`f#a/\C{>)6Ry\b57N֘Xĵf+"sཨDrjL Y՝~ɫoW>c2i֞s8NyT]<]ԼVJOuT2q" #¦ק/uWNΖ1ZzϷh~-nzļk١+M;Qv|$>qg?-w75}, ÖLʍ`Փ FR.N"A赝XR$P ńԻj"0$~dw f)?<_yϴGͰ" bҏeG(G?{hoX@>'/c( 7;4W%jspe5> ˚ƫ^3D8tS8ɧrt\n}&ϽtExgCgPHYG u퇒Z;_gx!_~xs69tpȃ9Գe =sjl Z~XJߪL1/eHkp`O{}QSSRg*UT u<ŏNFU[vw^ NND7CDעN"QH{wQSaҡRzfSj$(_Q4/_eaMK01-,Wvۂ֪5d)'h^}M K`+uI)Zt~鳫صl}MV~n@hJk>\Ӻu^NŎZKF^Uy~wfw4Hիɍ%/^*II&#^\w v?{2g`Wy_|ՀV]zہ|Џ&ܣwtEmv$+dɘtJ-Azi,dV)^iKWA{gg(G5l8wA+7')[Odc4*mKOqlCjMN_MԎKW|߿^Ką'Z<^vɪXj2MHFHL$U}!w#W,t>xRZUňGq#҈ >]g#)~dv, G? H}>'|(? G\?Dq[9\嵺JE|h<~F(YRy[w~-$jkpkߦg?,k\!j5[ƄML5Ic) YJ{tf%@ : Xf?'t[4Ȣ) GM#C+1AS*@J5{ńcC5 3̜ z 9S{VC?RjRN@\*8e-Yy,6r]Y= qx/&t|+Ձ=b59!rؐT‚ijDI!E9u,-۲98I" DZk85 ҙoFcy7$jTp{KMMI2kt_W4ZCa AksZ>qq@OuWQfSIJ0ǫyuEΞ rzu,h!.X>e ޱ^+F8%>y&DžtaElΑ/dDӂ-1.n騌L#T*Y hwI9OŸ~W6)ޚj_{\L/z:=$s@'cV%CKL-wD+G{DOӺHz}HF$WRfXF%oj*{Rj $ "\k|J8Ћnvbkͯ'[2ټ-Lѱ` ci喤D'p]됖Fľ4p:=aa8ъQTvW3\mk8߂n{2| (Թs)7:W^ը'ǦG;Y辔 P#`Dĥ!z8кtZj\z#S1*`Z}0`%RʅKf pFBEk bBu5dZLEmH6IQSaUy8 IDATNSq2K_r'.uԢN4KYR$!8LHhbTj|Sx1N 36"·839X_ fxrweuK$&6j4mٔwdai_a9k0zmhM.V8=-uJ5M>kD>]@0jYQ1HT7ߤ"Gg3,'6DGoNT M=+s8~O>qU.C67 S3P0- $0F ޫG<'rTݐ*AŦf2Pd*LU}r" Kziy咦V$%YOuK|$>وd[ƮQ5C,[v|Q0Ft<*ebah"xEv'D6QapAl-*HXmSVkE۞fKFTGK0GģnZ[>gT][Z]o[+r{5zvEOxWTQG썌~<+"=R/է.v;mmWD\NȰVXigzn Y,=v%3>58/JF c @tC{F>j6?$Qy~b?la8>1sO^|N겚J\(J0iV?ZU7ޠM(}:o7(9jxuQ^:ȻߺK{`)CJKe*UQxO&8A\E5)O&TQDjgG`ȱU#a?/Bc}@>RO)"&#E\+ub QJǿ=[mhxKiMD+Ř׳؇ږe`УZUH3`ƴYleamqAS: K4o'UDLm9nE8Mx} >wَ? 
_[sU 'o魹 ?+iS>v+_|ܜo!\ӡϸt6 A'fA3~I08ܑ*ȨT5pH0q#duQ'jm{h(RJ`ba`8O݉);1';G~qSR xp% u<>!lᣐ3Oj,/x h#X 'R bAU[bClq%>M|R%- JhkCVB&y7h@s*^I++ϣ M# 6l%Ć1^*BMi̭?ڏ00UJMqu0hod-/cߟQ!ݗJXhyy48ZPS:7Br|vOO^\:[W8Yo*8zc7Kzv/ܹ١S n>)KaZfz??1.EǙGLp_u1iK1tcц j0`Pbq,625>dR$X.% MvDE5O;ln+-*WU3TufVH?Bԫ$GqWb#G{Qغ^Kg>G_4Q&S24ar%OiHF--#X癋 zaˋOh@جxį_n{=w6N*K^hS-$rjcEUTd1x, xkŽ)ֳ|8`so)*?c8)LX |K'ѵ~}q굋4z-͌ /-"^g6YKTJ+>1R7NN'PO ?YA321QDr8DÀjmLJ.9CFn,1 KI]}jt_FcBc ҄5*Т54ОZ3l AJ(jqg jīԅNu̍P>оZ"M tsԋ=qQȕSݓ7`pّ"8saʋ٘ML}N!e,:!i+5%˱C k mq%]{*"VfqJ,Yy|{U$XaQ Vxȱ=᪏e".- >yNQCB 2!YP5LV{qHw23)ʔP;BKܯ)w9o1{6Esh?mO?ٓ$ӹ9mJE$ 8EDt !`$ѴoJ=@JrhX$'漊,yP+4N"J1YUSB8=pqkV'uO\T;zXW1i`dPТ+3,S, ve4~@8xߡ?gI`"ޞlW=l@榆ڻϥ~S{wE^?^Í6S6 2_:*jd ,7&u$3sdLʂzw`T =6) 8zx%@x@8p ET!ԭ v$ 0Ƶb|h(SʹXðz+Dལh5'A(Nͫ@t :,cŀ= cZsA A6B&t7>4XQ"|4X71^x+DENX ]A8`!XiTȠXM6|"T8 T:c4uzSRTNHOEtZ:\Oŝ]m9U6tײۊϰ%sn|)~_ ۀЀ!7-wJNﱱ-k~W&,Җܻ/2rސՅo8)Qb3,w4wnӳ57sW.=\ŕ.kMۃLnǹĊ1g`͇cqmĬ8ZfW8[ִC =3IpoIyF6̗mIrLlN],SYء3wnn}pI;%4&0 Pf|oDUse/RT Tl ;r~H(UL'¬$C*ْFu24*p4(4(4ň)4e#Vq$ EV2M&NUVā2*?|DtQ|gHKW#~ϗSICpcg}A/fXА%֓ӧ^HH10ߺǑlrs(q3J?G܌QE #BQ}do(Ėb!"DűS7W1˯pgwflh@~|PT.ϾnsړI{]~A#>H4P10ZuGG ~@qةbr7{]Aܸ"K)@mV h3"953 ˊ+/hh$[L6YjM#W܀7 EQ~fSVqUMZ:<=+^Eޜ~ֵ8|.3L(G]e&<ݘي>Rv ݰ~~Հg%ҹ#]yOKX8BtŪu-Wt{-)dIP^$0yH-.ّujK3šnM9y`|с$""a`Q%ܸ'l˫zLJJ5̻e>6XFSzV 1F*oWr+YCD*&!^?ؖnYjSPe*ZԲ j-6C}IjlXF6Y {DSIK0kS!ݲHעDK\s9] j9/"?zNùt.=~4oV!amth2I$⚳Q9n&'$Lz񪮙a&pFAJ!Vሽ%1ijf:#r<dw(ϐ-1zM ,PD%KB&>K9ٰdY@{D#ܔ@E{9ȱ4}Q] HÐUO7gS:%|]=~W>,n<`{ ҸfEkDc,֠)Si7mNҠ˜GG\X +IzqIqM5ǷNwqPBBpDc2w.^@T1ŽVN $ 1;ZL _m"}pERse&c wn9@g23䅥ۯ·ɏ>s Z(4^hS: =9{mIX* <^gKÑ!l  8r.=w'/)yf{x&4'n5fGT --Ɔp=VZ.i{N΂*LM)dyj̙3/0L^7_y,r~8dqv|cb#Tr5Oiz@/ZH!Fď(gGlW UEQ踞"J+5T "I_1bhG !o0RSd:<[ߜ .fe}֞|Ǻn6M)!XBlEFC@ yS:d@; `"Ke4%&nvPw<󰧵~y8P}oNhx^MU+NfuȲ(As OiM`)Wwy쇦ɚAy@|@M- F2d2KbǕ^ ym ' N5jiNH8~:TKkF+!1-Uٖ@RQg"Rչ BARW)"%_ycƭe,$8 AtG~P)9^@x)>mc ZL`IbN: AtI nRVP55-P#9j >12(87e6^qWQIsOgO&l`F3PUϪ|*ɲ6[{X_ǭr9Hy}4Lhn띚 L3xEj2˭ذcfsе}}i%Twvm2a5M*o( 0[\9/3ӗ`|xo0`\+":J.z6n!uںȧ>ɫ7v{cB) !nL٥StѱGɢ9Vxf0$b8/hq1 QhS-("oFw;a5*EH/bLɤϸၦCF 0 WڄY) VEu, F͕CN9*XVY߹y+ cͳ1u&%{ANn \4ΕȒ, L19H akĠU2 )JV*EMVLpZ-K0tbG2}O)oE`5RRd +ט> 9O kjq:E$BV{9vng…w8 J4Xߣ%Q¨s?Η&<(Gf4z33M|5c:ٟX2<fr!ox.ns`;Z DbrC' hN3N&G/ZU!+-*w:g2l>`5ⶏ02x*]RJxflғ!_8*WQ%* aBdyI4(V3E ͔xXҼ=%X8d\9*h=eQMGT%)6 e'hje" p@AiRV,F`34琢\J''pwRTK ꕠMNWV)X1)ǘ0#l'∃> iT^ɛ+>V?ǫZg3t"pxHyQ$rʋƆ:[sxMr]S9)!7o} Ayyo<~ЗrVVksHTP=qb.3o}WOh,qTQݽB˸ތF+g~L9Y'cp:3$iED=ކ<9yy$\FzMv>Y3f;7.^jHRK+6f e#!]8De^@yrZ5bs: 0ALT,[֋gM^sX;14=tO\L>t/$Xsx1-Րfͥ5'̋I=ɝ v.,{%`B X*㩽ozLaEcXԈ̫qF(pL=JW1~wx'Vd INpj9B-U4U:k5/| >w=]Pvnq/|mg>q>N'3(=u`P:f 8Q)1iI\Rm#Ρ6b]μ%#" bJ{T9l(3 &0LK6{dndZV)S as K ZiQ]ZtaK & y^]\!RG܁0`A{EB-lv HUt?FRNUEB)P`lO}&o:g@Ϭ{c}ע~d ^=0$mwhOkҘpl1mo]}W s9EjCGr^7K1~/$ pR ]f IDATg<ɥND(%%B¬9Ԛ?>SGbpC^"}428c &DFkXg44|'^)e&Ii)Q8+J3b 8#DR]DLm~pJJp`2oWc@ּ Y} 41ݐٜ npZAa"J|=!;%9ŀQ}eNGq!ӯQ\D! ވ*(o_U5lm[b"QC $#)=Ņ6xT'!AF4PC-_+JkXnA]U<ɵ.< {6n3ywevV,.5/=u{SΗdqCp}h ?~-_ buMmYA1VV2!e{xg_:n|O ^~coU1xԱcroi\̫W 1rtg,b>:̼ *e35&61[ dmjEx_ !V#(ӚyRjsU;!u2Jbك`02k)H*7BxLhuHl#&ūCU@k"udbL00!X@Y;4} HclLtp B bwlI4F \Ml"=re:!, OBt%ٻ|$7z}5\b5s?ν7I}.2gi ;wpoKz6 jzmv@o:6J %gZ8$_ݻokIm=oˊd#kB}w?K*ޘZ--)1+jdt)q `uU_ΰk2@(od%R@?:@,u0/mC?cσQ 5r3FBⰦ!732S1_?-ړ.u_;`Mv%jq[ޔK i\"fIqFc0$`{-TR{P?w,%olf7O~?(@]>679{gUNw}F+N5yR:i]΋s+giO4Fq n #N/…*h΢0otIF;eQ.MRgZ\ k=iSE-:||P~O&cO˒NaЇ$"\['_h+&,) c\~6`V>ɾbt)o#:s!S ۞\o7c꫸Bm'Xu]n""qGSt2Ab$FF aF* ݻ= +-#6YChϿGw2yjސ .; hSFbsigX ,A:$\VlNHGVqI)j\Eʚh/:vY|fK@a*9$ZN\ExC<3VO՛Mfײc^=>9gŷmJOd<RRioA1O4m껄I\[ p8"r5X$#캗ERe.[*O#quJn`?x@pYP[&vdžNU/{=i5z_[ݾ<,8F L,m*צF.)o]'X?%[M),3~}öbẌCyO6J#XƤ`D&CX@b0ѵU^s%a? 
,0\T)N)G)g׉1'U`pťӸj+]k~ ^ۡZMirD r_ #h^GN)/cr|TQNcILdS9u4Bd]8:˱Pԏ_OL}GAEQ̋!lAզ.+_m^}+uxI1KQ Jɾa@Iz5K'MT 9>JSPU +F {ev !AMҰrXsfY;ʱh\Z#So0μ/S="VMB6VЍ.(*>ԗ`4'5R8`b5Y<8bn)s7YFkY)&lmyV=*i&m*q!.ܼ{ȽEħK ӧ0c ;- L% j-> V]AH.,E T=3+T]x eHT'1:O]$L,K':2 lHD]\t=uŢO1̲fǬϕlw9uJGT[!x+Nge0}/~?obpiB[4L1 Ӌ;ԽM֧b/)dV;Y4x_*L4l6wcqђ$KY_[ZyMXD _pk|tH8aj/]ы]v&n-]:qǖo o Mwhm*-q켏6Iʇ&'}Cy#-ECzOv]5X!B6 ŵ5.`X )'}x{J5O=Bv_At-:_~4,6&z"vdM?ine ~M;s&${WsCT-=JgOҦqS4" wR?g(_8L+h  j<15wvO=<[=A2l/audfΡBP+qV4rav]f<:V$cWXɸX$iq;;)gw>Yǽ1h>f 3n)_bN.H/US?\8$qXeM1 #bXΥ߃,RDPCCkj5xQR !@b#R!HΤh3$FSC8v]_dJBvwDVh8p6Y$aOlVIo~s76W~MeHQ{jga`j1x`1ooMx 1 %s ҾH "XFԞ&AB,@$Ph]"uLH3uoZmn彷UG*.&,?%C=}?5Sc*b t6I3%"e(FakO 3K-GNhzFbﺔ@ň`  t0F3 Q81U1W&-zLb 2˼򠇓m; |Ȑm/E^ڠ_.>OF >DoM;8gWǔ)eaE;ev QћW9W%|T.X֧UQBZL^fŽ,O`k<1XY</]0) #ͧhV|ür1Jvs{?B2.&\gԄʐn&]sk1 D5w',/NKwOgȗ^=JE,5~JK8a0#=e} 8`V8s#Qvܣ?̥_Spu =.̈́h`o^8UJOB8QVۻ3{ZUrj?oIETbbZxV5qWǓIX5e F]}z YL/X,0R#Dk/8-{Ob`pLFl4j0ה.ҀvT0!Z4 1?TJfpZOQŰM[ ZvO0<č1Y>#hNڸ2K~}XȏFFRCU*B"29Mڠ/.9,$[3 p,?'Šu 2x2ճi7zoաzk0](@읨؃8xxE"Mk(T%@^hG1j +VeMw}򟪝fdKFNYۖӋEWD>Q 6u }'(bC :.kƂ_r\z -km9 ݵG8:B:9>xY(H4WVtEW2X 8XY'%>Qڷgԭ9?q|ML%׵`6O_z/ܣB+ f5լEg6#qgwُ:U߿Qry'NQւiNPT*"(e6,@XCu:2m&WJ'?>ҿU==#eqΕ;vnt·uH4Ko*6%5}/TLw$@c:hOx9zzwN7_{ QӘIEߘ 6UAuUn)W |? OGVFdu ug 3kV7zLrZGCCY@jKO,u{kKsKv<YFjلX<_ 8A^'i|J (BjY }cm%zxDv|Ȩ_m򑑲1kV|$fڟJToQOЛ7xOv"K Gt[l6.w#N9Ӻ 4hw SޢzųLYH<g5&5(ljJ75ZL;mL>f6qB7Y1qQŋ-9[TiX|[whYьhvY7bEN!3[}!(^ tdz'o= 6z83֎vAmwMf4#҉&ђaV2W&gw)?U;~eɝA!Om`RK[5?P/Qci%0Ե3ޔ٪%pqVɿc b-g:W]nF*e=#Ha LrjbJ*70ʻ{ O_J gVRtYs2WGiD%idlZ#u_\)Z͈)1bTj6RyA: n!\[!~bBWNcSZ=rBk6SRDBV꺎/gzS,*IxDבۯq=y(Τ+ C/@ZEy_i^~U{(Iӥ/#Sy*Ds9SL֕*ι(ڜ0uxċΕ\\7X@8RqU'$DeFYcU!o;UsM=*\Kl tYL-C6Xk\iuRc h#Qzb<.\}'Eϟ>,I]s]ݜ,K|Y;/®QF \,Oiye IDAT; 9EPKKcd;hɏ#2)$tޑ[)O^#_+8*2,ִ\(iQMyѹ*y~A 5D\hVP;v Ы4rG D1hW =A?OƄ6 JF,'xY#n_ =;:oUS1'?Byvė毎d6-(-q*, m6}[OcύMv;HA5 V1/)p΂뾍SYb6/(-տ*/^;'_ǾÄȶǨQcC>DꝐ Kb[S` ò5E,qi}J#RΒXqY6#/K>Zljc}e%6u ?^TCN'-yqoONk5D@D8pSe1>:1vD8*%> hPQ*B` 97j:hURBŜr2E#K6 l^gDxx\#Sd{r6& #("BaSfn˗<{b77N&GpjC>wyg@hĂ;@`5raa1jIsD[L}/oQ-ŹA~7=NCl !T>Nځ='b BڌAh`,}HPUU2:18MtKC%> zQ1X*Jׅ<}u'v|rըɂ,K3. wkkYJlp0"{x8s7+뽿__2n Vs\ڻrI7n(жduyn4KB[A,hT񝌠n, n6xFV=Z,3C023xP"ɋkAhQ ʍ{ĉh\|Dp3nǸՙX֫t}V4gZw^[{s|T#e͂-߷|{{t8K"٢BvȥKr\CʩrTW\VⲜHMDqA$@ݷw:Z_i *A>O޽笽Z)y֍]_&6xXt]^KBJ+.+#mK˲;қN(kOdCrUmү'S!!>6mKg>&}ݓ<(}pV^8cN/}u|On2if2_۟)a'ЙF|G!f*K.ȊPEkg"}gi XVyFXTj7J;JH1l2 8?se5!emϱb m{Ls$} {ե!mwlRġٹ,>XJ{G8jKNtޅܺ^pW7I?_("O{0ZK ǐ)u~>#m_glWuR𙫜etlr74Jn}E8jK*ٻ q&*tZN"~J lF qM锥vDQEA=,B^ޑ$-mA#XIF0* F@&ƓC)L+v1b5M$IZ BAOGu2)3ٕ^(6ٙv3ldNWʎ,nk.W j<|n=Ikї8rkdHemTPoNzǏ6?O8r6#JQm܁|w,CZ$ &ь.N7e;(7SF;4M5/*?},{| /ݐq تV/^yvZ0vEqaV ~'ͲwovD_f.u}xqF5jѝǮfU;q:U1i-|hnRhnKy0}7:a1Itb#*JY=Tѳ}ZEnr[6r:(4mO-ֵT,$=5mq׿a9 ?f8:FĠjthэJreg`)|7_p&Zg;,$- ζčf͡#h/.śnYT_HLTґ1}خR8Y2R tfV}VoLgkOѺT$NPN'O|y|J S&";[$6p6 'Uʏ?/[굚'8.l&U>쀭P% $0?+w> NVGC2^ OpIW 똢¡REؿc|`q-Ц`\QZX:ozuI*4UGT%І)Ɠj)d;"UK@v6G2ax#5Sw΋~cX_ "6h;Asm}!{ub $&x*:)xL(xG{Fx7V(6r+fzM JhT*䑥B1#ah `u96RunW%hpejktD#&{fgA!"W|~<\ F OSU_c"$jNYbIJ(zGwpa&y/I(6& ډH֑%*&0 q~@ to1aH tN7XaOo"](H<V/fɇ^忒QudV0~jfX*c9f0]n܊+ܔO~k;ŃVDUznY#%bֆZfKS X:!jFJod)h7dmYۅ;1ۇ2ӱ2YD=H_38yBƳ/VPt#`p>3;\po&iGL(U"FaQ`Y ⣜Ti%uC"DfDV|LB*e4y8 IV8J`9OAS i]&><@!xjWP.ftָSqV0Yth FI"T Zh9%-9o..*et7 ;$VV2Jo11m}jJk\xVDߧ~7n`U 5X~(R 6sH#:~rEaͩi@kT3(,O~.G< Lˑh+p0xd"^LT*A.9NjPװX[9uV3Gc9:%Z>?'D$ ?k 6͕C+V D3WFGAX4hĖ1X+d6>Cg#mcDbEdyՀR%BnYj#5QrLjc3;a e^uyc5/9W&yZr5< \m h2O6t8D}x49:(9jS2>Άr5f6 x iׂ?etſ?m?gBB.h.ӂ=.Qpz7rΞsm_ʝ3'x|vcRs?hLtcMc/Bt3& U^(8|wଊq ! 
ESAC(u'ćÅPAů KI: =l{"+H}gF*EX.]#p)w]{_ViO*MYOe8m7>[CkU %҄uUEuo sm7PUTuw87\yz+H$?|WX=S|#QŃYǵ, iHEAˆb!jWS5YUjFpUCW#&*vJ5VamCJVbn?g`P Z88ҞMܚak~aG_~(ǯ3xyN~Ϙ}tG."ϝ$g1}s91[Q<̷p gҪ+F [^)p^18DU 8i m#O<"'*T æҞXyFF]pK Hv:lB`F":2ċ~"u>&0HvxXv֛[NK06l^ʌgTW\;51ic]|v-i*w {^arp*>}D]N8A[ ܓkn!}>Y;}Z|sU kY̰ /Eޥ8J=w}2JlX jK=s?,|$$qIi bx  2&WK;5@6⭧څ&ݒ0HMqHs!FS\\O.V[<[\{1\Lk¼Q(8( B؊(SN 3K SxZVD)+Gj\S E3ALe<[] 5`e@!GyAw#I$eeܪG]&킯~u- A1YA0p9-2+D ׬5 B{aոJLa]-۴M40`Bl$(k<)eG}pcB,`7OxD i.;18U6SΝC mxKYkaNpwFZ IT# ǚ!^O^XVSKF EmQ~\ oSi̩n@ʚfc3gE)Y IBSSLmDŘ5sQո|t+Df=cK:ݛrs`?; "\fïZfNBnN_wÍQMt(0Ut޺uhyybOE /QǴT꣘VF{+He ?Hǫ*>{Ooc/lcW~oI6{ [JX4|e6@Rtj\,Z*O:5U7QEk~4i t8M15gGwD:ZBK[\6~H6rW°KXDUB,m<5~oߍh?7oxef)fˏmʖ*ZYU%f*Ew9PKMWjB4 H!m*cÂCc:"f Y@n/ Waoϸ+m_~[o0pGqLt ɋhEM}sj̤sNJ'H1øYv Nf6$xii7Gd"lT&|{/N3p.e"X {@-IZ?C|Er8̺>)زd2[L/GH?:o¹jQ[ RUUE71q`R,'s޷ D E:٣zKIxŌl$むq91&I Q8w]mQ#՜lax߹kYJ؅)UbcuDJM(㜵ӧ:eH}kZEd"L )3m)o ]Ã)^",oר8o(|@?.(,\@'3͸S3}^0M"ګ!U Q~LXM'e#,*ɺ0~ f{^ÂׯΈ㐟œ|ǺtӮ*rҹWZ<[㏇kfWfd2?Q%pv]HBJcKHl@'5Cd zr<7G˖lVj$րQ>0 H(^}sƯ~VKRF#Z=jnĦ#G:ժrJY@^XYN&`;؅kd(16^2 tg#{3!z#4[OӝG$y/6wu[9ϵG zW3%-=;"%6`pDIr)4^EwPQؐA g.Y+7^p9^(%wX*6!N!Q?^oQunJ8%>Te[ .dyo[)u@2 2f mi[l gl~b۲hՐT,Jx:@g,֢mak_!x>'T/e"?ʐ3':ta"s|7`Bz%ΈvoF>/ B=X0xȱ>7*Xm ^soH#F"%6"P6J.FD|-͵lxVSLS/ qR5ĄFدUʭ/D]6*[$ mj RP-Gtu^?b4-pqHG==δ! ksY70L5YE܁i6- Be6!š4yQ1wB\(j*칈D<h`ℰoBjN#(q5|@ld|&!׮NQUJ.=k#L"bn g?H?ͅv׮NMX "G?̀%Ax|z7 T!2P/=~8dĶĠ uEidT0wyo;_zɬw{H@{MBHw%k=/RfoK@{k@㭊6~z&4mCRWFu6E)mSqR|{QshU%eZUT3`*Fn)F6%-~oLRhIo6_<\/ʀ4T!"Ҿ~w~W) 齭3^|$C@32fыZ Mƣx`ñD;>sE$,vj u{"KM=|}- X m_sr4nL84hNK eR:_zTEVU$NVuBV!G#-ҦQ+iK%}dm'8 k7+?&|A'2c|$`Uq'c?1q+ wN~E3#z-g}jB8q)V^9zƵd-*ŠnGc{Y-L>lbRMjK1F5BS!RIFZp0?,mPŁu$Tfop "xhxQ;ZЪ3®ߗH.I}}2TS:}DD߼?x0z^(Zjb=浈aBVQATfByTfbN%jZe>=Ҳcx_anlHm.eP$==ooIZ*5JuBU ~A~MVڱof29Pԕ^9+qǶl7Bܣ2V h>t0) Eg"rRR3tקvd(T7kaA!U⪆ĈXU5EUR;Uե:6e:( )XWE-<~vE ԋwm ClB Nƌ ՛heUk[R!y bDZLU/OxU8t%l0b5&4*S_NhW(#j|eTWJĢZȨN6bP#s#d:ת22#5KKu E(EAabTیeѡ<M㩜6Y U+Csdj8l+q3&U.ߚr};L{X4?ۓ|h)/S͞ XX ZNq ڍ V|U=ٵFȠmT=^eZU&N)XTĦ~ee%̵B|@ ?]<`9G>z>Yo?ﱤ^o{S y$t3 {=) a(W.r'>No<ω[1Sq cQM^,{--EimmjY.e%0 b뫎NQBaD-1~<*]byTV. 8%-ZbUk&YkLip;Yh+N,Y0[eiLHKR%4-dQ7/Ig`Nsg4er˕{xgZw9^CCONK_VaH/D"c {+XkT*RmiÚQ`z] E"'I0 詀Bڜtd.C)$/k_;'_}?kt>wley-2D"s9{d[8k19"' "I|MWTx5bTwU4Xd)* U5F"KX'tEzYN]蔲r cu eɴ`+'a^xwc7CzO>|{Ʃi7Dnx_ƃz}֓ īp9[ "E:ssriOk+/V.|x'Wpg/ W>ɕWO:/h\w .Tyɮ$ab@\'ugq'L\uJN]*v#)[2IYx;L%*6zv("" +UbZ 07 DAvƗJw5X櫌qA-di=*Zx~-h/"#Zќ(tls'T߬Y<U ¬ H"J; 3z-h%᭨ ^A@TIhw—W{ÄW#wv'|w3'^}G7xٶфLK #˳}sJg e\^R#?t0ae%zQQ:&4N+^ڲ-f%#Q%hUT!""b-x>p5,,Aex7ټpV?+^.5ɳI"]RШO +\TF-w/*vR7nV*hdZU]V 0aaIۼejDP;hފyk+T’X[]YoÂ[q#q؂ёj8wQ.)tPaz@tSUF?<džk.a3Z 0trKcuRjDY3jgEAko%0ZIX jT)ZISL6hޣSÎ ruOrtϨ[pR>} UM\0VTtƚurwJ{)scmJi\ޛ>v}w3p<b& `RSd'qlR|JU_*W/p%|TRv8UeeIIJ s>za7us޽^}=\zZyx@^*mPcu^ֲ\ QAXfwlW֨7UN $!΋ڠD*# 4]2xZ.Pcb6*ܨIn 6ݺn>K]-G⤽4#eώ-TH ]aە%8G7o/Lb X!jsoyߛKcT7fig y]"zu9ĨA5R92:V]R<|/BvWzƸҵB(:+~z:3ev}E6O&rڌ^9Pkq8.uKj:^v*F .]StMIҫuV"wҒH,y4/|~rNR`Xc\*iƢn>޻OKs.;Ʃ Uy "$CmRJO9O]8HmUh]; ``.9"'8W0Sbמm*VN"w 1ҍ բ$oy{w^)) Q&i;ͩ3]}C9}֪5mm2?\󡸃"`i?O=+#Ld V2cZ St VZ D&Hו"m%AH'B{4Dž"&D(*V*ʣF+$+߻_hVc1y #a _`>~SP,svx[mGMKSGcD8ƨ Z\FAFL)ŗG4Kdզ%ET-:mM!Bڈ$bu1qOU45hǡNgЪGb P4X1vBM+E)^ `.yGϭ 1}i>Tۿcصd kT"EPzHޠZ~ڻy=^f_Fʊ5W2M/n̓ KZ! 
AʆO3/"]X̲`tdwIt$F=y>_ĥFlɌ4r Wo#~ƾ 3 q5bؓh_&iYͨm{~l\͎쏧[_YCyx<ޣR\;+1nŜ~6fYΐl>Es- U~*q\Z$鲃>—:/nO}h!XZ%;rnvD?7| #.}:?AxKjuڠ{q՟$Ȍr4'rjJ}w/O"O-$JJs\p Xok)64T$s WLJ{F'-.Xlt¶v7)u <) ݴ"{3 |/+^XRv4rW:61bӓ ZU~pkPy7w[iD5[}e$?lwリ|DI"]O3V mu='^zT 9R Q *A9I IDAT"%$S&V&BQ Zͬ"*Rz5KУ%}&>TѱX;HTU4 f9}B4g;{1:Vd<rE;|3:{*z4GIm,-sXaG*JBLKvP AChTAÃXq] yp9kuemH~/zxw~鄄#|[^:gH.n%-iFVtVQ$Au]tO{ccTM_bMpT۞I^S;Qn$bKOtd6 .<$'.𳏫J-_-f>eRaD9"~()F*?odyѺדh5X9.xVh ͨ*CBbZEah/*\Q*DmoPK% J*VDmb'^E+vuv[Dy!w{NZL >擿DcWId)l[o|Ov N~?a&{y+OyCt}^ΖSGH'g[O<=fHuKgkL J^.C~MϘ1u7P˜aJʙ%o}ćf_sc3#{~!|ԗ~r2}Ws+u*/,:ʆVeK%]N"'FFߞ4\y/jHs͍GU$ [L88*iDGOV,Q1vؿ~2#d]O=Vxduv^&U%ɡF< -7ý8V*Aqik\6W=Ux*.=;YND"фL\r4.kJ}z'9Âׯb2S*E$XKދ@ͻѳ-ݫQ F0F$mTct5/݃]yFҞHk8l9ף;\ ZgrkbWYH5f3ʉeDTM<(U2GQZsWjThSt>{+wֽa;P(:=|QUc)f)h'I_xM+9O7~@o}^2SJ|OeB' Kdզ8#jwiV) NfUu\tV?^*~uGuͺ XqbLoR:49zFMU,| \quXj2#QF_F-[Kؑx+n@"*MZ7H})/k(7tOk\}1UgsDVqZGpA䵠mq1T8JdY*FP[P7|5/T2Y4zLG~o]J"~h^/?4C|ub(I Ή8 JsxL #u^jXxE1,mt"F4}?U(CA[3h KhTT4q4x~,n|5=I?lo\( ?c#~^  ؏?h$VyDVO}vO,D1RP)BIY Pch#-FVbJ.V>"b#IRn$bmMP U{ZX'SN9l'NDP3nREV{P4G$jallfJ[H!mL-Xj[K!"H^@ZUU鎠US8Q6S5ٍxlSU)Y^YH:+TF5WNI7MRsN4ȃɛڨ32="Wog4y.u~Gׯ%HK\~NٳW 拗'%GK*N}Nh6-I {\)6l-9mV5ߗ^hT|TlO}rr n+kxu_=dpY}̑pf)$)g>yYƳ]E&8zClz0Xe;<~'rci O/vZ:8;g}]L]P3-lxUֺ[۾ok'k<1{_d\IY!#/r'r_ _,s'OqϜn~Yƥ=k5$#a4YS~M'3Kw}-I{zWm#,ՈxW]bَDe*r5*VQN@'5N\C4>w.;tV'N̉4%vSXڒ2ۓ:! >Jb "uPW:%awcΫLDDHIt$ LvfZ'.eYgk`"Ϝy"_ze]qxs!'/&_N䏾v[wR.* thP(jEm-:E sBEuKɋYZK߈`?Wo4gT :FSZ{ b-/LXY`Xg7(摪U2ǰc5l|.q8㇢|geqGW}īz}q-6rH9KVJ;{͌ H{wL:&K)*(׽.ZV9>qVܛR(}?,E@}GUEFD؝Nud0dq&3S+{*ڏbi5ݍP>P"FMg!d[vI2Վ.)o,I`?Ƨ>hJXC.X-Q:@mJ͛ԋJ>;F"|~aY)I\ƴ򡫥F&ϤvRDq \Aq`ըAh$4F%p@w9'1- &'i٘$ET`8O{$bIV.oPg-BDEL?ӨG&&DQ*3@pݹw0jmQ0,Mװ* mS&i,ጇSED UؿGZ9Ь$ޱ %Ih,ۤIcU z^%ͭ/iہ +} `et *[cpuu .BN$|͗&JkO72\ZM~xG)hќa| Ɣl<7\?xiq)IuֳS8Ik=dR|?/z ZǻO&RWE_iQ+cv"Q.b(ʻ]CSG4#.%5S3s2j<PRp> bG#^NPD3'i_܆/=dΏYFڲtpa=6<}SՂ7Zn3/YM&Y8t;XzIFd=u]4ޓְXgjh?@ޓ`A0葞 !P߹ŹzĽ՘iE1I',,'\eүsi?W&<OOa.}z*KsiiL<:GNu$7 t4t^ *7{GAopȑj=Ē/Wӿ^tDx_]'ߞp|ț(\"_pWkz5) ;g#3z<'^>I}uzΩi̭oF/&9UT2RTa(x3aޜnPJI:S'P< >Yf+ n)]fUVN- >R>O;.9w6'}OɫYd` PKQv(1L<SN؝qXX4 77Cr B,Dۆ,$-^Jx_%1rsOE) *&yƽEDycwˌc}—#ۜ~ǥ;{_ߟpga~-8w9J"iu̵7ͻ3xUa{yz[Ha}`2?~,k'S,,F-(2nމzƘ@@)zN^uD#QHVv[P 9ƒl O֜2}[ J)mj0S,!!e\LYӈVHP{ń@QE!Zrɸ^6O/uHȢ_ք?ޭ(S:'O'<q́N-`[m(Mӈ%cugҀwZ1I[x03=y%Qb~pIC;`ab1bQH8cUU+;ka0`44V-Lk_ߧxn0V}{ob3{)jk=BӼ* ݄O#D1̌pB'BFDl;R$E:]$NcZ?ki QT:. ;3n >?~MX=c+( "q+~W| J"ڹ{b9dqΰHl5TcXtW7DlC 8E}*>CiD <yS"-!NEPhQ  `8P17lT)JlԴ#:FU9r8e HtsX/,)7^eZl v]Vs)PQw=`ﱑ,_=Wv$PS+Z:?Y/wĊ"ss ip:^hR3D'% `h)eSPg+ҐJ'c֕gvI%~ 폲 fo:9\nΑX]9|1;F W֤wFůlju_3 (|9*8ꀭ4(=x/ 4.Y!YE2b-K^bvdpmYIl7/ӒƇ9KPV^ֻcFE%: 瞈0n!.9]wAj~P- $kW'BTrtד=bjQFvly)C~H %ʏE wXɖ;q4o`8+S/mN Ymns?DϿ淟zAF[0pg\%54>`K't3WS6Dٶ"ip >ZD6z4[%c|ʪ,IzV _d Ay7Jczz^^k!mupɍ^s(ѐ6 M6psQ'T4,YLb E& {uGG32d"sm{WKEh޳m3!D2̒w#3o,B*ns^km]$IBU^&ժ[)H\+bϿ6[5kDZ2|^ĖʓT YsK@X;R>#_twVSu7yV(yN9{Q.q s*Q[ 2-)ԉ!2HH[1A@JN1u1BneV !rF&G%Kw\SԱU}k={{|{g^ȆvVɛ6ˉJZA儸*YCKo'6؜ӈ[>B7>*Wrb ]i@Tő1;hVQQ|2rXlxut\|֪&8_0Dkh5D}#Es =JݠQòh]*"tQ ihihYaxfa2_{Zocd5⌡9Y A O+47n9u HmQI\' ;fkLw$A Z|.Q6Yw)Zwn67{޻F6#ӡ]BoD+3!iPňaT4:Kk*|xr61Q.iQBK`3!#01Uj+H3RܬSυi ԢI:R("4 *g>V,Ik@v1={Nb2 C^rvȢᴢ04?zYq@d0QCsM Un`[t럺LdmHf *S1&x#G 1O:/G A K94d^7[K0WQ<׆v25CIK P&&?fy~o<󧨿{7v@;tuIel%K?"R^F=Z^?IEIrPW&F=Y2'_NΖغ$. 
ɻHSC^21Q'#ٓxrM߸w߼/0g^W j*0]בj;tAt^Ղ4Kz`fU5EDT)xm8$C̝^ޣZ,vI.{{d%٨f5cUEl)+%, ٷ!Z\N1i3Mʌۻlܪ×ԏ/qGIe!ܫ9@-0;Gk9ÄrSF8 szoq&o8Pٝs|c?b" ^(#4u\8{ nޏoX:N0*CU3.f.*Ӣ j50W"N4UFnZqUOY˖ {v\M5F$XkTXj,ԽV A++᭜C'#:jfꕂ{#*D4endԢ.VǬMH%ac>-HK%3 T/V@|8#lGVH+[ ljƕJlT^ik?^[^Wj_Ns=ѧdq\o |р)䬐K.-UE1o@&R5F I!P&+BP?hVy.X$$RK6Ƀ>>tIR0 q(.qsݙagB*n 3v䥛;K2>tDWl@|f=;>t~9fq{v@ƭʵ_NqLi1 \.+>rq[!${t;"PF_7?8Μ'wu YiX{d Φ,&)1[|wl:,}<0ZGhݙIs{N0RŖp\i4(LWkJ"19!JIt*J."$bH!!5!(V!pk㜈X&Ib92&5(icMws\蘷4熳;Ss@fyQU+GI4k1vNe#Z}jX$PK>E61*N;R²Z؅cD5_# "Y9IF2B<]޽b4Cԋl]A9FS0CG#ZkHm{|ٞ|V]*nO={;v.mNH5Է1-WeTaS&;.wԣYfS-đ)zB/2LWŨMU]njy/e2%oC_YFC<#ĉGű8Q1"2-DI$VwSH%P႘騒ODw9oԔъ1}=aޤ+\[.VɜGkeWveTo^sj_o&z /]ft6_\'Seͣ4J~%.6 _yl XI¼ A$E,)gJ{ZbdSXYs+t-Nu,;]gT;*e 5E%f$>uZR| s;Lˉ*>b"N*r}SFܺC=f%ƞw8#q>Ntwrƽ*$9AM ^[lu3;pp+SY>4nnyJD@Y1>b x6>E*V$ƒWq^Ibpē4WY{QtPQXQ D#rsp C;aso#+HԜ.>,cBQD*+jdMIńO%%$]/UQ#,jwʿ@&f؊w[.5k~rqm^׈*V"ӾC‰x0r,3';9" REY+YP>&m6x'`?Zzz}DF4Rʆk(5\?sqC6ףec|`TQSa A2P#X1! ) ։.Ep{۸6a`oz-"hkID<FĐN)Jڜo*k;ݭ돮PyZIbBf=69y0*'g/%/K?g9=o9/o.:`Mjn-Lob9:BaJG%b7ѢB1 Q?'(֏bt;4(BNvfzLJ4(NƠ`D+ǼX2^HDj%=IAt'")7)jrvVqmƧSku+Cau!ˣ!"s=&>j>eEXл7┪p;12:h_˿o<mk.d}(᧟k>ze[HssJ|j6( 2Qxd̃mc,0amJ>Scoq_,8_x=~#fz!K$zxv;ZpƋzQ5r4d.(TKp?6{15gCR:@ 0r;ѕ3Bͣbg!U%*iL'ڣfl$+;8f칂F:&BX! *κ[8rBS)F%,ʝ|p,jjc;{|fr?Uԝ+a)cCwi۴M"f>jӟ76Ǥ+9є ֍o.ظ5˻|啰Xeb' @R)LwV&Ә/*^ . )c3{(q8chhoSD-Sc%LKXbL4`5/X2i; Ǜ[D\g Wf7Ya|{Yɧ"dx?7;wYuJkgg<p野M@I3Ѽ"g %6{3{8\fd`LjTfyq͊Frw!6p}aR80,S"%ۿ3{xk/ l<:YaP7xeb49ڌ8b6=Cg 2\3aL5geTEP|PGiC/s, GgBwLxoa:pK9;,N )#D+TդT9qL|{wU`xk@CSX *輞"FT&T ARG~N MZ2'ۼ>ᆎ{sҭ!:\=5!D}mOې!"J *L` Ӳ` uxbX\\Yh5DVGbB^8;%Ae3>xj UKٗuNal% B|U,Fpꂉu#ɞUG,Zps2_fSSc>%orpӝ(PM:a@{f)1U{E@j[uv^grIEhh~`oGKCJ@BuDRXb'@)ʙYvޫcj:]_áj:O<ϼ@itMbzX12|q6"RVD͛Ʒ}_7se"5A>i/:[\\pzSu{[1iΤ:PP:_^ef5N&tB^лsr]3wfz"g~Eu`7Nx4;$w ҳ)-4-}ӥ>o簾Es/?Ña0H2ZYƖ.LzqLYcQNh7 -GorxUo^D[_LH.qWlӫZ&YXC@KPGJQi:85K$Ҋ`j;IC :Dh%+rs9&MNU^_Z<2X`eҤk{l [LYm `g V8yw˸}u^}.ǎsl3vgW/o1 ڤBCBZ!PWӦF2h:e^vRio^=V(* 76K/ I7=}R [*hܢ5 qEkJ4=,pFoK+u5+Fwk \bg9&w_*uz_yDc ڷx, T?.W4"uHyFr%-=UMItAWɿy]{DUY:nɚaZ{6Y6b@_~%wfdq?w^s?7n:6=WjTF(2oǎ"u=PMRN`+r@\zbB426}N|DEU"YcbR^YZ 8u̲Vtjh1(-[YLN>LC]@]<[Z1x5DPBhjPklT<ޮT5h'YtZ*EcEug67.}rD3xL;u|hQk ASQ#Q/j^VԚuz]]]]-R{LR7b(:q(FԪeLo4b(1x㠥@`:gpm_M_ _r:ԋO/y!u|c/ZK:{Mn/$c^|ڂh8de}M${cUku6_bx_^_xN4tK#ݤƫ/ɡ|@ᏠZ{5dY<\kg"o1"&J9׎k]'jtA*rZu$u ~,|;!5V,kVCգT[/K^nW%/s)uλ;Y؊ImqީTEVT;knGNR!g5fE_z{"՛9ÏQ޺K_VҗRGv(ÀV +K1&Fjm*IU1\ZK̊xċHcI[$MflVYtuF"ht|M.]+eBbƬiS_NYKԌ,܋UvmvHOԩR:uOYI}KgS}z[k4#94s}-e}oɞRwCʆ@?pݯK[4Y4.4߾th{Z,7 -T424j#lwDdy>WNņXcdT3XGz/E(y)J|g3Sϥı` yj8ry4Xd&eՄ7:)ږ`]ZKш&h{c>d͹ IDATl NZ~YB~iƃQȯ\/4z\'$(F;Q[K',4g"f!&KO{%:oqzzI˅ ýz(:ȑcқ"zgfFqj&*lȘj ݈~=Zs&> =>+u&2/L4dT +!q ѣL0F}RF."PUF}9dJD˃S /olgήqX+w l*傄C4"V!8Sұ2kÚ8½6O%*(3suɁjU,=f} U`쒭Ѫ1lSèWrMwUz$u:@ץJv|I>_&d[ChZ$78%X /ZZ'Տ~A:}} iDbX-;^9Jao;w 셚ddLK\H,s;7E?n=9NM6%0{|.o >(80B|)\PzQqjEԊ*>5Xȝ͢F֬ȧ>Уp:bGh!9[T#J*),lJyh(&B1Tx-T*u> !⼧Z @i05s @D+dv!DD y~2-|Ɗ/)E~My~|k3|G>,ckZ"3EϟmPBhao=7& ֟gS~m=uuWa=tcߏ<Q]_Y|4LeM$OG#xYXs雈v4dlƈ~?J4|+}F5sk2|6zұâ& ޟ2[ C\PTN¼BT E`_ [B4/l}1 B6AdHU:)qVQ g[-F {RSxD^;@;HL[gP).0*.gZ YؾGNy[':3<~0PqV?}6?_w^+tH![NRZ$s1 Һ$I"*qԢrS#he&4bdugJX8fa,KzVTS &ػCqV0FOЖ%DO*ߪgk4 +#&ƢeZز8# DxTZ3t!\$*,[X=o^ O<z+qerݷůżbByI> r`A_ y!1U}>=2Uy Jxu_*v_$Z-Кewף^e:u:cDKʸa5vwXoeҠ|%> (-f`|%0! "X^D#DCTQ ,< A > P <ĥ+[KcL\¼&ŠyxkjHqZDB X 6F10j=WLˀ4+aZpB$H K,)$$RӔXJP[!4*Ȩ8cRUrRϟl>rkTkt^f{PT9&/6>%:ÁZ02?g |3/z{Tg=^˟Q^wٿW'SL&SOo||o0͸p>Ȭ? 
\Jٖ#0SG4Di^ 8GXTD} 0B(BmRBsBo0va &aVϼStjTJ41%ROkKqGZ (8gFFFzzʑVȏ]Xcg:Z9ܠN% M+/\N'>pv7N)B=#.RfbV 6C${X|;W;("}j͓3l`$u"?JqR6XVH?aܛ2;j2djj4#~'u ƼMh,c)\-rnxr[,l|@5ĩ*$q6(h i) bzűkl_oTaY]FsZqNjndžF8h̻#zRNY˩p}} !kuЈM*-Nu]Ig3R(#KBxP&u=-۔8 ϣcKDA,K'6Y ~~SM K9[ J8\xNc.gm6;:<+KVnpSe)n٠딯ȡ'"֞vET(\;N<JSŌ0  G?L,}#H$w)~֊y"^ "YV> q*LA_5r*'L^S y=pNE\Nv0?XA@n=FP`+osH&$6sJ N s!()B;=C{xXanz#"Ehld҃r WȪm "c9 5%YDu<d ϵVpSbTpoqc~)s'v= *:CwPF M'~w):C!8@31qV*!յ¹&Lzh0W#ns4 3ZyZnlVep34LpY$  pjV4=z Ӯ㵯Np-nĹ1TbY?[[d}fڍaycb6B) , O,lC#cp")Y1cg m)Ѐ򊄞ƵP)0JPi݇fsH2 3Gڈ&ptP xLᮛbώzn2 'd]y4CQFX* ]1s<Kp97 #T#x %~"_ŢDhXW][*^7mxDP4(8F#obZ`,.`}D*Ds@fd{FpF> A}Sc +vS2PRF;w{lUԒP4c@g+Y PAc`6Yxؗg2 ̣'0g0m8:y$':c|tc3s塚Z-ƥVou,&3MDžxY Zz!K DZl9QSß]Y}ijJG= `Q# *4l!yq?B PᦇC-Y_Auؐvܔì~H-\YNZo>j Mz5v3J`7:H\A9c4|6$!#(un% Pc d {<`ʌQ6DPdp&DVk#`.JfP2E 2c0J4,YC[ IDAT$ !AH^z~6q(S T==i%nbq[%?'=,R!SFq..@1ҹG B JxmA *8jJb}?M)S+BbN1J1OVџxq6$bU E K1U2zLǑ`.ˉ ]I/^Jq0P*( (-Xza AhXZk\b`5 @a62l:.&(Z{[2B{\v9ϪF'4}:3 h+M6vGޯҗ9^>< A ~,ayLJWMя?泀q e2Q@[72W>s/VX9{F `wZY 4(T ":() h&q7G\8X-HCf@O7DR5JOP6L0bP%>3d AG,5r3gbQޝw 8]CQ39)kq34Z?,$w/0e"\5SAud}|q-괃k/[Ah@AE |aٔ:,[iCFe(v >L+cqG9<ې:.@]B^1r]v _iƅxAN3 H{dEgbN+hZ+a(<Δ+niyC(К9%5ֆcq(W4],'X$b .4Q^ܩ=r%waZ`.޻u u~%ًX XtJW.l <+ *O$lPŀ&5ffFTB@k¤>esBV+|^ Oyڈ4\ łqPȃ!^IzRμrSf-tċp^F M|j)O0a>m`lG=LM~{CL˗`0J#M 氏ʸ@}1`sgGq yxd>A`5._H Q 5UZX 2) YJ>Cl[dJ_%2,-wVގhwSؤ_VXŦ+GzTR-"@HOqqb Ep SCզ!Tgut- F!ya/* `>]gϵx{J^C",Ny2]3 aP da F*7"|/3s@Pf#h[PL yDVk!fjFJ&VQSP^!z=h_- g'7N!4}8 `Vo sr?n[Xbj J5+ت&կ|s\tZlpr VG#C9rJ*#dꩵ @A1;w7_YE᰼P‹$T"Nq; +OV0.`0[Y L4ENi A>_V4G;xY>"@%2y4V$JE!Ŵry8dNSZb'ne\!Bj,V0H3HN` Y5X. uFID89^ <;X ?B6sL""6Ρx_>s1ra ce7aX+oR=GDR^A?y-*riy2ګ\83>SO^BQc$8Qè#H()^,wvLo].=r^xt+t*/hҳB3f4 (7X&@8,ģ ƤV,'E>ܻh ky`3rN!w:}T`)9gֈ( Jt:&F[_?vO{/p/b8ٻq;B@_ h 6j~\&wjDU Q>$=r &  5M̳r:u:6 *pL&ƃ*q8B90}ls5r)f#ss\t|F* I-f)S8TOn' EÆrY A=z}.Sxm/]itF[ф- ;. 8*g.ViAU: XGzT3ʎ'>?=\[.M:Wɻ5Bp?vM]+SRLЩN/(@{Y,=Z F5%8&D˼6dl)= 8ӟj3xn9%o BP'ԣ>+3AfA’BQj1BU:Dz~-Pǂ!B*)IД9E+6&%d&Kg_qʐ!Rъ-Αk&|ʰ@-U҈qgm>y@c3+j屮0`VF N.D1hax4OaW=NjgS>uizXL[QGg[7x\L ]iSJtQ3{XظF>j.kmɐ~t~dmAQ @IM#dJ̐%@̔0w(ZZ8A(z% AэxXa|ko^h1&-IZT|X`VJ@qJP>w-3T1ebRa6tdqhSm/qd8=vG@Ը?LmO!Gg$?$C~ |L@t`*aA騈,'HWZ4Ȩ=D&8r'׿24K 2?BuIHzo3Oz $s:K"¢ sy RI3N"GX:kn<4j4*i4@" 3H.ETzraMVdxl'y'ݩ({PCP8WoOG>y7M[+s2}D)Q5G{H;wܕ<T+Б*`aTPsq xOCHֈEI݃Y S-E~*ԡ潜Р҆lx1Eנ1|! aQwX$Z#Tj(NF AfK喱HCx)[&8j‰^,]NXU$+ Î6qzI?Ζ(.`t ,E7$4-ox-ir +2}44cDp)qBrņ%VIДVti)2F2ߣĉY݁ 1VaFnx Œݻ"ճh7=|pjR+)dhMэr񠏝沼]Sa| wyuqbgl=,!Gce8|bzP*bmn~ )HBE*(ɲA%1FO`T"&PF;RI^Σ;-RB PƲhd+7} 6 ~`Tw- Q*%0fW@Ca&ՒhDdo<2Aa0~Fg>uPEY( "J:йɉ iP&"ْ̄0oexsğt?n)##P#zGsCԽ_Od\zX} XrT{0( B-'JA{E㽜Rw&<&ݶj |=HDf?bSa_ܙ1Y} o>ឭQ<|p{,PTfڞ1Ĺ iEa\SMzuRlbP&e T1p3ݙ@s^j}@ l0͡p(H+#44VŚqJ[}, `)n6JJ'\jKI=dj-^|`7ߑoQUDA-;Q'o qyMl. y}oۿLv ^v$Q|@Xxq,~P ^)Z+ 8&($)u7C w3hLSVCpPդJPT0.rO^pD9y.5[g\)rK/*YJB(/E\1:D<HKH5+( Da P䔥CWJuuMEmq]1e eP;([SٹY=,@8w L},:8ppuF(^IoS%AD}9@A;[.R:LZfPb1S$)qvLH@hhne 8x\as)d>>G5RH@#P{#y+@x`Sۂ2w(SBGcI( G2^BVw~qA_sŭ02?$XtC$`?1/58aǞx羿1u9gM?3P)ؼ6I$὇ég.QJ9̭.c:!$h,?KJ3tV -zN2͙+Ǥlt U lL( 0Kf) WKP =ݕ s5DەP&t aŽm'D€z`BC7#~O~نWon7_/d J@YL=H:JGд4q(B*dsV${te3LJb:5t!E- UR@h8Y DgZ` ;CjlK ‘: Sj@.[N0Z?Ejf^EH[-n31Q 3ևT y 6Wxdх07 s|=囷!pcy*.ŠVǻe Q!f7^ (jjkg{((H m@\Ǒ2^; hH\][v.dnƌzCaao7E[E;KS]q/P>`#bz=ү/, DC C8P&PJ#B\F鵂5j50,p(ᘰXGYqu,ub^VN aYKVw4_u;wNL߼OSt7|nvN^ŧͅp%ѿԠ5{mlVrRz?C<m HDYb+*fj k2so8~jHZj7t^O#tk=wrhM(3qVAlLwH2'BfU§I^P;3a\UA5OQxB`dF'I3T}$6"|>0£0$%C\:: w<@z ?@USh.ՠKLEWpc٨poQh戓7ך^#VPoT7C5SqȕB{Ocu9=ǟl !dv8値chh#(Y1*Q*P)Ӳ02,:, 2\B{ O-4 NxxPC*hSW#N^:I6ј4NAy(-l". 
}d\[}TQIZat%dou3Co1=^V!Ba`I;P:Ju'9m p=`ArpK>YCݵoa#JCgFj4C{زZ\Z ?pO0ӏ`ž\{%޿ ws ԘT\8qd$+ @/JJ@XCOC t0P, H9|sD+B}&͊V)㛀+ HNI`SĮ݊dIrB1B%0Z0,}`B H0c e`s02ljQbޡ9#EECaIvn"`* icD/a*:'v*̀" 0Z@ecssBAzdPxc-nP4O&'^iOI$Iī']c>J)&Aâ3FyґlC9Jt{`W I `k(n(!ngYnIw1EK2+pݪJ(2ӡc BeA:O(<\eՇ0rəJ{c )-TDg `o%\-B ;EY `B0I1h A`%B3QjRjBB8_(aaVLkDE' le`XĨj(HdP@`Y&W?\Y'( ]_99%Bh yJ{a;w"Y`!7$DjG\J5G+ΐ;D2OeΉ~qFqɝ3fXP ޣ umiDjH9Ӡ)$5 Ro4q E|58W)kS"rWp*Ti #ě*O؛j0w5U~vT°(E$DdQKBLSFaע&F'JQtp͝#9Ɲ *㻗tetl jsLW; 4b׮7dQ"t1Hɟ;z_e/tM-"2 >bK`{z5J}>Bc dul$J6qU6,IG?|(a3?kzS˜x4g2<'$Oҥ8">C, i3:e 7X,REVbb|Fz3qrDvx07 Jڴml;Pȿwҏ|Ħ;'8+TW ~L<f-ꪆUsez%wĝD;'| yyJOZҸ=F,dѕ6?zfWp9M/OS?roaZ3^Hz}fs6iO4MB[<^ݝ5q8@R {&0υ PqA.QxHXxT6M l5$XM(".B _Z5Rz01Z\cVo/PBiaEJ4+xIhSPS98lF<//+%K!zŕ ES2BSUzT,=yocxoqdR2g!%főǎI=`LAoὒ؋|/mp~=GSb+2V5}h䥡, @!"@j j*Itdm*!y,wf4kB-v TQT!7Y"O+`R̵ϴX%\2j71!G눆1aH(%w$aCRZpw52yenj -˧>7cOE,.cLĨ_gp, &'i4ceIM-5A)-:#33%X *|x2VpZ]SN cWAlends{xaxa O'|!٠@LNcudqrV'fM&ŀT+NNX[jrSxUZ1>ǯ||GS OHJ̩xCm]ďk>A~$0t}A,6 X8:g+* 9{ Af9IRCgKnH;fRa"AQ%%hY 'ԏ&KM|q3Zzܾ?<9|^"qDA4<hJݖJL[]^cqm.Ѣ`eo۟g8C]jՂ.t'`Ǿ*H43I6.al4g= hc .\OTi+y <听Ҳ!wOLOi|"\Km9QΑ+8ґZN(IҜs&dNE8l)w:k.v^ل" pxH,Bh,-lrWzdgп`#&e&36;;^XfrƬMǬ1y,4W)BB6=a:8U Bae0يơgװs,ll< G[x0_֫dzkdq&]ӛ1ЬAV½ ō0Z^a');7гWp} G`<%^ 6H^B_(ާ_ N ߥ_#\YXj@Ոd>Iwi`7Ag(9~5,QKk+|?Bdv%b3=jG`̾D?dE:c'~EF'0_1JL}CQ%G8*2_M+ZEr8teHxz_y_!!bNJDy'{)fZkSN_ݞgR'=٭BqOEJ0:W8yUF۷y'<\t,i~Ż»;ة?O/:s~4oY4#38<%:\%8>ӕtW<$5þ#>3р+AHPFph#;ūO?4Z$;c,@j#R&.l-"6sS MR[5"M*ݧ7I#j#\TFRKYg`=i1zh(fBT u`-!JVnn5*:(>> YZQT2́B'NHN{߻j:{DQQ;@䕎f/A5}@ci ~8 NF4E28DK@֛crJz $Ԧ.`ob(()K&S `2xD< YUϰ9#v!T$1T5:2 Nبϸc e:OCqA1󘰤!Ig7hy`Uaj J%+g9<,Ì]lj7J/C݅޿o@\lS,_%!"8f\>$_$Sl1|1Θ-qyr{c7/l2nuʒQ ğ0u W.4Ȧ4[4eptsa5Rd3aR"&6Ԗ s)#c|d16`y..= MZ!ai"u F q&iG6y\&]9-;D6fg9?zCDx<0zI:~sy7w^: Ьor:ߠ .XҗIȖS6&mQ)k*fz[i+`oqkW܏xʽ armjM9&,sgbs^;|ϲ<[h  S6i:.byli(ꇸȣe5G)%86bM *ɝ6 #_p+cG8ጫ܄4'_F&j)ZNZ#Wwrx0r<):x;-}ԋs ?xc" h23'xͱlιqUo蟴1[ËO82=a|N%ڷ#c0w`$ic>yb*:,-Ӻ:Ow{>Bs)r_"n 7),)!RĘ8F`F! |'95(D.d<-fD2hA r\C_ wycΔ+gIfPW"@#8aL M.w[!0YWl(#P4,!X>uy}}{d̥ VVJfc$A×X^x\T3 [w(%E`bL8t҂%AwJl,$"ńqF)SOiB =+19ͯxL q)n{V$dw 4IӀ&=Y$=ZFEL =y:O+ıl0ܻ!g'\pqFiY@^0%fxmM"V= s:>tCfD3FTNneēSjc.zS9D% TAɹ%ru8y3_ː<E(pf~VZER23މ=Oq^"Cl@?bH{T!gCB!L@$pDS^^= Yp8b1/hUfHpjn)D!)KrKK3 3yA$h 5'nQqtSơ2h5q8Ӝ'GHF?I:GDÈtosoz)Q ~Lrp݅En?ϴդڨGy}z#EZsMi4ZAw؍A)Qć$s  k$&=< }=1!R_gN'c9XWM ؕLc@l}[OUJD`3)Cp9*wi$6}YHY` 6Q7 1[Wu+ry>׾&[{b+|.g0AS48gm :[qiND&s'C8}nL B2Gn 4)7ˏc>ɯCl~.%/bk- cOG[|DÂi}]GK3ʢɐ6y)=7;@qg>%u —Ъ{.=58xL8wJëD"UTT'w<΀&ԝw;&lS].#& [l g&.P{Hǵ|[GMt~ƭ{%wJ35l1V SF9Z}jANPzO}ҲMIj +.6h١T+v3+> \wYPlR0ycݭm6aUu+uù |3k|)e)ah]ܙ)^0ICN1K Z B0UEznfܾ?̃*\X }OmʪHbh $X"xd] w_EA_FqQb "PS#g|JR-S6aS,Z&OzQDY{5 pڮa٧>v&w)ũ^ Z1xca@hqNQ0 ŐQ-bԨQ'OOX纵l1(5tGh5rJX幛w߾z3k(MWh,zO:g6פ:+Qa0+r- cZl/ q>ŨRܺEyxt,|)3%-LɈh"=2hĖٴ!c]V/~{wﰸ ;O*11LUdNo\ŧew<Ɗ'|Bvccz]d[2_tqx꡺n$2_,B^qi]/$zpK &z:~)w 3'dJ87sd;|#L(Zx+0 q+&iNH/xci>'ƿF[l,3|C^ m} C:ZrN]b95 FbKZ.cSK3Xob:RB;.4\J_k .KPCFHe@-)9Y25$Sx3bR}剼p[˩~-& L{/쁾i"%>~?!+K`}a7/IhM}e{ִ`wS._o_XyqM[?<BwZNL,S5'GNvkS~B&s?Qkj럖ͧj4;}j}cpzoU>Az\C3GD 8NN[$ }eĨjb%o7 1;SF̤&)՜`<%zBJ sU!" )HA2Dr0F<侎7,_dM%M &SOИQGR8e?Ίȸ"*F*2'Wq!QT"jڪ](#҃1ɝbNmV9%GY+!ښ39#lF}S0`<SFQ ݮea6j x ٻ7ᡫ^T0swZܸc98zJqbKAévT*gb4Irf08ʡC%x!hUHHS'Xw1D?ƠwB1ɇgļf*FT C8xp|43]oegﭡ*mI* /袔*N$Xe!R"{n][fYA٬UҎa4j^Sc(YV} 1߇Twjwm↘vwʶe,Zg)E ,0s2祛d~1h]'rFXz[#>l|Avƛo>ZjsE>1^h4A> `V9di%0 :%~w[ה^AEmʧ_H`bD gg%qm+5x#}{6/MQKqS8MrIN&`;J)#cc\\ébfYlYI̥=|1aù6:A0T9 c+6)t?Uq+0˨ &tz,/6y# CK0<\qY!KA>Y>dAtk#Sz䓉$W,45YXF4w%e7hCt4r1A6,3NC =GS\ 0de*IW7<\{[nq^0=? 
Ǭ(My/3Y(jR#Y⤐炪IEy0Qu<:]:]|zdtA}Py4%`dVK“ KJؼ'1uj8@ Kb}@Ds~4׼"qԅXn=,ڻ֬kQ-tx~nu^?d_7?< _;/عx}}k@My~ ?اOkŔhJFPİCMƬ\R.>{?DE,%7<ׇgz>T>m:0,bvqXG`va!'pmm\VI;jy]*qA#HSNix!&D""ӖxN t \ F >X+0rjcMټOnOٿzL4cb_`}apK\zn]'oҩ7ѧ侣'}3R; 1~ H_|K7 r5vm+n(1ȥ\p}D}Koe9WiSnXסbIp .JH9Y#mGؐLWsiM=E$"s6tzؼ'<8pm IC7!DKcjJt:c|M+M܆Q?R#B_8{9YԞw=E2Ĥ!NN"#@_R6|q28$z*u_tYC?+|bx</* ?$am{n)f9~v q'&<|+#uz<<p#SFe"1 AK隂")Nkƨ$O8N\\W]KGݐ_5F^P\}KaT֋6Xx|mWtq%bv6O9H7J~qc4IGz֗Vt^4/"#: ,t IDATzOmJs+QBc022=zo~{xMDaoc@l\HvO7BsVk#{KL6zluoQ9Wvдc:S1N-K0T!N{?Ƹc璟# hÜK*a*N4/4:CTI4!]54Y˰S$gEavNs!G &ƚ "*)qC'44PѢn#AIhRyb=<^2236Ⱥ$,a:7U4a$)ɯ>ҵ+شmHu⪦;]mnqmOϦ퇛}kq c*ؘrQTm(kGt~d461& !^LZ2+‘xob\v\£Z-uPryJ6Y(Qj7\*Mx}Ns(↫a*{n32]LK~pg¢lsa51C0٘)QZ6bTaZfi1| 8v(!zӉLu#oBp X'b~LHm!`=$ <8JB9j‏/}_A٨YK?ӆiFpxeX=e}.=S+7~=IɄՍu 79w?~\V/ ?pC<~􏦼>Gwjn~Yl@0?8"^_Ti7C7Cy._Hu6i\<%VzdyM,&tB<5 J &^饨2L/ O3̗6V8p.W).О/T)g լ{BSIn~Jwa3$fD yQ(KHvpx08ϙw;(mKLl. E%c*hzXkEO=TKtXVqS,ߴiѓh$y0a.PNE xjzi!A^7l |4 _p!bx?F@ y%XO;ev2_>k77x`ɣsMf;ZTk9|t8fA G|0E{5A;Sz^/+)$,MCDpG]:uE<===UF/idj3L傓 9֜F3 1 gJm6:l ۄhqAh4pѩm5V_s[\~u7}W}A__wwi]d]WWN_8Տ7xUُP/jMTwh:'۴G}* ; z4; }MrK$zInD^(dܹy˪6nhfg@on0b_g8vq1sֵ8LQi)˫dD`Ǿ? Х{Zb3BlMOw7fik'3mM*\V,]6L;+D3(&kJ4$XV&jo+I C[Yڢ"]C 4YQzbnqse!a$[v_62UWk;OVՎT58Ӗ$;nVJi|l֨5YV"Xoʯ5^~<+5xto&q^qh/hI7=&,Vi*:3+`Ѣ*Vf^ᛊ&j:RLd7=]P-EicCG:#&'fj=7|8t zg8rP=N<˵]N1;}3͵IGMMCm^޹ic[۔۸OޗB3utn_@EQ^S7D.tPU[GB6QՍZS]ܤLf{{ܑ,)̏0 ב!EAC|fqLѹ|ܥ!ΈYglXY \}yS^x*RB}v7ӥ"6$8茨ju{ʥS k/+lF?Ip_P[7ltk2ϴ3.e֥|IQjQdXAqA8ڍ\YE)LF_'M݌˷c`euR' ɇVEd` Ye4wQB+[##53PAj Xc<ņB3*bL^kXk I'C:H 0jQ/?5tf '|aRLo;g`wE'n1ftf=/T݂Ğq7 ?(x7-qyIƢZ+???<ƽԾ_NK:9S)-)K&2Saq5m|9`+9WvKc;SCO^g=v/yrrBt ] I3;1G9R+OW Q"]mA"&ʁ?ʦiY64be0&\|);ʮ`0OJ.~yK^Iܒ$^, o@)'IO nc1>u T+J|y Ja]ƴguWejC!v7~B7SAg#Ga;C[%]a֍x̻ mŬ)$bЋI3tp|\2לVjԘX\`_c^}ի_acsv/}o} ^Ig2+-iPF ˍC.̧9l֥N99KUj6SEAPfc(NPմg JI lYxS\HIQ?x7$}* Au0;;.9lNټv ڲyxfؔ"VÅիg%J.]ڠ & " O ĆXCݔTwa˒mSSꬠ3!ԴN_6Z̨Dep4*f\(OQAL~GW7dF9G3>SWalc*+[+l$oWl=cm&t7SE ruE,z7Ebc'S r뭷`kj~yÝ,\w_ptfv>+)~?->uK;D?+R~^c"kAYj:/)ytASTeʖP5YFaQVWs r-3 8AW27T5*+L݈ƁXc>ϳiZ(kTքɜx*7-h@}`>e\bQb ًRHp137q$KzYdQh+ڶA\֛wɩUODI*Ҫ_k{l>xʕ: pU/T3,,#)*LQa54'OE bu _!*r& FsBgN'-*,!l%킼Yu RJXx}13yVZ8@MLqv팮3 r:&KtUUmj,sȍ0}}>~ݖ78xg'<~b|ȕ_yW:J4I ۧϰg+7lp@0e)鲑8n60f&jwWwEqI(|bvYK޼95uUuWu7&-dJi?X6dJa !`~Od0 @DRgȮ3++=#fI62p.N;N}b_oYT<9+RQ iMb5sQOfZ%fJ:(h8=Q4EM()p.jU[r̤\LˀM8?;*&ʛN-wHє4Sأ D Sli܉>,exsH3, ";7V)Ϯ3n (议awJȎȖ։jǏQws|"l0 t28%aZዚxL cΛ,=~:);z8My7y=bf(#2C{QBp.! pҊÛy^Vԧ<` /S91y ~rP;:~{J1i8<,Ν!"d-ZZqLv;DAx*(YBk'Bغνw{xLQn2N1!a>gJ#yhf<B%,=;s0aœQQӜ0!%7/vu7RPO ``=wig,CwMKfrf*/$_\?M8̊y]1(LX QJb&gԑ[)q+J:&PG`=n<&^, :$6 P:LdUx-o7,v ̠FTžgb4ݜ,/,.X$^?CHfYAa=_9T~T;׷9l~Ċxi.%**ѱΓ9AJ"LKUYh˖-#$KxYMT)"XhN|3I5dHƟ#REˋHy"UV :β AilBq/XWuKQMuej]ldLݕXT٢MyNGoY*m?EPXAJdoí\E^f~-G3TVjﮒ62S"d}PCi8ѻ8Z{Ѯ6R'ZGZXMpL,Ke9u_K|g zWg֩⑞Zs{kYb&םFD9PݲZiu6vid$-j di8sD>(d@4kD^&,ta#s~)GSMǥJg|Bl5jwdXzQ4^/UX-B۽EDr^77SJ5]/" [!C):.0Ԫi4,rb);st֗eϬ1 êQ{;w)D+).N,.xQw:e׉E&SBLjem$Kщ}̵_]wt՗ïP9o\CfkwXŸVoq>R5#(n+霓lSW $mQ deSÖV_O?ꕧ]D[OTM_MίMZ{ku:z%|IU]Ztyn^{O|W.sf#HF2"[@Ayu:xz(Zzw7~K c6-}w[GC w:`^njDÙ4X#w'ĒAZ<.y8^%!cX)ki2CRaVΘl%ޡ֢^;;ʷGZqnTMCoMEp^qjhj|8S5Q-t$v\@ km#& hDq lbglȇ3X`T U4Qxy?Wo@jᬕjhvTHZVqrfZ cډ0Q<U^ mA,&DME⽜L]8eý[cfzj=a˶v!|;F + VNk5⠥8t4*5 !Ś^;6w+UN++纺z/t8jx㽱ߛi:ƒ FKmfw~^q~g{I} ?O$„ IDATk`c 1ٛIMѨ4e}T.AJbO?|3OZȒULo /5=4EAU5Sl4;UB+L#LY!$(b"S60j]bHy9"I r 㝟 Omȼ uڀ)!Nqm\h{e?ĝ.qѧCޜjxHR3Ҵb70tUzkT?6;.hGQ?`FF4L~ " U6|ìc%*bgNϞtgoukT% #U6ltoo% Y[ i΂$Ҋ!*T3u Y'\%su 7۠Mz+5&"QW y #k[S_$,DLQsx˖`ڱv  5 [1UiL4Hʸqx uʼnnJb(@x/ψ&9kk[gTL[ bjS/퐾l?cd0DT5hక^J]hhwZA}om͉ %5$c &qpDh #X.jHB$KݵcAa /9>^hSFߤ X'9Eo神 cA?tGyN=}B]mʖieˤ2eѽ3esr,){{cξܗ$8k@^7/6':q)ekOl'.cG9j;8G?_zolyGܱ3Q, 1ɄdB瘍'*IpB;%l! 
ݵm.zsKZ>%V.rN_ҩ4"KK,l,?9H-9&hSƪYcr}@~=(S0` p,h0Ѽ򻙖fy)ʆTYg.nw Y#Oئ15#U yngJJ4ȕhH+)l0>dEC232Ӷ-dSKpbhO e,i{ܚfԻye84~Ws eGrv[i7A5ȴɥN`o w5.YxEx.Np{Q,X^KS"r?ss/<Œ3<n.kϸ[\:Ĥ繸U;K{aW?W@.XsdZş6l(yWʏt h 1_6U|/sF)ܹ͋ۈRw61sݽx6cZدk-}~'!ѐibDY*& Y.sxC9&2 7/O3b\2=x?W6@fQbRѶ"댾 +?cQ"LdgpChNO]qa>Ma.tSWB6"JSbUk[b[Rb@ 0aCEہ"\iE"*ќԷHY0SbŠ)c#6jc9t1w|1tA!04^c0~S6;$RZզ1 aJt4Qd!T O̕=17 =etg{y޻Q;ӴNRy7S=HmN|<ՊfQWw06=EYn`oD$]MWuȳPE\s2'.2єiTY"Ocipm P#eBYOstlWQ%JG)ԡ<(ҚebBUјu l+x+hƱ2fNv ŗTKc1'j1gj :iԑUŘS{,"я4y:UݑSݠf1`B6c㑅` PM1#56e(Q&xoHhXlދ8n ";QM=ґÙh*=wwy'=F gх&qyDj!gy믽D5Nrv_KgwȢ{rp3;]a;'6$6+Uj~,DVVbCn=ވeD7PmU#c Xb 9)/8DBTҢM8oK8ة`yM}ZMɃm9l_8#:'9v^N'.wJu>7縓SJ5qv|?)ϡ4&2]IIm)kw3 [sGR)?wOLTSR1RQw,>Ƅ9g%X;Z1t޸>tFj%uϝ}s'lusw'}c_6oqb@]{玢V`In@dX(J^K}~s.SO $z K{}Z.Y[˫TonmPT.lkK˺|ڭ"ki>^./,=GS"xɴѼ`r>6) "o(|rܦ )8S93u+ԥ%,UJ]-^̈z-|,_wnEQ; 㯞di2?~Q8fe6<٨Sg99#EZܟL9yVKIO|4Ur 8g(m1 \i1?e,e*HENJoGWT%1k( 8RH,d&I!jLgDcXq;Nu`[1Q=q+I"j8R^JE%:IMN*;٥lggb]?)nKg, U,h{S߉e)>0tiMd ~ UCU6EEovTj]A^̸x:j[;AL,]`M?Y!ħϜ~nO2gnȇ>aßP" \XZ?8gay/Zd | ~F6P A`/j뜙 a7AΦ ~Dm 8B'"g?I/dp%6=]Z5\—<3C*Ru"m'u;_r.91|ǮOr֢*^[_:B8yc;6CpJO9qؾ5f`}5 \љ vW*v0gvuB:S/i-yBiFcg$EXZ*yK+!]YX#lVv*vqCrp7c[c`'M%9M%89s~5$o^y&t:'5; =FȚGA &qMCB(zCR2D(H:^ۭCM"8٢ !)g9^\f\WzU&&o%)tߠR2H<>d\x AMd˝k #tNg|ح5Z4QR8[jZƆY/5U؇tH >; Fe}2bY<>Ek?ʆS7a3z㯼JCaxqNdH͂Su%`}"˻gnwYcYh:,'򣰕)L #GmXVǙU >KnPT8)o'D6")lYS窐TGA6W*tCOp.v:8*_@ZBrЩ O¦Vʅ.aKY(g.zs4YGp<1 UFMLjCKd;'s*ڸ6na7}_vK"Bfe[mlm AF]Ɩ.$3O1턛ok8(F̏~$IoS:5YG>5V̆aw~s[o}ٰW_{#'w)g_~|z 8#b">U4{x?퍯?zG:USi֣~(lcέԉťpje."kKN'jY[O>CU5G$ILĊh^>)Dۊo] rs"5M,:-мP$YDƐ&Ș eg8( GuD`"lC34F!022>; *#L1lK͂VH$h\I#&为!UдIq'ak(1[յUᅫ}^ܻ=䩋}Ο[ׄJ 1T%S;BlCЊi)Q1ڑ Q+e kg9 ԵCN >dӠ[3&e67tryZ폆|;֨Z>Oi ч ~G $M 3 }~|/E)Ҫac0P42 B+BPUGkKPcT+Ԇ9f#Aj T>NqV}P !'C6Ɠh ZISh'z*A(!Z_h+F˵?\lxW귕 z\o6Jr[c[/B ?>r}- g׻f# fީߣ"Oyl^c&.z>ma\EV)h^j̜ף44MM8qCooKsOt_I\B`1ǵQ>3lAh-{?@X˒Rʗ4CqL8tXyZ@h$:WJ䏞*MG/|2QomocXH oWJatc^bPq]$\;`☚0ߎۧ|`-lT;]^,$`䒉FN^ɠvF/ꊦ'Te(qP3, fI6D!=Nj}pLv!d{3N-p*]V9dRJKRsc&|PIYMNMLgL$pX=[OeKN:x8s;QqФsI;C'yZH@]@HBhD!B\Bf, yY4؏Wg.嗟ۏ6|0wVoL7|<W_g_B6D&o xvN>ʎ .p{I˜][K\9{g_p: [{{~㺪"/KvL!feΜXW>EU@㽪#dM>LQs=/C4/}6:$OS>8ȘAaVC就 ,bЊd#!|3Ai F@#{9i?F#<9MOJ៤V\e!gO "/ۍЅV~Nּ%%!@dgz(ytYzQ`WTE(G꺦}HZBb1+2(кXuQ|Z4*4K5pLrlsYZ+?ZQ#_.8=|ݙphQtBS;B;i"I,ng`qkZ cєrV pTƻc^,!dTgɔ/\? 
IDAT_1 K•3x-n?~aC}J7|Oe?K6?^ X> `[TOb+קeh& )J"D( (sX3M QT 25qd5=A-,O4nKG̡Y69)FBޥzO;H #uH85mϟSrÛzosƭ-PeT̚v/L _yAE)\R*djnOXFœoDFkU#ۜ/i"G::h~zOKn8p~+b(/V+kv+ۊem9IըkxWT+*Wɒg5k;bfwjEfZf[ Mnoi7?4~編cUe.Ro$W*+iᢴtbpxZE5Il qE~"neQm+KZViTMPUc鮂3X9BJ` '4qkC[ՃPxBJqeárTP 8qOZ&D& d%~AV3={cϐDnnCWvm;VnJݐh\@#^4Q+cB 1w%_pp^TE[ JlrHrYrX1Aa#tLvtW+ȾPs5JM.><ԅ?HDQz)N\{:V3_YlLugG;JNҒw7M ڴsy !5V&u:I$N.DZVlKA*Db'N+虎QE2 +:i1TNoPm*Qi֡IU$֯^Vj~OaWٲ6x/I$ d:M43*,b nm)沲-NO-\e5ldJJΥo) IR'kG34Z|dz-lnύ[{zӝ],-QP =ѶXûSZ:m~p^3oyZ[׷5;hzPjrP2=VdvT}Os|ݬZyY֦޸yCGG#Y#ΜX`\i{u[-}Ux).o M@X+6UsG{˲;s7PYsqfqVЖvC ـZȀaȰK{#+ZY/,ȲD$sUd֐UcdDFċx;ŋ*v7ɢZ2y^|\հg,R#4d dSxt B8FI+ !PzV&kVyy(0)G !-WOc=;2 lIgAJpij&&R5YBe-FkyX,A+*LRSTՍVeL^H7Vd[&]FDi,,TY(PM`xc%XxŁQUCϰabPU֊WVZ!meF\I5[g:<-ޣHkN9R+U]9B4B7FqU ͼR/mQ,湜sB-sH/e0BUbcnxk:iK׷uicG'Z>0@/D\}Z>JG$'i2_y3 )޾,j9,s븺& :ƛG*)g_Wo;V1ۗ8.n6Ag]|7i#t 77Dyjt ԕcѤ|Cz sHCbI̱Gy_xKp;ӄ~ZP)wYecύ +ho];&`>`~qHɚU_&mm84s(*K#ѡ{|3>`;8&44^S̺"z5fv+'Ԋ1G#nnY>Vcvoy#:>o}Xy"TGs5\&2xJV{?}ct?BV)+# Fm17%j*j_ C(p44e E@^¼!W5c=5B8 0 qx ^Т:Oj8,35fa&k6$r3rx†E(2Wym/ݽsR弩j8~/J_T( M$vʒoG?Th F6k^YdmvR^kY8Q-J!|FT*$oyᴒg=}o7GGO",NAB;)uDžE5yFVB|6އxaHJhuRhӜnr.lJg &s†TWU7Uԥ6&pܐDmyp1 u~G|~zg?sf;B%1YSEnJ}N^8D|ScV8˜SeΨX0sǒoeẸ8~+ii4TyhnLZevMYh}iu5au@{}r8Up}\s&?3.POh ꈦx~Zhw~(R( K[zɏ5B2f(O5uPیxb(5}B"?U' -gi g֬X)wa&UձS;#݉٩DvH}WOxc0UL_=UT״iZy鉴u!/_;*ʆBJ5Ȥ룑e3$J† .c[ \SY@ D6PcШ-e~gN  W" YOS7z e0ҵёVgnZF崀)Km2pw7lt4#EQReGQa1N2_m-45}Ob(8d4ukLPS8UZ1tSqDĺƷ. \8zԍ)Rw*YJIzPVFO^KYhjg2S3]/-g}q;c6TH1'Kē gYX'qzx呚'%^z"CF60g J,):qI$VBg3ycjҊU8QVUW9$@9 8 ) t^#)UO+r5Zi$k9弐t9h -f͋[ޫt{wNƕ9\j\_ZTKy– 9QkOۑ䜗+&DD TO M!J.ux4z [w=\#nfZu圧MkBiTIYU5^qimmMkkkeIK&tXn(/a%k3U(S;n4*)m  |+Wg9<,a`ķaoˊVX:[0h_~J:\Ԩxyk YX;;mW={Eˇǻ?U+kmB8®5i\bCVn͹˰-wDK|+}pCk9`U.B!&[}?;>\ckԯgn-`5ԗuctkێ#ȷ7N maUԏv7TDt9[}pt}Cևz;rWm|1*rZdpS&NAʼpcfxORo??>xO)RF-IwJ\&u{n ˂PLJdīh_iiIGjbI6AXv $dpcoJ6323m3Wo饯]GԯssO-^1M$c6Ϸzz=pEGaݞqTPU=717:0%T7@F+mDs5/;.0"Wkf 7BkQɐ#gE?luq[kD>X8Z 6̦ҳh,΃S{oIMq#ch85|QjC_%Q/OMA4043jXP,]'kudW3cV[{}{R9CaVz9`N'[OhQLOzoq='_ɋ?ˊMϝxo_x5ӃC~#y˗?YS\ʯr {N= /R0 Ux>=.g.\V/ks~mg/]ƖzaLah tFsxGho8ul|Q|VPkjZG!6 U޻͆9{ᣩYC1?\3NgD8'_4sNCa$DY7>t?''N-߯* % gv%xRx;Eo-$—7WPS` M;IDJPݐOsg2bʆ^(k T|]TyQ #Q *h+_hey4RDx}o|4Zi$N),b}Y{ F%yŅͶ{G Q(9`KJV;wZ˓-mmp+qڥ]>%m>{4"lfX.O (qi4Tp%/ QDQJXY|R k-NkMs{%4qc)kQk"0` ጥ::ld[!{փI,2 5ַwyTy1׽w#+ȩ|N+WѨ 9JbZV$(pL6/Go?]8Јf>sc>8—^V$=?{o?(7>)vw:?f*{UAg\/3ɇe\GaB|FiP%4ŸoX+X?!ekh Lh1ņk `V:2$!a*lEf[I'Ab*`Fh䚎 b^S7?<\u~`7PIw~ %_%D-Ot OC͒fYFaэ8l9{j$2oS{N v, ܆8G]*a!ҲT7*Oa de7WPո0PiˆJ,;j$a[Øs^# , U4tZVF4.`W1)XJyx1?S-JN S\s ܹ?D ^+v?RfШF׾6A0͘NgpUvvvŋpoxﮫ/ZpN5"Pxj[ L -ܩY\v~zNsf8ŦFb&Z`’Kޗ v_|clRMئT OlB!5:j:_=~o>0鈽_|Ezpgh1lu<1mxy3ƇJ7O+>&o[onJeнw^>${ Uq/Iޢĝ2u&¬Y;J6Ve/sZa4>Tpqp!&Sr%;7Z]FՈfsW P;/Xo.LlsRxF}e>*4 O6KGVo){N' W$pQQ|t,ӥ9%"Y즌96)uE1P jhRWئܔ"}VrI p^L.n*kַeQf߸½^bC>oi\?_Լ&cS/߻?k[l}wӧNo|מּM~}7ahơ,vc-nBY)ɻԳژЄ$iWI ϩZuŜ.Y<,ۅea9DO \,#kGQժ=fEὗm(Kx/崲5,I)vߣ( ͔SdCl!]Ec厮xAltaHQ;-٠(=ڶJy|D^^#?X/hBΟ;2zXoǞ_xsׯk??w~,Nz?=&{Y~o⪚ǯ=r=ͨ)B1O48pC5:1 ,M#8TDQ@#XY;eOth|JNByvL2q,Ae^Nv?#%)aZ+a+q^vv pӓ>D,׼͇?zDuƖ|+XL}g$9I" OKyD]/iYaFA^b'?gK$( IdΨF~F+ 7 6[4e%?Q/UZ1dP8J@j[O9)T' &ilK`BCŒy"v_VV?^I鸱湾Ӣjg08:Xkhz>~Nb︡qӦ*|^6qh870\TVe^Qg Oa Ĩ<eZk2rx|噿f/&VBň_YO#pnC]4|n½?wp;v7lݺƺOՉC\a7 !{&dc XNT>'PME>!s!"CR|8 ㍚oCA2R4U{_Υwht<}c#a"v;ߢ0s_~'@o_b*)*zW-L}kMVJ.n$l؄F=Ê/8cȘQ5Sh =[n]q 㺡fѓyk~Nv0FXQ!xС]$շ!d n*欛@,w ^ `jysA*vv)ꬣq5ecNwCl]xA&lNyIlz-g O˟o~ZPSnSJ+XR6h3_+8~@ͺ 9[v7i||]V@|8CT/+:N ?nJ*3nbf&;.Iq*<-*Yz:oR,:>$^>,F;r37{o_7|-7oKm->WÔK L 8ħw@fϰ!vڼ/q?#t.7\̹3nKUե15' 5NJ"*[R' fʧsi w;I%Qj4a֢c6޹[o OWE.9WYPP7)enjGsv$Z+$T͈:v<((EE?k陋yeB0L5;3?]3],9uԑ'b:qT ՇA8\;eKe⼥=˗Z΃Nd'Ե[|diIQW5͆sv>%VBlƃe]` yIDW` \u^bdcr7_U215O98]8Ҋg/f4NINJC(ҹ,k?ilL4E|Sj\Z YO9ZX #4ӜXOu~j³7Dءpz{V& 
yM"o"y˜~/>O0OHZ-GG<|]{OiT[EJ .[4zr)5[G2-EEK3k/ŴW壆;H}Jp ;}&W ^۟ _ܖ~? 9- sÂmXλeURJ1DHRL@Ti-F^z^4uz;lvkk]qt!H>3݀G4لfso3?=w^.7[qd:OK0zWx+[)I6Jz%~$cY:)ތKx"7Wո8:FN5£ *DżET\[}N# 76BfdpHyo|^Nořg O Ů>+fG>ۈZpbD$XpriԯӪ&$#~J~p= C^ iGDp5ƄTFN89F&k6j?ocZOZfj4D{1 'ORIOaR&]smV!cʛxo0'a$Lq(V]DSKv,)~񘬞MU oSCW |tUauq2B>'$oI+F vڌjeDY,`UU,4%W{OT5*r\_aL-.Dm|LB\/%#,м@Ek g=y,ZZup4,f%e}/B2M>089L%6W)}M=YhjR5\,=#h%{[~4y#b<#|K޽kts}\A[͏t+l_we1~!cM=kܾCSbH OYɍ]%,LLp@bC5*+&lŴ$RG4RD,"D"frII 6XMbZ>.I ʹwt[կ~]?|unuxG?z^6[{2{!Ytxx0Q!~֓\t;^A@X_/g3'wZNOXyX.jL4ylu[Iٌ|0UU$qjhY ^\)eYڗk"-ueb@60Q/}%Kړįq^OҳzJ`uy)wg4+bУܿ?d IEXIzn%|FIh69wZOL)Fx*x M jgOb.BaB1 ZU667غx7'>Fwe Yؠ* >J /G=Y.bx9u~|~?P! }s ?/ G(Lޭެ3r'OrhƬk*Pފ@VFH'Xs2ak=׮*2n]m"v<'S~RE(44YNĄwP1˝x!1Bz>&f3J Eyãqq7[Eoz9&j'Ks2g 7+͂=xAQk;c%(yG^sG"¾'F~{>SW=M9_x@t J3Bn5{; 0Grt"9ٷl'1tz+5J9U1DпuQSQ}n!d;Wl rs I-Nٟ3/4uYS&tG't+廁!FuȨ48z>d﮲􄅮 WyXj.w#m]k[%ӤG)X{>L9z 5dŧhG/*8FB`_^G:RԤQd~]xw.qf^M 9cѶs q=?̷8-T :oN9uh$ RDQ=L旧ƖlW9H1vS3rdck,FE E$h t}o{{ǹ hH*iU{p֗}XV\LTLNQ- I4T4T%N都E>;,9~Av}Ua0WQj#̳وE'/fHhŎ \R(|ܔ dވu&>")xQj4Gؼ,ΐڶRʒPq7 Tzsm]_!~8b2A^-*sO?ٻt Od?},qŗ{͝n4;?kxtGDYJ("&v[=,3':yxɚD]}fAe{ܵz-[ݎ6'k5sc UL#L,&+uF+D$5VU,Zȋ_/E ((MEs=s#36'kvI/uB~' 2fo?KVCDD=F Z7*4 fqfLϵ4bavύe1)26Mv\!w15Hm"& 8_ۥe0,hիTOL%##*;JV&UfZ>*C.貼><9HYn'sd2S̷,37] Uڻ=*SU#-CFD2ZpquJiJ^^njJ. 9}JO=rY !kw&hx4*CޛFnyi﷊^tw3:{n];\r=8s#2_qW~>|O@1+K>lKe/H%"b ˭oLJ:RMXr\?==鴼`TFv* z|u3-T^Kto1t.hǢ&&HKd-JDL,QzZsfHHLƹiIj)_Jg=u~pUHrK74\OZ#KYj*~q%'$(abZ}H]2$I;K2H6X6K"ec2?*ijE e1elj.ti눉#Ih[% XA$0-=&#)PH"7 FTKG Zy_$bCUVgCM[T28F_BdbÀZ & 9-ekeK+HIciF.b|F+Z~Zv_ ّMDJLR,ݛfZL~Ly"(*]TD-Y)dЗfMH ?=DM+^P"![K$IE-aBOnLJ!5|\"Ifݣ7kҕs!WRѲ\G&/so< :ʉtGc{S~xP ?{|_d܁(IJ?YWT?hQMLَdHvG 5Zf!KJ`N ~_Y`L(UD^+ ^88-E#V3\H)DMZ*b lĈAͭa#f_8 6U*+clF""HBƤqB%NXQB)l.vKMEXܠ@K?V /8k04'"W_gmc}ۏu쐬|‡}hsOd?Z>I_x7ﻩlZ|x7z}O 1Q$E Q}2lgGyN$ ]63)IF}zR,18/FV ^D~>*>RרV+$i"!x1s 3Sy<|?=zF~ h LMNHِVAk] VC i4r߱c4*p[AeF77/!ls񎛸;}0w'A&I*DH0PؠQ$ZBPLOS4w+0Rbԓ=[PɂFF"FHI %iAP5C6oJ$Jأ1Œ82CH$ R #BS: Tu̦ŷ G cDŏJn#IkYp6^E{G}*J1JB"AV6qmb]Nj#/bkU1w& O3˕^6GC:Q|-':9X՘ G"w+o^!;294Ȩŕ9e GscƍY[~?wSs]׏: T$"!jNw,|\iD-x7RE+ax`1} ְ(:v; ?(A }#ȁHjyKD|o20hAh )1kՄxԓkCt6-Af.~_^h,mj[-"񺡉 Ƭ!KQDTŔsBUKdدit<؅3=!+,O~oLq&.'nN˨L7ePj_N(UAexL.%&Ze$٪DCYCjΞܒjM=F< y݁$)!YEW ƨ=bW/iFPJFv%©Jq`P#h T4I 䥎#h@EU2U#7VnD5)STNQ_:)G,^PN$ ^Vsyr' ^ FIk[[yCQ7!ȡ?9*11q,k_~x稵Z2wHn]'wݮrӧɱ>>67-{_}>CWs|(bWMnX@U9vNNMJPG8vF5A "ڈ^//|[+[:$wENMM-hiQҢy~9sA=sAΜyP<[ǀ7LV=*܆_ϭ&.Pd Q|U0.t`yB$ng|8 T%j:]K^1L8K(*EU#QTJ'6ID,"(QAYdXHh8Vb5d W3 ΫuNmƂ)2x7vD\IL.GZ.+\۫r6$6^4V=UuV΅ut2;I3TaT+ j$^"yaVu o,&u˫Cٕ/?wA.NiNɵYVe{m|YV\#twTR7,;.2vߩ!Kw}}7t6cۻUJ  Q"?Q@~Ory>{"7XnQs ^4P[Q f%zTO~O,̿ar8uJE)j:˿*yZ_kRJ [!QՋ5yhcgp!I˥u= ii)9mv8ۅI#adíD* :k@yЯS9ђD{Mտ(##> Q8QkB,R&י΄>&bTч\}WJi5j {TZbҥW4pQwX:)2 m7lFwtMad0͞4n]iîѬyfϴΤIU"д DBnz)Q]Vd,띍0־ /co"ǩtsT#RP#Tþ"֪[M E9 ƣQ8m7_ˡ,`ŢHT="5#jx>'kP$"A1X4Q5+W/|S]1 #ٟf?R橣<_ZUn\߾k!">_O.^O?1w;%s6QEN~Z=ٹvU%\yu_R ْ89p耊hP}ݦ}GAa8v֝ן8~TEq8E^py3 G#xh4l6T5w^CYBtlJS5l5јGksC#ȐT;^I1~F&vTy+)}jӬZMݞr PbE5o?:I&Kc\ح 1G4b^( ƌwy^sYѠ{!q5,<{./}-ǟf񞓼g?͡Dv%g_xU&\0w=N0k^+_[=fRsZ[?wO+Daw3,?!:w}n]uj>V{cOO}0bkrqSUJJWh x}@o"yA<8AQ ,T{sfboOiU%hzQT-{% Z1RMHT֙{.. ߒtTzl&'f1 c@/՞8i6.GUT&}z~!n=k74~ /fwOQ9$G ?FcSW?/([#{HZӕ! _6G>'0:.YPŇEk ( _.8^b ]%v5b0kF@1 ba-OAU`M%Z$EuIL4zq0UR9$  & } wqG>)0v!\jˀU$*jТȉ%UUcbOF bh"H} e!L). 
07D c ֒qC+-R F2uʐbM{H,9" O QtܓݖQ--CS4\,q0QI&GzTA%`cEܸerrif~k#C Y#Zz5mّbj~WMa[vrǠ2˝yYgfh"}>9Wq*q`h,l>?`0LNнZ_l=xz/WJYXX+͠F nUƑ; fc$8q{=9A< 0p:Sf1$UT"*2z *c͞fV5:KT/B!Dk^8D}/ +[iL"(ƁHI"8{"e dŘ$"cT1w'nG"yI;*# AYJ=mı4q)>HCEAjbfRYB6V/B%cTX{)5AD j{zNUvU G1 Ϩjvϕ#@bch&_8&U)Hj:'?Zr斞}UQM#'3@gKx縝x{on;A^>Vv'㟙¤E~S-25mԶ)&8F$e(,`0ysʍ>3lJ[[|Rb1G3 =GK0#wK=ހÿ?BA bM8!x9UK gzLm%޺|o=ˡIǾ [P9X|@:= fd!(+EM&7yqu=79\n*{:z: $QU]-ĵ0v @h0Z׿Fj쭽۽I"F \spP=LP(z q}7)F \@WW-$.H&Fΰ50,IƯ`הּ?C}:4o+ʛq ƽu =)ܝt4G&A|kRv0 ` #f??VUF+7Uw7y`F jT%'A-rb]I+!;s16)0[".qTB)$vHZRoyOv|̃t';'YkGlځ3}EjGy]beJ:Ƥ%5)3Au7lmޒ۰︇ZZ YZ[1]%^AvL!-rhroebw d3Œc*(ƹ K4*I:C)yf#nyΖhw Qh cC_tܷ۪`boX50,yַY \YC?tŝ< '~[^?˷A\w\JE_u.<7~~DENQ{xץ #J!=x7?wG܏tf8磒|qYfo/p.[w:aGTJ=8 VhO{˧q:Sm\ +!xpE hb('W`/ a* YJ^ɐ<.62OWK&P= lo| ,*E-æ1 OЃ8ҸRUXU8[C<*0ƌ[U<#ۄ*J'g ==ݗ0XCdSzC]pLOeWLS۔ѬJ'1{sЬI s@řqy/ I}E>#,F$K\ JR ALq0!-73:m cW1c&qu%x3^ S+@%PƘPtsN+g'h0A?L0O,fqr"bz RDoQ6"\3@ rL}ȉt(nó׉N^f[$EΠEhE[[WyiH#lvÇo}ws37/ܡjExFL H8=yb/r x/n}]ç7c~rʩѐUOZڙPq0FkiG,,H {~=n x:s[y!F$xVqb+>KDŽT'VG7{c !3v  ph،ߟ `e.|?:\tWn1u~#qIK|ݛ+\tIvoqϓ>CZ,mwv Z*Ya! Qp@m?K KaU$eG.B0vC:E3u6l l=A`o9j%u$LmPZ$>D%BԠQXY YBQ$qJ*hk+^#s"Fo+(BJ%{LyŖaϮJ0I) ,RxhuHJS]>=_zQjiQF诼 8"nDTi2w^3Km^eK6Θ:pR\SU DccFcu,coݯNW־mtcd*f Dta`a@# n%k ہt8{I(i69z<ވ 9[ 5/P3ҝ|&Nʱ" I蕼rD op =UVʈ:Sq?s15;y3]LjYhaP qdA5TΡ~R7q $&p-a"F[YB^Rhݾ)'ɎMآEqJ++")Q1b"p #Ri0VHF9pIF"/(2ifPq#e@z% -Y%7*ޠ 7VDDJ)]qEbkTfKY83 _z$͈hmr3ࡿ)# J\}[+Ȏq \:qm)?j巟Г}*̘:vțo?"# .i@/x5y t1EquXKW7iM69ڬP J?qk #PvdyD~K,u3R4Ze^(iyh*#ȫjL_LN[R1ԢsW]I)Ed:n!0fϪjfKLrA1UHmS<\WH4\\^9w3Sgns#s˓gsȋ⮟.~8px{1*K4dE9 tjGiJʸ_j{9piJdJĝ IDAT6ybzk/3[/6}ט"?Chk{wr!2W\p*a? Qd;,~Ho*gz$GDGiVAT([\`Ayz5A}%$9d`2Ri`|-hND[NlpO,- G)XqkXhnc̐2Xꗘ^%tk: G&`a$Q~.|isK|p 6,J@pUqDǦcZg>O Ug'X[{tV{78[4 cJFgzT=,Kt%gX(LJD&Qqcx(I ֠"|TdY(18sݼ7 #33&1IP%e4vX)N.R,Yy\PU/s z?1+e1(xkO:rXֶx?wzE>Ə2O>O=I(j:] Ucc[aUD o[yS;8/J$$a8GTJ5JW3&Ar̉Ռ|nv֑لS*TCZE#b80IH'dI^`0lWn2Saz2l*wfkI_S^8 h[G+XcU_Wk8K:I >pK#LWȝ% -Mބ׮9؊xiggu+kԙǙlNnUXI&=iB~?xl>y@orq*@P/A@3d>!E ``] 7/^m*.ll48)U(%h>*DB$ƺgw.R?sXŨ8OޅkD A5d"?ԍpDF drbn_+h:DVy䎢# NƤȞqZ=4c86)$kw%F2Qx~ihN&r߱$Ǩ0ֲ-pH25wUm\wC@YHf-G*{4wgHvt7h?P=r®F%OڑF;EEF-~k|S:i'f` dXMQ$haɆ`YY mST )RL#pgzzzzT*[u{t "m\SϾoR_Y3l~'7Kdh"7Zމif:D3m/-Idw?G67Gcd5&bVGFe&==)Nb.]Oj+8e|C\QI8A)CRV/ӿ2Ϲ/ L:H!.2XJ+jDG'إkޣЙ?WM2IN ,ס3867I_ QUvdaJseTY`[&yU))_ ӂI]7Q_JuDEFRPp'Jb\3=Jqw#&'v|B Dj]+id[BO­翩?~}ǁO>'O`k?7=cMp ΋|o¿Ϗ|73o^ =?uJN<cK7^d}~NaW_Ns}ot10#vs۪#7$Ys IJ% Giw"ѴݓzKia_*l4*Oп-qd}qhaosfa.p,.i6+IA_#,vٹ]G( KQ앹N^jz 5$UY)Dt$CQUi]Tc8V7$\(H䑫C?JQ^Hb|nwkB.UUHUΌw;7*P%wKR\:S"aVΥ t .߼­9t9t{WB:MG  h fym:ѪkG2̩喨TИR:2)^jސ2}iFX Z1=iTTL`T8LY$˿ ԴI/,h[/ HuxPH: $}u5B߫]&p/L}'8 Q( f H}hHDAAv@Ed[U%v5f bLc8Uxb 'bۛK)z !Qң~"~cB1ll\%|E q{AF$]T1,H,eIUZa}2J$Q9 پR@H@[H[,_^srD8Kx3P$/Z.UCIO\ZHc/b7pZmFWs;G0"=Ĩa2(Q+E-ɺ%ou(랞غƑLH7ũ,N84HiM 䣘hH|+ %;9Y۟xA2@ ## t+x8=%b$]ƷRRCJVV%` z㫈qG8[Bm[<Ɍ4_gd!a|1ܕSQls܎"4BsTXJDzF\lVJiJzm>ɬ͑%wjOw'[t%5ztM(]iF%W%H Iѐ8MS(5ЌdӃמ(u8Dqĩ0( [u"KtX/`H{K6Ú.5ŵ%&P4)kkT4b{M2]6ʇqe~+37"mF{e}Gm|Y>yL7.]Ǿ[Zc R ~V>Ms.oR:[[<D^p.=3OV|We.NTw`{w%zkr5nͲ­Y2W'KR,/rmٜl׃^~ץ H"(2T/4!Fգa,GQWsWo=y,$xy:{ڈ%R?q+g_^"lURns8Q1E"$/qwR2 ͕fnEF&ՄLk1B-.s8hNUD-]GΈMƼ+ܨ].fr/"/}LjaIzMK<|UY^QWsxHQ:2ajldk{['~7úwzNѻ2|Yzk=DV.5wsLϐՎjk\DY+9PYe-\ 9e8P_5,]NuDpe,&Z:no$f5^m3?BB9>~n\{zIW:ӏ6:P*,|1nR5yuA8h^"fJO߹L6xHu2 !(8ύ*-{FLvRD6#H2PERR ZQWP[ /˜ $g2QVqHk'^&/ke> Y*#C佞v}4>K>K:"!uqu|6pj}oOī7 2^5a>Ref'KW x< .X-ro\R\8zF'vo=lD]*EWz$G%Kt[wV[fm[^T#xU[lDF8JbO?A#J7Gw ɗiTVy 7(Ѽć=uψg̝kWbN vɸd7gKڻEƭ - jҠx}m:3+4IcjLQ:eSq[y&*a Z.YXÑa]9}!;48̦'{$bdlP\R4* H Y1⨈%PKP!nK{&' s,QO_G._R>0qr4#Wd%e^͡(סtR##Rף gn_ө"㵁Q-͋,47C5oN_⪕!T`tJ7_' IFdܸy[z+Wo|`!"jr.u醶.  vh>wcύ7WswjWxv.~W_Cn=-s/ڱӝKa& \LF9+X]e۔9oX!ymxW̔|G1ח&Xb7C:7. 
~*Kt<ނ +gjcu.CEO[.%xX̆ԕuq:V Ĺ'# ?"RTv08vDTe^ƍމp< RDI!$G]@+_Zё->@ Dn|uy}Ѡk~z)>"TB"I?p@P3~k<_L|=1wR~ >Ov{aqQƱcz7WoQwD@{&݋$gRGW[iS 4x8n"IRMMQ"WKU4D,8"–m`gNqe~j@7/1d,:v璕2<35ɥAkx;33bQysI1U<|ϐa](Z1.-SPR y&ϻV5٠3HR75Ӕыw|'/EX}% # OS\WcDQrO,uKu)1"jTNJDž՚m:q#Oy'-Sufs)rqTȕmLݗc'x*^'d1]i+IP#p;xZKJDpC+Ơ)HmW4V2RD6kbL$#FE18""b1"" (o19 R: K;-V8B4u)\iXps|0fpJ" 1q({b֊c nJ钮WJ04)^uXȞp4^KE^KH‘1)l޷-;nDPWGt \>"s\^Թ'Vyeus[C|AkLlHʾzEy~+YIy;$4_/A[VrHx#nڼupIb.u׭JC_lKɟ<8#"Y"enLrq\sݺJ~ey [3T Tڒ!;Gz]pnqo0r%"uqΞAVKw ϑ5)Ì<$;@IdEE2"*iY%+YUPg7%7~a[|,n'm\K&[qEKհY.~fCJcD}OB4p!Ć.*y˱{bBZ41lJǧ2Yi%,ġ֊oDd *bI-  &|S<\Vav#0.ϯ܃-waS=_'uSK]J *&qޖ[ yP$Dw *Wv%HvC-nii/GJkZAOrP=se a†:1ɜX{ZH`wt곸㏢2\AQDѢ,鮬"&~bQ[8l3R! mz~9Nfo1o1}݁'"rVAD֤ٷi?%"l'eeu%R-cH],9&2?%JKǺu2Oϑ&Mk21&d=+1CGlL˛J-WU11Nx4AݰCV0S )"R[4nv/&IՁ'#@f.yN73;y5 Yai ,2|--nB('u#cz)F%^.7xk7?Ln};cvP)p®:~}N8p]Gۘ+Zz`e7lg Yj 5R8hߓ5U`\K;"2ѓtx2GUfF'|o3?JlpZDYy).EL_]Iټ5M 5/tbFƨM`R l]eu\G[zȒ~Ge~vnn>:uL>yoLu_1eAnΟ0_o %' n_,oknqd:8 ~:v7xSxk/,=D ~܅}^s1zKUQ%YCf $}zIcCCҬumNEKDMY+ׯsg(>>#i7sx_y,޼ȧ0O:!&˴" '}Vs[7 c,M8="a70ԫ.+zdQDe*:9Đ67jb7[̣$mU˂$kSj!iNT<FN-?ܑzդpN]vb B)D͕^bN'/|M^ҭ{Ƚր|;ր܋xeN@G3{9rnz1۶wV7V-3=cWè~{W1t'6#e $6GebYwe -$ƞ}or0]B \QDĈzO[ ȇq/*zB ;\ep%1/p  nJD#2 xR"D"Zu\Ў[[((FqP\q d8IMQ&"EJo^Mвdm}i\ GrUJ)0jqoZvy()SKoyoQ#fI2`+K vٸ6&綵o?tT~QnGӫZX%\ҟlSl$ 5OwX8iDГ'f\-U?y%Sc>:է0^[g8ecu+ղ s"7;tIzDUin24!"|N.{m⺦o _v]~w?+釤w5->|xWzLtdV#giV'e``nw\#;M-LM6:1P֠ z QuM]4)G.?J$?)E\W@6.eTߥ6 ߦ 0^L'R85xo'/vEpצ3"O]rꦑW\JVkyzǵݵMy7@;}}>i8hkbzWf?wI]gӜ(Xۯ^fC #S"IRg3*#-vy/}q_AN(?FP4U~Ͽe~yO>x/r,yNOR)ǥr|ByAٕvg:k(܀TbCVui/מgmfN'*q)q%јfBIItD{,%+ BЫm1 V1f7 "ykgDT1{N}+EK& b7Hf&+/ 3Ì-0/Sz{~D𨣃R*}a?z"o潲`{wGn~o {5"@_VyV|~@W9y`uƇ5'uUʰu󱸢kDE1(hiK-NRgYj%P8ŎDaL/I^xPTYbIbX"Ȗ%l l;] '-&-%nu SnRh.?*m!6Geh N_Slk4a4đBdiRJNl^΋5B(ÈX@ib휨-pq@Cc 4rHbE(#olIZ#ɑYtS' ;5,EAԳ,YF~j(RXQ9^ԏ--c44ũKsr}gfj2%z_ [|>_#R#+uc. h,=z'{RLg Cn"#~+, kaɇIN.@WȪƇi7?L~F*ʷ% jz4OdS;G^"G :]|gWRFM{[!bͪ{[czÍ6˵B6}\ev*{hroJVJGRHr&vx8Ou7Ѕ00Rͨx9+[tm ǑFFSokJqYUK^5+qR8>eBqtZ_&\?O>\S:H鑶=Ew4iIk.eQlɀ,1P4z)vJ_۲V)GrNTc.0\ur74j3 4 uKm%sTQ "Tp|:ciFG趷vcY|Sʼ W%8&fG>!6K~ !O?,}u@^#H8d3(&5ed624/7~zϴCg2} Wq_`Es^s/\?}v̯a6;b?ks~§zlY]2:(qP#333fd|L^W|~!? _y;rҍca{SmR J^Z5%< ㉔]W)]ZznEڷs'w+(k}wH^Euۨ1{,Di4 E~7T#R͞b˽E avd:PTGcN2S(~mS1M`tHdCYD1  =u ڕ|EЂ]n)>c#j#J^I^ǖj dܭ;U^ALT UH L`q˄6HN4{M=q F9^Bmz"=-~AZBT=}8nZ|pGn\'N0ARt'O_xAAtz%B)j肷TATbs8(CX=-jFt -}jc^?bmqL5fT.(z@JL0% Ȑ~0QDtSC,ՖxiktkL2yTO4wC-mJG?7Wc[>;M1<@w N]k 71eB9HiU"6:nX8Z*'OwזΙ&EQVąUW4}jq2&&dOt[>8ִ؜)Qf閊_d ' ^t^p&s [ףcK|cqв$l NBwZ7-P%z6^%Gzaq#z֚ԆQU@4ckn -w3.NWwV!q(Eu|qcԆi`Y^~Oucw~(1u/5c'~/q[zI>5W =%;o}]bݦksdc?/?፯@Jw3ݳP{&LX `lxXO1<4َH3YXVon(~qg^Hsjt juZ30Cwr+{?;7Wp~> U|XN@j[JkA ""؎Q ۄ*;lDq:7,CFJ5GU( h~`LDIEew7˨qԛ{ܾsm*yRl6NY ^FQ2鐘-ϖ8Q06"a,: .L`[)\)K2(]t)r]B[P2J)pvC[߇ , 8N25W^럆FSK7J]Xx+8'qƦ(]JSbʒRA&(bG X`z'?MXη-g>ɖQ8+H7J@az[1FTq$DU]G4p(IT wQUZx9mJݦw@\O9;4O)7lRK:  >u,]!~zCiD0LT(ݑSdہKZ,rydI@ukP Gv׎xFRSV|?ϸA%}5޴n f0C$ d+H$RZbؠV 16"cIQdIa什zP@Pĉxu_wcZM9+K8~Z+1OVz{gWbTvl Q ]2Q?TWs tVLlc$5Tx=8Z :u p7fL&^G$'v[.{ԗh>>v?埽AZ:,fiPrr#C ٨ ]8רf/f$[Ru< ʾo±p(z7L-Eɛ#_Z$3Y!II_Wkĸߐɥy2zy8x"n!dhu)0ZޅSd[~`l:U06/=aE7oN$BgLwıl$=O硜h2{dPg}:pT$ ; *'W(L?lI_E'n9߆N^OIKI"vYǛ WLMɤDҎ¤\G Sw<95b\]T]* AS4.k3Q'K]6T)ZjB+@t8B6sR [E񋫼ZZ<'1vx#&tuLݏun(/au0o ~V/II IDAT<ﺗ?fG5Ķ][Yy9V6̸ohHs]4>{u2؎#C۶qt>g◹̳ҨV_)eB%K_[ʾ;kYb2 ^sxAMK:&qcr3!wcO>l۱l6k.OM-/X^džuc<Fă\;q,Wl,Wѓ%֚GO4>|9v2J)s] X%:+Lsc'16Ʊ <Ι9?36g/Io53lrW7NìuRL~mU6Vڬ/JT0N./i'%npA'Ft,&P"زi 1"lf 67ݜqL#^͌Wkl6F4|l3Ί2vekbsOcmvo͈33WϞcvy7_ֽWk?L6?M(у ]lZcI3lt"{bZ! vdQhH5H)0)&HT6 MƩ`5,p6Vu`ʹa11.qRWjR$v SvU 3|!b'ЦMi;nǧ;0 p>l{'B1fXivRAt&Kcmִe؈KNQ5tqlrT>ڒA4X&fb-p`9dqU7MrXD4rd> AvoID_aTzϢh LlL58W 01J,6V?3E57^~>Vi8h>|rG-` V*AG\c(QObcM k<|B=youl$؋=0NfI3HktVҕ! 
z*WblmO3u7jE8ML"I/G;^ ڄ&`U\:A͐kt1/[OoύZ|a:s9rCJ_ý+x0k ׅF,_Mp/)6V3DmS:3| )$ziv^&[&-J[VX٘TKk*\T$?j'n{&7?qv~V"`M|n~$I6R2Ů_|۾5*GY8;Ewv&vsyCǯ"3_|D1qO>=~y>}$Os?Fpٿ90H{.-{T~? 7A^+ z ۹Ӝ㛸Jkxb1 eaۊ,Aqy'iZ Z A*ӬϿXn\b}ضC JT6ؙ4Ǟx|gH.3>y*6M>C|fOk;nُ6/( l۱g_8Nzίޝ۹p}vyaeY./fQlcށQ޶ZqLXw吳-"q "B6h!l?@D!"\F /x)QqyFvt"m.o,6i>}Ivtؚ1Jt9rWoXJӧVu&{u5 ƶō>J46~3XHضd%(Bšg#Z\XX<IN+̖5c. ;J{I""F*%]O*AHmaJ8[Qse8@?cD k`cmzZ7!4UṔ9AX$cM79 p2-bioDMkJEHxӞ4ծ"6acSC\7F-/&$E#\@)G8׏b Fy6ÕA_C9BU"l,L|جU^Z4g< [Z,\ICXM0νl`c*Y?q&\ma&WwgV͘\ ' Lx{n<O:&d5Xqv{%,65'ʍ.Nt nqM6$M=̀CWI <4jZ<Ὃe~(i\>hX^ߣ7''ZI&hMo=1(Kk 0McW-\&;LA๯3yk]?hi9#ϡ9+U&IL =qbӵ*xC}յˍQ G(w1 m 9{m!|w6NE|Ghv#w5<@ʱhp_'2IN\HO_#a֭8u>)/Up[ߛ׮^errA8{$((KDGÉ9,O.# QU41X @2+욘,t# $ H0 6KR¡pt%ºԂԁ}EÃMڤhE }A>Y\F3`o&}!Ry=coEz'76 2lŚHX@&VRML>|,W rSHFmMk!nc63cI{W=6h䢈a%, ( [b%PF*"ePa5ma'&:)ڏlE;FE/cG !Zy(KXB$"leH:=i3,JG#RI:*OP iV+u0a `t(NvvbC^'hSDgE$ D cXF-&GdX!jYAbGg AD,ANM+kqi"ҳ<~/u4? T0dWz[~xwǏWH{q ]|׃:OoG@B߈5l-;vlw{SO7X7?hMN{`Ax:&f2@be-O<0RA }R#1&0 ?D?wqr /]$  93w]gv.OOldUߝ>l2=MWw1R|+%mRY]gbɤ9vk;MЌV87BLjLβӹ^@ē+}#_މj>0T 3ob$bl$4M%Bo;Z"h)9\4 -ٲeRY1;@)ce҄*OcnMv-Clvvڬ_$BU꧗6iDJD U6-\mo-W9~pƷ ?s"t6LdiL-Y2>B"QbIJM0m{xS Ws(ҖW& rA:-0U PD7u'Z)-9;=r859iaGB`DtlL M&KmiuRzt?\|TSȒR]Rimғ`@.f d͚-Dn,F}H;"inTb`5A'ӢU03 /c NZ$؆3Jr!XD 5, BMHq kSS4΁?aa.=O US8-エ/w.ܦmÏXN\^4,Gmsd[~esqveȷ?.}[t*waᅓfco~m #ǽcOxWn6р6˶7[}jf_On4Y-n0vһ%|3Dd7S[FD8xcCI: jyY{#ӅjcZ"sK@Ǜ,Lڌ IR53WR*箈6b&5Ztjesd3Xȳg|9ҒOgreKCq)VmrB>s3?4^&[m3GPa`,E˖S>#FqstaJl4bƈB[b֒ԱJ$ sRa0 iEL+&ju 6v%Mdmzf-Еd2JFZ7Lo[0>xGՍ^mc3٧n6[܏ Oy׭|x_0G\?Wߡ #|jL8פsoLK9nı1߁r8rj0AR &+L XPVSΗHl3.ŷ UK{ۗE_k\j.Uc*MZ(Ccg&D%]iťoOq%Bqp,x>٭#l,$.]E\dy bJ }ԽKM"/`C);B b2:!!PE"]ƨo&b60.T q:%|WR VUe\aEQ\ X#RX7Ŭ< )d*>E6"6C ks]mQh"+\̍U"mu荲ׇܭE'^3XQ nalR א(JK%EPq)l'&M#E5WiG/4 mQ?P +ccB'K!@K_?+[y6gyUn=_vGO^%疤,pQ/q3y>;?s5㬝Hlm5*Uy>̟i4/gt|[ztKL?]"54bdױ66€}{tM籲±ӧ_Jnzql8fuq=wm3Otò&v~- C|;Cs/@JkKK|ɧ;1I,.Q}3d*ɞ[8{ {ٳs'g_仩_tm>ws7|? }Ĭ4 ,`$1CC!RhePF9u<3gPJ1?@h(fU,s,-Qm%XJ# WmY0>α:Dc6(OS'c6NLϮ0R앿[nh'/)E^=T+=*^dvzQkC^of7S/Jk-| ߏd$z "FNwm9ͅg%dFZ+휖lJf;d M*4AHh$ێdK~)͎uJZJLwrsSȴ:A6< =s{%G&+;L{1xv߹RSN}"+t@[$ՃsrX$.zLFȖJAXD)07u:JIqHc(rb+%h1F$]Q~-:ŶED8Ћ0,4*G@ɘ1hlbKl6,Ϟ[&73nfm{9Z{ajEޢ  oG[mp*MyQivB[[w[b*O]‡=sIccTDҋm>Ӆ@C[V'NzC-OuQTe[$f.#BA:B=$߳w®C;1NQ}Uenx,^?=y>4&ɝEM#e/śY!9:w R3Hu4[f$=Ie"muDLn=7.CJ'1 %Q^hI&a'F񺻉J3ċ*\=!a)ǖ ;qRXmKWk3QqFQh,Mv vN ?ΖpBPu`_Oٸ5D[됬(dHɪB` )j^Nuޅ.`ݭUګtWn (7yci H'Z>4D._Nn;S+mm3ʧA̳R'* Kgnܘħ7`49uo niWt{[OGݣvە'n4 v;h'Y/ҐK&LDFFs}Zd9S?+0RƝ$)>rOLXT`cX58UJW߇QqDEC16ӭHvЮB g=Et||тd,&bEDֆ&|j|8M1gIY=z+=սoqyr˵c,V97B1bx/<8IYZope^git=q; h\ꥫt*[uocprK˯yd}{_E}c{wܨQ[]c}_= {ofEOO7+^e2 tw_,^AWevufd_R`Y4NA>af~^)LM_~xWja4Q>Lf̵z{z( $ye9RC8Ƅw7Sk$XTfBowf:2b A$V14 B*$RItkF\Il`)lUz ϾcaOJc7#oȫx@nf1}< ox{Fd˻} 7 qqn qߙKg%[رDT4P'[$ХTO, ~"rPB%Gflڬ9@&kn7p± ܽȐbnrGƍjgݬ{1-֧+L}2kK߾^ G1߾q*H;6&6&{d WY) drb|:h#xh ʣ#QcM 2v_FvA-#ZUw; u$AW g4O1gg˾yuYA/9#V;k6=Cwf|bJdmE_^hrh#:>4t]|8Hh>Zɍ N$X?"tv"u\ODTITjvgzLNvܖm1 )k<7y݂8 b\j{cR@OHP$f wb&:ԃ͉j;,{hM:iI~M~es.cǶmzh' V¢nY`ʈI ˦-u mKC_9];e|z#Ǐg?uM-UG߱G7kK,nmǏ(M>Nnh͖xA%hEzGGؿں&~$#;w0s#"g/}S?*sڼak[Jqo kаY٨XeeS%cn;pzI$\\f}3h[Jt* Ǭԩƿ.i0u^8uFjMkRow2e-˙,ݞ^4EKg)RIAnt#jcguk}`FrzM$ܳ7UB4*Q{klwűx 8Ode4IrHJn" vdBm>Q5눝qhVڄX3X[ ؅q`C6ۡ1ЊCa" V]Xt`o3'_êN9O+/j݀5du {~ũOc *q&fk:,_76hv؞:<yrrz%8'^=$pifO3G_4X?,-w,Ap] =}F6jҦxY WGg.ez15e8r)g RT*)Lh,fR՘t:Cl ~K6a\ñ,cf>vBN:EX`xlW؏;U @ _nzqHRX*Qwe-p`CKdfJt-cYRX<𴬬 g'jr.X'|ϮP7YQ9&/]cl@M:HX"=DŖr''f7@m ٠n#WO1X@=M$JRh"ݳ)d4Yd82# dcVG F-UZn4MXq-9PKSD/E%ΎYl DE!QdDCʒQcȡ^1`<=z^d)J!RouY^oj• .oO;Ķt%:xeI# o<$svǵXIm=mS:fh wHǡ% "v( v $MAAdľ[reL̇KTոF+nk$F1O\\uKŸ9ĠK/qf1oC{h0J{sdcfD#uЈ\[^P"}`Aܦ"%=b֍)VF k-. 6t 2I%l"ĀꁄU+E;UQ2!v;sCG9(, KW-N=~# JojD." 
y~,ơg&l$ɬ8'8'T4J r O[\bS6syiVddBI"Qjb=hu)J=l@_+#츢m;JbiIbG9 X6?m۠ݾŚdZyMvYʃm*FvZj*w3% RefϏUwzDC[Daq2M=37Te%ճ\̱i1Ϯefz;V8mDH4Pݧ 0~tzv-~>#ʞ\vJ3`r* U ec&Vd% ژ4.~#E" w {m6&̕B; gyڳ {Ǩ%ٺ6Bd jB F/Hp|e?qTa_p7~n3d#~Yۧ/TME?.&6{pf;`|0,HxT -6NN!֎fhnرKcD^7X9\ľ;o'Xkn{]ڍ:Cyqf/fY_+5l6֩)\k4ضiJss=uZ6OSf ajZ5᪺Qejf Ac96"_W(J KaHef|?wHݗT:7 < xJ._XHـFOl{, F6nu9rb4Z ӏ?|iN[y]{EDxoZ'%\tIWރȻvSkp-c%& TJiujo."wfm9+C$;U`[cOkmd rub=AQ{d.ɮMϱhjllp&[%Y\ڦM4e>\W  b[`Y!LVK3E#4+8<|5ѽ,k[r4+~sEܿtۈ܀@긎n@}fdwgi$qzRU,Ć imv`49;68AJ-<-!wm-aMlmn?6{l_a9]6I4$!ӵI6+ 3SHy-X[[';N4WĻscbTLTHTLQEr9F96Q|c9f:c.e!f.a1&N¢t%<ڮM!ӯR DqHt(r #Ll!1h,]a6VI3橂7!XCYktFST"%C+UYً)s|qS~ 4)=r{UtKO-󋧆K\a%併QL`9*D r~[@NNRno%;ΗqC ؽ̡-z 1CWx$*N_5",0p3C1=9%ٍ$ӌ_l3!ۊ QJ{}Ao {Er#8`?VČ#c)߶KYVc70͐0knX18CEr@"xTYy Uҭ|p\[ϏO f9>MM?:k,H%\nvGX]0s<u SxSwF~8 &]5{~~ٶ?CS6[z8HZ˸|KysZ@WŵbǶXXZ}%0Om[nayi~;-đܽm+Nqrg^FAd)Nñ14mnScc(L.νK׫*"Rca/pC.㣣*^!yL9Fѐ0v]Ž (YJ(D>æFiv5vkil'kd_R& t 0A#fQK|ݐ?/#SS,-8g28GqX/Qه:ؐ56Mҹ 5V3#SГx1H[];7jxi톍!Gs]{6{ Rˉ*G7xe6rc ],?4) b5.vJ"E~0 kqԽeXLTDcֺTVBZ2!@h$%Jn'&2,PYG-YQYM2= 1æ(GJ,:? ?=<7be_n3RV/},M?ʲ̉oz1β\k3T̰(O/0ľS#WȎ3~v>2tV_w˶OȑOO?#G }87@Ķ^ri O=v``m,-1z>Q_Y_gfSfRardZ[/Hx.wp|zZfqJ|@^dոZ^ZbmXXN;LƎRLfH( ٠r%8֑yX_WOwq=#?ww(9>Vt,U7M(v:9FqjQ[YAHbS7H$ag^_tDal| Q, ogc|ϝ[8? y%p*?-c\~w@&~q'ЏrVnvJ|vR+վ\?]`o5h/Ui-na9OMM?pzsutz*_-ͽ\JHGP:zR>L~s*?9= ,/du Or:xKÍm (C'sJz/2aTpPNFr7H*Lhz%RI6E=ۣiX&O7> ټuNa;DrU ;Gb "mYbE(39Kll>$5Cq}xy"p%?#y ԊNN ެig-b _X'D/߷bkVӿ ||#G_OKKo\KJ&c~F:'>?,E6VMKDy-̈ea e#z*8Ыyx}ϕ rBˮFQ"R(teuu07<ۥ ""^^8,TlWYGAٗ{a= O09:J2bj|\&uzܕ /-U9Ry'e||*߯D."c8r0wB.QQCkXtOˤٻs'q DaQ"uRFRE]k8l)lZU kT)~%P`_{% Ȥ\?5^ Fos=ql0FIx9->Q<N-} Ac[4:Fߴq=Mr:DZLʞd+ 1` =QDj_{/ HNJ1 M,rFRu(G!VL+տzA1E>(zĖT8DؗF+LnX hDg YReU0Shfa@黓% 齸zȦ<,!V˛ P&'6Z]'=eo0 BĤ̱@G+v"-gz|\*u'I/?L&N𭈞iQGNӘz~GXoOLϞe;ͧ %3l߬oP#H$Y1SJ7;*դXR>t%`EȴzlkČ𦅉s, I|YzJf)r?&v6w}A$Q=? ~B#5.BĠCh8 .Wi~q]ocGĤ4~ׯsV>E_5x?/caa0!;6Low\zLś㛉oyݷ|/ ~/HX ?zX?:r{^,gK?^EM{ k/ݳw.A=7;gz赸Uh5 DB%cڋ?渊qΆ7IV 1F.<}􂷉y*OQc;^_ BZ}v%ot]nNh]^@^o%[IoHv?P%tKz1ݒpmؐIjGxT-_V7JK1)f{;U [=wbT,^R3bEZRK`^"kiUiKJ-KyX X-XƠjF1*A@A$`*_ Ŷ饇jf@E=ܜbF*VZpƫ?hA24m,V&hdKPfC-i0š0F^3I3*]bpbN_2kB:b%KmNn  ԘQz G DL7ILw7`ޫiʸboe鰟UO* \QUb;XQUsr%ĈbL%)snWn5m"VR9b/H(ŖvH݁$ .nQ+ Ve.?ooaSY_=U<+ 賩8p[ō۾vmT+Jcy}Fjs'+tÀo˞=v6n ަq6)&BS qNfUZvA {bTclԣװ_GLзo‘Ⲝ`̖uh$AZyT@4BT#{:}_"!>&F5m'Cz$uXQֆ8R K:g}|<ȗFEW ܱoɃ+)]p^_;|rc~DONi=~\9jc׫7~\p9fZG唬o/^m[GGe2:}tE$ATr}\:Z ī yݟ2v[ujrBя}3r!|}^m;8ak/+<}0湮lk156]ߗd"'ocFwWx3÷6!uz!|Y,KDwca,K@1w/lYwPk.R6q DދyҢH`p[4j/XI .>gu_=Bux,F5/&g`i@hGLD[Q-~JQc&H]k s<)H#3N3í)oŶXya]iF9|o9F#f-f0XD~(?0}JQ;vMAD}n88kj# 7w0v0F*"( N*^z2ŖٲCWa>21g$Ċ-TQa " kvAk`xMM;p,eb#[VIb6mxg\'$],,| M'8aM$M $N.S8L;lq o st3Q<345asO&j )mB$L\e"b9 bv %#b*U]zy SJC&. qA S >dPRT`Y<$Dd8nHs$:AG=\%zvv7$#NȊyQJ%}k(>BT_vڥ}RDXW XAFgcݻ{PɊGfB >j2G/ԩ5޺o;G"om_ḩ?ɇ?'~woooWiq_l|K_]˞];y!ϝ{#"zLw^0z<3},B<{e{ay旖^L14[\I-ycL_ȥZ"Z%Q%Mws \N/(6pV?WQ)HȖ^ #ӶV񴣻dP#:"/.mQ˥BbXqXX& J= Z7be&,:g]yvPC])seumBwJ>\*3Ʉl |nk6uogK36J,4B/jXLi17ˉ@QduHuP7ء+'Ɠ$R1V6 IDAT59!vc0vbH$ӉulE"i7$31sIj 㘙AYys)IbCdDL_ї!B0{< iGT2zhc+8>{X^gB#Z_B`^jS~, WW<=r/0}g SgEzw<S=m?o;ֈ5ɛ_v!Lٵek _ԒPoI9cIٛciUBY\8y)~/2gv9T9+{r۩y>s5w©˺5¯R| #WGGz;ߩ_ny(<OcٯlPoߑ,9_4I[JzwZ:&& $x€s/~5fuܠ`KRCC%}#J~{ #7rnu/:NW*_w(Zgn=Z'\pqGnjN“>&v`\]q1H@ +M$پk [g6W%Nm4Ӵ2CC ˏ[B(*փ7kEPPzN$2^K/7w^^<bjIȍ_},rmGyɾI2ܫ9r#{deo,aUonf yTbyʅk wһ`049~e~ V*Cml7oFMQ XoP~pafœvofʙpBm}3J hi]~X߸Dʶ=ケ-&҂4s4xOE#~ a| >_|/OˁDFj=*T{&We['MFQAOXh_ #i  rʛZZT0EqZ + OOQY9~&'pBmyW2P1`a.Uk]YA=?>VJZ$d~IPZsIS 1b¨iXC|FhA NlsX 89̛N7ؚvBc֧y(U+l U6~QjDPwo/Ys-:$ˆ޾7mF#8.pxᵂ$S͐٩b*eVIXi7B Msn{TF+q0>gOq؃>ĖGG;)r~PWTs}弣\qg"HWG-J!GD? 
wncdçQ {^7p 7p >ѪZ4mhM3k6> aaQVEQ{<*jg L|PFHT=LE)ORv SGJxp"TǼꔊAu\Yy%4^=|J.>wSC][dk ,/O^Ȧ{Kk*ragWƢ"g>QŒh4zCS?~(X6yqvYfJe2Z bhSZJdq@T8E+DJّ tu20ڋCQ ="^(ܕq>{V:6$X1V0JR{f5Lj=e88n-Gϓɑ;yh(ϿZGdU-] qH$]seL Qiʉ:1eN-kLbb~CKn[Hd\%NKUNLG+Rl ~JҤU"8 CSA1XkeUAܺ:P jJ7u~`T%( 5 C/^U!1&V}bgbsٮv->7r{G;:ٗxvw=ʏ+)gÓr"JuOϲe &C?6Z&*6zDPXXy¶Ù*8~ p@>N('pR3T0.$,=qp.6c`*"D FVEVG;4]%e]FYz]7n{mJOVj(M s M֖R1U`psT*ښ[`⅟ͺz,M7U|CMsT;3؉'L8f.߭t1~]WjV8ӯ^v.O%d<]y}tѮZߑ +_Aw]={h)Zԧ9d0Ϡߵ< vT&R̄,&Ezܵ¬O:?贚S$*3 뻿''/>Op3ElNp#>fCś;Thzjq5ԜEoU/UY*.;**ޣq,i+h<[/9>ٱ< eP/hwX'T#N 'P R,Zy ^Ԩ]W5WBBU^s,ֺ qƋue+-*<~JxbPZ 1^/NT"IORr*,Wv;&q:е:PP@A\65 Ջ"Znz扨pj}*^4@WKxc֣J8ʮQ(54$8E%jԝ%**kjբx+=qW}J{\ʁdUrӥ90Rx!."zOmHxb LŝlNMΎu"Xifםfީ -{lh.[TTDR)#+9I_n4z%YK'BU&!b(B+Cc+,r8ƈP/"ΝaVF҈G>=Lrim[/O ةsX#O~qׯȏoiC#?>sw~_v6Пoa#7<~BgB,0>2cva>8G?y7п[= VzAb1Thv\D  !@ˇ=[9H] D֩!힢/eIwd٠jK Nsʦ!eMKkF9!QakaJ&*[Wol%jcRJ4rH"əzиTTt%*ʕt&4x"z%3Y/\ynCi_ѸM<\UWv} #wTzߐf4ꭕt=7aJyR*$0$2)*GERFTHkPƒV>d[k$Ntr@fZ5ZI~:QpF_Jf?t9y>I[cu0$R9#ɣIZvl^-TI>BB%t~䦏^o)KFO# W߶W.=ny, ;w\'+W9 wv nOw]:L^g Ȇdy u<Fg`/'M$_`R⽆ zJ]Ti%@ֽ(B~Q͡v{މ'[spzBZ^sٱy\sg&ȊlQqB=30;sm֌99ulʔ]ѫe$>ܗc#Y-7Y\‚ ulsQE8\XCǛA`d`w4Hj=]P 1Y,Q1*j*_$'Jz%T%v$}3e;TaZMα\:K* {ya¯V[TrFP*Q+C?I(4B ^.gYJkS} ص8MBgjTSrwȶ+Y=_oCfcٰ ۰ .IP< ꕧySu/}i`un 7cG=B&;$nV+ Zd-P{~`@:Q>1u<~| N~VЇ]sg ma^3/3|W{ Ki#VHXR" BGs2:[sP &VzqHhQʃS*.rFFIVSY)2.)gx1ec_M(*.QQ bEqAtBدIT⍗,) 3DR #imyH`V/!+#t@PZX3nCi"ek3CDtew5G\jHb :,GRYbT)Pǀ%bP!\(Rw52&pA`0HUlGN4裫.;4˹cxMnecr-/piWo"Mp(iq"QVFTx-'3NM$YI2V->\U)SIʭ f,di0K#`1е BUQHOa)ˀV'MQ \D7*F Fg<4CUP#/ᕜr5VfP~ߕ{."9eْmp0A!l[l;}/W!maCp6Fc ,ecP>IH!6*՞3R;$|;ޣӈH?5k9dJjǾtB~ͼuԾ_9.+§8A_җm9śxݦ፛GOH_\]lٴ;~&{RVy~+OR?ms3|]y>ݾ.ƍ<ñ!{gs4=9]vP'4!eH3څF_U!`֕I{IlضM25V]o|&p31¾e-=R5|R,MxE9Yкz(GJJQS 2p+A,[$+ L"%g`=j@-jDʐpqYW 0oSEQyr9eQ!(B >kuNJI7 +QBӏixKjVWZ^mU -h * iWdwխLl`EayH274>Z=D\KŖ&4ǃ:j$'g^1IQ്PASz aفtxՠm3,9iZiETQ_MWj)+ |Uѡ|MRtHӒϖAK긆Aj^c1b*bIk4 IDATtx81$1a7JXOOʣ[Xit:{fpjk2[Af*8#qC}7@0&R80 ܝU1rݿѾo#_bsٕoBid\!͢ǨR1Qv:gx 0qhқ@y+wzLNѷ@Kͼ6}K_3^;.m/ί>v O^͋.wn>cg&xFY٥fSj|X1sAW;QD-9tR p" Zk".|&&^ z FċQqA*zP/XgZc Q^BN5#1Nˤ>ިVoV:K (-%*gT)I~g}/7 u w8#Uqg !NcǑ ͚ӶX)KSL>{. 0QQ޳fz-3݀foԙ<~ :Pjk;jYokE=Vd5rnuyDZY #шM駚dFdbt{#˘%y 4EU",z˶ơBJA>a`$ Dk΋(.ޫDLU¨A@1AYY|qtH/}<.X܂Q}aĉWg 7~l'-Uj~gHsN'Ft"ʆ]c>ss1}GZE"^BZ#a93A ԛ0ο흐ȩ6A特Amx|y_J9?%?#գ//3kݣe<dc'\?؈foĀD+J#wZy[/߾HңT2Xސh, ˴\A(dE@wdr5lBX&eRR=|>pgr#"Μ Dž5gYg4tFTTe[`q$T tg=ƋY,<EUTURPj?"\ 9IC6Իz9^ړWe7sPY/dIi*B7NdVǪj?Idx%ZCKZDQMxX hPڐ\tevBѹ.Qdk9NXm%B,VV֪RJʹs|||e2gG7țOd95$a=7@-E.cež~W췩,Ls#V'Ĺf/J$ˎru''FVt*@-ɐR2Zjƪ_aa0Z dR&JNtk,0 c%I@UHX^}h+:Cڼ!Kd>HҨWV(ʘP"ʐa ";TR=\QtG?wpX"w}:*YWjCTKhQzDڬ ^!n[leIKT*~f]o;QkS:zu:A TÊ&9V:)'z1D7-ْXyBzX ,X Ft4$k$, 6${ĬKD{0 cs4+vtҒ4F}ƉhK)A\I])haq2q8^ᡂmmsXd)KtW?~G1'/g8&].;$:KM+ĕf(7$ j9gÊ=ɔ%kAWv*r*&V7$Ԁxtc"&֘;W''Lh)zGE,"FD,;kmp n|7r1> u?ϕiTj,Xe-okaƔ}ʪvam0s_(X ]*YCۑPC&/s㱝Ow gZoR.=ʿfI/dE~/o[_:_.?/sm?T,n ߊ2nPY7(XߚzmPm`}7RAul0N*CsbnoOr?Aa! 
(Rgo'`0,&+,V2`*eSkZ6J;쳰{ ó\f=ء;3}{%Z5l\r}8+ Dx1 R*BH9:A24Z4PadޮgGA#+橒UD7P1 ۖX7Qy̫Z)׶ dA@wI.~pJ \1 sv:Á&T8OY Eϐa קU &I3D7ԣh+ubQ]pN9ž)@^J3zU~k*Djc|lu)1Q>6>g&I\jEs &&OgHXq^@c!S4S>21Y& MT^ +tmqaZ8FX0`)9.nEZGxeli:!<>Xd9Ag19ߋɅS]t2F_$+*C}Vn_ +YY xPslENC:7:eJԋHn PFcwr"M/C>HBQ9<_m?Í:ví:oxϗW?yƧx2@b m,*k^eL+7]" ^sV^r&v{]݄vnZnz/x)=9}6l6鸰 }6nNSПz}]hB*`RFݺ]+c cTAy I#m5x~Oܟѫq>[5'<YՈvnܦMgd-fc_#\]Mxd*Ԭ˼ZF!'XX';u> (ӆIǨ7͹ *b@EgT:O ߀&_.e &\QD4rZWU'WSѮzi 5E%D :EP2- գF$ hC-G KV^!C}Xq#݄duaH$XVTfݰřD8x|Lt3FG ԦOoy׉\04Fr}ɴN.smvnkRd uJP C.$--Ts\l] /f5DeEL"ah hDH[]桉S]Bk˞ٟF>_b<@qQQ+%iQׇߞ2ok+_b)YFSk?oQN[ϯ1<;N\ ѭq6-͂cc#i*bW6hxM:4(s*W# GX21TdWrlj#z?%7|]fi7lwAR )I6iٴDz-Y(rG8qR8%ȶ#"HdJ)ľ0f޻-ݝ?$buխ7s,9bX.ylQ#ٴХ ?i)&֐wU=3盝φcǂ{6 @VYBSH,yěh apZqEStjAġl8Y5o*`M=e<hM,\;BlDePwgYGE/a퇬)wh,7>^7#crcmër)lX7RHC^y 7) y[Y.cp4)ުiI s!SCՒS-|1! ܤ[zTy4>c{{KԪ05ka?:ڠ<ܐ UR[&j&wdd # e?Rl6f>Χl$4C%՚įJzV߫4y E]|qDkI9CeR>KQ*Qܐ|$KR(d }dOU5'ղEzZ-/Q#ՔչA}8zS*ِ,B IaJoe,a`-a4&Bf]'*#m‡¡%g=< Jw]ί72 o;Eש}v% :/[Ղ9 >o|2EoptvVjTs\X{K$%p8Ǐcowe̿huY-7'Įnw=0ϣ'?sA'sTbC`in.{7<;hB(an˽cW/~MI\p[l_ ~ A[v Jw ~O W;%G\ГN[47Xy>RhW5Nsl: Eܝ|zű0Ղ/KԋK12&]rVm,bIwBaP*1k'4įGlB7AIˬrYWU lM :L}qzd,R'#bJ|&r(b8q 9'Yф'yw{v@\Z*!Q(FihFS YeRKwd6-W}~{Gosbz>K ZEĮD;ߘRc/d#nLzK702 0^FdcD>9j ($/ Ca+ (zk 8fΡ 006p f=k3KƸB 6b0 (QA>#EhD7Z1aN ;OV*m'.xCrj"Z\zگ|0>>ҭ+!\h{*͐6-pKw8tc;}Ծdev._9%>_>Sx\&ĸ=|i-|g(KKseJa`'_]OKϯ=vFחw_\̱yfi|t7vnm~7XWѿbZEs+] ޕһo}jHV‘>i2q%6*Q  {.4I`4W9Sic9VkN[OǤ]"4LNb5H|D%O(W8qG^tet0to&ʨF\)X`/zEх!Q|@@8 Lj #,{|δY0as1Կ/y?QfG zvҺGZ :#ʣ&I}:&7DkCuZ7f)'X-#@!0?ۀ v(1iFE؁qr_8~CÄycys/B_[YCkN_r.H/qrPY`P~T?9@pD}c@+T*;J,bӸLI5չhlؑNztɪViWp5(g 'VY2 $p)f3M^U1d A\JveAV(Hs"#3B_.D.C˔6ǚm.sẎ6+:?<[²ތkJ5x!,sKoćmkv6d OG* #Ä>4N4eWm̮gѯbʰ"肂2%4#wN|8YF} 츜|[QKWgy{Oiil=Jnl2څ<;_/^w!-vp~u;,KҸMw6x ּop|[| y7^ae+oqW ~7z_arÞyJs|?~׺mO xw1ӗ[Wkp\noemgq\w*g߉wk\ϭJ_{Wc_E-0WW9ի'7&>}q 1%^TMN'95^7nF4Tʺ]PiҰe5l/kթJ t*:O&jܙiP*BER~PFxטl.3i,#up /h ;V =Vɹk6+a$!& OkYWVod}P/">IGaWahA_AV.eꒌP$T/օeHOvڜzSfuOV;Ӻ6ڔp7jMnƮq·Î:Fʧ,ET^J*)![+*yj Qk<5s7U%]\`TZUB򨐋^9ʤ\次{N%[ Ⱥ]6ԩ8WC%?eٱl NhN?c\5Ȕ"#Yr CG&2|]ޣ܉A%ƘƎw_ ШЛOX{:^>^>:=̅:7':bWU 3fi-3tlZ%/donwO۱oTek$f^jr4ۭ82z5FC{[G9sIFi-]YvtI'd3йYėq[|@2{Jt~qq>/0kAts>.#]ׂWC+}+ru+J}/kXg[C] ֽ۲kHsn߼71c3|/\G?ɽk[o;C2;v:W%0.pkL=bMxd3)J p5KY@ kq&7i2s':FyWL%2%*)hFzUJ6*t>E#k7MmP\G 8 t)`\>vZNQN5hJ2Ŭy M9>Gy3P܇4q3HJF×ȣ P2rcY[ϵbtV_ ^9˫ן3Pov)%P|I?S1ƺS\N,|i9@!Q634Ǭ,Lcᖧ7ofJcxS RDسR01N IxI8PjaYЧs6U;[}ۿMZ.X0ao.X[ݶ m}s-BQh.82s+Z\I}AxA7)+d bdZ*-˙ D+kFbԝriS*b>JTR(JH6d~dPP:[5f^@bhrX^ :aWHMb_!'G>uqw,U)JYݽ9͍]H#KrIN_b]ULlI#.CټX~kPjک[TrT?ӌ2^.w|U EVE8yo'&"@JS0u@_  FhR*.#^;K>'Xͥq!,QWCAKJ3N%lg7(w8:̔Ζ#GRGw:VaL1OtFtFxFy (w!5$m58{F'\G=(ko_r_!WPe;]*w HZʴa>Q}P" hlBă֤#4#i\yV>qa"YdXj({9+_@9OlQzM?{߿x G6IWg٬|9RXʣX1ረaF,HK HnPbXdeE'.z}Wtwƕ=ߘca07V.hא ڐQ'../{ Q-dۿw<͑PÕ*fh-iXdUu; .Re5A`!oͻ98L4I Eycj=7Lkc ⩅ V3e5(ώYF{@SX[Ʃ_C2# X?|¢w|\ppg{O›\lu_* ?ejdGD&WtYLsOs<7/OY};[6l# 6q5?_k?jAR2eBJ FFޠ<rBJhey5"QZ=_ :/aZMb/0vp!_ecXyIQE@71z(|uuVS +Q0bVru]J53lP̧͌SĔU $W8^vcvV_zbZ iLJ];, uh4H+r6u! 
LAP”nD}"&pv^ :edΗ֬Nunre4(7"ê9Pa)N,{\Q yLyyD|{v٫/.#7ZSz^LJ}P<@^ph֐%%06Q4.!B>537?j5aO>uڳ1fV0.9@ gKn.3hh{e[l# 6lo*Z+o >y^eu85:, رͨQ*|'$*c8C,Z^etcJ^y㱡g4 YwIEq.[!Xijlk<| e s79[- ڜRCY( dVs14؎ z22$E@V-5|NbR^gkc)g3k?xE#gϒ| d50qETw[qCeW91u EZY9_4wxWxNωӳbl*CFO9YQ FTfHcIdTP+ Lb&ȇB:yFfS|>Jq] DMݤO#VPJG/>!I%2ˡׅʘ'n8[9]A]Mg5֮4qB3E:in}j|<}9lW>P#u0w_ё}J`BՕuCܝ,0RiTu=ؤ[i.1eCTh~$E%iP@<x~D #yž8!&.|&ٯ*!F9mSDuxsJil\4:3 GJk|~m Uʗ|ˬ Pàଜyë'Yu;k|eƙ{Wy漹g?nɯ?~qVv/pYکNJE^Kz#3s8=|t{~/ m7|Zy[9o=MzAOjawռc+Ͼ̱N~>= CߩxȀpȋGT\8 8"7jbc$h[:alVBo<-pqD8Eu(Bo>tz$ria /6xuzVy"\!wE3Qt7fJFD` fn,5{ap\4z?r9OpQ>'tlD$o7oWY /XY .bA?_BHhJ7Ɯٗ3Rs?BzRK@F)jSk,D4[a+裑>TlJy&KXhbϗrW;v;qAnԥ(;f.㋾G# C ׯ|cx3~ yrpڻ۝aQs%˯z_gٻ6 FuG#O=(49stⷆ|)~9c?jGӀ ^gX<~!ԨV3GBf[DgߛJ0^Ĺ.xp!ZQޘ$O(##ʄq"9L{w+w"ἥ2_zhD`EOt];)^,w!Szev^%A$gg~BoΘ;|Zs|bk`Gi,;6k*;Go.:{Jw6KEKUi 4]ߋAO4 ^~1ksqm⻶ڷ][뮕~4KO.(|k0pfkuH5="4JR(7+Lz#pϫb@HY\(aP , FFih$yH{;7P((TIkE0hng=ʺS8 D&uX.MUxY3m鮥umyYR$Rگo=W}ޫR}A(?*>JԦB}VkGZ^L\Mܡg$Lc By/úvɤN'omh}3]>S!Y)7*.y^!/WR+飴%呗 CLevV1xȝȽ0T^YYOs9yzeggtsӫjJlnmu&_Z/r)sGiW[mhl! YCzx34L -ʺٓ:'Mk{/-=M䅎N/z7[i3L.榦]{L{H>a q9%VoLK&9fO}4gPϔ2N,,X}[Tӡ[XB=-({nv/ugPR_V|eJ'/Hz`))h 4r ;3Ƀo?:-𨙦3lȽݡ'r|ϩߺ ޸nΞO*!>{X_wS y_܋zW"t}yJ#qG)h8m|9 z&=k|Ά-33|(u~\g}ߎtWh^aʛo6BuҨS]6 q"./HF^{2 AI:Ms&5 K^֋́Ǘˁʐ_֪?P\9PwɎ?۔';縘 23+n{gLӌ8ۂL@c?_)^/_5%"%7}]<û~ݻiTbϽU@C;γz|Q~|dV_\S;uhgk,gKъq+#$A??rt\> ZtrnϺ1puUi﯑WK\~ C2*_r5鷰v!?ٶ4XD1~RhfIҤm "RTtoV b_aD9$gQNCi֨Dz1H$"9Y0,T!d 3ؤ0x .Iox)C;Ẽ-^6m>Iۍx-\T !nQ'fwޠIȅR504ݘYFdA[<~ׯ3 %TN,Yag=cڮZoqm*gkӌkƖ-q(Q$-R b rN_~ RD >U({~M'sO"SCQy*xQJ׶˝`tʰK25m-Ő(zdLU&f h55kYjMgI{䛯MHĆ:ZcOdBSpDxnp'uvZO2{MN8%>f Ӑ5!e>AX߫>ع5:Z˱cM^yez&I2T w|Fuĥ( 3Ky9>pkF_2ޖɟ>Pʛ|p\0αg?@;~7DS+9͓ *,s,K,K\..,]0Y2ं 3Ll5K|SK)a^ͯAy5\g(g1<'ng~l*0s9Pҩ6?{5`X3ERXRNtwP%`*j^֞]H~~O]}00%Xhϟ/|{ccG/Je}M~wǏ?:i{M0Cr@cmrh4:u qch cܸI6nAD%v,,JƉ|>E-&a[%iG }2c k0FB!SEpPB7QՅM=b])9'd yLL2K[6y csn^VIa6L.298Qk5 NXw dž\7]` ,q63B\jBT\({Rc DL8=\ CdRhM"BFsp'!*d&0aȉQc-:n˩b|$Rq g7GF3QYa֧B+pvXƆwO2nPrI4cE|jG8L[K=FO+vy93D^ڨ:\ |/85ZGGI_sce.AbuR %* I^pC%T ,),N&Hx)W7֢5b;-ךqzkZYϟ?5qz/\ѩIU*ŋ(Չ4Xq]@HfWf~hb:(f=Cq0jbl\ >Q9Ge~K3|xdw=|rSzO,>/5^ LGS)}c` .ͽ`š߇ ΀u5$Kf^X{j3V1?3=:) $&Ѵl讗d7ςF^UK}N8_֭ d+* TÒe )SC$kv\FV"1*d 0ƦWt q&i: 6dڔhp$s>Mƪqa^,e?̈X2 C.scI Nf~&ev1®s.İ I}HTͰ#Ɓ"Nj%-+Gq ҋTT<~h%N/Y"RyTN\:^q"pEd7b9ւGup̙#Ka8&o,[ٽB) eϟܾݱ_8Ly#;^~xݏns]<6;W\¬&?VߕVst$K$K,7jpUV+k%+̀ujdȏDvb@Ƈ7dɸs[/1žC1O1֙Ae#FؾN IIG:0#G}ϺRM˸`?^+JQGퟔ/螨":`]|#FymS[ -ug{}jG S<3miWŋ ~E58jKzHN=J ZмW)%e-%:\4~-$/0rO3>u$M r :q랉#,6=@w uTZA"bg]2%GqEHKu& -N",ƀ&3yFb~>MK ybXg,?7ӈZ~H^t](gI5`= gp(/5h$ =9÷c檼xXcɖ|Mܺc?䢼zGʑ!nF?Lͫz!֚XXxb@b@k1 k1 k1 ߟf퟽Oݥ]^/yz宓ԩrt52Ys)sCH2XRy2W *MԉnkaF>/Wģݔ/&MB A|;ףppG;S\til!.Ą5xR d"xS sr"\3yx74]' MO{/)>: :e\- ,|ƫ3滜XdDơi2k@kdb69!? $"WL':wfTjYN4&$)6S|̈́$qs|eWvtyѭxG,Iz{Ⱥp,*ɱ;q}GgXPu2ǯ=`";"ߑg@~YpUh7|M.mft?Q?:Oš$،t?✌& }IM\G}{vcѢ eץiILS_Qz셂6K1ٚYRcX<;ϫSw /Xz7o ?36zvoGEL+5R9ϵ7¢~p/ϗ-9>t3ƣL]a&NO.>L5kk]k <$],odC^H-838#׸lkm~s})!nF9`k)n!}. 
T zCB%uZD!NHOSHE2C)*P  Q!^ˈ=Psĸc$ng,1d#b}rbǸ<])!"y7ʈ)Q3Ey|#y ^ϡĤ50t,HIYJi:&5B7B db-ָ$(@UZجH7,,)i A.ApH 5EЎ>ݿS#%Z~cAӭm|@=L*3I*1aRaRp-նa|_M,{*8p]IVBaǍړm3YXZ)eϖ;9AA2c Jbc4مv[O(a!c r )Vx'oyBqNq#;*$I?ĀD$(8W}Njǘws6~#b(_ UKӇ /_9·nwmL/ ЏS_H5⌧^(.(K$'2M'5ƿkkwzty޹7)vs.u)n8~6ǣ-D"RLJIb5p ->QJ@eKLFS*K$r)I^ƒ wG7HɧPا(y:RhȬ?@Z|44a,gK -/twipILR:M+˘I\7?E58SKqj Pg-mr[^/Z!M,q5b d*.~>0V 9Ӏ$% "aYyLWx=OV`= 2+^GhQ-(O|fXw,-BuX}H[&cpVϳsƵʐKeȥ1rhuUb-vS±C 86񋟬Y!"b s-k^6 Rhx0 nf8z KIa)CRXr€Ċ -|C,ĊgC'tb{Ej~2&̣2Jffm& Qbb&vEJL>/ϟn-Z =\$![Mv̜j?JIqN~|#%?Ց~H4(2b>%,9N$2ju'Qݲ8A7וW̢LI 4=e5EF2-JX5# ktT\SͩVTXWG6N` yӔ%K2$M]fj#rbdv]_7n>흊ҟz7=Odntzϯ<{O-|+w[掏 rr;*2*X'd=E]#rm+S<{-g7~FdL[k4_TJYԈx9)O2o5edإsQQɔEED K֏ވh ^R8۶K U j]ƛK_ MxޛѼ慤2xޥ/izwݼp@|R#徉nWC3~z<.,(QȞM5*p|ӳɏ皸7O׎c*#y{km;9#E._O\d.r ZEU.ү./Ubg-_zs9^bqP#Y_ \.Ń'%%{ 3s\;.Ƣ,/RJ2&\..d}Vt>~_<"_ܽ}e975HakG%8$ #eFGqLy>} rY\F`"K}DMčH ̨G) BK $u,bDдg$rfG u Ӑ ><7\* p[o!bda.jׅa6Ѩ(:TmҪySΩ2-;aPcR{//ȗw 91 }j*zLŜq|NDI]w)9N1&ĶU!oh.z i$ZLJȅ0ԉ%I,t(Yxnt^x;1dJeK?t[ف?|-6[}4cҞx0!?;wtAH?˳7u[+dp( Xv7cӘזo%飋ܵ#k8{0 k=$ĭz^7 d iP|BWHTÂ$h,)рE"DXҔ-Vа/l^JsjiX/Zmo{q@(" X=GGn`|aJz}+-U}';νw:?|(6$?V!WJj99[jd1ב&rts_n#tu%{,W/z{N~,c߰hL-rѥn.r7Ee~2(hץ6n7XYL6_Wɵ #0?uOj,'}sި>8qy#[5QΐBDWͧ/]re,ĩ4^.N.YȌED$u,uɖ6&9\Zv)17knYSg3;N#қ/R@q\V3*VFcuivyuwjT@jTW aSgKzd[ؒם"p]ĦyFÀ(P(葽Fzzp'A$:M$rqX˸U0]r1~M}tID#* aF'TͩɲTyzivl,:_?0bYL?'UwݺQOitb]#or2F5q2- +X@.˵0]oL+|5Xˡo{ ͖[aTWqc/P/_pW.߳cy.7y;t,o]\Jƴ5Y1_^?W Fvnc~XH'.ݜ6$2v`s vizN77k=;`)'U*KD^Xu2P% Ӑ _]R%Tyv]9!enZlhJ-PsX#]T**$>,VSe8kpCDOn2]tx H+TKd`J?xbz' [rç@/pk7Dj2#\rY (C80'(vR&9؈0q/HԦ68jHff]<:RĄ\ßl:Ӆ>?٣e ͩA(F]Zt8t }uO>:ӚZaH:1H>noT A’-$Tr %.2[ڐ u;*naVx) ONxLo9tҰ]e[B\or}(t`URB,qϑod/Xx,OR#o7O{oX޵cM}YNv^R_oXzǽJƸҾߪ*8^!ZI;]%]Y]*J}!?3cSnjxR@N~)i͇k:uzpi2J)RA]B`ur}O9#ɹ8~vT'S(H6,T3j(Jilb6 %ݠ$V4K4Rgluiʼc/Op# T=?mS# v ;bJOː(&hL̈́4ͅIӥ<\"1%u}LGٌNxyj#U4Mrу}(/8t)#g/vcEk,V+y da8W;)vTm7%u0k'yt{FmzIJ iږ<7pg !&)Ф1-C GӰ_:%G iEC3.-|'tO92HeDgk0jLI=UoÈ-Uhr5j9ens])|!wׯ9'y~1;^wP}\=.d$?oˎVZG~\mY<"+)$F)VY MFzݬ$]G&&9ZmX坬}_ _ɶRO'wo-U2P 3ъRDoqg@0y0dXUA du=LG*uyS~,d*WV>! m}I$Z-f MtV+"HA4Q!0H&tedԅB *K_G]Q!)x%KxMKFc\yeǗGg8XHKXO 1]w0S(=YGZ0_k=^gԏiRkň@*ǝqOSW]e=uM>?Z5OmO*t 8V; o Fr;0N x-CXB98. xSHW@G")E ZhW78ޢ& *.aC<\Yՠ ^;xn*$Bh [Y!s/է^%wYN:>E f1 U,u q e;|:yH䎁q!)[*qM>01ğo2w}9bc{דX71#SC:[Qvi9ӭj=JI[l-8:>ɑ1[nkt>n*5ݖH^V+/F%4b1bD'FI|o;WglO^')iJLoЗ[Jؓ%ϜEqz&{ݦFN$}QzH\{g\nEk$uDWCۡۧѾÇ70V3Pp}~Ӊk\#dGwmWѓg0ۋ VddhGc.Zp+Saa1C-o4"[B ZD=~M+AG nI1kS""z$$7g6~`a>=wgݝ&$-1p|-Z(#se Lef04_bk?ȿ{,;5;ٹe~H/IUڼ;{<@}, Q %7ʮw]+#3JmNʦ+B$8V4m,7وľdJwo=ӆdaN5!Hk6o0-/] BPt _=ݑgvoqfb0 \H 8bl#:.j!6nh'TzzG_{6q1r&?, SYk5 bLa%)▓ Në+`t٨դQ6|9MTMowXtp[[TǸs8N{s7SwC/n"8.o>@mʱ֊=fyeDmHGŦ]zgtySQĹNM[E;h cF=ҩrjϾ}mKI TW#m@$Hr0nQ 3/tc0$Cۏsٓn١:9#AQA'#*G8yhfǚBu%KưF?1NBMdܲrK|5  K di҄0? 
gќU_٧>?w2V:SȊ8#c.u2zw~&W֦Xʫ"Qn)C_{)υ[(6UdS9ҺwΞĨj"[ӢNⓇT$uT,"JwJ~HNIA$FD/6,E"Ke /Dghab_ojf}8 2rrU(:TSHq$$v!Cjm=dQɣwD~<4Ov sj&?BJ.'[Rܖے7~XIU$ 4+WXYv+ \d%9×{Z-X.C4B/3f} nV KŽ=Cgcgc\0]iۢCꆌhW)!_pF\7ȭ1v6h4-7hřKtħDqf =H@yl }FOR.9` VL%pbU)wF(TUW,vJ  xKsQKsJ穝XD]8YA%sJRV3ifZL״,{T*1?tOg|c9Dm[%Yۂʘ^|h_wQ*;Op͉i;}Υ5.XIȳrpE+^䒿犆r̜q:_T1|?#7ꈟ7ZJط f0a$χi&FӍ7Fwuf*?\GH3Csk㭔Q]鼯r.Wa߮t.L4/M+g{*{5V~,gb@.}D[I~{_I^c3TsUྟc^cgy Z8aC|>57 Tt{%^ꈒc4F&o]^8#Eiq,Q )dDPg#վÖǝgrlX+8 '놚2y+N#xs1A|Q }q,h={Xwk#'2g=|14T2+B$W|BG,9%tL Kpĭw ZR{\+ .`n?Җcm~OP_ϭc9xk`s%*~a]o~"Zg,B_ \D]\5]%8%M4]`jrsbL:#ǣ3\6*`|bdeqL ywldS3Ď H'>0ށw߻ RAGj Ͼ/Z_/۪o[d"IT#|x2c'VAОbd4VfSgG׎<|:&]cPΓg0#!nbiC$+EtnYJNƂ7ju&Md)7$(fZF+ՎjeBH=DEW:ԩDyDF7s+uj_}(}6k鎍l]XZRE ̠x{d$NXgsVВ _:uH]o^u(}"_o9Aj(VKZQO+cE:S5o Ei3 ,j%+l]Z)C(=B@ݑq!8"hce5@64*gu,6XB@"9.'{?XlBcqCh IDATs^f(.p/扆K| -YC78mۆs0Vt96Ui ;W>Բdωksxnk]P#~k<@+N~GzRXNQ3u4tST|F-qifI}%fXvQ K"S5;_].VF_X>1+KXվ{ $W5V_VϛSW}g昖!=6[[{?tx~lUyrZ}-B$o`M{BlcyZ뷞G.8PgOJ=z @p,|P_j[?DVTv+'Yqj41~x;Kp@0ZIG6ߢuFz;d%R_{/X9|cVʧ]I%WHw|[t}(wg?Af`S[wƏ*$9JN-e?.3ͯ>0<|s.v*+|+4d%~!dBw),Z Q}ür#OO#cR ,sHاst`xAu'͚GwTGZ4%Y% f ҂HV6TsO|1,RSH Og^S 9s]/KtЭl V#LTLIH Rqf?sģKX g`,E6W>|N;4xH2ѻJlOdHaGN) ";qc^i3Y˟~z|q9 _~$ z""L90re:+pJ3wLTd C5sJ[:VvT\D0CuYj&Oҁ"H$vgҹ6MgtoBj(B/V+*C>'ײF/hqv{niVvC?ƛv8:&G @!J /CRKk|j}~ xز8I~(7B+NBw<|lW7e?u$nBenXߘ? jpuƮF{)ﻔzF3y;VgT =M/' K;O``@ Ea*Ϙ&˹FUY]r K!F[QNRFK)M y#sN[EGK]0-ёVTGS[aiNc7QogRkw]:?0urL`5dXbCZCbQ |rS"0Vȓ5 ϻɀoi%gs5]riFեۮ۫_{q9A5{3.s>:hsBAx~G?/7T<>ݹLLF֐{`[|Op5nNS\"(44SQ6gjjQ)T蹢.tD~L8RWS7VbKJP6 BHW@_2)tc3l\jzM}ԩvu#اo3+92ZEyNx%j) Cʹ2O?{BM>u杷O(5e?FƩ|Fs=ƷwlTVךj؏Mʅzvr=g?Xxp km{h+ոmo tzc|er{ [hӻ7/ U,+Ҩg1ynBFIcR0"fd] F8p.@hoJ㾧SRArtJqKISeУWj)+Xa<bI fJz/>@L_7~&,iu:S)^&N^j n?sOp7syG?bUDp0˝ #  YhgPm^^ XŒfA+?[+l-4>̶\qj^zvB1h⣼uWh_?*07͵y.NĹ }0׌T8JE3"Ӌ'AdP? F\XIs"V.[ ȾK nu.ҘMt3\]UZ#y;0aW$mkba2'Ijm$S]fO KCU#zZl!ZV )iyWAOLeyc=~sB~k|4X3,L # 1T>pp]?Zv74"Gεx6o`דVM8Jx)ZChFy{%6x@kz#o/U]eT̮qJъfN{d0ibm4Od$.ϜZ#$ 5Q>>QNϽ6*[obEX&| +[: f_lm{K\@n8|ŵ倥}7וo,sV=zɇ[|O$ب'pu.}@ ZrjW#!Z9!ZgTc0C cEFZôwUDlknP>1S{ِepWJݐwEM7#\,25JDfM晕.@BGZ+%:HQ\[DA}wm;:%#3J+Ds?tB磏K//^W~~qQDJ7y~n߸6oZ>!I&4L]Lo\NlGDAk[!X5s8GLN/#ϕ~^F"tN5˝d^.{!߿$@Bh,;u`D_5"_{j/=6#oV/2H2D5"В9[9~jI??yqLJgdĒ1BxkHvȤyxg,ޢI)0tU 0V :J'7D:Rޙ5HB>*x5_=` GII=#M/#T Q;HZ bbK.*N iTa)DX:1 si+Q?rz #~Crl-;!#yƱvVcS%˷rDּ$ٗB?Y+y|8qӞ")i/d$GS́8_y)?=3G4|iYxsd5dL80IIsUɶ;ŕP3o 'zhft#IS,cO̕ݍk.}XY]t5{O_O۲W~կm`m7BǷ-k3.`mԽ/IZ.͸}n~ "Tsoˎm%·,~:͔(0/Krr|)q$wV_KA^sû4WY'P[ԶiGE N(|=ҰM54BXH Mq4s:4%(eLpẌ́FERz1z}L iϨW$ih%V%ȵ D4GӜF\ Fá1+"jUU0 o:nqŐkm'D>/4/='=\;93pO~_ޑ?EɿoٺzؠVBS=* XIܴM/ʉZ{W}'UO}!'{n(0+̞KZrdI}9 mr׾C?kH q&P35%O:2-^^èi K]ѫUB$5 n2HXrϿy&kz:hXٍSb8ֵrуk R^yWvjXx|F'ݲc9lBO݁~bC`Σ xY>*rVFV}֐dHYݳ^ ۏ z!ZyyVIt"a(PMb#N~s<wZol~$}[{~,}'>~MG3s|\>l}aZ e#D x>CA흜o;&FM$hCV4 BXϱ]C0iq|D\3F:KbOTo$k:fB=hP:mԆ,Vh/9PAL<5 J=)+뚘Gd59:12X{E=OuiS\b*\.Q6Vw]oLlBSExCdL=i*tos|[~\g3\!X}nrVn@ek}楆`md]uЯ\:e?xsIB禇u;M7N.~t5m:KcM~slŏܹCČ <5r DW}g(mEy\8d14Kz`טm("*DE:fR Pq,S19aT) 섑rhjĎCLF4xa"E+ectġmħHR~6 n@ Ag+S>yWɜ8#Acwhbasړ3ѸG**&S ^U[-'4 Y`k,jASEwruK2TJa$ zT,AQ'yFZG|^/ޥPPP>RrRƚ/sb͢5^kdi0ې(^57JړF8ջKON˥/?3#z)Zq:R3X$h%[.[,eKdq$ \uYA=C0MK! Ha}6DR/H$gq{5 oa'D}ˍV,ydw\[+qs/Ä'!L_?vp1#4O'jTK7$io[,oP#eTx 0*;qHP"DVcJ,%i0oFaE'RYm*0 P"r YMKp2EsKQ%+͢!,ZNy>N (V3$+ᑣ=ƙ&ִ3m){q,g+o qbWٷ%G#Qn&ʊ>ٔRWq1Ƃ4sd㼐/'&بExi`jW*cZڶJD,Pb_`u A MJdf'@җ+"×fbЅ*0@/0EE ]vAϾw?IWOW8o}Yfk}üu),_}f?q Vyřr?8Am{b@< E!iHZ*BE'9K Hh(e8UґU͐TI*!YJRfµy.>MȱZ`rEP eMPn=ҶiD9+3 c(UZ@bsefFs_l=n>,r-FZ+X2gmA`C!g͌_9x p.}r_>JyYLyP%463^mnh-arꌔ}nˣ4X2۠Ospx1';&kE1bA0a@n";T D@u>l%elXr!)qNeyܢ(($D jMo 0Fso|wSǗWخх.|A,tЅ}|5n5WГwcwg| 3 z! D9Q+4}ONqlNO!t ckֹT4P;[7G3Vsw;_@! 
.-# C4Άd&CP PcZT\I y2(m$CKu|+c1KO,{)4+&\ xOZSN0vQ߽]#%qFXXAmx57Zˍ7^ +B>i8H>CzW߄y⢡> (/ͧCqQ_^PeBPƈ{^J!vv=c.6¢9"}!ݵ)mddXuI>lbd3-ޢRaWx|Lw?IX49,{$.yNFԣFfV%=Oj! *zsݳXO\O.waSn+v{Е^lApG̽T|+?^Ec L /! y]D70Nt)ѕ/X\Hz󸯖^<^O/ IDATK]k=z6S'ֈۮrN#Y9 $+߉N׸vg"WUUI3J\;)nkHU6USw.+$a!oRJIj zѠ[R̮IQ # cKDD# 1RE!B:J%S}@ɋSu, cMH2"阩 UlK~'g8xkzξ~83R B]rFHo?~fyT(e/޳y"cxtj74q^o[7F Zi':'>s䭃Z 筗kx]S_iRb'K,Cu֊cۣ˜ sԫJ.'呆;AnZH㮗Xou£^<+?z lW?r@{|~ٕc`cj:qZ: YuQpwR#CظoHW~Bcr]=ceQ@}ü1+88UK{G{ȇz+kN>U]1fgz%#mT->rFSHg^\ _MVcΖTkEqM!؂ؗJ4$seˑ/ !\/lPUh$Ԧ[,Q ̸|OdM&' 允|Ѭ vM,rWn3Oo>dw_vi8eǮz^jP#,&&|O8kdƔ@Y/VR/GDow ;֠YSy03YhO>Eox373ܗ_S|Rʏ`q<򪡯/^j<'qu%@7 Vք$;?V<}z޼rPB=|ܵ.B >Y!fK{ajI %kaB^ GnesiW&G@)I5 )V=94AzxVcOM 4B+#¡:y Ǧ\^|寫\$x* f.BK-qv[vWUL{(rxT[j!St='awe7acRFFJQj %N3v!E)b?'j㍆nS?ĝsl5ee@bxdUs֩vuQeӂpf6L[o㪝}|I<8{ ]BЅ҅o% ;y9$cÌ/Cb J\ [0 YS](}yݻ53>S$GЅ.tRb[-@BpN~ {j2Q+ibA=':Mg(ybnWkWWio-\]lޙof$`-!s,0(e%1'Ns`d_~ w\ߣw>'auBTJRhS1 5tbd ~⦌H&O\Hٹi50W_7ԛJG@-Kf6TsH+Q :[2LӮg_MWDo>c%m&0#*BkȇVk3uNPh[CTid-EVf~9:v?{WqfjpqåJq Ὑ}!-7@LVJ>-" d򹓧s(Lg䮧E&qTr<y4v$=4 ڔ:ĭ̼A@?}2-VgkԦa})7;駮㛫Niܻ>sTNCʏxY*5h_*wk`!`5ڻaW~cuZދZY],elDC}z')J ']$E=8f7"*'7 EÖ|1r咑FD5qW,8r!|,8, Y&:܋UA'ub)ċB|Z'8׾Kkp|~b,;W-OmǪʾTz3$C<;NP=!X@Ds-#CZ{Ҝaa2$ BY^##i%Tr:K=~fI9g;r"Dd-M'jV]CV)zʁ)&-Fr(R.hx’M4!Ds6-gȲhopohAn}Ss2/8iWM$RU:Y{R zSzP6H7v#iדnT/Veg;NLtOܕ]ɏu.a]u:o]myFzc dË;k`}N2h6kUE1Qm#O@1#eqjIh{Acri͌f&' gR3L]\1Ti Ǟۯ$Cw'yAT#샋\H!B$F*FR=OlXgp}fV!ݳkmR[2:[Je by3=Hng^H0y#{eR=>8>˃|iް}_S~q{6ؤšS tr oMל”jpXzͣT9j,J2jrbY̐qV)B - ڵSlr֪[h|u^iy7_QG<gZ>?XSgl~.2/Zʹ??~Rhǁ爝ץf T:tCk0TN6r\< z~uF'=ʏxpx!ܬE=Ֆ\dj/uSA<S:_fW~F]VcF1v"V`KWй)v-S200t@$k;=[넰9\ A|y|Hk+9P7o8(0$ jP2晚B ,x%5BaQO̙3c2ط4ғ2ZR\.sXwp;_=&ZoXv^&6f I MڵM&qpv)sD;l.&Os,\?ij/Z=cBxW/^c &"N/(5Z2W/ՓBiE/Jm7&丰og;-_{zs<t<]ދ \ھT B֫6}ݦeh0?a:7D1Kpƒt# h|˞qN|%O2a"?`jYWZ\u9df"`hSAwe<>J#k%<4uR?|q>|s}z+8P-dRz6f JU3§:GI21`lC9޲{3#Ty)kw!_:*uhRĆތMx' 3Fl`„e4mR hxSb\Ү#ғSR-S'y4Se`G2?Sx}Z+UfӮcƀtǫZ~ #H%kxE2j< tFS.xTSdya RvJkף᠙ARvMU\m s)h`md{XTn HW~[~t- ]2=p.`!3<=O}OM<-di=J'f݇" NȳH 4JSvcx8%Xepk/)vʛnnD 4.c}CO? '9m~1W`9XO=pҀ ߛڞ~rX/u-i20WOi%`8}] B17U زA^ߖa$C4&\#ۖ3f&Ze>rY̟_@R^5z_|ro[B&Ibl+oC?.z#ݴ]~ZBxxL~IGzo7P``Qd-9_ôd2cȉQ%rFTI$_j)2kDZ:# XM[Z|:o-2YHϱawiʫJd[V2}*TCFe,f贓*gV!2Xɰ9nY>6 "`$❣X9Z eeMkOwx wځ@ɷϾgTO&> S; ծ S]xȏ7 ? K,=C @%DBd-Ejg+e l+z: ?t/<9|sw!M/TE*!m? ?%h> sL'D?!x?\5?z /qmՁWOж92Q: ?; 9owFu]Fv7^p-GO.>Ϗ?nt-OE`+?Յo)KPpiPn%y:p̞/ȃOkw? ]]w y~ž~E'k+yA, ]p ;wi5;{wAh/o'۱& ;?~3*϶]^^u8r{@-&|e~1/^ cs~}XwNo|>W[_8bە/ ߳_t QtY7w7]J}K_16B}~9oT4 7U7LL,+7fk*xJ}gbN U9Ė]C(J&63&n,yz69U=+1ښ0ԧ-.zbBO>e@"db5^nC> j=OG )l.йxsagmަ@o7ûí# b[6^"]]ѕ`Udi-N\pXe{kvi䋵贏k=+k[-/zq!Z]D%x]̡iv~/saΞ[4퓰mHo<2ޭ-;i&6B|8UX0n+Cy=]wlgnѧ2\9l/x^灇m+R)UA!F I]@=-D4"-g4 M"VݖieKeH4YRKR$dѥSu-N0r3R"q p4Z^j-叟;Lpx\ZW,/޿w/oC5aF>jVsYkw&މ b6e :f^Z>]m(?" 㹥s/˳Tϟ̫.?>?|I[W<ټ29ũN5Ymg:gX?Z@0|̵V߈G<ߠG-tקPArxpyonJ۝mg#'kT'2;?'.+?w]-" z*+&q .^_g<8qjSWN^:ό/p V3$UsX#J9t mk~p/{A,o XL0\㋧xtb\A/R79\&EOe[3T! V>3iKh\`})wj-!n<ʮ:}}SW$Jf#@$ bHb;;N/'Y Nc+Ʊ161ـ$@Tj.{w񪄄;Հ$η{ϸ眽dC4١ <ǣo੎ Sl\P Xr#-x';;VUm ZlZU}9t*ß~(D0c[XX vLU ġmJha#f gdk`M=O'ƸC}kYA]cI-vn]$'N"c嶱|c] qx%3݋±6 Kxۑ Bep Yvtqe/_¡됝U޺ݻ9r ME=>ϓD\W[`j7ąݛbr>Ȍ(w{ẁK~40g\Nu( &f\\YA' >#3a.l{;Oqi[}Fm[PڦZG}90L@:'}G]ĢP--WT3vy%.K3C'jaaDg-T> sHc:䂉?ߊ[zHa>[S^`[WwǍpo;O^04<pECCG&΅Z Z3 RFS2&@:L BL>3ULe<*݅;1wl3ۃ])d`h$YӼt=-r{Z|,qq 7q Y0bT6DJDe/[NhX^]-_O^WGsK,J.Tçw8_Um8! H)J y "9:|(^j7geē] "\VYÍܲIap(k*yxl^"Un*My5N!x ξh.eLaݐ2)S7dX:Mɚ:yq Ǹ">vdx<* @`w9;A7,Cфo@ګI C] ~adW_8]rĭwCG&dGouIAw"s?,繊s6Y3ho-k"-u.u}d7BLô/-Rxss ɟ`8y=vN<^RbsԷ"աTld'x %C4Wx+A թ(߲-ϝagT^:yҟUkp j"x8D\.4 n&# .>hucu Q]D}^߫S5R9\l^A7@%p#tN $d3y^ԃ4Is86+$+шPuxfiRUu0K%\uc"=? 
[binary PNG image data omitted]
dipy-1.11.0/doc/_static/spitfire_hoped.png000066400000000000000000001460611476546756600205060ustar00rootroot00000000000000
[binary PNG image data omitted]
> >M (ΖҚE1CJ}~,^k:Hol΀ξN絵nL{Ŏ;L&ގk"ȑ#8x lق̛7ŢDij_wf=$ g?yFGd2H&$TEX;   @*ze {`]eڳ.̜9 gA՗ohr`"[wLG,}LkkC<ǻヒ#G aɒ%5kLڵk}}}غu+jjj^u38DTi8ۋXM6okT0sl,XA&H躎T2L&T*EQiPa" u4i>b =g,Xz"S24`(:s[20}80Yt]ǣ>ut `(ك@ 9sE p[ؽ{7F I)LJJPțp:Ys1{~R)qA__@ibt QӺ[ 5`޼N̟DD C5ۋ#?f#q0O_ 08_|Dcq[ɤÎsӃG\M4iF(跂?.7 } Cgg'MDbx=ǎy `0p8iZ0{\466ɗID4e0LAp/[2BD` W_W0`}1aCu{;>`Taa\}Օ2T*={@(46)ЄիW5MCmmjk~NDDE0L5 Ta =\C;_1Cp?= 曡ރhBSx ϳ)X ,D$R]dQ.)d44uB"(qӄp"j5PlfJC5kVHD@@QNif5{,\Tq0L1pR(`חb`ٶew`lcJ"y16T;.zf`喕it'վKښWܾ1| p`,\tEgxDtE  Zа|ԙ1D0BcS{rM SDƗcoDЇ5S*rfߗo:^:z}}CC}}0{{hƥ1ѩQ@(G"s`9nr""n c1⏇ /K#Cq'3'cn2MDK/igv D0GJDT6B@T&'|3`B444TtDD4~ S@^Ы'<،a\y_9;ϠFc7N6p*{ܔzLV$0^fJVaX iJ<4 Cz4%|fΜPU8#"({ g^ bV\9wL}l1s[ o.}q4 _.YaHD!?V@u}b /Z1LD v.;=nژ@R3 Z-?DR8 ͋C4>|ě ;*Q(HM R4PU 3gʕDDۄ= `G[ي4 jX-;\(x3M]R"6k1aR`uQO "ގ-~/G0\f͚m EQچW^MD u_/~~St:'~ܦ(+ 00kk![[ruFFNDJ447OCKt(| AD4A1L :pt}?W0??gkTdc$tm[ BP}>`` [ N`0u555d*Έ0L } _njF >Gq֛gѝCH>oQϾnPT/ .Cҥ8 $Cm& vm*(EZXn(bjٳ'?E 1w\|{߃y+j˪J^/oxn_{-V|3Z/؈Cmmm..R)z4-/p@٥X{,\΃bMY{eq"""=~cӦMذaʡ:z)|ptvvs?T ׯyCJ'x̾h=܊ f9B \Xk:v 4*(6 k+-u~3h"P::P;}zeIDDDD8b;z(n&}hkk+ d˖-+oҥx饗N;@sE(¶m̋8 ~ yV+/fTG<""""R d\E?~8྆H)ߏ1wN(TUş韢4!WDSM *=(""""7<z)lٲC`-½l2,_B#bڵMoőL@F5H<W|k_CKK Hnpp*+x>! 65?_%\ˆ}Y|y}WMSǎÏ~#444_X,+Bx c۶m?>~;+{d T0g@Z[[뮻7nD,?'|NX]]]xWp7K  ^x{OS!pWU)@>o< .2r-H&ӧbonq^wߍXhљaDDDDDUh/*EQ0+pBlܸ`\s >aѢEx :n?3{1\|Ÿ{q-`Æ |DDDDDUE{T1o&:,|_d:Qrx YÙ8BDDDDDeLb IDATBDDDDDeBDDDDDeBDDDDDeBDDDDDeBDDDDDeBDDDDDeBDDDDDeBDDDDDeBDDDDDeBDDDDDeBDDDDDeBDDDDDeBDDDDDeBDDDDDeBDDDDDeBDDDDDeBDDDDDeBDDDDDeBDDDDDeBDDDDDeBDDDDDeBDDDDDeBDDDDDeBDDDDDeBDDDDDeBDDDDDeBDDDDDeBDDDDDeBDDDDDeRvU!Ix+=:I\x|M<*/^,eOOOαwqBo~/~P5]s5 .s|O|xΪƍ7(_ʗ^zIwsr-q:Ns9x~ Ja9ϱ~zH)OO!ӦM+-`޼y8x "AZ?ǦMaÆOA4K./EՀŽ?۝ێ?^!Q ι(u?W^y{/; .O?4/ND7ߌ zJp뭷 _裏:VzhGSO=-[Tz(4+Wʕ+?/? (3 X>~ιp΍W__b'<4[OO~/}K%~aD"!wic%KȎۛ|wyTE~[ߒ7ne _+&R!/2ꫯW^y%9_孷*|I/'xB~ӟBERekk rժUWU5̞=[ !B*s}߾}qo\nD"N^uU+/".\(={ۥzI\s.uS]uy6c)׻@ 5!H @j RC@ 5!H @j RC@ 5!H @j RC@ 5!H @j RC@ 5!H @j RC@ 5!H @j RC@ 5!H d e}c_B׿.n.V|@>>o?ÊX׻x@,cYBlEGw{キmo{N.. kO_/~qE[֨DX km>f  )6ÇwBfkz!M!H @j RC@ 5!H @j RC@ 5!H @j RC@ 5!H @j RC@ 5!H @j RC@ 5!H @j RC@ 5!H @j RC@ 5!H @j RC@ 5!H @j RC@ 5!H @j RC@ 5!H @j RC@ 5!H @j RC@"\.듟wjϞ=m[?p}{9:}t}/;P>7߬PkT "i||\_WDZ~[dYV~/<}_[w裏~w~G??ң>??M9n&MMMIΞ=ߧ?i'xB;w$}ݺ;cs$ibbB<^}$oUR#<}c; - cznPO<‡$կ֛&}nm{'yގc{2; 4<v!>|xJ9rճ^Si2ٳG499u,`׾{wjjuB Fp>uWN%肵Lr 9u۱ct뭷*JJfƒԵSGler]W}{~W333я~|msueYzߟZ.X J1II??KwP(}=߯z|P?ֱꡇwK<>#uyz#4ɟΞ=+)Yoʲ,>}Z7xnv?Ч>)}z;ޡ{k\߯}_ <O O^~oxSO-kOwߵ R 5!H @j RC@ 5!H @j.:8VZfg%˒$e39݃U(f׹\(T.433#߫Uuy]tYa*crYrY{n }رS;vT>_X EpUjfg159sjfvF|VUS&Z.uiȶoоUS6S.[`@T*e˗.) 2:zEG_ŋTTֵ,͖czgH[ײ6i;Ο?)EBj+H"m*c ԙӧtunVTͅ˲t ݱ͈6Effu >5qEQZ$4[BΜ9l,m=ĉ:vzgZ!DR^ӧ$Y_kΝl"0 U*tE8Ν;EZdvïUl846:'N6X !F_POwe.e Ag?._YJ%jh:ٳWmw(dY;-* C:5->ZLB< Ș_ XWT*).6(Z@آ}i1UyQ5ӜWFG1~588%۶FO /ֻH؀ l1JY?ٳzJctY۷O\Nbq->uJrI-Uj5#Ectij||\al.Im r+W@I2ʌ{׆NY?499')$c>}JwQX$z(A&W?sT*|a$UU\.Wq/C A X4G/kffz>ݰ._ i``@Ba }uUU&R&PDoI#z>lk ˗5;;^EږJ(du]9`֋/ Ƕ4+au8>>K/ʫoY6R$cbV@lY"a"ɔ%K:v*:k搩);wNSSZdjjJQɶ-e3.6(HV'K/>t6hddDASyIa$˲f希Ao6 t) \l(Vm:y&&&R/vEWX@E:}^]MVjbb|-eEQ(W@Ν;gF._^l\VIRV$- itw,Y) #U*e'?8WWZʕ+1UDz,K,ye@~1mX|V:h{0hffnR#dױTȘ 9]xFci0 U.3cZ1I 1R.W6KAo60 5<|RG>Z*c[jy f [+qkttTq,B(Im3 "|߿/zߤBauK<8[*%z]q~c*} ٶ;d;{ض%Z8KQreDڳg8U.~C׫ekz4H|˚Νֱp[WVK~5@X!Xq^󼞷g2UFm;*K7 Cac/2 e))jy,rPw%0.mcq>54m;ku]k1RQwJ^۶"Șf/mK399)@|Sn5CHE*J"cL@\.ҥKP `*jYrlGJYrIekۗ<~Ǎ+:+v+@RǍ"Œe߶1ȸ+V . 
Rl6NZfJ]5…zR}ܭ" 8X9~c499ј+i%`1, C]toѩa(M*ւsqEQʷm;d3 @X 0q)yc֦UaI20zf1}HȲl6#.XX5Ą}?NlVb1Yjzi3Xa#|Ae[ڧ}vu <!Z]uy8~\33дvǑb8n,|h:O,K\/( Z>t F&N^%S2( Pn{ U:yee9[uѾHZ𼺆OԥK FyuqZcl α\XRRI9}qKZ*|,41R 0t Hb^aRiVaX2'u5oe_=c]VQE;%`qK_V^S26cSaFQtjOB{_Ѕ YƘ~E'{S3o";)VCؖ%&''VV[8tb[Bv٩bRinXHUǒeɲW'q̙LyCQZ ֪2o5ł X@OzM]]GRx34CHl0J{UV a@Փ,_NlYBH+,+WF4>>ֶencA[A0>_PVUz ؜0ŋ anp(KZ0RTu4~`9 5Ea(۶Z>nh񡶯 gf5GD Q={vj`ɮZ (yN {\ϭ0EFbUE*rzPrŋ4==V Ze慔֏ AFLFؒ 6HI=0˗/ix WŲlrYuK%aj EP DХ\.FS5ou[AVFWc*^cG(VXeiǎ~U+%^3jAYOk5 PJYmɒdH\]a#RU^SMڧ">ћyۍE0 P]>599`t"kJEtU*V%cj55 @r%97r\Z@8ivvN՞>"̵fʘʴ7_E؏(@QCāF\+.ע\.3*@aa4>{p1RȲ^bqCtV]L>\Gaki?+jkQ8EշDQ >}jKxkAkm&~X)轪g7f ݰ@(cڵC{v4=5\. FCǵ pe @ɉ օk!h9U(E8d,H$DZߠs2:rJٳgֻ(9( B5jN.IV[?]qWiEle-QibrR^.˚[|4Dž45#"A$?HZ@|/c9b!+V+kX8@tW1V\.K`ݚA.|fg}LË%@eժu[)8 &ݯ.XI7ZSKw wuu)'8j]xa kgIQ@Qi|| X@KVUEZ|5Z'BAzS\lEAhGQ}9]8^\e'cfK%t_z2D"(h IDAT/r lǖ[S:w`#h)ˊzLs_D? c~rٲff˪VabNm4>6feVT733qy^.}Uդn[tr]"xh~G/F]9'cU*5e2\I|>bǴ "6FƬmttTkT"cZ*۲% d׶4nz1Xqt\ o7]QeY@A*6FdL*rGȒ՘yR.z]]E7ZFdžbaAHlb9Z)n[xmV0svՖز8Lcuɺm:|?Ѹ1FY*udۖ8V6Q6uNxY~TDQzqy`sli1mYX,& Z&'ϥłHa 9~%A%bٶZĶǶdێ1 PLFV3<[m+V3tͯ`5}MOO_ب*(R38Kd!(5-`ltz]3CKX HZS'Ms?$cy5EquؖdF.Ke)RJ(U| yrY9#ϫjxnnfEW$*F￱+rK.^J|SET2h`G(1We9 ]0 펫@ )+5ZtZh=(VIƢDQ,˒l˖؊M,E H|N/(tͷȶٶ\ۘ1F|J**J ðHz]rIYMMM߬f3Yܹ=nt7 5)iK9b1 (8ڱsGsWqe2c:F fYY^ )GfYAYeƨk!DˊeE'eYVRy0$zU'OLՁC*5,˶ PGr>dd6Jj&Ky^]ǔJQmGNبdDQ$K\Q^ybݻ煜l6b<~6~4$I._RZd)kq.խ%6Fa*"EqT*yF7pkh\L1;;YNn@I+W .dhY|SV쬦4==j͸,[|^leYrYd2ۧkgEs?Yv|d* XNY4XI8۬UL2Eq,nٌۖjՒ^z(+oYCC׷n٤+Fhb|Te0%˒c'Z0 {@dYPjUҬ'499ɉ 9Z9Y5ٶB!̼C_hajrrR~bB vkנv:m) By { Irfsr]+6ݨ]D}׏cE5׽*:rH.=\.lXή$%+N[1Mצ(T*WQoule\W~/(#|˭ sw, Ezk:fggdjb_(Rr݌eWi^a HZ5J1^ /R.7BF{\EW}, =ߧ<&߯kz: ϟ[=_gqeLطodKF1BbO|1m5(q8nG^1nIEO'F~G 7cuBFK`X&Ijɪ=,}6q]G7NkUʝdBcպ<ڱ1>Vx:vyuuE dw_qZhbrT_'ϫ`lV$qO Q)kjjJWFhdΞ=sx60g+=5ٯ6Ww+z uɌjMgϖu6i{ŶlgZ贺 Zeg^^KZH9 8p# ߿_3s78h<R\ }9f?^B%KFakl|Zc$) /̴7wu#Xmء[F2yu\|>l.'qcNIj\qiVzUKY  Z-ȴ|m[6-NJ=֫~^iJU+28so^&Ѿn֞={8 -QJҧS/޽u5ֶ߯ٙ}w ]@4z劂 l5_]7>w$ɲdK8VXH:4 z󯽵սZr9pMǗX'''455oxJ;{^]:Μ9ɩd Z%A9P_}45s&kyAIRF22>ͿfP WA}*;CǏL{8U(7BNoر-X $1ig iBDZeLy:ZCBLYQXH6G_қVx!ZCu]ٳWr݌lV\% :fs+nIKTʪT*:sFGtro$Q Hr4}UwULZ55De#,弍]ә(A'tY-۶:BJ\c2V57#ҋ=[v{0M#ۖcTT5AҊŶm78ᭉy5 +2EJبOV;6("]|IOU^zHwJ[jD\($W]71Vz衇O}J~Z=.^|+T1TӬt!VrZ\.Q@W+255s(I ) 馛sU-u@k8WqFKD]Rd433j&ϫ*aF\.Z(a&Yo`;\r(ӨhwW@j FK{^w6Ɍ 9&e433Y땯I]  ݟqD[wS7t@>Ou]=ڹ3wܡ{L$IzGt곟$o}*y}cӝwޙ3gѕ+2`ri6j3@!RI(]`%y5"MLkbburP6S_dPm[99cAq`[ݟjZRHw CjJ,qUU{@UYeHY`R+Yϒc'{k NdL7ob#oEc_Zczo4ϽݵI 4/LuO`:'Ǔ)%۲FQcAwqEI.]kڱc'" Ç$[nх $%3x }CjIz_7M O><ӽq{W|;l(IRU\x͓҆-DEQH=[y:{L ;!,@捙Ri-߽:~ eɘМ9@{>6y㐰yXI">Z5y%$1c۲m[cWAQ\jTU-"~EmR_A}EQrREz=CIKs>;5\.0T)T*WU{(g[kB C&7saDFv8jm[sѶE266^[%IV^סC뮻??}_lVG}of ;vlBɪӒ{6mKql5$]j ƚFuFG-YV2܊KW>8OwRy^ ,kaZkWp޵˻hY_k8Zjϙ|>'uqU]ו:ڻgJ͘ O|974wt`n ثMMq]2j&It[|_,u {pL>#W&uetB33ִضmuƅ %4Z}$Y5QlFB^nQɶleٌ2Wa| @Aa~ U5|rF##499_qݧ|e @1ExII*Iڽ{wwN触tEݵkWv`# pUT| űecClےeH2ױ)b~ +i<3B}/ތ1$ymLk1Լٸ_翚eꦾj5ǘyCdb djزFtM@b2%xWSV ۶T[ \FlFREqddۑےe%=/m[Tk+d[v PJ]QdZdC+)O#xrYY mV_1mK{C帶u_JMJMj]jM/^/gtݲ1^y>O__/Y׻83&LkkB庎\7vP(rl#6mw~lcbQtW{ŷm HJc[rU463 $u=X˸vdoE=F|4pX]wHY U&tˆLeD2Xj]Qw4se\Gl5۲,j'JZW(\F65r][lҹMY6i Pm__A;wkN5 \KVZWiɩYMLhrjVREZ]u/~T(7F]}ٳG499uIYZA}$ijjub|ImÇWe0F*JVg؍j|?P2Rdɉy.UL'j]vޕE[@[kzE%|~WO 50-T0tڼ7Xs=|+kuJ1tV!S6t_@_ .v#d32tʸd\UEAɏHJBD&Wu53[VdܹϲW !q+ Fhu3r$8Eu_Z܌b!b!}ڿwVhjiOi|bF3%U5^?M{/oW_]ȑ#Mz}Ju@z!}գ>??[nQPБ#Gw1z뭭w} ۜAKN:8 ]-ceelۙ;!B!B!77K'#'M'MsVVš{rB4՜r}hќco@8W3k~ NsZcE$uIA83̶fzm`[y4yIv@O}o*DZ0dy}jff5~Dk{G|^_:?.˲LqZjt˶;3k:ޡV#ѫjwG[Ti1"ndu^.ctko "VR[ i7\Q>Y ZY2W#۲%3K͗Lde\UEmh4Es82d֪b1̌F#|5Vc}8nWT)7qx8RŊHR0W>tdL# #A2ՓndI>y\Ym@{1=z߭w]z;n() )G|Aq>?`)FXfk8jS^.G{?-o7]λmvm.-P]pChlx$W=ǘ%ǟ]G,Y?̍-ko0sc5ƼIu}'#YGn΍?Kd8U{mS쪵Wh &ql':8j}EQ* PVOBqp8N2f#aҽ 
7Ɲ>=,Yv/:6m@ɲ,}{Ͽ@ԧ~Px{챮q/Y_444￿54+箰]^ KnwՒҶ̿:ǜW%Y C 1O$XFZS|9VFXi}NkEI*KL[h~ٝ0j}r5)|s-͵^88DQ5%n Hl? }ޠzjYw}X@]%CĭUse3 |eߺ2KXd60 [Vs5ln : 1s[,cۍ5Bڎ͚}s]L?kUm]J,y>hh4CeF˳Q͵Z?,Kv[@h[ًWW^p*m@$i.NYZV1=]#׮qBB#AcՖlhv˚k$]tc1ZaBͨԱp]71Zt<-:Ϸ:NipsyhMb5WQ[vc>sL9Ӛ[,jMܾbn~Zٽ07*!-`dc<ɘ3(2ZutjnJz/{|R>d{}=uĴnj5ofkOŤꗴ=նzz"&gVd TZQa0w~۷_lN 6fYrUU[!d ƾfcl>$ T.mԢ*:25ZEz>HZVL,\ZMmOF}\`[ql8n ҷqFe3se3:!W1Fa2ZRGBD( 6g۶::5hjmWP![s7ƁsY IDAT⻴=w︦b[ e6uÚ7qB}$EY"zc~,@ha.U>QRUEUk}˕:IK|M!${5OrUʕw+oz >6 6g۶n:s0XެljlC[zcj ~ݯkH9YLƑ\G4UZ,Y]]4u붮om-{FƏHn;@KA蘳!c%&]0Ͳ,muski ^ݭseXr>{ZwAYrlKncM䫫b!B!'۱[eŲDQE m}?Pqu]@ohhhHKut9˲קnQN{4wRsyW_4|lqYvһn#R;.hؔ&ј0 I{׿Myv멸T箎C>ZfFF*/ !+ak^K՚:Y#6FAkDQ9,_w7:b{!$ʯQ㚙V2w/k1w 뛕uZ<ݺ@ $4gv_o}VhX[Jzf%ݻzezvc1Au4BHcPeY gc#;]j"Yܻ7&%˒lǒc<;,%vdIN1 =I^$/b$Fh g,%Kқ-u7ŭ]Fd~ku海w~KKEi_ҁ r핮( 'Vzմ0!"@6ő#St%q۴p4V:FByc!?C=:"ӲRwRtM%>={"͡1"j200'O᭷q9h(H%Hcpv8 u k HZ#⁜AەlǓ^P˽yG;^yT@pv& n\;L|V 0wBSDȇ/>\T z1Fg2)ۿǦN D.j* D4MG:Ӹ{ bUΠyuU [@hOc>񆾜A;Q7ӯ +yر"<=9#xGQ46U;m;ᾼUE NG/ݙh4j 8 >o:|/~ࡨP5?QyհPTQ, X p$tл}DG 5낑`sYjU]pul@@WU~VEQi*1Z5povKg}2S]|ԏAw&k[lrp]7~dY$ ܹsQVW"/Qъb0660;w`YfVQӽ?@Uմꃑp+ԡiV;vo2H^,b?-^h0WVkM(xLG,#!!M/A&DTWTroK`xVZJt(~С9 ӴPLӂm{鴹~8p=d2ٖR²,Rt:>sڒQ8N8'NR+X^^"LDXL˂e;Gѝ'֒`f?~U>TҺV>ӫ-r2կbpԬ`]S?UwwA= RVz[ll3Y׈Kvh|;xhO2W0-a4,˲`ɓ8u4bX6J)aZ6KE2"Zd2uݸ1kװ\\7 qq`&+!ܤZ{mOVQUώtz*AnUsMS{3@4M N=%ی}-AFH%Kulmۆm;PխC;fVV- Fzƛ?"ji]n-piZ EQ\R RBTPATޝ/`qA0ius@Y.Lҭ04Ǐ؏ua^e4`BD=1>>///C..\WQ(`;.Eaxg?‘!y(+v1ѸXY6lѪyM} B\yyK;=>#Hjv/@A]tVo=Lheܛ"l6 ðzxG}D~̑#h*ႂzWم[9E~`yO^S ܾUA:-fH  Dee2@>?+8p NzvX^^K%\vsmiX*¨VkW ɯ@' M$굮Z=2R20)Y*W#@w]J>+@t+[D~WF-)]^jx%VJ\)yIU^VwnVMF V;R ^,Jtۖc8s1޽{tUEGLP*9  De !p bq?p-qctl;nsJ }Kc0-R a KZ)M&6}ԲHK:=谿$:{̣Y4RB@kV:pmsMNjnU oGqI3 ލ 趩y?yml}пB>߷Rm^X.J؎ H?[ֿ:.l+.cH$\ʲ,T*?R8ʥҖ=mYJ}al2H'F쟶\RJ {չǁ }+Rp=80Pa[v#:M}wڱQvѳ^rNUS 3-&ީu#%uiNQ]ޙMHhװUOE@' XѬLؖUTT6߼53QH"9h]N8jJۿN$ĺfYroX2@TdBcBD[qiݻ/(A0Vֲ, ◚sAX7P((jjKGYuUZ&-FLVvW F֢~5LEߓFc' ~booz%ܐI^`'\7#  ߭Vwm)=qbHւZT,~:D<ǻ``=Vc&JJi#1!-GJϝśģgށ LMc``m&zصkʥ2 l[K. jpE͡jTaT oRяƬNT F9"e[^ԞN~]EU7nW Gm#;wtҁ<6O-qj[B V#*|m;4u0A AEӕHmTU~@DJ#8|0t}GUq  um)mڵ++?pR^7# L,ٶ=, o&LӨL 󘟟aa*A n/@YgB_E~YTpmdwx_۶1pKLLBX,}۳}CVvUmYw"MUEQ5nuBXtя TA=z̯6kcK#rtU0lG  Deض[nW~9hX k4 !JaxžL||F iE\+]aG 捬0iB_^=TBx9/+?\}7u͵@?l޼T5Qt#=Z}{VB(6TkBM  $~վ?gMvB v@Wۑ0.jG b(n]57M124ɉI cBP KM=m @hKm0 O=CO,FnG~}KKpmZ²,ٶ, rKKKam YWءkIZQ%6܆hg1izvcabm :,0:N$_1jNGHnnI$骉0̎i/hP})f6Hv5γcpp##ܳޘ(ު~^|QOUU/ɨ7/bii 'NȎ >:z=NW ss~:ۆaV*;:ejT9,nh>hoոAý gW>/믭?yn'U݂NFȎ!L:X18ـ4(!U~mQUw{t:b +4;k[l+1]" 'oӴs[8⛔088cSSسgF#JE{*G@c @+*Jb8Yps6fnܼSɓ'{mEL&L&{ζm,//;owo~e0 K}PE/-1uV6Dsoq͟&A&} u#)Fq!o`k͍?"3xܛ]iOPep~6 -߇u›ti*4MFJٮ *5iښ: >k:r}9  )Mu$ 'Z1!M%ix^QTTUi$PD,}|ic` cSS+nkY^au&^!ZE[˲Qx]7;Wδ˱ x̣R2|t\u!vV'Pa=E(BqxXu DzvCp7 ZYW ިR渮? W?VU[S}D2,) ]̓ e TD D,=\p9$]EZ 05uGu#bhhN^q{4QTP.peܾ} K~82Vwj}56e#Z.Z\UfA0=NF**9^ tktֵP BL/BAlWkit|4<):VUY4ϯM`~cS8} #Lx'e.DHcBDʲ,嗑˥17-u 0_K/}C=j)mX,]ב0227v[,իWqYb1x< *V6klմE~!v0K}VBthv-PU"k4BhZ=.UѪh-;W}/ګOW7Hxɓ'1>>l{%X<,"4e;4,oaN4"H`hh'O,8{@rGJu9z KFϚe\&޽RR0W֪Sƞ]mbGkcNH6ؤ(WJڴ4Mmᄄ駇{m>ZPgd~U*Sd~!3[K2+ւhSy^%E@U8]q?"!4V2 ,,˄2Zz_q{w1}mt1_(pR|uMZڰ*AN^{V?ᕺ,{ {5݌jkt#Lr099|H$H[2h$oconeڞѦVOAL㸈'گNԊm'J)a50 Ûtl_qFՀ+]WBarLӏOhS:^֓Q\ׅm*l'x o=װdeּf~ JS'xe8Qt/L( b1dTx|l]BU /umEQߏC`~~H&t#n*m#ѓFւ0  6$ׯOcffJÏo=8A\7÷C*%z5͛W甴0j&IdYLi.kd2ĈZx<Dul{ N6T ?.\8 ˼Ru:&&ȑcC)_"6>n6FG?._Fa( 鐵+Db0u BEiT/-#2ـ9]躆?"c` >E Kx J! LBkPN(kU職3kkE8,*2E"䞮.EVa׮]HSH8;_cSDTajq{ [TCƇVd+p]uW%UEF.uCs .vj;NBXm=>L&b`~I9[Y\ #ŧǎ_iHӐPg݇d2?aHWvvv]ݿ_6KFBlx/trY=h'| < ٶ1x\`>dbRu9(*ZkB$pɦxz. >h ݧ~]XGBLDD'Yp>0S5+јU;f-mk%`~asE gEle՛JDBua}kxuJU$SHu2D0!""'&ș3][+wDԫp­,KH#+E֍4u5Pl\a E rƒ1EsX\m@Uj>c޽H$>@QURiDuU6iؿ?" 
VKOuu،cٹ ͤ`F*X}ڥtxL/c1ڍݻ'NꌪHpJ׾=ph$I:t=0֘#AZW'k~lJr!2a (ו@('8by=UAB]׽JXV^8BDD8~md2#G;nܸr ge!aW Ϊ+ϰ( `tl{v,! a &{4a*PreeTFx溮T(p"V,áC1>k_ !$p\ :he9<xG8Y7\0 n^=q](7u,nH+{^R<* dXZu^7 @6Y"<܆㏀UEgAbq$?W.luu\>ȑbnҖ%u@j !mjUUN%qG0UwEQJQATpVǓO ~o^7h `) ]PٶR Ӳp; 3EDDeX Ǐq9C`|pyPu@Zk$1ǑuQaB3N1!""xGT;1H)ކFa# [ \V5`BDD]@{0 OגT*ۘG\FT G&Pu"q j*EAD6]C(+vp-T*^+ 9 @he @hb1:xrWVMeT`7i]k\Cmo_} ӛ^(aaae8^A `:,"""t *2\ׅz!mC T@Q4MJfiP_\aH$Lx" ۶ӪQWqۿ!!%`Z,!k8*UbEQpC8p0t]۱4 0 .'QLLL %$ Iضa Ex EFC۱! 8xG%)HNaBDDD]144;wnu-dI  @SUuenφA7[Pt:,t]R =OTUTpzvKDDD]S.qo.`*U/L&t: XepTYiavv7o4{(.J'fz*fAU$q_EQ0>:qkہ++EaX 4FGF|l1=S @+* n`qq!'(1]pΜ_zePW0LWq&in=X+|D,C*™3g/iۗ_~f100+WZp1$ 8pWض/hKvmqj{Hv<(:b1x X MӠ( BRsgQ.8y/~?'|E|#""jk׮byy9\CJ?@Bx "]e-رxwM-NHmٳg1779|[•+W|2;O}Sҗ~~_W}={SSSM{裏cZh[y xcqqq( ?L& T'Dnᥗ^™3gzݜG@V000cf6o'~':z wo~ah>яBJ}kyDDD[̍T* ݧ(p 8=@DH)a6sO} [oZɓM;q._ 4gϞm8d2sm+!""Z7-Hq:lڔ}c++b?}s-Ӱ|<_uq"",g}K)q6#zP og>/277?۶ۿ۽nql&+Ph%һKKTH~,"DLNNbrrm _>appP(W( W!Pݶ|D"ɓ8uΉH$QTa&E (Bj+g P{1!Bc0M0 믇jڣl @V裏1==G}dzvΝÇO}iyp~<~EDDJ:҂V8dlH0: JTb8  mSN5`7uw*:M?o^|E|?D"??ߟɟ@k "" 2 TU MU$D"R"GNg|=@8O```˿K|_oo)S;؏ӟ4*>bll o``g}ccc>Ǘ%/;֫JDDc>sNśi*RU vwK"#d]}DtKz/KKKrxggG>nwQ|ooggi~4ӟ4,{q|ӟ[YhA<C\V5fRm {Ҳ@(@B"HShkbgų>ѶgΜ??w'?I|򓟼m ߅j׵a&L3,PPYh '+(?ED„I"""tSm;plbsE@T+]*uDB.WDD"""ꊡ!?pr b8eT&?*B#BHB ;UUh0!""P5 m[x7U`6EފAA\X '&qCr|)DtQ)ҥ7pꕰnRzMl6#ߵ ' esPT133JZ[Я?`~cc؃|>}bBDDD=knڽKKK8a^^ծ 잘@:eSh1!""rx≧z "%""""aBDDDDD]u """"" DDDDD5 @kQ0!""""aBDDDDD]u  J: dADDDD ??nmc @("eDDDD]1:/jWnmS @⋻/RSnmS @`U@p!""""x @^s۷oEDDDD]1!ΎP8@J?_W6{;+k H X?Ѷ[.)sG 7z|I#go n=&FAwc`` 5KK9xW0 4@4̈́ؐRBu( 3h}`'+; Gp5H%i~}aC4@b^,Q{0T*(vڅX,֛F4;_ƍhp,dY'+纑ÄBMj/mW @ⓟD}=bw &9"߇>!C=Mp% 8W!KJ;wʕ+H&طo_׃P?0<z*~qT*K=كX,v|$@u\W\E\2UJRQo1!ԧ`Dy>|drT {2&oEX,% B F@oBѣG100Km2x7p$ 9rǏ뺸p.\jfx[H= @Z`y&2UL˗q%X'(sn޼^z |{n2юկv+q!DOW}8˸y&.\{appSSS0w"N޽{xW177mhsqX/='[+V7" $:cǎb;ׯ^Å 0<_ƨ9^^{ &'US ;~1#"|s_7+on##[.yLHDDDD> ;U]{}H,@zVD^[3lJ騳[ijgYݫY^Ѳ(s\ 2, G=y>f>sWMx=NR58? )|Iq}ԺjUR !I F4Fժ7d9*ߘ1JII @ t!2*^"cĪ{@\7}UOÆ$I"TV&I]ܿ_f`5 jKhD"N Z MUIu7WLѕw[.)rRKb{XR]ITRW/Wwѣ;f@:W>ml~$UeSRgaf넯|5YnW5A+IڟU"JJ.)QOIGj)4T m5hF~F~y0ڸyQ 17XgUR{n'w3#gP$*}* $}l"@J?!Ӭ5Q$9hk-FoiW~fn6kʾ^ڻWFAQQs{4.N=H[=jX&RM7s7=aRa5r[{o3͇Z,wSP{]TsmRc*߿_&%)22Re=z $7UPt}^{Gn0r=kDLC<{VnUwtVq:JՒƌQg>J'6~}gQ㙄w|rez.\f&<\ז;X7@GCQt9rK#>rI7::_o].\OIQ P%Z"t2]/X~w!NށqoDo~Fc5a9,K$ÕA5Oz֮]+!?ѣG(nM:UgΜiu?+WTjjÕW_}U84"ew6u %}5},XY@I= &H _{9%$$Ȳ|_TTtꭷҺugVIIܼ<͛7O'O|ivp]9Wݪ|r!;DΫ;XRf㤤JI\3GiмyNJqp X 9995jbbb~zm/BBBTPPW 4HZb-[&I*--Ւ%KKJy]rEK,կ_?{Le,Ɵio]~j]Pݭ9˰Ԩ,7$Eũ߄ pHj6ߘp7j߾}ϗ1I׫@'NIJNNֈ#uV_Gvv1ڶm[Mğפ >^_/jT<`\\=ǍO Upg@pϟe˖)!!o{qqjjj4p@m ЁTWW08qB&%%)""B'O옃֣$owF:(U5uSIŒ ^d{]Df߿rrrZ^ZlrpnLLw{%]-9J7(놜N~РIQ?ztih~[:vXKx Wxن*--^_ ˫ntjᒤk#?~{eKUsJsܹs IoܖJ(66VTVV淏2Y%-I>sw3ܻ-cǎU||nKZZ/ϟ?5kC D/^Ԋ+xcӦM$ۭɓ'+99Y:~>N<Ij|2$On>+g@gw~,]TvRaanN&L۷$b:tH ,~vر ׆ 4j(7|Se)++˞L .K#G[oHj|G\ܹSǏ… USS^zI={/K<ۭ^zI/VϞ=Çkٲez駕. V[,;+ҷo_*<<\ӦMӓO>Tݻホ }]effjZp< X0Gհa4k,nBBM79rDC t9g@؆6! m lC`@؆6! m lC`@؆6! m lC`@؆6! m lC BN t Ǐt Eg=\D BŁ.ĉ.u>g  lC`ۄ\W]]-I\ ڋ~u>g=\W]t)%tz14Zv~@ϕ2:%΀,IRjjujК(m p:@؆4|%&&*""BC ͛]VRUU_WPllrss[{Qedd(**Jn[SNՙ3gZrJ*<<\IIIzWU__ߑe|zǔ0n'?сҳ'(33SzRXXu릡CjժU~sYpZv>г+,,8x\2G}DGG5k֘BSO˲̟@e9sĘQF'|Xerss}g&**s,Yb,2/裏kf̙3:ڴiLzzYjٻwٶm>| 5wΣg̛7lڴ8p|ﷅ zѳt9mFςÞ={eYf>y F ;v˲G#G4ի:{ldڴiv SNiyﺒnf̘_~8駟vt.\[WUUewn222Y{G~MςMVV1c,g= M䣏><@ۺu\.~ӟ'72m<^8q듓5bmݺջWmm}-cm1w!{ߺHGΝD: -˲$ѳ`qF۷O~?#Y1`nuB0 ؉'ԧO\. 
$}(""BǏvI(,,L4h I{6ŋtRy#hB:aÆA?Y)))ŋb y<شi}&Odz։=`@ )SɆ |0@M0A۷oWeew}qq:)Sx׍;Vڰa>|MY,꾛*//O^|EsػwNYמ={TXX3233%5N'= bUUUںuRRRxCpɿhi̘1&&&ƬY޽Ν;͖-[̪UeYff˖-f˖-cLQQ˛}]ӿs}a^|ESXXh^{5r̬Yqxw_XeƏo׿C&,x  IENDB`dipy-1.11.0/doc/_static/spitfire_y.png000066400000000000000000001307031476546756600176530ustar00rootroot00000000000000PNG  IHDR XvpsBIT|d pHYsaa?i IDATxi$]6眈ȥ޻gc1+Yˆ0@ ,[B6#[XBB $e,}/m<㞞q[-]kV.sUYYU2"2Df':0<BDDDDD#BDDDDD#BDDDDD#BDDDDD#BDDDDD#BDDDDD#BDDDDD#BDDDDD#BDDDDD#BDDDDD#BDDDDD#BDDDDD#BDDDDD#BDDDDD#BDDDDD#BDDDDD#BDDDDD#BDDDDD#BDDDDD#BDDDDD#BDDDDD#BDDDDD#BDDDDD#BDDDDD#BDDDDD#BDDDDD#BDDDDD#BDDDDD#BDDDDD#BDDDDD#BDDDDD#BDDDDD#BDDDDD#BDDDDD#BDDDDD#BDDDDD#BDDDDD#BDDDDD#BDDDDD#BDDDDD#BDDDDD#BDDDDD#BDDDDD#BDDDDD#BDDDDD#BDDDDD#BDDDDD#BDDDDD#BDDDDD#BDDDDD#BDDDDD#BDDDDD#BDDDDD#:>ŋ(Jxߍ?fp}C·-ɟ {9| _?ahGDDDD4rcG׿u|__W^ɷ}Í7p]H("""":^xOկ}گZ__oPˈ>y7O/"ʕ+,""""}smruj}m$y晃nơ2F_g}ADDDD[vC.1쓓'OX]]j|?go_>f3;|>~^۷|+@ɥK/|qP([/ ?~H{X,:d>~f ?/%}C_җ_ŋ?2""""}/g~gя~ag??wwBtFd}+_'> !VWW /K_~WFDDDDt @QR?y|~}jyh<3;|>~^4JCptA7vgvQb!""""a!""""a!""""a!""""a!""""a!""""a!""""a!""""a!""""a!""""a!""""a!""""a!""""a!""""a!""""a!""""a!""""a!""""a!""""a!""""a!""""a!""""a!""""a!""""a!""""qDDD{ KeH9@1 C,..⭷RJ?DDZu|߆#%\ׁHN<!A7X㯂HHk{R!H /`{@Gs38yrE^},.?EDDn\m/TJ(nԱZEaa~+0;{@8bJ7n\|>U,Xy.&'+=1 (0~"f7n`mm }ȶkaAXTSS>Hkk6Fz"7o^e?$E'pRdrɇ#%(wq"5 DDt(pUllTa{?ízB6HAA+qP,( TAiz8Q(FAG1&m B7L/"CA&ϡ̎ Ma * )$BC("Q[n\]?cG}kgbC(B^G+K?yޮODG !l6pIF(JT*p]{W*qĉnxP7:  ZĩBDքZklz^2Ơls$$A^G4}Zη1 #q$a}i6PJ! Ci 3ؘte!!׾*VW{CMC)%\ׅ纐Rq+<21+K0Z7vST<'G;$à#uqF#@Z$16Hn^xYX^^ή"4MFa~ :6Jچ1d6su30+;mrrgΜdTuqY9{biO":@BI,-=*ѐǎD!m(Z uVK쓲bĥ70ÿ1yB.0rBE8 i8(J%='89s{;]uqVobZHI{Z$Ak9-//ѨlB+շ_)F h6 C(1$ɮS#I$Y +lH^ߞkΞ!:C5;;ӧOlMrrP.  /}xG:<=DԇK/]“Okh6B0F# #(eg{ڨVQ#"FS m @ZD;}H]|h yr7~,~7Ŏt ;|/qbfE߇6߾ Qr>$('  @JRS%s׵fYN_o^c!":D( [N֓& 1 @Vh6lA6a"c( ַ Z{:onְ 'qRv %4I24M!ۋ`0#(Fܬn" ~G (m"D=sK5)|LTJR"c@hhlhɆMMNM\P= h!Q(p\ZH!}~v\TFRCC8--"#qJ%yo(cg4JQV! Dq8*eoFf'^F(hJFmA\/ևAIȡC& %8HԆ^)%Dg$H$^e JQ|&(8I"ܬ}kJ<)pVυ;ԉ"c)n8A>DˢFEQjAH>'+vNId= kW(T*cvvN ꭷƒB|9$E("$(emuqdDq4(u6u=JEg-JmDDGmv~f3I) v-HkIZcfޖt$VFaOƀ5R>{w)7jǑ15U1֫uM;CUG|@-HDD{C|իoAdΎu@Z as%|㏟Ë/!$kW>_ή;(FJEJ ChlMvM$is pNޯ'":V@E*"%uXw͛׷`"qBDDcKFccZaCJnlz|GZiK!ISDQ0 DAJ/\h_KBJ 0?m%""""cZk$IBRJTEH9?mE_JiB'a!cl^s Rb}m hh3!"dG]ѣ5UbjBC3Cozm YO]}]@Hx Ǒo=ׅzpR H!F!y3'P,XND4%8~0JEShm wZc ZCf!Vn q 8ufC4m/ ډ2""KI dA/Xht?vat *~ec ɆvR xccR MRӘ<㿌DD4wd hmc )ѻ gz*F7P΂՚+IB=FZ;  ! @Yh LDtqA} RΚ?79UGY +L;,Kk GJוpψ%@ب⢉6"":C:WC߹0 MMk;M*_/y `3޽GhBDDc)Mgk32nfOБdu;VKCHϹ5Q""Kihny+ln'#Yi~kC^֦c]["HFc/LDt1XҦ"yO;ȶGk:Q z>46y/ vHM,.."Wg'":@hXS{Z'Dgï0Zw4*wy]DDG$I[ ;uv 61iaJAi Z'礶4^ADt0XJMVy] \*bff2de 0)yTAV!zAݍTi2BDD®JLM1=5)C6;װ5$= K)qɆ_$iՕellTw{DDG jB~r#!D{Mu Zѷ0n/LP,]њVR@DD4܀62Jk!8l8r Ul>~6JFѵ1R tvd;kQ׶qDDG%c+7; _uM}I6>c}BdK4zBD0 7]Wpfݽ@簬ѻ2: Bt"Nh ݽ܎@ZM}DN0 Qn>E0^ww'Oԧꫯ?&''q /2n߾=???bz G4MR4ٓ:j000 jձA*.Sp- c2/Zk/ \E?!"? obyy|3G?__¿77|ۿ=k"":=:(};MfO63]?E49@^ 扈'΂'Ν;\׾5LOO~G~?<>᳟,`ee+?S?FO>^xa/jԷ~QdJ'h{)X6`A=!b!:_1lv4M} |;_Wm7E IDAT^y啮s+0DDGŦڝc{@ƠX*ĉIH){zi0_#${`Z{,aRm@+M7oDx饗⋸q8wSO=R+Wh'u<{Ba;w,//!BNKD6fggؙ|?mf!"+qt؋M kz@kB{K &>\iEDtܰd }@XK/ҥK""K5R:=Xޣ;tׁZC:8X^Y~}D42/_Gax@9:@ɓվ}Bĉ}Ǟ:uص|0/2Ο?&":8Fn^L" c{Û3d;z;V:6, :|І)zh<]tj!XOT*}\gyB @߱nZGBDDmq# ͪ4(A3u~QeȺ} }M;w .~joy& Їm/2"/v __ő0ZðT:W:Ft|8 ׁ̔ԂZ fZs5t":8+//h4XZZ`{5s?s(Jԧ>>'q9N~'N$~Ν&>7~75]&!Zk[CGC] ~ܵgyI@`mm IÕމooΝ;le|_o?{ïʯu]O4>u|$8>OIݱ.Hص=sUAѷ-[6ADtt0dn߾~~ oc?#ȣ40k+`[mkD4M֐f 8zDD4vR[t#_Cg#QVO]p3CJ!cNYCflP~~Ω!cBY!=/L"Z Ia!")gJݞ6a/Vq5t":@h| vء3zphxa:M0l R8niu; cֿG\Pk"":]n_9a!"CN~۵Ǚt-rEw 0DDGb @kzg;_. 
)WC'=z#&`՞/t%vبVl10QBDDc%I$q,a\ҳ`gTwƮl6p*^":f@h(ֳCuACD=bُvmq0 'ED %"%>[.8(ڣDkAҀi~pBJkh>D$oR)L@"ca /CB#%\ׁ6)*Bm`:Q@z=DDG $oVUq0 jd/p uy6H) Si$`tIr1B":v@huj]?ǑdYCVT®\υ~x GJ8= UZza/$`7 DD4V8rfA!#_d2 Dt0i c` ;2gUw*K{Yzgpi P[Bܹsha!"CGwO*H)p]CM $=U[)*CHDVշ/x |3^+aBDDnuuZ+LNNB@@ âlp٫ѻ !eG G/xy/Ơ:gs2d= (C|aueSxՉ%""sz k8J! #I ( ! ‰ٓX]]kj8N߹mͅmP rFL -RPZCi /ٱ!U}6s hU4 %&K/]>V DD#QV@:6(ʨתVQՐ)dbI MSc$ އ1h6abч; (4hN"s0C zĠ\R"z!a7߀AXO6yAk|t=}Bq$:\ !p=Μ9_Ac!":ffI ,/=*:j q cL> \- ÄׁrxCoo;rq!:×s-n]ׁ͋-|l㴃R!#6y)ARx h77p9\t J33#14y:00J^bqq/_?yB.%I$IT4 \N:s!. R"' DDG1J)+qƵ[ޖ: M]l S+ h6C\~wO{ r% AD)oruܿKQcO1kkX[[ssY__Ûo\Ƶk?D]ӷ{&o~Muf%eGp`}[n\={O=4&&& ׮]EwZ>!Rr Nab}ܸq <?/Yh1Q~*k-H èFdMC?ρ-Hrɇ繸wm\&]=p#-\v/_FZ鼡^| gϝcu4 \~7n\GZR)p]QayyiGCb!":n\7cuuq5 "pA1ޮ>0<V؁rJ),=\B.<|_ +)u88 pF'X[]ťy{~oB(j| n\qi+t](}[h1bs7Fz@nwز"Y}[Խ0ェQǟxoesssη"M|Uɉ2AwBC"F˨Vl6'?hG!8Bl W$ pc8w"W':b@!5._~ ;x W0 7=JH  ,.CkH<3zWB %gϜS3p]Ս:j&Af#@8)Ew.`~~n`$RHh" 2VWW;̉m!bFիoh6нw{>bp= _{4IׯT,ñ6Uܽ{׮]Ń905Yٳ'q,NA\z ch&RxSQ\r [hmǶXJ &Vxbjjzo-"CBkjo:޾} qq0z=m?w' 3q#c<\\[7cr >4EF \ׁ{ +bqjwS'p)/,#Me q M=̉YY쒈WLDth|׮5xmG # ,\lR7B( I 1|"_&9mIq$8FEhkPʠPpQ,p] DںΩ}a(ƃ%\~&zf'pi5[?dk0~e%B @+6"NP,#hqܼyW&ÖYسcQ.gVx; !tc쓽d )miF۟:݅Hh 4M[aW[w]E4 P.5RFÃu.QaZֵ.(*t 0?SN3 DDc,I,.㭷;>8pMkGFx)kh=6]Y[ M( <R1}ۗ?ki@GNs Rڞב!(8sǁ̰uB~vOFh6 &tVO!& S5hc{QڊW=BȾފv`!TC %A=AzqZ۵Ph8 xr {yz6ث1iyx® yJ>"G DDc7n}ı]ἻcГ=s.8ulՖ]VqIZ|JĸQo?ts6chTknKߴi^f3ҁ發|mZm<ŢǑ(|lXZ^C*{ʆȇ_b_\D4@ƀ1o\~a7ֽ杝dvd[bWCh/O Z+ !$Ԧ{sl8p [Õڇ:}=xF4KKk.#{Mw0 ?J 2,Q# "V۽yDzs;fݲ}Y\xli6Ts8w!BDt$~1|6D8J{;Zqqd`_1kkp\ok6|G|(pVR.8fC&'(`ZCd&ڙB^Vޖְ4ڻƓO?_ DDH)u- vౣбaXޫ-(!:yh-HOK!$:WXĩSh4l4ll@3(\JERZ*jX_u]81^E*5p"w…;N"_ DD( C\v^67| rvҵxa!": ZkllTqƵ{0R~CczjzժunW^!{3gu;wWSא o4!@%X]" C|_Sx?㘝6H&VV۷QCEPZeh/!%g{/"*ʞ\+"h4p-BH9xƢo=Yq6A8}Vת V=SislW+}mƠv&… 8{ ffNѨhh0 !TUOG㘙ɟ+g?$Ξ;9ܽ{u q=;{OEez]a DD$h6=T%4!oH #Fxi/+=:?lA/ I]oG^{$q\.\j|r .…xϠh\`zzfBDt@$ B!tбOqKKܳf:w$-|OYObqkqN555eX!jHtYoٱMޞqhC+ءM cst6H~a{c4/N8k"BDt66jw. {YmX,`cq6 ruP\Dz=*j'""Bg3m:jo ;9 {}o=.ו\B 4Gʭ/0at.~j-pHD4 DD c-ڧ;½mA{6* #J!]Cvh=g8!кU7ގo+ "SJ! C$q_@MzH~}YTJ#bH)!͆U !lh!Y1fm/؉'f"SJA)sz.RqHa!Dպ IDAT[ӹbB:{B v+"8σ #!jn64uZ/ZC)-FhG Fb >4j6֚ 8?.[ɾw}Z m݀Pp1;*FD( ш8w<0QA;xۍ:?r;tf= S͚B8Rs>\pEh_"bRHhGAQ!Ygz?ݳo{֘̆?uJp]Sv߫=}ve{>(M2k@hd (KxOݜmb8gN${>'[ݭl =$#d0Cqn Wօx5` $p(p0 +g$%%E-<5t~ΐdʪʪ̬Օuɝ-*Wr3 5e㫓"[iDD.n""j4NDQ,vN$Zku |s}H_ KWԁF%}(˖N83 DD- iFJ/KnjvÛN%Gn%Ď8"FMy=B|'"|9,]TցhX’Ue5b܆EDkQH)H$TS*0ß{eF},@'58DD-d&y&0+9no(;Iytsc%C'C2!Ǘ~a7/) DD-d&yrAAHۯ(׃lsZ4@I2BDkqZHx<^ 7ʩhneZkXj2 1!"j1)%c.D"v7"vjWvR#@Vd hrCZQ !H$ T)dZը!v& D___.HD  DDm@!\u^$V/*"; pHeXXUNe_ ӌ` 5:DDm"J!"ړ[ԊBrϩr!;xU9#H%TCJt:Ӕk-"6u6<&`42H`Hh |k_-GjHd2!$5`f^Ji7odi%BHÞ|5شi"Zs @ FFGH$7)RpS[vǩVeA|85,XAD-azDDdFF6bYELLo25Z޻e;?N\ Y~%YODDkQbހW*ptD>Vr˒fna+킥F4;Wv!"ebM`t&z)-&.=RnyI(w>]!Qlh Dl۶me""Z& DDm*ahh7Z-XaQ <uAWj?54 Ć #z<Jq QK&Sظi3校1=5v ^/VLJ K,nr.N ԍ\>5 2!`BD6mC>^G%!6pT k)Dwþf&k0LR_hqq08Ѿv;^)^!DU:`iL$YAD-$)ع G#Gছn˗|y{G⮻ZGD\6cغmRv.׬=J! b ҙhϦ|aKJi?@ǩ'XTC,le]"j߂۰m6-nVg?Y{ ===Ca޽xC;E6'> ۷o _!gvK!hmo :s[-.w94,WkF; H!aRJ(KA)mQ]^weJiSi 6"Rv܉ӧOJ{1]صknF|{=;׼;Uz%Dԭ$if "zIKf;z)`0$BB zs@{wٓFRϣaCJ) bddw$6HK.^*r8Puqq K/UرD^Q7J&Sذa7YR4 NM[ݚZVy[ۋۂ%̋PXRl۾} "j& G>˲pU@Zcrrrs{{{[næMcHڥR"b0MJs WN͆ݶ.^ [܀fZ0 $e\O?#o~sCD$==سw=Ҧ 4 D(P,/#ag@BRhRa[SZ\,!,~2asK_ǼLLL@s+&''y8.QWf{gB`Y%oTElEEQ(`,rWk?NBrmHsZ{A,=ƶ`Æ%?_|^r-Zq_"|A|S ܷsN$ U;|0v}:tA ^{u$Qdػo?$Ν?'^iI 5!a)˲tuȡEk \?awʒN eE>ۦE3Qg9x`/ϝ;~-Zs~4w?1==WO}w;@<75CwݫB|R4v ;vƶ;n'ivޔr7+CG )*Bw/R rZ®Wapwt}?y)nv<3ovx;p= >Ozߏ~կ~zw5}}Dݢ(6n)6E߶! 
Ӱ[upmorgzxC"B@B,KA ElڸúQ7G?я~G`Ϟ=x'|a&{d#K_FGGq=x]֊hdK/Ywۚi0 ;Q(+r`fAR0a3v "`9?Z^eYݸCÈbK|DD {?iC}ݸ,"qlxc~ҹVwQ(07,|mxZC 07M}Z<sP,>/Js^s@i0Mv_) ;w._\ֶg1==bbR"V,w>stDDkBk`-/04<72ADmQIؾ}R4=Y9}&/V.U ,졄®@ˮpQNfW@cӫ%\s͵[V:jcBDԅҙ Ӵ[8yfff0;;ꥵd3} g'?X!)ۇ[2ADmQJ$H$ƢH8w.?I._}wG,7_fDuT*"Hg0}e W|sh%DUz!Ӌ s""B,z{{f1>~/ `f!]\B]= hs DDD"ذaCCH&R8x$f@No:;T=X K_cBDDUض};$6 oo<&''M6Zj~Dû+XaBDD5 oiFpiz$ZaK/Xza$8t: DDT7mB<G_?^{8NzBTb&$Hؿ/FDFѢ @zzzL tb{"X*a(nzzED50  FׇaO\ƅ[%9x,reYā144""ZH$޾>$SIlڴ3338y59IXe ! qrbYBnƶDqђ ӴSiKEd2Yq),,̷xvjC,rTiًd2ٴ DD"aH$QL1=5yLOOb0Ҥjn42a.X"X>n|Vdaph汱&hm0!"I@:X<LNL,Z5%.X=(pd}ϒlx]|bL3RoDD".A?pP(J3RbU‚=ռA/_So`nnvzVTxwEvw(1x)ʲm۶cpp㉈ DDh4h4}ۻkǓ?GYk}84"z\HD%B"@$H67o3TJ)K^PϝZDD @} 0:׮8H!< ltv/!dhDH!wU V#3K3 DD6""Qq@U-e;VB؂困"u""""jiB2 T"V‹\׍-T:;J/k$QcBDDm##vP]~^xp?H 9 !j'>x@mX:ؓЅEH&+DD݀QR֞8"hԞY<ʢTgTuϹMWDQ[ff{8$HFerP"l,jcBDDm%N!3 $@q4=:a)1J sx|i"uCn0ЄPD9~bXRڮ«X'`BD݇X,x<,;XF.{ BOrnZ::""j+Rk@eE Yeku+oҾDY"zKEFH$J'XX~XaқaH Ðճ uyBƍJy~"Qm9žX!;#l+Ƣ*Nn\ ,bo ˆ ֈD0# @a= i9#o+JZ)d2ٶGDQۉDH$KxDB6tقb[!L u'n""F@V,Vu.hI<HԈD,B'K)|E%"H֘.4XABD^,kee!:u' DD]*ˡX,z"2 [,h&F`ΏU! p |*O>O0p Z`BDDm)bwVu'.rFh|%;^ ""jKʮDu- @vw$r0KDDm)bǓ5o|FGC9W  *?KMNNbvv˨|}M VD6+Ð^p_ @,@$-XDԝuRԶ] 7ƽDMB*>( IDATo Ta u- DD]*!z5EQ3i(U|ݻBAΖ,~4*Wqgp!Qw?".ef[ R"O ۓ Q7bhQu3Cnk86,T )D'uX<{|Mc&ؓυ.@DF""j[i J׸˻4!p'!%ނ|D|QWcBDԥLӀa^F]RK|Tb xuWv XΓ2HDDPT,o$`s'?tNmRxYeu5f@z -n[0 S`CWgu^v ^7&QJAimoDD-4ff|;n$@1!"DZ{;| 49?y EiMjr[i (n+րQWcBDDmMJd2Y1kh h[mUbʂ綛Q`HG"_ux|m[#`hx rc!04#0ק$z})DD]Q^oS% $DjլBlËTt{7*/Ck Qw@""jkBDQOw{[*ZN]-X,J;&#"Z HDr\(0q[ޠCie3(D{DD".[EI)dBF rQYUɃb ,Wk@R#~/@*[:}ӂbH4c:u=f@I)1<<\vB2 Kz BD QaBDDmOJt:hԩ*.i2Z #[Z!?:LaFV*$?pþ}5BbxdBDݍu #0`[(Ho$Xv#$/8W:x4I3BD]EDD]X,tP7{#DؼKDUHu DD],woHӍLP+  a=́ݡ,3]ڋ.2L6nשÞtA9klKD]ut:S3@6Hd,īEv !:k*wA""n'd*lxg*7ʑ 3LbXGeV"puv[D"<_ހf?< CDu7ʝNuk\c)9c7 =2BD]]=ԯG1Y$;я]wݵ/&,zI0̊77a,!TWO^KAQ=p7o?#<~0M=7LСCػw/~ambTc=w]Iv7ވ}{ޱ|wygzwy'6/hf3LXZ(*OYw)!D"R4ODԅhQ*055G}O}SW_}\zqqoK/UرD^WBDԘ|>|>e,Y$A,!GߕրRjn`j޵bk"vc(я~? 5&''=׻-z*hSǡTMqOٕҐB"""5 {/>`jj /AT>V/hU! Dg.dsnh@Ju AV;)֡i&>0Q*100z ΝAuOz}݇###_ =>c޽zDDFdzNfDC{} ]Y,vahD"Q /DD [}Q|+_+Wfq5~;oϞ=x'|a&{d#K_FGGq=&Z2WfD"ag@w8p |Uv!ag@D3@< @|C‡>νkӟsn}+YѪ3!ӃT*9($be Mc?A4V@E Qc QSʮT}H3ǫJX!-U ""Xi0is׾rє|DE|PZ4M4 DDԱLӷ~C]Qy~h!$"0!"e& ig@'_SU\[5ⲟ[i!9QDD:X<wq v!ʩ33 !LDDD`u10:CS&E" dJ_=Oo @ 5DD".[Ѩ=4QJA!9 V^ "ABS.'r! pQ+u9,j2-"^yms@6;{Ɍh iXNDaffXL:Lƞ&OB"MSeD#qzm:uW 4HcT7-㉈/ DD]β:{ V<C<w<%!L)A'"cBDDMRcnSM ZRe]:Z:A&|U8sEqȇ DDD{۠NXr4[aDv.""0!"reT*z&"߂vZfZB@H{ ZiD"ә>ѺI)a*T*0 $)na_xxF&v{]iH&+>z˙ӌz˖硕F"QWiN, >H DDєR0 XlZVxe ^T/h1[""0 T T>}ݔp T "p?B@J3gn.'Ѻ;UeYP*lb D l3d)MAW@$aQ y vzD:}Zi0͕&VH&[Ẉ 䓅RP,61>5N^oUvV. 4 QaBGFosQ7D:r.C"p}0DԸlÝ֪@D0!Oo~ ZC}^QC.? "${z_w)@oay U`,&rhCDk(ՙ"za&ݣVT?ޭ@T"| XJk d{|%DD݃¯m,?ϟ;s;-vڛo` .t_'NCMr&&v{ pcDLӀ"쭷*lhR ~>F<C:n3 @o[qۯ1;;'x#"ZR4|8 )' E|MxC2 a"քDD p!:y)C{ìkׁە Cq/xXQ%nE'` 0;xh}, e, w`wƺ NLLlFxh=088~YeN+D@id*t:v"w]P*"tj +|@ڹ{"Hj)YRmca"aP×$ʑ7)avd`GDݱX/[z7yؙa V%h4 DJ Nw;C25o]n!S5:/#"ZK @<~\`#³<{G8oa`CG 9{vv䅊zI% BJy_an M06j@ۂ邕J$":k/-ŋ ~vy-f%*;r5\\ %UL& k;0۲ܽSn!Y|8[PNDDsR x `8_{EDs4+BL%,,ߕgu !ܸ u'VX@PBDDsǎ÷-5||v;f|Pm& 4kekJ"5Np |]^Ao 4 B"ZXg;toX>k7)?c3J%n۽j&; 6lj\(*d2젥23ᔐ Oܹ w*/BGgv$"ZK @ֹ;w {N> {6n?D<??Ν;YN^tB .]`}8 6,/Cb#- :Q<}?ЈbDv"" dF]zaCkY<E2_QDVFXp+!;@/FʬGO-5K)QSC N$Ӌx,ބWNDZ촒xӛ4q1LLLx{ָpN8D"m۶yoܹpI7eYXXXs=ӧOCk믿[liVDT*-z˲p1g?E2p,~(?ʺMMw6iJ\D"Q,q1LMM&&&p)̠]w]l6c۶mqa9sO=غu+:L&ӲuQ㎼2ɟX(@Y ní0`}_pj@~dŌ)d` VUx y""[& !Jp5gѣG{nx7dsN j-[`rrtx $ Xd2[olk$qq9r/]@TF9P Hݦo? Q$twC%%%!a8CP-j'DDL @.g)%Zw˗qIDQ={غu+ib׮]3<+Wxr alITE7xXXX@>]9=ua⍊*qM5aJː۰B-ƾ^mb򃈨! 
@H o6W_g}Ǐ Ǒd}@az+e2l۶ N믿}XA|>5kQ]*+Gѣ,M5| t!Gx e`St!!o0ap !mW^Av$"D @Q;Rb8qΝ;Y(}v޽mf(0??`ͱ."RޖbY&qpYLMMbrrn!`P|]L3Rb[ZCk{#EBƳ%p9۳"H˳DD=9R[ ;Pmؽ{7055~lݺm=W^4 @$+WOo;4䯆=sϟ._@[`a![ks& F%!_vCn wktCR˗16dr[DDBu,燲 !g?~SSSXd?+ĉزe ~ȑ#ظq#>#GҥKƾ}066B??O~\t _t:a&PBW[<Ξ=I?wb;OV|g:?;dBv|VtŲgwY_LBJߚ܉n+;2 W_t"122-m"vP+[gl<bdN?MOCv]ӧO#H`׮]ػw/ b޽8u^x9rCCCH&|C@T\.y LLLka|2&&&>KmV/4N*nu$`/ ]p-W I)w*ג ;lEQ #J̙S8wN<+Wfp~d2CDbŦzPYN~~boX `d6ܲx;4M|͘•+WpilݺLD>P(7xM&>_B'h47U*xcNL6 q#,y)H$J~ek%Hu[WXkO\t)pS*$C`Q : rNVa({D/rG~7PBHHÀV댵i" ɠo7pmӼU]KG\Xq-@B|} o;~\ĺ~xy^ߏl ֏_X+n'ÿ*  EJҩ]{fYva+7#tԐrM0 Vn-~===8xMR⥗^eۏ0!]葻x8~Q+N;9`3&'!8[@h .栔{tT7%7BHȥ"W*(U|DESUhJXX邊G}ZZ륭V-t畨(hQ`$ !̜sǹ̙KB23!ߏk`rΙ3ge2ߓJpOn kB)| t5ܝmq{B"iE&!e)2ٟӧ@4Z{`ϞO!IYa$ u;jt$.]#hk_om ̚5+7N4D:xovnDGGgZEZ0z3{0L`iBg,oSH xUԻpdnkz54Ba֎EUU5ZZ`=طwFUzB>} "ȴsN*yl/)R {np4'Lh=;7$ZZP҂C}1:guutt?ĻBoo,eHۆ^JҮ0 ͬ;«-#w1vvvsRt-/w!G! ,B i ;vʑ(/Q;WED4t1 ]]K?Ҭ?jގ E ~ic Br] L$e liֆ=,$:)e>xF@)!#u;v$IM:ktX޴1W? #<񣷷tgdI!ҏڴ,Ha|Orz>'T:!!Xʲg̴@555^X0MX,C 0:au!Dt|b!`8pݎd_Uڶ|@1ǻh87RH/ q˂*+ÿCUW3 RjCH$Գ2Cw7b mNc&{@L1(e8CD]}0,gu@mtTU}@DOJ`0h1(˞v7u.H.#~c;Z h% Cnpzxs:!孒Կ.L!b#GV *4M4 J%[H$>a"~w=vκo/ ԛHCtuM @Cލ(ы1@ʂf.@!>=4 ZX8x ÐR{) SȲ #=(2-IėKJ)$ !ݚ s,̲&&=B˲{@S>0&5pw 8!4.THD1 }}mط(*řW-e[qq}ݘgD_k'!g_hFaľ4H$ix #Ek̏[TV!!Xԣ- 5nJ֒ 5 )`&,gH$ M6Evh TFK,QV؈>:*t[Z%vA<&G4BC!:,\P!"]d-ej?ti"}ʲ`{mD!gTtD2!D'pzSM#? +q`30 J)躎8xDDC 0 =HKBX_Uه.'v~6u晘8{6Np±H!:gc+'mB,wS_eٙ6aiY}.~ p / VUU=KaB>GQAH)HU=3g>s 5 Y߿^W Ӷ`…8Đ a$S W}OBR JY)OoQٿ4- Ȉ~Țҋϐ b`2Ni<@BII) ""@۷7x#ƌ}>fϞ3f#zzzpm&z-9;vڵXj/_ /:VZ}% c1V=ws:4~@~Z^Z@i)T  "D<MNqfR}b;B&u퍣|D=mp&HN9M.fƸӔP\\2W:=+#:3l9ᆈxfҥ3g***#ۡ:lقr1i$_֭b͚5XhNgÇc͚5XlN9ܾ0 ,|j!6] M5MG~czϽvWWcTjj[E#9 _4ug(C@~gw"z !ìLAEEEuuա?  e"   rn 2˔"ͯ=ҝ_GGj'Ir8]>bC^'#N^.$4]=Upl2[cƌ؈X,Sf2e ^}UqA;qqPTT]v1Hc5n=e$c\ NCرP0kj EPH8z#]H, Pap'lCRNR;c)hܙ+콿WPB8S@EEy""(5\ɓ'cҥYh4c_4R ؊ o!]KMpyA,}kQUK#BFFuu5 @{2H0M C:Rv=i @{5R,w- Ǧ(J8\,ק7orI Rt{rӴ!P3Rh0@<زe ~;}ޤx6L:ӦMMpg87"jH)QVVڱhl|a4-$p$҂2 !LS͂ݙ 4`כ#I9p"j^}IKY0MJeeEt۹s7t5Ǐa@pGuu5::H] ]QYY hkk8G[[D?c3c@{{/CMM͗{aDDG! zh_0LEaoؕ[+W YdG*d*Hn Gò,X )t]sSD2x8BO.BhWī']5 D ({M দhzE---hnnFO<D"0aaΝصk&N`0 @Ʊ|Iu$DD  D?ڽP(P(p8p8`0dUy~t/g! tɩ홭oI)k 躳t Rfx=0eWS k5 F„ƞP߷0{@jjju֌.;_ЀH$MpcfjllqM7y7o0{1̙3裏B M1zt >=zQQ^A5$PiPNAR:"_BصP0 w4HMBׂ^񷳪9 )᰻XE^9s{JR.w] {X4a(P(S E>B!s9p^=ϟ+V nѣӟ;.nêU0zh\pرc֭[ &M:/(麎Z?{ヘ:NτiY0La,=눜u]zBvIꓤMkrR 'xU#NH. cMvy<8za@"|]'|2|r\quw֯_Qױb aÆ {PSS+V`ʕ|DDGE4r)5  uc*Q\UN0- LJ'%uUCe?T$'ɘP?M[8+{]NH`;!b~XJA XR H""҇~?pYp)IDATf^9:\wu_̨jzp`3 P.BAx<4SScCJp:,[d ) "q9#d?|(e%K/T!˲`&tM)u֝xoeBDD}*++C4ZÇw_+1%%aiBB$LÝ6sIq rCpBH# ")A&Shw(,"R,GJ2qp4=IL>ҷBDD}RHMMz-mФBYY Ф q ,Xh²U pOE& IOQI$RSrbͲeJ)H)o ( a`']׼JRJgv)JGBb)wdQ,XX+K ӴGoo IVĤI1v JD4<0Q ѣQ=z{q0{fD7T"}iBTTD$Ӭ{x4i0ƍ0 Xa^W^q! " /īq,۬0[ 0vXAc֬Y2e|RJHm ^^{c^W[*//W#iZ|Eax~JJ}cglR#GTs gѨ5۬0͟?_-X@-^X !R @^z~c{QqVmܸPwR{;vʆ/pa`˖-KP^^m0a:,lܸϣ-J9ǢE¦M#FVRRc޽fCE$۬P=xWaÆHYQ/c{Q!`ɳwyǏG(J>e]qYEcc#bNoʔ) qơz8p??l¤aC=/ l2[cƌ6+< .(//Ǽym6oۋ H"flwn[^J)؊ 1’%K`&V\ mVjADQ,]V-mVL<K.ͺmV8"/_Gy۷oߏf̞=>,NDDžn a̜93ߗCXr%,Y<3~0pw(O?-[̘13f>q饗bڴiO~.(WG<6o?-ܶkkkHǶ]b+W=܃W_}mVƎYfsņ k֬ASS۬tuuk_jtttىÇ \II .R|Ghkkc{QA`ɳiӦ{Cw*Ѥ?~YjN]֫#p͆N; eaϞ=l҂f_hԻ=>]v&L6Bo H]~Cmm-8<]u_|16oތNo{cc#oߎ/6o%P .TO=zꩧTwwRJ?e?OjɪNoڵJJnVР. 
ԏ~|/K%PWھ}{6+?ʕ+͛նmԦMԕW^)k ۢE2aEիWg}V+GQӦMS͛7{DZ(@ @WWTMM BSOUO>d/kةWB%PRʔ}woΝJJJTyy'|'BWWVa%fϞN[*lC:KE"iD"jΜ9w]ƱlµxcJ /~ 5c UVVt]WUUUj^8E$c"""""k@(g@(g@(g@(g@(g@(g@(g@(g@(g@(g@(g@(g@(g@(g@(g@(g@(g@(g@(g@(g@(g@(g@(g@(g@(g@(g@(g@(g@(g@(g@(g@(g@(g@(g@(g@(g@(g@(g@(g@(g@(g@(g@(g?Zt:IIENDB`dipy-1.11.0/doc/_static/three_brains_golden_new_small.png000066400000000000000000017604021476546756600235420ustar00rootroot00000000000000PNG  IHDR OiCCPPhotoshop ICC profilexڝSgTS=BKKoR RB&*! J!QEEȠQ, !{kּ> H3Q5 B.@ $pd!s#~<<+"x M0B\t8K@zB@F&S`cbP-`'{[! eDh;VEX0fK9-0IWfH  0Q){`##xFW<+*x<$9E[-qWW.(I+6aa@.y24x6_-"bbϫp@t~,/;m%h^ uf@Wp~<5j>{-]cK'Xto(hw?G%fIq^D$.Tʳ?D*A, `6B$BB dr`)B(Ͱ*`/@4Qhp.U=pa( Aa!ڈbX#!H$ ɈQ"K5H1RT UH=r9\F;2G1Q= C7F dt1r=6Ыhڏ>C03l0.B8, c˱" VcϱwE 6wB aAHXLXNH $4 7 Q'"K&b21XH,#/{C7$C2'ITFnR#,4H#dk9, +ȅ3![ b@qS(RjJ4e2AURݨT5ZBRQ4u9̓IKhhitݕNWGw Ljg(gwLӋT071oUX**| J&*/Tު UUT^S}FU3S ԖUPSSg;goT?~YYLOCQ_ cx,!k u5&|v*=9C3J3WRf?qtN (~))4L1e\kXHQG6EYAJ'\'GgSSݧ M=:.kDwn^Loy}/TmG X $ <5qo</QC]@Caaᄑ.ȽJtq]zۯ6iܟ4)Y3sCQ? 0k߬~OCOg#/c/Wװwa>>r><72Y_7ȷOo_C#dz%gA[z|!?:eAAA!h쐭!ΑiP~aa~ 'W?pX15wCsDDDޛg1O9-J5*>.j<74?.fYXXIlK9.*6nl {/]py.,:@LN8A*%w% yg"/6шC\*NH*Mz쑼5y$3,幄'L Lݛ:v m2=:1qB!Mggfvˬen/kY- BTZ(*geWf͉9+̳ې7ᒶKW-X潬j9(xoʿܔĹdff-[n ڴ VE/(ۻCɾUUMfeI?m]Nmq#׹=TR+Gw- 6 U#pDy  :v{vg/jBFS[b[O>zG499?rCd&ˮ/~јѡ򗓿m|x31^VwwO| (hSЧc3-bKGD pHYs..OatIME ܋B IDATxْH-vfVfVuu,}&_(:~P(!g3w*| S X|!`U( vS ӟvFwT|~^T}VIf>e$WWQ1d.w2t.28 Qzl#sy[ &7x/ibWe{^L5ٿ3\ǡd2?Ef3S\+^:X8Eۗupb*sMu2' FD:\nCdy.2ɼi*ǾVmeg"3p\z=LZۏ!іZfg 3O 3=cw}?dǻj2o2L㺧-dex*sFڠvrOm&뮩meQfڳh,s{< d6$3[3" ]fgnqmtl1ďk:6.sŢ svL{緋)eC'hesniӧ̇w\]m{yg8]dʼMrg?z7) 3K3ѵ[˼ D*%l&4 \) dR..2ӞeƖ2/Df<<76AcC2enj-"sW%3vj`Wi!sEix,/E}:Hv+;מB.{u@t}ˌ-eU׮{Ѧ.mloMbs7e/M}W9vy׾|(8 98x;86xj5JOWE(WJ9֦uT})2ץx|,0x\sm%s\U{cȌy!s٨a]?d2IxS ldJS=˼Jeޔd"s6>RvȼU./FCs0W|_M ٷG_(d~.2oC}n2 c#G2?D8xq<8c/ǦB{6!u]Wn[{k-3=Bq@#hOcԗǐnBUt!2y~ 4˞>dn:̻vvy硫̏TVnޢJh×r7*}js̼Cv\RdjS&Mg2YTwX*2 cdR%1dnhlmStk[ȼuG랽MǦܴ幻\e7Is6_Ȝռt;2Tg{j7eZ>uk 22uV|mݲfR͞Y>R%sAyZ87FY|6kI:VfBf죏k2m"0y*. #ɼBDK9k竍i^;m]e~k2w]G)6mIU_|7vnsWdX.=!|>̌eq~xǹ=>dGgrw/0o/Mfr^*_@UfNNE~?޹}g%)M-<zfJ1Y>t\[{e?돱UӴǥzb5>v'KjKq(Ozzg(on쾆Ϥ~qg|e>O.%;ͣ8Rٱ"D-Ut?4AUMYEX=ur*|ULY: 3<]mWT>dFmdjdYfn1ϻL Uɾeje.&yXnz^2a"3gd/X:MrG;Nmk-C:9yy2ozoQf}KhG2V{,UAg "yLFdF;rXFfFFΩs  i3rp*]ur~vV=I C޴Vݣ)F[fBO9ԑSk>N-/en>dnƺ|(KؿXbmmu.6{u.Q[VH؀waBA u[y>u=ȼ<;SIdӎԵT}d$}RVh?wxԵ\]Cp|cIf:ӦC%6<֞dni-ᡒ[BLOmH0e[U㒚v5$aS꫸1z(z+a=-Qc:̌3qZaB5yyLsu/fS)kS?SO8.33.ҒP'T)1$"'JฅD<מ/K^ь5bTܵv )EUf93 .R%l\m*kƶTɮ}v܃\Ex)e.Gsߧdne.~/y"3m3ٲ7eil0")0xPt (} F̩~@@ccNrN=R}ݘqu%Mb@X(4^Z;Cȧl 1J mcѰqt\A#9/^~J?W/U&۱Q?Q旿B槮^cOX[tL*Yb3޸ޖSGzMiNch0%oxP%1B#W93'o2]^%= kk3jP%1p̋^ۏwn;8xqѿ=!9RmP,ݖQ118WgD" <إ:~zglmShiMW*S+\4n+8{mdƆ7 "3jUf:԰! ȥ5u>d-dnf"9Ǿ5%s27THǦ=n+sL)X2g/r-ۡd*1썃} ‰H2ˤ('$AIYH׺0wX"32M3fiب"/"CXKa\Jh;#32>{1VbZ[yHy FBE& ˸,u,̚el%5HlWM:xJV?k󞪕9ih%儆J{n2;wx {ۻյ>`]I^Kj]eFMĮim"⼽8Ҵ˺E* z(/?=̴CۏSMuX[+U\5AaТC!n = 4ZQ,ﴍo k=R@5,~V%~0]-ݸ${AѱI`YVPIcp$ X vdt=(#7 w}q2"-{U{\d$t(, *6s> ioXRBb5` eBj~kdemfRTmuҶ&Ԯabs{MޅC|xQǗ0f/Adžg0E|6ϲ\k^6kɖUBARAUxr]{_{GyN9>u,wh.+Hl%u$Ӷ.2SKm֙:K 9%ЮsveRMy{">2VX!M9a0%FD1]ygj{6p!AѦ/5\Fn3gTRjg0 f3/n*0RE8'k۫w|PrH+;d8/EC|} #V1C׿[U4B#D[70X'By= 3"ghuY+j˴3'4nER |;hŕFUR P#eƐef$PLhH(V#Ր\2OxHzU&/ \]laq&چvnS|jr-\|2އ]dx2oum\~d27U&]7_G|sCԅ7zHpj%/wWs3)|OvZM6k=+ N{2wXGOeBTI8__%606uI}iQtzex;x!SqP1#8"bq*)1F£X7&,\ 4'9e@-zbXg 5:o|YB![-#T!B6xCBHr~d]_?VV+dC$AE!sf\ ZYe{O^Ff(u܈6]6=>:cvy>cɼ/EygX R˷n y@wYϱN]en|6IueAyO^`r38/QTak]l;m_Âv0N}7J文&M7W"wI q.k =άfhqi)"ثRz9 +?F&\@"!İ"0VTN7}c "qYv!9~R $ чhE^A4x"j~ƪ2H +@ɸMհQX/+7t/BtDg< V~*%e d>=bnqk3}aș\LW hb* ;>.Q krlڶkU;k. T8>mhoWTѝX;m{U<|)5GZM2y2%?UU6}j3z':wڗ?!D}F(Ta'P/ԐxC1$+_ZcH7.i9^!pYl. 
bJٍ-Be^ZRd۽{XII?1[XS;;NdH>a$} &3xw xΰzq@%C$*|^ਏ5c7-duht[PO>em^`FQ3h4q!AD:D+0C(3^wMCBEU͐^.A,˺>sH̒id=v T!HHFzB1+){G@kt9}:r!"M_J%¸? _V!8X+:)+7t.(uڿ|{5NƐ)ujgq w3ЋqJ@Sί!0LVw$Z [56U KtmAY lTS*~ԋ4vFFܒV1}ﲛ[΃UE/]wxַl)vIܮ.\f<#wN_? zA/6?U^!>{:,zاm#27Qv9VO_G4pWa Vƶx`2WݠL1ڥŭ9u-yS6-ds%5}W2G~?9qH{G2j9F9W։$!ıE`(ۮpư<742mSynN<$^)GFfe s5tʴ)V1./˞׭P/93E@N)t^^xEZ d6H9R icH*BPӠj7aFRڷƖHSʾGH5p4,YVpB-y{_A3֢'>_Br? 3Զ,5 a>>EʵR sJgGG,x K]CuwN$3V2t2]"?+HarimG:gcg\ ]@2z-.R  ޟ9{vvAAMk抆MjhD a3t$0Mf4Ȍ cneE(j"0Q)8D+l{<E.TMjI9d[PR7F \x: > k7s{9x D5;28FWk0SfF}BVGI֧~\2F#uYL =kiU" ֹUc^2`p`ݵ~w#ViJ[a {E%q069?պ*P%EL@ 3>!Pe9q"vH+޲ABq=rnɽ-kI:~y"t<.$S߫|C夬HʉJxJEbR/Jҩ}..53IE*Oa4m:{2ܦZC7QaGVVs|.kiR(E0fv.G@ M=$'2W%¹z# Z163S?iT ⭷q_)-/5BYϹ4*[Js5nNδdN!k=e2VaPb؛4;HNeBp(ld  h 6kk2gg.a{<|o3y-!̈p@"u4J"n;>_^i@F!j0 rHQo˥%cE2m8BR\Jk:UA1bϗW u /]]&y$ǵu5w3rofps z'3k%2/Tyֳ*"^3cD;Ny^SfAPSyBH )αS4Cw)>m[6f3:_"X!=Sj3p5_DrP$4^pYFTf.7χz_疪ː!OJVD m$b%.ƀLTå8j6,܎ ʷ+j+\>EZ D?[H06.KguA,Qʗ̛u)BA^KM. ڱm VtiXaѵ62໠MZ9my!QXڽnfOMk]#s``(0[D/,czyf==u={|űD9LkPX="ژuMBL&= Y~1ms^9 z% ظ|.m0nnB Tu?TWŶ<6G~(2X&Go.{zVN{]l4B`L7e 9u")ջWlQWq˄re 5pvz͋5j[a2S  ɽD/Γ /#0>ǣ;,/6ˆ\O+qq(pF87\X;>5p1zf>8TچCkņ\#*)iʊ)Vټk[[Y3jfos^"dYib A|2<2豾-e-s8sh~}tdugvs~MO8d2xNA^BFXRqw)}Wn֎m~YRSWgmJ3NfQXCɿq}JХ6hAU̮ms8Bi_k8\3J/9di jAK׆qM|qqE2|-"/eJ#uE]%574TX*&R&U]6sI,Bu27eyv2 uA}$s4~enVld8f\ %幬.Ī4=2X(̩A0q1xd%4 8]b:Vtm!=p!VMۦys٨v$q̄Ʉ1R~cx劵(6c"(ȧCchD&]G;.q*`h x,~f QqwGZLi2X`z=`:ábbXLvi\mlz=lVL.sa2?A~2څyMo s+c^K #[+|fHi2yhYkgt)"\½mKġ6F,B~FDS@_ܶ7w57ћϲ]Q ( m&H =ύDQ4Z4f3hJ;劶)(w|F)fΕ,C60ETEzL z- %y: D8#_+K?ww˥-J4sǏZVS&EI>f)m{="r*FpCsLG.Eʃq^E*Fle|^[V^=Y]EW:kUɲt)"kQHy1S*-HYк*VRVaP8\C{EcM{1Ȝ z D^*$Rɢ2Fb I? ]?1VHF+s0z<!mjpDF8Zc[4gh yNh\ 5y5t<汶9P8Ά;ѹs}:b}FuRk30!ms97CexZ"Qfx|uO Gal }}LTT?v>@2}G3#H#+-ˁgowRp0M$8}sk!3?B._!Qb)_bNE^{eoEu2 e8A\L4 ,An ^P*P,Y\L#Gm*2kl&ʶ))ágJOLf.pT1[Q*-u+4?Q9o8;~n1pϸ 9geI b$Jz=9wо\2noXa!g6܊ >!m "rYipjB;́ϠQ̧V۴9dk˴Ylܱ=)5^)ˬd;ll^lΉ£h3vvR%\ܽnW T ;89Owv 0WV@i܄{Nsy)S{3g>qQX*eb@(I`.bH)S?hѹv\od6}Rޘ74:ޔ+݅'V [1\->Eڈԅ7{ZjhncHW$c)%֖k9w >dWh4hd@YMjtgZ+3 MsZ4o5je:՘0Nxh ׾>/̦jθxf2'j7((Y-#:]0ܫ]!4DUL86aӾf/Դ s(DJ$8phv>՟),[6 f(Ĩd7U{S^BIް!KfChRiXSyVg>'N]Δr[olḤotsl_fXp\4$k1vĪP^dk0ҩ .%.YۤX(t|ފ(>l-eR`R77JA[.ﻻ@>PźZsxY,uWŨ02-}1&^OF#c͓>x Lw0t |o}>i{-0'YƘ:;} Xq; \@1TX=e8 Y_m>-U7"X伄˙ C"_Cx} scZOe; IDAT roыRN(d9ΖZm7XC␕HDAJycͻmyEܮtIݢ*-3ڼ½"*ҖqV)ָDHk<SڧX3 NNY7z%G BT:cȍY䌯s9 Q3e}Jt=qHbP o? Padӽ[kf n@j5aQ|Muw ek1s|ps0u{_qj0K;Ugp02rՊvZzPAHוE|R# ?!$#պR+7ͱ3mN 8J&%8o`tͭK.*O6vͻC=5^ڒ&7zѿEg}@J [=/+^'aBf+yWɬZ;m붵}%n `_m :q]A2 SUWnl?(Ư-tMn^B( M"j ~ޅy;;9UMXI+K(C@6 VN|3$26)i.ɉ bT<)$1Pa2+`"!/^Xc#N6Aqҧ<"HACʹ)rm _,\}*%C䞁#deYx wmS+XS%׻`[}P N,Ggvp#$nvz=5/~T{vZ{>?2jdc#A:+PhΙRXyE86-Hb8e`*#k5'p57)K 0L|B!M9P %[&}հ4QXDm$:o}ߔCh;Wzi fFT)mɕB3sAlsU'&S7ևY4&q\ ".y֣ek{)py uJF.J= @J]a:mCbU!B6d&q`AܞdQ?^<:K>TʱA}@֩00d#pZ#NU.{Y^w<TEҗ`TdxT`aCy%QȖ^asY}%j(zTISf>U2ڮ} цuTdԻw-M*zv>^DNɒd(<N%y)M9vq Y2ה8Rmǜ0lUYprHk2$kmIS뉁0f89T' E^1X-I{w'<өkqJead)ss(Uƈ圤p9$;|.QH"U癥a81<ծU'/W冁fJƫQ4 ՐحyVl!{Nf!(ORE 3QnPO8P2]2&g3j9h]deE!Z>@t}+h-]ϛJ)R,^ېC\fD/'oՓmFRgSReNԻn֑F 1$+O֎R*r3k ")Ȋ%Y M oZd8nڜQ(R2KAd*=.ƀ{K$ xJH߇Oi*WȲA r,K`3I?ҟ %>Iepv9!ˠ!I2  02@V+i|,$\xL8?| !ٙSi'DQ|mo\ :o6D Vu!:`FI'<~_1=LRwW]Jzb{x`cu ճ"6 ʞ SzOҰ=NջxO*Č XkL/iY-[7/f@'kْvI#b=B*2D ]43nY"yj`-wQ!E+/Rwa7\)Hq&ʰ_#@9s+(o Sk>E ;Yeޚ”$ѧ͸S#ԯ>BEko8(4F2ҽBf!C nõB>+dˊ5j5t}|D!ݘ@4RZpxGn*I-Fhۙi=Wg['5=E"m7`X>91iDIp}o IkSa;j gP kQ^!F^i$蝎ÉS!&J8RHn\". 
FV\$҂B"WV"` CE ]"FT'7Vb } - `J:o xN &YFydb84Qԏ by d)>!ӐBs}m`ٍ|V8BrڌF3Q$'伂cah߾)B [~vVA{4#B J☵0 iZfNzoߊ,10s))a]oc# !=K"9״"ίdl8X6 M%\ɷ:VŲf͐96yPɉ,Y Lw8qZ(_HV,hs(w2˾s0#*7FڙFb|X!:4P8DP|Ƥ{Ƅ8ʁ; oKۿ1TjLt?"d27x'sJ谌2,ojs'fLa];tlE5 E9Wg,jB3SK`o~^߂B65畎kӢ]K~ʻ;A7bȢjXl'Q1{a Av $ڇ2x+h8Qy29cOŃ3zЉ)ejYKBB.+]S6mu%y*'rWwUȼWw~PB&V72PNfw$8Y'=QBbXTϻP؝hR"}Ʒ`gn ÝU Sl?a ֒743,XX]vhUqt`d(7-`-b!s4Ӕjtp/D-gu𨐊j!:Yv5$|ȊsDɔzm80hCHX l)I$J=+g$^Հ \_x?(Ǣ,~ ѽ~-ЩB:9NOM5ܿ)ʒw6L[z=1*$սŻ/Tw(IywwQIi0c OyȜRV\S8lYR;8/^< ц,FY^O侒j kH'}49+Puam[5t[޷X` hORW4czsp^ B\ ռ#Ţ_r04n\bSUB,XaB8(_7 , ~78ḏx'hk.RsJW_bgBe~wQq#W(Ѕ*wdLaR9y^&!aPRΦ(a0k| ,t|!f} `5Ih}TW GAbƈ=y־ Ŋ`1|FE8\=[q^"( ,YAz@돜( Z,I"6#,@ 5NX!g3]Ϭ7e}F.gQSGYyiõ޳Ȟ];4dOH%*( KSE/m>)92ʜ1u"W#Q#%lB-ޤ;cW6IBN6[}YA߹Ǣ }A\}K[?M35Xٟ෠ojz^&Qf, y]g}H}Zǔbu OHJuS ^ښhR"qPyq/W* iz*L-uAr26,tY{jkv!dUe*- X9pySIS_.̶ G/rLSJQb')r7"''q$oJӫ7µ3_Q:}%\Sb& K!1裏] c>7rr5ɘEJcFO[p$LP{G#KX^O"*''r=cbv;SAy"~s5(=Kjc2[R?p|[($ sóO`07^nWag[_'92.5BcdQ{bqav``5Ant WS`DԼ>=>j2l]|Q1ʍဂLI_Km홼ik'h&D7AtG-M%M*,>`Su77xTLP/6.)pW]|DK%QJ6DGG! IxGR#;NAӺUN%AxU*^S9W~SنIaWm. <5\^@V.ml~dOq&B^RkTC[gx)FPLP:eG;_=px/*  |־UCu EeQJʖnT0DDxzmU*ŵ}}-)W?} HۙN0kzlIUkQj)cTi3~{+ h"t*2D1LɼKƻwblf"|{'Y`I Kd"7ps{ǸX \"Â,U2RWC R$.O&bnU8ҩza,g9ٙSSZg sAa9V=?h#1Q^YJBVJB1/zJݎH 3#>a1"Y/ gC/ՀrnE\#Pvxhí6FN^!$YXi:.k3VnZ_Ͻ{\„zm64Pc6:T[RϦ(c_ /lʫqbndVầ|Qw]~P߮Dh[Za^#y7H1t?QxgnZ Q,tH;q;zrvncfL&c L~0> $+Xdո-㗡6 ` IDATx4#ȸԅ/,h2gxǞDn/sqaj(iR3gbW Uk{:O,m,ȆyyXS}*y"$+odj.]EїJیc|5Yc]^J?u1c632/f)>xy).wɹ''r݇rǏ"p/SE.|pAZ9-#"[}@ƨ9IQ$ݝC*s'I0V,`Q0Sqq!YƮ jYH-)D 7frJ`G),8nXA]O2:U̳AD\8IgՀ yJ'{92O+s ~Rs,&s) |b|/U1ּǤ0 ˔sh?_3S}s$k$+sH6GdT9ۛC*6No#UmkŴ[>7*UN#eHTd!g)\7D̒(XbŰӧjTZf RLP,x8m|cm3w9.aZK§͉cHMU19V,ZN?irB:vkk<rz 52D\S-! )9ߨ[%)Bl`Q;.~أH8RrzX=vuW*u,Ru=F?9Pwz+}D8I3X;8O_S]lcr?:򾬔o)2LYsmwb!/b\}"#q(y)V$$5E1$R$2U8ϠKF|#>]7}"} k+ürp~nz5(pJA\\>$#jax B$DUo4a?M2=B /?:}Z)&c@F^z,ٽOT;" ! mgڀ\KK>{Ǘ]`emtS*s7%B1{-]Nҧk.QYr0icq04S--$ka>^Xm J}l8TӼ%-9X?N /z13еBdzRprLVX8x gv7p8(=e]`t=m \7u9 8r\Kz>Ҁ:${ĕ3JT*KB'`Jd=sH%Ruzn )yt']|0+ge݉ޯw%ӝ;78ͯg8 Wדw{r]K8rMV+c[9cw=\4I­V]mUu3 ]Fw>s ;W_$e-ot]r̕J]-G>wss(8A'=ݏH9(ȏq٣ \ u{.4+RLCUmڂ׮dA.\WŔ\fCV^sUn>ՓVw?'sʖ\ޒ \knSиm!sU.1߸uX`N-M+)c\2qɈo.5pUR5> ^[9.o9дu0}rqtznN]@ʚ~ux${םYR`ПAfWιvc;INhIZ֥lȽOK.غe"rEV.тw =̈́c׮Y? -Kw>;7c{аZߋ Z}$ߥ+\;W?]mgUc?{51QNyg &B3 R2MMts5z( dA%s^$I@,1,&mj][EۺnHT[tev~7 $XkQB>^ռ۰A|R_rJnW c"rP &nxvAuL3 8ȝLH@-\ 1\]-tnAG.a#cɈ />$:b]rz*1O\[RjX} 7R9:E謕*e>\b}K_hi\7z_֢Йa7_y}HRq+B66:ݣ#Ǝn H 3BęS_u{m]fٻcMޱ&#[Pd5ޤKڹnrᖥZ״%Z@[z۪-\H}zFHm cW#BtbrOkkЧUk]"8>NtZl-= KʻihK4bu|nmjBBV(TQFRV {e2*3M4Eޚ"ʢnrmvb&fO*I"AIjESBm"d5&;j &*&e+ABiA~-5I,67cjxdqYfjKtInw<n=o6Zەm`(ȶ950+"9jEMu@8:2s8|6 Rl624_0E$6I^SٵI frK! FVKƴ%ɡF<DiF04gnD5v*q^Jj5IJ n(WP  +.ĕyu:VIzMc DoA3ܭue69u+Nz2o҅΃{~>`|o ]N:gBt8 k$ɂፓ5 9㷍+iYf~}WK+i :MXڝloV~#FXZXRHќ,q/ @H8 ݟ'NA,R0SU7Z9|9L!"뺜u\; c&6Vs:/&t2x;BekD! 
r oOw|(m $t`yX:%zojR8eJJAa mIuM~Kx\Y@`V/Gp+4kGRY&U"4!ҁ1z:tyei&2QkETu$n|:[\΃AB%Dp.e( 2iqI%197&f|8t %=cAӿz`I`Av:Di*0=QTG.q($dLǩkBq{ .2`xmD|_KoɟJHz3_^&16 .GG&x3, 5|4m$oqrުğ&|-H:#)zx@n/T۞Y݉Pj Fy\]s^{%6mh~&!\PT3W:5]p hP2ȩKaN],Lo飨 \PkPU ۓ`BYH"/FvY Hvz4f3n'r)l<|#al.Yנ.Ʒ0?$A#kênjѠ\ tㆌ㥗C fS9qdI^%fs`){F,1KRctg!Me|Yq3v/޶R7|~1Ȱ^%vSeƃ0ҸqoRq`D,)=YRKBWd<&du+YB!;K%ɤC }' fٽc 2ɀ$]#U$q¬+`I9G$,A%=4As&SGJ?~6kbg_%v  L0>4]90!cTx]CZk2A L5m"Ξ=cۂVΧqhzQYXG 5ċB:vO9ju&"ߑ[M|0b\dJC%Q@T5 [q??ooFE͌x?Dnj3]rp(%)U\6J65|^6:ZZc8[ڿ24rKjV5jB^Σv\f^.y5XXZshgM:` j IDATJ]x88JF+KrYؼ?zF$ʦ֕5I8$BkiQЀ\af+sdG^Gnuѵ`YSJ4s@]veb vr[w+ro4Q'뽚2 5emx~{=>F`* UZJ K& 9=G4T<~;qz^}r65h:-hCBBǬu΅WX(\+MZ Bmz6:}\-W#;Q)z"zu@tʘ1|*sJ?p7R:A*W2.9=N$AS+IlPvDZA&p5PR$a< &QD%uVTU%eu({ 6unӔE\a5!J!7шZ S Of |X`bQY\Y $*Bt*ёc1j]^?> fܽ+nQphrssf e?̬A өzBN'=֪<+wCf3cnTW+9)p~.!{y <Zfۺ}6&Y2^Ok[,HvSXHb,Q y_1@>oT\^'o0#kYBAjVyZ1 W~ (" OE$1HL; .~ Bèؗ@n5RȋfwmFU[ln#iQWO]W Arpލ*tYj6`YT8^oIbmnda{*'ݨiN>r܆9 D=kOZðv}֗_hEY +=VIBK)svo%SD?p $~M2[HisP5[in<3›C3r jSmqA,SHn{dIWitYQ0[y?7.-pcm;sڥXzeo]ǺyZ:tdbE̴$|k[92:|ƛ29SIq*S=M/ * dk0듰̋%:-]㯰! u"=Uصs5wI)˜W]t=2JOϹ|[F{;_FA4Ϊ+Σ"=߃{F:15-]iqqIM~rMr#[ߑ Cw&E!hL1&Ssk:>㾪KUH&V!,m\63ZӉ[rʞC!p-s!~ @]st?GТVyZ @]רD8)K<,KSmdU#w֪FT~SO!:=ŎF߃mq]qyAQpƌ}\r"dQH2S y f#, 8wR͵ :=͂2Ӎ4xYB1IW$Ix)O3^|Q_B|[-}-ABB`6gg"C^Έa>B7,( 8;3wo9wwE0sj48/froů, v[֫j˥8|%(B1:ᡘ"W&neܑk>= w2u,0nͫ}ov~JX0FݱWYi[މmu:ڽЪ$E˒h` dK UjK0ʰ234f߇t]>׀"sQvVr2 &dJ7ME^#Ň8[R^AL4YkC`qoVr +o\r ;(R?0ݖ0l\rUdGmsj eP*Ѥ;w5T`&WjvڝAdov;؀i$֏ IKRsܮR޴W^Ez}eRLu"_>?<з0qvng ZAu^f7tϭa9#(Jٛs=F鮭s :A,6!|R&VH9@:zML[_z:? \6MӴ#$hJiPJ፺o`߬+d 3-$ :75!A 銺G=J03OO͛8?c3?]",եgl6e(?kL+t[lG^xJOsI>C!sEgtE ^#rn{v;B|}m\+\u]D2YhHv9b4me^}$ɼs2FjKUǢ-%Gz؝eBBt0c^dâ(P0s7fF(8*{9GG(~O1 U%F|EA1o:$(mXT8UUhm6L uK et:T-yB #F~#M=/SIϔ)`՚Lꂥ*|ܠ"1ky|NoDC| M/. סJ"񶖀ܕjȹ:&0?]v&CpvXRJSƱЄMDXd[ĔknβcXs:npZ|kO~riԩ:,zO+Krй^4Ҁ&`P^F;Ɯ^*.ݪe˖&Yk}f{{%vAuV[;חbw^ﻮXR\;Om6Q7a4GohGc.2^oW^wW{=hQ& Ɗ8;9K[8gʓJΒފљk d)#?#?A\_=G?#?BݺcÌ0@D9UI9xHkE^bK: A--uBQ\j*ZH}g8@OI2DZ ,(I DYTf]WB'Ҁ2zO+JOdsUJ!XdX#w!.%E6A*FH#1qb"qMaQUTEu(mЏ"znv3(,)kJvBҺɄ&Y,)y咪"fFTc2S^v s9mW+-0B3KZ-*V ^HǶZpH Y{ ڢ3DwOK-M~6MDŽ7g!0Gh{H# |uN'j`>eIxy]n :F斱~/PD Q2DBH+͗HtnGگ`ӑAC89! l&1F׮{Sъ09Ui^c9FQV iJ}A<&2L/JZ #YB%chaz~PCn(eXܤF}2~t*U"B.mHM,圻]97#o6rNG[}5=vZ( !aglND -^J]JznK TwW!SUP+e@bQm#FAq'RH8 Mf ,#v*RցBK-MZ+ I@$cK17(R;Ti幊$IU&Ey? A$ BW4/QHnl;ƫAwOkµ!IW>J$a2xk]U&Wi$ሤ~>Hb#`yӅcNBI-85Fr$Q3RpE$qM{G$BhKa@9%IR'Jkݏ$ m{UKHcDr'OEBؿG2 =ڧa)@U}׺{d״)O~d' H:vg&yFsn\o$ے$ Y5dۇ Ugiǿs:p.7VYIG.Յkc=d,S 6dXϳ ß%ɛD7}BW*-UQPJ+p)Z 90$nf]O{?q|kO);? ޥ$ep*o|^;`-+y MRW(*Rⵂc8ud牎ᶎy]D;<#jTlkiMekĖX~笲֙h %k h&ILy+kZȇWu?MBQX9ԌG8`檪pE8. Aj|DUOI,9jPnHs"Č4ːZ\!2Г'HS$yδX YsC\H󜍒?/Q!cnwxpz oD#=<lR6^u#og70^:˥ɔ1`R$vM72 oܐwǵ%<׮I\Cnd]; vHw:R)g% \ͧ!X`6KA$]&`o2q~.s وA񛲕l L2Bf.\7n zeN%7<1/FeeM737#+^I3|eP0fI&D[,&{Twt.c&pS)[0(h@;yzAWo*@ӊ:]F4~md$>F,Pq{oRb%#NqaJ UJ fius kpofW\Q)8etap0>~DQd3?G9 %:2xUv= VB!Rsfj ͵KmtQoqJj(hTrE[JV5U%~Dhi7fFEX]vt7IPTE\q#t\ߣjЊcd.Z=ZeVc\"L/ t7^@WeXDV?<_?ԧЎ"0l0.jX74ƳkRpwn%`N TZSܐEi-nfh`L[r!k`|R$ #ёtL-+K,َ#B&6u##!Q$_u|vˡ߯1=<|(i*9=F'.lfҺ}nTT@kj`Ppe a]iZOQMmLq%]I!%XvI[\"AUZDQ6{Ff@U~ZRUKR +ݒol}BIJ ujHVrk>K hԧiKOgrjJ 7,SkZib1 }tk&:p 4MnuáTw4`;p~Co:;?6aS=A]PC --yTmo7pcjgg`okNA&dk K<%C;@pΝV֒;CfODϙBdfwx\soX]B5Į4k}tM+Bv^ ڋMikߗ.=S,cn]qH}=ni+uVٹ$"SLp]3GJAN .5ҍքWv?^8fqJ 1d Ux $`ⷣ"Йx;Jox'*Roķxݔ"]fV 5)csM<@_ ?Cx}7~#U]P܇Th<ܲ8PTWVnQ4BMtASSҺPC1Cҽ}ߠLj\ "%LۣR *MjxmN<Dr*s'Ruxvb"/|jE8*w' ¡Q$JS`Drvu#\N&&kSg E (Jyp=> 9pTEHԌ:N~] q(!m6d Hp1#͛9.. 
T'@| 伄?gxbgId%F~)>pHwB|$ gS^15O #5]3Z,D(da37~D≱\5S-Hn,I G 0)eF]*1Ddlœ/\6UiȪ D)Vs *xmoKF7:ѩb)\2R%|#z4\mGńz+zկh9VG5X-^Ҁ] HZU}pXwWwT*% xlk%F6QMq8N(Mw?ǏϨj1СrY<0q{Z2{s8sMr1z*H Ak@i nv#cݍkׂ ӽ{{#1 #b\G$J:?ďC>kvlg9s~(q,z/TL2F%%j*z)%,Y $&uUkD] R׺=$M#^JU?}߮+ATWHW Ceiz9HϽ,|0=3ڐB(4_gҽSVf;#Qc:Ms7}->m.0ZrVRXW3Yv?wp7MnЩ ] 3a@!9v@\6B}1SĽS:{jZ7ś:rnu\4 CR/]9y|JD;8{d㼗{Ut GҎTZdeS#3 4؞#5fuYDkAB۶AH;%g.fSYc*wP nO gu}jjQSe+OA{iڵU- ]hvN]U]R;W"\A1.߮gS1?)'/?__Ï8weeML/J7: ׂpШj&{uBKrz^E=VSE}MG;aB2ZU@̦ХcйY;ءjWlx,m 5X}zβȽKsJ#)MD1̨T|Atʻ%@aP?:P_fZWڶ  ́fFENiS;q!Vsnz&e ~MJJBYTngx  8~Ue;rD)aPL\ 8u~@&Z= & @r'j"N{&sڝ*;.>':J^pyjP>ZmOȕ--4Z/]V>ECdv)C@m9ȵk^[$hFn7^ʪaO=o*9$G:[W6eWB2p[%ƥT%Œ Dws6ϑ_C7D`7>6fIhgnᙻQA#g7FϺ{b``Oߙ .9K78`rIrypєrgE?Oۿ/W^}e>q۷6"t5D]K8m 0p]&Tm6i([7 l4 W$sQa>IdKqZ-9'3 4b͸)0r<1"N5!P}=>zu-H.5m tjMz G S[W@<}qkg󑮬+Urd<+\ױ]z[AsVyg_'r8y bsg.C|]U"ATٛCn+ {Y+"{P챾-(\;y -}[N~ }v2] S@Ne#\w CK^=to3|jqU] P0 뎵{e>ҰE^E]H"ύ&} k%#t[wݩV. 86`4K\Ҡ#knBJ4V 'J^:~L×_n n!gP7迹nzcTKNCCܻ皇OtiM1DUkju#XaRZY^>qt`ЋS\V{ZFkێ6fњѕD5Y8fDRcke,E!BfaawNt֣GHSoK_mlwAs0X'iM{d\?WmUԑM)Qrνu,He:B "׼@0J"]ٕno8uGs{`wױA*7cWJVxضcCNk ͩU0e#1{Dnz#R{>vEqln'x+7Ǝ8v0Gu":̀N8@FQꅪ;<;['n8%X7)Xoŵp^}Pj7xF4-t+Zs֯, L:% :S5!@̵h3ij5;ױtEMKefp(VVyvA9bˌMVR% Z1yUӔ((kUtJc}z=$uvYUzTz1ܡGpI~G/"s|軿>.~@%8F7Ha6cXmۭ_ʾ>$7uP>H9̡uSHnc8 >&E!<B-5I3d$xOSxK᝹xF}ݮtRM&=5 %n53ϩ鞘˺̕ai*(K$,7@{9 @΄]薄x dvzx]!F@+UVw}[Bb`׺sw%A]W&)IhMM ̌{yzJ% J7JlʛT-+ѷY$o@(UHK릞? }vG?ߏkptto毡WKg_WؙcEs82 IܜԀ_AX/S[ޞ{N޾xeC*X$ϲh'1ƉPN@G9!!OX { ݲΝ%=iTP@$31)K\*&ΫwJAZ$Q]G/#QPTuLu7VBE)z/+!΁u0Nu7z|KKG^"9 I"HBLD] ]#D[RxF9k+:v*\'*R­7(zDyQp'oݿ(9_ #ݾ K/gQ:;8x0#sJ hfCQ:ў1K IB0}Rg੧g&\..$8e^Ϭv\Lpq9Tk69۠i#&w-8v-\t5;: k9U7w'kV29\K Q:'Q=pFL5a8MɰO8+_=/|c)3z?O9Lz$dma&\8 Vwnp.bMB]?b,d!AJi&W/w /cH$аsV#k]Q|#b| _ !?, t*2hmȼ%@ss Z=ICסe U2tG.V 2-unnIPl&Y}G|)hpĮ ץ%צ7&u kB*gF^L=/r%%ˍv6 #"f!13U_ajw ySВ=@nc?EQ`ew:8fFD}Px * $Qđm*:*8OݛZ_ub[m^UuwRv{3 0&ac0C& 3B(J$HA3R00`<$48 cvv]]]^[O8߷{vL"z}}9|6/ A{y׮?kַ|>q.;wcTHފ"Lk6P~}(4/]"?PiFl"8[C:PRe`4$ۜQ̋$Wϓ 4n5 D3vL,tPIt¨PDX\e9fH>-u iH j`<O `L&#`B1%!uۀ!f=_ W@OL 6(>U q dKwQD>>^$yuLpG&D=Q]h &3RRZ:_t2,LI" T9[vZ0Su{nN~ym[yݾ3? 
x}@i]ؿ́noj:w4 c@:o\le wjM(5Kg>,rcBN"QU4FԵDfS]X]T[}mNo#hڶтz8S)::U;m}3?DS+  ɨnL,T J)+-\n9jk1ռjmD*Evqkц=P#n,>jS/fáC/upe維X)R߈6k65 MTKz)5KD>cS~Z@f.=rz`v+H݇;=ׁxk3OyG p.m IDATع"hؑ@)Keέf uMhȧ<6kffw+u6ݮ20Mbϙ)YbKfe,_&l?EmP#4g=[ XdՂf;&F;sGtYFk-sƊcZH8{s~ؐ}4˛ Ovk:[P`6O&5,Mk+NkK*٬ k-F,B#)y JAA!iHQ^,2,fChi" EG(V(5 p f `  y>Pz@(AF1i:ݥ*.P:U20T:㟅3ʆyke9㧖> }ywDX (].:q-8 ;̱tFhSo[8hߏq|E\ ~|/~DjWE%inV !=A9 ƅs pDvR؎̜I)Rj&gV$j^mjQxKs Cbgp6w mҬݦ&$7M+y\[+Nrx4c3y{$ aC ̺ܩl[)ڷT̡YzDZQ4nýz*Jh$r (Mh5_2d:ߣ"28fcJ5{4K\[3gڈhusZtT!DAF(R*hP}~] yKJox՟Ɠ4řrhB~F>F*)u|_ww>:NtM^K=FimQRR'uޮ١E) 6+E,rXםôxPh oX7#Z(3UџTB)XzSա&y }T6VD MB-$ZZѬ R*WPzkYapAlf۽iiPU{Kʢ`)f/ҠpZ!.K4K> C>ZP$I5EEO???;qn&x[XhG,C#^03GA"c0ϙ^yC`>g030XL qF_"Sc5AEanT3M5SB%\_&BfMͦll&4-+t2RXZ0Q$E;BUjdA Ԭ$T%ܾ-a(ZMr)A|a2a-I)UA .k\|Y~;&vbpC RjZn,u(k!0aeIs2$އfu~Lc @1Xh24R4I"V+/„-=`y@~3BД8e)KS;Fm n!1R_8L24%P\JirMki6BϱZ]6X4CE1w{E/tTbgmKZT+ccfihD(Bn# CxDPܭ-Zll:Kaod\Hk^~)SѨV ; 0NOt$#CqR:=w a餜1NەYBX&)ϥ ''ٌ.HwH+Tp.e2Ik }iV+q>q4_֍XQHnA#%Gb;KhPHu"ֶﲔ4 I 1$EY榗%,it}rsZ'fbHO(FyJ5Ä $CT;/gsB'q3WZgo~">ؓFX1L#X#y@6P,MF$ʽeED:$:&QjqyCIYz~,0uF^1iʱY,nj=!欴qb"z5t}!eZ"il`-T7vmD@j15]PZLT цƜbC!86T&9v,$mHUP}~-  *_'h+!(*(V3 {_Esbx:g \>|,._Kb}]+;mNg:?G{o%q!KoO3ӵ:)+:!-9WFzU6co(WӍ2;FJOؿK!Z(BsRցu+E PD>ubuȦX׵tTf(_6f4tdUx`zN!q+dٿ\A>0\2>?2>|)O77EmmQZq$4aAdIcR6W+ iYeh|NO&ޘ6Pm0_|wnN]h x6cf #>:B62M.Bh4.Ayʲ]i)67Epv&MK9tq ^/BA|הL d=IB\l[- Ґs]wIvQ\Dߌ[D` 3.Nb<_'eohe""5jyb:: Iׂ/5eiYb4YcapdN_*]|F?,/ k׬8mܔ}4p -ˌfk]̀<#TjUH=~^*IAIn`9.-__GYz h,ڋhVsu"y 0 z%@`,Mah5)ub^O}Z:-CEC)v2Ұ5-&7ZŖC 5[ [hlBKMQH?'.>7=A,OhfWx4:7s Z0EZYYaߦ1C(]5bڛ4K-MK--esl~@- D6>'5A kCQ[Z5 a3:ǐ_o`1iTkTs=nf62A|ȵҚ&.\sjژ\Qmoޚ"ruu]jCX6Vw69u)=3G2ϭ Z'tUHOw=;J]׽,#\ڤj.qf/+lj*uZJr 7"EF^G_%ˋk1&8OJNKs`2pn _g2R5bi-_]kh-SXпZ~ TͿN)_jcJ򣎮Xcm*mΫ4%C)rYlQLl}OZ jhEpҼFꔵfںq#=(T֘5M$Jnr\Ur\(Ϥ w~XƄ4(b;MM 2 j) ٯ uheUq2Tn0S~`y.6\bu2Ximi;zI·o댐$1Lo#YJ3#:R9VOܜmv,Մ@i⁠<Z[nhiP1%v'#` ғ ?s h"a7`AÏ|v6t{yn0*Iml4j<.[Bt5hQrpYR-&.i'@EbWj?:-Y2Tm"d=׿>z&]_c:# |?BYl~ w~Q4uMI _ f6GfNCq-"dDq=G}wƋ/u۶dEdB??ӿ9qfBCZk1zO}@>>vl궙PܯAW] Nunuuly|)pޤ5\ˤBǿ-sǟP[[pNvV6 VIf,KWKTp5ȷO'Y@8 <8vwvA.)N|u:N1VDC%wv( \*]Ȉ,W lO4}t}|vT1m|W}]%Mͯ_k" g96LN7bsЕ+FsV+y! p)&tJ~ omI!oSu˚0zٙ/(??9Dvk K>XSneS0[K2)ww }vu/XgO uP^+_߾-<ʲeߟyF>h۲/2ͭk}JP,Ot&62% )t䙚rC9fE!f~F_J:"e]l(ߗϛ" yBͦ|GŒU@T4ݖ<&g7!@^MFr&@g@P8 <#DfS`#W\BBE*H#yнPNAs໾G~O~?Al\EŹT=5؄?x| U5O{<|X44+g#Azj9 Q"dTr5o-c8Omv" :wScn!zڮLX]UIL5ut!8#Cڀ=͢r7*Sx{صr6Pli#`nTX򡟹q*Al;:Ԗ-==jrGdn-/ H8J8naQi.ݗζ2;C͜s~!uTcO@Sn:F3m0?]k0eSz+i0@0,A0^i}e?{ 5+2ڊPԬ3i0 @w8=E/Kav;qx5:HM6sgjv5 1=;>u>\s"d9Pj2ڒ}]u(q"窳0 P }\bmH+/ӲcuR&#W=to!7~؆XGO:˱RBu7ڸІ)2Y̔I @D~%UE<ǢU"2Mq}t6 zوFGg<@;.\@{<4˼g:V79 t >h9@󢠽fTӔXU4XUh!Ve8^,bAW&퓇l5F>c/W(/b$%sJpZd{G-A& h-/_·٤tJyS,NOHUenF8CBO8>- _pp PpxH {{҈4B{j6 o;.tVs%en( نBI=4,;V#$ŢξuKЌCyu^ĕc8<,Hҳ}x!Y|\MzTh0<Βh6-Nt"&g4mfaYMbn6u|.NBU:yY&H$,8"4Z@@:#T 4#"FE"4#D-`9%DKմ^"!t"pJEB,a3`9.]v]d}I yD( BqHo|Q_(~_}~zfw%%В@ yq[_/d2ut{G IDATr Z(n*e&ǨkjhwJɲ˸W Q&Յ0ikXk hs/UL9:8ǻM xߋo􏾏~_wW/STWmhK:g_cԉrIܴt_sМ3/-@MS4dzܺfhD jkgFhaІ[1zi=TzLlB욮ht#=zd˲5;[u^3K-mG?6G-O,hǮV\'/-yQjjYT_*X JjTj'ڑ.'eGgqLfkM41 !ZrnoATx x92A6btF,ܯE7}MWL%)_@pTkOM<%v5vx  Oua;PP]P_cX0WZwd.D+@5+tK?"EM̾>t]D4PMG*%] ֙4qLw:aG}̄Z Xs"]%9cKigL$ŒBEmg rf 4,f@iVD@-$>.5ѣ%&Eׯ? v)=>Q|\U4CprqEeQU`#ьcUEZX,bA=D!n1Jbd>^hOvq:lz_ctsaynUsTz=G)SOfdifxGl|$'bjai:w")v])`Ls)MZSrE\Rx;gOKX%=MmђĊIWܙfM`"[@-hwj%#@ޖ@$@!Ў@(Z{#? 
FdelLAy߷kl}%'>ft><~sWl?~ѦR;ђ ;uv,UHY9`N67ek/9ʒ=| xʒ3ʼn"Hkӟ0=_绒ʔ;/Oy?Zg|Vrx[kvNLx|l{ןs:?|x~oǾ>IC/;{7݀^v!_R<4$R1Z7HTkFzQ'fη}X]z͒8;.”5%qrlQڄ€2hGa$V.Y,mKF-8J ҅{6ȁ"PdJ H[4v Vd G bXQ1rfs1?ć?-!Hap*,e[TsLF墯;^* ӭ,Y V;XJr,]I+¹.S1`ە(givW!\/d) G'W3?gUU# rR^BUz[UNd u%YA3r\Qc JZmdOV KJ]:&!I 翝q *R{ U $k,'COj^bON\+(]N_R!Yq"xZqΖ2٤g#9BV["{j -QT:ͦwJiۅ;7=:У[׼U+wx~UEOB=rs'g~vъ̍^NLYȽ߂G&G!$3xS荏#?{v i6|[r *ύNKtշՎä6<;^"3-s8aQ4ϊk==kv-9m;׷u%@֊da6ގ,eo< `iAƢ^GjU;2LS䴘Lшٔ@c95/ՠѼ ͈'IDt"Mid"LEc-gUslq,j*t2bu;^O`G!u Hi"m-S)CɳyAY&xEMbm͠Fj-+pp];]dE *QAig[S0˹J.}crHɿ?ޤevw_n f4vo>/\Y*Q}%I,X"۷e ׯZ-[V|4 ~)B)A:2nZͮL 'apCOCO55~#\2))ۖ=|z&KiQ]|A8Vv^`engepCŀͭl^Ã}8>1xO҃g*0WlՄY`Ac lmX|_P͠nfUn lLĒzIuB GFn9 ЍiRUM\iAʼ)B40$ QC+:JԀ#JO(EP )~]!](ql␢|01hU(Se|jIaCMuj ^$RZ)UZǧ0^N7 1D5!97PbS(v :#Hx+U/+.3L-uy^(Y Oa2S |ӈͼEnܾ^!& @Z7}˗g_{\t'(9,hNHR2jqXC4V]-K}xP PU]Ȓ{;$X-]V鳊)YVI`pǃ)qkP9%DĜS!pCn@[c;"y'w ]pQ .v}9?}I>tvBgB^)qtI/dajҥЃމs'l6tK(Et oR=,I<Gzփކ,ka؊)N+š'e}WC-]v.VwTVcv|ڲpkWo8iR1dɩUޱ}yduwՏ7o_˫M5]j~+[dqKK2!9eePT/RدPvb~s?{~CG?=07z87QV 3f2dǧ>O/w?ܼe8*x[W7Sŋvp`,ݳ"KϾ ֹCQxlǷ|x~տ#`.H.V AEakUPبPbzc6~{%3 *5%qQ7!"$IlF^ֈcHcc*v)b>Gea@d@NHAYyӘL`8XQLS^j}p(Xf@f3Ԫi>;YEKեUKa\| ],BnZv]}Ѱcv~mX]%ҾU/\0tQ{oώjsZ7 ,W>rH\8^;'49Y՘O>sIssubvw</ynql$榍 Ue{mycO.nxa ]p\w .^7Oݿi5.#p*=`D_ lGcEݒ \YuI|ZwoMJEoZڮЇF=- '~ΚIUlXDf"^K2UuPy WCQ%słBJP%DѰ*SLuX9=U:׎P=:BTk<U(2?$e| =pּ(*V 3Lo9C%?CYfu߷ 7xsjW9emͮшA3c*Iϒ458RڵAPt![[3P)'J 7n矂9q=GJ}#يReٴ`u\d) l>a\Ccۨו虹_iI1ܭXz R`szjpĒ^ϒ }!%+m FVrސOlX[_7/x7q÷O};jtn\ٻ <7}&hѹ[7^(-&JWixnʮw;дJ/A/QnB>W#{M @5I"Lb0GLu"ceMyb"쌍k ʼs8هFf2/*Y {. r:JgÆ3n('Uњ=^ΐJԆyGY ( (W ١L`f9Z`iM&Vs+0Kssb/F5@Rʁ=./Ru=R$ @Ϭ/f}6mC4WejﺦH cQ!@XOkFNJ 0HU܆&|##l Cz{Ȳo,_PY89-'$I[݋  q*58P^]>r {KMѨTy;A nPb\ ڂWƉ?8_o_n.ñZn |zMBAҕ:d*rQEҷSY&K㵩.ե&>%LIkA|tY"^?edP w&u@@reR1Ǹ'7*y%<":WGSwV*"_8u3ǁwiҊ# ɆO-a?in|:+3 L SxgyÅ:t׬޽l2>Wep߄VA܆OB}c?hiw\|->MWJ@9Z?>d^0DCx|œ66yxrz5& #kWyǩW"u"DZ"dl>:H(K+?eE>7W,ˈMJV˾(,4) áA_zPba(",ѢDTEE$"deI]DzL0Rj>WZMh6 ncɑ)HjJoi4l^:u'L .#Y&\nɕ%YF8/l.gRij8 dc ,  u {ZMf3 p0^Ľ(/E5>}&oү#ᑞ疠yWWMv6DGڵZb1Bi׼Xݶk5K" Bʻ\%ͦ3]$llm ;;z8&?/LF(_Y̯m://[ \[8;?u_ȯZo>sN `~-cJ"aԔYL;bk D sA;UgX ۅZ@ku1Fs,p|H j+}X^Sʾq=n̡⾒ 39B J"7xTC6V}3;)! L#E` sF.fK L!=7ݺ ;ׄ\ciC߲-VhUt γ{ٶی;ƭfُcE֢HfeLDi"U,ZXkyUU'4SEX(ב^ ܂@Ѝc2^DIikl6ht:&'P4Jǵ ^"&#jdiXYY KHaښɍ0x%ܾnZM]-8_zD(z^mҹ֖%\1)7K' y͏oa^ϽƁ|'c+ 9Id 6oV|L1JOv*r$I@WÒS'gPKAE^8 ZxwU{·\V\_ݓ: Y|. ee;brx{;L,`9Xס4Y\6gǻ+ YU㿈L[ȏy~g} \bUdX7p4%!mn %RthjWj=nx(+IyR49kV^!RE"4,n.rHHSh#$|D4Em6!INOa4̈́V~ fcݵxlrF Sey.fQ EAǒ5m4h۲2w_,e' EaR8!dPON ?*65-1֦r,(;vl2 WV-HNKx] F$/ ;_'&@d:t6nr~o{tkecxvg.^A|,9 `p*gS2iV˒zyBөn``L:9cɞ,]ćC^T}zΣe4ΐ'_j;WyU3y+z~|KM|XO~AR5ٽu(sa3ޑ0GЦ ؤp} yWXq6!o}|P_5A aD܋R\ϕv&j`[1"a%6B %, HdF&N2G%rOȧaU(gB^j*uzmh]uƥ0 k am.QBA}#!`G)a< uwv!S 1Ja6*H_./ ]aT}K^J  YCȚ0)ո 0υ"]֤kBq.4wVA"d (b$T(FWe¢achB0#t  tb={A.]y'tx g,-?K2cMHV+4=pȲJ lfD?$C]Vv=eI+Ax+_cϸC׏ hO{^]G*ޮ ,sVu1&bPMư%.<8X>+nTFޱ(KF~ M**(A/ A^p}+JW$ػ)}yF#ooh<ܧ֋#kWZ ;@k4G~vKnZ dI?kCo/}̍[f4.qztA3^~3M*_  ZgE b:Y8lleY{ѻ=qL}N=1|ѺKrkarݔp8`Y,]K_xYX=jrm>Ib )CfXjCKaIbN {|Ɔ M!Q#t|8(`` ΃|܌f_IDi= ͔f|F^mA9N.Mo~>/e>˟oHqS ws]Xdµ;<BVv]Á})E9앦ծ r-`S3i0MmchQ?",wa|}U. 񏿉z? []" T n5mPΏy~ h @EQE"*әZ/8C\p s|֨s8鼋[$i+]?%\'$LSLSI=@zPb *2YBn,;B0=qUZ󠔥욯YHu|> > xhK5*Pz)sxSUeoU~n쉟r fjk{b]0!h9|d{`l߃G9.e0zsV8] >uٍtI~+D8]djbMT$$B(u28DAkǃJ~9Fõ9YY8D)QqB-t}ô8A?~3?{%{^G>‡{Nx[О>Fq-f5vǑl˚_/k6op+}!_į>FJnWVxiB "t>۞sXعs'wĈܪXAhETJiK.uxq)-Tu,L5өAQu *9soJD LgE@YYF\Dq$ $$. 
IqkABU=l`n\ fLϪݦ^gɪ]Pyj4,Ȝ|a20~@|g\^Zs X4 IDATW:LzxuL&{'(O<+ \/BLRD `LwF6cxz<0T0$q%Ln~xl_rzklgܘ|/66ԉv,ּc6S2N˥K8>Vm[&Zvo޺hP]|{>^Q};'~78x<_R9 f[kJTm-`xD}W ΅Φ <A'kJP]h!(龹vMR*($(jJQY8`t|*Ԡ׆LOGKi*dseU3-A]1.&;>VFjOB2_LrǹRtMJB!Jsa5FH=3ޛud@S)Q.&[68ԂiNwh(:QiԒ*.پbЦ:TIGn_5!)m+56I)yŹ"5%+,Ǯطn$47y??.q Q AxZX $yE10o|MRI*Fn([t$䞔< J>uwM?wb0fM++e ӊM~Fz>U9<PApê{U 7k5Vq9Rѯ0Gϴ+n aRU.; .uqVxWuɳ_ӚÊ#.]voԫus E%@]T$z!?ffcEi;H~2V=qc}kto9zA)W ٞsS2IuYSs=Y2ngNDbxNm~N88>iWV.{zC+.֐D¤/3[no}+?oCwY zn/_EpWi Cz#oF_6$.}m||̗X2qk;]q)3޴(OW:LA6pvR5^u},*MU23'?5U1"KRV_j"TC=(o! @ݖ@<, >WONU w g>'. &2hsj59nGDiJ$dA(,L;`>?HU\@6ԃ*1Z{zj LLw<\bC`K8"~ zi F%FpYac;M`m #?$Mhn@4J/4Ō! 6_~ZF[^-`<.P ل9dG bh݆axfնЮ༙0BgV}&xElc3ãc:eɒ\](46"/w%wv1XkE)D',.Z[MZdidEe -Ϭ`;:jO Z^\Eۈ G^0i|&,R \7+iZZQL GÀ*aЧRlt!,D "~"*W3PqĚ;T$ȶ+HA{lU!{>;+-5:Ye #KR~}+`G\8ź0|NYs]T4?3B7.l@O*vǙSB%.#Z uYV՛nzRiΒwr AqW*x*ǞL*,ݑ'T/p J?~\ 7]Wq;/5|] ۑ,]>GA7+8Ta1+\dG~mY&]a?ֆ ߮'0W}{MAAm E_ FT󒊣tcX"ZifhTħ솜%>ed>|);X3D7?\rZ7u) w}ӷMPF~1xv q9Æ/r-y t/|ڶMaŧ"4W8$ ׫mv[+A=J#<+.r^[R8^*,YbE|!B%z0㌴=p_` wqL$B|zJ6[vm I$A h4KSQD"HM&+.3rst.5$Y.ͦveiKV˂e$pdfs.sנHiF;_E S&; n@7 w:x@7v$utv:vM]CYuٺzUatIgHS8^rر<J} iӼ(N~jM8á%+jf]eWyp˱A\(YH` ,ptu;: Mn 8   K%*jP 97y8a=;*Ix{9{s8B \&Cݹ nA*.\hTrӯ7C|Y:iExf׀'p*Kjr-`y&rX/fව'U`[Equ&&p Iv2Ѹ+VL<^+(πNCcH2p$QM/Y (6K@-(x_NNB'0 c'@/o .D:빮`ّs]V&xe{7'%aZ wԑMKާu M9;}]%9^OQc,i;8(y}/;&KQ;,H/Q-t#OrzJb5GG xE<8_6_Cӟk\j'Dxɿ^xA!. CEJ_ǯiu=x+>!HڳLy<u ,\ BG|oyl2Gy=OP|K{0L$Β3G^M {=9%v^j~:z]T{Aen z>,l \UsesGzl,pz%P`U*i,Xs3Y-P%  :!kI2p0./EYc%p,&lL,QBT2 B D k{M{rʳ qIRH"RlogE ^;DVPu(C3pD H''R# J%q" @)!J4^J\ w#"1툱_Fż\J:3M`QP {uQJ ZRȕi*Q Pϙ,cR710iT Ш~p7b<_ ܨa;Lw [}y\*G3(3Yr{0ZosMZ܇z G ,qSa.9^pB4@r1xT\^g.j: Aֆd$UsN2Sc/s>rșlT9V]iv⌙SY?t49+xXPn*; 0p;tv11̇qcz~v{zU}v~IN2nx_]~tC1 4/{^t&$NZ⥻850]-JMHmCx{=p.:%yE,29 &,7Nd;t֜]Wntx(u]9n^!I߄]Zz/~e_ѥFx4TrX?;!TChRDv{eY$)[nlԬsswCUǪ>OSf2)Bf|kb}(!`{D9JڪЄ Ka^&d"J:kkB6LL&1x Chv< PT,XLTADx ?M9<<%19N򽍆ȼ\P,c% Q[֮* |z*(%]8c9w K%3L2\i<JbU<8Y⻯\[V\} N=;# "zP3<Xշox{ܺ%vIϋD.3a`BXVLGGuK IBΖ*2Kz= X$ld@V#db<Yw L2f*0^'2zt:-LD`# ?g'@(f.r3ş31EP ,(XF0L_@a ;L8KeM!03ɜ{@$ (ĄX]BЗ'O@XLv' Gf Tc"EB%Afk%E,Ȏ?4e}ٟ7 綥u;[/g _0Jv:E Wˡ8snB¼Ƙ!9c,΀eU jRVd |BVWap1)!.22tBH2 I±XXpY?p&q" JaC ΀(d1" xJRZ^aAT=xN@2RO<`B|t $>À>k7 _qsq O>ixsSӟ~H{3U(Jy6@<3λ,]U?2l{Id*t:"66U8JHcߙf \RI9{60ܔDn_aMrG$)%Eʹ[Ky#+\|s/+ +RҩpLITtJjO4ٲ8ٺ[Bu ~E);JarYN?ZzZg"b\6?S(SI:* @O'y~Ö^sS?e9y@@ dj1!`Ysy^[%0z+TݯoV&tLc%Z_TلESZL@$ޥma=VZWI)t*!'H96U9de1'y !Ct5iM="VxΏ @}̟>;+sS\J>rRk䁇x1F)𧿐ΩLxR=s뮁{7pI2n? GA Զpu;X`'Lo~-t: k1gƶ~verY:pGb|3Jּ!ڠGYF+6܁Vr:W,T[,UVPX1$*db"}xi:3qb`x(A>x`zKmp59.͕47t3F#Wh0A_ȴ \a=zox 49x>R'# %\׮yIp/s#vpR\J'IHggyBtbdrFgvvW#./ci8[/ժJ2e#2Q$cwJyŢW-2if|֝2*51dI+>Z-yBpB@N1XR(L uo.vZiy,}@P#`UT_MHHTL՗ .Й ap RKk0mmPxK10^ 8qM@z@ \1pÓ闥rߪJ'>UZ@,Ft@2¾@u!{cUot KE Y(CIXIv KٟȌ$U);:Z"M= A@29-x X)-;6`I'b)}:9II }@\>R07{yW`g|E>?Zakк YlSymUe`#[(2+CVd>eى }鈘gC!{`lh a9?o ]4e ?r_'z}? ]1x ˫ xG-yKt9o/"LcƧ~{\oTցo}REo͢Nm6mZ@O 2fD>"Ш&2fԘ1bƩ{N8to QA7<31BU0WZrfg~DL+<ˆ2Y({ &Qx up\eEyHepeIiiVXH`\D\@h$n4˿ϟ\1Iܵ[:7쓉''r>Q` vSh{_;2.{/ ~sj${Mr钌իo'4?upzÄb59LPKC,wC|_a%5k|^/W2!F#?wv ]Iy IDATJ gc4HRhp$@Q ߸&Gǚ.DDٔ4rEBL\=]Q7Z,mDP(h.|`|,@~&$k4.5цKHmp H? =FHZRIoUux[76d,Mzo i8K1)tQv I#`Cz(׵'`v*IIJt[@w]I \WGh/E/0&諚, c!~#t-r-E7T/^ X9r4B-xZWoD#j +]H31iZI }4tSȗ=# V*/O(տ;N+!5\p`w7;([1U$>>qvY V4u`9.a8sG@w^B[as)8eR_k9tV3'(hbeS] 2AvR4tV͝l380PՁ1C7Rρ!8k9p9r< =8^GA 4j`ףlP " s3{9x:U<׬Ѩq;QD6^ЄAG?㭱ɦA* ؜ jkO?LOQX:㾭;k5=cVY=Lx|% 냱Mu*bNT `4zZo:>{D]R{yPmoi<@;LF3)cO LǺ±8sO{(Qi೿ *FrR>Ru0#YӴSjP&$w@W ^=筆3hKB÷:Ɉ5ΐ\[<M#7m{}SgXЪ+aYx7 ~m$;>^}fs̳ YtI`:Ǩ)&&|lte I}DJyeۂW]0Kq>QNgEqy6Q˭TC+S(%hUɊ#Q@%|1s9ok߁Oߺ ϽJg"ZEJ$+NW*^(BaoQ.q&9x'X\ D͍m̳L>gp(" K^dc%u-I:,o5|_BxեHa(fq" ]"!Hx1`:n67)' q 490YkdY! 
Pޑ r.%Oga&*A<Wf!!zM2yBR&xπb@1{RώiA#cRΚ@}ҏK"@ oIg 9 fwZ NڌfE(NLnUwfC8pT(J&HL&@:lH@PJbfdE `x^,M m/]j)! 1ɄQ$G,I3ꨜ(܀2M{H $&!U'2&%M;!Q,;Ë'I}_$w8w,eo?w"~"]UeǩF[/|Cs M&9A]φff%; tX)B\X`5%[`1tv"RVC2 G~5=Pw59)Dw=pNP}X82kPaaEuL&O;P!8[уSkRRK":眒T&mWGh2$tK>zUZq"rTjgNY/ZGeYM̺5oRSIC[Ѧ͸\sQ#DMTz35M RE̱󈲺U1Z!|] F~01?d~n'Euή2:>&ɥy&|ZC3# diݛXP$a_"pDV˃X ؽM>2qVYPP ϚMDD"c0PeU/(+JJ\( 2 1HW,T*\܄*qr"AlJiUqfYt"A+H,8_안f0WDZ-Ip ݚ]`:UBSK ݔ@a?ck^ZF60r.2Z!am(t~J~F:p6u9FS`r T护yQŦHsKju ݏ;aa, `VJ!c {@(+xP]$T| O׫ oCpCkm p>DA^815d;9氫xz.^ם1q*ӳ܄ϕMU=H 4n;ݲ츂/Ñ#z :9/d5X/lr2 #H AjÑ:u\NWdfVNKW7&uL:E '$CΣo2 98881+&R><#~/'!Ywkw,3ϔrYyzz(07&Z@~H58%rԟ8UX$"q3GzUօNۀ5I3o ^U9c3cM/o;&5tM.V Ujcߨɋ`tCNV!6樤b(륭km>ׁg~=JS* #mVV9@)Gx!zUG.Am`/~?_>mRgM(M#M֦m)$QEW~-{xl4nMIY@CH f|MJHEq0C G Z.c4#MOb<S.`R&2 akaP:%ԫX0TU"GP2f n5E HbUa/Q eq>K_By,;8zkSBKEs5TU>CnVQB@Q  kp 3Fe HR40=Q| XD`j@߁2 F0$Ϙ#T>ooT{,szO?'u#1"H1[e@0Fĥ HS!#]3? \4)lzsx%j֗M%Zr=*wTJZPG+]SI'#5B;P1 CEz]GaǸ)8c%l QbX;Q6K;~ `) t5ΜRhE1$Z =%$!`5dKkjDrc f3h:z^cM»l`=LWsQS4_jkr(nsCy8[5USĹSYM{"sd179%z<8{XISxd]|A}T:z/ oa Wd8uo@C}dc^6l#HDv/8Ԥ„|~{30X=>OU_v/E?#/^xK.%P;_֗AsQY;m?PUd ār9RyLy%%;az#QQ;(E4INjP'R`k:k27yxKDƈgy/ fVjy݂njH$ %@DD*bj+(@=$4tNFExeQ*1$i:> UEeө\ޞ JA:{:ID'^0OK$i%!VYբruKe`6#1+WD>cN}c$`d2H|00.]f4n1TX#3KBw(U3ԗ&@ƨ`|i3k`$%2  yn3fc+y;Z[@֕2E`r (,iPZg eP'CO 3 "FBʄ(N,kD# jEFaAA  熌z5 KZ6! 03rn&#+@!$$ u:IA%˴Xӊ"׊JY>Ίq; RQ‐0R3B mFi ($ R4C|&U`K i@kf}` 'JϿ_0SFQ"*aط…]8=. Hع&}K^\^<J#xG&;TU|h0Vѵ$s({TAKh(.PO&o~D> CRNځlf2V%zͧ=_յgU%*5ugWAY, H6S k B|7.q#_N,*FS Rw!f4P\I侾.{9amMdáȿ޺%ltH@, %1B~ʘ|)hEYPhlf ^#497F֪$2)';pr"]BJX"!(ܣbA-=-v .31[Pa"qJ@V9!.jI y~]*s>Z#BK7qJ:aB挰J8REDx:BŒЪ tԦf T=@(G˜0̀YPJ}^"b\7]C(&Dw p ofׁ*bc!-iu3pE># @t.;ja z.=͝f C1IY"@zO`5B<)e}2} T!,SWrTZ?٬C9SP.3E;c?o?'KF#ZgZ*ZU{PV*W;Ҫ*] A\7C8JsoœM+XPnlVČr/ ]7}G1ۿTr9s[Zآ1!&lI˜ʚ, RՎDCU=VMؕ2C83pG" 4=9PH+XD漫7;z™4kyC!@@!T_b[+tK٢&dFFxVUK9G^ߙ QWJV5I9 qIexoR>J 3׺M=7T4Ytths'Wt~C"Q@;ւv&v9 S&X4gk6y#QW sQ.\~'*ӿ b`z20|:p"x6Bő5PXf+Qh(~K'JuGzuo{&7> xi;@$ 4nv|01ƻj9YIJᢂC2;Q\]E(yQJ}iyr2Uy`cs,K!J&g+RL+öS`A"inI.*T4")~]FTZ$sHQ?G=O=E˯ o~ og~?r&szt1OETw~Vrߋ|nčfJyh/|׎Kxؽl%)!^-Uk )L$fDW^!anY;Klj1/E!kO\Z$X=l j҇D ];E&6C|qnWHba\UJܟÜMvW'M IDAT5oDz*\לȍQHǠW"c2$42Gy`6<ܗ/G?ұy í҂xC'yƖuTî#i|/ xg)/QHOl/((53KTtgƲDޕ(Rngw[:c 4}Sѱ W<= SbfUw .qoN Zt^=rL -Y*#I,L8p~Zi:ӿF֟[ǽA tl:2 el;*U_;Sqw458Su.9dc=umOV(Gk%;3Xs㇨LuB;ƍ,XgS#w\OZh6}+4zvHjRA9C$ԕ1^S5w`_.#>@@X*r;^-aIG:DIE*͊v/O_gb0;0}x+>p'Ro:^Eƅ\U1Ȕ5XY!(.Oxdk)(_v:M7bٴ̝ڀ(WI=H:$@T,1Ua n J`xZRb $AŒ,=I( I V Y oD{E,C<:dAiCݮK(?Kc^>๫WqxNir@H‚|C'ǯBJ%Fg;{}G8zFfƶa򲜤]QSm -\Z%`"K+$CDJ.pK;' BTdYF#9/!Ⱥ.RN'I)$93PHb|$JNDG\̌lB],->,$ܸ gr$,{s@ lt&r 2кX*:HNoE40NFUdI%!e@ X$ 1؟zF4IX" 0*$AH E))x:0Gc`t @6P NՀ@i]:́tqݭQ4e'Sf}A z<)0iٽn+]?ҡCȼfcN\q*FzNȝBkB1Lo<5DQdX_s2|x\ÿw?8>1(; b/x*=8km N1SGԑ)/IMcqvOgjlSGdֱgvJr.:p-S5en1A%Fq>I$n;qi[':eGBEGJGLq[%t5XS|o-Q^=bƩ{P]k㉆ 9%S#rY}G֔_==vB' SG n|]Eo!Ǻ^6,\O̰q_*{k):WE#C@>qn8[#ó_$v́vP͡|hӔ(ϴ~ӵZxv4^Bx@z鈩pHqT=LK0c@f(o>cnx:N|Bv{ 6;=5Mb|Tں Ow絾T mi0׀uY}cB.kj{aKkAHPf^wt% f$7gz%.஁`#]"K`MKU Ij|Lúhq3rxuLu*sRYC`sKa$K2>Pl|kY=[N'r6.IdVb:bxSB%X<#w Tk@T3a+ g>͚tKJ.p8}^ d<1Rd&2ݚ4^\$viZ'09eN 鶤,6@ @JgM`{8*AaSbz"ۋB,;NU6s\Oܯ8$U֟9T;f2I;!94nxd `8j;műs x]rbK0<')m;INmC`5տd@4,uq.(1k8E FN6>Xu{q:>3 _BnjYp:QjXպJzÊS|YC{3λ<ވMozDU"UNM+m]DʢFí6AYxOCV* e񡞇Qv~To/> H: HUU-)q_JҦ~EKE;5 )lYUi=xv_6<u փánr9U TmJ|RatU$xzQ.ghH;L -!QR_.3 ޿1wI ma2`_ݤ'JtJ"PShЄCi\@F~pEHӔ0p2is.M!!2#+ĨDdj2EFj ,RD-x&{V{Lժ8z XwwE_2ߊcR(t;_ܴJf3I`5Nx,I@$͟zA";ꞙrAh%ak I""(z+vjZ bTۄɐg@ 4$_y+U}qdHƌfc:&UIh!%\Xl\P.PoPH"2NB`:N? 
0Pš|XaBM|G܂KG]1VIEtUY ܨM՜WҬǐo}AT\UV$q2HWWsšw:3|e~,{*j"5"z{d~G3bVzq ZꘟͭkU^ɛMðo&ZJj%ŪGB9i :3pnb6%`م$:x*9ay}-873N[c&NSZPC@ʻUcE:IspgSM:ڥӹ(Xv^5rvX<P.LAjHDaGs*5ύ3sy;R1-U:3rWz_Շc9WXmXb x\SjWy3cRPJd+ʥcSggߟp572{ÄHEYb\P ͬ!yNNy6}eKekү'H7_e J%jGd>r|HtKv_S~ɚ#TRt ~.Lٙ\یs WGPUc1fUyfk6"߮PyAN$_B~$ӌr~}5w4gպjf{KUGʑ K"7{ S#ƹD'QnJd$ghuHT'VuDlުњZ G R'.Stw4iiH]&&3I4Wk!sx$sޤBvufu_6eKB O@ȫt"gL>T=S)I(iR>ge>$2l)~5ӓYL5Dܲ]5%1 ɗ߷fGV|$˾&ڝe-USiMr*|ѧ&l=ZQKj5eb*M_*leb̂85\+0>eYV2怽-_xO|h`n d* d{Q$ߊe#z FL-u`4!:✅ ِJp9`mC ^2U>FV8NNg1i? VxRfN5™ N \oboba؟nhWԤy@@x{SyJN*!.jbjzup5}k?'qu/żx7GUf+Z]`2c2zßM^wjE7ëy,3wJsښvnyΞꄾQ H3BPm;fI5q;T"5iҡS9` ʎ K~%@<*% ;l;v8A+!ȁQIȢbB 3y5I9M#JmB܉~Fw ~W.p:}B+m_Z>Pn72pW@:EL)9!% ;C~#uOxeȆaNx.9 1w! P*0)_TDZ򖿍ix*ocʣ08ӳJԪ"Yj𸞁)Vuu? ~.;Xl T "Pp@Vʰ]}K=8 ?2*³fXɓb{Gn͛O᳟}Lo1%c9J p*K萞y挩>?1c NQ\Ƀ:¸M  ^c$ TI+u!=HHAH/nړ@")ӫr=jÊZbI>_1 H^$tDEXoʱZ(Hp~][!ye0-}(?ilk,YBsLZ5gɥSp&2+]du&#IH&ҙSuJg@a`ҽyPnXc (#wJαiSBRB"yCݓ-*k"ܕn׈z`9Vjuor]Tu\Z4_Tץ+1C?嘉["WE,a`u8j7nx@բBkKY\Ek!8&[/~ kHc_g/{[@1r ˫T5َnp>jl /?)[ҥҹhK?Q~޽ǘ㠍̽*o;sсuU%}5J\d5`"l-[ZZ0~2⮍C@g4TE>ͤC{{ry}X]O'VYi6;|cF86Q³)Z?h{+^z{YA1㮧 Zc0.alx603hM[;woxǏ_nL/7wĮs;d*= Eoa)sA ԫyb vv892V"ml#xxzLw2K%>f `y&K' ϩqM|쵨*^J*Wr3x-2*M9QW锫~V*ZzШL?ܠR⿳XoCo8u0)6@r60Da1%`6|է.R:!~)Xc]`=P3ZU&5F ]܈5h/'O℡^лO9! IDATL~ŀbB^cG 9&98- LL <eX2y}̉ @<#kxnX:wǂ$@XѩF -ɛ_aUMS :Sng;%H7&!%,f`YK :(qЇ>KKtZhQ $REfܵ1l= nҥ?_NipSIkҷ\v!RQfI6:ڦUh-S۱Ǻ|gKUe|f*79Lp4e7QI7SRT@ޕ]$Q X+Acu4Ɩ,k5,DYTp.H*HVzdüe7PΧyO}x-:SPPK7Pۭŋ]/FKM8P:58n}LTn8V/S1R?k#uW%I2$$0TKأSrr)dϳJP)η+ѭ<yb z=V.=H~o/#xܺ\y .a>D3*1$ܹ:%85.q mP!/e/ZNㄤw2l7&..wؽfhE86p Nwv!$#&tPC&yW!IWg8BX3i>Tj4UhԄ# Nk,5EMii?-;U K@vUuv]q oyK$W qRsHi'  `IkV$ǭp*Z-C^{{% # ͳ*vH$M˧p.Ν{Xjr2Ԡ4 r-ul:>@b^rS|ݑk֮[˘8ߒnɡgY:*39RQK9)"HɸYJTF InՈϧ%VB 9n1k;9`ַ$UuDrڢ&ce rpN[ba6-Iu U7D̫TsPHd-ܪtBլk%I:&Z3W+}Y_Mgmy>z26_g!尟Uy! G>XbQ:W-)"5;/ב[kC}GQtHKdwJ3W.DoL4$?6sb|g K˜DI±\bUyԪ;0,(d3Ey-$k5*ߑxI/. H gA|M$S=/hl󞐺hK|?f_=Oe}[9Ļ7t&*'π-ZzShRwyӴ .W*|/3gGF[[̣3HM MՎ2hynXN5ʵacꋢ>-Ȉ 3x6p^`7K@k0WOl`0&x8\uB>=5@%SbOb6 * 6p,ȁ kl8-WYR68$7HZ̗htÀEJ^1:@%Bzy1e0 wA*Y p u*q]Bl09< _C!BUҭc rsn݈`ʜ\!c R9€jN 09DM2sBz,gPXydX*TT0&5 U%p/E.wAURJ\ **5Fʈ +DQ}J˟~4/BFe*wb\w4Z U$XsOe9q!ȿD|^GɂB)!A:Z\U:!A\kYםZ|Z<*59ϞQGD5I8!܀DٱօVSu'b!)_沩?pIY h@]I0ݔCҡBxHZK8T(Uuڵa%5!;C'2dK$-|q4M-6o,ْ{<'3*\}FKQn◲sk&sfw' ZiG.H&LMRd!0F җ[(}}W^^t1<25M V3sM`(yP%K4(d4X$|%GZQgu6Wje&ڷ&UMIX ? 8C>m+z[nV4H"a _||Wdw)hjLH>F#)OK;n XYyW`fBGSDՆzn}}1Pƞ5 xVTcDDž;YJ@ra0}qaS$j(1$Au",|R4Xll 9}nhᯬ`n nlͦw}w~w;.oF Fu~?N/MS>EqY GpuUwA*6rE}2ahāJ\ۮ='ar=/y+~%k[>^wWn}6NWR_ qX|5B qƾsqRZŜ +p%|) R߇*9A+u%NTv6+䃕kc^!$m5^Tv^/>0dȓJVWKły6+*l ]6I;A>pJM`< .p8%0s MIK$o["HS ,%,'$ǀNͪxpDh4Pi' /sC yn=`9P]g"'YBv8׀a[L1uD^PQh+Hı8ekm %Veܠ+IŦ0dŐޓec|MA:WD cs)g5~jִIq !ŢXdcPŮr+}j5OK~X~8m`Ig5Q}r7UǏdZ:|'`|@`Zw.~.8CmPc9@y sItdLf}K9{[})P忄15 z K~sh"r]q8Y(M&ȌA" nw@#8:bW<$(9܃ԣaHeT00 2;!Yf1ñc?Ï$X_7(Ls/DrμkWC~'6QAp.a/uZ<,@@xVPhwHzXT,ܪt3~c}yG*f KR&z@s8L^c?Z\{ hROx]`8Pcw2-^x[U*N0\E r6sWc*V*Q2<6f䷀YBH<.X#@vL@p(Eζ*2ITGͻ3c`= VC"L %`OTψɑUjӲN eYw'Xc ۿi%]_PjcNя5oy5oqx[ PB=OuKQ P&2?O0^{9BgeDZ,^NfNNbhE@nY 2&ճ*#zcCշVN9$$j[jkG*;A#@W j̭'=-s=M;@*eD?W{qd)Sͤ;ƮTTKg#(Ycf* %[$|ϬW8sfZ-$E-ZmK&d,k]AKhPkɜiy*w9/Zs>YJ G3Ҧ^O@iW1}+Բ$i(|u]wϦ|ܚ(լ\_fN\M}ZF:pZio6>r18z=t5nỸ0R )ZmY%+ͧ]Ӝ3C8#/W$U:5Փ8̅g^>hB!^m\LRb= ۠HD:k]+pI^vϮ R90Kߞ |I 'br/Cb@iZMko44&N Ajٱc,U*ǢFZ;RAy*c:n|05o'C.?0kԲ>(Trs(++|K,QVAnq?`uAaDWj5]N؝Q͉]_&P~ww3ѕ>_^ Մ`42"̦x,ޱvTCg _ٓGtAfRQUڷxN؍w r ޥ_Rt s5p41 XD|( RUX-2uu^III˹s|Pnf@Wo8 ?}NCHrSm~qBVz@mFӠ\gqGB$8r zhڀj@&`jD&e+ik~lg|` u gS B+tXn0ɛJ@|Nw9IG@o? 
P@t(RJ-*C|ي?v2@faEa-U΁sd$ nyŋG⥗.`:m(GV;Jaj2JVr >{Ж{WbdIȾhW}|_9+!@o˽{˺E%ؖcܵ4W, ^ ϒ̑+c>YkDmI%jعrmY+3ty=)d*-ϭd:*Nd}h,hZT(*BM3/MsLE°F'Ы\~K .Z - D$e%K{`4/R^wG(Q5.X_\*J S6%pCNKԚ0Ase[r +QLLeM{PWR^Re\+N?pT,W$2gCyT*XÐ&jID='syMЧ~5ZZѫ6E@OcB7zҔx_fa %p"fb':/~cIK眔N42MU!k$1ޝÏkPjz}&hD$$>ܦA D-`ԯpw$!Y of 5 k*!<ܲA@8`dC?UBae;Cp@Go &s $)`FKiSHe †L|N5.}xzCEEWtZA ~-CWuNMu 箁n M 88\F@++S<4>`Bl9 IDATZ)QQAEYucfԪ*OpYJM?|ro 3!nCPZkv,ֆp48ˉOM<l඘Ѝ %,l ⪳VVx]*v5fM~cՃ/Ɍu &){^#fRU?khTY`W+lu$plP[P.JLrUCEW`&8|u@<JU2yFfEؑyH31֚TֶGQ&9hj(k#Њ́FT$RkS([C$+Sda{*Wb&\)aΪ;$<[-ݑkJ$9 3wFuPj 'gCGIƊI(s9ϋv@rOߪ$h8oo%ǂ ,($ZE'OH3,#s7VdY};AѢPVQ.іݐ{%axIj "waDM`ކ3F[xX]P&1Q#;& _+[ ?(*} kDчIQy.ʔ|YfXEDu5d)m$}OnynʳcMiJ)9EDf$X!y|mR Nq^Aj`{9lfLS7߳J#/=A=B䱜Lؓ!DL>8Ъ=EMq\`yͫ]Ʊp~7gրdl%L >D  µ7o2i|u|$ 2рj0b Ei)#?S~BGҢz^G!c}}ysxG⋧q/B S%%k@\9gW@ƍu2& 6- YktH_`,P.cY"`xc 'F(Ҝ+V ]/Ib=K6V0eI5Z5kn[}ї$,52L|wnZGA"W$4Fԥykb_Q.X@7HL~RCI,HB_*\@9!`*ЎO.*f( fMyfA+CE7MkoK OIe)T‘T 2ϡ@)[ы%, ߙϥа"?rWjV,B>kWiiư쌴[Uɢ9V $Vg"RSCIE-l]ofeہq o>y6RWde8TnUk1A *8/C5阌1XSi۴:{Pdx r cK{zLĤϑe0#39Z_dCѾ8 " \7C1ï2H8i|8` d? ^8V n 76?d8`}>Gic ̫Uq3$xԿw1>x?#ﺁcʲ ¯W0/8K/_<6q JW8ux1*CO@HNcN&aŤu-dt:g9NL򭜨Mc|W\ 7~O#^m=Ƹ_P6M, |E^q}aws 0C\;pҏp d ! u )_@FdžG <`\ #%lyR H,K݁,wtrbуpNxy\)Bx/}t s'e ?"]6zV˧o$A/UV}©ulɦ3 b14,2+gjHpd8E,f|t\+!9&R>wֆkJ覨fM-U&$%=$HUT'G]vaz 0fO>%\xmL&9nun 8ZjCu_Ttl uǥX xI XV}TaD~$_ U Uǖ*@cROA 7~0gf/v&z4oqI^{d RO{;|r>~gg1ü(` с<"}9vX<:.nh[1`:@~Kwm*T 5 $P­+,m*B HTE#Jw[$aգI±HEM/*,B QiRuЧ"TS:DC|wZ-gN_P5rxEKUaI@.+׮mॗ.iccjOdu;T*C&('$)ݖ:heNd`TָeQ+r:Ԥ.sJ2Tۧ(=g1$]PH̒]00,iќoypTa% e =GWP*+kgAP,;)v”;@V+ Z'e8+͙R%aLDrϠ"%cж]*X%бSZ^&5D#?ۓq=b[.k6sA4Au,qcCׇ/r 39D+w:QA%ȑZzP!=P`+q,[\*`8Ekv> ,N`u;`HL]luI6-~xqq瀟yеwR k[#g`P`I5-D-jTڑ`Ui? ,'\lziWKrK:"ʣ\Tߖq8_!uh5VX_^X64/[P{) ,g%K&*Dna]ȁq>\GTBloVKb6V}σlnCV.#3pSxcᐛ'OSyܹiyCxkA7a, O lŰz0AΫS8ݹ gxIBHV9^}3 $Ѥ?1\9[vφC(n6'O׮14S iA!+gK^Q1ԁZ8>a6fFlDSVKnp,\ʁ|s{DijVDi~hlΉA-2 k!P C0)͸d1~)݄Ǹ^@ ÀR: 8Q'\!LY^63Z@RdteYҭ=`@Yd{/9u -ʮ K.oU O0 v] C= :sRXJT:ŵRSQ M-<@@%/Pgy~.١囡-P *eN*_BH2}q7Gwp]FZ a),9 s> ȄYK'qJ@ZII"+<$×zRZР[է A+r̞ lZ7Zr g.JTt4:YՏT# VՃE}1Eb `~ e>W.QK>[Kؾ!ܓ%7 gALSxG\NRRE E{*yI M$\T/*+ AZ݀"ë}Z~N ɀ:iW 'scAR0ٳ焷A%`Дd=bKc8́B龵wV7|@ntpyXXhNki@ت/qmj xs}>I~ QoxBϋWSb-ݛ\XR qe :wp!`0ǁO1ÍWհЌ;^jV^cBުXPҢp'(.dܧFz+X㹫K,3@n3B1SzRS=)4.~pM&4P fIbзzNоX^hKʥ$ 888U 2Փ\+891cGbI! wkK.r`dJɲhL gN(E9+~Y3,nBHVhIIphA[v0F@v\Ew(<"$;J'SYIJR%A:W8Z 0Szs UyV,2Bp29jV֮څ$<*c{_(Sj4tkt&.VQ@TVTa{eh⺐̳X܇oIO`dU#Pz uPaZZ8;ڔ.g- `O'5$p&*T..N:S%H% R:tB:@@wVWK*"Qc=FMyT}\Z'2gU* rܑ+A}dͧ|*ABW+:X6#s]=9`Ryvi]ClA 햬XUODWHJ%:}k*KExՒShi"yV1LElH67Mq59*[ HiR d?(p-F|Ⳡ_5`ud dCU.n ͤCMUIv/w0æw|}Ug?&pI4&K} 9{$PUwO^k@J-JEv|^N"tĿޮ879Ilja}ҙJqY- RE>X1 Jqq );a))DZ:/9Yr]`cy"ʳ ::SJ[-8kkAT}.'4 cwNl[Wn~>/6?lV~+Ǐz 8{W `58qy ٙz4]t,gIh|b`(Z5tj3^c=z?j`KxÐ( {鴘6ցAB.x[X~?]D"t;|eN݅u-z Z&ibJ!|^t"l2lLejWW>: CNEgUCu]Zj"݌8Vx_эk6 CF#S.$ V;Lz=[wɔ_ !O/LJd{~ 1@t6ptk@9x|R%=VZ""8>̀6oɈb0u7!cNH̜@sÙ҈՘L]֤TK$*+@ 0.üR%`NVD"ږDsNJ!0s,z\PVU8C.ݍܒ(arZ>qd&uKJTy{a7*YUCըB (}+SBbҳp$YYε:c7F_%ӹcu$}\ KVPՄUɉ"L,W^9~!$I O>*~Y:5EnAmhAR %Jk$̬X0%[JJ3~TWs(k2<(6P8]/6[*FoH`QI+sJhl=*:X2 SB}ɩ[}#P.nIFI %Yؓsiz\+]=RCmY w=з[c5Ԥr֢ IDATAz©{ʰLiH#C5+YB|*[R>(xQvOHB2Ե*zJvYÚ,n}@oA4O>PSvSt SPՐv5+8^qU+GWpwO}πt"ObMJ?lXÄ̮gUMJ!T;n.dO: n&:D>}oHN*ŕUEڳ#$KE"",+KuFXp ePRV*Z 3*ei mlww)HSPv0M)}?O?ql{_|"Hڌ9`zFMB,/w~ ߗDB. =7&Bjj4o~_: <|6 `P^h4!x+Wy#W!rY`2B& 4%,,HeѣOK"d$rۅOZMF {KR*IWjU "3ӣ;M:D4Ԩd~e9bj49V'RLTD)J83Gc')]pDȆiTػM (@!$Di3V! !VtBB/OH@= tE^AπhĈ2!qHvGHUM]¸Pʈl[8a #`dي"1+D@ c0V k B@xH@(PIX9 3%ySr@ NX'/lKWɞF ɯ" , W9A660-x𤒧f}۫'tة$ 17})DlDܪ>] w:Bmv'DWRZ.#׽as 36_m q_%UZL+vTW%lmx,i-=:g% ^:ExQcMF?9ͳ& es?b,>MV^$P5ē^бVԞTuXmoE[h:q@糢Lƚg' =^YEWA m ut͚ޞJp|^^ħmnyIˊE? 
%ݗ'xEas'ǟ&uÑy^uk!?Q!Zvmz}1+ Е\1&ִӺuAZhTl sy玳陘y5`r&ڙK$8P5H;NEqŢTwvZ Є8 onIUŗA~"`aQPfr l"*4qXDpv<-v.C$S׷Wc)g76M/} o>PG wSK%Ag,ׁBA굚ޞFl6@IGu jǎ,sz }[秀 , ycC`V"{pV, G$ŭKxEc [y4mI^AcOUqӸkx˭MLd;Ptj5`o15%%L &|N4p000I\1s,v\Y_4IC_(H0ZKQӖxtvH/<@Y(k~(j;:uBr6*m@JB[& I"oQ`A)$GꠓJP0@JR-{aSmJiDt=Ţ:P>nkbFLlY \H:ըE6$2#&8Ji趷hm"+&l!x`,Wa-ZKC*t35:Q*5x#e]ۯ}j|/;==GgU`o~WyFזv'0*G(忦~N?G'/(&ۑS&FA\k0H 之tQN:Ïs ]Y˻z?"*YiKk[ɶGnUgw3.ej^]/Z3DbkHJE~oR&1LDLU +b&;(dҒHF(A|(BJSUdJ% :!j0;wʛh!~x>-O8.\B_\'aqQo':$\֖$OXeqQ}V7H*냁$\779Bo;t?z#(£_oBB-d/^(O(RD%ЦnWQs @r1 aa xqV<"0fSI"6P=3ǯ>բS3xak9c-+aFu*^&ɂl9KL"1qDQjiIbժàO[Ve_:6~ħeiVma4_vu0"4C5 JP-Vz&6 K:,ҁ$=y1弔IB0YٜxI鶄u Z) XԺ!ժy2"2 ʱ1PJ2IZJ0' KPH Ox vb@('j:=}ḷ,!H}nMi^==<BӢM Esּ@a=۴V^nOBk6SǛV߼B~|s``UPԢ:.:<榜(zRڱ'Y$b:X__ٳH)s8q99ᐑ$s^Ҷ]oY=Ͼ'y2M:9:UĦ` \+Dx#rp'#%CJ@2 %:O;wˎΙ=ξe8 LX,)X]:@N:DžZVduK:m-f*zys(f8M&G,` g{#瑜:9dģfgGOCwM[03o' _0]'?#>;N : %\_'4@hY`~^<ۄH RBX6.|Ւ 8@$ocǀ"~`[W~8ǞX?cA>PRs.W@>KGVKj{Bkb1=t+s2c4`|q^ k0 }<g%7h=\NC#iѣ6ZBwf|taP8*iu"p(S2 "tfCB"ssFؖj%h8@s!jW4!zN*M%YtDr\FE88`$,ү@20Ohwa @P!0T`C^-!x #3 r Iz@8 3@Ԑ d:@Jq 'b˜UUq*qN1# %pda#F,B FBXU43.(H% pńB@!!D-ͪ$Ƃs!aC}1D]A@Ou5M lםx@nu:s@ȹ=,#-VYX3(iNy ~tR?4YL(IÖZJ 11H:sPΊ/ IE ql{0.{@J/\FM<H6Xmڄi·^||oSkvMS+U^AaT 2<1O$l$"@A!n1TC|&j1E (#hs?}[qbu'>xgxe^nayn'<㦛ei lo󤚽 F\ΜY_F3Uy,2uѦ X^{{8u:hZשcLOo]_̧qn $@7CVJCIIz= {*$q43Ónih*TŇh/l'Fe¿0Ύqo{Fa&cD UDަp8d@)2Ĥ0ұv5r^s{Y[tnH%J`mm&MtMugcRn{A:̞dKP0%bKp}~Qrw22glܹ Ν;/|x;‰xB ,393g4hT^LY ]b_x%Rse2egrE"^bz ϾU5K yjEnp+EN^Ӑth<ؒl*7eNT My)kafIu,ij> gUod4( `/{ŁKm-01+fԱ1Χ KH JUQXeO.xP#,njv }-QŸהbQ2 뼖-D< 3=r98 Rb7.f_R!NcY|O7?W~G[A]%}zWd(r9gxRЎC.{u@﫤pqCyB Sdz9I=5 } )kLm$Խj},?Kǂ&FiօCVo!A$uB̘%2X,c0sj"pE, IDATo i瞓cU5[85|qWc>ng/P*ʿ #Etvw 1+qp@h6RHߌJ"ƥKn;F1vvDZ榌].\@bFc9PZٌ r_"v(>8a l1S5Y߀2vwL\ xfF:i*h$pu~;M;o;o]>ֹٙ—Qptի:I$hp?TL_+[|ӡY<'YFmVX(B%t-‚ZHR%ߗy)HU$?$S nM:r8;eVK#g.h6]R(:OqT| /UHfC}_b'Qdm|GEc=ss_3];MvD*HuU"7ϢdAWɟ!SîQĊBH>2k #ﺾ-)61x9ߙ'@D-UǬ݁(ϚB/2GH04OF#]wJJPl"Z^FKU9W;'Q"<^1n" VcL]u:r$! $1I15%rE2‚H2(34|Y*yB.fKM':Ԏc-xGvi}ƒM:Kyux16zO2./0jaha0҄Q#!?!nXzͿ~~"2u ,#5: b_{i zBJtt9L=.&(+W$"wLdl͐ߗs$ ca!@ Ɇ^Ӳ]!;&+"Vhu:IBcSi*\[_rDtF e Pv8 3m*!M ȩ32Jrv$*ZL}y (ϰk 2Kx %A]XIelEzŽ`YBd(2! iSB 2 Asݬb~9Xaox$}\kemzʎחy]7SIMV_MXٹߴ✗Tr8Lx A#Z?w) e@U./}%jV58ȴmR)c\ 1NC 9β":9է=R4^_]M<:t~/RNX #ɸ]u+GĈ&^0 KFDg^IǏ馛o{[x_B^1PdLNJ/;2d,%$ɱpsQt=-:뚺 _@@uFg@f0 Z)jU1@]]7Ϣ]VȊ!B]y:ֵu/hӚ u76 INd% Rkęv3Nyxz>RU&QM5uyD!q -u6"O2OXݬf|=Pwxۃ:lCŸFBxKV#`CI0' t(.ql5 G361?׼EJN5l kе~z? ;&s_ D_n ISkR)(:u>4.;8+: g2Ք}#.{{1'^ur9K#/?k{MGҾY7r] zMJD=xڋf7DܨkrTh m6Oҩe,~K%=I_} c5v ށ{q#w]R_'sN*z;=lqeh ΈT=AY?_Ї󟣿#kuw"_wp]TA3uCLde# aK3ԧ2뵩m |.5q4NIF.]rBNGە5} GEy+\aI`p![rQ TJBޕ@*H $*zZ=<ؓ0L*}O&Ou B"4 :M Vt}+tlUȻR~I!_n`*sR}}, պ Z5{u#&9}Z=Umd?Wv\8FeO9spU(|KqXֽM㣤$ czEݛ xZ'#[zu2ZCT] v zx3s]?O\?AGaJɫsݏ ۺ$X;p[Cj(g]Nn{e(`mcOd|}USϼo<+=n|Nn{!#{(X?t%ԿV4F ^[Օ 8D7xoawG̥v@;A2V^YdEg@^N ~8xp%9n 2?̑%dvI `4pF$[K4E-.҄TM$ !cc,PzmTT$Y"L\㓄03JwPϚ9? 
S$#j)lςrA &yA'嘀q/BwFD0rY6xCSb%H{{Z+jE9c'iկ+RAVU 9 dkD)b IW%97%,;jMp!#:D/ v5ZE^֗Uo4";5!|eðz=uA+V5kS ٙo٤(5vm4c[WI+o{PP-̴veojU9gN삑YT@=ʺ<50GiRE7@s%7"x͓iy,Jt]M8,m<,8{P~YoA*{$.*ɂԟ496<1oh@"cO;5x_<Shm^8cnЃMkhFte%]u S 9j:/i]7U'E]0 ힵ2f9%۬gZi)}d{d>ȢbZ]:uwt̓XI3^}Xa/FqgFyWФ+ТĞRnL8{ 쬎ir o"@_!OɘtV5AΛK{ {-6}QFPߌx]p>A*W~;4/W<jyIG |=) %ܿ1.AC&@ߡA<D΃0R&d I:_T\PgǬׁn!#YHX:$ibtIMrB+ljonJiUT83vu$GmlH%|0ϛ ILm!tO<tXy g0q66eaW;U|:r@,U;`j{݈>^\5W h<QN0hGeA稵:A;NLMz#kNn?è[{0|_x Xx|^xI w1z7/y >AYFC lK%n'ҥ3KS3ŋc,,E ˺33ҕh3;4=IB|X gג&fZWxEz#|Afh?Ta)e18Lo+=Ϟ5ȒW,hY_,-%qH^:4-Y TygmՀy#F7M##i L覻읳/yޑG 5еs 500 R<Ҡќ: 5^p;qؓku:/%rdZ d$){ ?,Y9]kU+l;WgS;`Nܨ 6J"7 @sJ/CeX}R i /`;N|%{X ; vWtJHGf=QA:WsmIYwq83񀆮y4Y$cؖug3ul kg]4|Va{*T^hFTp EP۾%2#DSYc=g+51ʐĺ ס5I^T_mG#>CHwu%X?rDcJ-Be^9 9'1QU|ACwR;QƢvE[WvGKp(nH~_9 i*J;x*dTꍄMdҠ#>a8L%vL" 6 /Q*.j-^tJ|Y^uz!eB`aAdBj[cjHvy`vHRo oO~4@s+Mn}3@#G)ƙ3:r/  F#@r~.dU_?xB c0:UT-:Hy[|ϻ(Ӹg~?Fne'ed$BF8-/!Y!*jT6bu'&WELe-c,@h*):{zUvetSz^a۾V*# JJʹgԂcSg.G #ЪS^j"BNn 56ϩF3J8|x*.d]K*~&:,;qL;uA IDATjd-@Rvh2٬XH m玮W#h}:uoSX]0¥Kė.GNۘO^oY:th8wn|pp[[:!gYN+-j(6]Ȝ a&GLk w#koY q 2rX󦚦ϤK4ҤZ:й,Sh_Am_adUX`K--f$z wTc.6uM&,~rX-}zz}3&C%V/z\&P+?M $-}.UU5}})ɾ \"lgqj{Bf]MĚX(4u?L }^UwD/i׸[AUvLJ0H(X ^2s$C\ϼ߫;*Râ%/w^ߍol5QʮmQ "sQY*X*k PdP⒮DK@n4/XRQMfLS ߂P,Jѐ# He\QV%YUDUiqR葨O-.רVhP] Tu gfmsD t B(יRMVibV,z=RB!J6-,pJ| ao~>u&=y$1Ή Yȵ.{Vaui4YIr ('0u;&i.'s-sa֍ve<;@RsBSs%iI.^r9KP!t.\  22ʊ\-$aV-)pp,eY0qTxIkޚM}$ $ *xT吥&H* YJS4.jBB"T(k3TvUqWˌ<t@" !1^l2BI@]v.MJ.a]Rȫ*9}_ @nji ߺP4@gZg <9-#2BسzQ Y@(t|V58f$J4*zY:±ns,>9H%IC=Uo6i es.R]>!<#WÂr\̍`/p:@i`)Om\/4$\%ܩHFn .ttJ|rO>8G+9zSh}Ο_gc:wk$=]0e8_l4XGb^_5dgt@.O[kv\Lf kzO|'^ '>9}>zTsOš1cA2 J#$Ƀ9ok`_gq{>u,]]5iD_6 7ƖާbJNj!k1Sq hk=zwF@S3_  \Ǫ(4xTL|U59v <7T)$TIX(Xb"]' Hc1߰r7wz y末%sTC_::5炜Ӥt טMAs/`Vtm2/17T炾<:0t*:& $am`eلfR%,s"E Hϝ$@k gj󤉇bOGe]֦GUATHڮhR_UV4p%}5136md-j(YZ3 \y_@wMdE%Р\erP.V ܒ׉3R\>V@hK/8iuy!Ǔy{˺ @E9߭K_#mvxrxXhyVyI]S=0|_t !EH/vo«k\O,x"+WT(Hj02h R* pNh{X)nS,wbQEUV*ܜZMbaIncBh7"ybDa]E⢜Qqwn <5<~>}umxv-^xhL++rέ-9g(`Nk41?8KN*F([1Y}pr ;>GZ8؟W6b0xm\7J섈]?tt.eW*1'<4 R ^60˻|HF@3y)S/HK igf]SLX_&j^e8՗rNNjvW}#G/%x<)FLs5gU.}^{ד 17fԘ9‡γu(u?D03<67a{;zβA4P&-=i\#?!؏]zk]0]X|st=*c_2B~n7O>9ƓOq-5y_}tu:&jMvkPǻ@+,k 网˹ge0. =wn\ow#S=Aox.#Jr@lc cBYt2e"\&T6Jܜ1 Dʠ9eY:6g,/3fgxV#qO*&~0R^O:&%Ŵ͙kH%I}-62~{x)Ξ-ҟ2׼~ޛ[/o}}k{U^mT&$H Pf lckb<1 ;D0vf 0f1f#6 hZZR7wu3|S^ɓ/;;DV|NGE |Ţ\E }u/(9rleM@.++tu 7#Y_zWе-!͒FKTJ(VȭneɈa(E db BF#`2mQAJMM҉`:vT_>2@drQ䉕;nً Y*96+ 5yNk3~\٢ cݝ20%TV&{ %j 4|;)0e 9,>rw3p뎃@t 'L"1KZ#,FLDHD*_A+ƗV4YA?jGc.;xT) BXy|'^ x%fȷǵ =8jAYرĪ9gXukkCD.\XrAp+)M`a \# oHRq y~,!TF28^3cm7A~@ @s9y |µ8WKu-̯cBMUxLx%<}Kzw=&'Owo_rwDay}C5p{62[I4 U z*57<28Lp1@t \gS(JgCΣgB]]&a@@m`΋ª& >l\ΫTFp#>' <k-tI50rځPoȬiy׬hnr&M=n&#*KL!E'rJ4؜Lt :n4ӧ:m ?+?›z)??|RI _.kacCݴmA\]w#9<Չ 3z׌Z-iиQM:f ^`}]ŋs3Mpjir1I )UIXC3/OydcE RšP \kZVӗq2Mf,(l)/5@ mYfHVHc_RHJ@qE0nO>FjIN0:y -:KQɒn hAI;IDsnAFQ<LHnaJ/& b# 0D7J/7ӫ326JgкC~L6$*ֱțjB9!V68\*fUN#$$xGgg=g>(&m D*[4! oXgP2SYd2(aɋ2G?d]4HNsڄ7ȪUI\)"PB$Fμ<ȭr1y,-u{ NB3!^xWx\<LjoR㫼dhmLpY1k|=M<'®xzs2COԞTr2UR 300>퓳ܧk{fR3"Cx/o4=-"H"`R?àJ[3>c{^K<&aRvnktsqk|cuN! :Wa{kC.hUk&Y.n?{ (xekWc͈q v2|;ִ 6/ ܘ^Z; Zkί:ss90i] 4W쿙)$ke8 9M N5h|[>LjY,jpj)z84N9ҰWԠuK&UTٳZ^ZAhG1ɓZnu\Ze|]B2"oÍϟOgP]k`mMevM^L\%bMhz:P78CV0nv?I}O0}S/|Ap@׽Xu3Th0pIΔEFؓ&$4`_$zMTIӺޫy1YWxB*,׷R5o,-x4U%Q,zw|^ z^D ҩ`/PP,sK Yk$U>C [*@՗ATqO C%=PWLi/x1Opa. 
yfYIv:=dJBaA4ԲK,09T2@ HC09$LfQ 8߳-_s/M&m̠7t';.[GAD}^荀SM ,Yg L 3q9AS1)Yx7d XzG!(~dpn4,W+={/j qC=[nn{n;pAdi T][sƟ2#N&9fa;쵹aV=3nÁ  ݼR2y]xoi5LI~Ql^bXsH> C x w+u{=CHY]qϊ,II~r"8g7̋8>ȼ|,9X:tYdŖx;ֻ09iKwsd,#^Z?f>R=Z7;wxZ;39 d%bc`:/q/4+xK|/ lJ/s93*r)7529;}9p|[4^l^:[s6pWBQ̹{ER%9Ⱦ`FPy琞wt/:9?pHٮ7yάc5ޡ7#8Fw I!T7qA"`ׁI1E ꦐDd±MSsP.e4ngqT$œwH TzN&^;-TZ/#r!nվyL uy ]T)ґxi̹ aSߝpP|hDu 㭌Db=VWA&3O9U$ MChT*΄Ad }~ι{a:TVkBNx;ڄ,s7zY/Ar Jw a,WyBd43@N.H+_=м{{:??u<^'f4 *Tu:}M u<%}£ZYN05׿U̟0S^$RoR=Ĩʙ4ӱ{kdy;n]́}b6$(E+mB àScJђa' 9`)徛:J鍰?Tg JϻlSs`,`>7ўJ+wQ@Dx2r #@w>.< ]G98$I`c AඍZ!8y38$KMlT39N^#ih)\?Qd*X_agǫu)pݚ`]M6 Ss SGc"ia]c_]{cb<PMCfݺE,JJ><lmRSCA**W4VRF0Fe\i*3wo3Pp ML2sqVr$h{cO%3Md4Zkmf$6&|\t[\tfiNCr@NbJVF-PBR3 ISaّ uQf $ρ W!}`p sX]v(ԀEdV1! ]y **y=dwҬ 9'iw _ +|Lē:F+, ^̏"/U p0rp3+%:KyJMv2ҔЫap\ i=Co֬x7wL$,Y)N~yy =Oݞ<<𜆛,3n -VQ-(j1?Yǣ%cbŬ/yF1هRQѨ$r= H#^GkLn bg>":();I4j`za5:p@PhrlbϡN9F2wa.|\<ڋwFG_^x\+zk|<(baApѣHՑD4ptt C]ScJ $N}e:=uz=ϠP^O岮-/~E|_bk;%'=8H}u^[[>}/k\V>Nk4(|9$^ۣ򺾣eBZ"I/̯%o:AB5qsR4 tqg$Lw}'%}xTdD @ը:gbT4pe_l WXH>T.JB/ں㪒 KRPc>}C|-W !UVe fa]AY1 Vୂ/d|p^i4>| jrx5 0x-H l ; F ͸ spAwNX\75['m _&gǯs,LUF#S(#: Mú< 0:Z67 k„o 7 &w*Aޒ|D=s VsHgϱb]&<Q-!d Aw `O2 D_ëWzl5_^нW9g# *RAw3^k\sVnOMw4k6"OhTSO))(eoO+tI{upxZN`qQkk.>n??-G ^ $&w߭U~SȨx| ?JreK"|09F:ի&{V M ?3zݤXu]M7Me1S ]8VuNG0 db /k4M^z=GҺVӄ Zd?ƆBpaAI!rӾ|2v@%XZv1űW"jihAvΝӄTwYUv']`{fh &xhige2q8g+1c!r Ys`,INw0:]`,g<1M3xON&bHe  HG >!E<qO @ uFs60p^$S^VIfU/wBG.CYY*sV2*CL as"t1 p݁)keB ;%܎ͪ |,kQ<:!9QsR8c@0"7*8?-6i~aH(a< 7cxo!?: V?/[yAh o1 W(<$]dD|0!ܣt/28f*kq?jg %MA{I]f&8rYcUSJffc:znMX^Vqp*ddD&Ws8ֵkQQB 2(U"*rDzĝyZ4C8,Q ȑTE6t/zJ^\t$|Ԑ1qh4!0f3A%Q/Xrf 1=f(}qQ~LlO㣳J64bvӂc3`k)!@_h#sdwmL&q@VSLk u>S(USf$#Hϧg":dn;F o/ZeaC)4Š.pk0GBwA&TzxATk%BD$<4E#>KG2f ' G+߬@-;N {.sRЅ*yÂN əo*V{jܹsre8x2Qm'yؐńUO)1JyDZ#(ל^a5;MҠQ3 8E{L670d,ڞLM O>+P eր-@] T,1k@>^|c_G? r7/~ι$7p;j2X%i`Lq %ې ` stOa!"T3w9RªU*&C ͥHodB(ؔ:u-ѩqo9E=:rYFB=GH](:+73uܜɄd]#sxRļv J *Nց8/aYh doV)A|mMw&㱠çտ~?#eyGflVpI|PMrhw(gcƓ+fٯ-ill\:%^Oѣ:NGrYDb.ם̪{TKaC&'[[`n4RX&] ڝ){$uȌOE4q"Ux2ʊvwg,%mw~~j<]oH 'C^e!wt>y 6nnxsv/Hh\7;㍞ZTSr.keuSpB<TbQ0 Us_=A9 Sw=+U%jeE'~I/{> ]߅z\W*3jhI"2?|] PNF)_>\|!zklʌ<^3n{e. jUbժ1k@n(JnfRIlVls69( : .($*I0#=@$I}ႌ[}`eII67uX;\l zyN"@!CNj 9+&Ɍsa)SR Ϸ2r*yU;3 o `a#]4Q>Dnٹ&-ۢ`xm=bx;U4UB&uHQ-7 80RFlN"{2u ӁDslA49(pm(*KkjUH@3Z!d*\*dfm/_dEw/vC8 Xps3 Լ>3F IDAT׈ 1!x rHnhGa\/YJX;'o}U|Pv@-l(a0,cBrA%e/F5GOcfu2E7Ԁ."T`jmgc9h>#K$VI%9 7b ηlDrVkrW;X~]D6+W [x}d'?gӜAAk[ME<"Nͨ0n@>C.qLOx T5謵9`5Y>34R*K v#W~L.H#V1cM2ԋL<9W~i^ z1 GrУWSz_t98g̜oөz h8rLaJ<<*q݈Ц=`=??Nd[p. :wNe=)A:"x㫟CkkF" ֱr@6ܵ> ]}L]Cٟdp}+Ow:{fg1 6E龗^qr?͂?kAӈ?/[06S \?^t K09ˈkP$Lay3;o7%\!|ӧLJ>? _BCr6#{A|ٌY0t UD+TLVD1/NVv6) pm["Vy9bfb#~Trܫe. wx_mx㲘Ja'Ne$S| ',Zp"Wm?b]+*L< ώ5CLY@{f=aveP>0-li l8AL@a6!}rL8w睜Y&0U2A*tP\! *0!ޓ޶9Vdǧ,K8 }Z3N?}?s|F^`8MbH mh~4P*2ǟ]/y&:v[ozR縰ƀOfhBQ% U.p(q5&yk h`NA❨ej4_TK`JU p~vo1񡿟G\G~#չ*K˗u~|o;G>*uisE1gxHnU|?4o849f#OD^6De*dKг&>ctG!*ŒLbybBʗ^KmMYq_bw9/UX! 
T#Y4jut&C6#,cssvL8w*A :-=efF.[aaBFpR% ~`~Y)V$_ƶ*)ke r‘HwBA(텑9Ā6(L',06iK?5qKں*D|/_`i7hxcS89L0)I;J')9 Z \ Ο)MդJVgF)H@$K/~ *-19MFr eԩ nm/ފG9_cUWJ_`@Ax 1m& h6Gʵ 籖ł\!dey3|>^d(Gg?RsA@O$\^/r2?p ߞ~HU=4-y64BV|9$Py VtY *sb^4~≠2f+]` 0<,a޹hܥ s܋U&]߻Agk] 5d%ExC5r]i[rgᥲ˹G߁XfU%0`5Yx:A\|wBEپ<^KZj14gxDdj;ihhpin%7&,}PΩ9ɫ6*aFd|OmǎMG//~7xO~/Wq8?O~7Fq5K|@<>%McŢcqQnkK;өt?&I{{,&8|0gs0:J fJP!ٮu3})=G!5oLvkxw|G~w!O c=_.IAzz^?#|z HMtZ^BhKe5q C+.\X'/s-0X&#>I3..T3'~ >}&]S oF|nxcD,3w=-)gI-_ * (X`50e#s] DAs> l^4`/]3gqtر)xbt=(*Yծ@=H~m'UΌGD%񰞈\›<:x *56/ cʊI5n%pޑnv\yS g3sŞsʃc.8%xz)}r|tyu wˈ:a7̤ dFs!#08qݻUBЩq]:|)$b&J{/G< U .Pq ٿƷ s~?I71 3NHU.ѣ$9 %mOhN>v\;vhм S}N-c>S]>}S".DlSl<ۺf22s~ag3=%˰;Qj1  (A0F% AvOe8ˬ$ CB#o,Bp܄h!(gƉ*Kc@z/)sK͠bgY'G>R¯]: BXJu :aȺ ".Z !{ <=6o<9dUT; 6fL<ۗ{v5xxP;EcNy`oϡXT.u,Z-.=Y]l*hnPCRU+ k5AY6+@(|`\\|pϡ>Rn^֝SOy|^pC>W+JEZ̤M#[& b /QnWx<瀡Hټn8#` wL/Ez}j'uz,(T+ +T (Gac&Y$FHHE5)` e !}J(dT.\AUjɲ봲9L ?Ro\>+/A6MmkN1Ř +{a$x<%נ>@j9K'\> m[SI 5K{t: mC? c͠Y<}$y>ǀ"a0x W(1pтTH}(Ս y-pFKPQYžtRV,n@\4doSɩ`{|gp?';|oyKEydynHnMƱD 1q5޿~1ވ= -&Й {OUўz &#@ 8aOOH`nMRJx]l<@OJ|/c_r˲s\%1n,it)1fR. ^i0)D,e ykK_P؈9}=dmTj`I:ktV0^/3LJ6r"c Sy?Gj _jiDSC[ |+|rסo6pT0ɵG0M!*G~9~.s'v:Kw2Ƨ ގ:RGP,r:n׻ \ "JvDaa[[\lmyrt>ISSjUW%LtaLFǬ–JF~_rY\NDJ%1nxʟlX:&3S^ш#^V X]U\s^\㩲X7OH%_ d/#űvCNEM7*.Pl`<J9P$e-T|_XpcL^[Jf:%9 ]2u Q =* NcJT|5 ZPovAЏ _f9|0HXA|!& % ŠgkkǍӠPוgYk18[ bfhVō6eJbdZ5+d#<`g$Dp73| O~v[ >IP1Y*u&>WܽW,0 ++ wɣ: u^x/IY9FA7FB >dhwfX8eI9?egP}9pW{Edp(8 oic]ʶWֱDb= r_ m&qFP|02{B9 dٲ$-S4 /`לyYX'Ďp cь a~$s&18YBڙOuWo"{^ ) 7Z4l+F5&`UvLv8\ ޷*BWy_,9v`w3\ .7xίe76g|k =N\9V?q8}Z}N",zrHD=.\u0ѻ@,% -wcV#O4Ωg ~=_=J?J} GJԸ^׀׿JN^OVe2˚ j0*:8ر5z7}kr J"r3U@&j5LR* +M5Yr={V:-n}s!1sXX 2St:t|5?ooK%ׯkru.4h1AKnbi\"= 9ı!HLDU"0(_!Sm [zC S(+? 1hV X7Ō ^2ey,LȤč|Qu/F dտ#>$9 +õ 07ȗa3/hUMh2D +v: "*T]&<+޽ړ IDATi,Y ;K 3O* a-_U^\G(&ǭ^,⸁_a4}=$2ǚ>nJ>BhV݂ۜoCz>%́hBgI *By b+ZYߦ}Nmà;f7,Iqlq p̵0hJ1 ^19Egtl6d}}6=AbH w)i Q O h`gKNcx(*x%"`? ۼ1kg ,cWw-YnP̱VDcb6s3>kB!{^e{acLyyyv(Os-(%DwcYvh<IÆ+%fy&&Ղk%yQ7_ҟZ^aߺo}_ه)T _$ _h014/׀4-Cbw C ~*oG{_?`lXg p#Bwo}rxXSfT)1:IpP^͒ث)9]~wE, 3Nu^m=F?t3ݦ/k'Xgba)KY|K|[, 4NM tT=sszNGЧ\N榿á7Cv0ٰIDnل݉btQ grU,qPRAE_ b\k݀QP7U 8xFڕnUa#H*Q2mr4T*,/hζ3B5S)pW|J4N Wj|5w/ị($mWaЅ]^u&.K@lG.ae%o=8{ Ly͠*tG9t^k3 0ex* v[>ֶr#HK.La'EI ޿ ZrG({ }c2KS 21[yj“"l%8=; x".\[:8~|~-ZqM٠M-eRۜ[>.I>aNcCϺ5u`2=5l ugLͪϵ9A"gI56kݮ6L]`½KfmHxqph ⠫ٿn.[r=b'f F Ĕ?G,?~7Ȍ\z̐ i !Q5e `O~m6-=  R2j( gz5}=~8UU2hTUfĽ{[Lw]Au96`"nqr~{iw|4܌͛Z F/8bK1J&S\..zce2zfx`^?*L2xH8.EzhKKّf3)-A W< 0Ij2H&c\.Н;y1^ f/9ѧR M;ۆ0f'Blr%^ϰ9 ao/*Q,vSF㺻whn|tApY)3c} `8 a@5orE`6 kKL_eaB/|x# dn Y-F@ ʭ ä` W@$IB/&j_ҫ&.h߷C(-/!y(L*q&ḺN*/ˈC|in?pj+ĉQv -AlV$<+_f=v -b3U|]|S[ZsNt['ym),38nu%6"wYi^ qg ~ I`=79TUX?b3qXXEh^ֵ8;.o>bФ$}"Ŏ2RRl!%]KF[O_UlmՐQ;hUȹXrqݔKQ1L]8yL~G?W걣 ɰyWTIyi|K8*ism/>pX.Qi0x)5$3Lyv9\2 kNHϋ 6 -8kFy>Ƀxskv , V^K@rT.!ٯxMjS*,ax꾃U٭!>65GY (pG]tΏc^Q(jz5Vo݊8:iL | %*" .}l. K `}ШݣXv-VяL\&j"T玎$k^ǟ7R3*#!cgB]^Ҫn ÚF#;n[| 9e8E X egY*Gxvuq*㭷- #7gxˆV?(b 3g ѽ}w7&mF46&|Ix81إKqlαƂ bv`'$y.9+ԮR= MaJE,B>US6뙃܀ɂcllzg`L -2q_E`!,fxAU}ĵ%!{|  :0Msͅ60U\Ȅ\x"@f){4&&/K@vR%6_9B8CzAKMA _B~~W<D2[VݿQu*oKŲ*}.:r&u&Hm9~>#s+!ܐ/qlot]guykyc @ sΣ:GSh 2 頛e"$^c{?Y"Lyw3^r'$ _ |I,q ڦ!v  k G(%u&.qOQ )E!΋< CJx?Oمky2 ._ /ZfՐ ];ީ9p R2]܊T_ᾜ=6aBgVq)6;>χTHg3|C(bpx+|&Z8O3#&7XЇ9NHAL|.2>[gt,t!&;!]`ls>$K\gN'i\a-_,&,vRGŐq%b`0͐$0_&`#D%u$C$rN=p+8"sx'>99j@Fu%sQspX3ˆeCbX\ Ffᡡۍo6ET"9:%<,)egQ,>boT2r嘰Ղ^qbظdx`6pFFJ]n4̋9ZHbc"BB0&p͹*$q63{ ?} o BA1BfY0f@J'|B ɇU,靷4T].s%K]sf/"`ޓ Y3`&!, WU. 
f _L3cG=cu`1xqS>X1V=3:Z\"B0.({SidaP#A0iZ"qs/0)s f:[ ,K -W9~Ua]ƗY ~ꧾk_+,yI6Iǯ.4#%ʜk!oY-Y)`q {󗸦O(g1xP)B,qZKUUߛdYQ}q$#勐q%X2{MK"Ғ%>=ԸF?2X2lS?< w?k/c;﬒vwY>ܴ䁢k^aGֿ$YD!ӞXWBW{`$y^*6׿kYLag1j \+Ylrk<9oXLxilp#u狖k;U `gy =\au{{T|; x3zXixzH5X/c%8^/3yFfN~q]xoK% _?~xb Ew~7q;R X^2r+7㺚%ģ.GGR̊,"]ҽ2:F>/5R HcBHP;¡\23 S=Ȁ1".@iU014`@6JKV’%' Wru\%Ye )(Y1d|֒s'"Sb "EVs2fŶx iuVT&kzˬJI=/Ax-orXɼs׾ܟ{ ?~-γ,+8ƶ~ cuh!w-< b$AzHRĂx6h_m.#%J <;b% R2|HEJk 3׍P B"EKԭ$NԲ"-6ܗϾW{x $GHEuVsHP?U@{YB21pj4^:^c!uJvY^[\V=KxUq,='s!2yI6D7^{v$K[aEk9y?k .A{n-%ṤʦԯrIKBIns\6>;:\\H"s2y<[j+ %y\4? 'ɜ1~Jq^/zJ6xEEYLZRrq:˱#E& Tŵ;4D A_ .P5Ƒ nQv:#|Ɖ9l$qf%h%KQ2ۜ{br E$e$_x hwS } ܗ( c8 1xZZddL"^!\{nݚH(+E$#$9ewSfߒ1#^ Ȓ$DBJHBk4':QێKʽ|ђ`\ue$!%-loO>̅^!VڕKEƕ$~>sETLV8`lJ5IxMK NIs [K NE21{0%VIUmW} 0v9U,ro sUExu d¥ 3W)]}#VW]PkC"țXFtrIߌs]8ӂG&Sa D"gĪs%UM~?1՚%(Y]gȽruWKQH2 *#$O| _ur彰boKT ?,1y'wyϜeHC[L#}B.~Ku,3K]W<մTm7;@%.N D#ffw!M~>\&ꤔSk]&]^z()daFmS0qRR ጞ8|tp7K%{ C$=s垥D.U>7K ';R"dԨwXpϓ IDATsZEEqM0N;dgqǕT=8ϜlE䝐ϋZM8ŧ8Ϝb^ݔjhxǧ:F |8 S"bK}JKqKQ( ++;=1:8r֛o{/ՊI6 S4?;åPv0&Ki nE^H1$W06v0N1Yr8^A3r|oLyg×<$NE/;_7$($,ϪY0kYzQ;o+w3Lg[rOV dΒrRY`nL.xQUmU!LA*HiD}tk Xrc﹤Jt +?t{h:Ipb]eJƺIN]BA IKpOޥ\o\H:KefElo?k/fnio71Wo Lr(j W.ksmp481ِˆ oCvZT[u (g2EuBr>bFuK? Z|`Q<`Rgl, -yr#DbkHjnp9Ld"?g̉+4p_ .JC&[kv`)!y'Lm`I<կd嗥__[ ,3?lؽ]X)rU.Gl pV1S2d`e*X\g񕶸FQlgg 'On /ga{5SdY }ԽK4,5w\rYT4rJNmuZHc|Pu˪H,(s:rKL $sL:ZTz/C2&h]gXH1!:`6Px󾇔Ȏ8<-\g/{b:;ˣGeKɈ8G[,D#T$d+ 3G]U$-<܃~s (mrnmt]&R yZUv rQF UWB6}}lsR)”^U%ZM{{iNFL?#=rn{S!˿R {7?-EMf<lmΝhCYLbBHю%^p6strB4tUItԐH/gIYdAaes}+AHҒl-  " ?$ȁCKᒅsAG/[L/Y&z@NZg Rw]eNncK2SW]fRPu .ǵI(F%TS}|fگ ܹܯWWE~enýħ:sk@gsPK Hpy;2q85RgePڜG Y;<=M̽Z`Kﻵ>\)yuj|nQ .#u$֞/ynj.=vmۋh;,[ʖx]+\UK."1>(so_ޕ<}ڝ .H|ՔF sZ6eQdx{.IOJ$uC )9^^sd.qm;;ǐ{ Uw1Qe_<ﱆDu䞕9_)<XD~]^ bQi7_*“`wqA:;~ݎ`;,AأbIU$|>ս^#Ewm {?"`~fBkEP"[R,A` tz=v,1rt:%`sH\;;;+$0@'Gٿ^?%%WzZ GEKPG0ލm޽-0|"bE.\/|+;e++}DKBRHF:=< =FUEL>&zVyA,6PJG)q$%"WpR7oě\CZYzkƱ%@Ϳն[ۿD-ѻBݟ-&REx`gd΄H~q]%EJ z]ْE_$Hw>RT=(ZRcHR{]GdykuA 7T cq0Hܶ}Wm# !HGJZHEtpe\vO?>q/w ykbݾ.!ajI̸/-uPk\Hdy`}{dA${?cB(ύ1s=#79g#dlu>?syYS<Ɩ‚5m}:/CDXqSjHDdMϙtV9U~F{-AQl!LL)n ] `Hqtu0~l/z=Ez4$(eRz1'Y8A*Bb7OH ?\JȔJ+lK](DxϕF>ժx# TAe _}+yqwվ^7i\!Wkkq^w9>|:w_>? !Jx`3!a}1v *\{IRrcW ֋O IYVh.#<9~1HTG?d.+X.!I`EvjA%^\5tNZRY$M ;5;H猫 KܬUD;xs?*]ᅣOϺ.S.GO0A*20ܺTrKP cp&O U-ɅxY1Pe\\窲#hN{A9b%m9Ze%uuUt/-w.;S_iZz YJ`n B78 cd\gX=p")[Mlnpj_-F{6)8Tz̄HVIۖ.@dan &"".ٔYuSgVIH4];rEK d]%5 Oq^2\\GDAr_9^IN{%qx YxSJއ%>Z*vM, \rե;u%%<"ѨH%9˜qʜ9sn4bёͱrPKKh7}Qq(stҹ(1+bg\D_7~oaP:GA!db؈ݘ4[y7H vIakHfw)UgH>yKUK"&|,8 s#?MT5o0/R?s)*FkxIs 5"+v56\ 8{H2z9XIVS:!cKppur*眏xmPGF}B%:.Г/SXV-ɡ:ʀ,XE-:֔U]r؟3Cҗ^Vk K?F >.3q$6ȟ+T%qhH8uZ}^˄ C9BL"]cP}wNhU1ALv- pce1[V &ܣ :`BBpK8E2'X2 )=$9NWZѳ%H 8%wE$brK{,㘳]pΏw'.8g NQڹ :#);}\ҽ{QE*80=(sNa;zbQI<(l.I;D>I &*!v92*Tk/9ѷFòE* p3;߾b򯟴=AÖ?f!ztFFq|bI+C;[]5O28H!,"ARdBNEI2țt/Z&; $g[.UX-@u|7 MA]!BI< Y%9)р%#UEeRzTYlF?@rsy&NR%Āw) ϮZ"ͷ hc4㥗pfժ!{}oܶwz(&8_Grija6rϴ<58Df܋5~W*vc$i_%.Uu\c۲$!+*9'LpIY9@Tg/J}Fx,%aJp*{$uawt9&u~߳E(s^0`I>l6ymL|;vh)*k$`93g5Svx/ A3>9u9fdgη9 A X&Rsxgs9uϻXLpJm;'Wk5k4Dϗҽ(7/ub_tv huo=o2 o dT+": (Ύ$n)V2C.?:?ɵr=MH9%6yט,9q1}? rNVP a$mo%>fոMʻq7@=d |k`0$.DA<ȭ3 YRP2vKJ,wm %"YU,$Mkٟ>m';%uHl8tWD2HĽ]1SI.ջ0,>pBVP"KOӉ-8 E!s"Z[3uY;n.2YZ >!UۼJ/=^U\o®ɲ c8Jn~u$> 1F^wٳ__ު+eCJz-XGXdRUizIv:xv(*fNOH',By#zgUQv2K!~PLuKtӐ8 n%RSrŮ1QewW9)bpR鈁F ξEb*SpԆJ̎Cz VyVtZT]`' VzjeAA .0w;C8Gt Zgjugq40G*E. 
T* >h L'uL Ȗ_`caCW]ÿ"yt^kՀ|>rSbϧ'J%IqI{Li8Og"11 p>Q@ G yў`NP%^s i檯3_v HYH82Jn—Ldbw{.˗X& _FRQZyˈ;uY ($UUA<}W Ukv2ʡKh,'Ϗ>\UJU,02AX P n+u$l!I # WoqW^?_&L6Y8H39Rk6@¿<7 BF!lk72cZcr.HM'!a`b9$g14=`RTB<+vܗd{=@"\/h$YuAb_$q?Cr(!wëʾ5_@4,ٱ7O5%>nk1Qib IDAT2TwQ>Rs NfUg/\`v Ny- x%\pZ2Yew,q%|k6 1(n=$"%8M@}ɧ91?̪͘ψL _ER/+:+vC<4oui^brwKLTLA6בT@sp Xr9γ$U?(G_NEċ͹Pyu?\郞A'0v;")KK\Rb|ΌF f*5ʀD'2/Oy0n7_ͷx ;@q6BL"j0P#7k}B `"W? &@K :\C"v{SkA\Бw(8^1D}h^$`&$#.ĄI dD\rASN~D2:rm{@*s]}OL fdtUd2a-yAer,6U*}>n=7wOq܂ p!T\a,Էf hd&Oros<"7T#Jp ^e51"DE Slmض nRl2P[^2IŸ9Fpo j"-$7# qyp}87C5I2 D2+.`ae v9H|dj `'H! Q*TThoX^i?`v)>sΫq]|XqylI"_Ct`}P`_|%<|xa!uUp(v)`gGFzK.GDjf̂LMGA`w-'CWXy>4e|] !%iLxa\@p_ u/e*"DXeIr&TUGU1XЖ*øvGHwE!u5s-: *5By֑%KW˗&/Z:_OZSAdk{8IBg>/-2dP3Fr!.PM xv:H\E rە܌ܹ`M(;$>=Dm׫* XtUXa!Y-Vܓ-YrX?s*VGq(,~dD=}DUNu :K N;5DdSk`c[!͟?wLwuMNE9$&0 tqE/1A> &c7 !!vpP9#2g{.){'e" ɑk:@-NGJD#59 >B2|-"ytcȓ?Rp(̃|8C8SpqZ]t;!47r9u\DG9gW"a<#u3oD^\ r5zv1;+kc<;pڨR  2óPpgK_cC_Y}YyU⳦Պ~;zA}  4mW{ M UDY;R肣1 {3$B%Bv?@4?' .5|A*&;!52; ?A"yJbHu4y]wI!I .ɉXZ}&;- >Ie柷bΖwsYBcUc%Vqe$iOyo ^x+AH\ (l\wCL(\vHEmUw95u D@3ul*u4BD]zMUy5;DH3I "ϥyGݣ.q=pmk;y +~>$>\%^${4ټs_AJC&z(!/n3oL;j}&()vH數&sy<$_'9o5|p$p_g9c~a9B} y9U~te0 $uۇ<1u@4L$便uU';̲BR!f`6lrl6 p㵑ie 0`6E@1u#);8W,JwmA{xtw8:_qf6V~RP `UCŸ\??\W;**s9\%2ΗC"XHY|Yȁ[%VWt4,UVb~ATQ|!]J)1SVcKˮ 5Y`rHrE}c[]'U U%AX=,ͷf1ݽK^'[K#TAL 5Wzn30u4>HzߟLL., n⽭;XōE<GS]# Zfhc`2tZcL%6J`8K܎RQ Q*:~Q]/a ӎs؃,/s|6U\|y%EwrzTa]!.TmU-I"VSϪ98'( K}jSe$m*\KvI iV9ϑGX˺`)yݷ-%8(H*IňL]?3?&~{ݿ{?:kmVu| y`C)UA4Vk*rp~JVhnCu̴{:t^%_C`U U)u|{H.RPFu2]Kܚ*#h3Kԑ;$OIysdXc9fP|@G R| >W} HTRwWP /CfBss\Y^>+݂!A ʈ5v: 7-w<8Pwk Bh|,jEŅUcyu\8*$?{ψgmԽYg EҗS_`B8"PC& +9W3DF4kdb':I{ #UnpNß8^1YL _| >H˅11"IV t3'r^J.TfV .yR TpA!% 2j%zo xN)R\W݇'Te>D"h.d'eAZ_˿/};~);]v *X keMށ5W( o/jXuSP%%:~}}A> YҿuL?b7mђ4>ǰj.WcuDyQ|@mC;uZq_Ơrdjc* r\m&FDA($WksQr-ߥ(\z{Oe{qΗy<3O#(mXDO>1Iq>i:y9.j}99d97 6uL?a=9`swcP@`G{;DxcF*P"X4TĎEo.:˜q8}sNv39cHU"RXbJz0QI{R-UKET\`N(e#l_tfYݳZH: \̢ ,sec]Al-rvP cWN5& K~!yT30aoe|Xn)uxldW a@@hH,nڵ1%.ͲW,C]G9+-VhStP2Y <%9ϱoɨa{Pgl` cSKҘ"E, & {{k8.ȑ=KFG-t ɘ<XJM3vFf-ܯ0D"ژH-Xwm EǞwD K\*X)kTEi_KܡϢ*0sVԮٌ7wRhMjzéh}s~?g{α?9?S7_B 墭`4R% چt)>cLr&\:_<|rT00mpUWYS~.\SDn}FR hY*t I 5.{ki=勵l<%nk! V:CHʔҎ9EOqjzH=z}_D¿{+l d)'0m׮:U.|A=juC! S5tDGHqDFujL.2B:\Jj._F>h9!x.Qn Y_5Ե3ާ9YO3$(I1=<%:Q~$x~9A-9 ^ ȼr{C?LRК U[tг:D..u|D۔>jr}\CvZJ|mj.ծ]K2b)w;S19:i);%kN q 7ᓬO/T@Iuun,Tp񺔙 qM:f:%t)Ts}q?;6dN`<lg6V]sQQ9SEJƇ1g1>@go܈??az5Ds%G'3d\l}ps>_Q=~)Q6 :9W.;ҌK׌A)ppK4,A+J|ajHN.Z"2 5b0p*tg )h1dP/ xwL x#0xpÎ{f4`b5$ ]W<`E8o@ Vg-"]vՀ)rMWl|[0;@=Kݜ5D|A$w?`Cj9HxKU; os7,}`R5_᫖ 'KXc|׫b1%Wc85b Uuθ.Ỹ#KxLL^xXɖS{Hն]#. +o+Ȝcdut.YRHVdbA |mx#&E^[b.PRҒ&ZcvAZe%W&.?;k 3 qFX`#HygR0y9q̟|FTy1/"vm}yfn*?#q]0q͞K(92v*N}yι\tQ|Re4Ylki@dXa㋰=9Bޝ[k'u':?xo+;cf82P'C/*?`ZiHD@K,ꂃs*z%uArgVAJ[c+~#DըoURw^|9Vٗ.wC2\r$0NNWF|8%VfFHCkvuh"ŖElE<2B%^y 8>ux=~~|H_fTDd!2>ƪ*ҭ+9G1`]ce}qcc{pֈ0%59Cu8r 즈B+ܿ[4Ϫ{lD;yΩ "3A\O?+u,wkB R=H CT)=/3a1Vr08-s- 3`RYφ=, I-m:UEW(Yۖ8^w9C}1ṋ!&H 8Bh+Q XCx) g}]AfG9uo3UKy.8Wۈ ĩ9&BYxќUaWvΎJQ+"SǜCXYr&RI܃$zr,cwyDWUs?z7,쀐fIe/ V˄r·rmr\,%jV1[3_O^7ne2k ǵyAr=[-ж< R\FD1=#g ~]Dz՛ IDAT*:8#R̙w\&XS{Dݬ[ܛxdڷ{HE;H§%EKx]Kк?>:3,/21\9Ow񺷹$hU[܃x@$A]Q ,v pƀ H!E0y Rc[v Uw,u1$=ΣΟDԨrSq:b >'u~_v`*1q:bXeXpdX6eR8a/1Y#% s#ǺcI=JmC>K8^s`QXA\8ö8Ai@+FBO%*_l֡XD+ !BX)`֬ChXYܭ"`<>՘q(po\9'4id@`nk Xe~r.Ό&;ǽbj4Y3 tƊf/Bis:aunGY h >iJ S. 
z'E&9&`%0{א eMq`x7wNﳿFug}mzGe\ɑjFysgƑY QIȮ7ﯙ̅rE˙tXHO|z54"sХL[&.E5f,)zWnCӀ(!({"ɂDP+w&2ƒ^4 4xUqcҡ1*$?#HyygVQ!9b.re$*Qf{E<2 L!(9NaNz'5Knx), &B-I4+Ih{mc"0Mf"h}Ռ4b%~%yO /(3\2D\~[EcR(Ϙ/fSI!hhKRQr dкfM 7P+ `ƴ=gY/rTrqT "0rՍlt #C]* ,&?:%%y?9a $-3M P:=ef%\o8o*޳v")Mumӹz=o02@gkeVrʛsEL3Lpo+9?46i)3k=gM#KS{%[/n}Wz>ȍ}z}v7Z}:}N܆>6q|ou=<}k,v˻}mկ|8V7 _\x͙=m6R˅4娰 bRR+:tsU|K_ķ?Hn4T͇YtoLe &L4,"C 6HXGs qhudEpYCR.DeJʆ,i8S1*#+B{eRR?!>elG/0L(̲/o2j# ?BqwqDwXu_x ΁ MrTM:QBu}ִ^H5B&Lj h8ĝ+\b*WY0ei,פdcNME]LCn_Xu1X8˾sXO==G · YS6syq5A N3 ^0Ihܙ=r?XEM!x زIPpNYݑ E\pDts8=e)>{kjZ֭^F>XoeM>w3{%!s^ݻ0o⽄ r.X 4v>m5"#Lu<$rlȝA)"MFb)2 H>0tLvmX!z! }`kq}XN 28Um$ąrzrG~:ًSD\2wǎMqlD-5 \Vi|J{d4HaCViч9WgDXNYRBL#+Dd; ЙUdN(?ӈcIs e0uc-!S~#Ъ."PcWz*BTeqcCG eh$> 5xe>9vS}ZnCK&׭p.c!G8 xxC\é2tY2 \[Uak<=[7M>2wLrVcVSsxR0kKkOE7RH"7p7y}j[nufrn}v+[n7:6b 4)紡+"=A{ctYZM#ϯʁ06DHȟ"a)Jg4l/faN@B fX&i +8gB_#֣e^Kec()Lj*WSKqQtoQ\V&E8p9du/R?{&a "b"ي1eT(q!:k0i IJ1FLTZ4#co2ұ)od{'sѽf,0KNv:.⬩Y5T?W;ZU4<焊OJq-!1"L` nuvIgUi^ıQ}MLk.DӜ+3a_k/ʡxfzX{n8Y{HgLBr[5 !4c)Iux۴EE9ې15E?aO 6IbqݮG]B1uEe/bB|TTy|DF"H$XYqp. J|}84 E,-pBj;PoG?a7VF`ZD( &#!8 QCДPF~~A_`K2q3X|A3f ̽5x&VeHvH7mFAL\m&͌W٬)q'؈5dĺ-*Lt}팏팉1E"p!2`t Mvnt6njϠcSd[#J,bumMs|`هomp Xb΃zrWc½aMc=ɇRoqd+foo[soLf؝ lrnqMDv1p}~|3!;z7rf"g_>*[gLE ytda!hG@a4W('zF%w7Ţ4&6?]6}V,by n)V e(` hLYhzai8%s:cb \i 7qaŔ!T*dM<>\tlQ$F;Je m00GLvPhcyyQpRf 33I }29*.PXBD\s)ˋpD"T*DtH&3h}>_C>qb YJȡZ(Q<>]qL ܳBdJeL7i=h$\ =S1p 1kf'Xb#bV ơ{w'Q^Bc-M78.*V-i&)6E}|- xe iJKQI\?5Ģe\#I64*mРFFG\o@ac21 ˎh(.2 ULJBaKt)ÎfÞ_d 򈋢;>(i˶-c;a2RN!"W0eUۈGs%λ[}:"":fQpdC>WS.>Z4 27:>0g놇H&066|mۺX[Ғw/x?^~y'ZQ_s_usPFB/r;E3g|87|ً!nr [d;"d-iY(}fMs aT}`R UG(^:z)sGziJ{hx­w#1i5h4ř1. FZt|O{2<⃞%R ebUsʱƵ֌2Y"..#vPQ|YҜW#2!k>fA^g'#{^K9V"}씀%Țv2;`6áqMr--PҾmopuH̵:YCNt0ޭ>cێp0ޯ}~2vDEiͫь4 pak4($^ݐݐ#"1^ov!Pfcm?Ȉ-܍ho1Rb/4e 9m%Kv~~Y`Nَ q0c=Ba< ԼsIH4\>\ O&W6-$%f*s56\ &=PĿ_A3:7Iu9ဵ(vF.q,1o 2cqyWt[<%FW=iLI 1:y5H{XA^>ck܋Fu̇7>\3{`XfL |*זSdT#u|hs&렚ge'9k>@rﴋa_@*q(Τ,$'Ik}ǚcxb(g#C{U+Q(.ԛL"d}q>ѯjrK>fhfB-1CQH@LT]fI3mw>كoVgkkcw7s k|;Ϸ:3x}J﷪|[_|^zi/ΝK9Z%ĥy@+AF"R 02E`6b`>8u|ᐡa# 9et&#z pU+=FϤ }%@6泖IϬ3LH]A5''ϫb)9vb#Yʐ9~3f/f_ D%$bbBu>8CƓ%osZ e m UD1H4Y]&7Kodks >lќЮf+/O")1PM)~bד`f˶8|t 2\w n*q=f kyIW=W1,ޫ}ތs>ot>&;W_ys}n[GL9mmu..#в(JΈ5i EP-1lA+"Jۇ(6?Ǣ$)::ZF8C!3(nK1p 3"(ѐlM(54 6Uh8}(N zEݫر.^l~?rPLH$"_]uGNP(dطJRi j=o ΜyʿVT ݮsp.p0Lrd}њ2H*q|;e̓jF !g۸T9r/CsSUsx d2}wQ<1.V+?G ^C]F)E $(YYmqT&)ZY498bguB}qMD_} 8k7}ʇڐy¨hT] 0^dj>ۥ]pXuAQAYYzt]ZzzV(#2>8yǴ@ ;ZZҞؤ)#և:\ hpmq, iԋ:\Y5<0esa9>Cb9n>>L]&1gDYy؇ٖ:BP3)r937q}6u+e06c \(ѿҫ]Zk_ahyV4t}RTgw6`8lY\ M.>-JJ9/ؙo_r8t$ɚ$Q& E"袩G(<\0F*#F ƿ7 (fWxkxH$QE|2x}goa,bn>!::ǨǛn%< k#Nù;}z+w ?^B*Uÿdo|1<8ZgUqYR+T t-]Wr`/p,sѰi|YKXb?I̸224dO}>U'?y=4ANȑ?/XCnq1?8rd7>r9>i|^FP/ѩtB\(0ڪUH5ihܓ@C1%R9ff#$i`f &&r=~ 8{6"2t;|NM;xee9SA@fQb[ B̛EH@rhT)˸3Yٶ\D4ꛌoc$h>9fV]H CiGl/Gεcbf8$t^T%*`gm}:us#4>S)۸@'⾢zU "X-TY;}YYՄTyfim8w!+$)v/ ך̦T $)Mr>0\y~fXb]{`;}QCh+}@VNڻ.$A\ "@km! 
q `=ăjt/ S-~zDM9#HB۳H6s>*}g%7HeכE+@LZBлx]z*.M\K'~b>GF׿>^(9[DkH 5)>65OŎUl߾~z.^܉ptFts:+'|0;"4!E\`/ DsZl倥Jc3Ee*/zyؾ \DCQ,|>pQ-$=LL,ywIo[0IHn!pOLd7؃o}ko1DǯfS!CD*I[lp7WD}u.sƸU~(fF2iC_]>isQgD`|J*3`y:ܟ‘639:  {FB&/O'fjFIͼ šL)aN :-gUOfM D1ʖRQiߥ!r ͩ.Ɋ !^WM 梏3 )mrl{iğI>u>q,lks!ޓƹK8Z莴of[a??}|"v+A{!:,DDd@ sڰ!5FdJ,;BJdE")_znt1Dg;%˼42gwӠaf8ߓ4xb|>|f>AXDK/y|9t]q#<ط T'x716\zNj8m*!PG*b>`b[^o;azzvtP }C Ey;?SA(}kə97K#[ DZ @}߽={ 2& "Z-^Dbjej\nb||{\@RBgq|̿:Ģ:,w17'݅, o JA.e*xKδ`2=~CT`|1dAԙ=w!"迠p mv VO-+XFi"؇N1Q7Aꤟ*kRa:jZP(1>}@34qy,~r1s!kd@αiAV* 7")'e–f?]iNw3;iOEQmp~,qM_F/o+bd>(&w@&c~ng9%~CYxh#-(:kb) W9溃`wUIF#_c4n?E3Gi0y4@{xPSF2ekՄ+7w~gGSD_gcffz5u );:FM03͹ ߊ1fML3};9XDOXY}#vN~r͗#pFoU>9mąuOSsS *](`2נְ֨cl넏AK<@*j5$6 IHâ1\&e2>R.wƉp)4k 4 9ά<İV $s1كf*gϪk#hU,nmiX ߭ @'5^w]Swf0{n2R߽wN;y>>+b͉۝$@dC|_X߼1*ұ å )!AP*z%~^4A~]&Ч^?EdO!@&2e`}/05UF07_eO >k.LK3#m%u~11FoDZc?_@\e}q<,{—t#X@7ȠIjN c1fc|~KQp\8 >CE0)ĉ,4$&:fU pع8>}~'NLw$.;?h,ЈS0 |QmW~xС7w8 z"8ׁx'7O\15:wفEvk"P֙u8Oq89"q$u.ߌyr0YԆeޘ^i;kƀ'n{ f5@ T߁|%M&g|.2U `j !KF_GG9zDCP6N+xY|'Cgm43~NO΢d{mș:fbko9Z0' 5n##OpC>b(&c&Jld`@(`TZ9c 9 I!hWD[e0s*EP U 0IwND᫚ 8t5ʯ`++S;_C v! Xxǎ&>\xq78rDh%E kd?cb>ً*_CpA2Ԥ[7% -CB,JwCZ-茊e|9S~ *zjO?CU4]4S>ߞo ۭ"@LT]3G `7\8zt9 ě?}nᓟ\C?|/(^|q/Hx=:q?}pA|E+,r>x1^2gk0 + Vӂ 2pY33k2Q**O>UqК"֘X9hk+Ec@6D8EyNdMT\"_O_Hg?3"=VL?o yS$g9eO" Yљ6a+a2G2dLm:aY4 $t6P޴_c}VsaMGqca bn7N9j=ߝ$akU{ϻMop}XϸN;gw;D7~_cn~N3nϗ!Q^VF(b#.b u bwJ#'gM J:.~˰q.Â,a pAaDv)5r"GR#~md{y<:CX],E%.#qhگ5ԔGWt܇?Xd97YZIs.^~ypNFT1炋ޣ4dT'ؓ1r1nb2=g](͢TJg~|C8zt?ܗ|;zw8h9WGqwM0]5Gll'D %?~>3ι{anRE70_Ľ.`nnņ_cA*Uv#h6IzB6fnׂvzA HP!&Z⽫1}OYD0Oe&'?v,m+g^u1ɂ ?k0ٟ,&邖Q -j`@ U&jdp V IDATtM,FɅ"=Jps=6`? UC#3nkk6?Ek׬ijM9kׅ~$C 6B6sw83.H#:vM0HD-mSxgQ7ck_ƭPxL7:~vluw w;~6D"c Xwb7CAm<ӖWS<{Ѿ唼 Dkic(D?6ר #gpj;dUmQMTfh}?A*}hg( =nmH8ΖC~'ukIx4908|xXMT_B pn0|r>'v:44|M4+4Rt,zU9| s.l!%]Oo:.{!|?|g(ZC "h@dY{qڎPq.v,͠+ȑGp"zw2Z'8xଟN,="XìJBM>ǻ99O{, ˕hKzVC.Љ(9֕bCCm5j,.D؞`0N]MͰ4 S5MPEyrFqZor%ηlT 4簜 - 0 *R.M³d41YG^]gT{tvdzGD9?5꛲7:r݌7s=cF&>A6H%6ц+D7M:m&>Gwp7Zg\#My3kF26I"9p^xi>\cL6C4&i}"dL 1<(`@p4yܨS_OP.| 9@ps.Vq\0f*4 RxzN:T prqimdCb^dgE.@Hw@#B_)w*>#x≓H&xajsӥLv뒢ń82bXXޅb]ez.^^.f0uwݎ酻5:5Z{.q}tGF޽VKGo64`*Z|qLZ(beF#L[vaI_GVtf-]Εlx}+sQf)2F O4Y$VBбHsiRV8}7i%#,\lU]0(PAlCd:I9c}^Y=aVF$aߡ6K8C5&I_GfoP%yrAQ}$^DGP?8\p=zޅ@2.Om8n&w{䁼e:nuTvL.}-VVE7Qv7gliqN8D&w2_͘9+^5u"'X4$g !:&Xɜ }Sla9V ?GopG&7AYF(c= {Rą jCNN#0dޅj ޔw*_cT:wƜCRF 3;Kd?meM|NGiDW\ľ};B'ډyDeF3Fh p5dܟ8QΝػ J>.rgΌfs3G9ry =}StBP( w/2y&*]Xvh<6 +*!vtd5 `SW:-sׅY1> &T׵[w9eDl$۸o#Etkp3Ra?Lo̽J cj=CEsLԪPD Ad #9b4oѾI"̐#1:a92B;i"(s: X\ƻJ$`5\D&L{F Aqj3O\n}ѸMLV|=os@/"ԥ7L_)PD*Cq#2riPHG0|f5qc`HVƴ26@Ca}#-FS pG}};wp-xa!݋ /B^|s=\{s;#<`_bvhUiS c;%|+_91>A1$ cC 1uoI3{8'p*]8O<\r^gFK5ꖏ3"uTk|m qyADm{tV >PB`ª|e۷?o~gEϜP7Yq++_ࡇNcd(jr9%pQTsm; jXf} B(Xa.8>6Uy1gZQt\0E5؈Љq(qO9l( ڃ} -q( zghb\(-YY:`[~=@57tߛo^`[ԞRݪxٿ}sN7>;ϛpٸ_>L=Vֻ{|%G1s; QQ22-+j-lGY$o"RpA;"&x J{ ;>M3ăgZ& {Y8uxyxn,T1Zo8p NmOX_Am/ueXbXb{zxꩳB #Ht^^${h("d/@3?p8$| i0%A@C@Ȣ,~ux4ݻ/$WP2b*cP(8~k"Q]Pu~oY.ǹtطM:u[X(AxaϜqܳ▖&w% 7/cΨ^jH̓.-oh1qNm#ݦ^v|ֵj1Kƾ؃tdD+?ߤc_uSvA2KqL[p1lž}0k( CB%imHK"Ff"uctwxmXZ@DWBև:ifM%urV&EɅz>ǸA&8IDk-CU6=3#}B`[A9uq>'X_b&ow#oܾa1n4n`l4މ>(}k |ށ>-:kv7$)*ZƮR&bct8~Ձ2Q2v^C`J(DRQfBJfAf00}t*OEW_<=U)6wU𻿻(b-ȾJ_cAlE᳟= 33c8vOĬ  9AfMޢ^Ϲv;/$#,\B y|΅ʉ0ف72K<^xaw?f]ANQ'+|}<Emc{O!:>ljrse XQ\<2բ{!eESn|jWw#4&z>6e'}`, W,e &-E-YSP6QQ;'}H v#Tߝsm#ԴBxF.; A<ƹ^.vti̞b Fk2c!7y2}Ujm}g+@:dY[Wzo"TmsґC;\kĹ%tlȬo΅\@dF 0mP2& NDd:8,JtI잾'lۘɸ}|uF2|6{&7USn }v>wQɟV&!UX{Y(E<.1eGH*L]@TQ<ĥxCP?8쓑f\!(Oa߾y:(J?}tr PgCع3_}_=^C^0 U8vC&2ad2@"?~éS!1c"ix*a#.w9pP><&'ŝo.\,Φ*=p.خ)]E0$VtH]0{Et2Fh yPɓ=4ESxX\:| *%&/ù<1o;s7]E3c6]>:`T#$êoLqv8U&8ǎI3h[nKnYnU:#0ǿof8|x/^}.!4Š4Sqxv43<5\``~>pO>y ;wְإhaKn}Gލ}0x-UShU3Xf "d8 ړOF̎d\(CXδ2.@ 
=Q1B 75)5CPv5i26*b4f?k{c: PFDXT҆y!hȑ1a(ٷT4kgJQoCZ,UC5dҊM⌙#253wyfRPo6iomvMƥtp]_rǼUd'fOW.8 "+ͯTy~52*] 5]g}]7ʨt#}VV*zzдk=g\:lq |=6er#6ͣwiFR>dľ䨮+in F5~ׅGqw z }# -ea, 4C :R3<{3_Vb\"bE ?2ܙ3E8 ۰bhwa_{177#,dq aA=(uC2mۋ_&Z&o /ղN^RaÌtEvݹ)%G~O}N ?.^qWgd>Ϛy#.CJ|r]ϙ:av Ast6=!m .6z>bHZ˨| mñ% q)`&W؆Ȭ99<ɅB">(v= i8 rY s nޛzkg>}zwԀ$4!Ib*8Ħ<`"ĉ/RW媯RJ)8  ql؀B@ IHBԷ}^sNun>޽k)4JGcK:vljig a 5)mk= ܇RGҴy>; #Vەww]WZ%^|oc\~g%^=1iYQ[=oq,,5ї}9k!cHHW, fgX]mQ z5:{8w@x/,cj*?ߎz!ٟn,jA'HeK`~?WVя|4]1OrliĢfRrm!PN_!y=⅏SM9RZ2i (R13Sw_p2~ʃR0;;}Zi?^#8p`FG5y6x{9zVβ?#"̈'Ns沏)O t ybDn7WZ9c%wLT$1Q'莩l-*v}BU-WݔMN.FfA=5s.-3LM:hV+i)oe#Ҏ`}? {||NVHU#c"~CF W̚H._2{j mbX/]}9ɏ35JK/^|="| IDATrWj|kp< Vs7? ~cJTj|z$6 EX$Ruy UERUE<+x %8W hRg߿5kl,(FFG^ӧn! sČޭ&_^s?{qlYk'jH ꈅ #%.#β+ط{n?>WĀec#>;3sa=+f܀HQwN 1qRIEh6H l\;^}no}zjĀ&ffnFGx4Mܠ%M=neR0b=*wcY5τZG/ ,E[ SX{>PV ]I},|}v(RP.WѹK~>gGoa{vڱ:U=E&)E]?&(3)-I CHiS6f}9z;Wf\z{n$}ュ\<X/B+d=Jbx8;Os38 1yᜇU [n׽(^s^OcWOz?fgEiN84 brIF6>V1^A22򖇐 ]O}F]"К&L9ȁ~ZC.,PVvԶ%MjII+0:.W9WRCNg&ޖÇ _Oɬ+aqq8v>${ۋp㍧}X\85ު=1S Kpއ+JG%{dL}Ըy9 3YIಏy)fT )wИCy%ED2suF,@s:$#%aN#GȳemWT`T6CLTIsϳ Kac[5M),wm'cvDZfU;ccrHy_Ϩ]Se k"#zSɵw|T*rNepS浶-N ]6%cop1nq~{Mymn_,wbŗ6<|n]ݫϛs 7g?}}< 3?֕6t/pX;FJ@>\K. y#>WsI;Hȥ #rki0j*,yNot_tQHi<OOvTTe_xַj "?@/z cxӛN}V߉˫R4`w@Zb3)8}TrozӔ|;6n@PJCΉ gGd AE"9L&4j5Mo_Fnus [0yV!!e}+* )s3| ѝQ0ƞPp]}t|~"+MWpdC%\u NHGh>zSS)[\x<ᮻ.k^3g& H{1 FgY) ?-o|s$O)/E9P_@Ǿ WML*ihe3:b^h+B^4gX5vF <-#@,@NȶMJc(˽bxZN#gM;F4b fk( 6BM js\0  QEJ4sF6`mϑ  Mc˙Fꃳ gc`0qyY^,|3g$X;Ipvf:nYA^ cԉglx65>oX_qHڱ,%Z#Э k1Q<"m<]T.bD墄 yKX5|,4V3恳ԟĵ!ʳws |[RâD+R707fg_~OnE/SZC@!&KKW*Z5ޭѣh$hOE< ]L_/ 3CPZv9gyo8V}{[O\^©SFJm"D'1d4`B>J!_GhjL~}=Ngҗwz{h{Р9dR%*z=*D\QH)z]!5@e٣u6=J v\@9 )fᡇ})zqgpm8thO_11@Cv K jeAg{f:v(J۱JR6o" &HOqM5\EUGT+sZSm29+u\!ʛۉ9ϻT  8.R )gDjls^#Ahk<7I׫ Jl!{Dk8-bm[gX!0߫[?<^@\]}!iYɑ6Ͼ>)[~B_%^{ۙh+S=S.Vk] Izr<4T?7ÇK~?t {.Ћs7;C˸޺毹)4]tsh18+.o3l,u(~M$c`(rYS!'_߭2J&s1Rv7& Psłu]2b*TO R˻S(a"jԢ hT2JB}2ͳB]?.}Fkv'}ޭӫػg\>]>MA"7&J8D#F/,0D7<`$#MOފ@{/[?GرQo-UrKBj;|w>V> O^O P\?%O"xϦ1Yv@k=9 _-x䑆"Q$)HհA*"&*!ʀ1![>"Mimo@"YJ7-FF^\GQw016e\wI hˠ:`t%`'i\PN~jſsxQ? ${-yW!7h(w(k|#bTq.K ec{9~fǡ#(i1bB ^C9( #Ne?ZL4/rmU}|Ew~E9Ld3KqVIcE>8qT =E08A](0ՙʴj5tGqWGLGA9e#"F{b;9f}U`)GJC]0vUSMwb^SקU籾y/E"=_>tv4T_>iWh~58w{ZZxS>Jpx@Z5Gۀ )#UvI³_j S 1'2սN9M%4S'[D Wd PIx6= R΅#\ { N2ccxWg?{Ϭ -zˇe`[pib ^Z?G@$HArCP~k늹G.SL,s&9-RR ٲF=l^*nIIsNDM.*b=s=d8r(硈X%rŜ,ën֜_!۰* 1G9 Dnm;0M$HVyo"W% C;f\/PU0sb\:.U$gϓrt0E^]6y]`]# :qM0>V_&b݋]I}_I}v>/ic{9 n69Ԁ>.<l2J$iSBqchetXDB>|ʃxa}"$rkT4EG)Pޣ_}Mg<_SO= ꯞvǏ!j=oć?i 055:j^/ m3.})ufض**|*:8z`bՅq]y<)s%&x!KN(,G:q;XW+;{OZS>WdGߛ6k8occ7X'Nıc+lbC\QEs8~|g϶p,^3ַ* "6Ҧ.Sw{E|JdQ!8qx(T4$b u o|5$, 7]4L0HGf/KrvrTKεSyɛhAΕ*kߑR 5j3o.A_p4OEpEY p摔)B1˛X(q\LE}c:&R=u ^4dY.[v;I{`o؏5>\7mc$d~2ݚgc߃>A~#A[}{a|"&?<Ф"B\oqaX##|e ݣn" 4HJ|9t2xI+1ŔF&yr<H1$IW;gϘ:>GFs Z xǫwO| ?>GvDXL.̲=#Їh2%&?j{ *cA&E9Oq-:_aTEDcxYEg)+^"0M3t|uCb4e jG ay9bNSr>o(g7q#70qlLߚ )H#[@`xY-ﴞaj)`8P>k1Dի {r8.e jw.>3Jq\pq{]p/v! 
YiJKD`r<>\@d]%> ,>gR`DpOnb5=bެ{yS搦' o=$b۲5 &\$/!XK Cc4=Qߟw{/gpBGV5e*=H HB4 W *g bX_*7G[, ФdW]¿o}ժßKLRYB M@2 ~AYarr׌9ڏƺTGx,z>G-%#*(h (α*b@̷8L.(\"ֹBP5eѓBA7@x3m9m"eH"">@CUDAq4p--q}8+c-&ʙ(ɡJ̜;E><\,3s,Q3|׍T0F]b o n0B%#OIc( УvwnE_.fR[)׭Po2k7M=F>"c8dS'_ڏDZ>͒q}kf:~}lE_>wkf}}o .v=նhq5y3Ƀ](^sGo$5 {l/IFy0۪by3^6 4j!6޺蕨,uyv[۵Z%|8a܇_[>Q$w4s#< 9]kl 5x0K-hp Q؍R<1{Ss~%Nbee"X2'H?]LƻĹZ"EI Ǵ1^,"w= 2e-ULs4hT2\; )"7`u?\P>ݙ3)_~!i}/^JkX͢LTCftPuk"p1-㒕o8(OMQҹE#g:k@QsCA{G^|& 5> 5) knE#Лj.en<51I;" d3cT-lu]8|7u#="og$6E;.Jj/ Ћh'xS $lb<ϕ8Mӕ̞p'zf8gx,< FQ?+J\iT32ʽ)zԺ.@n?1.ָ l;6%n&Sx+~ܫp>IX c0w{ne]o`W>^/>N7[C^TVۘ-au^Y}'":e%r/*>eUQ鬏*=IC^.ur[B7DH!j< ^ >qqI*9X_u@@(s4 Dw?+8^~nm[-獵am/:U?iaj 2fT䊹 93c1KasOνէRom̰9nlr< /L8Z!(52,~KiOo&B'@M>A@*+d4,W~ew5N'/yla >I>\K{݄K.` i#yZU@_Q@U}>2jx:Yb )>Ki<2q.EhPJt15=A* ϴ j}b~CAGFjx{9J; FJN8ϣ4 uƅQk-G+eO14e}TbR$>R-s3뫊}ƴÿuk^d|gJc%d]V#I<#dhBJJc5J')Nf >ݰŰEejp1oQnr4/4Uh |qlJ{䕖@*zN.Ii^C`@c`QHE[E1 %Ê׭+GdPU#U!%BEACc*G493I58y4~g'Nҗ _wqPlɨ4Ճ'_%ʂ ݄^2.رAFDFΩݢ(3%R`l ˜SLUðn(3{ tk9wo\ _7\EB~`J}=Кl &T$ WDDN=e\+?izʢ2+ERN>3m?5X!iPj6UYHV<*AHWF62QdJfD )Od<ʉ]K%uM[Y_j1K #gR5_3y2mVƆ-CB1`=jF`80cP'g֗m1R:&eIB! ߲GIjԎ p})ȕخjnI}ƕ]u[޿cs%o-|QԔauQG<\ B/6C}ٙQ9x ҳ^0YNP[DwQrbW-_auu/zZm3 g8xࣸ瞻7$l#Hu|,Bn*س_+_Yr_as+j7gYDJ~a2bU !DS. Ke jevP$4({=}\O=G>pϢ^ I$XxaƓ;ʗ}'CAP;uvkޥ\彬~r摷Vl ˤeWMqekc @qHq& y*e%h*@>X'ac!z=;XA #uoL#JgH&墑g9ÜiKig8k/gX9?qHUTgiO 2Q{mVcU}]rRg4J_zў6[V5웽Z,bb<:t24/@BbU?L&ekMrm S~=HE%2DI25{%X_ FFJh4V7TRHI|k'w;l}γL}~}6Mat7$}oo6z69kۯO pՌErW@5B@!o[WfY7&!Bjۊ&!j=7 @l$99SQ M+{J};.`yY/ү`x??s~#HӲVG߀={rtah\@Hl R1e=ux_q9|=mC>SUL/1t5,C.TJߞMDI  '|1uL.DϺ(eH^װߊ :Y([w9\ufg3/bO>EWH}Ts ɣQMyƗ(TA:uM@$(U:XO-zc\yii%H-(v1RXCXvOmjY,!N,㚬TrtE:2G" 9oސ0#}6[ cxz +t~qxkql`La4 \1f%.R{d絖l|l9Λdk 8܎ ?3\du)ڸ50qtLc5r|t^WJ c/Gv>M3wz][zD6Ƥ͢g7.t9l/$zķZfןw, M}?Aj4"\T![Lt:$I>e1]ϝoi yU*Q%TeTmv۪{A&;Hf<]HOه5??h L _>"uiAc|h[y[RR$ja w i|4UntIϺԇb;u&}Le*>~EЖ%>k4"PO|9R2z6)&`*4@c:d0>&\y)T58> ]E+@C9W؟h#fg9ki 4;czMo%s\o곪Ճ"'ݰ9Tx&b,aw6nf n>{y/*6MihHq %3jh4O|9$TSguE.x[Ds.VN9+wל3QtJ).PAbݰQՐR]|'YEv!Ҙ(XLMqDfao,4냞>Ë۳.N2pwG5+}X3z{|a=Wx >l |~J(^"P>ߏ!x_@*ɨIq#J7c JD;Ƣ^nW(i)ʌ Hu.mcn..k_+`yJqDj%?CpNzj;vBꂗ+^^ NQփƒҋߠ?fe'7D)e4TqUFh-FY(&bq1a_QN)2uTF8bNI^E!%Cꧏv4qAw40\b*siELQ\d4w|L\nE2 oEZsVF8-kikf?Wab.NsHt*ȡ鳣AHI}z[=+o;P]H?u0Ҟݰg`] ^Vaj_;6>c u^ ڹi_%4;~}J_>[NEj Rk!uG"he%͹XUJJ1A[^K=95<#!xFJn=`<*&KX/%c)%{u D%(cmkorM4,Y\*2,VV2xBsརZ6)xs.DLsVA4c`75Kϲϣ9]b@\f q͐漫^AY2\lڭ ٗnQ 7lyi8WmQOԷf (KҔk?o< ۞"#&oU/b(EhxjV EqJP\z)2Rr)7%vg&#NW-nyh(1繁XjY󼆪HƳ1 >:&(2&ܾXycq*>}MQ~]B OѭV"[@*9-g̶:إys>>3݊*v{c<{W4QIL>ƹ@rL`EDq)MxE1.r=sQwCZō77pKHff3J&oC UWr*pPrdK"UI7ҚsHUc#DѰuBsg٬,֫gJNQxAL81o8_jZ##1ma㛞YSf|A4Tv*Uv? DZPR5כo'n2[tunV6Xou)"":[ݘ~R}YQEP`[0Aqn%KϕbesDO%NZWwdSWc2"hL ?"0~>y c >$e!M 05O8acb/X6! Tf|I\s2ކgWN{ ^~ 9NFMX@ɚ{i+mb1_`)dQSNg2Mw xӛ8|BG=oxͦSUw&饕Y_+BҰt d2)[ 0.#^ap]ͱ/e)rme\G8 *5lc<}` 1vaHIB >vwd(ڳGOv{|4_2%USyf.j>ZҰ0mۯQ|Bvm뭊|%ů* ik]赯:g%OgLTALjAPaO8f A1옏;\uC>֪i3@պEzlܐ:_oܭ]XH ;ek5m p]U'c+q`} 7Ɛzl@-f6;-,WwQU7w4mn~ÓUp^;əTn >t EV9&)Q!6hkAw[(o(W{=Eh]a(Ӏ8Vmz+lodl ,DPk@rԢz*W#c{ +Ƌ> ,כ"hYT)F!\@{`b ՅРԳ"XÈ Yӧŵr#R̙֘sL1g  }\E/)^N3Ev"Տ6v8:( Mm@=nL_/%jo5w2>}sech>F3<*vs8xX(B>& dbIMpUAaJ<{ dNF bgYc"1TDy灭C>*,-x0 d=p;%#9z}sY}!m`q1}0/R A@@>EyAȚ7Q^uzZWyntI_D%Du|%2JfMr~}2$ݙ:X%GqIkJnu($nF1r:|=\dW\4:1Q 1,`Os-,wIe?uA3DQM2"_\Q~U|/m56xt:y9Q*~t% 5|,VjkN2e)ЀXQ%E밣* P/H]k3qTTTKA`^px͓ d+\sOp?c~ v3}j(V_7n`2k^(T?a?dQ[65 5rDMU5߯"9l"rj(>V*ȕ!x7.Ggۓҵ$/{$I ?< .ʆU9F1FNs dIFU @TU C*K-*-h~_1k@+zA ˳(^Ww~,#yC>M!&_TacB;ɗ`"gr.IVO3 "ULNČ%MDz ׼`<@~ϡd^࿉(JMDVISs |2hh1 ^91#Eh! 
LAǒ17IŽ[tsjib{gQ :-N\#E|3?DQt:40ㅇBUWs Te.D\wX3ό fKI sJ~f*2ʢ$knrs4vn * !ғZ) &hormJ9NyQW1['(Y:փPg f35(YCwijeh$ʾXj3EHmIMD֊(It&K`FDz)Aجq֌hj-/ ۟7F4ެĬ5 dCEORE7aˈa:{/%|+ dK0Vۉ`IԿ]'Y;k̸H@G~7_E y+w1tnP `CM+!R5EHmggK)j)d./zD;RDeL{Jѥl$ܜKu"zxE/)18RT>} ~gF>0}6 @좪OsVDE1ξU Z4`/aDGg#CR{|t8V7W4k?D=y<0=cje3HR[f>NTuJ!o#P``gE 9&dח%ˋt=qX4|B~Ek!6"O)DGlӰk(5wr&Py+ J:Fu||OMvCJ`75}I^Q`( Z4h1ǵk4"'fԳV2ZY70{z[22~~7`Q͚lJEZ 3EUmp]G4d 7 #&bg~bJ*8ཟnӧmfDĽ0CcE4#X7\c``O?}Y5ωԭs`w %dD@@^8X+Z7JX3#c<g ‘j%àjyf;"[1ʤ56RJMDIuHHhFJ8nj&w W55{hW6ύ" "*5,թƱR7c1# *(K2MEZg~ؿ{ɌMDݢl/~;W?}$vo k?T\{F!&2k׊4 ь- ҠZBӍùAxpW]u"j?;8JeGdq<1$YѽXd|HrVѩQp$:XcѸa߸ I M$8 J.Kk;f%I7ܰ lQ([`Rs> 1ْBڪ15ib75#FOo!$/Zhϴ̘%ڍs[w7=o"DyϜ  " N?vҬ]綩M `>LcN=I#EF s7'T+{8u\9'58W 11꣡.rX{Ж3(cn@v7Q`sW߷nz:f_$$հ/<㲉97O9=WP;|6x UZbru[.o4S[f40Cbe1={8pM,V9vU?{$U] s~vWunu <08lm=31b?< G̯/1|;1v5#c<0l^z j]{g;S7%a:{{̾I{I~ :]ě>4}ÀM8wy? :uO<1Q({91oC <-C3d1z^ȭ )³sv6{8dxߌw,..EX =.(>߁sM:oCpv ȩ H|8+DU <PsuR)>d㸀c6kn_W%Шh'G3m4b*/I-oYF?c>4U^_*GY/ ٗhε氀@pKĪZsAa0Lg#v[[éS#GOM5q]_OܚFcVXj#(Y\TR\uH 6Ee8Oޗ𖷴÷ ,-Dq8ʵ fAxC\4Ux?ZN95,յW) UBnF6^Z=7Qk=qkzuLab6浊Q'^D02Py+1xiR78BkRpV1&Jxai݂$\gA-֠GJ cX6byWSc*-ᾞ*+S6qns)t11UKcAom5cziqbAPb>U2QϪٽ)"jZ)k=qd) KO'P@ 0\ {.ːJ&95l]ªH?a(R!0kpct;7U↸ߓAM.8͋ IDATasui<- ZzYDb)SkNUoNT 3iRA౷"(MA7l3 ⱳNM;ͼ^ vnOO1xhs+Mob5"A Ɵ/㘞_ō~eeǹ99]w Ni? Adj 8H#Fq5%Uj>ܫ_}++?^B:~~w?WOr.;P ޺{?ԇb*,RM-;rV~W6!8O𚷼653{E\HQ(QG@geP>?f'r^`t**UGq/9MWxMR x޳DO9RlzHV%YX}iögr"$ G5 ǩ;SeS.BsLgmp̪̬p$;RJ9(m)M(}6K62;7`SМMۦ dWiE-,h(fzRky:yȫj րI*9c)-_۫&K@\6(K2MK4&*6;𰯵LI8狮}*r\~^/<<|O?: `M*0_߄'jm'=S_o{nz=~'=9c"xT@@Jњd=27ߜp$}T$DM|ʳu=/p,WUDQ &Pg x#6oBch M|K3zt L~XJnqS2U, 86a9sA/Bc \G lrּU Bq+:FAUp^\ĈG3fʵ %Jyv!P5IY9,BK[q'K%&"t*l^v&>']ɳE'4~/;;{$ " %% %8akyD5wui<t{;:JjFx@`'h NT5e%hKqpR9IfՐJaFLZGdk<9]8eZ05/2Y&y3<3k]:zغ1˸h2u̜N P>;,qfHrSSi7?ugXyUE$cgn|/޵LJNk=N_JIB>s%}+p`rNGzq@9mf,ؤp9VF{.ApvUAnî]w[W_@ž?Fp:ULIԧhx5.>Io Jo4YF4*b#INpۤ,â/w;P#y-ruK,<gpO:L (q%hӬd?똝| 66Gt?x h6c|Wxp@ؒf4HJgL5,e׵J7puX[[< YiLA|'y'Lf;O-4j94vdL_K6]`&1 fy/͙lzҵŊMTzj;g$锓PSI宊WeMu.m 1"x:sF5(U `JWӦ*`7rtdR EOdM2I*;u44c{T!7ZTl*PJJlW`lNPǻ@*uY'wa9h3 -o/);GF{~ɆRr˾'v/]uCu-k ;-͋f 䥜*X D86\e-MlÀVT/wEfi /q@eupc[VJ%e]!fEFbfxkx+w |%lg*;eaEj+%x+3B\G <MCXް9j~ ߾ oxÍcWښPBRE**Y]y^PоYW0@5'P]178 ԳBdP*sZ*M ϛnڸAR׋P{ğd̬y\'27,xIlAn֠q$;P 0r*Ze]Y ǤJHF3@菖#M{:]6U s=ҕ6cG>15Qkj׸ɳi ns(5NFBU# oUrf<[} `^ȋYxc%QJ<ի08 k*A\kwX&pΒT oza('a(Eޟ?:E<ގɚKh $E9m*΀idr3LMPHN^eB9h F2$ 8{.OԤedqg  N.~;;1WW6 ;m."% Kh&4c3׆¾2!)s)VN~gP5{U<Fb$7$ڽ^;8HB>5$ 4(v{|O0tD솜烜cm}\Yi[]c|%x ߽km>\sDAѣ/#4AOPqxTA1LE>dV[.: *\e'@e ty~ڬ9=V k|}?Isob`n*f`뜃"] \x﮸u7'(ʙdGEJN\ԏyUsyܫAdJT5M#]s@5m_…y|SUʯR<7s ^ڷſrz,籽(:BWա+K `S&5"S񚁯 O57Q^ŝg%/+0g`u㩲9<@斝-<Uflzo4}mcvޫ5ÍOq},W8ye^sөZ[Tvp=b"9`lMU)[΅nA:Uԥb̠'BkLb׈ag3Vgx} F@;jQUXe5ᒁjC;Mus^ h0uV3HϤ.Isٴlj60r\Gդ>U$ճ$I{瀱=l[A@W ёG)Q{sJ1+CrJ-M*ٖfo)#B 1#NYeVS(_B^EŜ g>Px^Ef`g-ti`g1>^wozEGtժhTy vK<"UQ.BOCM*`ly: O|Y?_ښ1uTlfq{/[z;7k86V N%^ VMPWfnx晣Snp`k qwL|D\V|6W#>ph׼FY *dRݛbfNE ֢@qj><{aAg(횶ߛ *Mug/u.F$@,}Om~3({)mPe0x =YaF.I]7LèB-Ϟ16KD꟏vN%xzpQc_3:M3àk5rI hJd0w s|οm[I,y1jO* - k'1$&H΋ĩO<(Ԡ^Fm{ɋ:a9)5fPbPY>^(I`?k i6Zv"x4nu%~=$ S9tO~CrGn7`Jp`W0`\“pT8Y,{'"Fz >'[OɺSMSQn[p 4 ǎuLC$+f(&tJq_/07W2ꗗI)2݋PXxʭ^$O#P7Bv^R/"T,3=O`vv[[yq|ӫ4|qg?ޖsYA^~{xo]wiWl@ȵa[ke0'GV1L\Npʜ>x5eRVwQs~\ %5u4)>;VW[wh6<@#@aOPt-xb2^=?Xg@/ďzTކz8TʼfZX*㕪$t%_hl>c'1V^lS]wSBS^=uǴi^A"׸¿ǝRJtdxY) [/ar2}WdF| I^%(Gy,+L[&@T\r*ԖN8;<*_r 5[Aߋ<,+lSң6mְՊ8{{2'=q׫mQWG=^y{4>uG^hd'[4*3?^.~|7MJ>0a:u jT0zOV|7c_yN*0LcS~jkײ1^mgm9 ݉yᤡ? "C{uQU .^Sp! M3i{+M{5 Eoy"|ogQO/ t ·L"P$E|րq166#G_NUZ7YJ.66/s4(Sxgh9wd"fO93?C.0o@.u(RY'3g>x[~=u)8ళ~ (u xeVͤZKޗqd |gxcv (EɌuZ\sg|PgKGg-1+-yhFܳeJ 5"J9v&"U<3w׉T|.ڄ7ɡ:EHњ*Rl uBT }gsCÒg%Y_R'$~< 釙׼{>u>uTixSB‡KƆ=/? 
[binary PNG image data omitted]
dipy-1.11.0/doc/_static/version_switcher.json000066400000000000000000000022211476546756600212510ustar00rootroot00000000000000
[
    {
        "name": "development",
        "version": "development",
        "url": "https://docs.dipy.org/dev/"
    },
    {
        "name": "1.11.0 (stable)",
        "version": "1.11.0",
        "url": "https://docs.dipy.org/1.11.0/",
        "preferred": true
    },
    {
        "name": "1.10.0",
        "version": "1.10.0",
        "url": "https://docs.dipy.org/1.10.0/"
    },
    {
        "name": "1.9.0",
        "version": "1.9.0",
        "url": "https://docs.dipy.org/1.9.0/"
    },
    {
        "name": "1.8.0",
        "version": "1.8.0",
        "url": "https://docs.dipy.org/1.8.0/"
    },
    {
        "name": "1.7.0",
        "version": "1.7.0",
        "url": "https://docs.dipy.org/1.7.0/"
    },
    {
        "name": "1.6.0",
        "version": "1.6.0",
        "url": "https://docs.dipy.org/1.6.0/"
    },
    {
        "name": "1.5.0",
        "version": "1.5.0",
        "url": "https://docs.dipy.org/1.5.0/"
    },
    {
        "name": "1.0.0",
        "version": "1.0.0",
        "url": "https://docs.dipy.org/1.0.0/"
    },
    {
        "name": "0.16.0",
        "version": "0.16.0",
        "url": "https://docs.dipy.org/0.16.0/"
    }
]
dipy-1.11.0/doc/_templates/000077500000000000000000000000001476546756600154735ustar00rootroot00000000000000dipy-1.11.0/doc/_templates/documentation.html000066400000000000000000000005601476546756600212330ustar00rootroot00000000000000
{# documentation.html #}
{% for topic in topics %}

    {% endfor %}
dipy-1.11.0/doc/_templates/layout.html000066400000000000000000000062041476546756600177000ustar00rootroot00000000000000
{% extends "!layout.html" %}

{% block rootrellink %}
  1. Home
  2. Overview
  3. Gallery
  4. Download
  5. Subscribe
  6. Developers
  7. Cite  
{% endblock %} {% block extrahead %} {% endblock %} {# This block gets put at the top of the sidebar #} {% block sidebarlogo %}

    Site Navigation

    NIPY Community

    {% endblock %}

{# I had to copy the whole search block just to change the rendered text, so
   it doesn't mention modules or classes #}
{%- block sidebarsearch %}
{%- if pagename != "search" %}
{%- endif %}

{# The sidebarsearch block is the last one available in the default sidebar()
   macro, so the only way to add something to the bottom of the sidebar is to
   put it here, at the end of the sidebarsearch block (before it closes). #}
{%- endblock %}
dipy-1.11.0/doc/api_changes.rst000066400000000000000000000441371476546756600163400ustar00rootroot00000000000000
============
API changes
============

Here we provide information about functions or classes that have been removed,
renamed or are deprecated (not recommended) during different release cycles.

DIPY 1.10.0 changes
-------------------

**General**

- PEP 3102 (Keyword-only arguments) has been implemented in the codebase. This
  means that all the functions/classes that have keyword-only arguments will
  raise a warning if the user tries to call them with positional arguments.
- Standardized the symbols for the spherical harmonic concepts of order and
  phase factor with ``l_value`` and ``m_value``, respectively. In case these
  are arrays, we use ``l_values`` and ``m_values`` to be more informative of
  the type of the variable. For a spherical harmonic 𝑌ℓ𝑚, ℓ is referred to as
  the order and 𝑚 as the phase factor. The parameter ``sh_order`` in the
  codebase became ``sh_order_max``.

**Align**

- The `alpha` parameter in the BundleWarp method has been updated to provide
  better results for the bundle warping. The default value of `alpha` has been
  changed from 0.3 to 0.5.

**IO**

- The ``dipy.io.peaks.save_peaks`` and ``dipy.io.peaks.load_peaks`` functions
  have been deprecated. Please use the ``dipy.io.peaks.save_pam`` and
  ``dipy.io.peaks.load_pam`` functions instead.

**Reconstruction**

- The default `cvxpy` solver has been changed from `ECOS` to `CLARABEL`.
  Starting in CVXPY 1.6.0, ECOS will no longer be installed by default with
  CVXPY. Since we do not want to add an explicit dependency on ECOS, we
  switched to the new default solver, Clarabel.

**Workflows**

- The `vol_idx` parameter datatype from ``dipy_median_otsu`` has been changed
  from `variable int` to `str`. This change allows users to provide a range of
  values for the `vol_idx` parameter, e.g. `--vol_idx 0,1,2` or
  `--vol_idx 4,5,12-20,22`.
- The `odf_to_sh_order` parameter has been removed from multiple workflows.
  The parameter was not being used and was causing confusion with the
  `sh_order_max` parameter.

**NN**

- A new backend has been added: PyTorch. This backend is becoming the default
  backend for the `NN` module. The TensorFlow backend is still available but
  deprecated.

DIPY 1.9.0 changes
------------------

**General**

- The module ``dipy.boots.resampling`` has moved to ``dipy.stats.resampling``.
- The package ``dipy.boots`` has been removed.
- FURY minimum version is 0.10.0.
- Multiple deprecated parameters have been removed from the codebase.

**IO**

- The ``dipy.io.bvectxt`` module has been removed.

DIPY 1.8.0 changes
------------------

**Gradients**

- Change in ``dipy.core.gradients``: the function ``reorient_bvecs`` now
  requires the affine to have a shape of (4, 4, n) or (3, 3, n).

**Direction**

- Change in ``dipy.direction.bootstrap_direction_getter``.

  - The parent class was changed from ``PmfGenDirectionGetter`` to
    ``DirectionGetter``. The ``BootPmfGen`` functions were merged into
    ``BootDirectionGetter``.
  - The class constructor parameter `pmfgen` was removed.
  - Parameters `data`, `model` and `sh_order=0` were added.
  - The class method `BootDirectionGetter.from_data()` was not changed.

- Change in ``dipy.direction.pmf``. The class ``BootPmfGen`` was removed; its
  functions were merged into ``BootDirectionGetter``.
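For illustration, a minimal sketch of the construction path that remains after
this change (``data`` is assumed to be a 4D diffusion array and ``csd_model``
an already-created reconstruction model; the keyword values are illustrative)::

    from dipy.data import default_sphere
    from dipy.direction import BootDirectionGetter

    # Build the direction getter directly from the data and the model,
    # instead of going through the removed ``BootPmfGen``.
    boot_dg = BootDirectionGetter.from_data(data, csd_model, max_angle=30.0,
                                            sphere=default_sphere)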
DIPY 1.7.0 changes
------------------

**Denoising**

- Change in ``dipy.denoise.localpca``: the function ``genpca`` can use fewer
  images than patch voxels.
- Change in ``dipy.denoise.pca_noise_estimate``: the function
  ``pca_noise_estimate`` has a new argument ``images_as_samples``.

DIPY 1.6.0 changes
------------------

DIPY 1.5.0 changes
------------------

**General**

- FURY minimum version is 0.8.0.
- Distutils has been dropped.
- The ``dipy.io.bvectxt`` module is deprecated and will be removed.

**Denoising**

- The default option in the command line for Patch2Self changed from 'ridge'
  to 'ols'.

**Tracking**

- Change in ``dipy.tracking.pmf``:

  - The parent class ``PmfGen`` has a new mandatory parameter ``sphere``. The
    sphere vertices correspond to the spherical distribution of the pmf
    values.
  - The parent class ``PmfGen`` has a new function
    ``get_pmf_value(point, xyz)`` which returns the pmf value at location
    ``point`` and orientation ``xyz``.

**Segment**

- The deprecated ``from dipy.segment.metric import ResampleFeature`` was
  removed and replaced by
  ``from dipy.segment.featurespeed import ResampleFeature``.

DIPY 1.4.1 changes
------------------

**General**

- The name of the argument for the number of cores/threads has been
  standardized to:

  - ``num_threads`` for OpenMP parallelization.
  - ``num_processes`` for parallelization using the multiprocessing package.

- Change in the parallelization logic when using OpenMP:

  - If ``num_threads = None`` the value of the ``OMP_NUM_THREADS`` environment
    variable is used. If it is not set then all available threads are used.
  - If ``num_threads > 0`` that number is used as the number of threads.
  - If ``num_threads < 0`` the maximum between ``1`` and
    ``num_cpu_cores - |num_threads + 1|`` is selected. If ``-1`` then all
    available threads are used.
  - If ``num_threads = 0`` an error is raised.

- Change in the parallelization logic when using the multiprocessing package:

  - The same as with OpenMP, with the difference that ``num_processes = None``
    uses all cores directly.

**Tracking**

- Change in DirectionGetters:

  - The deprecated
    ``dipy.direction.closest_peak_direction_getter.BaseDirectionGetter`` was
    removed and replaced by
    ``dipy.direction.closest_peak_direction_getter.BasePmfDirectionGetter``.
  - The deprecated ``dipy.reconst.EuDXDirectionGetter`` was removed and
    replaced by ``dipy.reconst.eudx_direction_getter.EuDXDirectionGetter``.

DIPY 1.4.0 changes
------------------

- Migration from Travis to Azure.

DIPY 1.3.0 changes
------------------

- New dependency added: tqdm.

**Registration**

- The argument `interp` of the method
  `dipy.align.imaffine.AffineMap.transform` has been renamed `interpolation`.
- The argument `interp` of the method
  `dipy.align.imaffine.AffineMap.transform_inverse` has been renamed
  `interpolation`.

**Segmentation**

- The tissue segmentation method ``dipy.segment.TissueClassifierHMRF`` now
  checks the tolerance-based stopping criterion at every iteration (previously
  it was only checked every 10th iteration). This may result in earlier
  termination of iterations than with previous releases.
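As an illustration of the `interp` -> `interpolation` rename described above
(a minimal, self-contained sketch; the identity transform and the shapes are
arbitrary)::

    import numpy as np

    from dipy.align.imaffine import AffineMap

    affine_map = AffineMap(np.eye(4),
                           domain_grid_shape=(10, 10, 10),
                           codomain_grid_shape=(10, 10, 10))
    moving = np.random.rand(10, 10, 10)

    # Before DIPY 1.3.0: affine_map.transform(moving, interp="linear")
    warped = affine_map.transform(moving, interpolation="linear")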
DIPY 1.2.0 changes
------------------

**Reconstruction**

The ``dipy.reconst.csdeconv.auto_response`` has been renamed
``dipy.reconst.csdeconv.auto_response_ssst``.

The ``dipy.reconst.csdeconv.response_from_mask`` has been renamed
``dipy.reconst.csdeconv.response_from_mask_ssst``.

The ``dipy.sims.voxel.multi_shell_fiber_response`` has been moved to
``dipy.reconst.mcsd.multi_shell_fiber_response``.

**Segmentation**

In prior releases, for users with SciPy < 1.5, a memory overlap bug occurred
in ``multi_median``, causing an overly smooth output. This has now been fixed,
regardless of the user's installed SciPy version. Users of this function via
``median_otsu`` thresholding should check the output of their image processing
pipelines after the 1.2.0 release to make sure thresholding is still operating
as expected (if not, try readjusting the ``median_radius`` parameter).

**Tracking**

The ``dipy.reconst.peak_direction_getter.EuDXDirectionGetter`` has been
renamed ``dipy.reconst.eudx_direction_getter.EuDXDirectionGetter``.

The command line ``dipy_track_local`` has been renamed ``dipy_track``.

**Others**

The ``dipy.core.gradients.unique_bvals`` has been renamed
``dipy.core.gradients.unique_bvals_magnitude``.

**Visualization**

- Use ``window.Scene()`` instead of ``window.Renderer()``.
- Use ``scene.clear()`` instead of ``window.rm_all(scene)``.
- Use ``scene.clear()`` instead of ``window.clear(scene)``.

DIPY 1.1.1 changes
------------------

**IO**

``img.get_data()`` is deprecated since Nibabel 3.0.0. Use
``np.asanyarray(img.dataobj)`` instead of ``img.get_data()``.

**Tractogram**

A ``dipy.io.streamlines.StatefulTractogram`` can now be created from another
one.

**Workflows**

The ``dipy_nlmeans`` command line has been renamed ``dipy_denoise_nlmeans``.

**Others**

``get_data`` has been deprecated by Nibabel and replaced by ``get_fdata``.
This modification has been applied to the whole codebase. The default datatype
is now float64.

DIPY 1.0.0 changes
------------------

Some of the changes introduced in the 1.0 release will break backward
compatibility with previous versions. This release is compatible with
Python 3.5+.

**Reconstruction**

The spherical harmonics bases ``mrtrix`` and ``fibernav`` have been renamed to
``tournier07`` and ``descoteaux07`` after the deprecation cycle started in the
0.15 release.

We changed ``dipy.data.default_sphere`` from symmetric724 to repulsion724,
which is more evenly distributed.

**Segmentation**

The API of ``dipy.segment.mask.median_otsu`` has changed in the following
ways: if you are providing a 4D volume, `vol_idx` is now a required argument.
The order of parameters has also changed.

**Tractogram loading and saving**

The API of ``dipy.io.streamlines.load_tractogram`` and
``dipy.io.streamlines.save_tractogram`` has changed in the following ways:
when loading a tractogram (trk, tck, vtk, fib, or dpy) a reference nifti file
is needed to guarantee proper spatial transformation handling.

**Spatial transformation handling**

Functions from ``dipy.tracking.streamlines`` were modified to enforce the
affine parameter and uniform docstrings: ``deform_streamlines``,
``select_by_rois``, ``orient_by_rois``, ``_extract_vals`` and
``values_from_volume``.

Functions from ``dipy.tracking.utils`` were modified to enforce the affine
parameter and uniform docstrings: ``density_map``, ``connectivity_matrix``,
``seeds_from_mask``, ``random_seeds_from_mask``, ``target``,
``target_line_based``, ``near_roi``, ``length`` and ``path_length`` were all
modified. The functions ``affine_for_trackvis``, ``move_streamlines``,
``flexi_tvis_affine`` and ``get_flexi_tvis_affine`` were deleted.
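For instance, under the enforced-affine convention, a density map is computed
as follows (a minimal sketch with a toy streamline; the voxel-to-world affine
is now always passed explicitly)::

    import numpy as np

    from dipy.tracking.utils import density_map

    streamlines = [np.array([[0.5, 0.5, 0.5],
                             [1.5, 1.5, 1.5],
                             [2.5, 2.5, 2.5]])]
    dm = density_map(streamlines, affine=np.eye(4), vol_dims=(4, 4, 4))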
Functions from ``dipy.tracking.life`` were modified to enforce the affine
parameter and uniform docstrings: ``voxel2streamline``, and ``setup`` and
``fit`` from the class ``FiberModel`` were all modified. ``afq_profile`` from
``dipy.stats.analysis`` was modified similarly.

**Simulations**

- ``dipy.sims.voxel.SingleTensor`` has been replaced by
  ``dipy.sims.voxel.single_tensor``
- ``dipy.sims.voxel.MultiTensor`` has been replaced by
  ``dipy.sims.voxel.multi_tensor``
- ``dipy.sims.voxel.SticksAndBall`` has been replaced by
  ``dipy.sims.voxel.sticks_and_ball``

**Interpolation**

All interpolation functions have been moved to a new module named
`dipy.core.interpolation`.

**Tracking**

The `voxel_size` parameter has been removed from the following functions:

- ``dipy.tracking.utils.connectivity_matrix``
- ``dipy.tracking.utils.density_map``
- ``dipy.tracking.utils.streamline_mapping``
- ``dipy.tracking._util._mapping_to_voxel``

The ``dipy.reconst.peak_direction_getter.PeaksAndMetricsDirectionGetter`` has
been renamed ``dipy.reconst.peak_direction_getter.EuDXDirectionGetter``.

The `LocalTracking` and `ParticleFilteringTracking` functions were moved from
``dipy.tracking.local.localtracking`` to ``dipy.tracking.local_tracking``.
They now need to be imported from ``dipy.tracking.local_tracking``.

- The functions' argument `tissue_classifier` was renamed
  `stopping_criterion`.

The `TissueClassifier` classes were renamed `StoppingCriterion` and moved from
``dipy.tracking.local.tissue_classifier`` to
``dipy.tracking.stopping_criterion``. They now need to be imported from
``dipy.tracking.stopping_criterion``.

- `TissueClassifier` -> `StoppingCriterion`
- `BinaryTissueClassifier` -> `BinaryStoppingCriterion`
- `ThresholdTissueClassifier` -> `ThresholdStoppingCriterion`
- `ConstrainedTissueClassifier` -> `AnatomicalStoppingCriterion`
- `ActTissueClassifier` -> `ActStoppingCriterion`
- `CmcTissueClassifier` -> `CmcStoppingCriterion`

The ``dipy.tracking.local.tissue_classifier.TissueClass`` was renamed
``dipy.tracking.stopping_criterion.StreamlineStatus``.

The `EuDX` tracking function has been removed. EuDX tractography can be
performed using ``dipy.tracking.local_tracking`` together with
``dipy.reconst.peak_direction_getter.EuDXDirectionGetter``.

**Streamlines**

``dipy.io.trackvis`` has been removed. Use ``dipy.io.streamline`` instead.

**Other**

- The ``dipy.external`` package has been removed.
- The ``dipy.fixes`` package has been removed.
- The ``dipy.segment.quickbundles`` module has been removed.
- The ``dipy.reconst.peaks`` module has been removed.
- Compatibility with Python 2.7 has been removed.

DIPY 0.16 Changes
-----------------

**Stats**

Welcome to the new module ``dipy.viz.stats``. This module will be used to
integrate various analyses.

**Tracking**

- New option to adjust the number of threads for SLR in RecoBundles.
- The tracking algorithm excludes the stop point inside the mask during the
  tracking process.

**Notes**

- Replacement of Nose by Pytest.

DIPY 0.15 Changes
-----------------

**IO**

``load_tck`` and ``save_tck`` from ``dipy.io.streamline`` have been added.
They are highly recommended for managing streamlines.

**Gradient Table**

The default value of ``b0_threshold`` has been changed (from 0 to 50). This
change can impact your algorithm. If you want to assure that your code runs in
exactly the same manner as before, please initialize your gradient table with
the keyword argument ``b0_threshold`` set to 0.

**Visualization**

The ``dipy.viz.fvtk`` module has been removed. Use ``dipy.viz.*`` instead.
This implies the following important changes:

- Use ``from dipy.viz import window, actor`` instead of
  ``from dipy.viz import fvtk``.
- Use ``window.Renderer()`` instead of ``fvtk.ren()``.
- All available actors are in ``dipy.viz.actor`` instead of
  ``dipy.fvtk.actor``.
- UI elements are available in ``dipy.viz.ui``.

``dipy.viz`` depends on the FURY package. To learn more about FURY, go to
https://fury.gl
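A minimal sketch of the replacement pattern (shown with the ``window.Scene``
name that later superseded ``window.Renderer``, see the DIPY 1.2.0 notes
above; requires the FURY package)::

    import numpy as np

    from dipy.viz import actor, window

    scene = window.Scene()  # was: fvtk.ren()
    lines = [np.random.rand(10, 3) * 10 for _ in range(5)]
    scene.add(actor.line(lines))  # was: fvtk.add(ren, fvtk.line(lines))
    window.record(scene=scene, out_path="lines.png", size=(600, 600))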
DIPY 0.14 Changes
-----------------

**Streamlines**

The ``dipy.io.trackvis`` module is deprecated. Use ``dipy.io.streamline``
instead. Furthermore, ``load_trk`` and ``save_trk`` from
``dipy.io.streamline`` are highly recommended for managing streamlines. When
you create streamlines, you should use
``from dipy.tracking.streamlines import Streamlines``. This new object uses
much less memory and is easier to process.

**Visualization**

The ``dipy.viz.fvtk`` module is deprecated. Use ``dipy.viz.*`` instead. This
implies the following important changes:

- Use ``from dipy.viz import window, actor`` instead of
  ``from dipy.viz import fvtk``.
- Use ``window.Renderer()`` instead of ``fvtk.ren()``.
- All available actors are in ``dipy.viz.actor`` instead of
  ``dipy.fvtk.actor``.
- UI elements are available in ``dipy.viz.ui``.

DIPY 0.13 Changes
-----------------

No major API changes.

**Notes**

The ``dipy.io.trackvis`` module will be deprecated in release 0.14. Use
``dipy.io.streamline`` instead.

The ``dipy.viz.fvtk`` module will be deprecated in release 0.14. Use
``dipy.viz.ui`` instead.

DIPY 0.12 Changes
-----------------

**Dropped support for Python 2.6**

It has been 6 years since the release of Python 2.7, and multiple other
versions have been released since. As far as we know, DIPY still works well on
Python 2.6, but we no longer test on this version, and we recommend that users
upgrade to Python 2.7 or newer to use DIPY.

**Tracking**

The ``probabilistic_direction_getter.ProbabilisticDirectionGetter`` input
parameters have changed. Now the optional parameter ``pmf_threshold=0.1``
(previously fixed to 0.0) removes directions with probability lower than
``pmf_threshold`` from the probability mass function (pmf) when selecting the
tracking direction.

**DKI**

The default of DKI model fitting was changed from "OLS" to "WLS".

The default max_kurtosis of the functions axial_kurtosis, mean_kurtosis and
radial_kurtosis was changed from 3 to 10.

**Visualization**

Prefer using the UI elements in ``dipy.viz.ui`` rather than
``dipy.viz.widgets``.

**IO**

Use the module ``nibabel.streamlines`` for saving trk files and not
``nibabel.trackvis``. Requires upgrading to nibabel 2+.

DIPY 0.10 Changes
-----------------

**New visualization module**

``fvtk.slicer`` input parameters have changed. Now the slicer function is more
powerful and supports RGB images too. See the tutorial ``viz_slice.py`` for
more information.

**Interpolation**

The default behavior of the function `core.sphere.interp_rbf` has changed. The
default smoothing parameter is now set to 0.1 (previously 0). In addition, the
default norm is now `angle` (was previously `euclidean_norm`). Note that the
use of `euclidean_norm` is discouraged, and this norm will be deprecated in
the 0.11 release cycle.
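A minimal sketch of the new defaults (assuming `interp_rbf` keeps the `smooth`
and `norm` keywords implied by this note; the keyword values shown are simply
the defaults described above)::

    import numpy as np

    from dipy.core.sphere import interp_rbf
    from dipy.data import get_sphere

    sphere_src = get_sphere(name="symmetric362")
    sphere_dst = get_sphere(name="symmetric642")
    vals = np.ones(sphere_src.vertices.shape[0])

    interp_vals = interp_rbf(vals, sphere_src, sphere_dst,
                             smooth=0.1, norm="angle")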
**Registration**

The following utility functions from the ``vector_fields`` module were
renamed:

- ``warp_2d_affine`` is now ``transform_2d_affine``
- ``warp_2d_affine_nn`` is now ``transform_2d_affine_nn``
- ``warp_3d_affine`` is now ``transform_3d_affine``
- ``warp_3d_affine_nn`` is now ``transform_3d_affine_nn``

DIPY 0.9 Changes
----------------

**GQI integration length**

The calculation of integration length in GQI2 now matches the calculation in
the 'standard' method. Using values of 1-1.3 for either is recommended (see
docs and references therein).

DIPY 0.8 Changes
----------------

**Peaks**

The module ``peaks`` is now available from ``dipy.direction`` and it can still
be accessed from ``dipy.reconst``, but it will be completely removed in
version 0.10.

**Resample**

The function ``resample`` from ``dipy.align.aniso2iso`` is deprecated. Please
use ``reslice`` from ``dipy.align.reslice`` instead. The module ``aniso2iso``
will be completely removed in version 0.10.

Changes between 0.7.1 and 0.6
------------------------------

**Peaks_from_model**

The function ``peaks_from_model`` is now available from
``dipy.reconst.peaks``. Please replace all imports like::

    from dipy.reconst.odf import peaks_from_model

with::

    from dipy.reconst.peaks import peaks_from_model

**Target**

The function ``target`` from ``dipy.tracking.utils`` now takes an affine
transform instead of a voxel sizes array. Please update all code using
``target`` in a way similar to this::

    img = nib.load(anat)
    voxel_dim = img.header['pixdim'][1:4]
    streamlines = utils.target(streamlines, img.get_data(), voxel_dim)

to something similar to::

    img = nib.load(anat)
    streamlines = utils.target(streamlines, img.get_data(), img.affine)
dipy-1.11.0/doc/cite.rst000066400000000000000000000024701476546756600150170ustar00rootroot00000000000000
Publications
============

- :cite:t:`Garyfallidis2014a`
- :cite:t:`Garyfallidis2010b`
- :cite:t:`Garyfallidis2010a`
- :cite:t:`Correia2011a`
- :cite:t:`Chamberlain2010`
- :cite:t:`Garyfallidis2011`
- :cite:t:`Yeh2010`
- :cite:t:`Garyfallidis2012a`
- :cite:t:`Garyfallidis2018`
- :cite:t:`Garyfallidis2015`
- :cite:t:`Rokem2015`
- :cite:t:`Ocegueda2016`
- :cite:t:`NetoHenriques2017`
- :cite:t:`NetoHenriques2021a`

A note on citing our work
-------------------------

* The main reference citation for DIPY is :cite:p:`Garyfallidis2014a`.
* If you are using the QuickBundles method, please also cite
  :cite:p:`Garyfallidis2012a`.
* If you are using track correspondence, also cite
  :cite:p:`Garyfallidis2010a`.
* If you are using Generalized Q-sampling, please also cite :cite:p:`Yeh2010`.
* If you are using Free Water DTI, please also cite
  :cite:p:`NetoHenriques2017`.
* If you are using Diffusional Kurtosis Imaging, please also cite
  :cite:t:`NetoHenriques2021a`.

Citing other algorithms
-----------------------

Depending on your research topic, it may also be appropriate to cite related
method papers, some of which are listed in the documentation strings of the
relevant functions or methods. All references cited in the DIPY codebase and
documentation are collected in the :ref:`general_bibliography`.

.. footbibliography::
dipy-1.11.0/doc/conf.py000066400000000000000000000461201476546756600146400ustar00rootroot00000000000000
# dipy documentation build configuration file, created by
# sphinx-quickstart on Thu Feb 4 15:23:20 2010.
#
# This file is execfile()d with the current directory set to its containing
# dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
# # All configuration values have a default; values that are commented out # serve to show the default. import datetime import os import re import sys import time # Doc generation depends on being able to import dipy try: import dipy except ImportError as e: raise RuntimeError("Cannot import dipy, please investigate") from e # If extensions (or modules to document with autodoc) are in another directory, # add these directories to sys.path here. If the directory is relative to the # documentation root, use os.path.abspath to make it absolute, like shown here. sys.path.append(os.path.abspath("sphinxext")) # -- General configuration ----------------------------------------------------- # Add any Sphinx extension module names here, as strings. They can be extensions # coming with Sphinx (named 'sphinx.ext.*') or your custom ones. extensions = [ "sphinx.ext.autodoc", "sphinx.ext.autosummary", "sphinx.ext.coverage", "sphinx.ext.doctest", "sphinx.ext.intersphinx", "sphinx.ext.mathjax", "sphinx.ext.viewcode", "sphinx.ext.todo", "sphinx.ext.ifconfig", "math_dollar", # has to go before numpydoc "numpydoc", "prepare_gallery", "sphinx_gallery.gen_gallery", "sphinxcontrib.bibtex", "github", "sphinx_design", ] numpydoc_show_class_members = True numpydoc_class_members_toctree = False numpydoc_show_inherited_class_members = { "dipy.viz.horizon.visualizer.peak.PeakActor": False } autodoc_skip_members = [ "docstring_addendum", ] # Sphinx extension for BibTeX style citations. # https://github.com/mcmtroffaes/sphinxcontrib-bibtex bibtex_bibfiles = ["references.bib"] # bibtex_default_style = 'plain' # ghissue config github_project_url = "https://github.com/dipy/dipy" # Add any paths that contain templates here, relative to this directory. templates_path = ["_templates"] # The suffix of source filenames. source_suffix = ".rst" # The encoding of source files. # source_encoding = 'utf-8' # The master toctree document. master_doc = "index" build_date = datetime.datetime.fromtimestamp( int(os.environ.get("SOURCE_DATE_EPOCH", time.time())), tz=datetime.timezone.utc, ) # General information about the project. project = "dipy" copyright = ( f"Copyright 2008-{build_date.year},DIPY developers." f" Created using Grg Sphinx Theme and PyData Sphinx Theme." ) # The version info for the project you're documenting, acts as replacement for # |version| and |release|, also used in various other places throughout the # built documents. # # The short X.Y version. version = dipy.__version__ # The full version, including alpha/beta/rc tags. release = version # Include common links # We don't use this any more because it causes conflicts with the gitwash docs # rst_epilog = open('links_names.inc', 'rt').read() # The language for content autogenerated by Sphinx. Refer to documentation # for a list of supported languages. # language = None # There are two options for replacing |today|: either, you set today to some # non-false value, then it is used: # today = '' # Else, today_fmt is used as the format for a strftime call. # today_fmt = '%B %d, %Y' # List of documents that shouldn't be included in the build. # unused_docs = [] # List of directories, relative to source directory, that shouldn't be searched # for source files. exclude_patterns = [ "_build", "examples", "examples_revamped", "reconstruction_models_list.rst", "tractography_methods_list.rst", ] # The reST default role (used for this markup: `text`) to use for all documents. # default_role = None # If true, '()' will be appended to :func: etc. cross-reference text. 
# add_function_parentheses = True # If true, the current module name will be prepended to all description # unit titles (such as .. function::). # add_module_names = True # If true, sectionauthor and moduleauthor directives will be shown in the # output. They are ignored by default. # show_authors = False # The name of the Pygments (syntax highlighting) style to use. pygments_style = "sphinx" # A list of ignored prefixes for module index sorting. # modindex_common_prefix = [] # -- Options for HTML output --------------------------------------------------- # The theme to use for HTML and HTML Help pages. Major themes that come with # Sphinx are currently 'default' and 'sphinxdoc'. html_theme = "grg_sphinx_theme" # The style sheet to use for HTML and HTML Help pages. A file of that name # must exist either in Sphinx' static/ path, or in one of the custom paths # given in html_static_path. html_style = "css/dipy.css" # Theme options are theme-specific and customize the look and feel of a theme # further. For a list of options available for each theme, see the # documentation. doc_version = "development" if "dev" in version else version html_theme_options = { "switcher": { "json_url": "https://docs.dipy.org/dev/_static/version_switcher.json", "version_match": doc_version, }, "check_switcher": False, "show_version_warning_banner": True, "navbar_end": ["search-field.html", "version-switcher", "navbar-icon-links.html"], "secondary_sidebar_items": ["page-toc"], "show_toc_level": 1, "navbar_center": ["components/navbar-links.html"], "navbar_links": [ { "name": "Docs", "children": [ { "name": "Overview", "url": "index", }, { "name": "Tutorials", "url": "examples_built/index", }, { "name": "Recipes", "url": "recipes", }, { "name": "CLI / Workflows", "url": "interfaces/index", }, { "name": "API", "url": "reference/index", }, { "name": "CLI API", "url": "reference_cmd/index", }, ], }, { "name": "Workshops", "sections": [ { "name": "Latest", "children": [ { "name": "DIPY Workshop 2024", "url": "https://dipy.org/workshops/dipy-workshop-2024", "link_type": "external", } ], }, { "name": "Past", "children": [ { "name": "DIPY Workshop 2023", "url": "https://dipy.org/workshops/dipy-workshop-2023", "link_type": "external", }, { "name": "DIPY Workshop 2022", "url": "https://dipy.org/workshops/dipy-workshop-2022", "link_type": "external", }, { "name": "DIPY Workshop 2021", "url": "https://dipy.org/workshops/dipy-workshop-2021", "link_type": "external", }, { "name": "DIPY Workshop 2020", "url": "https://dipy.org/workshops/dipy-workshop-2020", "link_type": "external", }, { "name": "DIPY Workshop 2019", "url": "https://dipy.org/workshops/dipy-workshop-2019", "link_type": "external", }, ], }, ], }, { "name": "Community", "sections": [ { "name": "News", "children": [ { "name": "Calendar", "url": "https://dipy.org/calendar", "link_type": "inter", }, { "name": "Newsletters", "url": "https://mail.python.org/mailman3/lists/dipy.python.org/", "link_type": "external", }, { "name": "Blog", "url": "https://dipy.org/blog", "link_type": "inter", }, { "name": "Youtube", "url": "https://www.youtube.com/c/diffusionimaginginpython", "link_type": "external", }, ], }, { "name": "Help", "children": [ { "name": "Live Chat (Gitter)", "url": "https://app.gitter.im/#/room/%23dipy_dipy:gitter.im", "link_type": "external", }, { "name": "Github Discussions", "url": "https://github.com/dipy/dipy/discussions", "link_type": "external", }, ], }, ], }, { "name": "About", "children": [ {"name": "Team", "url": "https://dipy.org/team", "link_type": 
"inter"}, { "name": "FAQ", "url": "faq", }, { "name": "Mission Statement", "url": "user_guide/mission", }, { "name": "Releases", "url": "stateoftheart", }, { "name": "Cite", "url": "cite", }, { "name": "Glossary", "url": "glossary", }, ], }, ], # To remove search icon "navbar_persistent": "", "icon_links": [ { "name": "GitHub", "url": "https://github.com/dipy", "icon": "fa-brands fa-github", }, { "name": "Twitter/X", "url": "https://twitter.com/dipymri", "icon": "fa-brands fa-twitter", }, { "name": "YouTube", "url": "https://www.youtube.com/c/diffusionimaginginpython", "icon": "fa-brands fa-youtube", }, { "name": "LinkedIn", "url": "https://www.linkedin.com/company/dipy/", "icon": "fa-brands fa-linkedin", }, ], "logo": { "image_dark": "_static/images/logos/dipy-logo.png", "alt_text": "DIPY", "link": "https://dipy.org", }, "footer_start": ["components/footer-sign-up.html"], "footer_signup_data": { "heading": "Never miss an update from us!", "sub_heading": "Don't worry! we are not going to spam you.", }, "footer_end": ["components/footer-sections.html"], "footer_links": [ { "title": "About", "links": [ { "name": "Developers", "link": "https://dipy.org/team", "link_type": "inter", }, { "name": "Support", "link": "https://github.com/dipy/dipy/discussions", "link_type": "external", }, {"name": "Download", "link": "user_guide/installation"}, {"name": "Get Started", "link": "user_guide/getting_started"}, {"name": "Tutorials", "link": "examples_built/index"}, { "name": "Videos", "link": "https://www.youtube.com/c/diffusionimaginginpython", "link_type": "external", }, ], }, { "title": "Friends", "links": [ { "name": "Nipy Projects", "link": "http://nipy.org/", "link_type": "external", }, {"name": "FURY", "link": "http://fury.gl/", "link_type": "external"}, { "name": "Nibabel", "link": "http://nipy.org/nibabel", "link_type": "external", }, { "name": "Tortoise", "link": "https://tortoise.nibib.nih.gov/", "link_type": "external", }, ], }, { "title": "Support", "links": [ { "name": "The department of Intelligent Systems Engineering of Indiana University", # noqa: E501 "link": "https://engineering.indiana.edu/", "link_type": "external", }, { "name": "The National Institute of Biomedical Imaging and Bioengineering, NIH", # noqa: E501 "link": "https://www.nibib.nih.gov/", "link_type": "external", }, { "name": "The Gordon and Betty Moore Foundation and the Alfred P. Sloan Foundation, through the University of Washington eScience Institute Data Science Environment", # noqa: E501 "link": "https://escience.washington.edu", "link_type": "external", }, { "name": "Google supported DIPY through the Google Summer of Code Program (2015-2024)", # noqa: E501 "link": "https://summerofcode.withgoogle.com/", "link_type": "external", }, ], }, ], "footer_copyright": copyright, } html_theme_options["analytics"] = { "google_analytics_id": "G-D610GKJZRC", } # Add any paths that contain custom themes here, relative to this directory. # html_theme_path = [] # The name for this set of Sphinx documents. If None, it defaults to # " v documentation". # html_title = None # A shorter title for the navigation bar. Default is the same as html_title. # html_short_title = None # The name of an image file (relative to this directory) to place at the top # of the sidebar. html_logo = "_static/images/logos/dipy-logo.png" # The name of an image file (within the static path) to use as favicon of the # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 # pixels large. 
html_favicon = "_static/images/logos/dipy-favicon.png" # Add any paths that contain custom static files (such as style sheets) here, # relative to this directory. They are copied after the builtin static files, # so a file named "default.css" will overwrite the builtin "default.css". html_static_path = ["_static"] # If not '', a 'Last updated on:' timestamp is inserted at every page bottom, # using the given strftime format. # html_last_updated_fmt = '%b %d, %Y' # If true, SmartyPants will be used to convert quotes and dashes to # typographically correct entities. # html_use_smartypants = True # Custom sidebar templates, maps document names to template names. html_sidebars = {"index": []} # Additional templates that should be rendered to pages, maps page names to # template names. # html_additional_pages = {} # If false, no module index is generated. # Setting to false fixes double module listing under header html_use_modindex = False # If false, no index is generated. # html_use_index = True # If true, the index is split into individual pages for each letter. # html_split_index = False # If true, links to the reST sources are added to the pages. # html_show_sourcelink = True # If true, an OpenSearch description file will be output, and all pages will # contain a tag referring to it. The value of this option must be the # base URL from which the finished HTML is served. # html_use_opensearch = '' # If nonempty, this is the file name suffix for HTML files (e.g. ".xhtml"). # html_file_suffix = '' # Output file base name for HTML help builder. htmlhelp_basename = "dipydoc" # -- Options for LaTeX output -------------------------------------------------- # The paper size ('letter' or 'a4'). # latex_paper_size = 'letter' # The font size ('10pt', '11pt' or '12pt'). # latex_font_size = '10pt' # Grouping the document tree into LaTeX files. List of tuples # (source start file, target name, title, author, documentclass [howto/manual]). latex_documents = [ ( "index", "dipy.tex", "dipy Documentation", "Eleftherios Garyfallidis, Ian Nimmo-Smith, Matthew Brett", "manual", ), ] # The name of an image file (relative to this directory) to place at the top of # the title page. # latex_logo = None # For "manual" documents, if this is true, then toplevel headings are parts, # not chapters. # latex_use_parts = False # Additional stuff for the LaTeX preamble. latex_preamble = r""" \usepackage{amsfonts} """ # Documents to append as an appendix to all manuals. # latex_appendices = [] # If false, no module index is generated. # latex_use_modindex = True # -- Options for sphinx gallery ------------------------------------------- from docimage_scrap import ImageFileScraper from prepare_gallery import folder_explicit_order from sphinx_gallery.sorting import ExplicitOrder sc = ImageFileScraper() ordered_folders = [f"examples_revamped/{f}" for f in folder_explicit_order()] sphinx_gallery_conf = { "doc_module": ("dipy",), # path to your examples scripts "examples_dirs": [ "examples_revamped", ], # path where to save gallery generated examples "gallery_dirs": [ "examples_built", ], "subsection_order": ExplicitOrder(ordered_folders), "image_scrapers": (sc), "backreferences_dir": "examples_built", "reference_url": { "dipy": None, }, "abort_on_example_error": False, "filename_pattern": re.escape(os.sep), "default_thumb_file": "_static/images/logos/dipy_full_logo.png", # 'pypandoc': {'extra_args': ['--mathjax',]}, } # Example configuration for intersphinx: refer to the Python standard library. 
intersphinx_mapping = {"python": ("https://docs.python.org/3/", None)}


def setup(app):
    def skip_doc_element(app, what, name, obj, would_skip, options):
        if name in autodoc_skip_members:
            return True
        return would_skip

    app.connect("autodoc-skip-member", skip_doc_element)
dipy-1.11.0/doc/devel/000077500000000000000000000000001476546756600144355ustar00rootroot00000000000000dipy-1.11.0/doc/devel/benchmarking.rst000066400000000000000000000000731476546756600176170ustar00rootroot00000000000000
.. _benchmarking:

.. include:: ../../benchmarks/README.rst
dipy-1.11.0/doc/devel/bibliography.rst000066400000000000000000000037021476546756600176440ustar00rootroot00000000000000
===============
📖 Bibliography
===============

DIPY uses the sphinxcontrib-bibtex_ Sphinx extension to manage the references
across the code base. This bibliography uses the bibtex_ format and is
gathered in a file named ``references.bib``. The following conventions are
used in that file.

- Entries are grouped according to the nature of the work: ``article`` types
  appear first, then e.g. ``dataset`` types, then ``electronic``, etc.
- Entries are sorted alphabetically within each group following the key of the
  entries.
- Keys are built using the last name of the first author and the year when the
  work was published, e.g. ``Garyfallidis2012``. If there are multiple works
  by the same author on the same year, a letter (e.g. ``a``, ``b``, etc.) is
  added after the year to distinguish them, e.g. ``Garyfallidis2012a``,
  ``Garyfallidis2012b``, starting with the least recent work (including works
  across groups). Within the same group, the least recent work is put closest
  to the end of the file. Similarly, if multiple works of the same author
  appeared in the same month, the name of the venue (i.e. its first letter) is
  used to distinguish works, following the rules described for the year.
- Strictly unnecessary (e.g. ``abstract``) or non-standard fields (``issn``)
  should be avoided.
- Author full names (i.e. including their first names) are used, and names use
  ``first name last name`` format.
- Authors are linked using the ``and`` (lowercase) conjunction.
- A blank line is added between two consecutive entries.
- Braces are used to hold the values of the keys.
- A whitespace line is added around the equal ``=`` operator in each field.
- The fields are aligned in a particular way for readability: the values of
  the fields are aligned.
- Entries should only contain ASCII characters.

Whenever this file needs to be edited (e.g. changed or a new entry added), the
above conventions must be followed. An illustrative entry is sketched below.
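For illustration, a hypothetical entry following these conventions would look
like::

    @article{Doe2020,
      author  = {Jane Doe and John Smith},
      title   = {An illustrative entry for the DIPY bibliography},
      journal = {Example Journal of Neuroimaging},
      year    = {2020},
      volume  = {1},
      pages   = {1--10}
    }

Note the ``LastnameYear`` key, the lowercase ``and`` conjunction between full
author names, the braces holding the field values, and the aligned ``=``
operators surrounded by whitespace.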
.. include:: ../links_names.inc
dipy-1.11.0/doc/devel/coding_style_guideline.rst000066400000000000000000000245371476546756600217060ustar00rootroot00000000000000
.. _coding_style_guideline:

==============================
💻 DIPY Coding Style Guideline
==============================

The main principles behind DIPY_ development are:

* **Robustness**: the results of a piece of code must be verified
  systematically, and hence stability and robustness of the code must be
  ensured, reducing code redundancies.
* **Readability**: the code is written and read by humans, and it is read much
  more frequently than it is written.
* **Consistency**: following these guidelines will ease reading the code, and
  will make it less error-prone.
* **Documentation**: document the code. Documentation is essential as it is
  one of the key points for the adoption of DIPY as the toolkit of choice in
  diffusion by the scientific community. Documenting helps to clarify certain
  choices, helps to avoid obscure places, and is a way to allow other members
  to *decode* it with less effort.
* **Language**: the code must be written in English. Norms and spelling should
  be abided by.

------------
Coding style
------------

DIPY uses the standard Python PEP8_ style to ensure readability and
consistency across the toolkit. DIPY has adopted an automated style checking
framework that uses `pre-commit <https://pre-commit.com/>`_ and
`ruff <https://github.com/astral-sh/ruff>`_. Style conformance rules are
specified in the ``ruff.toml`` configuration file.

When requesting to push to DIPY, compliance with the set of rules is
automatically checked using a GitHub Actions workflow.

In order to check the compliance locally, developers should install the
``[style]`` dependencies::

    pip install -e .[style]

The git hook scripts then need to be installed by running::

    pre-commit install

``pre-commit`` will then run automatically on ``git commit``. Most text
editors can be configured to check the compliance of your code with the set of
rules specified in the ``ruff.toml`` file.

Beyond the aspects checked, as a contributor to DIPY, you should try to ensure
that your code, including comments, conforms to the above principles.

Imports
-------

DIPY recommends using the following package shorthands to increase consistency
and readability across the library::

    import nibabel as nib
    import numpy as np
    import numpy.testing as npt
    import scipy as sp

No alias should be used for ``h5py``::

    import h5py

-------------------
Cython coding style
-------------------

DIPY recommends the use of the standard Python PEP8_ style when writing
cython_ code. Cython-specific syntax should follow these additional rules:

Imports
-------

The ``cimport``'s should add the ``c`` prefix to the usual Python import
package shorthand, e.g.::

    cimport numpy as cnp

Adding the ``c`` prefix to the import line makes it clear that the Cython/C
symbols are being referred to as compared to the Python symbols.

Variable declaration
--------------------

Separate ``cdef``, ``cpdef``, and ``ctypedef`` statements from the following
type by exactly one space. In turn, separate the type from the variable name
by exactly one space. Declare only one ``ctypedef`` variable per line. You may
``cdef`` or ``cpdef`` multiple variables per line as long as these are simple
declarations; note that multiple assignment, references, or pointers are not
allowed on the same line. Grouping ``cdef`` statements is allowed. For
example::

    # Good
    cdef int n
    cdef char * s
    cdef double Xf[3]
    cdef double d[3]
    cpdef int i, j, k
    cdef ClusterMapCentroid clusters = ClusterMapCentroid()

    cdef:
        double *ps = <double *> cnp.PyArray_DATA(seed)
        cnp.npy_intp *pstr = <cnp.npy_intp *> qa.strides
        cnp.npy_intp d, i, j, cnt
        double direction[3]
        double tmp, ftmp

    cdef int get_direction_c(self, double* point, double* direction):
        return 1

    # Bad
    cdef char *s
    cdef char * s, * t, * u, * v
    cdef double Xf[3], d[3]
    cdef double x=42, y=x+1, z=x*y
    cdef ClusterMapCentroid clusters = ClusterMapCentroid()

    cdef int get_direction_c(self, double* point, double* direction):
        return 0

Inside of a function, place all ``cdef`` statements at the top of the function
body::

    # Good
    cdef void estimate_kernel_size(self, verbose=True):
        cdef:
            double [:] x
            double [:] y

    # Bad
    cdef void estimate_kernel_size(self, verbose=True):
        cdef double [:] x
        self.kernelmax = self.k2(x, y, r, v)
        cdef double [:] y
        x = np.array([0., 0., 0.])

Using C libraries
-----------------

The ``cimport``'s should follow the same rules defined in PEP8 for ``import``
statements.
If a module is both *imported* and *cimported*, the ``cimport`` should come
before the ``import``.

An example of an imported C library::

    from libc.stdlib cimport calloc, realloc, free

Do not use ``include`` statements.

Error return values
-------------------

When declaring an error return value with the ``except`` keyword, use one
space on both sides of the ``except``. If in a function definition, there
should be no spaces between the error return value and the colon ``:``. Avoid
``except *`` unless it is needed for functions returning ``void``::

    # Good
    cdef void bar() except *
    cdef void c_extract(Feature self, Data2D datum, Data2D out) noexcept nogil:
    cdef int front(x) except +:
        ...

    # Bad
    cdef char * hat(x) except *:
    cdef int front(x) except + :
        ...

Pointers and references
-----------------------

Pointers and references may be either zero or one space away from the type
name. If followed by a variable name, they must be one space away from the
variable name. Do not put any spaces between the reference operator ``&`` and
the variable name::

    # Good
    cdef int& i
    cdef char * s
    i = &j

    # Bad
    cdef int &i
    cdef char *s
    i = & j

Casting
-------

When casting a variable there must be no whitespace between the opening ``<``
and the type. There must be one space between the closing ``>`` and the
variable::

    # Good
    <float> i
    <void*> s

    # Bad
    < float >i
    <void*>s

Loops
-----

Use Python loop syntax::

    for i in range(nrows):
        ...

Other ``for``-loop constructs are deprecated and must be avoided.

-------------
Documentation
-------------

DIPY uses `Sphinx <https://www.sphinx-doc.org/>`_ to generate documentation.
We welcome contributions of examples, and suggestions for changes in the
documentation, but please make sure that changes that are introduced render
properly into the HTML format that is used for the DIPY website.

DIPY follows the `numpy docstring standard
<https://numpydoc.readthedocs.io/en/latest/format.html>`_ for documenting
modules, classes, functions, and examples.

The documentation includes an extensive library of examples. These are Python
files that are stored in the ``doc/examples`` folder and contain code to
execute the example, interleaved with multi-line comments that contain
explanations of the blocks of code. Examples demonstrate how to perform
processing (segmentation, tracking, etc.) on diffusion files using the DIPY
classes. The code is intermixed with generous comments that describe the
former, and the rationale and aim of it. If you are contributing a new feature
to DIPY, please provide an extended example, with explanations of this
feature, and references to the relevant papers. If the feature that you are
working on integrates well into one of the existing examples, please edit the
``.py`` file of that example. Otherwise, create a new ``.py`` file in that
directory. Please also add the name of this file into the
``doc/examples/valid_examples.txt`` file (which controls the rendering of
these examples into the documentation).

Additionally, DIPY relies on a set of reStructuredText files (``.rst``)
located in the ``doc`` folder. They contain information about theoretical
backgrounds of DIPY, installation instructions, description of the
contribution process, etc. Again, both sets of files use the
`reStructuredText markup language <https://docutils.sourceforge.io/rst.html>`_
for comments. Sphinx parses the files to produce the contents that are later
rendered in the DIPY_ website.

The Python examples are compiled, output images produced and corresponding
``.rst`` files produced so that the comments can be appropriately displayed in
a web page enriched with images.
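As a minimal sketch (with a hypothetical file name
``doc/examples/my_example.py``), such an example interleaves
reStructuredText-formatted blocks with runnable code::

    """
    =============================
    A minimal example walkthrough
    =============================

    Narrative text written in reStructuredText goes in blocks like this one.
    """

    import numpy as np

    ###########################################################################
    # Further narrative between code cells also lives in comment blocks.

    data = np.zeros((10, 10, 10))
    print(data.shape)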
Particularly, in order to ease the contribution of examples and ``.rst``
files, and with the consistency criterion in mind, beyond the numpy docstring
standard aspects, contributors are encouraged to observe the following
guidelines:

* The acronym for the Diffusion Imaging in Python toolkit should be written as
  **DIPY**.
* The classes, objects, and any other construct referenced from the code
  should be written with inverted commas, such as in *In DIPY, we use an
  object called ``GradientTable`` which holds all the acquisition specific
  parameters, e.g. b-values, b-vectors, timings, and others.*
* Cite the relevant papers. Use the *[NameYear]* convention for
  cross-referencing them, such as in [Garyfallidis2014]_, and put them under
  the :ref:`references` section.
* Cross-reference related examples and files. Use the
  ``.. _specific_filename:`` convention to label a file at the top of it.
  Thus, other pages will be able to reference the file using the standard
  Sphinx syntax ``:ref:`specific_filename```.
* Use an all-caps scheme for acronyms, and capitalize the first letters of the
  long names, such as in *Constrained Spherical Deconvolution (CSD)*, except
  in those cases where the most common convention has been to use lowercase,
  such as in *superior longitudinal fasciculus (SLF)*.
* As customary in Python, use lowercase and separate words with underscores
  for filenames, labels for references, etc.
* When including figures, use the regular font for captions (i.e. do not use
  bold faces) unless otherwise required for a specific text part (e.g. a DIPY
  object, etc.).
* When referring to relative paths, use the backquote inline markup
  convention, such as in ``doc/devel``. Do not add the
  greater-than/less-than signs to enclose the path.

.. _references:

References
----------

.. [Garyfallidis2014] Garyfallidis E, Brett M, Amirbekian B, Rokem A,
   van der Walt S, Descoteaux M, Nimmo-Smith I and Dipy Contributors (2014).
   `Dipy, a library for the analysis of diffusion MRI data.
   <https://doi.org/10.3389/fninf.2014.00008>`_ Frontiers in Neuroinformatics,
   vol.8, no.8.

.. include:: ../links_names.inc
dipy-1.11.0/doc/devel/commit_codes.rst000066400000000000000000000021221476546756600176310ustar00rootroot00000000000000
.. _commit-codes:

:octicon:`git-commit;1em;sd-text-info` Commit message codes
-----------------------------------------------------------

Please prefix all commit summaries with one (or more) of the following labels.
This should help others to easily classify the commits into meaningful
categories:

* *BF* : bug fix
* *RF* : refactoring
* *NF* : new feature
* *BW* : addresses backward-compatibility
* *OPT* : optimization
* *BK* : breaks something and/or tests fail
* *PL* : making pylint happier
* *DOC* : for all kinds of documentation related commits
* *TEST* : for adding or changing tests
* *STYLE* : PEP8 conformance, whitespace changes etc. that do not affect
  function.

So your commit message might look something like this::

    TEST: Relax test threshold slightly

    Attempted fix for failure on windows test run when arrays are in fact
    very close (within 6 dp).

Keeping up a habit of doing this is useful because it makes it much easier to
see at a glance which changes are likely to be important when you are looking
for sources of bugs, fixes, large refactorings or new features.
dipy-1.11.0/doc/devel/gitwash/
dipy-1.11.0/doc/devel/gitwash/branch_dropdown.png
[binary PNG image data omitted]
dipy-1.11.0/doc/devel/gitwash/branch_list.png
[binary PNG image data omitted]
dipy-1.11.0/doc/devel/gitwash/branch_list_compare.png
[binary PNG image data omitted]
dipy-1.11.0/doc/devel/gitwash/collaborators.png
[binary PNG image data omitted]
dipy-1.11.0/doc/devel/gitwash/configure_git.rst
.. _configure-git:

===============
 Configure git
===============

.. _git-config-basic:

Overview
========

Your personal git configurations are saved in the ``.gitconfig`` file in
your home directory.

Here is an example ``.gitconfig`` file::

  [user]
          name = Your Name
          email = you@yourdomain.example.com

  [alias]
          ci = commit -a
          co = checkout
          st = status
          stat = status
          br = branch
          wdiff = diff --color-words

  [core]
          editor = vim

  [merge]
          log = true

You can edit this file directly or you can use the ``git config --global``
command::

  git config --global user.name "Your Name"
  git config --global user.email you@yourdomain.example.com
  git config --global alias.ci "commit -a"
  git config --global alias.co checkout
  git config --global alias.st status
  git config --global alias.stat status
  git config --global alias.br branch
  git config --global alias.wdiff "diff --color-words"
  git config --global core.editor vim
  git config --global merge.log true

To set up on another computer, you can copy your ``~/.gitconfig`` file, or
run the commands above.

In detail
=========

user.name and user.email
------------------------

It is good practice to tell git_ who you are, for labeling any changes you
make to the code. The simplest way to do this is from the command line::

  git config --global user.name "Your Name"
  git config --global user.email you@yourdomain.example.com

This will write the settings into your git configuration file, which
should now contain a user section with your name and email::

  [user]
         name = Your Name
         email = you@yourdomain.example.com

Of course you'll need to replace ``Your Name`` and
``you@yourdomain.example.com`` with your actual name and email address.

Aliases
-------

You might well benefit from some aliases to common commands.

For example, you might well want to be able to shorten ``git checkout`` to
``git co``.
Or you may want to alias ``git diff --color-words`` (which gives a nicely
formatted output of the diff) to ``git wdiff``.

The following ``git config --global`` commands::

  git config --global alias.ci "commit -a"
  git config --global alias.co checkout
  git config --global alias.st status
  git config --global alias.stat status
  git config --global alias.br branch
  git config --global alias.wdiff "diff --color-words"

will create an ``alias`` section in your ``.gitconfig`` file with contents
like this::

  [alias]
          ci = commit -a
          co = checkout
          st = status
          stat = status
          br = branch
          wdiff = diff --color-words

Editor
------

You may also want to make sure that your editor of choice is used::

  git config --global core.editor vim

Merging
-------

To enforce summaries when doing merges (``~/.gitconfig`` file again)::

  [merge]
     log = true

Or from the command line::

  git config --global merge.log true

.. _fancy-log:

Fancy log output
----------------

This is a very nice alias to get a fancy log output; it should go in the
``alias`` section of your ``.gitconfig`` file::

  lg = log --graph --pretty=format:'%Cred%h%Creset -%C(yellow)%d%Creset %s %Cgreen(%cr) %C(bold blue)[%an]%Creset' --abbrev-commit --date=relative

You use the alias with::

  git lg

and it gives graph / text output something like this (but with color!)::

  * 6d8e1ee - (HEAD, origin/my-fancy-feature, my-fancy-feature) NF - a fancy file (45 minutes ago) [Matthew Brett]
  * d304a73 - (origin/placeholder, placeholder) Merge pull request #48 from hhuuggoo/master (2 weeks ago) [Jonathan Terhorst]
  |\
  | * 4aff2a8 - fixed bug 35, and added a test in test_bugfixes (2 weeks ago) [Hugo]
  |/
  * a7ff2e5 - Added notes on discussion/proposal made during Data Array Summit. (2 weeks ago) [Corran Webster]
  * 68f6752 - Initial implementation of AxisIndexer - uses 'index_by' which needs to be changed to a call on an Axes object - this is all very sketchy right now. (2 weeks ago) [Corr
  * 376adbd - Merge pull request #46 from terhorst/master (2 weeks ago) [Jonathan Terhorst]
  |\
  | * b605216 - updated joshu example to current api (3 weeks ago) [Jonathan Terhorst]
  | * 2e991e8 - add testing for outer ufunc (3 weeks ago) [Jonathan Terhorst]
  | * 7beda5a - prevent axis from throwing an exception if testing equality with non-axis object (3 weeks ago) [Jonathan Terhorst]
  | * 65af65e - convert unit testing code to assertions (3 weeks ago) [Jonathan Terhorst]
  | * 956fbab - Merge remote-tracking branch 'upstream/master' (3 weeks ago) [Jonathan Terhorst]
  | |\
  | |/

Thanks to Yury V. Zaytsev for posting it.

.. include:: links.inc

dipy-1.11.0/doc/devel/gitwash/development_workflow.rst
.. _development-workflow:

####################
Development workflow
####################

You already have your own forked copy of the dipy_ repository, by
following :ref:`forking`. You have :ref:`set-up-fork`. You have configured
git by following :ref:`configure-git`. Now you are ready for some real
work.

Workflow summary
================

In what follows we'll refer to the upstream dipy ``master`` branch as
"trunk".

* Don't use your ``master`` branch for anything. Consider deleting it.
* When you are starting a new set of changes, fetch any changes from
  trunk, and start a new *feature branch* from that.
* Make a new branch for each separable set of changes |emdash| "one task,
  one branch" (`ipython git workflow`_).
* Name your branch for the purpose of the changes - e.g.
  ``bugfix-for-issue-14`` or ``refactor-database-code``.
* If you can possibly avoid it, avoid merging trunk or any other branches
  into your feature branch while you are working.
* If you do find yourself merging from trunk, consider
  :ref:`rebase-on-trunk`.
* Ask on the `DIPY mailing list`_ if you get stuck.
* Ask for code review!

This way of working helps to keep work well organized, with readable
history. This in turn makes it easier for project maintainers (that might
be you) to see what you've done, and why you did it.

See `linux git workflow`_ and `ipython git workflow`_ for some explanation.

Consider deleting your master branch
====================================

It may sound strange, but deleting your own ``master`` branch can help
reduce confusion about which branch you are on. See `deleting master on
github`_ for details.

.. _update-mirror-trunk:

Update the mirror of trunk
==========================

First make sure you have done :ref:`linking-to-upstream`.

From time to time you should fetch the upstream (trunk) changes from
github::

   git fetch upstream

This will pull down any commits you don't have, and set the remote
branches to point to the right commit. For example, 'trunk' is the branch
referred to by (remote/branchname) ``upstream/master`` - and if there have
been commits since you last checked, ``upstream/master`` will change after
you do the fetch.

.. _make-feature-branch:

Make a new feature branch
=========================

When you are ready to make some changes to the code, you should start a
new branch. Branches that are for a collection of related edits are often
called 'feature branches'.

Making a new branch for each set of related changes will make it easier
for someone reviewing your branch to see what you are doing.

Choose an informative name for the branch to remind yourself and the rest
of us what the changes in the branch are for. For example
``add-ability-to-fly``, or ``bugfix-for-issue-42``.

::

   # Update the mirror of trunk
   git fetch upstream
   # Make new feature branch starting at current trunk
   git branch my-new-feature upstream/master
   git checkout my-new-feature

Generally, you will want to keep your feature branches on your public
github_ fork of dipy_. To do this, you `git push`_ this new branch up to
your github repo. Generally (if you followed the instructions in these
pages, and by default), git will have a link to your github repo, called
``origin``. You push up to your own repo on github with::

   git push origin my-new-feature

In git >= 1.7 you can ensure that the link is correctly set by using the
``--set-upstream`` option::

   git push --set-upstream origin my-new-feature

From now on git will know that ``my-new-feature`` is related to the
``my-new-feature`` branch in the github repo.

.. _edit-flow:

The editing workflow
====================

Overview
--------

::

   # hack hack
   git add my_new_file
   git commit -am 'NF - some message'
   git push

In more detail
--------------

#. Make some changes.
#. See which files have changed with ``git status`` (see `git status`_).
   You'll see a listing like this one::

     # On branch my-new-feature
     # Changed but not updated:
     #   (use "git add <file>..." to update what will be committed)
     #   (use "git checkout -- <file>..." to discard changes in working directory)
     #
     #   modified:   README
     #
     # Untracked files:
     #   (use "git add <file>..." to include in what will be committed)
     #
     #   INSTALL
     no changes added to commit (use "git add" and/or "git commit -a")

#. Check what the actual changes are with ``git diff`` (`git diff`_).
#. Add any new files to version control with ``git add new_file_name``
   (see `git add`_).
#. To commit all modified files into the local copy of your repo, do
   ``git commit -am 'A commit message'``. Note the ``-am`` options to
   ``commit``. The ``m`` flag just signals that you're going to type a
   message on the command line. The ``a`` flag |emdash| you can just take
   on faith |emdash| or see `why the -a flag?`_ |emdash| and the helpful
   use-case description in the `tangled working copy problem`_. The
   `git commit`_ manual page might also be useful.
#. To push the changes up to your forked repo on github, do a
   ``git push`` (see `git push`_).

Ask for your changes to be reviewed or merged
=============================================

When you are ready to ask for someone to review your code and consider a
merge:

#. Go to the URL of your forked repo, say
   ``https://github.com/your-user-name/dipy``.
#. Use the 'Branch' dropdown menu near the top left of the page to select
   the branch with your changes:

   .. image:: branch_dropdown.png

#. Click on the 'New pull request' button near the 'Branch' dropdown.
   Enter a title for the set of changes, and some explanation of what
   you've done. Say if there is anything you'd like particular attention
   for - like a complicated change or some code you are not happy with.
   If you don't think your request is ready to be merged, just say so in
   your pull request message. This is still a good way of getting some
   preliminary code review.

Some other things you might want to do
======================================

Delete a branch on github
-------------------------

::

   git checkout master
   # delete branch locally
   git branch -D my-unwanted-branch
   # delete branch on github
   git push origin :my-unwanted-branch

(Note the colon ``:`` before ``my-unwanted-branch``.) See also:
`remove remote branch`_.

Several people sharing a single repository
------------------------------------------

If you want to work on some stuff with other people, where you are all
committing into the same repository, or even the same branch, then just
share it via github.

First fork dipy into your account, as from :ref:`forking`.

Then, go to your forked repository github page, say
``https://github.com/your-user-name/dipy``.

Click on 'Settings' on the right, then 'Collaborators' on the left, and
add anyone else to the repo as a collaborator:

   .. image:: collaborators.png

Now all those people can do::

    git clone git@github.com:your-user-name/dipy.git

Remember that links starting with ``git@`` use the ssh protocol and are
read-write; links starting with ``git://`` are read-only. Your
collaborators can then commit directly into that repo with the usual::

     git commit -am 'ENH - much better code'
     git push origin master # pushes directly into your repo

Explore your repository
-----------------------

To see a graphical representation of the repository branches and commits::

   gitk --all

To see a linear list of commits for this branch::

   git log

You can also look at the `network graph visualizer`_ for your github repo.

Finally the :ref:`fancy-log` ``lg`` alias will give you a reasonable
text-based graph of the repository.

.. _rebase-on-trunk:

Rebasing on trunk
-----------------

Let's say you thought of some work you'd like to do. You
:ref:`update-mirror-trunk` and :ref:`make-feature-branch` called
``cool-feature``. At this stage trunk is at some commit, let's call it E.
Now you make some new commits on your ``cool-feature`` branch, let's call
them A, B, C. Maybe your changes take a while, or you come back to them
after a while.
In the meantime, trunk has progressed from commit E to commit (say) G::

          A---B---C cool-feature
         /
    D---E---F---G trunk

At this stage you consider merging trunk into your feature branch, and you
remember that this here page sternly advises you not to do that, because
the history will get messy. Most of the time you can just ask for a
review, and not worry that trunk has got a little ahead. But sometimes,
the changes in trunk might affect your changes, and you need to harmonize
them. In this situation you may prefer to do a rebase.

rebase takes your changes (A, B, C) and replays them as if they had been
made to the current state of ``trunk``. In other words, in this case, it
takes the changes represented by A, B, C and replays them on top of G.
After the rebase, your history will look like this::

              A'--B'--C' cool-feature
             /
    D---E---F---G trunk

See `rebase without tears`_ for more detail.

To do a rebase on trunk::

    # Update the mirror of trunk
    git fetch upstream
    # go to the feature branch
    git checkout cool-feature
    # make a backup in case you mess up
    git branch tmp cool-feature
    # rebase cool-feature onto trunk
    git rebase --onto upstream/master upstream/master cool-feature

In this situation, where you are already on branch ``cool-feature``, the
last command can be written more succinctly as::

    git rebase upstream/master

When all looks good you can delete your backup branch::

    git branch -D tmp

If it doesn't look good you may need to have a look at
:ref:`recovering-from-mess-up`.

If you have made changes to files that have also changed in trunk, this
may generate merge conflicts that you need to resolve - see the
`git rebase`_ man page for some instructions at the end of the
"Description" section. There is some related help on merging in the git
user manual - see `resolving a merge`_.

.. _recovering-from-mess-up:

Recovering from mess-ups
------------------------

Sometimes, you mess up merges or rebases. Luckily, in git it is relatively
straightforward to recover from such mistakes.

If you mess up during a rebase::

    git rebase --abort

If you notice you messed up after the rebase::

    # reset branch back to the saved point
    git reset --hard tmp

If you forgot to make a backup branch::

    # look at the reflog of the branch
    git reflog show cool-feature

    8630830 cool-feature@{0}: commit: BUG: io: close file handles immediately
    278dd2a cool-feature@{1}: rebase finished: refs/heads/my-feature-branch onto 11ee694744f2552d
    26aa21a cool-feature@{2}: commit: BUG: lib: make seek_gzip_factory not leak gzip obj
    ...

    # reset the branch to where it was before the botched rebase
    git reset --hard cool-feature@{2}

.. _rewriting-commit-history:

Rewriting commit history
------------------------

.. note::

   Do this only for your own feature branches.

There's an embarrassing typo in a commit you made? Or perhaps you made
several false starts you would like posterity not to see.

This can be done via *interactive rebasing*.

Suppose that the commit history looks like this::

    git log --oneline
    eadc391 Fix some remaining bugs
    a815645 Modify it so that it works
    2dec1ac Fix a few bugs + disable
    13d7934 First implementation
    6ad92e5 * masked is now an instance of a new object, MaskedConstant
    29001ed Add pre-nep for a copule of structured_array_extensions.
    ...

and ``6ad92e5`` is the last commit in the ``cool-feature`` branch. Suppose
we want to make the following changes:

* Rewrite the commit message for ``13d7934`` to something more sensible.
* Combine the commits ``2dec1ac``, ``a815645``, ``eadc391`` into a single
  one.
We do as follows::

    # make a backup of the current state
    git branch tmp HEAD
    # interactive rebase
    git rebase -i 6ad92e5

This will open an editor with the following text in it::

    pick 13d7934 First implementation
    pick 2dec1ac Fix a few bugs + disable
    pick a815645 Modify it so that it works
    pick eadc391 Fix some remaining bugs

    # Rebase 6ad92e5..eadc391 onto 6ad92e5
    #
    # Commands:
    #  p, pick = use commit
    #  r, reword = use commit, but edit the commit message
    #  e, edit = use commit, but stop for amending
    #  s, squash = use commit, but meld into previous commit
    #  f, fixup = like "squash", but discard this commit's log message
    #
    # If you remove a line here THAT COMMIT WILL BE LOST.
    # However, if you remove everything, the rebase will be aborted.
    #

To achieve what we want, we will make the following changes to it::

    r 13d7934 First implementation
    pick 2dec1ac Fix a few bugs + disable
    f a815645 Modify it so that it works
    f eadc391 Fix some remaining bugs

This means that (i) we want to edit the commit message for ``13d7934``,
and (ii) collapse the last three commits into one. Now we save and quit
the editor.

Git will then immediately bring up an editor for editing the commit
message. After revising it, we get the output::

    [detached HEAD 721fc64] FOO: First implementation
     2 files changed, 199 insertions(+), 66 deletions(-)
    [detached HEAD 0f22701] Fix a few bugs + disable
     1 files changed, 79 insertions(+), 61 deletions(-)
    Successfully rebased and updated refs/heads/my-feature-branch.

and the history looks now like this::

    0f22701 Fix a few bugs + disable
    721fc64 ENH: Sophisticated feature
    6ad92e5 * masked is now an instance of a new object, MaskedConstant

If it went wrong, recovery is again possible as explained
:ref:`above <recovering-from-mess-up>`.

.. include:: links.inc

dipy-1.11.0/doc/devel/gitwash/dot2_dot3.rst
.. _dot2-dot3:

========================================
 Two and three dots in difference specs
========================================

Thanks to Yarik Halchenko for this explanation.

Imagine a series of commits A, B, C, D... Imagine that there are two
branches, *topic* and *master*. You branched *topic* off *master* when
*master* was at commit 'E'. The graph of the commits looks like this::

         A---B---C topic
        /
    D---E---F---G master

Then::

    git diff master..topic

will output the difference from G to C (i.e. with effects of F and G),
while::

    git diff master...topic

would output just differences in the topic branch (i.e. only A, B, and C).

dipy-1.11.0/doc/devel/gitwash/following_latest.rst
.. _following-latest:

=============================
 Following the latest source
=============================

These are the instructions if you just want to follow the latest DIPY
source, but you don't need to do any development for now.

The steps are:

* :ref:`install-git`
* get local copy of the `dipy github`_ git repository
* update local copy from time to time

Get the local copy of the code
==============================

From the command line::

   git clone git://github.com/dipy/dipy.git

You now have a copy of the code tree in the new ``dipy`` directory.

Updating the code
=================

From time to time you may want to pull down the latest code. Do this with::

   cd dipy
   git pull

The tree in ``dipy`` will now have the latest changes from the initial
repository.

.. include:: links.inc
dipy-1.11.0/doc/devel/gitwash/forking_button.png
[binary PNG image data omitted]
dipy-1.11.0/doc/devel/gitwash/forking_hell.rst
.. _forking:

======================================================
 Making your own copy (fork) of DIPY
======================================================

You need to do this only once. The instructions here are very similar to
the instructions at http://help.github.com/forking/ |emdash| please see
that page for more detail. We're repeating some of it here just to give
the specifics for the `DIPY`_ project, and to suggest some default names.

Set up and configure a github account
=====================================

If you don't have a github account, go to the github page, and make one.

You then need to configure your account to allow write access |emdash| see
the ``Generating SSH keys`` help on `github help`_.

Create your own forked copy of DIPY_
======================================================

#. Log into your github account.
#. Go to the `dipy`_ github home at `dipy github`_.
#. Click on the *Fork* button:

   .. image:: forking_button.png

Now, after a short pause and some 'Hardcore forking action', you should
find yourself at the home page for your own forked copy of `dipy`_.

.. include:: links.inc

dipy-1.11.0/doc/devel/gitwash/git_development.rst
.. _git-development:

=====================
 Git for development
=====================

Contents:

.. toctree::
   :maxdepth: 2

   forking_hell
   set_up_fork
   configure_git
   development_workflow
   maintainer_workflow

dipy-1.11.0/doc/devel/gitwash/git_install.rst
.. _install-git:

=============
 Install git
=============

Overview
========

================ =============
Debian / Ubuntu  ``sudo apt-get install git-core``
Fedora           ``sudo yum install git-core``
Windows          Download and install msysGit_
macOS            Use the git-osx-installer_
================ =============

In detail
=========

See the git page for the most recent information.

Have a look at the github install help pages available from `github help`_.

There are good instructions here: `Installing Git`_.

.. include:: links.inc

dipy-1.11.0/doc/devel/gitwash/git_intro.rst
==============
 Introduction
==============

These pages describe a git_ and github_ workflow for the DIPY_ project.

There are several different workflows here, for different ways of working
with DIPY.

This is not a comprehensive git reference, it's just a workflow for our
own project. It's tailored to the github hosting service. You may well
find better or quicker ways of getting stuff done with git, but these
should get you started.

For general resources for learning git, see :ref:`git-resources`.

..
include:: links.inc dipy-1.11.0/doc/devel/gitwash/git_links.inc000066400000000000000000000065741476546756600205750ustar00rootroot00000000000000.. This (-*- rst -*-) format file contains commonly used link targets and name substitutions. It may be included in many files, therefore it should only contain link targets and name substitutions. Try grepping for "^\.\. _" to find plausible candidates for this list. .. NOTE: reST targets are __not_case_sensitive__, so only one target definition is needed for nipy, NIPY, Nipy, etc... .. git stuff .. _git: http://git-scm.com/ .. _github: http://github.com .. _github help: http://help.github.com .. _msysgit: https://gitforwindows.org/ .. _git-osx-installer: http://code.google.com/p/git-osx-installer/downloads/list .. _subversion: http://subversion.tigris.org/ .. _git cheat sheet: http://github.com/guides/git-cheat-sheet .. _pro git book: http://git-scm.com/book .. _git svn crash course: http://git-scm.com/course/svn.html .. _learn.github: http://learn.github.com/ .. _network graph visualizer: http://github.com/blog/39-say-hello-to-the-network-graph-visualizer .. _git user manual: http://schacon.github.com/git/user-manual.html .. _git tutorial: http://schacon.github.com/git/gittutorial.html .. _git community book: http://book.git-scm.com/ .. _git ready: http://www.gitready.com/ .. _git casts: http://www.gitcasts.com/ .. _Fernando's git page: http://www.fperez.org/py4science/git.html .. _git magic: http://www-cs-students.stanford.edu/~blynn/gitmagic/index.html .. _git concepts: http://www.eecs.harvard.edu/~cduan/technical/git/ .. _git clone: http://schacon.github.com/git/git-clone.html .. _git checkout: http://schacon.github.com/git/git-checkout.html .. _git commit: http://schacon.github.com/git/git-commit.html .. _git push: http://schacon.github.com/git/git-push.html .. _git pull: http://schacon.github.com/git/git-pull.html .. _git add: http://schacon.github.com/git/git-add.html .. _git status: http://schacon.github.com/git/git-status.html .. _git diff: http://schacon.github.com/git/git-diff.html .. _git log: http://schacon.github.com/git/git-log.html .. _git branch: http://schacon.github.com/git/git-branch.html .. _git remote: http://schacon.github.com/git/git-remote.html .. _git rebase: http://schacon.github.com/git/git-rebase.html .. _git config: http://schacon.github.com/git/git-config.html .. _why the -a flag?: http://www.gitready.com/beginner/2009/01/18/the-staging-area.html .. _git staging area: http://www.gitready.com/beginner/2009/01/18/the-staging-area.html .. _tangled working copy problem: http://tomayko.com/writings/the-thing-about-git .. _git management: http://kerneltrap.org/Linux/Git_Management .. _linux git workflow: http://www.mail-archive.com/dri-devel@lists.sourceforge.net/msg39091.html .. _git parable: http://tom.preston-werner.com/2009/05/19/the-git-parable.html .. _git foundation: http://matthew-brett.github.com/pydagogue/foundation.html .. _deleting master on github: http://matthew-brett.github.com/pydagogue/gh_delete_master.html .. _rebase without tears: http://matthew-brett.github.com/pydagogue/rebase_without_tears.html .. _resolving a merge: http://schacon.github.com/git/user-manual.html#resolving-a-merge .. _ipython git workflow: http://mail.scipy.org/pipermail/ipython-dev/2010-October/006746.html .. _Installing Git: http://book.git-scm.com/2_installing_git.html .. _remove remote branch: https://www.git-tower.com/learn/git/faq/delete-remote-branch .. other stuff .. _python: http://www.python.org .. 
|emdash| unicode:: U+02014 .. vim: ft=rst dipy-1.11.0/doc/devel/gitwash/git_links.txt000066400000000000000000000070161476546756600206330ustar00rootroot00000000000000.. This (-*- rst -*-) format file contains commonly used link targets and name substitutions. It may be included in many files, therefore it should only contain link targets and name substitutions. Try grepping for "^\.\. _" to find plausible candidates for this list. .. NOTE: reST targets are __not_case_sensitive__, so only one target definition is needed for nipy, NIPY, Nipy, etc... .. PROJECTNAME placeholders .. _PROJECTNAME: http://neuroimaging.scipy.org .. _`PROJECTNAME github`: http://github.com/nipy .. _`PROJECTNAME mailing list`: https://mail.python.org/mailman/listinfo/neuroimaging .. nipy .. _nipy: http://nipy.org/nipy .. _`nipy github`: http://github.com/nipy/nipy .. _`nipy mailing list`: https://mail.python.org/mailman/listinfo/neuroimaging .. ipython .. _ipython: http://ipython.scipy.org .. _`ipython github`: http://github.com/ipython .. _`ipython mailing list`: http://mail.scipy.org/mailman/listinfo/IPython-dev .. dipy .. _dipy: https://dipy.org .. _`dipy github`: https://github.com/dipy/dipy .. _`dipy mailing list`: https://mail.python.org/mailman/listinfo/neuroimaging .. git stuff .. _git: http://git-scm.com/ .. _github: http://github.com .. _github help: http://help.github.com .. _msysgit: http://code.google.com/p/msysgit/downloads/list .. _git-osx-installer: http://code.google.com/p/git-osx-installer/downloads/list .. _subversion: http://subversion.tigris.org/ .. _git cheat sheet: http://github.com/guides/git-cheat-sheet .. _pro git book: http://git-scm.com/book .. _git svn crash course: http://git-scm.com/course/svn.html .. _learn.github: http://learn.github.com/ .. _network graph visualizer: http://github.com/blog/39-say-hello-to-the-network-graph-visualizer .. _git user manual: http://www.kernel.org/pub/software/scm/git/docs/user-manual.html .. _git tutorial: http://www.kernel.org/pub/software/scm/git/docs/gittutorial.html .. _git community book: http://book.git-scm.com/ .. _git ready: http://www.gitready.com/ .. _git casts: http://www.gitcasts.com/ .. _Fernando's git page: http://www.fperez.org/py4science/git.html .. _git magic: http://www-cs-students.stanford.edu/~blynn/gitmagic/index.html .. _git concepts: http://www.eecs.harvard.edu/~cduan/technical/git/ .. _git clone: http://www.kernel.org/pub/software/scm/git/docs/git-clone.html .. _git checkout: http://www.kernel.org/pub/software/scm/git/docs/git-checkout.html .. _git commit: http://www.kernel.org/pub/software/scm/git/docs/git-commit.html .. _git push: http://www.kernel.org/pub/software/scm/git/docs/git-push.html .. _git pull: http://www.kernel.org/pub/software/scm/git/docs/git-pull.html .. _git add: http://www.kernel.org/pub/software/scm/git/docs/git-add.html .. _git status: http://www.kernel.org/pub/software/scm/git/docs/git-status.html .. _git diff: http://www.kernel.org/pub/software/scm/git/docs/git-diff.html .. _git log: http://www.kernel.org/pub/software/scm/git/docs/git-log.html .. _git branch: http://www.kernel.org/pub/software/scm/git/docs/git-branch.html .. _git remote: http://www.kernel.org/pub/software/scm/git/docs/git-remote.html .. _git config: http://www.kernel.org/pub/software/scm/git/docs/git-config.html .. _why the -a flag?: http://www.gitready.com/beginner/2009/01/18/the-staging-area.html .. _git staging area: http://www.gitready.com/beginner/2009/01/18/the-staging-area.html .. 
_git management: http://kerneltrap.org/Linux/Git_Management
.. _linux git workflow: http://www.mail-archive.com/dri-devel@lists.sourceforge.net/msg39091.html
.. _git parable: http://tom.preston-werner.com/2009/05/19/the-git-parable.html
dipy-1.11.0/doc/devel/gitwash/git_resources.rst
.. _git-resources:

=============
git resources
=============

Tutorials and summaries
=======================

* `github help`_ has an excellent series of how-to guides.
* `learn.github`_ has an excellent series of tutorials.
* The `pro git book`_ is a good in-depth book on git.
* A `git cheat sheet`_ is a page giving summaries of common commands.
* The `git user manual`_
* The `git tutorial`_
* The `git community book`_
* `git ready`_ |emdash| a nice series of tutorials
* `git casts`_ |emdash| video snippets giving git how-tos.
* `git magic`_ |emdash| extended introduction with intermediate detail
* The `git parable`_ is an easy read explaining the concepts behind git.
* `git foundation`_ expands on the `git parable`_.
* Fernando Perez' git page |emdash| `Fernando's git page`_ |emdash| many links and tips
* A good but technical page on `git concepts`_
* `git svn crash course`_: git for those of us used to subversion_

Advanced git workflow
=====================

There are many ways of working with git; here are some posts on the rules of thumb that other projects have come up with:

* Linus Torvalds on `git management`_
* Linus Torvalds on `linux git workflow`_. Summary: use the git tools to make the history of your edits as clean as possible; merge from upstream edits as little as possible in branches where you are doing active development.

Manual pages online
===================

You can get these on your own machine with (e.g.) ``git help push`` or (same thing) ``git push --help``, but, for convenience, here are the online manual pages for some common commands:

* `git add`_
* `git branch`_
* `git checkout`_
* `git clone`_
* `git commit`_
* `git config`_
* `git diff`_
* `git log`_
* `git pull`_
* `git push`_
* `git remote`_
* `git status`_

.. include:: links.inc
dipy-1.11.0/doc/devel/gitwash/index.rst
.. _using-git:

Working with DIPY source code
=============================

Contents:

.. toctree::
   :maxdepth: 2

   git_intro
   git_install
   following_latest
   patching
   git_development
   dot2_dot3
   git_resources
dipy-1.11.0/doc/devel/gitwash/known_projects.inc
.. Known projects

.. PROJECTNAME placeholders
.. _PROJECTNAME: http://neuroimaging.scipy.org
.. _`PROJECTNAME github`: http://github.com/nipy
.. _`PROJECTNAME mailing list`: https://mail.python.org/mailman/listinfo/neuroimaging

.. numpy
.. _numpy: http://numpy.scipy.org
.. _`numpy github`: http://github.com/numpy/numpy
.. _`numpy mailing list`: http://mail.scipy.org/mailman/listinfo/numpy-discussion

.. scipy
.. _scipy: http://www.scipy.org
.. _`scipy github`: http://github.com/scipy/scipy
.. _`scipy mailing list`: http://mail.scipy.org/mailman/listinfo/scipy-dev

.. nipy
.. _nipy: http://nipy.org/nipy
.. _`nipy github`: http://github.com/nipy/nipy
.. _`nipy mailing list`: https://mail.python.org/mailman/listinfo/neuroimaging

.. ipython
.. _ipython: http://ipython.scipy.org
.. _`ipython github`: http://github.com/ipython/ipython
.. _`ipython mailing list`: http://mail.scipy.org/mailman/listinfo/IPython-dev

.. dipy
.. _dipy: https://dipy.org
.. _`dipy github`: https://github.com/dipy/dipy
.. _`dipy mailing list`: https://mail.python.org/mailman/listinfo/neuroimaging

.. nibabel
.. _nibabel: http://nipy.org/nibabel
.. _`nibabel github`: http://github.com/nipy/nibabel
.. _`nibabel mailing list`: https://mail.python.org/mailman/listinfo/neuroimaging

.. marsbar
.. _marsbar: http://marsbar.sourceforge.net
.. _`marsbar github`: http://github.com/matthew-brett/marsbar
.. _`MarsBaR mailing list`: https://lists.sourceforge.net/lists/listinfo/marsbar-users
dipy-1.11.0/doc/devel/gitwash/links.inc
.. compiling links file
.. include:: known_projects.inc
.. include:: this_project.inc
.. include:: git_links.inc
dipy-1.11.0/doc/devel/gitwash/maintainer_workflow.rst
.. _maintainer-workflow:

###################
Maintainer workflow
###################

This page is for maintainers |emdash| those of us who merge our own or other peoples' changes into the upstream repository.

As a maintainer, you are assumed to be completely on top of the basic stuff in :ref:`development-workflow`.

The instructions in :ref:`linking-to-upstream` add a remote that has read-only access to the upstream repo. Being a maintainer, you've got read-write access.

It's good to have your upstream remote have a scary name, to remind you that it's a read-write remote::

    git remote add upstream-rw git@github.com:dipy/dipy.git
    git fetch upstream-rw

*******************
Integrating changes
*******************

Let's say you have some changes that need to go into trunk (``upstream-rw/master``).

The changes are in some branch that you are currently on. For example, you are looking at someone's changes like this::

    git remote add someone git://github.com/someone/dipy.git
    git fetch someone
    git branch cool-feature --track someone/cool-feature
    git checkout cool-feature

So now you are on the branch with the changes to be incorporated upstream. The rest of this section assumes you are on this branch.

A few commits
=============

If there are only a few commits, consider rebasing to upstream::

    # Fetch upstream changes
    git fetch upstream-rw
    # rebase
    git rebase upstream-rw/master

Remember that, if you do a rebase, and push that, you'll have to close any github pull requests manually, because github will not be able to detect that the changes have already been merged.

A long series of commits
========================

If there is a longer series of related commits, consider a merge instead::

    git fetch upstream-rw
    git merge --no-ff upstream-rw/master

The merge will be detected by github, and should close any related pull requests automatically.

Note the ``--no-ff`` above. This forces git to make a merge commit, rather than doing a fast-forward, so that this set of commits branches off trunk and then rejoins the main history with a merge, rather than appearing to have been made directly on top of trunk.

Check the history
=================

Now, in either case, you should check that the history is sensible and you have the right commits::

    git log --oneline --graph
    git log -p upstream-rw/master..

The first line above just shows the history in a compact way, with a text representation of the history graph. The second line shows the log of commits excluding those that can be reached from trunk (``upstream-rw/master``), and including those that can be reached from current HEAD (implied with the ``..`` at the end). So, it shows the commits unique to this branch compared to trunk. The ``-p`` option shows the diff for these commits in patch form.
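If the trailing ``..`` syntax feels opaque, note that ``A..`` is simply shorthand for ``A..HEAD``, so the same range can be spelled out explicitly. A minimal sketch (the remote and branch names here are just the ones used above; substitute your own)::

    # identical to `git log -p upstream-rw/master..`
    git log -p upstream-rw/master..HEAD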
Push to trunk
=============

::

    git push upstream-rw my-new-feature:master

This pushes the ``my-new-feature`` branch in this repository to the ``master`` branch in the ``upstream-rw`` repository.

.. include:: links.inc
dipy-1.11.0/doc/devel/gitwash/patching.rst
================
 Making a patch
================

You've discovered a bug or something else you want to change in `DIPY`_ |emdash| excellent!

You've worked out a way to fix it |emdash| even better!

You want to tell us about it |emdash| best of all!

The easiest way is to make a *patch* or set of patches. Here we explain how. Making a patch is the simplest and quickest way to contribute, but if you're going to be doing anything more than a few quick fixes, please consider following the :ref:`git-development` model instead.

.. _making-patches:

Making patches
==============

Overview
--------

::

    # tell git who you are
    git config --global user.email you@yourdomain.example.com
    git config --global user.name "Your Name Comes Here"
    # get the repository if you don't have it
    git clone git://github.com/dipy/dipy.git
    # make a branch for your patching
    cd dipy
    git branch the-fix-im-thinking-of
    git checkout the-fix-im-thinking-of
    # hack, hack, hack
    # Tell git about any new files you've made
    git add somewhere/tests/test_my_bug.py
    # commit work in progress as you go
    git commit -am 'BF - added tests for Funny bug'
    # hack hack, hack
    git commit -am 'BF - added fix for Funny bug'
    # make the patch files
    git format-patch -M -C master

Then, send the generated patch files to the `DIPY mailing list`_ |emdash| where we will thank you warmly.

In detail
---------

#. Tell git who you are so it can label the commits you've made::

    git config --global user.email you@yourdomain.example.com
    git config --global user.name "Your Name Comes Here"

#. If you don't already have one, clone a copy of the `dipy`_ repository::

    git clone git://github.com/dipy/dipy.git
    cd dipy

#. Make a 'feature branch'. This will be where you work on your bug fix. It's nice and safe and leaves you with access to an unmodified copy of the code in the main branch::

    git branch the-fix-im-thinking-of
    git checkout the-fix-im-thinking-of

#. Do some edits, and commit them as you go::

    # hack, hack, hack
    # Tell git about any new files you've made
    git add somewhere/tests/test_my_bug.py
    # commit work in progress as you go
    git commit -am 'BF - added tests for Funny bug'
    # hack hack, hack
    git commit -am 'BF - added fix for Funny bug'

   Note the ``-am`` options to ``commit``. The ``m`` flag just signals that you're going to type a message on the command line. The ``a`` flag |emdash| you can just take on faith |emdash| or see `why the -a flag?`_.

#. When you have finished, check you have committed all your changes::

    git status

#. Finally, make your commits into patches. You want all the commits since you branched from the ``master`` branch::

    git format-patch -M -C master

   You will now have several files named for the commits::

    0001-BF-added-tests-for-Funny-bug.patch
    0002-BF-added-fix-for-Funny-bug.patch

   Send these files to the `DIPY mailing list`_ (see below for a sketch of how they can be applied at the other end).
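For reference, and as a sketch rather than an extra step you need to perform: patch files produced by ``git format-patch`` can be applied to another clone with ``git am``, which replays the commits complete with your commit messages and author information::

    # `review-funny-bug` is just a hypothetical branch name for the review
    git checkout -b review-funny-bug master
    git am 0001-BF-added-tests-for-Funny-bug.patch 0002-BF-added-fix-for-Funny-bug.patch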
When you are done, to switch back to the main copy of the code, just return to the ``master`` branch::

    git checkout master

Moving from patching to development
===================================

If you find you have done some patches, and you have one or more feature branches, you will probably want to switch to development mode. You can do this with the repository you have.

Fork the `dipy`_ repository on github |emdash| :ref:`forking`. Then::

    # checkout and refresh master branch from main repo
    git checkout master
    git pull origin master
    # rename pointer to main repository to 'upstream'
    git remote rename origin upstream
    # point your repo to default read / write to your fork on github
    git remote add origin git@github.com:your-user-name/dipy.git
    # push up any branches you've made and want to keep
    git push origin the-fix-im-thinking-of

Then you can, if you want, follow the :ref:`development-workflow`.

.. include:: links.inc
dipy-1.11.0/doc/devel/gitwash/pull_button.png
[binary PNG image data omitted]
dipy-1.11.0/doc/devel/gitwash/set_up_fork.rst
.. _set-up-fork:

==================
 Set up your fork
==================

First you follow the instructions for :ref:`forking`.

Overview
========

::

    git clone git@github.com:your-user-name/dipy.git
    cd dipy
    git remote add upstream git://github.com/dipy/dipy.git

In detail
=========

Clone your fork
---------------

#. Clone your fork to the local computer with ``git clone git@github.com:your-user-name/dipy.git``
#. Investigate. Change directory to your new repo: ``cd dipy``. Then ``git branch -a`` to show you all branches. You'll get something like::

    * master
    remotes/origin/master

   This tells you that you are currently on the ``master`` branch, and that you also have a ``remote`` connection to ``origin/master``. What remote repository is ``remote/origin``? Try ``git remote -v`` to see the URLs for the remote. They will point to your github fork.

   Now you want to connect to the upstream `dipy github`_ repository, so you can merge in changes from trunk.

.. _linking-to-upstream:

Linking your repository to the upstream repo
--------------------------------------------

::

    cd dipy
    git remote add upstream git://github.com/dipy/dipy.git

``upstream`` here is just the arbitrary name we're using to refer to the main `dipy`_ repository at `dipy github`_.

Note that we've used ``git://`` for the URL rather than ``git@``. The ``git://`` URL is read only.
This means that we can't accidentally (or deliberately) write to the upstream repo, and we are only going to use it to merge into our own code.

Just for your own satisfaction, show yourself that you now have a new 'remote', with ``git remote -v show``, giving you something like::

    upstream   git://github.com/dipy/dipy.git (fetch)
    upstream   git://github.com/dipy/dipy.git (push)
    origin     git@github.com:your-user-name/dipy.git (fetch)
    origin     git@github.com:your-user-name/dipy.git (push)

.. include:: links.inc
dipy-1.11.0/doc/devel/gitwash/this_project.inc
.. dipy
.. _`dipy`: https://dipy.org
.. _`dipy mailing list`: https://mail.python.org/mailman/listinfo/neuroimaging
dipy-1.11.0/doc/devel/index.rst
.. _development:

DIPY Developer Guide
====================

.. toctree::
   :maxdepth: 2

   intro
   installation_from_source
   gitwash/index
   make_release
   commit_codes
   coding_style_guideline
   bibliography
   benchmarking
   toolchain
dipy-1.11.0/doc/devel/installation_from_source.rst
.. _installation-from-source:

**********************
Installing from source
**********************

Getting the source
==================

Most likely you will want to get the source repository to be able to follow the latest changes. In that case, you can use::

    git clone https://github.com/dipy/dipy.git

For more information about this see :ref:`following-latest`.

After you've cloned the repository, you will have a new directory, containing the DIPY ``pyproject.toml`` file, among others. We'll call this directory - the one that contains the ``pyproject.toml`` file - the *DIPY source root directory*. Sometimes we'll also call it the ```` directory.

Building and installing
=======================

Install from source (all operating systems - recommended)
----------------------------------------------------------

Change directory into the *DIPY source root directory*.

To clean your directory from temporary files, use::

    git clean -fxd

This command will delete all files not present in your github repository.

Then, complete your installation by using these commands::

    pip install -r requirements/build.txt
    pip install --no-build-isolation -e .

These commands will do the following:

- install the requirements for building dipy
- remove the old dipy installation if present
- build dipy
- install dipy locally in your user environment

.. _install-source-nix:

Install from source for Unix (e.g., Linux, macOS)
--------------------------------------------------

Change directory into the *DIPY source root directory*.

First, install some python build packages::

    pip install -r requirements/build.txt

Then, to install for the system::

    pip install dipy

Or, to build DIPY in the source tree (locally) so you can run the code in the source tree (recommended for following the latest source) run::

    pip install --no-build-isolation -e .

Then add the *DIPY source root directory* into your ``PYTHONPATH`` environment variable. Search google for ``PYTHONPATH`` for details or see `python module path`_ for an introduction.

When adding dipy_ to the ``PYTHONPATH``, we usually add the ``PYTHONPATH`` at the end of ``~/.bashrc`` or (macOS) ``~/.bash_profile`` so we don't need to retype it every time.
This should look something like::

    export PYTHONPATH="/home/user_dir/Devel/dipy:$PYTHONPATH"

After changing the ``~/.bashrc`` or (macOS) ``~/.bash_profile`` try::

    source ~/.bashrc

or::

    source ~/.bash_profile

so that you can have immediate access to DIPY_ without needing to restart your terminal.

Ubuntu/Debian
-------------

::

    sudo apt-get install python3-dev python3-setuptools
    sudo apt-get install python3-numpy python3-scipy
    sudo apt-get install cython

then::

    sudo pip install nibabel

(we need the latest version of this one - hence ``pip`` rather than ``apt-get``).

You might want the optional packages too (highly recommended)::

    sudo apt-get install ipython python3-h5py python3-vtk python3-matplotlib

Now follow :ref:`install-source-nix`.

Fedora / Mandriva / maybe Redhat
--------------------------------

Same as above, but use yum rather than apt-get when necessary.

Now follow :ref:`install-source-nix`.

Windows
-------

Anaconda_ is probably the easiest way to install the dependencies that you need. To build from source, you will also need to install the exact compiler which is used with your specific version of python. To get this information, type this command in a shell like ``cmd`` or Powershell_::

    python3 -c "import platform;print(platform.python_compiler())"

This command should print information of this form::

    MSC v.1900 64 bit (AMD64)

Now that you have found the relevant compiler, you have to install the VisualStudioBuildTools_ by respecting the following table::

    Visual C++ 2008  (9.0)   MSC_VER=1500
    Visual C++ 2010 (10.0)   MSC_VER=1600
    Visual C++ 2012 (11.0)   MSC_VER=1700
    Visual C++ 2013 (12.0)   MSC_VER=1800
    Visual C++ 2015 (14.0)   MSC_VER=1900
    Visual C++ 2017 (15.0)   MSC_VER=1910
    Visual C++ 2019 (16.0)   MSC_VER=1920
    Visual C++ 2022 (17.0)   MSC_VER=1930

After the VisualStudioBuildTools_ installation, restart a command shell and change directory into the *DIPY source root directory*.

Start by installing the build tools::

    pip install -r requirements/build.txt

Then, to install into your system::

    pip install dipy

To install inplace - so that DIPY is running out of the source code directory::

    pip install --no-build-isolation -e .

(this is the mode we recommend for following the latest source code). If you get an error saying "unable to find vcvarsall.bat" then you need to check your environment variable ``PATH`` or reinstall VisualStudioBuildTools_. Setuptools should automatically detect the compiler and use it.

macOS
-----

Make sure you have Xcode_ and Anaconda_ installed. From here follow the :ref:`install-source-nix` instructions.

OpenMP with macOS
-----------------

OpenMP_ is a standard library for efficient multithreaded applications. This is used in DIPY for speeding up many different parts of the library (e.g., denoising and bundle registration). If you do not have an OpenMP-enabled compiler, you can still compile DIPY from source using the above instructions, but it might not take advantage of the multithreaded parts of the code. To be able to compile DIPY from source with OpenMP on macOS, you will have to do a few more things. First of all, you will need to install the Homebrew_ package manager. Next, you will need to install and configure the compiler. You have two options: using the GCC compiler or the CLANG compiler. This depends on your python installation:

Under Anaconda
~~~~~~~~~~~~~~

We recommend installing llvm via Anaconda_. Run the following::

    conda install -c conda-forge llvm-openmp

In case the compiler is not detected automatically, you can specify the compiler by using the environment variable ``CC``.
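For example (a sketch; the compiler name or path depends on your setup, and build backends generally honor ``CC``)::

    CC=clang pip install --no-build-isolation -e .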
Under Homebrew Python or python.org Python
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

If you are already using the Homebrew Python, or the standard python.org Python, you will need to use the llvm compiler with OMP. Run::

    brew install llvm
    brew install libomp
    export CC="/opt/homebrew/opt/llvm/bin/clang"

In case the compiler or OpenMP are not detected, you can specify some environment variables. For example, you can add the following lines to your ``~/.bash_profile``::

    export CC="/opt/homebrew/opt/llvm/bin/clang"
    export CXX="/opt/homebrew/opt/llvm/bin/clang++"
    export LDFLAGS="-L/opt/homebrew/opt/libomp/lib -L/opt/homebrew/opt/llvm/lib"
    export CPPFLAGS="-I/opt/homebrew/opt/libomp/include -I/opt/homebrew/opt/llvm/include"
    export CFLAGS="-I/opt/homebrew/opt/libomp/include -I/opt/homebrew/opt/llvm/include"

Building and installing
~~~~~~~~~~~~~~~~~~~~~~~

Whether you are using Anaconda_ or Homebrew/python.org Python, you will then need to run ``pip install dipy``. When you do that, it should now compile the code with this OpenMP-enabled compiler, and things should go faster!

Testing
=======

If you want to run the tests::

    sudo pip install pytest

Then, in the terminal from ````::

    pytest -svv dipy

You can also run the examples in ``/doc``.

Documentation
=============

To build the documentation in HTML on your computer you will need to do::

    sudo pip install sphinx

Then change directory to ```` and::

    cd doc
    make clean
    make -C . html

Tip
---

Building the entire ``DIPY`` documentation takes a few hours. You may want to skip building the documentation for the examples, which will reduce the documentation build time to a few minutes. You can do so by executing::

    make -C . html-no-examples

Troubleshooting
---------------

If you encounter the following error when trying to build the documentation::

    tools/build_modref_templates.py dipy reference
    *WARNING* API documentation not generated: Can not import dipy
    tools/docgen_cmd.py dipy reference_cmd
    *WARNING* Command line API documentation not generated: Cannot import dipy
    Build API docs...done.
    cd examples_built && ../../tools/make_examples.py
    Traceback (most recent call last):
      File "../../tools/make_examples.py", line 33, in
        import dipy
    ModuleNotFoundError: No module named 'dipy'

it is probably due to a conflict in which ``Sphinx`` version gets picked: this happens when the system's ``Sphinx`` package is used instead of the virtual environment's ``Sphinx`` package, with the former trying to import a ``DIPY`` version installed in the system. The ``Sphinx`` package used should correspond to that of the virtual environment where ``DIPY`` lives. This can be solved by specifying the path to the ``Sphinx`` package in the virtual environment::

    make html SPHINXBUILD='python3 /sphinx-build'

.. include:: ../links_names.inc
dipy-1.11.0/doc/devel/intro.rst
==============
 Introduction
==============

DIPY_ is always seeking courageous scientists who want to take dMRI analysis to the next level. If you share the same vision and you are willing to share your code, please let us know; we will be happy to help.

The lead developer is Eleftherios Garyfallidis, with support from Ian Nimmo-Smith, Matthew Brett, Bago Amirbekian, Ariel Rokem, Stefan van der Walt and (your name here).
See the main documentation for the full list of DIPY developers and contributors.

The primary development repository is `dipy github`_.

Please do contribute. Have a look at :ref:`using-git` for some ideas on how to get going. Have a look at the `nipy development guidelines`_ for our coding habits. In summary, please follow the `numpy coding style`_ - and of course - PEP8_. Test everything! We are using pytest_; see the existing code for example tests. If you can, please use our :ref:`commit-codes`.

But - just pitch in - send us some code - we'll give you feedback if you want it - that way we learn from each other. And - welcome...

If you are new to diffusion MRI and you want to learn more, here is a simple `video `_ we made for the general public. I hope you enjoy it, and apologies for the low resolution.

.. include:: ../links_names.inc
dipy-1.11.0/doc/devel/make_release.rst
.. _release-guide:

*********************************
A guide to making a DIPY release
*********************************

A guide for developers who are doing a DIPY release.

.. _release-checklist:

Release checklist
=================

* Review the open list of `dipy issues`_. Check whether there are outstanding issues that can be closed, and whether there are any issues that should delay the release. Label them!

* Check that there are no failing builds on `DIPY Github Actions`_. The ``PRE`` build is allowed to fail and does not block a PR merge, but it should block a release! So make sure that the ``PRE`` build is not failing.

* Generate, review and update the release notes. Run the following command from the root directory of DIPY::

    python3 tools/github_stats.py 1.7.0 > doc/release_notes/release1.8.rst

  where ``1.7.0`` was the last release tag name.

* Review and update the :file:`Changelog` file. Get a partial list of contributors with something like::

    git shortlog -ns 1.7.0..

  where ``1.7.0`` was the last release tag name. Then manually go over ``git shortlog 1.7.0..`` to make sure the release notes are as complete as possible and that every contributor was recognized.

* Use the opportunity to update the ``.mailmap`` file if there are any duplicate authors listed from ``git shortlog -nse``.

* Add any new authors to the ``AUTHORS`` file.

* Check the copyright years in ``doc/conf.py`` and ``LICENSE``.

* Check the examples - we really need an automated check here.

* Check the ``pyx`` file doctests with::

    ./tools/doctest_extmods.py dipy

  We really need an automated run of these using the buildbots, but we haven't done it yet.

* Check the ``README`` in the root directory. Check all the links are still valid.

* Check all the DIPY builds are green on the nipy `DIPY Github Actions`_.

* If you have `DIPY Github Actions`_ building set up, you might want to push the code in its current state to a branch that will build, e.g.::

    git branch -D pre-release-test # in case branch already exists
    git co -b pre-release-test

* Clean and compile::

    make distclean
    git clean -fxd
    python3 -m build

  or::

    pip install --no-build-isolation -e .

* Make sure all tests pass on your local machine (from the ```` directory)::

    cd ..
    pytest -svv --with-doctest --pyargs dipy
    cd dipy # back to the root directory

* Check the documentation doctests::

    cd doc
    make doctest
    cd ..
  At the moment this generates lots of errors from the autodoc documentation running the doctests in the code, where the doctests pass when run in pytest - we should find out why this is at some point, but leave it for now.

* Build and test the DIPY wheels. See the `wheel builder README `_ for instructions. In summary, clone the wheel-building repo, edit the ``.github/workflows/wheel.yml`` text files (if present) with the branch or commit for the release, commit and then push back up to github. This will trigger a wheel build and test on macOS, Linux and Windows. Check the build has passed on the Github Actions interface at https://github.com/MacPython/dipy-wheels/actions. You'll need commit privileges to the ``dipy-wheels`` repo; ask Matthew Brett or on the mailing list if you do not have them.

* The release should now be ready.

Doing the release
=================

Doing the release has two steps:

* build and upload the DIPY wheels;
* make and upload the DIPY source release.

The trick here is to get all the testing and pushing to upstream done *before* you do the final release commit. There should be only one commit with the release version number, so you might want to make the release commit on your local machine, push to `dipy pypi`_, review, fix, rebase, until all is good. Then and only then do you push to upstream on github.

* Make the release commit. Edit :file:`pyproject.toml` to set the correct version number, and commit it with a message like REL: set version to . Don't push this commit to the DIPY_ repo yet.

* Finally tag the release locally with git tag . Continue with building release artifacts (next section). Only push the release commit to the DIPY_ repo once you have built the sdists and docs successfully. Then continue with building wheels. Only push the release tag to the repo once all wheels have been built successfully on `DIPY Github Actions`_.

* For the wheel build / upload, follow the `wheel builder README`_ instructions again. Edit the ``.github/workflows/wheel.yml`` files (if present) to give the release tag to build. Check the build has passed on the Github Actions interface at https://github.com/MacPython/dipy-wheels/actions. Now follow the instructions in the page above to download the built wheels to a local machine and upload to PyPI.

* Now it's time for the source release. Build the release files::

    make distclean
    git clean -fxd
    make source-release

* Once everything looks good, upload the source release to PyPi::

    pip install twine
    twine upload dist/*

* Check how everything looks on pypi - the description, the packages. If it doesn't look right, delete the release and try again.

* Make an annotated tag for the release with a tag of the form ``1.8.0``::

    git tag -am 'Public release 1.8.0' 1.8.0

* Set up maintenance / development branches

  If this is a full release you need to set up two branches, one for further substantial development (often called 'trunk') and another for maintenance releases.

  * Branch to maintenance::

      git co -b maint/1.8.x

    Set ``_version_extra`` back to ``.dev`` and bump ``_version_micro`` by 1. Thus the maintenance series will have version numbers like - say - '1.8.1.dev' until the next maintenance release - say '1.8.1'. Commit. Push with something like ``git push upstream-rw maint/1.8.x --set-upstream``

  * Start next development series::

      git co main-master

    then restore ``.dev`` to ``_version_extra``, and bump ``_version_minor`` by 1.
    Thus the development series ('trunk') will have a version number here of '1.9.0.dev' and the next full release will be '1.9.0'.

    Next merge the maintenance branch with the "ours" strategy. This just labels the maintenance branch `info.py` edits as seen but discarded, so we can merge from maintenance in future without getting spurious merge conflicts::

      git merge -s ours maint/1.8.x

    Push with something like ``git push upstream-rw main-master:master``

  If this is just a maintenance release from ``maint/1.8.x`` or similar, just tag and set the version number to - say - ``1.8.2.dev``.

* Push the tag with ``git push upstream-rw 1.8.0``

Uploading binary builds for the release
=======================================

By far the easiest way to do this is via the buildbots. In order to do this, you first need to push the release commit and the release tag to github, so the buildbots can find the released code and build it.

* In order to trigger the binary builds for the release commit, you need to go to the web interface for the binary builder, go to the "Force build" section, enter your username and password for the buildbot web service and enter the commit tag name in the *revision* field. For example, if the tag was ``1.8.0`` then you would enter ``1.8.0`` in the revision field of the form. This builds the exact commit labeled by the tag, which is what we want.

* Trigger binary builds for Windows from https://github.com/MacPython/dipy-wheels. Download the builds and upload them to pypi. You can upload the exe files with the *files* interface for the new DIPY release. Obviously you'll need to log in to do this, and you'll need to be an admin for the DIPY pypi project. For reference, if you need to do binary exe builds by hand, use something like::

    make distclean
    git clean -fxd

* Trigger binary builds for macOS from the buildbots at https://github.com/MacPython/dipy-wheels. Download the eggs and upload to pypi. Upload the dmg files with the *files* interface for the new DIPY release.

* Building macOS dmgs from the mpkg builds. The buildbot binary builders build ``mpkg`` directories, which are installers for macOS. These need their permissions to be fixed because the installers should install the files as the root user, group ``admin``. They all need to be converted to macOS disk images. Use the ``./tools/build_dmgs.py`` script, with something like this command line::

    ./tools/build_dmgs "dipy-dist/dipy-1.8.0-py*.mpkg"

  For this to work you'll need several things:

  * An account on a macOS box with sudo (Admin user) on which to run the script.
  * ssh access to the buildbot server http://nipy.bic.berkeley.edu (ask Matthew or Eleftherios).
  * a development version of ``bdist_mpkg`` installed from https://github.com/matthew-brett/bdist_mpkg. You need this second item for the ``reown_mpkg`` script that fixes the permissions.

  Upload the dmg files with the *files* interface for the new DIPY release.

Other stuff that needs doing for the release
============================================

* Checkout the tagged release, build the html docs and upload them to the github pages website::

    make upload

  You need to checkout the tagged version in order to get the version number correct for the doc build. The version number gets picked up from the ``info.py`` version.

* Announce to the mailing lists. With fear and trembling.

.. include:: ../links_names.inc
dipy-1.11.0/doc/devel/toolchain.rst
.. _toolchain-roadmap:

🛠️ Toolchain Roadmap
====================

The DIPY_ library relies on a set of primary dependencies, most notably Python, NumPy and Nibabel_, to function effectively. Additionally, a broader array of libraries and tools is necessary for both building the library and constructing its documentation.

It's important to note that these tools and libraries are continuously evolving. This document aims to outline how DIPY_ will manage its usage of these dynamic dependencies over time.

DIPY_ strives to maintain compatibility across multiple releases of its dependent libraries and tools. Requiring users to switch to other components with every release would significantly reduce the value of DIPY_. Nevertheless, retaining compatibility with older toolsets and libraries sets boundaries on incorporating newer functionalities and capabilities.

DIPY_ adopts a moderately conservative approach by ensuring compatibility with several major releases of Python and NumPy across major platforms. However, this stance may introduce further constraints, as illustrated in the C Compilers section.

- First and foremost, DIPY_ is a Python project, hence it requires a Python environment.
- Compilers for C code are needed, as well as for Cython.
- The Python environment needs the ``NumPy`` and ``Nibabel`` packages to be installed.
- Testing requires the ``pytest`` Python package.
- Building the documentation requires the Sphinx packages along with ``sphinx_design``, ``sphinxcontrib-bibtex``, ``sphinx-gallery``, and the PyData/GRG theme.

Building DIPY
-------------

Python Versions
^^^^^^^^^^^^^^^

DIPY is compatible with several versions of Python. When dropping support for older Python versions, DIPY takes guidance from `Scientific Python SPECS`_. Python 2.7 support was dropped starting from DIPY 1.0.0.

================ =======================================================================
Date             Pythons supported
================ =======================================================================
2018             Py2.7, Py3.4+ (DIPY 0.16.x is the last release to support Python 2.7)
2019             Py3.5+
mid-2020         Py3.6+
2022             Py3.7+
mid-2023         Py3.8+
2024             Py3.9+
2025             Py3.10+
================ =======================================================================

NumPy
^^^^^

While DIPY_ relies on NumPy, its releases are not bound to specific NumPy releases. Instead, DIPY_ strives for compatibility with at least the four preceding releases of NumPy. This approach necessitates writing DIPY_ using features common to all these versions, rather than relying solely on the latest NumPy features [1]_.

The table provided illustrates the compatible NumPy versions for each major Python release.

================= ======================== =======================
DIPY_ version     Python versions          NumPy versions
================= ======================== =======================
0.16.0            2.7, >=3.4, <=3.7        >=1.8.2, <= 1.16.x
1.0.0             >=3.5, <=3.7             >=1.13.3, <= 1.16.x
1.1.0/1           >=3.5, <=3.8             >=1.13.3, <= 1.17.3
1.2.0             >=3.6, <=3.8             >=1.14.5, <= 1.17.3
1.3.0             >=3.6, <=3.8             >=1.14.5, <1.17.3
1.4.0/1           >=3.6, <=3.9             >=1.14.5, <1.20.x
1.5.0             >=3.7, <=3.10            >=1.16.x, <1.23.0
1.6.0             >=3.7, <=3.11            >=1.16.x, <1.24.0
1.7.0             >=3.8, <=3.11            >=1.19.5, <1.24.0
1.8.0             >=3.8, <=3.12            >=1.19.5, <1.26.0
1.9.0             >=3.9, <3.12             >=1.21.6, <1.27.0
1.10.0            >=3.10, <3.13            >=1.21.6, <2.2.0
1.11.0            >=3.10, <3.14            >=1.21.6, <2.2.0
================= ======================== =======================

.. note:: The table above is not exhaustive. In specific cases, such as a particular architecture, these requirements could vary.
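To quickly check a given environment against the compatibility tables above, you can inspect the relevant versions at runtime. A minimal sketch (assuming DIPY and its dependencies are already installed)::

    import sys

    import nibabel
    import numpy

    import dipy

    # Versions that matter for the compatibility tables above.
    print(f"Python : {sys.version_info.major}.{sys.version_info.minor}")
    print(f"NumPy  : {numpy.__version__}")
    print(f"Nibabel: {nibabel.__version__}")
    print(f"DIPY   : {dipy.__version__}")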
Compilers
^^^^^^^^^

Building DIPY_ requires compilers for C, as well as the Python transpiler Cython.

To maintain compatibility with a large number of platforms & setups, especially where using the official wheels (or other distribution channels like Anaconda or conda-forge) is not possible, DIPY_ tries to keep compatibility with older compilers, on platforms that have not yet reached their official end-of-life.

As explained in more detail below, the current minimal compiler versions are:

========== =========================== =============================== ============================
Compiler   Default Platform (tested)   Secondary Platform (untested)   Minimal Version
========== =========================== =============================== ============================
GCC        Linux                       AIX, Alpine Linux, OSX          GCC 8.x
LLVM       OSX                         Linux, FreeBSD, Windows         LLVM 10.x
MSVC       Windows                     -                               Visual Studio 2019 (vc142)
========== =========================== =============================== ============================

Note that the lower bound for LLVM is not enforced. Older versions should work, but no version below LLVM 12 is tested regularly during development. Please file an issue if you encounter a problem during compilation.

Official Builds
~~~~~~~~~~~~~~~

Currently, DIPY_ wheels are being built as follows:

================ ============================== ============================== =============================
Platform         CI Base Images [2]_ [3]_ [4]_  Compilers                      Comment
================ ============================== ============================== =============================
Linux x86        ``ubuntu-22.04``               GCC 10.2.1                     ``cibuildwheel``
Linux arm        ``docker-builder-arm64``       GCC 11.3.0                     ``cibuildwheel``
OSX x86          ``macOS-11``                   clang-13                       ``cibuildwheel``
OSX arm          ``macos-monterey-xcode:14``    clang-13.1.6                   ``cibuildwheel``
Windows          ``windows-2019``               Microsoft Visual 2019 (14.20)  ``cibuildwheel``
================ ============================== ============================== =============================

Testing and Benchmarking
------------------------

Testing and benchmarking require recent versions of:

========================= ======== ====================================
Tool                      Version  URL
========================= ======== ====================================
pytest                    Recent   https://docs.pytest.org/en/latest/
asv (airspeed velocity)   Recent   https://asv.readthedocs.io/
========================= ======== ====================================

Building the Documentation
--------------------------

==================== =================================================
Tool                 Version
==================== =================================================
Sphinx               Whatever recent versions work. >= 6.0.
GRG Sphinx theme     Whatever recent versions work. >= 0.4.0.
Sphinx-Design        Whatever recent versions work. >= 0.5.0.
numpydoc             Whatever recent versions work. >= 1.8.0.
Sphinx-Gallery       Whatever recent versions work. >= 0.15.0.
Sphinxcontrib-bibtex Whatever recent versions work. >= 2.6.1
==================== =================================================

.. note:: Developer Note: The documentation build might require additional libraries based on the examples provided. Specifically, the versions of "numpy" and "nibabel" needed have implications for the Python docstring examples. It's essential for these examples to execute seamlessly in both the documentation build environment and any supported versions of "numpy/nibabel" that users might utilize with this DIPY_ release.
Packaging
---------

Recent versions of:

============= ======== ===============================================
Tool          Version  URL
============= ======== ===============================================
meson-python  Recent   https://meson-python.readthedocs.io/en/latest/
wheel         Recent   https://pythonwheels.com
cibuildwheel  Recent   https://cibuildwheel.readthedocs.io/en/stable/
============= ======== ===============================================

:ref:`release-guide` contains information on making and distributing a DIPY_ release.

References
----------

.. [1] https://numpy.org/doc/stable/release.html
.. [2] https://github.com/actions/runner-images
.. [3] https://cirrus-ci.org/guide/docker-builder-vm/#under-the-hood
.. [4] https://github.com/orgs/cirruslabs/packages?tab=packages&q=macos

.. include:: ../links_names.inc
dipy-1.11.0/doc/developers.rst
.. _dipy_developers:

Developers
==========

The core development team consists of the following individuals:

- **Eleftherios Garyfallidis**, Indiana University, IN, USA
- **Ariel Rokem**, University of Washington, WA, USA
- **Matthew Brett**, Birmingham University, Birmingham, UK
- **Bago Amirbekian**, Databricks, San Francisco, CA, USA
- **Omar Ocegueda**, Google, San Francisco, CA
- **Marc-Alexandre Côté**, Microsoft Research, Montreal, QC, CA
- **Serge Koudoro**, Indiana University, IN, USA
- **Gabriel Girard**, Swiss Federal Institute of Technology (EPFL), Lausanne, CH
- **Rafael Neto Henriques**, Cambridge University, UK
- **Matthieu Dumont**, Imeka, Sherbrooke, QC, CA
- **Ranveer Aggarwal**, Microsoft, Hyderabad, Telangana, India

And here is the rest of the wonderful contributors:

- **Ian Nimmo-Smith**, retired, formerly at MRC Cognition and Brain Sciences Unit, Cambridge, UK
- **Maxime Descoteaux**, University of Sherbrooke, QC, CA
- **Stefan Van der Walt**, University of California, Berkeley, CA, USA
- **Jean-Christophe Houde**, University of Sherbrooke, QC, CA and Imeka, Sherbrooke, QC, CA
- **Francois Rhéault**, University of Sherbrooke, QC, CA
- **Samuel St-Jean**, University Medical Center (UMC) Utrecht, Utrecht, NL
- **Michael Paquette**, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, DE
- **Christopher Nguyen**, Massachusetts General Hospital, MA, USA
- **Emanuele Olivetti**, NeuroInformatics Laboratory (NILab), Trento, IT
- **Yaroslav Halchenco**, PBS Department, Dartmouth, NH, USA
- **Emmanuel Caruyer**, Institut de Recherche en Informatique et Systèmes Aléatoires (IRISA), University of Rennes I, Rennes, FR
- **Sylvain Merlet**, INRIA, Sophia-Antipolis, FR
- **Erick Ziegler**, Université de Liège, BE
- **Kimberly Chan**, Stanford University, CA, USA
- **Chantal Tax**, Cardiff University, Cardiff, UK
- **Demian Wassermann**, INRIA, Sophia Antipolis, FR
- **Mauro Zucchelli**, INRIA, Sophia-Antipolis, France
- **Rutger Fick**, INRIA, Sophia Antipolis, FR
- **Gregory R. Lee**, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA
- **Endolith**, New York, NY, USA
- **Matthias Ekman**, Donders Institute for Brain, Cognition and Behaviour, Nijmegen, NL
- **Andrew Lawrence**, University of Cambridge, Cambridge, UK
- **Kesshi Jordan**, University of California, San Francisco, CA, USA
- **Maria Luisa Mandelli**, University of California, San Francisco, CA, USA
- **Adam Rybinski**, Jagiellonian University, Krakow, PL
- **Qiyuan Tian**, Stanford University, Stanford, CA, USA
- **Jon Haitz Legarreta Gorroño**, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- **Rafael Neto Henriques**, Champalimaud Neuroscience Programme, Champalimaud Centre for the Unknown, Lisbon, PT
- **Stephan Meesters**, Eindhoven University of Technology, NL
- **Himanshu Mishra**, Indian Institute of Technology, Kharagpur, IN
- **Alexander Gauvin**, University of Sherbrooke, QC, CA
- **Oscar Esteban**, Stanford University, CA, US
- **Bishakh Ghosh**, National Institute of Technology, Durgapur, IN
- **Dimitris Rozakis**, Tomotech, Athens, GR
- **Rohan Prinja**, Indian Institute of Technology, Bombay, IN
- **Sagun Pai**, Indian Institute of Technology, Bombay, IN
- **Vatsala Swaroop**, Mumbai, IN
- **Shahnawaz Ahmed**, Birla Institute of Technology and Science, Pilani, Goa, IN
- **Nil Goyette**, Imeka, Sherbrooke, QC, CA
- **Scott Trinkle**, University of Chicago, IL, USA
- **Kevin R. Sitek**, MIT McGovern Institute for Brain Research, MA, USA
- **Derek Pisner**, University of Texas at Austin, USA
- **Ross Lawrence**, Johns Hopkins University, USA
- **Eric Larson**, University of Washington, USA
- **Jakob Wasserthal**, German Cancer Research Center, DE
- **Bramsh Qamar Chandio**, Indiana University, IN, USA
- **Javier Guaje**, Indiana University, IN, USA
- **Shreyas Fadnavis**, Indiana University, IN, USA
- **Matt Cieslak**, University of Pennsylvania, USA
- **Sven Dorkenwald**, Princeton University, USA

Boundless collaboration is at the heart of DIPY_. We encourage everyone from anywhere in the world to join the team. You can start sharing your code `here`__. If you want to contribute but you don't know which area to focus on, please send us an e-mail. We will be more than happy to help.

__ `dipy github`_

.. include:: links_names.inc
dipy-1.11.0/doc/doc-requirements.txt
cython
numpy>=1.22.4
scipy>=1.8.1
nibabel>=4.0.0
h5py
cvxpy<=1.4.4
pandas
tables
matplotlib
fury>=0.10.0
scikit-learn
scikit-image
statsmodels>=0.14.0
numpydoc
sphinx!=4.4
sphinx-gallery>=0.10.0
tomli>=2.0.1
grg-sphinx-theme>=0.3
Jinja2
sphinx-design>=0.5.0
boto3
tensorflow
torch
sphinxcontrib-bibtex>=2.5.0
dipy-1.11.0/doc/examples/
dipy-1.11.0/doc/examples/.gitignore
gqs_tracks.npy
ten_tracks.npy
dipy-1.11.0/doc/examples/README.md
# Examples

These are the DIPY examples. They are built as docs in the DIPY `examples_built` directory in the documentation.

If you add an example (yes please!), please remember to add it to the `_valid_examples.toml` file located in this directory.
dipy-1.11.0/doc/examples/_valid_examples.toml
[main]
readme = """
:html_theme.sidebar_secondary.remove:

..
_examples: Examples ======== .. note:: The examples here are some uses of the analysis and visualization functionality of DIPY, with example data from actual neuroscience experiments, or with synthetic data, which is generated as part of the example. All the examples presented in the documentation are generated from *fully functioning* python scripts, which are available as part of the source distribution in the `doc/examples` folder. If you want to replicate a particular analysis or visualization, copy or download the relevant `.py` script, edit out the body of the text of the example and alter it to your purpose. .. contents:: :depth: 2 """ position = 0 # it should always be 0 enable = true # it should always be true files = [] [quick_start] readme = """ Quick Start ----------- """ position = 1 enable = true files = [ "quick_start.py", "tracking_introduction_eudx.py", ] [preprocessing] readme = """ Preprocessing ------------- """ position = 4 enable = true files = [ "brain_extraction_dwi.py", "denoise_ascm.py", "denoise_nlmeans.py", "denoise_gibbs.py", "denoise_localpca.py", "denoise_mppca.py", "denoise_patch2self.py", "gradients_spheres.py", "piesno.py", "reslice_datasets.py", "snr_in_cc.py", "motion_correction.py", ] [reconstruction] readme = """ Reconstruction -------------- Below, an overview of all reconstruction models available in DIPY. .. note:: Some reconstruction models do not have a tutorial yet .. include:: ../../reconstruction_models_list.rst """ position = 7 enable = true files = [ "kfold_xval.py", "reconst_csa.py", "reconst_csd.py", "reconst_cti.py", "reconst_mcsd.py", "reconst_dki.py", "reconst_dki_micro.py", "reconst_dsi.py", "reconst_dsi_metrics.py", "reconst_dsid.py", "reconst_dti.py", "reconst_forecast.py", "reconst_fwdti.py", "reconst_gqi.py", "reconst_ivim.py", "reconst_mapmri.py", "reconst_msdki.py", "reconst_qtdmri.py", "reconst_shore.py", "reconst_shore_metrics.py", "restore_dti.py", "reconst_sfm.py", "reconst_sh.py", "reconst_rumba.py", "reconst_qti.py", "reconst_qtiplus.py", "histology_resdnn.py", "reconst_bingham.py" ] [contextual_enhancement] readme = """ Contextual Enhancement ---------------------- """ position = 10 enable = true files = [ "contextual_enhancement.py", "fiber_to_bundle_coherence.py", ] [fiber_tracking] readme = """ Fiber Tracking -------------- .. 
include:: ../../tractography_methods_list.rst """ position = 13 enable = true files = [ "tracking_bootstrap_peaks.py", "tracking_deterministic.py", "tracking_introduction_eudx.py", "tracking_pft.py", "tracking_ptt.py", "tracking_probabilistic.py", "tracking_sfm.py", "tracking_stopping_criterion.py", "tracking_rumba.py", "linear_fascicle_evaluation.py", "surface_seed.py", ] [streamline_analysis] readme = """ Streamlines Analysis and Connectivity ------------------------------------- """ position = 17 enable = true files = [ "afq_tract_profiles.py", "cluster_confidence.py", "path_length_map.py", "streamline_length.py", "streamline_tools.py", "bundle_assignment_maps.py", "bundle_shape_similarity.py", ] [registration] readme = """ Registration ------------ """ position = 20 enable = true files = [ "affine_registration_3d.py", "bundle_registration.py", "bundle_group_registration.py", "register_binary_fuzzy.py", "streamline_registration.py", "bundlewarp_registration.py", "syn_registration_2d.py", "syn_registration_3d.py", "affine_registration_masks.py", ] [segmentation] readme = """ Segmentation ------------ """ position = 23 enable = true files = [ "bundle_extraction.py", "segment_clustering_features.py", "segment_clustering_metrics.py", "segment_extending_clustering_framework.py", "segment_quickbundles.py", "tissue_classification_dam.py", "tissue_classification.py", "brain_extraction_dwi.py", "fast_streamline_search.py", ] [simulations] readme = """ Simulation ---------- """ position = 26 enable = true files = [ "simulate_dki.py", "simulate_multi_tensor.py", "reconst_dsid.py", ] [Multiprocessing] readme = """ Multiprocessing --------------- """ position = 29 enable = true files = [ "reconst_csa_parallel.py", "reconst_csd_parallel.py", "tracking_disco_phantom.py", ] [file_formats] readme = """ File Formats ------------ """ position = 32 enable = true files = [ "streamline_formats.py", ] [visualization] readme = """ Visualization ------------- """ position = 35 enable = true files = [ "viz_advanced.py", "viz_bundles.py", "viz_roi_contour.py", "viz_slice.py", ] [workflows] readme = """ Workflows --------- """ position = 37 enable = true files = [ "combined_workflow_creation.py", "workflow_creation.py", ]dipy-1.11.0/doc/examples/affine_registration_3d.py000066400000000000000000000357041476546756600221470ustar00rootroot00000000000000""" ========================= Affine Registration in 3D ========================= This example explains how to compute an affine transformation to register two 3D volumes by maximization of their Mutual Information :footcite:p:`Mattes2003`. The optimization strategy is similar to that implemented in ANTS :footcite:p:`Avants2009`. We will do this twice. The first part of this tutorial will walk through the details of the process with the object-oriented interface implemented in the ``dipy.align`` module. The second part will use a simplified functional interface. 
""" from os.path import join as pjoin import numpy as np from dipy.align import affine_registration, register_dwi_to_template from dipy.align.imaffine import ( AffineMap, AffineRegistration, MutualInformationMetric, transform_centers_of_mass, ) from dipy.align.transforms import ( AffineTransform3D, RigidTransform3D, TranslationTransform3D, ) from dipy.data import fetch_stanford_hardi from dipy.data.fetcher import fetch_syn_data from dipy.io.image import load_nifti from dipy.viz import regtools ############################################################################### # Let's fetch two b0 volumes, the static image will be the b0 from the Stanford # HARDI dataset files, folder = fetch_stanford_hardi() static_data, static_affine, static_img = load_nifti( pjoin(folder, "HARDI150.nii.gz"), return_img=True ) static = np.squeeze(static_data)[..., 0] static_grid2world = static_affine ############################################################################### # Now the moving image files, folder2 = fetch_syn_data() moving_data, moving_affine, moving_img = load_nifti( pjoin(folder2, "b0.nii.gz"), return_img=True ) moving = moving_data moving_grid2world = moving_affine ############################################################################### # We can see that the images are far from aligned by drawing one on top of # the other. The images don't even have the same number of voxels, so in order # to draw one on top of the other we need to resample the moving image on a # grid of the same dimensions as the static image, we can do this by # "transforming" the moving image using an identity transform identity = np.eye(4) affine_map = AffineMap( identity, domain_grid_shape=static.shape, domain_grid2world=static_grid2world, codomain_grid_shape=moving.shape, codomain_grid2world=moving_grid2world, ) resampled = affine_map.transform(moving) regtools.overlay_slices( static, resampled, slice_index=None, slice_type=0, ltitle="Static", rtitle="Moving", fname="resampled_0.png", ) regtools.overlay_slices( static, resampled, slice_index=None, slice_type=1, ltitle="Static", rtitle="Moving", fname="resampled_1.png", ) regtools.overlay_slices( static, resampled, slice_index=None, slice_type=2, ltitle="Static", rtitle="Moving", fname="resampled_2.png", ) ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # Input images before alignment. # # # We can obtain a very rough (and fast) registration by just aligning the # centers of mass of the two images c_of_mass = transform_centers_of_mass( static, static_grid2world, moving, moving_grid2world, ) ############################################################################### # We can now transform the moving image and draw it on top of the static image, # registration is not likely to be good, but at least they will occupy roughly # the same space transformed = c_of_mass.transform(moving) regtools.overlay_slices( static, transformed, slice_index=None, slice_type=0, ltitle="Static", rtitle="Transformed", fname="transformed_com_0.png", ) regtools.overlay_slices( static, transformed, slice_index=None, slice_type=1, ltitle="Static", rtitle="Transformed", fname="transformed_com_1.png", ) regtools.overlay_slices( static, transformed, slice_index=None, slice_type=2, ltitle="Static", rtitle="Transformed", fname="transformed_com_2.png", ) ############################################################################### # .. 
rst-class:: centered small fst-italic fw-semibold # # Registration result by aligning the centers of mass of the images. # # # This was just a translation of the moving image towards the static image, # now we will refine it by looking for an affine transform. We first create the # similarity metric (Mutual Information) to be used. We need to specify the # number of bins to be used to discretize the joint and marginal probability # distribution functions (PDF), a typical value is 32. We also need to specify # the percentage (an integer in (0, 100]) of voxels to be used for computing # the PDFs, the most accurate registration will be obtained by using all # voxels, but it is also the most time-consuming choice. We specify full # sampling by passing None instead of an integer nbins = 32 sampling_prop = None metric = MutualInformationMetric(nbins=nbins, sampling_proportion=sampling_prop) ############################################################################### # To avoid getting stuck at local optima, and to accelerate convergence, we # use a multi-resolution strategy (similar to ANTS :footcite:p:`Avants2009`) by # building a Gaussian Pyramid. To have as much flexibility as possible, the user # can specify how this Gaussian Pyramid is built. First of all, we need to # specify how many resolutions we want to use. This is indirectly specified by # just providing a list of the number of iterations we want to perform at each # resolution. Here we will just specify 3 resolutions and a large number of # iterations, 10000 at the coarsest resolution, 1000 at the medium resolution # and 100 at the finest. These are the default settings level_iters = [10000, 1000, 100] ############################################################################### # To compute the Gaussian pyramid, the original image is first smoothed at each # level of the pyramid using a Gaussian kernel with the requested sigma. # A good initial choice is [3.0, 1.0, 0.0], this is the default sigmas = [3.0, 1.0, 0.0] ############################################################################### # Now we specify the sub-sampling factors. A good configuration is [4, 2, 1], # which means that, if the original image shape was (nx, ny, nz) voxels, then # the shape of the coarsest image will be about (nx//4, ny//4, nz//4), the # shape in the middle resolution will be about (nx//2, ny//2, nz//2) and the # image at the finest scale has the same size as the original image. This set # of factors is the default factors = [4, 2, 1] ############################################################################### # Now we go ahead and instantiate the registration class with the configuration # we just prepared affreg = AffineRegistration( metric=metric, level_iters=level_iters, sigmas=sigmas, factors=factors ) ############################################################################### # Using AffineRegistration we can register our images in as many stages as we # want, providing previous results as initialization for the next (the same # logic as in ANTS). The reason why it is useful is that registration is a # non-convex optimization problem (it may have more than one local optima), # which means that it is very important to initialize as close to the solution # as possible. For example, let's start with our (previously computed) rough # transformation aligning the centers of mass of our images, and then refine it # in three stages. First look for an optimal translation. 
The dictionary
# ``regtransforms`` contains all available transforms; we can obtain one of
# them by providing its name and the dimension (either 2 or 3) of the image we
# are working with. Since we are aligning volumes, we instantiate the 3D
# translation transform directly

transform = TranslationTransform3D()
params0 = None
starting_affine = c_of_mass.affine
translation = affreg.optimize(
    static,
    moving,
    transform,
    params0,
    static_grid2world=static_grid2world,
    moving_grid2world=moving_grid2world,
    starting_affine=starting_affine,
)

###############################################################################
# If we look at the result, we can see that this translation is much better
# than simply aligning the centers of mass

transformed = translation.transform(moving)
regtools.overlay_slices(
    static,
    transformed,
    slice_index=None,
    slice_type=0,
    ltitle="Static",
    rtitle="Transformed",
    fname="transformed_trans_0.png",
)
regtools.overlay_slices(
    static,
    transformed,
    slice_index=None,
    slice_type=1,
    ltitle="Static",
    rtitle="Transformed",
    fname="transformed_trans_1.png",
)
regtools.overlay_slices(
    static,
    transformed,
    slice_index=None,
    slice_type=2,
    ltitle="Static",
    rtitle="Transformed",
    fname="transformed_trans_2.png",
)

###############################################################################
# .. rst-class:: centered small fst-italic fw-semibold
#
# Registration result by translating the moving image, using Mutual
# Information.
#
#
#
# Now let's refine with a rigid transform (this may even modify our previously
# found optimal translation)

transform = RigidTransform3D()
params0 = None
starting_affine = translation.affine
rigid = affreg.optimize(
    static,
    moving,
    transform,
    params0,
    static_grid2world=static_grid2world,
    moving_grid2world=moving_grid2world,
    starting_affine=starting_affine,
)

###############################################################################
# This produces a slight rotation, and the images are now better aligned

transformed = rigid.transform(moving)
regtools.overlay_slices(
    static,
    transformed,
    slice_index=None,
    slice_type=0,
    ltitle="Static",
    rtitle="Transformed",
    fname="transformed_rigid_0.png",
)
regtools.overlay_slices(
    static,
    transformed,
    slice_index=None,
    slice_type=1,
    ltitle="Static",
    rtitle="Transformed",
    fname="transformed_rigid_1.png",
)
regtools.overlay_slices(
    static,
    transformed,
    slice_index=None,
    slice_type=2,
    ltitle="Static",
    rtitle="Transformed",
    fname="transformed_rigid_2.png",
)

###############################################################################
# .. rst-class:: centered small fst-italic fw-semibold
#
# Registration result with a rigid transform, using Mutual Information.
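###############################################################################
# Before adding more degrees of freedom it can be instructive to inspect the
# estimated transform numerically. The short sketch below is an illustrative
# check, not part of the original pipeline: it reads the 4x4 matrix from
# ``rigid.affine`` and, assuming the upper-left 3x3 block is a pure rotation
# (which holds for a rigid transform), recovers the rotation angle from the
# trace identity trace(R) = 1 + 2 cos(angle).

rotation_block = rigid.affine[:3, :3]
translation_vec = rigid.affine[:3, 3]
# clip guards against tiny numerical excursions outside [-1, 1]
cos_angle = np.clip((np.trace(rotation_block) - 1) / 2, -1.0, 1.0)
print("Translation magnitude (mm):", np.linalg.norm(translation_vec))
print("Rotation angle (degrees):", np.degrees(np.arccos(cos_angle)))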
# # # # Finally, let's refine with a full affine transform (translation, rotation, # scale and shear), it is safer to fit more degrees of freedom now since we # must be very close to the optimal transform transform = AffineTransform3D() params0 = None starting_affine = rigid.affine affine = affreg.optimize( static, moving, transform, params0, static_grid2world=static_grid2world, moving_grid2world=moving_grid2world, starting_affine=starting_affine, ) ############################################################################### # This results in a slight shear and scale transformed = affine.transform(moving) regtools.overlay_slices( static, transformed, slice_index=None, slice_type=0, ltitle="Static", rtitle="Transformed", fname="transformed_affine_0.png", ) regtools.overlay_slices( static, transformed, slice_index=None, slice_type=1, ltitle="Static", rtitle="Transformed", fname="transformed_affine_1.png", ) regtools.overlay_slices( static, transformed, slice_index=None, slice_type=2, ltitle="Static", rtitle="Transformed", fname="transformed_affine_2.png", ) ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # Registration result with an affine transform, using Mutual Information. # # # # Now, let's repeat this process with a simplified functional interface. # This interface constructs a pipeline of operations from a given list of # transformations. pipeline = ["center_of_mass", "translation", "rigid", "affine"] ############################################################################### # And then applies the transformations in the pipeline on the input (from left # to right) with a call to an `affine_registration` function, which takes # optional settings for things like the iterations, sigmas and factors. The # pipeline must be a list of strings with one or more of the following # transformations: center_of_mass, translation, rigid, rigid_isoscaling, # rigid_scaling and affine. xformed_img, reg_affine = affine_registration( moving, static, moving_affine=moving_affine, static_affine=static_affine, nbins=32, metric="MI", pipeline=pipeline, level_iters=level_iters, sigmas=sigmas, factors=factors, ) regtools.overlay_slices( static, xformed_img, slice_index=None, slice_type=0, ltitle="Static", rtitle="Transformed", fname="xformed_affine_0.png", ) regtools.overlay_slices( static, xformed_img, slice_index=None, slice_type=1, ltitle="Static", rtitle="Transformed", fname="xformed_affine_1.png", ) regtools.overlay_slices( static, xformed_img, slice_index=None, slice_type=2, ltitle="Static", rtitle="Transformed", fname="xformed_affine_2.png", ) ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # Registration result with an affine transform, using functional interface. # # # # # Alternatively, you can also use the `register_dwi_to_template` function that # needs to also know about the gradient table of the DWI data, provided as a # tuple of (bvals_file, bvecs_file). In this case, we are going to move the # diffusion data to the B0 image (the opposite of the previous examples), # which reverses what is the "moving" image and what is "static". 
xformed_dwi, reg_affine = register_dwi_to_template( dwi=static_img, gtab=(pjoin(folder, "HARDI150.bval"), pjoin(folder, "HARDI150.bvec")), template=moving_img, reg_method="aff", nbins=32, metric="MI", pipeline=pipeline, level_iters=level_iters, sigmas=sigmas, factors=factors, ) regtools.overlay_slices( moving, xformed_dwi, slice_index=None, slice_type=0, ltitle="Static", rtitle="Transformed", fname="xformed_dwi_0.png", ) regtools.overlay_slices( moving, xformed_dwi, slice_index=None, slice_type=1, ltitle="Static", rtitle="Transformed", fname="xformed_dwi_1.png", ) regtools.overlay_slices( moving, xformed_dwi, slice_index=None, slice_type=2, ltitle="Static", rtitle="Transformed", fname="xformed_dwi_2.png", ) ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # Same again, using the `dwi_to_template` functional interface. # # # References # ---------- # # .. footbibliography:: # dipy-1.11.0/doc/examples/affine_registration_masks.py000066400000000000000000000255311476546756600227540ustar00rootroot00000000000000""" ============================== Affine Registration with Masks ============================== This example explains how to compute a transformation to register two 3D volumes by maximization of their Mutual Information [Mattes03]_. The optimization strategy is similar to that implemented in ANTS [Avants11]_. We will use masks to define which pixels are used in the Mutual Information. Masking can also be done for registration of 2D images rather than 3D volumes. Masking for registration is useful in a variety of circumstances. For example, in cardiac MRI, where it is usually used to specify a region of interest on a 2D static image, e.g., the left ventricle in a short axis slice. This prioritizes registering the region of interest over other structures that move with respect to the heart. """ from os.path import join as pjoin import matplotlib.pyplot as plt import numpy as np from dipy.align import affine_registration, register_series, rigid, translation from dipy.align.imaffine import AffineMap, AffineRegistration, MutualInformationMetric from dipy.align.transforms import RigidTransform3D, TranslationTransform3D from dipy.data import fetch_stanford_hardi from dipy.io.image import load_nifti from dipy.viz import regtools ############################################################################### # Let's fetch a single b0 volume from the Stanford HARDI dataset. files, folder = fetch_stanford_hardi() static_data, static_affine, static_img = load_nifti( pjoin(folder, "HARDI150.nii.gz"), return_img=True ) static = np.squeeze(static_data)[..., 0] # pad array to help with this example pad_by = 10 static = np.pad( static, [(pad_by, pad_by), (pad_by, pad_by), (0, 0)], mode="constant", constant_values=0, ) static_grid2world = static_affine ############################################################################### # Let's create a moving image by transforming the static image. 
affmat = np.eye(4)
affmat[0, -1] = 4
affmat[1, -1] = 12
theta = 0.1
c, s = np.cos(theta), np.sin(theta)
affmat[0:2, 0:2] = np.array([[c, -s], [s, c]])
affine_map = AffineMap(
    affmat,
    domain_grid_shape=static.shape,
    domain_grid2world=static_grid2world,
    codomain_grid_shape=static.shape,
    codomain_grid2world=static_grid2world,
)
moving = affine_map.transform(static)
moving_affine = static_affine.copy()
moving_grid2world = static_grid2world.copy()

regtools.overlay_slices(
    static,
    moving,
    slice_index=None,
    slice_type=2,
    ltitle="Static",
    rtitle="Moving",
    fname="deregistered.png",
)

###############################################################################
# .. rst-class:: centered small fst-italic fw-semibold
#
# Same images but misaligned.
#
#
# Let's make some registration settings.

nbins = 32
sampling_prop = None
metric = MutualInformationMetric(nbins=nbins, sampling_proportion=sampling_prop)

# small number of iterations for this example
level_iters = [100, 10]
sigmas = [1.0, 0.0]
factors = [2, 1]

affreg = AffineRegistration(
    metric=metric, level_iters=level_iters, sigmas=sigmas, factors=factors
)

###############################################################################
# Now let's register these volumes together without any masking. For the
# purposes of this example, we will not provide an initial transformation
# based on centre of mass, but this would work fine with masks.
#
# Note that use of masks is not currently implemented for sparse sampling.

transform = TranslationTransform3D()
transl = affreg.optimize(
    static,
    moving,
    transform,
    None,
    static_grid2world=static_grid2world,
    moving_grid2world=moving_grid2world,
    starting_affine=None,
    static_mask=None,
    moving_mask=None,
)
transform = RigidTransform3D()
transl = affreg.optimize(
    static,
    moving,
    transform,
    None,
    static_grid2world=static_grid2world,
    moving_grid2world=moving_grid2world,
    starting_affine=transl.affine,
    static_mask=None,
    moving_mask=None,
)
transformed = transl.transform(moving)

regtools.overlay_slices(
    static,
    transformed,
    slice_index=None,
    slice_type=2,
    ltitle="Static",
    rtitle="Transformed",
    fname="transformed.png",
)

###############################################################################
# .. rst-class:: centered small fst-italic fw-semibold
#
# Registration result.
#
#
#
# We can also use a pipeline to achieve the same thing. For convenience in this
# tutorial, we will define a function that runs the pipeline and makes a figure.


def reg_func(figname, static_mask=None, moving_mask=None):
    """Convenience function for registration using a pipeline.

    Uses variables in global scope, except for static_mask and moving_mask.
    """
    pipeline = [translation, rigid]
    xformed_img, reg_affine = affine_registration(
        moving,
        static,
        moving_affine=moving_affine,
        static_affine=static_affine,
        nbins=32,
        metric="MI",
        pipeline=pipeline,
        level_iters=level_iters,
        sigmas=sigmas,
        factors=factors,
        static_mask=static_mask,
        moving_mask=moving_mask,
    )

    regtools.overlay_slices(
        static,
        xformed_img,
        slice_index=None,
        slice_type=2,
        ltitle="Static",
        rtitle="Transformed",
        fname=figname,
    )


###############################################################################
# Now we can run this function and hopefully get the same result.

reg_func("transformed_pipeline.png")

###############################################################################
# .. rst-class:: centered small fst-italic fw-semibold
#
# Registration result using pipeline.
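###############################################################################
# A crude numerical sanity check (a sketch of our own, not a DIPY API):
# correlate the voxel intensities of the static image with the moving image
# before and after alignment. Since both volumes share a grid here, the
# Pearson correlation should increase after registration.


def alignment_score(vol_a, vol_b):
    """Pearson correlation between two volumes of identical shape."""
    return np.corrcoef(vol_a.ravel(), vol_b.ravel())[0, 1]


print("Score before registration:", alignment_score(static, moving))
print("Score after registration:", alignment_score(static, transformed))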
###############################################################################
# Now let's modify the images in order to test masking. We will place three
# squares in the corners of both images, but in slightly different locations.
#
# We will make masks that cover these regions but with an extra border of
# pixels. This is because the masks need transforming and resampling during
# optimization, and we want to make sure that we are definitely covering the
# troublesome features.

sz = 15
pd = 10

# modify images
val = static.max() / 2.0
static[-sz - pd : -pd, -sz - pd : -pd, :] = val
static[pd : sz + pd, -sz - pd : -pd, :] = val
static[-sz - pd : -pd, pd : sz + pd, :] = val
moving[pd : sz + pd, pd : sz + pd, :] = val
moving[pd : sz + pd, -sz - pd : -pd, :] = val
moving[-sz - pd : -pd, pd : sz + pd, :] = val

# create masks
squares_st = np.zeros_like(static).astype(np.int32)
squares_mv = np.zeros_like(static).astype(np.int32)
squares_st[-sz - 1 - pd : -pd, -sz - 1 - pd : -pd, :] = 1
squares_st[pd : sz + 1 + pd, -sz - 1 - pd : -pd, :] = 1
squares_st[-sz - 1 - pd : -pd, pd : sz + 1 + pd, :] = 1
squares_mv[pd : sz + 1 + pd, pd : sz + 1 + pd, :] = 1
squares_mv[pd : sz + 1 + pd, -sz - 1 - pd : -pd, :] = 1
squares_mv[-sz - 1 - pd : -pd, pd : sz + 1 + pd, :] = 1

regtools.overlay_slices(
    static,
    moving,
    slice_index=None,
    slice_type=2,
    ltitle="Static",
    rtitle="Moving",
    fname="deregistered_squares.png",
)

###############################################################################
# .. rst-class:: centered small fst-italic fw-semibold
#
# Same images but misaligned, with white squares in the corners.

static_mask = np.abs(squares_st - 1)
moving_mask = np.abs(squares_mv - 1)

fig, ax = plt.subplots(1, 2)
ax[0].imshow(static_mask[:, :, 1].T, cmap="gray", origin="lower")
ax[0].set_title("static image mask")
ax[1].imshow(moving_mask[:, :, 1].T, cmap="gray", origin="lower")
ax[1].set_title("moving image mask")
plt.savefig("masked_static.png", bbox_inches="tight")

###############################################################################
# .. rst-class:: centered small fst-italic fw-semibold
#
# The masks.
#
#
#
# Let's try to register these new images without a mask.

reg_func("transformed_squares.png")

###############################################################################
# .. rst-class:: centered small fst-italic fw-semibold
#
# Registration fails to align the images because the squares pin the images.
#
#
#
# Now we will attempt to register the images using the masks that we defined.
#
# First, use a mask on the static image. Only pixels where the mask is non-zero
# in the static image will contribute to the Mutual Information.

reg_func("transformed_squares_mask.png", static_mask=static_mask)

###############################################################################
# .. rst-class:: centered small fst-italic fw-semibold
#
# Registration result using a static mask.
#
#
# We can also attempt the same thing using a moving image mask.

reg_func("transformed_squares_mask_2.png", moving_mask=moving_mask)

###############################################################################
# .. rst-class:: centered small fst-italic fw-semibold
#
# Registration result using a moving mask.
#
#
# And finally, we can use both masks at the same time.

reg_func(
    "transformed_squares_mask_3.png", static_mask=static_mask, moving_mask=moving_mask
)

###############################################################################
# .. rst-class:: centered small fst-italic fw-semibold
#
# Registration result using both a static mask and a moving mask.
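###############################################################################
# In practice you will rarely build masks by hand-written slicing as we did
# above. One convenient way to add the safety border of pixels discussed
# earlier is morphological dilation of the binary "ignore" region, e.g. with
# SciPy, on which DIPY already depends. This is a sketch of our own, not part
# of the original example; ``dilated_squares`` and ``safer_static_mask`` are
# illustrative names.

from scipy.ndimage import binary_dilation

# Grow the ignored region by a couple of voxels, then invert it to obtain the
# mask of pixels that *do* contribute to the Mutual Information.
dilated_squares = binary_dilation(squares_st, iterations=2)
safer_static_mask = np.logical_not(dilated_squares).astype(np.int32)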
# # # # In most use cases, it is likely that only a static mask will be required, # e.g., to register a series of images to a single static image. # # Let's make a series of volumes to demonstrate this idea, and register the # series to the first image in the series using a static mask: series = np.stack([static, moving, moving], axis=-1) pipeline = [translation, rigid] xformed, _ = register_series( series, ref=0, pipeline=pipeline, series_affine=moving_affine, static_mask=static_mask, ) regtools.overlay_slices( np.squeeze(xformed[..., 0]), np.squeeze(xformed[..., -2]), slice_index=None, slice_type=2, ltitle="Static", rtitle="Moving 1", fname="series_mask_1.png", ) regtools.overlay_slices( np.squeeze(xformed[..., 0]), np.squeeze(xformed[..., -1]), slice_index=None, slice_type=2, ltitle="Static", rtitle="Moving 2", fname="series_mask_2.png", ) ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # Registration of series using a static mask. # # # # In all of the examples above, different masking choices achieved essentially # the same result, but in general the results may differ depending on # differences between the static and moving images. # # # References # ---------- # # .. [Mattes03] Mattes, D., Haynor, D. R., Vesselle, H., Lewellen, T. K., # Eubank, W. (2003). PET-CT image registration in the chest using # free-form deformations. IEEE Transactions on Medical Imaging, # 22(1), 120-8. # .. [Avants11] Avants, B. B., Tustison, N., & Song, G. (2011). Advanced # Normalization Tools (ANTS), 1-35. dipy-1.11.0/doc/examples/afq_tract_profiles.py000066400000000000000000000147661476546756600214130ustar00rootroot00000000000000""" ==================================================== Extracting AFQ tract profiles from segmented bundles ==================================================== In this example, we will extract the values of a statistic from a volume, using the coordinates along the length of a bundle. These are called `tract profiles` One of the challenges of extracting tract profiles is that some of the streamlines in a bundle may diverge significantly from the bundle in some locations. To overcome this challenge, we will use a strategy similar to that described in :footcite:p:`Yeatman2012`: We will weight the contribution of each streamline to the bundle profile based on how far this streamline is from the mean trajectory of the bundle at that location. """ import os.path as op import matplotlib.pyplot as plt import numpy as np from dipy.data import fetch_hbn, get_two_hcp842_bundles from dipy.io.image import load_nifti from dipy.io.streamline import load_trk from dipy.segment.clustering import QuickBundles from dipy.segment.featurespeed import ResampleFeature from dipy.segment.metricspeed import AveragePointwiseEuclideanMetric import dipy.stats.analysis as dsa import dipy.tracking.streamline as dts ############################################################################### # To demonstrate this, we will use data from the Healthy Brain Network (HBN) # study :footcite:p:`Alexander2017` that has already been processed # :footcite:p:`RichieHalford2022`. For this demonstration, we will use only the # left arcuate fasciculus (ARC) and the left corticospinal tract (CST) from the # subject NDARAA948VFH. 
subject = "NDARAA948VFH" session = "HBNsiteRU" fdict, path = fetch_hbn([subject], include_afq=True) afq_path = op.join(path, "derivatives", "afq", f"sub-{subject}", f"ses-{session}") ############################################################################### # We can use the `dipy.io` API to read in the bundles from file. # `load_trk` returns both the streamlines, as well as header information, and # the `streamlines` attribute will give us access to the sequence of arrays # that contain the streamline coordinates. cst_l_file = op.join( afq_path, "clean_bundles", f"sub-{subject}_ses-{session}_acq-64dir_space-T1w_desc-preproc_dwi_space" "-RASMM_model-CSD_desc-prob-afq-CST_L_tractography.trk", ) arc_l_file = op.join( afq_path, "clean_bundles", f"sub-{subject}_ses-{session}_acq-64dir_space-T1w_desc-preproc_dwi_space" "-RASMM_model-CSD_desc-prob-afq-ARC_L_tractography.trk", ) cst_l = load_trk(cst_l_file, reference="same", bbox_valid_check=False).streamlines arc_l = load_trk(arc_l_file, reference="same", bbox_valid_check=False).streamlines ############################################################################### # In the next step, we need to make sure that all the streamlines in each # bundle are oriented the same way. For example, for the CST, we want to make # sure that all the bundles have their cortical termination at one end of the # streamline. This is so that when we later extract values from a volume, # we will not have different streamlines going in opposite directions. # # To orient all the streamlines in each bundle, we will create standard # streamlines, by finding the centroids of the left ARC and CST bundle models. # # The advantage of using the model bundles is that we can use the same # standard for different subjects, which means that we'll get the same # orientation of the streamlines in all subjects. model_arc_l_file, model_cst_l_file = get_two_hcp842_bundles() model_arc_l = load_trk( model_arc_l_file, reference="same", bbox_valid_check=False ).streamlines model_cst_l = load_trk( model_cst_l_file, reference="same", bbox_valid_check=False ).streamlines feature = ResampleFeature(nb_points=100) metric = AveragePointwiseEuclideanMetric(feature) ############################################################################### # Since we are going to include all of the streamlines in the single cluster # from the streamlines, we set the threshold to `np.inf`. We pull out the # centroid as the standard using QuickBundles :footcite:p:`Garyfallidis2012a`. qb = QuickBundles(threshold=np.inf, metric=metric) cluster_cst_l = qb.cluster(model_cst_l) standard_cst_l = cluster_cst_l.centroids[0] cluster_af_l = qb.cluster(model_arc_l) standard_af_l = cluster_af_l.centroids[0] ############################################################################### # We use the centroid streamline for each atlas bundle as the standard to # orient all of the streamlines in each bundle from the individual subject. # Here, the affine used is the one from the transform between the atlas and # individual tractogram. This is so that the orienting is done relative to the # space of the individual, and not relative to the atlas space. oriented_cst_l = dts.orient_by_streamline(cst_l, standard_cst_l) oriented_arc_l = dts.orient_by_streamline(arc_l, standard_af_l) ############################################################################### # Tract profiles are created from a scalar property of the volume. 
Here, we # read volumetric data from an image corresponding to the FA calculated in # this subject with the diffusion tensor imaging (DTI) model. fa, fa_affine = load_nifti( op.join( afq_path, f"sub-{subject}_ses-{session}_acq-64dir_space-T1w_desc" "-preproc_dwi_model-DTI_FA.nii.gz", ) ) ############################################################################### # As mentioned at the outset, we would like to downweight the streamlines that # are far from the core trajectory of the tracts. We calculate # weights for each bundle: w_cst_l = dsa.gaussian_weights(oriented_cst_l) w_arc_l = dsa.gaussian_weights(oriented_arc_l) ############################################################################### # And then use the weights to calculate the tract profiles for each bundle profile_cst_l = dsa.afq_profile(fa, oriented_cst_l, affine=fa_affine, weights=w_cst_l) profile_af_l = dsa.afq_profile(fa, oriented_arc_l, affine=fa_affine, weights=w_arc_l) fig, (ax1, ax2) = plt.subplots(1, 2) ax1.plot(profile_cst_l) ax1.set_ylabel("Fractional anisotropy") ax1.set_xlabel("Node along CST") ax2.plot(profile_af_l) ax2.set_xlabel("Node along ARC") fig.savefig("tract_profiles") ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # Bundle profiles for the fractional anisotropy in the left CST (left) and # left AF (right). # # # References # ---------- # # .. footbibliography:: # dipy-1.11.0/doc/examples/brain_extraction_dwi.py000066400000000000000000000063161476546756600217320ustar00rootroot00000000000000""" =================================== Brain segmentation with median_otsu =================================== We show how to extract brain information and mask from a b0 image using DIPY_'s ``segment.mask`` module. First import the necessary modules: """ import matplotlib.pyplot as plt import numpy as np from dipy.core.histeq import histeq from dipy.data import get_fnames from dipy.io.image import load_nifti, save_nifti from dipy.segment.mask import median_otsu ############################################################################### # Download and read the data for this tutorial. # # The ``scil_b0`` dataset contains different data from different companies and # models. For this example, the data comes from a 1.5 Tesla Siemens MRI. data_fnames = get_fnames(name="scil_b0") data, affine = load_nifti(data_fnames[1]) data = np.squeeze(data) ############################################################################### # Segment the brain using DIPY's ``mask`` module. # # ``median_otsu`` returns the segmented brain data and a binary mask of the # brain. It is possible to fine tune the parameters of ``median_otsu`` # (``median_radius`` and ``num_pass``) if extraction yields incorrect results # but the default parameters work well on most volumes. For this example, # we used 2 as ``median_radius`` and 1 as ``num_pass`` b0_mask, mask = median_otsu(data, median_radius=2, numpass=1) ############################################################################### # Saving the segmentation results is very easy. We need the ``b0_mask``, and # the binary mask volumes. The affine matrix which transform the image's # coordinates to the world coordinates is also needed. Here, we choose to save # both images in ``float32``. 
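# (A side note, not part of the original example: for a binary mask, an
# integer dtype such as ``uint8`` takes a quarter of the space of ``float32``
# per voxel before compression, so if storage matters one could equally write
# ``save_nifti(fname + "_binary_mask.nii.gz", mask.astype(np.uint8), affine)``.)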
fname = "se_1.5t" save_nifti(fname + "_binary_mask.nii.gz", mask.astype(np.float32), affine) save_nifti(fname + "_mask.nii.gz", b0_mask.astype(np.float32), affine) ############################################################################### # Quick view of the results middle slice using ``matplotlib``. sli = data.shape[2] // 2 plt.figure("Brain segmentation") plt.subplot(1, 2, 1).set_axis_off() plt.imshow(histeq(data[:, :, sli].astype("float")).T, cmap="gray", origin="lower") plt.subplot(1, 2, 2).set_axis_off() plt.imshow(histeq(b0_mask[:, :, sli].astype("float")).T, cmap="gray", origin="lower") plt.savefig(f"{fname}_median_otsu.png", bbox_inches="tight") ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # An application of median_otsu for brain segmentation. # # # ``median_otsu`` can also automatically crop the outputs to remove the largest # possible number of background voxels. This makes outputted data significantly # smaller. Auto-cropping in ``median_otsu`` is activated by setting the # ``autocrop`` parameter to ``True``. b0_mask_crop, mask_crop = median_otsu(data, median_radius=4, numpass=4, autocrop=True) ############################################################################### # Saving cropped data as demonstrated previously. save_nifti(fname + "_binary_mask_crop.nii.gz", mask_crop.astype(np.float32), affine) save_nifti(fname + "_mask_crop.nii.gz", b0_mask_crop.astype(np.float32), affine) dipy-1.11.0/doc/examples/bundle_assignment_maps.py000077500000000000000000000056151476546756600222610ustar00rootroot00000000000000""" ==================================== BUAN Bundle Assignment Maps Creation ==================================== This example explains how we can use BUAN :footcite:p:`Chandio2020a` to create assignment maps on a bundle. Divide bundle into N smaller segments. First import the necessary modules. """ import numpy as np from dipy.data import fetch_bundle_atlas_hcp842, get_two_hcp842_bundles from dipy.io.streamline import load_trk from dipy.stats.analysis import assignment_map from dipy.viz import actor, window ############################################################################### # Download and read data for this tutorial atlas_file, atlas_folder = fetch_bundle_atlas_hcp842() ############################################################################### # Read AF left and CST left bundles from already fetched atlas data to use them # as model bundles model_af_l_file, model_cst_l_file = get_two_hcp842_bundles() sft_af_l = load_trk(model_af_l_file, reference="same", bbox_valid_check=False) model_af_l = sft_af_l.streamlines ############################################################################### # let's visualize Arcuate Fasiculus Left (AF_L) bundle before assignment maps interactive = False scene = window.Scene() scene.SetBackground(1, 1, 1) scene.add(actor.line(model_af_l, fake_tube=True, linewidth=6)) scene.set_camera( focal_point=(-18.17281532, -19.55606842, 6.92485857), position=(-360.11, -30.46, -40.44), view_up=(-0.03, 0.028, 0.89), ) window.record(scene=scene, out_path="af_l_before_assignment_maps.png", size=(600, 600)) if interactive: window.show(scene) ############################################################################### # .. 
rst-class:: centered small fst-italic fw-semibold # # AF_L before assignment maps # # # # Creating 100 bundle assignment maps on AF_L using BUAN # :footcite:p:`Chandio2020a` rng = np.random.default_rng() n = 100 indx = assignment_map(model_af_l, model_af_l, n) indx = np.array(indx) colors = [rng.random(3) for si in range(n)] disks_color = [] for i in range(len(indx)): disks_color.append(tuple(colors[indx[i]])) ############################################################################### # let's visualize Arcuate Fasiculus Left (AF_L) bundle after assignment maps interactive = False scene = window.Scene() scene.SetBackground(1, 1, 1) scene.add(actor.line(model_af_l, fake_tube=True, colors=disks_color, linewidth=6)) scene.set_camera( focal_point=(-18.17281532, -19.55606842, 6.92485857), position=(-360.11, -30.46, -40.44), view_up=(-0.03, 0.028, 0.89), ) window.record(scene=scene, out_path="af_l_after_assignment_maps.png", size=(600, 600)) if interactive: window.show(scene) ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # AF_L after assignment maps # # # References # ---------- # # .. footbibliography:: # dipy-1.11.0/doc/examples/bundle_extraction.py000066400000000000000000000513211476546756600212410ustar00rootroot00000000000000""" ================================================== Automatic Fiber Bundle Extraction with RecoBundles ================================================== This example explains how we can use RecoBundles :footcite:p:`Garyfallidis2018` to extract bundles from tractograms. First import the necessary modules. """ import numpy as np from dipy.align.streamlinear import whole_brain_slr from dipy.data import ( fetch_bundle_atlas_hcp842, fetch_target_tractogram_hcp, get_bundle_atlas_hcp842, get_target_tractogram_hcp, get_two_hcp842_bundles, ) from dipy.io.stateful_tractogram import Space, StatefulTractogram from dipy.io.streamline import load_trk, save_trk from dipy.io.utils import create_tractogram_header from dipy.segment.bundles import RecoBundles from dipy.viz import actor, window ############################################################################### # Download and read data for this tutorial target_file, target_folder = fetch_target_tractogram_hcp() atlas_file, atlas_folder = fetch_bundle_atlas_hcp842() atlas_file, all_bundles_files = get_bundle_atlas_hcp842() target_file = get_target_tractogram_hcp() sft_atlas = load_trk(atlas_file, "same", bbox_valid_check=False) atlas = sft_atlas.streamlines atlas_header = create_tractogram_header(atlas_file, *sft_atlas.space_attributes) sft_target = load_trk(target_file, "same", bbox_valid_check=False) target = sft_target.streamlines target_header = create_tractogram_header(target_file, *sft_target.space_attributes) ############################################################################### # let's visualize atlas tractogram and target tractogram before registration interactive = False scene = window.Scene() scene.SetBackground(1, 1, 1) scene.add(actor.line(atlas, colors=(1, 0, 1))) scene.add(actor.line(target, colors=(1, 1, 0))) window.record(scene=scene, out_path="tractograms_initial.png", size=(600, 600)) if interactive: window.show(scene) ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # Atlas and target before registration. 
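###############################################################################
# (A quick check, not part of the original tutorial.) The runtime of
# streamline-based registration grows with the number of streamlines, so it
# can be useful to confirm the size of both tractograms before registering:

print(f"Atlas tractogram: {len(atlas)} streamlines")
print(f"Target tractogram: {len(target)} streamlines")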
#
#
# We will register the target tractogram to the model atlas' space using
# streamline-based linear registration (SLR) :footcite:p:`Garyfallidis2015`.

moved, transform, qb_centroids1, qb_centroids2 = whole_brain_slr(
    atlas,
    target,
    x0="affine",
    verbose=True,
    progressive=True,
    rng=np.random.default_rng(1984),
)

###############################################################################
# We save the transform generated in this registration, so that we can use
# it in the bundle profiles example

np.save("slr_transform.npy", transform)

###############################################################################
# let's visualize atlas tractogram and target tractogram after registration

interactive = False

scene = window.Scene()
scene.SetBackground(1, 1, 1)
scene.add(actor.line(atlas, colors=(1, 0, 1)))
scene.add(actor.line(moved, colors=(1, 1, 0)))
window.record(
    scene=scene, out_path="tractograms_after_registration.png", size=(600, 600)
)
if interactive:
    window.show(scene)

###############################################################################
# .. rst-class:: centered small fst-italic fw-semibold
#
# Atlas and target after registration.
#
#
#
# Extracting bundles using RecoBundles :footcite:p:`Garyfallidis2018`
#
# RecoBundles requires a model (reference) bundle and tries to extract a
# similar-looking bundle from the input tractogram. There are some key
# parameters that users can set as per requirements. Here are the key
# threshold parameters measured in millimeters and their function in
# RecoBundles:
#
# - model_clust_thr : It will use QuickBundles to get the centroids of the
#   model bundle and work with centroids instead of all streamlines. This
#   helps to make RecoBundles faster. The larger the value of the threshold,
#   the fewer centroids there will be, and the smaller the threshold value,
#   the more centroids there will be. If you prefer to use all the
#   streamlines of the model bundle, you can set this threshold to 0.01 mm.
#   Recommended range of the model_clust_thr is 0.01 - 3.0 mm.
#
# - reduction_thr : This threshold will be used to reduce the search space
#   for finding the streamlines that match model bundle streamlines in shape.
#   Instead of looking at the entire tractogram, now we will be looking at the
#   neighboring region of a model bundle in the tractogram. Increase the
#   threshold to increase the search space.
#   Recommended range of the reduction_thr is 15 - 30 mm.
#
# - pruning_thr : This threshold will filter the streamlines for which the
#   distance to the model bundle is greater than the pruning_thr.
#   This serves to filter the neighborhood area (search space) to get
#   streamlines that are similar to the model bundle.
#   Recommended range of the pruning_thr is 8 - 12 mm.
#
# - reduction_distance and pruning_distance : Distance method used
#   internally. Minimum average Direct-Flip distance (mdf) or Mean Average
#   Minimum (mam). Default is set to mdf.
#
# - slr : If the slr flag is set to True, local registration of the model
#   bundle with the neighbouring area will be performed. Default and
#   recommended is True.
#
#
#
# Read Arcuate Fasciculus Left and Corticospinal Tract Left bundles from the
# already fetched atlas data to use them as model bundles. Let's visualize the
# Arcuate Fasciculus Left model bundle.
model_af_l_file, model_cst_l_file = get_two_hcp842_bundles() sft_af_l = load_trk(model_af_l_file, reference="same", bbox_valid_check=False) model_af_l = sft_af_l.streamlines interactive = False scene = window.Scene() scene.SetBackground(1, 1, 1) scene.add(actor.line(model_af_l)) scene.set_camera( focal_point=(-18.17281532, -19.55606842, 6.92485857), position=(-360.11, -30.46, -40.44), view_up=(-0.03, 0.028, 0.89), ) window.record(scene=scene, out_path="AF_L_model_bundle.png", size=(600, 600)) if interactive: window.show(scene) ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # Model Arcuate Fasciculus Left bundle rb = RecoBundles(moved, verbose=True, rng=np.random.default_rng(2001)) recognized_af_l, af_l_labels = rb.recognize( model_bundle=model_af_l, model_clust_thr=0.1, reduction_thr=15, pruning_thr=7, reduction_distance="mdf", pruning_distance="mdf", slr=True, ) ############################################################################### # let's visualize extracted Arcuate Fasciculus Left bundle interactive = False scene = window.Scene() scene.SetBackground(1, 1, 1) scene.add(actor.line(recognized_af_l.copy())) scene.set_camera( focal_point=(-18.17281532, -19.55606842, 6.92485857), position=(-360.11, -30.46, -40.44), view_up=(-0.03, 0.028, 0.89), ) window.record(scene=scene, out_path="AF_L_recognized_bundle.png", size=(600, 600)) if interactive: window.show(scene) ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # Extracted Arcuate Fasciculus Left bundle # # # # Save the bundle as a trk file. Let's save the recognized bundle in the # common space (atlas space), in this case, MNI space. reco_af_l = StatefulTractogram(recognized_af_l, atlas_header, Space.RASMM) save_trk(reco_af_l, "AF_L_rec_1.trk", bbox_valid_check=False) ############################################################################### # Let's save the recognized bundle in the original space of the subject # anatomy. reco_af_l = StatefulTractogram(target[af_l_labels], target_header, Space.RASMM) save_trk(reco_af_l, "AF_L_org_1.trk", bbox_valid_check=False) ############################################################################### # Now, let's increase the reduction_thr and pruning_thr values. recognized_af_l, af_l_labels = rb.recognize( model_bundle=model_af_l, model_clust_thr=0.1, reduction_thr=20, pruning_thr=10, reduction_distance="mdf", pruning_distance="mdf", slr=True, ) ############################################################################### # let's visualize extracted Arcuate Fasciculus Left bundle. interactive = False scene = window.Scene() scene.SetBackground(1, 1, 1) scene.add(actor.line(recognized_af_l.copy())) scene.set_camera( focal_point=(-18.17281532, -19.55606842, 6.92485857), position=(-360.11, -30.46, -40.44), view_up=(-0.03, 0.028, 0.89), ) window.record(scene=scene, out_path="AF_L_recognized_bundle2.png", size=(600, 600)) if interactive: window.show(scene) ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # Extracted Arcuate Fasciculus Left bundle # # # Save the bundle as a trk file. Let's save the recognized bundle in the # common space (atlas space), in this case, MNI space. 
reco_af_l = StatefulTractogram(recognized_af_l, atlas_header, Space.RASMM) save_trk(reco_af_l, "AF_L_rec_2.trk", bbox_valid_check=False) ############################################################################### # Let's save the recognized bundle in the original space of the subject # anatomy. reco_af_l = StatefulTractogram(target[af_l_labels], target_header, Space.RASMM) save_trk(reco_af_l, "AF_L_org_2.trk", bbox_valid_check=False) ############################################################################### # Now, let's increase the reduction_thr and pruning_thr values further. recognized_af_l, af_l_labels = rb.recognize( model_bundle=model_af_l, model_clust_thr=0.1, reduction_thr=25, pruning_thr=12, reduction_distance="mdf", pruning_distance="mdf", slr=True, ) ############################################################################### # let's visualize extracted Arcuate Fasciculus Left bundle. interactive = False scene = window.Scene() scene.SetBackground(1, 1, 1) scene.add(actor.line(recognized_af_l.copy())) scene.set_camera( focal_point=(-18.17281532, -19.55606842, 6.92485857), position=(-360.11, -30.46, -40.44), view_up=(-0.03, 0.028, 0.89), ) window.record(scene=scene, out_path="AF_L_recognized_bundle3.png", size=(600, 600)) if interactive: window.show(scene) ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # Extracted Arcuate Fasciculus Left bundle # # # Save the bundle as a trk file. Let's save the recognized bundle in the # common space (atlas space), in this case, MNI space. reco_af_l = StatefulTractogram(recognized_af_l, atlas_header, Space.RASMM) save_trk(reco_af_l, "AF_L_rec_3.trk", bbox_valid_check=False) ############################################################################### # Let's save the recognized bundle in the original space of the subject # anatomy. reco_af_l = StatefulTractogram(target[af_l_labels], target_header, Space.RASMM) save_trk(reco_af_l, "AF_L_org_3.trk", bbox_valid_check=False) ############################################################################### # Let's apply auto-calibrated RecoBundles on the output of standard # RecoBundles. This step will filter out the outlier streamlines. This time, # the RecoBundles' extracted bundle will serve as a model bundle. As a rule of # thumb, provide larger threshold values in standard RecoBundles function and # smaller values in the auto-calibrated RecoBundles (refinement) step. r_recognized_af_l, r_af_l_labels = rb.refine( model_bundle=model_af_l, pruned_streamlines=recognized_af_l, model_clust_thr=0.1, reduction_thr=15, pruning_thr=6, reduction_distance="mdf", pruning_distance="mdf", slr=True, ) ############################################################################### # let's visualize extracted refined Arcuate Fasciculus Left bundle. interactive = False scene = window.Scene() scene.SetBackground(1, 1, 1) scene.add(actor.line(r_recognized_af_l.copy())) scene.set_camera( focal_point=(-18.17281532, -19.55606842, 6.92485857), position=(-360.11, -30.46, -40.44), view_up=(-0.03, 0.028, 0.89), ) window.record( scene=scene, out_path="AF_L_refine_recognized_bundle.png", size=(600, 600) ) if interactive: window.show(scene) ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # Extracted Arcuate Fasciculus Left bundle # # # # Save the bundle as a trk file. 
Let's save the recognized bundle in the # common space (atlas space), in this case, MNI space. reco_af_l = StatefulTractogram(r_recognized_af_l, atlas_header, Space.RASMM) save_trk(reco_af_l, "AF_L_rec_refine.trk", bbox_valid_check=False) ############################################################################### # Let's save the recognized bundle in the original space of the subject # anatomy. reco_af_l = StatefulTractogram(target[r_af_l_labels], target_header, Space.RASMM) save_trk(reco_af_l, "AF_L_org_refine.trk", bbox_valid_check=False) ############################################################################### # Let's load Corticospinal Tract Left model bundle and visualize it. sft_cst_l = load_trk(model_cst_l_file, "same", bbox_valid_check=False) model_cst_l = sft_cst_l.streamlines interactive = False scene = window.Scene() scene.SetBackground(1, 1, 1) scene.add(actor.line(model_cst_l)) scene.set_camera( focal_point=(-18.17281532, -19.55606842, 6.92485857), position=(-360.11, -30.46, -40.44), view_up=(-0.03, 0.028, 0.89), ) window.record(scene=scene, out_path="CST_L_model_bundle.png", size=(600, 600)) if interactive: window.show(scene) ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # The Corticospinal tract model bundle recognized_cst_l, cst_l_labels = rb.recognize( model_bundle=model_cst_l, model_clust_thr=0.1, reduction_thr=15, pruning_thr=7, reduction_distance="mdf", pruning_distance="mdf", slr=True, ) ############################################################################### # let's visualize extracted Corticospinal tract Left bundle interactive = False scene = window.Scene() scene.SetBackground(1, 1, 1) scene.add(actor.line(recognized_cst_l.copy())) scene.set_camera( focal_point=(-18.17281532, -19.55606842, 6.92485857), position=(-360.11, -30.46, -40.44), view_up=(-0.03, 0.028, 0.89), ) window.record(scene=scene, out_path="CST_L_recognized_bundle.png", size=(600, 600)) if interactive: window.show(scene) ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # Extracted Corticospinal tract Left bundle # # # # Save the bundle as a trk file. Let's save the recognized bundle in the # common space (atlas space), in this case, MNI space. reco_cst_l = StatefulTractogram(recognized_cst_l, atlas_header, Space.RASMM) save_trk(reco_cst_l, "CST_L_rec_1.trk", bbox_valid_check=False) ############################################################################### # Let's save the recognized bundle in the original space of the subject # anatomy. reco_cst_l = StatefulTractogram(target[cst_l_labels], target_header, Space.RASMM) save_trk(reco_cst_l, "CST_L_org_1.trk", bbox_valid_check=False) ############################################################################### # Now, let's increase the reduction_thr and pruning_thr values. recognized_cst_l, cst_l_labels = rb.recognize( model_bundle=model_cst_l, model_clust_thr=0.1, reduction_thr=20, pruning_thr=10, reduction_distance="mdf", pruning_distance="mdf", slr=True, ) ############################################################################### # let's visualize extracted Corticospinal tract Left bundle. 
interactive = False scene = window.Scene() scene.SetBackground(1, 1, 1) scene.add(actor.line(recognized_cst_l.copy())) scene.set_camera( focal_point=(-18.17281532, -19.55606842, 6.92485857), position=(-360.11, -30.46, -40.44), view_up=(-0.03, 0.028, 0.89), ) window.record(scene=scene, out_path="CST_L_recognized_bundle2.png", size=(600, 600)) if interactive: window.show(scene) ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # Extracted Corticospinal tract Left bundle # # # Save the bundle as a trk file. Let's save the recognized bundle in the # common space (atlas space), in this case, MNI space. reco_cst_l = StatefulTractogram(recognized_cst_l, atlas_header, Space.RASMM) save_trk(reco_cst_l, "CST_L_rec_2.trk", bbox_valid_check=False) ############################################################################### # Let's save the recognized bundle in the original space of the subject # anatomy. reco_cst_l = StatefulTractogram(target[cst_l_labels], target_header, Space.RASMM) save_trk(reco_cst_l, "CST_L_org_2.trk", bbox_valid_check=False) ############################################################################### # Now, let's increase the reduction_thr and pruning_thr values further. recognized_cst_l, cst_l_labels = rb.recognize( model_bundle=model_cst_l, model_clust_thr=0.1, reduction_thr=25, pruning_thr=12, reduction_distance="mdf", pruning_distance="mdf", slr=True, ) ############################################################################### # let's visualize extracted Corticospinal tract Left bundle. interactive = False scene = window.Scene() scene.SetBackground(1, 1, 1) scene.add(actor.line(recognized_cst_l.copy())) scene.set_camera( focal_point=(-18.17281532, -19.55606842, 6.92485857), position=(-360.11, -30.46, -40.44), view_up=(-0.03, 0.028, 0.89), ) window.record(scene=scene, out_path="CST_L_recognized_bundle3.png", size=(600, 600)) if interactive: window.show(scene) ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # Extracted Corticospinal tract Left bundle # # # # # Save the bundle as a trk file. Let's save the recognized bundle in the # common space (atlas space), in this case, MNI space. reco_cst_l = StatefulTractogram(recognized_cst_l, atlas_header, Space.RASMM) save_trk(reco_cst_l, "CST_L_rec_3.trk", bbox_valid_check=False) ############################################################################### # Let's save the recognized bundle in the original space of the subject # anatomy. reco_cst_l = StatefulTractogram(target[cst_l_labels], target_header, Space.RASMM) save_trk(reco_cst_l, "CST_L_org_3.trk", bbox_valid_check=False) ############################################################################### # Let's apply auto-calibrated RecoBundles on the output of standard # RecoBundles. This step will filter out the outlier streamlines. This time, # the RecoBundles' extracted bundle will serve as a model bundle. As a rule of # thumb, provide larger threshold values in standard RecoBundles function and # smaller values in the auto-calibrated RecoBundles (refinement) step. 
r_recognized_cst_l, r_cst_l_labels = rb.refine( model_bundle=model_cst_l, pruned_streamlines=recognized_cst_l, model_clust_thr=0.1, reduction_thr=15, pruning_thr=6, reduction_distance="mdf", pruning_distance="mdf", slr=True, ) ############################################################################### # let's visualize extracted refined Corticospinal tract Left bundle. interactive = False scene = window.Scene() scene.SetBackground(1, 1, 1) scene.add(actor.line(r_recognized_cst_l.copy())) scene.set_camera( focal_point=(-18.17281532, -19.55606842, 6.92485857), position=(-360.11, -30.46, -40.44), view_up=(-0.03, 0.028, 0.89), ) window.record( scene=scene, out_path="CST_L_refine_recognized_bundle.png", size=(600, 600) ) if interactive: window.show(scene) ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # Extracted refined Corticospinal tract Left bundle # # # Save the bundle as a trk file. Let's save the recognized bundle in the # common space (atlas space), in this case, MNI space. reco_cst_l = StatefulTractogram(r_recognized_cst_l, atlas_header, Space.RASMM) save_trk(reco_cst_l, "CST_L_rec_refine.trk", bbox_valid_check=False) ############################################################################### # Let's save the recognized bundle in the original space of the subject # anatomy. reco_cst_l = StatefulTractogram(target[r_cst_l_labels], target_header, Space.RASMM) save_trk(reco_cst_l, "CST_L_org_refine.trk", bbox_valid_check=False) ############################################################################### # This example shows how changing different threshold parameters can change the # output. It is up to the user to decide what is the desired output and select # parameters accordingly. # # References # ---------- # # .. footbibliography:: # dipy-1.11.0/doc/examples/bundle_group_registration.py000066400000000000000000000065171476546756600230160ustar00rootroot00000000000000""" ============================= Groupwise Bundle Registration ============================= This example explains how to coregister a set of bundles to a common space that is not biased by any specific bundle. This is useful when we want to align all the bundles but do not have a target reference space defined by an atlas :footcite:p:`RomeroBascones2022`. How it works ============ The bundle groupwise registration framework in DIPY relies on streamline linear registration (SLR) :footcite:p:`Garyfallidis2015` and an iterative process. In each iteration, bundles are shuffled and matched in pairs. Then, each pair of bundles are simultaneously moved to a common space in between both. After all pairs have been aligned, a group distance metric is computed as the mean pairwise distance between all bundle pairs. With each iteration, bundles get closer to each other and the group distance decreases. When the reduction in the group distance reaches a tolerance level the process ends. To reduce computational time, by default both registration and distance computation are performed on streamline centroids (obtained with Quickbundles) :footcite:p:`Garyfallidis2012a`. 
Example ======= We start by importing and creating the necessary functions: """ import logging from dipy.align.streamlinear import groupwise_slr from dipy.data import read_five_af_bundles from dipy.viz.streamline import show_bundles logging.basicConfig(level=logging.INFO) ############################################################################### # To run groupwise registration we need to have our input bundles stored in a # list. # # Here we load 5 left arcuate fasciculi and store them in a list. bundles = read_five_af_bundles() ############################################################################### # Let's now visualize our input bundles: colors = [ [0.91, 0.26, 0.35], [0.99, 0.50, 0.38], [0.99, 0.88, 0.57], [0.69, 0.85, 0.64], [0.51, 0.51, 0.63], ] show_bundles( bundles, interactive=False, colors=colors, save_as="before_group_registration.png" ) ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # Bundles before registration. # # # They are in native space and, therefore, not aligned. # # Now running groupwise registration is as simple as: bundles_reg, aff, d = groupwise_slr(bundles, verbose=True) ############################################################################### # Finally, we visualize the registered bundles to confirm that they are now in # a common space: show_bundles( bundles_reg, interactive=False, colors=colors, save_as="after_group_registration.png", ) ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # Bundles after registration. # # # # Extended capabilities # ===================== # # In addition to the registered bundles, `groupwise_slr` also returns a list # with the individual transformation matrices as well as the pairwise distances # computed in each iteration. # # By changing the input arguments the user can modify the transformation (up to # affine), the number of maximum number of streamlines per bundle, the level of # clustering, or the tolerance of the method. # # References # ---------- # # .. footbibliography:: # dipy-1.11.0/doc/examples/bundle_registration.py000066400000000000000000000201521476546756600215710ustar00rootroot00000000000000""" ========================== Direct Bundle Registration ========================== This example explains how you can register two bundles from two different subjects directly in the space of streamlines :footcite:p:`Garyfallidis2014b`, :footcite:p:`Garyfallidis2015`. To show the concept we will use two pre-saved cingulum bundles. The algorithm used here is called Streamline-based Linear Registration (SLR) :footcite:p:`Garyfallidis2015`. """ from time import sleep from dipy.align.streamlinear import StreamlineLinearRegistration from dipy.data import two_cingulum_bundles from dipy.tracking.streamline import set_number_of_points from dipy.viz import actor, window ############################################################################### # Let's download and load the two bundles. cb_subj1, cb_subj2 = two_cingulum_bundles() ############################################################################### # An important step before running the registration is to resample the # streamlines so that they both have the same number of points per streamline. # Here we will use 20 points. This step is not optional. 
Inputting streamlines # with a different number of points will break the theoretical advantages of # using the SLR as explained in :footcite:p:`Garyfallidis2015`. cb_subj1 = set_number_of_points(cb_subj1, nb_points=20) cb_subj2 = set_number_of_points(cb_subj2, nb_points=20) ############################################################################### # Let's say now that we want to move ``cb_subj2`` (moving) so that it can # be aligned with ``cb_subj1`` (static). Here is how this is done. srr = StreamlineLinearRegistration() srm = srr.optimize(static=cb_subj1, moving=cb_subj2) ############################################################################### # After the optimization is finished, we can apply the transformation to # ``cb_subj2``. cb_subj2_aligned = srm.transform(cb_subj2) def show_both_bundles(bundles, colors=None, show=True, fname=None): scene = window.Scene() scene.SetBackground(1.0, 1, 1) for i, bundle in enumerate(bundles): color = colors[i] lines_actor = actor.streamtube(bundle, colors=color, linewidth=0.3) lines_actor.RotateX(-90) lines_actor.RotateZ(90) scene.add(lines_actor) if show: window.show(scene) if fname is not None: sleep(1) window.record(scene=scene, n_frames=1, out_path=fname, size=(900, 900)) show_both_bundles( [cb_subj1, cb_subj2], colors=[window.colors.orange, window.colors.red], show=False, fname="before_registration.png", ) ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # Before bundle registration. show_both_bundles( [cb_subj1, cb_subj2_aligned], colors=[window.colors.orange, window.colors.red], show=False, fname="after_registration.png", ) ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # After bundle registration. # # # # As you can see, the two cingulum bundles are well aligned although they # contain many streamlines of different lengths and shapes. # # Streamline-based Linear Registration (SLR) is a method which, given two sets # of streamlines (fixed and moving) and a streamline-based cost function, will # minimize the cost function and transform the moving set of streamlines # (target) to the fixed (reference), so that they maximally overlap under the # condition that the transform stays linear. # # We denote a single streamline with s and a set of streamlines with S. # A streamline s is an ordered sequence of line segments connecting 3D vector # points $\mathbf{x}_{k} \in \mathbb{R}^{3}$ with $k \in [1, K]$ where K is the # total number of points of streamline s. Given two bundles (two sets of # streamlines), we denote $S_{a}=\left\{s_{1}^{a}, \ldots, s_{A}^{a}\right\}$ # and $S_{b}=\left\{s_{1}^{b}, \ldots, s_{B}^{b}\right\}$, where A and B are # the total numbers of streamlines in each set, respectively. We want to # minimize a cost function so that we can align the two sets together. For # this purpose, we introduce a new cost function, the Bundle-based Minimum # Distance (BMD), which is defined as: # # .. math:: # # \operatorname{BMD}\left(S_{a}, S_{b}\right)=\frac{1}{4}\left(\frac{1}{A} # \sum_{i=1}^{A} \min _{j} D(i, j)+\frac{1}{B} \sum_{j=1}^{B} \\ # \min _{i} D(i, j)\right)^{2} # # # where D is the rectangular matrix given by all pairwise Minimum average # Direct-Flip (MDF) streamline distances (Garyfallidis et al., 2012). # Therefore, every element of matrix D is equal to # $D_{ij}=\operatorname{MDF}\left(s_{i}^{a}, s_{j}^{b}\right)$. # # Notice how in Eq.
(1), the most similar streamlines from one streamline set # to the other are weighted more by averaging the minimum values of the rows # and columns of matrix D. This makes our method robust to fanning streamlines # near the endpoints of bundles and to any spurious streamlines in the bundle. The # MDF is a symmetric distance between two individual streamlines. It was # primarily used for clustering (Garyfallidis et al., 2010; Visser et al., # 2011) and tractography simplification (see Garyfallidis et al., 2012). This # distance can be applied only when both streamlines have the same number of # points. Therefore we assume from now on that an initial interpolation of # streamlines has been applied, so that all streamlines have the same number # of points K, and all segments of each streamline have equal length. The # length of each segment is equal to the length of the streamline divided by # the number of segments $(K-1)$. This is achieved by a simple linear # interpolation with the starting and ending points of the streamlines intact. # When K is small, the interpolation provides a rough representation of the # streamline, but as K becomes larger and larger the shape of the interpolated # streamline becomes identical to the shape of the initial streamline. # Under this assumption, the MDF for two streamlines $s_{i}^{a}$ and $s_{j}^{b}$ is # defined as: # # # .. math:: # # \operatorname{MDF}\left(s_{i}^{a}, s_{j}^{b}\right)=\min \\ # \left(d_{\text {direct }}\left(s_{i}^{a}, s_{j}^{b}\right), \\ # d_{\text {flipped }}\left(s_{i}^{a}, s_{j}^{b}\right)\right) # # # where $d_{\text {direct }}$ is the direct distance which is defined as: # # .. math:: # # d_{\text {direct }}\left(s_{i}^{a}, s_{j}^{b}\right)=\frac{1}{K} \\ # \sum_{k=1}^{K}\left\|\mathbf{x}_{k}^{a}-\mathbf{x}_{k}^{b}\right\|_{2} # # where $x_{k}^{a}$ is the k-th point of streamline $s_{i}^{a}$ and $x_{k}^{b}$ # is the k-th point of streamline $s_{j}^{b}$. $d_{\text {flipped }}$ is the # direct distance computed with one of the streamlines flipped, and it is defined as: # # .. math:: # # d_{\text {flipped }}\left(s_{i}^{a}, s_{j}^{b}\right)=\frac{1}{K} \\ # \sum_{k=1}^{K}\left\|\mathbf{x}_{k}^{a}-\mathbf{x}_{K-k+1}^{b}\\ # \right\|_{2} # # and K is the total number of points in $x^{a}$ and $x^{b}$. # The MDF has two very useful properties. First, it takes into consideration # that streamlines have no preferred orientation. Second, it is a # mathematically sound metric distance in the space of streamlines as proved # in Garyfallidis et al. (2012). This means that the MDF is nonnegative, zero # only when both streamlines are identical, symmetric, and it satisfies the # triangle inequality. Now that we have defined our cost function in Eq. (1), # we can formulate the following optimization problem. Given a fixed bundle S # and a moving bundle M, we would like to find the vector of parameters t # which transforms M to S using a linear transformation T so that BMD is # minimum: # # .. math:: # # \operatorname{SLR}(S, M)=\underset{\mathbf{t}}{\operatorname{argmin}} \\ # \operatorname{BMD}(S, T(M, \mathbf{t})) # # # Here, $\mathbf{t}$ is a vector in $\mathbb{R}^{n}$ holding the parameters of # the linear transform, where n = 12 for affine or n = 6 for rigid registration. # From this vector we can then compose the transformation matrix which is # applied to all the points of bundle M. # # # References # ---------- # # ..
footbibliography:: dipy-1.11.0/doc/examples/bundle_shape_similarity.py000077500000000000000000000066541476546756600224370ustar00rootroot00000000000000""" ================================== BUAN Bundle Shape Similarity Score ================================== This example explains how we can use BUAN :footcite:p:`Chandio2020a` to calculate shape similarity between two given bundles. A shape similarity score of 1 means the two bundles are extremely close in shape, while 0 implies no shape similarity whatsoever. The shape similarity score can be used to compare populations or individuals. It can also serve as a quality assurance metric: by calculating the shape similarity or dissimilarity of an output with respect to a reference bundle, it can be used to validate streamline registration quality, bundle extraction quality, or to detect other pre-processing issues. First import the necessary modules. """ import numpy as np from dipy.data import two_cingulum_bundles from dipy.segment.bundles import ( bundle_shape_similarity, select_random_set_of_streamlines, ) from dipy.viz import actor, window ############################################################################### # To show the concept we will use a pre-saved cingulum bundle. # Let's start by fetching the data. cb_subj1, _ = two_cingulum_bundles() ############################################################################### # Let's create two streamline sets (bundles) from the same bundle ``cb_subj1`` by # randomly selecting 60 streamlines twice. rng = np.random.default_rng() bundle1 = select_random_set_of_streamlines(cb_subj1, 60, rng=None) bundle2 = select_random_set_of_streamlines(cb_subj1, 60, rng=None) ############################################################################### # Now, let's visualize the two bundles. def show_both_bundles(bundles, colors=None, show=True, fname=None): scene = window.Scene() scene.SetBackground(1.0, 1, 1) for i, bundle in enumerate(bundles): color = colors[i] streamtube_actor = actor.streamtube(bundle, colors=color, linewidth=0.3) streamtube_actor.RotateX(-90) streamtube_actor.RotateZ(90) scene.add(streamtube_actor) if show: window.show(scene) if fname is not None: window.record(scene=scene, n_frames=1, out_path=fname, size=(900, 900)) show_both_bundles( [bundle1, bundle2], colors=[(1, 0, 0), (0, 1, 0)], show=False, fname="two_bundles.png", ) ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # Two Cingulum Bundles. # # # # Calculate the shape similarity score between the two bundles. We use a # ``clust_thr`` of 0 because we want to use all streamlines and not the centroids of # clusters. clust_thr = [0] ############################################################################### # The threshold indicates how strictly we want the two bundles to be similar in shape. threshold = 5 ba_score = bundle_shape_similarity( bundle1, bundle2, rng, clust_thr=clust_thr, threshold=threshold ) print("Shape similarity score = ", ba_score) ############################################################################### # Let's change the value of threshold to 10. threshold = 10 ba_score = bundle_shape_similarity( bundle1, bundle2, rng, clust_thr=clust_thr, threshold=threshold ) print("Shape similarity score = ", ba_score) ############################################################################### # A higher threshold value gives us a higher shape similarity score, as it is # more lenient. # # # # References # ---------- # # ..
footbibliography:: # dipy-1.11.0/doc/examples/bundlewarp_registration.py000066400000000000000000000211461476546756600224670ustar00rootroot00000000000000""" ============================================ Nonrigid Bundle Registration with BundleWarp ============================================ This example explains how you can nonlinearly register two bundles from two different subjects directly in the space of streamlines :footcite:p:`Chandio2023`, :footcite:p:`Chandio2020b`. To show the concept, we will use two pre-saved uncinate fasciculus bundles. The algorithm used here is called BundleWarp, streamline-based nonlinear registration of white matter tracts :footcite:p:`Chandio2023`. """ from os.path import join as pjoin from time import time from dipy.align.streamwarp import ( bundlewarp, bundlewarp_shape_analysis, bundlewarp_vector_filed, ) from dipy.data import fetch_bundle_warp_dataset from dipy.io.stateful_tractogram import Space, StatefulTractogram from dipy.io.streamline import load_trk, save_tractogram from dipy.tracking.streamline import ( Streamlines, set_number_of_points, unlist_streamlines, ) from dipy.viz.streamline import viz_displacement_mag, viz_two_bundles, viz_vector_field ############################################################################### # Let's download and load two uncinate fasciculus bundles in the left # hemisphere of the brain (UF_L) available here: # https://figshare.com/articles/dataset/Test_Bundles_for_DIPY/22557733 bundle_warp_files = fetch_bundle_warp_dataset() s_UF_L_path = pjoin(bundle_warp_files[1], "s_UF_L.trk") m_UF_L_path = pjoin(bundle_warp_files[1], "m_UF_L.trk") uf_subj1 = load_trk(s_UF_L_path, reference="same", bbox_valid_check=False).streamlines uf_subj2 = load_trk(m_UF_L_path, reference="same", bbox_valid_check=False).streamlines ############################################################################### # Let's resample the streamlines so that they both have the same number of # points per streamline. Here we will use 20 points. static = Streamlines(set_number_of_points(uf_subj1, nb_points=20)) moving = Streamlines(set_number_of_points(uf_subj2, nb_points=20)) ############################################################################### # We call ``uf_subj2`` a moving bundle as it will be nonlinearly aligned with # the ``uf_subj1`` (static) bundle. Here is how this is done. # # # Let's visualize the static bundle in red and the moving one in green before # registration. viz_two_bundles(static, moving, fname="static_and_moving.png") ############################################################################### # The BundleWarp method provides the unique ability to either partially or fully # deform a moving bundle through a single regularization parameter, alpha. # alpha controls the trade-off between regularizing the deformation and having # points match very closely. The lower the value of alpha, the more closely # the bundles will match. # # Let's partially deform the bundle by setting alpha=0.5. start = time() deformed_bundle, moving_aligned, distances, match_pairs, warp_map = bundlewarp( static, moving, alpha=0.5, beta=20, max_iter=15 ) end = time() print("time taken by BundleWarp registration in seconds = ", end - start) ############################################################################### # Let's visualize the static bundle in red and the moved (warped) one in green. # Note: you can set interactive=True in the visualization functions throughout this # tutorial if you prefer to get an interactive visualization window.
viz_two_bundles(static, deformed_bundle, fname="static_and_partially_deformed.png") ############################################################################### # Let's visualize the linearly moved bundle in blue and the nonlinearly moved bundle in # green to see the improvement of BundleWarp registration over linear SLR # registration. viz_two_bundles( moving_aligned, deformed_bundle, fname="linearly_and_nonlinearly_moved.png", c1=(0, 0, 1), ) ############################################################################### # Now, let's visualize the deformation vector field generated by BundleWarp. # This shows us where, by how much, and in which directions deformations # were added by BundleWarp. offsets, directions, colors = bundlewarp_vector_filed(moving_aligned, deformed_bundle) points_aligned, _ = unlist_streamlines(moving_aligned) ############################################################################### # Visualizing just the vector field. fname = "partially_vectorfield.png" viz_vector_field(points_aligned, directions, colors, offsets, fname) ############################################################################### # Let's visualize the vector field over the linearly moved bundle. This will show how # much deformation was introduced after linear registration. fname = "partially_vectorfield_over_linearly_moved.png" viz_vector_field( points_aligned, directions, colors, offsets, fname, bundle=moving_aligned ) ############################################################################### # We can also visualize the magnitude of deformations in mm mapped over the # affinely moved bundle. It shows which streamlines were deformed the most # after affine registration. fname = "partially_deformation_magnitude_over_linearly_moved.png" viz_displacement_mag(moving_aligned, offsets, fname, interactive=False) ############################################################################### # Saving the partially warped bundle. new_tractogram = StatefulTractogram(deformed_bundle, m_UF_L_path, Space.RASMM) save_tractogram(new_tractogram, "partially_deformed_bundle.trk", bbox_valid_check=False) ############################################################################### # Let's fully deform the moving bundle by setting alpha <= 0.01. # # We will use the MDF distances computed and returned by the previous run of the # BundleWarp method. This will save computation time. start = time() deformed_bundle2, moving_aligned, distances, match_pairs, warp_map = bundlewarp( static, moving, dist=distances, alpha=0.001, beta=20 ) end = time() print("time taken by BundleWarp registration in seconds = ", end - start) ############################################################################### # Let's visualize the static bundle in red and the moved (completely warped) one in green. viz_two_bundles(static, deformed_bundle2, fname="static_and_fully_deformed.png") ############################################################################### # Now, let's visualize the deformation vector field generated by BundleWarp. # This shows us where, by how much, and in which directions deformations # were added by BundleWarp to warp the moving bundle to look like the static one. offsets, directions, colors = bundlewarp_vector_filed(moving_aligned, deformed_bundle2) points_aligned, _ = unlist_streamlines(moving_aligned) ############################################################################### # Visualizing just the vector field.
fname = "fully_vectorfield.png" viz_vector_field(points_aligned, directions, colors, offsets, fname) ############################################################################### # Let's visualize vector field over linearly moved bundle. This will show how # much deformations were introduced after linear registration by fully # deforming the moving bundle. fname = "fully_vectorfield_over_linearly_moved.png" viz_vector_field( points_aligned, directions, colors, offsets, fname, bundle=moving_aligned ) ############################################################################### # Let's visualize the magnitude of deformations in mm mapped over affinely # moved bundle. It shows which streamlines were deformed the most after affine # registration. fname = "fully_deformation_magnitude_over_linearly_moved.png" viz_displacement_mag(moving_aligned, offsets, fname, interactive=False) ############################################################################### # We can also perform bundle shape difference analysis using the displacement # field generated by fully warping the moving bundle to look exactly like # static bundle. Here, we plot bundle shape profile using BUAN. Bundle shape # profile shows the average magnitude of deformations along the length of the # bundle. Segments where we observe higher deformations are the areas where # two bundles differ the most in shape. _, _ = bundlewarp_shape_analysis( moving_aligned, deformed_bundle, no_disks=10, plotting=False ) ############################################################################### # Saving fully warped bundle. new_tractogram = StatefulTractogram(deformed_bundle2, m_UF_L_path, Space.RASMM) save_tractogram(new_tractogram, "fully_deformed_bundle.trk", bbox_valid_check=False) ############################################################################### # References # ---------- # # .. footbibliography:: dipy-1.11.0/doc/examples/cluster_confidence.py000066400000000000000000000134021476546756600213640ustar00rootroot00000000000000""" ===================================================== Calculation of Outliers with Cluster Confidence Index ===================================================== This is an outlier scoring method that compares the pathways of each streamline in a bundle (pairwise) and scores each streamline by how many other streamlines have similar pathways. The details can be found in :footcite:p:`Jordan2018`. """ import matplotlib.pyplot as plt from dipy.core.gradients import gradient_table from dipy.data import default_sphere, get_fnames from dipy.direction import peaks_from_model from dipy.io.gradients import read_bvals_bvecs from dipy.io.image import load_nifti, load_nifti_data from dipy.reconst.shm import CsaOdfModel from dipy.tracking import utils from dipy.tracking.local_tracking import LocalTracking from dipy.tracking.stopping_criterion import ThresholdStoppingCriterion from dipy.tracking.streamline import Streamlines, cluster_confidence from dipy.tracking.utils import length from dipy.viz import actor, window ############################################################################### # First, we need to generate some streamlines. For a more complete # description of these steps, please refer to the CSA Probabilistic Tracking # and the Visualization of ROI Surface Rendered with Streamlines Tutorials. 
hardi_fname, hardi_bval_fname, hardi_bvec_fname = get_fnames(name="stanford_hardi") label_fname = get_fnames(name="stanford_labels") data, affine = load_nifti(hardi_fname) labels = load_nifti_data(label_fname) bvals, bvecs = read_bvals_bvecs(hardi_bval_fname, hardi_bvec_fname) gtab = gradient_table(bvals, bvecs=bvecs) white_matter = (labels == 1) | (labels == 2) csa_model = CsaOdfModel(gtab, sh_order_max=6) csa_peaks = peaks_from_model( csa_model, data, default_sphere, relative_peak_threshold=0.8, min_separation_angle=45, mask=white_matter, ) stopping_criterion = ThresholdStoppingCriterion(csa_peaks.gfa, 0.25) ############################################################################### # We will use a slice of the anatomically-based corpus callosum ROI as our # seed mask to demonstrate the method. # Make a corpus callosum seed mask for tracking seed_mask = labels == 2 seeds = utils.seeds_from_mask(seed_mask, affine, density=[1, 1, 1]) # Make a streamline bundle model of the corpus callosum ROI connectivity streamlines = LocalTracking(csa_peaks, stopping_criterion, seeds, affine, step_size=2) streamlines = Streamlines(streamlines) ############################################################################### # We do not want our results inflated by short streamlines, so we remove # streamlines shorter than 40 mm prior to calculating the CCI. lengths = list(length(streamlines)) long_streamlines = Streamlines() for i, sl in enumerate(streamlines): if lengths[i] > 40: long_streamlines.append(sl) ############################################################################### # Now we calculate the Cluster Confidence Index using the corpus callosum # streamline bundle and visualize the result. cci = cluster_confidence(long_streamlines) # Visualize the streamlines, colored by cci scene = window.Scene() hue = [0.5, 1] saturation = [0.0, 1.0] lut_cmap = actor.colormap_lookup_table( scale_range=(cci.min(), cci.max() / 4), hue_range=hue, saturation_range=saturation ) bar3 = actor.scalar_bar(lookup_table=lut_cmap) scene.add(bar3) stream_actor = actor.line( long_streamlines, colors=cci, linewidth=0.1, lookup_colormap=lut_cmap ) scene.add(stream_actor) ############################################################################### # If you set interactive to True (below), the scene will pop up in an # interactive window. interactive = False if interactive: window.show(scene) window.record(scene=scene, out_path="cci_streamlines.png", size=(800, 800)) ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # Cluster Confidence Index of corpus callosum dataset. # # # If you think of each streamline as a sample of a potential pathway through a # complex landscape of white matter anatomy probed via water diffusion, # intuitively we have more confidence that pathways represented by many samples # (streamlines) reflect a more stable representation of the underlying # phenomenon we are trying to model (anatomical landscape) than do lone # samples. # # The CCI provides a voting system whereby each streamline (within a set # tolerance) gets to vote on how much support it lends to the pathways of the # other streamlines. Outlier pathways score relatively low on the CCI, since they # do not have many streamlines voting for them. These outliers can be removed by # thresholding on the CCI metric.
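###############################################################################
# Because ``cci`` is a NumPy array of per-streamline scores, thresholding is
# simply boolean selection over the scores (a quick check; the explicit loop
# further below is equivalent). We first count how many streamlines would
# survive a threshold of 1, before looking at the full distribution:

print(f"{(cci >= 1).sum()} of {len(cci)} streamlines have CCI >= 1")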
fig, ax = plt.subplots(1) ax.hist(cci, bins=100, histtype="step") ax.set_xlabel("CCI") ax.set_ylabel("# streamlines") fig.savefig("cci_histogram.png") ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # Histogram of Cluster Confidence Index values. # # # Now we threshold the CCI, defining outliers as streamlines that score # below 1. keep_streamlines = Streamlines() for i, sl in enumerate(long_streamlines): if cci[i] >= 1: keep_streamlines.append(sl) # Visualize the streamlines we kept scene = window.Scene() keep_streamlines_actor = actor.line(keep_streamlines, linewidth=0.1) scene.add(keep_streamlines_actor) interactive = False if interactive: window.show(scene) window.record(scene=scene, out_path="filtered_cci_streamlines.png", size=(800, 800)) ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # Outliers, defined as streamlines scoring CCI < 1, were excluded. # # # References # ---------- # # .. footbibliography:: # dipy-1.11.0/doc/examples/combined_workflow_creation.py000066400000000000000000000125311476546756600231260ustar00rootroot00000000000000""" ================================ Creating a new combined workflow ================================ A ``CombinedWorkflow`` is a series of DIPY_ workflows organized together in a way that the output of a workflow serves as input for the next one. First create your ``CombinedWorkflow`` class. Your ``CombinedWorkflow`` class file is usually located in the ``dipy/workflows`` directory. """ import os from dipy.workflows.combined_workflow import CombinedWorkflow ############################################################################### # ``CombinedWorkflow`` is the base class that will be extended to create our # combined workflow. from dipy.workflows.denoise import NLMeansFlow from dipy.workflows.segment import MedianOtsuFlow ############################################################################### # ``MedianOtsuFlow`` and ``NLMeansFlow`` will be combined to create our # processing section. class DenoiseAndSegment(CombinedWorkflow): """ ``DenoiseAndSegment`` is the name of our combined workflow. Note that it needs to extend CombinedWorkflow for everything to work properly. """ def _get_sub_flows(self): return [NLMeansFlow, MedianOtsuFlow] """ It is mandatory to implement this method if you want to make all the sub-workflows' parameters available on the command line. """ def run( self, input_files, out_dir="", out_denoised="processed.nii.gz", out_mask="brain_mask.nii.gz", out_masked="dwi_masked.nii.gz", ): """ Parameters ---------- input_files : string Path to the input files. This path may contain wildcards to process multiple inputs at once. out_dir : string, optional Where the resulting file will be saved. (default '') out_denoised : string, optional Name of the denoised file to be saved. out_mask : string, optional Name of the Otsu mask file to be saved. out_masked : string, optional Name of the Otsu masked file to be saved. """ """ Just like a normal workflow, it is mandatory to have out_dir as a parameter. It is also mandatory to put 'out_' in front of every parameter that is going to be an output. Lastly, all out_ params need to be at the end of the params list. The class docstring part is very important: you need to document every parameter, as they will be used with inspection to build the command line argument parser.
""" io_it = self.get_io_iterator() for fnames in io_it: in_fname = fnames[0] _out_denoised = os.path.basename(fnames[1]) _out_mask = os.path.basename(fnames[2]) _out_masked = os.path.basename(fnames[3]) nl_flow = NLMeansFlow() self.run_sub_flow( nl_flow, in_fname, out_dir=out_dir, out_denoised=_out_denoised ) denoised = nl_flow.last_generated_outputs["out_denoised"] me_flow = MedianOtsuFlow() self.run_sub_flow( me_flow, denoised, out_dir=out_dir, out_mask=_out_mask, out_masked=_out_masked, ) ############################################################################### # Use ``self.get_io_iterator()`` in every workflow you create. This creates # an ``IOIterator`` object that create output file names and directory # structure based on the inputs and some other advanced output strategy # parameters. # # Iterating on the ``IOIterator`` object you created previously you # conveniently get all input and output paths for every input file # found when globbin the input parameters. # # In the ``IOIterator`` loop you can see how we create a new ``NLMeans`` # workflow then run it using ``self.run_sub_flow``. Running it this way will # pass any workflow specific parameter that was retrieved from the command line # and will append the ones you specify as optional parameters (``out_dir`` # in this case). # # Lastly, the outputs paths are retrieved using # ``workflow.last_generated_outputs``. This allows to use ``denoise`` as the # input for the ``MedianOtsuFlow``. # # # This is it for the combined workflow class! Now to be able to call it easily # via command line, you need to add this workflow in 2 different files: # # - ``/pyproject.toml``: open this file and add the following line # to the ``[project.scripts]`` section: # ``dipy_denoise_segment = "dipy.workflows.cli:run"`` # # - ``/dipy/workflows/cli.py``: open this file and add the workflow # information to the ``cli_flows`` dictionary. The key is the name of the # command line command and the value is a tuple with the module name and the # workflow class name. In this case it would be: # ``"dipy_denoise_segment": ("dipy.workflows.my_combined_workflow", # "DenoiseAndSegment")`` # # That`s it! Now you can call your workflow from anywhere with the command line. # Let's just call the script you just made with ``-h`` to see the argparser help # text:: # # dipy_denoise_segment --help # # You should see all your parameters available along with some extra common # ones like logging file and force overwrite. Also all the documentation you # wrote about each parameter is there. dipy-1.11.0/doc/examples/contextual_enhancement.py000066400000000000000000000234661476546756600222740ustar00rootroot00000000000000""" ========================================== Crossing-preserving contextual enhancement ========================================== This demo presents an example of crossing-preserving contextual enhancement of FOD/ODF fields :footcite:p:`Meesters2016a`, implementing the contextual PDE framework of :footcite:p:`Portegies2015b` for processing HARDI data. The aim is to enhance the alignment of elongated structures in the data such that crossing/junctions are maintained while reducing noise and small incoherent structures. This is achieved via a hypo-elliptic 2nd order PDE in the domain of coupled positions and orientations :math:`\\mathbb{R}^3 \\rtimes S^2`. This domain carries a non-flat geometrical differential structure that allows including a notion of alignment between neighboring points. 
Let :math:`({\\bf y},{\\bf n}) \\in \\mathbb{R}^3 \\rtimes S^2` where :math:`{\\bf y} \\in \\mathbb{R}^{3}` denotes the spatial part, and :math:`{\\bf n} \\in S^2` the angular part. Let :math:`W:\\mathbb{R}^3\\rtimes S^2\\times \\mathbb{R}^{+} \\to \\mathbb{R}` be the function representing the evolution of the FOD/ODF field. Then, the contextual PDE with evolution time :math:`t\\geq 0` is given by: .. math:: \\begin{cases} \\frac{\\partial}{\\partial t} W({\\bf y},{\\bf n},t) &= ((D^{33}({\\bf n} \\cdot \\nabla)^2 + D^{44} \\Delta_{S^2})W)({\\bf y},{\\bf n},t) \\\\ W({\\bf y},{\\bf n},0) &= U({\\bf y},{\\bf n}) \\end{cases}, where: * :math:`D^{33}>0` is the coefficient for the spatial smoothing (which goes only in the direction of :math:`n`); * :math:`D^{44}>0` is the coefficient for the angular smoothing (here :math:`\\Delta_{S^2}` denotes the Laplace-Beltrami operator on the sphere :math:`S^2`); * :math:`U:\\mathbb{R}^3\\rtimes S^2 \\to \\mathbb{R}` is the initial condition given by the noisy FOD/ODF field. This equation is solved via a shift-twist convolution (denoted by :math:`\\ast_{\\mathbb{R}^3\\rtimes S^2}`) with its corresponding kernel :math:`P_t:\\mathbb{R}^3\\rtimes S^2 \\to \\mathbb{R}^+`: .. math:: W({\\bf y},{\\bf n},t) = (P_t \\ast_{\\mathbb{R}^3 \\rtimes S^2} U) ({\\bf y},{\\bf n}) = \\int_{\\mathbb{R}^3} \\int_{S^2} P_t (R^T_{{\\bf n}^\\prime}({\\bf y}-{\\bf y}^\\prime), R^T_{{\\bf n}^\\prime} {\\bf n} ) U({\\bf y}^\\prime, {\\bf n}^\\prime) Here, :math:`R_{\\bf n}` is any 3D rotation that maps the vector :math:`(0,0,1)` onto :math:`{\\bf n}`. Note that the shift-twist convolution differs from a Euclidean convolution and takes into account the non-flat structure of the space :math:`\\mathbb{R}^3\\rtimes S^2`. The kernel :math:`P_t` has a stochastic interpretation :footcite:p:`Duits2011`. It can be seen as the limiting distribution obtained by accumulating random walks of particles in the position/orientation domain, where in each step the particles can (randomly) move forward/backward along their current orientation, and (randomly) change their orientation. This is an extension to the 3D case of the process for contour enhancement of 2D images. .. figure:: /_static/images/examples/stochastic_process.jpg :scale: 60 % :align: center The random motion of particles (a) and its corresponding probability map (b) in 2D. The 3D kernel is shown on the right. Adapted from :footcite:p:`Portegies2015b`. In practice, as the exact analytical formulas for the kernel :math:`P_t` are unknown, we use the approximation given in :footcite:p:`Portegies2015a`. """ import numpy as np from dipy.core.gradients import gradient_table from dipy.data import default_sphere, get_fnames from dipy.denoise.enhancement_kernel import EnhancementKernel from dipy.denoise.shift_twist_convolution import convolve from dipy.io.gradients import read_bvals_bvecs from dipy.io.image import load_nifti_data from dipy.reconst.csdeconv import ( ConstrainedSphericalDeconvModel, auto_response_ssst, odf_sh_to_sharp, ) from dipy.reconst.shm import sf_to_sh, sh_to_sf from dipy.segment.mask import median_otsu from dipy.sims.voxel import add_noise from dipy.viz import actor, window ############################################################################### # The enhancement is evaluated on the Stanford HARDI dataset # (150 orientations, b=2000 $s/mm^2$) where Rician noise is added. Constrained # spherical deconvolution is used to model the fiber orientations.
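###############################################################################
# For reference, the Rician noise model used by ``add_noise`` below amounts to
# taking the magnitude of a complex-valued Gaussian perturbation of the signal.
# A standalone toy sketch (``np`` was imported above; this is a schematic of
# the noise model, not the exact ``add_noise`` implementation):

_rng = np.random.default_rng(42)
_signal = np.full(1000, 100.0)
_sigma = _signal.mean() / 10.0  # SNR of 10, matching the call below
_noisy = np.abs(_signal + _rng.normal(0, _sigma, 1000) + 1j * _rng.normal(0, _sigma, 1000))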
# Read data hardi_fname, hardi_bval_fname, hardi_bvec_fname = get_fnames(name="stanford_hardi") data = load_nifti_data(hardi_fname) bvals, bvecs = read_bvals_bvecs(hardi_bval_fname, hardi_bvec_fname) gtab = gradient_table(bvals, bvecs=bvecs) # Add Rician noise b0_slice = data[:, :, :, 1] b0_mask, mask = median_otsu(b0_slice) rng = np.random.default_rng(1) data_noisy = add_noise( data, 10.0, np.mean(b0_slice[mask]), noise_type="rician", rng=rng ) # Select a small part of it. padding = 3 # Include a larger region to avoid boundary effects data_small = data[25 - padding : 40 + padding, 65 - padding : 80 + padding, 35:42] data_noisy_small = data_noisy[ 25 - padding : 40 + padding, 65 - padding : 80 + padding, 35:42 ] ############################################################################### # Enables/disables interactive visualization interactive = False ############################################################################### # Fit an initial model to the data, in this case Constrained Spherical # Deconvolution is used. # Perform CSD on the original data response, ratio = auto_response_ssst(gtab, data, roi_radii=10, fa_thr=0.7) csd_model_orig = ConstrainedSphericalDeconvModel(gtab, response) csd_fit_orig = csd_model_orig.fit(data_small) csd_shm_orig = csd_fit_orig.shm_coeff # Perform CSD on the original data + noise response, ratio = auto_response_ssst(gtab, data_noisy, roi_radii=10, fa_thr=0.7) csd_model_noisy = ConstrainedSphericalDeconvModel(gtab, response) csd_fit_noisy = csd_model_noisy.fit(data_noisy_small) csd_shm_noisy = csd_fit_noisy.shm_coeff ############################################################################### # Inspired by :footcite:p:`Rodrigues2010`, a lookup-table is created, containing # rotated versions of the kernel :math:`P_t` sampled over a discrete set of # orientations. In order to ensure rotationally invariant processing, the # discrete orientations are required to be equally distributed over a sphere. # By default, a sphere with 100 directions is used. # Create lookup table D33 = 1.0 D44 = 0.02 t = 1 k = EnhancementKernel(D33, D44, t) ############################################################################### # Visualize the kernel scene = window.Scene() # convolve kernel with delta spike spike = np.zeros((7, 7, 7, k.get_orientations().shape[0]), dtype=np.float64) spike[3, 3, 3, 0] = 1 spike_shm_conv = convolve( sf_to_sh(spike, k.get_sphere(), sh_order_max=8), k, sh_order_max=8, test_mode=True ) spike_sf_conv = sh_to_sf(spike_shm_conv, default_sphere, sh_order_max=8) model_kernel = actor.odf_slicer( spike_sf_conv * 6, sphere=default_sphere, norm=False, scale=0.4 ) model_kernel.display(x=3) scene.add(model_kernel) scene.set_camera(position=(30, 0, 0), focal_point=(0, 0, 0), view_up=(0, 0, 1)) window.record(scene=scene, out_path="kernel.png", size=(900, 900)) if interactive: window.show(scene) ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # Visualization of the contour enhancement kernel. # # # # Shift-twist convolution is applied on the noisy data # Perform convolution csd_shm_enh = convolve(csd_shm_noisy, k, sh_order_max=8) ############################################################################### # The Sharpening Deconvolution Transform is applied to sharpen the ODF field. 
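###############################################################################
# As an aside, both the convolution above and the sharpening below operate on
# spherical harmonic (SH) coefficients, and the SH <-> spherical function (SF)
# conversions used next are plain linear basis changes. A quick standalone
# round-trip on synthetic order-8 coefficients (45 coefficients in the
# symmetric basis) shows the mapping is essentially lossless for band-limited
# data:

_sh = np.random.default_rng(0).standard_normal(45)
_sf = sh_to_sf(_sh, default_sphere, sh_order_max=8)
_sh_back = sf_to_sh(_sf, default_sphere, sh_order_max=8)
print("max SH round-trip error:", np.abs(_sh - _sh_back).max())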
# Sharpen via the Sharpening Deconvolution Transform csd_shm_enh_sharp = odf_sh_to_sharp( csd_shm_enh, default_sphere, sh_order_max=8, lambda_=0.1 ) # Convert raw and enhanced data to discrete form csd_sf_orig = sh_to_sf(csd_shm_orig, default_sphere, sh_order_max=8) csd_sf_noisy = sh_to_sf(csd_shm_noisy, default_sphere, sh_order_max=8) csd_sf_enh = sh_to_sf(csd_shm_enh, default_sphere, sh_order_max=8) csd_sf_enh_sharp = sh_to_sf(csd_shm_enh_sharp, default_sphere, sh_order_max=8) # Normalize the sharpened ODFs csd_sf_enh_sharp *= np.amax(csd_sf_orig) csd_sf_enh_sharp /= np.amax(csd_sf_enh_sharp) * 1.25 ############################################################################### # The end results are visualized. It can be observed that the end result after # diffusion and sharpening is closer to the original noiseless dataset. scene = window.Scene() # original ODF field fodf_spheres_org = actor.odf_slicer( csd_sf_orig, sphere=default_sphere, scale=0.4, norm=False ) fodf_spheres_org.display(z=3) fodf_spheres_org.SetPosition(0, 25, 0) scene.add(fodf_spheres_org) # ODF field with added noise fodf_spheres = actor.odf_slicer( csd_sf_noisy, sphere=default_sphere, scale=0.4, norm=False, ) fodf_spheres.SetPosition(0, 0, 0) scene.add(fodf_spheres) # Enhancement of noisy ODF field fodf_spheres_enh = actor.odf_slicer( csd_sf_enh, sphere=default_sphere, scale=0.4, norm=False ) fodf_spheres_enh.SetPosition(25, 0, 0) scene.add(fodf_spheres_enh) # Additional sharpening fodf_spheres_enh_sharp = actor.odf_slicer( csd_sf_enh_sharp, sphere=default_sphere, scale=0.4, norm=False ) fodf_spheres_enh_sharp.SetPosition(25, 25, 0) scene.add(fodf_spheres_enh_sharp) window.record(scene=scene, out_path="enhancements.png", size=(900, 900)) if interactive: window.show(scene) ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # The results after enhancements. Top-left: original noiseless data. # Bottom-left: original data with added Rician noise (SNR=10). Bottom-right: # After enhancement of noisy data. Top-right: After enhancement and sharpening # of noisy data. # # # References # ---------- # # .. footbibliography:: # dipy-1.11.0/doc/examples/denoise_ascm.py000066400000000000000000000142071476546756600201630ustar00rootroot00000000000000""" ============================================================== Denoise images using Adaptive Soft Coefficient Matching (ASCM) ============================================================== The adaptive soft coefficient matching (ASCM), as described in :footcite:p:`Coupe2012`, is an improved extension of non-local means (NLMEANS) denoising. ASCM produces a better denoised image from two standard non-local means denoised versions of the original data with different degrees of sharpness. Here, one denoised input is more "smooth" than the other (the easiest way to achieve this denoising is to use :ref:`non_local_means` with two different patch radii). ASCM involves these basic steps: * Computes the wavelet decomposition of the noisy as well as the denoised inputs * Combines the wavelets for the output image in a way that it takes its smoothness (low frequency components) from the input with larger smoothing, and the sharp features (high frequency components) from the input with less smoothing. This way ASCM gives us a well-denoised output while preserving the sharpness of the image features.
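For intuition, the coefficient-mixing step can be sketched with a toy 3D wavelet transform. This is only a schematic stand-in for the idea, not DIPY's actual implementation, and it assumes the optional PyWavelets (``pywt``) package is available::

    import numpy as np
    import pywt

    rng = np.random.default_rng(0)
    sharp = rng.standard_normal((16, 16, 16))   # stand-in for the small-patch result
    smooth = rng.standard_normal((16, 16, 16))  # stand-in for the large-patch result

    c_sharp = pywt.dwtn(sharp, "db1")
    c_smooth = pywt.dwtn(smooth, "db1")
    # approximation ("aaa") band from the smooth input,
    # detail bands from the sharp input
    mixed = {k: (c_smooth[k] if k == "aaa" else c_sharp[k]) for k in c_sharp}
    toy_output = pywt.idwtn(mixed, "db1")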
Let us load the necessary modules """ # noqa: E501 from time import time import matplotlib.pyplot as plt import numpy as np from dipy.core.gradients import gradient_table from dipy.data import get_fnames from dipy.denoise.adaptive_soft_matching import adaptive_soft_matching from dipy.denoise.noise_estimate import estimate_sigma from dipy.denoise.non_local_means import non_local_means from dipy.io.gradients import read_bvals_bvecs from dipy.io.image import load_nifti, save_nifti ############################################################################### # Choose one of the datasets available in dipy_ dwi_fname, dwi_bval_fname, dwi_bvec_fname = get_fnames(name="sherbrooke_3shell") data, affine = load_nifti(dwi_fname) bvals, bvecs = read_bvals_bvecs(dwi_bval_fname, dwi_bvec_fname) gtab = gradient_table(bvals, bvecs=bvecs) mask = data[..., 0] > 80 data = data[..., 1] print("vol size", data.shape) t = time() ############################################################################### # In order to generate the two pre-denoised versions of the data we will use # the :ref:`non_local_means` denoising # noqa: E501 # For ``non_local_means``, first we need to estimate the standard deviation of # the noise. We use N=4 since the Sherbrooke dataset was acquired on a # 1.5T Siemens scanner with a 4 array head coil. sigma = estimate_sigma(data, N=4) ############################################################################### # For the denoised version of the original data which preserves sharper # features, we perform non-local means with a smaller patch size. den_small = non_local_means( data, sigma=sigma, mask=mask, patch_radius=1, block_radius=1, rician=True ) ############################################################################### # For the denoised version of the original data that implies more smoothing, we # perform non-local means with a larger patch size. den_large = non_local_means( data, sigma=sigma, mask=mask, patch_radius=2, block_radius=1, rician=True ) ############################################################################### # Now we perform the adaptive soft coefficient matching. Empirically, we set the # adaptive parameter in ASCM to be the average of the local noise variance, # in this case the sigma itself. den_final = adaptive_soft_matching(data, den_small, den_large, sigma[0]) print("total time", time() - t) ############################################################################### # To assess the quality of this denoising procedure, we plot an axial slice # of the original data, its denoised output, and the residuals. axial_middle = data.shape[2] // 2 original = data[:, :, axial_middle].T final_output = den_final[:, :, axial_middle].T difference = np.abs(final_output.astype(np.float64) - original.astype(np.float64)) difference[~mask[:, :, axial_middle].T] = 0 fig, ax = plt.subplots(1, 3) ax[0].imshow(original, cmap="gray", origin="lower") ax[0].set_title("Original") ax[1].imshow(final_output, cmap="gray", origin="lower") ax[1].set_title("ASCM output") ax[2].imshow(difference, cmap="gray", origin="lower") ax[2].set_title("Residual") for i in range(3): ax[i].set_axis_off() plt.savefig("denoised_ascm.png", bbox_inches="tight") ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # Showing the axial slice without (left) and with (middle) ASCM denoising.
# # # From the above figure, we can see that the residual is fairly uniform in # nature, which indicates that ASCM denoises the data while preserving the # sharpness of the features. Now we save the entire denoised output in the # ``denoised_ascm.nii.gz`` file. save_nifti("denoised_ascm.nii.gz", den_final, affine) ############################################################################### # For comparison purposes, we also plot the outputs of ``non_local_means`` # (with both the larger and the smaller patch radius) alongside the ASCM # output. fig, ax = plt.subplots(1, 4) ax[0].imshow(original, cmap="gray", origin="lower") ax[0].set_title("Original") ax[1].imshow( den_small[..., axial_middle].T, cmap="gray", origin="lower", interpolation="none" ) ax[1].set_title("NLMEANS small") ax[2].imshow( den_large[..., axial_middle].T, cmap="gray", origin="lower", interpolation="none" ) ax[2].set_title("NLMEANS large") ax[3].imshow(final_output, cmap="gray", origin="lower", interpolation="none") ax[3].set_title("ASCM") for i in range(4): ax[i].set_axis_off() plt.savefig("ascm_comparison.png", bbox_inches="tight") ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # Comparing outputs of the NLMEANS and ASCM. # # # From the above figure, we can observe that, by combining the information of two # pre-denoised versions of the raw data, ASCM outperforms standard non-local # means in suppressing noise and preserving feature sharpness. # # References # ---------- # # .. footbibliography:: # dipy-1.11.0/doc/examples/denoise_gibbs.py000066400000000000000000000244321476546756600203270ustar00rootroot00000000000000""" =============================================================================== Suppress Gibbs oscillations =============================================================================== Magnetic Resonance (MR) images are reconstructed from the Fourier coefficients of acquired k-space images. Since only a finite number of Fourier coefficients can be acquired in practice, reconstructed MR images can be corrupted by Gibbs artefacts, which manifest as intensity oscillations adjacent to edges of different tissue types :footcite:p:`Veraart2016a`. Although this artefact affects MR images in general, in the context of diffusion-weighted imaging, Gibbs oscillations can be magnified in derived diffusion-based estimates :footcite:p:`Veraart2016a`, :footcite:p:`Perrone2015`. In the following example, we show how to suppress Gibbs artefacts of MR images. This algorithm is based on an adapted version of a sub-voxel Gibbs suppression procedure :footcite:p:`Kellner2016`. Full details of the implemented algorithm can be found in chapter 3 of :footcite:p:`NetoHenriques2018` (please cite :footcite:p:`Kellner2016`, :footcite:p:`NetoHenriques2018` if you are using this code).
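The origin of the artefact is easy to reproduce in one dimension: truncating the Fourier series of an ideal edge produces the characteristic ringing. The short standalone sketch below mirrors what this example later does in 2D by cropping k-space::

    import numpy as np

    profile = np.zeros(256)
    profile[96:160] = 1.0                        # an ideal tissue edge
    coeffs = np.fft.fftshift(np.fft.fft(profile))
    coeffs[:112] = 0.0
    coeffs[-112:] = 0.0                          # keep only 32 central coefficients
    truncated = np.fft.ifft(np.fft.ifftshift(coeffs)).real
    # ``truncated`` now oscillates next to the edges -- the Gibbs effect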
The algorithm to suppress Gibbs oscillations can be imported from the denoise module of dipy: """ import matplotlib.pyplot as plt import numpy as np from dipy.data import get_fnames, read_cenir_multib from dipy.denoise.gibbs import gibbs_removal from dipy.io.image import load_nifti_data import dipy.reconst.msdki as msdki from dipy.segment.mask import median_otsu ############################################################################### # We first apply this algorithm to a T1-weighted dataset which can be fetched # using the following code: t1_fname, t1_denoised_fname, ap_fname = get_fnames(name="tissue_data") t1 = load_nifti_data(t1_denoised_fname) ############################################################################### # Let's plot a slice of this dataset. axial_slice = 88 t1_slice = t1[..., axial_slice] fig = plt.figure(figsize=(15, 4)) fig.subplots_adjust(wspace=0.2) t1_slice = np.rot90(t1_slice) plt.subplot(1, 2, 1) plt.imshow(t1_slice, cmap="gray", vmin=100, vmax=400) plt.colorbar() fig.savefig("structural.png") ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # Representative slice of a T1-weighted structural image. # # # Due to the high quality of the data, Gibbs artefacts are not visually # evident in this dataset. Therefore, to analyse the benefits of the Gibbs # suppression algorithm, Gibbs artefacts are artificially introduced by # removing high frequencies of the image's Fourier transform. c = np.fft.fft2(t1_slice) c = np.fft.fftshift(c) N = c.shape[0] c_crop = c[64:192, 64:192] N = c_crop.shape[0] t1_gibbs = abs(np.fft.ifft2(c_crop) / 4) ############################################################################### # Gibbs oscillation suppression of this single data slice can be performed by # running the following command: t1_unring = gibbs_removal(t1_gibbs, inplace=False) ############################################################################### # Let’s plot the results: fig1, ax = plt.subplots(1, 3, figsize=(12, 6), subplot_kw={"xticks": [], "yticks": []}) ax.flat[0].imshow(t1_gibbs, cmap="gray", vmin=100, vmax=400) ax.flat[0].annotate( "Rings", fontsize=10, xy=(81, 70), color="red", xycoords="data", xytext=(30, 0), textcoords="offset points", arrowprops={"arrowstyle": "->", "color": "red"}, ) ax.flat[0].annotate( "Rings", fontsize=10, xy=(75, 50), color="red", xycoords="data", xytext=(30, 0), textcoords="offset points", arrowprops={"arrowstyle": "->", "color": "red"}, ) ax.flat[1].imshow(t1_unring, cmap="gray", vmin=100, vmax=400) ax.flat[1].annotate( "WM/GM", fontsize=10, xy=(75, 50), color="green", xycoords="data", xytext=(30, 0), textcoords="offset points", arrowprops={"arrowstyle": "->", "color": "green"}, ) ax.flat[2].imshow(t1_unring - t1_gibbs, cmap="gray", vmin=0, vmax=10) ax.flat[2].annotate( "Rings", fontsize=10, xy=(81, 70), color="red", xycoords="data", xytext=(30, 0), textcoords="offset points", arrowprops={"arrowstyle": "->", "color": "red"}, ) ax.flat[2].annotate( "Rings", fontsize=10, xy=(75, 50), color="red", xycoords="data", xytext=(30, 0), textcoords="offset points", arrowprops={"arrowstyle": "->", "color": "red"}, ) plt.show() fig1.savefig("Gibbs_suppression_structural.png") ############################################################################### # .. 
rst-class:: centered small fst-italic fw-semibold # # Uncorrected and corrected structural images are shown in the left # and middle panels, while the difference between these images is shown # in the right panel. # # # The image artificially corrupted with Gibbs artefacts is shown in the left # panel. In this panel, the characteristic ringing profile of Gibbs artefacts # can be visually appreciated (see intensity oscillations pointed to by the red # arrows). The corrected image is shown in the middle panel. One can appreciate # that artefactual oscillations are visually suppressed without compromising # the contrast between white and grey matter (e.g., details pointed to by the green # arrow). The difference between uncorrected and corrected data is plotted in # the right panel, which highlights the suppressed Gibbs ringing profile. # # # Now let's show how to use the Gibbs suppression algorithm in # diffusion-weighted images. We fetch the multi-shell diffusion-weighted # dataset which was kindly supplied by Romain Valabrègue, # CENIR, ICM, Paris :footcite:p:`Valabregue2015`. bvals = [200, 400, 1000, 2000] img, gtab = read_cenir_multib(bvals=bvals) data = np.asarray(img.dataobj) ############################################################################### # For illustration purposes, we select two slices of this dataset. data_slices = data[:, :, 40:42, :] ############################################################################### # Gibbs oscillation suppression of all multi-shell data and all slices # can be performed in the following way: data_corrected = gibbs_removal(data_slices, slice_axis=2, num_processes=-1) ############################################################################### # Due to the high dimensionality of diffusion-weighted data, we recommend # that you specify which axis of the data matrix corresponds to # different slices in the above step. This is done by using the optional # parameter 'slice_axis'. # # Below we plot the results for an image acquired with b-value=0: fig2, ax = plt.subplots(1, 3, figsize=(12, 6), subplot_kw={"xticks": [], "yticks": []}) ax.flat[0].imshow( data_slices[:, :, 0, 0].T, cmap="gray", origin="lower", vmin=0, vmax=10000 ) ax.flat[0].set_title("Uncorrected b0") ax.flat[1].imshow( data_corrected[:, :, 0, 0].T, cmap="gray", origin="lower", vmin=0, vmax=10000 ) ax.flat[1].set_title("Corrected b0") ax.flat[2].imshow( data_corrected[:, :, 0, 0].T - data_slices[:, :, 0, 0].T, cmap="gray", origin="lower", vmin=-500, vmax=500, ) ax.flat[2].set_title("Gibbs residuals") plt.show() fig2.savefig("Gibbs_suppression_b0.png") ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # Uncorrected (left panel) and corrected (middle panel) b-value=0 images. For # reference, the difference between uncorrected and corrected images is shown # in the right panel. # # # The above figure shows that the benefit of suppressing Gibbs artefacts is # hard to observe on b-value=0 data. Therefore, diffusion-derived metrics for # both uncorrected and corrected data are computed using the mean signal # diffusion kurtosis image technique # (:ref:`sphx_glr_examples_built_reconstruction_reconst_msdki.py`). # # To avoid unnecessary calculations on the background of the image, we also # compute a brain mask.
# Create a brain mask maskdata, mask = median_otsu( data_slices, vol_idx=range(10, 50), median_radius=3, numpass=1, autocrop=False, dilate=1, ) # Define mean signal diffusion kurtosis model dki_model = msdki.MeanDiffusionKurtosisModel(gtab) # Fit the uncorrected data dki_fit = dki_model.fit(data_slices, mask=mask) MSKini = dki_fit.msk # Fit the corrected data dki_fit = dki_model.fit(data_corrected, mask=mask) MSKgib = dki_fit.msk ############################################################################### # Let's plot the results fig3, ax = plt.subplots(1, 3, figsize=(12, 12), subplot_kw={"xticks": [], "yticks": []}) ax.flat[0].imshow(MSKini[:, :, 0].T, cmap="gray", origin="lower", vmin=0, vmax=1.5) ax.flat[0].set_title("MSK (uncorrected)") ax.flat[0].annotate( "Rings", fontsize=12, xy=(59, 63), color="red", xycoords="data", xytext=(45, 0), textcoords="offset points", arrowprops={"arrowstyle": "->", "color": "red"}, ) ax.flat[1].imshow(MSKgib[:, :, 0].T, cmap="gray", origin="lower", vmin=0, vmax=1.5) ax.flat[1].set_title("MSK (corrected)") ax.flat[2].imshow( MSKgib[:, :, 0].T - MSKini[:, :, 0].T, cmap="gray", origin="lower", vmin=-0.2, vmax=0.2, ) ax.flat[2].set_title("MSK (uncorrected - corrected)") ax.flat[2].annotate( "Rings", fontsize=12, xy=(59, 63), color="red", xycoords="data", xytext=(45, 0), textcoords="offset points", arrowprops={"arrowstyle": "->", "color": "red"}, ) plt.show() fig3.savefig("Gibbs_suppression_msdki.png") ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # Uncorrected and corrected mean signal kurtosis images are shown in the left # and middle panels. The difference between uncorrected and corrected images # is shown in the right panel. # # # In the left panel of the figure above, Gibbs artefacts can be appreciated by # the negative values of mean signal kurtosis (black voxels) adjacent to the # brain ventricle (red arrows). These negative values seem to be suppressed # after the `gibbs_removal` function is applied. For a better visualization of # Gibbs oscillations, the difference between corrected and uncorrected images # is shown in the right panel. # # # References # ---------- # # .. footbibliography:: # dipy-1.11.0/doc/examples/denoise_localpca.py000066400000000000000000000115071476546756600210160ustar00rootroot00000000000000""" ======================================================= Denoise images using Local PCA via empirical thresholds ======================================================= PCA-based denoising algorithms are effective denoising methods because they exploit the redundancy of the multi-dimensional information of diffusion-weighted datasets. In this example, we will show how to perform the PCA-based denoising using the algorithm proposed by :footcite:t:`Manjon2013`. This algorithm involves the following steps: * First, we estimate the local noise variance at each voxel. * Then, we apply PCA in local patches around each voxel over the gradient directions.
* Finally, we threshold the eigenvalues based on the local estimate of sigma
  and then do a PCA reconstruction.

To perform PCA denoising without a prior noise standard deviation estimate,
please see the following example instead:
:ref:`sphx_glr_examples_built_preprocessing_denoise_mppca.py`

Let's load the necessary modules
"""

from time import time

import matplotlib.pyplot as plt
import numpy as np

from dipy.core.gradients import gradient_table
from dipy.data import get_fnames
from dipy.denoise.localpca import localpca
from dipy.denoise.pca_noise_estimate import pca_noise_estimate
from dipy.io.gradients import read_bvals_bvecs
from dipy.io.image import load_nifti, save_nifti

###############################################################################
# Load one of the datasets. These data were acquired with 63 gradients and 1
# non-diffusion (b=0) image.

dwi_fname, dwi_bval_fname, dwi_bvec_fname = get_fnames(name="isbi2013_2shell")
data, affine = load_nifti(dwi_fname)
bvals, bvecs = read_bvals_bvecs(dwi_bval_fname, dwi_bvec_fname)
gtab = gradient_table(bvals, bvecs=bvecs)

print("Input Volume", data.shape)

###############################################################################
# Estimate the noise standard deviation
# =====================================
#
# We use the ``pca_noise_estimate`` method to estimate the value of sigma to be
# used in the local PCA algorithm proposed by :footcite:t:`Manjon2013`.
# It takes both the data and the gradient table object as input and returns an
# estimate of the local noise standard deviation as a 3D array. We return a
# smoothed version, where a Gaussian filter with radius 3 voxels has been
# applied to the estimate of the noise before returning it.
#
# We correct for the bias due to Rician noise, based on an equation developed
# by :footcite:t:`Koay2006a`.

t = time()

sigma = pca_noise_estimate(data, gtab, correct_bias=True, smooth=3)
print("Sigma estimation time", time() - t)

###############################################################################
# Perform the localPCA using the function `localpca`
# ==================================================
#
# The localpca algorithm takes into account the multi-dimensional information
# of the diffusion MR data. It performs PCA on a local 4D patch and
# then removes the noise components by thresholding the lowest eigenvalues.
# The eigenvalue threshold will be computed from the local variance estimate
# performed by the ``pca_noise_estimate`` function, if it is provided to the
# main ``localpca`` function. The relationship between the noise variance
# estimate and the eigenvalue threshold can be adjusted using the input
# parameter ``tau_factor``. According to :footcite:t:`Manjon2013`, this
# parameter is set to 2.3.

t = time()

denoised_arr = localpca(data, sigma=sigma, tau_factor=2.3, patch_radius=2)

print("Time taken for local PCA (slow)", -t + time())

###############################################################################
# The ``localpca`` function returns the denoised data, which is plotted below
# (middle panel) together with the original version of the data (left panel)
# and the algorithm's residual image (right panel), after a short aside on the
# eigenvalue threshold.
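###############################################################################
# Aside: under the scheme of :footcite:t:`Manjon2013`, the eigenvalue
# threshold applied in each local patch scales with the local noise variance;
# with ``tau_factor=2.3`` as used above, it is computed as
# ``(tau_factor * sigma) ** 2``. The lines below merely illustrate that
# relationship; they are not a call into DIPY's internals:

tau = (2.3 * sigma) ** 2  # illustrative per-voxel eigenvalue threshold
print("Median eigenvalue threshold:", np.median(tau))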
sli = data.shape[2] // 2
gra = data.shape[3] // 2
orig = data[:, :, sli, gra]
den = denoised_arr[:, :, sli, gra]
rms_diff = np.sqrt((orig - den) ** 2)

fig, ax = plt.subplots(1, 3)
ax[0].imshow(orig, cmap="gray", origin="lower", interpolation="none")
ax[0].set_title("Original")
ax[0].set_axis_off()
ax[1].imshow(den, cmap="gray", origin="lower", interpolation="none")
ax[1].set_title("Denoised Output")
ax[1].set_axis_off()
ax[2].imshow(rms_diff, cmap="gray", origin="lower", interpolation="none")
ax[2].set_title("Residual")
ax[2].set_axis_off()
plt.savefig("denoised_localpca.png", bbox_inches="tight")

###############################################################################
# .. rst-class:: centered small fst-italic fw-semibold
#
# The original data (left panel), the denoised output (middle panel), and the
# residual image (right panel).
#
#
# Below we show how the denoised data can be saved. The denoised data is saved
# in the same format as the input data.

save_nifti("denoised_localpca.nii.gz", denoised_arr, affine)

###############################################################################
# References
# ----------
#
# .. footbibliography::
#
dipy-1.11.0/doc/examples/denoise_mppca.py000066400000000000000000000221011476546756600203300ustar00rootroot00000000000000"""
======================================================
Denoise images using the Marcenko-Pastur PCA algorithm
======================================================

The PCA-based denoising algorithm exploits the redundancy across the
diffusion-weighted images :footcite:p:`Manjon2013`, :footcite:p:`Veraart2016b`.
This algorithm has been shown to provide an optimal compromise between noise
suppression and loss of anatomical information for different techniques such as
DTI :footcite:p:`Manjon2013`, spherical deconvolution :footcite:p:`Veraart2016b`
and DKI :footcite:p:`NetoHenriques2018`.

The basic idea behind the PCA-based denoising algorithms is to remove the
components of the data that are classified as noise. The Principal Components
classification can be performed based on prior noise variance estimates
:footcite:p:`Manjon2013` (see :ref:`denoise_localpca`) or automatically based on
the Marchenko-Pastur distribution :footcite:p:`Veraart2016b`. In addition to
noise suppression, the PCA algorithm can be used to get the standard deviation
of the noise :footcite:p:`Veraart2016b`.

In the following example, we show how to denoise diffusion MRI images and
estimate the noise standard deviation using the PCA algorithm based
on the Marcenko-Pastur distribution :footcite:p:`Veraart2016b`.

Let's load the necessary modules
"""  # noqa: E501

# load general modules
from time import time

import matplotlib.pyplot as plt
import numpy as np

# load other dipy's functions that will be used for auxiliary analysis
from dipy.core.gradients import gradient_table

# load functions to fetch data for this example
from dipy.data import get_fnames

# load main pca function using Marcenko-Pastur distribution
from dipy.denoise.localpca import mppca
from dipy.io.gradients import read_bvals_bvecs
from dipy.io.image import load_nifti, save_nifti
import dipy.reconst.dki as dki
from dipy.segment.mask import median_otsu

###############################################################################
# For this example, we use fetch to download a multi-shell dataset which was
# kindly provided by Hansen and Jespersen (more details about the data are
# provided in their paper :footcite:p:`Hansen2016a`). The total size of the
# downloaded data is 192 MBytes; however, you only need to fetch it once.
dwi_fname, dwi_bval_fname, dwi_bvec_fname, _ = get_fnames(name="cfin_multib") data, affine = load_nifti(dwi_fname) bvals, bvecs = read_bvals_bvecs(dwi_bval_fname, dwi_bvec_fname) gtab = gradient_table(bvals, bvecs=bvecs) ############################################################################### # For the sake of simplicity, we only select two non-zero b-values for this # example. bvals = gtab.bvals bvecs = gtab.bvecs sel_b = np.logical_or(np.logical_or(bvals == 0, bvals == 1000), bvals == 2000) data = data[..., sel_b] gtab = gradient_table(bvals[sel_b], bvecs=bvecs[sel_b]) print(data.shape) ############################################################################### # As one can see from its shape, the selected data contains a total of 67 # volumes of images corresponding to all the diffusion gradient directions # of the selected b-values. # # The PCA denoising using the Marchenko-Pastur distribution can be performed by # calling the function ``mppca``: t = time() denoised_arr = mppca(data, patch_radius=2) print("Time taken for local MP-PCA ", -t + time()) ############################################################################### # Internally, the ``mppca`` algorithm denoises the diffusion-weighted data # using a 3D sliding window which is defined by the variable patch_radius. # In total, this window should comprise a larger number of voxels than the # number of diffusion-weighted volumes. Since our data has a total of 67 # volumes, the patch_radius is set to 2 which corresponds to a 5x5x5 sliding # window comprising a total of 125 voxels. # To assess the performance of the Marchenko-Pastur PCA denoising algorithm, # an axial slice of the original data, denoised data, and residuals are # plotted below: sli = data.shape[2] // 2 gra = data.shape[3] - 1 orig = data[:, :, sli, gra] den = denoised_arr[:, :, sli, gra] rms_diff = np.sqrt((orig - den) ** 2) fig1, ax = plt.subplots(1, 3, figsize=(12, 6), subplot_kw={"xticks": [], "yticks": []}) fig1.subplots_adjust(hspace=0.3, wspace=0.05) ax.flat[0].imshow(orig.T, cmap="gray", interpolation="none", origin="lower") ax.flat[0].set_title("Original") ax.flat[1].imshow(den.T, cmap="gray", interpolation="none", origin="lower") ax.flat[1].set_title("Denoised Output") ax.flat[2].imshow(rms_diff.T, cmap="gray", interpolation="none", origin="lower") ax.flat[2].set_title("Residuals") fig1.savefig("denoised_mppca.png") ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # The noise suppression can be visually appreciated by comparing the original # data slice (left panel) to its denoised version (middle panel). The # difference between original and denoised data showing only random noise # indicates that the data's structural information is preserved by the PCA # denoising algorithm (right panel). # # # Below we show how the denoised data can be saved. save_nifti("denoised_mppca.nii.gz", denoised_arr, affine) ############################################################################### # Additionally, we show how the PCA denoising algorithm affects different # diffusion measurements. 
For this, we run the diffusion kurtosis model
# below on both original and denoised versions of the data:

dkimodel = dki.DiffusionKurtosisModel(gtab)

maskdata, mask = median_otsu(
    data, vol_idx=[0, 1], median_radius=4, numpass=2, autocrop=False, dilate=1
)

dki_orig = dkimodel.fit(data, mask=mask)
dki_den = dkimodel.fit(denoised_arr, mask=mask)

###############################################################################
# We use the following code to plot the MD, FA and MK estimates from the two
# data fits:

FA_orig = dki_orig.fa
FA_den = dki_den.fa
MD_orig = dki_orig.md
MD_den = dki_den.md
MK_orig = dki_orig.mk(min_kurtosis=0, max_kurtosis=3)
MK_den = dki_den.mk(min_kurtosis=0, max_kurtosis=3)


fig2, ax = plt.subplots(2, 3, figsize=(10, 6), subplot_kw={"xticks": [], "yticks": []})

fig2.subplots_adjust(hspace=0.3, wspace=0.03)

ax.flat[0].imshow(
    MD_orig[:, :, sli].T, cmap="gray", vmin=0, vmax=2.0e-3, origin="lower"
)
ax.flat[0].set_title("MD (DKI)")
ax.flat[1].imshow(FA_orig[:, :, sli].T, cmap="gray", vmin=0, vmax=0.7, origin="lower")
ax.flat[1].set_title("FA (DKI)")
ax.flat[2].imshow(MK_orig[:, :, sli].T, cmap="gray", vmin=0, vmax=1.5, origin="lower")
ax.flat[2].set_title("MK (DKI)")

ax.flat[3].imshow(MD_den[:, :, sli].T, cmap="gray", vmin=0, vmax=2.0e-3, origin="lower")
ax.flat[3].set_title("MD (DKI)")
ax.flat[4].imshow(FA_den[:, :, sli].T, cmap="gray", vmin=0, vmax=0.7, origin="lower")
ax.flat[4].set_title("FA (DKI)")
ax.flat[5].imshow(MK_den[:, :, sli].T, cmap="gray", vmin=0, vmax=1.5, origin="lower")
ax.flat[5].set_title("MK (DKI)")

plt.show()
fig2.savefig("denoised_dki.png")

###############################################################################
# .. rst-class:: centered small fst-italic fw-semibold
#
# In the above figure, the DKI maps obtained from the original data are shown
# in the upper panels, while the DKI maps from the denoised data are shown in
# the lower panels. Substantial improvements in measurement robustness can be
# visually appreciated, particularly for the FA and MK estimates.
#
#
# Noise standard deviation estimation using the Marchenko-Pastur PCA algorithm
# ============================================================================
#
# As mentioned above, the Marcenko-Pastur PCA algorithm can also be used to
# estimate the image's noise standard deviation (std). The noise std
# automatically computed from the ``mppca`` function can be returned by
# setting the optional input parameter ``return_sigma`` to True.

denoised_arr, sigma = mppca(data, patch_radius=2, return_sigma=True)

###############################################################################
# Let's plot the noise standard deviation estimate:

fig3 = plt.figure("PCA Noise standard deviation estimation")
plt.imshow(sigma[..., sli].T, cmap="gray", origin="lower")
plt.axis("off")
plt.show()
fig3.savefig("pca_sigma.png")

###############################################################################
# .. rst-class:: centered small fst-italic fw-semibold
#
# The above figure shows that the Marchenko-Pastur PCA algorithm computes a 3D
# spatially varying noise level map. To obtain the mean noise std across all
# voxels, you can use the following lines of code:

mean_sigma = np.mean(sigma[mask])

print(mean_sigma)

###############################################################################
# Below we use this mean noise level estimate to compute the nominal SNR of
# the data (i.e.
SNR at b-value=0):

b0 = denoised_arr[..., 0]

mean_signal = np.mean(b0[mask])

snr = mean_signal / mean_sigma

print(snr)

###############################################################################
# References
# ----------
#
# .. footbibliography::
#
dipy-1.11.0/doc/examples/denoise_nlmeans.py000066400000000000000000000053701476546756600206760ustar00rootroot00000000000000"""
==============================================
Denoise images using Non-Local Means (NLMEANS)
==============================================

Using the non-local means filter :footcite:p:`Coupe2008`,
:footcite:p:`Coupe2012`, you can denoise 3D or 4D images and boost the SNR of
your datasets. You can also decide between modeling the noise as Gaussian or
Rician (default).

We start by loading the necessary modules
"""

from time import time

import matplotlib.pyplot as plt
import numpy as np

from dipy.data import get_fnames
from dipy.denoise.nlmeans import nlmeans
from dipy.denoise.noise_estimate import estimate_sigma
from dipy.io.image import load_nifti, save_nifti

###############################################################################
# Then, let's fetch and load a T1-weighted dataset from Stanford University

t1_fname = get_fnames(name="stanford_t1")
data, affine = load_nifti(t1_fname)
mask = data > 1500

print("vol size", data.shape)

###############################################################################
# In order to call ``nlmeans`` first you need to estimate the standard
# deviation of the noise. We have used N=32 since the Stanford dataset was
# acquired on a 3T GE scanner with a 32-channel array head coil.

sigma = estimate_sigma(data, N=32)

###############################################################################
# Calling the main function ``nlmeans``

t = time()

den = nlmeans(data, sigma=sigma, mask=mask, patch_radius=1, block_radius=2, rician=True)

print("total time", time() - t)

###############################################################################
# Let us plot the axial slice of the denoised output

axial_middle = data.shape[2] // 2

before = data[:, :, axial_middle].T
after = den[:, :, axial_middle].T

difference = np.abs(after.astype(np.float64) - before.astype(np.float64))

difference[~mask[:, :, axial_middle].T] = 0

fig, ax = plt.subplots(1, 3)
ax[0].imshow(before, cmap="gray", origin="lower")
ax[0].set_title("before")
ax[1].imshow(after, cmap="gray", origin="lower")
ax[1].set_title("after")
ax[2].imshow(difference, cmap="gray", origin="lower")
ax[2].set_title("difference")

plt.savefig("denoised.png", bbox_inches="tight")

###############################################################################
# .. rst-class:: centered small fst-italic fw-semibold
#
# Axial slice before (left) and after (middle) NLMEANS denoising, with their
# difference within the mask (right).

save_nifti("denoised.nii.gz", den, affine)

###############################################################################
# An improved version of non-local means denoising is adaptive soft coefficient
# matching, please refer to
# :ref:`sphx_glr_examples_built_preprocessing_denoise_ascm.py` for more
# details.
#
# References
# ----------
#
# ..
footbibliography::
#
dipy-1.11.0/doc/examples/denoise_patch2self.py000066400000000000000000000164001476546756600212700ustar00rootroot00000000000000"""
==================================================================
Patch2Self: Self-Supervised Denoising via Statistical Independence
==================================================================

Patch2Self :footcite:p:`Fadnavis2020` and :footcite:p:`Fadnavis2024` is a
self-supervised learning method for denoising DWI data, which uses the entire
volume to learn a full-rank locally linear denoiser for that volume. By taking
advantage of the oversampled q-space of DWI data, Patch2Self can separate
structure from noise without requiring an explicit model for either.

Classical denoising algorithms such as Local PCA :footcite:p:`Manjon2013`,
:footcite:p:`Veraart2016b`, Non-local Means :footcite:p:`Coupe2008`, Total
Variation Norm :footcite:p:`Knoll2011`, etc. assume certain properties on the
signal structure. Patch2Self *does not* make any such assumption on the signal,
instead using the fact that the noise across different 3D volumes of the DWI
signal originates from random fluctuations in the acquired signal.

Since Patch2Self only relies on the randomness of the noise, it can be applied
at any step in the pre-processing pipeline. The design of Patch2Self is such
that it can work on any type of diffusion data / any body part without
requiring a noise estimate or assumptions on the type of noise (such as its
distribution).

The Patch2Self Framework:

.. _patch2self:

.. image:: https://github.com/dipy/dipy_data/blob/master/Patch2Self_Framework.PNG?raw=true
   :width: 70 %
   :align: center

The above figure demonstrates the working of Patch2Self. The idea is to build
a new regressor for denoising each 3D volume of the 4D diffusion data. This is
done in the following 2 phases:

(A) Self-supervised training: First, we extract 3D Patches from all the ``n``
volumes and hold out the target volume to denoise. Each patch from the rest of
the ``(n-1)`` volumes predicts the center voxel of the corresponding patch in
the target volume.

This is done by using the self-supervised loss:

.. math::

    \\mathcal{L}\\left(\\Phi_{J}\\right)=\\mathbb{E}\\left\\|\\Phi_{J}\\
    \\left(Y_{*, *,-j}\\right)-Y_{*, 0, j}\\right\\|^{2}

(B) Prediction: The same ``(n-1)`` volumes which were used in the training are
now fed into the regressor :math:`\\Phi` built in phase (A). The prediction is
a denoised version of the held-out volume.

Note: The volume to be denoised is merely used as the target in the training
phase; it is not part of the training set in (A), nor is it used as input for
the prediction in (B).

Let's load the necessary modules:
"""  # noqa: E501

import matplotlib.pyplot as plt
import numpy as np

from dipy.data import get_fnames
from dipy.denoise.patch2self import patch2self
from dipy.io.image import load_nifti, save_nifti

###############################################################################
# Patch2Self does not require noise estimation and should work with any kind
# of diffusion data. Before loading an example dataset and denoising it, let's
# make the core idea concrete with a tiny sketch.
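###############################################################################
# The sketch below illustrates phases (A) and (B) with trivial 1x1x1 "patches"
# and ordinary least squares on synthetic data: each volume is held out,
# regressed on all the others, and replaced by its prediction. This is an
# illustration of the idea only, not DIPY's implementation, and all names here
# are hypothetical:

rng = np.random.default_rng(0)
toy = rng.normal(size=(10, 10, 10, 8))  # hypothetical 4D "dMRI" array
X = toy.reshape(-1, toy.shape[-1])
toy_denoised = np.empty_like(X)
for j in range(X.shape[-1]):
    others = np.delete(X, j, axis=1)  # hold out volume j (self-supervision)
    coef, *_ = np.linalg.lstsq(others, X[:, j], rcond=None)
    toy_denoised[:, j] = others @ coef  # predict the held-out volume
toy_denoised = toy_denoised.reshape(toy.shape)

###############################################################################
# With that intuition in place, let's load an example dataset and denoise it
# with the real implementation.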
hardi_fname, hardi_bval_fname, hardi_bvec_fname = get_fnames(name="stanford_hardi")
data, affine = load_nifti(hardi_fname)

bvals = np.loadtxt(hardi_bval_fname)
denoised_arr = patch2self(
    data,
    bvals,
    model="ols",
    shift_intensity=True,
    clip_negative_vals=False,
    b0_threshold=50,
)

###############################################################################
# The above parameters should give optimal denoising performance for
# Patch2Self. The ordinary least squares regression ``(model='ols')`` tends to
# be a little slower depending on the size of the data. In that case, please
# consider switching to ridge regression ``(model='ridge')``.
#
# Please do note that sometimes using ridge regression can hamper the
# performance of Patch2Self. If so, please use ``model='ols'``.
#
# The array ``denoised_arr`` contains the denoised output obtained from
# Patch2Self.
#
# .. note::
#
#     Depending on the acquisition, b0 may exhibit signal attenuation or
#     other artefacts that are not ideal for any denoising algorithm. We
#     therefore provide an option to skip denoising b0 volumes in the data.
#     This can be done by using the option ``b0_denoising=False`` within
#     Patch2Self.
#
#     Please keep ``shift_intensity=True`` and ``clip_negative_vals=False``
#     (as set above) to avoid negative values in the denoised output.
#
#     The ``b0_threshold`` is used to separate the b0 volumes from the DWI
#     volumes. Changing the value of the b0 threshold is needed if the b0
#     volumes in the ``bval`` file have a value greater than the default
#     ``b0_threshold``.
#
#     The default value of ``b0_threshold`` in DIPY is set to 50. If using
#     data such as HCP 7T, the b0 volumes tend to have a higher b-value (>=50)
#     associated with them in the ``bval`` file. Please check the b-values for
#     b0s and adjust the ``b0_threshold`` accordingly.
#
# Now let's visualize the output and the residuals obtained from the denoising.

# Gets the center slice and the middle volume of the 4D diffusion data.
sli = data.shape[2] // 2
gra = 60  # pick out a random volume for a particular gradient direction
orig = data[:, :, sli, gra]
den = denoised_arr[:, :, sli, gra]

# computes the residuals
rms_diff = np.sqrt((orig - den) ** 2)

fig1, ax = plt.subplots(1, 3, figsize=(12, 6), subplot_kw={"xticks": [], "yticks": []})

fig1.subplots_adjust(hspace=0.3, wspace=0.05)

ax.flat[0].imshow(orig.T, cmap="gray", interpolation="none", origin="lower")
ax.flat[0].set_title("Original")
ax.flat[1].imshow(den.T, cmap="gray", interpolation="none", origin="lower")
ax.flat[1].set_title("Denoised Output")
ax.flat[2].imshow(rms_diff.T, cmap="gray", interpolation="none", origin="lower")
ax.flat[2].set_title("Residuals")

fig1.savefig("denoised_patch2self.png")

###############################################################################
# .. rst-class:: centered small fst-italic fw-semibold
#
# Patch2Self preserved anatomical detail. This can be visually verified by
# inspecting the residuals obtained above. Since we do not see any structure in
# the difference residuals, it is clear that it preserved the underlying signal
# structure and got rid of the stochastic noise.
#
#
# Below we show how the denoised data can be saved.

save_nifti("denoised_patch2self.nii.gz", denoised_arr, affine)

###############################################################################
# You can also use Patch2Self version 1 to denoise the data by using the
# ``version`` argument. The default version is set to 3. 
To use version 1, you can call
# Patch2Self as follows::
#
#     patch2self(data, bvals, version=1)
#
# Lastly, one can also use Patch2Self in batches if the number of gradient
# directions is very high (>=200 volumes). For instance, if the data has 300
# volumes, one can split the data into 2 batches (150 directions each) and
# still get the same denoising performance. One can run Patch2Self
# using::
#
#     denoised_batch1 = patch2self(data[..., :150], bvals[:150])
#     denoised_batch2 = patch2self(data[..., 150:], bvals[150:])
#
# After doing this, the 2 denoised batches can be merged as follows::
#
#     denoised_p2s = np.concatenate((denoised_batch1, denoised_batch2), axis=3)
#
# One can also consider using the above batching approach to denoise each
# shell separately if working with multi-shell data.
#
#
# References
# ----------
#
# .. footbibliography::
dipy-1.11.0/doc/examples/fast_streamline_search.py000066400000000000000000000142701476546756600222370ustar00rootroot00000000000000"""
======================
Fast Streamline Search
======================

This example explains how Fast Streamline Search :footcite:p:`StOnge2022`
can be used to find all similar streamlines.

First import the necessary modules.
"""

import numpy as np

from dipy.data import (
    fetch_bundle_atlas_hcp842,
    fetch_target_tractogram_hcp,
    get_target_tractogram_hcp,
    get_two_hcp842_bundles,
)
from dipy.io.streamline import load_trk
from dipy.segment.fss import FastStreamlineSearch, nearest_from_matrix_row
from dipy.viz import actor, window

###############################################################################
# Download and read data for this tutorial

fetch_bundle_atlas_hcp842()
fetch_target_tractogram_hcp()

hcp_file = get_target_tractogram_hcp()
streamlines = load_trk(hcp_file, "same", bbox_valid_check=False).streamlines

###############################################################################
# Visualize the full brain tractogram that will be searched

interactive = False

scene = window.Scene()
scene.SetBackground(1, 1, 1)
scene.add(actor.line(streamlines))
if interactive:
    window.show(scene)
else:
    window.record(scene=scene, out_path="tractograms_initial.png", size=(600, 600))

###############################################################################
# .. rst-class:: centered small fst-italic fw-semibold
#
# The full brain tractogram used as the source for the search.
#
#
#
# Read the Arcuate Fasciculus Left and Corticospinal Tract Left bundles from
# the already fetched atlas data to use them as model bundles. Let's visualize
# the Arcuate Fasciculus Left model bundle.

model_af_l_file, model_cst_l_file = get_two_hcp842_bundles()
sft_af_l = load_trk(model_af_l_file, "same", bbox_valid_check=False)
model_af_l = sft_af_l.streamlines

scene = window.Scene()
scene.SetBackground(1, 1, 1)
scene.add(actor.line(model_af_l, colors=(0, 1, 0)))
scene.set_camera(
    focal_point=(-18.17281532, -19.55606842, 6.92485857),
    position=(-360.11, -30.46, -40.44),
    view_up=(-0.03, 0.028, 0.89),
)
if interactive:
    window.show(scene)
else:
    window.record(scene=scene, out_path="AF_L_model_bundle.png", size=(600, 600))

###############################################################################
# .. rst-class:: centered small fst-italic fw-semibold
#
# Model Arcuate Fasciculus Left bundle
#
#
#
#
# Search for all similar streamlines :footcite:p:`StOnge2022`
#
# Fast Streamline Search can do a radius search to find all streamlines that
# are similar, from one tractogram to another. It returns the distance
# matrix mapping between both tractograms.
The same list of streamlines can
# be given to recover the self distance matrix.
#
# Here, the ``FastStreamlineSearch`` class needs the following
# initialization arguments:
#
# - ref_streamlines : reference streamlines that will be searched in (tree)
# - max_radius : the maximum distance that can be used with radius search
#
# Then, the ``radius_search()`` method needs the following argument:
#
# - radius : for each streamline in the search, find all similar ones in the
#   ``ref_streamlines`` that are within the given radius
#
# Be cautious: a large radius might result in a dense distance computation,
# requiring a large amount of time and memory. The recommended range of the
# radius is from 1 to 10 mm.

radius = 7.0
fs_tree_af = FastStreamlineSearch(ref_streamlines=model_af_l, max_radius=radius)
coo_mdist_mtx = fs_tree_af.radius_search(streamlines, radius=radius)

###############################################################################
# Extract indices of streamlines with similar ones in the reference

ids_s = np.unique(coo_mdist_mtx.row)
ids_ref = np.unique(coo_mdist_mtx.col)

recognized_af_l = streamlines[ids_s].copy()

###############################################################################
# Let's visualize streamlines similar to the Arcuate Fasciculus Left bundle

scene = window.Scene()
scene.SetBackground(1, 1, 1)
scene.add(actor.line(recognized_af_l, colors=(0, 0, 1)))
scene.set_camera(
    focal_point=(-18.17281532, -19.55606842, 6.92485857),
    position=(-360.11, -30.46, -40.44),
    view_up=(-0.03, 0.028, 0.89),
)
if interactive:
    window.show(scene)
else:
    window.record(scene=scene, out_path="AF_L_recognized_bundle.png", size=(600, 600))

###############################################################################
# .. rst-class:: centered small fst-italic fw-semibold
#
# Recognized Arcuate Fasciculus Left bundle
#
#
#
#
# Color the resulting AF by the nearest streamline distance

nn_s, nn_ref, nn_dist = nearest_from_matrix_row(coo_mdist_mtx)

scene = window.Scene()
scene.SetBackground(1, 1, 1)
cmap = actor.colormap_lookup_table(scale_range=(nn_dist.min(), nn_dist.max()))
scene.add(actor.line(recognized_af_l, colors=nn_dist, lookup_colormap=cmap))
scene.add(actor.scalar_bar(lookup_table=cmap, title="distance to atlas (mm)"))
scene.set_camera(
    focal_point=(-18.17281532, -19.55606842, 6.92485857),
    position=(-360.11, -30.46, -40.44),
    view_up=(-0.03, 0.028, 0.89),
)
if interactive:
    window.show(scene)
else:
    window.record(
        scene=scene, out_path="AF_L_recognized_bundle_dist.png", size=(600, 600)
    )

###############################################################################
# .. rst-class:: centered small fst-italic fw-semibold
#
# Recognized Arcuate Fasciculus Left bundle colored by distance to ref
#
#
#
# Display the reference streamlines that were reached in green

# Default red color
ref_color = np.zeros((len(model_af_l), 3), dtype=float)
ref_color[:] = (1.0, 0.0, 0.0)

# Reached in green color
ref_color[ids_ref] = (0.0, 1.0, 0.0)

scene = window.Scene()
scene.SetBackground(1, 1, 1)
scene.add(actor.line(model_af_l, colors=ref_color))
scene.set_camera(
    focal_point=(-18.17281532, -19.55606842, 6.92485857),
    position=(-360.11, -30.46, -40.44),
    view_up=(-0.03, 0.028, 0.89),
)
if interactive:
    window.show(scene)
else:
    window.record(
        scene=scene, out_path="AF_L_model_bundle_reached.png", size=(600, 600)
    )

###############################################################################
# ..
rst-class:: centered small fst-italic fw-semibold
#
# Streamlines of the Arcuate Fasciculus Left model that were reached within
# the search radius (green)
#
#
#
#
# References
# ----------
#
# .. footbibliography::
#
dipy-1.11.0/doc/examples/fiber_to_bundle_coherence.py000066400000000000000000000251761476546756600226740ustar00rootroot00000000000000"""
==================================
Fiber to bundle coherence measures
==================================

This demo presents the fiber to bundle coherence (FBC) measure, a quantitative
measure of the alignment of each fiber with the surrounding fiber bundles
:footcite:p:`Meesters2016b`. These measures are useful in 'cleaning' the
results of tractography algorithms, since low FBCs indicate which fibers are
isolated and poorly aligned with their neighbors, as shown in the figure
below.

.. _fiber_to_bundle_coherence:

.. figure:: /_static/images/examples/fbc_illustration.png
   :scale: 60 %
   :align: center

   On the left this figure illustrates (in 2D) the contribution of two fiber
   points to the kernel density estimator. The kernel density estimator is the
   sum over all such locally aligned kernels. The local fiber to bundle
   coherence, shown on the right, color-coded for each fiber, is obtained by
   evaluating the kernel density estimator along the fibers. One spurious
   fiber is present which is isolated and badly aligned with the other fibers,
   and can be identified by a low LFBC value in the region where it deviates
   from the bundle. Figure adapted from :footcite:p:`Portegies2015b`.

Here we implement FBC measures based on kernel density estimation in the
non-flat 5D position-orientation domain. First we compute the kernel density
estimator induced by the full lifted output (defined in the space of positions
and orientations) of the tractography. Then, the Local FBC (LFBC) is the
result of evaluating the estimator along each element of the lifted fiber.
A whole fiber measure, the relative FBC (RFBC), is calculated by the minimum
of the moving average LFBC along the fiber. Details of the computation of FBC
can be found in :footcite:p:`Portegies2015b`.

The FBC measures are evaluated on the Stanford HARDI dataset
(150 orientations, b=2000 $s/mm^2$) which is one of the standard example
datasets in DIPY_. 
""" import numpy as np from dipy.core.gradients import gradient_table from dipy.data import default_sphere, get_fnames from dipy.denoise.enhancement_kernel import EnhancementKernel from dipy.direction import ProbabilisticDirectionGetter, peaks_from_model from dipy.io.gradients import read_bvals_bvecs from dipy.io.image import load_nifti, load_nifti_data from dipy.reconst.csdeconv import ConstrainedSphericalDeconvModel, auto_response_ssst from dipy.reconst.shm import CsaOdfModel from dipy.tracking import utils from dipy.tracking.fbcmeasures import FBCMeasures from dipy.tracking.local_tracking import LocalTracking from dipy.tracking.stopping_criterion import ThresholdStoppingCriterion from dipy.tracking.streamline import Streamlines from dipy.viz import actor, window # Enables/disables interactive visualization interactive = False # Fix seed rng = np.random.default_rng(1) # Read data hardi_fname, hardi_bval_fname, hardi_bvec_fname = get_fnames(name="stanford_hardi") label_fname = get_fnames(name="stanford_labels") t1_fname = get_fnames(name="stanford_t1") data, affine = load_nifti(hardi_fname) labels = load_nifti_data(label_fname) t1_data = load_nifti_data(t1_fname) bvals, bvecs = read_bvals_bvecs(hardi_bval_fname, hardi_bvec_fname) gtab = gradient_table(bvals, bvecs=bvecs) # Select a relevant part of the data (left hemisphere) # Coordinates given in x bounds, y bounds, z bounds dshape = data.shape[:-1] xa, xb, ya, yb, za, zb = [15, 42, 10, 65, 18, 65] data_small = data[xa:xb, ya:yb, za:zb] selectionmask = np.zeros(dshape, "bool") selectionmask[xa:xb, ya:yb, za:zb] = True ############################################################################### # The data is first fitted to the Constant Solid Angle (CDA) ODF Model. CSA is # a good choice to estimate general fractional anisotropy (GFA), which the # stopping criterion can use to restrict fiber tracking to those areas where # the ODF shows significant restricted diffusion, thus creating a # region-of-interest in which the computations are done. # Perform CSA csa_model = CsaOdfModel(gtab, sh_order_max=6) csa_peaks = peaks_from_model( csa_model, data, default_sphere, relative_peak_threshold=0.6, min_separation_angle=45, mask=selectionmask, ) # Stopping Criterion stopping_criterion = ThresholdStoppingCriterion(csa_peaks.gfa, 0.25) ############################################################################### # In order to perform probabilistic fiber tracking we first fit the data to the # Constrained Spherical Deconvolution (CSD) model in DIPY. This model # represents each voxel in the data set as a collection of small white matter # fibers with different orientations. The density of fibers along each # orientation is known as the Fiber Orientation Distribution (FOD), used in # the fiber tracking. # Perform CSD on the original data response, ratio = auto_response_ssst(gtab, data, roi_radii=10, fa_thr=0.7) csd_model = ConstrainedSphericalDeconvModel(gtab, response) csd_fit = csd_model.fit(data_small) csd_fit_shm = np.pad( csd_fit.shm_coeff, ((xa, dshape[0] - xb), (ya, dshape[1] - yb), (za, dshape[2] - zb), (0, 0)), "constant", ) # Probabilistic direction getting for fiber tracking prob_dg = ProbabilisticDirectionGetter.from_shcoeff( csd_fit_shm, max_angle=30.0, sphere=default_sphere ) ############################################################################### # The optic radiation is reconstructed by tracking fibers from the calcarine # sulcus (visual cortex V1) to the lateral geniculate nucleus (LGN). 
We seed
# from the calcarine sulcus by selecting a region-of-interest (ROI) cube of
# dimensions 3x3x3 voxels.

# Set a seed region for tractography.
mask = np.zeros(data.shape[:-1], "bool")
rad = 3
mask[26 - rad : 26 + rad, 29 - rad : 29 + rad, 31 - rad : 31 + rad] = True
seeds = utils.seeds_from_mask(mask, affine, density=[4, 4, 4])

###############################################################################
# Local Tracking is used for probabilistic tractography which takes the
# direction getter along with the stopping criterion and seeds as input.

# Perform tracking using Local Tracking
streamlines_generator = LocalTracking(
    prob_dg, stopping_criterion, seeds, affine, step_size=0.5
)

# Compute streamlines.
streamlines = Streamlines(streamlines_generator)

###############################################################################
# In order to select only the fibers that enter into the LGN, another ROI is
# created from a cube of size 5x5x5 voxels. The ``near_roi`` function is used
# to find the fibers that traverse through this ROI.

# Set a mask for the lateral geniculate nucleus (LGN)
mask_lgn = np.zeros(data.shape[:-1], "bool")
rad = 5
mask_lgn[35 - rad : 35 + rad, 42 - rad : 42 + rad, 28 - rad : 28 + rad] = True

# Select all the fibers that enter the LGN and discard all others
filtered_fibers2 = utils.near_roi(streamlines, affine, mask_lgn, tol=1.8)
sfil = []
for i in range(len(streamlines)):
    if filtered_fibers2[i]:
        sfil.append(streamlines[i])
streamlines = Streamlines(sfil)

###############################################################################
# Inspired by :footcite:p:`Rodrigues2010`, a lookup-table is created, containing
# rotated versions of the fiber propagation kernel :math:`P_t`
# :footcite:p:`Duits2011` rotated over a discrete set of orientations. See the
# :ref:`sphx_glr_examples_built_contextual_enhancement_contextual_enhancement.py`
# example for more details regarding the kernel. In order to ensure
# rotationally invariant processing, the discrete orientations are required
# to be equally distributed over a sphere. By default, a sphere with 100
# directions is obtained from electrostatic repulsion in DIPY.

# Compute lookup table
D33 = 1.0
D44 = 0.02
t = 1
k = EnhancementKernel(D33, D44, t)

###############################################################################
# The FBC measures are now computed, taking the tractography results and the
# lookup tables as input.

# Apply FBC measures
fbc = FBCMeasures(streamlines, k)

###############################################################################
# After calculating the FBC measures, a threshold can be chosen on the relative
# FBC (RFBC) in order to remove spurious fibers. Recall that the relative FBC
# (RFBC) is calculated by the minimum of the moving average LFBC along the
# fiber. In this example we show the results for threshold 0 (i.e. all fibers
# are included) and 0.125 (removing the most spurious fibers, i.e. those whose
# RFBC falls below that value).

# Calculate LFBC for original fibers
fbc_sl_orig, clrs_orig, rfbc_orig = fbc.get_points_rfbc_thresholded(0, emphasis=0.01)

# Apply a threshold on the RFBC to remove spurious fibers
fbc_sl_thres, clrs_thres, rfbc_thres = fbc.get_points_rfbc_thresholded(
    0.125, emphasis=0.01
)

###############################################################################
# The results of FBC measures are visualized, showing the original fibers
# colored by LFBC (see :ref:`this figure <optic_radiation_before_cleaning>`),
# and the fibers after the cleaning procedure via RFBC thresholding (see
# :ref:`this other figure <optic_radiation_after_cleaning>`). Before
# rendering, we take a quick look at the distribution of the per-fiber RFBC
# values.
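###############################################################################
# A histogram of RFBC values can help when choosing a threshold. This small,
# purely illustrative sketch assumes, as the variable name suggests, that
# ``rfbc_orig`` holds one RFBC value per streamline:

import matplotlib.pyplot as plt

plt.figure("RFBC histogram")
plt.hist(np.asarray(rfbc_orig), bins=50)
plt.xlabel("RFBC")
plt.ylabel("# streamlines")
plt.savefig("rfbc_histogram.png")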
# Create scene
scene = window.Scene()

# Original lines colored by LFBC
lineactor = actor.line(fbc_sl_orig, colors=np.vstack(clrs_orig), linewidth=0.2)
scene.add(lineactor)

# Horizontal (axial) slice of T1 data
vol_actor1 = actor.slicer(t1_data, affine=affine)
vol_actor1.display(z=20)
scene.add(vol_actor1)

# Vertical (sagittal) slice of T1 data
vol_actor2 = actor.slicer(t1_data, affine=affine)
vol_actor2.display(x=35)
scene.add(vol_actor2)

# Show original fibers
scene.set_camera(position=(-264, 285, 155), focal_point=(0, -14, 9), view_up=(0, 0, 1))
window.record(scene=scene, n_frames=1, out_path="OR_before.png", size=(900, 900))
if interactive:
    window.show(scene)

# Show thresholded fibers
scene.rm(lineactor)
scene.add(actor.line(fbc_sl_thres, colors=np.vstack(clrs_thres), linewidth=0.2))
window.record(scene=scene, n_frames=1, out_path="OR_after.png", size=(900, 900))
if interactive:
    window.show(scene)

###############################################################################
# .. _optic_radiation_before_cleaning:
#
# .. rst-class:: centered small fst-italic fw-semibold
#
# The optic radiation obtained through probabilistic tractography colored by
# local fiber to bundle coherence.
#
#
# .. _optic_radiation_after_cleaning:
#
# .. rst-class:: centered small fst-italic fw-semibold
#
# The tractography result is cleaned (second figure) by removing fibers with a
# relative FBC (RFBC) lower than the threshold :math:`\tau = 0.125`.
#
#
# Acknowledgments
# ---------------
# The techniques are developed in close collaboration with Pauly Ossenblok of
# the Academic Center of Epileptology Kempenhaeghe & Maastricht UMC+.
#
# References
# ----------
#
# .. footbibliography::
#
dipy-1.11.0/doc/examples/gradients_spheres.py000066400000000000000000000120111476546756600212320ustar00rootroot00000000000000"""
=====================
Gradients and Spheres
=====================

This example shows how you can create gradient tables and sphere objects
using DIPY_.

Usually, as we saw in
:ref:`sphx_glr_examples_built_quick_start_quick_start.py`, you load your
b-values and b-vectors from disk and then you can create your own gradient
table. But this time let's say that you are an MR physicist and you want to
design a new gradient scheme or you are a scientist who wants to simulate
many different gradient schemes.

Now let's assume that you are interested in creating a multi-shell
acquisition with two shells, one at b=1000 $s/mm^2$ and one at b=2500
$s/mm^2$. For both shells let's say that we want a specific number of
gradients (64) and we want to have the points on the sphere evenly
distributed. This is possible using ``disperse_charges``, which is an
implementation of electrostatic repulsion :footcite:t:`Jones1999`.

Let's start by importing the necessary modules.
"""

import numpy as np

from dipy.core.gradients import gradient_table
from dipy.core.sphere import HemiSphere, Sphere, disperse_charges
from dipy.viz import actor, window

###############################################################################
# We can first create some random points on a ``HemiSphere`` using spherical
# polar coordinates.

rng = np.random.default_rng()
n_pts = 64
theta = np.pi * rng.random(n_pts)
phi = 2 * np.pi * rng.random(n_pts)
hsph_initial = HemiSphere(theta=theta, phi=phi)

###############################################################################
# Next, we call ``disperse_charges`` which will iteratively move the points so
# that the electrostatic potential energy is minimized.
hsph_updated, potential = disperse_charges(hsph_initial, 5000) ############################################################################### # In ``hsph_updated`` we have the updated ``HemiSphere`` with the points nicely # distributed on the hemisphere. Let's visualize them. # Enables/disables interactive visualization interactive = False scene = window.Scene() scene.SetBackground(1, 1, 1) scene.add(actor.point(hsph_initial.vertices, window.colors.red, point_radius=0.05)) scene.add(actor.point(hsph_updated.vertices, window.colors.green, point_radius=0.05)) window.record(scene=scene, out_path="initial_vs_updated.png", size=(300, 300)) if interactive: window.show(scene) ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # Illustration of electrostatic repulsion of red points which become # green points. # # # We can also create a sphere from the hemisphere and show it in the # following way. sph = Sphere(xyz=np.vstack((hsph_updated.vertices, -hsph_updated.vertices))) scene.clear() scene.add(actor.point(sph.vertices, window.colors.green, point_radius=0.05)) window.record(scene=scene, out_path="full_sphere.png", size=(300, 300)) if interactive: window.show(scene) ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # Full sphere. # # # It is time to create the Gradients. For this purpose we will use the # function ``gradient_table`` and fill it with the ``hsph_updated`` vectors # that we created above. vertices = hsph_updated.vertices values = np.ones(vertices.shape[0]) ############################################################################### # We need two stacks of ``vertices``, one for every shell, and we need two sets # of b-values, one at 1000 $s/mm^2$, and one at 2500 $s/mm^2$, as we discussed # previously. bvecs = np.vstack((vertices, vertices)) bvals = np.hstack((1000 * values, 2500 * values)) ############################################################################### # We can also add some b0s. Let's add one at the beginning and one at the end. bvecs = np.insert(bvecs, (0, bvecs.shape[0]), np.array([0, 0, 0]), axis=0) bvals = np.insert(bvals, (0, bvals.shape[0]), 0) print(bvals) print(bvecs) ############################################################################### # Both b-values and b-vectors look correct. Let's now create the # ``GradientTable``. gtab = gradient_table(bvals, bvecs=bvecs) scene.clear() ############################################################################### # We can also visualize the gradients. Let's color the first shell blue and # the second shell cyan. colors_b1000 = window.colors.blue * np.ones(vertices.shape) colors_b2500 = window.colors.cyan * np.ones(vertices.shape) colors = np.vstack((colors_b1000, colors_b2500)) colors = np.insert(colors, (0, colors.shape[0]), np.array([0, 0, 0]), axis=0) colors = np.ascontiguousarray(colors) scene.add(actor.point(gtab.gradients, colors, point_radius=100)) window.record(scene=scene, out_path="gradients.png", size=(300, 300)) if interactive: window.show(scene) ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # Diffusion gradients. # # # References # ---------- # # .. 
footbibliography::
dipy-1.11.0/doc/examples/histology_resdnn.py000066400000000000000000000110451476546756600211210ustar00rootroot00000000000000"""
==================================================
Local reconstruction using the Histological ResDNN
==================================================

This example demonstrates a data-driven approach to modeling the non-linear
mapping between observed DW-MRI signals and ground truth structures, using
sequential deep neural network regression with a residual block deep neural
network (ResDNN) :footcite:p:`Nath2018`, :footcite:p:`Nath2019`.

Training was performed on two 3-D histology datasets of squirrel monkey brains
and validated on a third. A second validation was performed on HCP datasets.
"""

import os

import numpy as np
import scipy.ndimage as ndi

from dipy.core.gradients import gradient_table
from dipy.data import get_fnames, get_sphere
from dipy.io.gradients import read_bvals_bvecs
from dipy.io.image import load_nifti, save_nifti
from dipy.nn.histo_resdnn import HistoResDNN
from dipy.reconst.shm import sh_to_sf_matrix
from dipy.segment.mask import median_otsu
from dipy.viz import actor, window

# Disable oneDNN warning
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "2"

###############################################################################
# This ResDNN model requires single-shell data with one or more b0s. The data
# is fetched and a gradient table is constructed from the bvals/bvecs.

# Fetch DWI and GTAB
hardi_fname, hardi_bval_fname, hardi_bvec_fname = get_fnames(name="stanford_hardi")
data, affine = load_nifti(hardi_fname)
data = np.squeeze(data)

bvals, bvecs = read_bvals_bvecs(hardi_bval_fname, hardi_bvec_fname)
gtab = gradient_table(bvals, bvecs=bvecs)

###############################################################################
# To accelerate computation, a brain mask must be computed. The resulting mask
# is saved for visual inspection.

mean_b0 = data[..., gtab.b0s_mask]
mean_b0 = np.mean(mean_b0, axis=-1)
_, mask = median_otsu(mean_b0)
mask_labeled, _ = ndi.label(mask)
unique, count = np.unique(mask_labeled, return_counts=True)
val = unique[np.argmax(count[1:]) + 1]
mask[mask_labeled != val] = 0
save_nifti("mask.nii.gz", mask.astype(np.uint8), affine)

###############################################################################
# We use a ResDNN with ``sh_order_max`` 8 (the default) and load the
# appropriate weights. We then fit the data and save the resulting fODF.

resdnn_model = HistoResDNN(verbose=True)
resdnn_model.fetch_default_weights()
predicted_sh = resdnn_model.predict(data, gtab, mask=mask)
save_nifti("predicted_sh.nii.gz", predicted_sh, affine)

###############################################################################
# Preparing the scene using FURY. The ODF slicer and the background image are
# added as actors and a mid-coronal slice is selected. But first, a quick
# sanity check on the acquisition.
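###############################################################################
# As stated above, ResDNN expects a single-shell acquisition, so all non-b0
# b-values should be roughly equal. A quick, illustrative check (not part of
# DIPY's API):

nonzero_bvals = gtab.bvals[~gtab.b0s_mask]
assert np.ptp(nonzero_bvals) < 50, "expected a single-shell acquisition"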
interactive = False
sphere = get_sphere(name="repulsion724")

B, invB = sh_to_sf_matrix(
    sphere, sh_order_max=8, basis_type=resdnn_model.basis_type, smooth=0.0006
)

fod_spheres = actor.odf_slicer(
    predicted_sh, sphere=sphere, scale=0.6, norm=True, mask=mask, B_matrix=B
)

mean_sh = np.mean(predicted_sh, axis=-1)
background_img = actor.slicer(mean_sh, opacity=0.5, interpolation="nearest")

# Select the mid coronal slice for slicer
ori_shape = mask.shape[0:3]
slice_index = ori_shape[1] // 2
fod_spheres.display_extent(0, ori_shape[0], slice_index, slice_index, 0, ori_shape[2])
background_img.display_extent(
    0, ori_shape[0], slice_index, slice_index, 0, ori_shape[2]
)

scene = window.Scene()
scene.add(fod_spheres)
scene.add(background_img)

###############################################################################
# Adjusting the camera for a nice zoom-in of the corpus callosum (coronal) to
# screenshot the resulting prediction. The fODFs should be aligned with the
# curvature of the corpus callosum and a single-fiber population should be
# visible.

camera = {
    "zoom_factor": 0.85,
    "view_position": np.array(
        [(ori_shape[0] - 1) / 2.0, max(ori_shape), (ori_shape[2] - 1) / 1.5]
    ),
    "view_center": np.array(
        [(ori_shape[0] - 1) / 2.0, slice_index, (ori_shape[2] - 1) / 1.5]
    ),
    "up_vector": np.array([0.0, 0.0, 1.0]),
}

scene.set_camera(
    position=camera["view_position"],
    focal_point=camera["view_center"],
    view_up=camera["up_vector"],
)
scene.zoom(camera["zoom_factor"])

if interactive:
    window.show(scene, reset_camera=False)

window.record(
    scene=scene, out_path="pred_fODF.png", size=(1000, 1000), reset_camera=False
)

###############################################################################
# .. rst-class:: centered small fst-italic fw-semibold
#
# Visualization of the predicted fODF.
#
#
#
# References
# ----------
#
# .. footbibliography::
#
dipy-1.11.0/doc/examples/kfold_xval.py000066400000000000000000000140151476546756600176600ustar00rootroot00000000000000"""
============================================
K-fold cross-validation for model comparison
============================================

Different models of diffusion MRI can be compared based on their accuracy in
fitting the diffusion signal. Here, we demonstrate this by comparing two
models: the diffusion tensor model (DTI) and Constrained Spherical
Deconvolution (CSD). These models differ from each other substantially. DTI
approximates the diffusion pattern as a 3D Gaussian distribution, and has only
6 free parameters. CSD, on the other hand, fits many more parameters. The
models are also not nested, so they cannot be compared using the
log-likelihood ratio.

A general way to perform model comparison is cross-validation
:footcite:p:`Hastie2009`. In this method, a model is fit to some of the data (a
*learning set*) and the model is then used to predict a held-out set (a
*testing set*). The model predictions can then be compared to estimate
prediction error on the held-out set. This method has been used for comparison
of models such as DTI and CSD :footcite:p:`Rokem2014`, and has the advantage
that the comparison is impervious to differences in the number of parameters
in the model, and that it can be used to compare models that are not nested.

In DIPY_, we include an implementation of k-fold cross-validation. In this
method, the data is divided into $k$ different segments. In each iteration
$\frac{1}{k}th$ of the data is held out and the model is fit to the other
$\frac{k-1}{k}$ parts of the data. A prediction of the held-out data is done
and recorded.
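For example, with $k=2$ each iteration fits the model to one half of the
measurements and predicts the other half, so that after the two iterations
every measurement has been predicted exactly once.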
At the end of $k$ iterations a prediction of all of the data will have been
conducted, and this can be compared directly to all of the data.

First, we import the modules needed for this example. In particular, the
:mod:`reconst.cross_validation` module implements k-fold cross-validation.
"""

import matplotlib.pyplot as plt
import numpy as np
import scipy.stats as stats

from dipy.core.gradients import gradient_table
import dipy.data as dpd
from dipy.io.gradients import read_bvals_bvecs
from dipy.io.image import load_nifti
import dipy.reconst.cross_validation as xval
import dipy.reconst.csdeconv as csd
import dipy.reconst.dti as dti

np.random.seed(2014)

###############################################################################
# We fetch some data and select a couple of voxels to perform comparisons on.
# One lies in the corpus callosum (cc), while the other is in the centrum
# semiovale (cso), a part of the brain known to contain multiple crossing
# white matter fiber populations.

hardi_fname, hardi_bval_fname, hardi_bvec_fname = dpd.get_fnames(name="stanford_hardi")
data, affine = load_nifti(hardi_fname)

bvals, bvecs = read_bvals_bvecs(hardi_bval_fname, hardi_bvec_fname)
gtab = gradient_table(bvals, bvecs=bvecs)

cc_vox = data[40, 70, 38]
cso_vox = data[30, 76, 38]

###############################################################################
# We initialize each kind of model:

dti_model = dti.TensorModel(gtab)
response, ratio = csd.auto_response_ssst(gtab, data, roi_radii=10, fa_thr=0.7)
csd_model = csd.ConstrainedSphericalDeconvModel(gtab, response)

###############################################################################
# Next, we perform cross-validation for each kind of model, comparing model
# predictions to the diffusion MRI data in each one of these voxels.
#
# Note that we use 2-fold cross-validation, which means that in each iteration,
# the model will be fit to half of the data, and used to predict the other
# half.

rng = np.random.default_rng(2014)

dti_cc = xval.kfold_xval(dti_model, cc_vox, 2, rng=rng)
csd_cc = xval.kfold_xval(csd_model, cc_vox, 2, response, rng=rng)
dti_cso = xval.kfold_xval(dti_model, cso_vox, 2, rng=rng)
csd_cso = xval.kfold_xval(csd_model, cso_vox, 2, response, rng=rng)

###############################################################################
# We plot a scatter plot of the data with the model predictions in each of
# these voxels, focusing only on the diffusion-weighted measurements (each
# point corresponds to a different gradient direction). The two models are
# compared in each sub-plot (blue=DTI, red=CSD).

fig, ax = plt.subplots(1, 2)
fig.set_size_inches([12, 6])
ax[0].plot(
    cc_vox[gtab.b0s_mask == 0],
    dti_cc[gtab.b0s_mask == 0],
    "o",
    color="b",
    label="DTI in CC",
)
ax[0].plot(
    cc_vox[gtab.b0s_mask == 0],
    csd_cc[gtab.b0s_mask == 0],
    "o",
    color="r",
    label="CSD in CC",
)
ax[1].plot(
    cso_vox[gtab.b0s_mask == 0],
    dti_cso[gtab.b0s_mask == 0],
    "o",
    color="b",
    label="DTI in CSO",
)
ax[1].plot(
    cso_vox[gtab.b0s_mask == 0],
    csd_cso[gtab.b0s_mask == 0],
    "o",
    color="r",
    label="CSD in CSO",
)
ax[0].legend(loc="upper left")
ax[1].legend(loc="upper left")
for this_ax in ax:
    this_ax.set_xlabel("Data (relative to S0)")
    this_ax.set_ylabel("Model prediction (relative to S0)")
fig.savefig("model_predictions.png")

###############################################################################
# .. rst-class:: centered small fst-italic fw-semibold
#
# Model predictions.
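#
# (A note on the metric used below: squaring the Pearson correlation measures
# how well a linear relationship holds between data and prediction; unlike the
# coefficient of determination, it is not penalized by a constant offset or a
# global rescaling of the predictions.)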
#
#
#
# We can also quantify the goodness of fit of the models by calculating an
# R-squared score:

cc_dti_r2 = (
    stats.pearsonr(cc_vox[gtab.b0s_mask == 0], dti_cc[gtab.b0s_mask == 0])[0] ** 2
)
cc_csd_r2 = (
    stats.pearsonr(cc_vox[gtab.b0s_mask == 0], csd_cc[gtab.b0s_mask == 0])[0] ** 2
)

cso_dti_r2 = (
    stats.pearsonr(cso_vox[gtab.b0s_mask == 0], dti_cso[gtab.b0s_mask == 0])[0] ** 2
)
cso_csd_r2 = (
    stats.pearsonr(cso_vox[gtab.b0s_mask == 0], csd_cso[gtab.b0s_mask == 0])[0] ** 2
)

print(
    "Corpus callosum\n"
    f"DTI R2 : {cc_dti_r2}\n"
    f"CSD R2 : {cc_csd_r2}\n"
    "\n"
    "Centrum Semiovale\n"
    f"DTI R2 : {cso_dti_r2}\n"
    f"CSD R2 : {cso_csd_r2}\n"
)

###############################################################################
# As you can see, DTI is a pretty good model for describing the signal in the
# CC, while CSD is much better in describing the signal in regions of multiple
# crossing fibers.
#
#
# References
# ----------
#
# .. footbibliography::
#
dipy-1.11.0/doc/examples/linear_fascicle_evaluation.py000066400000000000000000000311421476546756600230610ustar00rootroot00000000000000"""
=================================
Linear fascicle evaluation (LiFE)
=================================

Evaluating the results of tractography algorithms is one of the biggest
challenges for diffusion MRI. One proposal for evaluation of tractography
results is to use a forward model that predicts the signal from each of a set
of streamlines, and then fit a linear model to these simultaneous predictions
:footcite:p:`Pestilli2014`.

We will use streamlines generated using probabilistic tracking on CSA peaks.
For brevity, we will include in this example only streamlines going through
the corpus callosum connecting left to right superior frontal cortex. The
process of tracking and finding these streamlines is fully demonstrated in the
:ref:`sphx_glr_examples_built_streamline_analysis_streamline_tools.py`
example. Rather than running that example here, we will fetch the streamlines
it produces from the DIPY data repository and read them from file:
"""

from os.path import join as pjoin

import matplotlib
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1 import AxesGrid
import numpy as np

# We'll need to know where the corpus callosum is from these variables:
from dipy.core.gradients import gradient_table
import dipy.core.optimize as opt
from dipy.data import fetch_stanford_tracks, get_fnames
from dipy.io.gradients import read_bvals_bvecs
from dipy.io.image import load_nifti, load_nifti_data
from dipy.io.streamline import load_trk
import dipy.tracking.life as life
from dipy.viz import actor, colormap as cmap, window

hardi_fname, hardi_bval_fname, hardi_bvec_fname = get_fnames(name="stanford_hardi")
label_fname = get_fnames(name="stanford_labels")
t1_fname = get_fnames(name="stanford_t1")

data, affine, hardi_img = load_nifti(hardi_fname, return_img=True)
labels = load_nifti_data(label_fname)
t1_data = load_nifti_data(t1_fname)
bvals, bvecs = read_bvals_bvecs(hardi_bval_fname, hardi_bvec_fname)
gtab = gradient_table(bvals, bvecs=bvecs)

cc_slice = labels == 2

# Let's now fetch a set of streamlines from the Stanford HARDI dataset.
# Those streamlines were generated during the
# :ref:`sphx_glr_examples_built_streamline_analysis_streamline_tools.py` example.
# Read the candidates from file in voxel space: streamlines_files = fetch_stanford_tracks() lr_superiorfrontal_path = pjoin(streamlines_files[1], "hardi-lr-superiorfrontal.trk") candidate_sl_sft = load_trk(lr_superiorfrontal_path, "same") candidate_sl_sft.to_vox() candidate_sl = candidate_sl_sft.streamlines ############################################################################### # The streamlines that are entered into the model are termed 'candidate # streamlines' (or a 'candidate connectome'): # # # Let's visualize the initial candidate group of streamlines in 3D, relative # to the anatomical structure of this brain: # Enables/disables interactive visualization interactive = False candidate_streamlines_actor = actor.streamtube( candidate_sl, colors=cmap.line_colors(candidate_sl) ) cc_ROI_actor = actor.contour_from_roi(cc_slice, color=(1.0, 1.0, 0.0), opacity=0.5) vol_actor = actor.slicer(t1_data) vol_actor.display(x=40) vol_actor2 = vol_actor.copy() vol_actor2.display(z=35) # Add display objects to canvas scene = window.Scene() scene.add(candidate_streamlines_actor) scene.add(cc_ROI_actor) scene.add(vol_actor) scene.add(vol_actor2) window.record(scene=scene, n_frames=1, out_path="life_candidates.png", size=(800, 800)) if interactive: window.show(scene) ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # Candidate connectome before LiFE optimization # # # # Next, we initialize a LiFE model, using the ``dipy.tracking.life`` # module (imported above), which contains the classes and functions that # implement the model: fiber_model = life.FiberModel(gtab) ############################################################################### # Since we read the streamlines from a file, already in the voxel space, we do # not need to transform them into this space. Otherwise, if the streamline # coordinates were in the world space (relative to the scanner iso-center, or # relative to the mid-point of the AC-PC-connecting line), we would use this:: # # inv_affine = np.linalg.inv(hardi_img.affine) # # the inverse transformation from world space to the voxel space as the affine # for the following model fit. # # The next step is to fit the model, producing a ``FiberFit`` class instance # that stores the data, as well as the results of the fitting procedure. # # The LiFE model posits that the signal in the diffusion MRI volume can be # explained by the streamlines, by the equation # # .. math:: # # y = X\beta # # where $y$ is the diffusion MRI signal, $\beta$ is a vector of weights on the # streamlines and $X$ is a design matrix. This matrix has the dimensions $m$ by # $n$, where $m=n_{voxels} \cdot n_{directions}$, $n$ is the number of # candidate streamlines, and $n_{voxels}$ is the number # of voxels in the ROI that contains the streamlines considered in this model. # The $i^{th}$ column of the matrix contains the expected contributions of the # $i^{th}$ streamline (arbitrarily ordered) to each of the voxels. $X$ is a # sparse matrix, because each streamline traverses only a small percentage of # the voxels. The expected contributions of each streamline are calculated # using a forward model, where each node of the streamline is modeled as a # cylindrical fiber compartment with Gaussian diffusion, using the diffusion # tensor model. See :footcite:p:`Pestilli2014` for more detail on the model, and # variations of this model.
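###############################################################################
# To build some intuition for the equation above, the following toy sketch
# (an illustration added here, not part of the LiFE implementation) sets up a
# small dense design matrix with known non-negative weights and recovers them
# with SciPy's non-negative least-squares solver. All names below are
# hypothetical, and DIPY's actual fit operates on a large sparse matrix with
# its own optimizer.

from scipy.optimize import nnls

toy_rng = np.random.default_rng(42)
X_toy = np.abs(toy_rng.standard_normal((30, 5)))  # 30 "measurements", 5 "streamlines"
beta_true = np.array([1.0, 0.0, 0.5, 0.0, 2.0])  # two redundant "streamlines"
y_toy = X_toy @ beta_true
beta_hat, _ = nnls(X_toy, y_toy)
print("Recovered toy weights:", np.round(beta_hat, 3))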
fiber_fit = fiber_model.fit(data, candidate_sl, affine=np.eye(4)) ############################################################################### # The ``FiberFit`` class instance holds various properties of the model fit. # For example, it has the weights $\beta$, that are assigned to each # streamline. In most cases, a tractography through some region will include # redundant streamlines, and these streamlines will have $\beta_i$ that are 0. fig, ax = plt.subplots(1) ax.hist(fiber_fit.beta, bins=100, histtype="step") ax.set_xlabel("Fiber weights") ax.set_ylabel("# fibers") fig.savefig("beta_histogram.png") ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # LiFE streamline weights # # # # We use $\beta$ to filter out these redundant streamlines, and generate an # optimized group of streamlines: optimized_sl = [np.vstack(candidate_sl)[np.where(fiber_fit.beta > 0)[0]]] scene = window.Scene() scene.add(actor.streamtube(optimized_sl, colors=cmap.line_colors(optimized_sl))) scene.add(cc_ROI_actor) scene.add(vol_actor) window.record(scene=scene, n_frames=1, out_path="life_optimized.png", size=(800, 800)) if interactive: window.show(scene) ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # Streamlines selected via LiFE optimization # # # # # The new set of streamlines should do well in fitting the data, and redundant # streamlines have presumably been removed (in this case, about 50% of the # streamlines). # # But how well does the model do in explaining the diffusion data? We can # quantify that: the ``FiberFit`` class instance has a `predict` method, which # can be used to invert the model and predict back either the data that was # used to fit the model, or other unseen data (e.g. in cross-validation, see # :ref:`sphx_glr_examples_built_reconstruction_kfold_xval.py`). # # Without arguments, the ``.predict()`` method will predict the diffusion # signal for the same gradient table that was used in the fit data, but # ``gtab`` and ``S0`` keyword arguments can be used to predict for other # acquisition schemes and other baseline non-diffusion-weighted signals. model_predict = fiber_fit.predict() ############################################################################### # We will focus on the error in prediction of the diffusion-weighted data, and # calculate the root of the mean squared error. model_error = model_predict - fiber_fit.data model_rmse = np.sqrt(np.mean(model_error[:, 10:] ** 2, -1)) ############################################################################### # As a baseline against which we can compare, we calculate another error term. # In this case, we assume that the weight for each streamline is equal # to zero. This produces the naive prediction of the mean of the signal in each # voxel. 
beta_baseline = np.zeros(fiber_fit.beta.shape[0]) pred_weighted = np.reshape( opt.spdot(fiber_fit.life_matrix, beta_baseline), (fiber_fit.vox_coords.shape[0], np.sum(~gtab.b0s_mask)), ) mean_pred = np.empty((fiber_fit.vox_coords.shape[0], gtab.bvals.shape[0])) S0 = fiber_fit.b0_signal ############################################################################### # Since the fitting is done in the demeaned S/S0 domain, we need # to add back the mean and then multiply by S0 in every voxel: mean_pred[..., gtab.b0s_mask] = S0[:, None] mean_pred[..., ~gtab.b0s_mask] = (pred_weighted + fiber_fit.mean_signal[:, None]) * S0[ :, None ] mean_error = mean_pred - fiber_fit.data mean_rmse = np.sqrt(np.mean(mean_error**2, -1)) ############################################################################### # First, we can compare the overall distribution of errors between these two # alternative models of the ROI. We show the distribution of differences in # error (improvement through model fitting, relative to the baseline model). # Here, positive values denote an improvement in error with model fit, relative # to without the model fit. fig, ax = plt.subplots(1) ax.hist(mean_rmse - model_rmse, bins=100, histtype="step") ax.text( 0.2, 0.9, f"Median RMSE, mean model: {np.median(mean_rmse):.2f}", horizontalalignment="left", verticalalignment="center", transform=ax.transAxes, ) ax.text( 0.2, 0.8, f"Median RMSE, LiFE: {np.median(model_rmse):.2f}", horizontalalignment="left", verticalalignment="center", transform=ax.transAxes, ) ax.set_xlabel("RMS Error") ax.set_ylabel("# voxels") fig.savefig("error_histograms.png") ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # Improvement in error with fitting of the LiFE model. # # # # # # Second, we can show the spatial distribution of the two error terms, # and of the improvement with the model fit: vol_model = np.ones(data.shape[:3]) * np.nan vol_model[ fiber_fit.vox_coords[:, 0], fiber_fit.vox_coords[:, 1], fiber_fit.vox_coords[:, 2] ] = model_rmse vol_mean = np.ones(data.shape[:3]) * np.nan vol_mean[ fiber_fit.vox_coords[:, 0], fiber_fit.vox_coords[:, 1], fiber_fit.vox_coords[:, 2] ] = mean_rmse vol_improve = np.ones(data.shape[:3]) * np.nan vol_improve[ fiber_fit.vox_coords[:, 0], fiber_fit.vox_coords[:, 1], fiber_fit.vox_coords[:, 2] ] = mean_rmse - model_rmse sl_idx = 49 fig = plt.figure() fig.subplots_adjust(left=0.05, right=0.95) ax = AxesGrid( fig, 111, nrows_ncols=(1, 3), label_mode="1", share_all=True, cbar_location="top", cbar_mode="each", cbar_size="10%", cbar_pad="5%", ) ax[0].matshow(np.rot90(t1_data[sl_idx, :, :]), cmap=matplotlib.cm.bone) im = ax[0].matshow(np.rot90(vol_model[sl_idx, :, :]), cmap=matplotlib.cm.hot) ax.cbar_axes[0].colorbar(im) ax[1].matshow(np.rot90(t1_data[sl_idx, :, :]), cmap=matplotlib.cm.bone) im = ax[1].matshow(np.rot90(vol_mean[sl_idx, :, :]), cmap=matplotlib.cm.hot) ax.cbar_axes[1].colorbar(im) ax[2].matshow(np.rot90(t1_data[sl_idx, :, :]), cmap=matplotlib.cm.bone) im = ax[2].matshow(np.rot90(vol_improve[sl_idx, :, :]), cmap=matplotlib.cm.RdBu) ax.cbar_axes[2].colorbar(im) for lax in ax: lax.set_xticks([]) lax.set_yticks([]) fig.savefig("spatial_errors.png") ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # Spatial distribution of error and improvement. 
# # # # # # This image demonstrates that in many places, fitting the LiFE model results # in substantial reduction of the error. # # Note that for full-brain tractographies *LiFE* can require large amounts of # memory. For detailed memory profiling of the algorithm, based on the # streamlines generated in # :ref:`sphx_glr_examples_built_fiber_tracking_tracking_probabilistic.py`, see # `this IPython notebook # `_. # # For the Matlab implementation of LiFE, head over to `Franco Pestilli's github # webpage `_. # # References # ---------- # # .. footbibliography:: # dipy-1.11.0/doc/examples/motion_correction.py000066400000000000000000000046621476546756600212700ustar00rootroot00000000000000""" ================================================= Between-volumes Motion Correction on DWI datasets ================================================= During a dMRI acquisition, subject motion is inevitable. This motion introduces misalignment between the N volumes of a dMRI dataset. A common way to solve this issue is to register each acquired volume to a reference b = 0 volume :footcite:p:`Jenkinson2001`. This preprocessing is a highly recommended step that should be executed before any dMRI dataset analysis. Let's import some essential functions. """ from dipy.align import motion_correction from dipy.core.gradients import gradient_table from dipy.data import get_fnames from dipy.io.gradients import read_bvals_bvecs from dipy.io.image import load_nifti, save_nifti ############################################################################### # We choose one of the datasets available in dipy_. However, you can # replace the following line with the path of your image. dwi_fname, dwi_bval_fname, dwi_bvec_fname = get_fnames(name="sherbrooke_3shell") ############################################################################### # We load the image and the affine of the image. The affine is the # transformation matrix which maps image coordinates to world (mm) coordinates. # We also load the b-values and b-vectors. data, affine = load_nifti(dwi_fname) bvals, bvecs = read_bvals_bvecs(dwi_bval_fname, dwi_bvec_fname) ############################################################################### # This dataset has 193 volumes. For demonstration purposes, we reduce the # number of volumes to 3. However, we do not recommend performing motion # correction with fewer than 10 volumes. data_small = data[..., :3] bvals_small = bvals[:3] bvecs_small = bvecs[:3] gtab = gradient_table(bvals_small, bvecs=bvecs_small) ############################################################################### # Start motion correction of our reduced DWI dataset (between-volumes motion # correction). data_corrected, reg_affines = motion_correction(data_small, gtab, affine=affine) ############################################################################### # Save our corrected DWI dataset to a new Nifti file. save_nifti( "motion_correction.nii.gz", data_corrected.get_fdata(), data_corrected.affine ) ############################################################################### # References # ---------- # # .. footbibliography:: # dipy-1.11.0/doc/examples/path_length_map.py000066400000000000000000000145621476546756600206700ustar00rootroot00000000000000""" ========================= Calculate Path Length Map ========================= We show how to calculate a Path Length Map for Anisotropic Radiation Therapy Contours given a set of streamlines and a region of interest (ROI).
The Path Length Map is a volume in which each voxel's value is the shortest distance along a streamline to a given region of interest (ROI). This map can be used to anisotropically modify radiation therapy treatment contours based on a tractography model of the local white matter anatomy, as described in :footcite:p:`Jordan2019`, by executing this tutorial with the gross tumor volume (GTV) as the ROI. .. note:: The background value is set to -1 by default Let's start by importing the necessary modules. """ import matplotlib as mpl from mpl_toolkits.axes_grid1 import AxesGrid import numpy as np from dipy.core.gradients import gradient_table from dipy.data import default_sphere, get_fnames from dipy.direction import peaks_from_model from dipy.io.gradients import read_bvals_bvecs from dipy.io.image import load_nifti, load_nifti_data, save_nifti from dipy.reconst.shm import CsaOdfModel from dipy.tracking import utils from dipy.tracking.local_tracking import LocalTracking from dipy.tracking.stopping_criterion import ThresholdStoppingCriterion from dipy.tracking.streamline import Streamlines from dipy.tracking.utils import path_length from dipy.viz import actor, colormap as cmap, window ############################################################################### # First, we need to generate some streamlines and visualize. For a more # complete description of these steps, please refer to the # :ref:`sphx_glr_examples_built_fiber_tracking_tracking_probabilistic.py` # and the Visualization of ROI Surface Rendered with Streamlines Tutorials. hardi_fname, hardi_bval_fname, hardi_bvec_fname = get_fnames(name="stanford_hardi") label_fname = get_fnames(name="stanford_labels") data, affine, hardi_img = load_nifti(hardi_fname, return_img=True) labels = load_nifti_data(label_fname) bvals, bvecs = read_bvals_bvecs(hardi_bval_fname, hardi_bvec_fname) gtab = gradient_table(bvals, bvecs=bvecs) white_matter = (labels == 1) | (labels == 2) csa_model = CsaOdfModel(gtab, sh_order_max=6) csa_peaks = peaks_from_model( csa_model, data, default_sphere, relative_peak_threshold=0.8, min_separation_angle=45, mask=white_matter, ) stopping_criterion = ThresholdStoppingCriterion(csa_peaks.gfa, 0.25) ############################################################################### # We will use an anatomically-based corpus callosum ROI as our seed mask to # demonstrate the method. In practice, this corpus callosum mask (labels == 2) # should be replaced with the desired ROI mask (e.g. gross tumor volume (GTV), # lesion mask, or electrode mask). # Make a corpus callosum seed mask for tracking seed_mask = labels == 2 seeds = utils.seeds_from_mask(seed_mask, affine, density=[1, 1, 1]) # Make a streamline bundle model of the corpus callosum ROI connectivity streamlines = LocalTracking(csa_peaks, stopping_criterion, seeds, affine, step_size=2) streamlines = Streamlines(streamlines) # Visualize the streamlines and the Path Length Map base ROI # (in this case also the seed ROI) streamlines_actor = actor.line(streamlines, colors=cmap.line_colors(streamlines)) surface_opacity = 0.5 surface_color = [0, 1, 1] seedroi_actor = actor.contour_from_roi( seed_mask, affine=affine, color=surface_color, opacity=surface_opacity ) scene = window.Scene() scene.add(streamlines_actor) scene.add(seedroi_actor) ############################################################################### # If you set interactive to True (below), the scene will pop up in an # interactive window. 
interactive = False if interactive: window.show(scene) window.record(scene=scene, n_frames=1, out_path="plm_roi_sls.png", size=(800, 800)) ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # A top view of corpus callosum streamlines with the blue transparent ROI in # the center. # # # # Now we calculate the Path Length Map using the corpus callosum streamline # bundle and corpus callosum ROI. # # **NOTE**: the mask used to seed the tracking does not have to be the Path # Length Map base ROI, as we do here, but it often makes sense for them to be # the same ROI if we want a map of the whole brain's distance back to our ROI. # (e.g. we could test a hypothesis about the motor system by making a # streamline bundle model of the cortico-spinal track (CST) and input a # lesion mask as our Path Length Map base ROI to restrict the analysis to # the CST) path_length_map_base_roi = seed_mask # calculate the WMPL wmpl = path_length(streamlines, affine, path_length_map_base_roi) # save the WMPL as a nifti save_nifti("example_cc_path_length_map.nii.gz", wmpl.astype(np.float32), affine) # get the T1 to show anatomical context of the WMPL t1_fname = get_fnames(name="stanford_t1") t1_data = load_nifti_data(t1_fname) fig = mpl.pyplot.figure() fig.subplots_adjust(left=0.05, right=0.95) ax = AxesGrid( fig, 111, nrows_ncols=(1, 3), cbar_location="right", cbar_mode="single", cbar_size="10%", cbar_pad="5%", ) ############################################################################### # We will mask our WMPL to ignore values less than zero because negative # numbers indicate no path back to the ROI was found in the provided # streamlines wmpl_show = np.ma.masked_where(wmpl < 0, wmpl) slx, sly, slz = [60, 50, 35] ax[0].matshow(np.rot90(t1_data[:, slx, :]), cmap=mpl.cm.bone) im = ax[0].matshow(np.rot90(wmpl_show[:, slx, :]), cmap=mpl.cm.cool, vmin=0, vmax=80) ax[1].matshow(np.rot90(t1_data[:, sly, :]), cmap=mpl.cm.bone) im = ax[1].matshow(np.rot90(wmpl_show[:, sly, :]), cmap=mpl.cm.cool, vmin=0, vmax=80) ax[2].matshow(np.rot90(t1_data[:, slz, :]), cmap=mpl.cm.bone) im = ax[2].matshow(np.rot90(wmpl_show[:, slz, :]), cmap=mpl.cm.cool, vmin=0, vmax=80) ax.cbar_axes[0].colorbar(im) for lax in ax: lax.set_xticks([]) lax.set_yticks([]) fig.savefig("Path_Length_Map.png") ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # Path Length Map showing the shortest distance, along a streamline, # from the corpus callosum ROI with the background set to -1. # # References # ---------- # # .. footbibliography:: # dipy-1.11.0/doc/examples/piesno.py000066400000000000000000000100521476546756600170210ustar00rootroot00000000000000""" ============================= Noise estimation using PIESNO ============================= Often, one is interested in estimating the noise in the diffusion signal. One of the methods to do this is the Probabilistic Identification and Estimation of Noise (PIESNO) framework :footcite:p:`Koay2009b`. Using this method, one can detect the standard deviation of the noise from Diffusion-Weighted Imaging (DWI). PIESNO also works with multiple channel DWI datasets that are acquired from N array coils for both SENSE and GRAPPA reconstructions. The PIESNO method works in two steps: 1) First, it finds voxels that are most likely background voxels. 
Intuitively, these voxels have very similar diffusion-weighted intensities (up to some noise) in the fourth dimension of the DWI dataset. White matter, gray matter or CSF voxels have diffusion intensities that vary quite a lot across different directions. 2) From these estimated background voxels and the input number of coils $N$, PIESNO finds the standard deviation $\sigma$ of the Gaussian noise in each of the $N$ coils that would have generated the observed Rician ($N = 1$) or non-central Chi ($N > 1$) distributed noise profile in the DWI datasets. PIESNO makes an important assumption: the Gaussian noise standard deviation is uniform, either across multiple slice locations or across multiple images of the same location. For the full details, please refer to the original paper. In this example, we will demonstrate the use of PIESNO with a 3-shell dataset. We start by importing necessary modules and functions: """ import matplotlib.pyplot as plt import numpy as np from dipy.data import get_fnames from dipy.denoise.noise_estimate import piesno from dipy.io.image import load_nifti, save_nifti ############################################################################### # Then we load the data and the affine: dwi_fname, dwi_bval_fname, dwi_bvec_fname = get_fnames(name="sherbrooke_3shell") data, affine = load_nifti(dwi_fname) ############################################################################### # Now that we have fetched a dataset, we must call PIESNO with the right number # of coils used to acquire this dataset. It is also important to know which # parallel reconstruction algorithm was used. Here, the data comes from a # GRAPPA reconstruction and was acquired with a 12-element head coil available # on the Siemens Tim Trio, for which the 12 coil elements are combined into 4 # groups of 3 coil elements each. The signal is therefore received through 4 # distinct groups of receiver channels, yielding N = 4. Had we used a GE # acquisition, we would have used N=1 even if multiple-channel coils are used, # because GE uses a SENSE reconstruction, which has a Rician noise nature and # thus N is always 1. sigma, mask = piesno(data, N=4, return_mask=True) axial = data[:, :, data.shape[2] // 2, 0].T axial_piesno = mask[:, :, data.shape[2] // 2].T fig, ax = plt.subplots(1, 2) ax[0].imshow(axial, cmap="gray", origin="lower") ax[0].set_title("Axial slice of the b=0 data") ax[1].imshow(axial_piesno, cmap="gray", origin="lower") ax[1].set_title("Background voxels from the data") for a in ax: a.set_axis_off() plt.savefig("piesno.png", bbox_inches="tight") ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # Showing the mid axial slice of the b=0 image (left) and estimated # background voxels (right) used to estimate the noise standard deviation. save_nifti("mask_piesno.nii.gz", mask.astype(np.uint8), affine) print("The noise standard deviation is sigma = ", sigma) print("The std of the background is =", np.std(data[mask[..., :].astype(bool)])) ############################################################################### # Here, we obtained a noise standard deviation of 7.26. For comparison, a # simple standard deviation of all voxels in the estimated mask (as done in the # previous example :ref:`sphx_glr_examples_built_preprocessing_snr_in_cc.py`) # gives a value of 6.1. # # References # ---------- # # ..
footbibliography:: # dipy-1.11.0/doc/examples/quick_start.py000066400000000000000000000125101476546756600200560ustar00rootroot00000000000000""" ========================= Getting started with DIPY ========================= In diffusion MRI (dMRI) we usually use three types of files: a Nifti file with the diffusion-weighted data, and two text files, one with the b-values and one with the b-vectors. In DIPY_ we provide tools to load and process these files and we also provide access to publicly available datasets for those who have not yet acquired their own datasets. Let's start with some necessary imports. """ from os.path import expanduser, join import matplotlib.pyplot as plt from dipy.core.gradients import gradient_table from dipy.data import fetch_sherbrooke_3shell from dipy.io import read_bvals_bvecs from dipy.io.image import load_nifti, save_nifti ############################################################################### # With the following commands we can download a dMRI dataset fetch_sherbrooke_3shell() ############################################################################### # By default these datasets will go in the ``.dipy`` folder inside your home # directory. Here is how you can access them. home = expanduser("~") ############################################################################### # ``dname`` holds the name of the directory that contains the 3 files. dname = join(home, ".dipy", "sherbrooke_3shell") ############################################################################### # Here, we show the complete filenames of the 3 files fdwi = join(dname, "HARDI193.nii.gz") print(fdwi) fbval = join(dname, "HARDI193.bval") print(fbval) fbvec = join(dname, "HARDI193.bvec") print(fbvec) ############################################################################### # Now that we have their filenames, we can start checking what these files # look like. # # Let's start by loading the dMRI dataset. For this purpose, we # use a Python library called nibabel_, which enables us to read and write # neuroimaging-specific file formats. data, affine, img = load_nifti(fdwi, return_img=True) ############################################################################### # ``data`` is a 4D array where the first 3 dimensions are the i, j, k voxel # coordinates and the last dimension corresponds to the non-weighted (S0) and # diffusion-weighted volumes. # # We can very easily check the size of ``data`` in the following way: print(data.shape) ############################################################################### # We can also check the dimensions of each voxel in the following way: print(img.header.get_zooms()[:3]) ############################################################################### # We can quickly visualize the results using matplotlib_. For example, # let's show here the middle axial slices of volume 0 and volume 10. axial_middle = data.shape[2] // 2 plt.figure("Showing the datasets") plt.subplot(1, 2, 1).set_axis_off() plt.imshow(data[:, :, axial_middle, 0].T, cmap="gray", origin="lower") plt.subplot(1, 2, 2).set_axis_off() plt.imshow(data[:, :, axial_middle, 10].T, cmap="gray", origin="lower") plt.savefig("data.png", bbox_inches="tight") plt.show() ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # Showing the middle axial slice without (left) and with (right) diffusion # weighting.
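###############################################################################
# Since the b-values and b-vectors are stored as plain text, we can peek at
# the raw files before parsing them with DIPY. This is an optional sanity
# check added here, reusing the ``fbval`` and ``fbvec`` paths defined above.

with open(fbval) as f:
    print(f.read()[:80])  # the first characters of the b-values file

with open(fbvec) as f:
    print(f.readline()[:80])  # the first line of the b-vectors file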
# # # # The next step is to load the b-values and b-vectors from the disk using # the function ``read_bvals_bvecs``. bvals, bvecs = read_bvals_bvecs(fbval, fbvec) ############################################################################### # In DIPY, we use an object called ``GradientTable`` which holds all the # acquisition specific parameters, e.g. b-values, b-vectors, timings and # others. To create this object you can use the function ``gradient_table``. gtab = gradient_table(bvals, bvecs=bvecs) ############################################################################### # Finally, you can use ``gtab`` (the GradientTable object) to show some # information about the acquisition parameters print(gtab.info) ############################################################################### # You can also see the b-values using: print(gtab.bvals) ############################################################################### # Or, for example the 10 first b-vectors using: print(gtab.bvecs[:10, :]) ############################################################################### # You can get the number of gradients (including the number of b0 values) # calling ``len`` on the ``GradientTable`` instance: print(len(gtab)) ############################################################################### # ``gtab`` can be used to tell what part of the data is the S0 volumes # (volumes which correspond to b-values of 0). S0s = data[:, :, :, gtab.b0s_mask] ############################################################################### # Here, we had only 1 S0 as we can verify by looking at the dimensions of S0s print(S0s.shape) ############################################################################### # Just, for fun let's save this in a new Nifti file. save_nifti("HARDI193_S0.nii.gz", S0s, affine) ############################################################################### # Now, that we learned how to load dMRI datasets we can start the analysis. # See example :ref:`sphx_glr_examples_built_reconstruction_reconst_dti.py` to # learn how to create FA maps. dipy-1.11.0/doc/examples/reconst_bingham.py000066400000000000000000000330431476546756600206730ustar00rootroot00000000000000""" ============================================= Reconstruction of Bingham Functions from ODFs ============================================= This example shows how to reconstruct Bingham functions from orientation distribution functions (ODFs). Reconstructed Bingham functions can be useful to quantify properties from ODFs such as fiber dispersion :footcite:p:`Riffert2014`, :footcite:p:`NetoHenriques2018`. To begin, let us import the relevant functions and load data consisting of 10 b0s and 150 non-b0s with a b-value of 2000s/mm2. 
""" from dipy.core.gradients import gradient_table from dipy.core.sphere import unit_icosahedron from dipy.data import get_fnames from dipy.io.gradients import read_bvals_bvecs from dipy.io.image import load_nifti from dipy.reconst.bingham import sf_to_bingham, sh_to_bingham from dipy.reconst.csdeconv import ConstrainedSphericalDeconvModel, auto_response_ssst from dipy.viz import actor, window from dipy.viz.plotting import image_mosaic hardi_fname, hardi_bval_fname, hardi_bvec_fname = get_fnames(name="stanford_hardi") data, affine = load_nifti(hardi_fname) bvals, bvecs = read_bvals_bvecs(hardi_bval_fname, hardi_bvec_fname) gtab = gradient_table(bvals, bvecs=bvecs) ############################################################################### # To properly fit Bingham functions, we recommend the use of a larger number of # directions to sample the ODFs. For this, we load a `sphere` object with 12 # vertices sampling a 3D sphere (the icosahedron). We further subdivide the # faces of this `sphere` representation five times, to get 10242 directions. sphere = unit_icosahedron.subdivide(n=5) nd = sphere.vertices.shape[0] print("The number of directions on the sphere is {}".format(nd)) ############################################################################### # Step 1. ODF estimation # ====================== # # Before fitting Bingham functions, we must reconstruct ODFs. In this example, # fiber ODFs (fODFs) will be reconstructed using the Constrained Spherical # Deconvolution (CSD) method :footcite:p:`Tournier2007`. For simplicity, we # will refer to fODFs as ODFs. # In the main tutorial of CSD (see # :ref:`sphx_glr_examples_built_reconstruction_reconst_csd.py`), several # strategies to define the fiber response function are discussed. Here, for # the sake of simplicity, we will use the response function estimates from a # local brain region: response, ratio = auto_response_ssst(gtab, data, roi_radii=10, fa_thr=0.7) # Let us now compute the ODFs using this response function: csd_model = ConstrainedSphericalDeconvModel(gtab, response, sh_order_max=8) ############################################################################### # For efficiency, we will only fit a small part of the data. data_small = data[20:50, 55:85, 38:39] csd_fit = csd_model.fit(data_small) ############################################################################### # Let us visualize the ODFs csd_odf = csd_fit.odf(sphere) interactive = False scene = window.Scene() fodf_spheres = actor.odf_slicer( csd_odf, sphere=sphere, scale=0.9, norm=False, colormap="plasma" ) scene.add(fodf_spheres) print("Saving the illustration as csd_odfs.png") window.record(scene, out_path="csd_odfs.png", size=(600, 600)) if interactive: window.show(scene) ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # Fiber ODFs (fODFs) reconstructed using Constrained Spherical # Deconvolution (CSD). For simplicity, we will refer to them just as ODFs. # # # Step 2. Bingham fitting and Metrics # =================================== # Now that we have some ODFs, let us fit the Bingham functions to them by using # the function `sf_to_bingham`: # A maximum search angle of 45 degrees is chosen arbitrarily for fitting # each ODF lobe. 
max_search_angle = 45 BinghamMetrics = sf_to_bingham(csd_odf, sphere, max_search_angle) ############################################################################### # The above function outputs a `BinghamMetrics` class instance, containing the # parameters of the fitted Bingham functions. The metrics of interest contained # in the `BinghamMetrics` class instance are: # # - amplitude_lobe (the maximum value for each lobe. Also known as Bingham's # f_0 parameter.) # - fd_lobe (fiber density: as defined in :footcite:p:`Riffert2014`, # one for each peak.) # - fs_lobe (fiber spread: as defined in :footcite:p:`Riffert2014`, # one for each peak.) # - fd_voxel (voxel fiber density: average of fd across all ODF lobes.) # - fs_voxel (voxel fiber spread: average of fs across all ODF lobes.) # - odi1_lobe (orientation dispersion index along Bingham's first dispersion # axis, one for each lobe. Defined in :footcite:p:`NetoHenriques2018` # and :footcite:p:`Zhang2012`.) # - odi2_lobe (orientation dispersion index along Bingham's second dispersion # axis, one for each lobe.) # - odi_total_lobe (orientation dispersion index averaged across both Binghams' # dispersion axes. Defined in :footcite:p:`Tariq2016`.) # - odi1_voxel (orientation dispersion index along Bingham's first dispersion # axis, averaged across all lobes) # - odi2_voxel (orientation dispersion index along Bingham's second dispersion # axis, averaged across all lobes) # - odi_total_voxel (orientation dispersion index averaged across both # Binghams' axes, averaged across all lobes) # - peak_dirs (peak directions in Cartesian coordinates given by the Bingham # fitting, also known as parameter mu_0. These directions are slightly # different from the peak directions given by the function # `peaks_from_model`.) # # For illustration purposes, the fitted Bingham functions can be # visualized using the following lines of code: bim_odf = BinghamMetrics.odf(sphere) scene.rm(fodf_spheres) fodf_spheres = actor.odf_slicer( bim_odf, sphere=sphere, scale=0.9, norm=False, colormap="plasma" ) scene.add(fodf_spheres) print("Saving the illustration as Bingham_odfs.png") window.record(scene, out_path="Bingham_odfs.png", size=(600, 600)) if interactive: window.show(scene) ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # Bingham functions fitted to CSD fiber ODFs. # # As an alternative to fitting Bingham functions to sampled ODFs, DIPY also # contains the function `sh_to_bingham` to perform Bingham fitting from the # ODF's spherical harmonic representation. Although this process may require # longer processing times, this function may be useful to avoid memory issues # in handling heavily sampled ODFs. For example, you may have reconstructed # ODFs using another script and saved their spherical harmonics to disk. # This function is for such cases. Below we show the lines of code to use the # function `sh_to_bingham` (feel free to skip these lines if the function # `sf_to_bingham` worked fine for you). Note that to use `sh_to_bingham` you # need to specify the maximum order of spherical harmonics that you defined # when reconstructing the ODF. In this example this was set to 8 for # the function `csd_model`: sh_coeff = csd_fit.shm_coeff BinghamMetrics = sh_to_bingham(sh_coeff, sphere, max_search_angle) ############################################################################### # Step 3.
Bingham Metrics # ======================= # As mentioned above, reconstructed Bingham functions can be useful to # quantify properties from ODFs :footcite:p:`Riffert2014`, # :footcite:p:`NetoHenriques2018`. Below we plot the Bingham metrics # expected to be proportional to the fiber density (FD) of specific fiber # populations. FD_ODF_l1 = BinghamMetrics.fd_lobe[:, :, 0, 0] FD_ODF_l2 = BinghamMetrics.fd_lobe[:, :, 0, 1] FD_voxel = BinghamMetrics.fd_voxel[:, :, 0] FD_images = [FD_ODF_l1[:, -1:1:-1].T, FD_ODF_l2[:, -1:1:-1].T, FD_voxel[:, -1:1:-1].T] FD_labels = ["FD ODF lobe 1", "FD ODF lobe 2", "FD ODF voxel"] kwargs = [{"vmin": 0, "vmax": 2}, {"vmin": 0, "vmax": 2}, {"vmin": 0, "vmax": 2}] print("Saving the illustration as Bingham_fd.png") image_mosaic( FD_images, ax_labels=FD_labels, ax_kwargs=kwargs, figsize=(16, 4), filename="Bingham_fd.png", ) ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # The figure shows from left to right: 1) the FD estimated for the # first ODF peak (showing larger values in white matter); 2) the FD estimated # for the second ODF peak (showing non-zero values in regions of crossing # white matter fibers); and 3) the sum of FD estimates across all ODF lobes # (quantity that should be proportional to the density of all fibers within # each voxel). # # Bingham functions can also be used to quantify fiber dispersion from the # ODFs :footcite:p:`NetoHenriques2018`. In addition to quantifying a combined # orientation dispersion index (`ODI_total`) for each ODF lobe # :footcite:p:`Tariq2016`, Bingham functions allow the quantification of # dispersion along two main axes (`ODI_1` and `ODI_2`), offering unique # information of fiber orientation variability within the brain tissue. Below # we show how to extract these indexes from the largest ODF peak. Note, for # better visualization of ODI estimates, voxels with total FD lower than 0.5 # are masked. ODIt = BinghamMetrics.odi_total_lobe[:, :, 0, 0] ODI1 = BinghamMetrics.odi1_lobe[:, :, 0, 0] ODI2 = BinghamMetrics.odi2_lobe[:, :, 0, 0] ODIt[FD_voxel < 0.5] = 0 ODI1[FD_voxel < 0.5] = 0 ODI2[FD_voxel < 0.5] = 0 ODI_images = [ODI1[:, -1:1:-1].T, ODI2[:, -1:1:-1].T, ODIt[:, -1:1:-1].T] ODI_labels = ["ODI_1 (lobe 1)", "ODI_2 (lobe 1)", "ODI_total (lobe 1)"] kwargs = [{"vmin": 0, "vmax": 0.2}, {"vmin": 0, "vmax": 0.2}, {"vmin": 0, "vmax": 0.2}] print("Saving the illustration as Bingham_ODI_lobe1.png") image_mosaic( ODI_images, ax_labels=ODI_labels, ax_kwargs=kwargs, figsize=(15, 5), filename="Bingham_ODI_lobe1.png", ) ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # The figure shows from left to right: 1) ODI of the largest ODF lobe along # the axis with greater dispersion, a.k.a. ODI_1 (direction in which fibers # exhibit the most variability in orientation); 2) ODI of the largest ODF lobe # along the axis with lesser dispersion, a.k.a ODI_2 (directions in which # fiber orientations are more uniform); and 3) total ODI of the largest lobe # across both axes. # # Above, we focused on the largest ODF's lobe, representing the most pronounced # fiber population within a voxel. However, this methodology is not limited to # a singular lobe since it can be applied to the other ODF lobes. Below, we # show the analogous figures for the second-largest ODF lobe. 
Note that for # this figure, regions of white matter that contain only a single fiber # population display ODI estimates of zero, corresponding to ODF profiles # lacking a second ODF lobe. ODIt = BinghamMetrics.odi_total_lobe[:, :, 0, 1] ODI1 = BinghamMetrics.odi1_lobe[:, :, 0, 1] ODI2 = BinghamMetrics.odi2_lobe[:, :, 0, 1] ODIt[FD_voxel < 0.5] = 0 ODI1[FD_voxel < 0.5] = 0 ODI2[FD_voxel < 0.5] = 0 ODI_images = [ODI1[:, -1:1:-1].T, ODI2[:, -1:1:-1].T, ODIt[:, -1:1:-1].T] ODI_labels = ["ODI_1 (lobe 2)", "ODI_2 (lobe 2)", "ODI_total (lobe 2)"] kwargs = [{"vmin": 0, "vmax": 0.2}, {"vmin": 0, "vmax": 0.2}, {"vmin": 0, "vmax": 0.2}] print("Saving the illustration as Bingham_ODI_lobe2.png") image_mosaic( ODI_images, ax_labels=ODI_labels, ax_kwargs=kwargs, figsize=(15, 5), filename="Bingham_ODI_lobe2.png", ) ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # The figure shows from left to right: 1) ODI for the second-largest ODF lobe # along the axis with greater dispersion a.k.a. ODI_1 (direction in which # fibers exhibit the most variability in orientation); 2) ODI for the # second-largest ODF lobe along the axis with lesser dispersion a.k.a. ODI_2 # (directions in which fiber orientations are more uniform); and 3) total ODI # for the second-largest ODF lobe across both axes. In this figure, regions of # the white matter that contain only a single fiber population (one ODF lobe) # display ODI estimates of zero, corresponding to ODF profiles lacking a # second ODF lobe. # # BinghamMetrics can also be used to compute the average ODI quantities across # all ODF lobes a.k.a. voxel ODI (see below). The average quantities are # computed by weighting each ODF lobe by its respective fiber density (FD) # value. These quantities are plotted in the following figure. ODIt = BinghamMetrics.odi_total_voxel[:, :, 0] ODI1 = BinghamMetrics.odi1_voxel[:, :, 0] ODI2 = BinghamMetrics.odi2_voxel[:, :, 0] ODIt[FD_voxel < 0.5] = 0 ODI1[FD_voxel < 0.5] = 0 ODI2[FD_voxel < 0.5] = 0 ODI_images = [ODI1[:, -1:1:-1].T, ODI2[:, -1:1:-1].T, ODIt[:, -1:1:-1].T] ODI_labels = ["ODI_1 (voxel)", "ODI_2 (voxel)", "ODI_total (voxel)"] kwargs = [{"vmin": 0, "vmax": 0.2}, {"vmin": 0, "vmax": 0.2}, {"vmin": 0, "vmax": 0.2}] print("Saving the illustration as Bingham_ODI_voxel.png") image_mosaic( ODI_images, ax_labels=ODI_labels, ax_kwargs=kwargs, figsize=(15, 5), filename="Bingham_ODI_voxel.png", ) ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # The figure shows from left to right: 1) weighted-averaged ODI_1 across all ODF # lobes; 2) weighted-averaged ODI_2 across all ODF lobes; 3) weighted-averaged # ODI_total across all ODF lobes. # # References # ---------- # # .. footbibliography:: # dipy-1.11.0/doc/examples/reconst_csa.py000066400000000000000000000101211476546756600200260ustar00rootroot00000000000000""" ============================================== Reconstruct with Constant Solid Angle (Q-Ball) ============================================== We show how to apply a Constant Solid Angle ODF (Q-Ball) model from :footcite:t:`Aganj2010` to your datasets.
First import the necessary modules: """ import numpy as np from dipy.core.gradients import gradient_table from dipy.data import default_sphere, get_fnames from dipy.direction import peaks_from_model from dipy.io.gradients import read_bvals_bvecs from dipy.io.image import load_nifti from dipy.reconst.shm import CsaOdfModel from dipy.segment.mask import median_otsu from dipy.viz import actor, window ############################################################################### # Download and read the data for this tutorial and load the raw diffusion data # and the affine. hardi_fname, hardi_bval_fname, hardi_bvec_fname = get_fnames(name="stanford_hardi") data, affine = load_nifti(hardi_fname) bvals, bvecs = read_bvals_bvecs(hardi_bval_fname, hardi_bvec_fname) gtab = gradient_table(bvals, bvecs=bvecs) ############################################################################### # ``data`` holds the raw diffusion data and ``gtab`` contains a # GradientTable object (gradient information, e.g. b-values). For example, to # read the b-values it is possible to write ``print(gtab.bvals)``. print(f"data.shape {data.shape}") ############################################################################### # Remove most of the background using DIPY's mask module. maskdata, mask = median_otsu( data, vol_idx=range(10, 50), median_radius=3, numpass=1, autocrop=True, dilate=2 ) ############################################################################### # We instantiate our CSA model with spherical harmonic order ($l$) of 4 csamodel = CsaOdfModel(gtab, 4) ############################################################################### # ``peaks_from_model`` is used to calculate properties of the ODFs (Orientation # Distribution Functions) and returns, for # example, the peaks and their indices, or GFA, which is similar to FA but for # ODF-based models. This function mainly needs a reconstruction model, the # data and a sphere as input. The sphere is an object that represents the # spherical discrete grid where the ODF values will be evaluated. csapeaks = peaks_from_model( model=csamodel, data=maskdata, sphere=default_sphere, relative_peak_threshold=0.5, min_separation_angle=25, mask=mask, return_odf=False, normalize_peaks=True, ) GFA = csapeaks.gfa print(f"GFA.shape {GFA.shape}") ############################################################################### # Apart from GFA, csapeaks also has the attributes peak_values, peak_indices # and ODF. peak_values shows the maxima values of the ODF and peak_indices # gives us their position on the discrete sphere that was used to do the # reconstruction of the ODF. In order to obtain the full ODF, return_odf # should be True. Before enabling this option, make sure that you have enough # memory. # # Let's visualize the ODFs of a small rectangular area in an axial slice of the # splenium of the corpus callosum (CC).
data_small = maskdata[13:43, 44:74, 28:29] # Enables/disables interactive visualization interactive = False scene = window.Scene() csaodfs = csamodel.fit(data_small).odf(default_sphere) ############################################################################### # It is common with CSA ODFs to produce negative values, we can remove those # using ``np.clip`` csaodfs = np.clip(csaodfs, 0, np.max(csaodfs, -1)[..., None]) csa_odfs_actor = actor.odf_slicer( csaodfs, sphere=default_sphere, colormap="plasma", scale=0.4 ) csa_odfs_actor.display(z=0) scene.add(csa_odfs_actor) print("Saving illustration as csa_odfs.png") window.record(scene=scene, n_frames=1, out_path="csa_odfs.png", size=(600, 600)) if interactive: window.show(scene) ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # Constant Solid Angle ODFs. # # # References # ---------- # # .. footbibliography:: # dipy-1.11.0/doc/examples/reconst_csa_parallel.py000066400000000000000000000070361476546756600217130ustar00rootroot00000000000000""" ==================================== Parallel reconstruction using Q-Ball ==================================== We show an example of parallel reconstruction using a Q-Ball Constant Solid Angle model (see Aganj et al. (MRM 2010)) and `peaks_from_model`. Import modules, fetch and read data, and compute the mask. """ import time from dipy.core.gradients import gradient_table from dipy.data import get_fnames, get_sphere from dipy.direction import peaks_from_model from dipy.io.gradients import read_bvals_bvecs from dipy.io.image import load_nifti from dipy.reconst.shm import CsaOdfModel from dipy.segment.mask import median_otsu hardi_fname, hardi_bval_fname, hardi_bvec_fname = get_fnames(name="stanford_hardi") data, affine = load_nifti(hardi_fname) bvals, bvecs = read_bvals_bvecs(hardi_bval_fname, hardi_bvec_fname) gtab = gradient_table(bvals, bvecs=bvecs) maskdata, mask = median_otsu( data, vol_idx=range(10, 50), median_radius=3, numpass=1, autocrop=True, dilate=2 ) ############################################################################### # We instantiate our CSA model with spherical harmonic order ($l$) of 4 csamodel = CsaOdfModel(gtab, 4) ############################################################################### # `Peaks_from_model` is used to calculate properties of the ODFs (Orientation # Distribution Function) and return for # example the peaks and their indices, or GFA which is similar to FA but for # ODF based models. This function mainly needs a reconstruction model, the # data and a sphere as input. The sphere is an object that represents the # spherical discrete grid where the ODF values will be evaluated. sphere = get_sphere(name="repulsion724") start_time = time.time() ############################################################################### # We will first run `peaks_from_model` using parallelism with 2 processes. If # `num_processes` is None (default option) then this function will find the # total number of processors from the operating system and use this number as # `num_processes`. Sometimes it makes sense to use only a few of the processes # in order to allow resources for other applications. However, most of the # times using the default option will be sufficient. 
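###############################################################################
# If you would rather pick ``num_processes`` yourself, a quick way to see what
# the machine offers is to query the standard library for the CPU count. This
# is only an optional convenience check added here (its overhead is negligible
# relative to the reconstruction timed below); it is not required by
# ``peaks_from_model``.

import multiprocessing

print(f"This machine reports {multiprocessing.cpu_count()} CPU cores")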
csapeaks_parallel = peaks_from_model( model=csamodel, data=maskdata, sphere=sphere, relative_peak_threshold=0.5, min_separation_angle=25, mask=mask, return_odf=False, normalize_peaks=True, npeaks=5, parallel=True, num_processes=2, ) time_parallel = time.time() - start_time print(f"peaks_from_model using 2 processes ran in : {time_parallel} seconds") ############################################################################### # If we don't use parallelism then we need to set `parallel=False`: start_time = time.time() csapeaks = peaks_from_model( model=csamodel, data=maskdata, sphere=sphere, relative_peak_threshold=0.5, min_separation_angle=25, mask=mask, return_odf=False, normalize_peaks=True, npeaks=5, parallel=False, num_processes=None, ) time_single = time.time() - start_time print(f"peaks_from_model ran in : {time_single} seconds") print(f"Speedup factor : {time_single / time_parallel}") ############################################################################### # In Windows if you get a runtime error about frozen executable please start # your script by adding your code above in a ``main`` function and use:: # # if __name__ == '__main__': # import multiprocessing # multiprocessing.freeze_support() # main() dipy-1.11.0/doc/examples/reconst_csd.py000066400000000000000000000251001476546756600200320ustar00rootroot00000000000000""" .. _reconst-csd: =================================================================== Reconstruction with Constrained Spherical Deconvolution model (CSD) =================================================================== This example shows how to use Constrained Spherical Deconvolution (CSD) introduced by :footcite:p:`Tournier2007`. This method is mainly useful with datasets with gradient directions acquired on a spherical grid. The basic idea with this method is that if we could estimate the response function of a single fiber then we could deconvolve the measured signal and obtain the underlying fiber distribution. In this way, the reconstruction of the fiber orientation distribution function (fODF) in CSD involves two steps: 1. Estimation of the fiber response function 2. Use the response function to reconstruct the fODF Let's first load the data. We will use a dataset with 10 b0s and 150 non-b0s with b-value 2000. """ import numpy as np from dipy.core.gradients import gradient_table from dipy.data import default_sphere, get_fnames from dipy.direction import peaks_from_model from dipy.io.gradients import read_bvals_bvecs from dipy.io.image import load_nifti from dipy.reconst.csdeconv import ( ConstrainedSphericalDeconvModel, auto_response_ssst, mask_for_response_ssst, recursive_response, response_from_mask_ssst, ) from dipy.reconst.dti import TensorModel, fractional_anisotropy, mean_diffusivity from dipy.sims.voxel import single_tensor_odf from dipy.viz import actor, window hardi_fname, hardi_bval_fname, hardi_bvec_fname = get_fnames(name="stanford_hardi") data, affine = load_nifti(hardi_fname) bvals, bvecs = read_bvals_bvecs(hardi_bval_fname, hardi_bvec_fname) gtab = gradient_table(bvals, bvecs=bvecs) ############################################################################### # You can verify the b-values of the dataset by looking at the attribute # ``gtab.bvals``. Now that a dataset with multiple gradient directions is # loaded, we can proceed with the two steps of CSD. # # Step 1. Estimation of the fiber response function # ================================================= # # There are many strategies to estimate the fiber response function. 
Here two # different strategies are presented. # # **Strategy 1 - response function estimates from a local brain region** # One simple way to estimate the fiber response function is to look for regions # of the brain where it is known that there are single coherent fiber # populations. For example, if we use a ROI at the center of the brain, we will # find single fibers from the corpus callosum. The ``auto_response_ssst`` # function will calculate FA for a cuboid ROI of radii equal to ``roi_radii`` # in the center of the volume and return the response function estimated in # that region for the voxels with FA higher than 0.7. response, ratio = auto_response_ssst(gtab, data, roi_radii=10, fa_thr=0.7) ############################################################################### # Note that the ``auto_response_ssst`` function calls two functions that can be # used separately. First, the function ``mask_for_response_ssst`` creates a # mask of voxels within the cuboid ROI that meet the FA threshold constraint. # This mask can be used to calculate the number of voxels that were kept, or # to also apply an external mask (a WM mask for example). Second, the function # ``response_from_mask_ssst`` takes the mask and returns the response function # calculated within the mask. If no changes are made to the mask between the # two calls, the resulting responses should be identical. mask = mask_for_response_ssst(gtab, data, roi_radii=10, fa_thr=0.7) nvoxels = np.sum(mask) print(nvoxels) response, ratio = response_from_mask_ssst(gtab, data, mask) ############################################################################### # The ``response`` tuple contains two elements. The first is an array with # the eigenvalues of the response function and the second is the average S0 for # this response. # # It is good practice to always validate the result of auto_response_ssst. For # this purpose we can print the elements of ``response`` and have a look at # their values. print(response) ############################################################################### # The tensor generated from the response must be prolate (two smaller # eigenvalues should be equal) and look anisotropic with a ratio of second to # first eigenvalue of about 0.2. Or in other words, the axial diffusivity of # this tensor should be around 5 times larger than the radial diffusivity. print(ratio) ############################################################################### # We can double-check that we have a good response function by visualizing the # response function's ODF. Here is how you would do that: # Enables/disables interactive visualization interactive = False scene = window.Scene() evals = response[0] evecs = np.array([[0, 1, 0], [0, 0, 1], [1, 0, 0]]).T response_odf = single_tensor_odf(default_sphere.vertices, evals=evals, evecs=evecs) # transform our data from 1D to 4D response_odf = response_odf[None, None, None, :] response_actor = actor.odf_slicer( response_odf, sphere=default_sphere, colormap="plasma" ) scene.add(response_actor) print("Saving illustration as csd_response.png") window.record(scene=scene, out_path="csd_response.png", size=(200, 200)) if interactive: window.show(scene) ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # Estimated response function. 
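###############################################################################
# As an extra, optional sanity check (added here, not part of the original
# tutorial), we can verify programmatically that the response tensor is
# prolate and that the eigenvalue ratio is close to the expected ~0.2,
# reusing the ``response`` tuple from above.

response_evals = response[0]
print("Prolate (two smaller eigenvalues equal):",
      np.isclose(response_evals[1], response_evals[2]))
print("Ratio of second to first eigenvalue:",
      response_evals[1] / response_evals[0])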
scene.rm(response_actor) ############################################################################### # **Strategy 2 - data-driven calibration of response function** Depending # on the dataset, FA threshold may not be the best way to find the best # possible response function. For one, it depends on the diffusion tensor # (FA and first eigenvector), which has lower accuracy at high # b-values. Alternatively, the response function can be calibrated in a # data-driven manner :footcite:p:`Tax2014`. # # First, the data is deconvolved with a 'fat' response function. All voxels # that are considered to contain only one peak in this deconvolution (as # determined by the peak threshold which gives an upper limit of the ratio # of the second peak to the first peak) are maintained, and from these voxels # a new response function is determined. This process is repeated until # convergence is reached. Here we calibrate the response function on a small # part of the data. ############################################################################### # A WM mask can shorten computation time for the whole dataset. Here it is # created based on the DTI fit. tenmodel = TensorModel(gtab) tenfit = tenmodel.fit(data, mask=data[..., 0] > 200) FA = fractional_anisotropy(tenfit.evals) MD = mean_diffusivity(tenfit.evals) wm_mask = np.logical_or(FA >= 0.4, (np.logical_and(FA >= 0.15, MD >= 0.0011))) response = recursive_response( gtab, data, mask=wm_mask, sh_order_max=8, peak_thr=0.01, init_fa=0.08, init_trace=0.0021, iter=8, convergence=0.001, parallel=True, num_processes=2, ) ############################################################################### # We can check the shape of the signal of the response function, which should # be like a pancake: response_signal = response.on_sphere(default_sphere) # transform our data from 1D to 4D response_signal = response_signal[None, None, None, :] response_actor = actor.odf_slicer( response_signal, sphere=default_sphere, colormap="plasma" ) scene = window.Scene() scene.add(response_actor) print("Saving illustration as csd_recursive_response.png") window.record(scene=scene, out_path="csd_recursive_response.png", size=(200, 200)) if interactive: window.show(scene) ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # Estimated response function using recursive calibration. scene.rm(response_actor) ############################################################################### # Step 2. fODF reconstruction # =========================== # # After estimating a response function for one of the strategies shown above, # we are ready to start the deconvolution process. Let's import the CSD model # and fit the datasets. csd_model = ConstrainedSphericalDeconvModel(gtab, response) ############################################################################### # For illustration purposes we will fit only a small portion of the data. data_small = data[20:50, 55:85, 38:39] csd_fit = csd_model.fit(data_small) ############################################################################### # Show the CSD-based ODFs also known as FODFs (fiber ODFs). csd_odf = csd_fit.odf(default_sphere) ############################################################################### # Here we visualize only a 30x30 region. 
fodf_spheres = actor.odf_slicer(
    csd_odf, sphere=default_sphere, scale=0.9, norm=False, colormap="plasma"
)

scene.add(fodf_spheres)

print("Saving illustration as csd_odfs.png")
window.record(scene=scene, out_path="csd_odfs.png", size=(600, 600))
if interactive:
    window.show(scene)

###############################################################################
# .. rst-class:: centered small fst-italic fw-semibold
#
# CSD ODFs.
#
#
# In DIPY we also provide tools for finding the peak directions (maxima) of the
# ODFs. For this purpose we recommend using ``peaks_from_model``.

csd_peaks = peaks_from_model(
    model=csd_model,
    data=data_small,
    sphere=default_sphere,
    relative_peak_threshold=0.5,
    min_separation_angle=25,
    parallel=True,
    num_processes=2,
)

scene.clear()
fodf_peaks = actor.peak_slicer(csd_peaks.peak_dirs, peaks_values=csd_peaks.peak_values)
scene.add(fodf_peaks)

print("Saving illustration as csd_peaks.png")
window.record(scene=scene, out_path="csd_peaks.png", size=(600, 600))
if interactive:
    window.show(scene)

###############################################################################
# .. rst-class:: centered small fst-italic fw-semibold
#
# CSD Peaks.
#
#
# We can finally visualize both the ODFs and peaks in the same space.

fodf_spheres.GetProperty().SetOpacity(0.4)

scene.add(fodf_spheres)

print("Saving illustration as csd_both.png")
window.record(scene=scene, out_path="csd_both.png", size=(600, 600))
if interactive:
    window.show(scene)

###############################################################################
# .. rst-class:: centered small fst-italic fw-semibold
#
# CSD Peaks and ODFs.
#
#
# References
# ----------
#
# .. footbibliography::
#

dipy-1.11.0/doc/examples/reconst_csd_parallel.py000066400000000000000000000064041476546756600217140ustar00rootroot00000000000000"""
=================================
Parallel reconstruction using CSD
=================================

This example shows how to use parallelism (multiprocessing) with
``peaks_from_model`` in order to speed up the signal reconstruction process.
For this example we will use the same initial steps as in
:ref:`sphx_glr_examples_built_reconstruction_reconst_csd.py`.
Import modules, fetch and read data, apply the mask and calculate the response
function.
"""

import time

from dipy.core.gradients import gradient_table
from dipy.data import default_sphere, get_fnames
from dipy.direction import peaks_from_model
from dipy.io.gradients import read_bvals_bvecs
from dipy.io.image import load_nifti
from dipy.reconst.csdeconv import ConstrainedSphericalDeconvModel, auto_response_ssst
from dipy.segment.mask import median_otsu

hardi_fname, hardi_bval_fname, hardi_bvec_fname = get_fnames(name="stanford_hardi")
data, affine = load_nifti(hardi_fname)

bvals, bvecs = read_bvals_bvecs(hardi_bval_fname, hardi_bvec_fname)
gtab = gradient_table(bvals, bvecs=bvecs)

maskdata, mask = median_otsu(
    data, vol_idx=range(10, 50), median_radius=3, numpass=1, autocrop=False, dilate=2
)

response, ratio = auto_response_ssst(gtab, maskdata, roi_radii=10, fa_thr=0.7)

data = maskdata[:, :, 33:37]
mask = mask[:, :, 33:37]

###############################################################################
# Now we are ready to import the CSD model and fit the datasets.

csd_model = ConstrainedSphericalDeconvModel(gtab, response)

###############################################################################
# Compute the CSD-based ODFs using ``peaks_from_model``.
This function has a # parameter called ``parallel`` which allows for the voxels to be processed in # parallel. If ``num_processes`` is None it will figure out automatically the # number of CPUs available in your system. Alternatively, you can set # ``num_processes`` manually. Here, we show an example where we compare the # duration of execution with or without parallelism. start_time = time.time() csd_peaks_parallel = peaks_from_model( model=csd_model, data=data, sphere=default_sphere, relative_peak_threshold=0.5, min_separation_angle=25, mask=mask, return_sh=True, return_odf=False, normalize_peaks=True, npeaks=5, parallel=True, num_processes=2, ) time_parallel = time.time() - start_time print(f"peaks_from_model using 2 processes ran in : {time_parallel} seconds") start_time = time.time() csd_peaks = peaks_from_model( model=csd_model, data=data, sphere=default_sphere, relative_peak_threshold=0.5, min_separation_angle=25, mask=mask, return_sh=True, return_odf=False, normalize_peaks=True, npeaks=5, parallel=False, num_processes=None, ) time_single = time.time() - start_time print(f"peaks_from_model ran in : {time_single} seconds") print(f"Speedup factor : {time_single / time_parallel}") ############################################################################### # In Windows if you get a runtime error about frozen executable please start # your script by adding your code above in a ``main`` function and use:: # # if __name__ == '__main__': # import multiprocessing # multiprocessing.freeze_support() # main() dipy-1.11.0/doc/examples/reconst_cti.py000066400000000000000000000162561476546756600200540ustar00rootroot00000000000000""" ============================================================================== Reconstruction of the diffusion signal with the correlation tensor model (CTI) ============================================================================== Correlation Tensor MRI (CTI) is a method that uses double diffusion encoding data to resolve sources of kurtosis. It is similar to the Q-space Trajectory Imaging method (see :ref:`sphx_glr_examples_built_reconstruction_reconst_qti.py`) :footcite:p:`NetoHenriques2020`. However, in addition to the kurtosis sources associated with diffusion variance across compartments (``K_aniso`` and ``K_iso``, which are related to microscopic anisotropy and the variance of the mean diffusivities of compartments, respectively), CTI also measures K_micro. This quantifies non-Gaussian diffusion effects that deviate from the multiple Gaussian component tissue representation, such as restricted diffusion, exchange, and structural disorder in compartments like cross-sectional variance :footcite:p:`Novello2022`, :footcite:p:`Alves2022`. Although the CorrelationTensorModel and the DiffusionKurtosisTensorFit may share some similarities, they have significantly different representations for the diffusion-weighted signal. This difference leads to the necessity for a unique ``design matrix`` specifically for CTI. The CorrelationTensorModel expresses the diffusion-weighted signal as: .. 
math::

    \\begin{align}
    \\log E_{\\Delta}(q_1, q_2) &= \\left(q_{1i}q_{1j} + q_{2i}q_{2j}\\right)
    \\Delta D_{ij} \\\\
    &+ q_{1i}q_{2j}Q_{ij} \\\\
    &+ \\frac{1}{6} \\left( q_{1i}q_{1j}q_{1k}q_{1l}
    + q_{2i}q_{2j}q_{2k}q_{2l} \\right) \\\\
    &\\quad \\times \\Delta^2 D^2 W_{ijkl} \\\\
    &+ \\frac{1}{4} q_{1i}q_{1j}q_{2k}q_{2l}Z_{ijkl} \\\\
    &+ \\frac{1}{6} \\left( q_{1i}q_{1j}q_{1k}q_{2l}
    + q_{2i}q_{2j}q_{2k}q_{1l} \\right) S_{ijkl} \\\\
    &+ O(q^6)
    \\end{align}

where $\\Delta D_{ij}$ refers to the elements of the total diffusion tensor
(DT), $W_{ijkl}$ are the elements of the total kurtosis tensor (KT), $D$ is
the mean diffusivity, and $Q_{ij}$ are the elements of a 2nd-order
correlation tensor Q. $Z_{ijkl}$ and $S_{ijkl}$ are the elements of the
4th-order displacement correlation tensors Z and S, respectively. However,
CTI differs from the DiffusionKurtosis model by calculating the different
sources of kurtosis.

In the following example we show how to fit the correlation tensor model on
a real-life dataset and how to estimate correlation tensor based statistics.

First, we'll import all relevant modules.
"""

import matplotlib.pyplot as plt

from dipy.core.gradients import gradient_table
from dipy.data import get_fnames
from dipy.io import read_bvals_bvecs
from dipy.io.image import load_nifti
import dipy.reconst.cti as cti

###############################################################################
# For CTI analysis, data must be acquired using double diffusion encoding,
# taking into account different pairs of b-values and gradient directions
# between the two diffusion epochs, as discussed by Henriques et al.
# (Magn Reson Med, 2021).
# To run CTI we need separate b-value and b-vector files for each DDE
# diffusion epoch. In this tutorial, a sample DDE dataset and the respective
# b-value and b-vector files for the two epochs are fetched. If you want to
# process your own DDE data compatible with CTI, you will have to change the
# lines of code below to add the paths of your data. You should also ensure
# that the data is formatted correctly for the CTI analysis you are
# performing.

fdata, fbvals1, fbvecs1, fbvals2, fbvecs2, fmask = get_fnames(name="cti_rat1")
data, affine = load_nifti(fdata)
bvals1, bvecs1 = read_bvals_bvecs(fbvals1, fbvecs1)
bvals2, bvecs2 = read_bvals_bvecs(fbvals2, fbvecs2)

###############################################################################
# In this example, the function ``load_nifti`` is used to load the CTI data
# saved in ``fdata`` and returns the data as a nibabel Nifti1Image object
# along with the affine transformation. The b-values and b-vectors for the two
# different gradient tables are loaded from ``bvals1.bval`` and
# ``bvec1.bvec``, and ``bvals2.bval`` and ``bvec2.bvec``, respectively, using
# the ``read_bvals_bvecs`` function. For CTI reconstruction in DIPY, we need
# to define the b-values and b-vectors for each diffusion epoch in separate
# gradient tables, as done in the lines of code below.

gtab1 = gradient_table(bvals1, bvecs=bvecs1)
gtab2 = gradient_table(bvals2, bvecs=bvecs2)

###############################################################################
# Before fitting the data, we perform some data pre-processing. We first
# compute a brain mask to avoid unnecessary calculations on the background
# of the image.
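###############################################################################
# This dataset ships with a precomputed brain mask, which we load below. If
# your own data does not include one, a mask can be computed with
# ``median_otsu``; a minimal sketch, where the ``vol_idx`` volumes used for
# thresholding are an assumption and should point at your b=0 volumes::
#
#     from dipy.segment.mask import median_otsu
#     _, computed_mask = median_otsu(
#         data, vol_idx=[0], median_radius=4, numpass=2, autocrop=False, dilate=1
#     )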
mask, mask_affine = load_nifti(fmask) ############################################################################### # Now that we've loaded the data and generated the two gradient tables we can # go forward with CTI fitting. For this, the CTI model is first defined for # GradientTable objects gtab1 and gtab2 by instantiating the # CorrelationTensorModel object in the following way: ctimodel = cti.CorrelationTensorModel(gtab1, gtab2) ############################################################################### # To fit the data using the defined model object, we call the fit function of # this object. ctifit = ctimodel.fit(data, mask=mask) ############################################################################### # The fit method for the CTI model produces a CorrelationTensorFit object, # which contains the attributes of both the DKI and DTI models. Given that CTI # is a built upon DKI, which itself extends the DTI model, the # CorrelationTensorFit instance captures a comprehensive set of parameters and # attributes from these underlying models. # # For instance, the CTI model inherently estimates all DTI and DKI statistics, # such as mean, axial, and radial diffusivities (MD, AD, RD) as well as the # mean, axial, and radial kurtosis (MK, AK, RK). # To better illustrate the extraction of main DTI/DKI parameters using the CTI # model, consider the following lines of code: AD = ctifit.ad MD = ctifit.md RD = ctifit.rd MK = ctifit.mk() AK = ctifit.ak() RK = ctifit.rk() ############################################################################### # However, in addition to these metrics, CTI also provides unique sources of # information, not available in DTI and DKI. Below we draw a feature map of the # 3 different sources of kurtosis which can exclusively be calculated from the # CTI model. kiso_map = ctifit.K_iso kaniso_map = ctifit.K_aniso kmicro_map = ctifit.K_micro slice_idx = 0 fig, axarr = plt.subplots(1, 3, figsize=(15, 5)) axarr[0].imshow(kiso_map[:, :, slice_idx], cmap="gray", origin="lower", vmin=0, vmax=1) axarr[0].set_title("Kiso Map") axarr[1].imshow( kaniso_map[:, :, slice_idx], cmap="gray", origin="lower", vmin=0, vmax=1 ) axarr[1].set_title("Kaniso Map") axarr[2].imshow( kmicro_map[:, :, slice_idx], cmap="gray", origin="lower", vmin=0, vmax=1 ) axarr[2].set_title("Kmicro Map") plt.tight_layout() plt.show() ############################################################################### # References # ---------- # # .. footbibliography:: # dipy-1.11.0/doc/examples/reconst_dki.py000066400000000000000000000416101476546756600200340ustar00rootroot00000000000000""" =========================================================================== Reconstruction of the diffusion signal with the kurtosis tensor model (DKI) =========================================================================== Diffusional Kurtosis Imaging (DKI) is an expansion of the Diffusion Tensor Imaging (DTI) model (see :ref:`sphx_glr_examples_built_reconstruction_reconst_dti.py`). In addition to the Diffusion Tensor (DT), DKI quantifies the degree to which water diffusion in biological tissues is non-Gaussian using the Kurtosis Tensor (KT) :footcite:p:`Jensen2005`. Measurements of non-Gaussian diffusion from DKI are of interest because they were shown to provide extra information about microstructural alterations in both health and disease (for a review see our paper :footcite:p:`NetoHenriques2021a`). 
Moreover, in contrast to DTI, DKI can provide metrics of tissue microscopic
heterogeneity that are less sensitive to confounding effects in the
orientation of tissue components, thus providing better characterization in
general white matter configurations (including regions of fibers crossing,
fanning, and/or dispersing) and gray matter :footcite:p:`NetoHenriques2015`,
:footcite:p:`NetoHenriques2021a`. Although DKI aims primarily to quantify the
degree of non-Gaussian diffusion without establishing concrete biophysical
assumptions, DKI can also be related to microstructural models to infer
specific biophysical parameters (e.g., the density of axonal fibers) - this
aspect will be more closely explored in
:ref:`sphx_glr_examples_built_reconstruction_reconst_dki_micro.py`. For
additional information on DKI and its practical implementation within DIPY,
refer to :footcite:p:`NetoHenriques2021a`.

Below, we introduce a concise theoretical background of DKI and walk through
its fitting process using DIPY. Furthermore, we discuss the various diffusion
metrics that can be derived from DKI, providing insight into their practical
significance and applications. Additionally, we address strategies to
mitigate common artifacts, such as implausible negative kurtosis estimates,
which manifest as 'black' voxels or holes in DKI maps and can compromise the
accuracy of a DKI analysis.

Theory
======

The DKI model expresses the diffusion-weighted signal as:

.. math::

    S(n,b)=S_{0}e^{-bD(n)+\\frac{1}{6}b^{2}D(n)^{2}K(n)}

where $\\mathbf{b}$ is the applied diffusion weighting (which is dependent on
the measurement parameters), $S_0$ is the signal in the absence of diffusion
gradient sensitization, $\\mathbf{D(n)}$ is the value of diffusion along
direction $\\mathbf{n}$, and $\\mathbf{K(n)}$ is the value of kurtosis along
direction $\\mathbf{n}$. The directional diffusion $\\mathbf{D(n)}$ and
kurtosis $\\mathbf{K(n)}$ can be related to the diffusion tensor (DT) and
kurtosis tensor (KT) using the following equations:

.. math::

    D(n)=\\sum_{i=1}^{3}\\sum_{j=1}^{3}n_{i}n_{j}D_{ij}

and

.. math::

    K(n)=\\frac{MD^{2}}{D(n)^{2}}\\sum_{i=1}^{3}\\sum_{j=1}^{3}\\sum_{k=1}^{3}
    \\sum_{l=1}^{3}n_{i}n_{j}n_{k}n_{l}W_{ijkl}

where $D_{ij}$ are the elements of the second-order DT, $W_{ijkl}$ the
elements of the fourth-order KT, and $MD$ is the mean diffusivity. Like the
DT, the KT has antipodal symmetry, and thus only 15 $W_{ijkl}$ elements are
needed to fully characterize the KT:

.. math::

    \\begin{matrix} ( & W_{xxxx} & W_{yyyy} & W_{zzzz} & W_{xxxy} & W_{xxxz} & ... \\\\
    & W_{xyyy} & W_{yyyz} & W_{xzzz} & W_{yzzz} & W_{xxyy} & ... \\\\
    & W_{xxzz} & W_{yyzz} & W_{xxyz} & W_{xyyz} & W_{xyzz} & & )\\end{matrix}

DKI fitting in DIPY
===================

In the following example we show how to fit the diffusion kurtosis model on
diffusion-weighted multi-shell datasets and how to estimate diffusion
kurtosis based statistics.
First, we import all relevant modules:
"""

import numpy as np

from dipy.core.gradients import gradient_table
from dipy.data import get_fnames
from dipy.denoise.localpca import mppca
from dipy.io.gradients import read_bvals_bvecs
from dipy.io.image import load_nifti
import dipy.reconst.dki as dki
import dipy.reconst.dti as dti
from dipy.segment.mask import median_otsu
from dipy.viz.plotting import compare_maps

###############################################################################
# DKI requires multi-shell data, i.e. data acquired from more than one
# non-zero b-value. Here, we use fetch to download a multi-shell dataset which
# was kindly provided by Hansen and Jespersen (more details about the data are
# provided in their paper :footcite:p:`Hansen2016a`). The total size of the
# downloaded data is 192 MBytes, however you only need to fetch it once.

fraw, fbval, fbvec, t1_fname = get_fnames(name="cfin_multib")

data, affine = load_nifti(fraw)
bvals, bvecs = read_bvals_bvecs(fbval, fbvec)
gtab = gradient_table(bvals, bvecs=bvecs)

###############################################################################
# Function ``get_fnames`` downloads and outputs the paths of the data,
# ``load_nifti`` returns the data as a nibabel Nifti1Image object, and
# ``read_bvals_bvecs`` loads the arrays containing the information about the
# b-values and b-vectors. These latter arrays are converted to the
# GradientTable object required for DIPY_'s data reconstruction.
#
# The downloaded dataset was acquired with an unusually large number of
# b-values. To run this example with acquisitions that are more common in
# practice, we select below the data for three non-zero b-values (if you want
# to run this example with the full data extent, skip the following lines of
# code).

bval_sel = np.zeros_like(gtab.bvals)
bval_sel[bvals == 0] = 1
bval_sel[bvals == 600] = 1
bval_sel[bvals == 1000] = 1
bval_sel[bvals == 2000] = 1

data = data[..., bval_sel == 1]
gtab = gradient_table(bvals[bval_sel == 1], bvecs=bvecs[bval_sel == 1])

###############################################################################
# Before fitting the data, we perform some data pre-processing. We first
# compute a brain mask to avoid unnecessary calculations on the background
# of the image.

datamask, mask = median_otsu(
    data, vol_idx=[0, 1], median_radius=4, numpass=2, autocrop=False, dilate=1
)

###############################################################################
# Since the diffusion kurtosis model involves the estimation of a large number
# of parameters :footcite:p:`Tax2015` and since the non-Gaussian components of
# the diffusion signal are more sensitive to artifacts
# :footcite:p:`NetoHenriques2012`, :footcite:p:`Tabesh2011`, it might be
# favorable to suppress the effects of noise and artifacts before diffusion
# kurtosis fitting. In this example, the effects of noise are suppressed using
# the Marcenko-Pastur (MP)-PCA algorithm (for more information, see
# :ref:`sphx_glr_examples_built_preprocessing_denoise_mppca.py`). Processing
# MP-PCA may take a while - for illustration purposes, you can skip this step.
# However, note that if you don't denoise your data, DKI reconstructions may
# be corrupted by a large percentage of implausible DKI estimates (see below
# for more information on this issue).

data = mppca(data, patch_radius=[3, 3, 3])

###############################################################################
# Now that we have loaded and pre-processed the data we can go forward
# with DKI fitting.
# For this, the DKI model is first defined for the data's GradientTable
# object by instantiating the DiffusionKurtosisModel object in the
# following way:

dkimodel = dki.DiffusionKurtosisModel(gtab)

###############################################################################
# To fit the data using the defined model object, we call the ``fit`` function
# of this object. For the purpose of this example, we will only fit a
# single slice of the data:

dkifit = dkimodel.fit(data[:, :, 9:10], mask=mask[:, :, 9:10])

###############################################################################
# The fit method creates a DiffusionKurtosisFit object, which contains all the
# diffusion and kurtosis fitting parameters and other DKI attributes. For
# instance, since the diffusion kurtosis model estimates the diffusion tensor,
# all standard diffusion tensor statistics can be computed from the
# DiffusionKurtosisFit instance. For example, we can extract the fractional
# anisotropy (FA), the mean diffusivity (MD), the radial diffusivity (RD) and
# the axial diffusivity (AD) from the DiffusionKurtosisFit instance. Of
# course, these measures can also be computed from DIPY's ``TensorModel`` fit,
# and should be analogous; however, theoretically, the diffusion statistics
# from the kurtosis model are expected to have better accuracy, since DKI's
# diffusion tensor estimates are decoupled from higher-order term effects
# :footcite:p:`Veraart2011`, :footcite:p:`NetoHenriques2021a`. Below we
# compare the FA, MD, AD, and RD, computed from both DTI and DKI.

tenmodel = dti.TensorModel(gtab)
tenfit = tenmodel.fit(data[:, :, 9:10], mask=mask[:, :, 9:10])

fits = [tenfit, dkifit]
maps = ["fa", "md", "rd", "ad"]
fit_labels = ["DTI", "DKI"]
map_kwargs = [{"vmax": 0.7}, {"vmax": 2e-3}, {"vmax": 2e-3}, {"vmax": 2e-3}]
compare_maps(
    fits,
    maps,
    fit_labels=fit_labels,
    map_kwargs=map_kwargs,
    filename="Diffusion_tensor_measures_from_DTI_and_DKI.png",
)

###############################################################################
# .. rst-class:: centered small fst-italic fw-semibold
#
# Diffusion tensor measures obtained from the diffusion tensor estimated
# from DKI (upper panels) and DTI (lower panels).
#
#
# DTI's diffusion estimates present lower values than DKI's estimates,
# showing that DTI's diffusion measurements are underestimated by higher
# order effects (for a detailed discussion on this see
# :footcite:p:`NetoHenriques2021a`).
#
# In addition to the standard diffusion statistics, the DiffusionKurtosisFit
# instance can be used to estimate the non-Gaussian measures of mean kurtosis
# (MK), the radial kurtosis (RK) and the axial kurtosis (AK).

maps = ["mk", "rk", "ak"]
compare_maps(
    [dkifit],
    maps,
    fit_labels=["DKI"],
    map_kwargs={"vmin": 0, "vmax": 1.5},
    filename="DKI_standard_measures.png",
)

###############################################################################
# .. rst-class:: centered small fst-italic fw-semibold
#
# DKI standard kurtosis measures.
#
#
# The non-Gaussian behaviour of the diffusion signal is expected to be higher
# when tissue water is confined by multiple compartments. MK is, therefore,
# higher in white matter, since white matter is highly compartmentalized by
# myelin sheaths. This compartmentalization of water diffusion is expected to
# be more pronounced perpendicularly to white matter fibers, and thus the RK
# map presents higher amplitudes than the AK map.
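###############################################################################
# If you need the kurtosis maps as plain arrays rather than a comparison
# figure (e.g., to save them with ``save_nifti``), they can also be computed
# directly from the fit object. A minimal sketch, where clipping the values
# to the range [0, 3] is an assumption commonly used to suppress implausible
# estimates::
#
#     MK = dkifit.mk(min_kurtosis=0, max_kurtosis=3)
#     RK = dkifit.rk(min_kurtosis=0, max_kurtosis=3)
#     AK = dkifit.ak(min_kurtosis=0, max_kurtosis=3)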
# # Mitigating 'Black' Voxels / Holes in DKI metrics # ================================================ # # It is important to note that kurtosis estimates might present implausible # negative estimates in deep white matter regions that will manifest as # 'Black' voxels or holes in DKI metrics (e.g. see the band of dark voxels in # the RK map above). These negative kurtosis values are artifactual and might # be induced by: # 1) low radial diffusivities of aligned white matter - since it is very hard # to capture non-Gaussian information in radial direction due to its low # diffusion decays, radial kurtosis estimates (and consequently the mean # kurtosis estimates) might have low robustness and tendency to exhibit # negative values :footcite:p:`NetoHenriques2012`, :footcite:p:`Tabesh2011`; # 2) Gibbs artifacts - MRI images might be corrupted by signal oscillation # artifact between tissue's edges if an inadequate number of high frequencies # of the k-space is sampled. These oscillations might have different signs on # images acquired with different diffusion-weighted and inducing negative # biases in kurtosis parametric maps :footcite:p:`Perrone2015`, # :footcite:p:`NetoHenriques2018`. 3) Underestimation of b0 signals - Due to # physiological or noise artifacts, the signals acquired at b-value=0 may be # artifactually lower than the diffusion-weighted signals acquired for the # different b-values. In this case, the log diffusion-weighted signal decay may # appear to be concave rather than showing to be convex (as one would typically # expect), leading to negative kurtosis value estimates. # # Given the above, one can try to suppress the 'Black' voxel / holes in DKI # metrics by: # 1) using more advanced noise and artifact suppression algorithms, e.g., # as mentioned above, the MP-PCA denoising # (:ref:`sphx_glr_examples_built_preprocessing_denoise_mppca.py`), other # denoising alternatives such as Patch2self # (:ref:`sphx_glr_examples_built_preprocessing_denoise_patch2self.py`) or # incorporating methods for Gibbs Artifact Unringing # (:ref:`sphx_glr_examples_built_preprocessing_denoise_gibbs.py`) # algorithms. # 2) computing the kurtosis values from powder-averaged diffusion-weighted # signals which are known to be less sensitive to implausible negative # estimates. The details on how to compute the kurtosis from powder-averaged # signals in DIPY are described in the following tutorial # (:ref:`sphx_glr_examples_built_reconstruction_reconst_msdki.py`). # 3) computing alternative definitions of mean and radial kurtosis such as # the mean kurtosis tensor (MKT) and radial tensor kurtosis (RTK) metrics (see # below). # 4) constrained optimization to ensure that the fitted parameters # are physically plausible :footcite:p:`DelaHaije2020` (see below). # # Alternative DKI metrics # ======================= # # In addition to the standard mean, axial, and radial kurtosis metrics shown # above, alternative metrics can be computed from DKI, e.g.: # 1) the mean kurtosis tensor (MKT) - defined as the trace of the kurtosis # tensor - is a quantity that provides a contrast similar to the standard MK # but it is more robust to noise artifacts :footcite:p:`Hansen2013`, # :footcite:p:`NetoHenriques2021a`. 2) the radial tensor kurtosis (RTK) provides # an alternative definition to standard radial kurtosis (RK) that, as MKT, is # more robust to noise artifacts :footcite:p:`Hansen2013`. 
# 3) the kurtosis fractional anisotropy (KFA) that quantifies the anisotropy of # the kurtosis tensor :footcite:p:`Glenn2015`, which provides different # information than the FA measures from the diffusion tensor. # # These measures are computed and illustrated below: compare_maps( [dkifit], ["mkt", "rtk", "kfa"], fit_labels=["DKI"], map_kwargs=[ {"vmin": 0, "vmax": 1.5}, {"vmin": 0, "vmax": 1.5}, {"vmin": 0, "vmax": 1}, ], filename="Alternative_DKI_metrics.png", ) ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # Alternative DKI measures. # # # Constrained optimization for DKI # ================================ # # When instantiating the DiffusionKurtosisModel, the model can be set up to use # constraints with the option `fit_method='CLS'` (for ordinary least squares) # or with `fit_method='CWLS'` (for weighted least squares). Constrained fitting # takes more time than unconstrained fitting, but is generally recommended to # prevent physically implausible parameter estimates # :footcite:p:`DelaHaije2020`. For performance purposes it is recommended to use # the MOSEK solver (https://www.mosek.com/) by setting ``cvxpy_solver='MOSEK'``. # Different solvers can differ greatly in terms of runtime and solution # accuracy, and in some cases solvers may show warnings about convergence or # recommended option settings. # # .. note:: # In certain atypical scenarios, the DKI+ constraints could potentially be # too restrictive. Always check the results of a constrained fit with their # unconstrained counterpart to verify that there are no unexpected # qualitative differences. dkimodel_plus = dki.DiffusionKurtosisModel(gtab, fit_method="CLS") dkifit_plus = dkimodel_plus.fit(data[:, :, 9:10], mask=mask[:, :, 9:10]) ############################################################################### # We can now compare the kurtosis measures obtained with the constrained fit to # the measures obtained before, where we see that many of the artifactual # voxels have now been corrected. In particular outliers caused by pure noise # -- instead of for example acquisition artifacts -- can be corrected with # this method. compare_maps( [dkifit, dkifit_plus], ["mkt", "rtk", "ak"], fit_labels=["DKI", "DKI+"], map_kwargs={"vmin": 0, "vmax": 1.5}, filename="Alternative_DKI_measures_comparison_to_DKIplus.png", ) ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # DKI standard kurtosis measures obtained with constrained optimization. # # References # ---------- # # .. footbibliography:: # dipy-1.11.0/doc/examples/reconst_dki_micro.py000066400000000000000000000151471476546756600212330ustar00rootroot00000000000000""" ====================================================================== Reconstruction of the diffusion signal with the WMTI model (DKI-MICRO) ====================================================================== DKI can also be used to derive concrete biophysical parameters by applying microstructural models to DT and KT estimated from DKI. For instance, :footcite:t:`Fieremans2011` showed that DKI can be used to estimate the contribution of hindered and restricted diffusion for well-aligned fibers - a model that was later referred to as the white matter tract integrity (WMTI) technique :footcite:p:`Fieremans2013`. 
The two tensors of WMTI can be also interpreted as the influences of intra- and extra-cellular compartments and can be used to estimate the axonal volume fraction and diffusion extra-cellular tortuosity. According to previous studies :footcite:p:`Fieremans2012`, :footcite:p:`Fieremans2013` these latter measures can be used to distinguish processes of axonal loss from processes of myelin degeneration. Details on the implementation of WMTI in DIPY are described in :footcite:p:`NetoHenriques2021a`. In this example, we show how to process a dMRI dataset using the WMTI model. First, we import all relevant modules: """ import matplotlib.pyplot as plt import numpy as np from scipy.ndimage import gaussian_filter from dipy.core.gradients import gradient_table from dipy.data import get_fnames from dipy.io.gradients import read_bvals_bvecs from dipy.io.image import load_nifti import dipy.reconst.dki as dki import dipy.reconst.dki_micro as dki_micro from dipy.segment.mask import median_otsu ############################################################################### # As the standard DKI, WMTI requires multi-shell data, i.e. data acquired from # more than one non-zero b-value. Here, we use a fetcher to download a # multi-shell dataset which was kindly provided by Hansen and Jespersen # (more details about the data are provided in their paper # :footcite:p:`Hansen2016a`). fraw, fbval, fbvec, t1_fname = get_fnames(name="cfin_multib") data, affine = load_nifti(fraw) bvals, bvecs = read_bvals_bvecs(fbval, fbvec) gtab = gradient_table(bvals, bvecs=bvecs) ############################################################################### # For comparison, this dataset is pre-processed using the same steps used in # the example for reconstructing DKI (see # :ref:`sphx_glr_examples_built_reconstruction_reconst_dki.py`). # data masking maskdata, mask = median_otsu( data, vol_idx=[0, 1], median_radius=4, numpass=2, autocrop=False, dilate=1 ) # Smoothing fwhm = 1.25 gauss_std = fwhm / np.sqrt(8 * np.log(2)) data_smooth = np.zeros(data.shape) for v in range(data.shape[-1]): data_smooth[..., v] = gaussian_filter(data[..., v], sigma=gauss_std) ############################################################################### # The WMTI model can be defined in DIPY by instantiating the # 'KurtosisMicrostructureModel' object in the following way: dki_micro_model = dki_micro.KurtosisMicrostructureModel(gtab) ############################################################################### # Before fitting this microstructural model, it is useful to indicate the # regions in which this model provides meaningful information (i.e. voxels of # well-aligned fibers). Following :footcite:t:`Fieremans2011`, a simple # way to select this region is to generate a well-aligned fiber mask based on # the values of diffusion sphericity, planarity and linearity. Here we will # follow these selection criteria for a better comparison of our figures with # the original article published by :footcite:t:`Fieremans2011`. # Nevertheless, it is important to note that voxels with well-aligned fibers # can be selected based on other approaches such as using predefined regions # of interest. # Diffusion Tensor is computed based on the standard DKI model dkimodel = dki.DiffusionKurtosisModel(gtab) dkifit = dkimodel.fit(data_smooth, mask=mask) # Initialize well aligned mask with ones well_aligned_mask = np.ones(data.shape[:-1], dtype="bool") # Diffusion coefficient of linearity (cl) has to be larger than 0.4, thus # we exclude voxels with cl < 0.4. 
cl = dkifit.linearity.copy() well_aligned_mask[cl < 0.4] = False # Diffusion coefficient of planarity (cp) has to be lower than 0.2, thus # we exclude voxels with cp > 0.2. cp = dkifit.planarity.copy() well_aligned_mask[cp > 0.2] = False # Diffusion coefficient of sphericity (cs) has to be lower than 0.35, thus # we exclude voxels with cs > 0.35. cs = dkifit.sphericity.copy() well_aligned_mask[cs > 0.35] = False # Removing nan associated with background voxels well_aligned_mask[np.isnan(cl)] = False well_aligned_mask[np.isnan(cp)] = False well_aligned_mask[np.isnan(cs)] = False ############################################################################### # Analogous to DKI, the data fit can be done by calling the ``fit`` function of # the model's object as follows: dki_micro_fit = dki_micro_model.fit(data_smooth, mask=well_aligned_mask) ############################################################################### # The KurtosisMicrostructureFit object created by this ``fit`` function can # then be used to extract model parameters such as the axonal water fraction # and diffusion hindered tortuosity: AWF = dki_micro_fit.awf TORT = dki_micro_fit.tortuosity ############################################################################### # These parameters are plotted below on top of the mean kurtosis maps: MK = dkifit.mk(min_kurtosis=0, max_kurtosis=3) axial_slice = 9 fig1, ax = plt.subplots(1, 2, figsize=(9, 4), subplot_kw={"xticks": [], "yticks": []}) AWF[AWF == 0] = np.nan TORT[TORT == 0] = np.nan ax[0].imshow( MK[:, :, axial_slice].T, cmap=plt.cm.gray, interpolation="nearest", origin="lower" ) im0 = ax[0].imshow( AWF[:, :, axial_slice].T, cmap=plt.cm.Reds, alpha=0.9, vmin=0.3, vmax=0.7, interpolation="nearest", origin="lower", ) fig1.colorbar(im0, ax=ax.flat[0]) ax[1].imshow( MK[:, :, axial_slice].T, cmap=plt.cm.gray, interpolation="nearest", origin="lower" ) im1 = ax[1].imshow( TORT[:, :, axial_slice].T, cmap=plt.cm.Blues, alpha=0.9, vmin=2, vmax=6, interpolation="nearest", origin="lower", ) fig1.colorbar(im1, ax=ax.flat[1]) fig1.savefig("Kurtosis_Microstructural_measures.png") ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # Axonal water fraction (left panel) and tortuosity (right panel) values # of well-aligned fiber regions overlaid on a top of a mean kurtosis all-brain # image. # # # References # ---------- # # .. footbibliography:: # dipy-1.11.0/doc/examples/reconst_dsi.py000066400000000000000000000076751476546756600200610ustar00rootroot00000000000000""" ================================================= Reconstruct with Diffusion Spectrum Imaging (DSI) ================================================= We show how to apply Diffusion Spectrum Imaging :footcite:p:`Wedeen2005` to diffusion MRI datasets of Cartesian keyhole diffusion gradients. First import the necessary modules: """ import matplotlib.pyplot as plt import numpy as np from dipy.core.gradients import gradient_table from dipy.core.ndindex import ndindex from dipy.data import get_fnames, get_sphere from dipy.io.gradients import read_bvals_bvecs from dipy.io.image import load_nifti from dipy.reconst.dsi import DiffusionSpectrumModel from dipy.reconst.odf import gfa ############################################################################### # Download and get the data filenames for this tutorial. 
fraw, fbval, fbvec = get_fnames(name="taiwan_ntu_dsi")

###############################################################################
# ``data`` will hold the image array and ``gtab`` a GradientTable object with
# the gradient information (e.g. b-values). For example, to read the b-values
# it is possible to write ``print(gtab.bvals)``.
#
# Load the raw diffusion data and the affine.

data, affine, voxel_size = load_nifti(fraw, return_voxsize=True)
bvals, bvecs = read_bvals_bvecs(fbval, fbvec)
bvecs[1:] = bvecs[1:] / np.sqrt(np.sum(bvecs[1:] * bvecs[1:], axis=1))[:, None]
gtab = gradient_table(bvals, bvecs=bvecs)
print(f"data.shape {data.shape}")

###############################################################################
# This dataset has anisotropic voxel sizes, therefore reslicing is necessary.
#
# Instantiate the Model and apply it to the data.

dsmodel = DiffusionSpectrumModel(gtab)

###############################################################################
# Let's use only one slice from the data.

dataslice = data[:, :, data.shape[2] // 2]

dsfit = dsmodel.fit(dataslice)

###############################################################################
# Load an odf reconstruction sphere

sphere = get_sphere(name="repulsion724")

###############################################################################
# Calculate the ODFs with this specific sphere

ODF = dsfit.odf(sphere)
print(f"ODF.shape {ODF.shape}")

###############################################################################
# In a similar fashion it is possible to calculate the PDFs of all voxels
# in one call in the following way

PDF = dsfit.pdf()
print(f"PDF.shape {PDF.shape}")

###############################################################################
# We see that even for a single slice this PDF array is close to 345 MBytes,
# so we really have to be careful with memory usage when using this function
# with a full dataset.
#
# The simple solution is to generate/analyze the ODFs/PDFs by iterating
# through each voxel and not storing them in memory if that is not necessary.

for index in ndindex(dataslice.shape[:2]):
    pdf = dsmodel.fit(dataslice[index]).pdf()

###############################################################################
# If you really want to save the PDFs of a full dataset to disk, we
# recommend using memory maps (``numpy.memmap``), but still keep in mind that
# even if you do that, for example for a dataset of volume size
# ``(96, 96, 60)``, you will need about 2.5 GBytes, which can take less space
# when reasonable spheres (with < 1000 vertices) are used.
#
# Let's now calculate a map of Generalized Fractional Anisotropy (GFA)
# :footcite:p:`Tuch2004` using the DSI ODFs.

GFA = gfa(ODF)

fig_hist, ax = plt.subplots(1)
ax.set_axis_off()
plt.imshow(GFA.T)
plt.savefig("dsi_gfa.png", bbox_inches="tight")

###############################################################################
# .. rst-class:: centered small fst-italic fw-semibold
#
# See also :ref:`sphx_glr_examples_built_reconstruction_reconst_dsi_metrics.py`
# for calculating different types of DSI maps.
#
#
# References
# ----------
#
# .. footbibliography::
#
dipy-1.11.0/doc/examples/reconst_dsi_metrics.py000066400000000000000000000124571476546756600215770ustar00rootroot00000000000000"""
===============================
Calculate DSI-based scalar maps
===============================

We show how to calculate two DSI-based scalar maps: return to origin
probability (RTOP) :footcite:p:`Descoteaux2011` and mean square displacement
(MSD) :footcite:p:`Wu2007`, :footcite:p:`Wu2008` on your dataset.

First import the necessary modules:
"""

import matplotlib.pyplot as plt
import numpy as np

from dipy.core.gradients import gradient_table
from dipy.data import get_fnames
from dipy.io.gradients import read_bvals_bvecs
from dipy.io.image import load_nifti
from dipy.reconst.dsi import DiffusionSpectrumModel

###############################################################################
# Download and get the data filenames for this tutorial.

fraw, fbval, fbvec = get_fnames(name="taiwan_ntu_dsi")

###############################################################################
# ``data`` will hold the image array and ``gtab`` a GradientTable object with
# the gradient information (e.g. b-values). For example, to read the b-values
# it is possible to write ``print(gtab.bvals)``.
#
# Load the raw diffusion data and the affine.

data, affine = load_nifti(fraw)
bvals, bvecs = read_bvals_bvecs(fbval, fbvec)
bvecs[1:] = bvecs[1:] / np.sqrt(np.sum(bvecs[1:] * bvecs[1:], axis=1))[:, None]
gtab = gradient_table(bvals, bvecs=bvecs)
print(f"data.shape {data.shape}")

###############################################################################
# Instantiate the Model and apply it to the data.

dsmodel = DiffusionSpectrumModel(gtab, qgrid_size=35, filter_width=18.5)

###############################################################################
# Let's use only one slice from the data.

dataslice = data[30:70, 20:80, data.shape[2] // 2]

###############################################################################
# Normalize the signal by the b0

dataslice = dataslice / (dataslice[..., 0, None]).astype(float)

###############################################################################
# Calculate the return to origin probability on the signal,
# which corresponds to the integral of the signal.

print("Calculating... rtop_signal")
rtop_signal = dsmodel.fit(dataslice).rtop_signal()

###############################################################################
# Now we calculate the return to origin probability on the propagator, which
# corresponds to its central value. By default the propagator is divided by
# its sum in order to obtain a properly normalized pdf; however, this
# normalization changes the values of RTOP, therefore in order to compare it
# with the RTOP previously calculated on the signal we set the ``normalized``
# parameter to False.

print("Calculating... rtop_pdf")
rtop_pdf = dsmodel.fit(dataslice).rtop_pdf(normalized=False)

###############################################################################
# In theory, these two measures must be equal. To show that, we calculate the
# mean square error between the two measures.

mse = np.sum((rtop_signal - rtop_pdf) ** 2) / rtop_signal.size
print(f"mse = {mse:f}")

###############################################################################
# Leaving the ``normalized`` parameter at its default changes the values of
# the RTOP but not the contrast between the voxels.

print("Calculating...
rtop_pdf_norm") rtop_pdf_norm = dsmodel.fit(dataslice).rtop_pdf() ############################################################################### # Let's calculate the mean square displacement on the normalized propagator. print("Calculating... msd_norm") msd_norm = dsmodel.fit(dataslice).msd_discrete() ############################################################################### # Turning the normalized parameter to false makes it possible to calculate # the mean square displacement on the propagator without normalization. print("Calculating... msd") msd = dsmodel.fit(dataslice).msd_discrete(normalized=False) ############################################################################### # Show the RTOP images and save them in rtop.png. fig = plt.figure(figsize=(6, 6)) ax1 = fig.add_subplot(2, 2, 1, title="rtop_signal") ax1.set_axis_off() ind = ax1.imshow(rtop_signal.T, interpolation="nearest", origin="lower") plt.colorbar(ind) ax2 = fig.add_subplot(2, 2, 2, title="rtop_pdf_norm") ax2.set_axis_off() ind = ax2.imshow(rtop_pdf_norm.T, interpolation="nearest", origin="lower") plt.colorbar(ind) ax3 = fig.add_subplot(2, 2, 3, title="rtop_pdf") ax3.set_axis_off() ind = ax3.imshow(rtop_pdf.T, interpolation="nearest", origin="lower") plt.colorbar(ind) plt.savefig("rtop.png") ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # Return to origin probability. # # # Show the MSD images and save them in msd.png. fig = plt.figure(figsize=(7, 3)) ax1 = fig.add_subplot(1, 2, 1, title="msd_norm") ax1.set_axis_off() ind = ax1.imshow(msd_norm.T, interpolation="nearest", origin="lower") plt.colorbar(ind) ax2 = fig.add_subplot(1, 2, 2, title="msd") ax2.set_axis_off() ind = ax2.imshow(msd.T, interpolation="nearest", origin="lower") plt.colorbar(ind) plt.savefig("msd.png") ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # Mean square displacement. # # # References # ---------- # # .. footbibliography:: # dipy-1.11.0/doc/examples/reconst_dsid.py000066400000000000000000000057151476546756600202160ustar00rootroot00000000000000""" =============================== DSI Deconvolution (DSID) vs DSI =============================== An alternative method to DSI is the method proposed by :footcite:p:`CanalesRodriguez2010` which is called DSI with Deconvolution. This algorithm is using Lucy-Richardson deconvolution in the diffusion propagator with the goal to create sharper ODFs with higher angular resolution. In this example we will show with simulated data how this method's ODF performs against standard DSI ODF and a ground truth multi tensor ODF. """ import numpy as np from dipy.core.gradients import gradient_table from dipy.data import get_fnames, get_sphere from dipy.reconst.dsi import DiffusionSpectrumDeconvModel, DiffusionSpectrumModel from dipy.sims.voxel import multi_tensor, multi_tensor_odf from dipy.viz import actor, window ############################################################################### # For the simulation we will use a standard DSI acquisition scheme with 514 # gradient directions and 1 S0. btable = np.loadtxt(get_fnames(name="dsi515btable")) gtab = gradient_table(btable[:, 0], bvecs=btable[:, 1:]) ############################################################################### # Let's create a multi tensor with 2 fiber directions at 60 degrees. 
evals = np.array([[0.0015, 0.0003, 0.0003], [0.0015, 0.0003, 0.0003]]) directions = [(-30, 0), (30, 0)] fractions = [50, 50] signal, _ = multi_tensor( gtab, evals, S0=100, angles=directions, fractions=fractions, snr=None ) sphere = get_sphere(name="repulsion724").subdivide(n=1) odf_gt = multi_tensor_odf( sphere.vertices, evals, angles=directions, fractions=fractions ) ############################################################################### # Perform the reconstructions with standard DSI and DSI with deconvolution. dsi_model = DiffusionSpectrumModel(gtab) dsi_odf = dsi_model.fit(signal).odf(sphere) dsid_model = DiffusionSpectrumDeconvModel(gtab) dsid_odf = dsid_model.fit(signal).odf(sphere) ############################################################################### # Finally, we can visualize the ground truth ODF, together with the DSI and DSI # with deconvolution ODFs and observe that with the deconvolved method it is # easier to resolve the correct fiber directions because the ODF is sharper. # Enables/disables interactive visualization interactive = False scene = window.Scene() # concatenate data as 4D array odfs = np.vstack((odf_gt, dsi_odf, dsid_odf))[:, None, None] odf_actor = actor.odf_slicer(odfs, sphere=sphere, scale=0.5, colormap="plasma") odf_actor.display(y=0) odf_actor.RotateX(90) scene.add(odf_actor) window.record(scene=scene, out_path="dsid.png", size=(300, 300)) if interactive: window.show(scene) ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # Ground truth ODF (left), DSI ODF (middle), DSI with Deconvolution ODF (right). # # # References # ---------- # # .. footbibliography:: # dipy-1.11.0/doc/examples/reconst_dti.py000066400000000000000000000230401476546756600200420ustar00rootroot00000000000000""" ===================================================================== Reconstruction of the diffusion signal with DTI (single tensor) model ===================================================================== The diffusion tensor model is a model that describes the diffusion within a voxel. First proposed by Basser and colleagues :footcite:p:`Basser1994a`, it has been very influential in demonstrating the utility of diffusion MRI in characterizing the micro-structure of white matter tissue and of the biophysical properties of tissue, inferred from local diffusion properties and it is still very commonly used. The diffusion tensor models the diffusion signal as: .. math:: \\frac{S(\\mathbf{g}, b)}{S_0} = e^{-b\\mathbf{g}^T \\mathbf{D} \\mathbf{g}} Where $\\mathbf{g}$ is a unit vector in 3 space indicating the direction of measurement and b are the parameters of measurement, such as the strength and duration of diffusion-weighting gradient. $S(\\mathbf{g}, b)$ is the diffusion-weighted signal measured and $S_0$ is the signal conducted in a measurement with no diffusion weighting. $\\mathbf{D}$ is a positive-definite quadratic form, which contains six free parameters to be fit. These six parameters are: .. math:: \\mathbf{D} = \\begin{pmatrix} D_{xx} & D_{xy} & D_{xz} \\\\ D_{yx} & D_{yy} & D_{yz} \\\\ D_{zx} & D_{zy} & D_{zz} \\\\ \\end{pmatrix} This matrix is a variance/covariance matrix of the diffusivity along the three spatial dimensions. Note that we can assume that diffusivity has antipodal symmetry, so elements across the diagonal are equal. For example: $D_{xy} = D_{yx}$. This is why there are only 6 free parameters to estimate here. 
In the following example we show how to reconstruct your diffusion datasets
using a single tensor model.

First import the necessary modules:

``numpy`` is for numerical computation
"""

import numpy as np

"""
``dipy.io.image`` is for loading / saving imaging datasets
``dipy.io.gradients`` is for loading / saving our bvals and bvecs
"""

from dipy.core.gradients import gradient_table
from dipy.io.gradients import read_bvals_bvecs
from dipy.io.image import load_nifti, save_nifti

"""
``dipy.reconst`` is for the reconstruction algorithms which we use to create
voxel models from the raw data.
"""

import dipy.reconst.dti as dti

"""
``dipy.data`` is used for small datasets that we use in tests and examples.
"""

from dipy.data import get_fnames

"""
``get_fnames`` will download the raw dMRI dataset of a single subject. The
size of the dataset is 87 MBytes. You only need to fetch once. It will return
the file names of our data.
"""

hardi_fname, hardi_bval_fname, hardi_bvec_fname = get_fnames(name="stanford_hardi")

"""
Next, we read the saved dataset. gtab contains a ``GradientTable`` object
(information about the gradients e.g. b-values and b-vectors).
"""

data, affine = load_nifti(hardi_fname)

bvals, bvecs = read_bvals_bvecs(hardi_bval_fname, hardi_bvec_fname)
gtab = gradient_table(bvals, bvecs=bvecs)

print(f"data.shape {data.shape}")

"""
data.shape ``(81, 106, 76, 160)``

First of all, we mask and crop the data. This is a quick way to avoid
calculating Tensors on the background of the image. This is done using
DIPY_'s ``mask`` module.
"""

from dipy.segment.mask import median_otsu

maskdata, mask = median_otsu(
    data, vol_idx=range(10, 50), median_radius=3, numpass=1, autocrop=True, dilate=2
)
print(f"maskdata.shape {maskdata.shape}")

"""
maskdata.shape ``(72, 87, 59, 160)``

Now that we have prepared the datasets we can go forward with the voxel
reconstruction. First, we instantiate the Tensor model in the following way.
"""

tenmodel = dti.TensorModel(gtab)

"""
Fitting the data is very simple. We just need to call the fit method of the
TensorModel in the following way:
"""

tenfit = tenmodel.fit(maskdata)

"""
The fit method creates a ``TensorFit`` object which contains the fitting
parameters and other attributes of the model. You can recover the 6 values of
the triangular matrix representing the tensor D. By default, in DIPY, values
are ordered as (Dxx, Dxy, Dyy, Dxz, Dyz, Dzz). The ``tensor_vals`` variable
defined below is a 4D data with last dimension of size 6.
"""

tensor_vals = dti.lower_triangular(tenfit.quadratic_form)

"""
You can also recover other metrics from the model. For example we can
generate fractional anisotropy (FA) from the eigen-values of the tensor. FA
is used to characterize the degree to which the distribution of diffusion in
a voxel is directional. That is, whether there is relatively unrestricted
diffusion in one particular direction.

Mathematically, FA is defined as the normalized variance of the eigen-values
of the tensor:

.. math::

    FA = \\sqrt{\\frac{1}{2}\\frac{(\\lambda_1-\\lambda_2)^2+(\\lambda_1-
    \\lambda_3)^2+(\\lambda_2-\\lambda_3)^2}{\\lambda_1^2+
    \\lambda_2^2+\\lambda_3^2}}

Note that FA should be interpreted carefully. It may be an indication of the
density of packing of fibers in a voxel, and the amount of myelin wrapping
these axons, but it is not always a measure of "tissue integrity". For
example, FA may decrease in locations in which there is fanning of white
matter fibers, or where more than one population of white matter fibers
crosses.
""" print("Computing anisotropy measures (FA, MD, RGB)") from dipy.reconst.dti import color_fa, fractional_anisotropy FA = fractional_anisotropy(tenfit.evals) """ In the background of the image the fitting will not be accurate there is no signal and possibly we will find FA values with nans (not a number). We can easily remove these in the following way. """ FA[np.isnan(FA)] = 0 """ Saving the FA images is very easy using nibabel_. We need the FA volume and the affine matrix which transform the image's coordinates to the world coordinates. Here, we choose to save the FA in ``float32``. """ save_nifti("tensor_fa.nii.gz", FA.astype(np.float32), affine) """ You can now see the result with any nifti viewer or check it slice by slice using matplotlib_'s ``imshow``. In the same way you can save the eigen values, the eigen vectors or any other properties of the tensor. """ save_nifti("tensor_evecs.nii.gz", tenfit.evecs.astype(np.float32), affine) """ Other tensor statistics can be calculated from the ``tenfit`` object. For example, a commonly calculated statistic is the mean diffusivity (MD). This is simply the mean of the eigenvalues of the tensor. Since FA is a normalized measure of variance and MD is the mean, they are often used as complimentary measures. In DIPY, there are two equivalent ways to calculate the mean diffusivity. One is by calling the ``mean_diffusivity`` module function on the eigen-values of the ``TensorFit`` class instance: """ MD1 = dti.mean_diffusivity(tenfit.evals) save_nifti("tensors_md.nii.gz", MD1.astype(np.float32), affine) """ The other is to call the ``TensorFit`` class method: """ MD2 = tenfit.md """ Obviously, the quantities are identical. We can also compute the colored FA or RGB-map :footcite:p:`Pajevic1999`. First, we make sure that the FA is scaled between 0 and 1, we compute the RGB map and save it. """ FA = np.clip(FA, 0, 1) RGB = color_fa(FA, tenfit.evecs) save_nifti("tensor_rgb.nii.gz", np.array(255 * RGB, "uint8"), affine) """ Let's try to visualize the tensor ellipsoids of a small rectangular area in an axial slice of the splenium of the corpus callosum (CC). """ print("Computing tensor ellipsoids in a part of the splenium of the CC") from dipy.data import get_sphere sphere = get_sphere(name="repulsion724") from dipy.viz import actor, window # Enables/disables interactive visualization interactive = False scene = window.Scene() evals = tenfit.evals[13:43, 44:74, 28:29] evecs = tenfit.evecs[13:43, 44:74, 28:29] """ We can color the ellipsoids using the ``color_fa`` values that we calculated above. In this example we additionally normalize the values to increase the contrast. """ cfa = RGB[13:43, 44:74, 28:29] cfa /= cfa.max() scene.add( actor.tensor_slicer(evals, evecs, scalar_colors=cfa, sphere=sphere, scale=0.3) ) print("Saving illustration as tensor_ellipsoids.png") window.record( scene=scene, n_frames=1, out_path="tensor_ellipsoids.png", size=(600, 600) ) if interactive: window.show(scene) """ .. rst-class:: centered small fst-italic fw-semibold Tensor Ellipsoids. """ scene.clear() """ Finally, we can visualize the tensor Orientation Distribution Functions for the same area as we did with the ellipsoids. """ tensor_odfs = tenmodel.fit(data[20:50, 55:85, 38:39]).odf(sphere) odf_actor = actor.odf_slicer(tensor_odfs, sphere=sphere, scale=0.5, colormap=None) scene.add(odf_actor) print("Saving illustration as tensor_odfs.png") window.record(scene=scene, n_frames=1, out_path="tensor_odfs.png", size=(600, 600)) if interactive: window.show(scene) """ .. 
Tensor ODFs.

Note that while the tensor model is an accurate and reliable model of the
diffusion signal in the white matter, it has the drawback that it only has
one principal diffusion direction. Therefore, in locations in the brain that
contain multiple fiber populations crossing each other, the tensor model may
indicate that the principal diffusion direction is intermediate to these
directions. Consequently, using the principal diffusion direction for
tracking in these locations may be misleading and may lead to errors in
defining the tracks. Fortunately, other reconstruction methods can be used to
represent the diffusion and fiber orientations in those locations. These are
presented in other examples.

References
----------

.. footbibliography::
"""
dipy-1.11.0/doc/examples/reconst_forecast.py000066400000000000000000000121371476546756600210750ustar00rootroot00000000000000"""
==============================================================
Crossing invariant fiber response function with FORECAST model
==============================================================

We show how to obtain a voxel-specific response function in the form of an
axially symmetric tensor and the fODF using the FORECAST model from
:footcite:p:`Anderson2005`, :footcite:p:`Kaden2016a` and
:footcite:p:`Zucchelli2017`.

First import the necessary modules:
"""

import os.path as op

import matplotlib.pyplot as plt
import nibabel as nib
import numpy as np

from dipy.core.gradients import gradient_table
from dipy.data import fetch_hbn, get_sphere
from dipy.reconst.forecast import ForecastModel
from dipy.viz import actor, window

###############################################################################
# Download and read the data for this tutorial. Our implementation of FORECAST
# requires multi-shell data. ``fetch_hbn()`` provides data that was acquired
# using b-values of 1000 and 2000 as part of the Healthy Brain Network study
# :footcite:p:`Alexander2017` and was preprocessed and quality controlled in
# the HBN-POD2 dataset :footcite:p:`RichieHalford2022`.

data_path = fetch_hbn(["NDARAA948VFH"])[1]
dwi_path = op.join(
    data_path, "derivatives", "qsiprep", "sub-NDARAA948VFH", "ses-HBNsiteRU", "dwi"
)

img = nib.load(
    op.join(
        dwi_path,
        "sub-NDARAA948VFH_ses-HBNsiteRU_acq-64dir_space-T1w_desc-preproc_dwi.nii.gz",
    )
)

gtab = gradient_table(
    op.join(
        dwi_path,
        "sub-NDARAA948VFH_ses-HBNsiteRU_acq-64dir_space-T1w_desc-preproc_dwi.bval",
    ),
    bvecs=op.join(
        dwi_path,
        "sub-NDARAA948VFH_ses-HBNsiteRU_acq-64dir_space-T1w_desc-preproc_dwi.bvec",
    ),
)

data = np.asarray(img.dataobj)

mask_img = nib.load(
    op.join(
        dwi_path,
        "sub-NDARAA948VFH_ses-HBNsiteRU_acq-64dir_space-T1w_desc-brain_mask.nii.gz",
    )
)

brain_mask = mask_img.get_fdata()

###############################################################################
# Let us consider only a single slice for the FORECAST fitting

data_small = data[:, :, 50:51]
mask_small = brain_mask[:, :, 50:51]

###############################################################################
# Instantiate the FORECAST Model.
#
# ``sh_order_max`` is the spherical harmonics order (l) used for the fODF.
#
# ``dec_alg`` is the spherical deconvolution algorithm used for the FORECAST
# basis fitting; in this case we use the Constrained Spherical Deconvolution
# (CSD) algorithm.
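###############################################################################
# (Aside added to this walkthrough, not part of the original example.) For a
# symmetric, even-order spherical harmonics basis of maximal order ``l_max``,
# the number of fODF coefficients is (l_max + 1) * (l_max + 2) / 2, so the
# setting ``sh_order_max=6`` used below corresponds to 28 coefficients. A
# one-line sanity check:

l_max = 6
print(f"number of SH coefficients for l_max={l_max}: {(l_max + 1) * (l_max + 2) // 2}")

###############################################################################
# We can now instantiate the model: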
fm = ForecastModel(gtab, sh_order_max=6, dec_alg="CSD")

###############################################################################
# Fit the FORECAST model to the data

f_fit = fm.fit(data_small, mask=mask_small)

###############################################################################
# Calculate the crossing invariant tensor indices :footcite:p:`Kaden2016a`:
# the parallel diffusivity, the perpendicular diffusivity, the fractional
# anisotropy and the mean diffusivity.

d_par = f_fit.dpar
d_perp = f_fit.dperp
fa = f_fit.fractional_anisotropy()
md = f_fit.mean_diffusivity()

###############################################################################
# Show the indices and save them in FORECAST_indices.png.

fig = plt.figure(figsize=(6, 6))
ax1 = fig.add_subplot(2, 2, 1, title="parallel diffusivity")
ax1.set_axis_off()
ind = ax1.imshow(
    d_par[:, :, 0].T, interpolation="nearest", origin="lower", cmap=plt.cm.gray
)
plt.colorbar(ind, shrink=0.6)
ax2 = fig.add_subplot(2, 2, 2, title="perpendicular diffusivity")
ax2.set_axis_off()
ind = ax2.imshow(
    d_perp[:, :, 0].T, interpolation="nearest", origin="lower", cmap=plt.cm.gray
)
plt.colorbar(ind, shrink=0.6)
ax3 = fig.add_subplot(2, 2, 3, title="fractional anisotropy")
ax3.set_axis_off()
ind = ax3.imshow(
    fa[:, :, 0].T, interpolation="nearest", origin="lower", cmap=plt.cm.gray
)
plt.colorbar(ind, shrink=0.6)
ax4 = fig.add_subplot(2, 2, 4, title="mean diffusivity")
ax4.set_axis_off()
ind = ax4.imshow(
    md[:, :, 0].T, interpolation="nearest", origin="lower", cmap=plt.cm.gray
)
plt.colorbar(ind, shrink=0.6)
plt.savefig("FORECAST_indices.png", dpi=300, bbox_inches="tight")

###############################################################################
# .. rst-class:: centered small fst-italic fw-semibold
#
# FORECAST scalar indices.
#
#
# Load an ODF reconstruction sphere

sphere = get_sphere(name="repulsion724")

###############################################################################
# Compute the fODFs.

odf = f_fit.odf(sphere)
print(f"fODF.shape {odf.shape}")

###############################################################################
# Display a part of the fODFs

odf_actor = actor.odf_slicer(
    odf[30:60, 30:60, :], sphere=sphere, colormap="plasma", scale=0.6
)
scene = window.Scene()
scene.add(odf_actor)
window.record(scene=scene, out_path="fODFs.png", size=(600, 600), magnification=4)

###############################################################################
# .. rst-class:: centered small fst-italic fw-semibold
#
# Fiber Orientation Distribution Functions, in a small ROI of the brain.
#
#
# References
# ----------
#
# .. footbibliography::
#
dipy-1.11.0/doc/examples/reconst_fwdti.py000066400000000000000000000222461476546756600204060ustar00rootroot00000000000000"""
=============================================================================
Using the free water elimination model to remove DTI free water contamination
=============================================================================

As shown previously (see
:ref:`sphx_glr_examples_built_reconstruction_reconst_dti.py`), the diffusion
tensor model is a simple way to characterize diffusion anisotropy. However,
in regions near the ventricles and parenchyma, anisotropy can be
underestimated by partial volume effects of the cerebrospinal fluid (CSF).
This free water contamination can particularly corrupt Diffusion Tensor
Imaging analysis of microstructural changes when different groups of subjects
show different brain morphology (e.g.
brain ventricle enlargement associated with brain tissue atrophy that occurs
in several brain pathologies and aging). A way to remove these free water
influences is to expand the DTI model to take into account an extra
compartment representing the contributions of free water diffusion
:footcite:p:`Pasternak2009`. The expression of the expanded DTI model is
shown below:

.. math::

    S(\\mathbf{g}, b) = S_0(1-f)e^{-b\\mathbf{g}^T \\mathbf{D}
    \\mathbf{g}}+S_0fe^{-b D_{iso}}

where $\\mathbf{g}$ and $b$ are the diffusion gradient direction and
weighting, respectively (for more information see
:ref:`sphx_glr_examples_built_reconstruction_reconst_dti.py`),
$S(\\mathbf{g}, b)$ is the diffusion-weighted signal measured, $S_0$ is the
signal in a measurement with no diffusion weighting, $\\mathbf{D}$ is the
diffusion tensor, $f$ the volume fraction of the free water component, and
$D_{iso}$ is the isotropic value of the free water diffusion (normally set to
$3.0 \\times 10^{-3} mm^{2}s^{-1}$).

In this example, we show how to process a diffusion-weighted dataset using an
adapted version of the free water elimination proposed by
:footcite:p:`Hoy2014`.

The full details of DIPY's free water DTI implementation were published in
:footcite:p:`NetoHenriques2017`. Please cite this work if you use this
algorithm.

Let's start by importing the relevant modules:
"""

import os.path as op

import matplotlib.pyplot as plt
import nibabel as nib
import numpy as np

from dipy.core.gradients import gradient_table
from dipy.data import fetch_hbn
import dipy.reconst.dti as dti
import dipy.reconst.fwdti as fwdti

###############################################################################
# Without spatial constraints the free water elimination model cannot be
# solved with data acquired at a single non-zero b-value
# :footcite:p:`Hoy2014`. Therefore, here we download a dataset that was
# acquired with multiple b-values.

data_path = fetch_hbn(["NDARAA948VFH"])[1]
dwi_path = op.join(
    data_path, "derivatives", "qsiprep", "sub-NDARAA948VFH", "ses-HBNsiteRU", "dwi"
)

img = nib.load(
    op.join(
        dwi_path,
        "sub-NDARAA948VFH_ses-HBNsiteRU_acq-64dir_space-T1w_desc-preproc_dwi.nii.gz",
    )
)

gtab = gradient_table(
    op.join(
        dwi_path,
        "sub-NDARAA948VFH_ses-HBNsiteRU_acq-64dir_space-T1w_desc-preproc_dwi.bval",
    ),
    bvecs=op.join(
        dwi_path,
        "sub-NDARAA948VFH_ses-HBNsiteRU_acq-64dir_space-T1w_desc-preproc_dwi.bvec",
    ),
)

data = np.asarray(img.dataobj)

###############################################################################
# The free water DTI model can take some minutes to process the full data
# set. Thus, we use a brain mask that was calculated during pre-processing,
# to remove the background of the image and avoid unnecessary calculations.

mask_img = nib.load(
    op.join(
        dwi_path,
        "sub-NDARAA948VFH_ses-HBNsiteRU_acq-64dir_space-T1w_desc-brain_mask.nii.gz",
    )
)
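###############################################################################
# (Aside added to this walkthrough; all parameter values below are
# hypothetical.) The two-compartment signal equation given above can be
# sketched numerically for a single gradient direction, where the tissue
# compartment reduces to a mono-exponential decay with diffusivity d_tissue:

b_sketch = np.array([0.0, 500.0, 1000.0, 2000.0])  # s/mm^2
f_h = 0.3  # hypothetical free water fraction
d_tissue = 0.8e-3  # hypothetical tissue diffusivity along g (mm^2/s)
d_iso = 3.0e-3  # isotropic free water diffusivity (mm^2/s)
signal_rel = (1 - f_h) * np.exp(-b_sketch * d_tissue) + f_h * np.exp(-b_sketch * d_iso)
print(f"relative signal S(b)/S0: {np.round(signal_rel, 3)}")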
###############################################################################
# Moreover, for illustration purposes we process only one slice of the data.

mask = mask_img.get_fdata()

data_small = data[:, :, 50:51]
mask_small = mask[:, :, 50:51]

###############################################################################
# The free water elimination model fit can then be initialized by
# instantiating a FreeWaterTensorModel class object:

fwdtimodel = fwdti.FreeWaterTensorModel(gtab)

###############################################################################
# The data can then be fitted using the ``fit`` function of the defined model
# object:

fwdtifit = fwdtimodel.fit(data_small, mask=mask_small)

###############################################################################
# This two-step procedure will create a FreeWaterTensorFit object which
# contains all the diffusion tensor statistics, free from free water
# contamination. Below we extract the fractional anisotropy (FA) and the mean
# diffusivity (MD) of the free water diffusion tensor.

FA = fwdtifit.fa
MD = fwdtifit.md

###############################################################################
# For comparison we also compute the same standard measures processed by the
# standard DTI model

dtimodel = dti.TensorModel(gtab)

dtifit = dtimodel.fit(data_small, mask=mask_small)

dti_FA = dtifit.fa
dti_MD = dtifit.md

###############################################################################
# Below the FA values for both the free water elimination DTI model and the
# standard DTI model are plotted in panels A and B, while the respective MD
# values are plotted in panels D and E. For a better visualization of the
# effect of the free water correction, the differences between these two
# metrics are shown in panels C and F. In addition to the standard diffusion
# statistics, the estimated volume fraction of the free water contamination
# is shown in panel G.

fig1, ax = plt.subplots(2, 4, figsize=(12, 6), subplot_kw={"xticks": [], "yticks": []})

fig1.subplots_adjust(hspace=0.3, wspace=0.05)

ax.flat[0].imshow(FA[:, :, 0].T, origin="lower", cmap="gray", vmin=0, vmax=1)
ax.flat[0].set_title("A) fwDTI FA")
ax.flat[1].imshow(dti_FA[:, :, 0].T, origin="lower", cmap="gray", vmin=0, vmax=1)
ax.flat[1].set_title("B) standard DTI FA")

FAdiff = abs(FA[:, :, 0] - dti_FA[:, :, 0])
ax.flat[2].imshow(FAdiff.T, cmap="gray", origin="lower", vmin=0, vmax=1)
ax.flat[2].set_title("C) FA difference")

ax.flat[3].axis("off")

ax.flat[4].imshow(MD[:, :, 0].T, origin="lower", cmap="gray", vmin=0, vmax=2.5e-3)
ax.flat[4].set_title("D) fwDTI MD")
ax.flat[5].imshow(dti_MD[:, :, 0].T, origin="lower", cmap="gray", vmin=0, vmax=2.5e-3)
ax.flat[5].set_title("E) standard DTI MD")

MDdiff = abs(MD[:, :, 0] - dti_MD[:, :, 0])
ax.flat[6].imshow(MDdiff.T, origin="lower", cmap="gray", vmin=0, vmax=2.5e-3)
ax.flat[6].set_title("F) MD difference")

F = fwdtifit.f

ax.flat[7].imshow(F[:, :, 0].T, origin="lower", cmap="gray", vmin=0, vmax=1)
ax.flat[7].set_title("G) free water volume")

plt.show()
fig1.savefig("In_vivo_free_water_DTI_and_standard_DTI_measures.png")

###############################################################################
# .. rst-class:: centered small fst-italic fw-semibold
#
# In vivo diffusion measures obtained from the free water DTI and standard
# DTI. The values of Fractional Anisotropy for the free water DTI model and
# standard DTI model and their difference are shown in the upper panels (A-C),
# while respective MD values are shown in the lower panels (D-F). In addition
# the free water volume fraction estimated from the fwDTI model is shown in
# panel G.
#
#
# From the figure, one can observe that the free water elimination model
# produces in general higher values of FA and lower values of MD than the
# standard DTI model. These differences in FA and MD estimation are expected
# due to the suppression of the free water isotropic diffusion components.
# Unexpectedly high FA values are however observed in the periventricular
# gray matter. This is a known artefact in regions associated with voxels
# with a high water volume fraction (i.e. voxels containing mostly CSF). We
# can remove these problematic voxels by excluding all FA values associated
# with measured volume fractions above a reasonable threshold of 0.7:

FA[F > 0.7] = 0
dti_FA[F > 0.7] = 0

###############################################################################
# Below we reproduce the plots of the in vivo FA from the two DTI fits, where
# we can see that the inflated FA values were practically removed:

fig1, ax = plt.subplots(1, 3, figsize=(9, 3), subplot_kw={"xticks": [], "yticks": []})

fig1.subplots_adjust(hspace=0.3, wspace=0.05)

ax.flat[0].imshow(FA[:, :, 0].T, origin="lower", cmap="gray", vmin=0, vmax=1)
ax.flat[0].set_title("A) fwDTI FA")
ax.flat[1].imshow(dti_FA[:, :, 0].T, origin="lower", cmap="gray", vmin=0, vmax=1)
ax.flat[1].set_title("B) standard DTI FA")

FAdiff = abs(FA[:, :, 0] - dti_FA[:, :, 0])
ax.flat[2].imshow(FAdiff.T, cmap="gray", origin="lower", vmin=0, vmax=1)
ax.flat[2].set_title("C) FA difference")

plt.show()
fig1.savefig("In_vivo_free_water_DTI_and_standard_DTI_corrected.png")

###############################################################################
# .. rst-class:: centered small fst-italic fw-semibold
#
# In vivo FA measures obtained from the free water DTI (A) and standard
# DTI (B) and their difference (C). Problematic inflated FA values of the
# images were removed by dismissing voxels with a volume fraction above a
# threshold of 0.7.
#
#
# References
# ----------
#
# .. footbibliography::
#
dipy-1.11.0/doc/examples/reconst_gqi.py000066400000000000000000000100361476546756600200430ustar00rootroot00000000000000"""
===============================================
Reconstruct with Generalized Q-Sampling Imaging
===============================================

We show how to apply Generalized Q-Sampling Imaging :footcite:p:`Yeh2010` to
diffusion MRI datasets. You can think of GQI as an analytical version of the
DSI orientation distribution function (ODF) (Garyfallidis, PhD thesis, 2012).

First import the necessary modules:
"""

import numpy as np

from dipy.core.gradients import gradient_table
from dipy.data import get_fnames, get_sphere
from dipy.direction import peaks_from_model
from dipy.io.gradients import read_bvals_bvecs
from dipy.io.image import load_nifti
from dipy.reconst.gqi import GeneralizedQSamplingModel

###############################################################################
# Download and get the data filenames for this tutorial.

fraw, fbval, fbvec = get_fnames(name="taiwan_ntu_dsi")

###############################################################################
# ``data`` contains the voxel data and ``gtab`` contains a ``GradientTable``
# object (gradient information e.g. b-values). For example, to read the
# b-values it is possible to write::
#
#    print(gtab.bvals)
#
# Load the raw diffusion data and the affine.
data, affine, voxel_size = load_nifti(fraw, return_voxsize=True)
bvals, bvecs = read_bvals_bvecs(fbval, fbvec)
# Normalize the b-vectors to unit length, leaving the first (b0) entry as is.
bvecs[1:] = bvecs[1:] / np.sqrt(np.sum(bvecs[1:] * bvecs[1:], axis=1))[:, None]
gtab = gradient_table(bvals, bvecs=bvecs)
print(f"data.shape {data.shape}")

###############################################################################
# This dataset has anisotropic voxel sizes, therefore reslicing is necessary.
#
# Instantiate the model and apply it to the data.

gqmodel = GeneralizedQSamplingModel(gtab, sampling_length=3)

###############################################################################
# The parameter ``sampling_length`` is used here to set the diffusion
# sampling length; larger values produce sharper but potentially noisier
# ODFs.
#
# Let's use only one slice of the data.

dataslice = data[:, :, data.shape[2] // 2]

mask = dataslice[..., 0] > 50
gqfit = gqmodel.fit(dataslice, mask=mask)

###############################################################################
# Load an ODF reconstruction sphere

sphere = get_sphere(name="repulsion724")

###############################################################################
# Calculate the ODFs with this specific sphere

ODF = gqfit.odf(sphere)
print(f"ODF.shape {ODF.shape}")

###############################################################################
# Using ``peaks_from_model`` we can find the main peaks of the ODFs and other
# properties.

gqpeaks = peaks_from_model(
    model=gqmodel,
    data=dataslice,
    sphere=sphere,
    relative_peak_threshold=0.5,
    min_separation_angle=25,
    mask=mask,
    return_odf=False,
    normalize_peaks=True,
)

gqpeak_values = gqpeaks.peak_values

###############################################################################
# ``gqpeak_indices`` shows which sphere points have the maximum values.

gqpeak_indices = gqpeaks.peak_indices

###############################################################################
# It is also possible to calculate GFA.

GFA = gqpeaks.gfa

print(f"GFA.shape {GFA.shape}")

###############################################################################
# With the parameter ``return_odf=True`` we can obtain the ODF using
# ``gqpeaks.odf``

gqpeaks = peaks_from_model(
    model=gqmodel,
    data=dataslice,
    sphere=sphere,
    relative_peak_threshold=0.5,
    min_separation_angle=25,
    mask=mask,
    return_odf=True,
    normalize_peaks=True,
)

###############################################################################
# This ODF will of course be identical to the ODF calculated above, as long
# as the same data and mask are used.

print(np.sum(gqpeaks.odf != ODF) == 0)

###############################################################################
# The advantage of using ``peaks_from_model`` is that it calculates the ODF
# only once and saves it, or discards it if it is not needed.
#
#
# References
# ----------
#
# .. footbibliography::
#
dipy-1.11.0/doc/examples/reconst_ivim.py000066400000000000000000000326551476546756600202360ustar00rootroot00000000000000r"""
===================================
Intravoxel incoherent motion (IVIM)
===================================

The intravoxel incoherent motion (IVIM) model describes diffusion and
perfusion in the signal acquired with a diffusion MRI sequence that contains
multiple low b-values. The IVIM model can be understood as an adaptation of
the work of Stejskal and Tanner :footcite:p:`Stejskal1965` in biological
tissue, and was proposed by Le Bihan :footcite:p:`LeBihan1988`.
The model assumes two compartments: a slow-moving compartment, where
particles diffuse in a Brownian fashion as a consequence of thermal energy,
and a fast-moving compartment (the vascular compartment), where blood moves
as a consequence of a pressure gradient. In the first compartment, the
diffusion coefficient is $\mathbf{D}$ while in the second compartment, a
pseudo diffusion term $\mathbf{D^*}$ is introduced that describes the
displacement of the blood elements in an assumed randomly laid out vascular
network, at the macroscopic level. According to :footcite:p:`LeBihan1988`,
$\mathbf{D^*}$ is greater than $\mathbf{D}$.

The IVIM model expresses the MRI signal as follows:

.. math::

    S(b)=S_0(fe^{-bD^*}+(1-f)e^{-bD})

where $\mathbf{b}$ is the diffusion gradient weighting value (which is
dependent on the measurement parameters), $\mathbf{S_{0}}$ is the signal in
the absence of diffusion gradient sensitization, $\mathbf{f}$ is the
perfusion fraction, $\mathbf{D}$ is the diffusion coefficient and
$\mathbf{D^*}$ is the pseudo-diffusion constant, due to vascular
contributions.

In the following example we show how to fit the IVIM model on a
diffusion-weighted dataset and visualize the diffusion and pseudo-diffusion
coefficients. First, we import all relevant modules:
"""

import matplotlib.pyplot as plt

from dipy.core.gradients import gradient_table
from dipy.data import get_fnames
from dipy.io.gradients import read_bvals_bvecs
from dipy.io.image import load_nifti_data
from dipy.reconst.ivim import IvimModel

###############################################################################
# We get an IVIM dataset using DIPY_'s data fetcher ``get_fnames``.
# This dataset was acquired with 21 b-values in 3 different directions.
# Volumes corresponding to different directions were registered to each
# other, and averaged across directions. Thus, this dataset has 4 dimensions,
# with the length of the last dimension corresponding to the number
# of b-values. In order to use this model the data should contain signals
# measured at a b-value of 0.

fraw, fbval, fbvec = get_fnames(name="ivim")

###############################################################################
# The gtab contains a GradientTable object (information about the gradients
# e.g. b-values and b-vectors). We get the data from the file using
# ``load_nifti_data``.

data = load_nifti_data(fraw)
bvals, bvecs = read_bvals_bvecs(fbval, fbvec)
gtab = gradient_table(bvals, bvecs=bvecs, b0_threshold=0)
print(f"data.shape {data.shape}")

###############################################################################
# The data has 54 slices, with 256-by-256 voxels in each slice. The fourth
# dimension corresponds to the b-values in the gtab. Let us visualize the
# data by taking a slice midway (z=33) at $\mathbf{b} = 0$.

z = 33
b = 0

plt.imshow(data[:, :, z, b].T, origin="lower", cmap="gray", interpolation="nearest")
plt.axhline(y=100)
plt.axvline(x=170)
plt.savefig("ivim_data_slice.png")
plt.close()

###############################################################################
# .. rst-class:: centered small fst-italic fw-semibold
#
# Heat map of a slice of data
#
#
# The region around the intersection of the cross-hairs in the figure
# contains cerebrospinal fluid (CSF), so it should have a very high
# $\mathbf{f}$ and $\mathbf{D^*}$; the area just medial to that is white
# matter, so it should be lower; and the region more laterally contains a
# mixture of gray matter and CSF.
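###############################################################################
# (Aside added to this walkthrough, not part of the original example.) The
# bi-exponential behaviour of the IVIM signal can be sketched directly from
# the model equation above, using plausible but entirely hypothetical values
# for f, D and D*. Because D* >> D, the perfusion compartment decays almost
# completely within the low b-value range:

import numpy as np

b_sketch = np.linspace(0, 1000, 11)
f_h, D_h, Dstar_h = 0.1, 1e-3, 1e-2  # hypothetical values
S_rel = f_h * np.exp(-b_sketch * Dstar_h) + (1 - f_h) * np.exp(-b_sketch * D_h)
print(f"relative signal S(b)/S0: {np.round(S_rel, 3)}")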
###############################################################################
# These regional differences should give us some contrast to see the values
# varying across the regions.

x1, x2 = 90, 155
y1, y2 = 90, 170
data_slice = data[x1:x2, y1:y2, z, :]

plt.imshow(
    data[x1:x2, y1:y2, z, b].T, origin="lower", cmap="gray", interpolation="nearest"
)
plt.savefig("CSF_slice.png")
plt.close()

###############################################################################
# .. rst-class:: centered small fst-italic fw-semibold
#
# Heat map of the CSF slice selected.
#
#
# Now that we have prepared the datasets we can go forward with the IVIM fit.
# We provide two methods of fitting the parameters of the IVIM
# multi-exponential model explained above. We first fit the model with a
# simple fitting approach by passing the option `fit_method='trr'`. This
# method uses a two-stage approach: first, a linear fit is used to get quick
# initial guesses for the parameters $\mathbf{S_{0}}$ and $\mathbf{D}$, by
# considering b-values greater than ``split_b_D`` (default: 400) and assuming
# a mono-exponential signal. This is based on the assumption that at high
# b-values the signal can be approximated as a mono-exponential decay and by
# taking the logarithm of the signal values a linear fit can be obtained.
# Another linear fit for ``S0`` (bvals < ``split_b_S0`` (default: 200))
# follows and ``f`` is estimated using $1 - S0_{prime}/S0$. Then a non-linear
# least-squares fitting is performed to fit ``D_star`` and ``f``. If the
# ``two_stage`` flag is set to ``True`` while initializing the model, a final
# non-linear least squares fitting is performed for all the parameters. All
# initializations for the model such as ``split_b_D`` are passed while
# creating the ``IvimModel``. If you are using SciPy 0.17, you can also set
# bounds by setting ``bounds=([0., 0., 0., 0.], [np.inf, 1., 1., 1.])`` while
# initializing the ``IvimModel``.
#
# For brevity, we focus on a small section of the slice as selected above,
# to fit the IVIM model. First, we instantiate the IvimModel object.

ivimmodel = IvimModel(gtab, fit_method="trr")

###############################################################################
# To fit the model, call the `fit` method and pass the data for fitting.

ivimfit = ivimmodel.fit(data_slice)

###############################################################################
# The fit method creates an IvimFit object which contains the parameters of
# the model obtained after fitting. These are accessible through the
# `model_params` attribute of the IvimFit object. The parameters are arranged
# as an array whose leading dimensions correspond to the spatial dimensions
# of the data, and whose last dimension (of length 4) corresponds to the
# model parameters, in the following order : $\mathbf{S_{0}, f, D^*, D}$.

ivimparams = ivimfit.model_params
print(f"ivimparams.shape : {ivimparams.shape}")

###############################################################################
# As we see, we have a 65-by-80 section of the slice at height z = 33, i.e.
# 5200 voxels. We will now plot the parameters obtained from the fit for a
# voxel and also various maps for the entire slice. This will give us an idea
# about the diffusion and perfusion in that section. Let (i, j) denote the
# in-plane coordinates of a voxel; the z component has already been fixed
# to 33.
i, j = 10, 10
estimated_params = ivimfit.model_params[i, j, :]
print(estimated_params)

###############################################################################
# Now we can map the perfusion and diffusion maps for the slice. We
# will plot a heatmap showing the values using a colormap. It will be
# useful to define a plotting function for the heatmap here since we
# will use it to plot for all the IVIM parameters. We will need to specify
# the lower and upper limits for our data. For example, the perfusion
# fractions should be in the range (0,1). Similarly, the diffusion and
# pseudo-diffusion constants are much smaller than 1. We pass an argument
# called ``variable`` to our plotting function, which gives the label for
# the plot.


def plot_map(raw_data, variable, limits, filename):
    fig, ax = plt.subplots(1)
    lower, upper = limits
    ax.set_title(f"Map for {variable}")
    im = ax.imshow(
        raw_data.T,
        origin="lower",
        clim=(lower, upper),
        cmap="gray",
        interpolation="nearest",
    )
    fig.colorbar(im)
    fig.savefig(filename)


###############################################################################
# Let us get the various plots with `fit_method = 'trr'` so that we can
# visualize them on one page

plot_map(ivimfit.S0_predicted, "Predicted S0", (0, 10000), "predicted_S0.png")
plot_map(data_slice[:, :, 0], "Measured S0", (0, 10000), "measured_S0.png")
plot_map(ivimfit.perfusion_fraction, "f", (0, 1), "perfusion_fraction.png")
plot_map(ivimfit.D_star, "D*", (0, 0.01), "perfusion_coeff.png")
plot_map(ivimfit.D, "D", (0, 0.001), "diffusion_coeff.png")

###############################################################################
# Next, we will fit the same model with a more refined optimization process
# with `fit_method='VarPro'` (for "Variable Projection"). VarPro computes the
# IVIM parameters using the MIX approach :footcite:p:`Farooq2016`. This
# algorithm uses three different optimizers. It starts with a differential
# evolution algorithm that fits the parameters appearing in the exponents.
# The parameters fitted in the first step are then used to set up a linear
# convex problem. Using convex optimization, the volume fractions are
# determined. The last step is non-linear least-squares fitting on all the
# parameters. The results of the first and second optimizers are utilized as
# the initial values for the last step of the algorithm.
#
# As opposed to the `'trr'` fitting method, this approach does not need to
# set any thresholds on the b-values to differentiate between the perfusion
# (pseudo-diffusion) and diffusion portions, and fits the parameters
# simultaneously. Making use of the three-step optimization mentioned above
# increases the convergence basin for fitting the multi-exponential functions
# of microstructure models. This method has been described in further detail
# in :footcite:p:`Fadnavis2019` and :footcite:p:`Farooq2016`.

ivimmodel_vp = IvimModel(gtab, fit_method="VarPro")
ivimfit_vp = ivimmodel_vp.fit(data_slice)

###############################################################################
# Just like the `'trr'` fit method, `'VarPro'` creates an IvimFit object
# which contains the parameters of the model obtained after fitting. These
# are accessible through the `model_params` attribute of the IvimFit object.
# The parameters are arranged as an array whose leading dimensions correspond
# to the spatial dimensions of the data, and whose last dimension (of length
# 4) corresponds to the model parameters, in the following
# order : $\mathbf{S_{0}, f, D^*, D}$.
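###############################################################################
# (Aside added to this walkthrough.) Because of this fixed ordering, a single
# voxel's estimates can be unpacked directly into named variables, e.g.:

S0_vox, f_vox, D_star_vox, D_vox = ivimfit_vp.model_params[10, 10, :]
print(f"S0={S0_vox:.1f} f={f_vox:.4f} D*={D_star_vox:.5f} D={D_vox:.5f}")

###############################################################################
# We now inspect the parameters of one voxel: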
i, j = 10, 10
estimated_params = ivimfit_vp.model_params[i, j, :]
print(estimated_params)

###############################################################################
# To compare the fit using `fit_method='VarPro'` and `fit_method='trr'`, we
# can plot one voxel's signal and the model fit using both methods.
#
# We will use the `predict` method of the IvimFit objects, to get the
# predicted signal, based on each one of the model fit methods.

fig, ax = plt.subplots(1)

ax.scatter(gtab.bvals, data_slice[i, j, :], color="green", label="Measured signal")

ivim_trr_predict = ivimfit.predict(gtab)[i, j, :]

ax.plot(gtab.bvals, ivim_trr_predict, label="trr prediction")

S0_est, f_est, D_star_est, D_est = ivimfit.model_params[i, j, :]

trr_pro_param_est = (
    f"S0={S0_est:06.3f} f={f_est:06.4f}\n"
    f"D*={D_star_est:06.5f} D={D_est:06.5f}"
)
text_fit = f"""trr param estimates:\n {trr_pro_param_est}"""

ax.text(
    0.65,
    0.80,
    text_fit,
    horizontalalignment="center",
    verticalalignment="center",
    transform=plt.gca().transAxes,
)

ivim_predict_vp = ivimfit_vp.predict(gtab)[i, j, :]
ax.plot(gtab.bvals, ivim_predict_vp, label="VarPro prediction")

ax.set_xlabel("bvalues")
ax.set_ylabel("Signals")

S0_est, f_est, D_star_est, D_est = ivimfit_vp.model_params[i, j, :]

var_pro_param_est = (
    f"S0={S0_est:06.3f} f={f_est:06.4f}\n"
    f"D*={D_star_est:06.5f} D={D_est:06.5f}"
)
text_fit = f"""VarPro param estimates:\n{var_pro_param_est}"""

ax.text(
    0.65,
    0.50,
    text_fit,
    horizontalalignment="center",
    verticalalignment="center",
    transform=plt.gca().transAxes,
)

fig.legend(loc="upper right")
fig.savefig("ivim_voxel_plot.png")

###############################################################################
# .. rst-class:: centered small fst-italic fw-semibold
#
# Plot of the signal from one voxel.
#
#
# Let us get the various plots with `fit_method = 'VarPro'` so that we can
# visualize them in one page

plt.figure()
plot_map(
    ivimfit_vp.S0_predicted,
    "Heatmap of S0 predicted from the fit",
    (0, 10000),
    "predicted_S0.png",
)
plot_map(
    data_slice[..., 0],
    "Heatmap of measured signal at bvalue = 0",
    (0, 10000),
    "measured_S0.png",
)
plot_map(
    ivimfit_vp.perfusion_fraction,
    "Heatmap of perfusion fraction values predicted from the fit",
    (0, 1),
    "perfusion_fraction.png",
)
plot_map(
    ivimfit_vp.D_star,
    "D* - Heatmap of perfusion coefficients predicted from the fit",
    (0, 0.01),
    "perfusion_coeff.png",
)
plot_map(
    ivimfit_vp.D,
    "D - Heatmap of diffusion coefficients predicted from the fit",
    (0, 0.001),
    "diffusion_coeff.png",
)

###############################################################################
# References
# ----------
#
# .. footbibliography::
#
dipy-1.11.0/doc/examples/reconst_mapmri.py000066400000000000000000000375761476546756600205670ustar00rootroot00000000000000"""
=================================================================
Continuous and analytical diffusion signal modelling with MAP-MRI
=================================================================

We show how to model the diffusion signal as a linear combination of
continuous functions from the MAP-MRI basis :footcite:p:`Ozarslan2013`. This
continuous representation allows for the computation of many properties of
both the signal and diffusion propagator.

We show how to estimate the analytical orientation distribution function
(ODF) and a variety of scalar indices. These include rotationally invariant
quantities such as the mean squared displacement (MSD), Q-space inverse
variance (QIV), return-to-origin probability (RTOP) and non-Gaussianity (NG).
Interestingly, the MAP-MRI basis also allows for the computation of
directional indices, such as the return-to-axis probability (RTAP), the
return-to-plane probability (RTPP), and the parallel and perpendicular
non-Gaussianity.

The estimation of these properties from noisy and sparse DWIs requires that
the fitting of the MAP-MRI basis is constrained and/or regularized. This can
be done by constraining the diffusion propagator to positive values
:footcite:p:`Ozarslan2013`, :footcite:p:`DelaHaije2020`, and through analytic
Laplacian regularization (MAPL) :footcite:p:`Fick2016b`.

First import the necessary modules:
"""

from dipy.core.gradients import gradient_table
from dipy.data import get_fnames, get_sphere
from dipy.io.gradients import read_bvals_bvecs
from dipy.io.image import load_nifti
from dipy.reconst import mapmri
from dipy.viz import actor, window
from dipy.viz.plotting import compare_maps

###############################################################################
# Download and read the data for this tutorial.
#
# MAP-MRI requires multi-shell data, to properly fit the radial part of the
# basis. The total size of the downloaded data is 187.66 MBytes, however you
# only need to fetch it once.

fraw, fbval, fbvec, t1_fname = get_fnames(name="cfin_multib")

###############################################################################
# ``data`` contains the voxel data and ``gtab`` contains a ``GradientTable``
# object (gradient information e.g. b-values). For example, to show the
# b-values it is possible to write::
#
#    print(gtab.bvals)
#
# For the values of the q-space indices to make sense it is necessary to
# explicitly state the ``big_delta`` and ``small_delta`` parameters in the
# gradient table.

data, affine = load_nifti(fraw)
bvals, bvecs = read_bvals_bvecs(fbval, fbvec)
gtab = gradient_table(bvals, bvecs=bvecs)

big_delta = 0.0365  # seconds
small_delta = 0.0157  # seconds
gtab = gradient_table(
    bvals=gtab.bvals, bvecs=gtab.bvecs, big_delta=big_delta, small_delta=small_delta
)

data_small = data[40:65, 50:51]

print(f"data.shape {data.shape}")

###############################################################################
# The MAP-MRI Model can now be instantiated. The ``radial_order`` determines
# the expansion order of the basis, i.e., how many basis functions are used
# to approximate the signal.
#
# First, we must decide to use the anisotropic or isotropic MAP-MRI basis. As
# was shown in :footcite:p:`Fick2016b`, the isotropic basis is best used for
# tractography purposes, as the anisotropic basis has a bias towards smaller
# crossing angles in the ODF. For signal fitting and estimation of scalar
# quantities the anisotropic basis is suggested. The choice can be made by
# setting ``anisotropic_scaling=True`` or ``anisotropic_scaling=False``.
#
# Next, we must select the method of regularization and/or constraining the
# basis fitting.
#
# - ``laplacian_regularization=True`` makes it use Laplacian regularization
#   :footcite:p:`Fick2016b`. This method essentially reduces spurious
#   oscillations in the reconstruction by minimizing the Laplacian of the
#   fitted signal. Several options can be given to select the regularization
#   weight:
#
#   - ``regularization_weighting=GCV`` uses generalized cross-validation
#     :footcite:p:`Craven1979` to find an optimal weight.
#   - ``regularization_weighting=0.2`` for example would omit the GCV and
#     just set it to 0.2 (found to be reasonable in HCP data
#     :footcite:p:`Fick2016b`).
#   - ``regularization_weighting=np.array(weights)`` would make the GCV use
#     a custom range to find an optimal weight.
#
# - ``positivity_constraint=True`` makes it use a positivity constraint on
#   the diffusion propagator. This method constrains the final solution of
#   the diffusion propagator to be positive either globally
#   :footcite:p:`DelaHaije2020` or at a set of discrete points
#   :footcite:p:`Ozarslan2013`, since negative values should not exist.
#
#   - Setting ``global_constraints=True`` enforces positivity everywhere.
#     With the setting ``global_constraints=False`` positivity is enforced on
#     a grid determined by ``pos_grid`` and ``pos_radius``.
#
# Both methods do a good job of producing viable solutions to the signal
# fitting in practice. The difference is that the Laplacian regularization
# imposes smoothness over the entire signal, including the extrapolation
# beyond the measured signal. In practice this may result in, but does not
# guarantee, positive solutions of the diffusion propagator. The positivity
# constraint guarantees a positive solution, which in general also results in
# smooth solutions, although smoothness itself is not guaranteed.
#
# A suggested strategy is to use a low Laplacian weight together with a
# positivity constraint. In this way both desired properties are guaranteed
# in the final solution. Higher Laplacian weights are recommended when the
# data is very noisy.
#
# We use the package CVXPY (https://www.cvxpy.org/) to solve convex
# optimization problems when ``positivity_constraint=True``, so we need to
# first install CVXPY. When using ``global_constraints=True`` to ensure
# global positivity, it is recommended to use the MOSEK solver
# (https://www.mosek.com/) together with CVXPY by setting
# ``cvxpy_solver='MOSEK'``. Different solvers can differ greatly in
# terms of runtime and solution accuracy, and in some cases solvers may show
# warnings about convergence or recommended option settings.
#
# For now we will generate the anisotropic models for different combinations.

radial_order = 6
map_model_laplacian_aniso = mapmri.MapmriModel(
    gtab,
    radial_order=radial_order,
    laplacian_regularization=True,
    laplacian_weighting=0.2,
)

map_model_positivity_aniso = mapmri.MapmriModel(
    gtab,
    radial_order=radial_order,
    laplacian_regularization=False,
    positivity_constraint=True,
)

map_model_both_aniso = mapmri.MapmriModel(
    gtab,
    radial_order=radial_order,
    laplacian_regularization=True,
    laplacian_weighting=0.05,
    positivity_constraint=True,
)

map_model_plus_aniso = mapmri.MapmriModel(
    gtab,
    radial_order=radial_order,
    laplacian_regularization=False,
    positivity_constraint=True,
    global_constraints=True,
)

###############################################################################
# Note that when we use only Laplacian regularization, the ``GCV`` option may
# select very low regularization weights in very anisotropic white matter
# such as the corpus callosum, resulting in corrupted scalar indices. In this
# example we therefore choose a preset value of 0.2, which was shown to be
# quite robust and also faster in practice :footcite:p:`Fick2016b`.
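###############################################################################
# (Aside added to this walkthrough, not part of the original example.) Since
# the positivity-constrained model variants above depend on CVXPY, a
# defensive check such as the following can be useful when adapting this
# tutorial to your own environment; this is just a sketch:

try:
    import cvxpy  # noqa: F401

    print("CVXPY is available; positivity constraints can be used.")
except ImportError:
    print("CVXPY not found; install it to use positivity_constraint=True.")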
#
# We can then fit the MAP-MRI model to the data:

mapfit_laplacian_aniso = map_model_laplacian_aniso.fit(data_small)
mapfit_positivity_aniso = map_model_positivity_aniso.fit(data_small)
mapfit_both_aniso = map_model_both_aniso.fit(data_small)
mapfit_plus_aniso = map_model_plus_aniso.fit(data_small)

fits = [
    mapfit_laplacian_aniso,
    mapfit_positivity_aniso,
    mapfit_both_aniso,
    mapfit_plus_aniso,
]
fit_labels = ["MAPL", "CMAP", "CMAPL", "MAP+"]

###############################################################################
# From the fitted models we will first illustrate the estimation of q-space
# indices. We will compare the estimation using only Laplacian regularization
# (MAPL), using only the global positivity constraint (MAP+), using only
# positivity in discrete points (CMAP), or using both Laplacian
# regularization and positivity in discrete points (CMAPL). We first show the
# RTOP :footcite:p:`Ozarslan2013`.

compare_maps(
    fits,
    maps=["rtop"],
    fit_labels=fit_labels,
    map_labels=["RTOP"],
    filename="MAPMRI_rtop.png",
)

###############################################################################
# .. rst-class:: centered small fst-italic fw-semibold
#
# It can be seen that all maps appear quite smooth and similar, although it
# is clear that the global positivity constraints provide smoother maps than
# the discrete constraints. Higher Laplacian weights also produce smoother
# maps, but tend to suppress the estimated RTOP values. The similarity and
# differences in reconstruction can be further illustrated by visualizing the
# analytic norm of the Laplacian of the fitted signal.

compare_maps(
    fits,
    maps=["norm_of_laplacian_signal"],
    fit_labels=fit_labels,
    map_labels=["Norm of Laplacian"],
    map_kwargs={"vmin": 0, "vmax": 3},
    filename="MAPMRI_norm_laplacian.png",
)

###############################################################################
# .. rst-class:: centered small fst-italic fw-semibold
#
# A high Laplacian norm indicates that the gradient in the three-dimensional
# signal reconstruction changes a lot -- something that may indicate spurious
# oscillations. In the Laplacian reconstruction (left) we see that there are
# some isolated voxels that have a higher norm than the rest. In the
# positivity-constrained reconstruction the norm is already smoother. When
# both methods are used together the overall norm gets smoother still, since
# both smoothness of the signal and positivity of the propagator are imposed.
#
# From now on we will just use the combined approach and the globally
# constrained approach, show all maps we can generate, and explain their
# significance.

fits = fits[2:]
fit_labels = fit_labels[2:]
compare_maps(
    fits,
    maps=["msd", "qiv", "rtop", "rtap", "rtpp"],
    fit_labels=fit_labels,
    map_labels=["MSD", "QIV", "RTOP", "RTAP", "RTPP"],
    filename="MAPMRI_maps.png",
)

###############################################################################
# .. rst-class:: centered small fst-italic fw-semibold
#
# From left to right:
#
# - Mean squared displacement (MSD) is a measure for how far protons are able
#   to diffuse. A decrease in MSD indicates that protons are more
#   hindered/restricted, as can be seen from the high MSD in the CSF and the
#   low MSD in the white matter.
# - Q-space Inverse Variance (QIV) is a measure of variance in the signal,
#   which is said to have higher contrast to white matter than the MSD
#   :footcite:p:`Hosseinbor2013`. QIV has also been shown to have high
#   sensitivity to tissue composition change in a simulation study
#   :footcite:p:`Fick2016a`.
# - Return-to-origin probability (RTOP) quantifies the probability that a
#   proton will be at the same position at the first and second diffusion
#   gradient pulses. A higher RTOP indicates that the volume a spin is
#   confined to is smaller, meaning more overall restriction. This is
#   illustrated by the low values in CSF and high values in white matter.
# - Return-to-axis probability (RTAP) is a directional index that quantifies
#   the probability that a proton will be along the axis of the main
#   eigenvector of a diffusion tensor during both diffusion gradient pulses.
#   RTAP has been related to the apparent axon diameter
#   :footcite:p:`Ozarslan2013`, :footcite:p:`Fick2016b` under several strong
#   assumptions on the tissue composition and acquisition protocol.
# - Return-to-plane probability (RTPP) is a directional index that quantifies
#   the probability that a proton will be on the plane perpendicular to the
#   main eigenvector of a diffusion tensor during both gradient pulses. RTPP
#   is related to the length of a pore :footcite:p:`Ozarslan2013` but in
#   practice should be similar to that of Gaussian diffusion.
#
#
# It is also possible to estimate the amount of non-Gaussian diffusion in
# every voxel :footcite:p:`Ozarslan2013`. This quantity is estimated through
# the ratio between Gaussian volume (MAP-MRI's first basis function) and the
# non-Gaussian volume (all other basis functions) of the fitted signal. For
# this value to be physically meaningful we must use a b-value threshold in
# the MAP-MRI model. This threshold makes the scale estimation in MAP-MRI use
# only samples that realistically describe Gaussian diffusion, i.e., at low
# b-values.

map_model_both_ng = mapmri.MapmriModel(
    gtab,
    radial_order=radial_order,
    laplacian_regularization=True,
    laplacian_weighting=0.05,
    positivity_constraint=True,
    bval_threshold=2000,
)

map_model_plus_ng = mapmri.MapmriModel(
    gtab,
    radial_order=radial_order,
    positivity_constraint=True,
    global_constraints=True,
    bval_threshold=2000,
)

mapfit_both_ng = map_model_both_ng.fit(data_small)
mapfit_plus_ng = map_model_plus_ng.fit(data_small)

fits = [mapfit_both_ng, mapfit_plus_ng]
fit_labels = ["CMAPL", "MAP+"]
compare_maps(
    fits,
    maps=["ng", "ng_perpendicular", "ng_parallel"],
    fit_labels=fit_labels,
    map_labels=["NG", "NG perpendicular", "NG parallel"],
    filename="MAPMRI_ng.png",
)

###############################################################################
# .. rst-class:: centered small fst-italic fw-semibold
#
# On the left we see the overall NG and on the right the directional
# perpendicular NG and parallel NG. The NG ranges from 1
# (completely non-Gaussian) to 0 (completely Gaussian). The overall NG of a
# voxel is always higher than or equal to each of its components. It can be
# seen that NG has low values in the CSF and higher in the white matter.
#
# Increases or decreases in these values do not point to a specific
# microstructural change, but can give clues as to what is happening, similar
# to fractional anisotropy. An initial simulation study that quantifies the
# added value of q-space indices over DTI indices is given in
# :footcite:p:`Fick2016a`.
#
# The MAP-MRI framework also allows for the estimation of orientation
# distribution functions (ODFs). We recommend using the isotropic
# implementation for this purpose, as the anisotropic implementation has a
# bias towards smaller crossing angles.
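###############################################################################
# (Aside added to this walkthrough, not part of the original example.) To see
# which measurements the ``bval_threshold=2000`` setting keeps for the scale
# estimation -- roughly, the shells with b-values up to the threshold -- we
# can inspect the gradient table directly:

import numpy as np

print("b-values used for scale estimation:", np.unique(gtab.bvals[gtab.bvals <= 2000]))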
#
# For the isotropic basis we recommend to use a ``radial_order`` of 8, as the
# isotropic basis is more generic and needs more basis functions to
# approximate the signal.

radial_order = 8
map_model_both_iso = mapmri.MapmriModel(
    gtab,
    radial_order=radial_order,
    laplacian_regularization=True,
    laplacian_weighting=0.1,
    positivity_constraint=True,
    anisotropic_scaling=False,
)

mapfit_both_iso = map_model_both_iso.fit(data_small)

###############################################################################
# Load an ODF reconstruction sphere.

sphere = get_sphere(name="repulsion724")

###############################################################################
# Compute the ODFs.
#
# The radial moment ``s`` can be increased to sharpen the results, but it
# might also make the ODFs noisier. Always check the results visually.

odf = mapfit_both_iso.odf(sphere, s=2)
print(f"odf.shape {odf.shape}")

###############################################################################
# Display the ODFs.

# Enables/disables interactive visualization
interactive = False

scene = window.Scene()
sfu = actor.odf_slicer(odf, sphere=sphere, colormap="plasma", scale=0.5)
sfu.display(y=0)
sfu.RotateX(-90)
scene.add(sfu)
window.record(scene=scene, out_path="odfs.png", size=(600, 600))
if interactive:
    window.show(scene)

###############################################################################
# .. rst-class:: centered small fst-italic fw-semibold
#
# Orientation distribution functions (ODFs).
#
#
# References
# ----------
#
# .. footbibliography::
#
dipy-1.11.0/doc/examples/reconst_mcsd.py000066400000000000000000000310161476546756600202120ustar00rootroot00000000000000"""
================================================
Reconstruction with Multi-Shell Multi-Tissue CSD
================================================

This example shows how to use Multi-Shell Multi-Tissue Constrained Spherical
Deconvolution (MSMT-CSD) introduced by :footcite:t:`Jeurissen2014`. This
tutorial goes through the steps involved in implementing the method.

This method provides improved White Matter (WM), Grey Matter (GM), and
Cerebrospinal fluid (CSF) volume fraction maps, which are otherwise
overestimated in the standard CSD (SSST-CSD). This is done by using b-value
dependencies of the different tissue types to estimate ODFs. This method thus
extends the SSST-CSD introduced in :footcite:p:`Tournier2007`.

The reconstruction of the fiber orientation distribution function (fODF) in
MSMT-CSD involves the following steps:

1. Generate a mask using Median Otsu (optional step)
2. Denoise the data using MP-PCA (optional step)
3. Generate Anisotropic Powermap (if T1 unavailable)
4. Fit DTI model to the data
5. Tissue Classification (needs to be at least two classes of tissues)
6. Estimation of the fiber response function
7. Use the response function to reconstruct the fODF

First, we import all the modules we need from dipy as follows:
"""

import matplotlib.pyplot as plt
import numpy as np

from dipy.core.gradients import gradient_table, unique_bvals_tolerance
from dipy.data import get_fnames, get_sphere
from dipy.denoise.localpca import mppca
import dipy.direction.peaks as dp
from dipy.io.gradients import read_bvals_bvecs
from dipy.io.image import load_nifti
from dipy.reconst.mcsd import (
    MultiShellDeconvModel,
    auto_response_msmt,
    mask_for_response_msmt,
    multi_shell_fiber_response,
    response_from_mask_msmt,
)
import dipy.reconst.shm as shm
from dipy.segment.mask import median_otsu
from dipy.segment.tissue import TissueClassifierHMRF
from dipy.viz import actor, window

sphere = get_sphere(name="symmetric724")

###############################################################################
# For this example, we use fetch to download a multi-shell dataset which was
# kindly provided by Hansen and Jespersen (more details about the data are
# provided in their paper :footcite:p:`Hansen2016a`). The total size of the
# downloaded data is 192 MBytes, however you only need to fetch it once.

fraw, fbval, fbvec, t1_fname = get_fnames(name="cfin_multib")
data, affine = load_nifti(fraw)
bvals, bvecs = read_bvals_bvecs(fbval, fbvec)
gtab = gradient_table(bvals, bvecs=bvecs)

###############################################################################
# For the sake of simplicity, we only select two non-zero b-values for this
# example.

bvals = gtab.bvals
bvecs = gtab.bvecs

sel_b = np.logical_or(np.logical_or(bvals == 0, bvals == 1000), bvals == 2000)
data = data[..., sel_b]

###############################################################################
# The gradient table is also selected to have the selected b-values (0, 1000
# and 2000)

gtab = gradient_table(bvals[sel_b], bvecs=bvecs[sel_b])

###############################################################################
# We make use of the ``median_otsu`` method to generate the mask for the data
# as follows:

b0_mask, mask = median_otsu(data, median_radius=2, numpass=1, vol_idx=[0, 1])

print(data.shape)

###############################################################################
# As one can see from its shape, the selected data contains a total of 67
# volumes of images corresponding to all the diffusion gradient directions of
# the selected b-values. We then denoise the data by calling ``mppca`` as
# follows:

denoised_arr = mppca(data, mask=mask, patch_radius=2)

###############################################################################
# Now we will use the denoised array (``denoised_arr``) obtained from
# ``mppca`` in the rest of the steps in the tutorial.
#
# As for the next step, we generate the anisotropic powermap introduced by
# :footcite:t:`DellAcqua2014`. To do so, we make use of the Q-ball Model as
# follows:

qball_model = shm.QballModel(gtab, 8)

###############################################################################
# We generate the peaks from the ``qball_model`` as follows:

peaks = dp.peaks_from_model(
    model=qball_model,
    data=denoised_arr,
    relative_peak_threshold=0.5,
    min_separation_angle=25,
    sphere=sphere,
    mask=mask,
)

ap = shm.anisotropic_power(peaks.shm_coeff)

plt.matshow(np.rot90(ap[:, :, 10]), cmap=plt.cm.bone)
plt.savefig("anisotropic_power_map.png")
plt.close()

###############################################################################
# .. rst-class:: centered small fst-italic fw-semibold
#
# Anisotropic Power Map (Axial Slice)

print(ap.shape)

###############################################################################
# The above figure is a visualization of the axial slice of the Anisotropic
# Power Map. It can be treated as a pseudo-T1 for classification purposes
# using the Hidden Markov Random Fields (HMRF) classifier, if the T1 image
# is not available.
#
# As we can see from the shape of the Anisotropic Power Map, it is 3D and can
# be used for tissue classification using HMRF. The HMRF needs the
# specification of the number of classes. For the case of MSMT-CSD the
# ``nclass`` parameter needs to be ``>=2``. In our case, we set it to 3:
# namely cerebrospinal fluid (csf), white matter (wm) and gray matter (gm).

nclass = 3

###############################################################################
# Then, we set the smoothness factor of the segmentation. Good performance is
# achieved with values between 0 and 0.5.

beta = 0.1

###############################################################################
# We then call the ``TissueClassifierHMRF`` with the parameters specified as
# above:

hmrf = TissueClassifierHMRF()
initial_segmentation, final_segmentation, PVE = hmrf.classify(ap, nclass, beta)

###############################################################################
# Then, we get the tissue segmentation from the final_segmentation.

csf = np.where(final_segmentation == 1, 1, 0)
gm = np.where(final_segmentation == 2, 1, 0)
wm = np.where(final_segmentation == 3, 1, 0)

###############################################################################
# Now, we want the response function for each of the three tissues and for
# each b-value. This can be achieved in two different ways. In the case that
# a tissue segmentation is available, or if one wants to see the tissue masks
# used to compute the response functions, a combination of the functions
# ``mask_for_response_msmt`` and ``response_from_mask_msmt`` is needed.
#
# The ``mask_for_response_msmt`` function will return a mask of voxels within
# a cuboid ROI that meet some threshold constraints, for each tissue and
# b-value. More precisely, the WM mask must have a FA value above a given
# threshold. The GM mask and CSF mask must have a FA below given thresholds
# and a MD below other thresholds.
#
# Note that for ``mask_for_response_msmt``, the gtab and data should be for
# b-values under 1200, for an optimal tensor fit.

mask_wm, mask_gm, mask_csf = mask_for_response_msmt(
    gtab,
    data,
    roi_radii=10,
    wm_fa_thr=0.7,
    gm_fa_thr=0.3,
    csf_fa_thr=0.15,
    gm_md_thr=0.001,
    csf_md_thr=0.0032,
)

###############################################################################
# If one wants to use the previously computed tissue segmentation in addition
# to the threshold method, it is possible by simply multiplying both masks
# together.

mask_wm *= wm
mask_gm *= gm
mask_csf *= csf

###############################################################################
# The masks can also be used to calculate the number of voxels for each
# tissue.

nvoxels_wm = np.sum(mask_wm)
nvoxels_gm = np.sum(mask_gm)
nvoxels_csf = np.sum(mask_csf)

print(nvoxels_wm)
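###############################################################################
# (Aside added to this walkthrough, not part of the original example.)
# Expressing the retained voxels as a fraction of each tissue mask gives a
# quick sanity check that the ROI and thresholds are not overly restrictive:

print(f"fraction of WM voxels kept for the WM response: {nvoxels_wm / np.sum(wm):.4f}")

###############################################################################
# Then, the ``response_from_mask_msmt`` function will return the msmt
# response functions using the precalculated tissue masks.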
response_wm, response_gm, response_csf = response_from_mask_msmt(
    gtab, data, mask_wm, mask_gm, mask_csf
)

###############################################################################
# Note that we can also get directly the response functions by calling the
# ``auto_response_msmt`` function, which internally calls
# ``mask_for_response_msmt`` followed by ``response_from_mask_msmt``. By doing
# so, we don't have access to the masks, and the tensor fit might be
# problematic at high b-values.

auto_response_wm, auto_response_gm, auto_response_csf = auto_response_msmt(
    gtab, data, roi_radii=10
)

###############################################################################
# As we can see below, adding the tissue segmentation can change the results
# of the response functions.

print("Responses")
print(response_wm)
print(response_gm)
print(response_csf)
print("Auto responses")
print(auto_response_wm)
print(auto_response_gm)
print(auto_response_csf)

###############################################################################
# At this point, there are two options on how to use those response functions.
# We want to create a MultiShellDeconvModel, which takes a response function as
# input. This response function can either be directly in the current format,
# or it can be a MultiShellResponse format, as produced by the
# ``multi_shell_fiber_response`` method. This function assumes a
# three-compartment model (wm, gm, csf) and takes one response function per
# tissue per b-value. It is important to note that the b-values must be unique
# for this function.

ubvals = unique_bvals_tolerance(gtab.bvals)
response_mcsd = multi_shell_fiber_response(
    sh_order_max=8,
    bvals=ubvals,
    wm_rf=response_wm,
    gm_rf=response_gm,
    csf_rf=response_csf,
)

###############################################################################
# As mentioned, we can also build the model directly and it will call
# ``multi_shell_fiber_response`` internally. Note that the function
# ``unique_bvals_tolerance`` is used to keep only unique b-values from the gtab
# given to the model, as input for ``multi_shell_fiber_response``. This may
# introduce differences between the calculated response of each method,
# depending on the b-values given to ``multi_shell_fiber_response`` externally.

response = np.array([response_wm, response_gm, response_csf])
mcsd_model_simple_response = MultiShellDeconvModel(gtab, response, sh_order_max=8)

###############################################################################
# Note that this technique only works for a three-compartment model (wm, gm,
# csf). If one wants more compartments, a custom MultiShellResponse object
# must be used. It can be inspired by the ``multi_shell_fiber_response``
# method.
#
# Now we build the MSMT-CSD model with the ``response_mcsd`` as input. We then
# call the ``fit`` function to fit one slice of the 3D data and visualize it.

mcsd_model = MultiShellDeconvModel(gtab, response_mcsd)
mcsd_fit = mcsd_model.fit(denoised_arr[:, :, 10:11])

###############################################################################
# The volume fractions of tissues for each voxel are also accessible, as well
# as the sh coefficients for all tissues. One can also get each sh tissue
# separately using ``all_shm_coeff`` for each compartment (isotropic) and
# ``shm_coeff`` for white matter.
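###############################################################################
# Since the volume fractions are relative tissue contributions, a quick sanity
# check once they are extracted below is that they sum to approximately one
# in voxels containing tissue (a sketch; voxels outside the brain may
# deviate):
#
# .. code-block:: python
#
#     print(vf.shape, vf.sum(axis=-1).max())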
vf = mcsd_fit.volume_fractions
sh_coeff = mcsd_fit.all_shm_coeff
csf_sh_coeff = sh_coeff[..., 0]
gm_sh_coeff = sh_coeff[..., 1]
wm_sh_coeff = mcsd_fit.shm_coeff

###############################################################################
# The model allows one to predict a signal from sh coefficients. There are two
# ways of doing this.

mcsd_pred = mcsd_fit.predict()
mcsd_pred = mcsd_model.predict(mcsd_fit.all_shm_coeff)

###############################################################################
# From the fit obtained in the previous step, we generate the ODFs which can
# be visualized as follows:

mcsd_odf = mcsd_fit.odf(sphere)

print("ODF")
print(mcsd_odf.shape)
print(mcsd_odf[40, 40, 0])

fodf_spheres = actor.odf_slicer(
    mcsd_odf, sphere=sphere, scale=1, norm=False, colormap="plasma"
)

interactive = False
scene = window.Scene()
scene.add(fodf_spheres)
scene.reset_camera_tight()

print("Saving illustration as msdodf.png")
window.record(scene=scene, out_path="msdodf.png", size=(600, 600))

if interactive:
    window.show(scene)

###############################################################################
# .. rst-class:: centered small fst-italic fw-semibold
#
# MSMT-CSD Peaks and ODFs.
#
#
# References
# ----------
#
# .. footbibliography::
#
dipy-1.11.0/doc/examples/reconst_msdki.py000066400000000000000000000373471476546756600204070ustar00rootroot00000000000000"""
==============================================
Mean signal diffusion kurtosis imaging (MSDKI)
==============================================

Diffusion Kurtosis Imaging (DKI) is one of the conventional ways to estimate
the degree of non-Gaussian diffusion
(see :ref:`sphx_glr_examples_built_reconstruction_reconst_dki.py`)
:footcite:p:`Jensen2005`. However, a limitation of DKI is that its measures
are highly sensitive to noise and image artefacts. For instance, due to the
low radial diffusivities, standard kurtosis estimates in regions of
well-aligned voxels may be corrupted by implausible low or even negative
values.

A way to overcome this issue is to characterize kurtosis from average signals
across all directions acquired for each data b-value (also known as
powder-averaged signals). Moreover, as previously pointed out
:footcite:p:`NetoHenriques2015`, standard kurtosis measures (e.g. radial,
axial and standard mean kurtosis) do not only depend on microstructural
properties but also on mesoscopic properties such as fiber dispersion or the
intersection angle of crossing fibers. In contrast, the kurtosis from
powder-averaged signals has the advantage of not depending on the fiber
distribution functions :footcite:p:`NetoHenriques2019`,
:footcite:p:`NetoHenriques2021a`.

In short, in this tutorial we show how to characterize non-Gaussian diffusion
in a more precise way and decoupled from confounding effects of tissue
dispersion and crossing.

In the first part of this example, we illustrate the properties of the
measures obtained from the mean signal diffusion kurtosis imaging (MSDKI)
:footcite:p:`NetoHenriques2018`, :footcite:p:`NetoHenriques2021a` using
synthetic data. Secondly, the mean signal diffusion kurtosis imaging will be
applied to in-vivo MRI data. Finally, we show how MSDKI provides the same
information as common microstructural models such as the spherical mean
technique :footcite:p:`NetoHenriques2019`, :footcite:p:`Kaden2016b`.
Let's import all relevant modules:
"""

import matplotlib.pyplot as plt
import numpy as np

from dipy.core.gradients import gradient_table
from dipy.core.sphere import HemiSphere, disperse_charges

# For in-vivo data
from dipy.data import get_fnames
from dipy.io.gradients import read_bvals_bvecs
from dipy.io.image import load_nifti

# Reconstruction modules
import dipy.reconst.dki as dki
import dipy.reconst.msdki as msdki
from dipy.segment.mask import median_otsu

# For simulations
from dipy.sims.voxel import multi_tensor

###############################################################################
# Testing MSDKI in synthetic data
# ===============================
#
# We simulate representative diffusion-weighted signals using MultiTensor
# simulations (for more information on this type of simulations see
# :ref:`sphx_glr_examples_built_simulations_simulate_multi_tensor.py`). For
# this example, simulations are produced based on the sum of four diffusion
# tensors to represent the intra- and extra-cellular spaces of two fiber
# populations. The parameters of these tensors are adjusted according to
# :footcite:p:`NetoHenriques2015` (see also
# :ref:`sphx_glr_examples_built_simulations_simulate_dki.py`).

mevals = np.array(
    [
        [0.00099, 0, 0],
        [0.00226, 0.00087, 0.00087],
        [0.00099, 0, 0],
        [0.00226, 0.00087, 0.00087],
    ]
)

###############################################################################
# For the acquisition parameters of the synthetic data, we use 60 gradient
# directions for two non-zero b-values (1000 and 2000 $s/mm^{2}$) and two
# b=0 values (note that, as with standard DKI, MSDKI requires at least
# three different b-values).

# Sample the spherical coordinates of 60 random diffusion-weighted directions.
rng = np.random.default_rng()
n_pts = 60
theta = np.pi * rng.random(n_pts)
phi = 2 * np.pi * rng.random(n_pts)

# Convert direction to cartesian coordinates.
hsph_initial = HemiSphere(theta=theta, phi=phi)

# Evenly distribute the 60 directions
hsph_updated, potential = disperse_charges(hsph_initial, 5000)
directions = hsph_updated.vertices

# Assemble acquisition parameters for 2 non-zero b-values and 2 b0s
bvals = np.hstack((np.zeros(2), 1000 * np.ones(n_pts), 2000 * np.ones(n_pts)))
bvecs = np.vstack((np.zeros((2, 3)), directions, directions))

gtab = gradient_table(bvals, bvecs=bvecs)

###############################################################################
# Simulations are looped for different intra- and extra-cellular water
# volume fractions and different intersection angles of the two-fiber
# populations.

# Array containing the intra-cellular volume fractions tested
f = np.linspace(20, 80.0, num=7)

# Array containing the intersection angle
ang = np.linspace(0, 90.0, num=91)

# Matrix where synthetic signals will be stored
dwi = np.empty((f.size, ang.size, bvals.size))

for f_i in range(f.size):
    # estimating volume fractions for individual tensors
    fractions = np.array([100 - f[f_i], f[f_i], 100 - f[f_i], f[f_i]]) * 0.5

    for a_i in range(ang.size):
        # defining the directions for individual tensors
        angles = [(ang[a_i], 0.0), (ang[a_i], 0.0), (0.0, 0.0), (0.0, 0.0)]

        # producing signals using DIPY's function multi_tensor
        signal, sticks = multi_tensor(
            gtab, mevals, S0=100, angles=angles, fractions=fractions, snr=None
        )

        dwi[f_i, a_i, :] = signal

###############################################################################
# Now that all synthetic signals were produced, we can go forward with
# MSDKI fitting.
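###############################################################################
# For reference, MSDKI characterizes the direction-averaged (powder-averaged)
# signal :math:`\bar{S}(b)` with a cumulant expansion of the form (following
# the cited MSDKI papers; sign and scaling conventions may differ slightly
# between references):
#
# .. math::
#
#     \ln \frac{\bar{S}(b)}{\bar{S}(0)} = -b \cdot MSD
#     + \frac{1}{6} b^{2} \cdot MSD^{2} \cdot MSK
#
# where :math:`MSD` and :math:`MSK` are the mean signal diffusivity and the
# mean signal kurtosis extracted below.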
# As with other DIPY reconstruction techniques, the MSDKI model first has to
# be defined for the specific GradientTable object of the synthetic data. For
# MSDKI, this is done by instantiating the MeanDiffusionKurtosisModel object
# in the following way:

msdki_model = msdki.MeanDiffusionKurtosisModel(gtab)

###############################################################################
# MSDKI can then be fitted to the synthetic data by calling the ``fit``
# function of this object:

msdki_fit = msdki_model.fit(dwi)

###############################################################################
# From the above fit object we can extract the two main parameters of the
# MSDKI, i.e.: 1) the mean signal diffusion (MSD); and 2) the mean signal
# kurtosis (MSK)

MSD = msdki_fit.msd
MSK = msdki_fit.msk

###############################################################################
# For reference, we also calculate the mean diffusivity (MD) and mean
# kurtosis (MK) from the standard DKI.

dki_model = dki.DiffusionKurtosisModel(gtab)
dki_fit = dki_model.fit(dwi)

MD = dki_fit.md
MK = dki_fit.mk(min_kurtosis=0, max_kurtosis=3)

###############################################################################
# Now we plot the results as a function of the ground truth intersection
# angle and for different volume fractions of intra-cellular water.

fig1, axs = plt.subplots(nrows=2, ncols=2, figsize=(10, 10))

for f_i in range(f.size):
    axs[0, 0].plot(ang, MSD[f_i], linewidth=1.0, label=f"$F: {f[f_i]:.2f}$")
    axs[0, 1].plot(ang, MSK[f_i], linewidth=1.0, label=f"$F: {f[f_i]:.2f}$")
    axs[1, 0].plot(ang, MD[f_i], linewidth=1.0, label=f"$F: {f[f_i]:.2f}$")
    axs[1, 1].plot(ang, MK[f_i], linewidth=1.0, label=f"$F: {f[f_i]:.2f}$")

# Adjust the properties of the panels of the figure
axs[0, 0].set_xlabel("Intersection angle")
axs[0, 0].set_ylabel("MSD")
axs[0, 1].set_xlabel("Intersection angle")
axs[0, 1].set_ylabel("MSK")
axs[0, 1].legend(loc="center left", bbox_to_anchor=(1, 0.5))
axs[1, 0].set_xlabel("Intersection angle")
axs[1, 0].set_ylabel("MD")
axs[1, 1].set_xlabel("Intersection angle")
axs[1, 1].set_ylabel("MK")
axs[1, 1].legend(loc="center left", bbox_to_anchor=(1, 0.5))
plt.show()
fig1.savefig("MSDKI_simulations.png")

###############################################################################
# .. rst-class:: centered small fst-italic fw-semibold
#
# MSDKI and DKI measures for data of two crossing synthetic fibers.
# Upper panels show the MSDKI measures: 1) mean signal diffusivity (left
# panel); and 2) mean signal kurtosis (right panel).
# For reference, lower panels show the measures obtained by standard DKI:
# 1) mean diffusivity (left panel); and 2) mean kurtosis (right panel).
# All estimates are plotted as a function of the intersecting angle of the
# two crossing fibers. Different curves correspond to different ground truth
# axonal volume fractions of intra-cellular space.
#
#
# The results of the above figure demonstrate that both MSD and MSK are
# sensitive to the axonal volume fraction (i.e. a microstructure property) but
# are independent of the intersection angle of the two crossing fibers (i.e.
# independent of properties regarding fiber orientation). In contrast, while
# the standard MD is likewise insensitive to the intersection angle, the
# standard MK clearly depends on it, confounding its microstructural
# interpretation.
#
#
#
# Reconstructing diffusion data using MSDKI
# =========================================
#
# Now that the properties of MSDKI were illustrated, let's apply MSDKI to
# in-vivo diffusion-weighted data.
# As in the example for standard DKI
# (see :ref:`sphx_glr_examples_built_reconstruction_reconst_dki.py`), we use
# fetch to download a multi-shell dataset which was kindly provided by Hansen
# and Jespersen (more details about the data are provided in their paper
# :footcite:p:`Hansen2016a`). The total size of the downloaded data is 192
# MBytes; however, you only need to fetch it once.

fraw, fbval, fbvec, t1_fname = get_fnames(name="cfin_multib")
data, affine = load_nifti(fraw)
bvals, bvecs = read_bvals_bvecs(fbval, fbvec)
gtab = gradient_table(bvals, bvecs=bvecs)

###############################################################################
# Before fitting the data, we perform some data pre-processing. For
# illustration, we only mask the data to avoid unnecessary calculations on the
# background of the image; however, you could also apply other pre-processing
# techniques. For example, some state-of-the-art denoising algorithms are
# available in DIPY_ (e.g. the non-local means filter
# :ref:`sphx_glr_examples_built_preprocessing_denoise_nlmeans.py` or the
# local pca :ref:`sphx_glr_examples_built_preprocessing_denoise_localpca.py`).

maskdata, mask = median_otsu(
    data, vol_idx=[0, 1], median_radius=4, numpass=2, autocrop=False, dilate=1
)

###############################################################################
# Now that we have loaded and pre-processed the data we can go forward
# with MSDKI fitting. As for the synthetic data, the MSDKI model has to be
# first defined for the data's GradientTable object:

msdki_model = msdki.MeanDiffusionKurtosisModel(gtab)

###############################################################################
# The data can then be fitted by calling the ``fit`` function of this object:

msdki_fit = msdki_model.fit(data, mask=mask)

###############################################################################
# Let's then extract the two main MSDKI's parameters: 1) mean signal diffusion
# (MSD); and 2) mean signal kurtosis (MSK).

MSD = msdki_fit.msd
MSK = msdki_fit.msk

###############################################################################
# For comparison, we also calculate the mean diffusivity (MD) and mean
# kurtosis (MK) from standard DKI.

dki_model = dki.DiffusionKurtosisModel(gtab)
dki_fit = dki_model.fit(data, mask=mask)

MD = dki_fit.md
MK = dki_fit.mk(min_kurtosis=0, max_kurtosis=3)

###############################################################################
# Let's now visualize the data using matplotlib for a selected axial slice.

axial_slice = 9

fig2, ax = plt.subplots(2, 2, figsize=(6, 6), subplot_kw={"xticks": [], "yticks": []})

fig2.subplots_adjust(hspace=0.3, wspace=0.05)

im0 = ax.flat[0].imshow(
    MSD[:, :, axial_slice].T * 1000, cmap="gray", vmin=0, vmax=2, origin="lower"
)
ax.flat[0].set_title("MSD (MSDKI)")

im1 = ax.flat[1].imshow(
    MSK[:, :, axial_slice].T, cmap="gray", vmin=0, vmax=2, origin="lower"
)
ax.flat[1].set_title("MSK (MSDKI)")

im2 = ax.flat[2].imshow(
    MD[:, :, axial_slice].T * 1000, cmap="gray", vmin=0, vmax=2, origin="lower"
)
ax.flat[2].set_title("MD (DKI)")

im3 = ax.flat[3].imshow(
    MK[:, :, axial_slice].T, cmap="gray", vmin=0, vmax=2, origin="lower"
)
ax.flat[3].set_title("MK (DKI)")

fig2.colorbar(im0, ax=ax.flat[0])
fig2.colorbar(im1, ax=ax.flat[1])
fig2.colorbar(im2, ax=ax.flat[2])
fig2.colorbar(im3, ax=ax.flat[3])

plt.show()
fig2.savefig("MSDKI_invivo.png")

###############################################################################
# ..
rst-class:: centered small fst-italic fw-semibold
#
# MSDKI measures (upper panels) and DKI standard measures (lower panels).
#
#
# This figure shows that the contrast of in-vivo MSD and MSK maps (upper
# panels) is similar to the contrast of MD and MK maps (lower panels);
# however, in the upper panels we ensure that direct contributions of fiber
# dispersion were removed. The upper panels also reveal that MSDKI measures
# are less sensitive to noise artefacts than standard DKI measures (as pointed
# out by :footcite:p:`NetoHenriques2018`); particularly, one can observe that
# MSK maps always present positive values in brain white matter regions, while
# implausible negative kurtosis values are present in the MK maps in the same
# regions.
#
# Relationship between MSDKI and SMT2
# ===================================
# As shown in :footcite:p:`NetoHenriques2019`, MSDKI captures the same
# information as the spherical mean technique (SMT) microstructural models
# :footcite:p:`Kaden2016b`. In this way, the SMT model parameters can be
# directly computed from MSDKI. For instance, the axonal volume fraction (f),
# the intrinsic diffusivity (di), and the microscopic anisotropy of the SMT
# 2-compartmental model :footcite:p:`NetoHenriques2019` can be extracted using
# the following lines of code:

F = msdki_fit.smt2f
DI = msdki_fit.smt2di
uFA2 = msdki_fit.smt2uFA

###############################################################################
# The SMT2 model parameters extracted from MSDKI are displayed below:

fig3, ax = plt.subplots(1, 3, figsize=(9, 2.5), subplot_kw={"xticks": [], "yticks": []})

fig3.subplots_adjust(hspace=0.4, wspace=0.1)

im0 = ax.flat[0].imshow(
    F[:, :, axial_slice].T, cmap="gray", vmin=0, vmax=1, origin="lower"
)
ax.flat[0].set_title("SMT2 f (MSDKI)")

im1 = ax.flat[1].imshow(
    DI[:, :, axial_slice].T * 1000, cmap="gray", vmin=0, vmax=2, origin="lower"
)
ax.flat[1].set_title("SMT2 di (MSDKI)")

im2 = ax.flat[2].imshow(
    uFA2[:, :, axial_slice].T, cmap="gray", vmin=0, vmax=1, origin="lower"
)
ax.flat[2].set_title("SMT2 uFA (MSDKI)")

fig3.colorbar(im0, ax=ax.flat[0])
fig3.colorbar(im1, ax=ax.flat[1])
fig3.colorbar(im2, ax=ax.flat[2])

plt.show()
fig3.savefig("MSDKI_SMT2_invivo.png")

###############################################################################
# .. rst-class:: centered small fst-italic fw-semibold
#
# SMT2 model quantities extracted from MSDKI. From left to right, the figure
# shows the axonal volume fraction (f), the intrinsic diffusivity (di), and
# the microscopic anisotropy of the SMT 2-compartmental model
# :footcite:p:`NetoHenriques2019`.
#
#
# The similar contrast of the SMT2 f-parameter maps in comparison to MSK
# (first panel of Figure 3 vs second panel of Figure 2) confirms that MSK and
# F capture the same tissue information, just on different scales (with F
# rescaled to values between 0 and 1). It is important to note that SMT model
# parameter estimates should be used with care, because the SMT model was
# shown to be invalid :footcite:p:`NetoHenriques2019`. For instance, although
# the SMT2 parameters f and uFA may be a useful normalization of the degree of
# non-Gaussian diffusion (note that both metrics have a range between 0 and
# 1), these cannot be interpreted as real biophysical estimates of axonal
# water fraction and tissue microscopic anisotropy.
#
#
# References
# ----------
#
# ..
footbibliography::
#
dipy-1.11.0/doc/examples/reconst_qtdmri.py000066400000000000000000000452411476546756600205710ustar00rootroot00000000000000"""
=================================================================
Estimating diffusion time dependent q-space indices using qt-dMRI
=================================================================

Effective representation of the four-dimensional diffusion MRI signal --
varying over three-dimensional q-space and diffusion time -- is a sought-after
and still unsolved challenge in diffusion MRI (dMRI). We propose a functional
basis approach that is specifically designed to represent the dMRI signal in
this qtau-space :footcite:p:`Fick2018`. Following recent terminology, we refer
to our qtau-functional basis as :math:`q\tau`-dMRI. We use GraphNet
regularization -- imposing both signal smoothness and sparsity -- to
drastically reduce the number of diffusion-weighted images (DWIs) that is
needed to represent the dMRI signal in the qtau-space. As the main
contribution, :math:`q\tau`-dMRI provides the framework to -- without making
biophysical assumptions -- represent the :math:`q\tau`-space signal and
estimate time-dependent q-space indices (:math:`q\tau`-indices), providing a
new means for studying diffusion in nervous tissue. :math:`q\tau`-dMRI is the
first of its kind in being specifically designed to provide open
interpretation of the :math:`q\tau`-diffusion signal.

:math:`q\tau`-dMRI can be seen as a time-dependent extension of the MAP-MRI
functional basis :footcite:p:`Ozarslan2013`, and all the previously proposed
q-space indices can be estimated for any diffusion time. These include
rotationally invariant quantities such as the Mean Squared Displacement (MSD),
Q-space Inverse Variance (QIV) and Return-To-Origin Probability (RTOP). Also
directional indices such as the Return To the Axis Probability (RTAP) and
Return To the Plane Probability (RTPP) are available, as well as the
Orientation Distribution Function (ODF).

In this example we illustrate how to use the :math:`q\tau`-dMRI to estimate
time-dependent q-space indices from a :math:`q\tau`-acquisition of a mouse.

First import the necessary modules:
"""

import matplotlib.pyplot as plt
import numpy as np

from dipy.data.fetcher import (
    fetch_qtdMRI_test_retest_2subjects,
    read_qtdMRI_test_retest_2subjects,
)
from dipy.reconst import dti, qtdmri

###############################################################################
# Download and read the data for this tutorial.
#
# :math:`q\tau`-dMRI requires data with multiple gradient directions, gradient
# strengths and diffusion times. We will use the test-retest acquisitions of
# two mice that were used in the test-retest study by :footcite:p:`Fick2018`.
# The data itself is freely available and citeable at
# :footcite:p:`Wassermann2017`.

fetch_qtdMRI_test_retest_2subjects()
data, cc_masks, gtabs = read_qtdMRI_test_retest_2subjects()

###############################################################################
# data contains 4 qt-dMRI datasets of size [80, 160, 5, 515]. The first two
# are the test-retest datasets of the first mouse and the second two are those
# of the second mouse. cc_masks contains 4 corresponding binary masks for the
# corpus callosum voxels in the middle slice that were used in the test-retest
# study. Finally, gtabs contains the qt-dMRI gradient tables for the DWIs in
# the dataset.
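#
# A quick way to confirm these dimensions (a minimal sketch; the numbers
# simply restate the sizes described above):
#
# .. code-block:: python
#
#     print(len(data), data[0].shape)  # 4 datasets of shape (80, 160, 5, 515)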
#
# The data consists of 515 DWIs, divided over 35 shells, with 7 "gradient
# strength shells" up to 491 mT/m, 5 equally spaced "pulse separation shells"
# (big_delta) between [10.8-20] ms and a pulse duration (small_delta) of 5 ms.
#
# To visualize qt-dMRI acquisition schemes in an intuitive way, the qtdmri
# module provides a visualization function to illustrate the relationship
# between gradient strength (G), pulse separation (big_delta) and b-value:

plt.figure()
qtdmri.visualise_gradient_table_G_Delta_rainbow(gtabs[0])
plt.savefig("qt-dMRI_acquisition_scheme.png")

###############################################################################
# .. rst-class:: centered small fst-italic fw-semibold
#
# In the figure the dots represent measured DWIs in any direction, for a given
# gradient strength and pulse separation. The background isolines represent
# the corresponding b-values for different combinations of G and big_delta.
#
#
# Next, we visualize the middle slices of the test-retest data sets with their
# corresponding masks. To better illustrate the white matter architecture in
# the data, we calculate DTI's fractional anisotropy (FA) over the whole slice
# and project the corpus callosum mask on the FA image:

subplot_titles = [
    "Subject1 Test",
    "Subject1 Retest",
    "Subject2 Test",
    "Subject2 Retest",
]
fig = plt.figure()
plt.subplots(nrows=2, ncols=2)
for i, (data_, mask_, gtab_) in enumerate(zip(data, cc_masks, gtabs)):
    # take the middle slice
    data_middle_slice = data_[:, :, 2]
    mask_middle_slice = mask_[:, :, 2]

    # estimate fractional anisotropy (FA) for this slice
    tenmod = dti.TensorModel(gtab_)
    tenfit = tenmod.fit(data_middle_slice, mask=data_middle_slice[..., 0] > 0)
    fa = tenfit.fa

    # set mask color to green with 0.5 opacity as overlay
    mask_template = np.zeros(np.r_[mask_middle_slice.shape, 4])
    mask_template[mask_middle_slice == 1] = np.r_[0.0, 1.0, 0.0, 0.5]

    # produce the FA images with corpus callosum masks.
    plt.subplot(2, 2, 1 + i)
    plt.title(subplot_titles[i], fontsize=15)
    plt.imshow(fa, cmap="Greys_r", origin="lower", interpolation="nearest")
    plt.imshow(mask_template, origin="lower", interpolation="nearest")
    plt.axis("off")
plt.tight_layout()
plt.savefig("qt-dMRI_datasets_fa_with_ccmasks.png")

###############################################################################
# .. rst-class:: centered small fst-italic fw-semibold
#
# Next, we use qt-dMRI to estimate time-dependent q-space indices
# (q$\tau$-indices) for the masked voxels in the corpus callosum of each
# dataset. In particular, we estimate the Return-to-Origin, Return-to-Axis
# and Return-to-Plane Probability (RTOP, RTAP and RTPP), as well as the Mean
# Squared Displacement (MSD).
#
#
#
# In this example we don't extrapolate the data beyond the maximum diffusion
# time, so we estimate :math:`q\tau` indices between the minimum and maximum
# diffusion times of the data at 5 equally spaced points. However, it should
# be noted that qt-dMRI's combined smoothness and sparsity regularization
# allows for smooth interpolation at any :math:`q\tau` position. In other
# words, once the basis is fitted to the data, its coefficients describe the
# entire :math:`q\tau`-space, and any :math:`q\tau`-position can be freely
# recovered. This includes points beyond the dataset's maximum
# :math:`q\tau` value (although this should be done with caution).
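###############################################################################
# For reference, the diffusion times stored in the gradient tables follow the
# usual effective diffusion time convention (assuming DIPY's standard
# definition of ``tau``):
#
# .. code-block:: python
#
#     # tau = big_delta - small_delta / 3
#     tau = gtabs[0].big_delta - gtabs[0].small_delta / 3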
tau_min = gtabs[0].tau.min() tau_max = gtabs[0].tau.max() taus = np.linspace(tau_min, tau_max, 5) qtdmri_fits = [] msds = [] rtops = [] rtaps = [] rtpps = [] for data_, mask_, gtab_ in zip(data, cc_masks, gtabs): # select the corpus callosum voxel for every dataset cc_voxels = data_[mask_ == 1] # initialize the qt-dMRI model. # recommended basis orders are radial_order=6 and time_order=2. # The combined Laplacian and l1-regularization using Generalized # Cross-Validation (GCV) and Cross-Validation (CV) settings is most robust, # but can be used separately and with weightings preset to any positive # value to optimize for speed. qtdmri_mod = qtdmri.QtdmriModel( gtab_, radial_order=6, time_order=2, laplacian_regularization=True, laplacian_weighting="GCV", l1_regularization=True, l1_weighting="CV", ) # fit the model. # Here we take every 5th voxel for speed, but of course all voxels can be # fit for a more robust result later on. qtdmri_fit = qtdmri_mod.fit(cc_voxels[::5]) qtdmri_fits.append(qtdmri_fit) # We estimate MSD, RTOP, RTAP and RTPP for the chosen diffusion times. msds.append(np.array(list(map(qtdmri_fit.msd, taus)))) rtops.append(np.array(list(map(qtdmri_fit.rtop, taus)))) rtaps.append(np.array(list(map(qtdmri_fit.rtap, taus)))) rtpps.append(np.array(list(map(qtdmri_fit.rtpp, taus)))) ############################################################################### # The estimated :math:`q\tau`-indices, for the chosen diffusion times, are now # stored in msds, rtops, rtaps and rtpps. The trends of these # :math:`q\tau`-indices over time say something about the restriction of # diffusing particles over time, which is currently a hot topic in the dMRI # community. We evaluate the test-retest reproducibility for the two subjects # by plotting the :math:`q\tau`-indices for each subject together. This # example will produce similar results as Fig. 10 in :footcite:p:`Fick2018`. # # We first define a small function to plot the mean and standard deviation of # the :math:`q\tau`-index trends in a subject. def plot_mean_with_std(ax, time, ind1, plotcolor, ls="-", std_mult=1, label=""): means = np.mean(ind1, axis=1) stds = np.std(ind1, axis=1) ax.plot(time, means, c=plotcolor, lw=3, label=label, ls=ls) ax.fill_between( time, means + std_mult * stds, means - std_mult * stds, alpha=0.15, color=plotcolor, ) ax.plot(time, means + std_mult * stds, alpha=0.25, color=plotcolor) ax.plot(time, means - std_mult * stds, alpha=0.25, color=plotcolor) ############################################################################### # We start by showing the test-retest MSD of both subjects over time. We plot # the :math:`q\tau`-indices together with :math:`q\tau`-index trends of free # diffusion with different diffusivities as background. # we first generate the data to produce the background index isolines. Delta_ = np.linspace(0.005, 0.02, 100) MSD_ = np.linspace(4e-5, 10e-5, 100) Delta_grid, MSD_grid = np.meshgrid(Delta_, MSD_) D_grid = MSD_grid / (6 * Delta_grid) D_levels = np.r_[1, 5, 7, 10, 14, 23, 30] * 1e-4 fig = plt.figure(figsize=(10, 3)) # start with the plot of subject 1. ax = plt.subplot(1, 2, 1) # first plot the background plt.contourf(Delta_ * 1e3, 1e5 * MSD_, D_grid, levels=D_levels, cmap="Greys", alpha=0.5) # plot the test-retest mean MSD and standard deviation of subject 1. 
plot_mean_with_std(ax, taus * 1e3, 1e5 * msds[0], "r", "dashdot", label="MSD Test")
plot_mean_with_std(ax, taus * 1e3, 1e5 * msds[1], "g", "dashdot", label="MSD Retest")
ax.legend(fontsize=13)

# plot some text markers to clarify the background diffusivity lines.
ax.text(0.0091 * 1e3, 6.33, "D=14e-4", fontsize=12, rotation=35)
ax.text(0.0091 * 1e3, 4.55, "D=10e-4", fontsize=12, rotation=25)
ax.set_ylim(4, 9.5)
ax.set_xlim(0.009 * 1e3, 0.0185 * 1e3)
ax.set_title(r"Test-Retest MSD($\tau$) Subject 1", fontsize=15)
ax.set_xlabel("Diffusion Time (ms)", fontsize=17)
ax.set_ylabel("MSD ($10^{-5}mm^2$)", fontsize=17)

# then do the same thing for subject 2.
ax = plt.subplot(1, 2, 2)
plt.contourf(Delta_ * 1e3, 1e5 * MSD_, D_grid, levels=D_levels, cmap="Greys", alpha=0.5)
cb = plt.colorbar()
cb.set_label("Free Diffusivity ($mm^2/s$)", fontsize=18)
plot_mean_with_std(ax, taus * 1e3, 1e5 * msds[2], "r", "dashdot")
plot_mean_with_std(ax, taus * 1e3, 1e5 * msds[3], "g", "dashdot")
ax.set_ylim(4, 9.5)
ax.set_xlim(0.009 * 1e3, 0.0185 * 1e3)
ax.set_xlabel("Diffusion Time (ms)", fontsize=17)
ax.set_title(r"Test-Retest MSD($\tau$) Subject 2", fontsize=15)
plt.savefig("qt_indices_msd.png")

###############################################################################
# .. rst-class:: centered small fst-italic fw-semibold
#
# You can see that the MSD in both subjects increases over time, but also
# slowly levels off as time progresses. This makes sense as diffusing
# particles are becoming more restricted by surrounding tissue as time goes
# on. You can also see that for Subject 1 the index trends nearly perfectly
# overlap, but for subject 2 they are slightly off, which is also what we
# found in the paper.
#
#
#
# Next, we follow the same procedure to estimate the test-retest RTAP, RTOP
# and RTPP over diffusion time for both subjects. For ease of comparison, we
# will estimate all three in the same unit [1/mm] by taking the square root of
# RTAP and the cubed root of RTOP.

# Again, first we define the data for the background illustration.
Delta_ = np.linspace(0.005, 0.02, 100)
RTXP_ = np.linspace(1, 200, 100)
Delta_grid, RTXP_grid = np.meshgrid(Delta_, RTXP_)
D_grid = 1 / (4 * RTXP_grid**2 * np.pi * Delta_grid)
D_levels = np.r_[1, 2, 3, 4, 6, 9, 15, 30] * 1e-4
D_colors = np.tile(np.linspace(0.8, 0, 7), (3, 1)).T

# We start with the RTOP illustration.
fig = plt.figure(figsize=(10, 3))

ax = plt.subplot(1, 2, 1)
plt.contourf(Delta_ * 1e3, RTXP_, D_grid, colors=D_colors, levels=D_levels, alpha=0.5)
plot_mean_with_std(
    ax, taus * 1e3, rtops[0] ** (1 / 3.0), "r", "--", label="RTOP$^{1/3}$ Test"
)
plot_mean_with_std(
    ax, taus * 1e3, rtops[1] ** (1 / 3.0), "g", "--", label="RTOP$^{1/3}$ Retest"
)
ax.legend(fontsize=13)
ax.text(0.0091 * 1e3, 162, "D=3e-4", fontsize=12, rotation=-22)
ax.text(0.0091 * 1e3, 140, "D=4e-4", fontsize=12, rotation=-20)
ax.text(0.0091 * 1e3, 113, "D=6e-4", fontsize=12, rotation=-16)
ax.set_ylim(54, 170)
ax.set_xlim(0.009 * 1e3, 0.0185 * 1e3)
ax.set_title(r"Test-Retest RTOP($\tau$) Subject 1", fontsize=15)
ax.set_xlabel("Diffusion Time (ms)", fontsize=17)
ax.set_ylabel("RTOP$^{1/3}$ (1/mm)", fontsize=17)

ax = plt.subplot(1, 2, 2)
plt.contourf(Delta_ * 1e3, RTXP_, D_grid, colors=D_colors, levels=D_levels, alpha=0.5)
cb = plt.colorbar()
cb.set_label("Free Diffusivity ($mm^2/s$)", fontsize=18)
plot_mean_with_std(ax, taus * 1e3, rtops[2] ** (1 / 3.0), "r", "--")
plot_mean_with_std(ax, taus * 1e3, rtops[3] ** (1 / 3.0), "g", "--")
ax.set_ylim(54, 170)
ax.set_xlim(0.009 * 1e3, 0.0185 * 1e3)
ax.set_xlabel("Diffusion Time (ms)", fontsize=17)
ax.set_title(r"Test-Retest RTOP($\tau$) Subject 2", fontsize=15)
plt.savefig("qt_indices_rtop.png")

###############################################################################
# .. rst-class:: centered small fst-italic fw-semibold
#
# Similarly to MSD, RTOP is related to the restriction that particles are
# experiencing and is also rotationally invariant. RTOP is defined as the
# probability that particles are found at the same position at the time of
# both gradient pulses. As time increases, the odds become smaller that a
# particle will arrive at the same position it left, which is illustrated by
# all RTOP trends in the figure. Notice that the estimated RTOP trends
# decrease more slowly than those of free diffusion, meaning that particles
# experience restriction over time. Also notice that the RTOP trends in both
# subjects nearly perfectly overlap.
#
#
#
# Next, we estimate two directional :math:`q\tau`-indices, RTAP and RTPP,
# describing particle restriction perpendicular and parallel to the
# orientation of the principal diffusivity in that voxel. If the voxel
# describes coherent white matter (which it does in our corpus callosum
# example), then they describe properties related to restriction
# perpendicular and parallel to the axon bundles.

# First, we estimate the RTAP trends.
fig = plt.figure(figsize=(10, 3))
ax = plt.subplot(1, 2, 1)
plt.contourf(Delta_ * 1e3, RTXP_, D_grid, colors=D_colors, levels=D_levels, alpha=0.5)
plot_mean_with_std(
    ax, taus * 1e3, np.sqrt(rtaps[0]), "r", "-", label="RTAP$^{1/2}$ Test"
)
plot_mean_with_std(
    ax, taus * 1e3, np.sqrt(rtaps[1]), "g", "-", label="RTAP$^{1/2}$ Retest"
)
ax.legend(fontsize=13)
ax.text(0.0091 * 1e3, 162, "D=3e-4", fontsize=12, rotation=-22)
ax.text(0.0091 * 1e3, 140, "D=4e-4", fontsize=12, rotation=-20)
ax.text(0.0091 * 1e3, 113, "D=6e-4", fontsize=12, rotation=-16)
ax.set_ylim(54, 170)
ax.set_xlim(0.009 * 1e3, 0.0185 * 1e3)
ax.set_title(r"Test-Retest RTAP($\tau$) Subject 1", fontsize=15)
ax.set_xlabel("Diffusion Time (ms)", fontsize=17)
ax.set_ylabel("RTAP$^{1/2}$ (1/mm)", fontsize=17)

ax = plt.subplot(1, 2, 2)
plt.contourf(Delta_ * 1e3, RTXP_, D_grid, colors=D_colors, levels=D_levels, alpha=0.5)
cb = plt.colorbar()
cb.set_label("Free Diffusivity ($mm^2/s$)", fontsize=18)
plot_mean_with_std(ax, taus * 1e3, np.sqrt(rtaps[2]), "r", "-")
plot_mean_with_std(ax, taus * 1e3, np.sqrt(rtaps[3]), "g", "-")
ax.set_ylim(54, 170)
ax.set_xlim(0.009 * 1e3, 0.0185 * 1e3)
ax.set_xlabel("Diffusion Time (ms)", fontsize=17)
ax.set_title(r"Test-Retest RTAP($\tau$) Subject 2", fontsize=15)
plt.savefig("qt_indices_rtap.png")

# Finally the last one for RTPP.
fig = plt.figure(figsize=(10, 3))
ax = plt.subplot(1, 2, 1)
plt.contourf(Delta_ * 1e3, RTXP_, D_grid, colors=D_colors, levels=D_levels, alpha=0.5)
plot_mean_with_std(ax, taus * 1e3, rtpps[0], "r", ":", label="RTPP Test")
plot_mean_with_std(ax, taus * 1e3, rtpps[1], "g", ":", label="RTPP Retest")
ax.legend(fontsize=13)
ax.text(0.0091 * 1e3, 113, "D=6e-4", fontsize=12, rotation=-16)
ax.text(0.0091 * 1e3, 91, "D=9e-4", fontsize=12, rotation=-13)
ax.text(0.0091 * 1e3, 69, "D=15e-4", fontsize=12, rotation=-10)
ax.set_ylim(54, 170)
ax.set_xlim(0.009 * 1e3, 0.0185 * 1e3)
ax.set_title(r"Test-Retest RTPP($\tau$) Subject 1", fontsize=15)
ax.set_xlabel("Diffusion Time (ms)", fontsize=17)
ax.set_ylabel("RTPP (1/mm)", fontsize=17)

ax = plt.subplot(1, 2, 2)
plt.contourf(Delta_ * 1e3, RTXP_, D_grid, colors=D_colors, levels=D_levels, alpha=0.5)
cb = plt.colorbar()
cb.set_label("Free Diffusivity ($mm^2/s$)", fontsize=18)
plot_mean_with_std(ax, taus * 1e3, rtpps[2], "r", ":")
plot_mean_with_std(ax, taus * 1e3, rtpps[3], "g", ":")
ax.set_ylim(54, 170)
ax.set_xlim(0.009 * 1e3, 0.0185 * 1e3)
ax.set_xlabel("Diffusion Time (ms)", fontsize=17)
ax.set_title(r"Test-Retest RTPP($\tau$) Subject 2", fontsize=15)
plt.savefig("qt_indices_rtpp.png")

###############################################################################
# .. rst-class:: centered small fst-italic fw-semibold
#
# As with RTOP, the trends in RTAP and RTPP also decrease over time. It can
# be seen that RTAP$^{1/2}$ is always bigger than RTPP, which makes sense as
# particles in coherent white matter experience more restriction
# perpendicular to the white matter orientation than parallel to it. Again,
# in both subjects the test-retest RTAP and RTPP are nearly perfectly
# consistent.
# Aside from the estimation of :math:`q\tau`-space indices, :math:`q\tau`-dMRI
# also allows for the estimation of time-dependent ODFs. Once the Qtdmri model
# is fitted, they can simply be computed by calling
# ``qtdmri_fit.odf(sphere, s=sharpening_factor)``. This is identical to how
# the mapmri module functions, and allows one to study the time-dependence of
# ODF directionality.
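#
# For instance, a minimal sketch of such an ODF computation (the sphere and
# the sharpening factor are arbitrary illustrative choices):
#
# .. code-block:: python
#
#     from dipy.data import get_sphere
#     sphere = get_sphere(name="repulsion724")
#     odfs = qtdmri_fits[0].odf(sphere, s=2)
#
#
#
# This concludes the example on qt-dMRI.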
# As we showed, approaches such as
# qt-dMRI can help in studying the (finite-:math:`\tau`) temporal properties
# of diffusion in biological tissues. Differences in :math:`q\tau`-index
# trends could be indicative of underlying structural differences that affect
# the time-dependence of the diffusion process.
#
#
# References
# ----------
#
# .. footbibliography::
#
dipy-1.11.0/doc/examples/reconst_qti.py000066400000000000000000000152611476546756600200650ustar00rootroot00000000000000"""
=================================================
Reconstruct with Q-space Trajectory Imaging (QTI)
=================================================

Q-space trajectory imaging (QTI) by :footcite:t:`Westin2016` is a general
framework for analyzing diffusion-weighted MRI data acquired with
tensor-valued diffusion encoding. This tutorial briefly summarizes the theory
and provides an example of how to estimate the diffusion and covariance
tensors using DIPY.

Theory
======

In QTI, the tissue microstructure is represented by a diffusion tensor
distribution (DTD). Here, DTD is denoted by $\\mathbf{D}$ and the voxel-level
diffusion tensor from DTI by $\\langle\\mathbf{D}\\rangle$, where
$\\langle \\ \\rangle$ denotes averaging over the DTD. The covariance of
$\\mathbf{D}$ is given by a fourth-order covariance tensor $\\mathbb{C}$
defined as

.. math::

   \\mathbb{C} = \\langle \\mathbf{D} \\otimes \\mathbf{D} \\rangle -
   \\langle \\mathbf{D} \\rangle \\otimes \\langle \\mathbf{D} \\rangle ,

where $\\otimes$ denotes a tensor outer product. $\\mathbb{C}$ has 21 unique
elements and enables the calculation of several microstructural parameters.

Using the cumulant expansion, the diffusion-weighted signal can be
approximated as

.. math::

   S \\approx S_0 \\exp \\left(- \\mathbf{b} : \\langle \\mathbf{D} \\rangle +
   \\frac{1}{2}(\\mathbf{b} \\otimes \\mathbf{b}) : \\mathbb{C} \\right) ,

where $S_0$ is the signal without diffusion-weighting, $\\mathbf{b}$ is the
b-tensor used in the acquisition, and $:$ denotes a tensor inner product.

The model parameters $S_0$, $\\langle\\mathbf{D}\\rangle$, and $\\mathbb{C}$
can be estimated by solving the following equation:

.. math::

   S = X \\beta ,

where

.. math::

   S = \\begin{pmatrix} \\ln S_1 \\\\ \\vdots \\\\ \\ln S_n \\end{pmatrix} ,

.. math::

   \\beta = \\begin{pmatrix} \\ln S_0 & \\langle \\mathbf{D} \\rangle &
   \\mathbb{C} \\end{pmatrix}^\\text{T} ,

.. math::

   X =
   \\begin{pmatrix}
   1 & -\\mathbf{b}_1^\\text{T} &
   \\frac{1}{2} (\\mathbf{b}_1 \\otimes \\mathbf{b}_1)^\\text{T} \\\\
   \\vdots & \\vdots & \\vdots \\\\
   1 & -\\mathbf{b}_n^\\text{T} &
   \\frac{1}{2} (\\mathbf{b}_n \\otimes \\mathbf{b}_n)^\\text{T}
   \\end{pmatrix} ,

where $n$ is the number of acquisitions and $\\langle\\mathbf{D}\\rangle$,
$\\mathbb{C}$, $\\mathbf{b}_i$, and $(\\mathbf{b}_i \\otimes \\mathbf{b}_i)$
are represented by column vectors using Voigt notation. Estimation of the
model parameters requires that
$\\text{rank}(\\mathbf{X}^\\text{T}\\mathbf{X}) = 28$. This can be achieved by
combining acquisitions with b-tensors with different shapes, sizes, and
orientations. For details, please see :footcite:p:`Westin2016`.

Usage example
=============

QTI can be fit to data using the module `dipy.reconst.qti`.
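For intuition, the b-tensor shapes used in this example follow the usual
definitions (with $\\mathbf{n}$ the encoding direction and $b$ the b-value):
linear tensor encoding (LTE) corresponds to a rank-1 b-tensor and planar
tensor encoding (PTE) to a rank-2 b-tensor,

.. math::

   \\mathbf{B}_\\text{LTE} = b \\, \\mathbf{n} \\mathbf{n}^\\text{T} , \\qquad
   \\mathbf{B}_\\text{PTE} = \\frac{b}{2}
   \\left( \\mathbf{I} - \\mathbf{n} \\mathbf{n}^\\text{T} \\right) ,

which is what the rank check performed below relies on to distinguish the two
acquisitions.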
Let's start by importing the required modules and functions:
"""

import matplotlib.pyplot as plt
import numpy as np

from dipy.core.gradients import gradient_table
from dipy.data import get_fnames
from dipy.io.gradients import read_bvals_bvecs
from dipy.io.image import load_nifti
import dipy.reconst.qti as qti

###############################################################################
# As QTI requires data with tensor-valued encoding, let's load an example
# dataset acquired with q-space trajectory encoding (QTE):

fdata, fbvals, fbvecs, fmask = get_fnames(name="qte_lte_pte")
data, affine = load_nifti(fdata)
mask, _ = load_nifti(fmask)
bvals, bvecs = read_bvals_bvecs(fbvals, fbvecs)
btens = np.array(["LTE" for i in range(61)] + ["PTE" for i in range(61)])
gtab = gradient_table(bvals, bvecs=bvecs, btens=btens)

###############################################################################
# The dataset contains 122 volumes of which the first half were acquired with
# linear tensor encoding (LTE) and the second half with planar tensor encoding
# (PTE). We can confirm this by calculating the ranks of the b-tensors in the
# gradient table.

ranks = np.array([np.linalg.matrix_rank(b) for b in gtab.btens])
for i, ell in enumerate(["b = 0", "LTE", "PTE"]):
    print(f"{np.sum(ranks == i)} volumes with {ell}")

###############################################################################
# Now that we have data acquired with tensor-valued diffusion encoding and the
# corresponding gradient table containing the b-tensors, we can fit QTI to the
# data as follows:

qtimodel = qti.QtiModel(gtab)
qtifit = qtimodel.fit(data, mask=mask)

###############################################################################
# QTI parameter maps can be accessed as the attributes of `qtifit`.
For instance, # fractional anisotropy (FA) and microscopic fractional anisotropy (μFA) maps # can be calculated as: fa = qtifit.fa ufa = qtifit.ufa ############################################################################### # Finally, let's reproduce Figure 9 from :footcite:p:`Westin2016` to visualize # more QTI parameter maps: z = 36 fig, ax = plt.subplots(3, 4, figsize=(12, 9)) background = np.zeros(data.shape[0:2]) # Black background for figures for i in range(3): for j in range(4): ax[i, j].imshow(background, cmap="gray") ax[i, j].set_xticks([]) ax[i, j].set_yticks([]) ax[0, 0].imshow(np.rot90(qtifit.md[:, :, z]), cmap="gray", vmin=0, vmax=3e-3) ax[0, 0].set_title("MD") ax[0, 1].imshow(np.rot90(qtifit.v_md[:, :, z]), cmap="gray", vmin=0, vmax=0.5e-6) ax[0, 1].set_title("V$_{MD}$") ax[0, 2].imshow(np.rot90(qtifit.v_shear[:, :, z]), cmap="gray", vmin=0, vmax=0.5e-6) ax[0, 2].set_title("V$_{shear}$") ax[0, 3].imshow(np.rot90(qtifit.v_iso[:, :, z]), cmap="gray", vmin=0, vmax=1e-6) ax[0, 3].set_title("V$_{iso}$") ax[1, 0].imshow(np.rot90(qtifit.c_md[:, :, z]), cmap="gray", vmin=0, vmax=0.25) ax[1, 0].set_title("C$_{MD}$") ax[1, 1].imshow(np.rot90(qtifit.c_mu[:, :, z]), cmap="gray", vmin=0, vmax=1) ax[1, 1].set_title("C$_{μ}$ = μFA$^2$") ax[1, 2].imshow(np.rot90(qtifit.c_m[:, :, z]), cmap="gray", vmin=0, vmax=1) ax[1, 2].set_title("C$_{M}$ = FA$^2$") ax[1, 3].imshow(np.rot90(qtifit.c_c[:, :, z]), cmap="gray", vmin=0, vmax=1) ax[1, 3].set_title("C$_{c}$") ax[2, 0].imshow(np.rot90(qtifit.mk[:, :, z]), cmap="gray", vmin=0, vmax=1.5) ax[2, 0].set_title("MK") ax[2, 1].imshow(np.rot90(qtifit.k_bulk[:, :, z]), cmap="gray", vmin=0, vmax=1.5) ax[2, 1].set_title("K$_{bulk}$") ax[2, 2].imshow(np.rot90(qtifit.k_shear[:, :, z]), cmap="gray", vmin=0, vmax=1.5) ax[2, 2].set_title("K$_{shear}$") ax[2, 3].imshow(np.rot90(qtifit.k_mu[:, :, z]), cmap="gray", vmin=0, vmax=1.5) ax[2, 3].set_title("K$_{μ}$") fig.tight_layout() plt.show() ############################################################################### # For more information about QTI, please read the article by # :footcite:t:`Westin2016`. # # References # ---------- # # .. footbibliography:: # dipy-1.11.0/doc/examples/reconst_qtiplus.py000066400000000000000000000230571476546756600207730ustar00rootroot00000000000000""" ===================================================================== Applying positivity constraints to Q-space Trajectory Imaging (QTI+) ===================================================================== Q-space trajectory imaging (QTI) :footcite:p:`Westin2016` with applied positivity constraints (QTI+) is an estimation framework proposed by :footcite:t:`Herberthson2021` which enforces necessary constraints during the estimation of the QTI model parameters. This tutorial briefly summarizes the theory and provides a comparison between performing the constrained and unconstrained QTI reconstruction in DIPY. Theory ====== In QTI, the tissue microstructure is represented by a diffusion tensor distribution (DTD). Here, DTD is denoted by $\\mathbf{D}$ and the voxel-level diffusion tensor from DTI by $\\langle\\mathbf{D}\\rangle$, where $\\langle \\ \\rangle$ denotes averaging over the DTD. The covariance of $\\mathbf{D}$ is given by a fourth-order covariance tensor $\\mathbb{C}$ defined as .. math:: \\mathbb{C} = \\langle \\mathbf{D} \\otimes \\mathbf{D} \\rangle - \\langle \\mathbf{D} \\rangle \\otimes \\langle \\mathbf{D} \\rangle , where $\\otimes$ denotes a tensor outer product. 
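For reference, in Voigt notation a symmetric second-order tensor such as
$\\langle\\mathbf{D}\\rangle$ is represented by a $6 \\times 1$ column vector
(one common convention; the $\\sqrt{2}$ factors preserve inner products):

.. math::

   \\mathbf{d} = \\begin{pmatrix} D_{xx} & D_{yy} & D_{zz} &
   \\sqrt{2} D_{yz} & \\sqrt{2} D_{xz} & \\sqrt{2} D_{xy}
   \\end{pmatrix}^\\text{T} .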
$\\mathbb{C}$ has 21 unique
elements and enables the calculation of several microstructural parameters.

Using the cumulant expansion, the diffusion-weighted signal can be
approximated as

.. math::

   S \\approx S_0 \\exp \\left(- \\mathbf{b} : \\langle \\mathbf{D} \\rangle +
   \\frac{1}{2}(\\mathbf{b} \\otimes \\mathbf{b}) : \\mathbb{C} \\right) ,

where $S_0$ is the signal without diffusion-weighting, $\\mathbf{b}$ is the
b-tensor used in the acquisition, and $:$ denotes a tensor inner product.

The model parameters $S_0$, $\\langle \\mathbf{D}\\rangle$, and $\\mathbb{C}$
can be estimated by solving the following weighted problem, where the
heteroskedasticity introduced by taking the logarithm of the signal is
accounted for:

.. math::

   {\\mathrm{argmin}}_{S_0,\\langle \\mathbf{D} \\rangle, \\mathbb{C}}
   \\sum_{k=1}^n S_k^2 \\left| \\ln(S_k)-\\ln(S_0)+\\mathbf{b}^{(k)}
   \\langle \\mathbf{D} \\rangle -\\frac{1}{2}
   (\\mathbf{b} \\otimes \\mathbf{b})^{(k)} \\mathbb{C} \\right|^2 ,

the above can be written as a weighted least squares problem

.. math::

   \\mathbf{Ax} = \\mathbf{y},

where

.. math::

   y = \\begin{pmatrix} \\ S_1 \\ln S_1 \\\\ \\vdots \\\\ \\ S_n \\ln S_n
   \\end{pmatrix} ,

.. math::

   x = \\begin{pmatrix} \\ln S_0 & \\langle \\mathbf{D} \\rangle &
   \\mathbb{C} \\end{pmatrix}^\\text{T} ,

.. math::

   A =
   \\begin{pmatrix}
   S_1 & 0 & \\ldots & 0 \\\\
   0 & \\ddots & \\ddots & \\vdots \\\\
   \\vdots & \\ddots & \\ddots & 0 \\\\
   0 & \\ldots & 0 & S_n
   \\end{pmatrix}
   \\begin{pmatrix}
   1 & -\\mathbf{b}_1^\\text{T} &
   \\frac{1}{2} (\\mathbf{b}_1 \\otimes \\mathbf{b}_1)^\\text{T} \\\\
   \\vdots & \\vdots & \\vdots \\\\
   \\vdots & \\vdots & \\vdots \\\\
   1 & -\\mathbf{b}_n^\\text{T} &
   \\frac{1}{2} (\\mathbf{b}_n \\otimes \\mathbf{b}_n)^\\text{T}
   \\end{pmatrix} ,

where $n$ is the number of acquisitions and $\\langle\\mathbf{D}\\rangle$,
$\\mathbb{C}$, $\\mathbf{b}_i$, and $(\\mathbf{b}_i \\otimes \\mathbf{b}_i)$
are represented by column vectors using Voigt notation.

The estimated $\\langle\\mathbf{D}\\rangle$ and $\\mathbb{C}$ tensors should
observe mathematical and physical conditions dictated by the model itself:
since $\\langle\\mathbf{D}\\rangle$ represents a diffusivity, it should be
positive semi-definite: $\\langle\\mathbf{D}\\rangle \\succeq 0$. Similarly,
since $\\mathbb{C}$ represents a covariance, its $6 \\times 6$ representation,
$\\mathbf{C}$, should be positive semi-definite: $\\mathbf{C} \\succeq 0$.
When not imposed, violations of these conditions can occur in the presence of
noise and/or in sparsely sampled data. This could result in metrics derived
from the model parameters being unreliable. Both these conditions can be
enforced while estimating the QTI model's parameters using semidefinite
programming (SDP) as shown by :footcite:t:`Herberthson2021`. This corresponds
to solving the problem

.. math::

   \\mathbf{Ax} = \\mathbf{y} \\quad \\text{subject to: }
   \\langle\\mathbf{D}\\rangle \\succeq 0 , \\; \\mathbf{C} \\succeq 0

Installation
=============

The constrained problem stated above can be solved using the cvxpy library.
Instructions on how to install cvxpy can be found at
https://www.cvxpy.org/install/. A free SDP solver called 'SCS' is installed
with cvxpy, and can be readily used for performing the fit. However, it is
recommended to install an alternative solver, MOSEK, for improved speed and
performance. MOSEK requires a licence which is free for academic use.
Instructions on how to install MOSEK and set up a licence can be found at
https://docs.mosek.com/latest/install/installation.html

Usage example
==============

Here we show how metrics derived from the QTI model parameters compare when
the fit is performed with and without applying the positivity constraints.

In DIPY, the constrained estimation routine is available as part of the
`dipy.reconst.qti` module.

First we import all the necessary modules to perform the QTI fit:
"""

from dipy.data import read_DiB_70_lte_pte_ste, read_DiB_217_lte_pte_ste
import dipy.reconst.qti as qti
from dipy.viz.plotting import compare_qti_maps

###############################################################################
# To showcase why enforcing positivity constraints in QTI is relevant, we use
# a human brain dataset comprising 70 volumes acquired with tensor-encoding.
# This dataset was obtained by subsampling a richer dataset containing 217
# diffusion measurements, which we will use as ground truth when comparing
# the parameter estimation with and without applied constraints. This
# emulates a shorter data acquisition, as might be used when scanning
# patients in clinical settings.
#
# The full dataset used in this tutorial was originally published at
# https://github.com/filip-szczepankiewicz/Szczepankiewicz_DIB_2019,
# and is described in :footcite:p:`Szczepankiewicz2019`.
#
#
# First, let's load the complete dataset and create the gradient table.
# We mark these two with the '_217' suffix.

data_img, mask_img, gtab_217 = read_DiB_217_lte_pte_ste()
data_217 = data_img.get_fdata()
mask = mask_img.get_fdata()

###############################################################################
# Second, let's load the downsampled dataset and create the gradient table.
# We mark these two with the '_70' suffix.

data_img, _, gtab_70 = read_DiB_70_lte_pte_ste()
data_70 = data_img.get_fdata()

###############################################################################
# Now we can fit the QTI model to the dataset containing 217 measurements, and
# use it as reference to compare the constrained and unconstrained fit on the
# smaller dataset. For time considerations, we restrict the fit to a
# single slice.

mask[:, :, :13] = 0
mask[:, :, 14:] = 0

qtimodel_217 = qti.QtiModel(gtab_217)
qtifit_217 = qtimodel_217.fit(data_217, mask=mask)

###############################################################################
# Now we can fit the QTI model using the default unconstrained fitting method
# to the subsampled dataset:

qtimodel_unconstrained = qti.QtiModel(gtab_70)
qtifit_unconstrained = qtimodel_unconstrained.fit(data_70, mask=mask)

###############################################################################
# Now we repeat the fit but with the constraints applied.
# To perform the constrained fit, we select the 'SDPdc' fit method when
# creating the QtiModel object.
#
# .. note::
#    this fit method is slower than the default unconstrained method.
#
# If MOSEK is installed, it can be specified as the solver to be used
# as follows:
#
# .. code-block:: python
#
#    qtimodel = qti.QtiModel(gtab, fit_method='SDPdc', cvxpy_solver='MOSEK')
#    qtifit = qtimodel.fit(data, mask)
#
# If MOSEK is not installed, the constrained fit can still be performed, and
# SCS will be used as the solver. SCS is typically much slower than MOSEK,
# but provides similar results in terms of accuracy.
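#
# One can check which solvers are visible to the local cvxpy installation
# before fitting (``installed_solvers`` is part of cvxpy's public API):
#
# .. code-block:: python
#
#    import cvxpy
#    print(cvxpy.installed_solvers())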
# To give an example, the fit
# performed in the next line will take approximately 15 minutes when using
# SCS, and 2 minutes when using MOSEK!

qtimodel_constrained = qti.QtiModel(gtab_70, fit_method="SDPdc")
qtifit_constrained = qtimodel_constrained.fit(data_70, mask=mask)

###############################################################################
# Now we can visualize the results obtained with the constrained and
# unconstrained fit on the small dataset, and compare them with the
# "ground truth" provided by fitting the QTI model to the full dataset.
# For example, we can look at the FA and µFA maps, and their value
# distribution in White Matter in comparison to the ground truth.

z = 13
wm_mask = qtifit_217.ufa[:, :, z] > 0.6

compare_qti_maps(qtifit_217, qtifit_unconstrained, qtifit_constrained, wm_mask)

###############################################################################
# The results clearly show how many of the FA and µFA values
# obtained with the unconstrained fit fall outside the correct
# theoretical range [0, 1], while the constrained fit provides
# more sound results. Note also that even when fitting the rich
# dataset, some values of the parameters produced with the unconstrained
# fit fall outside the correct range, suggesting that the constrained fit,
# despite the additional time cost, should be performed even on densely
# sampled diffusion data.
#
# For more information about QTI and QTI+, please read the articles by
# :footcite:t:`Westin2016` and :footcite:t:`Herberthson2021`.
#
#
# References
# ----------
#
# .. footbibliography::
#
dipy-1.11.0/doc/examples/reconst_rumba.py000066400000000000000000000421631476546756600203770ustar00rootroot00000000000000"""
===================================================================================
Reconstruction with Robust and Unbiased Model-BAsed Spherical Deconvolution (RUMBA)
===================================================================================

This example shows how to use RUMBA-SD to reconstruct fiber orientation
density functions (fODFs). This model was introduced by
:footcite:t:`CanalesRodriguez2015`.

RUMBA-SD uses a priori information about the fiber response function (axial
and perpendicular diffusivities) to generate a convolution kernel mapping the
fODFs on a sphere to the recorded data. The fODFs are then estimated using an
iterative, maximum likelihood estimation algorithm adapted from
Richardson-Lucy (RL) deconvolution :footcite:p:`Richardson1972`. Specifically,
the RL algorithm assumes Gaussian noise, while RUMBA assumes Rician/Noncentral
Chi noise -- these more accurately reflect the noise generated by MRI scanners
:footcite:p:`Constantinides1997`. This algorithm also contains an optional
compartment for estimating an isotropic volume fraction to account for partial
volume effects. RUMBA-SD works with single- and multi-shell data, as well as
data recorded in Cartesian or spherical coordinate systems.

The result from RUMBA-SD can be smoothed by applying total variation spatial
regularization (termed RUMBA-SD + TV), a technique which promotes a more
coherent estimate of the fODFs across neighboring voxels
:footcite:p:`Rudin1992`. This regularization ability is also included in this
implementation.

This example will showcase how to:

1. Estimate the fiber response function
2. Reconstruct the fODFs voxel-wise or globally with TV regularization
3. Visualize fODF maps

To begin, we will load the data, consisting of 10 b0s and 150 non-b0s with a
b-value of 2000.
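As background, the classical RL update that RUMBA-SD adapts has the
multiplicative form (the Gaussian-noise variant, shown here only for
intuition; $\\mathbf{K}$ is the convolution kernel, $\\mathbf{S}$ the measured
signal and $\\mathbf{f}$ the fODF estimate):

.. math::

   \\mathbf{f}^{(k+1)} = \\mathbf{f}^{(k)} \\odot
   \\frac{\\mathbf{K}^\\text{T} \\left( \\mathbf{S} /
   (\\mathbf{K} \\mathbf{f}^{(k)}) \\right)}
   {\\mathbf{K}^\\text{T} \\mathbf{1}} ,

where the division is element-wise and $\\odot$ denotes an element-wise
product; RUMBA-SD replaces the underlying Gaussian likelihood with
Rician/Noncentral Chi statistics.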
""" import matplotlib.pyplot as plt import numpy as np from dipy.core.gradients import gradient_table from dipy.data import get_fnames, get_sphere from dipy.direction import peak_directions, peaks_from_model from dipy.io.gradients import read_bvals_bvecs from dipy.io.image import load_nifti from dipy.reconst.csdeconv import auto_response_ssst, recursive_response from dipy.reconst.rumba import RumbaSDModel from dipy.segment.mask import median_otsu from dipy.sims.voxel import single_tensor_odf from dipy.viz import actor, window hardi_fname, hardi_bval_fname, hardi_bvec_fname = get_fnames(name="stanford_hardi") data, affine = load_nifti(hardi_fname) bvals, bvecs = read_bvals_bvecs(hardi_bval_fname, hardi_bvec_fname) gtab = gradient_table(bvals, bvecs=bvecs) sphere = get_sphere(name="symmetric362") ############################################################################### # Step 1. Estimation of the fiber response function # ================================================= # # There are multiple ways to estimate the fiber response function. # # **Strategy 1: use default values** # One simple approach is to use the values included as the default arguments # in the RumbaSDModel constructor. The white matter response, `wm_response` # has three values corresponding to the tensor eigenvalues # (1.7e-3, 0.2e-3, 0.2e-3). The model has compartments for cerebrospinal # fluid (CSF) (`csf_response`) and grey matter (GM) (`gm_response`) as well, # with these mean diffusivities set to 3.0e-3 and 0.8e-3 respectively # :footcite:p:`CanalesRodriguez2015` These default values will often be adequate # as RUMBA-SD is robust against impulse response imprecision # :footcite:p:`DellAcqua2007`. rumba = RumbaSDModel(gtab) print( f"wm_response: {rumba.wm_response}, " + f"csf_response: {rumba.csf_response}, " + f"gm_response: {rumba.gm_response}" ) ############################################################################### # We can visualize what this default response looks like. # Enables/disables interactive visualization interactive = False scene = window.Scene() evals = rumba.wm_response evecs = np.array([[0, 1, 0], [0, 0, 1], [1, 0, 0]]).T response_odf = single_tensor_odf(sphere.vertices, evals=evals, evecs=evecs) # Transform our data from 1D to 4D response_odf = response_odf[None, None, None, :] response_actor = actor.odf_slicer(response_odf, sphere=sphere, colormap="plasma") scene.add(response_actor) print("Saving illustration as default_response.png") window.record(scene=scene, out_path="default_response.png", size=(200, 200)) if interactive: window.show(scene) ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # Default response function. scene.rm(response_actor) ############################################################################### # **Strategy 2: estimate from local brain region** # The `csdeconv` module contains functions for estimating this response. # `auto_response_sst` extracts an ROI in the center of the brain and isolates # single fiber populations from the corpus callosum using an FA mask with a # threshold of 0.7. These voxels are used to estimate the response function. response, _ = auto_response_ssst(gtab, data, roi_radii=10, fa_thr=0.7) print(response) ############################################################################### # This response contains the estimated eigenvalues in its first element, and # the estimated S0 in the second. 
# The eigenvalues are all we care about for using RUMBA-SD.
#
# We can visualize this estimated response as well.

evals = response[0]
evecs = np.array([[0, 1, 0], [0, 0, 1], [1, 0, 0]]).T

response_odf = single_tensor_odf(sphere.vertices, evals=evals, evecs=evecs)

# transform our data from 1D to 4D
response_odf = response_odf[None, None, None, :]
response_actor = actor.odf_slicer(response_odf, sphere=sphere, colormap="plasma")

scene.add(response_actor)
print("Saving illustration as estimated_response.png")
window.record(scene=scene, out_path="estimated_response.png", size=(200, 200))

if interactive:
    window.show(scene)

###############################################################################
# .. rst-class:: centered small fst-italic fw-semibold
#
# Estimated response function.

scene.rm(response_actor)

###############################################################################
# **Strategy 3: recursive, data-driven estimation**
# The other method for extracting a response function uses a recursive
# approach. Here, we initialize a "fat" response function, which is used in
# CSD. From this deconvolution, the voxels with one peak are extracted and
# their data is averaged to get a new response function. This is repeated
# iteratively until convergence :footcite:p:`Tax2014`.
#
# To shorten computation time, a mask can be estimated for the data.

b0_mask, mask = median_otsu(data, median_radius=2, numpass=1, vol_idx=np.arange(10))

rec_response = recursive_response(
    gtab,
    data,
    mask=mask,
    sh_order_max=8,
    peak_thr=0.01,
    init_fa=0.08,
    init_trace=0.0021,
    iter=4,
    convergence=0.001,
    parallel=True,
    num_processes=2,
)

###############################################################################
# We can now visualize this response, which will look like a pancake.

rec_response_signal = rec_response.on_sphere(sphere)
# transform our data from 1D to 4D
rec_response_signal = rec_response_signal[None, None, None, :]

response_actor = actor.odf_slicer(rec_response_signal, sphere=sphere, colormap="plasma")

scene.add(response_actor)
print("Saving illustration as recursive_response.png")
window.record(scene=scene, out_path="recursive_response.png", size=(200, 200))

if interactive:
    window.show(scene)

###############################################################################
# .. rst-class:: centered small fst-italic fw-semibold
#
# Recursive response function.

scene.rm(response_actor)

###############################################################################
# Step 2. fODF Reconstruction
# ===========================
#
# We will now use the estimated response function with the RUMBA-SD model to
# reconstruct the fODFs. We will use the default values for `csf_response` and
# `gm_response`. If one doesn't wish to fit these compartments, one can
# specify either argument as `None`. This will result in the corresponding
# volume fraction map being all zeroes. The GM compartment can only be
# accurately estimated with at least 3-shell data. With fewer shells, it is
# recommended to only keep the compartment for CSF. Since this data is
# single-shell, we will only compute the CSF compartment.
#
# RUMBA-SD can fit the data voxelwise or globally. By default, a voxelwise
# approach is used (`voxelwise` is set to `True`). However, by setting
# `voxelwise` to `False`, the whole brain can be fit at once. In this global
# setting, one can specify the use of TV regularization with `use_tv`, and the
# model can log updates on its progress and estimated signal-to-noise ratios
# by setting `verbose` to True.
By default, both `use_tv` and `verbose` are set to # `False` as they have no bearing on the voxelwise fit. # # When constructing the RUMBA-SD model, one can also specify `n_iter`, # `recon_type`, `n_coils`, `R`, and `sphere`. `n_iter` is the number of # iterations for the iterative estimation, and the default value of 600 # should be suitable for most applications. `recon_type` is the technique used # by the MRI scanner to reconstruct the MRI signal, and should be either 'smf' # for 'spatial matched filter', or 'sos' for 'sum-of-squares'; 'smf' is a # common choice and is the default, but the specifications of the MRI scanner # used to collect the data should be checked. If 'sos' is used, then it's # important to specify `n_coils`, which is the number of coils in the MRI # scanner. With 'smf', this isn't important and the default argument of 1 can # be used. `R` is the acceleration factor of the MRI scanner, which is termed # the iPAT factor for SIEMENS, the ASSET factor for GE, or the SENSE factor # for PHILIPS. 1 is a common choice, and is the default for the model. This # is only important when using TV regularization, which will be covered later # in the tutorial. Finally, `sphere` specifies the sphere on which to construct # the fODF. The default is 'repulsion724' sphere, but this tutorial will use # `symmetric362`. rumba = RumbaSDModel(gtab, wm_response=response[0], gm_response=None, sphere=sphere) ############################################################################### # For efficiency, we will only fit a small part of the data. This is the same # portion of data used in # :ref:`sphx_glr_examples_built_reconstruction_reconst_csd.py`. data_small = data[20:50, 55:85, 38:39] ############################################################################### # **Option 1: voxel-wise fit** # This is the default approach for generating ODFs, wherein each voxel is fit # sequentially. # # We will estimate the fODFs using the 'symmetric362' sphere. This # will take about a minute to compute. rumba_fit = rumba.fit(data_small) odf = rumba_fit.odf() ############################################################################### # The inclusion of RUMBA-SD's CSF compartment means we can also extract # the isotropic volume fraction map as well as the white matter volume # fraction map (the fODF sum at each voxel). These values are normalized such # that they sum to 1. If neither isotropic compartment is included, then the # isotropic volume fraction map will all be zeroes. f_iso = rumba_fit.f_iso f_wm = rumba_fit.f_wm ############################################################################### # We can visualize these maps using adjacent heatmaps. fig, axs = plt.subplots(1, 2, figsize=(12, 5)) ax0 = axs[0].imshow(f_wm[..., 0].T, origin="lower") axs[0].set_title("Voxelwise White Matter Volume Fraction") ax1 = axs[1].imshow(f_iso[..., 0].T, origin="lower") axs[1].set_title("Voxelwise Isotropic Volume Fraction") plt.colorbar(ax0, ax=axs[0]) plt.colorbar(ax1, ax=axs[1]) plt.savefig("wm_iso_partition.png") ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # White matter and isotropic volume fractions # # # # To visualize the fODFs, it's recommended to combine the fODF and the # isotropic components. This is done using the `RumbaFit` object's method # `combined_odf_iso`. To reach a proper scale for visualization, the argument # `norm=True` is used in FURY's `odf_slicer` method. 
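#
# As a quick, illustrative aside (not part of the original tutorial), the
# normalization mentioned above -- that ``f_wm`` and ``f_iso`` sum to 1 in
# each voxel -- can be checked directly:
#
# .. code-block:: python
#
#     # Expected: True for the fitted voxels.
#     print(np.allclose(f_iso + f_wm, 1))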
combined = rumba_fit.combined_odf_iso fodf_spheres = actor.odf_slicer( combined, sphere=sphere, norm=True, scale=0.5, colormap=None ) scene.add(fodf_spheres) print("Saving illustration as rumba_odfs.png") window.record(scene=scene, out_path="rumba_odfs.png", size=(600, 600)) if interactive: window.show(scene) ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # RUMBA-SD fODFs scene.rm(fodf_spheres) ############################################################################### # We can extract the peaks from these fODFs using `peaks_from_model`. This # will reconstruct the fODFs again, so will take about a minute to run. rumba_peaks = peaks_from_model( model=rumba, data=data_small, sphere=sphere, relative_peak_threshold=0.5, min_separation_angle=25, normalize_peaks=False, parallel=True, num_processes=4, ) ############################################################################### # For visualization, we scale up the peak values. peak_values = np.clip(rumba_peaks.peak_values * 15, 0, 1) peak_dirs = rumba_peaks.peak_dirs fodf_peaks = actor.peak_slicer(peak_dirs, peaks_values=peak_values) scene.add(fodf_peaks) window.record(scene=scene, out_path="rumba_peaks.png", size=(600, 600)) if interactive: window.show(scene) ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # RUMBA-SD peaks scene.rm(fodf_peaks) ############################################################################### # **Option 2: global fit** # Instead of the voxel-wise fit, RUMBA also comes with an implementation of # global fitting where all voxels are fit simultaneously. This comes with some # potential benefits such as: # # 1. More efficient fitting due to matrix parallelization, in exchange for # larger demands on RAM (>= 16 GB should be sufficient) # 2. The option for spatial regularization; specifically, TV regularization is # built into the fitting function (RUMBA-SD + TV) # # This is done by setting `voxelwise` to `False`, and setting `use_tv` to # `True`. # # TV regularization requires a volume without any singleton dimensions, so # we'll have to start by expanding our data slice. rumba = RumbaSDModel( gtab, wm_response=response[0], gm_response=None, voxelwise=False, use_tv=True, sphere=sphere, ) data_tv = data[20:50, 55:85, 38:40] ############################################################################### # Now, we fit the model in the same way. This will take about 90 seconds. rumba_fit = rumba.fit(data_tv) odf = rumba_fit.odf() combined = rumba_fit.combined_odf_iso ############################################################################### # Now we can visualize the combined fODF map as before. fodf_spheres = actor.odf_slicer( combined, sphere=sphere, norm=True, scale=0.5, colormap=None ) scene.add(fodf_spheres) print("Saving illustration as rumba_global_odfs.png") window.record(scene=scene, out_path="rumba_global_odfs.png", size=(600, 600)) if interactive: window.show(scene) ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # RUMBA-SD + TV fODFs # # # # This can be compared with the result without TV regularization, and one can # observe that the coherence between neighboring voxels is improved. 
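#
# As a rough, illustrative way to quantify this claim (an aside, not part of
# the original analysis), one could compute the cosine similarity between the
# combined fODF in each voxel and that of its neighbor along the first axis,
# and inspect the average -- higher values indicate a smoother, more
# spatially coherent fODF field:
#
# .. code-block:: python
#
#     # Shift the fODF volume by one voxel and compare neighbors.
#     shifted = np.roll(combined, 1, axis=0)
#     num = np.sum(combined * shifted, axis=-1)
#     den = np.linalg.norm(combined, axis=-1) * np.linalg.norm(shifted, axis=-1)
#     print(np.nanmean(num / den))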
scene.rm(fodf_spheres) ############################################################################### # For peak detection, `peaks_from_model` cannot be used as it doesn't support # global fitting approaches. Instead, we'll compute our peaks using a for loop. shape = odf.shape[:3] npeaks = 5 # maximum number of peaks returned for a given voxel peak_dirs = np.zeros((shape + (npeaks, 3))) peak_values = np.zeros((shape + (npeaks,))) for idx in np.ndindex(shape): # iterate through each voxel # Get peaks of odf direction, pk, _ = peak_directions( odf[idx], sphere, relative_peak_threshold=0.5, min_separation_angle=25 ) # Calculate peak metrics if pk.shape[0] != 0: n = min(npeaks, pk.shape[0]) peak_dirs[idx][:n] = direction[:n] peak_values[idx][:n] = pk[:n] # Scale up for visualization peak_values = np.clip(peak_values * 15, 0, 1) fodf_peaks = actor.peak_slicer( peak_dirs[:, :, 0:1, :], peaks_values=peak_values[:, :, 0:1, :] ) scene.add(fodf_peaks) print("Saving illustration as rumba_global_peaks.png") window.record(scene=scene, out_path="rumba_global_peaks.png", size=(600, 600)) if interactive: window.show(scene) ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # RUMBA-SD + TV peaks scene.rm(fodf_peaks) ############################################################################### # References # ---------- # # .. footbibliography:: # dipy-1.11.0/doc/examples/reconst_sfm.py000066400000000000000000000145111476546756600200520ustar00rootroot00000000000000""" =================================================== Reconstruction with the Sparse Fascicle Model (SFM) =================================================== In this example, we will use the Sparse Fascicle Model (SFM) :footcite:p:`Rokem2015`, to reconstruct the fiber Orientation Distribution Function (fODF) in every voxel. First, we import the modules we will use in this example: """ from dipy.core.gradients import gradient_table import dipy.data as dpd import dipy.direction.peaks as dpp from dipy.io.gradients import read_bvals_bvecs from dipy.io.image import load_nifti from dipy.reconst.csdeconv import auto_response_ssst import dipy.reconst.sfm as sfm from dipy.viz import actor, window ############################################################################### # For the purpose of this example, we will use the Stanford HARDI dataset (150 # directions, single b-value of 2000 $s/mm^2$) that can be automatically # downloaded. If you have not yet downloaded this data-set in one of the other # examples, you will need to be connected to the internet the first time you # run this example. The data will be stored for subsequent runs, and for use # with other examples. hardi_fname, hardi_bval_fname, hardi_bvec_fname = dpd.get_fnames(name="stanford_hardi") data, affine = load_nifti(hardi_fname) bvals, bvecs = read_bvals_bvecs(hardi_bval_fname, hardi_bvec_fname) gtab = gradient_table(bvals, bvecs=bvecs) # Enables/disables interactive visualization interactive = False ############################################################################### # Reconstruction of the fiber ODF in each voxel guides subsequent tracking # steps. Here, the model is the Sparse Fascicle Model, described in # :footcite:p:`Rokem2015`. This model reconstructs the diffusion signal as a # combination of the signals from different fascicles. This model can be written # as: # # .. math:: # # y = X\beta # # Where $y$ is the signal and $\beta$ are weights on different points in the # sphere. 
The columns of the design matrix, $X$, are the signals in each point
# in the measurement that would be predicted if there was a fascicle oriented
# in the direction represented by that column. Typically, the signal used for
# this kernel will be a prolate tensor with axial diffusivity 3-5 times higher
# than its radial diffusivity. The exact numbers can also be estimated from
# examining parts of the brain in which there is known to be only one fascicle
# (e.g. in corpus callosum).
#
# Sparsity constraints on the fiber ODF ($\beta$) are set through the Elastic
# Net algorithm :footcite:p:`Zou2005`.
#
# Elastic Net optimizes the following cost function:
#
# .. math::
#
#     \sum_{i=1}^{n}{(y_i - \hat{y}_i)^2} + \alpha (\lambda \sum_{j=1}^{m}{w_j} + (1-\lambda) \sum_{j=1}^{m}{w^2_j})  # noqa: E501
#
# where $\hat{y}$ is the signal predicted for a particular setting of $\beta$,
# such that the left part of this expression is the squared loss function;
# $\alpha$ is a parameter that sets the balance between the squared loss on
# the data, and the regularization constraints. The regularization parameter
# $\lambda$ sets the `l1_ratio`, which controls the balance between L1
# regularization (penalizing the sum of the weights) and L2 regularization
# (penalizing the sum of squares of the weights).
#
# Just like Constrained Spherical Deconvolution (see :ref:`reconst-csd`), the
# SFM requires the definition of a response function. We'll take advantage of
# the automated algorithm in the :mod:`csdeconv` module to find this response
# function:

response, ratio = auto_response_ssst(gtab, data, roi_radii=10, fa_thr=0.7)

###############################################################################
# The ``response`` return value contains two entries. The first is an array
# with the eigenvalues of the response function and the second is the average
# S0 for this response.
#
# It is a very good practice to always validate the result of
# ``auto_response_ssst``. For this purpose, we can print it and have a look
# at its values.

print(response)

###############################################################################
# We initialize an SFM model object, using these values. We will use the
# default sphere (362 vertices, symmetrically distributed on the surface of
# the sphere) as a set of putative fascicle directions that are considered
# in the model.

sphere = dpd.get_sphere()
sf_model = sfm.SparseFascicleModel(
    gtab, sphere=sphere, l1_ratio=0.5, alpha=0.001, response=response[0]
)

###############################################################################
# For the purpose of the example, we will consider a small volume of data
# containing parts of the corpus callosum and of the centrum semiovale.

data_small = data[20:50, 55:85, 38:39]

###############################################################################
# Fitting the model to this small volume of data, we calculate the ODF of this
# model on the sphere, and plot it.
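#
# Once the model is fit below, the per-voxel weights $\beta$ are available as
# ``sf_fit.beta``, and the model prediction $X\beta$ for the measured gradient
# directions can be generated with the fit object's ``predict`` method. As an
# illustrative aside (not part of the original tutorial), this allows a quick
# check of how well the linear model described above explains the data:
#
# .. code-block:: python
#
#     # Predicted signal for the fitting gradient table; same shape as the
#     # input data.
#     sf_pred = sf_fit.predict()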
sf_fit = sf_model.fit(data_small) sf_odf = sf_fit.odf(sphere) fodf_spheres = actor.odf_slicer(sf_odf, sphere=sphere, scale=0.8, colormap="plasma") scene = window.Scene() scene.add(fodf_spheres) window.record(scene=scene, out_path="sf_odfs.png", size=(1000, 1000)) if interactive: window.show(scene) ############################################################################### # We can extract the peaks from the ODF, and plot these as well sf_peaks = dpp.peaks_from_model( sf_model, data_small, sphere, relative_peak_threshold=0.5, min_separation_angle=25, return_sh=False, ) scene.clear() fodf_peaks = actor.peak_slicer(sf_peaks.peak_dirs, peaks_values=sf_peaks.peak_values) scene.add(fodf_peaks) window.record(scene=scene, out_path="sf_peaks.png", size=(1000, 1000)) if interactive: window.show(scene) ############################################################################### # Finally, we plot both the peaks and the ODFs, overlaid: fodf_spheres.GetProperty().SetOpacity(0.4) scene.add(fodf_spheres) window.record(scene=scene, out_path="sf_both.png", size=(1000, 1000)) if interactive: window.show(scene) ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # SFM Peaks and ODFs. # # # To see how to use this information in tracking, proceed to # :ref:`sphx_glr_examples_built_fiber_tracking_tracking_sfm.py`. # # References # ---------- # # .. footbibliography:: # dipy-1.11.0/doc/examples/reconst_sh.py000066400000000000000000000157611476546756600177070ustar00rootroot00000000000000""" ========================================================== Signal Reconstruction Using Spherical Harmonics ========================================================== This example shows how you can use a spherical harmonics (SH) function to reconstruct any spherical function using DIPY_. In order to generate a signal, we will need to generate an evenly distributed sphere. Let's import some standard packages. """ import numpy as np from dipy.core.sphere import HemiSphere, Sphere, disperse_charges from dipy.data import get_sphere from dipy.reconst.shm import sf_to_sh, sh_to_sf from dipy.sims.voxel import multi_tensor_odf from dipy.viz import actor, window ############################################################################### # We can first create some random points on a ``HemiSphere`` using spherical # polar coordinates. rng = np.random.default_rng() n_pts = 64 theta = np.pi * rng.random(n_pts) phi = 2 * np.pi * rng.random(n_pts) hsph_initial = HemiSphere(theta=theta, phi=phi) ############################################################################### # Next, we call ``disperse_charges`` which will iteratively move the points so # that the electrostatic potential energy is minimized. In ``hsph_updated`` we # have the updated ``HemiSphere`` with the points nicely distributed on the # hemisphere. hsph_updated, potential = disperse_charges(hsph_initial, 5000) sphere = Sphere(xyz=np.vstack((hsph_updated.vertices, -hsph_updated.vertices))) ############################################################################### # We now need to create our initial signal. To do so, we will use our sphere's # vertices as the sampled points of our spherical function (SF). We will # use ``multi_tensor_odf`` to simulate an ODF. For more information on how to # use DIPY_ to simulate a signal and ODF, see # :ref:`sphx_glr_examples_built_simulations_simulate_multi_tensor.py`. 
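#
# Concretely, the code below combines two identical prolate tensors -- one
# aligned with the z-axis and one at 60 degrees to it -- with equal (50/50)
# volume fractions, which produces a two-peaked ODF.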
mevals = np.array([[0.0015, 0.00015, 0.00015], [0.0015, 0.00015, 0.00015]])
angles = [(0, 0), (60, 0)]
odf = multi_tensor_odf(sphere.vertices, mevals, angles, [50, 50])

# Enables/disables interactive visualization
interactive = False

scene = window.Scene()
scene.SetBackground(1, 1, 1)

odf_actor = actor.odf_slicer(odf[None, None, None, :], sphere=sphere)
odf_actor.RotateX(90)
scene.add(odf_actor)

print("Saving illustration as symm_signal.png")
window.record(scene=scene, out_path="symm_signal.png", size=(300, 300))
if interactive:
    window.show(scene)

###############################################################################
# .. rst-class:: centered small fst-italic fw-semibold
#
# Illustration of the simulated signal sampled on a sphere of 64 points
# per hemisphere
#
#
# We can now express this signal as a series of SH coefficients using
# ``sf_to_sh``. This function converts the values of a spherical function
# into a series of SH coefficients. For more information on SH bases, see
# :ref:`sh-basis`. For this example, we will use the ``descoteaux07`` basis
# up to a maximum SH order of 8.

# Change this value to try out other bases
sh_basis = "descoteaux07"
# Change this value to try other maximum orders
sh_order_max = 8

sh_coeffs = sf_to_sh(odf, sphere, sh_order_max=sh_order_max, basis_type=sh_basis)

###############################################################################
# ``sh_coeffs`` is an array containing the SH coefficients multiplying the SH
# functions of the chosen basis. We can use it as input of ``sh_to_sf`` to
# reconstruct our original signal. We will now reproject our signal on a high
# resolution sphere using ``sh_to_sf``.

high_res_sph = get_sphere(name="symmetric724").subdivide(n=2)
reconst = sh_to_sf(
    sh_coeffs, high_res_sph, sh_order_max=sh_order_max, basis_type=sh_basis
)

scene.clear()
odf_actor = actor.odf_slicer(reconst[None, None, None, :], sphere=high_res_sph)
odf_actor.RotateX(90)
scene.add(odf_actor)

print("Saving output as symm_reconst.png")
window.record(scene=scene, out_path="symm_reconst.png", size=(300, 300))
if interactive:
    window.show(scene)

###############################################################################
# .. rst-class:: centered small fst-italic fw-semibold
#
# Reconstruction of a symmetric signal on a high resolution sphere using a
# symmetric basis
#
#
# While a symmetric SH basis works well for reconstructing symmetric SF, it
# fails to do so on asymmetric signals. We will now create such a signal by
# using a different ODF for each hemisphere of our sphere.

mevals = np.array([[0.0015, 0.0003, 0.0003]])
angles = [(0, 0)]
odf2 = multi_tensor_odf(sphere.vertices, mevals, angles, [100])

n_pts_hemisphere = int(sphere.vertices.shape[0] / 2)
asym_odf = np.append(odf[:n_pts_hemisphere], odf2[n_pts_hemisphere:])

scene.clear()
odf_actor = actor.odf_slicer(asym_odf[None, None, None, :], sphere=sphere)
odf_actor.RotateX(90)
scene.add(odf_actor)

print("Saving output as asym_signal.png")
window.record(scene=scene, out_path="asym_signal.png", size=(300, 300))
if interactive:
    window.show(scene)

###############################################################################
# .. rst-class:: centered small fst-italic fw-semibold
#
# Illustration of an asymmetric signal sampled on a sphere of 64
# points per hemisphere
#
#
# Let's try to reconstruct this SF using a symmetric SH basis.
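#
# As a point of reference for the shapes involved (an aside, not part of the
# original tutorial): a symmetric basis of maximum order $n$ has
# $(n + 1)(n + 2)/2$ coefficients, while a full basis has $(n + 1)^2$. For
# ``sh_order_max = 8`` this gives 45 and 81 coefficients, respectively:
#
# .. code-block:: python
#
#     n = sh_order_max
#     print((n + 1) * (n + 2) // 2, (n + 1) ** 2)  # prints: 45 81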
sh_coeffs = sf_to_sh(asym_odf, sphere, sh_order_max=sh_order_max, basis_type=sh_basis)
reconst = sh_to_sf(
    sh_coeffs, high_res_sph, sh_order_max=sh_order_max, basis_type=sh_basis
)

scene.clear()
odf_actor = actor.odf_slicer(reconst[None, None, None, :], sphere=high_res_sph)
odf_actor.RotateX(90)
scene.add(odf_actor)

print("Saving output as asym_reconst.png")
window.record(scene=scene, out_path="asym_reconst.png", size=(300, 300))
if interactive:
    window.show(scene)

###############################################################################
# .. rst-class:: centered small fst-italic fw-semibold
#
# Reconstruction of an asymmetric signal using a symmetric SH basis
#
#
# As we can see, a symmetric basis fails to properly represent asymmetric SFs.
# Fortunately, DIPY_ also implements full SH bases, which can deal with
# symmetric as well as asymmetric signals. For this tutorial, we will
# demonstrate it using the ``descoteaux07`` full SH basis by setting
# ``full_basis=True``.

sh_coeffs = sf_to_sh(
    asym_odf, sphere, sh_order_max=sh_order_max, basis_type=sh_basis, full_basis=True
)
reconst = sh_to_sf(
    sh_coeffs,
    high_res_sph,
    sh_order_max=sh_order_max,
    basis_type=sh_basis,
    full_basis=True,
)

scene.clear()
odf_actor = actor.odf_slicer(reconst[None, None, None, :], sphere=high_res_sph)
odf_actor.RotateX(90)
scene.add(odf_actor)

print("Saving output as asym_reconst_full.png")
window.record(scene=scene, out_path="asym_reconst_full.png", size=(300, 300))
if interactive:
    window.show(scene)

###############################################################################
# .. rst-class:: centered small fst-italic fw-semibold
#
# Reconstruction of an asymmetric signal using a full SH basis
#
#
# As we can see, a full SH basis properly reconstructs the asymmetric signal.
dipy-1.11.0/doc/examples/reconst_shore.py000066400000000000000000000067001476546756600204060ustar00rootroot00000000000000"""
==================================================================
Continuous and analytical diffusion signal modelling with 3D-SHORE
==================================================================

We show how to model the diffusion signal as a linear combination of
continuous functions from the SHORE basis :footcite:p:`Merlet2013`,
:footcite:p:`Ozarslan2008`, :footcite:p:`Ozarslan2009`. We also compute the
analytical Orientation Distribution Function (ODF).

First import the necessary modules:
"""

from dipy.core.gradients import gradient_table
from dipy.data import get_fnames, get_sphere
from dipy.io.gradients import read_bvals_bvecs
from dipy.io.image import load_nifti
from dipy.reconst.shore import ShoreModel
from dipy.viz import actor, window

###############################################################################
# Download and read the data for this tutorial.
# ``fetch_isbi2013_2shell()`` provides data from the ISBI HARDI contest 2013,
# acquired for two shells at b-values 1500 $s/mm^2$ and 2500 $s/mm^2$.
# Below, we reconstruct the data within a region of interest (ROI) obtained by
# slicing the data array; the slice bounds correspond to
# ``(xmin:xmax, y, zmin:zmax)``, with x, y and z being the three axes defining
# the spatial positions of the voxels.
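#
# As an illustrative aside (not part of the original tutorial), once the
# b-values are loaded below, the two-shell structure of the acquisition can
# be verified by inspecting their unique values:
#
# .. code-block:: python
#
#     import numpy as np
#     print(np.unique(bvals))  # expected: roughly 0, 1500 and 2500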
fraw, fbval, fbvec = get_fnames(name="isbi2013_2shell") data, affine = load_nifti(fraw) bvals, bvecs = read_bvals_bvecs(fbval, fbvec) gtab = gradient_table(bvals, bvecs=bvecs) data_small = data[10:40, 22, 10:40] print(f"data.shape {data.shape}") ############################################################################### # ``data`` contains the voxel data and ``gtab`` contains a ``GradientTable`` # object (gradient information e.g. b-values). For example, to show the # b-values it is possible to write:: # # print(gtab.bvals) # # Instantiate the SHORE Model. # # ``radial_order`` is the radial order of the SHORE basis. # # ``zeta`` is the scale factor of the SHORE basis. # # ``lambdaN`` and ``lambdaL`` are the radial and angular regularization # constants, respectively. # # For details regarding these four parameters see :footcite:p:`Cheng2011` and # :footcite:p:`Merlet2013`. radial_order = 6 zeta = 700 lambdaN = 1e-8 lambdaL = 1e-8 asm = ShoreModel( gtab, radial_order=radial_order, zeta=zeta, lambdaN=lambdaN, lambdaL=lambdaL ) ############################################################################### # Fit the SHORE model to the data asmfit = asm.fit(data_small) ############################################################################### # Load an odf reconstruction sphere sphere = get_sphere(name="repulsion724") ############################################################################### # Compute the ODFs odf = asmfit.odf(sphere) print(f"odf.shape {odf.shape}") ############################################################################### # Display the ODFs # Enables/disables interactive visualization interactive = False scene = window.Scene() sfu = actor.odf_slicer(odf[:, None, :], sphere=sphere, colormap="plasma", scale=0.5) sfu.RotateX(-90) sfu.display(y=0) scene.add(sfu) window.record(scene=scene, out_path="odfs.png", size=(600, 600)) if interactive: window.show(scene) ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # Orientation distribution functions. # # # References # ---------- # # .. footbibliography:: # dipy-1.11.0/doc/examples/reconst_shore_metrics.py000066400000000000000000000073661476546756600221450ustar00rootroot00000000000000""" =========================== Calculate SHORE scalar maps =========================== We show how to calculate two SHORE-based scalar maps: return to origin probability (RTOP) :footcite:p:`Descoteaux2011` and mean square displacement (MSD) :footcite:p:`Wu2007`, :footcite:p:`Wu2008` on your data. SHORE can be used with any multiple b-value dataset like multi-shell or DSI. First import the necessary modules: """ import matplotlib.pyplot as plt import numpy as np from dipy.core.gradients import gradient_table from dipy.data import get_fnames from dipy.io.gradients import read_bvals_bvecs from dipy.io.image import load_nifti from dipy.reconst.shore import ShoreModel ############################################################################### # Download and get the data filenames for this tutorial. fraw, fbval, fbvec = get_fnames(name="taiwan_ntu_dsi") ############################################################################### # img contains a nibabel Nifti1Image object (data) and gtab contains a # GradientTable object (gradient information e.g. b-values). For example, to # read the b-values it is possible to write print(gtab.bvals). # # Load the raw diffusion data and the affine. 
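#
# Note that right after loading, the b-vectors are re-normalized to unit
# length below; the first entry is skipped because it corresponds to the b0
# volume, whose direction vector is zero.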
data, affine = load_nifti(fraw)
bvals, bvecs = read_bvals_bvecs(fbval, fbvec)
bvecs[1:] = bvecs[1:] / np.sqrt(np.sum(bvecs[1:] * bvecs[1:], axis=1))[:, None]
gtab = gradient_table(bvals, bvecs=bvecs)
print(f"data.shape {data.shape}")

###############################################################################
# Instantiate the Model.

asm = ShoreModel(gtab)

###############################################################################
# Let's use just one slice of the data.

dataslice = data[30:70, 20:80, data.shape[2] // 2]

###############################################################################
# Fit the signal with the model and calculate the SHORE coefficients.

asmfit = asm.fit(dataslice)

###############################################################################
# Calculate the analytical RTOP on the signal,
# which corresponds to the integral of the signal.

print("Calculating... rtop_signal")
rtop_signal = asmfit.rtop_signal()

###############################################################################
# Now we calculate the analytical RTOP on the propagator,
# which corresponds to its central value.

print("Calculating... rtop_pdf")
rtop_pdf = asmfit.rtop_pdf()

###############################################################################
# In theory, these two measures must be equal; to verify this, we calculate
# the mean squared error between them.

mse = np.sum((rtop_signal - rtop_pdf) ** 2) / rtop_signal.size
print(f"MSE = {mse:f}")

###############################################################################
# Let's calculate the analytical mean square displacement on the propagator.

print("Calculating... msd")
msd = asmfit.msd()

###############################################################################
# Show the maps and save them to a file.

fig = plt.figure(figsize=(6, 6))
ax1 = fig.add_subplot(2, 2, 1, title="rtop_signal")
ax1.set_axis_off()
ind = ax1.imshow(rtop_signal.T, interpolation="nearest", origin="lower")
plt.colorbar(ind)
ax2 = fig.add_subplot(2, 2, 2, title="rtop_pdf")
ax2.set_axis_off()
ind = ax2.imshow(rtop_pdf.T, interpolation="nearest", origin="lower")
plt.colorbar(ind)
ax3 = fig.add_subplot(2, 2, 3, title="msd")
ax3.set_axis_off()
ind = ax3.imshow(msd.T, interpolation="nearest", origin="lower", vmin=0)
plt.colorbar(ind)
plt.savefig("SHORE_maps.png")

###############################################################################
# .. rst-class:: centered small fst-italic fw-semibold
#
# RTOP and MSD calculated using the SHORE model.
#
#
# References
# ----------
#
# .. footbibliography::
#
dipy-1.11.0/doc/examples/register_binary_fuzzy.py000066400000000000000000000121231476546756600221640ustar00rootroot00000000000000"""
=======================================================
Diffeomorphic Registration with binary and fuzzy images
=======================================================

This example demonstrates registration of a binary and a fuzzy image.
This could be seen as aligning a fuzzy (sensed) image to a binary
(e.g., template) image.
"""

import matplotlib.pyplot as plt
import numpy as np
from skimage import draw, filters

from dipy.align.imwarp import SymmetricDiffeomorphicRegistration
from dipy.align.metrics import SSDMetric
from dipy.viz import regtools

###############################################################################
# Let's generate a sample template image as the combination of three ellipses.
# We will generate the fuzzy (sensed) version of the image by smoothing
# the reference image.
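#
# The helper below rasterizes a filled ellipse into the image with
# ``skimage.draw.ellipse``; applying ``skimage.filters.gaussian`` with
# ``sigma=3`` afterwards produces the fuzzy (sensed) version.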
def draw_ellipse(img, center, axis): rr, cc = draw.ellipse(center[0], center[1], axis[0], axis[1], shape=img.shape) img[rr, cc] = 1 return img img_ref = np.zeros((64, 64)) img_ref = draw_ellipse(img_ref, (25, 15), (10, 5)) img_ref = draw_ellipse(img_ref, (20, 45), (15, 10)) img_ref = draw_ellipse(img_ref, (50, 40), (7, 15)) img_in = filters.gaussian(img_ref, sigma=3) ############################################################################### # Let's define a small visualization function. def show_images(img_ref, img_warp, fig_name): fig, axarr = plt.subplots(ncols=2, figsize=(12, 5)) axarr[0].set_title("warped image & reference contour") axarr[0].imshow(img_warp) axarr[0].contour(img_ref, colors="r") ssd = np.sum((img_warp - img_ref) ** 2) axarr[1].set_title(f"difference, SSD={ssd:.02f}") im = axarr[1].imshow(img_warp - img_ref) plt.colorbar(im) fig.tight_layout() fig.savefig(fig_name + ".png") show_images(img_ref, img_in, "input") ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # Input images before alignment. # # # # Let's use the general Registration function with some naive parameters, # such as set `step_length` as 1 assuming maximal step 1 pixel and a # reasonably small number of iterations since the deformation with already # aligned images should be minimal. sdr = SymmetricDiffeomorphicRegistration( metric=SSDMetric(img_ref.ndim), step_length=1.0, level_iters=[50, 100], inv_iter=50, ss_sigma_factor=0.1, opt_tol=1.0e-3, ) ############################################################################### # Perform the registration with equal images. mapping = sdr.optimize(img_ref.astype(float), img_ref.astype(float)) img_warp = mapping.transform(img_ref, interpolation="linear") show_images(img_ref, img_warp, "output-0") regtools.plot_2d_diffeomorphic_map(mapping, delta=5, fname="map-0.png") ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # Registration results for default parameters and equal images. # # # # Perform the registration with binary and fuzzy images. mapping = sdr.optimize(img_ref.astype(float), img_in.astype(float)) img_warp = mapping.transform(img_in, interpolation="linear") show_images(img_ref, img_warp, "output-1") regtools.plot_2d_diffeomorphic_map(mapping, delta=5, fname="map-1.png") ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # Registration results for a naive parameter configuration. # # # # Note, we are still using a multi-scale approach which makes `step_length` # in the upper level multiplicatively larger. # What happens if we set `step_length` to a rather small value? sdr.step_length = 0.1 ############################################################################### # Perform the registration and examine the output. mapping = sdr.optimize(img_ref.astype(float), img_in.astype(float)) img_warp = mapping.transform(img_in, interpolation="linear") show_images(img_ref, img_warp, "output-2") regtools.plot_2d_diffeomorphic_map(mapping, delta=5, fname="map-2.png") ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # Registration results for decreased step size. # # # # An alternative scenario is to use just a single-scale level. 
# Even though the warped image may look fine, the estimated deformations show # that it is off the mark. sdr = SymmetricDiffeomorphicRegistration( metric=SSDMetric(img_ref.ndim), step_length=1.0, level_iters=[100], inv_iter=50, ss_sigma_factor=0.1, opt_tol=1.0e-3, ) ############################################################################### # Perform the registration. mapping = sdr.optimize(img_ref.astype(float), img_in.astype(float)) img_warp = mapping.transform(img_in, interpolation="linear") show_images(img_ref, img_warp, "output-3") regtools.plot_2d_diffeomorphic_map(mapping, delta=5, fname="map-3.png") ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # Registration results for single level. dipy-1.11.0/doc/examples/reslice_datasets.py000066400000000000000000000043711476546756600210510ustar00rootroot00000000000000""" ========================== Reslice diffusion datasets ========================== Often in imaging it is common to reslice images in different resolutions. Especially in dMRI we usually want images with isotropic voxel size as they facilitate most tractography algorithms. In this example we show how you can reslice a dMRI dataset to have isotropic voxel size. Let's start by importing the necessary modules. """ import nibabel as nib from dipy.align.reslice import reslice from dipy.data import get_fnames from dipy.io.image import load_nifti, save_nifti ############################################################################### # We use here a very small dataset to show the basic principles but you can # replace the following line with the path of your image. fimg = get_fnames(name="aniso_vox") ############################################################################### # We load the image, the affine of the image and the voxel size. The affine is # the transformation matrix which maps image coordinates to world (mm) # coordinates. Then, we print the shape of the volume data, affine, voxel_size = load_nifti(fimg, return_voxsize=True) print(f"Data size: {data.shape}") print(f"Voxel size: {voxel_size}") ############################################################################### # Set the required new voxel size. new_voxel_size = (3.0, 3.0, 3.0) print(f"New Voxel size: {new_voxel_size}") ############################################################################### # Start resampling (reslicing). Trilinear interpolation is used by default. data2, affine2 = reslice(data, affine, voxel_size, new_voxel_size) print(f"New data size: {data2.shape}") ############################################################################### # Save the result as a new Nifti file. save_nifti("iso_vox.nii.gz", data2, affine2) ############################################################################### # Or as analyze format or any other supported format. img3 = nib.Spm2AnalyzeImage(data2, affine2) nib.save(img3, "iso_vox.img") ############################################################################### # Done. Check your datasets. As you may have already realized the same code # can be used for general reslicing problems not only for dMRI data. 
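
###############################################################################
# .. note::
#
#     ``reslice`` also accepts an ``order`` argument selecting the spline
#     interpolation order (trilinear, ``order=1``, being the default noted
#     above). As an illustrative aside, not part of the original example,
#     nearest-neighbour interpolation -- appropriate for label volumes --
#     could be requested as follows:
#
#     .. code-block:: python
#
#         data_nn, affine_nn = reslice(
#             data, affine, voxel_size, new_voxel_size, order=0
#         )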
dipy-1.11.0/doc/examples/restore_dti.py000066400000000000000000000210111476546756600200440ustar00rootroot00000000000000""" ===================================================== Using the RESTORE algorithm for robust tensor fitting ===================================================== The diffusion tensor model takes into account certain kinds of noise (thermal), but not other kinds, such as "physiological" noise. For example, if a subject moves during acquisition of one of the diffusion-weighted samples, this might have a substantial effect on the parameters of the tensor fit calculated in all voxels in the brain for that subject. One of the pernicious consequences of this is that it can lead to wrong interpretation of group differences. For example, some groups of participants (e.g. young children, patient groups, etc.) are particularly prone to motion and differences in tensor parameters and derived statistics (such as FA) due to motion would be confounded with actual differences in the physical properties of the white matter. An example of this was shown in a paper by :footcite:t:`Yendiki2014`. One of the strategies to deal with this problem is to apply an automatic method for detecting outliers in the data, excluding these outliers and refitting the model without the presence of these outliers. This is often referred to as "robust model fitting". One of the common algorithms for robust tensor fitting is called RESTORE, and was first proposed by :footcite:p:`Chang2005`. In the following example, we will demonstrate how to use RESTORE on a simulated dataset, which we will corrupt by adding intermittent noise. We start by importing a few of the libraries we will use. - The module ``dipy.reconst.dti`` contains the implementation of tensor fitting, including an implementation of the RESTORE algorithm. - The module ``dipy.data`` is used for small datasets that we use in tests and examples. - ``dipy.io.image`` is for loading / saving imaging datasets - ``dipy.io.gradients`` is for loading / saving our bvals and bvecs - ``dipy.viz`` package is used for 3D visualization and matplotlib for 2D visualizations: """ import matplotlib.pyplot as plt import numpy as np from dipy.core.gradients import gradient_table import dipy.data as dpd import dipy.denoise.noise_estimate as ne from dipy.io.gradients import read_bvals_bvecs from dipy.io.image import load_nifti import dipy.reconst.dti as dti from dipy.viz import actor, window # Enables/disables interactive visualization interactive = False ############################################################################### # If needed, the ``fetch_stanford_hardi`` function will download the raw dMRI # dataset of a single subject. The size of this dataset is 87 MBytes. You only # need to fetch once. hardi_fname, hardi_bval_fname, hardi_bvec_fname = dpd.get_fnames(name="stanford_hardi") data, affine = load_nifti(hardi_fname) bvals, bvecs = read_bvals_bvecs(hardi_bval_fname, hardi_bvec_fname) gtab = gradient_table(bvals, bvecs=bvecs) ############################################################################### # We initialize a DTI model class instance using the gradient table used in # the measurement. By default, ``dti.TensorModel`` will use a weighted # least-squares algorithm (described in :footcite:p:`Chang2005`) to fit the # parameters of the model. 
We initialize this model as a baseline for # comparison of noise-corrupted models: dti_wls = dti.TensorModel(gtab) ############################################################################### # For the purpose of this example, we will focus on the data from a region of # interest (ROI) surrounding the Corpus Callosum. We define that ROI as the # following indices: roi_idx = (slice(20, 50), slice(55, 85), slice(38, 39)) ############################################################################### # And use them to index into the data: data = data[roi_idx] ############################################################################### # This dataset is not very noisy, so we will artificially corrupt it to # simulate the effects of "physiological" noise, such as subject motion. But # first, let's establish a baseline, using the data as it is: fit_wls = dti_wls.fit(data) fa1 = fit_wls.fa evals1 = fit_wls.evals evecs1 = fit_wls.evecs cfa1 = dti.color_fa(fa1, evecs1) sphere = dpd.default_sphere ############################################################################### # We visualize the ODFs in the ROI using ``dipy.viz`` module: scene = window.Scene() scene.add( actor.tensor_slicer(evals1, evecs1, scalar_colors=cfa1, sphere=sphere, scale=0.3) ) window.record(scene=scene, out_path="tensor_ellipsoids_wls.png", size=(600, 600)) if interactive: window.show(scene) ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # Tensor Ellipsoids. scene.clear() ############################################################################### # Next, we corrupt the data with some noise. To simulate a subject that moves # intermittently, we will replace a few of the images with a very low signal noisy_data = np.copy(data) noisy_idx = slice(-10, None) # The last 10 volumes are corrupted noisy_data[..., noisy_idx] = 1.0 ############################################################################### # We use the same model to fit this noisy data fit_wls_noisy = dti_wls.fit(noisy_data) fa2 = fit_wls_noisy.fa evals2 = fit_wls_noisy.evals evecs2 = fit_wls_noisy.evecs cfa2 = dti.color_fa(fa2, evecs2) scene = window.Scene() scene.add( actor.tensor_slicer(evals2, evecs2, scalar_colors=cfa2, sphere=sphere, scale=0.3) ) window.record(scene=scene, out_path="tensor_ellipsoids_wls_noisy.png", size=(600, 600)) if interactive: window.show(scene) ############################################################################### # In places where the tensor model is particularly sensitive to noise, the # resulting tensor field will be distorted # # .. rst-class:: centered small fst-italic fw-semibold # # Tensor Ellipsoids from noisy data. # # # To estimate the parameters from the noisy data using RESTORE, we need to # estimate what would be a reasonable amount of noise to expect in the # measurement. 
To do that, we use the ``dipy.denoise.noise_estimate`` module: sigma = ne.estimate_sigma(data) ############################################################################### # This estimate of the standard deviation will be used by the RESTORE # algorithm to identify the outliers in each voxel and is given as an input # when initializing the TensorModel object: dti_restore = dti.TensorModel(gtab, fit_method="RESTORE", sigma=sigma) fit_restore_noisy = dti_restore.fit(noisy_data) fa3 = fit_restore_noisy.fa evals3 = fit_restore_noisy.evals evecs3 = fit_restore_noisy.evecs cfa3 = dti.color_fa(fa3, evecs3) scene = window.Scene() scene.add( actor.tensor_slicer(evals3, evecs3, scalar_colors=cfa3, sphere=sphere, scale=0.3) ) print("Saving illustration as tensor_ellipsoids_restore_noisy.png") window.record( scene=scene, out_path="tensor_ellipsoids_restore_noisy.png", size=(600, 600) ) if interactive: window.show(scene) ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # Tensor Ellipsoids from noisy data recovered with RESTORE. # The tensor field looks rather restored to its noiseless state in this # image, but to convince ourselves further that this did the right thing, we # will compare the distribution of FA in this region relative to the # baseline, using the RESTORE estimate and the WLS estimate # :footcite:p:`Chung2006`. fig_hist, ax = plt.subplots(1) ax.hist(np.ravel(fa2), color="b", histtype="step", label="WLS") ax.hist(np.ravel(fa3), color="r", histtype="step", label="RESTORE") ax.hist(np.ravel(fa1), color="g", histtype="step", label="Original") ax.set_xlabel("Fractional Anisotropy") ax.set_ylabel("Count") plt.legend() fig_hist.savefig("dti_fa_distributions.png") ############################################################################### # This demonstrates that RESTORE can recover a distribution of FA that more # closely resembles the baseline distribution of the noiseless signal, and # demonstrates the utility of the method to data with intermittent # noise. Importantly, this method assumes that the tensor is a good # representation of the diffusion signal in the data. If you have reason to # believe this is not the case (for example, you have data with very high b # values and you are particularly interested in locations in the brain in which # fibers cross), you might want to use a different method to fit your data. # # # References # ---------- # # .. footbibliography:: # dipy-1.11.0/doc/examples/segment_clustering_features.py000066400000000000000000000262611476546756600233340ustar00rootroot00000000000000""" ============================================ Tractography Clustering - Available Features ============================================ This page lists available features that can be used by the tractography clustering framework. For every feature a brief description is provided explaining: what it does, when it's useful and how to use it. If you are not familiar with the tractography clustering framework, read the :ref:`clustering-framework` first. .. contents:: Available Features :local: :depth: 1 Let's import the necessary modules. 
""" import numpy as np from dipy.segment.clustering import QuickBundles from dipy.segment.featurespeed import ( ArcLengthFeature, CenterOfMassFeature, IdentityFeature, MidpointFeature, ResampleFeature, VectorOfEndpointsFeature, ) from dipy.segment.metric import ( AveragePointwiseEuclideanMetric, CosineMetric, EuclideanMetric, ) from dipy.tracking.streamline import set_number_of_points from dipy.viz import actor, colormap as cmap, window ############################################################################### # .. note:: # # All examples assume a function `get_streamlines` exists. We defined here # a simple function to do so. It imports the necessary modules and loads a # small streamline bundle. def get_streamlines(): from dipy.data import get_fnames from dipy.io.streamline import load_tractogram from dipy.tracking.streamline import Streamlines fname = get_fnames(name="fornix") fornix = load_tractogram(fname, "same", bbox_valid_check=False).streamlines streamlines = Streamlines(fornix) return streamlines ############################################################################### # .. _clustering-examples-IdentityFeature: # # Identity Feature # ================ # **What:** Instances of `IdentityFeature` simply return the streamlines # unaltered. In other words the features are the original data. # # **When:** The QuickBundles algorithm requires streamlines to have the same # number of points. If this is the case for your streamlines, you can tell # QuickBundles to not perform resampling (see following example). The # clustering should be faster than using the default behaviour of QuickBundles # since it will require less computation (i.e. no resampling). However, it # highly depends on the number of points streamlines have. By default, # QuickBundles resamples streamlines so that they have 12 points each # :footcite:p:`Garyfallidis2012a`. # # *Unless stated otherwise, it is the default feature used by `Metric` objects # in the clustering framework.* # Get some streamlines. streamlines = get_streamlines() # Previously defined. # Make sure our streamlines have the same number of points. streamlines = set_number_of_points(streamlines, nb_points=12) # Create an instance of `IdentityFeature` and tell metric to use it. feature = IdentityFeature() metric = AveragePointwiseEuclideanMetric(feature=feature) qb = QuickBundles(threshold=10.0, metric=metric) clusters = qb.cluster(streamlines) print("Nb. clusters:", len(clusters)) print("Cluster sizes:", list(map(len, clusters))) ############################################################################### # .. _clustering-examples-ResampleFeature: # # Resample Feature # ================ # **What:** Instances of `ResampleFeature` resample streamlines to a # predetermined number of points. The resampling is done on the fly such that # there are no permanent modifications made to your streamlines. # # **When:** The QuickBundles algorithm requires streamlines to have the same # number of points. By default, QuickBundles uses `ResampleFeature` to resample # streamlines so that they have 12 points each :footcite:p:`Garyfallidis2012a`. # If you want to use a different number of points for the resampling, you should # provide your own instance of `ResampleFeature` (see following example). # # **Note:** Resampling streamlines has an impact on clustering results both in # terms of speed and quality. Setting the number of points too low will result # in a loss of information about the shape of the streamlines. 
On the contrary, setting the number of points too high will slow down the clustering process.

# Get some streamlines.
streamlines = get_streamlines()  # Previously defined.

# Streamlines will be resampled to 24 points on the fly.
feature = ResampleFeature(nb_points=24)
metric = AveragePointwiseEuclideanMetric(feature=feature)  # a.k.a. MDF
qb = QuickBundles(threshold=10.0, metric=metric)
clusters = qb.cluster(streamlines)

print("Nb. clusters:", len(clusters))
print("Cluster sizes:", list(map(len, clusters)))

###############################################################################
# .. _clustering-examples-CenterOfMassFeature:
#
# Center of Mass Feature
# ======================
# **What:** Instances of `CenterOfMassFeature` compute the center of mass
# (also known as center of gravity) of a set of points. This is achieved by
# taking the mean of every coordinate independently (for more information see
# the `wiki page <https://en.wikipedia.org/wiki/Center_of_mass>`_).
#
# **When:** This feature can be useful when you *only* need information about
# the spatial position of a streamline.
#
# **Note:** The computed center is not guaranteed to be an existing point in
# the streamline.

# Enables/disables interactive visualization
interactive = False

# Get some streamlines.
streamlines = get_streamlines()  # Previously defined.

feature = CenterOfMassFeature()
metric = EuclideanMetric(feature)

qb = QuickBundles(threshold=5.0, metric=metric)
clusters = qb.cluster(streamlines)

# Extract feature of every streamline.
centers = np.asarray(list(map(feature.extract, streamlines)))

# Color each center of mass according to the cluster they belong to.
colormap = cmap.create_colormap(np.arange(len(clusters)))
colormap_full = np.ones((len(streamlines), 3))
for cluster, color in zip(clusters, colormap):
    colormap_full[cluster.indices] = color

# Visualization
scene = window.Scene()
scene.clear()
scene.SetBackground(0, 0, 0)
scene.add(actor.streamtube(streamlines, colors=window.colors.white, opacity=0.05))
scene.add(actor.point(centers[:, 0, :], colormap_full, point_radius=0.2))

window.record(
    scene=scene, n_frames=1, out_path="center_of_mass_feature.png", size=(600, 600)
)

if interactive:
    window.show(scene)

###############################################################################
# .. rst-class:: centered small fst-italic fw-semibold
#
# Showing the center of mass of each streamline colored according to
# the QuickBundles results.
#
#
# .. _clustering-examples-MidpointFeature:
#
# Midpoint Feature
# ================
# **What:** Instances of `MidpointFeature` extract the middle point of a
# streamline. If there is an even number of points, the feature will then
# correspond to the point halfway between the two middle points.
#
# **When:** This feature can be useful when you *only* need information about
# the spatial position of a streamline. This can also be an alternative to the
# `CenterOfMassFeature` if the point extracted must be on the streamline.

# Get some streamlines.
streamlines = get_streamlines()  # Previously defined.

feature = MidpointFeature()
metric = EuclideanMetric(feature)

qb = QuickBundles(threshold=5.0, metric=metric)
clusters = qb.cluster(streamlines)

# Extract feature of every streamline.
midpoints = np.asarray(list(map(feature.extract, streamlines)))

# Color each midpoint according to the cluster they belong to.
colormap = cmap.create_colormap(np.arange(len(clusters))) colormap_full = np.ones((len(streamlines), 3)) for cluster, color in zip(clusters, colormap): colormap_full[cluster.indices] = color # Visualization scene = window.Scene() scene.clear() scene.SetBackground(0, 0, 0) scene.add(actor.point(midpoints[:, 0, :], colormap_full, point_radius=0.2)) scene.add(actor.streamtube(streamlines, colors=window.colors.white, opacity=0.05)) window.record(scene=scene, n_frames=1, out_path="midpoint_feature.png", size=(600, 600)) if interactive: window.show(scene) ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # Showing the middle point of each streamline colored according to the # QuickBundles results. # # # .. _clustering-examples-ArcLengthFeature: # # ArcLength Feature # ================= # **What:** Instances of `ArcLengthFeature` compute the length of a streamline. # More specifically, this feature corresponds to the sum of the lengths of # every streamline's segments. # # **When:** This feature can be useful when you *only* need information about # the length of a streamline. # Get some streamlines. streamlines = get_streamlines() # Previously defined. feature = ArcLengthFeature() metric = EuclideanMetric(feature) qb = QuickBundles(threshold=2.0, metric=metric) clusters = qb.cluster(streamlines) # Color each streamline according to the cluster they belong to. colormap = cmap.create_colormap(np.ravel(clusters.centroids)) colormap_full = np.ones((len(streamlines), 3)) for cluster, color in zip(clusters, colormap): colormap_full[cluster.indices] = color # Visualization scene = window.Scene() scene.clear() scene.SetBackground(0, 0, 0) scene.add(actor.streamtube(streamlines, colors=colormap_full)) window.record(scene=scene, out_path="arclength_feature.png", size=(600, 600)) if interactive: window.show(scene) ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # Showing the streamlines colored according to their length. # # # .. _clustering-examples-VectorOfEndpointsFeature: # # Vector Between Endpoints Feature # ================================ # **What:** Instances of `VectorOfEndpointsFeature` extract the vector going # from one extremity of the streamline to the other. In other words, this # feature represents the vector beginning at the first point and ending at the # last point of the streamlines. # # **When:** This feature can be useful when you *only* need information about # the orientation of a streamline. # # **Note:** Since streamlines endpoints are ambiguous (e.g. the first point # could be either the beginning or the end of the streamline), one must be # careful when using this feature. # Get some streamlines. streamlines = get_streamlines() # Previously defined. feature = VectorOfEndpointsFeature() metric = CosineMetric(feature) qb = QuickBundles(threshold=0.1, metric=metric) clusters = qb.cluster(streamlines) # Color each streamline according to the cluster they belong to. 
colormap = cmap.create_colormap(np.arange(len(clusters))) colormap_full = np.ones((len(streamlines), 3)) for cluster, color in zip(clusters, colormap): colormap_full[cluster.indices] = color # Visualization scene = window.Scene() scene.clear() scene.SetBackground(0, 0, 0) scene.add(actor.streamtube(streamlines, colors=colormap_full)) window.record(scene=scene, out_path="vector_of_endpoints_feature.png", size=(600, 600)) if interactive: window.show(scene) ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # Showing the streamlines colored according to their orientation. # # # References # ---------- # # .. footbibliography:: # dipy-1.11.0/doc/examples/segment_clustering_metrics.py000066400000000000000000000130741476546756600231620ustar00rootroot00000000000000""" =========================================== Tractography Clustering - Available Metrics =========================================== This page lists available metrics that can be used by the tractography clustering framework. For every metric a brief description is provided explaining: what it does, when it's useful and how to use it. If you are not familiar with the tractography clustering framework, check this tutorial :ref:`clustering-framework`. See :footcite:p:`Garyfallidis2012a` for more information on the metrics. .. contents:: Available Metrics :local: :depth: 1 Let's start by importing the necessary modules. """ import numpy as np from dipy.segment.clustering import QuickBundles from dipy.segment.featurespeed import VectorOfEndpointsFeature from dipy.segment.metric import ( AveragePointwiseEuclideanMetric, CosineMetric, SumPointwiseEuclideanMetric, ) from dipy.tracking.streamline import set_number_of_points from dipy.viz import actor, colormap, window ############################################################################### # .. note:: # # All examples assume a function `get_streamlines` exists. We defined here # a simple function to do so. It imports the necessary modules and loads a # small streamline bundle. def get_streamlines(): from dipy.data import get_fnames from dipy.io.streamline import load_tractogram fname = get_fnames(name="fornix") fornix = load_tractogram(fname, "same", bbox_valid_check=False) return fornix.streamlines ############################################################################### # .. _clustering-examples-AveragePointwiseEuclideanMetric: # # Average of Pointwise Euclidean Metric # ===================================== # **What:** Instances of `AveragePointwiseEuclideanMetric` first compute the # pointwise Euclidean distance between two sequences *of same length* then # return the average of those distances. This metric takes as inputs two # features that are sequences containing the same number of elements. # # **When:** By default the `QuickBundles` clustering will resample your # streamlines on-the-fly so they have 12 points. If for some reason you want # to avoid this and you made sure all your streamlines already have the same # number of points, you can manually provide an instance of # `AveragePointwiseEuclideanMetric` to `QuickBundles`. Since the default # `Feature` is the `IdentityFeature` the streamlines won't be resampled thus # saving some computational time. # # **Note:** Inputs must be sequences of the same length. # Get some streamlines. streamlines = get_streamlines() # Previously defined. # Make sure our streamlines have the same number of points. 
streamlines = set_number_of_points(streamlines, nb_points=12)

# Create the instance of `AveragePointwiseEuclideanMetric` to use.
metric = AveragePointwiseEuclideanMetric()
qb = QuickBundles(threshold=10.0, metric=metric)
clusters = qb.cluster(streamlines)

print("Nb. clusters:", len(clusters))
print("Cluster sizes:", list(map(len, clusters)))

###############################################################################
# .. _clustering-examples-SumPointwiseEuclideanMetric:
#
# Sum of Pointwise Euclidean Metric
# =================================
# **What:** Instances of `SumPointwiseEuclideanMetric` first compute the
# pointwise Euclidean distance between two sequences *of same length* then
# return the sum of those distances.
#
# **When:** This metric mainly exists because it is used internally by
# `AveragePointwiseEuclideanMetric`.
#
# **Note:** Inputs must be sequences of the same length.

# Get some streamlines.
streamlines = get_streamlines()  # Previously defined.

# Make sure our streamlines have the same number of points.
nb_points = 12
streamlines = set_number_of_points(streamlines, nb_points=nb_points)

# Create the instance of `SumPointwiseEuclideanMetric` to use.
metric = SumPointwiseEuclideanMetric()
qb = QuickBundles(threshold=10.0 * nb_points, metric=metric)
clusters = qb.cluster(streamlines)

print("Nb. clusters:", len(clusters))
print("Cluster sizes:", list(map(len, clusters)))

###############################################################################
# Cosine Metric
# =============
# **What:** Instances of `CosineMetric` compute the cosine distance between two
# vectors (for more information see the
# `wiki page <https://en.wikipedia.org/wiki/Cosine_similarity>`_).
#
# **When:** This metric can be useful when you *only* need information about
# the orientation of a streamline.
#
# **Note:** Inputs must be vectors (i.e. 1D array).

# Enables/disables interactive visualization
interactive = False

# Get some streamlines.
streamlines = get_streamlines()  # Previously defined.

feature = VectorOfEndpointsFeature()
metric = CosineMetric(feature)
qb = QuickBundles(threshold=0.1, metric=metric)
clusters = qb.cluster(streamlines)

# Color each streamline according to the cluster they belong to.
colormap = colormap.create_colormap(np.arange(len(clusters)))
colormap_full = np.ones((len(streamlines), 3))
for cluster, color in zip(clusters, colormap):
    colormap_full[cluster.indices] = color

# Visualization
scene = window.Scene()
scene.clear()
scene.SetBackground(0, 0, 0)
scene.add(actor.streamtube(streamlines, colors=colormap_full))
window.record(scene=scene, out_path="cosine_metric.png", size=(600, 600))

if interactive:
    window.show(scene)

###############################################################################
# .. rst-class:: centered small fst-italic fw-semibold
#
# Showing the streamlines colored according to their orientation.
#
#
# References
# ----------
#
# .. footbibliography::
#
dipy-1.11.0/doc/examples/segment_extending_clustering_framework.py000066400000000000000000000223771476546756600255620ustar00rootroot00000000000000"""
==========================================================
Enhancing QuickBundles with different metrics and features
==========================================================

QuickBundles :footcite:p:`Garyfallidis2012a` is a flexible algorithm that
requires only a distance metric and an adjacency threshold to perform
clustering. There is a wide variety of metrics that could be used to cluster
streamlines.
The purpose of this tutorial is to show how to easily create new ``Feature``
and new ``Metric`` classes that can be used by QuickBundles.

.. _clustering-framework:

Clustering framework
====================

DIPY_ provides a simple, flexible and fast framework to do clustering of
sequential data (e.g. streamlines).

A *sequential datum* in DIPY is represented as a numpy array of size
:math:`(N \times D)`, where each row of the array represents a $D$ dimensional
point of the sequence. A set of these sequences is represented as a list of
numpy arrays of size :math:`(N_i \times D)` for :math:`i=1:M` where $M$ is the
number of sequences in the set.

This clustering framework is modular and divided into three parts:

#. Feature extraction
#. Distance computation
#. Clustering algorithm

The **feature extraction** part includes any preprocessing needed to be done
on the data before computing distances between them (e.g. resampling the
number of points of a streamline). To define a new way of extracting features,
one has to subclass ``Feature`` (see below).

The **distance computation** part includes any metric capable of evaluating a
distance between two sets of features previously extracted from the data. To
define a new way of computing distances, one has to subclass ``Metric`` (see
below).

The **clustering algorithm** part represents the clustering algorithm itself
(e.g. QuickBundles, K-means, Hierarchical Clustering). More precisely, it
includes any algorithms taking as input a list of sequential data and
outputting a ``ClusterMap`` object.

Extending `Feature`
===================
This section will guide you through the creation of a new feature extraction
method that can be used in the context of this clustering framework. For a
list of available features in DIPY see
:ref:`sphx_glr_examples_built_segmentation_segment_clustering_features.py`.

Assuming a set of streamlines, the type of features we want to extract is the
arc length (i.e. the sum of the length of each segment for a given
streamline).

Let's start by importing the necessary modules.
"""

import numpy as np

from dipy.data import get_fnames
from dipy.io.streamline import load_tractogram
from dipy.segment.clustering import QuickBundles
from dipy.segment.featurespeed import Feature, VectorOfEndpointsFeature
from dipy.segment.metric import Metric, SumPointwiseEuclideanMetric
from dipy.tracking.streamline import Streamlines, length
from dipy.viz import actor, colormap, window

###############################################################################
# We now define the class ``ArcLengthFeature`` that will perform the desired
# feature extraction. When subclassing ``Feature``, two methods have to be
# redefined: ``infer_shape`` and ``extract``.
#
# Also, an important property about feature extraction is whether or not
# its process is invariant to the order of the points within a streamline.
# This is needed as there is no way one can tell which extremity of a
# streamline is the beginning and which one is the end.


class ArcLengthFeature(Feature):
    """Computes the arc length of a streamline."""

    def __init__(self):
        # The arc length stays the same even if the streamline is reversed.
        super(ArcLengthFeature, self).__init__(is_order_invariant=True)

    def infer_shape(self, streamline):
        """Infers the shape of features extracted from `streamline`."""
        # Arc length is a scalar
        return 1

    def extract(self, streamline):
        """Extracts features from `streamline`."""
        return length(streamline)


###############################################################################
# The new feature extraction ``ArcLengthFeature`` is ready to be used. Let's
# use it to cluster a set of streamlines by their arc length. For educational
# purposes we will try to cluster a small streamline bundle known from
# neuroanatomy as the fornix.
#
# We start by loading the fornix streamlines.

fname = get_fnames(name="fornix")
fornix = load_tractogram(fname, "same", bbox_valid_check=False).streamlines

streamlines = Streamlines(fornix)

###############################################################################
# Perform QuickBundles clustering using the metric
# ``SumPointwiseEuclideanMetric`` and our ``ArcLengthFeature``.

metric = SumPointwiseEuclideanMetric(feature=ArcLengthFeature())
qb = QuickBundles(threshold=2.0, metric=metric)
clusters = qb.cluster(streamlines)

###############################################################################
# We will now visualize the clustering result.

# Color each streamline according to the cluster they belong to.
cmap = colormap.create_colormap(np.ravel(clusters.centroids))
colormap_full = np.ones((len(streamlines), 3))
for cluster, color in zip(clusters, cmap):
    colormap_full[cluster.indices] = color

scene = window.Scene()
scene.SetBackground(1, 1, 1)
scene.add(actor.streamtube(streamlines, colors=colormap_full))
window.record(scene=scene, out_path="fornix_clusters_arclength.png", size=(600, 600))

# Enables/disables interactive visualization
interactive = False
if interactive:
    window.show(scene)

###############################################################################
# .. rst-class:: centered small fst-italic fw-semibold
#
# Showing the different clusters obtained by using the arc length.
#
#
# Extending `Metric`
# ==================
# This section will guide you through the creation of a new metric that can be
# used in the context of this clustering framework. For a list of available
# metrics in DIPY see
# :ref:`sphx_glr_examples_built_segmentation_segment_clustering_metrics.py`.
#
# Assuming a set of streamlines, we want a metric that computes the cosine
# distance given the vector between endpoints of each streamline (i.e. one
# minus the cosine of the angle between two vectors). For more information
# about this distance check the
# `wiki page <https://en.wikipedia.org/wiki/Cosine_similarity>`_.
#
# We now define the class ``CosineMetric`` that will perform the desired
# distance computation. When subclassing ``Metric``, two methods have to be
# redefined: ``are_compatible`` and ``dist``. Moreover, when implementing the
# ``dist`` method, one needs to make sure the distance returned is symmetric
# (i.e. `dist(A, B) == dist(B, A)`).


class CosineMetric(Metric):
    """Compute the cosine distance between two streamlines."""

    def __init__(self):
        # For simplicity, features will be the vector between endpoints of a
        # streamline.
        super(CosineMetric, self).__init__(feature=VectorOfEndpointsFeature())

    def are_compatible(self, shape1, shape2):
        """Check if two features are vectors of same dimension.

        Basically this method exists so that we don't have to check
        inside the `dist` method (speedup).
""" return shape1 == shape2 and shape1[0] == 1 def dist(self, v1, v2): """Compute a the cosine distance between two vectors.""" def norm(x): return np.sqrt(np.sum(x**2)) cos_theta = np.dot(v1, v2.T) / (norm(v1) * norm(v2)) # Make sure it's in [-1, 1], i.e. within domain of arccosine cos_theta = np.minimum(cos_theta, 1.0) cos_theta = np.maximum(cos_theta, -1.0) return np.arccos(cos_theta) / np.pi # Normalized cosine distance ############################################################################### # The new distance ``CosineMetric`` is ready to be used. Let's use # it to cluster a set of streamlines according to the cosine distance of the # vector between their endpoints. For educational purposes we will try to # cluster a small streamline bundle known from neuroanatomy as the fornix. # # We start by loading the fornix streamlines. fname = get_fnames(name="fornix") fornix = load_tractogram(fname, "same", bbox_valid_check=False) streamlines = fornix.streamlines ############################################################################### # Perform QuickBundles clustering using our metric ``CosineMetric``. metric = CosineMetric() qb = QuickBundles(threshold=0.1, metric=metric) clusters = qb.cluster(streamlines) ############################################################################### # We will now visualize the clustering result. # Color each streamline according to the cluster they belong to. cmap = colormap.create_colormap(np.arange(len(clusters))) colormap_full = np.ones((len(streamlines), 3)) for cluster, color in zip(clusters, cmap): colormap_full[cluster.indices] = color scene = window.Scene() scene.SetBackground(1, 1, 1) scene.add(actor.streamtube(streamlines, colors=colormap_full)) window.record(scene=scene, out_path="fornix_clusters_cosine.png", size=(600, 600)) if interactive: window.show(scene) ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # Showing the different clusters obtained by using the cosine metric. # # # # References # ---------- # # .. footbibliography:: # dipy-1.11.0/doc/examples/segment_quickbundles.py000066400000000000000000000106321476546756600217430ustar00rootroot00000000000000""" ========================================= Tractography Clustering with QuickBundles ========================================= This example explains how we can use QuickBundles :footcite:p:`Garyfallidis2012a` to simplify/cluster streamlines. First import the necessary modules. """ import numpy as np from dipy.data import get_fnames from dipy.io.pickles import save_pickle from dipy.io.streamline import load_tractogram from dipy.segment.clustering import QuickBundles from dipy.viz import actor, colormap, window ############################################################################### # For educational purposes we will try to cluster a small streamline bundle # known from neuroanatomy as the fornix. fname = get_fnames(name="fornix") ############################################################################### # Load fornix streamlines. fornix = load_tractogram(fname, "same", bbox_valid_check=False) streamlines = fornix.streamlines ############################################################################### # Perform QuickBundles clustering using the MDF metric and a 10mm distance # threshold. 
# Keep in mind that since the MDF metric requires streamlines to
# have the same number of points, the clustering algorithm will internally use
# a representation of streamlines that have been automatically
# downsampled/upsampled so they have only 12 points (to manually set the
# number of points, see :ref:`clustering-examples-ResampleFeature`).

qb = QuickBundles(threshold=10.0)
clusters = qb.cluster(streamlines)

###############################################################################
# `clusters` is a `ClusterMap` object which contains attributes that
# provide information about the clustering result.

print("Nb. clusters:", len(clusters))
print("Cluster sizes:", list(map(len, clusters)))
print("Small clusters:", clusters < 10)
print("Streamlines indices of the first cluster:\n", clusters[0].indices)
print("Centroid of the last cluster:\n", clusters[-1].centroid)

###############################################################################
# `clusters` also has attributes such as `centroids` (cluster representatives),
# and methods like `add`, `remove`, and `clear` to modify the clustering
# result.
#
# Let's first show the initial dataset.

# Enables/disables interactive visualization
interactive = False

scene = window.Scene()
scene.SetBackground(1, 1, 1)
scene.add(actor.streamtube(streamlines, colors=window.colors.white))
window.record(scene=scene, out_path="fornix_initial.png", size=(600, 600))
if interactive:
    window.show(scene)

###############################################################################
# .. rst-class:: centered small fst-italic fw-semibold
#
# Initial Fornix dataset.
#
#
# Show the centroids of the fornix after clustering (with random colors):

colormap = colormap.create_colormap(np.arange(len(clusters)))

scene.clear()
scene.SetBackground(1, 1, 1)
scene.add(actor.streamtube(streamlines, colors=window.colors.white, opacity=0.05))
scene.add(actor.streamtube(clusters.centroids, colors=colormap, linewidth=0.4))
window.record(scene=scene, out_path="fornix_centroids.png", size=(600, 600))
if interactive:
    window.show(scene)

###############################################################################
# .. rst-class:: centered small fst-italic fw-semibold
#
# Showing the different QuickBundles centroids with random colors.
#
#
# Show the labeled fornix (colors from centroids).

colormap_full = np.ones((len(streamlines), 3))
for cluster, color in zip(clusters, colormap):
    colormap_full[cluster.indices] = color

scene.clear()
scene.SetBackground(1, 1, 1)
scene.add(actor.streamtube(streamlines, colors=colormap_full))
window.record(scene=scene, out_path="fornix_clusters.png", size=(600, 600))
if interactive:
    window.show(scene)

###############################################################################
# .. rst-class:: centered small fst-italic fw-semibold
#
# Showing the different clusters.
#
#
# It is also possible to save the complete `ClusterMap` object with pickling.

save_pickle("QB.pkl", clusters)

###############################################################################
# Finally, here is a video of QuickBundles applied on a larger dataset.
#
# .. raw:: html
#
#     # noqa: E501
#
#
# References
# ----------
#
# .. footbibliography::
#
dipy-1.11.0/doc/examples/simulate_dki.py000066400000000000000000000130531476546756600202020ustar00rootroot00000000000000"""
==========================
DKI MultiTensor Simulation
==========================

In this example we show how to simulate the Diffusion Kurtosis Imaging (DKI)
data of a single voxel.
DKI captures information about the non-Gaussian properties of water diffusion,
which are a consequence of the existence of tissue barriers and compartments.

In these simulations compartmental heterogeneity is taken into account by
modeling different compartments for the intra- and extra-cellular media of two
populations of fibers. These simulations are performed according to
:footcite:p:`NetoHenriques2015`.

We first import all relevant modules.
"""

import matplotlib.pyplot as plt
import numpy as np

from dipy.core.gradients import gradient_table
from dipy.data import get_fnames
from dipy.io.gradients import read_bvals_bvecs
from dipy.reconst.dti import decompose_tensor, from_lower_triangular
from dipy.sims.voxel import multi_tensor_dki, single_tensor

###############################################################################
# For the simulation we will need a GradientTable with the b-values and
# b-vectors. Here we use the GradientTable of the sample DIPY_ dataset
# ``small_64D``.

fimg, fbvals, fbvecs = get_fnames(name="small_64D")
bvals, bvecs = read_bvals_bvecs(fbvals, fbvecs)

###############################################################################
# DKI requires data from more than one non-zero b-value. Since the dataset
# ``small_64D`` was acquired with one non-zero b-value we artificially produce
# a second non-zero b-value.

bvals = np.concatenate((bvals, bvals * 2), axis=0)
bvecs = np.concatenate((bvecs, bvecs), axis=0)

###############################################################################
# The b-values and gradient directions are then converted to DIPY's
# ``GradientTable`` format.

gtab = gradient_table(bvals, bvecs=bvecs)

###############################################################################
# In ``mevals`` we save the eigenvalues of each tensor. To simulate crossing
# fibers with two different media (representing intra and extra-cellular
# media), a total of four components have to be taken into account (i.e. the
# first two compartments correspond to the intra and extra cellular media for
# the first fiber population while the others correspond to the media of the
# second fiber population)

mevals = np.array(
    [
        [0.00099, 0, 0],
        [0.00226, 0.00087, 0.00087],
        [0.00099, 0, 0],
        [0.00226, 0.00087, 0.00087],
    ]
)

###############################################################################
# In ``angles`` we save in polar coordinates (:math:`\theta, \phi`) the
# principal axis of each compartment tensor. To simulate crossing fibers at
# 70$^{\circ}$ the compartments of the first fiber are aligned to the X-axis
# while the compartments of the second fiber are aligned to the X-Z plane with
# an angular deviation of 70$^{\circ}$ from the first one.

angles = [(90, 0), (90, 0), (20, 0), (20, 0)]

###############################################################################
# In ``fractions`` we save the percentage of the contribution of each
# compartment, which is computed by multiplying the percentage of contribution
# of each fiber population and the water fraction of each different medium

fie = 0.49  # intra-axonal water fraction
fractions = [fie * 50, (1 - fie) * 50, fie * 50, (1 - fie) * 50]

###############################################################################
# Having defined the parameters for all tissue compartments, the elements of
# the diffusion tensor (DT), the elements of the kurtosis tensor (KT) and the
# DW signals simulated from the DKI model can be obtained using the function
# ``multi_tensor_dki``.
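###############################################################################
# Before running the simulation, a quick sanity check (an illustrative sketch,
# not part of the original example) can catch mismatched inputs: each of the
# four compartments needs one eigenvalue triplet, one angle pair and one
# volume fraction, and the fractions should sum to 100%.

# One entry per compartment in every parameter container.
assert len(mevals) == len(angles) == len(fractions) == 4
# The compartment fractions are percentages and should total 100.
print("Total water fraction (%):", sum(fractions))  # should print 100.0

###############################################################################
# We can now run the simulation.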
signal_dki, dt, kt = multi_tensor_dki(
    gtab, mevals, S0=200, angles=angles, fractions=fractions, snr=None
)

###############################################################################
# We can also add Rician noise with a specific SNR.

signal_noisy, dt, kt = multi_tensor_dki(
    gtab, mevals, S0=200, angles=angles, fractions=fractions, snr=10
)

###############################################################################
# For comparison purposes, we also compute the DW signal if only the diffusion
# tensor components are taken into account. For this we use DIPY's function
# ``single_tensor`` which requires that dt is decomposed into its eigenvalues
# and eigenvectors.

dt_evals, dt_evecs = decompose_tensor(from_lower_triangular(dt))
signal_dti = single_tensor(gtab, S0=200, evals=dt_evals, evecs=dt_evecs, snr=None)

###############################################################################
# Finally, we can visualize the values of the different versions of simulated
# signals for all assumed gradient directions and bvalues.

plt.plot(signal_dti, label="noiseless dti")
plt.plot(signal_dki, label="noiseless dki")
plt.plot(signal_noisy, label="with noise")
plt.legend()
plt.savefig("simulated_dki_signal.png", bbox_inches="tight")

###############################################################################
# .. rst-class:: centered small fst-italic fw-semibold
#
# Simulated signals obtained from the DTI and DKI models.
#
#
# Non-Gaussian diffusion properties in tissues are responsible for the smaller
# signal attenuation for larger bvalues when compared to signal attenuation
# from free Gaussian water diffusion. This feature can be seen in the
# figure above, since signals simulated from the DKI model reveal larger DW
# signal intensities than the signals obtained only from the diffusion tensor
# components.
#
# References
# ----------
#
# .. footbibliography::
#
dipy-1.11.0/doc/examples/simulate_multi_tensor.py000066400000000000000000000107541476546756600221630ustar00rootroot00000000000000"""
======================
MultiTensor Simulation
======================

In this example we show how someone can simulate the signal and the ODF of a
single voxel using a MultiTensor.
"""

import matplotlib.pyplot as plt
import numpy as np

from dipy.core.gradients import gradient_table
from dipy.core.sphere import HemiSphere, disperse_charges
from dipy.data import get_sphere
from dipy.sims.voxel import multi_tensor, multi_tensor_odf
from dipy.viz import actor, window

###############################################################################
# For the simulation we will need a GradientTable with the b-values and
# b-vectors. To create one, we can first create some random points on a
# ``HemiSphere`` using spherical polar coordinates.

rng = np.random.default_rng()

n_pts = 64
theta = np.pi * rng.random(size=n_pts)
phi = 2 * np.pi * rng.random(size=n_pts)
hsph_initial = HemiSphere(theta=theta, phi=phi)

###############################################################################
# Next, we call ``disperse_charges`` which will iteratively move the points so
# that the electrostatic potential energy is minimized.

hsph_updated, potential = disperse_charges(hsph_initial, 5000)

###############################################################################
# We need two stacks of ``vertices``, one for every shell, and we need two
# sets of b-values, one at 1000 $s/mm^2$, and one at 2500 $s/mm^2$.
vertices = hsph_updated.vertices values = np.ones(vertices.shape[0]) bvecs = np.vstack((vertices, vertices)) bvals = np.hstack((1000 * values, 2500 * values)) ############################################################################### # We can also add some b0s. Let's add one at the beginning and one at the end. bvecs = np.insert(bvecs, (0, bvecs.shape[0]), np.array([0, 0, 0]), axis=0) bvals = np.insert(bvals, (0, bvals.shape[0]), 0) ############################################################################### # Let's now create the ``GradientTable``. gtab = gradient_table(bvals, bvecs=bvecs) ############################################################################### # In ``mevals`` we save the eigenvalues of each tensor. mevals = np.array([[0.0015, 0.0003, 0.0003], [0.0015, 0.0003, 0.0003]]) ############################################################################### # In ``angles`` we save in polar coordinates (:math:`\theta, \phi`) the # principal axis of each tensor. angles = [(0, 0), (60, 0)] ############################################################################### # In ``fractions`` we save the percentage of the contribution of each tensor. fractions = [50, 50] ############################################################################### # The function ``multi_tensor`` will return the simulated signal and an array # with the principal axes of the tensors in cartesian coordinates. signal, sticks = multi_tensor( gtab, mevals, S0=100, angles=angles, fractions=fractions, snr=None ) ############################################################################### # We can also add Rician noise with a specific SNR. signal_noisy, sticks = multi_tensor( gtab, mevals, S0=100, angles=angles, fractions=fractions, snr=20 ) plt.plot(signal, label="noiseless") plt.plot(signal_noisy, label="with noise") plt.legend() # plt.show() plt.savefig("simulated_signal.png", bbox_inches="tight") ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # Simulated MultiTensor signal # # # # For the ODF simulation we will need a sphere. Because we are interested in a # simulation of only a single voxel, we can use a sphere with very high # resolution. We generate that by subdividing the triangles of one of DIPY_'s # cached spheres, which we can read in the following way. sphere = get_sphere(name="repulsion724") sphere = sphere.subdivide(n=2) odf = multi_tensor_odf(sphere.vertices, mevals, angles, fractions) # Enables/disables interactive visualization interactive = False scene = window.Scene() odf_actor = actor.odf_slicer(odf[None, None, None, :], sphere=sphere, colormap="plasma") odf_actor.RotateX(90) scene.add(odf_actor) print("Saving illustration as multi_tensor_simulation") window.record(scene=scene, out_path="multi_tensor_simulation.png", size=(300, 300)) if interactive: window.show(scene) ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # Simulating a MultiTensor ODF. dipy-1.11.0/doc/examples/snr_in_cc.py000066400000000000000000000147311476546756600174710ustar00rootroot00000000000000""" ============================================ SNR estimation for Diffusion-Weighted Images ============================================ Computing the Signal-to-Noise-Ratio (SNR) of DW images is still an open question, as SNR depends on the white matter structure of interest as well as the gradient direction corresponding to each DWI. 
In classical MRI, SNR can be defined as the ratio of the mean of the signal
divided by the standard deviation of the underlying Gaussian noise, that is
$SNR = mean(signal) / std(noise)$. The noise standard deviation can be
computed from the background in any of the DW images. How do we compute the
mean of the signal, and what signal?

The strategy here is to compute a 'worst-case' SNR for DWI. Several white
matter structures such as the corpus callosum (CC), corticospinal tract (CST),
or the superior longitudinal fasciculus (SLF) can be easily identified from
the colored-FA (CFA) map. In this example, we will use voxels from the CC,
which have the characteristic of being highly red in the CFA map since they
are mainly oriented in the left-right direction. We know that the DW image
closest to the X-direction will be the one with the most attenuated diffusion
signal. This is the strategy adopted in several recent papers (see
:footcite:p:`Descoteaux2011` and :footcite:p:`Jones2013`). It gives a good
indication of the quality of the DWI data.

First, we compute the tensor model in a brain mask (see the
:ref:`sphx_glr_examples_built_reconstruction_reconst_dti.py` example for
further explanations).

Let's load the necessary modules:
"""

import matplotlib.pyplot as plt
import numpy as np
from scipy.ndimage import binary_dilation

from dipy.core.gradients import gradient_table
from dipy.data import get_fnames
from dipy.io.gradients import read_bvals_bvecs
from dipy.io.image import load_nifti, save_nifti
from dipy.reconst.dti import TensorModel
from dipy.segment.mask import bounding_box, median_otsu, segment_from_cfa

###############################################################################
# Then, we fetch and load a specific dataset with 64 gradient directions:

hardi_fname, hardi_bval_fname, hardi_bvec_fname = get_fnames(name="stanford_hardi")
data, affine = load_nifti(hardi_fname)

bvals, bvecs = read_bvals_bvecs(hardi_bval_fname, hardi_bvec_fname)
gtab = gradient_table(bvals, bvecs=bvecs)

print("Computing brain mask...")
b0_mask, mask = median_otsu(data, vol_idx=[0])

print("Computing tensors...")
tenmodel = TensorModel(gtab)
tensorfit = tenmodel.fit(data, mask=mask)

###############################################################################
# Next, we set our red-green-blue thresholds to (0.6, 1) in the x axis and
# (0, 0.1) in the y and z axes respectively. These values work well in practice
# to isolate the very RED voxels of the cfa map.
#
# Then, as a safeguard, we want just RED voxels in the CC (there could be
# noisy red voxels around the brain mask and we don't want those). Unless the
# brain acquisition was badly aligned, the CC is always close to the
# mid-sagittal slice.
#
# The following lines perform these two operations and then save the
# computed mask.
print("Computing worst-case/best-case SNR using the corpus callosum...") threshold = (0.6, 1, 0, 0.1, 0, 0.1) CC_box = np.zeros_like(data[..., 0]) mins, maxs = bounding_box(mask) mins = np.array(mins) maxs = np.array(maxs) diff = (maxs - mins) // 4 bounds_min = mins + diff bounds_max = maxs - diff CC_box[ bounds_min[0] : bounds_max[0], bounds_min[1] : bounds_max[1], bounds_min[2] : bounds_max[2], ] = 1 mask_cc_part, cfa = segment_from_cfa(tensorfit, CC_box, threshold, return_cfa=True) save_nifti("cfa_CC_part.nii.gz", (cfa * 255).astype(np.uint8), affine) save_nifti("mask_CC_part.nii.gz", mask_cc_part.astype(np.uint8), affine) region = 40 fig = plt.figure("Corpus callosum segmentation") plt.subplot(1, 2, 1) plt.title("Corpus callosum (CC)") plt.axis("off") red = cfa[..., 0] plt.imshow(np.rot90(red[region, ...])) plt.subplot(1, 2, 2) plt.title("CC mask used for SNR computation") plt.axis("off") plt.imshow(np.rot90(mask_cc_part[region, ...])) fig.savefig("CC_segmentation.png", bbox_inches="tight") ############################################################################### # Now that we are happy with our crude CC mask that selected voxels in the # x-direction, we can use all the voxels to estimate the mean signal in this # region. mean_signal = np.mean(data[mask_cc_part], axis=0) ############################################################################### # Now, we need a good background estimation. We will reuse the brain mask # computed before and invert it to catch the outside of the brain. This could # also be determined manually with a ROI in the background. # # .. warning:: # # Certain MR manufacturers mask out the outside of the brain with 0's. # One thus has to be careful how the noise ROI is defined. mask_noise = binary_dilation(mask, iterations=10) mask_noise[..., : mask_noise.shape[-1] // 2] = 1 mask_noise = ~mask_noise save_nifti("mask_noise.nii.gz", mask_noise.astype(np.uint8), affine) noise_std = np.std(data[mask_noise, :]) print("Noise standard deviation sigma= ", noise_std) ############################################################################### # We can now compute the SNR for each DWI. For example, report SNR # for DW images with gradient direction that lies the closest to # the X, Y and Z axes. # Exclude null bvecs from the search idx = np.sum(gtab.bvecs, axis=-1) == 0 gtab.bvecs[idx] = np.inf axis_X = np.argmin(np.sum((gtab.bvecs - np.array([1, 0, 0])) ** 2, axis=-1)) axis_Y = np.argmin(np.sum((gtab.bvecs - np.array([0, 1, 0])) ** 2, axis=-1)) axis_Z = np.argmin(np.sum((gtab.bvecs - np.array([0, 0, 1])) ** 2, axis=-1)) for direction in [0, axis_X, axis_Y, axis_Z]: SNR = mean_signal[direction] / noise_std if direction == 0: print(f"SNR for the b=0 image is : {SNR}") else: print(f"SNR for direction {direction} {gtab.bvecs[direction]} is : {SNR}") ############################################################################### # Since the CC is aligned with the X axis, the lowest SNR is for that gradient # direction. In comparison, the DW images in the perpendicular Y and Z axes # have a high SNR. The b0 still exhibits the highest SNR, since there is no # signal attenuation. # # Hence, we can say the Stanford diffusion data has a 'worst-case' SNR of # approximately 5, a 'best-case' SNR of approximately 24, and a SNR of 42 on # the b0 image. # # References # ---------- # # .. 
footbibliography::
#
dipy-1.11.0/doc/examples/streamline_formats.py000066400000000000000000000223341476546756600214300ustar00rootroot00000000000000"""
===========================
Read/Write streamline files
===========================

Overview
========

DIPY_ can read and write many different file formats. In this example
we give a short introduction on how to use it for loading or saving
streamlines.

The new stateful tractogram class was made to reduce errors caused by spatial
transformation and complex file format conventions. Read :ref:`faq`
"""

import os

import nibabel as nib
import numpy as np

from dipy.data.fetcher import fetch_file_formats, get_file_formats
from dipy.io.stateful_tractogram import Space, StatefulTractogram
from dipy.io.streamline import load_tractogram, save_tractogram
from dipy.io.utils import create_nifti_header, get_reference_info, is_header_compatible
from dipy.tracking.streamline import select_random_set_of_streamlines
from dipy.tracking.utils import density_map

###############################################################################
# First fetch the dataset that contains 5 tractography files in 5 file
# formats:
#
# - cc_m_sub.trk
# - laf_m_sub.tck
# - lpt_m_sub.fib
# - raf_m_sub.vtk
# - rpt_m_sub.dpy
#
# And their reference anatomy, common to all 5 files:
#
# - template0.nii.gz

fetch_file_formats()
bundles_filename, ref_anat_filename = get_file_formats()
for filename in bundles_filename:
    print(os.path.basename(filename))
reference_anatomy = nib.load(ref_anat_filename)

###############################################################################
# Load tractogram will support 5 file formats; functions like load_trk or
# load_tck will simply be restricted to one file format.
#
# TRK files contain their own header (when written properly), so they
# technically do not need a reference. (See how below)
#
# ``cc_trk = load_tractogram(bundles_filename[0], 'same')``

cc_sft = load_tractogram(bundles_filename[0], reference_anatomy)
print(cc_sft)

laf_sft = load_tractogram(bundles_filename[1], reference_anatomy)
raf_sft = load_tractogram(bundles_filename[3], reference_anatomy)

###############################################################################
# These files contain invalid streamlines (negative values once in voxel
# space). This is not considered a valid tractography file, but it is possible
# to load it anyway.

lpt_sft = load_tractogram(
    bundles_filename[2], reference_anatomy, bbox_valid_check=False
)
rpt_sft = load_tractogram(
    bundles_filename[4], reference_anatomy, bbox_valid_check=False
)

###############################################################################
# The function ``load_tractogram`` requires a reference; any of the following
# inputs is considered valid (as long as they are in the same shared space):
#
# - Nifti filename
# - Trk filename
# - nib.nifti1.Nifti1Image
# - nib.streamlines.trk.TrkFile
# - nib.nifti1.Nifti1Header
# - Trk header (dict)
# - Stateful Tractogram
#
# The reason why this parameter is required is to guarantee all information
# related to space attributes is always present.

affine, dimensions, voxel_sizes, voxel_order = get_reference_info(reference_anatomy)
print(affine)
print(dimensions)
print(voxel_sizes)
print(voxel_order)

###############################################################################
# If you have a Trk file that was generated using a particular anatomy,
# to be considered valid all fields must correspond between the headers.
# It can be easily verified using this function, which also accepts
# the same variety of input as ``get_reference_info``

print(is_header_compatible(reference_anatomy, bundles_filename[0]))

###############################################################################
# If a TRK was generated with a valid header, but the reference NIFTI was
# lost, a header can be generated to then create a fake NIFTI file.
#
# If you wish to manually save Trk and Tck files using nibabel streamlines
# API for more freedom of action (not recommended for beginners) you can
# create a valid header using create_tractogram_header

nifti_header = create_nifti_header(affine, dimensions, voxel_sizes)
nib.save(nib.Nifti1Image(np.zeros(dimensions), affine, nifti_header), "fake.nii.gz")
nib.save(reference_anatomy, os.path.basename(ref_anat_filename))

###############################################################################
# Once loaded, no matter the original file format, the stateful tractogram is
# self-contained and maintains a valid state. By requiring a reference the
# tractogram's spatial transformation can be easily manipulated.
#
# Let's save all files as TRK to visualize in TrackVis for example.
# However, when loaded the lpt and rpt files contain invalid streamlines and
# for particular operations/tools/functions it is safer to remove them

save_tractogram(cc_sft, "cc.trk")
save_tractogram(laf_sft, "laf.trk")
save_tractogram(raf_sft, "raf.trk")

print(lpt_sft.is_bbox_in_vox_valid())
lpt_sft.remove_invalid_streamlines()
print(lpt_sft.is_bbox_in_vox_valid())
save_tractogram(lpt_sft, "lpt.trk")

print(rpt_sft.is_bbox_in_vox_valid())
rpt_sft.remove_invalid_streamlines()
print(rpt_sft.is_bbox_in_vox_valid())
save_tractogram(rpt_sft, "rpt.trk")

###############################################################################
# Some functions in DIPY require streamlines to be in voxel space so
# computation can be performed on a grid (connectivity matrix, ROIs masking,
# density map). The stateful tractogram class provides safe functions for such
# manipulation. These functions can be called safely over and over, by knowing
# in which state the tractogram is operating, and compute only necessary
# transformations.
#
# No matter the state, functions such as ``save_tractogram`` or
# ``removing_invalid_coordinates`` can be called safely and the
# transformations are handled internally when needed.

cc_sft.to_voxmm()
print(cc_sft.space)
cc_sft.to_rasmm()
print(cc_sft.space)

###############################################################################
# Now let's move them all to voxel space, subsample them to 1000 streamlines,
# compute a density map and save everything for visualisation in another
# software such as Trackvis or MI-Brain.
#
# To access volume information in a grid, the corner of the voxel must be
# considered the origin in order to prevent negative values.
# Any operation doing interpolation or accessing a grid must use the # function 'to_vox()' and 'to_corner()' cc_sft.to_vox() laf_sft.to_vox() raf_sft.to_vox() lpt_sft.to_vox() rpt_sft.to_vox() cc_sft.to_corner() laf_sft.to_corner() raf_sft.to_corner() lpt_sft.to_corner() rpt_sft.to_corner() cc_streamlines_vox = select_random_set_of_streamlines(cc_sft.streamlines, 1000) laf_streamlines_vox = select_random_set_of_streamlines(laf_sft.streamlines, 1000) raf_streamlines_vox = select_random_set_of_streamlines(raf_sft.streamlines, 1000) lpt_streamlines_vox = select_random_set_of_streamlines(lpt_sft.streamlines, 1000) rpt_streamlines_vox = select_random_set_of_streamlines(rpt_sft.streamlines, 1000) # Same dimensions for every stateful tractogram, can be reused affine, dimensions, voxel_sizes, voxel_order = cc_sft.space_attributes cc_density = density_map(cc_streamlines_vox, np.eye(4), dimensions) laf_density = density_map(laf_streamlines_vox, np.eye(4), dimensions) raf_density = density_map(raf_streamlines_vox, np.eye(4), dimensions) lpt_density = density_map(lpt_streamlines_vox, np.eye(4), dimensions) rpt_density = density_map(rpt_streamlines_vox, np.eye(4), dimensions) ############################################################################### # Replacing streamlines is possible, but if the state was modified between # operations such as this one is not recommended: # -> cc_sft.streamlines = cc_streamlines_vox # # It is recommended to re-create a new StatefulTractogram object and # explicitly specify in which space the streamlines are. Be careful to follow # the order of operations. # # If the tractogram was from a Trk file with metadata, this will be lost. # If you wish to keep metadata while manipulating the number or the order # look at the function StatefulTractogram.remove_invalid_streamlines() for more # details # # It is important to mention that once the object is created in a consistent # state the ``save_tractogram`` function will save a valid file. And then the # function ``load_tractogram`` will load them in a valid state. 
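###############################################################################
# As an illustrative check (a sketch, not part of the original example): in
# voxel space with corner origin, every coordinate of the subsampled
# streamlines should lie inside the grid, i.e. between 0 and ``dimensions``.

# Stack all points of the cc bundle into one (n_points, 3) array.
pts = np.concatenate([np.asarray(s) for s in cc_streamlines_vox])
print(
    "cc streamlines within grid:",
    bool((pts >= 0).all() and (pts <= np.asarray(dimensions)).all()),
)

###############################################################################
# We now re-create the stateful tractograms from the voxel-space streamlines.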
cc_sft = StatefulTractogram(cc_streamlines_vox, reference_anatomy, Space.VOX) laf_sft = StatefulTractogram(laf_streamlines_vox, reference_anatomy, Space.VOX) raf_sft = StatefulTractogram(raf_streamlines_vox, reference_anatomy, Space.VOX) lpt_sft = StatefulTractogram(lpt_streamlines_vox, reference_anatomy, Space.VOX) rpt_sft = StatefulTractogram(rpt_streamlines_vox, reference_anatomy, Space.VOX) print(len(cc_sft), len(laf_sft), len(raf_sft), len(lpt_sft), len(rpt_sft)) save_tractogram(cc_sft, "cc_1000.trk") save_tractogram(laf_sft, "laf_1000.trk") save_tractogram(raf_sft, "raf_1000.trk") save_tractogram(lpt_sft, "lpt_1000.trk") save_tractogram(rpt_sft, "rpt_1000.trk") nib.save(nib.Nifti1Image(cc_density, affine, nifti_header), "cc_density.nii.gz") nib.save(nib.Nifti1Image(laf_density, affine, nifti_header), "laf_density.nii.gz") nib.save(nib.Nifti1Image(raf_density, affine, nifti_header), "raf_density.nii.gz") nib.save(nib.Nifti1Image(lpt_density, affine, nifti_header), "lpt_density.nii.gz") nib.save(nib.Nifti1Image(rpt_density, affine, nifti_header), "rpt_density.nii.gz") dipy-1.11.0/doc/examples/streamline_length.py000066400000000000000000000155201476546756600212350ustar00rootroot00000000000000""" ==================================== Streamline length and size reduction ==================================== This example shows how to calculate the lengths of a set of streamlines and also how to compress the streamlines without considerably reducing their lengths or overall shape. A streamline in DIPY_ is represented as a numpy array of size :math:`(N \times 3)` where each row of the array represents a 3D point of the streamline. A set of streamlines is represented with a list of numpy arrays of size :math:`(N_i \times 3)` for :math:`i=1:M` where $M$ is the number of streamlines in the set. """ import matplotlib.pyplot as plt import numpy as np from dipy.tracking.distances import approx_polygon_track from dipy.tracking.streamline import set_number_of_points from dipy.tracking.utils import length from dipy.viz import actor, window ############################################################################### # Let's first create a simple simulation of a bundle of streamlines using # a cosine function. def simulated_bundles(no_streamlines=50, n_pts=100): rng = np.random.default_rng() t = np.linspace(-10, 10, n_pts) bundle = [] for i in np.linspace(3, 5, no_streamlines): pts = np.vstack((np.cos(2 * t / np.pi), np.zeros(t.shape) + i, t)).T bundle.append(pts) start = rng.integers(10, 30, no_streamlines) end = rng.integers(60, 100, no_streamlines) bundle = [ 10 * streamline[start[i] : end[i]] for (i, streamline) in enumerate(bundle) ] bundle = [np.ascontiguousarray(streamline) for streamline in bundle] return bundle bundle = simulated_bundles() print(f"This bundle has {len(bundle)} streamlines") ############################################################################### # Using the ``length`` function we can retrieve the lengths of each streamline. # Below we show the histogram of the lengths of the streamlines. lengths = list(length(bundle)) fig_hist, ax = plt.subplots(1) ax.hist(lengths, color="burlywood") ax.set_xlabel("Length") ax.set_ylabel("Count") # plt.show() plt.legend() plt.savefig("length_histogram.png") ############################################################################### # .. 
rst-class:: centered small fst-italic fw-semibold
#
# Histogram of lengths of the streamlines
#
#
# ``length`` will return the length in the units of the coordinate system in
# which the streamlines currently are. So, if the streamlines are in world
# coordinates then the lengths will be in millimeters (mm). If the streamlines
# are for example in native image coordinates of voxel size 2mm isotropic then
# you will need to multiply the lengths by 2 if you want them to correspond to
# mm. In this example we process simulated data without units, however this
# information is good to have in mind when you calculate lengths with real
# data.
#
# Next, let's find the number of points that each streamline has.

n_pts = [len(streamline) for streamline in bundle]

###############################################################################
# Often, streamlines are represented with more points than what is actually
# necessary for specific applications. Also, sometimes every streamline has a
# different number of points, which could be a problem for some algorithms.
# The function ``set_number_of_points`` can be used to set the number of
# points of a streamline at a specific number and at the same time enforce
# that all the segments of the streamline will have equal length.

bundle_downsampled = set_number_of_points(bundle, nb_points=12)
n_pts_ds = [len(s) for s in bundle_downsampled]

###############################################################################
# Alternatively, the function ``approx_polygon_track`` allows reducing the
# number of points so that there are more points in curvy regions and less
# points in less curvy regions. In contrast with ``set_number_of_points`` it
# does not enforce that segments should be of equal size.

bundle_downsampled2 = [approx_polygon_track(s, 0.25) for s in bundle]
n_pts_ds2 = [len(streamline) for streamline in bundle_downsampled2]

###############################################################################
# Both ``set_number_of_points`` and ``approx_polygon_track`` can be thought of
# as methods for lossy compression of streamlines.

# Enables/disables interactive visualization
interactive = False

scene = window.Scene()
scene.SetBackground(*window.colors.white)
bundle_actor = actor.streamtube(bundle, colors=window.colors.red, linewidth=0.3)

scene.add(bundle_actor)

bundle_actor2 = actor.streamtube(
    bundle_downsampled, colors=window.colors.red, linewidth=0.3
)
bundle_actor2.SetPosition(0, 40, 0)

bundle_actor3 = actor.streamtube(
    bundle_downsampled2, colors=window.colors.red, linewidth=0.3
)
bundle_actor3.SetPosition(0, 80, 0)

scene.add(bundle_actor2)
scene.add(bundle_actor3)

scene.set_camera(position=(0, 0, 0), focal_point=(30, 0, 0))
window.record(scene=scene, out_path="simulated_cosine_bundle.png", size=(900, 900))
if interactive:
    window.show(scene)

###############################################################################
# .. rst-class:: centered small fst-italic fw-semibold
#
# Initial bundle (down), downsampled at 12 equidistant points (middle),
# downsampled with points that are not equidistant (up).
#
#
# From the figure above we can see that all 3 bundles look quite similar.
# However, when we plot the histogram of the number of points used for each
# streamline, it becomes obvious that we have managed to greatly reduce the
# size of the initial dataset.
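###############################################################################
# Before looking at the histogram, we can put a number on that claim with a
# small sketch (using only the point counts computed above; not part of the
# original example): compare the total number of points stored by each
# version of the bundle.

total, total_ds, total_ds2 = sum(n_pts), sum(n_pts_ds), sum(n_pts_ds2)
print(f"initial: {total} points")
print(
    f"set_number_of_points (12): {total_ds} points "
    f"({100 * total_ds / total:.1f}% of initial)"
)
print(
    f"approx_polygon_track (0.25): {total_ds2} points "
    f"({100 * total_ds2 / total:.1f}% of initial)"
)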
fig_hist, ax = plt.subplots(1) ax.hist(n_pts, color="r", histtype="step", label="initial") ax.hist(n_pts_ds, color="g", histtype="step", label="set_number_of_points (12)") ax.hist(n_pts_ds2, color="b", histtype="step", label="approx_polygon_track (0.25)") ax.set_xlabel("Number of points") ax.set_ylabel("Count") # plt.show() plt.legend() plt.savefig("n_pts_histogram.png") ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # Histogram of the number of points of the streamlines. # # # Finally, we can also show that the lengths of the streamlines haven't changed # considerably after applying the two methods of downsampling. lengths_downsampled = list(length(bundle_downsampled)) lengths_downsampled2 = list(length(bundle_downsampled2)) fig, ax = plt.subplots(1) ax.plot(lengths, color="r", label="initial") ax.plot(lengths_downsampled, color="g", label="set_number_of_points (12)") ax.plot(lengths_downsampled2, color="b", label="approx_polygon_track (0.25)") ax.set_xlabel("Streamline ID") ax.set_ylabel("Length") # plt.show() plt.legend() plt.savefig("lengths_plots.png") ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # Lengths of each streamline for every one of the 3 bundles. dipy-1.11.0/doc/examples/streamline_registration.py000066400000000000000000000227531476546756600224740ustar00rootroot00000000000000""" ================================================ Applying image-based deformations to streamlines ================================================ This example shows how to register streamlines into a template space by applying non-rigid deformations. At times we will be interested in bringing a set of streamlines into some common, reference space to compute statistics out of the registered streamlines. For a discussion on the effects of spatial normalization approaches on tractography the work by :footcite:t:`Greene2018` can be read. For brevity, we will include in this example only streamlines going through the corpus callosum connecting left to right superior frontal cortex. The process of tracking and finding these streamlines is fully demonstrated in the :ref:`sphx_glr_examples_built_streamline_analysis_streamline_tools.py` example. """ from os.path import join as pjoin import numpy as np from dipy.align import affine_registration, syn_registration from dipy.align.reslice import reslice from dipy.core.gradients import gradient_table from dipy.data import fetch_stanford_tracks, get_fnames from dipy.data.fetcher import fetch_mni_template, read_mni_template from dipy.io.gradients import read_bvals_bvecs from dipy.io.image import load_nifti from dipy.io.stateful_tractogram import Space, StatefulTractogram from dipy.io.streamline import load_tractogram, save_tractogram from dipy.segment.mask import median_otsu from dipy.tracking.streamline import transform_streamlines from dipy.viz import has_fury, horizon, regtools ############################################################################### # In order to get the deformation field, we will first use two b0 volumes. Both # moving and static images are assumed to be in RAS. 
The first one will be the
# b0 from the Stanford HARDI dataset:

hardi_fname, hardi_bval_fname, hardi_bvec_fname = get_fnames(name="stanford_hardi")

dwi_data, dwi_affine, dwi_img = load_nifti(hardi_fname, return_img=True)
dwi_vox_size = dwi_img.header.get_zooms()[0]
dwi_bvals, dwi_bvecs = read_bvals_bvecs(hardi_bval_fname, hardi_bvec_fname)
gtab = gradient_table(dwi_bvals, bvecs=dwi_bvecs)

###############################################################################
# The second one will be the T2-contrast MNI template image. The resolution of
# the MNI template is 1x1x1 mm. However, the resolution of the Stanford HARDI
# diffusion data is 2x2x2 mm. Thus, we'll need to reslice to 2x2x2 mm
# isotropic voxel resolution to match the HARDI data.

fetch_mni_template()
img_t2_mni = read_mni_template(version="a", contrast="T2")

t2_mni_data, t2_mni_affine = img_t2_mni.get_fdata(), img_t2_mni.affine
t2_mni_voxel_size = img_t2_mni.header.get_zooms()[:3]

new_voxel_size = (2.0, 2.0, 2.0)

t2_resliced_data, t2_resliced_affine = reslice(
    t2_mni_data, t2_mni_affine, t2_mni_voxel_size, new_voxel_size
)

###############################################################################
# We remove the skull from the DWI data. For this, we need the indices of the
# b0 volumes.

b0_idx_stanford = np.where(gtab.b0s_mask)[0].tolist()
dwi_data_noskull, _ = median_otsu(dwi_data, vol_idx=b0_idx_stanford, numpass=6)

###############################################################################
# We filter the diffusion data from the Stanford HARDI dataset to find all the
# b0 images.

b0_data_stanford = dwi_data_noskull[..., gtab.b0s_mask]

###############################################################################
# And go on to compute the Stanford HARDI dataset mean b0 image.

mean_b0_masked_stanford = np.mean(b0_data_stanford, axis=3, dtype=dwi_data.dtype)

###############################################################################
# We will register the mean b0 to the MNI T2 image template non-rigidly to
# obtain the deformation field that will be applied to the streamlines. This
# is just one of the strategies that can be used to obtain an appropriate
# deformation field. Other strategies include computing an FA template map as
# the static image, and registering the FA map of the moving image to it. This
# may eventually lead to results with improved accuracy, since a T2-contrast
# template image as the target for normalization does not provide optimal
# tissue contrast for maximal SyN performance.
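#
# Before registering, it is worth checking that the resliced template now
# sits on a grid with the same voxel size as the diffusion data. This small
# sanity check is an illustrative addition to the example; it derives the
# voxel sizes from the column norms of the resliced affine:

resliced_zooms = np.sqrt((t2_resliced_affine[:3, :3] ** 2).sum(axis=0))
print(f"Resliced template voxel size: {resliced_zooms}")
print(f"DWI voxel size: {dwi_vox_size}")

###############################################################################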
#
# We will first perform an affine registration to roughly align the two
# volumes:

pipeline = [
    "center_of_mass",
    "translation",
    "rigid",
    "rigid_isoscaling",
    "rigid_scaling",
    "affine",
]
level_iters = [10000, 1000, 100]
sigmas = [3.0, 1.0, 0.0]
factors = [4, 2, 1]

warped_b0, warped_b0_affine = affine_registration(
    mean_b0_masked_stanford,
    t2_resliced_data,
    moving_affine=dwi_affine,
    static_affine=t2_resliced_affine,
    pipeline=pipeline,
    level_iters=level_iters,
    sigmas=sigmas,
    factors=factors,
)

###############################################################################
# We now perform the non-rigid deformation using the Symmetric Diffeomorphic
# Registration (SyN) Algorithm proposed by :footcite:t:`Avants2008` (also
# implemented in the ANTs software :footcite:p:`Avants2009`):

level_iters = [10, 10, 5]

final_warped_b0, mapping = syn_registration(
    mean_b0_masked_stanford,
    t2_resliced_data,
    moving_affine=dwi_affine,
    static_affine=t2_resliced_affine,
    prealign=warped_b0_affine,
    level_iters=level_iters,
)

###############################################################################
# We show the registration result with:

regtools.overlay_slices(
    t2_resliced_data,
    final_warped_b0,
    slice_index=None,
    slice_type=0,
    ltitle="Static",
    rtitle="Moving",
    fname="transformed_sagittal.png",
)
regtools.overlay_slices(
    t2_resliced_data,
    final_warped_b0,
    slice_index=None,
    slice_type=1,
    ltitle="Static",
    rtitle="Moving",
    fname="transformed_coronal.png",
)
regtools.overlay_slices(
    t2_resliced_data,
    final_warped_b0,
    slice_index=None,
    slice_type=2,
    ltitle="Static",
    rtitle="Moving",
    fname="transformed_axial.png",
)

###############################################################################
# .. rst-class:: centered small fst-italic fw-semibold
#
#    Deformable registration result.
#
#
# Let's now fetch a set of streamlines from the Stanford HARDI dataset.
# Those streamlines were generated during the
# :ref:`sphx_glr_examples_built_streamline_analysis_streamline_tools.py`
# example.
#
# We read the streamlines from file in voxel space:

streamlines_files = fetch_stanford_tracks()
lr_superiorfrontal_path = pjoin(streamlines_files[1], "hardi-lr-superiorfrontal.trk")

sft = load_tractogram(lr_superiorfrontal_path, "same")

###############################################################################
# The affine registration already gives a pretty good result. We could use its
# mapping to transform the streamlines to the anatomical space (MNI T2 image).
# For that, we use `transform_streamlines` and `warped_b0_affine`. Do not
# forget that `warped_b0_affine` is the affine transformation from the mean b0
# image to the T2 template image. Thus, to move points from the warped b0
# space to the T2 space, you need to apply the inverse of this affine
# transformation matrix, i.e., to "undo" the mapping it describes from points
# in the mean b0 image to points in the MNI T2 image.

mni_t2_streamlines_using_affine_reg = transform_streamlines(
    sft.streamlines, np.linalg.inv(warped_b0_affine)
)

sft_in_t2_using_aff_reg = StatefulTractogram(
    mni_t2_streamlines_using_affine_reg, img_t2_mni, Space.RASMM
)

###############################################################################
# Let's visualize the streamlines in the MNI T2 space. Switch the interactive
# variable to True if you want to explore the visualization in an interactive
# window.
interactive = False

if has_fury:
    horizon(
        tractograms=[sft_in_t2_using_aff_reg],
        images=[(t2_mni_data, t2_mni_affine)],
        interactive=interactive,
        world_coords=True,
        out_png="streamlines_DSN_MNI_aff_reg.png",
    )

###############################################################################
# To get a better result, we use the mapping obtained by the Symmetric
# Diffeomorphic Registration (SyN). Then, we can visualize the corpus callosum
# bundles that have been deformed to adapt to the MNI template space.

mni_t2_streamlines_using_syn = mapping.transform_points_inverse(sft.streamlines)

sft_in_t2_using_syn = StatefulTractogram(
    mni_t2_streamlines_using_syn, img_t2_mni, Space.RASMM
)

if has_fury:
    horizon(
        tractograms=[sft_in_t2_using_syn],
        images=[(t2_mni_data, t2_mni_affine)],
        interactive=interactive,
        world_coords=True,
        out_png="streamlines_DSN_MNI_syn.png",
    )

###############################################################################
# Finally, we save the two registered tractograms:
#
# - `mni-lr-superiorfrontal_aff_reg.trk` contains the streamlines registered
#   using the affine registration.
# - `mni-lr-superiorfrontal_syn.trk` contains the streamlines registered using
#   the SyN registration, prealigned with the affine registration.

save_tractogram(
    sft_in_t2_using_aff_reg,
    "mni-lr-superiorfrontal_aff_reg.trk",
    bbox_valid_check=False,
)

save_tractogram(
    sft_in_t2_using_syn, "mni-lr-superiorfrontal_syn.trk", bbox_valid_check=False
)

###############################################################################
# References
# ----------
#
# .. footbibliography::
#
dipy-1.11.0/doc/examples/streamline_tools.py000066400000000000000000000254151476546756600211150ustar00rootroot00000000000000"""
=========================================================
Connectivity Matrices, ROI Intersections and Density Maps
=========================================================

This example is meant to be an introduction to some of the streamline tools
available in DIPY_. Some of the functions covered in this example are
``target``, ``connectivity_matrix`` and ``density_map``. ``target`` allows one
to filter streamlines that either pass through or do not pass through some
region of the brain, ``connectivity_matrix`` groups and counts streamlines
based on where in the brain they begin and end, and finally, ``density_map``
counts the number of streamlines that pass through every voxel of some image.

To get started we'll need to have a set of streamlines to work with. We'll use
EuDX along with the CsaOdfModel to make some streamlines. Let's import the
modules and download the data we'll be using.
"""

import matplotlib.pyplot as plt
import numpy as np
from scipy.ndimage import binary_dilation

from dipy.core.gradients import gradient_table
from dipy.data import get_fnames
from dipy.direction import peaks
from dipy.io.gradients import read_bvals_bvecs
from dipy.io.image import load_nifti, load_nifti_data, save_nifti
from dipy.io.stateful_tractogram import Space, StatefulTractogram
from dipy.io.streamline import save_trk
from dipy.reconst import shm
from dipy.tracking import utils
from dipy.tracking.local_tracking import LocalTracking
from dipy.tracking.stopping_criterion import BinaryStoppingCriterion
from dipy.tracking.streamline import Streamlines
from dipy.viz import actor, colormap as cmap, window

###############################################################################
# We'll be using the Stanford HARDI dataset which consists of a single
# subject's diffusion, b-values and b-vectors, T1 image and some labels in the
# same space as the T1. We'll use the ``get_fnames`` function to download the
# files we need and set the file names to variables.

hardi_fname, hardi_bval_fname, hardi_bvec_fname = get_fnames(name="stanford_hardi")
label_fname = get_fnames(name="stanford_labels")
t1_fname = get_fnames(name="stanford_t1")

data, _, hardi_img = load_nifti(hardi_fname, return_img=True)
labels = load_nifti_data(label_fname)
t1_data = load_nifti_data(t1_fname)
bvals, bvecs = read_bvals_bvecs(hardi_bval_fname, hardi_bvec_fname)
gtab = gradient_table(bvals, bvecs=bvecs)

###############################################################################
# We've loaded a label map whose data we store in the array ``labels``: every
# integer value in ``labels`` represents an anatomical structure or tissue
# type [#]_. For this example, the image was created so that white matter
# voxels have values of either 1 or 2. We'll use ``peaks_from_model`` to apply
# the ``CsaOdfModel`` to each white matter voxel and estimate fiber
# orientations which we can use for tracking. We will also dilate this mask by
# 1 voxel to ensure streamlines reach the grey matter.

white_matter = binary_dilation((labels == 1) | (labels == 2))
csamodel = shm.CsaOdfModel(gtab, 6)
csapeaks = peaks.peaks_from_model(
    model=csamodel,
    data=data,
    sphere=peaks.default_sphere,
    relative_peak_threshold=0.8,
    min_separation_angle=45,
    mask=white_matter,
)

###############################################################################
# Now we can use EuDX to track all of the white matter. We define an identity
# matrix for the affine transformation [#]_ of the seeding locations. To keep
# things reasonably fast, we use ``density=1``, which will result in one seed
# per voxel. The stopping criterion, which determines when the tracking stops,
# is set so that tracking stops when a streamline exits the white matter.

affine = np.eye(4)
seeds = utils.seeds_from_mask(white_matter, affine, density=1)
stopping_criterion = BinaryStoppingCriterion(white_matter)

streamline_generator = LocalTracking(
    csapeaks, stopping_criterion, seeds, affine=affine, step_size=0.5
)
streamlines = Streamlines(streamline_generator)

###############################################################################
# The first of the tracking utilities we'll cover here is ``target``. This
# function takes a set of streamlines and a region of interest (ROI) and
# returns only those streamlines that pass through the ROI.
The ROI should be # an array such that the voxels that belong to the ROI are ``True`` and all # other voxels are ``False`` (this type of binary array is sometimes called a # mask). This function can also exclude all the streamlines that pass through # an ROI by setting the ``include`` flag to ``False``. In this example we'll # target the streamlines of the corpus callosum. Our ``labels`` array has a # sagittal slice of the corpus callosum identified by the label value 2. We'll # create an ROI mask from that label and create two sets of streamlines, # those that intersect with the ROI and those that don't. cc_slice = labels == 2 cc_streamlines = utils.target(streamlines, affine, cc_slice) cc_streamlines = Streamlines(cc_streamlines) other_streamlines = utils.target(streamlines, affine, cc_slice, include=False) other_streamlines = Streamlines(other_streamlines) assert len(other_streamlines) + len(cc_streamlines) == len(streamlines) ############################################################################### # We can use some of DIPY_'s visualization tools to display the ROI we targeted # above and all the streamlines that pass through that ROI. The ROI is the # yellow region near the center of the axial image. # Enables/disables interactive visualization interactive = False # Make display objects color = cmap.line_colors(cc_streamlines) cc_streamlines_actor = actor.line( cc_streamlines, colors=cmap.line_colors(cc_streamlines) ) cc_ROI_actor = actor.contour_from_roi(cc_slice, color=(1.0, 1.0, 0.0), opacity=0.5) vol_actor = actor.slicer(t1_data) vol_actor.display(x=40) vol_actor2 = vol_actor.copy() vol_actor2.display(z=35) # Add display objects to canvas scene = window.Scene() scene.add(vol_actor) scene.add(vol_actor2) scene.add(cc_streamlines_actor) scene.add(cc_ROI_actor) # Save figures window.record( scene=scene, n_frames=1, out_path="corpuscallosum_axial.png", size=(800, 800) ) if interactive: window.show(scene) scene.set_camera(position=[-1, 0, 0], focal_point=[0, 0, 0], view_up=[0, 0, 1]) window.record( scene=scene, n_frames=1, out_path="corpuscallosum_sagittal.png", size=(800, 800) ) if interactive: window.show(scene) ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # Corpus Callosum Axial and Corpus Callosum Sagittal # # # # Once we've targeted the corpus callosum ROI, we might want to find out which # regions of the brain are connected by these streamlines. To do this we can # use the ``connectivity_matrix`` function. This function takes a set of # streamlines and an array of labels as arguments. It returns the number of # streamlines that start and end at each pair of labels and it can return the # streamlines grouped by their endpoints. Notice that this function only # considers the endpoints of each streamline. M, grouping = utils.connectivity_matrix( cc_streamlines, affine, labels.astype(np.uint8), return_mapping=True, mapping_as_streamlines=True, ) M[:3, :] = 0 M[:, :3] = 0 ############################################################################### # We've set ``return_mapping`` and ``mapping_as_streamlines`` to ``True`` so # that ``connectivity_matrix`` returns all the streamlines in # ``cc_streamlines`` grouped by their endpoint. # # Because we're typically only interested in connections between gray matter # regions, and because the label 0 represents background and the labels 1 # and 2 represent white matter, we discard the first three rows and columns # of the connectivity matrix. 
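#
# Before displaying the matrix, we can also query it directly. As an
# illustrative addition (not part of the original example), the snippet below
# looks up the pair of labels connected by the largest number of streamlines:

most_connected = np.unravel_index(np.argmax(M), M.shape)
print(f"Most connected pair of labels: {most_connected}, "
      f"with {M[most_connected]} streamlines")

###############################################################################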
#
# We can now display this matrix using matplotlib. We display it using a log
# scale to make small values in the matrix easier to see.

plt.imshow(np.log1p(M), interpolation="nearest")
plt.savefig("connectivity.png")

###############################################################################
# .. rst-class:: centered small fst-italic fw-semibold
#
#    Connectivity of Corpus Callosum
#
#
#
# In our example tractogram there are more streamlines connecting regions 11
# and 54 than any other pair of regions. These labels represent the left and
# right superior frontal gyrus respectively. These two regions are large,
# close together, have lots of corpus callosum fibers and are easy to track,
# so this result should not be a surprise to anyone.
#
# However, the interpretation of streamline counts can be tricky. The
# relationship between the underlying biology and the streamline counts will
# depend on several factors, including how the tracking was done, and the
# correct way to interpret these kinds of connectivity matrices is still an
# open question in the diffusion imaging literature.
#
# The next function we'll demonstrate is ``density_map``. This function allows
# one to represent the spatial distribution of a track by counting the density
# of streamlines in each voxel. For example, let's take the track connecting
# the left and right superior frontal gyrus.

lr_superiorfrontal_track = grouping[11, 54]
shape = labels.shape
dm = utils.density_map(lr_superiorfrontal_track, affine, shape)

###############################################################################
# Let's save this density map and the streamlines so that they can be
# visualized together. In order to save the streamlines in a ".trk" file, we
# create a ``StatefulTractogram`` that records the reference image and the
# space (here, voxel space) in which the streamline coordinates are defined.

# Save density map
save_nifti("lr-superiorfrontal-dm.nii.gz", dm.astype("int16"), affine)

lr_sf_trk = Streamlines(lr_superiorfrontal_track)

# Save streamlines
sft = StatefulTractogram(lr_sf_trk, hardi_img, Space.VOX)
save_trk(sft, "lr-superiorfrontal.trk")

###############################################################################
# .. rubric:: Footnotes
#
# .. [#] The image `aparc-reduced.nii.gz`, whose data we load into the array
#        ``labels``, is a modified version of label map `aparc+aseg.mgz`
#        created by FreeSurfer_. The corpus callosum region is a combination
#        of the FreeSurfer labels 251-255. The remaining FreeSurfer labels
#        were re-mapped and reduced so that they lie between 0 and 88. To see
#        the FreeSurfer region, label and name, represented by each value, see
#        `label_info.txt` in `~/.dipy/stanford_hardi`.
# .. [#] An affine transformation is a mapping between two coordinate systems
#        that can represent scaling, rotation, shear, translation and
#        reflection. Affine transformations are often represented using a 4x4
#        matrix where the last row of the matrix is ``[0, 0, 0, 1]``.
dipy-1.11.0/doc/examples/surface_seed.py000066400000000000000000000100011476546756600201500ustar00rootroot00000000000000"""
========================================
Surface seeding for tractography
========================================

Surface seeding is a way to generate initial positions for tractography
from positions on a cortical surface :footcite:p:`StOnge2018`.
""" from fury.io import load_polydata from fury.utils import ( get_actor_from_polydata, get_polydata_triangles, get_polydata_vertices, normals_from_v_f, ) import numpy as np from dipy.data import get_fnames from dipy.tracking.mesh import ( random_coordinates_from_surface, seeds_from_surface_coordinates, ) from dipy.viz import actor, window ############################################################################### # Fetch and load a surface brain_lh = get_fnames(name="fury_surface") polydata = load_polydata(brain_lh) ############################################################################### # Extract the triangles and vertices triangles = get_polydata_triangles(polydata) vts = get_polydata_vertices(polydata) ############################################################################### # Display the surface # =============================================== # # First, create an actor from the polydata, to display in the scene scene = window.Scene() surface_actor = get_actor_from_polydata(polydata) scene.add(surface_actor) scene.set_camera(position=(-500, 0, 0), view_up=(0.0, 0.0, 1)) # Uncomment the line below to show to display the window # window.show(scene, size=(600, 600), reset_camera=False) window.record(scene=scene, out_path="surface_seed1.png", size=(600, 600)) ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # Initial cortical surface # # # Generate a list of seeding positions # ============================================================== # # Choose the number of seed nb_seeds = 100000 nb_triangles = len(triangles) ############################################################################### # Get a list of triangles indices and trilinear coordinates for each seed tri_idx, trilin_co = random_coordinates_from_surface(nb_triangles, nb_seeds) ############################################################################### # Get the 3d cartesian position from triangles indices and trilinear # coordinates seed_pts = seeds_from_surface_coordinates(triangles, vts, tri_idx, trilin_co) ############################################################################### # Compute normal and get the normal direction for each seeds normals = normals_from_v_f(vts, triangles) seed_n = seeds_from_surface_coordinates(triangles, normals, tri_idx, trilin_co) ############################################################################### # Create dot actor for seeds (blue) seed_actors = actor.dot(seed_pts, colors=(0, 0, 1), dot_size=4.0) ############################################################################### # Create line actors for seeds normals (green outside, red inside) normal_length = 0.5 normal_in = np.tile(seed_pts[:, np.newaxis, :], (1, 2, 1)) normal_out = np.tile(seed_pts[:, np.newaxis, :], (1, 2, 1)) normal_in[:, 0] -= seed_n * normal_length normal_out[:, 1] += seed_n * normal_length normal_in_actor = actor.line(normal_in, colors=(1, 0, 0)) normal_out_actor = actor.line(normal_out, colors=(0, 1, 0)) ############################################################################### # Visualise seeds and normals along the surface scene = window.Scene() scene.add(surface_actor) scene.add(seed_actors) scene.add(normal_in_actor) scene.add(normal_out_actor) scene.set_camera(position=(-500, 0, 0), view_up=(0.0, 0.0, 1)) # Uncomment the line below to show to display the window # window.show(scene, size=(600, 600), reset_camera=False) window.record(scene=scene, out_path="surface_seed2.png", 
size=(600, 600)) ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # Surface seeds with normal orientation # # # References # ---------- # # .. footbibliography:: # dipy-1.11.0/doc/examples/syn_registration_2d.py000066400000000000000000000207271476546756600215260ustar00rootroot00000000000000""" ========================================== Symmetric Diffeomorphic Registration in 2D ========================================== This example explains how to register 2D images using the Symmetric Normalization (SyN) algorithm proposed by :footcite:t:`Avants2008` (also implemented in the ANTs software :footcite:p:`Avants2009`) We will perform the classic Circle-To-C experiment for diffeomorphic registration """ import numpy as np import dipy.align.imwarp as imwarp from dipy.align.imwarp import SymmetricDiffeomorphicRegistration from dipy.align.metrics import CCMetric, SSDMetric from dipy.data import get_fnames from dipy.io.image import load_nifti_data from dipy.segment.mask import median_otsu from dipy.viz import regtools fname_moving = get_fnames(name="reg_o") fname_static = get_fnames(name="reg_c") moving = np.load(fname_moving) static = np.load(fname_static) ############################################################################### # To visually check the overlap of the static image with the transformed moving # image, we can plot them on top of each other with different channels to see # where the differences are located regtools.overlay_images( static, moving, title0="Static", title_mid="Overlay", title1="Moving", fname="input_images.png", ) ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # Input images. # # # # We want to find an invertible map that transforms the moving image (circle) # into the static image (the C letter). # # The first decision we need to make is what similarity metric is appropriate # for our problem. In this example we are using two binary images, so the Sum # of Squared Differences (SSD) is a good choice. dim = static.ndim metric = SSDMetric(dim) ############################################################################### # Now we define an instance of the registration class. The SyN algorithm uses # a multi-resolution approach by building a Gaussian Pyramid. We instruct the # registration instance to perform at most $[n_0, n_1, ..., n_k]$ iterations # at each level of the pyramid. The 0-th level corresponds to the finest # resolution. level_iters = [200, 100, 50, 25] sdr = SymmetricDiffeomorphicRegistration(metric, level_iters=level_iters, inv_iter=50) ############################################################################### # Now we execute the optimization, which returns a DiffeomorphicMap object, # that can be used to register images back and forth between the static and # moving domains mapping = sdr.optimize(static, moving) ############################################################################### # It is a good idea to visualize the resulting deformation map to make sure # the result is reasonable (at least, visually) regtools.plot_2d_diffeomorphic_map(mapping, delta=10, fname="diffeomorphic_map.png") ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # Deformed lattice under the resulting diffeomorphic map. 
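#
# The returned ``mapping`` stores dense forward and backward displacement
# fields. As an illustrative peek (an addition to this example; the
# ``forward``/``backward`` attribute names are assumed from DIPY's
# ``DiffeomorphicMap`` implementation), we can inspect their shapes:

print(f"Forward displacement field: {mapping.forward.shape}")
print(f"Backward displacement field: {mapping.backward.shape}")

###############################################################################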
# # # # Now let's warp the moving image and see if it gets similar to the static # image warped_moving = mapping.transform(moving, interpolation="linear") regtools.overlay_images( static, warped_moving, title0="Static", title_mid="Overlay", title1="Warped moving", fname="direct_warp_result.png", ) ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # Moving image transformed under the (direct) transformation in green on top # of the static image (in red). # # # # And we can also apply the inverse mapping to verify that the warped static # image is similar to the moving image warped_static = mapping.transform_inverse(static, interpolation="linear") regtools.overlay_images( warped_static, moving, title0="Warped static", title_mid="Overlay", title1="Moving", fname="inverse_warp_result.png", ) ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # Static image transformed under the (inverse) transformation in red on top # of the moving image (in green). # # # # Now let's register a couple of slices from a b0 image using the Cross # Correlation metric. Also, let's inspect the evolution of the registration. # To do this we will define a function that will be called by the registration # object at each stage of the optimization process. We will draw the current # warped images after finishing each resolution. def callback_CC(sdr, status): # Status indicates at which stage of the optimization we currently are # For now, we will only react at the end of each resolution of the scale # space if status == imwarp.RegistrationStages.SCALE_END: # get the current images from the metric wmoving = sdr.metric.moving_image wstatic = sdr.metric.static_image # draw the images on top of each other with different colors regtools.overlay_images( wmoving, wstatic, title0="Warped moving", title_mid="Overlay", title1="Warped static", ) ############################################################################### # Now we are ready to configure and run the registration. First load the data t1_name, b0_name = get_fnames(name="syn_data") data = load_nifti_data(b0_name) ############################################################################### # We first remove the skull from the b0 volume b0_mask, mask = median_otsu(data, median_radius=4, numpass=4) ############################################################################### # And select two slices to try the 2D registration static = b0_mask[:, :, 40] moving = b0_mask[:, :, 38] ############################################################################### # After loading the data, we instantiate the Cross-Correlation metric. The # metric receives three parameters: the dimension of the input images, the # standard deviation of the Gaussian Kernel to be used to regularize the # gradient and the radius of the window to be used for evaluating the local # normalized cross correlation. 
sigma_diff = 3.0
radius = 4
metric = CCMetric(2, sigma_diff=sigma_diff, radius=radius)

###############################################################################
# Let's use a scale space of 3 levels

level_iters = [100, 50, 25]
sdr = SymmetricDiffeomorphicRegistration(metric, level_iters=level_iters)
sdr.callback = callback_CC

###############################################################################
# And execute the optimization

mapping = sdr.optimize(static, moving)

warped = mapping.transform(moving)

###############################################################################
# We can see the effect of the warping by switching between the images before
# and after registration

regtools.overlay_images(
    static,
    moving,
    title0="Static",
    title_mid="Overlay",
    title1="Moving",
    fname="t1_slices_input.png",
)

###############################################################################
# .. rst-class:: centered small fst-italic fw-semibold
#
#    Input images.

regtools.overlay_images(
    static,
    warped,
    title0="Static",
    title_mid="Overlay",
    title1="Warped moving",
    fname="t1_slices_res.png",
)

###############################################################################
# .. rst-class:: centered small fst-italic fw-semibold
#
#    Moving image transformed under the (direct) transformation in green on top
#    of the static image (in red).
#
#
#
# And we can apply the inverse warping too

inv_warped = mapping.transform_inverse(static)

regtools.overlay_images(
    inv_warped,
    moving,
    title0="Warped static",
    title_mid="Overlay",
    title1="Moving",
    fname="t1_slices_res2.png",
)

###############################################################################
# .. rst-class:: centered small fst-italic fw-semibold
#
#    Static image transformed under the (inverse) transformation in red on top
#    of the moving image (in green).
#
#
#
# Finally, let's see the deformation

regtools.plot_2d_diffeomorphic_map(mapping, delta=5, fname="diffeomorphic_map_b0s.png")

###############################################################################
# .. rst-class:: centered small fst-italic fw-semibold
#
#    Deformed lattice under the resulting diffeomorphic map.
#
#
# References
# ----------
#
# .. footbibliography::
#
dipy-1.11.0/doc/examples/syn_registration_3d.py000066400000000000000000000151501476546756600215210ustar00rootroot00000000000000"""
==========================================
Symmetric Diffeomorphic Registration in 3D
==========================================

This example explains how to register 3D volumes using the Symmetric
Normalization (SyN) algorithm proposed by :footcite:t:`Avants2008` (also
implemented in the ANTs software :footcite:p:`Avants2009`)

We will register two 3D volumes from the same modality using SyN with the
Cross-Correlation (CC) metric.
""" import numpy as np from dipy.align import read_mapping, write_mapping from dipy.align.imaffine import AffineMap from dipy.align.imwarp import SymmetricDiffeomorphicRegistration from dipy.align.metrics import CCMetric from dipy.data import get_fnames from dipy.io.image import load_nifti from dipy.segment.mask import median_otsu from dipy.viz import regtools ############################################################################### # Let's fetch two b0 volumes, the first one will be the b0 from the Stanford # HARDI dataset hardi_fname, hardi_bval_fname, hardi_bvec_fname = get_fnames(name="stanford_hardi") stanford_b0, stanford_b0_affine = load_nifti(hardi_fname) stanford_b0 = np.squeeze(stanford_b0)[..., 0] ############################################################################### # The second one will be the same b0 we used for the 2D registration tutorial t1_fname, b0_fname = get_fnames(name="syn_data") syn_b0, syn_b0_affine = load_nifti(b0_fname) ############################################################################### # We first remove the skull from the b0's stanford_b0_masked, stanford_b0_mask = median_otsu( stanford_b0, median_radius=4, numpass=4 ) syn_b0_masked, syn_b0_mask = median_otsu(syn_b0, median_radius=4, numpass=4) static = stanford_b0_masked static_affine = stanford_b0_affine moving = syn_b0_masked moving_affine = syn_b0_affine ############################################################################### # Suppose we have already done a linear registration to roughly align the two # images pre_align = np.array( [ [1.02783543e00, -4.83019053e-02, -6.07735639e-02, -2.57654118e00], [4.34051706e-03, 9.41918267e-01, -2.66525861e-01, 3.23579799e01], [5.34288908e-02, 2.90262026e-01, 9.80820307e-01, -1.46216651e01], [0.00000000e00, 0.00000000e00, 0.00000000e00, 1.00000000e00], ] ) ############################################################################### # As we did in the 2D example, we would like to visualize (some slices of) the # two volumes by overlapping them over two channels of a color image. To do # that we need them to be sampled on the same grid, so let's first re-sample # the moving image on the static grid. We create an AffineMap to transform the # moving image towards the static image affine_map = AffineMap( pre_align, domain_grid_shape=static.shape, domain_grid2world=static_affine, codomain_grid_shape=moving.shape, codomain_grid2world=moving_affine, ) resampled = affine_map.transform(moving) ############################################################################### # plot the overlapped middle slices of the volumes regtools.overlay_slices( static, resampled, slice_index=None, slice_type=1, ltitle="Static", rtitle="Moving", fname="input_3d.png", ) ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # Static image in red on top of the pre-aligned moving image (in green). # # # # We want to find an invertible map that transforms the moving image into the # static image. We will use the Cross-Correlation metric metric = CCMetric(3) ############################################################################### # Now we define an instance of the registration class. The SyN algorithm uses # a multi-resolution approach by building a Gaussian Pyramid. We instruct the # registration object to perform at most $[n_0, n_1, ..., n_k]$ iterations at # each level of the pyramid. The 0-th level corresponds to the finest # resolution. 
level_iters = [10, 10, 5] sdr = SymmetricDiffeomorphicRegistration(metric, level_iters=level_iters) ############################################################################### # Execute the optimization, which returns a DiffeomorphicMap object, # that can be used to register images back and forth between the static and # moving domains. We provide the pre-aligning matrix that brings the moving # image closer to the static image mapping = sdr.optimize( static, moving, static_grid2world=static_affine, moving_grid2world=moving_affine, prealign=pre_align, ) ############################################################################### # Now let's warp the moving image and see if it gets similar to the static # image warped_moving = mapping.transform(moving) ############################################################################### # We plot the overlapped middle slices regtools.overlay_slices( static, warped_moving, slice_index=None, slice_type=1, ltitle="Static", rtitle="Warped moving", fname="warped_moving.png", ) ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # Moving image transformed under the (direct) transformation in green on top # of the static image (in red). # # # # And we can also apply the inverse mapping to verify that the warped static # image is similar to the moving image warped_static = mapping.transform_inverse(static) regtools.overlay_slices( warped_static, moving, slice_index=None, slice_type=1, ltitle="Warped static", rtitle="Moving", fname="warped_static.png", ) ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # Static image transformed under the (inverse) transformation in red on top of # the moving image (in green). Note that the moving image has a lower # resolution. # # # If you wish, you can also save the transformation to a file, so that it can # be applied to other images in the future. This can be done with the # `write_mapping` function. The data in the file will be organized with shape # (X, Y, Z, 3, 2), such that the forward mapping in each voxel is in # data[i, j, k, :, 0] and the backward mapping in each voxel is in # data[i, j, k, :, 1] write_mapping(mapping, "mapping.nii.gz") # To read the mapping back, use the `read_mapping` function saved_mapping = read_mapping("mapping.nii.gz", hardi_fname, b0_fname) ############################################################################### # References # ---------- # # .. footbibliography:: # dipy-1.11.0/doc/examples/tissue_classification.py000066400000000000000000000111341476546756600221150ustar00rootroot00000000000000""" ======================================================= Tissue Classification of a T1-weighted Structural Image ======================================================= This example explains how to segment a T1-weighted structural image by using Bayesian formulation. The observation model (likelihood term) is defined as a Gaussian distribution and a Markov Random Field (MRF) is used to model the a priori probability of context-dependent patterns of different tissue types of the brain. Expectation Maximization and Iterated Conditional Modes are used to find the optimal solution. Similar algorithms have been proposed by :footcite:t:`Zhang2001` and :footcite:p:`Avants2011` available in FAST-FSL and ANTS-atropos, respectively. 
Here we will use a T1-weighted image that has been previously skull-stripped
and bias field corrected.
"""

import time

import matplotlib.pyplot as plt
import numpy as np

from dipy.data import get_fnames
from dipy.io.image import load_nifti_data
from dipy.segment.tissue import TissueClassifierHMRF

###############################################################################
# First we fetch the T1 volume from the Syn dataset and determine its shape.

t1_fname, _, _ = get_fnames(name="tissue_data")
t1 = load_nifti_data(t1_fname)
print(f"t1.shape {t1.shape}")

###############################################################################
# We have fetched the T1 volume. Now we will look at the axial and coronal
# slices of the image.

fig = plt.figure()
a = fig.add_subplot(1, 2, 1)
img_ax = np.rot90(t1[..., 89])
imgplot = plt.imshow(img_ax, cmap="gray")
a.axis("off")
a.set_title("Axial")
a = fig.add_subplot(1, 2, 2)
img_cor = np.rot90(t1[:, 128, :])
imgplot = plt.imshow(img_cor, cmap="gray")
a.axis("off")
a.set_title("Coronal")
plt.savefig("t1_image.png", bbox_inches="tight", pad_inches=0)

###############################################################################
# .. rst-class:: centered small fst-italic fw-semibold
#
#    T1-weighted image of healthy adult.
#
#
# Now we will define the other two parameters for the segmentation algorithm.
# We will segment three classes, namely cerebrospinal fluid (CSF), white
# matter (WM) and gray matter (GM).

nclass = 3

###############################################################################
# Then, we set the smoothness factor of the segmentation. Good performance is
# achieved with values between 0 and 0.5.

beta = 0.1

###############################################################################
# We could also set the number of iterations. By default this parameter is set
# to 100 iterations, but most of the time the ICM (Iterated Conditional Modes)
# loop will converge before reaching the 100th iteration.
# After setting the necessary parameters, we can instantiate the class
# ``TissueClassifierHMRF`` and call its ``classify`` method with the
# parameters defined above to perform the segmentation task.
#
# Now we plot the resulting segmentation.

t0 = time.time()

hmrf = TissueClassifierHMRF()
initial_segmentation, final_segmentation, PVE = hmrf.classify(t1, nclass, beta)

total_time = time.time() - t0
print(f"Total time: {total_time}")

fig = plt.figure()
a = fig.add_subplot(1, 2, 1)
img_ax = np.rot90(final_segmentation[..., 89])
imgplot = plt.imshow(img_ax)
a.axis("off")
a.set_title("Axial")
a = fig.add_subplot(1, 2, 2)
img_cor = np.rot90(final_segmentation[:, 128, :])
imgplot = plt.imshow(img_cor)
a.axis("off")
a.set_title("Coronal")
plt.savefig("final_seg.png", bbox_inches="tight", pad_inches=0)

###############################################################################
# .. rst-class:: centered small fst-italic fw-semibold
#
#    Each tissue class is color coded separately, red for the WM, yellow for
#    the GM and light blue for the CSF.
#
#
# And we will also have a look at the probability maps for each tissue class.
fig = plt.figure()
a = fig.add_subplot(1, 3, 1)
img_ax = np.rot90(PVE[..., 89, 0])
imgplot = plt.imshow(img_ax, cmap="gray")
a.axis("off")
a.set_title("CSF")
a = fig.add_subplot(1, 3, 2)
img_ax = np.rot90(PVE[:, :, 89, 1])
imgplot = plt.imshow(img_ax, cmap="gray")
a.axis("off")
a.set_title("Gray Matter")
a = fig.add_subplot(1, 3, 3)
img_ax = np.rot90(PVE[:, :, 89, 2])
imgplot = plt.imshow(img_ax, cmap="gray")
a.axis("off")
a.set_title("White Matter")
plt.savefig("probabilities.png", bbox_inches="tight", pad_inches=0)
plt.show()

###############################################################################
# .. rst-class:: centered small fst-italic fw-semibold
#
#    These are the probability maps of each of the three tissue classes.
#
#
# References
# ----------
#
# .. footbibliography::
#
dipy-1.11.0/doc/examples/tissue_classification_dam.py000066400000000000000000000065461476546756600227470ustar00rootroot00000000000000"""
=====================================================
Tissue Classification using Diffusion MRI with DAM
=====================================================

This example demonstrates tissue classification of white matter (WM) and gray
matter (GM) from multi-shell diffusion MRI data using the Directional Average
Maps (DAM) proposed by :footcite:p:`Cheng2020`. DAM uses the diffusion
properties of the tissue to classify the voxels into WM and GM by fitting a
polynomial model to the diffusion signal. The process involves preprocessing
steps including skull-stripping with median_otsu, denoising with Patch2Self,
and then performing the tissue classification.

Let's start by loading the necessary modules:
"""

import matplotlib.pyplot as plt

from dipy.core.gradients import gradient_table
from dipy.data import get_fnames
from dipy.denoise.patch2self import patch2self
from dipy.io.gradients import read_bvals_bvecs
from dipy.io.image import load_nifti
from dipy.segment.mask import median_otsu
from dipy.segment.tissue import dam_classifier
from dipy.viz.plotting import image_mosaic

###############################################################################
# First we fetch the diffusion image, bvalues and bvectors from the cfin
# dataset.

fraw, fbval, fbvec, t1_fname = get_fnames(name="cfin_multib")

data, affine = load_nifti(fraw)
bvals, bvecs = read_bvals_bvecs(fbval, fbvec)
gtab = gradient_table(bvals, bvecs=bvecs)

###############################################################################
# After loading the diffusion data, we can apply the median_otsu algorithm to
# skull-strip the data and obtain a binary mask. We can then use the mask to
# denoise the data using the Patch2Self algorithm.

b0_mask, mask = median_otsu(data, median_radius=2, numpass=1, vol_idx=[0, 1])

denoised_arr = patch2self(b0_mask, bvals=bvals, b0_denoising=False)

###############################################################################
# Now we can use the DAM classifier to classify the voxels into WM and GM.
# It requires the denoised data and the b-values, and returns the WM and GM
# masks.
# It is important to note that the DAM classifier is a threshold-based
# classifier and the threshold values can be adjusted based on the data. The
# `wm_threshold` parameter controls the sensitivity of the classifier.
# For data like HCP, a threshold of 0.5 proves to be a good choice. For data
# like cfin, higher threshold values like 0.7 or 0.8 are more suitable.
wm_mask, gm_mask = dam_classifier(denoised_arr, bvals, wm_threshold=0.7)

###############################################################################
# Now we can visualize the WM and GM masks.

images = [
    data[:, :, data.shape[2] // 2, 0],  # DWI (b0)
    wm_mask[:, :, data.shape[2] // 2],  # White Matter Mask
    gm_mask[:, :, data.shape[2] // 2],  # Grey Matter Mask
]

ax_labels = ["DWI (b0)", "White Matter Mask", "Grey Matter Mask"]
ax_kwargs = [{"cmap": "gray"} for _ in images]

fig, ax = image_mosaic(
    images, ax_labels=ax_labels, ax_kwargs=ax_kwargs, figsize=(10, 5)
)

plt.show()

###############################################################################
# .. rst-class:: centered small fst-italic fw-semibold
#
#    Original B0 image (left), White matter (center) and gray matter (right)
#    are binary masks as obtained from DAM.
#
#
# References
# ----------
#
# .. footbibliography::
dipy-1.11.0/doc/examples/tracking_bootstrap_peaks.py000066400000000000000000000117451476546756600226200ustar00rootroot00000000000000"""
====================================================
Bootstrap and Closest Peak Direction Getters Example
====================================================

This example shows how choices in direction-getter impact fiber tracking
results by demonstrating the bootstrap direction getter (a type of
probabilistic tracking, as described in :footcite:p:`Berman2008`) and the
closest peak direction getter (a type of deterministic tracking)
:footcite:p:`Amirbekian2016`.

This example is an extension of the
:ref:`sphx_glr_examples_built_quick_start_tracking_introduction_eudx.py`
example. Let's start by loading the necessary modules for executing this
tutorial.
"""

from dipy.core.gradients import gradient_table
from dipy.data import get_fnames, small_sphere
from dipy.io.gradients import read_bvals_bvecs
from dipy.io.image import load_nifti, load_nifti_data
from dipy.io.stateful_tractogram import Space, StatefulTractogram
from dipy.io.streamline import save_trk
from dipy.reconst.csdeconv import ConstrainedSphericalDeconvModel, auto_response_ssst
from dipy.reconst.shm import CsaOdfModel
from dipy.tracking import utils
from dipy.tracking.stopping_criterion import ThresholdStoppingCriterion
from dipy.tracking.streamline import Streamlines
from dipy.tracking.tracker import bootstrap_tracking, closestpeak_tracking
from dipy.viz import actor, colormap, has_fury, window

# Enables/disables interactive visualization
interactive = False

hardi_fname, hardi_bval_fname, hardi_bvec_fname = get_fnames(name="stanford_hardi")
label_fname = get_fnames(name="stanford_labels")

data, affine, hardi_img = load_nifti(hardi_fname, return_img=True)
labels = load_nifti_data(label_fname)
bvals, bvecs = read_bvals_bvecs(hardi_bval_fname, hardi_bvec_fname)
gtab = gradient_table(bvals, bvecs=bvecs)

seed_mask = labels == 2
white_matter = (labels == 1) | (labels == 2)
seeds = utils.seeds_from_mask(seed_mask, affine, density=1)

###############################################################################
# Next, we fit the CSD model.

response, ratio = auto_response_ssst(gtab, data, roi_radii=10, fa_thr=0.7)
csd_model = ConstrainedSphericalDeconvModel(gtab, response, sh_order_max=6)
csd_fit = csd_model.fit(data, mask=white_matter)

###############################################################################
# We use the CSA fit to calculate GFA, which will serve as our stopping
# criterion.
csa_model = CsaOdfModel(gtab, sh_order_max=6) gfa = csa_model.fit(data, mask=white_matter).gfa stopping_criterion = ThresholdStoppingCriterion(gfa, 0.25) ############################################################################### # Next, we need to set up our two direction getters # # # Example #1: Bootstrap direction getter with CSD Model # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ boot_streamline_generator = bootstrap_tracking( seeds, stopping_criterion, affine, step_size=0.5, data=data, model=csd_model, max_angle=30.0, sphere=small_sphere, ) streamlines = Streamlines(boot_streamline_generator) sft = StatefulTractogram(streamlines, hardi_img, Space.RASMM) save_trk(sft, "tractogram_bootstrap_dg.trk") if has_fury: scene = window.Scene() scene.add(actor.line(streamlines, colors=colormap.line_colors(streamlines))) window.record(scene=scene, out_path="tractogram_bootstrap_dg.png", size=(800, 800)) if interactive: window.show(scene) ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # Corpus Callosum Bootstrap Probabilistic Direction Getter # # # We have created a bootstrapped probabilistic set of streamlines. If you # repeat the fiber tracking (keeping all inputs the same) you will NOT get # exactly the same set of streamlines. # # # # Example #2: Closest peak direction getter with CSD Model # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ pmf = csd_fit.odf(small_sphere).clip(min=0) peak_streamline_generator = closestpeak_tracking( seeds, stopping_criterion, affine, step_size=0.5, sf=pmf, max_angle=30.0, sphere=small_sphere, ) streamlines = Streamlines(peak_streamline_generator) sft = StatefulTractogram(streamlines, hardi_img, Space.RASMM) save_trk(sft, "closest_peak_dg_CSD.trk") if has_fury: scene = window.Scene() scene.add(actor.line(streamlines, colors=colormap.line_colors(streamlines))) window.record( scene=scene, out_path="tractogram_closest_peak_dg.png", size=(800, 800) ) if interactive: window.show(scene) ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # Corpus Callosum Closest Peak Deterministic Direction Getter # # # We have created a set of streamlines using the closest peak direction getter, # which is a type of deterministic tracking. If you repeat the fiber tracking # (keeping all inputs the same) you will get exactly the same set of # streamlines. # # # # References # ---------- # # .. footbibliography:: # dipy-1.11.0/doc/examples/tracking_deterministic.py000066400000000000000000000074371476546756600222660ustar00rootroot00000000000000""" ================================================= An introduction to the Deterministic Tractography ================================================= Deterministic tractography follows the trajectory of the most probable pathway within the tracking constraint (e.g. max angle). It follows the direction with the highest probability from a distribution, as opposed to the probabilistic tractography which draws the direction from the distribution. Therefore, deterministic tractography is equivalent to the probabilistic tractography returning always the maximum value of the distribution. Deterministic tractography is an alternative to EuDX deterministic tractography and unlike EuDX does not follow the peaks of the local models but uses the entire orientation distributions. 
This example is an extension of the :ref:`sphx_glr_examples_built_fiber_tracking_tracking_probabilistic.py` example. We begin by loading the data, fitting a Constrained Spherical Deconvolution (CSD) reconstruction model for the tractography and fitting the constant solid angle (CSA) reconstruction model to define the tracking mask (stopping criterion). """ from dipy.core.gradients import gradient_table from dipy.data import default_sphere, get_fnames from dipy.io.gradients import read_bvals_bvecs from dipy.io.image import load_nifti, load_nifti_data from dipy.io.stateful_tractogram import Space, StatefulTractogram from dipy.io.streamline import save_trk from dipy.reconst.csdeconv import ConstrainedSphericalDeconvModel, auto_response_ssst from dipy.tracking.stopping_criterion import BinaryStoppingCriterion from dipy.tracking.streamline import Streamlines from dipy.tracking.tracker import deterministic_tracking from dipy.tracking.utils import seeds_from_mask from dipy.viz import actor, colormap, has_fury, window # Enables/disables interactive visualization interactive = False hardi_fname, hardi_bval_fname, hardi_bvec_fname = get_fnames(name="stanford_hardi") label_fname = get_fnames(name="stanford_labels") data, affine, hardi_img = load_nifti(hardi_fname, return_img=True) labels = load_nifti_data(label_fname) bvals, bvecs = read_bvals_bvecs(hardi_bval_fname, hardi_bvec_fname) gtab = gradient_table(bvals, bvecs=bvecs) seed_mask = labels == 2 seeds = seeds_from_mask(seed_mask, affine, density=2) white_matter = (labels == 1) | (labels == 2) sc = BinaryStoppingCriterion(white_matter) response, ratio = auto_response_ssst(gtab, data, roi_radii=10, fa_thr=0.7) csd_model = ConstrainedSphericalDeconvModel(gtab, response, sh_order_max=6) csd_fit = csd_model.fit(data, mask=white_matter) ############################################################################### # The Fiber Orientation Distribution (FOD) of the CSD model estimates the # distribution of small fiber bundles within each voxel. This distribution # can be used for deterministic fiber tracking. As for probabilistic tracking, # there are many ways to provide those distributions to the deterministic # tractography. Here, the spherical harmonic representation of # the FOD is used. streamline_generator = deterministic_tracking( seeds, sc, affine, sh=csd_fit.shm_coeff, random_seed=1, sphere=default_sphere, max_angle=30, step_size=0.5, ) streamlines = Streamlines(streamline_generator) sft = StatefulTractogram(streamlines, hardi_img, Space.RASMM) save_trk(sft, "tractogram_deterministic.trk") if has_fury: scene = window.Scene() scene.add(actor.line(streamlines, colors=colormap.line_colors(streamlines))) window.record(scene=scene, out_path="tractogram_deterministic.png", size=(800, 800)) if interactive: window.show(scene) ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # Corpus Callosum using deterministic maximum direction getter dipy-1.11.0/doc/examples/tracking_disco_phantom.py000066400000000000000000000217051476546756600222440ustar00rootroot00000000000000""" ================================= Tractography on the DiSCo Phantom ================================= This example compares probabilistic, deterministic and parallel transport tractography (PTT) algorithms. 
See :ref:`sphx_glr_examples_built_fiber_tracking_tracking_probabilistic.py`, :ref:`sphx_glr_examples_built_fiber_tracking_tracking_deterministic.py` and :ref:`sphx_glr_examples_built_fiber_tracking_tracking_ptt.py` for detailed examples of those algorithms. """ import os import matplotlib.pyplot as plt import numpy as np from scipy.ndimage import binary_erosion from scipy.stats import pearsonr from dipy.core.gradients import gradient_table from dipy.data import default_sphere, get_fnames from dipy.io.image import load_nifti, load_nifti_data from dipy.io.stateful_tractogram import Space, StatefulTractogram from dipy.io.streamline import load_tractogram, save_trk from dipy.reconst.csdeconv import ConstrainedSphericalDeconvModel, auto_response_ssst from dipy.tracking.stopping_criterion import BinaryStoppingCriterion from dipy.tracking.streamline import Streamlines from dipy.tracking.tracker import ( deterministic_tracking, probabilistic_tracking, ptt_tracking, ) from dipy.tracking.utils import connectivity_matrix, seeds_from_mask from dipy.viz import actor, colormap, has_fury, window # Enables/disables interactive visualization interactive = False print("Downloading data...") ############################################################################### # Prepare the synthetic DiSCo data for tractography. The ground-truth # connectome will be used to evaluate tractography performance. fnames = get_fnames(name="disco1") disco1_fnames = [os.path.basename(f) for f in fnames] GT_connectome_fname = fnames[ disco1_fnames.index("DiSCo1_Connectivity_Matrix_Cross-Sectional_Area.txt") ] GT_connectome = np.loadtxt(GT_connectome_fname) # select the non-zero connections of the lower triangular part of the connectome connectome_mask = np.tril(np.ones(GT_connectome.shape), -1) > 0 # load the labels labels_fname = fnames[disco1_fnames.index("highRes_DiSCo1_ROIs.nii.gz")] labels, affine, labels_img = load_nifti(labels_fname, return_img=True) labels = labels.astype(int) print("Loading data...") tracks_fname = fnames[disco1_fnames.index("DiSCo1_Strands_Trajectories.tck")] GT_streams = load_tractogram(tracks_fname, reference=labels_img).streamlines if has_fury: scene = window.Scene() scene.add(actor.line(GT_streams, colors=colormap.line_colors(GT_streams))) window.record(scene=scene, out_path="tractogram_ground_truth.png", size=(800, 800)) if interactive: window.show(scene) plt.imshow(GT_connectome, origin="lower", cmap="viridis", interpolation="nearest") plt.axis("off") plt.savefig("connectome_ground_truth.png") plt.close() ############################################################################### # # .. rst-class:: centered small fst-italic fw-semibold # # DiSCo ground-truth trajectories (left) and connectivity matrix (right).
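###############################################################################
# As a quick optional sanity check (a small sketch, not required by the rest
# of this example), we can inspect the ground-truth data we just loaded: the
# number of strands and the number of non-zero connections selected by
# ``connectome_mask``.

print(f"Ground-truth streamlines: {len(GT_streams)}")
print(
    f"Non-zero ground-truth connections: "
    f"{np.count_nonzero(GT_connectome[connectome_mask])}"
)

###############################################################################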
# # # Tracking mask, seed positions and initial directions mask_fname = fnames[disco1_fnames.index("highRes_DiSCo1_mask.nii.gz")] mask = load_nifti_data(mask_fname) sc = BinaryStoppingCriterion(mask) voxel_size = np.ones(3) seed_fname = fnames[disco1_fnames.index("highRes_DiSCo1_ROIs-mask.nii.gz")] seed_mask = load_nifti_data(seed_fname) seed_mask = binary_erosion(seed_mask * mask, iterations=1) seeds = seeds_from_mask(seed_mask, affine, density=2) plt.imshow(seed_mask[:, :, 17], origin="lower", cmap="gray", interpolation="nearest") plt.axis("off") plt.title("Seeding Mask") plt.savefig("seeding_mask.png") plt.close() plt.imshow(mask[:, :, 17], origin="lower", cmap="gray", interpolation="nearest") plt.axis("off") plt.title("Tracking Mask") plt.savefig("tracking_mask.png") plt.close() ############################################################################### # # .. rst-class:: centered small fst-italic fw-semibold # # DiSCo seeding (left) and tracking (right) masks. # # # Compute ODFs data_fname = fnames[disco1_fnames.index("highRes_DiSCo1_DWI_RicianNoise-snr10.nii.gz")] data = load_nifti_data(data_fname) bvecs = fnames[disco1_fnames.index("DiSCo_gradients_dipy.bvecs")] bvals = fnames[disco1_fnames.index("DiSCo_gradients.bvals")] gtab = gradient_table(bvals=bvals, bvecs=bvecs) response, _ = auto_response_ssst(gtab, data, roi_radii=10, fa_thr=0.7) csd_model = ConstrainedSphericalDeconvModel(gtab, response, sh_order_max=8) csd_fit = csd_model.fit(data, mask=mask) ODFs = csd_fit.odf(default_sphere) if has_fury: scene = window.Scene() ODF_spheres = actor.odf_slicer( ODFs[:, :, 17:18, :], sphere=default_sphere, scale=2, norm=False, colormap="plasma", ) scene.add(ODF_spheres) window.record(scene=scene, out_path="GT_odfs.png", size=(600, 600)) if interactive: window.show(scene) ############################################################################### # # .. rst-class:: centered small fst-italic fw-semibold # # DiSCo ground-truth ODFs. # # # Perform deterministic tractography using 1 thread (cpu) print("Running Deterministic Tractography...") streamline_generator = deterministic_tracking( seeds, sc, affine, sf=ODFs, nbr_threads=1, random_seed=42, sphere=default_sphere ) det_streams = Streamlines(streamline_generator) sft = StatefulTractogram(det_streams, labels_img, Space.RASMM) save_trk(sft, "tractogram_disco_deterministic.trk") if has_fury: scene = window.Scene() scene.add(actor.line(det_streams, colors=colormap.line_colors(det_streams))) window.record( scene=scene, out_path="tractogram_disco_deterministic.png", size=(800, 800) ) if interactive: window.show(scene) # Compare the estimated connectivity with the ground-truth connectivity connectome = connectivity_matrix(det_streams, affine, labels)[1:, 1:] r, _ = pearsonr( GT_connectome[connectome_mask].flatten(), connectome[connectome_mask].flatten() ) print("DiSCo ground-truth correlation (deterministic tractography): ", r) plt.imshow(connectome, origin="lower", cmap="viridis", interpolation="nearest") plt.axis("off") plt.savefig("connectome_deterministic.png") plt.close() ############################################################################### # # .. rst-class:: centered small fst-italic fw-semibold # # DiSCo Deterministic tractogram and corresponding connectome. 
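###############################################################################
# Before moving on, a small optional check (a sketch assuming
# ``dipy.tracking.utils.length``): the length distribution of the
# deterministic streamlines, in mm, since tracking was performed in world
# coordinates.

from dipy.tracking.utils import length

det_lengths = np.array(list(length(det_streams)))
print(
    f"{len(det_streams)} deterministic streamlines, "
    f"median length {np.median(det_lengths):.1f} mm"
)

###############################################################################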
# # # Perform probabilistic tractography using 4 threads (cpus) print("Running Probabilistic Tractography...") streamline_generator = probabilistic_tracking( seeds, sc, affine, sf=ODFs, nbr_threads=4, random_seed=42, sphere=default_sphere ) prob_streams = Streamlines(streamline_generator) sft = StatefulTractogram(prob_streams, labels_img, Space.RASMM) save_trk(sft, "tractogram_disco_probabilistic.trk") if has_fury: scene = window.Scene() scene.add(actor.line(prob_streams, colors=colormap.line_colors(prob_streams))) window.record( scene=scene, out_path="tractogram_disco_probabilistic.png", size=(800, 800) ) if interactive: window.show(scene) # Compare the estimated connectivity with the ground-truth connectivity connectome = connectivity_matrix(prob_streams, affine, labels)[1:, 1:] r, _ = pearsonr( GT_connectome[connectome_mask].flatten(), connectome[connectome_mask].flatten() ) print("DiSCo ground-truth correlation (probabilistic tractography): ", r) plt.imshow(connectome, origin="lower", cmap="viridis", interpolation="nearest") plt.axis("off") plt.savefig("connectome_probabilistic.png") plt.close() ############################################################################### # # .. rst-class:: centered small fst-italic fw-semibold # # DiSCo Probabilistic tractogram and corresponding connectome. # # # Perform parallel transport tractography using all threads (cpus) print("Running Parallel Transport Tractography...") streamline_generator = ptt_tracking( seeds, sc, affine, sf=ODFs, nbr_threads=0, random_seed=42, sphere=default_sphere, max_angle=20, ) ptt_streams = Streamlines(streamline_generator) sft = StatefulTractogram(ptt_streams, labels_img, Space.RASMM) save_trk(sft, "tractogram_disco_ptt.trk") if has_fury: scene = window.Scene() scene.add(actor.line(ptt_streams, colors=colormap.line_colors(ptt_streams))) window.record(scene=scene, out_path="tractogram_disco_ptt.png", size=(800, 800)) if interactive: window.show(scene) # Compare the estimated connectivity with the ground-truth connectivity connectome = connectivity_matrix(ptt_streams, affine, labels)[1:, 1:] r, _ = pearsonr( GT_connectome[connectome_mask].flatten(), connectome[connectome_mask].flatten() ) print("DiSCo ground-truth correlation (PTT tractography): ", r) plt.imshow(connectome, origin="lower", cmap="viridis", interpolation="nearest") plt.axis("off") plt.savefig("connectome_ptt.png") plt.close() ############################################################################### # # .. rst-class:: centered small fst-italic fw-semibold # # DiSCo PTT tractogram and corresponding connectome. # # dipy-1.11.0/doc/examples/tracking_introduction_eudx.py000066400000000000000000000203761476546756600231660ustar00rootroot00000000000000""" ============================== Introduction to Basic Tracking ============================== Local fiber tracking is an approach used to model white matter fibers by creating streamlines from local directional information. The idea is as follows: if the local directionality of a tract/pathway segment is known, one can integrate along those directions to build a complete representation of that structure. Local fiber tracking is widely used in the field of diffusion MRI because it is simple and robust. In order to perform local fiber tracking, three things are needed: 1. A method for getting directions from a diffusion dataset. 2. A method for identifying when the tracking must stop. 3. A set of seeds from which to begin tracking.
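As a preview (only a sketch; the full calls appear further down in this example), these three ingredients map onto::

    csa_peaks = peaks_from_model(csa_model, data, default_sphere, ...)
    stopping_criterion = ThresholdStoppingCriterion(csa_peaks.gfa, 0.25)
    seeds = utils.seeds_from_mask(seed_mask, affine, density=[2, 2, 2])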
This example shows how to combine the 3 parts described above to create a tractography reconstruction from a diffusion data set. Let's begin by importing the necessary modules. """ import matplotlib.pyplot as plt from dipy.core.gradients import gradient_table from dipy.data import default_sphere, get_fnames from dipy.direction import peaks_from_model from dipy.io.gradients import read_bvals_bvecs from dipy.io.image import load_nifti, load_nifti_data from dipy.io.stateful_tractogram import Space, StatefulTractogram from dipy.io.streamline import save_trk from dipy.reconst.shm import CsaOdfModel from dipy.tracking import utils from dipy.tracking.stopping_criterion import ThresholdStoppingCriterion from dipy.tracking.streamline import Streamlines from dipy.tracking.tracker import eudx_tracking from dipy.viz import actor, colormap, has_fury, window ############################################################################### # Now, let's load an HARDI dataset from Stanford. If you have # not already downloaded this data set, the first time you run this example you # will need to be connected to the internet and this dataset will be downloaded # to your computer. # Enables/disables interactive visualization interactive = False hardi_fname, hardi_bval_fname, hardi_bvec_fname = get_fnames(name="stanford_hardi") label_fname = get_fnames(name="stanford_labels") data, affine, hardi_img = load_nifti(hardi_fname, return_img=True) labels = load_nifti_data(label_fname) bvals, bvecs = read_bvals_bvecs(hardi_bval_fname, hardi_bvec_fname) gtab = gradient_table(bvals, bvecs=bvecs) ############################################################################### # This dataset provides a label map in which all white matter tissues are # labeled either 1 or 2. Let's create a white matter mask to restrict tracking # to the white matter. white_matter = (labels == 1) | (labels == 2) ############################################################################### # Step 1: Getting directions from a diffusion dataset # --------------------------------------------------- # # The first thing we need to begin fiber tracking is a way of getting # directions from this diffusion data set. In order to do that, we can fit the # data to a Constant Solid Angle ODF Model. This model will estimate the # Orientation Distribution Function (ODF) at each voxel. The ODF is the # distribution of water diffusion as a function of direction. The peaks of an # ODF are good estimates for the orientation of tract segments at a point in # the image. Here, we use ``peaks_from_model`` to fit the data and calculate # the fiber directions in all voxels of the white matter. csa_model = CsaOdfModel(gtab, sh_order_max=6) csa_peaks = peaks_from_model( csa_model, data, default_sphere, relative_peak_threshold=0.8, min_separation_angle=45, mask=white_matter, ) ############################################################################### # For quality assurance we can also visualize a slice from the direction field # which we will use as the basis to perform the tracking. The visualization # will be done using the ``fury`` python package if has_fury: scene = window.Scene() scene.add( actor.peak_slicer( csa_peaks.peak_dirs, peaks_values=csa_peaks.peak_values, colors=None ) ) window.record(scene=scene, out_path="csa_direction_field.png", size=(900, 900)) if interactive: window.show(scene, size=(800, 800)) ############################################################################### # .. 
rst-class:: centered small fst-italic fw-semibold # # Direction Field (peaks) # # # Step 2: Identifying when the tracking must stop # ----------------------------------------------- # Next we need some way of restricting the fiber tracking to areas with good # directionality information. We've already created the white matter mask, # but we can go a step further and restrict fiber tracking to those areas where # the ODF shows significant restricted diffusion by thresholding on # the generalized fractional anisotropy (GFA). stopping_criterion = ThresholdStoppingCriterion(csa_peaks.gfa, 0.25) ############################################################################### # Again, for quality assurance, we can also visualize a slice of the GFA and # the resulting tracking mask. sli = csa_peaks.gfa.shape[2] // 2 plt.figure("GFA") plt.subplot(1, 2, 1).set_axis_off() plt.imshow(csa_peaks.gfa[:, :, sli].T, cmap="gray", origin="lower") plt.subplot(1, 2, 2).set_axis_off() plt.imshow((csa_peaks.gfa[:, :, sli] > 0.25).T, cmap="gray", origin="lower") plt.savefig("gfa_tracking_mask.png") ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # An example of a tracking mask derived from the generalized fractional # anisotropy (GFA). # # # # Step 3: Defining a set of seeds from which to begin tracking # ------------------------------------------------------------ # Before we can begin tracking, we need to specify where to "seed" (begin) # the fiber tracking. Generally, the seeds chosen will depend on the pathways # one is interested in modeling. In this example, we'll use a # $2 \times 2 \times 2$ grid of seeds per voxel, in a sagittal slice of the # corpus callosum. Tracking from this region will give us a model of the # corpus callosum tract. This slice has label value ``2`` in the labels image. seed_mask = labels == 2 seeds = utils.seeds_from_mask(seed_mask, affine, density=[2, 2, 2]) ############################################################################### # Finally, we can bring it all together using ``eudx_tracking`` with # the EuDX algorithm :footcite:p:`Garyfallidis2012b`. ``EuDX`` is a fast # algorithm that we use here to generate streamlines. It is the default option # when the output of ``peaks_from_model`` is provided directly # to ``eudx_tracking``. # Initialization of eudx_tracking. The computation happens in the next step. streamlines_generator = eudx_tracking( seeds, stopping_criterion, affine, step_size=0.5, pam=csa_peaks ) # Generate streamlines object streamlines = Streamlines(streamlines_generator) ############################################################################### # We will then display the resulting streamlines using the ``fury`` # python package. if has_fury: # Prepare the display objects. color = colormap.line_colors(streamlines) streamlines_actor = actor.line( streamlines, colors=colormap.line_colors(streamlines) ) # Create the 3D display. scene = window.Scene() scene.add(streamlines_actor) # Save still images for this static example. Or for interactivity use window.record(scene=scene, out_path="tractogram_EuDX.png", size=(800, 800)) if interactive: window.show(scene) ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # Corpus Callosum using EuDX # # # We've created a deterministic set of streamlines using the EuDX algorithm.
# This is called deterministic because if you repeat the fiber tracking # (keeping all the inputs the same) you will get exactly the same set of # streamlines. We can save the streamlines as a Trackvis file so it can be # loaded into other software for visualization or further analysis. sft = StatefulTractogram(streamlines, hardi_img, Space.RASMM) save_trk(sft, "tractogram_EuDX.trk") ############################################################################### # References # ---------- # # .. footbibliography:: # dipy-1.11.0/doc/examples/tracking_pft.py000066400000000000000000000152601476546756600202050ustar00rootroot00000000000000""" =============================== Particle Filtering Tractography =============================== Particle Filtering Tractography (PFT) :footcite:p:`Girard2014` uses tissue partial volume estimation (PVE) to reconstruct trajectories connecting the gray matter, and not incorrectly stopping in the white matter or in the cerebrospinal fluid. It relies on a stopping criterion that identifies the tissue where the streamline stopped. If the streamline correctly stopped in the gray matter, the trajectory is kept. If the streamline incorrectly stopped in the white matter or in the cerebrospinal fluid, PFT uses anatomical information to find an alternative streamline segment to extend the trajectory. When this segment is found, the tractography continues until the streamline correctly stops in the gray matter. PFT finds an alternative streamline segment whenever the stopping criterion returns a position classified as 'INVALIDPOINT'. This example is an extension of the :ref:`sphx_glr_examples_built_fiber_tracking_tracking_probabilistic.py` and :ref:`sphx_glr_examples_built_fiber_tracking_tracking_stopping_criterion.py` examples. We begin by loading the data, fitting a Constrained Spherical Deconvolution (CSD) reconstruction model, creating the probabilistic direction getter and defining the seeds.
""" import numpy as np from dipy.core.gradients import gradient_table from dipy.data import default_sphere, get_fnames from dipy.io.gradients import read_bvals_bvecs from dipy.io.image import load_nifti, load_nifti_data from dipy.io.stateful_tractogram import Space, StatefulTractogram from dipy.io.streamline import save_trk from dipy.reconst.csdeconv import ConstrainedSphericalDeconvModel, auto_response_ssst from dipy.tracking import utils from dipy.tracking.stopping_criterion import CmcStoppingCriterion from dipy.tracking.streamline import Streamlines from dipy.tracking.tracker import pft_tracking, probabilistic_tracking from dipy.viz import actor, colormap, has_fury, window # Enables/disables interactive visualization interactive = False hardi_fname, hardi_bval_fname, hardi_bvec_fname = get_fnames(name="stanford_hardi") label_fname = get_fnames(name="stanford_labels") f_pve_csf, f_pve_gm, f_pve_wm = get_fnames(name="stanford_pve_maps") data, affine, hardi_img = load_nifti(hardi_fname, return_img=True) labels = load_nifti_data(label_fname) bvals, bvecs = read_bvals_bvecs(hardi_bval_fname, hardi_bvec_fname) gtab = gradient_table(bvals, bvecs=bvecs) pve_csf_data = load_nifti_data(f_pve_csf) pve_gm_data = load_nifti_data(f_pve_gm) pve_wm_data, _, voxel_size = load_nifti(f_pve_wm, return_voxsize=True) shape = labels.shape response, ratio = auto_response_ssst(gtab, data, roi_radii=10, fa_thr=0.7) csd_model = ConstrainedSphericalDeconvModel(gtab, response) csd_fit = csd_model.fit(data, mask=pve_wm_data) seed_mask = labels == 2 seed_mask[pve_wm_data < 0.5] = 0 seeds = utils.seeds_from_mask(seed_mask, affine, density=2) ############################################################################### # CMC/ACT Stopping Criterion # ========================== # Continuous map criterion (CMC) :footcite:p:`Girard2014` and # Anatomically-constrained tractography (ACT) :footcite:p:`Smith2012` both use # PVEs information from anatomical images to determine when the tractography # stops. Both stopping criterion use a trilinear interpolation at the tracking # position. CMC stopping criterion uses a probability derived from the PVE maps # to determine if the streamline reaches a 'valid' or 'invalid' region. ACT uses # a fixed threshold on the PVE maps. Both stopping criterion can be used in # conjunction with PFT. In this example, we used CMC. voxel_size = np.average(voxel_size[1:4]) step_size = 0.2 cmc_criterion = CmcStoppingCriterion.from_pve( pve_wm_data, pve_gm_data, pve_csf_data, step_size=step_size, average_voxel_size=voxel_size, ) ############################################################################### # Particle Filtering Tractography # =============================== # `pft_back_tracking_dist` is the distance in mm to backtrack when the # tractography incorrectly stops in the WM or CSF. `pft_front_tracking_dist` # is the distance in mm to track after the stopping event using PFT. # # The `particle_count` parameter is the number of samples used in the particle # filtering algorithm. # # `min_wm_pve_before_stopping` controls when the tracking can stop in the GM. # The tractography must reaches a position where WM_pve >= # `min_wm_pve_before_stopping` before stopping in the GM. As long as the # condition is not reached and WM_pve > 0, the tractography will continue. # It is particularlyusefull to set this parameter > 0.5 when seeding # tractography at the WM-GM interface or in the sub-cortical GM. 
It allows # streamlines to exit the seeding voxels until they reach the deep white # matter where WM_pve > `min_wm_pve_before_stopping`. pft_streamline_gen = pft_tracking( seeds, cmc_criterion, affine, max_cross=1, step_size=step_size, max_len=1000, pft_back_tracking_dist=2, pft_front_tracking_dist=1, particle_count=15, return_all=False, min_wm_pve_before_stopping=1, max_angle=20.0, sphere=default_sphere, sh=csd_fit.shm_coeff, ) streamlines = Streamlines(pft_streamline_gen) sft = StatefulTractogram(streamlines, hardi_img, Space.RASMM) save_trk(sft, "tractogram_pft.trk") if has_fury: scene = window.Scene() scene.add(actor.line(streamlines, colors=colormap.line_colors(streamlines))) window.record(scene=scene, out_path="tractogram_pft.png", size=(800, 800)) if interactive: window.show(scene) ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # Corpus Callosum using particle filtering tractography # Local Probabilistic Tractography prob_streamline_generator = probabilistic_tracking( seeds, cmc_criterion, affine, step_size=step_size, max_len=1000, return_all=False, sh=csd_fit.shm_coeff, max_angle=20.0, sphere=default_sphere, ) streamlines = Streamlines(prob_streamline_generator) sft = StatefulTractogram(streamlines, hardi_img, Space.RASMM) save_trk(sft, "tractogram_probabilistic_cmc.trk") if has_fury: scene = window.Scene() scene.add(actor.line(streamlines, colors=colormap.line_colors(streamlines))) window.record( scene=scene, out_path="tractogram_probabilistic_cmc.png", size=(800, 800) ) if interactive: window.show(scene) ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # Corpus Callosum using probabilistic tractography # # # # References # ---------- # # .. footbibliography:: # dipy-1.11.0/doc/examples/tracking_probabilistic.py000066400000000000000000000152311476546756600222400ustar00rootroot00000000000000""" ================================================= An introduction to the Probabilistic Tractography ================================================= Probabilistic fiber tracking is a way of reconstructing white matter connections using diffusion MR imaging. Like deterministic fiber tracking, the probabilistic approach follows the trajectory of a possible pathway step by step starting at a seed, however, unlike deterministic tracking, the tracking direction at each point along the path is chosen at random from a distribution. The distribution at each point is different and depends on the observed diffusion data at that point. The distribution of tracking directions at each point can be represented as a probability mass function (PMF) if the possible tracking directions are restricted to discrete numbers of well distributed points on a sphere. This example is an extension of the :ref:`sphx_glr_examples_built_quick_start_tracking_introduction_eudx.py` example. We'll begin by repeating a few steps from that example, loading the data and fitting a Constrained Spherical Deconvolution (CSD) model. 
""" from dipy.core.gradients import gradient_table from dipy.data import default_sphere, get_fnames from dipy.direction import peaks_from_model from dipy.io.gradients import read_bvals_bvecs from dipy.io.image import load_nifti, load_nifti_data from dipy.io.stateful_tractogram import Space, StatefulTractogram from dipy.io.streamline import save_trk from dipy.reconst.csdeconv import ConstrainedSphericalDeconvModel, auto_response_ssst from dipy.tracking.stopping_criterion import BinaryStoppingCriterion from dipy.tracking.streamline import Streamlines from dipy.tracking.tracker import probabilistic_tracking from dipy.tracking.utils import seeds_from_mask from dipy.viz import actor, colormap, has_fury, window # Enables/disables interactive visualization interactive = False hardi_fname, hardi_bval_fname, hardi_bvec_fname = get_fnames(name="stanford_hardi") label_fname = get_fnames(name="stanford_labels") data, affine, hardi_img = load_nifti(hardi_fname, return_img=True) labels = load_nifti_data(label_fname) bvals, bvecs = read_bvals_bvecs(hardi_bval_fname, hardi_bvec_fname) gtab = gradient_table(bvals, bvecs=bvecs) seed_mask = labels == 2 seeds = seeds_from_mask(seed_mask, affine, density=2) white_matter = (labels == 1) | (labels == 2) sc = BinaryStoppingCriterion(white_matter) response, ratio = auto_response_ssst(gtab, data, roi_radii=10, fa_thr=0.7) csd_model = ConstrainedSphericalDeconvModel(gtab, response, sh_order_max=6) csd_fit = csd_model.fit(data, mask=white_matter) ############################################################################### # The Fiber Orientation Distribution (FOD) of the CSD model estimates the # distribution of small fiber bundles within each voxel. We can use this # distribution for probabilistic fiber tracking. One way to do this is to # represent the FOD using a discrete sphere. This discrete FOD can be used by # ``probabilistic_tracking`` as a PMF (sf or spherical function) for sampling # tracking directions. fod = csd_fit.odf(default_sphere) streamline_generator = probabilistic_tracking( seeds, sc, affine, sf=fod, random_seed=1, sphere=default_sphere, max_angle=20, step_size=0.2, ) streamlines = Streamlines(streamline_generator) sft = StatefulTractogram(streamlines, hardi_img, Space.RASMM) save_trk(sft, "tractogram_probabilistic_sf.trk") if has_fury: scene = window.Scene() scene.add(actor.line(streamlines, colors=colormap.line_colors(streamlines))) window.record( scene=scene, out_path="tractogram_probabilistic_sf.png", size=(800, 800) ) if interactive: window.show(scene) ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # Corpus Callosum using probabilistic tractography from PMF # # # # One disadvantage of using a discrete PMF to represent possible tracking # directions is that it tends to take up a lot of memory (RAM). The size of the # PMF, the FOD in this case, must be equal to the number of possible tracking # directions on the hemisphere, and every voxel has a unique PMF. In this case # the data is ``(81, 106, 76)`` and ``small_sphere`` has 181 directions so the # FOD is ``(81, 106, 76, 181)``. One way to avoid sampling the PMF and holding # it in memory is to use directly from the spherical # harmonic (SH) representation of the FOD. By using this approach, we can also # use a larger sphere, like ``default_sphere`` which has 362 directions on the # hemisphere, without having to worry about memory limitations. 
streamline_generator = probabilistic_tracking( seeds, sc, affine, sh=csd_fit.shm_coeff, random_seed=1, sphere=default_sphere, max_angle=20, step_size=0.2, ) streamlines = Streamlines(streamline_generator) sft = StatefulTractogram(streamlines, hardi_img, Space.RASMM) save_trk(sft, "tractogram_probabilistic_sh.trk") if has_fury: scene = window.Scene() scene.add(actor.line(streamlines, colors=colormap.line_colors(streamlines))) window.record( scene=scene, out_path="tractogram_probabilistic_sh.png", size=(800, 800) ) if interactive: window.show(scene) ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # Corpus Callosum using probabilistic tractography from SH # # # # Not all model fits have the ``shm_coeff`` attribute because not all models # use this basis to represent the data internally. However we can fit the ODF # of any model to the spherical harmonic basis using the ``peaks_from_model`` # function. peaks = peaks_from_model( csd_model, data, default_sphere, 0.5, 25, mask=white_matter, return_sh=True, parallel=True, num_processes=1, ) fod_coeff = peaks.shm_coeff streamline_generator = probabilistic_tracking( seeds, sc, affine, sh=fod_coeff, random_seed=1, sphere=default_sphere, max_angle=20, step_size=0.2, ) streamlines = Streamlines(streamline_generator) sft = StatefulTractogram(streamlines, hardi_img, Space.RASMM) save_trk(sft, "tractogram_probabilistic_sh_pfm.trk") if has_fury: scene = window.Scene() scene.add(actor.line(streamlines, colors=colormap.line_colors(streamlines))) window.record( scene=scene, out_path="tractogram_probabilistic_sh_pfm.png", size=(800, 800) ) if interactive: window.show(scene) ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # Corpus Callosum using probabilistic tractography from SH # (peaks_from_model) dipy-1.11.0/doc/examples/tracking_ptt.py000066400000000000000000000052621476546756600202240ustar00rootroot00000000000000""" =============================== Parallel Transport Tractography =============================== Parallel Transport Tractography (PTT) :footcite:p:`Aydogan2021` Let's start by importing the necessary modules. 
""" from dipy.core.gradients import gradient_table from dipy.data import default_sphere, get_fnames from dipy.io.gradients import read_bvals_bvecs from dipy.io.image import load_nifti, load_nifti_data from dipy.io.stateful_tractogram import Space, StatefulTractogram from dipy.io.streamline import save_trk from dipy.reconst.csdeconv import ConstrainedSphericalDeconvModel, auto_response_ssst from dipy.tracking.stopping_criterion import BinaryStoppingCriterion from dipy.tracking.streamline import Streamlines from dipy.tracking.tracker import ptt_tracking from dipy.tracking.utils import seeds_from_mask from dipy.viz import actor, colormap, has_fury, window # Enables/disables interactive visualization interactive = False hardi_fname, hardi_bval_fname, hardi_bvec_fname = get_fnames(name="stanford_hardi") label_fname = get_fnames(name="stanford_labels") data, affine, hardi_img = load_nifti(hardi_fname, return_img=True) labels = load_nifti_data(label_fname) bvals, bvecs = read_bvals_bvecs(hardi_bval_fname, hardi_bvec_fname) gtab = gradient_table(bvals, bvecs=bvecs) seed_mask = labels == 2 seeds = seeds_from_mask(seed_mask, affine, density=2) white_matter = (labels == 1) | (labels == 2) sc = BinaryStoppingCriterion(white_matter) response, ratio = auto_response_ssst(gtab, data, roi_radii=10, fa_thr=0.7) csd_model = ConstrainedSphericalDeconvModel(gtab, response, sh_order_max=6) csd_fit = csd_model.fit(data, mask=white_matter) ############################################################################### # Prepare the Parallel Transport Tractography using the fiber ODF (FOD) # obtained with CSD. # Start the local tractography using ``ptt_tracking``. fod = csd_fit.odf(default_sphere) streamline_generator = ptt_tracking( seeds, sc, affine, sf=fod, random_seed=1, sphere=default_sphere, max_angle=20, step_size=0.5, ) streamlines = Streamlines(streamline_generator) sft = StatefulTractogram(streamlines, hardi_img, Space.RASMM) save_trk(sft, "tractogram_ptt.trk") if has_fury: scene = window.Scene() scene.add(actor.line(streamlines, colors=colormap.line_colors(streamlines))) window.record(scene=scene, out_path="tractogram_ptt.png", size=(800, 800)) if interactive: window.show(scene) ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # Corpus Callosum using ptt direction getter from PMF # # # # References # ---------- # # .. footbibliography:: # dipy-1.11.0/doc/examples/tracking_rumba.py000066400000000000000000000125301476546756600205170ustar00rootroot00000000000000""" ============================================================================ Tracking with Robust Unbiased Model-BAsed Spherical Deconvolution (RUMBA-SD) ============================================================================ Here, we demonstrate fiber tracking using a probabilistic direction getter and RUMBA-SD, a model introduced in :footcite:p:`CanalesRodriguez2015`. This model adapts Richardson-Lucy deconvolution by assuming Rician or Noncentral Chi noise instead of Gaussian, which more accurately reflects the noise from MRI scanners (see also :ref:`sphx_glr_examples_built_reconstruction_reconst_rumba.py`). This tracking tutorial is an extension on :ref:`sphx_glr_examples_built_fiber_tracking_tracking_probabilistic.py`. We start by loading sample data and identifying a fiber response function. 
""" import matplotlib.pyplot as plt from numpy.linalg import inv from dipy.core.gradients import gradient_table from dipy.data import get_fnames, small_sphere from dipy.io.gradients import read_bvals_bvecs from dipy.io.image import load_nifti, load_nifti_data from dipy.io.stateful_tractogram import Space, StatefulTractogram from dipy.io.streamline import save_trk from dipy.reconst.csdeconv import auto_response_ssst from dipy.reconst.rumba import RumbaSDModel from dipy.tracking import utils from dipy.tracking.stopping_criterion import ThresholdStoppingCriterion from dipy.tracking.streamline import Streamlines, transform_streamlines from dipy.tracking.tracker import probabilistic_tracking from dipy.viz import actor, colormap, window # Enables/disables interactive visualization interactive = False hardi_fname, hardi_bval_fname, hardi_bvec_fname = get_fnames(name="stanford_hardi") label_fname = get_fnames(name="stanford_labels") t1_fname = get_fnames(name="stanford_t1") data, affine, hardi_img = load_nifti(hardi_fname, return_img=True) labels = load_nifti_data(label_fname) t1_data, t1_aff = load_nifti(t1_fname) bvals, bvecs = read_bvals_bvecs(hardi_bval_fname, hardi_bvec_fname) gtab = gradient_table(bvals, bvecs=bvecs) seed_mask = labels == 2 white_matter = (labels == 1) | (labels == 2) seeds = utils.seeds_from_mask(seed_mask, affine, density=2) response, ratio = auto_response_ssst(gtab, data, roi_radii=10, fa_thr=0.7) sphere = small_sphere ############################################################################### # We can now initialize a `RumbaSdModel` model and fit it globally by setting # `voxelwise` to `False`. For this example, TV regularization (`use_tv`) will # be turned off for efficiency, although its usage can provide more coherent # results in fiber tracking. The fit will take about 5 minutes to complete. rumba = RumbaSDModel( gtab, wm_response=response[0], n_iter=200, voxelwise=False, use_tv=False, sphere=sphere, ) rumba_fit = rumba.fit(data, mask=white_matter) odf = rumba_fit.odf() # fODF f_wm = rumba_fit.f_wm # white matter volume fractions ############################################################################### # To establish stopping criterion, a common technique is to use the Generalized # Fractional Anisotropy (GFA). One point of caution is that RUMBA-SD by default # separates the fODF from an isotropic compartment. This can bias GFA results # computed on the fODF, although it will still generally be an effective # criterion. # # However, an alternative stopping criterion that takes advantage of this # feature is to use RUMBA-SD's white matter volume fraction map. stopping_criterion = ThresholdStoppingCriterion(f_wm, 0.25) ############################################################################### # We can visualize a slice of this mask. sli = f_wm.shape[2] // 2 plt.figure() plt.subplot(1, 2, 1).set_axis_off() plt.imshow(f_wm[:, :, sli].T, cmap="gray", origin="lower") plt.subplot(1, 2, 2).set_axis_off() plt.imshow((f_wm[:, :, sli] > 0.25).T, cmap="gray", origin="lower") plt.savefig("f_wm_tracking_mask.png") ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # White matter volume fraction slice # # # # These discrete fODFs can be used as a PMF in the # `ProbabilisticDirectionGetter` for sampling tracking directions. The PMF # must be strictly non-negative; RUMBA-SD already adheres to this constraint # so no further manipulation of the fODFs is necessary. 
streamline_generator = probabilistic_tracking( seeds, stopping_criterion, affine, step_size=0.5, max_angle=30.0, sphere=sphere, sf=odf, ) streamlines = Streamlines(streamline_generator) color = colormap.line_colors(streamlines) streamlines_actor = actor.streamtube( list(transform_streamlines(streamlines, inv(t1_aff))), colors=color, linewidth=0.1 ) vol_actor = actor.slicer(t1_data) vol_actor.display(x=40) vol_actor2 = vol_actor.copy() vol_actor2.display(z=35) scene = window.Scene() scene.add(vol_actor) scene.add(vol_actor2) scene.add(streamlines_actor) if interactive: window.show(scene) window.record( scene=scene, out_path="tractogram_probabilistic_rumba.png", size=(800, 800) ) sft = StatefulTractogram(streamlines, hardi_img, Space.RASMM) save_trk(sft, "tractogram_probabilistic_rumba.trk") ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # RUMBA-SD tractogram # # # # References # ---------- # # .. footbibliography:: # dipy-1.11.0/doc/examples/tracking_sfm.py000066400000000000000000000141751476546756600202030ustar00rootroot00000000000000""" ======================================= Tracking with the Sparse Fascicle Model ======================================= Tracking requires a per-voxel model. Here, the model is the Sparse Fascicle Model (SFM), described in :footcite:p:`Rokem2015`. This model reconstructs the diffusion signal as a combination of the signals from different fascicles (see also :ref:`sphx_glr_examples_built_reconstruction_reconst_sfm.py`). """ from numpy.linalg import inv from dipy.core.gradients import gradient_table from dipy.data import get_fnames, get_sphere from dipy.direction.peaks import peaks_from_model from dipy.io.gradients import read_bvals_bvecs from dipy.io.image import load_nifti, load_nifti_data from dipy.io.stateful_tractogram import Space, StatefulTractogram from dipy.io.streamline import save_trk from dipy.reconst import sfm from dipy.reconst.csdeconv import auto_response_ssst from dipy.tracking import utils from dipy.tracking.stopping_criterion import ThresholdStoppingCriterion from dipy.tracking.streamline import ( Streamlines, select_random_set_of_streamlines, transform_streamlines, ) from dipy.tracking.tracker import eudx_tracking from dipy.viz import actor, colormap, has_fury, window # Enables/disables interactive visualization interactive = False ############################################################################### # To begin, we read the Stanford HARDI data set into memory: hardi_fname, hardi_bval_fname, hardi_bvec_fname = get_fnames(name="stanford_hardi") label_fname = get_fnames(name="stanford_labels") data, affine, hardi_img = load_nifti(hardi_fname, return_img=True) labels = load_nifti_data(label_fname) bvals, bvecs = read_bvals_bvecs(hardi_bval_fname, hardi_bvec_fname) gtab = gradient_table(bvals, bvecs=bvecs) ############################################################################### # This data set provides a label map (generated using FreeSurfer_), in which the # white matter voxels are labeled as either 1 or 2: white_matter = (labels == 1) | (labels == 2) ############################################################################### # The first step in tracking is generating a model from which tracking # directions can be extracted in every voxel.
# For the SFM, this requires first that we define a canonical response function # that will be used to deconvolve the signal in every voxel response, ratio = auto_response_ssst(gtab, data, roi_radii=10, fa_thr=0.7) ############################################################################### # We initialize an SFM model object, using this response function and using # the default sphere (362 vertices, symmetrically distributed on the surface # of the sphere): sphere = get_sphere() sf_model = sfm.SparseFascicleModel( gtab, sphere=sphere, l1_ratio=0.5, alpha=0.001, response=response[0] ) ############################################################################### # We fit this model to the data in each voxel in the white-matter mask, so that # we can use these directions in tracking: pnm = peaks_from_model( sf_model, data, sphere, relative_peak_threshold=0.5, min_separation_angle=25, mask=white_matter, parallel=True, num_processes=1, ) ############################################################################### # A ThresholdStoppingCriterion object is used to segment the data to track only # through areas in which the Generalized Fractional Anisotropy (GFA) is # sufficiently high. stopping_criterion = ThresholdStoppingCriterion(pnm.gfa, 0.25) ############################################################################### # Tracking will be started from a set of seeds evenly distributed in the white # matter: seeds = utils.seeds_from_mask(white_matter, affine, density=[2, 2, 2]) ############################################################################### # For the sake of brevity, we will take only the first 1000 seeds, generating # only 1000 streamlines. Remove this line to track from many more points in # all of the white matter seeds = seeds[:1000] ############################################################################### # We now have the necessary components to construct a tracking pipeline and # execute the tracking streamline_generator = eudx_tracking( seeds, stopping_criterion, affine, pam=pnm, step_size=0.5 ) streamlines = Streamlines(streamline_generator) ############################################################################### # Next, we will create a visualization of these streamlines, relative to this # subject's T1-weighted anatomy: t1_fname = get_fnames(name="stanford_t1") t1_data, t1_aff = load_nifti(t1_fname) color = colormap.line_colors(streamlines) ############################################################################### # To speed up visualization, we will select a random sub-set of streamlines to # display. This is particularly important, if you track from seeds throughout # the entire white matter, generating many streamlines. In this case, for # demonstration purposes, we select a subset of 900 streamlines. plot_streamlines = select_random_set_of_streamlines(streamlines, 900) if has_fury: streamlines_actor = actor.streamtube( list(transform_streamlines(plot_streamlines, inv(t1_aff))), colors=colormap.line_colors(streamlines), linewidth=0.1, ) vol_actor = actor.slicer(t1_data) vol_actor.display(x=40) vol_actor2 = vol_actor.copy() vol_actor2.display(z=35) scene = window.Scene() scene.add(streamlines_actor) scene.add(vol_actor) scene.add(vol_actor2) window.record(scene=scene, out_path="tractogram_sfm.png", size=(800, 800)) if interactive: window.show(scene) ############################################################################### # .. 
rst-class:: centered small fst-italic fw-semibold # # Sparse Fascicle Model tracks # # # Finally, we can save these streamlines to a 'trk' file, for use in other # software, or for further analysis. sft = StatefulTractogram(streamlines, hardi_img, Space.RASMM) save_trk(sft, "tractogram_sfm_detr.trk") ############################################################################### # References # ---------- # # .. footbibliography:: # dipy-1.11.0/doc/examples/tracking_stopping_criterion.py000066400000000000000000000314101476546756600233300ustar00rootroot00000000000000""" ================================================= Using Various Stopping Criterion for Tractography ================================================= The stopping criterion determines whether the tracking stops or continues at each tracking position. The tracking stops when it reaches an ending region (e.g. low FA, gray matter or cerebrospinal fluid regions) or exits the image boundaries. The tracking also stops if the direction getter has no direction to follow. Each stopping criterion determines whether the stopping is 'valid' or 'invalid'. A streamline is 'valid' when the stopping criterion determines that the streamline stops in a position classified as 'ENDPOINT' or 'OUTSIDEIMAGE'. A streamline is 'invalid' when it stops in a position classified as 'TRACKPOINT' or 'INVALIDPOINT'. These conditions are described below. The 'LocalTracking' generator can be set to output all generated streamlines or only the 'valid' ones. See :footcite:t:`Girard2014` and :footcite:p:`Smith2012` for more details on these methods. This example is an extension of the :ref:`sphx_glr_examples_built_fiber_tracking_tracking_deterministic.py` example. We begin by loading the data, creating a seeding mask from white matter voxels of the corpus callosum, fitting a Constrained Spherical Deconvolution (CSD) reconstruction model and creating the maximum deterministic direction getter.
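In the tracking functions used below, this choice is exposed through the ``return_all`` argument, e.g.::

    deterministic_tracking(..., return_all=True)   # keep every streamline
    deterministic_tracking(..., return_all=False)  # keep only 'valid' ones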
""" import matplotlib.pyplot as plt import numpy as np from dipy.core.gradients import gradient_table from dipy.data import default_sphere, get_fnames from dipy.io.gradients import read_bvals_bvecs from dipy.io.image import load_nifti, load_nifti_data from dipy.io.stateful_tractogram import Space, StatefulTractogram from dipy.io.streamline import save_trk from dipy.reconst.csdeconv import ConstrainedSphericalDeconvModel, auto_response_ssst from dipy.reconst.dti import TensorModel, fractional_anisotropy from dipy.tracking import utils from dipy.tracking.stopping_criterion import ( ActStoppingCriterion, BinaryStoppingCriterion, ThresholdStoppingCriterion, ) from dipy.tracking.streamline import Streamlines from dipy.tracking.tracker import deterministic_tracking from dipy.viz import actor, colormap, has_fury, window # Enables/disables interactive visualization interactive = False hardi_fname, hardi_bval_fname, hardi_bvec_fname = get_fnames(name="stanford_hardi") label_fname = get_fnames(name="stanford_labels") _, _, f_pve_wm = get_fnames(name="stanford_pve_maps") data, affine, hardi_img = load_nifti(hardi_fname, return_img=True) labels = load_nifti_data(label_fname) bvals, bvecs = read_bvals_bvecs(hardi_bval_fname, hardi_bvec_fname) gtab = gradient_table(bvals, bvecs=bvecs) white_matter = load_nifti_data(f_pve_wm) seed_mask = labels == 2 seed_mask[white_matter < 0.5] = 0 seeds = utils.seeds_from_mask(seed_mask, affine, density=2) response, ratio = auto_response_ssst(gtab, data, roi_radii=10, fa_thr=0.7) csd_model = ConstrainedSphericalDeconvModel(gtab, response) csd_fit = csd_model.fit(data, mask=white_matter) ############################################################################### # Threshold Stopping Criterion # ============================ # A scalar map can be used to define where the tracking stops. The threshold # stopping criterion uses a scalar map to stop the tracking whenever the # interpolated scalar value is lower than a fixed threshold. Here, we show # an example using the fractional anisotropy (FA) map of the DTI model. # The threshold stopping criterion uses a trilinear interpolation at the # tracking position. # # **Parameters** # # - metric_map: numpy array [:, :, :] # - threshold: float # # **Stopping States** # # - 'ENDPOINT': stops at a position where metric_map < threshold; the # streamline reached the target stopping area. # - 'OUTSIDEIMAGE': stops at a position outside of metric_map; the streamline # reached an area outside the image where no direction data is available. # - 'TRACKPOINT': stops at a position because no direction is available; the # streamline is stopping where metric_map >= threshold, but there is no valid # direction to follow. # - 'INVALIDPOINT': N/A. # tensor_model = TensorModel(gtab) tenfit = tensor_model.fit(data, mask=labels > 0) FA = fractional_anisotropy(tenfit.evals) threshold_criterion = ThresholdStoppingCriterion(FA, 0.2) fig = plt.figure() mask_fa = FA.copy() mask_fa[mask_fa < 0.2] = 0 plt.xticks([]) plt.yticks([]) plt.imshow( mask_fa[:, :, data.shape[2] // 2].T, cmap="gray", origin="lower", interpolation="nearest", ) fig.tight_layout() fig.savefig("threshold_fa.png") ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # Thresholded fractional anisotropy map. 
streamline_generator = deterministic_tracking( seeds, threshold_criterion, affine, step_size=0.5, return_all=True, sh=csd_fit.shm_coeff, max_angle=30.0, sphere=default_sphere, ) streamlines = Streamlines(streamline_generator) sft = StatefulTractogram(streamlines, hardi_img, Space.RASMM) save_trk(sft, "tractogram_probabilistic_thresh_all.trk") if has_fury: scene = window.Scene() scene.add(actor.line(streamlines, colors=colormap.line_colors(streamlines))) window.record( scene=scene, out_path="tractogram_deterministic_thresh_all.png", size=(800, 800) ) if interactive: window.show(scene) ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # Corpus Callosum using deterministic tractography with a thresholded # fractional anisotropy mask. # # # # Binary Stopping Criterion # ========================= # A binary mask can be used to define where the tracking stops. The binary # stopping criterion stops the tracking whenever the tracking position is # outside the mask. Here, we show how to obtain the binary stopping criterion # from the white matter mask defined above. The binary stopping criterion uses # a nearest-neighborhood interpolation at the tracking position. # # **Parameters** # # - mask: numpy array [:, :, :] # # **Stopping States** # # - 'ENDPOINT': stops at a position where mask = 0; the streamline # reached the target stopping area. # - 'OUTSIDEIMAGE': stops at a position outside of mask; the streamline # reached an area outside the image where no direction data is available. # - 'TRACKPOINT': stops at a position because no direction is available; the # streamline is stopping where mask > 0, but there is no valid direction to # follow. # - 'INVALIDPOINT': N/A. # binary_criterion = BinaryStoppingCriterion(white_matter == 1) fig = plt.figure() plt.xticks([]) plt.yticks([]) fig.tight_layout() plt.imshow( white_matter[:, :, data.shape[2] // 2].T, cmap="gray", origin="lower", interpolation="nearest", ) fig.savefig("white_matter_mask.png") ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # White matter binary mask. streamline_generator = deterministic_tracking( seeds, binary_criterion, affine, step_size=0.5, return_all=True, sh=csd_fit.shm_coeff, max_angle=30.0, sphere=default_sphere, ) streamlines = Streamlines(streamline_generator) sft = StatefulTractogram(streamlines, hardi_img, Space.RASMM) save_trk(sft, "tractogram_deterministic_binary_all.trk") if has_fury: scene = window.Scene() scene.add(actor.line(streamlines, colors=colormap.line_colors(streamlines))) window.record( scene=scene, out_path="tractogram_deterministic_binary_all.png", size=(800, 800) ) if interactive: window.show(scene) ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # Corpus Callosum using deterministic tractography with a binary white # matter mask. # # # # ACT Stopping Criterion # ====================== # Anatomically-constrained tractography (ACT) :footcite:p:`Smith2012` uses # information from anatomical images to determine when the tractography stops. # The ``include_map`` defines when the streamline reached a 'valid' stopping # region (e.g. gray matter partial volume estimation (PVE) map) and the # ``exclude_map`` defines when the streamline reached an 'invalid' stopping # region (e.g. cerebrospinal fluid PVE map).
The background of the anatomical # image should be added to the ``include_map`` to keep streamlines that exit # the brain (e.g. through the brain stem). The ACT stopping criterion uses # a trilinear interpolation at the tracking position. # # **Parameters** # # - ``include_map``: numpy array ``[:, :, :]``, # - ``exclude_map``: numpy array ``[:, :, :]``, # # **Stopping States** # # - 'ENDPOINT': stops at a position where ``include_map`` > 0.5; the streamline # reached the target stopping area. # - 'OUTSIDEIMAGE': stops at a position outside of ``include_map`` or # ``exclude_map``; the streamline reached an area outside the image where no # direction data is available. # - 'TRACKPOINT': stops at a position because no direction is available; the # streamline is stopping where ``include_map`` < 0.5 and # ``exclude_map`` < 0.5, but there is no valid direction to follow. # - 'INVALIDPOINT': ``exclude_map`` > 0.5; the streamline reached a position # which is not anatomically plausible. # f_pve_csf, f_pve_gm, f_pve_wm = get_fnames(name="stanford_pve_maps") pve_csf_data = load_nifti_data(f_pve_csf) pve_gm_data = load_nifti_data(f_pve_gm) pve_wm_data = load_nifti_data(f_pve_wm) background = np.ones(pve_gm_data.shape) background[(pve_gm_data + pve_wm_data + pve_csf_data) > 0] = 0 include_map = pve_gm_data include_map[background > 0] = 1 exclude_map = pve_csf_data act_criterion = ActStoppingCriterion(include_map, exclude_map) fig = plt.figure() plt.subplot(121) plt.xticks([]) plt.yticks([]) plt.imshow( include_map[:, :, data.shape[2] // 2].T, cmap="gray", origin="lower", interpolation="nearest", ) plt.subplot(122) plt.xticks([]) plt.yticks([]) plt.imshow( exclude_map[:, :, data.shape[2] // 2].T, cmap="gray", origin="lower", interpolation="nearest", ) fig.tight_layout() fig.savefig("act_maps.png") ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # Include (left) and exclude (right) maps for ACT. streamline_generator = deterministic_tracking( seeds, act_criterion, affine, step_size=0.5, return_all=True, sh=csd_fit.shm_coeff, max_angle=30.0, sphere=default_sphere, ) streamlines = Streamlines(streamline_generator) sft = StatefulTractogram(streamlines, hardi_img, Space.RASMM) save_trk(sft, "tractogram_deterministic_act_all.trk") if has_fury: scene = window.Scene() scene.add(actor.line(streamlines, colors=colormap.line_colors(streamlines))) window.record( scene=scene, out_path="tractogram_deterministic_act_all.png", size=(800, 800) ) if interactive: window.show(scene) ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # Corpus Callosum using deterministic tractography with ACT stopping # criterion. streamline_generator = deterministic_tracking( seeds, act_criterion, affine, step_size=0.5, return_all=False, sh=csd_fit.shm_coeff, max_angle=30.0, sphere=default_sphere, ) streamlines = Streamlines(streamline_generator) sft = StatefulTractogram(streamlines, hardi_img, Space.RASMM) save_trk(sft, "tractogram_deterministic_act_valid.trk") if has_fury: scene = window.Scene() scene.add(actor.line(streamlines, colors=colormap.line_colors(streamlines))) window.record( scene=scene, out_path="tractogram_deterministic_act_valid.png", size=(800, 800) ) if interactive: window.show(scene) ############################################################################### # ..
rst-class:: centered small fst-italic fw-semibold # # Corpus Callosum using deterministic tractography with ACT stopping # criterion. Streamlines ending in gray matter region only. # # # # The threshold and binary stopping criteria use, respectively, a scalar map # and a binary mask to stop the tracking. The ACT stopping criterion uses # partial volume estimation (PVE) maps from an anatomical image to stop the # tracking. Additionally, the ACT stopping criterion determines if the # tracking stopped in expected regions (e.g. gray matter) and allows the # user to get only streamlines stopping in those regions. # # Notes # ----- # Currently, the proposed method that cuts streamlines going through # subcortical gray matter regions is not implemented. The # backtracking technique for streamlines reaching INVALIDPOINT is not # implemented either :footcite:p:`Smith2012`. # # # References # ---------- # # .. footbibliography:: # dipy-1.11.0/doc/examples/viz_advanced.py000066400000000000000000000227121476546756600201670ustar00rootroot00000000000000""" ================================== Advanced interactive visualization ================================== In DIPY_ we created a thin interface to access many of the capabilities available in the FURY 3D visualization library :footcite:p:`Garyfallidis2021` but tailored to the needs of structural and diffusion imaging. Let's start by importing the necessary modules. """ import numpy as np from dipy.data.fetcher import fetch_bundles_2_subjects, read_bundles_2_subjects from dipy.tracking.streamline import Streamlines from dipy.viz import actor, ui, window ############################################################################### # In ``window`` we have all the objects that connect what needs to be rendered # to the display or the disk e.g., for saving screenshots. So, there you will # find key objects and functions like the ``Scene`` class which holds and # provides access to all the actors and the ``show`` function which displays # what is in the scene on a window. Also, this module provides access to # functions for opening/saving dialogs and printing screenshots # (see ``snapshot``). # # In the ``actor`` module we can find all the different primitives e.g., # streamtubes, lines, image slices, etc. # # In the ``ui`` module we have some other objects which allow adding buttons # and sliders; these interact both with windows and actors. Because of # this they need input from the operating system so they can process events. # # Let's get started. In this tutorial, we will visualize some bundles # together with FA or T1. We will be able to change the slices using # a ``LineSlider2D`` widget. # # First we need to fetch and load some datasets. fetch_bundles_2_subjects() ############################################################################### # The following function outputs a dictionary with the required bundles e.g. # ``af.left`` (left arcuate fasciculus) and maps, e.g. FA for a specific # subject. res = read_bundles_2_subjects( subj_id="subj_1", metrics=["t1", "fa"], bundles=["af.left", "cst.right", "cc_1"] ) ############################################################################### # We will use 3 bundles, FA and the affine transformation that brings the voxel # coordinates to world coordinates (RAS 1mm).
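###############################################################################
# A quick optional look (a small sketch) at what the loader returned:

print(sorted(res.keys()))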
streamlines = Streamlines(res["af.left"]) streamlines.extend(res["cst.right"]) streamlines.extend(res["cc_1"]) data = res["fa"] shape = data.shape affine = res["affine"] ############################################################################### # With our current design it is easy to decide in which space you want the # streamlines and slices to appear. The default we have here is to appear in # world coordinates (RAS 1mm). world_coords = True ############################################################################### # If we want to see the objects in native space we need to make sure that all # objects which are currently in world coordinates are transformed back to # native space using the inverse of the affine. if not world_coords: from dipy.tracking.streamline import transform_streamlines streamlines = transform_streamlines(streamlines, np.linalg.inv(affine)) ############################################################################### # Now we create a ``Scene`` object and add the streamlines using the ``line`` # function and an image plane using the ``slice`` function. scene = window.Scene() stream_actor = actor.line(streamlines) if not world_coords: image_actor_z = actor.slicer(data, affine=np.eye(4)) else: image_actor_z = actor.slicer(data, affine=affine) ############################################################################### # We can also change the opacity of the slicer. slicer_opacity = 0.6 image_actor_z.opacity(slicer_opacity) ############################################################################### # We can add additional slicers by copying the original and adjusting the # ``display_extent``. image_actor_x = image_actor_z.copy() x_midpoint = int(np.round(shape[0] / 2)) image_actor_x.display_extent(x_midpoint, x_midpoint, 0, shape[1] - 1, 0, shape[2] - 1) image_actor_y = image_actor_z.copy() y_midpoint = int(np.round(shape[1] / 2)) image_actor_y.display_extent(0, shape[0] - 1, y_midpoint, y_midpoint, 0, shape[2] - 1) ############################################################################### # Connect the actors with the Scene. scene.add(stream_actor) scene.add(image_actor_z) scene.add(image_actor_x) scene.add(image_actor_y) ############################################################################### # Now we would like to change the position of each ``image_actor`` using a # slider. The sliders are widgets which require access to different areas of # the visualization pipeline and therefore we don't recommend using them with # ``show``. The more appropriate way is to use them with the ``ShowManager`` # object which allows accessing the pipeline in different areas. Here is how: show_m = window.ShowManager(scene=scene, size=(1200, 900)) show_m.initialize() ############################################################################### # After we have initialized the ``ShowManager`` we can go ahead and create # sliders to move the slices and change their opacity.
line_slider_z = ui.LineSlider2D( min_value=0, max_value=shape[2] - 1, initial_value=shape[2] / 2, text_template="{value:.0f}", length=140, ) line_slider_x = ui.LineSlider2D( min_value=0, max_value=shape[0] - 1, initial_value=shape[0] / 2, text_template="{value:.0f}", length=140, ) line_slider_y = ui.LineSlider2D( min_value=0, max_value=shape[1] - 1, initial_value=shape[1] / 2, text_template="{value:.0f}", length=140, ) opacity_slider = ui.LineSlider2D( min_value=0.0, max_value=1.0, initial_value=slicer_opacity, length=140 ) ############################################################################### # Now we will write callbacks for the sliders and register them. def change_slice_z(slider): z = int(np.round(slider.value)) image_actor_z.display_extent(0, shape[0] - 1, 0, shape[1] - 1, z, z) def change_slice_x(slider): x = int(np.round(slider.value)) image_actor_x.display_extent(x, x, 0, shape[1] - 1, 0, shape[2] - 1) def change_slice_y(slider): y = int(np.round(slider.value)) image_actor_y.display_extent(0, shape[0] - 1, y, y, 0, shape[2] - 1) def change_opacity(slider): slicer_opacity = slider.value image_actor_z.opacity(slicer_opacity) image_actor_x.opacity(slicer_opacity) image_actor_y.opacity(slicer_opacity) line_slider_z.on_change = change_slice_z line_slider_x.on_change = change_slice_x line_slider_y.on_change = change_slice_y opacity_slider.on_change = change_opacity ############################################################################### # We'll also create text labels to identify the sliders. def build_label(text): label = ui.TextBlock2D() label.message = text label.font_size = 18 label.font_family = "Arial" label.justification = "left" label.bold = False label.italic = False label.shadow = False label.background_color = (0, 0, 0) label.color = (1, 1, 1) return label line_slider_label_z = build_label(text="Z Slice") line_slider_label_x = build_label(text="X Slice") line_slider_label_y = build_label(text="Y Slice") opacity_slider_label = build_label(text="Opacity") ############################################################################### # Now we will create a ``panel`` to contain the sliders and labels. panel = ui.Panel2D(size=(300, 200), color=(1, 1, 1), opacity=0.1, align="right") panel.center = (1030, 120) panel.add_element(line_slider_label_x, (0.1, 0.75)) panel.add_element(line_slider_x, (0.38, 0.75)) panel.add_element(line_slider_label_y, (0.1, 0.55)) panel.add_element(line_slider_y, (0.38, 0.55)) panel.add_element(line_slider_label_z, (0.1, 0.35)) panel.add_element(line_slider_z, (0.38, 0.35)) panel.add_element(opacity_slider_label, (0.1, 0.15)) panel.add_element(opacity_slider, (0.38, 0.15)) scene.add(panel) ############################################################################### # Then, we can render all the widgets and everything else on the screen and # start the interaction using ``show_m.start()``. # # # However, if you change the window size, the panel will not update its # position properly. The solution to this issue is to update the position of # the panel using its ``re_align`` method every time the window size changes. global size size = scene.GetSize() def win_callback(obj, event): global size if size != obj.GetSize(): size_old = size size = obj.GetSize() size_change = [size[0] - size_old[0], 0] panel.re_align(size_change) show_m.initialize() ############################################################################### # Finally, please set the following variable to ``True`` to interact with the # datasets in 3D.
interactive = False scene.zoom(1.5) scene.reset_clipping_range() if interactive: show_m.add_window_callback(win_callback) show_m.render() show_m.start() else: window.record( scene=scene, out_path="bundles_and_3_slices.png", size=(1200, 900), reset_camera=False, ) ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # A few bundles with interactive slicing. del show_m ############################################################################### # References # ---------- # # .. footbibliography:: # dipy-1.11.0/doc/examples/viz_bundles.py000066400000000000000000000153501476546756600200560ustar00rootroot00000000000000""" ======================================== Visualize bundles and metrics on bundles ======================================== First, let's download some available datasets. Here we are using a dataset which provides metrics and bundles. """ import numpy as np from dipy.data import fetch_bundles_2_subjects, read_bundles_2_subjects from dipy.tracking.streamline import length, transform_streamlines from dipy.viz import actor, window fetch_bundles_2_subjects() dix = read_bundles_2_subjects( subj_id="subj_1", metrics=["fa"], bundles=["cg.left", "cst.right"] ) ############################################################################### # Store fractional anisotropy. fa = dix["fa"] ############################################################################### # Store the grid-to-world transformation matrix. affine = dix["affine"] ############################################################################### # Store the cingulum bundle. A bundle is a list of streamlines. bundle = dix["cg.left"] ############################################################################### # This bundle happens to be in world coordinates and therefore we need # to transform it into native image coordinates so that it is in the same # coordinate space as the ``fa`` image. bundle_native = transform_streamlines(bundle, np.linalg.inv(affine)) ############################################################################### # Show every streamline with an orientation color # =============================================== # # This is the default option when you are using ``line`` or ``streamtube``. scene = window.Scene() stream_actor = actor.line(bundle_native) scene.set_camera( position=(-176.42, 118.52, 128.20), focal_point=(113.30, 128.31, 76.56), view_up=(0.18, 0.00, 0.98), ) scene.add(stream_actor) # Uncomment the line below to display the window # window.show(scene, size=(600, 600), reset_camera=False) window.record(scene=scene, out_path="bundle1.png", size=(600, 600)) ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # One orientation color for every streamline. # # # You may wonder how we knew how to set the camera. This is very easy. You just # need to run ``window.show`` once to see how you want to see the object and # then close the window and call the ``camera_info`` method which prints the # position, focal point and view up vectors of the camera. scene.camera_info() ############################################################################### # Show every point with a value from a volume with default colormap # ================================================================= # # Here we will need to input the ``fa`` map in ``streamtube`` or ``line``.
scene.clear() stream_actor2 = actor.line(bundle_native, colors=fa, linewidth=0.1) ############################################################################### # We can also show the scalar bar. bar = actor.scalar_bar() scene.add(stream_actor2) scene.add(bar) # window.show(scene, size=(600, 600), reset_camera=False) window.record(scene=scene, out_path="bundle2.png", size=(600, 600)) ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # Every point with a color from FA. # # # Show every point with a value from a volume with your colormap # ============================================================== # # Here we will need to input the ``fa`` map in ``streamtube`` scene.clear() hue = (0.0, 0.0) # red only saturation = (0.0, 1.0) # white to red lut_cmap = actor.colormap_lookup_table(hue_range=hue, saturation_range=saturation) stream_actor3 = actor.line( bundle_native, colors=fa, linewidth=0.1, lookup_colormap=lut_cmap ) bar2 = actor.scalar_bar(lookup_table=lut_cmap) scene.add(stream_actor3) scene.add(bar2) # window.show(scene, size=(600, 600), reset_camera=False) window.record(scene=scene, out_path="bundle3.png", size=(600, 600)) ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # Every point with a color from FA using a non-default colormap. # # # Show every bundle with a specific color # ======================================== # # You can have a bundle with a specific color. In this example, we are choosing # orange. scene.clear() stream_actor4 = actor.line(bundle_native, colors=(1.0, 0.5, 0), linewidth=0.1) scene.add(stream_actor4) # window.show(scene, size=(600, 600), reset_camera=False) window.record(scene=scene, out_path="bundle4.png", size=(600, 600)) ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # Entire bundle with a specific color. # # # Show every streamline of a bundle with a different color # ======================================================== # # Let's make a colormap where every streamline of the bundle is colored by # its length. scene.clear() lengths = length(bundle_native) hue = (0.5, 0.5) # blue only saturation = (0.0, 1.0) # black to white lut_cmap = actor.colormap_lookup_table( scale_range=(lengths.min(), lengths.max()), hue_range=hue, saturation_range=saturation, ) stream_actor5 = actor.line( bundle_native, colors=lengths, linewidth=0.1, lookup_colormap=lut_cmap ) scene.add(stream_actor5) bar3 = actor.scalar_bar(lookup_table=lut_cmap) scene.add(bar3) # window.show(scene, size=(600, 600), reset_camera=False) window.record(scene=scene, out_path="bundle5.png", size=(600, 600)) ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # Color every streamline by the length of the streamline # # # Show every point of every streamline with a different color # ============================================================ # # In this case, in which we want a color per point and per streamline, # we can create a list of colors corresponding to the list of streamlines # (bundles). Here in ``colors`` we will insert some random RGB colors.
scene.clear() rng = np.random.default_rng() colors = [rng.random(streamline.shape) for streamline in bundle_native] stream_actor6 = actor.line( bundle_native, colors=np.asarray(colors, dtype=object), linewidth=0.2 ) scene.add(stream_actor6) # window.show(scene, size=(600, 600), reset_camera=False) window.record(scene=scene, out_path="bundle6.png", size=(600, 600)) ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # Random colors per point per streamline. # # # In summary, we showed that there are many useful ways for visualizing maps # on bundles. dipy-1.11.0/doc/examples/viz_roi_contour.py000066400000000000000000000072501476546756600207640ustar00rootroot00000000000000""" ====================================================== Visualization of ROI Surface Rendered with Streamlines ====================================================== Here is a simple tutorial following the probabilistic CSA Tracking Example in which we generate a dataset of streamlines from a corpus callosum ROI, and then display them with the seed ROI rendered in 3D with 50% transparency. Let's start by importing the relevant modules. """ from dipy.core.gradients import gradient_table from dipy.data import default_sphere, get_fnames from dipy.direction import peaks_from_model from dipy.io.gradients import read_bvals_bvecs from dipy.io.image import load_nifti, load_nifti_data from dipy.reconst.shm import CsaOdfModel from dipy.tracking import utils from dipy.tracking.local_tracking import LocalTracking from dipy.tracking.stopping_criterion import ThresholdStoppingCriterion from dipy.tracking.streamline import Streamlines from dipy.viz import actor, colormap as cmap, window ############################################################################### # First, we need to generate some streamlines. For a more complete # description of these steps, please refer to the CSA Probabilistic Tracking # Tutorial. hardi_fname, hardi_bval_fname, hardi_bvec_fname = get_fnames(name="stanford_hardi") label_fname = get_fnames(name="stanford_labels") data, affine, hardi_img = load_nifti(hardi_fname, return_img=True) labels = load_nifti_data(label_fname) bvals, bvecs = read_bvals_bvecs(hardi_bval_fname, hardi_bvec_fname) gtab = gradient_table(bvals, bvecs=bvecs) white_matter = (labels == 1) | (labels == 2) csa_model = CsaOdfModel(gtab, sh_order_max=6) csa_peaks = peaks_from_model( csa_model, data, default_sphere, relative_peak_threshold=0.8, min_separation_angle=45, mask=white_matter, ) stopping_criterion = ThresholdStoppingCriterion(csa_peaks.gfa, 0.25) seed_mask = labels == 2 seeds = utils.seeds_from_mask(seed_mask, affine, density=[1, 1, 1]) # Initialization of LocalTracking. The computation happens in the next step. streamlines = LocalTracking(csa_peaks, stopping_criterion, seeds, affine, step_size=2) # Compute streamlines and store as a list. streamlines = Streamlines(streamlines) ############################################################################### # We will create a streamline actor from the streamlines. streamlines_actor = actor.line(streamlines, colors=cmap.line_colors(streamlines)) ############################################################################### # Next, we create a surface actor from the corpus callosum seed ROI. We # provide the ROI data, the affine, the color in [R,G,B], and the opacity as # a decimal between zero and one. Here, we set the color as blue/green with # 50% opacity. 
surface_opacity = 0.5 surface_color = [0, 1, 1] seedroi_actor = actor.contour_from_roi( seed_mask, affine=affine, color=surface_color, opacity=surface_opacity ) ############################################################################### # Next, we initialize a ``Scene`` object and add both actors # to the rendering. scene = window.Scene() scene.add(streamlines_actor) scene.add(seedroi_actor) ############################################################################### # If you uncomment the following line, the rendering will pop up in an # interactive window. interactive = False if interactive: window.show(scene) window.record(scene=scene, out_path="contour_from_roi_tutorial.png", size=(1200, 900)) ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # A top view of corpus callosum streamlines with the blue transparent # seed ROI in the center. dipy-1.11.0/doc/examples/viz_slice.py000066400000000000000000000210451476546756600175170ustar00rootroot00000000000000""" ===================== Simple volume slicing ===================== Here we present an example for visualizing slices from 3D images. Let's start by importing the relevant modules. """ import os from dipy.data import fetch_bundles_2_subjects from dipy.io.image import load_nifti, load_nifti_data from dipy.viz import actor, ui, window ############################################################################### # Let's download and load a T1. fetch_bundles_2_subjects() fname_t1 = os.path.join( os.path.expanduser("~"), ".dipy", "exp_bundles_and_maps", "bundles_2_subjects", "subj_1", "t1_warped.nii.gz", ) data, affine = load_nifti(fname_t1) ############################################################################### # Create a Scene object which holds all the actors which we want to visualize. scene = window.Scene() scene.background((0.5, 0.5, 0.5)) ############################################################################### # Render slices from T1 with a specific value range # ================================================= # # The T1 usually has a higher range of values than what can be visualized in an # image. We can set the range that we would like to see. mean, std = data[data > 0].mean(), data[data > 0].std() value_range = (mean - 0.5 * std, mean + 1.5 * std) ############################################################################### # The ``slice`` function will read the data and resample it using an affine # transformation matrix. The default behavior of this function is to show the # middle slice of the last dimension of the resampled data. slice_actor = actor.slicer(data, affine=affine, value_range=value_range) ############################################################################### # The ``slice_actor`` contains an axial slice. scene.add(slice_actor) ############################################################################### # The same actor can show any different slice from the given data using its # ``display`` function. However, if we want to show multiple slices we need to # copy the actor first. slice_actor2 = slice_actor.copy() ############################################################################### # Now we have a new ``slice_actor`` which displays the middle slice of the # sagittal plane.
slice_actor2.display(x=slice_actor2.shape[0] // 2, y=None, z=None) scene.add(slice_actor2) scene.reset_camera() scene.zoom(1.4) ############################################################################### # In order to interact with the data you will need to uncomment the line below. # window.show(scene, size=(600, 600), reset_camera=False) ############################################################################### # Otherwise, you can save a screenshot using the following command. window.record(scene=scene, out_path="slices.png", size=(600, 600), reset_camera=False) ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # Simple slice viewer. # # # Render slices from FA with your colormap # ======================================== # # It is also possible to set the colormap of your preference. Here we are # loading an FA image and showing it in a non-standard way using an HSV # colormap. fname_fa = os.path.join( os.path.expanduser("~"), ".dipy", "exp_bundles_and_maps", "bundles_2_subjects", "subj_1", "fa_1x1x1.nii.gz", ) fa = load_nifti_data(fname_fa) lut = actor.colormap_lookup_table( scale_range=(fa.min(), fa.max() * 0.8), hue_range=(0.4, 1.0), saturation_range=(1, 1.0), value_range=(0.0, 1.0), ) ############################################################################### # This is because the lookup table is applied to the slice after the data # have been interpolated to the (0, 255) range. fa_actor = actor.slicer(fa, affine=affine, lookup_colormap=lut) scene.clear() scene.add(fa_actor) scene.reset_camera() scene.zoom(1.4) # window.show(scene, size=(600, 600), reset_camera=False) window.record( scene=scene, out_path="slices_lut.png", size=(600, 600), reset_camera=False ) ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # Simple slice viewer with an HSV colormap. # # # # Now we would like to add the ability to click on a voxel and show its value # on a panel in the window. The panel is a UI element which requires access to # different areas of the visualization pipeline and therefore we don't # recommend using it with ``window.show``. The more appropriate way is to use # the ``ShowManager`` object, which allows accessing the pipeline in different # areas. show_m = window.ShowManager(scene=scene, size=(1200, 900)) show_m.initialize() ############################################################################### # We'll start by creating the panel and adding it to the ``ShowManager`` label_position = ui.TextBlock2D(text="Position:") label_value = ui.TextBlock2D(text="Value:") result_position = ui.TextBlock2D(text="") result_value = ui.TextBlock2D(text="") panel_picking = ui.Panel2D( size=(250, 125), position=(20, 20), color=(0, 0, 0), opacity=0.75, align="left" ) panel_picking.add_element(label_position, (0.1, 0.55)) panel_picking.add_element(label_value, (0.1, 0.25)) panel_picking.add_element(result_position, (0.45, 0.55)) panel_picking.add_element(result_value, (0.45, 0.25)) scene.add(panel_picking) ############################################################################### # Add a left-click callback to the slicer. Also disable interpolation so you # can see what you are picking.
def left_click_callback(obj, ev): """Get the value of the clicked voxel and show it in the panel.""" event_pos = show_m.iren.GetEventPosition() obj.picker.Pick(event_pos[0], event_pos[1], 0, scene) i, j, k = obj.picker.GetPointIJK() result_position.message = f"({str(i)}, {str(j)}, {str(k)})" result_value.message = f"{data[i, j, k]:.8f}" fa_actor.SetInterpolate(False) fa_actor.AddObserver("LeftButtonPressEvent", left_click_callback, 1.0) # show_m.start() ############################################################################### # Create a mosaic # =============== # # By using the ``copy`` and ``display`` method of the ``slice_actor`` it # becomes easy and efficient to create a mosaic of all the slices. # # So, let's clear the scene and change the projection from perspective to # parallel. We'll also need a new show manager and an associated callback. scene.clear() scene.projection("parallel") result_position.message = "" result_value.message = "" show_m_mosaic = window.ShowManager(scene=scene, size=(1200, 900)) show_m_mosaic.initialize() def left_click_callback_mosaic(obj, ev): """Get the value of the clicked voxel and show it in the panel.""" event_pos = show_m_mosaic.iren.GetEventPosition() obj.picker.Pick(event_pos[0], event_pos[1], 0, scene) i, j, k = obj.picker.GetPointIJK() result_position.message = f"({str(i)}, {str(j)}, {str(k)})" result_value.message = f"{data[i, j, k]:.8f}" ############################################################################### # Now we need to create two nested for loops which will set the positions of # the grid of the mosaic and add the new actors to the scene. We are going # to use 15 columns and 10 rows but you can adjust those for your datasets. cnt = 0 X, Y, Z = slice_actor.shape[:3] rows = 10 cols = 15 border = 10 for j in range(rows): for i in range(cols): slice_mosaic = slice_actor.copy() slice_mosaic.display(z=cnt) slice_mosaic.SetPosition( (X + border) * i, 0.5 * cols * (Y + border) - (Y + border) * j, 0 ) slice_mosaic.SetInterpolate(False) slice_mosaic.AddObserver( "LeftButtonPressEvent", left_click_callback_mosaic, 1.0 ) scene.add(slice_mosaic) cnt += 1 if cnt > Z: break if cnt > Z: break scene.reset_camera() scene.zoom(1.0) # show_m_mosaic.ren.add(panel_picking) # show_m_mosaic.start() ############################################################################### # If you uncomment the two lines above, you will be able to move # the mosaic up/down and left/right using the middle mouse button drag, # zoom in/out using the scroll wheel, and pick voxels with left click. window.record(scene=scene, out_path="mosaic.png", size=(900, 600), reset_camera=False) ############################################################################### # .. rst-class:: centered small fst-italic fw-semibold # # A mosaic of all the slices in the T1 volume. dipy-1.11.0/doc/examples/workflow_creation.py000066400000000000000000000100571476546756600212670ustar00rootroot00000000000000""" ============================================================ Creating a new workflow. ============================================================ A workflow is a series of DIPY_ operations with fixed inputs and outputs that is callable via command line or another interface. For example, after installing DIPY_, you can call the following from anywhere on your command line:: dipy_nlmeans t1.nii.gz t1_denoised.nii.gz First create your workflow (let's name this workflow file my_workflow.py). Usually this is a python file in the ``<../dipy/workflows>`` directory. """ import shutil ############################################################################### # ``shutil`` will be used for sample file manipulation. from dipy.workflows.workflow import Workflow ############################################################################### # ``Workflow`` is the base class that will be extended to create our workflow. class AppendTextFlow(Workflow): def run( self, input_files, text_to_append="dipy", out_dir="", out_file="append.txt" ): """ Parameters ---------- input_files : string Path to the input files. This path may contain wildcards to process multiple inputs at once. text_to_append : string, optional Text that will be appended to the file. (default 'dipy') out_dir : string, optional Where the resulting file will be saved. (default '') out_file : string, optional Name of the result file to be saved. (default 'append.txt') """ """ ``AppendTextFlow`` is the name of our workflow. Note that it needs to extend Workflow for everything to work properly. It will append text to a file. It is mandatory to have out_dir as a parameter. It is also mandatory to put `out_` in front of every parameter that is going to be an output. Lastly, all `out_` params need to be at the end of the params list. The ``run`` docstring is very important: you need to document every parameter, as they will be inspected to build the command line argument parser. """ io_it = self.get_io_iterator() for in_file, out_file in io_it: shutil.copy(in_file, out_file) with open(out_file, "a") as myfile: myfile.write(text_to_append) ############################################################################### # Use self.get_io_iterator() in every workflow you create. This creates # an ``IOIterator`` object that creates output file names and directory # structure based on the inputs and some other advanced output strategy # parameters. # # By iterating over the ``IOIterator`` object you created previously, you # conveniently get all input and output paths for every input file # found when globbing the input parameters. # # The code in the loop is the actual workflow processing code. It can be # anything. For example, here it just appends text to an input file. # # This is it for the workflow! Now to be able to call it easily via command # line, you need to add this workflow in 2 different files: # # - ``/pyproject.toml``: open this file and add the following line # to the ``[project.scripts]`` section: # ``dipy_append_text = "dipy.workflows.cli:run"`` # - ``/dipy/workflows/cli.py``: open this file and add the workflow # information to the ``cli_flows`` dictionary. The key is the name of the # command line command and the value is a tuple with the module name and the # workflow class name. In this case it would be: # ``"dipy_append_text": ("dipy.workflows.my_workflow", "AppendTextFlow")`` # # That's it! Now you can call your workflow from anywhere with the command line. # Let's call the script you just made with ``-h`` to see the argparser help # text:: # # dipy_append_text --help # # You should see all your parameters available along with some extra common # ones like logging file and force overwrite. Also all the documentation you # wrote about each parameter is there.
dipy-1.11.0/doc/examples_built/000077500000000000000000000000001476546756600163535ustar00rootroot00000000000000dipy-1.11.0/doc/examples_built/.gitignore000066400000000000000000000001451476546756600203430ustar00rootroot00000000000000# Ignore everything in this directory apart from gitignore and the README file * !.gitignore !README dipy-1.11.0/doc/examples_built/README000066400000000000000000000012301476546756600172270ustar00rootroot00000000000000Examples built README --------------------- This directory is a build-time container for the examples. The /tools/make_examples.py script takes the examples in /doc/examples, compiles the examples to rst files, builds all the graphics, and puts the result into this directory, along with copies of the .py files. The contents of this directory then gets included because the /doc/example_index.rst file points to this directory and the built .rst files. Please don't put anything in this directory other than the .gitignore (ignore everything) and this README. If you need something in here, please copy it from the make_examples.py script. dipy-1.11.0/doc/faq.rst000066400000000000000000000146651476546756600146530ustar00rootroot00000000000000.. _faq: ========================== Frequently Asked Questions ========================== ----------- Theoretical ----------- 1. **What is a b-value?** The b-value $b$ or *diffusion weighting* is a function of the strength, duration and temporal spacing and timing parameters of the specific paradigm. This function is derived from the Bloch-Torrey equations. In the case of the classical Stejskal-Tanner pulsed gradient spin-echo (PGSE) sequence, at the time of readout $b=\gamma^{2}G^{2}\delta^{2}\left(\Delta-\frac{\delta}{3}\right)$ where $\gamma$ is the gyromagnetic ratio, $\delta$ denotes the pulse width, $G$ is the gradient amplitude and $\Delta$ the center-to-center spacing. $\gamma$ is a constant, but we can change the other three parameters and in that way control the b-value. 2. **What is q-space?** Q-space is the space of one or more 3D spin displacement wave vectors $\mathbf{q}$. The vector $\mathbf{q}$ parametrizes the space of diffusion gradients. It is related to the applied magnetic gradient $\mathbf{g}$ by the formula $\mathbf{q}=(2\pi)^{-1}\gamma\delta\mathbf{g}$. Every single vector $\mathbf{q}$ has the same orientation as the direction of diffusion gradient $\mathbf{g}$ and length proportional to the strength $g$ of the gradient field. Every single point in q-space corresponds to a possible 3D volume of the MR signal for a specific gradient direction and strength. Therefore if, for example, we have programmed the scanner to apply 60 gradient directions, then our data should have 60 diffusion volumes, with each volume obtained for a specific gradient. A Diffusion Weighted Image (DWI) is the volume acquired from only one direction gradient. 3. **What does DWI stand for?** Diffusion Weighted Imaging (DWI) is MRI imaging designed to be sensitive to diffusion. A diffusion weighted image is a volume of voxel data gathered by applying only one gradient direction using a diffusion sequence. We expect that the signal in any voxel should be low if there is greater mobility of water molecules along the specified gradient direction and it should be high if there is less movement in that direction. Yes, it is counterintuitive but correct! However, greater mobility gives greater opportunity for the proton spins to be dephased, producing a smaller RF signal. 4. 
**Why dMRI and not DTI?** Diffusion MRI (dMRI or dwMRI) are the preferred terms if you want to speak about diffusion weighted MRI in general. DTI (diffusion tensor imaging) is just one of the many ways you can reconstruct the signal in each voxel from your measurements. There are plenty of others, for example DSI, GQI, QBI, etc. 5. **What is the difference between Image coordinates and World coordinates?** Image coordinates have positive integer values and represent the centres $(i, j, k)$ of the voxels. There is an affine transform (stored in the nifti file) that takes the image coordinates and transforms them into millimeter (mm) coordinates in real world space. World coordinates have floating point precision and your dataset has 3 real dimensions e.g. $(x, y, z)$. 6. **We generated dMRI datasets with nonisotropic voxel sizes. What do we do?** You need to resample your raw data to an isotropic size. Have a look at the module ``dipy.align.aniso2iso``. (We think it is a mistake to acquire nonisotropic data because the directional resolution of the data will depend on the orientation of the gradient with respect to the voxels, being lower when aligned with a longer voxel dimension.) 7. **Why are non-isotropic voxel sizes a bad idea in diffusion?** If, for example, you have $2 \times 2 \times 4\ \textrm{mm}^3$ voxels, the last dimension will be averaged over the double distance and less detail will be captured compared to the other two dimensions. Furthermore, with very anisotropic voxels the uncertainty on orientation estimates will depend on the position of the subject in the scanner. --------- Practical --------- 1. **Why Python and not MATLAB or some other language?** Python is free, batteries included, very well-designed, painless to read and easy to use. There is nothing else like it. Give it a go. Once with Python, always with Python. 2. **Isn't Python slow?** True, sometimes Python can be slow if you are using multiple nested ``for`` loops, for example. In that case, we use Cython_, which takes execution up to C speed. 3. **What numerical libraries do you use in Python?** The best ever designed numerical library - NumPy_. 4. **Which Python console do you recommend?** IPython_ 5. **What do you use for visualization?** For 3D visualization, we use ``dipy.viz`` which depends in turn on ``FURY``:: from dipy.viz import window, actor For 2D visualization we use matplotlib_. 6. **Which file formats do you support?** Nifti (.nii), Dicom (Siemens(read-only)), Trackvis (.trk), DIPY (.dpy), Numpy (.npy, .npz), text and any other formats supported by nibabel and pydicom. You can also read/save in Matlab version v4 (Level 1.0), v6 and v7 to 7.2, using `scipy.io.loadmat`. For higher versions >= 7.3, you can use pytables_ or any other python-to-hdf5 library, e.g. h5py. For object serialization, you can use ``dipy.io.pickles`` functions ``load_pickle``, ``save_pickle``. 7. **What is dpy?** ``dpy`` is an ``hdf5`` file format that we use in DIPY to store tractography and other information. This allows us to store huge tractograms and load different parts of the datasets directly from the disk as if they were in memory. 8. **Which python editor should I use?** Any text editor would do the job but we prefer the following: PyCharm, Sublime, Aptana, Emacs, Vim and Eclipse (with PyDev). 9. **I have problems reading my dicom files using nibabel, what should I do?** Use Chris Rorden's dcm2nii to transform them into nifti files. https://people.cas.sc.edu/rorden/mricron/dcm2nii.html Or you can make your own reader using pydicom.
https://pydicom.github.io/ and then use nibabel to store the data as nifti. 10. **Where can I find diffusion data?** There are many sources for openly available diffusion MRI data. The :mod:`dipy.data` module can be used to download some sample datasets that we use in our examples. In addition there are a lot of large research-grade datasets available through the following sources: - https://fcon_1000.projects.nitrc.org/ - https://www.humanconnectome.org/ - https://openneuro.org/ .. include:: links_names.inc dipy-1.11.0/doc/gimbal_lock.rst000066400000000000000000000154031476546756600163360ustar00rootroot00000000000000.. _gimbal-lock: ============= Gimbal lock ============= See also: https://en.wikipedia.org/wiki/Gimbal_lock Euler angles have a major deficiency: it is possible, in some rotation sequences, to reach a situation where two of the three Euler angles cause rotation around the same axis of the object. In the case below, rotation around the $x$ axis becomes indistinguishable in its effect from rotation around the $z$ axis, so the $z$ and $x$ axis angles collapse into one transformation, and the rotation reduces from three degrees of freedom to two. Imagine that we are using the Euler angle convention of starting with a rotation around the $x$ axis, followed by the $y$ axis, followed by the $z$ axis. Here we see a Spitfire aircraft, flying across the screen. The $x$ axis is left to right (tail to nose), the $y$ axis is from the left wing tip to the right wing tip (going away from the screen), and the $z$ axis is from bottom to top: .. image:: /_static/spitfire_0.png Imagine we wanted to do a slight roll with the left wing tilting down (rotation about $x$) like this: .. image:: /_static/spitfire_x.png followed by a violent pitch so we are pointing straight up (rotation around $y$ axis): .. image:: /_static/spitfire_y.png Now we'd like to do a turn of the nose towards the viewer (and the tail away from the viewer): .. image:: /_static/spitfire_hoped.png But, wait, let's go back over that again. Look at the result of the rotation around the $y$ axis. Notice that the $x$ axis, as was, is now aligned with the $z$ axis, as it is now. Rotating around the $z$ axis will have exactly the same effect as adding an extra rotation around the $x$ axis at the beginning. That means that when there is a $y$ axis rotation that rotates the $x$ axis onto the $z$ axis (a rotation of $\pm\pi/2$ around the $y$ axis) - the $x$ and $z$ axes are "locked" together. Mathematics of gimbal lock ========================== We see gimbal lock for this type of Euler axis convention, when $\cos(\beta) = 0$, where $\beta$ is the angle of rotation around the $y$ axis. By "this type of convention" we mean using rotation around all 3 of the $x$, $y$ and $z$ axes, rather than using the same axis twice - e.g. the physics convention of $z$ followed by $x$ followed by $z$ axis rotation (the physics convention has different properties to its gimbal lock). We can show how gimbal lock works by creating a rotation matrix for the three component rotations. Recall that, for a rotation of $\alpha$ radians around $x$, followed by a rotation $\beta$ around $y$, followed by rotation $\gamma$ around $z$, the rotation matrix $R$ is: ..
math:: R = \left(\begin{smallmatrix}\operatorname{cos}\left(\beta\right) \operatorname{cos}\left(\gamma\right) & - \operatorname{cos}\left(\alpha\right) \operatorname{sin}\left(\gamma\right) + \operatorname{cos}\left(\gamma\right) \operatorname{sin}\left(\alpha\right) \operatorname{sin}\left(\beta\right) & \operatorname{sin}\left(\alpha\right) \operatorname{sin}\left(\gamma\right) + \operatorname{cos}\left(\alpha\right) \operatorname{cos}\left(\gamma\right) \operatorname{sin}\left(\beta\right)\\\operatorname{cos}\left(\beta\right) \operatorname{sin}\left(\gamma\right) & \operatorname{cos}\left(\alpha\right) \operatorname{cos}\left(\gamma\right) + \operatorname{sin}\left(\alpha\right) \operatorname{sin}\left(\beta\right) \operatorname{sin}\left(\gamma\right) &- \operatorname{cos}\left(\gamma\right) \operatorname{sin}\left(\alpha\right) + \operatorname{cos}\left(\alpha\right) \operatorname{sin}\left(\beta\right) \operatorname{sin}\left(\gamma\right)\\- \operatorname{sin}\left(\beta\right) & \operatorname{cos}\left(\beta\right) \operatorname{sin}\left(\alpha\right) & \operatorname{cos}\left(\alpha\right) \operatorname{cos}\left(\beta\right)\end{smallmatrix}\right) When $\cos(\beta) = 0$, $\sin(\beta) = \pm1$ and $R$ simplifies to: .. math:: R = \left(\begin{smallmatrix}0 & - \operatorname{cos}\left(\alpha\right) \operatorname{sin}\left(\gamma\right) + \pm{1} \operatorname{cos}\left(\gamma\right) \operatorname{sin}\left(\alpha\right) & \operatorname{sin}\left(\alpha\right) \operatorname{sin}\left(\gamma\right) + \pm{1} \operatorname{cos}\left(\alpha\right) \operatorname{cos}\left(\gamma\right)\\0 & \operatorname{cos}\left(\alpha\right) \operatorname{cos}\left(\gamma\right) + \pm{1} \operatorname{sin}\left(\alpha\right) \operatorname{sin}\left(\gamma\right) & - \operatorname{cos}\left(\gamma\right) \operatorname{sin}\left(\alpha\right) + \pm{1} \operatorname{cos}\left(\alpha\right) \operatorname{sin}\left(\gamma\right)\\- \pm{1} & 0 & 0\end{smallmatrix}\right) When $\sin(\beta) = 1$: .. math:: R = \left(\begin{smallmatrix}0 & \operatorname{cos}\left(\gamma\right) \operatorname{sin}\left(\alpha\right) - \operatorname{cos}\left(\alpha\right) \operatorname{sin}\left(\gamma\right) & \operatorname{cos}\left(\alpha\right) \operatorname{cos}\left(\gamma\right) + \operatorname{sin}\left(\alpha\right) \operatorname{sin}\left(\gamma\right)\\0 & \operatorname{cos}\left(\alpha\right) \operatorname{cos}\left(\gamma\right) + \operatorname{sin}\left(\alpha\right) \operatorname{sin}\left(\gamma\right) & \operatorname{cos}\left(\alpha\right) \operatorname{sin}\left(\gamma\right) - \operatorname{cos}\left(\gamma\right) \operatorname{sin}\left(\alpha\right)\\-1 & 0 & 0\end{smallmatrix}\right) From the `angle sum and difference identities `_ (see also `geometric proof `_, `Mathworld treatment `_) we remind ourselves that, for any two angles $\alpha$ and $\beta$: .. math:: \sin(\alpha \pm \beta) = \sin \alpha \cos \beta \pm \cos \alpha \sin \beta \, \cos(\alpha \pm \beta) = \cos \alpha \cos \beta \mp \sin \alpha \sin \beta We can rewrite $R$ as: .. math:: R = \left(\begin{smallmatrix}0 & V_{1} & V_{2}\\0 & V_{2} & - V_{1}\\-1 & 0 & 0\end{smallmatrix}\right) where: .. 
math:: V_1 = \operatorname{cos}\left(\gamma\right) \operatorname{sin}\left(\alpha\right) - \operatorname{cos}\left(\alpha\right) \operatorname{sin}\left(\gamma\right) = \sin(\alpha - \gamma) \, V_2 = \operatorname{cos}\left(\alpha\right) \operatorname{cos}\left(\gamma\right) + \operatorname{sin}\left(\alpha\right) \operatorname{sin}\left(\gamma\right) = \cos(\alpha - \gamma) We immediately see that $\alpha$ and $\gamma$ are going to lead to the same transformation - the mathematical expression of the observation on the spitfire above, that rotation around the $x$ axis is equivalent to rotation about the $z$ axis. It's easy to do the same set of reductions, with the same conclusion, for the case where $\sin(\beta) = -1$ - see http://www.gregslabaugh.name/publications/euler.pdf. dipy-1.11.0/doc/glossary.rst000066400000000000000000000071031476546756600157340ustar00rootroot00000000000000========== Glossary ========== .. glossary:: Affine matrix A matrix implementing an :term:`affine transformation` in :term:`homogeneous coordinates`. For a 3 dimensional transform, the matrix is shape 4 by 4. Affine transformation See `wikipedia affine`_ definition. An affine transformation is a :term:`linear transformation` followed by a translation. Axis angle A representation of rotation. See: `wikipedia axis angle`_ . From Euler's rotation theorem we know that any rotation or sequence of rotations can be represented by a single rotation about an axis. The axis $\boldsymbol{\hat{u}}$ is a :term:`unit vector`. The angle is $\theta$. The :term:`rotation vector` is a more compact representation of $\theta$ and $\boldsymbol{\hat{u}}$. Euclidean norm Also called Euclidean length, or L2 norm. The Euclidean norm $\|\mathbf{x}\|$ of a vector $\mathbf{x}$ is given by: .. math:: \|\mathbf{x}\| := \sqrt{x_1^2 + \cdots + x_n^2} Pure Pythagoras. Euler angles See: `wikipedia Euler angles`_ and `Mathworld Euler angles`_. Gimbal lock See :ref:`gimbal-lock` Homogeneous coordinates See `wikipedia homogeneous coordinates`_ Linear transformation A linear transformation is one that preserves lines - that is, if any three points are on a line before transformation, they are also on a line after transformation. See `wikipedia linear transform`_. Rotation, scaling and shear are linear transformations. Quaternion See: `wikipedia quaternion`_. An extension of the complex numbers that can represent a rotation. Quaternions have 4 values, $w, x, y, z$. $w$ is the *real* part of the quaternion and the vector $x, y, z$ is the *vector* part of the quaternion. Quaternions are less intuitive to visualize than :term:`Euler angles` but do not suffer from :term:`gimbal lock` and are often used for rapid interpolation of rotations. Reflection A transformation that can be thought of as transforming an object to its mirror image. The mirror in the transformation is a plane. A plane can be defined with a point and a vector normal to the plane. See `wikipedia reflection`_. Rotation matrix See `wikipedia rotation matrix`_. A rotation matrix is a matrix implementing a rotation. Rotation matrices are square and orthogonal. That means that the rotation matrix $R$ has columns and rows that are :term:`unit vector`, and where $R^T R = I$ ($R^T$ is the transpose and $I$ is the identity matrix). Therefore $R^T = R^{-1}$ ($R^{-1}$ is the inverse). Rotation matrices also have a determinant of $1$. Rotation vector A representation of an :term:`axis angle` rotation.
The angle $\theta$ and unit vector axis $\boldsymbol{\hat{u}}$ are stored in a *rotation vector* $\boldsymbol{u}$, such that: .. math:: \theta = \|\boldsymbol{u}\| \, \boldsymbol{\hat{u}} = \frac{\boldsymbol{u}}{\|\boldsymbol{u}\|} where $\|\boldsymbol{u}\|$ is the :term:`Euclidean norm` of $\boldsymbol{u}$ Shear matrix Square matrix that results in shearing transforms - see `wikipedia shear matrix`_. Unit vector A vector $\boldsymbol{\hat{u}}$ with a :term:`Euclidean norm` of 1. Normalized vector is a synonym. The "hat" over the $\boldsymbol{\hat{u}}$ is a convention to express the fact that it is a unit vector. .. include:: links_names.inc dipy-1.11.0/doc/index.rst000066400000000000000000000075551476546756600152110ustar00rootroot00000000000000.. _home: Diffusion Imaging In Python - Documentation =========================================== .. container:: index-paragraph DIPY_ is the paragon 3D/4D+ imaging library in Python. Contains generic methods for spatial normalization, signal processing, machine learning, statistical analysis and visualization of medical images. Additionally, it contains specialized methods for computational anatomy including diffusion, perfusion and structural imaging. DIPY is part of the `NiPy ecosystem `__. *********** Quick links *********** .. grid:: 2 :gutter: 3 .. grid-item-card:: :octicon:`rocket` Get started :link: user_guide/installation :link-type: any New to DIPY_? Start with our installation guide and DIPY key concepts. .. grid-item-card:: :octicon:`image` Tutorials :link: examples_built/index :link-type: any Browse our tutorials gallery. .. grid-item-card:: :octicon:`image` Recipes :link: recipes :link-type: ref How do I do X in DIPY? This dedicated section will provide you with quick and direct answers. .. grid-item-card:: :octicon:`zap` Workflows :link: interfaces/index :link-type: any Not comfortable with coding? We have command line interfaces for you. An easy way to use DIPY_ via a terminal. .. grid-item-card:: :octicon:`rocket` Theory :link: theory/index :link-type: any Back to the basics. Learn the theory behind the methods implemented in DIPY_. .. grid-item-card:: :octicon:`tools` Developer Guide :link: development :link-type: any Saw a typo? Found a bug? Want to improve a function? Learn how to contribute to DIPY_! .. grid-item-card:: :octicon:`repo` API reference :link: reference/index :link-type: any A detailed description of the DIPY public Python API. .. grid-item-card:: :octicon:`repo` Workflows API reference :link: reference_cmd/index :link-type: any A detailed description of all the DIPY command line workflows. .. grid-item-card:: :octicon:`history` Release notes :link: stateoftheart :link-type: ref Upgrading from a previous version? See what's new and changed between each release of DIPY_. .. grid-item-card:: :octicon:`comment-discussion` Get help :octicon:`link-external` :link: https://github.com/dipy/dipy/discussions Need help with your processing? Ask us and a large neuroimaging community. ********** Highlights ********** **DIPY 1.11.0** is now available. New features include: - NF: Refactoring of the tracking API. - Deprecation of Tensorflow backend in favor of PyTorch. - Performance improvements of multiple functionalities. - DIPY Horizon improvements and minor features added. - Added support for Python 3.13. - Drop support for Python 3.9. - Multiple Workflows updated and added (15 workflows). - Documentation update. - Closed 73 issues and merged 47 pull requests. See :ref:`Older Highlights `.
************* Announcements ************* - :doc:`DIPY 1.11.0 ` released March 15, 2025. - :doc:`DIPY 1.10.0 ` released December 12, 2024. - :doc:`DIPY 1.9.0 ` released March 8, 2024. See some of our :ref:`Past Announcements ` .. This tree is helping populate the side navigation panel .. toctree:: :maxdepth: 2 :hidden: user_guide/index examples_built/index interfaces/index devel/index theory/index reference/index reference_cmd/index recipes api_changes stateoftheart old_highlights old_news glossary developers gimbal_lock faq cite subscribe .. Main content will be displayed using the jinja template .. include:: links_names.inc dipy-1.11.0/doc/interfaces/000077500000000000000000000000001476546756600154615ustar00rootroot00000000000000dipy-1.11.0/doc/interfaces/basic_flow.rst000066400000000000000000000256231476546756600203310ustar00rootroot00000000000000.. _basic_flow: ====================================================== Introduction to command line interfaces ====================================================== This tutorial provides a basic introduction to DIPY's :footcite:p:`Garyfallidis2014a` command line interfaces. Using a terminal, let's download a dataset. This is a multi-shell dataset, which was kindly provided by Hansen and Jespersen (more details about the data are provided in their paper :footcite:p:`Hansen2016a`). For this tutorial we will use a Linux terminal; please adapt accordingly if you are using Mac or Windows. First let's create a folder:: mkdir data_folder Download the data into the data_folder:: dipy_fetch cfin_multib --out_dir data_folder Move to the folder with the data:: cd data_folder/cfin_multib ls __DTI_AX_ep2d_2_5_iso_33d_20141015095334_4.bval __DTI_AX_ep2d_2_5_iso_33d_20141015095334_4.bvec __DTI_AX_ep2d_2_5_iso_33d_20141015095334_4.nii T1.nii Let's rename the long filenames to something that is easier to read:: mv __DTI_AX_ep2d_2_5_iso_33d_20141015095334_4.bval dwi.bval mv __DTI_AX_ep2d_2_5_iso_33d_20141015095334_4.bvec dwi.bvec mv __DTI_AX_ep2d_2_5_iso_33d_20141015095334_4.nii dwi.nii We can use ``dipy_info`` to check the bval and nii files :: dipy_info *.bval *.nii INFO:---------------------------------------------------------- INFO:Looking at __DTI_AX_ep2d_2_5_iso_33d_20141015095334_4.bval INFO:---------------------------------------------------------- INFO:b-values [ 0. 200. 200. 200. 200. 200. 200. 200. 200. 200. 200. 200. 200. 200. 200. 200. 200. 200. 200. 200. 200. 200. 200. 200. 200. 200. 200. 200. 200. 200. 200. 200. 200. 200. 400. 400.
400. 400. 400. 400. 400. 400. 400. 400. 400. 400. 400. 400. 400. 400. 400. 400. 400. 400. 400. 400. 400. 400. 400. 400. 400. 400. 400. 400. 400. 400. 400. 600. 600. 600. 600. 600. 600. 600. 600. 600. 600. 600. 600. 600. 600. 600. 600. 600. 600. 600. 600. 600. 600. 600. 600. 600. 600. 600. 600. 600. 600. 600. 600. 600. 800. 800. 800. 800. 800. 800. 800. 800. 800. 800. 800. 800. 800. 800. 800. 800. 800. 800. 800. 800. 800. 800. 800. 800. 800. 800. 800. 800. 800. 800. 800. 800. 800. 1000. 1000. 1000. 1000. 1000. 1000. 1000. 1000. 1000. 1000. 1000. 1000. 1000. 1000. 1000. 1000. 1000. 1000. 1000. 1000. 1000. 1000. 1000. 1000. 1000. 1000. 1000. 1000. 1000. 1000. 1000. 1000. 1000. 1200. 1200. 1200. 1200. 1200. 1200. 1200. 1200. 1200. 1200. 1200. 1200. 1200. 1200. 1200. 1200. 1200. 1200. 1200. 1200. 1200. 1200. 1200. 1200. 1200. 1200. 1200. 1200. 1200. 1200. 1200. 1200. 1200. 1400. 1400. 1400. 1400. 1400. 1400. 1400. 1400. 1400. 1400. 1400. 1400. 1400. 1400. 1400. 1400. 1400. 1400. 1400. 1400. 1400. 1400. 1400. 1400. 1400. 1400. 1400. 1400. 1400. 1400. 1400. 1400. 1400. 1600. 1600. 1600. 1600. 1600. 1600. 1600. 1600. 1600. 1600. 1600. 1600. 1600. 1600. 1600. 1600. 1600. 1600. 1600. 1600. 1600. 1600. 1600. 1600. 1600. 1600. 1600. 1600. 1600. 1600. 1600. 1600. 1600. 1800. 1800. 1800. 1800. 1800. 1800. 1800. 1800. 1800. 1800. 1800. 1800. 1800. 1800. 1800. 1800. 1800. 1800. 1800. 1800. 1800. 1800. 1800. 1800. 1800. 1800. 1800. 1800. 1800. 1800. 1800. 1800. 1800. 2000. 2000. 2000. 2000. 2000. 2000. 2000. 2000. 2000. 2000. 2000. 2000. 2000. 2000. 2000. 2000. 2000. 2000. 2000. 2000. 2000. 2000. 2000. 2000. 2000. 2000. 2000. 2000. 2000. 2000. 2000. 2000. 2000. 2200. 2200. 2200. 2200. 2200. 2200. 2200. 2200. 2200. 2200. 2200. 2200. 2200. 2200. 2200. 2200. 2200. 2200. 2200. 2200. 2200. 2200. 2200. 2200. 2200. 2200. 2200. 2200. 2200. 2200. 2200. 2200. 2200. 2400. 2400. 2400. 2400. 2400. 2400. 2400. 2400. 2400. 2400. 2400. 2400. 2400. 2400. 2400. 2400. 2400. 2400. 2400. 2400. 2400. 2400. 2400. 2400. 2400. 2400. 2400. 2400. 2400. 2400. 2400. 2400. 2400. 2600. 2600. 2600. 2600. 2600. 2600. 2600. 2600. 2600. 2600. 2600. 2600. 2600. 2600. 2600. 2600. 2600. 2600. 2600. 2600. 2600. 2600. 2600. 2600. 2600. 2600. 2600. 2600. 2600. 2600. 2600. 2600. 2600. 2800. 2800. 2800. 2800. 2800. 2800. 2800. 2800. 2800. 2800. 2800. 2800. 2800. 2800. 2800. 2800. 2800. 2800. 2800. 2800. 2800. 2800. 2800. 2800. 2800. 2800. 2800. 2800. 2800. 2800. 2800. 2800. 2800. 3000. 3000. 3000. 3000. 3000. 3000. 3000. 3000. 3000. 3000. 3000. 3000. 3000. 3000. 3000. 3000. 3000. 3000. 3000. 3000. 3000. 3000. 3000. 3000. 3000. 3000. 3000. 3000. 3000. 3000. 3000. 3000. 3000.] INFO:Total number of b-values 496 INFO:Number of gradient shells 15 INFO:Number of b0s 1 (b0_thr 50) INFO:----------------- INFO:Looking at T1.nii INFO:----------------- INFO:Data size (256, 256, 176) INFO:Data type uint16 INFO:Data min 0 max 888 avg 62.940581408413976 INFO:2nd percentile 0.0 98th percentile 377.0 INFO:Native coordinate system PSR INFO:Affine Native to RAS matrix [[ 0. 0.01 1. -89.569] [ -1. 0. 0. 138.451] [ 0. 1. -0.01 -131.289] [ 0. 0. 0. 1. ]] INFO:Voxel size [1. 1. 1.] 
    INFO:------------------
    INFO:Looking at dwi.nii
    INFO:------------------
    INFO:Data size (96, 96, 19, 496)
    INFO:Data type int16
    INFO:Data min 0 max 1257 avg 58.62918037280702 of vol 0
    INFO:2nd percentile 0.0 98th percentile 234.0 of vol 0
    INFO:Native coordinate system LAS
    INFO:Affine Native to RAS matrix
    [[  -2.498    0.084    0.067  113.641]
     [   0.069    2.451   -0.488 -104.142]
     [   0.082    0.486    2.451  -31.504]
     [   0.       0.       0.       1.   ]]
    INFO:Voxel size [2.5 2.5 2.5]

We can visualize the data using ``dipy_horizon``::

    dipy_horizon dwi.nii

.. figure:: https://github.com/dipy/dipy_data/blob/master/cfin_basic1.png?raw=true
   :width: 70 %
   :alt: alternate text
   :align: center

   Visualization of a slice from the first volume of the diffusion data

We can use ``dipy_median_otsu`` to build a brain mask for the diffusion
data::

    dipy_median_otsu dwi.nii --median_radius 2 --numpass 1 --vol_idx 10-50 --out_dir out_work

Visualize the mask using ``dipy_horizon``::

    dipy_horizon out_work/brain_mask.nii.gz

.. figure:: https://github.com/dipy/dipy_data/blob/master/cfin_basic2.png?raw=true
   :width: 70 %
   :alt: alternate text
   :align: center

   Visualization of a slice from the generated brain mask

Perform DTI using ``dipy_fit_dti``. The inputs of this command are the DWI
data, the b-values and b-vectors files, and the brain mask that we calculated
in the previous step::

    dipy_fit_dti dwi.nii dwi.bval dwi.bvec out_work/brain_mask.nii.gz --out_dir out_work/

The default options of the script generate the following files: ad.nii.gz,
evecs.nii.gz, md.nii.gz, rgb.nii.gz, fa.nii.gz, mode.nii.gz, tensors.nii.gz,
evals.nii.gz, ga.nii.gz and rd.nii.gz.

Visualize the DEC map::

    dipy_horizon out_work/rgb.nii.gz --rgb

.. figure:: https://github.com/dipy/dipy_data/blob/master/cfin_basic3.png?raw=true
   :width: 70 %
   :alt: alternate text
   :align: center

   Visualization of a slice from the first volume of the DEC image

We can now move to more advanced reconstruction models. One of the fastest we
can use is Constant Solid Angle (CSA) :footcite:p:`Aganj2010`::

    dipy_fit_csa dwi.nii dwi.bval dwi.bvec out_work/brain_mask.nii.gz --out_dir out_work/

Now, to move on to tracking, we will need some seeds. We can generate seeds
in the following way::

    dipy_mask out_work/fa.nii.gz 0.4 --out_dir out_work/ --out_mask seed_mask.nii.gz

Build a tractogram with the ``peaks.pam5`` file as input, using the fast EuDX
algorithm :footcite:p:`Garyfallidis2012b`::

    dipy_track out_work/peaks.pam5 out_work/fa.nii.gz out_work/seed_mask.nii.gz --out_dir out_work/ --out_tractogram tracks_from_peaks.trk --tracking_method eudx

We can visualize the result using ``dipy_horizon``::

    dipy_horizon out_work/tracks_from_peaks.trk

.. figure:: https://github.com/dipy/dipy_data/blob/master/some_tracks.png?raw=true
   :width: 70 %
   :alt: alternate text
   :align: center

   Tracks from this dataset. Note that this dataset contains only a few
   slices.
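If you prefer to sanity-check the generated maps from Python rather than with
``dipy_info``, a minimal sketch (assuming the ``out_work`` outputs created
above) could look like this:

.. code-block:: python

    import numpy as np

    from dipy.io.image import load_nifti

    # Load the FA map written by dipy_fit_dti
    fa, affine = load_nifti("out_work/fa.nii.gz")

    print("shape:", fa.shape, "dtype:", fa.dtype)
    print("FA range:", float(np.nanmin(fa)), "to", float(np.nanmax(fa)))
    print("mean FA in nonzero voxels:", float(fa[fa > 0].mean()))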
For more information about each command line, try calling the ``-h`` flag,
for example::

    dipy_horizon -h

should provide the available options::

    usage: dipy_horizon [-h] [--cluster] [--cluster_thr float] [--random_colors]
                        [--length_gt float] [--length_lt float]
                        [--clusters_gt int] [--clusters_lt int]
                        [--native_coords] [--stealth] [--emergency_header str]
                        [--bg_color [float [float ...]]]
                        [--disable_order_transparency] [--out_dir str]
                        [--out_stealth_png str] [--force] [--version]
                        [--out_strat string] [--mix_names]
                        [--log_level string] [--log_file string]
                        input_files [input_files ...]

Otherwise please see :ref:`workflows_reference`.

The commands shown in this tutorial are by no means a complete tracking
solution; they are merely an introduction to DIPY's command line interfaces.
Medical imaging requires a number of steps that depend on the goal of the
analysis strategy. Nonetheless, if you are using these commands, do cite the
relevant papers to support the DIPY developers so that they can continue
maintaining and extending these tools.

References
----------

.. footbibliography::

.. _buan_flow:

=================================
BUndle ANalytics (BUAN) framework
=================================

This tutorial walks through the steps for reproducing Bundle Analytics
:footcite:p:`Chandio2020a` results on Parkinson's Progression Markers
Initiative (PPMI) :footcite:p:`Marek2011` data derivatives. Bundle Analytics
is a framework for comparing bundle profiles and shapes of different groups.
In this example, we will be comparing healthy controls and patients with
Parkinson's disease. We will be using PPMI data derivatives generated using
DIPY :footcite:p:`Garyfallidis2014a`.

First, we need to download a streamline atlas :footcite:p:`Yeh2018` with 30
white matter bundles in MNI space from ``_

For this tutorial we will be using a test sample of DIPY Processed
Parkinson's Progression Markers Initiative (PPMI) Data Derivatives. It can be
downloaded from the link below: ``_

.. note::

    If you prefer to run experiments on the complete dataset to reproduce the
    results of the paper :footcite:p:`Chandio2020a`, please see the
    "Reproducing results on a larger dataset" section at the end of the page
    for more information.

The Bundle Analytics group comparison framework has two parts: bundle profile
analysis and bundle shape similarity analysis.

-----------------------------------
Group Comparison of Bundle Profiles
-----------------------------------

To generate the bundle profile data (saved as .h5 files), you must have
downloaded the ``bundles`` folder with the 30 atlas bundles and the subjects
folder with the PPMI data derivatives. The following workflows require a
specific input directory structure, but the data you downloaded is already in
the required format. We will be using the ``bundles`` folder downloaded from
the streamline atlas link and the ``subjects_small`` folder downloaded from
the test data link.
The ``bundles`` folder has the following model bundles::

    bundles/
    ├── AF_L.trk
    ├── AF_R.trk
    ├── CCMid.trk
    ├── CC_ForcepsMajor.trk
    ├── CC_ForcepsMinor.trk
    ├── CST_L.trk
    ├── CST_R.trk
    ├── EMC_L.trk
    ├── EMC_R.trk
    ├── FPT_L.trk
    ├── FPT_R.trk
    ├── IFOF_L.trk
    ├── IFOF_R.trk
    ├── ILF_L.trk
    ├── ILF_R.trk
    ├── MLF_L.trk
    ├── MLF_R.trk
    ├── ML_L.trk
    ├── ML_R.trk
    ├── MdLF_L.trk
    ├── MdLF_R.trk
    ├── OPT_L.trk
    ├── OPT_R.trk
    ├── OR_L.trk
    ├── OR_R.trk
    ├── STT_L.trk
    ├── STT_R.trk
    ├── UF_L.trk
    ├── UF_R.trk
    └── V.trk

The ``subjects_small`` directory has the following structure::

    subjects_small
    ├── control
    │   ├── 3805
    │   │   ├── anatomical_measures
    │   │   ├── org_bundles
    │   │   └── rec_bundles
    │   ├── 3806
    │   │   ├── anatomical_measures
    │   │   ├── org_bundles
    │   │   └── rec_bundles
    │   ├── 3809
    │   │   ├── anatomical_measures
    │   │   ├── org_bundles
    │   │   └── rec_bundles
    │   ├── 3850
    │   │   ├── anatomical_measures
    │   │   ├── org_bundles
    │   │   └── rec_bundles
    │   └── 3851
    │       ├── anatomical_measures
    │       ├── org_bundles
    │       └── rec_bundles
    └── patient
        ├── 3383
        │   ├── anatomical_measures
        │   ├── org_bundles
        │   └── rec_bundles
        ├── 3385
        │   ├── anatomical_measures
        │   ├── org_bundles
        │   └── rec_bundles
        ├── 3387
        │   ├── anatomical_measures
        │   ├── org_bundles
        │   └── rec_bundles
        ├── 3392
        │   ├── anatomical_measures
        │   ├── org_bundles
        │   └── rec_bundles
        └── 3552
            ├── anatomical_measures
            ├── org_bundles
            └── rec_bundles

And each subject folder has the following structure::

    ├── anatomical_measures
    │   ├── ad.nii.gz
    │   ├── csa_peaks.pam5
    │   ├── fa.nii.gz
    │   ├── md.nii.gz
    │   └── rd.nii.gz
    ├── org_bundles
    │   ├── streamlines_moved_AF_L__labels__recognized_orig.trk
    │   ├── streamlines_moved_AF_R__labels__recognized_orig.trk
    │   ├── streamlines_moved_CCMid__labels__recognized_orig.trk
    │   . . .
    │   ├── streamlines_moved_UF_L__labels__recognized_orig.trk
    │   ├── streamlines_moved_UF_R__labels__recognized_orig.trk
    │   └── streamlines_moved_V__labels__recognized_orig.trk
    └── rec_bundles
        ├── moved_AF_L__recognized.trk
        ├── moved_AF_R__recognized.trk
        ├── moved_CCMid__recognized.trk
        . . .
        ├── moved_UF_L__recognized.trk
        ├── moved_UF_R__recognized.trk
        └── moved_V__recognized.trk

If you want to run this tutorial on your own data, make sure that the
directory structure is the same as shown above. The ``anatomical_measures``
folder has NIfTI files for DTI measures such as FA and MD, as well as CSA/CSD
pam5 files. The ``org_bundles`` folder has extracted bundles in native space.
The ``rec_bundles`` folder has extracted bundles in common space.

.. note::

    Make sure all the output folders are empty and do not get overwritten.

Create an ``out_dir`` folder (e.g.: bundle_profiles)::

    mkdir bundle_profiles

Run the following workflow::

    dipy_buan_profiles bundles/ subjects_small/ --out_dir "bundle_profiles"

To run Linear Mixed Models (LMM) on the .h5 files generated in the previous
step, create an ``out_dir`` folder (e.g.: lmm_plots)::

    mkdir lmm_plots

And run the following workflow::

    dipy_buan_lmm "bundle_profiles/*" --out_dir "lmm_plots"

This workflow will generate 30 bundle group-comparison plots per anatomical
measure. Plots will look like the following example:
.. figure:: https://github.com/dipy/dipy_data/blob/master/AF_L_fa.png?raw=true
   :width: 70 %
   :alt: alternate text
   :align: center

   Result plot for the left arcuate fasciculus (AF_L) on the FA measure

We can also visualize and highlight the specific location of group
differences on the bundle by providing the output p-values file from the
dipy_buan_lmm workflow. The user can specify at what level of significance
they want to see group differences by providing a threshold p-value via
``buan_thr`` (default 0.05). The color of the highlighted area can be
specified by providing RGB color values via ``buan_highlight`` (default red).

Run the following command line for visualizing group differences on the model
bundle::

    dipy_horizon bundles/AF_L.trk lmm_plots/AF_L_fa_pvalues.npy --buan --buan_thr 0.05

Here, ``AF_L.trk`` is located in your model bundle folder ``bundles``, and
``AF_L_fa_pvalues.npy`` is saved in the output folder ``lmm_plots`` of the
dipy_buan_lmm workflow.

The output of this command line is an interactive visualization window.
Example snapshot:

.. figure:: https://github.com/dipy/dipy_data/blob/master/AF_L_highlighted.png?raw=true
   :width: 70 %
   :alt: alternate text
   :align: center

   Result plot for the left arcuate fasciculus (AF_L) with a highlighted area
   on the bundle in red color. The highlighted area represents the segments
   on bundles with significant group differences that have p-values < 0.05.

Let's use a different highlight color this time, on the ``CST_L`` bundle::

    dipy_horizon bundles/CST_L.trk lmm_plots/CST_L_fa_pvalues.npy --buan --buan_thr 0.05 --buan_highlight 1 1 0

.. figure:: https://github.com/dipy/dipy_data/blob/master/CST_L_highlighted.png?raw=true
   :width: 50 %
   :alt: alternate text
   :align: center

   Result plot for the left corticospinal tract (CST_L) with a highlighted
   area on the bundle in yellow color. The highlighted area represents the
   segments on bundles with significant group differences that have
   p-values < 0.05.

-----------------------------------------------------------
Shape similarity of specific bundles across the populations
-----------------------------------------------------------

Create an ``out_dir`` folder (e.g.: sm_plots)::

    mkdir sm_plots

Run the following workflow::

    dipy_buan_shapes subjects_small/ --out_dir "sm_plots"

This workflow will generate 30 bundle shape similarity plots. The shape
similarity score ranges between 0 and 1, with 1 being the highest similarity
and 0 the lowest. Plots will look like the following example:

.. figure:: https://github.com/dipy/dipy_data/blob/master/SM_moved_UF_R__recognized.png?raw=true
   :width: 50 %
   :alt: alternate text
   :align: center

   Result plot for the right uncinate fasciculus (UF_R) for 10 subjects. The
   first 5 subjects belong to the healthy control group and the last 5
   subjects belong to the patient group. On the diagonal, we have a shape
   similarity score of 1, as it is calculated between a bundle and itself.
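The scores in these plots come from DIPY's bundle shape similarity metric,
which is also exposed in the Python API. A rough sketch for a single pair of
bundles follows; the exact signature of ``bundle_shape_similarity`` and the
``clust_thr``/``threshold`` values used here may differ slightly across DIPY
versions:

.. code-block:: python

    import numpy as np

    from dipy.io.streamline import load_trk
    from dipy.segment.bundles import bundle_shape_similarity

    # Two recognized AF_L bundles in common space, one subject per group
    b1 = load_trk("subjects_small/control/3805/rec_bundles/moved_AF_L__recognized.trk",
                  reference="same", bbox_valid_check=False).streamlines
    b2 = load_trk("subjects_small/patient/3383/rec_bundles/moved_AF_L__recognized.trk",
                  reference="same", bbox_valid_check=False).streamlines

    rng = np.random.RandomState()
    # Score in [0, 1]; 1 means the two bundles have identical shapes
    score = bundle_shape_similarity(b1, b2, rng, clust_thr=[5, 3, 1.5],
                                    threshold=6)
    print("shape similarity:", score)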
----------------------------------------
Reproducing results on a larger dataset
----------------------------------------

The complete dataset of DIPY Processed Parkinson's Progression Markers
Initiative (PPMI) Data Derivatives can be downloaded from the link below:
``_

Please note that this is a large data file and might take some time to
download and process. If you only want to test the workflows, use the test
sample data. All steps will be the same as described above, except that the
downloaded data will be in a folder named ``subjects`` instead of
``subjects_small``.

For more information about each command line, you can go to ``_. If you are
using any of these commands, do cite the relevant papers.

----------
References
----------

.. footbibliography::

.. _bundle_segmentation_flow:

=================================================
White Matter Bundle Segmentation with RecoBundles
=================================================

This tutorial explains how we can use RecoBundles
:footcite:p:`Garyfallidis2018` to extract bundles from input tractograms.

First, we need to download a reference streamline atlas. Here, we downloaded
an atlas with 30 bundles in MNI space :footcite:p:`Yeh2018` from: ``_

For this tutorial, you can use your own tractography data or you can download
a single subject tractogram from: ``_

Let's say we have an input target tractogram named ``streamlines.trk`` and
the atlas we downloaded, named ``whole_brain_MNI.trk``.

Visualizing the target and atlas tractograms before registration::

    dipy_horizon "streamlines.trk" "whole_brain_MNI.trk" --random_colors

.. figure:: https://github.com/dipy/dipy_data/blob/master/tractograms_initial.png?raw=true
   :width: 70 %
   :alt: alternate text
   :align: center

   Atlas and target tractograms before registration.

------------------------------------
Streamline-Based Linear Registration
------------------------------------

To extract the bundles from the tractogram, we first need to move our target
tractogram into the same space as the atlas (MNI, in this case). We can
directly register the target tractogram to the space of the atlas, using
streamline-based linear registration (SLR) :footcite:p:`Garyfallidis2015`.

The following workflow requires two positional input arguments: the
``static`` and ``moving`` .trk files. In our case, the ``static`` input is
the atlas and the ``moving`` one is our target tractogram
(``streamlines.trk``).

Run the following workflow::

    dipy_slr "whole_brain_MNI.trk" "streamlines.trk" --force

By default, the SLR workflow will save the transformed tractogram as
``moved.trk``.

Visualizing the target and atlas tractograms after registration::

    dipy_horizon "moved.trk" "whole_brain_MNI.trk" --random_colors

.. figure:: https://github.com/dipy/dipy_data/blob/master/tractograms_after_registration.png?raw=true
   :width: 70 %
   :alt: alternate text
   :align: center

   Atlas and target tractograms after registration.

-----------
RecoBundles
-----------

Create an ``out_dir`` folder (e.g., ``rb_output``), into which the output
will be placed::

    mkdir rb_output

For the RecoBundles workflow, we will use the 30 model bundles downloaded
earlier. Run the following workflow::

    dipy_recobundles "moved.trk" "bundles/*.trk" --force --mix_names --out_dir "rb_output"

This workflow will extract 30 bundles from the tractogram. Example of an
extracted left arcuate fasciculus (AF_L) bundle (visualized with
``dipy_horizon``):

.. figure:: https://github.com/dipy/dipy_data/blob/master/AF_L_rb.png?raw=true
   :width: 70 %
   :alt: alternate text
   :align: center

   Extracted left arcuate fasciculus (AF_L) from the input tractogram

Example of the extracted left arcuate fasciculus (AF_L) bundle visualized
along with the model AF_L bundle used as reference in RecoBundles:

.. figure:: https://github.com/dipy/dipy_data/blob/master/AF_L_rb_with_model.png?raw=true
   :width: 70 %
   :alt: alternate text
   :align: center

   Extracted left arcuate fasciculus (AF_L) in pink and the model AF_L bundle
   in green color.
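Both steps of this pipeline (SLR, then RecoBundles) are also available
through the Python API. A rough sketch follows, assuming the files used
above; the threshold values are illustrative and keyword names may vary
slightly between DIPY versions:

.. code-block:: python

    from dipy.align.streamlinear import whole_brain_slr
    from dipy.io.stateful_tractogram import StatefulTractogram
    from dipy.io.streamline import load_trk, save_trk
    from dipy.segment.bundles import RecoBundles

    atlas = load_trk("whole_brain_MNI.trk", reference="same",
                     bbox_valid_check=False)
    target = load_trk("streamlines.trk", reference="same",
                      bbox_valid_check=False)

    # Streamline-based linear registration of the target to the atlas space
    moved, transform, qb_centroids1, qb_centroids2 = whole_brain_slr(
        atlas.streamlines, target.streamlines, x0="affine", progressive=True)

    # Recognize one model bundle (AF_L) in the moved tractogram
    model = load_trk("bundles/AF_L.trk", reference="same",
                     bbox_valid_check=False)
    rb = RecoBundles(moved, verbose=True)
    recognized, labels = rb.recognize(model_bundle=model.streamlines,
                                      model_clust_thr=5., reduction_thr=10)

    sft = StatefulTractogram.from_sft(recognized, atlas)
    save_trk(sft, "rb_output/AF_L_recognized.trk", bbox_valid_check=False)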
The output of RecoBundles will be in the same space as the atlas. To get the
bundles in the subject's original (native) space, run the following
commands::

    mkdir org_output
    dipy_labelsbundles 'streamlines.trk' 'rb_output/*.npy' --mix_names --out_dir "org_output"

For more information about each command line, please visit the DIPY website
``_. If you are using any of these commands, please be sure to cite the
relevant papers and DIPY :footcite:p:`Garyfallidis2014a`.

----------
References
----------

.. footbibliography::

.. _bundlewarp_registration_flow:

=========================================================
Nonrigid White Matter Bundle Registration with BundleWarp
=========================================================

This tutorial explains how we can use BundleWarp :footcite:p:`Chandio2023` to
nonlinearly register two bundles.

First, we need to download static and moving bundles for this tutorial. Here,
we download two uncinate fasciculus bundles of the left hemisphere of the
brain from: ``_

Let's say we have a moving bundle (the bundle to be registered) named
``m_UF_L.trk`` and a static/fixed bundle named ``s_UF_L.trk``.

Visualizing the moving and static bundles before registration::

    dipy_horizon "m_UF_L.trk" "s_UF_L.trk" --random_colors

.. figure:: https://github.com/dipy/dipy_data/blob/master/before_bw_registration.png?raw=true
   :width: 70 %
   :alt: alternate text
   :align: center

   Moving bundle in green and static bundle in pink before registration.

BundleWarp provides the capacity to either partially or fully deform the
moving bundle using a single regularization parameter, alpha (represented as
λ in the BundleWarp paper). Alpha controls the trade-off between regularizing
the deformation and having the points match very closely; the lower the value
of alpha, the more closely the bundles will match. Here, we investigate how
to warp the moving bundle with different levels of deformation using the
BundleWarp registration method :footcite:p:`Chandio2023`.

--------------------------------------------
Partially Deformable BundleWarp Registration
--------------------------------------------

Here, we partially deform/warp the moving bundle to align it with the static
bundle. Partial deformations improve on linear registration while preserving
the anatomical shape and structure of the moving bundle. Here, we use a
relatively high value of alpha=0.5. By default, BundleWarp partially deforms
the bundle to preserve the key characteristics of the original bundle.

The following BundleWarp workflow requires two positional input arguments:
the ``static`` and ``moving`` .trk files. In our case, the ``static`` input
bundle is ``s_UF_L.trk``, and the ``moving`` one is ``m_UF_L.trk``.

Run the following workflow::

    dipy_bundlewarp "s_UF_L.trk" "m_UF_L.trk" --alpha 0.5 --force

By default, the BundleWarp workflow will save the nonlinearly transformed
bundle as ``nonlinearly_moved.trk``.

Visualizing the moved and static bundles after registration::

    dipy_horizon "nonlinearly_moved.trk" "s_UF_L.trk" --random_colors

.. figure:: https://github.com/dipy/dipy_data/blob/master/partially_deformable_bw_registration.png?raw=true
   :width: 70 %
   :alt: alternate text
   :align: center

   Partially moved bundle in green and static bundle in pink after
   registration.
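The same partial deformation can be scripted with the ``bundlewarp`` function
from ``dipy.align.streamwarp``. A minimal sketch, assuming both bundles are
resampled to the same number of points; the return values and defaults shown
here may differ slightly across DIPY versions:

.. code-block:: python

    from dipy.align.streamwarp import bundlewarp
    from dipy.io.streamline import load_trk
    from dipy.tracking.streamline import Streamlines, set_number_of_points

    static = load_trk("s_UF_L.trk", reference="same",
                      bbox_valid_check=False).streamlines
    moving = load_trk("m_UF_L.trk", reference="same",
                      bbox_valid_check=False).streamlines

    # BundleWarp expects streamlines with the same number of points
    static = Streamlines(set_number_of_points(static, 20))
    moving = Streamlines(set_number_of_points(moving, 20))

    # alpha=0.5 -> partial deformation preserving the bundle's overall shape
    deformed, moving_aligned, distances, match_pairs, warp = bundlewarp(
        static, moving, alpha=0.5, beta=20)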
----------------------------------------
Fully Deformable BundleWarp Registration
----------------------------------------

Here, we fully deform/warp the moving bundle to align it completely with the
static bundle, using a lower value of alpha=0.01. NOTE: Be cautious when
setting a low value of alpha, as it can completely change the original
anatomical shape of the moving bundle.

Run the following workflow::

    dipy_bundlewarp "s_UF_L.trk" "m_UF_L.trk" --alpha 0.01 --force

By default, the BundleWarp workflow will save the nonlinearly transformed
bundle as ``nonlinearly_moved.trk``.

Visualizing the moved and static bundles after registration::

    dipy_horizon "nonlinearly_moved.trk" "s_UF_L.trk" --random_colors

.. figure:: https://github.com/dipy/dipy_data/blob/master/fully_deformable_bw_registration.png?raw=true
   :width: 70 %
   :alt: alternate text
   :align: center

   Fully moved bundle in green and static bundle in pink after registration.

For more information about each command line, please visit the DIPY website
``_. If you are using any of these commands, please be sure to cite the
relevant papers and DIPY :footcite:p:`Garyfallidis2014a`.

----------
References
----------

.. footbibliography::

.. _data_fetch:

-------------------------
Downloading DIPY datasets
-------------------------

In this tutorial, you will learn how to download the DIPY datasets using the
terminal.

First, let's create a folder::

    mkdir data_folder

You can list all available datasets in DIPY using the command::

    dipy_fetch list

In order to fetch a specific dataset, you will need to specify its name to
the ``dipy_fetch`` command, and an optional destination directory using the
``--out_dir`` argument::

    dipy_fetch {specific_dataset} --out_dir {specific_data_out_folder}

For example, to download the ``sherbrooke_3shell`` dataset, you would run::

    dipy_fetch sherbrooke_3shell --out_dir data_folder

You can find details about all the datasets available in DIPY in :ref:`data`.

.. _denoise_flow:

=========
Denoising
=========

This tutorial walks through the steps to denoise diffusion-weighted MR images
using DIPY. Multiple denoising methods are available in DIPY. You can try
these methods using your own data; we will be using the data in DIPY. You can
check how to :ref:`fetch the DIPY data`.

--------------------------------------
Denoising using Overcomplete Local PCA
--------------------------------------

Denoising algorithms based on principal components analysis (PCA) are
effective denoising methods because they explore the redundancy of the
multi-dimensional information of diffusion-weighted datasets. The basic idea
behind the PCA-based denoising algorithms is to perform a low-rank
approximation by thresholding the eigenspectrum of the noisy signal matrix.

The algorithm to perform an Overcomplete Local PCA-based (LPCA) denoising
involves the following steps, sketched in the snippet below:

* Estimating the local noise variance at each voxel.

* Applying a PCA in local patches around each voxel over the gradient
  directions.

* Thresholding the eigenvalues based on the local estimate of the noise
  variance, and then doing a PCA reconstruction.
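In the Python API, the first step corresponds to ``pca_noise_estimate`` and
the remaining two to ``localpca``. A minimal sketch, assuming the
``stanford_hardi`` files used below (exact keyword names may vary with the
DIPY version):

.. code-block:: python

    from dipy.core.gradients import gradient_table
    from dipy.denoise.localpca import localpca
    from dipy.denoise.pca_noise_estimate import pca_noise_estimate
    from dipy.io.gradients import read_bvals_bvecs
    from dipy.io.image import load_nifti, save_nifti

    data, affine = load_nifti("data/stanford_hardi/HARDI150.nii.gz")
    bvals, bvecs = read_bvals_bvecs("data/stanford_hardi/HARDI150.bval",
                                    "data/stanford_hardi/HARDI150.bvec")
    gtab = gradient_table(bvals, bvecs=bvecs)

    # Step 1: local noise variance estimate
    sigma = pca_noise_estimate(data, gtab, correct_bias=True, smooth=3)

    # Steps 2-3: local PCA, eigenvalue thresholding and reconstruction
    denoised = localpca(data, sigma, tau_factor=2.3, patch_radius=2)
    save_nifti("denoise_lpca_output/dwi_lpca.nii.gz", denoised, affine)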
The Overcomplete Local PCA algorithm turns out to work well on diffusion MRI
owing to the 4D structure of DWI acquisitions, where the q-space is typically
oversampled, giving highly correlated 3D volumes of the same subject.

To illustrate the Overcomplete Local PCA denoising method, we will be using
the ``stanford_hardi`` dataset, but you can also use your own data.

The workflow for the LPCA denoising requires the paths to the diffusion input
file, as well as the b-values and b-vectors files. You may want to create an
output directory to save the denoised data, e.g.::

    mkdir denoise_lpca_output

To run the Overcomplete Local PCA denoising on the data it suffices to
execute the ``dipy_denoise_lpca`` command, e.g.::

    dipy_denoise_lpca data/stanford_hardi/HARDI150.nii.gz data/stanford_hardi/HARDI150.bval data/stanford_hardi/HARDI150.bvec --out_dir "denoise_lpca_output"

This command will denoise the diffusion image and save it to the
``denoise_lpca_output`` directory, defaulting the resulting image file name
to ``dwi_lpca.nii.gz``. In case the output directory (``out_dir``) parameter
is not specified, the denoised diffusion image will be saved to the current
directory by default.

Note: Depending on the parameter values, the effect of the denoising can
range from hardly noticeable to clearly visible. Users are encouraged to
choose the parameters carefully.

.. |image1| image:: https://github.com/dipy/dipy_data/blob/master/stanford_hardi_original.png?raw=true
   :align: middle

.. |image2| image:: https://github.com/dipy/dipy_data/blob/master/stanford_hardi_denoise_LPCA.png?raw=true
   :align: middle

+--------------------+--------------------+
| Before Denoising   | After Denoising    |
+====================+====================+
| |image1|           | |image2|           |
+--------------------+--------------------+

-----------------------------------
Denoising using Marcenko-Pastur PCA
-----------------------------------

The principal components classification can be performed based on prior
noise variance estimates or automatically based on the Marcenko-Pastur
distribution. In addition to noise suppression, the Marcenko-Pastur PCA
(MPPCA) algorithm can be used to get the standard deviation of the noise.

We will use the ``sherbrooke_3shell`` dataset in DIPY to showcase this
denoising method. As with any other workflow in DIPY, you can also use your
own data!

We will create a directory in which to save the denoised image (e.g.:
``denoise_mppca_output``)::

    mkdir denoise_mppca_output

In order to run the MPPCA denoising method, we need to specify the location
of the diffusion data file, followed by the optional arguments. In this case,
we will be specifying the patch radius value and the output directory.

The MPPCA denoising method is run using the ``dipy_denoise_mppca`` command,
e.g.::

    dipy_denoise_mppca data/sherbrooke_3shell/HARDI193.nii.gz --patch_radius 10 --out_dir "denoise_mppca_output"

This command will denoise the diffusion image and save it to the specified
output directory.
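The equivalent Python call is ``mppca``; a short sketch (the output file name
here is just an example, and the patch radius is a typical value rather than
the one used in the command above):

.. code-block:: python

    from dipy.denoise.localpca import mppca
    from dipy.io.image import load_nifti, save_nifti

    data, affine = load_nifti("data/sherbrooke_3shell/HARDI193.nii.gz")

    # MPPCA denoising; optionally also returns the noise standard deviation
    denoised, sigma = mppca(data, patch_radius=2, return_sigma=True)
    save_nifti("denoise_mppca_output/dwi_mppca.nii.gz", denoised, affine)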
.. |image3| image:: https://github.com/dipy/dipy_data/blob/master/sherbrooke_3shell_original.png?raw=true
   :align: middle

.. |image4| image:: https://github.com/dipy/dipy_data/blob/master/sherbrooke_3shell_denoise_MPPCA.png?raw=true
   :align: middle

+--------------------+--------------------+
| Before Denoising   | After Denoising    |
+====================+====================+
| |image3|           | |image4|           |
+--------------------+--------------------+

-----------------------
Denoising using NLMEANS
-----------------------

In the Non-Local Means algorithm (NLMEANS) :footcite:p:`Coupe2008`,
:footcite:p:`Coupe2012`, the value of a pixel is replaced by an average of a
set of other pixel values: the specific patches centered on the other pixels
are contrasted to the patch centered on the pixel of interest, and the
average only applies to pixels with patches close to the current patch. This
algorithm can also preserve fine textures, which are distorted by other
denoising algorithms.

The Non-Local Means method can be used to denoise $N$-D image data (i.e., 2D,
3D, 4D, etc.), and thus enhance its SNR.

We will use the ``cfin_multib`` dataset in DIPY to showcase this denoising
method. As with any other workflow in DIPY, you can also use your own data!

In order to run the NLMEANS denoising method, we need to specify the location
of the diffusion data file, followed by the optional arguments. In this case,
we will be specifying the noise standard deviation estimate (``sigma``) and
patch radius values, and the output directory.

We will create a directory in which to save the denoised image (e.g.:
``denoise_nlmeans_output``)::

    mkdir denoise_nlmeans_output

The NLMEANS denoising is performed using the ``dipy_denoise_nlmeans``
command, e.g.::

    dipy_denoise_nlmeans data/cfin_multib/__DTI_AX_ep2d_2_5_iso_33d_20141015095334_4.nii --sigma 2 --patch_radius 2 --out_dir "denoise_nlmeans_output"

The command will denoise the input diffusion volume and write the result to
the specified output directory.

.. |image5| image:: https://github.com/dipy/dipy_data/blob/master/cfin_multib_original.png?raw=true
   :align: middle

.. |image6| image:: https://github.com/dipy/dipy_data/blob/master/cfin_multib_denoise_NLMEANS.png?raw=true
   :align: middle

+--------------------+--------------------+
| Before Denoising   | After Denoising    |
+====================+====================+
| |image5|           | |image6|           |
+--------------------+--------------------+
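From Python, the same denoising can be sketched with ``estimate_sigma`` and
``nlmeans`` (DIPY also offers a ``non_local_means`` variant); the ``N`` coil
count below is an assumption about the acquisition, not a value taken from
the dataset:

.. code-block:: python

    from dipy.denoise.nlmeans import nlmeans
    from dipy.denoise.noise_estimate import estimate_sigma
    from dipy.io.image import load_nifti, save_nifti

    data, affine = load_nifti(
        "data/cfin_multib/__DTI_AX_ep2d_2_5_iso_33d_20141015095334_4.nii")

    # Estimate the noise standard deviation, then denoise
    sigma = estimate_sigma(data, N=4)  # N: assumed number of receiver coils
    denoised = nlmeans(data, sigma=sigma, patch_radius=2, block_radius=2,
                       rician=True)
    save_nifti("denoise_nlmeans_output/dwi_nlmeans.nii.gz", denoised, affine)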
-----------------------------
Overview of Denoising Methods
-----------------------------

Note: Users are recommended to zoom in (click on each image) to see the
denoising effect.

.. |image7| image:: https://github.com/dipy/dipy_data/blob/master/sherbrooke_3shell_original.png?raw=true
   :align: middle

.. |image8| image:: https://github.com/dipy/dipy_data/blob/master/sherbrooke_denoise_LPCA.png?raw=true
   :align: middle

.. |image9| image:: https://github.com/dipy/dipy_data/blob/master/sherbrooke_3shell_denoise_MPPCA.png?raw=true
   :align: middle

.. |image10| image:: https://github.com/dipy/dipy_data/blob/master/sherbrooke_denoise_NLMEANS.png?raw=true
   :align: middle

.. |image11| image:: https://github.com/dipy/dipy_data/blob/master/stanford_hardi_original.png?raw=true
   :align: middle

.. |image12| image:: https://github.com/dipy/dipy_data/blob/master/stanford_hardi_denoise_LPCA.png?raw=true
   :align: middle

.. |image13| image:: https://github.com/dipy/dipy_data/blob/master/stanford_hardi_denoise_MPPCA.png?raw=true
   :align: middle

.. |image14| image:: https://github.com/dipy/dipy_data/blob/master/stanford_hardi_denoise_NLMEANS.png?raw=true
   :align: middle

.. |image15| image:: https://github.com/dipy/dipy_data/blob/master/cfin_multib_original.png?raw=true
   :align: middle

.. |image16| image:: https://github.com/dipy/dipy_data/blob/master/cfin_multib_LPCA.png?raw=true
   :align: middle

.. |image17| image:: https://github.com/dipy/dipy_data/blob/master/cfin_multib_denoise_MPPCA.png?raw=true
   :align: middle

.. |image18| image:: https://github.com/dipy/dipy_data/blob/master/cfin_multib_denoise_NLMEANS.png?raw=true
   :align: middle

.. |image19| image:: https://github.com/dipy/dipy_data/blob/master/stanford_hardi_t1_original.png?raw=true
   :align: middle

.. |image20| image:: https://github.com/dipy/dipy_data/blob/master/stanford_hardi_t1_NLMEANS.png?raw=true
   :align: middle

Diffusion
---------

+--------------------+--------------------+--------------------+--------------------+--------------------+
| Dataset            | Original Image     | Denoise LPCA       | Denoise MPPCA      | Denoise NLMEANS    |
+====================+====================+====================+====================+====================+
| sherbrooke_3shell  | |image7|           | |image8|           | |image9|           | |image10|          |
+--------------------+--------------------+--------------------+--------------------+--------------------+
| stanford_hardi     | |image11|          | |image12|          | |image13|          | |image14|          |
+--------------------+--------------------+--------------------+--------------------+--------------------+
| cfin_multib        | |image15|          | |image16|          | |image17|          | |image18|          |
+--------------------+--------------------+--------------------+--------------------+--------------------+

Structural
----------

+--------------------+--------------------+--------------------+
| Dataset            | Original Image     | Denoise NLMEANS    |
+====================+====================+====================+
| stanford_hardi T1  | |image19|          | |image20|          |
+--------------------+--------------------+--------------------+

----------
References
----------

.. footbibliography::

.. _gibbs_unringing_flow:

===============
Gibbs Unringing
===============

This tutorial walks through the steps to perform Gibbs unringing using DIPY.
You can try this using your own data; we will be using the data in DIPY. You
can check how to :ref:`fetch the DIPY data`.

---------------------------
Suppress Gibbs Oscillations
---------------------------

Magnetic Resonance (MR) images are reconstructed from the Fourier
coefficients of acquired k-space images. Since only a finite number of
Fourier coefficients can be acquired in practice, reconstructed MR images can
be corrupted by Gibbs artefacts, which are manifested as intensity
oscillations adjacent to the edges of different tissue types
:footcite:p:`Veraart2016a`. Although this artefact affects MR images in
general, in the context of diffusion-weighted imaging, Gibbs oscillations can
be magnified in derived diffusion-based estimates :footcite:p:`Veraart2016a`,
:footcite:p:`Perrone2015`.

We will use the ``tissue_data`` dataset in DIPY to showcase the ability to
remove Gibbs ringing artefacts. As with any other workflow in DIPY, you can
also use your own data!

You may want to create an output directory to save the resulting Gibbs
ringing-free volume::

    mkdir gibbs_ringing_output
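The same correction is available from Python through ``gibbs_removal``; a
minimal sketch writing into this directory (the output file name is
illustrative):

.. code-block:: python

    from dipy.denoise.gibbs import gibbs_removal
    from dipy.io.image import load_nifti, save_nifti

    data, affine = load_nifti("data/tissue_data/t1_brain_denoised.nii.gz")

    # Remove Gibbs ringing slice by slice along the given axis
    corrected = gibbs_removal(data, slice_axis=2, num_processes=4)
    save_nifti("gibbs_ringing_output/t1_unringed.nii.gz", corrected, affine)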
To run the Gibbs unringing method from the command line, we need to specify
the path to the input data. This path may contain wildcards to process
multiple inputs at once. You can also specify the optional arguments. In this
case, we will be specifying the number of processes (``num_processes``) and
the output directory (``out_dir``).

The number of processes allows one to exploit the available computational
power and accelerate the processing. The maximum number of processes
available depends on the CPU of the computer, so users are expected to set an
appropriate value based on their platform. Set ``num_processes`` to ``-1`` to
use all available cores.

To run the Gibbs unringing on the data it suffices to execute the
``dipy_gibbs_ringing`` command, e.g.::

    dipy_gibbs_ringing data/tissue_data/t1_brain_denoised.nii.gz --num_processes 4 --out_dir "gibbs_ringing_output"

This command will apply the Gibbs unringing procedure to the input MR image
and write the artefact-free result to the ``gibbs_ringing_output`` directory.
In case no output directory is specified, the Gibbs artefact-free output
volume is saved to the current directory by default.

Note: Users are recommended to zoom in on each image (by clicking on it) to
see the Gibbs unringing effect.

.. |image1| image:: https://github.com/dipy/dipy_data/blob/master/t1_brain_denoised_gibbs_unringing.png?raw=true
   :align: middle

.. |image2| image:: https://github.com/dipy/dipy_data/blob/master/t1_brain_denoised_gibbs_unringing_after.png?raw=true
   :align: middle

+--------------------------+--------------------------+
| Before Gibbs unringing   | After Gibbs unringing    |
+==========================+==========================+
| |image1|                 | |image2|                 |
+--------------------------+--------------------------+

----------
References
----------

.. footbibliography::

.. _workflow_interfaces:

=========================
DIPY Workflows Interfaces
=========================

.. toctree::

   basic_flow.rst
   data_fetch.rst
   denoise_flow.rst
   motion_flow.rst
   gibbs_unringing_flow.rst
   registration_flow.rst
   reconstruction_flow.rst
   tracking_flow.rst
   buan_flow.rst
   bundle_segmentation_flow.rst
   bundlewarp_registration_flow.rst
   viz_flow.rst

.. _motion_correction_flow:

=================================
Between-Volumes Motion Correction
=================================

This tutorial walks through the steps to perform between-volumes motion
correction using DIPY. You can try this using your own data; we will be using
the data in DIPY. You can check how to :ref:`fetch the DIPY data`.

-----------------
Motion Correction
-----------------

During a dMRI acquisition, subject motion is inevitable. This motion implies
a misalignment between the N volumes of a dMRI dataset. A common way to solve
this issue is to register each acquired volume to a reference b = 0 volume
:footcite:p:`Jenkinson2001`. This preprocessing is a highly recommended step
that should be executed before any dMRI dataset analysis.

We will use the ``sherbrooke_3shell`` dataset in DIPY to showcase the motion
correction process. As with any other workflow in DIPY, you can also use your
own data!

You may want to create an output directory to save the resulting corrected
dataset::

    mkdir motion_output

To run the motion correction method, we need to specify the paths of the
input data, b-values file and b-vectors file. These paths may contain
wildcards to process multiple inputs at once. You can also specify the
optional arguments. In this case, we will just be specifying the output
directory (``out_dir``).
For more information, use the help option (-h) to obtain all optional
arguments.

To run the motion correction on the data it suffices to execute the
``dipy_correct_motion`` command, e.g.::

    dipy_correct_motion data/sherbrooke_3shell/HARDI193.nii.gz data/sherbrooke_3shell/HARDI193.bval data/sherbrooke_3shell/HARDI193.bvec --out_dir "motion_output"

This command will apply the motion correction procedure to the input MR image
and write the motion-corrected result to the ``motion_output`` directory. In
case no output directory is specified, the corrected output volume is saved
to the current directory by default.

----------
References
----------

.. footbibliography::

.. _reconstruction_flow:

==============
Reconstruction
==============

This tutorial walks through the steps to perform reconstruction using DIPY.
Multiple reconstruction methods are available in DIPY. You can try these
methods using your own data; we will be using the data in DIPY. You can check
how to :ref:`fetch the DIPY data`.

-----------------------------------------
Constrained Spherical Deconvolution (CSD)
-----------------------------------------

This method is mainly useful for datasets whose gradient directions, acquired
in Cartesian coordinates, can be resampled to spherical coordinates so that
spherical deconvolution (SD) methods can be used.

The basic idea of spherical deconvolution methods lies in the fact that the
underlying fiber distribution can be obtained by deconvolving the measured
diffusion signal with a fiber response function, provided that we are able to
accurately estimate the latter. In this way, the reconstruction of the fiber
orientation distribution function (fODF) in CSD involves two steps:

* Estimation of the fiber response function.

* Use of the response function to reconstruct the fODF.

We will be using the ``stanford_hardi`` dataset for CSD command line
interface demonstration purposes. We will start by creating the directories
in which to save the peaks volume (e.g.: ``recons_csd_output``) and mask file
(e.g.: ``stanford_hardi_mask``)::

    mkdir recons_csd_output stanford_hardi_mask

The workflow for the CSD reconstruction method requires the paths to the
diffusion input file, b-values file, b-vectors file and mask file. The
optional arguments can also be provided. In this case, we will specify the FA
threshold for calculating the response function (``fa_thr``), the spherical
harmonics order ($l$) used in the CSD fit (``sh_order_max``), whether to use
parallelization in peak-finding during the calibration procedure or not
(``parallel``), and the output directory (``out_dir``).

To get the mask file, we will use the median Otsu thresholding method by
calling the ``dipy_median_otsu`` command::

    dipy_median_otsu data/stanford_hardi/HARDI150.nii.gz --vol_idx 10-50 --out_dir "stanford_hardi_mask"

Then, to perform the CSD reconstruction we will run the ``dipy_fit_csd``
command as::

    dipy_fit_csd data/stanford_hardi/HARDI150.nii.gz data/stanford_hardi/HARDI150.bval data/stanford_hardi/HARDI150.bvec stanford_hardi_mask/brain_mask.nii.gz --fa_thr 0.7 --sh_order_max 8 --parallel --out_dir "recons_csd_output"

This command will save the CSD metrics to the specified output directory.
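The two steps listed above map directly onto the Python API: response
estimation with ``auto_response_ssst`` and deconvolution with
``ConstrainedSphericalDeconvModel``. A rough sketch (``sh_order_max`` is the
keyword in recent DIPY versions; older ones use ``sh_order``):

.. code-block:: python

    from dipy.core.gradients import gradient_table
    from dipy.data import default_sphere
    from dipy.direction import peaks_from_model
    from dipy.io.gradients import read_bvals_bvecs
    from dipy.io.image import load_nifti
    from dipy.reconst.csdeconv import (ConstrainedSphericalDeconvModel,
                                       auto_response_ssst)

    data, affine = load_nifti("data/stanford_hardi/HARDI150.nii.gz")
    mask, _ = load_nifti("stanford_hardi_mask/brain_mask.nii.gz")
    bvals, bvecs = read_bvals_bvecs("data/stanford_hardi/HARDI150.bval",
                                    "data/stanford_hardi/HARDI150.bvec")
    gtab = gradient_table(bvals, bvecs=bvecs)

    # Step 1: estimate the fiber response (single shell, single tissue)
    response, ratio = auto_response_ssst(gtab, data, roi_radii=10, fa_thr=0.7)

    # Step 2: deconvolve the signal with the response to obtain fODF peaks
    csd_model = ConstrainedSphericalDeconvModel(gtab, response,
                                                sh_order_max=8)
    csd_peaks = peaks_from_model(csd_model, data, default_sphere,
                                 relative_peak_threshold=0.5,
                                 min_separation_angle=25,
                                 mask=mask.astype(bool), parallel=True)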
----------------------------------
Mean Apparent Propagator (MAP)-MRI
----------------------------------

The MAP-MRI basis allows for the computation of directional indices, such as
the Return To the Axis Probability (RTAP), the Return To the Plane
Probability (RTPP), and the parallel and perpendicular non-Gaussianity.

The estimation of the analytical Orientation Distribution Function (ODF) and
a variety of scalar indices from noisy DWIs requires that the fitting of the
MAP-MRI basis be regularized.

We will use the ``cfin_multib`` dataset in DIPY to showcase this
reconstruction method. You can also use your own data!

We will create the output directory in which to save the MAP-MRI metrics
(e.g.: ``recons_mapmri_output``)::

    mkdir recons_mapmri_output

The Mean Apparent Propagator reconstruction method requires the paths to the
diffusion input file, b-values file and b-vectors file, as well as the small
delta and big delta values. The optional parameters can also be provided. In
this case, we will be specifying the threshold used to find b0 volumes
(``b0_threshold``) and the output directory (``out_dir``).

To run the MAP-MRI reconstruction method on the data, execute the
``dipy_fit_mapmri`` command, e.g.::

    dipy_fit_mapmri data/cfin_multib/__DTI_AX_ep2d_2_5_iso_33d_20141015095334_4.nii data/cfin_multib/__DTI_AX_ep2d_2_5_iso_33d_20141015095334_4.bval data/cfin_multib/__DTI_AX_ep2d_2_5_iso_33d_20141015095334_4.bvec 0.0157 0.0365 --b0_threshold 80.0 --out_dir recons_mapmri_output

This command will save the MAP-MRI metrics to the specified output directory.

------------------------------
Diffusion Tensor Imaging (DTI)
------------------------------

The diffusion tensor model describes the diffusion within a voxel. First
proposed by Basser and colleagues :footcite:p:`Basser1994a`, it has been very
influential in demonstrating the utility of diffusion MRI in characterizing
the micro-structure of white matter tissue and the biophysical properties of
tissue inferred from local diffusion properties, and it is still very
commonly used.

The diffusion tensor models the diffusion signal as:

.. math::

    \frac{S(\mathbf{g}, b)}{S_0} = e^{-b\mathbf{g}^T \mathbf{D} \mathbf{g}}

where $\mathbf{g}$ is a unit vector in 3D space indicating the direction of
measurement and $b$ encodes the parameters of the measurement, such as the
strength and duration of the diffusion-weighting gradient.
$S(\mathbf{g}, b)$ is the measured diffusion-weighted signal and $S_0$ is the
signal acquired in a measurement with no diffusion weighting. $\mathbf{D}$ is
a positive-definite quadratic form, which contains six free parameters to be
fit. These six parameters are:

.. math::

    \mathbf{D} = \begin{pmatrix} D_{xx} & D_{xy} & D_{xz} \\
                                 D_{yx} & D_{yy} & D_{yz} \\
                                 D_{zx} & D_{zy} & D_{zz} \\ \end{pmatrix}

This matrix is a variance/covariance matrix of the diffusivity along the
three spatial dimensions. Note that we can assume that diffusivity has
antipodal symmetry, so elements across the diagonal are equal. For example,
$D_{xy} = D_{yx}$. This is why there are only 6 free parameters to estimate
here.

We will use the ``stanford_hardi`` dataset in DIPY to showcase this
reconstruction method. As with any other workflow in DIPY, you can also use
your own data!
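Before turning to the command line, here is a minimal Python sketch of
fitting the six free parameters of $\mathbf{D}$ and saving two derived maps
(file locations as in the workflow below):

.. code-block:: python

    import numpy as np

    import dipy.reconst.dti as dti
    from dipy.core.gradients import gradient_table
    from dipy.io.gradients import read_bvals_bvecs
    from dipy.io.image import load_nifti, save_nifti

    data, affine = load_nifti("data/stanford_hardi/HARDI150.nii.gz")
    mask, _ = load_nifti("stanford_hardi_mask/brain_mask.nii.gz")
    bvals, bvecs = read_bvals_bvecs("data/stanford_hardi/HARDI150.bval",
                                    "data/stanford_hardi/HARDI150.bvec")
    gtab = gradient_table(bvals, bvecs=bvecs)

    # Fit the tensor in every voxel inside the brain mask
    tenfit = dti.TensorModel(gtab).fit(data, mask=mask.astype(bool))

    save_nifti("recons_dti_output/fa.nii.gz",
               tenfit.fa.astype(np.float32), affine)
    save_nifti("recons_dti_output/md.nii.gz",
               tenfit.md.astype(np.float32), affine)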
We will first create a directory in which to save the output volumes (e.g.:
``recons_dti_output``)::

    mkdir recons_dti_output

To run the Diffusion Tensor Imaging reconstruction method, we need to specify
the paths to the diffusion input file, b-values file, b-vectors file and mask
file, followed by optional arguments. In this case, we will be specifying the
list of metrics to save (``save_metrics``), the output directory
(``out_dir``), and the name of the tensors volume to be saved
(``out_tensor``).

The DTI reconstruction is performed by calling the ``dipy_fit_dti`` command,
e.g.::

    dipy_fit_dti data/stanford_hardi/HARDI150.nii.gz data/stanford_hardi/HARDI150.bval data/stanford_hardi/HARDI150.bvec stanford_hardi_mask/brain_mask.nii.gz --save_metrics "md" "mode" "tensor" --out_dir "recons_dti_output" --out_tensor "dti_tensors.nii.gz"

This command will save the DTI metrics to the specified output directory. The
tensors will be saved as 4D data, with the last dimension representing
(Dxx, Dxy, Dyy, Dxz, Dyz, Dzz).

--------------------------------
Diffusion Kurtosis Imaging (DKI)
--------------------------------

The diffusion kurtosis model is an expansion of the diffusion tensor model.
In addition to the diffusion tensor (DT), the diffusion kurtosis model
quantifies the degree to which water diffusion in biological tissues is
non-Gaussian using the kurtosis tensor (KT) :footcite:p:`Jensen2005`.

Measurements of non-Gaussian diffusion from the diffusion kurtosis model are
of interest because they can be used to characterize tissue microstructural
heterogeneity :footcite:p:`Jensen2010`. Moreover, DKI can be used to:

* Derive concrete biophysical parameters, such as the density of axonal
  fibers and diffusion tortuosity :footcite:p:`Fieremans2011`.

* Resolve crossing fibers in tractography and obtain invariant rotational
  measures not limited to well-aligned fiber populations
  :footcite:p:`NetoHenriques2015`.

We will use the ``cfin_multib`` dataset in DIPY to showcase this
reconstruction method. You can also use your own data!

We will create the directories in which to save the DKI metrics (e.g.:
``recons_dki_output``) and mask file (e.g.: ``cfin_multib_mask``)::

    mkdir recons_dki_output cfin_multib_mask

The Diffusion Kurtosis Imaging reconstruction method requires the paths to
the diffusion input file, b-values file, b-vectors file and mask file. The
optional parameters can also be provided. In this case, we will be specifying
the threshold used to find b0 volumes (``b0_threshold``) and the output
directory (``out_dir``).

To get the mask file, we will use the median Otsu thresholding method by
calling the ``dipy_median_otsu`` command::

    dipy_median_otsu data/cfin_multib/__DTI_AX_ep2d_2_5_iso_33d_20141015095334_4.nii --vol_idx 3-6 --out_dir "cfin_multib_mask"

To run the DKI reconstruction method on the data, execute the
``dipy_fit_dki`` command, e.g.::

    dipy_fit_dki data/cfin_multib/__DTI_AX_ep2d_2_5_iso_33d_20141015095334_4.nii data/cfin_multib/__DTI_AX_ep2d_2_5_iso_33d_20141015095334_4.bval data/cfin_multib/__DTI_AX_ep2d_2_5_iso_33d_20141015095334_4.bvec cfin_multib_mask/brain_mask.nii.gz --b0_threshold 70.0 --out_dir recons_dki_output

This command will save the DKI metrics to the specified output directory.
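The corresponding Python sketch uses ``DiffusionKurtosisModel``; the kurtosis
bounds passed to ``mk`` below are a commonly used clipping range, not
mandatory values:

.. code-block:: python

    import dipy.reconst.dki as dki
    from dipy.core.gradients import gradient_table
    from dipy.io.gradients import read_bvals_bvecs
    from dipy.io.image import load_nifti

    dwi = "data/cfin_multib/__DTI_AX_ep2d_2_5_iso_33d_20141015095334_4"
    data, affine = load_nifti(dwi + ".nii")
    mask, _ = load_nifti("cfin_multib_mask/brain_mask.nii.gz")
    bvals, bvecs = read_bvals_bvecs(dwi + ".bval", dwi + ".bvec")
    gtab = gradient_table(bvals, bvecs=bvecs, b0_threshold=70.0)

    dkifit = dki.DiffusionKurtosisModel(gtab).fit(data,
                                                  mask=mask.astype(bool))

    fa = dkifit.fa        # kurtosis-model fractional anisotropy
    mk = dkifit.mk(0, 3)  # mean kurtosis, clipped to the [0, 3] range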
--------------------------
Constant Solid Angle (CSA)
--------------------------

We are using the ``stanford_hardi`` dataset. As with any other workflow in
DIPY, you can also use your own data!

We will create a directory in which to save the peaks volume (e.g.:
``recons_csa_output``)::

    mkdir recons_csa_output

The workflow for the CSA reconstruction method requires the paths to the
diffusion input file, b-values file, b-vectors file and mask file. The
optional arguments can also be provided. In this case, we will be specifying
whether or not to save the pam volumes as single nifti files
(``extract_pam_values``) and the output directory (``out_dir``).

Then, to perform the CSA reconstruction we will run the ``dipy_fit_csa``
command as::

    dipy_fit_csa data/stanford_hardi/HARDI150.nii.gz data/stanford_hardi/HARDI150.bval data/stanford_hardi/HARDI150.bvec stanford_hardi_mask/brain_mask.nii.gz --extract_pam_values --out_dir "recons_csa_output"

This command will save the CSA metrics to the specified output directory.

-----------------------------------
Intravoxel Incoherent Motion (IVIM)
-----------------------------------

The intravoxel incoherent motion (IVIM) model describes diffusion and
perfusion in the signal acquired with a diffusion MRI sequence that contains
multiple low b-values. The IVIM model can be understood as an adaptation of
the work of :footcite:t:`Stejskal1965` in biological tissue, and was proposed
by :footcite:t:`LeBihan1988`. The model assumes two compartments: a slow
moving compartment, where particles diffuse in a Brownian fashion as a
consequence of thermal energy, and a fast moving compartment (the vascular
compartment), where blood moves as a consequence of a pressure gradient. In
the first compartment, the diffusion coefficient is $\mathbf{D}$, while in
the second compartment a pseudo diffusion term $\mathbf{D^*}$ is introduced
that describes the displacement of the blood elements in an assumed randomly
laid out vascular network, at the macroscopic level. According to
:footcite:p:`LeBihan1988`, $\mathbf{D^*}$ is greater than $\mathbf{D}$.

We will be using the ``ivim`` dataset for IVIM command line interface
demonstration purposes. We will start by creating the directories in which to
save the output volumes (e.g.: ``recons_ivim_output``) and mask file (e.g.:
``ivim_mask``)::

    mkdir recons_ivim_output ivim_mask

In order to run the IVIM reconstruction method, we need to specify the
locations of the diffusion input file, b-values file, b-vectors file and mask
file, followed by the optional arguments. In this case, we will be specifying
the value used to split the b-values when estimating D in the two-stage
fitting process (``split_b_D``) and the output directory (``out_dir``).

To get the mask file, we will use the median Otsu thresholding method by
calling the ``dipy_median_otsu`` command::

    dipy_median_otsu data/ivim/ivim.nii.gz --vol_idx 10-50 --out_dir "ivim_mask"

Then, to perform the IVIM reconstruction we will run the command as::

    dipy_fit_ivim data/ivim/ivim.nii.gz data/ivim/ivim.nii.gz.bval data/ivim/ivim.nii.gz.bvec ivim_mask/brain_mask.nii.gz --split_b_D 250 --out_dir "recons_ivim_output"

This command will save the IVIM metrics to the directory
``recons_ivim_output``. In case the output directory was not specified, the
output volumes will be saved to the current directory by default.
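A minimal Python sketch of the IVIM fit (the slice index is arbitrary and
only serves to keep the example fast):

.. code-block:: python

    from dipy.core.gradients import gradient_table
    from dipy.io.gradients import read_bvals_bvecs
    from dipy.io.image import load_nifti
    from dipy.reconst.ivim import IvimModel

    data, affine = load_nifti("data/ivim/ivim.nii.gz")
    bvals, bvecs = read_bvals_bvecs("data/ivim/ivim.nii.gz.bval",
                                    "data/ivim/ivim.nii.gz.bvec")
    gtab = gradient_table(bvals, bvecs=bvecs, b0_threshold=0)

    # Two-stage trust-region fit of S0, f, D* and D
    ivimmodel = IvimModel(gtab, fit_method="trr")
    ivimfit = ivimmodel.fit(data[:, :, 16:17])  # a single slice, for speed

    f = ivimfit.perfusion_fraction
    D = ivimfit.D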
----------
References
----------

.. footbibliography::

.. _registration_flow:

============
Registration
============

This tutorial walks through the steps to perform image-based and
streamline-based registration using DIPY. Multiple registration methods are
available in DIPY. You can try these methods using your own data; we will be
using the data in DIPY. You can check how to :ref:`fetch the DIPY data`.

------------------
Image Registration
------------------

DIPY's image registration workflow can be used to register a moving image to
a static image by applying different transformations, such as center of mass,
translation, rigid body, or full affine (including translation, rotation,
scaling and shearing) transformations.

During such a registration process, the static image is considered to be the
reference, and the moving image is transformed to the space of the static
image. Registration methods rely on an optimization process that maximizes
(or minimizes) a given metric or criterion, such as the mutual information
between the two input images, to achieve the goal.

The DIPY image registration workflow applies the specified type of
transformation to the input images, and hence, users are expected to choose
the type of transformation that best matches the requirements of their
problem. Alternatively, the workflow allows one to perform registration in a
progressive manner. For example, using affine registration with
``progressive`` set to ``True`` will involve center of mass, translation,
rigid body and full affine registration; meanwhile, if ``progressive`` is set
to ``False`` for an affine registration, it will include only center of mass
and affine registration. The progressive registration will be slower but will
improve the quality.

We will first create a directory in which to save the transformed image and
the affine matrix (e.g.: ``image_reg_output``)::

    mkdir image_reg_output

To run the image registration, we need to specify the paths to the static
image file and to the moving image file, followed by the optional arguments.
In this case, we will be specifying the type of registration to be performed
(``transform``) and the output directory (``out_dir``).

To perform center of mass registration, we will call the
``dipy_align_affine`` command with the ``transform`` parameter set to
``com``, e.g.::

    dipy_align_affine "static_img.nii.gz" "moving_img.nii.gz" --transform "com" --out_dir "image_reg_output"

This command will save the transformed image and the affine matrix to the
specified output directory.

If we are to use an affine transformation type during the registration
process, we would call the ``dipy_align_affine`` command as, e.g.::

    dipy_align_affine "static_img.nii.gz" "moving_img.nii.gz" --transform "affine" --out_dir "affine_reg_output" --out_affine "affine_reg.txt"

This command will apply an affine transformation on the moving image file,
and save the transformed image and the affine matrix to the
``affine_reg_output`` directory. In case you did not specify the output
directory, the transformed image file and affine matrix would be saved to the
current directory by default. If you did not specify the name of the output
affine matrix, the affine matrix will be saved to a file named
``affine.txt``, located in the current directory also by default.
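The progressive pipeline described above is also available from Python
through ``dipy.align.affine_registration``; a hedged sketch with illustrative
file names and typical multi-resolution settings:

.. code-block:: python

    from dipy.align import affine_registration
    from dipy.io.image import load_nifti, save_nifti

    static, static_affine = load_nifti("static_img.nii.gz")
    moving, moving_affine = load_nifti("moving_img.nii.gz")

    # Progressive pipeline: center of mass -> translation -> rigid -> affine
    xformed, reg_affine = affine_registration(
        moving, static, moving_affine=moving_affine,
        static_affine=static_affine, nbins=32, metric="MI",
        pipeline=["center_of_mass", "translation", "rigid", "affine"],
        level_iters=[10000, 1000, 100], sigmas=[3.0, 1.0, 0.0],
        factors=[4, 2, 1])
    save_nifti("image_reg_output/moved.nii.gz", xformed, static_affine)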
------------------------------------
Symmetric Diffeomorphic Registration
------------------------------------

Symmetric diffeomorphic registration is performed using the Symmetric
Normalization (SyN) algorithm proposed by :footcite:t:`Avants2008` (also
implemented in the ANTs software :footcite:p:`Avants2009`). It is an
optimization technique that brings the moving image closer to the static
image.

Create a directory in which to save the transformed image (e.g.:
``syn_reg_output``)::

    mkdir syn_reg_output

To run the symmetric normalization registration method, we need to specify
the paths to the static image file and to the moving image file, followed by
optional arguments. In this case, we will be specifying the metric
(``metric``), the output directory (``out_dir``) and the file name of the
output warped image (``out_warped``). You can use cc (cross-correlation), ssd
(sum of squared differences) or em (expectation-maximization) as metrics.

The symmetric diffeomorphic registration method in DIPY is run through the
``dipy_align_syn`` command, e.g.::

    dipy_align_syn "static_img.nii.gz" "moving_img.nii.gz" --metric "cc" --out_dir "syn_reg_output" --out_warped "syn_reg_warped.nii.gz"

In case you did not specify the output directory, the transformed files would
be saved in the current directory by default. If you did not specify the file
name of the output warped image, the warped file will be saved as
``warped_moved.nii.gz`` by default.

----------------------
Apply a Transformation
----------------------

We can apply a transformation computed previously to an image. In order to do
so, we need to specify the paths of the static image file, moving image file,
and transform map file, followed by optional arguments. The transform map
file is a text (``*.txt``) file containing the affine matrix for the affine
case, or a NIfTI file containing the displacement field in each voxel, with
shape (x, y, z, 3, 2), for the diffeomorphic case. In this case, we will be
specifying the transform type (``transform_type``) and the output directory
(``out_dir``).

Create a directory in which to save the transformed files (e.g.:
``transform_reg_output``)::

    mkdir transform_reg_output

For a ``diffeomorphic`` transformation, we would run the command as::

    dipy_apply_transform "static_img.nii.gz" "moving_img.nii.gz" "mapping.nii.gz" --transform_type "diffeomorphic" --out_dir "transform_reg_output"

This command will transform the moving image and save the transformed files
to the specified output directory.

-----------------------------
Streamline-based Registration
-----------------------------

Streamline-based registration (SLR) :footcite:p:`Garyfallidis2015` is
performed to align bundles of streamlines directly in the space of
streamlines. The aim is to align the moving streamlines with the static
streamlines.

The workflow for streamline-based registration requires the paths to the
static streamlines file and to the moving streamlines file, followed by
optional arguments. In this case, we will be specifying the number of points
for discretizing each streamline (``nb_pts``) and the output directory
(``out_dir``).

Create a directory in which to save the transformed files (e.g.:
``sl_reg_output``)::

    mkdir sl_reg_output

Then, run the command as::

    dipy_slr "static_tractogram.trk" "moving_tractogram.trk" --nb_pts 25 --out_dir "sl_reg_output"

This command will perform streamline-based registration and save the
transformed files to the specified output directory.

----------
References
----------

.. footbibliography::

.. _tracking_flow:

========
Tracking
========

This tutorial walks through the steps to perform fiber tracking using DIPY.
Multiple tracking methods are available in DIPY. You can try these methods
using your own data; we will be using the data in DIPY. You can check how to
:ref:`fetch the DIPY data`.
-------------------- Local Fiber Tracking -------------------- Local fiber tracking is an approach used to model white matter fibers by creating streamlines from local directional information. The idea is as follows: if the local directionality of a tract/pathway segment is known, one can integrate along those directions to build a complete representation of that structure. Local fiber tracking is widely used in the field of diffusion MRI because it is simple and robust. In order to perform local fiber tracking, three things are needed: #. A method for getting directions from a diffusion data set. #. A method for identifying when the tracking must stop. #. A set of seeds from which to begin tracking. We will be using the ``stanford_hardi`` dataset for the local tracking command line interface demonstration purposes. We will start by creating the directory to which to save the stopping file (e.g.: ``stop_file``):: mkdir stop_file The workflow for each local tracking method requires the paths to the peaks and metrics files, stopping files, and seeding files. The default value for the local tracking method to be used (specified through the ``tracking_method`` parameter) is ``eudx`` (EuDX tracking :footcite:p:`Garyfallidis2012b`). A number of optional arguments, such as the number of seeds per dimension within a voxel (``seed_density``), or the output directory (``out_dir``), can also be provided. To get the stopping file, we will create a binary file of the GFA of the CSA reconstruction method and set a stopping criterion by calling the ``dipy_mask`` command:: dipy_mask recons-csa/gfa.nii.gz 0.25 --out_dir "stop_file" We will use the white matter (WM) partial volume effects (PVE) map corresponding to the diffusion data being tracked, and available through the ``stanford_pve_maps`` dataset, as the seeding file (e.g.: ``pve_wm.nii.gz``). There are four tracking methods as described below. EuDX Tracking ************* We will first create the directory in which to save the output tractogram file (e.g.: ``eudx_tracking_output``):: mkdir eudx_tracking_output Then, to perform the EuDX tracking we will run the ``dipy_track`` command as:: dipy_track recons-csa/peaks.pam5 stop_file/mask.nii.gz data/stanford_hardi/pve_wm.nii.gz --out_dir "eudx_tracking_output" Deterministic Tracking ********************** Deterministic maximum direction getter is the deterministic version of the probabilistic direction getter. It can be used with the same local models and has the same parameters. Deterministic maximum fiber tracking follows the trajectory of the most probable pathway within the tracking constraint (e.g. max angle). In other words, it follows the direction with the highest probability from a distribution, as opposed to the probabilistic direction getter which draws the direction from the distribution. Therefore, the maximum deterministic direction getter is equivalent to the probabilistic direction getter returning always the maximum value of the distribution. Deterministic maximum fiber tracking is an alternative to EuDX deterministic tractography, and unlike EuDX, it does not follow the peaks of the local models, but uses the entire orientation distributions. 
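For readers who want to see the corresponding building blocks in Python, a
minimal sketch of deterministic maximum direction tracking is shown below. The
file names reuse the outputs of the previous steps and are illustrative
assumptions; any PAM5 peaks file, stopping mask and seeding mask in the same
space will do::

    from dipy.data import default_sphere
    from dipy.direction import DeterministicMaximumDirectionGetter
    from dipy.io.image import load_nifti
    from dipy.io.peaks import load_pam
    from dipy.tracking import utils
    from dipy.tracking.local_tracking import LocalTracking
    from dipy.tracking.stopping_criterion import BinaryStoppingCriterion

    pam = load_pam("recons-csa/peaks.pam5")
    stop_mask, affine = load_nifti("stop_file/mask.nii.gz")
    seed_mask, _ = load_nifti("data/stanford_hardi/pve_wm.nii.gz")

    # Follow the most probable direction of the distribution at each step.
    dg = DeterministicMaximumDirectionGetter.from_shcoeff(
        pam.shm_coeff, max_angle=30.0, sphere=default_sphere)
    stopping_criterion = BinaryStoppingCriterion(stop_mask > 0)
    seeds = utils.seeds_from_mask(seed_mask > 0.5, affine, density=2)

    # LocalTracking returns a lazy streamline generator.
    streamlines = LocalTracking(dg, stopping_criterion, seeds, affine,
                                step_size=0.5)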
We will first create the directory in which to save the output tractogram file (e.g.: ``det_tracking_output``):: mkdir det_tracking_output Then, to perform the deterministic tracking we will run the ``dipy_track`` command as:: dipy_track recons-csa/peaks.pam5 stop_file/mask.nii.gz data/stanford_hardi/pve_wm.nii.gz --seed_density 2 --tracking_method "det" --out_dir "det_tracking_output" Probabilistic Tracking ********************** Probabilistic fiber tracking is a way of reconstructing white matter connections using diffusion MR imaging. Like deterministic fiber tracking, the probabilistic approach follows the trajectory of a possible pathway step by step starting at a seed; however, unlike deterministic tracking, the tracking direction at each point along the path is chosen at random from a distribution. The distribution at each point is different and depends on the observed diffusion data at that point. The distribution of tracking directions at each point can be represented as a probability mass function (PMF) if the possible tracking directions are restricted to discrete numbers of well distributed points on a sphere. We will first create the directory in which to save the output tractogram file (e.g.: ``prob_tracking_output``):: mkdir prob_tracking_output Then, to perform the probabilistic tracking we will run the ``dipy_track`` command as:: dipy_track recons-csa/peaks.pam5 stop_file/mask.nii.gz data/stanford_hardi/pve_wm.nii.gz --seed_density 2 --tracking_method "prob" --out_dir "prob_tracking_output" Closest Peaks Tracking ********************** Closest peaks fiber tracking is a type of deterministic tracking. Deterministic streamlines follow a predictable path through the data by selecting the same diffusion orientation when evaluating the same point through the propagation process. There may be several estimated directions at each point (such as a voxel center): closest peaks tracking selects one of these estimated directions on the basis of closeness of match to the previous direction of the streamline. We will first create the directory in which to save the output tractogram file (e.g.: ``closest_peaks_output``):: mkdir closest_peaks_output Then, to perform the closest peaks tracking we will run the ``dipy_track`` command as:: dipy_track recons-csa/peaks.pam5 stop_file/mask.nii.gz data/stanford_hardi/pve_wm.nii.gz --seed_density 2 --tracking_method "cp" --out_dir "closest_peaks_output" --------------------------------- Particle Filtering Tracking (PFT) --------------------------------- Particle Filtering Tracking (PFT) :footcite:p:`Girard2014` uses tissue partial volume estimation (PVE) to reconstruct trajectories connecting the gray matter, and not prematurely stopping in the white matter or in the corticospinal fluid. It relies on a stopping criterion that identifies the tissue where the streamline stopped. If the streamline reaches the gray matter, the trajectory is kept. If the streamline incorrectly stopped in the white matter or in the cerebrospinal fluid, PFT uses anatomical information to find an alternative streamline segment to extend the trajectory. When this segment is found, the tractography continues until the streamline reaches the gray matter. We will use the ``stanford_hardi`` dataset in DIPY to showcase this tracking method. As with any other workflow in DIPY, you can also use your own data! 
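For reference, the same method is also available from the Python API. A
minimal sketch is given below; the file names and parameter values are
illustrative assumptions, not prescriptions::

    import numpy as np

    from dipy.data import default_sphere
    from dipy.direction import ProbabilisticDirectionGetter
    from dipy.io.image import load_nifti
    from dipy.io.peaks import load_pam
    from dipy.tracking import utils
    from dipy.tracking.local_tracking import ParticleFilteringTracking
    from dipy.tracking.stopping_criterion import CmcStoppingCriterion

    pam = load_pam("recons-csa/peaks.pam5")
    wm, affine, img = load_nifti("data/stanford_hardi/pve_wm.nii.gz",
                                 return_img=True)
    gm, _ = load_nifti("data/stanford_hardi/pve_gm.nii.gz")
    csf, _ = load_nifti("data/stanford_hardi/pve_csf.nii.gz")

    step_size = 0.2
    voxel_size = np.average(img.header.get_zooms()[:3])

    # Continuous map criterion (CMC) built from the three PVE maps.
    stopping_criterion = CmcStoppingCriterion.from_pve(
        wm, gm, csf, step_size=step_size, average_voxel_size=voxel_size)
    dg = ProbabilisticDirectionGetter.from_shcoeff(
        pam.shm_coeff, max_angle=20.0, sphere=default_sphere,
        pmf_threshold=0.5)
    seeds = utils.seeds_from_mask(wm > 0.5, affine, density=1)
    streamlines = ParticleFilteringTracking(dg, stopping_criterion, seeds,
                                            affine, step_size=step_size)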
We will first create a directory in which to save the output tractogram file
(e.g.: ``pft_output``)::

    mkdir pft_output

To run the Particle Filtering Tracking method, we need to specify the paths to
the diffusion input file, white matter partial volume estimate, grey matter
partial volume estimate, and cerebrospinal fluid partial volume estimate for
tracking, and the seeding file, followed by optional arguments. In this case,
we will be specifying the threshold for the Probability Mass Function that
controls the randomness or probabilistic nature of the tracking
(``pmf_threshold``), and the output directory (``out_dir``). White matter,
grey matter, and cerebrospinal fluid volume files are available through the
``stanford_pve_maps`` dataset.

The Particle Filtering Tracking is performed by calling the ``dipy_track_pft``
command, e.g.::

    dipy_track_pft recons-csa/peaks.pam5 data/stanford_hardi/pve_wm.nii.gz data/stanford_hardi/pve_gm.nii.gz data/stanford_hardi/pve_csf.nii.gz data/stanford_hardi/pve_wm.nii.gz --pmf_threshold 0.5 --out_dir "pft_output"

This command will save the tractogram file to the specified output directory.

----------------------------------
Overview of Fiber Tracking Methods
----------------------------------

.. |image1| image:: https://github.com/dipy/dipy_data/blob/master/eudx_tracking.png?raw=true
   :align: middle
.. |image2| image:: https://github.com/dipy/dipy_data/blob/master/deterministic_tracking.png?raw=true
   :align: middle
.. |image3| image:: https://github.com/dipy/dipy_data/blob/master/closest_peaks_tracking.png?raw=true
   :align: middle

+-----------------------------+-----------------------------+
| Fiber Tracking Method       | Output                      |
+=============================+=============================+
| EuDX Tracking               | |image1|                    |
+-----------------------------+-----------------------------+
| Deterministic Tracking      | |image2|                    |
+-----------------------------+-----------------------------+
| Closest Peaks Tracking      | |image3|                    |
+-----------------------------+-----------------------------+

----------
References
----------

.. footbibliography::
dipy-1.11.0/doc/interfaces/viz_flow.rst000066400000000000000000000107031476546756600200530ustar00rootroot00000000000000
.. _viz_flow:

===============================
dMRI Visualization with Horizon
===============================

This section talks about Horizon workflows in DIPY and how to use them.

To follow this tutorial, let's make sure we have the latest versions of dipy
and fury installed on our system. ::

    pip install dipy --upgrade
    pip install fury --upgrade

Let's explore the options that Horizon provides. ::

    dipy_horizon --help

-------------------------
Visualize 3D Brain Image
-------------------------

This tutorial provides a basic example of loading a dMRI image into Horizon
using the command line interface.

Using a terminal, let's download a dataset called ``mni_template``. You can
skip this step if you already have the dataset downloaded. ::

    dipy_fetch mni_template

To see more details about ``dipy_fetch``, you can refer to :ref:`data_fetch`.

This command will download the data into the ``.dipy`` folder placed in your
home directory. Let's try to load the image.
**For macOS and Linux** ::

    dipy_horizon ~/.dipy/mni_template/mni_icbm152_t1_tal_nlin_asym_09a.nii ~/.dipy/mni_template/mni_icbm152_t2_tal_nlin_asym_09a.nii

**For Windows(cmd)** ::

    dipy_horizon %USERPROFILE%/.dipy/mni_template/mni_icbm152_t1_tal_nlin_asym_09a.nii %USERPROFILE%/.dipy/mni_template/mni_icbm152_t2_tal_nlin_asym_09a.nii

**For Windows(powershell)** ::

    dipy_horizon "$home/.dipy/mni_template/mni_icbm152_t1_tal_nlin_asym_09a.nii" "$home/.dipy/mni_template/mni_icbm152_t2_tal_nlin_asym_09a.nii"

You can also use your own data by invoking the command below. ::

    dipy_horizon <path_of_your_file>.nii

We also support direct visualization of compressed NIFTI files with the
extension ``.nii.gz``. ::

    dipy_horizon <path_of_your_file>.nii.gz

------------------------
Visualize 4D Brain Image
------------------------

This tutorial shows how to visualize a 4D image (a series of 3D volumes).

Using a terminal, let's download a dataset called ``stanford_hardi``. You can
skip this step if you already have the dataset downloaded. ::

    dipy_fetch stanford_hardi

This dataset has a NIFTI file with multiple volumes. Let's try to load the
image.

**For macOS and Linux** ::

    dipy_horizon ~/.dipy/stanford_hardi/HARDI150.nii.gz

**For Windows(cmd)** ::

    dipy_horizon %USERPROFILE%/.dipy/stanford_hardi/HARDI150.nii.gz

**For Windows(powershell)** ::

    dipy_horizon "$home/.dipy/stanford_hardi/HARDI150.nii.gz"

--------------------------
Visualize Brain Tractogram
--------------------------

This tutorial shows how to visualize a tractogram.

Using a terminal, let's download a dataset called ``bundle_atlas_hcp842``. You
can skip this step if you already have the dataset downloaded. ::

    dipy_fetch bundle_atlas_hcp842

Horizon supports the tractogram formats mentioned below.

* .trk
* .trx
* .dpy
* .tck
* .vtk
* .vtp
* .fib

Let's try to load the tractogram.

**For macOS and Linux** ::

    dipy_horizon ~/.dipy/bundle_atlas_hcp842/Atlas_80_Bundles/whole_brain/whole_brain_MNI.trk --cluster

**For Windows(cmd)** ::

    dipy_horizon %USERPROFILE%/.dipy/bundle_atlas_hcp842/Atlas_80_Bundles/whole_brain/whole_brain_MNI.trk --cluster

**For Windows(powershell)** ::

    dipy_horizon "$home/.dipy/bundle_atlas_hcp842/Atlas_80_Bundles/whole_brain/whole_brain_MNI.trk" --cluster

Using the ``--cluster`` option, we visualize the clusters (bundles) of the
tractograms. If we do not provide ``--cluster``, it will open up all the
streamlines and the interaction panel will not be provided. Opening the
streamlines can be computationally expensive if a large dataset is provided.

-----------------------
Visualize Brain Surface
-----------------------

This tutorial shows how to visualize surfaces in Horizon.

Using a terminal, let's download a brain surface.

**For macOS and Linux** ::

    wget https://github.com/maharshi-gor/dipy_data/raw/surface_data/surfaces/lh.pial

**For Windows(powershell)** ::

    wget https://github.com/maharshi-gor/dipy_data/raw/surface_data/surfaces/lh.pial -O lh.pial

**For macOS users**, if you do not have ``wget`` in your terminal, you can set
it up with the following command ::

    brew install wget

If you are still getting an error, you can download the surface by clicking
`here <https://github.com/maharshi-gor/dipy_data/raw/surface_data/surfaces/lh.pial>`_.

The previous step will download the file into your current directory.

**Downloaded using wget**

To load the surface, ::

    dipy_horizon lh.pial

**Downloaded using link**

To load the surface, ::

    dipy_horizon <download_directory>/lh.pial
dipy-1.11.0/doc/links_names.inc000066400000000000000000000222561476546756600163430ustar00rootroot00000000000000
.. This (-*- rst -*-) format file contains commonly used link targets and name substitutions.
It may be included in many files, therefore it should only contain link targets and name substitutions. Try grepping for "^\.\. _" to find plausible candidates for this list. .. NOTE: reST targets are __not_case_sensitive__, so only one target definition is needed for nipy, NIPY, Nipy, etc... .. _nipy: https://nipy.org .. _`Brain Imaging Center`: https://bic.berkeley.edu/ .. _dipy: https://dipy.org .. _`dipy github`: https://github.com/dipy/dipy .. _`dipy pypi`: https://pypi.python.org/pypi/dipy .. _`nipy issues`: https://github.com/nipy/nipy/issues .. _`dipy issues`: https://github.com/dipy/dipy/issues .. _`dipy discussions`: https://github.com/dipy/dipy/discussions .. _`dipy github actions`: https://github.com/dipy/dipy/actions .. _`dipy paper`: https://www.frontiersin.org/Neuroinformatics/10.3389/fninf.2014.00008/abstract .. _journal paper: https://www.frontiersin.org/Neuroinformatics/10.3389/fninf.2014.00008/abstract .. _nibabel: https://nipy.org/nibabel .. _nibabel pypi: https://pypi.python.org/pypi/nibabel .. _nipy development guidelines: https://nipy.org/nipy/devel/guidelines/index.html .. _`dipy gitter`: https://gitter.im/dipy/dipy .. _neurostars: https://neurostars.org/ .. _h5py: https://www.h5py.org/ .. _cvxpy: https://www.cvxpy.org/ .. Packaging .. _neurodebian: https://neuro.debian.net .. _neurodebian how to: https://neuro.debian.net/#how-to-use-this-repository .. _pip: https://www.pip-installer.org/en/latest/ .. _easy_install: https://pypi.python.org/pypi/setuptools .. _homebrew: https://brew.sh/ .. _neurofedora: https://neuro.fedoraproject.org/ .. Documentation tools .. _graphviz: https://www.graphviz.org/ .. _`Sphinx reST`: https://sphinx.pocoo.org/rest.html .. _reST: https://docutils.sourceforge.net/rst.html .. _docutils: https://docutils.sourceforge.net .. _sphinxcontrib-bibtex: https://pypi.org/project/sphinxcontrib-bibtex/ .. Licenses .. _GPL: https://www.gnu.org/licenses/gpl.html .. _BSD: https://www.opensource.org/licenses/bsd-license.php .. _LGPL: https://www.gnu.org/copyleft/lesser.html .. Working process .. _nifticlibs: https://nifti.nimh.nih.gov .. _nifti: https://nifti.nimh.nih.gov .. _`nipy launchpad`: https://launchpad.net/nipy .. _launchpad: https://launchpad.net/ .. _`nipy trunk`: https://code.launchpad.net/~nipy-developers/nipy/trunk .. _`nipy mailing list`: https://mail.python.org/mailman/listinfo/neuroimaging .. _`dipy mailing list`: https://mail.python.org/mailman3/lists/dipy.python.org/ .. _`nipy bugs`: https://bugs.launchpad.net/nipy .. _pep8: https://www.python.org/dev/peps/pep-0008/ .. _`numpy coding style`: https://numpydoc.readthedocs.io/en/latest/format.html#docstring-standard .. _`python module path`: https://docs.python.org/tutorial/modules.html#the-module-search-path .. Code support stuff .. _pychecker: https://pychecker.sourceforge.net/ .. _pylint: https://pylint.readthedocs.io/en/latest/ .. _pyflakes: https://github.com/PyCQA/pyflakes .. _virtualenv: https://pypi.python.org/pypi/virtualenv .. _git: https://git.or.cz/ .. _github: https://github.com .. _flymake: https://flymake.sourceforge.net/ .. _rope: https://rope.sourceforge.net/ .. _pymacs: https://pymacs.progiciels-bpi.ca/pymacs.html .. _ropemacs: https://rope.sourceforge.net/ropemacs.html .. _ECB: https://ecb.sourceforge.net/ .. _emacs_python_mode: https://www.emacswiki.org/cgi-bin/wiki/PythonMode .. _doctest-mode: https://www.cis.upenn.edu/~edloper/projects/doctestmode/ .. _bazaar: https://bazaar-vcs.org/ .. _nose: https://somethingaboutorange.com/mrl/projects/nose .. 
_pytest: https://docs.pytest.org .. _`python coverage tester`: https://nedbatchelder.com/code/modules/coverage.html .. _cython: https://cython.org .. _travis-ci: https://travis-ci.com/ .. Scientific Python practices .. _`Scientific Python`: https://scientific-python.org/ .. _`Scientific Python SPECS`: https://scientific-python.org/specs .. _`SPEC 0 — Minimum Supported Versions`: https://scientific-python.org/specs/spec-0000/ .. Other python projects .. _numpy: https://numpy.org .. _scipy: https://www.scipy.org .. _IPython: https://www.ipython.org/ .. _`ipython manual`: https://ipython.scipy.org/doc/manual/html .. _matplotlib: https://matplotlib.sourceforge.net .. _pythonxy: https://www.pythonxy.com .. _ETS: https://code.enthought.com/projects/tool-suite.php .. _`Enthought Tool Suite`: https://code.enthought.com/projects/tool-suite.php .. _canopy: https://assets.enthought.com/downloads/ .. _anaconda: https://www.anaconda.com/download .. _python: https://www.python.org .. _mayavi: https://mayavi.sourceforge.net/ .. _sympy: https://code.google.com/p/sympy/ .. _networkx: https://networkx.lanl.gov/ .. _setuptools: https://pypi.python.org/pypi/setuptools .. _distribute: https://packages.python.org/distribute .. _datapkg: https://okfn.org/projects/datapkg .. _pytables: https://www.pytables.org .. _python-vtk: https://www.vtk.org .. _pypi: https://pypi.python.org/pypi .. _FURY: https://fury.gl .. _scikit-learn: https://scikit-learn.org .. _statsmodels: https://www.statsmodels.org .. _pandas: https://pandas.pydata.org .. _pytables: https://www.pytables.org .. _tensorflow: https://www.tensorflow.org .. _tqdm: https://tqdm.github.io/ .. _trx-python: https://tee-ar-ex.github.io/trx-python/ .. Python imaging projects .. _PyMVPA: https://www.pymvpa.org .. _BrainVISA: https://brainvisa.info .. _anatomist: https://brainvisa.info .. _pydicom: https://code.google.com/p/pydicom/ .. Not so python imaging projects .. _matlab: https://www.mathworks.com .. _spm: https://www.fil.ion.ucl.ac.uk/spm .. _spm8: https://www.fil.ion.ucl.ac.uk/spm/software/spm8 .. _eeglab: https://sccn.ucsd.edu/eeglab .. _AFNI: https://afni.nimh.nih.gov/afni .. _FSL: https://www.fmrib.ox.ac.uk/fsl .. _FreeSurfer: https://surfer.nmr.mgh.harvard.edu .. _voxbo: https://www.voxbo.org .. _mricron: https://www.mccauslandcenter.sc.edu/mricro/mricron/index.html .. _slicer: https://www.slicer.org/ .. _fibernavigator: https://github.com/scilus/fibernavigator .. File formats .. _DICOM: https://medical.nema.org/ .. _`wikipedia DICOM`: https://en.wikipedia.org/wiki/Digital_Imaging_and_Communications_in_Medicine .. _GDCM: https://sourceforge.net/apps/mediawiki/gdcm .. _`DICOM specs`: ftp://medical.nema.org/medical/dicom/2009/ .. _`DICOM object definitions`: ftp://medical.nema.org/medical/dicom/2009/09_03pu3.pdf .. _dcm2nii: https://www.cabiatl.com/mricro/mricron/dcm2nii.html .. _`mricron install`: https://www.cabiatl.com/mricro/mricron/install.html .. _dicom2nrrd: https://www.slicer.org/slicerWiki/index.php/Modules:DicomToNRRD-3.4 .. _Nrrd: https://teem.sourceforge.net/nrrd/format.html .. General software .. _gcc: https://gcc.gnu.org .. _xcode: https://developer.apple.com/xcode/resources/ .. _mingw: https://www.mingw.org/wiki/Getting_Started .. _mingw distutils bug: https://bugs.python.org/issue2698 .. _cygwin: https://cygwin.com .. _macports: https://www.macports.org/ .. _VTK: https://www.vtk.org/ .. _ITK: https://www.itk.org/ .. _swig: https://www.swig.org .. _openmp: https://www.openmp.org/ .. Windows development .. 
_mingw: https://www.mingw.org/wiki/Getting_Started .. _msys: https://www.mingw.org/wiki/MSYS .. _powershell: https://www.microsoft.com/powershell .. _msysgit: https://code.google.com/p/msysgit .. _putty: https://www.chiark.greenend.org.uk/~sgtatham/putty .. _visualstudiobuildtools: https://visualstudio.microsoft.com/downloads/?q=build+tools#build-tools-for-visual-studio-2022 .. Functional imaging labs .. _`functional imaging laboratory`: https://www.fil.ion.ucl.ac.uk .. _FMRIB: https://www.fmrib.ox.ac.uk .. Other organizations .. _enthought: https://www.enthought.com .. _kitware: https://www.kitware.com .. _nitrc: https://www.nitrc.org .. General information links .. _`wikipedia FMRI`: https://en.wikipedia.org/wiki/Functional_magnetic_resonance_imaging .. _`wikipedia PET`: https://en.wikipedia.org/wiki/Positron_emission_tomography .. Mathematical methods .. _`wikipedia ICA`: https://en.wikipedia.org/wiki/Independent_component_analysis .. _`wikipedia PCA`: https://en.wikipedia.org/wiki/Principal_component_analysis .. Mathematical ideas .. _`wikipedia spherical coordinate system`: https://en.wikipedia.org/wiki/Spherical_coordinate_system .. _`mathworld spherical coordinate system`: https://mathworld.wolfram.com/SphericalCoordinates.html .. _`wikipedia affine`: https://en.wikipedia.org/wiki/Affine_transformation .. _`wikipedia linear transform`: https://en.wikipedia.org/wiki/Linear_transformation .. _`wikipedia rotation matrix`: https://en.wikipedia.org/wiki/Rotation_matrix .. _`wikipedia homogeneous coordinates`: https://en.wikipedia.org/wiki/Homogeneous_coordinates .. _`wikipedia axis angle`: https://en.wikipedia.org/wiki/Axis_angle .. _`wikipedia Euler angles`: https://en.wikipedia.org/wiki/Euler_angles .. _`Mathworld Euler angles`: https://mathworld.wolfram.com/EulerAngles.html .. _`wikipedia quaternion`: https://en.wikipedia.org/wiki/Quaternion .. _`wikipedia shear matrix`: https://en.wikipedia.org/wiki/Shear_matrix .. _`wikipedia reflection`: https://en.wikipedia.org/wiki/Reflection_(mathematics) .. _`wikipedia direction cosine`: https://en.wikipedia.org/wiki/Direction_cosine .. Bibliography management .. _bibtex: https://www.bibtex.org/ .. vim:syntax=rst dipy-1.11.0/doc/make.bat000066400000000000000000000115301476546756600147430ustar00rootroot00000000000000@ECHO OFF REM Command file for Sphinx documentation set PYTHON=python set SPHINXBUILD=sphinx-build set ALLSPHINXOPTS=-d _build/doctrees %SPHINXOPTS% . if NOT "%PAPER%" == "" ( set ALLSPHINXOPTS=-D latex_paper_size=%PAPER% %ALLSPHINXOPTS% ) if "%1" == "" goto help if "%1" == "help" ( :help echo.Please use `make ^` where ^ is one of echo. html to make standalone HTML files echo. dirhtml to make HTML files named index.html in directories echo. pickle to make pickle files echo. json to make JSON files echo. htmlhelp to make HTML files and a HTML help project echo. qthelp to make HTML files and a qthelp project echo. latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter echo. changes to make an overview over all changed/added/deprecated items echo. linkcheck to check all external links for integrity echo. 
doctest to run all doctests embedded in the documentation if enabled goto end ) if "%1" == "clean" ( for /d %%i in (_build\*) do rmdir /q /s %%i del /q /s _build\* call :api-clean call :examples-clean goto end ) if "%1" == "api-clean" ( :api-clean del /q /s reference reference_cmd rmdir reference reference_cmd exit /B ) if "%1" == "api" ( :api if not exist reference mkdir reference %PYTHON% tools/build_modref_templates.py dipy reference if not exist reference_cmd mkdir reference_cmd %PYTHON% tools/docgen_cmd.py dipy reference_cmd echo.Build API docs...done. exit /B ) if "%1" == "examples-clean" ( :examples-clean cd examples_built @echo off for /d %%i in (*) do ( if /I not "%%i"=="README" if /I not "%%i"==".gitignore" ( rmdir /s /q "%%i" ) ) for %%i in (*) do ( if /I not "%%i"=="README" if /I not "%%i"==".gitignore" ( del /q "%%i" ) ) cd .. exit /B ) if "%1" == "gitwash-update" ( %PYTHON% ../tools/gitwash_dumper.py devel dipy --repo-name=dipy ^ --github-user=dipy ^ --project-url=https://dipy.org ^ --project-ml-url=https://mail.python.org/mailman/listinfo/neuroimaging ) if "%1" == "html" ( :html echo "build full docs including examples" call :api %SPHINXBUILD% -b html %ALLSPHINXOPTS% _build/html echo. echo.Build finished. The HTML pages are in _build/html. exit /B ) if "%1" == "html-no-examples" ( :html-no-examples call :api %SPHINXBUILD% -D plot_gallery=0 -b html %ALLSPHINXOPTS% _build/html echo. echo.Build finished. The HTML pages are in _build/html. exit /B ) if "%1" == "dirhtml" ( %SPHINXBUILD% -b dirhtml %ALLSPHINXOPTS% _build/dirhtml echo. echo.Build finished. The HTML pages are in _build/dirhtml. goto end ) if "%1" == "pickle" ( %SPHINXBUILD% -b pickle %ALLSPHINXOPTS% _build/pickle echo. echo.Build finished; now you can process the pickle files. goto end ) if "%1" == "json" ( %SPHINXBUILD% -b json %ALLSPHINXOPTS% _build/json echo. echo.Build finished; now you can process the JSON files. goto end ) if "%1" == "htmlhelp" ( %SPHINXBUILD% -b htmlhelp %ALLSPHINXOPTS% _build/htmlhelp echo. echo.Build finished; now you can run HTML Help Workshop with the ^ .hhp project file in _build/htmlhelp. goto end ) if "%1" == "qthelp" ( %SPHINXBUILD% -b qthelp %ALLSPHINXOPTS% _build/qthelp echo. echo.Build finished; now you can run "qcollectiongenerator" with the ^ .qhcp project file in _build/qthelp, like this: echo.^> qcollectiongenerator _build\qthelp\dipy.qhcp echo.To view the help file: echo.^> assistant -collectionFile _build\qthelp\dipy.ghc goto end ) if "%1" == "latex" ( :latex call :latex-after-examples exit /B ) if "%1" == "latex-after-examples" ( :latex-after-examples %SPHINXBUILD% -b latex %ALLSPHINXOPTS% _build/latex echo. echo.Build finished; the LaTeX files are in _build/latex. echo.Run make all-pdf or make all-ps in that directory to ^ run these through (pdf)latex. exit /B ) if "%1" == "changes" ( %SPHINXBUILD% -b changes %ALLSPHINXOPTS% _build/changes echo. echo.The overview file is in _build/changes. goto end ) if "%1" == "linkcheck" ( %SPHINXBUILD% -b linkcheck %ALLSPHINXOPTS% _build/linkcheck echo. echo.Link check complete; look for any errors in the above output ^ or in _build/linkcheck/output.txt. goto end ) if "%1" == "doctest" ( %SPHINXBUILD% -b doctest %ALLSPHINXOPTS% _build/doctest echo. echo.Testing of doctests in the sources finished, look at the ^ results in _build/doctest/output.txt. 
goto end ) if "%1" == "pdf" ( call :latex cd _build/latex && make all-pdf type nul > %* goto end ) if "%1" == "upload" ( call :html %* ./upload-gh-pages.sh _build/html/ dipy dipy goto end ) if "%1" == "xvfb" ( set TEST_WITH_XVFB=true && make html goto end ) if "%1" == "memory_profile" ( set TEST_WITH_MEMPROF=true && make html goto end ) :end dipy-1.11.0/doc/old_highlights.rst000066400000000000000000000335441476546756600170710ustar00rootroot00000000000000.. _old_highlights: **************** Older Highlights **************** **DIPY 1.10.0** is now available. New features include: - NF: Patch2Self3 - Large improvements of self-supervised denoising method added. - NF: Fiber density and spread from ODF using Bingham distributions method added. - NF: Iteratively reweighted least squares for robust fitting of diffusion models added. - NF: NDC - Neighboring DWI Correlation quality metric added. - NF: DAM - DWI-based tissue classification method added. - NF: New Parallel Backends (Ray, joblib, Dask) for fitting reconstruction methods added. - RF: Deprecation of Tensorflow support. PyTorch support is now the default. - Transition to Keyword-only arguments (PEP 3102). - Zero-warnings policy (CIs, Compilation, doc generation) adopted. - Adoption of ruff for automatic style enforcement. - Transition to using f-strings. - Citation system updated. It is more uniform and robust. - Multiple Workflows updated. - Multiple DIPY Horizon features updated. - Large documentation update. - Closed 250 issues and merged 185 pull requests. **DIPY 1.9.0** is now available. New features include: - Numpy 2.0.0 support. - DeepN4 novel DL-based N4 Bias Correction method added. - Multiple Workflows added. - Large update of DIPY Horizon features. - Pytest for Cython files (``*.pyx``) added. - Large documentation update. - Support of Python 3.8 removed. - Closed 142 issues and merged 60 pull requests. **DIPY 1.8.0** is now available. New features include: - Python 3.12.0 support. - Cython 3.0.0 compatibility. - Migrated to Meson build system. Setuptools is no more. - EVAC+ novel DL-based brain extraction method added. - Parallel Transport Tractography (PTT) 10X faster. - Many Horizon updates. Fast overlays of many images. - New Correlation Tensor Imaging (CTI) method added. - Improved warnings for optional dependencies. - Large documentation update. New theme/design integration. - Closed 197 issues and merged 130 pull requests. **DIPY 1.7.0** is now available. New features include: - NF: BundleWarp - Streamline-based nonlinear registration method for bundles added. - NF: DKI+ - Diffusion Kurtosis modeling with advanced constraints added. - NF: Synb0 - Synthetic b0 creation added using deep learning added. - NF: New Parallel Transport Tractography (PTT) added. - NF: Fast Streamline Search algorithm added. - NF: New denoising methods based on 1D CNN added. - Handle Asymmetric Spherical Functions. - Large update of DIPY Horizon features. - Multiple Workflows updated - Large codebase cleaning. - Large documentation update. Integration of Sphinx-Gallery. - Closed 53 issues and merged 34 pull requests. **DIPY 1.6.0** is now available. New features include: - NF: Unbiased groupwise linear bundle registration added. - NF: MAP+ constraints added. - Generalized PCA to less than 3 spatial dims. - Add positivity constraints to QTI. - Ability to apply Symmetric Diffeomorphic Registration to points/streamlines. - New Human Connectome Project (HCP) data fetcher added. - New Healthy Brain Network (HBN) data fetcher added. 
- Multiple Workflows updated (DTIFlow, LPCAFlow, MPPCA) and added (RUMBAFlow). - Ability to handle VTP files. - Large codebase cleaning. - Large documentation update. - Closed 75 issues and merged 41 pull requests. **DIPY 1.5.0** is now available. New features include: - New reconstruction model added: Q-space Trajectory Imaging (QTI). - New reconstruction model added: Robust and Unbiased Model-BAsed Spherical Deconvolution (RUMBA-SD). - New reconstruction model added: Residual block Deep Neural Network (ResDNN). - Masking management in Affine Registration added. - Multiple Workflows updated (DTIFlow, DKIFlow, ImageRegistrationFlow) and added (MotionCorrectionFlow). - Compatibility with Python 3.10 added. - Migrations from Azure Pipeline to Github Actions. - Large codebase cleaning. - New parallelisation module added. - ``dipy.io.bvectxt`` module deprecated. - New DIPY Horizon features (ROI Visualizer, random colors flag). - Large documentation update. - Closed 129 issues and merged 72 pull requests. **DIPY 1.4.1** is now available. New features include: - Patch2Self and its documentation updated. - BUAN and Recobundles documentation updated. - Standardization and improvement of the multiprocessing / multithreading rules. - Community and governance information added. - New surface seeding module for tractography named `mesh`. - Large update of Cython code in respect of the last standard. - Large documentation update. - Closed 61 issues and merged 28 pull requests. **DIPY 1.4.0** is now available. New features include: - New self-supervised denoising algorithm Patch2Self added. - BUAN and RecoBundles documentation updated. - Response function refactored and clarified. - B-tensor allowed with response functions. - Large Command Line Interface (CLI) documentation updated. - Public API for Registration added. - Large documentation update. - Closed 47 issues and merged 19 pull requests. **DIPY 1.3.0** is now available. New features include: - Gibbs Ringing correction 10X faster. - Spherical harmonics basis definitions updated. - Added SMT2 metrics from mean signal diffusion kurtosis. - New interface functions added to the registration module. - New linear transform added to the registration module. - New tutorials for DIPY command line interfaces. - Fixed compatibility issues with different dependencies. - Tqdm (multiplatform progress bar for data downloading) dependency added. - Large documentation update. - Bundle section highlight from BUAN added in Horizon. - Closed 134 issues and merged 49 pull requests. **DIPY 1.2.0** is now available. New features include: - New command line interfaces for group analysis: BUAN. - Added b-tensor encoding for gradient table. - Better support for single shell or multi-shell response functions. - Stats module refactored. - Numpy minimum version is 1.2.0. - Fixed compatibilities with FURY 0.6+, VTK9+, CVXPY 1.1+. - Added multiple tutorials for DIPY command line interfaces. - Updated SH basis convention. - Improved performance of tissue classification. - Fixed a memory overlap bug (multi_median). - Large documentation update (typography / references). - Closed 256 issues and merged 94 pull requests. **DIPY 1.1.1** is now available. New features include: - New module for deep learning ``dipy.nn`` (uses TensorFlow 2.0). - Improved DKI performance and increased utilities. - Non-linear and RESTORE fits from DTI compatible now with DKI. - Numerical solutions for estimating axial, radial and mean kurtosis. - Added Kurtosis Fractional Anisotropy by Glenn et al. 2015. 
- Added Mean Kurtosis Tensor by Hansen et al. 2013. - Nibabel minimum version is 3.0.0. - Azure CI added and Appveyor CI removed. - New command line interfaces for LPCA, MPPCA and Gibbs unringing. - New MSMT CSD tutorial added. - Horizon refactored and updated to support StatefulTractograms. - Speeded up all cython modules by using a smarter configuration setting. - All tutorials updated to API changes and 2 new tutorials added. - Large documentation update. - Closed 126 issues and merged 50 pull requests. **DIPY 1.0.0** is now available. New features include: - Critical :doc:`API changes ` - Large refactoring of tracking API. - New denoising algorithm: MP-PCA. - New Gibbs ringing removal. - New interpolation module: ``dipy.core.interpolation``. - New reconstruction models: MTMS-CSD, Mean Signal DKI. - Increased coordinate systems consistency. - New object to manage safely tractography data: StatefulTractogram - New command line interface for downloading datasets: FetchFlow - Horizon updated, medical visualization interface powered by QuickBundlesX. - Removed all deprecated functions and parameters. - Removed compatibility with Python 2.7. - Updated minimum dependencies version (Numpy, Scipy). - All tutorials updated to API changes and 3 new added. - Large documentation update. - Closed 289 issues and merged 98 pull requests. **DIPY 0.16.0** is now available. New features include: - Horizon, medical visualization interface powered by QuickBundlesX. - New Tractometry tools: Bundle Analysis / Bundle Profiles. - New reconstruction model: IVIM MIX (Variable Projection). - New command line interface: Affine and Diffeomorphic Registration. - New command line interface: Probabilistic, Deterministic and PFT Tracking. - Integration of Cython Guidelines for developers. - Replacement of Nose by Pytest. - Documentation update. - Closed 103 issues and merged 41 pull requests. **DIPY 0.15.0** is now available. New features include: - Updated RecoBundles for automatic anatomical bundle segmentation. - New Reconstruction Model: qtau-dMRI. - New command line interfaces (e.g. dipy_slr). - New continuous integration with AppVeyor CI. - Nibabel Streamlines API now used almost everywhere for better memory management. - Compatibility with Python 3.7. - Many tutorials added or updated (5 New). - Large documentation update. - Moved visualization module to a new library: FURY. - Closed 287 issues and merged 93 pull requests. **DIPY 0.14** is now available. New features include: - RecoBundles: anatomically relevant segmentation of bundles - New super fast clustering algorithm: QuickBundlesX - New tracking algorithm: Particle Filtering Tracking. - New tracking algorithm: Probabilistic Residual Bootstrap Tracking. - Integration of the Streamlines API for reading, saving and processing tractograms. - Fiber ORientation Estimated using Continuous Axially Symmetric Tensors (Forecast). - New command line interfaces. - Deprecated fvtk (old visualization framework). - A range of new visualization improvements. - Large documentation update. **DIPY 0.13.0** is now available. New features include: - Faster local PCA implementation. - Fixed different issues with OpenMP and Windows / macOS. - Replacement of cvxopt by cvxpy. - Replacement of Pytables by h5py. - Updated API to support latest numpy version (1.14). - New user interfaces for visualization. - Large documentation update. **DIPY 0.12.0** is now available. New features include: - IVIM Simultaneous modeling of perfusion and diffusion. 
- MAPL, tissue microstructure estimation using Laplacian-regularized MAP-MRI.
- DKI-based microstructural modelling.
- Free water diffusion tensor imaging.
- Denoising using Local PCA.
- Streamline-based registration (SLR).
- Fiber to bundle coherence (FBC) measures.
- Bayesian MRF-based tissue classification.
- New API for integrated user interfaces.
- New hdf5 file (.pam5) for saving reconstruction results.
- Interactive slicing of images, ODFs and peaks.
- Updated API to support latest numpy versions.
- New system for automatically generating command line interfaces.
- Faster computation of cross correlation for image registration.

**DIPY 0.11.0** is now available. New features include:

- New framework for contextual enhancement of ODFs.
- Compatibility with numpy (1.11).
- Compatibility with VTK 7.0 which supports Python 3.x.
- Faster PIESNO for noise estimation.
- Reorient gradient directions according to motion correction parameters.
- Supporting Python 3.3+ but not 3.2.
- Reduced memory usage in DTI.
- DSI now can use datasets with multiple b0s.
- Fixed different issues with Windows 64bit and Python 3.5.

**DIPY 0.10.1** is now available. New features in this release include:

- Compatibility with new versions of scipy (0.16) and numpy (1.10).
- New cleaner visualization API, including compatibility with VTK 6, and functions to create your own interactive visualizations.
- Diffusion Kurtosis Imaging (DKI): Google Summer of Code work by Rafael Henriques.
- Mean Apparent Propagator (MAP) MRI for tissue microstructure estimation.
- Anisotropic Power Maps from spherical harmonic coefficients.
- A new framework for affine registration of images.

DIPY was an **official exhibitor** for OHBM 2015.

**DIPY 0.9.2** is now available for :ref:`download `. Here is a summary of the new features.

* Anatomically Constrained Tissue Classifiers for Tracking
* Massive speedup of Constrained Spherical Deconvolution (CSD)
* Recursive calibration of response function for CSD
* New experimental framework for clustering
* Improvements and 10X speedup for Quickbundles
* Improvements in Linear Fascicle Evaluation (LiFE)
* New implementation of Geodesic Anisotropy
* New efficient transformation functions for registration
* Sparse Fascicle Model supports acquisitions with multiple b-values

**DIPY 0.8.0** is now available for :ref:`download `. The new release contains state-of-the-art algorithms for diffusion MRI registration, reconstruction, denoising, statistical evaluation, fiber tracking and validation of tracking.

For more information about DIPY_, read the `DIPY paper`_ in Frontiers in Neuroinformatics.

So, how similar are your bundles to the real anatomy? Learn how to optimize your analysis as we did to create the fornix of the figure above, by reading the tutorials in our :ref:`gallery `.

In DIPY_ we care about methods which can solve complex problems efficiently and robustly. QuickBundles is one of the many state-of-the-art algorithms found in DIPY. It can be used to simplify large datasets of streamlines. See our :ref:`gallery ` of examples and try QuickBundles with your data. Here is a video of QuickBundles applied on a simple dataset.

.. include:: links_names.inc
dipy-1.11.0/doc/old_news.rst000066400000000000000000000061171476546756600157070ustar00rootroot00000000000000
.. _old_news:

**********************
Past Announcements
**********************

- :doc:`DIPY 1.8.0 ` released December 13, 2023.
- :doc:`DIPY 1.7.0 ` released April 23, 2023.
- :doc:`DIPY 1.6.0 ` released January 16, 2023.
- :doc:`DIPY 1.5.0 ` released March 11, 2022.
- :doc:`DIPY 1.4.1 ` released May 6, 2021.
- :doc:`DIPY 1.4.0 ` released March 13, 2021.
- :doc:`DIPY 1.3.0 ` released November 3, 2020.
- :doc:`DIPY 1.2.0 ` released September 9, 2020.
- :doc:`DIPY 1.1.1 ` released January 10, 2020.
- :doc:`DIPY 1.0 ` released August 5, 2019.
- :doc:`DIPY 0.16 ` released March 10, 2019.
- :doc:`DIPY 0.15 ` released December 12, 2018.
- :doc:`DIPY 0.14 ` released May 1, 2018.
- :doc:`DIPY 0.13 ` released October 24, 2017.
- :doc:`DIPY 0.12 ` released June 26, 2017.
- :doc:`DIPY 0.11 ` released February 21, 2016.
- :doc:`DIPY 0.10 ` released December 4, 2015.
- :doc:`DIPY 0.9.2 ` released, March 18, 2015.
- :doc:`DIPY 0.8.0 ` released, January 6, 2015.
- DIPY_ was an official exhibitor in `OHBM 2015 `_.
- DIPY was featured in `The Scientist Magazine `_, Nov, 2014.
- `DIPY paper`_ accepted in Frontiers of Neuroinformatics, January 22nd, 2014.
- :doc:`DIPY 0.7.1 ` is available for :ref:`download ` with **3X** more tutorials than 0.6.0! In addition, a `journal paper`_ focusing on teaching the fundamentals of DIPY is available in Frontiers of Neuroinformatics.
- A new **hands on DIPY** seminar to 50 neuroscientists from Canada, as part of QBIN's "Do's and dont's of diffusion MRI" workshop, 8 April, 2014.
- The creators of DIPY will attend both ISMRM and HBM 2015. Come and meet us!
- `DIPY paper`_ accepted in Frontiers of Neuroinformatics, 22 January, 2014.
- :doc:`DIPY 0.7.1 ` Released!, 16 January, 2014.
- :doc:`DIPY 0.7.0 ` Released!, 23 December, 2013.
- **Spherical Deconvolution** algorithms are now included in the current development version 0.7.0dev. See the examples in :ref:`gallery `, 24 June 2013.
- A team of DIPY developers **wins** the `IEEE ISBI HARDI challenge `_, 7 April, 2013.
- **Hands on DIPY** seminar took place at the dMRI course of the CREATE-MIA summer school, 5-7 June, McGill, Montreal, 2013.
- :doc:`DIPY 0.6.0 ` Released!, 30 March, 2013.
- **DIPY 3rd Sprint**, Berkeley, CA, 8-18 April, 2013.
- **IEEE ISBI HARDI challenge** 2013 chooses **DIPY**, February, 2013.

.. include:: links_names.inc
dipy-1.11.0/doc/recipes.rst000066400000000000000000000063141476546756600155260ustar00rootroot00000000000000
.. _recipes:

:octicon:`mortar-board` Recipes
===============================

**How do I do X in DIPY?** This page contains a collection of recipes that will help you quickly resolve common basic/advanced operations. If you have a question that is not answered here, please ask on the `dipy discussions`_ or, even better, answer it yourself and contribute to the documentation!
.. dropdown:: How do I convert my tractograms?
    :animate: fade-in-slide-down

    We recommend looking at the tutorials :ref:`Streamline File Formats`.

    .. code-block:: Python

        from dipy.io.streamline import load_tractogram, save_tractogram

        # convert from fib to trk
        my_fib_file_path = '<my_folder>/my_tractogram.fib'
        my_trk_file_path = '<my_folder>/my_tractogram.trk'

        my_trk = load_tractogram(my_fib_file_path, 'same')
        save_tractogram(my_trk, my_trk_file_path)

.. dropdown:: How do I convert my Spherical Harmonics from MRTRIX3 to DIPY?
    :animate: fade-in-slide-down

    Available from DIPY 1.9.0+. See `this thread `_ for more information.

    .. code-block:: Python

        from dipy.reconst.shm import convert_sh_descoteaux_tournier

        convert_sh_descoteaux_tournier(sh_coeffs)

.. dropdown:: How do I convert my tensors from FSL to DIPY or MRTRIX3 to DIPY?
    :animate: fade-in-slide-down

    Available with DIPY 1.9.0+.

    .. code-block:: Python

        from dipy.reconst.utils import convert_tensors
        from dipy.io.image import load_nifti, save_nifti

        data, affine, img = load_nifti('my_data.nii.gz', return_img=True)

        # convert from FSL to DIPY
        otensor = convert_tensors(data, 'fsl', 'dipy')
        save_nifti('my_data_dipy.nii.gz', otensor, affine, hdr=img.header)

        # convert from MRTRIX3 to DIPY
        otensor = convert_tensors(data, 'mrtrix', 'dipy')
        save_nifti('my_data_dipy.nii.gz', otensor, affine, hdr=img.header)

.. dropdown:: How do I save/read the output fields from the Symmetric Diffeomorphic Registration?
    :animate: fade-in-slide-down

    The mapping fields are saved in a Nifti file. The data in the file is organized with shape (X, Y, Z, 3, 2), such that the forward mapping in each voxel is in `data[i, j, k, :, 0]` and the backward mapping in each voxel is in `data[i, j, k, :, 1]`.

    The output fields from the Symmetric Diffeomorphic Registration can be saved and read using the functions `write_mapping` and `read_mapping` respectively.

    .. code-block:: Python

        from dipy.align import read_mapping, write_mapping
        from dipy.align.imwarp import DiffeomorphicMap, SymmetricDiffeomorphicRegistration

        # Load your static and moving images and set up the registration
        # parameters. Then apply the registration
        sdr = SymmetricDiffeomorphicRegistration(metric, level_iters)
        mapping = sdr.optimize(static, moving)

        # Save the mapping
        write_mapping(mapping, 'outputField.nii.gz')

        # If you need to read the mapping back
        mapping = read_mapping('outputField.nii.gz', static_filename, moving_filename)

.. include:: links_names.inc
dipy-1.11.0/doc/reconstruction_models_list.rst000066400000000000000000000237531476546756600215570ustar00rootroot00000000000000
..
list-table:: Reconstruction methods available in DIPY :widths: 10 8 8 8 56 10 :header-rows: 1 * - Method - Single Shell - Multi Shell - Cartesian - Paper Data Descriptions - References * - :ref:`DTI (SLS, WLS, NNLS) ` - :bdg-success:`Yes` - :bdg-success:`Yes` - :bdg-success:`Yes` - Typical b-value = 1000 s/mm^2, maximum b-value 1200 s/mm^2 (some success up to 1500 s/mm^2) - :cite:t:`Basser1994a` * - :ref:`DTI (RESTORE) ` - :bdg-success:`Yes` - :bdg-success:`Yes` - :bdg-success:`Yes` - Typical b-value = 1000 s/mm^2, maximum b-value 1200 s/mm^2 (some success up to 1500 s/mm^2) - :cite:t:`Chang2005`, :cite:t:`Chung2006`, :cite:t:`Yendiki2014` * - :ref:`FwDTI ` - :bdg-danger:`No` - :bdg-success:`Yes` - :bdg-danger:`No` - DTI-style acquisition, multiple b=0, all shells should be within maximum b-value of 1000 s/mm^2 (or 32 directions evenly distributed 500 s/mm^2 and 1500 s/mm^2 per :cite:t:`NetoHenriques2017`) - :cite:t:`Pasternak2009`, :cite:t:`NetoHenriques2017` * - :ref:`DKI - Standard ` - :bdg-danger:`No` - :bdg-success:`Yes` - :bdg-danger:`No` - Dual spin echo diffusion-weighted 2D EPI images were acquired with b values of 0, 500, 1000, 1500, 2000, and 2500 s/mm^2 (max b value of 2000 suggested as sufficient in brain tissue); at least 15 directions - :cite:t:`Jensen2005` * - :ref:`DKI+ Constraints ` - :bdg-danger:`No` - :bdg-success:`Yes` - :bdg-danger:`No` - None - :cite:t:`DelaHaije2020` * - :ref:`DKI - Micro (WMTI) ` - :bdg-danger:`No` - :bdg-success:`Yes` - :bdg-danger:`No` - DKI-style acquisition: at least two non-zero b shells (max b value 2000), minimum of 15 directions; typically b-values in increments of 500 from 0 to 2000, 30 directions - :cite:t:`Fieremans2011`, :cite:t:`Tabesh2011` * - :ref:`Mean Signal DKI ` - :bdg-danger:`No` - :bdg-success:`Yes` - :bdg-danger:`No` - b-values in increments of 500 from 0 to 2000 s/mm^2, 30 directions - :cite:t:`NetoHenriques2018` * - :ref:`CSA ` - :bdg-success:`Yes` - :bdg-danger:`No` - :bdg-danger:`No` - HARDI data (preferably 7T) with at least 200 diffusion images at b=3000 s/mm^2, or multi-shell data with high angular resolution - :cite:t:`Aganj2010` * - Westins CSA - :bdg-success:`Yes` - :bdg-danger:`No` - :bdg-danger:`No` - - * - :ref:`IVIM ` - :bdg-danger:`No` - :bdg-success:`Yes` - :bdg-danger:`No` - low b-values are needed - :cite:t:`LeBihan1988` * - :ref:`IVIM Variable Projection ` - :bdg-danger:`No` - :bdg-success:`Yes` - :bdg-danger:`No` - - :cite:t:`Fadnavis2019` * - SDT - :bdg-success:`Yes` - :bdg-danger:`No` - :bdg-danger:`No` - QBI-style acquisition (60-64 directions, b-value 1000 s/mm^2) - :cite:t:`Descoteaux2009` * - :ref:`DSI ` - :bdg-danger:`No` - :bdg-danger:`No` - :bdg-success:`Yes` - 515 diffusion encodings, b-values from 12,000 to 18,000 s/mm^2. Acceleration in subsequent studies with ~100 diffusion encoding directions in half sphere of the q-space with b-values = 1000, 2000, 3000 s/mm^2) - :cite:t:`Wedeen2008`, :cite:t:`Sotiropoulos2013` * - :ref:`DSID ` - :bdg-danger:`No` - :bdg-danger:`No` - :bdg-success:`Yes` - 203 diffusion encodings (isotropic 3D grid points in the q-space contained within a sphere with radius 3.6), maximum b-value = 4000 s/mm^2 - :cite:t:`CanalesRodriguez2010` * - :ref:`GQI - GQI2 ` - :bdg-danger:`No` - :bdg-success:`Yes` - :bdg-success:`Yes` - Fits any sampling scheme with at least one non-zero b-shell, benefits from more directions. 
Recommended 23 b-shells ranging from 0 to 4000 in a 258 direction grid-sampling scheme - :cite:t:`Yeh2010` * - :ref:`SFM ` - :bdg-success:`Yes` - :bdg-success:`Yes` - :bdg-danger:`No` - At least 40 directions, b-value above 1000 s/mm^2 - :cite:t:`Rokem2015` * - :ref:`Q-Ball (OPDT) ` - :bdg-success:`Yes` - :bdg-danger:`No` - :bdg-danger:`No` - At least 64 directions, maximum b-values 3000-4000 s/mm^2, multi-shell, isotropic voxel size - :cite:t:`Tuch2004`, :cite:t:`Descoteaux2007`, :cite:t:`TristanVega2009b` * - :ref:`SHORE ` - :bdg-danger:`No` - :bdg-success:`Yes` - :bdg-danger:`No` - Multi-shell HARDI data (500, 1000, and 2000 s/mm^2; minimum 2 non-zero b-shells) or DSI (514 images in a cube of five lattice-units, one b=0) - :cite:t:`Merlet2013`, :cite:t:`Ozarslan2008`, :cite:t:`Ozarslan2009` * - :ref:`MAP-MRI ` - :bdg-danger:`No` - :bdg-success:`Yes` - :bdg-danger:`No` - Six unit sphere shells with b = 1000, 2000, 3000, 4000, 5000, 6000 s/mm^2 along 19, 32, 56, 87, 125, and 170 directions (see :cite:t:`Olson2019` for candidate sub-sampling schemes) - :cite:t:`Ozarslan2013`, :cite:t:`Olson2019` * - :ref:`MAP+ Constraints ` - :bdg-danger:`No` - :bdg-success:`Yes` - :bdg-danger:`No` - - :cite:t:`DelaHaije2020` * - MAPL - :bdg-danger:`No` - :bdg-success:`Yes` - :bdg-danger:`No` - Multi-shell similar to WU-Minn HCP, with minimum of 60 samples from 2 shells b-value 1000 and 3000 s/mm^2 - :cite:t:`Fick2016b` * - :ref:`CSD ` - :bdg-success:`Yes` - :bdg-danger:`No` - :bdg-danger:`No` - Minimum: 20 gradient directions and a b-value of 1000 s/mm^2; benefits additionally from 60 direction HARDI data with b-value = 3000 s/mm^2 or multi-shell - :cite:t:`Tournier2004`, :cite:t:`Tournier2007`, :cite:t:`Descoteaux2007` * - :ref:`SMS/MT CSD ` - :bdg-danger:`No` - :bdg-success:`Yes` - :bdg-danger:`No` - 5 b=0, 50 directions at 3 non-zero b-shells: b=1000, 2000, 3000 s/mm^2 - :cite:t:`Jeurissen2014` * - :ref:`FORECAST ` - :bdg-danger:`No` - :bdg-success:`Yes` - :bdg-danger:`No` - Multi-shell 64 direction b-values of 1000, 2000 s/mm^2 as in :cite:t:`Alexander2017`. 
Original model used 1480 s/mm^2 with 92 directions and 36 b=0 - :cite:t:`Anderson2005`, :cite:t:`Alexander2017` * - :ref:`RUMBA-SD ` - :bdg-success:`Yes` - :bdg-success:`Yes` - :bdg-success:`Yes` - HARDI data with 64 directions at b = 2500 s/mm^2, 3 b=0 images (full original acquisition: 256 directions on a sphere at b = 2500 s/mm^2, 36 b=0 volumes) - :cite:t:`CanalesRodriguez2015` * - :ref:`QTI ` - :bdg-danger:`No` - :bdg-success:`Yes` - :bdg-danger:`No` - Evenly distributed geometric sampling scheme of 216 measurements, 5 b-values (50, 250, 500, 1000, 2000 s/mm^2), measurement tensors of four shapes: stick, prolate, sphere, and plane - :cite:t:`Westin2016` * - :ref:`QTI+ ` - :bdg-danger:`No` - :bdg-success:`Yes` - :bdg-danger:`No` - At least one b=0, minimum of 39 acquisitions with spherical and linear encoding; optimal 120 (see :cite:t:`Morez2023`), ideal 217 (see Table 1 in :cite:t:`Herberthson2021`) - :cite:t:`Herberthson2021`, :cite:t:`Morez2023` * - Ball & Stick - :bdg-success:`Yes` - :bdg-success:`Yes` - :bdg-danger:`No` - Three b=0, 60 evenly distributed directions per :cite:t:`Jones1999` at b-value 1000 s/mm^2 - :cite:t:`Behrens2003` * - :ref:`QTau-MRI ` - :bdg-danger:`No` - :bdg-success:`Yes` - :bdg-danger:`No` - Minimum 200 volumes of multi-spherical dMRI (multi-shell, multi-diffusion time; varying gradient directions, gradient strengths, and diffusion times) - :cite:t:`Fick2018` * - Power Map - :bdg-success:`Yes` - :bdg-success:`Yes` - :bdg-danger:`No` - HARDI data with 60 directions at b-value = 3000 s/mm^2, 7 b=0 (Minimum: HARDI data with at least 30 directions) - :cite:t:`DellAcqua2014` * - :ref:`SMT / SMT2 ` - :bdg-danger:`No` - :bdg-success:`Yes` - :bdg-danger:`No` - 72 directions at each of 5 evenly spaced b-values from 0.5 to 2.5 ms/μm^2, 5 b-values from 3 to 5 ms/μm^2, 5 b-values from 5.5 to 7.5 ms/μm^2, and 3 b-values from 8 to 9 ms/μm^2 / b=0 ms/μm^2, and along 33 directions at b-values from 0.2–3 ms/μm^2 in steps of 0.2 ms/μm^2 (24 point spherical design and 9 directions identified for rapid kurtosis estimation) - :cite:t:`Kaden2016b`, :cite:t:`NetoHenriques2019` * - :ref:`CTI ` - :bdg-danger:`No` - :bdg-success:`Yes` - :bdg-danger:`No` - - :cite:t:`NetoHenriques2020`, :cite:t:`NetoHenriques2021b`, :cite:t:`Novello2022` dipy-1.11.0/doc/references.bib000066400000000000000000003333741476546756600161500ustar00rootroot00000000000000%% article @article{Aganj2010, author = {Iman Aganj and Christophe Lenglet and Guillermo Sapiro and Essa Yacoub and Kamil Ugurbil and Noam Harel}, title = {{Reconstruction of the orientation distribution function in single- and multiple-shell q-ball imaging within constant solid angle}}, journal = {Magnetic Resonance in Medicine}, volume = {64}, number = {2}, pages = {554-566}, year = {2010}, doi = {https://doi.org/10.1002/mrm.22365}, url = {https://onlinelibrary.wiley.com/doi/abs/10.1002/mrm.22365} } @article{Alexander2017, author = {Lindsay M.
Alexander and Jasmine Escalera and Lei Ai and Charissa Andreotti and Karina Febre and Alexander Mangone and Natan Vega-Potler and Nicolas Langer and Alexis Alexander and Meagan Kovacs and Shannon Litke and Bridget O'Hagan and Jennifer Andersen and Batya Bronstein and Anastasia Bui and Marijayne Bushey and Henry Butler and Victoria Castagna and Nicolas Camacho and Elisha Chan and Danielle Citera and Jon Clucas and Samantha Cohen and Sarah Dufek and Megan Eaves and Brian Fradera and Judith Gardner and Natalie Grant-Villegas and Gabriella Green and Camille Gregory and Emily Hart and Shana Harris and Megan Horton and Danielle Kahn and Katherine Kabotyanski and Bernard Karmel and Simon P. Kelly and Kayla Kleinman and Bonhwang Koo and Eliza Kramer and Elizabeth Lennon and Catherine Lord and Ginny Mantello and Amy Margolis and Kathleen R. Merikangas and Judith Milham and Giuseppe Minniti and Rebecca Neuhaus and Alexandra Levine and Yael Osman and Lucas C. Parra and Ken R. Pugh and Amy Racanello and Anita Restrepo and Tian Saltzman and Batya Septimus and Russell Tobe and Rachel Waltz and Anna Williams and Anna Yeo and Francisco X. Castellanos and Arno Klein and Tomas Paus and Bennett L. Leventhal and R. Cameron Craddock and Harold S. Koplewicz and Michael P. Milham}, title = {{An open resource for transdiagnostic research in pediatric mental health and learning disorders}}, journal = {Scientific Data}, volume = {4}, number = {1}, pages = {170181}, year = {2017}, month = {December}, doi = {10.1038/sdata.2017.181}, url = {https://doi.org/10.1038/sdata.2017.181} } @article{Alves2022, author = {Rita Alves and Rafael Neto Henriques and Leevi Kerkelä and Cristina Chavarr\'{i}as and Sune N. Jespersen and Noam Shemesh}, title = {{Correlation Tensor MRI deciphers underlying kurtosis sources in stroke}}, journal = {NeuroImage}, volume = {247}, pages = {118833}, year = {2022}, doi = {10.1016/j.neuroimage.2021.118833}, url = {https://doi.org/10.1016/j.neuroimage.2021.118833} } @article{Anderson2005, author = {Adam W. Anderson}, title = {{Measurement of fiber orientation distributions using high angular resolution diffusion imaging}}, journal = {Magnetic Resonance in Medicine}, volume = {54}, number = {5}, pages = {1194-1206}, year = {2005}, doi = {10.1002/mrm.20667}, url = {https://doi.org/10.1002/mrm.20667} } @article{Arsigny2006, author = {Vincent Arsigny and Pierre Fillard and Xavier Pennec and Nicholas Ayache}, title = {{Log-Euclidean metrics for fast and simple calculus on diffusion tensors}}, journal = {Magnetic Resonance in Medicine}, volume = {56}, number = {2}, pages = {411-421}, year = {2006}, doi = {10.1002/mrm.20965}, url = {https://doi.org/10.1002/mrm.20965} } @article{Assaf2004, author = {Yaniv Assaf and Raisa Z. Freidlin and Gustavo K. Rohde and Peter J. Basser}, title = {{New modeling and experimental framework to characterize hindered and restricted water diffusion in brain white matter}}, journal = {Magnetic Resonance in Medicine}, volume = {52}, number = {5}, pages = {965-978}, year = {2004}, doi = {10.1002/mrm.20274}, url = {https://doi.org/10.1002/mrm.20274} } @article{Avants2011, author = {Brian B. Avants and Nicholas J. Tustison and Jue Wu and Philip A. Cook and James C. 
Gee}, title = {{An Open Source Multivariate Framework for n-Tissue Segmentation with Evaluation on Public Data}}, journal = {Neuroinformatics}, volume = {9}, number = {4}, pages = {381--400}, year = {2011}, doi = {10.1007/s12021-011-9109-y}, url = {https://doi.org/10.1007/s12021-011-9109-y}, } @article{Avants2009, author = {Brian B. Avants and Nick Tustison and Gang Song}, title = {{Advanced Normalization Tools: V1.0}}, journal = {The Insight Journal}, volume = {2}, number = {365}, pages = {1-35}, year = {2009}, month = {July} } @article{Avants2008, author = {Brian B. Avants and Charles L. Epstein and Murray Grossman and James C. Gee}, title = {{Symmetric diffeomorphic image registration with cross-correlation: Evaluating automated labeling of elderly and neurodegenerative brain}}, journal = {Medical Image Analysis}, volume = {12}, number = {1}, pages = {26-41}, year = {2008}, doi = {10.1016/j.media.2007.06.004}, url = {https://doi.org/10.1016/j.media.2007.06.004}, note = {Special Issue on The Third International Workshop on Biomedical Image Registration – WBIR 2006} } @article{Avram2015, author = {Alexandru V. Avram and Joelle E. Sarlls and Alan S. Barnett and Evren {\"O}zarslan and Cibu Thomas and M. Okan Irfanoglu and Elizabeth Hutchinson and Carlo Pierpaoli and Peter J. Basser}, title = {{Clinical feasibility of using mean apparent propagator (MAP) MRI to characterize brain tissue microstructure}}, journal = {NeuroImage}, volume = {111}, pages = {159--169}, year = {2015}, doi = {10.1016/j.neuroimage.2015.01.007}, url = {https://doi.org/10.1016/j.neuroimage.2015.01.007} } @article{Aydogan2021, author = {Dogu Baran Aydogan and Yonggang Shi}, title = {{Parallel Transport Tractography}}, journal = {IEEE Transactions on Medical Imaging}, volume = {40}, number = {2}, pages = {635-647}, year = {2021}, doi = {10.1109/TMI.2020.3034038} } @article{Basser1996, author = {Peter J. Basser and Carlo Pierpaoli}, title = {{Microstructural and Physiological Features of Tissues Elucidated by Quantitative-Diffusion-Tensor MRI}}, journal = {Journal of Magnetic Resonance, Series B}, volume = {111}, number = {3}, pages = {209-219}, year = {1996}, doi = {10.1006/jmrb.1996.0086}, url = {https://doi.org/10.1006/jmrb.1996.0086} } @article{Basser1994b, author = {Peter J. Basser and James Mattiello and Denis {Le Bihan}}, title = {{Estimation of the Effective Self-Diffusion Tensor from the NMR Spin Echo}}, journal = {Journal of Magnetic Resonance, Series B}, volume = {103}, number = {3}, pages = {247-254}, year = {1994}, month = {March}, doi = {10.1006/jmrb.1994.1037}, url = {https://doi.org/10.1006/jmrb.1994.1037} } @article{Basser1994a, author = {Peter J. Basser and James Mattiello and Denis {Le Bihan}}, title = {{MR Diffusion Tensor Spectroscopy and Imaging}}, journal = {Biophysical Journal}, volume = {66}, number = {1}, pages = {259--267}, year = {1994}, month = {January}, doi = {http://dx.doi.org/10.1016/S0006-3495(94)80775-1} } @article{Batchelor2005, author = {Philipp G. Batchelor and Maher Moakher and David Atkinson and Fernando Calamante and Alan Connelly}, title = {{A rigorous framework for diffusion tensor calculus}}, journal = {Magnetic Resonance in Medicine}, volume = {53}, number = {1}, pages = {221-225}, year = {2005}, doi = {10.1002/mrm.20334}, url = {https://doi.org/10.1002/mrm.20334} } @article{Behrens2003, author = {Timothy E. J. Behrens and Mark W. Woolrich and Mark Jenkinson and Heidi Johansen-Berg and Rita Gouveia Nunes and Stuart Clare and Paul M.
Matthews and John Michael Brady and Stephen M. Smith}, title = {{Characterization and propagation of uncertainty in diffusion-weighted MR imaging}}, journal = {Magnetic Resonance in Medicine}, volume = {50}, number = {5}, pages = {1077-1088}, year = {2003}, doi = {10.1002/mrm.10609}, url = {https://doi.org/10.1002/mrm.10609}, } @article{Behrens2007, author = {Timothy E. J. Behrens and Heidi Johansen-Berg and Saad Jbabdi and Matthew F.S. Rushworth and Mark W. Woolrich}, title = {{Probabilistic diffusion tractography with multiple fibre orientations: What can we gain?}}, journal = {NeuroImage}, volume = {34}, number = {1}, pages = {144-155}, year = {2007}, doi = {10.1016/j.neuroimage.2006.09.018}, url = {https://doi.org/10.1016/j.neuroimage.2006.09.018} } @article{Berman2008, author = {Jeffrey I. Berman and SungWon Chung and Pratik Mukherjee and Christopher P. Hess and Eric T. Han and Roland G. Henry}, title = {{Probabilistic streamline q-ball tractography using the residual bootstrap}}, journal = {NeuroImage}, volume = {39}, number = {1}, pages = {215-222}, year = {2008}, doi = {10.1016/j.neuroimage.2007.08.021}, url = {https://doi.org/10.1016/j.neuroimage.2007.08.021} } @article{Biggs1997, author = {David S. C. Biggs and Mark Andrews}, title = {{Acceleration of iterative image restoration algorithms}}, journal = {Applied Optics}, publisher = {Optica Publishing Group}, volume = {36}, number = {8}, pages = {1766--1775}, year = {1997}, month = {March}, doi = {10.1364/AO.36.001766}, url = {https://doi.org/10.1364/AO.36.001766} } @article{Bresenham1965, author = {Jack E. Bresenham}, title = {{Algorithm for computer control of a digital plotter}}, journal = {IBM Systems Journal}, volume = {4}, number = {1}, pages = {25-30}, year = {1965}, doi = {10.1147/sj.41.0025} } @article{CanalesRodriguez2015, author = {Erick J. Canales-Rodr\'{i}guez and Alessandro Daducci and Stamatios N. Sotiropoulos and Emmanuel Caruyer and Santiago Aja-Fern\'{a}ndez and Joaquim Radua and Jes\'{u}s M. Yurramendi Mendizabal and Yasser Iturria-Medina and Lester Melie-Garc\'{i}a and Yasser Alem\'{a}n-G\'{o}mez and Jean-Philippe Thiran and Salvador Sarr\'{o} and Edith Pomarol-Clotet and Raymond Salvador}, title = {{Spherical Deconvolution of Multichannel Diffusion MRI Data with Non-Gaussian Noise Models and Spatial Regularization}}, journal = {PLOS ONE}, volume = {10}, number = {10}, pages = {e0138910}, year = {2015}, doi = {10.1371/journal.pone.0138910}, url = {https://doi.org/10.1371/journal.pone.0138910} } @article{CanalesRodriguez2010, author = {Erick J. Canales-Rodr\'{i}guez and Yasser Iturria-Medina and Yasser Alem\'{a}n-G\'{o}mez and Lester Melie-Garc\'{i}a}, title = {{Deconvolution in diffusion spectrum imaging}}, journal = {NeuroImage}, volume = {50}, number = {1}, pages = {136-149}, year = {2010}, doi = {10.1016/j.neuroimage.2009.11.066}, url = {https://doi.org/10.1016/j.neuroimage.2009.11.066} } @article{Carlson1995, author = {Billie C. 
Carlson}, title = {{Numerical computation of real or complex elliptic integrals}}, journal = {Numerical Algorithms}, volume = {10}, number = {1}, pages = {13--26}, doi = {10.1007/BF02198293}, year = {1995}, url = {https://doi.org/10.1007/BF02198293} } @article{Caruyer2013, author = {Emmanuel Caruyer and Christophe Lenglet and Guillermo Sapiro and Rachid Deriche}, title = {{Design of multishell sampling schemes with uniform coverage in diffusion MRI}}, journal = {Magnetic Resonance in Medicine}, volume = {69}, number = {6}, pages = {1534-1540}, year = {2013}, doi = {10.1002/mrm.24736}, url = {https://doi.org/10.1002/mrm.24736} } @article{Chamberlain2010, author = {Samuel R. Chamberlain and Adam Hampshire and Lara A. Menzies and Eleftherios Garyfallidis and Jon E. Grant and Brian L. Odlaug and Kevin Craig and Naomi Fineberg and Barbara J. Sahakian}, title = {{Reduced Brain White Matter Integrity in Trichotillomania: A Diffusion Tensor Imaging Study}}, journal = {Archives of General Psychiatry}, volume = {67}, number = {9}, pages = {965-971}, year = {2010}, month = {September}, doi = {10.1001/archgenpsychiatry.2010.109}, url = {https://doi.org/10.1001/archgenpsychiatry.2010.109} } @article{Chambolle2004, author = {Antonin Chambolle}, title = {{An Algorithm for Total Variation Minimization and Applications}}, journal = {Journal of Mathematical Imaging and Vision}, volume = {20}, number = {1}, pages = {89--97}, year = {2004}, doi = {10.1023/B:JMIV.0000011325.36760.1e}, url = {https://doi.org/10.1023/B:JMIV.0000011325.36760.1e} } @article{Chandio2020a, author = {Bramsh Qamar Chandio and Shannon Leigh Risacher and Franco Pestilli and Daniel Bullock and Fang-Cheng Yeh and Serge Koudoro and Ariel Rokem and Jaroslaw Harezlak and Eleftherios Garyfallidis}, title = {{Bundle analytics, a computational framework for investigating the shapes and profiles of brain pathways across populations}}, journal = {Scientific Reports}, volume = {10}, pages = {17149}, year = {2020}, month = {October}, doi = {10.1038/s41598-020-74054-4}, url = {https://doi.org/10.1038/s41598-020-74054-4} } @article{Chang2005, author = {Lin-Ching Chang and Derek K. 
Jones and Carlo Pierpaoli}, title = {{RESTORE: Robust estimation of tensors by outlier rejection}}, journal = {Magnetic Resonance in Medicine}, volume = {53}, number = {5}, pages = {1088-1095}, year = {2005}, doi = {10.1002/mrm.20426}, url = {https://doi.org/10.1002/mrm.20426} } @article{Cheng2022, author = {Hu Cheng and Sophia Vinci-Booher and Jian Wang and Bradley Caron and Qiuting Wen and Sharlene Newman and Franco Pestilli}, title = {{Denoising diffusion weighted imaging data using convolutional neural networks}}, journal = {PLOS ONE}, publisher = {Public Library of Science}, volume = {17}, number = {9}, pages = {1-24}, year = {2022}, month = {September}, doi = {10.1371/journal.pone.0274396}, url = {https://doi.org/10.1371/journal.pone.0274396} } @article{Cheng2020, author = {Hu Cheng and Sharlene Newman and Maryam Afzali and Shreyas Fadnavis and Eleftherios Garyfallidis}, title = {{Segmentation of the brain using direction-averaged signal of DWI images}}, journal = {Magnetic Resonance Imaging}, publisher = {Elsevier}, volume = {69}, pages = {1-7}, year = {2020}, month = {June}, doi = {10.1016/j.mri.2020.02.010}, url = {https://doi.org/10.1016/j.mri.2020.02.010} } @article{Chang2012, author = {Lin-Ching Chang and Lindsay Walker and Carlo Pierpaoli}, title = {{Informed RESTORE: A method for robust estimation of diffusion tensor from low redundancy datasets in the presence of physiological noise artifacts}}, journal = {Magnetic Resonance in Medicine}, volume = {68}, number = {5}, pages = {1654-1663}, year = {2012}, doi = {10.1002/mrm.24173}, url = {https://doi.org/10.1002/mrm.24173} } @article{Chung2006, author = {SungWon Chung and Ying Lu and Roland G. Henry}, title = {{Comparison of bootstrap approaches for estimation of uncertainties of DTI parameters}}, journal = {NeuroImage}, volume = {33}, number = {2}, pages = {531-541}, year = {2006}, doi = {10.1016/j.neuroimage.2006.07.001}, url = {https://doi.org/10.1016/j.neuroimage.2006.07.001} } @article{CohenAdad2011, author = {Julien Cohen-Adad and Maxime Descoteaux and Lawrence L. Wald}, title = {{Quality assessment of high angular resolution diffusion imaging data using bootstrap on Q-ball reconstruction}}, journal = {Journal of Magnetic Resonance Imaging}, volume = {33}, number = {5}, pages = {1194-1208}, year = {2011}, doi = {10.1002/jmri.22535}, url = {https://doi.org/10.1002/jmri.22535} } @article{Collier2015, author = {Quinten Collier and Jelle Veraart and Ben Jeurissen and Arno den Dekker and Jan Sijbers}, title = {{Iterative reweighted linear least squares for accurate, fast, and robust estimation of diffusion magnetic resonance parameters: IRLLS for Estimation of Diffusion MR Parameters}}, journal = {Magnetic Resonance in Medicine}, volume = {73}, number = {6}, pages = {2174-2184}, year = {2015}, doi = {10.1002/mrm.25351}, url = {https://doi.org/10.1002/mrm.25351} } @article{Constantinides1997, author = {Chris D. Constantinides and Ergin Atalar and Elliot R. McVeigh}, title = {{Signal-to-noise measurements in magnitude images from NMR phased arrays}}, journal = {Magnetic Resonance in Medicine}, volume = {38}, number = {5}, pages = {852-857}, year = {1997}, doi = {10.1002/mrm.1910380524}, url = {https://doi.org/10.1002/mrm.1910380524} } @article{Cook2007, author = {Philip A. Cook and Mark Symms and Philippa A. Boulby and Daniel C.
Alexander}, title = {{Optimal acquisition orders of diffusion-weighted MRI measurements}}, journal = {Journal of Magnetic Resonance Imaging}, volume = {25}, number = {5}, pages = {1051-1058}, year = {2007}, doi = {10.1002/jmri.20905}, url = {https://doi.org/10.1002/jmri.20905} } @article{Correia2011b, author = {Marta Morgado Correia and Virginia F. J. Newcombe and Guy B. Williams}, title = {{Contrast-to-noise ratios for indices of anisotropy obtained from diffusion MRI: A study with standard clinical b-values at 3T}}, journal = {NeuroImage}, volume = {57}, number = {3}, pages = {1103-1115}, year = {2011}, month = {August}, doi = {10.1016/j.neuroimage.2011.03.004}, url = {https://doi.org/10.1016/j.neuroimage.2011.03.004}, note = {Special Issue: Educational Neuroscience} } @article{Cote2013, author = {Marc-Alexandre C\^{o}t\'{e} and Gabriel Girard and Arnaud Bor\'{e} and Eleftherios Garyfallidis and Jean-Christophe Houde and Maxime Descoteaux}, title = {{Tractometer: Towards validation of tractography pipelines}}, journal = {Medical Image Analysis}, volume = {17}, number = {7}, pages = {844-857}, year = {2013}, month = {October}, doi = {10.1016/j.media.2013.03.009}, url = {https://doi.org/10.1016/j.media.2013.03.009}, note = {Special Issue on the 2012 Conference on Medical Image Computing and Computer Assisted Intervention} } @article{Coupe2012, author = {Pierrick Coup\'{e} and Jos\'{e} V. Manj\'{o}n and Montserrat Robles and Louis D. Collins}, title = {{Adaptive Multiresolution Non-Local Means Filter for 3D MR Image Denoising}}, journal = {IET Image Processing}, publisher = {Institution of Engineering and Technology}, volume = {6}, number = {5}, pages = {558-568}, year = {2012}, month = {July}, doi = {10.1049/iet-ipr.2011.0161}, url = {https://doi.org/10.1049/iet-ipr.2011.0161} } @article{Coupe2008, author = {Pierrick Coup\'{e} and Pierre Yger and Sylvain Prima and Pierre Hellier and Charles Kervrann and Christian Barillot}, title = {{An Optimized Blockwise Nonlocal Means Denoising Filter for 3-D Magnetic Resonance Images}}, journal = {IEEE Transactions on Medical Imaging}, volume = {27}, number = {4}, pages = {425-441}, year = {2008}, doi = {10.1109/TMI.2007.906087}, url = {https://doi.org/10.1109/TMI.2007.906087} } @article{Craven1979, author = {Peter Craven and Grace Wahba}, title = {{Smoothing Noisy Data with Spline Functions - Estimating the Correct Degree of Smoothing by the Method of Generalized Cross-Validation}}, journal = {Numerische Mathematik}, volume = {31}, number = {4}, pages = {377-403}, year = {1979} } @article{DelaHaije2020, author = {Tom {Dela Haije} and Evren {\"O}zarslan and Aasa Feragen}, title = {{Enforcing necessary non-negativity constraints for common diffusion MRI models using sum of squares programming}}, journal = {NeuroImage}, volume = {209}, pages = {116405}, year = {2020}, doi = {10.1016/j.neuroimage.2019.116405}, url = {https://doi.org/10.1016/j.neuroimage.2019.116405} } @article{DellAcqua2007, author = {Flavio Dell'Acqua and Giovanna Rizzo and Paola Scifo and Rafael Alonso Clarke and Giuseppe Scotti and Ferruccio Fazio}, title = {{A Model-Based Deconvolution Approach to Solve Fiber Crossing in Diffusion-Weighted MR Imaging}}, journal = {IEEE Transactions on Biomedical Engineering}, volume = {54}, number = {3}, pages = {462-472}, year = {2007}, doi = {10.1109/TBME.2006.888830} } @article{Descoteaux2011, author = {Maxime Descoteaux and Rachid Deriche and Denis {Le Bihan} and Jean-Fran\c{c}ois Mangin and Cyril Poupon}, title = {{Multiple q-shell
diffusion propagator imaging}}, journal = {Medical Image Analysis}, volume = {15}, number = {4}, pages = {603-621}, year = {2011}, doi = {10.1016/j.media.2010.07.001}, url = {https://doi.org/10.1016/j.media.2010.07.001}, note = {Special section on IPMI 2009} } @article{Descoteaux2009, author = {Maxime Descoteaux and Rachid Deriche and Thomas R. Kn\"{o}sche and Alfred Anwander}, title = {{Deterministic and Probabilistic Tractography Based on Complex Fibre Orientation Distributions}}, journal = {IEEE Transactions on Medical Imaging}, volume = {28}, number = {2}, pages = {269-286}, year = {2009}, month = {February}, doi = {10.1109/TMI.2008.2004424}, url = {https://doi.org/10.1109/TMI.2008.2004424} } @article{Descoteaux2007, author = {Maxime Descoteaux and Elaine Angelino and Shaun Fitzgibbons and Rachid Deriche}, title = {{Regularized, fast, and robust analytical Q-ball imaging}}, journal = {Magnetic Resonance in Medicine}, volume = {58}, number = {3}, pages = {497-510}, year = {2007}, doi = {10.1002/mrm.21277}, url = {https://doi.org/10.1002/mrm.21277} } @article{DiCiccio1996, author = {Thomas J. DiCiccio and Bradley Efron}, title = {{Bootstrap confidence intervals}}, journal = {Statistical Science}, publisher = {Institute of Mathematical Statistics}, volume = {11}, number = {3}, pages = {189--228}, year = {1996}, doi = {10.1214/ss/1032280214}, url = {https://doi.org/10.1214/ss/1032280214} } @article{Duits2011, author = {Remco Duits and Erik Franken}, title = {Left-Invariant Diffusions on the Space of Positions and Orientations and their Application to Crossing-Preserving Smoothing of HARDI images}, journal = {International Journal of Computer Vision}, year = {2011}, volume = {92}, number = {3}, pages = {231--264}, doi = {10.1007/s11263-010-0332-z}, url = {https://doi.org/10.1007/s11263-010-0332-z} } @article{Efron1979, author = {Bradley Efron}, title = {{Bootstrap Methods: Another Look at the Jackknife}}, journal = {The Annals of Statistics}, publisher = {Institute of Mathematical Statistics}, volume = {7}, number = {1}, pages = {1--26}, year = {1979}, doi = {10.1214/aos/1176344552}, url = {https://doi.org/10.1214/aos/1176344552} } @article{Ennis2006, author = {Daniel B. Ennis and Gordon Kindlmann}, title = {{Orthogonal tensor invariants and the analysis of diffusion tensor magnetic resonance images}}, journal = {Magnetic Resonance in Medicine}, volume = {55}, number = {1}, pages = {136-146}, year = {2006}, doi = {10.1002/mrm.20741}, url = {https://doi.org/10.1002/mrm.20741} } @article{Farooq2016, author = {Hamza Farooq and Junqian Xu and Jung Who Nam and Daniel F. Keefe and Essa Yacoub and Tryphon Georgiou and Christophe Lenglet}, title = {Microstructure Imaging of Crossing (MIX) White Matter Fibers from diffusion MRI}, journal = {Scientific Reports}, volume = {6}, number = {1}, pages = {38927}, year = {2016}, month = {December}, doi = {10.1038/srep38927}, url = {https://doi.org/10.1038/srep38927} } @article{Federau2012, author = {Christian Federau and Philippe Maeder and Kieran O'Brien and Patrick Browaeys and Reto Meuli and Patric Hagmann}, title = {{Quantitative Measurement of Brain Perfusion with Intravoxel Incoherent Motion MR Imaging}}, journal = {Radiology}, volume = {265}, number = {3}, pages = {874-881}, year = {2012}, doi = {10.1148/radiol.12120584}, url = {https://doi.org/10.1148/radiol.12120584} } @article{Fick2018, author = {Rutger H. J. 
Fick and Alexandra Petiet and Mathieu Santin and Anne-Charlotte Philippe and Stephane Lehericy and Rachid Deriche and Demian Wassermann}, title = {{Non-parametric graphnet-regularized representation of dMRI in space and time}}, journal = {Medical Image Analysis}, volume = {43}, pages = {37-53}, year = {2018}, doi = {10.1016/j.media.2017.09.002}, url = {https://doi.org/10.1016/j.media.2017.09.002} } @article{Fick2016b, author = {Rutger H. J. Fick and Demian Wassermann and Emmanuel Caruyer and Rachid Deriche}, title = {{MAPL: Tissue microstructure estimation using Laplacian-regularized MAP-MRI and its application to HCP data}}, journal = {NeuroImage}, volume = {134}, pages = {365--385}, year = {2016}, month = {July}, doi = {10.1016/j.neuroimage.2016.03.046}, url = {https://doi.org/10.1016/j.neuroimage.2016.03.046} } @article{Fieremans2013, author = {Els Fieremans and Andreana Benitez and Jens H. Jensen and Maria F. Falangola and Ali Tabesh and Rachael L. Deardorff and Maria Vittoria S. Spampinato and James S. Babb and Dmitry S. Novikov and Steven H. Ferris and Joseph A. Helpern}, title = {{Novel White Matter Tract Integrity Metrics Sensitive to Alzheimer Disease Progression}}, journal = {American Journal of Neuroradiology}, volume = {34}, number = {11}, pages = {2105--2112}, year = {2013}, doi = {10.3174/ajnr.A3553}, url = {https://www.ajnr.org/content/34/11/2105} } @article{Fieremans2011, author = {Els Fieremans and Jens H. Jensen and Joseph A. Helpern}, title = {{White matter characterization with diffusional kurtosis imaging}}, journal = {NeuroImage}, volume = {58}, number = {1}, pages = {177-188}, year = {2011}, doi = {10.1016/j.neuroimage.2011.06.006}, url = {https://doi.org/10.1016/j.neuroimage.2011.06.006} } @article{Fonov2013, author = {Vladimir Fonov and Alan C. Evans and Kelly Botteron and C. Robert Almli and Robert C. McKinstry and D. Louis Collins}, title = {{Unbiased average age-appropriate atlases for pediatric studies}}, journal = {NeuroImage}, volume = {54}, number = {1}, pages = {313-327}, year = {2011}, doi = {10.1016/j.neuroimage.2010.07.033}, url = {https://doi.org/10.1016/j.neuroimage.2010.07.033} } @article{Fonov2009, author = {Vladimir S. Fonov and Alan C. Evans and Robert C. McKinstry and C. Robert Almli and D. 
Louis Collins}, title = {{Unbiased nonlinear average age-appropriate brain templates from birth to adulthood}}, journal = {NeuroImage}, volume = {47}, pages = {S102}, year = {2009}, doi = {10.1016/S1053-8119(09)70884-5}, url = {https://doi.org/10.1016/S1053-8119(09)70884-5}, note = {Organization for Human Brain Mapping 2009 Annual Meeting} } @article{Garyfallidis2021, author = {Eleftherios Garyfallidis and Serge Koudoro and Javier Guaje and Marc-Alexandre C\^{o}t\'{e} and Soham Biswas and David Reagan and Nasim Anousheh and Filipi Silva and Geoffrey Fox and Fury Contributors}, title = {{FURY: advanced scientific visualization}}, journal = {Journal of Open Source Software}, publisher = {The Open Journal}, volume = {6}, number = {64}, pages = {3384}, year = {2021}, doi = {10.21105/joss.03384}, url = {https://doi.org/10.21105/joss.03384} } @article{Garyfallidis2018, author = {Eleftherios Garyfallidis and Marc-Alexandre C\^{o}t\'{e} and Fran\c{c}ois Rheault and Jasmeen Sidhu and Janice Hau and Laurent Petit and David Fortin and Stephen Cunanne and Maxime Descoteaux}, title = {{Recognition of white matter bundles using local and global streamline-based registration and clustering}}, journal = {NeuroImage}, volume = {170}, pages = {283-295}, year = {2018}, month = {April}, doi = {10.1016/j.neuroimage.2017.07.015}, url = {https://doi.org/10.1016/j.neuroimage.2017.07.015}, note = {Segmenting the Brain} } @article{Garyfallidis2015, author = {Eleftherios Garyfallidis and Omar Ocegueda and Demian Wassermann and Maxime Descoteaux}, title = {{Robust and efficient linear registration of white-matter fascicles in the space of streamlines}}, journal = {NeuroImage}, volume = {117}, pages = {124-140}, year = {2015}, month = {August}, doi = {10.1016/j.neuroimage.2015.05.016}, url = {https://doi.org/10.1016/j.neuroimage.2015.05.016} } @article{Garyfallidis2014a, author = {Eleftherios Garyfallidis and Matthew Brett and Bagrat Amirbekian and Ariel Rokem and Stefan Van Der Walt and Maxime Descoteaux and Ian Nimmo-Smith and Dipy Contributors}, title = {{Dipy, a library for the analysis of diffusion MRI data}}, journal = {Frontiers in Neuroinformatics}, volume = {8}, year = {2014}, month = {February}, doi = {10.3389/fninf.2014.00008}, url = {https://doi.org/10.3389/fninf.2014.00008} } @article{Garyfallidis2012a, author = {Eleftherios Garyfallidis and Matthew Brett and Marta M. Correia and Guy B. Williams and Ian Nimmo-Smith}, title = {{QuickBundles, a Method for Tractography Simplification}}, journal = {Frontiers in Neuroscience}, volume = {6}, number = {}, pages = {175}, year = {2012}, month = {December}, doi = {10.3389/fnins.2012.00175}, url = {https://doi.org/10.3389/fnins.2012.00175} } @article{Gautschi1978, author = {Walter Gautschi and Josef Slavik}, title = {{On the computation of modified Bessel function ratios}}, journal = {Mathematics of Computation}, volume = {32}, number = {143}, pages = {865–875}, year = {1978}, doi = {10.1090/S0025-5718-1978-0470267-9} } @article{Girard2014, author = {Gabriel Girard and Kevin Whittingstall and Rachid Deriche and Maxime Descoteaux}, title = {{Towards quantitative connectivity analysis: reducing tractography biases}}, journal = {NeuroImage}, volume = {98}, pages = {266-278}, year = {2014}, month = {September}, doi = {10.1016/j.neuroimage.2014.04.074}, url = {https://doi.org/10.1016/j.neuroimage.2014.04.074} } @article{Glenn2015, author = {G. Russell Glenn and Joseph A. Helpern and Ali Tabesh and Jens H. 
Jensen}, title = {{Quantitative assessment of diffusional kurtosis anisotropy}}, journal = {NMR in Biomedicine}, volume = {28}, number = {4}, pages = {448-459}, year = {2015}, doi = {10.1002/nbm.3271}, url = {https://doi.org/10.1002/nbm.3271} } @article{Gorgolewski2016, author = {Krzysztof J. Gorgolewski and Tibor Auer and Vince D. Calhoun and R. Cameron Craddock and Samir Das and Eugene P. Duff and Guillaume Flandin and Satrajit S. Ghosh and Tristan Glatard and Yaroslav O. Halchenko and Daniel A. Handwerker and Michael Hanke and David Keator and Xiangrui Li and Zachary Michael and Camille Maumet and B. Nolan Nichols and Thomas E. Nichols and John Pellman and Jean-Baptiste Poline and Ariel Rokem and Gunnar Schaefer and Vanessa Sochat and William Triplett and Jessica A. Turner and Ga{\"e}l Varoquaux and Russell A. Poldrack}, title = {The brain imaging data structure, a format for organizing and describing outputs of neuroimaging experiments}, journal = {Scientific Data}, volume = {3}, number = {1}, pages = {160044}, year = {2016}, month = {June}, doi = {10.1038/sdata.2016.44}, url = {https://doi.org/10.1038/sdata.2016.44} } @article{Greene2018, author = {Clint Greene and Matt Cieslak and Scott T. Grafton}, title = {{Effect of different spatial normalization approaches on tractography and structural brain networks}}, journal = {Network Neuroscience}, volume = {2}, number = {3}, pages = {362-380}, year = {2018}, month = {September}, doi = {10.1162/netn_a_00035}, url = {https://doi.org/10.1162/netn\_a\_00035} } @article{Gudbjartsson1995, author = {H\'{a}kon Gudbjartsson and Samuel Patz}, title = {{The Rician distribution of noisy MRI data}}, journal = {Magnetic Resonance in Medicine}, volume = {34}, number = {6}, pages = {910-914}, year = {1995}, doi = {10.1002/mrm.1910340618}, url = {https://doi.org/10.1002/mrm.1910340618} } @article{Hansen2016b, author = {Brian Hansen and Noam Shemesh and Sune N\o{}rh\o{}j Jespersen}, title = {{Fast imaging of mean, axial and radial diffusion kurtosis}}, journal = {NeuroImage}, volume = {142}, pages = {381-393}, year = {2016}, month = {November}, doi = {10.1016/j.neuroimage.2016.08.022}, url = {https://doi.org/10.1016/j.neuroimage.2016.08.022} } @article{Hansen2016a, author = {Brian Hansen and Sune N\o{}rh\o{}j Jespersen}, title = {{Data for evaluation of fast kurtosis strategies, b-value optimization and exploration of diffusion MRI contrast}}, journal = {Scientific Data}, volume = {3}, number = {1}, pages = {160072}, year = {2016}, month = {August}, doi = {10.1038/sdata.2016.72}, url = {https://doi.org/10.1038/sdata.2016.72} } @article{Hansen2013, author = {Brian Hansen and Torben E. Lund and Ryan Sangill and Sune N\o{}rh\o{}j Jespersen}, title = {{Experimentally and computationally fast method for estimation of a mean kurtosis}}, journal = {Magnetic Resonance in Medicine}, volume = {69}, number = {6}, pages = {1754-1760}, year = {2013}, doi = {10.1002/mrm.24743}, url = {https://doi.org/10.1002/mrm.24743} } @article{Hardin1996, author = {Ronald H. Hardin and Neil J. A. Sloane}, title = {McLaren's improved snub cube and other new spherical designs in three dimensions}, journal = {Discrete \& Computational Geometry}, volume = {15}, number = {4}, pages = {429--441}, year = {1996}, doi = {10.1007/BF02711518}, url = {https://doi.org/10.1007/BF02711518} } @article{Haroon2009, author = {Hamied A. Haroon and David M. Morris and Karl V. Embleton and Daniel C. Alexander and Geoffrey J. M.
Parker}, title = {{Using the Model-Based Residual Bootstrap to Quantify Uncertainty in Fiber Orientations From $Q$-Ball Analysis}}, journal = {IEEE Transactions on Medical Imaging}, volume = {28}, number = {4}, pages = {535-550}, year = {2009}, doi = {10.1109/TMI.2008.2006528} } @article{Herberthson2021, author = {Magnus Herberthson and Deneb Boito and Tom Dela Haije and Aasa Feragen and Carl-Fredrik Westin and Evren {\"O}zarslan}, title = {{Q-space trajectory imaging with positivity constraints (QTI+)}}, journal = {NeuroImage}, volume = {238}, pages = {118198}, year = {2021}, doi = {10.1016/j.neuroimage.2021.118198}, url = {https://doi.org/10.1016/j.neuroimage.2021.118198} } @article{Hosseinbor2013, author = {A. Pasha Hosseinbor and Moo K. Chung and Yu-Chien Wu and Andrew L. Alexander}, title = {{Bessel Fourier Orientation Reconstruction (BFOR): An analytical diffusion propagator reconstruction for hybrid diffusion imaging and computation of q-space indices}}, journal = {NeuroImage}, volume = {64}, pages = {650-670}, year = {2013}, doi = {10.1016/j.neuroimage.2012.08.072}, url = {https://doi.org/10.1016/j.neuroimage.2012.08.072} } @article{Hoy2014, author = {Andrew R. Hoy and Cheng Guan Koay and Steven R. Kecskemeti and Andrew L. Alexander}, title = {{Optimization of a free water elimination two-compartment model for diffusion tensor imaging}}, journal = {NeuroImage}, volume = {103}, pages = {323-333}, year = {2014}, doi = {10.1016/j.neuroimage.2014.09.053}, url = {https://doi.org/10.1016/j.neuroimage.2014.09.053} } @article{Hui2008, author = {Edward S. Hui and Matthew M. Cheung and Liqun Qi and Ed X. Wu}, title = {{Towards better MR characterization of neural tissues using directional diffusion kurtosis analysis}}, journal = {NeuroImage}, volume = {42}, number = {1}, pages = {122-134}, year = {2008}, doi = {10.1016/j.neuroimage.2008.04.237}, url = {https://doi.org/10.1016/j.neuroimage.2008.04.237} } @article{Jenkinson2001, author = {Mark Jenkinson and Stephen Smith}, title = {{A global optimisation method for robust affine registration of brain images}}, journal = {Medical Image Analysis}, volume = {5}, number = {2}, pages = {143-156}, year = {2001}, doi = {10.1016/S1361-8415(01)00036-6}, url = {https://doi.org/10.1016/S1361-8415(01)00036-6} } @article{Jensen2010, author = {Jens H. Jensen and Joseph A. Helpern}, title = {{MRI quantification of non-Gaussian water diffusion by kurtosis analysis}}, journal = {NMR in Biomedicine}, volume = {23}, number = {7}, pages = {698-710}, year = {2010}, doi = {10.1002/nbm.1518}, url = {https://doi.org/10.1002/nbm.1518}, } @article{Jensen2005, author = {Jens H. Jensen and Joseph A.
Helpern and Anita Ramani and Hanzhang Lu and Kyle Kaczynski}, title = {{Diffusional kurtosis imaging: The quantification of non-gaussian water diffusion by means of magnetic resonance imaging}}, journal = {Magnetic Resonance in Medicine}, volume = {53}, number = {6}, pages = {1432-1440}, year = {2005}, doi = {https://doi.org/10.1002/mrm.20508}, url = {https://onlinelibrary.wiley.com/doi/abs/10.1002/mrm.20508} } @article{Jeurissen2014, author = {Ben Jeurissen and Jacques-Donald Tournier and Thijs Dhollander and Alan Connelly and Jan Sijbers}, title = {{Multi-tissue constrained spherical deconvolution for improved analysis of multi-shell diffusion MRI data}}, journal = {NeuroImage}, volume = {103}, pages = {411-426}, year = {2014}, month = {December}, doi = {10.1016/j.neuroimage.2014.07.061}, url = {https://doi.org/10.1016/j.neuroimage.2014.07.061} } @article{Jeurissen2011, author = {Ben Jeurissen and Alexander Leemans and Derek K. Jones and Jacques-Donald Tournier and Jan Sijbers}, title = {{Probabilistic fiber tracking using the residual bootstrap with constrained spherical deconvolution}}, journal = {Human Brain Mapping}, volume = {32}, number = {3}, pages = {461-479}, year = {2011}, doi = {10.1002/hbm.21032}, url = {https://doi.org/10.1002/hbm.21032} } @article{Jones1999, author = {Derek K. Jones and Mark A. Horsfield and Andrew Simmons}, title = {{Optimal strategies for measuring diffusion in anisotropic systems by magnetic resonance imaging}}, journal = {Magnetic Resonance in Medicine}, volume = {42}, number = {3}, pages = {515--25}, year = {1999}, doi = {10.1002/(SICI)1522-2594(199909)42:3<515::AID-MRM14>3.0.CO;2-Q}, url = {http://www.ncbi.nlm.nih.gov/pubmed/10467296} } @article{Jones2013, author = {Derek K. Jones and Thomas R. Kn\"{o}sche and Robert Turner}, title = {{White matter integrity, fiber count, and other fallacies: The do's and don'ts of diffusion MRI}}, journal = {NeuroImage}, volume = {73}, pages = {239-254}, year = {2013}, doi = {10.1016/j.neuroimage.2012.06.081}, url = {https://doi.org/10.1016/j.neuroimage.2012.06.081} } @article{Jordan2019, author = {Kesshi Jordan and Olivier Morin and Michael Wahl and Bagrat Amirbekian and Christopher Chapman and Julia Owen and Pratik Mukherjee and Steve Braunstein and Roland Henry}, title = {{An Open-Source Tool for Anisotropic Radiation Therapy Planning in Neuro-oncology Using DW-MRI Tractography}}, journal = {Frontiers in Oncology}, volume = {9}, year = {2019}, doi = {10.3389/fonc.2019.00810}, url = {https://doi.org/10.3389/fonc.2019.00810} } @article{Jordan2018, author = {Kesshi M. Jordan and Bagrat Amirbekian and Anisha Keshavan and Roland G. Henry}, title = {{Cluster Confidence Index: A Streamline-Wise Pathway Reproducibility Metric for Diffusion-Weighted MRI Tractography}}, journal = {Journal of Neuroimaging}, volume = {28}, number = {1}, pages = {64-69}, year = {2018}, doi = {10.1111/jon.12467}, url = {https://doi.org/10.1111/jon.12467} } @article{Kaden2016b, author = {Enrico Kaden and Nathaniel D. Kelm and Robert P. Carson and Mark D. Does and Daniel C. Alexander}, title = {{Multi-compartment microscopic diffusion imaging}}, journal = {NeuroImage}, volume = {139}, pages = {346-359}, year = {2016}, month = {October}, doi = {10.1016/j.neuroimage.2016.06.002}, url = {https://doi.org/10.1016/j.neuroimage.2016.06.002} } @article{Kaden2016a, author = {Enrico Kaden and Frithjof Kruggel and Daniel C. 
Alexander}, title = {{Quantitative mapping of the per-axon diffusion coefficients in brain white matter}}, journal = {Magnetic Resonance in Medicine}, volume = {75}, number = {4}, pages = {1752-1763}, year = {2016}, month = {April}, doi = {10.1002/mrm.25734}, url = {https://doi.org/10.1002/mrm.25734} } @article{Kanakaraj2024, author = {Praitayini Kanakaraj and Tianyuan Yao and Leon Y. Cai and Ho Hin Lee and Nancy R. Newlin and Michael E. Kim and Chenyu Gao and Kimberly R. Pechman and Derek Archer and Timothy Hohman and Angela Jefferson and Lori L. Beason-Held and Susan M. Resnick and Eleftherios Garyfallidis and Adam Anderson and Kurt G. Schilling and Bennett A. Landman and Daniel Moyer and The Alzheimer's Disease Neuroimaging Initiative (ADNI) and The BIOCARD Study Team}, title = {{DeepN4: Learning N4ITK Bias Field Correction for T1-weighted Images}}, journal = {Neuroinformatics}, volume = {22}, number = {2}, pages = {193--205}, year = {2024}, month = {April}, doi = {10.1007/s12021-024-09655-9}, url = {https://doi.org/10.1007/s12021-024-09655-9} } @article{Kellner2016, author = {Elias Kellner and Bibek Dhital and Valerij G. Kiselev and Marco Reisert}, title = {{Gibbs-ringing artifact removal based on local subvoxel-shifts}}, journal = {Magnetic Resonance in Medicine}, volume = {76}, number = {5}, pages = {1574-1581}, year = {2016}, doi = {10.1002/mrm.26054}, url = {https://doi.org/10.1002/mrm.26054} } @article{Knoll2011, author = {Florian Knoll and Kristian Bredies and Thomas Pock and Rudolf Stollberger}, title = {{Second order total generalized variation (TGV) for MRI}}, journal = {Magnetic Resonance in Medicine}, volume = {65}, number = {2}, pages = {480-491}, year = {2011}, doi = {10.1002/mrm.22595}, url = {https://doi.org/10.1002/mrm.22595} } @article{Koay2009b, author = {Cheng Guan Koay and Evren {\"O}zarslan and Carlo Pierpaoli}, title = {{Probabilistic Identification and Estimation of Noise (PIESNO): A self-consistent approach and its applications in MRI}}, journal = {Journal of Magnetic Resonance}, volume = {199}, number = {1}, pages = {94-103}, year = {2009}, month = {July}, doi = {10.1016/j.jmr.2009.03.005}, url = {https://doi.org/10.1016/j.jmr.2009.03.005} } @article{Koay2009a, author = {Cheng Guan Koay and Evren {\"O}zarslan and Peter J. Basser}, title = {{A signal transformational framework for breaking the noise floor and its applications in MRI}}, journal = {Journal of Magnetic Resonance}, volume = {197}, number = {2}, pages = {108-119}, year = {2009}, month = {April}, doi = {10.1016/j.jmr.2008.11.015}, url = {https://doi.org/10.1016/j.jmr.2008.11.015} } @article{Koay2006c, author = {Cheng Guan Koay and Lin-Ching Chang and John D. Carew and Carlo Pierpaoli and Peter J. Basser}, title = {{A unifying theoretical and algorithmic framework for least squares methods of estimation in diffusion tensor imaging}}, journal = {Journal of Magnetic Resonance}, volume = {182}, number = {1}, pages = {115-125}, year = {2006}, month = {September}, doi = {10.1016/j.jmr.2006.06.020}, url = {https://doi.org/10.1016/j.jmr.2006.06.020} } @article{Koay2006b, author = {Cheng Guan Koay and John D. Carew and Andrew L. Alexander and Peter J. Basser and M.
Elizabeth Meyerand}, title = {{Investigation of anomalous estimates of tensor-derived quantities in diffusion tensor imaging}}, journal = {Magnetic Resonance in Medicine}, volume = {55}, number = {4}, pages = {930-936}, year = {2006}, month = {April}, doi = {10.1002/mrm.20832}, url = {https://doi.org/10.1002/mrm.20832} } @article{Koay2006a, author = {Cheng Guan Koay and Peter J. Basser}, title = {{Analytically exact correction scheme for signal extraction from noisy magnitude MR signals}}, journal = {Journal of Magnetic Resonance}, volume = {179}, number = {2}, pages = {317-322}, month = {April}, year = {2006}, doi = {10.1016/j.jmr.2006.01.016}, url = {https://doi.org/10.1016/j.jmr.2006.01.016} } @article{LeBihan1988, author = {Denis {Le Bihan} and Eric Breton and Denis Lallemand and M. L. Aubin and Jacqueline Vignaud and Maurice Laval-Jeantet}, title = {{Separation of diffusion and perfusion in intravoxel incoherent motion MR imaging}}, journal = {Radiology}, volume = {168}, number = {2}, pages = {497-505}, year = {1988}, doi = {10.1148/radiology.168.2.3393671}, url = {https://doi.org/10.1148/radiology.168.2.3393671} } @article{Leemans2009, author = {Alexander Leemans and Derek K. Jones}, title = {{The B-matrix must be rotated when correcting for subject motion in DTI data}}, journal = {Magnetic Resonance in Medicine}, volume = {61}, number = {6}, pages = {1336-1349}, year = {2009}, doi = {10.1002/mrm.21890}, url = {https://doi.org/10.1002/mrm.21890} } @article{Manjon2013, author = {Jos\'{e} V. Manj\'{o}n and Pierrick Coup\'{e} and Luis Concha and Antonio Buades and D. Louis Collins and Montserrat Robles}, title = {{Diffusion Weighted Image Denoising Using Overcomplete Local PCA}}, journal = {PLOS ONE}, volume = {8}, number = {9}, pages = {e73021}, year = {2013}, doi = {10.1371/journal.pone.0073021}, url = {https://doi.org/10.1371/journal.pone.0073021} } @article{Manjon2010, author = {Jos\'{e} V. Manj\'{o}n and Pierrick Coup\'{e} and Luis Mart\'{i}-Bonmat\'{i} and D.
Louis Collins and Montserrat Robles}, title = {{Adaptive non-local means denoising of MR images with spatially varying noise levels}}, journal = {Journal of Magnetic Resonance Imaging}, volume = {31}, number = {1}, pages = {192--203}, year = {2010}, doi = {10.1002/jmri.22003}, url = {https://doi.org/10.1002/jmri.22003} } @article{Marek2011, author = {Kenneth Marek and Danna Jennings and Shirley Lasch and Andrew Siderowf and Caroline Tanner and Tanya Simuni and Chris Coffey and Karl Kieburtz and Emily Flagg and Sohini Chowdhury and Werner Poewe and Brit Mollenhauer and Todd Sherer and Mark Frasier and Claire Meunier and Alice Rudolph and Cindy Casaceli and John Seibyl and Susan Mendick and Norbert Schuff and Ying Zhang and Arthur Toga and Karen Crawford and Alison Ansbach and Pasquale {De Blasio} and Michele Piovella and John Trojanowski and Les Shaw and Andrew Singleton and Keith Hawkins and Jamie Eberling and Deborah Brooks and David Russell and Laura Leary and Stewart Factor and Barbara Sommerfeld and Penelope Hogarth and Emily Pighetti and Karen Williams and David Standaert and Stephanie Guthrie and Robert Hauser and Holly Delgado and Joseph Jankovic and Christine Hunter and Matthew Stern and Baochan Tran and Jim Leverenz and Marne Baca and Sam Frank and Cathi-Ann Thomas and Irene Richard and Cheryl Deeley and Linda Rees and Fabienne Sprenger and Elisabeth Lang and Holly Shill and Sanja Obradov and Hubert Fernandez and Adrienna Winters and Daniela Berg and Katharina Gauss and Douglas Galasko and Deborah Fontaine and Zoltan Mari and Melissa Gerstenhaber and David Brooks and Sophie Malloy and Paolo Barone and Katia Longo and Tom Comery and Bernard Ravina and Igor Grachev and Kim Gallagher and Michelle Collins and Katherine L. Widnell and Suzanne Ostrowizki and Paulo Fontoura and Tony Ho and Johan Luthman and Marcel van der Brug and Alastair D. Reith and Peggy Taylor}, title = {{The Parkinson Progression Marker Initiative (PPMI)}}, journal = {Progress in Neurobiology}, volume = {95}, number = {4}, pages = {629-635}, year = {2011}, doi = {10.1016/j.pneurobio.2011.09.005}, url = {https://doi.org/10.1016/j.pneurobio.2011.09.005}, note = {Biological Markers for Neurodegenerative Diseases} } @article{Mattes2003, author = {David Mattes and David R. Haynor and Hubert Vesselle and Thomas K. Lewellen and William Eubank}, title = {{PET-CT image registration in the chest using free-form deformations}}, journal = {IEEE Transactions on Medical Imaging}, volume = {22}, number = {1}, pages = {120-128}, year = {2003}, doi = {10.1109/TMI.2003.809072} } @article{Merlet2013, author = {Sylvain L. Merlet and Rachid Deriche}, title = {{Continuous diffusion signal, EAP and ODF estimation via Compressive Sensing in diffusion MRI}}, journal = {Medical Image Analysis}, volume = {17}, number = {5}, pages = {556--572}, year = {2013}, month = {July}, doi = {10.1016/j.media.2013.02.010}, url = {https://doi.org/10.1016/j.media.2013.02.010} } @article{Morez2023, author = {Jan Morez and Filip Szczepankiewicz and Arnold J. {den Dekker} and Floris Vanhevel and Jan Sijbers and Ben Jeurissen}, title = {{Optimal experimental design and estimation for q-space trajectory imaging}}, journal = {Human Brain Mapping}, volume = {44}, number = {4}, pages = {1793-1809}, year = {2023}, doi = {10.1002/hbm.26175}, url = {https://doi.org/10.1002/hbm.26175} } @article{Nath2019, author = {Vishwesh Nath and Kurt G. Schilling and Prasanna Parvathaneni and Colin B. Hansen and Allison E.
Hainline and Yuankai Huo and Justin A. Blaber and Ilwoo Lyu and Vaibhav Janve and Yurui Gao and Iwona Stepniewska and Adam W. Anderson and Bennett A. Landman}, title = {{Deep learning reveals untapped information for local white-matter fiber reconstruction in diffusion-weighted MRI}}, journal = {Magnetic Resonance Imaging}, volume = {62}, pages = {220-227}, year = {2019}, doi = {10.1016/j.mri.2019.07.012}, url = {https://doi.org/10.1016/j.mri.2019.07.012} } @article{NetoHenriques2021b, author = {Rafael {Neto Henriques} and Sune N. Jespersen and Noam Shemesh}, title = {{Evidence for microscopic kurtosis in neural tissue revealed by correlation tensor MRI}}, journal = {Magnetic Resonance in Medicine}, volume = {86}, number = {6}, pages = {3111-3130}, year = {2021}, month = {December}, doi = {10.1002/mrm.28938}, url = {https://doi.org/10.1002/mrm.28938} } @article{NetoHenriques2021a, author = {Rafael {Neto Henriques} and Marta M. Correia and Maurizio Marrale and Elizabeth Huber and John Kruper and Serge Koudoro and Jason D. Yeatman and Eleftherios Garyfallidis and Ariel Rokem}, title = {{Diffusional Kurtosis Imaging in the Diffusion Imaging in Python Project}}, journal = {Frontiers in Human Neuroscience}, volume = {15}, year = {2021}, month = {July}, doi = {10.3389/fnhum.2021.675433}, url = {https://doi.org/10.3389/fnhum.2021.675433}, note = {Sec. Brain Imaging and Stimulation. Advanced Diffusion MRI for Brain Microstructure: Methodology, Consistency, and Application} } @article{NetoHenriques2020, author = {Rafael {Neto Henriques} and Sune N. Jespersen and Noam Shemesh}, title = {{Correlation tensor magnetic resonance imaging}}, journal = {NeuroImage}, volume = {211}, pages = {116605}, year = {2020}, doi = {10.1016/j.neuroimage.2020.116605}, url = {https://doi.org/10.1016/j.neuroimage.2020.116605} } @article{NetoHenriques2019, author = {Rafael {Neto Henriques} and Sune N. 
Jespersen and Noam Shemesh}, title = {{Microscopic anisotropy misestimation in spherical-mean single diffusion encoding MRI}}, journal = {Magnetic Resonance in Medicine}, volume = {81}, number = {5}, year = {2019}, pages = {3245-3261}, doi = {10.1002/mrm.27606}, url = {https://doi.org/10.1002/mrm.27606} } @article{NetoHenriques2015, author = {Rafael {Neto Henriques} and Marta Morgado Correia and Rita Gouveia Nunes and Hugo Alexandre Ferreira}, title = {{Exploring the 3D geometry of the diffusion kurtosis tensor—Impact on the development of robust tractography procedures and novel biomarkers}}, journal = {NeuroImage}, volume = {111}, pages = {85-99}, year = {2015}, doi = {10.1016/j.neuroimage.2015.02.004}, url = {https://doi.org/10.1016/j.neuroimage.2015.02.004} } @article{Novello2022, author = {Lisa Novello and Rafael {Neto Henriques} and Andrada Ianuş and Thorsten Feiweier and Noam Shemesh and Jorge Jovicich}, title = {{In vivo Correlation Tensor MRI reveals microscopic kurtosis in the human brain on a clinical 3T scanner}}, journal = {NeuroImage}, volume = {254}, pages = {119137}, year = {2022}, doi = {10.1016/j.neuroimage.2022.119137}, url = {https://doi.org/10.1016/j.neuroimage.2022.119137} } @article{Ocegueda2016, author = {Omar Ocegueda and Oscar Dalmau and Eleftherios Garyfallidis and Maxime Descoteaux and Mariano Rivera}, title = {{On the computation of integrals over fixed-size rectangles of arbitrary dimension}}, journal = {Pattern Recognition Letters}, volume = {79}, pages = {68-72}, year = {2016}, doi = {10.1016/j.patrec.2016.05.008}, url = {https://doi.org/10.1016/j.patrec.2016.05.008} } @article{Olson2019, author = {Daniel V. Olson and Volkan E. Arpinar and L. Tugan Muftuler}, title = {{Optimization of q-space sampling for mean apparent propagator MRI metrics using a genetic algorithm}}, journal = {NeuroImage}, volume = {199}, pages = {237-244}, year = {2019}, doi = {10.1016/j.neuroimage.2019.05.078}, url = {https://doi.org/10.1016/j.neuroimage.2019.05.078} } @article{Ozarslan2013, author = {Evren {\"O}zarslan and Cheng Guan Koay and Timothy M. Shepherd and Michal E. Komlosh and M. Okan {\.I}rfano\v{g}lu and Carlo Pierpaoli and Peter J. Basser}, title = {{Mean apparent propagator (MAP) MRI: A novel diffusion imaging method for mapping tissue microstructure}}, journal = {NeuroImage}, volume = {78}, pages = {16-32}, year = {2013}, doi = {10.1016/j.neuroimage.2013.04.016}, url = {https://doi.org/10.1016/j.neuroimage.2013.04.016} } @article{Pajevic1999, author = {Sinisa Pajevic and Carlo Pierpaoli}, title = {{Color Schemes to Represent the Orientation of Anisotropic Tissues from Diffusion Tensor Data: Application to White Matter Fiber Tract Mapping in the Human Brain}}, journal = {Magnetic Resonance in Medicine}, volume = {42}, number = {3}, pages = {526--540}, year = {1999}, doi = {10.1002/(SICI)1522-2594(199909)42:3<526::AID-MRM15>3.0.CO;2-J}, url = {https://doi.org/10.1002/(SICI)1522-2594(199909)42:3<526::AID-MRM15>3.0.CO;2-J} } @article{Papadakis2000, author = {Nikolaos G. Papadakis and Chris D. Murrills and Laurance D. Hall and Christopher L.-H. Huang and T. 
{Adrian Carpenter}}, title = {{Minimal gradient encoding for robust estimation of diffusion anisotropy}}, journal = {Magnetic Resonance Imaging}, volume = {18}, number = {6}, pages = {671-679}, year = {2000}, doi = {10.1016/S0730-725X(00)00151-X}, url = {https://doi.org/10.1016/S0730-725X(00)00151-X} } @article{Park2024, author = {Jong Sung Park and Shreyas Fadnavis and Eleftherios Garyfallidis}, title = {{Multi-scale V-net architecture with deep feature CRF layers for brain extraction}}, journal = {Communications Medicine}, volume = {4}, number = {1}, pages = {29}, year = {2024}, month = {February}, doi = {10.1038/s43856-024-00452-8}, url = {https://doi.org/10.1038/s43856-024-00452-8} } @article{Parzen1962, author = {Emanuel Parzen}, title = {{On Estimation of a Probability Density Function and Mode}}, journal = {The Annals of Mathematical Statistics}, publisher = {Institute of Mathematical Statistics}, volume = {33}, number = {3}, pages = {1065--1076}, year = {1962}, doi = {10.1214/aoms/1177704472}, url = {https://doi.org/10.1214/aoms/1177704472} } @article{Pasternak2009, author = {Ofer Pasternak and Nir Sochen and Yaniv Gur and Nathan Intrator and Yaniv Assaf}, title = {{Free water elimination and mapping from diffusion MRI}}, journal = {Magnetic Resonance in Medicine}, volume = {62}, number = {3}, pages = {717-730}, year = {2009}, doi = {10.1002/mrm.22055}, url = {https://doi.org/10.1002/mrm.22055} } @article{Perrone2015, author = {Daniele Perrone and Jan Aelterman and Aleksandra Pižurica and Ben Jeurissen and Wilfried Philips and Alexander Leemans}, title = {{The effect of Gibbs ringing artifacts on measures derived from diffusion MRI}}, journal = {NeuroImage}, volume = {120}, pages = {441-455}, year = {2015}, doi = {10.1016/j.neuroimage.2015.06.068}, url = {https://doi.org/10.1016/j.neuroimage.2015.06.068} } @article{Pestilli2014, author = {Franco Pestilli and Jason D. Yeatman and Ariel Rokem and Kendrick N. Kay and Brian A. Wandell}, title = {Evaluation and statistical inference for human connectomes}, journal = {Nature Methods}, year = {2014}, volume = {11}, number = {10}, pages = {1058--1063}, doi = {10.1038/nmeth.3098}, url = {https://doi.org/10.1038/nmeth.3098} } @article{Portegies2015b, author = {Jorg M. Portegies and Rutger H. J. Fick and Gonzalo Sanguinetti and Stephan P. L.
Meesters and Gabriel Girard and Remco Duits}, title = {{Improving Fiber Alignment in HARDI by Combining Contextual PDE Flow with Constrained Spherical Deconvolution}}, journal = {PLOS ONE}, publisher = {Public Library of Science}, volume = {10}, pages = {1-33}, number = {10}, year = {2015}, month = {October}, doi = {10.1371/journal.pone.0138122}, url = {https://doi.org/10.1371/journal.pone.0138122} } @article{Presseau2015, author = {Caroline Presseau and Pierre-Marc Jodoin and Jean-Christophe Houde and Maxime Descoteaux}, title = {{A new compression format for fiber tracking datasets}}, journal = {NeuroImage}, volume = {109}, pages = {73-83}, year = {2015}, doi = {10.1016/j.neuroimage.2014.12.058}, url = {https://doi.org/10.1016/j.neuroimage.2014.12.058} } @article{Renauld2016, author = {Emmanuelle Renauld and Maxime Descoteaux and Micha\"{e}l Bernier and Eleftherios Garyfallidis and Kevin Whittingstall}, title = {{Semi-Automatic Segmentation of Optic Radiations and LGN, and Their Relationship to EEG Alpha Waves}}, journal = {PLOS ONE}, publisher = {Public Library of Science}, volume = {11}, number = {7}, pages = {1-21}, year = {2016}, month = {July}, doi = {10.1371/journal.pone.0156436}, url = {https://doi.org/10.1371/journal.pone.0156436} } @article{Richardson1972, author = {William Hadley Richardson}, title = {Bayesian-Based Iterative Method of Image Restoration$\ast$}, journal = {Journal of the Optical Society of America}, publisher = {Optica Publishing Group}, volume = {62}, number = {1}, pages = {55--59}, year = {1972}, month = {January}, doi = {10.1364/JOSA.62.000055}, url = {https://doi.org/10.1364/JOSA.62.000055}, } @article{RichieHalford2022, author = {Adam Richie-Halford and Matthew Cieslak and Lei Ai and Sendy Caffarra and Sydney Covitz and Alexandre R. Franco and Iliana I. Karipidis and John Kruper and Michael Milham and B\'{a}rbara Avelar-Pereira and Ethan Roy and Valerie J. Sydnor and Jason D. Yeatman and Nicholas J. Abbott and John A. E. Anderson and B. Gagana and MaryLena Bleile and Peter S. Bloomfield and Vince Bottom and Josiane Bourque and Rory Boyle and Julia K. Brynildsen and Navona Calarco and Jaime J. Castrellon and Natasha Chaku and Bosi Chen and Sidhant Chopra and Emily B. J. Coffey and Nigel Colenbier and Daniel J. Cox and James Elliott Crippen and Jacob J. Crouse and Szabolcs David and Benjamin De Leener and Gwyneth Delap and Zhi-De Deng and Jules Roger Dugre and Anders Eklund and Kirsten Ellis and Arielle Ered and Harry Farmer and Joshua Faskowitz and Jody E. Finch and Guillaume Flandin and Matthew W. Flounders and Leon Fonville and Summer B. Frandsen and Dea Garic and Patricia Garrido-V\'{a}squez and Gabriel Gonzalez-Escamilla and Shannon E. Grogans and Mareike Grotheer and David C. Gruskin and Guido I. Guberman and Edda Briana Haggerty and Younghee Hahn and Elizabeth H. Hall and Jamie L. Hanson and Yann Harel and Bruno Hebling Vieira and Meike D. Hettwer and Harriet Hobday and Corey Horien and Fan Huang and Zeeshan M. Huque and Anthony R. James and Isabella Kahhale and Sarah L. H. Kamhout and Arielle S. Keller and Harmandeep Singh Khera and Gregory Kiar and Peter Alexander Kirk and Simon H. Kohl and Stephanie A. Korenic and Cole Korponay and Alyssa K. Kozlowski and Nevena Kraljevic and Alberto Lazari and Mackenzie J. Leavitt and Zhaolong Li and Giulia Liberati and Elizabeth S. Lorenc and Annabelle Julina Lossin and Leon D. Lotter and David M. Lydon-Staley and Christopher R. Madan and Neville Magielse and Hilary A. Marusak and Julien Mayor and Amanda L.
McGowan and Kahini P. Mehta and Steven Lee Meisler and Cleanthis Michael and Mackenzie E. Mitchell and Simon Morand-Beaulieu and Benjamin T. Newman and Jared A. Nielsen and Shane M. O'Mara and Amar Ojha and Adam Omary and Evren {\"O}zarslan and Linden Parkes and Madeline Peterson and Adam Robert Pines and Claudia Pisanu and Ryan R. Rich and Matthew D. Sacchet and Ashish K. Sahoo and Amjad Samara and Farah Sayed and Jonathan Thore Schneider and Lindsay S. Shaffer and Ekaterina Shatalina and Sara A. Sims and Skyler Sinclair and Jae W. Song and Griffin Stockton Hogrogian and Christian K. Tamnes and Ursula A. Tooley and Vaibhav Tripathi and Hamid B. Turker and Sofie Louise Valk and Matthew B. Wall and Cheryl K. Walther and Yuchao Wang and Bertil Wegmann and Thomas Welton and Alex I. Wiesman and Andrew G. Wiesman and Mark Wiesman and Drew E. Winters and Ruiyi Yuan and Sadie J. Zacharek and Chris Zajner and Ilya Zakharov and Gianpaolo Zammarchi and Dale Zhou and Benjamin Zimmerman and Kurt Zoner and Theodore D. Satterthwaite and Ariel Rokem and The Fibr Community Science Consortium}, title = {{An analysis-ready and quality controlled resource for pediatric brain white-matter research}}, journal = {Scientific Data}, volume = {9}, number = {1}, pages = {616}, year = {2022}, month = {October}, doi = {10.1038/s41597-022-01695-7}, url = {https://doi.org/10.1038/s41597-022-01695-7} } @article{Riffert2014, author = {Till W. Riffert and Jan Schreiber and Alfred Anwander and Thomas R. Kn\"{o}sche}, title = {{Beyond fractional anisotropy: Extraction of bundle-specific structural metrics from crossing fiber models}}, journal = {NeuroImage}, volume = {100}, pages = {176--191}, year = {2014}, doi = {10.1016/j.neuroimage.2014.06.015}, url = {https://www.sciencedirect.com/science/article/pii/S1053811914004923}, } @article{Rokem2015, author = {Ariel Rokem and Jason D. Yeatman and Franco Pestilli and Kendrick N. Kay and Aviv Mezer and Stefan van der Walt and Brian A. Wandell}, title = {{Evaluating the Accuracy of Diffusion MRI Models in White Matter}}, journal = {PLOS ONE}, publisher = {Public Library of Science}, volume = {10}, number = {4}, pages = {1-26}, year = {2015}, month = {April}, doi = {10.1371/journal.pone.0123272}, url = {https://doi.org/10.1371/journal.pone.0123272} } @article{Rudin1992, author = {Leonid I. Rudin and Stanley Osher and Emad Fatemi}, title = {{Nonlinear total variation based noise removal algorithms}}, journal = {Physica D: Nonlinear Phenomena}, volume = {60}, number = {1}, pages = {259-268}, year = {1992}, doi = {10.1016/0167-2789(92)90242-F}, url = {https://doi.org/10.1016/0167-2789(92)90242-F} } @article{Schilling2020, author = {Kurt G. Schilling and Justin Blaber and Colin Hansen and Leon Cai and Baxter Rogers and Adam W. Anderson and Seth Smith and Praitayini Kanakaraj and Tonia Rex and Susan M. Resnick and Andrea T. Shafer and Laurie E. Cutting and Neil Woodward and David Zald and Bennett A. Landman}, title = {{Distortion correction of diffusion weighted MRI without reverse phase-encoding scans or field-maps}}, journal = {PLOS ONE}, publisher = {Public Library of Science}, volume = {15}, number = {7}, pages = {1-15}, year = {2020}, month = {July}, doi = {10.1371/journal.pone.0236418}, url = {https://doi.org/10.1371/journal.pone.0236418} } @article{Schilling2019, author = {Kurt G. Schilling and Justin Blaber and Yuankai Huo and Allen Newton and Colin Hansen and Vishwesh Nath and Andrea T. Shafer and Owen Williams and Susan M. Resnick and Baxter Rogers and Adam W.
Anderson and Bennett A. Landman}, title = {{Synthesized b0 for diffusion distortion correction (Synb0-DisCo)}}, journal = {Magnetic Resonance Imaging}, volume = {64}, pages = {62-70}, year = {2019}, doi = {10.1016/j.mri.2019.05.008}, url = {https://doi.org/10.1016/j.mri.2019.05.008}, note = {Artificial Intelligence in MRI} } @article{Smith2012, author = {Robert E. Smith and Jacques-Donald Tournier and Fernando Calamante and Alan Connelly}, title = {{Anatomically-constrained tractography: Improved diffusion MRI streamlines tractography through effective use of anatomical information}}, journal = {NeuroImage}, volume = {62}, number = {3}, pages = {1924-1938}, year = {2012}, month = {September}, doi = {10.1016/j.neuroimage.2012.06.005}, url = {https://doi.org/10.1016/j.neuroimage.2012.06.005} } @article{Soderman1995, author = {Olle S\"{o}derman and Bengt J\"{o}nsson}, title = {{Restricted Diffusion in Cylindrical Geometry}}, journal = {Journal of Magnetic Resonance, Series A}, volume = {117}, number = {1}, pages = {94-97}, year = {1995}, doi = {10.1006/jmra.1995.0014}, url = {https://doi.org/10.1006/jmra.1995.0014} } @article{Sotiropoulos2013, author = {Stamatios N. Sotiropoulos and Saad Jbabdi and Junqian Xu and Jesper L. Andersson and Steen Moeller and Edward J. Auerbach and Matthew F. Glasser and Moises Hernandez and Guillermo Sapiro and Mark Jenkinson and David A. Feinberg and Essa Yacoub and Christophe Lenglet and David C. {Van Essen} and Kamil Ugurbil and Timothy E.J. Behrens}, title = {{Advances in diffusion MRI acquisition and processing in the Human Connectome Project}}, journal = {NeuroImage}, volume = {80}, pages = {125-143}, year = {2013}, doi = {10.1016/j.neuroimage.2013.05.057}, url = {https://doi.org/10.1016/j.neuroimage.2013.05.057}, note = {Mapping the Connectome}, } @article{Stejskal1965, author = {Edward O. Stejskal and John E. Tanner}, title = {{Spin Diffusion Measurements: Spin Echoes in the Presence of a Time-Dependent Field Gradient}}, journal = {The Journal of Chemical Physics}, volume = {42}, number = {1}, year = {1965}, month = {January}, pages = {288-292}, doi = {http://dx.doi.org/10.1063/1.1695690} } @article{StOnge2018, author = {Etienne St-Onge and Alessandro Daducci and Gabriel Girard and Maxime Descoteaux}, title = {{Surface-enhanced tractography (SET)}}, journal = {NeuroImage}, year = {2018}, volume = {169}, pages = {524-539}, doi = {10.1016/j.neuroimage.2017.12.036}, url = {https://doi.org/10.1016/j.neuroimage.2017.12.036} } @article{StOnge2022, author = {Etienne St-Onge and Eleftherios Garyfallidis and D. Louis Collins}, title = {{Fast Streamline Search: An Exact Technique for Diffusion MRI Tractography}}, journal = {Neuroinformatics}, year = {2022}, volume = {20}, number = {4}, pages = {1093--1104}, doi = {10.1007/s12021-022-09590-7}, url = {https://doi.org/10.1007/s12021-022-09590-7} } @article{Szczepankiewicz2019, author = {Filip Szczepankiewicz and Scott Hoge and Carl-Fredrik Westin}, title = {{Linear, planar and spherical tensor-valued diffusion MRI data by free waveform encoding in healthy brain, water, oil and liquid crystals}}, journal = {Data in Brief}, year = {2019}, volume = {25}, pages = {104208}, doi = {10.1016/j.dib.2019.104208}, url = {https://doi.org/10.1016/j.dib.2019.104208} } @article{Tabesh2011, author = {Ali Tabesh and Jens H. Jensen and Babak A. Ardekani and Joseph A. 
Helpern}, title = {{Estimation of tensors and tensor-derived measures in diffusional kurtosis imaging}}, journal = {Magnetic Resonance in Medicine}, volume = {65}, number = {3}, pages = {823-836}, year = {2011}, doi = {10.1002/mrm.22655}, url = {https://doi.org/10.1002/mrm.22655} } @article{Tariq2016, author = {Maira Tariq and Torben Schneider and Daniel C. Alexander and Claudia A. Gandini Wheeler-Kingshott and Hui Zhang}, journal = {NeuroImage}, title = {{Bingham-NODDI: Mapping anisotropic orientation dispersion of neurites using diffusion MRI}}, volume = {133}, year = {2016}, pages = {207--223}, doi = {10.1016/j.neuroimage.2016.01.046}, url = {https://doi.org/10.1016/j.neuroimage.2016.01.046} } @article{Tax2015, author = {Chantal M.W. Tax and Willem M. Otte and Max A. Viergever and Rick M. Dijkhuizen and Alexander Leemans}, title = {{REKINDLE: Robust extraction of kurtosis INDices with linear estimation}}, journal = {Magnetic Resonance in Medicine}, volume = {73}, number = {2}, pages = {794-808}, year = {2015}, doi = {10.1002/mrm.25165}, url = {https://doi.org/10.1002/mrm.25165} } @article{Tax2014, author = {Chantal M.W. Tax and Ben Jeurissen and Sjoerd B. Vos and Max A. Viergever and Alexander Leemans}, title = {{Recursive calibration of the fiber response function for spherical deconvolution of diffusion MRI data}}, journal = {NeuroImage}, volume = {86}, pages = {67-80}, year = {2014}, doi = {10.1016/j.neuroimage.2013.07.067}, url = {https://doi.org/10.1016/j.neuroimage.2013.07.067} } @article{Tournier2019, author = {Jacques-Donald Tournier and Robert Smith and David Raffelt and Rami Tabbara and Thijs Dhollander and Maximilian Pietsch and Daan Christiaens and Ben Jeurissen and Chun-Hung Yeh and Alan Connelly}, title = {{MRtrix3: A fast, flexible and open software framework for medical image processing and visualisation}}, journal = {NeuroImage}, volume = {202}, pages = {116137}, year = {2019}, month = {November}, doi = {10.1016/j.neuroimage.2019.116137}, url = {https://doi.org/10.1016/j.neuroimage.2019.116137} } @article{Tournier2012, author = {Jacques-Donald Tournier and Fernando Calamante and Alan Connelly}, title = {{MRtrix: Diffusion tractography in crossing fiber regions}}, journal = {International Journal of Imaging Systems and Technology}, volume = {22}, number = {1}, pages = {53-66}, year = {2012}, doi = {https://doi.org/10.1002/ima.22005}, url = {https://onlinelibrary.wiley.com/doi/abs/10.1002/ima.22005} } @article{Tournier2007, author = {Jacques-Donald Tournier and Fernando Calamante and Alan Connelly}, title = {{Robust determination of the fibre orientation distribution in diffusion MRI: Non-negativity constrained super-resolved spherical deconvolution}}, journal = {NeuroImage}, volume = {35}, number = {4}, pages = {1459-1472}, year = {2007}, month = {May}, doi = {10.1016/j.neuroimage.2007.02.016}, url = {https://doi.org/10.1016/j.neuroimage.2007.02.016} } @article{Tournier2004, author = {Jacques-Donald Tournier and Fernando Calamante and David G. 
Gadian and Alan Connelly}, title = {{Direct estimation of the fiber orientation density function from diffusion-weighted MRI data using spherical deconvolution}}, journal = {NeuroImage}, volume = {23}, number = {3}, pages = {1176-1185}, year = {2004}, month = {November}, doi = {10.1016/j.neuroimage.2004.07.037}, url = {https://doi.org/10.1016/j.neuroimage.2004.07.037} } @article{TristanVega2010, author = {Antonio Trist\'{a}n-Vega and Carl-Fredrik Westin and Santiago Aja-Fern\'{a}ndez}, title = {{A new methodology for the estimation of fiber populations in the white matter of the brain with the Funk–Radon transform}}, journal = {NeuroImage}, volume = {49}, number = {2}, pages = {1301-1315}, year = {2010}, doi = {10.1016/j.neuroimage.2009.09.070}, url = {https://doi.org/10.1016/j.neuroimage.2009.09.070} } @article{TristanVega2009a, author = {Antonio Trist\'{a}n-Vega and Carl-Fredrik Westin and Santiago Aja-Fern\'{a}ndez}, title = {{Estimation of fiber Orientation Probability Density Functions in High Angular Resolution Diffusion Imaging}}, journal = {NeuroImage}, volume = {47}, number = {2}, pages = {638-650}, year = {2009}, month = {August}, doi = {10.1016/j.neuroimage.2009.04.049}, url = {https://doi.org/10.1016/j.neuroimage.2009.04.049} } @article{Tuch2004, author = {David S. Tuch}, title = {{Q-ball imaging}}, journal = {Magnetic Resonance in Medicine}, volume = {52}, number = {6}, pages = {1358-1372}, year = {2004}, doi = {10.1002/mrm.20279}, url = {https://doi.org/10.1002/mrm.20279} } @article{Veraart2016c, author = {Jelle Veraart and Dmitry S. Novikov and Daan Christiaens and Benjamin Ades-aron and Jan Sijbers and Els Fieremans}, title = {{Denoising of diffusion MRI using random matrix theory}}, journal = {NeuroImage}, volume = {142}, pages = {394--406}, year = {2016}, month = {November}, doi = {10.1016/j.neuroimage.2016.08.016}, url = {https://doi.org/10.1016/j.neuroimage.2016.08.016} } @article{Veraart2016b, author = {Jelle Veraart and Els Fieremans and Dmitry S. Novikov}, title = {{Diffusion MRI noise mapping using random matrix theory}}, journal = {Magnetic Resonance in Medicine}, volume = {76}, number = {5}, pages = {1582--1593}, year = {2016}, month = {November}, doi = {10.1002/mrm.26059}, url = {https://doi.org/10.1002/mrm.26059} } @article{Veraart2016a, author = {Jelle Veraart and Els Fieremans and Ileana O. Jelescu and Florian Knoll and Dmitry S. Novikov}, title = {{Gibbs ringing in diffusion MRI}}, journal = {Magnetic Resonance in Medicine}, volume = {76}, number = {1}, pages = {301-314}, year = {2016}, month = {July}, doi = {10.1002/mrm.25866}, url = {https://doi.org/10.1002/mrm.25866} } @article{Veraart2013, author = {Jelle Veraart and Jan Sijbers and Stefan Sunaert and Alexander Leemans and Ben Jeurissen}, title = {{Weighted linear least squares estimation of diffusion MRI parameters: Strengths, limitations, and pitfalls}}, journal = {NeuroImage}, volume = {81}, pages = {335-346}, year = {2013}, doi = {10.1016/j.neuroimage.2013.05.028}, url = {https://doi.org/10.1016/j.neuroimage.2013.05.028} } @article{Veraart2011, author = {Jelle Veraart and Dirk H. J. 
Poot and Wim Van Hecke and Ines Blockx and Annemie Van der Linden and Marleen Verhoye and Jan Sijbers}, title = {{More accurate estimation of diffusion tensor parameters using diffusion kurtosis imaging}}, journal = {Magnetic Resonance in Medicine}, volume = {65}, number = {1}, pages = {138-145}, year = {2011}, doi = {10.1002/mrm.22603}, url = {https://doi.org/10.1002/mrm.22603}, } @article{Vercauteren2009, author = {Tom Vercauteren and Xavier Pennec and Aymeric Perchant and Nicholas Ayache}, title = {{Diffeomorphic demons: Efficient non-parametric image registration}}, journal = {NeuroImage}, volume = {45}, number = {1, Supplement 1}, pages = {S61-S72}, year = {2009}, doi = {10.1016/j.neuroimage.2008.10.040}, url = {https://doi.org/10.1016/j.neuroimage.2008.10.040}, note = {Mathematics in Brain Imaging} } @article{Wedeen2008, author = {Van Jay Wedeen and Ruopeng P. Wang and Jeremy Dan Schmahmann and Thomas Benner and Wen Yih Isaac Tseng and Guangping Dai and Deepak N. Pandya and Patric Hagmann and Helen D'Arceuil and Alexander J. {de Crespigny}}, title = {{Diffusion spectrum magnetic resonance imaging (DSI) tractography of crossing fibers}}, journal = {NeuroImage}, volume = {41}, number = {4}, pages = {1267-1277}, year = {2008}, doi = {10.1016/j.neuroimage.2008.03.036}, url = {https://doi.org/10.1016/j.neuroimage.2008.03.036} } @article{Wedeen2005, author = {Van Jay Wedeen and Patric Hagmann and Wen-Yih Tseng and Timothy G. Reese and Robert M. Weisskoff}, title = {{Mapping Complex Tissue Architecture with Diffusion Spectrum Magnetic Resonance Imaging}}, journal = {Magnetic Resonance in Medicine}, volume = {54}, number = {6}, pages = {1377-1386}, year = {2005}, doi = {10.1002/mrm.20642}, url = {https://doi.org/10.1002/mrm.20642} } @article{Westin2016, author = {Carl-Fredrik Westin and Hans Knutsson and Ofer Pasternak and Filip Szczepankiewicz and Evren {\"O}zarslan and Danielle {van Westen} and Cecilia Mattisson and Mats Bogren and Lauren J. O'Donnell and Marek Kubicki and Daniel Topgaard and Markus Nilsson}, title = {{Q-space trajectory imaging for multidimensional diffusion MRI of the human brain}}, journal = {NeuroImage}, volume = {135}, pages = {345-362}, year = {2016}, doi = {10.1016/j.neuroimage.2016.02.039}, url = {https://doi.org/10.1016/j.neuroimage.2016.02.039} } @article{Wichmann2006, author = {Brian A. Wichmann and Ian David Hill}, title = {{Generating good pseudo-random numbers}}, journal = {Computational Statistics \& Data Analysis}, volume = {51}, number = {3}, pages = {1614-1622}, year = {2006}, doi = {10.1016/j.csda.2006.05.019}, url = {https://doi.org/10.1016/j.csda.2006.05.019} } @article{Wu2008, author = {Yu-Chien Wu and Aaron S. Field and Andrew L. Alexander}, title = {{Computation of Diffusion Function Measures in $q$-Space Using Magnetic Resonance Hybrid Diffusion Imaging}}, journal = {IEEE Transactions on Medical Imaging}, year = {2008}, volume = {27}, number = {6}, pages = {858-865}, doi = {10.1109/TMI.2008.922696} } @article{Wu2007, author = {Yu-Chien Wu and Andrew L. Alexander}, title = {{Hybrid diffusion imaging}}, journal = {NeuroImage}, volume = {36}, number = {3}, pages = {617-629}, year = {2007}, doi = {10.1016/j.neuroimage.2007.02.050}, url = {https://doi.org/10.1016/j.neuroimage.2007.02.050} } @article{Yeatman2012, author = {Jason D. Yeatman and Robert F. Dougherty and Nathaniel J. Myall and Brian A. Wandell and Heidi M. 
Feldman}, title = {{Tract Profiles of White Matter Properties: Automating Fiber-Tract Quantification}}, journal = {PLOS ONE}, publisher = {Public Library of Science}, volume = {7}, pages = {1-15}, number = {11}, year = {2012}, month = {November}, doi = {10.1371/journal.pone.0049790}, url = {https://doi.org/10.1371/journal.pone.0049790} } @article{Yeh2019, author = {Fang-Cheng Yeh and Islam M. Zaydan and Valerie R. Suski and David Lacomis and R. Mark Richardson and Joseph C. Maroon and Jessica Barrios-Martinez}, title = {{Differential tractography as a track-based biomarker for neuronal injury}}, journal = {NeuroImage}, volume = {202}, pages = {116131}, year = {2019}, doi = {10.1016/j.neuroimage.2019.116131}, url = {https://doi.org/10.1016/j.neuroimage.2019.116131} } @article{Yeh2018, author = {Fang-Cheng Yeh and Sandip Panesar and David Fernandes and Antonio Meola and Masanori Yoshino and Juan C. Fernandez-Miranda and Jean M. Vettel and Timothy Verstynen}, title = {{Population-averaged atlas of the macroscale human structural connectome and its network topology}}, journal = {NeuroImage}, volume = {178}, pages = {57-68}, year = {2018}, doi = {10.1016/j.neuroimage.2018.05.027}, url = {https://doi.org/10.1016/j.neuroimage.2018.05.027} } @article{Yeh2010, author = {Fang-Cheng Yeh and Van Jay Wedeen and Wen-Yih Isaac Tseng}, title = {{Generalized ${q}$-Sampling Imaging}}, journal = {IEEE Transactions on Medical Imaging}, volume = {29}, number = {9}, pages = {1626-1635}, year = {2010}, doi = {10.1109/TMI.2010.2045126}, url = {https://doi.org/10.1109/TMI.2010.2045126} } @article{Yendiki2014, author = {Anastasia Yendiki and Kami Koldewyn and Sita Kakunoori and Nancy Kanwisher and Bruce Fischl}, title = {{Spurious group differences due to head motion in a diffusion MRI study}}, journal = {NeuroImage}, volume = {88}, pages = {79-90}, year = {2014}, doi = {10.1016/j.neuroimage.2013.11.027}, url = {https://doi.org/10.1016/j.neuroimage.2013.11.027} } @article{Zhang2001, author = {Yongyue Zhang and Michael Brady and Stephen Smith}, title = {Segmentation of brain MR images through a hidden Markov random field model and the expectation-maximization algorithm}, journal = {IEEE Transactions on Medical Imaging}, volume = {20}, number = {1}, pages = {45-57}, year = {2001}, doi = {10.1109/42.906424} } @article{Zhang2012, author = {Hui Zhang and Torben Schneider and Claudia A. Wheeler-Kingshott and Daniel C. 
Alexander}, journal = {NeuroImage}, number = {4}, title = {NODDI: practical in vivo neurite orientation dispersion and density imaging of the human brain}, volume = {61}, pages = {1000-1016}, year = {2012}, doi = {10.1016/j.neuroimage.2012.03.072}, url = {https://doi.org/10.1016/j.neuroimage.2012.03.072} } @article{Zou2005, author = {Hui Zou and Trevor Hastie}, title = {{Regularization and Variable Selection Via the Elastic Net}}, journal = {Journal of the Royal Statistical Society Series B: Statistical Methodology}, volume = {67}, number = {2}, pages = {301-320}, year = {2005}, month = {March}, doi = {10.1111/j.1467-9868.2005.00503.x}, url = {https://doi.org/10.1111/j.1467-9868.2005.00503.x} } %% book @book{Hastie2009, author = {Trevor Hastie and Robert Tibshirani and Jerome Friedman}, title = {{The Elements of Statistical Learning: Data Mining, Inference, and Prediction, Second Edition}}, edition = {2}, publisher = {Springer}, year = {2009}, pages = {745}, doi = {https://doi.org/10.1007/978-0-387-84858-7}, } %% incollection @incollection{Topgaard2016, author = {Daniel Topgaard}, title = {{NMR Methods for Studying Microscopic Diffusion Anisotropy}}, booktitle = {Diffusion NMR of Confined Systems: Fluid Transport in Porous Solids and Heterogeneous Materials}, publisher = {The Royal Society of Chemistry}, year = {2016}, month = {December}, doi = {10.1039/9781782623779-00226}, url = {https://doi.org/10.1039/9781782623779-00226}, isbn = {978-1-78262-190-4} } %% inproceedings @inproceedings{Aganj2009, author = {Iman Aganj and Christophe Lenglet and Guillermo Sapiro}, title = {{ODF reconstruction in q-ball imaging with solid angle consideration}}, booktitle = {2009 IEEE International Symposium on Biomedical Imaging: From Nano to Macro}, year = {2009}, volume = {}, number = {}, pages = {1398-1401}, doi = {10.1109/ISBI.2009.5193327} } @inproceedings{ArceSantana2014, author = {Edgar Arce-Santana and Daniel U. Campos-Delgado and Flavio Vigueras-G{\'o}mez and Isnardo Reducindo and Aldo R. Mej{\'i}a-Rodr{\'i}guez}, title = {{Non-rigid Multimodal Image Registration Based on the Expectation-Maximization Algorithm}}, booktitle = {Image and Video Technology}, editor = {Reinhard Klette and Mariano Rivera and Shin'ichi Satoh}, publisher = {Springer Berlin Heidelberg}, address = {Berlin, Heidelberg}, pages = {36--47}, year = {2014}, isbn = {978-3-642-53842-1} } @inproceedings{Bruhn2005, author = {Andr\'{e}s Bruhn and Joachim Weickert}, title = {{Towards ultimate motion estimation: combining highest accuracy with real-time performance}}, booktitle = {Tenth IEEE International Conference on Computer Vision (ICCV'05) Volume 1}, volume = {1}, number = {}, pages = {749-755}, year = {2005}, doi = {10.1109/ICCV.2005.240} } @inproceedings{Chandio2020b, author = {Bramsh Qamar Chandio and Eleftherios Garyfallidis}, title = {{StND: Streamline-based Non-rigid partial-Deformation Tractography Registration}}, booktitle = {Medical Imaging Meets NeurIPS Workshop}, address = {Online}, year = {2020}, month = {December} } @inproceedings{Cheng2011, author = {Jian Cheng and Tianzi Jiang and Rachid Deriche}, title = {{Theoretical Analysis and Practical Insights on EAP Estimation via a Unified HARDI Framework}}, booktitle = {MICCAI Workshop on Computational Diffusion MRI (CDMRI)}, address = {Toronto, Canada}, year = {2011}, month = {September} } @inproceedings{Correia2011a, author = {Marta Morgado Correia and Guy B. 
Williams and Frank Yeh and Ian Nimmo-Smith and Eleftherios Garyfallidis}, title = {{Robustness of Diffusion Scalar Metrics When Estimated with Generalized Q-Sampling Imaging Acquisition Schemes}}, booktitle = {ISMRM 19th Annual Meeting \& Exhibition SMRT 20th Annual Meeting}, organization = {International Society for Magnetic Resonance in Medicine (ISMRM)}, address = {Montr\'{e}al, Canada}, volume = {}, pages = {}, year = {2011}, month = {May} } @inproceedings{DellAcqua2014, author = {Flavio Dell'Acqua and Luis Lacerda and Marco Catani and Andrew Simmons}, title = {{Anisotropic Power Maps: A diffusion contrast to reveal low anisotropy tissues from HARDI data}}, booktitle = {Joint Annual Meeting ISMRM-ESMRMB 2014 SMRT 23rd Annual Meeting}, organization = {International Society for Magnetic Resonance in Medicine (ISMRM)}, address = {Milan, Italy}, volume = {}, pages = {}, year = {2014} } @inproceedings{Descoteaux2008a, author = {Maxime Descoteaux and Nicolas Wiest-Daessl\'{e} and Sylvain Prima and Christian Barillot and Rachid Deriche}, title = {{Impact of Rician adapted Non-Local Means filtering on HARDI}}, booktitle = {Medical Image Computing and Computer-Assisted Intervention (MICCAI)}, editor = {Dimitris Metaxas and Leon Axel and Gabor Fichtinger and G{\'a}bor Sz{\'e}kely}, publisher = {Springer Berlin Heidelberg}, address = {Berlin, Heidelberg}, volume = {11}, number = {Pt 2}, pages = {122--130}, year = {2008}, doi = {10.1007/978-3-540-85990-1_15}, isbn = {978-3-540-85990-1} } @inproceedings{Fadnavis2024, author = {Shreyas Fadnavis and Agniva Chowdhury and Joshua Batson and Petros Drineas and Eleftherios Garyfallidis}, title = {{Patch2Self2: Self-supervised Denoising on Coresets via Matrix Sketching}}, booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition}, editor = {}, publisher = {}, volume = {}, pages = {27641--27651}, year = {2024}, url = {https://openaccess.thecvf.com/content/CVPR2024/papers/Fadnavis_Patch2Self2_Self-supervised_Denoising_on_Coresets_via_Matrix_Sketching_CVPR_2024_paper.pdf}, } @inproceedings{Fadnavis2020, author = {Shreyas Fadnavis and Joshua Batson and Eleftherios Garyfallidis}, title = {{Patch2Self: Denoising Diffusion MRI with Self-Supervised Learning}}, booktitle = {Advances in Neural Information Processing Systems}, editor = {Hugo Larochelle and Marc'Aurelio Ranzato and Raia Hadsell and Maria-Florina Balcan and Hsuan-Tien Lin}, publisher = {Curran Associates, Inc.}, volume = {33}, pages = {16293--16303}, year = {2020}, url = {https://proceedings.neurips.cc/paper_files/paper/2020/file/bc047286b224b7bfa73d4cb02de1238d-Paper.pdf}, } @inproceedings{Fadnavis2019, author = {Shreyas Fadnavis and Marco Reisert and Hamza Farooq and Maryam Afzali and Cheng Hu and Bago Amirbekian and Eleftherios Garyfallidis}, title = {{MicroLearn: Framework for machine learning, reconstruction, optimization and microstructure modeling}}, booktitle = {ISMRM 27th Annual Meeting \& Exhibition SMRT 28th Annual Meeting}, organization = {International Society for Magnetic Resonance in Medicine (ISMRM)}, address = {Montr\'{e}al, Canada}, pages = {}, year = {2019} } @inproceedings{Fick2016a, author = {Rutger H. J. 
Fick and Marco Pizzolato and Demian Wassermann and Mauro Zucchelli and Gloria Menegaz and Rachid Deriche}, title = {{A sensitivity analysis of q-space indices with respect to changes in axonal diameter, dispersion and tissue composition}}, booktitle = {IEEE 13th International Symposium on Biomedical Imaging (ISBI)}, volume = {}, number = {}, pages = {1241-1244}, year = {2016}, month = {April}, doi = {10.1109/ISBI.2016.7493491} } @inproceedings{Fick2015, author = {Rutger Fick and Demian Wassermann and Marco Pizzolato and Rachid Deriche}, title = {{A Unifying Framework for Spatial and Temporal Diffusion in Diffusion MRI}}, booktitle = {Information Processing in Medical Imaging}, editor = {Sebastien Ourselin and Daniel C. Alexander and Carl-Fredrik Westin and M. Jorge Cardoso}, publisher = {Springer International Publishing}, address = {Cham}, pages = {167--178}, year = {2015}, isbn = {978-3-319-19992-4} } @inproceedings{Fieremans2012, author = {Els Fieremans and Jens H. Jensen and Joseph A. Helpern and Sungheon Kim and Robert I. Grossman and Matilde Inglese and Dmitry S. Novikov}, title = {{Diffusion distinguishes between axonal loss and demyelination in brain white matter}}, booktitle = {ISMRM 20th Annual Meeting \& Exhibition SMRT 21st Annual Meeting}, organization = {International Society for Magnetic Resonance in Medicine (ISMRM)}, address = {Melbourne, Australia}, pages = {}, year = {2012} } @inproceedings{Garyfallidis2019, author = {Eleftherios Garyfallidis and Marc-Alexandre C\^{o}t\'{e} and Bramsh Qamar Chandio and Shreyas Fadnavis and Javier Guaje and Ranveer Aggarwal and Etienne St-Onge and Karandeep Singh Juneja and Serge Koudoro and David Reagan}, title = {{DIPY Horizon: fast, modular, unified and adaptive visualization}}, booktitle = {ISMRM 27th Annual Meeting \& Exhibition SMRT 28th Annual Meeting}, organization = {International Society for Magnetic Resonance in Medicine (ISMRM)}, address = {Montr\'{e}al, Canada}, volume = {}, pages = {}, year = {2019} } @inproceedings{Garyfallidis2016, author = {Eleftherios Garyfallidis and Marc-Alexandre C\^{o}t\'{e} and Fran\c{c}ois Rheault and Maxime Descoteaux}, title = {{QuickBundlesX: Sequential clustering of millions of streamlines in multiple levels of detail at record execution time}}, booktitle = {ISMRM 24th Annual Meeting \& Exhibition SMRT 25th Annual Meeting}, organization = {International Society for Magnetic Resonance in Medicine (ISMRM)}, address = {Singapore}, volume = {}, pages = {}, year = {2016} } @inproceedings{Garyfallidis2014b, author = {Eleftherios Garyfallidis and Demian Wassermann and Maxime Descoteaux}, title = {{Direct native-space fiber bundle alignment for group comparisons}}, booktitle = {Joint Annual Meeting ISMRM-ESMRMB 2014 SMRT 23rd Annual Meeting}, organization = {International Society for Magnetic Resonance in Medicine (ISMRM)}, address = {Milan, Italy}, pages = {}, year = {2014}, month = {May} } @inproceedings{Garyfallidis2011, author = {Eleftherios Garyfallidis and Matthew Brett and Bagrat Amirbekian and Christopher Nguyen and Fang-Cheng Yeh and Emanuele Olivetti and Yaroslav Halchenko and Ian Nimmo-Smith}, title = {{Dipy - a novel software library for diffusion MR and tractography}}, booktitle = {17th annual meeting of the Organization for Human Brain Mapping}, organization = {Organization for Human Brain Mapping (OHBM)}, address = {Quebec City, Canada}, volume = {}, pages = {}, year = {2011}, month = {June} } @inproceedings{Garyfallidis2010b, author = {Eleftherios Garyfallidis and Matthew Brett and Ian 
Nimmo-Smith}, title = {{Fast Dimensionality Reduction for Brain Tractography Clustering}}, booktitle = {16th Annual Meeting of the Organization for Human Brain Mapping}, organization = {Organization for Human Brain Mapping (OHBM)}, address = {Barcelona, Spain}, volume = {}, pages = {}, year = {2010}, month = {June} } @inproceedings{Garyfallidis2010a, author = {Eleftherios Garyfallidis and Matthew Brett and Vassilis Tsiaras and George Vogiatzis and Ian Nimmo-Smith}, title = {{Identification of corresponding tracks in diffusion MRI tractographies}}, booktitle = {Joint Annual Meeting ISMRM-ESMRMB 2010 SMRT 19th Annual Meeting}, organization = {International Society for Magnetic Resonance in Medicine (ISMRM)}, address = {Stockholm, Sweden}, volume = {}, pages = {}, year = {2010}, month = {May} } @inproceedings{Houde2015, author = {Jean-Christophe Houde and Marc-Alexandre C\^{o}t\'{e}-Harnois and Maxime Descoteaux}, title = {{How to Avoid Biased Streamlines-Based Metrics for Streamlines with Variable Step Sizes}}, booktitle = {ISMRM 23rd Annual Meeting \& Exhibition SMRT 24th Annual Meeting}, organization = {International Society for Magnetic Resonance in Medicine (ISMRM)}, address = {Toronto, Canada}, volume = {}, pages = {}, year = {2015} } @inproceedings{Lee2008, author = {Agatha D. Lee and Natasha Lepore and Marina Barysheva and Yi-Yu Chou and Caroline Brun and Sarah K. Madsen and Katie L. McMahon and Greig I. {de Zubicaray} and Matthew Meredith and Margaret J. Wright and Arthur W. Toga and Paul M. Thompson}, title = {{Comparison of fractional and geodesic anisotropy in diffusion tensor images of 90 monozygotic and dizygotic twins}}, booktitle = {2008 5th IEEE International Symposium on Biomedical Imaging: From Nano to Macro}, volume = {}, pages = {943-946}, year = {2008}, month = {May}, doi = {10.1109/ISBI.2008.4541153} } @inproceedings{Meesters2016b, author = {Stephan Meesters and Gonzalo Sanguinetti and Eleftherios Garyfallidis and Jorg Portegies and Pauly Ossenblok and Remco Duits}, title = {{Cleaning output of tractography via fiber to bundle coherence, a new open source implementation}}, booktitle = {The Organization for Human Brain Mapping Annual Meeting}, organization = {Organization for Human Brain Mapping (OHBM)}, address = {Geneva, Switzerland}, volume = {}, pages = {}, year = {2016}, month = {June} } @inproceedings{Meesters2016a, author = {Stephan Meesters and Gonzalo Sanguinetti and Eleftherios Garyfallidis and Jorg Portegies and Remco Duits}, title = {{Fast implementations of contextual PDE's for HARDI data processing in DIPY}}, booktitle = {ISMRM 24th Annual Meeting \& Exhibition SMRT 25th Annual Meeting}, organization = {International Society for Magnetic Resonance in Medicine (ISMRM)}, address = {Singapore}, volume = {}, pages = {}, year = {2016}, month = {May} } @inproceedings{Nath2018, author = {Vishwesh Nath and Kurt G. Schilling and Prasanna Parvathaneni and Allison E. Hainline and Colin B. Hansen and Camilo Bermudez and Andrew J. Plassard and Justin A. Blaber and Vaibhav Janve and Yurui Gao and Iwona Stepniewska and Adam W. Anderson and Bennett A. 
Landman}, title = {{Deep Learning Captures More Accurate Diffusion Fiber Orientations Distributions than Constrained Spherical Deconvolution}}, booktitle = {Joint Annual Meeting ISMRM-ESMRMB 2018 SMRT 27th Annual Meeting}, organization = {International Society for Magnetic Resonance in Medicine (ISMRM)}, address = {Paris, France}, pages = {}, year = {2018} } @inproceedings{Niethammer2006, author = {Marc Niethammer and Ra\'{u}l San Jos\'{e} Estepar and Sylvain Bouix and Martha Shenton and Carl-Fredrik Westin}, title = {{On Diffusion Tensor Estimation}}, booktitle = {International Conference of the IEEE Engineering in Medicine and Biology Society}, volume = {}, number = {}, pages = {2622-2625}, year = {2006}, doi = {10.1109/IEMBS.2006.259826} } @inproceedings{Ozarslan2009, author = {Evren {\"O}zarslan and Cheng Guan Koay and Timothy M. Shepherd and Stephen J. Blackband and Peter J. Basser}, title = {{Simple harmonic oscillator based reconstruction and estimation for three-dimensional q-space MRI}}, booktitle = {ISMRM 17th Scientific Meeting \& Exhibition SMRT 18th Annual Meeting}, organization = {International Society for Magnetic Resonance in Medicine (ISMRM)}, address = {Honolulu, HI, USA}, volume = {}, pages = {}, year = {2009} } @inproceedings{Ozarslan2008, author = {Evren {\"O}zarslan and Cheng Guan Koay and Peter J. Basser}, title = {{Simple harmonic oscillator based reconstruction and estimation for one-dimensional q-space magnetic resonance (1D-SHORE)}}, booktitle = {ISMRM 16th Scientific Meeting \& Exhibition SMRT 17th Annual Meeting}, organization = {International Society for Magnetic Resonance in Medicine (ISMRM)}, address = {Toronto, Canada}, volume = {16}, pages = {35}, year = {2008} } @inproceedings{Portegies2015a, author = {Jorg Portegies and Gonzalo Sanguinetti and Stephan Meesters and Remco Duits}, title = {{New Approximation of a Scale Space Kernel on SE(3) and Applications in Neuroimaging}}, booktitle = {5th International Conference on Scale Space and Variational Methods in Computer Vision}, editor = {Jean-Fran{\c{c}}ois Aujol and Mila Nikolova and Nicolas Papadakis}, publisher = {Springer International Publishing}, address = {Cham}, pages = {40--52}, year = {2015}, isbn = {978-3-319-18461-6} } @inproceedings{Rathi2011, author = {Yogesh Rathi and Oleg Michailovich and Kawin Setsompop and Sylvain Bouix and Martha E. Shenton and Carl-Fredrik Westin}, title = {{Sparse Multi-Shell Diffusion Imaging}}, booktitle = {Medical Image Computing and Computer-Assisted Intervention -- MICCAI 2011}, editor = {Gabor Fichtinger and Anne Martel and Terry Peters}, publisher = {Springer Berlin Heidelberg}, address = {Berlin, Heidelberg}, pages = {58--65}, year = {2011}, isbn = {978-3-642-23629-7} } @inproceedings{Rheault2015, author = {Fran\c{c}ois Rheault and Jean-Christophe Houde and Maxime Descoteaux}, title = {{Real Time Interaction with Millions of Streamlines}}, booktitle = {ISMRM 23rd Annual Meeting \& Exhibition SMRT 24th Annual Meeting}, organization = {International Society for Magnetic Resonance in Medicine (ISMRM)}, address = {Toronto, Canada}, volume = {}, pages = {}, year = {2015} } @inproceedings{Rodrigues2010, author = {Paulo Rodrigues and Remco Duits and Bart M. 
ter Haar Romeny and Anna Vilanova}, title = {{Accelerated Diffusion Operators for Enhancing DW-MRI}}, booktitle = {Eurographics Workshop on Visual Computing for Biology and Medicine}, editor = {Dirk Bartz and Charl Botha and Joachim Hornegger and Raghu Machiraju and Alexander Wiebel and Bernhard Preim}, publisher = {The Eurographics Association}, year = {2010}, isbn = {978-3-905674-28-6}, doi = {https://doi.org/10.2312/VCBM/VCBM10/049-056} } @inproceedings{Rokem2014, author = {Ariel Rokem and Kimberly L. Chan and Jason D. Yeatman and Franco Pestilli and Brian A. Wandell}, title = {{Evaluating the accuracy of diffusion models at multiple b-values with cross-validation}}, booktitle = {Joint Annual Meeting ISMRM-ESMRMB 2014 SMRT 23rd Annual Meeting}, organization = {International Society for Magnetic Resonance in Medicine (ISMRM)}, address = {Milan, Italy}, volume = {}, pages = {}, year = {2014} } @inproceedings{TristanVega2009b, author = {Antonio Trist{\'a}n-Vega and Santiago Aja-Fern{\'a}ndez and Carl-Fredrik Westin}, title = {{On the Blurring of the Funk-Radon Transform in Q-Ball Imaging}}, booktitle = {Medical Image Computing and Computer-Assisted Intervention -- MICCAI 2009}, editor = {Guang-Zhong Yang and David Hawkes and Daniel Rueckert and Alison Noble and Chris Taylor}, publisher = {Springer Berlin Heidelberg}, address = {Berlin, Heidelberg}, pages = {415-422}, year = {2009}, month = {September}, isbn = {978-3-642-04271-3} } @inproceedings{RomeroBascones2022, author = {David Romero-Bascones and Bramsh Qamar Chandio and Shreyas Fadnavis and Jong Sung Park and Serge Koudoro and Unai Ayala and Maitane Barrenechea and Eleftherios Garyfallidis}, title = {{BundleAtlasing: unbiased population-specific atlasing of bundles in streamline space}}, booktitle = {Joint Annual Meeting ISMRM-ESMRMB \& ISMRT 31st Annual Meeting}, organization = {International Society for Magnetic Resonance in Medicine (ISMRM)}, address = {London, UK}, volume = {}, pages = {}, year = {2022} } @inproceedings{Wanyan2017, author = {Tingyi Wanyan and Eleftherios Garyfallidis}, title = {{Important new insights for the reduction of false positives in tractograms emerge from streamline-based registration and pruning}}, booktitle = {ISMRM 25th Annual Meeting \& Exhibition SMRT 26th Annual Meeting}, organization = {International Society for Magnetic Resonance in Medicine (ISMRM)}, address = {Honolulu, HI, USA}, volume = {}, pages = {}, year = {2017} } @inproceedings{Westin1997, author = {Carl-Fredrik Westin and Sharon Peled and H\'{a}kon Gudbjartsson and Ron Kikinis and Ferenc Jolesz}, title = {{Geometrical diffusion measures for MRI from tensor basis analysis}}, booktitle = {5th Scientific Meeting and Exhibition}, organization = {International Society for Magnetic Resonance in Medicine (ISMRM)}, address = {Vancouver, Canada}, volume = {}, pages = {}, year = {1997} } @inproceedings{Zucchelli2017, author = {Mauro Zucchelli and Maxime Descoteaux and Gloria Menegaz}, title = {{A generalized SMT-based framework for Diffusion MRI microstructural model estimation}}, booktitle = {Computational Diffusion MRI}, editor = {Enrico Kaden and Francesco Grussu and Lipeng Ning and Chantal M. W. 
Tax and Jelle Veraart}, year = {2018}, publisher = {Springer International Publishing}, address = {Cham}, pages = {51--63} } %% mastersthesis @mastersthesis{NetoHenriques2012, author = {Rafael {Neto Henriques}}, title = {{Diffusion kurtosis imaging of the healthy human brain}}, type = {MSc thesis}, school = {Universidade de Lisboa}, address = {Lisboa, Portugal}, year = {2012}, doi = {}, url = {https://repositorio.ul.pt/bitstream/10451/8511/1/ulfc104137_tm_Rafael_Henriques.pdf} } %% misc @misc{Chandio2023, author = {Bramsh Qamar Chandio and Emanuele Olivetti and David Romero-Bascones and Jaroslaw Harezlak and Eleftherios Garyfallidis}, title = {{BundleWarp, streamline-based nonlinear registration of white matter tracts}}, howpublished = {bioRxiv}, publisher = {Cold Spring Harbor Laboratory}, year = {2023}, doi = {https://doi.org/10.1101/2023.01.04.522802} } @misc{NetoHenriques2017, author = {Rafael {Neto Henriques} and Ariel Rokem and Eleftherios Garyfallidis and Samuel St-Jean and Eric Thomas Peterson and Marta Morgado Correia}, title = {{[Re] Optimization of a free water elimination two-compartment model for diffusion tensor imaging}}, howpublished = {bioRxiv}, publisher = {Cold Spring Harbor Laboratory}, year = {2017}, doi = {https://doi.org/10.1101/108795} } @misc{Valabregue2015, author = {Romain Valabr\`{e}gue}, title = {{Diffusion MRI measured at multiple b-values}}, howpublished = {University Libraries, eScience Institute Faculty Research}, publisher = {University of Washington}, year = {2015}, url = {http://hdl.handle.net/1773/33311} } @misc{Wassermann2017, author = {Demian Wassermann and Mathieu Santin and Anne-Charlotte Philippe and Rutger Fick and Rachid Deriche and Stephane Lehericy and Alexandra Petiet}, title = {{Test-Retest qt-dMRI datasets for Non-Parametric GraphNet-Regularized Representation of dMRI in Space and Time}}, howpublished = {Zenodo}, year = {2017}, month = {September}, doi = {10.5281/zenodo.996889}, url = {https://zenodo.org/records/996889}, note = {Dataset} } %% phdthesis @phdthesis{Amirbekian2016, author = {Bagrat Amirbekian}, title = {{The Modeling of White Matter Architecture and Networks Using Diffusion MRI: Methods, Tools and Applications}}, type = {PhD thesis}, school = {University of California San Francisco}, address = {San Francisco, CA, USA}, year = {2016}, doi = {}, url = {} } @phdthesis{Cheng2012, author = {Jian Cheng}, title = {{Estimation and Processing of Ensemble Average Propagator and Its Features in Diffusion MRI}}, type = {PhD thesis}, school = {Universit{\'e} Nice Sophia Antipolis}, address = {Valbonne, France}, year = {2012}, doi = {}, url = {} } @phdthesis{Descoteaux2008b, author = {Maxime Descoteaux}, title = {{High Angular Resolution Diffusion MRI: from Local Estimation to Segmentation and Tractography}}, type = {PhD thesis}, school = {Universit{\'e} Nice Sophia Antipolis}, address = {Valbonne, France}, year = {2007}, doi = {}, url = {https://theses.hal.science/tel-00457458} } @phdthesis{Garyfallidis2012b, author = {Eleftherios Garyfallidis}, title = {{Towards an accurate brain tractography}}, type = {PhD thesis}, school = {Cambridge University}, address = {Cambridge, United Kingdom}, year = {2012}, doi = {}, url = {} } @phdthesis{NetoHenriques2018, author = {Rafael {Neto Henriques}}, title = {{Advanced Methods for Diffusion MRI Data Analysis and their Application to the Healthy Ageing Brain}}, type = {PhD thesis}, school = {Cambridge University}, address = {Cambridge, United Kingdom}, year = {2018}, doi = {10.17863/CAM.29356}, url = 
{https://doi.org/10.17863/CAM.29356} } @phdthesis{Tuch2002, author = {David S. Tuch}, title = {{Diffusion MRI of Complex Tissue Structure}}, type = {PhD thesis}, school = {Massachusetts Institute of Technology}, address = {Cambridge, USA}, year = {2002}, doi = {}, url = {http://hdl.handle.net/1721.1/8348} } dipy-1.11.0/doc/release_notes/000077500000000000000000000000001476546756600161665ustar00rootroot00000000000000dipy-1.11.0/doc/release_notes/release0.10.rst000066400000000000000000000344771476546756600206560ustar00rootroot00000000000000.. _release0.10: ==================================== Release notes for DIPY version 0.10 ==================================== GitHub stats for 2015/03/18 - 2015/11/19 (tag: 0.9.2) These lists are automatically generated, and may be incomplete or contain duplicates. The following 20 authors (alphabetically ordered) contributed 1022 commits: * Alexandre Gauvin * Ariel Rokem * Bago Amirbekian * David Qixiang Chen * Dimitris Rozakis * Eleftherios Garyfallidis * Gabriel Girard * Gonzalo Sanguinetti * Jean-Christophe Houde * Marc-Alexandre Côté * Matthew Brett * Mauro Zucchelli * Maxime Descoteaux * Michael Paquette * Omar Ocegueda * Oscar Esteban * Rafael Neto Henriques * Rohan Prinja * Samuel St-Jean * Stefan van der Walt We closed a total of 232 issues, 94 pull requests and 138 regular issues; this is the full list (generated with the script :file:`tools/github_stats.py`): Pull Requests (94): * :ghpull:`769`: RF: Remove aniso2iso altogether. * :ghpull:`772`: DOC: Use xvfb when building the docs in a headless machine. * :ghpull:`754`: DOC: Should we add a side-car gitter chat to the website? * :ghpull:`753`: TST: Test DSI with b0s. * :ghpull:`767`: Offscreen is False for test_slicer * :ghpull:`768`: Document dipy.reconst.dti.iter_fit_tensor params * :ghpull:`766`: Add fit_tensor iteration decorator * :ghpull:`751`: Reorient tracks according to ROI * :ghpull:`765`: BF: Typo in data file name. * :ghpull:`757`: Optimize dipy.align.reslice * :ghpull:`587`: Fvtk 2.0 PR1 * :ghpull:`749`: Fixed deprecation warning in skimage * :ghpull:`748`: TST: added test for _to_voxel_tolerance. * :ghpull:`678`: BF: added tolerance for negative streamline coordinates checks * :ghpull:`714`: RF: use masks in predictions and cross-validation * :ghpull:`739`: Set number of OpenMP threads during runtime * :ghpull:`733`: Add RTOP, RTAP and RTPP and the relative test * :ghpull:`743`: BF: memleaks with typed memory views in Cython * :ghpull:`724`: @sinkpoint's power map - refactored * :ghpull:`741`: ENH: it is preferable to use choice rather than randint to not have * :ghpull:`727`: Optimize tensor fitting * :ghpull:`726`: NF - CSD response from a mask * :ghpull:`729`: BF: tensor predict * :ghpull:`736`: Added installation of python-tk package for VTK travis bot * :ghpull:`735`: Added comment on nlmeans example about selecting one volume * :ghpull:`732`: WIP: Test with vtk on Travis * :ghpull:`731`: Np 1.10 * :ghpull:`640`: MAPMRI * :ghpull:`682`: Created list of examples for available features and metrics * :ghpull:`716`: Refactor data module * :ghpull:`699`: Added gaussian noise option to estimate_sigma * :ghpull:`712`: DOC: API changes in gh707. * :ghpull:`713`: RF: In case a user just wants to use a single integer. * :ghpull:`700`: TEST: add tests for AffineMap * :ghpull:`677`: DKI PR3 - NF: Adding standard kurtosis statistics on module dki.py * :ghpull:`721`: TST: Verify that output of estimate_sigma is a proper input to nlmeans. 
* :ghpull:`572`: NF : nlmeans now support arrays of noise std * :ghpull:`708`: Check for bval dimensionality on read. * :ghpull:`707`: BF: Keep up with changes in scipy 0.16 * :ghpull:`709`: DOC: Use the `identity` variable in the resampling transformation. * :ghpull:`703`: Fix syn-3d example * :ghpull:`705`: Fix example in function compress_streamline * :ghpull:`635`: Select streamlines based on logical operations on ROIs * :ghpull:`702`: BF: Use only validated examples when building docs. * :ghpull:`689`: Streamlines compression * :ghpull:`698`: DOC: added NI citation * :ghpull:`681`: RF + DOC: Add MNI template reference. Also import it into the dipy.da… * :ghpull:`696`: Change title of piesno example * :ghpull:`691`: CENIR 'HCP-like' multi b-value data * :ghpull:`661`: Test DTI eigenvectors * :ghpull:`690`: BF: nan entries cause segfault * :ghpull:`667`: DOC: Remove Sourceforge related makefile things. Add the gh-pages upl… * :ghpull:`676`: TST: update Travis config to use container infrastructure. * :ghpull:`533`: MRG: some Cython refactorings * :ghpull:`686`: BF: Make buildbot Pyhon26-32 happy * :ghpull:`683`: Fixed initial estimation in piesno * :ghpull:`654`: Affine registration PR 3/3 * :ghpull:`684`: BF: Fixed memory leak in QuickBundles. * :ghpull:`674`: NF: Function to sample perpendicular directions relative to a given vector * :ghpull:`679`: BF + NF: Provide dipy version info when running dipy.get_info() * :ghpull:`680`: NF: Fetch and Read the MNI T1 and/or T2 template. * :ghpull:`664`: DKI fitting (DKI PR2) * :ghpull:`671`: DOC: move mailing list links to neuroimaging * :ghpull:`663`: changed samuel st-jean email to the usherbrooke one * :ghpull:`648`: Improve check of collinearity in vec2vec_rotmat * :ghpull:`582`: DKI project: PR#1 Simulations to test DKI * :ghpull:`660`: BF: If scalar-color input has len(shape)<4, need to fill that in. * :ghpull:`612`: BF: Differences in spherical harmonic calculations wrt scipy 0.15 * :ghpull:`651`: Added estimate_sigma bias correction + update example * :ghpull:`659`: BF: If n_frames is larger than one use path-numbering. * :ghpull:`658`: FIX: resaved npy file causing load error for py 33 * :ghpull:`657`: Fix compilation error caused by inline functions * :ghpull:`628`: Affine registration PR 2/3 * :ghpull:`629`: Quickbundles 2.1 * :ghpull:`637`: DOC: Fix typo in docstring of Identity class. * :ghpull:`639`: DOC: Render the following line in the code cell. * :ghpull:`614`: Seeds from mask random * :ghpull:`633`: BF - no import of TissueTypes * :ghpull:`632`: fixed typo in dti example * :ghpull:`627`: BF: Add missing opacity property to point actor * :ghpull:`626`: Use LooseVersion to check for scipy versions * :ghpull:`625`: DOC: Include the PIESNO example. * :ghpull:`624`: DOC: Corrected typos in Restore tutorial and docstring. * :ghpull:`619`: DOC: Added missing contributor to developer list * :ghpull:`618`: Update README file * :ghpull:`616`: Raise ValueError when invalid matrix is given * :ghpull:`576`: Piesno example * :ghpull:`615`: bugfix for double word in doc example issue #387 * :ghpull:`610`: Added figure with HBM 2015 * :ghpull:`609`: Update website documentation * :ghpull:`607`: DOC: Detailed github stats for 0.9 * :ghpull:`606`: Removed the word new * :ghpull:`605`: Release mode: updating Changelog and Authors * :ghpull:`594`: DOC + PEP8: Mostly just line-wrapping. Issues (138): * :ghissue:`769`: RF: Remove aniso2iso altogether. * :ghissue:`772`: DOC: Use xvfb when building the docs in a headless machine. 
* :ghissue:`754`: DOC: Should we add a side-car gitter chat to the website? * :ghissue:`771`: Should we remove the deprecated quickbundles module? * :ghissue:`753`: TST: Test DSI with b0s. * :ghissue:`761`: reading dconn.nii * :ghissue:`723`: WIP: Assign streamlines to an existing cluster map via QuickBundles * :ghissue:`738`: Import tkinter * :ghissue:`767`: Offscreen is False for test_slicer * :ghissue:`752`: TST: Install vtk and mesa on Travis to test the fvtk module. * :ghissue:`768`: Document dipy.reconst.dti.iter_fit_tensor params * :ghissue:`763`: Tensor Fitting Overflows Memory * :ghissue:`766`: Add fit_tensor iteration decorator * :ghissue:`751`: Reorient tracks according to ROI * :ghissue:`765`: BF: Typo in data file name. * :ghissue:`764`: 404: Not Found when loading Stanford labels * :ghissue:`757`: Optimize dipy.align.reslice * :ghissue:`587`: Fvtk 2.0 PR1 * :ghissue:`286`: WIP - FVTK refactor/cleanup * :ghissue:`755`: dipy.reconst.tests.test_shm.test_sf_to_sh: TypeError: Cannot cast ufunc add output from dtype('float64') to dtype('uint16') with casting rule 'same_kind' * :ghissue:`749`: Fixed deprecation warning in skimage * :ghissue:`748`: TST: added test for _to_voxel_tolerance. * :ghissue:`678`: BF: added tolerance for negative streamline coordinates checks * :ghissue:`714`: RF: use masks in predictions and cross-validation * :ghissue:`739`: Set number of OpenMP threads during runtime * :ghissue:`733`: Add RTOP, RTAP and RTPP and the relative test * :ghissue:`743`: BF: memleaks with typed memory views in Cython * :ghissue:`737`: Possibly set_number_of_points doesn't delete memory * :ghissue:`672`: Power map * :ghissue:`724`: @sinkpoint's power map - refactored * :ghissue:`741`: ENH: it is preferable to use choice rather than randint to not have * :ghissue:`730`: numpy 1.10 breaks master * :ghissue:`727`: Optimize tensor fitting * :ghissue:`726`: NF - CSD response from a mask * :ghissue:`729`: BF: tensor predict * :ghissue:`736`: Added installation of python-tk package for VTK travis bot * :ghissue:`735`: Added comment on nlmeans example about selecting one volume * :ghissue:`732`: WIP: Test with vtk on Travis * :ghissue:`734`: WIP: Fvtk 2.0 with travis vtk support * :ghissue:`688`: dipy.test() fails on centos 6.x / python2.6 * :ghissue:`731`: Np 1.10 * :ghissue:`725`: WIP: TST: Install vtk on travis with conda. * :ghissue:`640`: MAPMRI * :ghissue:`611`: OSX test fail 'we check the default value of lambda ...' * :ghissue:`715`: In current segment_quickbundles tutorial there is no example for changing number of points * :ghissue:`719`: Fixes #715 * :ghissue:`682`: Created list of examples for available features and metrics * :ghissue:`716`: Refactor data module * :ghissue:`699`: Added gaussian noise option to estimate_sigma * :ghissue:`712`: DOC: API changes in gh707. * :ghissue:`713`: RF: In case a user just wants to use a single integer. * :ghissue:`700`: TEST: add tests for AffineMap * :ghissue:`677`: DKI PR3 - NF: Adding standard kurtosis statistics on module dki.py * :ghissue:`721`: TST: Verify that output of estimate_sigma is a proper input to nlmeans. * :ghissue:`693`: WIP: affine map tests * :ghissue:`694`: Memory errors / timeouts with affine registration on Windows * :ghissue:`572`: NF : nlmeans now support arrays of noise std * :ghissue:`708`: Check for bval dimensionality on read. 
* :ghissue:`697`: dipy.io.gradients read_bvals_bvecs does not check bvals length * :ghissue:`707`: BF: Keep up with changes in scipy 0.16 * :ghissue:`710`: Test dipy.core.tests.test_sphere.test_interp_rbf fails fails on Travis * :ghissue:`709`: DOC: Use the `identity` variable in the resampling transformation. * :ghissue:`649`: ROI seeds not placed at the center of the voxels * :ghissue:`656`: Build-bot status * :ghissue:`701`: Changes in `syn_registration_3d` example * :ghissue:`703`: Fix syn-3d example * :ghissue:`705`: Fix example in function compress_streamline * :ghissue:`704`: Buildbots failure: related to streamline compression? * :ghissue:`635`: Select streamlines based on logical operations on ROIs * :ghissue:`702`: BF: Use only validated examples when building docs. * :ghissue:`689`: Streamlines compression * :ghissue:`698`: DOC: added NI citation * :ghissue:`621`: piesno example not rendering correctly on the website * :ghissue:`650`: profiling hyp1f1 * :ghissue:`681`: RF + DOC: Add MNI template reference. Also import it into the dipy.da… * :ghissue:`696`: Change title of piesno example * :ghissue:`691`: CENIR 'HCP-like' multi b-value data * :ghissue:`661`: Test DTI eigenvectors * :ghissue:`690`: BF: nan entries cause segfault * :ghissue:`667`: DOC: Remove Sourceforge related makefile things. Add the gh-pages upl… * :ghissue:`676`: TST: update Travis config to use container infrastructure. * :ghissue:`533`: MRG: some Cython refactorings * :ghissue:`686`: BF: Make buildbot Pyhon26-32 happy * :ghissue:`622`: Fast shm from scipy 0.15.0 does not work on rc version * :ghissue:`683`: Fixed initial estimation in piesno * :ghissue:`233`: WIP: Dki * :ghissue:`654`: Affine registration PR 3/3 * :ghissue:`684`: BF: Fixed memory leak in QuickBundles. * :ghissue:`674`: NF: Function to sample perpendicular directions relative to a given vector * :ghissue:`679`: BF + NF: Provide dipy version info when running dipy.get_info() * :ghissue:`680`: NF: Fetch and Read the MNI T1 and/or T2 template. * :ghissue:`664`: DKI fitting (DKI PR2) * :ghissue:`539`: WIP: BF: Catching initial fodf creation of SDT * :ghissue:`671`: DOC: move mailing list links to neuroimaging * :ghissue:`663`: changed samuel st-jean email to the usherbrooke one * :ghissue:`287`: Fvtk sphere origin * :ghissue:`648`: Improve check of collinearity in vec2vec_rotmat * :ghissue:`582`: DKI project: PR#1 Simulations to test DKI * :ghissue:`660`: BF: If scalar-color input has len(shape)<4, need to fill that in. * :ghissue:`612`: BF: Differences in spherical harmonic calculations wrt scipy 0.15 * :ghissue:`651`: Added estimate_sigma bias correction + update example * :ghissue:`659`: BF: If n_frames is larger than one use path-numbering. * :ghissue:`652`: MAINT: work around scipy bug in sph_harm * :ghissue:`653`: Revisit naming when Matthew is back from Cuba * :ghissue:`658`: FIX: resaved npy file causing load error for py 33 * :ghissue:`657`: Fix compilation error caused by inline functions * :ghissue:`655`: Development documentation instructs to remove `master` * :ghissue:`628`: Affine registration PR 2/3 * :ghissue:`629`: Quickbundles 2.1 * :ghissue:`638`: tutorial example, code in text format * :ghissue:`637`: DOC: Fix typo in docstring of Identity class. * :ghissue:`639`: DOC: Render the following line in the code cell. * :ghissue:`614`: Seeds from mask random * :ghissue:`633`: BF - no import of TissueTypes * :ghissue:`632`: fixed typo in dti example * :ghissue:`630`: Possible documentation bug (?) 
* :ghissue:`627`: BF: Add missing opacity property to point actor * :ghissue:`459`: streamtubes opacity kwarg * :ghissue:`626`: Use LooseVersion to check for scipy versions * :ghissue:`625`: DOC: Include the PIESNO example. * :ghissue:`623`: DOC: Include the PIESNO example in the documentation. * :ghissue:`624`: DOC: Corrected typos in Restore tutorial and docstring. * :ghissue:`619`: DOC: Added missing contributor to developer list * :ghissue:`604`: Retired ARM buildbot * :ghissue:`613`: Possible random failure in test_vector_fields.test_reorient_vector_field_2d * :ghissue:`618`: Update README file * :ghissue:`616`: Raise ValueError when invalid matrix is given * :ghissue:`617`: Added build status icon to readme * :ghissue:`576`: Piesno example * :ghissue:`615`: bugfix for double word in doc example issue #387 * :ghissue:`600`: Use of nanmean breaks dipy for numpy < 1.8 * :ghissue:`610`: Added figure with HBM 2015 * :ghissue:`609`: Update website documentation * :ghissue:`390`: WIP: New PIESNO example and small corrections * :ghissue:`607`: DOC: Detailed github stats for 0.9 * :ghissue:`606`: Removed the word new * :ghissue:`605`: Release mode: updating Changelog and Authors * :ghissue:`594`: DOC + PEP8: Mostly just line-wrapping. dipy-1.11.0/doc/release_notes/release0.11.rst000066400000000000000000000200711476546756600206400ustar00rootroot00000000000000.. _release0.11: ==================================== Release notes for DIPY version 0.11 ==================================== GitHub stats for 2015/12/03 - 2016/02/21 (tag: 0.10) The following 16 authors contributed 271 commits. * Ariel Rokem * Bago Amirbekian * Bishakh Ghosh * Eleftherios Garyfallidis * Gabriel Girard * Gregory R. Lee * Himanshu Mishra * Jean-Christophe Houde * Marc-Alexandre Côté * Matthew Brett * Matthieu Dumont * Omar Ocegueda * Sagun Pai * Samuel St-Jean * Stephan Meesters * Vatsala Swaroop We closed a total of 144 issues, 55 pull requests and 89 regular issues; this is the full list (generated with the script :file:`tools/github_stats.py`): Pull Requests (55): * :ghpull:`933`: Updating release dates * :ghpull:`925`: fix typos * :ghpull:`915`: BF: correct handling of output paths in dipy_quickbundles. * :ghpull:`922`: Fix PEP8 in top-level tests * :ghpull:`921`: fix typo * :ghpull:`918`: Fix PEP8 in test_expectmax * :ghpull:`917`: Website 0.11 update and more devs * :ghpull:`916`: Getting website ready for 0.11 release * :ghpull:`914`: DOC: Update release notes for 0.11 * :ghpull:`910`: Singleton sl vals * :ghpull:`908`: Fix pep8 errors in viz * :ghpull:`911`: fix typo * :ghpull:`904`: fix typo * :ghpull:`851`: Tissue Classifier tracking example - changed seeding mask to wm only voxels * :ghpull:`858`: Updates for upcoming numpy 1.11 release * :ghpull:`856`: Add reference to gitter chat room in the README * :ghpull:`762`: Contextual enhancements of ODF/FOD fields * :ghpull:`857`: DTI memory: use the same step in prediction as you use in fitting. * :ghpull:`816`: A few fixes to SFM. * :ghpull:`811`: Extract values from an image based on streamline coordinates. 
* :ghpull:`853`: miscellaneous Python 3 compatibility problem fixes in fvtk
* :ghpull:`849`: nlmeans use num threads option in 3d
* :ghpull:`850`: DOC: fix typo
* :ghpull:`848`: DOC: fix typo
* :ghpull:`847`: DOC: fix typo
* :ghpull:`845`: DOC: Add kurtosis example to examples_index
* :ghpull:`846`: DOC: fix typo
* :ghpull:`826`: Return numpy arrays instead of memory views from cython functions
* :ghpull:`841`: Rename CONTRIBUTING to CONTRIBUTING.md
* :ghpull:`839`: DOC: Fix up the docstring for the CENIR data
* :ghpull:`819`: DOC: Add the DKI reconstruction example to the list of valid examples.
* :ghpull:`843`: Drop 3.2
* :ghpull:`838`: "Contributing"
* :ghpull:`833`: Doc: Typo
* :ghpull:`817`: RF: Convert nan values in bvectors to 0's
* :ghpull:`836`: fixed typo
* :ghpull:`695`: Introducing workflows
* :ghpull:`829`: Fixes issue #813 by not checking data type explicitly.
* :ghpull:`830`: Fixed doc of SDT
* :ghpull:`825`: Updated toollib and doc tools (#802)
* :ghpull:`760`: NF - random seeds from mask
* :ghpull:`824`: Updated copyright to 2016
* :ghpull:`815`: DOC: The previous link doesn't exist anymore.
* :ghpull:`669`: Function to reorient gradient directions according to moco parameters
* :ghpull:`809`: MRG: refactor and test setup.py
* :ghpull:`821`: BF: revert accidentally committed COMMIT_INFO.txt
* :ghpull:`818`: Round coords life
* :ghpull:`797`: Update csdeconv.py
* :ghpull:`806`: Relax regression tests
* :ghpull:`814`: TEST: compare array shapes directly
* :ghpull:`808`: MRG: pull in discarded changes from maintenance
* :ghpull:`745`: faster version of piesno
* :ghpull:`807`: BF: fix shebang lines for scripts
* :ghpull:`794`: RF: Allow setting the verbosity of the AffineRegistration while running it
* :ghpull:`801`: TST: add Python 3.5 to travis-ci test matrix

Issues (89):

* :ghissue:`933`: Updating release dates
* :ghissue:`925`: fix typos
* :ghissue:`915`: BF: correct handling of output paths in dipy_quickbundles.
* :ghissue:`922`: Fix PEP8 in top-level tests
* :ghissue:`886`: PEP8 in top-level tests
* :ghissue:`921`: fix typo
* :ghissue:`918`: Fix PEP8 in test_expectmax
* :ghissue:`863`: PEP8 in test_expectmax
* :ghissue:`919`: STYLE:PEP8 workflows
* :ghissue:`896`: STYLE: PEP8 for workflows folder
* :ghissue:`917`: Website 0.11 update and more devs
* :ghissue:`900`: SLR example needs updating
* :ghissue:`906`: Compiling the website needs too much memory
* :ghissue:`916`: Getting website ready for 0.11 release
* :ghissue:`914`: DOC: Update release notes for 0.11
* :ghissue:`910`: Singleton sl vals
* :ghissue:`908`: Fix pep8 errors in viz
* :ghissue:`890`: PEP8 in viz
* :ghissue:`911`: fix typo
* :ghissue:`905`: math is broken in doc
* :ghissue:`904`: fix typo
* :ghissue:`851`: Tissue Classifier tracking example - changed seeding mask to wm only voxels
* :ghissue:`858`: Updates for upcoming numpy 1.11 release
* :ghissue:`856`: Add reference to gitter chat room in the README
* :ghissue:`762`: Contextual enhancements of ODF/FOD fields
* :ghissue:`857`: DTI memory: use the same step in prediction as you use in fitting.
* :ghissue:`816`: A few fixes to SFM.
* :ghissue:`898`: Pep8 #891
* :ghissue:`811`: Extract values from an image based on streamline coordinates.
* :ghissue:`892`: PEP8 workflows
* :ghissue:`894`: PEP8 utils
* :ghissue:`895`: PEP8 Tracking
* :ghissue:`893`: PEP8 Viz
* :ghissue:`860`: Added Travis-CI badge
* :ghissue:`692`: Refactor fetcher.py
* :ghissue:`742`: LinAlgError on tracking quickstart, with python 3.4
* :ghissue:`822`: Could you help me ? "URLError:"
* :ghissue:`840`: Make dti reconst less memory hungry
* :ghissue:`855`: 0.9.3rc
* :ghissue:`853`: miscellaneous Python 3 compatibility problem fixes in fvtk
* :ghissue:`849`: nlmeans use num threads option in 3d
* :ghissue:`850`: DOC: fix typo
* :ghissue:`848`: DOC: fix typo
* :ghissue:`153`: DiffusionSpectrumModel assumes 1 b0 and fails with data with more than 1 b0
* :ghissue:`93`: GradientTable mask does not account for nan's in b-values
* :ghissue:`665`: Online tutorial of quickbundles does not work for released version on macosx
* :ghissue:`758`: One viz test still failing on mac os
* :ghissue:`847`: DOC: fix typo
* :ghissue:`845`: DOC: Add kurtosis example to examples_index
* :ghissue:`846`: DOC: fix typo
* :ghissue:`826`: Return numpy arrays instead of memory views from cython functions
* :ghissue:`841`: Rename CONTRIBUTING to CONTRIBUTING.md
* :ghissue:`839`: DOC: Fix up the docstring for the CENIR data
* :ghissue:`842`: New pip fails on 3.2
* :ghissue:`819`: DOC: Add the DKI reconstruction example to the list of valid examples.
* :ghissue:`843`: Drop 3.2
* :ghissue:`838`: "Contributing"
* :ghissue:`833`: Doc: Typo
* :ghissue:`817`: RF: Convert nan values in bvectors to 0's
* :ghissue:`836`: fixed typo
* :ghissue:`695`: Introducing workflows
* :ghissue:`829`: Fixes issue #813 by not checking data type explicitly.
* :ghissue:`805`: Multiple failures on Windows Python 3.5 build
* :ghissue:`802`: toollib and doc tools need update to 3.5
* :ghissue:`812`: Python 2.7 doctest failures on 64-bit Windows
* :ghissue:`685`: (WIP) DKI PR5 - NF: DKI-ODF estimation
* :ghissue:`830`: Fixed doc of SDT
* :ghissue:`825`: Updated toollib and doc tools (#802)
* :ghissue:`760`: NF - random seeds from mask
* :ghissue:`824`: Updated copyright to 2016
* :ghissue:`666`: Parallelized local tracking branch so now you can actually look at my code :)
* :ghissue:`815`: DOC: The previous link doesn't exist anymore.
* :ghissue:`747`: TEST: make test faster
* :ghissue:`631`: NF - multiprocessing multi voxel fit
* :ghissue:`669`: Function to reorient gradient directions according to moco parameters
* :ghissue:`809`: MRG: refactor and test setup.py
* :ghissue:`820`: dipy.get_info() returns wrong commit hash
* :ghissue:`821`: BF: revert accidentally committed COMMIT_INFO.txt
* :ghissue:`818`: Round coords life
* :ghissue:`810`: Wrong input type for `_voxel2stream` on 64-bit Windows
* :ghissue:`803`: Windows 7 Pro VM Python 2.7 gives 5 test errors with latest release 0.10.1
* :ghissue:`797`: Update csdeconv.py
* :ghissue:`806`: Relax regression tests
* :ghissue:`814`: TEST: compare array shapes directly
* :ghissue:`808`: MRG: pull in discarded changes from maintenance
* :ghissue:`745`: faster version of piesno
* :ghissue:`807`: BF: fix shebang lines for scripts
* :ghissue:`794`: RF: Allow setting the verbosity of the AffineRegistration while running it
* :ghissue:`801`: TST: add Python 3.5 to travis-ci test matrix

.. _release0.12:

====================================
Release notes for DIPY version 0.12
====================================

GitHub stats for 2016/02/21 - 2017/06/26 (tag: 0.11.0)

These lists are automatically generated, and may be incomplete or contain duplicates.

The following 48 authors contributed 1491 commits.
* Alexandre Gauvin
* Antonio Ossa
* Ariel Rokem
* Bago Amirbekian
* Bishakh Ghosh
* David Reagan
* Eleftherios Garyfallidis
* Etienne St-Onge
* Gabriel Girard
* Gregory R. Lee
* Jean-Christophe Houde
* Jon Haitz Legarreta
* Julio Villalon
* Kesshi Jordan
* Manu Tej Sharma
* Marc-Alexandre Côté
* Matthew Brett
* Matthieu Dumont
* Nil Goyette
* Omar Ocegueda Gonzalez
* Rafael Neto Henriques
* Ranveer Aggarwal
* Riddhish Bhalodia
* Rutger Fick
* Samuel St-Jean
* Serge Koudoro
* Shahnawaz Ahmed
* Sourav Singh
* Stephan Meesters
* Stonge Etienne
* Guillaume Theaud
* Tingyi Wanyan
* Tom Wright
* Vibhatha Abeykoon
* Yaroslav Halchenko
* Eric Peterson
* Sven Dorkenwald
* theaverageguy

We closed a total of 511 issues, 169 pull requests and 342 regular issues;
this is the full list (generated with the script :file:`tools/github_stats.py`):

Pull Requests (169):

* :ghpull:`1273`: Release 0.12 doc fix
* :ghpull:`1272`: small correction for debugging purpose on nlmeans
* :ghpull:`1269`: Odf slicer
* :ghpull:`1271`: Viz tut update
* :ghpull:`1268`: Following up on #1243.
* :ghpull:`1223`: local PCA using SVD
* :ghpull:`1270`: Doc cleaning deprecation warning
* :ghpull:`1267`: Adding a decorator for skipping test if openmp is not available
* :ghpull:`1090`: Documentation for command line interfaces
* :ghpull:`1243`: Better fvtk.viz error when no VTK installed
* :ghpull:`1263`: Cast Streamline attrs to numpy ints, to avoid buffer mismatch.
* :ghpull:`1254`: Automate script installation
* :ghpull:`1261`: removing absolute path in tracking module
* :ghpull:`1255`: Fix missing documentation content
* :ghpull:`1260`: removing absolute path in reconst
* :ghpull:`1241`: Csa and csd reconstruction workflow rebased
* :ghpull:`1250`: DOC: Fix reconst_dki.py DKI example documentation typos.
* :ghpull:`1244`: TEST: Decrease precision of tests for dki micro model prediction
* :ghpull:`1235`: New hdf5 file format for saving PeaksAndMetrics objects
* :ghpull:`1231`: TST: Reduce precision requirement for test of tortuosity estimation.
* :ghpull:`1233`: Feature: Added environment override for dipy_home variable
* :ghpull:`1234`: BUG: Fix non-ASCII characters in reconst_dki.py example.
* :ghpull:`1222`: A lightweight UI for medical visualizations #5: 2D Circular Slider
* :ghpull:`1228`: RF: Use cython imports instead of relying on extern
* :ghpull:`1227`: BF: Use np.npy_intp instead of assuming long for ArraySequence attributes
* :ghpull:`1226`: DKI Microstructural model
* :ghpull:`1229`: RF: allow for scipy pre-release deprecations
* :ghpull:`1225`: Add one more multi b-value data-set
* :ghpull:`1219`: MRG:Data off dropbox
* :ghpull:`1221`: NF: Check multi b-value
* :ghpull:`1212`: Follow PEP8 in reconst (part 2)
* :ghpull:`1217`: Use integer division in reconst_gqi.py
* :ghpull:`1205`: A lightweight UI for medical visualizations #4: 2D Line Slider
* :ghpull:`1166`: RF: Use the average sigma in the mask.
* :ghpull:`1216`: Use integer division to avoid errors in indexing
* :ghpull:`1214`: DOC: add a clarification note to simplify_warp_funcion_3d
* :ghpull:`1208`: Follow PEP8 in reconst (part 1)
* :ghpull:`1206`: Revert #1204, and add a filter to suppress warnings.
* :ghpull:`1196`: MRG: Use dipy's array comparisons for tests
* :ghpull:`1204`: Suppress warnings regarding one-dimensional arrays changes in scipy 0.18
* :ghpull:`1199`: A lightweight UI for medical visualizations #3: Changes to Event Handling
* :ghpull:`1202`: Use integer division to avoid errors in indexing
* :ghpull:`1198`: ENH: avoid log zero
* :ghpull:`1201`: Fix out of bounds point not being classified OUTSIDEIMAGE (binary cla…
* :ghpull:`1115`: Bayesian Tissue Classification
* :ghpull:`1052`: Conda install
* :ghpull:`1183`: A lightweight UI for medical visualizations #2: TextBox
* :ghpull:`1186`: MRG: use newer wheelhouse for installs
* :ghpull:`1195`: Make PeaksAndMetrics pickle-able
* :ghpull:`1194`: Use assert_arrays_equal when needed.
* :ghpull:`1193`: Deprecate the Accent colormap, in anticipation of changes in MPL 2.0
* :ghpull:`1140`: A lightweight UI for medical visualizations #1: Button
* :ghpull:`1171`: fix:dev: added numpy.int64 for my triangle array
* :ghpull:`1123`: Add the mask workflow
* :ghpull:`1174`: NF: added the repulsion 200 sphere.
* :ghpull:`1177`: BF: fix interpolation call with Numpy 1.12
* :ghpull:`1162`: Return S0 value for DTI fits
* :ghpull:`1147`: add this fix for newer version of pytables.
* :ghpull:`1076`: ENH: Add support for ArraySequence in `length` function
* :ghpull:`1050`: ENH: expand OpenMP utilities and move from denspeed.pyx to dipy.utils
* :ghpull:`1082`: Add documentation uploading script
* :ghpull:`1153`: Athena mapmri
* :ghpull:`1159`: TST - add tests for various affine matrices for local tracking
* :ghpull:`1157`: Replace `get_affine` with `affine` and `get_header` with `header`.
* :ghpull:`1160`: Add Shahnawaz to list of contributors.
* :ghpull:`1158`: BF: closing matplotlib plots for each file while running the examples
* :ghpull:`1151`: Define fmin() for Visual Studio
* :ghpull:`1149`: Change DKI_signal to dki_signal
* :ghpull:`1137`: Small fix to insure that fwDTI non-linear procedure does not crash
* :ghpull:`942`: NF: Added support to colorize each line points individually
* :ghpull:`1141`: Do not cover files related to benchmarks.
* :ghpull:`1098`: Adding custom interactor for visualisation
* :ghpull:`1136`: Update deprecated function.
* :ghpull:`1113`: TST: Test for invariance of model_params to splitting of the data.
* :ghpull:`1134`: Rebase of https://github.com/dipy/dipy/pull/993
* :ghpull:`1064`: Faster dti odf
* :ghpull:`1114`: flexible grid to streamline affine generation and pathlength function
* :ghpull:`1122`: Add the reconst_dti workflow
* :ghpull:`1132`: Update .travis.yml and README.md
* :ghpull:`1125`: Intensity adjustment. Find a better upper bound for interpolating images.
* :ghpull:`1130`: Minor corrections for showing surfaces
* :ghpull:`1092`: Line-based target()
* :ghpull:`1129`: Fix 1127
* :ghpull:`1034`: Viz surfaces
* :ghpull:`1060`: Fast computation of Cross Correlation metric
* :ghpull:`1124`: Small fix in free water DTI model
* :ghpull:`1058`: IVIM
* :ghpull:`1110`: WIP : Ivim linear fitting
* :ghpull:`1120`: Fix 1119
* :ghpull:`1075`: Drop26
* :ghpull:`835`: NF: Free water tensor model
* :ghpull:`1046`: BF - peaks_from_model with nbr_processes <= 0
* :ghpull:`1049`: MAINT: minor cython cleanup in align/vector_fields.pyx
* :ghpull:`1087`: Base workflow enhancements + tests
* :ghpull:`1112`: DOC: Math rendering issue in SFM gallery example.
* :ghpull:`1109`: Changed default value of mni template
* :ghpull:`1106`: Including MNI Template 2009c in Fetcher
* :ghpull:`1066`: Adaptive Denoising
* :ghpull:`1091`: Modifications for building docs with python3
* :ghpull:`1105`: Import reload function from imp module explicitly for python3
* :ghpull:`1102`: MRG: add pytables to travis-ci, Py35 full test run
* :ghpull:`1100`: Fix for Python 3 in io.dpy
* :ghpull:`1094`: Updates to FBC measures documentation
* :ghpull:`1059`: Documentation to discourage misuse of GradientTable
* :ghpull:`1063`: Fixes #1061 : Changed all S0 to 1.0
* :ghpull:`1089`: BF: fix test error on Python 3
* :ghpull:`1079`: Return a generator from `orient_by_roi`
* :ghpull:`1088`: Restored the older implementation of nlmeans
* :ghpull:`1080`: DOC: TensorModel.__init__ docstring.
* :ghpull:`828`: Fiber to bundle coherence measures
* :ghpull:`1072`: DOC: Added a coverage badge to README.rst
* :ghpull:`1025`: PEP8: Fix pep8 in segment
* :ghpull:`1077`: DOC: update fibernavigator link
* :ghpull:`1069`: DOC: Small one -- we need this additional line of white space to render.
* :ghpull:`1068`: Renamed test_shore for consistency
* :ghpull:`1067`: Generate b vectors using disperse_charges
* :ghpull:`1065`: improve OMP parallelization with scheduling
* :ghpull:`1062`: BF - fix CSD.predict to work with nd inputs.
* :ghpull:`1056`: Remove tracking interfaces
* :ghpull:`1028`: BF: Predict DKI with a volume of S0
* :ghpull:`1041`: NF - Add PMF Threshold to Tractography
* :ghpull:`1039`: Doc - fix definition of real_sph_harm functions
* :ghpull:`1019`: MRG: fix heavy dependency check; no numpy for setup
* :ghpull:`1018`: Fix: denspeed.pyx to give correct output for nlmeans
* :ghpull:`1035`: Fix for fetcher files in Windows
* :ghpull:`974`: Minor change in `tools/github_stats.py`
* :ghpull:`1021`: Added warning for VTK not installed
* :ghpull:`1024`: Documentation fix for reconst_dsid.py
* :ghpull:`981`: Fixes #979 : No figures in DKI example - Add new line after figure
* :ghpull:`958`: FIX: PEP8 in testing
* :ghpull:`1005`: FIX: Use absolute imports in io
* :ghpull:`951`: Contextual Enhancement update: fix SNR issue, fix reference
* :ghpull:`1015`: Fix progressbar of fetcher
* :ghpull:`992`: FIX: Update the import statements to use absolute import in core
* :ghpull:`1003`: FIX: Change the import statements in direction
* :ghpull:`1004`: FIX: Use absolute import in pkg_info
* :ghpull:`1006`: FIX: Use absolute import in utils and scratch
* :ghpull:`1010`: Absolute Imports in Viz
* :ghpull:`929`: Fix PEP8 in data
* :ghpull:`941`: BW: skimage.filter module name warning
* :ghpull:`976`: Fix PEP8 in sims and remove unnecessary imports
* :ghpull:`956`: FIX: PEP8 in reconst/test and reconst/benchmarks
* :ghpull:`955`: FIX: PEP8 in external
* :ghpull:`952`: FIX: PEP8 in tracking and tracking benchmarks/tests
* :ghpull:`982`: FIX: relative imports in dipy/align
* :ghpull:`972`: Fixes #901 : Added documentation for "step" in dti
* :ghpull:`971`: Add progress bar feature to dipy.data.fetcher
* :ghpull:`989`: copyright 2008-2016
* :ghpull:`977`: Relative import fix in dipy/align
* :ghpull:`957`: FIX: PEP8 in denoise
* :ghpull:`959`: FIX: PEP8 in utils
* :ghpull:`967`: Update index.rst correcting the date of release 0.11
* :ghpull:`954`: FIX: PEP8 in direction
* :ghpull:`965`: Fix typo
* :ghpull:`948`: Fix PEP8 in boots
* :ghpull:`946`: FIX: PEP8 for test_sumsqdiff and test_scalespace
* :ghpull:`964`: FIX: PEP8 in test_imaffine
* :ghpull:`963`: FIX: PEP8 in core
* :ghpull:`947`: FIX: PEP8 for test files
* :ghpull:`897`: PEP8
* :ghpull:`926`: Fix PEP8 in fixes
* :ghpull:`937`: BF : Clamping of the value of v in winding function
* :ghpull:`907`: DOC: switch to using mathjax for maths
* :ghpull:`932`: Fixes #931 : checks if nb_points=0
* :ghpull:`927`: Fix PEP8 in io and remove duplicate definition in test_bvectxt.py
* :ghpull:`913`: Fix pep8 in workflows
* :ghpull:`935`: Setup: go on to version 0.12 development.
* :ghpull:`934`: DOC: Update github stats for 0.11 as of today.
* :ghpull:`933`: Updating release dates

Issues (342):

* :ghissue:`1273`: Release 0.12 doc fix
* :ghissue:`1272`: small correction for debugging purpose on nlmeans
* :ghissue:`1269`: Odf slicer
* :ghissue:`1143`: Slice through ODF fields
* :ghissue:`1271`: Viz tut update
* :ghissue:`1246`: WIP: Replace widget with ui components in example.
* :ghissue:`1268`: Following up on #1243.
* :ghissue:`1223`: local PCA using SVD
* :ghissue:`1265`: Test failure on OSX in test_nlmeans_4d_3dsigma_and_threads
* :ghissue:`1270`: Doc cleaning deprecation warning
* :ghissue:`1251`: Slice through ODF fields - Rebased
* :ghissue:`1267`: Adding a decorator for skipping test if openmp is not available
* :ghissue:`1090`: Documentation for command line interfaces
* :ghissue:`1243`: Better fvtk.viz error when no VTK installed
* :ghissue:`1238`: Cryptic fvtk.viz error when no VTK installed
* :ghissue:`1242`: DKI microstructure model tests still fail intermittenly
* :ghissue:`1252`: Debug PR only - Odf slicer vtk tests (do not merge)
* :ghissue:`1263`: Cast Streamline attrs to numpy ints, to avoid buffer mismatch.
* :ghissue:`1257`: revamp piesno docstring
* :ghissue:`978`: Use absolute import in align
* :ghissue:`1179`: Automate workflow generation
* :ghissue:`1253`: Automate script installation for workflows
* :ghissue:`1254`: Automate script installation
* :ghissue:`1261`: removing absolute path in tracking module
* :ghissue:`1001`: Use absolute import in tracking
* :ghissue:`1255`: Fix missing documentation content
* :ghissue:`1260`: removing absolute path in reconst
* :ghissue:`999`: Use absolute import in reconst
* :ghissue:`1258`: Fix nlmeans indexing
* :ghissue:`369`: Add TESTs for resample
* :ghissue:`1155`: csa and csd reconstruction workflow
* :ghissue:`1000`: Use absolute import in segment, testing and tests
* :ghissue:`1070`: [Docs] Examples using deprecated function
* :ghissue:`711`: Update api_changes.rst for interp_rbf
* :ghissue:`321`: Median otsu figures in example don't look good
* :ghissue:`994`: Use absolute import in dipy/core
* :ghissue:`608`: Customize at runtime the number of cores nlmeans is using
* :ghissue:`865`: PEP8 in test_imwarp
* :ghissue:`591`: Allow seed_from_mask to generate random seeds
* :ghissue:`518`: TODO: aniso2iso module will be completely removed in version 0.10.
* :ghissue:`328`: "incompatible" import of peaks_from_model in your recent publication
* :ghissue:`1241`: Csa and csd reconstruction workflow rebased
* :ghissue:`1250`: DOC: Fix reconst_dki.py DKI example documentation typos.
* :ghissue:`1244`: TEST: Decrease precision of tests for dki micro model prediction
* :ghissue:`1235`: New hdf5 file format for saving PeaksAndMetrics objects
* :ghissue:`1231`: TST: Reduce precision requirement for test of tortuosity estimation.
* :ghissue:`1210`: Switching branches in windows and pip install error
* :ghissue:`1209`: Move data files out of dropbox => persistent URL
* :ghissue:`1233`: Feature: Added environment override for dipy_home variable
* :ghissue:`1234`: BUG: Fix non-ASCII characters in reconst_dki.py example.
* :ghissue:`1222`: A lightweight UI for medical visualizations #5: 2D Circular Slider
* :ghissue:`1185`: unable to use fvtk.show after ubuntu 16.10 install
* :ghissue:`1228`: RF: Use cython imports instead of relying on extern
* :ghissue:`909`: Inconsistent output for values_from_volume
* :ghissue:`1182`: CSD vs CSA
* :ghissue:`1211`: `dipy.data.read_bundles_2_subjects` doesn't fetch data as expected
* :ghissue:`1227`: BF: Use np.npy_intp instead of assuming long for ArraySequence attributes
* :ghissue:`1027`: (DO NOT MERGE THIS PR) NF: DKI microstructural model
* :ghissue:`1226`: DKI Microstructural model
* :ghissue:`1229`: RF: allow for scipy pre-release deprecations
* :ghissue:`1225`: Add one more multi b-value data-set
* :ghissue:`1219`: MRG:Data off dropbox
* :ghissue:`1218`: [Docs] Error while generating html
* :ghissue:`1221`: NF: Check multi b-value
* :ghissue:`1212`: Follow PEP8 in reconst (part 2)
* :ghissue:`1217`: Use integer division in reconst_gqi.py
* :ghissue:`1205`: A lightweight UI for medical visualizations #4: 2D Line Slider
* :ghissue:`1166`: RF: Use the average sigma in the mask.
* :ghissue:`1216`: Use integer division to avoid errors in indexing
* :ghissue:`1215`: [Docs] Error while building examples: tracking_quick_start.py
* :ghissue:`1213`: dipy.align.vector_fields.simplify_warp_function_3d: Wrong equation in docstring
* :ghissue:`1214`: DOC: add a clarification note to simplify_warp_funcion_3d
* :ghissue:`1208`: Follow PEP8 in reconst (part 1)
* :ghissue:`1206`: Revert #1204, and add a filter to suppress warnings.
* :ghissue:`1196`: MRG: Use dipy's array comparisons for tests
* :ghissue:`1191`: Test failures for cluster code with current numpy master
* :ghissue:`1207`: Follow PEP8 in reconst
* :ghissue:`1204`: Suppress warnings regarding one-dimensional arrays changes in scipy 0.18
* :ghissue:`1107`: Dipy.align.reslice: either swallow the scipy warning or rework to avoid it
* :ghissue:`1199`: A lightweight UI for medical visualizations #3: Changes to Event Handling
* :ghissue:`1200`: [Docs] Error while generating docs
* :ghissue:`1202`: Use integer division to avoid errors in indexing
* :ghissue:`1188`: Colormap test errors for new matplotlib
* :ghissue:`1187`: Negative integer powers error with numpy 1.12
* :ghissue:`1170`: Importing vtk with dipy
* :ghissue:`1086`: ENH: avoid calling log() on zero-valued elements in anisotropic_power
* :ghissue:`1198`: ENH: avoid log zero
* :ghissue:`1201`: Fix out of bounds point not being classified OUTSIDEIMAGE (binary cla…
* :ghissue:`1115`: Bayesian Tissue Classification
* :ghissue:`1052`: Conda install
* :ghissue:`1183`: A lightweight UI for medical visualizations #2: TextBox
* :ghissue:`1173`: TST: Test on Python 3.6
* :ghissue:`1186`: MRG: use newer wheelhouse for installs
* :ghissue:`1190`: Pickle error for Python 3.6 and test_peaksFromModelParallel
* :ghissue:`1195`: Make PeaksAndMetrics pickle-able
* :ghissue:`1194`: Use assert_arrays_equal when needed.
* :ghissue:`1193`: Deprecate the Accent colormap, in anticipation of changes in MPL 2.0
* :ghissue:`1189`: Np1.12
* :ghissue:`1140`: A lightweight UI for medical visualizations #1: Button
* :ghissue:`1022`: Fixes #720 : Auto generate ipython notebooks
* :ghissue:`1139`: The shebang again! Python: bad interpreter: No such file or directory
* :ghissue:`1171`: fix:dev: added numpy.int64 for my triangle array
* :ghissue:`1123`: Add the mask workflow
* :ghissue:`1174`: NF: added the repulsion 200 sphere.
* :ghissue:`1176`: Dipy.tracking.local.interpolation.nearestneighbor_interpolate raises when used with Numpy 1.12
* :ghissue:`1177`: BF: fix interpolation call with Numpy 1.12
* :ghissue:`1162`: Return S0 value for DTI fits
* :ghissue:`1142`: pytables version and streamlines_format.py example
* :ghissue:`1147`: add this fix for newer version of pytables.
* :ghissue:`1076`: ENH: Add support for ArraySequence in `length` function
* :ghissue:`1050`: ENH: expand OpenMP utilities and move from denspeed.pyx to dipy.utils
* :ghissue:`1082`: Add documentation uploading script
* :ghissue:`1153`: Athena mapmri
* :ghissue:`1097`: Added to quantize_evecs: multiprocessing and v
* :ghissue:`1159`: TST - add tests for various affine matrices for local tracking
* :ghissue:`1163`: WIP: Combined contour function with slicer to use affine
* :ghissue:`940`: Drop python 2.6
* :ghissue:`1040`: SFM example using deprecated code
* :ghissue:`1118`: pip install dipy failing on my windows
* :ghissue:`1119`: Buildbots failing with workflow merge
* :ghissue:`1127`: Windows buildbot failures after ivim_linear merge
* :ghissue:`1128`: Support for non linear denoise?
* :ghissue:`1138`: A few broken builds
* :ghissue:`1148`: Actual S0 for DTI data
* :ghissue:`1157`: Replace `get_affine` with `affine` and `get_header` with `header`.
* :ghissue:`1160`: Add Shahnawaz to list of contributors.
* :ghissue:`740`: Improved mapmri implementation with laplacian regularization and new …
* :ghissue:`1045`: Allow affine 'shear' tolerance in LocalTracking
* :ghissue:`1154`: [Bug] connectivity matrix image in streamline_tools example
* :ghissue:`1158`: BF: closing matplotlib plots for each file while running the examples
* :ghissue:`1151`: Define fmin() for Visual Studio
* :ghissue:`1145`: DKI_signal should be dki_signal in dipy.sims.voxel
* :ghissue:`1149`: Change DKI_signal to dki_signal
* :ghissue:`1137`: Small fix to insure that fwDTI non-linear procedure does not crash
* :ghissue:`827`: Free Water Elimination DTI
* :ghissue:`942`: NF: Added support to colorize each line points individually
* :ghissue:`1141`: Do not cover files related to benchmarks.
* :ghissue:`1098`: Adding custom interactor for visualisation
* :ghissue:`1136`: Update deprecated function.
* :ghissue:`1113`: TST: Test for invariance of model_params to splitting of the data.
* :ghissue:`1134`: Rebase of https://github.com/dipy/dipy/pull/993
* :ghissue:`1064`: Faster dti odf
* :ghissue:`1114`: flexible grid to streamline affine generation and pathlength function
* :ghissue:`1122`: Add the reconst_dti workflow
* :ghissue:`1132`: Update .travis.yml and README.md
* :ghissue:`1051`: ENH: use parallel processing in the cython code for CCMetric
* :ghissue:`993`: FIX: Use absolute imports in testing,tests and segment files
* :ghissue:`673`: WIP: Workflow for syn registration
* :ghissue:`859`: [WIP] Suppress warnings in tests
* :ghissue:`983`: PEP8 in sims #884
* :ghissue:`984`: PEP8 in reconst #881
* :ghissue:`1009`: Absolute Imports in Tracking
* :ghissue:`1036`: Estimate S0 from data (DTI)
* :ghissue:`1125`: Intensity adjustment. Find a better upper bound for interpolating images.
* :ghissue:`1130`: Minor corrections for showing surfaces
* :ghissue:`1092`: Line-based target()
* :ghissue:`1129`: Fix 1127
* :ghissue:`1034`: Viz surfaces
* :ghissue:`394`: Update documentation for VTK and Anaconda
* :ghissue:`973`: Minor change in `tools/github_stats.py`
* :ghissue:`1060`: Fast computation of Cross Correlation metric
* :ghissue:`1124`: Small fix in free water DTI model
* :ghissue:`1058`: IVIM
* :ghissue:`1110`: WIP : Ivim linear fitting
* :ghissue:`1120`: Fix 1119
* :ghissue:`1121`: Recons dti workflow
* :ghissue:`1083`: nlmeans problem
* :ghissue:`1075`: Drop26
* :ghissue:`835`: NF: Free water tensor model
* :ghissue:`1046`: BF - peaks_from_model with nbr_processes <= 0
* :ghissue:`1049`: MAINT: minor cython cleanup in align/vector_fields.pyx
* :ghissue:`1087`: Base workflow enhancements + tests
* :ghissue:`1112`: DOC: Math rendering issue in SFM gallery example.
* :ghissue:`670`: Tissue classification using MAP-MRF
* :ghissue:`332`: A sample nipype interface for fit_tensor
* :ghissue:`1116`: failing to build the docs: issue with io.BufferedIOBase
* :ghissue:`1109`: Changed default value of mni template
* :ghissue:`1106`: Including MNI Template 2009c in Fetcher
* :ghissue:`1066`: Adaptive Denoising
* :ghissue:`351`: Dipy.tracking.utils.target affine parameter is misleading
* :ghissue:`1091`: Modifications for building docs with python3
* :ghissue:`912`: Unable to build documentation with Python 3
* :ghissue:`1105`: Import reload function from imp module explicitly for python3
* :ghissue:`1104`: restore_dti.py example does not work in python3
* :ghissue:`1102`: MRG: add pytables to travis-ci, Py35 full test run
* :ghissue:`1100`: Fix for Python 3 in io.dpy
* :ghissue:`1103`: BF: This raises a warning on line 367 otherwise.
* :ghissue:`1101`: Test with optional dependencies (including pytables) on Python 3.
* :ghissue:`1094`: Updates to FBC measures documentation
* :ghissue:`1059`: Documentation to discourage misuse of GradientTable
* :ghissue:`1061`: Inconsistency in specifying S0 values in multi_tensor and single_tensor
* :ghissue:`1063`: Fixes #1061 : Changed all S0 to 1.0
* :ghissue:`1089`: BF: fix test error on Python 3
* :ghissue:`1079`: Return a generator from `orient_by_roi`
* :ghissue:`1088`: Restored the older implementation of nlmeans
* :ghissue:`1080`: DOC: TensorModel.__init__ docstring.
* :ghissue:`1085`: Enhanced workflows
* :ghissue:`1081`: mean_diffusivity from the reconst.dti module returns incorrect shape
* :ghissue:`1031`: improvements for denoise/denspeed.pyx
* :ghissue:`828`: Fiber to bundle coherence measures
* :ghissue:`1072`: DOC: Added a coverage badge to README.rst
* :ghissue:`1071`: report coverage and add a badge?
* :ghissue:`1038`: BF: Should fix #1037
* :ghissue:`1078`: Fetcher for ivim data, needs md5
* :ghissue:`953`: FIX: PEP8 for segment
* :ghissue:`1025`: PEP8: Fix pep8 in segment
* :ghissue:`883`: PEP8 in segment
* :ghissue:`1077`: DOC: update fibernavigator link
* :ghissue:`1069`: DOC: Small one -- we need this additional line of white space to render.
* :ghissue:`1068`: Renamed test_shore for consistency
* :ghissue:`1067`: Generate b vectors using disperse_charges
* :ghissue:`1011`: Discrepancy with output of nlmeans.py
* :ghissue:`1055`: WIP: Ivim implementation
* :ghissue:`1065`: improve OMP parallelization with scheduling
* :ghissue:`1062`: BF - fix CSD.predict to work with nd inputs.
* :ghissue:`1057`: Workaround for https://github.com/dipy/dipy/issues/852
* :ghissue:`1037`: tracking.interfaces imports SlowAdcOpdfModel, but it is not defined
* :ghissue:`1056`: Remove tracking interfaces
* :ghissue:`813`: Windows 64-bit error in segment.featurespeed.extract
* :ghissue:`1054`: Remove tracking interfaces
* :ghissue:`1028`: BF: Predict DKI with a volume of S0
* :ghissue:`1041`: NF - Add PMF Threshold to Tractography
* :ghissue:`1039`: Doc - fix definition of real_sph_harm functions
* :ghissue:`1019`: MRG: fix heavy dependency check; no numpy for setup
* :ghissue:`1018`: Fix: denspeed.pyx to give correct output for nlmeans
* :ghissue:`1043`: DO NOT MERGE: Add a test of local tracking, using data from dipy.data.
* :ghissue:`899`: SNR in contextual enhancement example
* :ghissue:`991`: Documentation footer has 2008-2015 mentioned.
* :ghissue:`1008`: [WIP] NF: Implementation of CHARMED model
* :ghissue:`1030`: Fetcher files not found on Windows
* :ghissue:`1035`: Fix for fetcher files in Windows
* :ghissue:`1016`: viz.fvtk has no attribute 'ren'
* :ghissue:`1033`: Viz surfaces
* :ghissue:`1032`: Merge pull request #1 from nipy/master
* :ghissue:`1029`: Errors building Cython extensions on Python 3.5
* :ghissue:`974`: Minor change in `tools/github_stats.py`
* :ghissue:`1002`: Use absolute import in utils and scratch
* :ghissue:`1014`: Progress bar works only for some data
* :ghissue:`1013`: `dipy.data.make_fetcher` test fails with Python 3
* :ghissue:`1020`: Documentation does not mention Scipy as a dependency for VTK widgets.
* :ghissue:`1023`: display in dsi example is broken
* :ghissue:`1021`: Added warning for VTK not installed
* :ghissue:`882`: PEP8 in reconst tests/benchmarks
* :ghissue:`888`: PEP8 in tracking benchmarks/tests
* :ghissue:`885`: PEP8 in testing
* :ghissue:`902`: fix typo
* :ghissue:`1024`: Documentation fix for reconst_dsid.py
* :ghissue:`979`: No figures in DKI example
* :ghissue:`981`: Fixes #979 : No figures in DKI example - Add new line after figure
* :ghissue:`958`: FIX: PEP8 in testing
* :ghissue:`1005`: FIX: Use absolute imports in io
* :ghissue:`997`: Use absolute import in io
* :ghissue:`675`: Voxelwise stabilisation
* :ghissue:`951`: Contextual Enhancement update: fix SNR issue, fix reference
* :ghissue:`1015`: Fix progressbar of fetcher
* :ghissue:`1012`: TST: install the dipy.data tests.
* :ghissue:`992`: FIX: Update the import statements to use absolute import in core
* :ghissue:`1003`: FIX: Change the import statements in direction
* :ghissue:`996`: Use absolute import in dipy/direction
* :ghissue:`1004`: FIX: Use absolute import in pkg_info
* :ghissue:`998`: Use absolute import in dipy/pkg_info.py
* :ghissue:`1006`: FIX: Use absolute import in utils and scratch
* :ghissue:`1010`: Absolute Imports in Viz
* :ghissue:`1007`: Use absolute import in viz
* :ghissue:`929`: Fix PEP8 in data
* :ghissue:`874`: PEP8 in data
* :ghissue:`980`: Fix pep8 in reconst
* :ghissue:`1017`: Fixes #1016 : Raises VTK not installed
* :ghissue:`877`: PEP8 in external
* :ghissue:`887`: PEP8 in tracking
* :ghissue:`941`: BW: skimage.filter module name warning
* :ghissue:`976`: Fix PEP8 in sims and remove unnecessary imports
* :ghissue:`884`: PEP8 in sims
* :ghissue:`956`: FIX: PEP8 in reconst/test and reconst/benchmarks
* :ghissue:`955`: FIX: PEP8 in external
* :ghissue:`952`: FIX: PEP8 in tracking and tracking benchmarks/tests
* :ghissue:`982`: FIX: relative imports in dipy/align
* :ghissue:`972`: Fixes #901 : Added documentation for "step" in dti
* :ghissue:`901`: DTI `step` parameter not documented.
* :ghissue:`995`: Use absolute import in dipy/data/__init__.py
* :ghissue:`344`: Update citation page
* :ghissue:`971`: Add progress bar feature to dipy.data.fetcher
* :ghissue:`970`: Downloading data with dipy.data.fetcher does not show any progress bar
* :ghissue:`986`: "pip3 install dipy" in Installation for python3
* :ghissue:`990`: No figures in DKI example
* :ghissue:`989`: copyright 2008-2016
* :ghissue:`988`: doc/conf.py shows copyright 2008-2015. Should be 2016?
* :ghissue:`975`: Use absolute import in imaffine, imwarp, metrics
* :ghissue:`517`: TODO: Peaks to be removed from dipy.reconst in 0.10
* :ghissue:`977`: Relative import fix in dipy/align
* :ghissue:`875`: PEP8 in denoise
* :ghissue:`957`: FIX: PEP8 in denoise
* :ghissue:`960`: PEP8 in sims #884
* :ghissue:`961`: PEP8 in reconst #880
* :ghissue:`962`: PEP8 in reconst #881
* :ghissue:`889`: PEP8 in utils
* :ghissue:`959`: FIX: PEP8 in utils
* :ghissue:`866`: PEP8 in test_metrics
* :ghissue:`867`: PEP8 in test_parzenhist
* :ghissue:`868`: PEP8 in test_scalespace
* :ghissue:`869`: PEP8 in test_sumsqdiff
* :ghissue:`870`: PEP8 in test_transforms
* :ghissue:`871`: PEP8 in test_vector_fields
* :ghissue:`864`: PEP8 in `test_imaffine`
* :ghissue:`967`: Update index.rst correcting the date of release 0.11
* :ghissue:`862`: PEP8 in `test_crosscorr`
* :ghissue:`873`: PEP8 in core
* :ghissue:`831`: ACT tracking example gives weird streamlines
* :ghissue:`876`: PEP8 in direction
* :ghissue:`954`: FIX: PEP8 in direction
* :ghissue:`965`: Fix typo
* :ghissue:`968`: Use relative instead of absolute import
* :ghissue:`948`: Fix PEP8 in boots
* :ghissue:`872`: PEP8 in boots
* :ghissue:`946`: FIX: PEP8 for test_sumsqdiff and test_scalespace
* :ghissue:`964`: FIX: PEP8 in test_imaffine
* :ghissue:`963`: FIX: PEP8 in core
* :ghissue:`966`: fix typo
* :ghissue:`947`: FIX: PEP8 for test files
* :ghissue:`920`: STYLE:PEP8 for test_imaffine
* :ghissue:`897`: PEP8
* :ghissue:`950`: PEP8 fixed in reconst/tests and reconst/benchmarks
* :ghissue:`949`: Fixed Pep8 utils tracking testing denoise
* :ghissue:`926`: Fix PEP8 in fixes
* :ghissue:`878`: PEP8 in fixes
* :ghissue:`939`: Fixed PEP8 in utils, denoise , tracking and testing
* :ghissue:`945`: FIX: PEP8 in test_scalespace
* :ghissue:`937`: BF : Clamping of the value of v in winding function
* :ghissue:`930`: pep8 fix issue #896 - "continuation line over-indented for visual indent"
* :ghissue:`943`: BF: Removed unused code in slicer
* :ghissue:`907`: DOC: switch to using mathjax for maths
* :ghissue:`931`: dipy/tracking/streamlinespeed set_number_of_points crash when nb_points=0
* :ghissue:`932`: Fixes #931 : checks if nb_points=0
* :ghissue:`927`: Fix PEP8 in io and remove duplicate definition in test_bvectxt.py
* :ghissue:`924`: in dipy/io/tests/test_bvectxt.py function with same name is defined twice
* :ghissue:`879`: PEP8 in io
* :ghissue:`913`: Fix pep8 in workflows
* :ghissue:`891`: PEP8 in workflows
* :ghissue:`938`: PEP8 issues solved in utils, testing and denoise
* :ghissue:`935`: Setup: go on to version 0.12 development.
* :ghissue:`934`: DOC: Update github stats for 0.11 as of today.
* :ghissue:`933`: Updating release dates

.. _release0.13:

====================================
Release notes for DIPY version 0.13
====================================

GitHub stats for 2017/06/27 - 2017/10/24 (tag: 0.12.0)

These lists are automatically generated, and may be incomplete or contain duplicates.

The following 13 authors contributed 212 commits.

* Ariel Rokem
* Bennet Fauber
* David Reagan
* Eleftherios Garyfallidis
* Guillaume Theaud
* Jon Haitz Legarreta Gorroño
* Marc-Alexandre Côté
* Matthieu Dumont
* Rafael Neto Henriques
* Ranveer Aggarwal
* Rutger Fick
* Saber Sheybani
* Serge Koudoro

We closed a total of 115 issues, 39 pull requests and 76 regular issues;
this is the full list (generated with the script :file:`tools/github_stats.py`):

Pull Requests (39):

* :ghpull:`1367`: BF: Import Streamlines object directly from nibabel.
* :ghpull:`1361`: Windows instructions update + citation update
* :ghpull:`1316`: DOC: Add a coding style guideline.
* :ghpull:`1360`: [FIX] references order in workflow
* :ghpull:`1348`: ENH: Add support for ArraySequence in`set_number_of_points` function
* :ghpull:`1357`: Update tracking_quick_start.py rebase of #1332
* :ghpull:`1239`: Enable memory profiling of the examples.
* :ghpull:`1356`: Add picking to slicer - rebase
* :ghpull:`1351`: From tables to h5py
* :ghpull:`1353`: FIX : improve epilogue accessibility for workflow
* :ghpull:`1262`: VIZ: A lightweight UI for medical visualizations #6: 2D File Selector
* :ghpull:`1352`: use legacy float array for printing numpy array
* :ghpull:`1314`: DOC: Fix typos and formatting in .rst files and Python examples.
* :ghpull:`1345`: DOC: Format README.rst file code blocks.
* :ghpull:`1330`: ENH: Add Travis badge to README.rst.
* :ghpull:`1315`: Remove GPL from our README.
* :ghpull:`1328`: BUG: Address small_delta vs. big_delta flipped parameters.
* :ghpull:`1329`: DOC: Fix typos in multi_io.py workflow file docstring.
* :ghpull:`1336`: Test modification for windows 10 / numpy 1.14
* :ghpull:`1335`: Catch a more specific warning in test_csdeconv
* :ghpull:`1319`: Correct white-space for fwdti example.
* :ghpull:`1297`: Added eigh version of localpca to svd version
* :ghpull:`1298`: Make TextActor2D extend UI instead of object
* :ghpull:`1312`: Flags correction for windows
* :ghpull:`1285`: mapmri using cvxpy instead of cvxopt
* :ghpull:`1307`: PyTables Error-handling
* :ghpull:`1310`: Fix error message
* :ghpull:`1308`: Fix inversion in the dti mode doc
* :ghpull:`1304`: DOC: Fix typos in dti.py reconstruction file doc.
* :ghpull:`1303`: DOC: Add missing label to reciprocal space eq.
* :ghpull:`1289`: MRG: Suppress a divide-by-zero warning.
* :ghpull:`1288`: NF Add the parameter fa_operator in auto_response function
* :ghpull:`1290`: Corrected a small error condition
* :ghpull:`1279`: UI advanced fix
* :ghpull:`1287`: Fix doc errors
* :ghpull:`1286`: Last doc error fix on 0.12.x
* :ghpull:`1284`: Added missing tutorials
* :ghpull:`1278`: Moving ahead with 0.13 (dev version)
* :ghpull:`1277`: One test (decimal issue) and a fix in viz_ui tutorial.

Issues (76):

* :ghissue:`1367`: BF: Import Streamlines object directly from nibabel.
* :ghissue:`1366`: Circular imports in dipy.tracking.utils?
* :ghissue:`1146`: Installation instructions for windows need to be updated
* :ghissue:`1084`: Installation for windows developers using Anaconda needs to be updated
* :ghissue:`1361`: Windows instructions update + citation update
* :ghissue:`1248`: Windows doc installation update is needed for Python 3, Anaconda and VTK support
* :ghissue:`1316`: DOC: Add a coding style guideline.
* :ghissue:`1360`: [FIX] references order in workflow
* :ghissue:`1359`: Epilogue's reference should be last not first
* :ghissue:`1324`: WIP: Det track workflow and other improvements in workflows
* :ghissue:`1348`: ENH: Add support for ArraySequence in`set_number_of_points` function
* :ghissue:`1357`: Update tracking_quick_start.py rebase of #1332
* :ghissue:`1332`: Update tracking_quick_start.py
* :ghissue:`1239`: Enable memory profiling of the examples.
* :ghissue:`1356`: Add picking to slicer - rebase
* :ghissue:`1334`: Add picking to slicer
* :ghissue:`1351`: From tables to h5py
* :ghissue:`1353`: FIX : improve epilogue accessibility for workflow
* :ghissue:`1344`: Check accessibility of epilogue in Workflows
* :ghissue:`1262`: VIZ: A lightweight UI for medical visualizations #6: 2D File Selector
* :ghissue:`1352`: use legacy float array for printing numpy array
* :ghissue:`1346`: Test broken in numpy 1.14
* :ghissue:`1333`: Trying QuickBundles (Python3 and vtk--> using: conda install -c clinicalgraphics vtk)
* :ghissue:`1044`: Reconstruction FOD
* :ghissue:`1247`: Interactor bug in viz_ui example
* :ghissue:`1314`: DOC: Fix typos and formatting in .rst files and Python examples.
* :ghissue:`1345`: DOC: Format README.rst file code blocks.
* :ghissue:`1349`: Doctest FIX : use legacy printing
* :ghissue:`1330`: ENH: Add Travis badge to README.rst.
* :ghissue:`1337`: Coveralls seems baggy let's remove it
* :ghissue:`1341`: ActiveAx model fitting using MIX framework
* :ghissue:`1315`: Remove GPL from our README.
* :ghissue:`1325`: Small is Big - Big is small (mapl - mapmri)
* :ghissue:`1328`: BUG: Address small_delta vs. big_delta flipped parameters.
* :ghissue:`1329`: DOC: Fix typos in multi_io.py workflow file docstring.
* :ghissue:`1336`: Test modification for windows 10 / numpy 1.14
* :ghissue:`1323`: Warnings raised in csdeconv for upcoming numpy 1.14
* :ghissue:`1335`: Catch a more specific warning in test_csdeconv
* :ghissue:`1042`: RF - move direction getters to dipy/direction/
* :ghissue:`1319`: Correct white-space for fwdti example.
* :ghissue:`1317`: reconst_fwdti.py example figures not being rendered
* :ghissue:`1297`: Added eigh version of localpca to svd version
* :ghissue:`1313`: No module named 'vtkCommonCore'
* :ghissue:`1318`: Mix framework with Cythonized func_mul
* :ghissue:`1167`: Potential replacement for CVXOPT?
* :ghissue:`1180`: WIP: replacing cvxopt with cvxpy.
* :ghissue:`1298`: Make TextActor2D extend UI instead of object
* :ghissue:`375`: Peak directiions test error on PPC
* :ghissue:`1312`: Flags correction for windows
* :ghissue:`804`: Wrong openmp flag on Windows
* :ghissue:`1285`: mapmri using cvxpy instead of cvxopt
* :ghissue:`662`: dipy/align/mattes.pyx doctest import error
* :ghissue:`1307`: PyTables Error-handling
* :ghissue:`1306`: Error-handling when pytables not installed
* :ghissue:`1309`: step_helpers gives a wrong error message
* :ghissue:`1310`: Fix error message
* :ghissue:`1308`: Fix inversion in the dti mode doc
* :ghissue:`1304`: DOC: Fix typos in dti.py reconstruction file doc.
* :ghissue:`1303`: DOC: Add missing label to reciprocal space eq.
* :ghissue:`1289`: MRG: Suppress a divide-by-zero warning.
* :ghissue:`1293`: Garyfallidis recobundles
* :ghissue:`1292`: Garyfallidis recobundles
* :ghissue:`1288`: NF Add the parameter fa_operator in auto_response function
* :ghissue:`1290`: Corrected a small error condition
* :ghissue:`1279`: UI advanced fix
* :ghissue:`1287`: Fix doc errors
* :ghissue:`1286`: Last doc error fix on 0.12.x
* :ghissue:`1284`: Added missing tutorials
* :ghissue:`322`: Missing content in tracking.utils' documentation
* :ghissue:`570`: The documentation for `dipy.viz` is not in the API reference
* :ghissue:`1053`: WIP: Local pca and noise estimation
* :ghissue:`881`: PEP8 in reconst
* :ghissue:`880`: PEP8 in reconst
* :ghissue:`1169`: Time for a new release - scipy 0.18?
* :ghissue:`1278`: Moving ahead with 0.13 (dev version)
* :ghissue:`1277`: One test (decimal issue) and a fix in viz_ui tutorial.

.. _release0.14:

====================================
Release notes for DIPY version 0.14
====================================

GitHub stats for 2017/10/24 - 2018/05/01 (tag: 0.13.0)

These lists are automatically generated, and may be incomplete or contain duplicates.

The following 24 authors contributed 645 commits.

* Ariel Rokem
* Bago Amirbekian
* Bennet Fauber
* Conor Corbin
* David Reagan
* Eleftherios Garyfallidis
* Gabriel Girard
* Jean-Christophe Houde
* Jiri Borovec
* Jon Haitz Legarreta Gorroño
* Jon Mendoza
* Karandeep Singh Juneja
* Kesshi Jordan
* Kumar Ashutosh
* Marc-Alexandre Côté
* Matthew Brett
* Nil Goyette
* Pradeep Reddy Raamana
* Ricci Woo
* Serge Koudoro
* Shreyas Fadnavis
* Aman Arya
* Mauro Zucchelli

We closed a total of 215 issues, 70 pull requests and 145 regular issues;
this is the full list (generated with the script :file:`tools/github_stats.py`):

Pull Requests (70):

* :ghpull:`1504`: Fix test_whole_brain_slr precision
* :ghpull:`1503`: fix plotting issue on Mapmri example
* :ghpull:`1424`: ENH: Use the CFIN dataset rather than the CENIR dataset.
* :ghpull:`1502`: [DOC] Fix doc generation
* :ghpull:`1498`: BF: fix bug in example of segment_quickbundles.py
* :ghpull:`1431`: NF - Bootstrap Direction Getter (cythonized)
* :ghpull:`1443`: RecoBundles - recognition of bundles
* :ghpull:`1398`: Deform streamlines
* :ghpull:`1447`: DOC: Link Coding Style Guide ref in CONTRIBUTING to the website.
* :ghpull:`1423`: DOC: Fix `reconst_mapmri` example markup.
* :ghpull:`1493`: Fix surface rendering bugs in VTK8
* :ghpull:`1497`: BF: fix bug in example of fiber_to_bundle_coherence.py
* :ghpull:`1496`: BF: fix bug in example of streamline_tools.py
* :ghpull:`1495`: BF: fix bug in example of sfm_reconst.py
* :ghpull:`1494`: BF: fix bug in example of reconst_csd.py
* :ghpull:`1474`: DOC: Fix typo on website & examples
* :ghpull:`1471`: Code Cleaning
* :ghpull:`1457`: Fix for "Sliders in examples don't react properly to clicks"
* :ghpull:`1491`: Fix documentation typos
* :ghpull:`1468`: DOC: correct error when doing 'make html'
* :ghpull:`1484`: DOC: Use the correct punctuation marks for `et al.`.
* :ghpull:`1475`: Refactor demon registration - _iterate
* :ghpull:`1482`: DOC: Fix typo in `test_mapmri.py` file.
* :ghpull:`1460`: Fix for "DiskSlider does not rotate actor in opposite direction"
* :ghpull:`1452`: actor.slicer.copy() copies opacity set via actor.slicer.opacity()
* :ghpull:`1466`: DOC: Limit the DIPY logo height in README for better rendering.
* :ghpull:`1464`: DOC: Use the correct DIPY logo as the banner in `README`.
* :ghpull:`1465`: Fixed the Progit book link in doc
* :ghpull:`1451`: DOC: Add the DIPY banner to the README file.
* :ghpull:`1379`: New streamlines API integration on dipy examples
* :ghpull:`1445`: repr and get methods for AffineMap, w/ precise exceptions
* :ghpull:`1450`: [Fix] Manage multiple space delimiter
* :ghpull:`1425`: DOC: Add different GitHub badges to the `README.rst` file.
* :ghpull:`1446`: DOC: Fix bad hyperlink format to CONTRIBUTING.md from README.rst.
* :ghpull:`1437`: DOC: Fix missing reference to QuickBundles paper.
* :ghpull:`1440`: Raise Error when MDFmetric is used in QB or QBX
* :ghpull:`1428`: Mapmri workflow rebased
* :ghpull:`1385`: Enh textblock
* :ghpull:`1422`: [MRG] Improves delimiter in read_bvals_bvecs
* :ghpull:`1434`: QuickBundlesX
* :ghpull:`1430`: BF - replaced non-ascii character in workflows/reconst.py
* :ghpull:`1421`: DOC: Fix reStructuredText formatting issues in coding style guideline.
* :ghpull:`1416`: Updated links
* :ghpull:`1413`: BF::Fix inspect.getargspec deprecation warning in Python 3
* :ghpull:`1393`: Adds a DKI workflow.
* :ghpull:`1294`: Suppress a warning in geometry.
* :ghpull:`1419`: Suppress rcond warning
* :ghpull:`1358`: Det track workflow rebased (merge)
* :ghpull:`1384`: NF - Particle Filtering Tractography (merge)
* :ghpull:`1411`: Added eddy_rotated_bvecs extension
* :ghpull:`1407`: [MRG] Default colormap changed in examples
* :ghpull:`1408`: Updated color map in reconst_csa.py and reconst_forecast.py
* :ghpull:`1406`: [MRG] assert_true which checks for equality replaced with assert_equal
* :ghpull:`1347`: Replacing fvtk by the new viz API
* :ghpull:`1322`: Forecast
* :ghpull:`1326`: BUG: Fix factorial import module in test_mapmri.py.
* :ghpull:`1400`: BF: fixes #1399, removing an un-needed singleton dimension.
* :ghpull:`1391`: Re-entering conflict-free typos from deleted PR 1331
* :ghpull:`1386`: Possible fix for the inline compilation problem
* :ghpull:`1165`: Make vtk contour take an affine
* :ghpull:`1300`: RF: Remove patch for older numpy ravel_multi_index.
* :ghpull:`1381`: DOC - re-orientation of figures in the DKI example
* :ghpull:`1375`: Fix piesno type
* :ghpull:`1342`: Cythonize DirectionGetter and whatnot
* :ghpull:`1378`: Fix: numpy legacy print again...
* :ghpull:`1377`: FIX: update printing format for numpy 1.14
* :ghpull:`1374`: FIX: Viz test correction
* :ghpull:`1368`: DOC: Update developers' affiliations.
* :ghpull:`1370`: TST - add tracking tests for PeaksAndMetricsDirectionGetter
* :ghpull:`1369`: MRG: add procedure for building, uploading wheels

Issues (145):

* :ghissue:`1504`: Fix test_whole_brain_slr precision
* :ghissue:`1418`: Adding parallel_voxel_fit decorator
* :ghissue:`1503`: fix plotting issue on Mapmri example
* :ghissue:`1291`: Existing MAPMRI tutorial does not render correctly and MAPL looks hidden in the existing tutorial.
* :ghissue:`1424`: ENH: Use the CFIN dataset rather than the CENIR dataset.
* :ghissue:`1502`: [DOC] Fix doc generation
* :ghissue:`1498`: BF: fix bug in example of segment_quickbundles.py
* :ghissue:`1431`: NF - Bootstrap Direction Getter (cythonized)
* :ghissue:`1443`: RecoBundles - recognition of bundles
* :ghissue:`644`: Dipy visualization: it does not seem possible to position tensor ellipsoid slice in fvtk
* :ghissue:`1398`: Deform streamlines
* :ghissue:`1447`: DOC: Link Coding Style Guide ref in CONTRIBUTING to the website.
* :ghissue:`1423`: DOC: Fix `reconst_mapmri` example markup.
* :ghissue:`1493`: Fix surface rendering bugs in VTK8
* :ghissue:`1490`: Streamtube visualization problem with vtk 8.1
* :ghissue:`1469`: Errors in generating documents (.rst) of examples
* :ghissue:`1497`: BF: fix bug in example of fiber_to_bundle_coherence.py
* :ghissue:`1496`: BF: fix bug in example of streamline_tools.py
* :ghissue:`1495`: BF: fix bug in example of sfm_reconst.py
* :ghissue:`1494`: BF: fix bug in example of reconst_csd.py
* :ghissue:`1474`: DOC: Fix typo on website & examples
* :ghissue:`1485`: BF: Fix bug in example of segment_quickbundles.py
* :ghissue:`1483`: BF: Fix bug in example of fiber_to_bundle_coherence.py
* :ghissue:`1480`: BF: Fix bug in example of streamline_tool.py
* :ghissue:`1479`: BF: Fix bug in example of sfm_reconst.py
* :ghissue:`1477`: BF: Fix bug in example of reconst_csd.py
* :ghissue:`1448`: Enh ui components positioning
* :ghissue:`1471`: Code Cleaning
* :ghissue:`1481`: BF: Fix bug of no attribute 'GlobalImmediateModeRenderingOn' in actor.py
* :ghissue:`1454`: Sliders in examples don't react properly to clicks
* :ghissue:`1457`: Fix for "Sliders in examples don't react properly to clicks"
* :ghissue:`1491`: Fix documentation typos
* :ghissue:`1468`: DOC: correct error when doing 'make html'
* :ghissue:`1467`: Error on "from dipy.core.gradients import gradient_table"
* :ghissue:`1488`: Unexpected behavior in the DIPY workflow script
* :ghissue:`1484`: DOC: Use the correct punctuation marks for `et al.`.
* :ghissue:`1475`: Refactor demon registration - _iterate
* :ghissue:`1482`: DOC: Fix typo in `test_mapmri.py` file.
* :ghissue:`1478`: DOC: Add comment about package CVXPY in example of reconst_mapmri.py
* :ghissue:`1476`: BF: Fix bug in example of reconst_csd.py
* :ghissue:`1470`: simplify SDR iterate
* :ghissue:`1458`: DiskSlider does not rotate actor in opposite direction
* :ghissue:`1460`: Fix for "DiskSlider does not rotate actor in opposite direction"
* :ghissue:`1452`: actor.slicer.copy() copies opacity set via actor.slicer.opacity()
* :ghissue:`1438`: actor.slicer.copy() doesn't copy opacity if set via actor.slicer.opacity()
* :ghissue:`1473`: Uploading Windows wheels
* :ghissue:`1466`: DOC: Limit the DIPY logo height in README for better rendering.
* :ghissue:`1472`: Invalid dims failure in 32-bit Python on Windows
* :ghissue:`1464`: DOC: Use the correct DIPY logo as the banner in `README`.
* :ghissue:`1462`: Logo/banner on README not the correct one!
* :ghissue:`1461`: Broken link in Documentation: Git Resources
* :ghissue:`1465`: Fixed the Progit book link in doc
* :ghissue:`1463`: Fixed the Progit book link in the docs
* :ghissue:`1455`: Using pyautogui to adapt to users' monitor size in viz examples
* :ghissue:`1459`: Fix for "DiskSlider does not rotate actor in opposite direction"
* :ghissue:`1456`: Fix for "Sliders in examples don't react properly to clicks"
* :ghissue:`1453`: changed window.record() to a large value
* :ghissue:`1451`: DOC: Add the DIPY banner to the README file.
* :ghissue:`1379`: New streamlines API integration on dipy examples
* :ghissue:`1339`: Deprecate dipy.io.trackvis?
* :ghissue:`1445`: repr and get methods for AffineMap, w/ precise exceptions
* :ghissue:`1441`: Cleaning UI and improving positioning of Panel2D
* :ghissue:`1450`: [Fix] Manage multiple space delimiter
* :ghissue:`1449`: read_bvals_bvecs crash with bvec rotated eddy
* :ghissue:`1425`: DOC: Add different GitHub badges to the `README.rst` file.
* :ghissue:`1446`: DOC: Fix bad hyperlink format to CONTRIBUTING.md from README.rst.
* :ghissue:`1437`: DOC: Fix missing reference to QuickBundles paper.
* :ghissue:`1371`: Quickbundles tutorials miss reference
* :ghissue:`1362`: Make more use of TextBlock2D constructor
* :ghissue:`1440`: Raise Error when MDFmetric is used in QB or QBX
* :ghissue:`1395`: Mapmri workflow
* :ghissue:`1428`: Mapmri workflow rebased
* :ghissue:`1385`: Enh textblock
* :ghissue:`1436`: Fixed delimiter issue #1417
* :ghissue:`1422`: [MRG] Improves delimiter in read_bvals_bvecs
* :ghissue:`1417`: Improve delimiter on read_bvals_bvecs()
* :ghissue:`1435`: compilation failed with the new cython version (0.28)
* :ghissue:`1439`: BF: Avoid using memview in struct (Cython 0.28)
* :ghissue:`1434`: QuickBundlesX
* :ghissue:`1184`: Bootstrap direction getter
* :ghissue:`1380`: WIP: QuickBundlesX
* :ghissue:`1429`: BUG - SyntaxError (Non-ASCII character '\xe2' in file dipy/workflows/reconst.py on line 596
* :ghissue:`1430`: BF - replaced non-ascii character in workflows/reconst.py
* :ghissue:`1421`: DOC: Fix reStructuredText formatting issues in coding style guideline.
* :ghissue:`1390`: coding_style_guideline.rst does not render correctly
* :ghissue:`1427`: Add delimiter to read_bvals_bvecs()
* :ghissue:`1426`: Add delimiter parameter to numpy.loadtxt
* :ghissue:`1416`: Updated links
* :ghissue:`987`: Practical FAQs don't have hyperlinks to modules/libraries.
* :ghissue:`1327`: Fix inspect.getargspec deprecation warning in Python 3
* :ghissue:`1413`: BF::Fix inspect.getargspec deprecation warning in Python 3
* :ghissue:`1393`: Adds a DKI workflow.
* :ghissue:`1294`: Suppress a warning in geometry.
* :ghissue:`1181`: peaks warning while CSD reconstructing
* :ghissue:`1419`: Suppress rcond warning
* :ghissue:`1150`: Line-based version of streamline_mapping
* :ghissue:`1358`: Det track workflow rebased (merge)
* :ghissue:`1384`: NF - Particle Filtering Tractography (merge)
* :ghissue:`1409`: create documentation in multiple languages
* :ghissue:`1415`: NF: check compiler flags before compiling
* :ghissue:`1117`: .eddy_rotated_bvecs file throws error from io.gradients read_bvals_bvecs function
* :ghissue:`1411`: Added eddy_rotated_bvecs extension
* :ghissue:`1412`: BF:Fix inspect.getargspec deprecation warning in Python 3
* :ghissue:`791`: Possible divide by zero in reconst.sfm.py
* :ghissue:`1410`: BF: Added .eddy_rotated_bvecs extension support
* :ghissue:`1407`: [MRG] Default colormap changed in examples
* :ghissue:`1403`: Avoid promoting jet color map in examples
* :ghissue:`1408`: Updated color map in reconst_csa.py and reconst_forecast.py
* :ghissue:`1406`: [MRG] assert_true which checks for equality replaced with assert_equal
* :ghissue:`1387`: Assert equality, instead of asserting that `a == b` is true
* :ghissue:`1405`: Error using CSD model on data
* :ghissue:`1347`: Replacing fvtk by the new viz API
* :ghissue:`1402`: [Question] rint() or round()
* :ghissue:`1321`: mapfit_laplacian_aniso (high non-Gaussianity, NG values in CSF)
* :ghissue:`1161`: fvtk volume doesn't handle affine (crashes notebook)
* :ghissue:`1394`: Deprecation warning in newer versions of scipy, because `scipy.misc` is going away
* :ghissue:`1382`: is there any defined function that reads locally stored data or is all downloaded? I refer to nii or nifti files
* :ghissue:`1322`: Forecast
* :ghissue:`1326`: BUG: Fix factorial import module in test_mapmri.py.
* :ghissue:`1399`: New test errors on Python 3 Travis bots
* :ghissue:`1400`: BF: fixes #1399, removing an un-needed singleton dimension.
* :ghissue:`1350`: WIP: Add mapmri flow
* :ghissue:`1392`: Gitter chat box not visible on chrome?
* :ghissue:`1391`: Re-entering conflict-free typos from deleted PR 1331
* :ghissue:`1331`: Update gradients_spheres.py
* :ghissue:`1388`: Mapmri workflow
* :ghissue:`1386`: Possible fix for the inline compilation problem
* :ghissue:`1165`: Make vtk contour take an affine
* :ghissue:`1340`: NF - Particle Filtering Tractography
* :ghissue:`1383`: Mmriflow
* :ghissue:`1299`: test_rmi on 32 bit: invalid dims: array size defined by dims is larger than the maximum possible size.
* :ghissue:`1300`: RF: Remove patch for older numpy ravel_multi_index.
* :ghissue:`1381`: DOC - re-orientation of figures in the DKI example
* :ghissue:`1301`: Brains need re-orientation in plotting in DKI example
* :ghissue:`1375`: Fix piesno type
* :ghissue:`1342`: Cythonize DirectionGetter and whatnot
* :ghissue:`1378`: Fix: numpy legacy print again...
* :ghissue:`1376`: New test failures with pre-release numpy
* :ghissue:`1377`: FIX: update printing format for numpy 1.14
* :ghissue:`1343`: ActiveAx model fitting using MIX framework
* :ghissue:`1374`: FIX: Viz test correction
* :ghissue:`1282`: Tests fail on viz module
* :ghissue:`1368`: DOC: Update developers' affiliations.
* :ghissue:`1370`: TST - add tracking tests for PeaksAndMetricsDirectionGetter
* :ghissue:`1369`: MRG: add procedure for building, uploading wheels

.. _release0.15:

====================================
Release notes for DIPY version 0.15
====================================

GitHub stats for 2018/05/01 - 2018/12/12 (tag: 0.14.0)

These lists are automatically generated, and may be incomplete or contain duplicates.

The following 30 authors contributed 676 commits.

* Ariel Rokem
* Bramsh Qamar
* Chris Filo Gorgolewski
* David Reagan
* Demian Wassermann
* Eleftherios Garyfallidis
* Enes Albay
* Gabriel Girard
* Guillaume Theaud
* Javier Guaje
* Jean-Christophe Houde
* Jiri Borovec
* Jon Haitz Legarreta Gorroño
* Karandeep
* Kesshi Jordan
* Marc-Alexandre Côté
* Matt Cieslak
* Matthew Brett
* Parichit Sharma
* Ricci Woo
* Rutger Fick
* Serge Koudoro
* Shreyas Fadnavis
* Chandan Gangwar
* Daniel Enrico Cahall
* David Hunt
* Francois Rheault
* Jakob Wasserthal

We closed a total of 287 issues, 93 pull requests and 194 regular issues;
this is the full list (generated with the script :file:`tools/github_stats.py`):

Pull Requests (93):

* :ghpull:`1684`: [FIX] testing line-based target function
* :ghpull:`1686`: Standardize workflow
* :ghpull:`1685`: [Fix] Typo on examples
* :ghpull:`1663`: Stats, SNR_in_CC workflow
* :ghpull:`1681`: fixed issue with cst orientation in bundle_extraction example
* :ghpull:`1680`: [Fix] workflow variable string
* :ghpull:`1683`: test for new error in IVIM
* :ghpull:`1667`: Changing the default b0_threshold in gtab
* :ghpull:`1677`: [FIX] workflow help msg
* :ghpull:`1678`: Numpy matrix deprecation
* :ghpull:`1676`: [FIX] Example Update
* :ghpull:`1283`: get_data consistence
* :ghpull:`1670`: fixed RecoBundle workflow, SLR reference, and updated fetcher.py
* :ghpull:`1669`: Flow csd sh order
* :ghpull:`1659`: From dipy.viz to FURY
* :ghpull:`1621`: workflows : warn user for strange b0 threshold
* :ghpull:`1657`: DOC: Add spherical harmonics basis documentation.
* :ghpull:`1660`: OPT - moved the tolerance check outside of the for loop
* :ghpull:`1658`: STYLE: Honor 'descoteaux'and 'tournier' SH basis naming.
* :ghpull:`1281`: Representing qtau- signal attenuation using qtau-dMRI functional basis
* :ghpull:`1651`: Add save/load tck
* :ghpull:`1656`: Link to the dipy tag on neurostars
* :ghpull:`1624`: NF: Outlier scoring
* :ghpull:`1655`: [Fix] decrease tolerance on forecast
* :ghpull:`1650`: Increase codecov tolerance
* :ghpull:`1649`: Path Length Map example rebase
* :ghpull:`1556`: RecoBundles and SLR workflows
* :ghpull:`1645`: Fix workflows creation tutorial error
* :ghpull:`1647`: DOC: Fix duplicate link and AppVeyor badge.
* :ghpull:`1644`: Adds an Appveyor badge
* :ghpull:`1643`: Add hash for SCIL b0 file
* :ghpull:`787`: TST: Add an appveyor starter file.
* :ghpull:`1642`: Test that you can use the 724 symmetric sphere in PAM.
* :ghpull:`1641`: changed vertices to float64 in evenly_distributed_sphere_642.npz
* :ghpull:`1564`: Added scroll bar to ListBox2D
* :ghpull:`1636`: Fixed broken link.
* :ghpull:`1584`: Added Examples
* :ghpull:`1554`: Checking if the input file or directory exists when running a workflow
* :ghpull:`1528`: Show spheres with different radii, colors and opacities + add timers + add exit a + resolve issue with imread
* :ghpull:`1526`: Eigenvalue - eigenvector array compatibility check
* :ghpull:`1628`: Adding python 3.7 on travis
* :ghpull:`1623`: NF: Convert between 4D DEC FA and 3D 24 bit representation.
* :ghpull:`1622`: [Fix] viz slice example * :ghpull:`1626`: RF - removed duplicate tests * :ghpull:`1619`: [DOC] update VTK version * :ghpull:`1592`: Added File Menu element to viz.ui * :ghpull:`1559`: Checkbox and RadioButton elements for viz.ui * :ghpull:`1583`: Fix the relative SF threshold Issue * :ghpull:`1602`: Fix random seed in tracking * :ghpull:`1609`: [DOC] update dependencies file * :ghpull:`1560`: Removed affine matrices from tracking. * :ghpull:`1593`: Removed event.abort for release events * :ghpull:`1597`: Upgrade nibabel minimum version * :ghpull:`1601`: Fix: Decrease Nosetest warning * :ghpull:`1515`: RF: Use the new Streamlines API for orienting of streamlines. * :ghpull:`1590`: Revert 1570 file menu * :ghpull:`1589`: Fix calculation of highest order for a sh basis set * :ghpull:`1580`: Allow PRE=1 job to fail * :ghpull:`1533`: Show message if number of arguments mismatch between the doc string and the run method. * :ghpull:`1523`: Showing help when no input parameters are given and suppress warnings for cmds * :ghpull:`1543`: Update the default out_strategy to create the output in the current working directory * :ghpull:`1574`: Fixed Bug in PR #1547 * :ghpull:`1561`: add example SDR for binary and fuzzy images * :ghpull:`1578`: BF - bad condition in maximum dg * :ghpull:`1570`: Added File Menu element to viz.ui * :ghpull:`1563`: Replacing major_version in viz.ui * :ghpull:`1557`: Range slider element for viz.ui * :ghpull:`1547`: Changed the icon set in Button2D from Dictionary to List of Tuples * :ghpull:`1555`: Fix bug in actor.label * :ghpull:`1522`: Image element in dipy.viz.ui * :ghpull:`1355`: WIP: ENH: UI Listbox * :ghpull:`1540`: fix potential zero division in demon register. * :ghpull:`1548`: Fixed references per request of @garyfallidis. * :ghpull:`1542`: fix for using cvxpy solver * :ghpull:`1546`: References to reference * :ghpull:`1545`: Adding a reference in README.rst * :ghpull:`1492`: Enh ui components positioning (with code refactoring) * :ghpull:`1538`: Explanation that is mistakenly rendered as code fixed in example of DKI * :ghpull:`1536`: DOC: Update Rafael's current institution. * :ghpull:`1537`: removed unnecessary imported from sims example * :ghpull:`1530`: Wrong default value for parameter 'symmetric' connectivity_matrix function * :ghpull:`1529`: minor typo fix in quickstart * :ghpull:`1520`: Updating the documentation for the workflow creation tutorial. * :ghpull:`1524`: Values from streamlines object * :ghpull:`1521`: Moved some older highlights and announcements to the old news files. * :ghpull:`1518`: DOC: updated some developers affiliations. * :ghpull:`1517`: Dev info update * :ghpull:`1516`: [DOC] Installation instruction update * :ghpull:`1514`: Adding pep8speak config file * :ghpull:`1513`: fix typo in example of quick_start * :ghpull:`1510`: copyright updated to 2008-2018 * :ghpull:`1508`: Adds whitespace, to appease the sphinx. * :ghpull:`1506`: moving to 0.15.0 dev Issues (194): * :ghissue:`1684`: [FIX] testing line-based target function * :ghissue:`1679`: Intermittent issue in testing line-based target function * :ghissue:`1220`: RF: Replaces 1997 definitions of tensor geometric params with 1999 definitions. 
* :ghissue:`1686`: Standardize workflow * :ghissue:`746`: New fetcher returns filenames as dictionary keys in a tuple * :ghissue:`1685`: [Fix] Typo on examples * :ghissue:`1663`: Stats, SNR_in_CC workflow * :ghissue:`1637`: Advice for saving results from MAPMRI * :ghissue:`1673`: CST Image in bundle extraction is not oriented well * :ghissue:`1681`: fixed issue with cst orientation in bundle_extraction example * :ghissue:`1680`: [Fix] workflow variable string * :ghissue:`1338`: Variable string input does not work with self.get_io_iterator() in workflows * :ghissue:`1683`: test for new error in IVIM * :ghissue:`1682`: Add tests for IVIM for new Error * :ghissue:`634`: BinaryTissueClassifier segfaults on corner case * :ghissue:`742`: LinAlgError on tracking quickstart, with python 3.4 * :ghissue:`852`: Problem with spherical harmonics computations on some Anaconda python versions * :ghissue:`1667`: Changing the default b0_threshold in gtab * :ghissue:`1500`: Updating streamlines API in streamlinear.py * :ghissue:`944`: Slicer fix * :ghissue:`1111`: WIP: A lightweight UI for medical visualizations based on VTK-Python * :ghissue:`1099`: Needed PRs for merging recobundles into Dipy's master * :ghissue:`1544`: Plans for viz module * :ghissue:`641`: Tests raise a deprecation warning * :ghissue:`643`: Use appveyor for Windows CI? * :ghissue:`400`: Add travis-ci test without matplotlib installed * :ghissue:`1677`: [FIX] workflow help msg * :ghissue:`1674`: Workflows should print out help per default * :ghissue:`1678`: Numpy matrix deprecation * :ghissue:`1397`: Running dipy 'Intro to Basic Tracking' code and keep getting error. On Linux Centos * :ghissue:`1676`: [FIX] Example Update * :ghissue:`10`: data.get_data() should be consistent across datasets * :ghissue:`1283`: get_data consistence * :ghissue:`1670`: fixed RecoBundle workflow, SLR reference, and updated fetcher.py * :ghissue:`1669`: Flow csd sh order * :ghissue:`1668`: One issue on handling HCP data -- HCP b vectors raise NaN in the gradient table * :ghissue:`1662`: Remove the points added outside of a mask. Fix the related tests. * :ghissue:`1659`: From dipy.viz to FURY * :ghissue:`1621`: workflows : warn user for strange b0 threshold * :ghissue:`1657`: DOC: Add spherical harmonics basis documentation. * :ghissue:`1296`: Need of a travis bot that runs ana/mini/conda and vtk=7.1.0+ * :ghissue:`1660`: OPT - moved the tolerance check outside of the for loop * :ghissue:`1658`: STYLE: Honor 'descoteaux'and 'tournier' SH basis naming. * :ghissue:`1281`: Representing qtau- signal attenuation using qtau-dMRI functional basis * :ghissue:`1653`: STYLE: Honor 'descoteaux' SH basis naming. * :ghissue:`1651`: Add save/load tck * :ghissue:`1656`: Link to the dipy tag on neurostars * :ghissue:`1624`: NF: Outlier scoring * :ghissue:`1655`: [Fix] decrease tolerance on forecast * :ghissue:`1654`: Test failure in FORECAST * :ghissue:`1414`: [WIP] Switching tests to pytest and removing nose dependencies * :ghissue:`1650`: Increase codecov tolerance * :ghissue:`1093`: WIP: Add functionality to clip streamlines between ROIs in `orient_by_rois` * :ghissue:`1611`: Preloader element for viz.ui * :ghissue:`1615`: Color Picker element for viz.ui * :ghissue:`1631`: Path Length Map example * :ghissue:`1649`: Path Length Map example rebase * :ghissue:`1556`: RecoBundles and SLR workflows * :ghissue:`1645`: Fix workflows creation tutorial error * :ghissue:`1647`: DOC: Fix duplicate link and AppVeyor badge. 
* :ghissue:`1644`: Adds an Appveyor badge * :ghissue:`1638`: Fetcher downloads data every time it is called * :ghissue:`1643`: Add hash for SCIL b0 file * :ghissue:`1600`: NODDIx 2 fibers crossing * :ghissue:`1618`: viz.ui.FileMenu2D * :ghissue:`1569`: viz.ui.ListBoxItem2D text overflow * :ghissue:`1532`: dipy test failed on mac osx sierra with ananoda python. * :ghissue:`1420`: window.record() resolution limit * :ghissue:`1396`: Visualization problem with tensors ? * :ghissue:`1295`: Reorienting peak_slicer and ODF_slicer * :ghissue:`1232`: With VTK 6.3, streamlines color map bar text disappears when using streamtubes * :ghissue:`928`: dipy.viz.colormap crash on single fibers * :ghissue:`923`: change size of colorbar in viz module * :ghissue:`854`: VTK and Python 3 support in fvtk * :ghissue:`759`: How to resolve python-vtk6 link issues in Ubuntu * :ghissue:`647`: fvtk contour function ignores voxsz parameter * :ghissue:`646`: Dipy visualization with missing (?) affine parameter * :ghissue:`645`: Dipy visualization (fvtk) crash when saving series of images * :ghissue:`353`: fvtk.label won't show up if called twice * :ghissue:`787`: TST: Add an appveyor starter file. * :ghissue:`1642`: Test that you can use the 724 symmetric sphere in PAM. * :ghissue:`1641`: changed vertices to float64 in evenly_distributed_sphere_642.npz * :ghissue:`1203`: Some bots might need a newer version of nibabel * :ghissue:`1156`: Deterministic tracking workflow * :ghissue:`642`: WIP - NF parallel framework * :ghissue:`1135`: WIP : Multiprocessing - implemented a parallel_voxel_fit decorator * :ghissue:`387`: References do not render correctly in SHORE example * :ghissue:`442`: Allow length and set_number_of_points to work with generators * :ghissue:`558`: Allow setting of the zoom on fvtk ren objects * :ghissue:`1236`: bundle visualisation using nibabel API: wrong colormap * :ghissue:`1389`: VTK 8: minimal version? * :ghissue:`1519`: Scipy stopped supporting scipy.misc.imread * :ghissue:`1596`: Reproducibility in PFT tracking * :ghissue:`1614`: for GSoC NODDIx_PR * :ghissue:`1576`: [WIP] Needs Optimization and Cleaning * :ghissue:`1564`: Added scroll bar to ListBox2D * :ghissue:`1636`: Fixed broken link. * :ghissue:`1584`: Added Examples * :ghissue:`1568`: Multi_io axis out of bounds error * :ghissue:`1554`: Checking if the input file or directory exists when running a workflow * :ghissue:`1528`: Show spheres with different radii, colors and opacities + add timers + add exit a + resolve issue with imread * :ghissue:`1108`: Local PCA Slow Version * :ghissue:`1526`: Eigenvalue - eigenvector array compatibility check * :ghissue:`1628`: Adding python 3.7 on travis * :ghissue:`1623`: NF: Convert between 4D DEC FA and 3D 24 bit representation. * :ghissue:`1622`: [Fix] viz slice example * :ghissue:`1629`: [WIP][fix] remove Userwarning message * :ghissue:`1591`: PRE is failing : module 'cvxpy' has no attribute 'utilities' * :ghissue:`1626`: RF - removed duplicate tests * :ghissue:`1582`: SF threshold in PMF is not relative * :ghissue:`1575`: Website: warning about python versions * :ghissue:`1619`: [DOC] update VTK version * :ghissue:`1592`: Added File Menu element to viz.ui * :ghissue:`1559`: Checkbox and RadioButton elements for viz.ui * :ghissue:`1583`: Fix the relative SF threshold Issue * :ghissue:`1602`: Fix random seed in tracking * :ghissue:`1620`: 3.7 wheels * :ghissue:`1598`: Apply Transform workflow for transforming a collection of moving images. 
* :ghissue:`1595`: Workflow for visualizing the quality of the registered data with DIPY * :ghissue:`1581`: Image registration Workflow with quality metrics * :ghissue:`1588`: Dipy.reconst.shm.calculate_max_order only works on specific cases. * :ghissue:`1608`: Parallelized affine registration * :ghissue:`1610`: Tortoise - sub * :ghissue:`1607`: Reminder to add in the docs that users will need to update nibabel to 2.3.0 during the next release * :ghissue:`1609`: [DOC] update dependencies file * :ghissue:`1560`: Removed affine matrices from tracking. * :ghissue:`1593`: Removed event.abort for release events * :ghissue:`1586`: Slider breaks interaction in viz_advanced example * :ghissue:`1597`: Upgrade nibabel minimum version * :ghissue:`1601`: Fix: Decrease Nosetest warning * :ghissue:`1515`: RF: Use the new Streamlines API for orienting of streamlines. * :ghissue:`1585`: Add a random seed for reproducibility * :ghissue:`1594`: Integrating the support for the visualization in Affine registration * :ghissue:`1590`: Revert 1570 file menu * :ghissue:`1589`: Fix calculation of highest order for a sh basis set * :ghissue:`1577`: Revert "Added File Menu element to viz.ui" * :ghissue:`1571`: WIP: multi-threaded on affine registration * :ghissue:`1580`: Allow PRE=1 job to fail * :ghissue:`1533`: Show message if number of arguments mismatch between the doc string and the run method. * :ghissue:`1523`: Showing help when no input parameters are given and suppress warnings for cmds * :ghissue:`1579`: Error on PRE=1 (cython / numpy) * :ghissue:`1543`: Update the default out_strategy to create the output in the current working directory * :ghissue:`1433`: New version of h5py messing with us? * :ghissue:`1541`: demon registration, unstable? * :ghissue:`1574`: Fixed Bug in PR #1547 * :ghissue:`1573`: Failure in test_ui_listbox_2d * :ghissue:`1561`: add example SDR for binary and fuzzy images * :ghissue:`1578`: BF - bad condition in maximum dg * :ghissue:`1566`: Bad condition in local tracking * :ghissue:`1570`: Added File Menu element to viz.ui * :ghissue:`1572`: [WIP] * :ghissue:`1567`: WIP: NF: multi-threaded on affine registration * :ghissue:`1563`: Replacing major_version in viz.ui * :ghissue:`1557`: Range slider element for viz.ui * :ghissue:`1547`: Changed the icon set in Button2D from Dictionary to List of Tuples * :ghissue:`1555`: Fix bug in actor.label * :ghissue:`1551`: Actor.label not working anymore * :ghissue:`1522`: Image element in dipy.viz.ui * :ghissue:`1549`: CVXPY installation on >3.5 * :ghissue:`1355`: WIP: ENH: UI Listbox * :ghissue:`1562`: Should we retire our Python 3.5 travis builds? * :ghissue:`1550`: Memory error when running rigid transform * :ghissue:`1540`: fix potential zero division in demon register. * :ghissue:`1548`: Fixed references per request of @garyfallidis. * :ghissue:`1527`: New version of CVXPY changes API * :ghissue:`1542`: fix for using cvxpy solver * :ghissue:`1534`: Changed the icon set in Button2D from Dictionary to List of Tuples * :ghissue:`1546`: References to reference * :ghissue:`1545`: Adding a reference in README.rst * :ghissue:`1492`: Enh ui components positioning (with code refactoring) * :ghissue:`1538`: Explanation that is mistakenly rendered as code fixed in example of DKI * :ghissue:`1536`: DOC: Update Rafael's current institution. * :ghissue:`1487`: Commit for updated check_scratch.py script. 
* :ghissue:`1486`: Parichit dipy flows
* :ghissue:`1539`: Changing the default behavior of the workflows to create the output file(s) in the current working directory.
* :ghissue:`1537`: removed unnecessary imported from sims example
* :ghissue:`1535`: removed some unnecessary imports from sims example
* :ghissue:`1530`: Wrong default value for parameter 'symmetric' connectivity_matrix function
* :ghissue:`1529`: minor typo fix in quickstart
* :ghissue:`1520`: Updating the documentation for the workflow creation tutorial.
* :ghissue:`1524`: Values from streamlines object
* :ghissue:`1521`: Moved some older highlights and announcements to the old news files.
* :ghissue:`1518`: DOC: updated some developers affiliations.
* :ghissue:`1517`: Dev info update
* :ghissue:`1516`: [DOC] Installation instruction update
* :ghissue:`1514`: Adding pep8speak config file
* :ghissue:`1507`: Mathematical expressions are not rendered correctly in reference page
* :ghissue:`1513`: fix typo in example of quick_start
* :ghissue:`1510`: copyright updated to 2008-2018
* :ghissue:`1508`: Adds whitespace, to appease the sphinx.
* :ghissue:`1512`: Fix typo in example of quick_start
* :ghissue:`1511`: Fix typo in exaample quick_start
* :ghissue:`1509`: DOC: fix math rendering for some dki functions
* :ghissue:`1506`: moving to 0.15.0 dev

dipy-1.11.0/doc/release_notes/release0.16.rst

.. _release0.16:

====================================
Release notes for DIPY version 0.16
====================================

GitHub stats for 2018/12/12 - 2019/03/10 (tag: 0.15.0)

These lists are automatically generated, and may be incomplete or contain duplicates.

The following 14 authors contributed 361 commits.

* Ariel Rokem
* Bramsh Qamar
* Clément Zotti
* Eleftherios Garyfallidis
* Francois Rheault
* Gabriel Girard
* Jean-Christophe Houde
* Jon Haitz Legarreta Gorroño
* Katrin Leinweber
* Kesshi Jordan
* Parichit Sharma
* Serge Koudoro
* Shreyas Fadnavis
* Yijun Liu

We closed a total of 103 issues, 41 pull requests and 62 regular issues;
this is the full list (generated with the script :file:`tools/github_stats.py`):

Pull Requests (41):

* :ghpull:`1755`: Bundle Analysis and Linear mixed Models Workflows
* :ghpull:`1748`: [REBASED ] Symmetric Diffeomorphic registration workflow
* :ghpull:`1714`: DOC: Add Cython style guideline for DIPY.
* :ghpull:`1726`: IVIM MIX Model (MicroLearn)
* :ghpull:`1753`: BF: Fixes #1751, by adding a `scale` key-word arg to `decfa`
* :ghpull:`1743`: Horizon
* :ghpull:`1749`: [Fix] Tutorial syntax
* :ghpull:`1739`: IVIM Workflow
* :ghpull:`1695`: A few tractometry functions
* :ghpull:`1741`: [Fix] Pre-build : From relative to absolute import
* :ghpull:`1742`: [REBASED] Apply Transform Workflow
* :ghpull:`1745`: Adjust number of threads for SLR in Recobundles
* :ghpull:`1746`: [DOC] Add NIH to sponsor list
* :ghpull:`1735`: [Rebase] Affine Registration Workflow with Supporting Quality Metrics
* :ghpull:`1738`: [FIX] Python 3 compatibility for some tools script
* :ghpull:`1740`: [FIX] mapmri with cvxpy 1.0.15
* :ghpull:`1730`: [Fix] Cython syntax error
* :ghpull:`1666`: Remove the points added outside of a mask. Fix the related tests.
* :ghpull:`1737`: Doc Fix
* :ghpull:`1733`: Minor Doc Fix
* :ghpull:`1732`: MAIIncrement Python version for testing to 3.6
* :ghpull:`1716`: DOC: Add `Imports` section on package shorthands recommendations.
* :ghpull:`1640`: Workflows - Adding PFT, probabilistic, closestpeaks tracking * :ghpull:`1652`: Switching tests to pytest * :ghpull:`1720`: [WIP] DIPY Workshop link on current website * :ghpull:`1719`: BF: Syntax fix example images not rendering * :ghpull:`1715`: DOC: Avoid the bullet points being interpreted as quoted blocks. * :ghpull:`1706`: BUG: Fix Cython one-liner non-trivial type declaration warnings. * :ghpull:`1705`: BUG: Fix Numpy `.random.random_integer` deprecation warning. * :ghpull:`1704`: DOC: Fix typo in `linear_fascicle_evaluation.py` script. * :ghpull:`1701`: BUG: Fix Sphinx math notation documentation warnings. * :ghpull:`1707`: BUG: Address `numpy.matrix` `PendingDeprecation` warnings. * :ghpull:`1703`: BUG: Fix blank image being recorded at `linear_fascicle_evaluation.py`. * :ghpull:`1700`: DOC: Use triple double-quoted strings for docstrings. * :ghpull:`1708`: BF: Clip the values before passing to arccos, instead of fixing nans. * :ghpull:`1710`: DOC: fix typo in instruction below sample snippet * :ghpull:`1702`: BUG: Fix `dipy.io.trackvis` deprecation warnings. * :ghpull:`1697`: Hyperlink DOIs to preferred resolver * :ghpull:`1696`: Transfer release notes to a specific folder * :ghpull:`1690`: Typo in release notes * :ghpull:`1693`: Changed the GSoC conduction years Issues (62): * :ghissue:`1757`: NI Learn info from sensors designed to acseptible eeg format? * :ghissue:`1755`: Bundle Analysis and Linear mixed Models Workflows * :ghissue:`1748`: [REBASED ] Symmetric Diffeomorphic registration workflow * :ghissue:`1714`: DOC: Add Cython style guideline for DIPY. * :ghissue:`1726`: IVIM MIX Model (MicroLearn) * :ghissue:`1751`: Scale values in `decfa` * :ghissue:`1753`: BF: Fixes #1751, by adding a `scale` key-word arg to `decfa` * :ghissue:`1754`: DeprecationWarning: This function is deprecated. Please call randint(0, 10000 + 1) instead C:\Users\ranji\Anaconda3\lib\site-packages\ipykernel_launcher.py:14: DeprecationWarning: `imsave` is deprecated! `imsave` is deprecated in SciPy 1.0.0, and will be removed in 1.2.0. Use ``imageio.imwrite`` instead. * :ghissue:`1743`: Horizon * :ghissue:`1749`: [Fix] Tutorial syntax * :ghissue:`1616`: Implementing the Diffeomorphic registration * :ghissue:`1739`: IVIM Workflow * :ghissue:`1695`: A few tractometry functions * :ghissue:`1741`: [Fix] Pre-build : From relative to absolute import * :ghissue:`1742`: [REBASED] Apply Transform Workflow * :ghissue:`1745`: Adjust number of threads for SLR in Recobundles * :ghissue:`1746`: [DOC] Add NIH to sponsor list * :ghissue:`1605`: Cleaned PR for Apply Transform Workflow to quickly register a set of moving NIFTI images using a given affine matrix. * :ghissue:`1735`: [Rebase] Affine Registration Workflow with Supporting Quality Metrics * :ghissue:`1738`: [FIX] Python 3 compatibility for some tools script * :ghissue:`1740`: [FIX] mapmri with cvxpy 1.0.15 * :ghissue:`1730`: [Fix] Cython syntax error * :ghissue:`1661`: Local Tracking - BinaryTissueClassifier - dropping the last point * :ghissue:`1666`: Remove the points added outside of a mask. Fix the related tests. * :ghissue:`1737`: Doc Fix * :ghissue:`1604`: Cleaned PR for Affine Registration Workflow with Supporting Quality Metrics * :ghissue:`1734`: Documentation Error * :ghissue:`1733`: Minor Doc Fix * :ghissue:`1565`: New warnings with `PRE=1` * :ghissue:`1732`: MAIIncrement Python version for testing to 3.6 * :ghissue:`1716`: DOC: Add `Imports` section on package shorthands recommendations. 
* :ghissue:`1640`: Workflows - Adding PFT, probabilistic, closestpeaks tracking
* :ghissue:`1729`: N3/N4 Bias correction?
* :ghissue:`1652`: Switching tests to pytest
* :ghissue:`1280`: Switch testing to pytest?
* :ghissue:`1727`: Upgrade Scipy for MIX framework?
* :ghissue:`1723`: Failure on
* :ghissue:`1720`: [WIP] DIPY Workshop link on current website
* :ghissue:`1718`: cannot import name window
* :ghissue:`1719`: BF: Syntax fix example images not rendering
* :ghissue:`1717`: cannot import packages of dipy.viz
* :ghissue:`1715`: DOC: Avoid the bullet points being interpreted as quoted blocks.
* :ghissue:`1706`: BUG: Fix Cython one-liner non-trivial type declaration warnings.
* :ghissue:`1705`: BUG: Fix Numpy `.random.random_integer` deprecation warning.
* :ghissue:`1664`: Deprecation warnings on testing
* :ghissue:`1704`: DOC: Fix typo in `linear_fascicle_evaluation.py` script.
* :ghissue:`1701`: BUG: Fix Sphinx math notation documentation warnings.
* :ghissue:`1707`: BUG: Address `numpy.matrix` `PendingDeprecation` warnings.
* :ghissue:`1633`: Blank png saved/displayed on documentation
* :ghissue:`1703`: BUG: Fix blank image being recorded at `linear_fascicle_evaluation.py`.
* :ghissue:`1700`: DOC: Use triple double-quoted strings for docstrings.
* :ghissue:`1708`: BF: Clip the values before passing to arccos, instead of fixing nans.
* :ghissue:`1698`: test failure in travis
* :ghissue:`1710`: DOC: fix typo in instruction below sample snippet
* :ghissue:`1702`: BUG: Fix `dipy.io.trackvis` deprecation warnings.
* :ghissue:`1697`: Hyperlink DOIs to preferred resolver
* :ghissue:`1696`: Transfer release notes to a specific folder
* :ghissue:`1691`: Doc folder for release notes
* :ghissue:`1690`: Typo in release notes
* :ghissue:`1693`: Changed the GSoC conduction years
* :ghissue:`1692`: Minor doc fix
* :ghissue:`1632`: Link broken on website

dipy-1.11.0/doc/release_notes/release0.6.rst

.. _release0.6:

===================================
Release notes for DIPY version 0.6
===================================

GitHub stats for 2011/02/12 - 2013/03/20

The following 13 authors contributed 972 commits.
* Ariel Rokem * Bago Amirbekian * Eleftherios Garyfallidis * Emanuele Olivetti * Ian Nimmo-Smith * Maria Luisa Mandelli * Matthew Brett * Maxime Descoteaux * Michael Paquette * Samuel St-Jean * Stefan van der Walt * Yaroslav Halchenko * endolith We closed a total of 225 issues, 100 pull requests and 125 regular issues; this is the full list (generated with the script :file:`tools/github_stats.py`): Pull Requests (100): * :ghpull:`146`: BF - allow Bootstrap Wrapper to work with markov tracking * :ghpull:`143`: Garyfallidis tutorials 0.6 * :ghpull:`145`: Mdesco dti metrics * :ghpull:`141`: Peak extraction isbi * :ghpull:`142`: RF - always use theta and phi in that order, (not "phi, theta") * :ghpull:`140`: Sf2sh second try at correcting suggestions * :ghpull:`139`: Spherical function to spherical harmonics and back * :ghpull:`138`: Coding style fix for dsi_deconv * :ghpull:`137`: BF - check shapes before allclose * :ghpull:`136`: BF: add top-level benchmarking command * :ghpull:`135`: Refactor local maxima * :ghpull:`134`: BF - fix shm tests to accept antipodal directions as the same * :ghpull:`133`: Corrected test for Deconvolution after the discrete direction finder was removed * :ghpull:`124`: Remove direction finder * :ghpull:`77`: Rework tracking * :ghpull:`132`: A new fvtk function for visualizing fields of odfs * :ghpull:`131`: Add missing files * :ghpull:`130`: Implementation of DSI deconvolution from E.J. Canales-Rodriguez * :ghpull:`128`: Colorfa * :ghpull:`129`: RF - minor cleanup of pdf_odf code * :ghpull:`127`: Adding multi-tensor simulation * :ghpull:`126`: Improve local maxima * :ghpull:`122`: Removed calculation of gfa and other functions from inside the odf(sphere) of DSI and GQI * :ghpull:`103`: Major update of the website, with a few examples and with some additional minor RFs * :ghpull:`121`: NF: Allow the smoothing parameter to come through to rbf interpolation. * :ghpull:`120`: Fast squash fix * :ghpull:`116`: RF: common dtype for squash without result_type * :ghpull:`117`: Fix directions on TensorFit and add getitem * :ghpull:`119`: RF: raise errors for Python version dependencies * :ghpull:`118`: Separate fa * :ghpull:`111`: RF - clean up _squash in multi_voxel and related code * :ghpull:`112`: RF: fix vec_val_vect logic, generalize for shape * :ghpull:`114`: BF: fix face and edge byte order for sphere load * :ghpull:`109`: Faster einsum * :ghpull:`110`: TST: This is only almost equal on XP, for some reason. * :ghpull:`108`: TST + STY: Use and assert_equal so that we get more information upon failure * :ghpull:`107`: RF: A.dot(B) => np.dot(A, B) for numpy < 1.5 * :ghpull:`102`: BF - Allow ndindex to work with older numpy than 1.6. * :ghpull:`106`: RF: allow optional scipy.spatial.Delaunay * :ghpull:`105`: Skip doctest decorator * :ghpull:`104`: RF: remove deprecated old parametric testing * :ghpull:`101`: WIP: Fix isnan windows * :ghpull:`100`: Small stuff * :ghpull:`94`: Multivoxel dsi and gqi are back! * :ghpull:`96`: ENH: Implement masking for the new TensorModel implementation. * :ghpull:`95`: NF fetch publicly available datasets * :ghpull:`26`: Noise * :ghpull:`84`: Non linear peak finding * :ghpull:`82`: DTI new api * :ghpull:`91`: Shm new api * :ghpull:`88`: NF - wrapper function for multi voxel models * :ghpull:`86`: DOC: Fixed some typos, etc in the FAQ * :ghpull:`90`: A simpler ndindex using generators. * :ghpull:`87`: RF - Provide shape as argument to ndindex. * :ghpull:`85`: Add fast ndindex. 
* :ghpull:`81`: RF - fixup peaks_from_model to take use remove_similar_vertices and * :ghpull:`79`: BF: Fixed projection plots. * :ghpull:`80`: RF - remove some old functions tools * :ghpull:`71`: ENH: Make the internals of the io module visible on tab completion in ip... * :ghpull:`76`: Yay, more gradient stuff * :ghpull:`75`: Rename L2norm to vector_norm * :ghpull:`74`: Gradient rf * :ghpull:`73`: RF/BF - removed duplicate vector_norm/L2norm * :ghpull:`72`: Mr bago model api * :ghpull:`68`: DSI seems working again - Have a look * :ghpull:`65`: RF: Make the docstring and call consistent with scipy.interpolate.Rbf. * :ghpull:`61`: RF - Refactor direction finding. * :ghpull:`60`: NF - Add key-value cache for use in models. * :ghpull:`63`: TST - Disable reconstruction methods that break the test suite. * :ghpull:`62`: BF - Fix missing import in peak finding tests. * :ghpull:`37`: cleanup references in the code to E1381S6_edcor* (these were removed from... * :ghpull:`55`: Ravel multi index * :ghpull:`58`: TST - skip doctest when matplotlib is not available * :ghpull:`59`: optional_traits is not needed anymore * :ghpull:`56`: TST: Following change to API in dipy.segment.quickbundles. * :ghpull:`52`: Matplotlib optional * :ghpull:`50`: NF - added subdivide method to sphere * :ghpull:`51`: Fix tracking utils * :ghpull:`48`: BF - Brought back _filter peaks and associated test. * :ghpull:`47`: RF - Removed reduce_antipodal from sphere. * :ghpull:`41`: NF - Add radial basis function interpolation on the sphere. * :ghpull:`39`: GradientTable * :ghpull:`40`: BF - Fix axis specification in sph_project. * :ghpull:`28`: Odf+shm api update * :ghpull:`36`: Nf hemisphere preview * :ghpull:`34`: RF - replace _filter_peaks with unique_vertices * :ghpull:`35`: BF - Fix imports from dipy.core.sphere. * :ghpull:`21`: Viz 2d * :ghpull:`32`: NF - Sphere class. * :ghpull:`30`: RF: Don't import all this every time. * :ghpull:`24`: TST: Fixing tests in reconst module. * :ghpull:`27`: DOC - Add reference to white matter diffusion values. * :ghpull:`25`: NF - Add prolate white matter as defaults for multi-tensor signal sim. * :ghpull:`22`: Updating my fork with the nipy master * :ghpull:`20`: RF - create OptionalImportError for traits imports * :ghpull:`19`: DOC: add comments and example to commit codes * :ghpull:`18`: DOC: update gitwash from source * :ghpull:`17`: Optional traits * :ghpull:`14`: DOC - fix frontpage example * :ghpull:`12`: BF(?): cart2sphere and sphere2cart are now invertible. * :ghpull:`11`: BF explicit type declaration and initialization for longest_track_len[AB] -- for cython 0.15 compatibility Issues (125): * :ghissue:`99`: RF - Separate direction finder from model fit. 
* :ghissue:`143`: Garyfallidis tutorials 0.6 * :ghissue:`144`: DTI metrics * :ghissue:`145`: Mdesco dti metrics * :ghissue:`123`: Web content and examples for 0.6 * :ghissue:`141`: Peak extraction isbi * :ghissue:`142`: RF - always use theta and phi in that order, (not "phi, theta") * :ghissue:`140`: Sf2sh second try at correcting suggestions * :ghissue:`139`: Spherical function to spherical harmonics and back * :ghissue:`23`: qball not properly import-able * :ghissue:`29`: Don't import everything when you import dipy * :ghissue:`138`: Coding style fix for dsi_deconv * :ghissue:`137`: BF - check shapes before allclose * :ghissue:`136`: BF: add top-level benchmarking command * :ghissue:`135`: Refactor local maxima * :ghissue:`134`: BF - fix shm tests to accept antipodal directions as the same * :ghissue:`133`: Corrected test for Deconvolution after the discrete direction finder was removed * :ghissue:`124`: Remove direction finder * :ghissue:`77`: Rework tracking * :ghissue:`132`: A new fvtk function for visualizing fields of odfs * :ghissue:`125`: BF: Remove 'mayavi' directory, to avoid triggering mayavi import warning... * :ghissue:`131`: Add missing files * :ghissue:`130`: Implementation of DSI deconvolution from E.J. Canales-Rodriguez * :ghissue:`128`: Colorfa * :ghissue:`129`: RF - minor cleanup of pdf_odf code * :ghissue:`127`: Adding multi-tensor simulation * :ghissue:`126`: Improve local maxima * :ghissue:`97`: BF - separate out storing of fit values in gqi * :ghissue:`122`: Removed calculation of gfa and other functions from inside the odf(sphere) of DSI and GQI * :ghissue:`103`: Major update of the website, with a few examples and with some additional minor RFs * :ghissue:`121`: NF: Allow the smoothing parameter to come through to rbf interpolation. * :ghissue:`120`: Fast squash fix * :ghissue:`116`: RF: common dtype for squash without result_type * :ghissue:`117`: Fix directions on TensorFit and add getitem * :ghissue:`119`: RF: raise errors for Python version dependencies * :ghissue:`118`: Separate fa * :ghissue:`113`: RF - use min_diffusivity relative to 1 / max(bval) * :ghissue:`111`: RF - clean up _squash in multi_voxel and related code * :ghissue:`112`: RF: fix vec_val_vect logic, generalize for shape * :ghissue:`114`: BF: fix face and edge byte order for sphere load * :ghissue:`109`: Faster einsum * :ghissue:`110`: TST: This is only almost equal on XP, for some reason. * :ghissue:`98`: This is an update of PR #94 mostly typos and coding style * :ghissue:`108`: TST + STY: Use and assert_equal so that we get more information upon failure * :ghissue:`107`: RF: A.dot(B) => np.dot(A, B) for numpy < 1.5 * :ghissue:`102`: BF - Allow ndindex to work with older numpy than 1.6. * :ghissue:`106`: RF: allow optional scipy.spatial.Delaunay * :ghissue:`105`: Skip doctest decorator * :ghissue:`104`: RF: remove deprecated old parametric testing * :ghissue:`101`: WIP: Fix isnan windows * :ghissue:`100`: Small stuff * :ghissue:`94`: Multivoxel dsi and gqi are back! * :ghissue:`96`: ENH: Implement masking for the new TensorModel implementation. * :ghissue:`95`: NF fetch publicly available datasets * :ghissue:`26`: Noise * :ghissue:`84`: Non linear peak finding * :ghissue:`82`: DTI new api * :ghissue:`91`: Shm new api * :ghissue:`88`: NF - wrapper function for multi voxel models * :ghissue:`86`: DOC: Fixed some typos, etc in the FAQ * :ghissue:`89`: Consistent ndindex behaviour * :ghissue:`90`: A simpler ndindex using generators. * :ghissue:`87`: RF - Provide shape as argument to ndindex. 
* :ghissue:`85`: Add fast ndindex. * :ghissue:`81`: RF - fixup peaks_from_model to take use remove_similar_vertices and * :ghissue:`83`: Non linear peak finding * :ghissue:`78`: This PR replaces PR 70 * :ghissue:`79`: BF: Fixed projection plots. * :ghissue:`80`: RF - remove some old functions tools * :ghissue:`70`: New api dti * :ghissue:`71`: ENH: Make the internals of the io module visible on tab completion in ip... * :ghissue:`76`: Yay, more gradient stuff * :ghissue:`69`: New api and tracking refacotor * :ghissue:`75`: Rename L2norm to vector_norm * :ghissue:`74`: Gradient rf * :ghissue:`73`: RF/BF - removed duplicate vector_norm/L2norm * :ghissue:`72`: Mr bago model api * :ghissue:`66`: DOCS - docs for model api * :ghissue:`49`: Reworking tracking code. * :ghissue:`68`: DSI seems working again - Have a look * :ghissue:`65`: RF: Make the docstring and call consistent with scipy.interpolate.Rbf. * :ghissue:`61`: RF - Refactor direction finding. * :ghissue:`60`: NF - Add key-value cache for use in models. * :ghissue:`63`: TST - Disable reconstruction methods that break the test suite. * :ghissue:`62`: BF - Fix missing import in peak finding tests. * :ghissue:`37`: cleanup references in the code to E1381S6_edcor* (these were removed from... * :ghissue:`55`: Ravel multi index * :ghissue:`46`: BF: Trying to fix test failures. * :ghissue:`57`: TST: Reverted back to optional definition of the function to make TB hap... * :ghissue:`58`: TST - skip doctest when matplotlib is not available * :ghissue:`59`: optional_traits is not needed anymore * :ghissue:`56`: TST: Following change to API in dipy.segment.quickbundles. * :ghissue:`52`: Matplotlib optional * :ghissue:`50`: NF - added subdivide method to sphere * :ghissue:`51`: Fix tracking utils * :ghissue:`48`: BF - Brought back _filter peaks and associated test. * :ghissue:`47`: RF - Removed reduce_antipodal from sphere. * :ghissue:`41`: NF - Add radial basis function interpolation on the sphere. * :ghissue:`33`: Gradients Table class * :ghissue:`39`: GradientTable * :ghissue:`45`: BF - Fix sphere creation in triangle_subdivide. * :ghissue:`38`: Subdivide octahedron * :ghissue:`40`: BF - Fix axis specification in sph_project. * :ghissue:`28`: Odf+shm api update * :ghissue:`36`: Nf hemisphere preview * :ghissue:`34`: RF - replace _filter_peaks with unique_vertices * :ghissue:`35`: BF - Fix imports from dipy.core.sphere. * :ghissue:`21`: Viz 2d * :ghissue:`32`: NF - Sphere class. * :ghissue:`30`: RF: Don't import all this every time. * :ghissue:`24`: TST: Fixing tests in reconst module. * :ghissue:`27`: DOC - Add reference to white matter diffusion values. * :ghissue:`25`: NF - Add prolate white matter as defaults for multi-tensor signal sim. * :ghissue:`22`: Updating my fork with the nipy master * :ghissue:`20`: RF - create OptionalImportError for traits imports * :ghissue:`8`: X error BadRequest with fvtk.show * :ghissue:`19`: DOC: add comments and example to commit codes * :ghissue:`18`: DOC: update gitwash from source * :ghissue:`17`: Optional traits * :ghissue:`15`: Octahedron in dipy.core.triangle_subdivide has wrong faces * :ghissue:`14`: DOC - fix frontpage example * :ghissue:`12`: BF(?): cart2sphere and sphere2cart are now invertible. 
* :ghissue:`11`: BF explicit type declaration and initialization for longest_track_len[AB] -- for cython 0.15 compatibility
* :ghissue:`5`: Add DSI reconstruction in Dipy
* :ghissue:`9`: Bug in dipy.tracking.metrics.downsampling when we downsample a track to more than 20 points

dipy-1.11.0/doc/release_notes/release0.7.rst

.. _release0.7:

===================================
Release notes for DIPY version 0.7
===================================

GitHub stats for 2013/03/29 - 2013/12/23 (tag: 0.6.0)

The following 16 authors contributed 814 commits.

* Ariel Rokem
* Bago Amirbekian
* Eleftherios Garyfallidis
* Emmanuel Caruyer
* Erik Ziegler
* Gabriel Girard
* Jean-Christophe Houde
* Kimberly Chan
* Matthew Brett
* Matthias Ekman
* Matthieu Dumont
* Mauro Zucchelli
* Maxime Descoteaux
* Samuel St-Jean
* Stefan van der Walt
* Sylvain Merlet

We closed a total of 84 pull requests;
this is the full list (generated with the script :file:`tools/github_stats.py`):

Pull Requests (84):

* :ghpull:`292`: Streamline tools
* :ghpull:`289`: Examples checked for peaks_from_model
* :ghpull:`288`: Link shore examples
* :ghpull:`279`: Update release 0.7 examples' system
* :ghpull:`257`: Continuous modelling: SHORE
* :ghpull:`285`: Bad seeds cause segfault in EuDX
* :ghpull:`274`: Peak directions update
* :ghpull:`275`: Restore example
* :ghpull:`261`: R2 term response function for Sharpening Deconvolution Transform (SDT)
* :ghpull:`273`: Fixed typos + autopep8
* :ghpull:`268`: Add gfa shmfit
* :ghpull:`260`: NF: Command line interface to QuickBundles.
* :ghpull:`270`: Removed minmax_normalize from dipy.reconst.peaks
* :ghpull:`247`: Model base
* :ghpull:`267`: Refactoring peaks_from_model_parallel
* :ghpull:`219`: Update forward sdeconv mat
* :ghpull:`266`: BF - join pool before trying to delete temp directory
* :ghpull:`265`: Peak from model issue #253
* :ghpull:`264`: peak_from_model tmp files
* :ghpull:`263`: Refactoring peaks calculations to be out of odf.py
* :ghpull:`262`: Handle cpu count exception
* :ghpull:`255`: Fix peaks_from_model_parallel
* :ghpull:`259`: Release 0.7 a few cleanups
* :ghpull:`252`: Clean cc
* :ghpull:`243`: NF Added norm input to interp_rbf and angle as an alternative norm.
* :ghpull:`251`: Another cleanup for fvtk. This time the slicer function was simplified
* :ghpull:`249`: Dsi metrics 2
* :ghpull:`239`: Segmentation based on rgb threshold + examples
* :ghpull:`240`: Dsi metrics
* :ghpull:`245`: Fix some rewording
* :ghpull:`242`: A new streamtube visualization method and different fixes and cleanups for the fvtk module
* :ghpull:`237`: WIP: cleanup docs / small refactor for median otsu
* :ghpull:`221`: peaks_from_model now return peaks directions
* :ghpull:`234`: BF: predict for cases when the ADC is multi-D and S0 is provided as a volume
* :ghpull:`232`: Fix peak extraction default value of relative_peak_threshold
* :ghpull:`227`: Fix closing upon download completion in fetcher
* :ghpull:`230`: Tensor predict
* :ghpull:`229`: BF: input.dtype is used per default
* :ghpull:`210`: Brainextraction
* :ghpull:`226`: SetInput in vtk5 is now SetInputData in vtk6
* :ghpull:`225`: fixed typo
* :ghpull:`212`: Tensor visualization
* :ghpull:`223`: Fix make examples for windows.
* :ghpull:`222`: Fix restore bug
* :ghpull:`217`: RF - update csdeconv to use SphHarmFit class to reduce code duplication.
* :ghpull:`208`: Shm coefficients in peaks_from_model
* :ghpull:`216`: BF - fixed mask_voxel_size bug and added test. Replaced promote_dtype wi...
* :ghpull:`211`: Added a md5 check to each dataset.
* :ghpull:`54`: Restore
* :ghpull:`213`: Update to a more recent version of `six.py`.
* :ghpull:`204`: Maxime's [Gallery] Reconst DTI example revisited
* :ghpull:`207`: Added two new datasets online and updated fetcher.py.
* :ghpull:`209`: Fixed typos in reconst/dti.py
* :ghpull:`206`: DOC: update the docs to say that we support python 3
* :ghpull:`205`: RF: Minor corrections in index.rst and CSD example
* :ghpull:`173`: Constrained Spherical Deconvolution and the Spherical Deconvolution Transform
* :ghpull:`203`: RF: Rename tensor statistics to remove "tensor\_" from them.
* :ghpull:`202`: Typos
* :ghpull:`201`: Bago's Rename sph basis functions corrected after rebasing and other minor lateral fixes
* :ghpull:`191`: DOC - clarify docs for SphHarmModel
* :ghpull:`199`: FIX: testfail due to Non-ASCII character \xe2 in markov.py
* :ghpull:`189`: Shm small fixes
* :ghpull:`196`: DOC: add reference section to ProbabilisticOdfWeightedTracker
* :ghpull:`190`: BF - fix fit-tensor handling of file extensions and mask=none
* :ghpull:`182`: RF - fix disperse_charges so that a large constant does not cause the po...
* :ghpull:`183`: OPT: Modified dipy.core.sphere_stats.random_uniform_on_sphere, cf issue #181
* :ghpull:`185`: DOC: replace soureforge.net links with nipy.org
* :ghpull:`180`: BF: fix Cython TypeError from negative indices to tuples
* :ghpull:`179`: BF: doctest output difference workarounds
* :ghpull:`176`: MRG: Py3 compat
* :ghpull:`178`: RF: This function is superseded by read_bvals_bvecs.
* :ghpull:`170`: Westin stats
* :ghpull:`174`: RF: use $PYTHON variable for python invocation
* :ghpull:`172`: DOC: Updated index.rst and refactored example segment_quickbundles.py
* :ghpull:`169`: RF: refactor pyx / c file stamping for packaging
* :ghpull:`168`: DOC: more updates to release notes
* :ghpull:`167`: Merge maint
* :ghpull:`166`: BF: pyc and created trk files were in eg archive
* :ghpull:`160`: NF: add script to build dmgs from buildbot mpkgs
* :ghpull:`164`: Calculation for mode of a tensor
* :ghpull:`163`: Remove dti tensor
* :ghpull:`161`: DOC: typo in the probabilistic tracking example.
* :ghpull:`162`: DOC: update release notes
* :ghpull:`159`: Rename install test scripts

dipy-1.11.0/doc/release_notes/release0.8.rst

.. _release0.8:

===================================
Release notes for DIPY version 0.8
===================================

GitHub stats for 2013/12/24 - 2014/12/26 (tag: 0.7.0)

The following 19 authors contributed 1176 commits.

* Andrew Lawrence
* Ariel Rokem
* Bago Amirbekian
* Demian Wassermann
* Eleftherios Garyfallidis
* Gabriel Girard
* Gregory R. Lee
* Jean-Christophe Houde
* Kesshi jordan
* Marc-Alexandre Cote
* Matthew Brett
* Matthias Ekman
* Matthieu Dumont
* Mauro Zucchelli
* Maxime Descoteaux
* Michael Paquette
* Omar Ocegueda
* Samuel St-Jean
* Stefan van der Walt

We closed a total of 388 issues, 155 pull requests and 233 regular issues;
this is the full list (generated with the script :file:`tools/github_stats.py`):

Pull Requests (155):

* :ghpull:`544`: Refactor propspeed - updated
* :ghpull:`543`: MRG: update to plot_2d fixes and tests
* :ghpull:`537`: NF: add requirements.txt file
* :ghpull:`534`: BF: removed ftmp variable
* :ghpull:`536`: Update Changelog
* :ghpull:`535`: Happy New Year PR!
* :ghpull:`531`: BF: extend pip timeout to reduce install failures
* :ghpull:`527`: Remove npymath library from cython extensions
* :ghpull:`528`: MRG: move conditional compiling to C
* :ghpull:`530`: BF: work round ugly MSVC manifest bug
* :ghpull:`529`: MRG: a couple of small cleanup fixes
* :ghpull:`526`: Readme.rst and info.py update about the license
* :ghpull:`525`: Added shore gpl warning in the readme
* :ghpull:`524`: Replaced DiPy with DIPY in readme.rst and info.py
* :ghpull:`523`: RF: copy includes list for extensions
* :ghpull:`522`: DOC: Web-site release notes, and some updates on front page.
* :ghpull:`521`: Life bots
* :ghpull:`520`: Relaxing precision for win32
* :ghpull:`519`: Christmas PR! Correcting typos, linking and language for max odf tracking
* :ghpull:`513`: BF + TST: Reinstated eig_from_lo_tri
* :ghpull:`508`: Tests for reslicing
* :ghpull:`515`: TST: Increasing testing on life.
* :ghpull:`516`: TST: Reduce sensitivity on these tests.
* :ghpull:`495`: NF - Deterministic Maximum Direction Getter
* :ghpull:`514`: Website update
* :ghpull:`510`: BF: another fvtk 5 to 6 incompatibility
* :ghpull:`509`: DOC: Small fixes in documentation.
* :ghpull:`497`: New sphere for ODF reconstruction
* :ghpull:`460`: Sparse Fascicle Model
* :ghpull:`499`: DOC: Warn about the GPL license of SHORE.
* :ghpull:`491`: RF - Make peaks_from_model part of dipy.direction
* :ghpull:`501`: TST: Test for both data with and w/0 b0.
* :ghpull:`507`: BF - use different sort method to avoid mergsort for older numpy.
* :ghpull:`504`: Bug fix float overflow in estimate_sigma
* :ghpull:`494`: Fix round
* :ghpull:`503`: Fixed compatibility issues between vtk 5 and 6
* :ghpull:`498`: DTI `min_signal`
* :ghpull:`471`: Use importlib instead of __import__
* :ghpull:`419`: LiFE
* :ghpull:`489`: Fix diffeomorphic registration test failures
* :ghpull:`484`: Clear tabs from examples for website
* :ghpull:`490`: DOC: corrected typos in the tracking PR
* :ghpull:`341`: Traco Redesign
* :ghpull:`483`: NF: Find the closest vertex on a sphere for an input vector.
* :ghpull:`488`: BF: fix travis version setting
* :ghpull:`485`: RF: deleted unused files
* :ghpull:`482`: Skipping tests for different versions of Scipy for optimize.py
* :ghpull:`480`: Enhance SLR to allow for series of registrations
* :ghpull:`479`: Report on coverage for old scipy.
* :ghpull:`481`: BF - make examples was confusing files with similar names, fixed
* :ghpull:`476`: Fix optimize defaults for older scipy versions for L-BFGS-B
* :ghpull:`478`: TST: Increase the timeout on the Travis pip install
* :ghpull:`477`: MAINT+TST: update minimum nibabel dependency
* :ghpull:`474`: RF: switch travis tests to use virtualenvs
* :ghpull:`473`: TST: Make Travis provide verbose test outputs.
* :ghpull:`472`: ENH: GradientTable now calculates qvalues * :ghpull:`469`: Fix evolution save win32 * :ghpull:`463`: DOC: update RESTORE tutorial to use new noise estimation technique * :ghpull:`466`: BF: cannot quote command for Windows * :ghpull:`465`: BF: increased SCIPY version definition flag to 0.12 * :ghpull:`462`: BF: fix writing history to file in Python 3 * :ghpull:`433`: Added local variance estimation * :ghpull:`458`: DOC: docstring fixes in dipy/align/crosscorr.pyx * :ghpull:`448`: BF: fix link to npy_math function * :ghpull:`447`: BF: supposed fix for the gh-439, but still unable to reproduce OP. * :ghpull:`443`: Fix buildbots errors introduced with the registration module * :ghpull:`456`: MRG: relax threshold for failing test + cleanup * :ghpull:`454`: DOC: fix docstring for compile-time checker * :ghpull:`453`: BF: refactor conditional compiling again * :ghpull:`446`: Streamline-based Linear Registration * :ghpull:`445`: NF: generate config.pxi file with Cython DEF vars * :ghpull:`440`: DOC - add info on how to change default tempdir (multiprocessing). * :ghpull:`431`: Change the writeable flag back to its original state when finished. * :ghpull:`408`: Symmetric diffeomorphic non-linear registration * :ghpull:`438`: Missing a blank line in examples/tracking_quick_start.py * :ghpull:`405`: fixed frozen windows executable issue * :ghpull:`418`: RF: move script running code into own module * :ghpull:`437`: Update Cython download URL * :ghpull:`435`: BF: replaced non-ascii character in dipy.reconst.dti line 956 * :ghpull:`434`: DOC: References for the DTI ODF calculation. * :ghpull:`430`: Revert "Support read-only numpy array." * :ghpull:`427`: Support read-only numpy array. * :ghpull:`421`: Fix nans in gfa * :ghpull:`422`: BF: Use the short version to verify scipy version. * :ghpull:`415`: RF - move around some of the predict stuff * :ghpull:`420`: Rename README.txt to README.rst * :ghpull:`413`: Faster spherical harmonics * :ghpull:`416`: Removed memory_leak unittest in test_strealine.py * :ghpull:`417`: Fix streamlinespeed tests * :ghpull:`411`: Fix memory leak in cython functions length and set_number_of_points * :ghpull:`409`: minor corrections to pipe function * :ghpull:`396`: TST : this is not exactly equal on some platforms. * :ghpull:`407`: BF: fixed problem with NANs in odfdeconv * :ghpull:`406`: Revert "Merge pull request #346 from omarocegueda/syn_registration" * :ghpull:`402`: Fix AE test error in test_peak_directions_thorough * :ghpull:`403`: Added mask shape check in tenfit * :ghpull:`346`: Symmetric diffeomorphic non-linear registration * :ghpull:`401`: BF: fix skiptest invocation for missing mpl * :ghpull:`340`: CSD fit issue * :ghpull:`397`: BF: fix import statement for get_cmap * :ghpull:`393`: RF: update Cython dependency * :ghpull:`382`: Cythonized version of streamlines' resample() and length() functions. * :ghpull:`386`: DOC: Small fix in the xval example. 
* :ghpull:`335`: Xval * :ghpull:`352`: Fix utils docs and affine * :ghpull:`384`: odf_sh_sharpening function fix and new test * :ghpull:`374`: MRG: bumpy numpy requirement to 1.5 / compat fixes * :ghpull:`380`: DOC: Update a few Dipy links to link to the correct repo * :ghpull:`378`: Fvtk cleanup * :ghpull:`379`: fixed typos in shm.py * :ghpull:`339`: FVTK small improvement: Arbitrary matplotlib colormaps can be used to color spherical functions * :ghpull:`373`: Fixed discrepancies between doc and code * :ghpull:`371`: RF: don't use -fopenmp flag if it doesn't work * :ghpull:`372`: BF: set integer type for crossplatform compilation * :ghpull:`337`: Piesno * :ghpull:`370`: Tone down the front page a bit. * :ghpull:`364`: Add the mode param for border management. * :ghpull:`368`: New banner for website * :ghpull:`367`: MRG: refactor API generation for sharing * :ghpull:`363`: RF: make cvxopt optional for tests * :ghpull:`362`: Changes to fix issue #361: matrix sizing in tracking.utils.connectivity_matrix * :ghpull:`360`: Added missing $ sign * :ghpull:`355`: DOC: Updated API change document to add target function change * :ghpull:`357`: Changed the logo to full black as the one that I sent as suggestion for HBM and ISMRM * :ghpull:`356`: Auto-generate API docs * :ghpull:`349`: Added api changes file to track breaks of backwards compatibility * :ghpull:`348`: Website update * :ghpull:`347`: DOC: Updating citations * :ghpull:`345`: TST: Make travis look at test coverage. * :ghpull:`338`: Add positivity constraint on the propagator * :ghpull:`334`: Fix vec2vec * :ghpull:`324`: Constrained optimisation for SHORE to set E(0)=1 when the CVXOPT package is available * :ghpull:`320`: Denoising images using non-local means * :ghpull:`331`: DOC: correct number of seeds in streamline_tools example * :ghpull:`326`: Fix brain extraction example * :ghpull:`327`: add small and big delta * :ghpull:`323`: Shore pdf grid speed improvement * :ghpull:`319`: DOC: Updated the highlights to promote the release and the upcoming paper * :ghpull:`318`: Corrected some rendering problems with the installation instructions * :ghpull:`317`: BF: more problems with path quoting in windows * :ghpull:`316`: MRG: more fixes for windows script tests * :ghpull:`315`: BF: EuDX odf_vertices param has no default value * :ghpull:`305`: DOC: Some more details in installation instructions. * :ghpull:`314`: BF - callable response does not work * :ghpull:`311`: Bf seeds from mask * :ghpull:`309`: MRG: Windows test fixes * :ghpull:`308`: typos + pep stuf * :ghpull:`303`: BF: try and fix nibabel setup requirement * :ghpull:`304`: Update README.txt * :ghpull:`302`: Time for 0.8.0.dev! * :ghpull:`299`: BF: Put back utils.length. * :ghpull:`301`: Updated info.py and copyright year * :ghpull:`300`: Bf fetcher bug on windows * :ghpull:`298`: TST - rework tests so that we do not need to download any data * :ghpull:`290`: DOC: Started generating 0.7 release notes. Issues (233): * :ghissue:`544`: Refactor propspeed - updated * :ghissue:`540`: MRG: refactor propspeed * :ghissue:`542`: TST: Testing regtools * :ghissue:`543`: MRG: update to plot_2d fixes and tests * :ghissue:`541`: BUG: plot_2d_diffeomorphic_map * :ghissue:`439`: ValueError in RESTORE * :ghissue:`538`: WIP: TEST: relaxed precision * :ghissue:`449`: local variable 'ftmp' referenced before assignment * :ghissue:`537`: NF: add requirements.txt file * :ghissue:`534`: BF: removed ftmp variable * :ghissue:`536`: Update Changelog * :ghissue:`535`: Happy New Year PR! 
* :ghissue:`512`: reconst.dti.eig_from_lo_tri * :ghissue:`467`: Optimize failure on Windows * :ghissue:`464`: Diffeomorphic registration test failures on PPC * :ghissue:`531`: BF: extend pip timeout to reduce install failures * :ghissue:`527`: Remove npymath library from cython extensions * :ghissue:`528`: MRG: move conditional compiling to C * :ghissue:`530`: BF: work round ugly MSVC manifest bug * :ghissue:`529`: MRG: a couple of small cleanup fixes * :ghissue:`526`: Readme.rst and info.py update about the license * :ghissue:`525`: Added shore gpl warning in the readme * :ghissue:`524`: Replaced DiPy with DIPY in readme.rst and info.py * :ghissue:`523`: RF: copy includes list for extensions * :ghissue:`522`: DOC: Web-site release notes, and some updates on front page. * :ghissue:`521`: Life bots * :ghissue:`520`: Relaxing precision for win32 * :ghissue:`519`: Christmas PR! Correcting typos, linking and language for max odf tracking * :ghissue:`513`: BF + TST: Reinstated eig_from_lo_tri * :ghissue:`508`: Tests for reslicing * :ghissue:`515`: TST: Increasing testing on life. * :ghissue:`516`: TST: Reduce sensitivity on these tests. * :ghissue:`495`: NF - Deterministic Maximum Direction Getter * :ghissue:`514`: Website update * :ghissue:`510`: BF: another fvtk 5 to 6 incompatibility * :ghissue:`511`: Error estimating tensors on hcp dataset * :ghissue:`509`: DOC: Small fixes in documentation. * :ghissue:`497`: New sphere for ODF reconstruction * :ghissue:`460`: Sparse Fascicle Model * :ghissue:`499`: DOC: Warn about the GPL license of SHORE. * :ghissue:`491`: RF - Make peaks_from_model part of dipy.direction * :ghissue:`501`: TST: Test for both data with and w/0 b0. * :ghissue:`507`: BF - use different sort method to avoid mergsort for older numpy. * :ghissue:`505`: stable/wheezy debian -- ar.argsort(kind='mergesort') causes TypeError: requested sort not available for type ( * :ghissue:`506`: RF: Use integer datatype for unique_rows sorting. * :ghissue:`504`: Bug fix float overflow in estimate_sigma * :ghissue:`399`: Multiprocessing runtime error in Windows 64 bit * :ghissue:`383`: typo in multi tensor fit example * :ghissue:`350`: typo in SNR example * :ghissue:`424`: test more python versions with travis * :ghissue:`493`: BF - older C compilers do not have round in math.h, using dpy_math instead * :ghissue:`494`: Fix round * :ghissue:`503`: Fixed compatibility issues between vtk 5 and 6 * :ghissue:`500`: SHORE hyp2F1 * :ghissue:`502`: Fix record vtk6 * :ghissue:`498`: DTI `min_signal` * :ghissue:`496`: Revert "BF: supposed fix for the gh-439, but still unable to reproduce O... * :ghissue:`492`: TST - new DTI test to help develop min_signal handling * :ghissue:`471`: Use importlib instead of __import__ * :ghissue:`419`: LiFE * :ghissue:`489`: Fix diffeomorphic registration test failures * :ghissue:`484`: Clear tabs from examples for website * :ghissue:`490`: DOC: corrected typos in the tracking PR * :ghissue:`341`: Traco Redesign * :ghissue:`410`: Faster spherical harmonics implementation * :ghissue:`483`: NF: Find the closest vertex on a sphere for an input vector. * :ghissue:`487`: Travis Problem * :ghissue:`488`: BF: fix travis version setting * :ghissue:`485`: RF: deleted unused files * :ghissue:`486`: cvxopt is gpl licensed * :ghissue:`482`: Skipping tests for different versions of Scipy for optimize.py * :ghissue:`480`: Enhance SLR to allow for series of registrations * :ghissue:`479`: Report on coverage for old scipy. 
* :ghissue:`481`: BF - make examples was confusing files with similar names, fixed * :ghissue:`428`: WIP: refactor travis building * :ghissue:`429`: WIP: Refactor travising * :ghissue:`476`: Fix optimize defaults for older scipy versions for L-BFGS-B * :ghissue:`478`: TST: Increase the timeout on the Travis pip install * :ghissue:`477`: MAINT+TST: update minimum nibabel dependency * :ghissue:`475`: Does the optimizer still need `tmp_files`? * :ghissue:`474`: RF: switch travis tests to use virtualenvs * :ghissue:`473`: TST: Make Travis provide verbose test outputs. * :ghissue:`470`: Enhance SLR with applying series of transformations and fix optimize bug for parameter missing in old scipy versions * :ghissue:`472`: ENH: GradientTable now calculates qvalues * :ghissue:`469`: Fix evolution save win32 * :ghissue:`463`: DOC: update RESTORE tutorial to use new noise estimation technique * :ghissue:`466`: BF: cannot quote command for Windows * :ghissue:`461`: Buildbot failures with missing 'nit' key in dipy.core.optimize * :ghissue:`465`: BF: increased SCIPY version definition flag to 0.12 * :ghissue:`462`: BF: fix writing history to file in Python 3 * :ghissue:`433`: Added local variance estimation * :ghissue:`432`: auto estimate the standard deviation globally for nlmeans * :ghissue:`451`: Warning for DTI normalization * :ghissue:`458`: DOC: docstring fixes in dipy/align/crosscorr.pyx * :ghissue:`448`: BF: fix link to npy_math function * :ghissue:`447`: BF: supposed fix for the gh-439, but still unable to reproduce OP. * :ghissue:`443`: Fix buildbots errors introduced with the registration module * :ghissue:`456`: MRG: relax threshold for failing test + cleanup * :ghissue:`455`: Test failure on `master` * :ghissue:`454`: DOC: fix docstring for compile-time checker * :ghissue:`450`: Find if replacing matrix44 from streamlinear with compose_matrix from dipy.core.geometry is a good idea * :ghissue:`453`: BF: refactor conditional compiling again * :ghissue:`446`: Streamline-based Linear Registration * :ghissue:`452`: Replace raise by auto normalization when creating a gradient table with un-normalized bvecs. * :ghissue:`398`: assert AE < 2. failure in test_peak_directions_thorough * :ghissue:`444`: heads up - MKL error in parallel mode * :ghissue:`445`: NF: generate config.pxi file with Cython DEF vars * :ghissue:`440`: DOC - add info on how to change default tempdir (multiprocessing). * :ghissue:`431`: Change the writeable flag back to its original state when finished. * :ghissue:`408`: Symmetric diffeomorphic non-linear registration * :ghissue:`333`: Bundle alignment * :ghissue:`438`: Missing a blank line in examples/tracking_quick_start.py * :ghissue:`426`: nlmeans_3d breaks with mask=None * :ghissue:`405`: fixed frozen windows executable issue * :ghissue:`418`: RF: move script running code into own module * :ghissue:`437`: Update Cython download URL * :ghissue:`435`: BF: replaced non-ascii character in dipy.reconst.dti line 956 * :ghissue:`434`: DOC: References for the DTI ODF calculation. * :ghissue:`425`: NF added class to save streamlines in vtk format * :ghissue:`430`: Revert "Support read-only numpy array." * :ghissue:`427`: Support read-only numpy array. * :ghissue:`421`: Fix nans in gfa * :ghissue:`422`: BF: Use the short version to verify scipy version. 
* :ghissue:`415`: RF - move around some of the predict stuff * :ghissue:`420`: Rename README.txt to README.rst * :ghissue:`413`: Faster spherical harmonics * :ghissue:`416`: Removed memory_leak unittest in test_strealine.py * :ghissue:`417`: Fix streamlinespeed tests * :ghissue:`411`: Fix memory leak in cython functions length and set_number_of_points * :ghissue:`412`: Use simple multiplication instead exponentiation and pow * :ghissue:`409`: minor corrections to pipe function * :ghissue:`396`: TST : this is not exactly equal on some platforms. * :ghissue:`407`: BF: fixed problem with NANs in odfdeconv * :ghissue:`406`: Revert "Merge pull request #346 from omarocegueda/syn_registration" * :ghissue:`402`: Fix AE test error in test_peak_directions_thorough * :ghissue:`403`: Added mask shape check in tenfit * :ghissue:`346`: Symmetric diffeomorphic non-linear registration * :ghissue:`401`: BF: fix skiptest invocation for missing mpl * :ghissue:`340`: CSD fit issue * :ghissue:`397`: BF: fix import statement for get_cmap * :ghissue:`393`: RF: update Cython dependency * :ghissue:`391`: memory usage: 16GB wasn't sufficient * :ghissue:`382`: Cythonized version of streamlines' resample() and length() functions. * :ghissue:`386`: DOC: Small fix in the xval example. * :ghissue:`385`: cross_validation example doesn't render properly * :ghissue:`335`: Xval * :ghissue:`352`: Fix utils docs and affine * :ghissue:`384`: odf_sh_sharpening function fix and new test * :ghissue:`374`: MRG: bumpy numpy requirement to 1.5 / compat fixes * :ghissue:`381`: Bago fix utils docs and affine * :ghissue:`380`: DOC: Update a few Dipy links to link to the correct repo * :ghissue:`378`: Fvtk cleanup * :ghissue:`379`: fixed typos in shm.py * :ghissue:`376`: BF: Adjust the dimensionality of the peak_values, if provided. * :ghissue:`377`: Demianw fvtk colormap * :ghissue:`339`: FVTK small improvement: Arbitrary matplotlib colormaps can be used to color spherical functions * :ghissue:`373`: Fixed discrepancies between doc and code * :ghissue:`371`: RF: don't use -fopenmp flag if it doesn't work * :ghissue:`372`: BF: set integer type for crossplatform compilation * :ghissue:`337`: Piesno * :ghissue:`370`: Tone down the front page a bit. * :ghissue:`364`: Add the mode param for border management. * :ghissue:`368`: New banner for website * :ghissue:`367`: MRG: refactor API generation for sharing * :ghissue:`359`: cvxopt dependency * :ghissue:`363`: RF: make cvxopt optional for tests * :ghissue:`361`: Matrix size wrong for tracking.utils.connectivity_matrix * :ghissue:`362`: Changes to fix issue #361: matrix sizing in tracking.utils.connectivity_matrix * :ghissue:`360`: Added missing $ sign * :ghissue:`358`: typo in doc * :ghissue:`355`: DOC: Updated API change document to add target function change * :ghissue:`357`: Changed the logo to full black as the one that I sent as suggestion for HBM and ISMRM * :ghissue:`356`: Auto-generate API docs * :ghissue:`349`: Added api changes file to track breaks of backwards compatibility * :ghissue:`348`: Website update * :ghissue:`347`: DOC: Updating citations * :ghissue:`345`: TST: Make travis look at test coverage. * :ghissue:`338`: Add positivity constraint on the propagator * :ghissue:`334`: Fix vec2vec * :ghissue:`343`: Please Ignore this PR! 
* :ghissue:`324`: Constrained optimisation for SHORE to set E(0)=1 when the CVXOPT package is available
* :ghissue:`277`: WIP: PIESNO framework for estimating the underlying std of the gaussian distribution
* :ghissue:`336`: Demianw shore e0 constrained
* :ghissue:`235`: WIP: Cross-validation
* :ghissue:`329`: WIP: Fix vec2vec
* :ghissue:`320`: Denoising images using non-local means
* :ghissue:`331`: DOC: correct number of seeds in streamline_tools example
* :ghissue:`330`: DOC: number of seeds per voxel, inconsistent documentation?
* :ghissue:`326`: Fix brain extraction example
* :ghissue:`327`: add small and big delta
* :ghissue:`323`: Shore pdf grid speed improvement
* :ghissue:`319`: DOC: Updated the highlights to promote the release and the upcoming paper
* :ghissue:`318`: Corrected some rendering problems with the installation instructions
* :ghissue:`317`: BF: more problems with path quoting in windows
* :ghissue:`316`: MRG: more fixes for windows script tests
* :ghissue:`315`: BF: EuDX odf_vertices param has no default value
* :ghissue:`312`: Sphere and default used through the code
* :ghissue:`305`: DOC: Some more details in installation instructions.
* :ghissue:`314`: BF - callable response does not work
* :ghissue:`16`: quickie: 'from raw data to tractographies' documentation implies dipy can't do anything with nonisotropic voxel sizes
* :ghissue:`311`: Bf seeds from mask
* :ghissue:`307`: Streamline_tools example stops working when I change density from 1 to 2
* :ghissue:`241`: Wrong normalization in peaks_from_model
* :ghissue:`248`: Clarify dsi example
* :ghissue:`220`: Add ndindex to peaks_from_model
* :ghissue:`253`: Parallel peaksFromModel timing out on buildbot
* :ghissue:`256`: writing data to /tmp peaks_from_model
* :ghissue:`278`: tenmodel.bvec, not existing anymore?
* :ghissue:`282`: fvtk documentation is incomprehensible
* :ghissue:`228`: buildbot error in mask.py
* :ghissue:`197`: DOC: some docstrings are not rendered correctly
* :ghissue:`181`: OPT: Change dipy.core.sphere_stats.random_uniform_on_sphere
* :ghissue:`177`: Extension test in dipy_fit_tensor seems brittle
* :ghissue:`171`: Fix auto_attrs
* :ghissue:`31`: Plotting in test suite
* :ghissue:`42`: RuntimeWarning in dti.py
* :ghissue:`43`: Problems with edges and faces in create_half_unit_sphere
* :ghissue:`53`: Is ravel_multi_index a new thing?
* :ghissue:`64`: Fix examples that rely on old API and removed data-sets
* :ghissue:`67`: viz.projections.sph_projection is broken
* :ghissue:`92`: dti.fa division by 0 warning in tests
* :ghissue:`306`: Tests fail after windows 32 bit installation and running import dipy; dipy.test()
* :ghissue:`310`: Windows test failure for tracking test_rmi
* :ghissue:`309`: MRG: Windows test fixes
* :ghissue:`308`: typos + pep stuf
* :ghissue:`303`: BF: try and fix nibabel setup requirement
* :ghissue:`304`: Update README.txt
* :ghissue:`302`: Time for 0.8.0.dev!
* :ghissue:`299`: BF: Put back utils.length.
* :ghissue:`301`: Updated info.py and copyright year
* :ghissue:`300`: Bf fetcher bug on windows
* :ghissue:`298`: TST - rework tests so that we do not need to download any data
* :ghissue:`290`: DOC: Started generating 0.7 release notes.

.. _release0.9:

===================================
Release notes for DIPY version 0.9
===================================

GitHub stats for 2015/01/06 - 2015/03/18 (tag: 0.8.0)

The following 12 authors contributed 235 commits.

* Ariel Rokem
* Bago Amirbekian
* Chantal Tax
* Eleftherios Garyfallidis
* Gabriel Girard
* Marc-Alexandre Côté
* Matthew Brett
* Maxime Descoteaux
* Omar Ocegueda
* Qiyuan Tian
* Samuel St-Jean
* Stefan van der Walt

We closed a total of 80 issues, 35 pull requests and 45 regular issues; this is the full list (generated with the script :file:`tools/github_stats.py`):

Pull Requests (35):

* :ghpull:`594`: DOC + PEP8: Mostly just line-wrapping.
* :ghpull:`575`: Speeding up LiFE
* :ghpull:`595`: BF: csd predict multi
* :ghpull:`599`: BF: use dpy_rint instead of round for Windows
* :ghpull:`603`: Fix precision error in test_center_of_mass
* :ghpull:`601`: BF: Some versions (<0.18) of numpy don't have nanmean.
* :ghpull:`598`: Fixed undetected compilation errors prior to Cython 0.22
* :ghpull:`596`: DOC + PEP8: Clean up a few typos, PEP8 things, etc.
* :ghpull:`593`: DOC: fixed some typos and added a notes section to make the document more...
* :ghpull:`588`: Nf csd calibration
* :ghpull:`565`: Adding New Tissue Classifiers
* :ghpull:`589`: DOC: minor typographic corrections
* :ghpull:`584`: DOC: explain the changes made to the integration length in GQI2.
* :ghpull:`568`: Quickbundles2
* :ghpull:`559`: SFM for multi b-value data
* :ghpull:`586`: BF: all_tensor_evecs should rotate from eye(3) to e0.
* :ghpull:`574`: Affine registration PR1: Transforms.
* :ghpull:`581`: BF: Normalization of GQI2 `gqi_vector`.
* :ghpull:`580`: docstring for tensor fit was wrong
* :ghpull:`579`: RF: Compatibility with scipy 0.11
* :ghpull:`577`: BF: update cython signatures with except values
* :ghpull:`553`: RF: use cholesky to solve csd
* :ghpull:`552`: Small refactor of viz.regtools
* :ghpull:`569`: DOC: How to install vtk using conda.
* :ghpull:`571`: Bf cart2sphere
* :ghpull:`557`: NF: geodesic anisotropy
* :ghpull:`566`: DOC: Some small fixes to the documentation of SFM.
* :ghpull:`563`: RF: Cleanup functions that refer to some data that no longer exists here...
* :ghpull:`564`: fixed typo
* :ghpull:`561`: Added option to return the number of voxels fitting the fa threshold
* :ghpull:`554`: DOC: Link to @francopestilli's matlab implementation of LiFE.
* :ghpull:`556`: RF: change config variable to C define
* :ghpull:`550`: Added non-local means in Changelog
* :ghpull:`551`: Website update
* :ghpull:`549`: DOC: Update download link.

Issues (45):

* :ghissue:`594`: DOC + PEP8: Mostly just line-wrapping.
* :ghissue:`575`: Speeding up LiFE
* :ghissue:`595`: BF: csd predict multi
* :ghissue:`599`: BF: use dpy_rint instead of round for Windows
* :ghissue:`603`: Fix precision error in test_center_of_mass
* :ghissue:`602`: Precision error in test_feature_center_of_mass on 32-bit Linux
* :ghissue:`601`: BF: Some versions (<0.18) of numpy don't have nanmean.
* :ghissue:`598`: Fixed undetected compilation errors prior to Cython 0.22
* :ghissue:`597`: tracking module not building on cython 0.22
* :ghissue:`596`: DOC + PEP8: Clean up a few typos, PEP8 things, etc.
* :ghissue:`404`: A better way to create a response function for CSD
* :ghissue:`593`: DOC: fixed some typos and added a notes section to make the document more...
* :ghissue:`588`: Nf csd calibration
* :ghissue:`565`: Adding New Tissue Classifiers
* :ghissue:`589`: DOC: minor typographic corrections
* :ghissue:`584`: DOC: explain the changes made to the integration length in GQI2.
* :ghissue:`568`: Quickbundles2
* :ghissue:`559`: SFM for multi b-value data
* :ghissue:`586`: BF: all_tensor_evecs should rotate from eye(3) to e0.
* :ghissue:`585`: NF: Initial file structure skeleton for amico implementation
* :ghissue:`574`: Affine registration PR1: Transforms.
* :ghissue:`581`: BF: Normalization of GQI2 `gqi_vector`.
* :ghissue:`580`: docstring for tensor fit was wrong
* :ghissue:`579`: RF: Compatibility with scipy 0.11
* :ghissue:`577`: BF: update cython signatures with except values
* :ghissue:`553`: RF: use cholesky to solve csd
* :ghissue:`552`: Small refactor of viz.regtools
* :ghissue:`569`: DOC: How to install vtk using conda.
* :ghissue:`571`: Bf cart2sphere
* :ghissue:`557`: NF: geodesic anisotropy
* :ghissue:`567`: NF - added function to fetch/read stanford pve maps
* :ghissue:`566`: DOC: Some small fixes to the documentation of SFM.
* :ghissue:`414`: NF - added anatomically-constrained tractography (ACT) tissue classifier
* :ghissue:`560`: dipy.data: three_shells_voxels is not there
* :ghissue:`563`: RF: Cleanup functions that refer to some data that no longer exists here...
* :ghissue:`564`: fixed typo
* :ghissue:`561`: Added option to return the number of voxels fitting the fa threshold
* :ghissue:`554`: DOC: Link to @francopestilli's matlab implementation of LiFE.
* :ghissue:`556`: RF: change config variable to C define
* :ghissue:`555`: Use chatroom for dev communications
* :ghissue:`354`: Test failures of 0.7.1 on wheezy and squeeze 32bit
* :ghissue:`532`: SPARC buildbot fail in multiprocessing test
* :ghissue:`550`: Added non-local means in Changelog
* :ghissue:`551`: Website update
* :ghissue:`549`: DOC: Update download link.

.. _release1.0:

====================================
Release notes for DIPY version 1.0
====================================

GitHub stats for 2019/03/11 - 2019/08/05 (tag: 0.16.0)

These lists are automatically generated, and may be incomplete or contain duplicates.

The following 17 authors contributed 707 commits.

* Adam Richie-Halford
* Antoine Theberge
* Ariel Rokem
* Clint Greene
* Eleftherios Garyfallidis
* Francois Rheault
* Gabriel Girard
* Jean-Christophe Houde
* Jon Haitz Legarreta Gorroño
* Kevin Sitek
* Marc-Alexandre Côté
* Matt Cieslak
* Rafael Neto Henriques
* Scott Trinkle
* Serge Koudoro
* Shreyas Fadnavis

We closed a total of 289 issues, 97 pull requests and 192 regular issues; this is the full list (generated with the script :file:`tools/github_stats.py`):

Pull Requests (97):

* :ghpull:`1924`: Some updates in Horizon fixing some issues for upcoming release
* :ghpull:`1946`: Fix empty tractogram loading saving
* :ghpull:`1947`: DOC: fixing examples links
* :ghpull:`1942`: Remove dipy.io.trackvis
* :ghpull:`1917`: A functional implementation of Random matrix local pca.
* :ghpull:`1940`: Increase affine consistency in dipy.tracking.streamlines * :ghpull:`1909`: [WIP] - MTMS-CSD Tutorial * :ghpull:`1931`: [BF] IVIM fixes * :ghpull:`1944`: Update DKI, WMTI, fwDTI examples and give more evidence to WMTI and fwDTI models * :ghpull:`1939`: Increase affine consistency in dipy.tracking.utils * :ghpull:`1943`: Increase affine consistency in dipy.tracking.life and dipy.stats.analysis * :ghpull:`1941`: Remove some viz tutorial * :ghpull:`1926`: RF - dipy.tracking.local * :ghpull:`1935`: Remove dipy.external and dipy.fixes packages * :ghpull:`1903`: Skip some tests on big endian architecture (like s390x) * :ghpull:`1892`: Use the correct (row) order of the tensor components * :ghpull:`1804`: BF: added check to avoid infinite loop on consecutive coordinates. * :ghpull:`1937`: Add a warning about future changes that will happen in dipy.stats. * :ghpull:`1928`: Update streamlines formats example * :ghpull:`1925`: FIX: Stateful tractogram examples * :ghpull:`1927`: BF - move import to top level * :ghpull:`1923`: [Fix] removing minmax_norm parameter from peak_direction * :ghpull:`1894`: Default sphere: From symmetric724 to repulsion724 * :ghpull:`1812`: ENH: Statefull tractogram, robust spatial handling and IO * :ghpull:`1922`: Remove deprecated functions from imaffine * :ghpull:`1885`: BF - remove single pts streamline * :ghpull:`1913`: RF - EuDX legacy code/test * :ghpull:`1915`: Doc generation under Windows * :ghpull:`1630`: [Fix] remove Userwarning message * :ghpull:`1896`: New module: dipy.core.interpolation * :ghpull:`1912`: Remove deprecated parameter voxel_size * :ghpull:`1916`: Spherical deconvolution model CANNOT be constructed without specifying a response * :ghpull:`1918`: ENH: Remove unused `warning` package import * :ghpull:`1881`: DOC - RF tracking examples * :ghpull:`1911`: Add python_requires * :ghpull:`1914`: [Fix] vol_idx missing in snr_in_cc Tutorial * :ghpull:`1907`: DOC: Fix examples documentation generation warnings * :ghpull:`1908`: DOC: Fix typos * :ghpull:`1887`: DOC - updated streamline_tools example with the LocalTracking Framework * :ghpull:`1905`: ENH: Remove deprecated SH bases * :ghpull:`1849`: Adds control for number of iterations in CSD recon * :ghpull:`1902`: Warn users if they don't have FURY installed * :ghpull:`1904`: DOC: Improve documentation * :ghpull:`1771`: Gibbs removal * :ghpull:`1899`: Fix: Byte ordering error on Python 3.5 * :ghpull:`1898`: Replace SingleTensor by single_tensor * :ghpull:`1897`: DOC: Fix typos * :ghpull:`1893`: Remove scratch folder * :ghpull:`1891`: Move the tests from test_refine_rb to test_bundles. * :ghpull:`1888`: BF - fix eudx tracking for npeaks=1 * :ghpull:`1879`: DOC - explicitly run the streamline generators before saving the trk * :ghpull:`1884`: Clean up: Remove streamlines memory patch * :ghpull:`1875`: ENH: Add binary tissue classifier option for tracking workflow * :ghpull:`1882`: DOC - clarified the state of the tracking process once stopped * :ghpull:`1880`: DOC: Fix typos and improve documentation * :ghpull:`1878`: Clean up: Remove NUMPY_LESS_0.8.x * :ghpull:`1877`: Clean up: Remove all SCIPY_LESS_0.x.x * :ghpull:`1876`: DOC: Fix typos * :ghpull:`1874`: DOC: Fix documentation oversights. * :ghpull:`1858`: NF: MSMT - CSD * :ghpull:`1843`: [NF] new workflow: FetchFlow * :ghpull:`1866`: MAINT: Drop support for Python 3.4 * :ghpull:`1850`: NF: Add is_hemispherical test * :ghpull:`1855`: Pin scipy version for bots that need statsmodels. 
* :ghpull:`1835`: [Fix] Workflow mask documentation * :ghpull:`1836`: Corrected median_otsu function declaration that was breaking tutorials * :ghpull:`1792`: [NF]: Add seeds to TRK * :ghpull:`1851`: DOC: Add single-module test/coverage instructions * :ghpull:`1842`: [Fix] Remove tput from fetcher * :ghpull:`1800`: Update command line documentation generation * :ghpull:`1830`: Delete six module * :ghpull:`1821`: Fixes 238, by requiring vol_idx input with 4D images. * :ghpull:`1775`: Remove Python 2 dependency. * :ghpull:`1816`: Remove Deprecated function dipy.data.get_data * :ghpull:`1818`: [DOC] fix rank order typo * :ghpull:`1827`: Remove deprecated module dipy.segment.quickbundes * :ghpull:`1824`: Remove deprecated module dipy.reconst.peaks * :ghpull:`1819`: [Fix] Diffeormorphic + CCMetric on small image * :ghpull:`1823`: Remove accent colormap * :ghpull:`1814`: [Fix] add a basic check on dipy_horizon * :ghpull:`1815`: [FIX] median_otsu deprecated parameter * :ghpull:`1813`: [Fix] Add Readme for doc generation * :ghpull:`1766`: NF - add tracking workflow parameters * :ghpull:`1772`: BF: changes min_signal defaults from 1 to 1e-5 * :ghpull:`1810`: [Bug FIx] dipy_fit_csa and dipy_fit_csd workflow * :ghpull:`1806`: Plot both IVIM fits on the same axis * :ghpull:`1789`: VarPro Fit Example IVIM * :ghpull:`1770`: Parallel reconst workflows * :ghpull:`1796`: [Fix] stripping in workflow documentation * :ghpull:`1795`: [Fix] workflows description * :ghpull:`1768`: Add afq to stats * :ghpull:`1788`: Add test for different dtypes * :ghpull:`1769`: Change "is" check for 'GCV' * :ghpull:`1767`: BF: self.self * :ghpull:`1759`: Add one more acknowledgement * :ghpull:`1230`: Mean Signal DKI * :ghpull:`1760`: Implements the inverse of decfa Issues (192): * :ghissue:`1798`: plotting denoised img * :ghissue:`1924`: Some updates in Horizon fixing some issues for upcoming release * :ghissue:`1946`: Fix empty tractogram loading saving * :ghissue:`1947`: DOC: fixing examples links * :ghissue:`1942`: Remove dipy.io.trackvis * :ghissue:`1917`: A functional implementation of Random matrix local pca. * :ghissue:`1940`: Increase affine consistency in dipy.tracking.streamlines * :ghissue:`1909`: [WIP] - MTMS-CSD Tutorial * :ghissue:`1931`: [BF] IVIM fixes * :ghissue:`1817`: Unusual behavior in Dipy IVIM implementation/example * :ghissue:`1774`: Split up DKI example * :ghissue:`1944`: Update DKI, WMTI, fwDTI examples and give more evidence to WMTI and fwDTI models * :ghissue:`1939`: Increase affine consistency in dipy.tracking.utils * :ghissue:`1943`: Increase affine consistency in dipy.tracking.life and dipy.stats.analysis * :ghissue:`1941`: Remove some viz tutorial * :ghissue:`1926`: RF - dipy.tracking.local * :ghissue:`1935`: Remove dipy.external and dipy.fixes packages * :ghissue:`1903`: Skip some tests on big endian architecture (like s390x) * :ghissue:`1587`: Could tests for functionality not supported on big endians just skip? * :ghissue:`1890`: Tensor I/O in dipy_fit_dti * :ghissue:`1892`: Use the correct (row) order of the tensor components * :ghissue:`1804`: BF: added check to avoid infinite loop on consecutive coordinates. * :ghissue:`1937`: Add a warning about future changes that will happen in dipy.stats. 
* :ghissue:`1933`: Remove deprecated voxel_size from seed_from_mask * :ghissue:`1928`: Update streamlines formats example * :ghissue:`985`: Getting started example should be commented at each step * :ghissue:`1558`: Example of creating Trackvis compatible streamlines is needed * :ghissue:`1925`: FIX: Stateful tractogram examples * :ghissue:`1910`: BF: IVIM fixes * :ghissue:`1927`: BF - move import to top level * :ghissue:`1923`: [Fix] removing minmax_norm parameter from peak_direction * :ghissue:`389`: minmax_norm in peaks_directions does nothing * :ghissue:`1894`: Default sphere: From symmetric724 to repulsion724 * :ghissue:`590`: Change default sphere * :ghissue:`1722`: Error when using TCK files written by dipy * :ghissue:`1832`: Tracking workflow header affine issue & fix * :ghissue:`1812`: ENH: Statefull tractogram, robust spatial handling and IO * :ghissue:`1922`: Remove deprecated functions from imaffine * :ghissue:`1885`: BF - remove single pts streamline * :ghissue:`1913`: RF - EuDX legacy code/test * :ghissue:`283`: Spherical deconvolution model can be constructed without specifying a response * :ghissue:`1915`: Doc generation under Windows * :ghissue:`1630`: [Fix] remove Userwarning message * :ghissue:`1896`: New module: dipy.core.interpolation * :ghissue:`728`: Many interpolation functions in different places can they all go to same module? * :ghissue:`1912`: Remove deprecated parameter voxel_size * :ghissue:`1920`: How can I get streamlines using fiber orientation by bedpostx of MRtrix3? * :ghissue:`1432`: DOC/RF - update/standardize tracking examples * :ghissue:`1779`: Probabilistic Direction Getter gallery example * :ghissue:`1916`: Spherical deconvolution model CANNOT be constructed without specifying a response * :ghissue:`1918`: ENH: Remove unused `warning` package import * :ghissue:`1881`: DOC - RF tracking examples * :ghissue:`1906`: Add python_requires=">=3.5" * :ghissue:`1911`: Add python_requires * :ghissue:`1901`: window.record() function shows the coronal view * :ghissue:`1914`: [Fix] vol_idx missing in snr_in_cc Tutorial * :ghissue:`1718`: cannot import name window * :ghissue:`1747`: CI error that sometimes shows up (Python 2.7) * :ghissue:`1907`: DOC: Fix examples documentation generation warnings * :ghissue:`1908`: DOC: Fix typos * :ghissue:`1887`: DOC - updated streamline_tools example with the LocalTracking Framework * :ghissue:`1839`: [WIP] IVIM fixes * :ghissue:`1905`: ENH: Remove deprecated SH bases * :ghissue:`583`: Make a cython style guide * :ghissue:`1849`: Adds control for number of iterations in CSD recon * :ghissue:`1902`: Warn users if they don't have FURY installed * :ghissue:`1904`: DOC: Improve documentation * :ghissue:`1694`: Intermittent test failures in `test_streamline` * :ghissue:`1724`: Failure on Windows/Python 3.5 * :ghissue:`1771`: Gibbs removal * :ghissue:`1899`: Fix: Byte ordering error on Python 3.5 * :ghissue:`1898`: Replace SingleTensor by single_tensor * :ghissue:`844`: Refactor behavior of dipy.sims.voxel.single_tensor vs SingleTensor * :ghissue:`1752`: Intermittent failure on Python 3.4 * :ghissue:`1856`: Figure out how to get a "used by" button * :ghissue:`1897`: DOC: Fix typos * :ghissue:`1807`: tracking fails when npeaks=1 for peaks_from_model with tensor model * :ghissue:`1889`: `segment.bundles` package not being tested * :ghissue:`1893`: Remove scratch folder * :ghissue:`1713`: Clean up "scratch" * :ghissue:`1891`: Move the tests from test_refine_rb to test_bundles. 
* :ghissue:`1888`: BF - fix eudx tracking for npeaks=1 * :ghissue:`668`: Add transformation matrix output and input * :ghissue:`592`: Shouldn't TRACKPOINT be renamed to NODIRECTION? * :ghissue:`1879`: DOC - explicitly run the streamline generators before saving the trk * :ghissue:`1884`: Clean up: Remove streamlines memory patch * :ghissue:`1875`: ENH: Add binary tissue classifier option for tracking workflow * :ghissue:`1811`: Add binary tissue classifier option for the tracking workflow * :ghissue:`1846`: streamlines to array * :ghissue:`1831`: bvec file dimension prob * :ghissue:`1882`: DOC - clarified the state of the tracking process once stopped * :ghissue:`1880`: DOC: Fix typos and improve documentation * :ghissue:`1857`: point outside data error * :ghissue:`1878`: Clean up: Remove NUMPY_LESS_0.8.x * :ghissue:`1877`: Clean up: Remove all SCIPY_LESS_0.x.x * :ghissue:`1863`: Clean up core.optimize * :ghissue:`1876`: DOC: Fix typos * :ghissue:`1874`: DOC: Fix documentation oversights. * :ghissue:`1781`: [WIP] Random lpca * :ghissue:`1858`: NF: MSMT - CSD * :ghissue:`1843`: [NF] new workflow: FetchFlow * :ghissue:`1869`: get rotation and translation parameters of a rigid transformation * :ghissue:`1844`: Statsmodels import error * :ghissue:`1866`: MAINT: Drop support for Python 3.4 * :ghissue:`1865`: Drop Python 3.4? * :ghissue:`1850`: NF: Add is_hemispherical test * :ghissue:`1860`: Dependency Graph: Dependents? * :ghissue:`1855`: Pin scipy version for bots that need statsmodels. * :ghissue:`1168`: Nf mtms csd model * :ghissue:`1854`: Testing the CI. DO NOT MERGE * :ghissue:`1835`: [Fix] Workflow mask documentation * :ghissue:`1764`: DTI metrics workflow: mask is optional, but crashes when no mask provided * :ghissue:`1836`: Corrected median_otsu function declaration that was breaking tutorials * :ghissue:`1792`: [NF]: Add seeds to TRK * :ghissue:`1731`: Plan for dropping Python 2 support. * :ghissue:`1851`: DOC: Add single-module test/coverage instructions * :ghissue:`1845`: Signal to noise * :ghissue:`1842`: [Fix] Remove tput from fetcher * :ghissue:`1829`: When fetching ... 'tput' is not reco... * :ghissue:`1606`: Cleaned PR for Visualization Modules to Assess the quality of Registration Qualitatively. * :ghissue:`1837`: labels * :ghissue:`1786`: Upcoming DIPY lab meetings * :ghissue:`1828`: IVIM VarPro implementation throws infeasible 'x0' * :ghissue:`1833`: Affine registration of similar images * :ghissue:`1834`: Which file to convert from dicom to nifti?! * :ghissue:`1800`: Update command line documentation generation * :ghissue:`1830`: Delete six module * :ghissue:`1721`: using code style * :ghissue:`238`: Median_otsu b0slices too implicit? * :ghissue:`1821`: Fixes 238, by requiring vol_idx input with 4D images. * :ghissue:`1775`: Remove Python 2 dependency. 
* :ghissue:`1816`: Remove Deprecated function dipy.data.get_data * :ghissue:`1818`: [DOC] fix rank order typo * :ghissue:`1499`: Possible mistake about B matrix in documentation "DIY Stuff about b and q" * :ghissue:`1827`: Remove deprecated module dipy.segment.quickbundes * :ghissue:`1822`: .trk file * :ghissue:`1824`: Remove deprecated module dipy.reconst.peaks * :ghissue:`1825`: Fury visualizing bug - plane only visible for XY-slice of FODs * :ghissue:`1819`: [Fix] Diffeormorphic + CCMetric on small image * :ghissue:`1048`: divide by zero error in DiffeomorphicRegistration of small image volumes * :ghissue:`1823`: Remove accent colormap * :ghissue:`1797`: function parameters * :ghissue:`1802`: crossing fibers & fractional anisotropy * :ghissue:`1787`: RF - change default tracking algorithm for dipy_track_local to EuDX * :ghissue:`1763`: Threshold default in QballBaseModel * :ghissue:`1814`: [Fix] add a basic check on dipy_horizon * :ghissue:`1756`: Error using dipy_horizon * :ghissue:`1815`: [FIX] median_otsu deprecated parameter * :ghissue:`1761`: Deprecation warning when running median_otsu * :ghissue:`795`: dipy.tracking: Converting an array with ndim > 0 to an index will result in an error * :ghissue:`620`: Extend the AUTHOR list with more information * :ghissue:`1813`: [Fix] Add Readme for doc generation * :ghissue:`436`: Doc won't build without cvxopt * :ghissue:`1758`: additional parameters for dipy_track_local workflow * :ghissue:`1766`: NF - add tracking workflow parameters * :ghissue:`1772`: BF: changes min_signal defaults from 1 to 1e-5 * :ghissue:`1810`: [Bug FIx] dipy_fit_csa and dipy_fit_csd workflow * :ghissue:`1808`: `dipy_fit_csd` CLI is broken? * :ghissue:`1806`: Plot both IVIM fits on the same axis * :ghissue:`1794`: Removed/renamed DetTrackPAMFlow? * :ghissue:`1801`: segmentation * :ghissue:`1803`: tools * :ghissue:`1809`: datasets * :ghissue:`1799`: steps from nifiti file to tracts * :ghissue:`1712`: dipy.reconst.peak_direction_getter.PeaksAndMetricsDirectionGetter.initial_direction (dipy/reconst/peak_direction_getter.c:3075) IndexError: point outside data * :ghissue:`1789`: VarPro Fit Example IVIM * :ghissue:`1770`: Parallel reconst workflows * :ghissue:`1796`: [Fix] stripping in workflow documentation * :ghissue:`1795`: [Fix] workflows description * :ghissue:`1768`: Add afq to stats * :ghissue:`1783`: Make trilinear_interpolate4d work with more dtypes. * :ghissue:`1784`: Generalize trilinear_interpolate4d to other dtypes. * :ghissue:`1788`: Add test for different dtypes * :ghissue:`1790`: ValueError: operands could not be broadcast together with remapped shapes [original->remapped]: (13,13)->(13,13) (10000,10)->(10000,newaxis,10) * :ghissue:`1782`: Conversion from MRTrix SH basis to dipy * :ghissue:`1769`: Change "is" check for 'GCV' * :ghissue:`1320`: WIP: Bias correction * :ghissue:`1245`: non_local_means : patch size argument for local mean and variance * :ghissue:`1240`: WIP: Improve the axonal water fraction estimation. * :ghissue:`1237`: DOC: Flesh out front page example. 
* :ghissue:`1192`: Error handling in SDT
* :ghissue:`1096`: Robust Brain Extraction
* :ghissue:`832`: trilinear_interpolate4d only works on float64
* :ghissue:`578`: WIP: try out Stefan Behnel's cython coverage
* :ghissue:`1780`: [WIP]: Randommatrix localpca
* :ghissue:`1022`: Fixes #720 : Auto generate ipython notebooks
* :ghissue:`1126`: Publishing in JOSS : Added paper summary for IVIM
* :ghissue:`1603`: [WIP] - Free water elimination algorithm for single-shell DTI
* :ghissue:`1767`: BF: self.self
* :ghissue:`1759`: Add one more acknowledgement
* :ghissue:`1230`: Mean Signal DKI
* :ghissue:`1760`: Implements the inverse of decfa

.. _release1.1:

====================================
Release notes for DIPY version 1.1
====================================

GitHub stats for 2019/08/06 - 2020/01/10 (tag: 1.0.0)

These lists are automatically generated, and may be incomplete or contain duplicates.

The following 11 authors contributed 392 commits.

* Ariel Rokem
* Derek Pisner
* Eleftherios Garyfallidis
* Eric Larson
* Francois Rheault
* Jon Haitz Legarreta Gorroño
* Rafael Neto Henriques
* Ricci Woo
* Ross Lawrence
* Serge Koudoro
* Shreyas Fadnavis

We closed a total of 130 issues, 52 pull requests and 81 regular issues; this is the full list (generated with the script :file:`tools/github_stats.py`):

Pull Requests (52):

* :ghpull:`2030`: [MAINT] Release 1.1.1
* :ghpull:`2029`: Use the array proxy instead of get_fdata()
* :ghpull:`2023`: [Upcoming] Release 1.1.0
* :ghpull:`2021`: Examples update
* :ghpull:`2005`: [CI] add python3.8 matrix
* :ghpull:`2009`: Upgrade on DKI implementations
* :ghpull:`2022`: Update pep8speaks configuration
* :ghpull:`2020`: Update get_fnames
* :ghpull:`2018`: The Python is dead. Long live the Python
* :ghpull:`2017`: Update CONTRIBUTING.md
* :ghpull:`2016`: replace get_fdata by load_nifti_data
* :ghpull:`2012`: Add ability to analyze entire streamline in connectivity_matrix
* :ghpull:`2015`: Update nibabel minimum version (3.0.0)
* :ghpull:`1984`: Enabling TensorFlow 2 as an optional dependency
* :ghpull:`2013`: SFT constructor from another SFT
* :ghpull:`1862`: Use logging instead of print statements
* :ghpull:`1952`: Adding lpca, mppca and gibbs ringing workflows
* :ghpull:`1997`: [FIX] Avoid copy of Streamlines data
* :ghpull:`2008`: Fix origin nomenclature sft (center/corner)
* :ghpull:`1965`: Horizon updates b1
* :ghpull:`2011`: BF: Use the `sklearn.base` interface, instead of deprecated `sklearn.linear_model.base`
* :ghpull:`2002`: BUG: Fix workflows IO `FetchFlow` `all` dataset fetching
* :ghpull:`2000`: NF: SplitFlow
* :ghpull:`1999`: add cenir_multib to dipy_fetch workflows
* :ghpull:`1998`: [DOC] Workflows Tutorial
* :ghpull:`1988`: DOC: Multi-Shell Multi-Tissue CSD Example
* :ghpull:`1975`: Azure CI
* :ghpull:`1994`: MAINT: Update gradient for SciPy deprecation
* :ghpull:`1711`: ENH: Improve the point distribution algorithm over the sphere.
* :ghpull:`1989`: DOC: Fix parameter docstring in `dipy.io.streamline.load_tractogram`
* :ghpull:`1987`: BF: use MAM distance when that is requested.
* :ghpull:`1986`: Fix the random state for SLR and Recobundles in bundle_extraction.
* :ghpull:`1977`: Add a warning on attempted import.
* :ghpull:`1983`: [Fix] one example * :ghpull:`1981`: DOC: Improve the `doc/examples/README` file contents * :ghpull:`1980`: DOC: Improve the `doc/README` file contents * :ghpull:`1978`: DOC: Add extension to `doc/examples/README` file * :ghpull:`1979`: DOC: Change the `doc/README` file extension to lowercase * :ghpull:`1972`: Small fix in warning message * :ghpull:`1956`: stateful_tractogram_post_1.0_fixes * :ghpull:`1971`: fix broken link of devel * :ghpull:`1970`: DOC: Fix typos * :ghpull:`1929`: ENH: Relocate the `sim_response` method to allow reuse * :ghpull:`1966`: ENH: Speed up Cython, remove warnings * :ghpull:`1967`: ENH: Use language level 3 * :ghpull:`1962`: DOC: Display `master` branch AppVeyor status * :ghpull:`1961`: STYLE: Remove unused import statements * :ghpull:`1963`: DOC: Fix parameter docstring in LiFE tracking script * :ghpull:`1900`: ENH: Add streamline deformation example and workflow * :ghpull:`1948`: Minor Fix for Cross-Validation Example * :ghpull:`1951`: [FIX] update some utilities scripts * :ghpull:`1958`: Adding Missing References in SHORE docs Issues (81): * :ghissue:`2021`: Examples update * :ghissue:`2005`: [CI] add python3.8 matrix * :ghissue:`1197`: Documentation improvements * :ghissue:`1959`: DIPY open lab meetings -- fall 2019 * :ghissue:`2003`: Artefacts in DKI reconstruction * :ghissue:`2009`: Upgrade on DKI implementations * :ghissue:`2022`: Update pep8speaks configuration * :ghissue:`2020`: Update get_fnames * :ghissue:`1777`: Maybe `read_*` should return full paths? * :ghissue:`1634`: Use logging instead of print statements * :ghissue:`2018`: The Python is dead. Long live the Python * :ghissue:`2017`: Update CONTRIBUTING.md * :ghissue:`1949`: Update Horizon * :ghissue:`2016`: replace get_fdata by load_nifti_data * :ghissue:`2012`: Add ability to analyze entire streamline in connectivity_matrix * :ghissue:`2015`: Update nibabel minimum version (3.0.0) * :ghissue:`2006`: get_data deprecated * :ghissue:`1984`: Enabling TensorFlow 2 as an optional dependency * :ghissue:`2013`: SFT constructor from another SFT * :ghissue:`1862`: Use logging instead of print statements * :ghissue:`1952`: Adding lpca, mppca and gibbs ringing workflows * :ghissue:`1997`: [FIX] Avoid copy of Streamlines data * :ghissue:`2014`: Removes the appveyor configuration file. * :ghissue:`2008`: Fix origin nomenclature sft (center/corner) * :ghissue:`1965`: Horizon updates b1 * :ghissue:`2011`: BF: Use the `sklearn.base` interface, instead of deprecated `sklearn.linear_model.base` * :ghissue:`2010`: Issue with sklearn update 0.22 - Attribute Error * :ghissue:`2002`: BUG: Fix workflows IO `FetchFlow` `all` dataset fetching * :ghissue:`1995`: ModuleNotFoundError: No module named 'numpy.testing.decorators' (Numpy 1.18) * :ghissue:`2000`: NF: SplitFlow * :ghissue:`1999`: add cenir_multib to dipy_fetch workflows * :ghissue:`1998`: [DOC] Workflows Tutorial * :ghissue:`1993`: [ENH][BF] Error handling for IVIM VarPro NLLS * :ghissue:`1870`: Documentation example of MSMT CSD * :ghissue:`1988`: DOC: Multi-Shell Multi-Tissue CSD Example * :ghissue:`1975`: Azure CI * :ghissue:`1953`: Deprecate Reconst*Flow * :ghissue:`1994`: MAINT: Update gradient for SciPy deprecation * :ghissue:`1992`: MD bad / FA good * :ghissue:`1711`: ENH: Improve the point distribution algorithm over the sphere. 
* :ghissue:`184`: RF/OPT: Use scipy.optimize instead of dipy.core.sphere.disperse_charges
* :ghissue:`1989`: DOC: Fix parameter docstring in `dipy.io.streamline.load_tractogram`
* :ghissue:`1982`: small bug in recobundles for mam
* :ghissue:`1987`: BF: use MAM distance when that is requested.
* :ghissue:`1986`: Fix the random state for SLR and Recobundles in bundle_extraction.
* :ghissue:`1976`: Better warning when dipy is installed without fury
* :ghissue:`1977`: Add a warning on attempted import.
* :ghissue:`785`: WIP: TST: Add OSX to Travis.
* :ghissue:`1859`: Update Travis design
* :ghissue:`1950`: StatefullTractogram error in Horizon
* :ghissue:`1983`: [Fix] one example
* :ghissue:`1930`: dipy.io.peaks.load_peaks() IOError: This function supports only PAM5 (HDF5) files
* :ghissue:`1981`: DOC: Improve the `doc/examples/README` file contents
* :ghissue:`1980`: DOC: Improve the `doc/README` file contents
* :ghissue:`1978`: DOC: Add extension to `doc/examples/README` file
* :ghissue:`1979`: DOC: Change the `doc/README` file extension to lowercase
* :ghissue:`1968`: Broken site links (404) on new website when accessed from google
* :ghissue:`1972`: Small fix in warning message
* :ghissue:`1960`: DOC: MCSD Tutorial
* :ghissue:`1867`: WIP: CircleCI
* :ghissue:`1956`: stateful_tractogram_post_1.0_fixes
* :ghissue:`1971`: fix broken link of devel
* :ghissue:`1970`: DOC: Fix typos
* :ghissue:`1929`: ENH: Relocate the `sim_response` method to allow reuse
* :ghissue:`1966`: ENH: Speed up Cython, remove warnings
* :ghissue:`1967`: ENH: Use language level 3
* :ghissue:`1954`: WIP: Second series of updates and fixes for Horizon
* :ghissue:`1964`: Added a line break
* :ghissue:`1962`: DOC: Display `master` branch AppVeyor status
* :ghissue:`1961`: STYLE: Remove unused import statements
* :ghissue:`1963`: DOC: Fix parameter docstring in LiFE tracking script
* :ghissue:`1900`: ENH: Add streamline deformation example and workflow
* :ghissue:`1840`: Tutorial showing how to apply deformations to streamlines
* :ghissue:`1948`: Minor Fix for Cross-Validation Example
* :ghissue:`1951`: [FIX] update some utilities scripts
* :ghissue:`1841`: Lab meetings, summer 2019
* :ghissue:`1958`: Adding Missing References in SHORE docs
* :ghissue:`1955`: where is intersphinx inventory under new website domain?
* :ghissue:`1401`: Workflows documentation needs more work
* :ghissue:`1442`: ENH: Minor presentation enhancement to AffineMap object
* :ghissue:`1791`: ENH: Save seeds in TRK/TCK

.. _release1.10:

=====================================
Release notes for DIPY version 1.10
=====================================

GitHub stats for 2024/03/08 - 2024/12/09 (tag: 1.9.0)

These lists are automatically generated, and may be incomplete or contain duplicates.

The following 25 authors contributed 871 commits.

* Alex Rockhill
* Ariel Rokem
* Asa Gilmore
* Atharva Shah
* Bramsh Qamar
* Charles Poirier
* Eleftherios Garyfallidis
* Eric Larson
* Florent Wijanto
* Gabriel Girard
* Jon Haitz Legarreta Gorroño
* Jong Sung Park
* Julio Villalon
* Kaibo Tang
* Kaustav Deka
* Maharshi Gor
* Martin Kozár
* Matt Cieslak
* Matthew Feickert
* Michael R. Crusoe
* Prajwal Reddy
* Rafael Neto Henriques
* Sam Coveney
* Sandro Turriate
* Serge Koudoro

We closed a total of 445 issues, 187 pull requests and 258 regular issues; this is the full list (generated with the script :file:`tools/github_stats.py`):

Pull Requests (187):

* :ghpull:`3420`: CI: define output path for coverage
* :ghpull:`3415`: tests: leave no trace behind
* :ghpull:`3417`: MAINT: Address changes to API with new (1.6) version of cvxpy.
* :ghpull:`3414`: Bump codecov/codecov-action from 4 to 5 in the actions group
* :ghpull:`3412`: BF: Fixing single-slice data error in Patch2Self
* :ghpull:`3409`: [RF] Cleanup deprecated functions and remove some old legacy scripts
* :ghpull:`3407`: CI: unpin cvxpy
* :ghpull:`3408`: RF: fix step argument in DTI NLLS model
* :ghpull:`3402`: DOC: Add Exporting and importing Mapping field information in recipes and registration tutorial
* :ghpull:`3403`: Robust split idea rebase tidy
* :ghpull:`3404`: RF: Improve and Simplify version management
* :ghpull:`3395`: RF: Refactor `nn` module and deprecate `tensorflow` module
* :ghpull:`1957`: niftis2pam, pam2niftis Workflow
* :ghpull:`3396`: DOC: Adopt `sphinxcontrib-bibtex` for reconst model list refs
* :ghpull:`3399`: STYLE: Remove empty quoted paragraph in developer guide index
* :ghpull:`3398`: DOC: Improve first interaction GitHub Actions config file
* :ghpull:`2826`: [ENH] Compute fiber density and fiber spread from ODF using Bingham distributions
* :ghpull:`3303`: NF: Patch2Self3
* :ghpull:`3392`: [WIP] NF: Adding pytorch versions
* :ghpull:`3368`: [NF] DAM implementation for tissue classification using DMRI signal properties.
* :ghpull:`3390`: DOC: Update DTI tutorial title
* :ghpull:`3391`: STYLE: removing pep8speaks conf file in favor of pre-commit action
* :ghpull:`3393`: RF: fix API generation
* :ghpull:`3387`: DOC: Add first interaction GHA workflow file
* :ghpull:`3386`: DOC: Update the CI tool to GHA in `CONTRIBUTING` file
* :ghpull:`3384`: BF: Updated non_local_means
* :ghpull:`3140`: NF: Adding correct_mask to median_otsu
* :ghpull:`3345`: DOC: Skip element in documentation generation
* :ghpull:`3372`: BugFix: New Atlas OMM not working with Horizon
* :ghpull:`3381`: RF: Add support for sequential processing in Gibbs unringing
* :ghpull:`3380`: ensure all calls to a python executable are to `python3`
* :ghpull:`3376`: DOC: Use placeholder for unused variable in `streamline_tools`
* :ghpull:`3373`: DOC: Consider warnings as errors in documentation CI build
* :ghpull:`3379`: DOC: Remove example files labels
* :ghpull:`3378`: doc: Link reconstruction model list to multiple pages
* :ghpull:`3377`: DOC: Miscellaneous improvements to `PeakActor` docstring
* :ghpull:`3375`: DOC: Reference footnote in `streamline_tools`
* :ghpull:`3348`: DOC: Address remaining some warnings
* :ghpull:`3369`: ci: Bump scientific-python/upload-nightly-action from 0.6.0 to 0.6.1
* :ghpull:`3367`: Bump scientific-python/upload-nightly-action from 0.5.0 to 0.6.0 in the actions group
* :ghpull:`3366`: DOC: Make `rng` optional parameter docstrings consistent
* :ghpull:`3365`: DOC: Fix some cites.
* :ghpull:`3356`: BF: fix s390x compatibility
* :ghpull:`3360`: DOC: Remove unnecessary leading whitespace in rst doc paragraph
* :ghpull:`3357`: FIX: remove keyword only warning on examples (part2)
* :ghpull:`3343`: BF Fixing transformation function
* :ghpull:`3355`: FIX: missing keyword only arguments on example
* :ghpull:`3221`: Updating BundleWarp default value of alpha
* :ghpull:`3323`: BF: Allow passing kwargs in fit method, by moving parallelization kwargs elsewhere, including PEP 3102
* :ghpull:`3351`: DOC: Fix miscellaneous documentation build warnings (part 3)
* :ghpull:`3306`: NF: Update to examples
* :ghpull:`3293`: BF: Fix attempting to delete frame local symbol table variable
* :ghpull:`3257`: NF: Applying Decorators in Module (Reconst)
* :ghpull:`3254`: NF: Applying Decorators in Module (Direction)
* :ghpull:`3317`: DOC: Miscellaneous documentation improvements
* :ghpull:`3350`: DOC: Do not use the `scale` option for URL-based images
* :ghpull:`3344`: DOC: Fix miscellaneous documentation build warnings (part 2)
* :ghpull:`3346`: RF: Removal of keyword form Cython files
* :ghpull:`3341`: DOC: Host MNI template note references in references file
* :ghpull:`3333`: RF: Decorator fix
* :ghpull:`3335`: RF: Allow parallel processing for sphinx extension
* :ghpull:`3342`: RF: Doctest warnings
* :ghpull:`3337`: DOC: Fix miscellaneous documentation build warnings
* :ghpull:`3338`: DOC: Cite examples references using `sphinxcontrib-bibtex`
* :ghpull:`3319`: DOC: Use references bibliography file for DIPY citation file
* :ghpull:`3321`: BF: Set the superclass `fit_method` param value to the one provided
* :ghpull:`3324`: RF: Refactored for keyword arguments
* :ghpull:`3340`: CI: pin cvxpy to 1.4.4 until 1.5.x issues are solved
* :ghpull:`3316`: DOC: Cite code base references using `sphinxcontrib-bibtex`
* :ghpull:`3332`: BF: Set the `Diso` parameter value to the one provided
* :ghpull:`3325`: DOC: Fix warnings related to displayed math expressions
* :ghpull:`3331`: DOC: Miscellaneous documentation improvements (part 3)
* :ghpull:`3329`: STYLE: Use a leading underscore to name private methods
* :ghpull:`3330`: DOC: Do not use unfinished double backticks
* :ghpull:`3320`: DOC: Miscellaneous documentation improvements (part 2)
* :ghpull:`3318`: RF: Remove unused parameters from method signature
* :ghpull:`3310`: DOC: Cite `nn` references through `sphinxcontrib-bibtex`
* :ghpull:`3315`: RF: remove legacy numpydoc
* :ghpull:`2810`: [DOC] introducing sphinxcontrib-Bibtex to improve reference management
* :ghpull:`3312`: DOC: Use `misc` for other types of BibTeX entries
* :ghpull:`3309`: DOC: Miscellaneous doc formatting fixes (part 4)
* :ghpull:`3308`: DOC: Rework the BibTeX bibliography file
* :ghpull:`3275`: FIX: remove sagital from codespellrc ignore list |# codespell:ignore sagital|
* :ghpull:`3304`: DOC: Miscellaneous doc formatting fixes (part 3)
* :ghpull:`3295`: ENH: Add a GHA workflow file to build docs
* :ghpull:`3302`: DOC: Miscellaneous doc formatting fixes (part 2)
* :ghpull:`3301`: FIX: explicit keyword argument for Horizon
* :ghpull:`3297`: DOC: Miscellaneous doc formatting fixes
* :ghpull:`3291`: FIX: nightly wheels for macOS arm64
* :ghpull:`3262`: NF: Applying Decorators in Module (Visualization)
* :ghpull:`3263`: NF: Applying Decorators in Module (Workflow)
* :ghpull:`3287`: NF: Add `__len__` to `GradientTable`
* :ghpull:`3260`: NF: Applying Decorators in Module (Tracking)
* :ghpull:`3256`: NF: Applying Decorators in Module (NeuralNetwork)
* :ghpull:`3258`: NF: Applying Decorators in Module (Segment)
* :ghpull:`3249`: NF: Applying Decorators in Module (Align)
* :ghpull:`3251`: NF: Applying Decorators in Module (Core)
* :ghpull:`3279`: FIX: Explicit type origin for long to solve the cython error during compilation
* :ghpull:`3259`: NF: Applying Decorators in Module (Sims)
* :ghpull:`3252`: NF: Applying Decorators in Module (Denoise)
* :ghpull:`3261`: NF: Applying Decorators in Module (Utils)
* :ghpull:`3255`: NF: Applying Decorators in Module (Io)
* :ghpull:`3253`: NF: Applying Decorators in Module (Data)
* :ghpull:`3233`: STYLE: Set `stacklevel` argument explicitly to warning messages
* :ghpull:`3239`: NF: Decorator for keyword-only argument
* :ghpull:`2593`: Embed parallelization into the multi_voxel_fit decorator.
* :ghpull:`3274`: RF: Update pyproject.toml for numpy 2.0
* :ghpull:`3273`: STYLE: Make statement dwell on a single line
* :ghpull:`3237`: Add support for tensor-valued spherical functions in `interp_rbf`
* :ghpull:`3245`: RF: Switch from using sparse `*_matrix` to `*_array`.
* :ghpull:`3267`: STYLE: Avoid deprecated NumPy types and methods for NumPy 2.0 compat
* :ghpull:`3264`: TEST: avoid direct comparison of floating point numbers
* :ghpull:`3268`: STYLE: Prefer using `np.asarray` to avoid copy while creating an array
* :ghpull:`3271`: RF: Do not use `np.any` for checking optional array parameters
* :ghpull:`3250`: DOC: Fix param order
* :ghpull:`3269`: STYLE: Prefer using `isin` over `in1d`
* :ghpull:`3238`: NF - add affine to peaks_from_position
* :ghpull:`3247`: STYLE: Add imported symbols to __all__ in direction module
* :ghpull:`3246`: STYLE: Import explicitly `direction.peaks` symbols
* :ghpull:`3241`: RF: Codespell fix for CI
* :ghpull:`3228`: STYLE: Fix unused loop control variable warning
* :ghpull:`3235`: STYLE: Do not allow running unintended modules as scripts
* :ghpull:`3230`: STYLE: Fix function definition loop variable binding warning
* :ghpull:`3232`: STYLE: Simplify implicitly concatenated strings
* :ghpull:`3229`: STYLE: Prefer using f-strings
* :ghpull:`3224`: BF: Rewrite list creation as `list()` instead of `[]`
* :ghpull:`3216`: STYLE: Format code using `ruff`
* :ghpull:`3178`: DOC: Fixes the AFQ tract profile tutorial.
* :ghpull:`3218`: STYLE: Fix codespell issues
* :ghpull:`3209`: [CI] Move filterwarnings from pyproject to conftest
* :ghpull:`3220`: [RF] from `os.fork` to `spawn` for multiprocessing
* :ghpull:`3214`: RF - remove buffer argument in pmf_gen.get_pmf_value(.)
* :ghpull:`3219`: [ENH] Prefer CLARABEL over ECOS as the CVXPY solver * :ghpull:`3215`: tests: correct module-level setup * :ghpull:`3211`: [RF] PMF Gen: from memoryview to pointer * :ghpull:`3210`: Python 3.13: Fix tests for next Python release * :ghpull:`3212`: STYLE: Relocate `pre-commit` and `ruff` packages to style requirements * :ghpull:`3205`: BF: Declare variables holding integers as `cnp.npy_intp` over `double` * :ghpull:`3174`: NF - initial directions from seed positions * :ghpull:`3207`: DOC: Fix Cython method parameter type description * :ghpull:`3206`: BF: Use `cnp.npy_intp` instead of `int` as counter * :ghpull:`3204`: DOC: Fix documentation typos * :ghpull:`3202`: [TEST] Add flag to turn warnings into errors for pytest * :ghpull:`3158`: ENH: Remove filtering `UserWarning` warnings in test config file * :ghpull:`3194`: MAINT: fix warning * :ghpull:`3199`: Bump pre-commit/action from 3.0.0 to 3.0.1 in the actions group * :ghpull:`3182`: [NF] Add DiSCo challenge data fetcher * :ghpull:`3197`: ENH: Fix miscellaneous warnings in `dki` reconstruction module * :ghpull:`3198`: ENH: Ensure that `arccos` argument is in the [-1,1] range * :ghpull:`3191`: [RF] allow float and double for `trilinear_interpolate4d_c` * :ghpull:`3151`: DKI Updates: (new radial tensor kurtosis metric, updated documentation and missing tests) * :ghpull:`3189`: Update affine_registration to clarify returns and make them consistent with docstring * :ghpull:`3176`: ENH: allow vol_idx in align workflow * :ghpull:`3188`: ENH: Add `pre-commit` to project `dev` dependencies * :ghpull:`3183`: ENH: Specify the solver for the MAP-MRI positivity constraint test * :ghpull:`3184`: STYLE: Sort import statements using `ruff` * :ghpull:`3181`: [PEP8] fix pep8 and docstring style in `dti.py` file * :ghpull:`3177`: Loading Peaks faster with complete range and synchronization functionality. 
* :ghpull:`3180`: BF: Fix bug in mode for isotropic tensors * :ghpull:`3172`: [ENH] Enable range for dipy_median_otsu workflow * :ghpull:`3171`: Clean up for tabs and tab manager * :ghpull:`3168`: Feature/peaks tab revamp * :ghpull:`3128`: NF: Fibonacci Hemisphere * :ghpull:`3153`: ENH: add save peaks to dipy_fit_dti, dki * :ghpull:`3156`: ENH: Implement NDC from Yeh2019 * :ghpull:`3161`: DOC: Fix `tri` parameter docstring in `viz.projections.sph_project` * :ghpull:`3163`: STYLE: Make `fury` and `matplotlib` presence message in test consistent * :ghpull:`3162`: ENH: Fix variable potentially being referenced before assignment * :ghpull:`3144`: ROI tab revamped * :ghpull:`2982`: [FIX] Force the use of pre-wheels * :ghpull:`3134`: Feature/cluster revamp * :ghpull:`3146`: [NF] Add 30 Bundle brain atlas fetcher * :ghpull:`3150`: BUG: Fix bug with nightly wheel build * :ghpull:`3149`: ENH: Miscellaneous cleanup * :ghpull:`3148`: ENH: Fix HDF5 key warning when saving BUAN profile data * :ghpull:`3138`: [CI] update CI's script * :ghpull:`3126`: Bugfix for ROI images updates * :ghpull:`3141`: ENH: Fix miscellaneous warnings * :ghpull:`3139`: BF: Removing Error/Warning from Tensorflow 2.16 * :ghpull:`3132`: BF: Removed allow_break * :ghpull:`3135`: DOC: Fix documentation URLs * :ghpull:`3133`: grg-sphinx-theme added as dependency * :ghpull:`3127`: Feature/viz interface tutorials * :ghpull:`3120`: DOC - Removed unnecessary line from tracking example * :ghpull:`3110`: Viz cli tutorial updated * :ghpull:`3086`: [RF] Fix spherical harmonic terminology swap * :ghpull:`3095`: [UPCOMING] Release preparation for 1.9.0 Issues (258): * :ghissue:`3420`: CI: define output path for coverage * :ghissue:`3415`: tests: leave no trace behind * :ghissue:`3417`: MAINT: Address changes to API with new (1.6) version of cvxpy. 
* :ghissue:`3414`: Bump codecov/codecov-action from 4 to 5 in the actions group * :ghissue:`2469`: Error in patch2self for single-slice data * :ghissue:`3412`: BF: Fixing single-slice data error in Patch2Self * :ghissue:`1531`: Test suite fails with errors regarding the ConvexHull of scipy.spatial.qhull objects * :ghissue:`3409`: [RF] Cleanup deprecated functions and remove some old legacy scripts * :ghissue:`3410`: `radial_scale` parameter of `actor.odf_slicer` has some issues * :ghissue:`3407`: CI: unpin cvxpy * :ghissue:`3030`: I do not see a way to change step as used by reconst.dti.TensorModel.fit() * :ghissue:`3408`: RF: fix step argument in DTI NLLS model * :ghissue:`3361`: Exporting and importing SymmetricDiffeomorphicRegistration outputs * :ghissue:`3402`: DOC: Add Exporting and importing Mapping field information in recipes and registration tutorial * :ghissue:`3170`: Iteratively reweighted least squares for robust fitting * :ghissue:`3358`: robust algorithm REBASE * :ghissue:`3403`: Robust split idea rebase tidy * :ghissue:`3115`: Fix `get_info` for release package * :ghissue:`3404`: RF: Improve and Simplify version management * :ghissue:`3401`: Robust split idea rebase arokem * :ghissue:`3395`: RF: Refactor `nn` module and deprecate `tensorflow` module * :ghissue:`1957`: niftis2pam, pam2niftis Workflow * :ghissue:`3396`: DOC: Adopt `sphinxcontrib-bibtex` for reconst model list refs * :ghissue:`3399`: STYLE: Remove empty quoted paragraph in developer guide index * :ghissue:`3398`: DOC: Improve first interaction GitHub Actions config file * :ghissue:`2826`: [ENH] Compute fiber density and fiber spread from ODF using Bingham distributions * :ghissue:`3169`: [RF] Add peaks generation to reconst workflows * :ghissue:`3303`: NF: Patch2Self3 * :ghissue:`3392`: [WIP] NF: Adding pytorch versions * :ghissue:`3368`: [NF] DAM implementation for tissue classification using DMRI signal properties. 
* :ghissue:`3389`: Single tensor tutorial - hard to find * :ghissue:`3390`: DOC: Update DTI tutorial title * :ghissue:`3391`: STYLE: removing pep8speaks conf file in favor of pre-commit action * :ghissue:`3393`: RF: fix API generation * :ghissue:`3387`: DOC: Add first interaction GHA workflow file * :ghissue:`3386`: DOC: Update the CI tool to GHA in `CONTRIBUTING` file * :ghissue:`3384`: BF: Updated non_local_means * :ghissue:`3285`: Awkward interaction of dipy.denoise.non_local_means.non_local_means and dipy.denoise.noise_estimate.estimate_sigma * :ghissue:`3140`: NF: Adding correct_mask to median_otsu * :ghissue:`3345`: DOC: Skip element in documentation generation * :ghissue:`3372`: BugFix: New Atlas OMM not working with Horizon * :ghissue:`2757`: Use for loop when `num_processes=1` in gibbs_removal() * :ghissue:`3381`: RF: Add support for sequential processing in Gibbs unringing * :ghissue:`3380`: ensure all calls to a python executable are to `python3` * :ghissue:`3376`: DOC: Use placeholder for unused variable in `streamline_tools` * :ghissue:`3373`: DOC: Consider warnings as errors in documentation CI build * :ghissue:`3379`: DOC: Remove example files labels * :ghissue:`3374`: DOC: Remove `tracking_introduction_eudx` from quick start * :ghissue:`3347`: Reconstruction model list not linked in documentation since it cannot be located * :ghissue:`3378`: doc: Link reconstruction model list to multiple pages * :ghissue:`2665`: DOC: Improve the CLI documentation rendering * :ghissue:`3377`: DOC: Miscellaneous improvements to `PeakActor` docstring * :ghissue:`3375`: DOC: Reference footnote in `streamline_tools` * :ghissue:`3326`: Avoid Sphinx warnings from inherited third-party method documentation * :ghissue:`3348`: DOC: Address remaining some warnings * :ghissue:`3349`: DOC: Fix footbibliography-related errors in workflow help doc * :ghissue:`3370`: dipy_buan_profiles CLI IndexError * :ghissue:`3369`: ci: Bump scientific-python/upload-nightly-action from 0.6.0 to 0.6.1 * :ghissue:`3367`: Bump scientific-python/upload-nightly-action from 0.5.0 to 0.6.0 in the actions group * :ghissue:`3366`: DOC: Make `rng` optional parameter docstrings consistent * :ghissue:`3248`: [NF] Multicompartment DWI simulation technique implementation * :ghissue:`3365`: DOC: Fix some cites. 
* :ghissue:`3363`: Avoid SyntaxWarnings due to embedded LaTeX
* :ghissue:`2886`: test_streamwarp.py: Little-endian buffer not supported on big-endian compiler
* :ghissue:`3356`: BF: fix s390x compatibility
* :ghissue:`3360`: DOC: Remove unnecessary leading whitespace in rst doc paragraph
* :ghissue:`3357`: FIX: remove keyword only warning on examples (part2)
* :ghissue:`3343`: BF Fixing transformation function
* :ghissue:`3355`: FIX: missing keyword only arguments on example
* :ghissue:`2143`: Build template CLI
* :ghissue:`3221`: Updating BundleWarp default value of alpha
* :ghissue:`3286`: BF: Allow passing kwargs in `fit` method, by moving parallelization kwargs elsewhere
* :ghissue:`3323`: BF: Allow passing kwargs in fit method, by moving parallelization kwargs elsewhere, including PEP 3102
* :ghissue:`3351`: DOC: Fix miscellaneous documentation build warnings (part 3)
* :ghissue:`3306`: NF: Update to examples
* :ghissue:`3292`: Python 3.13: `TypeError: cannot remove variables from FrameLocalsProxy` in tests
* :ghissue:`3293`: BF: Fix attempting to delete frame local symbol table variable
* :ghissue:`3257`: NF: Applying Decorators in Module (Reconst)
* :ghissue:`3254`: NF: Applying Decorators in Module (Direction)
* :ghissue:`3317`: DOC: Miscellaneous documentation improvements
* :ghissue:`3350`: DOC: Do not use the `scale` option for URL-based images
* :ghissue:`3344`: DOC: Fix miscellaneous documentation build warnings (part 2)
* :ghissue:`3346`: RF: Removal of keyword form Cython files
* :ghissue:`2394`: Documentation References - Remove (1, 2, ...)
* :ghissue:`3341`: DOC: Host MNI template note references in references file
* :ghissue:`3333`: RF: Decorator fix
* :ghissue:`3335`: RF: Allow parallel processing for sphinx extension
* :ghissue:`3342`: RF: Doctest warnings
* :ghissue:`3337`: DOC: Fix miscellaneous documentation build warnings
* :ghissue:`3338`: DOC: Cite examples references using `sphinxcontrib-bibtex`
* :ghissue:`3319`: DOC: Use references bibliography file for DIPY citation file
* :ghissue:`3321`: BF: Set the superclass `fit_method` param value to the one provided
* :ghissue:`3339`: BUG: Bug with params
* :ghissue:`3324`: RF: Refactored for keyword arguments
* :ghissue:`3340`: CI: pin cvxpy to 1.4.4 until 1.5.x issues are solved
* :ghissue:`3316`: DOC: Cite code base references using `sphinxcontrib-bibtex`
* :ghissue:`3332`: BF: Set the `Diso` parameter value to the one provided
* :ghissue:`3325`: DOC: Fix warnings related to displayed math expressions
* :ghissue:`3331`: DOC: Miscellaneous documentation improvements (part 3)
* :ghissue:`3329`: STYLE: Use a leading underscore to name private methods
* :ghissue:`3330`: DOC: Do not use unfinished double backticks
* :ghissue:`3320`: DOC: Miscellaneous documentation improvements (part 2)
* :ghissue:`3318`: RF: Remove unused parameters from method signature
* :ghissue:`3310`: DOC: Cite `nn` references through `sphinxcontrib-bibtex`
* :ghissue:`3315`: RF: remove legacy numpydoc
* :ghissue:`1026`: Multprocessing the multivoxel fit
* :ghissue:`2810`: [DOC] introducing sphinxcontrib-Bibtex to improve reference management
* :ghissue:`3312`: DOC: Use `misc` for other types of BibTeX entries
* :ghissue:`3309`: DOC: Miscellaneous doc formatting fixes (part 4)
* :ghissue:`3308`: DOC: Rework the BibTeX bibliography file
* :ghissue:`3223`: Remove`sagital` from de codespell ignore list |# codespell:ignore sagital|
* :ghissue:`3275`: FIX: remove sagital from codespellrc ignore list |# codespell:ignore sagital|
* :ghissue:`3298`: Inaccurate docstring in `omp.pyx::determine_num_threads`
* :ghissue:`3304`: DOC: Miscellaneous doc formatting fixes (part 3)
* :ghissue:`3305`: How to apply NODDI sequence in dipy
* :ghissue:`3295`: ENH: Add a GHA workflow file to build docs
* :ghissue:`3056`: [WIP][RF] Use lazy loading
* :ghissue:`3302`: DOC: Miscellaneous doc formatting fixes (part 2)
* :ghissue:`3301`: FIX: explicit keyword argument for Horizon
* :ghissue:`3231`: Coverage build failing on and off in to a numpy-related statement
* :ghissue:`3297`: DOC: Miscellaneous doc formatting fixes
* :ghissue:`3300`: BF: Title Fix
* :ghissue:`3299`: Numpy compatibility issue
* :ghissue:`3291`: FIX: nightly wheels for macOS arm64
* :ghissue:`3262`: NF: Applying Decorators in Module (Visualization)
* :ghissue:`3263`: NF: Applying Decorators in Module (Workflow)
* :ghissue:`3283`: BUG: Gradient table requires at least 2 orientations
* :ghissue:`3287`: NF: Add `__len__` to `GradientTable`
* :ghissue:`3282`: Define ``__len__`` within ``GradientTable``?
* :ghissue:`3260`: NF: Applying Decorators in Module (Tracking)
* :ghissue:`3256`: NF: Applying Decorators in Module (NeuralNetwork)
* :ghissue:`3258`: NF: Applying Decorators in Module (Segment)
* :ghissue:`3249`: NF: Applying Decorators in Module (Align)
* :ghissue:`3251`: NF: Applying Decorators in Module (Core)
* :ghissue:`3279`: FIX: Explicit type origin for long to solve the cython error during compilation
* :ghissue:`3242`: Broken source installation
* :ghissue:`3259`: NF: Applying Decorators in Module (Sims)
* :ghissue:`3252`: NF: Applying Decorators in Module (Denoise)
* :ghissue:`3280`: numpy.core.multiarray failed when importing dipy.io.streamline (dipy.tracking.streamlinespeed)
* :ghissue:`3261`: NF: Applying Decorators in Module (Utils)
* :ghissue:`3255`: NF: Applying Decorators in Module (Io)
* :ghissue:`3253`: NF: Applying Decorators in Module (Data)
* :ghissue:`3233`: STYLE: Set `stacklevel` argument explicitly to warning messages
* :ghissue:`3277`: can't find dipy_buan_profiles!!!
* :ghissue:`3029`: Migrating to Keyword Only arguments (PEP 3102)
* :ghissue:`3239`: NF: Decorator for keyword-only argument
* :ghissue:`2593`: Embed parallelization into the multi_voxel_fit decorator.
* :ghissue:`3274`: RF: Update pyproject.toml for numpy 2.0
* :ghissue:`3265`: NumPy 2.0 incompatibility
* :ghissue:`3266`: NF: Call `cnp.import_array()` explicitly to use the NumPy C API
* :ghissue:`3273`: STYLE: Make statement dwell on a single line
* :ghissue:`3236`: Allow `interp_rbf` to accept tensor-valued spherical functions
* :ghissue:`3237`: Add support for tensor-valued spherical functions in `interp_rbf`
* :ghissue:`3245`: RF: Switch from using sparse `*_matrix` to `*_array`.
* :ghissue:`3267`: STYLE: Avoid deprecated NumPy types and methods for NumPy 2.0 compat * :ghissue:`3264`: TEST: avoid direct comparison of floating point numbers * :ghissue:`3268`: STYLE: Prefer using `np.asarray` to avoid copy while creating an array * :ghissue:`3271`: RF: Do not use `np.any` for checking optional array parameters * :ghissue:`3243`: Create `DiffeomorphicMap` object with saved nifti forward warp data * :ghissue:`3250`: DOC: Fix param order * :ghissue:`3269`: STYLE: Prefer using `isin` over `in1d` * :ghissue:`3238`: NF - add affine to peaks_from_position * :ghissue:`3247`: STYLE: Add imported symbols to __all__ in direction module * :ghissue:`3246`: STYLE: Import explicitly `direction.peaks` symbols * :ghissue:`3241`: RF: Codespell fix for CI * :ghissue:`3228`: STYLE: Fix unused loop control variable warning * :ghissue:`3235`: STYLE: Do not allow running unintended modules as scripts * :ghissue:`3230`: STYLE: Fix function definition loop variable binding warning * :ghissue:`3232`: STYLE: Simplify implicitly concatenated strings * :ghissue:`3229`: STYLE: Prefer using f-strings * :ghissue:`3224`: BF: Rewrite list creation as `list()` instead of `[]` * :ghissue:`3216`: STYLE: Format code using `ruff` * :ghissue:`3175`: Tract profiles in afq example look all wrong * :ghissue:`3178`: DOC: Fixes the AFQ tract profile tutorial. * :ghissue:`3218`: STYLE: Fix codespell issues * :ghissue:`3209`: [CI] Move filterwarnings from pyproject to conftest * :ghissue:`3220`: [RF] from `os.fork` to `spawn` for multiprocessing * :ghissue:`3214`: RF - remove buffer argument in pmf_gen.get_pmf_value(.) * :ghissue:`3196`: Enhancing Gradient Approximation in DTI Tests #3155 * :ghissue:`3203`: [WIP][CI] warning as error at compilation level * :ghissue:`3219`: [ENH] Prefer CLARABEL over ECOS as the CVXPY solver * :ghissue:`3165`: 1.9.0 system test failures * :ghissue:`3215`: tests: correct module-level setup * :ghissue:`3217`: ENH: Prefer `CLARABEL` over `ECOS` as the CVXPY solver * :ghissue:`3211`: [RF] PMF Gen: from memoryview to pointer * :ghissue:`3210`: Python 3.13: Fix tests for next Python release * :ghissue:`3212`: STYLE: Relocate `pre-commit` and `ruff` packages to style requirements * :ghissue:`3205`: BF: Declare variables holding integers as `cnp.npy_intp` over `double` * :ghissue:`3174`: NF - initial directions from seed positions * :ghissue:`3207`: DOC: Fix Cython method parameter type description * :ghissue:`3206`: BF: Use `cnp.npy_intp` instead of `int` as counter * :ghissue:`3204`: DOC: Fix documentation typos * :ghissue:`3208`: BF: Cast operation explicitly to `cnp.npy_intp` in denoising Cython * :ghissue:`3202`: [TEST] Add flag to turn warnings into errors for pytest * :ghissue:`3201`: TEST: Turn warnings into errors when calling `pytest` in CI testing * :ghissue:`3158`: ENH: Remove filtering `UserWarning` warnings in test config file * :ghissue:`3200`: Check relevant warnings raised (DO NOT MERGE) * :ghissue:`2299`: NF: Add array parsing capabilities to the CLIs * :ghissue:`2880`: improve test_io_fetch_fetcher_datanames * :ghissue:`3194`: MAINT: fix warning * :ghissue:`3199`: Bump pre-commit/action from 3.0.0 to 3.0.1 in the actions group * :ghissue:`3182`: [NF] Add DiSCo challenge data fetcher * :ghissue:`3197`: ENH: Fix miscellaneous warnings in `dki` reconstruction module * :ghissue:`3198`: ENH: Ensure that `arccos` argument is in the [-1,1] range * :ghissue:`3186`: Update `trilinear_interpolate4d` to accept float and double * :ghissue:`3191`: [RF] allow float and double for 
`trilinear_interpolate4d_c` * :ghissue:`3151`: DKI Updates: (new radial tensor kurtosis metric, updated documentation and missing tests) * :ghissue:`3185`: Improve consistency of affine_registration docstring * :ghissue:`3189`: Update affine_registration to clarify returns and make them consistent with docstring * :ghissue:`3187`: Setting `Legacy=True` SH basis is not possible for SH models * :ghissue:`3176`: ENH: allow vol_idx in align workflow * :ghissue:`3188`: ENH: Add `pre-commit` to project `dev` dependencies * :ghissue:`3183`: ENH: Specify the solver for the MAP-MRI positivity constraint test * :ghissue:`3184`: STYLE: Sort import statements using `ruff` * :ghissue:`3181`: [PEP8] fix pep8 and docstring style in `dti.py` file * :ghissue:`3177`: Loading Peaks faster with complete range and synchronization functionality. * :ghissue:`3145`: dki_fit errors from divide by zero * :ghissue:`3180`: BF: Fix bug in mode for isotropic tensors * :ghissue:`3172`: [ENH] Enable range for dipy_median_otsu workflow * :ghissue:`3171`: Clean up for tabs and tab manager * :ghissue:`2796`: Tract profiles in the afq_profile example look terrible * :ghissue:`1985`: Something is wonky with the AFQ tracts profile example * :ghissue:`3168`: Feature/peaks tab revamp * :ghissue:`2036`: [WIP] NF - Add Closest Peak direction getter from peaks array * :ghissue:`3128`: NF: Fibonacci Hemisphere * :ghissue:`3122`: Add `peaks_from_model` to `dipy_fit_dti` CLI * :ghissue:`3153`: ENH: add save peaks to dipy_fit_dti, dki * :ghissue:`3113`: [FIX] Nlmeans Algorithm Enhancement #2950 * :ghissue:`3111`: Add support for sequential processing in Gibbs unringing #2757 * :ghissue:`3154`: ENH: Add neighboring DWI correlation QC metric * :ghissue:`3156`: ENH: Implement NDC from Yeh2019 * :ghissue:`3161`: DOC: Fix `tri` parameter docstring in `viz.projections.sph_project` * :ghissue:`3163`: STYLE: Make `fury` and `matplotlib` presence message in test consistent * :ghissue:`3162`: ENH: Fix variable potentially being referenced before assignment * :ghissue:`3144`: ROI tab revamped * :ghissue:`2982`: [FIX] Force the use of pre-wheels * :ghissue:`3134`: Feature/cluster revamp * :ghissue:`3146`: [NF] Add 30 Bundle brain atlas fetcher * :ghissue:`3150`: BUG: Fix bug with nightly wheel build * :ghissue:`3149`: ENH: Miscellaneous cleanup * :ghissue:`3148`: ENH: Fix HDF5 key warning when saving BUAN profile data * :ghissue:`3138`: [CI] update CI's script * :ghissue:`3142`: Horizon slider does not show proper 0-1 range images such as FA * :ghissue:`3126`: Bugfix for ROI images updates * :ghissue:`3141`: ENH: Fix miscellaneous warnings * :ghissue:`3139`: BF: Removing Error/Warning from Tensorflow 2.16 * :ghissue:`3096`: TissueClassifierHMRF has some argument logic error * :ghissue:`3132`: BF: Removed allow_break * :ghissue:`3136`: conversion of cudipy.align.imwarp.DiffeomorphicMap to dipy.align.imwarp.DiffeomorphicMap * :ghissue:`3135`: DOC: Fix documentation URLs * :ghissue:`3133`: grg-sphinx-theme added as dependency * :ghissue:`3127`: Feature/viz interface tutorials * :ghissue:`3120`: DOC - Removed unnecessary line from tracking example * :ghissue:`3116`: diffusion gradient nonlinearity correction * :ghissue:`3110`: Viz cli tutorial updated * :ghissue:`2970`: spherical harmonic degree/order terminology swapped * :ghissue:`3086`: [RF] Fix spherical harmonic terminology swap * :ghissue:`3095`: [UPCOMING] Release preparation for 1.9.0 .. |# codespell:ignore sagital| replace:: . 
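The Gibbs unringing and non-local means entries above (:ghissue:`2757`, :ghissue:`3381`, :ghissue:`3285`, :ghissue:`3384`) all touch user-facing ``dipy.denoise`` APIs. The following is a minimal sketch of how those pieces fit together, assuming the ``gibbs_removal``, ``estimate_sigma`` and ``non_local_means`` signatures of the 1.10/1.11 series; the array shape, random seed and coil count ``N=4`` are illustrative only, not part of the release record:

.. code-block:: python

    import numpy as np

    from dipy.denoise.gibbs import gibbs_removal
    from dipy.denoise.noise_estimate import estimate_sigma
    from dipy.denoise.non_local_means import non_local_means

    # Illustrative 4D series (x, y, slices, volumes); substitute real data.
    data = np.random.default_rng(42).random((32, 32, 12, 4))

    # num_processes=1 selects the sequential code path discussed in
    # :ghissue:`2757`; keywords are preferred after the PEP 3102 migration.
    unrung = gibbs_removal(data, slice_axis=2, num_processes=1, inplace=False)

    # estimate_sigma returns one noise estimate per volume for 4D input;
    # pass a single volume's estimate on to non_local_means.
    sigma = estimate_sigma(data, N=4)
    denoised_b0 = non_local_means(data[..., 0], sigma=float(sigma[0]),
                                  patch_radius=1, block_radius=2, rician=True)

Treat this as a sketch rather than canonical usage; the entries above remain the authoritative record of what changed.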
dipy-1.11.0/doc/release_notes/release1.11.rst000066400000000000000000000177201476546756600206500ustar00rootroot00000000000000.. _release1.11: ===================================== Release notes for DIPY version 1.11 ===================================== GitHub stats for 2024/12/12 - 2025/03/14 (tag: 1.10.0) These lists are automatically generated, and may be incomplete or contain duplicates. The following 11 authors contributed 175 commits. * Ariel Rokem * Atharva Shah * Eleftherios Garyfallidis * Gabriel Girard * Jon Haitz Legarreta Gorroño * Jong Sung Park * Maharshi Gor * Michael R. Crusoe * Prajwal Reddy * Sam Coveney * Serge Koudoro We closed a total of 120 issues, 47 pull requests and 73 regular issues; this is the full list (generated with the script :file:`tools/github_stats.py`): Pull Requests (47): * :ghpull:`3487`: RF: update tracking cli * :ghpull:`3490`: ENH: Updated the logging info * :ghpull:`3489`: DOC - Renamed fast tracking example * :ghpull:`3475`: NF: add workflow for N4 biasfield * :ghpull:`3485`: DOC: remove modulo in dipy_math docstring * :ghpull:`3486`: RF: Fixed Warnings of Patch2self3 * :ghpull:`3483`: ENH: Add use_cuda option to torch models * :ghpull:`3471`: Patch2Self3 skipping b0_denoising error fixed. * :ghpull:`3477`: NF: Add dipy_classify_tissue CLI * :ghpull:`3461`: BF: Handling binary and small intensity value range and Volume slider not be shown if single channel provided. * :ghpull:`3479`: DOC - Updated tracking examples to use the fast tracking framework * :ghpull:`3482`: RF: Avoid crash of `optional_package` when dirty install * :ghpull:`3480`: RF: Allow hcp and hcp in dipy_fetch CLI * :ghpull:`3476`: BF - generate_tractogram needs seed coordinate in image space * :ghpull:`3478`: RF: Small variable mismatch in the tutorial for DAM classifier * :ghpull:`3458`: NF: Add optional fetcher * :ghpull:`3438`: NF: Add 7 new `dipy_fit_*` workflows * :ghpull:`3462`: NF: Reset of visualization introduced. * :ghpull:`3470`: RF: Enforce bvecs for the cli extract_b0 * :ghpull:`3472`: RF: Rename and Deprecate dipy_sh_convert_mrtrix CLI * :ghpull:`3449`: Adding DeepN4 PyTorch model * :ghpull:`3465`: RF: Import fixes * :ghpull:`3459`: NF: Add default value to docstring for the CLI * :ghpull:`3446`: Dki constraints * :ghpull:`3467`: RF: Changed the matrix * :ghpull:`3457`: DOC: fix markup issues with tracking tutorial * :ghpull:`3456`: Added tissue classification with DAM example * :ghpull:`3444`: NF: Add 3 new ``dipy_extract_*`` workflows * :ghpull:`3089`: Parallel Tracking Framework * :ghpull:`3448`: Spelling error in the deprecation warning * :ghpull:`3442`: RF: update of peak_directions to allow nogil * :ghpull:`3445`: DOC: removed title * :ghpull:`3400`: CI: Add python 3.13 * :ghpull:`3440`: RF - add pmf_gen argument to peaks_from_positions * :ghpull:`3441`: ENH: Add GitHub CI workflow file to run benchmarks using `asv` * :ghpull:`3427`: NF: add dipy_math workflow * :ghpull:`3432`: RF: bump tensorflow minimal version to 2.18.0. 
* :ghpull:`3436`: RF: Update ``scipy.special`` deprecated functions * :ghpull:`3433`: TEST: remove skip if not have_delaunay * :ghpull:`3434`: ENH: Transition remaining `NumPy` `RandomState` instances to `Generator` * :ghpull:`3430`: ENH: Add type annotation information * :ghpull:`3428`: DOC: Add implemented tractography method table to tracking example index * :ghpull:`3426`: RF: from relative import to absolute import * :ghpull:`3423`: Make the docs more reproducible * :ghpull:`3421`: CI: remove python3.9 support * :ghpull:`3422`: RF: fix joblib warning in sfm * :ghpull:`3364`: UPCOMING: Release 1.10.0 Issues (73): * :ghissue:`3487`: RF: update tracking cli * :ghissue:`3490`: ENH: Updated the logging info * :ghissue:`3489`: DOC - Renamed fast tracking example * :ghissue:`3475`: NF: add workflow for N4 biasfield * :ghissue:`3485`: DOC: remove modulo in dipy_math docstring * :ghissue:`3486`: RF: Fixed Warnings of Patch2self3 * :ghissue:`3483`: ENH: Add use_cuda option to torch models * :ghissue:`3471`: Patch2Self3 skipping b0_denoising error fixed. * :ghissue:`3477`: NF: Add dipy_classify_tissue CLI * :ghissue:`3484`: Default parameter values not shown on DIPY's CLIs * :ghissue:`3371`: Horizon miscalculating contrast when returning to previous volume * :ghissue:`3112`: Horizon fails with a single channel 4D volume * :ghissue:`3461`: BF: Handling binary and small intensity value range and Volume slider not be shown if single channel provided. * :ghissue:`3479`: DOC - Updated tracking examples to use the fast tracking framework * :ghissue:`3482`: RF: Avoid crash of `optional_package` when dirty install * :ghissue:`3480`: RF: Allow hcp and hcp in dipy_fetch CLI * :ghissue:`3476`: BF - generate_tractogram needs seed coordinate in image space * :ghissue:`3478`: RF: Small variable mismatch in the tutorial for DAM classifier * :ghissue:`3190`: Allow to define optional file to fetch * :ghissue:`3458`: NF: Add optional fetcher * :ghissue:`3438`: NF: Add 7 new `dipy_fit_*` workflows * :ghissue:`3469`: horizon - SSLCertVerificationError * :ghissue:`3152`: Horizon needs a home button which realigns the view to the z slice we are at. * :ghissue:`2421`: DIPY Horizon Menu * :ghissue:`3462`: NF: Reset of visualization introduced. 
* :ghissue:`3470`: RF: Enforce bvecs for the cli extract_b0 * :ghissue:`3472`: RF: Rename and Deprecate dipy_sh_convert_mrtrix CLI * :ghissue:`3449`: Adding DeepN4 PyTorch model * :ghissue:`3164`: Simplify or remove the use of mpl_tri in viz module * :ghissue:`3465`: RF: Import fixes * :ghissue:`3454`: Default Value in Workflows docstring * :ghissue:`3459`: NF: Add default value to docstring for the CLI * :ghissue:`3446`: Dki constraints * :ghissue:`3467`: RF: Changed the matrix * :ghissue:`3466`: Theory for b and q values represent incorrect b matrix * :ghissue:`3457`: DOC: fix markup issues with tracking tutorial * :ghissue:`3463`: Remove the warnings from doc build in DIPY docs * :ghissue:`3451`: How can be the drawing background changed from black to white using `actor.odf_slicer` * :ghissue:`3179`: Need specific area zooming for horizon * :ghissue:`3359`: Bug: Horizon throws errors when changing intensity range and then changing volumes * :ghissue:`3460`: [WIP] RF: force use of header for fetcher * :ghissue:`3394`: Create a tutorial for DAM classifier * :ghissue:`3456`: Added tissue classification with DAM example * :ghissue:`3444`: NF: Add 3 new ``dipy_extract_*`` workflows * :ghissue:`1501`: Refactoring tracking and checking tutorials and workflows - high priority for next release. * :ghissue:`834`: Multiprocessing the local tracking? * :ghissue:`3089`: Parallel Tracking Framework * :ghissue:`3448`: Spelling error in the deprecation warning * :ghissue:`3442`: RF: update of peak_directions to allow nogil * :ghissue:`3445`: DOC: removed title * :ghissue:`3443`: NF: Replace urllib by requests to improve fetcher stability. * :ghissue:`3400`: CI: Add python 3.13 * :ghissue:`3440`: RF - add pmf_gen argument to peaks_from_positions * :ghissue:`3441`: ENH: Add GitHub CI workflow file to run benchmarks using `asv` * :ghissue:`3427`: NF: add dipy_math workflow * :ghissue:`3432`: RF: bump tensorflow minimal version to 2.18.0. * :ghissue:`3436`: RF: Update ``scipy.special`` deprecated functions * :ghissue:`3431`: Tests with scipy.spatial.Delaunay being skipped * :ghissue:`3433`: TEST: remove skip if not have_delaunay * :ghissue:`3434`: ENH: Transition remaining `NumPy` `RandomState` instances to `Generator` * :ghissue:`3430`: ENH: Add type annotation information * :ghissue:`3428`: DOC: Add implemented tractography method table to tracking example index * :ghissue:`3426`: RF: from relative import to absolute import * :ghissue:`3423`: Make the docs more reproducible * :ghissue:`3421`: CI: remove python3.9 support * :ghissue:`3422`: RF: fix joblib warning in sfm * :ghissue:`2364`: Streamlines get negative coordinates in voxel space * :ghissue:`3016`: [WIP] NF: Wigner-D Rotation Functions * :ghissue:`3276`: [Feature] Multithreading support for reading and opening files. * :ghissue:`2304`: [WIP] DKI ODF redux * :ghissue:`2705`: WIP: Single-shell FWDTI * :ghissue:`3416`: support for numpy 2.0 seems missing * :ghissue:`3364`: UPCOMING: Release 1.10.0 dipy-1.11.0/doc/release_notes/release1.2.rst000066400000000000000000000377671476546756600206050ustar00rootroot00000000000000.. _release1.2: ==================================== Release notes for DIPY version 1.2 ==================================== GitHub stats for 2020/01/10 - 2020/09/08 (tag: 1.1.1) These lists are automatically generated, and may be incomplete or contain duplicates. The following 27 authors contributed 491 commits. 
* Ariel Rokem * Aryansh Omray * Bramsh Qamar * Charles Poirier * Derek Pisner * Eleftherios Garyfallidis * Fabio Nery * Francois Rheault * Gabriel Girard * Gregory Lee * Jean-Christophe Houde * Jirka Borovec * Jon Haitz Legarreta Gorroño * Leevi Kerkela * Leon Weninger * Martijn Nagtegaal * Rafael Neto Henriques * Sarath Chandra * Serge Koudoro * Shrishti Hore * Shubham Shaswat * Takis Panagopoulos * Tashrif Billah * Kunal Mehta * svabhishek29 * Areesha Tariq * Philippe Karan We closed a total of 258 issues, 94 pull requests and 164 regular issues; this is the full list (generated with the script :file:`tools/github_stats.py`): Pull Requests (94): * :ghpull:`2191`: Multi-shell multi-tissue constrained spherical deconvolution * :ghpull:`2212`: [FIX] renamed Direct Streamline Normalization tutorial title * :ghpull:`2207`: [FIX] Horizon shader warning * :ghpull:`2208`: Removing Python 3.5 from Travis and Azure due to its end of life (2020-09-13) * :ghpull:`2157`: BF: Fix variable may be used uninitialized warning * :ghpull:`2205`: Add optional installs for different settings. * :ghpull:`2204`: from Renderer to scene * :ghpull:`2183`: _streamlines_in_mask bounds check * :ghpull:`2203`: End of line period in sft * :ghpull:`2142`: ENH: function to calculate size/shape parameters of b-tensors and vice-versa * :ghpull:`2195`: [ENH] Validate streamlines pre-LiFE * :ghpull:`2161`: Fix a memory overlap bug in multi_median (and median_otsu) * :ghpull:`2163`: BF: Fix Cython label defined but not used warning * :ghpull:`2174`: Improve performance of tissue classification * :ghpull:`2168`: add fig_kwargs * :ghpull:`2178`: Add full SH basis * :ghpull:`2193`: BUAN_flow.rst to buan_flow.rst * :ghpull:`2196`: [Fix] update save_vtk_streamlines and load_vtk_streamlines * :ghpull:`2188`: [Fix] update mapmri due to cvxpy 1.1 * :ghpull:`2176`: [DOC] Update SH basis documentation * :ghpull:`2173`: Install ssl certificate for azure pipeline windows * :ghpull:`2171`: Gitter url update: from nipy/dipy to dipy/dipy * :ghpull:`2154`: Bundle segmentation CLI tutorial * :ghpull:`2162`: BF: Fix string literal comparison warning * :ghpull:`2156`: BF: Fix Cython signed vs. unsigned integer comparison warning * :ghpull:`2160`: TST: Change assert_equal(statement, True) to assert_(statement). * :ghpull:`2158`: BF: Fix Cython floating point absolute value warning * :ghpull:`2155`: [Fix] sfm RuntimeWarning * :ghpull:`2147`: BF: Fix Cython function override warning * :ghpull:`2148`: BF: Fix `distutils` Python version requirement option warning * :ghpull:`2150`: [Fix] some warning on clustering test * :ghpull:`2149`: [Fix] some warning on stats module * :ghpull:`2145`: Rename dipy_track command line * :ghpull:`2152`: changed buan_flow.rst to BUAN_flow.rst * :ghpull:`2146`: Cluster threshold parameter in dipy_buan_shapes workflow * :ghpull:`2134`: Slicing, adding function of StatefulTractogram * :ghpull:`2001`: Basic processing documentation for CLI.
* :ghpull:`2135`: [Fix] shm.py RuntimeWarning * :ghpull:`2141`: [FIX] doc generation issue * :ghpull:`2136`: [Fix] laplacian_regularization on MAPMRI + cleanup warning * :ghpull:`2140`: BF: Fix NumPy warning when creating arrays from ragged sequences * :ghpull:`2139`: BF: Use equality check instead of identity check * :ghpull:`2108`: [Horizon] Update clipping range on slicer * :ghpull:`2121`: BF: ensure `btens` attribute of `GradientTable` is initialised * :ghpull:`2129`: BF: Fix sequence stacking warning in LiFE tracking * :ghpull:`2133`: BF: Fix NumPy warning when creating arrays from ragged sequences * :ghpull:`2125`: ENH: function to calculate anisotropy of b-tensors * :ghpull:`2124`: BUAN framework documentation * :ghpull:`2033`: RF - Direction getters naming * :ghpull:`2111`: Added an epsilon to bounding_box check * :ghpull:`2086`: WIP Issue 1996 * :ghpull:`2091`: Modified the model for multiple hidden layers * :ghpull:`2057`: DOC: Add DIPY dataset list to documentation. * :ghpull:`2103`: Documentation typos, grammar corrections & imports * :ghpull:`2088`: BUAN paper code and CLIs * :ghpull:`2120`: rename var sp to sph * :ghpull:`2113`: BF: refer to cigar_tensor * :ghpull:`2116`: fixed code tags and minor changes * :ghpull:`2100`: Fixed typos, grammatical errors and time import * :ghpull:`2101`: Minor typos and imports * :ghpull:`2095`: Fixed typos in dipy.align * :ghpull:`2099`: Minor Typos and imports in the beginning * :ghpull:`2102`: Modules imported in the beginning * :ghpull:`2055`: Multidimensional gradient table * :ghpull:`2097`: Replace manual sform values with get_best_affine * :ghpull:`2104`: Fixed minor typos * :ghpull:`2065`: Some typos and grammatical errors * :ghpull:`2090`: Small fix reorient_bvecs * :ghpull:`2067`: Some spelling and grammatical mistakes * :ghpull:`2093`: Placed all imports in the beginning * :ghpull:`2077`: Fixed minor typos in tutorials * :ghpull:`2071`: Some change backs * :ghpull:`2084`: Kunakl07 patch 7 * :ghpull:`2085`: Kunakl07 patch 8 * :ghpull:`2068`: Some spelling and grammatical errors * :ghpull:`2069`: some typos * :ghpull:`2063`: Gibbs tutorial patch * :ghpull:`2045`: [Fix] workflow variable string * :ghpull:`2060`: Replace old function with cython version * :ghpull:`2058`: DOC: Fix `Sphinx` link in `CONTRIBUTING.md` * :ghpull:`2059`: DOC: Add `Azure Pipelines` to CI tools in `CONTRIBUTING.md` * :ghpull:`2056`: MAINT: Up Numpy version to 1.12. * :ghpull:`2053`: Correct TYPO on the note about n_points in _gibbs_removal_2d() * :ghpull:`2043`: [NF] Add a Deprecation system * :ghpull:`2047`: [fix] Doc generation issue * :ghpull:`2044`: [FIX] check seeds dtype * :ghpull:`2041`: BF: SFM prediction with mask * :ghpull:`2039`: Remove __future__ * :ghpull:`2042`: Add tests to rng module * :ghpull:`2040`: RF: Swallow a couple of warnings that are safe to ignore. * :ghpull:`2038`: [DOC] Update repo path * :ghpull:`2037`: DOC: fix typo in FW DTI tutorial * :ghpull:`2028`: Adapted for patch_radius with radii differing xyz direction. 
* :ghpull:`2035`: DOC: Update DKI documentation according to the new get data functions Issues (164): * :ghissue:`2114`: Tutorial title in Streamline-based registration section is misleading * :ghissue:`2207`: [FIX] Horizon shader warning * :ghissue:`2208`: Removing Python 3.5 from Travis and Azure due to its end of life (2020-09-13) * :ghissue:`1793`: ENH: Improving Command Line and WF Docs * :ghissue:`2007`: Reference for `load_trk` in recobundles example * :ghissue:`2061`: Is dipy.viz supported in Colab or Kaggle Notebooks? * :ghissue:`2070`: Python has stopped working during import * :ghissue:`2107`: PNG images saved by window.record in the tutorial example are always black * :ghissue:`2153`: lowercase .rst file names * :ghissue:`2138`: Basic introduction to CLI needs better dataset to showcase capabilities * :ghissue:`2194`: LiFE model won't fit ? * :ghissue:`2157`: BF: Fix variable may be used uninitialized warning * :ghissue:`2177`: VIZ: streamline actor failing on Windows + macOS due to the new VTK9 * :ghissue:`2205`: Add optional installs for different settings. * :ghissue:`2204`: from Renderer to scene * :ghissue:`2183`: _streamlines_in_mask bounds check * :ghissue:`2182`: `target_line_based` might read out of bounds * :ghissue:`2203`: End of line period in sft * :ghissue:`2200`: BF: Fix `'tp_print' is deprecated` Cython warning * :ghissue:`2142`: ENH: function to calculate size/shape parameters of b-tensors and vice-versa * :ghissue:`2199`: BUG: Fix NumPy and Cython deprecation and initialization warnings * :ghissue:`2195`: [ENH] Validate streamlines pre-LiFE * :ghissue:`2161`: Fix a memory overlap bug in multi_median (and median_otsu) * :ghissue:`2163`: BF: Fix Cython label defined but not used warning * :ghissue:`2174`: Improve performance of tissue classification * :ghissue:`2168`: add fig_kwargs * :ghissue:`2178`: Add full SH basis * :ghissue:`2193`: BUAN_flow.rst to buan_flow.rst * :ghissue:`2196`: [Fix] update save_vtk_streamlines and load_vtk_streamlines * :ghissue:`2175`: Save streamlines as vtk polydata to a supported format file updated t… * :ghissue:`2188`: [Fix] update mapmri due to cvxpy 1.1 * :ghissue:`2190`: Reconstruction with Multi-Shell Multi-Tissue CSD * :ghissue:`2051`: BF: Non-negative Least Squares for IVIM * :ghissue:`2176`: [DOC] Update SH basis documentation * :ghissue:`2173`: Install ssl certificate for azure pipeline windows * :ghissue:`2172`: fetch_gold_standard_io fetcher failed regularly * :ghissue:`2169`: saving tracts in obj format * :ghissue:`2170`: Output of utils.density_map() using tck file is different than MRTrix * :ghissue:`2171`: Gitter url update: from nipy/dipy to dipy/dipy * :ghissue:`2144`: Move gitter to dipy/dipy? * :ghissue:`2154`: Bundle segmentation CLI tutorial * :ghissue:`2162`: BF: Fix string literal comparison warning * :ghissue:`2156`: BF: Fix Cython signed vs. unsigned integer comparison warning * :ghissue:`2160`: TST: Change assert_equal(statement, True) to assert_(statement). 
* :ghissue:`2158`: BF: Fix Cython floating point absolute value warning * :ghissue:`2155`: [Fix] sfm RuntimeWarning * :ghissue:`2159`: BF: Fix Cython different sign integer comparison warning * :ghissue:`2147`: BF: Fix Cython function override warning * :ghissue:`2148`: BF: Fix `distutils` Python version requirement option warning * :ghissue:`2151`: [DOC] Fixed minor typos, grammar errors and moved imports up in all examples * :ghissue:`2130`: Checking empty Cluster objects generates NumPy warning * :ghissue:`2131`: Elementwise comparison failure warning in multi_voxel test * :ghissue:`2150`: [Fix] some warning on clustering test * :ghissue:`2149`: [Fix] some warning on stats module * :ghissue:`2145`: Rename dipy_track command line * :ghissue:`2152`: changed buan_flow.rst to BUAN_flow.rst * :ghissue:`2146`: Cluster threshold parameter in dipy_buan_shapes workflow * :ghissue:`2128`: Registration Module failing with pre-matrix on Travis with future Numpy/Scipy release * :ghissue:`2134`: Slicing, adding function of StatefulTractogram * :ghissue:`2001`: Basic processing documentation for CLI. * :ghissue:`2135`: [Fix] shm.py RuntimeWarning * :ghissue:`2141`: [FIX] doc generation issue * :ghissue:`2136`: [Fix] laplacian_regularization on MAPMRI + cleanup warning * :ghissue:`1765`: Refactor dipy/stats/analysis.py * :ghissue:`2122`: [WIP] Add build template * :ghissue:`2140`: BF: Fix NumPy warning when creating arrays from ragged sequences * :ghissue:`2139`: BF: Use equality check instead of identity check * :ghissue:`2127`: DOC : Minor grammar fixes and moved imports up with respective comments * :ghissue:`2108`: [Horizon] Update clipping range on slicer * :ghissue:`2121`: BF: ensure `btens` attribute of `GradientTable` is initialised * :ghissue:`2129`: BF: Fix sequence stacking warning in LiFE tracking * :ghissue:`2133`: BF: Fix NumPy warning when creating arrays from ragged sequences * :ghissue:`2125`: ENH: function to calculate anisotropy of b-tensors * :ghissue:`2124`: BUAN framework documentation * :ghissue:`2126`: dipy / fury fails to install on Ubuntu 18 with pip3 * :ghissue:`2033`: RF - Direction getters naming * :ghissue:`2111`: Added an epsilon to bounding_box check * :ghissue:`2112`: [WIP] BUndle ANalytics (BUAN) pipeline documentation * :ghissue:`2086`: WIP Issue 1996 * :ghissue:`2091`: Modified the model for multiple hidden layers * :ghissue:`2096`: Deep Code * :ghissue:`2057`: DOC: Add DIPY dataset list to documentation. 
* :ghissue:`2103`: Documentation typos, grammar corrections & imports * :ghissue:`2088`: BUAN paper code and CLIs * :ghissue:`2120`: rename var sp to sph * :ghissue:`2118`: Local namespace of Scipy is same as a variable name * :ghissue:`1861`: WIP: Refactoring the stats module * :ghissue:`2113`: BF: refer to cigar_tensor * :ghissue:`2116`: fixed code tags and minor changes * :ghissue:`2024`: DIPY open lab meetings, Winter 2020 * :ghissue:`2100`: Fixed typos, grammatical errors and time import * :ghissue:`2101`: Minor typos and imports * :ghissue:`2094`: Detailed Beginner Friendly Tutorials * :ghissue:`2095`: Fixed typos in dipy.align * :ghissue:`2099`: Minor Typos and imports in the beginning * :ghissue:`2102`: Modules imported in the beginning * :ghissue:`2055`: Multidimensional gradient table * :ghissue:`2097`: Replace manual sform values with get_best_affine * :ghissue:`2105`: Tutorial Symmetric Regn 3D patch 3 * :ghissue:`2104`: Fixed minor typos * :ghissue:`2078`: Applying affine transform to streamlines in a SFT object * :ghissue:`2065`: Some typos and grammatical errors * :ghissue:`1305`: Questions/policies about writing .rst/web doc files * :ghissue:`2090`: Small fix reorient_bvecs * :ghissue:`2067`: Some spelling and grammatical mistakes * :ghissue:`2093`: Placed all imports in the beginning * :ghissue:`2077`: Fixed minor typos in tutorials * :ghissue:`2089`: Transforming bvecs after registration * :ghissue:`2071`: Some change backs * :ghissue:`2084`: Kunakl07 patch 7 * :ghissue:`2085`: Kunakl07 patch 8 * :ghissue:`2072`: Some typos and grammatical errors in faq.rst * :ghissue:`2073`: Some minor grammatical fixes old_highlights.txt * :ghissue:`2074`: Small typos * :ghissue:`2075`: Some grammatical changes in maintainer_workflow.rst * :ghissue:`2076`: Some grammatical changes in maintainer_workflow.rst * :ghissue:`2079`: Some minor typos in gimbal_lock.rst * :ghissue:`2080`: Some minor grammatical errors fixes * :ghissue:`2081`: Some typos and grammatical corrections in Changelog * :ghissue:`2082`: Grammatical fixes in readme.rst * :ghissue:`2083`: Fixes in regtools.py * :ghissue:`2066`: Fixed some typos * :ghissue:`2068`: Some spelling and grammatical errors * :ghissue:`2069`: some typos * :ghissue:`2063`: Gibbs tutorial patch * :ghissue:`2045`: [Fix] workflow variable string * :ghissue:`2060`: Replace old function with cython version * :ghissue:`2058`: DOC: Fix `Sphinx` link in `CONTRIBUTING.md` * :ghissue:`2059`: DOC: Add `Azure Pipelines` to CI tools in `CONTRIBUTING.md` * :ghissue:`1363`: MDF not working properly * :ghissue:`2056`: MAINT: Up Numpy version to 1.12. * :ghissue:`1871`: apply transformation to all volumes in a series of DWI (4D) * :ghissue:`2052`: dipy.tracking.utils.density_map order of arguments changed by mistake * :ghissue:`1785`: Could we use gifti for streamlines? * :ghissue:`1728`: Bug in shm_coeff computation? 
* :ghissue:`1699`: Details in aparc-reduced.nii.gz * :ghissue:`1671`: Question about shm_coeff * :ghissue:`1552`: dti.py - quantize_evecs - error * :ghissue:`1373`: How to convert from converted isotropic to original resolution (anisotropic) * :ghissue:`1364`: SNR estimation troubleshooting * :ghissue:`1152`: nan gfa and odf values when mask includes voxels with 0 dwi signal * :ghissue:`1047`: Gradient flipped in the x-direction - FSL bvecs handling * :ghissue:`2019`: Apply deformation map to render "registered" image * :ghissue:`2049`: KFA calculation * :ghissue:`2048`: Group analysis * :ghissue:`2053`: Correct TYPO on the note about n_points in _gibbs_removal_2d() * :ghissue:`2043`: [NF] Add a Deprecation system * :ghissue:`218`: Callable response broken in csd module * :ghissue:`2047`: [fix] Doc generation issue * :ghissue:`313`: csdeconv response as callable * :ghissue:`1848`: Add Dipy to MRI-Hub (ISMRM Reproducible Research Study Group) * :ghissue:`2044`: [FIX] check seeds dtype * :ghissue:`2034`: Using the tutorial on Euler method on my data * :ghissue:`2041`: BF: SFM prediction with mask * :ghissue:`1724`: Failure on Windows/Python 3.5 * :ghissue:`1938`: Auto-clearing the AppVeyor queue backlog * :ghissue:`2039`: Remove __future__ * :ghissue:`2042`: Add tests to rng module * :ghissue:`1864`: Add tests to dipy.core.rng * :ghissue:`2040`: RF: Swallow a couple of warnings that are safe to ignore. * :ghissue:`2038`: [DOC] Update repo path * :ghissue:`2037`: DOC: fix typo in FW DTI tutorial * :ghissue:`2028`: Adapted for patch_radius with radii differing xyz direction. * :ghissue:`2035`: DOC: Update DKI documentation according to the new get data functions dipy-1.11.0/doc/release_notes/release1.3.rst000066400000000000000000000230321476546756600205620ustar00rootroot00000000000000.. _release1.3: ==================================== Release notes for DIPY version 1.3 ==================================== GitHub stats for 2020/09/09 - 2020/11/02 (tag: 1.2.0) These lists are automatically generated, and may be incomplete or contain duplicates. The following 14 authors contributed 284 commits.
* Areesha Tariq * Ariel Rokem * Basile Pinsard * Bramsh Qamar * Charles Poirier * Eleftherios Garyfallidis * Eric Larson * Gregory Lee * Jaewon Chung * Jon Haitz Legarreta Gorroño * Philippe Karan * Rafael Neto Henriques * Serge Koudoro * Siddharth Kapoor We closed a total of 134 issues, 49 pull requests and 85 regular issues; this is the full list (generated with the script :file:`tools/github_stats.py`): Pull Requests (49): * :ghpull:`2181`: BUAN bundle highlight option in horizon * :ghpull:`2223`: [NF] new linear transforms for Rigid+IsoScaling and Rigid+Scaling * :ghpull:`2238`: [FIX] fix cython error from pre matrix * :ghpull:`2265`: Gibbs denoising: fft only along the axis of interest * :ghpull:`2206`: NF: Update definitions of SH bases and documentation * :ghpull:`2266`: STYLE: minor refactoring in TissueClassifierHMRF * :ghpull:`2255`: Modifying dti.design_matrix to take gtab.btens into account * :ghpull:`2271`: Increase Azure pipeline timeout * :ghpull:`2263`: [FIX] update multiple models due to cvxpy 1.1 (part2) * :ghpull:`2259`: [Fix] Allow read_bvals_bvecs to have 1 or 2 dwi volumes only * :ghpull:`2264`: BF: Fix `dipy_align_syn` default value assumptions * :ghpull:`2268`: BUG: Fix literal * :ghpull:`2267`: BUG: Fix string literal * :ghpull:`2262`: [FIX] update tests to respect numpy NEP 34 * :ghpull:`2244`: DOC : Denoising CLI * :ghpull:`2119`: RecoBundles updated to read and save .trk files from Old API * :ghpull:`2260`: [Fix] Better error handling in Diffeomorphic map `get_map` * :ghpull:`2258`: [FIX] update Azure OSX CI + remove Azure Linux CI's * :ghpull:`2257`: [Fix] warning if not the same number of points * :ghpull:`2261`: [DOC]:Removed tracking evaluation section * :ghpull:`1919`: [DOC] Add an overview of reconstruction models * :ghpull:`2256`: update BUAN citations * :ghpull:`2253`: Improve FFT efficiency in gibbs_removal * :ghpull:`2240`: [ENH] Deprecate interp parameter name in AffineMap * :ghpull:`2198`: Make single and multi tensor simulations compatible with btensors * :ghpull:`2025`: Adds an align.api module, which provides simplified API to registration functions * :ghpull:`2197`: Estimate smt2 metrics from mean signal kurtosis * :ghpull:`2227`: RF: Replaces our own custom progressbar with a tqdm progressbar. * :ghpull:`2250`: [ENH] Add parallelization to gibbs denoising * :ghpull:`2252`: BUG: Set tau factor to parameter value in local PCA * :ghpull:`2248`: [DOC] fetching dataset * :ghpull:`2249`: [fix] fix value_range in HORIZON * :ghpull:`2247`: BF: In LiFE, set nan signals to 0. 
* :ghpull:`2246`: [DOC] Replace simple backticks with double backticks * :ghpull:`2239`: [ENH] Add inplace kwarg to gibbs_removal * :ghpull:`2242`: maintenance of bundle_shape_similarity function * :ghpull:`2241`: STYLE: Exclude package information file from PEP8 checks * :ghpull:`2235`: DOC: Add tips to the documentation build section * :ghpull:`2234`: DOC: Improve some of the links in the `info.py` file * :ghpull:`2233`: Clarifying msmt response function docstrings * :ghpull:`2231`: DOC: Fix HTML tag in dataset list documentation table * :ghpull:`2221`: Robustify solve_qp for possible SolverError in one odd voxel * :ghpull:`2226`: STYLE: Conform to `reStructuredText` syntax in examples sections * :ghpull:`2225`: [CI] Replace Rackspace by https://anaconda.org/scipy-wheels-nightly * :ghpull:`2224`: Replace pytest.xfail by npt.assert_raises * :ghpull:`2220`: [DOC] move Denoising on its own section * :ghpull:`2218`: DOC : inconsistent save_seeds documentation * :ghpull:`2217`: Fixing numpy version rcond issue in numpy.linalg.lstsq * :ghpull:`2211`: [FIX] used numerical indices for references Issues (85): * :ghissue:`2181`: BUAN bundle highlight option in horizon * :ghissue:`2272`: DOC : Registration CLI * :ghissue:`2223`: [NF] new linear transforms for Rigid+IsoScaling and Rigid+Scaling * :ghissue:`2180`: [NF] add new linear transforms for Rigid+IsoScaling and Rigid+Scaling * :ghissue:`2238`: [FIX] fix cython error from pre matrix * :ghissue:`2265`: Gibbs denoising: fft only along the axis of interest * :ghissue:`2206`: NF: Update definitions of SH bases and documentation * :ghissue:`392`: mrtrix 0.3 default basis is different from mrtrix 0.2 * :ghissue:`2266`: STYLE: minor refactoring in TissueClassifierHMRF * :ghissue:`2255`: Modifying dti.design_matrix to take gtab.btens into account * :ghissue:`2271`: Increase Azure pipeline timeout * :ghissue:`2054`: Discrepancy between dipy.gibbs.gibbs_removal and reisert/unring/ * :ghissue:`2263`: [FIX] update multiple models due to cvxpy 1.1 (part2) * :ghissue:`2190`: Reconstruction with Multi-Shell Multi-Tissue CSD * :ghissue:`2259`: [Fix] Allow read_bvals_bvecs to have 1 or 2 dwi volumes only * :ghissue:`2046`: read_bvals_bvecs can't read a single volume dwi * :ghissue:`2264`: BF: Fix `dipy_align_syn` default value assumptions * :ghissue:`2268`: BUG: Fix literal * :ghissue:`2267`: BUG: Fix string literal * :ghissue:`2262`: [FIX] update tests to respect numpy NEP 34 * :ghissue:`2132`: Generating ndarrays with different shapes generates NumPy warning at testing * :ghissue:`1266`: test_mapmri_isotropic_static_scale_factor failure on OSX buildbot * :ghissue:`1264`: FBC measures test failure on PPC * :ghissue:`2244`: DOC : Denoising CLI * :ghissue:`2119`: RecoBundles updated to read and save .trk files from Old API * :ghissue:`2117`: RecoBundles workflow still using old API * :ghissue:`2260`: [Fix] Better error handling in Diffeomorphic map `get_map` * :ghissue:`2202`: Add error handling in Diffeomorphic map `get_map` * :ghissue:`2258`: [FIX] update Azure OSX CI + remove Azure Linux CI's * :ghissue:`2257`: [Fix] warning if not the same number of points * :ghissue:`342`: Missing a warning if not the same number of points * :ghissue:`2261`: [DOC]:Removed tracking evaluation section * :ghissue:`2115`: Independent section on Fiber tracking evaluation not necessary * :ghissue:`1744`: [WIP] [NF] Free Water Elimination for single-shell DTI (updated version) * :ghissue:`1919`: [DOC] Add an overview of reconstruction models * :ghissue:`1489`: Documentation: how 
to know which models support multi-shell? * :ghissue:`2256`: update BUAN citations * :ghissue:`2253`: Improve FFT efficiency in gibbs_removal * :ghissue:`2240`: [ENH] Deprecate interp parameter name in AffineMap * :ghissue:`2192`: Bringing AffineMap and DiffeomorphicMap a little closer together * :ghissue:`2198`: Make single and multi tensor simulations compatible with btensors * :ghissue:`2025`: Adds an align.api module, which provides simplified API to registration functions * :ghissue:`2201`: Gradient table message error * :ghissue:`2232`: This should be len(np.unique(gtab.bvals)) - 1 or somesuch * :ghissue:`2197`: Estimate smt2 metrics from mean signal kurtosis * :ghissue:`2227`: RF: Replaces our own custom progressbar with a tqdm progressbar. * :ghissue:`2219`: Replace fetcher progress bar with tqdm * :ghissue:`2250`: [ENH] Add parallelization to gibbs denoising * :ghissue:`2236`: Parallelize gibbs_removal * :ghissue:`2254`: Trackvis header saved with Dipy (nibabel) is not read by Matlab or other tools * :ghissue:`2252`: BUG: Set tau factor to parameter value in local PCA * :ghissue:`2251`: localpca tau_factor is hard coded to 2.3 * :ghissue:`2248`: [DOC] fetching dataset * :ghissue:`2249`: [fix] fix value_range in HORIZON * :ghissue:`2243`: Unable to visualize data through dipy_horizon * :ghissue:`2247`: BF: In LiFE, set nan signals to 0. * :ghissue:`2246`: [DOC] Replace simple backticks with double backticks * :ghissue:`2239`: [ENH] Add inplace kwarg to gibbs_removal * :ghissue:`2237`: gibbs_removal overwrites input data when inputting 3-d or 4-d data. * :ghissue:`2245`: DOC: Fix Sphinx verbatim syntax in coding style guide * :ghissue:`2242`: maintenance of bundle_shape_similarity function * :ghissue:`2241`: STYLE: Exclude package information file from PEP8 checks * :ghissue:`2235`: DOC: Add tips to the documentation build section * :ghissue:`2234`: DOC: Improve some of the links in the `info.py` file * :ghissue:`2222`: How can I track different streamlines in DIPY? * :ghissue:`2233`: Clarifying msmt response function docstrings * :ghissue:`2231`: DOC: Fix HTML tag in dataset list documentation table * :ghissue:`2230`: TST: Assert the shape of the output based on the docstring. * :ghissue:`2228`: Best practices for saving a tissue classifier object? * :ghissue:`2221`: Robustify solve_qp for possible SolverError in one odd voxel * :ghissue:`2109`: DIPY lab meetings, Spring 2020 * :ghissue:`2226`: STYLE: Conform to `reStructuredText` syntax in examples sections * :ghissue:`2225`: [CI] Replace Rackspace by https://anaconda.org/scipy-wheels-nightly * :ghissue:`2214`: Rackspace does not exist anymore -> Update PRE-matrix on Travis and Azure required * :ghissue:`2224`: Replace pytest.xfail by npt.assert_raises * :ghissue:`2220`: [DOC] move Denoising on its own section * :ghissue:`2218`: DOC : inconsistent save_seeds documentation * :ghissue:`2217`: Fixing numpy version rcond issue in numpy.linalg.lstsq * :ghissue:`2216`: test_multi_shell_fiber_response failed with Numpy 1.13.3 * :ghissue:`2211`: [FIX] used numerical indices for references * :ghissue:`2185`: Inconsistency in stating references in dti.py * :ghissue:`2215`: problem with fetching stanford data * :ghissue:`1762`: Font on instructions is small on mac * :ghissue:`1354`: strange tracks * :ghissue:`325`: streamline extraction from eudx is failing - but perhaps eudx is failing dipy-1.11.0/doc/release_notes/release1.4.1.rst000066400000000000000000000140301476546756600207200ustar00rootroot00000000000000.. 
_release1.4.1: ===================================== Release notes for DIPY version 1.4.1 ===================================== GitHub stats for 2021/03/14 - 2021/05/05 (tag: 1.4.0) These lists are automatically generated, and may be incomplete or contain duplicates. The following 11 authors contributed 153 commits. * Ariel Rokem * Bramsh Qamar Chandio * David Romero-Bascones * Eleftherios Garyfallidis * Etienne St-Onge * Felix Liu * Gabriel Girard * John Kruper * Nasim Anousheh * Serge Koudoro * Shreyas Fadnavis We closed a total of 89 issues, 28 pull requests and 61 regular issues; this is the full list (generated with the script :file:`tools/github_stats.py`): Pull Requests (28): * :ghpull:`2367`: [Upcoming] Release 1.4.1 * :ghpull:`2387`: added all examples of CST and updated AFQ file name * :ghpull:`2386`: Adding CST_L back in Bundle Segmentation Tutorial * :ghpull:`2375`: Expanding Bundle Segmentation Tutorial * :ghpull:`2382`: Updated docs for using P2S optimally * :ghpull:`2385`: RF: Standardize the argument name for the number of threads/cores * :ghpull:`2384`: RF - Removed deprecated tracking code * :ghpull:`2351`: Updating Vec2vec_rotmat to deal with numerical issues * :ghpull:`2381`: Adds the NIPY code of conduct to our repo. * :ghpull:`2371`: [Fix] Add "None" options in the CLIs * :ghpull:`2352`: RF: configure num_threads==-1 as the value to use all cores * :ghpull:`2373`: [FIX] warning if not the same number of points * :ghpull:`2372`: Expand patch radius if input is int * :ghpull:`2348`: RF: Use new name for this function. * :ghpull:`2363`: [ENH] Adding cython file(``*.pyx``) in documentation * :ghpull:`2365`: [DOC]: Change defaults in Patch2Self example * :ghpull:`2349`: [ENH] Allow for other statistics, like median, in afq_profile * :ghpull:`2350`: [FIX] Use npy_intp variables instead of int and size_t to iterate over numpy arrays * :ghpull:`2346`: [MNT] Update and fix Cython warnings and use cnp.PyArray_DATA wherever possible * :ghpull:`2347`: Replacing Data in NLMeans Tutorial * :ghpull:`2340`: [FIX] reactivate codecov * :ghpull:`2344`: [FIX] Tractogram Header in RecoBundles Tutorial * :ghpull:`2339`: [FIX] Cleanup deprecated np.float, np.bool, np.int * :ghpull:`1648`: Mesh seeding (surface) * :ghpull:`2337`: BF: Change patch2self defaults. * :ghpull:`2333`: Add __str__ to GradientTable * :ghpull:`2335`: RF: Replaces deprecated basis by its new name. * :ghpull:`2332`: [FIX] fix tests for all new deprecated functions Issues (61): * :ghissue:`2375`: Expanding Bundle Segmentation Tutorial * :ghissue:`1973`: Recobundles documentation * :ghissue:`2382`: Updated docs for using P2S optimally * :ghissue:`2385`: RF: Standardize the argument name for the number of threads/cores * :ghissue:`2377`: RF: standardize the argument name for the number of threads/cores * :ghissue:`2384`: RF - Removed deprecated tracking code * :ghissue:`2351`: Updating Vec2vec_rotmat to deal with numerical issues * :ghissue:`2381`: Adds the NIPY code of conduct to our repo. * :ghissue:`2380`: Community and governance * :ghissue:`2371`: [Fix] Add "None" options in the CLIs * :ghissue:`2300`: NF: Add "None" options in the CLIs * :ghissue:`2352`: RF: configure num_threads==-1 as the value to use all cores * :ghissue:`2373`: [FIX] warning if not the same number of points * :ghissue:`2320`: RecoBundles distances * :ghissue:`2372`: Expand patch radius if input is int * :ghissue:`2341`: Allow use of all threads in the gibbs ringing workflow * :ghissue:`2348`: RF: Use new name for this function. 
* :ghissue:`2353`: How to create tractogram from a multi-shell data for RecoBundles * :ghissue:`1311`: Adding cython file(``*.pyx``) in documentation * :ghissue:`2363`: [ENH] Adding cython file(``*.pyx``) in documentation * :ghissue:`1302`: [DOC] cython (pyx) files are not parsed * :ghissue:`366`: Some doc missing * :ghissue:`2365`: [DOC]: Change defaults in Patch2Self example * :ghissue:`1672`: Dipy Segmentation fault when visualizing * :ghissue:`1444`: Move general registration tools into own package? * :ghissue:`562`: Multiprocessing the tensor reconstruction * :ghissue:`13`: Coordinate maps stuff * :ghissue:`2324`: Dipy for VR/AR * :ghissue:`2345`: Saving and/or importing nonlinear warps * :ghissue:`2349`: [ENH] Allow for other statistics, like median, in afq_profile * :ghissue:`2350`: [FIX] Use npy_intp variables instead of int and size_t to iterate over numpy arrays * :ghissue:`423`: Use npy_intp variables instead of int and size_t to iterate over numpy arrays * :ghissue:`837`: Should we enforce float32 in tractography results? * :ghissue:`636`: Get a standard interface for the functions using the noise variance * :ghissue:`861`: open mp defaults to one core, is that a good idea? * :ghissue:`2346`: [MNT] Update and fix Cython warnings and use cnp.PyArray_DATA wherever possible * :ghissue:`1895`: Cython warnings * :ghissue:`545`: Use cnp.PyArray_DATA wherever possible * :ghissue:`2347`: Replacing Data in NLMeans Tutorial * :ghissue:`1847`: Replacing Data in NLMeans Tutorial * :ghissue:`2340`: [FIX] reactivate codecov * :ghissue:`1872`: Did we lose our coverage reporting? * :ghissue:`1646`: Fetcher should not be under coverage * :ghissue:`1635`: Track from mesh * :ghissue:`2344`: [FIX] Tractogram Header in RecoBundles Tutorial * :ghissue:`2309`: Tractogram Header in RecoBundles Tutorial * :ghissue:`2334`: Aphysical signal after running patch2self * :ghissue:`1873`: ERROR while import data * :ghissue:`2343`: Missing Python 3.9 wheels * :ghissue:`1996`: Documentation not being rendered correctly * :ghissue:`2311`: Accuracy of DKI measures * :ghissue:`2274`: DKI metrics' accuracy * :ghissue:`2339`: [FIX] Cleanup deprecated np.float, np.bool, np.int * :ghissue:`1648`: Mesh seeding (surface) * :ghissue:`1675`: WIP: Integer indices * :ghissue:`2316`: TranslationTransform2D Exact X-Y Shift * :ghissue:`2337`: BF: Change patch2self defaults. * :ghissue:`2333`: Add __str__ to GradientTable * :ghissue:`2331`: gtab.info does not print anything * :ghissue:`2335`: RF: Replaces deprecated basis by its new name. * :ghissue:`2332`: [FIX] fix tests for all new deprecated functions dipy-1.11.0/doc/release_notes/release1.4.rst000066400000000000000000000067461476546756600206000ustar00rootroot00000000000000.. _release1.4: ==================================== Release notes for DIPY version 1.4 ==================================== GitHub stats for 2020/11/05 - 2021/03/13 (tag: 1.3.0) These lists are automatically generated, and may be incomplete or contain duplicates. The following 8 authors contributed 119 commits. 
* Areesha Tariq * Ariel Rokem * Bramsh Qamar * Eleftherios Garyfallidis * Jon Haitz Legarreta Gorroño * Philippe Karan * Serge Koudoro * Shreyas Fadnavis We closed a total of 47 issues, 19 pull requests and 28 regular issues; this is the full list (generated with the script :file:`tools/github_stats.py`): Pull Requests (19): * :ghpull:`2318`: [MAINT] Fixed Handling Negative Values for P2S * :ghpull:`2317`: Allowing response_from_mask_msmt to take btens into account * :ghpull:`2315`: Add weekly build to azure + update linux matrix * :ghpull:`2307`: BUAN Documentation Typo * :ghpull:`2278`: BUAN and RB Documentation update * :ghpull:`2275`: [DOC]: Reconstruction CLI * :ghpull:`2277`: [DOC] Registration CLI * :ghpull:`2289`: Patch2Self: Denoising Diffusion MRI with Self-supervised Learning * :ghpull:`2292`: [DOC]: Add Tracking CLI Documentation * :ghpull:`2296`: [DOC] Fix the 'None' value in num_threads attribute * :ghpull:`2294`: [DOC]: Remove default values from docstrings * :ghpull:`2291`: Adds for the dipy.align._public module * :ghpull:`2295`: [DOC]: Fix inconsistent function parameters * :ghpull:`2280`: DOC: Improve the denoising workflow documentation * :ghpull:`2287`: [DOC] Remove default values from docstrings for reconstruction methods * :ghpull:`2286`: [DOC] fix inconsistent citation style * :ghpull:`2284`: [DOC] Fix incorrect default output directory in docstrings * :ghpull:`2283`: [DOC] Fix incorrect default output directory in docstrings * :ghpull:`2282`: [DOC] fix minor typo Issues (28): * :ghissue:`2210`: Really release the memory used by arclength variable. * :ghissue:`722`: Adding optional (GPL) dependencies to have new features * :ghissue:`2328`: Website is down * :ghissue:`2323`: Brain mask as a required argument to CLI * :ghissue:`2318`: [MAINT] Fixed Handling Negative Values for P2S * :ghissue:`2317`: Allowing response_from_mask_msmt to take btens into account * :ghissue:`2315`: Add weekly build to azure + update linux matrix * :ghissue:`2229`: DIPY open group meetings, Fall 2020 * :ghissue:`2307`: BUAN Documentation Typo * :ghissue:`2098`: Q-Space Interpolation Methods * :ghissue:`2278`: BUAN and RB Documentation update * :ghissue:`2275`: [DOC]: Reconstruction CLI * :ghissue:`2277`: [DOC] Registration CLI * :ghissue:`2303`: DKI ODF * :ghissue:`685`: (WIP) DKI PR5 - NF: DKI-ODF estimation * :ghissue:`2289`: Patch2Self: Denoising Diffusion MRI with Self-supervised Learning * :ghissue:`2292`: [DOC]: Add Tracking CLI Documentation * :ghissue:`2296`: [DOC] Fix the 'None' value in num_threads attribute * :ghissue:`2294`: [DOC]: Remove default values from docstrings * :ghissue:`2291`: Adds for the dipy.align._public module * :ghissue:`2295`: [DOC]: Fix inconsistent function parameters * :ghissue:`2280`: DOC: Improve the denoising workflow documentation * :ghissue:`2287`: [DOC] Remove default values from docstrings for reconstruction methods * :ghissue:`2286`: [DOC] fix inconsistent citation style * :ghissue:`2284`: [DOC] Fix incorrect default output directory in docstrings * :ghissue:`2283`: [DOC] Fix incorrect default output directory in docstrings * :ghissue:`2282`: [DOC] fix minor typo * :ghissue:`2279`: Visibility of Workflow Tutorials on DIPY website dipy-1.11.0/doc/release_notes/release1.5.rst000066400000000000000000000315201476546756600205650ustar00rootroot00000000000000 ..
_release1.5: =================================== Release notes for DIPY version 1.5 =================================== GitHub stats for 2021/05/06 - 2022/03/10 (tag: 1.4.1) These lists are automatically generated, and may be incomplete or contain duplicates. The following 22 authors contributed 573 commits. * Ariel Rokem * Dan Bullock * David Romero-Bascones * Derek Pisner * Eleftherios Garyfallidis * Eric Larson * Francis Jerome * Francois Rheault * Gabriel Girard * Giulia Bertò * Javier Guaje * Jon Haitz Legarreta Gorroño * Joshua Newton * Kenji Marshall * Leevi Kerkela * Leon Weninger * Lucas Da Costa * Nasim Anousheh * Rafael Neto Henriques * Sam Coveney * Serge Koudoro * Shreyas Fadnavis We closed a total of 200 issues, 72 pull requests and 128 regular issues; this is the full list (generated with the script :file:`tools/github_stats.py`): Pull Requests (72): * :ghpull:`2561`: [FIX] Motion correction tutorial * :ghpull:`2520`: Resdnn inference * :ghpull:`2558`: BUG: Fix errant warning about starting_affine * :ghpull:`2557`: MAINT: Fix version * :ghpull:`2556`: [FIX] Update `dipy.segment` tutorials * :ghpull:`2554`: Support .vtp files * :ghpull:`2555`: Limit `peaks_from_model` number of processes in examples * :ghpull:`2539`: Adds utilities for embarrassingly parallel loops. * :ghpull:`2545`: Stateful Tractogram DPS and DPP keys ordering * :ghpull:`2548`: Add timeout + concurrency to GHA * :ghpull:`2549`: [ENH] Clarify reconst_sh tutorial * :ghpull:`2550`: [ENH] Add sigma to DTI/DKI RESTORE workflow * :ghpull:`2551`: [MNT] Update minimal dependencies version * :ghpull:`2536`: Random colors fix in horizon * :ghpull:`2533`: [FIX] Docstring cleaning: wrong underline length... * :ghpull:`2342`: NF: q-space trajectory imaging * :ghpull:`2512`: Masking for affine registration * :ghpull:`2526`: TEST: Filter legacy SH bases warnings in tests * :ghpull:`2534`: TEST: Remove unnecessary `main` method definition in tests * :ghpull:`2532`: STYLE: Remove unused import statements * :ghpull:`2529`: STYLE: Remove unused import statements * :ghpull:`2528`: TEST: Remove legacy `nose`-related dead testing code * :ghpull:`2527`: TEST: Fix intermittent RUMBA test check failure * :ghpull:`2493`: Fury dependency resolution * :ghpull:`2522`: ENH: Miscellaneous cleanup * :ghpull:`2521`: DOC: Use GitHub actions status badge in README * :ghpull:`2420`: Documentation corrections * :ghpull:`2482`: ENH: Improve SH bases warning messages * :ghpull:`2423`: NF: rumba reconst * :ghpull:`2518`: Migrations from Azure Pipeline to Github Actions * :ghpull:`2515`: Default to False output for null streamlines in streamline_near_roi * :ghpull:`2513`: [MNT] Drop distutils * :ghpull:`2506`: Horizon FURY update * :ghpull:`2510`: Optimize sfm (reboot) * :ghpull:`2487`: ENH: Better error message * :ghpull:`2442`: [NF] Add Motion correction workflow * :ghpull:`2470`: Add utilities functions: radius curvature <--> maximum deviation angle * :ghpull:`2485`: DOC: Small updates. 
* :ghpull:`2481`: ENH: Import ABCs from `collections.abc` * :ghpull:`2480`: STYLE: Make `sklearn` import warning messages consistent * :ghpull:`2478`: ENH: Deal appropriately with user warnings * :ghpull:`2479`: STYLE: Improve style in misc files * :ghpull:`2475`: ENH: Fix `complex` type `NumPy` alias deprecation warnings * :ghpull:`2476`: ENH: Fix `dipy.io.bvectxt` deprecation warning * :ghpull:`2472`: ENH: Return unique invalid streamline removal indices * :ghpull:`2471`: DOC: Fix coding style guideline link * :ghpull:`2468`: [MNT] Use windows-latest on azure pipeline * :ghpull:`2467`: [ENH] Add fit_method option in DTI and DKI CLI * :ghpull:`2466`: deprecate dipy.io.bvectxt module * :ghpull:`2453`: make it compatible when number of volume is 2 * :ghpull:`2413`: Azure pipeline: from ubuntu 1604 to 2004 * :ghpull:`2447`: reduce_rois: Force input array type to bool to avoid bitwise or errors * :ghpull:`2444`: [DOC] : Added citation for IVIM dataset * :ghpull:`2434`: MAINT: Update import from ndimage * :ghpull:`2435`: BUG: Backward compat support for pipeline * :ghpull:`2436`: MAINT: Bump tolerance * :ghpull:`2438`: BUG: Fix misplaced comma in `warn()` call from `patch2self.py` * :ghpull:`2374`: ROIs visualizer * :ghpull:`2390`: NF: extend the align workflow with Rigid+IsoScaling and Rigid+Scaling * :ghpull:`2417`: OPT: Initialize `Shape` struct * :ghpull:`2419`: Fixes the default option in the command line for Patch2Self 'ridge' -> 'ols' * :ghpull:`2406`: Manage Approx_polygon_track with repeated points * :ghpull:`2411`: [FIX] `c_compress_streamline` discard identical points * :ghpull:`2416`: OPT: Prefer using a typed index to get the PMF value * :ghpull:`2415`: Implementation multi_voxel_fit progress bar * :ghpull:`2410`: [ENH] Improve Shore Tests * :ghpull:`2409`: NF - Sample PMF for an input position and direction * :ghpull:`2405`: Small correction on KFA * :ghpull:`2407`: from random to deterministic test for deform_streamlines * :ghpull:`2392`: Add decomposition * :ghpull:`2389`: [Fix] bundles_distances_mdf asymmetric values * :ghpull:`2368`: RF - Moved tracking.localtrack._local_tracker to DirectionGetter.generate_streamline. Issues (128): * :ghissue:`2561`: [FIX] Motion correction tutorial * :ghissue:`2123`: WIP: Residual Deep NN * :ghissue:`2520`: Resdnn inference * :ghissue:`2558`: BUG: Fix errant warning about starting_affine * :ghissue:`2557`: MAINT: Fix version * :ghissue:`2489`: MAINT: Get Python 3.10 binaries up on scipy-wheels-nightly * :ghissue:`2556`: [FIX] Update `dipy.segment` tutorials * :ghissue:`2554`: Support .vtp files * :ghissue:`2525`: Support Opening `.vtp` files * :ghissue:`2555`: Limit `peaks_from_model` number of processes in examples * :ghissue:`2539`: Adds utilities for embarrassingly parallel loops. * :ghissue:`2509`: Easy robustness for streamline_near_roi and near_roi for empty streamlines? 
* :ghissue:`2543`: StatefulTractogram.are_compatible compare data_per_point keys as list instead of set * :ghissue:`2545`: Stateful Tractogram DPS and DPP keys ordering * :ghissue:`2548`: Add timeout + concurrency to GHA * :ghissue:`2549`: [ENH] Clarify reconst_sh tutorial * :ghissue:`2546`: Confusing import in 'reconst_sh` * :ghissue:`2550`: [ENH] Add sigma to DTI/DKI RESTORE workflow * :ghissue:`2542`: DTI workflow should allow user-defined fitting method * :ghissue:`2551`: [MNT] Update minimal dependencies version * :ghissue:`2477`: Numpy min dependency update * :ghissue:`2541`: Issue with coverage and pytests for numpy.min() * :ghissue:`2507`: kernel died when use dipy.viz * :ghissue:`2536`: Random colors fix in horizon * :ghissue:`2533`: [FIX] Docstring cleaning: wrong underline length... * :ghissue:`2422`: WIP-Adding math in SLR tutorial * :ghissue:`2342`: NF: q-space trajectory imaging * :ghissue:`2512`: Masking for affine registration * :ghissue:`1969`: imaffine mask support * :ghissue:`2526`: TEST: Filter legacy SH bases warnings in tests * :ghissue:`2456`: Horizon tests failing * :ghissue:`2534`: TEST: Remove unnecessary `main` method definition in tests * :ghissue:`2532`: STYLE: Remove unused import statements * :ghissue:`2524`: Add concurrency + timeout to Github Actions (GHA) * :ghissue:`2529`: STYLE: Remove unused import statements * :ghissue:`2528`: TEST: Remove legacy `nose`-related dead testing code * :ghissue:`2527`: TEST: Fix intermittent RUMBA test check failure * :ghissue:`2493`: Fury dependency resolution * :ghissue:`2522`: ENH: Miscellaneous cleanup * :ghissue:`2521`: DOC: Use GitHub actions status badge in README * :ghissue:`2420`: Documentation corrections * :ghissue:`2482`: ENH: Improve SH bases warning messages * :ghissue:`2449`: Nonsense deprecation warning * :ghissue:`2423`: NF: rumba reconst * :ghissue:`2179`: NF: Complete masking implementation in affine registration with MI * :ghissue:`2518`: Migrations from Azure Pipeline to Github Actions * :ghissue:`2492`: Move to GitHub actions / reusable actions * :ghissue:`2515`: Default to False output for null streamlines in streamline_near_roi * :ghissue:`2497`: Remove python 3.6 from Azure pipelines * :ghissue:`2495`: Remove Distutils (deprecated) * :ghissue:`2513`: [MNT] Drop distutils * :ghissue:`2506`: Horizon FURY update * :ghissue:`2305`: [WIP] Brain Tumor Image Segmentation Code * :ghissue:`2499`: Problem generating Connectivity Matrix: "Slice step cannot be zero" * :ghissue:`2510`: Optimize sfm (reboot) * :ghissue:`2488`: Minimize memory footprint wherever possible, add joblib support for … * :ghissue:`2504`: Why are there many small dots on the fwdwi image? * :ghissue:`2502`: Can i read specific b-values from my own multishell data? * :ghissue:`2500`: MAP issue * :ghissue:`2490`: [BUG] MRI-CT alignment failure * :ghissue:`2487`: ENH: Better error message * :ghissue:`2402`: Dipy 1.4.1 breaks nipype.interfaces.dipy.dipy_to_nipype_interface * :ghissue:`2486`: Wrong doc in interpolation * :ghissue:`2442`: [NF] Add Motion correction workflow * :ghissue:`2470`: Add utilities functions: radius curvature <--> maximum deviation angle * :ghissue:`2485`: DOC: Small updates. 
* :ghissue:`2484`: [ENH] Add grid search to `AffineRegistration.optimize` * :ghissue:`2483`: [DOC] Stable/Latest Documentation Structure * :ghissue:`2481`: ENH: Import ABCs from `collections.abc` * :ghissue:`2480`: STYLE: Make `sklearn` import warning messages consistent * :ghissue:`2478`: ENH: Deal appropriately with user warnings * :ghissue:`2479`: STYLE: Improve style in misc files * :ghissue:`2475`: ENH: Fix `complex` type `NumPy` alias deprecation warnings * :ghissue:`2476`: ENH: Fix `dipy.io.bvectxt` deprecation warning * :ghissue:`2472`: ENH: Return unique invalid streamline removal indices * :ghissue:`2471`: DOC: Fix coding style guideline link * :ghissue:`2468`: [MNT] Use windows-latest on azure pipeline * :ghissue:`2467`: [ENH] Add fit_method option in DTI and DKI CLI * :ghissue:`2463`: DTI RESTORE on the CLI * :ghissue:`2466`: deprecate dipy.io.bvectxt module * :ghissue:`2460`: Deprecate and Remove dipy.io.bvectxt * :ghissue:`2429`: random_colors flag in dipy_horizon does not work as before * :ghissue:`2461`: Patch2Self: Less than 10 3D Volumes Bug * :ghissue:`2464`: Typo on the homepage * :ghissue:`2453`: make it compatible when number of volume is 2 * :ghissue:`2457`: Choosing sigma_diff and radius parameters for SyN registration * :ghissue:`2413`: Azure pipeline: from ubuntu 1604 to 2004 * :ghissue:`2454`: Can I show fiber with vtk? * :ghissue:`2446`: Use of bitwise or with non-bool inputs results in ufunc 'bitwise_or' error * :ghissue:`2447`: reduce_rois: Force input array type to bool to avoid bitwise or errors * :ghissue:`2444`: [DOC] : Added citation for IVIM dataset * :ghissue:`2443`: Citation for IVIM dataset not present in docs * :ghissue:`2434`: MAINT: Update import from ndimage * :ghissue:`2441`: Horizon error - disk position outside the slider line * :ghissue:`2435`: BUG: Backward compat support for pipeline * :ghissue:`2436`: MAINT: Bump tolerance * :ghissue:`2438`: BUG: Fix misplaced comma in `warn()` call from `patch2self.py` * :ghissue:`2430`: dipy.align.reslice * :ghissue:`2431`: dipy.align.reslice interpolation order for downsampling * :ghissue:`2432`: How to apply MI metric in dipy? * :ghissue:`2374`: ROIs visualizer * :ghissue:`2390`: NF: extend the align workflow with Rigid+IsoScaling and Rigid+Scaling * :ghissue:`2417`: OPT: Initialize `Shape` struct * :ghissue:`2419`: Fixes the default option in the command line for Patch2Self 'ridge' -> 'ols' * :ghissue:`2406`: Manage Approx_polygon_track with repeated points * :ghissue:`2314`: Approx_polygon_track with repeated points gives an error * :ghissue:`2411`: [FIX] `c_compress_streamline` discard identical points * :ghissue:`1805`: `c_compress_streamline` keeps identical points when it shouldn't * :ghissue:`2418`: kernel failure when importing mask from dipy.segment * :ghissue:`2416`: OPT: Prefer using a typed index to get the PMF value * :ghissue:`2415`: Implementation multi_voxel_fit progress bar * :ghissue:`2410`: [ENH] Improve Shore Tests * :ghissue:`365`: Code review items for `dipy.reconst.shore` * :ghissue:`2409`: NF - Sample PMF for an input position and direction * :ghissue:`2404`: Change affine in StatefulTractogram * :ghissue:`2405`: Small correction on KFA * :ghissue:`2407`: from random to deterministic test for deform_streamlines * :ghissue:`2392`: Add decomposition * :ghissue:`717`: Download each shell of the CENIR data separately? 
* :ghissue:`2209`: _pytest.pathlib.ImportPathMismatchError: * :ghissue:`1934`: Random lpca denoise * :ghissue:`2312`: DIPY open group meetings, Spring 2021 * :ghissue:`2383`: error in mcsd model fitting (DCPError) * :ghissue:`2391`: error performing cross-validation on diffusion HCP data * :ghissue:`2393`: Add a function to read streamline from the result generated by the command "probtrackx2" in FMRIB's Diffusion Toolbox * :ghissue:`2389`: [Fix] bundles_distances_mdf asymmetric values * :ghissue:`2310`: `bundles_distances_mdf` asymmetric values * :ghissue:`2368`: RF - Moved tracking.localtrack._local_tracker to DirectionGetter.generate_streamline. dipy-1.11.0/doc/release_notes/release1.6.rst000066400000000000000000000175231476546756600205750ustar00rootroot00000000000000.. _release1.6: ==================================== Release notes for DIPY version 1.6 ==================================== GitHub stats for 2022/03/11 - 2023/01/12 (tag: 1.5.0) These lists are automatically generated, and may be incomplete or contain duplicates. The following 15 authors contributed 242 commits. * Ariel Rokem * David Romero-Bascones * Deneb Boito * Eleftherios Garyfallidis * Emmanuelle Renauld * Eric Larson * Francois Rheault * Jacob Roberts * Jon Haitz Legarreta Gorroño * Malinda Dilhara * Omar Ocegueda * Sam Coveney * Serge Koudoro * Shreyas Fadnavis * Tom Dela Haije We closed a total of 116 issues, 41 pull requests and 75 regular issues; this is the full list (generated with the script :file:`tools/github_stats.py`): Pull Requests (41): * :ghpull:`2710`: small fixes for tutorials * :ghpull:`2711`: [FIX] use tempfile module instead of nibabel for TemporaryDirectory * :ghpull:`2702`: One more small fix to the hcp fetcher. * :ghpull:`2704`: MAINT: Fixes for 3.11 and sdist * :ghpull:`2701`: FIX: Don't print a progbar for downloading unless you need to. * :ghpull:`2700`: [FIX] incompatible type in numpy array * :ghpull:`2694`: NF: Add RUMBA-SD reconstruction workflow * :ghpull:`2697`: NF: Adds a fetcher for HCP data. * :ghpull:`2692`: RF: Improve multi-shell RUMBA test WM response parameterization * :ghpull:`2693`: TEST: Fix `Node.js` warnings linked to GitHub actions * :ghpull:`2687`: DOC: Fix typos in SH theory documentation page * :ghpull:`2690`: STYLE: Remove unnecessary b-val print in CSD reconstruction flow * :ghpull:`2688`: DOC: Document Dirac delta generation method missing member * :ghpull:`2683`: DOC: Adds missing documentation for a kwarg. 
* :ghpull:`2668`: ENH: Make the DTI fit CLI metric saving message accurate * :ghpull:`2674`: Improve doc (tensor values' order) * :ghpull:`2670`: ENH: Allow non-default parameters to `Patch2Self` CLI * :ghpull:`2672`: DOC: Miscellaneous docstring fixes * :ghpull:`2669`: DOC: Remove inaccurate `patch2self` docstring default values * :ghpull:`2664`: DOC: Fix DTI fit CLI docstring * :ghpull:`2553`: NF: Unbiased groupwise linear bundle registration * :ghpull:`2369`: Transform points with DiffeormorphicMap * :ghpull:`2631`: [FIX] Allow patch size parameter to be an int on denoise Workflow * :ghpull:`2630`: [DOC] Remove search index * :ghpull:`2629`: [FIX ] Handle save VTP * :ghpull:`2618`: Use np.linalg.multi_dot instead of multiple np.dot routines * :ghpull:`2606`: STYLE: Fix miscellaneous Python warnings * :ghpull:`2600`: Pin Ray * :ghpull:`2531`: NF: MAP+ constraints * :ghpull:`2589`: Switch from using nibabel InTemporaryDirectory to standard library tmpfile module * :ghpull:`2577`: Add positivity constraints to QTI * :ghpull:`2595`: Temporary skip Cython 0.29.29 * :ghpull:`2591`: STYLE: Avoid array-like mutable default argument values * :ghpull:`2592`: STYLE: Fix miscellaneous Python warnings * :ghpull:`2579`: Generalized PCA to less than 3 spatial dims * :ghpull:`2584`: transform_streamlines changes dtype to float64 * :ghpull:`2566`: [FIX] Update tests for the deprecated `dipy.io.bvectxt` module * :ghpull:`2581`: Fix logger in SFT * :ghpull:`2580`: DOC: Fix typos and grammar * :ghpull:`2576`: DOC: Documentation fixes * :ghpull:`2568`: DOC: Fix the docstring of `write_mapping` Issues (75): * :ghissue:`2710`: small fixes for tutorials * :ghissue:`2711`: [FIX] use tempfile module instead of nibabel for TemporaryDirectory * :ghissue:`2709`: DiffeomorphicMap object on github not the same as recent release * :ghissue:`2708`: Provide the dataset of this code * :ghissue:`2699`: WIP: Single shell/noreg redux * :ghissue:`2702`: One more small fix to the hcp fetcher. * :ghissue:`2704`: MAINT: Fixes for 3.11 and sdist * :ghissue:`2701`: FIX: Don't print a progbar for downloading unless you need to. * :ghissue:`2700`: [FIX] incompatible type in numpy array * :ghissue:`2694`: NF: Add RUMBA-SD reconstruction workflow * :ghissue:`2696`: Port HCP fetcher from pyAFQ into here * :ghissue:`2697`: NF: Adds a fetcher for HCP data. * :ghissue:`2692`: RF: Improve multi-shell RUMBA test WM response parameterization * :ghissue:`2693`: TEST: Fix `Node.js` warnings linked to GitHub actions * :ghissue:`1418`: Adding parallel_voxel_fit decorator * :ghissue:`2687`: DOC: Fix typos in SH theory documentation page * :ghissue:`2690`: STYLE: Remove unnecessary b-val print in CSD reconstruction flow * :ghissue:`2688`: DOC: Document Dirac delta generation method missing member * :ghissue:`2683`: DOC: Adds missing documentation for a kwarg. * :ghissue:`2679`: Problems with a .nii.gz file when loading and floating * :ghissue:`2676`: Does ```convert_sh_to_legacy``` work as intended ? 
* :ghissue:`2668`: ENH: Make the DTI fit CLI metric saving message accurate * :ghissue:`2674`: Improve doc (tensor values' order) * :ghissue:`2670`: ENH: Allow non-default parameters to `Patch2Self` CLI * :ghissue:`2673`: ENH: Doc: DTI format * :ghissue:`2667`: Defaults for Patch2Self * :ghissue:`2672`: DOC: Miscellaneous docstring fixes * :ghissue:`2669`: DOC: Remove inaccurate `patch2self` docstring default values * :ghissue:`2662`: Update cmd_line dipy_fit_dti * :ghissue:`2664`: DOC: Fix DTI fit CLI docstring * :ghissue:`2658`: Any chance of arm64 wheels for Mac / Python 3.10? * :ghissue:`2659`: Angle var * :ghissue:`2649`: IVIM VarPro fitting running error * :ghissue:`2553`: NF: Unbiased groupwise linear bundle registration * :ghissue:`2424`: Transforming individual points with SDR * :ghissue:`2327`: Diffeomorphic transformation of coordinates * :ghissue:`2313`: deform_streamlines for wholebrain tractogram doesn't function properly * :ghissue:`936`: WIP: coordinate mapping with DiffeomorphicMap * :ghissue:`2369`: Transform points with DiffeormorphicMap * :ghissue:`2616`: dti TensorModel fitting issue * :ghissue:`2627`: Free-Water Analysis Gradient Table Error * :ghissue:`2635`: get_flexi_tvis_affine(tvis_hdr, nii_aff) * :ghissue:`2634`: Fix small difference between pdfs dense 2d/3d * :ghissue:`2559`: CLI denoise patch_size error * :ghissue:`2631`: [FIX] Allow patch size parameter to be an int on denoise Workflow * :ghissue:`2564`: Search, index, module links not working in the doc * :ghissue:`2630`: [DOC] Remove search index * :ghissue:`2572`: Saving vtp file error * :ghissue:`2629`: [FIX ] Handle save VTP * :ghissue:`2622`: Error with Illustrating Electrostatic Repulsion * :ghissue:`2618`: Use np.linalg.multi_dot instead of multiple np.dot routines * :ghissue:`2617`: DKI model fit shape broadcast error * :ghissue:`2606`: STYLE: Fix miscellaneous Python warnings * :ghissue:`2602`: Dear experts, how can I set different number of fiber tracks before generating streamlines? * :ghissue:`2603`: Dear experts, how to save dti_peaks? * :ghissue:`2600`: Pin Ray * :ghissue:`2531`: NF: MAP+ constraints * :ghissue:`2587`: Thread safety concerns with reliance on `nibabel.tmpdirs.InTemporaryDirectory` * :ghissue:`2589`: Switch from using nibabel InTemporaryDirectory to standard library tmpfile module * :ghissue:`2577`: Add positivity constraints to QTI * :ghissue:`2450`: [NF] New tracking Algorithm: Parallel Transport Tractography (PTT) * :ghissue:`2594`: DIPY compilation fails with the last release of cython (0.29.29) * :ghissue:`2595`: Temporary skip Cython 0.29.29 * :ghissue:`2591`: STYLE: Avoid array-like mutable default argument values * :ghissue:`2592`: STYLE: Fix miscellaneous Python warnings * :ghissue:`2579`: Generalized PCA to less than 3 spatial dims * :ghissue:`2584`: transform_streamlines changes dtype to float64 * :ghissue:`2566`: [FIX] Update tests for the deprecated `dipy.io.bvectxt` module * :ghissue:`2581`: Fix logger in SFT * :ghissue:`2580`: DOC: Fix typos and grammar * :ghissue:`2576`: DOC: Documentation fixes * :ghissue:`2573`: Cannot Import "Feature" From "dipy.segment.metric" * :ghissue:`2568`: DOC: Fix the docstring of `write_mapping` * :ghissue:`2567`: This should be a `1` * :ghissue:`2565`: compress_streamlines() not available anymore dipy-1.11.0/doc/release_notes/release1.7.rst000066400000000000000000000133161476546756600205720ustar00rootroot00000000000000.. 
_release1.7: ==================================== Release notes for DIPY version 1.7 ==================================== GitHub stats for 2023/01/16 - 2023/04/23 (tag: 1.6.0) These lists are automatically generated, and may be incomplete or contain duplicates. The following 21 authors contributed 496 commits. * Ariel Rokem * Bramsh Qamar * Charles Poirier * Dogu Baran Aydogan * Eleftherios Garyfallidis * Etienne St-Onge * Francois Rheault * Gabriel Girard * Javier Guaje * Jong Sung Park * Martino Pilia * Mitesh Gulecha * Rahul Ubale * Sam Coveney * Serge Koudoro * Shilpi * Tom Dela Haije * Yaroslav Halchenko * karp2601 * lb-97 * ujjwal-shekhar We closed a total of 87 issues, 34 pull requests and 53 regular issues; this is the full list (generated with the script :file:`tools/github_stats.py`): Pull Requests (34): * :ghpull:`2765`: Sphinx-gallery integration * :ghpull:`2788`: Remove NoseTester * :ghpull:`2768`: BundleWarp, streamline-based nonlinear registration of white matter tracts * :ghpull:`2749`: adding a new getitem method * :ghpull:`2744`: Horizon Tabs * :ghpull:`2785`: EVAC+ workflow * :ghpull:`2540`: Updates the default value of rm_small_clusters variable in slr_with_qbx function * :ghpull:`2609`: NF: DKI+ constraints * :ghpull:`2596`: NF - Parallel Transport Tractography (PTT) * :ghpull:`2740`: Integration of Denoising Method for DWI with 1D CNN * :ghpull:`2773`: Including EVAC+ and util function * :ghpull:`2783`: Fix test_roi_images * :ghpull:`2782`: [MTN] Fix CI codecov upload * :ghpull:`2780`: Added option to set Legacy=False in PmfGenDirectionGetter.from_shcoeff * :ghpull:`2778`: BF: QBX and merge clusters should return streamlines * :ghpull:`2767`: NF - add utility functions to fast_numpy * :ghpull:`2626`: Adding Synb0 * :ghpull:`2763`: Update dki.py * :ghpull:`2751`: [ENH] Asymmetric peak_directions * :ghpull:`2762`: Remove Python 3.7 from CI * :ghpull:`2753`: Update adaptive_soft_matching.py * :ghpull:`2722`: fixed pca for features > samples, and fixed pca_noise_estimate * :ghpull:`2741`: Fixing solve_qp error * :ghpull:`2739`: codespell: config, workflow, typos fixed * :ghpull:`2590`: Fast Streamline Search algorithm implementation * :ghpull:`2733`: Update SynRegistrationFlow for #2648 * :ghpull:`2723`: TRX integration, requires new attributes for SFT * :ghpull:`2727`: Fix EXTRAS_REQUIRE * :ghpull:`2725`: DOC - Update RUMBA-SD data requirement * :ghpull:`2716`: NF - Added cython utility functions * :ghpull:`2717`: fixed bug for non-linear fitting with masks * :ghpull:`2628`: resolve some CI's script typo * :ghpull:`2713`: Empty vtk support * :ghpull:`2625`: [Upcoming] Release 1.6.0 Issues (53): * :ghissue:`2537`: Importing an example in another example - doc * :ghissue:`1778`: jupyter notebooks from examples * :ghissue:`720`: Auto-convert the examples into IPython notebooks * :ghissue:`1990`: [WIP] Sphinx-Gallery integration * :ghissue:`2765`: Sphinx-gallery integration * :ghissue:`2788`: Remove NoseTester * :ghissue:`2768`: BundleWarp, streamline-based nonlinear registration of white matter tracts * :ghissue:`1073`: Add a method to slice gtab using bvals (Eg : gtab[bvals>200]) * :ghissue:`2749`: adding a new getitem method * :ghissue:`2744`: Horizon Tabs * :ghissue:`2785`: EVAC+ workflow * :ghissue:`2530`: slr_with_qbx breaks when bundle has only one streamline * :ghissue:`2540`: Updates the default value of rm_small_clusters variable in slr_with_qbx function * :ghissue:`2609`: NF: DKI+ constraints * :ghissue:`2596`: NF - Parallel Transport Tractography (PTT) * 
:ghissue:`2756`: Remove unused inplace param from gibbs_removal() * :ghissue:`2754`: [Question] dipy/denoise/gibbs.py * :ghissue:`2740`: Integration of Denoising Method for DWI with 1D CNN * :ghissue:`2773`: Including EVAC+ and util function * :ghissue:`2783`: Fix test_roi_images * :ghissue:`2782`: [MTN] Fix CI codecov upload * :ghissue:`2775`: NF - Add option to set Legacy=False in PmfGenDirectionGetter.from_shcoeff(.) * :ghissue:`2780`: Added option to set Legacy=False in PmfGenDirectionGetter.from_shcoeff * :ghissue:`2778`: BF: QBX and merge clusters should return streamlines * :ghissue:`2767`: NF - add utility functions to fast_numpy * :ghissue:`2626`: Adding Synb0 * :ghissue:`2770`: BF - update viz.py * :ghissue:`2763`: Update dki.py * :ghissue:`2751`: [ENH] Asymmetric peak_directions * :ghissue:`2762`: Remove Python 3.7 from CI * :ghissue:`2753`: Update adaptive_soft_matching.py * :ghissue:`2722`: fixed pca for features > samples, and fixed pca_noise_estimate * :ghissue:`2750`: Adding tests for gradient.py. * :ghissue:`2741`: Fixing solve_qp error * :ghissue:`2745`: Dipy Segmentation Core Dumped - Windows.Record * :ghissue:`2742`: ValueError: slice step cannot be zero * :ghissue:`2739`: codespell: config, workflow, typos fixed * :ghissue:`2590`: Fast Streamline Search algorithm implementation * :ghissue:`2733`: Update SynRegistrationFlow for #2648 * :ghissue:`2723`: TRX integration, requires new attributes for SFT * :ghissue:`2729`: Numpy Version Incompatibility, AttributeError in dipy.align * :ghissue:`2726`: Setup broken on Python 3.10.9 setuptools 67.2.0 * :ghissue:`2727`: Fix EXTRAS_REQUIRE * :ghissue:`2725`: DOC - Update RUMBA-SD data requirement * :ghissue:`2707`: Fixdenoise * :ghissue:`2575`: [WIP] Define curvature and stepsize as default parameter instead of max_angle for tractography * :ghissue:`2414`: AffineMap.transform with option: interpolation='nearest' returns: "TypeError: No matching signature found" * :ghissue:`2716`: NF - Added cython utility functions * :ghissue:`2717`: fixed bug for non-linear fitting with masks * :ghissue:`2628`: resolve some CI's script typo * :ghissue:`2713`: Empty vtk support * :ghissue:`2599`: Support empty ArraySequence in transform_streamlines * :ghissue:`2625`: [Upcoming] Release 1.6.0 dipy-1.11.0/doc/release_notes/release1.8.rst000066400000000000000000000474151476546756600206020ustar00rootroot00000000000000.. _release1.8: ==================================== Release notes for DIPY version 1.8 ==================================== GitHub stats for 2023/04/23 - 2023/12/13 (tag: 1.7.0) These lists are automatically generated, and may be incomplete or contain duplicates. The following 28 authors contributed 733 commits. * Ariel Rokem * Atharva Shah * Bramsh Qamar * Charles Poirier * Dimitri Papadopoulos * Eleftherios Garyfallidis * Emmanuelle Renauld * Eric Larson * Francois Rheault * Gabriel Girard * Javier Guaje * John Kruper * Jon Haitz Legarreta Gorroño * Jong Sung Park * Maharshi Gor * Michael R. 
Crusoe * Nicolas Delinte * Paul Camacho * Philippe Karan * Rafael Neto Henriques * Sam Coveney * Samuel St-Jean * Serge Koudoro * Shilpi Prasad * Theodore Ni * Vara Lakshmi Bayanagari * dependabot[bot] * Étienne Mollier We closed a total of 327 issues, 130 pull requests and 197 regular issues; this is the full list (generated with the script :file:`tools/github_stats.py`): Pull Requests (129): * :ghpull:`3009`: [DOC] Update installation instruction [ci skip] * :ghpull:`2999`: TEST: Set explicitly `CLARABEL` as the CVXPY solver * :ghpull:`2943`: BF: Fix bundlewarp shape analysis profile values for all `False` mask * :ghpull:`2992`: RGB support for images * :ghpull:`2989`: BF: Mask `1` values in leveraged, residual matrix computation * :ghpull:`3007`: [RF] Define minimum version for some optional packages * :ghpull:`3006`: [NF] Introduce minimum version in `optional_package` * :ghpull:`3002`: [RF] Improve scripts and import management * :ghpull:`3005`: Bump actions/setup-python from 4 to 5 * :ghpull:`3004`: TEST: Check and filter PCA dimensionality problem warning * :ghpull:`2996`: RF: Fix b0 threshold warnings * :ghpull:`2995`: [MTN] remove custom module `_importlib` * :ghpull:`2998`: TEST: Filter SciPy 0.18.0 1-D affine transform array warning in test * :ghpull:`3001`: RF: Create PCA denoising utils methods * :ghpull:`3000`: RF: Prefer raising `sklearn` package warnings when required * :ghpull:`2997`: TEST: Filter warning about resorting to `OLS` fitting method * :ghpull:`2991`: MTN: fix byte swap ordering for numpy 2.0 * :ghpull:`2987`: STYLE: Make `cvxpy`-dependent test checking consistent in `test_mcsd` * :ghpull:`2990`: STYLE: Use `.astype()` on uninitialized array casting * :ghpull:`2984`: DOC: Miscellaneous documentation improvements * :ghpull:`2988`: STYLE: Remove unused import in `dipy/core/gradients.py` * :ghpull:`2985`: Bump conda-incubator/setup-miniconda from 2 to 3 * :ghpull:`2986`: DOC: Fix typos and grammar in `lcr_matrix` function documentation * :ghpull:`2983`: STYLE: Fix boolean variable negation warnings * :ghpull:`2981`: [MTN] replace the deprecated sctypes * :ghpull:`2980`: [FIX] int_t to npy_intp * :ghpull:`2978`: DOC: Fix issue template [ci skip] * :ghpull:`2976`: [MTN] Update index url for PRE-Wheels dependencies * :ghpull:`2975`: connectivity_matrix code speed up * :ghpull:`2715`: Enable building DIPY with Meson * :ghpull:`2964`: RF: Moving to numpy.random.Generator from numpy.random * :ghpull:`2963`: NF: Updating EVAC+ model and adding util function * :ghpull:`2974`: [MTN] Disable github check annotations * :ghpull:`2956`: Adding support for btens to multi_shell_fiber_response function * :ghpull:`2969`: bugfix for --force issue * :ghpull:`2967`: Feature/opacity checkbox * :ghpull:`2966`: volume slider fix * :ghpull:`2958`: TEST: Filter legacy SH bases warnings in bootstrap direction getter test * :ghpull:`2944`: [DOC] Remove `..figure` directive in examples * :ghpull:`2961`: fixes for pep8 in previous PR * :ghpull:`2922`: synchronized-slicers for same size * :ghpull:`2924`: Additional check for horizon * :ghpull:`2957`: STYLE: Fix typo in `msdki` reconstruction test name * :ghpull:`2941`: TEST: Fix NumPy array to scalar value conversion warning * :ghpull:`2932`: OPT - Optimized pmfgen * :ghpull:`2929`: Stabilizing some test functions with set random seeds * :ghpull:`2954`: TEST: Bring back Python 3.8 testing to GHA workflows * :ghpull:`2946`: RF: Refactor duplicate code in `qtdmri` to `mapmri` coeff computation * :ghpull:`2947`: RF - BootDirectionGetter * 
:ghpull:`2945`: DOC: Fix miscellaneous docstrings * :ghpull:`2940`: TEST: Filter legacy SH bases warnings in PTT direction getter test * :ghpull:`2938`: TEST: Adding random generator with seed to icm tests * :ghpull:`2942`: OPT: Delegate to NumPy creating a random matrix * :ghpull:`2939`: DOC: Update jhlegarreta affiliation in developers * :ghpull:`2933`: fixed bug with fit extra returns * :ghpull:`2930`: Update of the tutorial apply image-based registration to streamlines * :ghpull:`2759`: TRX integration * :ghpull:`2923`: [DOC] Large documentation update * :ghpull:`2825`: NF - add initial directions to local tracking * :ghpull:`2892`: BF - fixed random generator seed `value too large to convert to int` error * :ghpull:`2926`: ENH: MSMT CSD module unique b-val tolerance parameter improvements * :ghpull:`2927`: DOC: Fix package name in documentation config file comment * :ghpull:`2925`: STYLE: Fix miscellaneous Numpy warnings * :ghpull:`2781`: Small fixes in functions * :ghpull:`2910`: STYLE: f-strings * :ghpull:`2921`: [FIX] tiny fix to HBN fetcher to also grab T1 for each subject * :ghpull:`2906`: [FIX] Pin scipy for the conda failing CI's * :ghpull:`2920`: Mark Python3 files as such * :ghpull:`2919`: fix various grammar errors * :ghpull:`2915`: DOC: http:// → https:// * :ghpull:`2916`: Build(deps): Bump codespell-project/actions-codespell from 1 to 2 * :ghpull:`2914`: GitHub Actions * :ghpull:`2816`: Correlation Tensor Imaging * :ghpull:`2912`: MAINT: the symbol for second is s, not sec. * :ghpull:`2902`: Short import for horizon * :ghpull:`2904`: Apply refurb suggestions * :ghpull:`2899`: DOC: Fix typos newly found by codespell * :ghpull:`2891`: Apply pyupgrade suggestions * :ghpull:`2898`: Remove zip operation in transform_tracking_output * :ghpull:`2897`: BF: Bug when downloading hbn data. * :ghpull:`2893`: Remove consecutive duplicate words * :ghpull:`2894`: Get rid of trailing spaces in text files * :ghpull:`2889`: Apply pyupgrade suggestions * :ghpull:`2888`: Fix typos newly found by codespell * :ghpull:`2887`: Update shm.py * :ghpull:`2814`: [Fix] Horizon: Binary image loading * :ghpull:`2885`: [ENH] Add minimum length to streamline generator * :ghpull:`2875`: Increased PTT performances * :ghpull:`2879`: Add fetcher for a sample CTI dataset * :ghpull:`2882`: Change license_file to license_files in setup.cfg * :ghpull:`2804`: Adding diffusion data descriptions from references to reconstruction table * :ghpull:`2730`: Fix weighted Jacobians, return extra fit data, add adjacency function * :ghpull:`2821`: NF - added pft min wm parameter * :ghpull:`2876`: Introduction of pydata theme for sphinx * :ghpull:`2846`: Vara's Week 8 & Week 9 blog * :ghpull:`2870`: Vara's Week 12 and Week 13 blog * :ghpull:`2865`: Shilpi's Week0&Week1 combined * :ghpull:`2868`: Submitting Week13.rst file * :ghpull:`2871`: Corrected paths to static files * :ghpull:`2863`: Shilpi's Week12 Blog. * :ghpull:`2856`: Adding Week 11 Blog * :ghpull:`2849`: Shilpi's 10th Blog * :ghpull:`2847`: Pushing Week 8 + 9 blog * :ghpull:`2836`: Shilpi's Week 5 blog * :ghpull:`2864`: Change internal space/origin when using sft.to_x() with an empty sft. 
* :ghpull:`2806`: BF - initial backward orientation of local tracking * :ghpull:`2862`: Vara's Week 10 & Week 11 blog * :ghpull:`2843`: Pushing 7th_blog * :ghpull:`2841`: Vara's Week 6 & Week 7 blog * :ghpull:`2835`: Vara's week 5 blog * :ghpull:`2829`: Pushing 3rd blog * :ghpull:`2828`: Vara's week 3 blog * :ghpull:`2860`: Updates HCP fetcher dataset_description to be compatible with current BIDS * :ghpull:`2831`: Vara's week 4 blog * :ghpull:`2833`: Pushing 4thBlog * :ghpull:`2840`: Shilpi's Week6 Blog * :ghpull:`2839`: make order_from_ncoef return an int * :ghpull:`2844`: doc/tools/: fix trailing dot in version number. * :ghpull:`2832`: BundleWarp: added tutorial and fixed a small bug * :ghpull:`2818`: Vara's week 0 blog * :ghpull:`2823`: submitting clearn PR for 2nd blog * :ghpull:`2813`: First Blog * :ghpull:`2808`: [DOC] Fix cross referencing * :ghpull:`2798`: Move evac+ to new module `nn` * :ghpull:`2797`: remove Nibabel InTemporaryDirectory * :ghpull:`2800`: Remove the Deprecating nisext * :ghpull:`2795`: bump dependencies minimal version * :ghpull:`2792`: Add `patch_radius` parameter to Patch2Self denoise workflow * :ghpull:`2761`: [UPCOMING] Release 1.7.0 - workshop release Issues (197): * :ghissue:`3009`: [DOC] Update installation instruction [ci skip] * :ghissue:`2999`: TEST: Set explicitly `CLARABEL` as the CVXPY solver * :ghissue:`2943`: BF: Fix bundlewarp shape analysis profile values for all `False` mask * :ghissue:`2992`: RGB support for images * :ghissue:`2989`: BF: Mask `1` values in leveraged, residual matrix computation * :ghissue:`3007`: [RF] Define minimum version for some optional packages * :ghissue:`3006`: [NF] Introduce minimum version in `optional_package` * :ghissue:`1256`: script path can not be found on OSX * :ghissue:`3002`: [RF] Improve scripts and import management * :ghissue:`3005`: Bump actions/setup-python from 4 to 5 * :ghissue:`3004`: TEST: Check and filter PCA dimensionality problem warning * :ghissue:`2996`: RF: Fix b0 threshold warnings * :ghissue:`2995`: [MTN] remove custom module `_importlib` * :ghissue:`2998`: TEST: Filter SciPy 0.18.0 1-D affine transform array warning in test * :ghissue:`3001`: RF: Create PCA denoising utils methods * :ghissue:`3000`: RF: Prefer raising `sklearn` package warnings when required * :ghissue:`2997`: TEST: Filter warning about resorting to `OLS` fitting method * :ghissue:`2979`: Prerelease wheels not NumPy 2.0.0.dev compatible * :ghissue:`2991`: MTN: fix byte swap ordering for numpy 2.0 * :ghissue:`2987`: STYLE: Make `cvxpy`-dependent test checking consistent in `test_mcsd` * :ghissue:`2990`: STYLE: Use `.astype()` on uninitialized array casting * :ghissue:`2984`: DOC: Miscellaneous documentation improvements * :ghissue:`2988`: STYLE: Remove unused import in `dipy/core/gradients.py` * :ghissue:`2985`: Bump conda-incubator/setup-miniconda from 2 to 3 * :ghissue:`2986`: DOC: Fix typos and grammar in `lcr_matrix` function documentation * :ghissue:`2905`: base tests * :ghissue:`2983`: STYLE: Fix boolean variable negation warnings * :ghissue:`2981`: [MTN] replace the deprecated sctypes * :ghissue:`2980`: [FIX] int_t to npy_intp * :ghissue:`2978`: DOC: Fix issue template [ci skip] * :ghissue:`2976`: [MTN] Update index url for PRE-Wheels dependencies * :ghissue:`2975`: connectivity_matrix code speed up * :ghissue:`2514`: Reshape our packaging system * :ghissue:`2715`: Enable building DIPY with Meson * :ghissue:`2964`: RF: Moving to numpy.random.Generator from numpy.random * :ghissue:`2736`: dipy_horizon needs --force 
option if there is tmp.png * :ghissue:`2960`: add type annotation in io module * :ghissue:`2803`: Type annotations integration * :ghissue:`2963`: NF: Updating EVAC+ model and adding util function * :ghissue:`2974`: [MTN] Disable github check annotations * :ghissue:`2956`: Adding support for btens to multi_shell_fiber_response function * :ghissue:`2969`: bugfix for --force issue * :ghissue:`2967`: Feature/opacity checkbox * :ghissue:`2965`: Pip installation issues with python 3.12 * :ghissue:`2968`: Pip installation issues with python 3.12 * :ghissue:`2966`: volume slider fix * :ghissue:`2958`: TEST: Filter legacy SH bases warnings in bootstrap direction getter test * :ghissue:`2801`: Some left-overs from sphinx-gallery conversion * :ghissue:`2944`: [DOC] Remove `..figure` directive in examples * :ghissue:`2961`: fixes for pep8 in previous PR * :ghissue:`2922`: synchronized-slicers for same size * :ghissue:`2878`: DIPY reinstall doesn't automatically update needed dependencies * :ghissue:`2924`: Additional check for horizon * :ghissue:`2957`: STYLE: Fix typo in `msdki` reconstruction test name * :ghissue:`2941`: TEST: Fix NumPy array to scalar value conversion warning * :ghissue:`2932`: OPT - Optimized pmfgen * :ghissue:`2929`: Stabilizing some test functions with set random seeds * :ghissue:`2954`: TEST: Bring back Python 3.8 testing to GHA workflows * :ghissue:`2953`: [WIP] Nlmeans update * :ghissue:`2946`: RF: Refactor duplicate code in `qtdmri` to `mapmri` coeff computation * :ghissue:`2955`: set_number_of_points function not found for dipy 1.7.0 * :ghissue:`2947`: RF - BootDirectionGetter * :ghissue:`2952`: Delete dipy/denoise/nlmeans.py * :ghissue:`2949`: HBN fetcher failed * :ghissue:`2945`: DOC: Fix miscellaneous docstrings * :ghissue:`718`: Create an example of multi b-value SFM * :ghissue:`2523`: Doc generation failed * :ghissue:`2940`: TEST: Filter legacy SH bases warnings in PTT direction getter test * :ghissue:`2928`: `test_icm_square` failing on and off * :ghissue:`2938`: TEST: Adding random generator with seed to icm tests * :ghissue:`2942`: OPT: Delegate to NumPy creating a random matrix * :ghissue:`2939`: DOC: Update jhlegarreta affiliation in developers * :ghissue:`2933`: fixed bug with fit extra returns * :ghissue:`2936`: Automatic Fiber Bundle Extraction with RecoBundles in DIPY 1.7 broken? * :ghissue:`2934`: demo code not working * :ghissue:`2787`: Adds a pyproject file. * :ghissue:`2786`: "Image based streamlines_registration: unable to warp streamlines into template" * :ghissue:`2400`: Applying image-based deformations to streamlines example * :ghissue:`2703`: Image based streamlines_registration: unable to warp streamlines into template space * :ghissue:`2930`: Update of the tutorial apply image-based registration to streamlines * :ghissue:`2759`: TRX integration * :ghissue:`2931`: Add caption to sphinx gallery figure * :ghissue:`2560`: MCSD Tutorial failed with `cvxpy>=1.1.15` * :ghissue:`2794`: Add a search box to the DIPY documentation * :ghissue:`2815`: Reconstruction table of content doesn't connect to MAP+ * :ghissue:`2923`: [DOC] Large documentation update * :ghissue:`2790`: DTI fitting function with NLLS method raises an error. 
* :ghissue:`2872`: Website image (not showing up or wrong tag showing) * :ghissue:`2884`: WIP: trx integration * :ghissue:`2825`: NF - add initial directions to local tracking * :ghissue:`2892`: BF - fixed random generator seed `value too large to convert to int` error * :ghissue:`2926`: ENH: MSMT CSD module unique b-val tolerance parameter improvements * :ghissue:`2927`: DOC: Fix package name in documentation config file comment * :ghissue:`2925`: STYLE: Fix miscellaneous Numpy warnings * :ghissue:`2777`: Error using dipy_motion_correct * :ghissue:`2781`: Small fixes in functions * :ghissue:`2648`: Issues with dipy_align_syn * :ghissue:`2900`: format → f-strings? * :ghissue:`2910`: STYLE: f-strings * :ghissue:`2921`: [FIX] tiny fix to HBN fetcher to also grab T1 for each subject * :ghissue:`2906`: [FIX] Pin scipy for the conda failing CI's * :ghissue:`2920`: Mark Python3 files as such * :ghissue:`2919`: fix various grammar errors * :ghissue:`2915`: DOC: http:// → https:// * :ghissue:`2896`: Interactive examples for dipy * :ghissue:`2901`: patch2self question * :ghissue:`2916`: Build(deps): Bump codespell-project/actions-codespell from 1 to 2 * :ghissue:`2914`: GitHub Actions * :ghissue:`2816`: Correlation Tensor Imaging * :ghissue:`2912`: MAINT: the symbol for second is s, not sec. * :ghissue:`2913`: DOC: fix links * :ghissue:`2902`: Short import for horizon * :ghissue:`2908`: Voxel correspondence between Non-Linearly aligned Volumes * :ghissue:`2890`: Attempt to fix error in conda jobs * :ghissue:`2907`: Temp - Gab PR * :ghissue:`2904`: Apply refurb suggestions * :ghissue:`2903`: Typo in the skills required section (Project 2) of Project Ideas * :ghissue:`2899`: DOC: Fix typos newly found by codespell * :ghissue:`2891`: Apply pyupgrade suggestions * :ghissue:`2898`: Remove zip operation in transform_tracking_output * :ghissue:`2897`: BF: Bug when downloading hbn data. * :ghissue:`2893`: Remove consecutive duplicate words * :ghissue:`2894`: Get rid of trailing spaces in text files * :ghissue:`2889`: Apply pyupgrade suggestions * :ghissue:`2888`: Fix typos newly found by codespell * :ghissue:`2887`: Update shm.py * :ghissue:`2814`: [Fix] Horizon: Binary image loading * :ghissue:`2885`: [ENH] Add minimum length to streamline generator * :ghissue:`1372`: Change direction getter dictionary keys from floats[3] to int * :ghissue:`2805`: Incorrect initial direction for the backward segment of local tracking * :ghissue:`2875`: Increased PTT performances * :ghissue:`2883`: Adding last,Week14Blog * :ghissue:`2879`: Add fetcher for a sample CTI dataset * :ghissue:`2769`: DOC example for data_per_streamline usage * :ghissue:`2774`: Added a tutorial in doc folder for saving labels. 
* :ghissue:`2882`: Change license_file to license_files in setup.cfg * :ghissue:`2881`: Adding fetcher in the test_file for #2879 * :ghissue:`2867`: Bug in PFT when changing the random function * :ghissue:`2804`: Adding diffusion data descriptions from references to reconstruction table * :ghissue:`2820`: fixed bug for nlls fitting * :ghissue:`2746`: Weighted Non-Linear Fitting may be wrong * :ghissue:`2730`: Fix weighted Jacobians, return extra fit data, add adjacency function * :ghissue:`2821`: NF - added pft min wm parameter * :ghissue:`2876`: Introduction of pydata theme for sphinx * :ghissue:`2846`: Vara's Week 8 & Week 9 blog * :ghissue:`2870`: Vara's Week 12 and Week 13 blog * :ghissue:`2865`: Shilpi's Week0&Week1 combined * :ghissue:`2868`: Submitting Week13.rst file * :ghissue:`2871`: Corrected paths to static files * :ghissue:`2873`: Motion estimate * :ghissue:`2863`: Shilpi's Week12 Blog. * :ghissue:`2856`: Adding Week 11 Blog * :ghissue:`2849`: Shilpi's 10th Blog * :ghissue:`2847`: Pushing Week 8 + 9 blog * :ghissue:`2836`: Shilpi's Week 5 blog * :ghissue:`2864`: Change internal space/origin when using sft.to_x() with an empty sft. * :ghissue:`2806`: BF - initial backward orientation of local tracking * :ghissue:`2862`: Vara's Week 10 & Week 11 blog * :ghissue:`2843`: Pushing 7th_blog * :ghissue:`2841`: Vara's Week 6 & Week 7 blog * :ghissue:`2835`: Vara's week 5 blog * :ghissue:`2829`: Pushing 3rd blog * :ghissue:`2828`: Vara's week 3 blog * :ghissue:`2860`: Updates HCP fetcher dataset_description to be compatible with current BIDS * :ghissue:`2831`: Vara's week 4 blog * :ghissue:`1883`: Interesting dataset for linear, planar, spherical encoding * :ghissue:`2491`: ENH: Extend Horizon to visualize 2 volumes simultaneously * :ghissue:`2812`: Patch2self denoising followed by topup and eddy corrections worsens distortions in the orbitofrontal region * :ghissue:`2833`: Pushing 4thBlog * :ghissue:`2858`: Odffp * :ghissue:`2857`: Odffp * :ghissue:`2840`: Shilpi's Week6 Blog * :ghissue:`2838`: Reconstruction issues using MAP-MRI model (RTOP, RTAP, RTPP) * :ghissue:`2845`: MAP ODF issues * :ghissue:`2851`: How to use "synb0" in Dipy for preprocessing * :ghissue:`2839`: make order_from_ncoef return an int * :ghissue:`2844`: doc/tools/: fix trailing dot in version number. * :ghissue:`2827`: BundleWarp CLI Tutorial - Missing from Website * :ghissue:`2832`: BundleWarp: added tutorial and fixed a small bug * :ghissue:`1627`: WIP - NF - Tracking with Initial Directions and other tracking parameters * :ghissue:`2818`: Vara's week 0 blog * :ghissue:`2823`: submitting clearn PR for 2nd blog * :ghissue:`2822`: Pushing 2nd blog, * :ghissue:`2813`: First Blog * :ghissue:`2808`: [DOC] Fix cross referencing * :ghissue:`2798`: Move evac+ to new module `nn` * :ghissue:`2797`: remove Nibabel InTemporaryDirectory * :ghissue:`2706`: FYI: Deprecating nisext in nibabel * :ghissue:`2800`: Remove the Deprecating nisext * :ghissue:`2689`: Installing DIPY fails with current conda version * :ghissue:`2718`: StatefulTractogram * :ghissue:`2795`: bump dependencies minimal version * :ghissue:`2747`: Cannot set `dipy` as a dependency * :ghissue:`2791`: Update Patch2Self CLI * :ghissue:`2792`: Add `patch_radius` parameter to Patch2Self denoise workflow * :ghissue:`2771`: BUG: Missing Python 3.11 macOS wheels * :ghissue:`2761`: [UPCOMING] Release 1.7.0 - workshop release dipy-1.11.0/doc/release_notes/release1.9.rst000066400000000000000000000225401476546756600205730ustar00rootroot00000000000000.. 
_release1.9: ==================================== Release notes for DIPY version 1.9 ==================================== GitHub stats for 2023/12/14 - 2024/03/07 (tag: 1.8.0) These lists are automatically generated, and may be incomplete or contain duplicates. The following 17 authors contributed 175 commits. * Ariel Rokem * Atharva Shah * Ebrahim Ebrahim * Eleftherios Garyfallidis * Gabriel Girard * John Shen * Jon Haitz Legarreta Gorroño * Jong Sung Park * Maharshi Gor * Matthew Feickert * Philippe Karan * Praitayini Kanakaraj * Sam Coveney * Sandro * Serge Koudoro * dependabot[bot] * Étienne Mollier We closed a total of 139 issues, 60 pull requests and 81 regular issues; this is the full list (generated with the script :file:`tools/github_stats.py`): Pull Requests (60): * :ghpull:`3095`: [UPCOMING] Release preparation for 1.9.0 * :ghpull:`3086`: [RF] Fix spherical harmonic terminology swap * :ghpull:`3105`: [doc] improve some tutorials rendering * :ghpull:`3109`: [BF] convert_tractogram fix * :ghpull:`3108`: enabled trx support with correct header * :ghpull:`3107`: enabled trx support for viz * :ghpull:`3033`: [RF] fix dki mask for nlls * :ghpull:`3104`: Dkimaskfix * :ghpull:`3106`: volume slices visibility fixed * :ghpull:`3102`: Bugfix for peaks slices and synchronization. * :ghpull:`3078`: return S0 from dki fit * :ghpull:`3101`: [BF] Uniformize cython version * :ghpull:`3097`: Feature/surface * :ghpull:`3048`: [TEST] Adds support of cython for pytest * :ghpull:`3053`: [NF] Add workflow to convert tensors in different formats * :ghpull:`3073`: [NF] Add DSI workflow * :ghpull:`3099`: [DOC] fix some typo [ci skip] * :ghpull:`3098`: Removing tensorflow addon from DL models * :ghpull:`2973`: Tab names for slice tabs as file names. * :ghpull:`3081`: NF: Adding N4 bias correction deep learning model * :ghpull:`3092`: Feature: volume synchronizing * :ghpull:`3059`: Generalize special casing while loading bvecs, to include the case of transposed 2,3 vectors * :ghpull:`3090`: RF - changed memory view to double* trilinear_interpolation_4d * :ghpull:`3080`: Adding SH basis legacy option support to peaks_from_model * :ghpull:`3087`: backward compatibility fixed * :ghpull:`3088`: [TEST] Pin pytest * :ghpull:`3084`: fixed 4d slice issue * :ghpull:`3083`: Np.unique check removed. * :ghpull:`3082`: Add Fedora installation instructions [ci skip] * :ghpull:`3076`: [CI] Update scientific-python/upload-nightly-action to 0.5.0 * :ghpull:`3070`: [DOC] Fix installation link in README [ci skip] * :ghpull:`3069`: [DOC] Fix DTI Tutorial [ci skip] * :ghpull:`3063`: [RF] remove cpdef in PmfGen * :ghpull:`3054`: [DOC] Fix some links [ci skip] * :ghpull:`3060`: Bump codecov/codecov-action from 3 to 4 * :ghpull:`3061`: [OPT] Enable openmp for macOS wheel and CI's * :ghpull:`3049`: [MTN] code cleaning: remove some dependencies version checking * :ghpull:`3050`: [RF] Move ``dipy.boots.resampling`` to ``dipy.stats.resampling`` * :ghpull:`3051`: [RF] Remove dipy.io.bvectxt module * :ghpull:`3052`: Bump scientific-python/upload-nightly-action from 3eb3a42b50671237cace9be2d18a3e4b3845d3c4 to 66bc1b6beedff9619cdff8f3361a06802c8f5874 * :ghpull:`3045`: [DOC] fix `multi_shell_fiber_response` docstring array dims [ci skip] * :ghpull:`3041`: [NF] Add convert tractograms flow * :ghpull:`3040`: [BW] Remove some python2 reference * :ghpull:`3039`: [TEST] Add setup_module and teardown_module * :ghpull:`3038`: [NF] Update `dipy_info`: allow tractogram files format * :ghpull:`3043`: d/d/t/test_data.py: endian independent dtype. 
* :ghpull:`3042`: pyproject.toml: no cython at run time. * :ghpull:`3027`: [NF] Add Concatenate tracks workflows * :ghpull:`3008`: NF: add SH basis conversion between dipy and mrtrix3 * :ghpull:`3025`: [TEST] Manage http errors for stateful tractograms * :ghpull:`3031`: Bugfix: Horizon image's dtype validation * :ghpull:`3021`: [MTN] Remove 3.8 Ci's * :ghpull:`3026`: [RF] Fix cython 3 warnings * :ghpull:`3022`: [DOC] Fix logo size and link [ci skip] * :ghpull:`3013`: Added Fibonacci spiral and test for it * :ghpull:`3019`: DOC: Fix link to toolchain roadmap page in `README` * :ghpull:`3012`: DOC: Document observance for Scientific Python min supported versions * :ghpull:`3018`: Bump actions/download-artifact from 3 to 4 * :ghpull:`3017`: Bump actions/upload-artifact from 3 to 4 * :ghpull:`3014`: Update release1.8.rst Issues (81): * :ghissue:`2970`: spherical harmonic degree/order terminology swapped * :ghissue:`3105`: [doc] improve some tutorials rendering * :ghissue:`3109`: [BF] convert_tractogram fix * :ghissue:`3108`: enabled trx support with correct header * :ghissue:`3107`: enabled trx support for viz * :ghissue:`2994`: DKI masking * :ghissue:`3033`: [RF] fix dki mask for nlls * :ghissue:`3104`: Dkimaskfix * :ghissue:`3106`: volume slices visibility fixed * :ghissue:`3102`: Bugfix for peaks slices and synchronization. * :ghissue:`2281`: Black Output for pam5 file with dipy_horizon * :ghissue:`3078`: return S0 from dki fit * :ghissue:`3101`: [BF] Uniformize cython version * :ghissue:`3097`: Feature/surface * :ghissue:`2719`: pytest and cdef functions * :ghissue:`3048`: [TEST] Adds support of cython for pytest * :ghissue:`3053`: [NF] Add workflow to convert tensors in different formats * :ghissue:`3073`: [NF] Add DSI workflow * :ghissue:`3099`: [DOC] fix some typo [ci skip] * :ghissue:`3098`: Removing tensorflow addon from DL models * :ghissue:`2973`: Tab names for slice tabs as file names. * :ghissue:`3081`: NF: Adding N4 bias correction deep learning model * :ghissue:`3092`: Feature: volume synchronizing * :ghissue:`3093`: Can I use a different sort of dataset and also what if I don't have a bval , I use . mat images format PATCH2Self * :ghissue:`3059`: Generalize special casing while loading bvecs, to include the case of transposed 2,3 vectors * :ghissue:`3090`: RF - changed memory view to double* trilinear_interpolation_4d * :ghissue:`3080`: Adding SH basis legacy option support to peaks_from_model * :ghissue:`3085`: Viz Tests failing * :ghissue:`3087`: backward compatibility fixed * :ghissue:`3088`: [TEST] Pin pytest * :ghissue:`3074`: Horizon for large datasets - concerns regarding np.unique * :ghissue:`3075`: Horizon - 4D data support - slicing on the 4-th dim * :ghissue:`3084`: fixed 4d slice issue * :ghissue:`3083`: Np.unique check removed. 
* :ghissue:`3082`: Add Fedora installation instructions [ci skip] * :ghissue:`3065`: Add s390x test workflow * :ghissue:`3076`: [CI] Update scientific-python/upload-nightly-action to 0.5.0 * :ghissue:`3070`: [DOC] Fix installation link in README [ci skip] * :ghissue:`3069`: [DOC] Fix DTI Tutorial [ci skip] * :ghissue:`3066`: Dipy website incorrect image * :ghissue:`3063`: [RF] remove cpdef in PmfGen * :ghissue:`3054`: [DOC] Fix some links [ci skip] * :ghissue:`3060`: Bump codecov/codecov-action from 3 to 4 * :ghissue:`3061`: [OPT] Enable openmp for macOS wheel and CI's * :ghissue:`3057`: Using the atlas HCP1065 in DIPY * :ghissue:`3055`: [RF] replace Bunch by Enum * :ghissue:`3049`: [MTN] code cleaning: remove some dependencies version checking * :ghissue:`3050`: [RF] Move ``dipy.boots.resampling`` to ``dipy.stats.resampling`` * :ghissue:`3051`: [RF] Remove dipy.io.bvectxt module * :ghissue:`3052`: Bump scientific-python/upload-nightly-action from 3eb3a42b50671237cace9be2d18a3e4b3845d3c4 to 66bc1b6beedff9619cdff8f3361a06802c8f5874 * :ghissue:`2789`: Horizon image's dtype validation * :ghissue:`3047`: "Editable" installation broken * :ghissue:`3045`: [DOC] fix `multi_shell_fiber_response` docstring array dims [ci skip] * :ghissue:`3041`: [NF] Add convert tractograms flow * :ghissue:`3040`: [BW] Remove some python2 reference * :ghissue:`3039`: [TEST] Add setup_module and teardown_module * :ghissue:`3038`: [NF] Update `dipy_info`: allow tractogram files format * :ghissue:`3043`: d/d/t/test_data.py: endian independent dtype. * :ghissue:`3042`: pyproject.toml: no cython at run time. * :ghissue:`3027`: [NF] Add Concatenate tracks workflows * :ghissue:`3035`: If I want to use 6D array in "actor.odf_slicer", how can i do? * :ghissue:`2993`: Add conversion utility between DIPY and MRtrix3 spherical harmonic basis * :ghissue:`3008`: NF: add SH basis conversion between dipy and mrtrix3 * :ghissue:`3025`: [TEST] Manage http errors for stateful tractograms * :ghissue:`3031`: Bugfix: Horizon image's dtype validation * :ghissue:`3032`: Consider moving your nightly wheel away from the scipy-wheel-nightly (old location) to scientific-python-nightly-wheels * :ghissue:`3021`: [MTN] Remove 3.8 Ci's * :ghissue:`3003`: DIPY installation raises Cython warnings * :ghissue:`3026`: [RF] Fix cython 3 warnings * :ghissue:`2852`: Different behavior regarding color channels of horizon * :ghissue:`2378`: A novice's request for advice on loading very large tractograms (.tck) * :ghissue:`2064`: How to register MR to CT of the same person? 
* :ghissue:`2601`: read_bvals_bvecs can't read double volume dwi * :ghissue:`3022`: [DOC] Fix logo size and link [ci skip] * :ghissue:`3013`: Added Fibonacci spiral and test for it * :ghissue:`3020`: load_nifti import doesn't work if using submodule directly * :ghissue:`3019`: DOC: Fix link to toolchain roadmap page in `README` * :ghissue:`3012`: DOC: Document observance for Scientific Python min supported versions * :ghissue:`3018`: Bump actions/download-artifact from 3 to 4 * :ghissue:`3017`: Bump actions/upload-artifact from 3 to 4 * :ghissue:`3014`: Update release1.8.rst * :ghissue:`1525`: Clang-omp moved to boneyard on brew dipy-1.11.0/doc/sphinxext/000077500000000000000000000000001476546756600153705ustar00rootroot00000000000000dipy-1.11.0/doc/sphinxext/docimage_scrap.py000066400000000000000000000036221476546756600207050ustar00rootroot00000000000000import glob
import os
import shutil
import time

from sphinx_gallery.scrapers import figure_rst


class ImageFileScraper:
    def __init__(self):
        """Scrape image files that are already present in the current folder."""
        self.embedded_images = {}
        self.start_time = time.time()

    def __call__(self, block, block_vars, gallery_conf):
        # Find all image files in the current directory.
        path_example = os.path.dirname(block_vars['src_file'])
        image_files = _find_images(path_example)

        # Iterate through files, copy them to the SG output directory
        image_names = []
        image_path_iterator = block_vars['image_path_iterator']
        for path_orig in image_files:
            # If we already know about this image and it hasn't been modified
            # since starting, then skip it
            mod_time = os.stat(path_orig).st_mtime
            already_embedded = (path_orig in self.embedded_images and
                                mod_time <= self.embedded_images[path_orig])
            existed_before_build = mod_time <= self.start_time
            if already_embedded or existed_before_build:
                continue

            # Else, we assume the image has just been modified and is displayed
            path_new = next(image_path_iterator)
            self.embedded_images[path_orig] = mod_time
            image_names.append(path_new)
            shutil.copyfile(path_orig, path_new)

        if len(image_names) == 0:
            return ''
        else:
            return figure_rst(image_names, gallery_conf['src_dir'])


def _find_images(path, image_extensions=['jpg', 'jpeg', 'png', 'gif']):
    """Find all unique image paths for a set of extensions."""
    image_files = set()
    for ext in image_extensions:
        this_ext_files = set(glob.glob(os.path.join(path, '*.'+ext)))
        image_files = image_files.union(this_ext_files)
    return image_files
dipy-1.11.0/doc/sphinxext/github.py000066400000000000000000000127301476546756600172270ustar00rootroot00000000000000"""Define text roles for GitHub

* ghissue - Issue
* ghpull - Pull Request
* ghuser - User

Adapted from the bitbucket example here:
https://bitbucket.org/birkenfeld/sphinx-contrib/src/tip/bitbucket/sphinxcontrib/bitbucket.py

Authors
-------
* Doug Hellmann
* Min RK
"""
#
# Original Copyright (c) 2010 Doug Hellmann. All rights reserved.
#
from docutils import nodes, utils
from docutils.parsers.rst.roles import set_classes


def make_link_node(rawtext, app, type, slug, options):
    """Create a link to a github resource.

    :param rawtext: Text being replaced with link node.
    :param app: Sphinx application context.
    :param type: Link type (issues, changeset, etc.).
    :param slug: ID of the thing to link to.
    :param options: Options dictionary passed to role func.
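    A minimal call sketch, assuming ``app.config.github_project_url`` has
    been set in ``conf.py``; the link type and slug values here are
    illustrative only::

        # e.g. from inside a role function such as ghissue_role below
        node = make_link_node(rawtext, app, 'issues', '42', {})
        # -> a reference node targeting <github_project_url>issues/42/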
""" try: base = app.config.github_project_url if not base: raise AttributeError if not base.endswith('/'): base += '/' except AttributeError as err: raise ValueError(f'github_project_url configuration value is not set ({str(err)})') ref = base + type + '/' + slug + '/' set_classes(options) prefix = "#" if type == 'pull': prefix = "PR " + prefix node = nodes.reference(rawtext, prefix + utils.unescape(slug), refuri=ref, **options) return node def ghissue_role(name, rawtext, text, lineno, inliner, options={}, content=[]): """Link to a GitHub issue. Returns 2 part tuple containing list of nodes to insert into the document and a list of system messages. Both are allowed to be empty. :param name: The role name used in the document. :param rawtext: The entire markup snippet, with role. :param text: The text marked with the role. :param lineno: The line number where rawtext appears in the input. :param inliner: The inliner instance that called us. :param options: Directive options for customization. :param content: The directive content for customization. """ try: issue_num = int(text) if issue_num <= 0: raise ValueError except ValueError: msg = inliner.reporter.error( 'GitHub issue number must be a number greater than or equal to 1; ' '"%s" is invalid.' % text, line=lineno) prb = inliner.problematic(rawtext, rawtext, msg) return [prb], [msg] app = inliner.document.settings.env.app # app.info('issue %r' % text) if 'pull' in name.lower(): category = 'pull' elif 'issue' in name.lower(): category = 'issues' else: msg = inliner.reporter.error( 'GitHub roles include "ghpull" and "ghissue", ' '"%s" is invalid.' % name, line=lineno) prb = inliner.problematic(rawtext, rawtext, msg) return [prb], [msg] node = make_link_node(rawtext, app, category, str(issue_num), options) return [node], [] def ghuser_role(name, rawtext, text, lineno, inliner, options={}, content=[]): """Link to a GitHub user. Returns 2 part tuple containing list of nodes to insert into the document and a list of system messages. Both are allowed to be empty. :param name: The role name used in the document. :param rawtext: The entire markup snippet, with role. :param text: The text marked with the role. :param lineno: The line number where rawtext appears in the input. :param inliner: The inliner instance that called us. :param options: Directive options for customization. :param content: The directive content for customization. """ app = inliner.document.settings.env.app # app.info('user link %r' % text) ref = 'https://www.github.com/' + text node = nodes.reference(rawtext, text, refuri=ref, **options) return [node], [] def ghcommit_role(name, rawtext, text, lineno, inliner, options={}, content=[]): """Link to a GitHub commit. Returns 2 part tuple containing list of nodes to insert into the document and a list of system messages. Both are allowed to be empty. :param name: The role name used in the document. :param rawtext: The entire markup snippet, with role. :param text: The text marked with the role. :param lineno: The line number where rawtext appears in the input. :param inliner: The inliner instance that called us. :param options: Directive options for customization. :param content: The directive content for customization. 
""" app = inliner.document.settings.env.app # app.info('user link %r' % text) try: base = app.config.github_project_url if not base: raise AttributeError if not base.endswith('/'): base += '/' except AttributeError as err: raise ValueError(f'github_project_url configuration value is not set ({str(err)})') ref = base + text node = nodes.reference(rawtext, text[:6], refuri=ref, **options) return [node], [] def setup(app): """Install the plugin. :param app: Sphinx application context. """ from sphinx.util import logging logger = logging.getLogger(__name__) logger.info('Initializing GitHub plugin') app.add_role('ghissue', ghissue_role) app.add_role('ghpull', ghissue_role) app.add_role('ghuser', ghuser_role) app.add_role('ghcommit', ghcommit_role) app.add_config_value('github_project_url', None, 'env') metadata = {"parallel_read_safe": True, "parallel_write_safe": True, } return metadata dipy-1.11.0/doc/sphinxext/math_dollar.py000066400000000000000000000041441476546756600202330ustar00rootroot00000000000000import re def dollars_to_math(source): r""" Replace dollar signs with backticks. More precisely, do a regular expression search. Replace a plain dollar sign ($) by a backtick (`). Replace an escaped dollar sign (\$) by a dollar sign ($). Don't change a dollar sign preceded or followed by a backtick (`$ or $`), because of strings like "``$HOME``". Don't make any changes on lines starting with spaces, because those are indented and hence part of a block of code or examples. This also doesn't replaces dollar signs enclosed in curly braces, to avoid nested math environments, such as :: $f(n) = 0 \text{ if $n$ is prime}$ Thus the above line would get changed to `f(n) = 0 \text{ if $n$ is prime}` """ s = "\n".join(source) if s.find("$") == -1: return # This searches for "$blah$" inside a pair of curly braces -- # don't change these, since they're probably coming from a nested # math environment. So for each match, we replace it with a temporary # string, and later on we substitute the original back. _data = {} def repl(matchobj): nonlocal _data s = matchobj.group(0) t = f"___XXX_REPL_{len(_data)}___" _data[t] = s return t s = re.sub(r"({[^{}$]*\$[^{}$]*\$[^{}]*})", repl, s) # matches $...$ dollars = re.compile(r"(?= 2 or \ content.count(sphx_glr_sep_2) >= 2 def convert_to_sphinx_gallery_format(content): """Convert the example file to sphinx-gallery format. 
Parameters ---------- content : list of str example file content Returns ------- list of str example file content converted to sphinx-gallery format """ inheader = True indocs = False new_content = '' for line in content: if inheader: if not indocs and (line.startswith('"""') or line.startswith("'''") or line.startswith('r"""') or line.startswith("r'''")): new_content += line if len(line.rstrip()) < 5: indocs = True else: # single line doc inheader = False continue if line.rstrip().endswith('"""') or line.rstrip().endswith("'''"): inheader = False indocs = False new_content += line continue if indocs \ or (line.startswith('"""') or line.startswith("'''") or line.startswith('r"""') or line.startswith("r'''")): if not indocs: # guaranteed to start with """ if len(line.rstrip()) > 4 \ and (line.rstrip().endswith('"""') or line.rstrip().endswith("'''")): # single line doc tmp_line = line.replace('"""', '').replace("'''", '') new_content += f'{sphx_glr_sep}\n' new_content += f'# {tmp_line}' else: # must be start of multiline block indocs = True new_content += f'{sphx_glr_sep}\n' else: # we are already in the docs # handle doc end if line.rstrip().endswith('"""') or line.rstrip().endswith("'''"): # remove quotes # import ipdb; ipdb.set_trace() indocs = False new_content += '\n' else: # import ipdb; ipdb.set_trace() new_content += f'# {line}' # has to be documentation continue new_content += line return new_content def folder_explicit_order(): srcdir = os.path.abspath(os.path.dirname(__file__)) examples_dir = os.path.join(srcdir, '..', 'examples') f_example_desc = Path(examples_dir, '_valid_examples.toml') if not f_example_desc.exists(): msg = f'No valid examples description file found in {examples_dir}' msg += "(e.g '_valid_examples.toml')" abort(msg) with open(f_example_desc, 'rb') as fobj: try: desc_examples = tomllib.load(fobj) except Exception as e: msg = f'Error Loading examples description file: {e}.\n\n' msg += 'Please check the file format.' abort(msg) if 'main' not in desc_examples.keys(): msg = 'No main section found in examples description file' abort(msg) try: folder_list = sorted( [examplesConfig(folder_name=k.lower(), **v) for k, v in desc_examples.items()] ) except Exception as e: msg = f'Error parsing examples description file: {e}.\n\n' msg += 'Please check the file format.' abort(msg) return [f.folder_name for f in folder_list if f.enable] def preprocess_include_directive(input_rst, output_rst): """ Process include directives from input RST, and write the output to a new RST. Parameters ---------- input_rst : str Path to the input RST file containing the include directive. output_rst : str Path to the output RST file with the include content expanded. """ with open(input_rst, "r") as infile, open(output_rst, "w") as outfile: for line in infile: if line.strip().startswith(".. 
include::"): # Extract the included file's path included_file_path = line.strip().split(" ")[-1] included_file_path = os.path.normpath( os.path.join(os.path.dirname(input_rst), included_file_path) ) # Check if the included file exists and read its content if os.path.isfile(included_file_path): with open(included_file_path, "r") as included_file: included_content = included_file.read() # Write the included file's content to the output file outfile.write(included_content) else: print(f"Warning: Included file '{included_file_path}' not found.") else: # Write the line as-is if it's not an include directive outfile.write(line) def prepare_gallery(app=None): srcdir = app.srcdir if app else os.path.abspath(pjoin( os.path.dirname(__file__), '..')) examples_dir = os.path.join(srcdir, 'examples') examples_revamp_dir = os.path.join(srcdir, 'examples_revamped') os.makedirs(examples_revamp_dir, exist_ok=True) f_example_desc = Path(examples_dir, '_valid_examples.toml') if not f_example_desc.exists(): msg = f'No valid examples description file found in {examples_dir}' msg += "(e.g '_valid_examples.toml')" abort(msg) with open(f_example_desc, 'rb') as fobj: try: desc_examples = tomllib.load(fobj) except Exception as e: msg = f'Error Loading examples description file: {e}.\n\n' msg += 'Please check the file format.' abort(msg) if 'main' not in desc_examples.keys(): msg = 'No main section found in examples description file' abort(msg) try: examples_config = sorted( [examplesConfig(folder_name=k.lower(), **v) for k, v in desc_examples.items()] ) except Exception as e: msg = f'Error parsing examples description file: {e}.\n\n' msg += 'Please check the file format.' abort(msg) if examples_config[0].position != 0: msg = 'Main section must be first in examples description file with' msg += 'position=0' abort(msg) elif examples_config[0].folder_name != 'main': msg = "Main section must be named 'main' in examples description file" abort(msg) elif examples_config[0].enable is False: msg = 'Main section must be enabled in examples description file' abort(msg) disable_examples_section = [] included_examples = [] for example in examples_config: if not example.enable: disable_examples_section.append(example.folder_name) continue # Create folder for each example if example.position != 0: folder = Path( examples_revamp_dir, f'{example.folder_name}') else: folder = Path(examples_revamp_dir) if not folder.exists(): os.makedirs(folder) # Create readme file if example.readme.startswith('file:'): filename = example.readme.split('file:')[1].strip() preprocess_include_directive(Path(examples_dir, filename), Path(folder, "README.rst")) else: with open(Path(folder, "tmp_readme.rst"), "w", encoding="utf8") as fi: fi.write(example.readme) preprocess_include_directive(Path(folder, "tmp_readme.rst"), Path(folder, "README.rst")) os.remove(Path(folder, "tmp_readme.rst")) # Copy files to folder if not example.files: continue for filename in example.files: if not Path(examples_dir, filename).exists(): msg = f'\tFile {filename} not found in examples folder: ' msg += f'{examples_dir}.Please, Add the file or remove it ' msg += 'from the description file.' 
                logger.warning(msg)
                continue

            with open(Path(examples_dir, filename), encoding="utf8") as f:
                xfile = f.readlines()

            new_name = None
            if filename in included_examples:
                # file needs to be renamed to make it unique for sphinx-gallery
                occurrences = included_examples.count(filename)
                new_name = f'{filename[:-3]}_{occurrences+1}.py'

            if already_converted(xfile):
                shutil.copy(Path(examples_dir, filename),
                            Path(folder, new_name or filename))
            else:
                with open(Path(folder, new_name or filename), 'w',
                          encoding="utf8") as fi:
                    fi.write(convert_to_sphinx_gallery_format(xfile))

            # Add additional link_names
            with open(Path(folder, new_name or filename), 'r+',
                      encoding="utf8") as fi:
                content = fi.read()
                fi.seek(0, 0)
                link_name = f'\n{sphx_glr_sep}\n'
                link_name += '# .. include:: ../../links_names.inc\n#\n'
                fi.write(content + link_name)

            included_examples.append(filename)

    # Check if all python examples are in the description file
    files_in_config = [fi for ex in examples_config for fi in ex.files]
    all_examples = fnmatch.filter(os.listdir(examples_dir), '*.py')
    for all_ex in all_examples:
        if all_ex in files_in_config:
            continue
        msg = f'File {all_ex} not found in examples '
        msg += f"description file: {f_example_desc}"
        logger.warning(msg)


def setup(app):
    """Install the plugin.

    Parameters
    ----------
    app: Sphinx application context.
    """
    logger.info('Initializing Examples folder revamp plugin...')
    app.connect('builder-inited', prepare_gallery)
    # app.connect('build-finished', summarize_failing_examples)
    metadata = {'parallel_read_safe': True,
                'version': app.config.version}
    return metadata


if __name__ == '__main__':
    gallery_name = sys.argv[1]
    outdir = sys.argv[2]
    print(folder_explicit_order())
    prepare_gallery(app=None)
dipy-1.11.0/doc/stateoftheart.rst000066400000000000000000000044551476546756600167550ustar00rootroot00000000000000.. _stateoftheart:

============================
A quick overview of features
============================

Here are just a few of the state-of-the-art :ref:`technologies ` and algorithms which are provided in DIPY_:

- Reconstruction algorithms: CSD, DSI, GQI, DTI, DKI, QBI, SHORE and MAPMRI.
- Fiber tracking algorithms: deterministic and probabilistic.
- Simple interactive visualization of ODFs and streamlines.
- Apply different operations on streamlines (selection, resampling, registration).
- Simplify large datasets of streamlines using QuickBundles clustering.
- Reslice datasets with anisotropic voxels to isotropic.
- Calculate distances/correspondences between streamlines.
- Deal with huge streamline datasets without memory restrictions (using the .dpy file format).
- Visualize streamlines in the same space as anatomical images.

With the help of some external tools you can also:

- Read many different file formats e.g. Trackvis or Nifti (with nibabel).
- Examine your datasets interactively (with ipython).

For more information on specific algorithms we recommend starting by looking at DIPY's :ref:`gallery ` of examples. For a full list of the features implemented in the most recent release cycle, check out the release notes.
.. toctree::
   :maxdepth: 1

   release_notes/release1.11
   release_notes/release1.10
   release_notes/release1.9
   release_notes/release1.8
   release_notes/release1.7
   release_notes/release1.6
   release_notes/release1.5
   release_notes/release1.4.1
   release_notes/release1.4
   release_notes/release1.3
   release_notes/release1.2
   release_notes/release1.1
   release_notes/release1.0
   release_notes/release0.16
   release_notes/release0.15
   release_notes/release0.14
   release_notes/release0.13
   release_notes/release0.12
   release_notes/release0.11
   release_notes/release0.10
   release_notes/release0.9
   release_notes/release0.8
   release_notes/release0.7
   release_notes/release0.6

=================
Systems supported
=================

DIPY_ is multiplatform and will run under any standard operating system, such as *Windows*, *Linux* and *Mac OS X*. Every single new code addition is being tested on a number of different buildbots and can be monitored online `here `_.

.. include:: links_names.inc
dipy-1.11.0/doc/subscribe.rst000066400000000000000000001100014765467566001604 70ustar00rootroot00000000000000.. _subscribe:

=========
Subscribe
=========

DIPY_ is a part of the NiPy_ community and we are happy to share the same e-mail list. This makes sense as we can share ideas with a broader community and boost collaboration across neuroimagers. If you want to post to the list you will need first to subscribe to the `nipy mailing list`_. We suggest beginning the subject of your e-mail with ``[DIPY]``. This will help us respond faster to your questions.

Additional help can be found on the Neurostars_ website, or on the `dipy gitter`_ channel.

.. include:: links_names.inc
dipy-1.11.0/doc/theory/000077500000000000000000000000001476546756600146505ustar00rootroot00000000000000dipy-1.11.0/doc/theory/b_and_q.rst000066400000000000000000000123351476546756600167710ustar00rootroot00000000000000.. _b-and-q:

=========================
 DIY Stuff about b and q
=========================

This is a short note to explain the nature of the ``B_matrix`` found in the Siemens private (CSA) fields of the DICOM headers of a diffusion weighted acquisition. We are trying to explain the relationship between the ``B_matrix`` and the *b value* and the *gradient vector*.

The acquisition is made with a planned (requested) $b$-value - say $b_{req} = 1000$, and with a requested gradient direction $\mathbf{g}_{req} = [g_x, g_y, g_z]$ (supposedly a unit vector) and peak amplitude $G$. When the sequence runs the gradient is modulated by an amplitude envelope $\rho(t)$ with $\max |\rho(t)| = 1$ so that the time course of the gradient is $G\rho(t)\mathbf{g}.$ $G$ is measured in units of $T \mathrm{mm}^{-1}.$ This leads to an important temporal weighting parameter of the acquisition:

.. math::

   R = \int_0^T ( \int_0^t \rho ( \tau ) \, d{ \tau } )^2 \, d{t}.

(See Basser, Mattiello and LeBihan, 1994.) Another formulation involves the introduction of k-space. In standard in-plane MR image encoding

.. math::

   \mathbf{k} = \gamma \int \mathbf{g}(t)dt.

For the classical Stejskal and Tanner pulsed gradient spin echo (PGSE) paradigm, where two rectangular pulses of width $\delta$ seconds are spaced with their onsets $\Delta$ seconds apart, $R = \delta^2 (\Delta-\delta/3).$ The units of $R$ are $s^3$.

The $b$-matrix has entries

.. math::

   b_{ij} = \gamma^2 G^2 g_i g_j R,

where $\gamma$ is the gyromagnetic ratio (units $\mathrm{radians}.\mathrm{seconds}^{-1}.T^{-1}$) and $i$ and $j$ are axis directions from $x,y,z$. The units of the B-matrix are $\mathrm{radians}^2 . \mathrm{seconds} . \mathrm{mm}^{-2}.$

.. math::

   \mathbf{B} = \gamma^2 G^2 R \mathbf{g} \mathbf{g}^T.

The b-value for the acquisition is the trace of $\mathbf{B}$ and is given by

.. math::

   b = \gamma^2 G^2 R \|\mathbf{g}\|^2 = \gamma^2 G^2 R.

================================
 The B matrix and Siemens DICOM
================================

Though the Stejskal and Tanner formula is available for the classic PGSE sequence, a different sequence may be used (e.g. TRSE on Siemens Trio), and anyway the ramps up and down on the gradient field will not be rectangular. The Siemens scanner software calculates the actual values of the $b_{ij}$ by numerical integration of the formula above for $R$. These values are in the form of the 6 'B-matrix' values $[b_{xx}, b_{xy}, b_{xz}, b_{yy}, b_{yz}, b_{zz}]$.

In this form they are suitable for use in a least squares estimation of the diffusion tensor via the equations across the set of acquisitions:

.. math::

   \log(A(\mathbf{q})/A(0)) = -(b_{xx}D_{xx} + 2b_{xy}D_{xy} + 2b_{xz}D_{xz} + \
   b_{yy}D_{yy} + 2b_{yz}D_{yz} + b_{zz}D_{zz})

The gradient field typically stays in the one gradient direction; in this case the relationship between $b$, $\mathbf{g}$ and the $b_{ij}$ is as follows. If we fill out the symmetric B-matrix as:

.. math::

   \mathbf{B} = \begin{pmatrix}
                b_{xx} & b_{xy} & b_{xz}\\
                b_{xy} & b_{yy} & b_{yz}\\
                b_{xz} & b_{yz} & b_{zz}
                \end{pmatrix}

then $\mathbf{B}$ is equal to the rank 2 tensor $\gamma^2 G^2 R \mathbf{g} \mathbf{g}^T$. By performing an eigenvalue and eigenvector decomposition of $\mathbf{B}$ we obtain

.. math::

   \mathbf{B} = \lambda_1\mathbf{v}_1\mathbf{v}_1^T + \lambda_2\mathbf{v}_2\mathbf{v}_2^T + \lambda_3\mathbf{v}_3\mathbf{v}_3^T,

where only one of the $\lambda_i$, say $\lambda_1$, is (effectively) non-zero. (Because the gradient is always a multiple of a constant direction, $\mathbf{B}$ is effectively a rank 1 tensor.) Then $\mathbf{g} = \pm\mathbf{v}_1$, and $b = \gamma^2 G^2 R = \lambda_1$. The ``b-vector`` $\mathbf{b}$ is given by:

.. math::

   \mathbf{b}_{\mathrm{actual}} = \gamma^2 G^2 R \mathbf{g}_{\mathrm{actual}} = \lambda_1 \mathbf{v}_1.

Once we have $\mathbf{b}_{actual}$ we can calculate $b_{actual} = \|\mathbf{b}_{actual}\|$ and $\mathbf{g}_{actual} = \mathbf{b}_{actual} / b_{actual}$. Various software packages (e.g. FSL's FDT-DTIFIT) expect to get N x 3 and N x 1 arrays of $\mathbf{g}_{actual}$ (``bvecs``) and $b_{actual}$ values (``bvals``) as their inputs.

=======================
... and what about 'q'?
=======================

Callaghan, Eccles and Xia (1988) showed that the signal from the narrow pulse PGSE paradigm measured the Fourier transform of the diffusion displacement propagator. Propagation space is measured in displacement per unit time $(\mathrm{mm}.\mathrm{seconds}^{-1})$. They named the reciprocal space ``q-space``, with units of $\mathrm{seconds}.\mathrm{mm}^{-1}$.

.. math::
   :label: fourier

   q = \gamma \delta G /{2\pi}

.. math::

   b = 4 \pi^2 q^2 \Delta

Diffusion spectroscopy measures signal over a wide range of $b$-values (or $q$-values) and diffusion times ($\Delta$) and performs a $q$-space analysis (Fourier transform of the diffusion signal decay).

There remains a bit of mystery as to how $\mathbf{q}$ (as a vector in $q$-space) is specified for other paradigms. We think that (a) it only matters up to a scale factor, and (b) we can loosely identify $\mathbf{q}$ with $b\mathbf{g}$, where $\mathbf{g}$ is the unit vector in the gradient direction.
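The eigenvalue route just described is easy to check numerically. Below is a minimal ``numpy`` sketch of it; the six ``B_matrix`` entries are illustrative values, not readings from a real acquisition:

.. code-block:: python

   import numpy as np

   # Illustrative CSA 'B_matrix' entries [b_xx, b_xy, b_xz, b_yy, b_yz, b_zz]
   # for a nominal b=1000 acquisition with the gradient roughly along x.
   b_xx, b_xy, b_xz, b_yy, b_yz, b_zz = 994.0, 52.0, 28.0, 5.0, 1.0, 1.0

   B = np.array([[b_xx, b_xy, b_xz],
                 [b_xy, b_yy, b_yz],
                 [b_xz, b_yz, b_zz]])

   # B is symmetric, so eigh applies; eigenvalues come back in ascending
   # order and only the largest should be (effectively) non-zero.
   vals, vecs = np.linalg.eigh(B)
   b_actual = vals[-1]             # b = gamma^2 G^2 R = lambda_1
   g_actual = vecs[:, -1]          # g = +/- v_1 (the sign is arbitrary)
   b_vector = b_actual * g_actual  # the 'b-vector' lambda_1 v_1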
dipy-1.11.0/doc/theory/bmatrix.rst000066400000000000000000000061351476546756600170550ustar00rootroot00000000000000================================
 The B matrix and Siemens DICOM
================================

This is a short note to explain the nature of the ``B_matrix`` found in the Siemens private (CSA) fields of the DICOM headers of a diffusion-weighted acquisition. We try to explain the relationship between the ``B_matrix`` and the *b value* and the *gradient vector*.

The acquisition is made with a planned (requested) b value - say $b_{req} = 1000$, and with a requested gradient direction $\mathbf{g}_{req} = [g_x, g_y, g_z]$ (supposedly a unit vector).

Note that here we're using $\mathbf{q}$ in the sense of an approximation to a vector in $q$ space. Other people use $\mathbf{b}$ for the same concept, but we've chosen $\mathbf{q}$ to make the exposition clearer.

For some purposes, we want the q vector $\mathbf{q}_{actual}$ which is equal to $b_{actual} . \mathbf{g}_{actual}$. We need to be aware that $b_{actual}$ and $\mathbf{g}_{actual}$ may be different from the $b_{req}$ and $\mathbf{g}_{req}$!

Though the Stejskal and Tanner formula is available for the classic PGSE sequence, a different sequence may be used (e.g. TRSE on Siemens Trio), and anyway, the ramps up and down on the gradient field will not be rectangular. The Siemens scanner software calculates the effective directional diffusion weighting of the acquisition based on the temporal profile of the applied gradient vector field. These are in the form of the 6 ``B_matrix`` values $[b_{xx}, b_{xy}, b_{xz}, b_{yy}, b_{yz}, b_{zz}]$.

In this form they are suitable for use in a least squares estimation of the diffusion tensor via the equations across the set of acquisitions:

.. math::

   \log(A(\mathbf{q})/A(0)) = -(b_{xx}D_{xx} + 2b_{xy}D_{xy} + 2b_{xz}D_{xz} + \
   b_{yy}D_{yy} + 2b_{yz}D_{yz} + b_{zz}D_{zz})

The gradient field typically stays in the one gradient direction; in this case the relationship between $\mathbf{q}$ and the $b_{ij}$ is as follows. If we fill out the symmetric B-matrix as:

.. math::

   \mathbf{B} = \begin{pmatrix}
                b_{xx} & b_{xy} & b_{xz}\\
                b_{xy} & b_{yy} & b_{yz}\\
                b_{xz} & b_{yz} & b_{zz}
                \end{pmatrix}

then $\mathbf{B}$ is equal to the rank 1 tensor $b\mathbf{g}\mathbf{g}^T$. One of the ways to recover $b$ and $\mathbf{g}$, and hence $\mathbf{q}$, from $\mathbf{B}$ is to do a singular value decomposition of $\mathbf{B}: \mathbf{B} = \lambda_1\mathbf{v}_1\mathbf{v}_1^T + \lambda_2\mathbf{v}_2\mathbf{v}_2^T + \lambda_3\mathbf{v}_3\mathbf{v}_3^T$, where only one of the $\lambda_i$, say $\lambda_1$, is effectively non-zero. Then $b = \lambda_1$, $\mathbf{g} = \pm\mathbf{v}_1,$ and $\mathbf{q} = \pm\lambda_1\mathbf{v}_1.$ The choice of sign is arbitrary (essentially we have a choice between two possible square roots of the rank 1 tensor $\mathbf{B}$).

Once we have $\mathbf{q}_{actual}$ we can calculate $b_{actual} = |\mathbf{q}_{actual}|$ and $\mathbf{g}_{actual} = \mathbf{q}_{actual} / b_{actual}$. Various software packages (e.g. FSL's FDT-DTIFIT) expect to get 3 × N and 1 × N arrays of $\mathbf{g}_{actual}$ and $b_{actual}$ values as their inputs.
dipy-1.11.0/doc/theory/gqi.rst000066400000000000000000000020721476546756600161630ustar00rootroot00000000000000.. _gqi:

==============================
Generalised Q-Sampling Imaging
==============================

These notes are to help the user of the DIPY module understand Frank Yeh's Generalised Q-Sampling Imaging (GQI) [reference?].

The starting point is the classical formulation of joint k-space and q-space imaging (Callaghan 8.3.1, p. 438) using the narrow pulse gradient spin echo (PGSE) sequence of Tanner and Stejskal:

.. math::

   S(\mathbf{k},\mathbf{q}) = \int \rho(\mathbf{r}) \exp [j 2 \pi \mathbf{k} \cdot \mathbf{r}] \int P_{\Delta} (\mathbf{r}|\mathbf{r}',\Delta) \exp [j 2 \pi \mathbf{q} \cdot (\mathbf{r}-\mathbf{r'})] \operatorname{d}\mathbf{r}' \operatorname{d}\mathbf{r}.

Here $S$ is the (complex) RF signal measured at spatial wave number $\mathbf{k}$ and magnetic gradient wave number $\mathbf{q}$. $\rho$ is the local spin density (number of protons per unit volume contributing to the RF signal). $\Delta$ is the diffusion time scale of the sequence. $P_{\Delta}$ is the average diffusion propagator (transition probability distribution).
dipy-1.11.0/doc/theory/index.rst000066400000000000000000000002261476546756600165110ustar00rootroot00000000000000=====================
 Theory and concepts
=====================

.. toctree::
   :maxdepth: 2

   bmatrix
   b_and_q
   spherical
   sh_basis
   gqi
dipy-1.11.0/doc/theory/sh_basis.rst000066400000000000000000000146011476546756600171770ustar00rootroot00000000000000.. _sh-basis:

========================
Spherical Harmonic bases
========================

Spherical Harmonics (SH) are functions defined on the sphere. A collection of SH can be used as a basis to represent and reconstruct any function on the surface of a unit sphere. Spherical harmonics are orthonormal functions defined by:

.. math::

   Y_l^m(\theta, \phi) = \sqrt{\frac{2l + 1}{4 \pi} \frac{(l - m)!}{(l + m)!}} P_l^m(\cos \theta) e^{i m \phi}

where $l$ is the order, $m$ is the phase factor, $P_l^m$ is an associated Legendre polynomial of order $l$ and phase factor $m$, and $(\theta, \phi)$ is the representation of the direction vector in spherical coordinates. The relation between $Y_l^m$ and $Y_l^{-m}$ is given by:

.. math::

   Y_l^{-m}(\theta, \phi) = (-1)^m \overline{Y_l^m}

where $\overline{Y_l^m}$ is the complex conjugate of $Y_l^m$ defined as $\overline{Y_l^m} = \Re(Y_l^m) - i \Im(Y_l^m)$.

A function $f(\theta, \phi)$ can be represented using a spherical harmonics basis using the spherical harmonics coefficients $a_l^m$, which can be computed using the expression:

.. math::

   a_l^m = \int_S f(\theta, \phi) Y_l^m(\theta, \phi) ds

Once the coefficients are computed, the function $f(\theta, \phi)$ can be computed as:

.. math::

   f(\theta, \phi) = \sum_{l = 0}^{\infty} \sum_{m = -l}^{l} a^m_l Y_l^m(\theta, \phi)

In HARDI, the Orientation Distribution Function (ODF) is a function on the sphere. Therefore, SH functions offer the ideal framework for reconstructing the ODF. :footcite:t:`Descoteaux2007` use the Q-Ball Imaging (QBI) formalization to recover the ODF, while :footcite:t:`Tournier2007` use the Spherical Deconvolution (SD) framework.

Several modified SH bases have been proposed in the diffusion imaging literature for the computation of the ODF. DIPY implements two of these in the :mod:`~dipy.reconst.shm` module. Below are the formal definitions taken directly from the literature.

- The basis proposed by :footcite:t:`Descoteaux2007`:

..
math:: Y_i(\theta, \phi) = \begin{cases} \Im(Y_l^m(\theta, \phi)) & -l \leq m < 0, \\ Y_l^0(\theta, \phi) & m = 0, \\ \Re(Y_l^m(\theta, \phi)) & 0 < m \leq l \end{cases} In both cases, $\Re$ denotes the real part of the spherical harmonic basis, and $\Im$ denotes the imaginary part. The SH bases are both orthogonal and real. Moreover, the `descoteaux07` basis is orthonormal. By alternately selecting the real or imaginary part of the original SH basis, the modified SH bases have the properties of being both orthogonal and real. Moreover, due to the presence of the $\sqrt{2}$ factor, the basis proposed by Descoteaux *et al.* is orthonormal. The SH bases implemented in DIPY for versions 1.2 and below differ slightly from the literature. Their implementation is given below. - The ``descoteaux07`` basis is based on the one proposed by :footcite:t:`Descoteaux2007` and is given by: .. math:: Y_i(\theta, \phi) = \begin{cases} \sqrt{2} * \Re(Y_l^{|m|}(\theta, \phi)) & -l \leq m < 0, \\ Y_l^0(\theta, \phi) & m = 0, \\ \sqrt{2} * \Im(Y_l^m(\theta, \phi)) & 0 < m \leq l \end{cases} - The ``tournier07`` basis is based on the one proposed by :footcite:t:`Tournier2007` and is given by: .. math:: Y_i(\theta, \phi) = \begin{cases} \Im(Y_l^{|m|}(\theta, \phi)) & -l \leq m < 0, \\ Y_l^0(\theta, \phi) & m = 0, \\ \Re(Y_l^m(\theta, \phi)) & 0 < m \leq l \end{cases} These bases differ from the literature by the presence of an absolute value around $m$ when $m < 0$. Due to relations $-p = |p| ; \forall p < 0$ and $Y_l^{-m}(\theta, \phi) = (-1)^m \overline{Y_l^m}$, the effect of this change is a sign flip for the SH functions of even degree $m < 0$. This has no effect on the mathematical properties of each basis. The ``tournier07`` SH basis defined above is the basis used in MRtrix 0.2 :footcite:t:`Tournier2012`. However, the omission of the $\sqrt{2}$ factor seen in the basis from :footcite:t:`Descoteaux2007` makes it non-orthonormal. For this reason, the MRtrix3 :footcite:t:`Tournier2019` SH basis uses a new basis including the normalization factor. Since DIPY 1.3, the ``descoteaux07`` and ``tournier07`` SH bases have been updated in order to agree with the literature and the latest MRtrix3 implementation. While previous bases are still available as *legacy* bases, the ``descoteaux07`` and ``tournier07`` bases now default to: .. math:: Y_i(\theta, \phi) = \begin{cases} \sqrt{2} * \Re(Y_l^m(\theta, \phi)) & -l \leq m < 0, \\ Y_l^0(\theta, \phi) & m = 0, \\ \sqrt{2} * \Im(Y_l^m(\theta, \phi)) & 0 < m \leq l \end{cases} for the ``descoteaux07`` basis and .. math:: Y_i(\theta, \phi) = \begin{cases} \sqrt{2} * \Im(Y_l^{|m|}(\theta, \phi)) & -l \leq m < 0, \\ Y_l^0(\theta, \phi) & m = 0, \\ \sqrt{2} * \Re(Y_l^m(\theta, \phi)) & 0 < m \leq l \end{cases} for the ``tournier07`` basis. Both bases are very similar, with their only difference being the sign of $m$ for which the imaginary and real parts of the spherical harmonic $Y_{l}^m$ are used. In practice, a maximum order $k$ is used to truncate the SH series. By only taking into account even order SH functions, the above bases can be used to reconstruct symmetric spherical functions. The choice of an even order is motivated by the symmetry of the diffusion process around the origin. Both bases are also available as full SH bases, where odd order SH functions are also taken into account when reconstructing a spherical function. These full bases can successfully reconstruct asymmetric signals as well as symmetric signals. 
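For readers who want to see the case structure above in code, here is a minimal sketch that assembles a ``descoteaux07``-style real basis from complex spherical harmonics. It relies on ``scipy.special.sph_harm``, whose phase conventions differ from DIPY's own (see the note below), so it illustrates the real/imaginary selection rather than reproducing DIPY's exact basis; the function name and the sample angles are ours:

.. code-block:: python

   import numpy as np
   from scipy.special import sph_harm

   def real_sh_descoteaux_style(sh_order, theta, phi):
       """Even-order real SH basis following the descoteaux07 cases.

       theta : polar (inclination) angles; phi : azimuthal angles.
       Returns an array of shape (n_points, n_basis_functions).
       """
       basis = []
       for l in range(0, sh_order + 1, 2):  # even orders only
           for m in range(-l, l + 1):
               # scipy's sph_harm signature is (m, l, azimuth, polar)
               y_lm = sph_harm(m, l, phi, theta)
               if m < 0:
                   basis.append(np.sqrt(2) * y_lm.real)
               elif m == 0:
                   basis.append(y_lm.real)  # Y_l^0 is already real
               else:
                   basis.append(np.sqrt(2) * y_lm.imag)
       return np.stack(basis, axis=-1)

   theta = np.array([0.5, 1.2])  # polar angles (radians)
   phi = np.array([0.1, 2.0])    # azimuthal angles (radians)
   print(real_sh_descoteaux_style(4, theta, phi).shape)  # (2, 15)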
NOTE: The definition of spherical harmonics that DIPY utilizes does not match the one in Wikipedia and scipy. Instead, DIPY follows the dMRI literature conventions, like in ``descoteaux07`` and ``tournier07``. The code in DIPY also follows the following convention: Let the SH be noted as $Y_{l}^m$. Then, $l$ is referred to as either order or l_value(s), and $m$ is referred to as either phase factor or m_value(s). These decisions were made as a result of the PR in https://github.com/dipy/dipy/pull/3086 References ---------- .. footbibliography:: dipy-1.11.0/doc/theory/spherical.rst000066400000000000000000000055771476546756600173720ustar00rootroot00000000000000.. _spherical: ======================= Spherical coordinates ======================= There are good discussions of spherical coordinates in `Wikipedia spherical coordinate system`_ and `Mathworld spherical coordinate system`_. There is more information in the docstring for the :func:`~dipy.core.geometry.sphere2cart` function. Terms ===== Origin Origin of the sphere P The point represented by spherical coordinates OP The line connecting the origin and P radial distance or radius. The Euclidean length of OP. z-axis The vertical of the sphere. If we consider the sphere as a globe, then the z-axis runs from south to north. This is the zenith direction of the sphere. Reference plane The plane containing the origin and orthogonal to the z-axis (zenith direction) y-axis The horizontal axis of the sphere, orthogonal to the z-axis, on the reference plane. West to east for a globe. x-axis Axis orthogonal to y and z-axis, on the reference plane. For a globe, this will be a line from behind the globe through the origin towards us, the viewer. Inclination angle The angle between the OP and the z-axis. This can also be called the polar angle, or the co-latitude. Azimuth angle or azimuthal angle or longitude. The angle between the projection of OP onto the reference plane and the x-axis The physics convention ====================== The radius is $r$, the inclination angle is $\theta$ and the azimuth angle is $\phi$. Spherical coordinates are specified by the tuple of $(r, \theta, \phi)$ in that order. Here is a good illustration we made from the scripts kindly provided by `Jorge Stolfi`_ on Wikipedia. .. _`Jorge Stolfi`: https://commons.wikimedia.org/wiki/User:Jorge_Stolfi .. image:: spherical_coordinates.png The formulae relating Cartesian coordinates $(x, y, z)$ to $r, \theta, \phi$ are: .. math:: r=\sqrt{x^2+y^2+z^2} \theta=\arccos\frac{z}{\sqrt{x^2+y^2+z^2}} \phi = \operatorname{atan2}(y,x) and from $(r, \theta, \phi)$ to $(x, y, z)$: .. math:: x=r \, \sin\theta \, \cos\phi y=r \, \sin\theta \, \sin\phi z=r \, \cos\theta The mathematics convention ========================== See the `Wikipedia spherical coordinate system`_. The mathematics convention reverses the meaning of $\theta$ and $\phi$ so that $\theta$ refers to the azimuthal angle and $\phi$ refers to the inclination angle. Matlab convention ================= Matlab has functions ``sph2cart`` and ``cart2sph``. These use the terms ``theta`` and ``phi``, but with a different meaning again from the standard physics and mathematics conventions. Here ``theta`` is the azimuth angle, as for the mathematics convention, but ``phi`` is the angle between the reference plane and OP. This implies different formulae for the conversions between Cartesian and spherical coordinates that are easy to derive. .. 
include:: ../links_names.inc
dipy-1.11.0/doc/theory/spherical_coordinates.png000066400000000000000000005464331476546756600217340ustar00rootroot00000000000000[binary PNG data: the spherical coordinate system illustration referenced above]
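As a quick sanity check of the physics convention described in the spherical coordinates note above, the following sketch round-trips a point through :func:`dipy.core.geometry.sphere2cart` and ``cart2sphere`` (assuming a standard DIPY installation; the sample angles are arbitrary):

.. code-block:: python

   import numpy as np
   from dipy.core.geometry import cart2sphere, sphere2cart

   # Physics convention: r, theta (inclination from the z-axis),
   # phi (azimuth in the reference plane, measured from the x-axis).
   r, theta, phi = 1.0, np.pi / 3, np.pi / 4

   x, y, z = sphere2cart(r, theta, phi)

   # The same point via the explicit formulae from the text.
   assert np.allclose(x, r * np.sin(theta) * np.cos(phi))
   assert np.allclose(y, r * np.sin(theta) * np.sin(phi))
   assert np.allclose(z, r * np.cos(theta))

   # Round trip back to spherical coordinates.
   r2, theta2, phi2 = cart2sphere(x, y, z)
   assert np.allclose([r2, theta2, phi2], [r, theta, phi])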
V콿IK%oeq 013(Ij mf̭F X̩6@ pN*0=hf\ Y[Mrc *Sx޸ߚV?06f@SJWKfy'뀭$ \ 4w` ;(C&^Uw!Bqduڵk׮eYf͚5?oiuSV6pǩ ko:k[<0o4%]]yy9>)pܹusυ2qltcj NS;VR Q585u`5W HxE8ܕ prN=T`s$z/k*9Œ5s@jvNdczF9Od3 kjdQoc6LyVRƗ`bTor[59º$6aZSkσ c%N?ML]~U؄B!GVsuɶ[> BZ.goP,M DՔgܘo1˙ǵ"ahm;N(8Z6ZgănvNss/tAǃ3Ιqt$yWs5y]sp 鞯ۻTe;|g78^3?j{k gOfe^?T̞P0]5 us=-Œt-X盫ԥ6;PjkIV#s'f`h| @blǎS6 f=8f˶ݯ;j&Bq8Rno6'epj{(պ懵@R~|s!0-08WCCmgm`c)gmۧ~\A :Fh/Q]u8\DZ#ppxt8N89J~=ۙ\nopƹj zm۠sV۟{9mytszI|uj $A90QD0b'ЈFzNe!y\9m0CdT <g0Y[@bYVK"t01x43!P ;f# MoeCԧakjN59TB!1pJ}2Z dٽNW 6J6a~_8yѓ]g@Ƹ?; 3mܖttIs[;~ Zg.=CAݻυ؟o Ϻ.N֠5~Bs99T_Jj^hƻFs+ 0k݈5瀺Jeb.~afNR̆i`I`.4BrafOj8*07/@9f*Q5 * Y-*Y y:h0#~U؄B!Z`Kە=d,ɂ; rCĤ~D >kԋ=Y^긫٦8gr;=W/'rŸw{Oiՠ8q n+gzNCp:#[Ku댯68k`~NUg*T)T9i*yq19fWPuMlVORU[0j9jJ 8Cm'BT-K:4":FSF lB!Lj-gK@Nboz_$)CΘ{^uց]><:nw3o ܅NF&pGyg8~T'3$OY5w[݇Mрw:Ar:[Z6v>wǀ;ad^/A4V{xRsHc=#̗yD0g_2pu!gfPn0>P5~V1[uXT<}:|PjH3Ģ/ ܢ۠iz5B oȍEgԮ4KLltB!1 l%Z{@)u2QCk7In YN:v.rxDp׻Ap8_9AX|ؙnMo pw#))?:C+At wk9Aw& ֻ;I_%=9A TS`lީrxd> Y۬{͖F 擪7.j&y=*,Ts>pNu6WwdM4Gb.P#X'#Ҹ2Wl׊=vNݩ{ @[~ѺߩCsNk[ rRw&>qDU#6!Bc l?4}I!­Wy}u> u{նG@`1wb^zN~}ҳK>NГ wq`L˝܎ xgn|;fkpN;F\ԳKev+TPjfx8_cy  _H6c5չ`7/8ټPeoi浠Uw@%ͳAEAcFȽ磌]gaO)E@Wꦴjw*!V-\ƒ&1Og6w:QHM!1cƌ3s9s^ {6x\΋Zg,`5t֞ n8KG]rIVg7z(Is^w "*',4w^cz]y`jJ@UvY!o<6Z4`N7?4uj~>51_*5 Uԩ\ny%O'A%U YK!)Y?__辺7TBQxBQ)f̦IsS5<j5jYDU#6!BծvBmwuvuN^z Nm̙onA_3ݎZ؟qGj spgSzӽ:_z{9Tu/f?P6ټvy^VRߩ:0_7nS`|.2'9ɼ,+u@| HPÌ3W:N+DmT:KAk*Y@Tw;6wؿ꾺;$D5Jq0xy6={BT\xR(؄BQeu]: B'8ۜ|N|е1;tkgRNIo^m3Buvh68٩) Z.ݰAB!,{t|/L۠=]hAwlt}7Vc83RCh/4tŹSN:8g֡@u_~ͽ]s]?owNk*9Œ5sxhbnz3\v#fs XMU GGjT2Y `[IZ`^Z_Qjlxng!QxHRjM>vuDz.0n`+!$S+}p>cWM 3i]&DՒ㮎7W(69TB37;#<zر0 PtBQSlkpmE588Zjcmvpkh NM5xՠkmvp{{3Rkpܻ[8]i`h۟5}Kt+KU!0@i/zjfMPQV#k`^d <f'X :kj1"ZX.Tm3T'u$Wqf3P͍LQ1jg\`-r6n΅~;=^SԮ]+;a.>_Ws˙{?:73qFiG]#|s$Ve#TB限z98JTUR`BqLm st˵&%|; "N]wF|Mn( 81+81{ ZzނO 3Ι5wZ3Ľ>?нdh#`^m~omLW)\i0+x30k9O~6`\o3b3׌6Uh,45-*J0mu:u0MT~Im[y;3/qA\FUqBñ d؄8|rvn ԼLH&Jq06n 9/ԩ^raeT߱!8"}iqh 8Ի ; `k7[]G@Si|pNp&88v ܈u O@h :2t8Qp^FtSZt/ӥa 6,#Kgo3I1/l| 5@޾oi@ 4ɾsl~Ke*ԋ_ٰ/9Pj~^U+]*8:Fy'š؄R}2Z낽N#:&Uzv\֠s'JXXgpйcG^6k?p!4P |Bp.w{ۗ3Tkh}&}38ۜ1:+U Fyu6+yUg#IjzBste>TV,Ph3R50@wP90'tT \5j(H(y#7Q~ DUg(C  nl\ %mK,NȘ5?&=aO_ Gf qyVa^ybTBeR !p{`׮~U؄.u ½+7^G mn{_LJQ`Od=J7+` utpyb7Uw{]BV TB}o3Rb=fm[/PPWLs"XC̥֯&m`;;R̆i`56[L2Y=kyu`r̫ܠ2Vo2 ~qxgi֒EP-<š5_uBϬQyE&mR^̖ӡFYP礚I)~Bm׏^2jU 3N>Tq8!m8`y=ý8Gv>XT:O58͜vqn 'k X}} v6Tlgs6{Sk̋Z}|N%X`RLƀZfUAM2.yZjc6y2ƛAZwy-σ1Y-UI`uZ90hE!{ s/7 Yr@ý>Yx}n&5!; oj:7Ōwߩc;kv"$] huf:JTUR`*u،Pk__Hg3[AqgnP踄~B}{.:vJhޯ^Lh^6NTUR`(pB58:5<7GkЪtiwN֠ÝsMt悳))NhN8Cv8o:B3T t6mP"8׵wߜa]UfJU ")k] ';ԗ2OUceMI# 끷\V g*4f?jM !%>Fl u]9gY ΀nG؄8|r}Zz JMw*!āN K_ cl;'48N1~UtՊ9Ŷȵ']zQNN8EN7-c0m^ ΥNz~N烞Z4p.vh Mi3f8cF_g9CoQD0OnO<k8h]j3O ̓F5&qb[Xs-%`^oje`4sn3VZ:v^5T9|0cTZ\I W JwJ:F g҄/} /^\Jiooװ#x}šhv^lkn<{ 5wG}W3 j_\;p%[L"BWy < F{&S\,щ@ S )6?U+]FNgQp=f}ھnRh<*SN7}A=?ZZ꤮7~[L^UCYOm4n1~QoC`ZYV߯hk=f[5*`Ś *H7:\.^u*3I̝2Ҝg~J_h/ϟ J O{h dH&7;NN^8pԠ;e}GA!9;vYl,I)]LXɗ%ޜr;M [wj!aW[tG KфƮD. h,w6QuIMjm-ӚBN-CXbpr)`^bAնvYׁa `޾NPZ |Uf1\5μTjmjZj>Xif5T{F`5O5RU}!s{𥅥Xz C+ze;4@14d~,-dDg %Ǯݩ{"wAl*ݗ}\D[q${zߩGē^3I>T5Hc9BJk1@ 6;&|c 63<ձu|S p(PߟЏb 9{&QVJ"v-YlA,U,don\ ;vʍE I ˒l0{Ba۾;q&ROa/XSu#2X `DXӉJ@N#XD6yo6hH{ Pi7@%JMn(L.u}~BMwU qߩC/uk]6x* JTuR` cv  cfB!đngl˽9|{PbC _kM?x#8]?Fl4WJљw aji iJTuDTKfo͍p=@ !6'7mde䘛g=VQO՜ u~L4Fp'ЀI~GBT2yYp`$Ml0B~B 'ii4n;dU0J K*:6S !. N;Vaҋ`-S/@T ]nx|H{5$B)ԫq {3bߩU-*TB'j޵;.&|lL ]6v^so҃N%(>bKǵ)UCbE7@=/4ұ mjߩUڽ^ףlh QPBϡwXӈB` [6C2{پB !l˜H;^Yy2ַ !P+5Ԋ5;qP0vVz~BT ޯ^վ`+ ~gB}U0UY b\(N!߼5qa͹[!2=Y[bj.+X曁;GD !#7{ L:F"d&Uۍ m[6IT}DTP2X? 
LG9/J!_rtE3yBGۗm "ꄫд}Cl2$H_aM!0o_s~726*>(rM|yB C JTR`B0K'o`!8 &;~U$65ߒn%0ouL;5%t)"2eC5 uߡBsƁځ!VT qȤ&*V~`_WF!đV8c"H+*#cU3p{H_@J75.>"AnWm'S !@e`;.&*pp<!8B.%g7oʈtwIm37aQ<$ŷJjLŅuiL)6ykua#Nۗ̂(SB lRt V}N(4B!Uiό׷Nτԫu]1wx꘵K(mn8&̩vVz^!*NzoZQ~B Fw_-ʊ JGqHMT +?8w4B!Vi9S2ן1lmGK Ԩbo&o@)PSQx+ƻ~B ;EG!sg;.V+*` sc|fൡT~BG>s l).;-BWAWCk aXB@,.˜H:S"59TBtV:0]3`J`q 6Q)k:ơyP]S !swU6XSYOs:34,ʺ}5WXBj¹'z= ~B t@XӈF lR6́8N%bOjIHy `srp9u!iyA#>~B#ǘT9D-3Fq0t^bjYIӈFJ!0jFSvS !ı>[1e븍? >cY>ļx+Ğ^إ@k~4~;B=tM08`7(;@} au" VqߩDu!6Q)݁\`#9}w;BT_zD։:cl;oM70vE\m$:8BAwwXmS8Rߩ67BwB7N#)J!e  9#4&M!nےr?.-y3ҵ kn:%6+V-6GZ!*cܛ ~*D lBT L{XAkZITR`ef 敆VIo_L!n;w 3sgn'E56GO:̚wOHaM!Ľ;F ;@>N%K l&R6 4WvS!Dյk-;a/2Uk\RX|AbjJ3)5xNR#8.n^waJT72MT*sMc K{S !DW33}Fvn^x4nH~މ ȍ1#,S !DM9d& !wt{~V JT7R`JWIX@C$}1b{aʂi+&S!_h%&wC3K;Bj u)a)v=lЦj8-%*@븰Nb; [-Xኢ˺}n(\g7GgBR~k!L૰N-/~ЍtC>M Q%8uz=/7_C Ҳ_qlR 4gԒBj`OXʢ+M;`gi;BFo ߟ%D*ӊdQIa0o ) ^`TwI;޺$:N#+)J%2Šz.@n/ |gdnR@i攚:C}V!Kme 0p`F&?Bv6^xjZsʎ]ϔ <⴫wXo]Jɟ\>;DuzWDv\eo7⮯v>֊:׀ynEuumuoNB6}; d?}[Z\pWkÞ{^.A9;V#Oxk}H/֏_5q93oyt@̨h~ Zuc??E֍xLf3n =eIS>sQlLcd !vݛ @[3;%Œ˖ 4 wNtPsMBW !ğvim@vo~ν ~*棘PLj VW~Օ13o'{Z39׃n PTdPWυ6QRZw9;G!]qS!o^ʿLj:Jj _| (d\DؼacIF*+d?οmGk \孆okudb5y3%jǾ7]xA&S$NzHqS[^vH>Ɨ@Xj01beՌNcMta&6Qu`ԅ _]]yPwZɗN!P2eDmoYݷmd]\2Y;CQ΄DCRaN_ vn9Ǜ[33ݬenйy|Xbc-+#hc}3ӶmN$t{q| 6唉Ӂ_ Akvoa]Æl w}pЏNN/[rGmƶC?o{7k $ cޮ YTޝys4ozj:{mtBw~9[n?qNC3WKv:tisJ#a[S[Nq.0@ΰMO&ٞ}Z\{^ HJ |xLҡܺiW8:xY`Ef|o,끄5^#7܁Lw3%w6QHMTJ;%`/wYf&IQ993{6yo)C­TUTsT\;0F9{}ʧ*qٓv7x|{pn]N^f.j$HޙGp0M o\Y< פMw [t[ZP#bRPc\]B:M'5 ]?שoA3S~:''\W;nȽe`5Y.0tB8N/`< 멍aJTWR`R0d>v%x.H;qSݵ:H̋ [è]qmjEƭڷu[jg;8vmg'Wwk75`q<|jKqRo?V}ڪ%MiAe0`.q nDTP ?J6|F&aa@ukO篹cH7Cb7hHZc|!lBkH{1M{gMT?B=jxбp=4B?>GBBt~'A+a{g^ʷ~v_ y C -޲ܜ*7\ѣRuT8tΫ~5nw*ޯԶ߁/z}2l #Z\s5\W^X~_\ȗ :x ieJWtKs־aEJTW2MTJ˭SIt>s-ۜR!Niϕ*}waÌȵX{_Arr gC BpBN-UcGo[3A?4b'`+栺ҽFQ!J5 54Xso7eF}U_ zca.*{ܿ/|W`[Wfϭ?8;o l| c10Baυn) aPNGD͂tߩQiu2 6b,FI&$nr\h86 */⻨~ǺoHmz,<ީ^~~^9\}vWC`9 /ܻf-\饷S>e[ ݱooF~_Ց>} |1;@89Qz@8k~ՕDbFku+;;ߩs6~}X}`ȝA[kM7ռ|U\tم(L|~aIc 4KL~_>š7֎x:\qRN`)ųnUR[s޽ܴ 7AqTIa~_a{T8O'8N4sF໽M`TA"o-Q)³=TB?N"ltMm!u/.rLve\,4~u-BbOFct;{ZĹGܧ]^ͦ%uBh̯qzu lFڻ k 9}gw'}{ĥnfNQpNK:E?;\\G to[9?7פ~_=o (W.;@8Zu`r.~՝D(2mΉ:TB8U^l_a;6,-/dQ#Mzd7ۧ@MRSTgxg># ;.ڹy4$?+o 8Wy{⥠n[~_Si0Ek+̖|a|1zJ K"zv~=pC|us4Bav \wQIMTJmZޝ˞w*!đPsG֮vǯ?{e9f c 鰒bhrI'=&ֽ%Bh wz!`=~1P8n WӮod}^I#pg.6"|=TKOʠǰvFLzχW֟8<~+e'GgHʷtتFq ܚ{,~4~՝D0rMN;pmAT6-}6̺F]^zmΔnr8ܳa9(Pr_m\xA/3XRor`,|>kqpӦ^.}}k4k<ϭS'zoys[߻ Gub_M!āpp~pB@ q$HMTjLs``؆w!ġ('5W'<\ NJhg7ӴtUCiaMOMܳ>9W|Wn~_S~¾]ӈN lR Szw!(7| ҟXvmwK&+v]PDnJط4ΠsE~~u8bp+>Q2kY=_ Z1!JT]8q#zfyƮ/-B||žaY%ǭϮtKS S\.c:~ԋ c xL;,WL; lJQn[ee} }%ep;7{^Yϸb{oSk29HOQE38 wf5cǧ9S^%a'ckzqG(Lj J!7n/)hY!W,jr^{o5}pf>u>.}UpwrC"$[C9>^4PWGQ6 ֒;3 Q1&*V~0 r!˾ v\EaK|NdF]0h|q`ۧ%5Q;}5㉠8e^@+Z u(5#h\|!*o\4Hr7?^հylftYݡow`?\tp> wwjW/+XgoZuJXߩDu'6QSAb_fF3iYy9:eWcʺ}R pr/<wj!M~!XvuW}i_>7:{exr/TjOxqe?U7BY_ >fOxޛk8 Gyܫ+k~_5ݥl4 JTwR`Z0Zf{ȡOqvkڦ^mxxüP? 
ԫqD B??a=n\}=88 NA>{5-߯Mi&6hǝ yY=q{44" o>gLb5*Bk,`sSN lR , ^[w)8J&jnQW-_o9)ʺ}6ұ}]I#~~ɒ8}ۏ춭孠}nJq ZRVQ;›&āq-'̄wgWc{=p;(tG[A?X[Vdo&}5BTݨlL FN%;)J-(kN%D;6 }7[8寁KjT'4oţe>4l qxe{Ò8u}-wk z===F,ƒ˟u8:K;G 9wFLKpL4<{+{6ivƓ0>{Lɤe{3=uRoOزqS`T!+&|gg۱~j+37;#VZ8 r^WQjEAԆ5{& FcHsQM.Xvxয়o@h^{S5*ZYQ3/[^̳uF_͠BmnTԽ$~[j;K*fϞ6m->űAwZ?v `l*(~Tt*t~eܰ#ʎ7cDk(}ti H{5ppv~zSȀqaoF8JYt_M3 G\pSRZw=**sy T^o*h[^NqEfj W^yO_ez"J~nwߩ8 ڀac|B)UJt:I0鞣JʡYH+XyV`MnCq{9$6Ӣu{}6!M虇BTi;{x=N%??^=Ɩ09i[άj0+bJ6q] N0w*qK pp~pFC?Й~`]I]wjJtx9%'9AG+y +^w4DmϏ3Z=;_]C)f, Wҹ;CfIi{ !{g o[}\\^NN~GH "go춧múZ!6Ŏ+jEJ+&V~`8طU~PYA`}a%Fau¾vӳ7}ֽ{.詉WG\4CE]C?Ot&x*ރ> cu9UlyHΝf|_66Zjl:l\ =vn`'MoПO\`Ɓ3LJ+&@MskX~X_t{J#LRԧ}lR.on }RZ}x;槑ThYcIVXrp{Utx̶qB"21Ƽ?&z&OZ~MϋYLrq:j04 kLTX!6Q% 5,n'TBY}f8'ywiDUBjZ X"7CPk|ZY/Е\쉖SVZ7< hzQk=3~t ݃aD!D`m $Ajjw*!, 97y+|ZQMƆ] ϊ0ƠFwA̅@P5;ӝ6Ο;1xL{6yد_]||ϊUXHVos{k'Z,e'Z{u(lB`]GGB61\m37ᩍ/LQ0ᝩӾ{zvk=-_ةk/Kqqz f&6Q%j*;hѓAw*!,B4+;~4to_qG=z:uԩS,˲, %K\{M;H-\i-`ޫ:!b^܍ kx7ה݁ 2ǂԼǫSE VwCf 's{v&իW^ 77777RRRRRR ǁSgq1bĈp}w~;vرȄ@n/$;C/I6Xtκy-g+0_96SXF:굯 }}o ?B| '^;8HMT)a)}4BYVKc^6v5N%LQ}___3 A~h֬Yf 7qOw?v[Ž>şm>>@ ~~GX-dtײ Ywn~>S9vVT`U[K93\sCc$m{ +G]GUށ5?(93O^/np3\q#Dur_j;́:g56 )T fnnN5*-\',d+YkN;8HMT).f`ɺi8Exlr?M\ SΝcʹ0nܸqƕwqЮ]vA^=d= ESvm*;P/r[ybcccccu֭[׍yʕ+WP( f[hꂅˡhk=k3?s?%e/HUJrAE*:3%3߿^?Wp{uQg=yaa DDDF|8v3ջ̧`#4mzǹdؚ!aUӳomwJ!w߫0e6qIMT)aaN}i8,>ϲ7yThq;z_>84 ~Yxv^VrǡPV/g2{jmˬusnzl6 ruu|<'Dԯ]T*޿!ƒYY̳D+U^l=˟ey+˛!WQυ wFZQS40RSN;s;y0zdv/QQZ~^ ӂ1~JT)a]aaeS6qWyTK}֝]w*q׬񯔫܎0+>˨{ۊm絧=B]?mh !N=,ؗ]one7 : s P+˟t,oϵg5/073`XW dxBzɠNR/ks*#GY6׬2PsT%h%9`ޫ:q=QQ4 xKT"0 s9bJrf&I>~)oL vqoRx+`xGycb|zS30p!t`m"hϽ3'U06k I̒`8uO`Wg`pe'b&yKi@R]F;{$G}1:~b2G;B|5j i MaСCؚn١a78y<.!f,/l/ÎVgol9O⩡D;Z rx;S\R*+l魘~/N1]}y%K+$)ZD{__xig֬|2!-u;ET=.:R@M`08=~kwGK[ɻB3˗돞>`Wi_ǩ(9s5C~k2a\Mct*F:uW*-R3TCr"{Vg2Sd?'W 26M% JMRSzӄ5%K,YD…p~4Vp/vII/Ya.HZ1 1iYq;_պy>MhP5߰ty+ ;Pwz3&*x 0a7u^1o>τܦ9urvuudN~,vj 5u8v8:^zW3bIi+ׂ]&F:\&  y۱?[ӟIf DV+7A)e`֭[n8p@_^{ #K?PmV}nJ}q (΍ ?دEy6$xF(l+N8:sS~Ņw3 ڪ& /x9nWߕ~|0 ~Q+5ԻmXA0f2vu6ȝVMM{"*:r29X>76?j 5吩 6mSb${#r6!Ta((Y,#+4mxYBSL2eq<<Bw9~qtߝ9'jsԯ[n-IDATWKfg FrQjrcˀBb(.Aq . ^g`'O (.p\lU=CBi`WuPyCջ*ׄۦUR/[aϲ=N:J`J2Y*fpMb޴jG~[10awVʻeh3P>]\r` '昆GLt4iҤI[nݺuk[n[\822ls^ie3Pf3s-Bb`!Mv||cb5P_on!ZUYl^' EL, 6g]{^9' }>6,U}So`u:K(8c9<= {ڞ wj (=Hk|2bl$L9US>'Ix]`Wcpqv<]oרQFرcwFȑ#G  o ǫQtͺ|ڼ&ؼ Z2d#>ج=0Ė&dP]ΫҲBvl @b+?akA`ǚvvΣo-nBOQf2_yAДJLf) I>^%)pkCIRo v+r/|m)' vUU ?Mӎǀ *+6iI^-\nly@¶)Tn܉_A0G||0X!dt"ءDt_4ar_:OO ۵iۺdTٓeׯNW?8z?uST o!KCfHH+N0ӜfZBW4[+U{ NuE+᤺GpZdυ;5s}/8a\1Sͅ7Œ뺜p lS>_FH>Yގ"gvqo¥綿g``plㄺ\LBY1#E] Csunf3<8;߳?oRLg2bl$G}f\R}[E=\Ĭ,k<`4ȟ-[lٲ%lݺu֭{ݻwÏG~c-c_L; ߿?l߾}p}w}pNC8ON~G8v6^ DW@qXo~*5 Q2eŶxy*9ּ?]gp6t |b'}tSv91!ѷ֘f`p*;m! B`Wu2a7?kƮZ>>|vO G%"#}msIn1B !\Xk"B< Jwe}y202uL{e> Xel9S{ 를J~PoN7<m{Ϡx b +(R=i'^9*)^K/K|$E qc 4\$.O*!{ڹ ,( O88A|[;gf93yn go5008D3*EڥO~{FC{1>x?>~0qkK_࿉.9ҹL]i颡r gpMbrs#*U\LLwK7۲*+NʨC֓?ԅSsV'Y{ /. jiF@ԂV'm`;x'o+':n o>mn(r7(ю]BY`ALtPLHz`zPgy;t,<o7!P%pv:*W&l < Fޕ{ϔ 4[;xԥ7 v뇱ҧߎ ~k\$+K[k`Wc_ IDETh00&-/W@Ba\Y-pđ@& 23 m ťjY1]yicYmBoYҳςz-H+r@YPL輄1VT}֛l6![ gPp?`rXm ^뇈Mk#-U*kJkI0בZn v5u 2ؔiӎ^ +.VA~/UWȓ},8+Z8B߉Xksz _ﻰPb 43M!}xVf Z.癯\Wq Iay=5˅U=2 řD1QeTh[IF#?3ZF . W0!eZN:fpMc޼IJ?)79+(I ѷaʔIZણ]\ XFH`e_ i,CM[csR102)ʋkޠ>Q;*\76elV\X;lfiŵ _ }ނW [^oc-l9UG8]kYU 9UW(!:wxaKgAÝ?h`5[} 6JGZl2Y! 
A i,vӺ@QPSW) R?!_ Pe'\oȣwM-IojL1n2%gx eR@/ɦ X9C5ԄVqP\ &v:g7ͯ /AWSpu}[W]`qc[m:}\202HRSV']\ȑ}PRꃜ~z7v\-jaڬVB@FB}|ZE}?5Ao):_]q!~-^B}pi\{:|_70086 (a0!w%l'`OO|`Welە+msYF2cl4!R5[ߖse,*+iI 'e"wBK)~ rtnlC+P9ˊk UҡPdJKũB_>r?@ Z<---0S?jx;xoaQL{ B g``pOsV lzQQzcيe2a̘n[o,5f8CKUmKyK $kͤX  k/A/\ȣwM(cp>Ӫr:E~4_LE'9 2uJ1(@lny seXlY(8y*}mb9qپBw멐+b``pIjS`pᒗ3D|uT-xas6m*wg3qNpNsoUn*:```p)%Irie#Hrk30܈ %M_rTyp0LFk2vuԶ@hi5>z3 ]2MQ[*_w^*ю[B&iBK0s|,rꂔسn5AT'fj! =;քF ؜!ǝZKaQJ{  (Ju$@x^U= HH?0_ji˩y r-j4H+rWeR}a2O%%}˒iJCշ>yğHU].Wz[]"~3fuPExܟ'Yh&<,XRp UyB;!/ A|@&j*<PBCC(+b7 Q 5* ')B000fE 6&Kz.2ޟmMszZCkg3A._YUy>qD2AFPKaέTA_ ./G{@%ej3+ 5''vF~[?Qo'(o6"Q\hj7Q-i۹ާ;\N"go6u-߈mܯ IRRG ځZQU oe45j7I1-y$ګ(mIUAUNApm^w:]~ ?3_ PE$( (@e xlw-HۄV /Pzm*?G? 6(YdJJgdPqRX UY>@9upL=ۿSM:Eݠq%?ÇBOa!= ,vg@>/:I N\j^cC}bC1J iMtQ)6S V B!Mhv 8295UC;&!U;`>+8{dsH2c8 ̿:W4z&ˍ4I /%`dp3~PUgVMJM8L%ZD&2T/88j8o @ {%0FJQO% W@xU~g7ܒ@)-WUPkȲ ` gB]^JY(  ؑM~+rr)ׅ@yL 2寅V 43wRS&q mPJu ( 0 2PkϪ( TL5@H0:zgԵ 45 vu>V FG߷i!kہj9D逧w07Ս#Ynդ%TVTg`7`Lu{v@]vKj3P,8SP 5SyJj$Yr%g>PPRkmeP:Ro@Vu9AB+5ĒDiMXmk c@a2I ! n~ )2e¤}Y1+x~o*+Ct~ L- nfp]# ?s ss w5 vU1fl91jJK_?:$^ρ.c_M6㵔PMeT F|I^M٠Svց[|l4([I3@GAQ vbb0>@k`< Al]9ALfxZ4Pc OPcÅW+PԵ"U=u*vUnws]R"AP j|~~TJh;~t[cGv *biuzPyu9;V!P>T) eB c@}NY (~VOsqFTH tA,"M)_OAj+؈E v)0PF}>7E]gI U56J O/}(K^ްƪ&q&\|WŰ`Wgp?xzF n|ֶ*:FȁueyQapu<j R?+]ՏEK9ٽ&Vol̟x^[}{4?QPr+ 5PP`f(QX^.cۅAzO(w7{jbiwXkKhW=!t1=mY aX*~!B!Q `oi L,Kwyi:ͦ^ CA8~!AI ; -3T=ZB.tyn!%y칭g kuRmq&iBZ`a~m?7 vCZB\R4HXv05i*h2W\:>lm0D Y"RFFnBek/ U`-i> Xn1l- h+0_Gr>{:j:?ylSsC1<76u,dɛ>rV^ ޖl(x컝3 ST%AؔWo:}Sl* vQ_|C*w|i289iK_r`lR>p-]ALޒ2ͩ-х:PykXA- bm bM)$50KzT J \FF%ܙ44l>l.2 ̈8l`Է{X't.Qu#HiEc0?w䵰 rwd!sm~=Gqs=w]0gSKJ5LXE%}܉|e+f<K^*E]vS)b v5.8QվŽ`We`pyYqt) IՃ\ݥT}@ 3 rc8sxPZ+9sМajځ Bj4-b8 EV>9 T*(PFkBTikrxy 6cK_[Ϋűh[S*yHnf4X_2(C]b uG V4ˬk֙ r^~s}Fn-Q pk]*\۪޶zݷiLP1.cb8*"r5!^Xh/`ggzOۖhysP&g;o xz{]9 Htv4S]p͜;-%sK. 9&s^d<.GH8z!j_]V |e,Wyo9tve:Hh<~Mok`p=!RG}[m^y&؆sAN홭!Ӟ76'dkx&rlϺ ض;Ŀ 0de`%F}pZ4g44¯B7QP+pou}̧>j'v}1~B*YoYSکIK9XjԆoʍ xՑ*݂;|rJ@匿T*>&{j"]۱jw+qz93EJ0EzkBn,_M_!=KdAkTTTTTSn'|5v'_ mڴiӦ~X;kɗ ,B8ex9ϩsZ+s?,rȹs3s9=Wu {,UB`w|)$V)}?Il_Bx'D Q&J%d VC$뱸fPy#qX({>7 h}\ZGQ.m ̪ժo;2{`We`pyO%mjKd_ʝcO3u:9 ee@f{ݼP|R3ii&XK4qFx" C"CHU!9,UQr(J?~|m}@xûeW$dW)@م BuL2~-z Ak.,*ZG?QL(HN,b*Z+JF<*dK;!11111F5jٳgHHHHHHC:VZj*װaÆ Cʋ"9=ֺ='՞sV{̷J.ki]43}EE|}[n\(7<67MNN-=BL¾s'a;wܹ[o[oq4iBR9ݙo[M _t:N&eDw޽{nh޼yaK.]t'؞5k֬YUVZ~~ ݸ;vرcumɒ%K,~~vuWC`3.b,M vU)3A~/Uwzq;r@nY/A'JC]3*@g١p-6ҏ`-iqv8V/x?MAZ)&ŵfD|haL߀/T Tg;]NboEm}.!E7)m֙@ ^B7X5Y aL7ЃiUUt^VMM8kZ\;]¦ݢ*Ls*rBR b@x" l .\p M7tM7??ӧO>h|~w࿂빫=-뼅!Æqn3k},WHNm=<ߍ92ZxЕ[Mn7R q¤{N~Br^W+aXG d[9rȑ}ӧԫW^zP"Z1՟e6E[cm۶mm۶m۶-lݺu֭ȂW^yW^˗/_V\rJx'x }f`РA ]3p9{ٳgO} / /=eᵂ!\ͶV$rgG`We`py1颇p,ϑ/ZCMcQouk9me6I9Gfn8۩_ Y7./QM3biS󾩉~/2s]QXCQU.Qo-u"xO6hq^YogX% Ә}. BPi\-2/tZK]~gw{D'3L*d(엖Y+ Y?y\9=-s_k]05}nml:AYT@9s⩝pxө{ɸ38!ݖ[75  u?u?ybS :urF8:lMh!%5\??Ǐ?~\?N{>+7`u)՟?·ifeu&i-gΜ9s TTRJy͛7oolmfǏ?{>_7W=zѣGaޘů{clihi_]ELK}erH~6U]:r >3''`fVdvGX!&0_G#$`U +tY`ef; U/ܫۆϿ6kpTvOOW`MCk f'ĹgvpvOzx󛍦*|_לESLq szʨYVbΣ9nMpxGm bmjYq0>\HI?\;[ChFypjpnCt|4[g`Q{\ [wܪyCJl&6Z-_[2v"b?(7Ɛ]^3pǒҔF{gG7@fRx8^cNB& ;ߍf/[O.K𿡰Pol̼ۮP y@)" 2dȐ!zbX,PzիF5jfC;wܹsM6mڴ)4k֬Yf~ͱ4;NůblY4KH8+؍1 P zz^>`Wun]0gS(<=Y2&Iy{3VBsFJ|eG;.Ó=Bb礉@kyv[NB鷷@5di~ܹ"'9ZB5ȳ>t ~ $9|Pog-g>^[,LjL M1[KaX;,!/={WjQک[HiMu\(-۷o|`Kڪé95ezɥT``p+; zTPM %2#Dw+0|WK5:i_B_K(e['!ɐΑ@#p>g:4=]Np"ٹ Hs gg9^:wgH>~`Ԭ^ZA$KGH M dfϞ={l8uԩSYc1ah##bBls[[ ߯թIjb@h^Ͽze5wPlٲe˖y͛>|g%K,YԩSN}p8%~3C>wn4Gsϙ.?fp]aT-$͝eC`3)4rA׊ATU+|SNw<?_me̛qW|?_`9b2Y>1,- *&` =? 
( ΂ε_Jh+TUssfs; fwƚ~N9|BNZR|[V}[9[A}C|1~"L;T<۩qiw,~W|kNe٤SAB:p\(+8z1geLM.lPNhS)(5DCi4)S=V ס4S?H7Z.}HJ,B3Bo  8>rSapñ2X9-OeBA[RSLC{ OK" B&61oV]$IG=z¹3_2O=`Y+%u財壠Tjvktl(Y3Kr QZG׆7U, JE_$뱸fPy#qܹ+; `Z!8믿jZ>%Ttҥ=qN'N8q&_6mڴiШM %(KIfiC`302UTyr90NzLo'9k{U*?yq+8=^r/(9Y/T7mZaݣNS}`rhzdȇ[ 4ԘQc mE)c/5\ݴtL?Z;tI*vԥ]>^ yByb 7lR7Wڪ?M ?U58 C*/Lю"يش5yw9r>d= ο }h0JGvL=s;H@=;mA6aC'K=↕_!J=ust6ﵔf'ggD {AeuzmpVRZ:[._{4Rwh~[g)ⱎì2+VXbzׯ CJق pLF &DBa9ZH`ޣ &uzݔж^y~_lllll,˅ݦ ` 7ũLC,skywy~ˍt:ΥjZ>5GO?OѣG { 68o m!\W7239SY3W3B_![ !oV@GQ*;T ;zj6YOT"`j|' _ BkFVSݴhB3js ^+?@4;p 8??{ c瘹 dF_DG%#j[LK"ͱf* ~ H~s-ɦ T>->a E:,_ A CrEشkE¦o؁p~hC8Pwk 3X*&*`B3 8MQI)akjɅr>L'q}%_Ck?;|u;0Xh!v>G!deCX2hQz-y;%Ww~# eS莔E@CpFȉHYGi6Cx+ASw_ pH)`Wugw:Zŋ/^ؿe˖-[ԷqBRCQKoԨQFtaK`СCnrrrrrriiJ>.!OeI }ӧh=f^*nSHHHHZ j.]t颧jh%K,YDm>/qqqqqqz+6sk׮]v }۷׷5ARz +%M'%AF]~;\R?K|-|A_)* pN(SK,vȯ z f˟SB~ Y^Ko. #]6tU&PuK]omF;OCk,vZ1BZ̰r)^3`ۉ3M;B 63|Z>_xj-)6OOh}mupgoYqK vTӄ:S,Hx;.u%G|5M;vBelo˨rOȚs,# Aj,%B9%/,,[cRa ͅw]殦[,qoQPc^1ArRPdDt] @քeH*f>ngqfoL^ u]@Iw]ڑzW op^c˻o|4kvPEvv߻ɕ' `ӗzk5k֬Y\VZj`5Յh8`&d/EDDDDDNu:@sτS(_}'NݷR2T=B!4G^2eʔ B+[o[o_]iZ,}ׄҊ+VXQ?رcǎ &L0lܸqW^z߮fp]anm2u b]g>cl4{3wANxL3g[:h#E} !U/\ӟՎ#??V^[ph9~]{zKh[93f~ J_ !o0A 2ZfpN5[(Tc\+-&-~ւq?JVx8\B_QH8 T_Q ]]-4uS]j8X(g/gI8_mM_:-Z6slٲe˖cVUpaH"saFp>BrC|]R)O@GsŸ'bt NMR9qK?Kf7Yݙ_B3cSτ[ʄ ҸSN8b၃k?<쮭PP-:tÑ Q&_dA!un.kڂ sa }[ vڵkNe\fZ8m… .[TҟӧO>}ZC&ii8p*hr< ,o_0?s̙3g`׮]v˲,˲~/pkBԿ`fΚh+W\r%tԩSNPۯѭ[nݺd{7x Dž׫_|v:fp]a7s9N4}]E#p+_ ?J+ 9 R2}MM tU Bj{tj3@k5},߀{nVw(ڊ EŅ3;DXDDw Qr :2e!k-٩p2+~#RoiMjyu_|hLO38񊪨 I6&x {,ەwafBYq}ȑ#G^SlϜPf)oȑ#Gv޽{P&L\xPRmJ%[@iuh[jGAɖ@=6W~ҏV4'&ihBp_4j֬Yf{K/K/W_}WPVZj7nܸqF[hѢE ;Ç%M\]]Ēe &v9ʑ |'#r;=yکFwAxX m`͔bb9s,H`5q9cb|Q7fQ M!9jZ_05d!oǕ:>S $$:N_Cܯf["?w+wq Olס8R\N1>N-EXAw)'Pq[xsFSqק$;%epk}ĢVJJ[P9A*E!;QTvtJm-rqH [ llH82qq9:Jc\&aȭW9k!K7;4%6D* |:9lQEHeL`=m" XHI>`?rGNny3v!4 [B[Ɔ-3MXss&L0vWǧzׄ6A7='e?5x%X$ 2;麃M`SUw)ms/ BPw.k|fi@c,ͷ3OΝP6eҊYFwځc퐚־a.G>B`ݻw6jժU^Œnu[h:m枆fЧO>}o˖-[lߤI&Mjtd2iPD%J/s{blfYZfuds79wX\(avq|‘ 9#lM X~FfudYݭ60󡏃T_Gb,Ơ`_/yf3g뺆šp l>-"'74GEtҍpSct'Nl^לiY9 xt Nx%$fE!SCN/q ܂Gg}Er׶v^)j2>azX==@Ѣx]6NEmz OM^٫4@x7EVyϛUq p%ŝ{ S9?=c{<众z ։|k)Q) ЃJ!lT ;qe1'M|f±g\ ۇC K{fdNJ; fvV+ B"FH˃}U:&/py8qG'Nd1)JGNol6\ddddd$P. >ˑm鱠)S"Z }qr:K0 6C3U>m֭[-W`k۶m۶m/֗|,}[nݺu͎CYv i jZVŅF\-*Es~ Ύ* ] 5 p*5(8r8ׁ!cjNǔ@/! aaHL#}5Wӳs?"OSv%jBoxEE^ӎr6rP,"a̶Tl fM>KPgiTiW2cO2Z3;] R VVwx=' D0!l[)Ro:h~w:imOw&2}B=}fJϦ 5U6+p`yz8J'JC9UCn_Н}.7={?k'/wMjް>8}gawͧ9+څs)wR9iU89v+(8v\7}]7vk@KID/^Xj"ʄ&7gYVP_ZzKCϽ!4;شP5:(׿Xvt}T^fno҆_,ٳgϞ@f _yyyyyyZC56lذa-[lٲם忿"jp]aMpg Yt= EO].%Bvj~z`cW H ;ř7cDUM~RY<,,}]qKX J+v(s1\ Yx6*/8gQ­.Z wp >ڄ.Mќh䳶vKhRj2!մ <׵PRwڧ>[MfvN+y%$ZC=w T?Sy5M.U5Nq9մ=-AZ[fi"a.θ.Pn տ*>nt}s>?3Mz̽Y!f-8ߓ #Nki^'࿌\F\ Y/?^9(/,,!Ⓢ%'L%PwsBYb/_͸K)8bgZd ̚ yr?̗_j"Gz8g{o菺îWw {%I^=ݜ{x澑-Ħ*@9MSmrEW[Ĺ -&9bKh-.ߕ`΄2䙲F*wA3w$2.9v@ȞT Rحg`}5, U4%r$!W 弯zGjX!=̰F0w *5:To$<2eCSᗦK))_D^y;Q9=?ib,QȓG%UVZ58~!*Ŏ>ȔJXo\V/uN;֜AaIl[X5V֋eʔ)SL<W5?lia 111111 ꋖ{%-ZhѢE;tСC 8ƻ06 s)b+M Ӝ_@̥n`pvZ kD;`YcPX_ rTuXSN: x~nI]PZ㤝8dʢ?4*g_npڻ4p~&p9gf94-px>{Ms3{M 9p;VOEU05Vi8=G+n==\삚&z@+L6@E--VW+y@ Z0]ƪEεz.$nV~꭭R-lj.(g*\0VV+!ŸN(qld2fpضM!#!"#lID&|:4*& X Kyt?w6z ;ffewpӺvc˔h{} oS۳Ȳdk{^Ňuv1ڙ=g!3LXEqOHڽ:txS9[]k cgǔ2#CդOA*ypjtep |񐛐3.y(H QQ#_]!oAz3<H1?r2Xf"P`"U6-b>p-Uķ!rk`J[4`=~dބ c?gPoW5A͠FZGxAힽ17. 
ȵQk##j < X2㜥{аaÆ Š+VX OxƻJ_TN.رcǎ-&}ߥK)jMTJ/۫Aٮ"ǀO(| "qLZU}Zd/i &M4i_v-r-- Yh!vn?KȅhN9s̙)Svڵk_m޽{_ kC`3o5(@>9\K1W5Z\}A'{ݜ1+|f"&ο L#L:(Ib "2yzڕRxwÀt/P3U⌾ܻ_$띪:uZ])Ї ,{Mu EYdErr :kV~tRMXӜt6aӱ>IM3]wFW"OA7(3F2PM[CT< x.rXz $/^BZyiP5u-hPc5",v]~߭>״tT: 5$y$5PXޥդu"l=`ʔn5uP``yش*rw97lw~+d3,sV䎃n{Pq@%$5`y>gBh Ω lw 0TZ05|,{x|K?!5g|,*~?]}"-19y(]lcr<ׄ&$maKR/=_6S%TDD`.ZOmj%wWPj>sԆ=jCy,A2Y@.!{ Bnhs {VXm:Bx}j4 5i+[Y&l/~:ڱRU.5.ݯMϟ _=}߹ DCU1*wJT#s7.-DaM)M7tM7Ǐ_nݺu` L={$_7k~=ETC{ OkyaY+%u!ք@CIr円V.!7?8Eccccc/8jԨQFAnnnnn{~- 4Ϩ(]8F8A2eʔ)8qƍ7߯BxtȌ1bĈK\R^/;g #ZϺv(Ʊ`WepԄ\[un3sXvZV@aF!YPFp\zr̫'K^~r_0wl1O ZMytiPڃ#YL[Օ-~.!gi&sRIѝ[ޭ-8w׭pѲzuh }]$Tqgj>ό6wݚ&jNE8u%袼.{WB\߯mz}H(YdJJV1Ǻze>>Ρ~uo+t۲K-?noٱv&&{dmΞOd˾ ?BnR/0ϽͰv}ރz5 bSHx5/߳<9A\8) B$&<5׆ m֥@iA|wRO@ʔSN[3!H:Ұma+& OYg@3B3wo^77E;VⅨTM ;GT%PS=nB1u Y{,@MЎ8!uE_zhgXs?C}:I#кk@xH>:?~ ̰X4zyÇi  KCP6aԳ< <}oPaHL?BoOM82v'We+# qڲB55c֌Y3_~yK{mvm` @ 7;/b&]s|RکVGMؕzMaP/ߪUVZmڴiӦw޽{ٳgϞ=|~-if+5TkQku.˝FZ$tR+ MZ?_*\3شUw륰|C&: %]wOOZt`^_Qj)jNup=haɮQw i AQrx,쓐)7 *|_\k]c`S c2 9-{ԅ>hҼyF&P+9GԽ@i|\8 ppr;I _ Iտ#;wm~U{ʃ-`tA/ψ1Yz+PȚK2 qHnoPu2݁9DkAHOV9PqO0QJ|b:5 h/X D+"?%ǩn5OvhT@u ow=8VFs?+OA$GK*y+t͡_ [=`=lq+}XQP컦SPPgdivڵkv`DC˗P#3@<.7e((82qdbsE-}!%AN /~5LN>ެ?߯ A5jԨ>{짟~駟tgK/K/~]vڵ+3f̘177C 2dHpիW^Fzիk֬Yfț?4!N&/VXKIIIIIwiEX0l%"-_rn+XioV}aȥkp}bǙh˃!@Ϊ]9 c l\pj:?}啇cO;r-$> #Cv\%#Uo@{쵐aڪ&OQz{fc̗vfה<IMR>}[Q%! 74KHӝeZmUuGpR͟gvZ+o?WA@''ճh&>Mݯߝ9jӜ>iE̵ju q;>Kvj%yvSj^ we̙7 qa"&z,oU7 dz"Xjءu`՞u5n<0rӅ>Pj-EE=w Mt}U:czR#LԏӜu N Rt!4i:;%Zn,]l/W#tL'JZ' 3yn0W_Ue 4woMni4&~(e@t_`{ FA1[!򳰅%JiXŲ g(-m@Ënd}q0ﬓc.3\Z˗1̙3gΜ ߿~ܗ_~_a9Y; 9Ⴔ2 "c-7J[o[ov?k֬YfAr}'Y1S\|ۇU)wJR@322222;yNVׯ_^.j6f.>}=zf̘1c ]Xl6f; |]wu]oo_/uԩSnq_|_СCH>{{648>n*[#>:d;:m!~hS[զ6@ro ܅X̓~lU4{`Wgpol r#suNQ3aͽR)] Z+Ͳ;,F'@!E›i.TR!At;$oǛ[0|sfHKM1yP~P9K]lnh۷*ݹtPS*gճ^%TENծ{0Q= @u?wu\KQ\۴(x>W3I!{LfrPgvCVH4hXxɳ+BV|Fl3d>'op[]':oSacqڪ wIr _&ܳz͆f=96Q '_|~;lN_{!ⅰu5>rYԓp y898 9 du-t^ƗekE.xa{r>(W>x}3AyRNiiX$;Իީ$t&QC@)} g!.Kk!,jJ>6PP빿Vw7]}s* ͞рk[ =D"3AX*߬ 6"7W%!Q&N 3TO+O ͜B}-zw"zW,M9tǖ}_cߊ *?-UJ 1W+؞$gQM^)C *TNNXtYjnd;-Gq|֑׋i~wpU쬵bg< 5(|RG}[H}fuۉSTg(r 5R[4mzBZ\]6NGiDhr6a[38GՁ!{p9-ƵYC\0璱_/YYY9[ ̿C c\pvf<#+ߖ%׵Ѕ.Qt};>h3P;{QaމH;y"_6mݖ.f/&ڻmh8H;!–6;%7cSww_1%“KX*rYXg ><>qbu程9YRc# ٖ炢~c6 OB]P+e*pq~UmNn\Nk#;*/WlԭN:tUlδ˅zTݦ KG2l( j3"V|2X5GxH'f?oLBv/A{ +7C?vq}n7"2Gh}G}<3<[ C ۱7slk\XS󸮂ʎ9 oms>&vBtBrgi3:t!O(UNdezjT]0gϞ={iWVZjŇ$\.u}+VX:dSrʕ+=Fh346Ms].4AiӦM6k-O?O?}w06c!a㛶mF%kl(1J 4] pA[Caǃ2%+a?Fn1 a̴ˍo'@L~bjAdXD\ԦK_ 8dpώa`7o%,N"* mri%p mŅxM*啂ZL__>3hq] %cЯߵNSn!%nGZQHJ nZ ']ԡBUOdغl+PsyC7xxn~yA_ďYe; z;C -p>-$so?鰲"z+xGv On{ǧ圃D[fWfo"dž}4tȺu@q ]ʍi1}* l2.s`o0^ɻ'C{'DM0-1Ud v V)w+زmمiPf͚5kǏ?}۷/ZüYK }SA # rut) I>7rm1KWAV/"ޚn7sK7̙3g{%u3́cj~.7oÆ 6l88uԩSӧO> <^:rZ] $Z I*=l.)a<ͯ9g<S%U?_s ]gsGgS.u%ȝuNzg;H23- 6/!]w!|R=Ś)-ȹC"B*קj:P}|! wDGL RR5o)u{`o7q8|~#ەҖ48.?O-mY5jAcԝa&K)-ؓGxB~7DND;(BZ_fsE}??њQH )Jaն_\~Ciq@ wj]$~iAFdW y|2ķ ~7(+d¥2^M AHMB0%K5͏O>;c/Jyy\~X~5?\09ʇ^?FwzH7d^ |vGU7aNb 0`(p#R*oa'-KҁJzҺvڵkB-k:>ǟCJv u˔z ~w}-[lٲ;Ϡov0͎;vء+(;scuY PCcW Qr[ءoy|)ѻK *4XkgX9ׁ!։-ʀWB)ܱvYX>n"P\&J}ׯN5Mxrx(n甤GB^ܯ5g$wju]͠ͶHv{J^[9c]3؄dܳ7s 8AOA2ckTä]kvR&p;7J5-sߑ#M&(y获Gx nopdPZC*{hN"x!V-{x u$(t}ۇe ◦ 4ZNW)"B2$ǭ= e+)A&GRcAIQ^TcBUPFUnS?B(nw}UۅmmP)@AcWӴ[c \.&Jit9Ԅ;=g .{p疨}}g qbdm ɸu9NÜByzBhVtF-`: #P䝭E,jw=~t<,u|9?* &xZ 0S똼j򡭭M]]edgdw,@!v@\c܋\šCKX "d}`\ho[q6<P8mC^T-3P%,??#F1YpB(ܕ"5V)) $_*p$'u,BqwZa}4iҤI_|>WֹsΝ;_š`3.ɲ;{/74SJE%s],'ʹ'NWqA],mjQãJ| ![j]9XiO(SYJM+5`Wepd\f(iP="uE_LTib.vqEi> \! 
~ΰ|f-^q;w 6VT#z~ZNݳҦ u`[z7ޯ Zk5|~*ϬЩ3rC\PGzwLdc}$+U S{s<]՞ )R jl,4Wk# +nywЇ'%)AEA:jYmBĕ(wֆӻ[]s+9Ғ 1B+PZb`Z gP|[H5A:+)>B&pkKs)~h[8}_֢Ы~V{SqU-ԋd?xiЬNTňZe^L:uԩ0hРA<Ԟ={3U^u-ov=T^[}g=Qξ ,!]Ωr䛳ʀTJIxDž&Da7fGs\) ; ,3lK^_Bt>XƟИvq:~,y =H*jY,N@8P5ϺZ8/aL19X|?Mq͊RAKL-[w+=M-&H%Ÿhjyl u p>š[aɉ|}ݏ@:tnɻ}K?@\^{w*I(g7p gVuRzV(L[r[ݷOz5Ky  ; ,3u w -A8J(aWZq;(]\{\_ߦQXbT\B[+GS .aNi4t sVcԁYt aA2C*& |ݡ8wZOA̵&i-bfV4ӭHHs;kCOKKfjx<_/iN6% 2|5H=bG{.l@d+8t ]snp: >5_q9|V=%o{Gal]ĎkhSq8`Z׹Nhz5xT9hײoyPe: vW!Bwuir?b{^P",|G(6;|ӿrYEe5+} W M Yy#eӗN^E33MСVz;C. 댧zꩧO>O<ҳ>|aRJ*U]!\6Utҕ[7<P.l].'J/d-[pf,dxz!(3AmOv?mA1Uۏݼ3B> b)<\dukcypkq?ᣈ?gi!Ѕ| zQ4\i@|078ǛwgJ l>˜X nq|Cb~WM,j[M]̛Y oe3M۠ӽvvڕ: u^jW/ 3 ir3c*:j%@X%*) >@yH0M )3<;ށ$t.HQn)`EE$A(wPJ!@hғ͖sgfgwI(=;sy)R/r3~ \ 1t:ioh?MײHl~(eݸS@3+ߌD x,UF)\E(5Nz[ ~t=bߕBJդ]<x ηn @\,4,3Hܫ;6_ͺi]՗PeJ>UvDsr'?`[p|, Awq]n~mk<@𤰓o ߯'@F^ sR0d0ZCo8tġ! ׳~˛@_~;xbL?D;#kҸzk˻ Xf[|M* %kPRyͿ}.svq'ЫWÇ3HR/+%ؼOcwGAN g՝ {>ڟҕ慑w*׀sol8?Rj9/$ˀʻ4,YyQ;3/-Wkk ѹ}dp,Yj >J㸑 2ÐivtOm΄ aQfbu׹SX2sn5t6@anf9VJd{$&OD(@.&z$Fd+ 8`QzݶtߙUSՂ!ϊSŦ' F@MMM h|hHY@U*k|DACjkiN'̴=I+9H2#H :E[Zi%Jx? S_U9[2R(J2TOX;'BP!"٨XT4Z*@#uYD {cv;ġ(X'Selҍ+q`u|*UvlO~CHq0b39+ |q17E f}wk=Vm>U2͘eGJ&m%@`8|/vYSkG~kTZ> x otjc _po@Đ?iH<%Z Wc:g%ؼOcS1cڸ7oMAcZ*ț[{@RȻbE hO@@Ui=.r`@ڇgi,P8u{V^֏LX"~=wPG-3p^S`IMTgS5*Џ 0 9UZ#9ʜs_{{ ҴяY/pؙxRa z)H')_JD[#a@B}.0bZ&N[ ʹ$8+h~r릪tSF( _ghLt^b@:F &0+k6J$Ѡx4(HÔ5MkEՏ ׏3,K' pd ePpk0oGVtwYu O#Lo/g ϶Ev+xB<!@ gȿ\aAuնfwzU ~=(SSܪl)i(mV(37dmʱ?wF}W:u c,ϼNwUy%݁‰y,-wV] Y@$Kl98?rЙK?[NDy /+? oLuZi9``;Puřu@NKWe*^+'/v }G}ӑ(=U޳t8^5SΞ~@OXp WU)JekN.gŕ6F&s%uubn..-\QqzƬ4m@S՚(bkڮl >N֭tcV5\R2ظ"c8k }ZcM ᥚGH?L'>w t7@hO0d,\r0vI@nRTb!h)Y&gPPS ~M:0˘܌dIB@ZB0%W$d3]hN()Yzv%1#_˱ ͅ: m aLUhF  ӱL.ֶ>W(g͔]l82#(WQMʮ#U YR$):T˺ݑc{(:gInP0kzt|< w#Ǖut&o V~*a~\Qݫ!N5N橶fr+7=/!C'%}rSg٠\߸RZP ~ݯ&=y"Dn@*qɹ~r,؟{ς9,>B Ӊ-$I<4{m~;o-d ?+6u*}*~!Atd Hޫ֫uO#mB>ʝs#O^\^iy wBkhZLZ|J:\^lA dQBv/ BKyÃRMݾ62*= o$<)So/S\ys|]lqczCU)D|ag/μ>{W:Tu=eGUwh8R$ddįLL~*gB* R+)dR sLeѠ )3K1L2'WxU8t`+-8BF\\IwyyFw%S.HbÔr$d[Sc"R?놟$;[=3"LNfgBWxqBUr+2^l$+0L_@pK7s 2bϵMrmHɃa}#I c? 1\zƛwg#BhWE7\nVZb/Pna_do m;z?9}*QUK`S >;@-7VlxX'^mtt1v T.][?lftwv}Rd\KOR ݖm?~*znl8`nT7>}}m 4 1ࣰ߀ -CE)wODJ#Ƞ@>;=i {(,J@Yzr)=,mq@ӹ/8VO@#TW޳!#0Z }yZq5{V^xBA?,ǖ=s]"I6ZC5ˣ2# e) 4{P5it#yS%s,4eY̚K8a'gԖc4SWt1*a2(׉ͳϠB" HA0qN!8E24ڎ+Xo%zDbu5_r8|"VNíǓPvYt Sa^ iPԫ»a:\z0G@Pj=]},Ef}ܔC2QfȌCR[g6쓮#J>:8. ? atO9 ~͵}>0!OK> ׺)&ㄓI<@dea9N nb#pg٥IJxIb>sOĸӶ•= NRϋfъO0`~P8xEiIRۂ߬j " /x 6/8ʚǁ3W;A]%ͼ578*_[sP2,?_+)  LŞ#=>.i99[Z_s5+YyN@֔kH?}#m_oGCyOim}(TDu[Zm.axFMwWfp6})~;M)K >okٿ]d ]-B&g 4HRMXL" G#k4w?@@"9 mҫM~_ ի@ϖS: ,4 譁zpeqMr0"K4Sl3pHZE3r 9L.$%hXf؊lVFЈyW}fEiKd$5#O8̭E9J‡(V"dCVgB>hsMW;⓾+ݖ%,4I:+Hda Z3XegG(lKd*X`ݰ\7LRxI0a?L t)u;6z컲[2MVs Y曑2H5"}Cy#3kf 7|f`vʨ@ mQ>8ˎd[] DᱼQ$: 0PᇐѻPOsP©CUP7@ǫՏfy`?WϜ/xYza Sbn\޳ؖn o˱\|8V'U 흃o2 ,9g@jϪ[P޳sN= -8e-#pO`:F^fPx$ʸOmMݖJ6e}ՕH3(:$0D#֜=?;sԅeּjk.Spr+z3K-|?]׺,@bT(yKJ!bO=1 SMb%& ՚JOeӥT>N \DV Ka{Zzpi@Pff?>\gmtvϷ+miqy+P齨2M͒)\R͒W_$R(h\FuFU`ZFЛ<;MUq)A-Wp+<&퐞-@AC``o!YJī Sw/yr?% _ M~y2 %F!x*% >81/}y'h[+'QT[ח8'~b|85^-'Y&|#LGLC6isx;K ;b2vZZv @}2^N`Z>Sj/-0µ\~!L_6K X4,xقAΓ6Hݵvu1kK% R W6df4+\{Ot׭#:T\92.ҒEd,ؙ5@2Hhfɷ͏ 6/N*23})n]x@pǀ_C= skK_%m,y~֠'ZRd6=[/r*fw2ljV30g{v^r'Z^%If<׾%׭5"IyUJ>-ZT]zjYqeX p?sH{ q22Mq.HZ{@Y<' Og`/fc2Ln1em)ـmV$ !\#F24ew8 #Q$3bgoLS)D LƯ80DӡdZP-:BI=QCy֡`{Ro79a: 5D_nf*2!(ʸ=[W%7gk(́)%+B8?0.h3`afRʉkcWi)YBMrTZR Q#q@%+nSK# p&F+a(8_phŲ]-/-H_3Ck cTQ5y&$0 g2  L)MTHj-I}L3iz"tQ,4WdD:F: QzU)9J1bHnά$ND֑/Y1U+2"'xUDz2Rw)P,̂ʕd\٤6sEr`VjUΏ$(YhNSz)$"<^%/wW,Ɣj L>3lߜέ$N eA*EVN4A>TgAgu.pVWOJYThIe#\u&n3 }:zW3悔 @u:T5אjDu=b/ ?v;1N f9J5 ŚBōe%S 0*%ql0a'M#/(5gF^_Qs?. [ȨoB`m( H '>{N NjzY? 
#߀j@T0k PD{'g\I;sڽY_@f^l^tж+!^-k/Y}u԰D_z~  ,)Hx[-?o0^(d}p7RVW ]{vw.}m6Ëtt~[bߐ=zUHy61җe^]?u)J'uqX6A-FB.൉Yp6w~* BthRL3ƴfrfe*dR% _$v2L% REaY d*e2G[0FF[ L`[P*hr!靭yBB*{',4:0X&u+j+T4 b 2Ni+ȉTXɈ<=q 0Z%dVK8es5 7Sq"!E)[ h4 | Lϰ ԔY%klg,k1P@R 3Aobg0US9U5ŊQV-1lgjyiKcƢ^gTy/gJF_odðNOJ[ ?b[h;\zx˽WA#l)l9 ߅P:37hnF+_kԲ!p*4rYnCg3w)n9+s1?~;Щ_c}|긙jg튒j'EM8^Xm1S_v!k .o.5z@ϻf,=;/pow|0$ҾgVKgOuG:<%^yzpܫ;(B) 3Dڧ_2|\ʹsk6>'œ3ˌT,P[m^x(aԖC9g$uj *'rbm[-v}<3_~\I aY88Q&ZLFB?14\ƈ8XcětDZ!g3)0bNH:6!Uh BVz:XF9TOEaׇ*ٟRwOim$FjYhX&t\d`#PԔoek%[6Hn7]''[;8֛YUyJ9&I2w2{] J~|a&]ĐSsq$;]%c-[y)['3ԖP^V¿Gw]8S1Pm# ke\X^r.ZAQ{S[FjySG=|Ы@R`9 l8huZSIN+i6Va!.p jܲ)dL5il, ܺ©oέ4oESQ:[AkNqQ~jlGT|y'':ĵO~O+ X06 hC88\p)?ooZ@ ِ/tH;lP4F?-?x0=+E/ſ^w~aFFcT[ƹ=9[ q2>Q30)'@>{^*ۥaVIY TBd $(:d_yEΕfŽs@I^+T2q,jA),x-q::K>qK\4eXi?;+8־#p9{c>l.C_h )& dǦZB=Ɠ ԅ4D@R5*ZYl2 Tf-\Um$:04 tpnk Ŭ, GM҇sKݪX*ud(03 hoHʼn`M4du%("EI%ʱ4.Bi4G%T#nuZnkCNBUɕtÉ'ƕo8Z2/ ĤV7*~ X!>3d Nv:VauQ2r,?&F !Q+n5U""YQBժʿwv $>Tu`cbb}3[N<xhN4=e~/kIi$ZR~fbokY˦p g&U,یd^AvIVɲ촵LUl kipU++&`SmG#qk=_,? %t>@5:zWtЧ&sZcn-D ΉjժV~p.͙b@&y%ؼ#p0L)@Q5%hU=ۈdd~ \^~X.G}@X\pp+FV@>BJ:{[*NTV n꽫7^/~+. ؤU bF'f߹BQp׷xjp[LΖ.ǛvN!U3Ni#GQrVPƯKNutpҫgl׮MăS:*O4 Z{t>@SZ"BJR[1MmT &ҕ׬=0(yfx\:"@K4%RĮW֗۲o l lP0!&ZMȄownچQbPfb>L=ZH=W2z pa{/AA$0 躱3}?y@8*n[oIm,,-bUXYbs遰@,SqaFie ZϷ^!x(I@÷{'|?UwƐ;Ou=>|>c'z \`,>PX^/Y, T m\9 2n\HKJ0T @Uޫׇ"SOxMnk\޳})h,WD\knQ/\A^x*}gu‘a-WV56P*Ah,drNzSpbK49[\Λo*Rj0SP-VSe~lQ55~}l]lq֗wB W"τ_y@=OxhRpbNlJʫSS7P#b[38 H;۰[#0Y`@2˩Ppd䘐d `T@#4YSȩnpV# ҭ'3(3$Z'29E2DPr*7W4e'VIpTjTej]bd@%P~=u'KS.OPJtV*>']ImL1USX#r- ddPЁO#zL4%#rT;J{q4 m{]er 'WQuF]+?n'q1dߔf;L8&^lrxG/YG-1]4# ם6o~1Z? VL}Qw[RK6V zbY4XuÀFIݹ%BmkuSמ-l?n4Z1ΖQ͂i8n"Ϸ^\GIד6]8u U8w62GW D$"lA"? 䭠~ZY^9OP}߼$:Ingay^ ^wRV]J8\xDd `inPH+?sK {glP4 EY!L`ʱ)4MB0XӼ=YQ!"UjUO86S}qB&2\6+iD7 Rv|l7mqEze=? yq` q~Q^;+-Ko%[|VlľBo+{#8QD15֔dvOĔU4[]P:^ kӖVv'͆/)48اt;EbKje tQ@h!@2u?Fm@JMklwW //~kO&'W]>[Ab,B]!Ok݀K}v] 8HZ?‚C+|g,qH >:y|-GЛ@EGŷ*(Y9ERMF}ؚfcFPwRWH5)MJ#TҢ)cM۞3ܷ2tYheyJt!N'Olc@j?C2Wa@B9#c5+񬔀&Q LA;@(բ Sa<##0eirpq~&3K)kTn &O'ZK)r S|Tb)%3A4 B4NXG}x5Lmt!v[KlFgN rB1˛bA֠duRCƘ|-(R? Ler1\7ԛe1e툵{fdߕK|b>SjCn]Ԕu:2T+2;Mzk6S{ٌLh$чTS+/:n9P_ntw^2h߲k۶llq<[N mJ[q' ,wųlwg>|j]YW]/5:Y9xƩ3gMBFTlbW&T%ZS8v& i?[co7jaiR48*h^7@ x^ Ё 1mSdYZ [5U(yֶ8+) y0 Y TmS1&@("an4PbDn&@j-ZjxᵈzqGtҷ-8{VeHJǯ%; Km'!WKz-=;_`o u<7 ҷ9[62rhZbbㄖF(LZ@t+m3l&XS3Ҝ׃Ε@u'{-9YЅOZGyt #ndqš!BE+h)!!@cH=@)Ϧ)0)vd0pL`5i%Mf8FsÎ'5r ߞYNi\e)D5~ƖJb*5aJR[HEmuU=Ô\_vWFCn fH}^V>TKzi&4J7 Xjt޶Z-р0X)ZsdJ*- "݉'*D@~;hߋK}_z3]#BUk'>5Ěv0kt-YC`M2E;)? GmĚ GUKFj-d}~f咵zvjN2WoT-Y'YjhIu7ɢɡoQE՟ke<}[荜F-bҊ~I9jW,u5G3B_v?~om Qq`*}\μvTYs@"#S)ǧW /͋;W%_V6 ֪լP'hk:Pj \zmDžv-^[(~}|wgEyna>l>D΍:U޳sPsOo/jP|6El*CKͶm2܍2+^Sh-̂Ƴ[O=+׌l̃k"ޯq@ 8Zjc/YпԊ2g@#BKw1^He 4y<_!Ԛ6Q&f 7a8<:#hBf $WH&p\J=-3TMtϯ2CgalΖAPmkUgA/:HeVLc$#VLu-hԛ)T/OiT-rSd xՑ:@d[:^|Z>#|N_3GwrB[Q]ojKBQ,Ө$Z^NӜGUlj:TWb4͑HY.gG_lKG٨(1kOO6甡@-CRd~TN5췟Ldm}Q-?oLfsyd2aٔXG|௶rp֞9%3v\|=9cB|EOxLZBO 6wղg啦(s_>[⦲)ȌJ5cɁ6sWڕ4+SAܘ"Nw%\x2ge,B>Z{ @;B_>۠蠠@Qn7s.>iQanlysT=B{md_ge`⎀)立u,r.-YB>.W>F;.|}ܲB_8lE@N4ևl?$RѷߴF-+VJ0vbҙilmesn*͊L̃[>K!д8!-g|-F!(Ts(f!Y׮L |3vݝԤT&چ,#3E'x w !9`nDXy7 GrSfUhϬNрߴ{Y',-Í*5rM81H1(s:gag);B-B}} t 1唢$Rf)5oϭ*! "ĽQ#nrg! q4' ɉ1[DUM!8Ag>2(\ޒőB 0GU+~NFDWV^,@;_gZ*Cwiy ETS*ĸ2J9ɾ^>[q7=+߶]V=wŗ+>(>py*%PI0/J֜f5w]{V!i R|q gM+EY~%ʇA~򀞨q&VCr׎Y Gpʕ .2zS#-6[Vک=h5nJ|֞z}^d0ů=@bO{@ckTtae;BSo. 4r]钡k/hcTOruBG'L<_ge`⎀OڎROjMFd]>s,(B j\!Kyq}˅g3#QzgszdC @!co+? 
6EIAU:=Oa'N`U@]ڸFBJFM̀W@(D\g-;W\txLh*y0P;O#9-)Фɫ\Ba4#b4$B9}ocOrʱJ(a)'µC׎ޙb㩭DB0M)f ڢ)ʷGf\](XXD[)lv}V 7^5 ~'|ޒYBE#4҂{< N[}OuZɃfU%nU iyS־ >Wq%ۊG~5jѥ@_vAk.DWm;'PUtnzc@bF{m݌]Cw"?gzTz2*4b+V۸ 2|8AFFT 04*nHcNί4^i {~f{7>^#bqX?G@B`xg i}>yHFĜf;BowOjTqwUDRB=+AW~G%BA@`_HЦt\e*Bw_/Psxy꿏_5_E[u츯͍t*֕9;ølhM5][xjs\M&?,V6` 4(q<ЬܒZ:9ĭ\&bfBC}{$W>qFU֍a-`!8ŅQŊҗRf~Z͖G5ZQ5MJ[r8+8uJf~e"@)xZCR뮵NrЦI- _':wiӗ^p3櫭3Ѯ}r xi 7-xCH$_9zhs~FllPKg~nTCY=2#~qgr$ W\˨FC>]0;R '}̶͑)n\3g3Ǐ˲ZNaߤə?@(f)rQр03r`Gu5[[#@1 $ݾ%;WR> 1@!*[9=d ~wS]v? +)y%ؼףC5.D[EƢkk@PR}&CjyodW|-/5sCy 9=. Azz ڿG}єd!{нAJo9̵_ZظzW vFl۱wŅ.ps7kH4%N’HF,99df%|,qL8Ƭ@Xsl34լl@ȲτTjŨ4n u] kc͈/ ` 5u#cYiZ0S&̈2a,kXmR+ܐ>AGmQ-[<34"HN%߳b9ĜbuL: WS 9{γm,?؆&`e4W+'(& =LĄ>v"n<)M= .=ີYi7GyZ![L'˶7|y4+H#/^'7lrboa.BE\:B1м-@a\ђA@!ۢ#*{/33+Á̶>zmt(fQ.‹k⎂ HŎ7⻭=_3s$^b͋i1 ȯytyGԟ[2^jܔl)"=VVpK8Ěz%guĚul5CvQK4kJ0+8< t¼y]4\cUi@mm-,z 4>L^`8&*;Y7MM"Y;F=15Fqbm4\j2׸˕q@2Ub9LI&rR< Ь\9xe) 5a< |\'k%9@76NA\CD3(㦢t! 5~ҝNlS#ָLJB9+ŒGNO8_X! \ԕXSh. M~LAQĞn>6(sLӕh|]qSX[% /{ X=`60qc_[pij U5}=ԖO 5uY=rBUx-jbd)hΫ|uoo%b'@D)@~:˧2z*;+5kMlc5 VBp,2ϡ-D\,զe魡ߒ`~&4b-乙', Ě/{~b'tg.+5W|^͟1L{I)~X͋; 2Oۺ%1@\o#H;l` *9Rdyqg#$B Kz5ڳ(dz@qٛX-}h@n^{Ѳ Hܕ[X3x֚b+5 B$Q'X{ φ\)zz׀>{tjv~YȫG\Y %SO| Ϲeg\d@ȳ)8Eri HqjC{RbpFŚ~)ڜ'KVNۇI;4bS6PpdJYijJ:q?Sik$8;x;mQ^ +i׳u&>4bʓ"MyRfQGF SheePy `o1l$AQ\3ԸBH&Ne{:^vJF\ᔂE󷼿k,D3pA{wގX Otm%3uhHgn-- duW+1e9ZaVd HNyԣF%[G#kSW}@b*~Ǭ1Ui3n5*7b"؈/SQX;yijj4͢hƲoUg\]sr% |Sk[=@ܴ"`w 0Z͟B>ϫEv}c!ْ]kh߻o͢5߼WM#WZg~cly :R+ ] ,)‹`⎂X<W:tڲtG7i 8y[Tx-k!^b͋[Z29eyqSfn!@cMX:ơ)ǔ_~hRW:3NbqvY'pFq#v ʳ8FrH.ۏe?Κp@FrǓ0ڱo/űNjPv 0+il<hr 7Z#4ex2txT a 6+ NPXNk>Kɕj֔y[t`{9iuHw>+7>+BuR E^qb5SpsJ ]}ѰJ:쟳g"}w,j zD~bYY!XR5p}[]:Sv\ߤ˜s*l9:2 L?8Tz2hQl{.hXdAC=B]8$S,F8VRPr\,$)\ӗ%ܬRM&MIV8v^Nre ZY҂/u|1`ݛZ>}:>|q>q/oi`w63a{v>YpOV $ JT<rW}B䮙~\/;UyqG)Ic"z!PШV@pP;!9^b͋ ӣkM۲t>{6#uJSbpeiP45&Tefg-n2\GpRmy;q `Z3>  kTh Ҁl3NJ3?1k'$Wω5c^):N0" S83k%U(St1Ԕ) :( X)ߊ[4$9yǍYBLc'4Β̡3mxd v4\FغpcGf!qR5[~y@"#lT_ܞXM/98^ۆd@QpIh8F9:gA'}}LM z)]5p,A.~gjf'~Uc)E_|L{g~ (Z`XjYF5Lƽ=@͔VGHӗP1I0{UɜSn5f rU/bblcƣK:1 [8d4Z&Spb[o7~)et:抄( %fTqd,sMӔkkQƕ`JV[wgED Ιd$Sщ|b} Z m.ڞ(L.y}$`zU|Fl%ؼ`^/go@ESA@VM~} ދ;naR>&o=lǷLv>D:RNӃZi((*#t!߀T0ZPK+EЬ\|c-n>Խo-2F:4:P{iϘTh&D-|9CpFs}qLh0)h<44#cռ e'vbLQ&-gIrO?֏)8ҍhĠ )x\xb'a<#ب&x!arjd i+j;ߓnc!Y3 V- zzŕ3 A`$?/gZ(t*J'y#oMt߯FDl1߅ޢ@bSԵ׮s|Yt~AYg l]l-w3w];G] z.x֧^Z=A!):l<[ԪW|)Μ8NT1y''Q2Z9rPzꈯUQXx9sfZfa8gz+ؾ_j w-I5ѝSF 8^΄˯{*Z6S2וӬF"lDgP'HrCUJKR@\ScLF?.N/}[sW< ?>ᅡ`2Țzv^dk|{^xqcZD%e-uϞ@1rV>|oC=,.[-9Ov4[5zf ~^/k(.zV>e^0vjI^ECH2 (CMOWb5{CvTrcq+w( I[k"X94h<+#O=n咶,5= pY,!#k]sƸRh%cB+h8y'ՌD o%9* g2d".U(h5ү4Ez:Pqy1+ep^s4R+0luT! s{h ̴D[DxMG4Dzi "Ѭe,uE~BLC}F@@`@dzY@[Y]e &5ՠsFg[e;Wj3Fݏwj?WD?dрGݭj`̐M>]xJk֪|걼B@SZR 6d6cԀՖ}ϚS!$ύNg Fa,'0ZFyE"vs>y X߳]ucOA!rCq=X?K.j9t#VMN̒  l,Z#gx6>z,2$>#h\Ys}׉T!ŀ\IVk]Mkw[ o[m)AqQH _p "Y[mH ׼@~pC@QiFMUz^K%ؼpM.ao8m%plMث@J-o}uN rM?ګy}^WQ2)Oio ZZWi.=?9֩u̻'5J7pph7 ni".g:˗l[n%u~7c{Hs2=qWc^~xN pI@v:ikM^RMP+ bRG:z0ԅ8zu@mQ-}Ί3F"l7zPM@^ xy, :Vhq<nN~]P%?ShL; ݜ_  }~⃣]HW_xVDCvr&YFl*? 
>W:d2Eڼh.D])h4Q8F_,FL5vVc)KD) 4fB˖)L=n876QFdZ4[iJ2R4vR* 5o|^HGgX U69v=WXCt+G9?bY;*a?it,SOZ#}8'_LGe[tQV֎<~@@-G}@zCZ.= W:@i,Dؔ@"=ƗxEൈzqG'M׶w6 ɱr#YK|Ě&BZ(`sy{?ߥ{:c-O.ЬRJ&:̓Y4% )=->St ʉ*)@e:/u)ֈ0O?hiAG:^pl-3ѹALj\FhJT뤳eR%ilYMYggylU5N1_'|pQ6j <6z81ll5vnU ح}҉,ƶPndyX,7iZddۗZ9=Q 59ürI$, ^n@hҖj1mc7a>;kyk'^^XZ޹`Ͷ'[O'urm 'G̬/6j)9på}g>R\|T1p)#k'\|&N}~nZO̩ˁ \z> <#׀ׯ~6Z_ ݼ_E((I7+a0Sv>Eb |Ȍ@vq@ Q4L&pVIu(ZoqoWB{ضȹ55UVP!5T,q3OP5`8lz4_b>k/밙S@jR2E6}B(kiߘ"I6̈:Em+@o:IT2_siZ-l߼?+pq8pd趯Esd5VI窿,RiS2rzHQmXB{YlZ'2DWq%F1Jvr"0+(̊KX:YA9AǕiLiFidekj2EH6j+{įLt±KrVq%0 0EEpdJSl?nAe B*8f?B[>_qtҞBSsCl.D4KK+Y4 ĮůD(^ٶT7)Ɲ%}ZrE ,KvdouT`ӚlH[yi`OFA f#:Y8Uhy߮ >g%qJ~1/(ۮc E:CYKyB-ϨvRMSĺ@ˠ`U 7 !U 礘ӵ{ˍc) f NQ-ꞥ5 %L)WV'/5 I3[;CO[U cvgoc_]1xanx ŧ  eɏR0 Yb@OtTޓWʏLy@7ɇ2@JHK\i B!z@xK'$L~a> _a yQYGBC@ &kSe1, V2?doMb8|``6}[6|96ҁ{r: *Wf4]Lkopx 6/(!ـ /[ TIR0ݭbϪo|< yYk! ? Azq@(G*"ና*!͂z]-j,{v5{h=3њ+3n*H'@AL(\U ˵sO Eb-(nT[, qBL9jpx62)E6)BK)5w\Ʋ4{l\%`Yn f*v>Yf)4>O-:L7^B3?#m<4[-ϧɀ80ea/tB*;KHU5!gi8`:M(FH} 5g۵k# r|]B〒ZwmCWs[L(gBOϙk7)=s(*É:f|oD bu6<e*DԺp8Toz|K#<jeJ9~cl労1MǽeUo)UUi slgiwvC$I?hm~}BiMXFc#ē! @xVQyvjӈYe5 M|v  #*>=pݏVB#jcg)[! HTБ9t߱c@گ_dߗeXHq'y`{[6#* ,n ̨l@F1ՀBnv{u'|`µkE~XMZ!đa0t* ^#?jrqs?&, -`Z-IDATB2YNڔ}6qb ;)۫р_I}@fk%W WjPRy /KyqGb+Iہ -{FVPIے{%@9r Uؼ^gŝE~ɀoNX o"rXW{v^G b= /ZH՗֒7ݮҁC:zN+CwU%G==0aϨg'U5Jm WL3c^0嗞8d vpE'¸O|DZ&WrǑW)StiVUPe $FykYm+٘RejB*' YL38h֍giWکe T:a?)_FB%@$ץ 02 G=Es]=n5(99@lϯz.FXB'*vORSDQ#̳Pskg7l:G;FYZhٺvv5_Fzs62 J<,2c{M]84VQb3JP9+d=7J s9,S}X+ϸe. J x(_x&つ_O~HG59_;,lZ`5+n$˭ Ɣb(e̖SfqmQP3yZHKK˧N\ 䋧]}dmF@t m >IW~0`Fu`5V|+5n୷>嗁u4](,,((*mLf|XhѰymmx'jmH!CۇcQ`1KF/[QXa s7t޶'`Z/0 Д&?}4bk[ 伟3\i|Lq BO~`F&l=w$x̾v<w0p+P[:ʾQ@u7İG*V|JL Δyq"sj#PGOPտS6ymV%g3@[iVc~KN EK2׽$5W[@ݣ![CGKO YԱ@è?!47U(DSc vrE'S~pSb!{ ` Sl6n7M)%xkL4pcFX 62VBf4*jj^ d+84q_sl{F2x)Pw1˧z;*B\ 98zJkdd03H#@fid-p~k>KB}NtNHWșSS^ſZ-Uހ0#ŞXߊPЕx".]k,58GWm97h6pϣ/lW'<XD]k<{)\[T(%̓"gv.Wf+7y"n]TWcug sWX F? FLhe)9Ьf 8JI>rgQ C2z^)\[0(ydu Żf4 V pIWm⹧dR3k#5ׄ 5jԮ]*`,|[,%Aֶgn;[]hpѰs=7?P@v^ dhOA5́'!u:*ڰJDy /%ؼ#ɀBF@݋~o_55!@尰Ȭ>+/t9;  z<8PQavPq~};"E }#2:^HNݱV9Vy`5< {8WOX]=.]6wuxI.)xCo$c&t4u@;HYoZ1Ik eQ2Y?l/=,cM8˔ptZ0 ݦq_XBL5 q%I'h[P8ER'@]@`H@V^ɘvj1ұVmY+CnڽoIg fdyk[(?/O1 ?C2,b聨d,%UGuZE;>xtU,Νճl6u}eiu͌P4^x5da%#<u օ-g3G"l5n@ Mm}VjїR2L˼r(x=@!K b뵱GYƀ K/W޹4 b֒=FbbׯQ$IeW&O~M`J0BE!<ԩc f4Ӏ%";vKQY>ɳ״uSjjĞb!]Ŕvbg3 c<ŽU棴jYobgF0F <0]ucZ0͗cX9\>#V d[(FTHe\&͡tM+Oc:lAA@pfWIIc"Pvk쁜R@ehe1کz)W565Lm>DƼ0?Qql5qACv8#!Xf@2ʼn9ODϬsVZ~Bq}oYլrW4)'댙|ۗsudYuގVbr-9k 8p\?edޝ.k=p%.@~/آ⧀Zl%d@~PIXfO% jx%=yi\mVZ'%wK2ho\vJP_x, ֧VCYwJ&( ټݓA$F{F0Ŗ1>4avOQ7%myHt dRrq̱U.c݉\y@@Oj)KJ+u#a+#x{@m=Uol<2VO%F 'g9qԕ)x;Q&vS,LT[UrHH)=@8o+@` !k1d`]e8.K_[{3b sKqn<"Հ^xL?[ o3:߽`Ύ,Ι[ܢI(+J,Sfjyl4!] 
IOQmc|C8yQ92xx*Op?J7Q3<%sISȽwI\&{%eW[RoY4ܴw7 |K?w];8LC()Xl6`hԨiz8}ԩXKGOx@m2@?~/"R 'C t[ T5yww_:> ~n=` _  Tz g-a!TsQfhJ<VF } LifUDZBV)m}0G/bT@H 9??=&@Hȴo;=Upaٓ}un3X٪Wͻ;ַgEnwt5FkEn{H.(1:<٨n^0x 6/HLt8Htz<ޜO{z0=҅t ".J}6^xpǥrIlL"#x 7 1;;y.b_G) V3$%(Jy Mr@o 8[Te-7>3 !@T8$s_Jk ɡat-jjˆ)N{ȂS^e4uG}߄igH 9eaj8@Ç4mڲeÆZ oyy׮„A6m:vlFGA@ޯ9{ h0rw@ TMֺr!@ WKM26ڵ$!BG  H]?XKPٶ'MI})P kiRဩ\?>z`f ͷu^`Af_F )2Y.m4$zͺ'n;0kܼwwU<cRڦqΨdk]^Á&Byw:w$ΞNwXJ|hQѣD'c[Kkх}^xጋE?GO ڶj[{V>X.; _wRT=;]at7q~Θ[Oy eo (FɫNJ5Fiz({_p!!A9{ea ygtxs)|,;8ZM& RGD-ʹA2hF2nDĝ5ϧ&l[@e}j8+IU^x{;1gpgRJdF[@ml;0y$ɫZ jɫ'yB̤3Se-K\k cϳϵJK얷?IS1D ` Uom7>- ϼx^ϖ_te(=ɺj, QJP*sϦsVPdL]jBUF/ڶK1@+,/^(0 , ؤA xsؒ<1B~XݚR&&8P6f@b).X~h޼m&MJ%M3 33==+ $-kȐ#ر{vQ4য়_z5=='|:`9`/F,MZ] fwpk1 EKw{לX^!c؉?Ys^'@tX7DL-J0GPhRͣڕYxqKyqG񬬳 m7K鈸߼PNm\܇uRfTz /qiAzӀ5ݞa ?UտAZ;X%{q]֐k\LY~:䟠/+`'SAFlʻ54)+TהUFӀOظ.sR ^zkKDh|FvRMQ]uz}U_%߁_ (8g/ /P<ʯZi?@_:N߼}ptzmo'u:[b]X\0=$ 2& < Ìjک/CJ'܌ZH4KњBԹ*BPr60}ŀyq؟+Ck6=ZUdF,b7@T: -xqt.XYnO@4D, 1shݦ<3 /_b~eY+8Z ?tqkآq ͯ0}AF2iSr)5pz:aoQXʀf$SX)sWɫY6o!l+p3Һb%) Mtıl7\csJLHZU[MZR0YJIiζЎj %.&,mw̋34Ijdc*@DA}3o}6d {pQzџdH{T ef?YD|M ΖoK5SjFV=7 5m>s ߦyGQuqM$BHA*EDTPzS& H{/B  tғݝȒª<<;s{g; CsTn!qjI .*~g\A_@)FѩCVێ* FB6^^k7n\:TVNr F#ٓ&-\7o޹ٕA-[֭ MjU"7CHݻi[{lMAB q </l#<^Q |@+"f57WBCsKS溅>Bxf^H3s)3) сT*ِ.Ek~3 i=27 t-q [FƙXE":j4RkZRΰG\xC)ɻ|4W=xS965o8-V5{_G jѷ_v_?]1E ak{=_lfT׋)4eqb:+<*dը[IllBBd$୕ I9aPKG=aa?|'B _-տ9,60}.ZI!uq7M]*tꝬEW>`!2M'eJ&aR8>7ߏI ~{NہSm',qC8龗7'I}Sޅj֭T j֬_bEeS׼yӧ/[׮]rI2~[ijՠEv6MY׬ٺ]w߷6^^uԼEz_X]#phȍ=!bVHpиx]v!o}%,n>s /(QMnqO>/d̛1h_#xj6s-nOeIUKz:me+2LRNH&A8䙖SZff\<^N~Co$1{= .Awg8\xB5,1KRSҪd֮֝|k(r D=| 9vMܖu43t錩.0h`f:?$7c[Hy[ 0yGXX[27E hLXl;#VOonzBɳozZ4S9`3R/`N^e=t/xkgH<ڞ31t̮B͍+'9?E83:+kVeXO srOxygf@ҴĕIvЮtUZ{-6 Z睟9N:'ےլ#$kDϹ Α#R@ڌ ǃ})pP8dǡ{W#-pnd,DN1Cؾha,xuq^6 ;bn͟굊olm\-: U`Sy+' 2;|?0O+p'HXqbK9bxu[w!RlUT'3{5%8$`ZWm!aÐ(NzRs9i˥L7 ͭA .vmٗV{gJ|/xgJ8W_^lϖf7ߝ4wg!ĘemiHe4Dnʐhו{:&NiׂX"2T_h)Ziw npeAH~<=OG+_Ξ=y…g3*UQL=ڶ%o߸q^ <{Uc:?U T_j-^q1z<\/ǡc(8\uvX yWt4XQ'|K8io,-[o' g O_ǜЦi=OFf mWIotm8˚$im)J]T9oڅ޹ƥj{(:OKGey !HLGw;iF`mu!~2r.y:'oI#tn^Lcթm[gwswn3|A րDmt\8t**'jY ^TGm@bAEAS4? 5Qvphؙ>ۧ?-Y}׷З+3k&kHͅ 5nUnds<83)=jSyE.]Xbƍd!ĤŖbϬGŚy2̎`O kYĞJSiiSh7yed '%dx@%ï@C+s@n.[#(I4e0XBUE47Fs Ȓ2!e Dw-!66666M?~lOJ5e ^Q@QAQ^|C E9]%8wk ViVA#.Uz^%O 9OW RG7nf-ep$+|^&s2Ş/#32@}v]r73/9>ܳyzX:^u<HMuL V lVO$=ݝ3Rݎξ9N֠`'}c"p QK^phE' jn"`ӡNΤU/OQw n{`L7;y H VP\alKNmCTj%] b]q:[\ ،rzl]CUX?v#A~ov^270Sʵ n;Ny/+,=;ͪp|ۮK`WB>~!XZ[˹3Sb뾙\@^jX>^6#-2ėf#3C`Dz}KCM,"a:+@4 Rwv-J}e"(4YWDy<B]P(@ME e0BfwFQEA`eZ{{{{;ݸqFp09w ))ii pw;OT^bL'qr>Pݓc3$OulP*J@1ZnʥI`4 4hТEZЪUv͚AuÅ?RnTzzH+)e:J?/a(?;Qzkɻ'%2̖Nf e}sl=sr, 7e}hfRQh3M殗sfZB_ޒHO3}hn3rg[$i< X٥&L*$ˮ \JY^p{Q!iO6ct.JCL,؀fA:}e#h[gJ 9I+ &ݩWs,\>t^N:b,GŴ bܚé;hWjJGOF WvgXR>Jo AVWuK&\Wx=Q.qS̒JB eJׁYCpZ:GYHzʋ/){]Dr̔org5Gۏn+}f3^n)6u4׌ ^Fe)M`RRW$m $es,]>J5)xOg"q# ~^0(ƏAw<.nJ߂ͣG W3,Ni{w. [w^q*8VˆX::ˑh 6!W"F$KI?)>x!{,R&xs*UTMOS^_ _:%4t**#amRO~󇿇?}-Vͬ[?R%5t;!xmAڶ> rty6̌ uH3Z޷~ ["ĥWwGٙb ƾFg 3̮Ϛ22 23u^7ytl#~(!$O3yX=; x.K|3Ó Ll- jDM/k=:td8aM <-Z6)5yU,1QGb֭L1PR%Y::_5iU\!H8.. 
(g)&$~ADA 99 |]jmS~W l*o5c~&p8ʿAb͝wP]~fhqVzg!YX} L_iu$<|]1&!-|Åazljtm7/9J6h Q\('*JS@)0C&c2 J2 0(a1rAU^'KM3ge &{E1 gb/aP4OHPDY'@Pa JASPPgX.TecĊ zJSAn4ͼnʌc @1y!|u^N}buwJuYbh08z昨  D #aҮ'ۇ\H_=B.N#d7BON^Zٗ2wlËǐ-f.0 5ԛPUCB?>b^`{ ̌'x톷ʄtK6^.<|e,qm׍`6(JyAOK+e-gPv〲yj^pSz5kϿ9&<2{S)f=r7#^ske9y5^i:n4lI_ɈL_ϥi͖' $&KEVA 6Լews3S@?%ތײU*Y}4Hw?˿Xf{gRHBKRmATjL p}Wydi:.bIJ#aPn c )E&)L1.+.}-,q' a` m2SƑG5rU{1#71N F?P5 Vfw*|w(z݂+ ˲gfYKG_L۱R*~4ӎ0NѦ.fK>DJP@ɠ h%JhAhT(`k`KT MԸզ P60?ltT#A\u/?_"*@PJ*ڂY2X*U`Sy{3p-(ptttttn·>Bm KG;gZw I?x[z)%A>`\rPUN.RV~|2vyq Ie8,N(!Ӆ/,S&io& V *(uh)StU ",>eĻ=!B{@lij!d)Kej bLMb^i032-SdiaWFnDF1i<+w)hI t`$ڜh!zIS2A0}er]I(.՝wlz\ԟ l>Nyw^FsK Di{f egV.<.$s^NfJ@͔e 4f}yx!>~0}^/fJpm坟CB>%,Z:V3:n8r'pppppp~-@.Ɖ,J0?1v;cVn~0~7e|Jn0y.H9+JAY5pGٓ%xĮü%*?A4FEx#8bKw`D b3:0^"NiTF /bE)|uȒAo`"A?0uݜz(0|4Jj }ܡK"ٜ`|fSP^mfws덤[AXгALq |b<݀Fmr9L"]+A>^p/ ow"wtZp#8a{Oý蜭em2jrʍsUlP$t̖TC<.dT﹮&Ȳ_J77+0̮irJI?NΞp5 'y5..;/D /kφPꃊ]2,%۔h)Ү0H1id_&j%'Mg+ `ۏSS`*>{~F3徱vU j _.y0NGFJs]ZEX4vAMَۋ{T5QyKi5\w[P2[Ggş#ƾvliGs|_;XR_.y(7BS[㝂Q6(9^gv8~h+فe?WQ021a ׏G'(^ 7.q|θ$xennmmeem VVV*BiD~PY+Cc@-TF8Vj"aXSAJwa-H3rbru Tꖾ+\kfs@⥑#sN\F ?˅U!A:_6zs߬;  W. ><ϣx1q#?700ɟw5@Bu>jF yΏmGwbnsO'_xd)xv=kF讀SOUY3y{/pRÜ<̔"f he㩕M\ o/SjFJz; }{efx$lK\utJoa${wQps >Y#wɩ$22yR=BYoxB˟-IGbco܀yk2@CfJ:͗Try){֗}:}l[GoNU .<-^Gq"G]\MYUT?]a:9QɳGPfo9Rda溡$0k(@3/eXg<^Xx7"%?c׋ЌL;iRԨXҎ3 |ya .=m0@zi Xh6@ 54vנLi'FE幌=33_o4!-՛G񠸢ȋl&0pkͣȫo{C~˖U[MrtZBb8:x}f0 ^?ѽ= AWʀ<| ?[!8N/M`gc cy剆Gk 5^tk-|qÕXAlJ#,@sP+ŃM^1G)ťR;R5_K@SY lI/uTYA!Aao*}OG KZ9mo 1 c%N{;ޫ aCˇ@oc rZLvJ[ AŇ?*$o]`O^qmDp ^}:kn^vE<4%Q%^qϳ, _*ξ`y9xmإ0=Jr*Y`ɥ R9rʾ\so:!uZYI }BLek19-3q9w4ff\Nr?㩗񧞛U?E+w25;/7N_8|;t % %;C eJMޚUC2r(6'Pe|5554'-BK~>NPy9!mϽ#_rwvH RiNTɳ XY7PlvzTw+kUHSQy5ϕr>9Hh xp<Zy/~6x|@ 9{*Yz*uTMFeYaׯa6:ːwcR{nA4lqq쌟~toC~q$EqK&o2Y) rKy7{TfE4FRYq,] :E%'MF+״m~4O ysI$M5=A$}kRChC@ Pw%gnμ|-l]y~hyD:.} SIMXit03Ϟ.e6}IޥW_%͏?)=񜌭=O5#dLf<55y=󅱜<2 r2I1W2LlWN`qX }y*~uon#/3#\/}>Kjݜ3% 9t{ǤLF :27?L!j]]BwV0 "W)l*[6]~1x;WM5'h?/CeKRݗ3cs2M-a6c5gB}O(V=lG\oԏwp"IId ,rݥ'Jv\ FV!kui d9U$u~'G:ŭ;6^y<p-;_#KG1 @>hr3l[ˎq>դ۵/Nv"T{^U`SQvsº`[i] M)PQ=CC X!,1i0?ЬԬ4e,`W&P0BaN_-e_ʫ.R>zq3-L 8O`<5 ػ}n~)Oضm۶m 66666/^xq٧qA"dN^= ޯDû[/kbk<)Az)ALǻ"B{ ~R?^}~U`S9x!zC'0=6lcH V/F0>20'̤ ^arpr4azYŖK^ڜ?xNBR ŷaț7 >z<\=d7Ld=gޜ6%9ESm\ffT}ϔKrD y޾4.?rM+ 9(UICPz`s%E(~%|W }<,'Bۋ636Oq ^6b,h^.z? -} kvAG_`٠ν^s]¤ا.L>J'8MmU(U0z']3|L۔p… .~TrT>zhja0ʷǤ,r} nŠ .v_n~}}Sy>kg/rg-{O (V{?&?UL<ScZB^QEA*&мqj%P6`I`Oi#* & n|.8C,bXM\]K YK?hz[b SSQWMaW_0V~o9 gwprgK]{ySQRAo۽Ke7]]. ׹yxyaX^( wo'qXS-=r^nP<woNɡL϶k\%/B/\ >ga|鍿mX:kfQn׼V9xu(tӊ9+LW.|%OO()\y㵫p.ᗧ!B2;pJ .\#:]XPzD_K4?ˍМSFtSr>rl| }_{!#'? 
MXhe{;l Vk[/t}Z bX)N#ȮbNy\$ɦ6xW ȲsÇw0~&<ە+W\ܠI&M4< ))))))lŋ/^z+]5H:ש|o`V]?]8ѽy&Hq?&ֈ^E" 7Sϔ j7*G^}#$M}/!*R|RQy9 YIy7-ЮCy_ME%_W L} ^GCGAi!MH;u6$p"E1f񀣪rE&ܴ@ePf;mGe}}Apʽ9(3'y!a9>5 {ubĕQ40xyO0۝՚@rqlŸsi?͕2?HnX?LWs'PF8s.<R6 ɣ&mm WǂzgJ}N(,߿V@ӻJ11V4otޟ[zIޑ?3HH6+O} Uo996)8~׺kPUQU`SQDȵ%{qPb9P̶{IUQaa?|'B _-?֥[}Zb0ycY:*P"As3@v^2ӂN4N=% d VvxϺ,<& ]~\7Cȱt1.9Xz0lFJߟ2 ,BMq=Nq'klmlSLzb33'eK 0% |Lše4)s3Kf aͬ!ͤ3cVN__?x $KM!@)|{wUts]hyo*pQSB< %@m8 "Iյ26$oeyhCV-T=2jOBaU U_(\kcP)qMZ FMj+lQ@<|~mP믿B1E>/sN:uԳYg1j VF۷o߾ u!F;/ͩ> |̳[6 ÛF~W\IY2Uq'g-E~X9k"nxPҫV l**!rqIR'=o|*R( ?c,w`ꟃE}c)dq:i`27B0ۮq]Jb` iBAڬ6+Ӈ8EQ \t(k wO vܹsgʕ+W\ڷo߾}{f <%/pٳgI /)5u AXYBz)ο]^?`C;!l]``q|4B;MII3_]vtkV l**u?uJK}9`\bXT%Q.N.bٰ| kЦpہ?aS3 %f/q: {;:Z{l}X=F@H+@%zu`s@)NiPZ3jAso#l@*JX?|~w^ rǀ}n[A91!XܘJ'yש.[ ۏmjzf}%%zt[FEW]%~~4Ihj}xԪW]P/7S dHLġW9'_Y{d2"<_)̑o ^V@yrO9o| J166C/LIgFRZS|RބɽAu=M⦃ែ qTCCA^6ѨCg0$}UMMeV+>QHJ\sM%B.2JKkDgwZ+>1k~gS.PƌE2bPҚ=ڷZc꯮kRg5A"&pQ K5\O+5J`voߒ@7e=>裏 e~AHOIH)->~5˼4*轳F&+΂QgDes*P$|Pn*H(u`ˆ ?nC|Y"X' Y:#:NJuO UHqJݚdZ 6ع[ l**U`SQl Yp( iSSSQy1b Mȶj " tt#iwO(_">g-@qڕ գ.H?T(qtt>2V)!0n0 5) w ~";_ҮlfuNoܰ8VmoleE, \ \ yN5 tyrte|v {Vo|\ZU{ֱFĝ:w gKek4`?p&%WyJ_316x}ةO_ t'+0[xaNf{&}n j>/C0g y0 {eLL3 `^u:O)RJd"!a|0H^€ɽA]ds&tO2+*"(H`4Z sLe&i>U5@3~@s`"K@Z9\]w^t,,A@3 hF0(ᗺB<,-CUtH1`/W@f%ketG.|"vn[1П%~7:qjWZ[r:@! "ÀϼĀA=$gZkygϫUX+D8$E>N=p~d_p… M);?ZXEKb!88888nL_dJ.>*XÓN 6G.Vޛ,hi,fD^ 7E>hki⬺FS*ҫUy[P6*EQ 6i[ӾN?8ѩRG)AjQNV\;nhoo+J>i9 nA1ƕ`mvY`$(gЊi |%O{fA7oVvuwAo V PaeNXV]ªAR!DKg2__tҥKuF+H !|8Z!1111$:{*TPBHoO ֕=mMlNDOP}/Ad@0{&iISoյ T{gT -KUb|2mMJ}Aneu@÷ lbOn}m+ gJU^/rgPcg8۩_.avo{\?ow.Ȳ,28&:;$AcW6@) jAͻuUg>MҐ A{L[=k>[1ovu:NBK o>iōSJa3yѣG5k֬Y3sI&M4 ><Ы'hޗ鴠-=4hǎ;v.So+$ŕ]s%RȵO~Co4/)JGxT!6pw] ^p*D.<|]jmS~W l**"+"îί[~mV_**op9t^}}c蕧RW,סE4V݉0b3C:Z:*EҦAM.w{y`킝WAAB=o&cSѺԁ["5b]R]@^q)1ӳy _ǣ,},٤oa?fbc\Mٰ TvܦBeQ@6%wګf>n 4S0 BUEEd{Y[[BX@LgA`T fU.h'5ur;\iWW\|Bd R O}wz89[]e2/zAC^A(@[ HF||yOw;c=Nm@([-DMظ*pp}cG_u4y~ ? j~_RP7aӠġ ΂[keew_LAMj0axh<~?Xd pZmuc?̚5k֬Yпg[ojT:Vj3رR%3FRț7o޼y!>>>>>+VX1zn`$4~e4eVGi``uZ7Qs0T6FB s@iMMnAPН;tPޱm͆QͩW l**YWB`6#s%NE%w$'M*wB߾>9 6mC[::Z͇!mXX;Y:* M`kЯG_ ޽u=(}Wa]vÚ/TJ!)ft,5y=KGr+Ʋ;uKPG'G",d%IIV2rVU4Էod_'jO^~cA /U>xOLpD'8%.˧ppwwTd6?Xp…Y#!'/]Lw*W9~5Iϟ{wmbĆ'XwU52<_F!0?~  þ0l[+vp=;WyyG~*p:ԻWF F5RyCv<**ϠYZ iIY:*CY'|-'+IKGe9S_eG?~о;zǏcb?_W5`ˎšDA]x4\`ӆ&s>?ӧCH?&qL>댎r}> \]+7fX;{?@X9Z~АkX (ӷ|3=oΜ9s/a-sΝ;w.u|EkRv&x;^8yyVXQF5E=&w UJDE[Bcc=4ڲU.ꟇZ"g4报4_ s >Q5ˠ>r**YݖfXH< ( GŊREw/| vlZ=zK|տԨكKu Ne<:2g-ڲ; . gsu u[ghUTT hfѮJO[MQW퇄spp凋*[7jq lj&J` -޺̭Z!qo8fmJ݁ö-B{ scǎ;v,_qdw iٯ~g 2ץ]c֭! |,?Zvp@,!wI|2r%} 39@IDATَת l**Yy S}@(!nJ1``*=+RY Ncqo!@tf.h?|0ұ8cO< =W_ EIVR(! ȥjjTaMEE%8EM8 \Up^m fO-2=Wq#@qG VK$03'k%U͛7o iiiiii0hkX| ˞w۶m۶mi/^xqXYKg͘kf3U;w ` $Y:nAyӢ(|}[?XQ6O~KRmE-UQɂ Fd[`aJ}6**oW(!DtT#[ؒȒ`# 6؇8gT^-A5v %%995z7K lھmmѪBDT2`W9+]S~{zԃe~/:D,[y Gj.K`]gw\A0yYf͚5Я_~= #FO>}49On-oE?|Wܝ!]:!_UX3\J._qД  K/!ei;r8$غ:[G'kKRmETT=mje~;i< ~hbr|#0?{먨N.eV*-~$G^t4*Elj a罬y~~pbzOx̩-ʿ g;y /?UySy|P=3̗բғÊm@Zc}f0s̙3gBXXXXXonj3f8vرcÇ>\2:$lyBHjһ oKx݇._}:rf`4zqxwmW\S0J4'.PANkvb _΅SE " K>l$𡥣z[5}p#E>=iQ<͜O9z{\ٽAeYK^EF[O2,XH)B-ʿeH^4'5ľ랤):]AsI| Qg! *TP!W`shp+1+jui 7aÚfkM竖/;]lz,|`ԵiۡaHʂJ5-ݿMl<I#} x(QWWp;ξ I:W l**Y9kekfgX.a^?!rdsH Mn v>6-JH{b<(:%Qy!. BP -ȕ>:]mo ܻ~z`z[gVim\|wIZEE必7+|waapիWBrrrrrryZ͋כ 7*-a jI8\ v_`e+ӱ|áq5B=ZvMtWpu$j@o|;&<1 +_AL`5]{Aȋ=R [-^dׇ*dAq !Ƽj_y#$Xk 7?v>8t**#FI! *cRQ9^z]hؿRy`A#3@]6&{n=?!)J T 1tZEE?7>H?v_ުæ`ٳgϞ=y˗/_<]K'[λtC:a @ ,C`G9f|f)S:maxߡWe{BjuTAֲ݁50?pg-uX~zci!#0n0Ej cn~ g!ƣWbBTT`q. l0qne"ogy?/dmdK8ddmUTDS^x re&آE=U"^v 'Yd=s(ʳZax,U5J'3ޡTL~vN/622 eWr1E_e_ (KJ_H8. 
s?[5ܱe~_u:WԪYjԟW=j_ys]"dpA%WޫV*w-տ9"EP9{ C=P>ౚ j^J&m% Co}TyM\>YϔfoUTB%Ah(GivZ:7Ǵhٙw׹vi 8(|xf(_tX#KG6\r>\ (۩BhPp5 7*l.S>SvepuAG.*4J$A)wHG!\{r>8lqjN|ԟTaJ$YJ2=2z0LSaȊf[W #+(?^򁥣zs^lm4׃.mpKG%wwsuH km]\`ێ Cޙs0 UXSQQy(f=!ZȠy^)~l\ 8[ ]>e s,y4g>*A#0}pYiήn8㦹Q0}l8:4^]ή>}Z:!ߤW`ǐcC-lYFw( V؛ף!a[..Th]`@rcUT4PQ'ML@w{>{<#ͶM724pHE5U0L#y(ot0C'X :wPKz:;Mu&5^'8ruOk****!PfPM@%ܰO^MtW+_(7eo`]iˇ݇_J/=m1^`S;\+3_43j6.Ҥ9WCu0Mp[BDMR7ޡsyZ,z#-ݲZ"u"͟HBS:@߻ޮkpOvrʓҫQQAIҁ8z3q,ФH[4SƷDϞ$yn %0<~wܺ:WrWX&~aa[+bҿTfa4m<u%._5sogp뾝 n0 ''GG\hNzI_?$)ftk{JE͒fJ.Ct7>y4mq~0DHrHrH$ʟ2lԫհ^KP{x5C*,8AǓbZ)m0r- 옺,y_wCj)!|\Mv?^?X{kZ/~G Ї{.P9HBpvwwjTTG`SQyA`fV[  Egv*F-qѼy256QW;嗷6I; kժUVO?~uF1bĈgɓ'O~MVT^/jJ6\Lp|LX%@ }zF[C <`3zu3k}%jb!ߍ9Ez4 $Z*vuѽ~ڿdjoK9;kei-տ %} 6k}HKKKgJ]5Y$xXxѪC`SQ#)@PuvCb+:E}?nۛ|dR_3/QD*P}ho-/pm~3vjӁv`ԦMCXǺ:]u->S&HW2Pϻlj ht yon4H=Abg]VV\rJܹsY#sΝ;w@j/axŋC=z>}̙3gΜ|T^; lX]G> n e+y\1Vxt B @n~ݟNn*oq 5p (tto+) f]QXo0$=:hGVy̴t*ETT >ua P+ޮ[YKG\byhfǚ{Ny' 9mj6le-':߈џ %˔(9\z=EH1TR bEy7]?%\|y%z^CWXb ԩSNr?M4iҤIu2hڴiӦMa˖-[lFѨ-(%**٠+מhcx(\Tsukوca+3%z*E|_+|mLu`bz>nڹ5ԃp>|]FAE?T k***~YB3qPyU0t~cƌ3f~ x{g2eʔ)S֮]vZUX l**`UkCp2qȊ /U}vP<(vUySXZ7.*oQKekw_,?ӱZMZO4/ ;~ޑ~n[\h:iSUTT(9RHyѨDc{ALJ-z0hٴm;`tI{u1ٓ 5,,Ro]M/m<TXzB<5ni+8%=8H=gj|| H!b! H0zB UNFu'N8q?~GDDDDD@͛7o111111ɓ'Ǒ57N׋*dnVM_euV _~/GSzԔT U@J7R{Ѫ:)%C nۊFuQb#徂 %w\~f}Pd6Yk3A"nO+8%`ۣ;|an.EVsÖ . J/Z0t޵ދ`7| AaU|$B{'u%|*[2RRRRRRu֭[vvvvvvZ޼y} ZVT6l.7nXR2 kr9$vdC{׷l`b6ҫU쟼6 :{gy o$ǍL>q뤿kȵ`ߍ<9զ ~rPT5[:ZՃMEK9A'`NcZ_}ߞsfpw)pϣ)7neg WjB׬??F;pvyw'v}dP+!- Zc^:ŋ/^e˖-[67cF)gFӃ&;vر#Yf͚5狢("lٲerDȡ ":o,m!JJ}<gɏ䦲P5!uѬe6S䕖(py"3_ ||`ݏe ktXN8WNɃ'TaMEE忏rAP_QyQ v|= +E1{`/ǵ0%G^lYC=M*50ф_|i (ve~r31tW{p1%O)ʹjEkQ&eV~go0 ̞={0lذaÆ=+e0}ӧۆ i^NI/߼}lyM9t~ck ٚC܅HAM%WHlξ E[0(ŠmmM"Tv*B jOV9L+R_TܕS6$00q4kWG;KG,7nܸq#iӦM6gxe>̀ T $UT9nwjVf Q%_GTTTF !:⣴_uB7?.rY=oo__OOawיmhUTTT,%L9BK ^+-rZk1QG}GQlM!{*H/*`AEED6D, "E@ޛ:{6~Hu]\|愲)iृcZߟwG2!I8mڠ7SaC6ܰ&O$mYSc}Vφ'SPcjKM¡{>87jI8Zeo,U_mfڋ/c5k֬Y-ZhѢ &%zz?H!RCS>OF߽eIkUimѩ~Y; =)Fw  fxgF^ ;hd(ҳ覠4 LjN|nt!_ܰkG6A;Wt;Pŏoª6:n >8=-a嚍?8]c=7@ʔ2a֤}ztkw"P=xl0_NMҙsBu"݂z\8c\ N{|4ăWE^ۍ A%gTPhk'ϒge/G7p.I}@ v's5i˳BppѢ0Y_tպntjAGtF* lol7*?*ח'ڇ5`t+͹9Vg֡uѐ1o};rȑ#e5jԨQtBa{ Jkz'srPziՊ>6-ٱ93ӌAc,}$~F2F̨ .bt;S?r6dx{f@O]Om55A4I칩]ѱv78T։3!·ج%|4zぅ,dtGG%3"A)=1fgz }.<;쐭796}3ԝKkע_7g>άå'/YWW\yꀂYM!VDM7DMK922]W(12I~{ ?kL G+':[۪kt{/?Zo]x &9O> 4 4: Bgނ_ oӽ[xu % gB}xiVz .VV +vuXqu'` .v0N3;jZиqƍC֭[n ۷o߾=TVZjwN]u]Ǐ?_h6ῑt= )6)%e8y@es5jUyghC gqG#M ^3=|FP\}\T_Wo[GVS;^{Q;'=sWyk5 ///j |P1RM;|R'&eX?FCLU!{oɬm4X5s+<⹰mιj u6fL_SKƠjipTbm--]q.Jq[_rrkd7/Z}vڵk_~qƍ7ѣG=ZP 6mڴiӦkvڵkE-Zݻsu9Wbi1)SL2)RH"wNb idŸ<&7]\Six%65ngL6lL?ѽ&6p<mi,̺=Uo^0,Jt:.<Ca&.( -Zn/R )TΞ@&/_| Vy(naXay ~UiS^/^ nozo2G#VWυ|m.X3f٠e˖-[SN:>|a~a… .:t6o޼yfR>)Bà-+ҲhF)p~4挺ntʔ_wwwwXץ_/WybćFAxM/sF*R{c&J԰cҳW+@JV,xOիF=i|N8qĉ^WU]!l}&Q`fق ΢ѩ HP<2 oBǩ;AȄe@"I>LvU F{?)`֪y_6: 8kiKyշлS= 3<h ~%x ;H4rReѡ Zzժ+^DĐg͇6A/[p}ܩ_ >G(*CIsceGlaKU_}甤KX=zѣ F%'''''L9>| F 4h|yQ`v?ؒvڵk׆裏 6)y͛7ZhѢELo lpСCA:uԩ*TPlڴiӦM~I0M&LOH񮯌Nϊ ӡ=Xٖ3"G\3f3;`0kKA1FwުIX5E:T>k;㯭`ˮ?:߳h]PIRPd%5,v?P +hѢE .\'۟l=8QAf _K.]tix饗^zrQQQQQ1Lu4M`)n&|2.64/ػKy(Zy>ȕ&υu)BzN)sK ڂ[xa|&Lxv8<*EH!:|z⹶jt }KJеDӔ>1pzv +N;5zl؀s7w( &=ZZЖ*lt/59eWZO=fGqpuwF) Y$*VZ_tA<|V@՗kl Dj c<$ߟܶ^{Ż}5\m-6KI[ѽWeʔ)SLAҥK.]+VX-[lٲ8p_/=====Ə?~xC:Ν;w9{E]&򸥼 Pk}NsEsi(gUEj MG!JnP7:pɍm=YE[?s+?{L>X,_[;hNxkitZA?׈遷$ѩ\T{ ߽v׹B͜}TxjY%lgG@z{kn7:p9HW0\r+ %!br{(:* :ZLզS.rQ\7MMG!{o߾}z뭷 XBGQEQ`ܹs΅e˖-[`-STM~0oYAUT1:]P@}%+VJBeyzAXG2w~9XK]'mIͩ>S.(zx{Q6a  AAEk\86Uiw/ z; RF}濵ÁЮWݒ`gK1AϔM?k׭^\PXn/fOwnW[og^(u=̳;]8dhk]m1LFQ6fO& KZnSK f{e6: <,YѠ46t7对PRp[l@L뇮qޱ/@pp`"O,K\­N~]wjwzs>Rޠ"Sy04 ++##;ZN0tnoOxAB]AB п^9Plgys[x{Z[ckbV@4MoI;S/}L(,7}flW)@Zv<ZolJBNNNNNdeeeeer\Z0B-33333<֭[n +VXL.Q`[`ilf{ T%+|#fZ EER J'7\ K漑 
ax.O2:OL͟ӂ1RNwE^0AaF)7+|><ρ7L7e1KA_;I~Dz\ ahfO8y~{Q *Ӕi G8DB[3'R&­\{,[u]z >E-Pz]rV0gXvn|y&jN}=T}`N2P g0 }Õ9;\xMT: j ~) *}@?ٴc@<jzhM#kfLfI-P/LRm >˓ ;hX&:u}Н#Cjj"w dztY>#s{&S K*(uN7= 4HO}#sH$U>s'[+(i+1m{z2:5( &LS-?jxcm5)/A_2P݄gWBraP`^ë[yQ]PۿvKۡGe^0-}sԚ rπ}LxDي97އP𾼰RP4~htA?DMn)f+h;[h UףT )QZgL 9Gk.<9y]Nei͛Ri{ԏNuvijeVrW u{3&} ;IRQ oxyΐFA}~J^rei'*=g;-} ??~yL}\:Q9ù|S0qڴAowѽ"1P9)IYPѫTjK)kt[+u)ik -6sC6YҌNg<q!ׂ9j3j1.}>^ |5C 0EuN0g:h'w]]Zcu$FAq_oߤȀ]55^䝦FA/yL )@Jd-m|1ezC^//IdL@u 6>(@Hn] OS@%y^w4c c[w# +:w[ȶ% zrz`S.l~iکvFpz}CUT$<>+ʟ?=u wem3^Ax`[`l ax_駤ss270TpoI"9LX nvCv)0:Ճ+S˶ /Oκ)24.pCȇ`nbZiK\Ptw5Nu^􄗆968qS)%z.٭lx}k '8ԛ\,hE|(|zxFߝ I>A%I?FB͗‘T?~Z7(7/֗,c_C͚%7>I*k'C7[Wn&4mYug [qQYҿc!y%W#5ފia?\J*~v/_:D P#_`h\ԃvAn[b(Qϐ2}X|'舾 /` \@_nJ-Aߧ_7PQ_mw0 Y.r˟Ox>۩SByrw}gq'8uh艹? KD7;vVB7O+g/q#}Oxot/ bP%ſ:iѝph쾠cPsU=Zh7sϖ`>_ hfӖuAɫuB#ͼo K(8✳We)k{8['H-Lr dJnzumfC!C! /19,IFr+de3ןHeK[4ޕCBN'( ¿p*J쁉&PyBؚwͻpkJ;\<ҊO%KgS[X*x'zi}ѩ k b9S ёSĒSހ>1:#:tXV[`/nkpuKzm2H{e^U­02ʏ,9jԉ*_aM{@6   /&)Mf}J}L{~bO6NW_kSp]K6 {!C- mPH[w4{L`}=zxL6W*stKF|j$&4WuUp ~8,bXC-q*F6^Ɔ,-u /U5/_UU𱺛N-Q`/Crg꿗+YS=zrlyz^C mqԞ`Mx>A[:_N[x\ߞZ]nH*R#Tm~ xf[ c*{agPowV+Q5'Ñ~ác}i^}WnE7[f֚|7 ܨBŰ*;#6aQ`=RkGgwK8ÁٻG"*6jt:ADM+g6_O˖锜#&h̊pmcbːq={`Y9FMoftf#$TN ޙSA׊U*T̾01 ̬0e.(R$448>ncf:Ug3Vٗ3{ ʞv:hM},˲,CE撇 \_+ʄ +CɊ\ /-[6ߥzzpыւu^Y?;.Zl^G>bMavgk>:u}uh9]&sdqL(V75ˣѽ,G56kJFH>HʁŹ+rQ^@G5nA^5\%Nkb6}si:lYZn`6LBq/deZ3U(1\{rkg:{NJ;oPrzw;rrp}xBw &=1'c ]xxr%gf,kH6uKWܩ CSCʁW]>F~LLiR\ӡmЌN%ET5sU.h!jchNg%xn%\*5IYo~|\g}H2.>K,m~XzVV? |V 9djrb |vː`nNad7J?*m Rv4ӎ&a>Vp\EOk> ҩ aW>;{8p'$*ZyWo}Vl(j|<*^!+~VUңJۡgŷ+ /Z ?ȋ̓M Rnst䙳G#N<1lMHL3$ɲ$ ; BuiW5T U/רZ+RT<'nЌ64:؀PXE{l yVZ,gN'+deI R&N\{ŎLctu (P<\n yuy4щ#_HN3g6-ҭ",z9{N'ܪݍW's-f6HK1twwl@=m/dY-2:K=2I(nXISĝ|;i N}? |}}|`r̀?6KrL0A3/.?GTwS% 4llUM` 5w][(XzJjPrhǠl ߗk0O w`Zz+ G8= .pnWB!bB5|!3BBߟ:w;7 fWP8Sj3Pr*k>Y軼&C Y8veϱnxei=O/7A!ep}u^Y |w N3< 4j^vFC_p%deĬ[-m>BN'ܮMr_{R_͞y*CESh}7+ ,L{- F+fz.քb{· ,o9~q<u>bzuMH!gۿu|B#h<{Ge Mm/Jȝ^ .ݩnzѪL;YŃ> ߼YpBďˑB7hM6G1{% lGe8-VX?WF} ļqkC!äYo*/}: o-Uh$bVOš沟HR (3 Z_MQ omΖDȶoy!e+ҾX?Luj#ܡaMŦ~WXyeԎ/sysS ! l/\t–-ǎ3 (tB uFBBp )/}G\uCE*4op5:?(Qb޾~Y8zXoֲjkٙHx੯cb3/?^WR_ R)^nW~A ¥a}A委8;KgMMV:b&Fψ}rtsӻ'xl8Gom}|2 EC5{[.ԱYPYL M#* :hy& l59uHH>Ľ|H"9yZj04Jjvقy>hq|f Ig!~w^LL, 3~7J-ڕYKBȑaAϑmJ[oUn?x콿|v|j:d9U:MgN'*GܧC!eIzL Y粚k/ 75! poNHSFbKs{۾SC] O{  uȖeW!sH Qh^(]e+o/?_Vp|m:g1"<ktk5*dx$S[]~tͿW P)9G.gtz!߰t#zqFC6l[skZxx"K&J_$Y%Sv= I=cB5koj7䁷ūO#?5$XfX 1'2g2\mq Yv쟺uw5FMX1ճJ5+T̳L[ ie¶wFzXP^7w% q2<㔮0o=ѻmJSV[ћ3ȍAՐ cu\k> 2I#9==;octuMo߇q75 >";RxL0K/;7q&|5n\]2Z9QEZB"@)⽢O)<s[y={#xeWuYZҌN'F)󬐖nO qsFXTw#g`߼P'\ k؝9}Cr+tFXbs! 
lpvm ֪DRVѩr^Iq ebP[~;<''yo#ݜ#Wǟ=(VG{T\v%0"r.ٳN]mس?wG _̅x$ 7zX%I!50#}2Ssj fևÕ=bR2MYq73o#xb -9b@֜k=sڀAvਕ]Q&4ޮCm?4Ni'D%lJ޻>ћAKf4?7c}Qٞ9Ni/UQ]«>^|m8~{ُPL %K̩MJ,)ƒ ܉Cs,pоh[`_nPjZ_Hɼ=dl~)}<3M#9R}-y.x>4rDE]|8TQu:Ztpw~ AxXƙDctA?!\o2/)%KRX[R ᗔV﮴< >/fOw/jqH) MiW}XbzBObτb ~Q7Cս5{MA/NWLs}AEjetOn1*[̙Y ҫC3z6U۩-& >ط_{y@Lޖ8@?l RJ/4b` 5b #6X+:R^P#Qp7-EC\❪J]KvAgafi^r98u3 #)Zp>ZS| ߔo%z[d[0o54T#F  wj:_l9;Aڬ$i<_LYF,| *H ;$E= ;5J|[,nPR [eG & g2*![g1DMn5e+\V*ה@0t£.vV0s/ YyxM'h߽o "7~zw3w{MH|3-l<"fXI{ѽ  <,]}\B.W<<gPPٝ/7 ri<,/y U>7{Mav#Ng)Z J1Wjt:A6AVi Ȳ5˜(S _vYSF^zߨ`V?vYa{Тr䦭v>y)z)*cLo`{:ذb3#tI{rFGypx@Z/Ӟ5GjZgZ[= no nl쯀YlF"`rlu%rxrE0}5';9tR0(Tu!W݇4Y ύr\&ϋšQ+VQ ,񖒶p4FgMGKlljE#k$<\EUaՄ9׾L Ml8̽Y{!yȵV$@N;N[ J.タ(s=ƎG,xxt@/j ozx1g+28;f2J:,mvm򤵁*pHV7QZm_Et];yr@ΔrfAwrB:rk00 3wutwOpKyzLIͥs5(NL1IYZ@ ^`.YCN%6A#rF6:Px$+Bjhu"eؗwƶk0Fgkw9xa"0㖂q?䅦&Mdht f;6*{)` F)XsC WԐm$x^4x^-ڧGZ" "p YS%l5W cJ6AVy}h43giTSr]āYϮf)oTRHf|3%Hxl%fm6I8wxvN٥N; e'{ȝL4hiZ 2&,?{wey >2lgW/a74#b,ț|1{p΍v<Y?gl# <ݭaYl}>mOCs oXbm8A=sG:j:SWQ=zi/vkQyvveCnf5GGG)DhM[-T}g2y`V%F X6X{[C򝹑%TUSUe^agkepnw;kU\me35¹\pr& zt2@A*nm(?[=1 p q y|Nh. ^P$@Pj[ |lb1[o|JG( 128 fVUdr&{˖.S pk?R 7Ւ{egmd:l 4G4[ 5/&y#xĹx I*@zUPT9%9}r_ٙn=av-@ȟl[ǃe<֑b`i`׬m)b 9Axʔ-2yJ5p~T*SP*dpv6ʛ Js ݉AN' 8,Kiv[`y,zzl?Z7G_8w0]-An3G9&b dMu"*w锳a瞓m%e ,et:^9}"~b@p_[^P,| .=WκQQ/Bx|hbg Þ{w H4 jN*:*r9_a켮7:7xϛ?uՒaZE*ZjIg0w$ѽ"[bAR+;@JQ^ݧ<@8+8#@)LP8(\ĴS O+ݼ27-`o{~KiG !7 DЇP4ݿT`;e q `[_rrkd7&UtT` Q`;R3~< QȱRT,9S5Zz7|\j{+AU]k.mzr^.ʇC!=kGN*NU\ϻJ9v:tSW-J-28/f:ۂrUm~X[roXmA^kc LCLq&7?mNPSE`~bzLLV7Fҙ#Q`3F1.uT>'EF:Gz\O)ߺ"AyKYZ Ԯ4WiPJ[u?42GR<޺,`b=n-R`bnaYĂD&#!}?QRk}(jyl-F Åb z7#P~[J! lpl.ϒuL5@nfmR-5X΂ss`3fm}e9L̇V0eHMV+[qDF\{ 'Hu ^4BQϩf~뼼Cכlt*A0( ]vomk%v*T:I٦ӒgwZtpah]]y$Y%]>}yl6{2(/tcd:L5R7J5^t PeJ%m=hٮeКhSU~D"(5Q8hE[ Z}uj@]yK.o* 7*K׽af m]Y;\NJCer $/.s}SUw87OWmѶ'@-kGS佦)r6fiL)2SU0]0M0y4A>ӓ&3`(}&s=s2ˤL X[6Zc07+AJ7# `us$LH/f,W|K)Bi/| a-9eJ! lp\xmσj e#Vft*ve۳B ttt?ob+hql{=zJ{K;A^brXNwּmY(NIzt`qv8(s!uoZ5]oaR~{mh/X|}qԠ,tꔂ'?șrMzoYװFS,ѽ|ts z*k=A T ½蛳-kdn-5(R="Sz"b7UA=zXn rxq 116pX}pQOin|*[ eȍڈ#Fqc1;ڪUVZu[X̑v_=Ȼ5i:%K,Y$$$$$$$}v;Tg_ضkqPþuc{G,EK3AD_M0: peZs^L!}Evôii!/I׋š Um0mFc ֛vkS!ӏ vTiژ1cƌfli͛7o {ݻw/3w;frY{5 :vرcG-8֭[n:YKz^kK8_b#kjO=-4 ½VZҳ?MZCh'R)AxRжvOS pXcL6SA[٩\FÔ'8wŋ/^ 0`?iҤI&鰵XM=t~ ,}eG=z'Otv[k-|sx  &L0ϟ?}~Z.Y9wmR&#<Z\]mtAvA$I -7]J3Ȟ:[^^E^SSN)D PC\f3: &wl&+ɹnO+mNeG xh%7|7z܊+VXQQQQQQw=߿=3J1Aft*AVIz~Rlrs>t샠ZB7S蔂 ӯ)Ax)'畺`Rj5: &w1mPPGb|\u* }k!0nܸqE-Z`*r\.]tR0Mf2J\Efy[iqFot*Ao='葐J)(A)P_J-'萂 < z4XNo.Ă U\C'NSxhVAnEךyРA @-ZhQp(@FFFFFF"xzzzzzc+W[˭h]>xd8ypA[-'jJAO-W)UϹBW"Wg]θ 2%OIq>&tB lpY}->^ot'3BWAFk׮]BCCCCC!gZ:3rfg}W.f+hk'}ѩA|ZO-Z f<#hN)m >yd$Wz-bI$I8p T\rʠ>Wu}ZF̓\N65*h_kFA:j:UN m>B=SFQȮnN &wjgayLIϟ?t]u/\p…'u_mwyA[FAxt4WH I1R>ϖ*nҝ_Gv)״*JZAZiFA.7oSkNTsodqy%CΆgR.̙3gΜ `@Ε[.}W>y|ܶWJZlqA9}cbKƣ)Gsu)A@U*h[H lpYܬ7VrTƱ3s^ԭSgϞ={8wwwwww5jԨQzO>O> pbIl= SCiOA[]&}N%p vmvi9 Z|X}MqFA(TkԂlJ Q`k ߴhVTSK4x5 |{{nݺuւ^˟ :uԩSBRJ*Up\H//+^A>&ś.}.sʹ7nt -ʫ<䎙?$K7S [+K_V[079N)Wj Vж̒5TP8 E)Zr URNuZ%Mv2hi yLL,FA{2R_fjK lpWYz*&kѩ=TBdzXy?ƍ7n_2dȐ!C`̘1cƌ=w-\ “O>OBDDDDDD_~u;wܹsG\^xH d#EnAR1Cont*AO8g]G NߙoyCXƑ@^"]6]AAxkXV[[VF Q`n6dNJ% !(.k88цe˖-[ٳgϞ=0,oArdws(bѣGN6mڴire)`7`to}'N8QpjZrʕ+W 7_˩ 9YmP;1bĈ_pܹsΝ;K,Yd XB"Ֆj[9`* z SiA<['@fGFxv9E}g+ lc)AQA?DM[&:\s?[V9/˰nݺu9s̙34iҤIpm[8el2֯uQ<=====aȑ#G]C?>Hsl9^{p˾rhjAxedm춎yc~- /a52 AWրl74: .2 ܩWM5j SK-uT&θF~:~jH3a0h"j}n]-`.ss%+ .;N{ߥkHsZP".<4BJQɧde(c+[^3:Ճ'mqOy2C?<ÂCN'p57 }PkX\U([cFA`{2(yJy9WyN^pTpZu|_zuؚ;'eepMmz]疛WET. 
FڃCn*]hS >zI4Hqš&O Q fikѩp#>9SQTy]/0MF+HBѐ&}mryxZ97S p/l @f"[n% }|FAr|(ݺQN -nt:A(6A,VV4eѩ3w00k%FA0:!uHFJGsgn K5A= үmJ Q`{Zڜ lWԊJѩΐKn (m *S  =A q;Wiy^+(D÷)A?uzE.h[,!dS B" lpX֜Rgt*A RAA[]^S ?˚V RHH.u+1<ۋz0: q\\K] )j5: .&U1EK1 )qi}A[_U7: ½Ԣի"qbP2Pt﷑e=ΚetJAղJ]= i%L*)auK {)A %CsNkip6A,K@$&RrL ڪyJq*QҐv:Qk^7 V6GVRRkϸʃeOF&庹4#N/'eT[<{fsɾ/d#;KB6esA nK]ZJkժmkk\P"ʎldOHN#x- L73ID'= ckvEejzmo`պ٪PW56(8r;BSQxpZ|9nBBE%-|>jDddwly-qJ[ 7»v"K˫55)%؝SQwysEQYR Amc5Z:z>Iک@Tf^|W>JDDaݸܢ\\8onX|zp+GGzk lt:c.@EQU.h;:{ d|jv"ڈ՗/m{ HީhL`0yqӞzA[D z9?s l,ˆ.,؈F@Ukvճ+Rn~ݩќQԕ6}X pOgw6"#]YgF;% at0gLQ+p\ +T8vt&.n8wױ_#"8F4*i*jwO5'PʥwEy6iW@UmCey,&fԋZp8l[w7ww4> XwKgȏ"B(M'D?sݹ"F'lD#]׾y)1qNEO[Uz{^*? D h ztD4Z۽hk[p|6QqlD#뇊 #7`!|MUv":`?~y/貰GD?lݗ| [|-gh=@dhȜv@29Vٽ. F"lw;|v#"":`#A;D^hw%2NE]_<Jr <%NEDl] =uq~1f@!N O:E;aDDD4ڰ`#Ay[֮7bw*ڳ[U| Dݩh8sӣ]Ow 3| aG=aDDDt`F4x7/8eu9Iv"wl|r( ȹv"aqh(/_1Xu=}N1ƝETP\ɀ*VhuLX1n@V=mRnE7DX#?/;uo&_qF`%YkM0YOuqc`DXwf8`: q^G Y2Z އX 6T9|Ppr2 )uZlw:~œ+{dKk 7R/NEtb00dntVQc y0!0K G =@̲.eֺ}MR\"Tq hzzM5L`_XVR0"""cS$ʧ0@}RsjʅF Ipݙ.ŒϬ,2z/:Vg:7H i@(ߤJ)BVr^@z.Zm@Mԙ2N9[OI-4H@ٙu{ÁޅZfK J:㐧 'p_#"""'؈FPkGgiKMpVٝ_^[P|9N;p-0!5yݩdmf_EEaUm7V.=.J|"@,PW%P&b2[4aP%H*)>wP@y'(JNGQ3`݉rrh+=!&Âhv| {gz:mw:~ֿ[:;Դ$pujݩDa>hyk2nnH0V l]b,ys'`Z׼X_k0aLG (+J| J/x)U,"l=flz`! +̆3f;ƍ;%łhy ѧ|坷#/B? Fvٸ h]Roߔ 9)"LoaEhD ~7v#הJCQb7@U3dCEJEpӶ4tBL@ާS&-} bwJ"""oN  ClFxfT^l^XLz`ݡNbEn<|ώ"L/P|O,IrAܯP(|@%mQ/b&@ISnB]VT&`L^շӚO3};DDDD48,؈l{~m xۡq Qٝ_ۃ=@}:p'MN5sfL07kf'`d_Kuc38Xg]\vf^6[+!iEXӻ[ѷ(*:kDʲL*9i 07 $DDDD'lD6uAcss/0~aT>FƖ/jzs+RffTnwgc-N3HEJfOEح֝aaƥېQDe VU @~Hz^^()UʽR⋃EXP#/„ ȿT76IN(һ(Sl[g/]\7 ;%ɋ >N'mw*~MjT6T? YMLTUSRʝғ递"%u-JyrK@iuqH.Bu$*"OO(\-G }}|) +_@ǵ=ۼ[_wTDZD{ (Ju)No0L/ I憁"/Œ"l67 m}ul E]"Q%*EnE@i^ͅ& NKeSѷX m7ܒCSX%e4͸B_ hړV9cO߻@u i {?; k`|`M=¥5xLP0eX?7r P׫Hɡİ՞Y #wv>0T{:,؈lPnw*_iTg36۝6"L+J@ASM.N}Ư)F?Ԝ!gK`nn0C^ y^2HrzR<| (e& *> w@FfFt$1x}jn-9SX d5=kt]V3E=EzY/DcK˶ʗ, B(ڝlD6CEۖρI b%Rpv;L ?Ew-u]Y3E;0ômӦGe_8X=c) ?yp8]\(*vBjEʪ qjVаn0M 8"`qX_U)kv\;f3|0՝"fK2 w"""" 6"4n`%_o\ '. |{\~vyR=%{SEПӚ^px^h Q+ Q_8Y u p|.]c(q9wW)t3\Wgp8= 8i"xz ֮\g$WٝĶyK i^4 teQSJ{@V`'пDDDDԏ :*{d⢂^;q.׀ޭm޶+U>W\?qKR.p Ύ[>/& `S-! X+>E;NGtbdId}Sbêo&w4O&Iv%"""c 6"lعw{c~j>#IJ{t@{Q[m\=김[g.}$KUW"@ }Cwup9Y#*Ghw  &"a'ݩDXh4'ڝh+G:~dmQ@#5H(KmoMz!h`#чرzk"IUpXr7w*sn9Q<$:\~Z48W*9]>ާ?V xC/"/ϹgÛlކ2h}J^ irT HYЁw ;vgXUUZng,vg"-(אָ-tuEyqLkNfFDDDt󪈈D+0*xv!fQl|뒶WS冟`wJ""""i,؈ijIxחٝ^FdWzkpyo9NIDDDDvaFDDV ; =b- ^%zCOtS&kwJ"""" 6"":,|%@ 0ڝhdi u[ ^]yu=d$?rsM"""":`#"Rx*7v!})zR V JM`zSyv$"""цDRȬ1NCt|.JTțX/7iڀ(߀c QDD4:)r!#xxNE4zkJ 6zQ Ϳ pv$"""юlDDtXJ#fYbٝhxt9(QTsr bFDDDDCÂK UuUg>hw*cӑzcPz^GkcY 7>u炳o)DÂK4ʦ5_7GzDvjl*4U-8Tó""""":JhS7tDEt-usJ@ jN:m& \PY1)%R& }`m~im۝hpO;dOM 8TS\"":"S>XT4t,ZZկ]>LM_6?m<%"#R\rHX֥vѢ3P~;+ G?p)DDDD4Ʊ`#"#RVUX=VwS^^tzd;w;Sx)0Ѡpnb'mzլ;-0ӀR ӻ+ڧcX#"""":,؈hP*ṰZPf^ ͮvo]]$3DSXѠgԹbN3V؝KӋtyUv~%7=gS:g? 
DzNIDDDDtbcFDD"nV)/FowOZw)P|{uK[}ϛde$= ;%ET\N2+ٺ4x/O[S}K!=@OG}gFD؝hlaFDD,~'`g?N=ػ3s'q_ ePSM,؈hP?++7NCv]e@UR--%v)v$""""XѠIQ@4uJϭ~P Rz0 7.\_S v ]TӜ:9-ط8p?ߟ0 p!>nS\8FDDcdԈk/*mMάf?r~؂3Xى A%cqqiN_b ۙ7p)DDDDDbFDDb\lev7,ycD ;*sUUJDDDD4*`#"AKNqȤ?]9v{SZU[8=C'ֈFlDD4(׊ksQjwc ƀ}"Brbc)R.s (v$""""aFDDawO Nu⫍̨ԾԴ# &/.1,J2 e2߭F5Ѡ(.%f`m4֟;ՉG%YoӁ<*`] $IdwZ""""" lDD4(Dxm; Zwr՚PJ&]u!h`#"A;dC$ bP'S<,w-ܻhq%zM>y2㛙iwJ""""":| G=k(tUfPZmޫοw2M)~v$"""" 6"".1;ctfP`:weO&P?wX#""""kXѠ(,7v=$k.|gƝ=~)x`FDD"+0e@E^J7Ne?=L/ +*g~ ~¥ ;%,؈hH% !Kz @ٗ58&>@nwƸ wk؝FlDD4$Y+.UfiO{yWik"P_vѝ{]Z Xp3SC|v$""""ш9 "!z*4 t,ZwMm*ޫ= 79v$""""ьlDD4$Oـy4NsF?bI]Dɏ=dߐ3UUkDDDDD4,؈hHB6" s*g}Dy&zבYi \cFDDCk"u`mtfݩ kVUO?o ל9ʪ؝ND,؈hHD*fuh~f]TsrUPRӞ@K)I9 S!~KADDCI\neڝj$K$Uا [>U*`P H$Iih,`FDDCbw*1'ڝ @<9@:g+'5HԢfw.κDDDDD4 щEq+"h`m:Kcje@e]YZS:WxNR'OEN`Q#""""lDD4$J :F'ac{溢@ۭO4齱iSEX#""""DJg3zX7r7avQ@كUtD(lxf aI_DDDDDD'lDD4$xi{k$Y(5م@=:3M b!"""" 6""J߹Dt۞fth@V-)=>IWΪ2''}TdƂDd36ˍ*co8\(1UmRwb |j_LG ,d8)F"/gr{}oV2V@c|ߛri@( `FDDGE (6+ gM}ͼc1LrHq/ ?0_`#""Jī3޼Y?{%v{)ՓOG_JrggeBv'ڽDDDDDD 6"":*"CHrcywgP`y^@JSȩHO~@_h щIY\)vLL_pp+{B{N``#""JM-s_HjXRQ# ߙ{ɝTpX#""""lDDtTEn܁7V74}*y0$1g" ,v'""""">`#"w XW[FаqZ]π?Hq͙b.: DDtb)]U)>sF蒄`_H$IiNQ'><4*$'X#""""'؈hmZ6u5sR۝h`#"""""""":D`#"""""""":3/yPM:IENDB`dipy-1.11.0/doc/theory/spherical_coordinates.svg000066400000000000000000000654521476546756600217510ustar00rootroot00000000000000 X X Y Y Z Z --> r r --> --> --> --> --> --> θ θ φ φ --> (r,θ,φ) (r,θ,φ) dipy-1.11.0/doc/tools/000077500000000000000000000000001476546756600144765ustar00rootroot00000000000000dipy-1.11.0/doc/tools/LICENSE.txt000066400000000000000000000003661476546756600163260ustar00rootroot00000000000000These files were obtained from https://www.mail-archive.com/sphinx-dev@googlegroups.com/msg02472.html and were released under a BSD/MIT license by Fernando Perez, Matthew Brett and the PyMVPA folks. Further cleanups by the scikit-image crew. dipy-1.11.0/doc/tools/apigen.py000066400000000000000000000512071476546756600163200ustar00rootroot00000000000000"""Attempt to generate templates for module reference with Sphinx. To include extension modules, first identify them as valid in the ``_uri2path`` method, then handle them in the ``_parse_module_with_import`` script. Notes ----- This parsing is based on the import and introspection of modules. Previously functions and classes were found by parsing the text of .py files. Extension modules should be discovered and included as well. This is a modified version of a script originally shipped with the PyMVPA project, then adapted for use first in NIPY and then in skimage. PyMVPA is an MIT-licensed project. """ import ast from importlib import import_module from inspect import getmodule, ismethod import os import re from types import BuiltinFunctionType, FunctionType # suppress print statements (warnings for empty files) DEBUG = True class ApiDocWriter: """Automatic detection and parsing of API docs to Sphinx-parsable reST format.""" # only separating first two levels rst_section_levels = ["*", "=", "-", "~", "^"] def __init__( self, package_name, rst_extension=".rst", package_skip_patterns=None, module_skip_patterns=None, object_skip_patterns=None, other_defines=True, ): r"""Initialize package for parsing. Parameters ---------- package_name : string Name of the top-level package. 
*package_name* must be the name of an importable package rst_extension : string, optional Extension for reST files, default '.rst' package_skip_patterns : None or sequence of {strings, regexps} Sequence of strings giving URIs of packages to be excluded Operates on the package path, starting at (including) the first dot in the package path, after *package_name* - so, if *package_name* is ``sphinx``, then ``sphinx.util`` will result in ``.util`` being passed for searching by these regexps. If is None, gives default. Default is: ['\.tests$'] module_skip_patterns : None or sequence Sequence of strings giving URIs of modules to be excluded Operates on the module name including preceding URI path, back to the first dot after *package_name*. For example ``sphinx.util.console`` results in the string to search of ``.util.console`` If is None, gives default. Default is: ['\.setup$', '\._'] object_skip_patterns: None or sequence of {strings, regexps} skip some specific class or function other_defines : {True, False}, optional Whether to include classes and functions that are imported in a particular module but not defined there. """ if package_skip_patterns is None: package_skip_patterns = ["\\.tests$"] if module_skip_patterns is None: module_skip_patterns = ["\\.setup$", "\\._"] if object_skip_patterns is None: object_skip_patterns = [] self.root_path = "" self.written_modules = None self._package_name = None self.package_name = package_name self.rst_extension = rst_extension self.package_skip_patterns = package_skip_patterns self.module_skip_patterns = module_skip_patterns self.object_skip_patterns = object_skip_patterns self.other_defines = other_defines def get_package_name(self): return self._package_name def set_package_name(self, package_name): """Set package_name. >>> docwriter = ApiDocWriter('sphinx') >>> import sphinx >>> docwriter.root_path == sphinx.__path__[0] True >>> docwriter.package_name = 'docutils' >>> import docutils >>> docwriter.root_path == docutils.__path__[0] True """ # It's also possible to imagine caching the module parsing here self._package_name = package_name root_module = self._import(package_name) if root_module.__path__: self.root_path = root_module.__path__[-1] if not self.root_path or 'editable_loader' in self.root_path.lower(): self.root_path = os.path.dirname(root_module.__file__) self.written_modules = None package_name = property( get_package_name, set_package_name, None, "get/set package_name" ) @staticmethod def _import(name): """Import namespace package.""" mod = __import__(name) components = name.split(".") for comp in components[1:]: mod = getattr(mod, comp) return mod @staticmethod def _get_object_name(line): """Get second token in line. Examples -------- >>> docwriter = ApiDocWriter('sphinx') >>> docwriter._get_object_name(" def func(): ") 'func' >>> docwriter._get_object_name(" class Klass: ") 'Klass' >>> docwriter._get_object_name(" class Klass: ") 'Klass' """ name = line.split()[1].split("(")[0].strip() # in case we have classes which are not derived from object # ie. old style classes return name.rstrip(":") def _uri2path(self, uri): """Convert uri to absolute filepath. 
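
        The lookup is a straight name-to-path translation under
        ``self.root_path``: dots become path separators, and the method
        then probes, in order, for a ``.py`` file, a Cython ``.pyx``
        file, and a package ``__init__.py`` (returning None when none
        of these exist).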
Parameters ---------- uri : string URI of python module to return path for Returns ------- path : None or string Returns None if there is no valid path for this URI Otherwise returns absolute file system path for URI Examples -------- >>> docwriter = ApiDocWriter('sphinx') >>> import sphinx >>> modpath = sphinx.__path__[0] >>> res = docwriter._uri2path('sphinx.builder') >>> res == os.path.join(modpath, 'builder.py') True >>> res = docwriter._uri2path('sphinx') >>> res == os.path.join(modpath, '__init__.py') True >>> docwriter._uri2path('sphinx.does_not_exist') """ if uri == self.package_name: return os.path.join(self.root_path, "__init__.py") path = uri.replace(self.package_name + ".", "") path = path.replace(".", os.path.sep) path = os.path.join(self.root_path, path) # XXX maybe check for extensions as well? if os.path.exists(path + ".py"): # file path += ".py" elif os.path.exists(path + ".pyx"): # file path += ".pyx" elif os.path.exists(os.path.join(path, "__init__.py")): path = os.path.join(path, "__init__.py") else: return None return path def _path2uri(self, dirpath): """Convert directory path to uri.""" package_dir = self.package_name.replace(".", os.path.sep) relpath = dirpath.replace(self.root_path, package_dir) if relpath.startswith(os.path.sep): relpath = relpath[1:] return relpath.replace(os.path.sep, ".") def _parse_module(self, uri): """Parse module defined in *uri*.""" filename = self._uri2path(uri) if filename is None: print(filename, "erk") # nothing that we could handle here. return ([], []) f = open(filename, "rt") functions, classes = self._parse_lines(f) f.close() return functions, classes def _parse_module_with_import(self, uri): """Look for functions and classes in an importable module. Parameters ---------- uri : str The name of the module to be parsed. This module needs to be importable. Returns ------- functions : list of str A list of (public) function names in the module. classes : list of str A list of (public) class names in the module. constants : list of str A list of (public) constant names in the module. """ mod = import_module(uri) patterns = f"(?:{'|'.join(self.object_skip_patterns)})" pat = re.compile(patterns) if mod.__file__.endswith('.py'): with open(mod.__file__, encoding="utf8") as fi: node = ast.parse(fi.read()) functions = [] classes = [] constants = [] for n in node.body: if not hasattr(n, "name"): if not isinstance(n, ast.Assign): continue if isinstance(n, ast.ClassDef): if n.name.startswith("_") or pat.search(n.name): continue classes.append(n.name) elif isinstance(n, ast.FunctionDef): if n.name.startswith("_") or pat.search(n.name): continue functions.append(n.name) # Specific condition for vtk and fury elif isinstance(n, ast.Assign): try: if isinstance(n.value, ast.Call): if hasattr(n.targets[0], "id"): func_name = n.targets[0].id else: continue if func_name.startswith("_") or pat.search(func_name) or \ isinstance(n.targets[0], ast.Tuple): continue if isinstance(n.value, (ast.Constant, ast.Dict, ast.Tuple)): constants.append(n.targets[0].id) continue functions.append(n.targets[0].id) elif hasattr(n.value, "attr"): if n.value.attr.startswith("vtk"): classes.append(n.targets[0].id) except Exception as e: print(mod.__file__) print(n.lineno) print(n.targets[0]) print(str(e)) return functions, classes, constants else: # find all public objects in the module. 
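            # A minimal sketch of the classification rule applied below to a
            # hypothetical object ``obj`` pulled from ``mod.__dict__`` (the
            # names here are illustrative only, not part of the module):
            #
            #     is_func = (hasattr(obj, "func_name")
            #                or isinstance(obj, BuiltinFunctionType)
            #                or ismethod(obj)
            #                or isinstance(obj, FunctionType))
            #     # anything else for which issubclass(obj, object) succeeds
            #     # is recorded as a class; a TypeError means "neither".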
obj_strs = [obj for obj in dir(mod) if not obj.startswith("_")] functions = [] classes = [] constants = [] for obj_str in obj_strs: # find the actual object from its string representation if obj_str not in mod.__dict__: continue obj = mod.__dict__[obj_str] # Check if function / class defined in module if not self.other_defines and not getmodule(obj) == mod: continue # figure out if obj is a function or class if (hasattr(obj, "func_name") or isinstance(obj, BuiltinFunctionType) or ismethod(obj) or isinstance(obj, FunctionType)): functions.append(obj_str) else: try: issubclass(obj, object) classes.append(obj_str) except TypeError: # not a function or class pass return functions, classes, constants def _parse_lines(self, linesource): """Parse lines of text for functions and classes.""" functions = [] classes = [] for line in linesource: if line.startswith("def ") and line.count("("): # exclude private stuff name = self._get_object_name(line) if not name.startswith("_"): functions.append(name) elif line.startswith("class "): # exclude private stuff name = self._get_object_name(line) if not name.startswith("_"): classes.append(name) else: pass functions.sort() classes.sort() return functions, classes def generate_api_doc(self, uri): """Make autodoc documentation template string for a module. Parameters ---------- uri : string python location of module - e.g 'sphinx.builder' Returns ------- head : string Module name, table of contents. body : string Function and class docstrings. """ # get the names of all classes and functions functions, classes, constants = self._parse_module_with_import(uri) if not len(functions) and not len(classes) and DEBUG: print("WARNING: Empty -", uri) # dbg # Make a shorter version of the uri that omits the package name for # titles uri_short = re.sub(r"^%s\." % self.package_name, "", uri) head = ".. AUTO-GENERATED FILE -- DO NOT EDIT!\n\n" body = "" # Set the chapter title to read 'module' for all modules except for the # main packages if "." in uri_short: title = "Module: :mod:`" + uri_short + "`" head += title + "\n" + self.rst_section_levels[2] * len(title) else: title = ":mod:`" + uri_short + "`" head += title + "\n" + self.rst_section_levels[1] * len(title) head += "\n.. automodule:: " + uri + "\n" head += "\n.. currentmodule:: " + uri + "\n" body += "\n.. currentmodule:: " + uri + "\n\n" for c in constants: # must NOT exclude from index to keep cross-refs working body += c + "\n" body += self.rst_section_levels[3] * len(c) + "\n" body += "\n.. autodata:: " + c + "\n" body += " :no-value:\n" body += " :annotation:\n" body += "\n\n" for c in classes: body += f"\n:class:`{c}`\n{self.rst_section_levels[3] * (len(c) + 9)}\n\n" body += "\n.. autoclass:: " + c + "\n" # must NOT exclude from index to keep cross-refs working body += " :members:\n" " :undoc-members:\n" " :show-inheritance:\n" "\n" head += ".. autosummary::\n\n" for f in constants + classes + functions: head += " " + f + "\n" head += "\n\n" for f in functions: # must NOT exclude from index to keep cross-refs working body += f + "\n" body += self.rst_section_levels[3] * len(f) + "\n" body += "\n.. autofunction:: " + f + "\n" body += "\n\n" return head, body def _survives_exclude(self, matchstr, match_type): r"""Return True if *matchstr* does not match patterns. 
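
        Each pattern is compiled lazily with :mod:`re` and applied with
        ``search`` (not ``match``), so an unanchored entry can reject a
        URI anywhere along its path.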
``self.package_name`` removed from front of string if present Examples -------- >>> dw = ApiDocWriter('sphinx') >>> dw._survives_exclude('sphinx.okpkg', 'package') True >>> dw.package_skip_patterns.append('^\\.badpkg$') >>> dw._survives_exclude('sphinx.badpkg', 'package') False >>> dw._survives_exclude('sphinx.badpkg', 'module') True >>> dw._survives_exclude('sphinx.badmod', 'module') True >>> dw.module_skip_patterns.append('^\\.badmod$') >>> dw._survives_exclude('sphinx.badmod', 'module') False """ if match_type == "module": patterns = self.module_skip_patterns elif match_type == "package": patterns = self.package_skip_patterns else: raise ValueError(f"Cannot interpret match type '{match_type}'") # Match to URI without package name L = len(self.package_name) if matchstr[:L] == self.package_name: matchstr = matchstr[L:] for pat in patterns: try: _ = pat.search except AttributeError: pat = re.compile(pat) if pat.search(matchstr): return False return True def discover_modules(self): r"""Return module sequence discovered from ``self.package_name``. Parameters ---------- None Returns ------- mods : sequence Sequence of module names within ``self.package_name`` Examples -------- >>> dw = ApiDocWriter('sphinx') >>> mods = dw.discover_modules() >>> 'sphinx.util' in mods True >>> dw.package_skip_patterns.append('\.util$') >>> 'sphinx.util' in dw.discover_modules() False """ modules = [self.package_name] # raw directory parsing for dirpath, dirnames, filenames in os.walk(self.root_path): # Check directory names for packages root_uri = self._path2uri(os.path.join(self.root_path, dirpath)) # Normally, we'd only iterate over dirnames, but since # dipy does not import a whole bunch of modules we'll # include those here as well (the *.py filenames). filenames = [os.path.splitext(f)[0] for f in filenames if (f.endswith(".py") and not f.startswith("__init__")) or f.endswith(".pyx")] for subpkg_name in dirnames + filenames: package_uri = ".".join((root_uri, subpkg_name)) package_path = self._uri2path(package_uri) if package_path and self._survives_exclude(package_uri, "package"): modules.append(package_uri) return sorted(modules) def write_modules_api(self, modules, outdir): # upper-level modules ulms = [ ".".join(m.split(".")[:2]) if m.count(".") >= 1 else m.split(".")[0] for m in modules ] from collections import OrderedDict module_by_ulm = OrderedDict() for v, k in zip(modules, ulms): if k in module_by_ulm: module_by_ulm[k].append(v) else: module_by_ulm[k] = [v] written_modules = [] for ulm, mods in module_by_ulm.items(): print(f"Generating docs for {ulm}:") document_head = [] document_body = [] for m in mods: print(f" -> {m}") head, body = self.generate_api_doc(m) document_head.append(head) document_body.append(body) out_module = ulm + self.rst_extension outfile = os.path.join(outdir, out_module) fileobj = open(outfile, "wt") fileobj.writelines(document_head + document_body) fileobj.close() written_modules.append(out_module) self.written_modules = written_modules def write_api_docs(self, outdir): """Generate API reST files. Parameters ---------- outdir : string Directory name in which to store files We create automatic filenames for each module Returns ------- None Notes ----- Sets self.written_modules to list of written modules """ if not os.path.exists(outdir): os.mkdir(outdir) # compose list of modules modules = self.discover_modules() self.write_modules_api(modules, outdir) def write_index(self, outdir, froot="gen", relative_to=None): """Make a reST API index file from written files. 
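
        The generated index (``froot + self.rst_extension``, i.e.
        ``gen.rst`` by default) is a bare toctree; schematically, with
        two hypothetical module pages, it looks like::

            .. AUTO-GENERATED FILE -- DO NOT EDIT!

            API Reference
            =============

            .. toctree::

              reference/module_a
              reference/module_b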
Parameters ---------- path : string Filename to write an index to outdir : string Directory to which to write generated index file froot : string, optional root (filename without extension) of filename to write to Defaults to 'gen'. We add ``self.rst_extension``. relative_to : string path to which written filenames are relative. This component of the written file path will be removed from outdir, in the generated index. Default is None, meaning, leave the path as it is. """ if self.written_modules is None: raise ValueError("No modules written") # Get full filename path path = os.path.join(outdir, froot + self.rst_extension) # Path written into index is relative to rootpath if relative_to is not None: relpath = (outdir + os.path.sep).replace(relative_to + os.path.sep, "") else: relpath = outdir idx = open(path, "wt") w = idx.write w(".. AUTO-GENERATED FILE -- DO NOT EDIT!\n\n") title = "API Reference" w(title + "\n") w("=" * len(title) + "\n\n") w(".. toctree::\n\n") for f in self.written_modules: w(" %s\n" % os.path.join(relpath, f)) idx.close() dipy-1.11.0/doc/tools/build_modref_templates.py000077500000000000000000000066711476546756600215760ustar00rootroot00000000000000#!/usr/bin/env python3 """Script to auto-generate our API docs. """ # stdlib imports from os.path import join as pjoin import re import sys # local imports from apigen import ApiDocWriter # version comparison from packaging.version import Version # ***************************************************************************** def abort(error): print(f'*WARNING* API documentation not generated: {error}') exit() if __name__ == '__main__': package = sys.argv[1] outdir = sys.argv[2] try: other_defines = sys.argv[3] except IndexError: other_defines = True else: other_defines = other_defines in ('True', 'true', '1') # Check that the package is available. If not, the API documentation is not # (re)generated and existing API documentation sources will be used. try: __import__(package) except ImportError as e: abort(f"Can not import {package}") # NOTE: with the new versioning scheme, this check is not needed anymore # Also, this might be needed if we do not use spin to generate the docs # module = sys.modules[package] # # Check that the source version is equal to the installed # # version. If the versions mismatch the API documentation sources # # are not (re)generated. This avoids automatic generation of documentation # # for older or newer versions if such versions are installed on the system. # installed_version = Version(module.__version__) # info_file = pjoin('..', package, 'info.py') # info_lines = open(info_file).readlines() # source_version = '.'.join([v.split('=')[1].strip(" '\n.") # for v in info_lines if re.match( # '^_version_(major|minor|micro|extra)', v # )]).strip('.') # source_version = Version(source_version) # print('***', source_version) # if source_version != installed_version: # print('***', installed_version) # abort("Installed version does not match source version") docwriter = ApiDocWriter(package, rst_extension='.rst', other_defines=other_defines) docwriter.package_skip_patterns += [r'\.tracking\.interfaces.*$', r'\.tracking\.gui_tools.*$', r'.*test.*$', r'^\.utils.*', r'\.stats\.resampling.*$', r'\.info.*$', r'\.__config__.*$', ] docwriter.object_skip_patterns += [ r'.*FetcherError.*$', r'.*urlopen.*', r'.*add_callback.*', r'.*Logger.*', r'.*logger.*', # Global variable. Must be enable in the future when using Typing. 
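        # (Note on semantics, not a shipped guarantee: ApiDocWriter joins
        #  these entries into one alternation and applies it with re.search
        #  to each parsed object name, so an unanchored pattern such as
        #  r'.*logger.*' also skips any name merely containing "logger".)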
r'.*hemi_icosahedron.*', r'.*unit_octahedron.*', r'.*unit_icosahedron.*', r'.*default_sphere.*', r'.*small_sphere.*', r'.*icosahedron_faces.*', r'.*icosahedron_vertices.*', r'.*octahedron_faces.*', r'.*octahedron_vertices.*', r'.*diffusion_evals.*', r'.*DATA_DIR.*', r'.*RegistrationStages.*', r'.*VerbosityLevels.*', ] docwriter.write_api_docs(outdir) docwriter.write_index(outdir, 'index', relative_to=outdir) print(f'{len(docwriter.written_modules)} files written') dipy-1.11.0/doc/tools/docgen_cmd.py000077500000000000000000000170761476546756600171500ustar00rootroot00000000000000#!/usr/bin/env python3 """ Script to generate documentation for command line utilities """ import importlib import inspect import os from os.path import join as pjoin from subprocess import PIPE, CalledProcessError, Popen import sys # version comparison # from packaging.version import Version # List of workflows to ignore SKIP_WORKFLOWS_LIST = ("Workflow", "CombinedWorkflow") def sh3(cmd): """ Execute command in a subshell, return stdout, stderr If anything appears in stderr, print it out to sys.stderr https://github.com/scikit-image/scikit-image/blob/master/doc/gh-pages.py Copyright (C) 2011, the scikit-image team All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. Neither the name of skimage nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
""" p = Popen(cmd, stdout=PIPE, stderr=PIPE, shell=True) out, err = p.communicate() retcode = p.returncode if retcode: raise CalledProcessError(retcode, cmd) else: return out.rstrip(), err.rstrip() def abort(error): print(f"*WARNING* Command line API documentation not generated: {error}") exit() def get_doc_parser(class_obj): # return inspect.getdoc(class_obj.run) try: ia_module = importlib.import_module("dipy.workflows.base") parser = ia_module.IntrospectiveArgumentParser() parser.add_workflow(class_obj()) except Exception as e: abort(f"Error on {class_obj.__name__}: {e}") return parser def format_title(text): text = text.title() line = "-" * len(text) return f"{text}\n{line}\n\n" if __name__ == "__main__": # package name: Eg: dipy package = sys.argv[1] # directory in which the generated rst files will be saved outdir = sys.argv[2] try: __import__(package) except ImportError: abort(f"Can not import {package}") # NOTE: with the new versioning scheme, this check is not needed anymore # Also, this might be needed if we do not use spin to generate the docs # module = sys.modules[package] # Check that the source version is equal to the installed # version. If the versions mismatch the API documentation sources # are not (re)generated. This avoids automatic generation of documentation # for older or newer versions if such versions are installed on the system. # installed_version = Version(module.__version__) # info_file = pjoin('..', package, 'info.py') # info_lines = open(info_file).readlines() # source_version = '.'.join( # [v.split('=')[1].strip(" '\n.") # for v in info_lines # if re.match('^_version_(major|minor|micro|extra)', v)]).strip('.') # source_version = Version(source_version) # print('***', source_version) # if source_version != installed_version: # print('***', installed_version) # abort("Installed version does not match source version") # generate docs command_list = [] workflow_module = importlib.import_module("dipy.workflows.workflow") cli_module = importlib.import_module("dipy.workflows.cli") workflows_dict = getattr(cli_module, "cli_flows") workflow_desc = {} # We get all workflows class obj in a dictionary for path_file in os.listdir(pjoin("..", "dipy", "workflows")): module_name = inspect.getmodulename(path_file) if module_name is None: continue module = importlib.import_module(f"dipy.workflows.{module_name}") members = inspect.getmembers(module) d_wkflw = {name: {"module": obj, "parser": get_doc_parser(obj)} for name, obj in members if inspect.isclass(obj) and issubclass(obj, workflow_module.Workflow) and name not in SKIP_WORKFLOWS_LIST } workflow_desc.update(d_wkflw) cmd_list = [] for fname, wflw_value in workflows_dict.items(): flow_module_name, flow_name = wflw_value print(f"Generating docs for: {fname} ({flow_name})") out_fname = fname + ".rst" with open(pjoin(outdir, out_fname), "w", encoding="utf-8") as fp: dashes = "=" * len(fname) fp.write(f".. {fname}:\n\n{dashes}\n{fname}\n{dashes}\n\n") parser = workflow_desc[flow_name]["parser"] if parser.description not in ["", "\n\n"]: fp.write(format_title("Synopsis")) fp.write(f"{parser.description}\n\n") fp.write(format_title("usage")) str_p_args = " ".join([p[0] for p in parser.positional_parameters]).lower() fp.write(".. 
code-block:: bash\n\n")
            fp.write(f"   {fname} [OPTIONS] {str_p_args}\n\n")

            fp.write(format_title("Input Parameters"))
            for p in parser.positional_parameters:
                fp.write(f"* ``{p[0]}``\n\n")
                comment = '\n  '.join([text.rstrip() for text in p[2]])
                fp.write(f"  {comment}\n\n")

            optional_params = [p for p in parser.optional_parameters
                               if not p[0].startswith("out_")]
            if optional_params:
                fp.write(format_title("General Options"))
                for p in optional_params:
                    fp.write(f"* ``--{p[0]}``\n\n")
                    comment = '\n  '.join([text.rstrip() for text in p[2]])
                    fp.write(f"  {comment}\n\n")

            if parser.output_parameters:
                fp.write(format_title("Output Options"))
                for p in parser.output_parameters:
                    fp.write(f"* ``--{p[0]}``\n\n")
                    comment = '\n  '.join([text.rstrip() for text in p[2]])
                    fp.write(f"  {comment}\n\n")

            if parser.epilog:
                fp.write(format_title("References"))
                fp.write(parser.epilog.replace("References: \n", ""))

        cmd_list.append(out_fname)
    print("Done")

    # generate index.rst
    print("Generating index.rst")
    with open(pjoin(outdir, "index.rst"), "w") as index:
        index.write(".. _workflows_reference:\n\n")
        index.write("Command Line Utilities Reference\n")
        index.write("================================\n\n")
        index.write(".. toctree::\n\n")
        for cmd in cmd_list:
            index.write(f"   {cmd}")
            index.write("\n")
    print("Done")
dipy-1.11.0/doc/tractography_methods_list.rst000066400000000000000000000020671476546756600213600ustar00rootroot00000000000000.. list-table:: Tractography methods available in DIPY.
   :widths: 14 6 12 10
   :header-rows: 1

   * - Method
     - Local vs. Global
     - Deterministic vs. Probabilistic
     - References
   * - :ref:`Local deterministic ` (EuDX)
     - Local
     - Deterministic
     - :cite:t:`Garyfallidis2012b`
   * - :ref:`Local probabilistic `
     - Local
     - Probabilistic
     - :cite:t:`Behrens2003`, :cite:t:`Behrens2007`
   * - :ref:`Bootstrap tracking `
     - Local
     - Probabilistic
     - :cite:t:`Berman2008`
   * - :ref:`Particle Filtering Tractography (PFT) `
     - Local
     - Deterministic/Probabilistic
     - :cite:t:`Girard2014`
   * - :ref:`Parallel Transport Tractography (PTT) `
     - Local
     - Deterministic
     - :cite:t:`Aydogan2021`
dipy-1.11.0/doc/upload-gh-pages.sh000077500000000000000000000015671476546756600166600ustar00rootroot00000000000000#!/bin/bash
# Upload website to gh-pages

USAGE="$0 <html_dir> <project> [<organization>]"

HTML_DIR=$1
if [ -z "$HTML_DIR" ]; then
    echo $USAGE
    exit 1
fi
if [ ! -e "$HTML_DIR/index.html" ]; then
    echo "$HTML_DIR does not contain an index.html"
    exit 1
fi
if [ -d "$HTML_DIR/.git" ]; then
    echo "$HTML_DIR already contains a .git directory"
    exit 1
fi
PROJECT=$2
if [ -z "$PROJECT" ]; then
    echo $USAGE
    exit 1
fi
ORGANIZATION=$3
if [ -z "$ORGANIZATION" ]; then
    ORGANIZATION=dipy
fi

upstream_repo="https://github.com/$ORGANIZATION/$PROJECT"
cd $HTML_DIR
git init
git checkout -b gh-pages
git add *
# A nojekyll file is needed to tell github that this is *not* a jekyll site:
touch .nojekyll
git add .nojekyll
git commit -a -m "Documentation build - no history"
git remote add origin $upstream_repo
git push origin gh-pages --force
rm -rf .git  # Yes
dipy-1.11.0/doc/upload_docs.py000066400000000000000000000062101476546756600162030ustar00rootroot00000000000000#!/usr/bin/env python3

# Script to upload docs to gh-pages branch of dipy_web that will be
# automatically detected by the dipy website.
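#
# A typical invocation, assuming the script is run from the doc/ directory
# (the relative '../dipy/info.py' lookup below relies on that layout):
#
#     python upload_docs.py
#
# It reads the source version from dipy/info.py, rebuilds the JSON docs
# ("make api", "make rstexamples", "make json") and pushes the result to
# the gh-pages branch of the dipy_web repository.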
import os from os import chdir as cd import re from subprocess import check_call import sys def sh(cmd): """Execute command in a subshell, return status code.""" print("--------------------------------------------------") print(f"Executing: {cmd}") print("--------------------------------------------------") return check_call(cmd, shell=True) # paths docs_repo_path = "_build/docs_repo" docs_repo_url = "https://github.com/dipy/dipy_web.git" if __name__ == '__main__': # get current directory startdir = os.getcwd() # find the source version info_file = '../dipy/info.py' info_lines = open(info_file).readlines() source_version = '.'.join( [v.split('=')[1].strip(" '\n.") for v in info_lines if re.match( '^_version_(major|minor|micro|extra)', v)]) print("Source version: ", source_version) # check for dev tag if source_version.split(".")[-1] == "dev": dev = True print("Development Version detected") else: dev = False # pull current docs_repo if not os.path.exists(docs_repo_path): print("docs_repo not found, pulling from git..") sh(f"git clone {docs_repo_url} {docs_repo_path}") cd(docs_repo_path) print(f"Moved to {os.getcwd()}") try: sh("git checkout gh-pages") except Exception: while True: print("\nLooks like gh-pages branch does not exist!") print("Do you want to create a new one? (y/n)") choice = str(input()).lower() if choice == 'y': sh("git checkout -b gh-pages") sh("rm -rf *") sh("git add .") sh("git commit -m 'cleaning gh-pages branch'") sh("git push origin gh-pages") break if choice == 'n': print("Please manually create a new gh-pages branch and try again.") sys.exit(0) else: print("Please enter valid choice ..") sh("git pull origin gh-pages") # check if docs for current version exists if os.path.exists(source_version) and not dev: print("docs for current version already exists") else: if dev: print("Re-building docs for development version") else: print("Building docs for a release") # build docs and copy to docs_repo cd(startdir) # remove old html and doctree files try: sh("rm -rf _build/json _build/doctrees") except Exception: pass # generate new doc and copy to docs_repo sh("make api") sh("make rstexamples") sh("make json") sh(f"cp -r _build/json {docs_repo_path}/") cd(docs_repo_path) if dev: try: sh(f"rm -r {source_version}") except Exception: pass sh(f"mv json {source_version}") sh("git add .") sh(f"git commit -m \"Add docs for {source_version}\"") sh("git push origin gh-pages") dipy-1.11.0/doc/user_guide/000077500000000000000000000000001476546756600154715ustar00rootroot00000000000000dipy-1.11.0/doc/user_guide/bibliography.rst000066400000000000000000000001651476546756600207000ustar00rootroot00000000000000.. _general_bibliography: General bibliography ==================== .. bibliography:: ./../references.bib :all: dipy-1.11.0/doc/user_guide/data.rst000066400000000000000000000016321476546756600171360ustar00rootroot00000000000000..
_data: ==== Data ==== -------------------- How to get data -------------------- The list of datasets can be retrieved using:: from dipy.workflows.io import FetchFlow available_data = FetchFlow.get_fetcher_datanames().keys() To retrieve all datasets, the following workflow can be run:: from tempfile import TemporaryDirectory from dipy.workflows.io import FetchFlow fetch_flow = FetchFlow() with TemporaryDirectory() as out_dir: fetch_flow.run(['all'], out_dir=out_dir) If you want to download a particular dataset, you can do:: from tempfile import TemporaryDirectory from dipy.workflows.io import FetchFlow fetch_flow = FetchFlow() with TemporaryDirectory() as out_dir: fetch_flow.run(['bundle_fa_hcp'], out_dir=out_dir) or:: from dipy.data import fetch_bundle_fa_hcp files, folder = fetch_bundle_fa_hcp() ------------- Datasets List ------------- Details about datasets available in DIPY are described in the table below: .. include:: dataset_list.rst dipy-1.11.0/doc/user_guide/dataset_list.rst000066400000000000000000000174561476546756600207140ustar00rootroot00000000000000.. list-table:: Datasets available in DIPY :widths: 10 8 8 8 56 10 :header-rows: 1 * - Name - Synthetic/Phantom/Human/Animal - Data features (structural; diffusion; label information) - Scanner - DIPY name - Citations * - Tractogram file formats examples - Synthetic - Tractogram file formats (`.dpy`, `.fib`, `.tck`, `.trk`) - - bundle_file_formats_example - Rheault, F. (2019). Bundles for tractography file format testing and example (Version 1.0) [Data set]. Zenodo. https://doi.org/10.5281/zenodo.3352379 * - CENIR HCP-like dataset - - Multi-shell data: b-vals: [200, 400, 1000, 2000, 3000] (s/mm^2); [20, 20, 202, 204, 206] gradient directions; Corrected for Eddy currents - - cenir_multib - * - CFIN dataset - - T1; Multi-shell data: b-vals: [200, 400, 600, 800, 1000, 1200, 1400, 1600, 1800, 2000, 2200, 2400, 2600, 2800, 3000] (s/mm^2); 496 gradient directions - - cfin_multib - Hansen, B., Jespersen, S. Data for evaluation of fast kurtosis strategies, b-value optimization and exploration of diffusion MRI contrast. Sci Data 3, 160072 (2016). doi:10.1038/sdata.2016.72 * - Gold standard streamlines IO testing - Synthetic - Tractogram file formats (`.dpy`, `.fib`, `.tck`, `.trk`) - - gold_standard_io - Rheault, F. (2019). Gold standard for tractogram io testing (Version 1.0) [Data set]. Zenodo. https://doi.org/10.5281/zenodo.2651349 * - HCP842 bundle atlas - Human - Whole brain/bundle-wise tractograms in MNI space; 80 bundles - Human Connectome Project (HCP) scanner - bundle_atlas_hcp842 - Garyfallidis, E., et al. Recognition of white matter bundles using local and global streamline-based registration and clustering. NeuroImage 170 (2017): 283-297; Yeh, F.-C., et al. Population-averaged atlas of the macroscale human structural connectome and its network topology. NeuroImage 178 (2018): 57-68. figshare.com/articles/Advanced_Atlas_of_80_Bundles_in_MNI_space/7375883 * - HCP bundle FA - Human - Fractional Anisotropy (FA); 2 bundles - - bundle_fa_hcp - * - HCP tractogram - Human - Whole brain tractogram - Human Connectome Project (HCP) scanner - target_tractogram_hcp - * - ISBI 2013 - Phantom - Multi-shell data: b-vals: [0, 1500, 2500] (s/mm^2); 64 gradient directions - - isbi2013_2shell - Daducci, A., et al. Quantitative Comparison of Reconstruction Methods for Intra-Voxel Fiber Recovery From Diffusion MRI. IEEE Transactions on Medical Imaging, vol. 33, no. 2, pp. 384-399, Feb. 2014.
HARDI reconstruction challenge 2013 * - IVIM dataset - Human - Multi-shell data: b-vals: [0, 10, 20, 30, 40, 60, 80, 100, 120, 140, 160, 180, 200, 300, 400, 500, 600, 700, 800, 900, 1000] (s/mm^2); 21 gradient directions - - fetch_ivim - Peterson, Eric (2016): IVIM dataset. figshare. Dataset. figshare.com/articles/dataset/IVIM_dataset/3395704/1 * - MNI template - Human - MNI 2009a T1, T2; 2009c T1, T1 mask - - mni_template - Fonov, V.S., Evans, A.C., Botteron, K., Almli, C.R., McKinstry, R.C., Collins, D.L., BDCG. Unbiased average age-appropriate atlases for pediatric studies. NeuroImage, Volume 54, Issue 1, January 2011, ISSN 1053–8119, doi:10.1016/j.neuroimage.2010.07.033; Fonov, V.S., Evans, A.C., McKinstry, R.C., Almli, C.R., Collins, D.L. Unbiased nonlinear average age-appropriate brain templates from birth to adulthood, NeuroImage, Volume 47, Supplement 1, July 2009, Page S102 Organization for Human Brain Mapping 2009 Annual Meeting, doi:10.1016/S1053-8119(09)70884-5 ICBM 152 Nonlinear atlases version 2009 * - qt-dMRI C57Bl6 mice dataset - Animal - 2 C57Bl6 mice test-retest qt-dMRI; Corpus callosum (CC) bundle masks - - qtdMRI_test_retest_2subjects - Wassermann, D., Santin, M., Philippe, A.-C., Fick, R., Deriche, R., Lehericy, S., Petiet, A. (2017). Test-Retest qt-dMRI datasets for "Non-Parametric GraphNet-Regularized Representation of dMRI in Space and Time" [Data set]. Zenodo. https://doi.org/10.5281/zenodo.996889 * - SCIL b0 - - b0 - GE (1.5, 3 T), Philips (3 T), Siemens (1.5, 3 T) - scil_b0 - Sherbrooke Connectivity Imaging Lab (SCIL) * - Sherbrooke 3 shells - Human - Multi-shell data: b-vals: [0, 1000, 2000, 3500] (s/mm^2); 193 gradient directions - - sherbrooke_3shell - Sherbrooke Connectivity Imaging Lab (SCIL) * - SNAIL dataset - - 2 subjects: T1; Fractional Anisotropy (FA); 27 bundles - - bundles_2_subjects - * - Stanford HARDI - Human - HARDI-like multi-shell data: b-vals: [0, 2000] (s/mm^2); 160 gradient directions - GE Discovery MR750 - stanford_hardi - Human brain diffusion-weighted MRI, collected with high diffusion-weighting angular resolution and repeated measurements at multiple diffusion-weighting strengths. Rokem, A., Yeatman, J.D., Pestilli, F., Kay, K.N., Mezer, A., van der Walt, S., and Wandell, B.A. (2015) Evaluating the Accuracy of Diffusion MRI Models in White Matter. PLoS ONE 10(4): e0123272. doi:10.1371/journal.pone.0123272 * - Stanford labels - Human - Gray matter region labels - GE Discovery MR750 - stanford_labels - Human brain diffusion-weighted MRI, collected with high diffusion-weighting angular resolution and repeated measurements at multiple diffusion-weighting strengths. Rokem, A., Yeatman, J.D., Pestilli, F., Kay, K.N., Mezer, A., van der Walt, S., and Wandell, B.A. (2015) Evaluating the Accuracy of Diffusion MRI Models in White Matter. PLoS ONE 10(4): e0123272. doi:10.1371/journal.pone.0123272 * - Stanford PVE maps - Human - Partial Volume Effects (PVE) maps: Gray matter (GM), White matter (WM), Cerebrospinal Fluid (CSF) - GE Discovery MR750 - fetch_stanford_pve_maps - Human brain diffusion-weighted MRI, collected with high diffusion-weighting angular resolution and repeated measurements at multiple diffusion-weighting strengths. Rokem, A., Yeatman, J.D., Pestilli, F., Kay, K.N., Mezer, A., van der Walt, S., and Wandell, B.A. (2015) Evaluating the Accuracy of Diffusion MRI Models in White Matter. PLoS ONE 10(4): e0123272.
doi:10.1371/journal.pone.0123272 * - Stanford T1 - Human - T1 - GE Discovery MR750 - stanford_t1 - Human brain diffusion-weighted MRI, collected with high diffusion-weighting angular resolution and repeated measurements at multiple diffusion-weighting strengths. Rokem, A., Yeatman, J.D., Pestilli, F., Kay, K.N., Mezer, A., van der Walt, S., and Wandell, B.A. (2015) Evaluating the Accuracy of Diffusion MRI Models in White Matter. PLoS ONE 10(4): e0123272. doi:10.1371/journal.pone.0123272 * - SyN data - Human - T1; b0 - - syn_data - * - Taiwan NTU DSI - - DSI-like data; Multi-shell data: b-vals: [0, 308, 615, 923, 1231, 1538, 1538, 1846, 1846, 2462, 2769, 3077, 3385, 3692, 4000] (s/mm^2); 203 gradient directions - Siemens Trio - taiwan_ntu_dsi - National Taiwan University (NTU) Hospital Advanced Biomedical MRI Lab DSI MRI data * - Tissue data - Human - T1; denoised T1; Power map - - tissue_data - dipy-1.11.0/doc/user_guide/dependencies.rst000066400000000000000000000017061476546756600206550ustar00rootroot00000000000000.. _dependencies: ================================ Python versions and dependencies ================================ DIPY follows the `Scientific Python`_ `SPEC 0 — Minimum Supported Versions`_ recommendation as closely as possible, including the supported Python and dependencies versions. Further information can be found in :ref:`toolchain-roadmap`. Dependencies ------------ DIPY depends on a few standard libraries: python_ (the core language), numpy_ (for numerical computation), scipy_ (for more specific mathematical operations), cython_ (for extra speed), nibabel_ (for file formats; we require version 3.0 or higher), h5py_ (for handling large datasets), tqdm_ and trx-python_. Optionally, it can use fury_ (for visualisation), matplotlib_ (for scientific plotting), and ipython_ (for interaction with the code and its results). cvxpy_, scikit-learn_, statsmodels_, pandas_ and tensorflow_ are required for some modules. .. include:: ../links_names.inc dipy-1.11.0/doc/user_guide/getting_started.rst000066400000000000000000000024111476546756600214100ustar00rootroot00000000000000.. _getting_started: *************** Getting Started *************** Here is a quick snippet showing how to calculate `color FA`, also known as the DEC map. We use a Tensor model to reconstruct the datasets, which are saved in a Nifti file along with the b-values and b-vectors, which are saved as text files. Finally, we save our result as a Nifti file :: fdwi = 'dwi.nii.gz' fbval = 'dwi.bval' fbvec = 'dwi.bvec' from dipy.io.image import load_nifti, save_nifti from dipy.io import read_bvals_bvecs from dipy.core.gradients import gradient_table from dipy.reconst.dti import TensorModel data, affine = load_nifti(fdwi) bvals, bvecs = read_bvals_bvecs(fbval, fbvec) gtab = gradient_table(bvals, bvecs=bvecs) tenmodel = TensorModel(gtab) tenfit = tenmodel.fit(data) save_nifti('colorfa.nii.gz', tenfit.color_fa, affine) As an exercise, you can try to calculate `color FA` with your datasets. You will need to replace the filepaths `fdwi`, `fbval` and `fbvec`. Here is what a slice should look like. .. image:: /_static/colorfa.png :align: center ********** Next Steps ********** You can learn more about how to use DIPY_ with your datasets by reading the examples in our :ref:`examples`. .. include:: ../links_names.incdipy-1.11.0/doc/user_guide/index.rst000066400000000000000000000003451476546756600173340ustar00rootroot00000000000000.. _user_guide: ========================= DIPY User Guide ========================= ..
toctree:: :maxdepth: 2 :hidden: introduction mission installation dependencies getting_started data bibliographydipy-1.11.0/doc/user_guide/installation.rst000066400000000000000000000124061476546756600207270ustar00rootroot00000000000000.. _installation: ############ Installation ############ DIPY_ is in active development. You can install it from our latest release, but you may find that the release has gotten well behind the current development - at least - we hope so - if we're developing fast enough! If you want to install the latest and greatest from the bleeding edge of the development, skip to :ref:`installation-from-source`. If you just want to install a released version, read on for your platform. ******************** Installing a release ******************** In general, we recommend you try to install DIPY_ package :ref:`install-pip`. .. _install-packages: .. _install-pip: Using pip: ========== This method should work under Linux, Mac OS X, and Windows. For Windows and macOS you can use Anaconda_ to get numpy, scipy, cython and lots of other useful python modules. Anaconda_ is a big package but will install many tools and libraries that are useful for scientific processing. When you have numpy, scipy and cython installed then try:: pip install dipy Then from any python console or script try:: >>> import dipy >>> print(dipy.get_info()) Using Anaconda: =============== On all platforms, you can use Anaconda_ to install DIPY. To do so issue the following command in a terminal:: conda install -c conda-forge dipy Some of the visualization methods require the FURY_ library and this can be installed separately (for the time being only on Python 3.4+):: conda install -c conda-forge fury Using packages: =============== Windows ------- #. First, install the python library dependencies. One easy way to do that is to use the Anaconda_ distribution (see below for :ref:`alternatives`). #. Even with Anaconda_ installed, you will still need to install the nibabel_ library, which supports reading and writing of neuroimaging data formats. Open a terminal and type:: pip install nibabel #. Finally, we are ready to install DIPY itself. Same as with `nibabel` above, we will type at the terminal shell command line:: pip install dipy When the installation has finished we can check if it is successful in the following way. From a Python console script try:: >>> import dipy >>> print(dipy.__version__) This should work with no error. #. Some of the visualization methods require the FURY_ library and this can be installed by doing :: pip install fury OSX --- #. To use DIPY_, you need to have some :ref:`dependencies` installed. First of all, make sure that you have installed the Apple Xcode_ developer tools. You'll need those to install all the following dependencies. #. Next, install the python library dependencies. One easy way to do that is to use the Anaconda_ distribution (see below for :ref:`alternatives`). #. Even with Anaconda_ installed, you will still need to install the nibabel_ library, which supports the reading and writing of neuroimaging data formats. Open a terminal and type:: pip install nibabel #. Finally, we are ready to install DIPY itself. Same as with `nibabel` above, we will type at the terminal shell command line:: pip install dipy When the installation has finished we can check if it is successful in the following way. From a Python console script try:: >>> import dipy This should work with no error. #. 
Some of the visualization methods require the FURY_ library and this can be installed by doing:: pip install fury Linux ----- For Debian, Ubuntu and Mint set up the NeuroDebian_ repositories - see `NeuroDebian how to`_. Then:: sudo apt-get install python-dipy In Fedora DIPY can be installed from the main repositories courtesy of NeuroFedora_:: sudo dnf install python3-dipy We hope to get packages for the other Linux distributions, but for now, please try :ref:`install-pip` instead. ******* Support ******* Contact us: =========== Do these installation instructions work for you? For any problems/suggestions please let us know by: - sending us an e-mail to the `dipy mailing list`_ or - sending us an e-mail to the `nipy mailing list`_ with the subject line starting with ``[DIPY]`` or - creating a discussion thread via https://github.com/dipy/dipy/discussions Common problems: ================ Multiple installations ---------------------- Make sure that you have uninstalled all previous versions of DIPY before installing a new one. A simple and general way to uninstall DIPY is by removing the installation directory. You can find where DIPY is installed by using:: import dipy dipy.__file__ and then remove the DIPY directory that contains that file. .. _alternatives: Alternatives to Anaconda ------------------------- If you have problems installing Anaconda_ we recommend using Canopy_. Memory issues ------------- DIPY can process large diffusion datasets. For this reason, we recommend using a 64bit operating system that can allocate larger memory chunks than 32bit operating systems. If you don't have a 64bit computer, that is okay; DIPY works with 32bit too. .. _python-versions: Note on python versions ----------------------- DIPY requires Python 3.10 or newer, in line with the SPEC 0 recommendation described in :ref:`dependencies`. Some visualization functionality additionally depends on the FURY_ library. .. include:: ../links_names.inc dipy-1.11.0/doc/user_guide/introduction.rst000066400000000000000000000013651476546756600207510ustar00rootroot00000000000000.. _introduction: =============== What is DIPY? =============== * **a python package** for analyzing ``diffusion MRI data`` * a free and open project to collaborate and **share** your code and expertise. Want to know more? Read our :ref:`home`, :ref:`installation` guidelines and try the :ref:`examples`. Didn't find what you are looking for? Then try :ref:`faq` and then if this doesn't help send an e-mail to our e-mail list neuroimaging@python.org with subject starting with ``[DIPY]``. .. figure:: /_static/pretty_tracks.png :align: center This is a depiction of tractography created with DIPY. If you want to learn how you can create these with your datasets, read the examples in our :ref:`home`. .. include:: ../links_names.inc dipy-1.11.0/doc/user_guide/mission.rst000066400000000000000000000010661476546756600177070ustar00rootroot00000000000000.. _mission: =================== Mission statement =================== The purpose of DIPY_ is to make it **easier to do better diffusion MR imaging research**. Following up with the nipy mission statement we aim to build software that is * **clearly written** * **clearly explained** * **a good fit for the underlying ideas** * **a natural home for collaboration** We hope that, if we fail to do this, you will let us know and we will try and make it better. See also :ref:`introduction` ..
include:: ../links_names.inc dipy-1.11.0/meson.build000066400000000000000000000035761476546756600147460ustar00rootroot00000000000000project( 'dipy', 'c', 'cpp', 'cython', version: run_command(['tools/gitversion.py'], check: true).stdout().strip(), license: 'BSD-3', meson_version: '>= 1.1.0', default_options: [ 'buildtype=debugoptimized', 'c_std=c99', 'cpp_std=c++14', 'optimization=2', ], ) # https://mesonbuild.com/Python-module.html py_mod = import('python') py3 = py_mod.find_installation(pure: false) py3_dep = py3.dependency() # filesystem Module to manage files and directories fs = import('fs') cc = meson.get_compiler('c') cpp = meson.get_compiler('cpp') cy = meson.get_compiler('cython') host_system = host_machine.system() host_cpu_family = host_machine.cpu_family() cython = find_program('cython') # Check compiler is recent enough (see "Toolchain Roadmap" for details) if cc.get_id() == 'gcc' if not cc.version().version_compare('>=8.0') error('DIPY requires GCC >= 8.0') endif elif cc.get_id() == 'msvc' if not cc.version().version_compare('>=19.20') error('DIPY requires at least vc142 (default with Visual Studio 2019) ' + \ 'when building with MSVC') endif endif if not cy.version().version_compare('>=0.29.35') error('DIPY requires Cython >= 0.29.35') endif # TODO: the below -Wno flags are all needed to silence warnings in # f2py-generated code. This should be fixed in f2py itself. #_global_c_args = cc.get_supported_arguments( # '-Wno-unused-but-set-variable', # '-Wno-unused-function', # '-Wno-conversion', # '-Wno-misleading-indentation', # '-Wno-incompatible-pointer-types', #) #add_project_arguments(_global_c_args, language : 'c') # We need -lm for all C code (assuming it uses math functions, which is safe to # assume for dipy). For C++ it isn't needed, because libstdc++/libc++ is # guaranteed to depend on it. 
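# On platforms where the math routines live in the C runtime itself (e.g. # MSVC on Windows) there is no separate libm; find_library() below then # reports not-found and the '-lm' link argument is simply skipped, which is # why the lookup is marked required: false.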
m_dep = cc.find_library('m', required : false) if m_dep.found() add_project_link_arguments('-lm', language : 'c') endif subdir('dipy')dipy-1.11.0/pyproject.toml000066400000000000000000000154131476546756600155110ustar00rootroot00000000000000[project] name = "dipy" description = "Diffusion MRI Imaging in Python" license = {file = "LICENSE"} readme = "README.rst" requires-python = ">=3.10" authors = [{ name = "DIPY developers", email = "dipy@python.org" }] maintainers = [ {name = "Eleftherios Garyfallidis", email="neuroimaging@python.org"}, {name = "Ariel Rokem", email="neuroimaging@python.org"}, {name = "Serge Koudoro", email="neuroimaging@python.org"}, ] keywords = ["dipy", "diffusionimaging", "dti", "tracking", "tractography", "diffusionmri", "mri", "tractometry", "connectomics", "brain", "dipymri", "microstructure", "deeplearning", "registration", "segmentations", "simulation", "medical", "imaging", "brain", "machinelearning", "signalprocessing"] classifiers = [ 'Development Status :: 4 - Beta', 'Environment :: Console', 'Intended Audience :: Developers', 'Intended Audience :: Science/Research', 'License :: OSI Approved :: BSD License', 'Programming Language :: Python', 'Programming Language :: Python :: 3', 'Programming Language :: Python :: 3.10', 'Programming Language :: Python :: 3.11', 'Programming Language :: Python :: 3.12', 'Programming Language :: Python :: 3 :: Only', "Topic :: Software Development :: Libraries", 'Topic :: Scientific/Engineering', "Operating System :: OS Independent", 'Operating System :: Microsoft :: Windows', 'Operating System :: POSIX', 'Operating System :: Unix', 'Operating System :: MacOS', ] version = "1.11.0" dependencies = [ "numpy>=1.22.4", "scipy>=1.8", "nibabel>=3.0.0", "h5py>=3.1.0", "packaging>=21", "tqdm>=4.30.0", "trx-python>=0.3.0", ] [project.scripts] dipy_align_syn = "dipy.workflows.cli:run" dipy_align_affine = "dipy.workflows.cli:run" dipy_apply_transform = "dipy.workflows.cli:run" dipy_buan_shapes = "dipy.workflows.cli:run" dipy_buan_profiles = "dipy.workflows.cli:run" dipy_buan_lmm = "dipy.workflows.cli:run" dipy_bundlewarp = "dipy.workflows.cli:run" dipy_classify_tissue = "dipy.workflows.cli:run" dipy_correct_biasfield = "dipy.workflows.cli:run" dipy_correct_motion = "dipy.workflows.cli:run" dipy_denoise_nlmeans = "dipy.workflows.cli:run" dipy_denoise_lpca = "dipy.workflows.cli:run" dipy_denoise_mppca = "dipy.workflows.cli:run" dipy_denoise_patch2self = "dipy.workflows.cli:run" dipy_evac_plus = "dipy.workflows.cli:run" dipy_fetch = "dipy.workflows.cli:run" dipy_fit_csa = "dipy.workflows.cli:run" dipy_fit_csd = "dipy.workflows.cli:run" dipy_fit_dki = "dipy.workflows.cli:run" dipy_fit_dsi = "dipy.workflows.cli:run" dipy_fit_dsid = "dipy.workflows.cli:run" dipy_fit_dti = "dipy.workflows.cli:run" dipy_fit_forecast = "dipy.workflows.cli:run" dipy_fit_gqi = "dipy.workflows.cli:run" dipy_fit_ivim = "dipy.workflows.cli:run" dipy_fit_mapmri = "dipy.workflows.cli:run" dipy_fit_opdt = "dipy.workflows.cli:run" dipy_fit_qball = "dipy.workflows.cli:run" dipy_fit_sdt = "dipy.workflows.cli:run" dipy_fit_sfm = "dipy.workflows.cli:run" dipy_math = "dipy.workflows.cli:run" dipy_mask = "dipy.workflows.cli:run" dipy_gibbs_ringing = "dipy.workflows.cli:run" dipy_horizon = "dipy.workflows.cli:run" dipy_info = "dipy.workflows.cli:run" dipy_extract_b0 = "dipy.workflows.cli:run" dipy_extract_shell = "dipy.workflows.cli:run" dipy_extract_volume = "dipy.workflows.cli:run" dipy_labelsbundles = "dipy.workflows.cli:run" dipy_median_otsu = "dipy.workflows.cli:run" 
dipy_recobundles = "dipy.workflows.cli:run" dipy_reslice = "dipy.workflows.cli:run" dipy_sh_convert_mrtrix = "dipy.workflows.cli:run" dipy_snr_in_cc = "dipy.workflows.cli:run" dipy_split = "dipy.workflows.cli:run" dipy_track = "dipy.workflows.cli:run" dipy_track_pft = "dipy.workflows.cli:run" dipy_slr = "dipy.workflows.cli:run" dipy_concatenate_tractograms = "dipy.workflows.cli:run" dipy_convert_tractogram = "dipy.workflows.cli:run" dipy_convert_tensors = "dipy.workflows.cli:run" dipy_convert_sh = "dipy.workflows.cli:run" dipy_nifti2pam = "dipy.workflows.cli:run" dipy_pam2nifti = "dipy.workflows.cli:run" dipy_tensor2pam = "dipy.workflows.cli:run" [project.optional-dependencies] all = ["dipy[dev,doc,style,test, viz, ml, extra, benchmark]"] style = ["ruff", "pre-commit"] viz = ["fury>=0.10.0", "matplotlib"] test = ["pytest", "coverage", "coveralls", "codecov"] ml = ["scikit_learn", "pandas", "statsmodels>=0.14.0", "tables", "tensorflow>=2.18.0", "torch"] dev = [ # Also update [build-system] -> requires 'meson-python>=0.13', 'wheel', 'setuptools~=69.5', 'packaging>=21', 'ninja', 'Cython>=0.29.35', 'numpy>=1.22.4', # Developer UI 'spin>=0.5', 'build', ] extra = ["dipy[viz, ml]", "cvxpy", "scikit-image", "boto3", "numexpr" ] doc = [ "numpydoc", "sphinx>=7.2.6", "texext", "tomli; python_version < \"3.11\"", "sphinxcontrib-bibtex", "sphinx_design", "sphinx-gallery>=0.10.0", "tomli>=2.0.1", "grg-sphinx-theme>=0.4.0", "Jinja2" ] benchmark = [ "asv", "pyperf", "virtualenv", ] [project.urls] homepage = "https://dipy.org" documentation = "https://docs.dipy.org/stable/" source = "https://github.com/dipy/dipy" download = "https://pypi.org/project/dipy/#files" tracker = "https://github.com/dipy/dipy/issues" [build-system] build-backend = "mesonpy" requires = [ "meson-python>=0.13.1", "Cython>=3.0.4", "packaging>21.0", "wheel", # numpy requirement for wheel builds for distribution on PyPI - building # against 2.x yields wheels that are also compatible with numpy 1.x at # runtime. # Note that building against numpy 1.x works fine too - users and # redistributors can do this by installing the numpy version they like and # disabling build isolation. 
"numpy>=2.0.0rc1; python_version>='3.10' and platform_python_implementation!='PyPy'", "numpy; python_version>='3.10' and platform_python_implementation=='PyPy'", ] [tool.spin] package = 'dipy' [tool.spin.commands] Build = [ "spin.cmds.meson.build", "spin.cmds.meson.test", "spin.cmds.build.sdist", ".spin/cmds.py:clean" ] Environments = [ "spin.cmds.meson.run", "spin.shell", "spin.ipython", "spin.python" ] "Documentation" = [ "spin.cmds.meson.docs", # ".spin/cmds.py:docs" ] Metrics = [ ".spin/cmds.py:bench", # ".spin/cmds.py:coverage ] # TODO: Add custom commands [tool.pytest.ini_options] addopts = "--ignore=dipy/testing/decorators.py --ignore-glob='bench*.py' --ignore-glob=**/benchmarks/*" filterwarnings = [ # FURY "ignore:.*You do not have FURY installed.*:UserWarning", # pkg_resources (via Ray) "ignore:Implementing implicit namespace packages.*:DeprecationWarning", "ignore:Deprecated call to `pkg_resources.*:DeprecationWarning", "ignore:pkg_resources is deprecated as an API.*:DeprecationWarning", ] dipy-1.11.0/requirements.txt000066400000000000000000000002621476546756600160550ustar00rootroot00000000000000# Check against .travis.yml file and dipy/info.py cython>=0.29.35, !=0.29.29 numpy>=1.22.4 scipy>=1.8.1 nibabel>=4.0.0 h5py>=3.7.0 packaging>=19.0 tqdm>=4.30.0 trx-python>=0.2.9 dipy-1.11.0/requirements/000077500000000000000000000000001476546756600153145ustar00rootroot00000000000000dipy-1.11.0/requirements/build.txt000066400000000000000000000001341476546756600171520ustar00rootroot00000000000000meson-python>=0.13 ninja Cython>=0.29.35 packaging>=20.0 wheel build numpy>=1.21.1 spin==0.5dipy-1.11.0/ruff.toml000066400000000000000000000015441476546756600144340ustar00rootroot00000000000000target-version = "py310" line-length = 88 force-exclude = true extend-exclude = [ "__pycache__", "build", "_version.py", "doc/sphinxext/**", "doc/tools/*.py", "doc/upload_docs.py", "tools/**", # dipy/info.py ] [lint] select = [ "F", "E", "C", "W", "B", "I", ] ignore = [ "B905", "C901", "E203", "F811" ] [lint.extend-per-file-ignores] "dipy/align/__init__.py" = ["E402"] "dipy/reconst/*.py" = ["B026"] "dipy/viz/__init__.py" = ["F401"] "doc/examples/**" = ["E402"] "doc/conf.py" = ["E402"] [lint.isort] case-sensitive = true combine-as-imports = true force-sort-within-sections = true known-first-party = ["dipy"] no-sections = false order-by-type = true relative-imports-order = "closest-to-furthest" section-order = ["future", "standard-library", "third-party", "first-party", "local-folder"] [format] quote-style = "double" dipy-1.11.0/src/000077500000000000000000000000001476546756600133605ustar00rootroot00000000000000dipy-1.11.0/src/conditional_omp.h000066400000000000000000000015721476546756600167140ustar00rootroot00000000000000/* Header file to conditionally wrap omp.h defines * * _OPENMP should be defined if omp.h is safe to include */ #if defined(_OPENMP) #include #define have_openmp 1 #else /* These are fake defines to make these symbols valid in the c / pyx file * * All uses of these symbols should to be prefaced with ``if have_openmp``, as * in: * * cdef omp_lock_t lock * if have_openmp: * openmp.omp_init_lock(&lock) * * */ typedef int omp_lock_t; void omp_init_lock(omp_lock_t *lock) {}; void omp_destroy_lock(omp_lock_t *lock) {}; void omp_set_lock(omp_lock_t *lock) {}; void omp_unset_lock(omp_lock_t *lock) {}; int omp_test_lock(omp_lock_t *lock) { return -1;}; void omp_set_dynamic(int dynamic_threads) {}; void omp_set_num_threads(int num_threads) {}; int omp_get_num_procs() { return -1;}; int 
omp_get_max_threads() { return -1; }; #define have_openmp 0 #endif dipy-1.11.0/src/ctime.pxd000066400000000000000000000105241476546756600152000ustar00rootroot00000000000000""" Cython implementation of (parts of) the standard library time module. Note: On implementations that lack a C-API for monotonic/perfcounter clocks (like PyPy), the fallback code uses the system clock which may return absolute time values from a different value range, differing from those provided by Python's "time" module. Note: To remove and use Cython when the cython minimum version of the project will be 3.1.0 """ from libc.stdint cimport int64_t from cpython.exc cimport PyErr_SetFromErrno cdef extern from *: """ #if PY_VERSION_HEX >= 0x030d00b1 || defined(PyTime_t) #define __Pyx_PyTime_t PyTime_t #else #define __Pyx_PyTime_t _PyTime_t #endif #if PY_VERSION_HEX >= 0x030d00b1 || defined(PyTime_TimeRaw) static CYTHON_INLINE __Pyx_PyTime_t __Pyx_PyTime_TimeRaw(void) { __Pyx_PyTime_t tic; (void) PyTime_TimeRaw(&tic); return tic; } #else #define __Pyx_PyTime_TimeRaw() _PyTime_GetSystemClock() #endif #if PY_VERSION_HEX >= 0x030d00b1 || defined(PyTime_MonotonicRaw) static CYTHON_INLINE __Pyx_PyTime_t __Pyx_PyTime_MonotonicRaw(void) { __Pyx_PyTime_t tic; (void) PyTime_MonotonicRaw(&tic); return tic; } #elif CYTHON_COMPILING_IN_PYPY && !defined(_PyTime_GetMonotonicClock) #define __Pyx_PyTime_MonotonicRaw() _PyTime_GetSystemClock() #else #define __Pyx_PyTime_MonotonicRaw() _PyTime_GetMonotonicClock() #endif #if PY_VERSION_HEX >= 0x030d00b1 || defined(PyTime_PerfCounterRaw) static CYTHON_INLINE __Pyx_PyTime_t __Pyx_PyTime_PerfCounterRaw(void) { __Pyx_PyTime_t tic; (void) PyTime_PerfCounterRaw(&tic); return tic; } #elif CYTHON_COMPILING_IN_PYPY && !defined(_PyTime_GetPerfCounter) #define __Pyx_PyTime_PerfCounterRaw() __Pyx_PyTime_MonotonicRaw() #else #define __Pyx_PyTime_PerfCounterRaw() _PyTime_GetPerfCounter() #endif #if PY_VERSION_HEX >= 0x030d00b1 || defined(PyTime_AsSecondsDouble) #define __Pyx_PyTime_AsSecondsDouble(t) PyTime_AsSecondsDouble(t) #else #define __Pyx_PyTime_AsSecondsDouble(t) _PyTime_AsSecondsDouble(t) #endif """ ctypedef int64_t PyTime_t "__Pyx_PyTime_t" ctypedef PyTime_t _PyTime_t "__Pyx_PyTime_t" # legacy, use "PyTime_t" instead PyTime_t PyTime_TimeUnchecked "__Pyx_PyTime_TimeRaw" () noexcept nogil # legacy, use "PyTime_TimeRaw" instead PyTime_t PyTime_TimeRaw "__Pyx_PyTime_TimeRaw" () noexcept nogil PyTime_t PyTime_MonotonicRaw "__Pyx_PyTime_MonotonicRaw" () noexcept nogil PyTime_t PyTime_PerfCounterRaw "__Pyx_PyTime_PerfCounterRaw" () noexcept nogil double PyTime_AsSecondsDouble "__Pyx_PyTime_AsSecondsDouble" (PyTime_t t) noexcept nogil from libc.time cimport ( tm, time_t, localtime as libc_localtime, ) cdef inline double time() noexcept nogil: cdef PyTime_t tic = PyTime_TimeRaw() return PyTime_AsSecondsDouble(tic) cdef inline int64_t time_ns() noexcept nogil: return PyTime_TimeRaw() cdef inline double perf_counter() noexcept nogil: cdef PyTime_t tic = PyTime_PerfCounterRaw() return PyTime_AsSecondsDouble(tic) cdef inline int64_t perf_counter_ns() noexcept nogil: return PyTime_PerfCounterRaw() cdef inline double monotonic() noexcept nogil: cdef PyTime_t tic = PyTime_MonotonicRaw() return PyTime_AsSecondsDouble(tic) cdef inline int64_t monotonic_ns() noexcept nogil: return PyTime_MonotonicRaw() cdef inline int _raise_from_errno() except -1 with gil: PyErr_SetFromErrno(RuntimeError) return -1 # Let the C compiler know that this function always raises. 
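# The localtime() wrapper below relies on that pattern to stay nogil: on a # NULL result from the C library it calls _raise_from_errno(), which # re-acquires the GIL just long enough to raise RuntimeError from errno.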
cdef inline tm localtime() except * nogil: """ Analogue to the stdlib time.localtime. The returned struct has some entries that the stdlib version does not: tm_gmtoff, tm_zone """ cdef: time_t tic = time() tm* result result = libc_localtime(&tic) if result is NULL: _raise_from_errno() # Fix 0-based date values (and the 1900-based year). # See tmtotuple() in https://github.com/python/cpython/blob/master/Modules/timemodule.c result.tm_year += 1900 result.tm_mon += 1 result.tm_wday = ((result.tm_wday + 6) % 7) result.tm_yday += 1 return result[0]dipy-1.11.0/src/cythonutils.h000066400000000000000000000001231476546756600161120ustar00rootroot00000000000000// Maximum number of dimension supported by Cython's memoryview #define MAX_NDIM 7 dipy-1.11.0/src/dpy_math.h000066400000000000000000000037111476546756600153400ustar00rootroot00000000000000/* dipy math functions * * To give some platform independence for simple math functions */ #include #include "numpy/npy_math.h" #define DPY_PI NPY_PI /* From numpy npy_math.c.src commit b2f6792d284b0e9383093c30d51ec3a82e8312fd*/ double dpy_log2(double x) { #ifdef HAVE_LOG2 return log2(x); #else return NPY_LOG2E*log(x); #endif } #if (defined(_WIN32) || defined(_WIN64)) && !defined(__GNUC__) #define fmin min #endif #define dpy_floor(x) floor((double)(x)) double dpy_rint(double x) { #ifdef HAVE_RINT return rint(x); #else double y, r; y = dpy_floor(x); r = x - y; if (r > 0.5) { y += 1.0; } /* Round to nearest even */ if (r == 0.5) { r = y - 2.0*dpy_floor(0.5*y); if (r == 1.0) { y += 1.0; } } return y; #endif } int dpy_signbit(double x) { #ifdef signbit return signbit(x); #else union { double d; short s[4]; int i[2]; } u; u.d = x; #if NPY_SIZEOF_INT == 4 #ifdef WORDS_BIGENDIAN /* defined in pyconfig.h */ return u.i[0] < 0; #else return u.i[1] < 0; #endif #else /* NPY_SIZEOF_INT != 4 */ #ifdef WORDS_BIGENDIAN return u.s[0] < 0; #else return u.s[3] < 0; #endif #endif /* NPY_SIZEOF_INT */ #endif /*NPY_HAVE_DECL_SIGNBIT*/ } #ifndef NPY_HAVE_DECL_ISNAN #define dpy_isnan(x) ((x) != (x)) #else #ifdef _MSC_VER #define dpy_isnan(x) _isnan((x)) #else #define dpy_isnan(x) isnan(x) #endif #endif #ifndef NPY_HAVE_DECL_ISFINITE #ifdef _MSC_VER #define dpy_isfinite(x) _finite((x)) #else #define dpy_isfinite(x) !npy_isnan((x) + (-x)) #endif #else #define dpy_isfinite(x) isfinite((x)) #endif #ifndef NPY_HAVE_DECL_ISINF #define dpy_isinf(x) (!dpy_isfinite(x) && !dpy_isnan(x)) #else #ifdef _MSC_VER #define dpy_isinf(x) (!_finite((x)) && !_isnan((x))) #else #define dpy_isinf(x) isinf((x)) #endif #endif dipy-1.11.0/src/safe_openmp.pxd000066400000000000000000000012041476546756600163660ustar00rootroot00000000000000cdef extern from "conditional_omp.h": ctypedef struct omp_lock_t: pass extern void omp_init_lock(omp_lock_t *) noexcept nogil extern void omp_destroy_lock(omp_lock_t *) noexcept nogil extern void omp_set_lock(omp_lock_t *) noexcept nogil extern void omp_unset_lock(omp_lock_t *) noexcept nogil extern int omp_test_lock(omp_lock_t *) noexcept nogil extern void omp_set_dynamic(int dynamic_threads) noexcept nogil extern void omp_set_num_threads(int num_threads) noexcept nogil extern int omp_get_num_procs() noexcept nogil extern int omp_get_max_threads() noexcept nogil cdef int have_openmp dipy-1.11.0/tools/000077500000000000000000000000001476546756600137315ustar00rootroot00000000000000dipy-1.11.0/tools/build_dmgs.py000077500000000000000000000044211476546756600164200ustar00rootroot00000000000000#!/usr/bin/env python3 """Script to build dmgs for buildbot builds Examples -------- 
%(prog)s "dipy-dist/dipy*-0.6.0-py*mpkg" Note quotes around the globber first argument to protect it from shell globbing. """ from argparse import ArgumentParser, RawDescriptionHelpFormatter from functools import partial from glob import glob import os from os.path import isdir, isfile, join as pjoin import shutil from subprocess import check_call import warnings my_call = partial(check_call, shell=True) BUILDBOT_LOGIN = "buildbot@nipy.bic.berkeley.edu" BUILDBOT_HTML = "nibotmi/public_html/" def main(): parser = ArgumentParser(description=__doc__, formatter_class=RawDescriptionHelpFormatter) parser.add_argument('globber', type=str, help='glob to search for build mpkgs') parser.add_argument('--out-path', type=str, default='mpkg-dist', help='path for output files (default="mpkg-dist")', metavar='OUTPATH') parser.add_argument('--clobber', action='store_true', help='Delete OUTPATH if exists') args = parser.parse_args() globber = args.globber out_path = args.out_path address = f"{BUILDBOT_LOGIN}:{BUILDBOT_HTML}{globber}" if isdir(out_path): if not args.clobber: raise RuntimeError(f'Path {out_path} exists and "clobber" not set') shutil.rmtree(out_path) os.mkdir(out_path) cwd = os.path.abspath(os.getcwd()) os.chdir(out_path) try: my_call(f'scp -r {address} .') found_mpkgs = sorted(glob('*.mpkg')) for mpkg in found_mpkgs: pkg_name, ext = os.path.splitext(mpkg) assert ext == '.mpkg' my_call(f'sudo reown_mpkg {mpkg} root admin') os.mkdir(pkg_name) pkg_moved = pjoin(pkg_name, mpkg) os.rename(mpkg, pkg_moved) readme = pjoin(pkg_moved, 'Contents', 'Resources', 'ReadMe.txt') if isfile(readme): shutil.copy(readme, pkg_name) else: warnings.warn(f"Could not find readme with {readme}") my_call('sudo hdiutil create {0}.dmg -srcfolder ./{0}/ -ov'.format(pkg_name)) finally: os.chdir(cwd) if __name__ == '__main__': main() dipy-1.11.0/tools/build_release000077500000000000000000000012561476546756600164620ustar00rootroot00000000000000#!/usr/bin/env python3 """dipy release build script. """ import os from toollib import c, cd, compile_tree, get_dipydir, pjoin, remove_tree # Get main dipy dir, this will raise if it doesn't pass some checks dipydir = get_dipydir() cd(dipydir) # Load release info execfile(pjoin('dipy','info.py')) # Check that everything compiles compile_tree() # Cleanup for d in ['build','dist',pjoin('doc','_build'),pjoin('doc','dist')]: if os.path.isdir(d): remove_tree(d) # Build source and binary distros c('./setup.py sdist --formats=gztar,zip') # Build eggs for version in ['2.5', '2.6', '2.7']: cmd='python'+version+' ./setup_egg.py bdist_egg' stat = os.system(cmd) dipy-1.11.0/tools/ci/000077500000000000000000000000001476546756600143245ustar00rootroot00000000000000dipy-1.11.0/tools/ci/activate_env.sh000077500000000000000000000005241476546756600173340ustar00rootroot00000000000000if [ -e venv/bin/activate ]; then source venv/bin/activate elif [ -e venv/Scripts/activate ]; then source venv/Scripts/activate elif [ "$INSTALL_TYPE" == "conda" ]; then conda init bash source $CONDA/etc/profile.d/conda.sh conda activate venv else echo Cannot activate virtual environment ls -R venv false fidipy-1.11.0/tools/ci/archives/000077500000000000000000000000001476546756600161305ustar00rootroot00000000000000dipy-1.11.0/tools/ci/archives/.travis.yml.old000066400000000000000000000117401476546756600210210ustar00rootroot00000000000000# vim ft=yaml # Multiple lines can be made a single "virtual line" because of the way that # Travis munges each line before executing it to print out the exit status. 
# It's okay for it to be on multiple physical lines, so long as you remember: # - There can't be any leading "-"s - All newlines will be removed, so use # ";"s sudo: false # To use travis container infrastructure language: python cache: directories: - $HOME/.cache/pip addons: apt: packages: - libhdf5-serial-dev env: global: - DEPENDS="cython numpy matplotlib h5py nibabel cvxpy tqdm" - VENV_ARGS="--python=python" - INSTALL_TYPE="setup" - PRE_WHEELS="https://pypi.anaconda.org/scipy-wheels-nightly/simple" - EXTRA_PIP_FLAGS="--timeout=60 " python: - 3.6 - 3.7 - 3.8 matrix: include: - python: 3.7 dist: xenial env: - DEPENDS="$DEPENDS scipy" # To test minimum dependencies for Python 3.6: - python: 3.6 env: # Check these values against requirements.txt and dipy/info.py - DEPENDS="cython==0.29 numpy==1.12.0 scipy==1.0 nibabel==3.0.0 h5py==2.5.0 nose tqdm>=4.30.0" # To test minimum dependencies for Python 3.7: - python: 3.7 dist: xenial env: # Check these values against requirements.txt and dipy/info.py - DEPENDS="cython==0.29 numpy==1.15.0 scipy==1.1 nibabel==3.0.0 h5py==2.8.0 tqdm>=4.30.0" # Need to be uncomment when tensorflow and statsmodel available #- python: 3.8 # dist: xenial # env: # - DEPENDS="$DEPENDS scikit_learn pandas statsmodels tables scipy tensorflow" - python: 3.7 dist: xenial env: - COVERAGE=1 - DEPENDS="$DEPENDS scikit_learn pandas statsmodels tables scipy tensorflow" # To test vtk functionality - python: 3.7 dist: xenial sudo: true # This is set to true for apt-get services: - xvfb env: - COVERAGE=1 - VTK=1 - TEST_WITH_XVFB=true - MESA_GL_VERSION_OVERRIDE=3.3 - LIBGL_ALWAYS_INDIRECT=y - DEPENDS="$DEPENDS scikit_learn vtk fury scipy" - python: 3.7 dist: xenial env: - INSTALL_TYPE=sdist - DEPENDS="$DEPENDS scipy" - python: 3.7 dist: xenial env: - INSTALL_TYPE=pip # Dependency checking should get all needed dependencies - DEPENDS="" - python: 3.7 dist: xenial env: - INSTALL_TYPE=wheel - DEPENDS="$DEPENDS scipy" - python: 3.7 dist: xenial env: - INSTALL_TYPE=requirements - DEPENDS="" - python: 3.7 dist: xenial # Check against latest available pre-release version of all packages env: - USE_PRE=1 - DEPENDS="$DEPENDS scipy statsmodels pandas scikit_learn" allow_failures: - python: 3.7 dist: xenial env: - USE_PRE=1 - DEPENDS="$DEPENDS scipy statsmodels pandas scikit_learn" before_install: - PIPI="pip install $EXTRA_PIP_FLAGS" - if [ -n "$USE_PRE" ]; then PIPI="$PIPI --extra-index-url=$PRE_WHEELS --pre"; fi - pip install --upgrade virtualenv - virtualenv $VENV_ARGS venv - source venv/bin/activate - python --version # just to check - $PIPI --upgrade pip "setuptools<50.0" - $PIPI pytest - $PIPI numpy - if [ -n "$DEPENDS" ]; then $PIPI $DEPENDS; fi - if [ "${COVERAGE}" == "1" ]; then pip install coverage coveralls; fi - if [ "${VTK}" == "1" ]; then sudo apt-get update; sudo apt-get install -y $VTK_VER; sudo apt-get install -y xvfb; sudo apt-get install -y python-tk; sudo apt-get install -y python-imaging; $PIPI xvfbwrapper; fi install: - | if [ "$INSTALL_TYPE" == "setup" ]; then python setup.py install elif [ "$INSTALL_TYPE" == "pip" ]; then $PIPI . elif [ "$INSTALL_TYPE" == "sdist" ]; then python setup_egg.py egg_info # check egg_info while we're here python setup_egg.py sdist $PIPI dist/*.tar.gz elif [ "$INSTALL_TYPE" == "wheel" ]; then pip install wheel python setup_egg.py bdist_wheel $PIPI dist/*.whl elif [ "$INSTALL_TYPE" == "requirements" ]; then $PIPI -r requirements.txt python setup.py install fi # command to run tests, e.g. 
python setup.py test script: # Change into an innocuous directory and find tests from installation - mkdir for_testing - cd for_testing # We need the setup.cfg for the pytest settings - cp ../setup.cfg . # No figure windows for mpl; quote to hide : from travis-ci yaml parsing - 'echo "backend : agg" > matplotlibrc' - if [ "${COVERAGE}" == "1" ]; then cp ../.coveragerc .; cp ../.codecov.yml .; COVER_CMD="coverage run -m "; fi - $COVER_CMD pytest -s --doctest-modules --verbose --durations=10 --pyargs dipy after_success: - if [ "${COVERAGE}" == "1" ]; then coveralls; codecov; fi dipy-1.11.0/tools/ci/archives/appveyor.yml000066400000000000000000000063421476546756600205250ustar00rootroot00000000000000# vim ft=yaml # CI on Windows via appveyor environment: global: # SDK v7.0 MSVC Express 2008's SetEnv.cmd script will fail if the # /E:ON and /V:ON options are not enabled in the batch script interpreter # See: https://stackoverflow.com/a/13751649/163740 CMD_IN_ENV: "cmd /E:ON /V:ON /C .\\tools\\run_with_env.cmd" DEPENDS: "cython numpy scipy matplotlib h5py" INSTALL_TYPE: "requirements" EXTRA_PIP_FLAGS: "--timeout=60" matrix: - PYTHON: C:\Python35-x64 - PYTHON: C:\Python36 - PYTHON: C:\Python36-x64 - PYTHON: C:\Python37 - PYTHON: C:\Python37-x64 INSTALL_TYPE: "pip" COVERAGE: 1 platform: - x64 init: - systeminfo - ps: iex ((new-object net.webclient).DownloadString('https://raw.githubusercontent.com/appveyor/ci/master/scripts/enable-rdp.ps1')) install: # If there is a newer build queued for the same PR, cancel this one. # The AppVeyor 'rollout builds' option is supposed to serve the same # purpose but is problematic because it tends to cancel builds pushed # directly to master instead of just PR builds. # credits: JuliaLang developers. - ps: if ($env:APPVEYOR_PULL_REQUEST_NUMBER -and $env:APPVEYOR_BUILD_NUMBER -ne ((Invoke-RestMethod ` https://ci.appveyor.com/api/projects/$env:APPVEYOR_ACCOUNT_NAME/$env:APPVEYOR_PROJECT_SLUG/history?recordsNumber=50).builds | ` Where-Object pullRequestId -eq $env:APPVEYOR_PULL_REQUEST_NUMBER)[0].buildNumber) { ` throw "There are newer queued builds for this pull request, failing early." } - "set PATH=%PYTHON%;%PYTHON%\\Scripts;%PATH%" - ps: $env:PIPI = "pip install $env:EXTRA_PIP_FLAGS" - echo %PIPI% # Check that we have the expected version and architecture for Python - "python --version" - ps: $env:PYTHON_ARCH = python -c "import struct; print(struct.calcsize('P') * 8)" - ps: $env:PYTHON_VERSION = python -c "import platform;print(platform.python_version())" - cmd: echo %PYTHON_VERSION% %PYTHON_ARCH% - ps: | if($env:PYTHON -match "conda") { conda update -yq conda Invoke-Expression "conda install -yq pip $env:DEPENDS" pip install nibabel cvxpy scikit-learn } else { python -m pip install -U pip pip --version if($env:INSTALL_TYPE -match "requirements") { Invoke-Expression "$env:PIPI -r requirements.txt" } else { Invoke-Expression "$env:PIPI $env:DEPENDS" } Invoke-Expression "$env:PIPI nibabel matplotlib scikit-learn cvxpy" } - "%CMD_IN_ENV% python setup.py build_ext --inplace" - "%CMD_IN_ENV% %PIPI% --user -e ." build: false # Not a C# project, build stuff at the test step instead. test_script: - pip install pytest coverage coveralls - mkdir for_testing - cd for_testing - echo backend:Agg > matplotlibrc - if exist ../.coveragerc (cp ../.coveragerc .) 
else (echo no .coveragerc) - ps: | if ($env:COVERAGE) { $env:COVER_CMD = "coverage run -m " } - cmd: echo %COVER_CMD% - "%COVER_CMD% pytest -s --doctest-modules --verbose --durations=10 --pyargs dipy" cache: # Avoid re-downloading large packages - '%APPDATA%\pip\Cache' dipy-1.11.0/tools/ci/archives/azure-pipelines.yml.old000066400000000000000000000111761476546756600225520ustar00rootroot00000000000000schedules: - cron: "0 0 * * 1,4" #twice per week displayName: Weekly midnight build branches: include: - master trigger: - master - maint/* pr: - master - maint/* jobs: - template: ci/azure/linux.yml parameters: name: Linux vmImage: ubuntu-latest matrix: Python36-64bit: python.version: '3.6' Python37-64bit: python.version: '3.7' Python38-64bit: python.version: '3.8' Python39-64bit: python.version: '3.9' Python37-64bit + MIN_DEPS: python.version: '3.7' DEPENDS: "cython==0.29 numpy==1.15.0 scipy==1.1 nibabel==3.0.0 h5py==2.8.0 tqdm" Python38-64bit + MIN_DEPS: python.version: '3.8' DEPENDS: "cython==0.29 numpy==1.17.5 scipy==1.3.2 nibabel==3.0.0 h5py==3.0.0 tqdm" Python37-64bit + OPTIONAL_DEPS: python.version: '3.7' EXTRA_DEPENDS: "scikit_learn pandas statsmodels tables scipy" Python37-64bit + VIZ + COVERAGE: TEST_WITH_XVFB: "1" COVERAGE: "1" python.version: '3.7' MESA_GL_VERSION_OVERRIDE: '3.3' LIBGL_ALWAYS_INDIRECT: 'y' EXTRA_DEPENDS: "scikit_learn vtk==8.1.2 fury scipy pandas statsmodels tables xvfbwrapper" Python37-64bit + SDIST: python.version: '3.7' INSTALL_TYPE: "sdist" EXTRA_DEPENDS: "scipy" Python37-64bit + PIP: python.version: '3.7' INSTALL_TYPE: "pip" DEPENDS: "" # Dependency checking should get all needed dependencies Python37-64bit + WHEEL: python.version: '3.7' INSTALL_TYPE: "wheel" EXTRA_DEPENDS: "scipy" Python37-64bit + Requirements: python.version: '3.7' INSTALL_TYPE: "requirements" DEPENDS: "" CONDA Python37-64bit + OPTIONAL_DEPS: python.version: '3.7' EXTRA_DEPENDS: "scikit-learn pandas statsmodels pytables scipy" INSTALL_TYPE: "conda" CONDA Python37-64bit: python.version: '3.7' INSTALL_TYPE: "conda" CONDA Python36-64bit: python.version: '3.6' INSTALL_TYPE: "conda" Python37-64bit - PRE: USE_PRE: 1 python.version: '3.7' EXTRA_DEPENDS: "scikit_learn scipy statsmodels pandas tables" - template: ci/azure/osx.yml parameters: name: OSX vmImage: macOS-latest matrix: Python37-64bit + OPTIONAL_DEPS: python.version: '3.7' EXTRA_DEPENDS: "scikit_learn pandas statsmodels tables scipy" Python37-64bit: python.version: '3.7' Python38-64bit: python.version: '3.8' Python39-64bit: python.version: '3.9' # Problem with h5py # CONDA Python37-64bit + OPTIONAL_DEPS: # python.version: '3.7' # EXTRA_DEPENDS: "scikit-learn pandas statsmodels pytables scipy==1.2" # INSTALL_TYPE: "conda" CONDA Python37-64bit: python.version: '3.7' INSTALL_TYPE: "conda" CONDA Python36-64bit: python.version: '3.6' INSTALL_TYPE: "conda" Python37-64bit + VIZ: TEST_WITH_XVFB: "1" python.version: '3.7' MESA_GL_VERSION_OVERRIDE: '3.3' LIBGL_ALWAYS_INDIRECT: 'y' EXTRA_DEPENDS: "scikit_learn vtk fury scipy xvfbwrapper" # TODO: Need to figure out how to allow failure before any activation # Python37-64bit - PRE: # USE_PRE: 1 # python.version: '3.7' # EXTRA_DEPENDS: "scikit_learn scipy statsmodels pandas " - template: ci/azure/windows.yml parameters: name: Windows vmImage: windows-latest matrix: Python37-64bit + OPTIONAL_DEPS: python.version: '3.7' EXTRA_DEPENDS: "scikit_learn pandas statsmodels tables scipy" Python39-64bit: python.version: '3.9' Python38-64bit: python.version: '3.8' Python37-64bit: python.version: '3.7' 
Python36-64bit: python.version: '3.6' CONDA Python37-64bit + OPTIONAL_DEPS: python.version: '3.7' EXTRA_DEPENDS: "scikit-learn pandas statsmodels pytables scipy" INSTALL_TYPE: "conda" CONDA Python37-64bit: python.version: '3.7' INSTALL_TYPE: "conda" CONDA Python36-64bit: python.version: '3.6' INSTALL_TYPE: "conda" Python37-64bit + VIZ: TEST_WITH_XVFB: "1" python.version: '3.7' MESA_GL_VERSION_OVERRIDE: '3.3' LIBGL_ALWAYS_INDIRECT: 'y' EXTRA_DEPENDS: "scikit_learn vtk fury scipy" # TODO: Need to figure out how to allow failure before any activation # Python37-64bit - PRE: # USE_PRE: 1 # python.version: '3.7' # EXTRA_DEPENDS: "scikit_learn scipy statsmodels pandas " dipy-1.11.0/tools/ci/archives/create_env_install_dependencies.sh000066400000000000000000000021061476546756600250320ustar00rootroot00000000000000#!/bin/bash set -ev if [ "$INSTALL_TYPE" == "conda" ]; then conda config --set always_yes yes --set changeps1 no conda update -yq conda conda install conda-build anaconda-client conda config --add channels conda-forge conda create -n venv --yes python=$PYTHON_VERSION pip conda install -yq --name venv $DEPENDS $EXTRA_DEPENDS pytest else PIPI="pip install --timeout=60 " if [ "$USE_PRE" == "1" ]; then PIPI="$PIPI --extra-index-url=$PRE_WHEELS --pre"; fi pip install --upgrade virtualenv virtualenv $VENV_ARGS venv source venv/bin/activate # just to check python version python --version $PIPI pytest $PIPI numpy if [ -n "$DEPENDS" ]; then $PIPI $DEPENDS $EXTRA_DEPENDS; fi if [ "$COVERAGE" == "1" ]; then pip install coverage coveralls; fi if [ "$VTK" == "1" ]; then sudo apt-get update; sudo apt-get install -y $VTK_VER; sudo apt-get install -y xvfb; sudo apt-get install -y python-tk; sudo apt-get install -y python-imaging; $PIPI xvfbwrapper; fi fi dipy-1.11.0/tools/ci/archives/linux.yml000066400000000000000000000034211476546756600200120ustar00rootroot00000000000000parameters: name: '' vmImage: '' matrix: [] jobs: - job: ${{ parameters.name }} timeoutInMinutes: 120 pool: vmIMage: ${{ parameters.vmImage }} variables: DEPENDS: "cython numpy matplotlib h5py nibabel cvxpy tqdm" VENV_ARGS: "--python=python" INSTALL_TYPE: "setup" PRE_WHEELS: "https://pypi.anaconda.org/scipy-wheels-nightly/simple" strategy: # maxParallel: 4 matrix: ${{ insert }}: ${{ parameters.matrix }} steps: - task: UsePythonVersion@0 inputs: versionSpec: $(python.version) architecture: 'x64' addToPath: true - bash: | sudo apt-get update sudo apt-get install -y libhdf5-serial-dev displayName: 'Install packages' - bash: | sudo apt-get install -y xvfb /usr/bin/Xvfb :99 -screen 0 1920x1080x24 > /dev/null 2>&1 & echo ">>> Started xvfb" displayName: 'Install Xvfb package' condition: and(succeeded(), eq( variables['TEST_WITH_XVFB'], '1' )) - bash: echo "##vso[task.prependpath]$CONDA/bin" displayName: Add conda to PATH condition: eq( variables['INSTALL_TYPE'], 'conda' ) - bash: | sudo chmod +x ci/azure/install.sh if [ "${USE_PRE}" == "1" ]; then echo "we use PRE" echo "${USE_PRE}" ./ci/azure/install.sh || echo -e "\043#vso[task.logissue type=warning;] Error during pre-package installation" else ./ci/azure/install.sh fi displayName: 'Install dependencies' - bash: | sudo chmod +x ci/azure/script.sh if [ "${USE_PRE}" == "1" ]; then ./ci/azure/script.sh || echo -e "\043#vso[task.logissue type=warning;] Error during tests on PRE matrix" else ./ci/azure/script.sh fi env: CODECOV_TOKEN: $(CODECOV-TOKEN) displayName: 'Install DIPY and Run Tests' 
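# NOTE: CODECOV_TOKEN is mapped explicitly in the step's env block above # because Azure Pipelines does not expose secret pipeline variables to # scripts automatically.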
dipy-1.11.0/tools/ci/archives/osx.yml000066400000000000000000000027741476546756600174760ustar00rootroot00000000000000parameters: name: '' vmImage: '' matrix: [] jobs: - job: ${{ parameters.name }} timeoutInMinutes: 120 pool: vmIMage: ${{ parameters.vmImage }} variables: DEPENDS: "cython numpy matplotlib h5py nibabel cvxpy tqdm" VENV_ARGS: "--python=python" INSTALL_TYPE: "setup" PRE_WHEELS: "https://pypi.anaconda.org/scipy-wheels-nightly/simple" strategy: # maxParallel: 3 matrix: ${{ insert }}: ${{ parameters.matrix }} steps: - task: UsePythonVersion@0 inputs: versionSpec: $(python.version) architecture: 'x64' addToPath: true - script: | brew install hdf5 displayName: 'Install packages' - script: | brew install --cask xquartz displayName: 'Install Xquartz package' condition: eq( variables['TEST_WITH_XVFB'], '1' ) - bash: echo "##vso[task.prependpath]$CONDA/bin" displayName: Add conda to PATH condition: eq( variables['INSTALL_TYPE'], 'conda' ) # On Hosted macOS, the agent user doesn't have ownership of Miniconda's installation directory/ # We need to take ownership if we want to update conda or install packages globally - bash: sudo chown -R $USER $CONDA displayName: Take ownership of conda installation condition: eq( variables['INSTALL_TYPE'], 'conda' ) - script: | sudo chmod +x ci/azure/install.sh ./ci/azure/install.sh displayName: 'Install dependencies' - script: | sudo chmod +x ci/azure/script.sh ./ci/azure/script.sh displayName: 'Install DIPY and Run Tests' dipy-1.11.0/tools/ci/archives/windows.yml000066400000000000000000000032231476546756600203450ustar00rootroot00000000000000parameters: name: '' vmImage: '' matrix: [] jobs: - job: ${{ parameters.name }} timeoutInMinutes: 120 pool: vmIMage: ${{ parameters.vmImage }} variables: AZURE_CI_WINDOWS: 'true' DEPENDS: "cython numpy matplotlib h5py nibabel cvxpy tqdm" VENV_ARGS: "--python=python" INSTALL_TYPE: "setup" PRE_WHEELS: "https://pypi.anaconda.org/scipy-wheels-nightly/simple" strategy: # maxParallel: 3 matrix: ${{ insert }}: ${{ parameters.matrix }} steps: - task: UsePythonVersion@0 inputs: versionSpec: $(python.version) architecture: 'x64' addToPath: true - powershell: | Write-Host "##vso[task.prependpath]$env:CONDA\Scripts" displayName: Add Conda to PATH condition: eq( variables['INSTALL_TYPE'], 'conda' ) - powershell: | Set-StrictMode -Version Latest $ErrorActionPreference = "Stop" $PSDefaultParameterValues['*:ErrorAction']='Stop' powershell ./ci/azure/install_opengl.ps1 displayName: 'Install OpenGL' condition: eq( variables['TEST_WITH_XVFB'], '1' ) - powershell: | Invoke-WebRequest -Uri https://zenodo.org/record/2651349/files/gs.trk -UseDefaultCredentials Invoke-WebRequest -Uri https://stacks.stanford.edu/file/druid:yx282xq2090/label_info.txt -UseDefaultCredentials displayName: Add/Verify whether the certificate has been installed correctly - powershell: | Set-StrictMode -Version Latest $ErrorActionPreference = "Stop" $PSDefaultParameterValues['*:ErrorAction']='Stop' ./ci/azure/install.ps1 displayName: 'Install dependencies' - powershell: | ./ci/azure/script.ps1 displayName: 'Run Tests' dipy-1.11.0/tools/ci/deploy_doc.sh000066400000000000000000000000231476546756600167740ustar00rootroot00000000000000#!/bin/bash set -evdipy-1.11.0/tools/ci/install.ps1000066400000000000000000000020341476546756600164160ustar00rootroot00000000000000# Powershell Install script # Useful function from https://stackoverflow.com/questions/50093582/powershell-not-recognizing-conda-as-cmdlet-function-or-operable-program function Invoke-CmdScript { param( 
dipy-1.11.0/tools/ci/install.ps1

# Powershell Install script
# Useful function from https://stackoverflow.com/questions/50093582/powershell-not-recognizing-conda-as-cmdlet-function-or-operable-program
function Invoke-CmdScript {
    param(
        [String] $scriptName
    )
    $cmdLine = """$scriptName"" $args & set"
    & $Env:SystemRoot\system32\cmd.exe /c $cmdLine |
        Select-String '^([^=]*)=(.*)$' |
        ForEach-Object {
            $varName = $_.Matches[0].Groups[1].Value
            $varValue = $_.Matches[0].Groups[2].Value
            Set-Item Env:$varName $varValue
        }
}

if($env:INSTALL_TYPE -match "conda") {
    Write-Output "Activate testenv"
    Invoke-CmdScript $env:CONDA\Scripts\activate.bat testenv
}

$env:PIPI = "pip install --timeout=60"
# Print and check this environment variable
Write-Output "Pip command: $env:PIPI"

# Print and install DIPY
Write-Output "======================== Install DIPY ========================"
Invoke-Expression "$env:PIPI --user -e ."

# Run tests
Write-Output "======================== Run DIPY tests ========================"
Invoke-Expression "pytest -svv dipy"

dipy-1.11.0/tools/ci/install.sh

#!/bin/bash
echo "Activate virtual environment"
source tools/ci/activate_env.sh
set -ex

PIPI="pip install --timeout=60 -Csetup-args=--vsenv -Ccompile-args=-v"

if [ "$USE_PRE" == "1" ] || [ "$USE_PRE" == true ]; then
    # --index-url takes priority over --extra-index-url, so that packages, and
    # their dependencies, with versions available in the nightly channel will
    # be installed before falling back to the Python Package Index.
    PIPI="$PIPI --pre --index-url $PRE_WHEELS --extra-index-url https://pypi.org/simple";
fi

#---------- DIPY Installation -----------------
if [ "$INSTALL_TYPE" == "setup" ]; then
    python setup.py install
elif [ "$INSTALL_TYPE" == "pip" ]; then
    $PIPI -vv .
elif [ "$INSTALL_TYPE" == "sdist" ]; then
    # python -m pep517.build
    python setup_egg.py egg_info  # check egg_info while we're here
    python setup_egg.py sdist
    $PIPI dist/*.tar.gz
elif [ "$INSTALL_TYPE" == "wheel" ]; then
    pip install wheel
    python setup_egg.py bdist_wheel
    $PIPI dist/*.whl
elif [ "$INSTALL_TYPE" == "requirements" ]; then
    $PIPI -r requirements.txt
    python setup.py install
elif [ "$INSTALL_TYPE" == "conda" ]; then
    $PIPI -vv .
fi

set +ex
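A hedged usage sketch for the dispatch above — each line exercises one INSTALL_TYPE branch, and the nightly-wheels URL is the one the archived pipelines used:

# Hypothetical invocations of tools/ci/install.sh.
INSTALL_TYPE=pip bash tools/ci/install.sh
INSTALL_TYPE=wheel bash tools/ci/install.sh
USE_PRE=1 PRE_WHEELS="https://pypi.anaconda.org/scipy-wheels-nightly/simple" \
    INSTALL_TYPE=pip bash tools/ci/install.sh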
dipy-1.11.0/tools/ci/install_dependencies.ps1

# Powershell Install script
if($env:INSTALL_TYPE -match "conda") {
    # Get Anaconda path
    Write-Output "Conda path: $env:CONDA\Scripts"
    #gci env:*
    Invoke-Expression "conda config --set always_yes yes --set changeps1 no"
    Invoke-Expression "conda update -yq conda"
    Invoke-Expression "conda install conda-build anaconda-client"
    Invoke-Expression "conda config --add channels conda-forge"
    Invoke-Expression "conda create -n testenv --yes python=$env:PYTHON_VERSION pip"
    Invoke-Expression "conda install -yq --name testenv $env:DEPENDS $env:EXTRA_DEPENDS pytest"
} else {
    $env:PIPI = "pip install --timeout=60 --find-links=$env:EXTRA_WHEELS"
    # Print and check this environment variable
    Write-Output "Pip command: $env:PIPI"
    Invoke-Expression "python -m pip install -U pip"
    Invoke-Expression "pip --version"
    Invoke-Expression "$env:PIPI $env:DEPENDS $env:EXTRA_DEPENDS pytest"
}

dipy-1.11.0/tools/ci/install_dependencies.sh

#!/bin/bash
echo "Activate virtual environment"
source tools/ci/activate_env.sh
set -ex

echo "Display Python version"
python -c "import sys; print(sys.version)"
python -m pip install -U pip 'setuptools~=69.5' wheel

echo "Install Dependencies"
if [ "$INSTALL_TYPE" == "conda" ]; then
    conda install -yq --name venv $DEPENDS $EXTRA_DEPENDS pytest
else
    PIPI="pip install --timeout=60 "
    if [ "$USE_PRE" == "1" ] || [ "$USE_PRE" == true ]; then
        # --index-url takes priority over --extra-index-url, so that packages,
        # and their dependencies, with versions available in the nightly
        # channel will be installed before falling back to the Python Package
        # Index.
        PIPI="$PIPI --pre --index-url $PRE_WHEELS --extra-index-url https://pypi.org/simple";
    fi
    $PIPI pytest==8.0.0
    $PIPI numpy
    if [ -n "$DEPENDS" ]; then $PIPI $DEPENDS $EXTRA_DEPENDS; fi
    if [ "$COVERAGE" == "1" ] || [ "$COVERAGE" = true ]; then pip install coverage coveralls; fi
fi
set +ex
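The dependency installer is likewise driven entirely by environment variables; a hedged sketch, where DEPENDS mirrors the variable defined in the pipeline templates:

# Hypothetical invocation of tools/ci/install_dependencies.sh.
DEPENDS="cython numpy matplotlib h5py nibabel cvxpy tqdm" COVERAGE=1 \
    bash tools/ci/install_dependencies.sh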
dipy-1.11.0/tools/ci/install_opengl.ps1

# Sample script to install Python and pip under Windows
# Authors: Olivier Grisel, Jonathan Helmus and Kyle Kastner
# License: CC0 1.0 Universal: https://creativecommons.org/publicdomain/zero/1.0/
# Adapted from VisPy

$MESA_GL_URL = "https://github.com/vispy/demo-data/raw/main/mesa/"

# Mesa DLLs found linked from:
#     https://wiki.qt.io/Cross_compiling_Mesa_for_Windows
# to:
#     http://sourceforge.net/projects/msys2/files/REPOS/MINGW/x86_64/mingw-w64-x86_64-mesa-10.2.4-1-any.pkg.tar.xz/download

function DownloadMesaOpenGL ($architecture) {
    [Net.ServicePointManager]::SecurityProtocol = 'Ssl3, Tls, Tls11, Tls12'
    $webclient = New-Object System.Net.WebClient
    # Download and retry up to 3 times in case of network transient errors.
    $url = $MESA_GL_URL + "opengl32_mingw_" + $architecture + ".dll"
    if ($architecture -eq "32") {
        $filepath = "C:\Windows\SysWOW64\opengl32.dll"
    } else {
        $filepath = "C:\Windows\system32\opengl32.dll"
    }
    takeown /F $filepath /A
    icacls $filepath /grant "${env:ComputerName}\${env:UserName}:F"
    Remove-item -LiteralPath $filepath
    Write-Output "Downloading" $url
    $retry_attempts = 2
    for($i=0; $i -lt $retry_attempts; $i++){
        try {
            $webclient.DownloadFile($url, $filepath)
            break
        }
        Catch [Exception]{
            Start-Sleep 1
        }
    }
    if (Test-Path $filepath) {
        Write-Output "File saved at" $filepath
    } else {
        # Retry once more, so that any error message surfaces on the last try.
        $webclient.DownloadFile($url, $filepath)
    }
}

function main () {
    DownloadMesaOpenGL "64"
}

main

dipy-1.11.0/tools/ci/run_tests.sh

#!/bin/bash
echo "Activate virtual environment"
source tools/ci/activate_env.sh
set -ex

echo "Run the tests"
# Change into an innocuous directory and find tests from installation
mkdir for_testing_results
mkdir for_testing
cd for_testing
# We need the pyproject.toml for the pytest settings
cp ../pyproject.toml .
# No figure windows for mpl; quote to hide : from travis-ci yaml parsing
echo "backend : agg" > matplotlibrc
if [ "$COVERAGE" == "1" ] || [ "$COVERAGE" == true ]; then
    cp ../.coveragerc .;
    cp ../.codecov.yml .;
    chmod -R a-w .
    # Run the tests and check for test coverage.
    coverage run --data-file=../for_testing_results/.coverage -m pytest -o cache_dir=../for_testing_results -c pyproject.toml -svv --doctest-modules --verbose --durations=10 --pyargs dipy
    chmod -R a+w .
    cd ../for_testing_results
    coverage report -m  # Generate test coverage report.
    coverage xml  # Generate coverage report in xml format for codecov upload.
else
    chmod -R a-w .
    pytest -o cache_dir=../for_testing_results -c pyproject.toml -svv --doctest-modules --verbose --durations=10 --pyargs dipy
    chmod -R a+w .
fi
cd ..
set +ex

dipy-1.11.0/tools/ci/setup_headless.sh

#!/bin/bash
echo "DISPLAY=:99.0" >> $GITHUB_ENV

if [ "$RUNNER_OS" == "Linux" ]; then
    echo "LIBGL_ALWAYS_SOFTWARE=1" >> $GITHUB_ENV
    echo "LIBGL_ALWAYS_INDIRECT=0" >> $GITHUB_ENV
    sudo apt-get install -y libgl1-mesa-glx xvfb;
    Xvfb :99 -screen 0 1024x768x24 > /dev/null 2>&1 &
    sleep 3
elif [ "$RUNNER_OS" == "Windows" ]; then
    powershell ./tools/ci/install_opengl.ps1
elif [ "$RUNNER_OS" == "macOS" ]; then
    echo 'Install Xquartz package for headless'
    brew install --cask xquartz
fi
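setup_headless.sh expects the GitHub Actions environment; a minimal sketch of a local run, faking the two variables the runner would normally provide:

# Hypothetical local run; GITHUB_ENV and RUNNER_OS are faked here because
# they are normally injected by GitHub Actions.
export GITHUB_ENV=/tmp/github_env
export RUNNER_OS=Linux
bash tools/ci/setup_headless.sh
cat "$GITHUB_ENV"   # the DISPLAY/LIBGL_* lines the workflow would export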
dipy-1.11.0/tools/doctest_extmods.py

#!/usr/bin/env python3
"""Run doctests in the extension modules of a package.

Collect the extension modules in the given package, then run the doctests
in each extension module.

Examples
--------
%prog dipy
"""
import doctest
from optparse import OptionParser
import os
from os.path import abspath, dirname, join as pjoin, relpath, sep
import sys
import sysconfig

# 'SO' disappeared from the build config vars in modern Pythons;
# 'EXT_SUFFIX' is its replacement.
EXT_EXT = sysconfig.get_config_var('EXT_SUFFIX')


def get_ext_modules(pkg_name):
    pkg = __import__(pkg_name, fromlist=[''])
    pkg_dir = abspath(dirname(pkg.__file__))
    # pkg_root = __import__(pkg_name)
    ext_modules = []
    for dirpath, dirnames, filenames in os.walk(pkg_dir):
        reldir = relpath(dirpath, pkg_dir)
        if reldir == '.':
            reldir = ''
        for filename in filenames:
            if filename.endswith(EXT_EXT):
                froot = filename[:-len(EXT_EXT)]
                mod_path = pjoin(reldir, froot)
                mod_uri = pkg_name + '.' + mod_path.replace(sep, '.')
                # fromlist=[''] results in the submodule being returned,
                # rather than the top level module.  See help(__import__).
                mod = __import__(mod_uri, fromlist=[''])
                ext_modules.append(mod)
    return ext_modules


def main():
    usage = f"usage: %prog [options] <package_name>\n\n{__doc__}"
    parser = OptionParser(usage=usage)
    opts, args = parser.parse_args()
    if len(args) == 0:
        parser.print_help()
        sys.exit(1)
    mod_name = args[0]
    mods = get_ext_modules(mod_name)
    for mod in mods:
        print(f"Testing module: {mod.__name__}")
        doctest.testmod(mod, optionflags=doctest.NORMALIZE_WHITESPACE)


if __name__ == '__main__':
    main()
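The docstring's %prog placeholder expands to the script name, so a typical run looks like this (assuming dipy is importable in the current environment):

python tools/doctest_extmods.py dipy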
""" which = 'pulls' if pulls else 'issues' if isinstance(period, timedelta): period = datetime.now() - period url = ( f"https://api.github.com/repos/{project}/{which}?state=closed" f"&sort=updated&since={period.strftime(ISO8601)}&per_page={PER_PAGE}" ) allclosed = get_paged_request(url) # allclosed = get_issues(project=project, state='closed', pulls=pulls, # since=period) filtered = [i for i in allclosed if _parse_datetime(i['closed_at']) > period] # exclude rejected PRs if pulls: filtered = [pr for pr in filtered if pr['merged_at']] return filtered def sorted_by_field(issues, field='closed_at', reverse=False): """Return a list of issues sorted by closing date date.""" return sorted(issues, key=lambda i: i[field], reverse=reverse) def report(issues, show_urls=False): """Summary report about a list of issues, printing number and title.""" # titles may have unicode in them, so we must encode everything below if show_urls: for i in issues: role = 'ghpull' if 'merged_at' in i else 'ghissue' print('* :%s:`%d`: %s' % (role, i['number'], i['title'])) else: for i in issues: print('* %d: %s' % (i['number'], i['title'])) # ----------------------------------------------------------------------------- # Main script # ----------------------------------------------------------------------------- if __name__ == "__main__": # Whether to add reST urls for all issues in printout. show_urls = True # By default, search one month back tag = None if len(sys.argv) > 1: try: days = int(sys.argv[1]) except: tag = sys.argv[1] else: tag = check_output(['git', 'describe', '--abbrev=0'], text=True).strip() if tag: cmd = ['git', 'log', '-1', '--format=%ai', tag] tagday, tz = check_output(cmd, text=True).strip().rsplit(' ', 1) since = datetime.strptime(tagday, "%Y-%m-%d %H:%M:%S") else: since = datetime.now() - timedelta(days=days) print(f"fetching GitHub stats since {since} (tag: {tag})", file=sys.stderr) # turn off to play interactively without redownloading, use %run -i if 1: issues = issues_closed_since(since, pulls=False) pulls = issues_closed_since(since, pulls=True) # For regular reports, it's nice to show them in reverse # chronological order issues = sorted_by_field(issues, reverse=True) pulls = sorted_by_field(pulls, reverse=True) n_issues, n_pulls = map(len, (issues, pulls)) n_total = n_issues + n_pulls # Print summary report we can directly include into release notes. print() since_day = since.strftime("%Y/%m/%d") today = datetime.today().strftime("%Y/%m/%d") print(f"GitHub stats for {since_day} - {today} (tag: {tag})") print() print("These lists are automatically generated, and may be incomplete or" " contain duplicates.") print() if tag: # print git info, in addition to GitHub info: since_tag = tag + '..' 
        cmd = ['git', 'log', '--oneline', since_tag]
        ncommits = len(check_output(cmd, text=True).splitlines())

        author_cmd = ['git', 'log', '--format=* %aN', since_tag]
        all_authors = check_output(author_cmd, text=True).splitlines()
        unique_authors = sorted(set(all_authors))

        if not unique_authors:
            print("No commits during this period.")
        else:
            print(f"The following {len(unique_authors)} authors contributed "
                  f"{ncommits} commits.")
            print()
            print('\n'.join(unique_authors))
            print()

    print()
    print(f"We closed a total of {n_total} issues,"
          f" {n_pulls} pull requests and {n_issues} regular issues;\n"
          "this is the full list (generated with the script \n"
          ":file:`tools/github_stats.py`):")
    print()
    print(f'Pull Requests ({n_pulls}):\n')
    report(pulls, show_urls)
    print()
    print(f'Issues ({n_issues}):\n')
    report(issues, show_urls)

dipy-1.11.0/tools/gitversion.py

#!/usr/bin/env python3
"""
Note
----
This file is copied (possibly with major modifications) from the
sources of the scipy project - https://github.com/scipy/scipy.
It remains licensed as the rest of scipy (BSD-3 license as of October 2023).

# ## ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ##
#
#   See COPYING file distributed along with the scipy package for the
#   copyright and license terms.
#
# ## ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ##
"""
import os
import textwrap


def init_version():
    init = os.path.join(os.path.dirname(__file__), '../pyproject.toml')
    with open(init) as fid:
        data = fid.readlines()

    version_line = next(
        line for line in data if line.startswith('version =')
    )
    version = version_line.strip().split(' = ')[1]
    version = version.replace('"', '').replace("'", '')

    return version


def git_version(version):
    # Append last commit date and hash to dev version information,
    # if available
    import subprocess
    import os.path

    git_hash = ''
    try:
        p = subprocess.Popen(
            ['git', 'log', '-1', '--format="%H %aI"'],
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE,
            cwd=os.path.dirname(__file__),
        )
    except FileNotFoundError:
        pass
    else:
        out, err = p.communicate()
        if p.returncode == 0:
            git_hash, git_date = (
                out.decode('utf-8')
                .strip()
                .replace('"', '')
                .split('T')[0]
                .replace('-', '')
                .split()
            )

            # Only attach git tag to development versions
            if 'dev' in version:
                version += f'+git{git_date}.{git_hash[:7]}'

    return version, git_hash


if __name__ == "__main__":
    import argparse

    parser = argparse.ArgumentParser()
    parser.add_argument('--write', help="Save version to this file")
    parser.add_argument(
        '--meson-dist',
        help='Output path is relative to MESON_DIST_ROOT',
        action='store_true'
    )
    args = parser.parse_args()

    version, git_hash = git_version(init_version())

    template = textwrap.dedent(f'''
        """
        Module to expose more detailed version info for the installed `dipy`
        """
        version = "{version}"
        full_version = version
        short_version = version.split('.dev')[0]
        git_revision = "{git_hash}"
        release = 'dev' not in version and '+' not in version

        if not release:
            version = full_version
    ''')

    if args.write:
        outfile = args.write
        if args.meson_dist:
            outfile = os.path.join(
                os.environ.get('MESON_DIST_ROOT', ''),
                outfile
            )

        # Print human readable output path
        relpath = os.path.relpath(outfile)
        if relpath.startswith('.'):
            relpath = outfile

        with open(outfile, 'w') as f:
            f.write(template)
    else:
        print(version)
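A hedged sketch of how the two helpers above might be invoked from a checkout; both assume a git clone with tags fetched, and the --write target path is hypothetical:

# Hypothetical invocations of the release helpers.
python tools/github_stats.py 90        # issues/PRs closed in the last 90 days
python tools/github_stats.py 1.10.0    # ...or closed since a release tag
python tools/gitversion.py             # print the git-derived version string
python tools/gitversion.py --write dipy/version.py   # hypothetical target file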
dipy-1.11.0/tools/gitwash_dumper.py

#!/usr/bin/env python3
"""Check out the gitwash repo into a directory and do a search-replace on
the project name."""

import fnmatch
import glob
from optparse import OptionParser
import os
from os.path import join as pjoin
import re
import shutil
from subprocess import call
import sys
import tempfile

verbose = False


def clone_repo(url, branch):
    cwd = os.getcwd()
    tmpdir = tempfile.mkdtemp()
    try:
        cmd = f'git clone {url} {tmpdir}'
        call(cmd, shell=True)
        os.chdir(tmpdir)
        cmd = f'git checkout {branch}'
        call(cmd, shell=True)
    except:
        shutil.rmtree(tmpdir)
        raise
    finally:
        os.chdir(cwd)
    return tmpdir


def cp_files(in_path, globs, out_path):
    try:
        os.makedirs(out_path)
    except OSError:
        pass
    out_fnames = []
    for in_glob in globs:
        in_glob_path = pjoin(in_path, in_glob)
        for in_fname in glob.glob(in_glob_path):
            out_fname = in_fname.replace(in_path, out_path)
            pth, _ = os.path.split(out_fname)
            if not os.path.isdir(pth):
                os.makedirs(pth)
            shutil.copyfile(in_fname, out_fname)
            out_fnames.append(out_fname)
    return out_fnames


def filename_search_replace(sr_pairs, filename, backup=False):
    """Search and replace for expressions in files."""
    in_txt = open(filename, 'rt').read(-1)
    out_txt = in_txt[:]
    for in_exp, out_exp in sr_pairs:
        in_exp = re.compile(in_exp)
        out_txt = in_exp.sub(out_exp, out_txt)
    if in_txt == out_txt:
        return False
    open(filename, 'wt').write(out_txt)
    if backup:
        open(filename + '.bak', 'wt').write(in_txt)
    return True


def copy_replace(replace_pairs, repo_path, out_path,
                 cp_globs=('*',), rep_globs=('*',), renames=()):
    out_fnames = cp_files(repo_path, cp_globs, out_path)
    renames = [(re.compile(in_exp), out_exp)
               for in_exp, out_exp in renames]
    fnames = []
    for rep_glob in rep_globs:
        fnames += fnmatch.filter(out_fnames, rep_glob)
    if verbose:
        print('\n'.join(fnames))
    for fname in fnames:
        filename_search_replace(replace_pairs, fname, False)
        for in_exp, out_exp in renames:
            new_fname, n = in_exp.subn(out_exp, fname)
            if n:
                os.rename(fname, new_fname)
                break


def make_link_targets(proj_name, user_name, repo_name, known_link_fname,
                      out_link_fname, url=None, ml_url=None):
    """Check and make link targets.

    If url is None or ml_url is None, check if there are links present for
    these in `known_link_fname`.  If not, raise error.  The check is:

    Look for a target `proj_name`.
    Look for a target `proj_name` + ' mailing list'

    Also, look for a target `proj_name` + 'github'.  If this exists, don't
    write this target into the new file below.

    If we are writing any of the url, ml_url, or github address, then write
    new file with these links, of form:

    .. _`proj_name`
    .. _`proj_name`: url
    .. _`proj_name` mailing list: url
    """
    link_contents = open(known_link_fname, 'rt').readlines()
    have_url = url is not None
    have_ml_url = ml_url is not None
    have_gh_url = None
    for line in link_contents:
        if not have_url:
            match = re.match(r'..\s+_`%s`:\s+' % proj_name, line)
            if match:
                have_url = True
        if not have_ml_url:
            match = re.match(r'..\s+_`%s mailing list`:\s+' % proj_name, line)
            if match:
                have_ml_url = True
        if not have_gh_url:
            match = re.match(r'..\s+_`%s github`:\s+' % proj_name, line)
            if match:
                have_gh_url = True
    if not have_url or not have_ml_url:
        raise RuntimeError('Need command line or known project '
                           'and / or mailing list URLs')
    lines = []
    if url is not None:
        lines.append(f'.. _`{proj_name}`: {url}\n')
    if not have_gh_url:
        gh_url = f'http://github.com/{user_name}/{repo_name}\n'
        lines.append(f'.. _`{proj_name} github`: {gh_url}\n')
    if ml_url is not None:
        lines.append(f'.. _`{proj_name} mailing list`: {ml_url}\n')
    if len(lines) == 0:
        # Nothing to do
        return
    # A neat little header line
    lines = [f'.. {proj_name}\n'] + lines
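    # A line of the form '.. <text>' with no trailing colon is a reST
    # comment, so this first entry acts as a header for the link block.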
    out_links = open(out_link_fname, 'wt')
    out_links.writelines(lines)
    out_links.close()


USAGE = """ <output_path> <project_name>

If not set with options, the repository name is the same as the <project
name>

If not set with options, the main github user is the same as the
repository name."""


GITWASH_CENTRAL = 'git://github.com/matthew-brett/gitwash.git'
GITWASH_BRANCH = 'master'


def main():
    parser = OptionParser()
    parser.set_usage(parser.get_usage().strip() + USAGE)
    parser.add_option("--repo-name", dest="repo_name",
                      help="repository name - e.g. nitime",
                      metavar="REPO_NAME")
    parser.add_option("--github-user", dest="main_gh_user",
                      help="github username for main repo - e.g fperez",
                      metavar="MAIN_GH_USER")
    parser.add_option("--gitwash-url", dest="gitwash_url",
                      help=f"URL to gitwash repository - default {GITWASH_CENTRAL}",
                      default=GITWASH_CENTRAL,
                      metavar="GITWASH_URL")
    parser.add_option("--gitwash-branch", dest="gitwash_branch",
                      help=f"branch in gitwash repository - default {GITWASH_BRANCH}",
                      default=GITWASH_BRANCH,
                      metavar="GITWASH_BRANCH")
    parser.add_option("--source-suffix", dest="source_suffix",
                      help="suffix of ReST source files - default '.rst'",
                      default='.rst',
                      metavar="SOURCE_SUFFIX")
    parser.add_option("--project-url", dest="project_url",
                      help="URL for project web pages",
                      default=None, metavar="PROJECT_URL")
    parser.add_option("--project-ml-url", dest="project_ml_url",
                      help="URL for project mailing list",
                      default=None, metavar="PROJECT_ML_URL")
    (options, args) = parser.parse_args()
    if len(args) < 2:
        parser.print_help()
        sys.exit()
    out_path, project_name = args
    if options.repo_name is None:
        options.repo_name = project_name
    if options.main_gh_user is None:
        options.main_gh_user = options.repo_name
    repo_path = clone_repo(options.gitwash_url, options.gitwash_branch)
    try:
        copy_replace((('PROJECTNAME', project_name),
                      ('REPONAME', options.repo_name),
                      ('MAIN_GH_USER', options.main_gh_user)),
                     repo_path,
                     out_path,
                     cp_globs=(pjoin('gitwash', '*'),),
                     rep_globs=('*.rst',),
                     renames=((r'\.rst$', options.source_suffix),))
        make_link_targets(project_name,
                          options.main_gh_user,
                          options.repo_name,
                          pjoin(out_path, 'gitwash', 'known_projects.inc'),
                          pjoin(out_path, 'gitwash', 'this_project.inc'),
                          options.project_url,
                          options.project_ml_url)
    finally:
        shutil.rmtree(repo_path)


if __name__ == '__main__':
    main()
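Following the USAGE text above, a hypothetical invocation; the URLs here are illustrative stand-ins, not canonical project addresses:

python tools/gitwash_dumper.py doc/devel dipy \
    --repo-name=dipy --github-user=dipy \
    --project-url=https://dipy.org \
    --project-ml-url=https://mail.python.org/mailman/listinfo/neuroimaging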
dipy-1.11.0/tools/run_with_env.cmd

:: To build extensions for 64 bit Python 3, we need to configure environment
:: variables to use the MSVC 2010 C++ compilers from GRMSDKX_EN_DVD.iso of:
:: MS Windows SDK for Windows 7 and .NET Framework 4 (SDK v7.1)
::
:: To build extensions for 64 bit Python 2, we need to configure environment
:: variables to use the MSVC 2008 C++ compilers from GRMSDKX_EN_DVD.iso of:
:: MS Windows SDK for Windows 7 and .NET Framework 3.5 (SDK v7.0)
::
:: 32 bit builds, and 64-bit builds for 3.5 and beyond, do not require
:: specific environment configurations.
::
:: Note: this script needs to be run with the /E:ON and /V:ON flags for the
:: cmd interpreter, at least for (SDK v7.0)
::
:: More details at:
:: https://github.com/cython/cython/wiki/64BitCythonExtensionsOnWindows
:: https://stackoverflow.com/a/13751649/163740
::
:: Author: Olivier Grisel
:: License: CC0 1.0 Universal: https://creativecommons.org/publicdomain/zero/1.0/
::
:: Notes about batch files for Python people:
::
:: Quotes in values are literally part of the values:
::     SET FOO="bar"
:: FOO is now five characters long: " b a r "
:: If you don't want quotes, don't include them on the right-hand side.
::
:: The CALL lines at the end of this file look redundant, but if you move them
:: outside of the IF clauses, they do not run properly in the SET_SDK_64==Y
:: case, I don't know why.
@ECHO OFF
SET COMMAND_TO_RUN=%*
SET WIN_SDK_ROOT=C:\Program Files\Microsoft SDKs\Windows
SET WIN_WDK=c:\Program Files (x86)\Windows Kits\10\Include\wdf

:: Extract the major and minor versions, and allow for the minor version to be
:: more than 9.  This requires the version number to have two dots in it.
SET MAJOR_PYTHON_VERSION=%PYTHON_VERSION:~0,1%
IF "%PYTHON_VERSION:~3,1%" == "." (
    SET MINOR_PYTHON_VERSION=%PYTHON_VERSION:~2,1%
) ELSE (
    SET MINOR_PYTHON_VERSION=%PYTHON_VERSION:~2,2%
)

:: Based on the Python version, determine what SDK version to use, and whether
:: to set the SDK for 64-bit.
IF %MAJOR_PYTHON_VERSION% == 2 (
    SET WINDOWS_SDK_VERSION="v7.0"
    SET SET_SDK_64=Y
) ELSE (
    IF %MAJOR_PYTHON_VERSION% == 3 (
        SET WINDOWS_SDK_VERSION="v7.1"
        IF %MINOR_PYTHON_VERSION% LEQ 4 (
            SET SET_SDK_64=Y
        ) ELSE (
            SET SET_SDK_64=N
            IF EXIST "%WIN_WDK%" (
                :: See: https://connect.microsoft.com/VisualStudio/feedback/details/1610302/
                REN "%WIN_WDK%" 0wdf
            )
        )
    ) ELSE (
        ECHO Unsupported Python version: "%MAJOR_PYTHON_VERSION%"
        EXIT 1
    )
)

IF %PYTHON_ARCH% == 64 (
    IF %SET_SDK_64% == Y (
        ECHO Configuring Windows SDK %WINDOWS_SDK_VERSION% for Python %MAJOR_PYTHON_VERSION% on a 64 bit architecture
        SET DISTUTILS_USE_SDK=1
        SET MSSdk=1
        "%WIN_SDK_ROOT%\%WINDOWS_SDK_VERSION%\Setup\WindowsSdkVer.exe" -q -version:%WINDOWS_SDK_VERSION%
        "%WIN_SDK_ROOT%\%WINDOWS_SDK_VERSION%\Bin\SetEnv.cmd" /x64 /release
        ECHO Executing: %COMMAND_TO_RUN%
        call %COMMAND_TO_RUN% || EXIT 1
    ) ELSE (
        ECHO Using default MSVC build environment for 64 bit architecture
        ECHO Executing: %COMMAND_TO_RUN%
        call %COMMAND_TO_RUN% || EXIT 1
    )
) ELSE (
    ECHO Using default MSVC build environment for 32 bit architecture
    ECHO Executing: %COMMAND_TO_RUN%
    call %COMMAND_TO_RUN% || EXIT 1
)

dipy-1.11.0/tools/toollib.py

"""Various utilities common to DIPY release and maintenance tools."""
# Library imports
import os
import sys

# Useful shorthands
pjoin = os.path.join
cd = os.chdir


# Utility functions
def c(cmd):
    """Run system command, raise SystemExit if it returns an error."""
    print("$", cmd)
    stat = os.system(cmd)
    # stat = 0  # Uncomment this and comment the previous line for debug mode
    if stat:
        raise SystemExit(f"Command {cmd} failed with code: {stat}")


def get_dipydir():
    """Get dipy directory from command line, or assume it's the one above."""
    # Initialize arguments and check location
    try:
        dipydir = sys.argv[1]
    except IndexError:
        dipydir = '..'
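        # No argument given: assume we are being run from a subdirectory
        # (such as tools/) of the DIPY checkout.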
    dipydir = os.path.abspath(dipydir)
    cd(dipydir)
    if not (os.path.isdir('dipy') and os.path.isfile('setup.py')):
        raise SystemExit(f'Invalid dipy directory: {dipydir}')
    return dipydir


def compile_tree():
    """Compile all Python files below current directory."""
    stat = os.system('python3 -m compileall .')
    if stat:
        msg = '*** ERROR: Some Python files in tree do NOT compile! ***\n'
        msg += 'See messages above for the actual file that produced it.\n'
        raise SystemExit(msg)
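toollib is imported by the release scripts rather than run directly; as a point of reference, the shell equivalent of compile_tree() from a checkout root is simply:

python3 -m compileall .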