././@PaxHeader0000000000000000000000000000003300000000000010211 xustar0027 mtime=1741097499.642428 con_duct-0.11.0/0000755000175100001660000000000014761605034013000 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1741097495.0 con_duct-0.11.0/CHANGELOG.md0000644000175100001660000002747414761605027014631 0ustar00runnerdocker# v0.11.0 (Tue Mar 04 2025) #### 🚀 Enhancement - Add con-duct --log-levels [#253](https://github.com/con/duct/pull/253) ([@asmacdo](https://github.com/asmacdo)) - Add --eval-filter [#241](https://github.com/con/duct/pull/241) ([@asmacdo](https://github.com/asmacdo) [@yarikoptic](https://github.com/yarikoptic)) #### 🐛 Bug Fix - docs: add RRID badge to README [#254](https://github.com/con/duct/pull/254) ([@asmacdo](https://github.com/asmacdo)) - Implement and use packaging.Version replacement [#247](https://github.com/con/duct/pull/247) ([@asmacdo](https://github.com/asmacdo)) - Add test: ls field list should contain all info.json fields [#243](https://github.com/con/duct/pull/243) ([@asmacdo](https://github.com/asmacdo)) - ls --help: list fields only once [#250](https://github.com/con/duct/pull/250) ([@asmacdo](https://github.com/asmacdo)) - bf: yaml should be optional [#248](https://github.com/con/duct/pull/248) ([@asmacdo](https://github.com/asmacdo)) - Fixup: blacken [#249](https://github.com/con/duct/pull/249) ([@asmacdo](https://github.com/asmacdo)) #### Authors: 2 - Austin Macdonald ([@asmacdo](https://github.com/asmacdo)) - Yaroslav Halchenko ([@yarikoptic](https://github.com/yarikoptic)) --- # v0.10.1 (Fri Feb 07 2025) #### 🐛 Bug Fix - bf: show ls results when no positional args given [#240](https://github.com/con/duct/pull/240) ([@asmacdo](https://github.com/asmacdo)) #### Authors: 1 - Austin Macdonald ([@asmacdo](https://github.com/asmacdo)) --- # v0.10.0 (Fri Feb 07 2025) #### 🚀 Enhancement - con-duct ls [#224](https://github.com/con/duct/pull/224) 
([@asmacdo](https://github.com/asmacdo)) #### 🐛 Bug Fix - Test abandoning parent [#226](https://github.com/con/duct/pull/226) ([@asmacdo](https://github.com/asmacdo) [@yarikoptic](https://github.com/yarikoptic)) - Fix issue where pillow fails to install on pypy 3.9 [#233](https://github.com/con/duct/pull/233) ([@asmacdo](https://github.com/asmacdo)) - Add Fail time unit [#229](https://github.com/con/duct/pull/229) ([@asmacdo](https://github.com/asmacdo)) #### Authors: 2 - Austin Macdonald ([@asmacdo](https://github.com/asmacdo)) - Yaroslav Halchenko ([@yarikoptic](https://github.com/yarikoptic)) --- # v0.9.0 (Tue Dec 03 2024) #### 🚀 Enhancement - Add --fail-time option and by default remove all outputs if command fails fast [#227](https://github.com/con/duct/pull/227) ([@yarikoptic](https://github.com/yarikoptic)) #### 🐛 Bug Fix - Add FAQ with a question on git-annex and large files [#225](https://github.com/con/duct/pull/225) ([@yarikoptic](https://github.com/yarikoptic)) - Add released auto plugin to mark issues with releases where they were fixed [#216](https://github.com/con/duct/pull/216) ([@yarikoptic](https://github.com/yarikoptic)) - ENH/BF: render floats only to 2 digits after . . 
Allow for composing format + conversion [#214](https://github.com/con/duct/pull/214) ([@yarikoptic](https://github.com/yarikoptic)) - Various enhancements for plot command [#217](https://github.com/con/duct/pull/217) ([@yarikoptic](https://github.com/yarikoptic)) #### Authors: 1 - Yaroslav Halchenko ([@yarikoptic](https://github.com/yarikoptic)) --- # v0.8.0 (Thu Oct 24 2024) #### 🚀 Enhancement - Add testing for Python 3.13 [#202](https://github.com/con/duct/pull/202) ([@asmacdo](https://github.com/asmacdo)) - Add `con-duct plot` with matplotlib backend [#198](https://github.com/con/duct/pull/198) ([@asmacdo](https://github.com/asmacdo)) #### Authors: 1 - Austin Macdonald ([@asmacdo](https://github.com/asmacdo)) --- # v0.7.1 (Thu Oct 24 2024) #### 🐛 Bug Fix - Persistently open usage file until the end and open info as "w" not "a" [#209](https://github.com/con/duct/pull/209) ([@yarikoptic](https://github.com/yarikoptic) [@asmacdo](https://github.com/asmacdo)) #### Authors: 2 - Austin Macdonald ([@asmacdo](https://github.com/asmacdo)) - Yaroslav Halchenko ([@yarikoptic](https://github.com/yarikoptic)) --- # v0.7.0 (Thu Oct 24 2024) #### 🚀 Enhancement - Rm num_samples & num_reports from summary_format [#200](https://github.com/con/duct/pull/200) ([@asmacdo](https://github.com/asmacdo)) - Add start and end time to info.json [#201](https://github.com/con/duct/pull/201) ([@asmacdo](https://github.com/asmacdo)) #### Authors: 1 - Austin Macdonald ([@asmacdo](https://github.com/asmacdo)) --- # v0.6.0 (Mon Oct 14 2024) #### 🚀 Enhancement - Drop Python 3.8, which is EOL [#199](https://github.com/con/duct/pull/199) ([@asmacdo](https://github.com/asmacdo)) - Create structure for full con-duct suite [#164](https://github.com/con/duct/pull/164) ([@asmacdo](https://github.com/asmacdo)) - Add ps stat counter [#182](https://github.com/con/duct/pull/182) ([@asmacdo](https://github.com/asmacdo)) #### 🐛 Bug Fix - Explicitly mention con-duct command in the summary 
[#204](https://github.com/con/duct/pull/204) ([@asmacdo](https://github.com/asmacdo)) - BF: Do not rely on having sources under ./src and __main__.py to be executable [#196](https://github.com/con/duct/pull/196) ([@yarikoptic](https://github.com/yarikoptic)) #### Authors: 2 - Austin Macdonald ([@asmacdo](https://github.com/asmacdo)) - Yaroslav Halchenko ([@yarikoptic](https://github.com/yarikoptic)) --- # v0.5.0 (Wed Oct 02 2024) #### 🚀 Enhancement - Report $USER as .user, and store actual numeric UID as .uid [#195](https://github.com/con/duct/pull/195) ([@yarikoptic](https://github.com/yarikoptic)) - Move all logic into single file [#191](https://github.com/con/duct/pull/191) ([@asmacdo](https://github.com/asmacdo)) #### Authors: 2 - Austin Macdonald ([@asmacdo](https://github.com/asmacdo)) - Yaroslav Halchenko ([@yarikoptic](https://github.com/yarikoptic)) --- # v0.4.0 (Mon Sep 30 2024) #### 🚀 Enhancement - Add custom formatter conversion flags and colors based on datalad ls [#183](https://github.com/con/duct/pull/183) ([@yarikoptic](https://github.com/yarikoptic) [@asmacdo](https://github.com/asmacdo)) #### Authors: 2 - Austin Macdonald ([@asmacdo](https://github.com/asmacdo)) - Yaroslav Halchenko ([@yarikoptic](https://github.com/yarikoptic)) --- # v0.3.1 (Fri Sep 20 2024) #### 🐛 Bug Fix - BF: Fix sample aggregation [#180](https://github.com/con/duct/pull/180) ([@asmacdo](https://github.com/asmacdo)) - Fix operator precedence involving or and addition [#179](https://github.com/con/duct/pull/179) ([@asmacdo](https://github.com/asmacdo)) #### Authors: 1 - Austin Macdonald ([@asmacdo](https://github.com/asmacdo)) --- # v0.3.0 (Thu Sep 12 2024) #### 🚀 Enhancement - (Re)add etime and cmd into process stats [#175](https://github.com/con/duct/pull/175) ([@asmacdo](https://github.com/asmacdo)) - Modify exit code if cmd terminated by signal [#169](https://github.com/con/duct/pull/169) ([@asmacdo](https://github.com/asmacdo)) - Add output files and schema version to 
info.json [#168](https://github.com/con/duct/pull/168) ([@asmacdo](https://github.com/asmacdo)) #### 🐛 Bug Fix - Catchup to actual version for auto releases [#177](https://github.com/con/duct/pull/177) ([@asmacdo](https://github.com/asmacdo)) - Argparse abbreviation affects and breaks cmd args [#167](https://github.com/con/duct/pull/167) ([@asmacdo](https://github.com/asmacdo)) - Add tests for correct handling of args [#166](https://github.com/con/duct/pull/166) ([@asmacdo](https://github.com/asmacdo)) #### Authors: 1 - Austin Macdonald ([@asmacdo](https://github.com/asmacdo)) --- # v0.2.0 (Thurs Aug 22 2024) #### 🚀 Enhancement - Add log level NONE and deprecate quiet [#159](https://github.com/con/duct/pull/159) ([@asmacdo](https://github.com/asmacdo)) - Collect hostname in sys info [#153](https://github.com/con/duct/pull/153) ([@asmacdo](https://github.com/asmacdo)) - RF+BF: make explicit wall_clock_time separate from elapsed_time [#141](https://github.com/con/duct/pull/141) ([@yarikoptic](https://github.com/yarikoptic)) - RF: Add logging, dissolve duct_print (INFO level), add CLI option -l, dissolve --quiet [#140](https://github.com/con/duct/pull/140) ([@yarikoptic](https://github.com/yarikoptic)) - ENH: give "typical" shell behavior when command is not found to be executed [#138](https://github.com/con/duct/pull/138) ([@yarikoptic](https://github.com/yarikoptic)) - Use None rather than 0 prior to measurement [#135](https://github.com/con/duct/pull/135) ([@asmacdo](https://github.com/asmacdo)) - RF+ENH: output messages to stderr not stdout + move printing out of "controllers/models" [#136](https://github.com/con/duct/pull/136) ([@yarikoptic](https://github.com/yarikoptic)) - Remove units for machine readability [#125](https://github.com/con/duct/pull/125) ([@asmacdo](https://github.com/asmacdo)) - Make execute return returncode of the process and use it for duct CLI process exit code [#119](https://github.com/con/duct/pull/119) 
([@yarikoptic](https://github.com/yarikoptic)) #### 🐛 Bug Fix - Add direct pytest usage to CONTRIBUTING [#161](https://github.com/con/duct/pull/161) ([@asmacdo](https://github.com/asmacdo)) - Improve helptext top-level description [#158](https://github.com/con/duct/pull/158) ([@asmacdo](https://github.com/asmacdo)) - Check that each PR has one of the semver labels [#156](https://github.com/con/duct/pull/156) ([@asmacdo](https://github.com/asmacdo)) - Do not use setsid directly, use dedicated start_new_session [#155](https://github.com/con/duct/pull/155) ([@yarikoptic](https://github.com/yarikoptic)) - Disable MacOS tests [#151](https://github.com/con/duct/pull/151) ([@asmacdo](https://github.com/asmacdo)) - Fix pmem calculation [#151](https://github.com/con/duct/pull/151) ([@asmacdo](https://github.com/asmacdo)) - Collect sys info and env in parallel [#152](https://github.com/con/duct/pull/152) ([@asmacdo](https://github.com/asmacdo)) - Fix GPU info collection [#147](https://github.com/con/duct/pull/147) ([@asmacdo](https://github.com/asmacdo) [@yarikoptic](https://github.com/yarikoptic)) - RF+BF: update maxes on each sample, more logging during monitoring [#146](https://github.com/con/duct/pull/146) ([@yarikoptic](https://github.com/yarikoptic)) - RF: no shebang since file is no longer can be executed [#139](https://github.com/con/duct/pull/139) ([@yarikoptic](https://github.com/yarikoptic)) #### Authors: 2 - Austin Macdonald ([@asmacdo](https://github.com/asmacdo)) - Yaroslav Halchenko ([@yarikoptic](https://github.com/yarikoptic)) # v0.1.1 (Wed Jul 31 2024) #### 🐛 Bug Fix - SC_PAGESIZE should work on macOS and Linux [#115](https://github.com/con/duct/pull/115) ([@asmacdo](https://github.com/asmacdo)) #### Authors: 1 - Austin Macdonald ([@asmacdo](https://github.com/asmacdo)) --- # v0.1.0 (Mon Jul 29 2024) #### 🚀 Enhancement - Fixup autorc syntax [#110](https://github.com/con/duct/pull/110) ([@asmacdo](https://github.com/asmacdo)) - Explain totals 
[#110](https://github.com/con/duct/pull/110) ([@asmacdo](https://github.com/asmacdo)) - Fix test [#110](https://github.com/con/duct/pull/110) ([@asmacdo](https://github.com/asmacdo)) - Improve usage.json schema [#110](https://github.com/con/duct/pull/110) ([@asmacdo](https://github.com/asmacdo)) #### 🐛 Bug Fix - Use datalad labels to avoid future collision with Dependabot [#113](https://github.com/con/duct/pull/113) ([@asmacdo](https://github.com/asmacdo)) - release on PR merge [#113](https://github.com/con/duct/pull/113) ([@asmacdo](https://github.com/asmacdo)) - Prepare for auto-powered releases [#113](https://github.com/con/duct/pull/113) ([@asmacdo](https://github.com/asmacdo)) - sorted + output-capture [#112](https://github.com/con/duct/pull/112) ([@asmacdo](https://github.com/asmacdo)) - Add pypi keywords [#112](https://github.com/con/duct/pull/112) ([@asmacdo](https://github.com/asmacdo)) - Fixup ignore new location of egginfo [#112](https://github.com/con/duct/pull/112) ([@asmacdo](https://github.com/asmacdo)) #### ⚠️ Pushed to `main` - Update README for release ([@asmacdo](https://github.com/asmacdo)) #### Authors: 1 - Austin Macdonald ([@asmacdo](https://github.com/asmacdo)) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1741097484.0 con_duct-0.11.0/LICENSE0000644000175100001660000000207514761605014014007 0ustar00runnerdockerMIT License Copyright (c) 2024 Center for Open Neuroscience Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of 
the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1741097484.0 con_duct-0.11.0/MANIFEST.in0000644000175100001660000000014114761605014014530 0ustar00runnerdockerinclude CHANGELOG.* CONTRIBUTORS.* LICENSE tox.ini graft src graft test global-exclude *.py[cod] ././@PaxHeader0000000000000000000000000000003300000000000010211 xustar0027 mtime=1741097499.642428 con_duct-0.11.0/PKG-INFO0000644000175100001660000002335714761605034014107 0ustar00runnerdockerMetadata-Version: 2.2 Name: con-duct Version: 0.11.0 Summary: Runs a not-so-simple command and collects resource usage metrics Home-page: https://github.com/con/duct/ Author: Austin Macdonald Author-email: austin@dartmouth.edu License: MIT Project-URL: Source Code, https://github.com/con/duct/ Project-URL: Bug Tracker, https://github.com/con/duct/issues Keywords: command-line,cpu,memory,metrics,output-capture,provenance,time,usage Classifier: Development Status :: 3 - Alpha Classifier: Programming Language :: Python :: 3 :: Only Classifier: Programming Language :: Python :: 3 Classifier: Programming Language :: Python :: 3.9 Classifier: Programming Language :: Python :: 3.10 Classifier: Programming Language :: Python :: 3.11 Classifier: Programming Language :: Python :: 3.12 Classifier: Programming Language :: Python :: 3.13 Classifier: Programming Language :: Python :: Implementation :: CPython Classifier: Programming Language :: Python :: Implementation :: PyPy Classifier: License :: OSI Approved :: MIT License Classifier: 
Environment :: Console Classifier: Intended Audience :: Developers Classifier: Intended Audience :: Information Technology Classifier: Intended Audience :: Science/Research Classifier: Intended Audience :: System Administrators Classifier: Topic :: System :: Systems Administration Requires-Python: >=3.9 Description-Content-Type: text/markdown License-File: LICENSE Provides-Extra: all Requires-Dist: matplotlib; extra == "all" Requires-Dist: PyYAML; extra == "all" Requires-Dist: pyout; extra == "all" # duct [![codecov](https://codecov.io/gh/con/duct/graph/badge.svg?token=JrPazw0Vn4)](https://codecov.io/gh/con/duct) [![PyPI version](https://badge.fury.io/py/con-duct.svg)](https://badge.fury.io/py/con-duct) [![RRID](https://img.shields.io/badge/RRID-SCR__025436-blue)](https://identifiers.org/RRID:SCR_025436) ## Installation pip install con-duct ## Quickstart Try it out! duct --sample-interval 0.5 --report-interval 1 test/data/test_script.py --duration 3 --memory-size=1000 `duct` is most useful when the report-interval is less than the duration of the script. ## Summary: A process wrapper script that monitors the execution of a command. ```shell >duct --help usage: duct [-h] [--version] [-p OUTPUT_PREFIX] [--summary-format SUMMARY_FORMAT] [--colors] [--clobber] [-l {NONE,CRITICAL,ERROR,WARNING,INFO,DEBUG}] [-q] [--sample-interval SAMPLE_INTERVAL] [--report-interval REPORT_INTERVAL] [--fail-time FAIL_TIME] [-c {all,none,stdout,stderr}] [-o {all,none,stdout,stderr}] [-t {all,system-summary,processes-samples}] command [command_args ...] ... duct is a lightweight wrapper that collects execution data for an arbitrary command. Execution data includes execution time, system information, and resource usage statistics of the command and all its child processes. It is intended to simplify the problem of recording the resources necessary to execute a command, particularly in an HPC environment. Resource usage is determined by polling (at a sample-interval). 
During execution, duct produces a JSON lines (see https://jsonlines.org) file with one data point recorded for each report (at a report-interval). environment variables: Many duct options can be configured by environment variables (which are overridden by command line options). DUCT_LOG_LEVEL: see --log-level DUCT_OUTPUT_PREFIX: see --output-prefix DUCT_SUMMARY_FORMAT: see --summary-format DUCT_SAMPLE_INTERVAL: see --sample-interval DUCT_REPORT_INTERVAL: see --report-interval DUCT_CAPTURE_OUTPUTS: see --capture-outputs positional arguments: command [command_args ...] The command to execute, along with its arguments. command_args Arguments for the command. options: -h, --help show this help message and exit --version show program's version number and exit -p OUTPUT_PREFIX, --output-prefix OUTPUT_PREFIX File string format to be used as a prefix for the files -- the captured stdout and stderr and the resource usage logs. The understood variables are {datetime}, {datetime_filesafe}, and {pid}. Leading directories will be created if they do not exist. You can also provide value via DUCT_OUTPUT_PREFIX env variable. (default: .duct/logs/{datetime_filesafe}-{pid}_) --summary-format SUMMARY_FORMAT Output template to use when printing the summary following execution. Accepts custom conversion flags: !S: Converts filesizes to human readable units, green if measured, red if None. !E: Colors exit code, green if falsey, red if truthy, and red if None. !X: Colors green if truthy, red if falsey. 
!N: Colors green if not None, red if None (default: Summary: Exit Code: {exit_code!E} Command: {command} Log files location: {logs_prefix} Wall Clock Time: {wall_clock_time:.3f} sec Memory Peak Usage (RSS): {peak_rss!S} Memory Average Usage (RSS): {average_rss!S} Virtual Memory Peak Usage (VSZ): {peak_vsz!S} Virtual Memory Average Usage (VSZ): {average_vsz!S} Memory Peak Percentage: {peak_pmem:.2f!N}% Memory Average Percentage: {average_pmem:.2f!N}% CPU Peak Usage: {peak_pcpu:.2f!N}% Average CPU Usage: {average_pcpu:.2f!N}% ) --colors Use colors in duct output. (default: False) --clobber Replace log files if they already exist. (default: False) -l {NONE,CRITICAL,ERROR,WARNING,INFO,DEBUG}, --log-level {NONE,CRITICAL,ERROR,WARNING,INFO,DEBUG} Level of log output to stderr, use NONE to entirely disable. (default: INFO) -q, --quiet [deprecated, use log level NONE] Disable duct logging output (to stderr) (default: False) --sample-interval SAMPLE_INTERVAL, --s-i SAMPLE_INTERVAL Interval in seconds between status checks of the running process. Sample interval must be less than or equal to report interval, and it achieves the best results when sample is significantly less than the runtime of the process. (default: 1.0) --report-interval REPORT_INTERVAL, --r-i REPORT_INTERVAL Interval in seconds at which to report aggregated data. (default: 60.0) --fail-time FAIL_TIME, --f-t FAIL_TIME If command fails in less than this specified time (seconds), duct would remove logs. Set to 0 if you would like to keep logs for a failing command regardless of its run time. Set to negative (e.g. -1) if you would like to not keep logs for any failing command. (default: 3.0) -c {all,none,stdout,stderr}, --capture-outputs {all,none,stdout,stderr} Record stdout, stderr, all, or none to log files. You can also provide value via DUCT_CAPTURE_OUTPUTS env variable. 
(default: all) -o {all,none,stdout,stderr}, --outputs {all,none,stdout,stderr} Print stdout, stderr, all, or none to stdout/stderr respectively. (default: all) -t {all,system-summary,processes-samples}, --record-types {all,system-summary,processes-samples} Record system-summary, processes-samples, or all (default: all) ``` # con-duct suite In addition to `duct`, this project also includes a set of optional helpers under the `con-duct` command. These helpers may use 3rd party python libraries. ## Installation pip install con-duct[all] ## Extras Helptext ```shell >con-duct --help usage: con-duct [options] A suite of commands to manage or manipulate con-duct logs. positional arguments: {pp,plot,ls} Available subcommands pp Pretty print a JSON log. plot Plot resource usage for an execution. ls Print execution information for all matching runs. options: -h, --help show this help message and exit -l {NONE,CRITICAL,ERROR,WARNING,INFO,DEBUG}, --log-level {NONE,CRITICAL,ERROR,WARNING,INFO,DEBUG} Level of log output to stderr, use NONE to entirely disable. ``` ## FAQs ### git-annex add keeps adding duct logs directly into git By default, [git-annex](https://git-annex.branchable.com/) treats all dotfiles, and files under directories starting with a `.`, as "small" regardless of the `annex.largefiles` setting [[ref: an issue describing the logic](https://git-annex.branchable.com/bugs/add__58___inconsistently_treats_files_in_dotdirs_as_dotfiles/?updated#comment-efc1f2aa8f46e88a8be9837a56cfa6f7)]. It is necessary to set the `annex.dotfiles` variable to `true` to make git-annex treat them as regular files and thus subject to the `annex.largefiles` setting [[ref: git-annex config](https://git-annex.branchable.com/git-annex-config/)]. This can be done repository-wide (not just for a specific clone, but for any instance, since the setting is recorded in the `git-annex` branch) using `git annex config --set annex.dotfiles true`. 
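The `--summary-format` conversion flags documented above (`!S`, `!E`, `!X`, `!N`) extend Python's format-string mini-language. As a rough sketch of how such a flag can be wired up — this is an illustration, not duct's actual implementation, which also handles colors and the combined `{peak_pmem:.2f!N}` spec ordering — a `string.Formatter` subclass can intercept a custom conversion character:

```python
import string


def human_size(num_bytes):
    """Render a byte count with binary units, e.g. 2048 -> '2.0 KiB'."""
    if num_bytes is None:
        return "None"
    size = float(num_bytes)
    for unit in ("B", "KiB", "MiB", "GiB"):
        if abs(size) < 1024.0:
            return f"{size:.1f} {unit}"
        size /= 1024.0
    return f"{size:.1f} TiB"


class SummaryFormatter(string.Formatter):
    """Sketch of a formatter supporting a hypothetical '!S' filesize flag."""

    def convert_field(self, value, conversion):
        if conversion == "S":  # custom flag: human-readable filesize
            return human_size(value)
        # Fall back to the standard conversions (!s, !r, !a).
        return super().convert_field(value, conversion)


fmt = SummaryFormatter()
print(fmt.format("Memory Peak Usage (RSS): {peak_rss!S}", peak_rss=4200000))
# → Memory Peak Usage (RSS): 4.0 MiB
```

`string.Formatter` leaves unknown conversion characters to `convert_field`, which is what makes this kind of extension possible without reimplementing the format-spec parser.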
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1741097484.0 con_duct-0.11.0/README.md0000644000175100001660000002034514761605014014261 0ustar00runnerdocker# duct [![codecov](https://codecov.io/gh/con/duct/graph/badge.svg?token=JrPazw0Vn4)](https://codecov.io/gh/con/duct) [![PyPI version](https://badge.fury.io/py/con-duct.svg)](https://badge.fury.io/py/con-duct) [![RRID](https://img.shields.io/badge/RRID-SCR__025436-blue)](https://identifiers.org/RRID:SCR_025436) ## Installation pip install con-duct ## Quickstart Try it out! duct --sample-interval 0.5 --report-interval 1 test/data/test_script.py --duration 3 --memory-size=1000 `duct` is most useful when the report-interval is less than the duration of the script. ## Summary: A process wrapper script that monitors the execution of a command. ```shell >duct --help usage: duct [-h] [--version] [-p OUTPUT_PREFIX] [--summary-format SUMMARY_FORMAT] [--colors] [--clobber] [-l {NONE,CRITICAL,ERROR,WARNING,INFO,DEBUG}] [-q] [--sample-interval SAMPLE_INTERVAL] [--report-interval REPORT_INTERVAL] [--fail-time FAIL_TIME] [-c {all,none,stdout,stderr}] [-o {all,none,stdout,stderr}] [-t {all,system-summary,processes-samples}] command [command_args ...] ... duct is a lightweight wrapper that collects execution data for an arbitrary command. Execution data includes execution time, system information, and resource usage statistics of the command and all its child processes. It is intended to simplify the problem of recording the resources necessary to execute a command, particularly in an HPC environment. Resource usage is determined by polling (at a sample-interval). During execution, duct produces a JSON lines (see https://jsonlines.org) file with one data point recorded for each report (at a report-interval). environment variables: Many duct options can be configured by environment variables (which are overridden by command line options). 
DUCT_LOG_LEVEL: see --log-level DUCT_OUTPUT_PREFIX: see --output-prefix DUCT_SUMMARY_FORMAT: see --summary-format DUCT_SAMPLE_INTERVAL: see --sample-interval DUCT_REPORT_INTERVAL: see --report-interval DUCT_CAPTURE_OUTPUTS: see --capture-outputs positional arguments: command [command_args ...] The command to execute, along with its arguments. command_args Arguments for the command. options: -h, --help show this help message and exit --version show program's version number and exit -p OUTPUT_PREFIX, --output-prefix OUTPUT_PREFIX File string format to be used as a prefix for the files -- the captured stdout and stderr and the resource usage logs. The understood variables are {datetime}, {datetime_filesafe}, and {pid}. Leading directories will be created if they do not exist. You can also provide value via DUCT_OUTPUT_PREFIX env variable. (default: .duct/logs/{datetime_filesafe}-{pid}_) --summary-format SUMMARY_FORMAT Output template to use when printing the summary following execution. Accepts custom conversion flags: !S: Converts filesizes to human readable units, green if measured, red if None. !E: Colors exit code, green if falsey, red if truthy, and red if None. !X: Colors green if truthy, red if falsey. !N: Colors green if not None, red if None (default: Summary: Exit Code: {exit_code!E} Command: {command} Log files location: {logs_prefix} Wall Clock Time: {wall_clock_time:.3f} sec Memory Peak Usage (RSS): {peak_rss!S} Memory Average Usage (RSS): {average_rss!S} Virtual Memory Peak Usage (VSZ): {peak_vsz!S} Virtual Memory Average Usage (VSZ): {average_vsz!S} Memory Peak Percentage: {peak_pmem:.2f!N}% Memory Average Percentage: {average_pmem:.2f!N}% CPU Peak Usage: {peak_pcpu:.2f!N}% Average CPU Usage: {average_pcpu:.2f!N}% ) --colors Use colors in duct output. (default: False) --clobber Replace log files if they already exist. 
(default: False) -l {NONE,CRITICAL,ERROR,WARNING,INFO,DEBUG}, --log-level {NONE,CRITICAL,ERROR,WARNING,INFO,DEBUG} Level of log output to stderr, use NONE to entirely disable. (default: INFO) -q, --quiet [deprecated, use log level NONE] Disable duct logging output (to stderr) (default: False) --sample-interval SAMPLE_INTERVAL, --s-i SAMPLE_INTERVAL Interval in seconds between status checks of the running process. Sample interval must be less than or equal to report interval, and it achieves the best results when sample is significantly less than the runtime of the process. (default: 1.0) --report-interval REPORT_INTERVAL, --r-i REPORT_INTERVAL Interval in seconds at which to report aggregated data. (default: 60.0) --fail-time FAIL_TIME, --f-t FAIL_TIME If command fails in less than this specified time (seconds), duct would remove logs. Set to 0 if you would like to keep logs for a failing command regardless of its run time. Set to negative (e.g. -1) if you would like to not keep logs for any failing command. (default: 3.0) -c {all,none,stdout,stderr}, --capture-outputs {all,none,stdout,stderr} Record stdout, stderr, all, or none to log files. You can also provide value via DUCT_CAPTURE_OUTPUTS env variable. (default: all) -o {all,none,stdout,stderr}, --outputs {all,none,stdout,stderr} Print stdout, stderr, all, or none to stdout/stderr respectively. (default: all) -t {all,system-summary,processes-samples}, --record-types {all,system-summary,processes-samples} Record system-summary, processes-samples, or all (default: all) ``` # con-duct suite In addition to `duct`, this project also includes a set of optional helpers under the `con-duct` command. These helpers may use 3rd party python libraries. ## Installation pip install con-duct[all] ## Extras Helptext ```shell >con-duct --help usage: con-duct [options] A suite of commands to manage or manipulate con-duct logs. positional arguments: {pp,plot,ls} Available subcommands pp Pretty print a JSON log. 
plot Plot resource usage for an execution. ls Print execution information for all matching runs. options: -h, --help show this help message and exit -l {NONE,CRITICAL,ERROR,WARNING,INFO,DEBUG}, --log-level {NONE,CRITICAL,ERROR,WARNING,INFO,DEBUG} Level of log output to stderr, use NONE to entirely disable. ``` ## FAQs ### git-annex add keeps adding duct logs directly into git By default, [git-annex](https://git-annex.branchable.com/) treats all dotfiles, and files under directories starting with a `.`, as "small" regardless of the `annex.largefiles` setting [[ref: an issue describing the logic](https://git-annex.branchable.com/bugs/add__58___inconsistently_treats_files_in_dotdirs_as_dotfiles/?updated#comment-efc1f2aa8f46e88a8be9837a56cfa6f7)]. It is necessary to set the `annex.dotfiles` variable to `true` to make git-annex treat them as regular files and thus subject to the `annex.largefiles` setting [[ref: git-annex config](https://git-annex.branchable.com/git-annex-config/)]. This can be done repository-wide (not just for a specific clone, but for any instance, since the setting is recorded in the `git-annex` branch) using `git annex config --set annex.dotfiles true`. 
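Because duct writes its resource samples as JSON Lines (one JSON object per line), the logs are easy to post-process even without the `con-duct` helpers. A minimal sketch — note the record shape and field names below are hypothetical stand-ins for illustration, not duct's actual usage.json schema:

```python
import json
from io import StringIO

# Hypothetical stand-in for a duct usage.json file: one JSON object per line.
usage_log = StringIO(
    '{"timestamp": "2025-03-04T10:00:00", "totals": {"pmem": 1.2, "pcpu": 50.0}}\n'
    '{"timestamp": "2025-03-04T10:01:00", "totals": {"pmem": 2.4, "pcpu": 75.0}}\n'
)

# JSON Lines parsing: decode each non-empty line independently.
reports = [json.loads(line) for line in usage_log if line.strip()]
peak_pcpu = max(r["totals"]["pcpu"] for r in reports)
print(f"{len(reports)} reports, peak %CPU: {peak_pcpu}")
# → 2 reports, peak %CPU: 75.0
```

The same line-at-a-time approach works for arbitrarily large logs, since no record depends on any other.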
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1741097484.0 con_duct-0.11.0/pyproject.toml0000644000175100001660000000013314761605014015707 0ustar00runnerdocker[build-system] requires = ["setuptools >= 46.4.0"] build-backend = "setuptools.build_meta" ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1741097499.6434278 con_duct-0.11.0/setup.cfg0000644000175100001660000000366714761605034014635 0ustar00runnerdocker[metadata] name = con-duct version = attr:con_duct.__main__.__version__ description = Runs a not-so-simple command and collects resource usage metrics long_description = file:README.md long_description_content_type = text/markdown author = Austin Macdonald author_email = austin@dartmouth.edu license = MIT license_files = LICENSE url = https://github.com/con/duct/ keywords = command-line cpu memory metrics output-capture provenance time usage classifiers = Development Status :: 3 - Alpha Programming Language :: Python :: 3 :: Only Programming Language :: Python :: 3 Programming Language :: Python :: 3.9 Programming Language :: Python :: 3.10 Programming Language :: Python :: 3.11 Programming Language :: Python :: 3.12 Programming Language :: Python :: 3.13 Programming Language :: Python :: Implementation :: CPython Programming Language :: Python :: Implementation :: PyPy License :: OSI Approved :: MIT License Environment :: Console Intended Audience :: Developers Intended Audience :: Information Technology Intended Audience :: Science/Research Intended Audience :: System Administrators Topic :: System :: Systems Administration project_urls = Source Code = https://github.com/con/duct/ Bug Tracker = https://github.com/con/duct/issues [options] packages = find_namespace: package_dir = =src include_package_data = True python_requires = >= 3.9 [options.packages.find] where = src [options.extras_require] all = matplotlib PyYAML pyout [options.entry_points] console_scripts = duct = 
con_duct.__main__:main con-duct = con_duct.suite.main:main [mypy] allow_incomplete_defs = False allow_untyped_defs = False ignore_missing_imports = False no_implicit_optional = True implicit_reexport = False local_partial_types = True pretty = True show_error_codes = True show_traceback = True strict_equality = True warn_redundant_casts = True warn_return_any = True warn_unreachable = True [egg_info] tag_build = tag_date = 0 ././@PaxHeader0000000000000000000000000000003300000000000010211 xustar0027 mtime=1741097499.632428 con_duct-0.11.0/src/0000755000175100001660000000000014761605034013567 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000003300000000000010211 xustar0027 mtime=1741097499.634428 con_duct-0.11.0/src/con_duct/0000755000175100001660000000000014761605034015365 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1741097484.0 con_duct-0.11.0/src/con_duct/__init__.py0000644000175100001660000000000014761605014017462 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1741097496.0 con_duct-0.11.0/src/con_duct/__main__.py0000755000175100001660000011531714761605030017466 0ustar00runnerdocker#!/usr/bin/env python3 from __future__ import annotations import argparse from collections import Counter from collections.abc import Iterable, Iterator from dataclasses import asdict, dataclass, field from datetime import datetime from enum import Enum import json import logging import math import os import re import shutil import socket import string import subprocess import sys import textwrap import threading import time from typing import IO, Any, Optional, TextIO __version__ = "0.11.0" __schema_version__ = "0.2.0" lgr = logging.getLogger("con-duct") DEFAULT_LOG_LEVEL = os.environ.get("DUCT_LOG_LEVEL", "INFO").upper() DUCT_OUTPUT_PREFIX = os.getenv( "DUCT_OUTPUT_PREFIX", ".duct/logs/{datetime_filesafe}-{pid}_" ) ENV_PREFIXES = ("PBS_", "SLURM_", "OSG") 
SUFFIXES = { "stdout": "stdout", "stderr": "stderr", "usage": "usage.json", "info": "info.json", } EXECUTION_SUMMARY_FORMAT = ( "Summary:\n" "Exit Code: {exit_code!E}\n" "Command: {command}\n" "Log files location: {logs_prefix}\n" "Wall Clock Time: {wall_clock_time:.3f} sec\n" "Memory Peak Usage (RSS): {peak_rss!S}\n" "Memory Average Usage (RSS): {average_rss!S}\n" "Virtual Memory Peak Usage (VSZ): {peak_vsz!S}\n" "Virtual Memory Average Usage (VSZ): {average_vsz!S}\n" "Memory Peak Percentage: {peak_pmem:.2f!N}%\n" "Memory Average Percentage: {average_pmem:.2f!N}%\n" "CPU Peak Usage: {peak_pcpu:.2f!N}%\n" "Average CPU Usage: {average_pcpu:.2f!N}%\n" ) ABOUT_DUCT = """ duct is a lightweight wrapper that collects execution data for an arbitrary command. Execution data includes execution time, system information, and resource usage statistics of the command and all its child processes. It is intended to simplify the problem of recording the resources necessary to execute a command, particularly in an HPC environment. Resource usage is determined by polling (at a sample-interval). During execution, duct produces a JSON lines (see https://jsonlines.org) file with one data point recorded for each report (at a report-interval). environment variables: Many duct options can be configured by environment variables (which are overridden by command line options). 
DUCT_LOG_LEVEL: see --log-level DUCT_OUTPUT_PREFIX: see --output-prefix DUCT_SUMMARY_FORMAT: see --summary-format DUCT_SAMPLE_INTERVAL: see --sample-interval DUCT_REPORT_INTERVAL: see --report-interval DUCT_CAPTURE_OUTPUTS: see --capture-outputs """ class CustomHelpFormatter(argparse.ArgumentDefaultsHelpFormatter): def _fill_text(self, text: str, width: int, _indent: str) -> str: # Override _fill_text to respect the newlines and indentation in descriptions return "\n".join([textwrap.fill(line, width) for line in text.splitlines()]) def assert_num(*values: Any) -> None: for value in values: assert isinstance(value, (float, int)) class Outputs(str, Enum): ALL = "all" NONE = "none" STDOUT = "stdout" STDERR = "stderr" def __str__(self) -> str: return self.value def has_stdout(self) -> bool: return self is Outputs.ALL or self is Outputs.STDOUT def has_stderr(self) -> bool: return self is Outputs.ALL or self is Outputs.STDERR class RecordTypes(str, Enum): ALL = "all" SYSTEM_SUMMARY = "system-summary" PROCESSES_SAMPLES = "processes-samples" def __str__(self) -> str: return self.value def has_system_summary(self) -> bool: return self is RecordTypes.ALL or self is RecordTypes.SYSTEM_SUMMARY def has_processes_samples(self) -> bool: return self is RecordTypes.ALL or self is RecordTypes.PROCESSES_SAMPLES @dataclass class SystemInfo: cpu_total: int memory_total: int hostname: str | None uid: int user: str | None @dataclass class ProcessStats: pcpu: float # %CPU pmem: float # %MEM rss: int # Memory Resident Set Size in Bytes vsz: int # Virtual Memory size in Bytes timestamp: str etime: str stat: Counter cmd: str def aggregate(self, other: ProcessStats) -> ProcessStats: cmd = self.cmd if self.cmd != other.cmd: lgr.debug( f"cmd has changed. Previous measurement was {self.cmd}, now {other.cmd}." ) # Brackets indicate that the kernel has substituted an abbreviation. 
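The per-process aggregation described here (peak values per field, merged `stat` counters, and preferring a non-bracketed command name) can be sketched standalone. This is a simplified stand-in for `ProcessStats.aggregate`, using plain dicts rather than the dataclass:

```python
import re
from collections import Counter


def aggregate_stats(a: dict, b: dict) -> dict:
    """Merge two samples of one process, keeping peak values.

    `a` and `b` are dicts with keys pcpu, pmem, rss, vsz, stat, cmd --
    a simplified stand-in for the ProcessStats dataclass.
    """
    # Prefer the non-bracketed command: brackets mean the kernel has
    # substituted an abbreviation (e.g. "[kworker/0:1]").
    cmd = a["cmd"]
    if re.search(r"^\[.+\]", a["cmd"]):
        cmd = b["cmd"]
    stat = Counter(a["stat"])
    stat.update(b["stat"])  # tally every process state seen
    return {
        "pcpu": max(a["pcpu"], b["pcpu"]),
        "pmem": max(a["pmem"], b["pmem"]),
        "rss": max(a["rss"], b["rss"]),
        "vsz": max(a["vsz"], b["vsz"]),
        "stat": stat,
        "cmd": cmd,
    }
```

Taking the maximum per field means an aggregate records the peak footprint of the process over the covered interval, while the `Counter` keeps a tally of how often each `ps` state string was observed.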
surrounded_by_brackets = r"^\[.+\]" if re.search(surrounded_by_brackets, self.cmd): lgr.debug(f"using {other.cmd}.") cmd = other.cmd lgr.debug(f"using {self.cmd}.") new_counter: Counter = Counter() new_counter.update(self.stat) new_counter.update(other.stat) return ProcessStats( pcpu=max(self.pcpu, other.pcpu), pmem=max(self.pmem, other.pmem), rss=max(self.rss, other.rss), vsz=max(self.vsz, other.vsz), timestamp=max(self.timestamp, other.timestamp), etime=other.etime, # For the aggregate always take the latest stat=new_counter, cmd=cmd, ) def for_json(self) -> dict: ret = asdict(self) ret["stat"] = dict(self.stat) return ret def __post_init__(self) -> None: self._validate() def _validate(self) -> None: assert_num(self.pcpu, self.pmem, self.rss, self.vsz) @dataclass class LogPaths: stdout: str stderr: str usage: str info: str prefix: str def __iter__(self) -> Iterator[tuple[str, str]]: for name, path in asdict(self).items(): if name != "prefix": yield name, path @classmethod def create(cls, output_prefix: str, pid: None | int = None) -> LogPaths: datetime_filesafe = datetime.now().strftime("%Y.%m.%dT%H.%M.%S") formatted_prefix = output_prefix.format( pid=pid, datetime_filesafe=datetime_filesafe ) return cls( stdout=f"{formatted_prefix}{SUFFIXES['stdout']}", stderr=f"{formatted_prefix}{SUFFIXES['stderr']}", usage=f"{formatted_prefix}{SUFFIXES['usage']}", info=f"{formatted_prefix}{SUFFIXES['info']}", prefix=formatted_prefix, ) def prepare_paths(self, clobber: bool, capture_outputs: Outputs) -> None: conflicts = [path for _name, path in self if os.path.exists(path)] if conflicts and not clobber: raise FileExistsError( "Conflicting files:\n" + "\n".join(f"- {path}" for path in conflicts) + "\nUse --clobber to overwrite conflicting files." 
) if self.prefix.endswith(os.sep): # If it ends in "/" (for linux) treat as a dir os.makedirs(self.prefix, exist_ok=True) else: # Path does not end with a separator, treat the last part as a filename directory = os.path.dirname(self.prefix) if directory: os.makedirs(directory, exist_ok=True) for name, path in self: if name == SUFFIXES["stdout"] and not capture_outputs.has_stdout(): continue elif name == SUFFIXES["stderr"] and not capture_outputs.has_stderr(): continue # TODO: AVOID PRECREATION -- would interfere e.g. with git-annex # assistant monitoring new files to be created and committing # as soon as they are closed open(path, "w").close() @dataclass class Averages: rss: Optional[float] = None vsz: Optional[float] = None pmem: Optional[float] = None pcpu: Optional[float] = None num_samples: int = 0 def update(self: Averages, other: Sample) -> None: assert_num(other.total_rss, other.total_vsz, other.total_pmem, other.total_pcpu) if not self.num_samples: self.num_samples += 1 self.rss = other.total_rss self.vsz = other.total_vsz self.pmem = other.total_pmem self.pcpu = other.total_pcpu else: assert self.rss is not None assert self.vsz is not None assert self.pmem is not None assert self.pcpu is not None assert other.total_rss is not None assert other.total_vsz is not None assert other.total_pmem is not None assert other.total_pcpu is not None self.num_samples += 1 self.rss += (other.total_rss - self.rss) / self.num_samples self.vsz += (other.total_vsz - self.vsz) / self.num_samples self.pmem += (other.total_pmem - self.pmem) / self.num_samples self.pcpu += (other.total_pcpu - self.pcpu) / self.num_samples @classmethod def from_sample(cls, sample: Sample) -> Averages: assert_num( sample.total_rss, sample.total_vsz, sample.total_pmem, sample.total_pcpu ) return cls( rss=sample.total_rss, vsz=sample.total_vsz, pmem=sample.total_pmem, pcpu=sample.total_pcpu, num_samples=1, ) @dataclass class Sample: stats: dict[int, ProcessStats] = field(default_factory=dict) 
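`Averages.update` above maintains a running mean without retaining past samples. The recurrence it uses can be isolated as a small helper (a sketch; `update_running_mean` is not part of duct):

```python
def update_running_mean(mean: float, n: int, new_value: float) -> tuple[float, int]:
    """Incremental-mean update as used by Averages.update:

        mean_n = mean_{n-1} + (x_n - mean_{n-1}) / n

    Equivalent to averaging all samples, but needs only O(1) state.
    """
    n += 1
    mean += (new_value - mean) / n
    return mean, n
```

Applying it to each of `rss`, `vsz`, `pmem`, and `pcpu` per sample is exactly what `update` does once the first sample has seeded the fields.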
averages: Averages = field(default_factory=Averages) total_rss: Optional[int] = None total_vsz: Optional[int] = None total_pmem: Optional[float] = None total_pcpu: Optional[float] = None timestamp: str = "" # TS of last sample collected def add_pid(self, pid: int, stats: ProcessStats) -> None: # We do not calculate averages when we add a pid because we require all pids first assert ( self.stats.get(pid) is None ) # add_pid should only be called when pid not in Sample self.total_rss = (self.total_rss or 0) + stats.rss self.total_vsz = (self.total_vsz or 0) + stats.vsz self.total_pmem = (self.total_pmem or 0.0) + stats.pmem self.total_pcpu = (self.total_pcpu or 0.0) + stats.pcpu self.stats[pid] = stats self.timestamp = max(self.timestamp, stats.timestamp) def aggregate(self: Sample, other: Sample) -> Sample: output = Sample() for pid in self.stats.keys() | other.stats.keys(): if (mine := self.stats.get(pid)) is not None: if (theirs := other.stats.get(pid)) is not None: output.add_pid(pid, mine.aggregate(theirs)) else: output.add_pid(pid, mine) else: output.add_pid(pid, other.stats[pid]) assert other.total_pmem is not None assert other.total_pcpu is not None assert other.total_rss is not None assert other.total_vsz is not None output.total_pmem = max(self.total_pmem or 0.0, other.total_pmem) output.total_pcpu = max(self.total_pcpu or 0.0, other.total_pcpu) output.total_rss = max(self.total_rss or 0, other.total_rss) output.total_vsz = max(self.total_vsz or 0, other.total_vsz) output.averages = self.averages output.averages.update(other) return output def for_json(self) -> dict[str, Any]: d = { "timestamp": self.timestamp, "num_samples": self.averages.num_samples, "processes": { str(pid): stats.for_json() for pid, stats in self.stats.items() }, "totals": { # total of all processes during this sample "pmem": self.total_pmem, "pcpu": self.total_pcpu, "rss": self.total_rss, "vsz": self.total_vsz, }, "averages": asdict(self.averages) if self.averages.num_samples >= 1 else 
{}, } return d class Report: """Top level report""" def __init__( self, command: str, arguments: list[str], log_paths: LogPaths, summary_format: str, colors: bool = False, clobber: bool = False, process: subprocess.Popen | None = None, ) -> None: self._command = command self.arguments = arguments self.log_paths = log_paths self.summary_format: str = summary_format self.clobber = clobber self.colors = colors # Defaults to be set later self.start_time: float | None = None self.process = process self.session_id: int | None = None self.gpus: list[dict[str, str]] | None = None self.env: dict[str, str] | None = None self.number = 1 self.system_info: SystemInfo | None = None self.full_run_stats = Sample() self.current_sample: Optional[Sample] = None self.end_time: float | None = None self.run_time_seconds: str | None = None self.usage_file: TextIO | None = None def __del__(self) -> None: safe_close_files([self.usage_file]) @property def command(self) -> str: return " ".join([self._command] + self.arguments) @property def elapsed_time(self) -> float: assert self.start_time is not None return time.time() - self.start_time @property def wall_clock_time(self) -> Optional[float]: if self.start_time is None: return math.nan if self.end_time is None: # if no end_time -- must be still ongoing # Cannot happen ATM but could in "library mode" later return time.time() - self.start_time # we reached the end return self.end_time - self.start_time def collect_environment(self) -> None: self.env = {k: v for k, v in os.environ.items() if k.startswith(ENV_PREFIXES)} def get_system_info(self) -> None: """Gathers system information related to CPU, GPU, memory, and environment variables.""" self.system_info = SystemInfo( cpu_total=os.sysconf("SC_NPROCESSORS_CONF"), memory_total=os.sysconf("SC_PAGESIZE") * os.sysconf("SC_PHYS_PAGES"), hostname=socket.gethostname(), uid=os.getuid(), user=os.environ.get("USER"), ) # GPU information if shutil.which("nvidia-smi") is not None: lgr.debug("Checking 
NVIDIA GPU using nvidia-smi") try: out = subprocess.check_output( [ "nvidia-smi", "--query-gpu=index,name,pci.bus_id,driver_version,memory.total,compute_mode", "--format=csv", ] ) except subprocess.CalledProcessError as e: lgr.warning("Error collecting gpu information: %s", str(e)) self.gpus = None return try: decoded = out.decode("utf-8") lines = decoded.strip().split("\n") _ = lines.pop(0) # header self.gpus = [] for line in lines: cols = line.split(", ") self.gpus.append( { "index": cols[0], "name": cols[1], "bus_id": cols[2], "driver_version": cols[3], "memory.total": cols[4], "compute_mode": cols[5], } ) except Exception as e: lgr.warning("Error parsing gpu information: %s", str(e)) self.gpus = None def collect_sample(self) -> Optional[Sample]: assert self.session_id is not None sample = Sample() try: output = subprocess.check_output( [ "ps", "-s", str(self.session_id), "-o", "pid,pcpu,pmem,rss,vsz,etime,stat,cmd", ], text=True, ) for line in output.splitlines()[1:]: if line: pid, pcpu, pmem, rss_kib, vsz_kib, etime, stat, cmd = line.split( maxsplit=7, ) sample.add_pid( int(pid), ProcessStats( pcpu=float(pcpu), pmem=float(pmem), rss=int(rss_kib) * 1024, vsz=int(vsz_kib) * 1024, timestamp=datetime.now().astimezone().isoformat(), etime=etime, stat=Counter([stat]), cmd=cmd, ), ) except subprocess.CalledProcessError as exc: # when session_id has no processes lgr.debug("Error collecting sample: %s", str(exc)) return None sample.averages = Averages.from_sample(sample) return sample def update_from_sample(self, sample: Sample) -> None: self.full_run_stats = self.full_run_stats.aggregate(sample) if self.current_sample is None: self.current_sample = Sample().aggregate(sample) else: assert self.current_sample.averages is not None self.current_sample = self.current_sample.aggregate(sample) assert self.current_sample is not None def write_subreport(self) -> None: assert self.current_sample is not None if self.usage_file is None: self.usage_file = 
open(self.log_paths.usage, "w") self.usage_file.write(json.dumps(self.current_sample.for_json()) + "\n") @property def execution_summary(self) -> dict[str, Any]: # killed by a signal # https://pubs.opengroup.org/onlinepubs/9799919799/utilities/V3_chap02.html#tag_19_08_02 if self.process and self.process.returncode < 0: self.process.returncode = 128 + abs(self.process.returncode) # prepare the base, but enrich if we did get process running return { "exit_code": self.process.returncode if self.process else None, "command": self.command, "logs_prefix": self.log_paths.prefix if self.log_paths else "", "wall_clock_time": self.wall_clock_time, "peak_rss": self.full_run_stats.total_rss, "average_rss": self.full_run_stats.averages.rss, "peak_vsz": self.full_run_stats.total_vsz, "average_vsz": self.full_run_stats.averages.vsz, "peak_pmem": self.full_run_stats.total_pmem, "average_pmem": self.full_run_stats.averages.pmem, "peak_pcpu": self.full_run_stats.total_pcpu, "average_pcpu": self.full_run_stats.averages.pcpu, "num_samples": self.full_run_stats.averages.num_samples, "num_reports": self.number, "start_time": self.start_time, "end_time": self.end_time, } def dump_json(self) -> str: return json.dumps( { "command": self.command, "system": ( None if self.system_info is None else asdict(self.system_info) ), "env": self.env, "gpu": self.gpus, "duct_version": __version__, "schema_version": __schema_version__, "execution_summary": self.execution_summary, "output_paths": asdict(self.log_paths), } ) @property def execution_summary_formatted(self) -> str: formatter = SummaryFormatter(enable_colors=self.colors) return formatter.format(self.summary_format, **self.execution_summary) class SummaryFormatter(string.Formatter): OK = "OK" NOK = "X" NONE = "-" BLACK, RED, GREEN, YELLOW, BLUE, MAGENTA, CYAN, WHITE = range(30, 38) RESET_SEQ = "\033[0m" COLOR_SEQ = "\033[1;%dm" FILESIZE_SUFFIXES = (" kB", " MB", " GB", " TB", " PB", " EB", " ZB", " YB") def __init__(self, enable_colors: bool 
= False) -> None: self.enable_colors = enable_colors def naturalsize( self, value: float | str, format: str = "%.1f", # noqa: A002 ) -> str: """Format a number of bytes like a human readable decimal filesize (e.g. 10 kB). Examples: ```pycon >>> formatter = SummaryFormatter() >>> formatter.naturalsize(3000000) '3.0 MB' >>> formatter.naturalsize(3000, "%.3f") '2.930 kB' >>> formatter.naturalsize(10**28) '10000.0 YB' ``` Args: value (int, float, str): Integer to convert. format (str): Custom formatter. Returns: str: Human readable representation of a filesize. """ base = 1000 bytes_ = float(value) abs_bytes = abs(bytes_) if abs_bytes == 1: return "%d Byte" % bytes_ if abs_bytes < base: return "%d Bytes" % bytes_ for i, _s in enumerate(self.FILESIZE_SUFFIXES): unit = base ** (i + 2) if abs_bytes < unit: break ret: str = format % (base * bytes_ / unit) + _s return ret def color_word(self, s: str, color: int) -> str: """Color `s` with `color`. Parameters ---------- s : string color : int Code for color. If the value evaluates to false, the string will not be colored. 
enable_colors: boolean, optional Returns ------- str """ if color and self.enable_colors: return "%s%s%s" % (self.COLOR_SEQ % color, s, self.RESET_SEQ) return s def convert_field(self, value: str | None, conversion: str | None) -> Any: if conversion == "S": # Human size if value is not None: return self.color_word(self.naturalsize(value), self.GREEN) else: return self.color_word(self.NONE, self.RED) elif conversion == "E": # colored non-zero is bad return self.color_word( value if value is not None else self.NONE, self.RED if value or value is None else self.GREEN, ) elif conversion == "X": # colored truthy col = self.GREEN if value else self.RED return self.color_word(value if value is not None else self.NONE, col) elif conversion == "N": # colored Red - if None if value is None: return self.color_word(self.NONE, self.RED) else: return self.color_word(value, self.GREEN) return super().convert_field(value, conversion) def format_field(self, value: Any, format_spec: str) -> Any: # TODO: move all the "coloring" into formatting, so we could correctly indent # given the format and only then color it up # print "> %r, %r" % (value, format_spec) if value is None: # TODO: could still use our formatter and make it red or smth like that return self.NONE # if it is a composite :format!conversion, we need to split it if "!" 
in format_spec and format_spec.index("!") > 1: format_spec, conversion = format_spec.split("!", 1) else: conversion = None try: value_ = super().format_field(value, format_spec) except ValueError as exc: lgr.warning( f"Falling back to `str` formatting for {value!r} due to exception: {exc}" ) return str(value) if conversion: return self.convert_field(value_, conversion) return value_ @dataclass class Arguments: command: str command_args: list[str] output_prefix: str sample_interval: float report_interval: float fail_time: float clobber: bool capture_outputs: Outputs outputs: Outputs record_types: RecordTypes summary_format: str colors: bool log_level: str quiet: bool def __post_init__(self) -> None: if self.report_interval < self.sample_interval: raise argparse.ArgumentError( None, "--report-interval must be greater than or equal to --sample-interval.", ) @classmethod def from_argv( cls, cli_args: Optional[list[str]] = None, **cli_kwargs: Any ) -> Arguments: parser = argparse.ArgumentParser( allow_abbrev=False, description=ABOUT_DUCT, formatter_class=CustomHelpFormatter, ) parser.add_argument( "command", metavar="command [command_args ...]", help="The command to execute, along with its arguments.", ) parser.add_argument( "--version", action="version", version=f"%(prog)s {__version__}" ) parser.add_argument( "command_args", nargs=argparse.REMAINDER, help="Arguments for the command." ) parser.add_argument( "-p", "--output-prefix", type=str, default=DUCT_OUTPUT_PREFIX, help="File string format to be used as a prefix for the files -- the captured " "stdout and stderr and the resource usage logs. The understood variables are " "{datetime}, {datetime_filesafe}, and {pid}. " "Leading directories will be created if they do not exist. " "You can also provide value via DUCT_OUTPUT_PREFIX env variable. 
", ) parser.add_argument( "--summary-format", type=str, default=os.getenv("DUCT_SUMMARY_FORMAT", EXECUTION_SUMMARY_FORMAT), help="Output template to use when printing the summary following execution. " "Accepts custom conversion flags: " "!S: Converts filesizes to human readable units, green if measured, red if None. " "!E: Colors exit code, green if falsey, red if truthy, and red if None. " "!X: Colors green if truthy, red if falsey. " "!N: Colors green if not None, red if None", ) parser.add_argument( "--colors", action="store_true", default=os.getenv("DUCT_COLORS", False), help="Use colors in duct output.", ) parser.add_argument( "--clobber", action="store_true", help="Replace log files if they already exist.", ) parser.add_argument( "-l", "--log-level", default=DEFAULT_LOG_LEVEL, type=str.upper, choices=("NONE", "CRITICAL", "ERROR", "WARNING", "INFO", "DEBUG"), help="Level of log output to stderr, use NONE to entirely disable.", ) parser.add_argument( "-q", "--quiet", action="store_true", help="[deprecated, use log level NONE] Disable duct logging output (to stderr)", ) parser.add_argument( "--sample-interval", "--s-i", type=float, default=float(os.getenv("DUCT_SAMPLE_INTERVAL", "1.0")), help="Interval in seconds between status checks of the running process. " "Sample interval must be less than or equal to report interval, and it achieves the " "best results when sample is significantly less than the runtime of the process.", ) parser.add_argument( "--report-interval", "--r-i", type=float, default=float(os.getenv("DUCT_REPORT_INTERVAL", "60.0")), help="Interval in seconds at which to report aggregated data.", ) parser.add_argument( "--fail-time", "--f-t", type=float, default=float(os.getenv("DUCT_FAIL_TIME", "3.0")), help="If command fails in less than this specified time (seconds), duct would remove logs. " "Set to 0 if you would like to keep logs for a failing command regardless of its run time. " "Set to negative (e.g. 
-1) if you would like to not keep logs for any failing command.", ) parser.add_argument( "-c", "--capture-outputs", default=os.getenv("DUCT_CAPTURE_OUTPUTS", "all"), choices=list(Outputs), type=Outputs, help="Record stdout, stderr, all, or none to log files. " "You can also provide value via DUCT_CAPTURE_OUTPUTS env variable.", ) parser.add_argument( "-o", "--outputs", default="all", choices=list(Outputs), type=Outputs, help="Print stdout, stderr, all, or none to stdout/stderr respectively.", ) parser.add_argument( "-t", "--record-types", default="all", choices=list(RecordTypes), type=RecordTypes, help="Record system-summary, processes-samples, or all", ) args = parser.parse_args( args=cli_args, namespace=cli_kwargs and argparse.Namespace(**cli_kwargs) or None, ) return cls( command=args.command, command_args=args.command_args, output_prefix=args.output_prefix, sample_interval=args.sample_interval, report_interval=args.report_interval, fail_time=args.fail_time, capture_outputs=args.capture_outputs, outputs=args.outputs, record_types=args.record_types, summary_format=args.summary_format, clobber=args.clobber, colors=args.colors, log_level=args.log_level, quiet=args.quiet, ) def monitor_process( report: Report, process: subprocess.Popen, report_interval: float, sample_interval: float, stop_event: threading.Event, ) -> None: lgr.debug( "Starting monitoring of the process %s on sample interval %f for report interval %f", process, sample_interval, report_interval, ) while True: if process.poll() is not None: lgr.debug( "Breaking out of the monitor since the passthrough command has finished" ) break sample = report.collect_sample() # Report averages should be updated prior to sample aggregation if ( sample is None ): # passthrough has probably finished before sample could be collected if process.poll() is not None: lgr.debug( "Breaking out of the monitor since the passthrough command has finished " "before we could collect sample" ) break # process is still running, but 
we could not collect sample continue report.update_from_sample(sample) if ( report.start_time and report.elapsed_time >= (report.number - 1) * report_interval ): report.write_subreport() report.current_sample = None report.number += 1 if stop_event.wait(timeout=sample_interval): lgr.debug("Breaking out because stop event was set") break class TailPipe: """TailPipe simultaneously streams to an output stream (stdout or stderr) and a specified file.""" TAIL_CYCLE_TIME = 0.01 def __init__(self, file_path: str, buffer: IO[bytes]) -> None: self.file_path = file_path self.buffer = buffer self.stop_event: threading.Event | None = None self.infile: IO[bytes] | None = None self.thread: threading.Thread | None = None def start(self) -> None: self.stop_event = threading.Event() self.infile = open(self.file_path, "rb") self.thread = threading.Thread(target=self._tail, daemon=True) self.thread.start() def fileno(self) -> int: assert self.infile is not None return self.infile.fileno() def _catch_up(self) -> None: assert self.infile is not None data = self.infile.read() if data: self.buffer.write(data) self.buffer.flush() def _tail(self) -> None: assert self.stop_event is not None try: while not self.stop_event.is_set(): self._catch_up() time.sleep(TailPipe.TAIL_CYCLE_TIME) # After stop event, collect and passthrough data one last time self._catch_up() except Exception: raise finally: self.buffer.flush() def close(self) -> None: assert self.stop_event is not None assert self.thread is not None assert self.infile is not None self.stop_event.set() self.thread.join() self.infile.close() def prepare_outputs( capture_outputs: Outputs, outputs: Outputs, log_paths: LogPaths, ) -> tuple[TextIO | TailPipe | int | None, TextIO | TailPipe | int | None]: stdout: TextIO | TailPipe | int | None stderr: TextIO | TailPipe | int | None if capture_outputs.has_stdout(): if outputs.has_stdout(): stdout = TailPipe(log_paths.stdout, buffer=sys.stdout.buffer) stdout.start() else: stdout = 
open(log_paths.stdout, "w") elif outputs.has_stdout(): stdout = None else: stdout = subprocess.DEVNULL if capture_outputs.has_stderr(): if outputs.has_stderr(): stderr = TailPipe(log_paths.stderr, buffer=sys.stderr.buffer) stderr.start() else: stderr = open(log_paths.stderr, "w") elif outputs.has_stderr(): stderr = None else: stderr = subprocess.DEVNULL return stdout, stderr def safe_close_files(file_list: Iterable[Any]) -> None: for f in file_list: try: f.close() except Exception: pass def remove_files(log_paths: LogPaths, assert_empty: bool = False) -> None: for _, file_path in log_paths: if os.path.exists(file_path): if assert_empty: assert os.stat(file_path).st_size == 0 os.remove(file_path) def main() -> None: logging.basicConfig( format="%(asctime)s [%(levelname)-8s] %(name)s: %(message)s", datefmt="%Y-%m-%dT%H:%M:%S%z", level=getattr(logging, DEFAULT_LOG_LEVEL), ) args = Arguments.from_argv() sys.exit(execute(args)) def execute(args: Arguments) -> int: """A wrapper to execute a command, monitor and log the process details. Returns exit code of the executed process. 
""" if args.log_level == "NONE" or args.quiet: lgr.disabled = True else: lgr.setLevel(args.log_level) log_paths = LogPaths.create(args.output_prefix, pid=os.getpid()) log_paths.prepare_paths(args.clobber, args.capture_outputs) stdout, stderr = prepare_outputs(args.capture_outputs, args.outputs, log_paths) stdout_file: TextIO | IO[bytes] | int | None if isinstance(stdout, TailPipe): stdout_file = open(stdout.file_path, "wb") else: stdout_file = stdout stderr_file: TextIO | IO[bytes] | int | None if isinstance(stderr, TailPipe): stderr_file = open(stderr.file_path, "wb") else: stderr_file = stderr full_command = " ".join([str(args.command)] + args.command_args) files_to_close = [stdout_file, stdout, stderr_file, stderr] report = Report( args.command, args.command_args, log_paths, args.summary_format, args.colors, args.clobber, ) files_to_close.append(report.usage_file) report.start_time = time.time() try: report.process = process = subprocess.Popen( [str(args.command)] + args.command_args, stdout=stdout_file, stderr=stderr_file, start_new_session=True, ) except FileNotFoundError: # We failed to execute due to file not found in PATH # We should remove log etc files since they are 0-sized # degenerates etc safe_close_files(files_to_close) remove_files(log_paths, assert_empty=True) # mimicking behavior of bash and zsh. print(f"{args.command}: command not found", file=sys.stderr) return 127 # seems what zsh and bash return then lgr.info("duct is executing %r...", full_command) lgr.info("Log files will be written to %s", log_paths.prefix) try: report.session_id = os.getsid(process.pid) # Get session ID of the new process except ProcessLookupError: # process has already finished # TODO: log this at least. 
pass stop_event = threading.Event() if args.record_types.has_processes_samples(): monitoring_args = [ report, process, args.report_interval, args.sample_interval, stop_event, ] monitoring_thread = threading.Thread( target=monitor_process, args=monitoring_args ) monitoring_thread.start() else: monitoring_thread = None if args.record_types.has_system_summary(): env_thread = threading.Thread(target=report.collect_environment) env_thread.start() sys_info_thread = threading.Thread(target=report.get_system_info) sys_info_thread.start() else: env_thread, sys_info_thread = None, None process.wait() report.end_time = time.time() lgr.debug("Process ended, setting stop_event to stop monitoring thread") stop_event.set() if monitoring_thread is not None: lgr.debug("Waiting for monitoring thread to finish") monitoring_thread.join() lgr.debug("Monitoring thread finished") # If we have any extra samples that haven't been written yet, do it now if report.current_sample is not None: report.write_subreport() report.process = process if env_thread is not None: lgr.debug("Waiting for environment collection thread to finish") env_thread.join() lgr.debug("Environment collection finished") if sys_info_thread is not None: lgr.debug("Waiting for system information collection thread to finish") sys_info_thread.join() lgr.debug("System information collection finished") if args.record_types.has_system_summary(): with open(log_paths.info, "w") as system_logs: report.run_time_seconds = f"{report.end_time - report.start_time}" system_logs.write(report.dump_json()) safe_close_files(files_to_close) if process.returncode != 0 and ( report.elapsed_time < args.fail_time or args.fail_time < 0 ): lgr.info( "Removing log files since command failed%s.", f" in less than {args.fail_time} seconds" if args.fail_time > 0 else "", ) remove_files(log_paths) else: lgr.info(report.execution_summary_formatted) return report.process.returncode if __name__ == "__main__": main() 
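The log-cleanup decision at the end of `execute` can be read as a single predicate. A sketch of that rule (the helper name is ours, not duct's):

```python
def should_remove_logs(returncode: int, elapsed: float, fail_time: float) -> bool:
    """Reproduce duct's --fail-time cleanup rule.

    Logs are discarded when the command failed AND either it failed in
    under fail_time seconds, or fail_time is negative (never keep logs
    of a failing command).  A fail_time of 0 keeps logs for every
    failing command regardless of runtime.
    """
    return returncode != 0 and (elapsed < fail_time or fail_time < 0)
```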
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1741097484.0 con_duct-0.11.0/src/con_duct/py.typed0000644000175100001660000000000014761605014017050 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1741097499.6364279 con_duct-0.11.0/src/con_duct/suite/0000755000175100001660000000000014761605034016516 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1741097484.0 con_duct-0.11.0/src/con_duct/suite/ls.py0000644000175100001660000001462114761605014017510 0ustar00runnerdockerimport argparse from collections import OrderedDict import glob import json import logging import re from types import ModuleType from typing import Any, Dict, List, Optional from con_duct.__main__ import DUCT_OUTPUT_PREFIX, SummaryFormatter from con_duct.utils import parse_version try: import pyout # type: ignore except ImportError: pyout = None try: import yaml except ImportError: yaml: Optional[ModuleType] = None # type: ignore lgr = logging.getLogger(__name__) VALUE_TRANSFORMATION_MAP: Dict[str, str] = { "average_pcpu": "{value:.2f!N}%", "average_pmem": "{value:.2f!N}%", "average_rss": "{value!S}", "average_vsz": "{value!S}", "end_time": "{value:.2f!N}", "exit_code": "{value!E}", "memory_total": "{value!S}", "peak_pcpu": "{value:.2f!N}%", "peak_pmem": "{value:.2f!N}%", "peak_rss": "{value!S}", "peak_vsz": "{value!S}", "start_time": "{value:.2f!N}", "wall_clock_time": "{value:.3f} sec", } NON_TRANSFORMED_FIELDS: List[str] = [ "command", "cpu_total", "duct_version", "gpu", "hostname", "info", "logs_prefix", "num_samples", "num_reports", "prefix", "schema_version", "stderr", "stdout", "uid", "usage", "user", ] LS_FIELD_CHOICES: List[str] = ( list(VALUE_TRANSFORMATION_MAP.keys()) + NON_TRANSFORMED_FIELDS ) MINIMUM_SCHEMA_VERSION: str = "0.2.0" def load_duct_runs( info_files: List[str], eval_filter: Optional[str] = None ) -> List[Dict[str, Any]]: loaded: 
List[Dict[str, Any]] = [] for info_file in info_files: with open(info_file) as file: try: this: Dict[str, Any] = json.load(file) # this["prefix"] is the path at execution time, could have moved this["prefix"] = info_file.split("info.json")[0] if parse_version(this["schema_version"]) < parse_version( MINIMUM_SCHEMA_VERSION ): lgr.debug( f"Skipping {this['prefix']}, schema version {this['schema_version']} " f"is below minimum schema version {MINIMUM_SCHEMA_VERSION}." ) continue if eval_filter is not None and not ( eval_results := eval(eval_filter, _flatten_dict(this), dict(re=re)) ): lgr.debug( "Filtering out %s due to filter results matching: %s", this, eval_results, ) continue loaded.append(this) except Exception as exc: lgr.warning("Failed to load file %s: %s", file, exc) return loaded def process_run_data( run_data_list: List[Dict[str, Any]], fields: List[str], formatter: SummaryFormatter ) -> List[OrderedDict[str, Any]]: output_rows: List[OrderedDict[str, Any]] = [] for row in run_data_list: flattened = _flatten_dict(row) try: restricted = _restrict_row(fields, flattened) except KeyError: lgr.warning( "Failed to pick fields of interest from a record, skipping. Record was: %s", list(flattened), ) continue formatted = _format_row(restricted, formatter) output_rows.append(formatted) return output_rows def _flatten_dict(d: Dict[str, Any]) -> Dict[str, Any]: items: List[tuple[str, Any]] = [] for k, v in d.items(): if isinstance(v, dict): items.extend(_flatten_dict(v).items()) else: items.append((k, v)) return dict(items) def _restrict_row(field_list: List[str], row: Dict[str, Any]) -> OrderedDict[str, Any]: restricted: OrderedDict[str, Any] = OrderedDict() # prefix is the "primary key", its the only field guaranteed to be unique. 
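`_flatten_dict` above lifts nested values to the top level without prefixing the parent key, which is why `prefix` is the only field guaranteed unique: a duplicate leaf name in a later sub-dict silently overwrites an earlier one. A standalone sketch of the same behavior:

```python
def flatten(d: dict) -> dict:
    """Sketch of _flatten_dict: nested dicts are lifted into the top
    level WITHOUT prefixing the parent key, so colliding leaf names
    are resolved last-writer-wins.
    """
    items = []
    for k, v in d.items():
        if isinstance(v, dict):
            items.extend(flatten(v).items())
        else:
            items.append((k, v))
    return dict(items)
```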
restricted["prefix"] = row["prefix"] for field in field_list: if field != "prefix" and field in row: restricted[field.split(".")[-1]] = row[field] return restricted def _format_row( row: OrderedDict[str, Any], formatter: SummaryFormatter ) -> OrderedDict[str, Any]: transformed: OrderedDict[str, Any] = OrderedDict() for col, value in row.items(): transformation: Optional[str] = VALUE_TRANSFORMATION_MAP.get(col) if transformation is not None: value = formatter.format(transformation, value=value) transformed[col] = value return transformed def pyout_ls(run_data_list: List[OrderedDict[str, Any]]) -> None: """Generate and print a tabular table using pyout.""" if pyout is None: raise RuntimeError("pyout is required for this output format.") with pyout.Tabular( style=dict( header_=dict(bold=True, transform=str.upper), ), mode="final", ) as table: for row in run_data_list: table(row) def ls(args: argparse.Namespace) -> int: if not args.paths: pattern = f"{DUCT_OUTPUT_PREFIX[:DUCT_OUTPUT_PREFIX.index('{')]}*" args.paths = [p for p in glob.glob(pattern)] info_files = [path for path in args.paths if path.endswith("info.json")] run_data_raw = load_duct_runs(info_files, args.eval_filter) formatter = SummaryFormatter(enable_colors=args.colors) output_rows = process_run_data(run_data_raw, args.fields, formatter) if args.format == "auto": args.format = "summaries" if pyout is None else "pyout" if args.format == "summaries": for row in output_rows: for col, value in row.items(): if not col == "prefix": col = f"\t{col}" print(f"{col.replace('_', ' ').title()}: {value}") elif args.format == "pyout": if pyout is None: raise RuntimeError("Install pyout for pyout output") pyout_ls(output_rows) elif args.format == "json": print(json.dumps(output_rows)) elif args.format == "json_pp": print(json.dumps(output_rows, indent=2)) elif args.format == "yaml": if yaml is None: raise RuntimeError("Install PyYaml yaml output") plain_rows = [dict(row) for row in output_rows] 
print(yaml.dump(plain_rows, default_flow_style=False)) else: raise RuntimeError( f"Unexpected format encountered: {args.format}. This should have been caught by argparse.", ) return 0 ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1741097484.0 con_duct-0.11.0/src/con_duct/suite/main.py0000644000175100001660000001014314761605014020011 0ustar00runnerdockerimport argparse import logging import os import sys from typing import List, Optional from con_duct.suite.ls import LS_FIELD_CHOICES, ls from con_duct.suite.plot import matplotlib_plot from con_duct.suite.pprint_json import pprint_json lgr = logging.getLogger("con-duct") DEFAULT_LOG_LEVEL = os.environ.get("DUCT_LOG_LEVEL", "INFO").upper() def execute(args: argparse.Namespace) -> int: if args.log_level == "NONE": logging.disable(logging.CRITICAL) else: logging.basicConfig(level=args.log_level) result = args.func(args) if not isinstance(result, int): raise TypeError( f"Each con-duct subcommand must return an int returncode, got {type(result)}" ) return result def main(argv: Optional[List[str]] = None) -> None: parser = argparse.ArgumentParser( prog="con-duct", description="A suite of commands to manage or manipulate con-duct logs.", usage="con-duct [options]", ) parser.add_argument( "-l", "--log-level", default=DEFAULT_LOG_LEVEL, choices=("NONE", "CRITICAL", "ERROR", "WARNING", "INFO", "DEBUG"), type=str.upper, help="Level of log output to stderr, use NONE to entirely disable.", ) subparsers = parser.add_subparsers(dest="command", help="Available subcommands") # Subcommand: pp parser_pp = subparsers.add_parser("pp", help="Pretty print a JSON log.") parser_pp.add_argument("file_path", help="JSON file to pretty print.") parser_pp.set_defaults(func=pprint_json) # Subcommand: plot parser_plot = subparsers.add_parser( "plot", help="Plot resource usage for an execution." 
    )
    parser_plot.add_argument("file_path", help="duct-produced usage.json file.")
    parser_plot.add_argument(
        "--output",
        help="Output path for the image file. If not specified, plot will be shown and not saved.",
        default=None,
    )
    # parser_plot.add_argument(
    #     "-b",
    #     "--backend",
    #     default=DEFAULT_PLOT_BACKEND,
    #     choices=("matplotlib",),
    #     help="which backend to plot with",
    # )
    parser_plot.set_defaults(func=matplotlib_plot)

    # Subcommand: ls
    parser_ls = subparsers.add_parser(
        "ls",
        help="Print execution information for all matching runs.",
    )
    parser_ls.add_argument(
        "-f",
        "--format",
        choices=("auto", "pyout", "summaries", "json", "json_pp", "yaml"),
        default="auto",  # TODO dry
        help="Output format. 'auto' chooses 'pyout' if the pyout library is installed,"
        " 'summaries' otherwise.",
    )
    parser_ls.add_argument(
        "-F",
        "--fields",
        nargs="+",
        metavar="FIELD",
        help=f"List of fields to show. Prefix is always included implicitly as the first field. "
        f"Available choices: {', '.join(sorted(LS_FIELD_CHOICES))}.",
        choices=LS_FIELD_CHOICES,
        default=[
            "command",
            "exit_code",
            "wall_clock_time",
            "peak_rss",
        ],
    )
    parser_ls.add_argument(
        "--colors",
        action="store_true",
        default=os.getenv("DUCT_COLORS", False),
        help="Use colors in duct output.",
    )
    parser_ls.add_argument(
        "paths",
        nargs="*",
        help="Path to duct report files; only `info.json` files are considered. "
        "If not provided, the program will glob for files that match DUCT_OUTPUT_PREFIX.",
    )
    parser_ls.add_argument(
        "-e",
        "--eval-filter",
        help="Python expression to filter results based on available fields. "
        "The expression is evaluated for each entry, and only those that return True are included. "
        "See --fields for all supported fields. "
        "Example: --eval-filter \"filter_this=='yes'\" filters entries where 'filter_this' is 'yes'. "
        "You can use 're' for regex operations (e.g., --eval-filter \"re.search('2025.02.09.*', prefix)\").",
    )
    parser_ls.set_defaults(func=ls)

    args = parser.parse_args(argv)
    if args.command is None:
        parser.print_help()
    else:
        sys.exit(execute(args))

# File: con_duct-0.11.0/src/con_duct/suite/plot.py

import argparse
from datetime import datetime
import json


def matplotlib_plot(args: argparse.Namespace) -> int:
    import matplotlib.pyplot as plt
    import numpy as np

    data = []
    try:
        with open(args.file_path, "r") as file:
            for line in file:
                data.append(json.loads(line))
    except FileNotFoundError:
        print(f"File {args.file_path} was not found.")
        return 1
    except json.JSONDecodeError:
        print(f"File {args.file_path} contained invalid JSON.")
        return 1

    # Convert timestamps to datetime objects
    timestamps = [datetime.fromisoformat(entry["timestamp"]) for entry in data]

    # Calculate elapsed time in seconds
    elapsed_time = np.array([(ts - timestamps[0]).total_seconds() for ts in timestamps])

    # Extract other data
    pmem = np.array([entry["totals"]["pmem"] for entry in data])
    pcpu = np.array([entry["totals"]["pcpu"] for entry in data])
    rss_kb = np.array([entry["totals"]["rss"] for entry in data])
    vsz_kb = np.array([entry["totals"]["vsz"] for entry in data])

    # Plotting
    fig, ax1 = plt.subplots()

    # Plot pmem and pcpu on primary y-axis
    ax1.plot(elapsed_time, pmem, label="pmem (%)", color="tab:blue")
    ax1.plot(elapsed_time, pcpu, label="pcpu (%)", color="tab:orange")
    ax1.set_xlabel("Elapsed Time (s)")
    ax1.set_ylabel("Percentage")
    ax1.legend(loc="upper left")

    # Create a second y-axis for rss and vsz
    ax2 = ax1.twinx()  # type: ignore[attr-defined]
    ax2.plot(elapsed_time, rss_kb, label="rss (B)", color="tab:green")
    ax2.plot(elapsed_time, vsz_kb, label="vsz (B)", color="tab:red")
    ax2.set_ylabel("B")
    ax2.legend(loc="upper right")

    plt.title("Resource Usage Over Time")

    if args.output is not None:
        plt.savefig(args.output)
        print(
            f"Successfully rendered input file: {args.file_path} to output {args.output}"
        )
    else:
        plt.show()

    # Exit code
    return 0

# File: con_duct-0.11.0/src/con_duct/suite/pprint_json.py

import argparse
import json
from pprint import pprint


def pprint_json(args: argparse.Namespace) -> int:
    """
    Print the contents of a JSON file using pprint.
    """
    try:
        with open(args.file_path, "r") as file:
            data = json.load(file)
        pprint(data)
    except FileNotFoundError:
        print(f"File not found: {args.file_path}")
        return 1
    except json.JSONDecodeError as e:
        print(f"Error decoding JSON: {e}")
        return 1
    return 0

# File: con_duct-0.11.0/src/con_duct/utils.py

def parse_version(version_str: str) -> tuple[int, int, int]:
    x_y_z = version_str.split(".")
    if len(x_y_z) != 3:
        raise ValueError(
            f"Invalid version format: {version_str}. Expected 'x.y.z' format."
        )
    x, y, z = map(int, x_y_z)  # Unpacking forces exactly 3 elements
    return (x, y, z)

File: con_duct-0.11.0/src/con_duct.egg-info/PKG-INFO

Metadata-Version: 2.2
Name: con-duct
Version: 0.11.0
Summary: Runs a not-so-simple command and collects resource usage metrics
Home-page: https://github.com/con/duct/
Author: Austin Macdonald
Author-email: austin@dartmouth.edu
License: MIT
Project-URL: Source Code, https://github.com/con/duct/
Project-URL: Bug Tracker, https://github.com/con/duct/issues
Keywords: command-line,cpu,memory,metrics,output-capture,provenance,time,usage
Classifier: Development Status :: 3 - Alpha
Classifier: Programming Language :: Python :: 3 :: Only
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Programming Language :: Python :: Implementation :: CPython
Classifier: Programming Language :: Python :: Implementation :: PyPy
Classifier: License :: OSI Approved :: MIT License
Classifier: Environment :: Console
Classifier: Intended Audience :: Developers
Classifier: Intended Audience :: Information Technology
Classifier: Intended Audience :: Science/Research
Classifier: Intended Audience :: System Administrators
Classifier: Topic :: System :: Systems Administration
Requires-Python: >=3.9
Description-Content-Type: text/markdown
License-File: LICENSE
Provides-Extra: all
Requires-Dist: matplotlib; extra == "all"
Requires-Dist: PyYAML; extra == "all"
Requires-Dist: pyout; extra == "all"

# duct

[![codecov](https://codecov.io/gh/con/duct/graph/badge.svg?token=JrPazw0Vn4)](https://codecov.io/gh/con/duct)
[![PyPI version](https://badge.fury.io/py/con-duct.svg)](https://badge.fury.io/py/con-duct)
[![RRID](https://img.shields.io/badge/RRID-SCR__025436-blue)](https://identifiers.org/RRID:SCR_025436)

## Installation

    pip install con-duct

## Quickstart

Try it out!

    duct --sample-interval 0.5 --report-interval 1 test/data/test_script.py --duration 3 --memory-size=1000

`duct` is most useful when the report-interval is less than the duration of the script.

## Summary:

A process wrapper script that monitors the execution of a command.

```shell
>duct --help
usage: duct [-h] [--version] [-p OUTPUT_PREFIX]
            [--summary-format SUMMARY_FORMAT] [--colors] [--clobber]
            [-l {NONE,CRITICAL,ERROR,WARNING,INFO,DEBUG}] [-q]
            [--sample-interval SAMPLE_INTERVAL]
            [--report-interval REPORT_INTERVAL] [--fail-time FAIL_TIME]
            [-c {all,none,stdout,stderr}] [-o {all,none,stdout,stderr}]
            [-t {all,system-summary,processes-samples}]
            command [command_args ...] ...

duct is a lightweight wrapper that collects execution data for an arbitrary
command. Execution data includes execution time, system information, and
resource usage statistics of the command and all its child processes. It is
intended to simplify the problem of recording the resources necessary to
execute a command, particularly in an HPC environment.

Resource usage is determined by polling (at a sample-interval). During
execution, duct produces a JSON lines (see https://jsonlines.org) file with
one data point recorded for each report (at a report-interval).

environment variables:
  Many duct options can be configured by environment variables (which are
  overridden by command line options).
  DUCT_LOG_LEVEL: see --log-level
  DUCT_OUTPUT_PREFIX: see --output-prefix
  DUCT_SUMMARY_FORMAT: see --summary-format
  DUCT_SAMPLE_INTERVAL: see --sample-interval
  DUCT_REPORT_INTERVAL: see --report-interval
  DUCT_CAPTURE_OUTPUTS: see --capture-outputs

positional arguments:
  command [command_args ...]
                        The command to execute, along with its arguments.
  command_args          Arguments for the command.

options:
  -h, --help            show this help message and exit
  --version             show program's version number and exit
  -p OUTPUT_PREFIX, --output-prefix OUTPUT_PREFIX
                        File string format to be used as a prefix for the
                        files -- the captured stdout and stderr and the
                        resource usage logs. The understood variables are
                        {datetime}, {datetime_filesafe}, and {pid}. Leading
                        directories will be created if they do not exist. You
                        can also provide value via DUCT_OUTPUT_PREFIX env
                        variable. (default: .duct/logs/{datetime_filesafe}-{pid}_)
  --summary-format SUMMARY_FORMAT
                        Output template to use when printing the summary
                        following execution. Accepts custom conversion flags:
                        !S: Converts filesizes to human readable units, green
                        if measured, red if None. !E: Colors exit code, green
                        if falsey, red if truthy, and red if None. !X: Colors
                        green if truthy, red if falsey. !N: Colors green if
                        not None, red if None (default: Summary: Exit Code:
                        {exit_code!E} Command: {command} Log files location:
                        {logs_prefix} Wall Clock Time: {wall_clock_time:.3f}
                        sec Memory Peak Usage (RSS): {peak_rss!S} Memory
                        Average Usage (RSS): {average_rss!S} Virtual Memory
                        Peak Usage (VSZ): {peak_vsz!S} Virtual Memory Average
                        Usage (VSZ): {average_vsz!S} Memory Peak Percentage:
                        {peak_pmem:.2f!N}% Memory Average Percentage:
                        {average_pmem:.2f!N}% CPU Peak Usage:
                        {peak_pcpu:.2f!N}% Average CPU Usage:
                        {average_pcpu:.2f!N}% )
  --colors              Use colors in duct output. (default: False)
  --clobber             Replace log files if they already exist.
                        (default: False)
  -l {NONE,CRITICAL,ERROR,WARNING,INFO,DEBUG}, --log-level {NONE,CRITICAL,ERROR,WARNING,INFO,DEBUG}
                        Level of log output to stderr, use NONE to entirely
                        disable. (default: INFO)
  -q, --quiet           [deprecated, use log level NONE] Disable duct logging
                        output (to stderr) (default: False)
  --sample-interval SAMPLE_INTERVAL, --s-i SAMPLE_INTERVAL
                        Interval in seconds between status checks of the
                        running process. Sample interval must be less than or
                        equal to report interval, and it achieves the best
                        results when sample is significantly less than the
                        runtime of the process. (default: 1.0)
  --report-interval REPORT_INTERVAL, --r-i REPORT_INTERVAL
                        Interval in seconds at which to report aggregated
                        data. (default: 60.0)
  --fail-time FAIL_TIME, --f-t FAIL_TIME
                        If command fails in less than this specified time
                        (seconds), duct would remove logs. Set to 0 if you
                        would like to keep logs for a failing command
                        regardless of its run time. Set to negative (e.g. -1)
                        if you would like to not keep logs for any failing
                        command. (default: 3.0)
  -c {all,none,stdout,stderr}, --capture-outputs {all,none,stdout,stderr}
                        Record stdout, stderr, all, or none to log files. You
                        can also provide value via DUCT_CAPTURE_OUTPUTS env
                        variable. (default: all)
  -o {all,none,stdout,stderr}, --outputs {all,none,stdout,stderr}
                        Print stdout, stderr, all, or none to stdout/stderr
                        respectively. (default: all)
  -t {all,system-summary,processes-samples}, --record-types {all,system-summary,processes-samples}
                        Record system-summary, processes-samples, or all
                        (default: all)
```

# con-duct suite

In addition to `duct`, this project also includes a set of optional helpers under the `con-duct` command. These helpers may use 3rd party python libraries.

## Installation

    pip install con-duct[all]

## Extras Helptext

```shell
>con-duct --help
usage: con-duct [options]

A suite of commands to manage or manipulate con-duct logs.

positional arguments:
  {pp,plot,ls}          Available subcommands
    pp                  Pretty print a JSON log.
    plot                Plot resource usage for an execution.
    ls                  Print execution information for all matching runs.

options:
  -h, --help            show this help message and exit
  -l {NONE,CRITICAL,ERROR,WARNING,INFO,DEBUG}, --log-level {NONE,CRITICAL,ERROR,WARNING,INFO,DEBUG}
                        Level of log output to stderr, use NONE to entirely
                        disable.
```

## FAQs

### git-annex add keeps adding duct logs directly into git

By default, [git-annex](https://git-annex.branchable.com/) treats all dotfiles, and files under directories starting with a `.`, as "small" regardless of the `annex.largefiles` setting
[[ref: an issue describing the logic](https://git-annex.branchable.com/bugs/add__58___inconsistently_treats_files_in_dotdirs_as_dotfiles/?updated#comment-efc1f2aa8f46e88a8be9837a56cfa6f7)].
It is necessary to set the `annex.dotfiles` variable to `true` to make git-annex treat them as regular files, and thus subject to the `annex.largefiles` setting
[[ref: git-annex config](https://git-annex.branchable.com/git-annex-config/)].
This can be done repository-wide (not just for a specific clone, but for any instance, since the setting is recorded in the `git-annex` branch) using `git annex config --set annex.dotfiles true`.
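The `--eval-filter` option described above works by flattening each run's nested `info.json` record into a single namespace of leaf fields and then evaluating the user's Python expression against it, with `re` available for regexes. A minimal standalone sketch of that flatten-then-eval logic (reimplemented here rather than imported from `con_duct`; the sample record below is illustrative):

```python
import re
from typing import Any, Dict


def flatten(d: Dict[str, Any]) -> Dict[str, Any]:
    # Merge nested dicts into one flat namespace of leaf keys,
    # mirroring the behavior of con_duct.suite.ls._flatten_dict.
    items = []
    for k, v in d.items():
        if isinstance(v, dict):
            items.extend(flatten(v).items())
        else:
            items.append((k, v))
    return dict(items)


def matches(run: Dict[str, Any], expression: str) -> bool:
    # The expression sees flattened fields as globals, plus `re` for regexes.
    return bool(eval(expression, flatten(run), dict(re=re)))


# Illustrative record shaped like a duct info.json entry
run = {"prefix": "2025.02.09-123_", "execution_summary": {"exit_code": 0}}
print(matches(run, "exit_code == 0"))                     # True
print(matches(run, "re.search('2025.02.09.*', prefix)"))  # True
```

Because the nested `execution_summary` dict is flattened away, the expression can refer to `exit_code` directly, matching the field names listed under `--fields`.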
File: con_duct-0.11.0/src/con_duct.egg-info/SOURCES.txt

CHANGELOG.md
LICENSE
MANIFEST.in
README.md
pyproject.toml
setup.cfg
tox.ini
src/con_duct/__init__.py
src/con_duct/__main__.py
src/con_duct/py.typed
src/con_duct/utils.py
src/con_duct.egg-info/PKG-INFO
src/con_duct.egg-info/SOURCES.txt
src/con_duct.egg-info/dependency_links.txt
src/con_duct.egg-info/entry_points.txt
src/con_duct.egg-info/requires.txt
src/con_duct.egg-info/top_level.txt
src/con_duct/suite/ls.py
src/con_duct/suite/main.py
src/con_duct/suite/plot.py
src/con_duct/suite/pprint_json.py
test/conftest.py
test/test_aggregation.py
test/test_arg_parsing.py
test/test_e2e.py
test/test_execution.py
test/test_formatter.py
test/test_log_paths.py
test/test_ls.py
test/test_prepare_outputs.py
test/test_report.py
test/test_schema.py
test/test_suite.py
test/test_tailpipe.py
test/test_utils.py
test/test_validation.py
test/utils.py
test/data/abandoning_parent.sh
test/data/cat_to_err.py
test/data/test_script.py
test/data/mriqc-example/info.json
test/data/mriqc-example/usage.json

File: con_duct-0.11.0/src/con_duct.egg-info/dependency_links.txt (empty)

File: con_duct-0.11.0/src/con_duct.egg-info/entry_points.txt

[console_scripts]
con-duct = con_duct.suite.main:main
duct = con_duct.__main__:main

File: con_duct-0.11.0/src/con_duct.egg-info/requires.txt

[all]
matplotlib
PyYAML
pyout

File: con_duct-0.11.0/src/con_duct.egg-info/top_level.txt

con_duct

# File: con_duct-0.11.0/test/conftest.py

import os
from pathlib import Path
from typing import Generator

import pytest


@pytest.fixture(scope="session", autouse=True)
def set_test_config() -> Generator:
    # set DUCT_SAMPLE_INTERVAL and DUCT_REPORT_INTERVAL to small values
    # to speed up testing etc. Those could be overridden by a specific
    # invocation of .from_args() in a test.
    orig_environ = os.environ.copy()
    os.environ["DUCT_SAMPLE_INTERVAL"] = "0.01"
    os.environ["DUCT_REPORT_INTERVAL"] = "0.1"
    yield
    # May not even be needed, but should not hurt to clean up.
    # os.environ is not just a dict, so explicitly restore it;
    # iterate over a snapshot since we mutate while looping.
    for k in list(os.environ):
        if k in orig_environ:
            os.environ[k] = orig_environ[k]
        else:
            del os.environ[k]


@pytest.fixture
def temp_output_dir(tmp_path: Path) -> str:
    # Append path separator so that value is recognized as a directory when
    # passed to `output_prefix`
    return str(tmp_path) + os.sep

# File: con_duct-0.11.0/test/data/abandoning_parent.sh

#!/bin/bash
nchildren=$1
shift
for i in `seq 1 $nchildren`; do
    "$@" &
done
echo "Started $nchildren for $$"
# Can be useful when running manually, but commented out so we can count pids in tests
# pstree -c -p "$$"

echo "Starting one more in subprocess"
( "$@" & )
jobs
# Can be useful when running manually, but commented out so we can count pids in tests
# pstree -c -p "$$"

echo "waiting"
wait

# File: con_duct-0.11.0/test/data/cat_to_err.py

#!/usr/bin/env python3
from __future__ import annotations

import argparse
import sys
from typing import IO


def cat_to_stream(path: str, buffer: IO[bytes]) -> None:
    with open(path, "rb") as infile:
        buffer.write(infile.read())


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Cat to stderr")
    parser.add_argument("path", help="Path to the file to be catted")
    args = parser.parse_args()
    cat_to_stream(args.path, sys.stderr.buffer)
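`con_duct.utils.parse_version` (shown earlier in this archive) returns an `(x, y, z)` integer tuple, so the schema-version gate in `ls.py` reduces to a plain tuple comparison against `MINIMUM_SCHEMA_VERSION`. A small standalone sketch of that comparison (the function is reimplemented here rather than imported):

```python
def parse_version(version_str: str) -> tuple:
    # Split "x.y.z" and force exactly three integer components,
    # as in con_duct.utils.parse_version.
    x_y_z = version_str.split(".")
    if len(x_y_z) != 3:
        raise ValueError(
            f"Invalid version format: {version_str}. Expected 'x.y.z' format."
        )
    x, y, z = map(int, x_y_z)
    return (x, y, z)


MINIMUM_SCHEMA_VERSION = "0.2.0"

# Tuples compare element-wise, so "0.10.1" correctly sorts above "0.2.0";
# a naive string comparison would get this wrong ("0.10.1" < "0.2.0" as text).
print(parse_version("0.1.0") < parse_version(MINIMUM_SCHEMA_VERSION))   # True: run skipped by ls
print(parse_version("0.10.1") < parse_version(MINIMUM_SCHEMA_VERSION))  # False: run included
```

This is why the replacement for `packaging.Version` mentioned in the changelog can stay so small: integer-tuple comparison already gives the right ordering for three-part versions.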
File: con_duct-0.11.0/test/data/mriqc-example/info.json

{"command": "singularity run --contain --bind /home/asmacdo/devel/sandbox/mriqc-sanity/sourcedata:/data:ro --bind /home/asmacdo/devel/sandbox/mriqc-sanity/long-run-4:/out --bind /home/asmacdo/devel/sandbox/mriqc-sanity/long-run-4/workdir:/workdir docker://nipreps/mriqc:24.0.2 /data /out participant --participant-label 02 -w /workdir --no-sub", "system": {"uid": "asmacdo", "memory_total": 1081801371648, "cpu_total": 32, "hostname": "typhon"}, "env": {}, "gpu": [{"index": "0", "name": "NVIDIA A100-PCIE-40GB", "bus_id": "00000000:31:00.0", "driver_version": "560.28.03", "memory.total": "40960 MiB", "compute_mode": "Default"}], "duct_version": "0.4.0", "schema_version": "0.1.0", "execution_summary": {"exit_code": 0, "command": "singularity run --contain --bind /home/asmacdo/devel/sandbox/mriqc-sanity/sourcedata:/data:ro --bind /home/asmacdo/devel/sandbox/mriqc-sanity/long-run-4:/out --bind /home/asmacdo/devel/sandbox/mriqc-sanity/long-run-4/workdir:/workdir docker://nipreps/mriqc:24.0.2 /data /out participant --participant-label 02 -w /workdir --no-sub", "logs_prefix": "long-run-4/duct_", "wall_clock_time": 294.2717432975769, "peak_rss": 15546400768, "average_rss": 9255228603.492952, "peak_vsz": 33200779264, "average_vsz": 23610373711.323944, "peak_pmem": 0.5, "average_pmem": 0.042605633802816914, "peak_pcpu": 446.9, "average_pcpu": 197.9066901408451, "num_samples": 284, "num_reports": 6}, "output_paths": {"stdout": "long-run-4/duct_stdout", "stderr": "long-run-4/duct_stderr", "usage": "long-run-4/duct_usage.json", "info": "long-run-4/duct_info.json", "prefix": "long-run-4/duct_"}}
con_duct-0.11.0/test/data/mriqc-example/usage.json0000644000175100001660000034040014761605014021432 0ustar00runnerdocker{"timestamp": "2024-10-08T14:37:00.831368-04:00", "num_samples": 1, "processes": {"2703236": {"pcpu": 0.0, "pmem": 0.0, "rss": 40456192, "vsz": 1603104768, "timestamp": "2024-10-08T14:37:00.831368-04:00", "etime": "00:00", "stat": {"Ssl": 1}, "cmd": "singularity run --contain --bind /home/asmacdo/devel/sandbox/mriqc-sanity/sourcedata:/data:ro --bind /home/asmacdo/devel/sandbox/mriqc-sanity/long-run-4:/out --bind /home/asmacdo/devel/sandbox/mriqc-sanity/long-run-4/workdir:/workdir docker://nipreps/mriqc:24.0.2 /data /out participant --participant-label 02 -w /workdir --no-sub"}}, "totals": {"pmem": 0.0, "pcpu": 0.0, "rss": 40456192, "vsz": 1603104768}, "averages": {"rss": 40456192, "vsz": 1603104768, "pmem": 0.0, "pcpu": 0.0, "num_samples": 1}} {"timestamp": "2024-10-08T14:38:01.044663-04:00", "num_samples": 58, "processes": {"2703616": {"pcpu": 0.0, "pmem": 0.0, "rss": 245268480, "vsz": 619794432, "timestamp": "2024-10-08T14:38:01.044453-04:00", "etime": "00:43", "stat": {"S": 42}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703617": {"pcpu": 0.0, "pmem": 0.0, "rss": 245268480, "vsz": 619794432, "timestamp": "2024-10-08T14:38:01.044478-04:00", "etime": "00:43", "stat": {"S": 42}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703618": {"pcpu": 0.0, "pmem": 0.0, "rss": 245268480, "vsz": 619794432, "timestamp": "2024-10-08T14:38:01.044503-04:00", "etime": "00:42", "stat": {"S": 42}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703619": {"pcpu": 0.0, "pmem": 0.0, "rss": 245268480, "vsz": 619794432, "timestamp": "2024-10-08T14:38:01.044527-04:00", "etime": "00:42", "stat": {"S": 42}, "cmd": 
"/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703236": {"pcpu": 6.7, "pmem": 0.0, "rss": 21618688, "vsz": 1276940288, "timestamp": "2024-10-08T14:38:01.043602-04:00", "etime": "01:00", "stat": {"Ssl": 58}, "cmd": "Singularity runtime parent"}, "2703620": {"pcpu": 0.0, "pmem": 0.0, "rss": 245268480, "vsz": 619794432, "timestamp": "2024-10-08T14:38:01.044552-04:00", "etime": "00:42", "stat": {"S": 42}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2704141": {"pcpu": 0.0, "pmem": 0.0, "rss": 294912, "vsz": 2965504, "timestamp": "2024-10-08T14:38:01.044637-04:00", "etime": "00:30", "stat": {"S": 30}, "cmd": "/bin/sh -c N4BiasFieldCorrection -d 3 --input-image /workdir/mriqc_wf/anatMRIQC/synthstrip_wf/_entities_subject02.suffixT1w.datatypeanat.extension.nii.gz_in_file_..data..sub-02..anat..sub-02_T1w.nii.gz_metadata_FileSize0.0055013783276081085.FileSizeUnitsGB.NumberOfVolumes1/pre_clip/clipped.nii.gz --output clipped_corrected.nii.gz -r"}, "2704142": {"pcpu": 100.0, "pmem": 0.0, "rss": 128471040, "vsz": 197738496, "timestamp": "2024-10-08T14:38:01.044663-04:00", "etime": "00:30", "stat": {"Rl": 30}, "cmd": "N4BiasFieldCorrection -d 3 --input-image /workdir/mriqc_wf/anatMRIQC/synthstrip_wf/_entities_subject02.suffixT1w.datatypeanat.extension.nii.gz_in_file_..data..sub-02..anat..sub-02_T1w.nii.gz_metadata_FileSize0.0055013783276081085.FileSizeUnitsGB.NumberOfVolumes1/pre_clip/clipped.nii.gz --output clipped_corrected.nii.gz -r"}, "2703509": {"pcpu": 38.2, "pmem": 0.0, "rss": 282361856, "vsz": 574029824, "timestamp": "2024-10-08T14:37:17.401469-04:00", "etime": "00:12", "stat": {"Sl": 13}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703514": {"pcpu": 108.0, "pmem": 0.0, "rss": 280641536, "vsz": 612782080, "timestamp": 
"2024-10-08T14:37:13.258129-04:00", "etime": "00:08", "stat": {"R": 4, "Sl": 1, "S": 4}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703522": {"pcpu": 106.0, "pmem": 0.0, "rss": 4988928, "vsz": 9469952, "timestamp": "2024-10-08T14:37:08.081193-04:00", "etime": "00:00", "stat": {"R": 1}, "cmd": "fc-list --format=%{file}\\n"}, "2703276": {"pcpu": 89.3, "pmem": 0.0, "rss": 310177792, "vsz": 620011520, "timestamp": "2024-10-08T14:38:01.043724-04:00", "etime": "00:59", "stat": {"R": 5, "S": 11, "Sl": 38, "Rl": 4}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2704079": {"pcpu": 0.0, "pmem": 0.0, "rss": 294912, "vsz": 2965504, "timestamp": "2024-10-08T14:38:01.044582-04:00", "etime": "00:37", "stat": {"S": 36}, "cmd": "/bin/sh -c 3dvolreg -Fourier -twopass -1Dfile sub-02_task-rhymejudgment_bold_valid.1D -1Dmatrix_save sub-02_task-rhymejudgment_bold_valid.aff12.1D -prefix sub-02_task-rhymejudgment_bold_valid_volreg.nii.gz -zpad 4 -maxdisp1D sub-02_task-rhymejudgment_bold_valid_md.1D /workdir/mriqc_wf/funcMRIQC/fMRI_HMC/_entities_subject02.taskrhymejudgment.suffixbold.datatypefunc.extension.nii.gz_in_file_..data..sub-02..func..sub-02_task-rhymejudgment_bold.nii.gz_metadata_FileSize0.024249723181128502.FileSizeUnitsGB.NumberOfVolumes160/estimate_hm/sub-02_task-rhymejudgment_bold_valid.nii.gz"}, "2704080": {"pcpu": 101.0, "pmem": 0.0, "rss": 57589760, "vsz": 64397312, "timestamp": "2024-10-08T14:38:01.044610-04:00", "etime": "00:37", "stat": {"R": 36}, "cmd": "3dvolreg -Fourier -twopass -1Dfile sub-02_task-rhymejudgment_bold_valid.1D -1Dmatrix_save sub-02_task-rhymejudgment_bold_valid.aff12.1D -prefix sub-02_task-rhymejudgment_bold_valid_volreg.nii.gz -zpad 4 -maxdisp1D sub-02_task-rhymejudgment_bold_valid_md.1D 
/workdir/mriqc_wf/funcMRIQC/fMRI_HMC/_entities_subject02.taskrhymejudgment.suffixbold.datatypefunc.extension.nii.gz_in_file_..data..sub-02..func..sub-02_task-rhymejudgment_bold.nii.gz_metadata_FileSize0.024249723181128502.FileSizeUnitsGB.NumberOfVolumes160/estimate_hm/sub-02_task-rhymejudgment_bold_valid.nii.gz"}, "2703589": {"pcpu": 64.1, "pmem": 0.0, "rss": 251944960, "vsz": 754130944, "timestamp": "2024-10-08T14:38:01.043766-04:00", "etime": "00:43", "stat": {"S": 42}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703590": {"pcpu": 1.9, "pmem": 0.0, "rss": 247160832, "vsz": 619978752, "timestamp": "2024-10-08T14:38:01.043796-04:00", "etime": "00:43", "stat": {"S": 42}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703591": {"pcpu": 79.9, "pmem": 0.0, "rss": 322437120, "vsz": 708763648, "timestamp": "2024-10-08T14:38:01.043823-04:00", "etime": "00:43", "stat": {"S": 34, "R": 8}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703592": {"pcpu": 18.7, "pmem": 0.0, "rss": 284741632, "vsz": 658579456, "timestamp": "2024-10-08T14:38:01.043850-04:00", "etime": "00:43", "stat": {"S": 42}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703593": {"pcpu": 65.7, "pmem": 0.0, "rss": 473456640, "vsz": 963923968, "timestamp": "2024-10-08T14:38:01.043876-04:00", "etime": "00:43", "stat": {"S": 38, "R": 4}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703594": {"pcpu": 10.3, "pmem": 0.0, "rss": 254566400, "vsz": 623910912, "timestamp": "2024-10-08T14:38:01.043901-04:00", "etime": "00:43", "stat": {"S": 42}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant 
--participant-label 02 -w /workdir --no-sub"}, "2703595": {"pcpu": 21.5, "pmem": 0.0, "rss": 315109376, "vsz": 685051904, "timestamp": "2024-10-08T14:38:01.043926-04:00", "etime": "00:43", "stat": {"S": 41, "R": 1}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703596": {"pcpu": 0.1, "pmem": 0.0, "rss": 246951936, "vsz": 619978752, "timestamp": "2024-10-08T14:38:01.043950-04:00", "etime": "00:43", "stat": {"S": 42}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703597": {"pcpu": 15.8, "pmem": 0.0, "rss": 326623232, "vsz": 820158464, "timestamp": "2024-10-08T14:38:01.043976-04:00", "etime": "00:43", "stat": {"S": 41, "R": 1}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703598": {"pcpu": 7.2, "pmem": 0.0, "rss": 254242816, "vsz": 623910912, "timestamp": "2024-10-08T14:38:01.044002-04:00", "etime": "00:43", "stat": {"S": 42}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703599": {"pcpu": 1.4, "pmem": 0.0, "rss": 256000000, "vsz": 619978752, "timestamp": "2024-10-08T14:38:01.044029-04:00", "etime": "00:43", "stat": {"S": 42}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703600": {"pcpu": 0.0, "pmem": 0.0, "rss": 245268480, "vsz": 619794432, "timestamp": "2024-10-08T14:38:01.044054-04:00", "etime": "00:43", "stat": {"S": 42}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703601": {"pcpu": 0.0, "pmem": 0.0, "rss": 245268480, "vsz": 619794432, "timestamp": "2024-10-08T14:38:01.044080-04:00", "etime": "00:43", "stat": {"S": 42}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data 
/out participant --participant-label 02 -w /workdir --no-sub"}, "2703602": {"pcpu": 0.0, "pmem": 0.0, "rss": 245268480, "vsz": 619794432, "timestamp": "2024-10-08T14:38:01.044105-04:00", "etime": "00:43", "stat": {"S": 42}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703603": {"pcpu": 0.0, "pmem": 0.0, "rss": 245268480, "vsz": 619794432, "timestamp": "2024-10-08T14:38:01.044130-04:00", "etime": "00:43", "stat": {"S": 42}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703604": {"pcpu": 0.0, "pmem": 0.0, "rss": 245268480, "vsz": 619794432, "timestamp": "2024-10-08T14:38:01.044155-04:00", "etime": "00:43", "stat": {"S": 42}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703605": {"pcpu": 0.0, "pmem": 0.0, "rss": 245268480, "vsz": 619794432, "timestamp": "2024-10-08T14:38:01.044180-04:00", "etime": "00:43", "stat": {"S": 42}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703606": {"pcpu": 0.0, "pmem": 0.0, "rss": 245268480, "vsz": 619794432, "timestamp": "2024-10-08T14:38:01.044204-04:00", "etime": "00:43", "stat": {"S": 42}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703607": {"pcpu": 0.0, "pmem": 0.0, "rss": 245268480, "vsz": 619794432, "timestamp": "2024-10-08T14:38:01.044228-04:00", "etime": "00:43", "stat": {"S": 42}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703608": {"pcpu": 0.0, "pmem": 0.0, "rss": 245268480, "vsz": 619794432, "timestamp": "2024-10-08T14:38:01.044253-04:00", "etime": "00:43", "stat": {"S": 42}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out 
participant --participant-label 02 -w /workdir --no-sub"}, "2703609": {"pcpu": 0.0, "pmem": 0.0, "rss": 245268480, "vsz": 619794432, "timestamp": "2024-10-08T14:38:01.044281-04:00", "etime": "00:43", "stat": {"S": 42}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703610": {"pcpu": 0.0, "pmem": 0.0, "rss": 245268480, "vsz": 619794432, "timestamp": "2024-10-08T14:38:01.044307-04:00", "etime": "00:43", "stat": {"S": 42}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703611": {"pcpu": 0.0, "pmem": 0.0, "rss": 245268480, "vsz": 619794432, "timestamp": "2024-10-08T14:38:01.044331-04:00", "etime": "00:43", "stat": {"S": 42}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703612": {"pcpu": 0.0, "pmem": 0.0, "rss": 245268480, "vsz": 619794432, "timestamp": "2024-10-08T14:38:01.044356-04:00", "etime": "00:43", "stat": {"S": 42}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703613": {"pcpu": 0.0, "pmem": 0.0, "rss": 245268480, "vsz": 619794432, "timestamp": "2024-10-08T14:38:01.044380-04:00", "etime": "00:43", "stat": {"S": 42}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703614": {"pcpu": 0.0, "pmem": 0.0, "rss": 245268480, "vsz": 619794432, "timestamp": "2024-10-08T14:38:01.044404-04:00", "etime": "00:43", "stat": {"S": 42}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703615": {"pcpu": 0.0, "pmem": 0.0, "rss": 245268480, "vsz": 619794432, "timestamp": "2024-10-08T14:38:01.044428-04:00", "etime": "00:43", "stat": {"S": 42}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out 
participant --participant-label 02 -w /workdir --no-sub"}}, "totals": {"pmem": 0.0, "pcpu": 355.8, "rss": 8683257856, "vsz": 22636843008}, "averages": {"rss": 6378588230.62069, "vsz": 16992707901.793104, "pmem": 0.0, "pcpu": 224.06551724137935, "num_samples": 58}} {"timestamp": "2024-10-08T14:39:01.327866-04:00", "num_samples": 58, "processes": {"2703616": {"pcpu": 0.0, "pmem": 0.0, "rss": 245268480, "vsz": 619794432, "timestamp": "2024-10-08T14:39:01.327711-04:00", "etime": "01:43", "stat": {"S": 58}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703617": {"pcpu": 0.0, "pmem": 0.0, "rss": 245268480, "vsz": 619794432, "timestamp": "2024-10-08T14:39:01.327736-04:00", "etime": "01:43", "stat": {"S": 58}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703618": {"pcpu": 0.0, "pmem": 0.0, "rss": 245268480, "vsz": 619794432, "timestamp": "2024-10-08T14:39:01.327761-04:00", "etime": "01:43", "stat": {"S": 58}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703619": {"pcpu": 0.0, "pmem": 0.0, "rss": 245268480, "vsz": 619794432, "timestamp": "2024-10-08T14:39:01.327788-04:00", "etime": "01:43", "stat": {"S": 58}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703236": {"pcpu": 0.1, "pmem": 0.0, "rss": 21618688, "vsz": 1276940288, "timestamp": "2024-10-08T14:39:01.326869-04:00", "etime": "02:00", "stat": {"Ssl": 58}, "cmd": "Singularity runtime parent"}, "2703620": {"pcpu": 0.0, "pmem": 0.0, "rss": 245268480, "vsz": 619794432, "timestamp": "2024-10-08T14:39:01.327813-04:00", "etime": "01:43", "stat": {"S": 58}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2704141": {"pcpu": 0.0, 
"pmem": 0.0, "rss": 294912, "vsz": 2965504, "timestamp": "2024-10-08T14:38:34.319245-04:00", "etime": "01:04", "stat": {"S": 32}, "cmd": "/bin/sh -c N4BiasFieldCorrection -d 3 --input-image /workdir/mriqc_wf/anatMRIQC/synthstrip_wf/_entities_subject02.suffixT1w.datatypeanat.extension.nii.gz_in_file_..data..sub-02..anat..sub-02_T1w.nii.gz_metadata_FileSize0.0055013783276081085.FileSizeUnitsGB.NumberOfVolumes1/pre_clip/clipped.nii.gz --output clipped_corrected.nii.gz -r"}, "2704142": {"pcpu": 100.0, "pmem": 0.0, "rss": 270618624, "vsz": 339689472, "timestamp": "2024-10-08T14:38:34.319271-04:00", "etime": "01:04", "stat": {"Rl": 32}, "cmd": "N4BiasFieldCorrection -d 3 --input-image /workdir/mriqc_wf/anatMRIQC/synthstrip_wf/_entities_subject02.suffixT1w.datatypeanat.extension.nii.gz_in_file_..data..sub-02..anat..sub-02_T1w.nii.gz_metadata_FileSize0.0055013783276081085.FileSizeUnitsGB.NumberOfVolumes1/pre_clip/clipped.nii.gz --output clipped_corrected.nii.gz -r"}, "2707085": {"pcpu": 0.0, "pmem": 0.0, "rss": 290816, "vsz": 2965504, "timestamp": "2024-10-08T14:39:01.327840-04:00", "etime": "00:25", "stat": {"S": 25}, "cmd": "/bin/sh -c synthstrip -b 1 -i /workdir/mriqc_wf/anatMRIQC/synthstrip_wf/_entities_subject02.suffixT1w.datatypeanat.extension.nii.gz_in_file_..data..sub-02..anat..sub-02_T1w.nii.gz_metadata_FileSize0.0055013783276081085.FileSizeUnitsGB.NumberOfVolumes1/pre_n4/clipped_corrected.nii.gz --model /opt/freesurfer/models/synthstrip.1.pt -n 1 -o clipped_corrected_desc-brain.nii.gz -m clipped_corrected_desc-brain_mask.nii.gz"}, "2707086": {"pcpu": 116.0, "pmem": 0.5, "rss": 6096261120, "vsz": 9271750656, "timestamp": "2024-10-08T14:39:01.327866-04:00", "etime": "00:25", "stat": {"R": 25}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/synthstrip -b 1 -i 
/workdir/mriqc_wf/anatMRIQC/synthstrip_wf/_entities_subject02.suffixT1w.datatypeanat.extension.nii.gz_in_file_..data..sub-02..anat..sub-02_T1w.nii.gz_metadata_FileSize0.0055013783276081085.FileSizeUnitsGB.NumberOfVolumes1/pre_n4/clipped_corrected.nii.gz --model /opt/freesurfer/models/synthstrip.1.pt -n 1 -o clipped_corrected_desc-brain.nii.gz -m clipped_corrected_desc-brain_mask.nii.gz"}, "2703276": {"pcpu": 14.7, "pmem": 0.0, "rss": 310177792, "vsz": 620011520, "timestamp": "2024-10-08T14:39:01.326986-04:00", "etime": "02:00", "stat": {"Sl": 56, "Rl": 2}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2704079": {"pcpu": 0.0, "pmem": 0.0, "rss": 294912, "vsz": 2965504, "timestamp": "2024-10-08T14:38:43.671240-04:00", "etime": "01:19", "stat": {"S": 41}, "cmd": "/bin/sh -c 3dvolreg -Fourier -twopass -1Dfile sub-02_task-rhymejudgment_bold_valid.1D -1Dmatrix_save sub-02_task-rhymejudgment_bold_valid.aff12.1D -prefix sub-02_task-rhymejudgment_bold_valid_volreg.nii.gz -zpad 4 -maxdisp1D sub-02_task-rhymejudgment_bold_valid_md.1D /workdir/mriqc_wf/funcMRIQC/fMRI_HMC/_entities_subject02.taskrhymejudgment.suffixbold.datatypefunc.extension.nii.gz_in_file_..data..sub-02..func..sub-02_task-rhymejudgment_bold.nii.gz_metadata_FileSize0.024249723181128502.FileSizeUnitsGB.NumberOfVolumes160/estimate_hm/sub-02_task-rhymejudgment_bold_valid.nii.gz"}, "2704080": {"pcpu": 100.0, "pmem": 0.0, "rss": 57589760, "vsz": 64397312, "timestamp": "2024-10-08T14:38:43.671247-04:00", "etime": "01:19", "stat": {"R": 41}, "cmd": "3dvolreg -Fourier -twopass -1Dfile sub-02_task-rhymejudgment_bold_valid.1D -1Dmatrix_save sub-02_task-rhymejudgment_bold_valid.aff12.1D -prefix sub-02_task-rhymejudgment_bold_valid_volreg.nii.gz -zpad 4 -maxdisp1D sub-02_task-rhymejudgment_bold_valid_md.1D 
/workdir/mriqc_wf/funcMRIQC/fMRI_HMC/_entities_subject02.taskrhymejudgment.suffixbold.datatypefunc.extension.nii.gz_in_file_..data..sub-02..func..sub-02_task-rhymejudgment_bold.nii.gz_metadata_FileSize0.024249723181128502.FileSizeUnitsGB.NumberOfVolumes160/estimate_hm/sub-02_task-rhymejudgment_bold_valid.nii.gz"}, "2703589": {"pcpu": 0.7, "pmem": 0.0, "rss": 251944960, "vsz": 754130944, "timestamp": "2024-10-08T14:39:01.327027-04:00", "etime": "01:43", "stat": {"S": 58}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703590": {"pcpu": 0.0, "pmem": 0.0, "rss": 247160832, "vsz": 619978752, "timestamp": "2024-10-08T14:39:01.327058-04:00", "etime": "01:43", "stat": {"S": 58}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703591": {"pcpu": 18.6, "pmem": 0.0, "rss": 276250624, "vsz": 644792320, "timestamp": "2024-10-08T14:39:01.327084-04:00", "etime": "01:43", "stat": {"S": 58}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703592": {"pcpu": 1.1, "pmem": 0.0, "rss": 284741632, "vsz": 658579456, "timestamp": "2024-10-08T14:39:01.327111-04:00", "etime": "01:43", "stat": {"S": 58}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703593": {"pcpu": 9.9, "pmem": 0.0, "rss": 379789312, "vsz": 869720064, "timestamp": "2024-10-08T14:39:01.327137-04:00", "etime": "01:43", "stat": {"S": 58}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703594": {"pcpu": 0.6, "pmem": 0.0, "rss": 254566400, "vsz": 623910912, "timestamp": "2024-10-08T14:39:01.327163-04:00", "etime": "01:43", "stat": {"S": 58}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 
02 -w /workdir --no-sub"}, "2703595": {"pcpu": 2.7, "pmem": 0.0, "rss": 272396288, "vsz": 642064384, "timestamp": "2024-10-08T14:39:01.327188-04:00", "etime": "01:43", "stat": {"S": 58}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703596": {"pcpu": 0.2, "pmem": 0.0, "rss": 256204800, "vsz": 619978752, "timestamp": "2024-10-08T14:39:01.327212-04:00", "etime": "01:43", "stat": {"S": 58}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703597": {"pcpu": 2.7, "pmem": 0.0, "rss": 290127872, "vsz": 779096064, "timestamp": "2024-10-08T14:39:01.327238-04:00", "etime": "01:43", "stat": {"S": 58}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703598": {"pcpu": 1.4, "pmem": 0.0, "rss": 254242816, "vsz": 623910912, "timestamp": "2024-10-08T14:39:01.327263-04:00", "etime": "01:43", "stat": {"S": 58}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703599": {"pcpu": 0.9, "pmem": 0.0, "rss": 281186304, "vsz": 641736704, "timestamp": "2024-10-08T14:39:01.327288-04:00", "etime": "01:43", "stat": {"S": 57, "R": 1}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703600": {"pcpu": 0.0, "pmem": 0.0, "rss": 246693888, "vsz": 619978752, "timestamp": "2024-10-08T14:39:01.327313-04:00", "etime": "01:43", "stat": {"S": 58}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703601": {"pcpu": 0.0, "pmem": 0.0, "rss": 249171968, "vsz": 619978752, "timestamp": "2024-10-08T14:39:01.327336-04:00", "etime": "01:43", "stat": {"S": 58}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant 
--participant-label 02 -w /workdir --no-sub"}, "2703602": {"pcpu": 0.0, "pmem": 0.0, "rss": 245268480, "vsz": 619794432, "timestamp": "2024-10-08T14:39:01.327360-04:00", "etime": "01:43", "stat": {"S": 58}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703603": {"pcpu": 0.0, "pmem": 0.0, "rss": 245268480, "vsz": 619794432, "timestamp": "2024-10-08T14:39:01.327384-04:00", "etime": "01:43", "stat": {"S": 58}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703604": {"pcpu": 0.0, "pmem": 0.0, "rss": 245268480, "vsz": 619794432, "timestamp": "2024-10-08T14:39:01.327409-04:00", "etime": "01:43", "stat": {"S": 58}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703605": {"pcpu": 0.0, "pmem": 0.0, "rss": 245268480, "vsz": 619794432, "timestamp": "2024-10-08T14:39:01.327433-04:00", "etime": "01:43", "stat": {"S": 58}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703606": {"pcpu": 0.0, "pmem": 0.0, "rss": 245268480, "vsz": 619794432, "timestamp": "2024-10-08T14:39:01.327458-04:00", "etime": "01:43", "stat": {"S": 58}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703607": {"pcpu": 0.0, "pmem": 0.0, "rss": 245268480, "vsz": 619794432, "timestamp": "2024-10-08T14:39:01.327483-04:00", "etime": "01:43", "stat": {"S": 58}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703608": {"pcpu": 0.0, "pmem": 0.0, "rss": 245268480, "vsz": 619794432, "timestamp": "2024-10-08T14:39:01.327509-04:00", "etime": "01:43", "stat": {"S": 58}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant 
--participant-label 02 -w /workdir --no-sub"}, "2703609": {"pcpu": 0.0, "pmem": 0.0, "rss": 245268480, "vsz": 619794432, "timestamp": "2024-10-08T14:39:01.327537-04:00", "etime": "01:43", "stat": {"S": 58}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703610": {"pcpu": 0.0, "pmem": 0.0, "rss": 245268480, "vsz": 619794432, "timestamp": "2024-10-08T14:39:01.327562-04:00", "etime": "01:43", "stat": {"S": 58}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703611": {"pcpu": 0.0, "pmem": 0.0, "rss": 245268480, "vsz": 619794432, "timestamp": "2024-10-08T14:39:01.327587-04:00", "etime": "01:43", "stat": {"S": 58}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703612": {"pcpu": 0.0, "pmem": 0.0, "rss": 245268480, "vsz": 619794432, "timestamp": "2024-10-08T14:39:01.327612-04:00", "etime": "01:43", "stat": {"S": 58}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703613": {"pcpu": 0.0, "pmem": 0.0, "rss": 245268480, "vsz": 619794432, "timestamp": "2024-10-08T14:39:01.327636-04:00", "etime": "01:43", "stat": {"S": 58}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703614": {"pcpu": 0.0, "pmem": 0.0, "rss": 245268480, "vsz": 619794432, "timestamp": "2024-10-08T14:39:01.327661-04:00", "etime": "01:43", "stat": {"S": 58}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703615": {"pcpu": 0.0, "pmem": 0.0, "rss": 245268480, "vsz": 619794432, "timestamp": "2024-10-08T14:39:01.327685-04:00", "etime": "01:43", "stat": {"S": 58}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant 
--participant-label 02 -w /workdir --no-sub"}}, "totals": {"pmem": 0.5, "pcpu": 252.9, "rss": 14632927232, "vsz": 31665618944}, "averages": {"rss": 9718880255.999996, "vsz": 24979013702.620686, "pmem": 0.07586206896551724, "pcpu": 206.5413793103448, "num_samples": 58}} {"timestamp": "2024-10-08T14:40:01.592907-04:00", "num_samples": 58, "processes": {"2703616": {"pcpu": 0.2, "pmem": 0.0, "rss": 277913600, "vsz": 642850816, "timestamp": "2024-10-08T14:40:01.592701-04:00", "etime": "02:43", "stat": {"S": 58}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703617": {"pcpu": 0.0, "pmem": 0.0, "rss": 246693888, "vsz": 619978752, "timestamp": "2024-10-08T14:40:01.592727-04:00", "etime": "02:43", "stat": {"S": 58}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703618": {"pcpu": 0.0, "pmem": 0.0, "rss": 245268480, "vsz": 619794432, "timestamp": "2024-10-08T14:40:01.592751-04:00", "etime": "02:43", "stat": {"S": 58}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703619": {"pcpu": 0.0, "pmem": 0.0, "rss": 245268480, "vsz": 619794432, "timestamp": "2024-10-08T14:40:01.592777-04:00", "etime": "02:43", "stat": {"S": 58}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703236": {"pcpu": 0.0, "pmem": 0.0, "rss": 21618688, "vsz": 1276940288, "timestamp": "2024-10-08T14:40:01.591836-04:00", "etime": "03:00", "stat": {"Ssl": 58}, "cmd": "Singularity runtime parent"}, "2703620": {"pcpu": 0.0, "pmem": 0.0, "rss": 245268480, "vsz": 619794432, "timestamp": "2024-10-08T14:40:01.592802-04:00", "etime": "02:43", "stat": {"S": 58}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2710144": {"pcpu": 
101.0, "pmem": 0.0, "rss": 52457472, "vsz": 122109952, "timestamp": "2024-10-08T14:39:39.772033-04:00", "etime": "00:01", "stat": {"Rl": 2}, "cmd": "N4BiasFieldCorrection -d 3 --input-image /workdir/mriqc_wf/funcMRIQC/synthstrip_wf/_entities_subject02.taskrhymejudgment.suffixbold.datatypefunc.extension.nii.gz_in_file_..data..sub-02..func..sub-02_task-rhymejudgment_bold.nii.gz_metadata_FileSize0.024249723181128502.FileSizeUnitsGB.NumberOfVolumes160/pre_clip/clipped.nii.gz --output clipped_corrected.nii.gz -r"}, "2710153": {"pcpu": 0.0, "pmem": 0.0, "rss": 294912, "vsz": 2965504, "timestamp": "2024-10-08T14:40:01.592881-04:00", "etime": "00:19", "stat": {"S": 19}, "cmd": "/bin/sh -c synthstrip -b 1 -i /workdir/mriqc_wf/funcMRIQC/synthstrip_wf/_entities_subject02.taskrhymejudgment.suffixbold.datatypefunc.extension.nii.gz_in_file_..data..sub-02..func..sub-02_task-rhymejudgment_bold.nii.gz_metadata_FileSize0.024249723181128502.FileSizeUnitsGB.NumberOfVolumes160/pre_n4/clipped_corrected.nii.gz --model /opt/freesurfer/models/synthstrip.1.pt -n 1 -o clipped_corrected_desc-brain.nii.gz -m clipped_corrected_desc-brain_mask.nii.gz"}, "2710154": {"pcpu": 124.0, "pmem": 0.2, "rss": 2980311040, "vsz": 6021136384, "timestamp": "2024-10-08T14:40:01.592907-04:00", "etime": "00:19", "stat": {"R": 19}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/synthstrip -b 1 -i /workdir/mriqc_wf/funcMRIQC/synthstrip_wf/_entities_subject02.taskrhymejudgment.suffixbold.datatypefunc.extension.nii.gz_in_file_..data..sub-02..func..sub-02_task-rhymejudgment_bold.nii.gz_metadata_FileSize0.024249723181128502.FileSizeUnitsGB.NumberOfVolumes160/pre_n4/clipped_corrected.nii.gz --model /opt/freesurfer/models/synthstrip.1.pt -n 1 -o clipped_corrected_desc-brain.nii.gz -m clipped_corrected_desc-brain_mask.nii.gz"}, "2707085": {"pcpu": 0.0, "pmem": 0.0, "rss": 290816, "vsz": 2965504, "timestamp": "2024-10-08T14:39:10.676949-04:00", "etime": "00:34", "stat": {"S": 9}, "cmd": "/bin/sh -c synthstrip -b 1 -i 
/workdir/mriqc_wf/anatMRIQC/synthstrip_wf/_entities_subject02.suffixT1w.datatypeanat.extension.nii.gz_in_file_..data..sub-02..anat..sub-02_T1w.nii.gz_metadata_FileSize0.0055013783276081085.FileSizeUnitsGB.NumberOfVolumes1/pre_n4/clipped_corrected.nii.gz --model /opt/freesurfer/models/synthstrip.1.pt -n 1 -o clipped_corrected_desc-brain.nii.gz -m clipped_corrected_desc-brain_mask.nii.gz"}, "2707086": {"pcpu": 101.0, "pmem": 0.5, "rss": 6264033280, "vsz": 9271750656, "timestamp": "2024-10-08T14:39:10.676974-04:00", "etime": "00:34", "stat": {"R": 9}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/synthstrip -b 1 -i /workdir/mriqc_wf/anatMRIQC/synthstrip_wf/_entities_subject02.suffixT1w.datatypeanat.extension.nii.gz_in_file_..data..sub-02..anat..sub-02_T1w.nii.gz_metadata_FileSize0.0055013783276081085.FileSizeUnitsGB.NumberOfVolumes1/pre_n4/clipped_corrected.nii.gz --model /opt/freesurfer/models/synthstrip.1.pt -n 1 -o clipped_corrected_desc-brain.nii.gz -m clipped_corrected_desc-brain_mask.nii.gz"}, "2703612": {"pcpu": 0.2, "pmem": 0.0, "rss": 277573632, "vsz": 642850816, "timestamp": "2024-10-08T14:40:01.592597-04:00", "etime": "02:43", "stat": {"S": 58}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2710135": {"pcpu": 0.0, "pmem": 0.0, "rss": 299008, "vsz": 2965504, "timestamp": "2024-10-08T14:39:37.692248-04:00", "etime": "00:01", "stat": {"S": 2}, "cmd": "/bin/sh -c N4BiasFieldCorrection -d 3 --input-image /workdir/mriqc_wf/funcMRIQC/_entities_subject02.taskrhymejudgment.suffixbold.datatypefunc.extension.nii.gz_in_file_..data..sub-02..func..sub-02_task-rhymejudgment_bold.nii.gz_metadata_FileSize0.024249723181128502.FileSizeUnitsGB.NumberOfVolumes160/mean/mapflow/_mean0/sub-02_task-rhymejudgment_bold_desc-realigned_valid_tstat.nii.gz --output sub-02_task-rhymejudgment_bold_desc-realigned_valid_tstat_corrected.nii.gz"}, "2703597": {"pcpu": 1.1, "pmem": 0.0, "rss": 290127872, "vsz": 
779096064, "timestamp": "2024-10-08T14:40:01.592212-04:00", "etime": "02:43", "stat": {"S": 58}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703276": {"pcpu": 9.8, "pmem": 0.0, "rss": 310177792, "vsz": 620011520, "timestamp": "2024-10-08T14:40:01.591954-04:00", "etime": "03:00", "stat": {"Sl": 55, "Rl": 3}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2710136": {"pcpu": 106.0, "pmem": 0.0, "rss": 53366784, "vsz": 122101760, "timestamp": "2024-10-08T14:39:37.692275-04:00", "etime": "00:01", "stat": {"Rl": 2}, "cmd": "N4BiasFieldCorrection -d 3 --input-image /workdir/mriqc_wf/funcMRIQC/_entities_subject02.taskrhymejudgment.suffixbold.datatypefunc.extension.nii.gz_in_file_..data..sub-02..func..sub-02_task-rhymejudgment_bold.nii.gz_metadata_FileSize0.024249723181128502.FileSizeUnitsGB.NumberOfVolumes160/mean/mapflow/_mean0/sub-02_task-rhymejudgment_bold_desc-realigned_valid_tstat.nii.gz --output sub-02_task-rhymejudgment_bold_desc-realigned_valid_tstat_corrected.nii.gz"}, "2703614": {"pcpu": 0.6, "pmem": 0.0, "rss": 287404032, "vsz": 777101312, "timestamp": "2024-10-08T14:40:01.592647-04:00", "etime": "02:43", "stat": {"S": 57, "R": 1}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703609": {"pcpu": 0.9, "pmem": 0.0, "rss": 487059456, "vsz": 899649536, "timestamp": "2024-10-08T14:40:01.592521-04:00", "etime": "02:43", "stat": {"S": 57, "R": 1}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2710143": {"pcpu": 0.0, "pmem": 0.0, "rss": 294912, "vsz": 2965504, "timestamp": "2024-10-08T14:39:39.772006-04:00", "etime": "00:01", "stat": {"S": 2}, "cmd": "/bin/sh -c N4BiasFieldCorrection -d 3 --input-image 
/workdir/mriqc_wf/funcMRIQC/synthstrip_wf/_entities_subject02.taskrhymejudgment.suffixbold.datatypefunc.extension.nii.gz_in_file_..data..sub-02..func..sub-02_task-rhymejudgment_bold.nii.gz_metadata_FileSize0.024249723181128502.FileSizeUnitsGB.NumberOfVolumes160/pre_clip/clipped.nii.gz --output clipped_corrected.nii.gz -r"}, "2709209": {"pcpu": 0.0, "pmem": 0.0, "rss": 290816, "vsz": 2965504, "timestamp": "2024-10-08T14:39:15.883501-04:00", "etime": "00:01", "stat": {"S": 2}, "cmd": "/bin/sh -c 3dFWHMx -ShowMeClassicFWHM -combine -detrend -input /workdir/mriqc_wf/anatMRIQC/_entities_subject02.suffixT1w.datatypeanat.extension.nii.gz_in_file_..data..sub-02..anat..sub-02_T1w.nii.gz_metadata_FileSize0.0055013783276081085.FileSizeUnitsGB.NumberOfVolumes1/conform/sub-02_T1w_conformed.nii.gz -mask /workdir/mriqc_wf/anatMRIQC/synthstrip_wf/_entities_subject02.suffixT1w.datatypeanat.extension.nii.gz_in_file_..data..sub-02..anat..sub-02_T1w.nii.gz_metadata_FileSize0.0055013783276081085.FileSizeUnitsGB.NumberOfVolumes1/synthstrip/clipped_corrected_desc-brain_mask.nii.gz -detprefix sub-02_T1w_conformed_detrend -out sub-02_T1w_conformed_subbricks.out > sub-02_T1w_conformed_fwhmx.out"}, "2709210": {"pcpu": 101.0, "pmem": 0.0, "rss": 46493696, "vsz": 56930304, "timestamp": "2024-10-08T14:39:15.883513-04:00", "etime": "00:01", "stat": {"R": 2}, "cmd": "3dFWHMx -ShowMeClassicFWHM -combine -detrend -input /workdir/mriqc_wf/anatMRIQC/_entities_subject02.suffixT1w.datatypeanat.extension.nii.gz_in_file_..data..sub-02..anat..sub-02_T1w.nii.gz_metadata_FileSize0.0055013783276081085.FileSizeUnitsGB.NumberOfVolumes1/conform/sub-02_T1w_conformed.nii.gz -mask /workdir/mriqc_wf/anatMRIQC/synthstrip_wf/_entities_subject02.suffixT1w.datatypeanat.extension.nii.gz_in_file_..data..sub-02..anat..sub-02_T1w.nii.gz_metadata_FileSize0.0055013783276081085.FileSizeUnitsGB.NumberOfVolumes1/synthstrip/clipped_corrected_desc-brain_mask.nii.gz -detprefix sub-02_T1w_conformed_detrend -out 
sub-02_T1w_conformed_subbricks.out"}, "2709211": {"pcpu": 0.0, "pmem": 0.0, "rss": 294912, "vsz": 2965504, "timestamp": "2024-10-08T14:39:27.308154-04:00", "etime": "00:13", "stat": {"S": 13}, "cmd": "/bin/sh -c N4BiasFieldCorrection -d 3 --input-image /workdir/mriqc_wf/anatMRIQC/synthstrip_wf/_entities_subject02.suffixT1w.datatypeanat.extension.nii.gz_in_file_..data..sub-02..anat..sub-02_T1w.nii.gz_metadata_FileSize0.0055013783276081085.FileSizeUnitsGB.NumberOfVolumes1/pre_clip/clipped.nii.gz --convergence [ 50x50x50x50 ] --output [ clipped_corrected.nii.gz, clipped_bias.nii.gz ] --weight-image /workdir/mriqc_wf/anatMRIQC/synthstrip_wf/_entities_subject02.suffixT1w.datatypeanat.extension.nii.gz_in_file_..data..sub-02..anat..sub-02_T1w.nii.gz_metadata_FileSize0.0055013783276081085.FileSizeUnitsGB.NumberOfVolumes1/synthstrip/clipped_corrected_desc-brain_mask.nii.gz"}, "2709212": {"pcpu": 101.0, "pmem": 0.0, "rss": 299380736, "vsz": 368009216, "timestamp": "2024-10-08T14:39:27.308180-04:00", "etime": "00:13", "stat": {"Rl": 13}, "cmd": "N4BiasFieldCorrection -d 3 --input-image /workdir/mriqc_wf/anatMRIQC/synthstrip_wf/_entities_subject02.suffixT1w.datatypeanat.extension.nii.gz_in_file_..data..sub-02..anat..sub-02_T1w.nii.gz_metadata_FileSize0.0055013783276081085.FileSizeUnitsGB.NumberOfVolumes1/pre_clip/clipped.nii.gz --convergence [ 50x50x50x50 ] --output [ clipped_corrected.nii.gz, clipped_bias.nii.gz ] --weight-image /workdir/mriqc_wf/anatMRIQC/synthstrip_wf/_entities_subject02.suffixT1w.datatypeanat.extension.nii.gz_in_file_..data..sub-02..anat..sub-02_T1w.nii.gz_metadata_FileSize0.0055013783276081085.FileSizeUnitsGB.NumberOfVolumes1/synthstrip/clipped_corrected_desc-brain_mask.nii.gz"}, "2703589": {"pcpu": 0.3, "pmem": 0.0, "rss": 251944960, "vsz": 754130944, "timestamp": "2024-10-08T14:40:01.591994-04:00", "etime": "02:43", "stat": {"S": 58}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir 
--no-sub"}, "2703590": {"pcpu": 0.0, "pmem": 0.0, "rss": 247160832, "vsz": 619978752, "timestamp": "2024-10-08T14:40:01.592024-04:00", "etime": "02:43", "stat": {"S": 58}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2710119": {"pcpu": 0.0, "pmem": 0.0, "rss": 299008, "vsz": 2965504, "timestamp": "2024-10-08T14:39:34.575495-04:00", "etime": "00:00", "stat": {"S": 1}, "cmd": "/bin/sh -c 3dTqual -automask /workdir/mriqc_wf/funcMRIQC/fMRI_HMC/_entities_subject02.taskrhymejudgment.suffixbold.datatypefunc.extension.nii.gz_in_file_..data..sub-02..func..sub-02_task-rhymejudgment_bold.nii.gz_metadata_FileSize0.024249723181128502.FileSizeUnitsGB.NumberOfVolumes160/apply_hmc/mapflow/_apply_hmc0/sub-02_task-rhymejudgment_bold_desc-realigned_valid.nii.gz > sub-02_task-rhymejudgment_bold_desc-realigned_valid_tqual"}, "2710120": {"pcpu": 103.0, "pmem": 0.0, "rss": 49901568, "vsz": 57294848, "timestamp": "2024-10-08T14:39:34.575508-04:00", "etime": "00:00", "stat": {"R": 1}, "cmd": "3dTqual -automask /workdir/mriqc_wf/funcMRIQC/fMRI_HMC/_entities_subject02.taskrhymejudgment.suffixbold.datatypefunc.extension.nii.gz_in_file_..data..sub-02..func..sub-02_task-rhymejudgment_bold.nii.gz_metadata_FileSize0.024249723181128502.FileSizeUnitsGB.NumberOfVolumes160/apply_hmc/mapflow/_apply_hmc0/sub-02_task-rhymejudgment_bold_desc-realigned_valid.nii.gz"}, "2703591": {"pcpu": 7.9, "pmem": 0.0, "rss": 276250624, "vsz": 644792320, "timestamp": "2024-10-08T14:40:01.592051-04:00", "etime": "02:43", "stat": {"S": 58}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703594": {"pcpu": 0.2, "pmem": 0.0, "rss": 254566400, "vsz": 623910912, "timestamp": "2024-10-08T14:40:01.592132-04:00", "etime": "02:43", "stat": {"S": 58}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir 
--no-sub"}, "2703592": {"pcpu": 0.4, "pmem": 0.0, "rss": 284741632, "vsz": 658579456, "timestamp": "2024-10-08T14:40:01.592076-04:00", "etime": "02:43", "stat": {"S": 58}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2710124": {"pcpu": 0.0, "pmem": 0.0, "rss": 294912, "vsz": 2965504, "timestamp": "2024-10-08T14:40:01.592829-04:00", "etime": "00:26", "stat": {"S": 26}, "cmd": "/bin/sh -c antsAffineInitializer 3 /workdir/mriqc_wf/anatMRIQC/SpatialNormalization/_entities_subject02.suffixT1w.datatypeanat.extension.nii.gz_in_file_..data..sub-02..anat..sub-02_T1w.nii.gz_metadata_FileSize0.0055013783276081085.FileSizeUnitsGB.NumberOfVolumes1/SpatialNormalization/fixed_masked.nii.gz /workdir/mriqc_wf/anatMRIQC/SpatialNormalization/_entities_subject02.suffixT1w.datatypeanat.extension.nii.gz_in_file_..data..sub-02..anat..sub-02_T1w.nii.gz_metadata_FileSize0.0055013783276081085.FileSizeUnitsGB.NumberOfVolumes1/SpatialNormalization/moving_masked.nii.gz transform.mat 15.000000 0.100000 0 10"}, "2710125": {"pcpu": 101.0, "pmem": 0.0, "rss": 279461888, "vsz": 343027712, "timestamp": "2024-10-08T14:40:01.592855-04:00", "etime": "00:26", "stat": {"Rl": 26}, "cmd": "antsAffineInitializer 3 /workdir/mriqc_wf/anatMRIQC/SpatialNormalization/_entities_subject02.suffixT1w.datatypeanat.extension.nii.gz_in_file_..data..sub-02..anat..sub-02_T1w.nii.gz_metadata_FileSize0.0055013783276081085.FileSizeUnitsGB.NumberOfVolumes1/SpatialNormalization/fixed_masked.nii.gz /workdir/mriqc_wf/anatMRIQC/SpatialNormalization/_entities_subject02.suffixT1w.datatypeanat.extension.nii.gz_in_file_..data..sub-02..anat..sub-02_T1w.nii.gz_metadata_FileSize0.0055013783276081085.FileSizeUnitsGB.NumberOfVolumes1/SpatialNormalization/moving_masked.nii.gz transform.mat 15.000000 0.100000 0 10"}, "2703595": {"pcpu": 1.1, "pmem": 0.0, "rss": 272396288, "vsz": 642064384, "timestamp": "2024-10-08T14:40:01.592159-04:00", "etime": "02:43", 
"stat": {"S": 58}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703593": {"pcpu": 4.2, "pmem": 0.0, "rss": 379789312, "vsz": 869720064, "timestamp": "2024-10-08T14:40:01.592104-04:00", "etime": "02:43", "stat": {"S": 58}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703600": {"pcpu": 0.1, "pmem": 0.0, "rss": 256139264, "vsz": 619978752, "timestamp": "2024-10-08T14:40:01.592289-04:00", "etime": "02:43", "stat": {"S": 57, "R": 1}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703598": {"pcpu": 0.6, "pmem": 0.0, "rss": 254242816, "vsz": 623910912, "timestamp": "2024-10-08T14:40:01.592239-04:00", "etime": "02:43", "stat": {"S": 58}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703599": {"pcpu": 0.6, "pmem": 0.0, "rss": 281186304, "vsz": 641736704, "timestamp": "2024-10-08T14:40:01.592264-04:00", "etime": "02:43", "stat": {"S": 58}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703596": {"pcpu": 0.1, "pmem": 0.0, "rss": 256204800, "vsz": 619978752, "timestamp": "2024-10-08T14:40:01.592186-04:00", "etime": "02:43", "stat": {"S": 58}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703601": {"pcpu": 0.0, "pmem": 0.0, "rss": 249171968, "vsz": 619978752, "timestamp": "2024-10-08T14:40:01.592315-04:00", "etime": "02:43", "stat": {"S": 58}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703603": {"pcpu": 0.1, "pmem": 0.0, "rss": 257048576, "vsz": 619978752, "timestamp": "2024-10-08T14:40:01.592365-04:00", "etime": 
"02:43", "stat": {"S": 58}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703602": {"pcpu": 1.2, "pmem": 0.0, "rss": 304783360, "vsz": 665395200, "timestamp": "2024-10-08T14:40:01.592339-04:00", "etime": "02:43", "stat": {"S": 57, "R": 1}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703607": {"pcpu": 0.5, "pmem": 0.0, "rss": 264478720, "vsz": 625111040, "timestamp": "2024-10-08T14:40:01.592468-04:00", "etime": "02:43", "stat": {"S": 57, "R": 1}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703608": {"pcpu": 0.1, "pmem": 0.0, "rss": 256217088, "vsz": 619978752, "timestamp": "2024-10-08T14:40:01.592493-04:00", "etime": "02:43", "stat": {"S": 57, "R": 1}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703604": {"pcpu": 2.9, "pmem": 0.0, "rss": 401420288, "vsz": 892071936, "timestamp": "2024-10-08T14:40:01.592391-04:00", "etime": "02:43", "stat": {"S": 55, "R": 3}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703610": {"pcpu": 0.1, "pmem": 0.0, "rss": 256155648, "vsz": 619978752, "timestamp": "2024-10-08T14:40:01.592546-04:00", "etime": "02:43", "stat": {"S": 58}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703611": {"pcpu": 0.2, "pmem": 0.0, "rss": 274935808, "vsz": 645844992, "timestamp": "2024-10-08T14:40:01.592572-04:00", "etime": "02:43", "stat": {"S": 58}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703605": {"pcpu": 5.6, "pmem": 0.0, "rss": 360439808, "vsz": 1039851520, "timestamp": 
"2024-10-08T14:40:01.592416-04:00", "etime": "02:43", "stat": {"S": 55, "R": 2, "Sl": 1}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703613": {"pcpu": 0.6, "pmem": 0.0, "rss": 287404032, "vsz": 777101312, "timestamp": "2024-10-08T14:40:01.592622-04:00", "etime": "02:43", "stat": {"S": 57, "R": 1}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703606": {"pcpu": 0.2, "pmem": 0.0, "rss": 260534272, "vsz": 629809152, "timestamp": "2024-10-08T14:40:01.592442-04:00", "etime": "02:43", "stat": {"S": 58}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703615": {"pcpu": 0.6, "pmem": 0.0, "rss": 287191040, "vsz": 777101312, "timestamp": "2024-10-08T14:40:01.592674-04:00", "etime": "02:43", "stat": {"S": 57, "R": 1}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}}, "totals": {"pmem": 0.5, "pcpu": 257.4, "rss": 14800699392, "vsz": 31665618944}, "averages": {"rss": 9991580212.965517, "vsz": 25746872178.758625, "pmem": 0.06379310344827588, "pcpu": 158.93793103448274, "num_samples": 58}} {"timestamp": "2024-10-08T14:41:00.834530-04:00", "num_samples": 57, "processes": {"2703616": {"pcpu": 0.2, "pmem": 0.0, "rss": 277913600, "vsz": 642850816, "timestamp": "2024-10-08T14:41:00.834448-04:00", "etime": "03:42", "stat": {"S": 57}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703617": {"pcpu": 0.1, "pmem": 0.0, "rss": 256159744, "vsz": 619978752, "timestamp": "2024-10-08T14:41:00.834456-04:00", "etime": "03:42", "stat": {"S": 57}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703618": {"pcpu": 0.2, "pmem": 
0.0, "rss": 278585344, "vsz": 642850816, "timestamp": "2024-10-08T14:41:00.834466-04:00", "etime": "03:42", "stat": {"S": 56, "R": 1}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703619": {"pcpu": 0.1, "pmem": 0.0, "rss": 256331776, "vsz": 619978752, "timestamp": "2024-10-08T14:41:00.834475-04:00", "etime": "03:42", "stat": {"S": 57}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703620": {"pcpu": 2.8, "pmem": 0.0, "rss": 399552512, "vsz": 945045504, "timestamp": "2024-10-08T14:41:00.834485-04:00", "etime": "03:42", "stat": {"S": 52, "R": 5}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703615": {"pcpu": 0.5, "pmem": 0.0, "rss": 287191040, "vsz": 777101312, "timestamp": "2024-10-08T14:41:00.834437-04:00", "etime": "03:42", "stat": {"S": 57}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2713096": {"pcpu": 0.0, "pmem": 0.0, "rss": 294912, "vsz": 2965504, "timestamp": "2024-10-08T14:40:32.784514-04:00", "etime": "00:00", "stat": {"S": 1}, "cmd": "/bin/sh -c svgo -i - -o - -q -p 3 --pretty"}, "2713097": {"pcpu": 155.0, "pmem": 0.0, "rss": 84971520, "vsz": 1086967808, "timestamp": "2024-10-08T14:40:32.784540-04:00", "etime": "00:00", "stat": {"Sl": 1}, "cmd": "node /opt/conda/bin/svgo -i - -o - -q -p 3 --pretty"}, "2713335": {"pcpu": 107.0, "pmem": 0.0, "rss": 274026496, "vsz": 343113728, "timestamp": "2024-10-08T14:40:51.497942-04:00", "etime": "00:03", "stat": {"Rl": 4}, "cmd": "antsApplyTransforms --default-value 0 --dimensionality 3 --float 0 --input /templateflow/tpl-MNI152NLin2009cAsym/tpl-MNI152NLin2009cAsym_res-01_label-GM_probseg.nii.gz --interpolation Gaussian --output tpl-MNI152NLin2009cAsym_res-01_label-GM_probseg_trans.nii.gz 
--reference-image /workdir/mriqc_wf/anatMRIQC/synthstrip_wf/_entities_subject02.suffixT1w.datatypeanat.extension.nii.gz_in_file_..data..sub-02..anat..sub-02_T1w.nii.gz_metadata_FileSize0.0055013783276081085.FileSizeUnitsGB.NumberOfVolumes1/post_n4/clipped_corrected.nii.gz --transform /workdir/mriqc_wf/anatMRIQC/SpatialNormalization/_entities_subject02.suffixT1w.datatypeanat.extension.nii.gz_in_file_..data..sub-02..anat..sub-02_T1w.nii.gz_metadata_FileSize0.0055013783276081085.FileSizeUnitsGB.NumberOfVolumes1/SpatialNormalization/ants_t1_to_mniInverseComposite.h5"}, "2713337": {"pcpu": 0.0, "pmem": 0.0, "rss": 294912, "vsz": 2965504, "timestamp": "2024-10-08T14:40:51.497967-04:00", "etime": "00:03", "stat": {"S": 4}, "cmd": "/bin/sh -c antsApplyTransforms --default-value 0 --dimensionality 3 --float 0 --input /templateflow/tpl-MNI152NLin2009cAsym/tpl-MNI152NLin2009cAsym_res-01_label-WM_probseg.nii.gz --interpolation Gaussian --output tpl-MNI152NLin2009cAsym_res-01_label-WM_probseg_trans.nii.gz --reference-image /workdir/mriqc_wf/anatMRIQC/synthstrip_wf/_entities_subject02.suffixT1w.datatypeanat.extension.nii.gz_in_file_..data..sub-02..anat..sub-02_T1w.nii.gz_metadata_FileSize0.0055013783276081085.FileSizeUnitsGB.NumberOfVolumes1/post_n4/clipped_corrected.nii.gz --transform /workdir/mriqc_wf/anatMRIQC/SpatialNormalization/_entities_subject02.suffixT1w.datatypeanat.extension.nii.gz_in_file_..data..sub-02..anat..sub-02_T1w.nii.gz_metadata_FileSize0.0055013783276081085.FileSizeUnitsGB.NumberOfVolumes1/SpatialNormalization/ants_t1_to_mniInverseComposite.h5"}, "2713338": {"pcpu": 105.0, "pmem": 0.0, "rss": 225918976, "vsz": 295923712, "timestamp": "2024-10-08T14:40:51.497993-04:00", "etime": "00:03", "stat": {"Rl": 4}, "cmd": "antsApplyTransforms --default-value 0 --dimensionality 3 --float 0 --input /templateflow/tpl-MNI152NLin2009cAsym/tpl-MNI152NLin2009cAsym_res-01_label-WM_probseg.nii.gz --interpolation Gaussian --output 
tpl-MNI152NLin2009cAsym_res-01_label-WM_probseg_trans.nii.gz --reference-image /workdir/mriqc_wf/anatMRIQC/synthstrip_wf/_entities_subject02.suffixT1w.datatypeanat.extension.nii.gz_in_file_..data..sub-02..anat..sub-02_T1w.nii.gz_metadata_FileSize0.0055013783276081085.FileSizeUnitsGB.NumberOfVolumes1/post_n4/clipped_corrected.nii.gz --transform /workdir/mriqc_wf/anatMRIQC/SpatialNormalization/_entities_subject02.suffixT1w.datatypeanat.extension.nii.gz_in_file_..data..sub-02..anat..sub-02_T1w.nii.gz_metadata_FileSize0.0055013783276081085.FileSizeUnitsGB.NumberOfVolumes1/SpatialNormalization/ants_t1_to_mniInverseComposite.h5"}, "2713127": {"pcpu": 0.0, "pmem": 0.0, "rss": 294912, "vsz": 2965504, "timestamp": "2024-10-08T14:40:34.863245-04:00", "etime": "00:00", "stat": {"S": 1}, "cmd": "/bin/sh -c svgo -i - -o - -q -p 3 --pretty"}, "2713128": {"pcpu": 160.0, "pmem": 0.0, "rss": 83877888, "vsz": 1085657088, "timestamp": "2024-10-08T14:40:34.863273-04:00", "etime": "00:00", "stat": {"Sl": 1}, "cmd": "node /opt/conda/bin/svgo -i - -o - -q -p 3 --pretty"}, "2703613": {"pcpu": 0.5, "pmem": 0.0, "rss": 287404032, "vsz": 777101312, "timestamp": "2024-10-08T14:41:00.834416-04:00", "etime": "03:42", "stat": {"S": 57}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2710124": {"pcpu": 0.0, "pmem": 0.0, "rss": 294912, "vsz": 2965504, "timestamp": "2024-10-08T14:40:17.193351-04:00", "etime": "00:42", "stat": {"S": 15}, "cmd": "/bin/sh -c antsAffineInitializer 3 /workdir/mriqc_wf/anatMRIQC/SpatialNormalization/_entities_subject02.suffixT1w.datatypeanat.extension.nii.gz_in_file_..data..sub-02..anat..sub-02_T1w.nii.gz_metadata_FileSize0.0055013783276081085.FileSizeUnitsGB.NumberOfVolumes1/SpatialNormalization/fixed_masked.nii.gz 
/workdir/mriqc_wf/anatMRIQC/SpatialNormalization/_entities_subject02.suffixT1w.datatypeanat.extension.nii.gz_in_file_..data..sub-02..anat..sub-02_T1w.nii.gz_metadata_FileSize0.0055013783276081085.FileSizeUnitsGB.NumberOfVolumes1/SpatialNormalization/moving_masked.nii.gz transform.mat 15.000000 0.100000 0 10"}, "2713197": {"pcpu": 0.0, "pmem": 0.0, "rss": 303104, "vsz": 2969600, "timestamp": "2024-10-08T14:41:00.834496-04:00", "etime": "00:24", "stat": {"S": 25}, "cmd": "/bin/sh -c antsRegistration --collapse-output-transforms 1 --dimensionality 3 --float 0 --initial-moving-transform [ /workdir/mriqc_wf/funcMRIQC/SpatialNormalization/_entities_subject02.taskrhymejudgment.suffixbold.datatypefunc.extension.nii.gz_in_file_..data..sub-02..func..sub-02_task-rhymejudgment_bold.nii.gz_metadata_FileSize0.024249723181128502.FileSizeUnitsGB.NumberOfVolumes160/EPI2MNI/transform.mat, 0 ] --initialize-transforms-per-stage 0 --interpolation LanczosWindowedSinc --output [ epi_to_mni, epi_to_mni_Warped.nii.gz ] --transform Rigid[ 0.05 ] --metric Mattes[ /templateflow/tpl-MNI152NLin2009cAsym/tpl-MNI152NLin2009cAsym_res-02_desc-fMRIPrep_boldref.nii.gz, /workdir/mriqc_wf/funcMRIQC/SpatialNormalization/_entities_subject02.taskrhymejudgment.suffixbold.datatypefunc.extension.nii.gz_in_file_..data..sub-02..func..sub-02_task-rhymejudgment_bold.nii.gz_metadata_FileSize0.024249723181128502.FileSizeUnitsGB.NumberOfVolumes160/SharpenEPI/sub-02_task-rhymejudgment_bold_desc-realigned_valid_tstat_corrected.nii.gz, 1, 56, Regular, 0.25 ] --convergence [ 10000x1000x100, 1e-06, 20 ] --smoothing-sigmas 4.0x2.0x1.0vox --shrink-factors 4x2x1 --use-histogram-matching 1 --masks [ /templateflow/tpl-MNI152NLin2009cAsym/tpl-MNI152NLin2009cAsym_res-02_desc-brain_mask.nii.gz, 
/workdir/mriqc_wf/funcMRIQC/synthstrip_wf/_entities_subject02.taskrhymejudgment.suffixbold.datatypefunc.extension.nii.gz_in_file_..data..sub-02..func..sub-02_task-rhymejudgment_bold.nii.gz_metadata_FileSize0.024249723181128502.FileSizeUnitsGB.NumberOfVolumes160/synthstrip/clipped_corrected_desc-brain_mask.nii.gz ] --transform Affine[ 0.08 ] --metric Mattes[ /templateflow/tpl-MNI152NLin2009cAsym/tpl-MNI152NLin2009cAsym_res-02_desc-fMRIPrep_boldref.nii.gz, /workdir/mriqc_wf/funcMRIQC/SpatialNormalization/_entities_subject02.taskrhymejudgment.suffixbold.datatypefunc.extension.nii.gz_in_file_..data..sub-02..func..sub-02_task-rhymejudgment_bold.nii.gz_metadata_FileSize0.024249723181128502.FileSizeUnitsGB.NumberOfVolumes160/SharpenEPI/sub-02_task-rhymejudgment_bold_desc-realigned_valid_tstat_corrected.nii.gz, 1, 56, Regular, 0.25 ] --convergence [ 500x250x100, 1e-06, 20 ] --smoothing-sigmas 4.0x2.0x1.0vox --shrink-factors 8x4x2 --use-histogram-matching 1 --masks [ /templateflow/tpl-MNI152NLin2009cAsym/tpl-MNI152NLin2009cAsym_res-02_desc-brain_mask.nii.gz, /workdir/mriqc_wf/funcMRIQC/synthstrip_wf/_entities_subject02.taskrhymejudgment.suffixbold.datatypefunc.extension.nii.gz_in_file_..data..sub-02..func..sub-02_task-rhymejudgment_bold.nii.gz_metadata_FileSize0.024249723181128502.FileSizeUnitsGB.NumberOfVolumes160/synthstrip/clipped_corrected_desc-brain_mask.nii.gz ] --transform SyN[ 0.1, 3.0, 0.0 ] --metric CC[ /templateflow/tpl-MNI152NLin2009cAsym/tpl-MNI152NLin2009cAsym_res-02_desc-fMRIPrep_boldref.nii.gz, /workdir/mriqc_wf/funcMRIQC/SpatialNormalization/_entities_subject02.taskrhymejudgment.suffixbold.datatypefunc.extension.nii.gz_in_file_..data..sub-02..func..sub-02_task-rhymejudgment_bold.nii.gz_metadata_FileSize0.024249723181128502.FileSizeUnitsGB.NumberOfVolumes160/SharpenEPI/sub-02_task-rhymejudgment_bold_desc-realigned_valid_tstat_corrected.nii.gz, 1, 4, None, 1 ] --convergence [ 100x30x20, 1e-06, 10 ] --smoothing-sigmas 3.0x2.0x1.0vox --shrink-factors 8x4x2 
--use-histogram-matching 1 --masks [ /templateflow/tpl-MNI152NLin2009cAsym/tpl-MNI152NLin2009cAsym_res-02_desc-brain_mask.nii.gz, /workdir/mriqc_wf/funcMRIQC/synthstrip_wf/_entities_subject02.taskrhymejudgment.suffixbold.datatypefunc.extension.nii.gz_in_file_..data..sub-02..func..sub-02_task-rhymejudgment_bold.nii.gz_metadata_FileSize0.024249723181128502.FileSizeUnitsGB.NumberOfVolumes160/synthstrip/clipped_corrected_desc-brain_mask.nii.gz ] --winsorize-image-intensities [ 0.005, 0.995 ] --write-composite-transform 1"}, "2713198": {"pcpu": 133.0, "pmem": 0.0, "rss": 231301120, "vsz": 297091072, "timestamp": "2024-10-08T14:41:00.834508-04:00", "etime": "00:24", "stat": {"Rl": 25}, "cmd": "antsRegistration --collapse-output-transforms 1 --dimensionality 3 --float 0 --initial-moving-transform [ /workdir/mriqc_wf/funcMRIQC/SpatialNormalization/_entities_subject02.taskrhymejudgment.suffixbold.datatypefunc.extension.nii.gz_in_file_..data..sub-02..func..sub-02_task-rhymejudgment_bold.nii.gz_metadata_FileSize0.024249723181128502.FileSizeUnitsGB.NumberOfVolumes160/EPI2MNI/transform.mat, 0 ] --initialize-transforms-per-stage 0 --interpolation LanczosWindowedSinc --output [ epi_to_mni, epi_to_mni_Warped.nii.gz ] --transform Rigid[ 0.05 ] --metric Mattes[ /templateflow/tpl-MNI152NLin2009cAsym/tpl-MNI152NLin2009cAsym_res-02_desc-fMRIPrep_boldref.nii.gz, /workdir/mriqc_wf/funcMRIQC/SpatialNormalization/_entities_subject02.taskrhymejudgment.suffixbold.datatypefunc.extension.nii.gz_in_file_..data..sub-02..func..sub-02_task-rhymejudgment_bold.nii.gz_metadata_FileSize0.024249723181128502.FileSizeUnitsGB.NumberOfVolumes160/SharpenEPI/sub-02_task-rhymejudgment_bold_desc-realigned_valid_tstat_corrected.nii.gz, 1, 56, Regular, 0.25 ] --convergence [ 10000x1000x100, 1e-06, 20 ] --smoothing-sigmas 4.0x2.0x1.0vox --shrink-factors 4x2x1 --use-histogram-matching 1 --masks [ /templateflow/tpl-MNI152NLin2009cAsym/tpl-MNI152NLin2009cAsym_res-02_desc-brain_mask.nii.gz, 
/workdir/mriqc_wf/funcMRIQC/synthstrip_wf/_entities_subject02.taskrhymejudgment.suffixbold.datatypefunc.extension.nii.gz_in_file_..data..sub-02..func..sub-02_task-rhymejudgment_bold.nii.gz_metadata_FileSize0.024249723181128502.FileSizeUnitsGB.NumberOfVolumes160/synthstrip/clipped_corrected_desc-brain_mask.nii.gz ] --transform Affine[ 0.08 ] --metric Mattes[ /templateflow/tpl-MNI152NLin2009cAsym/tpl-MNI152NLin2009cAsym_res-02_desc-fMRIPrep_boldref.nii.gz, /workdir/mriqc_wf/funcMRIQC/SpatialNormalization/_entities_subject02.taskrhymejudgment.suffixbold.datatypefunc.extension.nii.gz_in_file_..data..sub-02..func..sub-02_task-rhymejudgment_bold.nii.gz_metadata_FileSize0.024249723181128502.FileSizeUnitsGB.NumberOfVolumes160/SharpenEPI/sub-02_task-rhymejudgment_bold_desc-realigned_valid_tstat_corrected.nii.gz, 1, 56, Regular, 0.25 ] --convergence [ 500x250x100, 1e-06, 20 ] --smoothing-sigmas 4.0x2.0x1.0vox --shrink-factors 8x4x2 --use-histogram-matching 1 --masks [ /templateflow/tpl-MNI152NLin2009cAsym/tpl-MNI152NLin2009cAsym_res-02_desc-brain_mask.nii.gz, /workdir/mriqc_wf/funcMRIQC/synthstrip_wf/_entities_subject02.taskrhymejudgment.suffixbold.datatypefunc.extension.nii.gz_in_file_..data..sub-02..func..sub-02_task-rhymejudgment_bold.nii.gz_metadata_FileSize0.024249723181128502.FileSizeUnitsGB.NumberOfVolumes160/synthstrip/clipped_corrected_desc-brain_mask.nii.gz ] --transform SyN[ 0.1, 3.0, 0.0 ] --metric CC[ /templateflow/tpl-MNI152NLin2009cAsym/tpl-MNI152NLin2009cAsym_res-02_desc-fMRIPrep_boldref.nii.gz, /workdir/mriqc_wf/funcMRIQC/SpatialNormalization/_entities_subject02.taskrhymejudgment.suffixbold.datatypefunc.extension.nii.gz_in_file_..data..sub-02..func..sub-02_task-rhymejudgment_bold.nii.gz_metadata_FileSize0.024249723181128502.FileSizeUnitsGB.NumberOfVolumes160/SharpenEPI/sub-02_task-rhymejudgment_bold_desc-realigned_valid_tstat_corrected.nii.gz, 1, 4, None, 1 ] --convergence [ 100x30x20, 1e-06, 10 ] --smoothing-sigmas 3.0x2.0x1.0vox --shrink-factors 8x4x2 
--use-histogram-matching 1 --masks [ /templateflow/tpl-MNI152NLin2009cAsym/tpl-MNI152NLin2009cAsym_res-02_desc-brain_mask.nii.gz, /workdir/mriqc_wf/funcMRIQC/synthstrip_wf/_entities_subject02.taskrhymejudgment.suffixbold.datatypefunc.extension.nii.gz_in_file_..data..sub-02..func..sub-02_task-rhymejudgment_bold.nii.gz_metadata_FileSize0.024249723181128502.FileSizeUnitsGB.NumberOfVolumes160/synthstrip/clipped_corrected_desc-brain_mask.nii.gz ] --winsorize-image-intensities [ 0.005, 0.995 ] --write-composite-transform 1"}, "2710125": {"pcpu": 100.0, "pmem": 0.0, "rss": 232808448, "vsz": 295841792, "timestamp": "2024-10-08T14:40:17.193377-04:00", "etime": "00:42", "stat": {"Rl": 15}, "cmd": "antsAffineInitializer 3 /workdir/mriqc_wf/anatMRIQC/SpatialNormalization/_entities_subject02.suffixT1w.datatypeanat.extension.nii.gz_in_file_..data..sub-02..anat..sub-02_T1w.nii.gz_metadata_FileSize0.0055013783276081085.FileSizeUnitsGB.NumberOfVolumes1/SpatialNormalization/fixed_masked.nii.gz /workdir/mriqc_wf/anatMRIQC/SpatialNormalization/_entities_subject02.suffixT1w.datatypeanat.extension.nii.gz_in_file_..data..sub-02..anat..sub-02_T1w.nii.gz_metadata_FileSize0.0055013783276081085.FileSizeUnitsGB.NumberOfVolumes1/SpatialNormalization/moving_masked.nii.gz transform.mat 15.000000 0.100000 0 10"}, "2713201": {"pcpu": 0.0, "pmem": 0.0, "rss": 286720, "vsz": 2965504, "timestamp": "2024-10-08T14:40:36.943070-04:00", "etime": "00:00", "stat": {"S": 1}, "cmd": "/bin/sh -c svgo -i - -o - -q -p 3 --pretty"}, "2713202": {"pcpu": 192.0, "pmem": 0.0, "rss": 80900096, "vsz": 1083736064, "timestamp": "2024-10-08T14:40:36.943099-04:00", "etime": "00:00", "stat": {"Rl": 1}, "cmd": "node /opt/conda/bin/svgo -i - -o - -q -p 3 --pretty"}, "2714739": {"pcpu": 0.0, "pmem": 0.0, "rss": 294912, "vsz": 2965504, "timestamp": "2024-10-08T14:41:00.834519-04:00", "etime": "00:00", "stat": {"S": 1}, "cmd": "/bin/sh -c Atropos --image-dimensionality 3 --initialization 
PriorProbabilityImages[3,/workdir/mriqc_wf/anatMRIQC/brain_tissue_segmentation/_entities_subject02.suffixT1w.datatypeanat.extension.nii.gz_in_file_..data..sub-02..anat..sub-02_T1w.nii.gz_metadata_FileSize0.0055013783276081085.FileSizeUnitsGB.NumberOfVolumes1/format_tpm_names/priors_%02d.nii.gz,0.1] --intensity-image /workdir/mriqc_wf/anatMRIQC/HeadMaskWorkflow/_entities_subject02.suffixT1w.datatypeanat.extension.nii.gz_in_file_..data..sub-02..anat..sub-02_T1w.nii.gz_metadata_FileSize0.0055013783276081085.FileSizeUnitsGB.NumberOfVolumes1/apply_mask/clipped_corrected_enhanced_masked.nii.gz --mask-image /workdir/mriqc_wf/anatMRIQC/synthstrip_wf/_entities_subject02.suffixT1w.datatypeanat.extension.nii.gz_in_file_..data..sub-02..anat..sub-02_T1w.nii.gz_metadata_FileSize0.0055013783276081085.FileSizeUnitsGB.NumberOfVolumes1/synthstrip/clipped_corrected_desc-brain_mask.nii.gz --mrf [0.01,1x1x1] --output [segment.nii.gz,segment_%02d.nii.gz] --use-random-seed 1"}, "2714740": {"pcpu": 101.0, "pmem": 0.0, "rss": 247328768, "vsz": 316952576, "timestamp": "2024-10-08T14:41:00.834530-04:00", "etime": "00:00", "stat": {"Rl": 1}, "cmd": "Atropos --image-dimensionality 3 --initialization PriorProbabilityImages[3,/workdir/mriqc_wf/anatMRIQC/brain_tissue_segmentation/_entities_subject02.suffixT1w.datatypeanat.extension.nii.gz_in_file_..data..sub-02..anat..sub-02_T1w.nii.gz_metadata_FileSize0.0055013783276081085.FileSizeUnitsGB.NumberOfVolumes1/format_tpm_names/priors_%02d.nii.gz,0.1] --intensity-image /workdir/mriqc_wf/anatMRIQC/HeadMaskWorkflow/_entities_subject02.suffixT1w.datatypeanat.extension.nii.gz_in_file_..data..sub-02..anat..sub-02_T1w.nii.gz_metadata_FileSize0.0055013783276081085.FileSizeUnitsGB.NumberOfVolumes1/apply_mask/clipped_corrected_enhanced_masked.nii.gz --mask-image 
/workdir/mriqc_wf/anatMRIQC/synthstrip_wf/_entities_subject02.suffixT1w.datatypeanat.extension.nii.gz_in_file_..data..sub-02..anat..sub-02_T1w.nii.gz_metadata_FileSize0.0055013783276081085.FileSizeUnitsGB.NumberOfVolumes1/synthstrip/clipped_corrected_desc-brain_mask.nii.gz --mrf [0.01,1x1x1] --output [segment.nii.gz,segment_%02d.nii.gz] --use-random-seed 1"}, "2703236": {"pcpu": 0.0, "pmem": 0.0, "rss": 21618688, "vsz": 1276940288, "timestamp": "2024-10-08T14:41:00.834063-04:00", "etime": "04:00", "stat": {"Ssl": 57}, "cmd": "Singularity runtime parent"}, "2710153": {"pcpu": 0.0, "pmem": 0.0, "rss": 294912, "vsz": 2965504, "timestamp": "2024-10-08T14:40:15.112456-04:00", "etime": "00:33", "stat": {"S": 13}, "cmd": "/bin/sh -c synthstrip -b 1 -i /workdir/mriqc_wf/funcMRIQC/synthstrip_wf/_entities_subject02.taskrhymejudgment.suffixbold.datatypefunc.extension.nii.gz_in_file_..data..sub-02..func..sub-02_task-rhymejudgment_bold.nii.gz_metadata_FileSize0.024249723181128502.FileSizeUnitsGB.NumberOfVolumes160/pre_n4/clipped_corrected.nii.gz --model /opt/freesurfer/models/synthstrip.1.pt -n 1 -o clipped_corrected_desc-brain.nii.gz -m clipped_corrected_desc-brain_mask.nii.gz"}, "2710154": {"pcpu": 102.0, "pmem": 0.5, "rss": 6236925952, "vsz": 9278046208, "timestamp": "2024-10-08T14:40:15.112483-04:00", "etime": "00:33", "stat": {"R": 13}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/synthstrip -b 1 -i /workdir/mriqc_wf/funcMRIQC/synthstrip_wf/_entities_subject02.taskrhymejudgment.suffixbold.datatypefunc.extension.nii.gz_in_file_..data..sub-02..func..sub-02_task-rhymejudgment_bold.nii.gz_metadata_FileSize0.024249723181128502.FileSizeUnitsGB.NumberOfVolumes160/pre_n4/clipped_corrected.nii.gz --model /opt/freesurfer/models/synthstrip.1.pt -n 1 -o clipped_corrected_desc-brain.nii.gz -m clipped_corrected_desc-brain_mask.nii.gz"}, "2713231": {"pcpu": 0.0, "pmem": 0.0, "rss": 299008, "vsz": 2965504, "timestamp": "2024-10-08T14:40:39.023668-04:00", "etime": "00:00", "stat": 
{"S": 1}, "cmd": "/bin/sh -c svgo -i - -o - -q -p 3 --pretty"}, "2713232": {"pcpu": 190.0, "pmem": 0.0, "rss": 71479296, "vsz": 1051992064, "timestamp": "2024-10-08T14:40:39.023695-04:00", "etime": "00:00", "stat": {"Rl": 1}, "cmd": "node /opt/conda/bin/svgo -i - -o - -q -p 3 --pretty"}, "2703276": {"pcpu": 8.4, "pmem": 0.0, "rss": 310390784, "vsz": 620011520, "timestamp": "2024-10-08T14:41:00.834136-04:00", "etime": "03:59", "stat": {"Sl": 54, "Rl": 3}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2713262": {"pcpu": 0.0, "pmem": 0.0, "rss": 299008, "vsz": 2965504, "timestamp": "2024-10-08T14:40:41.103471-04:00", "etime": "00:00", "stat": {"S": 1}, "cmd": "/bin/sh -c svgo -i - -o - -q -p 3 --pretty"}, "2713263": {"pcpu": 150.0, "pmem": 0.0, "rss": 60555264, "vsz": 1035997184, "timestamp": "2024-10-08T14:40:41.103498-04:00", "etime": "00:00", "stat": {"Rl": 1}, "cmd": "node /opt/conda/bin/svgo -i - -o - -q -p 3 --pretty"}, "2703600": {"pcpu": 0.1, "pmem": 0.0, "rss": 256163840, "vsz": 619978752, "timestamp": "2024-10-08T14:41:00.834278-04:00", "etime": "03:42", "stat": {"S": 57}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2712251": {"pcpu": 0.0, "pmem": 0.0, "rss": 294912, "vsz": 2965504, "timestamp": "2024-10-08T14:40:16.153232-04:00", "etime": "00:00", "stat": {"S": 1}, "cmd": "/bin/sh -c @compute_gcor -mask /workdir/mriqc_wf/funcMRIQC/ComputeIQMs/_entities_subject02.taskrhymejudgment.suffixbold.datatypefunc.extension.nii.gz_in_file_..data..sub-02..func..sub-02_task-rhymejudgment_bold.nii.gz_metadata_FileSize0.024249723181128502.FileSizeUnitsGB.NumberOfVolumes160/gcor/mapflow/_gcor0/clipped_corrected_desc-brain_mask.nii.gz -input 
/workdir/mriqc_wf/funcMRIQC/ComputeIQMs/_entities_subject02.taskrhymejudgment.suffixbold.datatypefunc.extension.nii.gz_in_file_..data..sub-02..func..sub-02_task-rhymejudgment_bold.nii.gz_metadata_FileSize0.024249723181128502.FileSizeUnitsGB.NumberOfVolumes160/gcor/mapflow/_gcor0/sub-02_task-rhymejudgment_bold_desc-realigned_valid.nii.gz"}, "2712252": {"pcpu": 0.0, "pmem": 0.0, "rss": 294912, "vsz": 2965504, "timestamp": "2024-10-08T14:40:16.153259-04:00", "etime": "00:00", "stat": {"S": 1}, "cmd": "/bin/sh -c 3dToutcount -fraction -mask /workdir/mriqc_wf/funcMRIQC/synthstrip_wf/_entities_subject02.taskrhymejudgment.suffixbold.datatypefunc.extension.nii.gz_in_file_..data..sub-02..func..sub-02_task-rhymejudgment_bold.nii.gz_metadata_FileSize0.024249723181128502.FileSizeUnitsGB.NumberOfVolumes160/synthstrip/clipped_corrected_desc-brain_mask.nii.gz -qthr 0.00100 /workdir/mriqc_wf/funcMRIQC/fMRI_HMC/_entities_subject02.taskrhymejudgment.suffixbold.datatypefunc.extension.nii.gz_in_file_..data..sub-02..func..sub-02_task-rhymejudgment_bold.nii.gz_metadata_FileSize0.024249723181128502.FileSizeUnitsGB.NumberOfVolumes160/apply_hmc/mapflow/_apply_hmc0/sub-02_task-rhymejudgment_bold_desc-realigned_valid.nii.gz"}, "2712253": {"pcpu": 0.0, "pmem": 0.0, "rss": 3407872, "vsz": 7073792, "timestamp": "2024-10-08T14:40:16.153286-04:00", "etime": "00:00", "stat": {"S": 1}, "cmd": "/bin/tcsh /opt/afni/@compute_gcor -mask /workdir/mriqc_wf/funcMRIQC/ComputeIQMs/_entities_subject02.taskrhymejudgment.suffixbold.datatypefunc.extension.nii.gz_in_file_..data..sub-02..func..sub-02_task-rhymejudgment_bold.nii.gz_metadata_FileSize0.024249723181128502.FileSizeUnitsGB.NumberOfVolumes160/gcor/mapflow/_gcor0/clipped_corrected_desc-brain_mask.nii.gz -input 
/workdir/mriqc_wf/funcMRIQC/ComputeIQMs/_entities_subject02.taskrhymejudgment.suffixbold.datatypefunc.extension.nii.gz_in_file_..data..sub-02..func..sub-02_task-rhymejudgment_bold.nii.gz_metadata_FileSize0.024249723181128502.FileSizeUnitsGB.NumberOfVolumes160/gcor/mapflow/_gcor0/sub-02_task-rhymejudgment_bold_desc-realigned_valid.nii.gz"}, "2712254": {"pcpu": 111.0, "pmem": 0.0, "rss": 40247296, "vsz": 56922112, "timestamp": "2024-10-08T14:40:16.153315-04:00", "etime": "00:00", "stat": {"R": 1}, "cmd": "3dToutcount -fraction -mask /workdir/mriqc_wf/funcMRIQC/synthstrip_wf/_entities_subject02.taskrhymejudgment.suffixbold.datatypefunc.extension.nii.gz_in_file_..data..sub-02..func..sub-02_task-rhymejudgment_bold.nii.gz_metadata_FileSize0.024249723181128502.FileSizeUnitsGB.NumberOfVolumes160/synthstrip/clipped_corrected_desc-brain_mask.nii.gz -qthr 0.00100 /workdir/mriqc_wf/funcMRIQC/fMRI_HMC/_entities_subject02.taskrhymejudgment.suffixbold.datatypefunc.extension.nii.gz_in_file_..data..sub-02..func..sub-02_task-rhymejudgment_bold.nii.gz_metadata_FileSize0.024249723181128502.FileSizeUnitsGB.NumberOfVolumes160/apply_hmc/mapflow/_apply_hmc0/sub-02_task-rhymejudgment_bold_desc-realigned_valid.nii.gz"}, "2712262": {"pcpu": 123.0, "pmem": 0.0, "rss": 31653888, "vsz": 56725504, "timestamp": "2024-10-08T14:40:16.153342-04:00", "etime": "00:00", "stat": {"R": 1}, "cmd": "3dTnorm -overwrite -polort 0 -prefix tmp.unit /workdir/mriqc_wf/funcMRIQC/ComputeIQMs/_entities_subject02.taskrhymejudgment.suffixbold.datatypefunc.extension.nii.gz_in_file_..data..sub-02..func..sub-02_task-rhymejudgment_bold.nii.gz_metadata_FileSize0.024249723181128502.FileSizeUnitsGB.NumberOfVolumes160/gcor/mapflow/_gcor0/sub-02_task-rhymejudgment_bold_desc-realigned_valid.nii.gz[0..$]"}, "2712265": {"pcpu": 0.0, "pmem": 0.0, "rss": 294912, "vsz": 2965504, "timestamp": "2024-10-08T14:40:34.863193-04:00", "etime": "00:18", "stat": {"S": 18}, "cmd": "/bin/sh -c antsAffineInitializer 3 
/templateflow/tpl-MNI152NLin2009cAsym/tpl-MNI152NLin2009cAsym_res-02_desc-fMRIPrep_boldref.nii.gz /workdir/mriqc_wf/funcMRIQC/SpatialNormalization/_entities_subject02.taskrhymejudgment.suffixbold.datatypefunc.extension.nii.gz_in_file_..data..sub-02..func..sub-02_task-rhymejudgment_bold.nii.gz_metadata_FileSize0.024249723181128502.FileSizeUnitsGB.NumberOfVolumes160/SharpenEPI/sub-02_task-rhymejudgment_bold_desc-realigned_valid_tstat_corrected.nii.gz transform.mat 15.000000 0.100000 0 10"}, "2712266": {"pcpu": 101.0, "pmem": 0.0, "rss": 71499776, "vsz": 134467584, "timestamp": "2024-10-08T14:40:34.863219-04:00", "etime": "00:18", "stat": {"Rl": 18}, "cmd": "antsAffineInitializer 3 /templateflow/tpl-MNI152NLin2009cAsym/tpl-MNI152NLin2009cAsym_res-02_desc-fMRIPrep_boldref.nii.gz /workdir/mriqc_wf/funcMRIQC/SpatialNormalization/_entities_subject02.taskrhymejudgment.suffixbold.datatypefunc.extension.nii.gz_in_file_..data..sub-02..func..sub-02_task-rhymejudgment_bold.nii.gz_metadata_FileSize0.024249723181128502.FileSizeUnitsGB.NumberOfVolumes160/SharpenEPI/sub-02_task-rhymejudgment_bold_desc-realigned_valid_tstat_corrected.nii.gz transform.mat 15.000000 0.100000 0 10"}, "2713294": {"pcpu": 0.0, "pmem": 0.0, "rss": 294912, "vsz": 2965504, "timestamp": "2024-10-08T14:40:43.183999-04:00", "etime": "00:00", "stat": {"S": 1}, "cmd": "/bin/sh -c svgo -i - -o - -q -p 3 --pretty"}, "2713295": {"pcpu": 157.0, "pmem": 0.0, "rss": 66768896, "vsz": 1036488704, "timestamp": "2024-10-08T14:40:43.184027-04:00", "etime": "00:00", "stat": {"Rl": 1}, "cmd": "node /opt/conda/bin/svgo -i - -o - -q -p 3 --pretty"}, "2712276": {"pcpu": 0.0, "pmem": 0.0, "rss": 294912, "vsz": 2965504, "timestamp": "2024-10-08T14:40:28.636652-04:00", "etime": "00:11", "stat": {"S": 11}, "cmd": "/bin/sh -c antsRegistration --collapse-output-transforms 1 --dimensionality 3 --float 0 --initial-moving-transform [ 
/workdir/mriqc_wf/anatMRIQC/SpatialNormalization/_entities_subject02.suffixT1w.datatypeanat.extension.nii.gz_in_file_..data..sub-02..anat..sub-02_T1w.nii.gz_metadata_FileSize0.0055013783276081085.FileSizeUnitsGB.NumberOfVolumes1/SpatialNormalization/transform.mat, 0 ] --initialize-transforms-per-stage 0 --interpolation LanczosWindowedSinc --output [ ants_t1_to_mni, ants_t1_to_mni_Warped.nii.gz ] --transform Rigid[ 1.0 ] --metric Mattes[ /workdir/mriqc_wf/anatMRIQC/SpatialNormalization/_entities_subject02.suffixT1w.datatypeanat.extension.nii.gz_in_file_..data..sub-02..anat..sub-02_T1w.nii.gz_metadata_FileSize0.0055013783276081085.FileSizeUnitsGB.NumberOfVolumes1/SpatialNormalization/fixed_masked.nii.gz, /workdir/mriqc_wf/anatMRIQC/SpatialNormalization/_entities_subject02.suffixT1w.datatypeanat.extension.nii.gz_in_file_..data..sub-02..anat..sub-02_T1w.nii.gz_metadata_FileSize0.0055013783276081085.FileSizeUnitsGB.NumberOfVolumes1/SpatialNormalization/moving_masked.nii.gz, 1, 56, Random, 0.2 ] --convergence [ 20, 1e-07, 15 ] --smoothing-sigmas 4.0vox --shrink-factors 2 --use-histogram-matching 0 --transform Affine[ 1.0 ] --metric Mattes[ /workdir/mriqc_wf/anatMRIQC/SpatialNormalization/_entities_subject02.suffixT1w.datatypeanat.extension.nii.gz_in_file_..data..sub-02..anat..sub-02_T1w.nii.gz_metadata_FileSize0.0055013783276081085.FileSizeUnitsGB.NumberOfVolumes1/SpatialNormalization/fixed_masked.nii.gz, /workdir/mriqc_wf/anatMRIQC/SpatialNormalization/_entities_subject02.suffixT1w.datatypeanat.extension.nii.gz_in_file_..data..sub-02..anat..sub-02_T1w.nii.gz_metadata_FileSize0.0055013783276081085.FileSizeUnitsGB.NumberOfVolumes1/SpatialNormalization/moving_masked.nii.gz, 1, 56, Random, 0.1 ] --convergence [ 15, 1e-08, 5 ] --smoothing-sigmas 2.0vox --shrink-factors 1 --use-histogram-matching 1 --winsorize-image-intensities [ 0.005, 0.995 ] --write-composite-transform 1"}, "2712277": {"pcpu": 105.0, "pmem": 0.0, "rss": 339767296, "vsz": 426717184, "timestamp": 
"2024-10-08T14:40:28.636664-04:00", "etime": "00:11", "stat": {"Rl": 11}, "cmd": "antsRegistration --collapse-output-transforms 1 --dimensionality 3 --float 0 --initial-moving-transform [ /workdir/mriqc_wf/anatMRIQC/SpatialNormalization/_entities_subject02.suffixT1w.datatypeanat.extension.nii.gz_in_file_..data..sub-02..anat..sub-02_T1w.nii.gz_metadata_FileSize0.0055013783276081085.FileSizeUnitsGB.NumberOfVolumes1/SpatialNormalization/transform.mat, 0 ] --initialize-transforms-per-stage 0 --interpolation LanczosWindowedSinc --output [ ants_t1_to_mni, ants_t1_to_mni_Warped.nii.gz ] --transform Rigid[ 1.0 ] --metric Mattes[ /workdir/mriqc_wf/anatMRIQC/SpatialNormalization/_entities_subject02.suffixT1w.datatypeanat.extension.nii.gz_in_file_..data..sub-02..anat..sub-02_T1w.nii.gz_metadata_FileSize0.0055013783276081085.FileSizeUnitsGB.NumberOfVolumes1/SpatialNormalization/fixed_masked.nii.gz, /workdir/mriqc_wf/anatMRIQC/SpatialNormalization/_entities_subject02.suffixT1w.datatypeanat.extension.nii.gz_in_file_..data..sub-02..anat..sub-02_T1w.nii.gz_metadata_FileSize0.0055013783276081085.FileSizeUnitsGB.NumberOfVolumes1/SpatialNormalization/moving_masked.nii.gz, 1, 56, Random, 0.2 ] --convergence [ 20, 1e-07, 15 ] --smoothing-sigmas 4.0vox --shrink-factors 2 --use-histogram-matching 0 --transform Affine[ 1.0 ] --metric Mattes[ /workdir/mriqc_wf/anatMRIQC/SpatialNormalization/_entities_subject02.suffixT1w.datatypeanat.extension.nii.gz_in_file_..data..sub-02..anat..sub-02_T1w.nii.gz_metadata_FileSize0.0055013783276081085.FileSizeUnitsGB.NumberOfVolumes1/SpatialNormalization/fixed_masked.nii.gz, /workdir/mriqc_wf/anatMRIQC/SpatialNormalization/_entities_subject02.suffixT1w.datatypeanat.extension.nii.gz_in_file_..data..sub-02..anat..sub-02_T1w.nii.gz_metadata_FileSize0.0055013783276081085.FileSizeUnitsGB.NumberOfVolumes1/SpatialNormalization/moving_masked.nii.gz, 1, 56, Random, 0.1 ] --convergence [ 15, 1e-08, 5 ] --smoothing-sigmas 2.0vox --shrink-factors 1 
--use-histogram-matching 1 --winsorize-image-intensities [ 0.005, 0.995 ] --write-composite-transform 1"}, "2703589": {"pcpu": 0.2, "pmem": 0.0, "rss": 260804608, "vsz": 754130944, "timestamp": "2024-10-08T14:41:00.834152-04:00", "etime": "03:42", "stat": {"S": 57}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703590": {"pcpu": 0.1, "pmem": 0.0, "rss": 256405504, "vsz": 619978752, "timestamp": "2024-10-08T14:41:00.834165-04:00", "etime": "03:42", "stat": {"S": 56, "R": 1}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703591": {"pcpu": 5.0, "pmem": 0.0, "rss": 279076864, "vsz": 644792320, "timestamp": "2024-10-08T14:41:00.834178-04:00", "etime": "03:42", "stat": {"S": 57}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703592": {"pcpu": 0.2, "pmem": 0.0, "rss": 288190464, "vsz": 661286912, "timestamp": "2024-10-08T14:41:00.834190-04:00", "etime": "03:42", "stat": {"S": 57}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703593": {"pcpu": 2.6, "pmem": 0.0, "rss": 398663680, "vsz": 892694528, "timestamp": "2024-10-08T14:41:00.834203-04:00", "etime": "03:42", "stat": {"S": 57}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703594": {"pcpu": 0.2, "pmem": 0.0, "rss": 260952064, "vsz": 623910912, "timestamp": "2024-10-08T14:41:00.834215-04:00", "etime": "03:42", "stat": {"S": 57}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703595": {"pcpu": 1.2, "pmem": 0.0, "rss": 291581952, "vsz": 780722176, "timestamp": "2024-10-08T14:41:00.834225-04:00", "etime": "03:42", "stat": {"S": 56, "R": 1}, "cmd": 
"/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703596": {"pcpu": 0.3, "pmem": 0.0, "rss": 280797184, "vsz": 641736704, "timestamp": "2024-10-08T14:41:00.834235-04:00", "etime": "03:42", "stat": {"S": 57}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703597": {"pcpu": 0.7, "pmem": 0.0, "rss": 307167232, "vsz": 795873280, "timestamp": "2024-10-08T14:41:00.834246-04:00", "etime": "03:42", "stat": {"S": 57}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703598": {"pcpu": 0.7, "pmem": 0.0, "rss": 454316032, "vsz": 830529536, "timestamp": "2024-10-08T14:41:00.834257-04:00", "etime": "03:42", "stat": {"S": 56, "R": 1}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703599": {"pcpu": 0.5, "pmem": 0.0, "rss": 281190400, "vsz": 641736704, "timestamp": "2024-10-08T14:41:00.834268-04:00", "etime": "03:42", "stat": {"S": 57}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2713328": {"pcpu": 102.0, "pmem": 0.0, "rss": 273989632, "vsz": 343113728, "timestamp": "2024-10-08T14:40:51.497892-04:00", "etime": "00:03", "stat": {"Rl": 4}, "cmd": "antsApplyTransforms --default-value 0 --dimensionality 3 --float 0 --input /templateflow/tpl-MNI152NLin2009cAsym/tpl-MNI152NLin2009cAsym_res-01_label-CSF_probseg.nii.gz --interpolation Gaussian --output tpl-MNI152NLin2009cAsym_res-01_label-CSF_probseg_trans.nii.gz --reference-image /workdir/mriqc_wf/anatMRIQC/synthstrip_wf/_entities_subject02.suffixT1w.datatypeanat.extension.nii.gz_in_file_..data..sub-02..anat..sub-02_T1w.nii.gz_metadata_FileSize0.0055013783276081085.FileSizeUnitsGB.NumberOfVolumes1/post_n4/clipped_corrected.nii.gz --transform 
/workdir/mriqc_wf/anatMRIQC/SpatialNormalization/_entities_subject02.suffixT1w.datatypeanat.extension.nii.gz_in_file_..data..sub-02..anat..sub-02_T1w.nii.gz_metadata_FileSize0.0055013783276081085.FileSizeUnitsGB.NumberOfVolumes1/SpatialNormalization/ants_t1_to_mniInverseComposite.h5"}, "2703601": {"pcpu": 0.3, "pmem": 0.0, "rss": 280797184, "vsz": 641736704, "timestamp": "2024-10-08T14:41:00.834289-04:00", "etime": "03:42", "stat": {"S": 57}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703602": {"pcpu": 1.3, "pmem": 0.0, "rss": 380108800, "vsz": 741875712, "timestamp": "2024-10-08T14:41:00.834300-04:00", "etime": "03:42", "stat": {"S": 56, "R": 1}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2713327": {"pcpu": 0.0, "pmem": 0.0, "rss": 299008, "vsz": 2965504, "timestamp": "2024-10-08T14:40:51.497865-04:00", "etime": "00:03", "stat": {"S": 4}, "cmd": "/bin/sh -c antsApplyTransforms --default-value 0 --dimensionality 3 --float 0 --input /templateflow/tpl-MNI152NLin2009cAsym/tpl-MNI152NLin2009cAsym_res-01_label-CSF_probseg.nii.gz --interpolation Gaussian --output tpl-MNI152NLin2009cAsym_res-01_label-CSF_probseg_trans.nii.gz --reference-image /workdir/mriqc_wf/anatMRIQC/synthstrip_wf/_entities_subject02.suffixT1w.datatypeanat.extension.nii.gz_in_file_..data..sub-02..anat..sub-02_T1w.nii.gz_metadata_FileSize0.0055013783276081085.FileSizeUnitsGB.NumberOfVolumes1/post_n4/clipped_corrected.nii.gz --transform /workdir/mriqc_wf/anatMRIQC/SpatialNormalization/_entities_subject02.suffixT1w.datatypeanat.extension.nii.gz_in_file_..data..sub-02..anat..sub-02_T1w.nii.gz_metadata_FileSize0.0055013783276081085.FileSizeUnitsGB.NumberOfVolumes1/SpatialNormalization/ants_t1_to_mniInverseComposite.h5"}, "2703604": {"pcpu": 2.1, "pmem": 0.0, "rss": 418258944, "vsz": 908521472, "timestamp": "2024-10-08T14:41:00.834321-04:00", 
"etime": "03:42", "stat": {"S": 56, "R": 1}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703605": {"pcpu": 4.6, "pmem": 0.0, "rss": 304173056, "vsz": 1025392640, "timestamp": "2024-10-08T14:41:00.834331-04:00", "etime": "03:42", "stat": {"S": 57}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703606": {"pcpu": 0.5, "pmem": 0.0, "rss": 312930304, "vsz": 682201088, "timestamp": "2024-10-08T14:41:00.834341-04:00", "etime": "03:42", "stat": {"S": 56, "R": 1}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703607": {"pcpu": 4.5, "pmem": 0.0, "rss": 575799296, "vsz": 1149947904, "timestamp": "2024-10-08T14:41:00.834351-04:00", "etime": "03:42", "stat": {"S": 49, "R": 8}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703608": {"pcpu": 0.1, "pmem": 0.0, "rss": 256221184, "vsz": 619978752, "timestamp": "2024-10-08T14:41:00.834361-04:00", "etime": "03:42", "stat": {"S": 57}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703609": {"pcpu": 0.7, "pmem": 0.0, "rss": 274087936, "vsz": 641802240, "timestamp": "2024-10-08T14:41:00.834372-04:00", "etime": "03:42", "stat": {"S": 57}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703610": {"pcpu": 0.1, "pmem": 0.0, "rss": 256155648, "vsz": 619978752, "timestamp": "2024-10-08T14:41:00.834382-04:00", "etime": "03:42", "stat": {"S": 57}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703603": {"pcpu": 0.4, "pmem": 0.0, "rss": 306769920, "vsz": 665395200, "timestamp": 
"2024-10-08T14:41:00.834311-04:00", "etime": "03:42", "stat": {"S": 57}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703611": {"pcpu": 0.1, "pmem": 0.0, "rss": 274935808, "vsz": 645844992, "timestamp": "2024-10-08T14:41:00.834394-04:00", "etime": "03:42", "stat": {"S": 57}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703612": {"pcpu": 0.2, "pmem": 0.0, "rss": 277573632, "vsz": 642850816, "timestamp": "2024-10-08T14:41:00.834406-04:00", "etime": "03:42", "stat": {"S": 57}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703614": {"pcpu": 0.5, "pmem": 0.0, "rss": 287404032, "vsz": 777101312, "timestamp": "2024-10-08T14:41:00.834427-04:00", "etime": "03:42", "stat": {"S": 57}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2713334": {"pcpu": 0.0, "pmem": 0.0, "rss": 294912, "vsz": 2965504, "timestamp": "2024-10-08T14:40:51.497917-04:00", "etime": "00:03", "stat": {"S": 4}, "cmd": "/bin/sh -c antsApplyTransforms --default-value 0 --dimensionality 3 --float 0 --input /templateflow/tpl-MNI152NLin2009cAsym/tpl-MNI152NLin2009cAsym_res-01_label-GM_probseg.nii.gz --interpolation Gaussian --output tpl-MNI152NLin2009cAsym_res-01_label-GM_probseg_trans.nii.gz --reference-image /workdir/mriqc_wf/anatMRIQC/synthstrip_wf/_entities_subject02.suffixT1w.datatypeanat.extension.nii.gz_in_file_..data..sub-02..anat..sub-02_T1w.nii.gz_metadata_FileSize0.0055013783276081085.FileSizeUnitsGB.NumberOfVolumes1/post_n4/clipped_corrected.nii.gz --transform 
/workdir/mriqc_wf/anatMRIQC/SpatialNormalization/_entities_subject02.suffixT1w.datatypeanat.extension.nii.gz_in_file_..data..sub-02..anat..sub-02_T1w.nii.gz_metadata_FileSize0.0055013783276081085.FileSizeUnitsGB.NumberOfVolumes1/SpatialNormalization/ants_t1_to_mniInverseComposite.h5"}}, "totals": {"pmem": 0.5, "pcpu": 446.9, "rss": 15546400768, "vsz": 33200779264}, "averages": {"rss": 10493653207.578947, "vsz": 26227466383.719303, "pmem": 0.07017543859649127, "pcpu": 221.05614035087726, "num_samples": 57}} {"timestamp": "2024-10-08T14:41:54.885641-04:00", "num_samples": 52, "processes": {"2703616": {"pcpu": 0.1, "pmem": 0.0, "rss": 277913600, "vsz": 642850816, "timestamp": "2024-10-08T14:41:53.850937-04:00", "etime": "04:35", "stat": {"S": 50, "R": 1}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703617": {"pcpu": 0.1, "pmem": 0.0, "rss": 256159744, "vsz": 619978752, "timestamp": "2024-10-08T14:41:53.850947-04:00", "etime": "04:35", "stat": {"S": 50, "R": 1}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703618": {"pcpu": 0.1, "pmem": 0.0, "rss": 278585344, "vsz": 642850816, "timestamp": "2024-10-08T14:41:53.850957-04:00", "etime": "04:35", "stat": {"S": 50, "R": 1}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703619": {"pcpu": 2.9, "pmem": 0.0, "rss": 553385984, "vsz": 1101058048, "timestamp": "2024-10-08T14:41:53.850967-04:00", "etime": "04:35", "stat": {"S": 43, "R": 8}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703236": {"pcpu": 0.0, "pmem": 0.0, "rss": 21618688, "vsz": 1276940288, "timestamp": "2024-10-08T14:41:54.885540-04:00", "etime": "04:54", "stat": {"Ssl": 52}, "cmd": "Singularity runtime parent"}, "2703620": {"pcpu": 2.3, 
"pmem": 0.0, "rss": 285806592, "vsz": 783597568, "timestamp": "2024-10-08T14:41:53.850977-04:00", "etime": "04:35", "stat": {"S": 50, "R": 1}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2715905": {"pcpu": 0.0, "pmem": 0.0, "rss": 294912, "vsz": 2965504, "timestamp": "2024-10-08T14:41:33.064887-04:00", "etime": "00:00", "stat": {"S": 1}, "cmd": "/bin/sh -c cwebp -quiet -noalpha -q 80 -o - -- -"}, "2715906": {"pcpu": 106.0, "pmem": 0.0, "rss": 12595200, "vsz": 19554304, "timestamp": "2024-10-08T14:41:33.064913-04:00", "etime": "00:00", "stat": {"R": 1}, "cmd": "cwebp -quiet -noalpha -q 80 -o - -- -"}, "2716431": {"pcpu": 0.0, "pmem": 0.0, "rss": 294912, "vsz": 2965504, "timestamp": "2024-10-08T14:41:43.462285-04:00", "etime": "00:00", "stat": {"S": 1}, "cmd": "/bin/sh -c cwebp -quiet -noalpha -q 80 -o - -- -"}, "2716432": {"pcpu": 150.0, "pmem": 0.0, "rss": 11948032, "vsz": 17530880, "timestamp": "2024-10-08T14:41:43.462312-04:00", "etime": "00:00", "stat": {"R": 1}, "cmd": "cwebp -quiet -noalpha -q 80 -o - -- -"}, "2716452": {"pcpu": 0.0, "pmem": 0.0, "rss": 294912, "vsz": 2965504, "timestamp": "2024-10-08T14:41:46.583150-04:00", "etime": "00:00", "stat": {"S": 1}, "cmd": "/bin/sh -c antsApplyTransforms --default-value 0 --dimensionality 3 --float 1 --input /templateflow/tpl-MNI152NLin2009cAsym/tpl-MNI152NLin2009cAsym_res-01_desc-carpet_dseg.nii.gz --interpolation MultiLabel --output tpl-MNI152NLin2009cAsym_res-01_desc-carpet_dseg_trans.nii.gz --reference-image /workdir/mriqc_wf/funcMRIQC/_entities_subject02.taskrhymejudgment.suffixbold.datatypefunc.extension.nii.gz_in_file_..data..sub-02..func..sub-02_task-rhymejudgment_bold.nii.gz_metadata_FileSize0.024249723181128502.FileSizeUnitsGB.NumberOfVolumes160/mean/mapflow/_mean0/sub-02_task-rhymejudgment_bold_desc-realigned_valid_tstat.nii.gz --transform 
/workdir/mriqc_wf/funcMRIQC/SpatialNormalization/_entities_subject02.taskrhymejudgment.suffixbold.datatypefunc.extension.nii.gz_in_file_..data..sub-02..func..sub-02_task-rhymejudgment_bold.nii.gz_metadata_FileSize0.024249723181128502.FileSizeUnitsGB.NumberOfVolumes160/EPI2MNI/epi_to_mniInverseComposite.h5"}, "2716453": {"pcpu": 104.0, "pmem": 0.0, "rss": 110809088, "vsz": 183312384, "timestamp": "2024-10-08T14:41:46.583176-04:00", "etime": "00:00", "stat": {"Rl": 1}, "cmd": "antsApplyTransforms --default-value 0 --dimensionality 3 --float 1 --input /templateflow/tpl-MNI152NLin2009cAsym/tpl-MNI152NLin2009cAsym_res-01_desc-carpet_dseg.nii.gz --interpolation MultiLabel --output tpl-MNI152NLin2009cAsym_res-01_desc-carpet_dseg_trans.nii.gz --reference-image /workdir/mriqc_wf/funcMRIQC/_entities_subject02.taskrhymejudgment.suffixbold.datatypefunc.extension.nii.gz_in_file_..data..sub-02..func..sub-02_task-rhymejudgment_bold.nii.gz_metadata_FileSize0.024249723181128502.FileSizeUnitsGB.NumberOfVolumes160/mean/mapflow/_mean0/sub-02_task-rhymejudgment_bold_desc-realigned_valid_tstat.nii.gz --transform /workdir/mriqc_wf/funcMRIQC/SpatialNormalization/_entities_subject02.taskrhymejudgment.suffixbold.datatypefunc.extension.nii.gz_in_file_..data..sub-02..func..sub-02_task-rhymejudgment_bold.nii.gz_metadata_FileSize0.024249723181128502.FileSizeUnitsGB.NumberOfVolumes160/EPI2MNI/epi_to_mniInverseComposite.h5"}, "2703276": {"pcpu": 7.7, "pmem": 0.0, "rss": 339206144, "vsz": 649052160, "timestamp": "2024-10-08T14:41:54.885641-04:00", "etime": "04:53", "stat": {"Rl": 2, "Sl": 49, "R": 1}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703613": {"pcpu": 0.4, "pmem": 0.0, "rss": 287678464, "vsz": 777101312, "timestamp": "2024-10-08T14:41:53.850909-04:00", "etime": "04:35", "stat": {"S": 50, "R": 1}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w 
/workdir --no-sub"}, "2703599": {"pcpu": 0.5, "pmem": 0.0, "rss": 281190400, "vsz": 641736704, "timestamp": "2024-10-08T14:41:53.850773-04:00", "etime": "04:35", "stat": {"S": 50, "R": 1}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703611": {"pcpu": 0.3, "pmem": 0.0, "rss": 331112448, "vsz": 716169216, "timestamp": "2024-10-08T14:41:53.850890-04:00", "etime": "04:35", "stat": {"S": 49, "R": 2}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2716399": {"pcpu": 0.0, "pmem": 0.0, "rss": 294912, "vsz": 2965504, "timestamp": "2024-10-08T14:41:41.385810-04:00", "etime": "00:00", "stat": {"S": 1}, "cmd": "/bin/sh -c cwebp -quiet -noalpha -q 80 -o - -- -"}, "2703609": {"pcpu": 1.6, "pmem": 0.0, "rss": 444018688, "vsz": 945426432, "timestamp": "2024-10-08T14:41:53.850872-04:00", "etime": "04:35", "stat": {"S": 47, "R": 4}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2716364": {"pcpu": 0.0, "pmem": 0.0, "rss": 299008, "vsz": 2965504, "timestamp": "2024-10-08T14:41:39.305192-04:00", "etime": "00:00", "stat": {"S": 1}, "cmd": "/bin/sh -c cwebp -quiet -noalpha -q 80 -o - -- -"}, "2716365": {"pcpu": 150.0, "pmem": 0.0, "rss": 17743872, "vsz": 24289280, "timestamp": "2024-10-08T14:41:39.305218-04:00", "etime": "00:00", "stat": {"R": 1}, "cmd": "cwebp -quiet -noalpha -q 80 -o - -- -"}, "2703607": {"pcpu": 4.1, "pmem": 0.0, "rss": 319848448, "vsz": 804806656, "timestamp": "2024-10-08T14:41:53.850848-04:00", "etime": "04:35", "stat": {"S": 50, "R": 1}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2716254": {"pcpu": 0.0, "pmem": 0.0, "rss": 294912, "vsz": 2965504, "timestamp": "2024-10-08T14:41:37.227500-04:00", "etime": "00:00", "stat": {"S": 1}, "cmd": "/bin/sh 
-c cwebp -quiet -noalpha -q 80 -o - -- -"}, "2716255": {"pcpu": 116.0, "pmem": 0.0, "rss": 12926976, "vsz": 19673088, "timestamp": "2024-10-08T14:41:37.227506-04:00", "etime": "00:00", "stat": {"R": 1}, "cmd": "cwebp -quiet -noalpha -q 80 -o - -- -"}, "2703589": {"pcpu": 0.2, "pmem": 0.0, "rss": 260804608, "vsz": 754130944, "timestamp": "2024-10-08T14:41:53.850671-04:00", "etime": "04:35", "stat": {"S": 50, "R": 1}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703590": {"pcpu": 0.1, "pmem": 0.0, "rss": 256405504, "vsz": 619978752, "timestamp": "2024-10-08T14:41:53.850682-04:00", "etime": "04:35", "stat": {"S": 50, "R": 1}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703591": {"pcpu": 3.6, "pmem": 0.0, "rss": 279076864, "vsz": 644792320, "timestamp": "2024-10-08T14:41:53.850692-04:00", "etime": "04:35", "stat": {"S": 50, "R": 1}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703592": {"pcpu": 0.2, "pmem": 0.0, "rss": 288190464, "vsz": 661286912, "timestamp": "2024-10-08T14:41:53.850703-04:00", "etime": "04:35", "stat": {"S": 50, "R": 1}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703593": {"pcpu": 1.9, "pmem": 0.0, "rss": 398663680, "vsz": 892694528, "timestamp": "2024-10-08T14:41:53.850715-04:00", "etime": "04:35", "stat": {"S": 50, "R": 1}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703594": {"pcpu": 0.1, "pmem": 0.0, "rss": 260952064, "vsz": 623910912, "timestamp": "2024-10-08T14:41:53.850725-04:00", "etime": "04:35", "stat": {"S": 50, "R": 1}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir 
--no-sub"}, "2703595": {"pcpu": 0.9, "pmem": 0.0, "rss": 291581952, "vsz": 780722176, "timestamp": "2024-10-08T14:41:53.850735-04:00", "etime": "04:35", "stat": {"S": 50, "R": 1}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703596": {"pcpu": 0.3, "pmem": 0.0, "rss": 280797184, "vsz": 641736704, "timestamp": "2024-10-08T14:41:53.850744-04:00", "etime": "04:35", "stat": {"S": 50, "R": 1}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2713197": {"pcpu": 0.0, "pmem": 0.0, "rss": 303104, "vsz": 2969600, "timestamp": "2024-10-08T14:41:29.949200-04:00", "etime": "00:54", "stat": {"S": 28}, "cmd": "/bin/sh -c antsRegistration --collapse-output-transforms 1 --dimensionality 3 --float 0 --initial-moving-transform [ /workdir/mriqc_wf/funcMRIQC/SpatialNormalization/_entities_subject02.taskrhymejudgment.suffixbold.datatypefunc.extension.nii.gz_in_file_..data..sub-02..func..sub-02_task-rhymejudgment_bold.nii.gz_metadata_FileSize0.024249723181128502.FileSizeUnitsGB.NumberOfVolumes160/EPI2MNI/transform.mat, 0 ] --initialize-transforms-per-stage 0 --interpolation LanczosWindowedSinc --output [ epi_to_mni, epi_to_mni_Warped.nii.gz ] --transform Rigid[ 0.05 ] --metric Mattes[ /templateflow/tpl-MNI152NLin2009cAsym/tpl-MNI152NLin2009cAsym_res-02_desc-fMRIPrep_boldref.nii.gz, /workdir/mriqc_wf/funcMRIQC/SpatialNormalization/_entities_subject02.taskrhymejudgment.suffixbold.datatypefunc.extension.nii.gz_in_file_..data..sub-02..func..sub-02_task-rhymejudgment_bold.nii.gz_metadata_FileSize0.024249723181128502.FileSizeUnitsGB.NumberOfVolumes160/SharpenEPI/sub-02_task-rhymejudgment_bold_desc-realigned_valid_tstat_corrected.nii.gz, 1, 56, Regular, 0.25 ] --convergence [ 10000x1000x100, 1e-06, 20 ] --smoothing-sigmas 4.0x2.0x1.0vox --shrink-factors 4x2x1 --use-histogram-matching 1 --masks [ 
/templateflow/tpl-MNI152NLin2009cAsym/tpl-MNI152NLin2009cAsym_res-02_desc-brain_mask.nii.gz, /workdir/mriqc_wf/funcMRIQC/synthstrip_wf/_entities_subject02.taskrhymejudgment.suffixbold.datatypefunc.extension.nii.gz_in_file_..data..sub-02..func..sub-02_task-rhymejudgment_bold.nii.gz_metadata_FileSize0.024249723181128502.FileSizeUnitsGB.NumberOfVolumes160/synthstrip/clipped_corrected_desc-brain_mask.nii.gz ] --transform Affine[ 0.08 ] --metric Mattes[ /templateflow/tpl-MNI152NLin2009cAsym/tpl-MNI152NLin2009cAsym_res-02_desc-fMRIPrep_boldref.nii.gz, /workdir/mriqc_wf/funcMRIQC/SpatialNormalization/_entities_subject02.taskrhymejudgment.suffixbold.datatypefunc.extension.nii.gz_in_file_..data..sub-02..func..sub-02_task-rhymejudgment_bold.nii.gz_metadata_FileSize0.024249723181128502.FileSizeUnitsGB.NumberOfVolumes160/SharpenEPI/sub-02_task-rhymejudgment_bold_desc-realigned_valid_tstat_corrected.nii.gz, 1, 56, Regular, 0.25 ] --convergence [ 500x250x100, 1e-06, 20 ] --smoothing-sigmas 4.0x2.0x1.0vox --shrink-factors 8x4x2 --use-histogram-matching 1 --masks [ /templateflow/tpl-MNI152NLin2009cAsym/tpl-MNI152NLin2009cAsym_res-02_desc-brain_mask.nii.gz, /workdir/mriqc_wf/funcMRIQC/synthstrip_wf/_entities_subject02.taskrhymejudgment.suffixbold.datatypefunc.extension.nii.gz_in_file_..data..sub-02..func..sub-02_task-rhymejudgment_bold.nii.gz_metadata_FileSize0.024249723181128502.FileSizeUnitsGB.NumberOfVolumes160/synthstrip/clipped_corrected_desc-brain_mask.nii.gz ] --transform SyN[ 0.1, 3.0, 0.0 ] --metric CC[ /templateflow/tpl-MNI152NLin2009cAsym/tpl-MNI152NLin2009cAsym_res-02_desc-fMRIPrep_boldref.nii.gz, /workdir/mriqc_wf/funcMRIQC/SpatialNormalization/_entities_subject02.taskrhymejudgment.suffixbold.datatypefunc.extension.nii.gz_in_file_..data..sub-02..func..sub-02_task-rhymejudgment_bold.nii.gz_metadata_FileSize0.024249723181128502.FileSizeUnitsGB.NumberOfVolumes160/SharpenEPI/sub-02_task-rhymejudgment_bold_desc-realigned_valid_tstat_corrected.nii.gz, 1, 4, None, 1 ] 
--convergence [ 100x30x20, 1e-06, 10 ] --smoothing-sigmas 3.0x2.0x1.0vox --shrink-factors 8x4x2 --use-histogram-matching 1 --masks [ /templateflow/tpl-MNI152NLin2009cAsym/tpl-MNI152NLin2009cAsym_res-02_desc-brain_mask.nii.gz, /workdir/mriqc_wf/funcMRIQC/synthstrip_wf/_entities_subject02.taskrhymejudgment.suffixbold.datatypefunc.extension.nii.gz_in_file_..data..sub-02..func..sub-02_task-rhymejudgment_bold.nii.gz_metadata_FileSize0.024249723181128502.FileSizeUnitsGB.NumberOfVolumes160/synthstrip/clipped_corrected_desc-brain_mask.nii.gz ] --winsorize-image-intensities [ 0.005, 0.995 ] --write-composite-transform 1"}, "2703597": {"pcpu": 0.5, "pmem": 0.0, "rss": 307167232, "vsz": 795873280, "timestamp": "2024-10-08T14:41:53.850753-04:00", "etime": "04:35", "stat": {"S": 50, "R": 1}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2713198": {"pcpu": 100.0, "pmem": 0.0, "rss": 231301120, "vsz": 297091072, "timestamp": "2024-10-08T14:41:29.949227-04:00", "etime": "00:54", "stat": {"Rl": 28}, "cmd": "antsRegistration --collapse-output-transforms 1 --dimensionality 3 --float 0 --initial-moving-transform [ /workdir/mriqc_wf/funcMRIQC/SpatialNormalization/_entities_subject02.taskrhymejudgment.suffixbold.datatypefunc.extension.nii.gz_in_file_..data..sub-02..func..sub-02_task-rhymejudgment_bold.nii.gz_metadata_FileSize0.024249723181128502.FileSizeUnitsGB.NumberOfVolumes160/EPI2MNI/transform.mat, 0 ] --initialize-transforms-per-stage 0 --interpolation LanczosWindowedSinc --output [ epi_to_mni, epi_to_mni_Warped.nii.gz ] --transform Rigid[ 0.05 ] --metric Mattes[ /templateflow/tpl-MNI152NLin2009cAsym/tpl-MNI152NLin2009cAsym_res-02_desc-fMRIPrep_boldref.nii.gz, 
/workdir/mriqc_wf/funcMRIQC/SpatialNormalization/_entities_subject02.taskrhymejudgment.suffixbold.datatypefunc.extension.nii.gz_in_file_..data..sub-02..func..sub-02_task-rhymejudgment_bold.nii.gz_metadata_FileSize0.024249723181128502.FileSizeUnitsGB.NumberOfVolumes160/SharpenEPI/sub-02_task-rhymejudgment_bold_desc-realigned_valid_tstat_corrected.nii.gz, 1, 56, Regular, 0.25 ] --convergence [ 10000x1000x100, 1e-06, 20 ] --smoothing-sigmas 4.0x2.0x1.0vox --shrink-factors 4x2x1 --use-histogram-matching 1 --masks [ /templateflow/tpl-MNI152NLin2009cAsym/tpl-MNI152NLin2009cAsym_res-02_desc-brain_mask.nii.gz, /workdir/mriqc_wf/funcMRIQC/synthstrip_wf/_entities_subject02.taskrhymejudgment.suffixbold.datatypefunc.extension.nii.gz_in_file_..data..sub-02..func..sub-02_task-rhymejudgment_bold.nii.gz_metadata_FileSize0.024249723181128502.FileSizeUnitsGB.NumberOfVolumes160/synthstrip/clipped_corrected_desc-brain_mask.nii.gz ] --transform Affine[ 0.08 ] --metric Mattes[ /templateflow/tpl-MNI152NLin2009cAsym/tpl-MNI152NLin2009cAsym_res-02_desc-fMRIPrep_boldref.nii.gz, /workdir/mriqc_wf/funcMRIQC/SpatialNormalization/_entities_subject02.taskrhymejudgment.suffixbold.datatypefunc.extension.nii.gz_in_file_..data..sub-02..func..sub-02_task-rhymejudgment_bold.nii.gz_metadata_FileSize0.024249723181128502.FileSizeUnitsGB.NumberOfVolumes160/SharpenEPI/sub-02_task-rhymejudgment_bold_desc-realigned_valid_tstat_corrected.nii.gz, 1, 56, Regular, 0.25 ] --convergence [ 500x250x100, 1e-06, 20 ] --smoothing-sigmas 4.0x2.0x1.0vox --shrink-factors 8x4x2 --use-histogram-matching 1 --masks [ /templateflow/tpl-MNI152NLin2009cAsym/tpl-MNI152NLin2009cAsym_res-02_desc-brain_mask.nii.gz, /workdir/mriqc_wf/funcMRIQC/synthstrip_wf/_entities_subject02.taskrhymejudgment.suffixbold.datatypefunc.extension.nii.gz_in_file_..data..sub-02..func..sub-02_task-rhymejudgment_bold.nii.gz_metadata_FileSize0.024249723181128502.FileSizeUnitsGB.NumberOfVolumes160/synthstrip/clipped_corrected_desc-brain_mask.nii.gz ] 
--transform SyN[ 0.1, 3.0, 0.0 ] --metric CC[ /templateflow/tpl-MNI152NLin2009cAsym/tpl-MNI152NLin2009cAsym_res-02_desc-fMRIPrep_boldref.nii.gz, /workdir/mriqc_wf/funcMRIQC/SpatialNormalization/_entities_subject02.taskrhymejudgment.suffixbold.datatypefunc.extension.nii.gz_in_file_..data..sub-02..func..sub-02_task-rhymejudgment_bold.nii.gz_metadata_FileSize0.024249723181128502.FileSizeUnitsGB.NumberOfVolumes160/SharpenEPI/sub-02_task-rhymejudgment_bold_desc-realigned_valid_tstat_corrected.nii.gz, 1, 4, None, 1 ] --convergence [ 100x30x20, 1e-06, 10 ] --smoothing-sigmas 3.0x2.0x1.0vox --shrink-factors 8x4x2 --use-histogram-matching 1 --masks [ /templateflow/tpl-MNI152NLin2009cAsym/tpl-MNI152NLin2009cAsym_res-02_desc-brain_mask.nii.gz, /workdir/mriqc_wf/funcMRIQC/synthstrip_wf/_entities_subject02.taskrhymejudgment.suffixbold.datatypefunc.extension.nii.gz_in_file_..data..sub-02..func..sub-02_task-rhymejudgment_bold.nii.gz_metadata_FileSize0.024249723181128502.FileSizeUnitsGB.NumberOfVolumes160/synthstrip/clipped_corrected_desc-brain_mask.nii.gz ] --winsorize-image-intensities [ 0.005, 0.995 ] --write-composite-transform 1"}, "2703600": {"pcpu": 0.1, "pmem": 0.0, "rss": 256163840, "vsz": 619978752, "timestamp": "2024-10-08T14:41:53.850783-04:00", "etime": "04:35", "stat": {"S": 50, "R": 1}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703601": {"pcpu": 0.3, "pmem": 0.0, "rss": 280797184, "vsz": 641736704, "timestamp": "2024-10-08T14:41:53.850792-04:00", "etime": "04:35", "stat": {"S": 50, "R": 1}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2716400": {"pcpu": 166.0, "pmem": 0.0, "rss": 11165696, "vsz": 16908288, "timestamp": "2024-10-08T14:41:41.385839-04:00", "etime": "00:00", "stat": {"R": 1}, "cmd": "cwebp -quiet -noalpha -q 80 -o - -- -"}, "2714739": {"pcpu": 0.0, "pmem": 0.0, "rss": 294912, "vsz": 
2965504, "timestamp": "2024-10-08T14:41:41.385755-04:00", "etime": "00:41", "stat": {"S": 39}, "cmd": "/bin/sh -c Atropos --image-dimensionality 3 --initialization PriorProbabilityImages[3,/workdir/mriqc_wf/anatMRIQC/brain_tissue_segmentation/_entities_subject02.suffixT1w.datatypeanat.extension.nii.gz_in_file_..data..sub-02..anat..sub-02_T1w.nii.gz_metadata_FileSize0.0055013783276081085.FileSizeUnitsGB.NumberOfVolumes1/format_tpm_names/priors_%02d.nii.gz,0.1] --intensity-image /workdir/mriqc_wf/anatMRIQC/HeadMaskWorkflow/_entities_subject02.suffixT1w.datatypeanat.extension.nii.gz_in_file_..data..sub-02..anat..sub-02_T1w.nii.gz_metadata_FileSize0.0055013783276081085.FileSizeUnitsGB.NumberOfVolumes1/apply_mask/clipped_corrected_enhanced_masked.nii.gz --mask-image /workdir/mriqc_wf/anatMRIQC/synthstrip_wf/_entities_subject02.suffixT1w.datatypeanat.extension.nii.gz_in_file_..data..sub-02..anat..sub-02_T1w.nii.gz_metadata_FileSize0.0055013783276081085.FileSizeUnitsGB.NumberOfVolumes1/synthstrip/clipped_corrected_desc-brain_mask.nii.gz --mrf [0.01,1x1x1] --output [segment.nii.gz,segment_%02d.nii.gz] --use-random-seed 1"}, "2714740": {"pcpu": 100.0, "pmem": 0.0, "rss": 722923520, "vsz": 797650944, "timestamp": "2024-10-08T14:41:41.385783-04:00", "etime": "00:41", "stat": {"Rl": 39}, "cmd": "Atropos --image-dimensionality 3 --initialization PriorProbabilityImages[3,/workdir/mriqc_wf/anatMRIQC/brain_tissue_segmentation/_entities_subject02.suffixT1w.datatypeanat.extension.nii.gz_in_file_..data..sub-02..anat..sub-02_T1w.nii.gz_metadata_FileSize0.0055013783276081085.FileSizeUnitsGB.NumberOfVolumes1/format_tpm_names/priors_%02d.nii.gz,0.1] --intensity-image /workdir/mriqc_wf/anatMRIQC/HeadMaskWorkflow/_entities_subject02.suffixT1w.datatypeanat.extension.nii.gz_in_file_..data..sub-02..anat..sub-02_T1w.nii.gz_metadata_FileSize0.0055013783276081085.FileSizeUnitsGB.NumberOfVolumes1/apply_mask/clipped_corrected_enhanced_masked.nii.gz --mask-image 
/workdir/mriqc_wf/anatMRIQC/synthstrip_wf/_entities_subject02.suffixT1w.datatypeanat.extension.nii.gz_in_file_..data..sub-02..anat..sub-02_T1w.nii.gz_metadata_FileSize0.0055013783276081085.FileSizeUnitsGB.NumberOfVolumes1/synthstrip/clipped_corrected_desc-brain_mask.nii.gz --mrf [0.01,1x1x1] --output [segment.nii.gz,segment_%02d.nii.gz] --use-random-seed 1"}, "2703603": {"pcpu": 0.4, "pmem": 0.0, "rss": 306769920, "vsz": 665395200, "timestamp": "2024-10-08T14:41:53.850812-04:00", "etime": "04:35", "stat": {"S": 50, "R": 1}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703602": {"pcpu": 1.3, "pmem": 0.0, "rss": 299393024, "vsz": 659300352, "timestamp": "2024-10-08T14:41:53.850802-04:00", "etime": "04:35", "stat": {"S": 50, "R": 1}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703598": {"pcpu": 0.6, "pmem": 0.0, "rss": 301215744, "vsz": 663556096, "timestamp": "2024-10-08T14:41:53.850763-04:00", "etime": "04:35", "stat": {"S": 50, "R": 1}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703608": {"pcpu": 0.2, "pmem": 0.0, "rss": 256221184, "vsz": 619978752, "timestamp": "2024-10-08T14:41:53.850859-04:00", "etime": "04:35", "stat": {"S": 50, "R": 1}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703605": {"pcpu": 3.7, "pmem": 0.0, "rss": 304173056, "vsz": 1025392640, "timestamp": "2024-10-08T14:41:53.850829-04:00", "etime": "04:35", "stat": {"S": 50, "R": 1}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703610": {"pcpu": 4.9, "pmem": 0.0, "rss": 394985472, "vsz": 791846912, "timestamp": "2024-10-08T14:41:53.850881-04:00", "etime": "04:35", "stat": {"S": 38, "R": 13}, 
"cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703606": {"pcpu": 1.7, "pmem": 0.0, "rss": 325894144, "vsz": 694915072, "timestamp": "2024-10-08T14:41:53.850839-04:00", "etime": "04:35", "stat": {"R": 3, "S": 48}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703612": {"pcpu": 0.7, "pmem": 0.0, "rss": 483627008, "vsz": 888832000, "timestamp": "2024-10-08T14:41:53.850900-04:00", "etime": "04:35", "stat": {"S": 49, "R": 2}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703604": {"pcpu": 2.0, "pmem": 0.0, "rss": 371105792, "vsz": 861331456, "timestamp": "2024-10-08T14:41:53.850821-04:00", "etime": "04:35", "stat": {"S": 50, "R": 1}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703614": {"pcpu": 2.4, "pmem": 0.0, "rss": 874360832, "vsz": 1414688768, "timestamp": "2024-10-08T14:41:53.850918-04:00", "etime": "04:35", "stat": {"S": 45, "R": 6}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}, "2703615": {"pcpu": 0.4, "pmem": 0.0, "rss": 287191040, "vsz": 777101312, "timestamp": "2024-10-08T14:41:53.850928-04:00", "etime": "04:35", "stat": {"S": 50, "R": 1}, "cmd": "/opt/conda/bin/python3.11 /opt/conda/bin/mriqc /data /out participant --participant-label 02 -w /workdir --no-sub"}}, "totals": {"pmem": 0.0, "pcpu": 304.0, "rss": 10619633664, "vsz": 25905614848}, "averages": {"rss": 9945027111.384615, "vsz": 24636519345.230774, "pmem": 0.0, "pcpu": 180.99423076923077, "num_samples": 52}} ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1741097484.0 con_duct-0.11.0/test/data/test_script.py0000755000175100001660000000304614761605014017611 
0ustar00runnerdocker
#!/usr/bin/env python3
from __future__ import annotations

import argparse
import sys
import time


def consume_cpu(duration: int, load: int) -> None:
    """Function to consume CPU proportional to 'load' for 'duration' seconds"""
    end_time = time.time() + duration
    while time.time() < end_time:
        for _ in range(load):
            pass  # Busy-wait


def consume_memory(size: int) -> bytearray:
    """Function to consume amount of memory specified by 'size' in megabytes"""
    # Create a list of size MB
    bytes_in_mb = 1024 * 1024
    return bytearray(size * bytes_in_mb)


def main(duration: int, cpu_load: int, memory_size: int) -> None:
    print("this is of test of STDOUT")
    print("this is of test of STDERR", file=sys.stderr)
    _mem_hold = consume_memory(memory_size)  # noqa
    consume_cpu(duration, cpu_load)
    print(
        f"Test completed. Consumed {memory_size} MB for {duration} seconds with CPU load factor {cpu_load}."
    )


if __name__ == "__main__":
    parser = argparse.ArgumentParser(
        description="Test script to consume CPU and memory."
    )
    parser.add_argument(
        "--duration", type=int, default=60, help="Duration to run the test in seconds."
    )
    parser.add_argument(
        "--cpu-load", type=int, default=10000, help="Load factor to simulate CPU usage."
    )
    parser.add_argument(
        "--memory-size",
        type=int,
        default=10,
        help="Amount of memory to allocate in MB.",
    )
    args = parser.parse_args()
    main(args.duration, args.cpu_load, args.memory_size)
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1741097484.0
con_duct-0.11.0/test/test_aggregation.py0000644000175100001660000003451614761605014017666 0ustar00runnerdocker
from collections import Counter
from copy import deepcopy
from typing import cast
from unittest import mock

import pytest

from con_duct.__main__ import EXECUTION_SUMMARY_FORMAT, ProcessStats, Report, Sample

stat0 = ProcessStats(
    pcpu=0.0, pmem=0, rss=0, vsz=0,
    timestamp="2024-06-11T10:09:37-04:00", etime="00:00",
    cmd="cmd 0", stat=Counter(["stat0"]),
)
stat1 = ProcessStats(
    pcpu=1.0, pmem=1.0, rss=1, vsz=1,
    timestamp="2024-06-11T10:13:23-04:00", etime="00:02",
    cmd="cmd 1", stat=Counter(["stat1"]),
)
stat2 = ProcessStats(
    pcpu=2.0, pmem=2.0, rss=2, vsz=2,
    timestamp="2024-06-11T10:13:23-04:00", etime="00:02",
    cmd="cmd 2", stat=Counter(["stat2"]),
)
stat100 = ProcessStats(
    pcpu=100.0, pmem=100.0, rss=2, vsz=2,
    timestamp="2024-06-11T10:13:23-04:00", etime="00:02",
    cmd="cmd 100", stat=Counter(["stat100"]),
)
stat_big = ProcessStats(
    pcpu=20000.0, pmem=21234234.0, rss=43645634562, vsz=2345234523452342,
    timestamp="2024-06-11T10:13:23-04:00", etime="00:02",
    cmd="cmd 2", stat=Counter(["statbig"]),
)


@mock.patch("con_duct.__main__.LogPaths")
def test_aggregation_num_samples_increment(mock_log_paths: mock.MagicMock) -> None:
    ex0 = Sample()
    ex0.add_pid(1, deepcopy(stat1))
    mock_log_paths.prefix = "mock_prefix"
    report = Report("_cmd", [], mock_log_paths, EXECUTION_SUMMARY_FORMAT, clobber=False)
    assert report.current_sample is None
    assert report.full_run_stats.averages.num_samples == 0
    report.update_from_sample(ex0)
    report.current_sample = cast(
        Sample, report.current_sample
    )  # So mypy is convinced it is not None
    assert report.current_sample is not None
    assert report.current_sample.averages.num_samples == 1
    assert report.full_run_stats.averages.num_samples == 1
    report.update_from_sample(ex0)
    assert report.current_sample.averages.num_samples == 2
    assert report.full_run_stats.averages.num_samples == 2
    report.update_from_sample(ex0)
    assert report.current_sample.averages.num_samples == 3
    assert report.full_run_stats.averages.num_samples == 3


@mock.patch("con_duct.__main__.LogPaths")
def test_aggregation_single_sample_sanity(mock_log_paths: mock.MagicMock) -> None:
    ex0 = Sample()
    ex0.add_pid(0, deepcopy(stat0))
    ex0.add_pid(1, deepcopy(stat1))
    ex0.add_pid(2, deepcopy(stat2))
    mock_log_paths.prefix = "mock_prefix"
    report = Report("_cmd", [], mock_log_paths, EXECUTION_SUMMARY_FORMAT, clobber=False)
    assert report.current_sample is None
    assert report.full_run_stats.averages.num_samples == 0
    report.update_from_sample(ex0)
    # 3 pids in a single sample should still be "1" sample
    report.current_sample = cast(
        Sample, report.current_sample
    )  # So mypy is convinced it is not None
    assert report.current_sample is not None
    assert report.full_run_stats is not None
    assert report.current_sample.averages.num_samples == 1
    assert report.full_run_stats.averages.num_samples == 1
    # assert totals sanity
    assert report.current_sample.total_rss == stat0.rss + stat1.rss + stat2.rss
    assert report.current_sample.total_vsz == stat0.vsz + stat1.vsz + stat2.vsz
    assert report.current_sample.total_pmem == stat0.pmem + stat1.pmem + stat2.pmem
    assert report.current_sample.total_pcpu == stat0.pcpu + stat1.pcpu + stat2.pcpu
    # With one sample averages should be equal to totals
    assert report.current_sample.averages.rss == report.current_sample.total_rss
    assert report.current_sample.averages.vsz == report.current_sample.total_vsz
    assert report.current_sample.averages.pmem == report.current_sample.total_pmem
    assert report.current_sample.averages.pcpu == report.current_sample.total_pcpu


@pytest.mark.parametrize("stat", [stat0, stat1, stat2, stat_big])
@mock.patch("con_duct.__main__.LogPaths")
def test_aggregation_single_stat_multiple_samples_sanity(
    mock_log_paths: mock.MagicMock, stat: ProcessStats
) -> None:
    ex0 = Sample()
    ex0.add_pid(1, deepcopy(stat))
    mock_log_paths.prefix = "mock_prefix"
    report = Report("_cmd", [], mock_log_paths, EXECUTION_SUMMARY_FORMAT, clobber=False)
    assert report.current_sample is None
    assert report.full_run_stats.averages.num_samples == 0
    report.update_from_sample(ex0)
    report.update_from_sample(ex0)
    report.update_from_sample(ex0)
    report.current_sample = cast(
        Sample, report.current_sample
    )  # So mypy is convinced it is not None
    assert report.current_sample is not None
    assert report.current_sample.averages.num_samples == 3
    assert report.full_run_stats.averages.num_samples == 3
    # With 3 identical samples, totals should be identical to 1 sample
    assert report.current_sample.total_rss == stat.rss
    assert report.current_sample.total_vsz == stat.vsz
    assert report.current_sample.total_pmem == stat.pmem
    assert report.current_sample.total_pcpu == stat.pcpu
    # Without resetting the current sample, full_run_stats should equal current_sample
    assert report.full_run_stats.total_rss == report.current_sample.total_rss
    assert report.full_run_stats.total_vsz == report.current_sample.total_vsz
    assert report.full_run_stats.total_pmem == report.current_sample.total_pmem
    assert report.full_run_stats.total_pcpu == report.current_sample.total_pcpu
    # Averages too
    assert report.current_sample.averages.rss == report.full_run_stats.averages.rss
    assert report.current_sample.averages.vsz == report.full_run_stats.averages.vsz
    assert report.current_sample.averages.pmem == report.full_run_stats.averages.pmem
    assert report.current_sample.averages.pcpu == report.full_run_stats.averages.pcpu
    # With 3 identical samples, averages should be identical to 1 sample
    assert report.current_sample.averages.rss == report.current_sample.total_rss
    assert report.current_sample.averages.vsz == report.current_sample.total_vsz
    assert report.current_sample.averages.pmem == report.current_sample.total_pmem
    assert report.current_sample.averages.pcpu == report.current_sample.total_pcpu


@mock.patch("con_duct.__main__.LogPaths")
def test_aggregation_averages(mock_log_paths: mock.MagicMock) -> None:
    sample0 = Sample()
    sample0.add_pid(1, deepcopy(stat0))
    sample1 = Sample()
    sample1.add_pid(1, deepcopy(stat1))
    sample2 = Sample()
    sample2.add_pid(1, deepcopy(stat2))
    mock_log_paths.prefix = "mock_prefix"
    report = Report("_cmd", [], mock_log_paths, EXECUTION_SUMMARY_FORMAT, clobber=False)
    assert report.current_sample is None
    assert report.full_run_stats.averages.num_samples == 0
    report.update_from_sample(sample0)
    report.update_from_sample(sample1)
    report.update_from_sample(sample2)
    report.current_sample = cast(
        Sample, report.current_sample
    )  # So mypy is convinced it is not None
    assert report.current_sample is not None
    assert report.current_sample.averages.num_samples == 3
    assert report.full_run_stats.averages.num_samples == 3
    # Assert that average calculation works as expected
    assert (
        report.current_sample.averages.rss == (stat0.rss + stat1.rss + stat2.rss) / 3.0
    )
    assert (
        report.current_sample.averages.vsz == (stat0.vsz + stat1.vsz + stat2.vsz) / 3.0
    )
    assert (
        report.current_sample.averages.pmem
        == (stat0.pmem + stat1.pmem + stat2.pmem) / 3.0
    )
    assert (
        report.current_sample.averages.pcpu
        == (stat0.pcpu + stat1.pcpu + stat2.pcpu) / 3.0
    )
    # And full_run_stats.averages is still identical
    assert report.current_sample.averages.rss == report.full_run_stats.averages.rss
    assert report.current_sample.averages.vsz == report.full_run_stats.averages.vsz
    assert report.current_sample.averages.pmem == report.full_run_stats.averages.pmem
    assert report.current_sample.averages.pcpu == report.full_run_stats.averages.pcpu
    # Let's make the arithmetic a little less round
    report.update_from_sample(sample2)
    report.update_from_sample(sample2)
    report.update_from_sample(sample2)
    assert report.current_sample.averages.num_samples == 6
    assert report.full_run_stats.averages.num_samples == 6
    assert (
        report.current_sample.averages.rss
        == (stat0.rss + stat1.rss + stat2.rss * 4) / 6.0
    )
    assert (
        report.current_sample.averages.vsz
        == (stat0.vsz + stat1.vsz + stat2.vsz * 4) / 6.0
    )
    assert (
        report.current_sample.averages.pmem
        == (stat0.pmem + stat1.pmem + stat2.pmem * 4) / 6.0
    )
    assert (
        report.current_sample.averages.pcpu
        == (stat0.pcpu + stat1.pcpu + stat2.pcpu * 4) / 6.0
    )


@mock.patch("con_duct.__main__.LogPaths")
def test_aggregation_current_ave_diverges_from_total_ave(
    mock_log_paths: mock.MagicMock,
) -> None:
    sample0 = Sample()
    sample0.add_pid(1, deepcopy(stat0))
    sample1 = Sample()
    sample1.add_pid(1, deepcopy(stat1))
    sample2 = Sample()
    sample2.add_pid(1, deepcopy(stat2))
    mock_log_paths.prefix = "mock_prefix"
    report = Report("_cmd", [], mock_log_paths, EXECUTION_SUMMARY_FORMAT, clobber=False)
    assert report.current_sample is None
    assert report.full_run_stats.averages.num_samples == 0
    report.update_from_sample(sample0)
    report.update_from_sample(sample1)
    report.update_from_sample(sample2)
    report.current_sample = cast(
        Sample, report.current_sample
    )  # So mypy is convinced it is not None
    assert report.current_sample is not None
    assert report.current_sample.averages.num_samples == 3
    assert report.full_run_stats.averages.num_samples == 3
    # full_run_stats.averages is still identical to current_sample
    assert report.current_sample.averages.rss == report.full_run_stats.averages.rss
    assert report.current_sample.averages.vsz == report.full_run_stats.averages.vsz
    assert report.current_sample.averages.pmem == report.full_run_stats.averages.pmem
    assert report.current_sample.averages.pcpu == report.full_run_stats.averages.pcpu
    # Reset current_sample so averages will diverge from full_run_stats.averages
    report.current_sample = None
    report.update_from_sample(sample2)
    report.update_from_sample(sample2)
    report.update_from_sample(sample2)
    report.current_sample = cast(
        Sample, report.current_sample
    )  # So mypy is convinced it is not None
    assert report.current_sample is not None
    assert report.current_sample.averages.num_samples == 3
    assert report.full_run_stats.averages.num_samples == 6
    # Current sample should only contain sample2
    assert report.current_sample.averages.rss == sample2.total_rss
    assert report.current_sample.averages.vsz == sample2.total_vsz
    assert report.current_sample.averages.pmem == sample2.total_pmem
    assert report.current_sample.averages.pcpu == sample2.total_pcpu
    # Full sample average should == (samples_sum/num_samples)
    assert (
        report.full_run_stats.averages.rss
        == (stat0.rss + stat1.rss + stat2.rss * 4) / 6.0
    )
    assert (
        report.full_run_stats.averages.vsz
        == (stat0.vsz + stat1.vsz + stat2.vsz * 4) / 6.0
    )
    assert (
        report.full_run_stats.averages.pmem
        == (stat0.pmem + stat1.pmem + stat2.pmem * 4) / 6.0
    )
    assert (
        report.full_run_stats.averages.pcpu
        == (stat0.pcpu + stat1.pcpu + stat2.pcpu * 4) / 6.0
    )


@pytest.mark.parametrize("stat", [stat0, stat1, stat2, stat_big])
@mock.patch("con_duct.__main__.LogPaths")
def test_aggregation_many_samples(
    mock_log_paths: mock.MagicMock, stat: ProcessStats
) -> None:
    sample1 = Sample()
    pid = 1
    sample1.add_pid(pid, deepcopy(stat))
    mock_log_paths.prefix = "mock_prefix"
    report = Report("_cmd", [], mock_log_paths, EXECUTION_SUMMARY_FORMAT, clobber=False)
    assert report.current_sample is None
    assert report.full_run_stats.averages.num_samples == 0
    # Ensure nothing strange happens after many updates
    for _ in range(100):
        report.update_from_sample(sample1)
    report.current_sample = cast(
        Sample, report.current_sample
    )  # So mypy is convinced it is not None
    assert report.current_sample is not None
    # Assert that there is exactly 1 ProcessStat.stat count per update
    assert (
        sum(report.current_sample.stats[pid].stat.values())
        == report.full_run_stats.averages.num_samples
        == 100
    )
    assert report.full_run_stats.averages.rss == (stat.rss * 100) / 100.0
    assert report.full_run_stats.averages.vsz == (stat.vsz * 100) / 100.0
    assert report.full_run_stats.averages.pmem == (stat.pmem * 100) / 100.0
    assert report.full_run_stats.averages.pcpu == (stat.pcpu * 100) / 100.0
    # Add a stat that is not 0 and check that the average is still correct
    sample2 = Sample()
    sample2.add_pid(1, deepcopy(stat2))
    report.update_from_sample(sample2)
    assert report.full_run_stats.averages.num_samples == 101
    assert report.full_run_stats.averages.rss == (stat.rss * 100 + stat2.rss) / 101.0
    assert report.full_run_stats.averages.vsz == (stat.vsz * 100 + stat2.vsz) / 101.0
    assert report.full_run_stats.averages.pmem == (stat.pmem * 100 + stat2.pmem) / 101.0
    assert report.full_run_stats.averages.pcpu == (stat.pcpu * 100 + stat2.pcpu) / 101.0


@mock.patch("con_duct.__main__.LogPaths")
def test_aggregation_sample_no_pids(mock_log_paths: mock.MagicMock) -> None:
    sample0 = Sample()
    mock_log_paths.prefix = "mock_prefix"
    report = Report("_cmd", [], mock_log_paths, EXECUTION_SUMMARY_FORMAT, clobber=False)
    # When there are no pids, finalization should be triggered because the exe is finished,
    # so a Sample with no PIDs should never be passed to update_from_sample.
    with pytest.raises(AssertionError):
        report.update_from_sample(sample0)


@mock.patch("con_duct.__main__.LogPaths")
def test_aggregation_no_false_peak(mock_log_paths: mock.MagicMock) -> None:
    sample1 = Sample()
    sample2 = Sample()
    mock_log_paths.prefix = "mock_prefix"
    report = Report("_cmd", [], mock_log_paths, EXECUTION_SUMMARY_FORMAT, clobber=False)
    sample1.add_pid(1, deepcopy(stat100))
    sample1.add_pid(2, deepcopy(stat0))
    report.update_from_sample(sample1)
    sample2.add_pid(1, deepcopy(stat0))
    sample2.add_pid(2, deepcopy(stat100))
    report.update_from_sample(sample2)
    assert report.current_sample is not None
    assert report.current_sample.total_pcpu == 100
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1741097484.0
con_duct-0.11.0/test/test_arg_parsing.py0000644000175100001660000000354014761605014017664 0ustar00runnerdocker
import subprocess

import pytest


def test_duct_help() -> None:
    out = subprocess.check_output(["duct", "--help", "ps"])
    assert "usage: duct [-h]" in str(out)


def test_cmd_help() -> None:
    out = subprocess.check_output(["duct", "ps", "--help"])
    assert "ps [options]" in str(out)
    assert "usage: duct [-h]" not in str(out)


@pytest.mark.parametrize(
    "args",
    [
        ["duct", "--unknown", "ps"],
        ["duct", "--unknown", "ps", "--shouldhavenoeffect"],
    ],
)
def test_duct_unrecognized_arg(args: list) -> None:
    try:
        subprocess.check_output(args, stderr=subprocess.STDOUT)
        pytest.fail("Command should have failed with a non-zero exit code")
    except subprocess.CalledProcessError as e:
        assert e.returncode == 2
        assert "duct: error: unrecognized arguments: --unknown" in str(e.stdout)


def test_duct_missing_cmd() -> None:
    try:
        subprocess.check_output(
            ["duct", "--sample-interval", "1"], stderr=subprocess.STDOUT
        )
        pytest.fail("Command should have failed with a non-zero exit code")
    except subprocess.CalledProcessError as e:
        assert e.returncode == 2
        assert "duct: error: the following arguments are required: command" in str(
            e.stdout
        )


def test_abbreviation_disabled() -> None:
    """
    If abbreviation were enabled, options intended for the command (not duct)
    would still be filtered through argparse and cause problems.
    """
    try:
        subprocess.check_output(["duct", "ps", "--output"], stderr=subprocess.STDOUT)
        raise AssertionError("Invocation of 'ps' should have failed")
    except subprocess.CalledProcessError as e:
        assert e.returncode == 1
        assert "duct: error: ambiguous option: --output could match" not in str(
            e.stdout
        )
        assert "ps [options]" in str(e.stdout)
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1741097484.0
con_duct-0.11.0/test/test_e2e.py0000644000175100001660000000176114761605014016046 0ustar00runnerdocker
from __future__ import annotations

from itertools import chain
import json
from pathlib import Path
import subprocess

import pytest

ABANDONING_PARENT = str(Path(__file__).with_name("data") / "abandoning_parent.sh")


def test_sanity(temp_output_dir: str) -> None:
    command = f"duct -p {temp_output_dir}log_ sleep 0.1"
    subprocess.check_output(command, shell=True)


@pytest.mark.parametrize("num_children", [1, 2, 10])
def test_abandoning_parent(temp_output_dir: str, num_children: int) -> None:
    duct_prefix = f"{temp_output_dir}log_"
    command = f"duct --s-i 0.001 --r-i 0.01 -p {duct_prefix} {ABANDONING_PARENT} {num_children} sleep 0.2"
    subprocess.check_output(command, shell=True)
    with open(f"{duct_prefix}usage.json") as usage_file:
        all_samples = [json.loads(line) for line in usage_file]
    all_pids = set(chain.from_iterable(sample["processes"] for sample in all_samples))
    # 1 for each child, 1 for pstree, 1 for parent
    assert len(all_pids) == num_children + 2
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1741097484.0
con_duct-0.11.0/test/test_execution.py0000644000175100001660000002040514761605014017372 0ustar00runnerdocker
from __future__ import annotations

import json
import os
from pathlib import Path
import signal
import subprocess
import sys
import threading
from time import sleep, time

import pytest
from utils import assert_files

import con_duct.__main__ as __main__
from con_duct.__main__ import SUFFIXES, Arguments, Outputs, execute

TEST_SCRIPT = str(Path(__file__).with_name("data") / "test_script.py")

expected_files = [
    SUFFIXES["stdout"],
    SUFFIXES["stderr"],
    SUFFIXES["info"],
    SUFFIXES["usage"],
]


def assert_expected_files(temp_output_dir: str, exists: bool = True) -> None:
    assert_files(temp_output_dir, expected_files, exists=exists)


def test_sanity_green(caplog: pytest.LogCaptureFixture, temp_output_dir: str) -> None:
    args = Arguments.from_argv(
        ["echo", "hello", "world"],
        sample_interval=4.0,
        report_interval=60.0,
        output_prefix=temp_output_dir,
    )
    t0 = time()
    exit_code = 0
    assert execute(args) == exit_code
    assert time() - t0 < 0.4  # we should not wait for a sample or report interval
    assert_expected_files(temp_output_dir)
    assert "Exit Code: 0" in caplog.records[-1].message


def test_execution_summary(temp_output_dir: str) -> None:
    args = Arguments.from_argv(
        ["sleep", "0.1"],
        sample_interval=0.05,  # small enough to ensure we collect at least 1 sample
        report_interval=0.1,
        output_prefix=temp_output_dir,
    )
    assert execute(args) == 0
    with open(os.path.join(temp_output_dir, SUFFIXES["info"])) as info:
        info_dict = json.loads(info.read())
    execution_summary = info_dict["execution_summary"]
    # Since resources used should be small, let's make sure values are roughly sane
    assert execution_summary["average_pmem"] < 10
    assert execution_summary["peak_pmem"] < 10
    assert execution_summary["average_pcpu"] < 10
    assert execution_summary["peak_pcpu"] < 10
    assert execution_summary["exit_code"] == 0


@pytest.mark.parametrize("exit_code", [1, 2, 128])
def test_sanity_red(
    caplog: pytest.LogCaptureFixture, exit_code: int, temp_output_dir: str
) -> None:
    args = Arguments.from_argv(
        ["sh", "-c", f"exit {exit_code}"],
        output_prefix=temp_output_dir,
        fail_time=0,  # keep log files regardless of exit code
    )
    caplog.set_level("INFO")
    assert execute(args) == exit_code
    assert f"Exit Code: {exit_code}" in caplog.records[-1].message
    # We still should execute normally
    assert_expected_files(temp_output_dir)


def test_outputs_full(temp_output_dir: str) -> None:
    args = Arguments.from_argv(
        [TEST_SCRIPT, "--duration", "1"],
        # It is our default, but let's be explicit
        capture_outputs=Outputs.ALL,
        outputs=Outputs.ALL,
        output_prefix=temp_output_dir,
    )
    assert execute(args) == 0
    assert_expected_files(temp_output_dir)


def test_outputs_passthrough(temp_output_dir: str) -> None:
    args = Arguments.from_argv(
        [TEST_SCRIPT, "--duration", "1"],
        capture_outputs=Outputs.NONE,
        outputs=Outputs.ALL,
        output_prefix=temp_output_dir,
    )
    assert execute(args) == 0
    expected_files = [SUFFIXES["info"], SUFFIXES["usage"]]
    assert_files(temp_output_dir, expected_files, exists=True)
    not_expected_files = [SUFFIXES["stdout"], SUFFIXES["stderr"]]
    assert_files(temp_output_dir, not_expected_files, exists=False)


def test_outputs_capture(temp_output_dir: str) -> None:
    args = Arguments.from_argv(
        [TEST_SCRIPT, "--duration", "1"],
        capture_outputs=Outputs.ALL,
        outputs=Outputs.NONE,
        output_prefix=temp_output_dir,
    )
    assert execute(args) == 0
    # TODO make this work:
    # assert mock.call("this is of test of STDOUT\n") not in mock_stdout.write.mock_calls
    assert_expected_files(temp_output_dir)


def test_outputs_none(temp_output_dir: str) -> None:
    args = Arguments.from_argv(
        [TEST_SCRIPT, "--duration", "1"],
        capture_outputs=Outputs.NONE,
        outputs=Outputs.NONE,
        output_prefix=temp_output_dir,
    )
    assert execute(args) == 0
    # assert mock.call("this is of test of STDOUT\n") not in mock_stdout.write.mock_calls
    expected_files = [SUFFIXES["info"], SUFFIXES["usage"]]
    assert_files(temp_output_dir, expected_files, exists=True)
    not_expected_files = [SUFFIXES["stdout"], SUFFIXES["stderr"]]
    assert_files(temp_output_dir, not_expected_files, exists=False)


def test_outputs_none_quiet(
    temp_output_dir: str,
    capsys: pytest.CaptureFixture,
    caplog: pytest.LogCaptureFixture,
) -> None:
    args = Arguments.from_argv(
        [TEST_SCRIPT, "--duration", "1"],
        output_prefix=temp_output_dir,
    )
    assert execute(args) == 0
    r1 = capsys.readouterr()
    assert r1.out.startswith("this is of test of STDOUT")
    assert "this is of test of STDERR" in r1.err
    assert "Summary" in caplog.text
    caplog_text1 = caplog.text
    # now quiet please
    args.quiet = True
    args.clobber = True  # to avoid the file already exists error
    assert execute(args) == 0
    r2 = capsys.readouterr()
    # Still have all the outputs
    assert r1 == r2
    # But nothing new to the log
    assert caplog.text == caplog_text1
    # log_level NONE should have the same behavior as quiet
    args.log_level = "NONE"
    args.quiet = False
    args.clobber = True  # to avoid the file already exists error
    assert execute(args) == 0
    r3 = capsys.readouterr()
    # Still have all the outputs
    assert r1 == r3
    # But nothing new to the log
    assert caplog.text == caplog_text1


def test_exit_before_first_sample(temp_output_dir: str) -> None:
    args = Arguments.from_argv(
        ["ls"], sample_interval=0.1, report_interval=0.1, output_prefix=temp_output_dir
    )
    assert execute(args) == 0
    assert_expected_files(temp_output_dir)
    # TODO check usagefile


def test_run_less_than_report_interval(temp_output_dir: str) -> None:
    args = Arguments.from_argv(
        ["sleep", "0.01"],
        sample_interval=0.001,
        report_interval=0.1,
        output_prefix=temp_output_dir,
    )
    assert execute(args) == 0
    # Specifically we need to assert that usage.json gets written anyway.
assert_expected_files(temp_output_dir) def test_execute_unknown_command( temp_output_dir: str, capsys: pytest.CaptureFixture ) -> None: cmd = "this_command_does_not_exist_123abrakadabra" args = Arguments.from_argv([cmd]) assert execute(args) == 127 assert f"{cmd}: command not found\n" == capsys.readouterr().err assert_expected_files(temp_output_dir, exists=False) @pytest.mark.parametrize("fail_time", [None, 0, 10, -1, -3.14]) def test_signal_exit(temp_output_dir: str, fail_time: float | None) -> None: def runner() -> int: kws = {} if fail_time is not None: kws["fail_time"] = fail_time args = Arguments.from_argv( ["sleep", "60.74016230000801"], output_prefix=temp_output_dir, **kws ) return execute(args) thread = threading.Thread(target=runner) thread.start() retries = 20 pid = None for i in range(retries): try: ps_command = "ps auxww | grep '[s]leep 60.74016230000801'" # brackets to not match grep process ps_output = subprocess.check_output(ps_command, shell=True).decode() pid = int(ps_output.split()[1]) break except subprocess.CalledProcessError as e: print(f"Attempt {i} failed with msg: {e}", file=sys.stderr) sleep(0.1) # Retry after a short delay if pid is not None: os.kill(pid, signal.SIGTERM) else: raise RuntimeError("Failed to find sleep process") thread.join() if fail_time is None or fail_time != 0: assert_expected_files(temp_output_dir, exists=False) else: # Cannot retrieve the exit code from the thread, it is written to the file with open(os.path.join(temp_output_dir, SUFFIXES["info"])) as info: info_data = json.loads(info.read()) exit_code = info_data["execution_summary"]["exit_code"] assert exit_code == 128 + 15 def test_duct_as_executable(temp_output_dir: str) -> None: ps_command = f"{sys.executable} {__main__.__file__} -p {temp_output_dir} sleep 0.01" # Assert does not raise subprocess.check_output(ps_command, shell=True).decode() ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1741097484.0 
con_duct-0.11.0/test/test_formatter.py

from unittest import mock

import pytest

from con_duct.__main__ import Report, SummaryFormatter

GREEN_START = SummaryFormatter.COLOR_SEQ % SummaryFormatter.GREEN
RED_START = SummaryFormatter.COLOR_SEQ % SummaryFormatter.RED


@mock.patch("con_duct.__main__.LogPaths")
@mock.patch("con_duct.__main__.subprocess.Popen")
def test_execution_summary_formatted_wall_clock_time_nan(
    mock_popen: mock.MagicMock, mock_log_paths: mock.MagicMock
) -> None:
    mock_log_paths.prefix = "mock_prefix"
    wall_clock_format_string = "Wall Clock Time: {wall_clock_time:.3f} sec\n"
    report = Report("_cmd", [], mock_log_paths, wall_clock_format_string, clobber=False)
    # It should not crash; it renders even when there is no wall clock time yet
    assert report.execution_summary_formatted is not None
    assert "wall clock time: nan" in report.execution_summary_formatted.lower()

    # Test with process
    report.process = mock_popen
    report.process.returncode = 0
    output = report.execution_summary_formatted
    assert "None" not in output
    # Process did not finish and we did not set start_time, so it remains nan
    assert "wall clock time: nan" in report.execution_summary_formatted.lower()


@mock.patch("con_duct.__main__.LogPaths")
@mock.patch("con_duct.__main__.subprocess.Popen")
def test_execution_summary_formatted_wall_clock_time_rounded(
    mock_popen: mock.MagicMock, mock_log_paths: mock.MagicMock
) -> None:
    mock_log_paths.prefix = "mock_prefix"
    wall_clock_format_string = "{wall_clock_time:.3f}"
    report = Report("_cmd", [], mock_log_paths, wall_clock_format_string, clobber=False)
    report.process = mock_popen
    report.process.returncode = 0
    report.start_time = 1727221840.0486171
    report.end_time = report.start_time + 1.111111111
    assert "1.111" == report.execution_summary_formatted


def test_summary_formatter_no_vars() -> None:
    not_really_format_string = "test"
    formatter = SummaryFormatter()
    out = formatter.format(not_really_format_string, **{})
    assert out == not_really_format_string


def test_summary_formatter_vars_provided_no_vars_in_format_string() -> None:
    not_really_format_string = "test"
    one_arg = {"ok": "pass"}
    formatter = SummaryFormatter()
    out = formatter.format(not_really_format_string, **one_arg)
    assert out == not_really_format_string


def test_summary_formatter_one_var() -> None:
    valid_format_string = "test {ok}"
    one_arg = {"ok": "pass"}
    formatter = SummaryFormatter()
    out = formatter.format(valid_format_string, **one_arg)
    assert out == "test pass"


def test_summary_formatter_many_vars() -> None:
    valid_format_string = "{one} {two} {three} {four} {five}"
    many_args = {"one": "1", "two": "2", "three": "3", "four": "4", "five": "5"}
    formatter = SummaryFormatter()
    out = formatter.format(valid_format_string, **many_args)
    assert out == "1 2 3 4 5"


def test_summary_formatter_missing_vars() -> None:
    valid_format_string = "{one}"
    formatter = SummaryFormatter()
    with pytest.raises(KeyError):
        formatter.format(valid_format_string, **{})

    valid_format_string = "{one} {two}"
    formatter = SummaryFormatter()
    with pytest.raises(KeyError):
        formatter.format(valid_format_string, **{"one": 1})


def test_summary_formatter_none_replacement() -> None:
    valid_format_string = "test {none}"
    one_arg = {"none": None}
    formatter = SummaryFormatter()
    out = formatter.format(valid_format_string, **one_arg)
    assert out == "test -"


def test_summary_formatter_S_e2e() -> None:
    formatter = SummaryFormatter()

    one_arg = {"big_num": 100000}
    valid_format_string = "test {big_num}"
    no_s_applied = formatter.format(valid_format_string, **one_arg)
    assert no_s_applied == "test 100000"

    s_format_string = "test {big_num!S}"
    s_applied = formatter.format(s_format_string, **one_arg)
    assert s_applied == "test 100.0 kB"

    none_applied = formatter.format(s_format_string, **{"big_num": None})
    assert none_applied == "test -"


# YB -> ZB rollover https://github.com/python-humanize/humanize/issues/205
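The size expectations asserted by the parametrized table that follows use SI (power-of-1000) units, with bytes spelled out below 1 kB and one decimal place above it. A pure-Python sketch of that mapping — `si_size` is a hypothetical helper written here for illustration only; the real `!S` conversion relies on the `humanize` package:

```python
def si_size(num: int) -> str:
    # Bytes are spelled out below 1 kB; larger values use SI steps of 1000
    # with one decimal place, staying in YB rather than rolling over further.
    if num < 1000:
        return f"{num} Byte" + ("" if num == 1 else "s")
    size = float(num)
    for unit in ("kB", "MB", "GB", "TB", "PB", "EB", "ZB", "YB"):
        size /= 1000.0
        if size < 1000 or unit == "YB":
            return f"{size:.1f} {unit}"
    raise AssertionError("unreachable")


print(si_size(100000))  # 100.0 kB
```

Note how values past 10**26 stay in YB (e.g. "1000.0 YB"), matching the final row of the table.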
@pytest.mark.parametrize(
    "num,expected",
    [
        [1, "1 Byte"],
        [10, "10 Bytes"],
        [100, "100 Bytes"],
        [1000, "1.0 kB"],
        [10000, "10.0 kB"],
        [100000, "100.0 kB"],
        [1000000, "1.0 MB"],
        [10000000, "10.0 MB"],
        [100000000, "100.0 MB"],
        [1000000000, "1.0 GB"],
        [10000000000, "10.0 GB"],
        [100000000000, "100.0 GB"],
        [1000000000000, "1.0 TB"],
        [10000000000000, "10.0 TB"],
        [100000000000000, "100.0 TB"],
        [1000000000000000, "1.0 PB"],
        [10000000000000000, "10.0 PB"],
        [100000000000000000, "100.0 PB"],
        [1000000000000000000, "1.0 EB"],
        [10000000000000000000, "10.0 EB"],
        [100000000000000000000, "100.0 EB"],
        [1000000000000000000000, "1.0 ZB"],
        [10000000000000000000000, "10.0 ZB"],
        [100000000000000000000000, "100.0 ZB"],
        [1000000000000900000000000, "1.0 YB"],  # see issue above
        [10000000000000000000000000, "10.0 YB"],
        [100000000000000000000000000, "100.0 YB"],
        [1000000000000000000000000000, "1000.0 YB"],
    ],
)
def test_summary_formatter_S_sizes(num: int, expected: str) -> None:
    formatter = SummaryFormatter()
    format_string = "{num!S}"
    actual = formatter.format(format_string, **{"num": num})
    assert actual == expected


def test_summary_formatter_S_e2e_colors() -> None:
    formatter = SummaryFormatter(enable_colors=True)

    s_format_string = "test {big_num!S}"
    zero_applied = formatter.format(s_format_string, **{"big_num": 0})
    assert zero_applied != "test 0 Bytes"
    expected = f"test {GREEN_START}0 Bytes{formatter.RESET_SEQ}"
    assert expected == zero_applied

    ten_5 = formatter.format(s_format_string, **{"big_num": 100000})
    expected = f"test {GREEN_START}100.0 kB{formatter.RESET_SEQ}"
    assert expected == ten_5

    zero_applied_c = formatter.format(s_format_string, **{"big_num": 0})
    expected = f"test {GREEN_START}0 Bytes{formatter.RESET_SEQ}"
    assert expected == zero_applied_c

    none_applied_c = formatter.format(s_format_string, **{"big_num": None})
    expected = f"test {RED_START}-{formatter.RESET_SEQ}"
    assert expected == none_applied_c


def test_summary_formatter_E_e2e() -> None:
    formatter = SummaryFormatter()
    valid_format_string = "test {e}"
    no_e_applied = formatter.format(valid_format_string, **{"e": 1})
    assert no_e_applied == "test 1"

    e_format_string = "test {e!E}"
    e_applied = formatter.format(e_format_string, **{"e": 1})
    assert e_applied == "test 1"

    e_zero_applied = formatter.format(e_format_string, **{"e": 0})
    assert e_zero_applied == "test 0"


def test_summary_formatter_E_e2e_colors() -> None:
    formatter = SummaryFormatter(enable_colors=True)

    valid_format_string = "test {e}"
    no_e_applied = formatter.format(valid_format_string, **{"e": 1})
    assert no_e_applied == "test 1"

    e_format_string = "test {e!E}"
    # Test Red truthy
    e_applied = formatter.format(e_format_string, **{"e": 1})
    assert e_applied == f"test {RED_START}1{formatter.RESET_SEQ}"
    # Test Green falsey
    e_zero_applied = formatter.format(e_format_string, **{"e": 0})
    assert e_zero_applied == f"test {GREEN_START}0{formatter.RESET_SEQ}"
    # Test Red None
    e_none_applied = formatter.format(e_format_string, **{"e": None})
    assert e_none_applied == f"test {RED_START}-{formatter.RESET_SEQ}"


def test_summary_formatter_X_e2e() -> None:
    formatter = SummaryFormatter()

    valid_format_string = "test {x}"
    no_x_applied = formatter.format(valid_format_string, **{"x": 1})
    assert no_x_applied == "test 1"

    x_format_string = "test {x!X}"
    x_applied = formatter.format(x_format_string, **{"x": 1})
    assert x_applied == "test 1"

    x_zero_applied = formatter.format(x_format_string, **{"x": 0})
    assert x_zero_applied == "test 0"

    x_none_applied = formatter.format(x_format_string, **{"x": None})
    assert x_none_applied == "test -"


def test_summary_formatter_X_e2e_colors() -> None:
    formatter = SummaryFormatter(enable_colors=True)

    valid_format_string = "test {x}"
    no_x_applied = formatter.format(valid_format_string, **{"x": 1})
    assert no_x_applied == "test 1"

    x_format_string = "test {x!X}"
    # Test Green truthy
    x_applied = formatter.format(x_format_string, **{"x": 1})
    assert x_applied == f"test {GREEN_START}1{formatter.RESET_SEQ}"
    # Test Red falsey
    x_zero_applied = formatter.format(x_format_string, **{"x": 0})
    assert x_zero_applied == f"test {RED_START}0{formatter.RESET_SEQ}"
    # Test Red None
    x_none_applied = formatter.format(x_format_string, **{"x": None})
    assert x_none_applied == f"test {RED_START}-{formatter.RESET_SEQ}"


def test_summary_formatter_N_e2e() -> None:
    formatter = SummaryFormatter()

    valid_format_string = "test {n}"
    no_n_applied = formatter.format(valid_format_string, **{"n": 1})
    assert no_n_applied == "test 1"

    n_format_string = "test {n!N}"
    n_applied = formatter.format(n_format_string, **{"n": 1})
    assert n_applied == "test 1"

    n_zero_applied = formatter.format(n_format_string, **{"n": 0})
    assert n_zero_applied == "test 0"

    n_none_applied = formatter.format(n_format_string, **{"n": None})
    assert n_none_applied == "test -"


def test_summary_formatter_N_e2e_colors() -> None:
    formatter = SummaryFormatter(enable_colors=True)

    valid_format_string = "test {n}"
    no_n_applied = formatter.format(valid_format_string, **{"n": 1})
    assert no_n_applied == "test 1"

    no_n_applied = formatter.format(valid_format_string, **{"n": None})
    assert no_n_applied == "test -"

    n_format_string = "test {n!N}"
    # Test Green truthy
    n_applied = formatter.format(n_format_string, **{"n": 1})
    assert n_applied == f"test {GREEN_START}1{formatter.RESET_SEQ}"
    # Test Green falsey
    n_zero_applied = formatter.format(n_format_string, **{"n": 0})
    assert n_zero_applied == f"test {GREEN_START}0{formatter.RESET_SEQ}"
    # Test Red None
    n_none_applied = formatter.format(n_format_string, **{"n": None})
    assert n_none_applied == f"test {RED_START}-{formatter.RESET_SEQ}"


@mock.patch("con_duct.__main__.LogPaths")
@mock.patch("con_duct.__main__.subprocess.Popen")
@pytest.mark.parametrize("colors", [True, False])
def test_execution_summary_formatted_wall_clock_time_nowvalid(
    mock_popen: mock.MagicMock, mock_log_paths: mock.MagicMock, colors: bool
) -> None:
    mock_log_paths.prefix = "mock_prefix"
    wall_clock_format_string = "Rendering: {wall_clock_time:.3f!X}"
    report = Report(
        "_cmd",
        [],
        mock_log_paths,
        wall_clock_format_string,
        clobber=False,
        colors=colors,
    )
    report.process = mock_popen
    report.process.returncode = 0
    report.start_time = 1727221840.0486171
    report.end_time = report.start_time + 1.111111111
    if colors:
        GREEN, STOP = GREEN_START, SummaryFormatter.RESET_SEQ
    else:
        GREEN, STOP = "", ""
    # Assert ValueError not raised
    assert f"Rendering: {GREEN}1.111{STOP}" == report.execution_summary_formatted

    # It should not crash; it renders even when there is no wall clock time yet
    report = Report(
        "_cmd",
        [],
        mock_log_paths,
        wall_clock_format_string,
        clobber=False,
        colors=colors,
    )
    assert f"Rendering: {GREEN}nan{STOP}" == report.execution_summary_formatted

    # or if we really provide bad formatting, e.g. the opposite order of
    # conversion and formatting
    report = Report(
        "_cmd",
        [],
        mock_log_paths,
        "Rendering: {wall_clock_time!X:.3f}",
        clobber=False,
        colors=colors,
    )
    assert f"Rendering: {GREEN}nan{STOP}" == report.execution_summary_formatted


con_duct-0.11.0/test/test_log_paths.py

from __future__ import annotations
from dataclasses import asdict
import os
import re
from unittest.mock import MagicMock, call, patch

import pytest

from con_duct.__main__ import LogPaths, Outputs


def test_log_paths_filesafe_datetime_prefix() -> None:
    log_paths = LogPaths.create("start_{datetime_filesafe}")
    pattern = r"^start_\d{4}\.\d{2}\.\d{2}T\d{2}\.\d{2}\.\d{2}.*"
    for path in asdict(log_paths).values():
        assert re.match(pattern, path) is not None


def test_log_paths_pid_prefix() -> None:
    prefix = "prefix_{pid}_"
    log_paths = LogPaths.create(prefix, pid=123456)
    assert log_paths.prefix == prefix.format(pid=123456)


@pytest.mark.parametrize(
    "path",
    [
        "directory/",
        "nested/directory/",
        "/abs/path/",
    ],
)
@patch("con_duct.__main__.os.makedirs")
@patch("con_duct.__main__.os.path.exists")
@patch("builtins.open")
def test_prepare_dir_paths_available(
    _mock_open: MagicMock, mock_exists: MagicMock, mock_mkdir: MagicMock, path: str
) -> None:
    mock_exists.return_value = False
    log_paths = LogPaths.create(path)
    log_paths.prepare_paths(clobber=False, capture_outputs=Outputs.ALL)
    mock_mkdir.assert_called_once_with(path, exist_ok=True)


@pytest.mark.parametrize(
    "path",
    [
        "directory/pre_",
        "nested/directory/pre_",
        "/abs/path/pre_",
    ],
)
@patch("con_duct.__main__.os.path.exists")
@patch("con_duct.__main__.os.makedirs")
@patch("builtins.open")
def test_prefix_with_filepart_and_directory_part(
    mock_open: MagicMock, mock_mkdir: MagicMock, mock_exists: MagicMock, path: str
) -> None:
    mock_exists.return_value = False
    log_paths = LogPaths.create(path)
    log_paths.prepare_paths(clobber=False, capture_outputs=Outputs.ALL)
    mock_mkdir.assert_called_once_with(os.path.dirname(path), exist_ok=True)
    expected_calls = [call(each, "w") for _name, each in log_paths]
    mock_open.assert_has_calls(expected_calls, any_order=True)


@patch("con_duct.__main__.os.path.exists")
@patch("con_duct.__main__.os.makedirs")
@patch("builtins.open")
def test_prefix_with_filepart_only(
    mock_open: MagicMock, mock_mkdir: MagicMock, mock_exists: MagicMock
) -> None:
    mock_exists.return_value = False
    log_paths = LogPaths.create("filepartonly")
    log_paths.prepare_paths(clobber=False, capture_outputs=Outputs.ALL)
    mock_mkdir.assert_not_called()
    expected_calls = [call(each, "w") for _name, each in log_paths]
    mock_open.assert_has_calls(expected_calls, any_order=True)


@patch("con_duct.__main__.os.path.exists")
@patch("con_duct.__main__.os.makedirs")
@patch("builtins.open")
def test_prepare_file_paths_available_all(
    mock_open: MagicMock, _mock_mkdir: MagicMock, mock_exists: MagicMock
) -> None:
    mock_exists.return_value = False
    prefix = "prefix_"
    log_paths = LogPaths.create(prefix)
    log_paths.prepare_paths(clobber=False, capture_outputs=Outputs.ALL)
    expected_calls = [call(each, "w") for _name, each in log_paths]
    mock_open.assert_has_calls(expected_calls, any_order=True)


@patch("con_duct.__main__.os.path.exists")
@patch("con_duct.__main__.os.makedirs")
@patch("builtins.open")
def test_prepare_file_paths_available_stdout(
    mock_open: MagicMock, _mock_mkdir: MagicMock, mock_exists: MagicMock
) -> None:
    mock_exists.return_value = False
    prefix = "prefix_"
    log_paths = LogPaths.create(prefix)
    log_paths.prepare_paths(clobber=False, capture_outputs=Outputs.STDOUT)
    expected_calls = [
        call(each, "w") for name, each in log_paths if name != Outputs.STDERR
    ]
    mock_open.assert_has_calls(expected_calls, any_order=True)


@patch("con_duct.__main__.os.path.exists")
@patch("con_duct.__main__.os.makedirs")
@patch("builtins.open")
def test_prepare_file_paths_available_stderr(
    mock_open: MagicMock, _mock_mkdir: MagicMock, mock_exists: MagicMock
) -> None:
    mock_exists.return_value = False
    prefix = "prefix_"
    log_paths = LogPaths.create(prefix)
    log_paths.prepare_paths(clobber=False, capture_outputs=Outputs.STDERR)
    expected_calls = [
        call(each, "w") for name, each in log_paths if name != Outputs.STDOUT
    ]
    mock_open.assert_has_calls(expected_calls, any_order=True)


@patch("con_duct.__main__.os.path.exists")
@patch("con_duct.__main__.os.makedirs")
@patch("builtins.open")
def test_prepare_file_paths_available_no_streams(
    mock_open: MagicMock, _mock_mkdir: MagicMock, mock_exists: MagicMock
) -> None:
    mock_exists.return_value = False
    prefix = "prefix_"
    log_paths = LogPaths.create(prefix)
    log_paths.prepare_paths(clobber=False, capture_outputs=Outputs.NONE)
    streams = [Outputs.STDOUT, Outputs.STDERR]
    expected_calls = [
        call(each, "w") for name, each in log_paths if name not in streams
    ]
    mock_open.assert_has_calls(expected_calls, any_order=True)


@patch("con_duct.__main__.os.makedirs")
@patch("con_duct.__main__.os.path.exists")
@patch("builtins.open")
def test_prepare_paths_not_available_no_clobber(
    mock_open: MagicMock, mock_exists: MagicMock, mock_mkdir: MagicMock
) -> None:
    mock_exists.return_value = True
    log_paths = LogPaths.create("doesntmatter")
    with pytest.raises(FileExistsError):
        log_paths.prepare_paths(clobber=False, capture_outputs=Outputs.ALL)
    mock_mkdir.assert_not_called()
    mock_open.assert_not_called()


con_duct-0.11.0/test/test_ls.py

import json
from unittest.mock import mock_open, patch

from con_duct.__main__ import SummaryFormatter
from con_duct.suite.ls import (
    _flatten_dict,
    _restrict_row,
    load_duct_runs,
    process_run_data,
)


def test_load_duct_runs_sanity() -> None:
    mock_json = json.dumps(
        {"schema_version": "0.2.1", "prefix": "/test/path_", "command": "echo hello"}
    )
    with patch("builtins.open", mock_open(read_data=mock_json)):
        result = load_duct_runs(["/test/path_info.json"])
    assert len(result) == 1
    assert result[0]["prefix"] == "/test/path_"


def test_load_duct_runs_skips_unsupported_schema() -> None:
    mock_json = json.dumps(
        {"schema_version": "0.1.1", "prefix": "/test/path_", "command": "echo hello"}
    )
    with patch("builtins.open", mock_open(read_data=mock_json)):
        result = load_duct_runs(["/test/path_info.json"])
    assert len(result) == 0


def test_load_duct_runs_uses_filenames_not_stored_prefix() -> None:
    mock_json = json.dumps(
        {
            "schema_version": "0.2.1",
            "prefix": "/test/not_anymore_",
            "command": "echo hello",
        }
    )
    with patch("builtins.open", mock_open(read_data=mock_json)):
        result = load_duct_runs(["/actual_filepath_info.json"])
    assert len(result) == 1
    assert result[0]["prefix"] == "/actual_filepath_"


def test_flatten_dict() -> None:
    nested = {"a": {"b": 1, "c": 2}, "d": 3}
    result = _flatten_dict(nested)
    assert result == {"b": 1, "c": 2, "d": 3}


def test_restrict_row() -> None:
    row = {"prefix": "/test/path", "exit_code": 0, "extra": "ignore"}
    fields = ["exit_code"]
    result = _restrict_row(fields, row)
    assert "prefix" in result
    assert "exit_code" in result
    assert "extra" not in result


def test_process_run_data() -> None:
    run_data = [
        {
            "prefix": "/test/path",
            "exit_code": 0,
            "wall_clock_time": 0.12345678,
        }
    ]
    formatter = SummaryFormatter(enable_colors=False)
    result = process_run_data(run_data, ["wall_clock_time"], formatter)
    assert isinstance(result, list)
    assert result[0]["prefix"] == "/test/path"
    assert "exit_code" not in result[0]
    assert result[0]["wall_clock_time"] == "0.123 sec"


con_duct-0.11.0/test/test_prepare_outputs.py

from __future__ import annotations
import subprocess
from unittest.mock import MagicMock, call, patch

from utils import MockStream

from con_duct.__main__ import LogPaths, Outputs, prepare_outputs


@patch("builtins.open", new_callable=MagicMock)
@patch("con_duct.__main__.TailPipe")
@patch("con_duct.__main__.LogPaths")
@patch("con_duct.__main__.sys.stderr", new_callable=MockStream)
@patch("con_duct.__main__.sys.stdout", new_callable=MockStream)
def test_prepare_outputs_capture_none_output_none(
    _mock_stdout: MockStream,
    _mock_stderr: MockStream,
    mock_LogPaths: LogPaths,
    mock_tee_stream: MagicMock,
    mock_open: MagicMock,
) -> None:
    mock_log_paths = mock_LogPaths.create("mock_prefix")
    mock_tee_stream.return_value.start = MagicMock()
    stdout, stderr = prepare_outputs(Outputs.NONE, Outputs.NONE, mock_log_paths)
    mock_tee_stream.assert_not_called()
    mock_open.assert_not_called()
    assert stdout == subprocess.DEVNULL
    assert stderr == subprocess.DEVNULL


@patch("builtins.open", new_callable=MagicMock)
@patch("con_duct.__main__.TailPipe")
@patch("con_duct.__main__.LogPaths")
@patch("con_duct.__main__.sys.stderr", new_callable=MockStream)
@patch("con_duct.__main__.sys.stdout", new_callable=MockStream)
def test_prepare_outputs_capture_none_output_stdout(
    _mock_stdout: MockStream,
    _mock_stderr: MockStream,
    mock_LogPaths: LogPaths,
    mock_tee_stream: MagicMock,
    mock_open: MagicMock,
) -> None:
    mock_log_paths = mock_LogPaths.create("mock_prefix")
    mock_tee_stream.return_value.start = MagicMock()
    stdout, stderr = prepare_outputs(Outputs.NONE, Outputs.STDOUT, mock_log_paths)
    mock_tee_stream.assert_not_called()
    mock_open.assert_not_called()
    assert stdout is None
    assert stderr == subprocess.DEVNULL


@patch("builtins.open", new_callable=MagicMock)
@patch("con_duct.__main__.TailPipe")
@patch("con_duct.__main__.LogPaths")
@patch("con_duct.__main__.sys.stderr", new_callable=MockStream)
@patch("con_duct.__main__.sys.stdout", new_callable=MockStream)
def test_prepare_outputs_capture_none_output_stderr(
    _mock_stdout: MockStream,
    _mock_stderr: MockStream,
    mock_LogPaths: LogPaths,
    mock_tee_stream: MagicMock,
    mock_open: MagicMock,
) -> None:
    mock_log_paths = mock_LogPaths.create("mock_prefix")
    mock_tee_stream.return_value.start = MagicMock()
    stdout, stderr = prepare_outputs(Outputs.NONE, Outputs.STDERR, mock_log_paths)
    mock_tee_stream.assert_not_called()
    mock_open.assert_not_called()
    assert stdout == subprocess.DEVNULL
    assert stderr is None


@patch("builtins.open", new_callable=MagicMock)
@patch("con_duct.__main__.TailPipe")
@patch("con_duct.__main__.LogPaths")
@patch("con_duct.__main__.sys.stderr", new_callable=MockStream)
@patch("con_duct.__main__.sys.stdout", new_callable=MockStream)
def test_prepare_outputs_capture_none_output_all(
    _mock_stdout: MockStream,
    _mock_stderr: MockStream,
    mock_LogPaths: LogPaths,
    mock_tee_stream: MagicMock,
    mock_open: MagicMock,
) -> None:
    mock_log_paths = mock_LogPaths.create("mock_prefix")
    mock_tee_stream.return_value.start = MagicMock()
    stdout, stderr = prepare_outputs(Outputs.NONE, Outputs.ALL, mock_log_paths)
    mock_tee_stream.assert_not_called()
    mock_open.assert_not_called()
    assert stdout is None
    assert stderr is None


@patch("builtins.open", new_callable=MagicMock)
@patch("con_duct.__main__.TailPipe")
@patch("con_duct.__main__.LogPaths")
@patch("con_duct.__main__.sys.stderr", new_callable=MockStream)
@patch("con_duct.__main__.sys.stdout", new_callable=MockStream)
def test_prepare_outputs_capture_stdout_output_none(
    _mock_stdout: MockStream,
    _mock_stderr: MockStream,
    mock_LogPaths: LogPaths,
    mock_tee_stream: MagicMock,
    mock_open: MagicMock,
) -> None:
    mock_log_paths = mock_LogPaths.create("mock_prefix")
    mock_tee_stream.return_value.start = MagicMock()
    stdout, stderr = prepare_outputs(Outputs.STDOUT, Outputs.NONE, mock_log_paths)
    mock_tee_stream.assert_not_called()
    mock_open.assert_called_once_with(mock_log_paths.stdout, "w")
    assert stdout == mock_open.return_value
    assert stderr == subprocess.DEVNULL


@patch("builtins.open", new_callable=MagicMock)
@patch("con_duct.__main__.TailPipe")
@patch("con_duct.__main__.LogPaths")
@patch("con_duct.__main__.sys.stderr", new_callable=MockStream)
@patch("con_duct.__main__.sys.stdout", new_callable=MockStream)
def test_prepare_outputs_capture_stdout_output_stdout(
    mock_stdout: MockStream,
    _mock_stderr: MockStream,
    mock_LogPaths: LogPaths,
    mock_tee_stream: MagicMock,
    mock_open: MagicMock,
) -> None:
    mock_log_paths = mock_LogPaths.create("mock_prefix")
    mock_tee_stream.return_value.start = MagicMock()
    stdout, stderr = prepare_outputs(Outputs.STDOUT, Outputs.STDOUT, mock_log_paths)
    mock_tee_stream.assert_called_once_with(
        mock_log_paths.stdout, buffer=mock_stdout.buffer
    )
    mock_open.assert_not_called()
    assert stdout == mock_tee_stream.return_value
    assert stderr == subprocess.DEVNULL


@patch("builtins.open", new_callable=MagicMock)
@patch("con_duct.__main__.TailPipe")
@patch("con_duct.__main__.LogPaths")
@patch("con_duct.__main__.sys.stderr", new_callable=MockStream)
@patch("con_duct.__main__.sys.stdout", new_callable=MockStream)
def test_prepare_outputs_capture_stdout_output_stderr(
    _mock_stdout: MockStream,
    _mock_stderr: MockStream,
    mock_LogPaths: LogPaths,
    mock_tee_stream: MagicMock,
    mock_open: MagicMock,
) -> None:
    mock_log_paths = mock_LogPaths.create("mock_prefix")
    mock_tee_stream.return_value.start = MagicMock()
    stdout, stderr = prepare_outputs(Outputs.STDOUT, Outputs.STDERR, mock_log_paths)
    mock_open.assert_called_once_with(mock_log_paths.stdout, "w")
    mock_tee_stream.assert_not_called()
    assert stdout == mock_open.return_value
    assert stderr is None


@patch("builtins.open", new_callable=MagicMock)
@patch("con_duct.__main__.TailPipe")
@patch("con_duct.__main__.LogPaths")
@patch("con_duct.__main__.sys.stderr", new_callable=MockStream)
@patch("con_duct.__main__.sys.stdout", new_callable=MockStream)
def test_prepare_outputs_capture_stdout_output_all(
    mock_stdout: MockStream,
    _mock_stderr: MockStream,
    mock_LogPaths: LogPaths,
    mock_tee_stream: MagicMock,
    mock_open: MagicMock,
) -> None:
    mock_log_paths = mock_LogPaths.create("mock_prefix")
    mock_tee_stream.return_value.start = MagicMock()
    stdout, stderr = prepare_outputs(Outputs.STDOUT, Outputs.ALL, mock_log_paths)
    mock_tee_stream.assert_called_once_with(
        mock_log_paths.stdout, buffer=mock_stdout.buffer
    )
    mock_open.assert_not_called()
    assert stdout == mock_tee_stream.return_value
    assert stderr is None


@patch("builtins.open", new_callable=MagicMock)
@patch("con_duct.__main__.TailPipe")
@patch("con_duct.__main__.LogPaths")
@patch("con_duct.__main__.sys.stderr", new_callable=MockStream)
@patch("con_duct.__main__.sys.stdout", new_callable=MockStream)
def test_prepare_outputs_capture_stderr_output_none(
    _mock_stdout: MockStream,
    _mock_stderr: MockStream,
    mock_LogPaths: LogPaths,
    mock_tee_stream: MagicMock,
    mock_open: MagicMock,
) -> None:
    mock_log_paths = mock_LogPaths.create("mock_prefix")
    mock_tee_stream.return_value.start = MagicMock()
    stdout, stderr = prepare_outputs(Outputs.STDERR, Outputs.NONE, mock_log_paths)
    mock_tee_stream.assert_not_called()
    mock_open.assert_called_once_with(mock_log_paths.stderr, "w")
    assert stdout == subprocess.DEVNULL
    assert stderr == mock_open.return_value


@patch("builtins.open", new_callable=MagicMock)
@patch("con_duct.__main__.TailPipe")
@patch("con_duct.__main__.LogPaths")
@patch("con_duct.__main__.sys.stderr", new_callable=MockStream)
@patch("con_duct.__main__.sys.stdout", new_callable=MockStream)
def test_prepare_outputs_capture_stderr_output_stdout(
    _mock_stdout: MockStream,
    _mock_stderr: MockStream,
    mock_LogPaths: LogPaths,
    mock_tee_stream: MagicMock,
    mock_open: MagicMock,
) -> None:
    mock_log_paths = mock_LogPaths.create("mock_prefix")
    mock_tee_stream.return_value.start = MagicMock()
    stdout, stderr = prepare_outputs(Outputs.STDERR, Outputs.STDOUT, mock_log_paths)
    mock_tee_stream.assert_not_called()
    mock_open.assert_called_once_with(mock_log_paths.stderr, "w")
    assert stdout is None
    assert stderr == mock_open.return_value


@patch("builtins.open", new_callable=MagicMock)
@patch("con_duct.__main__.TailPipe")
@patch("con_duct.__main__.LogPaths")
@patch("con_duct.__main__.sys.stderr", new_callable=MockStream)
@patch("con_duct.__main__.sys.stdout", new_callable=MockStream)
def test_prepare_outputs_capture_stderr_output_stderr(
    _mock_stdout: MockStream,
    mock_stderr: MockStream,
    mock_LogPaths: LogPaths,
    mock_tee_stream: MagicMock,
    mock_open: MagicMock,
) -> None:
    mock_log_paths = mock_LogPaths.create("mock_prefix")
    mock_tee_stream.return_value.start = MagicMock()
    stdout, stderr = prepare_outputs(Outputs.STDERR, Outputs.STDERR, mock_log_paths)
    mock_tee_stream.assert_called_once_with(
        mock_log_paths.stderr, buffer=mock_stderr.buffer
    )
    mock_open.assert_not_called()
    assert stdout == subprocess.DEVNULL
    assert stderr == mock_tee_stream.return_value


@patch("builtins.open", new_callable=MagicMock)
@patch("con_duct.__main__.TailPipe")
@patch("con_duct.__main__.LogPaths")
@patch("con_duct.__main__.sys.stderr", new_callable=MockStream)
@patch("con_duct.__main__.sys.stdout", new_callable=MockStream)
def test_prepare_outputs_capture_stderr_output_all(
    _mock_stdout: MockStream,
    mock_stderr: MockStream,
    mock_LogPaths: LogPaths,
    mock_tee_stream: MagicMock,
    mock_open: MagicMock,
) -> None:
    mock_log_paths = mock_LogPaths.create("mock_prefix")
    mock_tee_stream.return_value.start = MagicMock()
    stdout, stderr = prepare_outputs(Outputs.STDERR, Outputs.ALL, mock_log_paths)
    mock_tee_stream.assert_called_once_with(
        mock_log_paths.stderr, buffer=mock_stderr.buffer
    )
    mock_open.assert_not_called()
    assert stdout is None
    assert stderr == mock_tee_stream.return_value


@patch("builtins.open", new_callable=MagicMock)
@patch("con_duct.__main__.TailPipe")
@patch("con_duct.__main__.LogPaths")
@patch("con_duct.__main__.sys.stderr", new_callable=MockStream)
@patch("con_duct.__main__.sys.stdout", new_callable=MockStream)
def test_prepare_outputs_capture_all_output_none(
    _mock_stdout: MockStream,
    _mock_stderr: MockStream,
    mock_LogPaths: LogPaths,
    mock_tee_stream: MagicMock,
    mock_open: MagicMock,
) -> None:
    mock_log_paths = mock_LogPaths.create("mock_prefix")
    mock_tee_stream.return_value.start = MagicMock()
    stdout, stderr = prepare_outputs(Outputs.ALL, Outputs.NONE, mock_log_paths)
    mock_tee_stream.assert_not_called()
    mock_open.assert_has_calls(
        [call(mock_log_paths.stdout, "w"), call(mock_log_paths.stderr, "w")]
    )
    assert stdout == mock_open.return_value
    assert stderr == mock_open.return_value


@patch("builtins.open", new_callable=MagicMock)
@patch("con_duct.__main__.TailPipe")
@patch("con_duct.__main__.LogPaths")
@patch("con_duct.__main__.sys.stderr", new_callable=MockStream)
@patch("con_duct.__main__.sys.stdout", new_callable=MockStream)
def test_prepare_outputs_capture_all_output_stdout(
    mock_stdout: MockStream,
    _mock_stderr: MockStream,
    mock_LogPaths: LogPaths,
    mock_tee_stream: MagicMock,
    mock_open: MagicMock,
) -> None:
    mock_log_paths = mock_LogPaths.create("mock_prefix")
    mock_tee_stream.return_value.start = MagicMock()
    stdout, stderr = prepare_outputs(Outputs.ALL, Outputs.STDOUT, mock_log_paths)
    mock_tee_stream.assert_called_with(mock_log_paths.stdout, buffer=mock_stdout.buffer)
    mock_open.assert_called_once_with(mock_log_paths.stderr, "w")
    assert stdout == mock_tee_stream.return_value
    assert stderr == mock_open.return_value


@patch("builtins.open", new_callable=MagicMock)
@patch("con_duct.__main__.TailPipe")
@patch("con_duct.__main__.LogPaths")
@patch("con_duct.__main__.sys.stderr", new_callable=MockStream)
@patch("con_duct.__main__.sys.stdout", new_callable=MockStream)
def test_prepare_outputs_capture_all_output_stderr(
    _mock_stdout: MockStream,
    mock_stderr: MockStream,
    mock_LogPaths: LogPaths,
    mock_tee_stream: MagicMock,
    mock_open: MagicMock,
) -> None:
    mock_log_paths = mock_LogPaths.create("mock_prefix")
    mock_tee_stream.return_value.start = MagicMock()
    stdout, stderr = prepare_outputs(Outputs.ALL, Outputs.STDERR, mock_log_paths)
    mock_tee_stream.assert_called_once_with(
        mock_log_paths.stderr, buffer=mock_stderr.buffer
    )
    mock_open.assert_called_once_with(mock_log_paths.stdout, "w")
    assert stdout == mock_open.return_value
    assert stderr == mock_tee_stream.return_value


@patch("builtins.open", new_callable=MagicMock)
@patch("con_duct.__main__.TailPipe")
@patch("con_duct.__main__.LogPaths")
@patch("con_duct.__main__.sys.stderr", new_callable=MockStream)
@patch("con_duct.__main__.sys.stdout", new_callable=MockStream)
def test_prepare_outputs_capture_all_output_all(
    mock_stdout: MockStream,
    mock_stderr: MockStream,
    mock_LogPaths: LogPaths,
    mock_tee_stream: MagicMock,
    mock_open: MagicMock,
) -> None:
    mock_log_paths = mock_LogPaths.create("mock_prefix")
    mock_tee_stream.return_value.start = MagicMock()
    stdout, stderr = prepare_outputs(Outputs.ALL, Outputs.ALL, mock_log_paths)
    mock_tee_stream.assert_has_calls(
        [
            call(mock_log_paths.stdout, buffer=mock_stdout.buffer),
            call(mock_log_paths.stderr, buffer=mock_stderr.buffer),
        ],
        any_order=True,
    )
    assert stdout == mock_tee_stream.return_value
    assert stderr == mock_tee_stream.return_value
    mock_open.assert_not_called()


con_duct-0.11.0/test/test_report.py

from __future__ import annotations
from collections import Counter
from copy import deepcopy
from datetime import datetime
import os
import subprocess
from unittest import mock

import pytest

from con_duct.__main__ import (
    EXECUTION_SUMMARY_FORMAT,
    Averages,
    ProcessStats,
    Report,
    Sample,
)

stat0 = ProcessStats(
    pcpu=0.0,
    pmem=0,
    rss=0,
    vsz=0,
    timestamp="2024-06-11T10:09:37-04:00",
    etime="00:00",
    cmd="cmd 1",
    stat=Counter(["stat0"]),
)

stat1 = ProcessStats(
    pcpu=1.0,
    pmem=0,
    rss=0,
    vsz=0,
    timestamp="2024-06-11T10:13:23-04:00",
    etime="00:02",
    cmd="cmd 1",
    stat=Counter(["stat1"]),
)

stat2 = ProcessStats(
    pcpu=1.1,
    pmem=1.1,
    rss=11,
    vsz=11,
    timestamp="2024-06-11T10:13:23-04:00",
    etime="00:02",
    cmd="cmd 1",
    stat=Counter(["stat2"]),
)


def test_sample_max_initial_values_one_pid() -> None:
    maxes = Sample()
    ex0 = Sample()
    ex0.add_pid(1, deepcopy(stat0))
    maxes = maxes.aggregate(ex0)
    assert maxes.stats == {1: stat0}


def test_sample_max_one_pid() -> None:
    maxes = Sample()
    maxes.add_pid(1, deepcopy(stat0))
    ex1 = Sample()
    ex1.add_pid(1, deepcopy(stat1))
    maxes = maxes.aggregate(ex1)
    assert maxes.stats[1].rss == stat1.rss
    assert maxes.stats[1].vsz == stat1.vsz
    assert maxes.stats[1].pmem == stat1.pmem
    assert maxes.stats[1].pcpu == stat1.pcpu


def test_sample_max_initial_values_two_pids() -> None:
    maxes = Sample()
    ex0 = Sample()
    ex0.add_pid(1, deepcopy(stat0))
    ex0.add_pid(2, deepcopy(stat0))
    maxes = maxes.aggregate(ex0)
    assert maxes.stats == {1: stat0, 2: stat0}


def test_sample_aggregate_two_pids() -> None:
    maxes = Sample()
    maxes.add_pid(1, deepcopy(stat0))
    maxes.add_pid(2, deepcopy(stat0))
    assert maxes.stats[1].stat["stat0"] == 1
    assert maxes.stats[2].stat["stat0"] == 1
    assert maxes.stats[1].stat["stat1"] == 0
    assert maxes.stats[2].stat["stat1"] == 0

    ex1 = Sample()
    ex1.add_pid(1, deepcopy(stat1))
    maxes = maxes.aggregate(ex1)
    assert maxes.stats[1].stat["stat0"] == 1
    assert maxes.stats[2].stat["stat0"] == 1
    assert maxes.stats[1].stat["stat1"] == 1
    assert maxes.stats[2].stat["stat1"] == 0

    ex2 = Sample()
    ex2.add_pid(2, deepcopy(stat1))
    maxes = maxes.aggregate(ex2)
    # Check the `stat` counts one of each for both pids
    assert maxes.stats[1].stat["stat0"] == 1
    assert maxes.stats[2].stat["stat0"] == 1
    assert maxes.stats[1].stat["stat1"] == 1
    assert maxes.stats[2].stat["stat1"] == 1
    # Each stat1 value > stat0 value
    assert maxes.stats[1].pcpu == stat1.pcpu
    assert maxes.stats[1].pmem == stat1.pmem
    assert maxes.stats[1].rss == stat1.rss
    assert maxes.stats[1].vsz == stat1.vsz
    assert maxes.stats[2].pcpu == stat1.pcpu
    assert maxes.stats[2].pmem == stat1.pmem
    assert maxes.stats[2].rss == stat1.rss
    assert maxes.stats[2].vsz == stat1.vsz


def test_average_no_samples() -> None:
    averages = Averages()
    assert averages.num_samples == 0
    sample = Sample()
    sample.averages = averages
    serialized = sample.for_json()
    assert "averages" in serialized
    assert not serialized["averages"]


def test_averages_one_sample() -> None:
    sample = Sample()
    sample.add_pid(1, deepcopy(stat0))
    averages = Averages.from_sample(sample)
    assert averages.rss == sample.total_rss
    assert averages.vsz == sample.total_vsz
    assert averages.pmem == sample.total_pmem
    assert averages.pcpu == sample.total_pcpu
    assert averages.num_samples == 1


def test_averages_two_samples() -> None:
    sample = Sample()
    sample.add_pid(1, deepcopy(stat0))
    averages = Averages.from_sample(sample)
    sample2 = Sample()
    sample2.add_pid(2, deepcopy(stat1))
    averages.update(sample2)
    assert averages.pcpu == (stat0.pcpu + stat1.pcpu) / 2


def test_averages_three_samples() -> None:
    sample = Sample()
    sample.add_pid(1, deepcopy(stat0))
    averages = Averages.from_sample(sample)
    sample2 = Sample()
    sample2.add_pid(2, deepcopy(stat1))
    averages.update(sample2)
    averages.update(sample2)
    assert averages.pcpu == (stat0.pcpu + (2 * stat1.pcpu)) / 3


def test_sample_totals() -> None:
    sample = Sample()
    sample.add_pid(1, deepcopy(stat2))
    sample.add_pid(2, deepcopy(stat2))
    assert sample.total_rss == stat2.rss * 2
    assert sample.total_vsz == stat2.vsz * 2
    assert sample.total_pmem == stat2.pmem * 2
    assert sample.total_pcpu == stat2.pcpu * 2


@pytest.mark.parametrize(
    "pcpu, pmem, rss, vsz, etime, cmd",
    [
        (1.0, 1.1, 1024, 1025, "00:00", "cmd"),
        (0.5, 0.7, 20.48, 40.96, "00:01", "any"),
        (1, 2, 3, 4, "100:1000", "string"),
        (0, 0.0, 0, 0.0, "999:999:999", "can have spaces"),
        (2.5, 3.5, 8192, 16384, "any", "for --this --kind of thing"),
        (100.0, 99.9, 65536, 131072, "string", "cmd"),
    ],
)
def test_process_stats_green(
    pcpu: float, pmem: float, rss: int, vsz: int, etime: str, cmd: str
) -> None:
    # Assert does not raise
    ProcessStats(
        pcpu=pcpu,
        pmem=pmem,
        rss=rss,
        vsz=vsz,
        timestamp=datetime.now().astimezone().isoformat(),
        etime=etime,
        cmd=cmd,
        stat=Counter(["stat0"]),
    )


@pytest.mark.parametrize(
    "pcpu, pmem, rss, vsz, etime, cmd",
    [
        ("only", 1.1, 1024, 1025, "etime", "cmd"),
        (0.5, "takes", 20.48, 40.96, "some", "str"),
        (1, 2, "one", 4, "anything", "accepted"),
        (1, 2, 3, "value", "etime", "cmd"),
        ("2", "fail", "or", "more", "etime", "cmd"),
    ],
)
def test_process_stats_red(
    pcpu: float, pmem: float, rss: int, vsz: int, etime: str, cmd: str
) -> None:
    with pytest.raises(AssertionError):
        ProcessStats(
            pcpu=pcpu,
            pmem=pmem,
            rss=rss,
            vsz=vsz,
            timestamp=datetime.now().astimezone().isoformat(),
            etime=etime,
            cmd=cmd,
            stat=Counter(["stat0"]),
        )


@mock.patch("con_duct.__main__.LogPaths")
def test_system_info_sanity(mock_log_paths: mock.MagicMock) -> None:
    mock_log_paths.prefix = "mock_prefix"
    report = Report("_cmd", [], mock_log_paths, EXECUTION_SUMMARY_FORMAT, clobber=False)
    report.get_system_info()
    assert report.system_info is not None
    assert report.system_info.hostname
    assert report.system_info.cpu_total
    assert report.system_info.memory_total > 10
    assert report.system_info.uid == os.getuid()
    assert report.system_info.user == os.environ.get("USER")


@mock.patch("con_duct.__main__.shutil.which")
@mock.patch("con_duct.__main__.subprocess.check_output")
@mock.patch("con_duct.__main__.LogPaths") def test_gpu_parsing_green( mock_log_paths: mock.MagicMock, mock_sp: mock.MagicMock, _mock_which: mock.MagicMock ) -> None: mock_sp.return_value = ( "index, name, pci.bus_id, driver_version, memory.total [MiB], compute_mode\n" "0, NVIDIA RTX A5500 Laptop GPU, 00000000:01:00.0, 535.183.01, 16384 MiB, Default" ).encode("utf-8") report = Report("_cmd", [], mock_log_paths, EXECUTION_SUMMARY_FORMAT, clobber=False) report.get_system_info() assert report.gpus is not None assert report.gpus == [ { "index": "0", "name": "NVIDIA RTX A5500 Laptop GPU", "bus_id": "00000000:01:00.0", "driver_version": "535.183.01", "memory.total": "16384 MiB", "compute_mode": "Default", } ] @mock.patch("con_duct.__main__.lgr") @mock.patch("con_duct.__main__.shutil.which") @mock.patch("con_duct.__main__.subprocess.check_output") @mock.patch("con_duct.__main__.LogPaths") def test_gpu_call_error( mock_log_paths: mock.MagicMock, mock_sp: mock.MagicMock, _mock_which: mock.MagicMock, mlgr: mock.MagicMock, ) -> None: mock_sp.side_effect = subprocess.CalledProcessError(1, "errrr") report = Report("_cmd", [], mock_log_paths, EXECUTION_SUMMARY_FORMAT, clobber=False) report.get_system_info() assert report.gpus is None mlgr.warning.assert_called_once() @mock.patch("con_duct.__main__.lgr") @mock.patch("con_duct.__main__.shutil.which") @mock.patch("con_duct.__main__.subprocess.check_output") @mock.patch("con_duct.__main__.LogPaths") def test_gpu_parse_error( mock_log_paths: mock.MagicMock, mock_sp: mock.MagicMock, _mock_which: mock.MagicMock, mlgr: mock.MagicMock, ) -> None: mock_sp.return_value = ( "index, name, pci.bus_id, driver_version, memory.total [MiB], compute_mode\n" "not-enough-values, 535.183.01, 16384 MiB, Default" ).encode("utf-8") report = Report("_cmd", [], mock_log_paths, EXECUTION_SUMMARY_FORMAT, clobber=False) report.get_system_info() assert report.gpus is None mlgr.warning.assert_called_once() 
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1741097484.0 con_duct-0.11.0/test/test_schema.py0000644000175100001660000000175314761605014016634 0ustar00runnerdockerimport json import os from pathlib import Path from con_duct.__main__ import Arguments, execute from con_duct.suite.ls import LS_FIELD_CHOICES, _flatten_dict def test_info_fields(temp_output_dir: str) -> None: """ Generate the list of fields users can request when viewing info files. Fails when schema changes-- commit the new version and bump schema version """ args = Arguments.from_argv( ["echo", "hello", "world"], sample_interval=4.0, report_interval=60.0, output_prefix=temp_output_dir, clobber=True, ) # Execute duct assert execute(args) == 0 # exit_code os.remove(Path(temp_output_dir, "stdout")) os.remove(Path(temp_output_dir, "stderr")) os.remove(Path(temp_output_dir, "usage.json")) info_file = Path(temp_output_dir, "info.json") actual_info_schema = _flatten_dict(json.loads(info_file.read_text())).keys() os.remove(info_file) assert set(actual_info_schema) == set(LS_FIELD_CHOICES) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1741097484.0 con_duct-0.11.0/test/test_suite.py0000644000175100001660000003074614761605014016531 0ustar00runnerdockerimport argparse import contextlib from io import StringIO import json import os import tempfile from typing import Any, Optional import unittest from unittest.mock import MagicMock, mock_open, patch import pytest import yaml from con_duct.suite import main, plot, pprint_json from con_duct.suite.ls import MINIMUM_SCHEMA_VERSION, ls class TestSuiteHelpers(unittest.TestCase): def test_execute_returns_int(self) -> None: def return_non_int(*_args: Any) -> str: return "NOPE" args = argparse.Namespace( command="invalid", file_path="dummy.json", func=return_non_int, log_level="NONE", ) with pytest.raises(TypeError): main.execute(args) @patch("con_duct.suite.main.argparse.ArgumentParser") def 
test_parser_mock_sanity(self, mock_parser: MagicMock) -> None:
        mock_args = MagicMock()
        mock_args.command = None
        mock_parser.parse_args.return_value = mock_args
        argv = ["/path/to/con-duct", "plot", "--help"]
        main.main(argv)
        mock_parser.return_value.print_help.assert_called_once()

    @patch("con_duct.suite.main.sys.exit", new_callable=MagicMock)
    @patch("con_duct.suite.main.sys.stderr", new_callable=MagicMock)
    @patch("con_duct.suite.main.sys.stdout", new_callable=MagicMock)
    def test_parser_sanity_green(
        self, mock_stdout: MagicMock, mock_stderr: MagicMock, mock_exit: MagicMock
    ) -> None:
        argv = ["--help"]
        main.main(argv)
        # [0][1][0]: [first call][positional args (0 is self)][first positional]
        out = mock_stdout.write.mock_calls[0][1][0]
        assert "usage: con-duct [options]" in out
        mock_stderr.write.assert_not_called()
        mock_exit.assert_called_once_with(0)

    @patch("con_duct.suite.main.sys.exit", new_callable=MagicMock)
    @patch("con_duct.suite.main.sys.stderr", new_callable=MagicMock)
    @patch("con_duct.suite.main.sys.stdout", new_callable=MagicMock)
    def test_parser_sanity_red(
        self, mock_stdout: MagicMock, mock_stderr: MagicMock, mock_exit: MagicMock
    ) -> None:
        argv = ["--fakehelp"]
        main.main(argv)
        # [0][1][0]: [first call][positional args (0 is self)][first positional]
        out = mock_stdout.write.mock_calls[0][1][0]
        assert "usage: con-duct [options]" in out
        # First call
        assert "usage: con-duct [options]" in mock_stderr.write.mock_calls[0][1][0]
        # Second call
        assert "--fakehelp" in mock_stderr.write.mock_calls[1][1][0]
        mock_exit.assert_called_once_with(2)


class TestPPrint(unittest.TestCase):
    @patch(
        "builtins.open", new_callable=mock_open, read_data='{"mock_key": "mock_value"}'
    )
    @patch("con_duct.suite.pprint_json.pprint")
    def test_pprint_json(self, mock_pprint: MagicMock, mock_open: MagicMock) -> None:
        args = argparse.Namespace(
            command="pp",
            file_path="dummy.json",
            func=pprint_json.pprint_json,
            log_level="NONE",
        )
        assert
main.execute(args) == 0 mock_open.assert_called_with("dummy.json", "r") mock_pprint.assert_called_once_with({"mock_key": "mock_value"}) @patch("builtins.open", side_effect=FileNotFoundError) def test_file_not_found(self, _mock_open: MagicMock) -> None: args = argparse.Namespace( command="pp", file_path="dummy.json", func=pprint_json.pprint_json, log_level="NONE", ) assert main.execute(args) == 1 @patch("builtins.open", new_callable=mock_open, read_data='{"invalid": "json"') @patch("con_duct.suite.pprint_json.pprint") def test_pprint_invalid_json( self, mock_pprint: MagicMock, mock_open: MagicMock ) -> None: args = argparse.Namespace( command="pp", file_path="dummy.json", func=pprint_json.pprint_json, log_level="NONE", ) assert main.execute(args) == 1 mock_open.assert_called_with("dummy.json", "r") mock_pprint.assert_not_called() class TestPlotMatplotlib(unittest.TestCase): @patch("matplotlib.pyplot.savefig") def test_matplotlib_plot_sanity(self, mock_plot_save: MagicMock) -> None: args = argparse.Namespace( command="plot", file_path="test/data/mriqc-example/usage.json", output="outfile.png", func=plot.matplotlib_plot, log_level="NONE", ) assert main.execute(args) == 0 mock_plot_save.assert_called_once_with("outfile.png") @patch("matplotlib.pyplot.savefig") def test_matplotlib_plot_file_not_found(self, mock_plot_save: MagicMock) -> None: args = argparse.Namespace( command="plot", file_path="test/data/mriqc-example/usage_not_to_be_found.json", output="outfile.png", func=plot.matplotlib_plot, log_level="NONE", ) assert main.execute(args) == 1 mock_plot_save.assert_not_called() @patch("matplotlib.pyplot.savefig") @patch("builtins.open", new_callable=mock_open, read_data='{"invalid": "json"') def test_matplotlib_plot_invalid_json( self, _mock_open: MagicMock, mock_plot_save: MagicMock ) -> None: args = argparse.Namespace( command="plot", file_path="test/data/mriqc-example/usage.json", output="outfile.png", func=plot.matplotlib_plot, log_level="NONE", ) assert 
main.execute(args) == 1 mock_plot_save.assert_not_called() class TestLS(unittest.TestCase): def setUp(self) -> None: """Create a temporary directory and test files.""" self.temp_dir = tempfile.TemporaryDirectory() self.old_cwd = os.getcwd() os.chdir(self.temp_dir.name) self.files = { "file1_info.json": { "schema_version": MINIMUM_SCHEMA_VERSION, "prefix": "test1", "filter_this": "yes", }, "file2_info.json": { "schema_version": MINIMUM_SCHEMA_VERSION, "prefix": "test2", "filter_this": "no", }, "file3_info.json": {"schema_version": "0.1.0", "prefix": "old_version"}, "not_matching.json": { "schema_version": MINIMUM_SCHEMA_VERSION, "prefix": "no_match", }, ".duct/logs/default_logpath_info.json": { "schema_version": MINIMUM_SCHEMA_VERSION, "prefix": "default_file1", }, } for filename, content in self.files.items(): full_path = os.path.join(self.temp_dir.name, filename) os.makedirs(os.path.dirname(full_path), exist_ok=True) with open(full_path, "w") as f: json.dump(content, f) def tearDown(self) -> None: """Clean up the temporary directory.""" os.chdir(self.old_cwd) self.temp_dir.cleanup() def _run_ls( self, paths: list[str], fmt: str, args: Optional[argparse.Namespace] = None ) -> str: """Helper function to run ls() and capture stdout.""" if args is None: args = argparse.Namespace( paths=[os.path.join(self.temp_dir.name, path) for path in paths], colors=False, fields=["prefix", "schema_version"], eval_filter=None, format=fmt, func=ls, ) buf = StringIO() with contextlib.redirect_stdout(buf): exit_code = ls(args) assert exit_code == 0 return buf.getvalue().strip() def test_ls_sanity(self) -> None: """Basic sanity test to ensure ls() runs without crashing.""" just_file1 = ["file1_info.json"] result = self._run_ls(just_file1, "summaries") assert "Prefix:" in result prefixes = [ line.split(":", 1)[1].strip() for line in result.splitlines() if line.startswith("Prefix:") ] assert len(prefixes) == 1 assert any("file1" in p for p in prefixes) def test_ls_with_filter(self) -> 
None: """Basic sanity test to ensure ls() runs without crashing.""" paths = ["file1_info.json", "file2_info.json"] args = argparse.Namespace( paths=[os.path.join(self.temp_dir.name, path) for path in paths], colors=False, fields=["prefix", "schema_version"], eval_filter="filter_this=='yes'", format="summaries", func=ls, ) result = self._run_ls(paths, "summaries", args) assert "Prefix:" in result prefixes = [ line.split(":", 1)[1].strip() for line in result.splitlines() if line.startswith("Prefix:") ] assert len(prefixes) == 1 assert any("file1" in p for p in prefixes) # filter_this == 'no' assert "file2" not in result def test_ls_no_pos_args(self) -> None: result = self._run_ls([], "summaries") assert "Prefix:" in result prefixes = [ line.split(":", 1)[1].strip() for line in result.splitlines() if line.startswith("Prefix:") ] assert len(prefixes) == 1 assert any("default_logpath" in p for p in prefixes) assert "file1" not in result assert "file2" not in result assert "file3" not in result assert "not_matching.json" not in result def test_ls_multiple_paths(self) -> None: """Basic sanity test to ensure ls() runs without crashing.""" files_1_and_2 = ["file1_info.json", "file2_info.json"] result = self._run_ls(files_1_and_2, "summaries") assert "Prefix:" in result prefixes = [ line.split(":", 1)[1].strip() for line in result.splitlines() if line.startswith("Prefix:") ] assert len(prefixes) == 2 assert any("file1" in p for p in prefixes) assert any("file2" in p for p in prefixes) def test_ls_ignore_old_schema(self) -> None: """Basic sanity test to ensure ls() runs without crashing.""" files_1_2_3 = ["file1_info.json", "file2_info.json", "file3_info.json"] result = self._run_ls(files_1_2_3, "summaries") assert "Prefix:" in result prefixes = [ line.split(":", 1)[1].strip() for line in result.splitlines() if line.startswith("Prefix:") ] assert len(prefixes) == 2 assert any("file1" in p for p in prefixes) assert any("file2" in p for p in prefixes) # file3 does not meet 
minimum schema version
        assert "file3" not in result

    def test_ls_ignore_non_infojson(self) -> None:
        """Files whose names do not end in info.json should be ignored."""
        all_files = ["file1_info.json", "file2_info.json", "not_matching.json"]
        result = self._run_ls(all_files, "summaries")
        assert "Prefix:" in result
        prefixes = [
            line.split(":", 1)[1].strip()
            for line in result.splitlines()
            if line.startswith("Prefix:")
        ]
        assert len(prefixes) == 2
        assert any("file1" in p for p in prefixes)
        assert any("file2" in p for p in prefixes)
        # does not end in info.json
        assert "not_matching.json" not in result

    def test_ls_json_output(self) -> None:
        """Test JSON output format."""
        result = self._run_ls(["file1_info.json"], "json")
        parsed = json.loads(result)
        assert len(parsed) == 1
        assert "prefix" in parsed[0]

    def test_ls_json_pp_output(self) -> None:
        """Test pretty-printed JSON output format."""
        result = self._run_ls(["file1_info.json"], "json_pp")
        parsed = json.loads(result)
        assert len(parsed) == 1
        assert "prefix" in parsed[0]

    def test_ls_yaml_output(self) -> None:
        """Test YAML output format."""
        result = self._run_ls(["file1_info.json"], "yaml")
        parsed = yaml.safe_load(result)
        assert len(parsed) == 1
        assert "prefix" in parsed[0]

    def test_ls_pyout_output(self) -> None:
        """Test pyout output format."""
        result = self._run_ls(["file1_info.json"], "pyout")
        # pyout header
        assert "PREFIX" in result
        assert os.path.join(self.temp_dir.name, "file1_") in result
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1741097484.0
con_duct-0.11.0/test/test_tailpipe.py0000644000175100001660000000472314761605014017203 0ustar00runnerdocker
from __future__ import annotations
from pathlib import Path
import subprocess
import tempfile
from unittest.mock import patch
import pytest
from utils import MockStream
from con_duct.__main__ import TailPipe

# 10^7 line fixture is about 70MB
FIXTURE_LIST = [f"ten_{i}" for i in range(1, 8)]


@pytest.fixture(scope="module", params=FIXTURE_LIST)
def
fixture_path( request: pytest.FixtureRequest, tmp_path_factory: pytest.TempPathFactory ) -> str: num_lines_exponent = int(request.param.split("_")[1]) base_temp_dir = tmp_path_factory.mktemp("fixture_data") file_path = base_temp_dir / f"{request.param}.txt" with open(file_path, "w") as f: for i in range(10**num_lines_exponent): f.write(f"{i}\n") # print(f"10 ^ {num_lines_exponent}: {10 ** num_lines_exponent}") # print(f"Fixture file size: {os.path.getsize(file_path)} bytes") return str(file_path) @patch("sys.stdout", new_callable=MockStream) def test_high_throughput_stdout(mock_stdout: MockStream, fixture_path: str) -> None: with tempfile.NamedTemporaryFile(mode="wb") as tmpfile: process = subprocess.Popen( ["cat", fixture_path], stdout=tmpfile, ) stream = TailPipe(tmpfile.name, mock_stdout.buffer) stream.start() process.wait() stream.close() assert process.returncode == 0 with open(fixture_path, "rb") as fixture: expected = fixture.read() assert mock_stdout.getvalue() == expected @patch("sys.stderr", new_callable=MockStream) def test_high_throughput_stderr(mock_stderr: MockStream, fixture_path: str) -> None: with tempfile.NamedTemporaryFile(mode="wb") as tmpfile: process = subprocess.Popen( [Path(__file__).with_name("data") / "cat_to_err.py", fixture_path], stdout=subprocess.DEVNULL, stderr=tmpfile, ) stream = TailPipe(tmpfile.name, mock_stderr.buffer) stream.start() process.wait() stream.close() assert process.returncode == 0 with open(fixture_path, "rb") as fixture: expected = fixture.read() assert mock_stderr.getvalue() == expected @patch("sys.stdout", new_callable=MockStream) def test_close(mock_stdout: MockStream) -> None: with tempfile.NamedTemporaryFile(mode="wb") as tmpfile: stream = TailPipe(tmpfile.name, mock_stdout.buffer) stream.start() stream.close() assert stream.infile is not None assert stream.infile.closed ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1741097484.0 
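The throughput tests above treat TailPipe as a tail(1)-style follower: the child process writes to a temp file, TailPipe reads newly appended bytes from that file and copies them into the mocked stream's buffer, and close() stops it. A simplified, synchronous sketch of that follow loop, with a hypothetical function name and an explicit byte-count stop condition in place of the threaded close() mechanism duct actually uses:

```python
import io
import time


def tail_file_into(path: str, buffer: io.BytesIO, stop_after_bytes: int) -> None:
    """Follow a file, copying new bytes into buffer.

    Stops once stop_after_bytes have been copied; the real TailPipe
    instead runs in a background thread and stops when close() is
    called after the writer process exits.
    """
    copied = 0
    with open(path, "rb") as infile:
        while copied < stop_after_bytes:
            chunk = infile.read(65536)
            if chunk:
                buffer.write(chunk)
                copied += len(chunk)
            else:
                time.sleep(0.01)  # wait for the writer to produce more
```

The high-throughput tests then compare `buffer.getvalue()` against the fixture file byte-for-byte, which is exactly what MockStream's BytesIO-backed buffer makes possible.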
con_duct-0.11.0/test/test_utils.py0000644000175100001660000000225314761605014016530 0ustar00runnerdocker
import pytest
from con_duct.utils import parse_version


@pytest.mark.parametrize(
    ("lesser", "greater"),
    [
        ("0.0.0", "1.0.0"),  # sanity
        ("0.2.0", "0.12.0"),  # each component should be compared as an int
        ("0.99.99", "1.0.0"),  # X matters more than Y or Z
        ("0.0.99", "0.1.0"),  # Y matters more than Z
        ("3.2.1", "3.2.01"),  # Leading zeros are ok
    ],
)
def test_parse_version_green(lesser: str, greater: str) -> None:
    assert parse_version(greater) >= parse_version(lesser)


@pytest.mark.parametrize(
    "invalid",
    [
        "1",
        "1.1.1.1",  # Four shalt thou not count
        "1.1",  # neither count thou two, excepting that thou then proceed to three
        "5.4.3.2.1",  # Five is right out
    ],
)
def test_parse_version_invalid_length(invalid: str) -> None:
    with pytest.raises(ValueError, match="Invalid version format"):
        parse_version(invalid)


@pytest.mark.parametrize(
    "invalid",
    [
        "a.b.c",
        "1.2.3a1",
    ],
)
def test_parse_version_invalid_type(invalid: str) -> None:
    with pytest.raises(ValueError, match="invalid literal for int"):
        parse_version(invalid)
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1741097484.0
con_duct-0.11.0/test/test_validation.py0000644000175100001660000000213114761605014017515 0ustar00runnerdocker
import argparse
import pytest
from con_duct.__main__ import Arguments, assert_num


def test_sample_less_than_report_interval() -> None:
    args = Arguments.from_argv(
        ["fake"],
        sample_interval=0.01,
        report_interval=0.1,
    )
    assert args.sample_interval <= args.report_interval


def test_sample_equal_to_report_interval() -> None:
    args = Arguments.from_argv(
        ["fake"],
        sample_interval=0.1,
        report_interval=0.1,
    )
    assert args.sample_interval == args.report_interval


def test_sample_greater_than_report_interval() -> None:
    with pytest.raises(argparse.ArgumentError):
        Arguments.from_argv(
            ["fake"],
            sample_interval=1.0,
            report_interval=0.1,
        )
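The test_utils.py tests above fully pin down parse_version's contract: exactly three dot-separated components (a ValueError mentioning "Invalid version format" otherwise), each parsed with int() so that non-numeric parts surface int()'s own "invalid literal for int" error, and the result compares element-wise. A minimal implementation consistent with those tests; the real one lives in con_duct.utils and may differ in detail:

```python
def parse_version(version: str) -> tuple:
    """Parse 'X.Y.Z' into a comparable (X, Y, Z) tuple of ints.

    Exactly three components are required; int() is applied to each,
    which both discards leading zeros and raises on non-numeric parts.
    Tuple comparison then gives the ordering the green tests assert
    (X outweighs Y, Y outweighs Z).
    """
    parts = version.split(".")
    if len(parts) != 3:
        raise ValueError(f"Invalid version format: {version}")
    return tuple(int(part) for part in parts)
```

This is the packaging.Version replacement mentioned in the v0.11.0 changelog entry, reduced to the behavior the tests actually exercise.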
@pytest.mark.parametrize("input_value", [0, 1, 2, -1, 100, 0.001, -1.68]) def test_assert_num_green(input_value: int) -> None: assert_num(input_value) @pytest.mark.parametrize("input_value", ["hi", "0", "one"]) def test_assert_num_red(input_value: int) -> None: with pytest.raises(AssertionError): assert_num(input_value) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1741097484.0 con_duct-0.11.0/test/utils.py0000644000175100001660000000133214761605014015466 0ustar00runnerdockerfrom __future__ import annotations from io import BytesIO from pathlib import Path class MockStream: """Mocks stderr or stdout""" def __init__(self) -> None: self.buffer = BytesIO() def getvalue(self) -> bytes: return self.buffer.getvalue() def assert_files(parent_dir: str, file_list: list[str], exists: bool = True) -> None: if exists: for file_path in file_list: assert Path( parent_dir, file_path ).exists(), f"Expected file does not exist: {file_path}" else: for file_path in file_list: assert not Path( parent_dir, file_path ).exists(), f"Unexpected file should not exist: {file_path}" ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1741097484.0 con_duct-0.11.0/tox.ini0000644000175100001660000000235614761605014014317 0ustar00runnerdocker[tox] envlist = lint,typing,py39,py310,py311,py312,py313,pypy3 skip_missing_interpreters = True isolated_build = True minversion = 3.3.0 [testenv] deps = pytest pytest-cov -e .[all] commands = pytest {posargs} test passenv = USER [testenv:lint] deps = flake8 flake8-bugbear flake8-builtins flake8-unused-arguments commands = flake8 src test [testenv:typing] deps = mypy data-science-types # TODO replace archived, https://github.com/wearepal/data-science-types types-PyYAML {[testenv]deps} commands = mypy src test [pytest] addopts = --cov=con_duct --no-cov-on-fail filterwarnings = error norecursedirs = test/data [coverage:run] branch = True parallel = True [coverage:paths] source = src 
.tox/**/site-packages [coverage:report] precision = 2 show_missing = True [flake8] doctests = True exclude = .*/,build/,dist/,test/data,venv/ max-line-length = 100 unused-arguments-ignore-stub-functions = True select = A,B,B902,B950,C,E,E242,F,U100,W ignore = A003,B005,E203,E262,E266,E501,W503 [isort] atomic = True force_sort_within_sections = True honor_noqa = True lines_between_sections = 0 profile = black reverse_relative = True sort_relative_in_force_sorted_sections = True src_paths = src