Preprocessing
The preprocessing pipeline converts raw imaging data into simulation-ready head meshes. This is a one-time setup per subject — once complete, you can run unlimited simulations without repeating these steps.
```mermaid
graph LR
    A([DICOM]) -->|dcm2niix| B([NIfTI T1/T2])
    B -->|CHARM| C([Head Mesh])
    C -->|subject_atlas| D([Atlas Parcellations])
    B -->|recon-all| E([FreeSurfer Surfaces])
    C -->|tissue analysis| F([Tissue Report])
    B -->|QSIPrep| G([DWI Preprocessed])
    G -->|QSIRecon| H([DTI Tensors])
    style A fill:#1a3a5c,stroke:#48a,color:#fff
    style B fill:#1a5c4a,stroke:#4a8,color:#fff
    style C fill:#1a5c4a,stroke:#4a8,color:#fff
    style D fill:#1a5c4a,stroke:#4a8,color:#fff
    style E fill:#1a5c4a,stroke:#4a8,color:#fff
    style F fill:#1a5c4a,stroke:#4a8,color:#fff
    style G fill:#1a5c4a,stroke:#4a8,color:#fff
    style H fill:#1a5c4a,stroke:#4a8,color:#fff
```
Full Pipeline
Run all preprocessing steps with a single call:
```python
from tit.pre import run_pipeline

exit_code = run_pipeline(
    subject_ids=["001", "002"],
    convert_dicom=True,
    run_recon=True,
    parallel_recon=True,
    parallel_cores=4,
    create_m2m=True,
    run_tissue_analysis=True,
    run_qsiprep=False,
    run_qsirecon=False,
    extract_dti=False,
    run_subcortical_segmentations=False,
)
```
Selective Steps
Each boolean flag controls a specific step. Set only the ones you need — for example, if FreeSurfer recon-all is already done, set run_recon=False and create_m2m=True to run only CHARM (which also runs subject_atlas automatically).
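As a minimal sketch of that scenario, the flag set for a CHARM-only run might look like this (flag names are taken from run_pipeline's signature; the call itself is left commented out because it requires a configured project):

```python
# CHARM-only run: skip DICOM conversion and recon-all, build the head
# mesh (plus atlases) and the tissue report. Keys match run_pipeline's
# keyword arguments.
charm_only_flags = {
    "convert_dicom": False,
    "run_recon": False,
    "create_m2m": True,
    "run_tissue_analysis": True,
}

# exit_code = run_pipeline(subject_ids=["001"], **charm_only_flags)
```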
Individual Steps
Each preprocessing step can be called independently for finer control:
```python
import logging

from tit.pre import (
    run_dicom_to_nifti,
    run_recon_all,
    run_charm,
    run_tissue_analysis,
    run_subcortical_segmentations,
    run_qsiprep,
    run_qsirecon,
    extract_dti_tensor,
    discover_subjects,
    check_m2m_exists,
)

logger = logging.getLogger("my_pipeline")
project = "/path/to/bids_project"

# Discover subjects from sourcedata/
subjects = discover_subjects(project)

# Check if head mesh already exists before running CHARM
if not check_m2m_exists(project, "001"):
    run_charm(project, "001", logger=logger)
```
Step Details
| Step | Function | What It Does |
| --- | --- | --- |
| DICOM to NIfTI | `run_dicom_to_nifti()` | Converts DICOM files to NIfTI format using dcm2niix |
| CHARM head mesh | `run_charm()` | Creates SimNIBS-compatible head mesh from T1/T2 images |
| Subject atlas | `run_subject_atlas()` | Creates atlas-based parcellations (a2009s, DK40, HCP_MMP1); runs automatically after CHARM in the pipeline |
| FreeSurfer recon-all | `run_recon_all()` | Optional full cortical reconstruction and subcortical segmentation (takes 6-12 hours per subject) |
| Tissue analysis | `run_tissue_analysis()` | Analyzes tissue thickness and volume (bone, CSF, skin) from the head mesh |
| Subcortical segmentation | `run_subcortical_segmentations()` | Runs thalamic nuclei and hippocampal subfield segmentations standalone (also runs automatically at the end of run_recon_all) |
Compute Time
FreeSurfer recon-all is optional and, when enabled, is the most time-consuming step (6-12 hours per subject). Use parallel_recon=True with parallel_cores to process multiple subjects simultaneously via a Python ThreadPoolExecutor; in sequential mode, each subject can instead use FreeSurfer's internal parallelism.
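The parallel mode follows the standard ThreadPoolExecutor pattern; a minimal, self-contained sketch (the worker here is a stand-in for the real, hours-long recon-all call):

```python
from concurrent.futures import ThreadPoolExecutor

def fake_recon(subject_id: str) -> str:
    # Stand-in for the real recon-all invocation, which blocks for hours.
    return f"sub-{subject_id}: done"

subjects = ["001", "002", "003"]
parallel_cores = 2

# Process up to `parallel_cores` subjects at a time, analogous to what
# run_pipeline does when parallel_recon=True.
with ThreadPoolExecutor(max_workers=parallel_cores) as pool:
    results = list(pool.map(fake_recon, subjects))
```

`pool.map` preserves input order, so results line up with the subject list even when workers finish out of order.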
DTI / Diffusion Pipeline
For anisotropic conductivity simulations, TI-Toolbox supports diffusion processing via QSIPrep/QSIRecon Docker containers. The default dsi_studio_gqi reconstruction spec directly produces the tensor components SimNIBS needs, and works with both single-shell and multi-shell acquisitions.
```python
import logging

from tit.pre import run_qsiprep, run_qsirecon, extract_dti_tensor

logger = logging.getLogger("my_pipeline")
project = "/path/to/bids_project"

# Run QSIPrep DWI preprocessing
run_qsiprep(project, "001", logger=logger)

# Run QSIRecon reconstruction (default: dsi_studio_gqi)
run_qsirecon(project, "001", logger=logger)

# Extract DTI tensor for SimNIBS anisotropic conductivity
extract_dti_tensor(project, "001", logger=logger)
```
These steps can also be included in the full pipeline by setting run_qsiprep=True, run_qsirecon=True, and extract_dti=True. Optional configuration dicts (qsiprep_config, qsi_recon_config) control parameters such as output resolution, recon specs, and atlases.
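As an illustration only, the configuration dicts might look like this — the `qsiprep_config` keys are inferred from run_qsiprep's keyword arguments, while the `qsi_recon_config` key shown is a hypothetical placeholder, not a documented name:

```python
# Keys mirror run_qsiprep's keyword arguments (output_resolution, cpus,
# memory_gb). The "recon_spec" key in qsi_recon_config is an assumption
# for illustration; check run_qsirecon's signature for the real name.
qsiprep_config = {
    "output_resolution": 1.5,  # target resolution in mm (default 2.0)
    "cpus": 8,
    "memory_gb": 32,
}
qsi_recon_config = {
    "recon_spec": "dsi_studio_gqi",  # hypothetical key; default spec
}
```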
Validation & Platform Notes
This pipeline is functional and produces stable, consistent results. The full chain warrants further validation by domain experts; community input is welcome. On Apple Silicon Macs, QSIPrep/QSIRecon run under Rosetta 2 emulation and may be slower or less stable; allocate 32 GB+ of Docker memory.
BIDS Directory Structure
After preprocessing, your project follows this layout:
```text
project_root/
├── sourcedata/                  # Raw DICOM
├── sub-001/
│   └── anat/                    # NIfTI files (T1w, T2w)
└── derivatives/
    ├── SimNIBS/sub-001/
    │   └── m2m_001/             # Head mesh (simulation-ready)
    │       └── segmentation/    # Atlas parcellations
    ├── freesurfer/sub-001/      # optional recon-all outputs
    ├── qsiprep/sub-001/         # QSIPrep DWI outputs (if run)
    └── qsirecon/sub-001/        # QSIRecon tensor outputs (if run)
```
API Reference
tit.pre.structural.run_pipeline
```python
run_pipeline(
    subject_ids: Iterable[str],
    *,
    convert_dicom: bool = False,
    run_recon: bool = False,
    parallel_recon: bool = False,
    parallel_cores: int | None = None,
    create_m2m: bool = False,
    run_tissue_analysis: bool = False,
    run_qsiprep: bool = False,
    run_qsirecon: bool = False,
    qsiprep_config: dict | None = None,
    qsi_recon_config: dict | None = None,
    extract_dti: bool = False,
    run_subcortical_segmentations: bool = False,
    debug: bool = False,
    stop_event: object | None = None,
    logger_callback: Callable | None = None,
    runner: CommandRunner | None = None,
) -> int
```
Run the preprocessing pipeline for one or more subjects.
Orchestrates DICOM conversion, FreeSurfer recon-all, SimNIBS CHARM,
tissue analysis, QSIPrep/QSIRecon DWI preprocessing, DTI tensor
extraction, and subcortical segmentation. Steps are enabled via
boolean flags; disabled steps are skipped.
Parameters
subject_ids : iterable of str
Subject identifiers without the sub- prefix.
convert_dicom : bool, optional
Run DICOM-to-NIfTI conversion.
run_recon : bool, optional
Run FreeSurfer recon-all.
parallel_recon : bool, optional
Run recon-all in parallel across subjects.
parallel_cores : int or None, optional
Maximum number of parallel subjects for recon-all.
create_m2m : bool, optional
Run SimNIBS charm (also runs subject_atlas for .annot
files).
run_tissue_analysis : bool, optional
Run tissue-volume and thickness analysis.
run_qsiprep : bool, optional
Run QSIPrep DWI preprocessing via Docker.
run_qsirecon : bool, optional
Run QSIRecon reconstruction via Docker.
qsiprep_config : dict or None, optional
Extra configuration passed to run_qsiprep.
qsi_recon_config : dict or None, optional
Extra configuration passed to run_qsirecon.
extract_dti : bool, optional
Extract DTI tensor for SimNIBS anisotropic conductivity.
run_subcortical_segmentations : bool, optional
Run thalamic-nuclei and hippocampal-subfield segmentations.
debug : bool, optional
Enable verbose logging.
stop_event : object or None, optional
Threading event used to cancel running steps.
logger_callback : callable or None, optional
Callback used by the GUI to capture log lines.
runner : CommandRunner or None, optional
Subprocess runner used to stream command output.
Returns
int
0 on success, 1 on failure.
Raises
PreprocessError
If no subjects are provided or a preprocessing step fails.
PreprocessCancelled
If stop_event is set during execution.
See Also
run_dicom_to_nifti : DICOM-to-NIfTI conversion step.
run_recon_all : FreeSurfer recon-all step.
run_charm : SimNIBS CHARM head-mesh step.
run_tissue_analysis : Tissue analysis step.
run_qsiprep : QSIPrep DWI preprocessing step.
run_qsirecon : QSIRecon reconstruction step.
extract_dti_tensor : DTI tensor extraction step.
Source code in tit/pre/structural.py
```python
def run_pipeline(
    subject_ids: Iterable[str],
    *,
    convert_dicom: bool = False,
    run_recon: bool = False,
    parallel_recon: bool = False,
    parallel_cores: int | None = None,
    create_m2m: bool = False,
    run_tissue_analysis: bool = False,
    run_qsiprep: bool = False,
    run_qsirecon: bool = False,
    qsiprep_config: dict | None = None,
    qsi_recon_config: dict | None = None,
    extract_dti: bool = False,
    run_subcortical_segmentations: bool = False,
    debug: bool = False,
    stop_event: object | None = None,
    logger_callback: Callable | None = None,
    runner: CommandRunner | None = None,
) -> int:
    """Run the preprocessing pipeline for one or more subjects.

    Orchestrates DICOM conversion, FreeSurfer recon-all, SimNIBS CHARM,
    tissue analysis, QSIPrep/QSIRecon DWI preprocessing, DTI tensor
    extraction, and subcortical segmentation. Steps are enabled via
    boolean flags; disabled steps are skipped.

    Parameters
    ----------
    subject_ids : iterable of str
        Subject identifiers without the ``sub-`` prefix.
    convert_dicom : bool, optional
        Run DICOM-to-NIfTI conversion.
    run_recon : bool, optional
        Run FreeSurfer ``recon-all``.
    parallel_recon : bool, optional
        Run ``recon-all`` in parallel across subjects.
    parallel_cores : int or None, optional
        Maximum number of parallel subjects for ``recon-all``.
    create_m2m : bool, optional
        Run SimNIBS ``charm`` (also runs ``subject_atlas`` for ``.annot``
        files).
    run_tissue_analysis : bool, optional
        Run tissue-volume and thickness analysis.
    run_qsiprep : bool, optional
        Run QSIPrep DWI preprocessing via Docker.
    run_qsirecon : bool, optional
        Run QSIRecon reconstruction via Docker.
    qsiprep_config : dict or None, optional
        Extra configuration passed to ``run_qsiprep``.
    qsi_recon_config : dict or None, optional
        Extra configuration passed to ``run_qsirecon``.
    extract_dti : bool, optional
        Extract DTI tensor for SimNIBS anisotropic conductivity.
    run_subcortical_segmentations : bool, optional
        Run thalamic-nuclei and hippocampal-subfield segmentations.
    debug : bool, optional
        Enable verbose logging.
    stop_event : object or None, optional
        Threading event used to cancel running steps.
    logger_callback : callable or None, optional
        Callback used by the GUI to capture log lines.
    runner : CommandRunner or None, optional
        Subprocess runner used to stream command output.

    Returns
    -------
    int
        ``0`` on success, ``1`` on failure.

    Raises
    ------
    PreprocessError
        If no subjects are provided or a preprocessing step fails.
    PreprocessCancelled
        If *stop_event* is set during execution.

    See Also
    --------
    run_dicom_to_nifti : DICOM-to-NIfTI conversion step.
    run_recon_all : FreeSurfer recon-all step.
    run_charm : SimNIBS CHARM head-mesh step.
    run_tissue_analysis : Tissue analysis step.
    run_qsiprep : QSIPrep DWI preprocessing step.
    run_qsirecon : QSIRecon reconstruction step.
    extract_dti_tensor : DTI tensor extraction step.
    """
    from tit.telemetry import track_operation
    from tit import constants as _const

    with track_operation(_const.TELEMETRY_OP_PRE_PIPELINE):
        return _run_pipeline_inner(
            subject_ids,
            convert_dicom=convert_dicom,
            run_recon=run_recon,
            parallel_recon=parallel_recon,
            parallel_cores=parallel_cores,
            create_m2m=create_m2m,
            run_tissue_analysis=run_tissue_analysis,
            run_qsiprep=run_qsiprep,
            run_qsirecon=run_qsirecon,
            qsiprep_config=qsiprep_config,
            qsi_recon_config=qsi_recon_config,
            extract_dti=extract_dti,
            run_subcortical_segmentations=run_subcortical_segmentations,
            debug=debug,
            stop_event=stop_event,
            logger_callback=logger_callback,
            runner=runner,
        )
```
tit.pre.utils.discover_subjects
```python
discover_subjects(project_dir: str | None) -> list[str]
```
Return sorted, deduplicated subject IDs found in a BIDS project tree.
Returns an empty list when project_dir is None (project not
configured).
Discovery order:

1. `sourcedata/sub-*/T1w/` or `T2w/` -- any subdir, NIfTI, DICOM, or supported DICOM archive (`.zip`, `.tar`, `.tar.gz`, `.tgz`).
2. `sourcedata/sub-*/*.tgz` (compressed bundles at top level).
3. `sub-*/anat/*T1w*.nii[.gz]` or `*T2w*.nii[.gz]` at project root.
Parameters
project_dir : str
BIDS project root directory.
Returns
list[str]
Sorted list of subject identifiers (without the sub- prefix).
See Also
check_m2m_exists : Check whether a subject's m2m directory exists.
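As a concrete illustration of discovery rule 1, the sketch below builds the minimal `sourcedata` layout the rule accepts and replays the suffix check the function applies to each modality folder (the suffix tuple is copied from the source listing below; no TI-Toolbox call is made):

```python
import os
import tempfile

# Minimal layout matched by rule 1: sourcedata/sub-001/T1w/ containing
# a supported DICOM archive.
root = tempfile.mkdtemp()
t1w_dir = os.path.join(root, "sourcedata", "sub-001", "T1w")
os.makedirs(t1w_dir)
open(os.path.join(t1w_dir, "scan.zip"), "wb").close()

# Same case-insensitive suffix match as in discover_subjects.
supported = (".dcm", ".dicom", ".zip", ".tar", ".tar.gz", ".tgz",
             ".json", ".nii", ".nii.gz")
has_input = any(f.lower().endswith(supported) for f in os.listdir(t1w_dir))
```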
Source code in tit/pre/utils.py
```python
def discover_subjects(project_dir: str | None) -> list[str]:
    """Return sorted, deduplicated subject IDs found in a BIDS project tree.

    Returns an empty list when *project_dir* is ``None`` (project not
    configured).

    Discovery order:

    1. ``sourcedata/sub-*/T1w/`` or ``T2w/`` -- any subdir, NIfTI, DICOM,
       or supported DICOM archive (``.zip``, ``.tar``, ``.tar.gz``, ``.tgz``).
    2. ``sourcedata/sub-*/*.tgz`` (compressed bundles at top level).
    3. ``sub-*/anat/*T1w*.nii[.gz]`` or ``*T2w*.nii[.gz]`` at project root.

    Parameters
    ----------
    project_dir : str
        BIDS project root directory.

    Returns
    -------
    list[str]
        Sorted list of subject identifiers (without the ``sub-`` prefix).

    See Also
    --------
    check_m2m_exists : Check whether a subject's m2m directory exists.
    """
    if project_dir is None:
        return []
    found: list[str] = []
    sourcedata_dir = os.path.join(project_dir, "sourcedata")
    if os.path.exists(sourcedata_dir):
        for subj_dir in glob.glob(os.path.join(sourcedata_dir, "sub-*")):
            if os.path.isdir(subj_dir):
                t1w_dir = os.path.join(subj_dir, "T1w")
                t2w_dir = os.path.join(subj_dir, "T2w")
                supported_modality_files = (
                    ".dcm",
                    ".dicom",
                    ".zip",
                    ".tar",
                    ".tar.gz",
                    ".tgz",
                    ".json",
                    ".nii",
                    ".nii.gz",
                )
                has_valid_structure = (
                    (
                        os.path.exists(t1w_dir)
                        and (
                            any(
                                os.path.isdir(os.path.join(t1w_dir, d))
                                for d in os.listdir(t1w_dir)
                            )
                            or any(
                                f.lower().endswith(supported_modality_files)
                                for f in os.listdir(t1w_dir)
                            )
                        )
                    )
                    or (
                        os.path.exists(t2w_dir)
                        and (
                            any(
                                os.path.isdir(os.path.join(t2w_dir, d))
                                for d in os.listdir(t2w_dir)
                            )
                            or any(
                                f.lower().endswith(supported_modality_files)
                                for f in os.listdir(t2w_dir)
                            )
                        )
                    )
                    or any(f.endswith(".tgz") for f in os.listdir(subj_dir))
                )
                if has_valid_structure:
                    subject_id = os.path.basename(subj_dir).replace("sub-", "")
                    found.append(subject_id)
    for subj_dir in glob.glob(os.path.join(project_dir, "sub-*")):
        if os.path.isdir(subj_dir):
            subject_id = os.path.basename(subj_dir).replace("sub-", "")
            if subject_id in found:
                continue
            anat_dir = os.path.join(subj_dir, "anat")
            if os.path.exists(anat_dir):
                has_nifti = any(
                    f.endswith((".nii", ".nii.gz")) and ("T1w" in f or "T2w" in f)
                    for f in os.listdir(anat_dir)
                )
                if has_nifti:
                    found.append(subject_id)
    return sorted(found)
```
tit.pre.utils.check_m2m_exists
```python
check_m2m_exists(project_dir: str, subject_id: str) -> bool
```
Return True if the SimNIBS m2m directory already exists.
Checks for
<project_dir>/derivatives/SimNIBS/sub-<subject_id>/m2m_<subject_id>.
Parameters
project_dir : str
BIDS project root directory.
subject_id : str
Subject identifier without the sub- prefix.
Returns
bool
True if the m2m directory exists on disk.
See Also
discover_subjects : Find all subject IDs in a BIDS project.
run_charm : Generate the m2m head mesh.
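The check reduces to a single path test; a self-contained sketch of the same logic against a temporary project root:

```python
import os
import tempfile

project_dir = tempfile.mkdtemp()
subject_id = "001"

# Same path the function tests:
# <project_dir>/derivatives/SimNIBS/sub-<id>/m2m_<id>
m2m_dir = os.path.join(
    project_dir, "derivatives", "SimNIBS",
    f"sub-{subject_id}", f"m2m_{subject_id}",
)
before = os.path.exists(m2m_dir)   # mesh not built yet
os.makedirs(m2m_dir)
after = os.path.exists(m2m_dir)    # CHARM output now present
```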
Source code in tit/pre/utils.py
```python
def check_m2m_exists(project_dir: str, subject_id: str) -> bool:
    """Return ``True`` if the SimNIBS m2m directory already exists.

    Checks for
    ``<project_dir>/derivatives/SimNIBS/sub-<subject_id>/m2m_<subject_id>``.

    Parameters
    ----------
    project_dir : str
        BIDS project root directory.
    subject_id : str
        Subject identifier without the ``sub-`` prefix.

    Returns
    -------
    bool
        ``True`` if the m2m directory exists on disk.

    See Also
    --------
    discover_subjects : Find all subject IDs in a BIDS project.
    run_charm : Generate the m2m head mesh.
    """
    m2m_dir = os.path.join(
        project_dir,
        "derivatives",
        "SimNIBS",
        f"sub-{subject_id}",
        f"m2m_{subject_id}",
    )
    return os.path.exists(m2m_dir)
```
tit.pre.dicom2nifti.run_dicom_to_nifti
```python
run_dicom_to_nifti(project_dir: str, subject_id: str, *, logger, runner: CommandRunner | None = None) -> None
```
Convert DICOM files to BIDS-compliant NIfTI for a subject.
Looks for T1w and T2w DICOM directories under
sourcedata/sub-{subject_id}/ and converts each found modality
using dcm2niix. DICOM discovery is recursive under each
modality's dicom/ directory and includes .dcm and .dicom
files. Supported archives (.zip, .tar, .tar.gz, .tgz)
placed directly in the modality folder or its dicom/ folder are
safely extracted to dicom/extracted_archives/ before discovery.
Parameters
project_dir : str
BIDS project root directory.
subject_id : str
Subject identifier without the sub- prefix.
logger : logging.Logger
Logger for progress messages.
runner : CommandRunner or None, optional
Subprocess runner for streaming output.
Raises
PreprocessError
If output NIfTI files already exist for a modality.
See Also
run_pipeline : Full preprocessing pipeline.
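The archive handling described above hinges on a case-insensitive suffix match; a small sketch of that behavior (the helper name `is_supported_archive` is ours, introduced for illustration):

```python
# Supported-archive suffixes copied from the docstring above.
SUPPORTED_ARCHIVES = (".zip", ".tar", ".tar.gz", ".tgz")

def is_supported_archive(filename: str) -> bool:
    # Case-insensitive, so SCAN.ZIP is treated like scan.zip.
    return filename.lower().endswith(SUPPORTED_ARCHIVES)

names = ["t1.zip", "t1.tar.gz", "notes.txt"]
archives = [n for n in names if is_supported_archive(n)]
```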
Source code in tit/pre/dicom2nifti.py
```python
def run_dicom_to_nifti(
    project_dir: str,
    subject_id: str,
    *,
    logger,
    runner: CommandRunner | None = None,
) -> None:
    """Convert DICOM files to BIDS-compliant NIfTI for a subject.

    Looks for ``T1w`` and ``T2w`` DICOM directories under
    ``sourcedata/sub-{subject_id}/`` and converts each found modality
    using ``dcm2niix``. DICOM discovery is recursive under each
    modality's ``dicom/`` directory and includes ``.dcm`` and ``.dicom``
    files. Supported archives (``.zip``, ``.tar``, ``.tar.gz``, ``.tgz``)
    placed directly in the modality folder or its ``dicom/`` folder are
    safely extracted to ``dicom/extracted_archives/`` before discovery.

    Parameters
    ----------
    project_dir : str
        BIDS project root directory.
    subject_id : str
        Subject identifier without the ``sub-`` prefix.
    logger : logging.Logger
        Logger for progress messages.
    runner : CommandRunner or None, optional
        Subprocess runner for streaming output.

    Raises
    ------
    PreprocessError
        If output NIfTI files already exist for a modality.

    See Also
    --------
    run_pipeline : Full preprocessing pipeline.
    """
    from tit.telemetry import track_operation
    from tit import constants as _const

    with track_operation(_const.TELEMETRY_OP_PRE_DICOM):
        pm = get_path_manager(project_dir)
        sourcedata_dir = Path(pm.sourcedata_subject(subject_id))
        bids_anat_dir = Path(pm.bids_anat(subject_id))
        bids_anat_dir.mkdir(parents=True, exist_ok=True)
        converted = False
        for modality in ("T1w", "T2w"):
            dicom_dir = sourcedata_dir / modality / "dicom"
            if _convert_modality(
                dicom_dir, bids_anat_dir, subject_id, modality, logger, runner
            ):
                converted = True
        if not converted:
            logger.warning("No DICOM files found or converted")
```
tit.pre.recon_all.run_recon_all
```python
run_recon_all(project_dir: str, subject_id: str, *, logger, parallel: bool = False, runner: CommandRunner | None = None) -> None
```
Run FreeSurfer recon-all for a subject.
Runs the full recon-all -all pipeline and, upon success,
automatically runs thalamic-nuclei and hippocampal-subfield
segmentations.
Parameters
project_dir : str
BIDS project root.
subject_id : str
Subject identifier without the sub- prefix.
logger : logging.Logger
Logger used for progress and command output.
parallel : bool, optional
Use FreeSurfer OpenMP parallelization.
runner : CommandRunner or None, optional
Subprocess runner used to stream output.
Raises
PreprocessError
If no T1 file is found, the output directory already exists, or
recon-all exits with a non-zero code.
See Also
run_subcortical_segmentations : Standalone subcortical segmentation.
run_charm : SimNIBS CHARM head-mesh generation.
Source code in tit/pre/recon_all.py
```python
def run_recon_all(
    project_dir: str,
    subject_id: str,
    *,
    logger,
    parallel: bool = False,
    runner: CommandRunner | None = None,
) -> None:
    """Run FreeSurfer ``recon-all`` for a subject.

    Runs the full ``recon-all -all`` pipeline and, upon success,
    automatically runs thalamic-nuclei and hippocampal-subfield
    segmentations.

    Parameters
    ----------
    project_dir : str
        BIDS project root.
    subject_id : str
        Subject identifier without the ``sub-`` prefix.
    logger : logging.Logger
        Logger used for progress and command output.
    parallel : bool, optional
        Use FreeSurfer OpenMP parallelization.
    runner : CommandRunner or None, optional
        Subprocess runner used to stream output.

    Raises
    ------
    PreprocessError
        If no T1 file is found, the output directory already exists, or
        ``recon-all`` exits with a non-zero code.

    See Also
    --------
    run_subcortical_segmentations : Standalone subcortical segmentation.
    run_charm : SimNIBS CHARM head-mesh generation.
    """
    from tit.telemetry import track_operation
    from tit import constants as _const

    with track_operation(_const.TELEMETRY_OP_PRE_RECON_ALL):
        pm = get_path_manager(project_dir)
        fs_subject_dir = Path(pm.freesurfer_subject(subject_id))
        fs_subjects_root = fs_subject_dir.parent
        t1_file, t2_file = _find_anat_files(subject_id)
        if not t1_file:
            bids_anat_dir = Path(pm.bids_anat(subject_id))
            raise PreprocessError(f"No T1 file found in {bids_anat_dir}")
        if fs_subject_dir.exists():
            if any(fs_subject_dir.iterdir()):
                raise PreprocessError(
                    f"FreeSurfer output already exists at {fs_subject_dir}. "
                    "Remove the directory manually before rerunning."
                )
            else:
                shutil.rmtree(fs_subject_dir, ignore_errors=True)
        cmd = ["recon-all", "-subject", f"sub-{subject_id}", "-i", str(t1_file)]
        if t2_file:
            cmd += ["-T2", str(t2_file), "-T2pial"]
        cmd += ["-all", "-sd", str(fs_subjects_root)]
        if parallel:
            cmd.append("-parallel")
        logger.info(f"Running recon-all for subject {subject_id}")
        if runner:
            exit_code = runner.run(cmd, logger=logger)
        else:
            exit_code = subprocess.call(cmd)
        if exit_code != 0:
            raise PreprocessError(
                f"recon-all failed for subject {subject_id} (exit {exit_code})."
            )
        _run_subcortical_segmentations(
            subject_id, fs_subjects_root, logger=logger, runner=runner
        )
```
tit.pre.recon_all.run_subcortical_segmentations
```python
run_subcortical_segmentations(project_dir: str, subject_id: str, *, logger, runner: CommandRunner | None = None) -> None
```
Run thalamic-nuclei and hippocampal-subfield segmentations standalone.
Resolves the FreeSurfer subjects directory from the project layout and
delegates to the internal segmentation runner. Intended for cases where
recon-all has already completed and only the subcortical step needs
to be (re-)run.
Parameters
project_dir : str
BIDS project root.
subject_id : str
Subject identifier without the sub- prefix.
logger : logging.Logger
Logger for progress output.
runner : CommandRunner or None, optional
Subprocess runner for streaming output.
See Also
run_recon_all : Full FreeSurfer recon-all (includes subcortical).
run_pipeline : Full preprocessing pipeline.
Source code in tit/pre/recon_all.py
```python
def run_subcortical_segmentations(
    project_dir: str,
    subject_id: str,
    *,
    logger,
    runner: CommandRunner | None = None,
) -> None:
    """Run thalamic-nuclei and hippocampal-subfield segmentations standalone.

    Resolves the FreeSurfer subjects directory from the project layout and
    delegates to the internal segmentation runner. Intended for cases where
    ``recon-all`` has already completed and only the subcortical step needs
    to be (re-)run.

    Parameters
    ----------
    project_dir : str
        BIDS project root.
    subject_id : str
        Subject identifier without the ``sub-`` prefix.
    logger : logging.Logger
        Logger for progress output.
    runner : CommandRunner or None, optional
        Subprocess runner for streaming output.

    See Also
    --------
    run_recon_all : Full FreeSurfer ``recon-all`` (includes subcortical).
    run_pipeline : Full preprocessing pipeline.
    """
    pm = get_path_manager(project_dir)
    fs_subject_dir = Path(pm.freesurfer_subject(subject_id))
    fs_subjects_root = fs_subject_dir.parent
    _run_subcortical_segmentations(
        subject_id, fs_subjects_root, logger=logger, runner=runner
    )
```
tit.pre.charm.run_charm
```python
run_charm(project_dir: str, subject_id: str, *, logger, runner: CommandRunner | None = None) -> None
```
Run SimNIBS charm to generate a head mesh for a subject.
Creates an m2m directory at the standard BIDS derivatives location
containing the volumetric head model required for TI simulations.
Parameters
project_dir : str
BIDS project root.
subject_id : str
Subject identifier without the sub- prefix.
logger : logging.Logger
Logger used for progress and command output.
runner : CommandRunner or None, optional
Subprocess runner used to stream output.
Raises
PreprocessError
If no T1 image is found, the m2m directory already exists, or
charm exits with a non-zero code.
See Also
run_subject_atlas : Create atlas .annot files after CHARM.
run_recon_all : FreeSurfer cortical reconstruction.
Source code in tit/pre/charm.py
```python
def run_charm(
    project_dir: str,
    subject_id: str,
    *,
    logger,
    runner: CommandRunner | None = None,
) -> None:
    """Run SimNIBS ``charm`` to generate a head mesh for a subject.

    Creates an m2m directory at the standard BIDS derivatives location
    containing the volumetric head model required for TI simulations.

    Parameters
    ----------
    project_dir : str
        BIDS project root.
    subject_id : str
        Subject identifier without the ``sub-`` prefix.
    logger : logging.Logger
        Logger used for progress and command output.
    runner : CommandRunner or None, optional
        Subprocess runner used to stream output.

    Raises
    ------
    PreprocessError
        If no T1 image is found, the m2m directory already exists, or
        ``charm`` exits with a non-zero code.

    See Also
    --------
    run_subject_atlas : Create atlas ``.annot`` files after CHARM.
    run_recon_all : FreeSurfer cortical reconstruction.
    """
    from tit.telemetry import track_operation
    from tit import constants as _const

    with track_operation(_const.TELEMETRY_OP_PRE_CHARM):
        pm = get_path_manager(project_dir)
        simnibs_subject_dir = Path(pm.sub(subject_id))
        simnibs_subject_dir.mkdir(parents=True, exist_ok=True)
        m2m_dir = Path(pm.m2m(subject_id))
        t1_file, t2_file = _find_anat_files(subject_id)
        if not t1_file:
            bids_anat_dir = Path(pm.bids_anat(subject_id))
            raise PreprocessError(f"No T1 image found in {bids_anat_dir}")
        if m2m_dir.exists():
            raise PreprocessError(
                f"m2m output already exists at {m2m_dir}. "
                "Remove the directory manually before rerunning."
            )
        form_flag = _get_form_flag(t1_file)
        cmd = ["charm", form_flag, subject_id, str(t1_file)]
        if t2_file:
            cmd.append(str(t2_file))
        logger.info(f"Running SimNIBS charm for subject {subject_id}")
        if runner is None:
            runner = CommandRunner()
        exit_code = runner.run(cmd, logger=logger, cwd=str(simnibs_subject_dir))
        if exit_code != 0:
            raise PreprocessError(
                f"charm failed for subject {subject_id} (exit {exit_code})."
            )
```
tit.pre.charm.run_subject_atlas
```python
run_subject_atlas(project_dir: str, subject_id: str, *, logger, runner: CommandRunner | None = None) -> None
```
Run subject_atlas to create .annot files for a subject.
Should be called after run_charm completes successfully.
Generates all three atlases: a2009s, DK40, and HCP_MMP1.
Parameters
project_dir : str
BIDS project root.
subject_id : str
Subject identifier without the sub- prefix.
logger : logging.Logger
Logger used for progress and command output.
runner : CommandRunner or None, optional
Subprocess runner used to stream output.
Raises
PreprocessError
If the m2m directory does not exist or subject_atlas fails.
See Also
run_charm : Generate the m2m head mesh (prerequisite).
Source code in tit/pre/charm.py
```python
def run_subject_atlas(
    project_dir: str,
    subject_id: str,
    *,
    logger,
    runner: CommandRunner | None = None,
) -> None:
    """Run ``subject_atlas`` to create ``.annot`` files for a subject.

    Should be called after ``run_charm`` completes successfully.
    Generates all three atlases: a2009s, DK40, and HCP_MMP1.

    Parameters
    ----------
    project_dir : str
        BIDS project root.
    subject_id : str
        Subject identifier without the ``sub-`` prefix.
    logger : logging.Logger
        Logger used for progress and command output.
    runner : CommandRunner or None, optional
        Subprocess runner used to stream output.

    Raises
    ------
    PreprocessError
        If the m2m directory does not exist or ``subject_atlas`` fails.

    See Also
    --------
    run_charm : Generate the m2m head mesh (prerequisite).
    """
    pm = get_path_manager(project_dir)
    m2m_dir = Path(pm.m2m(subject_id))
    if not m2m_dir.exists():
        raise PreprocessError(f"m2m folder not found at {m2m_dir}. Run charm first.")
    # Output directory for atlas segmentation
    output_dir = m2m_dir / "segmentation"
    output_dir.mkdir(parents=True, exist_ok=True)
    if runner is None:
        runner = CommandRunner()
    logger.info(
        f"Running subject_atlas for subject {subject_id} with atlases: {', '.join(ATLASES)}"
    )
    for atlas in ATLASES:
        cmd = [
            "subject_atlas",
            "-a",
            atlas,
            "-o",
            str(output_dir),
            str(m2m_dir),
        ]
        logger.info(f"  Creating {atlas} atlas...")
        exit_code = runner.run(cmd, logger=logger)
        if exit_code != 0:
            raise PreprocessError(
                f"subject_atlas failed for atlas {atlas} (exit code {exit_code})"
            )
    logger.info(f"All atlases created successfully for subject {subject_id}")
```
tit.pre.tissue_analyzer.run_tissue_analysis
```python
run_tissue_analysis(project_dir: str, subject_id: str, *, tissues: Iterable[str] = DEFAULT_TISSUES, logger: Logger, runner: CommandRunner | None = None) -> dict
```
Run tissue analysis for a subject.
Creates a TissueAnalyzer for each requested tissue type, computes
volume and thickness statistics, and writes reports and visualizations.
Parameters
project_dir : str
BIDS project root.
subject_id : str
Subject identifier without the sub- prefix.
tissues : iterable of str, optional
Tissue types to analyze. Defaults to ('bone', 'csf', 'skin').
logger : logging.Logger
Logger for progress output.
runner : CommandRunner or None, optional
Not used; kept for API compatibility with the pipeline runner.
Returns
dict
Mapping of tissue name to per-tissue analysis results (volume,
thickness, voxel counts, report path).
Raises
PreprocessError
If the segmentation labeling file does not exist.
See Also
TissueAnalyzer : Low-level analysis for a single tissue type.
run_pipeline : Full preprocessing pipeline.
Source code in tit/pre/tissue_analyzer.py
```python
def run_tissue_analysis(
    project_dir: str,
    subject_id: str,
    *,
    tissues: Iterable[str] = DEFAULT_TISSUES,
    logger: logging.Logger,
    runner: CommandRunner | None = None,
) -> dict:
    """Run tissue analysis for a subject.

    Creates a ``TissueAnalyzer`` for each requested tissue type, computes
    volume and thickness statistics, and writes reports and visualizations.

    Parameters
    ----------
    project_dir : str
        BIDS project root.
    subject_id : str
        Subject identifier without the ``sub-`` prefix.
    tissues : iterable of str, optional
        Tissue types to analyze. Defaults to ``('bone', 'csf', 'skin')``.
    logger : logging.Logger
        Logger for progress output.
    runner : CommandRunner or None, optional
        Not used; kept for API compatibility with the pipeline runner.

    Returns
    -------
    dict
        Mapping of tissue name to per-tissue analysis results (volume,
        thickness, voxel counts, report path).

    Raises
    ------
    PreprocessError
        If the segmentation labeling file does not exist.

    See Also
    --------
    TissueAnalyzer : Low-level analysis for a single tissue type.
    run_pipeline : Full preprocessing pipeline.
    """
    pm = get_path_manager(project_dir)
    label_path = Path(pm.tissue_labeling(subject_id))
    if not label_path.exists():
        raise PreprocessError(f"labeling.nii.gz not found: {label_path}")
    output_root = Path(pm.ensure(pm.tissue_analysis_output(subject_id)))
    results = {}
    for tissue in tissues:
        if tissue not in TISSUE_CONFIGS:
            logger.warning(f"Unknown tissue type: {tissue}, skipping")
            continue
        output_dir = output_root / f"{tissue}_analysis"
        analyzer = TissueAnalyzer(label_path, output_dir, tissue, logger)
        results[tissue] = analyzer.analyze()
    return results
```
tit.pre.qsi.qsiprep.run_qsiprep
```python
run_qsiprep(
    project_dir: str,
    subject_id: str,
    *,
    logger: Logger,
    output_resolution: float = QSI_DEFAULT_OUTPUT_RESOLUTION,
    cpus: int | None = None,
    memory_gb: int | None = None,
    omp_threads: int = QSI_DEFAULT_OMP_THREADS,
    image_tag: str = QSI_QSIPREP_IMAGE_TAG,
    skip_bids_validation: bool = True,
    denoise_method: str = 'dwidenoise',
    unringing_method: str = 'mrdegibbs',
    runner: CommandRunner | None = None,
) -> None
```
Run QSIPrep preprocessing for a subject's DWI data.
This function spawns a QSIPrep Docker container as a sibling to the
current SimNIBS container using Docker-out-of-Docker (DooD).
Parameters
project_dir : str
Path to the BIDS project root directory.
subject_id : str
Subject identifier (without 'sub-' prefix).
logger : logging.Logger
Logger for status messages.
output_resolution : float, optional
Target output resolution in mm. Default: 2.0.
cpus : int, optional
Number of CPUs to allocate. Default: 8.
memory_gb : int, optional
Memory limit in GB. Default: 32.
omp_threads : int, optional
Number of OpenMP threads. Default: 1.
image_tag : str, optional
QSIPrep Docker image tag. Default from constants.QSI_QSIPREP_IMAGE_TAG.
skip_bids_validation : bool, optional
Skip BIDS validation. Default: True.
denoise_method : str, optional
Denoising method. Default: 'dwidenoise'.
unringing_method : str, optional
Unringing method. Default: 'mrdegibbs'.
runner : CommandRunner | None, optional
Command runner for subprocess execution.
Raises
PreprocessError
If QSIPrep fails or prerequisites are not met.
Source code in tit/pre/qsi/qsiprep.py
def run_qsiprep(
project_dir: str,
subject_id: str,
*,
logger: logging.Logger,
output_resolution: float = const.QSI_DEFAULT_OUTPUT_RESOLUTION,
cpus: int | None = None,
memory_gb: int | None = None,
omp_threads: int = const.QSI_DEFAULT_OMP_THREADS,
image_tag: str = const.QSI_QSIPREP_IMAGE_TAG,
skip_bids_validation: bool = True,
denoise_method: str = "dwidenoise",
unringing_method: str = "mrdegibbs",
runner: CommandRunner | None = None,
) -> None:
"""
Run QSIPrep preprocessing for a subject's DWI data.
This function spawns a QSIPrep Docker container as a sibling to the
current SimNIBS container using Docker-out-of-Docker (DooD).
Parameters
----------
project_dir : str
Path to the BIDS project root directory.
subject_id : str
Subject identifier (without 'sub-' prefix).
logger : logging.Logger
Logger for status messages.
output_resolution : float, optional
Target output resolution in mm. Default: 2.0.
cpus : int | None, optional
Number of CPUs to allocate. Default: None (8 are used when unset).
memory_gb : int | None, optional
Memory limit in GB. Default: None (32 GB when unset).
omp_threads : int, optional
Number of OpenMP threads. Default: 1.
image_tag : str, optional
QSIPrep Docker image tag. Default from ``constants.QSI_QSIPREP_IMAGE_TAG``.
skip_bids_validation : bool, optional
Skip BIDS validation. Default: True.
denoise_method : str, optional
Denoising method. Default: 'dwidenoise'.
unringing_method : str, optional
Unringing method. Default: 'mrdegibbs'.
runner : CommandRunner | None, optional
Command runner for subprocess execution.
Raises
------
PreprocessError
If QSIPrep fails or prerequisites are not met.
"""
from tit.telemetry import track_operation
from tit import constants as _const
with track_operation(_const.TELEMETRY_OP_PRE_QSIPREP):
logger.info(f"Starting QSIPrep for subject {subject_id}")
# Validate DWI data exists
is_valid, error_msg = validate_bids_dwi(project_dir, subject_id, logger)
if not is_valid:
raise PreprocessError(f"DWI validation failed: {error_msg}")
# Check for existing output
output_dir = Path(project_dir) / "derivatives" / "qsiprep" / f"sub-{subject_id}"
if output_dir.exists():
existing_valid, _ = validate_qsiprep_output(project_dir, subject_id)
if existing_valid:
raise PreprocessError(
f"QSIPrep output already exists at {output_dir}. "
"Remove the directory manually before rerunning."
)
# Create output directories
output_dir.parent.mkdir(parents=True, exist_ok=True)
work_dir = Path(project_dir) / "derivatives" / ".qsiprep_work"
work_dir.mkdir(parents=True, exist_ok=True)
# Build configuration
config = QSIPrepConfig(
subject_id=subject_id,
output_resolution=output_resolution,
resources=ResourceConfig(
cpus=cpus,
memory_gb=memory_gb,
omp_threads=omp_threads,
),
image_tag=image_tag,
skip_bids_validation=skip_bids_validation,
denoise_method=denoise_method,
unringing_method=unringing_method,
)
try:
# Build Docker command
builder = DockerCommandBuilder(project_dir)
cmd = builder.build_qsiprep_cmd(config)
except DockerBuildError as e:
raise PreprocessError(f"Failed to build QSIPrep command: {e}") from e
# Ensure image is available
if not pull_image_if_needed(const.QSI_QSIPREP_IMAGE, image_tag, logger):
raise PreprocessError(
f"Failed to pull QSIPrep image: {const.QSI_QSIPREP_IMAGE}:{image_tag}"
)
# Log the command for debugging
logger.debug(f"QSIPrep command: {' '.join(cmd)}")
# Run the container
if runner is None:
runner = CommandRunner()
logger.info(f"Running QSIPrep for subject {subject_id}...")
returncode = runner.run(cmd, logger=logger)
if returncode != 0:
raise PreprocessError(f"QSIPrep failed with exit code {returncode}")
# Validate output
is_valid, error_msg = validate_qsiprep_output(project_dir, subject_id)
if not is_valid:
raise PreprocessError(f"QSIPrep output validation failed: {error_msg}")
logger.info(f"QSIPrep completed successfully for subject {subject_id}")
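The rerun guard near the top of `run_qsiprep` can be exercised in isolation. This simplified sketch mirrors the refuse-to-overwrite behavior (the real function first validates the existing output via `validate_qsiprep_output`; the stand-in below skips that step):

```python
# Simplified sketch of the run_qsiprep rerun guard: existing output under
# derivatives/qsiprep/ is never silently overwritten.
import tempfile
from pathlib import Path

class PreprocessError(RuntimeError):
    """Stand-in for tit.pre.utils.PreprocessError."""

def qsiprep_output_dir(project_dir: str, subject_id: str) -> Path:
    # Same derivatives layout as run_qsiprep
    out = Path(project_dir) / "derivatives" / "qsiprep" / f"sub-{subject_id}"
    if out.exists():
        raise PreprocessError(
            f"QSIPrep output already exists at {out}. "
            "Remove the directory manually before rerunning."
        )
    return out

with tempfile.TemporaryDirectory() as proj:
    first = qsiprep_output_dir(proj, "001")   # fresh project: path returned
    first.mkdir(parents=True)                 # simulate a completed run
    guarded = False
    try:
        qsiprep_output_dir(proj, "001")       # second call: refused
    except PreprocessError:
        guarded = True
```

This makes QSIPrep runs explicitly non-idempotent: a partial or stale output directory must be removed by hand, which prevents mixing results from different runs.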
tit.pre.qsi.qsirecon.run_qsirecon
run_qsirecon(project_dir: str, subject_id: str, *, logger: Logger, recon_specs: list[str] | None = None, atlases: list[str] | None = None, use_gpu: bool = False, cpus: int | None = None, memory_gb: int | None = None, omp_threads: int = QSI_DEFAULT_OMP_THREADS, image_tag: str = QSI_QSIRECON_IMAGE_TAG, skip_odf_reports: bool = True, runner: CommandRunner | None = None) -> None
Run QSIRecon reconstruction for a subject's preprocessed DWI data.
This function spawns QSIRecon Docker containers as siblings to the
current SimNIBS container using Docker-out-of-Docker (DooD).
QSIRecon requires QSIPrep output as input. Multiple reconstruction
specs can be run sequentially.
Parameters
project_dir : str
Path to the BIDS project root directory.
subject_id : str
Subject identifier (without 'sub-' prefix).
logger : logging.Logger
Logger for status messages.
recon_specs : list[str] | None, optional
List of reconstruction specifications to run. Default: ['dsi_studio_gqi'].
This default produces DTI tensors for SimNIBS anisotropic modeling.
Other specs (mrtrix_*, dipy_*, amico_noddi, pyafq_*, etc.) remain available.
atlases : list[str] | None, optional
List of atlases for connectivity analysis. Default: None (no connectivity).
Not needed for DTI extraction. Set to e.g. ['4S156Parcels', 'AAL116']
if connectivity matrices are desired.
use_gpu : bool, optional
Enable GPU acceleration. Default: False.
cpus : int | None, optional
Number of CPUs to allocate. None = inherit from current container.
memory_gb : int | None, optional
Memory limit in GB. None = inherit from current container.
omp_threads : int, optional
Number of OpenMP threads. Default: 1.
image_tag : str, optional
QSIRecon Docker image tag. Default from constants.QSI_QSIRECON_IMAGE_TAG.
skip_odf_reports : bool, optional
Skip ODF report generation. Default: True.
runner : CommandRunner | None, optional
Command runner for subprocess execution.
Raises
PreprocessError
If QSIRecon fails or prerequisites are not met.
Source code in tit/pre/qsi/qsirecon.py
def run_qsirecon(
project_dir: str,
subject_id: str,
*,
logger: logging.Logger,
recon_specs: list[str] | None = None,
atlases: list[str] | None = None,
use_gpu: bool = False,
cpus: int | None = None,
memory_gb: int | None = None,
omp_threads: int = const.QSI_DEFAULT_OMP_THREADS,
image_tag: str = const.QSI_QSIRECON_IMAGE_TAG,
skip_odf_reports: bool = True,
runner: CommandRunner | None = None,
) -> None:
"""
Run QSIRecon reconstruction for a subject's preprocessed DWI data.
This function spawns QSIRecon Docker containers as siblings to the
current SimNIBS container using Docker-out-of-Docker (DooD).
QSIRecon requires QSIPrep output as input. Multiple reconstruction
specs can be run sequentially.
Parameters
----------
project_dir : str
Path to the BIDS project root directory.
subject_id : str
Subject identifier (without 'sub-' prefix).
logger : logging.Logger
Logger for status messages.
recon_specs : list[str] | None, optional
List of reconstruction specifications to run. Default: ['dsi_studio_gqi'].
This default produces DTI tensors for SimNIBS anisotropic modeling.
Other specs (mrtrix_*, dipy_*, amico_noddi, pyafq_*, etc.) remain available.
atlases : list[str] | None, optional
List of atlases for connectivity analysis. Default: None (no connectivity).
Not needed for DTI extraction. Set to e.g. ['4S156Parcels', 'AAL116']
if connectivity matrices are desired.
use_gpu : bool, optional
Enable GPU acceleration. Default: False.
cpus : int | None, optional
Number of CPUs to allocate. None = inherit from current container.
memory_gb : int | None, optional
Memory limit in GB. None = inherit from current container.
omp_threads : int, optional
Number of OpenMP threads. Default: 1.
image_tag : str, optional
QSIRecon Docker image tag. Default from ``constants.QSI_QSIRECON_IMAGE_TAG``.
skip_odf_reports : bool, optional
Skip ODF report generation. Default: True.
runner : CommandRunner | None, optional
Command runner for subprocess execution.
Raises
------
PreprocessError
If QSIRecon fails or prerequisites are not met.
"""
# Default to dsi_studio_gqi for SimNIBS DTI extraction
if recon_specs is None:
recon_specs = [const.QSI_DEFAULT_RECON_SPEC]
# Atlases are optional — not needed for DTI extraction
# Pass through None/empty to skip connectivity workflows
from tit.telemetry import track_operation
from tit import constants as _const
with track_operation(_const.TELEMETRY_OP_PRE_QSIRECON):
logger.info(
f"Starting QSIRecon for subject {subject_id} with specs: {recon_specs}, atlases: {atlases}"
)
# Validate QSIPrep output exists
is_valid, error_msg = validate_qsiprep_output(project_dir, subject_id)
if not is_valid:
raise PreprocessError(
f"QSIPrep output validation failed: {error_msg}. "
"Run QSIPrep first before running QSIRecon."
)
# No mkdir here — Docker's `-v` creates host directories automatically.
# Creating them from SimNIBS fails on Docker Desktop due to phantom
# bind-mount entries left by previous sibling containers.
output_base = Path(project_dir) / "derivatives" / "qsirecon"
# Build configuration
config = QSIReconConfig(
subject_id=subject_id,
recon_specs=recon_specs,
atlases=atlases,
use_gpu=use_gpu,
resources=ResourceConfig(
cpus=cpus,
memory_gb=memory_gb,
omp_threads=omp_threads,
),
image_tag=image_tag,
skip_odf_reports=skip_odf_reports,
)
try:
# Build Docker command builder
builder = DockerCommandBuilder(project_dir)
except DockerBuildError as e:
raise PreprocessError(f"Failed to initialize Docker: {e}") from e
# Ensure image is available
if not pull_image_if_needed(const.QSI_QSIRECON_IMAGE, image_tag, logger):
raise PreprocessError(
f"Failed to pull QSIRecon image: {const.QSI_QSIRECON_IMAGE}:{image_tag}"
)
# Create runner if not provided
if runner is None:
runner = CommandRunner()
# Check for existing output before starting any specs
subject_output_dir = output_base / f"sub-{subject_id}"
if subject_output_dir.exists():
raise PreprocessError(
f"QSIRecon output already exists at {subject_output_dir}. "
"Remove the directory manually before rerunning."
)
# Run each recon spec
for spec in recon_specs:
logger.info(f"Running QSIRecon spec: {spec}")
try:
cmd = builder.build_qsirecon_cmd(config, spec)
except DockerBuildError as e:
raise PreprocessError(f"Failed to build QSIRecon command: {e}") from e
# Log the command for debugging
logger.debug(f"QSIRecon command: {' '.join(cmd)}")
# Run the container
logger.info(f"Running QSIRecon {spec} for subject {subject_id}...")
returncode = runner.run(cmd, logger=logger)
if returncode != 0:
raise PreprocessError(f"QSIRecon {spec} failed with exit code {returncode}")
logger.info(f"QSIRecon {spec} completed for subject {subject_id}")
logger.info(f"QSIRecon completed successfully for subject {subject_id}")
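The per-spec loop above is fail-fast: each reconstruction spec launches its own sibling container, and a nonzero exit code aborts the remaining specs. A sketch of that control flow, with `run_cmd` as a hypothetical stand-in for `CommandRunner.run` and a made-up second spec name:

```python
# Sketch of the fail-fast spec loop in run_qsirecon.
class PreprocessError(RuntimeError):
    """Stand-in for tit.pre.utils.PreprocessError."""

def run_specs(specs, run_cmd):
    completed = []
    for spec in specs:
        returncode = run_cmd(spec)  # would launch the QSIRecon Docker sibling
        if returncode != 0:
            raise PreprocessError(f"QSIRecon {spec} failed with exit code {returncode}")
        completed.append(spec)      # later specs run only if this one succeeded
    return completed

ok = run_specs(["dsi_studio_gqi"], lambda spec: 0)

failed = None
try:
    run_specs(["dsi_studio_gqi", "other_spec"], lambda spec: 1)
except PreprocessError as e:
    failed = str(e)
```

Running specs sequentially keeps resource usage bounded to one container at a time, at the cost of longer wall-clock time when several specs are requested.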
tit.pre.qsi.dti_extractor.extract_dti_tensor
extract_dti_tensor(project_dir: str, subject_id: str, *, logger: Logger, skip_registration: bool = False) -> Path
Extract and register a DTI tensor from QSIRecon DSI Studio output.
Loads the six tensor components produced by DSI Studio GQI,
validates them, registers the tensor to the SimNIBS T1 grid
(with FSL-convention pre-compensation), saves
DTI_coregT1_tensor.nii.gz into the m2m directory, and
generates a QC report.
Parameters
project_dir : str
BIDS project root directory.
subject_id : str
Subject identifier (e.g. '070').
logger : logging.Logger
Logger instance for progress and diagnostic messages.
skip_registration : bool, optional
When True, copy the tensor as-is without resampling or
reorientation. Default is False.
Returns
pathlib.Path
Path to the saved DTI_coregT1_tensor.nii.gz.
Raises
tit.pre.utils.PreprocessError
If required inputs are missing, the tensor already exists, or
the tensor data is invalid.
Source code in tit/pre/qsi/dti_extractor.py
def extract_dti_tensor(
project_dir: str,
subject_id: str,
*,
logger: logging.Logger,
skip_registration: bool = False,
) -> Path:
"""Extract and register a DTI tensor from QSIRecon DSI Studio output.
Loads the six tensor components produced by DSI Studio GQI,
validates them, registers the tensor to the SimNIBS T1 grid
(with FSL-convention pre-compensation), saves
``DTI_coregT1_tensor.nii.gz`` into the m2m directory, and
generates a QC report.
Parameters
----------
project_dir : str
BIDS project root directory.
subject_id : str
Subject identifier (e.g. ``'070'``).
logger : logging.Logger
Logger instance for progress and diagnostic messages.
skip_registration : bool, optional
When *True*, copy the tensor as-is without resampling or
reorientation. Default is *False*.
Returns
-------
pathlib.Path
Path to the saved ``DTI_coregT1_tensor.nii.gz``.
Raises
------
tit.pre.utils.PreprocessError
If required inputs are missing, the tensor already exists, or
the tensor data is invalid.
"""
project = Path(project_dir)
logger.info(f"Extracting DTI tensor for subject {subject_id}")
pm = get_path_manager(project_dir)
m2m_dir = Path(pm.m2m(subject_id))
if not m2m_dir.is_dir():
raise PreprocessError(f"m2m directory not found: {m2m_dir}. Run charm first.")
output_path = m2m_dir / const.FILE_DTI_TENSOR
if output_path.exists():
raise PreprocessError(
f"DTI tensor already exists at {output_path}. "
"Remove the file before rerunning."
)
simnibs_t1 = m2m_dir / const.FILE_T1
if not simnibs_t1.exists():
raise PreprocessError(f"SimNIBS T1 not found: {simnibs_t1}. Run charm first.")
dwi_dir = _dsistudio_dwi_dir(project, subject_id)
if not dwi_dir.is_dir():
raise PreprocessError(
f"DSI Studio output not found: {dwi_dir}. "
"Run QSIRecon with dsi_studio_gqi first."
)
# Load and validate
tensor_data, affine = _load_tensor(dwi_dir, subject_id, logger)
_validate_tensor(tensor_data, logger)
# Save intermediate in ACPC space
intermediate = m2m_dir / "DTI_ACPC_tensor.nii.gz"
_save_nifti_gz(tensor_data, affine, intermediate, logger)
logger.info(f"Intermediate tensor: {intermediate}")
# Register to SimNIBS T1 space
if skip_registration:
shutil.copy2(intermediate, output_path)
logger.info("Copied tensor as-is (skip_registration=True)")
else:
acpc_t1 = _qsiprep_t1(project, subject_id)
_register_tensor(tensor_data, affine, simnibs_t1, acpc_t1, output_path, logger)
logger.info(f"DTI tensor saved to: {output_path}")
# QC report
from tit.reporting.generators.dti_qc import create_dti_qc_report
qc_path = create_dti_qc_report(
project_dir=project_dir,
subject_id=subject_id,
tensor_file=str(output_path),
t1_file=str(simnibs_t1),
)
logger.info(f"DTI QC report: {qc_path}")
return output_path
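The `_load_tensor`/`_validate_tensor` steps check the six DSI Studio tensor components before registration. A minimal per-voxel sketch of the kind of sanity checks involved (the real validation lives in `tit/pre/qsi/dti_extractor.py`; the component order and the specific checks below are assumptions for illustration):

```python
# Sketch of per-voxel DTI tensor sanity checks: six lower-triangular
# components, all finite, non-negative diagonal diffusivities.
import math

TENSOR_COMPONENTS = ("xx", "xy", "xz", "yy", "yz", "zz")  # assumed ordering

def validate_tensor_voxel(components: dict) -> None:
    missing = [c for c in TENSOR_COMPONENTS if c not in components]
    if missing:
        raise ValueError(f"missing tensor components: {missing}")
    for name in TENSOR_COMPONENTS:
        if not math.isfinite(components[name]):
            raise ValueError(f"non-finite value in component {name}")
    # Diagonal diffusivities must be non-negative for a physical tensor
    for diag in ("xx", "yy", "zz"):
        if components[diag] < 0:
            raise ValueError(f"negative diagonal component {diag}")

voxel = {"xx": 1.0e-3, "xy": 0.0, "xz": 0.0, "yy": 8.0e-4, "yz": 0.0, "zz": 7.0e-4}
validate_tensor_voxel(voxel)  # a physically plausible voxel passes
```

Catching a malformed tensor here, before registration and before SimNIBS consumes `DTI_coregT1_tensor.nii.gz`, is much cheaper than debugging an anisotropic simulation downstream.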