
Preprocessing

The preprocessing pipeline converts raw imaging data into simulation-ready head meshes. This is a one-time setup per subject — once complete, you can run unlimited simulations without repeating these steps.

graph LR
    A[DICOM] -->|dcm2niix| B[NIfTI]
    B -->|CHARM| C[Head Mesh]
    C -->|subject_atlas| D[Atlas Parcellations]
    B -->|recon-all| E[FreeSurfer Surfaces]
    C -->|tissue analysis| F[Tissue Report]
    style C fill:#2d5a27,stroke:#4a8,color:#fff

Full Pipeline

Run all preprocessing steps with a single call:

from tit.pre import run_pipeline

exit_code = run_pipeline(
    project_dir="/data/my_project",
    subject_ids=["001", "002"],
    convert_dicom=True,
    run_recon=True,
    parallel_recon=True,
    parallel_cores=4,
    create_m2m=True,
    run_tissue_analysis=True,
    run_qsiprep=False,
    run_qsirecon=False,
    extract_dti=False,
    run_subcortical_segmentations=False,
)

Selective Steps

Each boolean flag controls a specific step. Set only the ones you need — for example, if FreeSurfer recon-all is already done, set run_recon=False and create_m2m=True to run only CHARM (which also runs subject_atlas automatically).
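
For example, a CHARM-only pass over one subject might look like the following minimal sketch (paths and subject IDs are illustrative):

from tit.pre import run_pipeline

# recon-all is already done, so only CHARM (plus the automatic
# subject_atlas step) is enabled here
run_pipeline(
    project_dir="/data/my_project",
    subject_ids=["001"],
    run_recon=False,
    create_m2m=True,
)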

Individual Steps

Each preprocessing step can be called independently for finer control:

import logging

from tit.pre import (
    run_dicom_to_nifti,
    run_recon_all,
    run_charm,
    run_tissue_analysis,
    run_subcortical_segmentations,
    run_qsiprep,
    run_qsirecon,
    extract_dti_tensor,
    discover_subjects,
    check_m2m_exists,
)

# Discover subjects from sourcedata/
subjects = discover_subjects("/data/my_project")

# Check if head mesh already exists
my_logger = logging.getLogger("preprocess")
if not check_m2m_exists("/data/my_project", "001"):
    run_charm("/data/my_project", "001", logger=my_logger)

Step Details

DICOM to NIfTI (run_dicom_to_nifti())
    Converts DICOM files to NIfTI format using dcm2niix.
CHARM head mesh (run_charm())
    Creates a SimNIBS-compatible head mesh from T1/T2 images.
Subject atlas (run_subject_atlas())
    Creates atlas-based parcellations (a2009s, DK40, HCP_MMP1); runs automatically after CHARM in the pipeline.
FreeSurfer recon-all (run_recon_all())
    Full cortical reconstruction and subcortical segmentation (6-12 hours per subject).
Tissue analysis (run_tissue_analysis())
    Analyzes tissue thickness and volume (bone, CSF, skin) from the head mesh.
Subcortical segmentation (run_subcortical_segmentations())
    Runs thalamic nuclei and hippocampal subfield segmentations standalone (also runs automatically at the end of run_recon_all).

Compute Time

FreeSurfer recon-all is the most time-consuming step (6-12 hours per subject). Use parallel_recon=True with parallel_cores to process multiple subjects simultaneously.
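
A recon-only pass, sketched below with illustrative values, leaves every other step disabled and fans recon-all out across subjects:

from tit.pre import run_pipeline

# Process up to four subjects' recon-all runs concurrently
run_pipeline(
    project_dir="/data/my_project",
    subject_ids=["001", "002", "003", "004"],
    run_recon=True,
    parallel_recon=True,
    parallel_cores=4,
)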

DTI / Diffusion Pipeline

For anisotropic conductivity simulations, TI-Toolbox supports diffusion processing via QSIPrep/QSIRecon Docker containers:

from tit.pre import run_qsiprep, run_qsirecon, extract_dti_tensor
import logging

logger = logging.getLogger("my_pipeline")

# Run QSIPrep DWI preprocessing
run_qsiprep("/data/my_project", "001", logger=logger)

# Run QSIRecon reconstruction
run_qsirecon("/data/my_project", "001", logger=logger)

# Extract DTI tensor for SimNIBS anisotropic conductivity
extract_dti_tensor("/data/my_project", "001", logger=logger)

These steps can also be included in the full pipeline by setting run_qsiprep=True, run_qsirecon=True, and extract_dti=True. Optional configuration dicts (qsiprep_config, qsi_recon_config) control parameters such as output resolution, recon specs, and atlases.
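
The supported dict keys are not enumerated here; the sketch below assumes they mirror the parameters of run_qsiprep() and run_qsirecon() documented in the API reference:

from tit.pre import run_pipeline

run_pipeline(
    project_dir="/data/my_project",
    subject_ids=["001"],
    run_qsiprep=True,
    run_qsirecon=True,
    extract_dti=True,
    # Key names below are assumed to match the corresponding
    # run_qsiprep()/run_qsirecon() parameters
    qsiprep_config={"output_resolution": 1.6},
    qsi_recon_config={"recon_specs": ["dipy_dki"], "atlases": ["Schaefer100"]},
)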

BIDS Directory Structure

After preprocessing, your project follows this layout:

project_root/
├── sourcedata/              # Raw DICOM
├── sub-001/
│   └── anat/               # NIfTI files (T1w, T2w)
└── derivatives/
    ├── SimNIBS/sub-001/
    │   └── m2m_001/         # Head mesh (simulation-ready)
    │       └── segmentation/ # Atlas parcellations
    ├── freesurfer/sub-001/  # recon-all outputs
    ├── qsiprep/sub-001/     # QSIPrep DWI outputs (if run)
    └── qsirecon/sub-001/    # QSIRecon tensor outputs (if run)

API Reference

tit.pre.structural.run_pipeline

run_pipeline(project_dir: str, subject_ids: Iterable[str], *, convert_dicom: bool = False, run_recon: bool = False, parallel_recon: bool = False, parallel_cores: int | None = None, create_m2m: bool = False, run_tissue_analysis: bool = False, run_qsiprep: bool = False, run_qsirecon: bool = False, qsiprep_config: dict | None = None, qsi_recon_config: dict | None = None, extract_dti: bool = False, run_subcortical_segmentations: bool = False, debug: bool = False, stop_event: object | None = None, logger_callback: Callable | None = None, runner: CommandRunner | None = None) -> int

Run the preprocessing pipeline for one or more subjects.

Parameters

project_dir : str
    BIDS project root.
subject_ids : iterable of str
    Subject identifiers without the sub- prefix.
convert_dicom : bool, optional
    Run DICOM to NIfTI conversion.
run_recon : bool, optional
    Run FreeSurfer recon-all.
parallel_recon : bool, optional
    Run recon-all in parallel across subjects.
parallel_cores : int, optional
    Max parallel subjects for recon-all.
create_m2m : bool, optional
    Run SimNIBS charm (also runs subject_atlas for .annot files).
run_tissue_analysis : bool, optional
    Run tissue analysis pipeline.
run_qsiprep : bool, optional
    Run QSIPrep DWI preprocessing via Docker.
run_qsirecon : bool, optional
    Run QSIRecon reconstruction via Docker.
qsiprep_config : dict, optional
    QSIPrep configuration options (e.g. output resolution).
qsi_recon_config : dict, optional
    QSIRecon configuration options (e.g. recon specs, atlases).
extract_dti : bool, optional
    Extract DTI tensor for SimNIBS anisotropic conductivity.
run_subcortical_segmentations : bool, optional
    Run thalamic nuclei and hippocampal subfield segmentations (standalone).
debug : bool, optional
    Enable verbose logging.
stop_event : object, optional
    Event used to cancel running steps.
logger_callback : callable, optional
    Callback used by GUI to capture log lines.
runner : CommandRunner, optional
    Subprocess runner used to stream output.

Returns

int
    0 on success, 1 on failure.

Source code in tit/pre/structural.py
def run_pipeline(
    project_dir: str,
    subject_ids: Iterable[str],
    *,
    convert_dicom: bool = False,
    run_recon: bool = False,
    parallel_recon: bool = False,
    parallel_cores: int | None = None,
    create_m2m: bool = False,
    run_tissue_analysis: bool = False,
    run_qsiprep: bool = False,
    run_qsirecon: bool = False,
    qsiprep_config: dict | None = None,
    qsi_recon_config: dict | None = None,
    extract_dti: bool = False,
    run_subcortical_segmentations: bool = False,
    debug: bool = False,
    stop_event: object | None = None,
    logger_callback: Callable | None = None,
    runner: CommandRunner | None = None,
) -> int:
    """Run the preprocessing pipeline for one or more subjects.

    Parameters
    ----------
    project_dir : str
        BIDS project root.
    subject_ids : iterable of str
        Subject identifiers without the `sub-` prefix.
    convert_dicom : bool, optional
        Run DICOM to NIfTI conversion.
    run_recon : bool, optional
        Run FreeSurfer recon-all.
    parallel_recon : bool, optional
        Run recon-all in parallel across subjects.
    parallel_cores : int, optional
        Max parallel subjects for recon-all.
    create_m2m : bool, optional
        Run SimNIBS charm (also runs subject_atlas for .annot files).
    run_tissue_analysis : bool, optional
        Run tissue analysis pipeline.
    run_qsiprep : bool, optional
        Run QSIPrep DWI preprocessing via Docker.
    run_qsirecon : bool, optional
        Run QSIRecon reconstruction via Docker.
    qsiprep_config : dict, optional
        QSIPrep configuration options (e.g. output resolution).
    qsi_recon_config : dict, optional
        QSIRecon configuration options (e.g. recon specs, atlases).
    extract_dti : bool, optional
        Extract DTI tensor for SimNIBS anisotropic conductivity.
    run_subcortical_segmentations : bool, optional
        Run thalamic nuclei and hippocampal subfield segmentations (standalone).
    debug : bool, optional
        Enable verbose logging.
    stop_event : object, optional
        Event used to cancel running steps.
    logger_callback : callable, optional
        Callback used by GUI to capture log lines.
    runner : CommandRunner, optional
        Subprocess runner used to stream output.

    Returns
    -------
    int
        0 on success, 1 on failure.
    """
    subject_list = [str(s).strip() for s in subject_ids if str(s).strip()]
    if not subject_list:
        raise PreprocessError("No subjects provided.")

    pm = get_path_manager(project_dir)

    for sid in subject_list:
        ensure_subject_dirs(project_dir, sid)

    datasets = {"ti-toolbox"}
    if run_recon:
        datasets.add("freesurfer")
    if create_m2m:
        datasets.add("simnibs")
    ensure_dataset_descriptions(project_dir, datasets)

    if runner is None:
        runner = CommandRunner(stop_event=stop_event)
    elif stop_event is not None and runner.stop_event is not stop_event:
        runner.stop_event = stop_event

    if parallel_recon and run_recon and len(subject_list) > 1:
        for sid in subject_list:
            _run_subject_pipeline(
                project_dir,
                sid,
                convert_dicom=convert_dicom,
                run_recon=False,
                parallel_recon=parallel_recon,
                create_m2m=create_m2m,
                run_tissue=False,
                run_qsiprep_step=False,
                run_qsirecon_step=False,
                qsiprep_config=qsiprep_config,
                qsi_recon_config=qsi_recon_config,
                extract_dti_step=False,
                run_subcortical=False,
                debug=debug,
                runner=runner,
                callback=logger_callback,
            )

        max_workers = parallel_cores or os.cpu_count() or 1
        max_workers = min(max_workers, len(subject_list))
        with ThreadPoolExecutor(max_workers=max_workers) as executor:
            futures = []
            for sid in subject_list:
                futures.append(
                    executor.submit(
                        _run_subject_pipeline,
                        project_dir,
                        sid,
                        convert_dicom=False,
                        run_recon=True,
                        parallel_recon=True,
                        create_m2m=False,
                        run_tissue=False,
                        run_qsiprep_step=False,
                        run_qsirecon_step=False,
                        qsiprep_config=qsiprep_config,
                        qsi_recon_config=qsi_recon_config,
                        extract_dti_step=False,
                        run_subcortical=False,
                        debug=debug,
                        runner=runner,
                        callback=logger_callback,
                    )
                )

            for future in as_completed(futures):
                future.result()

        if run_tissue_analysis:
            for sid in subject_list:
                _run_subject_pipeline(
                    project_dir,
                    sid,
                    convert_dicom=False,
                    run_recon=False,
                    parallel_recon=parallel_recon,
                    create_m2m=False,
                    run_tissue=True,
                    run_qsiprep_step=False,
                    run_qsirecon_step=False,
                    qsiprep_config=qsiprep_config,
                    qsi_recon_config=qsi_recon_config,
                    extract_dti_step=False,
                    run_subcortical=False,
                    debug=debug,
                    runner=runner,
                    callback=logger_callback,
                )
        # Run QSI steps after tissue analysis (if enabled)
        if run_qsiprep or run_qsirecon or extract_dti:
            for sid in subject_list:
                _run_subject_pipeline(
                    project_dir,
                    sid,
                    convert_dicom=False,
                    run_recon=False,
                    parallel_recon=parallel_recon,
                    create_m2m=False,
                    run_tissue=False,
                    run_qsiprep_step=run_qsiprep,
                    run_qsirecon_step=run_qsirecon,
                    qsiprep_config=qsiprep_config,
                    qsi_recon_config=qsi_recon_config,
                    extract_dti_step=extract_dti,
                    run_subcortical=False,
                    debug=debug,
                    runner=runner,
                    callback=logger_callback,
                )
        if run_subcortical_segmentations:
            for sid in subject_list:
                _run_subject_pipeline(
                    project_dir,
                    sid,
                    convert_dicom=False,
                    run_recon=False,
                    parallel_recon=parallel_recon,
                    create_m2m=False,
                    run_tissue=False,
                    run_qsiprep_step=False,
                    run_qsirecon_step=False,
                    qsiprep_config=qsiprep_config,
                    qsi_recon_config=qsi_recon_config,
                    extract_dti_step=False,
                    run_subcortical=True,
                    debug=debug,
                    runner=runner,
                    callback=logger_callback,
                )
    else:
        for sid in subject_list:
            _run_subject_pipeline(
                project_dir,
                sid,
                convert_dicom=convert_dicom,
                run_recon=run_recon,
                parallel_recon=parallel_recon,
                create_m2m=create_m2m,
                run_tissue=run_tissue_analysis,
                run_qsiprep_step=run_qsiprep,
                run_qsirecon_step=run_qsirecon,
                qsiprep_config=qsiprep_config,
                qsi_recon_config=qsi_recon_config,
                extract_dti_step=extract_dti,
                run_subcortical=run_subcortical_segmentations,
                debug=debug,
                runner=runner,
                callback=logger_callback,
            )

    # Generate HTML reports for each subject
    from tit.reporting import PreprocessingReportGenerator

    for sid in subject_list:
        report_gen = PreprocessingReportGenerator(
            project_dir=project_dir,
            subject_id=sid,
        )

        # Add processing steps based on what was run
        if convert_dicom:
            report_gen.add_processing_step(
                step_name="DICOM Conversion",
                description="Convert DICOM files to NIfTI format",
                status="completed",
            )

        if create_m2m:
            report_gen.add_processing_step(
                step_name="SimNIBS charm",
                description="Create head mesh model for simulations",
                status="completed",
            )
            report_gen.add_processing_step(
                step_name="Subject Atlas Segmentation",
                description="Generate atlas-based parcellation",
                status="completed",
            )

        if run_recon:
            report_gen.add_processing_step(
                step_name="FreeSurfer recon-all",
                description="Cortical surface reconstruction",
                status="completed",
            )

        if run_tissue_analysis:
            report_gen.add_processing_step(
                step_name="Tissue Analysis",
                description="Tissue segmentation and analysis",
                status="completed",
            )

        if run_qsiprep:
            report_gen.add_processing_step(
                step_name="QSIPrep",
                description="Diffusion MRI preprocessing",
                status="completed",
            )

        if run_qsirecon:
            report_gen.add_processing_step(
                step_name="QSIRecon",
                description="Diffusion MRI reconstruction",
                status="completed",
            )

        if extract_dti:
            report_gen.add_processing_step(
                step_name="DTI Tensor Extraction",
                description="Extract DTI tensors for anisotropic conductivity",
                status="completed",
            )

        if run_subcortical_segmentations:
            report_gen.add_processing_step(
                step_name="Subcortical Segmentations",
                description="Thalamic nuclei and hippocampal subfield segmentations",
                status="completed",
            )

        report_gen.scan_for_data()
        report_path = report_gen.generate()
        if logger_callback:
            logger_callback(f"Report generated: {report_path}", "info")

    return 0

tit.pre.utils.discover_subjects

discover_subjects(project_dir: str) -> list[str]

Return sorted, deduplicated subject IDs found in a BIDS project tree.

Discovery order:

1. sourcedata/sub-*/T1w/ or T2w/ — any subdir, NIfTI (.nii/.nii.gz), or .tgz
2. sourcedata/sub-*/*.tgz (compressed bundles at top level)
3. sub-*/anat/*T1w*.nii[.gz] or *T2w*.nii[.gz] at project root

Source code in tit/pre/utils.py
def discover_subjects(project_dir: str) -> list[str]:
    """Return sorted, deduplicated subject IDs found in a BIDS project tree.

    Discovery order:
    1. sourcedata/sub-*/T1w/ or T2w/ — any subdir, NIfTI (.nii/.nii.gz), or .tgz
    2. sourcedata/sub-*/*.tgz (compressed bundles at top level)
    3. sub-*/anat/*T1w*.nii[.gz] or *T2w*.nii[.gz] at project root
    """
    found: list[str] = []

    sourcedata_dir = os.path.join(project_dir, "sourcedata")
    if os.path.exists(sourcedata_dir):
        for subj_dir in glob.glob(os.path.join(sourcedata_dir, "sub-*")):
            if os.path.isdir(subj_dir):
                t1w_dir = os.path.join(subj_dir, "T1w")
                t2w_dir = os.path.join(subj_dir, "T2w")

                has_valid_structure = (
                    (
                        os.path.exists(t1w_dir)
                        and (
                            any(
                                os.path.isdir(os.path.join(t1w_dir, d))
                                for d in os.listdir(t1w_dir)
                            )
                            or any(
                                f.endswith((".tgz", ".json", ".nii", ".nii.gz"))
                                for f in os.listdir(t1w_dir)
                            )
                        )
                    )
                    or (
                        os.path.exists(t2w_dir)
                        and (
                            any(
                                os.path.isdir(os.path.join(t2w_dir, d))
                                for d in os.listdir(t2w_dir)
                            )
                            or any(
                                f.endswith((".tgz", ".json", ".nii", ".nii.gz"))
                                for f in os.listdir(t2w_dir)
                            )
                        )
                    )
                    or any(f.endswith(".tgz") for f in os.listdir(subj_dir))
                )

                if has_valid_structure:
                    subject_id = os.path.basename(subj_dir).replace("sub-", "")
                    found.append(subject_id)

    for subj_dir in glob.glob(os.path.join(project_dir, "sub-*")):
        if os.path.isdir(subj_dir):
            subject_id = os.path.basename(subj_dir).replace("sub-", "")
            if subject_id in found:
                continue
            anat_dir = os.path.join(subj_dir, "anat")
            if os.path.exists(anat_dir):
                has_nifti = any(
                    f.endswith((".nii", ".nii.gz")) and ("T1w" in f or "T2w" in f)
                    for f in os.listdir(anat_dir)
                )
                if has_nifti:
                    found.append(subject_id)

    return sorted(found)

tit.pre.utils.check_m2m_exists

check_m2m_exists(project_dir: str, subject_id: str) -> bool

Return True if the SimNIBS m2m directory for subject_id already exists.

Path: <project_dir>/derivatives/SimNIBS/sub-<subject_id>/m2m_<subject_id>

Source code in tit/pre/utils.py
def check_m2m_exists(project_dir: str, subject_id: str) -> bool:
    """Return True if the SimNIBS m2m directory for subject_id already exists.

    Path: <project_dir>/derivatives/SimNIBS/sub-<subject_id>/m2m_<subject_id>
    """
    m2m_dir = os.path.join(
        project_dir,
        "derivatives",
        "SimNIBS",
        f"sub-{subject_id}",
        f"m2m_{subject_id}",
    )
    return os.path.exists(m2m_dir)

tit.pre.dicom2nifti.run_dicom_to_nifti

run_dicom_to_nifti(project_dir: str, subject_id: str, *, logger, runner: CommandRunner | None = None) -> None

Convert DICOMs to BIDS-compliant NIfTI for a subject.

Parameters

project_dir : str
    BIDS project root directory.
subject_id : str
    Subject identifier without the 'sub-' prefix.
logger : logging.Logger
    Logger for progress and command output.
runner : CommandRunner, optional
    Subprocess runner for streaming output.
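
A minimal call, assuming DICOMs for the subject are already in sourcedata/ (paths illustrative):

import logging

from tit.pre import run_dicom_to_nifti

logger = logging.getLogger("preprocess")
run_dicom_to_nifti("/data/my_project", "001", logger=logger)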

Source code in tit/pre/dicom2nifti.py
def run_dicom_to_nifti(
    project_dir: str,
    subject_id: str,
    *,
    logger,
    runner: CommandRunner | None = None,
) -> None:
    """Convert DICOMs to BIDS-compliant NIfTI for a subject.

    Parameters
    ----------
    project_dir : str
        BIDS project root directory.
    subject_id : str
        Subject identifier without the 'sub-' prefix.
    logger : logging.Logger
        Logger for progress and command output.
    runner : CommandRunner, optional
        Subprocess runner for streaming output.
    """

    pm = get_path_manager(project_dir)

    sourcedata_dir = Path(pm.sourcedata_subject(subject_id))
    bids_anat_dir = Path(pm.bids_anat(subject_id))
    bids_anat_dir.mkdir(parents=True, exist_ok=True)

    converted = False
    for modality in ("T1w", "T2w"):
        if _process_modality(
            modality,
            sourcedata_dir,
            bids_anat_dir,
            subject_id,
            pm,
            logger,
            runner,
        ):
            converted = True

    if not converted:
        logger.warning("No DICOM files found or converted")

tit.pre.recon_all.run_recon_all

run_recon_all(project_dir: str, subject_id: str, *, logger, parallel: bool = False, runner: CommandRunner | None = None) -> None

Run FreeSurfer recon-all for a subject.

Parameters

project_dir : str
    BIDS project root.
subject_id : str
    Subject identifier without the sub- prefix.
logger : logging.Logger
    Logger used for progress and command output.
parallel : bool, optional
    Use FreeSurfer OpenMP parallelization.
runner : CommandRunner, optional
    Subprocess runner used to stream output.
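
A minimal call with FreeSurfer's within-subject OpenMP parallelization enabled (paths illustrative):

import logging

from tit.pre import run_recon_all

logger = logging.getLogger("preprocess")
run_recon_all("/data/my_project", "001", logger=logger, parallel=True)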

Source code in tit/pre/recon_all.py
def run_recon_all(
    project_dir: str,
    subject_id: str,
    *,
    logger,
    parallel: bool = False,
    runner: CommandRunner | None = None,
) -> None:
    """Run FreeSurfer recon-all for a subject.

    Parameters
    ----------
    project_dir : str
        BIDS project root.
    subject_id : str
        Subject identifier without the `sub-` prefix.
    logger : logging.Logger
        Logger used for progress and command output.
    parallel : bool, optional
        Use FreeSurfer OpenMP parallelization.
    runner : CommandRunner, optional
        Subprocess runner used to stream output.
    """

    pm = get_path_manager(project_dir)

    fs_subject_dir = Path(pm.freesurfer_subject(subject_id))
    fs_subjects_root = fs_subject_dir.parent

    t1_file, t2_file = _find_anat_files(subject_id)
    if not t1_file:
        bids_anat_dir = Path(pm.bids_anat(subject_id))
        raise PreprocessError(f"No T1 file found in {bids_anat_dir}")

    if fs_subject_dir.exists():
        if any(fs_subject_dir.iterdir()):
            raise PreprocessError(
                f"FreeSurfer output already exists at {fs_subject_dir}. "
                "Remove the directory manually before rerunning."
            )
        else:
            shutil.rmtree(fs_subject_dir, ignore_errors=True)

    cmd = ["recon-all", "-subject", f"sub-{subject_id}", "-i", str(t1_file)]
    if t2_file:
        cmd += ["-T2", str(t2_file), "-T2pial"]
    cmd += ["-all", "-sd", str(fs_subjects_root)]

    if parallel:
        cmd.append("-parallel")

    logger.info(f"Running recon-all for subject {subject_id}")
    if runner:
        exit_code = runner.run(cmd, logger=logger)
    else:
        exit_code = subprocess.call(cmd)

    if exit_code != 0:
        raise PreprocessError(
            f"recon-all failed for subject {subject_id} (exit {exit_code})."
        )

    _run_subcortical_segmentations(
        subject_id, fs_subjects_root, logger=logger, runner=runner
    )

tit.pre.recon_all.run_subcortical_segmentations

run_subcortical_segmentations(project_dir: str, subject_id: str, *, logger, runner: CommandRunner | None = None) -> None

Run thalamic nuclei and hippocampal subfield segmentations standalone.

Resolves the FreeSurfer subjects directory from the project layout and delegates to the internal segmentation runner. Intended for cases where recon-all has already completed and only the subcortical step needs to be (re-)run.

Parameters

project_dir : str
    BIDS project root.
subject_id : str
    Subject identifier without the sub- prefix.
logger : logging.Logger
    Logger for progress output.
runner : CommandRunner, optional
    Subprocess runner for streaming output.
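
A minimal standalone call, assuming recon-all has already completed for the subject:

import logging

from tit.pre import run_subcortical_segmentations

logger = logging.getLogger("preprocess")
run_subcortical_segmentations("/data/my_project", "001", logger=logger)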

Source code in tit/pre/recon_all.py
def run_subcortical_segmentations(
    project_dir: str,
    subject_id: str,
    *,
    logger,
    runner: CommandRunner | None = None,
) -> None:
    """Run thalamic nuclei and hippocampal subfield segmentations standalone.

    Resolves the FreeSurfer subjects directory from the project layout and
    delegates to the internal segmentation runner. Intended for cases where
    recon-all has already completed and only the subcortical step needs to
    be (re-)run.

    Parameters
    ----------
    project_dir : str
        BIDS project root.
    subject_id : str
        Subject identifier without the ``sub-`` prefix.
    logger : logging.Logger
        Logger for progress output.
    runner : CommandRunner, optional
        Subprocess runner for streaming output.
    """
    pm = get_path_manager(project_dir)
    fs_subject_dir = Path(pm.freesurfer_subject(subject_id))
    fs_subjects_root = fs_subject_dir.parent
    _run_subcortical_segmentations(
        subject_id, fs_subjects_root, logger=logger, runner=runner
    )

tit.pre.charm.run_charm

run_charm(project_dir: str, subject_id: str, *, logger, runner: CommandRunner | None = None) -> None

Run SimNIBS charm for a subject.

Parameters

project_dir : str
    BIDS project root.
subject_id : str
    Subject identifier without the sub- prefix.
logger : logging.Logger
    Logger used for progress and command output.
runner : CommandRunner, optional
    Subprocess runner used to stream output.

Source code in tit/pre/charm.py
def run_charm(
    project_dir: str,
    subject_id: str,
    *,
    logger,
    runner: CommandRunner | None = None,
) -> None:
    """Run SimNIBS charm for a subject.

    Parameters
    ----------
    project_dir : str
        BIDS project root.
    subject_id : str
        Subject identifier without the `sub-` prefix.
    logger : logging.Logger
        Logger used for progress and command output.
    runner : CommandRunner, optional
        Subprocess runner used to stream output.
    """
    pm = get_path_manager(project_dir)

    simnibs_subject_dir = Path(pm.sub(subject_id))
    simnibs_subject_dir.mkdir(parents=True, exist_ok=True)
    m2m_dir = Path(pm.m2m(subject_id))

    t1_file, t2_file = _find_anat_files(subject_id)
    if not t1_file:
        bids_anat_dir = Path(pm.bids_anat(subject_id))
        raise PreprocessError(f"No T1 image found in {bids_anat_dir}")

    if m2m_dir.exists():
        raise PreprocessError(
            f"m2m output already exists at {m2m_dir}. "
            "Remove the directory manually before rerunning."
        )

    cmd = ["charm", "--forcesform", subject_id, str(t1_file)]
    if t2_file:
        cmd.append(str(t2_file))

    logger.info(f"Running SimNIBS charm for subject {subject_id}")
    if runner is None:
        runner = CommandRunner()
    exit_code = runner.run(cmd, logger=logger, cwd=str(simnibs_subject_dir))

    if exit_code != 0:
        raise PreprocessError(
            f"charm failed for subject {subject_id} (exit {exit_code})."
        )

tit.pre.charm.run_subject_atlas

run_subject_atlas(project_dir: str, subject_id: str, *, logger, runner: CommandRunner | None = None) -> None

Run subject_atlas to create .annot files for a subject.

This should be called after charm completes successfully. Generates all three atlases: a2009s, DK40, and HCP_MMP1.

Parameters

project_dir : str
    BIDS project root.
subject_id : str
    Subject identifier without the sub- prefix.
logger : logging.Logger
    Logger used for progress and command output.
runner : CommandRunner, optional
    Subprocess runner used to stream output.
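
A minimal call, assuming charm has already produced the subject's m2m folder (imported from tit.pre.charm per the module path above):

import logging

from tit.pre.charm import run_subject_atlas

logger = logging.getLogger("preprocess")
run_subject_atlas("/data/my_project", "001", logger=logger)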

Source code in tit/pre/charm.py
def run_subject_atlas(
    project_dir: str,
    subject_id: str,
    *,
    logger,
    runner: CommandRunner | None = None,
) -> None:
    """Run subject_atlas to create .annot files for a subject.

    This should be called after charm completes successfully.
    Generates all three atlases: a2009s, DK40, and HCP_MMP1.

    Parameters
    ----------
    project_dir : str
        BIDS project root.
    subject_id : str
        Subject identifier without the `sub-` prefix.
    logger : logging.Logger
        Logger used for progress and command output.
    runner : CommandRunner, optional
        Subprocess runner used to stream output.
    """

    pm = get_path_manager(project_dir)
    m2m_dir = Path(pm.m2m(subject_id))

    if not m2m_dir.exists():
        raise PreprocessError(f"m2m folder not found at {m2m_dir}. Run charm first.")

    # Output directory for atlas segmentation
    output_dir = m2m_dir / "segmentation"
    output_dir.mkdir(parents=True, exist_ok=True)

    if runner is None:
        runner = CommandRunner()

    logger.info(
        f"Running subject_atlas for subject {subject_id} with atlases: {', '.join(ATLASES)}"
    )

    for atlas in ATLASES:
        cmd = [
            "subject_atlas",
            "-a",
            atlas,
            "-o",
            str(output_dir),
            subject_id,
        ]

        logger.info(f"  Creating {atlas} atlas...")
        exit_code = runner.run(cmd, logger=logger)
        if exit_code != 0:
            raise PreprocessError(
                f"subject_atlas failed for atlas {atlas} (exit code {exit_code})"
            )

    logger.info(f"All atlases created successfully for subject {subject_id}")

tit.pre.tissue_analyzer.run_tissue_analysis

run_tissue_analysis(project_dir: str, subject_id: str, *, tissues: Iterable[str] = DEFAULT_TISSUES, logger: Logger, runner: CommandRunner | None = None) -> dict

Run tissue analysis for a subject.

Parameters

project_dir : str
    BIDS project root.
subject_id : str
    Subject identifier without the 'sub-' prefix.
tissues : iterable of str
    Tissue types to analyze (default: bone, csf, skin).
logger : logging.Logger
    Logger for progress output.
runner : CommandRunner, optional
    Not used; kept for API compatibility.

Returns

dict
    Analysis results for each tissue type.
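
A minimal call restricted to two tissue types (a sketch; tissue names follow the documented defaults):

import logging

from tit.pre import run_tissue_analysis

logger = logging.getLogger("preprocess")
results = run_tissue_analysis(
    "/data/my_project", "001", tissues=["bone", "skin"], logger=logger
)
# results is keyed by tissue type, e.g. results["bone"]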

Source code in tit/pre/tissue_analyzer.py
def run_tissue_analysis(
    project_dir: str,
    subject_id: str,
    *,
    tissues: Iterable[str] = DEFAULT_TISSUES,
    logger: logging.Logger,
    runner: CommandRunner | None = None,
) -> dict:
    """Run tissue analysis for a subject.

    Parameters
    ----------
    project_dir : str
        BIDS project root.
    subject_id : str
        Subject identifier without the 'sub-' prefix.
    tissues : iterable of str
        Tissue types to analyze (default: bone, csf, skin).
    logger : logging.Logger
        Logger for progress output.
    runner : CommandRunner, optional
        Not used, kept for API compatibility.

    Returns
    -------
    dict
        Analysis results for each tissue type.
    """
    pm = get_path_manager(project_dir)

    label_path = Path(pm.tissue_labeling(subject_id))
    if not label_path.exists():
        raise PreprocessError(f"Labeling.nii.gz not found: {label_path}")

    output_root = Path(pm.ensure(pm.tissue_analysis_output(subject_id)))
    results = {}

    for tissue in tissues:
        if tissue not in TISSUE_CONFIGS:
            logger.warning(f"Unknown tissue type: {tissue}, skipping")
            continue

        output_dir = output_root / f"{tissue}_analysis"
        analyzer = TissueAnalyzer(label_path, output_dir, tissue, logger)
        results[tissue] = analyzer.analyze()

    return results

tit.pre.qsi.qsiprep.run_qsiprep

run_qsiprep(project_dir: str, subject_id: str, *, logger: Logger, output_resolution: float = QSI_DEFAULT_OUTPUT_RESOLUTION, cpus: int | None = None, memory_gb: int | None = None, omp_threads: int = QSI_DEFAULT_OMP_THREADS, image_tag: str = QSI_DEFAULT_IMAGE_TAG, skip_bids_validation: bool = True, denoise_method: str = 'dwidenoise', unringing_method: str = 'mrdegibbs', runner: CommandRunner | None = None) -> None

Run QSIPrep preprocessing for a subject's DWI data.

This function spawns a QSIPrep Docker container as a sibling to the current SimNIBS container using Docker-out-of-Docker (DooD).

Parameters

project_dir : str
    Path to the BIDS project root directory.
subject_id : str
    Subject identifier (without 'sub-' prefix).
logger : logging.Logger
    Logger for status messages.
output_resolution : float, optional
    Target output resolution in mm. Default: 2.0.
cpus : int, optional
    Number of CPUs to allocate. Default: 8.
memory_gb : int, optional
    Memory limit in GB. Default: 32.
omp_threads : int, optional
    Number of OpenMP threads. Default: 1.
image_tag : str, optional
    QSIPrep Docker image tag. Default: '1.1.1'.
skip_bids_validation : bool, optional
    Skip BIDS validation. Default: True.
denoise_method : str, optional
    Denoising method. Default: 'dwidenoise'.
unringing_method : str, optional
    Unringing method. Default: 'mrdegibbs'.
runner : CommandRunner | None, optional
    Command runner for subprocess execution.

Raises

PreprocessError
    If QSIPrep fails or prerequisites are not met.
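
A minimal call overriding the default output resolution and resource limits (illustrative values):

import logging

from tit.pre import run_qsiprep

logger = logging.getLogger("preprocess")
run_qsiprep(
    "/data/my_project",
    "001",
    logger=logger,
    output_resolution=1.6,
    cpus=8,
    memory_gb=32,
)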

Source code in tit/pre/qsi/qsiprep.py
def run_qsiprep(
    project_dir: str,
    subject_id: str,
    *,
    logger: logging.Logger,
    output_resolution: float = const.QSI_DEFAULT_OUTPUT_RESOLUTION,
    cpus: int | None = None,
    memory_gb: int | None = None,
    omp_threads: int = const.QSI_DEFAULT_OMP_THREADS,
    image_tag: str = const.QSI_DEFAULT_IMAGE_TAG,
    skip_bids_validation: bool = True,
    denoise_method: str = "dwidenoise",
    unringing_method: str = "mrdegibbs",
    runner: CommandRunner | None = None,
) -> None:
    """
    Run QSIPrep preprocessing for a subject's DWI data.

    This function spawns a QSIPrep Docker container as a sibling to the
    current SimNIBS container using Docker-out-of-Docker (DooD).

    Parameters
    ----------
    project_dir : str
        Path to the BIDS project root directory.
    subject_id : str
        Subject identifier (without 'sub-' prefix).
    logger : logging.Logger
        Logger for status messages.
    output_resolution : float, optional
        Target output resolution in mm. Default: 2.0.
    cpus : int, optional
        Number of CPUs to allocate. Default: 8.
    memory_gb : int, optional
        Memory limit in GB. Default: 32.
    omp_threads : int, optional
        Number of OpenMP threads. Default: 1.
    image_tag : str, optional
        QSIPrep Docker image tag. Default: '1.1.1'.
    skip_bids_validation : bool, optional
        Skip BIDS validation. Default: True.
    denoise_method : str, optional
        Denoising method. Default: 'dwidenoise'.
    unringing_method : str, optional
        Unringing method. Default: 'mrdegibbs'.
    runner : CommandRunner | None, optional
        Command runner for subprocess execution.

    Raises
    ------
    PreprocessError
        If QSIPrep fails or prerequisites are not met.
    """
    logger.info(f"Starting QSIPrep for subject {subject_id}")

    # Validate DWI data exists
    is_valid, error_msg = validate_bids_dwi(project_dir, subject_id, logger)
    if not is_valid:
        raise PreprocessError(f"DWI validation failed: {error_msg}")

    # Check for existing output
    output_dir = Path(project_dir) / "derivatives" / "qsiprep" / f"sub-{subject_id}"

    if output_dir.exists():
        existing_valid, _ = validate_qsiprep_output(project_dir, subject_id)
        if existing_valid:
            raise PreprocessError(
                f"QSIPrep output already exists at {output_dir}. "
                "Remove the directory manually before rerunning."
            )

    # Create output directories
    output_dir.parent.mkdir(parents=True, exist_ok=True)
    work_dir = Path(project_dir) / "derivatives" / ".qsiprep_work"
    work_dir.mkdir(parents=True, exist_ok=True)

    # Build configuration
    config = QSIPrepConfig(
        subject_id=subject_id,
        output_resolution=output_resolution,
        resources=ResourceConfig(
            cpus=cpus,
            memory_gb=memory_gb,
            omp_threads=omp_threads,
        ),
        image_tag=image_tag,
        skip_bids_validation=skip_bids_validation,
        denoise_method=denoise_method,
        unringing_method=unringing_method,
    )

    try:
        # Build Docker command
        builder = DockerCommandBuilder(project_dir)
        cmd = builder.build_qsiprep_cmd(config)
    except DockerBuildError as e:
        raise PreprocessError(f"Failed to build QSIPrep command: {e}")

    # Ensure image is available
    if not pull_image_if_needed(const.QSI_QSIPREP_IMAGE, image_tag, logger):
        raise PreprocessError(
            f"Failed to pull QSIPrep image: {const.QSI_QSIPREP_IMAGE}:{image_tag}"
        )

    # Log the command for debugging
    logger.debug(f"QSIPrep command: {' '.join(cmd)}")

    # Run the container
    if runner is None:
        runner = CommandRunner()

    logger.info(f"Running QSIPrep for subject {subject_id}...")
    returncode = runner.run(cmd, logger=logger)

    if returncode != 0:
        raise PreprocessError(f"QSIPrep failed with exit code {returncode}")

    # Validate output
    is_valid, error_msg = validate_qsiprep_output(project_dir, subject_id)
    if not is_valid:
        raise PreprocessError(f"QSIPrep output validation failed: {error_msg}")

    logger.info(f"QSIPrep completed successfully for subject {subject_id}")

tit.pre.qsi.qsirecon.run_qsirecon

run_qsirecon(project_dir: str, subject_id: str, *, logger: Logger, recon_specs: list[str] | None = None, atlases: list[str] | None = None, use_gpu: bool = False, cpus: int | None = None, memory_gb: int | None = None, omp_threads: int = QSI_DEFAULT_OMP_THREADS, image_tag: str = QSI_DEFAULT_IMAGE_TAG, skip_odf_reports: bool = True, runner: CommandRunner | None = None) -> None

Run QSIRecon reconstruction for a subject's preprocessed DWI data.

This function spawns QSIRecon Docker containers as siblings to the current SimNIBS container using Docker-out-of-Docker (DooD).

QSIRecon requires QSIPrep output as input. Multiple reconstruction specs can be run sequentially.

Parameters

project_dir : str
    Path to the BIDS project root directory.
subject_id : str
    Subject identifier (without 'sub-' prefix).
logger : logging.Logger
    Logger for status messages.
recon_specs : list[str] | None, optional
    List of reconstruction specifications to run. Default: ['mrtrix_multishell_msmt_ACT-fast']. Available specs include mrtrix_multishell_msmt_ACT-fast, multishell_scalarfest, dipy_dki, dipy_mapmri, amico_noddi, pyafq_tractometry, etc.
atlases : list[str] | None, optional
    List of atlases for connectivity analysis. Default: ['Schaefer100', 'AAL116'].
use_gpu : bool, optional
    Enable GPU acceleration. Default: False.
cpus : int, optional
    Number of CPUs to allocate. Default: 8.
memory_gb : int, optional
    Memory limit in GB. Default: 32.
omp_threads : int, optional
    Number of OpenMP threads. Default: 1.
image_tag : str, optional
    QSIRecon Docker image tag. Default: '1.1.1'.
skip_odf_reports : bool, optional
    Skip ODF report generation. Default: True.
runner : CommandRunner | None, optional
    Command runner for subprocess execution.

Raises

PreprocessError
    If QSIRecon fails or prerequisites are not met.
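
A minimal call running a single DKI spec with one connectivity atlas (illustrative values):

import logging

from tit.pre import run_qsirecon

logger = logging.getLogger("preprocess")
run_qsirecon(
    "/data/my_project",
    "001",
    logger=logger,
    recon_specs=["dipy_dki"],
    atlases=["Schaefer100"],
)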

Source code in tit/pre/qsi/qsirecon.py
def run_qsirecon(
    project_dir: str,
    subject_id: str,
    *,
    logger: logging.Logger,
    recon_specs: list[str] | None = None,
    atlases: list[str] | None = None,
    use_gpu: bool = False,
    cpus: int | None = None,
    memory_gb: int | None = None,
    omp_threads: int = const.QSI_DEFAULT_OMP_THREADS,
    image_tag: str = const.QSI_DEFAULT_IMAGE_TAG,
    skip_odf_reports: bool = True,
    runner: CommandRunner | None = None,
) -> None:
    """
    Run QSIRecon reconstruction for a subject's preprocessed DWI data.

    This function spawns QSIRecon Docker containers as siblings to the
    current SimNIBS container using Docker-out-of-Docker (DooD).

    QSIRecon requires QSIPrep output as input. Multiple reconstruction
    specs can be run sequentially.

    Parameters
    ----------
    project_dir : str
        Path to the BIDS project root directory.
    subject_id : str
        Subject identifier (without 'sub-' prefix).
    logger : logging.Logger
        Logger for status messages.
    recon_specs : list[str] | None, optional
        List of reconstruction specifications to run. Default: ['mrtrix_multishell_msmt_ACT-fast'].
        Available specs: mrtrix_multishell_msmt_ACT-fast, multishell_scalarfest,
        dipy_dki, dipy_mapmri, amico_noddi, pyafq_tractometry, etc.
    atlases : list[str] | None, optional
        List of atlases for connectivity analysis. Default: ['Schaefer100', 'AAL116'].
    use_gpu : bool, optional
        Enable GPU acceleration. Default: False.
    cpus : int, optional
        Number of CPUs to allocate. Default: 8.
    memory_gb : int, optional
        Memory limit in GB. Default: 32.
    omp_threads : int, optional
        Number of OpenMP threads. Default: 1.
    image_tag : str, optional
        QSIRecon Docker image tag. Default: '1.1.1'.
    skip_odf_reports : bool, optional
        Skip ODF report generation. Default: True.
    runner : CommandRunner | None, optional
        Command runner for subprocess execution.

    Raises
    ------
    PreprocessError
        If QSIRecon fails or prerequisites are not met.
    """
    # Default to mrtrix_multishell_msmt_ACT-fast if no specs provided
    if recon_specs is None:
        recon_specs = ["mrtrix_multishell_msmt_ACT-fast"]

    # Default atlases for connectivity-based recon specs
    if atlases is None:
        atlases = ["Schaefer100", "AAL116"]

    logger.info(
        f"Starting QSIRecon for subject {subject_id} with specs: {recon_specs}, atlases: {atlases}"
    )

    # Validate QSIPrep output exists
    is_valid, error_msg = validate_qsiprep_output(project_dir, subject_id)
    if not is_valid:
        raise PreprocessError(
            f"QSIPrep output validation failed: {error_msg}. "
            "Run QSIPrep first before running QSIRecon."
        )

    # Create output directories
    output_base = Path(project_dir) / "derivatives" / "qsirecon"
    output_base.mkdir(parents=True, exist_ok=True)
    work_dir = Path(project_dir) / "derivatives" / ".qsirecon_work"
    work_dir.mkdir(parents=True, exist_ok=True)

    # Build configuration
    config = QSIReconConfig(
        subject_id=subject_id,
        recon_specs=recon_specs,
        atlases=atlases,
        use_gpu=use_gpu,
        resources=ResourceConfig(
            cpus=cpus,
            memory_gb=memory_gb,
            omp_threads=omp_threads,
        ),
        image_tag=image_tag,
        skip_odf_reports=skip_odf_reports,
    )

    try:
        # Build Docker command builder
        builder = DockerCommandBuilder(project_dir)
    except DockerBuildError as e:
        raise PreprocessError(f"Failed to initialize Docker: {e}")

    # Ensure image is available
    if not pull_image_if_needed(const.QSI_QSIRECON_IMAGE, image_tag, logger):
        raise PreprocessError(
            f"Failed to pull QSIRecon image: {const.QSI_QSIRECON_IMAGE}:{image_tag}"
        )

    # Create runner if not provided
    if runner is None:
        runner = CommandRunner()

    # Run each recon spec
    for spec in recon_specs:
        logger.info(f"Running QSIRecon spec: {spec}")

        # Check for existing output for this spec
        # QSIRecon outputs are organized by recon spec
        spec_output_dir = output_base / f"sub-{subject_id}"

        if spec_output_dir.exists():
            raise PreprocessError(
                f"QSIRecon output already exists at {spec_output_dir}. "
                "Remove the directory manually before rerunning."
            )

        try:
            cmd = builder.build_qsirecon_cmd(config, spec)
        except DockerBuildError as e:
            raise PreprocessError(f"Failed to build QSIRecon command: {e}")

        # Log the command for debugging
        logger.debug(f"QSIRecon command: {' '.join(cmd)}")

        # Run the container
        logger.info(f"Running QSIRecon {spec} for subject {subject_id}...")
        returncode = runner.run(cmd, logger=logger)

        if returncode != 0:
            raise PreprocessError(f"QSIRecon {spec} failed with exit code {returncode}")

        logger.info(f"QSIRecon {spec} completed for subject {subject_id}")

    logger.info(f"QSIRecon completed successfully for subject {subject_id}")

tit.pre.qsi.dti_extractor.extract_dti_tensor

extract_dti_tensor(project_dir: str, subject_id: str, *, logger: Logger, source: str = 'qsirecon', skip_registration: bool = False) -> Path

Extract DTI tensor from QSIRecon output for SimNIBS.

This function finds the DTI tensor in QSIRecon outputs, converts it to SimNIBS format, and saves it to the m2m directory.

Parameters

project_dir : str
    Path to the BIDS project root directory.
subject_id : str
    Subject identifier (without 'sub-' prefix).
logger : logging.Logger
    Logger for status messages.
source : str, optional
    Source of DTI tensor. Currently only 'qsirecon' is supported. Default: 'qsirecon'.
skip_registration : bool, optional
    If True, skip registration to SimNIBS T1 space. Use this if the tensor is already in the correct space. Default: False.

Returns

Path
    Path to the extracted DTI tensor file in the m2m directory.

Raises

PreprocessError
    If tensor extraction fails.
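
A minimal call; the returned Path points at the tensor file written into the subject's m2m directory:

import logging

from tit.pre import extract_dti_tensor

logger = logging.getLogger("preprocess")
tensor_path = extract_dti_tensor("/data/my_project", "001", logger=logger)
print(tensor_path)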

Source code in tit/pre/qsi/dti_extractor.py
def extract_dti_tensor(
    project_dir: str,
    subject_id: str,
    *,
    logger: logging.Logger,
    source: str = "qsirecon",
    skip_registration: bool = False,
) -> Path:
    """
    Extract DTI tensor from QSIRecon output for SimNIBS.

    This function finds the DTI tensor in QSIRecon outputs, converts it
    to SimNIBS format, and saves it to the m2m directory.

    Parameters
    ----------
    project_dir : str
        Path to the BIDS project root directory.
    subject_id : str
        Subject identifier (without 'sub-' prefix).
    logger : logging.Logger
        Logger for status messages.
    source : str, optional
        Source of DTI tensor. Currently only 'qsirecon' is supported.
        Default: 'qsirecon'.
    skip_registration : bool, optional
        If True, skip registration to SimNIBS T1 space. Use this if the
        tensor is already in the correct space. Default: False.

    Returns
    -------
    Path
        Path to the extracted DTI tensor file in m2m directory.

    Raises
    ------
    PreprocessError
        If tensor extraction fails.
    """
    # Delayed import to avoid circular dependencies
    import nibabel as nib

    logger.info(f"Extracting DTI tensor for subject {subject_id}")

    # Get paths
    pm = get_path_manager(project_dir)

    m2m_dir = pm.m2m(subject_id)
    if not os.path.isdir(m2m_dir):
        raise PreprocessError(
            f"m2m directory not found for subject {subject_id}. "
            "Run SimNIBS charm first."
        )

    output_path = Path(m2m_dir) / const.FILE_DTI_TENSOR

    # Check for existing output
    if output_path.exists():
        raise PreprocessError(
            f"DTI tensor already exists at {output_path}. "
            "Remove the file manually before rerunning."
        )

    # Find source tensor
    if source != "qsirecon":
        raise PreprocessError(f"Unknown source: {source}")

    qsirecon_dir = Path(project_dir) / "derivatives" / "qsirecon"

    # Try to find DSI Studio tensor components first (most common)
    dsistudio_components = _find_dsistudio_tensor_components(
        qsirecon_dir, subject_id, logger
    )

    if dsistudio_components:
        logger.info("Using DSI Studio tensor components")
        tensor_data, affine, header = _combine_dsistudio_tensor_components(
            dsistudio_components, logger
        )
        # Already in the correct format [Dxx, Dxy, Dxz, Dyy, Dyz, Dzz]
        simnibs_tensor = tensor_data
    else:
        # Try to find DKI tensor
        dt_file, _ = _find_dki_tensor_files(qsirecon_dir, subject_id, logger)

        if dt_file is not None:
            source_file = dt_file
            logger.info(f"Using DKI diffusion tensor: {dt_file}")
        else:
            # Fall back to general DTI tensor search
            source_file = _find_dti_tensor_file(qsirecon_dir, subject_id, logger)

        if source_file is None:
            raise PreprocessError(
                f"No DTI tensor found for subject {subject_id} in QSIRecon output. "
                "Ensure QSIRecon was run with DSI Studio (dsi_studio_gqi) or "
                "another DTI-producing spec like dipy_dki."
            )

        logger.info(f"Source tensor file: {source_file}")

        # Load and convert tensor
        try:
            tensor_img = nib.load(str(source_file))
            tensor_data = tensor_img.get_fdata(dtype=np.float32)
            affine = tensor_img.affine
            header = tensor_img.header
        except (OSError, ValueError) as e:
            raise PreprocessError(f"Failed to load tensor file: {e}")

        # Convert to SimNIBS format
        try:
            simnibs_tensor = _convert_tensor_to_simnibs_format(tensor_data, logger)
        except ValueError as e:
            raise PreprocessError(f"Failed to convert tensor: {e}")

    _validate_tensor(simnibs_tensor, logger)

    # Save intermediate tensor (before registration)
    intermediate_path = Path(m2m_dir) / "DTI_ACPC_tensor.nii.gz"
    try:
        intermediate_img = nib.Nifti1Image(simnibs_tensor, affine, header)
        nib.save(intermediate_img, str(intermediate_path))
        logger.info(f"Saved intermediate tensor to: {intermediate_path}")
    except (OSError, ValueError, TypeError) as e:
        raise PreprocessError(f"Failed to save intermediate tensor: {e}")

    # Register to SimNIBS T1 space
    if not skip_registration:
        simnibs_t1_path = Path(m2m_dir) / const.FILE_T1
        if not simnibs_t1_path.exists():
            raise PreprocessError(
                f"SimNIBS T1 not found at {simnibs_t1_path}. "
                "Run SimNIBS charm first."
            )

        qsiprep_t1_path = _find_qsiprep_t1(Path(project_dir), subject_id)
        if qsiprep_t1_path is None:
            logger.warning(
                "qsiprep T1 not found. Using simple resampling instead of "
                "proper registration."
            )
            _resample_tensor_to_target(
                intermediate_path, simnibs_t1_path, output_path, logger
            )
        else:
            try:
                _register_tensor_to_simnibs_t1(
                    intermediate_path,
                    qsiprep_t1_path,
                    simnibs_t1_path,
                    output_path,
                    logger,
                )
            except PreprocessError as e:
                logger.warning(
                    f"Registration failed: {e}. Falling back to simple resampling."
                )
                _resample_tensor_to_target(
                    intermediate_path, simnibs_t1_path, output_path, logger
                )
    else:
        # Just copy the intermediate file to output
        shutil.copy2(intermediate_path, output_path)
        logger.info(f"Copied tensor to output (skip_registration=True)")

    logger.info(f"Saved DTI tensor to: {output_path}")
    return output_path