
Core Utilities

TI-Toolbox provides opinionated, BIDS-compliant infrastructure for path resolution, logging, constants, and config serialization. These modules enforce a strict project structure so that all pipeline stages produce consistent, discoverable outputs.

graph LR
    INIT["tit.init()"] --> LOG[Logger]
    INIT --> PM[PathManager]
    PM --> PATHS[BIDS Paths]
    LOG --> FILE[File Logs]
    CONFIG[Config IO] --> JSON[JSON Files]
    JSON --> CLI[CLI Subprocesses]
    style INIT fill:#2d5a27,stroke:#4a8,color:#fff

Initialization

Every script or entry point starts with a single call:

import tit

tit.init("INFO")
pm = tit.get_path_manager("/data/my_project")

init() configures the tit logger hierarchy and attaches a stdout handler. get_path_manager() returns a global singleton that resolves all paths for the project.

Docker Environment

Inside Docker containers, get_path_manager() can auto-detect the project directory from the PROJECT_DIR or PROJECT_DIR_NAME environment variables. No argument is needed in that case.

PathManager

PathManager is a singleton that enforces the BIDS directory layout. All modules use it instead of constructing paths manually.

Project-Level Paths

These methods take no arguments and return top-level directories or files:

| Method | Returns |
| --- | --- |
| pm.derivatives() | <project>/derivatives |
| pm.sourcedata() | <project>/sourcedata |
| pm.simnibs() | <project>/derivatives/SimNIBS |
| pm.freesurfer() | <project>/derivatives/freesurfer |
| pm.ti_toolbox() | <project>/derivatives/ti-toolbox |
| pm.config_dir() | <project>/code/ti-toolbox/config |
| pm.montage_config() | <project>/code/ti-toolbox/config/montage_list.json |
| pm.project_status() | <project>/code/ti-toolbox/config/project_status.json |
| pm.reports() | <project>/derivatives/ti-toolbox/reports |
| pm.stats_data() | <project>/derivatives/ti-toolbox/stats/data |
| pm.qsiprep() | <project>/derivatives/qsiprep |
| pm.qsirecon() | <project>/derivatives/qsirecon |

Subject-Level Paths

Methods that accept a subject ID (sid) string without the sub- prefix:

| Method | Returns |
| --- | --- |
| pm.sub("001") | <simnibs>/sub-001 |
| pm.m2m("001") | <simnibs>/sub-001/m2m_001 |
| pm.t1("001") | .../m2m_001/T1.nii.gz |
| pm.segmentation("001") | .../m2m_001/segmentation |
| pm.tissue_labeling("001") | .../segmentation/Labeling.nii.gz |
| pm.eeg_positions("001") | .../m2m_001/eeg_positions |
| pm.rois("001") | .../m2m_001/ROIs |
| pm.simulations("001") | <simnibs>/sub-001/Simulations |
| pm.leadfields("001") | <simnibs>/sub-001/leadfields |
| pm.ex_search("001") | <simnibs>/sub-001/ex-search |
| pm.flex_search("001") | <simnibs>/sub-001/flex-search |
| pm.logs("001") | <ti-toolbox>/logs/sub-001 |
| pm.tissue_analysis_output("001") | <ti-toolbox>/tissue_analysis/sub-001 |
| pm.bids_subject("001") | <project>/sub-001 |
| pm.bids_anat("001") | <project>/sub-001/anat |
| pm.bids_dwi("001") | <project>/sub-001/dwi |
| pm.freesurfer_subject("001") | <freesurfer>/sub-001 |
| pm.freesurfer_mri("001") | <freesurfer>/sub-001/mri |
| pm.sourcedata_subject("001") | <sourcedata>/sub-001 |
| pm.qsiprep_subject("001") | <qsiprep>/sub-001 |
| pm.qsirecon_subject("001") | <qsirecon>/sub-001 |

Simulation-Level Paths

Methods that accept a subject ID and simulation name:

| Method | Returns |
| --- | --- |
| pm.simulation("001", "motor") | .../Simulations/motor |
| pm.ti_mesh("001", "motor") | .../TI/mesh/motor_TI.msh |
| pm.ti_mesh_dir("001", "motor") | .../TI/mesh |
| pm.ti_central_surface("001", "motor") | .../TI/mesh/surfaces/motor_TI_central.msh |
| pm.mti_mesh_dir("001", "motor") | .../mTI/mesh |
| pm.analysis_dir("001", "motor", "mesh") | .../Analyses/Mesh |
| pm.analysis_dir("001", "motor", "voxel") | .../Analyses/Voxel |

Additional paths for optimization runs and statistics:

| Method | Returns |
| --- | --- |
| pm.flex_search_run("001", "run_01") | .../flex-search/run_01 |
| pm.flex_manifest("001", "run_01") | .../flex-search/run_01/flex_meta.json |
| pm.flex_electrode_positions("001", "run_01") | .../flex-search/run_01/electrode_positions.json |
| pm.ex_search_run("001", "run_01") | .../ex-search/run_01 |
| pm.sourcedata_dicom("001", "anat") | <sourcedata>/sub-001/anat/dicom |
| pm.stats_output("group_comparison", "motor_study") | <ti-toolbox>/stats/group_comparison/motor_study |
| pm.logs_group() | <ti-toolbox>/logs/group_analysis |

Listing Methods

pm.list_simnibs_subjects()       # ["001", "002"] — subjects with m2m folders
pm.list_simulations("001")       # ["motor_cortex", "frontal"]
pm.list_eeg_caps("001")          # ["GSN-HydroCel-185.csv"]
pm.list_flex_search_runs("001")  # ["run_01"] — runs with metadata files

Utility

pm.ensure("/some/path")  # creates directory (with parents) and returns the path

Logging

TI-Toolbox logging is file-first. The tit logger hierarchy has propagate=False, so nothing reaches the terminal unless you explicitly opt in.

| Function | Purpose |
| --- | --- |
| setup_logging(level) | Configure the tit logger level; adds NO handlers |
| add_file_handler(log_file, level, logger_name) | Attach a FileHandler (append mode); creates parent dirs |
| add_stream_handler(logger_name, level) | Attach a StreamHandler (stdout) |
| get_file_only_logger(name, log_file, level) | Return a logger that writes ONLY to the given file |

Typical Patterns

File logging (used by pipeline modules):

from tit import setup_logging, add_file_handler

setup_logging("DEBUG")
fh = add_file_handler("/data/logs/run.log", level="DEBUG")

Terminal output (used by scripts):

import tit
tit.init("INFO")  # setup_logging + add_stream_handler in one call

Isolated file logger (used per-analysis):

from tit.logger import get_file_only_logger

log = get_file_only_logger("roi_analysis", "/data/logs/roi.log")
log.info("Analyzing ROI...")

Log Format

File handlers:

2025-01-15 14:30:00 | INFO | tit.sim.simulator | Simulation started

Stream handlers use minimal format: %(message)s.
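The file-first pattern can be sketched with the standard logging module alone; the tit helpers wrap exactly this setup. The format string below reproduces the file-handler example above (the constant names are illustrative, not the package's):

```python
import logging
import os
import tempfile

LOG_FORMAT = "%(asctime)s | %(levelname)s | %(name)s | %(message)s"
DATE_FORMAT = "%Y-%m-%d %H:%M:%S"

log_file = os.path.join(tempfile.mkdtemp(), "run.log")

# Root "tit" logger: level set, propagation off, no terminal output
root = logging.getLogger("tit")
root.setLevel(logging.DEBUG)
root.propagate = False

# Attach a file handler in append mode, as add_file_handler() does
fh = logging.FileHandler(log_file, mode="a")
fh.setFormatter(logging.Formatter(LOG_FORMAT, datefmt=DATE_FORMAT))
root.addHandler(fh)

# Child loggers propagate up to "tit", so modules just log by name
logging.getLogger("tit.sim.simulator").info("Simulation started")
fh.flush()

with open(log_file) as f:
    line = f.read().strip()
# line ends with "| INFO | tit.sim.simulator | Simulation started"
```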

Constants

All hardcoded values live in tit.constants. Key categories:

| Category | Examples |
| --- | --- |
| Directory names | DIR_DERIVATIVES, DIR_SIMNIBS, DIR_FLEX_SEARCH, DIR_ANALYSIS |
| File names | FILE_MONTAGE_LIST, FILE_T1, FILE_EGI_TEMPLATE |
| File extensions | EXT_NIFTI (.nii.gz), EXT_MESH (.msh), EXT_CSV |
| BIDS prefixes | PREFIX_SUBJECT (sub-), PREFIX_SESSION (ses-) |
| Field names | FIELD_TI_MAX (TI_max), FIELD_MTI_MAX (TI_Max), FIELD_TI_NORMAL (TI_normal) |
| Tissue tags | GM_TISSUE_TAG (2), WM_TISSUE_TAG (1), BRAIN_TISSUE_TAG_RANGES |
| Conductivities | CONDUCTIVITY_GRAY_MATTER (0.275 S/m), CONDUCTIVITY_WHITE_MATTER (0.126 S/m), 12 tissues total |
| Tissue properties | TISSUE_PROPERTIES: list of dicts with number, name, conductivity, and reference |
| Atlas names | ATLAS_DK40, ATLAS_A2009S, ATLAS_ASEG, ATLAS_APARC_ASEG |
| Analysis defaults | DEFAULT_PERCENTILES ([95, 99, 99.9]), DEFAULT_FOCALITY_CUTOFFS ([50, 75, 90, 95]), DEFAULT_RADIUS_MM (5.0) |
| Simulation | SIM_TYPE_TI, SIM_TYPE_MTI, ELECTRODE_SHAPE_ELLIPSE, DEFAULT_INTENSITY (1.0) |
| EEG nets | EEG_NETS: list of dicts with value, label, electrode_count |
| Validation bounds | VALIDATION_BOUNDS: min/max for radius, current, iterations, etc. |
| Plot settings | PLOT_DPI (600), PLOT_FIGSIZE_DEFAULT ((10, 8)) |
| Timestamps | TIMESTAMP_FORMAT_DEFAULT (%Y%m%d_%H%M%S), TIMESTAMP_FORMAT_READABLE |
| QSI integration | QSI_RECON_SPECS, QSI_ATLASES, QSI_DEFAULT_CPUS (8) |

from tit import constants as const

const.FIELD_TI_MAX         # "TI_max"
const.GM_TISSUE_TAG        # 2
const.DEFAULT_RADIUS_MM    # 5.0
const.TISSUE_PROPERTIES    # [{"number": 1, "name": "White Matter", ...}, ...]

Config IO

The tit.config_io module serializes typed config dataclasses to JSON for CLI subprocesses. This is the mechanism the GUI uses to pass configurations to optimizer and analyzer processes.

from tit.config_io import write_config_json, read_config_json

# Write: dataclass -> temp JSON file, returns path
path = write_config_json(my_flex_config, prefix="flex")

# Read: JSON file -> plain dict
data = read_config_json(path)

Union-typed fields (ROI specs, electrode specs) get a _type discriminator so the subprocess can reconstruct the correct type:

| Class | _type value |
| --- | --- |
| FlexConfig.SphericalROI | "SphericalROI" |
| FlexConfig.AtlasROI | "AtlasROI" |
| FlexConfig.SubcorticalROI | "SubcorticalROI" |
| ExConfig.PoolElectrodes | "PoolElectrodes" |
| ExConfig.BucketElectrodes | "BucketElectrodes" |
| Montage | "Montage" |
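A minimal sketch of the discriminator mechanism, using a hypothetical dataclass (not the real FlexConfig.SphericalROI) and plain stdlib serialization:

```python
import dataclasses
import json
from dataclasses import dataclass


@dataclass
class SphericalROI:  # hypothetical stand-in for a union-typed ROI spec
    center: tuple
    radius: float


def serialize_with_type(obj):
    # dataclass -> plain dict, tagged with the class name so the
    # reading subprocess knows which type to reconstruct
    d = dataclasses.asdict(obj)
    d["_type"] = type(obj).__name__
    return d


payload = json.dumps(serialize_with_type(SphericalROI((10.0, -20.0, 5.0), 5.0)))
data = json.loads(payload)
# data["_type"] == "SphericalROI"; data carries center and radius as plain JSON
```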

Error Handling

Custom exceptions are defined in domain-specific modules:

| Exception | Module | Base Class | When Raised |
| --- | --- | --- | --- |
| PreprocessError | tit.pre.utils | RuntimeError | A preprocessing step fails |
| PreprocessCancelled | tit.pre.utils | RuntimeError | User cancels a preprocessing run |
| DockerBuildError | tit.pre.qsi.docker_builder | Exception | Docker command construction fails |

from tit.pre.utils import PreprocessError, PreprocessCancelled

try:
    run_pipeline(config)
except PreprocessCancelled:
    print("Pipeline was cancelled")
except PreprocessError as e:
    print(f"Pipeline failed: {e}")

API Reference

Path Management

tit.paths.PathManager

PathManager(project_dir: str | None = None)

Centralized BIDS-compliant path management for TI-Toolbox.

Source code in tit/paths.py
def __init__(self, project_dir: str | None = None):
    self._project_dir: str | None = None
    if project_dir:
        self.project_dir = project_dir

project_dir property writable

project_dir: str | None

Get/set the project directory. Auto-detects from environment if unset.

project_dir_name property

project_dir_name: str | None

Return the basename of project_dir.

ensure

ensure(path: str) -> str

Create directory (with parents) and return path.

Source code in tit/paths.py
def ensure(self, path: str) -> str:
    """Create directory (with parents) and return path."""
    os.makedirs(path, exist_ok=True)
    return path

list_simnibs_subjects

list_simnibs_subjects() -> list[str]

List subject IDs (without 'sub-' prefix) that have an m2m folder.

Source code in tit/paths.py
def list_simnibs_subjects(self) -> list[str]:
    """List subject IDs (without 'sub-' prefix) that have an m2m folder."""
    simnibs_dir = self.simnibs() if self.project_dir else None
    if not simnibs_dir or not os.path.isdir(simnibs_dir):
        return []

    subjects = []
    for item in os.listdir(simnibs_dir):
        if not item.startswith(const.PREFIX_SUBJECT):
            continue
        sid = item.replace(const.PREFIX_SUBJECT, "", 1)
        if os.path.isdir(self.m2m(sid)):
            subjects.append(sid)

    subjects.sort(
        key=lambda x: [
            int(c) if c.isdigit() else c.lower() for c in re.split("([0-9]+)", x)
        ]
    )
    return subjects
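The sort key above implements a natural sort: digit runs compare as numbers rather than strings, so "10" sorts after "2". In isolation:

```python
import re


def natural_key(x: str):
    # Split into digit and non-digit runs; digit runs compare numerically
    return [int(c) if c.isdigit() else c.lower() for c in re.split("([0-9]+)", x)]


subjects = ["10", "2", "ctrl02", "1", "ctrl10"]
print(sorted(subjects, key=natural_key))  # ['1', '2', '10', 'ctrl02', 'ctrl10']
```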

list_simulations

list_simulations(sid: str) -> list[str]

List simulation folder names for a subject.

Source code in tit/paths.py
def list_simulations(self, sid: str) -> list[str]:
    """List simulation folder names for a subject."""
    sim_root = self.simulations(sid)

    try:
        simulations: list[str] = []
        with os.scandir(sim_root) as it:
            for entry in it:
                if entry.is_dir() and not entry.name.startswith("."):
                    simulations.append(entry.name)
        simulations.sort()
        return simulations
    except OSError:
        return []

list_eeg_caps

list_eeg_caps(sid: str) -> list[str]

List EEG cap CSV files for a subject.

Source code in tit/paths.py
def list_eeg_caps(self, sid: str) -> list[str]:
    """List EEG cap CSV files for a subject."""
    eeg_pos_dir = self.eeg_positions(sid) if self.project_dir else None
    if not eeg_pos_dir or not os.path.isdir(eeg_pos_dir):
        return []

    caps = [
        f
        for f in os.listdir(eeg_pos_dir)
        if f.endswith(const.EXT_CSV) and not f.startswith(".")
    ]
    caps.sort()
    return caps

list_flex_search_runs

list_flex_search_runs(sid: str) -> list[str]

List flex-search run folders that contain flex_meta.json or electrode_positions.json.

Source code in tit/paths.py
def list_flex_search_runs(self, sid: str) -> list[str]:
    """List flex-search run folders that contain flex_meta.json or electrode_positions.json."""
    root = self.flex_search(sid) if self.project_dir else None
    if not root or not os.path.isdir(root):
        return []

    try:
        out: list[str] = []
        with os.scandir(root) as it:
            for entry in it:
                if not entry.is_dir() or entry.name.startswith("."):
                    continue
                if os.path.isfile(
                    os.path.join(entry.path, "flex_meta.json")
                ) or os.path.isfile(
                    os.path.join(entry.path, "electrode_positions.json")
                ):
                    out.append(entry.name)
        out.sort()
        return out
    except OSError:
        return []

spherical_analysis_name staticmethod

spherical_analysis_name(x: float, y: float, z: float, radius: float, coordinate_space: str) -> str

Return folder name for a spherical analysis.

Source code in tit/paths.py
@staticmethod
def spherical_analysis_name(
    x: float, y: float, z: float, radius: float, coordinate_space: str
) -> str:
    """Return folder name for a spherical analysis."""
    coord_space_suffix = (
        "_MNI" if str(coordinate_space).upper() == "MNI" else "_subject"
    )
    return f"sphere_x{x:.2f}_y{y:.2f}_z{z:.2f}_r{float(radius)}{coord_space_suffix}"
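Copied out as a standalone function (same logic as the method above), the naming is deterministic:

```python
def spherical_analysis_name(x, y, z, radius, coordinate_space):
    # Coordinates are fixed to two decimals; radius keeps Python's float repr
    suffix = "_MNI" if str(coordinate_space).upper() == "MNI" else "_subject"
    return f"sphere_x{x:.2f}_y{y:.2f}_z{z:.2f}_r{float(radius)}{suffix}"


name = spherical_analysis_name(10, -20.5, 5, 5, "MNI")
print(name)  # sphere_x10.00_y-20.50_z5.00_r5.0_MNI
```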

cortical_analysis_name classmethod

cortical_analysis_name(*, whole_head: bool, region: str | None, atlas_name: str | None = None, atlas_path: str | None = None) -> str

Return folder name for a cortical analysis.

Source code in tit/paths.py
@classmethod
def cortical_analysis_name(
    cls,
    *,
    whole_head: bool,
    region: str | None,
    atlas_name: str | None = None,
    atlas_path: str | None = None,
) -> str:
    """Return folder name for a cortical analysis."""
    atlas_clean = cls._atlas_name_clean(atlas_name or atlas_path or "unknown_atlas")
    if whole_head:
        return f"whole_head_{atlas_clean}"
    region_val = str(region or "").strip()
    if not region_val:
        raise ValueError(
            "region is required for cortical analysis unless whole_head=True"
        )
    return f"region_{region_val}_{atlas_clean}"

analysis_output_dir

analysis_output_dir(*, sid: str, sim: str, space: str, analysis_type: str, tissue_type: str | None = None, coordinates=None, radius=None, coordinate_space: str = 'subject', whole_head: bool = False, region: str | None = None, atlas_name: str | None = None, atlas_path: str | None = None) -> str

Return analysis output directory path (does not create it).

Source code in tit/paths.py
def analysis_output_dir(
    self,
    *,
    sid: str,
    sim: str,
    space: str,
    analysis_type: str,
    tissue_type: str | None = None,
    coordinates=None,
    radius=None,
    coordinate_space: str = "subject",
    whole_head: bool = False,
    region: str | None = None,
    atlas_name: str | None = None,
    atlas_path: str | None = None,
) -> str:
    """Return analysis output directory path (does not create it)."""
    base = self.analysis_dir(sid, sim, space)
    at = str(analysis_type).lower()
    if at == "spherical":
        if not coordinates or len(coordinates) != 3 or radius is None:
            raise ValueError(
                "coordinates(3) and radius required for spherical analysis"
            )
        name = self.spherical_analysis_name(
            float(coordinates[0]),
            float(coordinates[1]),
            float(coordinates[2]),
            float(radius),
            coordinate_space,
        )
    else:
        name = self.cortical_analysis_name(
            whole_head=bool(whole_head),
            region=region,
            atlas_name=atlas_name,
            atlas_path=atlas_path,
        )
    if str(space).lower() == "voxel":
        tissue = str(tissue_type or "GM").strip().lower()
        if tissue in {"gm", "wm", "both"}:
            name = f"{name}_{tissue}"
    return os.path.join(base, name)

tit.paths.get_path_manager

get_path_manager(project_dir: str | None = None) -> PathManager

Return the global PathManager singleton.

Source code in tit/paths.py
def get_path_manager(project_dir: str | None = None) -> PathManager:
    """Return the global PathManager singleton."""
    global _path_manager_instance
    if _path_manager_instance is None:
        _path_manager_instance = PathManager()
    if project_dir is not None:
        _path_manager_instance.project_dir = project_dir
    return _path_manager_instance

tit.paths.reset_path_manager

reset_path_manager() -> None

Reset the singleton so the next call to get_path_manager creates a fresh instance.

Source code in tit/paths.py
def reset_path_manager() -> None:
    """Reset the singleton so the next call to get_path_manager creates a fresh instance."""
    global _path_manager_instance
    _path_manager_instance = None
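The singleton plumbing can be sketched with a module-level global; the PathManager class below is a minimal stand-in, but the getter and reset follow the same pattern as the two functions above:

```python
class PathManager:  # minimal stand-in for tit.paths.PathManager
    def __init__(self, project_dir=None):
        self.project_dir = project_dir


_instance = None


def get_path_manager(project_dir=None):
    global _instance
    if _instance is None:
        _instance = PathManager()
    if project_dir is not None:
        _instance.project_dir = project_dir
    return _instance


def reset_path_manager():
    global _instance
    _instance = None


a = get_path_manager("/data/my_project")
b = get_path_manager()  # same instance; project_dir preserved
reset_path_manager()
c = get_path_manager()  # fresh instance; project_dir unset
```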

Initialization

tit.init

init(level: str = 'INFO') -> None

One-call setup for scripts: configure logging and enable terminal output.

Equivalent to:

setup_logging(level)
add_stream_handler("tit", level)

Call this at the top of any script that uses the tit package to get sensible defaults with no extra boilerplate.

Source code in tit/__init__.py
def init(level: str = "INFO") -> None:
    """One-call setup for scripts: configure logging and enable terminal output.

    Equivalent to::

        setup_logging(level)
        add_stream_handler("tit", level)

    Call this at the top of any script that uses the ``tit`` package
    to get sensible defaults with no extra boilerplate.
    """
    setup_logging(level)
    add_stream_handler("tit", level)

Logging

tit.logger.setup_logging

setup_logging(level: str = 'INFO') -> None

Configure the tit logger hierarchy.

Sets the log level but adds NO handlers — file handlers are attached later via add_file_handler() and GUI handlers via Qt signal bridges.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| level | str | Log level string (DEBUG, INFO, WARNING, ERROR, CRITICAL). | 'INFO' |

Source code in tit/logger.py
def setup_logging(level: str = "INFO") -> None:
    """Configure the ``tit`` logger hierarchy.

    Sets the log level but adds NO handlers — file handlers are attached
    later via ``add_file_handler()`` and GUI handlers via Qt signal bridges.

    Args:
        level: Log level string (DEBUG, INFO, WARNING, ERROR, CRITICAL).
               Defaults to INFO.
    """
    logger = logging.getLogger("tit")
    logger.handlers.clear()
    logger.setLevel(getattr(logging, level.upper(), logging.INFO))
    logger.propagate = False  # never bubble to root/terminal

    # Quiet noisy third-party loggers
    for name in ("matplotlib", "matplotlib.font_manager", "PIL"):
        logging.getLogger(name).setLevel(logging.ERROR)

tit.logger.add_file_handler

add_file_handler(log_file: str | Path, level: str = 'DEBUG', logger_name: str = 'tit') -> FileHandler

Attach a file handler to a named logger.

Creates the parent directory if it does not exist. Returns the handler so callers can remove it when the run completes.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| log_file | str or Path | Path to the log file (opened in append mode). | required |
| level | str | Minimum log level for this handler. Defaults to DEBUG so the file captures everything. | 'DEBUG' |
| logger_name | str | Logger to attach to. Defaults to the root "tit" logger. | 'tit' |

Returns:

| Type | Description |
| --- | --- |
| FileHandler | The created FileHandler instance. |


Source code in tit/logger.py
def add_file_handler(
    log_file: str | Path,
    level: str = "DEBUG",
    logger_name: str = "tit",
) -> logging.FileHandler:
    """Attach a file handler to a named logger.

    Creates the parent directory if it does not exist. Returns the handler
    so callers can remove it when the run completes.

    Args:
        log_file: Path to the log file (opened in append mode).
        level: Minimum log level for this handler. Defaults to DEBUG so the
               file captures everything.
        logger_name: Logger to attach to. Defaults to the root "tit" logger.

    Returns:
        The created FileHandler instance.
    """
    log_file = Path(log_file)
    log_file.parent.mkdir(parents=True, exist_ok=True)
    fh = logging.FileHandler(str(log_file), mode="a")
    fh.setLevel(getattr(logging, level.upper(), logging.DEBUG))
    fh.setFormatter(logging.Formatter(LOG_FORMAT, datefmt=DATE_FORMAT))
    logging.getLogger(logger_name).addHandler(fh)
    return fh
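The remove-when-done pattern the docstring suggests, sketched with stdlib logging (the path here is illustrative, not a toolbox convention):

```python
import logging
import os
import tempfile

log_file = os.path.join(tempfile.mkdtemp(), "run.log")

logger = logging.getLogger("tit")
logger.setLevel(logging.DEBUG)
logger.propagate = False

os.makedirs(os.path.dirname(log_file), exist_ok=True)  # parent dir, as add_file_handler does
fh = logging.FileHandler(log_file, mode="a")           # append mode
fh.setLevel(logging.DEBUG)
logger.addHandler(fh)

logger.info("run started")

# When the run completes, detach and close the handler
logger.removeHandler(fh)
fh.close()
logger.info("not written")  # no handler is attached for this message
```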

tit.logger.add_stream_handler

add_stream_handler(logger_name: str = 'tit', level: str = 'INFO') -> StreamHandler

Attach a stdout handler to a named logger.

Used by scripts for terminal output and by __main__ entry points so that BaseProcessThread can capture subprocess stdout for the GUI.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| logger_name | str | Logger to attach to. | 'tit' |
| level | str | Minimum log level. | 'INFO' |

Returns:

| Type | Description |
| --- | --- |
| StreamHandler | The created StreamHandler instance. |


Source code in tit/logger.py
def add_stream_handler(
    logger_name: str = "tit",
    level: str = "INFO",
) -> logging.StreamHandler:
    """Attach a stdout handler to a named logger.

    Used by scripts for terminal output and by ``__main__`` entry points
    so that ``BaseProcessThread`` can capture subprocess stdout for the GUI.

    Args:
        logger_name: Logger to attach to. Defaults to ``"tit"``.
        level: Minimum log level. Defaults to INFO.

    Returns:
        The created StreamHandler instance.
    """
    import sys

    handler = logging.StreamHandler(sys.stdout)
    handler.setLevel(getattr(logging, level.upper(), logging.INFO))
    handler.setFormatter(logging.Formatter("%(message)s"))
    logger = logging.getLogger(logger_name)
    logger.addHandler(handler)
    return handler

tit.logger.get_file_only_logger

get_file_only_logger(name: str, log_file: str | Path, level: str = 'DEBUG') -> Logger

Return a logger that writes ONLY to log_file — no console output.

If a logger with name already exists, its handlers are replaced so that repeated calls (e.g. across ROIs) always point at the correct file.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| name | str | Logger name (should be unique per use-case). | required |
| log_file | str or Path | Path to the log file. | required |
| level | str | Minimum log level. | 'DEBUG' |

Returns:

| Type | Description |
| --- | --- |
| Logger | A configured logging.Logger. |


Source code in tit/logger.py
def get_file_only_logger(
    name: str,
    log_file: str | Path,
    level: str = "DEBUG",
) -> logging.Logger:
    """Return a logger that writes ONLY to *log_file* — no console output.

    If a logger with *name* already exists its handlers are replaced so that
    repeated calls (e.g. across ROIs) always point at the correct file.

    Args:
        name: Logger name (should be unique per use-case).
        log_file: Path to the log file.
        level: Minimum log level. Defaults to DEBUG.

    Returns:
        A configured :class:`logging.Logger`.
    """
    logger = logging.getLogger(name)
    logger.handlers.clear()
    logger.setLevel(getattr(logging, level.upper(), logging.DEBUG))
    logger.propagate = False  # never bubble to root/terminal
    add_file_handler(log_file, level=level, logger_name=name)
    return logger

Config IO

tit.config_io.serialize_config

serialize_config(config: Any) -> dict[str, Any]

Convert a dataclass to a JSON-serializable dict.

Handles:

- Enum fields (uses .value)
- Nested dataclasses (recursed)
- Union-typed ROI / electrode specs (adds _type discriminator)
- None values (preserved)

Source code in tit/config_io.py
def serialize_config(config: Any) -> dict[str, Any]:
    """Convert a dataclass to a JSON-serializable dict.

    Handles:
    - Enum fields (uses ``.value``)
    - Nested dataclasses (recursed)
    - Union-typed ROI / electrode specs (adds ``_type`` discriminator)
    - None values (preserved)
    """
    return _serialize(config)

tit.config_io.write_config_json

write_config_json(config: Any, prefix: str = 'config') -> str

Serialize config dataclass to a temporary JSON file.

Returns the absolute file path.

Source code in tit/config_io.py
def write_config_json(config: Any, prefix: str = "config") -> str:
    """Serialize config dataclass to a temporary JSON file.

    Returns the absolute file path.
    """
    data = serialize_config(config)
    fd, path = tempfile.mkstemp(prefix=f"{prefix}_", suffix=".json")
    with os.fdopen(fd, "w") as f:
        json.dump(data, f, indent=2)
    return path
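The write/read round trip in isolation, mirroring the source above. This simplified sketch takes a plain dict built from a hypothetical dataclass rather than running the real serializer:

```python
import json
import os
import tempfile
from dataclasses import asdict, dataclass


@dataclass
class DemoConfig:  # hypothetical config, stands in for a real tit dataclass
    subject: str
    radius_mm: float


def write_config_json(data: dict, prefix: str = "config") -> str:
    # mkstemp gives a unique temp path; the prefix marks which tool wrote it
    fd, path = tempfile.mkstemp(prefix=f"{prefix}_", suffix=".json")
    with os.fdopen(fd, "w") as f:
        json.dump(data, f, indent=2)
    return path


path = write_config_json(asdict(DemoConfig("001", 5.0)), prefix="flex")
with open(path) as f:
    data = json.load(f)
print(data)  # {'subject': '001', 'radius_mm': 5.0}
```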

tit.config_io.read_config_json

read_config_json(path: str) -> dict[str, Any]

Read a JSON config file and return the parsed dict.

Source code in tit/config_io.py
def read_config_json(path: str) -> dict[str, Any]:
    """Read a JSON config file and return the parsed dict."""
    with open(path) as f:
        return json.load(f)

Exceptions

tit.pre.utils.PreprocessError

Bases: RuntimeError

Raised when a preprocessing step fails.

tit.pre.utils.PreprocessCancelled

Bases: RuntimeError

Raised when a preprocessing run is cancelled.

tit.pre.qsi.docker_builder.DockerBuildError

Bases: Exception

Raised when Docker command construction fails.