FairShip
run_tracking_benchmark Namespace Reference

Functions

None run_phase (str description, list[str] cmd)
 

Variables

ArgumentParser parser = ArgumentParser(description="Tracking performance benchmark for straw tube spectrometer")
 
Namespace options = parser.parse_args()
 
str tag = options.tag
 
str sim_file = f"{options.outputDir}/sim_{tag}.root"
 
str geo_file = f"{options.outputDir}/geo_{tag}.root"
 
str reco_file = f"{options.outputDir}/sim_{tag}_rec.root"
 
str json_file = options.output_json or f"{options.outputDir}/tracking_metrics.json"
 
str histo_file = f"{options.outputDir}/tracking_benchmark_histos.root"
 
str fairship = os.environ.get("FAIRSHIP", "")
 
str sim_script = os.path.join(fairship, "macro", "run_simScript.py") if fairship else "macro/run_simScript.py"
 
list sim_cmd
 
str reco_script = os.path.join(fairship, "macro", "ShipReco.py") if fairship else "macro/ShipReco.py"
 
list reco_cmd
 
TrackingBenchmark bench = tracking_benchmark.TrackingBenchmark(sim_file, reco_file, geo_file)
 

Detailed Description

Run a full tracking benchmark: simulation -> reconstruction -> metrics.

Fires a particle gun upstream of the straw tube spectrometer (T1-T4),
runs digitisation and reconstruction with template matching pattern
recognition, then computes tracking performance metrics.

Each phase (sim, reco) runs as a subprocess because FairRoot singletons
prevent creating multiple FairRunSim instances in the same process.

Example usage:
    python macro/run_tracking_benchmark.py -n 200 --seed 42 --tag test
    python macro/run_tracking_benchmark.py -n 1000 --nTracks 5 --tag multi
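The three-stage chain described above can be sketched as successive subprocess invocations, each in a fresh interpreter so FairRoot singletons never collide. This is a minimal illustration with placeholder commands; the real script invokes run_simScript.py, ShipReco.py, and the metrics step with parsed options:

```python
import subprocess
import sys

# Placeholder commands stand in for the real simulation, reconstruction,
# and metrics scripts.
phases = [
    ("Simulation", [sys.executable, "-c", "print('sim done')"]),
    ("Reconstruction", [sys.executable, "-c", "print('reco done')"]),
    ("Metrics", [sys.executable, "-c", "print('metrics done')"]),
]
for description, cmd in phases:
    # One fresh process per phase: FairRunSim singletons cannot clash.
    result = subprocess.run(cmd, capture_output=True, text=True, check=False)
    if result.returncode != 0:
        sys.exit(result.returncode)
    print(f"{description}: {result.stdout.strip()}")
```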

Function Documentation

◆ run_phase()

None run_tracking_benchmark.run_phase ( str  description,
list[str]  cmd 
)
Run a subprocess phase, raising on failure.

Definition at line 62 of file run_tracking_benchmark.py.

62 def run_phase(description: str, cmd: list[str]) -> None:
63     """Run a subprocess phase, raising on failure."""
64     print("=" * 60)
65     print(f"{description}")
66     print("=" * 60)
67     result = subprocess.run(cmd, check=False)
68     if result.returncode != 0:
69         print(f"FAILED: {description} (exit code {result.returncode})")
70         sys.exit(result.returncode)
71
72
73 # ============================================================
74 # Phase 1: Simulation via run_simScript.py
75 # ============================================================
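Because the function exits via sys.exit, a failing phase raises SystemExit carrying the child's return code. A self-contained, trimmed copy of the function (printing omitted) demonstrates this:

```python
import subprocess
import sys

def run_phase(description: str, cmd: list[str]) -> None:
    """Run a subprocess phase, raising on failure (trimmed copy)."""
    result = subprocess.run(cmd, check=False)
    if result.returncode != 0:
        print(f"FAILED: {description} (exit code {result.returncode})")
        sys.exit(result.returncode)

try:
    run_phase("doomed phase", [sys.executable, "-c", "raise SystemExit(3)"])
except SystemExit as exc:
    # sys.exit propagates the child's exit code to the caller
    assert exc.code == 3
```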

Variable Documentation

◆ bench

TrackingBenchmark run_tracking_benchmark.bench = tracking_benchmark.TrackingBenchmark(sim_file, reco_file, geo_file)

Definition at line 152 of file run_tracking_benchmark.py.

◆ fairship

str run_tracking_benchmark.fairship = os.environ.get("FAIRSHIP", "")

Definition at line 59 of file run_tracking_benchmark.py.

◆ geo_file

str run_tracking_benchmark.geo_file = f"{options.outputDir}/geo_{tag}.root"

Definition at line 54 of file run_tracking_benchmark.py.

◆ histo_file

str run_tracking_benchmark.histo_file = f"{options.outputDir}/tracking_benchmark_histos.root"

Definition at line 57 of file run_tracking_benchmark.py.

◆ json_file

str run_tracking_benchmark.json_file = options.output_json or f"{options.outputDir}/tracking_metrics.json"

Definition at line 56 of file run_tracking_benchmark.py.

◆ options

Namespace run_tracking_benchmark.options = parser.parse_args()

Definition at line 47 of file run_tracking_benchmark.py.

◆ parser

ArgumentParser run_tracking_benchmark.parser = ArgumentParser(description="Tracking performance benchmark for straw tube spectrometer")

Definition at line 26 of file run_tracking_benchmark.py.

◆ reco_cmd

list run_tracking_benchmark.reco_cmd
Initial value:
= [
    sys.executable,
    reco_script,
    "-f",
    sim_file,
    "-g",
    geo_file,
    "-n",
    str(options.nEvents),
    "--realPR",
    "AR",
]

Definition at line 122 of file run_tracking_benchmark.py.
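A command built as an argv-style list like this is handed to subprocess.run without shell interpolation, so paths need no quoting, but numeric options such as options.nEvents must be stringified. A minimal sketch with a hypothetical event count standing in for the parsed option:

```python
import subprocess
import sys

n_events = 200  # hypothetical stand-in for options.nEvents
# argv-style list: each element arrives as one argument, no shell involved
cmd = [
    sys.executable,
    "-c",
    "import sys; print(sys.argv[1:])",
    "-n",
    str(n_events),  # subprocess arguments must be strings
]
result = subprocess.run(cmd, capture_output=True, text=True, check=False)
assert result.returncode == 0
```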

◆ reco_file

str run_tracking_benchmark.reco_file = f"{options.outputDir}/sim_{tag}_rec.root"

Definition at line 55 of file run_tracking_benchmark.py.

◆ reco_script

str run_tracking_benchmark.reco_script = os.path.join(fairship, "macro", "ShipReco.py") if fairship else "macro/ShipReco.py"

Definition at line 121 of file run_tracking_benchmark.py.

◆ sim_cmd

list run_tracking_benchmark.sim_cmd

Definition at line 77 of file run_tracking_benchmark.py.

◆ sim_file

str run_tracking_benchmark.sim_file = f"{options.outputDir}/sim_{tag}.root"

Definition at line 53 of file run_tracking_benchmark.py.

◆ sim_script

str run_tracking_benchmark.sim_script = os.path.join(fairship, "macro", "run_simScript.py") if fairship else "macro/run_simScript.py"

Definition at line 76 of file run_tracking_benchmark.py.

◆ tag

str run_tracking_benchmark.tag = options.tag

Definition at line 52 of file run_tracking_benchmark.py.
