SIwave log parser – pyedb.workflows.utilities.siwave_log_parser#
siwave_log_parser.py: Parse Ansys SIwave batch logs into dataclasses.
Usage#
>>> from pyedb.workflows.utilities.siwave_log_parser import SiwaveLogParser
>>> parser = SiwaveLogParser(r"C:\path\to\siwave.log")
>>> log = parser.parse()
>>> log.summary()
>>> log.to_json("siwave.json")
Overview#
The SIwave log parser extracts structured data from Ansys SIwave batch solve logs, including version information, batch execution metadata, simulation settings, warnings, profiling data, and completion status.
Key Features:
Automatic detection of simulation completion status (Normal Completion vs. Aborted)
Extraction of electrical short warnings with precise coordinates
Performance profiling data (timing, memory usage)
Support for multiple timestamp formats
JSON export capability
Quick Start#
Basic usage:
from pyedb.workflows.utilities.siwave_log_parser import SiwaveLogParser
# Parse a SIwave log file
parser = SiwaveLogParser("path/to/siwave.log")
log = parser.parse()
# Check simulation status
if log.is_completed():
    print("✓ Simulation completed successfully")
else:
    print("✗ Simulation failed or was aborted")
# Display summary
log.summary()
# Export to JSON
log.to_json("results.json", indent=2)
Top-level façade#
- class pyedb.workflows.utilities.siwave_log_parser.SiwaveLogParser(log_path: str | Path)#
Bases: object
High-level façade that orchestrates all block parsers.
Typical usage:
>>> parser = SiwaveLogParser("/tmp/siwave.log")
>>> log = parser.parse()
>>> log.summary()
>>> log.to_json("output.json")
- parse() ParsedSiwaveLog#
Execute all sub-parsers and return a unified object.
- Returns:
Structured representation of the entire log.
- Return type:
ParsedSiwaveLog
- Raises:
FileNotFoundError – If log_path does not exist.
ValueError – If a mandatory block cannot be parsed.
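Per the error contract above, callers can validate the path before parsing. A minimal, self-contained sketch of that up-front check (the helper name `check_log_path` is illustrative, not part of pyedb):

```python
from pathlib import Path


def check_log_path(log_path) -> Path:
    """Validate a log path up front, mirroring the documented FileNotFoundError contract."""
    path = Path(log_path)
    if not path.exists():
        # Same exception type the parser itself is documented to raise.
        raise FileNotFoundError(f"SIwave log not found: {path}")
    return path
```

Validating early keeps the failure close to the bad input rather than deep inside a sub-parser.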
Aggregated result object#
- class pyedb.workflows.utilities.siwave_log_parser.ParsedSiwaveLog(aedt: ~pyedb.workflows.utilities.siwave_log_parser.AEDTVersion, batch: ~pyedb.workflows.utilities.siwave_log_parser.BatchInfo, settings: ~pyedb.workflows.utilities.siwave_log_parser.SimSettings, warnings: ~typing.List[~pyedb.workflows.utilities.siwave_log_parser.WarningEntry] = <factory>, profile: ~typing.List[~pyedb.workflows.utilities.siwave_log_parser.ProfileEntry] = <factory>)#
Bases: object
Root container returned by SiwaveLogParser.parse().
- Variables:
aedt (AEDTVersion) – AEDT version information.
batch (BatchInfo) – Batch run metadata.
settings (SimSettings) – Simulation settings.
warnings (list[WarningEntry]) – Warning entries from the log.
profile (list[ProfileEntry]) – Profile/timing entries.
- is_aborted() bool#
Check if the simulation was aborted.
- Returns:
True if simulation did not complete normally, False otherwise.
- Return type:
bool
- is_completed() bool#
Check if the simulation completed normally.
- Returns:
True if status is “Normal Completion”, False otherwise.
- Return type:
bool
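The two checks are complements of each other, keyed on the literal "Normal Completion" status string. A minimal sketch of that logic, assuming the status is stored as the raw string taken from the log (the `BatchStatus` dataclass here is illustrative, not the pyedb class):

```python
from dataclasses import dataclass


@dataclass
class BatchStatus:
    # Stand-in for the BatchInfo.status field: the raw status string
    # read from the log, e.g. "Normal Completion" or "Aborted".
    status: str

    def is_completed(self) -> bool:
        # Completion is keyed on the exact "Normal Completion" string.
        return self.status == "Normal Completion"

    def is_aborted(self) -> bool:
        # Anything other than a normal completion counts as aborted/failed.
        return not self.is_completed()
```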
Data containers#
Version information#
Batch execution metadata#
Simulation settings#
Warning entries#
Profile entries#
Block parsers (advanced usage)#
These parsers handle specific sections of the log file. Most users will not need
to use them directly, as they are orchestrated by SiwaveLogParser.
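A block-parser design like this typically begins by slicing the log into named sections, then handing each slice of lines to a dedicated parser. A self-contained sketch of that splitting pattern (the header strings below are hypothetical, not the actual SIwave section markers):

```python
from typing import Dict, List


def split_into_blocks(lines: List[str], headers: List[str]) -> Dict[str, List[str]]:
    """Group log lines under the most recently seen section header."""
    blocks: Dict[str, List[str]] = {h: [] for h in headers}
    current = None
    for line in lines:
        stripped = line.strip()
        if stripped in headers:
            # Start collecting lines for this section.
            current = stripped
            continue
        if current is not None:
            blocks[current].append(line)
    return blocks
```

Each resulting list of lines is what a `BlockParser(lines)` subclass would receive.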
- class pyedb.workflows.utilities.siwave_log_parser.HeaderBlockParser(lines: List[str])#
Bases: BlockParser
Extract AEDT version information from the log header.
- parse() AEDTVersion#
Parse the stored lines and return an AEDTVersion instance.
- Returns:
Populated data object.
- Return type:
AEDTVersion
- class pyedb.workflows.utilities.siwave_log_parser.BatchSettingsBlockParser(lines: List[str])#
Bases: BlockParser
Extract batch info and simulation settings.
- parse() tuple[BatchInfo, SimSettings]#
Parse batch information and simulation settings.
- Returns:
Tuple of (BatchInfo, SimSettings).
- Return type:
tuple[BatchInfo, SimSettings]
- class pyedb.workflows.utilities.siwave_log_parser.WarningsBlockParser(lines: List[str])#
Bases: BlockParser
Extract warning entries from the log.
- parse() List[WarningEntry]#
Parse warning messages.
- Returns:
List of warning entries.
- Return type:
List[WarningEntry]
- class pyedb.workflows.utilities.siwave_log_parser.ProfileBlockParser(lines: List[str])#
Bases: BlockParser
Extract profile entries from the log.
- parse() List[ProfileEntry]#
Parse profile entries showing task timing and resource usage.
- Returns:
List of profile entries.
- Return type:
List[ProfileEntry]
Base class#
Utility helpers#
The functions below are private by convention (leading underscore) but are exposed in the documentation for contributors and advanced users.
- pyedb.workflows.utilities.siwave_log_parser._parse_ts(txt: str) datetime#
Convert timestamp strings to datetime.
Supports two formats:
- ‘11/10/2025 05:46:09 PM’ (date first)
- ‘11:55:29 AM Oct 12, 2025’ (time first)
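Both formats map directly onto datetime.strptime patterns. A sketch of the conversion (this helper mirrors the documented behaviour but is not the pyedb implementation):

```python
from datetime import datetime

# strptime patterns for the two documented layouts:
#   "11/10/2025 05:46:09 PM"  -> date first
#   "11:55:29 AM Oct 12, 2025" -> time first
_FORMATS = ("%m/%d/%Y %I:%M:%S %p", "%I:%M:%S %p %b %d, %Y")


def parse_ts(txt: str) -> datetime:
    """Try each known format in turn and return the first match."""
    for fmt in _FORMATS:
        try:
            return datetime.strptime(txt.strip(), fmt)
        except ValueError:
            continue
    raise ValueError(f"Unrecognized timestamp: {txt!r}")
```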
Examples#
Check simulation completion status#
from pyedb.workflows.utilities.siwave_log_parser import SiwaveLogParser
parser = SiwaveLogParser("siwave.log")
log = parser.parse()
if log.is_completed():
    print("Status:", log.batch.status)
    print("Runtime:", log.batch.stopped - log.batch.started)
else:
    print("Simulation was aborted or failed")
Extract warnings#
parser = SiwaveLogParser("siwave.log")
log = parser.parse()
# Get all electrical short warnings
shorts = [w for w in log.warnings if w.category == "SHORT"]
for warning in shorts:
    print(f"Short between {warning.net1} and {warning.net2}")
    print(f"  Layer: {warning.layer}")
    print(f"  Location: ({warning.x:.3f}, {warning.y:.3f}) mm")
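If you need coordinates straight from a raw log line, a regular expression is enough. The line format below is a hypothetical illustration of a short warning; the real SIwave wording may differ:

```python
import re
from typing import Optional

# Hypothetical warning line format (assumption, not the real SIwave text):
#   Electrical short between net 'VDD' and net 'GND' on layer 'TOP' at (1.250, -3.400)
SHORT_RE = re.compile(
    r"Electrical short between net '(?P<net1>[^']+)' and net '(?P<net2>[^']+)'"
    r" on layer '(?P<layer>[^']+)' at \((?P<x>-?\d+\.?\d*), (?P<y>-?\d+\.?\d*)\)"
)


def parse_short(line: str) -> Optional[dict]:
    """Return the nets, layer, and coordinates from a short warning, or None."""
    m = SHORT_RE.search(line)
    if not m:
        return None
    d = m.groupdict()
    return {"net1": d["net1"], "net2": d["net2"], "layer": d["layer"],
            "x": float(d["x"]), "y": float(d["y"])}
```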
Analyze performance#
parser = SiwaveLogParser("siwave.log")
log = parser.parse()
# Display profiling information
for entry in log.profile:
    print(f"Task: {entry.task}")
    print(f"  Real Time: {entry.real_time}")
    print(f"  CPU Time: {entry.cpu_time}")
    print(f"  Memory: {entry.memory}")
Batch processing#
from pathlib import Path
from pyedb.workflows.utilities.siwave_log_parser import SiwaveLogParser
# Process multiple log files
log_dir = Path("simulation_results")
completed = []
failed = []
for log_file in log_dir.glob("**/*.log"):
    try:
        parser = SiwaveLogParser(log_file)
        log = parser.parse()
        if log.is_completed():
            completed.append(
                {
                    "project": log.batch.path,
                    "warnings": len(log.warnings),
                    "runtime": (log.batch.stopped - log.batch.started).total_seconds(),
                }
            )
        else:
            failed.append({"project": log.batch.path, "status": log.batch.status})
    except Exception as e:
        print(f"Failed to parse {log_file}: {e}")
print(f"Completed: {len(completed)}, Failed: {len(failed)}")
Export to JSON#
parser = SiwaveLogParser("siwave.log")
log = parser.parse()
# Export with pretty formatting
log.to_json("results.json", indent=2)
# Convert to dictionary for further processing
data = log.to_dict()
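For dataclass-based results like these, JSON export usually reduces to dataclasses.asdict plus a datetime-aware encoder. A self-contained sketch of that pattern (an assumed implementation strategy, not the pyedb code; the `Batch` dataclass and its fields are illustrative):

```python
import json
from dataclasses import asdict, dataclass
from datetime import datetime


@dataclass
class Batch:
    # Illustrative stand-in for a parsed-log container; field names are assumptions.
    status: str
    started: datetime


def to_dict(obj) -> dict:
    # asdict recurses into nested dataclasses and lists of dataclasses.
    return asdict(obj)


def to_json(obj, indent=None) -> str:
    # datetime is not JSON-serializable; fall back to ISO 8601 strings.
    return json.dumps(asdict(obj), indent=indent, default=lambda o: o.isoformat())
```

The `default=` hook only fires for values json cannot encode natively, so plain strings and numbers pass through untouched.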
See Also#
pyedb.workflows.utilities.hfss_log_parser – HFSS log parser with similar functionality
workflows_api - Workflow utilities overview