HFSS log parser – pyedb.workflows.utilities.hfss_log_parser#

HFSS log file parser for extracting simulation results and metrics.

Top-level façade#

class pyedb.workflows.utilities.hfss_log_parser.HFSSLogParser(log_path: str | Path)#

Bases: object

High-level parser that orchestrates all block parsers.

This class provides the main interface for parsing HFSS log files. It coordinates multiple specialized parsers to extract project info, mesh statistics, adaptive passes, and sweep data.

Parameters:
log_path : str or pathlib.Path

Path to the HFSS log file to parse.

Examples

>>> from pathlib import Path
>>> log = HFSSLogParser("/tmp/project.aedt.batchinfo.1234/hfss.log")
>>> data = log.parse()
>>> data.is_converged()
True

Parse and check for errors:

>>> log = HFSSLogParser("simulation.log")
>>> result = log.parse()
>>> if result.errors():
...     print("Errors found:", result.errors())
... else:
...     print("No errors detected")
parse() → ParsedLog#

Execute all sub-parsers and return a unified object.

Returns:
ParsedLog

Structured representation of the entire log including project info, mesh statistics, adaptive passes, and sweep data.

Examples

>>> log = HFSSLogParser("hfss.log")
>>> result = log.parse()
>>> print(f"Converged: {result.is_converged()}")
>>> print(f"Passes: {len(result.adaptive)}")

Aggregated result object#

class pyedb.workflows.utilities.hfss_log_parser.ParsedLog(project: ProjectInfo, init_mesh: InitMesh, adaptive: list[AdaptivePass], sweep: Sweep | None)#

Bases: object

Root container returned by HFSSLogParser.parse().

This class holds all parsed information from an HFSS log file and provides convenience methods for checking convergence, completion status, and extracting specific metrics.

Attributes:
project : ProjectInfo

Project meta-data including name, file path, and design information.

init_mesh : InitMesh

Initial mesh statistics.

adaptive : list of AdaptivePass

Adaptive passes in chronological order.

sweep : Sweep or None

Frequency-sweep summary, or None if no sweep was performed.

Examples

>>> from pathlib import Path
>>> parsed = ParsedLog(
...     project=ProjectInfo(name="Test", file=Path("/tmp/test.aedt")),
...     init_mesh=InitMesh(tetrahedra=5000, memory_mb=100, real_time_sec=30, cpu_time_sec=28),
...     adaptive=[],
...     sweep=None,
... )
>>> parsed.project.name
'Test'
adaptive_passes() → list[AdaptivePass]#

Return the list of adaptive passes.

Returns:
list of AdaptivePass

All adaptive passes in chronological order.

Examples

>>> parsed = ParsedLog(project=..., init_mesh=..., adaptive=[pass1, pass2], sweep=None)
>>> passes = parsed.adaptive_passes()
>>> len(passes)
2
errors() → list[str]#

Extract error lines from the log file.

Searches the log for lines containing error markers like [error] or *** ERROR ***. Warnings are ignored.

Returns:
list of str

List of stripped error lines, empty if none found.

Examples

>>> parsed = ParsedLog(project=..., init_mesh=..., adaptive=[], sweep=None)
>>> errs = parsed.errors()
>>> len(errs)
0
is_completed() → bool#

Check if the simulation completed successfully.

A simulation is considered complete only when the adaptive process converged and a frequency sweep was executed.

Returns:
bool

True if converged and sweep completed, False otherwise.

Examples

>>> parsed = ParsedLog(
...     project=ProjectInfo(name="T", file=Path("/t")),
...     init_mesh=InitMesh(tetrahedra=100, memory_mb=10, real_time_sec=5, cpu_time_sec=5),
...     adaptive=[
...         AdaptivePass(
...             pass_nr=1,
...             freq_hz=1e9,
...             tetrahedra=100,
...             matrix_size=50,
...             memory_mb=256,
...             delta_s=0.01,
...             converged=True,
...             elapsed_sec=10,
...         )
...     ],
...     sweep=Sweep(type="Interpolating", frequencies=11, solved=[1e9], elapsed_sec=30),
... )
>>> parsed.is_completed()
True
is_converged() → bool#

Check if the adaptive solver declared convergence.

Returns:
bool

True if at least one adaptive pass converged, False otherwise.

Examples

>>> parsed = ParsedLog(
...     project=ProjectInfo(name="T", file=Path("/t")),
...     init_mesh=InitMesh(tetrahedra=100, memory_mb=10, real_time_sec=5, cpu_time_sec=5),
...     adaptive=[
...         AdaptivePass(
...             pass_nr=1,
...             freq_hz=1e9,
...             tetrahedra=100,
...             matrix_size=50,
...             memory_mb=10,
...             delta_s=0.01,
...             converged=True,
...             elapsed_sec=10,
...         )
...     ],
...     sweep=None,
... )
>>> parsed.is_converged()
True
memory_on_convergence() → float#

Memory consumed by the last converged adaptive pass.

Returns:
float

Memory in megabytes, or math.nan if no pass converged.

Examples

>>> parsed = ParsedLog(
...     project=ProjectInfo(name="T", file=Path("/t")),
...     init_mesh=InitMesh(tetrahedra=100, memory_mb=10, real_time_sec=5, cpu_time_sec=5),
...     adaptive=[
...         AdaptivePass(
...             pass_nr=1,
...             freq_hz=1e9,
...             tetrahedra=100,
...             matrix_size=50,
...             memory_mb=256.5,
...             delta_s=0.01,
...             converged=True,
...             elapsed_sec=10,
...         )
...     ],
...     sweep=None,
... )
>>> parsed.memory_on_convergence()
256.5
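The selection logic this method documents — take the last pass flagged as converged, fall back to NaN — can be illustrated with a self-contained sketch. Plain dicts stand in for AdaptivePass records here; the real method operates on the parsed dataclasses:

```python
import math

def last_converged_memory(passes: list[dict]) -> float:
    """Return memory_mb of the last converged pass, or NaN (illustrative sketch)."""
    converged = [p for p in passes if p["converged"]]
    return converged[-1]["memory_mb"] if converged else math.nan

passes = [
    {"pass_nr": 1, "memory_mb": 128.0, "converged": False},
    {"pass_nr": 2, "memory_mb": 256.5, "converged": True},
]
print(last_converged_memory(passes))          # 256.5
print(math.isnan(last_converged_memory([])))  # True
```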
to_dict() → dict#

Deep-convert the entire object to JSON-serializable primitives.

Returns:
dict

Plain dict/list/scalar structure suitable for JSON serialization.

Examples

>>> parsed = ParsedLog(project=..., init_mesh=..., adaptive=[], sweep=None)
>>> data_dict = parsed.to_dict()
>>> isinstance(data_dict, dict)
True
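Because to_dict() returns only primitives, its result can be passed straight to json.dumps(). A minimal self-contained illustration (the dict below is a hypothetical stand-in for a real to_dict() result, not actual parser output):

```python
import json

# Hypothetical stand-in for ParsedLog.to_dict() output
data_dict = {
    "project": {"name": "Test", "file": "/tmp/test.aedt"},
    "init_mesh": {"tetrahedra": 5000, "memory_mb": 100.0},
    "adaptive": [],
    "sweep": None,
}
payload = json.dumps(data_dict, indent=2)  # serializes without a custom encoder
print(payload)
```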

Data containers#

class pyedb.workflows.utilities.hfss_log_parser.ProjectInfo(name: str, file: Path, design: str = '', user: str = '', cmd_line: str = '')#

Bases: object

Basic meta-data extracted from the header of an HFSS batch log.

Attributes:
name : str

Project name (without extension).

file : pathlib.Path

Full path to the project file.

design : str, optional

Active design name. The default is "".

user : str, optional

OS user that launched the solve. The default is "".

cmd_line : str, optional

Exact command line used for the run. The default is "".

Examples

>>> from pathlib import Path
>>> info = ProjectInfo(
...     name="Patch_Antenna", file=Path("/project/antenna.aedt"), design="HFSSDesign1", user="engineer"
... )
>>> info.name
'Patch_Antenna'
class pyedb.workflows.utilities.hfss_log_parser.InitMesh(tetrahedra: int, memory_mb: float, real_time_sec: int, cpu_time_sec: int)#

Bases: object

Statistics reported during the initial tetrahedral meshing phase.

Attributes:
tetrahedra : int

Number of tetrahedra created.

memory_mb : float

Peak memory consumption in megabytes.

real_time_sec : int

Wall clock time in seconds.

cpu_time_sec : int

CPU time in seconds.

Examples

>>> mesh = InitMesh(tetrahedra=5000, memory_mb=128.5, real_time_sec=45, cpu_time_sec=42)
>>> mesh.tetrahedra
5000
>>> mesh.memory_mb
128.5
class pyedb.workflows.utilities.hfss_log_parser.AdaptivePass(pass_nr: int, freq_hz: float, tetrahedra: int, matrix_size: int, memory_mb: float, delta_s: float | None, converged: bool, elapsed_sec: int)#

Bases: object

Single adaptive solution pass with convergence metrics.

Attributes:
pass_nr : int

1-based pass index.

freq_hz : float

Target frequency in hertz.

tetrahedra : int

Number of tetrahedra at end of pass.

matrix_size : int

Order of the FEM matrix.

memory_mb : float

Memory used in megabytes.

delta_s : float or None

Maximum |ΔS| observed. The default is None until reported.

converged : bool

True if this pass triggered convergence.

elapsed_sec : int

Wall time spent in this pass.

Examples

>>> pass1 = AdaptivePass(
...     pass_nr=1,
...     freq_hz=3e9,
...     tetrahedra=10000,
...     matrix_size=5000,
...     memory_mb=256.0,
...     delta_s=0.02,
...     converged=False,
...     elapsed_sec=120,
... )
>>> pass1.pass_nr
1
>>> pass1.converged
False
class pyedb.workflows.utilities.hfss_log_parser.Sweep(type: str, frequencies: int, solved: list[float], elapsed_sec: int)#

Bases: object

Frequency-sweep summary block.

Attributes:
type : str

Sweep algorithm: "Interpolating", "Discrete", or "Fast".

frequencies : int

Total number of frequency points requested.

solved : list of float

List of frequencies (Hz) actually solved.

elapsed_sec : int

Wall clock time for the entire sweep.

Examples

>>> sweep = Sweep(type="Interpolating", frequencies=101, solved=[1e9, 2e9, 3e9], elapsed_sec=300)
>>> sweep.type
'Interpolating'
>>> len(sweep.solved)
3

Block parsers (advanced usage)#

class pyedb.workflows.utilities.hfss_log_parser.ProjectBlockParser(lines: list[str])#

Bases: BlockParser

Extract project meta-data from the log header.

This parser searches for project name, design name, user information, and command line arguments in the log file header section.

Examples

>>> lines = [
...     "Project: MyProject, Design: HFSSDesign1",
...     "Running as user: engineer",
...     'Using command line: "ansysedt.exe"',
...     "Batch Solve/Save: /path/to/project.aedt",
... ]
>>> parser = ProjectBlockParser(lines)
>>> info = parser.parse()
>>> info.name
'MyProject'
parse() → ProjectInfo#

Parse the stored lines and return a ProjectInfo instance.

Returns:
ProjectInfo

Populated project meta-data object.

Examples

>>> lines = ["Project: Antenna, Design: HFSS1", "Batch Solve/Save: /tmp/antenna.aedt"]
>>> parser = ProjectBlockParser(lines)
>>> info = parser.parse()
>>> info.name
'Antenna'
class pyedb.workflows.utilities.hfss_log_parser.InitMeshBlockParser(lines: list[str])#

Bases: BlockParser

Extract initial mesh statistics from the log.

This parser searches for the initial meshing profile section and extracts tetrahedra count, memory usage, and timing information.

Examples

>>> lines = [
...     "[PROFILE] Initial Meshing",
...     "Tetrahedra: 5000",
...     "Memory 128.5 MB",
...     "Real Time 00:45",
...     "CPU Time 00:42",
... ]
>>> parser = InitMeshBlockParser(lines)
>>> mesh = parser.parse()
>>> mesh.tetrahedra
5000
parse() → InitMesh#

Parse initial mesh statistics from log lines.

Returns:
InitMesh

Initial mesh metrics including tetrahedra count, memory, and timing.

Examples

>>> lines = ["[PROFILE] Initial Meshing", "Tetrahedra: 10000"]
>>> parser = InitMeshBlockParser(lines)
>>> mesh = parser.parse()
>>> mesh.tetrahedra
10000
class pyedb.workflows.utilities.hfss_log_parser.AdaptiveBlockParser(lines: list[str])#

Bases: BlockParser

Build a list of AdaptivePass objects from the adaptive section.

This parser extracts all adaptive pass information including convergence status, frequency, mesh statistics, and delta-S values.

Examples

>>> lines = [
...     "Adaptive Pass 1 at Frequency: 3 GHz",
...     "Tetrahedra: 10000",
...     "Matrix size: 5000",
...     "Memory 256.0 MB",
...     "Max Mag. Delta S: 0.02",
...     "[CONVERGE] Solution has converged at pass number 1",
...     "Adaptive Passes converged",
... ]
>>> parser = AdaptiveBlockParser(lines)
>>> passes = parser.parse()
>>> passes[0].pass_nr
1
>>> passes[0].converged
True
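The kind of pattern matching such a parser performs can be sketched with a stand-alone regex. The exact patterns used by AdaptiveBlockParser are internal to the module; the regex and multiplier table below are assumptions chosen to match the log lines shown above:

```python
import re

# Hypothetical pattern for "Adaptive Pass N at Frequency: X [kMG]Hz" header lines
PASS_RE = re.compile(r"Adaptive Pass (\d+) at Frequency: ([\d.]+)\s*([kMG]?)Hz")
MULT = {"": 1.0, "k": 1e3, "M": 1e6, "G": 1e9}

def parse_pass_header(line: str):
    """Extract (pass_nr, freq_hz) from an adaptive-pass header line, or None."""
    m = PASS_RE.search(line)
    if not m:
        return None
    return int(m.group(1)), float(m.group(2)) * MULT[m.group(3)]

print(parse_pass_header("Adaptive Pass 1 at Frequency: 3 GHz"))  # (1, 3000000000.0)
```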
parse() → list[AdaptivePass]#

Parse every adaptive pass and determine which one triggered convergence.

Returns:
list of AdaptivePass

Ordered list of passes (pass_nr always increases).

Examples

>>> lines = ["Adaptive Pass 1 at Frequency: 2 GHz", "Tetrahedra: 8000"]
>>> parser = AdaptiveBlockParser(lines)
>>> passes = parser.parse()
>>> len(passes)
0
class pyedb.workflows.utilities.hfss_log_parser.SweepBlockParser(lines: list[str])#

Bases: BlockParser

Extract frequency-sweep summary from the log.

This parser searches for frequency sweep information including sweep type, number of frequencies, and elapsed time.

Examples

>>> lines = [
...     "Interpolating Sweep",
...     "101 Frequencies",
...     "Frequency - 1 GHz",
...     "Frequency - 2 GHz",
...     "Elapsed time: 00:05:00",
... ]
>>> parser = SweepBlockParser(lines)
>>> sweep = parser.parse()
>>> sweep.type
'Interpolating'
>>> sweep.frequencies
101
parse() → Sweep | None#

Return sweep information or None if no sweep block exists.

Returns:
Sweep or None

Sweep summary object, or None if the log contains no sweep block.

Examples

>>> lines = ["No sweep data"]
>>> parser = SweepBlockParser(lines)
>>> parser.parse() is None
True

Base class#

class pyedb.workflows.utilities.hfss_log_parser.BlockParser(lines: list[str])#

Bases: object

Base class for a single block parser.

Parameters:
lines : list of str

Lines of text to parse from the log file.

Examples

>>> lines = ["Line 1", "Line 2"]
>>> parser = BlockParser(lines)
>>> parser.lines
['Line 1', 'Line 2']
parse() → Any#

Parse the stored lines.

Returns:
Any

Parsed data structure.

Utility helpers#

The functions below are private by convention (leading underscore) but are exposed in the documentation for contributors and advanced users.

pyedb.workflows.utilities.hfss_log_parser._to_hz(text: str) → float#

Convert a human-readable frequency string to hertz.

Parse frequency expressions with standard SI prefixes (k, M, G) and convert them to numerical values in hertz.

Parameters:
text : str

Frequency expression such as '3 GHz', '100 kHz', or '10MHz'. Spaces between value and unit are optional.

Returns:
float

Numerical value in Hz. Returns math.nan if the string cannot be parsed.

Examples

>>> _to_hz("3 GHz")
3000000000.0
>>> _to_hz("100 kHz")
100000.0
>>> _to_hz("10MHz")
10000000.0
>>> _to_hz("2.4GHz")
2400000000.0
>>> import math
>>> math.isnan(_to_hz("invalid"))
True
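One possible implementation of this conversion, shown as a hedged sketch rather than the module's actual code (the real regex and prefix handling may differ):

```python
import math
import re

# Value, optional whitespace, optional SI prefix, then "Hz" (case-insensitive)
_FREQ_RE = re.compile(r"([\d.]+)\s*([kMG]?)Hz", re.IGNORECASE)
_SI = {"": 1.0, "k": 1e3, "m": 1e6, "g": 1e9}

def to_hz_sketch(text: str) -> float:
    """Convert '3 GHz'-style strings to hertz; NaN on parse failure."""
    m = _FREQ_RE.search(text)
    if not m:
        return math.nan
    return float(m.group(1)) * _SI[m.group(2).lower()]

print(to_hz_sketch("2.4GHz"))  # 2400000000.0
```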
pyedb.workflows.utilities.hfss_log_parser._to_sec(mm_ss: str) → int#

Convert an Ansys time stamp to seconds.

Parse time stamps in various formats (MM:SS, H:MM:SS, or HH:MM:SS) and convert them to total elapsed seconds.

Parameters:
mm_ss : str

Time stamp extracted from the log in format MM:SS, H:MM:SS, or HH:MM:SS.

Returns:
int

Total elapsed seconds.

Examples

>>> _to_sec("02:30")
150
>>> _to_sec("1:15:45")
4545
>>> _to_sec("12:30:00")
45000
>>> _to_sec("00:05")
5
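The conversion reduces to splitting on ":" and weighting the fields; a self-contained sketch under the assumption that only the two documented shapes (MM:SS and [H]H:MM:SS) occur:

```python
def to_sec_sketch(stamp: str) -> int:
    """Convert 'MM:SS' or '[H]H:MM:SS' to total seconds (illustrative sketch)."""
    parts = [int(p) for p in stamp.split(":")]
    if len(parts) == 2:                  # MM:SS
        minutes, seconds = parts
        return minutes * 60 + seconds
    hours, minutes, seconds = parts      # H:MM:SS or HH:MM:SS
    return hours * 3600 + minutes * 60 + seconds

print(to_sec_sketch("1:15:45"))  # 4545
```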
pyedb.workflows.utilities.hfss_log_parser._as_dict(obj: Any) → Any#

Recursively convert dataclasses to plain Python types.

Convert dataclass instances, lists, and Path objects to JSON-serializable primitive types (dict, list, str, etc.).

Parameters:
obj : Any

Object to convert. Can be a dataclass instance, list, Path, or primitive type.

Returns:
Any

Plain Python type representation. Dataclasses become dicts, Paths become strings, lists are recursively processed, and primitives pass through unchanged.

Examples

>>> from dataclasses import dataclass
>>> from pathlib import Path
>>> @dataclass
... class Point:
...     x: int
...     y: int
>>> _as_dict(Point(1, 2))
{'x': 1, 'y': 2}
>>> _as_dict(Path("/tmp/file.txt"))
'/tmp/file.txt'
>>> _as_dict([1, 2, Path("/test")])
[1, 2, '/test']
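The recursion this helper documents can be sketched in a few lines. Note that dataclasses.asdict() alone would not suffice, since it leaves Path values untouched; a recursive converter along these lines (an illustration, not the module's actual code) handles all three documented cases:

```python
from dataclasses import dataclass, fields, is_dataclass
from pathlib import Path
from typing import Any

def as_dict_sketch(obj: Any) -> Any:
    """Recursively convert dataclasses, Paths, and lists to JSON-safe types (sketch)."""
    if is_dataclass(obj) and not isinstance(obj, type):
        return {f.name: as_dict_sketch(getattr(obj, f.name)) for f in fields(obj)}
    if isinstance(obj, Path):
        return str(obj)
    if isinstance(obj, list):
        return [as_dict_sketch(v) for v in obj]
    return obj  # primitives pass through unchanged

@dataclass
class Point:
    x: int
    y: int

print(as_dict_sketch(Point(1, 2)))  # {'x': 1, 'y': 2}
```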