JobManagerHandler#

class pyedb.workflows.job_manager.backend.job_manager_handler.JobManagerHandler(edb=None, version=None, host='localhost', port=8080)#

Synchronous façade that controls an async Job Manager service.

This class provides a thread-safe interface to manage asynchronous job execution while running the aiohttp server in a background thread.

Parameters:
edb : Optional[Edb]

PyEDB instance for automatic ANSYS path detection

version : Optional[str]

Specific ANSYS version to use (e.g., "2023.1")

host : str

Hostname or IP address to bind the server

port : int

TCP port to listen on

Raises:
ValueError

If specified ANSYS version is not found

RuntimeError

If service fails to start within timeout

Examples

>>> handler = JobManagerHandler()  
>>> handler.start_service()  
>>> print(f"Server running at {handler.url}")  
>>> # Submit jobs via REST API or handler methods
>>> handler.close()  

Attributes:
ansys_path : str

Path to ANSYS EDT executable

scheduler_type : SchedulerType

Detected scheduler type (SLURM, LSF, or NONE)

manager : JobManager

Underlying async job manager instance

host : str

Server hostname

port : int

Server port

url : str

Full server URL

started : bool

Whether the service is currently running

Overview#

submit_job

Synchronously submit a simulation job.

wait_until_done

Block until the requested job reaches a terminal state.

wait_until_all_done

Block until every job currently known to the manager is in a terminal state.

get_system_status

Get system status and scheduler information.

get_me

Get current user information.

get_jobs

Get list of all jobs with their current status.

get_scheduler_type

Get detected scheduler type.

get_cluster_partitions

Get available cluster partitions/queues.

get_job_log

Get parsed HFSS log for a finished job.

handle_submit_job

Submit a new simulation job.

get_queue_status

Get current queue status for UI display.

get_resources

Get current resource usage for UI display.

cancel_job

Cancel a running or queued job.

start_service

Start the job manager service in a background thread.

close

Gracefully shutdown the job manager service.

stop_service

Stop the aiohttp server and cleanup resources.

create_simulation_config

Create a validated HFSSSimulationConfig.

url

Get the server URL.

Import detail#

from pyedb.workflows.job_manager.backend.job_manager_handler import JobManagerHandler

Property detail#

property JobManagerHandler.url: str#

Get the server URL.

Returns:
str

Full server URL (http://host:port)

Attribute detail#

JobManagerHandler.scheduler_type#
JobManagerHandler.manager#
JobManagerHandler.sio#
JobManagerHandler.app#
JobManagerHandler.runner: aiohttp.web.AppRunner | None = None#
JobManagerHandler.site = None#
JobManagerHandler.started = False#
JobManagerHandler.resource_limits = None#

Method detail#

JobManagerHandler.submit_job(config: pyedb.workflows.job_manager.backend.job_submission.HFSSSimulationConfig, priority: int = 0, timeout: float = 30.0) → str#

Synchronously submit a simulation job.

The method is thread-safe: it marshals the async work into the background event-loop and returns the job identifier.

Parameters:
config : HFSSSimulationConfig

Fully-built and validated simulation configuration.

priority : int, optional

Job priority (higher → de-queued earlier). Default 0.

timeout : float, optional

Seconds to wait for the submission to complete. Default 30 s.

Returns:
str

Unique job identifier (same as config.jobid).

Raises:
RuntimeError

If the service is not started or the submission times out.

Exception

Any validation / scheduler error raised by the underlying coroutine.

Examples

>>> from pyedb.workflows.job_manager.backend.job_manager_handler import JobManagerHandler
>>> from pyedb.workflows.job_manager.backend.job_submission import create_hfss_config, SchedulerType
>>> handler = JobManagerHandler()
>>> handler.start_service()
>>> cfg = create_hfss_config(
...     ansys_edt_path=...,
...     jobid="my_job",
...     project_path=...,
...     scheduler_type=SchedulerType.NONE,
... )
>>> job_id = handler.submit_job(cfg, priority=0)
>>> print("submitted", job_id)
>>> # later
>>> handler.close()

JobManagerHandler.wait_until_done(job_id: str, poll_every: float = 2.0) → str#

Block until the requested job reaches a terminal state (completed, failed, or cancelled).

Returns:
str

Terminal status string.

JobManagerHandler.wait_until_all_done(poll_every: float = 2.0) → None#

Block until every job currently known to the manager is in a terminal state.
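The terminal-state polling that `wait_until_done` and `wait_until_all_done` perform can be sketched in plain Python. This is an illustration of the semantics only, not pyedb's implementation; `get_status` is a hypothetical stand-in for a real status query.

```python
import time

# Terminal states as documented for wait_until_done.
TERMINAL_STATES = {"completed", "failed", "cancelled"}

def wait_until(get_status, poll_every=0.01):
    """Poll get_status() until it returns a terminal state."""
    while True:
        status = get_status()
        if status in TERMINAL_STATES:
            return status
        time.sleep(poll_every)

# Simulate a job that completes after a few polls.
states = iter(["queued", "running", "running", "completed"])
result = wait_until(lambda: next(states))
print(result)  # completed
```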

async JobManagerHandler.get_system_status(request)#

Get system status and scheduler information.

Parameters:
request : aiohttp.web.Request

HTTP request object

Returns:
aiohttp.web.Response

JSON response with system status

async JobManagerHandler.get_me(request)#

Get current user information.

Parameters:
request : aiohttp.web.Request

HTTP request object

Returns:
aiohttp.web.Response

JSON response with username

async JobManagerHandler.get_jobs(request)#

Get list of all jobs with their current status.

Parameters:
request : aiohttp.web.Request

HTTP request object

Returns:
aiohttp.web.Response

JSON array of job objects

async JobManagerHandler.get_scheduler_type(request)#

Get detected scheduler type.

Parameters:
request : aiohttp.web.Request

HTTP request object

Returns:
aiohttp.web.Response

JSON response with scheduler type

async JobManagerHandler.get_cluster_partitions(request)#

Get available cluster partitions/queues.

Parameters:
request : aiohttp.web.Request

HTTP request object

Returns:
aiohttp.web.Response

JSON array of partition information

async JobManagerHandler.get_job_log(request)#

Get parsed HFSS log for a finished job.

Parameters:
request : aiohttp.web.Request

HTTP request with job_id in URL path

Returns:
aiohttp.web.Response
  • 200: JSON with parsed log data

  • 204: No log available yet

  • 404: Job not found

  • 500: Log parsing error

async JobManagerHandler.handle_submit_job(request)#

Submit a new simulation job.

Parameters:
request : aiohttp.web.Request

HTTP request with JSON payload containing job configuration

Returns:
aiohttp.web.Response

JSON response with job ID and status

Notes

Expected JSON payload:

{
    "config": {
        "scheduler_type": "slurm|lsf|none",
        "project_path": "/path/to/project.aedt",
        ... other HFSS config fields
    },
    "user": "username",
    "machine_nodes": [...],
    "batch_options": {...}
}
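As a sanity check, the documented payload shape can be assembled and round-tripped with the standard library alone. The field values below are placeholders for illustration, not a working configuration.

```python
import json

# Illustrative payload matching the documented schema; values are placeholders.
payload = {
    "config": {
        "scheduler_type": "none",
        "project_path": "/path/to/project.aedt",
    },
    "user": "username",
    "machine_nodes": [],
    "batch_options": {},
}

body = json.dumps(payload)
decoded = json.loads(body)
print(sorted(decoded))  # ['batch_options', 'config', 'machine_nodes', 'user']
```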
async JobManagerHandler.get_queue_status(request)#

Get current queue status for UI display.

Parameters:
request : aiohttp.web.Request

HTTP request object

Returns:
aiohttp.web.Response

JSON with queue statistics

async JobManagerHandler.get_resources(request)#

Get current resource usage for UI display.

Parameters:
request : aiohttp.web.Request

HTTP request object

Returns:
aiohttp.web.Response

JSON with current resource usage

async JobManagerHandler.cancel_job(request)#

Cancel a running or queued job.

Parameters:
request : aiohttp.web.Request

HTTP request with job_id in URL path

Returns:
aiohttp.web.Response

JSON response with cancellation status

JobManagerHandler.start_service() → None#

Start the job manager service in a background thread.

Raises:
RuntimeError

If service fails to start within 10 seconds

Notes

This method is non-blocking and returns immediately. The service runs in a daemon thread with its own event loop.
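The daemon-thread pattern described here can be sketched with the standard library alone. This is a minimal illustration of the mechanism, assuming nothing about pyedb's internals: an event loop runs in a background daemon thread, and the calling thread marshals coroutines into it, which is also how a synchronous facade like `submit_job` can wrap async work.

```python
import asyncio
import threading

# Run an event loop in a daemon thread, as the service does.
loop = asyncio.new_event_loop()
thread = threading.Thread(target=loop.run_forever, daemon=True)
thread.start()

async def ping():
    return "pong"

# Marshal a coroutine into the background loop from the calling thread.
future = asyncio.run_coroutine_threadsafe(ping(), loop)
result = future.result(timeout=5)
print(result)  # pong

# Clean shutdown: stop the loop from its own thread, then join.
loop.call_soon_threadsafe(loop.stop)
thread.join(timeout=5)
```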

JobManagerHandler.close() → None#

Gracefully shutdown the job manager service.

Notes

This method is automatically called on program exit via atexit, but can also be called explicitly for clean shutdown.

async JobManagerHandler.stop_service() → None#

Stop the aiohttp server and cleanup resources.

This is the async version of close() that runs in the event loop.

JobManagerHandler.create_simulation_config(project_path: str, ansys_edt_path: str | None = None, jobid: str | None = None, scheduler_type: pyedb.workflows.job_manager.backend.job_submission.SchedulerType | None = None, cpu_cores: int = 1, user: str = 'unknown') → pyedb.workflows.job_manager.backend.job_submission.HFSSSimulationConfig#

Create a validated HFSSSimulationConfig.

Parameters:
project_path : str

Path to the AEDT project file

ansys_edt_path : str, optional

Path to ANSYS EDT executable. Uses detected path if None.

jobid : str, optional

Job identifier. Auto-generated if None.

scheduler_type : SchedulerType, optional

Scheduler type. Uses detected scheduler if None.

cpu_cores : int

Number of CPU cores for local execution

user : str

Username for job ownership

Returns:
HFSSSimulationConfig

Validated simulation configuration

Raises:
ValueError

If project_path is empty or invalid

Notes

The cpu_cores parameter is only used when scheduler_type is NONE (local execution). For cluster execution, cores are determined by the scheduler configuration.