JobManager#

class pyedb.workflows.job_manager.backend.service.JobManager(resource_limits: ResourceLimits = None, scheduler_type: pyedb.workflows.job_manager.backend.job_submission.SchedulerType = SchedulerType.NONE)#

Async job manager combining resource monitoring and job scheduling.

This class provides the core functionality for:

  • Resource monitoring via ResourceMonitor

  • Job scheduling via JobPoolManager

  • REST/Socket.IO API via aiohttp web server

  • Background task for continuous job processing

Parameters:
resource_limits : ResourceLimits, optional

Host resource constraints. A default instance is created if None.

scheduler_type : SchedulerType, optional

Type of job scheduler to use. Default is SchedulerType.NONE.

Attributes:
jobs : Dict[str, JobInfo]

Dictionary of all managed jobs

resource_limits : ResourceLimits

Current resource constraints

job_pool : JobPoolManager

Priority-aware job queue manager

resource_monitor : ResourceMonitor

Host resource usage monitor

ansys_path : str or None

Path to ANSYS EDT executable

sio : socketio.AsyncServer

Socket.IO server for real-time updates

app : web.Application

aiohttp web application

Overview#

setup_routes

Internal method that wires aiohttp routes to class methods.

handle_get_system_status

Get system and scheduler status.

handle_get_partitions

Get scheduler partitions/queues.

handle_start_monitoring

Manually start resource monitoring.

handle_index

Serve the main web interface.

handle_submit_job

Submit a new job for execution.

handle_cancel_job

Cancel a running or queued job.

handle_get_jobs

Get list of all jobs.

handle_get_resources

Get current resource usage.

handle_get_queue

Get queue statistics.

handle_set_priority

Change job priority and re-queue.

handle_edit_concurrent_limits

Edit concurrent job limits.

wait_until_all_done

Coroutine that blocks until every job reaches a terminal state.

submit_job

Async entry point for job submission.

cancel_job

Cancel a queued or running job.

edit_concurrent_limits

Edit concurrent job limits in the pool.

Import detail#

from pyedb.workflows.job_manager.backend.service import JobManager

Attribute detail#

JobManager.jobs: Dict[str, JobInfo]#
JobManager.resource_limits = None#
JobManager.job_pool#
JobManager.resource_monitor#
JobManager.ansys_path = None#
JobManager.scheduler_type#
JobManager.sio#
JobManager.app#

Method detail#

JobManager.setup_routes()#

Internal method that wires aiohttp routes to class methods.

Called once from __init__. Sets up all REST API endpoints.

async JobManager.handle_get_system_status(request)#

Get system and scheduler status.

Parameters:
request : aiohttp.web.Request

HTTP request object

Returns:
aiohttp.web.Response

JSON response with system status information

async JobManager.handle_get_partitions(request)#

Get scheduler partitions/queues.

Parameters:
request : aiohttp.web.Request

HTTP request object

Returns:
aiohttp.web.Response

JSON response with partition information or error

async JobManager.handle_start_monitoring(request)#

Manually start resource monitoring.

Parameters:
request : aiohttp.web.Request

HTTP request object

Returns:
aiohttp.web.Response

JSON response indicating success or failure

async JobManager.handle_index(request)#

Serve the main web interface.

Parameters:
request : aiohttp.web.Request

HTTP request object

Returns:
aiohttp.web.FileResponse

Static HTML file

async JobManager.handle_submit_job(request)#

Submit a new job for execution.

Parameters:
request : aiohttp.web.Request

HTTP POST request with JSON payload

Returns:
aiohttp.web.Response

JSON response with job ID or error

Notes

Expected JSON payload:

{
    "config": {
        "jobid": "job_123",
        "project_path": "/path/to/project.aedt",
        ... other HFSS config fields
    },
    "priority": 0
}
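As a sketch, a client might build and serialize this payload as follows. The values are placeholders taken from the example above, not a real project; the route the body is POSTed to is defined in setup_routes and is not assumed here.

```python
import json

# Hypothetical payload mirroring the documented shape; "jobid" and
# "project_path" are placeholder values, not a real job or file.
payload = {
    "config": {
        "jobid": "job_123",
        "project_path": "/path/to/project.aedt",
    },
    "priority": 0,
}

# A client would send this string as the JSON body of the POST request.
body = json.dumps(payload)
```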
async JobManager.handle_cancel_job(request)#

Cancel a running or queued job.

Parameters:
request : aiohttp.web.Request

HTTP request with job_id in URL path

Returns:
aiohttp.web.Response

JSON response indicating success or failure

async JobManager.handle_get_jobs(request)#

Get list of all jobs.

Parameters:
request : aiohttp.web.Request

HTTP request object

Returns:
aiohttp.web.Response

JSON array of job objects with status information

async JobManager.handle_get_resources(request)#

Get current resource usage.

Parameters:
request : aiohttp.web.Request

HTTP request object

Returns:
aiohttp.web.Response

JSON with current host resource usage

async JobManager.handle_get_queue(request)#

Get queue statistics.

Parameters:
request : aiohttp.web.Request

HTTP request object

Returns:
aiohttp.web.Response

JSON with queue statistics for dashboard display

async JobManager.handle_set_priority(request)#

Change job priority and re-queue.

Parameters:
request : aiohttp.web.Request

HTTP POST request with JSON payload

Returns:
aiohttp.web.Response

JSON response indicating success or failure

async JobManager.handle_edit_concurrent_limits(request)#

Edit concurrent job limits.

Parameters:
request : aiohttp.web.Request

HTTP PUT request with JSON payload

Returns:
aiohttp.web.Response

JSON response indicating success or failure

async JobManager.wait_until_all_done() → None#

Coroutine that blocks until every job reaches a terminal state.

Safe to call from REST handlers or CLI scripts. Polls job status until all jobs are completed, failed, or cancelled.
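The polling behaviour can be sketched with a plain asyncio loop. This is a minimal stand-in, not the real implementation: the terminal-state names, the jobs-to-status mapping, and the poll interval are all illustrative assumptions.

```python
import asyncio

# Assumed terminal states, per the description above.
TERMINAL = {"completed", "failed", "cancelled"}

async def wait_until_all_done(jobs: dict, poll_interval: float = 0.01) -> None:
    """Block until every status in `jobs` is terminal (illustrative sketch)."""
    while any(status not in TERMINAL for status in jobs.values()):
        await asyncio.sleep(poll_interval)

async def demo() -> str:
    jobs = {"job_1": "running", "job_2": "queued"}

    async def finish() -> None:
        # Simulate the jobs reaching terminal states shortly afterwards.
        await asyncio.sleep(0.02)
        jobs["job_1"] = "completed"
        jobs["job_2"] = "failed"

    await asyncio.gather(wait_until_all_done(jobs), finish())
    return "all done"

result = asyncio.run(demo())
```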

async JobManager.submit_job(config: pyedb.workflows.job_manager.backend.job_submission.HFSSSimulationConfig, priority: int = 0) → str#

Async entry point for job submission.

Parameters:
config : HFSSSimulationConfig

Validated simulation configuration.

priority : int, optional

Job priority. Default is 0.

Returns:
str

Unique job identifier (same as config.jobid).

Notes

This method:

  1. Creates a JobInfo object with QUEUED status

  2. Adds the job to the appropriate queue

  3. Notifies web clients via Socket.IO

  4. Starts the processing loop if not already running
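The queueing steps above can be sketched with a stdlib priority queue. Everything here is a stand-in: the dict replaces JobInfo, and whether a larger or smaller number means higher priority is not specified by this page, so the sketch assumes lower values are served first.

```python
import heapq
import itertools

_counter = itertools.count()  # tie-breaker so equal priorities stay FIFO

def submit_job_sketch(jobs: dict, queue: list, job_id: str, priority: int = 0) -> str:
    """Illustrative sketch of the documented submission steps."""
    # 1. Record the job with QUEUED status (stand-in for JobInfo).
    jobs[job_id] = {"status": "queued", "priority": priority}
    # 2. Push onto a priority-aware queue (assumption: lower value = more urgent).
    heapq.heappush(queue, (priority, next(_counter), job_id))
    # 3./4. The real manager would notify Socket.IO clients here and
    #       start the background processing loop if it is not running.
    return job_id

jobs, queue = {}, []
submit_job_sketch(jobs, queue, "job_123", priority=0)
submit_job_sketch(jobs, queue, "job_456", priority=-1)
first = heapq.heappop(queue)[2]  # most urgent job under this convention
```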

async JobManager.cancel_job(job_id: str) → bool#

Cancel a queued or running job.

Parameters:
job_id : str

Identifier returned by submit_job().

Returns:
bool

True if cancellation succeeded; False if the job was not found or already terminal.

Notes

For queued jobs: immediately removes from queue and marks as cancelled. For running jobs: attempts to terminate the process and cleanup.
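The two branches described above can be sketched as follows. The status strings and the jobs mapping are illustrative assumptions; the real manager also removes queued entries from its queue and terminates the OS process for running jobs.

```python
# Assumed terminal states, per the return-value description above.
TERMINAL = {"completed", "failed", "cancelled"}

def cancel_job_sketch(jobs: dict, job_id: str) -> bool:
    """Illustrative sketch of the documented cancellation branches."""
    job = jobs.get(job_id)
    if job is None or job["status"] in TERMINAL:
        return False  # unknown job, or already terminal
    if job["status"] == "queued":
        # Queued branch: the real manager removes the queue entry,
        # then marks the job cancelled.
        job["status"] = "cancelled"
    else:
        # Running branch: the real manager terminates the process and
        # cleans up before marking the job cancelled.
        job["status"] = "cancelled"
    return True

jobs = {"a": {"status": "queued"}, "b": {"status": "completed"}}
ok = cancel_job_sketch(jobs, "a")      # queued job: can be cancelled
not_ok = cancel_job_sketch(jobs, "b")  # already terminal: refused
```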

async JobManager.edit_concurrent_limits(update_data: Dict[str, Any]) → Dict[str, Any] | None#

Edit concurrent job limits in the pool.

Parameters:
update_data : dict

Fields to update in resource limits. Valid fields:

  • max_concurrent_jobs: Positive integer

  • max_cpu_percent: Float between 0 and 100

  • min_memory_gb: Non-negative float

  • min_disk_gb: Non-negative float

Returns:
dict or None

Updated limits data or None if update failed

Raises:
ValueError

If any field validation fails
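The field validation described above can be sketched as a standalone function. This is a minimal stand-in under the documented rules, not the actual pool-manager code; error messages and the rejection of unknown fields are assumptions.

```python
def validate_limit_updates(update_data: dict) -> dict:
    """Validate the documented fields, raising ValueError on bad input."""
    validated = {}
    for field, value in update_data.items():
        if field == "max_concurrent_jobs":
            if not isinstance(value, int) or value <= 0:
                raise ValueError("max_concurrent_jobs must be a positive integer")
        elif field == "max_cpu_percent":
            if not 0 <= value <= 100:
                raise ValueError("max_cpu_percent must be between 0 and 100")
        elif field in ("min_memory_gb", "min_disk_gb"):
            if value < 0:
                raise ValueError(f"{field} must be non-negative")
        else:
            # Assumption: fields outside the documented set are rejected.
            raise ValueError(f"unknown field: {field}")
        validated[field] = value
    return validated

limits = validate_limit_updates({"max_concurrent_jobs": 4, "max_cpu_percent": 80.0})
```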