JobManager#
- class pyedb.workflows.job_manager.backend.service.JobManager(resource_limits: ResourceLimits = None, scheduler_type: pyedb.workflows.job_manager.backend.job_submission.SchedulerType = SchedulerType.NONE)#
Async job manager combining resource monitoring and job scheduling.
This class provides the core functionality for:
- Resource monitoring via ResourceMonitor
- Job scheduling via JobPoolManager
- REST/Socket.IO API via an aiohttp web server
- Background task for continuous job processing
- Parameters:
- resource_limits
ResourceLimits, optional Host constraints. Creates a default instance if None.
- scheduler_type
SchedulerType, optional Type of job scheduler to use. Default is SchedulerType.NONE.
- Attributes:
- jobs
Dict[str, JobInfo] Dictionary of all managed jobs
- resource_limits
ResourceLimits Current resource constraints
- job_pool
JobPoolManager Priority-aware job queue manager
- resource_monitor
ResourceMonitor Host resource usage monitor
- ansys_path
str or None Path to the ANSYS EDT executable
- sio
socketio.AsyncServer Socket.IO server for real-time updates
- app
web.Application aiohttp web application
Overview#
| Method | Description |
| --- | --- |
| setup_routes() | Internal method that wires aiohttp routes to class methods. |
| handle_get_system_status(request) | Get system and scheduler status. |
| handle_get_partitions(request) | Get scheduler partitions/queues. |
| handle_start_monitoring(request) | Manually start resource monitoring. |
| handle_index(request) | Serve the main web interface. |
| handle_submit_job(request) | Submit a new job for execution. |
| handle_cancel_job(request) | Cancel a running or queued job. |
| handle_get_jobs(request) | Get list of all jobs. |
| handle_get_resources(request) | Get current resource usage. |
| handle_get_queue(request) | Get queue statistics. |
| handle_set_priority(request) | Change job priority and re-queue. |
| handle_edit_concurrent_limits(request) | Edit concurrent job limits. |
| wait_until_all_done() | Coroutine that blocks until every job reaches a terminal state. |
| submit_job(config, priority) | Async entry point for job submission. |
| cancel_job(job_id) | Cancel a queued or running job. |
| edit_concurrent_limits(update_data) | Edit concurrent job limits in the pool. |
Import detail#
from pyedb.workflows.job_manager.backend.service import JobManager
Attribute detail#
- JobManager.resource_limits = None#
- JobManager.job_pool#
- JobManager.resource_monitor#
- JobManager.ansys_path = None#
- JobManager.scheduler_type#
- JobManager.sio#
- JobManager.app#
Method detail#
- JobManager.setup_routes()#
Internal method that wires aiohttp routes to class methods.
Called once from __init__. Sets up all REST API endpoints.
- async JobManager.handle_get_system_status(request)#
Get system and scheduler status.
- Parameters:
- request
aiohttp.web.Request HTTP request object
- Returns:
aiohttp.web.Response JSON response with system status information
- async JobManager.handle_get_partitions(request)#
Get scheduler partitions/queues.
- Parameters:
- request
aiohttp.web.Request HTTP request object
- Returns:
aiohttp.web.Response JSON response with partition information or error
- async JobManager.handle_start_monitoring(request)#
Manually start resource monitoring.
- Parameters:
- request
aiohttp.web.Request HTTP request object
- Returns:
aiohttp.web.Response JSON response indicating success or failure
- async JobManager.handle_index(request)#
Serve the main web interface.
- Parameters:
- request
aiohttp.web.Request HTTP request object
- Returns:
aiohttp.web.FileResponse Static HTML file
- async JobManager.handle_submit_job(request)#
Submit a new job for execution.
- Parameters:
- request
aiohttp.web.Request HTTP POST request with JSON payload
- Returns:
aiohttp.web.Response JSON response with job ID or error
Notes
Expected JSON payload:
{
    "config": {
        "jobid": "job_123",
        "project_path": "/path/to/project.aedt",
        ... other HFSS config fields
    },
    "priority": 0
}
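As a sketch, the payload above can be built and serialized with the standard json module before posting it to the submission endpoint. Only jobid and project_path are documented here; any further HFSS config fields are elided in the schema above and therefore omitted below.

```python
import json

# Build the documented submission payload. Additional HFSS config
# fields (elided in the docs) would sit alongside "jobid" and
# "project_path" inside "config".
payload = {
    "config": {
        "jobid": "job_123",
        "project_path": "/path/to/project.aedt",
    },
    "priority": 0,
}

# Serialize to the JSON body expected by handle_submit_job.
body = json.dumps(payload)
print(body)
```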
- async JobManager.handle_cancel_job(request)#
Cancel a running or queued job.
- Parameters:
- request
aiohttp.web.Request HTTP request with job_id in URL path
- Returns:
aiohttp.web.Response JSON response indicating success or failure
- async JobManager.handle_get_jobs(request)#
Get list of all jobs.
- Parameters:
- request
aiohttp.web.Request HTTP request object
- Returns:
aiohttp.web.Response JSON array of job objects with status information
- async JobManager.handle_get_resources(request)#
Get current resource usage.
- Parameters:
- request
aiohttp.web.Request HTTP request object
- Returns:
aiohttp.web.Response JSON with current host resource usage
- async JobManager.handle_get_queue(request)#
Get queue statistics.
- Parameters:
- request
aiohttp.web.Request HTTP request object
- Returns:
aiohttp.web.Response JSON with queue statistics for dashboard display
- async JobManager.handle_set_priority(request)#
Change job priority and re-queue.
- Parameters:
- request
aiohttp.web.Request HTTP POST request with JSON payload
- Returns:
aiohttp.web.Response JSON response indicating success or failure
- async JobManager.handle_edit_concurrent_limits(request)#
Edit concurrent job limits.
- Parameters:
- request
aiohttp.web.Request HTTP PUT request with JSON payload
- Returns:
aiohttp.web.Response JSON response indicating success or failure
- async JobManager.wait_until_all_done() None#
Coroutine that blocks until every job reaches a terminal state.
Safe to call from REST handlers or CLI scripts. Polls job status until all jobs are completed, failed, or cancelled.
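A minimal sketch of the polling pattern this coroutine implements, using a plain dict of state strings in place of the manager's JobInfo objects. The state names follow the terminal states listed above; the poll interval is an assumption, not the manager's actual value.

```python
import asyncio

# Terminal states named in the docs: completed, failed, cancelled.
TERMINAL = {"completed", "failed", "cancelled"}

async def wait_until_all_done(jobs: dict, poll_interval: float = 0.01) -> None:
    """Block until every job in *jobs* reaches a terminal state."""
    while not all(state in TERMINAL for state in jobs.values()):
        await asyncio.sleep(poll_interval)

async def main():
    jobs = {"job_1": "running", "job_2": "queued"}

    async def finish():
        # Simulate jobs finishing while the waiter polls.
        await asyncio.sleep(0.05)
        jobs["job_1"] = "completed"
        jobs["job_2"] = "cancelled"

    await asyncio.gather(wait_until_all_done(jobs), finish())
    return jobs

result = asyncio.run(main())
print(result)
```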
- async JobManager.submit_job(config: pyedb.workflows.job_manager.backend.job_submission.HFSSSimulationConfig, priority: int = 0) str#
Async entry point for job submission.
- Parameters:
- config
HFSSSimulationConfig Validated simulation configuration.
- priority
int, optional Job priority. Default is 0.
- Returns:
str Unique job identifier (same as config.jobid).
Notes
This method:
1. Creates a JobInfo object with QUEUED status
2. Adds the job to the appropriate queue
3. Notifies web clients via Socket.IO
4. Starts the processing loop if not already running
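The first two steps can be sketched with an in-memory priority queue. This is an illustrative model, not JobPoolManager's implementation: the QueuedJob dataclass stands in for JobInfo, and it assumes heapq ordering where a lower numeric priority is dequeued first (the real scheduler's ordering may differ). Steps 3 and 4 (Socket.IO notification, processing loop) are omitted.

```python
import heapq
import itertools
from dataclasses import dataclass, field

@dataclass(order=True)
class QueuedJob:
    # heapq pops the smallest tuple first, so a lower priority
    # value is served sooner under this model.
    priority: int
    seq: int  # tie-breaker preserving FIFO order within a priority
    job_id: str = field(compare=False)
    status: str = field(compare=False, default="QUEUED")

_counter = itertools.count()
queue: list[QueuedJob] = []

def submit_job(job_id: str, priority: int = 0) -> str:
    # Step 1: create a job record with QUEUED status.
    job = QueuedJob(priority, next(_counter), job_id)
    # Step 2: add it to the priority queue.
    heapq.heappush(queue, job)
    return job.job_id

submit_job("job_b", priority=1)
submit_job("job_a", priority=0)
first = heapq.heappop(queue)
print(first.job_id)  # job_a: lower priority value dequeued first
```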
- async JobManager.cancel_job(job_id: str) bool#
Cancel a queued or running job.
- Parameters:
- job_id
str Identifier returned by submit_job().
- Returns:
- bool
True → cancellation succeeded, False → job not found or already terminal.
Notes
For queued jobs: immediately removes from queue and marks as cancelled. For running jobs: attempts to terminate the process and cleanup.
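The two branches above can be sketched against a plain dict of job states. This is a simplified model: dequeueing and process termination are reduced to comments, and the state names are assumptions matching the terminal states documented for this class.

```python
TERMINAL = {"completed", "failed", "cancelled"}

def cancel_job(jobs: dict, job_id: str) -> bool:
    """Return True on success, False if unknown or already terminal."""
    state = jobs.get(job_id)
    if state is None or state in TERMINAL:
        return False
    if state == "queued":
        # Queued jobs: removed from the queue (elided) and
        # marked cancelled immediately.
        jobs[job_id] = "cancelled"
    else:
        # Running jobs: a real manager would terminate the process
        # and clean up before marking the job cancelled.
        jobs[job_id] = "cancelled"
    return True

jobs = {"a": "queued", "b": "running", "c": "completed"}
print(cancel_job(jobs, "a"), cancel_job(jobs, "c"), cancel_job(jobs, "x"))
# → True False False
```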
- async JobManager.edit_concurrent_limits(update_data: Dict[str, Any]) Dict[str, Any] | None#
Edit concurrent job limits in the pool.
- Parameters:
- update_data
dict Fields to update in resource limits. Valid fields: - max_concurrent_jobs: Positive integer - max_cpu_percent: Float between 0 and 100 - min_memory_gb: Non-negative float - min_disk_gb: Non-negative float
- update_data
- Returns:
- Raises:
ValueErrorIf any field validation fails
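The validation rules listed above can be sketched as a standalone checker. This is not the manager's implementation: whether unknown fields raise or are ignored, and whether the 0-100 bounds are inclusive, are assumptions here.

```python
def validate_limits(update_data: dict) -> dict:
    """Validate resource-limit updates per the documented field rules."""
    rules = {
        # max_concurrent_jobs: positive integer
        "max_concurrent_jobs": lambda v: isinstance(v, int) and v > 0,
        # max_cpu_percent: float between 0 and 100 (inclusive here)
        "max_cpu_percent": lambda v: isinstance(v, (int, float)) and 0 <= v <= 100,
        # min_memory_gb / min_disk_gb: non-negative floats
        "min_memory_gb": lambda v: isinstance(v, (int, float)) and v >= 0,
        "min_disk_gb": lambda v: isinstance(v, (int, float)) and v >= 0,
    }
    for key, value in update_data.items():
        check = rules.get(key)
        if check is None:
            raise ValueError(f"Unknown field: {key}")
        if not check(value):
            raise ValueError(f"Invalid value for {key}: {value!r}")
    return update_data

print(validate_limits({"max_concurrent_jobs": 4, "max_cpu_percent": 80.0}))
```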