PyEDB job manager backend—production documentation#
Overview#
The job manager backend is a hybrid async/sync service that schedules and monitors HFSS / 3D-Layout simulations on:
Local workstations (sub-process)
Enterprise clusters (SLURM, LSF, PBS, Windows-HPC)
It exposes thread-safe synchronous façades for legacy code bases while keeping a fully asynchronous core for high-throughput scenarios.
Module Reference#
job_manager_handler.py#
Thread-safe synchronous façade
Bridges blocking code to the async JobManager without exposing asyncio
Auto-detects ANSYS installation and cluster scheduler
Starts / stops a daemon thread hosting the aiohttp server
Provides convenience helpers such as create_simulation_config()
job_submission.py#
Cross-platform simulation launcher
Immutable data models: HFSSSimulationConfig, SchedulerOptions, MachineNode
Generates ready-to-run shell commands or batch scripts (SLURM/LSF)
Entry point: create_hfss_config() → config.run_simulation()
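As a rough illustration of the batch scripts this module emits, a SLURM script can be rendered as below. This is a sketch only: the function name, its parameters, and the `ansysedt` flags are assumptions for illustration, not the actual `job_submission.py` implementation.

```python
def render_slurm_script(job_name, cpu_cores, project_path, partition=None):
    """Render a minimal SLURM batch script (illustrative; field names are assumptions)."""
    lines = [
        "#!/bin/bash",
        f"#SBATCH --job-name={job_name}",
        f"#SBATCH --ntasks={cpu_cores}",
    ]
    if partition:
        lines.append(f"#SBATCH --partition={partition}")
    # Typical HFSS non-graphical batch-solve invocation (assumed, not verified here)
    lines.append(f"ansysedt -ng -batchsolve {project_path}")
    return "\n".join(lines)

print(render_slurm_script("antenna", 16, "/path/to/antenna.aedt", partition="compute"))
```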
service.py#
Async job manager & REST layer
JobManager: priority queues, resource limits, Socket.IO push
ResourceMonitor: async telemetry (CPU, RAM, disk)
SchedulerManager: live cluster introspection (partitions, queues)
Self-hosted aiohttp application with REST + WebSocket endpoints
REST API#
Base URL defaults to http://localhost:8080.
All JSON payloads use Content-Type: application/json.
Jobs#
| Method | Route | Description | Payload | Response |
|---|---|---|---|---|
| | | Queue new simulation | | |
| | | List all jobs | — | JSON array |
| | | Cancel queued / running job | — | |
| | | Change job priority | | |
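Submitting a job over this REST API can be sketched with the standard library alone. The `/jobs` route and the payload field names below are assumptions for illustration (the exact routes and payloads are not legible in this page); only the base URL and `Content-Type` follow from the text above.

```python
import json
import urllib.request

def build_submit_payload(project_path, cpu_cores, priority=0):
    """Assemble a JSON job-submission payload (field names are assumptions)."""
    return {"project_path": project_path, "cpu_cores": cpu_cores, "priority": priority}

def submit_job(base_url, payload):
    """POST the payload to the job queue; '/jobs' is an assumed route."""
    req = urllib.request.Request(
        f"{base_url}/jobs",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Requires a running backend on the default base URL
    payload = build_submit_payload("/path/to/antenna.aedt", cpu_cores=16)
    print(submit_job("http://localhost:8080", payload))
```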
Resources & Queues#
| Method | Route | Description | Response |
|---|---|---|---|
| | | Host telemetry snapshot | |
| | | Queue statistics | |
| | | Edit concurrency limits | |
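The telemetry snapshot served by ResourceMonitor can be approximated with the standard library (a sketch only; the real monitor is async and its field names may differ, and portable RAM figures would need a third-party package such as psutil, omitted here):

```python
import os
import shutil

def telemetry_snapshot(path="/"):
    """Collect a coarse host snapshot: CPU count and disk usage (illustrative fields)."""
    usage = shutil.disk_usage(path)
    return {
        "cpu_count": os.cpu_count(),
        "disk_total_gb": round(usage.total / 1e9, 1),
        "disk_free_gb": round(usage.free / 1e9, 1),
    }

print(telemetry_snapshot())
```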
Cluster Introspection#
| Method | Route | Description | Response |
|---|---|---|---|
| | | Available partitions / queues | JSON array |
| | | Combined status object | Scheduler, resources, limits |
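On SLURM, partition discovery typically boils down to parsing `sinfo` output; a dependency-free sketch (not the actual SchedulerManager code, which is not shown in this page):

```python
import subprocess

def list_slurm_partitions(sinfo_output=None):
    """Return partition names from `sinfo -h -o %P`; '*' marks the default partition."""
    if sinfo_output is None:
        # `sinfo -h -o %P` prints one partition name per line, without a header
        sinfo_output = subprocess.run(
            ["sinfo", "-h", "-o", "%P"], capture_output=True, text=True, check=True
        ).stdout
    return [line.rstrip("*") for line in sinfo_output.splitlines() if line.strip()]

# Parsing canned output (no cluster needed):
print(list_slurm_partitions("compute*\ngpu\nbigmem\n"))  # → ['compute', 'gpu', 'bigmem']
```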
WebSocket Events#
Connect to ws://host:port with Socket.IO.
Emitted server → client:
job_queued, job_started, job_scheduled, job_completed, limits_updated
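A client registers one callback per event name (with the python-socketio client this would be `@sio.on("job_completed")` and friends). The dispatch pattern can be sketched dependency-free; this illustrates the event flow only, not the backend's Socket.IO implementation:

```python
# Map event names (from the list above) to client callbacks.
handlers = {}

def on(event):
    """Register a callback for a named event (analogous to socketio's @sio.on)."""
    def register(fn):
        handlers[event] = fn
        return fn
    return register

@on("job_completed")
def job_completed(data):
    # Payload fields (job_id, status) are assumptions for illustration
    return f"job {data['job_id']} finished with status {data['status']}"

def dispatch(event, data):
    """Invoke the handler registered for `event`, if any."""
    if event in handlers:
        return handlers[event](data)

print(dispatch("job_completed", {"job_id": "42", "status": "completed"}))
```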
Quick Examples#
Synchronous (Legacy Code)#
```python
import asyncio
import time

from pyedb.workflows.job_manager.backend.job_manager_handler import JobManagerHandler

handler = JobManagerHandler()
handler.start_service()
cfg = handler.create_simulation_config(
    "/path/to/antenna.aedt", scheduler_type="slurm", cpu_cores=16
)
job_id = asyncio.run(handler.submit_job(cfg))
print("Submitted", job_id)
# Wait until finished
while handler.manager.jobs[job_id].status not in {"completed", "failed"}:
    time.sleep(1)
handler.close()
```
Asynchronous (Native Asyncio)#
```python
import asyncio

# HFSSSimulationConfig lives in job_submission (see Module Reference above)
from pyedb.workflows.job_manager.backend.job_submission import HFSSSimulationConfig
from pyedb.workflows.job_manager.backend.service import JobManager, ResourceLimits

async def main():
    manager = JobManager(ResourceLimits(max_concurrent_jobs=4))
    config = HFSSSimulationConfig.from_dict({...})
    job_id = await manager.submit_job(config, priority=5)
    await manager.wait_until_all_done()

asyncio.run(main())
```
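Internally, a priority queue plus a concurrency cap is enough to reproduce the scheduling behaviour described above. The following is a self-contained asyncio sketch of that idea, not the actual JobManager code:

```python
import asyncio

async def run_jobs(jobs, max_concurrent):
    """Drain a priority queue of (priority, name) jobs with at most
    `max_concurrent` workers; lower numbers run first."""
    queue = asyncio.PriorityQueue()
    for job in jobs:
        queue.put_nowait(job)
    sem = asyncio.Semaphore(max_concurrent)
    completed = []

    async def worker():
        while True:
            try:
                priority, name = queue.get_nowait()
            except asyncio.QueueEmpty:
                return
            async with sem:
                await asyncio.sleep(0)  # stand-in for the real simulation
                completed.append(name)

    await asyncio.gather(*(worker() for _ in range(max_concurrent)))
    return completed

order = asyncio.run(run_jobs([(5, "patch"), (1, "antenna"), (3, "filter")], max_concurrent=1))
print(order)  # → ['antenna', 'filter', 'patch']
```

With a single worker the drain order is exactly the priority order; raising `max_concurrent` trades ordering guarantees for throughput, which is the trade-off the real resource limits govern.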
Command Line#
```shell
python -m pyedb.workflows.job_manager.backend.job_manager_handler \
    --host 0.0.0.0 --port 8080
```
Deployment Notes#
The service is self-contained; no external database is required (jobs are stored in-memory). For persistence, plug in a small SQLite layer inside JobManager.jobs.
When running inside Docker, expose port 8080 and mount the project directory into the container so that ansysedt can access .aedt files.
CPU / RAM limits are soft limits; tune ResourceLimits to your workstation or cluster node size.
TLS termination should be handled by an upstream reverse proxy (e.g., nginx); the backend only speaks plain HTTP/WebSocket.
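The small SQLite layer suggested above can be as little as the sketch below (illustrative schema only; real job records carry more fields than an id and a status):

```python
import sqlite3

class JobStore:
    """Tiny persistence layer for job records (illustrative, not part of PyEDB)."""

    def __init__(self, path=":memory:"):
        self.conn = sqlite3.connect(path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS jobs (id TEXT PRIMARY KEY, status TEXT)"
        )

    def save(self, job_id, status):
        # Upsert so repeated status updates overwrite the previous row
        self.conn.execute(
            "INSERT INTO jobs VALUES (?, ?) "
            "ON CONFLICT(id) DO UPDATE SET status=excluded.status",
            (job_id, status),
        )
        self.conn.commit()

    def load(self, job_id):
        row = self.conn.execute(
            "SELECT status FROM jobs WHERE id=?", (job_id,)
        ).fetchone()
        return row[0] if row else None

store = JobStore()
store.save("job-1", "queued")
store.save("job-1", "completed")
print(store.load("job-1"))  # → completed
```

Pointing `path` at a file instead of `:memory:` makes job state survive service restarts.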
See Also#
job_manager_handler_discussion — architectural trade-offs
examples/job_manager/—full CLI & Jupyter demos