Job submission—production notes & quick-start#
Submit and monitor HFSS 3-D Layout simulations through the PyEDB job manager, with zero additional infrastructure on your workstation or with full cluster support (SLURM, LSF, PBS, Windows-HPC, …).
Pre-requisites#
- ANSYS Electronics Desktop must be installed.
- The environment variable ANSYSEM_ROOT<rrr> must point to the installation directory, e.g.

  export ANSYSEM_ROOT252=/ansys_inc/v252/Linux64   # 2025 R2
  # or on Windows
  set ANSYSEM_ROOT252=C:\Program Files\AnsysEM\v252\Win64

  The backend automatically discovers the newest release if several ANSYSEM_ROOT<rrr> variables are present.
- (Cluster only) A scheduler template for your workload manager must exist in pyedb/workflows/job_manager/scheduler_templates/. Out-of-the-box templates are provided for SLURM and LSF; PBS, Torque, Windows-HPC, or cloud batch systems can be added by dropping a new YAML file—no code change required.
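As an illustration of the release discovery mentioned above, the backend effectively picks the highest-numbered ANSYSEM_ROOT<rrr> variable that is set. The snippet below is a simplified sketch of that lookup, not the backend's actual code:

import os
import re

def newest_ansysem_root():
    """Return (release, path) for the highest ANSYSEM_ROOT<rrr> variable, e.g. ANSYSEM_ROOT252."""
    candidates = []
    for name, value in os.environ.items():
        match = re.fullmatch(r"ANSYSEM_ROOT(\d{3})", name)
        if match and value:
            candidates.append((int(match.group(1)), value))
    if not candidates:
        raise RuntimeError("No ANSYSEM_ROOT<rrr> variable found")
    return max(candidates)  # highest release number wins

print(newest_ansysem_root())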
Overview—how it works#
The job manager is an asynchronous micro-service that is automatically
started in a background thread when you instantiate JobManagerHandler.
It exposes:
- REST & Web-Socket endpoints (http://localhost:8080 by default)
- Thread-safe synchronous façade for scripts / Jupyter
- Native async API for advanced integrations
- CLI utilities submit_local_job, submit_batch_jobs, and submit_job_on_scheduler for shell / CI pipelines
The same backend code path is used regardless of front-end style; the difference is who owns the event loop and how control is returned to the caller.
Tip
Quick-start server (any OS)
Save the launcher script as start_service.py (see Stand-alone server launcher script) and run:
python start_service.py --host 0.0.0.0 --port 9090
The service is ready when the line “✅ Job-manager backend listening on http://0.0.0.0:9090.” appears; leave the terminal open or daemonize it with your favourite supervisor.
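If you prefer to launch and supervise the service from Python rather than an interactive terminal, a minimal sketch (assuming start_service.py sits in the current directory and prints the ready line quoted above) could look like this:

import subprocess
import sys

# Launch the stand-alone server and wait for its ready message.
proc = subprocess.Popen(
    [sys.executable, "start_service.py", "--host", "0.0.0.0", "--port", "9090"],
    stdout=subprocess.PIPE,
    stderr=subprocess.STDOUT,
    text=True,
)
for line in proc.stdout:
    print(line, end="")
    if "listening on" in line:  # ready marker printed by start_service.py
        break
# ... submit jobs here; terminate the service when finished.
proc.terminate()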
Tip
The backend auto-detects the scheduler:
- Windows workstation → SchedulerType.NONE (local subprocess)
- Linux workstation → SchedulerType.NONE (local subprocess)
- Linux cluster with SLURM → SchedulerType.SLURM
- Linux cluster with LSF → SchedulerType.LSF
You can still override the choice explicitly if needed.
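If the auto-detected scheduler is not what you want, pass scheduler_type explicitly when building the configuration; a minimal sketch reusing the string form shown in the asyncio example further down:

from pyedb.workflows.job_manager.backend.job_submission import create_hfss_config

# Force SLURM instead of relying on auto-detection.
config = create_hfss_config(
    project_path="/shared/antenna.AEDB",
    scheduler_type="SLURM",   # or "LSF", "NONE", …
)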
Synchronous usage (scripts & notebooks)#
Perfect when you simply want to “submit and wait” without learning asyncio.
from pyedb.workflows.job_manager.backend.job_submission import (
    create_hfss_config,
    SchedulerType,
)
from pyedb.workflows.job_manager.backend.job_manager_handler import JobManagerHandler

project_path = r"D:\Jobs\my_design.aedb"

handler = JobManagerHandler()             # discovers ANSYS install & scheduler
handler.start_service()                   # starts background aiohttp server

config = create_hfss_config(
    project_path=project_path,
)
config.machine_nodes[0].cores = 16        # use 16 local cores

job_id = handler.submit_job(config)       # blocks until job accepted
print(f"submitted {job_id}")

status = handler.wait_until_done(job_id)  # polls until terminal
print(f"job finished with status: {status}")

handler.close()                           # graceful shutdown
Production notes#
- Thread-safe: multiple threads may submit or cancel concurrently.
- Resource limits (CPU, memory, disk, concurrency) are enforced; jobs stay queued until resources are free.
- atexit ensures clean shutdown even if the user forgets close().
- Cluster runs: change SchedulerType.NONE → SLURM/LSF and supply scheduler_options; the code path remains identical (see the sketch below).
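A sketch of the same synchronous flow pointed at a SLURM cluster, reusing the scheduler_options keys from the asyncio example further down (queue, nodes, cores_per_node, time); adjust the values to your site:

from pyedb.workflows.job_manager.backend.job_submission import create_hfss_config
from pyedb.workflows.job_manager.backend.job_manager_handler import JobManagerHandler

handler = JobManagerHandler()
handler.start_service()

# Same call as the local example, plus scheduler settings.
config = create_hfss_config(
    project_path="/shared/antenna.AEDB",
    scheduler_type="SLURM",
    scheduler_options={
        "queue": "hpclarge",
        "nodes": 2,
        "cores_per_node": 32,
        "time": "04:00:00",
    },
)

job_id = handler.submit_job(config)
print(f"submitted {job_id}")
handler.close()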
Asynchronous usage (CLI & programmatic)#
Use when you need non-blocking behaviour inside an async function or from
the shell / CI pipelines.
CLI—submit_local_job#
The package installs a console entry-point that talks to the same REST API.
Installation#
$ pip install -e . # or production wheel
$ which submit_local_job
/usr/local/bin/submit_local_job
Synopsis#
$ submit_local_job --project-path <PATH> [options]
Environment variables#
- PYEDB_JOB_MANAGER_HOST: fallback for --host.
- PYEDB_JOB_MANAGER_PORT: fallback for --port.
Exit codes#
| Code | Meaning |
|---|---|
| 0 | Job accepted by manager. |
| 1 | CLI validation or connection error. |
| 2 | Unexpected runtime exception. |
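For example, a CI step could invoke the CLI from Python and branch on the exit code; a sketch assuming submit_local_job is on PATH and using a placeholder project path:

import subprocess

result = subprocess.run(
    ["submit_local_job", "--project-path", "/shared/antenna.AEDB"],
    capture_output=True,
    text=True,
)
if result.returncode == 0:
    print("job accepted:", result.stdout.strip())
else:
    print("submission failed with exit code", result.returncode, result.stderr.strip())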
Example—CLI (cluster)#
$ submit_job_on_scheduler \
--project-path "/shared/antenna.AEDB" \
--partition hpclarge \
--nodes 2 \
--cores-per-node 32
The command returns immediately after the job is queued; use the printed ID
with wait_until_done or monitor via the web UI.
CLI—submit_batch_jobs#
For bulk submissions, use submit_batch_jobs to automatically discover and submit
multiple projects from a directory tree.
Synopsis#
$ python submit_batch_jobs.py --root-dir <DIRECTORY> [options]
Key features#
- Automatic discovery: scans for all .aedb folders and .aedt files
- Smart pairing: when both .aedb and .aedt exist, uses the .aedt file
- Asynchronous submission: submits jobs concurrently for faster processing
- Recursive scanning: optional recursive directory traversal
Options#
| Argument | Default | Description |
|---|---|---|
| --root-dir | (required) | Root directory to scan for projects |
| --host | localhost | Job manager host address |
| --port | 8080 | Job manager port |
| --num-cores |  | Number of cores to allocate per job |
| --max-concurrent | 5 | Maximum concurrent job submissions |
| --delay-ms | 100 | Delay in milliseconds between job submissions |
| --recursive | off | Scan subdirectories recursively |
| --verbose | off | Enable debug logging |
Example—batch submission (local)#
# Submit all projects in a directory
$ python submit_batch_jobs.py --root-dir "D:\Temp\test_jobs"
# Recursive scan with custom core count
$ python submit_batch_jobs.py \
--root-dir "D:\Projects\simulations" \
--num-cores 16 \
--recursive \
--verbose
Example output#
2025-11-07 10:30:15 - __main__ - INFO - Scanning D:\Temp\test_jobs for projects (recursive=False)
2025-11-07 10:30:15 - __main__ - INFO - Found AEDB folder: D:\Temp\test_jobs\project1.aedb
2025-11-07 10:30:15 - __main__ - INFO - Found AEDT file: D:\Temp\test_jobs\project2.aedt
2025-11-07 10:30:15 - __main__ - INFO - Using AEDB folder for project: D:\Temp\test_jobs\project1.aedb
2025-11-07 10:30:15 - __main__ - INFO - Using standalone AEDT file: D:\Temp\test_jobs\project2.aedt
2025-11-07 10:30:15 - __main__ - INFO - Found 2 project(s) to submit
2025-11-07 10:30:15 - __main__ - INFO - Starting batch submission of 2 project(s) to http://localhost:8080
2025-11-07 10:30:16 - __main__ - INFO - ✓ Successfully submitted: project1.aedb (status=200)
2025-11-07 10:30:16 - __main__ - INFO - ✓ Successfully submitted: project2.aedt (status=200)
2025-11-07 10:30:16 - __main__ - INFO - ============================================================
2025-11-07 10:30:16 - __main__ - INFO - Batch submission complete:
2025-11-07 10:30:16 - __main__ - INFO - Total projects: 2
2025-11-07 10:30:16 - __main__ - INFO - ✓ Successful: 2
2025-11-07 10:30:16 - __main__ - INFO - ✗ Failed: 0
2025-11-07 10:30:16 - __main__ - INFO - ============================================================
How it works#
Scanning phase:
- Searches for all .aedb folders in the root directory
- Searches for all .aedt files in the root directory
- For each .aedb folder, checks if a corresponding .aedt file exists:
  - If yes: uses the .aedt file
  - If no: uses the .aedb folder
- Standalone .aedt files (without a corresponding .aedb) are also included
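The pairing rule above can be illustrated with a few lines of pathlib; this is a simplified sketch of the described behaviour, not the script's actual code:

from pathlib import Path

def discover_projects(root: Path, recursive: bool = False):
    """Yield one project path per design, preferring .aedt over a sibling .aedb."""
    pattern = "**/*" if recursive else "*"
    aedb_dirs = {p for p in root.glob(pattern + ".aedb") if p.is_dir()}
    aedt_files = {p for p in root.glob(pattern + ".aedt") if p.is_file()}

    for aedb in sorted(aedb_dirs):
        twin = aedb.with_suffix(".aedt")
        yield twin if twin in aedt_files else aedb      # prefer the .aedt twin
    for aedt in sorted(aedt_files):
        if aedt.with_suffix(".aedb") not in aedb_dirs:  # standalone .aedt
            yield aedt

for project in discover_projects(Path(r"D:\Temp\test_jobs")):
    print(project)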
Submission phase:
- Creates a job configuration for each project
- Submits jobs asynchronously to the job manager REST API
- Limits concurrent submissions using a semaphore (default: 5), as sketched below
- Reports success/failure for each submission
Results:
- Displays a summary with total, successful, and failed submissions
- Logs detailed information about each submission
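The concurrency cap mentioned in the submission phase behaves like the sketch below: an asyncio.Semaphore limits how many submissions are in flight, and a short delay is inserted between them. The submit_one coroutine is a placeholder for the real HTTP POST and is shown only to illustrate the pattern:

import asyncio

MAX_CONCURRENT = 5   # --max-concurrent
DELAY_MS = 100       # --delay-ms

async def submit_one(project):
    # Placeholder for the real HTTP POST to the job manager REST API.
    await asyncio.sleep(0.2)
    print(f"submitted {project}")

async def submit_all(projects):
    semaphore = asyncio.Semaphore(MAX_CONCURRENT)

    async def gated(project):
        async with semaphore:                     # at most MAX_CONCURRENT in flight
            await submit_one(project)
            await asyncio.sleep(DELAY_MS / 1000)  # brief pause before the next submission

    await asyncio.gather(*(gated(p) for p in projects))

asyncio.run(submit_all([f"project{i}.aedb" for i in range(12)]))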
Note
The script does not wait for jobs to complete, only for submission confirmation. Job execution happens asynchronously in the job manager service.
Tip
- Use --max-concurrent to limit load on the job manager service when submitting large batches.
- Use --delay-ms to control the pause between submissions (default: 100 ms). This ensures HTTP requests are fully sent before the next submission starts.
- Set --delay-ms 0 to disable the delay if your network is very fast and reliable.
- For very large batch submissions, consider increasing the timeout in the code if network latency is high.
Programmatic—native asyncio#
import asyncio
from pyedb.workflows.job_manager.backend.service import JobManager
from pyedb.workflows.job_manager.backend.job_submission import create_hfss_config
async def main():
    manager = JobManager()                # same back-end
    config = create_hfss_config(
        project_path="antenna.AEDB",
        scheduler_type="SLURM",           # or "LSF", "NONE", …
        scheduler_options={
            "queue": "hpclarge",
            "nodes": 2,
            "cores_per_node": 32,
            "time": "04:00:00",
        },
    )
    job_id = await manager.submit_job(config, priority=5)
    await manager.wait_until_all_done()   # non-blocking wait
    print("all done")

asyncio.run(main())
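Because submit_job is a coroutine, several configurations can be queued concurrently; a sketch using the same JobManager instance and the minimal create_hfss_config call from the synchronous example:

import asyncio
from pyedb.workflows.job_manager.backend.service import JobManager
from pyedb.workflows.job_manager.backend.job_submission import create_hfss_config

async def submit_many(project_paths):
    manager = JobManager()
    configs = [create_hfss_config(project_path=p) for p in project_paths]
    # Queue all jobs concurrently, then wait for every one to finish.
    job_ids = await asyncio.gather(*(manager.submit_job(c) for c in configs))
    print("queued:", job_ids)
    await manager.wait_until_all_done()

asyncio.run(submit_many(["board_a.aedb", "board_b.aedb"]))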
Choosing between sync & async#
| Synchronous (scripts / notebooks) | Asynchronous (services / CLI) |
|---|---|
| No asyncio knowledge required. | Caller runs inside an asyncio event loop. |
| Blocking calls—caller waits for result. | Non-blocking—event loop stays responsive. |
| Ideal for interactive work, CI pipelines, quick scripts. | Ideal for web servers, micro-services, GUI applications. |
Stand-alone server launcher script#
The file start_service.py is a minimal wrapper around
JobManagerHandler that exposes only --host and --port.
It is not installed by pip; copy it from the doc folder or adapt the sketch below and place it anywhere in your PATH.
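A minimal sketch of such a launcher is shown below. It assumes JobManagerHandler accepts host and port keyword arguments and that start_service() returns once the background server is up; check the JobManagerHandler API reference for the exact signature before using it:

"""start_service.py: minimal stand-alone launcher for the PyEDB job manager."""
import argparse
import time

from pyedb.workflows.job_manager.backend.job_manager_handler import JobManagerHandler

def main():
    parser = argparse.ArgumentParser(description="Run the PyEDB job-manager backend.")
    parser.add_argument("--host", default="0.0.0.0")
    parser.add_argument("--port", type=int, default=8080)
    args = parser.parse_args()

    # NOTE: host/port keyword arguments are assumed here; adjust to the
    # actual JobManagerHandler constructor signature.
    handler = JobManagerHandler(host=args.host, port=args.port)
    handler.start_service()
    print(f"✅ Job-manager backend listening on http://{args.host}:{args.port}.")

    try:
        while True:          # keep the process alive until Ctrl+C
            time.sleep(1)
    except KeyboardInterrupt:
        handler.close()

if __name__ == "__main__":
    main()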
See also#
- job_manager_rest_api – Complete endpoint reference
- JobManagerHandler – API reference (sync façade)
- JobManager – API reference (async core)
- configuration_syntax – All scheduler & solver options
- ../tutorials/submit_batch – Bulk submissions on SLURM/LSF