Workflow and automation, traced through one Rust scheduler.
Most pages on this topic explain workflow and automation by drawing a slide with two boxes: a trigger on the left, an action on the right, an arrow in the middle. That picture is fine for a PowerPoint and useless when a real bot needs to fire at 9:13 in the morning, talk to a desktop application, and not eat the machine if the workflow gets stuck. This page opens the actual file that does that work: a 1,642-line Rust scheduler at apps/desktop/src-tauri/src/workflow_scheduler.rs. Four moving parts, each named in code, each anchored to a line you can read on GitHub.
Why the slide-level answer is not enough
The standard explainer for this topic, the one Atlassian and IBM and Zapier all roughly converge on, says: a workflow is a sequence of steps, automation is what happens when something fires that sequence without a human present, and the bridge between the two is a trigger. That is true. It also tells you nothing about what a trigger actually is, what happens between the moment a trigger fires and the moment the first action runs, or what stops the bot from chewing through your CPU when an action gets stuck on a modal.
Those questions are not abstract. They are the questions an enterprise buyer eventually asks in the third week of a pilot, when someone notices the bot has been “running” for two hours on a job that used to take three minutes, and there is no obvious way to tell whether it is working or hung. The answers live in code, and the answers are short. Three trigger types. Sixty minutes of jitter. Thirty minutes of execution. Four steps to talk to the executor. Fifty rows of history. Ten megabytes of buffer. Each of those numbers is a bound the system enforces on itself, and each is on a specific line of one Rust file.
The implementation is in the public Mediar codebase, an open-source Rust executor and Windows desktop recorder published under MIT at github.com/mediar-ai/terminator. The point of the line numbers below is not to flatter the project; it is to let you verify, rather than trust, what the bot does when nobody is watching.
Part one
A trigger has exactly three shapes.
The first thing to know about workflow and automation in this codebase is that the universe of ways a workflow can be kicked off is small and closed. It is one Rust enum with three variants, declared in workflow_scheduler.rs between lines 92 and 108. Cron, Manual, Webhook. That is the list. Anything more exotic, a file-system watch, a calendar invite, an inbox rule, is not in this enum today, and so is not something the scheduler can fire on its own.
Closed enums are an underrated design choice for a scheduler. The competing pattern is a generic plugin registry where anything can register itself as a trigger, which is more flexible and almost always more fragile. With three variants in one file, the next engineer who needs to add a fourth has to touch the enum, the cron loop, the JSON deserializer, and the UI form in a single pull request. That friction is a feature. It keeps the trigger surface small enough that an audit team can read it in an afternoon.
One detail worth pulling out of the snippet: the #[serde(tag = "type", rename_all = "lowercase")] attribute on the enum determines the JSON shape on the wire. Every scheduled workflow is persisted as JSON with one of three possible type values: cron, manual, webhook. That is the contract a frontend, a CLI, or a future API consumer has to honor. There is nothing in between, and that absence is the design.
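A minimal sketch of what that closed enum looks like in Rust. The variant names and the lowercase wire tags are from the article; the exact field names and types are approximations, and the real declaration (lines 92 through 108) uses serde's #[serde(tag = "type", rename_all = "lowercase")] rather than the hand-written tag method shown here, which exists only to keep the sketch dependency-free:

```rust
// Approximate shape of the trigger enum; field names are assumptions.
#[derive(Debug)]
enum TriggerConfig {
    Cron {
        schedule: String,            // five-field cron expression
        timezone: Option<String>,    // optional timezone
        jitter_minutes: Option<u32>, // 0-60, uniform random delay
    },
    Manual, // fired by a user click
    Webhook {
        path: Option<String>, // optional URL path
    },
}

impl TriggerConfig {
    // The lowercase tag serde would emit as the "type" field on the wire.
    fn wire_tag(&self) -> &'static str {
        match self {
            TriggerConfig::Cron { .. } => "cron",
            TriggerConfig::Manual => "manual",
            TriggerConfig::Webhook { .. } => "webhook",
        }
    }
}

fn main() {
    let t = TriggerConfig::Cron {
        schedule: "13 9 * * *".into(),
        timezone: None,
        jitter_minutes: Some(60),
    };
    println!("wire tag: {}", t.wire_tag());
}
```

The point of the sketch is the closed-world property: a fourth trigger type cannot exist without editing this `match`, which is exactly the friction the paragraph above describes.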
Part two
Cron with jitter, and why one harmless field matters more than it looks.
The Cron variant carries three fields: a schedule string in the standard five-field format, an optional timezone, and an optional jitter_minutes integer between zero and sixty. The first two are exactly what you expect from any cron implementation since the late 1970s. The third is a small comment in source that earns its keep at scale.
The doc-comment is two sentences long: “Random delay in minutes to add to each execution (0-60). Helps avoid detection by making execution times unpredictable.” Read those two sentences slowly. The mechanic is uniform-random jitter on top of the base cron schedule. The motivation is that an unattended workflow firing at 9:00:00 to the second on every machine in a fleet looks, from the receiving system, like a denial-of-service. Anomaly detectors notice. Rate-limiters kick in. SAP, Jack Henry, FIS, Cerner, Epic, Oracle EBS, and most older mainframe gateways have been tuned over twenty years to throttle synchronized bursts of identical traffic.
Jitter spreads the burst. A hundred copies of the same workflow scheduled to fire at 9 AM with jitter_minutes: 60 do not fire at 9:00:00. They fire spread approximately uniformly across the hour from 9:00 to 10:00, which is indistinguishable from human-driven traffic at the host system's logging layer. Same total throughput, no instantaneous spike, no distinctive fingerprint. This is one of the unglamorous reasons unattended bots in regulated industries survive a year of production without flagging the host's risk team.
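The mechanic compresses to one function. This is an illustrative sketch, not the scheduler's actual code: the real implementation presumably uses a proper RNG crate, while this version uses a tiny linear congruential generator so it stays std-only. The clamp to sixty minutes mirrors the documented 0-60 bound on jitter_minutes:

```rust
use std::time::{SystemTime, UNIX_EPOCH};

// Illustrative only: a uniform-ish delay within the jitter window.
// The real scheduler's RNG is not shown in the article; this LCG is a stand-in.
fn jitter_seconds(jitter_minutes: u64, seed: u64) -> u64 {
    let cap = jitter_minutes.min(60) * 60; // clamp to the 0-60 minute bound
    if cap == 0 {
        return 0;
    }
    // One LCG step (Knuth's MMIX constants).
    let next = seed
        .wrapping_mul(6364136223846793005)
        .wrapping_add(1442695040888963407);
    next % cap // delay in seconds added on top of the cron fire time
}

fn main() {
    let seed = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .unwrap()
        .as_nanos() as u64;
    let delay = jitter_seconds(60, seed);
    assert!(delay < 3600);
    println!("this run would fire {delay} seconds after the cron minute");
}
```

A hundred desktops each drawing an independent delay from this window is what turns one synchronized spike into an hour of ordinary-looking traffic.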
The other piece of the cron implementation worth pulling out is the same-minute guard at lines 300 through 307. The cron loop runs every few seconds, and the matching logic is per-minute, so a naive implementation would fire the same workflow ten times in a single minute. The guard reads the previous last_executed timestamp, formats it down to %Y-%m-%d %H:%M, and compares the string. If the strings match, the loop silently skips and logs “Already executed this minute, skipping”. Three lines of code, one of those guarantees that ends up mattering at three in the morning.
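The guard's logic fits in a few lines. The real code formats last_executed down to "%Y-%m-%d %H:%M" and compares strings; this simplified stand-in divides epoch seconds by sixty, which is the same per-minute comparison without a date library:

```rust
// Simplified stand-in for the same-minute guard at lines 300-307.
// Real code compares "%Y-%m-%d %H:%M" strings; epoch_secs / 60 is equivalent.
fn already_ran_this_minute(last_executed_epoch: Option<u64>, now_epoch: u64) -> bool {
    match last_executed_epoch {
        Some(last) => last / 60 == now_epoch / 60,
        None => false, // never executed: always eligible to fire
    }
}

fn main() {
    // Fired at second 33182; the loop ticks again at 33189, same minute: skip.
    assert!(already_ran_this_minute(Some(33182), 33189));
    // A minute later: fire again.
    assert!(!already_ran_this_minute(Some(33182), 33242));
    println!("guard behaves as expected");
}
```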
Part three
The 30-minute cap, and what happens when a workflow runs too long.
Every scheduled run carries a hard wall. The constant on line 161 reads const MAX_SCHEDULED_EXECUTION_SECS: u64 = 30 * 60; and the comment above it is the design statement: “After this, the scheduler sends stop_execution to the MCP server and kills the workflow.” There is no exponential backoff, no user-configurable override, no “just five more minutes” flag. Half an hour and the kill path runs.
The kill path itself is the function stop_mcp_execution on line 657. The shape is small and worth knowing. It opens a fresh MCP session against the same localhost port, with the client name “mediar-scheduler-stop” so the kill is distinguishable from the original run in logs. It POSTs a single tools/call for the stop_execution tool with empty arguments. It DELETEs the kill session afterward. The original SSE stream from the running workflow unwinds on the executor's side, and the slot in currently_executing (a HashSet on line 194) is freed.
The reason for opening a fresh session for the kill, rather than reusing the one the runaway workflow is hung on, is the non-obvious detail. If the original session is hung, you cannot send anything down it. A fresh session bypasses that. This is a tiny pattern that shows up in well-designed unattended executors and is missing from most weekend-project schedulers.
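The wire shape of that kill request is simple enough to write out by hand. This sketch only assembles the JSON-RPC body; the real stop_mcp_execution sends it over HTTP to the localhost MCP port under a fresh session, and the helper name here is hypothetical:

```rust
// Hypothetical helper: the JSON-RPC body for the stop_execution kill call,
// built by hand for readability. The real code uses an HTTP client and a
// fresh MCP session named "mediar-scheduler-stop".
fn build_stop_body(request_id: u64) -> String {
    format!(
        r#"{{"jsonrpc":"2.0","id":{request_id},"method":"tools/call","params":{{"name":"stop_execution","arguments":{{}}}}}}"#
    )
}

fn main() {
    let body = build_stop_body(1);
    assert!(body.contains(r#""name":"stop_execution""#));
    println!("{body}");
}
```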
What happens between a trigger firing and the scheduler stopping the run
The 30-minute number is not a guess; it is calibrated to the workflows the desktop runs in production. A SAP B1 sales-order entry takes ninety seconds. A claims-intake form fill takes two minutes. A patient-intake handoff into Epic takes around three minutes. A bank-onboarding pull from Jack Henry takes under ten. Thirty minutes is roughly five times the longest real workflow, which is the slack you want for an unusually slow day, but not so much slack that a stuck modal blocks the rest of the schedule for an hour. If a workflow keeps approaching the cap, the right response is to break it into pieces, not to raise the cap.
Part four
How the scheduler talks to the executor.
The scheduler does not run workflows itself. It owns the clock and the trigger logic and delegates execution to a separate process, the MCP server, which talks to a separate runtime, the Terminator executor that drives Windows accessibility APIs. The protocol between the scheduler and the MCP server is the standard model context protocol over HTTP on a localhost port. The handshake is four steps in a fixed order, all in the execute_workflow function between lines 391 and 653.
Step 1: initialize
The scheduler POSTs an MCP initialize request with protocolVersion: "2024-11-05" and a clientInfo block naming itself “mediar-scheduler”. The response carries an mcp-session-id header that is read off and saved for the rest of the exchange. A 30-second timeout guards this step. If the MCP server is down, the scheduler fails fast here.

Step 2: notifications/initialized
A second POST sends the standard MCP notifications/initialized JSON-RPC notification, with the session id from step one in the header. This is the protocol-level handshake that tells the server the client is ready. The scheduler does not wait for a response, only confirms the message left.

Step 3: tools/call execute_sequence
The third POST is the actual run. The body is a JSON-RPC tools/call for execute_sequence with two arguments: a url pointing at the workflow file (a file:// URL for local paths, or HTTPS for remote ones), and an inputs object the workflow can read at runtime. The HTTP timeout is set to MAX_SCHEDULED_EXECUTION_SECS + 60 so the connection lives long enough for the executor to stream progress back.

Step 4: stream and DELETE
The third POST returns a server-sent-events stream. Each SSE chunk is a JSON-RPC notification: a notifications/progress with current and total step counters, or a notifications/message whose logger field is “workflow” and whose data block carries one of step_started, step_completed, or step_failed. The scheduler re-emits each as a Tauri event so the desktop UI can update its progress bar in real time. When the stream ends, the scheduler issues an HTTP DELETE on the session id to release the executor slot. The run is over.
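The stream-side dispatch can be sketched in a few lines. This is a hedged illustration of the fan-out, not the scheduler's parser: real code would parse each SSE data payload as JSON, while plain substring checks keep this sketch std-only; the enum names are invented for the example:

```rust
// Hypothetical classifier for SSE data lines: each carries a JSON-RPC
// notification, and the scheduler fans out by method before re-emitting
// the event to the desktop UI as a Tauri event.
#[derive(Debug, PartialEq)]
enum StreamEvent {
    Progress,    // notifications/progress: step counters for the progress bar
    WorkflowLog, // notifications/message with logger "workflow"
    Other,
}

fn classify(sse_data_line: &str) -> StreamEvent {
    if sse_data_line.contains(r#""method":"notifications/progress""#) {
        StreamEvent::Progress
    } else if sse_data_line.contains(r#""method":"notifications/message""#)
        && sse_data_line.contains(r#""logger":"workflow""#)
    {
        StreamEvent::WorkflowLog
    } else {
        StreamEvent::Other
    }
}

fn main() {
    let chunk =
        r#"{"jsonrpc":"2.0","method":"notifications/progress","params":{"progress":3,"total":9}}"#;
    println!("chunk classified as {:?}", classify(chunk));
}
```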
The reason this looks like a real protocol and not a custom three-line fetch is that it is a real protocol. MCP is the same standard that Claude Desktop, Cursor, and most agent frameworks use to talk to tool servers. By riding on top of it, the Mediar scheduler inherits a tested handshake, a standardized error shape, and the SSE pattern that streams progress without polling. The cost is the four-step setup. The benefit is everything you do not have to reinvent.
One nuance is the SSE buffer. The constant on line 157, MAX_SSE_BUFFER: usize = 10 * 1024 * 1024, caps the in-flight buffer at ten megabytes. If a workflow ever streamed enough progress events to exceed that, the scheduler logs a warning naming the workflow and drains the buffer rather than letting memory grow unbounded. This is the kind of guardrail that does not exist in most weekend-project schedulers and that becomes load-bearing the first time a chatty workflow runs unattended overnight.
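The guardrail's shape, sketched with an assumed struct (the real field layout is not shown in the article, and the real warning names the workflow rather than counting drains):

```rust
const MAX_SSE_BUFFER: usize = 10 * 1024 * 1024; // 10 MB, as on line 157

// Assumed sketch: append incoming bytes; if the buffer exceeds the cap,
// warn and drain instead of letting memory grow unbounded.
struct SseBuffer {
    buf: Vec<u8>,
    cap: usize,
    drains: usize,
}

impl SseBuffer {
    fn new(cap: usize) -> Self {
        SseBuffer { buf: Vec::new(), cap, drains: 0 }
    }
    fn push(&mut self, chunk: &[u8]) {
        self.buf.extend_from_slice(chunk);
        if self.buf.len() > self.cap {
            eprintln!("warning: SSE buffer exceeded {} bytes, draining", self.cap);
            self.buf.clear();
            self.drains += 1;
        }
    }
}

fn main() {
    let mut b = SseBuffer::new(16); // tiny cap so the example trips it
    b.push(b"0123456789");
    b.push(b"0123456789"); // 20 bytes > 16: drained
    assert_eq!(b.drains, 1);
    assert!(b.buf.is_empty());
    println!("production cap is {MAX_SSE_BUFFER} bytes");
}
```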
The four numbers, side by side
The whole story above compresses to four hard limits, each policy rather than preference, each named on a specific line. These are the bounds the scheduler enforces on itself, and they are the right shorthand to take into a vendor conversation about workflow and automation.
Three trigger types, sixty minutes of jitter on the wide cron end, thirty minutes per run on the safety end, fifty rows of rolling history per workflow for diagnosis. The shape of every unattended run lives between those four numbers.
“Three trigger types in TriggerConfig, a uniform random delay between zero and sixty minutes on cron, a thirty-minute hard execution cap with a fresh-session stop_execution kill path, and fifty rows of rolling history per workflow. The whole shape of unattended automation in this codebase is bounded by those four numbers, and you can read each one in apps/desktop/src-tauri/src/workflow_scheduler.rs.”
Hard limits in workflow_scheduler.rs.
What this means if you are buying
The reason it is worth opening one scheduler file rather than reading another high-level explainer is that the four numbers above tell you, more honestly than any feature list, what the bot will and will not do under load. A buyer who wants file-system-watch triggers today will not get them from this scheduler, because the enum has three variants. A buyer who worries about synchronized bursts hitting their SAP gateway will get jitter for free, because it is a documented field on the Cron variant. A buyer who needs to know when a stuck workflow stops eating the desktop will get an answer in minutes, not hours, because the cap is 30 minutes hard.
Those are not abstract guarantees. They are policy decisions a team made and committed to source. The same is true at every competitor. UiPath has its own equivalent constants, Power Automate has its own, Automation Anywhere has its own. The difference is whether you can read them. When the file is open source under MIT and the constant is named, the conversation becomes a comparison of explicit numbers rather than a comparison of marketing copy.
That shift, from feature copy to source-level constants, is the single most useful upgrade you can make to a vendor evaluation for any unattended workflow product. The slide-level diagram of a trigger pointing at an action is a starting point. The four numbers in this scheduler, plus their analogues in whichever other systems you are evaluating, are where a real decision happens.
Bring a workflow that has stalled. We will run it through this scheduler on a call.
Pick a Windows workflow you have already tried to automate. We will record it live, schedule it on cron with jitter, watch it fire, and stop it on demand. Same scheduler this page describes.
Frequently asked questions
What is the literal pair of objects that 'workflow and automation' refers to in this codebase?
Two files. The workflow is a TypeScript file produced by the recording pipeline and stored under the platform-appropriate workflows directory (LOCALAPPDATA/mediar/workflows on Windows, ~/Library/Application Support/mediar/workflows on macOS, ~/.local/share/mediar/workflows on Linux). The automation is a Rust scheduler at apps/desktop/src-tauri/src/workflow_scheduler.rs that holds a HashMap of ScheduledWorkflow entries and decides when each one fires. The pair is the contract: one side is the recipe, the other side is the cook with a clock.
How many trigger types does the scheduler actually support?
Exactly three. The TriggerConfig enum on lines 92 through 108 of workflow_scheduler.rs has three variants and only three: Cron with a schedule string and an optional jitter_minutes, Manual which fires on user click, and Webhook with an optional URL path. Anything more exotic, like a file-system watch or a calendar invite, is not in this enum today. The enum is serde-tagged with type and lowercased, so the JSON shape on the wire is { type: 'cron' }, { type: 'manual' }, or { type: 'webhook' }.
What does jitter_minutes actually do, and why is it there?
It adds a uniformly random delay between 0 and 60 minutes to a cron-scheduled execution. The doc-comment in source is explicit about the reason: 'Helps avoid detection by making execution times unpredictable.' If you have a hundred copies of the same workflow scheduled at 9 AM across a hundred desktops, every one fires at 9:00:00 to the second, and any rate-limiter or anomaly detector on the target system sees a synchronized burst. Jitter spreads the burst across an hour. It is a small piece of code with one of the longest-running consequences for unattended automation at scale.
How long can a single scheduled run take before the system kills it?
Thirty minutes. MAX_SCHEDULED_EXECUTION_SECS on line 161 is hard-coded to 30 * 60. When the timer fires, the scheduler calls stop_mcp_execution which opens a fresh MCP session, sends a tools/call for stop_execution with empty arguments, and DELETEs the session afterward. There is no grace period and no automatic retry. A workflow that needs longer either has to be split into smaller pieces or has to run via a different code path. The cap exists so a stuck UI never holds the desktop hostage overnight.
How does the scheduler talk to the workflow executor?
Through MCP, the model context protocol, over HTTP on a localhost port. The handshake in execute_workflow is four steps in order. Step 1 POSTs an initialize request with protocolVersion 2024-11-05 and reads the mcp-session-id header off the response. Step 2 POSTs a notifications/initialized notification carrying that session id. Step 3 POSTs a tools/call for the execute_sequence tool, with the workflow file URL and the inputs object as arguments. Step 4 streams server-sent events back from the executor and ends with an HTTP DELETE that closes the session. The whole exchange is in the same file between roughly lines 391 and 730.
What stops a runaway log file or a bursty SSE stream from eating the desktop?
Two named constants right above the executor. MAX_EXECUTION_LOGS at line 154 caps each workflow's execution history at 50 entries before the oldest one rolls off; this is what shows up in the desktop UI as the run history. MAX_SSE_BUFFER at line 157 is 10 megabytes; if the SSE buffer ever exceeds it, the scheduler logs a warning naming the workflow and clears the buffer rather than letting memory grow unbounded. Neither limit is configurable today, which is the point: they are policy, not preference.
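The rolling-history side of that answer is a classic bounded queue. Whether the real field is a VecDeque or a Vec is an assumption; the eviction policy is what the constant enforces:

```rust
use std::collections::VecDeque;

const MAX_EXECUTION_LOGS: usize = 50; // cap named on line 154

// Sketch of the rolling history: push the newest run, evict the oldest
// once the cap is reached. The VecDeque is an assumed representation.
fn record_run(history: &mut VecDeque<String>, entry: String) {
    if history.len() == MAX_EXECUTION_LOGS {
        history.pop_front(); // oldest row rolls off
    }
    history.push_back(entry);
}

fn main() {
    let mut history = VecDeque::new();
    for i in 0..60 {
        record_run(&mut history, format!("run #{i}"));
    }
    assert_eq!(history.len(), 50);
    assert_eq!(history.front().unwrap(), "run #10"); // first ten rolled off
    println!("history holds the most recent {} runs", history.len());
}
```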
Does the same minute fire twice if the cron expression is broad?
No. should_execute_cron has an explicit check on lines 300 through 307: if a workflow's last_executed timestamp falls inside the same year-month-day-hour-minute string as 'now', the scheduler logs 'Already executed this minute, skipping' and returns false. The whole cron loop runs every few seconds, so a 'fires every minute' schedule fires once per minute regardless of how many ticks the loop takes inside that minute. This is the boring kind of guarantee that ends up mattering at three in the morning.
Where does the AI sit relative to this scheduler?
Nowhere. There is no Gemini, Claude, or OpenAI call in workflow_scheduler.rs. The model only runs once during recording, in the four-stage processing pipeline that turns captured events into a TypeScript file. After that, the file is on disk, and the scheduler is plain Rust calling the executor over MCP. A workflow that fires at 9:13 in the morning runs through zero inference calls. That separation, AI at authoring time and deterministic at runtime, is the design choice that lets a workflow pass a SOC 2 audit and still benefit from a frontier model during the writing step.
More from the Mediar topic series
Keep reading
What robotic process automation actually is, traced through the source
The mechanical answer in three layers: a six-event capture filter, a four-stage synthesis pipeline, and a four-strategy replay cascade. Each layer walked with the open-source files that implement it.
Where the AI lives in Mediar AI (and where it does not)
The model runs once, offline, during recording. The runtime is plain Rust calling Windows accessibility APIs. A source-level walkthrough of that split.
Meaning of robotic process automation: how the term split into two architectures
The phrase was coined around 2003 to describe a Windows scripting runtime. Twenty-three years later it points at two incompatible architectures. A word-by-word reading of the term.