A seam map for the mid-market ByDesign tenant

ByDesign order-to-cash automation is two lines, not one. Most teams only build the first.

SAP Business ByDesign already ships OData v2 and SOAP web services for the steady steps in O2C. The available-to-promise check, the customer invoice on the happy path, the standard delivery confirmation from a 3PL feed, those are API-shaped and they are where most published guides stop. The hours an actual ByDesign team spends every week are not on those steps. They are on a small number of human-decision screens that do not have a clean web service: sales order entry from a customer-attached PDF, manual payment allocation when the bank file does not auto-match, credit hold release, and dunning review. Those screens are the second line. This page is about how to draw it.

Matthew Diakonov
10 min

The thesis, plainly

The most common shape of a ByDesign O2C automation project is to map the whole flow as an event diagram and then ask IT to wire OData and SOAP integrations against every node. That works for the API-shaped nodes and stalls on the UI-shaped ones. After two quarters the steady-state pieces are running, and the team is still entering sales orders from PDFs by hand and fighting through partial-match wire allocations every Tuesday.

The shape that ships is to score every touchpoint in O2C as API-only, UI-only, or mixed, build the API-only and the API portion of the mixed nodes against ByDesign's standard services, and use a UI-driven recorder for the UI-only and the UI portion of the mixed nodes. The two lines are not in competition. They cover different screens.

The rest of this page is the seam map. Then it answers the only mechanical question that matters about the second line: what makes a recorded WorkCenter workflow survive a ByDesign release that nudges the layout.

The seam map: seven O2C touchpoints, scored

Cards in teal are the ones that produce the bulk of human hours and where the recorded WorkCenter line earns its keep. The plain cards are either API-shaped or so context-dependent that the right answer is the API in some tenants and the UI in others.

1. Sales order intake from a customer PDF

A customer emails a PO as a PDF attachment. There is no EDI link and the buyer will not change. The OData CustomerInvoiceCreateRequest does nothing here because the data is not yet in ByDesign. UI-only: someone opens the Sales Orders WorkCenter, clicks New Sales Order, and types eight to twelve fields per line.

2. Available-to-promise check

API-only. The ProductAvailabilityConfirmation service exists for exactly this check. Anything that calls itself O2C automation and loops a recorder through the ATP screen is doing the wrong thing.

3. Credit hold release

Mixed. The hold itself is set by a rule in the credit limit configuration, so that side needs no automation. The release is a judgment call: is this customer paying on time this quarter, did the controller already verbally approve, do we let it through with a stricter dunning level. That release step is UI-only.

4. Outbound delivery and pick confirmation

API-only when the warehouse is on a 3PL with a feed back to ByDesign. UI-only on a small site where someone confirms picks in the Outbound Logistics WorkCenter.

5. Customer invoice creation

API-only on the happy path. The CustomerInvoice web service handles the standard cycle. The exception is invoice splits and consolidation rules that are configured per customer; those are still set in the UI by AR.

6. Manual payment allocation

UI-only and very loud. When the bank file does not auto-match (one wire, three open invoices, slightly off amount), the Payment Allocation screen is where a human compares amounts, splits the receipt, and posts. The OData layer cannot guess which invoices the wire is for.

7. Dunning run review and approval

UI-only. The dunning run can be scheduled, but the review (suppress this customer, push this one to the next dunning level, write off this old item) lives in the Receivables WorkCenter and the dunning monitor.

The two lines, side by side

The diagram below is the live shape of a ByDesign O2C automation that admits both lines. On the left, the inputs that show up in a ByDesign tenant every day. In the middle, the runtime that sits between the inputs and the screens. On the right, the WorkCenter surfaces where the human-decision steps actually happen.

Inputs the API does not know about, surfaces the API does not reach

  • Inputs: Customer PO email, bank file, aged AR view
  • Runtime: recorded WorkCenter run
  • Surfaces: Sales Orders WorkCenter, Payment Allocation, Receivables WorkCenter

The OData and SOAP integrations sit in a parallel pipe in the same tenant. They handle ATP, the standard customer invoice, and the standard delivery confirmation. The diagram is intentionally about what the API pipe does not cover. The two lines together cover the whole flow.

The screen-shaped problem in three deeper passes

Three of the seven touchpoints absorb the bulk of the time. Each one is worth reading on its own.

Sales order entry from a customer-attached PDF

A customer emails a PO. The buyer is a small company that will not adopt EDI in this lifetime. The PDF has the buyer, the ship-to, eight lines with quantities and prices, and a PO number. The CustomerInvoiceCreateRequest web service does not help, because the data is not yet in the tenant. A document model parses the PDF into a typed object; the recorded ByDesign workflow opens the Sales Orders WorkCenter, clicks New Sales Order, and types each field by role and name. The validation shape of the object (a posting date inside an open period, a known customer, line items with non-empty product IDs) is what catches malformed inputs before they reach the screen.
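That validation shape can be sketched as a typed object with a pre-flight check. Everything here is a hedged illustration, not the actual workflow schema: the names KNOWN_CUSTOMERS, OPEN_PERIOD, and the field layout are hypothetical.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical tenant data -- the real lists come from the tenant itself.
KNOWN_CUSTOMERS = {"CUST-1001", "CUST-1042"}
OPEN_PERIOD = (date(2024, 6, 1), date(2024, 6, 30))

@dataclass
class OrderLine:
    product_id: str
    quantity: int
    price: float

@dataclass
class SalesOrder:
    customer_id: str
    posting_date: date
    po_number: str
    lines: list[OrderLine] = field(default_factory=list)

def validate(order: SalesOrder) -> list[str]:
    """Collect every validation error before the recorded run touches the screen."""
    errors = []
    if order.customer_id not in KNOWN_CUSTOMERS:
        errors.append(f"unknown customer {order.customer_id}")
    if not (OPEN_PERIOD[0] <= order.posting_date <= OPEN_PERIOD[1]):
        errors.append("posting date outside open period")
    for i, line in enumerate(order.lines):
        if not line.product_id:
            errors.append(f"line {i}: empty product ID")
    return errors
```

Rejecting a malformed object here is cheaper than letting the recorded run type it in and having ByDesign reject it on screen.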

Manual payment allocation when the bank file does not auto-match

Most wires hit the auto-match. The wires that do not are the interesting ones: a single inbound payment that covers parts of three open invoices, a payment that arrives 47 cents short of an invoice total, a wire whose memo line names two PO numbers that were merged into one invoice. The Payment Allocation screen is where this is resolved. The recorded workflow does not replace the judgment about which open items the wire actually covers; it takes a structured allocation proposal (open items, allocated amounts, residuals) and drives the screen to apply it, leaving the human to review the proposed split. The apply happens on the screen because the apply button is on the screen.
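The structured allocation proposal can be sketched as a small greedy pass. This is an assumption about the proposal's shape, not the runtime's actual matching logic, which scores candidates rather than walking oldest-first:

```python
def propose_allocation(wire_amount, open_items):
    """Apply an inbound wire to open items in order, leaving any
    shortfall as a residual on the last item touched."""
    remaining = round(wire_amount, 2)
    proposal = []
    for invoice_id, amount in open_items:
        if remaining <= 0:
            break
        applied = min(amount, remaining)
        proposal.append({
            "invoice": invoice_id,
            "applied": round(applied, 2),
            "residual": round(amount - applied, 2),  # left open on this invoice
        })
        remaining = round(remaining - applied, 2)
    return proposal, remaining
```

The recorded workflow consumes a structure like this and drives the Payment Allocation screen to enter it; the human reviews the proposed split instead of typing each line.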

Dunning review and approval

The dunning run itself is a scheduled job. The review of the proposal is the slow part. Suppress this customer because they are on a payment plan, push that customer to the next dunning level, write off the old item that has been on the books for two years and will not be paid. That review lives in the Receivables WorkCenter and the dunning monitor. A recorded pass that walks the proposal row by row, applies the allowed actions for each row, and parks the remainder for human review is what turns the weekly four-hour ritual into a forty-minute one.
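The row-by-row review pass has a simple control shape: apply what a decision rule allows, park everything else for a human. A minimal sketch, assuming a hypothetical set of allowed actions and a caller-supplied decision function:

```python
# Hypothetical action names -- the real set comes from the dunning monitor.
ALLOWED_ACTIONS = {"suppress", "next_level", "write_off"}

def review_pass(proposal_rows, decide):
    """Walk the dunning proposal row by row. Rows the decision rule
    resolves to an allowed action are applied; the rest are parked
    for human review."""
    applied, parked = [], []
    for row in proposal_rows:
        action = decide(row)
        if action in ALLOWED_ACTIONS:
            applied.append((row["customer"], action))
        else:
            parked.append(row["customer"])
    return applied, parked
```

The parked remainder is what turns a four-hour ritual into forty minutes: the human only sees the rows the rule could not resolve.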

The mechanical question: what survives a ByDesign release

The fair argument against UI automation in a SaaS ERP is that the vendor reshuffles the layout twice a year and the recordings break. That is true if the recordings are tied to coordinates. It is not true if the recordings are tied to the accessibility surface, which is the same surface a screen reader uses and the one ByDesign and the SAP UI shell maintain across releases.

The Mediar desktop runtime makes that explicit. Before two captured DOM trees are compared, a function called remove_volatile_dom_attributes walks the JSON and drops every x, y, width, height field, plus the value attribute on input elements (which would otherwise dominate the diff with content noise). The file is at apps/desktop/src-tauri/src/dom_tree_diff.rs in the open-source desktop agent.
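The stripping pass is easy to sketch. This is a Python approximation of the idea (the real remove_volatile_dom_attributes is Rust and operates on serde_json values); the tree shape and the "input" role check are assumptions for illustration:

```python
VOLATILE = {"x", "y", "width", "height"}

def strip_volatile(node):
    """Recursively drop geometry fields, plus `value` on input elements,
    so a layout shuffle produces no diff between two captured trees."""
    if isinstance(node, dict):
        drop = set(VOLATILE)
        if node.get("role") == "input":
            drop.add("value")  # content noise, captured separately
        return {k: strip_volatile(v) for k, v in node.items() if k not in drop}
    if isinstance(node, list):
        return [strip_volatile(n) for n in node]
    return node
```

Two captures of the same screen taken before and after a release that only moves fields around compare equal once both are passed through this function.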

What the recorder ties itself to, and what it deliberately ignores

Tied to:

  • Field role and name (the screen reader's view of "Customer ID", "Net Value", "Due Date")
  • Automation ID and ARIA label on every WorkCenter input element
  • WorkCenter title and the breadcrumb path that leads to a screen
  • Tab order across the form, which is the recorded keystroke spine

Deliberately ignored:

  • Pixel x/y of every element (volatile, stripped before diff)
  • Width and height of every element (volatile, stripped before diff)
  • value attribute on inputs (captured separately, stripped before diff)

The consequence for a ByDesign tenant is the part that matters: a half-yearly release that nudges a field down a row in the Sales Orders WorkCenter or repaints a column in the Payment Allocation grid does not produce a tree diff. The recorder does not see it. The run continues. A release that actually renames a field or removes one shows up as a real change and is flagged at the recording side, which is the place to fix it.

The retry shape behind a ByDesign cloud session

ByDesign is a cloud tenant. Cloud tenants drop sessions. The runtime accounts for that with a deterministic backoff in crates/executor/src/config/retry.rs. The default schedule is 30 seconds, then 60, then 120, capped at 600, with three Infrastructure retries before the run is marked failed. A session that drops for 90 seconds is invisible: the queue waits, retries, and continues. A tenant that is genuinely down for a planned outage produces three retries inside about three and a half minutes and then a clean failure that surfaces in the dashboard as one alert.

From RetryConfig::default in retry.rs

  • max_infrastructure_retries: 3
  • initial_delay_secs: 30
  • max_delay_secs: 600
  • backoff_multiplier: 2.0
  • enabled: true

Anything ByDesign itself rejects (validation failed, posting period closed, duplicate, permission denied) is classified as WorkflowLogic and never retried. Hammering the screen would not change the answer.
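The two mechanisms together fit in a few lines. A Python sketch of the schedule and the classifier (the real code is Rust in crates/executor/src/config/retry.rs; the error-name strings here are hypothetical stand-ins):

```python
def backoff_schedule(initial=30, multiplier=2.0, cap=600, retries=3):
    """Deterministic backoff matching the RetryConfig defaults:
    30s, 60s, 120s, each delay capped at 600s."""
    delays, delay = [], initial
    for _ in range(retries):
        delays.append(min(delay, cap))
        delay = int(delay * multiplier)
    return delays

# Hypothetical error names for illustration only.
INFRASTRUCTURE = {"connection_reset", "http_503", "timeout"}
WORKFLOW_LOGIC = {"validation_failed", "posting_period_closed",
                  "duplicate", "permission_denied"}

def classify(error):
    if error in INFRASTRUCTURE:
        return "Infrastructure"   # retried with backoff
    if error in WORKFLOW_LOGIC:
        return "WorkflowLogic"    # never retried; the answer will not change
    return "Unknown"              # stops the run for an operator ack
```

The schedule sums to 210 seconds, which is where the "three retries inside about three and a half minutes" figure comes from.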

The four WorkCenter surfaces that absorb the work

The recorded line is concentrated in four places in a ByDesign tenant. Each one is reachable by role and name on every input and every table column header, which is what makes the recordings hold up across releases.

WorkCenter destinations

Where the recorded O2C line actually lives

Sales Orders WorkCenter

New Sales Order screen survives layout reshuffles when the recorder targets fields by role and name, not by position.

Payment Allocation

The split-and-match screen is the densest UI in O2C. Recordings hold up because the input automation IDs are stable across releases.

Receivables WorkCenter

Dunning monitor and credit hold review surfaces are addressable by table column header, not coordinate.

Outbound Logistics

Pick confirmation works the same way. The list rows are reachable via table semantics, which the runtime captures.

When this design is the wrong fit

Three cases where the recorded WorkCenter line is not the right answer.

Pure happy-path O2C with mature integrations. If every customer is on EDI, the bank file matches cleanly every time, and the dunning policy is small enough that a scheduled run produces almost no review work, there is very little for the second line to do. The OData and SOAP layer is doing the job.

High-volume bulk loads. A 40,000-line price-list update or a one-time customer master migration is what the ByDesign data migration cockpit and the standard mass-upload tools are built for. The recorder is built for the steady stream of event-driven O2C tasks (a PDF arrives, a wire posts, a dunning monitor is reviewed), and it is honest to say it is not the right shape for the once-a-quarter migration.

A tenant that mostly needs reporting. If the gap in your O2C is the analytics layer rather than the execution layer, ByDesign analytics and a downstream warehouse are the right answer. Recorded automation does not produce insight; it produces posted documents.

Bring one ByDesign O2C touchpoint and we will draw the seam against it

On a 30-minute call we walk one of your actual O2C steps (sales order entry from a PDF, a partial wire allocation, a dunning review) and decide together whether it lives on the API line or the WorkCenter line. You leave with a checked-in workflow file you can run yourself.

Frequently asked questions

ByDesign has OData v2 and SOAP web services. Why bring desktop automation into an O2C flow at all?

Because most of the work an O2C team does in a steady-state ByDesign tenant is in the screens that do not have a 1:1 web service. A sales order arriving as a PDF attachment has to be entered before any web service can see it. A wire that does not match a single open invoice has to be split in the Payment Allocation screen. A dunning run has to be reviewed before it is released. The OData and SOAP layer is the right answer for the steady steps (ATP check, invoice posting on the happy path, delivery confirmation from a 3PL feed). The recorder is the right answer for the human-decision screens. They are not in competition.

What stays stable across a ByDesign half-yearly release, and what does not?

Roles, names, and automation IDs on input elements stay stable. Tab order across a form stays stable. WorkCenter titles and breadcrumb paths stay stable. What does not stay stable: the pixel coordinates of every element, the width and height, and the value attribute on inputs at the moment a tree is captured. The Mediar runtime strips x, y, width, height, and the input value field before comparing two captured trees, so a layout shuffle that moves a field down a row does not produce a diff. That is in apps/desktop/src-tauri/src/dom_tree_diff.rs in the open-source desktop agent.

How does the runtime hold up when ByDesign times the cloud session out mid-workflow?

The executor classifies a connection reset, a 503, or a timeout as Infrastructure and retries with backoff. The default schedule from RetryConfig::default in crates/executor/src/config/retry.rs is initial_delay_secs 30, max_delay_secs 600, backoff_multiplier 2.0, max_infrastructure_retries 3. A ByDesign cloud session that drops for 90 seconds is invisible: the queue waits, retries, and continues. Anything ByDesign itself rejects (validation failed, posting period closed) is classified as WorkflowLogic and never retried. Anything novel is classified as Unknown and stops the run for an operator ack.

How is sales order entry from a customer PDF actually wired up?

The trigger is a mailbox or a OneDrive folder. A document model extracts the buyer, ship-to, line items, quantities, prices, and PO reference into a typed object. The recorded ByDesign workflow takes that object as its input schema, opens the Sales Orders WorkCenter in the desktop client, clicks New Sales Order, and types each field by role and name. Validation rules in the schema (date in a posting period, customer in a known account list, line item with a non-empty product ID) reject malformed inputs before they touch the screen, which is cheaper than asking ByDesign to reject them.

Where does manual payment allocation get specifically harder than other ByDesign screens?

The Payment Allocation screen has multiple grids on one surface (open items, applied items, remainders), the user has to split a single inbound wire across several invoices, and the allocation logic is a judgment call that depends on a customer's typical payment shape. The recorder does not replace the judgment. What it does is take the structured output of a matching pass (which open items the inbound wire most likely covers, with a confidence score) and drive the screen to apply that allocation, leaving the human to review the proposed split rather than to type each line. The UI is still where the apply button lives, so the apply has to happen there.

Does this approach work for the dunning run, or only for the dunning review?

Both. The dunning run itself can be scheduled in ByDesign without any automation, and that is the right answer. The review of the dunning proposal (which customers to suppress, which letters to push to the next level, which old items to write off) is the part where humans actually spend time, and that is the part the recorder fits. A scheduled run plus a recorded review pass against the dunning monitor is materially different from one or the other alone.

Is this on-premise, in the cloud, or both? ByDesign is a cloud product.

ByDesign is the cloud tenant. The Mediar desktop agent runs on a Windows machine that has the ByDesign client open and signed in to the customer's tenant. That can be an operator's desktop, a Citrix session, or a dedicated Windows VM. The runtime never touches the cloud tenant directly; it drives the client. SOC 2 Type II and HIPAA postures apply to the runtime and to the platform that schedules the workflow.

Why not just have IT build OData and SOAP integrations for everything?

Two reasons it is not enough on its own. First, the screens that need automation are exactly the ones without a 1:1 web service. Manual payment allocation, dunning review, credit hold release, and inbound sales order entry from a customer email do not have a SOAP envelope you can post that resolves the human decision. Second, mid-market ByDesign customers usually do not have the IT bandwidth for a six-month integration project to chip away at the API-shaped 60 percent. They want the UI-shaped 40 percent automated this quarter. The recorder is the path that gets shipped.

Pricing on a ByDesign O2C workflow?

Runtime is billed at $0.75 per minute regardless of outcome. A typical sales-order entry from a PDF runs in 25 to 45 seconds against a ByDesign tenant. A manual payment allocation pass against a wire with three open items runs in about a minute. A dunning review pass scales with the number of customers in the proposal, roughly five seconds per customer once the recorded loop is warm. The $10K turn-key program fee converts to credits with a bonus, so it is effectively prepaid usage that covers the first pilot.

Where can I read the runtime myself?

The Terminator SDK and the desktop agent are open source under MIT at github.com/mediar-ai/terminator. The volatile-attribute stripping is in apps/desktop/src-tauri/src/dom_tree_diff.rs. The retry classifier and backoff schedule are in crates/executor/src/config/retry.rs. The recording-driven workflow shape (typed input schema, MCP tool steps, deterministic execution) is what lets a recorded workflow survive ByDesign's frequent cloud releases.