Skyvern pricing, decoded: what a credit actually buys, and where it stops.
Skyvern shipped a new credits-based pricing model on 30 January 2026, replacing the previous flat rate of five cents per step. Free, Hobby ($29), Pro ($149), and a custom Enterprise tier are listed on skyvern.com/pricing. Every other page on this question stops at the tier prices. This one does the math the tiers contain, names the surface a credit is actually sized for, and walks the architectural reason that some workflows have to be priced in a different unit altogether.
The four tiers, with the action equivalents the launch post published.
The official structure as of January 2026 is four tiers, each bundling a monthly credit allowance with concurrency, anti-bot features, authentication support, and integrations sized to how seriously you are using automation. The action numbers below are the working approximations the Day 5 launch post posted as guidance. They are not contractual limits. They are the cleanest way to read what a tier is actually for.
Free, Hobby, Pro, Enterprise: the published ladder
1. Free: $0/month. 1,000 credits, roughly 170 actions. CAPTCHA solving and basic support.
2. Hobby: $29/month. 30,000 credits, roughly 1,200 actions. Webhook integrations and faster execution.
3. Pro: $149/month. 150,000 credits, roughly 6,200 actions. Team workspaces and 2FA credential management.
4. Enterprise: Custom pricing. Unlimited credits. Self-hosted deployment, HIPAA, SOC 2 Type II, SSO, dedicated account manager.
Free is for kicking the tires on a small workflow. Hobby is for one developer running a steady automation against a couple of portals. Pro is for a team that needs workspaces, 2FA credential handling, and a meaningful concurrency budget. Enterprise is the tier where the bundled features (HIPAA, SOC 2 Type II, SSO, self-hosted deployment, dedicated account manager, SLA) start carrying weight that the credit price alone does not. The structure of the ladder is a normal SaaS shape; the interesting part is the unit the rungs are denominated in.
The math the tiers contain
Back out the implicit per-action cost from the published ratios.
Skyvern does not publish a per-action rate. The Day 5 launch post does publish an action approximation per tier, and a credit count per tier, and the two together back out a useful range. The approximation is rough on purpose, because credits represent a unit of browser execution that varies with runtime, page complexity, retries, and anti-bot measures. With that caveat, here is the math.
- Hobby tier: roughly 25 credits per action, on average (30,000 credits divided by roughly 1,200 actions).
- Implied dollar cost: roughly $0.024 per action ($29 monthly fee divided by roughly 1,200 actions, on the Hobby tier).
- Old per-step price: $0.05 per step (the flat rate the new credits model replaced on 30 January 2026).
The headline reading is that the Hobby tier prices an action at roughly half the old per-step rate, if your usage actually fills the bundled allowance. That is the trade. You commit to a monthly cap, and in exchange the per-action price drops by about half, plus you get the bundled concurrency and anti-bot allowance. The Pro tier sits in the same neighborhood: 150,000 credits divided by roughly 6,200 actions is about 24 credits per action, and $149 divided by 6,200 actions is about $0.024 per action.
The honest caveat is that any per-action number is a tier average. A click on a static page that hits no anti-bot fabric and needs no retries will burn well below 25 credits. A multi-step authenticated flow on a hardened portal that triggers a CAPTCHA, a proxy switch, and a vision retry will burn well above. The averages are useful for sizing a tier; they are not useful as a quote for any single workflow. That is not a flaw in the pricing, it is the design intent: bundle the variance into the tier so the buyer does not have to model every retry path.
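The back-of-envelope math above can be written down directly. This is a minimal sketch using only the tier numbers quoted in this article; the action counts are Skyvern's stated approximations, not contractual limits:

```python
# Back out implied per-action costs from the published tier ratios.
# Figures are from the Day 5 launch post (30 Jan 2026) as quoted above;
# action counts are approximations, so the outputs are tier averages.
TIERS = {
    "Hobby": {"price_usd": 29, "credits": 30_000, "approx_actions": 1_200},
    "Pro":   {"price_usd": 149, "credits": 150_000, "approx_actions": 6_200},
}

for name, t in TIERS.items():
    credits_per_action = t["credits"] / t["approx_actions"]
    usd_per_action = t["price_usd"] / t["approx_actions"]
    print(f"{name}: ~{credits_per_action:.0f} credits/action, "
          f"~${usd_per_action:.3f}/action")
```

Running it reproduces the numbers in the text: roughly 25 credits and $0.024 per action on Hobby, roughly 24 credits and $0.024 per action on Pro.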
> "Credits represent a unit of browser execution. Different workflows consume different amounts of credits depending on runtime, page complexity, retries, and anti-bot measures (CAPTCHA, proxies, geo-targeting)."
>
> — Skyvern, Day 5 launch post (30 January 2026)
What the unit measures
A credit is sized for one specific surface: a managed Chromium tab.
Read the Skyvern README at github.com/Skyvern-AI/skyvern (AGPL-3.0) and the picture clarifies. The runtime is a Playwright-compatible SDK that drives a managed Chromium instance. The selection layer is a vision LLM that reads screenshots of the rendered viewport, plus DOM context where useful. Anti-bot work (CAPTCHA, proxies, geo-targeting) is proprietary and lives in the cloud product, on top of the open-source core. Every line of that stack is sized for one specific surface: a browser tab the platform fully controls.
That is what a credit measures. Not a click. Not a step. Not a second of wall-clock time. A credit is the unit of compute spent inside Skyvern's managed Chromium fleet on your behalf, including the vision call, the proxy hop, the CAPTCHA solve, and any retries. Two workflows that look identical in your authoring tool can burn very different credit amounts because one of them happened to land on a portal that ships hard anti-bot, and the other did not. The bundling is the product: you pay for one number and the runtime decides how many internal calls it spends to make the workflow finish.
For workflows that live entirely inside a Chromium tab, that unit is well chosen. Vendor portal logins, payer claim status checks, lead enrichment from public web sources, document downloads from a hardened extranet, the long tail of B2B SaaS form fills. The bundled CAPTCHA fabric subsidizes a real cost. The variable retry budget absorbs a real source of variance. The tiered concurrency matches the way a small ops team actually scales out browser work. None of those properties are accidental, and none of them are wrong.
The boundary the unit cannot cross
Where a credit stops measuring anything, and a different unit has to take over.
The moment the workflow leaves the browser tab, the credit unit stops describing the work. A SAP GUI window is not a tab. An Oracle Forms session is not a tab. A Jack Henry green-screen terminal is not a tab. An Epic Hyperspace patient chart is not a tab, even when a Citrix shell wraps it. An Excel sheet that the user edits in place is not a tab. None of those systems render through Chromium, none of them expose a DOM the way a modern web app does, and none of them have a screenshot pipeline the managed-Chromium runtime can read with the same vision model. There is no surface for the credit to measure.
A different runtime is needed, and the natural pricing unit for that runtime is wall-clock time. The agent runs on the user's own Windows session, drives the operating system through the UI Automation accessibility tree (the same interface screen readers use to describe a Windows application to a blind user), and burns minutes the user can see in their own Task Manager. There is no managed VM to amortize, no vision LLM round-trip per click, and no proxy fabric to pay for. The cost is the wall-clock time the agent is active on the desktop, plus the cost of any optional cloud calls the workflow chose to make. The natural unit is one minute of desktop runtime.
That is the pricing we use here at Mediar: $0.75 per minute of runtime, drawn against a $10,000 turn-key program prepay that converts to credits with a small bonus. The unit is wall-clock time on Windows because the work itself is wall-clock time on Windows. The same architectural rule that makes Skyvern price in cloud-credit units makes a desktop RPA price in minutes-on-OS units. It is not that one model is more honest; it is that each unit is sized for the surface its agent actually reads.
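The per-minute arithmetic is simple enough to show directly. A minimal sketch using the two numbers stated above; the prepay-to-credit conversion bonus is not quantified in this article, so it is ignored here:

```python
# Per-minute desktop pricing, from the figures quoted in this article.
# The small prepay conversion bonus is unspecified, so it is omitted.
RATE_PER_MINUTE = 0.75   # USD per minute of desktop runtime
PREPAY = 10_000          # USD, turn-key program prepay

minutes_covered = PREPAY / RATE_PER_MINUTE
print(f"${PREPAY:,} at ${RATE_PER_MINUTE}/min covers ~{minutes_covered:,.0f} minutes")

# A 12-minute SAP GUI run costs the same whether it needed zero retries
# or ten: the meter only reads wall-clock time on the desktop.
run_minutes = 12
print(f"12-minute run: ${run_minutes * RATE_PER_MINUTE:.2f}")
```

The prepay covers roughly 13,333 minutes before any bonus; a 12-minute run costs a predictable $9.00.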
The cleanest way to see the difference is to look at the replay code. In apps/desktop/src-tauri/src/focus_state.rs of the Mediar desktop agent, the function restore_focus_state on lines 161 to 196 walks four match strategies in order: accessibility or automation id first, parent window plus element bounds second, visible text content third, and a window-focus fallback fourth. None of those four strategies reads a screenshot or calls a vision model; all four read the live Windows UI Automation tree. There is no per-call vision cost to amortize into a credit, which is why the pricing cannot be denominated in one. The replay walks four local strategies; the meter ticks in minutes.
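The shape of that fallback chain is easy to sketch. The following is a hypothetical illustration, not the Rust code from focus_state.rs: the FakeTree class, its lookup methods, and the element dicts are all invented stand-ins for the live Windows UI Automation tree, but the ordering logic mirrors the four strategies described above.

```python
# Hypothetical sketch of an ordered fallback chain like the one the
# article describes. All names here are invented for illustration;
# the real agent reads the live Windows UI Automation tree in Rust.

class FakeTree:
    """Stand-in for a live accessibility tree (illustrative only)."""
    def __init__(self, elements):
        self.elements = elements  # dicts describing live UI elements

    def by_automation_id(self, auto_id):
        if auto_id is None:
            return None
        return next((e for e in self.elements
                     if e.get("automation_id") == auto_id), None)

    def by_bounds(self, window, bounds):
        return next((e for e in self.elements
                     if e.get("window") == window
                     and e.get("bounds") == bounds), None)

    def by_text(self, text):
        if text is None:
            return None
        return next((e for e in self.elements
                     if e.get("text") == text), None)

def restore_focus(saved, tree):
    """Try four strategies in order; return the first live match.
    No screenshots, no vision calls: local tree reads only."""
    strategies = [
        lambda: tree.by_automation_id(saved.get("automation_id")),  # 1. automation id
        lambda: tree.by_bounds(saved.get("window"), saved.get("bounds")),  # 2. window + bounds
        lambda: tree.by_text(saved.get("text")),  # 3. visible text
        lambda: {"window": saved.get("window")},  # 4. window-focus fallback
    ]
    for strategy in strategies:
        match = strategy()
        if match is not None:
            return match
```

The point of the ordering is resilience without vision: if an automation id changed since recording, the bounds match catches it; if the layout shifted, the text match catches it; and the window-focus fallback always succeeds, so the chain never needs a screenshot to terminate.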
Same job, different unit, different surface
What changes when you price browser-tab work versus desktop work
| Feature | Skyvern | Mediar |
|---|---|---|
| Pricing unit | Credits per month, bundled with concurrency and anti-bot | $0.75 per minute of runtime, billed against a $10,000 program prepay |
| What the unit measures | Browser execution: runtime, page complexity, retries, anti-bot | Wall-clock time the agent spends driving Windows controls |
| Execution surface | Managed Chromium browser tab, vision LLM on screenshots | Windows UI Automation accessibility tree, locally on the user's machine |
| Apps in scope | Any website, vendor portal, or web SaaS the LLM can read | SAP GUI, Oracle Forms, Jack Henry, Fiserv, FIS, Epic Hyperspace, Excel, plus browsers via the same OS surface |
| Concurrency model | Tier-bundled concurrency on Skyvern's managed VM fleet | One agent per Windows session, scaled by adding sessions, not by buying a higher tier |
| License of the open core | AGPL-3.0 (anti-bot bits stay closed in the cloud product) | MIT (Terminator SDK at github.com/mediar-ai/terminator) |
| Where the price breaks down | Workflow leaves the browser tab and lands in a desktop window | Workflow leaves Windows entirely and lands in a closed mobile-only app |
The two units are not in conflict. They are sized for two different surfaces. Pick the unit that matches where your workflow lives, not the one with a smaller headline number.
A short triage instead of a recommendation
A useful triage on Skyvern's pricing asks four questions in order. First: what fraction of the workflow runs inside a Chromium tab? If the answer is high, the credit unit prices the right thing and you should size a tier off the action averages above. If the answer is low, no amount of credit math will help, and the unit you actually need is wall-clock time on the surface your work lives on.
Second: is your usage steady enough to fit a tier? Skyvern's credits are bundled monthly. A team that runs a predictable baseline of vendor portal logins fits Hobby or Pro cleanly. A team with spiky end-of-month batches that triple their usage for two days will hit the tier ceiling and either burn into overages or get pushed onto Enterprise pricing. Per-minute desktop pricing handles spikes more naturally because there is no monthly bucket to overflow.
Third: how much of your workflow's cost is anti-bot work? On hardened portals, the proxy and CAPTCHA fabric inside Skyvern's credit price is a real subsidy. On well-behaved internal systems and desktop apps, that subsidy is dead weight you do not need. Match the bundle to the workload.
Fourth: which compliance frame has to swallow the deployment? Self-hosted Skyvern (AGPL-3.0) covers the runtime but not the managed anti-bot fabric. The Enterprise tier is the unit that carries HIPAA, SOC 2 Type II, SSO, and the SLA. If the buyer is a healthcare or financial-services organization, the published per-action price is no longer the load-bearing number; the Enterprise contract is.
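The second triage question, tier fit, lends itself to a one-function sketch. This uses only the approximate action allowances quoted in this article and assumes the tier-average credit burn, which the article itself warns is rough:

```python
# Back-of-envelope tier sizing from the approximate action allowances
# quoted above. Assumes tier-average credit burn per action, which the
# article cautions is only a sizing estimate, not a billing guarantee.
TIERS = [
    ("Free", 0, 170),
    ("Hobby", 29, 1_200),
    ("Pro", 149, 6_200),
]

def smallest_fitting_tier(actions_per_month):
    for name, price_usd, approx_actions in TIERS:
        if actions_per_month <= approx_actions:
            return name, price_usd
    return "Enterprise", None  # custom pricing, unlimited credits

print(smallest_fitting_tier(900))    # steady baseline fits Hobby
print(smallest_fitting_tier(8_000))  # spiky batch pushes past Pro
```

Note what the sketch cannot capture: a workload of 900 actions that triples for two days at month-end still overflows the Hobby bucket, which is the spiky-usage failure mode described above.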
Bring a workflow that has to leave the browser tab.
If your team has tried a credit-priced browser agent and found that the actual work lives in SAP GUI, Oracle Forms, a Jack Henry session, or an Epic Hyperspace window, that is the surface a different runtime is sized for. Twenty minutes is enough to record one live and replay it against the Windows UI Automation tree.
Frequently asked questions
What does Skyvern cost in 2026?
Free at $0/month with 1,000 credits, Hobby at $29/month with 30,000 credits, Pro at $149/month with 150,000 credits, and Enterprise at custom pricing with unlimited credits and self-hosted deployment. The structure is published on skyvern.com/pricing. The Free tier includes CAPTCHA solving and basic support. Hobby adds priority support, faster execution, and webhook integrations. Pro adds team workspaces, advanced workflows, and 2FA credential management. Enterprise adds HIPAA, SOC 2 Type II, SSO, a dedicated account manager, and SLA guarantees.
How much does one Skyvern action actually cost?
Skyvern does not publish a per-action rate, but the public tier ratios let you back one out. The Day 5 launch post on 30 January 2026 says Hobby gives roughly 1,200 actions for 30,000 credits, which is about 25 credits per action. At Hobby that is roughly $0.024 per action. At Pro (150,000 credits, roughly 6,200 actions for $149), the implied cost lands in the same neighborhood, around $0.024 per action. The official caveat is that a single run consumes credits based on complexity and duration, so any per-action number is a tier-average rather than a guaranteed unit price.
What changed when Skyvern moved off per-step pricing?
Skyvern's previous public model was a flat $0.05 per step. The Day 5 launch post on 30 January 2026 announced the move to monthly credits, with the stated reason that per-step billing forced users to think in the wrong units and discouraged workflow improvements that required adding steps. Credits were positioned as a unit of browser execution that absorbs runtime, page complexity, retries, and anti-bot work into one number, instead of charging separately per click. From the buyer's side, the new model trades a transparent per-step rate for a tier-bundled rate that is harder to predict per workflow but cheaper if your usage is steady.
What does one credit cover?
The Day 5 launch post says credits represent a unit of browser execution, and that different workflows consume different amounts depending on runtime, page complexity, retries, and anti-bot measures (CAPTCHA, proxies, geo-targeting). One credit is not a step, not a page, and not a second. It is a synthetic unit that bundles cloud compute, the vision LLM call, the proxy or anti-bot allowance, and any retries the runtime needed. That is why two workflows with the same number of clicks can burn different credit amounts.
Does Skyvern run on desktop apps like SAP, Oracle Forms, or Epic Hyperspace?
No. The Skyvern repository at github.com/Skyvern-AI/skyvern, licensed AGPL-3.0, is browser-only. The runtime is built on a Playwright-compatible SDK and a vision LLM that reads the rendered Chromium viewport. There is no Windows desktop runtime, no Citrix runtime, and no mainframe terminal connector. If your workflow has to drive a SAP GUI window, an Oracle Forms session, a Jack Henry green-screen, or an Epic Hyperspace patient chart, the credit unit cannot price it because the surface the credit measures is not where your work happens.
Can I self-host Skyvern to avoid the credit pricing?
The open-source repo is AGPL-3.0 and you can run it on your own infrastructure. The cloud product's anti-bot measures stay proprietary, so the self-hosted version covers the core runtime but not the production-grade CAPTCHA and proxy tooling. Self-hosting also means you carry the cost of the vision LLM yourself and you size your own concurrency, instead of buying it bundled in a tier. Self-hosting is a credible answer for steady, predictable workloads where the bundled cloud features are not load-bearing. It is not the answer if you need the managed CAPTCHA and proxy fabric that the credit price subsidizes.
What are the credit limits per tier in actions, not credits?
The published action equivalents from the launch post are roughly 170 actions per month on Free, roughly 1,200 on Hobby, and roughly 6,200 on Pro. These are not contractual limits, they are working approximations the Skyvern team posted as guidance. A single action that triggers a CAPTCHA, a retry, or a heavy anti-bot path will burn more credits than the average. A simple click on a static page burns less. The action numbers are useful as a starting estimate for sizing a tier, not as a billing guarantee.
How does Skyvern's credit pricing compare to traditional RPA?
Skyvern's own marketing draws the comparison: traditional RPA charges roughly $10,000 per bot per year with 20 to 40 percent monthly maintenance, while Skyvern uses usage-based credits with low maintenance overhead. The framing is fair for browser-tab workflows, where the model is genuinely cheaper and the maintenance burden is lower because vision-based selection is more resilient to layout drift. The framing breaks down for desktop workflows that traditional RPA does cover and Skyvern does not, because there is no surface for the credit to measure on a SAP GUI window.
Why does desktop RPA price in minutes instead of credits?
A credit is a unit of managed-cloud browser execution: it bundles the VM hour, the vision call, the proxy, and the retry budget into one synthetic number. A desktop RPA agent runs on the user's own Windows session, drives the operating system through the UI Automation accessibility tree, and bills against wall-clock minutes the agent is active. There is no managed VM to amortize, no vision LLM round-trip per click, and no proxy fabric. The natural unit is time-on-desktop. Mediar prices that unit explicitly at $0.75 per minute, drawn against a $10,000 program prepay that converts to credits.
Where does Skyvern's pricing make sense, and where does it not?
It makes sense if your workflow lives entirely inside a Chromium tab, your usage is steady enough to fit a tier, and you value the bundled CAPTCHA and proxy work that the credit price subsidizes. Vendor portal logins, lead enrichment, payer claim status checks, document downloads, web SaaS form fills. It does not make sense if your workflow crosses out of the tab into a desktop window, or if your usage is spiky enough that the tier ceiling forces you onto Enterprise unnecessarily. Match the unit you are billed in (credits, minutes, per-bot license) to the surface your workflow actually lives on, and the price will tell you the truth about whether the tool fits.
More from the Mediar topic series

- CloudCruise, traced through BADGER: a guide to the architecture and where it stops. Five execution strategies on top of a directed-graph DSL, and the input-surface boundary that decides whether a browser-RPA tool can touch your workflow at all.
- RPA agent UI input layer: accessibility tree versus pixels. The choice of input surface is the most consequential architectural decision an RPA agent makes. Walks the tree-versus-pixel split and what each gives up.
- Mediar, the company: the founders, the funding, and the open-source SDK. Background on the Y Combinator backed company at mediar.ai, the open-source Terminator SDK, and how the open-source pieces fit into the commercial product.