Built for Deno Deploy agencies — 14-day free trial

A cert expiry on your customer's custom domain doesn't show up as a downtime event — your .deno.dev fallback URL keeps serving green while api.customer.com hard-fails the TLS handshake for paying customers.
The Deno Status Page shows green. `deployctl deploy` reports success on every PR merge. Customers hit `NET::ERR_CERT_DATE_INVALID`.

Deno Deploy agencies running globally distributed edge-compute apps hit three silent failure modes. Custom-domain cert renewal — Let's Encrypt is the only CA the platform provisions — breaks when the customer apex gets CAA tightening (typical during SOC 2 hardening projects that pin issuance to DigiCert or Entrust), while the project's <project-name>.deno.dev fallback URL keeps serving green from Deno's own apex. `deployctl deploy --project=<id>` ships code on every PR merge while the project's custom-domain attachment is stuck in DNS-validation-failed state: 47 green deploys, zero traffic on api.customer.com, because every request fails the cert handshake. And Deno Cron handlers throw `TypeError: fetch failed` against expired-cert partner endpoints, Deno KV-driven state machines stall, and no PagerDuty/Slack alert fires. Merlonix monitors every Deno-attached custom domain so the cert expiry surfaces 30 days before the failure window opens.

No credit card for the trial. Cancel any time.

Check cadence (Agency)
5 min
SSL pre-expiry alert
30 days
Independent DNS resolvers
3
Vendors watched
11

Where Deno Deploy agencies get caught out

Three failure modes specific to Deno Deploy: Let's Encrypt-only cert provisioning silently breaks on CAA-tightened customer apexes while the .deno.dev fallback keeps serving green; `deployctl deploy` ships code on every merge while the custom-domain attachment sits in DNS-validation-failed state; and Deno Cron handlers throw `TypeError: fetch failed` against expired-cert partner endpoints with no PagerDuty wiring.

Deno Deploy provisions custom-domain certs through Let's Encrypt only, and that renewal silently fails on customer apexes with CAA tightening (common during SOC 2 hardening) while the project's <project-name>.deno.dev fallback URL keeps serving green from Deno's own apex. `deployctl deploy --project=<id>` reports success for every PR merge even when the project's custom-domain attachment has been in DNS-validation-failed state for two weeks — the .deno.dev URL serves the latest code; api.customer.com serves nothing because the cert never issued. And Deno Cron handlers calling out to partner HTTPS endpoints throw `TypeError: fetch failed` when the partner's cert expires; the failures accumulate in Deno Deploy logs but don't wire to PagerDuty/Slack by default.

Deno Deploy provisions custom-domain certs via Let's Encrypt — it's the only CA the platform supports for custom-domain attachments configured through the Deno Deploy dashboard or `deployctl`. When the underlying customer domain has CAA tightening that excludes LE (typical during SOC 2 Type II hardening projects where the customer's CISO pins the CAA record to commercial CAs only — DigiCert, Entrust, Sectigo), the next cert renewal at day 60 of the 90-day cycle silently fails. The Deno-managed <project-name>.deno.dev fallback URL keeps serving green (it's under Deno's own apex with Deno's own CAA, completely independent of the customer's CAA record). Internal dashboards and uptime monitors pointed at the .deno.dev URL show all green while api.customer.com hard-fails the TLS handshake for every paying customer.
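
The check Let's Encrypt runs before issuance can be reproduced locally. Below is a minimal sketch, assuming Deno as the runtime and a placeholder hostname, that walks the CAA inheritance chain the way an issuer would and exits non-zero if Let's Encrypt is no longer authorized; run it with `deno run --allow-net caa_check.ts` against each customer apex.

```ts
// caa_check.ts: minimal sketch of the CAA pre-flight Let's Encrypt performs.
// Hostname is a placeholder; requires `deno run --allow-net`.

async function findEffectiveCaa(hostname: string) {
  // RFC 8659: the effective CAA set is the first non-empty record set found
  // while climbing from the hostname toward the apex.
  let labels = hostname.split(".");
  while (labels.length >= 2) {
    try {
      const records = await Deno.resolveDns(labels.join("."), "CAA");
      if (records.length > 0) return records;
    } catch {
      // No CAA records at this label (or NXDOMAIN): keep climbing.
    }
    labels = labels.slice(1);
  }
  return []; // no CAA anywhere in the chain: any CA may issue
}

const records = await findEffectiveCaa("api.clientco.com");
const issuers = records.filter((r) => r.tag === "issue").map((r) => r.value);

if (issuers.length > 0 && !issuers.some((v) => v.includes("letsencrypt.org"))) {
  // This is the state the SOC 2 hardening scenario produces: issuance is
  // pinned to commercial CAs, so the next LE renewal will be rejected.
  console.error(`CAA pins issuance to [${issuers.join(", ")}]; LE renewal will fail`);
  Deno.exit(1);
}
console.log("Let's Encrypt is still authorized for this hostname");
```

Catching the CAA change the day the CISO makes it, rather than at the day-60 renewal, is the whole point.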

A Deno Deploy agency operates a customer-facing API for a B2B SaaS client. The project ships as workflow-api and is reachable at workflow-api.deno.dev (the Deno-managed fallback) and at api.clientco.com (the customer-attached custom domain). The client's CISO mandates CAA tightening as part of an in-progress SOC 2 Type II audit. The CAA at clientco.com is pinned to DigiCert-only. The Deno Deploy cert renewal at day 60 of the 90-day cycle fails silently because CAA blocks Let's Encrypt. The project's .deno.dev fallback URL keeps serving from Deno's own apex with the Deno-managed cert. Internal Datadog dashboards pointed at workflow-api.deno.dev stay green. Customer traffic on api.clientco.com begins hitting cert errors at day 90. Discovery is via a customer support ticket from a Fortune-500 buyer whose security-policy enforcement breaks the trial signup flow — the buyer's laptop has corporate certificate-pinning that hard-blocks expired certs.

A Deno Deploy agency operates the customer-facing API for ClientCo, a B2B SaaS doing $14M ARR. The project on Deno Deploy is workflow-api; it's reachable at workflow-api.deno.dev (Deno-managed fallback) and at api.clientco.com (customer-attached custom domain). The custom-domain attachment was configured 11 months ago through the Deno Deploy dashboard; cert provisioning succeeded automatically via Let's Encrypt; the cert has renewed cleanly on 4 prior 90-day cycles.

Eight weeks ago, ClientCo began a SOC 2 Type II audit in preparation for a Fortune-500 buyer evaluation. The CISO commissioned a third-party security firm to perform a hardening review. The firm's recommendations included CAA tightening at clientco.com — pin to DigiCert-only as a defense-in-depth measure against rogue cert issuance from less rigorous CAs. The CISO implemented the CAA tightening 4 weeks ago: `clientco.com. CAA 0 issue "digicert.com"`, plus a `CAA 0 iodef` email contact. Let's Encrypt was removed. The agency engineer who provisioned the Deno Deploy custom domain 11 months ago is on a different project and wasn't looped in. The CISO didn't map the CAA change to a CA inventory for downstream services.

The Deno Deploy cert renewal at day 60 of the 90-day cycle came due 30 days ago. Deno Deploy's LE integration submitted the renewal request; LE's pre-flight CAA check queried the CAA record at clientco.com and got "digicert.com" only — LE is not authorized to issue. The renewal failed with a CAA-rejection error. Deno Deploy logged the failure to the project's logs but didn't generate a dashboard-level alert that the agency's monitoring picks up. Deno Deploy retried 24 hours later — same failure. The retries continued daily for the next 30 days, each failing identically.

Throughout this window, the project's .deno.dev fallback URL (workflow-api.deno.dev) kept serving with the Deno-managed cert from Deno's own apex — deno.dev's CAA permits LE, the cert renewed cleanly, the .deno.dev URL is green. The agency's Datadog synthetic monitor was configured against workflow-api.deno.dev (it was the easiest URL to point at during initial setup); the monitor reported 100% uptime. ClientCo's internal dashboard for API health was also configured against the .deno.dev URL because ClientCo's SRE team had inherited the dashboard from the agency.

The existing custom-domain cert on api.clientco.com expired yesterday at the 90-day mark. From that point forward, every request to api.clientco.com hits `NET::ERR_CERT_DATE_INVALID` in Chrome and a hard block in corporate-managed Safari with strict cert-validation policies. Mobile browsers behave inconsistently — most show a click-through warning. The Fortune-500 buyer that ClientCo is courting (the whole point of the SOC 2 audit) attempts a trial signup this morning. The buyer's laptop has Cisco AnyConnect with corporate certificate-pinning that hard-blocks any cert not chaining to the approved root list. The buyer's endpoint security flags the cert error and routes it to the buyer's CISO. The buyer's CISO logs a support ticket with ClientCo: "Your API endpoint is presenting an expired certificate. Per our security policy we cannot proceed with the trial until this is remediated. This will be referenced in the security assessment of your platform." ClientCo's CSM escalates to the agency.

The agency engineer triages: pulls up the Deno Deploy dashboard, sees the cert-renewal failure logs going back 30 days, identifies the LE-vs-CAA mismatch. Resolution requires the client to either (a) re-add LE to the CAA record, which the CISO will not approve mid-audit, (b) migrate the Deno Deploy custom domain to a DigiCert cert provisioned manually outside the LE integration — Deno Deploy doesn't natively support this, requiring a custom proxy architecture — or (c) move the API to a different runtime that supports DigiCert directly. Each option has a 2-4 week resolution window. During that window, the agency's monthly engagement contract triggers the per-incident penalty clause ($25k); the SOC 2 audit timeline slips by a month; the Fortune-500 buyer's security-assessment finding becomes part of the deal's closing-conditions list.

`deployctl deploy --project=<id>` is the Deno Deploy CLI command that ships code from CI. The deploy succeeds based on code-validation criteria — the bundle parses, types check, the worker imports resolve — but doesn't cross-check that the project's custom-domain attachments are in a healthy cert-issued state. A CI pipeline that runs `deployctl deploy` after every PR merge passes for weeks while the custom domain's DNS validation is stuck in pending or failed state. The .deno.dev URL serves the latest code; the custom domain serves nothing because the cert never issued. Internal dashboards pointed at deno.dev look healthy. Production traffic on the customer-facing apex hard-fails the handshake. Discovery is delayed until business metrics surface the impact.
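
Since `deployctl deploy` validates the bundle and nothing else, a cheap guard is a post-deploy smoke test against the customer-facing domain itself. A minimal sketch, with a placeholder hostname and health path, that a CI job could run right after the deploy step:

```ts
// post_deploy_gate.ts: run right after `deployctl deploy` in CI.
// Probes the customer-facing custom domain, which the deploy step never
// cross-checks. Hostname and health path are placeholders.

const url = "https://api.customer.com/health";

try {
  const res = await fetch(url, { signal: AbortSignal.timeout(10_000) });
  await res.body?.cancel(); // only the handshake and status code matter here
  if (!res.ok) {
    console.error(`${url} returned HTTP ${res.status}`);
    Deno.exit(1);
  }
  console.log(`${url} OK (HTTP ${res.status})`);
} catch (err) {
  // An expired or never-issued cert surfaces here as `TypeError: fetch
  // failed`, turning a silent attachment failure into a red CI run.
  console.error(`${url} failed at the TLS/network layer: ${err}`);
  Deno.exit(1);
}
```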

A Deno Deploy agency runs a green-CI badge on a customer's GitHub repo. Deploys are passing daily via `deployctl deploy --project=workflow-api`. A DNS change at the customer's registrar last month moved the Deno Deploy validation TXT record off the apex (the previous DNS provider auto-populated it during the original custom-domain attachment; the new provider doesn't auto-populate records on migration). Deno Deploy's domain-validation polling switched the custom-domain attachment to failed state 14 days ago. The agency's deploy log shows 47 successful deploys in that window; `deployctl deploy` reports each one as green. The Deno-managed .deno.dev URL serves the latest code. Since the existing cert ran out mid-window, api.customer.com hasn't received any production traffic from the customer's frontend because every request fails the cert handshake. Revenue impact only surfaces in the customer's weekly KPI review when transaction counts are flat.

A Deno Deploy agency operates the API tier for a B2B fintech client. The Deno Deploy project workflow-api was provisioned 18 months ago with api.customer.com as the custom domain. At provisioning, the agency engineer configured the customer's DNS at their then-current registrar (Namecheap): added the CNAME pointing api.customer.com to workflow-api.deno.dev, while Namecheap's integration auto-populated the Deno Deploy domain-validation TXT record on _acme-challenge.api.customer.com during the custom-domain attachment flow. The attachment succeeded; the cert provisioned via LE; the API has been serving production traffic for 18 months across multiple cert-renewal cycles. The renewal automation kept working because the validation TXT record persisted in Namecheap's DNS.

Six weeks ago, the customer's IT team migrated DNS from Namecheap to Cloudflare as part of a broader infrastructure consolidation: exported the Namecheap zone, imported it into Cloudflare, and updated the registrar nameservers. The exported zone file included the A/CNAME/MX/TXT records visible in Namecheap's dashboard. The Deno Deploy validation TXT record on _acme-challenge.api.customer.com was technically visible in the dashboard but had been auto-populated by Namecheap's integration rather than created as an explicit user record — the export missed it, or it was filtered out as a managed record. The Cloudflare zone post-import did not contain the validation TXT record. The DNS migration completed cleanly from the customer's perspective — the apex resolved, the A records served, the MX records routed mail.

The agency's CI pipeline kept running `deployctl deploy --project=workflow-api` on every PR merge. Code shipped successfully — `deployctl deploy` validated the bundle, uploaded it, Deno Deploy returned a successful deploy ID. Each deploy showed a green check in GitHub Actions, and the Deno-managed fallback workflow-api.deno.dev kept serving the latest deployed code. But on Deno Deploy's side, the domain-validation polling began returning failures because _acme-challenge.api.customer.com no longer had the validation TXT record. After 7 days of failed polling, Deno Deploy switched the api.customer.com attachment to "failed" state. The existing cert kept serving until its next renewal attempt, which failed for the same reason; the cert ran out a week after the attachment entered failed state. From that point, api.customer.com served TLS handshake failures.

The agency's deploy log over the 14 days since the attachment entered failed state shows 47 successful deploys — each one green in GitHub Actions, each one returning a successful deploy ID from Deno Deploy. The agency's monitoring (Better Stack synthetic monitor) was pointed at workflow-api.deno.dev — green, serving the latest code, returning 200s on the health-check endpoint. The customer's frontend was making API calls to api.customer.com from browser sessions; every call hard-failed at the TLS layer. The customer-facing dashboard showed "Loading..." indefinitely. Transactions weren't recording. The customer's analytics (which tracks transaction completion server-side via the API) showed transactions at zero. The customer's product team didn't notice for 6 days — the metric was flat, but the product team had assumed it was a seasonal effect.

The customer's CFO ran the weekly KPI review on day 7: transaction volume flat, but daily active users in the frontend analytics tool normal. The CFO escalated. The customer's engineering team investigated, opened browser DevTools, and saw `NET::ERR_CERT_DATE_INVALID` on api.customer.com. The agency engineer was paged. Triage took 90 minutes: pull up the Deno Deploy dashboard, see the api.customer.com attachment in "failed" state, scroll the validation log to "validation TXT record not found" failures going back 21 days. Resolution required re-adding the _acme-challenge.api.customer.com TXT record at Cloudflare with the value from the Deno Deploy dashboard, waiting for Deno Deploy's next polling cycle (hourly) to detect the record, waiting for cert issuance (5-15 minutes), and waiting for cert distribution to Deno's edge. Total time-to-recovery: 2.5 hours. Revenue impact during the week the cert was expired: $340k of transactions didn't complete. The customer's engagement contract with the agency has a 99.9% uptime SLA on the API; the breach triggers SLA credits plus the per-incident penalty. The customer's next quarterly business review with the agency includes a formal escalation to the customer's CTO.

Deno Cron and Deno KV are core Deno Deploy features. Cron-triggered handlers commonly POST to external HTTPS endpoints — webhooks, partner APIs, payment-processor callbacks, billing-reconciliation systems. Cert expiry on the external endpoint causes the fetch() call to throw `TypeError: fetch failed`; Deno Cron logs the failure but doesn't auto-retry beyond the configured attempts (default is 3); KV-driven state machines stall when the Cron handler that's supposed to advance state errors out. The Deno Deploy logs show the errors, but no alert is wired to PagerDuty/Slack by default. Failures accumulate silently in the project's logs while downstream business processes break in ways that only surface in month-end reconciliations.
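
The missing piece in that failure mode is the alert wiring, not the Cron handler. A minimal sketch of a reconciliation handler that pushes failures to a Slack incoming webhook, assuming a hypothetical KV key layout of `["transfers", "pending", id]` and `PARTNER_URL` / `SLACK_WEBHOOK` environment variables:

```ts
// reconcile_cron.ts: sketch of the reconciliation handler plus the one thing
// the scenario below lacks, failures pushed to Slack instead of accumulating
// silently in project logs. Key layout and env vars are assumptions.

const kv = await Deno.openKv();

async function alertSlack(text: string) {
  const webhook = Deno.env.get("SLACK_WEBHOOK"); // assumed incoming-webhook URL
  if (!webhook) return;
  await fetch(webhook, {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({ text }),
  });
}

Deno.cron("reconcile transfers", "0 3 * * *", async () => {
  let failures = 0;
  for await (const entry of kv.list({ prefix: ["transfers", "pending"] })) {
    const id = entry.key[2];
    try {
      const res = await fetch(Deno.env.get("PARTNER_URL")!, {
        method: "POST",
        headers: { "content-type": "application/json" },
        body: JSON.stringify(entry.value),
      });
      await res.body?.cancel();
      if (!res.ok) throw new Error(`partner returned HTTP ${res.status}`);
      await kv.set(["transfers", "reconciled", id], entry.value);
      await kv.delete(entry.key);
    } catch (err) {
      // An expired partner cert lands here as `TypeError: fetch failed`.
      console.error(`reconcile ${String(id)}: ${err}`);
      await kv.set(["transfers", "needs-review", id], entry.value);
      failures++;
    }
  }
  if (failures > 0) {
    // Without this call, the errors live only in Deno Deploy's log viewer.
    await alertSlack(`reconcile-transfers: ${failures} failures, check the partner cert`);
  }
});
```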

A Deno Deploy agency operates a billing-reconciliation Cron job for a customer that runs daily at 03:00 UTC. The Cron handler is registered via `Deno.cron("reconcile transfers", "0 3 * * *", reconcileHandler)` and POSTs to a partner's Stripe-Connect oauth-callback endpoint to reconcile transfer states. The partner's endpoint cert expires due to a Let's Encrypt failure unrelated to the agency (the partner's ops team had a CAA-tightening incident of their own). The agency's Cron job throws `TypeError: fetch failed` for 7 consecutive days. The Deno Deploy logs show the errors but no alert is wired to PagerDuty/Slack. The partner's month-end reconciliation report shows $340k of unreconciled transfers. The partner's finance team escalates to the partner's product team; the partner's product team escalates to the agency's customer; the agency's engagement triggers a per-day penalty clause.

A Deno Deploy agency operates a billing-reconciliation system for a payments-processing customer. The customer runs a B2B marketplace where buyers pay sellers via Stripe Connect; the customer is the platform of record. The customer's reconciliation flow runs daily at 03:00 UTC: read pending-transfer records from Deno KV, POST each one to the partner's Stripe-Connect oauth-callback endpoint to validate transfer state on the partner's side (the partner aggregates transfers across multiple platforms for compliance reporting), update the KV record with the partner-confirmed state, advance the KV-driven state machine. The Cron handler is registered via `Deno.cron("reconcile transfers", "0 3 * * *", reconcileHandler)`. The handler uses native `fetch()` to call the partner's endpoint at https://reconcile.partner-platform.com/oauth-callback. The handler has retry logic for fetch failures: 3 attempts with exponential backoff, then mark the record as needing-manual-review in KV.

On Monday morning at 03:00 UTC, the Cron handler runs. The partner's endpoint at reconcile.partner-platform.com is serving an expired cert — the partner's ops team had a CAA-tightening incident over the weekend; their LE renewal failed; their cert expired Sunday at 22:00 UTC. The agency's fetch() call throws `TypeError: fetch failed` immediately (Deno's strict TLS validation hard-rejects the expired cert; no insecure mode is enabled). The handler retries 3 times — each fails identically. The handler marks 47 transfer records as needing-manual-review in KV. Deno Cron logs the run as completed with 47 failures. No PagerDuty/Slack alert fires because the agency hadn't wired Deno Deploy log streaming to an alerting platform (Deno Deploy's native alerting is dashboard-only; integration with PagerDuty/Slack requires a log-export-to-third-party pipeline, which the agency hadn't set up at this client because the Cron job had been running cleanly for 14 months).

Tuesday morning at 03:00 UTC, the Cron handler runs again. Same partner endpoint, same expired cert (the partner hasn't fixed their cert yet), same `TypeError: fetch failed`. Another 52 transfer records marked as needing-manual-review. Wednesday, Thursday, Friday, Saturday, Sunday — same. Over 7 days, 340 transfer records accumulate in the needing-manual-review state. The KV-driven state machine for those transfers stalls — downstream Cron jobs that depend on "reconciled" state don't process them.

On Monday of the following week (day 7), the partner's month-end reconciliation report (the partner runs reconciliation on the 1st of each month) shows $340k of transfers that the agency's customer initiated but the partner has no record of acknowledging via the oauth-callback flow. The partner's finance team flags it: "These 340 transfers don't have customer-confirmation events. Per the partnership agreement, unconfirmed transfers require manual reconciliation, which we're billing at $X per record." The partner's finance team escalates to the partner's product team. The partner's product team escalates to the agency's customer's product team: "Your platform stopped acknowledging transfers via the oauth-callback. We're going to need to deprioritize our integration if this isn't resolved." The agency's customer's product team escalates to the agency.

The agency engineer triages: pulls up Deno Deploy logs, sees 7 days of `TypeError: fetch failed` errors in the reconcile-transfers Cron output, and diagnoses the partner cert issue. The agency notifies the partner; the partner's ops team had been silently struggling with their cert renewal for 8 days. The partner restores their cert. The agency engineer re-runs the reconciliation Cron manually for the backlog; 340 records advance through the KV state machine; the partner's system acknowledges them. Resolution complete — but the agency's engagement contract has a per-day penalty clause for reconciliation delays ($5k/day on individual delays exceeding 48 hours; $10k/day on aggregated delays exceeding 5 days). The 7-day delay across 340 records triggers $50k+ in penalties. The agency's customer's next-quarter renewal is in question; the customer's CTO drafts an escalation memo to the agency's account director.

How it works

SSL and DNS monitoring for Deno Deploy agencies across three surfaces: custom-domain cert provisioning (Let's Encrypt-only, breaks silently on CAA-tightened apexes while .deno.dev keeps serving green); `deployctl deploy` shipping 47 successful deploys while the custom-domain attachment is in DNS-validation-failed state; and Deno Cron handlers throwing `TypeError: fetch failed` against expired-cert partner endpoints with no PagerDuty wiring.

Merlonix monitors SSL expiry and DNS integrity across every Deno-attached custom domain — api.*, app.*, www.* on the customer apex, plus the project's <project-name>.deno.dev fallback as a separate asset. It catches the divergence where the customer apex serves an expired cert while the .deno.dev fallback keeps serving green, the registrar-side validation TXT-record drift that breaks Deno Deploy's custom-domain attachment while `deployctl deploy` keeps reporting success, and the Deno Cron handler `TypeError: fetch failed` errors that accumulate in Deno Deploy logs but never reach the on-call rotation. Each custom domain gets independent monitoring because the .deno.dev fallback cert state is independent of the customer apex cert state.

01

Add every Deno Deploy-attached custom domain — api.*, app.*, www.*, plus the <project-name>.deno.dev fallback as a separate asset — with DNS TXT ownership verification, so cert expiry on the customer-facing apex is caught 30 days before the failure window opens

Verify ownership with a DNS TXT record on the apex domain. All Deno Deploy-attached subdomains under that apex — api.* (API tier), app.* (edge-served app), www.* — are added without additional verification. The <project-name>.deno.dev fallback URL is added as a separate asset because its cert state is independent of the custom-domain cert state (Deno's own apex has its own CAA permitting LE; the customer's apex may have CAA tightening blocking LE). Monitoring both means the per-domain divergence — customer apex hard-failing while .deno.dev shows green — surfaces in the first check cycle, not when the Fortune-500 buyer's endpoint security flags the cert error. Under two minutes per project.

02

CAA inheritance monitoring across customer SOC 2 hardening projects, registrar migrations, and DNS-provider changes — surfacing the CAA tightening that breaks Deno Deploy's Let's Encrypt-only renewal and the validation TXT-record drift after DNS migrations that breaks `deployctl deploy` custom-domain attachments

Three independent DNS resolvers check every CAA record and the Deno Deploy validation TXT records on _acme-challenge.<subdomain> on every monitoring interval, walking the CAA inheritance chain up to the apex. When a customer CISO tightens CAA records during SOC 2 hardening (pinning to DigiCert / Entrust / Sectigo only and removing Let's Encrypt), the change is detected in the first check cycle — well before the next 90-day cert renewal hits the tightened CAA list and silently fails. The validation TXT record on _acme-challenge.api.customer.com is monitored alongside — registrar migrations from Namecheap to Cloudflare commonly drop auto-populated managed records during zone import, breaking the custom-domain attachment without breaking `deployctl deploy` reporting.
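
The same drift can be checked from any runner with DNS access. A minimal sketch, using the record name from the scenario above and a placeholder token standing in for the value shown in the Deno Deploy dashboard:

```ts
// txt_drift_check.ts: verify the Deno Deploy validation TXT record survived
// a DNS-provider migration. The expected token is a placeholder for the
// value shown in the Deno Deploy dashboard. Requires `--allow-net`.

const RECORD = "_acme-challenge.api.customer.com";
const EXPECTED = "deno-validation-token-from-dashboard"; // placeholder

try {
  const answers = await Deno.resolveDns(RECORD, "TXT");
  // TXT answers arrive as arrays of character-string chunks; join each one.
  const values = answers.map((chunks) => chunks.join(""));
  if (!values.includes(EXPECTED)) {
    console.error(`${RECORD} resolves but does not contain the expected token`);
    Deno.exit(1);
  }
  console.log(`${RECORD} OK`);
} catch {
  // NXDOMAIN: the exact drift pattern after a zone import drops the record.
  console.error(`${RECORD} has no TXT record; custom-domain validation will fail`);
  Deno.exit(1);
}
```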

03

SSL monitoring 30 days before expiry across every Deno-attached custom domain — independent per-subdomain checks because the .deno.dev fallback can serve green while the customer apex hard-fails, and `deployctl deploy` doesn't cross-check cert state

Full SSL chain validation on every Deno-attached custom domain. Independent per-subdomain checks catch cert expiry 30 days before the failure window opens — enough time to coordinate a CA migration with the customer's CISO if the apex CAA has been tightened mid-cycle, switch to a proxy-fronted DigiCert cert if Deno Deploy's native LE integration can't be used, and avoid SOC 2 audit timeline collisions. The 30-day lead time also covers the worst case where the customer-facing apex has been hard-failing for weeks while internal dashboards pointed at <project-name>.deno.dev kept reporting green and `deployctl deploy` kept shipping 47 successful deploys.
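
The divergence itself is trivially checkable once both surfaces are treated as separate assets. A minimal sketch, with placeholder hostnames, that probes the customer apex and the .deno.dev fallback independently and flags exactly the green-fallback/failing-apex state described above:

```ts
// divergence_probe.ts: hit the customer apex and the .deno.dev fallback
// independently and flag the case where they disagree. Hostnames are
// placeholders.

async function probe(url: string): Promise<string> {
  try {
    const res = await fetch(url, { signal: AbortSignal.timeout(10_000) });
    await res.body?.cancel();
    return `HTTP ${res.status}`;
  } catch (err) {
    return `FAILED: ${err}`; // an expired cert surfaces here as a TypeError
  }
}

const apex = await probe("https://api.customer.com/health");
const fallback = await probe("https://workflow-api.deno.dev/health");

console.log(`customer apex: ${apex}`);
console.log(`deno.dev fallback: ${fallback}`);

if (apex.startsWith("FAILED") && !fallback.startsWith("FAILED")) {
  // The exact blind spot: dashboards watching the fallback stay green while
  // every paying customer fails the handshake on the apex.
  console.error("DIVERGENCE: fallback green, customer apex hard-failing");
  Deno.exit(1);
}
```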

04

Vendor status for Deno Deploy (global), Deno KV / Cron, Cloudflare (most Deno Deploy customers front their apex with CF), Vercel (often complementary for static assets), AWS Route 53, Cloudflare Registrar, Let's Encrypt, and the Deno Status Page — to distinguish vendor-side incidents from per-tenant SSL configuration failures

Merlonix monitors the Deno Status Page alongside Deno KV / Cron health, Cloudflare (Workers + DNS, since most Deno Deploy customers front their apex with Cloudflare), Vercel (often a complementary deployment for static assets that sits next to a Deno Deploy API tier), AWS Route 53, Cloudflare Registrar, and Let's Encrypt — so when a Deno Deploy platform incident hits per-region cert distribution simultaneously across multiple client tenants, you see the vendor event clearly rather than spending hours investigating whether the root cause is the customer's CAA tightening, a validation TXT record dropped during a registrar migration, a CF-in-front WAF rule blocking the LE validator, or a genuine Deno platform issue.

What the numbers mean for Deno Deploy agencies

Monitoring built for Deno Deploy agencies where one client project means four surfaces: a customer-facing api.customer.com on a CAA-controlled apex; a <project-name>.deno.dev fallback under Deno's own apex that won't reflect customer-domain cert state; a `deployctl deploy` CI pipeline that doesn't cross-check custom-domain attachment health; and Deno Cron handlers calling out to partner HTTPS endpoints, where cert expiry on the partner side throws `TypeError: fetch failed` into the Cron run. Each surface has independent failure modes that the Deno Status Page won't surface.

Deno Deploy agencies running edge-compute apps with Let's-Encrypt-only cert provisioning, `deployctl deploy` CI pipelines that report success regardless of custom-domain attachment state, and Deno Cron handlers calling external partner APIs need monitoring that recognizes each surface has independent failure modes. The cert expiry on api.customer.com is silent: the .deno.dev fallback keeps serving green, internal dashboards pointed at deno.dev stay green, and `deployctl deploy` keeps reporting success. The validation TXT-record drift after a registrar migration is silent: the existing cert continues serving until the next renewal cycle fails. And the Deno Cron `TypeError: fetch failed` errors against expired-cert partner endpoints are silent: failures accumulate in Deno Deploy logs but don't wire to PagerDuty/Slack by default, downstream KV state machines stall, and the impact surfaces in the partner's month-end reconciliation.

< 10 min

Time from DNS change to alert — catches CAA tightening introduced by customer SOC 2 hardening projects that silently breaks Deno Deploy's Let's Encrypt-only cert renewal before the next 90-day cycle, plus validation TXT-record drift after Namecheap-to-Cloudflare registrar migrations that drop Deno Deploy's auto-populated _acme-challenge records during zone import

30 days

SSL expiry warning lead time — enough time to coordinate a CA migration with the customer's CISO if the apex CAA has been tightened mid-cycle (the renewal failure surfaces 30 days before any paying customer sees `NET::ERR_CERT_DATE_INVALID` on api.customer.com), switch to a proxy-fronted DigiCert cert if Deno Deploy's native LE integration can't be used, or restore the validation TXT record before the existing custom-domain cert expires

11 vendors

Upstream services monitored — including the Deno Status Page, Deno KV / Cron status, Cloudflare (Workers + DNS, since most Deno Deploy customers front their apex with CF), Vercel (often complementary for static assets), AWS Route 53, Cloudflare Registrar, and Let's Encrypt. Distinguishes a Deno platform incident from a per-tenant SSL configuration failure

200 assets

Maximum monitored domains on the Agency plan — covers a full Deno Deploy client portfolio: 50 projects, each with api.* + app.* + www.* on the customer apex plus the <project-name>.deno.dev fallback as a separate asset (because cert state diverges between the customer apex and Deno's own apex). Multi-region edge-compute deployments with per-region staging endpoints are absorbed without per-asset fees

Pricing

Flat monthly fee. Every Deno-attached custom domain, every <project-name>.deno.dev fallback, every Deno Cron handler's partner endpoint included.

No per-project charges. No per-region fees. Pick the tier that fits your Deno Deploy portfolio and monitor every custom domain plus the <project-name>.deno.dev fallback without billing surprises.

See full feature comparison →

Starter

For solo Deno Deploy developers managing a single client project with one custom domain attached and a <project-name>.deno.dev fallback monitored separately.

$29/month

  • 10 monitored assets
  • 1 seat
  • 15-min check cadence
  • SSL + DNS + vendor monitoring
  • Email + Slack alerts
Most chosen

Team

For Deno Deploy agencies managing 5-10 client projects with typical multi-region edge-compute setups — api.* + app.* on the customer apex plus the <project-name>.deno.dev fallback, where cert state diverges between the customer apex and Deno's own apex.

$79/month

  • 50 monitored assets
  • 5 seats
  • 10-min check cadence
  • SSL + DNS + vendor monitoring
  • Email + Slack alerts

Agency

For agencies with a full Deno Deploy client roster including SOC 2-hardened customer apexes with CAA tightening that breaks Let's Encrypt-only renewal, Deno Cron + Deno KV workloads calling external partner HTTPS endpoints, and multi-project portfolios where each project has its own <project-name>.deno.dev fallback plus a customer-attached custom domain.

$199/month

  • 200 monitored assets
  • 15 seats
  • 5-min check cadence
  • SSL + DNS + vendor monitoring
  • Email + Slack alerts

Know that api.customer.com is about to hard-fail the TLS handshake — 30 days before a Fortune-500 buyer's endpoint security flags the expired cert and escalates it into a SOC 2 audit finding.

Add your first Deno Deploy custom domain in under two minutes. Customer-attached api.* and app.* subdomains, the <project-name>.deno.dev fallback, and Deno Cron partner-endpoint checks are monitored from the same dashboard. 14-day trial, no card required.