
Dead Letter Queue (DLQ)

An administrative quarantine screen that holds data blocks whose execution failed despite multiple automatic retries and now require manual intervention, helping guarantee zero data loss for your critical operations.

Last updated: 04/14/2026, 10:13 AM
<div class="features-wrapper">
<h2>What is the DLQ (Dead Letter Queue)?</h2>
<p>In messaging systems, a DLQ is a dedicated holding queue into which messages that cannot be processed are diverted so they do not block the active pipeline. On Orbitus, the DLQ page acts as the <strong>"Intensive Care Unit"</strong> of a workflow: the place where process executions that have repeatedly failed are parked as a final safety net.</p>
<div class="feature-grid">
<div class="feature-card">
  <h3>🛡️ Absolute Isolation</h3>
  <p>Executions that exceed their configured retry limit (Retry Count) are removed from the primary processing line and placed on the DLQ. This protects system capacity from backing up and ensures that subsequent healthy operations are never blocked by a single broken integration.</p>
</div>
<div class="feature-card">
  <h3>🩺 Manual Assessment & Intervention</h3>
  <p>A backend developer or systems lead opens this screen to examine the detained data (Payload) and diagnose the root cause post-mortem, free of real-time server pressure.</p>
</div>

<div class="feature-card">
  <h3>♻️ Total Data Recovery</h3>
  <p>Once the failing external API is back online, or the malformed payload has been corrected manually, the suspended task can be pushed back into the pipeline directly from its Node using the DLQ's <strong>"Retry"</strong> command, or permanently discarded via the <strong>"Skip"</strong> command.</p>
</div>
</div>
<div class="usecase-example">
  <h3>Enterprise Use Case: E-Invoice API Outage</h3>
  <p>In mission-critical architectures, "this action errored out, so the data is gone" is not an option. Imagine your workflow sends E-Invoices automatically, but the government tax API goes down for three hours. Without a DLQ, those invoices would simply be lost. <strong>With the Orbitus DLQ</strong>, the 300 failed invoice creations fail over into quarantine. The next morning, an admin clicks "Retry All" to process them, upholding the <strong>0% data loss</strong> guarantee.</p>
</div>
</div>
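The retry-then-quarantine flow described above can be sketched in a few lines. This is a minimal illustration, not Orbitus's actual implementation: the names (`Task`, `run_with_dlq`, `drain_dlq`) and the retry limit are hypothetical, and a real system would persist the DLQ rather than hold it in memory.

```python
import queue

MAX_RETRIES = 3  # hypothetical limit; the platform's configured Retry Count may differ


class Task:
    def __init__(self, payload):
        self.payload = payload
        self.attempts = 0


def run_with_dlq(task, execute, dlq):
    """Try a task up to MAX_RETRIES times; on exhaustion, move it to the
    DLQ instead of dropping it, so the main pipeline keeps flowing."""
    last_error = None
    while task.attempts < MAX_RETRIES:
        task.attempts += 1
        try:
            return execute(task.payload)
        except Exception as err:
            last_error = err
    dlq.put((task, last_error))  # quarantined with its error, not lost
    return None


def drain_dlq(dlq, execute):
    """Operator-triggered 'Retry All': re-run every quarantined task.
    (A production version would cap re-drains to avoid looping forever
    on tasks that keep failing.)"""
    while not dlq.empty():
        task, _err = dlq.get()
        task.attempts = 0  # reset the counter before retrying
        run_with_dlq(task, execute, dlq)
```

In the e-invoice scenario, `execute` would be the call to the tax API: while it is down every attempt raises, the tasks accumulate in `dlq`, and once it recovers a single `drain_dlq` pass sends them all.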
