Never Lose a Queue Job Again: Laravel 12.34's New Failover Driver
Right, let's talk about something that's probably kept you awake at 3am at least once in your career.
Redis goes down. Your queue dies. Password reset emails? Gone. Order confirmations? Vanished. Payment webhooks? Lost to the void. And you're left explaining to stakeholders why customers are furious.
I've been there. We've all been there. It's absolutely rubbish.
Well, Taylor Otwell and the Laravel team just made that nightmare a thing of the past.
The Problem (And Why It's Actually Your Problem)
Here's the uncomfortable truth: your queue system is a single point of failure.
When you dispatch a job in Laravel, you're essentially saying "Redis, please remember to do this thing later." But what happens when Redis has a moment? What happens when AWS has one of those "brief service interruptions" that lasts two hours?
Your jobs disappear. Poof. Gone.
Before Laravel 12.34, your options were:
Build complex custom failover logic (fun!)
Set up extensive monitoring and pray (effective!)
Manually intervene during outages (at 3am!)
Just... accept that sometimes jobs get lost (absolutely not)
None of these are good options. And honestly, this is the kind of infrastructure resilience that should just work.
Enter: The Failover Queue Driver
Laravel 12.34 dropped earlier this week with a feature that made me actually say "oh thank god" out loud: automatic queue failover.
Here's what it does: when your primary queue connection fails, Laravel automatically tries your backup connections until one works. No manual intervention. No lost jobs. No drama.
Your application just... keeps working.
How It Actually Works (The Technical Bit)
Let's peek under the hood because this is genuinely clever in its simplicity.
The failover driver is basically a wrapper that loops through your configured connections. When you dispatch a job, here's what actually happens:
```php
<?php

// You dispatch a job like normal
ProcessOrder::dispatch($order);

// Internally, the FailoverQueue does this:
foreach ($this->connections as $connection) {
    try {
        return $this->manager->connection($connection)->push($job, $data, $queue);
    } catch (Throwable $e) {
        $this->events->dispatch(new QueueFailedOver($connection, $job));
    }
}

throw $e;
```
That's it. No complex exception type checking. No sophisticated retry logic. Just a simple loop that catches any throwable exception and moves to the next connection.
The beauty is in the simplicity: it catches everything - connection failures, timeouts, Redis being down, whatever. If something goes wrong, try the next connection. Simple.
What's particularly smart is that this happens at dispatch time, not processing time. So there's no weird state where a job exists in limbo - it's either queued successfully (somewhere) or it throws an exception you can handle.
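Because failure surfaces at dispatch time, you can wrap truly critical dispatches in your own last-ditch handler for the "every connection is down" case. This is a hedged sketch, not part of the release - the logging and recovery behaviour is whatever suits your app:

```php
<?php

use Illuminate\Support\Facades\Log;

// Sketch: if every configured connection fails, the failover driver
// rethrows the last exception, so dispatch() is the one place to catch
// total queue loss for jobs you absolutely cannot drop.
try {
    ProcessOrder::dispatch($order);
} catch (Throwable $e) {
    // Hypothetical handling - record it and show the user a soft error
    // instead of letting the request 500.
    Log::critical('All queue connections failed', ['error' => $e->getMessage()]);
    report($e);
}
```

For most jobs you won't bother - the whole point of the failover driver is that this branch almost never runs.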
Why Catch All Exceptions?
You might think "shouldn't we only catch connection-specific exceptions?" But Taylor's implementation is smarter than that. By catching all Throwables:
Redis connection timeouts? Caught.
AWS SQS credential issues? Caught.
Database queue table locked? Caught.
Network hiccup? Caught.
Any infrastructure weirdness? Caught.
The only exceptions that bubble up are when all your connections fail, or when there's a genuine problem with your job class itself (which should fail anyway).
Performance Impact (Spoiler: It's Negligible)
In normal operation, there's basically zero overhead. Your first connection succeeds, job gets queued, done. The try-catch wrapper adds a handful of CPU cycles - completely unnoticeable.
When Redis is down, yes, there's the latency of attempting that connection and waiting for it to fail. But:
You can tune connection timeouts (more on that in a sec)
The failover happens immediately once an exception is thrown
It's still infinitely faster than losing the job entirely
It only affects dispatch, not processing
In practice, you'll see maybe 100-500ms added to your dispatch time during an outage, mostly from the connection timeout itself. Compared to jobs being lost? I'll take that trade every single time.
The simplicity of the implementation also means less code to maintain and fewer edge cases to worry about. Taylor's approach of "catch everything and move on" is surprisingly robust.
Setting It Up (Embarrassingly Simple)
It's embarrassingly simple. Open `config/queue.php` and add a new connection:
```php
<?php

'connections' => [

    // Your existing connections
    'redis' => [
        'driver' => 'redis',
        'connection' => 'default',
        'queue' => env('REDIS_QUEUE', 'default'),
        'retry_after' => 90,
    ],

    'database' => [
        'driver' => 'database',
        'table' => 'jobs',
        'queue' => 'default',
        'retry_after' => 90,
    ],

    // New failover connection
    'failover' => [
        'driver' => 'failover',
        'connections' => [
            'redis',    // Fast and lovely when it works
            'database', // Always reliable, slightly slower
        ],
    ],

],
```
Then in your `.env` file:
```env
QUEUE_CONNECTION=failover
```
That's it. Seriously.
Real-World Scenarios Where This Saves Your Bacon
E-commerce During Peak Times
Black Friday. Your Redis instance is under load. One server hiccups.
Without failover:
Checkout jobs lost
Customers don't get confirmation emails
Payment webhooks missed
Inventory not updated
Support tickets flood in
With failover:
Jobs seamlessly move to database queue
Everything processes normally
Customers happy
You sleep soundly
SaaS Onboarding Flows
New user signs up. Your onboarding sequence:
Welcome email (must arrive)
Account setup job (must complete)
Trial start notification (must send)
First-use analytics (important for metrics)
If Redis dies halfway through, the failover driver ensures all these jobs still execute. Your conversion funnel doesn't break just because infrastructure had a wobble.
Payment Processing
This is the big one. Payment webhooks from Stripe, PayPal, whatever - these are fire-and-forget HTTP requests. If your queue dies when the webhook arrives, that payment confirmation is gone.
Failover means even if Redis is down, the webhook gets queued in your database. The payment processes. Money appears. Everyone's happy.
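To make that concrete, here's a hedged sketch of the pattern - `ProcessStripeWebhook` is a hypothetical job name; the point is that the controller only enqueues and returns, so with the failover connection configured the acknowledgement goes back to the provider even while Redis is down:

```php
<?php

use Illuminate\Http\Request;

// Sketch: acknowledge fast, process later. With QUEUE_CONNECTION=failover,
// dispatch() lands on Redis normally and on the database queue during outages.
public function handleWebhook(Request $request)
{
    ProcessStripeWebhook::dispatch($request->all());

    return response()->noContent(); // immediate 204 back to the provider
}
```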
Monitoring Failover (The Reality Check)
Right, here's something important: Laravel Horizon only supports Redis queues.
If you're using Horizon and your failover kicks in to route jobs to a database or SQS queue, Horizon won't see those jobs. This isn't a bug - Horizon is specifically built for Redis.
So what does this mean for monitoring failover?
The Hybrid Approach
You'll need to use different tools depending on which queue is active:
For Redis (primary): Use Horizon - it's brilliant for this
For Database/SQS (fallback): Use standard Laravel queue monitoring
Your Horizon config looks normal:
```php
<?php

// config/horizon.php
'environments' => [
    'production' => [
        'supervisor-redis' => [
            'connection' => 'redis',
            'queue' => ['default'],
            'balance' => 'auto',
            'processes' => 10,
            'tries' => 3,
        ],
    ],
],
```
And you'll need traditional `queue:work` processes for your fallback queues:
```shell
# In your process manager (Supervisor, etc)
php artisan queue:work database --tries=3 --timeout=60
```
The key is detecting when failover happens, not necessarily monitoring the fallback queue in Horizon.
Detecting Failover Events
The crucial bit is knowing when failover activates. Add this to your `AppServiceProvider`:
```php
<?php

use Illuminate\Queue\Events\QueueFailedOver;
use Illuminate\Support\Facades\Event;
use Illuminate\Support\Facades\Log;

public function boot(): void
{
    Event::listen(QueueFailedOver::class, function ($event) {
        // This event only fires when a connection has failed and the
        // driver is moving on to the next one in the list
        Log::warning('Queue failover active', [
            'failed_connection' => $event->connectionName,
            'job' => $event->job->displayName(),
            'queue' => $event->queue,
            'timestamp' => now(),
        ]);

        // Optional: Alert your team immediately
        // Sentry::captureMessage('Queue failover from ' . $event->connectionName);
    });
}
```
Each of these log entries means a dispatch has failed over to a fallback connection - your signal that failover is active.
What You Can Monitor in Horizon
When Redis is working (99% of the time), Horizon gives you:
Real-time job throughput
Failed job tracking
Job retry statistics
Worker process health
When failover kicks in and jobs go to database/SQS, you'll see:
Redis queue goes quiet in Horizon (first sign something's up)
Your logs showing failover events
Jobs still processing, just not visible in Horizon
Alternative: General Queue Monitoring
For a unified view regardless of which queue is active, consider:
Laravel Pulse - Shows jobs across all drivers
Custom Monitoring - Track via events and ship to your monitoring service
```php
<?php

use Illuminate\Queue\Events\JobProcessed;
use Illuminate\Support\Facades\Event;

// Track all job events regardless of driver
Event::listen(JobProcessed::class, function ($event) {
    // Ship to Datadog, New Relic, whatever
    Metrics::increment('jobs.processed', [
        'connection' => $event->connectionName,
        'queue' => $event->job->getQueue(),
    ]);
});
```
Alert Configuration
Set up Slack notifications for failover events:
```php
<?php

use Illuminate\Queue\Events\QueueFailedOver;
use Illuminate\Support\Facades\Event;
use Illuminate\Support\Facades\Notification;

// In AppServiceProvider
Event::listen(QueueFailedOver::class, function ($event) {
    Notification::route('slack', config('services.slack.webhook'))
        ->notify(new QueueFailoverAlert($event));
});
```
Create the notification:
```php
<?php

use Illuminate\Notifications\Messages\SlackMessage;
use Illuminate\Notifications\Notification;
use Illuminate\Queue\Events\QueueFailedOver;

class QueueFailoverAlert extends Notification
{
    public function __construct(public QueueFailedOver $event)
    {
    }

    public function via($notifiable): array
    {
        return ['slack'];
    }

    public function toSlack($notifiable): SlackMessage
    {
        return (new SlackMessage)
            ->warning()
            ->content('⚠️ Queue Failover Active')
            ->attachment(function ($attachment) {
                $attachment->title('Failover Details')
                    ->fields([
                        'Connection' => $this->event->connectionName,
                        'Job' => $this->event->job->displayName(),
                        'Time' => now()->format('H:i:s'),
                    ]);
            });
    }
}
```
Now your team knows immediately when failover kicks in, even if users don't notice anything wrong.
Connection Order Strategy
The order you list connections matters. Here's how I think about it:
Primary: Redis
Blazingly fast
Perfect for high-throughput
Can be volatile under load
Secondary: SQS or Database
SQS if you want managed infrastructure
Database if you want simple and local
Both are slower but incredibly reliable
Tertiary: Sync (for emergencies)
Processes jobs immediately in-process
Blocks the request, but ensures execution
Last resort for critical jobs
```php
<?php

'failover' => [
    'driver' => 'failover',
    'connections' => [
        'redis',    // 99.9% of the time
        'sqs',      // AWS outages are rare
        'database', // This will basically never be used
        'sync',     // Nuclear option
    ],
],
```
What About Processing?
Important distinction: the failover driver handles dispatching jobs. Your workers still need to be configured to process from multiple connections.
For Redis queues: Use Horizon
```shell
php artisan horizon
```
For fallback queues (database, SQS): Use traditional queue workers
```shell
# In your Supervisor/Forge/process manager config
php artisan queue:work database --tries=3 --timeout=60 --sleep=3
php artisan queue:work sqs --tries=3 --timeout=60 --sleep=3
```
The reality is you'll probably run Horizon for your primary Redis queue (handling 99% of jobs), and have a couple of traditional workers running in the background for failover scenarios.
Your `database` queue workers will sit mostly idle, only springing into action when Redis has problems. That's exactly what you want.
Edge Cases and Gotchas
1. Connection Timeout Tuning
If Redis is down, you don't want to wait 30 seconds before trying the next connection. Tune your timeouts:
```php
<?php

'redis' => [
    'driver' => 'redis',
    'connection' => 'default',
    'queue' => env('REDIS_QUEUE', 'default'),
    'retry_after' => 90,
    'block_for' => null,
    'timeout' => 2, // Fail fast, try next connection
],
```
2. Database Queue Performance
The database queue is slower than Redis. If you're dispatching hundreds of jobs per second, that could be noticeable during failover.
Monitor your database during failover events. You might need to scale up temporarily or optimize your `jobs` table indexes.
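If the `jobs` table does become a bottleneck, a composite index on the columns workers poll can help. This is a hedged sketch rather than a prescription - check your own query plans first, since Laravel's default migration already indexes `queue`:

```php
<?php

use Illuminate\Database\Migrations\Migration;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;

return new class extends Migration
{
    public function up(): void
    {
        Schema::table('jobs', function (Blueprint $table) {
            // Workers filter on queue and reserved_at when claiming jobs
            $table->index(['queue', 'reserved_at']);
        });
    }

    public function down(): void
    {
        Schema::table('jobs', function (Blueprint $table) {
            $table->dropIndex(['queue', 'reserved_at']);
        });
    }
};
```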
3. Job Serialization Differences
Different queue drivers serialize jobs slightly differently. This is almost never an issue, but be aware that job payloads might vary between Redis and database queues.
Test your failover setup before you need it!
4. Horizon Dashboard Confusion
If you use Horizon, remember it only shows Redis queues. When failover kicks in:
Jobs disappear from Horizon dashboard (they're on database queue now)
This is normal! Not a bug!
Check your logs to confirm failover is working
Monitor your database queue separately
You might think jobs are failing, but they're actually being processed successfully on the fallback queue - just invisible to Horizon.
Should You Use This? (Yes, Probably)
If you're running production Laravel apps with background jobs, you should absolutely use failover. The only exceptions I can think of:
Hobby projects - Not worth the complexity
Jobs that can be lost - Some analytics or logging jobs don't matter
Already using managed queues - AWS SQS with multi-region is already ultra-reliable
For everyone else? Add this today. It's a few lines of config that could save you from a disaster.
The Laravel Philosophy Shining Through
This is what I love about Laravel. Taylor and the team could have said "just use SQS, it's reliable" or "set up your own failover logic."
Instead, they built it into the framework. They made resilience easy. They removed an entire category of 3am emergencies.
This is Laravel at its best: thoughtful features that solve real problems without ceremony. You add three lines to your config file and suddenly your infrastructure is dramatically more resilient.
Get Started Now
Update to Laravel 12.34:
```shell
composer update laravel/framework
```
Add the failover config to `config/queue.php`. Update your `.env`. Deploy.
Done. You've just made your application significantly more reliable.
Running Laravel in production and want peace of mind? At Jump24, we help businesses build resilient Laravel applications - the kind that don't wake you up at 3am. We've been doing this since 2014, and we'd love to chat about your project. Get in touch - Laravel in Your Language.