Scaling Up While Slicing Costs in Half: How the Les-Tilleuls.coop SRE Team Optimized Mercure.rocks
Published on April 27, 2026
Providing a high-performance, real-time managed service like Mercure.rocks requires a robust infrastructure. But as our user base scaled rapidly, so did our cloud resource consumption. Earlier this year, the Les-Tilleuls.coop Site Reliability Engineering (SRE) team decided it was time to put our infrastructure under the microscope. Our goal was ambitious: optimize our resource usage and shrink our monthly cloud bill, all while maintaining blazing-fast performance for a fast-growing list of clients.
Through a combination of infrastructure restructuring and deep-dive code optimizations, we pulled off what every scaling SaaS company dreams of: we onboarded significantly more customers while reducing our monthly infrastructure costs by nearly 48%. Here is the step-by-step breakdown of how we did it.
# The Infrastructure Audit: Finding the Bottleneck
The first step in any optimization journey is gaining observability. We conducted a comprehensive audit of our cloud billing and resource allocation to see exactly where our budget was going.
The data pointed to a clear culprit: our load balancers. They were consuming a disproportionate share of our infrastructure resources. We were over-provisioning static load balancers to handle potential traffic spikes from our growing customer base, which meant paying for massive idle capacity during quieter periods.
# Implementing Elastic Load Balancing
Once we identified the bottleneck, the solution became clear. We completely revamped our routing infrastructure by migrating to an elastic load balancer system.
Instead of maintaining static, heavy load balancers, our new dynamic setup scales based strictly on real-time traffic demand. This single infrastructure pivot paid off on three fronts (see the sketch after this list for the core idea):
- Drastically reduced costs: We stopped paying for unused overhead.
- Flawless scaling: As new customers joined the managed service, the elastic system naturally expanded to absorb the new traffic without requiring manual provisioning.
- Enhanced reliability: We are now much better equipped to handle sudden surges in concurrent connections than we were under the old static model.
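To make that pivot concrete, here is a minimal Go sketch of the kind of scaling rule an elastic tier applies: the classic horizontal-autoscaling formula, clamped between a floor and a ceiling. The metric (connections per replica) and every threshold shown are hypothetical illustrations, not our production configuration.

```go
// autoscale.go: a minimal sketch of an elastic scaling rule.
// Metric names and thresholds are hypothetical, not the actual
// Mercure.rocks setup.
package main

import (
	"fmt"
	"math"
)

// desiredReplicas applies the classic horizontal-autoscaling formula:
// scale the current replica count by the ratio of observed load to the
// per-replica target, clamped to [min, max] to avoid flapping at the edges.
func desiredReplicas(current int, observedConnsPerReplica, targetConnsPerReplica float64, min, max int) int {
	ratio := observedConnsPerReplica / targetConnsPerReplica
	desired := int(math.Ceil(float64(current) * ratio))
	if desired < min {
		return min
	}
	if desired > max {
		return max
	}
	return desired
}

func main() {
	// Quiet period: 2 replicas each holding 1,200 connections, target 5,000.
	fmt.Println(desiredReplicas(2, 1200, 5000, 2, 20)) // 2: no idle overhead

	// Traffic spike: the same 2 replicas now hold 12,000 connections each.
	fmt.Println(desiredReplicas(2, 12000, 5000, 2, 20)) // 5: scales out
}
```

The clamp is what keeps the bill honest in both directions: the floor guarantees headroom for sudden surges, while the ceiling caps runaway spend.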
# Digging Deeper: The Mercure Code Audit
Infrastructure changes were only half the battle. To efficiently pack more customers onto our managed servers, we knew we had to look at the application layer itself.
Our team initiated a rigorous audit of the core Mercure codebase. Because Mercure is an open-source solution at its heart, we didn't just analyze the managed cloud environment: we also evaluated the on-premise version. We profiled the application under heavy load to pinpoint exactly where CPU cycles were being wasted and where memory allocations were inefficient.
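Mercure is written in Go, so profiling of this kind typically leans on the standard net/http/pprof package. The sketch below shows the general pattern behind "profiling under heavy load"; it illustrates the standard tooling, not our exact harness.

```go
// profile.go: a minimal sketch of profiling a Go service under load
// with the standard net/http/pprof package.
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers the /debug/pprof/* handlers on the default mux
)

func main() {
	// Expose the profiling endpoints on a private port while the service
	// handles production-like load on its regular port.
	go func() {
		log.Println(http.ListenAndServe("localhost:6060", nil))
	}()

	// ... start the real server here, then capture profiles with:
	//   go tool pprof http://localhost:6060/debug/pprof/profile?seconds=30  (CPU)
	//   go tool pprof http://localhost:6060/debug/pprof/heap                (memory)
	select {}
}
```

The CPU profile surfaces hot code paths cycle by cycle, while the heap profile shows which allocations dominate per connection: exactly the two questions the audit set out to answer.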
# Shipping Upgrades: Better Memory and CPU Efficiency
Armed with our profiling data, we got to work writing the fixes. We opened a series of Pull Requests aimed directly at core performance enhancements.
Our code-level optimizations focused heavily on two fronts (a representative sketch follows the list):
- Improving memory efficiency: By reducing the memory footprint per connection, we could support a larger number of concurrent users and new customers on smaller, more cost-effective compute instances.
- Reducing CPU usage: We streamlined the event loop and internal parsing, ensuring the server uses fewer cycles to process an ever-increasing volume of real-time messages.
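As an illustration of the memory side, here is a minimal Go sketch of one classic allocation-reduction technique: reusing per-message buffers with sync.Pool instead of allocating a fresh one for every event. The writeEvent helper and the simplified SSE framing are hypothetical, not the actual patches we submitted.

```go
// pool.go: a minimal sketch of buffer reuse with sync.Pool.
// Illustrative only; not the actual Mercure patches.
package main

import (
	"bytes"
	"io"
	"os"
	"sync"
)

// bufPool recycles scratch buffers across events, so steady-state traffic
// stops allocating (and garbage-collecting) one buffer per message.
var bufPool = sync.Pool{
	New: func() any { return new(bytes.Buffer) },
}

// writeEvent frames a server-sent event into w using a pooled buffer.
// Hypothetical helper: a simplification of a real SSE write path.
func writeEvent(w io.Writer, id, data string) error {
	buf := bufPool.Get().(*bytes.Buffer)
	defer func() {
		buf.Reset()
		bufPool.Put(buf)
	}()

	buf.WriteString("id: ")
	buf.WriteString(id)
	buf.WriteString("\ndata: ")
	buf.WriteString(data)
	buf.WriteString("\n\n")

	_, err := w.Write(buf.Bytes())
	return err
}

func main() {
	// Demo: write one frame to stdout. Under real load, a handful of pooled
	// buffers get reused across millions of events.
	_ = writeEvent(os.Stdout, "42", `{"status":"ok"}`)
}
```

Fewer allocations per message also means less garbage-collector work, which is where part of the CPU saving comes from: the two bullets above are tightly linked.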
Because these PRs were made to the core software, these performance gains aren't just saving Les-Tilleuls.coop money on the managed Mercure.rocks service: they are directly benefiting the entire open-source community running Mercure on-premise.
# The Result: Doing More with Less
Scaling a managed service usually means scaling your infrastructure bill right alongside it. By combining strategic infrastructure changes with foundational code improvements, the SRE team successfully flipped that script.
We dropped our monthly cloud spend by ~47.5%, all while serving a substantially larger customer base. By moving to an elastic architecture and optimizing the software’s CPU and memory footprint, the managed version of Mercure.rocks is now more resilient, more available, and more efficient than ever before.