MaxScale Multiplexing and Caching – Part 3

Part 3 – Backend Sizing and Connection Pooling

This multi-part series breaks the topic down into easy, logical steps.

Part 1 – Complete Guide for High-Concurrency Workloads

Part 2 – Understanding MaxScale Thread Architecture

Part 3 – Backend Sizing and Connection Pooling

Part 4 – Tuning MaxScale for Real Workloads

Part 5 – MaxScale Multiplexing

Part 6 – MaxScale Caching

Part 7 – Warnings & Caveats

Part 8 – Key Takeaways


MaxScale threads manage frontend client connections, but each thread also maintains its own set of backend connections to the database servers. Proper sizing ensures that queries flow smoothly without overwhelming the database.

Formula:

Effective backend connections = number of MaxScale threads × connections per thread

Typical Recommendations:

  • OLTP workloads (transaction-heavy): 4–8 connections per thread — optimizes transactional throughput without saturating the database.
  • Read-heavy workloads: 8–16 connections per thread — allows higher parallel reads while controlling server resource usage.

Examples:

  • 8 threads × 6 connections per thread = 48 backend connections for OLTP
  • 16 threads × 12 connections per thread = 192 backend connections for read-heavy queries
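To make the OLTP example concrete, the sketch below shows a minimal maxscale.cnf fragment. The thread count matches the 8 × 6 example above; the server name, address, service definition, and pool values are illustrative assumptions and should be adapted to your own topology, MaxScale version, and testing (listener sections and real credentials are omitted for brevity).

    [maxscale]
    threads=8                  # worker threads handling client connections

    [server1]
    type=server
    address=db1.example.com    # illustrative hostname
    port=3306
    persistpoolmax=6           # up to 6 pooled backend connections per thread for this server
    persistmaxtime=3600s       # how long an idle pooled connection may be kept for reuse

    [Read-Write-Service]
    type=service
    router=readwritesplit
    servers=server1
    user=maxscale_user         # illustrative credentials
    password=maxscale_pw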

Important Considerations:

  • MariaDB max_connections: Ensure that the total number of backend connections opened by all MaxScale instances does not exceed max_connections on the database servers.
  • Connection Pooling: MaxScale keeps persistent connection pools per thread; setting persistpoolmax appropriately allows backend connections to stay open for reuse, reducing connection overhead.
  • Monitoring: Check connection utilization under load to avoid saturating the backend servers. Tools such as SHOW PROCESSLIST or MaxScale's monitoring endpoints help validate pool usage; see the example after this list.
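As a rough check of both sides, the commands below query the MariaDB ceiling and current usage, then list per-server connection counts as MaxScale sees them. The server name and the omission of credentials are assumptions for illustration.

    # On each MariaDB server: configured ceiling and current usage
    mariadb -e "SHOW GLOBAL VARIABLES LIKE 'max_connections';"
    mariadb -e "SHOW GLOBAL STATUS LIKE 'Threads_connected';"

    # On the MaxScale host: per-server connection counts and state
    maxctrl list servers
    maxctrl show server server1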

Practical Tip:

  • Start with the recommended connections per thread, then adjust based on load testing and real-world usage to balance concurrency with server stability; the runtime commands sketched below can help you iterate quickly.
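If your MaxScale version allows these server parameters to be changed at runtime, pool sizes can be tuned iteratively without a restart; the values and server name below are illustrative only.

    # Adjust the persistent pool for one server at runtime, then re-check usage
    maxctrl alter server server1 persistpoolmax 8
    maxctrl alter server server1 persistmaxtime 1800s
    maxctrl list servers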


