MaxScale Multiplexing and Caching – Part 6

Part 6 – MaxScale Caching

This multi-part series breaks the topic down into easy, logical steps.

Part 1 – Complete Guide for High-Concurrency Workloads

Part 2 – Understanding MaxScale Thread Architecture

Part 3 – Backend Sizing and Connection Pooling

Part 4 – Tuning MaxScale for Real Workloads

Part 5 – MaxScale Multiplexing

Part 6 – MaxScale Caching

Part 7 – Warnings & Caveats

Part 8 – Key Takeaways


MaxScale supports multiple caching strategies to reduce backend load and improve query response times. Each cache type has its advantages and trade-offs.

Cache Types

Local Server Cache

    • Memory resides within the MaxScale process.
    • Pros: Extremely fast, minimal network latency.
    • Cons: Limited to a single MaxScale instance; cannot be shared across multiple nodes.
    • Documentation: MaxScale Local Cache

Memcached

    • Shared cache across MaxScale instances or applications.
    • Pros: Scalable, suitable for multi-node setups.
    • Cons: Network latency; requires running a separate Memcached service.
    • Documentation: MaxScale Memcached Cache

Redis

    • Shared cache with advanced features such as eviction policies and optional persistence.
    • Pros: Survives restarts (with persistence enabled), rich feature set.
    • Cons: More complex to configure, higher operational overhead.
    • Documentation: MaxScale Redis Cache

Configuration Example (Local Cache + Multiplexing)

Note that the filter must be attached to the service via filters=, otherwise the cache is never used:

INI
# Service with multiplexing enabled and the cache filter attached
[CacheService]
type=service
router=readwritesplit
servers=server1,server2,server3
user=maxuser
password=maxpass
filters=Cache
idle_session_pool_time=1s
multiplex_timeout=60s

[CacheListener]
type=listener
service=CacheService
protocol=MariaDBClient
port=4007
address=0.0.0.0

# Local (in-process) cache: fastest option, not shared between nodes
[Cache]
type=filter
module=cache
hard_ttl=60s
soft_ttl=30s
max_size=100Mi
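
Before restarting MaxScale with the new configuration, it is worth validating it first. A minimal sketch, assuming MaxScale runs as a systemd service with its configuration at the default /etc/maxscale.cnf:

Bash
# Validate the configuration without starting MaxScale
sudo maxscale --config-check --config=/etc/maxscale.cnf
# Restart and confirm the service and listener are up
sudo systemctl restart maxscale
maxctrl list services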

Using Memcached or Redis

The external stores are selected through the cache filter's storage modules. The exact storage option syntax varies between MaxScale versions, so check the cache filter documentation for your release:

INI
# Shared cache via Memcached
[CacheMemcached]
type=filter
module=cache
storage=storage_memcached
storage_options=server=127.0.0.1:11211
hard_ttl=60s
soft_ttl=30s
max_size=500Mi

# Shared, persistent cache via Redis
[CacheRedis]
type=filter
module=cache
storage=storage_redis
storage_options=server=127.0.0.1:6379
hard_ttl=60s
soft_ttl=30s
max_size=1024Mi
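
If you want to try these shared back ends locally before pointing MaxScale at them, a quick sketch for a Rocky Linux/CentOS-style host (package and service names may differ on other distributions):

Bash
# Install and start local Memcached and Redis instances (assumes a dnf-based distro)
sudo dnf install -y memcached redis
sudo systemctl enable --now memcached redis
# Sanity checks: both stores should respond
printf 'stats\r\nquit\r\n' | nc 127.0.0.1 11211 | head -n 3
redis-cli -h 127.0.0.1 -p 6379 ping    # expects PONG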

A key benefit of this combined approach is that a cache hit bypasses the need for a backend connection entirely, significantly reducing the load on the connection pool and the database itself.
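
A quick way to observe this is to run the same query twice through the cache-enabled listener on port 4007: the first run populates the cache, and the second should return without touching a backend. A minimal sketch, assuming a sysbench-style sbtest schema (the table name is illustrative):

Bash
# The second invocation should be noticeably faster on a cache hit
time mysql -h 127.0.0.1 -P 4007 -u maxuser -pmaxpass sbtest -e "SELECT COUNT(*) FROM sbtest1"
time mysql -h 127.0.0.1 -P 4007 -u maxuser -pmaxpass sbtest -e "SELECT COUNT(*) FROM sbtest1"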

Testing and Tuning

1. Prepare Dataset: Use sysbench to create a realistic workload with the required database and user (see the sysbench sketch after this list).
2. Run Load Test: Measure latency, throughput, and cache utilization under the expected concurrency.
3. Monitor Metrics: Track cache hits/misses, backend connections, and query response times.
4. Tune Parameters: Adjust hard_ttl, soft_ttl, max_size, idle_session_pool_time, and persistpoolmax for optimal performance.
5. Back Up Configuration: Always back up your MaxScale configuration before making changes to avoid accidental downtime.
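
As a concrete starting point for steps 1 and 2, a sysbench sketch pointed at the cache-enabled listener (the database name, table counts, and thread counts are placeholders to adjust for your environment):

Bash
# Step 1: prepare a test dataset through MaxScale
sysbench oltp_read_only --mysql-host=127.0.0.1 --mysql-port=4007 \
  --mysql-user=maxuser --mysql-password=maxpass --mysql-db=sbtest \
  --tables=10 --table-size=100000 prepare
# Step 2: run a read-only load at the expected concurrency
sysbench oltp_read_only --mysql-host=127.0.0.1 --mysql-port=4007 \
  --mysql-user=maxuser --mysql-password=maxpass --mysql-db=sbtest \
  --tables=10 --table-size=100000 --threads=64 --time=300 run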

Example Tuning:

• If the cache hit rate is low, increase hard_ttl/soft_ttl or max_size.
• If the backend is overwhelmed, consider adding MaxScale instances and enabling a shared cache via Memcached or Redis; watch the per-server connection counts as shown in the sketch below.
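
To see whether the backend is actually being relieved, watch the per-server connection counts while the load test runs. A minimal sketch using maxctrl (server names match the configuration above):

Bash
# Per-backend connection counts; with a warm cache these should stay low
watch -n 2 maxctrl list servers
# Session-level detail for the cache-enabled service
maxctrl list sessions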


Kester Riley

Kester Riley is a Global Solutions Engineering Leader who leverages his website to establish his brand and build strong business relationships. Through his blog posts, Kester shares his expertise as a consultant, mentor, trainer, and presenter, providing innovative ideas and code examples to empower ambitious professionals.

