Part 6 – MaxScale Caching
This multi-part series breaks the topic down into easy, logical steps:
Part 1 – Complete Guide for High-Concurrency Workloads
Part 2 – Understanding MaxScale Thread Architecture
Part 3 – Backend Sizing and Connection Pooling
Part 4 – Tuning MaxScale for Real Workloads
Part 5 – MaxScale Multiplexing
Part 6 – MaxScale Caching
MaxScale supports multiple caching strategies to reduce backend load and improve query response times. Each cache type has its advantages and trade-offs.
Cache Types
Local Server Cache
- Memory resides within the MaxScale process.
- Pros: Extremely fast, minimal network latency.
- Cons: Limited to a single MaxScale instance; cannot be shared across multiple nodes.
- Documentation: MaxScale Local Cache
Memcached
- Shared cache across MaxScale instances or applications.
- Pros: Scalable, suitable for multi-node setups.
- Cons: Network latency; requires running a separate Memcached service.
- Documentation: MaxScale Memcached Cache
Redis
- Shared, persistent cache with advanced features like eviction policies and persistence.
- Pros: Persistent across restarts, rich feature set.
- Cons: More complex to configure, higher operational overhead.
- Documentation: MaxScale Redis Cache
Configuration Example (Local Cache + Multiplexing)
[CacheService]
type=service
router=readwritesplit
servers=server1,server2,server3
user=maxuser
password=maxpass
filters=Cache
idle_session_pool_time=1s
multiplex_timeout=60s
[CacheListener]
type=listener
service=CacheService
protocol=MariaDBClient
port=4007
address=0.0.0.0
[Cache]
type=filter
module=cache
hard_ttl=60s
soft_ttl=30s
max_size=100Mi

Using Memcached or Redis
[CacheMemcached]
type=filter
module=cache
storage=storage_memcached
storage_memcached.server=127.0.0.1:11211
hard_ttl=60s
soft_ttl=30s
max_size=500Mi
[CacheRedis]
type=filter
module=cache
storage=storage_redis
storage_redis.server=127.0.0.1:6379
hard_ttl=60s
soft_ttl=30s
max_size=1024Mi

A key benefit of this combined approach is that a cache hit bypasses the need for a backend connection entirely, significantly reducing the load on both the connection pool and the database itself.
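By default, the cache filter considers every applicable SELECT for caching. If you only want to cache a handful of hot tables, the filter also accepts a rules file through its rules parameter. The sketch below is illustrative: the file path and the shop.products table are placeholder assumptions, and the available rule attributes should be confirmed against the cache filter documentation for your MaxScale version.

[Cache]
type=filter
module=cache
rules=/etc/maxscale.modules.d/cache_rules.json
hard_ttl=60s
soft_ttl=30s
max_size=100Mi

Contents of cache_rules.json (store results only for queries that touch shop.products):

{
    "store": [
        { "attribute": "table", "op": "=", "value": "shop.products" }
    ]
}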
Testing and Tuning
- Prepare Dataset: Use sysbench to create a realistic dataset with the required database and user (see the sysbench sketch after this list).
- Run Load Test: Measure latency, throughput, and cache utilization under expected concurrency.
- Monitor Metrics: Track cache hits/misses, backend connections, and query response times.
- Tune Parameters: Adjust hard_ttl, soft_ttl, max_size, idle_session_pool_time, and persistpoolmax for optimal performance.
- Backup Configuration: Always back up your MaxScale configuration before making changes to avoid accidental downtime.
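As a concrete starting point, the commands below prepare a sysbench dataset and run a read-only test through the MaxScale listener defined above (port 4007). The sbtest database, table counts, thread count, and duration are illustrative assumptions; size them to match your real workload, and create the sbtest database and grants for maxuser beforehand.

# load 10 tables of 100,000 rows each through MaxScale (assumes the sbtest database already exists)
sysbench oltp_read_only --mysql-host=127.0.0.1 --mysql-port=4007 \
  --mysql-user=maxuser --mysql-password=maxpass --mysql-db=sbtest \
  --tables=10 --table-size=100000 prepare

# run a 5-minute read-only test with 64 client threads, reporting every 10 seconds
sysbench oltp_read_only --mysql-host=127.0.0.1 --mysql-port=4007 \
  --mysql-user=maxuser --mysql-password=maxpass --mysql-db=sbtest \
  --tables=10 --table-size=100000 --threads=64 --time=300 \
  --report-interval=10 run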
Example Tuning:
- If the cache hit rate is low, increase hard_ttl/soft_ttl or max_size (see the sketch below).
- If the backend is overwhelmed, consider adding more MaxScale instances and enabling a shared cache via Memcached or Redis.
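For instance, raising the TTLs and the cache size in the local-cache example above might look like the following; the values are placeholders and should be chosen from your measured hit rate and available memory, not copied verbatim.

[Cache]
type=filter
module=cache
hard_ttl=300s
soft_ttl=120s
max_size=512Mi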
References:
Part 1 | Part 2 | Part 3 | Part 4 | Part 5 | Part 6 | Part 7 | Part 8

