Documentation ¶
Overview ¶
Package cache provides generic, thread-safe cache implementations with various eviction policies.
This package offers multiple cache types:
- SimpleCache: No eviction policy (stores items indefinitely)
- LRUCache: Least Recently Used eviction based on size
- TTLCache: Time-To-Live eviction based on expiry
- HybridCache: Combined LRU and TTL eviction
All cache implementations are thread-safe with built-in statistics (always enabled for observability) and optional Prometheus metrics integration via functional options.
Quick Start ¶
Simple cache creation:
cache, _ := cache.NewSimple[string]()
cache.Set("key", "value")
value, ok := cache.Get("key")
LRU cache with capacity limit:
cache, err := cache.NewLRU[*User](1000)
if err != nil {
log.Fatal(err)
}
TTL cache with expiration:
cache, err := cache.NewTTL[*Session](ctx, 30*time.Minute, 5*time.Minute)
Hybrid cache with both LRU and TTL:
cache, err := cache.NewHybrid[[]byte](ctx, 5000, 10*time.Minute, 1*time.Minute,
cache.WithMetrics[[]byte](registry, "api_cache"),
cache.WithEvictionCallback[[]byte](func(key string, value []byte) {
log.Printf("Evicted: %s", key)
}),
)
Cache Types and Eviction Policies ¶
Simple Cache (No Eviction):
Items remain in cache until explicitly deleted or cache is cleared. Best for small, stable datasets where manual control is desired.
cache, _ := cache.NewSimple[V]()
LRU Cache (Capacity-Based):
Evicts least recently used items when maximum capacity is reached. Best for fixed-size caches where recent access patterns indicate importance.
cache, _ := cache.NewLRU[V](maxSize)
TTL Cache (Time-Based):
Items expire after a time-to-live period. Background cleanup goroutine removes expired items. Best for time-sensitive data like sessions or tokens.
cache, _ := cache.NewTTL[V](ctx, ttl, cleanupInterval)
Hybrid Cache (Capacity + Time):
Combines LRU and TTL - items are evicted if they're either least recently used OR expired. Best for production caches requiring both size and time limits.
cache, _ := cache.NewHybrid[V](ctx, maxSize, ttl, cleanupInterval)
Observability Architecture ¶
The cache package implements a dual-tracking pattern for comprehensive observability:
Statistics (Always On):
- Tracks all operations using atomic counters
- Zero configuration required
- Available via cache.Stats()
- Provides computed metrics (hit ratio, requests/sec)
- No external dependencies
Prometheus Metrics (Optional):
- Enabled via WithMetrics() option
- Exports to Prometheus for time-series monitoring
- Includes component labels for instance identification
- Standard metric types (Counter, Gauge)
Design Decision: Dual Tracking Pattern ¶
Both Statistics and Metrics track operations independently, which appears redundant but serves distinct operational purposes:
Why Track Twice?
1. Independence: Statistics work without Prometheus dependency
- Always available for debugging, even in minimal deployments
- No external infrastructure required for basic observability
- Critical for tests and local development
2. Computed Metrics: Statistics provide derived values not available in raw Prometheus
- Hit ratio (hits / total requests)
- Requests per second with built-in timing
- Miss ratio (misses / total requests)
- Average item lifetime (for TTL caches)
3. Different Use Cases:
- Statistics: Programmatic access, debugging, tests, runtime inspection
- Metrics: Time-series analysis, Grafana dashboards, alerting, production monitoring
4. Performance Trade-off:
- Overhead: ~50-100ns per operation for dual tracking
- At 1M ops/sec this amounts to 50-100ms of CPU time per second
- Cost is negligible compared to observability value
Alternative Considered: Metrics-Based Statistics
We considered reading Statistics from Prometheus metrics to avoid duplication:
func (s *Statistics) Hits() int64 {
	m := &dto.Metric{}
	s.metrics.hits.Write(m)
	return int64(m.Counter.GetValue())
}
Rejected because:
- Creates Prometheus dependency for basic stats
- Reading from Prometheus is significantly slower (~10x) than atomic operations
- Breaks Statistics when metrics are disabled
- Violates separation of concerns (stats vs monitoring)
- Makes testing more complex (requires mock metrics)
Performance Impact ¶
Dual tracking overhead per operation:
- 1x atomic increment (Statistics)
- 1x atomic increment (Prometheus counter) if enabled
- 1x gauge set (Prometheus) if enabled
Benchmarks (M1 MacBook Pro):
- LRU Get with stats only: ~226ns/op
- LRU Get with stats + metrics: ~238ns/op (~5% overhead)
- LRU Set with stats only: ~361ns/op
- LRU Set with stats + metrics: ~379ns/op (~5% overhead)
At high throughput (1M ops/sec), dual tracking adds ~50-100ms/sec of overhead, which is acceptable for the operational visibility gained.
Functional Options Pattern ¶
The package uses functional options for clean, composable configuration:
cache, err := cache.NewLRU[V](capacity,
	cache.WithMetrics[V](registry, "component"),
	cache.WithEvictionCallback[V](callback),
)
Available options:
- WithMetrics: Enable Prometheus metrics export
- WithEvictionCallback: Get notified when items are evicted
- WithStatsInterval: Set stats aggregation interval (TTL/Hybrid only)
This pattern provides:
- Clear intent with named functions
- Easy composition of features
- Backward compatibility when adding options
- Type-safe configuration with generics
Thread Safety ¶
All cache operations are thread-safe for concurrent use:
- Multiple goroutines can read concurrently (RWMutex for reads)
- Writes are serialized with mutex protection
- Statistics use atomic operations (lock-free)
- Metrics use Prometheus atomic types
- TTL cleanup runs in background goroutine
- Eviction callbacks are called outside locks to prevent deadlocks
Performance Characteristics ¶
Simple Cache:
- Get: O(1) map lookup
- Set: O(1) map insert
- Delete: O(1) map delete
- Memory: O(n) where n is number of items
LRU Cache:
- Get: O(1) map lookup + list move
- Set: O(1) map insert + list append/evict
- Delete: O(1) map delete + list remove
- Memory: O(n) map + list overhead
TTL Cache:
- Get: O(1) map lookup + expiry check
- Set: O(1) map insert
- Delete: O(1) map delete
- Cleanup: O(n) periodic scan (background)
- Memory: O(n) map + expiry tracking
Hybrid Cache:
- Get: O(1) map lookup + list move + expiry check
- Set: O(1) map insert + list append/evict
- Delete: O(1) map delete + list remove
- Cleanup: O(n) periodic scan (background)
- Memory: O(n) map + list + expiry tracking
Generic Type Support ¶
Caches are fully generic and work with any Go type:
stringCache, _ := cache.NewSimple[string]()
intCache, _ := cache.NewLRU[int](100)
structCache, _ := cache.NewTTL[*User](ctx, 5*time.Minute, 1*time.Minute)
sliceCache, _ := cache.NewHybrid[[]byte](ctx, 1000, 10*time.Minute, 1*time.Minute)
Type constraints:
- Keys are always strings (for consistent hashing and comparison)
- Values can be any type V
- No serialization required - stores values directly in memory
Common Use Cases ¶
API Response Caching:
cache, _ := cache.NewHybrid[*Response](ctx, 5000, 30*time.Minute, 5*time.Minute,
	cache.WithMetrics[*Response](registry, "api_cache"),
)
Session Storage:
cache, _ := cache.NewTTL[*Session](ctx, 2*time.Hour, 10*time.Minute,
cache.WithEvictionCallback[*Session](func(key string, session *Session) {
session.PersistToDB() // Save to persistent storage on eviction
}),
)
Entity Caching (Two-Level):
l1Cache, _ := cache.NewLRU[*Entity](1000)                            // Hot entities
l2Cache, _ := cache.NewTTL[*Entity](ctx, 1*time.Hour, 5*time.Minute) // All recent entities
Computed Results:
cache, _ := cache.NewLRU[*Result](500,
	cache.WithMetrics[*Result](registry, "computation_cache"),
)
Context and Cleanup ¶
TTL and Hybrid caches run background cleanup goroutines. Always pass a context that will be canceled when cleanup should stop:
ctx, cancel := context.WithCancel(context.Background())
defer cancel()

cache, _ := cache.NewTTL[V](ctx, ttl, cleanupInterval)
// Cleanup goroutine stops when ctx is canceled
For Simple and LRU caches, no background goroutines are created.
Testing ¶
The package includes comprehensive tests with race detection:
go test -race ./pkg/cache
Benchmarks are available to validate performance:
go test -bench=. ./pkg/cache
Statistics make testing cache behavior easy:
cache, _ := cache.NewSimple[int]()
cache.Set("key", 42)
_, _ = cache.Get("key")
_, _ = cache.Get("missing")
assert.Equal(t, int64(1), cache.Stats().Hits())
assert.Equal(t, int64(1), cache.Stats().Misses())
assert.Equal(t, 0.5, cache.Stats().HitRatio())
Examples ¶
See cache_test.go and examples_test.go for runnable examples that appear in godoc.
Index ¶
- Constants
- func WithStats(ctx context.Context, stats *Statistics) context.Context
- type Cache
- func NewFromConfig[V any](ctx context.Context, config Config, options ...Option[V]) (Cache[V], error)
- func NewLRU[V any](maxSize int, options ...Option[V]) (Cache[V], error)
- func NewNoop[V any]() Cache[V]
- func NewSimple[V any](options ...Option[V]) (Cache[V], error)
- func NewTTL[V any](ctx context.Context, ttl, cleanupInterval time.Duration, options ...Option[V]) (Cache[V], error)
- type CoalescingSet
- type Config
- type Entry
- type EvictCallback
- type Option
- type Statistics
- func (s *Statistics) CurrentSize() int64
- func (s *Statistics) Delete()
- func (s *Statistics) Deletes() int64
- func (s *Statistics) Eviction()
- func (s *Statistics) Evictions() int64
- func (s *Statistics) Hit()
- func (s *Statistics) HitRatio() float64
- func (s *Statistics) Hits() int64
- func (s *Statistics) MaxSize() int64
- func (s *Statistics) MemoryUsage() int64
- func (s *Statistics) Miss()
- func (s *Statistics) MissRatio() float64
- func (s *Statistics) Misses() int64
- func (s *Statistics) RequestsPerSecond() float64
- func (s *Statistics) Reset()
- func (s *Statistics) Set()
- func (s *Statistics) Sets() int64
- func (s *Statistics) Summary() StatsSummary
- func (s *Statistics) UpdateMemoryUsage(usage int64)
- func (s *Statistics) UpdateSize(size int64)
- func (s *Statistics) Uptime() time.Duration
- type StatsSummary
- type Strategy
Constants ¶
const (
// ContextKeyStats can be used to pass statistics through context.
ContextKeyStats contextKey = "cache-stats"
)
Variables ¶
This section is empty.
Functions ¶
Types ¶
type Cache ¶
type Cache[V any] interface {
	// Get retrieves a value by key. Returns the value and true if found, zero value and false otherwise.
	Get(key string) (V, bool)

	// Set stores a value with the given key. Returns true if a new entry was created, false if updated.
	// Returns an error if the operation fails (e.g., invalid key).
	Set(key string, value V) (bool, error)

	// Delete removes an entry by key. Returns true if the key existed and was deleted.
	// Returns an error if the operation fails.
	Delete(key string) (bool, error)

	// Clear removes all entries from the cache.
	// Returns an error if the operation fails.
	Clear() error

	// Size returns the current number of entries in the cache.
	Size() int

	// Keys returns a slice of all keys currently in the cache.
	Keys() []string

	// Stats returns cache statistics if enabled, nil otherwise.
	Stats() *Statistics

	// Close shuts down the cache and releases any resources (e.g., background goroutines).
	Close() error
}
Cache represents a generic cache interface that all cache implementations must satisfy. The cache is parameterized by value type V for type safety.
func NewFromConfig ¶
func NewFromConfig[V any](ctx context.Context, config Config, options ...Option[V]) (Cache[V], error)
NewFromConfig creates a cache based on the provided configuration. Returns a disabled cache (NoopCache) if config.Enabled is false. Additional functional options can be passed to configure metrics, callbacks, etc.
func NewLRU ¶
func NewLRU[V any](maxSize int, options ...Option[V]) (Cache[V], error)
NewLRU creates a new LRU cache with the specified maximum size. Stats are always enabled for observability. Use WithMetrics() to also export as Prometheus metrics.
func NewNoop ¶
func NewNoop[V any]() Cache[V]
NewNoop creates a cache that does nothing (always returns cache misses). This is useful when caching is disabled via configuration.
func NewSimple ¶
func NewSimple[V any](options ...Option[V]) (Cache[V], error)
NewSimple creates a new Simple cache with no eviction policy. Stats are always enabled for observability. Use WithMetrics() to also export as Prometheus metrics.
func NewTTL ¶
func NewTTL[V any](ctx context.Context, ttl, cleanupInterval time.Duration, options ...Option[V]) (Cache[V], error)
NewTTL creates a new TTL cache with the specified TTL and cleanup interval. Stats are always enabled for observability. Use WithMetrics() to also export as Prometheus metrics.
type CoalescingSet ¶
type CoalescingSet struct {
// contains filtered or unexported fields
}
CoalescingSet collects keys over a time window and fires a callback with the batch. It deduplicates keys automatically using a map-based set structure. Follows the ticker-based background goroutine pattern from ttl.go.
func NewCoalescingSet ¶
func NewCoalescingSet(ctx context.Context, window time.Duration, callback func([]string)) *CoalescingSet
NewCoalescingSet creates a new CoalescingSet that fires the callback every window duration with the collected (deduplicated) keys. The background goroutine stops when ctx is canceled or when Close() is called.
func (*CoalescingSet) Add ¶
func (c *CoalescingSet) Add(key string)
Add adds a key to the pending set. If the key already exists, it is deduplicated. Thread-safe.
func (*CoalescingSet) Close ¶
func (c *CoalescingSet) Close() error
Close stops the background ticker and waits for cleanup to complete. It is idempotent - multiple calls are safe.
func (*CoalescingSet) PendingCount ¶
func (c *CoalescingSet) PendingCount() int
PendingCount returns the number of keys currently pending in the set. Thread-safe.
func (*CoalescingSet) Remove ¶
func (c *CoalescingSet) Remove(key string)
Remove removes a key from the pending set if it exists. This is useful when an entity is deleted before the window expires. Thread-safe.
type Config ¶
type Config struct {
// Enabled determines if caching is enabled.
Enabled bool `json:"enabled" schema:"editable,type:bool,description:Enable caching"`
// Strategy determines the eviction strategy.
Strategy Strategy `json:"strategy" schema:"editable,type:enum,description:Cache eviction strategy,enum:simple|lru|ttl|hybrid"`
// MaxSize is the maximum number of entries (for LRU and Hybrid caches).
MaxSize int `json:"max_size" schema:"editable,type:int,description:Maximum number of cache entries (for LRU and Hybrid),min:1"`
// TTL is the time-to-live for entries (for TTL and Hybrid caches).
TTL time.Duration `json:"ttl" schema:"editable,type:string,description:Time-to-live for entries (for TTL and Hybrid)"`
// CleanupInterval is how often to run background cleanup (for TTL and Hybrid caches).
CleanupInterval time.Duration `json:"cleanup_interval" schema:"editable,type:string,description:How often to run background cleanup (for TTL and Hybrid)"`
// StatsInterval is how often to update aggregate statistics.
StatsInterval time.Duration `json:"stats_interval" schema:"editable,type:string,description:How often to update aggregate statistics"`
}
Config contains configuration for cache creation.
func DefaultConfig ¶
func DefaultConfig() Config
DefaultConfig returns a default cache configuration.
func (*Config) UnmarshalJSON ¶
func (c *Config) UnmarshalJSON(data []byte) error
UnmarshalJSON implements custom JSON unmarshaling for Config to support duration strings (e.g., "1h", "5m", "30s") in addition to nanosecond integers.
type Entry ¶
type Entry[V any] struct {
	Key        string
	Value      V          // Stored value
	CreatedAt  time.Time
	ExpiresAt  *time.Time // nil means no expiration
	AccessedAt time.Time
}
Entry represents an entry in the cache with metadata.
type EvictCallback ¶
type EvictCallback[V any] func(key string, value V)
EvictCallback is called when an entry is evicted from the cache. It receives the key and value of the evicted entry.
type Option ¶
type Option[V any] func(*cacheOptions[V])
Option configures cache behavior using the functional options pattern. This provides a clean, extensible API for configuring caches.
func WithEvictionCallback ¶
func WithEvictionCallback[V any](callback EvictCallback[V]) Option[V]
WithEvictionCallback sets a callback function that is called when items are evicted. The callback receives the key and value of the evicted entry.
func WithMetrics ¶
func WithMetrics[V any](registry *metric.MetricsRegistry, prefix string) Option[V]
WithMetrics enables Prometheus metrics export for cache statistics. If registry is nil, the option is ignored; this handles the edge case gracefully, though a nil registry is not expected in normal usage.
type Statistics ¶
type Statistics struct {
// contains filtered or unexported fields
}
Statistics tracks cache performance metrics.
func NewStatistics ¶
func NewStatistics() *Statistics
NewStatistics creates a new statistics tracker.
func StatsFromContext ¶
func StatsFromContext(ctx context.Context) (*Statistics, bool)
StatsFromContext retrieves statistics from the context.
func (*Statistics) CurrentSize ¶
func (s *Statistics) CurrentSize() int64
CurrentSize returns the current number of entries in the cache.
func (*Statistics) Deletes ¶
func (s *Statistics) Deletes() int64
Deletes returns the total number of delete operations.
func (*Statistics) Evictions ¶
func (s *Statistics) Evictions() int64
Evictions returns the total number of evictions.
func (*Statistics) HitRatio ¶
func (s *Statistics) HitRatio() float64
HitRatio returns the cache hit ratio as a fraction between 0.0 and 1.0.
func (*Statistics) Hits ¶
func (s *Statistics) Hits() int64
Hits returns the total number of cache hits.
func (*Statistics) MaxSize ¶
func (s *Statistics) MaxSize() int64
MaxSize returns the maximum number of entries the cache has held.
func (*Statistics) MemoryUsage ¶
func (s *Statistics) MemoryUsage() int64
MemoryUsage returns the estimated memory usage in bytes.
func (*Statistics) MissRatio ¶
func (s *Statistics) MissRatio() float64
MissRatio returns the cache miss ratio as a fraction between 0.0 and 1.0.
func (*Statistics) Misses ¶
func (s *Statistics) Misses() int64
Misses returns the total number of cache misses.
func (*Statistics) RequestsPerSecond ¶
func (s *Statistics) RequestsPerSecond() float64
RequestsPerSecond returns the average number of requests (hits + misses) per second.
func (*Statistics) Sets ¶
func (s *Statistics) Sets() int64
Sets returns the total number of set operations.
func (*Statistics) Summary ¶
func (s *Statistics) Summary() StatsSummary
Summary returns a snapshot of all statistics.
func (*Statistics) UpdateMemoryUsage ¶
func (s *Statistics) UpdateMemoryUsage(usage int64)
UpdateMemoryUsage updates the estimated memory usage.
func (*Statistics) UpdateSize ¶
func (s *Statistics) UpdateSize(size int64)
UpdateSize updates the current cache size.
func (*Statistics) Uptime ¶
func (s *Statistics) Uptime() time.Duration
Uptime returns how long the cache has been running.
type StatsSummary ¶
type StatsSummary struct {
Hits int64 `json:"hits"`
Misses int64 `json:"misses"`
Sets int64 `json:"sets"`
Deletes int64 `json:"deletes"`
Evictions int64 `json:"evictions"`
CurrentSize int64 `json:"current_size"`
MaxSize int64 `json:"max_size"`
MemoryUsage int64 `json:"memory_usage"`
HitRatio float64 `json:"hit_ratio"`
MissRatio float64 `json:"miss_ratio"`
RequestsPerSecond float64 `json:"requests_per_second"`
Uptime time.Duration `json:"uptime"`
}
StatsSummary is a point-in-time snapshot of all statistics.
type Strategy ¶
type Strategy string
Strategy defines the eviction strategy for the cache.
const (
	// StrategySimple uses no eviction policy.
	StrategySimple Strategy = "simple"
	// StrategyLRU uses Least Recently Used eviction based on size.
	StrategyLRU Strategy = "lru"
	// StrategyTTL uses Time-To-Live eviction based on expiry.
	StrategyTTL Strategy = "ttl"
	// StrategyHybrid uses combined LRU and TTL eviction.
	StrategyHybrid Strategy = "hybrid"
)