Published: Mar 2, 2026 License: MIT Imports: 10 Imported by: 0

README

Cache Package

A high-performance, thread-safe caching library for Go with multiple eviction policies, built-in statistics, and optional Prometheus metrics integration.

Features

  • 🚀 High Performance: Optimized for concurrent access with minimal lock contention
  • 📊 Always Observable: Statistics always enabled (never operate in the dark)
  • 📈 Prometheus Ready: Optional metrics integration for production monitoring
  • 🔧 Flexible Eviction: LRU, TTL, Hybrid, or no eviction policies
  • 🎯 Type-Safe: Full generic support for any value type
  • 🧩 Functional Options: Clean, composable configuration API

Installation

import "github.com/c360/semstreams/pkg/cache"

Quick Start

Basic Usage
// Simple cache with default settings (stats always enabled)
c, err := cache.NewSimple[string]()

// LRU cache with max 1000 items
c, err := cache.NewLRU[*MyStruct](1000)

// TTL cache with 5-minute expiry and 1-minute cleanup interval
c, err := cache.NewTTL[int](ctx, 5*time.Minute, 1*time.Minute)

// Hybrid cache combining LRU and TTL
c, err := cache.NewHybrid[string](ctx, 1000, 5*time.Minute, 1*time.Minute)
With Prometheus Metrics
import "github.com/c360/semstreams/pkg/metric"

// Create metrics registry
registry := metric.NewMetricsRegistry()

// Create cache with metrics export
c, err := cache.NewLRU[*Entity](1000,
    cache.WithMetrics[*Entity](registry, "my_component"),
)

// Metrics automatically exported:
// - semstreams_cache_hits_total{component="my_component"}
// - semstreams_cache_misses_total{component="my_component"}
// - semstreams_cache_size{component="my_component"}
// - etc.
With Multiple Options
// Compose multiple functional options
c, err := cache.NewTTL[*Document](ctx, 10*time.Minute, 1*time.Minute,
    cache.WithMetrics[*Document](registry, "document_cache"),
    cache.WithEvictionCallback[*Document](func(key string, value *Document) {
        log.Printf("Evicted document: %s", key)
    }),
    cache.WithStatsInterval[*Document](30*time.Second),
)

Cache Types

Simple Cache

No eviction policy - items remain until explicitly deleted.

c, err := cache.NewSimple[V]()
LRU Cache

Evicts least recently used items when capacity is reached.

c, err := cache.NewLRU[V](maxSize)
TTL Cache

Evicts items after a time-to-live period expires.

c, err := cache.NewTTL[V](ctx, ttl, cleanupInterval)
Hybrid Cache

Combines LRU and TTL - evicts items that are either expired or least recently used.

c, err := cache.NewHybrid[V](ctx, maxSize, ttl, cleanupInterval)

Functional Options

The cache package uses functional options for clean, composable configuration:

WithMetrics

Enable Prometheus metrics export:

cache.WithMetrics[V](registry, "component_name")
WithEvictionCallback

Set a callback for when items are evicted:

cache.WithEvictionCallback[V](func(key string, value V) {
    // Handle evicted item
})
WithStatsInterval

Set statistics aggregation interval (TTL/Hybrid caches only):

cache.WithStatsInterval[V](30*time.Second)

API Reference

Cache Interface
type Cache[V any] interface {
    Get(key string) (V, bool)               // Retrieve value by key
    Set(key string, value V) (bool, error)  // Store key-value pair; true if a new entry was created
    Delete(key string) (bool, error)        // Remove entry by key; true if the key existed
    Clear() error                           // Remove all entries
    Size() int                              // Current number of entries
    Keys() []string                         // All keys currently in cache
    Stats() *Statistics                     // Cache statistics (never nil)
    Close() error                           // Release resources (e.g., background goroutines)
}
Statistics

Statistics are always collected (not optional) for observability:

stats := cache.Stats()

// Available metrics:
stats.Hits()              // Total cache hits
stats.Misses()            // Total cache misses  
stats.HitRatio()          // Hit rate (0.0 to 1.0)
stats.RequestsPerSecond() // Throughput
stats.CurrentSize()       // Current entries
stats.Evictions()         // Total evictions

Prometheus Metrics

When enabled via WithMetrics(), the following metrics are exported:

Metric                              Type     Description
semstreams_cache_hits_total         Counter  Total cache hits
semstreams_cache_misses_total       Counter  Total cache misses
semstreams_cache_sets_total         Counter  Total set operations
semstreams_cache_deletes_total      Counter  Total delete operations
semstreams_cache_evictions_total    Counter  Total evictions
semstreams_cache_size               Gauge    Current number of entries

All metrics include a component label for identifying different cache instances.

Configuration-Driven Cache Creation

The NewFromConfig() function enables cache creation from configuration files (YAML/JSON):

// Load from config file
config := cache.Config{
    Enabled:         true,
    Strategy:        cache.StrategyLRU,
    MaxSize:         1000,
    TTL:             5 * time.Minute,
    CleanupInterval: 1 * time.Minute,
    StatsInterval:   30 * time.Second,
}

// Create cache from config with optional functional options
c, err := cache.NewFromConfig[V](ctx, config,
    cache.WithMetrics[V](registry, "component_name"),
    cache.WithEvictionCallback[V](func(key string, value V) {
        log.Printf("Evicted: %s", key)
    }),
)

This pattern is useful for:

  • Runtime cache strategy selection
  • Deployment-specific tuning without code changes
  • Configuration files (YAML/JSON) that specify cache behavior

Performance

Benchmark results on MacBook Pro M3:

BenchmarkCacheGet/Simple-12         6,846,386    172.8 ns/op
BenchmarkCacheGet/LRU_1000-12       5,310,026    226.6 ns/op
BenchmarkCacheGet/TTL-12            5,605,714    213.5 ns/op
BenchmarkCacheGet/Hybrid_1000-12    4,665,702    257.2 ns/op

BenchmarkCacheSet/Simple-12         4,666,502    256.8 ns/op
BenchmarkCacheSet/LRU_1000-12       3,477,819    361.4 ns/op
BenchmarkCacheSet/TTL-12            3,702,312    324.1 ns/op
BenchmarkCacheSet/Hybrid_1000-12    3,161,434    379.3 ns/op
Performance Tips
  1. Metrics Overhead: ~5% when enabled, zero when disabled
  2. Stats Overhead: Negligible (atomic operations)
  3. Lock Contention: Use multiple cache instances for high-concurrency scenarios
  4. Memory: Consider item size when setting max capacity

Architecture

Observability: Dual Tracking Pattern

The cache package tracks operations through two independent systems:

flowchart LR
    A[Cache Operation] --> B[Statistics]
    A --> C[Metrics]

    B --> D[Atomic Counters]
    B --> E[Computed Values]

    C --> F[Prometheus Counters]
    C --> G[Prometheus Gauges]

    D --> H[cache.Stats API]
    E --> H

    F --> I[/metrics endpoint]
    G --> I

    style A fill:#e1f5ff
    style B fill:#d4edda
    style C fill:#fff3cd
    style H fill:#d4edda
    style I fill:#fff3cd

Why Track Twice?

Both Statistics and Metrics independently track operations, which appears redundant but serves distinct purposes:

Aspect           Statistics (Always On)                 Metrics (Optional)
Purpose          Local debugging & programmatic access  Time-series monitoring & dashboards
Dependency       None (atomic operations)               Prometheus registry
Computed Values  Hit ratio, requests/sec                Raw counters/gauges only
Access           cache.Stats() API                      /metrics HTTP endpoint
Overhead         ~50ns/op                               ~50ns/op (when enabled)
Use Case         Tests, debugging, runtime inspection   Production dashboards, alerting

Performance Trade-off:

  • Dual tracking overhead: ~5% per operation when metrics enabled
  • At 1M ops/sec: 0.5-1% total overhead
  • Negligible cost for comprehensive observability

Alternative Considered: Reading Statistics from Prometheus metrics to avoid duplication.

Rejected because:

  • Creates Prometheus dependency for basic stats
  • 10x slower (reading from Prometheus vs atomic operations)
  • Breaks Statistics when metrics disabled
  • Violates separation of concerns
Cache Architecture by Type
flowchart TB
    subgraph Simple["Simple Cache (No Eviction)"]
        S1[Map: key → value]
    end

    subgraph LRU["LRU Cache (Capacity-Based)"]
        L1[Map: key → list element]
        L2[Doubly-Linked List]
        L1 --> L2
        L2 -.->|"Move to front on access"| L2
        L2 -.->|"Evict tail when full"| L2
    end

    subgraph TTL["TTL Cache (Time-Based)"]
        T1[Map: key → entry]
        T2[Expiry: key → timestamp]
        T3[Cleanup Goroutine]
        T1 --> T2
        T3 -.->|"Periodic scan"| T2
    end

    subgraph Hybrid["Hybrid Cache (Capacity + Time)"]
        H1[Map: key → list element]
        H2[Doubly-Linked List]
        H3[Expiry: key → timestamp]
        H4[Cleanup Goroutine]
        H1 --> H2
        H1 --> H3
        H4 -.->|"Periodic scan"| H3
        H2 -.->|"LRU eviction"| H2
    end

    style Simple fill:#e1f5ff
    style LRU fill:#d4edda
    style TTL fill:#fff3cd
    style Hybrid fill:#f8d7da
Eviction Policy Flow
stateDiagram-v2
    [*] --> CheckCapacity: Set(key, value)

    CheckCapacity --> Simple: Simple Cache
    CheckCapacity --> CheckLRU: LRU/Hybrid
    CheckCapacity --> CheckTTL: TTL/Hybrid

    Simple --> Store: Always store

    CheckLRU --> EvictLRU: At capacity
    CheckLRU --> Store: Below capacity
    EvictLRU --> Store: Remove LRU item

    CheckTTL --> Store: Not expired
    CheckTTL --> Replace: Expired
    Replace --> Store: Remove expired item

    Store --> [*]

    note right of EvictLRU
        Remove least recently
        used item from tail
    end note

    note right of CheckTTL
        Background cleanup
        removes expired items
    end note
Architecture Decisions
Why Stats Are Always On

Statistics collection is mandatory because:

  • Observability is critical for production systems
  • Negligible overhead (atomic operations ~50ns)
  • Debugging without stats is nearly impossible
  • Hit ratios inform capacity planning and eviction policy tuning
  • No external dependencies required for basic monitoring
Why Functional Options

We chose functional options over struct-based configuration because:

  • More idiomatic Go pattern
  • Composable and extensible - easy to add features
  • Clear intent with named functions
  • No zero-value confusion in configuration
  • Backward compatible when adding new options
Why Multiple Cache Types

Different eviction strategies serve different use cases:

  • Simple: Explicit control, no automatic eviction
  • LRU: Access pattern optimization, fixed capacity
  • TTL: Time-sensitive data, automatic expiration
  • Hybrid: Production-grade caching with both limits

Examples

Production Cache with Full Monitoring
func setupProductionCache(ctx context.Context, registry *metric.MetricsRegistry) (cache.Cache[*User], error) {
    return cache.NewHybrid[*User](
        ctx,
        10000,                // Max 10k users
        30*time.Minute,       // 30 min TTL
        5*time.Minute,        // Cleanup every 5 min
        cache.WithMetrics[*User](registry, "user_cache"),
        cache.WithEvictionCallback[*User](func(key string, user *User) {
            log.Printf("Evicted user from cache: %s", user.ID)
        }),
    )
}
Request-Scoped Cache
func handleRequest(ctx context.Context, key string) *ComputedResult {
    // Create a request-scoped cache
    requestCache, _ := cache.NewLRU[*ComputedResult](100)
    defer requestCache.Clear()

    // Use cache during request processing
    if result, ok := requestCache.Get(key); ok {
        return result
    }

    // Compute and cache
    result := expensiveComputation()
    requestCache.Set(key, result)
    return result
}

Thread Safety

All cache operations are thread-safe. The implementation uses:

  • Fine-grained locking with sync.RWMutex
  • Atomic operations for statistics
  • Lock-free reads where possible

Contributing

When adding new cache implementations:

  1. Statistics must always be initialized
  2. Follow functional options pattern
  3. Support optional Prometheus metrics
  4. Maintain thread safety
  5. Include comprehensive tests with race detection

License

See LICENSE file in repository root.

Documentation

Overview

Package cache provides generic, thread-safe cache implementations with various eviction policies.

This package offers multiple cache types:

  • SimpleCache: No eviction policy (stores items indefinitely)
  • LRUCache: Least Recently Used eviction based on size
  • TTLCache: Time-To-Live eviction based on expiry
  • HybridCache: Combined LRU and TTL eviction

All cache implementations are thread-safe with built-in statistics (always enabled for observability) and optional Prometheus metrics integration via functional options.

Package cache provides high-performance, thread-safe caching implementations with multiple eviction policies, built-in statistics tracking, and optional Prometheus metrics integration.

Overview

The cache package offers four cache implementations with different eviction strategies:

  • Simple: No eviction (manual cleanup only)
  • LRU: Least Recently Used eviction
  • TTL: Time-To-Live expiration
  • Hybrid: Combines LRU and TTL policies

All implementations are generic, thread-safe, and provide comprehensive observability through always-on statistics and optional metrics.

Quick Start

Simple cache creation:

cache, _ := cache.NewSimple[string]()
cache.Set("key", "value")
value, ok := cache.Get("key")

LRU cache with capacity limit:

cache, err := cache.NewLRU[*User](1000)
if err != nil {
	log.Fatal(err)
}

TTL cache with expiration:

cache, err := cache.NewTTL[*Session](ctx, 30*time.Minute, 5*time.Minute)

Hybrid cache with both LRU and TTL:

cache, err := cache.NewHybrid[[]byte](ctx, 5000, 10*time.Minute, 1*time.Minute,
	cache.WithMetrics[[]byte](registry, "api_cache"),
	cache.WithEvictionCallback[[]byte](func(key string, value []byte) {
		log.Printf("Evicted: %s", key)
	}),
)

Cache Types and Eviction Policies

Simple Cache (No Eviction):

Items remain in cache until explicitly deleted or cache is cleared. Best for small, stable datasets where manual control is desired.

cache, _ := cache.NewSimple[V]()

LRU Cache (Capacity-Based):

Evicts least recently used items when maximum capacity is reached. Best for fixed-size caches where recent access patterns indicate importance.

cache, _ := cache.NewLRU[V](maxSize)

TTL Cache (Time-Based):

Items expire after a time-to-live period. Background cleanup goroutine removes expired items. Best for time-sensitive data like sessions or tokens.

cache, _ := cache.NewTTL[V](ctx, ttl, cleanupInterval)

Hybrid Cache (Capacity + Time):

Combines LRU and TTL - items are evicted if they're either least recently used OR expired. Best for production caches requiring both size and time limits.

cache, _ := cache.NewHybrid[V](ctx, maxSize, ttl, cleanupInterval)

Observability Architecture

The cache package implements a dual-tracking pattern for comprehensive observability:

Statistics (Always On):

  • Tracks all operations using atomic counters
  • Zero configuration required
  • Available via cache.Stats()
  • Provides computed metrics (hit ratio, requests/sec)
  • No external dependencies

Prometheus Metrics (Optional):

  • Enabled via WithMetrics() option
  • Exports to Prometheus for time-series monitoring
  • Includes component labels for instance identification
  • Standard metric types (Counter, Gauge)

Design Decision: Dual Tracking Pattern

Both Statistics and Metrics track operations independently, which appears redundant but serves distinct operational purposes:

Why Track Twice?

1. Independence: Statistics work without Prometheus dependency

  • Always available for debugging, even in minimal deployments
  • No external infrastructure required for basic observability
  • Critical for tests and local development

2. Computed Metrics: Statistics provide derived values not available in raw Prometheus

  • Hit ratio (hits / total requests)
  • Requests per second with built-in timing
  • Miss ratio (misses / total requests)
  • Average item lifetime (for TTL caches)

3. Different Use Cases:

  • Statistics: Programmatic access, debugging, tests, runtime inspection
  • Metrics: Time-series analysis, Grafana dashboards, alerting, production monitoring

4. Performance Trade-off:

  • Overhead: ~50-100ns per operation for dual tracking
  • At 1M ops/sec: ~0.5-1% total overhead
  • Cost is negligible compared to observability value

Alternative Considered: Metrics-Based Statistics

We considered reading Statistics from Prometheus metrics to avoid duplication:

func (s *Statistics) Hits() int64 {
	dto := &dto.Metric{}
	s.metrics.hits.Write(dto)
	return int64(dto.Counter.GetValue())
}

Rejected because:

  • Creates Prometheus dependency for basic stats
  • Reading from Prometheus is significantly slower (~10x) than atomic operations
  • Breaks Statistics when metrics are disabled
  • Violates separation of concerns (stats vs monitoring)
  • Makes testing more complex (requires mock metrics)

Performance Impact

Dual tracking overhead per operation:

  • 1x atomic increment (Statistics)
  • 1x atomic increment (Prometheus counter) if enabled
  • 1x gauge set (Prometheus) if enabled

Benchmarks (M1 MacBook Pro):

  • LRU Get with stats only: ~226ns/op
  • LRU Get with stats + metrics: ~238ns/op (~5% overhead)
  • LRU Set with stats only: ~361ns/op
  • LRU Set with stats + metrics: ~379ns/op (~5% overhead)

At high throughput (1M ops/sec), dual tracking adds ~50-100ms/sec of overhead, which is acceptable for the operational visibility gained.

Functional Options Pattern

The package uses functional options for clean, composable configuration:

cache, err := cache.NewLRU[V](capacity,
	cache.WithMetrics[V](registry, "component"),
	cache.WithEvictionCallback[V](callback),
)

Available options:

  • WithMetrics: Enable Prometheus metrics export
  • WithEvictionCallback: Get notified when items are evicted
  • WithStatsInterval: Set stats aggregation interval (TTL/Hybrid only)

This pattern provides:

  • Clear intent with named functions
  • Easy composition of features
  • Backward compatibility when adding options
  • Type-safe configuration with generics

Thread Safety

All cache operations are thread-safe for concurrent use:

  • Multiple goroutines can read concurrently (RWMutex for reads)
  • Writes are serialized with mutex protection
  • Statistics use atomic operations (lock-free)
  • Metrics use Prometheus atomic types
  • TTL cleanup runs in background goroutine
  • Eviction callbacks are called outside locks to prevent deadlocks

Performance Characteristics

Simple Cache:

  • Get: O(1) map lookup
  • Set: O(1) map insert
  • Delete: O(1) map delete
  • Memory: O(n) where n is number of items

LRU Cache:

  • Get: O(1) map lookup + list move
  • Set: O(1) map insert + list append/evict
  • Delete: O(1) map delete + list remove
  • Memory: O(n) map + list overhead

TTL Cache:

  • Get: O(1) map lookup + expiry check
  • Set: O(1) map insert
  • Delete: O(1) map delete
  • Cleanup: O(n) periodic scan (background)
  • Memory: O(n) map + expiry tracking

Hybrid Cache:

  • Get: O(1) map lookup + list move + expiry check
  • Set: O(1) map insert + list append/evict
  • Delete: O(1) map delete + list remove
  • Cleanup: O(n) periodic scan (background)
  • Memory: O(n) map + list + expiry tracking

Generic Type Support

Caches are fully generic and work with any Go type:

stringCache := cache.NewSimple[string]()
intCache := cache.NewLRU[int](100)
structCache := cache.NewTTL[*User](ctx, 5*time.Minute, 1*time.Minute)
sliceCache := cache.NewHybrid[[]byte](ctx, 1000, 10*time.Minute, 1*time.Minute)

Type constraints:

  • Keys are always strings (for consistent hashing and comparison)
  • Values can be any type V
  • No serialization required - stores values directly in memory

Common Use Cases

API Response Caching:

cache, _ := cache.NewHybrid[*Response](ctx, 5000, 30*time.Minute, 5*time.Minute,
	cache.WithMetrics[*Response](registry, "api_cache"),
)

Session Storage:

cache, _ := cache.NewTTL[*Session](ctx, 2*time.Hour, 10*time.Minute,
	cache.WithEvictionCallback[*Session](func(key string, session *Session) {
		session.PersistToDB() // Save to persistent storage on eviction
	}),
)

Entity Caching (Two-Level):

l1Cache, _ := cache.NewLRU[*Entity](1000) // Hot entities
l2Cache, _ := cache.NewTTL[*Entity](ctx, 1*time.Hour, 5*time.Minute) // All recent entities

Computed Results:

cache, _ := cache.NewLRU[*Result](500,
	cache.WithMetrics[*Result](registry, "computation_cache"),
)

Context and Cleanup

TTL and Hybrid caches run background cleanup goroutines. Always pass a context that will be canceled when cleanup should stop:

ctx, cancel := context.WithCancel(context.Background())
defer cancel()

cache, _ := cache.NewTTL[V](ctx, ttl, cleanupInterval)
// Cleanup goroutine stops when ctx is canceled

For Simple and LRU caches, no background goroutines are created.

Testing

The package includes comprehensive tests with race detection:

go test -race ./pkg/cache

Benchmarks are available to validate performance:

go test -bench=. ./pkg/cache

Statistics make testing cache behavior easy:

cache, _ := cache.NewSimple[int]()
cache.Set("key", 42)
_, _ = cache.Get("key")
_, _ = cache.Get("missing")

assert.Equal(t, int64(1), cache.Stats().Hits())
assert.Equal(t, int64(1), cache.Stats().Misses())
assert.Equal(t, 0.5, cache.Stats().HitRatio())

Examples

See cache_test.go and examples_test.go for runnable examples that appear in godoc.

Index

Constants

const (
	// ContextKeyStats can be used to pass statistics through context.
	ContextKeyStats contextKey = "cache-stats"
)

Variables

This section is empty.

Functions

func WithStats

func WithStats(ctx context.Context, stats *Statistics) context.Context

WithStats adds statistics to the context.

Types

type Cache

type Cache[V any] interface {
	// Get retrieves a value by key. Returns the value and true if found, zero value and false otherwise.
	Get(key string) (V, bool)

	// Set stores a value with the given key. Returns true if a new entry was created, false if updated.
	// Returns an error if the operation fails (e.g., invalid key).
	Set(key string, value V) (bool, error)

	// Delete removes an entry by key. Returns true if the key existed and was deleted.
	// Returns an error if the operation fails.
	Delete(key string) (bool, error)

	// Clear removes all entries from the cache.
	// Returns an error if the operation fails.
	Clear() error

	// Size returns the current number of entries in the cache.
	Size() int

	// Keys returns a slice of all keys currently in the cache.
	Keys() []string

	// Stats returns cache statistics if enabled, nil otherwise.
	Stats() *Statistics

	// Close shuts down the cache and releases any resources (e.g., background goroutines).
	Close() error
}

Cache represents a generic cache interface that all cache implementations must satisfy. The cache is parameterized by value type V for type safety.

func NewFromConfig

func NewFromConfig[V any](ctx context.Context, config Config, options ...Option[V]) (Cache[V], error)

NewFromConfig creates a cache based on the provided configuration. Returns a disabled cache (NoopCache) if config.Enabled is false. Additional functional options can be passed to configure metrics, callbacks, etc.

func NewLRU

func NewLRU[V any](maxSize int, options ...Option[V]) (Cache[V], error)

NewLRU creates a new LRU cache with the specified maximum size. Stats are always enabled for observability. Use WithMetrics() to also export as Prometheus metrics.

func NewNoop

func NewNoop[V any]() Cache[V]

NewNoop creates a cache that does nothing (always returns cache misses). This is useful when caching is disabled via configuration.

func NewSimple

func NewSimple[V any](options ...Option[V]) (Cache[V], error)

NewSimple creates a new Simple cache with no eviction policy. Stats are always enabled for observability. Use WithMetrics() to also export as Prometheus metrics.

func NewTTL

func NewTTL[V any](ctx context.Context, ttl, cleanupInterval time.Duration, options ...Option[V]) (Cache[V], error)

NewTTL creates a new TTL cache with the specified TTL and cleanup interval. Stats are always enabled for observability. Use WithMetrics() to also export as Prometheus metrics.

type CoalescingSet

type CoalescingSet struct {
	// contains filtered or unexported fields
}

CoalescingSet collects keys over a time window and fires a callback with the batch. It deduplicates keys automatically using a map-based set structure. Follows the ticker-based background goroutine pattern from ttl.go.

func NewCoalescingSet

func NewCoalescingSet(ctx context.Context, window time.Duration, callback func([]string)) *CoalescingSet

NewCoalescingSet creates a new CoalescingSet that fires the callback every window duration with the collected (deduplicated) keys. The background goroutine stops when ctx is cancelled or when Close() is called.

func (*CoalescingSet) Add

func (c *CoalescingSet) Add(key string)

Add adds a key to the pending set. If the key already exists, it is deduplicated. Thread-safe.

func (*CoalescingSet) Close

func (c *CoalescingSet) Close() error

Close stops the background ticker and waits for cleanup to complete. It is idempotent - multiple calls are safe.

func (*CoalescingSet) PendingCount

func (c *CoalescingSet) PendingCount() int

PendingCount returns the number of keys currently pending in the set. Thread-safe.

func (*CoalescingSet) Remove

func (c *CoalescingSet) Remove(key string)

Remove removes a key from the pending set if it exists. This is useful when an entity is deleted before the window expires. Thread-safe.

type Config

type Config struct {
	// Enabled determines if caching is enabled.
	Enabled bool `json:"enabled" schema:"editable,type:bool,description:Enable caching"`

	// Strategy determines the eviction strategy.
	Strategy Strategy `json:"strategy" schema:"editable,type:enum,description:Cache eviction strategy,enum:simple|lru|ttl|hybrid"`

	// MaxSize is the maximum number of entries (for LRU and Hybrid caches).
	MaxSize int `json:"max_size" schema:"editable,type:int,description:Maximum number of cache entries (for LRU and Hybrid),min:1"`

	// TTL is the time-to-live for entries (for TTL and Hybrid caches).
	TTL time.Duration `json:"ttl" schema:"editable,type:string,description:Time-to-live for entries (for TTL and Hybrid)"`

	// CleanupInterval is how often to run background cleanup (for TTL and Hybrid caches).
	CleanupInterval time.Duration `json:"cleanup_interval" schema:"editable,type:string,description:How often to run background cleanup (for TTL and Hybrid)"`

	// StatsInterval is how often to update aggregate statistics.
	StatsInterval time.Duration `json:"stats_interval" schema:"editable,type:string,description:How often to update aggregate statistics"`
}

Config contains configuration for cache creation.

func DefaultConfig

func DefaultConfig() Config

DefaultConfig returns a default cache configuration.

func (*Config) UnmarshalJSON

func (c *Config) UnmarshalJSON(data []byte) error

UnmarshalJSON implements custom JSON unmarshaling for Config to support duration strings (e.g., "1h", "5m", "30s") in addition to nanosecond integers.

func (Config) Validate

func (c Config) Validate() error

Validate checks if the configuration is valid.

type Entry

type Entry[V any] struct {
	Key        string
	Value      V // Stored value
	CreatedAt  time.Time
	ExpiresAt  *time.Time // nil means no expiration
	AccessedAt time.Time
}

Entry represents an entry in the cache with metadata.

func (*Entry[V]) IsExpired

func (e *Entry[V]) IsExpired() bool

IsExpired checks if the entry has expired based on the current time.

func (*Entry[V]) Touch

func (e *Entry[V]) Touch()

Touch updates the last accessed time of the entry.

type EvictCallback

type EvictCallback[V any] func(key string, value V)

EvictCallback is called when an entry is evicted from the cache. It receives the key and value of the evicted entry.

type Option

type Option[V any] func(*cacheOptions[V])

Option configures cache behavior using the functional options pattern. This provides a clean, extensible API for configuring caches.

func WithEvictionCallback

func WithEvictionCallback[V any](callback EvictCallback[V]) Option[V]

WithEvictionCallback sets a callback function that is called when items are evicted. The callback receives the key and value of the evicted entry.

func WithMetrics

func WithMetrics[V any](registry *metric.MetricsRegistry, prefix string) Option[V]

WithMetrics enables Prometheus metrics export for cache statistics. If registry is nil, this option is ignored. Registry should not be nil in normal usage - this handles edge cases gracefully.

func WithStatsInterval

func WithStatsInterval[V any](interval time.Duration) Option[V]

WithStatsInterval sets how often aggregate statistics are updated. This is only relevant for TTL and Hybrid caches with background cleanup. If interval is <= 0, this option is ignored.

type Statistics

type Statistics struct {
	// contains filtered or unexported fields
}

Statistics tracks cache performance metrics.

func NewStatistics

func NewStatistics() *Statistics

NewStatistics creates a new statistics tracker.

func StatsFromContext

func StatsFromContext(ctx context.Context) (*Statistics, bool)

StatsFromContext retrieves statistics from the context.

func (*Statistics) CurrentSize

func (s *Statistics) CurrentSize() int64

CurrentSize returns the current number of entries in the cache.

func (*Statistics) Delete

func (s *Statistics) Delete()

Delete records a cache delete operation.

func (*Statistics) Deletes

func (s *Statistics) Deletes() int64

Deletes returns the total number of delete operations.

func (*Statistics) Eviction

func (s *Statistics) Eviction()

Eviction records a cache eviction.

func (*Statistics) Evictions

func (s *Statistics) Evictions() int64

Evictions returns the total number of evictions.

func (*Statistics) Hit

func (s *Statistics) Hit()

Hit records a cache hit.

func (*Statistics) HitRatio

func (s *Statistics) HitRatio() float64

HitRatio returns the cache hit ratio (0.0 to 1.0).

func (*Statistics) Hits

func (s *Statistics) Hits() int64

Hits returns the total number of cache hits.

func (*Statistics) MaxSize

func (s *Statistics) MaxSize() int64

MaxSize returns the maximum number of entries the cache has held.

func (*Statistics) MemoryUsage

func (s *Statistics) MemoryUsage() int64

MemoryUsage returns the estimated memory usage in bytes.

func (*Statistics) Miss

func (s *Statistics) Miss()

Miss records a cache miss.

func (*Statistics) MissRatio

func (s *Statistics) MissRatio() float64

MissRatio returns the cache miss ratio (0.0 to 1.0).

func (*Statistics) Misses

func (s *Statistics) Misses() int64

Misses returns the total number of cache misses.

func (*Statistics) RequestsPerSecond

func (s *Statistics) RequestsPerSecond() float64

RequestsPerSecond returns the average number of requests (hits + misses) per second.

func (*Statistics) Reset

func (s *Statistics) Reset()

Reset resets all statistics to zero.

func (*Statistics) Set

func (s *Statistics) Set()

Set records a cache set operation.

func (*Statistics) Sets

func (s *Statistics) Sets() int64

Sets returns the total number of set operations.

func (*Statistics) Summary

func (s *Statistics) Summary() StatsSummary

Summary returns a snapshot of all statistics.

func (*Statistics) UpdateMemoryUsage

func (s *Statistics) UpdateMemoryUsage(usage int64)

UpdateMemoryUsage updates the estimated memory usage.

func (*Statistics) UpdateSize

func (s *Statistics) UpdateSize(size int64)

UpdateSize updates the current cache size.

func (*Statistics) Uptime

func (s *Statistics) Uptime() time.Duration

Uptime returns how long the cache has been running.

type StatsSummary

type StatsSummary struct {
	Hits              int64         `json:"hits"`
	Misses            int64         `json:"misses"`
	Sets              int64         `json:"sets"`
	Deletes           int64         `json:"deletes"`
	Evictions         int64         `json:"evictions"`
	CurrentSize       int64         `json:"current_size"`
	MaxSize           int64         `json:"max_size"`
	MemoryUsage       int64         `json:"memory_usage"`
	HitRatio          float64       `json:"hit_ratio"`
	MissRatio         float64       `json:"miss_ratio"`
	RequestsPerSecond float64       `json:"requests_per_second"`
	Uptime            time.Duration `json:"uptime"`
}

StatsSummary is a point-in-time snapshot of all statistics.

type Strategy

type Strategy string

Strategy defines the eviction strategy for the cache.

const (
	// StrategySimple uses no eviction policy.
	StrategySimple Strategy = "simple"

	// StrategyLRU uses Least Recently Used eviction based on size.
	StrategyLRU Strategy = "lru"

	// StrategyTTL uses Time-To-Live eviction based on expiry.
	StrategyTTL Strategy = "ttl"

	// StrategyHybrid uses combined LRU and TTL eviction.
	StrategyHybrid Strategy = "hybrid"
)
