Transactions

Overview

Transaction configuration controls how your Talon microservice batches and commits work. Proper transaction tuning balances latency (the time to commit an individual transaction) against throughput (the number of messages committed per second).

This section covers runtime configuration for transaction batching and commit behavior. For conceptual understanding of how transactions work, see Transactions.

What You Can Configure

Transaction configuration includes:

Adaptive Batching

Configure the engine to adaptively batch multiple message processing transactions into a single commit:

  • Enable/disable adaptive batching

  • Set maximum batch size (ceiling)

  • Configure batch window timing

  • Tune for latency vs throughput trade-offs

See Adaptive Batching for complete configuration reference.

Transaction Behavior (Future)

Additional transaction configuration topics:

  • Transaction timeouts

  • Commit flush behavior

  • Checkpoint frequency

Note: Additional transaction topics will be added as the documentation expands.

Configuration Hierarchy

Transaction settings are configured in your DDL under the engine's transaction section:
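As a hedged sketch of the DDL shape: the adaptiveCommitBatchCeiling attribute is the setting referenced elsewhere on this page, but the surrounding element and app names here are illustrative and should be checked against your platform's DDL schema:

```xml
<app name="my-service">
  <!-- illustrative nesting: transaction settings under the engine's
       transaction section, as described above -->
  <transaction adaptiveCommitBatchCeiling="10"/>
</app>
```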

How Transactions Work

Understanding transaction boundaries helps configure batching effectively:

Without Adaptive Batching

Each message processed in its own transaction:

  1. Message arrives

  2. Handler executes

  3. Transaction commits immediately

  4. Next message processed

Characteristics:

  • Lowest latency (microseconds to commit)

  • Lower throughput (commit overhead per message)

  • Ideal for latency-sensitive applications

With Adaptive Batching

Multiple messages batched into single commit:

  1. Messages arrive rapidly

  2. Multiple handlers execute

  3. Batch commits when ceiling reached or window expires

  4. All messages acknowledged together

Characteristics:

  • Higher throughput (amortize commit overhead)

  • Slightly higher latency (wait for batch)

  • Ideal for high-volume applications

See Cluster Consensus for detailed transaction flow diagrams.

Common Configuration Patterns

Low-Latency Trading System

Disable batching for minimum commit latency:
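A sketch of this pattern, assuming the illustrative transaction element shown earlier on this page (verify element names against your DDL schema):

```xml
<!-- ceiling of 1: every message commits in its own transaction -->
<transaction adaptiveCommitBatchCeiling="1"/>
```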

Use when:

  • Sub-millisecond latency is critical

  • Message arrival rate is moderate

  • Each message must be committed as soon as it is processed

High-Throughput Order Processor

Enable batching for maximum throughput:
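A sketch of this pattern under the same illustrative DDL shape (element names are assumptions; only the adaptiveCommitBatchCeiling attribute is named by this page):

```xml
<!-- large ceiling: bursts batch up to 100 messages per commit -->
<transaction adaptiveCommitBatchCeiling="100"/>
```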

Use when:

  • Throughput is more important than latency

  • Messages arrive in bursts

  • Slight latency increase (milliseconds) is acceptable

Balanced Configuration

Moderate batching for balanced performance:
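A sketch of this pattern, again assuming the illustrative element shape used in the earlier examples:

```xml
<!-- moderate ceiling: adapts between 1 and 25 messages per commit,
     tracking the arrival rate -->
<transaction adaptiveCommitBatchCeiling="25"/>
```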

Use when:

  • Need both good latency and throughput

  • Message arrival rate varies

  • Want automatic adaptation to load

Performance Considerations

Latency Impact

Adaptive batching increases average latency because messages wait for the batch to fill before committing:

  • Batch size 1: ~100 microseconds per message

  • Batch size 10: ~200-500 microseconds per message

  • Batch size 100: ~1-2 milliseconds per message

The first message in a batch sees the most latency increase.

Throughput Gains

Batching amortizes commit overhead across multiple messages:

  • No batching: ~10,000 messages/second

  • Batch size 10: ~50,000 messages/second

  • Batch size 100: ~200,000 messages/second

Note: Actual numbers depend on message complexity, state size, and hardware.
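The amortization effect can be sketched with a toy cost model. This is not a Talon API; the function name and the per-message handler and commit costs are assumptions chosen so that batch size 1 matches the ~10,000 messages/second figure above:

```python
def throughput(batch_size, handler_us=50.0, commit_us=50.0):
    """Messages/second under a simple amortized-commit cost model.

    Per-message cost = handler time + (commit time / batch size):
    the fixed commit overhead is shared by every message in the batch,
    so larger batches push per-message cost toward the handler time alone.
    """
    per_msg_us = handler_us + commit_us / batch_size
    return 1_000_000 / per_msg_us
```

Under this model, throughput rises steeply at first and then flattens as the commit overhead becomes negligible relative to handler time; real gains depend on message complexity, state size, and hardware, as noted above.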

Adaptive Behavior

The engine automatically adjusts batch size based on message arrival rate:

  • Low arrival rate: Commits immediately (effective batch size 1)

  • High arrival rate: Batches up to ceiling

  • Bursty traffic: Adapts dynamically

This provides good latency during quiet periods and good throughput during bursts.
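The adaptive behavior can be illustrated with a minimal loop. This is a sketch of the commit policy described above, not Talon's actual implementation: it commits whenever the batch hits the ceiling or the input queue drains, so a quiet stream commits per message while a burst fills batches:

```python
from collections import deque

def drain(queue, ceiling):
    """Illustrative adaptive-batching loop (not Talon's engine code).

    Returns the batch size of each commit. Commits when the open batch
    reaches the ceiling *or* the queue drains, so the effective batch
    size adapts to the arrival pattern.
    """
    commits = []      # batch size recorded per commit
    batch = 0
    while queue:
        queue.popleft()               # handler executes for one message
        batch += 1
        if batch >= ceiling or not queue:
            commits.append(batch)     # commit the open transaction
            batch = 0
    return commits
```

For example, a burst of 25 messages with a ceiling of 10 commits as batches of 10, 10, and 5, while a single queued message commits immediately with an effective batch size of 1.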

Monitoring Transaction Performance

Use engine statistics to monitor transaction behavior:

  • TxnCount - Number of transactions committed

  • TxnBatchCount - Number of messages in each transaction

  • TxnLatency - Time from message receipt to commit

See Engine Statistics for complete metrics reference.

Best Practices

  1. Start with default (no batching): Begin with adaptiveCommitBatchCeiling="1" and measure baseline performance

  2. Increase ceiling for throughput: If throughput is the bottleneck, gradually increase the ceiling (10, 25, 50, 100)

  3. Monitor latency distribution: Track P50, P99, and P999 latencies to understand batching impact

  4. Test under realistic load: Use production-like message rates and patterns to tune batching

  5. Consider consensus model: Event Sourcing benefits more from batching than State Replication

  6. Account for replication overhead: Higher batch sizes increase state delta sizes in State Replication

See Also

  • Message Flow - Configure message processing behavior

  • Threading - Configure threads that process transactions

  • Monitoring - Monitor transaction statistics and performance