Transactions
Overview
Transaction configuration controls how your Talon microservice batches and commits work. Proper transaction tuning balances latency (time to commit individual transactions) against throughput (number of transactions per second).
This section covers runtime configuration for transaction batching and commit behavior. For conceptual understanding of how transactions work, see Transactions.
What You Can Configure
Transaction configuration includes:
Adaptive Batching
Configure the engine to adaptively batch multiple message processing transactions into a single commit:
Enable/disable adaptive batching
Set maximum batch size (ceiling)
Configure batch window timing
Tune for latency vs throughput trade-offs
See Adaptive Batching for complete configuration reference.
Transaction Behavior (Future)
Additional transaction configuration topics:
Transaction timeouts
Commit flush behavior
Checkpoint frequency
Note: Additional transaction topics will be added as the documentation expands.
Configuration Hierarchy
Transaction settings are configured in your DDL under the engine's transaction section:
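As a hedged sketch of the shape (element names other than the adaptiveCommitBatchCeiling attribute, which appears under Best Practices below, are illustrative and may differ in your platform version):

```xml
<apps>
  <app name="my-app" mainClass="com.example.MyApp">
    <!-- Transaction batching is set on the engine's transaction section.
         Surrounding element names here are illustrative. -->
    <transaction adaptiveCommitBatchCeiling="16"/>
  </app>
</apps>
```

Consult your version's DDL schema for the exact element and attribute names.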
How Transactions Work
Understanding transaction boundaries helps configure batching effectively:
Without Adaptive Batching
Each message processed in its own transaction:
Message arrives
Handler executes
Transaction commits immediately
Next message processed
Characteristics:
Lowest latency (microseconds to commit)
Lower throughput (commit overhead per message)
Ideal for latency-sensitive applications
With Adaptive Batching
Multiple messages batched into single commit:
Messages arrive rapidly
Multiple handlers execute
Batch commits when ceiling reached or window expires
All messages acknowledged together
Characteristics:
Higher throughput (amortize commit overhead)
Slightly higher latency (wait for batch)
Ideal for high-volume applications
See Cluster Consensus for detailed transaction flow diagrams.
Common Configuration Patterns
Low-Latency Trading System
Disable batching for minimum commit latency:
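One way this might look in the DDL, assuming the adaptiveCommitBatchCeiling attribute referenced under Best Practices (the enclosing element name is illustrative):

```xml
<!-- Low latency: commit each message in its own transaction -->
<transaction adaptiveCommitBatchCeiling="1"/>
```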
Use when:
Sub-millisecond latency is critical
Message arrival rate is moderate
Each message must be processed ASAP
High-Throughput Order Processor
Enable batching for maximum throughput:
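A hedged sketch using the same attribute (enclosing element name illustrative):

```xml
<!-- High throughput: allow up to 100 messages per commit;
     the engine batches adaptively up to this ceiling -->
<transaction adaptiveCommitBatchCeiling="100"/>
```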
Use when:
Throughput is more important than latency
Messages arrive in bursts
Slight latency increase (milliseconds) is acceptable
Balanced Configuration
Moderate batching for balanced performance:
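A hedged sketch using the same attribute (enclosing element name illustrative):

```xml
<!-- Balanced: moderate ceiling; the engine commits immediately at low
     arrival rates and batches toward the ceiling under load -->
<transaction adaptiveCommitBatchCeiling="16"/>
```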
Use when:
Need both good latency and throughput
Message arrival rate varies
Want automatic adaptation to load
Performance Considerations
Latency Impact
Adaptive batching increases average latency by waiting for batch to fill:
Rough orders of magnitude (actual figures depend on workload and hardware):
Batch size 1: ~100 microseconds per message
Batch size 10: ~200-500 microseconds per message
Batch size 100: ~1-2 milliseconds per message
The first message in a batch sees the most latency increase.
Throughput Gains
Batching amortizes commit overhead across multiple messages:
No batching: ~10,000 messages/second
Batch size 10: ~50,000 messages/second
Batch size 100: ~200,000 messages/second
Note: Actual numbers depend on message complexity, state size, and hardware.
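The amortization effect can be modeled with simple arithmetic. The sketch below assumes hypothetical per-message processing and per-commit overhead costs (5 and 95 microseconds, chosen only to make batch size 1 yield 10,000 msgs/sec as in the figures above); it is an illustration of why throughput scales with batch size, not a measurement of any real engine:

```python
def throughput(batch_size, per_msg_us=5.0, commit_us=95.0):
    """Messages per second when one commit's overhead is amortized
    across a batch. Costs are illustrative assumptions."""
    batch_time_us = batch_size * per_msg_us + commit_us
    return batch_size / (batch_time_us / 1_000_000)

for b in (1, 10, 100):
    print(f"batch size {b}: ~{throughput(b):,.0f} msgs/sec")
```

As the batch grows, the fixed commit cost shrinks relative to useful work, so throughput rises toward the limit set by per-message processing time alone.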
Adaptive Behavior
The engine automatically adjusts batch size based on message arrival rate:
Low arrival rate: Commits immediately (effective batch size 1)
High arrival rate: Batches up to ceiling
Bursty traffic: Adapts dynamically
This provides good latency during quiet periods and good throughput during bursts.
Monitoring Transaction Performance
Use engine statistics to monitor transaction behavior:
TxnCount - Number of transactions committed
TxnBatchCount - Number of messages in each transaction
TxnLatency - Time from message receipt to commit
See Engine Statistics for complete metrics reference.
Related Topics
Transaction Concepts
Transactions - How transactions work in Talon
Cluster Consensus - Transaction commit with consensus
Developer Guidance
Controlling Transactions - Programmatic transaction control
Using Savepoints - Transaction savepoints in handlers
Configuration Reference
Configuration - Complete DDL reference for transaction settings
Best Practices
Start with default (no batching): Begin with adaptiveCommitBatchCeiling="1" and measure baseline performance
Increase ceiling for throughput: If throughput is the bottleneck, gradually increase the ceiling (10, 25, 50, 100)
Monitor latency distribution: Track P50, P99, and P999 latencies to understand batching impact
Test under realistic load: Use production-like message rates and patterns to tune batching
Consider consensus model: Event Sourcing benefits more from batching than State Replication
Account for replication overhead: Higher batch sizes increase state delta sizes in State Replication
See Also
Message Flow - Configure message processing behavior
Threading - Configure threads that process transactions
Monitoring - Monitor transaction statistics and performance