Serialization Module
The Serialization module benchmarks message encoding and decoding performance using X Platform's Xbuf2 binary serialization format.
Overview
Message serialization/deserialization is a critical operation in messaging systems. This benchmark measures the overhead of:
Encoding: Converting POJO messages to wire format
Decoding: Converting wire format back to POJOs
The benchmark uses the same Car message model used in the AEP Module canonical benchmark.
Test Program
Class: com.neeve.perf.serialization.Driver
The benchmark can be invoked through the X Platform Interactive CLI or directly.
Message Formats
xbuf2 / xbuf2.serial
Tests serialization with sequential/predictable data:
java -cp "libs/*" com.neeve.perf.serialization.Driver --provider xbuf2.serialCharacteristics:
Predictable data patterns
Consistent serialized size
Best-case performance
xbuf2.random
Tests serialization with random data:
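java -cp "libs/*" com.neeve.perf.serialization.Driver --provider xbuf2.random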
Characteristics:
Random data in all fields
Variable serialized size
More realistic performance
Test Message
The Car message contains:
Simple Fields:
timestamp (long)
serialNumber (int)
modelYear (short)
available (boolean)
code (enum)
vehicleCode (string)
Complex Fields:
engine (nested object)
extras (bit set)
someNumbers (int array)
Repeated Fields:
performanceFigures (array of objects)
fuelFigures (array of objects)
Typical Size: ~200 bytes serialized
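For orientation, here is a minimal Java sketch of the Car model's shape. It is an illustration derived from the field list above, not the actual generated X Platform message class; the nested types are placeholders.

```java
import java.util.BitSet;

// Illustrative shape of the Car message, based on the documented field list.
// This is NOT the generated X Platform class; nested types are placeholders.
public class Car {
    enum Code { A, B, C }             // placeholder enum constants
    static class Engine {}            // placeholder nested object
    static class PerformanceFigure {} // placeholder repeated element
    static class FuelFigure {}        // placeholder repeated element

    // Simple fields
    long timestamp;
    int serialNumber;
    short modelYear;
    boolean available;
    Code code;
    String vehicleCode;

    // Complex fields
    Engine engine;       // nested object
    BitSet extras;       // bit set
    int[] someNumbers;   // int array

    // Repeated fields
    PerformanceFigure[] performanceFigures;
    FuelFigure[] fuelFigures;
}
```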
Command-Line Parameters
--provider (-p): Serialization provider, either xbuf2.serial or xbuf2.random (default: xbuf2)
Running the Benchmark
Basic Usage
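Run the driver with the default provider (xbuf2, sequential data):
java -cp "libs/*" com.neeve.perf.serialization.Driver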
Test with Random Data
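Pass --provider xbuf2.random to exercise random field values:
java -cp "libs/*" com.neeve.perf.serialization.Driver --provider xbuf2.random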
Interpreting Results
The benchmark outputs median and mean latencies for encoding and decoding operations.
Result Columns
PROV: Serialization provider
RUN: Run number (multiple runs for consistency)
TYPE: Operation type (ENC=encode, DEC=decode)
SIZE: Serialized size in bytes
MED: Median latency in nanoseconds
MEAN: Mean latency in nanoseconds
Typical Results (Linux x86-64)
Encode: median ~240-250 ns, mean ~250-280 ns, ~178 bytes serialized
Decode: median ~235-245 ns, mean ~245-275 ns, ~178 bytes serialized
Performance Characteristics
Encode vs Decode:
Encoding and decoding have similar overhead
Both operations are highly optimized
Sequential vs Random:
Random data ~5-10% slower due to less predictable access patterns
Sequential data represents best-case performance
Message Size:
Overhead scales roughly linearly with message complexity
The Car message is moderately complex
Access Patterns
The benchmark demonstrates two message access patterns:
Indirect Access (POJO)
Standard object-oriented access via getters/setters:
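A sketch of this pattern; the accessor and factory names are assumptions based on the documented fields, and the generated API may differ:

```java
// Indirect (POJO) access: fields are written and read through accessors.
// Accessor and factory names here are illustrative assumptions.
Car car = Car.create();               // hypothetical factory method
car.setTimestamp(System.nanoTime());
car.setSerialNumber(42);
car.setModelYear((short) 2024);
car.setAvailable(true);
car.setVehicleCode("XPLAT");

long timestamp = car.getTimestamp();  // values read back via getters
boolean available = car.getAvailable();
```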
Direct Access (Serializer/Deserializer)
Zero-copy access via serializers (shown in benchmark code):
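A sketch of this pattern with a hypothetical serializer/deserializer API; the actual calls are in the benchmark source (com.neeve.perf.serialization.Driver):

```java
// Direct access: encode into a preallocated buffer and read fields back
// without materializing a POJO. All type and method names are hypothetical.
byte[] buffer = new byte[512];                        // preallocated wire buffer

CarSerializer serializer = new CarSerializer();       // hypothetical serializer
int length = serializer.serialize(car, buffer, 0);    // encode: fields -> bytes

CarDeserializer deserializer = new CarDeserializer(); // hypothetical deserializer
deserializer.wrap(buffer, 0, length);                 // wrap bytes, no copy
long timestamp = deserializer.timestamp();            // field read from buffer
```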
Direct access is faster and is the approach used in high-performance scenarios
Performance Tuning
For Lowest Latency
Use direct serialization (serializer/deserializer)
Reuse serializer/deserializer instances
Pre-allocate buffers
Minimize nested object depth
For Ease of Use
Use indirect access (POJO getters/setters)
Accept ~10-15% overhead for better code readability
Good for most business applications
Comparison with AEP Module
The AEP Module canonical benchmark includes serialization overhead as part of end-to-end latency:
Serialization Benchmark: ~480ns (encode + decode)
AEP Benchmark: ~27µs (includes serialization + all other operations)
Serialization represents ~1.8% of end-to-end latency
Best Practices
Message Design
Keep messages compact: Fewer fields = faster serialization
Use primitives where possible: Avoid excessive nesting
Size arrays appropriately: Large arrays increase overhead
Consider field ordering: Group frequently-accessed fields
Code Patterns
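A sketch combining the tuning guidance above, reusing the hypothetical CarSerializer from the direct-access example (reused instance, preallocated buffer):

```java
// Low-latency encode pattern: create the serializer and buffer once and
// reuse them for every message to avoid per-message allocation.
// The CarSerializer API is hypothetical; see the benchmark source.
public final class CarEncoder {
    private final CarSerializer serializer = new CarSerializer(); // reused
    private final byte[] buffer = new byte[512];                  // preallocated

    public int encode(Car car) {
        // Encode into the reused buffer; returns the serialized length.
        return serializer.serialize(car, buffer, 0);
    }
}
```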
Next Steps
Review AEP Module to see serialization in end-to-end context
Explore Link Module for messaging transport benchmarks
Return to Modules Overview for other modules