When working with Apache Kafka, one of the most important concepts is serialization and deserialization. Kafka is designed for high-throughput messaging, and to achieve this, producers must serialize data (convert it into bytes) before sending it, and consumers must deserialize it (convert the bytes back into objects) after receiving it.

In this blog post, we’ll explore Kafka serialization and deserialization in Java without using Spring Boot. We’ll cover string and byte formats, custom object serialization with JSON, and best practices for handling structured messages.
What is Serialization and Deserialization in Kafka?
| Serialization | Deserialization |
|---|---|
| The process of converting an object (like a Java POJO or JSON string) into a byte array so it can be sent across the Kafka network. | The reverse process: converting byte arrays back into Java objects. |
Since Kafka only understands byte arrays, every message you send (keys and values) must be serialized before it goes onto a Kafka topic.
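Both sides of that conversion are expressed through a pair of small interfaces. The sketch below is a simplified view of the contract in org.apache.kafka.common.serialization; the real interfaces are public and also define configure() and close() methods with default no-op implementations (plus Headers-aware overloads in newer Kafka versions):

// Simplified view of Kafka's serialization contract
interface Serializer<T> {
    byte[] serialize(String topic, T data); // object -> bytes
}

interface Deserializer<T> {
    T deserialize(String topic, byte[] data); // bytes -> object
}

Every built-in and custom (de)serializer in this post is just an implementation of one of these two interfaces.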
Using StringSerializer and ByteArraySerializer for Basic Message Formats
The simplest way to start is by using Kafka’s built-in serializers.
Kafka Producer with StringSerializer
import org.apache.kafka.clients.producer.*;
import org.apache.kafka.common.serialization.StringSerializer;
import java.util.Properties;

public class StringProducer {
    public static void main(String[] args) {
        String topicName = "string-topic";

        // Producer configuration: broker address plus key/value serializers
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        ProducerRecord<String, String> record =
                new ProducerRecord<>(topicName, "key1", "Hello Kafka with StringSerializer!");

        producer.send(record);
        producer.close(); // flushes any pending records before shutting down
        System.out.println("Message sent successfully!");
    }
}
Kafka Consumer with StringDeserializer
import org.apache.kafka.clients.consumer.*;
import org.apache.kafka.common.serialization.StringDeserializer;
import java.time.Duration;
import java.util.*;

public class StringConsumer {
    public static void main(String[] args) {
        String topicName = "string-topic";

        // Consumer configuration: broker address, group id, and key/value deserializers
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "string-consumer-group");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Collections.singletonList(topicName));

        // Poll in a loop; each poll returns whatever records arrived since the last one
        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(1000));
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("Consumed message: Key = %s, Value = %s%n", record.key(), record.value());
            }
        }
    }
}
This is the easiest way to handle plain text messages in Kafka. For binary formats like images or files, you can use ByteArraySerializer and ByteArrayDeserializer.
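For instance, a minimal binary producer might look like the sketch below; binary-topic and image.png are placeholder names, and keep in mind that Kafka's default maximum message size is around 1 MB, so larger files require chunking or broker configuration changes:

import org.apache.kafka.clients.producer.*;
import org.apache.kafka.common.serialization.ByteArraySerializer;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Properties;

public class ByteArrayProducer {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class.getName());

        // Read the file as raw bytes; "image.png" is a placeholder path
        byte[] payload = Files.readAllBytes(Paths.get("image.png"));

        KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(props);
        producer.send(new ProducerRecord<>("binary-topic", payload));
        producer.close();
    }
}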
Implementing Custom Serializers and Deserializers for POJOs
In real-world applications, we often deal with structured data (like Order, User, or Payment objects). To handle these, we need custom serializers and deserializers.
POJO Example: Order Class
package com.javacodepoint.kafka.model;

public class Order {
    private int orderId;
    private String productName;
    private double amount;

    public Order() {}

    public Order(int orderId, String productName, double amount) {
        this.orderId = orderId;
        this.productName = productName;
        this.amount = amount;
    }

    public int getOrderId() {
        return orderId;
    }

    public void setOrderId(int orderId) {
        this.orderId = orderId;
    }

    public String getProductName() {
        return productName;
    }

    public void setProductName(String productName) {
        this.productName = productName;
    }

    public double getAmount() {
        return amount;
    }

    public void setAmount(double amount) {
        this.amount = amount;
    }

    @Override
    public String toString() {
        return "Order [orderId=" + orderId + ", productName=" + productName + ", amount=" + amount + "]";
    }
}
Custom Serializer
package com.javacodepoint.kafka;

import com.javacodepoint.kafka.model.Order;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.kafka.common.serialization.Serializer;

public class OrderSerializer implements Serializer<Order> {

    // Reuse a single ObjectMapper; it is thread-safe and expensive to create
    private final ObjectMapper objectMapper = new ObjectMapper();

    @Override
    public byte[] serialize(String topic, Order data) {
        if (data == null) {
            return null; // lets Kafka send a tombstone instead of the string "null"
        }
        try {
            return objectMapper.writeValueAsBytes(data);
        } catch (Exception e) {
            throw new RuntimeException("Error serializing Order", e);
        }
    }
}
Custom Deserializer
package com.javacodepoint.kafka;

import com.javacodepoint.kafka.model.Order;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.kafka.common.serialization.Deserializer;

public class OrderDeserializer implements Deserializer<Order> {

    // Reuse a single ObjectMapper; it is thread-safe and expensive to create
    private final ObjectMapper objectMapper = new ObjectMapper();

    @Override
    public Order deserialize(String topic, byte[] data) {
        if (data == null) {
            return null; // tombstone or empty payload
        }
        try {
            return objectMapper.readValue(data, Order.class);
        } catch (Exception e) {
            throw new RuntimeException("Error deserializing Order", e);
        }
    }
}
Kafka JSON Serialization in Java with Jackson or Gson
You can choose between Jackson and Gson for JSON serialization.
- Jackson (faster, widely used in enterprise apps)
- Gson (lighter, easier for small projects)
An example with Jackson is already shown above. With Gson, you would use new Gson().toJson(data) for serialization and new Gson().fromJson(string, Order.class) for deserialization.
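As a quick sketch, the Gson equivalent of the Jackson serializer above could look like this (assuming Gson is on the classpath; the class name GsonOrderSerializer is our own, not part of any library):

package com.javacodepoint.kafka;

import com.javacodepoint.kafka.model.Order;
import com.google.gson.Gson;
import org.apache.kafka.common.serialization.Serializer;
import java.nio.charset.StandardCharsets;

public class GsonOrderSerializer implements Serializer<Order> {

    // Gson instances are thread-safe and can be reused across calls
    private final Gson gson = new Gson();

    @Override
    public byte[] serialize(String topic, Order data) {
        if (data == null) {
            return null; // allow tombstone records
        }
        return gson.toJson(data).getBytes(StandardCharsets.UTF_8);
    }
}

The matching deserializer mirrors this, returning gson.fromJson(new String(data, StandardCharsets.UTF_8), Order.class).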
Kafka Custom Serializer: Producer & Consumer Example
Kafka Producer with Custom Serializer
package com.javacodepoint.kafka;

import com.javacodepoint.kafka.model.Order;
import org.apache.kafka.clients.producer.*;
import java.util.Properties;

public class OrderProducer {
    public static void main(String[] args) {
        String topicName = "order-topic";

        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
        // Register the custom serializer by its fully qualified class name
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "com.javacodepoint.kafka.OrderSerializer");

        KafkaProducer<String, Order> producer = new KafkaProducer<>(props);
        Order order = new Order(101, "Laptop", 75000.50);
        ProducerRecord<String, Order> record = new ProducerRecord<>(topicName, "order1", order);

        producer.send(record);
        producer.close(); // flushes the pending record before shutting down
        System.out.println("Order produced successfully!");
    }
}
OUTPUT:
Order produced successfully!
Kafka Consumer with Custom Deserializer
package com.javacodepoint.kafka;

import com.javacodepoint.kafka.model.Order;
import org.apache.kafka.clients.consumer.*;
import java.time.Duration;
import java.util.*;

public class OrderConsumer {
    public static void main(String[] args) {
        String topicName = "order-topic";

        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "order-consumer-group");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
        // Register the custom deserializer by its fully qualified class name
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, "com.javacodepoint.kafka.OrderDeserializer");

        KafkaConsumer<String, Order> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Collections.singletonList(topicName));

        while (true) {
            ConsumerRecords<String, Order> records = consumer.poll(Duration.ofMillis(1000));
            for (ConsumerRecord<String, Order> record : records) {
                System.out.println("Consumed Order: " + record.value());
            }
        }
    }
}
OUTPUT:
Consumed Order: Order [orderId=101, productName=Laptop, amount=75000.5]
Best Practices for Kafka Message Format
- Use JSON for structured data – Easy to debug and human-readable.
- Consider Avro or Protobuf for schema evolution – Handles backward/forward compatibility better than JSON.
- Add versioning in messages – Helps when updating schema.
- Validate incoming messages – Prevents corrupt or incompatible data.
- Use error-handling strategies – e.g., Dead Letter Queues (DLQs) for failed deserialization, as in the sketch after this list.
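As one hedged example of the last two points, a "safe" deserializer can catch failures and return null instead of crashing the consumer; the poll loop can then skip or reroute bad records. The class name SafeOrderDeserializer and the DLQ topic mentioned in the comments are illustrative, not from the examples above:

package com.javacodepoint.kafka;

import com.javacodepoint.kafka.model.Order;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.kafka.common.serialization.Deserializer;

public class SafeOrderDeserializer implements Deserializer<Order> {

    private final ObjectMapper objectMapper = new ObjectMapper();

    @Override
    public Order deserialize(String topic, byte[] data) {
        if (data == null) {
            return null; // tombstone record
        }
        try {
            return objectMapper.readValue(data, Order.class);
        } catch (Exception e) {
            // Log and return null so the consumer keeps running. Note this makes a
            // failed record indistinguishable from a tombstone, so a production setup
            // would typically wrap the result or forward the raw bytes to a DLQ topic
            // (e.g., a hypothetical "order-topic-dlq") before giving up on the record.
            System.err.println("Failed to deserialize record from " + topic + ": " + e.getMessage());
            return null;
        }
    }
}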
Performance Tips for Efficient Serialization
- Batch messages in producers to reduce network overhead.
- Use ByteArraySerializer for large binary payloads.
- Enable compression (gzip, snappy, lz4) for high-volume JSON data – see the config sketch after this list.
- Minimize object allocations during serialization for faster throughput.
- Reuse ObjectMapper (Jackson) instead of creating new instances.
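To make the batching and compression tips concrete, here is a minimal configuration sketch. The class name TunedProducerConfig and the specific values are illustrative starting points, not tuned recommendations:

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;
import java.util.Properties;

public class TunedProducerConfig {
    public static KafkaProducer<String, String> create() {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "lz4"); // compress each batch on the wire
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, 32 * 1024);   // allow up to 32 KB per batch
        props.put(ProducerConfig.LINGER_MS_CONFIG, 10);           // wait up to 10 ms to fill a batch
        return new KafkaProducer<>(props);
    }
}

Larger batches compress better and reduce network round trips, at the cost of a small delivery delay controlled by linger.ms.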
Conclusion
In this post, we explored Kafka serialization and deserialization in plain Java. We covered:
- Using built-in StringSerializer and ByteArraySerializer
- Implementing custom serializers and deserializers for POJOs
- Handling JSON with Jackson and Gson
- Best practices for schema evolution and error handling
- Performance tips for efficient Kafka message handling
By mastering these concepts, you can build robust Kafka-based applications without depending on frameworks like Spring Boot.
Want to revisit the lessons or explore more?
Return to the Apache Kafka Tutorial Home Page. Whether you want to review a specific topic or go through the full tutorial again, everything is structured to help you master Apache Kafka step by step.
FAQs
What is the default serializer in Kafka?
Kafka does not assume a default serializer; you must explicitly configure one (e.g., StringSerializer, ByteArraySerializer).
Should I use JSON or Avro for Kafka messages?
Use JSON for simplicity and debugging. Use Avro/Protobuf for schema evolution and high-performance enterprise systems.
Can I use Kafka serialization without Spring Boot?
Yes. As shown above, Kafka’s Producer and Consumer APIs in plain Java allow you to implement serialization without Spring Boot.
How do I handle serialization errors in Kafka?
Log errors and send failed messages to a Dead Letter Queue (DLQ). Add try-catch in custom serializers/deserializers.
What is the best serializer for performance?
ByteArraySerializer is the fastest for raw bytes. Avro/Protobuf are efficient for structured data with schema evolution.