Apache Kafka is a popular distributed streaming platform used for building real-time data pipelines and event-driven applications. If you’re just getting started, this post will walk you through creating your first Java Kafka application with a producer and a consumer example.

By the end of this tutorial, you’ll have a working Kafka setup where:
- A Producer sends messages to a Kafka topic.
- A Consumer reads those messages from the topic.
Prerequisites
Before you start, make sure you have the following:

| Requirement | Description |
|---|---|
| Java 8+ installed | JDK 8 or higher |
| Apache Kafka running | Either locally or via Docker (ZooKeeper or KRaft mode) |
| Apache Maven installed | Or use your favorite build tool |
| IDE or text editor | IntelliJ IDEA, Eclipse, or VS Code |
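If you don't have a broker available yet, one quick option is the official apache/kafka Docker image, which runs a single-node KRaft broker on port 9092 by default. A minimal sketch (the image tag is an assumption; check Docker Hub for current versions):

```bash
# Run a single-node Kafka broker in KRaft mode, listening on localhost:9092.
# The image tag is an assumption -- pick the version you need.
docker run -d --name kafka -p 9092:9092 apache/kafka:3.9.1
```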
Step 1: Add Kafka Maven Dependencies
You’ll need the Kafka client library. Add this to your pom.xml:
```xml
<dependencies>
    <dependency>
        <groupId>org.apache.kafka</groupId>
        <artifactId>kafka-clients</artifactId>
        <version>3.9.1</version>
    </dependency>
</dependencies>
```
Step 2: Project Structure
```text
kafka-java-app/
├── pom.xml
└── src/
    └── main/
        └── java/
            └── com/
                └── javacodepoint/
                    └── demo/
                        ├── SimpleKafkaProducer.java
                        └── SimpleKafkaConsumer.java
```
Step 3: Complete pom.xml
```xml
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.javacodepoint</groupId>
    <artifactId>kafka-java-app</artifactId>
    <version>0.0.1-SNAPSHOT</version>

    <dependencies>
        <!-- https://mvnrepository.com/artifact/org.apache.kafka/kafka-clients -->
        <dependency>
            <groupId>org.apache.kafka</groupId>
            <artifactId>kafka-clients</artifactId>
            <version>3.9.1</version>
        </dependency>
        <!-- https://mvnrepository.com/artifact/org.slf4j/slf4j-api -->
        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-api</artifactId>
            <version>1.7.36</version>
        </dependency>
    </dependencies>
</project>
```
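A note on the dependencies: slf4j-api is only a logging facade. If no SLF4J binding is on the classpath, SLF4J falls back to a no-op logger (printing a warning at startup), and the Kafka client's own log output is dropped. For local testing you could optionally add a simple console binding such as slf4j-simple; this is an optional addition, not required for the tutorial:

```xml
<!-- Optional: a minimal SLF4J binding so Kafka client logs appear on the console -->
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-simple</artifactId>
    <version>1.7.36</version>
</dependency>
```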
Step 4: Create Kafka Producer (Send Data)
File: SimpleKafkaProducer.java
```java
package com.javacodepoint.demo;

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class SimpleKafkaProducer {

    public static void main(String[] args) {
        // Kafka broker address
        String bootstrapServers = "localhost:9092";
        String topic = "demo-topic";

        // Producer properties
        Properties props = new Properties();
        props.setProperty(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        props.setProperty(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.setProperty(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        // Create producer
        KafkaProducer<String, String> producer = new KafkaProducer<>(props);

        // Send a message
        ProducerRecord<String, String> record =
                new ProducerRecord<>(topic, "Hello from Kafka Java Producer!");
        producer.send(record, (metadata, exception) -> {
            if (exception == null) {
                System.out.printf("Message sent successfully! Topic: %s, Partition: %d, Offset: %d%n",
                        metadata.topic(), metadata.partition(), metadata.offset());
            } else {
                exception.printStackTrace();
            }
        });

        // Close producer (flushes any buffered records first)
        producer.close();
    }
}
```
Code Explanation: How It Works (Step-by-Step)
| Step | Description |
|---|---|
| 1. Set Bootstrap Server | You specify localhost:9092, which tells the producer where Kafka is running. |
| 2. Configure Serializers | StringSerializer converts key and value objects into bytes before sending to Kafka. |
| 3. Create Producer Object | KafkaProducer uses the given properties to set up a connection to Kafka. |
| 4. Create a Message (Record) | You create a ProducerRecord with the topic name and the message you want to send. |
| 5. Send the Message | The send() method pushes the message to the Kafka broker. It’s asynchronous by default. |
| 6. Close Producer | This ensures the producer releases resources and sends any buffered messages. |
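Because send() is asynchronous, the callback above may fire after main() has already reached close() (close() waits for in-flight sends to complete). If you want to block until the broker acknowledges a record, or route records by key, a minimal variation could look like the sketch below; the class name, key, and message are illustrative, not part of the original tutorial:

```java
package com.javacodepoint.demo;

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
import org.apache.kafka.common.serialization.StringSerializer;

public class KeyedKafkaProducer {

    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.setProperty(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.setProperty(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        // try-with-resources closes (and flushes) the producer automatically
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Records with the same key always land on the same partition.
            ProducerRecord<String, String> record =
                    new ProducerRecord<>("demo-topic", "user-42", "Hello again!");

            // Blocking send: get() waits for the broker's acknowledgement (throws on failure).
            RecordMetadata metadata = producer.send(record).get();
            System.out.printf("Acked at partition %d, offset %d%n",
                    metadata.partition(), metadata.offset());
        }
    }
}
```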
Step 5: Create Kafka Consumer (Receive Data)
File: SimpleKafkaConsumer.java
```java
package com.javacodepoint.demo;

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class SimpleKafkaConsumer {

    public static void main(String[] args) {
        String bootstrapServers = "localhost:9092";
        String groupId = "demo-consumer-group";
        String topic = "demo-topic";

        // Consumer properties
        Properties props = new Properties();
        props.setProperty(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        props.setProperty(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.setProperty(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.setProperty(ConsumerConfig.GROUP_ID_CONFIG, groupId);
        props.setProperty(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");

        // Create consumer
        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Collections.singletonList(topic));

        System.out.println("Listening for messages...");
        while (true) {
            ConsumerRecords<String, String> records =
                    consumer.poll(Duration.ofMillis(1000));
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("Received: Key = %s, Value = %s, Partition = %d, Offset = %d%n",
                        record.key(), record.value(), record.partition(), record.offset());
            }
        }
    }
}
```
Code Explanation: How It Works (Step-by-Step)
| Step | Description |
|---|---|
| 1. Set Bootstrap Server | You again point to localhost:9092, so the consumer knows where Kafka is running. |
| 2. Configure Deserializers | StringDeserializer converts the received byte data back into readable strings. |
| 3. Set Group ID | Every consumer must belong to a group. Kafka uses this to manage message delivery and offset tracking. |
| 4. Auto Offset Reset | earliest means the consumer starts from the beginning if no previous offset is found. |
| 5. Subscribe to Topic | You subscribe the consumer to listen to the topic demo-topic. |
| 6. Poll for Messages | The consumer enters a loop and keeps polling Kafka every second for new messages. |
| 7. Process Messages | For each record, you print the message along with the partition and offset info. |
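One thing the loop above never does is close the consumer. A common refinement (sketched here as an optional pattern, not part of the original example) is a shutdown hook that calls consumer.wakeup(), which makes the blocked poll() throw WakeupException so the loop can exit and close the consumer cleanly. This fragment is a drop-in replacement for the while(true) loop in SimpleKafkaConsumer and additionally requires `import org.apache.kafka.common.errors.WakeupException;`:

```java
// Register a hook so Ctrl+C triggers a clean shutdown.
final Thread mainThread = Thread.currentThread();
Runtime.getRuntime().addShutdownHook(new Thread(() -> {
    consumer.wakeup();          // makes the blocked poll() throw WakeupException
    try {
        mainThread.join();      // wait for the polling loop to finish cleanly
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }
}));

try {
    while (true) {
        ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(1000));
        for (ConsumerRecord<String, String> record : records) {
            System.out.printf("Received: Key = %s, Value = %s, Partition = %d, Offset = %d%n",
                    record.key(), record.value(), record.partition(), record.offset());
        }
    }
} catch (WakeupException e) {
    // Expected on shutdown; nothing to handle.
} finally {
    consumer.close();           // commits offsets and leaves the group cleanly
}
```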
Step 6: Run Your Kafka Application
- Start Kafka: Make sure Kafka is running on localhost:9092. For more details, visit another post: How to Install Kafka and Start the Server.
- Create a topic using the CLI command (if not already created), or create it programmatically with the AdminClient sketch shown after this list. Navigate to the Kafka home directory (e.g., C:\kafka_2.13-3.9.1), then run the command for your operating system:

Windows:
```
bin\windows\kafka-topics.bat --create --topic demo-topic --bootstrap-server localhost:9092
```

Linux/macOS:
```
bin/kafka-topics.sh --create --topic demo-topic --bootstrap-server localhost:9092
```
- Run the Kafka Consumer Program first
- Then run the Kafka Producer Program
You should see the consumer print the message received from the producer:

```
Received: Key = null, Value = Hello from Kafka Java Producer!, Partition = 0, Offset = 1
```
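As referenced in the list above, you can also create the topic from Java instead of the CLI, since the kafka-clients dependency already includes an admin client. A minimal sketch (the class name and the partition/replication values are illustrative):

```java
package com.javacodepoint.demo;

import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateDemoTopic {

    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // 1 partition, replication factor 1: fine for a single-broker dev setup.
            NewTopic topic = new NewTopic("demo-topic", 1, (short) 1);
            // all().get() blocks until the broker confirms; throws (wrapped
            // TopicExistsException) if the topic already exists.
            admin.createTopics(Collections.singletonList(topic)).all().get();
            System.out.println("Topic created: demo-topic");
        }
    }
}
```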
Tips for Beginners
- Always run your consumer before your producer to catch the message.
- Use different topic names for different test cases.
- Use logging instead of System.out.println() for production applications (see the short sketch after this list).
- Later, explore Spring Kafka to simplify integration in Spring Boot applications.
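For the logging tip, a minimal SLF4J sketch; the class name and messages are illustrative, and you would also need a binding such as slf4j-simple on the classpath, as noted under Step 3:

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LoggingExample {

    private static final Logger log = LoggerFactory.getLogger(LoggingExample.class);

    public static void main(String[] args) {
        String topic = "demo-topic";
        // Parameterized messages ({}) avoid string concatenation when a level is disabled.
        log.info("Listening for messages on topic {}", topic);
        // Pass the exception as the last argument to get the full stack trace.
        log.error("Failed to send message", new RuntimeException("example failure"));
    }
}
```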
Conclusion
Congrats! You’ve successfully built your first Java Kafka application with a Producer and a Consumer.
This basic example forms the foundation for more advanced scenarios like:
- Working with multiple partitions
- Handling key-based partitioning
- Using Avro/Protobuf serialization
- Running Kafka in the cloud
🔁 Want to revisit the lessons or explore more?
⬅️ Return to the Apache Kafka Tutorial Home Page. Whether you want to review a specific topic or go through the full tutorial again, everything is structured to help you master Apache Kafka step by step.