Kafka MCQs – Kafka Streams API

11.) Which method is used to start a Kafka Streams application?

A) start()
B) begin()
C) launch()
D) open()

Answer: Option A

Explanation: Calling KafkaStreams.start() launches the application's processing threads and begins executing the topology.
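A minimal sketch of the lifecycle (topic names, application id, and broker address are illustrative assumptions; requires the kafka-streams library and a running broker):

```java
import java.util.Properties;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;

public class StartExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "demo-app");          // any unique id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker

        StreamsBuilder builder = new StreamsBuilder();
        builder.stream("input-topic").to("output-topic"); // trivial pass-through topology

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start(); // begins processing; runs until close() is called
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```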

12.) What is the default offset reset policy in Kafka Streams?

A) latest
B) earliest
C) none
D) reset

Answer: Option B

Explanation: Kafka Streams overrides the plain consumer's default of latest and reads from the earliest available offset when no committed offset exists, unless you set auto.offset.reset yourself.
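If you do want the plain-consumer behavior back, you can override the reset policy in the Streams configuration (a config fragment, assuming a `props` Properties object as in a typical Streams setup):

```java
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;

Properties props = new Properties();
// Kafka Streams defaults this to "earliest"; override to "latest" if you
// only want records produced after the application first starts.
props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
```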

13.) A KStream can be converted into a KTable using:

A) map()
B) join()
C) toTable()
D) toKTable()

Answer: Option C

Explanation: KStream.toTable() converts a stream into a KTable, which keeps only the latest value per key and is the basis for stateful, table-style operations.
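A sketch of the conversion (topic and store names are illustrative assumptions):

```java
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;

StreamsBuilder builder = new StreamsBuilder();
KStream<String, String> updates = builder.stream("user-updates"); // assumed topic

// Each key now maps to its most recent value, changelog-style.
KTable<String, String> latest = updates.toTable();

// Optionally back the table with a named, queryable state store:
KTable<String, String> stored = updates.toTable(Materialized.as("users-store"));
```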

14.) Which function is used to repartition a stream by key?

A) groupByKey()
B) flatMap()
C) peek()
D) mapValues()

Answer: Option A

Explanation: groupByKey() groups records by their current key and, if the key was changed upstream, triggers a repartition through an internal topic; this grouping is required before aggregations such as count() or reduce().
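A sketch showing when the repartition kicks in (topic name and the key-selection logic are illustrative assumptions):

```java
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;

StreamsBuilder builder = new StreamsBuilder();
KStream<String, String> clicks = builder.stream("clicks"); // assumed topic

KTable<String, Long> countsByPage = clicks
        .selectKey((user, page) -> page) // changing the key marks the stream for repartitioning
        .groupByKey()                    // inserts an internal repartition topic before grouping
        .count();                        // stateful aggregation now sees co-partitioned data
```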

15.) What type of store is backed by RocksDB in Kafka Streams?

A) GlobalStore
B) FileStore
C) Persistent Key-Value Store
D) Memory Store

Answer: Option C

Explanation: RocksDB is the default persistent state store in Kafka Streams.
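A sketch of building a persistent (RocksDB-backed) store explicitly via the Stores factory (the store name is an illustrative assumption):

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.state.KeyValueStore;
import org.apache.kafka.streams.state.StoreBuilder;
import org.apache.kafka.streams.state.Stores;

StoreBuilder<KeyValueStore<String, Long>> storeBuilder =
        Stores.keyValueStoreBuilder(
                Stores.persistentKeyValueStore("my-counts-store"), // RocksDB on local disk
                Serdes.String(),
                Serdes.Long());
// Contrast: Stores.inMemoryKeyValueStore("my-mem-store") for a non-persistent store.
```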

16.) Which one is NOT a valid Kafka Streams operation?

A) mapValues()
B) merge()
C) reduce()
D) toUpper()

Answer: Option D

Explanation: toUpper() is not a Kafka Streams DSL method; value transformations such as upper-casing are expressed with mapValues() and plain Java code.
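For example, the upper-casing that toUpper() suggests would actually be written inside mapValues() (topic names are illustrative assumptions):

```java
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;

StreamsBuilder builder = new StreamsBuilder();
KStream<String, String> upper = builder.<String, String>stream("text-in")
        .mapValues(v -> v.toUpperCase()); // ordinary Java inside the DSL operator
upper.to("text-out");
```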

17.) How do you persist aggregated results in Kafka Streams?

A) Use Materialized view
B) Store in ZooKeeper
C) Call flush()
D) Use log compaction

Answer: Option A

Explanation: The Materialized class configures state store backing for persistent aggregations.
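A sketch of a persisted aggregation using Materialized (topic and store names are illustrative assumptions):

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.utils.Bytes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.state.KeyValueStore;

StreamsBuilder builder = new StreamsBuilder();
KTable<String, Long> totals = builder.<String, Long>stream("amounts")
        .groupByKey()
        .reduce(Long::sum,
                Materialized.<String, Long, KeyValueStore<Bytes, byte[]>>as("totals-store")
                        .withValueSerde(Serdes.Long()));
// The named store is backed by RocksDB and a changelog topic, so the
// aggregate survives restarts and is available for interactive queries.
```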

18.) Kafka Streams ensures fault tolerance using:

A) ZooKeeper logs
B) Internal changelog topics
C) Java Serialization
D) ACLs

Answer: Option B

Explanation: Changelog topics keep track of updates to state stores for fault tolerance.

19.) How are Kafka Streams applications scaled?

A) Adding threads
B) Adding partitions to topics
C) Adding more JVM instances
D) All of the above

Answer: Option D

Explanation: Kafka Streams scales via parallelism at the partition, thread, and instance level.
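The thread-level knob is a single configuration property (a config fragment; the value 4 is an illustrative assumption):

```java
import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

Properties props = new Properties();
// Number of stream threads per JVM instance; total parallelism is capped
// by the number of input-topic partitions across all instances.
props.put(StreamsConfig.NUM_STREAM_THREADS_CONFIG, 4);
```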

20.) What does through() operation do in Kafka Streams?

A) Serializes data
B) Filters nulls
C) Rewrites to an intermediate topic
D) Disconnects the stream

Answer: Option C

Explanation: through() writes the stream to an intermediate Kafka topic and immediately consumes it back, typically to force a repartition; it has been deprecated since Kafka 2.6 in favor of repartition().
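A sketch of both the legacy and current forms (topic names are illustrative assumptions):

```java
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Repartitioned;

StreamsBuilder builder = new StreamsBuilder();
KStream<String, String> source = builder.stream("source-topic");

// Legacy (deprecated since 2.6): produce to, then consume from, a user-managed topic.
KStream<String, String> rerouted = source.through("intermediate-topic");

// Preferred: let Streams manage the intermediate topic itself.
KStream<String, String> repartitioned = source.repartition(Repartitioned.as("intermediate"));
```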
