Kafka MCQs – Kafka Internals and High Availability

11.) What is a Kafka log segment?

A) An in-memory buffer
B) A Kafka configuration file
C) A file containing a portion of the topic data
D) A separate topic

Answer: Option C

Explanation: Each partition’s data is divided into multiple segment files stored on disk.
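A quick way to see this on disk: each segment's files are named after the segment's base offset, zero-padded to 20 digits. The helper below is an illustrative sketch (not Kafka code) that reproduces the naming scheme:

```python
def segment_files(base_offset: int) -> list[str]:
    """Kafka names each segment's files after its base offset,
    zero-padded to 20 digits: the log itself plus its offset
    index and time index."""
    stem = f"{base_offset:020d}"
    return [f"{stem}.log", f"{stem}.index", f"{stem}.timeindex"]

print(segment_files(0)[0])        # 00000000000000000000.log
print(segment_files(170_210)[0])  # 00000000000000170210.log
```

So a partition directory typically holds several such triples, one per segment, plus the active segment currently being written.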

12.) Which of the following improves Kafka’s availability?

A) Increasing log retention
B) Enabling log compaction
C) Increasing replication factor
D) Enabling topic auto-creation

Answer: Option C

Explanation: More replicas provide fault tolerance if one broker goes down.

13.) Which configuration defines how many replicas must acknowledge a write?

A) log.retention.hours
B) min.insync.replicas
C) log.segment.bytes
D) message.timeout.ms

Answer: Option B

Explanation: min.insync.replicas sets the minimum number of in-sync replicas that must acknowledge a write when the producer uses acks=all; if fewer replicas are in sync, the write is rejected (NotEnoughReplicas).
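The two settings work as a pair: min.insync.replicas is a broker/topic-level config, while acks is set on the producer. A typical durable setup (values illustrative) looks like:

```properties
# broker or topic config: a write needs acks from at least 2 in-sync replicas
min.insync.replicas=2

# producer config: wait until all in-sync replicas have the record
acks=all
```

With a replication factor of 3, this tolerates one broker failure while still accepting writes.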

14.) What is the default value of Kafka’s acks producer config?

A) 0
B) 1
C) all
D) -1

Answer: Option B

Explanation: In Kafka versions before 3.0, the producer default was acks=1, meaning only the partition leader had to acknowledge the message. Note that since Kafka 3.0 the default has changed to acks=all (with idempotence enabled).

15.) What happens to ISR when a follower goes offline?

A) ISR size increases
B) ISR resets
C) Follower is removed from ISR
D) Topic is deleted

Answer: Option C

Explanation: Kafka removes offline or lagging followers from the ISR list.
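The pruning rule can be sketched as a toy model (this is not broker code): a follower stays in the ISR only if it has fully caught up to the leader within the replica.lag.time.max.ms window (a real broker config; 30 s by default in recent versions). The timestamps below are hypothetical:

```python
REPLICA_LAG_TIME_MAX_MS = 30_000  # illustrative value of replica.lag.time.max.ms

def prune_isr(isr: list[int], last_caught_up_ms: dict[int, int], now_ms: int) -> list[int]:
    """Keep only followers that caught up to the leader's log end
    within the allowed lag window; others are dropped from the ISR."""
    return [r for r in isr
            if now_ms - last_caught_up_ms[r] <= REPLICA_LAG_TIME_MAX_MS]

# Broker 3 went offline at t=40s and never caught up again.
isr = prune_isr([1, 2, 3], {1: 100_000, 2: 95_000, 3: 40_000}, now_ms=100_000)
print(isr)  # [1, 2] — broker 3 is removed
```

When the follower recovers and catches back up to the leader, the broker adds it back to the ISR.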

16.) Kafka stores logs for each partition in…

A) ZooKeeper
B) HDFS
C) Filesystem on brokers
D) RAM

Answer: Option C

Explanation: Kafka stores logs as segment files in the broker’s local file system.

17.) What is a key factor in Kafka’s high throughput?

A) XML-based messaging
B) No replication
C) Sequential disk writes
D) Using relational database

Answer: Option C

Explanation: Kafka leverages sequential I/O for high performance.

18.) How often does Kafka flush data to disk?

A) After every message
B) At configurable intervals or size thresholds
C) Once per day
D) Never

Answer: Option B

Explanation: Kafka uses parameters like log.flush.interval.messages to control flush frequency.
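By default Kafka leaves flushing to the operating system's page cache and relies on replication for durability; explicit flush bounds can be set per message count or per time interval (values below are illustrative):

```properties
# force an fsync after this many messages (default is effectively never)
log.flush.interval.messages=10000

# or force an fsync after this many milliseconds
log.flush.interval.ms=1000
```

Tightening these bounds trades throughput for a stronger on-disk durability guarantee on a single broker.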

19.) How does Kafka handle broker failover?

A) Reboots the failed broker
B) Deletes partition data
C) Elects new partition leaders from ISR
D) Forwards data to ZooKeeper

Answer: Option C

Explanation: Kafka automatically promotes in-sync followers to leader upon failure.

20.) Which component is used in older Kafka versions to manage metadata and configuration?

A) Kafka Broker
B) Kafka Streams
C) ZooKeeper
D) Kafka Connect

Answer: Option C

Explanation: Kafka traditionally used ZooKeeper for metadata management (replaced in KRaft mode).
