
Confluent CCAAK Dumps

Page: 1 / 5
Total 54 questions

Confluent Certified Administrator for Apache Kafka Questions and Answers

Question 1

Which use cases would benefit most from continuous event stream processing? (Choose three.)

Options:

A. Fraud detection
B. Context-aware product recommendations for e-commerce
C. End-of-day financial settlement processing
D. Log monitoring/application fault detection
E. Historical dashboards

Question 2

You have a cluster with a topic t1 that already has uncompressed messages. A new Producer starts sending messages to t1 with compression enabled.

Which condition would allow this?

Options:

A. If the new Producer is configured to use compression.
B. Never, because topic t1 already has uncompressed messages.
C. Only if Kafka is also enabled for encryption.
D. Only if the new Producer disables batching.
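As background for this question: compression in Kafka is a per-producer setting, so a topic can hold a mix of compressed and uncompressed batches. A minimal sketch of a producer configuration fragment (the file name and chosen codec are illustrative, not from the question):

```properties
# Hypothetical producer.properties fragment.
# Compression is applied per producer batch, independently of what
# already exists in the topic.
compression.type=lz4   # valid values: none, gzip, snappy, lz4, zstd
```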

Question 3

You have a Kafka cluster with topics t1 and t2. In the output below, topic t2 shows Partition 1 with a leader “-1”.

...

$ kafka-topics --zookeeper localhost:2181 --describe --topic t2

Topic: t2 Partition: 1 Leader: -1 Replicas: 1 Isr:

What is the most likely reason for this?

Options:

A. Broker 1 failed.
B. Leader shows “-1” while the log cleaner thread runs on Broker 1.
C. Compression has been enabled on Broker 1.
D. Broker 1 has another partition clashing with the same name.

Question 4

A customer has a use case for a ksqlDB persistent query. You need to make sure that duplicate messages are not processed and messages are not skipped.

Which property should you use?

Options:

A. processing.guarantee=exactly_once
B. ksql.streams.auto.offset.reset=earliest
C. ksql.streams.auto.offset.reset=latest
D. ksql.fail.on.production.error=false
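For context, ksqlDB passes Kafka Streams settings through with a `ksql.streams.` prefix; a hedged sketch of how the exactly-once guarantee might be set in a server configuration file (file name is illustrative):

```properties
# Hypothetical ksql-server.properties fragment.
# Exactly-once processing prevents both duplicate processing and
# skipped messages for persistent queries.
ksql.streams.processing.guarantee=exactly_once
```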

Question 5

You have an existing topic t1 that you want to delete because there are no more producers writing to it or consumers reading from it.

What is the recommended way to delete the topic?

Options:

A. If topic deletion is enabled on the brokers, delete the topic using Kafka command line tools.
B. The consumer should send a message with a 'null' key.
C. Delete the log files and their corresponding index files from the leader broker.
D. Delete the offsets for that topic from the consumer offsets topic.
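A sketch of the command-line deletion this question refers to (not runnable without a live cluster; the bootstrap address is illustrative, and `delete.topic.enable=true` must be set on the brokers):

```shell
# Delete topic t1 via the Kafka CLI tools.
kafka-topics --bootstrap-server localhost:9092 --delete --topic t1
```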

Question 6

Why does Kafka use ZooKeeper? (Choose two.)

Options:

A. To access information about the leaders and partitions
B. To scale the number of brokers in the cluster
C. To prevent replication between clusters
D. For controller election

Question 7

How can load balancing of Kafka clients across multiple brokers be accomplished?

Options:

A. Partitions
B. Replicas
C. Offsets
D. Connectors

Question 8

A developer works for a company whose internal best practices dictate that there must be no single point of failure for stored data.

What is the best approach to ensure the developer complies with this practice when creating Kafka topics?

Options:

A. Set ‘min.insync.replicas’ to 1.
B. Use the parameter --partitions=3 when creating the topic.
C. Make sure the topics are created with linger.ms=0 so data is written immediately and not held in batch.
D. Set the topic replication factor to 3.
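A sketch of topic creation with a replication factor of 3, so no single broker holds the only copy of any partition (requires a live cluster of at least three brokers; the topic name and bootstrap address are illustrative):

```shell
# Each of the 3 partitions is replicated to 3 brokers.
kafka-topics --bootstrap-server localhost:9092 --create --topic orders \
  --partitions 3 --replication-factor 3
```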

Question 9

What is the correct permission check sequence for Kafka ACLs?

Options:

A. Super Users → Deny ACL → Allow ACL → Deny
B. Allow ACL → Deny ACL → Super Users → Deny
C. Deny ACL → Deny → Allow ACL → Super Users
D. Super Users → Allow ACL → Deny ACL → Deny
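For context, ACLs are managed with the `kafka-acls` tool; a hedged sketch of adding an allow rule (not runnable without a live, security-enabled cluster; the principal, topic, and bootstrap address are illustrative):

```shell
# Grant read access on topic t1 to a single principal.
kafka-acls --bootstrap-server localhost:9092 --add \
  --allow-principal User:alice --operation Read --topic t1
```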

Question 10

Where are Apache Kafka Access Control Lists stored?

Options:

A. Broker
B. ZooKeeper
C. Schema Registry
D. Connect

Question 11

A broker in the Kafka cluster is currently acting as the Controller.

Which statement is correct?

Options:

A. It can have topic partitions.
B. It is given precedence for replication to and from replica followers.
C. All consumers are allowed to fetch messages only from this server.
D. It is responsible for sending leader information to all producers.

Question 12

You want to increase Producer throughput for the messages it sends to your Kafka cluster by tuning the batch size (‘batch.size’) and the time the Producer waits before sending a batch (‘linger.ms’).

According to best practices, what should you do?

Options:

A. Decrease ‘batch.size’ and decrease ‘linger.ms’
B. Decrease ‘batch.size’ and increase ‘linger.ms’
C. Increase ‘batch.size’ and decrease ‘linger.ms’
D. Increase ‘batch.size’ and increase ‘linger.ms’
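To illustrate the tuning this question describes, a hedged producer configuration sketch (the specific values are illustrative, not recommendations from the question):

```properties
# Hypothetical producer.properties fragment.
# A larger batch plus a short linger lets the producer fill batches
# before sending, trading a little latency for higher throughput.
batch.size=65536   # bytes per partition batch (default is 16384)
linger.ms=20       # wait up to 20 ms for a batch to fill (default is 0)
```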

Question 13

You are using Confluent Schema Registry to provide a RESTful interface for storing and retrieving schemas.

Which types of schemas are supported? (Choose three.)

Options:

A. Avro
B. gRPC
C. JSON
D. Thrift
E. Protobuf

Question 14

Which model does Kafka use for consumers?

Options:

A. Push
B. Publish
C. Pull
D. Enrollment

Question 15

Your Kafka cluster has four brokers. The topic t1 on the cluster has two partitions, and it has a replication factor of three. You create a Consumer Group with four consumers, which subscribes to t1.

In the scenario above, how many Controllers are in the Kafka cluster?

Options:

A. One
B. Two
C. Three
D. Four

Question 16

Per customer business requirements, a system’s high availability is more important than message reliability.

Which of the following should be set?

Options:

A. Unclean leader election should be enabled.
B. The number of brokers in the cluster should always be odd (3, 5, 7, and so on).
C. linger.ms should be set to '0'.
D. retention.ms should be set to -1.
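For context, unclean leader election is controlled by a single broker- or topic-level setting; a hedged configuration sketch:

```properties
# Hypothetical server.properties / topic-config fragment.
# Allowing an out-of-sync replica to become leader keeps the partition
# available during failures, at the risk of losing unreplicated messages.
unclean.leader.election.enable=true
```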
