We have seen the components and types of Kafka clusters already. We will now quickly see how Kafka components work together in a single node single broker cluster.
Even though in production scenarios Kafka is almost always installed on Linux/Unix-based machines, we will install and test it on Windows for simplicity.
Summary of steps
We will be doing the following steps in order:
- Installing and configuring Kafka and ZooKeeper as a single node single broker cluster
- Starting the ZooKeeper server
- Starting the Kafka broker
- Creating a Kafka topic
- Starting a producer to send messages
- Starting a consumer to consume messages
Installing and configuring Kafka and ZooKeeper as a single node single broker cluster
Download the latest stable Kafka release from http://kafka.apache.org/downloads.html and extract it into a folder.
The .bat scripts for Windows are available in bin\windows; the equivalent .sh scripts for Linux/Unix are in bin.
Starting the ZooKeeper server
Kafka ships with a simple default ZooKeeper configuration file that can be used to launch a single local ZooKeeper instance.
Important properties defined in zookeeper.properties are:
dataDir=/tmp/zookeeper
clientPort=2181
maxClientCnxns=0
We can start the local ZooKeeper instance as:
zookeeper-server-start.bat ../../config/zookeeper.properties
or
zookeeper-server-start.sh ../config/zookeeper.properties
By default, the ZooKeeper server will listen on *:2181/tcp.
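To confirm that the local ZooKeeper instance is actually listening before moving on, you can send it the built-in ruok four-letter command; a healthy server replies with imok. Below is a minimal Java sketch (the class name ZkPing is just a placeholder), assuming the default clientPort of 2181:

import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;

public class ZkPing {
    public static void main(String[] args) throws Exception {
        // Connect to the clientPort configured in zookeeper.properties (2181)
        try (Socket socket = new Socket("localhost", 2181)) {
            OutputStream out = socket.getOutputStream();
            // "ruok" is a standard ZooKeeper four-letter admin command
            out.write("ruok".getBytes());
            out.flush();
            InputStream in = socket.getInputStream();
            byte[] buf = new byte[16];
            int n = in.read(buf);
            // A healthy server answers "imok" and then closes the connection
            System.out.println(n > 0 ? new String(buf, 0, n) : "no response");
        }
    }
}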
Starting the Kafka broker
The server.properties file defines the following important properties:
broker.id=0
port=9092
log.dir=/tmp/kafka8-logs
num.partitions=2
zookeeper.connect=localhost:2181
Start the Kafka broker as:
kafka-server-start.bat ../../config/server.properties
or
kafka-server-start.sh ../config/server.properties
Creating a Kafka topic
The kafka-topics.bat utility creates a topic, lets us override the default number of partitions, and prints a message on successful creation. The utility also needs the ZooKeeper connection details.
We can create a topic named mykafkatopic with a single partition and only one replica as:
kafka-topics.bat --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic mykafkatopic
or
kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic mykafkatopic
We can now see that topic if we run the list topic command:
kafka-topics.bat --list --zookeeper localhost:2181
or
kafka-topics.sh --list --zookeeper localhost:2181
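Topics can also be created programmatically. The following is only a sketch, assuming the 0.8.x ZooKeeper-based admin API (kafka.admin.AdminUtils) and the zkclient jar on the classpath; the class name CreateTopicDemo is a placeholder, and this API changed in later Kafka versions:

import java.util.Properties;

import kafka.admin.AdminUtils;
import kafka.utils.ZKStringSerializer$;
import org.I0Itec.zkclient.ZkClient;

public class CreateTopicDemo {
    public static void main(String[] args) {
        // Connect to the same ZooKeeper the broker uses; ZKStringSerializer ensures
        // topic metadata is written in the format Kafka expects
        ZkClient zkClient = new ZkClient("localhost:2181", 10000, 10000, ZKStringSerializer$.MODULE$);
        // topic name, number of partitions, replication factor, extra topic-level config
        AdminUtils.createTopic(zkClient, "mykafkatopic", 1, 1, new Properties());
        zkClient.close();
    }
}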
Starting a producer to send messages
Important properties in the producer.properties file are:
metadata.broker.list=localhost:9092
compression.codec=none
We can start the console-based producer to send messages, specifying the brokers to connect to using the broker-list parameter.
kafka-console-producer.bat --broker-list localhost:9092 --topic mykafkatopic
or
kafka-console-producer.sh --broker-list localhost:9092 --topic mykafkatopic
Now type in a test message.
Alternatively, you can configure your brokers to auto-create topics when a message is published to a non-existent topic.
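Instead of the console producer, you can also send messages from Java code. Here is a minimal sketch using the new producer API shipped with 0.8.2 (org.apache.kafka.clients.producer.KafkaProducer); the class name ProducerDemo and the message text are placeholders:

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class ProducerDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // the broker started above
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        Producer<String, String> producer = new KafkaProducer<>(props);
        // Send one message to the topic created earlier
        producer.send(new ProducerRecord<String, String>("mykafkatopic", "hello from the Java producer"));
        producer.close();
    }
}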
Starting a consumer to consume messages
The default properties for the consumer are defined in /config/consumer.properties. Important properties are:
group.id=test-consumer-group
The consumer group id is a string that uniquely identifies the consumer group that a consumer belongs to.
Start a console-based consumer as:
kafka-console-consumer.bat --zookeeper localhost:2181 --topic mykafkatopic --from-beginning
or
kafka-console-consumer.sh --zookeeper localhost:2181 --topic mykafkatopic --from-beginning
You will see the test message you typed in earlier.
You can type more messages in the producer window, and they will appear in the consumer window.
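The console consumer can likewise be replaced with a small Java program. The sketch below assumes the 0.8.x high-level consumer API (the ZooKeeper-based kafka.javaapi.consumer.ConsumerConnector); the class name ConsumerDemo is a placeholder:

import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Properties;

import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;

public class ConsumerDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("zookeeper.connect", "localhost:2181");   // same ZooKeeper the broker registered with
        props.put("group.id", "test-consumer-group");       // the group id from consumer.properties
        props.put("auto.offset.reset", "smallest");         // behaves like --from-beginning for a new group

        ConsumerConnector consumer = Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
        // Ask for one stream (thread) for our topic
        Map<String, List<KafkaStream<byte[], byte[]>>> streams =
                consumer.createMessageStreams(Collections.singletonMap("mykafkatopic", 1));
        ConsumerIterator<byte[], byte[]> it = streams.get("mykafkatopic").get(0).iterator();
        // Block and print each message as it arrives
        while (it.hasNext()) {
            System.out.println(new String(it.next().message()));
        }
    }
}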
Software Versions Tested Against
- kafka_2.10-0.8.2.1, JDK 1.8