By David Castillo on November 03, 2020

Introducing rpk container

Developing with Redpanda just got easier.


At Vectorized, operational simplicity and a good developer experience (DX) are two of our major goals. rpk was born because of that: you shouldn’t need to be a Linux expert to configure the machine where Redpanda will run. You shouldn’t need magic shell scripts to do basic stuff such as querying a topic’s health. We’ve been there, and we know it sucks.

We also think you shouldn’t need to follow long guides to get started (and yes, we’re working to make our onboarding guide even shorter and simpler!). A single command should do the trick.

Introducing: rpk container

Today I’m very happy to introduce rpk container, a group of rpk commands that will allow developers to deploy local multi-node Redpanda clusters in an instant! rpk container uses the official Redpanda image to spin up a local containerized cluster.

The best thing, though, is that this feature is also available for Mac users!

If you haven’t done so yet, follow the installation instructions for Linux or macOS so you can follow along.

rpk container leverages Docker. If you don’t have it yet, please follow the installation instructions for Docker (Linux users) or Docker Desktop for Mac (macOS users).

It’s important to note, however, that you won’t need to interact with Docker directly or have experience with it.

To get started, run rpk container start -n 3. This will start a 3-node cluster. You should see something like this (the addresses may vary):

rpk container start will take a minute the first time you run it, since it will download the latest stable version of Redpanda. The next time you run it, it should be much quicker.

$ rpk container start -n 3
  NODE ID  ADDRESS          CONFIG
  0        172.24.1.2:9092  /home/david/.rpk/cluster/node-0/conf/redpanda.yaml
  1        172.24.1.4:9092  /home/david/.rpk/cluster/node-1/conf/redpanda.yaml
  2        172.24.1.3:9092  /home/david/.rpk/cluster/node-2/conf/redpanda.yaml

Cluster started! You may use 'rpk api' to interact with the cluster. E.g:

rpk api status

It says we can check our cluster with rpk api status. Let’s try that!

$ rpk api status
  Redpanda Cluster Status                   
                                            
  0 (172.24.1.2:9092)      (No partitions)  
                                            
  1 (172.24.1.3:9092)      (No partitions)  
                                            
  2 (172.24.1.4:9092)      (No partitions)

All of the rpk api subcommands will detect the local cluster and use its addresses, so you don’t have to configure anything or keep track of IPs and ports.

For example, you can run rpk api topic create and it will work!

$ rpk api topic create -p 6 -r 3 new-topic
Created topic 'new-topic'. Partitions: 6, replicas: 3, cleanup policy: 'delete'

Check our previous blog post on getting started with Redpanda to learn more about rpk api.

To stop a cluster, run rpk container stop:

$ rpk container stop              
Stopping node 2
Stopping node 0
Stopping node 1

And you can restart it by running rpk container start again.

$ rpk container start

Found an existing cluster:

  NODE ID  ADDRESS             CONFIG
  2        192.168.2.16:39053  /home/0x5d/.rpk/cluster/node-2/conf/redpanda.yaml
  0        192.168.2.16:42723  /home/0x5d/.rpk/cluster/node-0/conf/redpanda.yaml
  1        192.168.2.16:43475  /home/0x5d/.rpk/cluster/node-1/conf/redpanda.yaml

Your data and configuration will still be there 😉:

$ rpk api topic list
  Name       Partitions  Replicas  
  new-topic  6           3

Finally, if you wanna wipe out all the cluster data and configuration, you can use rpk container purge.

$ rpk container purge   
Stopping node 0
Stopping node 2
Stopping node 1
Deleted data for node 2
Deleted data for node 0
Deleted data for node 1
Deleted cluster data.

It’s also worth mentioning that Redpanda’s API is Kafka®-compatible, so you can point your existing code or Kafka® client at the local cluster’s addresses and keep hacking. You don’t need to change your existing code, deploy ZooKeeper, craft complex docker-compose files, or maintain obscure bash scripts.
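As a quick sketch of that compatibility, here’s how you might produce and consume a message with kafkacat, a generic Kafka CLI client (this assumes you have kafkacat installed, and uses the broker address from the earlier example; substitute whichever address rpk container start printed for you):

```shell
# Produce one message to the topic we created earlier, pointing kafkacat
# at one of the local cluster's brokers. The address is an example taken
# from the output above; yours may differ.
echo '{"hello": "redpanda"}' | kafkacat -b 172.24.1.2:9092 -t new-topic -P

# Read the topic back, exiting once the end of the partitions is reached (-e).
kafkacat -b 172.24.1.2:9092 -t new-topic -C -e
```

Any other Kafka client, in any language, works the same way: hand it the broker addresses and use it as you would against a Kafka cluster.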

Announcing the official vectorized/redpanda Docker image

The ideal way to run a database such as Redpanda is as a “standalone” process. You can achieve that by deploying each Redpanda broker on a different machine, and using the systemd units included in our pre-built packages for Debian and RHEL systems to achieve as much isolation as possible (i.e. by running sudo systemctl start redpanda). This also guarantees you will get the best results out of rpk tune.

However, we know that’s not always possible. In a world where Docker & Kubernetes have become the standard, we need to provide the tools for you to be able to get the best out of Redpanda, no matter how you choose to deploy it.

So, thanks to everyone in our Community Slack workspace who shared their interest in running Redpanda in a container, to Roko, our Head of Solutions Engineering, who gathered all the feedback, and to Dimitris, who has been hard at work enabling the continuous release of new Redpanda Docker images, I’m glad to say there’s now an official vectorized/redpanda Docker image that you can try today.

This is the image rpk container uses behind the curtains, so if you followed the brief guide above, you already have it. If you haven’t, you can pull it by running docker pull vectorized/redpanda.

It works out of the box, so if you wanna start a single broker you can run docker run -ti -p 9092:9092 vectorized/redpanda and start using it right away. However, for dev environments rpk container start is the recommended way.

We hope rpk container and our official Docker image make it easier to deploy Redpanda both locally and in production, and help you stay productive and focused on writing code that matters.

If you have any feedback regarding this post, rpk, or Redpanda, or if you just wanna chat, join us on Slack!