5 min read

Kafka KRaft Strimzi behind Nginx ingress controller TLS/SSL

The Strimzi Kafka operator automates the deployment and configuration of Apache Kafka clusters on Kubernetes.

This blog focuses on configuring mTLS behind the Nginx ingress controller. Kafka will be deployed in KRaft mode, which, at the time of writing, Strimzi states is not yet ready for production use.

There are already well-written articles on the Strimzi blog about deploying Kafka in KRaft mode with Strimzi and about exposing Kafka behind an ingress.

Kafka Node Pools: Supporting KRaft (ZooKeeper-less Apache Kafka)
Strimzi provides a way to run an Apache Kafka cluster on Kubernetes in various deployment configurations.
Accessing Kafka: Part 5 - Ingress

Deploying the Strimzi operator

To deploy Strimzi on a Kubernetes cluster, the Helm chart can be used for a quick installation.

helm repo add strimzi https://strimzi.io/charts/
helm pull strimzi/strimzi-kafka-operator --untar

Strimzi uses feature gates to control which features are enabled. KRaft mode needs the UseKRaft and KafkaNodePools feature gates; the UnidirectionalTopicOperator gate is enabled as well, since the Topic Operator needs it to work in KRaft mode.

Updating the values.yaml from the helm chart above:

extraEnvs:
  - name: STRIMZI_FEATURE_GATES
    value: "+UseKRaft,+KafkaNodePools,+UnidirectionalTopicOperator"

Deploying the modified chart:

helm upgrade --install strimzi ./strimzi-kafka-operator

Kafka cluster and nodepools

After deploying the Strimzi Kafka operator, it needs to be fed custom resources that describe the Kafka cluster to deploy.

The Strimzi custom resources are documented at the link below.

Configuring Strimzi (0.38.0)

Confused about Kubernetes operators? See the blog article below.

Creating kubernetes operator using Kubebuilder
The focus of the article will be on creating an operator using Kubebuilder. Let’s create an operator which will create a pod running a simple…

To deploy Kafka in KRaft mode, a KafkaNodePool resource is needed.

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
  name: dual-role
  labels:
    strimzi.io/cluster: my-kafka
spec:
  replicas: 3
  roles:
    - controller
    - broker
  storage:
    type: jbod
    volumes:
      - id: 0
        type: persistent-claim
        size: 2Gi

This snippet defines a KafkaNodePool for the Kafka cluster. With the dual-role configuration, every node acts as both a KRaft controller and a broker; Kafka internally elects the active controller and the partition leaders.
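As an alternative to dual-role nodes, controllers and brokers can be split into dedicated pools. A sketch of that layout, where the pool names and replica counts are illustrative:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
  name: controllers
  labels:
    strimzi.io/cluster: my-kafka
spec:
  replicas: 3
  roles:
    - controller
  storage:
    type: jbod
    volumes:
      - id: 0
        type: persistent-claim
        size: 2Gi
---
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
  name: brokers
  labels:
    strimzi.io/cluster: my-kafka
spec:
  replicas: 3
  roles:
    - broker
  storage:
    type: jbod
    volumes:
      - id: 0
        type: persistent-claim
        size: 2Gi
```

Separate pools let controllers and brokers be sized and scaled independently, at the cost of running more pods.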

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-kafka
  annotations:
    strimzi.io/node-pools: enabled
    strimzi.io/kraft: enabled
spec:
  kafka:
    version: 3.6.0
    replicas: 3
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
      - name: tls
        port: 9093
        tls: true
        type: ingress
        authentication:
          type: tls
        configuration:
          bootstrap:
            host: bootstrap.example.com
            annotations:
              kubernetes.io/ingress.class: nginx
          brokers:
            - broker: 0
              host: broker-0.example.com
              annotations:
                kubernetes.io/ingress.class: nginx
            - broker: 1
              host: broker-1.example.com
              annotations:
                kubernetes.io/ingress.class: nginx
            - broker: 2
              host: broker-2.example.com
              annotations:
                kubernetes.io/ingress.class: nginx
    config:
      offsets.topic.replication.factor: 3
      transaction.state.log.replication.factor: 3
      transaction.state.log.min.isr: 2
      default.replication.factor: 3
      min.insync.replicas: 2
    storage:
      type: jbod
      volumes:
        - id: 0
          type: persistent-claim
          size: 100Gi
          deleteClaim: false
  zookeeper:
    replicas: 3
    storage:
      type: persistent-claim
      size: 100Gi
      deleteClaim: false
  entityOperator:
    userOperator: {}
    topicOperator: {}

The ZooKeeper-related configuration still has to be present because the CRD validates those fields, but in KRaft mode the Strimzi operator ignores the following:

  • spec.kafka.replicas
  • spec.kafka.storage
  • spec.zookeeper

  entityOperator:
    userOperator: {}
    topicOperator: {}

The entityOperator section above deploys the User and Topic Operators, so that Strimzi creates users and topics from these custom resources:

  • KafkaTopic
  • KafkaUser

After applying these resources, a three-node Kafka cluster will be running on the Kubernetes cluster.
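With the Topic Operator running, topics can be declared as custom resources. A minimal sketch, where the topic name and sizing are illustrative:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: testing-topic
  labels:
    strimzi.io/cluster: my-kafka
spec:
  partitions: 3
  replicas: 3
```

Strimzi reconciles this resource into an actual Kafka topic on the cluster.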

Nginx ingress part

To expose Kafka through the Nginx ingress controller, Nginx needs to be enabled in SSL passthrough mode. In this mode Nginx acts as a plain TCP proxy to the backend service and does not terminate TLS or handle certificates; the backend service does all the certificate work.

Deploying the nginx-ingress-controller in SSL passthrough mode requires explicit configuration; ssl-passthrough is disabled by default.

TLS/HTTPS - Ingress-Nginx Controller

To enable passthrough on the ingress controller, pass the option via args.

- args:
    - /nginx-ingress-controller
    - --enable-ssl-passthrough

This starts the ingress controller with the SSL passthrough feature enabled. Note that this only enables the feature; it is not applied to an Ingress unless explicitly requested there.
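If the controller is installed with the ingress-nginx Helm chart, the same flag can be set through values, assuming the upstream chart's controller.extraArgs field:

```yaml
# values.yaml fragment for the ingress-nginx Helm chart
controller:
  extraArgs:
    enable-ssl-passthrough: "true"
```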

Strimzi will itself add the annotations needed to enable SSL passthrough on the Ingress resources it creates:

  annotations:
    ingress.kubernetes.io/ssl-passthrough: "true"
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/backend-protocol: HTTPS
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"


Creating Kafka user

To create a Kafka user, we will use the Strimzi KafkaUser custom resource.

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: user1
  labels:
    strimzi.io/cluster: my-kafka
spec:
  authentication:
    type: tls

After applying this resource, Strimzi will create the user and the secrets needed for authentication. These secrets are stored in Kubernetes.
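The KafkaUser can also carry ACLs. A sketch, assuming the Kafka resource enables simple authorization (without it the ACLs have no effect); the topic name is illustrative:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: user1
  labels:
    strimzi.io/cluster: my-kafka
spec:
  authentication:
    type: tls
  # requires `authorization: type: simple` in the Kafka resource
  authorization:
    type: simple
    acls:
      - resource:
          type: topic
          name: testing-topic
        operation: Read
      - resource:
          type: topic
          name: testing-topic
        operation: Write
```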

Certificates

Strimzi will create all the certificates needed for Kafka. What we need are the cluster CA truststore and the user keystore.

CA truststore:

k get secret my-kafka-cluster-ca-cert -o jsonpath='{.data.ca\.p12}' | base64 -d > truststore.p12
k get secret my-kafka-cluster-ca-cert -o jsonpath='{.data.ca\.password}' | base64 -d > truststore.password

User keystore:

k get secret user1 -o jsonpath='{.data.user\.p12}' | base64 -d > user.p12
k get secret user1 -o jsonpath='{.data.user\.password}' | base64 -d > user.password
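To sanity-check the extracted stores, openssl can list the contents of a PKCS#12 file. The sketch below is self-contained: it generates a throwaway key and certificate as stand-ins for the Strimzi-issued ones (all file names and the CN are illustrative), then runs the same inspection command you would use on user.p12 or truststore.p12 with the extracted passwords:

```shell
# Generate a throwaway key and self-signed certificate (stand-ins for
# the Strimzi-issued user credentials).
openssl req -x509 -newkey rsa:2048 -nodes -keyout demo.key -out demo.crt \
  -subj "/CN=demo-user" -days 1 2>/dev/null
# Bundle them into a PKCS#12 keystore, the same format as user.p12.
openssl pkcs12 -export -in demo.crt -inkey demo.key \
  -out demo.p12 -passout pass:changeit -name demo-user
# Print the subject of the certificate inside the keystore; run the same
# pipeline against user.p12/truststore.p12 to verify they decode correctly.
openssl pkcs12 -in demo.p12 -passin pass:changeit -nokeys 2>/dev/null \
  | openssl x509 -noout -subject
```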

Kafka config:

ssl.truststore.location=./truststore.p12
ssl.truststore.password=ENTER_TRUSTSTORE_PASSWORD
ssl.keystore.location=./user.p12
ssl.keystore.password=ENTER_KEYSTORE_PASSWORD
ssl.key.password=ENTER_KEYSTORE_PASSWORD
ssl.enabled.protocols=TLSv1.2
# an empty value disables server hostname verification
ssl.endpoint.identification.algorithm=
security.protocol=SSL

Testing out the connection:

./kafka-console-producer.sh --producer.config config.tls --bootstrap-server bootstrap.example.com:443 --topic testing-topic

Common problems

Nginx returns its default certificate when the server presents itself

This happens when SSL passthrough is not active and Nginx terminates TLS with its own self-signed default certificate; see the section above about enabling SSL passthrough.
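A quick way to check which certificate is actually served is openssl s_client. The sketch below is self-contained: it spins up a local TLS server on port 8443 as a stand-in for the ingress endpoint (the hostname and port are illustrative); against the real cluster you would connect to bootstrap.example.com:443 instead:

```shell
# Stand-in server certificate; in the failing case Nginx would instead
# present its "Kubernetes Ingress Controller Fake Certificate".
openssl req -x509 -newkey rsa:2048 -nodes -keyout srv.key -out srv.crt \
  -subj "/CN=bootstrap.example.com" -days 1 2>/dev/null
# Local TLS server playing the role of the ingress endpoint.
openssl s_server -accept 8443 -cert srv.crt -key srv.key -quiet &
SRV_PID=$!
sleep 1
# Print the subject of the certificate the server presents. If SSL
# passthrough is working, this is the Kafka listener certificate,
# not the Nginx default one.
openssl s_client -connect localhost:8443 -servername bootstrap.example.com \
  </dev/null 2>/dev/null | openssl x509 -noout -subject
kill $SRV_PID
```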