Quoting from the mail thread that was sent to the Kafka mailing list:
We have been using Kafka 0.9.0.1 (server and Java client libraries). So far we had been using it with plaintext transport, but recently we have been considering upgrading to SSL. It mostly works, except that a mis-configured producer (and even consumer) causes a hard-to-relate OutOfMemory exception, putting the JVM in which the client is running into a bad state. We can consistently reproduce that OOM very easily. We decided to check whether this is something that is fixed in 0.10.0.1, so we upgraded one of our test systems to that version (both server and client libraries), but we still see the same issue. Here's how it can be easily reproduced:
1. Enable the SSL listener on the broker via server.properties, as per the Kafka documentation:

listeners=PLAINTEXT://:9092,SSL://:9093
ssl.keystore.location=
ssl.keystore.password=pass
ssl.key.password=pass
ssl.truststore.location=
ssl.truststore.password=pass

2. Start ZooKeeper and the Kafka server
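For contrast, a client that is supposed to talk to the SSL listener on port 9093 would normally declare that intent explicitly via security.protocol, along with a truststore for verifying the broker's certificate. A sketch of such a client configuration, with placeholder paths and passwords:

```properties
# Client-side SSL settings (sketch; paths and passwords are placeholders).
# Without security.protocol=SSL the client speaks plaintext to the SSL port.
security.protocol=SSL
ssl.truststore.location=/path/to/client.truststore.jks
ssl.truststore.password=changeit
```

The reproduction below deliberately omits these keys, which is exactly the misconfiguration being described.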
3. Create an “oom-test” topic (which will be used for these tests):
kafka-topics.sh --zookeeper localhost:2181 --create --topic oom-test --partitions 1 --replication-factor 1

4. Create a simple producer which sends a single message to the topic via the Java (new producer) APIs:
import java.util.Properties;
import java.util.concurrent.Future;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
import org.apache.kafka.common.serialization.StringSerializer;

public class OOMTest {

    public static void main(final String[] args) throws Exception {
        final Properties kafkaProducerConfigs = new Properties();
        // Note: this points at the broker's SSL port, but no security.protocol
        // is set, so the producer speaks plaintext to it
        kafkaProducerConfigs.setProperty(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9093");
        kafkaProducerConfigs.setProperty(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        kafkaProducerConfigs.setProperty(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(kafkaProducerConfigs)) {
            System.out.println("Created Kafka producer");
            final String topicName = "oom-test";
            final String message = "Hello OOM!";
            // send a single message to the topic and wait for the acknowledgement
            final Future<RecordMetadata> future = producer.send(new ProducerRecord<>(topicName, message));
            future.get();
            System.out.println("Sent message '" + message + "' to topic '" + topicName + "'");
        }
    }
}
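The hard-to-relate OOM in this scenario is commonly attributed to the plaintext client reading the broker's TLS handshake bytes and interpreting the first four bytes as a big-endian message-size field, which produces an enormous buffer-allocation request. A minimal, self-contained sketch of that misinterpretation (the TLS record prefix here is illustrative, not captured from a real broker):

```java
import java.nio.ByteBuffer;

public class MisreadFrameSize {
    public static void main(String[] args) {
        // First bytes of a TLS handshake record: content type 0x16 (handshake),
        // protocol version 0x0303 (TLS 1.2), then the record length begins.
        final byte[] tlsRecordPrefix = {0x16, 0x03, 0x03, 0x00};
        // A plaintext client reads the first 4 bytes of a response as a
        // big-endian frame size and tries to allocate a buffer that large.
        final int misreadSize = ByteBuffer.wrap(tlsRecordPrefix).getInt();
        System.out.println(misreadSize); // prints 369296128, i.e. a ~369 MB allocation
    }
}
```

A handful of such misread "frames" is enough to exhaust a typical client heap, which is why the failure surfaces as an OutOfMemory error far away from the actual misconfiguration.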