Unable to send messages to the kafka topics #185
Hey @chandana194, Can you use …
Hi @mostafa, thank you for your response. Yes, as you mentioned, k6 says the messages are produced successfully, but they are not reaching the kafka topics. We have verified all the logs on the kafka side. I will try changing it to duration and see how it behaves.
@mostafa Tried changing the iterations to duration; observed the same behaviour:
```
data_received................: 0 B    0 B/s
data_sent....................: 0 B    0 B/s
iteration_duration...........: avg=143.29µs min=143.29µs med=143.29µs max=143.29µs p(90)=143.29µs p(95)=143.29µs
kafka_writer_acks_required...: 0      min=0 max=0
kafka_writer_async...........: 0.00%  ✓ 0 ✗ 2
kafka_writer_attempts_max....: 0      min=0 max=0
kafka_writer_batch_bytes.....: 1.3 kB 27 B/s
kafka_writer_batch_max.......: 1      min=1 max=1
kafka_writer_batch_size......: 2      0.040419/s
kafka_writer_batch_timeout...: 0s     min=0s max=0s
kafka_writer_error_count.....: 1      0.02021/s
kafka_writer_message_bytes...: 2.0 kB 40 B/s
kafka_writer_message_count...: 3      0.060629/s
kafka_writer_read_timeout....: 0s     min=0s max=0s
kafka_writer_retries_count...: 1      0.02021/s
kafka_writer_wait_seconds....: avg=0s min=0s med=0s max=0s p(90)=0s p(95)=0s
kafka_writer_write_count.....: 3      0.060629/s
kafka_writer_write_seconds...: avg=3.17s min=2.13s med=3.17s max=4.22s p(90)=4.01s p(95)=4.11s
kafka_writer_write_timeout...: 0s     min=0s max=0s
vus..........................: 0      min=0 max=1
vus_max......................: 1      min=0 max=1
```
@mostafa Is there a way to print that error message?
@chandana194 Absolutely! Just add …
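The suggestion above is truncated in this transcript; based on the configs shown later in the thread, it most likely refers to the `connectLogger` option, which makes xk6-kafka log connection-level activity and errors. A minimal sketch under that assumption (broker and topic names are placeholders):

```javascript
// Sketch only: enable connection logging on a Writer so produce errors
// surface in the k6 output instead of failing silently.
// `connectLogger` is inferred from the Reader/Writer configs later in
// this thread; broker/topic values are placeholders.
import { Writer } from "k6/x/kafka";

const writer = new Writer({
    brokers: ["localhost:9092"],
    topic: "my-topic",
    connectLogger: true, // log connection attempts, writes, and errors
});
```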
Thank you @mostafa, it helped me to get the logs. I am getting a timeout issue! Could you please let me know how I can specify both the security protocol (SASL_SSL) and the SASL mechanism (sasl.mechanism=SCRAM-SHA-512)? Currently I am using only the config below:

```javascript
const saslConfig = {
    username: "<username>",
    password: "<pwd>",
    algorithm: SASL_SCRAM_SHA512,
    clientCertPem: "pathto/client.pem",
    serverCaPem: "pathto/serverCa.pem",
};
```

```
INFO[0016] writing 1 messages to pendingprice (partition: 0)
INFO[0018] initializing kafka reader for partition 0 of pendingprice starting at offset first offset
INFO[0023] the kafka reader for partition 0 of pendingprice is seeking to offset 10238021
INFO[0023] looking up offset of kafka reader for partition 0 of pendingprice: first offset
INFO[0024] the kafka reader got an unknown error reading partition 0 of pendingprice at offset 10238021: read tcp 172.31.230.127:57108->172.16.49.27:443: i/o timeout
DEBU[0024] Regular duration is done, waiting for iterations to gracefully finish  executor=constant-vus gracefulStop=30s scenario=default
INFO[0024] initializing kafka reader for partition 0 of pendingprice starting at offset 10238021
```
AFAIK, when you're using SCRAM, you must enable SSL (via the TLS config), since SCRAM is an authentication mechanism that requires SSL. Also, remove `clientCertPem`.
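A sketch of the combined SCRAM-over-TLS setup being suggested, assuming the `saslConfig`/`tlsConfig` shapes used elsewhere in this thread (credentials and paths are placeholders):

```javascript
// Sketch only: SCRAM handles authentication, while encryption comes from
// the TLS config, so there is no separate "SASL_SSL" switch to set here.
import { SASL_SCRAM_SHA512, TLS_1_2 } from "k6/x/kafka";

const saslConfig = {
    username: "<username>",
    password: "<pwd>",
    algorithm: SASL_SCRAM_SHA512,
};

const tlsConfig = {
    enableTls: true,
    minVersion: TLS_1_2,
    // Server CA only; no client cert/key unless the broker requires mTLS.
    serverCaPem: "pathto/serverCa.pem",
};
```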
@mostafa Tried enabling SSL through the TLS config and removed clientCertPem, but I'm still stuck on resolving the timeout issue below!
I think there is no issue with the SASL and TLS config, as k6 is able to connect to the broker and list the existing topics!
@chandana194 Honestly, this is hard to reproduce locally, and since we don't have logs from the server side, that makes it even harder. In the meantime, I added #187.
@mostafa I even tried with a different offset, but by default it's taking the same offset.
@mostafa I am able to produce messages without any issues. The read tcp timeout error comes up only when I try to consume messages. Tried both a Confluent topic (no SASL/SSL enabled) and Strimzi (SASL/SSL enabled).
Below is the code I used to test the Confluent topic consumer (the producer is working fine):
@mostafa Tried the config below but no luck! Still seeing the same error: `const reader = new Reader({` …
@chandana194 At that time I tried to make a quick fix by adding …
@mostafa Have tried building the new binary as suggested above and still see the issue below. Reader config:

```javascript
const reader = new Reader({
  brokers: brokers,
  topic: topic,
  partition: 0,
  offset: -1,
  sasl: saslConfig,
  tls: tlsConfig,
  maxWait: 900000,
  groupID: "stage-test-v18",
  readBatchTimeout: 3000000,
  connectLogger: true,
});
```

One more observation: even though I point partition to 0, it tries to read from all partitions!

```
INFO[0005] entering loop for consumer group, stage-test-v18
INFO[0005] using 'range' balancer to assign group, stage-test-v18
INFO[0005] found member: k6@cnaga (github.com/segmentio/kafka-go)-8a987d62-9c57-4357-b1da-2c9d792b233f/[]byte(nil)
INFO[0005] found topic/partition: topic.pendingprice/0
INFO[0005] found topic/partition: topic.pendingprice/5
INFO[0005] found topic/partition: topic.pendingprice/10
INFO[0005] found topic/partition: topic.pendingprice/13   <- listed all partitions
INFO[0005] assigned member/topic/partitions k6@cnagara-dfb77 (github.com/segmentio/kafka-go)-8a987d62-9c57-4357-b1da-2c9d792b233f/topic.pendingprice/[0 5 10 13 8 2 12 14 9 11 4 1 6 7 3]
INFO[0005] joinGroup succeeded for response, stage-test-v18. generationID=1, memberID=k6@cnaga (github.com/segmentio/kafka-go)-8a987d62-9c57-4357-b1da-2c9d792b233f
INFO[0005] Joined group stage-test-v18 as member k6@cnaga (github.com/segmentio/kafka-go)-8a987d62-9c57-4357-b1da-2c9d792b233f in generation 1
INFO[0005] Syncing 1 assignments for generation 1 as member k6@cnaga (github.com/segmentio/kafka-go)-8a987d62-9c57-4357-b1da-2c9d792b233f
INFO[0005] sync group finished for group, stage-test-v18
INFO[0005] subscribed to topics and partitions: map[{topic:topic.pendingprice partition:0}:-2 {topic:topic.pendingprice partition:1}:-2 {topic:topic.pendingprice partition:2}:-2 {topic:topic.pendingprice partition:3}:-2 {topic:topic.pendingprice partition:4}:-2 {topic:topic.pendingprice partition:5}:-2 {topic:topic.pendingprice partition:6}:-2 {topic:topic.pendingprice partition:7}:-2 {topic:topic.pendingprice partition:8}:-2 {topic:topic.pendingprice partition:9}:-2 {topic:topic.pendingprice partition:10}:-2 {topic:topic.pendingprice partition:11}:-2 {topic:topic.pendingprice partition:12}:-2 {topic:topic.pendingprice partition:13}:-2 {topic:topic.pendingprice partition:14}:-2]
INFO[0005] started heartbeat for group, stage-test-v18 [3s]
INFO[0005] initializing kafka reader for partition 5 of topic.pendingprice starting at offset first offset
INFO[0005] started commit for group stage-test-v18
INFO[0005] initializing kafka reader for partition 2 of topic.pendingprice starting at offset first offset
INFO[0005] initializing kafka reader for partition 10 of topic.pendingprice starting at offset first offset
INFO[0005] initializing kafka reader for partition 9 of topic.pendingprice starting at offset first offset
INFO[0010] the kafka reader for partition 10 of topic.pendingprice is seeking to offset 2928535
INFO[0010] the kafka reader for partition 7 of topic.pendingprice is seeking to offset 7170991
INFO[0010] the kafka reader for partition 14 of topic.pendingprice is seeking to offset 2931028
.....
INFO[0011] the kafka reader got an unknown error reading partition 7 of topic.pendingprice at offset 7170991: read tcp <ip>:49551-><ip>:443: i/o timeout
INFO[0011] the kafka reader got an unknown error reading partition 10 of topic.pendingprice at offset 2928535: read tcp <ip>:49550-><ip>:443: i/o timeout
...
ERRO[0021] Unable to read messages. error="Unable to read messages."
```
@chandana194 As far as I can see, the reader initializes and seeks to the last offset on certain partitions. However, you set offset: -1, which causes the reader to hang, as explained by @andxr in #15 (comment).
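The offset point above can be sketched as follows, assuming the Reader options used elsewhere in this thread (broker and topic names are placeholders): start from an explicit, known-valid offset rather than a magic negative value.

```javascript
// Sketch only: read a single partition from a concrete offset instead of
// offset: -1, which can make the reader hang waiting indefinitely.
import { Reader } from "k6/x/kafka";

const reader = new Reader({
    brokers: ["localhost:9092"],
    topic: "my-topic",
    partition: 0,
    offset: 0, // explicit starting offset; 0 means "from the beginning"
    connectLogger: true,
});
```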
@mostafa Thank you for the quick response. I made the changes as suggested and tried both options below; in both cases I observed the same timeout issue!
Also, if you want to consume from a consumer group, you need to set …
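The consumer-group setup being described can be sketched as follows, assuming the `groupID` option shown in the configs in this thread. When a group ID is set, partition and offset assignment is handled by the group coordinator, so `partition` and `offset` should be omitted (broker, topic, and group names are placeholders):

```javascript
// Sketch only: a consumer-group Reader. Group membership replaces
// manual partition/offset selection, which is why the earlier config
// mixing groupID with partition: 0 read from all partitions.
import { Reader } from "k6/x/kafka";

const reader = new Reader({
    brokers: ["localhost:9092"],
    topic: "my-topic",
    groupID: "my-consumer-group",
    connectLogger: true,
});
```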
Can you please test the changes in this PR? Update: …
Sure, will try this and update you. FYI, when I was testing earlier, I made sure that there were enough messages to consume.
@mostafa Have tried with this branch, but no luck; getting the same error!
@chandana194 …
Hey @mostafa, I am also getting the same error.

```javascript
import {
  Writer,
  Reader,
  Connection,
  SchemaRegistry,
  SCHEMA_TYPE_STRING,
  SASL_PLAIN,
  TLS_1_2,
} from "k6/x/kafka"; // import kafka extension

const brokers = ["uri:25073"];
const topic = "xk6_kafka_json_topic";

// SASL config
const saslConfig = {
  username: "username",
  password: "password",
  algorithm: SASL_PLAIN,
};

// TLS config
const tlsConfig = {
  // Enable/disable TLS (default: false)
  enableTls: true,
  // Skip TLS verification if the certificate is invalid or self-signed (default: false)
  insecureSkipTlsVerify: true,
  minVersion: TLS_1_2,
  // Only needed if you have a custom or self-signed certificate and keys
  // clientCertPem: "/dbaas/kafka/ca-certificate.crt",
  // clientKeyPem: "/path/to/your/client-key.pem",
  serverCaPem: "dbaas/kafka/ca-certificate.crt",
};

const offset = 0;
const partition = 0;

const writer = new Writer({
  brokers: brokers,
  topic: topic,
  sasl: saslConfig,
  tls: tlsConfig,
  connectLogger: true,
});

const reader = new Reader({
  brokers: brokers,
  topic: topic,
  partition: partition,
  offset: offset,
  sasl: saslConfig,
  tls: tlsConfig,
  connectLogger: true,
  maxWait: 900000,
});

const connection = new Connection({
  address: brokers[0],
  sasl: saslConfig,
  tls: tlsConfig,
});

const schemaRegistry = new SchemaRegistry();
// Can accept a SchemaRegistryConfig object

if (__VU == 0) {
  // Create a topic on initialization (before producing messages)
  connection.createTopic({
    // TopicConfig object
    topic: topic,
  });
}

export default function () {
  // Fetch the list of all topics
  const topics = connection.listTopics();
  console.log(topics); // list of topics

  // Produce a message to Kafka
  writer.produce({
    // ProduceConfig object
    messages: [
      // Message object(s)
      {
        key: schemaRegistry.serialize({
          data: "my-key",
          schemaType: SCHEMA_TYPE_STRING,
        }),
        value: schemaRegistry.serialize({
          data: "my-value",
          schemaType: SCHEMA_TYPE_STRING,
        }),
      },
    ],
  });

  // Consume messages from Kafka
  let messages = reader.consume({
    // ConsumeConfig object
    limit: 10,
  });
  // your messages
  console.log(messages);

  let deserializedValue = schemaRegistry.deserialize({
    data: messages[0].value,
    schemaType: SCHEMA_TYPE_STRING,
  });
}

export function teardown(data) {
  if (__VU == 0) {
    // Delete the topic
    connection.deleteTopic(topic);
  }
  writer.close();
  reader.close();
  connection.close();
}
```

Logs from k6:
```
  execution: local
     script: /kafka/produce_consume_message.js
     output: -

  scenarios: (100.00%) 1 scenario, 1 max VUs, 40s max duration (incl. graceful stop):
           * default: 1 looping VUs for 10s (gracefulStop: 30s)

INFO[0013] ["xk6_kafka_json_topic"]  source=console
INFO[0015] writing 1 messages to xk6_kafka_json_topic (partition: 0)
INFO[0017] initializing kafka reader for partition 0 of xk6_kafka_json_topic starting at offset first offset
INFO[0022] the kafka reader for partition 0 of xk6_kafka_json_topic is seeking to offset 0
INFO[0022] looking up offset of kafka reader for partition 0 of xk6_kafka_json_topic: first offset
INFO[0022] the kafka reader got an unknown error reading partition 0 of xk6_kafka_json_topic at offset 0: read tcp 10.254.15.12:51727->x.x.x.x:25073: i/o timeout
INFO[0022] initializing kafka reader for partition 0 of xk6_kafka_json_topic starting at offset 0
INFO[0027] the kafka reader for partition 0 of xk6_kafka_json_topic is seeking to offset 0
INFO[0028] the kafka reader got an unknown error reading partition 0 of xk6_kafka_json_topic at offset 0: read tcp 10.254.15.12:51729->x.x.x.x:25073: i/o timeout
INFO[0028] initializing kafka reader for partition 0 of xk6_kafka_json_topic starting at offset 0
INFO[0032] the kafka reader for partition 0 of xk6_kafka_json_topic is seeking to offset 0
INFO[0033] the kafka reader got an unknown error reading partition 0 of xk6_kafka_json_topic at offset 0: read tcp 10.254.15.12:51731->x.x.x.x:25073: i/o timeout
INFO[0033] initializing kafka reader for partition 0 of xk6_kafka_json_topic starting at offset 0
INFO[0037] the kafka reader for partition 0 of xk6_kafka_json_topic is seeking to offset 0
INFO[0038] the kafka reader got an unknown error reading partition 0 of xk6_kafka_json_topic at offset 0: read tcp 10.254.15.12:51737->x.x.x.x:25073: i/o timeout
INFO[0038] initializing kafka reader for partition 0 of xk6_kafka_json_topic starting at offset 0
INFO[0042] the kafka reader for partition 0 of xk6_kafka_json_topic is seeking to offset 0
INFO[0043] the kafka reader got an unknown error reading partition 0 of xk6_kafka_json_topic at offset 0: read tcp 10.254.15.12:51739->x.x.x.x:25073: i/o timeout
INFO[0043] initializing kafka reader for partition 0 of xk6_kafka_json_topic starting at offset 0
INFO[0048] the kafka reader for partition 0 of xk6_kafka_json_topic is seeking to offset 0
INFO[0048] the kafka reader got an unknown error reading partition 0 of xk6_kafka_json_topic at offset 0: read tcp 10.254.15.12:51741->x.x.x.x:25073: i/o timeout
INFO[0049] initializing kafka reader for partition 0 of xk6_kafka_json_topic starting at offset 0
ERRO[0053] Unable to read messages. error="Unable to read messages."
INFO[0053] the kafka reader for partition 0 of xk6_kafka_json_topic is seeking to offset 0
INFO[0054] the kafka reader got an unknown error reading partition 0 of xk6_kafka_json_topic at offset 0: read tcp 10.254.15.12:51743->x.x.x.x:25073: i/o timeout
INFO[0054] initializing kafka reader for partition 0 of xk6_kafka_json_topic starting at offset 0

running (48.8s), 0/1 VUs, 0 complete and 1 interrupted iterations
default ✓ [======================================] 1 VUs  10s
WARN[0057] No script iterations finished, consider making the test duration longer
INFO[0058] error initializing the kafka reader for partition 0 of xk6_kafka_json_topic: [3] Unknown Topic Or Partition: the request is for a topic or partition that does not exist on this broker
INFO[0058] initializing kafka reader for partition 0 of xk6_kafka_json_topic starting at offset 0

█ teardown

data_received................: 0 B  0 B/s
data_sent....................: 0 B  0 B/s
iteration_duration...........: avg=281.79ms min=281.79ms med=281.79ms max=281.79ms p(90)=281.79ms p(95)=281.79ms
kafka_writer_acks_required...: 0    min=0 max=0
kafka_writer_async...........: 0.00% ✓ 0 ✗ 1
kafka_writer_attempts_max....: 0    min=0 max=0
kafka_writer_batch_bytes.....: 36 B 0.7369566955647858 B/s
kafka_writer_batch_max.......: 1    min=1 max=1
kafka_writer_batch_size......: 1    0.020471/s
kafka_writer_batch_timeout...: 0s   min=0s max=0s
kafka_writer_error_count.....: 0    0/s
kafka_writer_message_bytes...: 36 B 0.7369566955647858 B/s
kafka_writer_message_count...: 1    0.020471/s
kafka_writer_read_timeout....: 0s   min=0s max=0s
kafka_writer_retries_count...: 0    0/s
kafka_writer_wait_seconds....: avg=0s min=0s med=0s max=0s p(90)=0s p(95)=0s
kafka_writer_write_count.....: 1    0.020471/s
kafka_writer_write_seconds...: avg=1.68s min=1.68s med=1.68s max=1.68s p(90)=1.68s p(95)=1.68s
kafka_writer_write_timeout...: 0s   min=0s max=0s
vus..........................: 0    min=0 max=1
vus_max......................: 1    min=0 max=1
```
Hey @manishsaini7, have you tried compiling and using the changes in PR #189? Also, you're producing just a single message yet waiting to receive 10 messages, which causes the consumer to wait forever, until it times out.
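The mismatch described above can be sketched as follows, assuming the `writer`, `reader`, and `schemaRegistry` objects configured in the script earlier in this thread: produce at least as many messages as the consume limit, so `reader.consume()` is not left blocking on messages that never arrive.

```javascript
// Sketch only: align the number of produced messages with the consume
// limit. `writer`, `reader`, `schemaRegistry`, and SCHEMA_TYPE_STRING
// are assumed to be set up as in the script above.
const LIMIT = 10;

export default function () {
  const messages = [];
  for (let i = 0; i < LIMIT; i++) {
    messages.push({
      value: schemaRegistry.serialize({
        data: `my-value-${i}`,
        schemaType: SCHEMA_TYPE_STRING,
      }),
    });
  }
  writer.produce({ messages: messages });

  // Now there are at least LIMIT messages available to read.
  reader.consume({ limit: LIMIT });
}
```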
@mostafa Tried with the latest script you shared. I am able to consume the messages now :) but one observation:
there are multiple partitions, and I tried to consume only 2 messages.
@chandana194 …
Have just gone through the complete logs and I don't see the above error message!
@chandana194 Are you consuming from a consumer group, that is, using a consumer ID? |
Yes I am consuming using a consumer ID |
Then you shouldn't care about that offset. I'll fix it. And I think the issue is fixed by now. |
@mostafa Could you please let me know if there is any ETA on releasing this fix to the main branch?
@chandana194 I can release it right away if you confirm it works. |
Yes @mostafa, consuming messages through group IDs is working fine now.
@chandana194 Awesome! The changes are released in v0.16.1. I'll close this ticket, as I consider this fixed. If you have further questions, feel free to reopen this issue or create a new one.
@mostafa One issue I have observed with the latest version: when I test with more users (e.g. 50 VUs for 30 seconds), the k6 summary report does not print the message count/metrics properly. From the backend I see that more than 10K messages are produced, but the summary shows a much smaller number: kafka_writer_message_count...: 1931 78.300207/s
@chandana194 You have too many errors: …
@mostafa Both the failed and success counts don't match the message count produced in the backend; I can see a huge difference! Also, I would like to know if there is a way to control the number of messages produced per second.
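On pacing messages per second: xk6-kafka itself does not throttle produce calls, but k6's `constant-arrival-rate` executor can pace iterations. A sketch under the assumption that each iteration makes exactly one `writer.produce()` call:

```javascript
// Sketch only: pace produce calls with k6's constant-arrival-rate executor.
// With one writer.produce() per iteration, `rate` approximates the number
// of produce calls per second; scenario name is a placeholder.
export const options = {
  scenarios: {
    paced_produce: {
      executor: "constant-arrival-rate",
      rate: 100,           // iterations started per timeUnit
      timeUnit: "1s",      // => ~100 produce calls per second
      duration: "30s",
      preAllocatedVUs: 50, // VUs available to sustain the rate
    },
  },
};
```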
@chandana194 …
Hi,
I am trying to send messages to already available kafka topics. I am able to successfully connect to the kafka broker and read the existing topics, but when I try to produce and consume messages, I don't get any error from k6, and the messages don't reach the topic. Could you please help me with this?
Script snippet:
K6 run logs: