kafka
Here are 7,694 public repositories matching this topic...
Currently, ksqlDB causes a full topic scan whenever performing a pull query over a stream. This is inefficient when looking up specific sets of data, but necessary due to how pull queries are implemented over streams.
For an implementation of #126 (a PostgreSQL driver with SKIP LOCKED), I create a SQL table for each consumer group containing the offsets ready to be consumed. The name for these tables is built by concatenating a prefix, the name of the topic, and the name of the consumer group. In some of the test cases in the test suite, UUIDs are used for both the topic and the consumer group. Each UUID has
redpanda-data/redpanda#5356 introduced `kafka_offset` to differentiate raft offsets from the translated offsets (Kafka offsets). We should switch all `model::offset` to `kafka_offset` where we use the former to store the translated offsets.
I have noticed when ingesting backlog (older-timestamped data) that the "Messages per minute" line graph and the "sources" data do not line up. "Messages per minute" appears to be correct for the ingest rate, but the sources breakdown below it only shows messages whose timestamps fall within the time window. This means that if in the last hour you've ingested logs from two days ago, the data is