stream-processing
Here are 708 public repositories matching this topic...
There is no technical difficulty in supporting the includeValue option; it looks like we are just missing it at the API level. See the SO question.
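The excerpt does not say which API is missing the option, so any example here is an assumption. For reference only, this is a minimal sketch of how an includeValue flag already appears on Hazelcast's IMap entry-listener API, assuming that is the API family being discussed; the map name and entries are made up for illustration.

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.map.IMap;
import com.hazelcast.map.listener.EntryAddedListener;

public class IncludeValueSketch {
    public static void main(String[] args) throws InterruptedException {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        IMap<String, String> map = hz.getMap("events"); // hypothetical map name

        // The boolean flag decides whether listeners receive entry values or only keys.
        // Assumption: the missing includeValue option mirrors this existing one.
        map.addEntryListener(
                (EntryAddedListener<String, String>) e ->
                        System.out.println(e.getKey() + " -> " + e.getValue()),
                true /* includeValue */);

        map.put("k", "v");       // triggers the listener with the value included
        Thread.sleep(1000);      // give the async listener time to fire
        hz.shutdown();
    }
}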
We see that many devs from Germany are using SigNoz. It would be good if the README were translated into German.
This would make it easier for devs from Germany and other German-speaking countries to get started with SigNoz.
Describe the bug
Table scans should support the full suite of operations in the WHERE clause; however, LIKE is not supported.
To Reproduce
Issue any query with WHERE key LIKE '%foo%';
Expected behavior
The query works.
Actual behavior
ksql> set 'ksql.query.pull.table.scan.enabled'='true';
Successfully changed local property 'ksql.query.pull.table.scan.en
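For completeness, a pull query with LIKE can also be issued programmatically. The sketch below uses the ksqlDB Java client and assumes a locally running server on localhost:8088, a hypothetical table MY_TABLE with a string key column, and that ksql.query.pull.table.scan.enabled has already been enabled (e.g. via the CLI as shown above).

import io.confluent.ksql.api.client.BatchedQueryResult;
import io.confluent.ksql.api.client.Client;
import io.confluent.ksql.api.client.ClientOptions;
import io.confluent.ksql.api.client.Row;
import java.util.List;

public class PullQueryLikeRepro {
    public static void main(String[] args) throws Exception {
        // Connect to the ksqlDB server (host and port are assumptions).
        ClientOptions options = ClientOptions.create()
                .setHost("localhost")
                .setPort(8088);
        Client client = Client.create(options);

        // MY_TABLE and its key column are hypothetical; the LIKE predicate is
        // the shape of query this bug report is about.
        BatchedQueryResult result =
                client.executeQuery("SELECT * FROM MY_TABLE WHERE key LIKE '%foo%';");
        List<Row> rows = result.get(); // expected: matching rows; actual: the query is rejected
        rows.forEach(row -> System.out.println(row.values()));

        client.close();
    }
}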
For an implementation of #126 (PostgreSQL driver with SKIP LOCKED), I create a SQL table for each consumer group containing the offsets ready to be consumed. The name for these tables is built by concatenating some prefix, the name of the topic, and the name of the consumer group. In some of the test cases in the test suite, UUIDs are used for both the topic and the consumer group. Each UUID has
It can be very difficult to piece together a reasonable estimate of the history of events from the current worker logs, because none of them have timestamps.
To that end, I think we should add timestamps to the logs.
This has some cons:
- We can't just use @printf like we have been until now. We need to either include a timestamp in every @printf call (laborious and error prone) or c
For example, given a simple pipeline such as:
Pipeline p = Pipeline.create();
p.readFrom(TestSources.items("the", "quick", "brown", "fox"))
.aggregate(aggregator)
.writeTo(Sinks.logger());
I'd like aggregator to be something requiring a non-serialisable dependency to do its work.
I know I can do this:
Pipeline p = Pipeline.create();
p.readFrom(TestSource
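The excerpt is cut off before the workaround. For context, here is a minimal runnable version of the pipeline it describes, assuming Hazelcast Jet 4.x; AggregateOperations.counting() is only a stand-in for the custom aggregator that would need the non-serialisable dependency.

import com.hazelcast.jet.Jet;
import com.hazelcast.jet.JetInstance;
import com.hazelcast.jet.aggregate.AggregateOperations;
import com.hazelcast.jet.pipeline.Pipeline;
import com.hazelcast.jet.pipeline.Sinks;
import com.hazelcast.jet.pipeline.test.TestSources;

public class AggregatorSketch {
    public static void main(String[] args) {
        // The pipeline from the excerpt, with a placeholder aggregate operation.
        Pipeline p = Pipeline.create();
        p.readFrom(TestSources.items("the", "quick", "brown", "fox"))
         .aggregate(AggregateOperations.counting())
         .writeTo(Sinks.logger());

        // Run it on an embedded Jet member and wait for completion.
        JetInstance jet = Jet.newJetInstance();
        try {
            jet.newJob(p).join();
        } finally {
            jet.shutdown();
        }
    }
}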
I figured out a way to get the (x, y, z) data points for each frame from one hand previously, but I'm not sure how to do that for the new holistic model that they released. I am trying to get all the landmark data points for both hands as well as parts of the chest and face. Does anyone know how to extract the holistic landmark data / print it to a text file? Or at least give me some directions as to h