stream-processing
Here are 732 public repositories matching this topic...
Is your feature request related to a problem? Please describe.
The first step towards app optimization: configure whyDidYouRender under the development environment.
Note: make sure it is compatible with all the custom hooks that we are using in the project.
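A minimal sketch of that setup, assuming the @welldone-software/why-did-you-render package and a wdyr.js file imported first in the app entry point (file name and options shown here are illustrative):

```javascript
// wdyr.js -- must be the first import in the app entry point.
import React from 'react';

if (process.env.NODE_ENV === 'development') {
  // Only pull the library into development builds.
  const whyDidYouRender = require('@welldone-software/why-did-you-render');
  whyDidYouRender(React, {
    trackAllPureComponents: true,
    trackHooks: true, // also report re-renders triggered by hooks
  });
}
```

For custom hooks from third-party libraries, the library's trackExtraHooks option can be used to register them explicitly; whether that covers every custom hook in the project would need to be verified case by case.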
Avoid controlling the endless loop with an exception in loadAnonymousClasses, e.g. by extracting the class loading into a method:

    private boolean tryLoadClass(String innerClassName) {
        try {
            parent.loadClass(innerClassName);
        } catch (ClassNotFoundException ex) {
            return false;
        }
        return true;
    }
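A standalone sketch of the refactored loop (the class name, the parent class loader, and the Outer$1-style naming scheme are illustrative, not the project's actual code):

```java
// Sketch: drive the loop with tryLoadClass's boolean result instead of
// letting ClassNotFoundException control termination.
public class AnonymousClassLoaderExample {
    private final ClassLoader parent =
            AnonymousClassLoaderExample.class.getClassLoader();

    private boolean tryLoadClass(String innerClassName) {
        try {
            parent.loadClass(innerClassName);
        } catch (ClassNotFoundException ex) {
            return false;
        }
        return true;
    }

    // Loads Outer$1, Outer$2, ... (javac's anonymous-class naming)
    // until the first missing class, without using an exception to
    // break out of the loop.
    public int loadAnonymousClasses(String outerClassName) {
        int count = 0;
        while (tryLoadClass(outerClassName + "$" + (count + 1))) {
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        // java.lang.Object has no anonymous inner classes, so this prints 0.
        System.out.println(new AnonymousClassLoaderExample()
                .loadAnonymousClasses("java.lang.Object"));
    }
}
```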
Similar to LATEST_BY_OFFSET and COLLECT_LIST, users may want a "bounded" COLLECT_LIST function that only collects the latest N elements instead of the (default) behavior of collecting the first N elements.
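The "latest N" semantics can be sketched with a fixed-size buffer that evicts the oldest element; this models only the aggregation logic, not the actual UDAF interface:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;

// Sketch of bounded collect semantics: keep only the latest N elements.
public class BoundedCollect {
    private final int limit;
    private final ArrayDeque<Object> buffer = new ArrayDeque<>();

    public BoundedCollect(int limit) {
        this.limit = limit;
    }

    public void accumulate(Object value) {
        if (buffer.size() == limit) {
            buffer.removeFirst(); // evict the oldest so only the latest N remain
        }
        buffer.addLast(value);
    }

    public List<Object> result() {
        return new ArrayList<>(buffer);
    }

    public static void main(String[] args) {
        BoundedCollect c = new BoundedCollect(3);
        for (int i = 1; i <= 5; i++) {
            c.accumulate(i);
        }
        System.out.println(c.result()); // [3, 4, 5] -- the latest 3 of 1..5
    }
}
```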
For an implementation of #126 (PostgreSQL driver with SKIP LOCKED), I create a SQL table for each consumer group containing the offsets ready to be consumed. The name for these tables is built by concatenating a prefix, the name of the topic, and the name of the consumer group. In some of the test cases in the test suite, UUIDs are used for both the topic and the consumer group. Each UUID has
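The naming scheme can be sketched as follows (the "offsets" prefix and the UUID values are made up for illustration, not the driver's actual values); with two 36-character UUIDs the identifier easily exceeds PostgreSQL's 63-byte identifier limit:

```java
// Hypothetical sketch of the table-name construction described above.
public class OffsetsTableName {
    static String offsetsTableName(String prefix, String topic, String group) {
        return prefix + "_" + topic + "_" + group;
    }

    public static void main(String[] args) {
        String name = offsetsTableName(
                "offsets",
                "123e4567-e89b-12d3-a456-426614174000",  // topic UUID
                "123e4567-e89b-12d3-a456-426614174001"); // consumer-group UUID
        // 7 + 1 + 36 + 1 + 36 = 81 characters -- longer than PostgreSQL's
        // 63-byte identifier limit, so such a name would be truncated.
        System.out.println(name.length()); // prints 81
    }
}
```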
It can be very difficult to piece together a reasonable estimate of the history of events from the current workers' logs, because none of them have timestamps. To that end, I think we should add timestamps to the logs.
This has some cons:
- We can't just use @printf like we have been until now. We need to either include a timestamp in every @printf call (laborious and error-prone) or c
For example, given a simple pipeline such as:

    Pipeline p = Pipeline.create();
    p.readFrom(TestSources.items("the", "quick", "brown", "fox"))
     .aggregate(aggregator)
     .writeTo(Sinks.logger());

I'd like aggregator to be something requiring a non-serialisable dependency to do its work.
I know I can do this:

    Pipeline p = Pipeline.create();
    p.readFrom(TestSource
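In Hazelcast Jet the accumulator object is created on the member by the withCreate() supplier, so a common workaround is to let the accumulator construct the non-serialisable dependency itself instead of capturing it in a lambda. A plain-Java sketch of that pattern (no Jet dependency; all names here are illustrative):

```java
import java.io.Serializable;

// Sketch: keep the non-serialisable dependency out of the serialized
// pipeline by creating it lazily inside the accumulator -- the object
// that AggregateOperation.withCreate() would instantiate on the member.
public class LazyDependencyExample {
    // Stands in for the non-serialisable dependency the aggregator needs.
    static class NonSerializableDep {
        String process(String s) {
            return s.toUpperCase();
        }
    }

    static class Acc implements Serializable {
        transient NonSerializableDep dep; // never serialized; rebuilt locally
        final StringBuilder joined = new StringBuilder();

        void accumulate(String item) {
            if (dep == null) {
                dep = new NonSerializableDep(); // created on first use
            }
            if (joined.length() > 0) {
                joined.append(' ');
            }
            joined.append(dep.process(item));
        }
    }

    public static void main(String[] args) {
        Acc acc = new Acc();
        for (String w : new String[] {"the", "quick", "brown", "fox"}) {
            acc.accumulate(w);
        }
        System.out.println(acc.joined); // THE QUICK BROWN FOX
    }
}
```

Only the serializable supplier travels with the pipeline; the dependency itself is built where the accumulator runs.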
The mapcat function seems to choke if you pass in a mapping function that returns a stream instead of a sequence:

    user> (s/stream->seq (s/mapcat (fn [x] (s/->source [x])) (s/->source [1 2 3])))
    ()
    Aug 18, 2019 2:23:39 PM clojure.tools.logging$eval5577$fn__5581 invoke
    SEVERE: error in message propagation
    java.lang.IllegalArgumentException: Don't know how to create ISeq from: manifold.
I figured out a way to get the (x, y, z) data points for each frame from one hand previously, but I'm not sure how to do that for the new Holistic model that they released. I am trying to get all the landmark data points for both hands as well as parts of the chest and face. Does anyone know how to extract the holistic landmark data / print it to a text file, or at least give me some directions as to h
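A hedged sketch of one way to do this with the MediaPipe Python API: Holistic results expose left_hand_landmarks, right_hand_landmarks, pose_landmarks (which includes shoulder/chest points), and face_landmarks, each either None or an object with a .landmark list of points carrying x/y/z. The helper names and the output format below are made up for illustration:

```python
def landmarks_to_lines(name, landmark_list):
    """Format one group of landmarks as 'name index x y z' text lines."""
    if landmark_list is None:  # Holistic leaves undetected parts as None
        return []
    return [
        f"{name} {i} {p.x:.6f} {p.y:.6f} {p.z:.6f}"
        for i, p in enumerate(landmark_list.landmark)
    ]


def dump_holistic_frame(results, path):
    """Append one frame's hand/pose/face landmarks to a text file."""
    lines = []
    for name, group in [
        ("left_hand", results.left_hand_landmarks),
        ("right_hand", results.right_hand_landmarks),
        ("pose", results.pose_landmarks),   # includes chest/shoulder points
        ("face", results.face_landmarks),
    ]:
        lines.extend(landmarks_to_lines(name, group))
    with open(path, "a") as f:
        f.write("\n".join(lines) + "\n")


# Typical use (assumes mediapipe is installed and `image` is an RGB frame):
#   import mediapipe as mp
#   with mp.solutions.holistic.Holistic() as holistic:
#       results = holistic.process(image)
#       dump_holistic_frame(results, "landmarks.txt")
```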