stream-processing
Here are 765 public repositories matching this topic...
Bug description
When saving retention metrics, the process appears to allocate space for the data serially when OK is clicked on the dialog: it blocks for a noticeable period and the application feels unresponsive.
Expected behavior
The space allocation should instead happen asynchronously and the dialog should return immediately, or at least show a spinner.
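A minimal sketch of the suggested fix, assuming a Java desktop-style app; `RetentionDialog` and `saveRetentionMetrics()` are hypothetical stand-ins for the application's actual dialog and save routine:

```java
import java.util.concurrent.CompletableFuture;

class RetentionDialog {                      // hypothetical stand-in for the real dialog
    void showSpinner() { System.out.println("saving..."); }
    void close()       { System.out.println("done"); }
}

public class AsyncSave {
    static final RetentionDialog dialog = new RetentionDialog();

    // Hypothetical stand-in for the blocking space allocation and write.
    static void saveRetentionMetrics() { /* allocate space, write data */ }

    public static void main(String[] args) {
        dialog.showSpinner();                                  // immediate feedback
        CompletableFuture
                .runAsync(AsyncSave::saveRetentionMetrics)     // blocking work off the UI thread
                .whenComplete((ok, err) -> dialog.close())
                .join();                                       // only so this demo's main() waits
    }
}
```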
Describe the bug
If you try to create a KAFKA-formatted source with a BYTES column, for example:
CREATE STREAM TEST (ID BYTES KEY, b BYTES) WITH (kafka_topic='test', format='DELIMITED');
the command returns:
The 'KAFKA' format does not support type 'BYTES'
This is because the BYTES type is missing [here](https://github.com/confluentinc/ksql/blob/a27e5e7501891e644196f8d164d078672e0feecd
Use try-with-resources or close this "HazelcastServerCachingProvider" in a "finally" clause.
Change the implementation or suppress the Sonar warning.
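A minimal sketch of the try-with-resources variant: since javax.cache.spi.CachingProvider extends Closeable, the provider is closed deterministically even if the body throws.

```java
import javax.cache.Caching;
import javax.cache.spi.CachingProvider;

public class ProviderCleanup {
    public static void main(String[] args) {
        // try-with-resources closes the provider (and its cache managers) on exit
        try (CachingProvider provider = Caching.getCachingProvider()) {
            provider.getCacheManager(); // ... create and use caches here ...
        }
    }
}
```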
Under the hood, the Benthos csv input uses the standard encoding/csv package's csv.Reader struct. The current implementation of the csv input doesn't allow setting the LazyQuotes field. We have a use case where we need to set the LazyQuotes field in order to make things work correctly.
I have this implemented in a custom marshaler now, but I'm wondering if it makes sense to push it back upstream: when we are integrating with legacy services, we find it useful to use the correlation ID as the message ID when a message is first brought into Watermill; from there it gets sent on headers to subsequent services and works normally.
It's a simple change if it would make sense for other users.
It would be really useful if there were a method that could insert a column into an existing DataFrame between two existing columns. I know about .addColumn, but that seems to place the new column at the end of the DataFrame.
For example:
df.print()
A | B
======
7 | 5
3 | 6
df.insert({ "afterColumn": "A", "newColumnName": "C", "data": [4, 1], "inplace": true })
df.print()
A | C | B
==========
7 | 4 | 5
3 | 1 | 6
It can be very difficult to piece together a reasonable estimate of the history of events from the current worker logs because none of them have timestamps.
So to that end, I think we should add timestamps to the logs.
This has some cons:
- We can't just use @printf like we have been until now. We need to either include a timestamp in every @printf call (laborious and error-prone) or c
For example, given a simple pipeline such as:
Pipeline p = Pipeline.create();
p.readFrom(TestSources.items("the", "quick", "brown", "fox"))
.aggregate(aggregator)
.writeTo(Sinks.logger());
I'd like aggregator to be something requiring a non-serialisable dependency to do its work.
I know I can do this:
Pipeline p = Pipeline.create();
p.readFrom(TestSource
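The snippet above is cut off, but one pattern that may fit (an assumption on my part, not the issue author's actual workaround) is to build the non-serialisable dependency inside the aggregate operation's create function, so it is constructed on the cluster member rather than serialised with the job:

```java
import com.hazelcast.jet.aggregate.AggregateOperation;
import com.hazelcast.jet.aggregate.AggregateOperation1;

// Hypothetical accumulator: the non-serialisable dependency would live inside it,
// created on the member by the createFn; only the method references below are
// serialised with the pipeline, never the accumulator or its dependency.
class ConcatAccumulator {
    private final StringBuilder sink = new StringBuilder(); // stand-in for the dependency

    void accumulate(String item) { sink.append(item).append(' '); }
    String finish()              { return sink.toString().trim(); }
}

class AggregatorSketch {
    // Omitting andCombine keeps the aggregation single-stage, so the
    // accumulator never has to cross the network.
    static AggregateOperation1<String, ConcatAccumulator, String> aggregator() {
        return AggregateOperation
                .withCreate(ConcatAccumulator::new)
                .andAccumulate(ConcatAccumulator::accumulate)
                .andExportFinish(ConcatAccumulator::finish);
    }
}
```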
The mapcat function seems to choke if you pass in a mapping function that returns a stream instead of a sequence:
user> (s/stream->seq (s/mapcat (fn [x] (s/->source [x])) (s/->source [1 2 3])))
()
Aug 18, 2019 2:23:39 PM clojure.tools.logging$eval5577$fn__5581 invoke
SEVERE: error in message propagation
java.lang.IllegalArgumentException: Don't know how to create ISeq from: manifold.
I figured out a way to get the (x, y, z) data points for each frame from one hand previously, but I'm not sure how to do that for the new holistic model that they released. I am trying to get all the landmark data points for both hands as well as parts of the chest and face. Does anyone know how to extract the holistic landmark data / print it to a text file? Or at least give me some directions as to h