stream-processing
Here are 782 public repositories matching this topic...
I have a use case where I need to create a new stream containing the bearing between two consecutive points in a pre-existing lat/lon stream. Normally bearing would be available in a standard library, but in a pinch it can easily be implemented through the sin, cos, and atan2 functions, none of which are currently available in ksql.
Basic trig functions have a range of use cases in geometric and geographic co
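For reference, a minimal Python sketch of the initial-bearing formula those trig functions would enable — this is plain Python, not ksql syntax, and the function name and argument order are illustrative:

```python
import math

def bearing(lat1, lon1, lat2, lon2):
    """Initial bearing in degrees from point 1 to point 2.

    Coordinates are in degrees; result is normalized to [0, 360).
    Built only from sin, cos, and atan2, the functions the issue asks for.
    """
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return (math.degrees(math.atan2(y, x)) + 360) % 360
```

For example, a point due east of another yields a bearing of 90 degrees, and due north yields 0.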
Add --add-exports jdk.management/com.ibm.lang.management.internal only when OpenJ9 is detected.
Otherwise we get
WARNING: package com.ibm.lang.management.internal not in jdk.management
in the logs.
Under the hood, the Benthos csv input uses the standard encoding/csv package's csv.Reader struct. The current implementation of the csv input doesn't allow setting the LazyQuotes field. We have a use case where we need to set the LazyQuotes field in order to make things work correctly.
This comment says that the message ID is optional, but for the SQL transport it is a mandatory attribute, which causes confusion. Is it possible to fix this, or did I get something wrong?
https://github.com/ThreeDotsLabs/watermill/blob/b9928e750ba673cf93d442db88efc04706f67388/message/message.go#L20
to_dict() equivalent
I would like to convert a DataFrame to a JSON object the same way that Pandas does with to_dict(). toJSON() treats rows as elements in an array and ignores the index labels, but to_dict() uses the index as keys.
Here is an example of what I have in mind:
function to_dict(df) {
  const rows = df.toJSON();
  const entries = df.index.map((e, i) => ({ [e]: rows[i] }));
  return Object.assign({}, ...entries);
}
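For comparison, here is the same index-to-row mapping sketched in plain Python; pandas' own DataFrame.to_dict(orient='index') produces this shape, and rows/index here stand in for the pieces the JavaScript function pulls from the DataFrame:

```python
def to_dict(rows, index):
    """Map each index label to its row dict, mimicking the shape of
    pandas DataFrame.to_dict(orient='index')."""
    return {label: row for label, row in zip(index, rows)}
```

For example, `to_dict([{"a": 1}, {"a": 2}], ["x", "y"])` returns `{"x": {"a": 1}, "y": {"a": 2}}`.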
It can be very difficult to piece together a reasonable estimate of the history of events from the current worker logs, because none of them have timestamps.
To that end, I think we should add timestamps to the logs.
This has some cons:
- We can't just use @printf like we have been until now. We need to either include a timestamp in every @printf call (laborious and error-prone) or c
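The wrapper alternative could look something like this Python sketch — the project itself uses Julia's @printf, and the tprint name is hypothetical; the point is that one wrapper adds the timestamp so individual call sites don't have to:

```python
import datetime

def tprint(fmt, *args):
    """Print a log line prefixed with an ISO-8601 UTC timestamp.

    A single wrapper like this avoids touching every existing call site,
    at the cost of routing all logging through one function.
    """
    stamp = datetime.datetime.now(datetime.timezone.utc).isoformat(timespec="seconds")
    print(f"[{stamp}] " + (fmt % args if args else fmt))
```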
For example, given a simple pipeline such as:
Pipeline p = Pipeline.create();
p.readFrom(TestSources.items("the", "quick", "brown", "fox"))
    .aggregate(aggregator)
    .writeTo(Sinks.logger());
I'd like aggregator to be something requiring a non-serialisable dependency to do its work.
I know I can do this:
Pipeline p = Pipeline.create();
p.readFrom(TestSource
The mapcat function seems to choke if you pass in a mapping function that returns a stream instead of a sequence:
user> (s/stream->seq (s/mapcat (fn [x] (s/->source [x])) (s/->source [1 2 3])))
()
Aug 18, 2019 2:23:39 PM clojure.tools.logging$eval5577$fn__5581 invoke
SEVERE: error in message propagation
java.lang.IllegalArgumentException: Don't know how to create ISeq from: manifold.
I previously figured out a way to get the (x, y, z) data points for each frame from one hand, but I'm not sure how to do that for the new Holistic model that they released. I am trying to get all the landmark data points for both hands as well as parts of the chest and face. Does anyone know how to extract the holistic landmark data and print it to a text file? Or at least give me some directions as to h
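One possible direction, sketched under the assumption that the Python mediapipe package is in use: the Holistic result object carries separate landmark groups (pose, face, left hand, right hand), each of which may be None when not detected, and each landmark has x, y, and z fields. The file name "landmarks.txt", the input image name, and the dump_landmarks helper are all illustrative:

```python
def dump_landmarks(name, landmark_list, out):
    """Write one '<group> <index> <x> <y> <z>' line per landmark.

    MediaPipe leaves a landmark group as None when it isn't detected
    in the frame, so absent groups are skipped.
    """
    if landmark_list is None:
        return
    for i, lm in enumerate(landmark_list.landmark):
        out.write(f"{name} {i} {lm.x:.6f} {lm.y:.6f} {lm.z:.6f}\n")

def main():
    # These imports assume the mediapipe and opencv-python packages.
    import cv2
    import mediapipe as mp

    holistic = mp.solutions.holistic.Holistic(static_image_mode=True)
    # MediaPipe expects RGB; OpenCV loads BGR.
    image = cv2.cvtColor(cv2.imread("frame.png"), cv2.COLOR_BGR2RGB)
    results = holistic.process(image)
    with open("landmarks.txt", "w") as out:
        dump_landmarks("pose", results.pose_landmarks, out)
        dump_landmarks("face", results.face_landmarks, out)
        dump_landmarks("left_hand", results.left_hand_landmarks, out)
        dump_landmarks("right_hand", results.right_hand_landmarks, out)
```

The pose group includes shoulder and torso landmarks, which should cover the "parts of the chest" the question mentions.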