
stream-processing

Here are 765 public repositories matching this topic...

pkaske commented Dec 29, 2020

I figured out a way to get the (x, y, z) data points for each frame from one hand previously, but I'm not sure how to do that for the new holistic model they released. I am trying to get all the landmark data points for both hands as well as parts of the chest and face. Does anyone know how to extract the holistic landmark data and print it to a text file, or at least give me some directions as to how?
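One way to approach this with the MediaPipe Python API, as a sketch rather than a confirmed answer (the file names are placeholders; assumes pip install mediapipe opencv-python):

# Sketch: dump (x, y, z) for every holistic landmark group to a text file.
# "frame.png" and "landmarks.txt" are placeholder file names.
import cv2
import mediapipe as mp

mp_holistic = mp.solutions.holistic

with mp_holistic.Holistic(static_image_mode=True) as holistic:
    image = cv2.imread("frame.png")
    results = holistic.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))

groups = {
    "pose": results.pose_landmarks,
    "left_hand": results.left_hand_landmarks,
    "right_hand": results.right_hand_landmarks,
    "face": results.face_landmarks,
}
with open("landmarks.txt", "w") as f:
    for name, landmark_list in groups.items():
        if landmark_list is None:  # that group was not detected in this frame
            continue
        for i, lm in enumerate(landmark_list.landmark):
            f.write(f"{name} {i} {lm.x} {lm.y} {lm.z}\n")

For video, the same extraction loop runs once per frame with static_image_mode=False.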

flink-learning

Flink learning blog. http://www.54tianzhisheng.cn/ Covers Flink basics, concepts, internals, hands-on practice, performance tuning, and source-code analysis. Includes worked examples for Flink Connectors, Metrics, Libraries, the DataStream API, and the Table API & SQL, plus large production case studies of Flink in practice (PV/UV counting, log storage, real-time deduplication of tens of billions of records, and monitoring/alerting). Support for my column 《大数据实时计算引擎 Flink 实战与性能优化》 is welcome.

  • Updated Feb 2, 2022
  • Java
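Not from the repo itself (its examples are Java), but as a minimal taste of the DataStream API the blog covers, a PyFlink sketch assuming the apache-flink package:

# Minimal DataStream job: read an in-memory collection, transform, print.
from pyflink.common.typeinfo import Types
from pyflink.datastream import StreamExecutionEnvironment

env = StreamExecutionEnvironment.get_execution_environment()
env.set_parallelism(1)

env.from_collection(["flink", "stream", "processing"], type_info=Types.STRING()) \
    .map(lambda s: s.upper(), output_type=Types.STRING()) \
    .print()

env.execute("minimal_datastream_job")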
signoz
polyglothacker commented Feb 8, 2022

Bug description

When saving retention settings for metrics, it looks like the process allocates space for the data serially when OK is clicked on the dialog: it blocks for a noticeable period, and the application feels unresponsive.

Expected behavior

Instead, the space creation for the data should happen asynchronously and the dialog should return immediately, or at least show a spinner.
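SigNoz itself is not written in Python, so this is only a language-neutral sketch of the suggested shape, with placeholder names:

# Sketch of the suggested behavior: run the slow allocation in the
# background so the dialog can return immediately.
import threading

def allocate_retention_space(settings):
    """Placeholder for the slow, blocking disk-allocation work."""
    ...

def on_ok_clicked(settings):
    threading.Thread(
        target=allocate_retention_space, args=(settings,), daemon=True
    ).start()
    # Returns right away; a spinner or toast can report progress meanwhile.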

jzaralim commented Feb 1, 2022

Describe the bug
If you try to create a KAFKA-formatted source with a BYTES column, the command returns an error:

CREATE STREAM TEST (ID BYTES KEY, b BYTES) WITH (kafka_topic='test', format='DELIMITED');
The 'KAFKA' format does not support type 'BYTES'

This is because the BYTES type is missing [here](https://github.com/confluentinc/ksql/blob/a27e5e7501891e644196f8d164d078672e0feecd

benthos
watermill
AlexCuse commented Jan 26, 2022

I have this implemented in a custom marshaler now, but I'm wondering if it makes sense to push it back upstream. When we integrate with legacy services, we find it useful to use the correlation ID as the message ID when a message is first brought into watermill; from there it gets sent on headers to subsequent services and works normally.

It's a simple change, if it would make sense for other users.
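watermill is a Go library, so purely as a language-neutral illustration of the proposed mapping (all names here are hypothetical):

# Illustration: promote a legacy message's correlation ID to the message ID
# so downstream services see one stable identifier on the headers.
import uuid

def to_internal_message(legacy_headers: dict, payload: bytes) -> dict:
    message_id = legacy_headers.get("correlation_id") or str(uuid.uuid4())
    return {
        "uuid": message_id,  # message ID taken from the correlation ID
        "metadata": {"correlation_id": message_id},
        "payload": payload,
    }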

danfojs
goodPointP commented Nov 22, 2021

It would be really useful if there were a method that could insert a column into an existing DataFrame between two existing columns. I know about .addColumn, but that seems to place the new column at the end of the DataFrame.

For example:

df.print()

A | B 
======
7 | 5
3 | 6

df.insert({ "afterColumn": "A", "newColumnName": "C", "data": [4,1], inplace: true })
df.print()
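For comparison, pandas (whose API danfo.js broadly mirrors) exposes exactly these semantics as DataFrame.insert; this is pandas, not danfo.js:

# DataFrame.insert places a column at a given position.
import pandas as pd

df = pd.DataFrame({"A": [7, 3], "B": [5, 6]})
df.insert(1, "C", [4, 1])  # position 1, i.e. between "A" and "B"
print(df)
#    A  C  B
# 0  7  4  5
# 1  3  1  6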

nisanharamati commented Jul 24, 2018

It can be very difficult to piece together a reasonable estimate of the history of events from the current workers' logs, because none of them have timestamps.

To that end, I think we should add timestamps to the logs.

This has some cons:

  1. We can't just use @printf like we have been until now. We need to either include a timestamp in every @printf call (laborious and error prone) or c
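The project logs through Pony's @printf, but as a language-agnostic illustration of a centralized alternative, one configured logger can stamp every line automatically (Python shown only for brevity):

# Configure one logger so every line gets a timestamp without touching
# the individual log calls.
import logging

logging.basicConfig(
    format="%(asctime)s %(levelname)s %(message)s",
    level=logging.INFO,
)

logging.info("worker started")
# -> 2018-07-24 14:00:00,123 INFO worker started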
hazelcast-jet
jdormit commented Aug 18, 2019

The mapcat function seems to choke if you pass in a mapping function that returns a stream instead of a sequence:

user> (s/stream->seq (s/mapcat (fn [x] (s/->source [x])) (s/->source [1 2 3])))
()
Aug 18, 2019 2:23:39 PM clojure.tools.logging$eval5577$fn__5581 invoke
SEVERE: error in message propagation
java.lang.IllegalArgumentException: Don't know how to create ISeq from: manifold.
