
kafka

Here are 7,490 public repositories matching this topic...

flink-learning

flink learning blog. http://www.54tianzhisheng.cn/ Covers Flink fundamentals, concepts, internals, hands-on practice, performance tuning, and source-code analysis. Includes learning examples for Flink Connector, Metrics, Library, the DataStream API, and the Table API & SQL, plus large production case studies of Flink in the field (PV/UV counting, log storage, real-time deduplication of tens of billions of records, monitoring and alerting). Everyone is welcome to support my column 《大数据实时计算引擎 Flink 实战与性能优化》 (Big Data Real-Time Computing Engine Flink: Practice and Performance Tuning).

  • Updated May 20, 2022
  • Java
Barenboim commented May 17, 2022

We welcome everyone to ask questions freely, about anything including business requirements, solution design, and any detail of workflow usage or implementation.
But I also suggest asking in a smart way; see the XY Problem: https://xyproblem.info/
The XY Problem asks that, when posting a question, you describe your original requirement as fully as possible, in other words make one thing clear: what you are actually trying to do. Don't just describe the approach you are attempting and ask us to fix the problems inside that approach, because the approach you chose may have been wrong from the start, and then we can only keep guessing at what problem you really want to solve. That makes communication very inefficient.
For example, this issue (the asker is a good friend of ours who was using workflow before it was even open-sourced, so don't worry about hurting his feelings :)
sogou/workflow#658
is a typical case: the user asked how to get the proxy by deriving WFServer's new_connection function.

good first issue
gimmic commented Sep 27, 2019

I have noticed, when ingesting backlog (older timestamped data), that the "Messages per minute" line graph and the "Sources" data do not line up.

The "Messages per minute" figure appears to be correct for the ingest rate, but the sources breakdown below it only shows, for each type, the messages whose timestamps fall within the time window. This means that if in the last hour you've ingested logs from 2 days ago, the data is

jnh5y commented May 16, 2022

Is your feature request related to a problem? Please describe.
The mentioned functions only require that types be Comparable. Presently, they do not support BYTES or the time types.

Describe the solution you'd like
Add support for BYTES and time types to TopKDistinct, GREATEST, and LEAST

Additional context
confluentinc/ksql#8912 is about handling TopK for these t

watermill
xorcare commented Nov 22, 2021

This comment says that the message ID is optional,
but for the SQL transport it is a mandatory attribute,
which in turn causes confusion.

Is it possible to fix this, or did I get something wrong?

https://github.com/ThreeDotsLabs/watermill/blob/b9928e750ba673cf93d442db88efc04706f67388/message/message.go#L20
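For context, a minimal Go sketch of how a publisher can always supply an explicit UUID when constructing a Watermill message, which avoids the ambiguity for transports such as the SQL Pub/Sub that need a message ID. The payload and the topic name mentioned in the comment are placeholders for illustration, not part of the original issue.

    package main

    import (
    	"github.com/ThreeDotsLabs/watermill"
    	"github.com/ThreeDotsLabs/watermill/message"
    )

    func main() {
    	// Always set an explicit UUID on the message. Some transports treat it
    	// as optional, but the SQL Pub/Sub stores it, so supplying one up front
    	// keeps behavior consistent across transports.
    	msg := message.NewMessage(watermill.NewUUID(), []byte(`{"event":"example"}`))

    	// msg.UUID is now populated and the message can be handed to any
    	// publisher, e.g. publisher.Publish("example.topic", msg); publisher
    	// construction is omitted here.
    	_ = msg
    }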

help wanted good first issue S

Surging is a micro-service engine that provides a lightweight, high-performance, modular RPC request pipeline. It supports the event-based asynchronous pattern and reactive programming. The service engine supports the HTTP, TCP, WS, gRPC, Thrift, MQTT, UDP, and DNS protocols. It uses ZooKeeper and Consul as a registry and integrates hash, random, polling, and fair-polling load-balancing algorithms. Built-in service governance ensures reliable RPC communication, and the engine includes diagnostics and link tracing for protocol and middleware calls, plus integration with the SkyWalking distributed APM.

  • Updated May 16, 2022
  • C#
