# inference

Here are 786 public repositories matching this topic...

pkaske
pkaske commented Dec 29, 2020

I previously figured out a way to get the (x, y, z) data points for each frame from one hand, but I'm not sure how to do that with the new holistic model that they released. I am trying to get all the landmark data points for both hands, as well as parts of the chest and face. Does anyone know how to extract the holistic landmark data and print it to a text file, or at least give me some directions as to how?

good first issue type:research solution:holistic stat:awaiting googler
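
For reference, here is a minimal sketch of pulling the per-frame (x, y, z) landmarks out of MediaPipe's Python Holistic solution and writing them to a text file. The video path, output file name, and CSV-style layout are assumptions, not part of the original question.

```python
# Minimal sketch: dump MediaPipe Holistic landmarks (pose, face, both hands)
# for each frame of a video to a text file.  Paths are placeholders.
import cv2
import mediapipe as mp

mp_holistic = mp.solutions.holistic

def dump_landmarks(landmark_list, name, frame_idx, out):
    """Write (x, y, z) for every landmark in one landmark list, if present."""
    if landmark_list is None:
        return
    for i, lm in enumerate(landmark_list.landmark):
        out.write(f"{frame_idx},{name},{i},{lm.x},{lm.y},{lm.z}\n")

cap = cv2.VideoCapture("input.mp4")          # assumed input video
with mp_holistic.Holistic(static_image_mode=False) as holistic, \
        open("landmarks.txt", "w") as out:   # assumed output file
    frame_idx = 0
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB input
        results = holistic.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        dump_landmarks(results.pose_landmarks, "pose", frame_idx, out)   # shoulders/torso (chest area)
        dump_landmarks(results.face_landmarks, "face", frame_idx, out)
        dump_landmarks(results.left_hand_landmarks, "left_hand", frame_idx, out)
        dump_landmarks(results.right_hand_landmarks, "right_hand", frame_idx, out)
        frame_idx += 1
cap.release()
```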
rsn870
rsn870 commented Aug 21, 2020

Hi,

I have tried out both loss.backward() and model_engine.backward(loss) in my code. There are several subtle differences that I have observed; for one, retain_graph=True does not work with model_engine.backward(loss). This is creating a problem because, for some reason, the buffers are not retained each time I run the code.

Please look into this if you could.

enhancement good first issue
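
A minimal sketch of the two patterns being compared, assuming a DeepSpeed-style engine returned by deepspeed.initialize (which the model_engine.backward call suggests). The config path is a placeholder, and whether the engine's backward call honours retain_graph is exactly the version-dependent behaviour the comment reports.

```python
# Sketch of the two backward patterns.  Only the plain PyTorch part runs as-is;
# the engine part is shown in comments and assumes a DeepSpeed-style API.
import torch
import torch.nn as nn

model = nn.Linear(8, 1)
x = torch.randn(4, 8)

# Plain PyTorch: retain_graph=True keeps the autograd graph (and its buffers)
# alive so backward can be called again on the same graph.
loss = model(x).pow(2).mean()
loss.backward(retain_graph=True)
loss.backward()  # second pass works because the graph was retained

# Engine-style training (requires the library to be installed and a config file):
#
#   import deepspeed
#   model_engine, optimizer, _, _ = deepspeed.initialize(
#       model=model,
#       model_parameters=model.parameters(),
#       config="ds_config.json",       # assumed config path
#   )
#   loss = model_engine(x).pow(2).mean()
#   model_engine.backward(loss)        # engine handles loss scaling / allreduce
#   model_engine.step()
#
# The comment above reports that this wrapped call does not behave like
# loss.backward(retain_graph=True), so graphs/buffers are freed between calls.
```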
ColossalAI
SMesForoush
SMesForoush commented Mar 12, 2022

Dear Colossal-AI team,
There are a few features I have in mind that I think would be helpful to the project, and I wanted to ask which of them might be most useful, so that I could start implementing it.
Loki/Promtail is a tool for monitoring distributed logs with Grafana. Connecting the Distributed Logger to it and extracting labels from the log structure would make for a user-friendly system.

good first issue
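
As a rough illustration only (this is not Colossal-AI's actual DistributedLogger API), the sketch below shows the general idea: emit JSON-structured log lines whose fields a Promtail json pipeline stage could promote to Loki labels for Grafana. All names in it are hypothetical.

```python
# Generic sketch: one JSON object per log line, with label-like fields
# ("level", "module", "rank") that a Promtail pipeline could extract.
import json
import logging
import sys

class JsonFormatter(logging.Formatter):
    """Render each record as a single JSON line."""
    def format(self, record):
        return json.dumps({
            "ts": self.formatTime(record),
            "level": record.levelname,
            "module": record.name,
            "rank": getattr(record, "rank", 0),  # e.g. the distributed rank
            "msg": record.getMessage(),
        })

handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("trainer")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# Promtail could then use a `json` pipeline stage to turn these fields into
# Loki labels for filtering in Grafana.
logger.info("epoch finished", extra={"rank": 3})
```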
GowthamKudupudi
GowthamKudupudi commented Jun 15, 2021

Description
If the Triton server build fails for any reason, I have to delete the /tmp/citritonbuild/<backend> folders to prevent the next rebuild from failing with a "git repo already exists" error.

Triton Information
r21.05

I am building the Triton server myself.

To Reproduce
1. Uninstall one of the dependencies needed by a backend.
2. Run build.py with all the backends enabled.

enhancement good first issue
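
A possible workaround, sketched under the assumption that clearing the scratch directories is enough: remove the /tmp/citritonbuild/<backend> folders before re-running build.py. The backend names and the bare build.py invocation below are placeholders for whatever configuration you actually build with.

```python
# Remove stale per-backend scratch directories so the clone step in the next
# build does not hit a "repo already exists" error, then re-run the build.
import shutil
import subprocess
from pathlib import Path

SCRATCH = Path("/tmp/citritonbuild")
BACKENDS = ["onnxruntime", "python"]   # illustrative backend names

for backend in BACKENDS:
    backend_dir = SCRATCH / backend
    if backend_dir.exists():
        shutil.rmtree(backend_dir)     # clear the stale checkout

# Re-run the build with whatever arguments you normally pass to build.py.
subprocess.run(["python3", "build.py"], check=True)
```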

TNN: a uniform deep learning inference framework for mobile, desktop and server, developed by Tencent Youtu Lab and Guangying Lab. TNN is distinguished by several outstanding features, including cross-platform capability, high performance, model compression and code pruning. Based on ncnn and Rapidnet, TNN further strengthens support and performance optimization for mobile devices, and also draws on the good extensibility and high performance of existing open source efforts. TNN has been deployed in multiple Tencent apps, such as Mobile QQ, Weishi and Pitu. Contributions are welcome; work in collaboration with us to make TNN a better framework.

  • Updated Jun 2, 2022
  • C++
morgoth95
morgoth95 commented Apr 24, 2022

The latest release of OpenVINO has changed the inference engine API, providing a new API that takes advantage of the full potential of Intel's latest version of IR (IR 11). More information can be found at this link. We should adapt nebullvm's OpenVinoInferenceLearner to the latest API.


enhancement good first issue
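
For context, here is a minimal sketch of the newer openvino.runtime API (introduced in the 2022.x releases) that an adapted OpenVinoInferenceLearner would need to target; the model path, device name, and dummy input are placeholders.

```python
# Minimal sketch of loading an IR model and running inference with the
# OpenVINO 2.0 runtime API.  Paths, device, and input shape are placeholders.
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("model.xml")                 # IR definition (+ .bin weights)
compiled = core.compile_model(model, device_name="CPU")

dummy_input = np.random.rand(1, 3, 224, 224).astype(np.float32)
result = compiled([dummy_input])                     # results keyed by output port
output = result[compiled.output(0)]
print(output.shape)
```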
