object-detection
Here are 4,814 public repositories matching this topic...
Enhancement
A discussion in #614 revealed a good opportunity for improvement: we should ensure that the input image is contiguous at the start of the augmentation pipeline. This could be implemented by applying image = np.ascontiguousarray(image) to the image and mask targets.
A proposed place for this call is somewhere near the beginning of A.Compose.__call__.
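A minimal sketch of the proposal, using a simplified stand-in class rather than Albumentations' actual Compose implementation:

```python
import numpy as np

class Compose:
    """Simplified stand-in for A.Compose, to illustrate the proposed fix."""

    def __init__(self, transforms):
        self.transforms = transforms

    def __call__(self, image, mask=None):
        # Proposed fix: force a C-contiguous layout before any transform runs.
        # np.ascontiguousarray returns the input unchanged when it is already
        # contiguous with a matching dtype, so this is cheap in the common case.
        image = np.ascontiguousarray(image)
        if mask is not None:
            mask = np.ascontiguousarray(mask)
        for t in self.transforms:
            image, mask = t(image, mask)
        return image, mask
```

Non-contiguous inputs arise easily in practice, e.g. from slicing (`img[:, ::2]`) or channel reordering, and some downstream transforms assume contiguous memory, which is why doing this once at pipeline entry is attractive.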
[Resolved] On the AP=0 problem
I found that many people using VOC-format datasets hit the same problem I did: AP stays at 0 throughout training.
This morning, after checking carefully, I found the real cause. The problem was in the data loading; we simply weren't careful enough.
Solution walkthrough: fixing AP=0 when training YOLOX
Hi,
I need to download the Something-Something and Jester datasets, but the 20bn website "https://20bn.com" has been unavailable for weeks; the error message is "503 Service Temporarily Unavailable".
I have already downloaded the video data for Something-Something v2, and I still need the label data. For Jester, I need both the video and the label data. Can someone share the
Could FeatureTools be implemented as an automated preprocessor for AutoGluon, adding the ability to handle multi-entity problems (i.e. data split across multiple normalised database tables)? If you supplied AutoGluon with a list of DataFrames instead of a single DataFrame, it would first invoke FeatureTools to:
- take the multiple DataFrames (entities) and try to auto-infer the relationship between
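As a toy illustration of the multi-table idea (hypothetical tables and column names; FeatureTools' deep feature synthesis would generate such aggregations automatically, and the flat result is what AutoGluon's tabular interface consumes today):

```python
import pandas as pd

# Hypothetical two-entity problem: a parent table of customers and a
# child table of orders, linked by customer_id.
customers = pd.DataFrame({"customer_id": [1, 2], "region": ["EU", "US"]})
orders = pd.DataFrame({
    "order_id": [10, 11, 12],
    "customer_id": [1, 1, 2],
    "amount": [5.0, 7.0, 3.0],
})

# Aggregate the child table per parent key -- the kind of feature a
# DFS-style tool derives automatically -- then merge back into one
# flat frame suitable for single-table AutoML.
agg = orders.groupby("customer_id")["amount"].agg(["count", "sum", "mean"])
agg.columns = [f"orders_amount_{c}" for c in agg.columns]
flat = customers.merge(agg.reset_index(), on="customer_id", how="left")
```

The open question in the proposal is exactly the step this sketch hard-codes: inferring which columns link the tables, rather than having the user declare the relationships.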
I want to train a detector on the Objects365 dataset, but Objects365 is quite large and causes an out-of-memory error on my machine.
I want to split the annotation file into 10 parts (ann1, ann2, ..., ann10), then build 10 datasets and concatenate them, but I'm not sure whether that will work.
Any better suggestions?
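One way to sketch the splitting step, assuming the annotations are in COCO-style JSON (the keys below are the standard COCO fields; the file names are placeholders):

```python
import json

def split_coco_annotations(ann, n_parts):
    """Partition a COCO-style annotation dict into n_parts smaller dicts.

    Images are split into roughly equal chunks; each part keeps only the
    annotations whose image_id falls in its chunk, and every part shares
    the full category list so class ids stay consistent across parts.
    """
    images = ann["images"]
    chunk = (len(images) + n_parts - 1) // n_parts
    parts = []
    for i in range(n_parts):
        imgs = images[i * chunk:(i + 1) * chunk]
        ids = {im["id"] for im in imgs}
        anns = [a for a in ann["annotations"] if a["image_id"] in ids]
        parts.append({
            "images": imgs,
            "annotations": anns,
            "categories": ann["categories"],
        })
    return parts

# Usage (placeholder paths):
# ann = json.load(open("train_annotations.json"))
# for i, part in enumerate(split_coco_annotations(ann, 10), start=1):
#     json.dump(part, open(f"ann{i}.json", "w"))
```

Detection frameworks such as MMDetection can then combine the ten pieces with a concatenated dataset, though whether this actually lowers peak memory depends on whether the dataset class loads all annotation files eagerly; if it does, splitting alone may not help.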