dataset
Here are 3,251 public repositories matching this topic...
Thank you for the great effort you are putting into this project :) There is, however, a feature I miss: rotated bounding boxes. Especially when objects are thin and diagonal, an ordinary axis-aligned bounding box fits poorly. Examples of such cases are shown here: rotated bounding boxes
A way annotation could be
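The suggestion above is cut off in this excerpt. As a purely illustrative sketch (the representation and helper below are assumptions, not the project's format), a rotated box can be stored as (cx, cy, w, h, angle) and converted to its four corner points for drawing or export:

```python
# Hypothetical helper, not part of the project: convert a rotated bounding box
# given as (centre x, centre y, width, height, angle in degrees) into its four
# corner points.
import math
from typing import List, Tuple

def rotated_box_corners(cx: float, cy: float, w: float, h: float,
                        angle_deg: float) -> List[Tuple[float, float]]:
    a = math.radians(angle_deg)
    cos_a, sin_a = math.cos(a), math.sin(a)
    corners = []
    for dx, dy in [(-w / 2, -h / 2), (w / 2, -h / 2),
                   (w / 2, h / 2), (-w / 2, h / 2)]:
        # Rotate the corner offset around the box centre, then translate back.
        corners.append((cx + dx * cos_a - dy * sin_a,
                        cy + dx * sin_a + dy * cos_a))
    return corners

# Example: rotated_box_corners(100, 50, 80, 10, angle_deg=30)
```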
Enhancement description
I have these errors in the log on each start:
backend_1 | Initializing database
backend_1 | Operations to perform:
backend_1 | Apply all migrations: admin, api, auth, authtoken, contenttypes, sessions, social_django
backend_1 | Running migrations:
backend_1 | No migrations to apply.
backend_1 | Your models have changes that are not y
Request type
- Please close this issue, I accidentally submitted it without adding any details
- New documentation
- Correction or update
Details
Buried in the Formal Syntax is <media-or>, which allows a list of media rules to use the or keyword. As I understand it, this was added in [CSS Conditional Rules Module Level 3](https
📚 Documentation
Description
It is not clear from the documentation how (and when) to use SubwordField, and it is hard to find usage examples. It would be great if someone who has used it could add at least a few lines to its docs. For example, if I am using the github.com/VKCOM/YouTokenToMe tokenizer, should I create a SubwordField or a Field, and what is the difference between them?
Expected Behavior
I want to convert torch.nn.Linear modules to weight-drop linear modules in my (possibly large) model, and I want to train the model on multiple GPUs. However, I get a RuntimeError in my sample code. First, I have _weight_drop(), which drops part of the weights of a torch.nn.Linear (see the code below).
Actual Behavior
RuntimeError: arguments are located on different GPUs at /
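The reporter's snippet is cut off in this excerpt. As a minimal sketch of what such a helper typically looks like (an assumption, not the reporter's exact code), _weight_drop can re-register the layer's weight as weight_raw and apply dropout to it on every forward pass:

```python
# Minimal weight-drop (DropConnect) sketch, assuming a plain nn.Linear layer.
import torch.nn as nn
import torch.nn.functional as F

def _weight_drop(module: nn.Linear, dropout: float = 0.5) -> None:
    weight = module.weight
    del module._parameters["weight"]              # detach the original parameter
    module.register_parameter("weight_raw", nn.Parameter(weight.data))

    original_forward = module.forward             # bound Linear.forward

    def forward(x):
        # Recompute the dropped weight on every call so gradients reach weight_raw.
        module.weight = F.dropout(module.weight_raw, p=dropout,
                                  training=module.training)
        return original_forward(x)

    module.forward = forward

# Usage sketch:
#   layer = nn.Linear(10, 10)
#   _weight_drop(layer, dropout=0.3)
```

Because the patched forward closes over the original module instance, wrappers of this kind tend to clash with nn.DataParallel replication, which is consistent with the "arguments are located on different GPUs" error above.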
It would be nice to have some general developer documentation for potential contributors to help in cases such as #510, etc.
What are the best steps to take towards accomplishing this? Maybe something similar to the Pandas developer docs (though not every detail is needed)?
I've begun an implementation of this on my fork, basicall
I noticed that you used the image-height parameter format as the font size.
https://github.com/Belval/TextRecognitionDataGenerator/blob/33d8985521645280e102987e773bf1e424a045df/TextRecognitionDataGenerator/computer_text_generator.py#L14
In my test, image_font = ImageFont.truetype(font, size=500) reported no error, but it was time-consuming. So I am confused: why use format as the font_size?
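For reference, a hedged illustration of the two call patterns being contrasted (the font path and sizes below are made up; PIL's ImageFont.truetype takes the font file and a point size):

```python
# Illustrative only: the same PIL call, sized either from the target image height
# or from an independent font_size value.
from PIL import ImageFont

font_path = "fonts/example.ttf"   # hypothetical font file
image_height = 64                 # target output image height in pixels

# Size driven by the image height (the behaviour described in the issue):
font_from_height = ImageFont.truetype(font_path, size=image_height)

# Size driven by an explicit, independent font_size:
font_from_size = ImageFont.truetype(font_path, size=32)
```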
Add a tag document
Create a Name <-> tags CSV, so that one day it would be possible to get only nature-related names, or only technical names, and so on (see the sketch below).
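As a hypothetical sketch of what that could look like (the file name, column names, and tags are illustrative assumptions, not an agreed format):

```python
# names_tags.csv (illustrative):
#   name,tags
#   Rose,nature;flower
#   Ada,technical;historical
import csv

def names_with_tag(path: str, tag: str) -> list:
    """Return every name whose semicolon-separated tag list contains `tag`."""
    matches = []
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            if tag in row["tags"].split(";"):
                matches.append(row["name"])
    return matches

# Example: names_with_tag("names_tags.csv", "nature")
```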
It would be useful to implement support for various photoreceptor models so that it is possible to generate custom cone fundamentals down the road. I have started with the Carroll et al. (2000), Stockman and Sharpe (2000), and Lamb (1995) photoreceptor models in this notebook: https://colab.research.google.com/drive/1snhtUdUxUrTnw_B0kagvfz015Co9p-xv
We will obviously need support for various pre-receptor
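The excerpt is cut off above. As a purely structural sketch of what "support for various photoreceptor models" could look like (all names below are hypothetical, and the model body is a placeholder rather than real colour science), a small registry would let spectral sensitivities be generated from a chosen model:

```python
# Hypothetical interface sketch only: register photoreceptor models by name and
# dispatch to the chosen one when generating spectral sensitivities.
from typing import Callable, Dict

import numpy as np

PHOTORECEPTOR_MODELS: Dict[str, Callable[[np.ndarray], np.ndarray]] = {}

def register_model(name: str):
    """Decorator registering a photoreceptor model under a readable name."""
    def decorator(func: Callable[[np.ndarray], np.ndarray]):
        PHOTORECEPTOR_MODELS[name] = func
        return func
    return decorator

@register_model("Lamb 1995")
def lamb_1995(wavelengths: np.ndarray) -> np.ndarray:
    # Placeholder body: a real implementation would follow Lamb (1995).
    raise NotImplementedError

def spectral_sensitivity(model: str, wavelengths: np.ndarray) -> np.ndarray:
    """Evaluate the requested photoreceptor model at the given wavelengths (nm)."""
    return PHOTORECEPTOR_MODELS[model](wavelengths)
```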
The documentation file appears to have been generated with no space between the hashes and the header text. This causes the headers not to render correctly and makes them difficult to read. See below for an example with and without the space:
## Mobius API Documentation
###Microsoft.Spark.CSharp.Core.Accumulator
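As a hedged illustration of a possible cleanup pass (not something the project ships), a small regex substitution could insert the missing space after the leading hashes:

```python
# Illustrative only: add a space after leading '#' characters so Markdown
# headers such as '###Microsoft.Spark.CSharp.Core.Accumulator' render correctly.
import re

def fix_heading_spaces(markdown_text: str) -> str:
    return re.sub(r"^(#+)(?=\S)", r"\1 ", markdown_text, flags=re.MULTILINE)

# Example:
#   fix_heading_spaces("###Microsoft.Spark.CSharp.Core.Accumulator")
#   -> "### Microsoft.Spark.CSharp.Core.Accumulator"
```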
I was wondering if it is possible to generate a list of n unique company names? I saw some PRs that added a unique keyword for 'words', but it doesn't seem to extend to other providers. I understand I could just keep regenerating and dropping duplicates until I got a unique set of length n, but it would be nice to just have a keyword for that (plus this m
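In the meantime, a minimal sketch of the regenerate-and-deduplicate workaround described above (assuming the Faker library and its company() provider):

```python
# Keep generating company names and discard duplicates until n unique values exist.
from faker import Faker

def unique_company_names(n: int, seed: int = 0) -> list:
    Faker.seed(seed)          # make the run reproducible
    fake = Faker()
    names = set()
    while len(names) < n:
        names.add(fake.company())
    return sorted(names)

# Example: unique_company_names(100)
```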