pytorch
Here are 11,193 public repositories matching this topic...
Several parts of the op spec, such as the main op description and the attribute, input, and output descriptions, become part of any binary that consumes ONNX (e.g. onnxruntime), increasing its size with strings that take no part in the execution of the model or its verification. Setting __ONNX_NO_DOC_STRINGS doesn't really help here, since (1) it's not used in the SetDoc(string) overload (s
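The pattern the macro is supposed to enforce can be illustrated with a small stdlib-only sketch. Everything here is hypothetical stand-in code, not the actual ONNX C++ API: `OpSchema`, `set_doc`, and the `NO_DOC_STRINGS` environment flag merely mimic how a compile-time guard would drop documentation strings from every setter, including the overload the issue says is currently missed.

```python
import os

# Hypothetical stand-in for the ONNX build flag; in ONNX itself this is a
# compile-time macro, not an environment variable.
NO_DOC_STRINGS = os.environ.get("NO_DOC_STRINGS") == "1"

class OpSchema:
    """Minimal sketch of an op schema whose doc strings can be disabled."""

    def __init__(self, name):
        self.name = name
        self.doc = ""

    def set_doc(self, doc):
        # The behavior the issue asks for: honor the flag in *every*
        # overload, so no documentation text reaches the binary when
        # doc strings are disabled.
        if not NO_DOC_STRINGS:
            self.doc = doc
        return self

schema = OpSchema("Relu").set_doc("Relu applies f(x) = max(0, x) elementwise.")
```

With the flag set, `schema.doc` stays empty and the prose never has to be stored at all, which is the size saving the issue is after.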
Split the code from create_sandbox into two separate functions and remove the noqa: C901.

Describe alternatives you've considered
Simplify the function such that no split is required.

Additional context
Code quality:
2020-04-29T13:13:32.5968940Z ./syft/sandbox.py:12:1: C901 'create_sandbox' is too complex (26)
2020-04-29T13:13:32.5969257Z def create_sandbox(gbs,
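The requested refactor can be sketched as follows. The helper names and their bodies are illustrative only; PySyft's real create_sandbox is far larger. The idea is simply to pull each branch-heavy concern into its own function so that no single function exceeds flake8's C901 complexity threshold and the noqa comment can be dropped.

```python
def _load_datasets(download_data):
    """First extracted concern: decide which demo datasets to prepare."""
    return ["boston_housing", "diabetes"] if download_data else []

def _create_workers(hook, names):
    """Second extracted concern: build one virtual worker per name."""
    return {name: f"{hook}:{name}" for name in names}

def create_sandbox(hook, download_data=True):
    """Slimmed-down coordinator: delegates to helpers so that each
    function stays below the C901 cyclomatic-complexity limit."""
    datasets = _load_datasets(download_data)
    workers = _create_workers(hook, ["alice", "bob"])
    return {"datasets": datasets, "workers": workers}

sandbox = create_sandbox("torch_hook")
```

Because C901 is measured per function, splitting distributes the branching across smaller units rather than eliminating it, which is why the issue lists "simplify the function such that no split is required" as the alternative.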
Consider this code that downloads models and tokenizers to disk and then uses BertTokenizer.from_pretrained to load the tokenizer from disk.

ISSUE: BertTokenizer.from_pretrained() does not seem to be compatible with Python's native pathlib module.
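Until the method accepts Path objects directly, a common workaround is to convert the path to a plain string before the call. The sketch below uses only the standard library to show the conversion; BertTokenizer itself lives in the transformers library and is deliberately left as a commented-out, hypothetical usage line.

```python
import os
from pathlib import Path

# A save directory built with pathlib, as in the issue's reproduction.
save_dir = Path("models") / "bert-base-uncased"

# os.fspath() (equivalently str()) turns a pathlib.Path into the plain
# string that string-only APIs expect.
tokenizer_path = os.fspath(save_dir)

# tokenizer = BertTokenizer.from_pretrained(tokenizer_path)  # hypothetical usage
print(type(tokenizer_path).__name__)  # str
```

os.fspath() is the canonical conversion because it also accepts any object implementing the __fspath__ protocol, not just pathlib.Path.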