NumPy

NumPy is an open source library for the Python programming language, adding support for large, multi-dimensional arrays and matrices, along with a large collection of high-level mathematical functions to operate on these arrays.
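As a quick illustration of what that support looks like in practice, here is a small, self-contained example (not tied to any repository listed below) of creating a multidimensional array and applying vectorized, high-level operations to it:

```python
import numpy as np

# A 3x4 floating-point matrix.
a = np.arange(12, dtype=np.float64).reshape(3, 4)

row_means = a.mean(axis=1)   # reduce along each row
scaled = np.sqrt(a) + 1.0    # element-wise ufuncs, no explicit Python loops
product = a @ a.T            # matrix multiplication

print(row_means.shape, scaled.dtype, product.shape)  # (3,) float64 (3, 3)
```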
Here are 9,159 public repositories matching this topic...
.coveragerc is present in git but not release tarballs.
What happened:
If a negative value for drop_axis is passed into either map_blocks or map_overlap, a non-informative exception is raised.
What you expected to happen:
I would expect this to work as it does in NumPy for negative axis arguments, where the axis becomes axis = axis % array.ndim. If it is not intended to work, then it should raise a user-friendly AxisError. This came up
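For reference, the NumPy convention the reporter describes can be sketched as follows; this is only an illustrative helper, not Dask's actual implementation, and the function name normalize_axis is made up for the example:

```python
import numpy as np

try:
    from numpy.exceptions import AxisError  # NumPy >= 1.25
except ImportError:
    from numpy import AxisError  # older NumPy releases

def normalize_axis(axis, ndim):
    """Map a possibly negative axis into [0, ndim), NumPy-style,
    and raise a descriptive AxisError when it is out of range."""
    if not -ndim <= axis < ndim:
        raise AxisError(axis, ndim)
    return axis % ndim

arr = np.zeros((2, 3, 4))
print(normalize_axis(-1, arr.ndim))  # 2, i.e. the last axis
```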
Bidirectional RNN
Is there a way to train a bidirectional RNN (such as an LSTM or GRU) in trax nowadays?
Hi,
If possible, please add these indicators as well:
- TDI (Traders Dynamic Index)
- Chandelier Exit
- Pivot Points
- BOP (Balance of Power)
- CTM (Chande Trend Meter)
- Coppock Curve
- Correlation Coefficient
- PMO (DecisionPoint Price Momentum Oscillator)
- Ulcer Index
Most of them, except TDI, are available on stockcharts.com.
Thanks
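For anyone picking this up, two of the requested indicators have simple textbook definitions that can be sketched with pandas; the column names ("open", "high", "low", "close") and the smoothing window are assumptions for the example, and the exact conventions may differ slightly from stockcharts.com:

```python
import pandas as pd

def pivot_points(df: pd.DataFrame) -> pd.DataFrame:
    """Classic floor-trader pivot points from the previous bar's high/low/close."""
    prev = df[["high", "low", "close"]].shift(1)
    p = (prev["high"] + prev["low"] + prev["close"]) / 3.0
    return pd.DataFrame({
        "pivot": p,
        "r1": 2 * p - prev["low"],
        "s1": 2 * p - prev["high"],
        "r2": p + (prev["high"] - prev["low"]),
        "s2": p - (prev["high"] - prev["low"]),
    })

def balance_of_power(df: pd.DataFrame, window: int = 14) -> pd.Series:
    """BOP = (close - open) / (high - low), usually smoothed with a moving average."""
    raw = (df["close"] - df["open"]) / (df["high"] - df["low"])
    return raw.rolling(window).mean()
```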
[Error Message] Improve error message in SentencepieceTokenizer when arguments are not expected.
Description
While using tokenizers.create with the model and vocab file for a custom corpus, the code throws an error and is not able to generate the BERT vocab file.
Error Message
ValueError: Mismatch vocabulary! All special tokens specified must be control tokens in the sentencepiece vocabulary.
To Reproduce
from gluonnlp.data import tokenizers
tokenizers.create('spm', model_p
Support Series.between
Is your feature request related to a problem? Please describe.
Sometimes you want to check that data values are present in another array, but only up to a certain tolerance.
Describe the solution you'd like
da.isin(test_values, tolerance=1e-6), where the tolerance argument is optional.
Not sure what the implementation should be, but there are two vectorized [suggestions here](http
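For illustration only, here is one possible vectorized approach in plain NumPy; the tolerance keyword on da.isin does not exist today, and the helper name isin_with_tolerance is made up for this sketch:

```python
import numpy as np

def isin_with_tolerance(values, test_values, tolerance=1e-6):
    """Element-wise 'is close to any test value' check.

    Broadcasts `values` against `test_values`, so memory grows with
    values.size * test_values.size; np.isclose also applies its default
    relative tolerance on top of `tolerance` (the absolute tolerance).
    """
    values = np.asarray(values)
    test_values = np.asarray(test_values).ravel()
    close = np.isclose(values[..., np.newaxis], test_values, atol=tolerance)
    return close.any(axis=-1)

a = np.array([0.1, 0.2000004, 0.35])
print(isin_with_tolerance(a, [0.2, 0.35]))  # [False  True  True]
```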
Created by Travis Oliphant
Latest release 19 days ago
- Repository: numpy/numpy
- Website: numpy.org
- Wikipedia