Implement torch.igamma #46183
Conversation
store(tmp);
x.store(tmp_x);
for (int64_t i = 0; i < size(); i++) {
  tmp[i] = calc_igamma(tmp[i], tmp_b[i]);
Internal build error:
xplat/caffe2/aten/src/ATen/cpu/vec256/vec256_float_neon.h:371:36: error: use of undeclared identifier 'tmp_b'; did you mean 'tmp_x'?
tmp[i] = calc_igamma(tmp[i], tmp_b[i]);
I recommend removing NEON support.
I've removed the NEON code. I hope that resolves the error.
@mruberry has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
@mruberry What happened in the FB internal test? Is it not possible to expose the test?
My build fix suggestion was incorrect. Sorry about that @mfkasim1. Trying another fix.
@mruberry has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Ah, no worries. I've added 2 commits to remove the tabs and merge in the latest master.
@mruberry has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Codecov Report

@@            Coverage Diff            @@
##           master   #46183      +/-   ##
==========================================
  Coverage    68.94%   68.95%
==========================================
  Files          434      434
  Lines        56188    56191       +3
==========================================
+ Hits         38740    38744       +4
+ Misses      17448    17447       -1
@mruberry it still fails on Facebook's internal test ...
I know. Sorry. We're sorting out an internal test issue that's affecting multiple PRs and is not directly related to this PR. I'll update as soon as I get a signal from it.
Internal tests are passing. Just need to wait for this to be merged.
Thanks @mruberry
Thanks @mruberry for your reviews and comments in this PR. Would it still be useful to implement torch.igammac?
That's a good question. A compromise approach would be to add a Note to the igamma documentation describing how to compute the regularized upper incomplete gamma function from it, and see if we get a request for igammac itself. The example could also be extended with a comment like, "Computes the regularized upper incomplete gamma function...". Is the regularized upper incomplete gamma function used for anything? Bonus question: should we extend the igamma examples anyway with an example of it being used in context?
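For reference, such a Note would boil down to the identity Q(a, x) = 1 - P(a, x), where torch.igamma computes the regularized lower incomplete gamma function P(a, x). A minimal sketch (the tensor values are illustrative):

import torch

a = torch.tensor([1.0, 2.0, 4.0], dtype=torch.float64)
x = torch.tensor([0.5, 2.0, 8.0], dtype=torch.float64)

lower = torch.igamma(a, x)  # regularized lower incomplete gamma P(a, x)
upper = 1.0 - lower         # regularized upper incomplete gamma Q(a, x)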
The only advantage I see is that it might offer higher precision than computing 1 - torch.igamma(a, x) directly.
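That concern is real: when x is much larger than a, P(a, x) rounds to exactly 1.0 in double precision and the subtraction cancels to zero. A quick illustration against scipy's reference routine (a sketch; scipy is assumed to be installed):

import torch
from scipy import special

a = torch.tensor(2.0, dtype=torch.float64)
x = torch.tensor(50.0, dtype=torch.float64)

print(1.0 - torch.igamma(a, x))      # tensor(0.): catastrophic cancellation
print(special.gammaincc(2.0, 50.0))  # ~9.84e-21: dedicated routine keeps the tail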
That's true, and it seems silly to have the real thing but not offer it if there are precision concerns.
OK, let's just do it, then?
Yes, but this one is extremely low cost (for the reason you mention above). I appreciate your thoughtfulness, @mfkasim1. I suppose we might as well add it.
Great, thanks for your answers! I'll do it soon.
Summary: Related: #46183 (torch.igamma). This is the regularized upper incomplete gamma function. This is supposed to be exactly the same as #47463, but after rebasing the `viable/strict` branch. cc: mruberry
Pull Request resolved: #48171
Reviewed By: zhangguanheng66
Differential Revision: D25060107
Pulled By: mruberry
fbshipit-source-id: 89780dea21dbb2141cbc4f7f18192cb78a769b17
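With torch.igammac landed, the dedicated op avoids the cancellation discussed above. A minimal usage sketch (same illustrative values as before):

import torch

a = torch.tensor(2.0, dtype=torch.float64)
x = torch.tensor(50.0, dtype=torch.float64)

print(torch.igammac(a, x))  # ~9.84e-21, the regularized upper incomplete gamma Q(a, x)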
Fixes #41637
This is the regularized lower incomplete gamma function, equivalent to scipy's gammainc and tensorflow's igamma.
cc @fritzo @mruberry
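That equivalence is easy to spot-check against scipy (a sketch; the tensor values are illustrative and scipy is assumed to be installed):

import torch
from scipy import special

a = torch.tensor([0.5, 1.0, 4.0], dtype=torch.float64)
x = torch.tensor([0.5, 2.0, 3.0], dtype=torch.float64)

expected = torch.from_numpy(special.gammainc(a.numpy(), x.numpy()))
print(torch.allclose(torch.igamma(a, x), expected))  # True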