
Implement torch.igamma #46183

Closed · wants to merge 37 commits
Conversation

@mfkasim1 (Contributor) commented Oct 12, 2020

Fixes #41637.
This adds the regularized lower incomplete gamma function, equivalent to SciPy's gammainc and TensorFlow's igamma.
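
A minimal usage sketch (illustrative values; SciPy is used here purely as a reference implementation):

```python
import torch
from scipy import special

a = torch.tensor([0.5, 1.0, 2.0], dtype=torch.float64)
x = torch.tensor([1.0, 2.0, 3.0], dtype=torch.float64)

# Regularized lower incomplete gamma function P(a, x), elementwise.
out = torch.igamma(a, x)

# Should match SciPy's gammainc (and TensorFlow's igamma) elementwise.
ref = torch.from_numpy(special.gammainc(a.numpy(), x.numpy()))
print(torch.allclose(out, ref))  # expected: True
```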

cc @fritzo @mruberry

@dr-ci (bot) commented Oct 12, 2020

💊 CI failures summary and remediations

As of commit 44f4229 (more details on the Dr. CI page):


💚 💚 Looks good so far! There are no failures yet. 💚 💚


This comment was automatically generated by Dr. CI.

@mfkasim1 marked this pull request as ready for review Oct 13, 2020
@fritzo (Collaborator) left a comment

Tests look good; I would only test on more points.
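
A sketch of the kind of broader check suggested here (the grid, tolerances, and comparison against SciPy are illustrative, not the tests actually added in this PR):

```python
import numpy as np
import torch
from scipy import special

# Evaluate torch.igamma on a dense grid of (a, x) pairs and compare to SciPy.
a = np.logspace(-2, 2, 50)
x = np.logspace(-2, 2, 50)
aa, xx = np.meshgrid(a, x)

expected = special.gammainc(aa, xx)
actual = torch.igamma(torch.from_numpy(aa), torch.from_numpy(xx)).numpy()

np.testing.assert_allclose(actual, expected, rtol=1e-6, atol=1e-10)
```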

test/test_autograd.py — review thread (outdated, resolved)
@bdhirsh requested a review from mruberry Oct 13, 2020
torch/_torch_docs.py — 5 review threads (outdated, resolved)
aten/src/ATen/native/Math.h — review thread (outdated, resolved)
    store(tmp);
    x.store(tmp_x);
    for (int64_t i = 0; i < size(); i++) {
      tmp[i] = calc_igamma(tmp[i], tmp_b[i]);


@mruberry (Contributor) commented Oct 22, 2020

Internal build error:

xplat/caffe2/aten/src/ATen/cpu/vec256/vec256_float_neon.h:371:36: error: use of undeclared identifier 'tmp_b'; did you mean 'tmp_x'?
      tmp[i] = calc_igamma(tmp[i], tmp_b[i]);

I recommend removing NEON support.


@mfkasim1 (Author, Contributor) commented Oct 23, 2020

I've removed the NEON code. I hope that resolves the error.

@mfkasim1 requested a review from mruberry Oct 24, 2020
@facebook-github-bot left a comment

@mruberry has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@mfkasim1 (Author, Contributor) commented Oct 27, 2020

@mruberry What happened in the FB internal test? Is it not possible to expose the test?

@mruberry (Contributor) commented Oct 27, 2020

My build fix suggestion was incorrect. Sorry about that @mfkasim1. Trying another fix.

@facebook-github-bot left a comment

@mruberry has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@mfkasim1 (Author, Contributor) commented Oct 27, 2020

> My build fix suggestion was incorrect. Sorry about that @mfkasim1. Trying another fix.

Ah, no worries. I've added two commits to remove the tabs and merge viable/upstream.

@facebook-github-bot left a comment

@mruberry has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@codecov (bot) commented Oct 27, 2020

Codecov Report

Merging #46183 into master will increase coverage by 0.00%.
The diff coverage is 100.00%.

@@           Coverage Diff           @@
##           master   #46183   +/-   ##
=======================================
  Coverage   68.94%   68.95%           
=======================================
  Files         434      434           
  Lines       56188    56191    +3     
=======================================
+ Hits        38740    38744    +4     
+ Misses      17448    17447    -1     

@mfkasim1 (Author, Contributor) commented Oct 28, 2020

@mruberry it still fails on Facebook's internal test...

@mruberry (Contributor) commented Oct 28, 2020

> @mruberry it still fails on Facebook's internal test...

I know. Sorry. We're sorting out an internal test issue that's affecting multiple PRs and is not directly related to this PR. I'll update as soon as I get a signal from it.

@mruberry (Contributor) commented Oct 29, 2020

Internal tests are passing. Just need to wait for this to be merged.

@mfkasim1 (Author, Contributor) commented Oct 29, 2020

Thanks @mruberry 👍

@facebook-github-bot commented Oct 29, 2020

@mruberry merged this pull request in 6eaa324.

@mfkasim1 (Author, Contributor) commented Nov 3, 2020

Thanks @mruberry for your reviews and comments in this PR. Would it still be useful to do torch.igammac (the upper one) or should we just leave it to the users to do igammac = 1 - igamma?

@mruberry (Contributor) commented Nov 3, 2020

That's a good question. A compromise approach would be to add a Note to the igamma documentation describing how to compute the regularized upper incomplete gamma function from it, and see if we get a request for igammac itself. The example could also be extended with a comment like, "Computes the regularized upper incomplete gamma function..."

Is the regularized upper incomplete gamma function used for anything? Bonus question: should we extend the igamma examples anyway with an example of it being used in context?
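
For concreteness, a minimal sketch of what such a Note's example might look like (an illustration only, not the wording that ended up in the documentation):

```python
import torch

a = torch.tensor([0.5, 1.0, 4.0], dtype=torch.float64)
x = torch.tensor([2.0, 2.0, 2.0], dtype=torch.float64)

# Computes the regularized upper incomplete gamma function Q(a, x)
# from the lower one via the identity Q(a, x) = 1 - P(a, x).
upper = 1 - torch.igamma(a, x)
print(upper)
```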

@mfkasim1 (Author, Contributor) commented Nov 3, 2020

The only advantage I see is that it might offer higher precision than 1 - igamma. I'm not sure where the upper igamma is used, anyway (my application only uses the lower one).
On the other hand, implementing it is not a big deal: the function is already there (it's used to compute the lower igamma), so it just needs to be dispatched to PyTorch's API.
It's just a question of "can there be too many API functions?"
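
To illustrate the precision point (a hedged sketch; the values are arbitrary and SciPy's gammaincc stands in for a dedicated upper-incomplete-gamma implementation):

```python
import torch
from scipy import special

a, x = 2.0, 40.0  # here Q(a, x) is on the order of 1e-16

via_complement = 1.0 - torch.igamma(torch.tensor(a, dtype=torch.float64),
                                    torch.tensor(x, dtype=torch.float64))
direct = special.gammaincc(a, x)  # dedicated evaluation of Q(a, x)

# Subtracting from 1 leaves an absolute error near machine epsilon (~1e-16),
# which is a large *relative* error when the true value itself is ~1e-16.
print(float(via_complement), float(direct))
```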

@mruberry (Contributor) commented Nov 3, 2020

> The only advantage I see is that it might offer higher precision than 1 - igamma. I'm not sure where the upper igamma is used, anyway (my application only uses the lower one).

That's true, and it seems silly to have the real thing but not offer it if there are precision concerns.

> On the other hand, implementing it is not a big deal: the function is already there (it's used to compute the lower igamma), so it just needs to be dispatched to PyTorch's API.

OK, let's just do it, then?

> It's just a question of "can there be too many API functions?"

Yes, but this one is extremely low cost (for the reason you mention above).

I appreciate your thoughtfulness, @mfkasim1. I suppose we might as well add it.

@mfkasim1 (Author, Contributor) commented Nov 3, 2020

Great, thanks for your answers! I'll do it soon.

This was referenced Nov 4, 2020
facebook-github-bot added a commit that referenced this pull request Nov 19, 2020
Summary:
Related: #46183 (torch.igamma)
This is the regularized upper incomplete gamma function.

This is supposed to be exactly the same as #47463, but after rebasing the `viable/strict` branch.

cc: mruberry

Pull Request resolved: #48171

Reviewed By: zhangguanheng66

Differential Revision: D25060107

Pulled By: mruberry

fbshipit-source-id: 89780dea21dbb2141cbc4f7f18192cb78a769b17
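
A brief follow-up sketch with the companion function discussed above (illustrative only; it just checks the defining identity between the two):

```python
import torch

a = torch.tensor([0.5, 2.0, 10.0], dtype=torch.float64)
x = torch.tensor([1.0, 3.0, 8.0], dtype=torch.float64)

lower = torch.igamma(a, x)    # regularized lower incomplete gamma, P(a, x)
upper = torch.igammac(a, x)   # regularized upper incomplete gamma, Q(a, x)

# By definition, P(a, x) + Q(a, x) = 1.
print(torch.allclose(lower + upper, torch.ones_like(lower)))  # expected: True
```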