
torch.finfo doesn't work for complex #35954

Open · anjali411 opened this issue Apr 3, 2020 · 13 comments

@anjali411 (Contributor) commented Apr 3, 2020

>>> import torch
>>> torch.finfo(torch.complex64)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: torch.finfo() requires a floating point input type. Use torch.iinfo to handle 'torch.finfo'
>>> import numpy
>>> numpy.finfo(numpy.complex64)
finfo(resolution=1e-06, min=-3.4028235e+38, max=3.4028235e+38, dtype=float32)

cc @ezyang @anjali411 @dylanbespalko
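
In the meantime, a minimal Python-level workaround is possible (a sketch, not PyTorch API; the finfo_compat name and the complex-to-real mapping are assumptions based on the standard complex64/float32 and complex128/float64 pairing):

import torch

# Hypothetical workaround: report the limits of the complex dtype's
# underlying real dtype, mirroring NumPy's behavior for complex finfo.
_COMPLEX_TO_REAL = {
    torch.complex64: torch.float32,
    torch.complex128: torch.float64,
}

def finfo_compat(dtype):
    # Fall back to the real counterpart for complex dtypes; pass
    # floating point dtypes through unchanged.
    return torch.finfo(_COMPLEX_TO_REAL.get(dtype, dtype))

print(finfo_compat(torch.complex64))  # same output as torch.finfo(torch.float32)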

@Justin-Huber commented Apr 6, 2020

I'd like to work on this issue

@anjali411 (Contributor, Author) commented Apr 9, 2020

> I'd like to work on this issue

@Justin-Huber thank you :) You can start by looking at this file: https://github.com/pytorch/pytorch/blob/master/torch/csrc/TypeInfo.cpp. Let me know if you have any questions or need any help, and add me as a reviewer when you have a working PR :D

@Justin-Huber commented Apr 9, 2020

Sounds good, thanks!

@jdklee commented Apr 27, 2020

@Justin-Huber Did you get a crack at this? I was wondering maybe we could both work on it?

@Justin-Huber commented Apr 27, 2020

@jdklee No, I haven't; that sounds good to me! Let me know if you fork it and we can work on it together :)

@anjali411 (Contributor, Author) commented Apr 27, 2020

On further thought: even though NumPy supports finfo (floating point info) for complex dtypes, I think we should not support torch.finfo for complex dtypes, since complex_tensor.is_floating_point() returns False. Perhaps we can add torch.cinfo for complex dtypes?

As an FYI, this is NumPy's current behavior:

>>> np.finfo(np.complex64)
finfo(resolution=1e-06, min=-3.4028235e+38, max=3.4028235e+38, dtype=float32)
>>> np.finfo(np.float32)
finfo(resolution=1e-06, min=-3.4028235e+38, max=3.4028235e+38, dtype=float32)
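
The same mapping applies to complex128, which reports the float64 limits (output reproduced from a NumPy session):

>>> np.finfo(np.complex128)
finfo(resolution=1e-15, min=-1.7976931348623157e+308, max=1.7976931348623157e+308, dtype=float64)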

cc @mruberry @ezyang

@mruberry (Contributor) commented Apr 27, 2020

I agree that conceptually it would be more elegant for PyTorch (in a bubble) to have finfo and cinfo, with finfo not supporting complex types. In practice, though, it seems fine for finfo to return values for complex dtypes, since the user is explicitly requesting them (making the risk of user confusion very low), and we greatly prefer being compatible with NumPy where possible. We could also add a cinfo with redundant functionality if we wanted.

@Justin-Huber commented May 9, 2020

I'm finally getting around to this and have a few questions. Are there any tests or documentation for finfo? Is the expected behavior the same as NumPy's finfo, and are the only complex types np.complex_, np.complex128, and np.complex64?

@mruberry (Contributor) commented May 10, 2020

@Justin-Huber There are existing tests for finfo in test/test_type_info.py, and it's documented at https://pytorch.org/docs/master/type_info.html?highlight=finfo#torch.torch.finfo

I would focus on complex64 and complex128, and yes, the expected behavior is the same as NumPy's. It'd be great to see a PR for this.
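
A quick sketch of the parity being described, once a fix makes torch.finfo accept complex dtypes (the dtype pairing below is an assumption for illustration, not existing test code):

import numpy as np
import torch

# Expected parity after the fix: each complex dtype should report
# the limits of its real component dtype, just as NumPy does.
for torch_dtype, np_dtype in [(torch.complex64, np.complex64),
                              (torch.complex128, np.complex128)]:
    t, n = torch.finfo(torch_dtype), np.finfo(np_dtype)
    assert t.min == n.min and t.max == n.max and t.eps == n.eps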

@Justin-Huber commented May 20, 2020

In https://github.com/pytorch/pytorch/blob/master/test/test_type_info.py#L40-L43, what's the reasoning behind constructing a tensor and then getting the dtype back from it? Is it to get the corresponding NumPy dtype? I was looking at #33152 and saw that complex tensors were disabled for the current release, so I was wondering if there's another build I should be working from, or if just skipping the tensor construction would be fine.

@mruberry (Contributor) commented May 20, 2020

There's a lot of historic behavior in this test, and I think you're right: they're constructing a NumPy array as a way of accessing NumPy's finfo. In torch/testing/_internal/common_utils.py we actually have torch_to_numpy_dtype_dict, which will map torch to NumPy dtypes for you.

You should work on the master branch, which lets you construct complex tensors.
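
For example, the dict can replace the tensor-construction detour entirely (a sketch assuming it maps torch dtypes straight to NumPy dtypes, as its name suggests):

import numpy as np
import torch
from torch.testing._internal.common_utils import torch_to_numpy_dtype_dict

# Look up the NumPy dtype directly instead of constructing a tensor
# just to recover a dtype, then compare finfo results across libraries.
np_dtype = torch_to_numpy_dtype_dict[torch.float32]
assert np.finfo(np_dtype).max == torch.finfo(torch.float32).max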

@Justin-Huber commented May 23, 2020

Here's what I have so far: master...Justin-Huber:master. Should the default dtype support complex too?

@mruberry (Contributor) commented May 27, 2020

Hey @Justin-Huber, hope you had a relaxing weekend.

No, the default dtype already supports too many types. We probably should have restricted it to float32 and float64.

You can open a draft PR with a [WIP] tag on it, too, by the way. That'll make it easy to track the conversation and look at the code together. Add me as a reviewer when you do.
