
getattr does not support default parameter when jitted #56909

Open
Pangoraw opened this issue Apr 26, 2021 · 11 comments

Comments


@Pangoraw Pangoraw commented Apr 26, 2021

🐛 Bug

The builtin getattr function takes an optional third parameter, default, which is returned when the named attribute does not exist. However, the IR emitter does not support this third parameter and accepts only the two-argument form:

https://github.com/pytorch/pytorch/blob/master/torch/csrc/jit/frontend/ir_emitter.cpp#L2858-L2859

To Reproduce

Steps to reproduce the behavior:

  1. Create a function calling getattr with 3 arguments

def f(x):
    return getattr(torch, "unknown", [])

  2. Try to jit the function

torch.jit.script(f)

  3. Get an error
Error log
RuntimeError                              Traceback (most recent call last)
in
----> 1 torch.jit.script(a)

~/anaconda3/lib/python3.7/site-packages/torch/jit/_script.py in script(obj, optimize, _frames_up, _rcb)
    988         _rcb = _jit_internal.createResolutionCallbackFromClosure(obj)
    989     fn = torch._C._jit_script_compile(
--> 990         qualified_name, ast, _rcb, get_default_args(obj)
    991     )
    992     # Forward docstrings

RuntimeError:
getattr expected exactly 2 arguments but found 3:
  File "", line 2
    def a(x):
        return getattr(torch, "unknown", [])
               ~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
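For comparison, outside of torch.jit.script the three-argument getattr behaves as documented. A minimal sketch of the expected eager-mode behavior (using a made-up plain class instead of the torch module, so it runs without PyTorch installed):

```python
# Eager Python: getattr falls back to the default when the attribute is missing.
class Dummy:
    known = 123

def f(x):
    # "unknown" does not exist on x, so the default [] is returned.
    return getattr(x, "unknown", [])

print(f(Dummy()))                      # []
print(getattr(Dummy(), "known", []))   # 123 (attribute present, default ignored)
```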

Expected behavior

torch.jit.script should accept the three-argument form of getattr and return the default value when the attribute is missing, matching eager Python.

Environment

PyTorch version: 1.8.1+cu102
Is debug build: False
CUDA used to build PyTorch: 10.2
ROCM used to build PyTorch: N/A

OS: Ubuntu 20.04.2 LTS (x86_64)
GCC version: (Ubuntu 10.3.0-1ubuntu1~20.04~2) 10.3.0
Clang version: 10.0.0-4ubuntu1 
CMake version: version 3.16.3

Python version: 3.7 (64-bit runtime)
Is CUDA available: True
CUDA runtime version: Could not collect
GPU models and configuration: GPU 0: GeForce 940MX
Nvidia driver version: 460.56
cuDNN version: /opt/arrayfire/lib64/libcudnn.so.7.6.5
HIP runtime version: N/A
MIOpen runtime version: N/A

Versions of relevant libraries:
[pip3] numpy==1.19.5
[pip3] numpydoc==0.9.1
[pip3] torch==1.8.1
[pip3] torchvision==0.9.1
[conda] blas                      1.0                         mkl  
[conda] mkl                       2019.4                      243  
[conda] mkl-service               2.3.0            py37he904b0f_0  
[conda] mkl_fft                   1.0.14           py37ha843d7b_0  
[conda] mkl_random                1.1.0            py37hd6b4f25_0  
[conda] numpy                     1.19.5                   pypi_0    pypi
[conda] numpydoc                  0.9.1                      py_0  
[conda] torch                     1.8.1                    pypi_0    pypi
[conda] torchvision               0.9.1                    pypi_0    pypi

cc @gmagogsfm


@SabbirAhmedAdor629 SabbirAhmedAdor629 commented Apr 28, 2021

Hi, I want to work on this issue.


@tugsbayasgalan tugsbayasgalan commented Apr 29, 2021

Assigned it to you! Let me know if you need help


@narang99 narang99 commented Apr 30, 2021

Can I try to work on this or any other "good first issue"? The others I saw had pending pull requests that were not resolved.


@SabbirAhmedAdor629 SabbirAhmedAdor629 commented May 1, 2021

> Can I try to work on this or any other "good first issue"? The others I saw had pending pull requests that were not resolved.

Hi, I am already working on this one. You can try another good first issue.


@gmagogsfm gmagogsfm commented May 1, 2021

> Can I try to work on this or any other "good first issue"? The others I saw had pending pull requests that were not resolved.

Hi @narang99,

Thanks for offering to contribute! I just marked a few more issues as "good first issue"; feel free to look through them and comment on one or two that you find interesting. Happy to discuss them with you if needed.


@aaditya-panik aaditya-panik commented May 11, 2021

Hi @gmagogsfm,

I'm new to PyTorch, so I was going through a couple of "good first issue" labels to see if I can work on some of them. For this issue specifically, I noticed the use of py::getattr without the default, and also a comment related to hasattr for Python modules here. Is there a reason hasattr is not implemented using py::hasattr, or using py::getattr with a default, for Python modules in general?


@gmagogsfm gmagogsfm commented May 22, 2021

> Hi @gmagogsfm,
>
> I'm new to pytorch, so was going through a couple of "good first issue" labels to see if I can work on some of them. More specifically for this issue, I noticed the use of py::getattr without the defaults and also a comment related to hasattr for python modules here. Is there a reason for hasattr not being implemented using py::hasattr and not using py::getattr with default for python modules in general?

Hi @aaditya-panik,

Thanks for the question. TorchScript compiles a PyTorch program into an archive that can be loaded into a Python-free environment for execution, so py::getattr cannot be used. As for why the current implementation of getattr doesn't support a default value: it simply wasn't needed before, so it hasn't been implemented yet.


@jerryzhenleicai jerryzhenleicai commented Jun 24, 2021

I am working on this, as a way to get myself familiar with the TorchScript codebase.


@jerryzhenleicai jerryzhenleicai commented Jun 24, 2021

So my plan for fixing this is to first verify that the number of args is 2 or 3, and then replace this line:

return obj->attr(apply.range(), method, name);

so that the IR returned is essentially an if-expression:

attribute not found with the given name ? default value param for attr : obj->attr(apply.range(), method, name)

Is this the right plan? Please comment.
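In eager Python terms, the desugaring sketched above corresponds to something like the following (the helper name getattr_with_default is made up purely for illustration; it is not part of the actual fix):

```python
# Illustrative eager-Python equivalent of the proposed if-expression emission:
# if the attribute is missing, take the default; otherwise take the attribute.
def getattr_with_default(obj, name, default):
    return getattr(obj, name) if hasattr(obj, name) else default

class C:
    x = 1

print(getattr_with_default(C(), "x", 42))  # 1  (attribute present)
print(getattr_with_default(C(), "y", 42))  # 42 (attribute missing)
```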


@aaditya-panik aaditya-panik commented Jun 25, 2021

I too thought along these lines. Here's what I could manage to write; I've been busy with other work, so I haven't validated whether this is the way to go:

https://github.com/aaditya-panik/pytorch/blob/a4de74a219c0e40610e2147ac2e48ec367b15f8b/torch/csrc/jit/frontend/ir_emitter.cpp#L2871-L2907


@jerryzhenleicai jerryzhenleicai commented Jun 25, 2021

Actually, the right syntax tree to emit is a prim::If node with two blocks, one for the true branch and one for the false branch. Something like this:

```
def g(x):
    a = T()
    return 1 if a.my == "HI" else 2
```

Graph:

```
graph(%x : Tensor):
  %5 : str = prim::Constant[value="HI"]() # ts1.py:13:21
  %8 : int = prim::Constant[value=1]() # ts1.py:13:8
  %9 : int = prim::Constant[value=2]() # ts1.py:13:31
  %a.1 : __torch__.T = prim::CreateObject()
  %2 : NoneType = prim::CallMethod[name="__init__"](%a.1) # ts1.py:12:5
  %my : str = prim::GetAttr[name="my"](%a.1)
  %6 : bool = aten::eq(%my, %5) # ts1.py:13:13
  %10 : int = prim::If(%6) # ts1.py:13:8
    block0():
      -> (%8)
    block1():
      -> (%9)
  return (%10)
```
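As a sanity check of the intended semantics, the eager-mode program behind that graph can be run directly. In this sketch T takes a constructor argument (an assumption added here, not in the original example) so that both prim::If branches are exercised:

```python
class T:
    def __init__(self, my: str = "HI"):
        self.my = my

def g(t: T) -> int:
    # Mirrors the prim::If node: block0 yields 1, block1 yields 2.
    return 1 if t.my == "HI" else 2

print(g(T()))      # 1
print(g(T("no")))  # 2
```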

Projects: JIT Triage (In discussion)
Linked pull requests: None yet (successfully merging a pull request may close this issue)
8 participants