JIT optimization for sequential casts that are idempotent #3031
base: master
Conversation
Thanks for sending this in. I made a couple of suggestions which are required to get the OpenCL backend working again.
There is an invalid read access error with the sparse test in the CUDA backend. I am able to reproduce it using the following code:
There is something odd going on with the implicit casts in the subtraction operation. It looks like the buffer object's shape is not set during the conversion. I am not sure where this is happening and I am investigating it.
@umar456 any update on this? Can I help in any way?
a9171ff to 3663038
@jacobkahn We can limit this optimization to casts between integer types or floating point types. This way it behaves like C++ types. Alternatively, we could keep the current behavior and allow destructive casts and expect the user to use functions like floor or ceil to get the same behavior.
@umar456 — revisiting this after some time — I think for now, destructive casts that emulate floor/ceil operations probably shouldn't be implemented implicitly. The casts that I think are more interesting to optimize away are casts between similar types with different precisions — f16 <> f32 <> f64 or u32 <> u64, etc. While some of these casts are destructive, there isn't really an operation to emulate them. A user who casts f32 --> f16 --> f32 almost certainly isn't doing so to intentionally lose precision. Thoughts?
The cast optimization removes the previous node in the AST but doesn't update the returned Array's ready flag when it is replaced by a buffer. This caused an error in the CUDA backend under certain scenarios. Added additional tests and logging for debugging.
Adds a JIT optimization which inserts a noop in the case of sequential casts that don't produce a differently-typed result.
Description
The following code is technically a noop:
No casting kernels should be generated for any of the above operations, especially for `c`, but they are. The solution here is, when creating the `CastOp`/`CastWrapper` for `c`, to check whether the previous operation was a cast. If it was, and the output type of the operation before that cast matches the current cast's output type, create a `__noop` node between the prev-prev operation and the current one.

This also guards against tricky cases where the result of `b` could be used elsewhere, in which case the intermediate casting operation can't be discounted completely.

With the change, running

`AF_JIT_KERNEL_TRACE=stderr ./test/cast_cuda --gtest_filter="*Test_JIT_DuplicateCastNoop"`

produces kernels without the redundant casts; before the change, the generated kernel used wasteful casts.
This PR also adds op, type, and children accessors to `Node`/`NaryNode` to facilitate inspecting the JIT tree for optimization.

Further optimization could be had by recursively checking previous operations until reaching one that is not a cast; this would fix arbitrarily long chains of casts that are noops on a particular subtree of JIT operations.
Changes to Users
No changes to user behavior.
Checklist