
JIT optimization for sequential casts that are idempotent #3031

Open · jacobkahn wants to merge 4 commits into master from the as_jit_optimization branch

Conversation

@jacobkahn (Contributor) commented Oct 23, 2020

Adds a JIT optimization that emits a no-op for sequential casts that don't produce a differently-typed result.

Description

The following code is technically a noop:

af::array a = af::randu(10, 1, 1, 1, af::dtype::f32);
af::array b = a.as(af::dtype::f64);
af::array c = b.as(af::dtype::f32);

No casting kernels should be generated for any of the above operations, especially for c, but they currently are. The fix is to check, when creating the CastOp/CastWrapper for c, whether the previous operation was also a cast. If it was, and the output type of the operation before it (the prev-prev operation) is the same as the output type of the current cast, create a __noop node between the prev-prev operation and the current one.
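As a rough illustration of that check, here is a toy model in plain C++. This is not the actual ArrayFire Node/CastWrapper code; the Op, Dtype, Node, and makeCastNode names below are invented for the example.

#include <memory>
#include <vector>

// Toy stand-ins for the JIT node machinery; names are illustrative only.
enum class Op { Cast, Noop, Other };
enum class Dtype { f16, f32, f64 };

struct Node {
    Op op;
    Dtype type;                                  // output type of this node
    std::vector<std::shared_ptr<Node>> children; // input nodes
};

std::shared_ptr<Node> makeCastNode(const std::shared_ptr<Node> &in, Dtype outType) {
    // If the input node is itself a cast whose own input already has the
    // requested output type, the two casts cancel out: link a no-op node
    // straight to the grandparent instead of emitting another cast kernel.
    if (in->op == Op::Cast && !in->children.empty() &&
        in->children[0]->type == outType) {
        return std::make_shared<Node>(Node{Op::Noop, outType, {in->children[0]}});
    }
    // Otherwise build a regular cast node.
    return std::make_shared<Node>(Node{Op::Cast, outType, {in}});
}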

This approach also accounts for tricky cases like:

af::array a = af::randu(10, 1, 1, 1, af::dtype::f32);
af::array b = a.as(af::dtype::f64);
af::array d = b + 2;
af::array c = b.as(af::dtype::f32);
c.eval();
d.eval();

where the result of b is still used (by d), so the intermediate casting operation can't be discounted completely.

With the change, running AF_JIT_KERNEL_TRACE=stderr ./test/cast_cuda --gtest_filter="*Test_JIT_DuplicateCastNoop" generates kernels without the redundant cast; before the change, the generated kernel included wasteful casts.

This PR also adds op, type, and children accessors to Node/NaryNode to facilitate inspecting the JIT tree for optimization.

Further optimization could be had by recursively checking previous operations until reaching one that is not a cast; this would handle arbitrarily long chains of casts that are no-ops on a particular subtree of JIT operations.
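Reusing the toy Node model from the earlier sketch (again, invented names rather than ArrayFire's actual internals), the recursive extension might look like the following. Note that, like the single-step version, it sidesteps the question of destructive intermediate casts discussed later in this thread.

std::shared_ptr<Node> collapseCastChain(const std::shared_ptr<Node> &in, Dtype outType) {
    // Walk back through consecutive cast nodes looking for an ancestor
    // whose output type already matches the requested output type.
    auto cur = in;
    while (cur->op == Op::Cast && !cur->children.empty()) {
        auto parent = cur->children[0];
        if (parent->type == outType) {
            // The whole chain of casts between `parent` and the requested
            // cast is a no-op for this subtree, so skip it entirely.
            return std::make_shared<Node>(Node{Op::Noop, outType, {parent}});
        }
        cur = parent;
    }
    // No matching ancestor; fall back to a regular cast node.
    return std::make_shared<Node>(Node{Op::Cast, outType, {in}});
}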

Changes to Users

No changes to user behavior.

Checklist

  • Rebased on latest master
  • Code compiles
  • Tests pass
  • Functions added to unified API
  • Functions documented

@umar456 (Member) left a comment

Thanks for sending this in. I made a couple of suggestions which are required to get the OpenCL backend working again.

(Inline review suggestions on src/backend/cuda/cast.hpp and src/backend/opencl/cast.hpp; now outdated/resolved.)
@umar456 (Member) commented Oct 29, 2020

There is an invalid read access error with the sparse test in the CUDA backend. I am able to reproduce it using the following code:

#include <arrayfire.h>
#include <gtest/gtest.h>

TEST(Cast, abs) {
    using namespace af;
    array a = randu(100, 100, f64);
    array b = a.as(f32);

    array c = max<double>(abs(a - b));
}

There is something odd going on with the implicit casts in the subtraction operation. It looks like the buffer object's shape is not set during the conversion. I am not sure where this is happening and I am investigating it.

@jacobkahn (Contributor, Author) commented Nov 26, 2020

@umar456 any update on this? Can I help in any way?

@umar456 force-pushed the as_jit_optimization branch 2 times, most recently from a9171ff to 3663038, on Jun 8, 2021
@umar456 (Member) commented Jun 8, 2021

@jacobkahn
I fixed the issues I referenced in my previous comment. I am thinking of a couple of scenarios where this optimization could be an issue. For example, what if we cast a floating point type to an integer type and back to float? This should floor all the values in the array, but that may not happen with this change. Do you think it would be a good idea to perform this optimization only for non-destructive casting operations?

We can limit this optimization to casts between integer types or floating point types. This way it behaves like C++ types.

Alternatively, we could keep the current behavior and allow destructive casts and expect the user to use functions like floor or ceil to get the same behavior.
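For illustration (plain C++, no ArrayFire involved): a float-to-int-to-float round trip drops the fractional part, so treating such a cast pair as a no-op would give different results than actually executing the casts.

#include <iostream>

int main() {
    double x = 3.7;
    // Round-trip through an integer type: the fractional part is dropped.
    double roundTripped = static_cast<double>(static_cast<int>(x));

    // Prints "3.7 -> 3": the cast pair is not a no-op, so optimizing it
    // away would silently change results.
    std::cout << x << " -> " << roundTripped << "\n";
    return 0;
}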

@jacobkahn (Contributor, Author) commented Nov 21, 2021

@umar456, revisiting this after some time: I think that, for now, destructive casts that emulate floor/ceil operations probably aren't good candidates for this optimization. The casts that seem more interesting to optimize away are casts between similar types with different precisions (f16 <> f32 <> f64, u32 <> u64, etc.). While some of these casts are destructive, there isn't really another operation they emulate, and a user who casts f32 --> f16 --> f32 almost certainly isn't doing so to intentionally lose precision.
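For illustration (plain C++ with float/double standing in for f32/f64): even a same-category round trip through a narrower precision changes values, which is the kind of destructive-but-unintentional cast being discussed.

#include <iomanip>
#include <iostream>

int main() {
    double x = 0.1;  // not exactly representable in either precision
    // Narrow to float and widen back: the extra double precision is lost.
    double roundTripped = static_cast<double>(static_cast<float>(x));

    std::cout << std::setprecision(17)
              << x << "\n"             // 0.10000000000000001
              << roundTripped << "\n"; // 0.10000000149011612
    return 0;
}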

Thoughts?

jacobkahn and others added 3 commits Mar 31, 2022
The cast optimization removes the previous node in the AST but doesn't update
the returned Array's ready flag in case it is being replaced by a buffer. This
caused an error in the CUDA backend under certain scenarios.

Added additional tests and logging for testing and debugging