1 vote
1 answer
70 views

Jax / Flax potential tracing issue

I'm currently using Flax for neural network implementations. My model takes two inputs: x and θ. It first processes x through an LSTM, then concatenates the LSTM's output with θ — or more precisely, ...
Dan Leonte
0 votes
0 answers
34 views

Signing xla macros using certificate stored in Azure key vault with HSM

We have been fetching the signing certificate from Azure Key Vault and adding it to the local Windows store, which allows us to sign macros in the .xla file through Excel. var ...
vamshi krishna
0 votes
0 answers
35 views

XLA PJRT plugin on Mac reveals only 4 CPUs

On my macOS M3 machine, I have compiled pjrt_c_api_cpu_plugin.so and I'm using it with JAX. My MacBook has 12 CPUs, but with a simple Python call to jax.devices(), the PJRT plugin reveals just 4 ...
lordav
0 votes
1 answer
63 views

Does TensorFlow or XLA provide a Python API to read and parse the dumped MHLO MLIR module?

I turned on XLA when running TensorFlow, and to further optimize the fused kernels I added export XLA_FLAGS="--xla_dump_to=/tmp/xla_dump" and got the dumped IRs, including lmhlo....
StayFoolish
1 vote
1 answer
123 views

How to compile tensorflow serving (tensorflow/xla) to have llvm/mlir as shared objects rather than statically included in the binary?

I am trying to compile the tensorflow serving project and I would like to have llvm/mlir compiled as shared objects. The project is tensorflow serving -> tensorflow -> xla and compiles to a ...
Capybara
0 votes
0 answers
39 views

flax.linen.Conv unexpected behavior

I'm experiencing unexpected output when using flax.linen.Conv. The output from the conv layer has very odd statistics: the mean is around 100-110, and sometimes it is NaN. I tested the same against TensorFlow ...
Nithish M
1 vote
0 answers
175 views

Hard to understand the semantics of the stablehlo.scatter

Trying to understand the semantics of https://github.com/openxla/stablehlo/blob/main/docs/spec.md#scatter. Many of the attributes have no explanation or definition. Can someone please explain how ...
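For intuition, the core of scatter can be modeled in plain Python: each update is combined into a copy of the operand at the position named by the scatter indices, using the update computation. This is a deliberately simplified 1-D sketch that ignores update windows, batching dimensions, and index vector dimensions from the spec:

```python
# Simplified 1-D model of stablehlo.scatter: operand, indices and updates
# are flat lists, and update_computation combines the old and new values.
# NOT the full StableHLO semantics -- windows and batching dims are omitted,
# and out-of-bounds indices are simply skipped here for simplicity.

def scatter_1d(operand, scatter_indices, updates, update_computation):
    result = list(operand)                      # start from a copy of the operand
    for idx, upd in zip(scatter_indices, updates):
        if 0 <= idx < len(result):
            result[idx] = update_computation(result[idx], upd)
    return result

# Combine by addition, as in a typical "scatter-add"; index 1 is hit twice.
print(scatter_1d([0, 0, 0, 0], [1, 3, 1], [10, 20, 30], lambda a, b: a + b))
# → [0, 40, 0, 20]
```

The update computation is what makes repeated indices well-defined: with `lambda a, b: b` the last update would simply overwrite.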
user3755060
0 votes
0 answers
56 views

How XLA loads a saved model and gets tensor information

Context: I want to use XLA (the one within the tensorflow repo) to load a model and input data, and get the output. HloRunner executes a model via Literal: https://github.com/tensorflow/tensorflow/blob/...
Tinyden
2 votes
1 answer
298 views

Looking for a tool to calculate FLOPs of an XLA-HLO computational graph

I'm looking for a tool to calculate the FLOPs of a given XLA-HLO computational graph. Does anyone know of HLO cost models or analytical models that print the FLOPs of each operator node for ...
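XLA itself ships an HloCostAnalysis pass (C++), and in recent JAX a lowered-and-compiled function exposes a cost_analysis() result that includes a "flops" estimate, though the exact API is worth checking against your version. Purely as an illustration of what such an analytical cost model does, here is a toy Python sketch over a hypothetical flattened op list; the op names and FLOP formulas are assumptions, not XLA's real cost model:

```python
# Toy FLOP counter over a hypothetical HLO-like graph. The per-op formulas
# below are illustrative assumptions, not XLA's actual HloCostAnalysis.
from math import prod

def node_flops(node):
    if node["op"] == "dot":                   # (m, k) x (k, n) matmul
        m, k = node["lhs_shape"]
        _, n = node["rhs_shape"]
        return 2 * m * k * n                  # one mul + one add per MAC
    if node["op"] in ("add", "multiply"):     # elementwise ops
        return prod(node["shape"])
    return 0                                  # unknown ops counted as free

def graph_flops(nodes):
    return sum(node_flops(n) for n in nodes)

graph = [
    {"op": "dot", "lhs_shape": (4, 8), "rhs_shape": (8, 16)},
    {"op": "add", "shape": (4, 16)},
]
print(graph_flops(graph))  # → 1088  (2*4*8*16 + 4*16)
```

A real tool would parse the dumped HLO text into such nodes first; the counting step itself is this simple.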
Sandy Yu
2 votes
0 answers
416 views

No registered 'RaggedTensorToTensor' OpKernel for XLA_GPU_JIT devices

In short, I have the problem of getting the following error when running a keras_cv/retina_net based object-detection model: "No registered 'RaggedTensorToTensor' OpKernel for XLA_GPU_JIT devices ...
user4711
4 votes
0 answers
3k views

Is there a way to suppress STDERR messages from TensorFlow and XLA

When I run my Python script, I get the messages below: WARNING: All log messages before absl::InitializeLog() is called are written to STDERR I0000 00:00:1701341037.989729 1542352 device_compiler.h:...
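Two approaches commonly help here. TF_CPP_MIN_LOG_LEVEL is a real TensorFlow environment variable, but it only works if set before `import tensorflow`; as a cruder stdlib fallback, Python-level stderr can be silenced with a redirect. A sketch (the demo print stands in for a noisy log line):

```python
import contextlib
import io
import os
import sys

# Must be set BEFORE `import tensorflow` to take effect; "3" filters
# INFO/WARNING/ERROR output from TensorFlow's C++ logging.
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "3"

@contextlib.contextmanager
def quiet_stderr():
    """Swallow anything written to Python's sys.stderr inside the block."""
    sink = io.StringIO()
    with contextlib.redirect_stderr(sink):
        yield sink

with quiet_stderr() as captured:
    print("noisy log line", file=sys.stderr)   # stand-in for library chatter

print("captured:", captured.getvalue().strip())  # → captured: noisy log line
```

Caveat: contextlib.redirect_stderr only intercepts Python-level sys.stderr writes; messages emitted by C++ code directly to file descriptor 2 (like the absl/XLA lines above) need fd-level redirection, e.g. with os.dup2.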
xxx yyy
0 votes
1 answer
258 views

Is it okay to use python operators for tensorflow tensors?

TL;DR Is (a and b) equivalent to tf.logical_and(a, b) in terms of optimization and performance? (a and b are TensorFlow tensors) Details: I use Python with TensorFlow. My first priority is to make the ...
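They are not equivalent: Python's `and` cannot be overloaded, so it calls `__bool__` on the left operand and short-circuits, whereas tf.logical_and is an elementwise graph op. On symbolic (graph-mode) tensors the truth value is undefined and `and` raises. A pure-Python sketch that mimics this without TensorFlow (SymbolicTensor and logical_and here are stand-ins, not TF APIs):

```python
class SymbolicTensor:
    """Mimics a graph-mode tensor whose truth value is undefined."""
    def __init__(self, values):
        self.values = values

    def __bool__(self):
        raise TypeError("truth value of a symbolic tensor is undefined")

def logical_and(a, b):
    # Elementwise AND, roughly what tf.logical_and does
    return SymbolicTensor([x and y for x, y in zip(a.values, b.values)])

a = SymbolicTensor([True, False])
b = SymbolicTensor([True, True])

print(logical_and(a, b).values)  # → [True, False]

try:
    a and b                      # `and` invokes a.__bool__() -> raises
except TypeError as e:
    print("error:", e)
```

Even in eager mode, where `__bool__` can succeed for scalar tensors, `and` short-circuits in Python rather than building a fused elementwise op.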
Daniel S.
2 votes
1 answer
2k views

Why does tensorflow.function (without jit_compile) speed up forward passes of a Keras model?

XLA can be enabled using model = tf.function(model, jit_compile=True). Some model types are faster that way, some are slower. So far, so good. But why can model = tf.function(model, jit_compile=None) ...
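Part of the answer is that tf.function helps even without XLA: it traces the Python function into a graph once per input signature and caches that trace, so later calls skip most Python-level overhead. A toy stdlib analogy of trace-then-cache (function_like and its signature rule are invented for illustration, not TensorFlow's actual machinery):

```python
# Toy analogy of tf.function's trace cache: the expensive "tracing" step
# runs once per input signature; subsequent calls reuse the cached trace.

def function_like(make_trace):
    cache = {}
    stats = {"traces": 0}

    def wrapped(x):
        signature = type(x).__name__          # crude stand-in for TF's input signature
        if signature not in cache:
            stats["traces"] += 1
            cache[signature] = make_trace()   # "trace" once per signature
        return cache[signature](x)

    wrapped.stats = stats
    return wrapped

f = function_like(lambda: (lambda x: x * 2))
print(f(3), f(4), f(5))    # → 6 8 10
print(f.stats["traces"])   # → 1  (traced once, reused twice)
```

With jit_compile=None the graph still runs through the regular TF executor (no XLA compilation), but this caching plus graph-level execution already explains a speedup over eager mode.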
Tobias Hermann
0 votes
0 answers
58 views

Passing user defined variables to xlam file

How can I pass user-defined variables to xlam files? I have written a macro and saved on an xlam file. I reference the xlam in an xlsm file and I call the macro from the xlsm. I want to pass a user-...
andreamordini
0 votes
1 answer
952 views

Enable multiprocessing on pytorch XLA for TPU VM

I'm fairly new to this and have little to no experience. I had a notebook running PyTorch that I wanted to run on a Google Cloud TPU VM. Machine specs: Ubuntu, TPU v2-8, pt-2.0. I should have 8 cores....
Adham Ali
