
All Questions

0 votes · 1 answer · 952 views

Enable multiprocessing on pytorch XLA for TPU VM

I'm fairly new to this and have little to no experience. I had a notebook running PyTorch that I wanted to run on a Google Cloud TPU VM. Machine specs: Ubuntu, TPU v2-8, PyTorch 2.0. I should have 8 cores....
asked by Adham Ali
0 votes · 0 answers · 106 views

XLA rng-bit-generator takes too much memory

XLA allocates 4 GB of memory to this tensor, and its size seems to scale with the batch size. That doesn't make sense to me: it doesn't seem to be part of the model graph that needs to be stored in HBM. I ...
asked by iordanis (1,284 reputation)
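A first diagnostic step for a question like this is to confirm how much HBM is actually in use from inside the program. This sketch assumes `torch_xla` is installed on the TPU host; elsewhere the guard leaves `info` as `None`.

```python
# Hedged sketch: query device memory from torch_xla to see how much HBM the
# compiled program (including the rng-bit-generator state) is occupying.
try:
    import torch_xla.core.xla_model as xm
    HAVE_XLA = True
except ImportError:  # not on a TPU host
    HAVE_XLA = False

info = None
if HAVE_XLA:
    device = xm.xla_device()
    info = xm.get_memory_info(device)  # on TPU, a dict with kb_free / kb_total
    print(info)
```

Comparing this reading across batch sizes would confirm (or rule out) the claimed scaling of the RNG state with batch size.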
1 vote · 1 answer · 447 views

PyTorch model saved from TPU run on CPU

I found an interesting model, a question generator, but I can't run it. I get an error: Traceback (most recent call last): File "qg.py", line 5, in <module> model = AutoModelWithLMHead....
asked by exelents
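The usual cause of this kind of failure is that a checkpoint saved on an XLA device contains tensors that a CPU-only machine can't materialize. A common fix, sketched below with a stand-in checkpoint (the filename and tensor are illustrative, not from the question), is to load with `map_location="cpu"`; on the saving side, `xm.save()` moves tensors to CPU before serializing.

```python
import os
import tempfile

import torch

# Stand-in for a saved checkpoint. On a TPU the author of the model would
# ideally have used xm.save(model.state_dict(), path), which moves tensors
# to CPU before writing.
path = os.path.join(tempfile.gettempdir(), "ckpt.pt")
state = {"weight": torch.zeros(2, 2)}
torch.save(state, path)

# On a CPU-only machine: map_location="cpu" remaps any device tensors so the
# checkpoint loads without the original device being present.
loaded = torch.load(path, map_location="cpu")
print(loaded["weight"].shape)
```

With Hugging Face's `from_pretrained`, the same remapping happens internally when the checkpoint's tensors are CPU tensors, which is why re-saving the model via CPU (or `xm.save`) is the typical remedy.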