Tutorials on implementing a few sequence-to-sequence (seq2seq) models with PyTorch and TorchText.
Topics: tutorial, pytorch, transformer, lstm, gru, rnn, seq2seq, attention, neural-machine-translation, sequence-to-sequence, encoder-decoder, pytorch-tutorial, pytorch-tutorials, encoder-decoder-model, pytorch-implmention, pytorch-nlp, torchtext, pytorch-implementation, pytorch-seq2seq, cnn-seq2seq

Updated Aug 4, 2021 · Jupyter Notebook
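For orientation, the snippet below is a minimal sketch of the kind of encoder-decoder seq2seq model the tutorials implement. It assumes a GRU-based encoder and decoder with full teacher forcing; the class names (`Encoder`, `Decoder`, `Seq2Seq`) and dimensions are illustrative, not the repository's exact code.

```python
# Minimal GRU encoder-decoder sketch (illustrative; not the tutorials' exact classes).
import torch
import torch.nn as nn


class Encoder(nn.Module):
    def __init__(self, input_dim, emb_dim, hid_dim):
        super().__init__()
        self.embedding = nn.Embedding(input_dim, emb_dim)
        self.rnn = nn.GRU(emb_dim, hid_dim)

    def forward(self, src):
        # src: [src_len, batch_size]
        embedded = self.embedding(src)      # [src_len, batch_size, emb_dim]
        _, hidden = self.rnn(embedded)      # hidden: [1, batch_size, hid_dim]
        return hidden


class Decoder(nn.Module):
    def __init__(self, output_dim, emb_dim, hid_dim):
        super().__init__()
        self.embedding = nn.Embedding(output_dim, emb_dim)
        self.rnn = nn.GRU(emb_dim, hid_dim)
        self.fc_out = nn.Linear(hid_dim, output_dim)

    def forward(self, trg_token, hidden):
        # trg_token: [batch_size] -> [1, batch_size]
        embedded = self.embedding(trg_token.unsqueeze(0))
        output, hidden = self.rnn(embedded, hidden)
        prediction = self.fc_out(output.squeeze(0))  # [batch_size, output_dim]
        return prediction, hidden


class Seq2Seq(nn.Module):
    def __init__(self, encoder, decoder):
        super().__init__()
        self.encoder = encoder
        self.decoder = decoder

    def forward(self, src, trg):
        # Encode the source, then decode one target token at a time,
        # feeding in the ground-truth token (full teacher forcing for brevity).
        hidden = self.encoder(src)
        outputs = []
        for t in range(trg.shape[0] - 1):
            prediction, hidden = self.decoder(trg[t], hidden)
            outputs.append(prediction)
        return torch.stack(outputs)  # [trg_len - 1, batch_size, output_dim]
```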
Currently, all the VED/VAE models work only with single-channel data (e.g. grayscale images, but not color images). Extending them to multiple channels should be straightforward if there is a use case.
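As a rough illustration of that extension, the sketch below parameterises the channel count of a convolutional VAE encoder. Everything here is an assumption: `ConvVAEEncoder`, `in_channels`, and the 28x28 input size are hypothetical names and shapes, not the models' actual code.

```python
# Hypothetical sketch: lifting the single-channel restriction by making the
# channel count a constructor argument (assumes 28x28 inputs, e.g. MNIST-sized).
import torch
import torch.nn as nn


class ConvVAEEncoder(nn.Module):
    def __init__(self, in_channels=1, latent_dim=32):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=4, stride=2, padding=1),  # 28x28 -> 14x14
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2, padding=1),           # 14x14 -> 7x7
            nn.ReLU(),
            nn.Flatten(),
        )
        self.fc_mu = nn.Linear(64 * 7 * 7, latent_dim)
        self.fc_logvar = nn.Linear(64 * 7 * 7, latent_dim)

    def forward(self, x):
        # x: [batch, in_channels, 28, 28]; in_channels=1 for grayscale, 3 for RGB
        h = self.conv(x)
        return self.fc_mu(h), self.fc_logvar(h)


# Grayscale (current behaviour) vs. RGB (the proposed extension)
gray_encoder = ConvVAEEncoder(in_channels=1)
rgb_encoder = ConvVAEEncoder(in_channels=3)
mu, logvar = rgb_encoder(torch.randn(8, 3, 28, 28))  # works once in_channels matches the data
```

The mirror change on the decoder side would set the final transposed convolution's output channels to the same value, so the reconstruction has the same shape as the input.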