Deep CCA with more customisation

This example demonstrates some of the more advanced functionality available when combining DCCA with pytorch-lightning, including custom optimizers, learning-rate schedulers, and encoder architectures.

import numpy as np
import pytorch_lightning as pl
from torch import optim
from torch.utils.data import Subset

from cca_zoo.data import Split_MNIST_Dataset
from cca_zoo.deepmodels import DCCA, CCALightning, get_dataloaders, architectures

n_train = 500
n_val = 100
train_dataset = Split_MNIST_Dataset(mnist_type="MNIST", train=True)
# carve a small train/validation split out of the full training set
val_dataset = Subset(train_dataset, np.arange(n_train, n_train + n_val))
train_dataset = Subset(train_dataset, np.arange(n_train))
train_loader, val_loader = get_dataloaders(train_dataset, val_dataset)

# The number of latent dimensions across models
latent_dims = 2
# number of epochs for deep models
epochs = 10

# Encoder architectures: each view is one half of a split MNIST image (392 features)
encoder_1 = architectures.Encoder(latent_dims=latent_dims, feature_size=392)
encoder_2 = architectures.Encoder(latent_dims=latent_dims, feature_size=392)
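
# A custom architecture can be any torch.nn.Module that maps a view to
# `latent_dims` outputs. A minimal sketch (the class name and layer sizes
# below are illustrative assumptions, not part of cca-zoo):
from torch import nn


class MLPEncoder(nn.Module):
    def __init__(self, latent_dims, feature_size):
        super().__init__()
        # a small MLP projecting the view down to the latent space
        self.layers = nn.Sequential(
            nn.Linear(feature_size, 256),
            nn.ReLU(),
            nn.Linear(256, latent_dims),
        )

    def forward(self, x):
        return self.layers(x)


# which could then replace the built-in encoders, e.g.
# encoder_1 = MLPEncoder(latent_dims=latent_dims, feature_size=392)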

# Deep CCA with a custom optimizer and learning-rate scheduler
dcca = DCCA(latent_dims=latent_dims, encoders=[encoder_1, encoder_2])
optimizer = optim.Adam(dcca.parameters(), lr=1e-3)
# anneal the learning rate with a cosine schedule (second argument is T_max)
scheduler = optim.lr_scheduler.CosineAnnealingLR(optimizer, 1)
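# Any scheduler from torch.optim.lr_scheduler can be swapped in here; for
# example, a step decay (illustrative alternative, not used below):
# scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=5, gamma=0.1)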
dcca = CCALightning(dcca, optimizer=optimizer, lr_scheduler=scheduler)
trainer = pl.Trainer(max_epochs=epochs, enable_checkpointing=False)
trainer.fit(dcca, train_loader, val_loader)

Out:

Downloading http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz to ../../data/MNIST/raw/train-images-idx3-ubyte.gz
Extracting ../../data/MNIST/raw/train-images-idx3-ubyte.gz to ../../data/MNIST/raw
Downloading http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz to ../../data/MNIST/raw/train-labels-idx1-ubyte.gz
Extracting ../../data/MNIST/raw/train-labels-idx1-ubyte.gz to ../../data/MNIST/raw
Downloading http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz to ../../data/MNIST/raw/t10k-images-idx3-ubyte.gz
Extracting ../../data/MNIST/raw/t10k-images-idx3-ubyte.gz to ../../data/MNIST/raw
Downloading http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz to ../../data/MNIST/raw/t10k-labels-idx1-ubyte.gz
Extracting ../../data/MNIST/raw/t10k-labels-idx1-ubyte.gz to ../../data/MNIST/raw


/home/docs/checkouts/readthedocs.org/user_builds/cca-zoo/envs/v1.10.6/lib/python3.7/site-packages/pytorch_lightning/trainer/data_loading.py:413: UserWarning: The number of training samples (1) is smaller than the logging interval Trainer(log_every_n_steps=50). Set a lower value for log_every_n_steps if you want to see logs for the training epoch.
  f"The number of training samples ({self.num_training_batches}) is smaller than the logging interval"

Epoch 0: 100%|##########| 2/2 [00:00<00:00, 50.42it/s, loss=-0.224, v_num=0]
Epoch 1: 100%|##########| 2/2 [00:00<00:00, 52.54it/s, loss=-0.702, v_num=0]
Epoch 2: 100%|##########| 2/2 [00:00<00:00, 52.11it/s, loss=-0.861, v_num=0]
Epoch 3: 100%|##########| 2/2 [00:00<00:00, 51.88it/s, loss=-1.04, v_num=0]
Epoch 4: 100%|##########| 2/2 [00:00<00:00, 51.78it/s, loss=-1.14, v_num=0]
Epoch 5: 100%|##########| 2/2 [00:00<00:00, 52.33it/s, loss=-1.23, v_num=0]
Epoch 6: 100%|##########| 2/2 [00:00<00:00, 51.49it/s, loss=-1.3, v_num=0]
Epoch 7: 100%|##########| 2/2 [00:00<00:00, 52.75it/s, loss=-1.35, v_num=0]
Epoch 8: 100%|##########| 2/2 [00:00<00:00, 52.88it/s, loss=-1.4, v_num=0]
Epoch 9: 100%|##########| 2/2 [00:00<00:00, 51.51it/s, loss=-1.44, v_num=0]
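Further customisation is possible through pytorch-lightning itself, for example adding callbacks to the Trainer. A minimal sketch using early stopping is shown below; note that the monitored metric name "val_loss" is an assumption, not confirmed from cca-zoo, so substitute whichever key CCALightning actually logs during validation (inspect trainer.logged_metrics after a validation run).

from pytorch_lightning.callbacks import EarlyStopping

# NOTE: "val_loss" is an assumed metric name; replace it with the key that
# CCALightning actually logs during validation.
early_stopping = EarlyStopping(monitor="val_loss", mode="min", patience=3)
trainer = pl.Trainer(
    max_epochs=epochs,
    enable_checkpointing=False,
    callbacks=[early_stopping],
)
trainer.fit(dcca, train_loader, val_loader)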

Total running time of the script: ( 0 minutes 3.965 seconds)

Gallery generated by Sphinx-Gallery