Deep CCA with more customisation

This example demonstrates some of the more advanced functionality available with DCCA and pytorch-lightning: explicit encoder construction, a custom optimizer, and a learning-rate scheduler.

import numpy as np
import pytorch_lightning as pl
from torch import optim
from torch.utils.data import Subset

from cca_zoo.data import Split_MNIST_Dataset
from cca_zoo.deepmodels import DCCA, CCALightning, get_dataloaders, architectures

n_train = 500
n_val = 100
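# Split_MNIST_Dataset provides two views of each MNIST digit (one half of the image per view, 392 features each)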
train_dataset = Split_MNIST_Dataset(mnist_type="MNIST", train=True)
val_dataset = Subset(train_dataset, np.arange(n_train, n_train + n_val))
train_dataset = Subset(train_dataset, np.arange(n_train))
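# get_dataloaders wraps the datasets in torch DataLoaders for use with the lightning Trainer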
train_loader, val_loader = get_dataloaders(train_dataset, val_dataset)

# The number of latent dimensions across models
latent_dims = 2
# Number of epochs for the deep models
epochs = 10

# One encoder per view, each mapping a 392-dimensional half-image to the latent space
encoder_1 = architectures.Encoder(latent_dims=latent_dims, feature_size=392)
encoder_2 = architectures.Encoder(latent_dims=latent_dims, feature_size=392)

# Deep CCA with a custom optimizer and learning-rate scheduler, wrapped for
# pytorch-lightning; a further customisation sketch follows the output below
dcca = DCCA(latent_dims=latent_dims, encoders=[encoder_1, encoder_2])
optimizer = optim.Adam(dcca.parameters(), lr=1e-3)
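# CosineAnnealingLR's second argument is T_max: the number of scheduler steps over which the learning rate anneals to its minimum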
scheduler = optim.lr_scheduler.CosineAnnealingLR(optimizer, 1)
dcca = CCALightning(dcca, optimizer=optimizer, lr_scheduler=scheduler)
trainer = pl.Trainer(max_epochs=epochs, enable_checkpointing=False)
trainer.fit(dcca, train_loader, val_loader)
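# trainer.fit runs the training loop, validating on val_loader after each epoch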

Out:

Downloading http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz to ../../data/MNIST/raw/train-images-idx3-ubyte.gz
Extracting ../../data/MNIST/raw/train-images-idx3-ubyte.gz to ../../data/MNIST/raw
Downloading http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz to ../../data/MNIST/raw/train-labels-idx1-ubyte.gz
Extracting ../../data/MNIST/raw/train-labels-idx1-ubyte.gz to ../../data/MNIST/raw
Downloading http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz to ../../data/MNIST/raw/t10k-images-idx3-ubyte.gz
Extracting ../../data/MNIST/raw/t10k-images-idx3-ubyte.gz to ../../data/MNIST/raw
Downloading http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz to ../../data/MNIST/raw/t10k-labels-idx1-ubyte.gz
Extracting ../../data/MNIST/raw/t10k-labels-idx1-ubyte.gz to ../../data/MNIST/raw

/home/docs/checkouts/readthedocs.org/user_builds/cca-zoo/envs/v1.10.4/lib/python3.7/site-packages/pytorch_lightning/trainer/data_loading.py:408: UserWarning: The number of training samples (1) is smaller than the logging interval Trainer(log_every_n_steps=50). Set a lower value for log_every_n_steps if you want to see logs for the training epoch.
  f"The number of training samples ({self.num_training_batches}) is smaller than the logging interval"

Epoch 0: 100%|##########| 2/2 [00:00<00:00, 51.87it/s, loss=-0.146, v_num=0]
Epoch 1: 100%|##########| 2/2 [00:00<00:00, 52.34it/s, loss=-0.596, v_num=0]
Epoch 2: 100%|##########| 2/2 [00:00<00:00, 52.48it/s, loss=-0.746, v_num=0]
Epoch 3: 100%|##########| 2/2 [00:00<00:00, 53.01it/s, loss=-0.913, v_num=0]
Epoch 4: 100%|##########| 2/2 [00:00<00:00, 52.20it/s, loss=-1.01, v_num=0]
Epoch 5: 100%|##########| 2/2 [00:00<00:00, 52.66it/s, loss=-1.11, v_num=0]
Epoch 6: 100%|##########| 2/2 [00:00<00:00, 51.17it/s, loss=-1.17, v_num=0]
Epoch 7: 100%|##########| 2/2 [00:00<00:00, 53.03it/s, loss=-1.24, v_num=0]
Epoch 8: 100%|##########| 2/2 [00:00<00:00, 51.83it/s, loss=-1.28, v_num=0]
Epoch 9: 100%|##########| 2/2 [00:00<00:00, 51.92it/s, loss=-1.33, v_num=0]
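
The Adam optimizer and cosine-annealing scheduler above are just one configuration. As a minimal sketch of further customisation, the snippet below swaps in SGD with momentum and a step-decay schedule, and replaces the stock encoders with a small custom MLP. It assumes that DCCA accepts any torch.nn.Module mapping a 392-dimensional view to latent_dims outputs and that CCALightning accepts any torch optimizer/scheduler pair; both assumptions are consistent with the calls above but are not guaranteed by the API, and MLPEncoder is illustrative rather than part of cca_zoo.

import torch.nn as nn


class MLPEncoder(nn.Module):
    # Hypothetical custom encoder: any module with this input/output shape is
    # assumed (not guaranteed) to be usable in place of architectures.Encoder
    def __init__(self, feature_size=392, latent_dims=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feature_size, 256),
            nn.ReLU(),
            nn.Linear(256, latent_dims),
        )

    def forward(self, x):
        return self.net(x)


dcca_custom = DCCA(latent_dims=latent_dims, encoders=[MLPEncoder(), MLPEncoder()])
# SGD with momentum and a step decay in place of Adam + cosine annealing
optimizer = optim.SGD(dcca_custom.parameters(), lr=1e-2, momentum=0.9)
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=5, gamma=0.1)
dcca_custom = CCALightning(dcca_custom, optimizer=optimizer, lr_scheduler=scheduler)
trainer = pl.Trainer(max_epochs=epochs, enable_checkpointing=False)
trainer.fit(dcca_custom, train_loader, val_loader)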

Total running time of the script: ( 0 minutes 3.923 seconds)

Gallery generated by Sphinx-Gallery