
Commit fa31b62

readme
1 parent 197e351 commit fa31b62

File tree

1 file changed (+2 -6 lines)


README.md

+2 -6
@@ -3,9 +3,9 @@ PyTorch 0.4.1 | Python 3.6.5
 
 Annotated implementations with comparative introductions for minimax, non-saturating, Wasserstein, Wasserstein gradient penalty, least squares, deep regret analytic, bounded equilibrium, relativistic, f-divergence, Fisher, and information generative adversarial networks (GANs), and standard, variational, and bounded information rate variational autoencoders (VAEs).
 
-Paper links are supplied at the beginning of each file with a short summary of the paper. See the src folder for files to run via terminal, or the notebooks folder for Jupyter notebook visualizations in your local browser. The main file changes can be seen in the ```train```, ```train_D```, and ```train_G``` methods of the Trainer class, although changes are not completely limited to these areas (e.g. Wasserstein GAN clamps weights in the train function, BEGAN returns multiple outputs from train_D, and fGAN slightly modifies the viz_loss function to indicate the method used in the title).
+Paper links are supplied at the beginning of each file with a short summary of the paper. See the src folder for files to run via terminal, or the notebooks folder for Jupyter notebook visualizations in your local browser. The main file changes can be seen in the [```train```](https://github.com/shayneobrien/generative-models/blob/master/src/ns_gan.py#L94-L170), [```train_D```](https://github.com/shayneobrien/generative-models/blob/master/src/ns_gan.py#L172-L194), and [```train_G```](https://github.com/shayneobrien/generative-models/blob/master/src/ns_gan.py#L196-L216) methods of the Trainer class, although changes are not completely limited to these areas (e.g. Wasserstein GAN clamps weights in the train function, BEGAN returns multiple outputs from train_D, and fGAN slightly modifies the viz_loss function to indicate the method used in the title).
 
-All code in this repository operates in a generative, unsupervised manner on binary (black-and-white) MNIST. The architectures are compatible with a variety of data types (1D, 2D, square 3D images). Plotting functions work with binary/RGB images. If a GPU is detected, the models use it; otherwise, they default to CPU. VAE Trainer classes contain methods to visualize latent space representations (see the ```make_all``` function).
+All code in this repository operates in a generative, unsupervised manner on binary (black-and-white) MNIST. The architectures are compatible with a variety of data types (1D, 2D, square 3D images). Plotting functions work with binary/RGB images. If a GPU is detected, the models use it; otherwise, they default to CPU. VAE Trainer classes contain methods to visualize latent space representations (see the [```make_all```](https://github.com/shayneobrien/generative-models/blob/master/src/vae.py#L333-L343) function).
 
 # Usage
 To initialize an environment:
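As a rough illustration of the Trainer layout described in the hunk above, a pure-Python skeleton might look like the sketch below. The class and method names follow the README; the loss arithmetic is a hypothetical placeholder, not the repo's actual model code:

```python
class Trainer:
    """Minimal sketch of the per-file Trainer layout: train() loops over
    batches, calling train_D and train_G once each. The real files replace
    the placeholder losses below with model-specific ones (NS-GAN, WGAN,
    LSGAN, ...), and some also add extra steps (e.g. weight clamping)."""

    def __init__(self):
        self.d_losses, self.g_losses = [], []

    def train_D(self, images):
        # Discriminator step; returns a scalar loss (placeholder arithmetic).
        return sum(images) / len(images)

    def train_G(self, images):
        # Generator step; returns a scalar loss (placeholder arithmetic).
        return 1.0 - sum(images) / len(images)

    def train(self, batches):
        # One pass over the data: alternate D and G updates per batch.
        for images in batches:
            self.d_losses.append(self.train_D(images))
            self.g_losses.append(self.train_G(images))
        return self.d_losses, self.g_losses
```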
@@ -60,11 +60,7 @@ def train_D(self, images):
 def train_G(self, images):
     ...
     G_loss = 0.50 * torch.mean((DG_score - 1.)**2)
-<<<<<<< HEAD
 
-=======
-
->>>>>>> 3ee8a9bec7df140d731fc204b5fe80f851cdb428
     return G_loss
 ```
 
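The least-squares generator loss kept by the hunk above, `G_loss = 0.50 * torch.mean((DG_score - 1.)**2)`, can be checked with a dependency-free sketch. Here `dg_scores` is a hypothetical plain list standing in for the tensor of discriminator scores on generated samples:

```python
def ls_gan_generator_loss(dg_scores):
    """Least-squares GAN generator loss: 0.5 * mean((D(G(z)) - 1)^2).

    Mirrors the torch expression in the diff above, using a plain list
    of discriminator scores instead of a tensor.
    """
    return 0.50 * sum((s - 1.0) ** 2 for s in dg_scores) / len(dg_scores)
```

A perfect generator (all scores equal to 1.0, the "real" label) drives this loss to zero, which is why the LSGAN update pushes D(G(z)) toward 1.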
