Neural network architecture

This set of functions provides the functionality to define the neural architectures of autoencoders by connecting layers of units.
Create an input layer
Create a fully-connected neural layer
Create a variational block of layers
Create a convolutional layer
Create an output layer
Dropout layer
Custom layer from Keras
Add layers to a network / join networks
Access subnetworks of a network
Draw a neural network
Layer wrapper constructor
Sequential network constructor
Coercion to ruta_network
Get the index of the encoding
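A minimal sketch of how these constructors combine, assuming ruta's documented layer builders (`input()`, `dense()`, `output()`) and the `+` operator for joining layers; the activation arguments shown are illustrative:

```r
library(ruta)

# Layers are joined left to right into a sequential ruta_network:
# input layer -> 36-unit encoding layer -> sigmoid output layer.
net <- input() + dense(36, activation = "tanh") + output(activation = "sigmoid")

print(net)  # inspect the network
plot(net)   # draw it
```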
Autoencoder and variants

These functions create and customize autoencoder learners.
Create an autoencoder learner
Create a contractive autoencoder
Create a denoising autoencoder
Create a robust autoencoder
Sparse autoencoder
Build a variational autoencoder
Add weight decay to any autoencoder
Weight decay
Add contractive behavior to any autoencoder
Add denoising behavior to any autoencoder
Add robust behavior to any autoencoder
Add sparsity regularization to an autoencoder
Detect whether an autoencoder is contractive
Detect whether an autoencoder is denoising
Detect whether an autoencoder is robust
Detect whether an autoencoder is sparse
Detect whether an autoencoder is variational
Sparsity regularization
Create an autoencoder learner
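For instance (a sketch; the `autoencoder_*` and `make_*` names follow ruta's documented naming convention, and any arguments beyond the network and loss are illustrative):

```r
library(ruta)

net <- input() + dense(36) + output("sigmoid")

# Basic autoencoder optimizing a Keras loss
basic <- autoencoder(net, loss = "binary_crossentropy")

# Variants wrap the same network with additional behavior
sparse_ae <- autoencoder_sparse(net, loss = "binary_crossentropy")
robust_ae <- autoencoder_robust(net)

# Behaviors can also be layered onto an existing learner
sparse_too <- make_sparse(basic)
```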
Loss functions

These functions define different objective functions which an autoencoder may optimize. Along with these, one may use any loss defined in Keras.
Contractive loss
Correntropy loss
Variational loss
Coercion to ruta_loss
Model training

The following functions train an autoencoder with input data.
Automatically compute an encoding of a data matrix
Apply filters
Train a learner object with data
Detect trained models
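A training sketch, assuming the `train()` generic and the one-step `autoencode()` helper described above; the epoch count and encoding dimension are illustrative:

```r
library(ruta)

x <- as.matrix(iris[, 1:4])  # numeric matrix, one instance per row

# Build a learner, then fit it to the data
learner <- autoencoder(input() + dense(2) + output("linear"),
                       loss = "mean_squared_error")
model <- train(learner, x, epochs = 40)

# Alternatively, compute an encoding of a data matrix in one step
codes <- autoencode(x, 2)
```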
Model evaluation

Evaluation metrics for trained models.
Evaluation metrics
Custom evaluation metrics
Tasks for trained models

The following functions can be applied once an autoencoder has been trained, in order to transform data from the input space to the latent space and vice versa.
Retrieve encoding of data
Retrieve decoding of encoded data
Retrieve reconstructions for input data
Generate samples from a generative model
Save and load Ruta models
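For example (a sketch assuming a trained model `model` and the original data matrix `x`; `encode()`, `decode()` and `reconstruct()` correspond to the entries above):

```r
# Input space -> latent space
codes <- encode(model, x)

# Latent space -> input space
decoded <- decode(model, codes)

# Encode followed by decode in a single call
recons <- reconstruct(model, x)

# For generative models such as variational autoencoders,
# new instances can additionally be sampled from the latent space.
```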
Noise generators

These objects act as input filters which inject noise into the training inputs when fitting denoising autoencoders.
Noise generator
Additive Cauchy noise
Additive Gaussian noise
Filter to add ones noise
Filter to add salt-and-pepper noise
Filter to add zero noise
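A sketch of how a noise filter plugs into a denoising autoencoder; the `noise_type` argument is an assumption based on the entries above, so check the individual reference pages for exact signatures:

```r
library(ruta)

net <- input() + dense(36) + output("sigmoid")

# A denoising autoencoder corrupts each training batch with a noise
# filter and learns to reconstruct the clean input. Here the filter is
# selected by name; it may also be built explicitly with one of the
# noise constructors listed above (e.g. a Gaussian noise generator).
learner <- autoencoder_denoising(net, loss = "binary_crossentropy",
                                 noise_type = "gaussian")
```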
Keras conversions

These are internal functions which convert Ruta wrapper objects into Keras objects and functions.
Convert a Ruta object into Keras objects and functions
Extract Keras models from an autoencoder wrapper
Get a Keras generator from a data filter
Convert Ruta layers into Keras layers
Obtain a Keras block of layers for the variational autoencoder
Obtain a Keras loss
Build a Keras network
Translate sparsity regularization to a Keras regularizer
Obtain a Keras weight decay
Other methods

Some methods for R generics.

Inspect Ruta objects