Ruta allows the neural architecture of an autoencoder to be created in several ways. The easiest is to provide an integer vector describing the number of units in each hidden layer of the encoder:
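A minimal sketch of this usage, assuming a symmetric architecture where the middle layer acts as the encoding; the exact layer sizes here are illustrative:

```r
library(ruta)

# Hidden layers of 256, 36 and 256 units; the 36-unit layer is the
# encoding. Input and output sizes are left undetermined for now.
my_ae <- autoencoder(c(256, 36, 256))
```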

The input and output layers have an undetermined size until training data is provided and the autoencoder is converted into a Keras model.
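For instance, the dimensions can be inferred when the model is trained; in this sketch `x_train` is an assumed numeric matrix whose number of columns determines the input and output sizes:

```r
# Training triggers the conversion to a Keras model; the input layer
# takes its size from the number of columns of x_train.
model <- train(my_ae, x_train, epochs = 40)
```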

By using separate functions for each layer type, one may also define the activation applied at the output of each layer:
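A sketch of this layer-wise style, assuming the network is built by combining `input()`, `dense()` and `output()` with the `+` operator; the chosen activations are illustrative:

```r
# Each dense layer specifies its number of units and activation;
# input() and output() mark the ends of the network.
net <- input() +
  dense(256, "relu") +
  dense(36, "tanh") +
  dense(256, "relu") +
  output("sigmoid")

my_ae <- autoencoder(net)
```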

Other layer types, such as dropout, are also available, and further Keras layers can be introduced via the layer_keras function:
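A hedged sketch combining these layers; the use of batch normalization through `layer_keras` and the dropout rate are assumptions for illustration:

```r
# dropout() regularizes the input; layer_keras() wraps a Keras layer
# by name, here keras::layer_batch_normalization.
net <- input() +
  dropout(0.2) +
  dense(36, "relu") +
  layer_keras("batch_normalization") +
  output("sigmoid")

my_ae <- autoencoder(net)
```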