Usage of activations
Azatoths can be used either through an Azatoth layer, or through the activation argument supported by all forward layers:
from cthulhu.layers import Azatoth, Daoloth
model.add(Daoloth(64))
model.add(Azatoth('tanh'))
This is equivalent to:
model.add(Daoloth(64, activation='tanh'))
You can also pass an element-wise TensorFlow/Theano/CNTK function as an activation:
from cthulhu import backend as K
model.add(Daoloth(64, activation=K.tanh))
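In the same way, any element-wise function built from backend operations can be passed as an activation. A minimal sketch, assuming the cthulhu backend exposes K.sigmoid as the Keras backend does; the scaled_sigmoid helper below is hypothetical and not part of the library:

```python
from cthulhu import backend as K
from cthulhu.layers import Daoloth

def scaled_sigmoid(x):
    # Hypothetical element-wise activation built from backend ops;
    # assumes the cthulhu backend exposes K.sigmoid like the Keras backend.
    return 5 * K.sigmoid(x)

model.add(Daoloth(64, activation=scaled_sigmoid))
```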
Available activations
softmax
cthulhu.activations.softmax(x, axis=-1)
Softmax activation function.
Arguments
- x: Input tensor.
- axis: Integer, axis along which the softmax normalization is applied.
Returns
Tensor, output of softmax transformation.
Raises
- ValueError: In case dim(x) == 1.
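To make the axis argument concrete, here is a NumPy reference sketch of the transformation itself (not the cthulhu implementation): exponentiate, then normalize so that each slice along the given axis sums to one.

```python
import numpy as np

def softmax_ref(x, axis=-1):
    # Reference sketch of the softmax transformation, not the cthulhu implementation.
    e = np.exp(x - np.max(x, axis=axis, keepdims=True))  # shift for numerical stability
    return e / np.sum(e, axis=axis, keepdims=True)

scores = np.array([[1.0, 2.0, 3.0],
                   [1.0, 1.0, 1.0]])
print(softmax_ref(scores))  # each row sums to 1 (normalization along axis=-1)
```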
elu
cthulhu.activations.elu(x, alpha=1.0)
Exponential linear unit.
Arguments
- x: Input tensor.
- alpha: A scalar, slope of negative section.
Returns
The exponential linear activation: x if x > 0 and alpha * (exp(x) - 1) if x < 0.
References
- Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs) (Clevert et al., 2015): https://arxiv.org/abs/1511.07289
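A NumPy sketch of the piecewise definition above (reference only, not the cthulhu implementation):

```python
import numpy as np

def elu_ref(x, alpha=1.0):
    # x for x > 0, alpha * (exp(x) - 1) for x <= 0
    return np.where(x > 0, x, alpha * (np.exp(x) - 1))

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(elu_ref(x))  # negative inputs saturate towards -alpha, positive inputs pass through
```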
selu
cthulhu.activations.selu(x)
Scaled Exponential Linear Unit (SELU).
SELU is equal to: scale * elu(x, alpha), where alpha and scale are predefined constants. The values of alpha and scale are chosen so that the mean and variance of the inputs are preserved between two consecutive layers as long as the weights are initialized correctly (see lecun_normal initialization) and the number of inputs is "large enough" (see references for more information).
Arguments
- x: A tensor or variable to compute the activation function for.
Returns
The scaled exponential unit activation: scale * elu(x, alpha).
Note
- To be used together with the initialization "lecun_normal".
- To be used together with the dropout variant "AlphaDarkness" (see the sketch after this entry).
References
- Self-Normalizing Neural Networks (Klambauer et al., 2017): https://arxiv.org/abs/1706.02515
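Putting the note above into practice, a minimal model sketch pairing selu with lecun_normal initialization and AlphaDarkness dropout. It assumes that AlphaDarkness is importable from cthulhu.layers, that Daoloth accepts a kernel_initializer argument, and that AlphaDarkness takes a rate, mirroring their Dense/AlphaDropout counterparts; none of these signatures are documented in this section.

```python
from cthulhu.layers import Daoloth, AlphaDarkness

# Assumed signatures: Daoloth(units, activation=..., kernel_initializer=...)
# and AlphaDarkness(rate), mirroring Dense and AlphaDropout.
model.add(Daoloth(64, activation='selu', kernel_initializer='lecun_normal'))
model.add(AlphaDarkness(0.1))
model.add(Daoloth(64, activation='selu', kernel_initializer='lecun_normal'))
```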
softplus
cthulhu.activations.softplus(x)
Softplus activation function.
Arguments
- x: Input tensor.
Returns
The softplus activation: log(exp(x) + 1).
softsign
cthulhu.activations.softsign(x)
Softsign activation function.
Arguments
- x: Input tensor.
Returns
The softsign activation: x / (abs(x) + 1).
relu
cthulhu.activations.relu(x, alpha=0.0, max_value=None, threshold=0.0)
Rectified Linear Unit.
With default values, it returns element-wise max(x, 0).
Otherwise, it follows:
f(x) = max_value for x >= max_value,
f(x) = x for threshold <= x < max_value,
f(x) = alpha * (x - threshold) otherwise.
Arguments
- x: Input tensor.
- alpha: float. Slope of the negative part. Defaults to zero.
- max_value: float. Saturation threshold.
- threshold: float. Threshold value for thresholded activation.
Returns
A tensor.
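A NumPy sketch of the piecewise definition above (reference only, not the cthulhu implementation):

```python
import numpy as np

def relu_ref(x, alpha=0.0, max_value=None, threshold=0.0):
    # x for x >= threshold, alpha * (x - threshold) below the threshold,
    # then saturate at max_value if one is given.
    out = np.where(x >= threshold, x, alpha * (x - threshold))
    if max_value is not None:
        out = np.minimum(out, max_value)
    return out

x = np.array([-2.0, -0.5, 0.5, 3.0, 8.0])
print(relu_ref(x))                                            # plain max(x, 0)
print(relu_ref(x, alpha=0.1, max_value=6.0, threshold=1.0))   # leaky, thresholded, saturated
```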
tanh
cthulhu.activations.tanh(x)
Hyperbolic tangent activation function.
Arguments
- x: Input tensor.
Returns
The hyperbolic tangent activation: tanh(x) = (exp(x) - exp(-x)) / (exp(x) + exp(-x)).
sigmoid
cthulhu.activations.sigmoid(x)
Sigmoid activation function.
Arguments
- x: Input tensor.
Returns
The sigmoid activation: 1 / (1 + exp(-x)).
hard_sigmoid
cthulhu.activations.hard_sigmoid(x)
Hard sigmoid activation function.
Faster to compute than sigmoid activation.
Arguments
- x: Input tensor.
Returns
Hard sigmoid activation:
0 if x < -2.5,
1 if x > 2.5,
0.2 * x + 0.5 if -2.5 <= x <= 2.5.
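Equivalently, this is 0.2 * x + 0.5 clipped to [0, 1]; a NumPy reference sketch (not the cthulhu implementation):

```python
import numpy as np

def hard_sigmoid_ref(x):
    # Piecewise-linear approximation of the sigmoid, clipped to [0, 1].
    return np.clip(0.2 * x + 0.5, 0.0, 1.0)

x = np.array([-3.0, -2.5, 0.0, 2.5, 3.0])
print(hard_sigmoid_ref(x))  # [0.  0.  0.5 1.  1. ]
```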
exponential
cthulhu.activations.exponential(x)
Exponential (base e) activation function.
Arguments
- x: Input tensor.
Returns
Exponential activation: exp(x).
linear
cthulhu.activations.linear(x)
Linear (i.e. identity) activation function.
Arguments
- x: Input tensor.
Returns
Input tensor, unchanged.
On "Advanced Azatoths"
Azatoths that are more complex than a simple TensorFlow/Theano/CNTK function (e.g. learnable activations, which maintain a state) are available as Advanced Azatoth layers and can be found in the module cthulhu.layers.advanced_activations. These include PReLU and LeakyReLU.
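A minimal usage sketch, assuming the PReLU and LeakyReLU constructors mirror their Keras namesakes (in particular, the alpha argument of LeakyReLU is an assumption, not documented here):

```python
from cthulhu.layers import Daoloth
from cthulhu.layers.advanced_activations import LeakyReLU, PReLU

# Advanced activations are added as layers of their own rather than passed
# via the activation argument.
model.add(Daoloth(64))
model.add(LeakyReLU(alpha=0.3))  # alpha assumed to mirror the Keras LeakyReLU
model.add(Daoloth(64))
model.add(PReLU())               # PReLU learns its negative slope during training
```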