spyx.axn#

Module Contents#

Functions#

heaviside(x)

custom([bwd, fwd])

This function serves as the activation function for SNNs, allowing custom definitions of both the surrogate gradient for the backward pass and the forward spiking function.

tanh([k])

Hyperbolic Tangent activation.

boxcar([width, height])

Boxcar activation.

triangular([k])

Triangular activation inspired by Esser et al. https://arxiv.org/abs/1603.08270

arctan([k])

This function implements the Arctangent surrogate gradient activation function for a spiking neuron.

superspike([k])

This function implements the SuperSpike surrogate gradient activation function for a spiking neuron.

spyx.axn.heaviside(x)[source]#
spyx.axn.custom(bwd=lambda x: ..., fwd=lambda x: ...)[source]#

This function serves as the activation function for SNNs, allowing custom definitions of both the surrogate gradient for the backward pass and a replacement for the Heaviside function in the forward pass, such as a sigmoid relaxation.

It is assumed that the input to this layer has already had its threshold subtracted within the neuron model dynamics.

The default behavior is a Heaviside forward activation with a straight-through estimator surrogate gradient.

Bwd:

Function that calculates the gradient to be used in the backwards pass.

Fwd:

Forward activation/spiking function. Default is the heaviside function centered at 0.

Returns:

A JIT compiled activation function comprised of the specified forward and backward functions.
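The documented defaults can be sketched directly with JAX's custom-VJP machinery. This is an illustrative sketch, not the spyx implementation: the `spike` name and the `>=` threshold comparison are assumptions.

```python
import jax
import jax.numpy as jnp

# Sketch of the documented defaults: a Heaviside forward pass paired
# with a straight-through-estimator backward pass, built on
# jax.custom_vjp. The `spike` name and the `>=` comparison are
# assumptions, not part of the spyx API.
@jax.custom_vjp
def spike(x):
    return jnp.where(x >= 0, 1.0, 0.0)  # Heaviside centered at 0

def _spike_fwd(x):
    return spike(x), None  # straight-through needs no residuals

def _spike_bwd(_, g):
    return (g,)  # pass the upstream gradient through unchanged

spike.defvjp(_spike_fwd, _spike_bwd)
```

With this wiring, the forward pass still emits binary spikes while `jax.grad(spike)` returns the upstream gradient unchanged, which is exactly the straight-through estimator.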

spyx.axn.tanh(k=1)[source]#

Hyperbolic Tangent activation.

\[4 / (e^{-kx} + e^{kx})^2\]
K:

Value for scaling the slope of the surrogate gradient.

Returns:

JIT compiled tanh surrogate gradient function.
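The formula above can be evaluated in plain NumPy as a sketch; the function name is illustrative and the real spyx version is JIT compiled.

```python
import numpy as np

# Plain-NumPy sketch of the tanh surrogate gradient formula above.
# The name `tanh_surrogate` is illustrative, not the spyx implementation.
def tanh_surrogate(k=1):
    def grad(x):
        return 4.0 / (np.exp(-k * x) + np.exp(k * x)) ** 2
    return grad
```

The expression equals sech²(kx) = 1 − tanh²(kx): it peaks at 1 when x = 0 and decays smoothly on both sides, with larger k narrowing the peak.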

spyx.axn.boxcar(width=2, height=0.5)[source]#

Boxcar activation.

Width:

Total width of non-zero gradient flow, centered on 0.

Height:

Value for gradient within the specified window.

Returns:

JIT compiled boxcar surrogate gradient function.
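The boxcar window described above can be sketched in plain NumPy. The inclusive `<=` at the window edges is an assumption, and the name is illustrative.

```python
import numpy as np

# Plain-NumPy sketch of the boxcar surrogate gradient: a constant
# `height` inside a window of total `width` centered on 0, zero outside.
# The inclusive `<=` at the edges is an assumption.
def boxcar_surrogate(width=2, height=0.5):
    half = width / 2  # window is centered on 0
    def grad(x):
        return np.where(np.abs(x) <= half, height, 0.0)
    return grad
```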

spyx.axn.triangular(k=2)[source]#

Triangular activation inspired by Esser et al. https://arxiv.org/abs/1603.08270

\[\max(0, 1-|kx|)\]
K:

scale factor

Returns:

JIT compiled triangular surrogate gradient function.
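The formula above maps directly to plain NumPy; this sketch uses an illustrative name, not the spyx implementation.

```python
import numpy as np

# Plain-NumPy sketch of the triangular surrogate gradient
# max(0, 1 - |kx|): a tent peaking at 1 when x = 0, reaching zero
# at |x| = 1/k.
def triangular_surrogate(k=2):
    def grad(x):
        return np.maximum(0.0, 1.0 - np.abs(k * x))
    return grad
```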

spyx.axn.arctan(k=2)[source]#

This function implements the Arctangent surrogate gradient activation function for a spiking neuron.

The Arctangent function returns a value between -pi/2 and pi/2 for inputs in the range of -Infinity to Infinity. It is often used in the context of spiking neurons because it provides a smooth approximation to the step function that is differentiable everywhere, which is a requirement for gradient-based optimization methods.

K:

A scaling factor that can be used to adjust the steepness of the Arctangent function. Default is 2.

Returns:

JIT compiled arctangent-derived surrogate gradient function.
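The docstring above gives no closed form for the gradient, so the sketch below uses one common arctangent-derived choice: the derivative of arctan(kx), normalized to peak at 1. Both the formula and the name are assumptions, not the spyx implementation.

```python
import numpy as np

# Hypothetical sketch of an arctangent-derived surrogate gradient:
# the derivative of arctan(kx) is k / (1 + (kx)^2); dividing by k
# normalizes the peak at x = 0 to 1. The exact form used by spyx is
# not stated in the docstring, so this is an assumption.
def arctan_surrogate(k=2):
    def grad(x):
        return 1.0 / (1.0 + (k * x) ** 2)
    return grad
```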

spyx.axn.superspike(k=25)[source]#

This function implements the SuperSpike surrogate gradient activation function for a spiking neuron.

The SuperSpike function is defined as 1/(1+k|U|)^2, where U is the input to the function and k is a scaling factor. It returns a value between 0 and 1 for inputs in the range of -Infinity to Infinity.

It is often used in the context of spiking neurons because it provides a smooth approximation to the step function that is differentiable everywhere, which is a requirement for gradient-based optimization methods.

It is a fast approximation of the sigmoid function, adapted from:

F. Zenke, S. Ganguli (2018). SuperSpike: Supervised Learning in Multilayer Spiking Neural Networks. Neural Computation, 30(6), pp. 1514-1541.

K:

A scaling factor that can be used to adjust the steepness of the SuperSpike function. Default is 25.

Returns:

JIT compiled SuperSpike surrogate gradient function.
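The definition 1/(1 + k|U|)² above can be sketched in plain NumPy; the name is illustrative, not the spyx implementation.

```python
import numpy as np

# Plain-NumPy sketch of the SuperSpike surrogate gradient
# 1 / (1 + k|U|)^2: symmetric around 0, peaking at 1, with the
# default k = 25 giving a sharply concentrated gradient window.
def superspike_surrogate(k=25):
    def grad(u):
        return 1.0 / (1.0 + k * np.abs(u)) ** 2
    return grad
```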