I have this code from the Keras "Timeseries classification with a Transformer model" example:
from tensorflow.keras import layers

def transformer_encoder(inputs, head_size, num_heads, ff_dim, dropout=0):
    # Attention and normalization
    x = layers.MultiHeadAttention(
        key_dim=head_size,
        num_heads=num_heads,
        dropout=dropout,
    )(inputs, inputs)
    x = layers.Dropout(dropout)(x)
    x = layers.LayerNormalization(epsilon=1e-6)(x)
    res = x + inputs

    # Feed-forward part
    x = layers.Conv1D(filters=ff_dim, kernel_size=1, activation="relu")(res)
    x = layers.Dropout(dropout)(x)
    x = layers.Conv1D(filters=inputs.shape[-1], kernel_size=1)(x)
    x = layers.LayerNormalization(epsilon=1e-6)(x)
    return x + res
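For context, `Conv1D` with `kernel_size=1` acts as a pointwise feed-forward layer over the 3-D `(batch, steps, features)` tensors this encoder passes around. A quick check with the shape from the error message below, `(None, 24, 4)` (the batch size of 2 and filter count of 16 here are just illustrative):

```python
import numpy as np
from tensorflow.keras import layers

# 3-D input: (batch, steps, features) -- the shape the encoder works with
x = np.random.rand(2, 24, 4).astype("float32")

# kernel_size=1 applies the same dense projection at every timestep
y = layers.Conv1D(filters=16, kernel_size=1, activation="relu")(x)
print(y.shape)  # (2, 24, 16)
```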
If I try to change the Conv1D layer to ConvLSTM1D, I get this error:

Input 0 of layer "conv_lstm1d_27" is incompatible with the layer: expected ndim=4, found ndim=3. Full shape received: (None, 24, 4)
I changed the shape of the dataset to (1354, 4, 24, 10), but that doesn't help. I can't understand what the issue with the shapes and dimensions is here.
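The mismatch the error describes is that `ConvLSTM1D` expects a 4-D input `(batch, time, rows, channels)`, i.e. a *sequence* of 1-D frames, while the encoder feeds it a 3-D `(batch, steps, features)` tensor. A minimal sketch of the shape it does accept, using the (1354, 4, 24, 10) layout from the question scaled down to a batch of 2 (the filter count of 8 and kernel size of 3 are illustrative assumptions):

```python
import numpy as np
from tensorflow.keras import layers

# 4-D input: (batch, time, rows, channels) -- a sequence of 4 one-dimensional
# frames, each 24 rows wide with 10 channels
x4 = np.random.rand(2, 4, 24, 10).astype("float32")

# With return_sequences=False (the default), ConvLSTM1D collapses the time
# axis and returns one feature map per sample: (batch, rows, filters)
out = layers.ConvLSTM1D(filters=8, kernel_size=3, padding="same")(x4)
print(out.shape)  # (2, 24, 8)
```

So reshaping the dataset alone is not enough: the tensor flowing into the layer *inside the model* is still 3-D, which is why the error reports `found ndim=3`.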
Source: https://stackoverflow.com/questions/73471864/keras-time-series-transformer