I need to implement a layer in TensorFlow for a dataset of size N where each sample has a set of M independent features (each feature is represented by a tensor of dimension L). I want to train M dense layers in parallel, then concatenate the output tensors.
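For concreteness, the input is a tensor of shape (N, M, L); the sizes used below are hypothetical placeholders, not values from the original question:

import tensorflow as tf

# Hypothetical sizes: N=1000 samples, M=4 independent features, L=8 dims per feature.
inputs = tf.random.normal((1000, 4, 8))  # shape (N, M, L)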
I could implement such a layer using a for loop, as below:
import tensorflow as tf

class MyParallelDenseLayer(tf.keras.layers.Layer):
    def __init__(self, dense_kwargs, **kwargs):
        super().__init__(**kwargs)
        self.dense_kwargs = dense_kwargs

    def build(self, input_shape):
        self.N, self.M, self.L = input_shape
        # One Dense layer per feature, all sharing the same configuration.
        self.list_dense_layers = [tf.keras.layers.Dense(**self.dense_kwargs) for _ in range(self.M)]
        super().build(input_shape)

    def call(self, inputs):
        # Apply the i-th Dense layer to the i-th feature slice, then concatenate the results.
        parallel_output = [self.list_dense_layers[i](inputs[:, i]) for i in range(self.M)]
        return tf.keras.layers.Concatenate()(parallel_output)
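A minimal way to exercise the layer (the sizes and dense_kwargs here are hypothetical, only to show the expected shapes):

layer = MyParallelDenseLayer(dense_kwargs={"units": 16, "activation": "relu"})
x = tf.random.normal((32, 4, 8))  # (batch, M=4, L=8)
y = layer(x)
print(y.shape)  # (32, 64) == (batch, M * units)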
But the for loop in the 'call' function makes my layer extremely slow. Is there a faster way to implement this layer?
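For reference, one vectorized alternative (my own sketch, not part of the original question) is to pack all M kernels into a single weight tensor of shape (M, L, units) and replace the Python loop with a single tf.einsum; whether this reproduces the per-feature Dense layers exactly depends on the dense_kwargs used:

import tensorflow as tf

class ParallelDense(tf.keras.layers.Layer):
    # Hypothetical vectorized variant: one einsum instead of M separate Dense calls.
    def __init__(self, units, activation=None, **kwargs):
        super().__init__(**kwargs)
        self.units = units
        self.activation = tf.keras.activations.get(activation)

    def build(self, input_shape):
        _, m, l = input_shape
        # One kernel and bias per feature, stored in single tensors.
        self.kernel = self.add_weight(name="kernel", shape=(m, l, self.units),
                                      initializer="glorot_uniform")
        self.bias = self.add_weight(name="bias", shape=(m, self.units),
                                    initializer="zeros")
        super().build(input_shape)

    def call(self, inputs):
        # (batch, M, L) x (M, L, units) -> (batch, M, units), then flatten to (batch, M * units).
        out = tf.einsum("bml,mlu->bmu", inputs, self.kernel) + self.bias
        out = self.activation(out)
        return tf.reshape(out, (tf.shape(out)[0], -1))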
source https://stackoverflow.com/questions/72777856/efficiently-use-dense-layers-in-parallel