I'm trying to find a way to accumulate losses in Keras. The following works (for a Gaussian mixture model loss, i.e. the negative log-likelihood) but is not very elegant:
def neg_log_likelihood(y, phis, mu, sigmasq):
    # Mixture density: weighted sum of the k Gaussian components.
    a = phis[:, 0] * gaussian_pdf(y, mu[:, 0*t:(0+1)*t], sigmasq[:, 0])
    for i in range(1, k):
        a += phis[:, i] * gaussian_pdf(y, mu[:, i*t:(i+1)*t], sigmasq[:, i])
    loss = tf.math.reduce_mean(-tf.math.log(a))
    return loss
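(For reference, the question leaves gaussian_pdf, k, t, and n undefined. A minimal sketch of what they presumably look like, assuming an isotropic Gaussian density per component with each component's mean occupying a slice of width t in mu:

import numpy as np
import tensorflow as tf

k = 3    # number of mixture components (assumed)
t = 2    # width of each component's mean slice in `mu` (assumed)
n = 32   # batch size (assumed)

def gaussian_pdf(y, mu, sigmasq):
    # Isotropic Gaussian density: y and mu have shape (n, t),
    # sigmasq has shape (n,); returns one density per sample, shape (n,).
    d = tf.cast(tf.shape(y)[-1], tf.float32)
    norm = tf.pow(2.0 * np.pi * sigmasq, -d / 2.0)
    return norm * tf.exp(
        -tf.reduce_sum(tf.square(y - mu), axis=-1) / (2.0 * sigmasq))

These names and shapes are guesses from context, not part of the original question.)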
I would rather create a new variable for the losses and accumulate everything there. E.g., I tried:
def neg_log_likelihood(y, phis, mu, sigmasq):
    # Accumulate the component densities into a Variable instead.
    losses = tf.Variable(np.zeros(n, dtype=np.float32))
    for i in range(k):
        losses.assign_add(phis[:, i] * gaussian_pdf(y, mu[:, i*t:(i+1)*t], sigmasq[:, i]))
    loss = tf.math.reduce_mean(-tf.math.log(losses))
    return loss
but this fails to produce gradients for some reason. I.e., when I call:
gradients_init = tape.gradient(loss, model.trainable_weights)
I get None.
Any way to overcome this?
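(A note for context: tf.Variable.assign_add is a state mutation rather than a differentiable op, so the tape loses the connection between the model weights and losses, which is presumably why the gradients come back as None. One sketch of an accumulator that stays on the tape, reusing the same gaussian_pdf, k, and t assumed above:

def neg_log_likelihood(y, phis, mu, sigmasq):
    # Accumulate into an ordinary Tensor: each `+` below is a regular
    # differentiable op that GradientTape records, unlike assign_add.
    mixture = tf.zeros_like(phis[:, 0])
    for i in range(k):
        mixture = mixture + phis[:, i] * gaussian_pdf(
            y, mu[:, i*t:(i+1)*t], sigmasq[:, i])
    return tf.math.reduce_mean(-tf.math.log(mixture))

Equivalently, the per-component terms could be collected in a Python list and summed with tf.add_n, or stacked with tf.stack and reduced with tf.reduce_sum; the key point is that the running sum remains a Tensor rather than a Variable mutated in place.)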
source https://stackoverflow.com/questions/70454120/adding-losses-in-keras