Pytorch Implementation of Feedforward ANN with Varying Inputs: RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
I'm trying to write a custom feedforward implementation that takes varying rows of the input and performs some operation on them.
For example, imagine the function f simply sums all the elements of an input tensor:
import torch

f = lambda x: torch.sum(x)  # sum across all dimensions, producing a scalar
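For instance, applied to a small tensor it collapses everything into one scalar:

print(f(torch.ones(2, 3)))  # tensor(6.)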
Now, the input tensor is an (n, m) matrix, and I want to map the function f over all the rows except the row under consideration. For example, here is a vanilla implementation that works:
d = []  # append the per-row values to d
my_tensor = torch.rand(3, 5, requires_grad=True)  # shape (n, m)
n = my_tensor.shape[0]  # n = 3 rows
indices = list(range(n))  # list of row indices
for i in range(n):  # loop through the indices
    curr_range = indices[:i] + indices[i+1:]  # fetch all indices except the current one
    d.append(f(my_tensor[curr_range]))  # sum over all elements excluding row i
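For reference, at this point d is a plain Python list of n scalar tensors, and each entry is still attached to the autograd graph:

print(len(d))        # 3
print(d[0].shape)    # torch.Size([])  (0-dim scalar)
print(d[0].grad_fn)  # <SumBackward0 object at ...>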
This produces an (n, 1) matrix, which is what I want. The problem is that PyTorch cannot auto-differentiate over this, and I'm getting errors about a lack of grad because I have non-primitive Torch operations:
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
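The question doesn't show how d is turned into the final (n, 1) tensor, but for context, here is a minimal sketch of the distinction that typically triggers exactly this error: converting the values with torch.tensor / .item() detaches them from the graph, while torch.stack keeps the graph intact (the torch.tensor step is an assumption, not part of the code above):

detached = torch.tensor([v.item() for v in d])  # .item() strips autograd history
print(detached.grad_fn)  # None -> calling backward() from here raises the error above

attached = torch.stack(d).unsqueeze(1)  # shape (n, 1), graph intact
attached.sum().backward()  # works; gradients flow back to my_tensor
print(my_tensor.grad.shape)  # torch.Size([3, 5])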
Source: https://stackoverflow.com/questions/71084276/pytorch-implementation-of-feedforward-ann-with-varying-inputs-runtimeerror-ele