I have a torch model that simply contains a Conv1d (intended to implement an STFT). The model works fine in torch, including after scripting with torch.jit and reloading via torch.jit.load in Python.
When I attempt to use the model on iOS with libtorch via this React Native library (https://www.npmjs.com/package/react-native-pytorch-core), I do not get the intended output.
The first output frame is correct (i.e. it equals the dot product of the first 2048 samples with each kernel), but the remaining time steps, which should correspond to the kernel sliding along the signal in time, are all identical to the first one!
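For context, the Conv1d trick works because each STFT frame is the dot product of one window of the signal with a fixed Fourier basis, and the stride plays the role of the hop length. A hedged sketch of the idea (real part only, no analysis window, illustrative names):

```python
import math
import torch

# Build the real part of the DFT basis: one kernel per frequency bin.
n_fft, hop = 2048, 512
k = torch.arange(n_fft // 2 + 1).unsqueeze(1)        # frequency bins, [1025, 1]
n = torch.arange(n_fft).unsqueeze(0)                 # sample index,   [1, 2048]
real_basis = torch.cos(2 * math.pi * k * n / n_fft)  # [1025, 2048]

# A Conv1d with stride=hop computes these dot products at every hop.
conv = torch.nn.Conv1d(1, n_fft // 2 + 1, kernel_size=n_fft,
                       stride=hop, bias=False)
with torch.no_grad():
    conv.weight.copy_(real_basis.unsqueeze(1))       # [out, in=1, kernel]

x = torch.randn(1, 1, 96000)
spec_real = conv(x)  # [1, 1025, 184]: real STFT coefficients per frame
```

Frame 0 of the output equals `real_basis @ x[0, 0, :2048]`, frame 1 uses samples 512..2559, and so on.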
In Python / torch...
import torch

class Model(torch.nn.Module):
    def __init__(self):
        super().__init__()
        n_fft = 2048
        hop_length = 512
        self.conv = torch.nn.Conv1d(in_channels=1, out_channels=n_fft // 2 + 1,
                                    kernel_size=n_fft, stride=hop_length,
                                    padding=0, dilation=1, groups=1, bias=False)

    def forward(self, x):
        return self.conv(x)

model = Model()
torch.jit.script(model)._save_for_lite_interpreter('model.ptl')
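For reference, a quick eager-mode sanity check confirms the sliding behaves as expected in plain PyTorch (self-contained sketch mirroring the model above; note that eager Conv1d returns [batch, bin, time], not [batch, time, bin]):

```python
import torch

# Same shape of convolution as the model above, random weights.
conv = torch.nn.Conv1d(1, 1025, kernel_size=2048, stride=512, bias=False)

# Ramp input, batched to [1, 1, 96000] as Conv1d expects (N, C, L).
x = torch.linspace(0, 1, 96000).reshape(1, 1, -1)

y = conv(x)
print(y.shape)  # torch.Size([1, 1025, 184]) -- [batch, bin, time]

# Frames at t=0 and t=1 come from different 2048-sample windows of the
# ramp, so with the kernel actually sliding they must differ:
print(torch.equal(y[0, :, 0], y[0, :, 1]))  # False
```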
At inference time, in React Native (TypeScript):
import { torch } from 'react-native-pytorch-core';

let x = torch.linspace(0, 1, 96000).unsqueeze(0);
model.forward(x).then((e) => {
  console.log(e.shape); // the correct shape: [1, 184, 1025] [batch, time, bin]
  let data = e.data();  // flat array; element (0, t, k) is at data[t * 1025 + k]
  data[0] === data[1];    // bins 0 and 1 at t=0: false, as expected
  data[0] === data[1025]; // bin 0 at t=0 vs t=1: true, for any time step.
                          // NOT expected, if the convolution kernel sliding is correct
});
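One way to narrow this down is to run the exported .ptl through the lite interpreter in Python (via the private helper torch.jit.mobile._load_for_lite_interpreter); if the frames differ there, the problem is more likely in the React Native bindings than in the exported model. A self-contained sketch:

```python
import torch
from torch.jit.mobile import _load_for_lite_interpreter

class Model(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = torch.nn.Conv1d(1, 1025, kernel_size=2048,
                                    stride=512, bias=False)

    def forward(self, x):
        return self.conv(x)

# Export exactly as in the question, then reload with the lite interpreter.
torch.jit.script(Model())._save_for_lite_interpreter('model.ptl')
lite = _load_for_lite_interpreter('model.ptl')

x = torch.linspace(0, 1, 96000).reshape(1, 1, -1)
y = lite(x)
print(y.shape)                              # [1, 1025, 184] here: [batch, bin, time]
print(torch.equal(y[0, :, 0], y[0, :, 1]))  # False if the sliding is intact
```

Note the layout the lite interpreter returns in Python ([batch, bin, time]) versus the [batch, time, bin] shape reported on iOS, which suggests checking how the bindings lay out e.data().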
source https://stackoverflow.com/questions/75553306/libtorch-conv1d-doesnt-operate-over-signal-length-dimension