I am basically running the code from Francois Chollet's Deep Learning with Python, chapter 11. It is a binary sentiment classification task: for each sentence the label is 0 or 1. After running the model as in the book, I try to make a prediction on one of the "validation" sentences. The full code is a public Kaggle notebook that can be found here: https://www.kaggle.com/louisbunuel/deep-learning-with-python It is based on the notebook here: https://github.com/fchollet/deep-learning-with-python-notebooks/blob/master/chapter11_part02_sequence-models.ipynb
The only thing I added is my "extraction" of a tokenized sentence from the tokenized TensorFlow dataset so that I can see an example of an output. I was expecting a single number from 0 to 1 (a probability), but instead I get an array of numbers from 0 to 1, one for each word in the sentence. In other words, it looks as if the model assigns a label not to each sentence but to each word.
Can anybody explain what I am doing wrong? Is it my way of "extracting" a sentence from the TensorFlow dataset?
My "addition" to the code is this part. After the model has run, I take out a sentence like this:
ds = int_val_ds.take(1)  # int_val_ds is the dataset, already vectorized to integer token ids
for sentence, label in ds:  # each element is a (sentence, label) batch
    print(sentence.shape, label)
>> (32, 600) tf.Tensor([1 1 1 0 1 0 0 1 1 1 0 1 1 1 1 0 1 1 0 0 1 1 1 0 0 0 0 0 1 0 0 0], shape=(32,), dtype=int32)
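To check what I am actually feeding the model, I sketched the shapes with plain NumPy (a toy stand-in for the real batch, not the actual IMDB data): plain indexing drops the batch dimension, while slicing keeps it.

```python
import numpy as np

# Hypothetical stand-in for one batch from int_val_ds:
# 32 sequences of 600 integer token ids each.
sentence = np.zeros((32, 600), dtype=np.int64)

# Plain indexing drops the batch dimension...
print(sentence[2].shape)    # (600,)

# ...while slicing keeps a batch axis of size 1.
print(sentence[2:3].shape)  # (1, 600)
```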
So it's a batch of 32 sentences with 32 corresponding labels. If I look at the shape of one element:
sentence[2].shape
>> TensorShape([600])
If I type
model.predict(sentence[2])
>> array([[0.49958456],
[0.50042397],
[0.50184965],
[0.4992085 ],...
[0.50077164]], dtype=float32)
an array with 600 elements. I was expecting a single number between 0 and 1. What went wrong?
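My current guess is that a rank-1 input of shape (600,) is being treated as a batch of 600 one-token samples, which would explain the 600 outputs. If that is right, adding a leading batch axis should give a single (1, 600) input and a single probability. A minimal NumPy sketch of the reshape I have in mind (toy shapes only, no real model):

```python
import numpy as np

# One tokenized sentence, shape (600,), as pulled out of the batch above.
seq = np.zeros(600, dtype=np.int64)

# Add a leading batch axis so predict() sees one sample of length 600.
batched = np.expand_dims(seq, axis=0)
print(batched.shape)  # (1, 600)

# model.predict(batched) should then return a single probability
# (an array of shape (1, 1)) instead of 600 per-token values.
```

Is that the correct fix, or is my way of extracting the sentence itself wrong?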
Source: https://stackoverflow.com/questions/70825749/nlp-model-for-binary-classification-outputs-a-class-for-each-word