I have a trained image classifier that I am trying to deploy through TensorFlow Serving in a Docker container. When I send a prediction request over REST from my Python code, I get a 404 response. The request looks like this:
import requests

# `data` holds the JSON request body built earlier
json_response = requests.post('http://localhost:8501/models/model_name:predict', data=data)
json_response.raise_for_status()
prediction = json_response.json()['predictions'][0]
print(prediction)
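For comparison, the TF Serving REST API documentation describes a versioned predict endpoint of the form /v1/models/{model_name}:predict (note the /v1/ prefix, which the request above omits) with a JSON body keyed by "instances". A minimal sketch of building such a request, assuming MobileNetV2's default 224x224x3 input shape and a dummy all-zero image:

```python
import json

# Per the TF Serving REST docs, the predict endpoint is versioned:
# http://<host>:8501/v1/models/<model_name>:predict
url = "http://localhost:8501/v1/models/model_name:predict"

# The body is JSON with an "instances" key holding a batch of inputs;
# a single dummy 224x224x3 image is assumed here (MobileNetV2's default).
dummy_image = [[[0.0, 0.0, 0.0]] * 224] * 224
data = json.dumps({"instances": [dummy_image]})

# Round-trip to confirm the payload shape before posting it
payload = json.loads(data)
print(len(payload["instances"]))     # batch of 1
print(len(payload["instances"][0]))  # 224 rows per image
```

Posting `data` to `url` (with a server actually running) should return a JSON body whose "predictions" key holds one score vector per instance.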
Upon inspection of the model information using curl, I get this in response:
<HTML><HEAD>
<TITLE>404 Not Found</TITLE>
</HEAD><BODY>
<H1>Not Found</H1>
</BODY></HTML>
I followed the instructions from the TensorFlow website on serving with Docker and had no errors during setup.
I believe it may have something to do with the way I saved my model: the examples in the tf_serving guide worked, but I used the SavedModel format as outlined in the TensorFlow documentation. The saving code is here:
import os
import tempfile

import tensorflow as tf

tmpdir = tempfile.mkdtemp()
save_path = os.path.join(tmpdir, "model_name/1/")
print(list(model.signatures.keys()))  # ["serving_default"]
infer = model.signatures["serving_default"]  # 5 classes
print(infer.structured_outputs)
tf.saved_model.save(model, save_path)
The print statements in the code above output:
['serving_default']
{'dense_2': TensorSpec(shape=(None, 5), dtype=tf.float32, name='dense_2')}
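One thing worth sanity-checking alongside the save code above: TF Serving loads a model from a base directory that contains numeric version subfolders, so the path mounted into the Docker container should be the directory that contains 1/, not 1/ itself. A small sketch of the expected layout, using placeholder files in place of a real model:

```python
import os
import tempfile

# Recreate the save path from the question: <tmp>/model_name/1/
tmpdir = tempfile.mkdtemp()
save_path = os.path.join(tmpdir, "model_name", "1")
os.makedirs(save_path)

# tf.saved_model.save() writes saved_model.pb plus a variables/ folder;
# empty placeholders stand in for them here.
open(os.path.join(save_path, "saved_model.pb"), "w").close()
os.makedirs(os.path.join(save_path, "variables"))

# The directory to mount as the model base path is the parent of "1/":
base_path = os.path.dirname(save_path)  # .../model_name
versions = sorted(d for d in os.listdir(base_path) if d.isdigit())
print(versions)  # ['1']
```

If the container is given a directory with no numeric version subfolder, TF Serving will not find a servable version of the model.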
In the TensorFlow SavedModel guide, the structured output is shown with the name "predictions", which may be part of the issue.
For reference, I am using transfer learning from a frozen MobileNetV2, with a few added layers.
I am fairly new to both TensorFlow and TensorFlow Serving, so please forgive me if this is a simple issue I may have missed.
Source: https://stackoverflow.com/questions/76030045/tf-served-model-over-docker-missing-model-information-returns-404-if-requested