It should be possible. Could you give a more specific description, e.g. in what environment you are calling it, how you are calling it and what problem you ran into, together with the source code and the error message?
If you want to save in Keras's own h5 format, you should use model.save rather than tf.saved_model.save. For Keras's own h5 format, see https://tf.wiki/zh_hans/deployment/export.html#keras-jinpeng
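A minimal sketch of the difference (assuming model is a compiled tf.keras model):

import tensorflow as tf

# Keras HDF5 format: a single .h5 file containing the architecture,
# weights and (for compiled models) the training configuration
model.save('model.h5')
restored = tf.keras.models.load_model('model.h5')

# SavedModel format: a directory, intended mainly for deployment
tf.saved_model.save(model, 'saved/1')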
Thanks for the reply, it's solved now. The error was caused by mismatched TensorFlow and OpenCV versions; the combination that works for me is VS2017 + TF 2.3 + OpenCV 4.4.
On Windows, changing tf.saved_model.save(model, "saved/1") to tf.saved_model.save(model, "saved\1") worked for me, tested myself.
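Note that in a plain Python string "\1" is parsed as an escape sequence, so if you go the backslash route it is safer to use a raw string or to build the path with os.path.join. A quick sketch (assuming model is the trained model in question):

import os
import tensorflow as tf

export_dir = r"saved\1"                  # raw string: the backslash is kept literally
export_dir = os.path.join("saved", "1")  # or portable across Windows and Linux
tf.saved_model.save(model, export_dir)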
Could anyone tell me why I keep getting TypeError: a bytes-like object is required, not 'str' when calling model.save()? It has been tormenting me for ages and I can't find the cause.
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.models import Model
import numpy as np
import pandas as pd
import os

# simple fully connected regression network: 7 inputs, 3 outputs
readings = tf.keras.Input(shape=(7,))
x = keras.layers.Dense(8, activation="linear", kernel_initializer="glorot_uniform")(readings)
x = keras.layers.Dense(8, activation="relu", kernel_initializer="glorot_uniform")(x)
x = keras.layers.Dense(8, activation="relu", kernel_initializer="glorot_uniform")(x)
x = keras.layers.Dense(8, activation="relu", kernel_initializer="glorot_uniform")(x)
x = keras.layers.Dense(8, activation="relu", kernel_initializer="glorot_uniform")(x)
benzene = keras.layers.Dense(3, activation="linear", kernel_initializer="glorot_uniform")(x)
model = Model(inputs=[readings], outputs=[benzene])
model.compile(loss='mse', optimizer='adam', metrics=['accuracy'])

NUM_EPOCHS = 8000
BATCH_SIZE = 200
folder = "/Users/HRainX/Desktop"
Xtrain = pd.read_csv(os.path.join(folder, 'Xtrain.csv'))
Ytrain = pd.read_csv(os.path.join(folder, 'Ytrain.csv'))
history = model.fit(Xtrain, Ytrain,
                    batch_size=BATCH_SIZE,
                    epochs=NUM_EPOCHS,
                    validation_split=0.2)
model.save('model.h5')
That is the complete code.
Have a look at h5py 3.0 incompatibility with TensorFlow model serialization (multiple versions) · Issue #1732 · h5py/h5py · GitHub. I'd suggest creating a fresh conda environment and doing a clean TensorFlow install, to rule out interference from any previously installed packages. Alternatively, try running it on Colab and see whether the problem reproduces.
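If it does turn out to be the h5py 3.0 issue from that thread, the workaround commonly reported there was to pin h5py below 3.0 in the new environment, e.g. pip install "h5py<3.0" (or h5py==2.10.0), until a TensorFlow release that handles h5py 3.x.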
Thanks for the pointer. It turned out to be a configuration file problem; following another blogger's post at https://blog.csdn.net/qq_44644355/article/details/109411624 I edited the config file and saving works now.
Awesome, great job!
After adding @tf.function above the call function (to make SavedModel export easier), training fails with ValueError: train() should not modify its Python input arguments. Check if it modifies any lists or dicts passed as arguments. Modifying a copy is allowed. How can I fix this?
Hi, can a model saved in the SavedModel format be loaded and trained further? Also, if I want to save a model and then continue training it, which article should I refer to? Thanks.
Hi, after finishing training the cats_vs_dogs image classifier, I save the model with tf.saved_model.save(model, "save/1") and load it back with model = tf.saved_model.load("save/1"). When I then try to compute the accuracy, I get:

Cause: 'arguments' object has no attribute 'posonlyargs'
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
Traceback (most recent call last):
  File "test12.py", line 85, in <module>
    print(mymodel.metrics_names)
AttributeError: '_UserObject' object has no attribute 'metrics_names'

What is the cause of this?
Here is my source code, thanks:
import tensorflow as tf
import os
import numpy as np

num_epochs = 10
batch_size = 32
learning_rate = 0.001
data_dir = './fastai-datasets-cats-vs-dogs-2'
train_cats_dir = data_dir + '/train/cats/'
train_dogs_dir = data_dir + '/train/dogs/'
test_cats_dir = data_dir + '/valid/cats/'
test_dogs_dir = data_dir + '/valid/dogs/'

def my_map(filename, label):
    # read, decode and normalize one image
    image_string = tf.io.read_file(filename)
    image_decoded = tf.image.decode_jpeg(image_string)
    my_image = tf.image.resize(image_decoded, [256, 256]) / 255.0
    return my_image, label

if __name__ == '__main__':
    train_cats_filenames = tf.constant([train_cats_dir + filename for filename in os.listdir(train_cats_dir)])
    train_dogs_filenames = tf.constant([train_dogs_dir + filename for filename in os.listdir(train_dogs_dir)])
    train_filenames = tf.concat([train_cats_filenames, train_dogs_filenames], axis=-1)
    train_labels = tf.concat([tf.zeros(train_cats_filenames.shape, dtype=tf.int32),
                              tf.ones(train_dogs_filenames.shape, dtype=tf.int32)], axis=-1)
    train_datas = tf.data.Dataset.from_tensor_slices((train_filenames, train_labels))
    train_datas = train_datas.map(
        map_func=my_map,
        num_parallel_calls=tf.data.experimental.AUTOTUNE
    )
    train_datas = train_datas.shuffle(buffer_size=23000)
    train_datas = train_datas.batch(batch_size)
    train_datas = train_datas.prefetch(buffer_size=tf.data.experimental.AUTOTUNE)
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(256, 256, 3)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 5, activation='relu'),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation='relu'),
        tf.keras.layers.Dense(2, activation='softmax')
    ])
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=learning_rate),
        loss=tf.keras.losses.sparse_categorical_crossentropy,
        metrics=[tf.keras.metrics.sparse_categorical_accuracy]
    )
    model.fit(train_datas, epochs=num_epochs)
    # tf.saved_model.save(model, "saved/1")
    test_cat_filenames = tf.constant([test_cats_dir + filename for filename in os.listdir(test_cats_dir)])
    test_dog_filenames = tf.constant([test_dogs_dir + filename for filename in os.listdir(test_dogs_dir)])
    test_filenames = tf.concat([test_cat_filenames, test_dog_filenames], axis=-1)
    test_labels = tf.concat([tf.zeros(test_cat_filenames.shape, dtype=tf.int32),
                             tf.ones(test_dog_filenames.shape, dtype=tf.int32)], axis=-1)
    test_dataset = tf.data.Dataset.from_tensor_slices((test_filenames, test_labels))
    test_dataset = test_dataset.map(my_map)
    test_dataset = test_dataset.batch(batch_size)
    print(model.metrics_names)
    print(model.evaluate(test_dataset))
    print('------------------------------------------------------------------------------')
    tf.saved_model.save(model, "save/1")
    mymodel = tf.saved_model.load('save/1')
    test_cat_filenames = tf.constant([test_cats_dir + filename for filename in os.listdir(test_cats_dir)])
    test_dog_filenames = tf.constant([test_dogs_dir + filename for filename in os.listdir(test_dogs_dir)])
    test_filenames = tf.concat([test_cat_filenames, test_dog_filenames], axis=-1)
    test_labels = tf.concat([tf.zeros(test_cat_filenames.shape, dtype=tf.int32),
                             tf.ones(test_dog_filenames.shape, dtype=tf.int32)], axis=-1)
    test_dataset = tf.data.Dataset.from_tensor_slices((test_filenames, test_labels))
    test_dataset = test_dataset.map(my_map)
    test_dataset = test_dataset.batch(batch_size)
    # these two lines fail: tf.saved_model.load returns a plain SavedModel
    # object, not a Keras model, so Keras-only attributes such as
    # metrics_names and evaluate are not available on it
    print(mymodel.metrics_names)
    print(mymodel.evaluate(test_dataset))
Please provide a minimal reproducible example. As a general rule, adding @tf.function needs to be done very carefully.
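For reference, a minimal sketch of the usual pattern (the names here are illustrative, not taken from the asker's code): a tf.function should take tensors in and return tensors out, without mutating any Python lists or dicts passed as arguments:

import tensorflow as tf

@tf.function
def train_step(model, optimizer, x, y):
    # inputs are tensors; nothing passed in is modified in place
    with tf.GradientTape() as tape:
        y_pred = model(x)
        loss = tf.reduce_mean(
            tf.keras.losses.sparse_categorical_crossentropy(y, y_pred))
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss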
If you want to load a model and continue training it, I'd still recommend Checkpoint; it is much more convenient for that. Models exported as SavedModel are generally better kept for deployment.
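A minimal Checkpoint sketch (assuming model and optimizer are already constructed):

import tensorflow as tf

checkpoint = tf.train.Checkpoint(model=model, optimizer=optimizer)
checkpoint.save('./checkpoints/ckpt')   # call periodically during training

# in a later run: rebuild the same model/optimizer objects, then restore,
# and training simply continues from where it stopped
checkpoint.restore(tf.train.latest_checkpoint('./checkpoints'))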
A model exported as SavedModel is best not used for evaluate. It is possible in principle, but you may have to write the evaluate logic yourself; calling it directly is likely to fail.
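If you do want numbers from the exported model, a rough sketch of evaluating by hand (reusing test_dataset from the code above; the loaded object is still callable on batches even though Keras's evaluate is gone):

import tensorflow as tf

mymodel = tf.saved_model.load('save/1')
accuracy = tf.keras.metrics.SparseCategoricalAccuracy()
for images, labels in test_dataset:
    y_pred = mymodel(images)   # forward pass through the restored graph
    accuracy.update_state(labels, y_pred)
print(accuracy.result().numpy())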
When saving the model with tf.saved_model.save(MLP, "saved/1") I get:

ValueError: Expected an object of type Trackable, such as tf.Module or a subclass of the Trackable class, for export. Got <class '__main__.MLP'> with type <class 'type'>.

Could someone help me figure this out? Thanks.
Please paste a snippet that runs end to end; otherwise there is no way to tell what your MLP is here.