I'm trying to set up an image recognition CNN with TensorFlow 2.0. To be able to analyze my image augmentation, I'd like to see the images I feed into the network in TensorBoard.

Unfortunately, I cannot figure out how to do this with TensorFlow 2.0 and Keras, and I didn't really find documentation on this either.

For simplicity, I'm showing the code of an MNIST example. How would I add the image summary here?

```python
import tensorflow as tf

(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()

def scale(image, label):
    return tf.cast(image, tf.float32) / 255.0, label

# (`augment` and `model` are defined elsewhere in the original post)
dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
dataset = dataset.map(scale).map(augment).batch(32)

model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(dataset, epochs=5, callbacks=[])
```

Answer:

Apart from providing an answer to your question, I will make the code more TF2.0-like along the way. If you have any questions or need clarification, please post a comment down below.

I would advise using the Tensorflow Datasets library. There is absolutely no need to load data in numpy and transform it to tf.data.Dataset when one can do it in a single line:

```python
import tensorflow_datasets as tfds

dataset = tfds.load("mnist", as_supervised=True, split="train")
```

The line above will only return the train split (read more about splits in the tfds documentation).

In order to save images, one has to keep a tf.summary.SummaryWriter object throughout each pass. I have created a convenient wrapping class with a __call__ method for easy use with tf.data.Dataset's map capability:

```python
import tensorflow as tf


class ExampleAugmentation:
    def __init__(self, logdir: str, max_images: int, name: str):
        self.file_writer = tf.summary.create_file_writer(logdir)
        self.max_images = max_images
        self.name = name
        self._counter = 0

    def __call__(self, image, label):
        augmented_image = tf.image.random_flip_left_right(image)
        with self.file_writer.as_default():
            tf.summary.image(
                self.name,
                augmented_image,
                step=self._counter,
                max_outputs=self.max_images,
            )
        self._counter += 1
        return augmented_image, label
```

`name` is the name under which each part of the images will be saved. Which part, you may ask: the part defined by `max_outputs`.

Say `image` in `__call__` has shape `(32, 28, 28, 1)`, where the first dimension is the batch, the second and third are the spatial dimensions, and the last is channels (in the case of MNIST only one, but this dimension is needed by the `tf.image` augmentations). Furthermore, let's say `max_outputs` is specified as `4`. In this case, only the first 4 images of each batch will be saved. The default value is `3`, so you may set it to `BATCH_SIZE` to save every image.

In TensorBoard, each image will be a separate sample over which you can iterate at the end. `_counter` is needed so the images will not be overwritten (I think; not really sure, clarification from someone else would be nice).

Important: you may want to rename this class to something like `ImageSaver` when doing more serious business, and move the augmentation into separate functors or lambda functions. It suffices for presentation purposes, I guess.

Please do not mix function declarations, global variables, data loading and the rest (like loading data first and creating functions afterwards). I know TF1.0 encouraged this style of programming, but they are trying to get away from it and you might want to follow the trend.

Below I have defined some global variables which will be used throughout the next parts, pretty self-explanatory I guess:

```python
BATCH_SIZE = 32
LOG_DIR = "logs/images"  # the original value was lost in scraping; any writable path works
AUGMENTATION = ExampleAugmentation(LOG_DIR, max_images=4, name="Images")
```
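The answer recommends renaming the wrapper to something like `ImageSaver` and moving the augmentation out into separate functions. A minimal sketch of that split, using a small random dataset in place of MNIST so it runs standalone (the names `ImageSaver` and `augment`, the toy data, and the `logs/images` path are illustrative, not from the original post):

```python
import numpy as np
import tensorflow as tf


class ImageSaver:
    """Writes batches of images to TensorBoard as image summaries."""

    def __init__(self, logdir: str, max_images: int, name: str):
        self.file_writer = tf.summary.create_file_writer(logdir)
        self.max_images = max_images
        self.name = name
        self._counter = 0  # used as `step`, so each batch gets its own slot

    def __call__(self, image, label):
        with self.file_writer.as_default():
            tf.summary.image(self.name, image, step=self._counter,
                             max_outputs=self.max_images)
        self._counter += 1
        return image, label


def augment(image, label):
    # Augmentation kept separate from image saving, as the answer suggests.
    return tf.image.random_flip_left_right(image), label


# Toy stand-in for MNIST so the sketch runs without downloading anything:
# 64 random 28x28x1 "images" with integer labels.
x = np.random.rand(64, 28, 28, 1).astype("float32")
y = np.random.randint(0, 10, size=(64,)).astype("int64")

dataset = (tf.data.Dataset.from_tensor_slices((x, y))
           .batch(32)
           .map(augment))

saver = ImageSaver("logs/images", max_images=4, name="Images")
for images, labels in dataset:       # or: dataset.map(saver), as in the answer
    images, labels = saver(images, labels)

print(images.shape)    # (32, 28, 28, 1)
print(saver._counter)  # one summary step written per batch -> 2
```

Pointing TensorBoard at the log directory (`tensorboard --logdir logs/images`) should then show the saved batches under the Images tab.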