In this TensorFlow tutorial, we will learn how to use the Keras preprocessing layers and the tf.image module in TensorFlow to augment images. When you work on a machine learning problem related to images, not only do you need to collect some images as training data, but you also need to employ augmentation to create variations in the images. This is especially true for more complex object recognition problems.
There are many ways to perform image augmentation. You may use external libraries or write your own functions for it. There are modules in TensorFlow and Keras for augmentation too.
In this post, you will discover how you can use the Keras preprocessing layers as well as the tf.image module in TensorFlow for image augmentation.
After reading this post, you will know:

- How to use the Keras preprocessing layers for image augmentation
- How to use the tf.image module for image augmentation
- How to apply augmentation to a tf.data dataset

Let's get started.
This article is divided into five sections; they are:

- Getting Images
- Visualizing the Images
- Keras Preprocessing Layers
- Using tf.image API for Augmentation
- Using Preprocessing Layers in Neural Networks
Before you see how you can do augmentation, you need to get the images. Ultimately, you need the images to be represented as arrays, for example, as HxWx3 arrays of 8-bit integers for the RGB pixel values. There are many ways to get the images. Some can be downloaded as a ZIP file. If you're using TensorFlow, you may get some image datasets from the tensorflow_datasets library.
In this tutorial, you will use the citrus leaves images, which is a small dataset of less than 100MB. It can be downloaded from tensorflow_datasets
as follows:
import tensorflow_datasets as tfds
ds, meta = tfds.load('citrus_leaves', with_info=True, split='train', shuffle_files=True)
Running this code for the first time will download the image dataset onto your computer with the following output:
Downloading and preparing dataset 63.87 MiB (download: 63.87 MiB, generated: 37.89 MiB, total: 101.76 MiB) to ~/tensorflow_datasets/citrus_leaves/0.1.2...
Extraction completed...: 100%|██████████████████████████████| 1/1 [00:06<00:00, 6.54s/ file]
Dl Size...: 100%|██████████████████████████████████████████| 63/63 [00:06<00:00, 9.63 MiB/s]
Dl Completed...: 100%|███████████████████████████████████████| 1/1 [00:06<00:00, 6.54s/ url]
Dataset citrus_leaves downloaded and prepared to ~/tensorflow_datasets/citrus_leaves/0.1.2. Subsequent calls will reuse this data.
The function above returns the images as a tf.data
dataset object and the metadata. This is a classification dataset. You can print the training labels with the following:
...
for i in range(meta.features['label'].num_classes):
    print(meta.features['label'].int2str(i))
This prints:
Black spot
canker
greening
healthy
If you run this code again at a later time, you will reuse the downloaded images. But another way to load the downloaded images into a tf.data dataset is to use the image_dataset_from_directory() function.
As you can see from the screen output above, the dataset is downloaded into the directory ~/tensorflow_datasets
. If you look at the directory, you see the directory structure as follows:
.../Citrus/Leaves
├── Black spot
├── Melanose
├── canker
├── greening
└── healthy
The directories are the labels, and the images are files stored under their corresponding directories. You can let the function read the directory recursively into a dataset:
import tensorflow as tf
from tensorflow.keras.utils import image_dataset_from_directory
# set to fixed image size 256x256
PATH = ".../Citrus/Leaves"
ds = image_dataset_from_directory(PATH,
                                  validation_split=0.2, subset="training",
                                  image_size=(256,256), interpolation="bilinear",
                                  crop_to_aspect_ratio=True,
                                  seed=42, shuffle=True, batch_size=32)
You may want to set batch_size=None
if you do not want the dataset to be batched. Usually, you want the dataset to be batched for training a neural network model.
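For instance, here is a minimal sketch (same PATH and parameters as above) of loading the directory as an unbatched dataset, where each element is a single (image, label) pair:

# assumption: PATH is the same dataset directory used above
ds_unbatched = image_dataset_from_directory(PATH,
                                            validation_split=0.2, subset="training",
                                            image_size=(256,256), interpolation="bilinear",
                                            crop_to_aspect_ratio=True,
                                            seed=42, shuffle=True, batch_size=None)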
It is important to visualize the augmentation result so you can verify it is what you want it to be. You can use matplotlib for this.
In matplotlib, you have the imshow()
function to display an image. However, for the image to be displayed correctly, the image should be presented as an array of 8-bit unsigned integers (uint8).
Given that you have a dataset created using image_dataset_from_directory(), you can get the first batch (of 32 images) and display a few of them using imshow(), as follows:
...
import matplotlib.pyplot as plt
fig, ax = plt.subplots(3, 3, sharex=True, sharey=True, figsize=(5,5))
for images, labels in ds.take(1):
    for i in range(3):
        for j in range(3):
            ax[i][j].imshow(images[i*3+j].numpy().astype("uint8"))
            ax[i][j].set_title(ds.class_names[labels[i*3+j]])
plt.show()
Here, you see a display of nine images in a grid, labeled with their corresponding classification labels using ds.class_names. The images should be converted to NumPy arrays in uint8 for display. This code displays an image like the following:
The complete code from loading the image to display is as follows:
from tensorflow.keras.utils import image_dataset_from_directory
import matplotlib.pyplot as plt
# use image_dataset_from_directory() to load images, with image size scaled to 256x256
PATH='.../Citrus/Leaves' # modify to your path
ds = image_dataset_from_directory(PATH,
                                  validation_split=0.2, subset="training",
                                  image_size=(256,256), interpolation="mitchellcubic",
                                  crop_to_aspect_ratio=True,
                                  seed=42, shuffle=True, batch_size=32)

# Take one batch from dataset and display the images
fig, ax = plt.subplots(3, 3, sharex=True, sharey=True, figsize=(5,5))
for images, labels in ds.take(1):
    for i in range(3):
        for j in range(3):
            ax[i][j].imshow(images[i*3+j].numpy().astype("uint8"))
            ax[i][j].set_title(ds.class_names[labels[i*3+j]])
plt.show()
Note that if you’re using tensorflow_datasets
to get the image, the samples are presented as a dictionary instead of a tuple of (image,label). You should change your code slightly to the following:
import tensorflow_datasets as tfds
import matplotlib.pyplot as plt
# use tfds.load() or image_dataset_from_directory() to load images
ds, meta = tfds.load('citrus_leaves', with_info=True, split='train', shuffle_files=True)
ds = ds.batch(32)

# Take one batch from dataset and display the images
fig, ax = plt.subplots(3, 3, sharex=True, sharey=True, figsize=(5,5))
for sample in ds.take(1):
    images, labels = sample["image"], sample["label"]
    for i in range(3):
        for j in range(3):
            ax[i][j].imshow(images[i*3+j].numpy().astype("uint8"))
            ax[i][j].set_title(meta.features['label'].int2str(labels[i*3+j]))
plt.show()
For the rest of this post, assume the dataset is created using image_dataset_from_directory()
. You may need to tweak the code slightly if your dataset is created differently.
Keras comes with many neural network layers, such as convolution layers, that have parameters to be trained. There are also layers with no parameters to train, such as flatten layers that convert an array, like an image, into a vector.
The preprocessing layers in Keras are specifically designed to use in the early stages of a neural network. You can use them for image preprocessing, such as to resize or rotate the image or adjust the brightness and contrast. While the preprocessing layers are supposed to be part of a larger neural network, you can also use them as functions. Below is how you can use the resizing layer as a function to transform some images and display them side-by-side with the original:
...
# create a resizing layer
out_height, out_width = 128, 256
resize = tf.keras.layers.Resizing(out_height, out_width)

# show original vs resized
fig, ax = plt.subplots(2, 3, figsize=(6,4))
for images, labels in ds.take(1):
    for i in range(3):
        ax[0][i].imshow(images[i].numpy().astype("uint8"))
        ax[0][i].set_title("original")
        # resize
        ax[1][i].imshow(resize(images[i]).numpy().astype("uint8"))
        ax[1][i].set_title("resize")
plt.show()
The images are 256×256 pixels, and the resizing layer will reduce them to 128 pixels tall by 256 pixels wide. The output of the above code is as follows:
Since the resizing layer works as a function, you can chain it to the dataset itself. For example:
...
def augment(image, label):
    return resize(image), label

resized_ds = ds.map(augment)

for image, label in resized_ds:
    ...
The dataset ds has samples in the form of (image, label). Hence, you created a function that takes in such a tuple and preprocesses the image with the resizing layer. You then assigned this function as an argument for map() on the dataset. When you draw a sample from the new dataset created with the map() function, the image will be a transformed one.
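As a side note, tf.data can also run the mapped function in parallel; here is a minimal sketch, reusing the augment() function above:

# let tf.data parallelize the augmentation across CPU cores;
# AUTOTUNE picks the degree of parallelism automatically
resized_ds = ds.map(augment, num_parallel_calls=tf.data.AUTOTUNE)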
There are more preprocessing layers available. Some are demonstrated below.
As you saw above, you can resize the image. You can also randomly enlarge or shrink the height or width of an image. Similarly, you can zoom in or zoom out on an image. Below is an example of manipulating the image size in various ways for a maximum of 30% increase or decrease:
...
# Create preprocessing layers
out_height, out_width = 128,256
resize = tf.keras.layers.Resizing(out_height, out_width)
height = tf.keras.layers.RandomHeight(0.3)
width = tf.keras.layers.RandomWidth(0.3)
zoom = tf.keras.layers.RandomZoom(0.3)
# Visualize images and augmentations
fig, ax = plt.subplots(5, 3, figsize=(6,14))
for images, labels in ds.take(1):
    for i in range(3):
        ax[0][i].imshow(images[i].numpy().astype("uint8"))
        ax[0][i].set_title("original")
        # resize
        ax[1][i].imshow(resize(images[i]).numpy().astype("uint8"))
        ax[1][i].set_title("resize")
        # height
        ax[2][i].imshow(height(images[i]).numpy().astype("uint8"))
        ax[2][i].set_title("height")
        # width
        ax[3][i].imshow(width(images[i]).numpy().astype("uint8"))
        ax[3][i].set_title("width")
        # zoom
        ax[4][i].imshow(zoom(images[i]).numpy().astype("uint8"))
        ax[4][i].set_title("zoom")
plt.show()
This code shows images as follows:
While you specified a fixed dimension for the resize, you get a random amount of manipulation in the other augmentations.
You can also do flipping, rotation, cropping, and geometric translation using preprocessing layers:
...
# Create preprocessing layers
flip = tf.keras.layers.RandomFlip("horizontal_and_vertical") # or "horizontal", "vertical"
rotate = tf.keras.layers.RandomRotation(0.2)
crop = tf.keras.layers.RandomCrop(out_height, out_width)
translation = tf.keras.layers.RandomTranslation(height_factor=0.2, width_factor=0.2)
# Visualize augmentations
fig, ax = plt.subplots(5, 3, figsize=(6,14))
for images, labels in ds.take(1):
    for i in range(3):
        ax[0][i].imshow(images[i].numpy().astype("uint8"))
        ax[0][i].set_title("original")
        # flip
        ax[1][i].imshow(flip(images[i]).numpy().astype("uint8"))
        ax[1][i].set_title("flip")
        # crop
        ax[2][i].imshow(crop(images[i]).numpy().astype("uint8"))
        ax[2][i].set_title("crop")
        # translation
        ax[3][i].imshow(translation(images[i]).numpy().astype("uint8"))
        ax[3][i].set_title("translation")
        # rotate
        ax[4][i].imshow(rotate(images[i]).numpy().astype("uint8"))
        ax[4][i].set_title("rotate")
plt.show()
This code shows the following images:
And finally, you can do augmentations on color adjustments as well:
...
brightness = tf.keras.layers.RandomBrightness([-0.8,0.8])
contrast = tf.keras.layers.RandomContrast(0.2)
# Visualize augmentation
fig, ax = plt.subplots(3, 3, figsize=(6,7))
for images, labels in ds.take(1):
    for i in range(3):
        ax[0][i].imshow(images[i].numpy().astype("uint8"))
        ax[0][i].set_title("original")
        # brightness
        ax[1][i].imshow(brightness(images[i]).numpy().astype("uint8"))
        ax[1][i].set_title("brightness")
        # contrast
        ax[2][i].imshow(contrast(images[i]).numpy().astype("uint8"))
        ax[2][i].set_title("contrast")
plt.show()
This shows the images as follows:
For completeness, below is the code to display the result of various augmentations:
from tensorflow.keras.utils import image_dataset_from_directory
import tensorflow as tf
import matplotlib.pyplot as plt
# use image_dataset_from_directory() to load images, with image size scaled to 256x256
PATH='.../Citrus/Leaves' # modify to your path
ds = image_dataset_from_directory(PATH,
                                  validation_split=0.2, subset="training",
                                  image_size=(256,256), interpolation="mitchellcubic",
                                  crop_to_aspect_ratio=True,
                                  seed=42, shuffle=True, batch_size=32)
# Create preprocessing layers
out_height, out_width = 128,256
resize = tf.keras.layers.Resizing(out_height, out_width)
height = tf.keras.layers.RandomHeight(0.3)
width = tf.keras.layers.RandomWidth(0.3)
zoom = tf.keras.layers.RandomZoom(0.3)
flip = tf.keras.layers.RandomFlip("horizontal_and_vertical")
rotate = tf.keras.layers.RandomRotation(0.2)
crop = tf.keras.layers.RandomCrop(out_height, out_width)
translation = tf.keras.layers.RandomTranslation(height_factor=0.2, width_factor=0.2)
brightness = tf.keras.layers.RandomBrightness([-0.8,0.8])
contrast = tf.keras.layers.RandomContrast(0.2)
# Visualize images and augmentations
fig, ax = plt.subplots(5, 3, figsize=(6,14))
for images, labels in ds.take(1):
    for i in range(3):
        ax[0][i].imshow(images[i].numpy().astype("uint8"))
        ax[0][i].set_title("original")
        # resize
        ax[1][i].imshow(resize(images[i]).numpy().astype("uint8"))
        ax[1][i].set_title("resize")
        # height
        ax[2][i].imshow(height(images[i]).numpy().astype("uint8"))
        ax[2][i].set_title("height")
        # width
        ax[3][i].imshow(width(images[i]).numpy().astype("uint8"))
        ax[3][i].set_title("width")
        # zoom
        ax[4][i].imshow(zoom(images[i]).numpy().astype("uint8"))
        ax[4][i].set_title("zoom")
plt.show()
fig, ax = plt.subplots(5, 3, figsize=(6,14))
for images, labels in ds.take(1):
    for i in range(3):
        ax[0][i].imshow(images[i].numpy().astype("uint8"))
        ax[0][i].set_title("original")
        # flip
        ax[1][i].imshow(flip(images[i]).numpy().astype("uint8"))
        ax[1][i].set_title("flip")
        # crop
        ax[2][i].imshow(crop(images[i]).numpy().astype("uint8"))
        ax[2][i].set_title("crop")
        # translation
        ax[3][i].imshow(translation(images[i]).numpy().astype("uint8"))
        ax[3][i].set_title("translation")
        # rotate
        ax[4][i].imshow(rotate(images[i]).numpy().astype("uint8"))
        ax[4][i].set_title("rotate")
plt.show()
fig, ax = plt.subplots(3, 3, figsize=(6,7))
for images, labels in ds.take(1):
    for i in range(3):
        ax[0][i].imshow(images[i].numpy().astype("uint8"))
        ax[0][i].set_title("original")
        # brightness
        ax[1][i].imshow(brightness(images[i]).numpy().astype("uint8"))
        ax[1][i].set_title("brightness")
        # contrast
        ax[2][i].imshow(contrast(images[i]).numpy().astype("uint8"))
        ax[2][i].set_title("contrast")
plt.show()
Finally, it is important to point out that most neural network models can work better if the input images are scaled. While we usually use an 8-bit unsigned integer for the pixel values in an image (e.g., for display using imshow()
as above), a neural network prefers the pixel values to be between 0 and 1 or between -1 and +1. This can be done with preprocessing layers too. Below is how you can update one of the examples above to add the scaling layer into the augmentation:
...
out_height, out_width = 128, 256
resize = tf.keras.layers.Resizing(out_height, out_width)
rescale = tf.keras.layers.Rescaling(1/127.5, offset=-1)  # rescale pixel values to [-1,1]

def augment(image, label):
    return rescale(resize(image)), label

rescaled_resized_ds = ds.map(augment)

for image, label in rescaled_resized_ds:
    ...
Besides the preprocessing layers, the tf.image module also provides some functions for augmentation. Unlike the preprocessing layers, these functions are intended to be used in a user-defined function that is assigned to a dataset with map(), as you saw above.
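As a minimal sketch (the function name tf_image_augment is just an illustration, and the fixed seed pair is for demonstration only), such a user-defined function could look like this:

# wire tf.image functions into the dataset via map();
# in practice you would derive a fresh seed pair per sample
def tf_image_augment(image, label):
    seed = (42, 0)  # a pair of int32 values for the stateless ops
    image = tf.image.stateless_random_flip_left_right(image, seed)
    image = tf.image.stateless_random_brightness(image, 0.3, seed)
    return image, label

augmented_ds = ds.map(tf_image_augment)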
The functions provided by tf.image are not duplicates of the preprocessing layers, although there is some overlap. Below is an example of using the tf.image functions to resize and crop images:
...
fig, ax = plt.subplots(5, 3, figsize=(6,14))
for images, labels in ds.take(1):
    for i in range(3):
        # original
        ax[0][i].imshow(images[i].numpy().astype("uint8"))
        ax[0][i].set_title("original")
        # resize
        h = int(256 * tf.random.uniform([], minval=0.8, maxval=1.2))
        w = int(256 * tf.random.uniform([], minval=0.8, maxval=1.2))
        ax[1][i].imshow(tf.image.resize(images[i], [h,w]).numpy().astype("uint8"))
        ax[1][i].set_title("resize")
        # crop
        y, x, h, w = (128 * tf.random.uniform((4,))).numpy().astype("uint8")
        ax[2][i].imshow(tf.image.crop_to_bounding_box(images[i], y, x, h, w).numpy().astype("uint8"))
        ax[2][i].set_title("crop")
        # central crop
        x = tf.random.uniform([], minval=0.4, maxval=1.0)
        ax[3][i].imshow(tf.image.central_crop(images[i], x).numpy().astype("uint8"))
        ax[3][i].set_title("central crop")
        # crop to (h,w) at random offset
        h, w = (256 * tf.random.uniform((2,))).numpy().astype("uint8")
        seed = tf.random.uniform((2,), minval=0, maxval=65536).numpy().astype("int32")
        ax[4][i].imshow(tf.image.stateless_random_crop(images[i], [h,w,3], seed).numpy().astype("uint8"))
        ax[4][i].set_title("random crop")
plt.show()
Below is the output of the above code:
While the display of images matches what you might expect from the code, the use of tf.image functions is quite different from that of the preprocessing layers. Every tf.image function is different. For example, the crop_to_bounding_box() function takes pixel coordinates, but the central_crop() function takes a fraction as its argument.
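A small sketch on a dummy image (sizes chosen arbitrarily) illustrates the two conventions:

# crop_to_bounding_box() works in pixels; central_crop() in fractions
img = tf.zeros([256, 256, 3])
a = tf.image.crop_to_bounding_box(img, offset_height=16, offset_width=16,
                                  target_height=128, target_width=128)  # pixel units
b = tf.image.central_crop(img, central_fraction=0.5)  # keep the middle 50%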
These functions also differ in the way randomness is handled. Some of them do not assume random behavior, so for a random resize you need to generate the exact output size with a random number generator before calling the resize function. Other functions, such as stateless_random_crop(), can do augmentation randomly, but a pair of int32 random seeds needs to be specified explicitly.
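A small sketch of the stateless behavior: the same seed pair always produces the same output, so varying the seed is what varies the augmentation.

# the stateless ops are deterministic in the seed pair
img = tf.zeros([256, 256, 3])
c1 = tf.image.stateless_random_crop(img, size=[128, 128, 3], seed=(1, 2))
c2 = tf.image.stateless_random_crop(img, size=[128, 128, 3], seed=(1, 2))
# c1 and c2 are identical crops; change the seed to get a different crop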
To continue the example, there are functions for flipping an image and extracting the Sobel edges:
...
fig, ax = plt.subplots(5, 3, figsize=(6,14))
for images, labels in ds.take(1):
    for i in range(3):
        ax[0][i].imshow(images[i].numpy().astype("uint8"))
        ax[0][i].set_title("original")
        # flip left-right
        seed = tf.random.uniform((2,), minval=0, maxval=65536).numpy().astype("int32")
        ax[1][i].imshow(tf.image.stateless_random_flip_left_right(images[i], seed).numpy().astype("uint8"))
        ax[1][i].set_title("flip left-right")
        # flip up-down
        seed = tf.random.uniform((2,), minval=0, maxval=65536).numpy().astype("int32")
        ax[2][i].imshow(tf.image.stateless_random_flip_up_down(images[i], seed).numpy().astype("uint8"))
        ax[2][i].set_title("flip up-down")
        # sobel edge, y-direction
        sobel = tf.image.sobel_edges(images[i:i+1])
        ax[3][i].imshow(sobel[0, ..., 0].numpy().astype("uint8"))
        ax[3][i].set_title("sobel y")
        # sobel edge, x-direction
        ax[4][i].imshow(sobel[0, ..., 1].numpy().astype("uint8"))
        ax[4][i].set_title("sobel x")
plt.show()
This shows the following:
And the following are the functions to manipulate the brightness, contrast, and colors:
...
fig, ax = plt.subplots(5, 3, figsize=(6,14))
for images, labels in ds.take(1):
    for i in range(3):
        ax[0][i].imshow(images[i].numpy().astype("uint8"))
        ax[0][i].set_title("original")
        # brightness
        seed = tf.random.uniform((2,), minval=0, maxval=65536).numpy().astype("int32")
        ax[1][i].imshow(tf.image.stateless_random_brightness(images[i], 0.3, seed).numpy().astype("uint8"))
        ax[1][i].set_title("brightness")
        # contrast
        ax[2][i].imshow(tf.image.stateless_random_contrast(images[i], 0.7, 1.3, seed).numpy().astype("uint8"))
        ax[2][i].set_title("contrast")
        # saturation
        ax[3][i].imshow(tf.image.stateless_random_saturation(images[i], 0.7, 1.3, seed).numpy().astype("uint8"))
        ax[3][i].set_title("saturation")
        # hue
        ax[4][i].imshow(tf.image.stateless_random_hue(images[i], 0.3, seed).numpy().astype("uint8"))
        ax[4][i].set_title("hue")
plt.show()
This code shows the following:
Below is the complete code to display all of the above:
from tensorflow.keras.utils import image_dataset_from_directory
import tensorflow as tf
import matplotlib.pyplot as plt
# use image_dataset_from_directory() to load images, with image size scaled to 256x256
PATH='.../Citrus/Leaves' # modify to your path
ds = image_dataset_from_directory(PATH,
                                  validation_split=0.2, subset="training",
                                  image_size=(256,256), interpolation="mitchellcubic",
                                  crop_to_aspect_ratio=True,
                                  seed=42, shuffle=True, batch_size=32)
# Visualize tf.image augmentations
fig, ax = plt.subplots(5, 3, figsize=(6,14))
for images, labels in ds.take(1):
    for i in range(3):
        # original
        ax[0][i].imshow(images[i].numpy().astype("uint8"))
        ax[0][i].set_title("original")
        # resize
        h = int(256 * tf.random.uniform([], minval=0.8, maxval=1.2))
        w = int(256 * tf.random.uniform([], minval=0.8, maxval=1.2))
        ax[1][i].imshow(tf.image.resize(images[i], [h,w]).numpy().astype("uint8"))
        ax[1][i].set_title("resize")
        # crop
        y, x, h, w = (128 * tf.random.uniform((4,))).numpy().astype("uint8")
        ax[2][i].imshow(tf.image.crop_to_bounding_box(images[i], y, x, h, w).numpy().astype("uint8"))
        ax[2][i].set_title("crop")
        # central crop
        x = tf.random.uniform([], minval=0.4, maxval=1.0)
        ax[3][i].imshow(tf.image.central_crop(images[i], x).numpy().astype("uint8"))
        ax[3][i].set_title("central crop")
        # crop to (h,w) at random offset
        h, w = (256 * tf.random.uniform((2,))).numpy().astype("uint8")
        seed = tf.random.uniform((2,), minval=0, maxval=65536).numpy().astype("int32")
        ax[4][i].imshow(tf.image.stateless_random_crop(images[i], [h,w,3], seed).numpy().astype("uint8"))
        ax[4][i].set_title("random crop")
plt.show()
fig, ax = plt.subplots(5, 3, figsize=(6,14))
for images, labels in ds.take(1):
    for i in range(3):
        ax[0][i].imshow(images[i].numpy().astype("uint8"))
        ax[0][i].set_title("original")
        # flip left-right
        seed = tf.random.uniform((2,), minval=0, maxval=65536).numpy().astype("int32")
        ax[1][i].imshow(tf.image.stateless_random_flip_left_right(images[i], seed).numpy().astype("uint8"))
        ax[1][i].set_title("flip left-right")
        # flip up-down
        seed = tf.random.uniform((2,), minval=0, maxval=65536).numpy().astype("int32")
        ax[2][i].imshow(tf.image.stateless_random_flip_up_down(images[i], seed).numpy().astype("uint8"))
        ax[2][i].set_title("flip up-down")
        # sobel edge, y-direction
        sobel = tf.image.sobel_edges(images[i:i+1])
        ax[3][i].imshow(sobel[0, ..., 0].numpy().astype("uint8"))
        ax[3][i].set_title("sobel y")
        # sobel edge, x-direction
        ax[4][i].imshow(sobel[0, ..., 1].numpy().astype("uint8"))
        ax[4][i].set_title("sobel x")
plt.show()
fig, ax = plt.subplots(5, 3, figsize=(6,14))
for images, labels in ds.take(1):
    for i in range(3):
        ax[0][i].imshow(images[i].numpy().astype("uint8"))
        ax[0][i].set_title("original")
        # brightness
        seed = tf.random.uniform((2,), minval=0, maxval=65536).numpy().astype("int32")
        ax[1][i].imshow(tf.image.stateless_random_brightness(images[i], 0.3, seed).numpy().astype("uint8"))
        ax[1][i].set_title("brightness")
        # contrast
        ax[2][i].imshow(tf.image.stateless_random_contrast(images[i], 0.7, 1.3, seed).numpy().astype("uint8"))
        ax[2][i].set_title("contrast")
        # saturation
        ax[3][i].imshow(tf.image.stateless_random_saturation(images[i], 0.7, 1.3, seed).numpy().astype("uint8"))
        ax[3][i].set_title("saturation")
        # hue
        ax[4][i].imshow(tf.image.stateless_random_hue(images[i], 0.3, seed).numpy().astype("uint8"))
        ax[4][i].set_title("hue")
plt.show()
These augmentation functions should be enough for most uses. But if you have specific ideas on augmentation, you will probably need a dedicated image processing library. OpenCV and Pillow are common and powerful libraries for transforming images.
You used the Keras preprocessing layers as functions in the examples above. But they can also be used as layers in a neural network, and they are trivial to use this way. Below is an example of how you can incorporate preprocessing layers into a classification network and train it using a dataset:
from tensorflow.keras.utils import image_dataset_from_directory
import tensorflow as tf
import matplotlib.pyplot as plt
# use image_dataset_from_directory() to load images, with image size scaled to 256x256
PATH='.../Citrus/Leaves' # modify to your path
ds = image_dataset_from_directory(PATH,
                                  validation_split=0.2, subset="training",
                                  image_size=(256,256), interpolation="mitchellcubic",
                                  crop_to_aspect_ratio=True,
                                  seed=42, shuffle=True, batch_size=32)
AUTOTUNE = tf.data.AUTOTUNE
ds = ds.cache().prefetch(buffer_size=AUTOTUNE)
num_classes = 5
model = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal_and_vertical"),
    tf.keras.layers.RandomRotation(0.2),
    tf.keras.layers.Rescaling(1/127.5, offset=-1),  # rescale pixel values to [-1,1]
    tf.keras.layers.Conv2D(32, 3, activation='relu'),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation='relu'),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation='relu'),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(num_classes)
])
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])
model.fit(ds, epochs=3)
Running this code gives the following output:
Found 609 files belonging to 5 classes.
Using 488 files for training.
Epoch 1/3
16/16 [==============================] - 5s 253ms/step - loss: 1.4114 - accuracy: 0.4283
Epoch 2/3
16/16 [==============================] - 4s 259ms/step - loss: 0.8101 - accuracy: 0.6475
Epoch 3/3
16/16 [==============================] - 4s 267ms/step - loss: 0.7015 - accuracy: 0.7111
In the code above, you created the dataset with cache() and prefetch(). This is a performance technique to allow the dataset to prepare data asynchronously while the neural network is being trained. This is significant if the dataset has other augmentations assigned using the map() function.
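For example, here is a minimal sketch (reusing an augment() function like the earlier ones) of combining augmentation with this technique:

# cache the raw images, apply random augmentation after the cache
# (so each epoch sees different augmentations), then prefetch
AUTOTUNE = tf.data.AUTOTUNE
ds = ds.cache().map(augment, num_parallel_calls=AUTOTUNE).prefetch(buffer_size=AUTOTUNE)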
You will see some improvement in accuracy if you remove the RandomFlip and RandomRotation layers, because you make the problem easier. However, since you want the network to predict well on a wide variation of image quality and properties, using augmentation can help your resulting network become more powerful.
Original article sourced at: https://machinelearningmastery.com
This repository is a fork of SimpleMDE, made by Sparksuite. Go to the dedicated section for more information.
A drop-in JavaScript text area replacement for writing beautiful and understandable Markdown. EasyMDE allows users who may be less experienced with Markdown to use familiar toolbar buttons and shortcuts.
In addition, the syntax is rendered while editing to clearly show the expected result. Headings are larger, emphasized words are italicized, links are underlined, etc.
EasyMDE also features both built-in autosaving and spell checking. The editor is entirely customizable, from theming to toolbar buttons and JavaScript hooks.
Via npm:
npm install easymde
Via the UNPKG CDN:
<link rel="stylesheet" href="https://unpkg.com/easymde/dist/easymde.min.css">
<script src="https://unpkg.com/easymde/dist/easymde.min.js"></script>
Or jsDelivr:
<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/easymde/dist/easymde.min.css">
<script src="https://cdn.jsdelivr.net/npm/easymde/dist/easymde.min.js"></script>
After installing and/or importing the module, you can load EasyMDE onto the first textarea
element on the web page:
<textarea></textarea>
<script>
const easyMDE = new EasyMDE();
</script>
Alternatively you can select a specific textarea
, via JavaScript:
<textarea id="my-text-area"></textarea>
<script>
const easyMDE = new EasyMDE({element: document.getElementById('my-text-area')});
</script>
Use easyMDE.value()
to get the content of the editor:
<script>
easyMDE.value();
</script>
Use easyMDE.value(val)
to set the content of the editor:
<script>
easyMDE.value('New input for **EasyMDE**');
</script>
The following configuration options are available:

- `autoDownloadFontAwesome`: If set to `true`, force downloads Font Awesome (used for icons). If set to `false`, prevents downloading. Defaults to `undefined`, which will intelligently check whether Font Awesome has already been included, then download accordingly.
- `autofocus`: If set to `true`, focuses the editor automatically. Defaults to `false`.
- `autosave.enabled`: If set to `true`, saves the text automatically. Defaults to `false`.
- `autosave.delay`: Delay between saves, in milliseconds. Defaults to `10000` (10 seconds).
- `autosave.submit_delay`: Delay before assuming that submit of the form failed and saving the text, in milliseconds. Defaults to `autosave.delay` or `10000` (10 seconds).
- `autosave.timeFormat`: Set DateTimeFormat. Defaults to `locale: en-US, format: hour:minute`.
- `autoRefresh`: Useful when initializing the editor in a hidden DOM node. If set to `{ delay: 300 }`, it will check every 300 ms if the editor is visible and, if positive, call CodeMirror's `refresh()`.
- `blockStyles.bold`: Can be set to `**` or `__`. Defaults to `**`.
- `blockStyles.code`: Can be set to three backticks or `~~~`. Defaults to three backticks.
- `blockStyles.italic`: Can be set to `*` or `_`. Defaults to `*`.
- `unorderedListStyle`: Can be set to `*`, `-` or `+`. Defaults to `*`.
- `element`: The DOM element for the `textarea` element to use. Defaults to the first `textarea` element on the page.
- `forceSync`: If set to `true`, force text changes made in EasyMDE to be immediately stored in the original text area. Defaults to `false`.
- `indentWithTabs`: If set to `false`, indent using spaces instead of tabs. Defaults to `true`.
- `previewImagesInEditor`: If set to `true`, EasyMDE will show a preview of images; `false` by default. The preview will appear only for images on separate lines.
- `imagesPreviewHandler`: A custom function for handling the preview of images. Takes the parsed Markdown image target as argument and returns a string that serves as the `src` attribute of the `<img>` tag in the preview. Enables dynamic previewing of images in the frontend without having to upload them to a server, and allows copy-pasting of images to the editor with preview.
- `insertTexts`: Customize how certain buttons that insert text behave. Takes an array with two elements; the first element is inserted before the cursor or selection, and the second element after. For example, this is the default link value: `["[", "](http://)"]`.
- `lineNumbers`: If set to `true`, enables line numbers in the editor.
- `lineWrapping`: If set to `false`, disable line wrapping. Defaults to `true`.
- `minHeight`: Sets the minimum height for the composition area before it starts auto-growing. Should be a string containing a valid CSS value like `"500px"`. Defaults to `"300px"`.
- `maxHeight`: Sets a fixed height for the composition area; the `minHeight` option will be ignored. Should be a string containing a valid CSS value like `"500px"`. Defaults to `undefined`.
- `onToggleFullScreen`: A function that gets called when the editor toggles full screen mode. It receives a boolean that is `true` when the editor is currently going into full screen mode, or `false` otherwise.
- `parsingConfig.allowAtxHeaderWithoutSpace`: If set to `true`, will render headers without a space after the `#`. Defaults to `false`.
- `parsingConfig.strikethrough`: If set to `false`, will not process GFM strikethrough syntax. Defaults to `true`.
- `parsingConfig.underscoresBreakWords`: If set to `true`, let underscores be a delimiter for separating words. Defaults to `false`.
- `overlayMode.combine`: If set to `false`, will replace CSS classes returned by the default Markdown mode. Otherwise the classes returned by the custom mode will be combined with the classes returned by the default mode. Defaults to `true`.
- `previewClass`: A string or array of strings that will be applied to the preview screen when activated. Defaults to `"editor-preview"`.
- `promptURLs`: If set to `true`, a JS alert window appears asking for the link or image URL. Defaults to `false`.
- `promptTexts.image`: The text used to prompt for an image URL. Defaults to `URL of the image:`.
- `promptTexts.link`: The text used to prompt for a link URL. Defaults to `URL for the link:`.
- `uploadImage`: If set to `true`, enables the image upload functionality, which can be triggered by drag and drop, copy-paste and through the browse-file window (opened when the user clicks on the upload-image icon). Defaults to `false`.
- `imageMaxSize`: Maximum image size in bytes, checked before upload. Defaults to `1024 * 1024 * 2` (2 MB).
- `imageAccept`: A comma-separated list of mime-types used to check the image type before upload. Defaults to `image/png, image/jpeg`.
- `imageUploadFunction`: A custom function for handling the image upload. Using this function renders the options `imageMaxSize`, `imageAccept`, `imageUploadEndpoint` and `imageCSRFToken` ineffective. The function receives the file plus the `onSuccess` and `onError` callback functions as parameters: `onSuccess(imageUrl: string)` and `onError(errorMessage: string)`.
- `imageUploadEndpoint`: The endpoint where the image data will be sent via an asynchronous POST request. The server should save the image and return a JSON response: on success, `{"data": {"filePath": "<filePath>"}}`, where filePath is the path of the image (absolute if `imagePathAbsolute` is set to true, relative otherwise); on error, `{"error": "<errorCode>"}`, where errorCode can be `noFileGiven` (HTTP 400 Bad Request), `typeNotAllowed` (HTTP 415 Unsupported Media Type), `fileTooLarge` (HTTP 413 Payload Too Large) or `importError` (see errorMessages below). If errorCode is not one of the errorMessages, it is alerted unchanged to the user. This allows for server-side error messages. No default value.
- `imagePathAbsolute`: If set to `true`, will treat `imageUrl` from `imageUploadFunction` and filePath returned from `imageUploadEndpoint` as an absolute rather than relative path, i.e. not prepend `window.location.origin` to it.
- `imageCSRFName`: CSRF token field name to include with the AJAX call to upload an image. Applied when `imageCSRFToken` has a value; defaults to `csrfmiddlewaretoken`.
- `imageCSRFHeader`: If set to `true`, passes the CSRF token via a header. Defaults to `false`, which passes the CSRF token through the request body.
- `imageTexts`: Texts displayed to the user (mainly on the status bar) for the image upload feature, where `#image_name#`, `#image_size#` and `#image_max_size#` will be replaced by their respective values; these can be used for customization or internationalization:
  - `sbInit`: Status message displayed initially if `uploadImage` is set to `true`. Defaults to `Attach files by drag and dropping or pasting from clipboard.`.
  - `sbOnDragEnter`: Status message displayed when the user drags a file to the text area. Defaults to `Drop image to upload it.`.
  - `sbOnDrop`: Status message displayed when the user drops a file in the text area. Defaults to `Uploading images #images_names#`.
  - `sbProgress`: Status message displayed to show uploading progress. Defaults to `Uploading #file_name#: #progress#%`.
  - `sbOnUploaded`: Status message displayed when the image has been uploaded. Defaults to `Uploaded #image_name#`.
  - `sizeUnits`: A comma-separated list of units used to display messages with human-readable file sizes. Defaults to `B, KB, MB` (example: `218 KB`). You can use `B,KB,MB` instead if you prefer it without whitespace (`218KB`).
- `errorMessages`: Errors displayed to the user, using the `errorCallback` option, where `#image_name#`, `#image_size#` and `#image_max_size#` will be replaced by their respective values; these can be used for customization or internationalization:
  - `noFileGiven`: The server did not receive any file from the user. Defaults to `You must select a file.`.
  - `typeNotAllowed`: The user sent a file type which doesn't match the `imageAccept` list, or the server returned this error code. Defaults to `This image type is not allowed.`.
  - `fileTooLarge`: The size of the image being imported is bigger than `imageMaxSize`, or the server returned this error code. Defaults to `Image #image_name# is too big (#image_size#).\nMaximum file size is #image_max_size#.`.
  - `importError`: An unknown error occurred. Defaults to `Something went wrong when uploading the image #image_name#.`.
- `errorCallback`: A callback function used to define how to display an error message. Defaults to `(errorMessage) => alert(errorMessage)`.
- `renderingConfig.codeSyntaxHighlighting`: If set to `true`, will highlight using highlight.js. Defaults to `false`. To use this feature you must include highlight.js on your page or pass it in using the `hljs` option. For example, include the script and the CSS files like:
  <script src="https://cdn.jsdelivr.net/highlight.js/latest/highlight.min.js"></script>
  <link rel="stylesheet" href="https://cdn.jsdelivr.net/highlight.js/latest/styles/github.min.css">
- `renderingConfig.hljs`: An injectable instance of highlight.js. If you don't want to depend on the global namespace (`window.hljs`), you can provide an instance here. Defaults to `undefined`.
- `renderingConfig.markedOptions`: Set the internal Markdown renderer's options. Other `renderingConfig` options will take precedence.
- `renderingConfig.singleLineBreaks`: If set to `false`, disable parsing GitHub Flavored Markdown (GFM) single line breaks. Defaults to `true`.
- `spellChecker`: If set to `false`, disable the spell checker. Defaults to `true`. Optionally pass a CodeMirrorSpellChecker-compliant function.
- `inputStyle`: `textarea` or `contenteditable`. Defaults to `textarea` for desktop and `contenteditable` for mobile. The `contenteditable` option is necessary to enable nativeSpellcheck.
- `nativeSpellcheck`: If set to `false`, disable the native spell checker. Defaults to `true`.
- `sideBySideFullscreen`: If set to `false`, allows side-by-side editing without going into fullscreen. Defaults to `true`.
- `status`: If set to `false`, hide the status bar. Defaults to the array of built-in status bar items.
- `styleSelectedText`: If set to `false`, remove the `CodeMirror-selectedtext` class from selected lines. Defaults to `true`.
- `syncSideBySidePreviewScroll`: If set to `false`, disable syncing scroll in side by side mode. Defaults to `true`.
- `tabSize`: If set, customize the tab size. Defaults to `2`.
- `theme`: Override the theme. Defaults to `easymde`.
- `toolbar`: If set to `false`, hide the toolbar. Defaults to the array of icons.
- `toolbarTips`: If set to `false`, disable toolbar button tips. Defaults to `true`.
- `direction`: `rtl` or `ltr`. Changes text direction to support right-to-left languages. Defaults to `ltr`.
.Most options demonstrate the non-default behavior:
const editor = new EasyMDE({
autofocus: true,
autosave: {
enabled: true,
uniqueId: "MyUniqueID",
delay: 1000,
submit_delay: 5000,
timeFormat: {
locale: 'en-US',
format: {
year: 'numeric',
month: 'long',
day: '2-digit',
hour: '2-digit',
minute: '2-digit',
},
},
text: "Autosaved: "
},
blockStyles: {
bold: "__",
italic: "_",
},
unorderedListStyle: "-",
element: document.getElementById("MyID"),
forceSync: true,
hideIcons: ["guide", "heading"],
indentWithTabs: false,
initialValue: "Hello world!",
insertTexts: {
horizontalRule: ["", "\n\n-----\n\n"],
image: [""],
link: ["[", "](https://)"],
table: ["", "\n\n| Column 1 | Column 2 | Column 3 |\n| -------- | -------- | -------- |\n| Text | Text | Text |\n\n"],
},
lineWrapping: false,
minHeight: "500px",
parsingConfig: {
allowAtxHeaderWithoutSpace: true,
strikethrough: false,
underscoresBreakWords: true,
},
placeholder: "Type here...",
previewClass: "my-custom-styling",
previewClass: ["my-custom-styling", "more-custom-styling"],
previewRender: (plainText) => customMarkdownParser(plainText), // Returns HTML from a custom parser
previewRender: (plainText, preview) => { // Async method
setTimeout(() => {
preview.innerHTML = customMarkdownParser(plainText);
}, 250);
return "Loading...";
},
promptURLs: true,
promptTexts: {
image: "Custom prompt for URL:",
link: "Custom prompt for URL:",
},
renderingConfig: {
singleLineBreaks: false,
codeSyntaxHighlighting: true,
sanitizerFunction: (renderedHTML) => {
// Using DOMPurify and only allowing <b> tags
return DOMPurify.sanitize(renderedHTML, {ALLOWED_TAGS: ['b']})
},
},
shortcuts: {
drawTable: "Cmd-Alt-T"
},
showIcons: ["code", "table"],
spellChecker: false,
status: false,
status: ["autosave", "lines", "words", "cursor"], // Optional usage
status: ["autosave", "lines", "words", "cursor", {
className: "keystrokes",
defaultValue: (el) => {
el.setAttribute('data-keystrokes', 0);
},
onUpdate: (el) => {
const keystrokes = Number(el.getAttribute('data-keystrokes')) + 1;
el.innerHTML = `${keystrokes} Keystrokes`;
el.setAttribute('data-keystrokes', keystrokes);
},
}], // Another optional usage, with a custom status bar item that counts keystrokes
styleSelectedText: false,
sideBySideFullscreen: false,
syncSideBySidePreviewScroll: false,
tabSize: 4,
toolbar: false,
toolbarTips: false,
});
Below are the built-in toolbar icons (only some of which are enabled by default), which can be reorganized however you like. "Name" is the name of the icon, referenced in the JavaScript. "Action" is either a function or a URL to open. "Class" is the class given to the icon. "Tooltip" is the small tooltip that appears via the title=""
attribute. Note that shortcut hints are added automatically and reflect the specified action if it has a key bind assigned to it (i.e. with the value of action
set to bold
and that of tooltip
set to Bold
, the final text the user will see would be "Bold (Ctrl-B)").
Additionally, you can add a separator between any icons by adding "|"
to the toolbar array.
| Name | Action | Tooltip | Class |
| --- | --- | --- | --- |
| bold | toggleBold | Bold | fa fa-bold |
| italic | toggleItalic | Italic | fa fa-italic |
| strikethrough | toggleStrikethrough | Strikethrough | fa fa-strikethrough |
| heading | toggleHeadingSmaller | Heading | fa fa-header |
| heading-smaller | toggleHeadingSmaller | Smaller Heading | fa fa-header |
| heading-bigger | toggleHeadingBigger | Bigger Heading | fa fa-lg fa-header |
| heading-1 | toggleHeading1 | Big Heading | fa fa-header header-1 |
| heading-2 | toggleHeading2 | Medium Heading | fa fa-header header-2 |
| heading-3 | toggleHeading3 | Small Heading | fa fa-header header-3 |
| code | toggleCodeBlock | Code | fa fa-code |
| quote | toggleBlockquote | Quote | fa fa-quote-left |
| unordered-list | toggleUnorderedList | Generic List | fa fa-list-ul |
| ordered-list | toggleOrderedList | Numbered List | fa fa-list-ol |
| clean-block | cleanBlock | Clean block | fa fa-eraser |
| link | drawLink | Create Link | fa fa-link |
| image | drawImage | Insert Image | fa fa-picture-o |
| table | drawTable | Insert Table | fa fa-table |
| horizontal-rule | drawHorizontalRule | Insert Horizontal Line | fa fa-minus |
| preview | togglePreview | Toggle Preview | fa fa-eye no-disable |
| side-by-side | toggleSideBySide | Toggle Side by Side | fa fa-columns no-disable no-mobile |
| fullscreen | toggleFullScreen | Toggle Fullscreen | fa fa-arrows-alt no-disable no-mobile |
| guide | This link | Markdown Guide | fa fa-question-circle |
| undo | undo | Undo | fa fa-undo |
| redo | redo | Redo | fa fa-redo |
Customize the toolbar using the toolbar
option.
To customize only the order of existing buttons:
const easyMDE = new EasyMDE({
toolbar: ["bold", "italic", "heading", "|", "quote"]
});
To customize all information and/or add your own icons:
const easyMDE = new EasyMDE({
toolbar: [
{
name: "bold",
action: EasyMDE.toggleBold,
className: "fa fa-bold",
title: "Bold",
},
"italics", // shortcut to pre-made button
{
name: "custom",
action: (editor) => {
// Add your own code
},
className: "fa fa-star",
title: "Custom Button",
attributes: { // for custom attributes
id: "custom-id",
"data-value": "custom value" // HTML5 data-* attributes need to be enclosed in quotation marks ("") because of the dash (-) in its name.
}
},
"|" // Separator
// [, ...]
]
});
To put some buttons in a dropdown menu:
const easyMDE = new EasyMDE({
toolbar: [{
name: "heading",
action: EasyMDE.toggleHeadingSmaller,
className: "fa fa-header",
title: "Headers",
},
"|",
{
name: "others",
className: "fa fa-blind",
title: "others buttons",
children: [
{
name: "image",
action: EasyMDE.drawImage,
className: "fa fa-picture-o",
title: "Image",
},
{
name: "quote",
action: EasyMDE.toggleBlockquote,
className: "fa fa-percent",
title: "Quote",
},
{
name: "link",
action: EasyMDE.drawLink,
className: "fa fa-link",
title: "Link",
}
]
},
// [, ...]
]
});
EasyMDE comes with an array of predefined keyboard shortcuts, but they can be altered with a configuration option. The list of default ones is as follows:
Shortcut (Windows / Linux) | Shortcut (macOS) | Action |
---|---|---|
Ctrl-' | Cmd-' | "toggleBlockquote" |
Ctrl-B | Cmd-B | "toggleBold" |
Ctrl-E | Cmd-E | "cleanBlock" |
Ctrl-H | Cmd-H | "toggleHeadingSmaller" |
Ctrl-I | Cmd-I | "toggleItalic" |
Ctrl-K | Cmd-K | "drawLink" |
Ctrl-L | Cmd-L | "toggleUnorderedList" |
Ctrl-P | Cmd-P | "togglePreview" |
Ctrl-Alt-C | Cmd-Alt-C | "toggleCodeBlock" |
Ctrl-Alt-I | Cmd-Alt-I | "drawImage" |
Ctrl-Alt-L | Cmd-Alt-L | "toggleOrderedList" |
Shift-Ctrl-H | Shift-Cmd-H | "toggleHeadingBigger" |
F9 | F9 | "toggleSideBySide" |
F11 | F11 | "toggleFullScreen" |
Here is how you can change a few, while leaving others untouched:
const editor = new EasyMDE({
shortcuts: {
"toggleOrderedList": "Ctrl-Alt-K", // alter the shortcut for toggleOrderedList
"toggleCodeBlock": null, // unbind Ctrl-Alt-C
"drawTable": "Cmd-Alt-T", // bind Cmd-Alt-T to drawTable action, which doesn't come with a default shortcut
}
});
Shortcuts are automatically converted between platforms. If you define a shortcut as "Cmd-B", on PC that shortcut will be changed to "Ctrl-B". Conversely, a shortcut defined as "Ctrl-B" will become "Cmd-B" for Mac users.
The list of actions that can be bound is the same as the list of built-in actions available for toolbar buttons.
You can catch the following list of events: https://codemirror.net/doc/manual.html#events
const easyMDE = new EasyMDE();
easyMDE.codemirror.on("change", () => {
console.log(easyMDE.value());
});
You can revert to the initial text area by calling the toTextArea
method. Note that this clears up the autosave (if enabled) associated with it. The text area will retain any text from the destroyed EasyMDE instance.
const easyMDE = new EasyMDE();
// ...
easyMDE.toTextArea();
easyMDE = null;
If you need to remove registered event listeners (when the editor is not needed anymore), call easyMDE.cleanup()
.
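For instance, a minimal sketch of a teardown that also releases the listeners:

const easyMDE = new EasyMDE();
// ... when the editor is no longer needed:
easyMDE.cleanup(); // removes registered event listeners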
The following self-explanatory methods may be of use while developing with EasyMDE.
const easyMDE = new EasyMDE();
easyMDE.isPreviewActive(); // returns boolean
easyMDE.isSideBySideActive(); // returns boolean
easyMDE.isFullscreenActive(); // returns boolean
easyMDE.clearAutosavedValue(); // no returned value
EasyMDE is a continuation of SimpleMDE.
SimpleMDE began as an improvement of lepture's Editor project, but has now taken on an identity of its own. It is bundled with CodeMirror and depends on Font Awesome.
CodeMirror is the backbone of the project and parses much of the Markdown syntax as it's being written. This allows us to add styles to the Markdown that's being written. Additionally, a toolbar and status bar have been added to the top and bottom, respectively. Previews are rendered by Marked using GitHub Flavored Markdown (GFM).
I originally made this fork to implement FontAwesome 5 compatibility into SimpleMDE. When that was done I submitted a pull request, which has not been accepted yet. This, and the project being inactive since May 2017, triggered me to make more changes and try to put new life into the project.
Changes include:

- Links are now `https://` by default

My intention is to continue development on this project, improving it and keeping it alive.
You may want to edit this library to adapt its behavior to your needs. This can be done in a few quick steps: after making your changes, run the `gulp` command, which will generate the files `dist/easymde.min.css` and `dist/easymde.min.js`.
;Want to contribute to EasyMDE? Thank you! We have a contribution guide just for you!
Author: Ionaru
Source Code: https://github.com/Ionaru/easy-markdown-editor
License: MIT license
When your app is opened, there is a brief time while the native app loads Flutter. By default, during this time, the native app displays a white splash screen. This package automatically generates iOS, Android, and Web-native code for customizing this native splash screen background color and splash image. Supports dark mode, full screen, and platform-specific options.
[BETA] Support for flavors is in beta. Currently only Android and iOS are supported. See instructions below.
You can now keep the splash screen up while your app initializes! No need for a secondary splash screen anymore. Just use the preserve
and remove
methods together to remove the splash screen after your initialization is complete. See details below.
Would you prefer a video tutorial instead? Check out Johannes Milke's tutorial.
First, add flutter_native_splash
as a dependency in your pubspec.yaml file.
dependencies:
flutter_native_splash: ^2.2.19
Don't forget to flutter pub get
.
Customize the following settings and add to your project's pubspec.yaml
file or place in a new file in your root project folder named flutter_native_splash.yaml
.
flutter_native_splash:
  # This package generates native code to customize Flutter's default white native splash screen
  # with background color and splash image.
  # Customize the parameters below, and run the following command in the terminal:
  # flutter pub run flutter_native_splash:create
  # To restore Flutter's default white splash screen, run the following command in the terminal:
  # flutter pub run flutter_native_splash:remove

  # color or background_image is the only required parameter. Use color to set the background
  # of your splash screen to a solid color. Use background_image to set the background of your
  # splash screen to a png image. This is useful for gradients. The image will be stretched to
  # the size of the app. Only one parameter can be used; color and background_image cannot both be set.
  color: "#42a5f5"
  #background_image: "assets/background.png"

  # Optional parameters are listed below. To enable a parameter, uncomment the line by removing
  # the leading # character.

  # The image parameter allows you to specify an image used in the splash screen. It must be a
  # png file and should be sized for 4x pixel density.
  #image: assets/splash.png

  # The branding property allows you to specify an image used as branding in the splash screen.
  # It must be a png file. It is supported for Android, iOS and the Web. For Android 12,
  # see the Android 12 section below.
  #branding: assets/dart.png

  # To position the branding image at the bottom of the screen you can use bottom, bottomRight,
  # and bottomLeft. The default value is bottom if not specified or if something else is specified.
  #branding_mode: bottom

  # The color_dark, background_image_dark, image_dark, branding_dark are parameters that set the background
  # and image when the device is in dark mode. If they are not specified, the app will use the
  # parameters from above. If the image_dark parameter is specified, color_dark or
  # background_image_dark must be specified. color_dark and background_image_dark cannot both be
  # set.
  #color_dark: "#042a49"
  #background_image_dark: "assets/dark-background.png"
  #image_dark: assets/splash-invert.png
  #branding_dark: assets/dart_dark.png

  # Android 12 handles the splash screen differently than previous versions. Please visit
  # https://developer.android.com/guide/topics/ui/splash-screen
  # The following are Android 12 specific parameters.
  android_12:
    # The image parameter sets the splash screen icon image. If this parameter is not specified,
    # the app's launcher icon will be used instead.
    # Please note that the splash screen will be clipped to a circle on the center of the screen.
    # App icon with an icon background: This should be 960×960 pixels, and fit within a circle
    # 640 pixels in diameter.
    # App icon without an icon background: This should be 1152×1152 pixels, and fit within a circle
    # 768 pixels in diameter.
    #image: assets/android12splash.png

    # Splash screen background color.
    #color: "#42a5f5"

    # App icon background color.
    #icon_background_color: "#111111"

    # The branding property allows you to specify an image used as branding in the splash screen.
    #branding: assets/dart.png

    # The image_dark, color_dark, icon_background_color_dark, and branding_dark set values that
    # apply when the device is in dark mode. If they are not specified, the app will use the
    # parameters from above.
    #image_dark: assets/android12splash-invert.png
    #color_dark: "#042a49"
    #icon_background_color_dark: "#eeeeee"

  # The android, ios and web parameters can be used to disable generating a splash screen on a given
  # platform.
  #android: false
  #ios: false
  #web: false

  # Platform specific images can be specified with the following parameters, which will override
  # the respective parameter. You may specify all, selected, or none of these parameters:
  #color_android: "#42a5f5"
  #color_dark_android: "#042a49"
  #color_ios: "#42a5f5"
  #color_dark_ios: "#042a49"
  #color_web: "#42a5f5"
  #color_dark_web: "#042a49"
  #image_android: assets/splash-android.png
  #image_dark_android: assets/splash-invert-android.png
  #image_ios: assets/splash-ios.png
  #image_dark_ios: assets/splash-invert-ios.png
  #image_web: assets/splash-web.png
  #image_dark_web: assets/splash-invert-web.png
  #background_image_android: "assets/background-android.png"
  #background_image_dark_android: "assets/dark-background-android.png"
  #background_image_ios: "assets/background-ios.png"
  #background_image_dark_ios: "assets/dark-background-ios.png"
  #background_image_web: "assets/background-web.png"
  #background_image_dark_web: "assets/dark-background-web.png"
  #branding_android: assets/brand-android.png
  #branding_dark_android: assets/dart_dark-android.png
  #branding_ios: assets/brand-ios.png
  #branding_dark_ios: assets/dart_dark-ios.png

  # The position of the splash image can be set with android_gravity, ios_content_mode, and
  # web_image_mode parameters. All default to center.
  #
  # android_gravity can be one of the following Android Gravity (see
  # https://developer.android.com/reference/android/view/Gravity): bottom, center,
  # center_horizontal, center_vertical, clip_horizontal, clip_vertical, end, fill, fill_horizontal,
  # fill_vertical, left, right, start, or top.
  #android_gravity: center
  #
  # ios_content_mode can be one of the following iOS UIView.ContentMode (see
  # https://developer.apple.com/documentation/uikit/uiview/contentmode): scaleToFill,
  # scaleAspectFit, scaleAspectFill, center, top, bottom, left, right, topLeft, topRight,
  # bottomLeft, or bottomRight.
  #ios_content_mode: center
  #
  # web_image_mode can be one of the following modes: center, contain, stretch, and cover.
  #web_image_mode: center

  # The screen orientation can be set in Android with the android_screen_orientation parameter.
  # Valid parameters can be found here:
  # https://developer.android.com/guide/topics/manifest/activity-element#screen
  #android_screen_orientation: sensorLandscape

  # To hide the notification bar, use the fullscreen parameter. Has no effect in web since web
  # has no notification bar. Defaults to false.
  # NOTE: Unlike Android, iOS will not automatically show the notification bar when the app loads.
  # To show the notification bar, add the following code to your Flutter app:
  # WidgetsFlutterBinding.ensureInitialized();
  # SystemChrome.setEnabledSystemUIOverlays([SystemUiOverlay.bottom, SystemUiOverlay.top]);
  #fullscreen: true

  # If you have changed the name(s) of your info.plist file(s), you can specify the filename(s)
  # with the info_plist_files parameter. Remove only the # characters in the three lines below,
  # do not remove any spaces:
  #info_plist_files:
  #  - 'ios/Runner/Info-Debug.plist'
  #  - 'ios/Runner/Info-Release.plist'
After adding your settings, run the following command in the terminal:
flutter pub run flutter_native_splash:create
When the package finishes running, your splash screen is ready.
To specify the YAML file location just add --path with the command in the terminal:
flutter pub run flutter_native_splash:create --path=path/to/my/file.yaml
By default, the splash screen will be removed when Flutter has drawn the first frame. If you would like the splash screen to remain while your app initializes, you can use the preserve()
and remove()
methods together. Pass the preserve()
method the value returned from WidgetsFlutterBinding.ensureInitialized()
to keep the splash on screen. Later, when your app has initialized, make a call to remove()
to remove the splash screen.
import 'package:flutter/material.dart';
import 'package:flutter_native_splash/flutter_native_splash.dart';

void main() {
  WidgetsBinding widgetsBinding = WidgetsFlutterBinding.ensureInitialized();
  FlutterNativeSplash.preserve(widgetsBinding: widgetsBinding);
  runApp(const MyApp());
}

// whenever your initialization is completed, remove the splash screen:
FlutterNativeSplash.remove();
NOTE: If you do not need to use the preserve()
and remove()
methods, you can place the flutter_native_splash
dependency in the dev_dependencies
section of pubspec.yaml
.
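For instance, a minimal sketch of that placement in pubspec.yaml:

dev_dependencies:
  flutter_native_splash: ^2.2.19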
If you find this package useful, you can support it for free by giving it a thumbs up at the top of this page. Here's another option to support the package:
Android 12 has a new method of adding splash screens, which consists of a window background, icon, and the icon background. Note that a background image is not supported.
Be aware of the following considerations regarding these elements:
1. The `image` parameter. By default, the launcher icon is used.
2. The `icon_background_color` is optional, and is useful if you need more contrast between the icon and the window background.
3. One-third of the foreground is masked.
4. The `color` parameter: the window background consists of a single opaque color.
PLEASE NOTE: The splash screen may not appear when you launch the app from Android Studio on API 31. However, it should appear when you launch by clicking on the launch icon in Android. This seems to be resolved in API 32+.
PLEASE NOTE: There are a number of reports that non-Google launchers do not display the launch image correctly. If the launch image does not display correctly, please try the Google launcher to confirm that this package is working.
PLEASE NOTE: The splash screen does not appear when you launch the app from a notification. Apparently this is the intended behavior on Android 12: core-splashscreen Icon not shown when cold launched from notification.
If your project contains multiple flavors or environments and you have created more than one flavor, this feature is for you.
Instead of maintaining multiple files and copy/pasting images, you can now use this tool to create different splash screens for different environments.
In order to use the new feature and generate the desired splash images for your app, a couple of changes are required.
If you want to generate just one flavor and one file, you would use either option described in Step 1. But in order to set up the flavors, you will be required to move all your setup values to the flutter_native_splash.yaml file, but with a suffix.
Let's assume for the rest of the setup that you have 3 different flavors, Production
, Acceptance
, Development
.
The first thing you will need to do is to create a different setup file for all 3 flavors, with a suffix like so:
flutter_native_splash-production.yaml
flutter_native_splash-acceptance.yaml
flutter_native_splash-development.yaml
You would setup those 3 files the same way as you would the one, but with different assets depending on which environment you would be generating. For example (Note: these are just examples, you can use whatever setup you need for your project that is already supported by the package):
# flutter_native_splash-development.yaml
flutter_native_splash:
  color: "#ffffff"
  image: assets/logo-development.png
  branding: assets/branding-development.png
  color_dark: "#121212"
  image_dark: assets/logo-development.png
  branding_dark: assets/branding-development.png

  android_12:
    image: assets/logo-development.png
    icon_background_color: "#ffffff"
    image_dark: assets/logo-development.png
    icon_background_color_dark: "#121212"

  web: false
# flutter_native_splash-acceptance.yaml
flutter_native_splash:
  color: "#ffffff"
  image: assets/logo-acceptance.png
  branding: assets/branding-acceptance.png
  color_dark: "#121212"
  image_dark: assets/logo-acceptance.png
  branding_dark: assets/branding-acceptance.png

  android_12:
    image: assets/logo-acceptance.png
    icon_background_color: "#ffffff"
    image_dark: assets/logo-acceptance.png
    icon_background_color_dark: "#121212"

  web: false
# flutter_native_splash-production.yaml
flutter_native_splash:
  color: "#ffffff"
  image: assets/logo-production.png
  branding: assets/branding-production.png
  color_dark: "#121212"
  image_dark: assets/logo-production.png
  branding_dark: assets/branding-production.png

  android_12:
    image: assets/logo-production.png
    icon_background_color: "#ffffff"
    image_dark: assets/logo-production.png
    icon_background_color_dark: "#121212"

  web: false
Great, now comes the fun part: running the new command!
The new command is:
# If you have a flavor called production you would do this:
flutter pub run flutter_native_splash:create --flavor production
# For a flavor with the name staging, you would provide its name like so:
flutter pub run flutter_native_splash:create --flavor staging
# And if you have a local version for devs you could do that:
flutter pub run flutter_native_splash:create --flavor development
You're done! No, really, Android doesn't need any additional setup.
Note: If it didn't work, please make sure that your flavors are named the same as your config files, otherwise the setup will not work.
iOS is a bit tricky, so hang tight, it might look scary but most of the steps are just a single click, explained as much as possible to lower the possibility of mistakes.
When you run the new command, you will need to open xCode and follow the steps bellow:
Assumption: you already have the 3 schemes set up: production, acceptance, and development.

Preparation: the create command generates a separate launch-screen storyboard for each flavor in {project root}/ios/Runner/Base.lproj.

Xcode: Xcode still doesn't know how to use them, so we need to specify, for all the current flavors (schemes), which file to use and to use that value inside the Info.plist file. In practice, this means adding a user-defined build setting named LAUNCH_SCREEN_STORYBOARD with the matching storyboard name for each configuration, and then referencing it in Info.plist as $(LAUNCH_SCREEN_STORYBOARD).
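For illustration, this is roughly what the resulting Info.plist entry looks like; UILaunchStoryboardName is the standard iOS launch-storyboard key, and the exact Xcode menus you click through may differ slightly between Xcode versions:

<!-- ios/Runner/Info.plist: resolve the launch storyboard per scheme -->
<key>UILaunchStoryboardName</key>
<string>$(LAUNCH_SCREEN_STORYBOARD)</string>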
Congrats, you finished your setup for multiple flavors!
This message is not related to this package, but to a change in how Flutter handles splash screens as of Flutter 2.5. It is caused by having the following code in your android/app/src/main/AndroidManifest.xml, which was included by default in previous versions of Flutter:
<meta-data
android:name="io.flutter.embedding.android.SplashScreenDrawable"
android:resource="@drawable/launch_background"
/>
The solution is to remove the above code. Note that this will also remove the fade effect between the native splash screen and your app.
Not at this time. PRs are always welcome!
This attribute is only found in Android 12, so if you are getting this error, it means your project is not fully set up for Android 12. Did you update your app's build configuration?
This is caused by an iOS splash caching bug, which can be solved by uninstalling your app, powering off your device, powering it back on, and then reinstalling your app.
removeAfter method.

No. This package creates a splash screen that is displayed before Flutter is loaded. Because of this, when the splash screen loads, internal app settings are not available to the splash screen. Unfortunately, this means that it is impossible to control the light/dark setting of the splash screen from app settings.
Notes
If the splash screen was not updated correctly on iOS, or if you experience a white screen before the splash screen, run flutter clean and recompile your app. If that does not solve the problem, delete your app, power down the device, power the device back up, and then install and launch the app again, as per this StackOverflow thread.
This package modifies the launch_background.xml and styles.xml files on Android, LaunchScreen.storyboard and Info.plist on iOS, and index.html on Web. If you have modified these files manually, this plugin may not work properly. Please open an issue if you find any bugs.
Android:
- Your splash image will be resized to mdpi, hdpi, xhdpi, xxhdpi, and xxxhdpi drawables.
- An <item> tag containing a <bitmap> for your splash image drawable will be added in launch_background.xml.
- The background color will be added in colors.xml and referenced in launch_background.xml.
- Styling code will be added in styles.xml.
- Dark-mode variants will be added in the drawable-night, values-night, etc. resource folders.

iOS:
- Your splash image will be resized to @3x and @2x images.
- Color and image properties will be inserted in LaunchScreen.storyboard.
- Related settings will be added in Info.plist.

Web:
- A web/splash folder will be created for splash screen images and CSS files.
- Splash images will be resized to 1x, 2x, 3x, and 4x sizes and placed in web/splash/img.
- Code will be added to web/index.html, as well as the HTML for the splash pictures.

This package was originally created by Henrique Arthur and is currently maintained by Jon Hanson.
If you encounter any problems, feel free to open an issue. If you feel the library is missing a feature, please raise a ticket. Pull requests are also welcome.
Run this command:
With Flutter:
$ flutter pub add flutter_native_splash
This will add a line like this to your package's pubspec.yaml (and run an implicit flutter pub get):
dependencies:
  flutter_native_splash: ^2.2.19
Alternatively, your editor might support flutter pub get. Check the docs for your editor to learn more.
Now in your Dart code, you can use:
import 'package:flutter_native_splash/flutter_native_splash.dart';
import 'package:flutter/material.dart';
import 'package:flutter_native_splash/flutter_native_splash.dart';

void main() {
  WidgetsBinding widgetsBinding = WidgetsFlutterBinding.ensureInitialized();
  FlutterNativeSplash.preserve(widgetsBinding: widgetsBinding);
  runApp(const MyApp());
}

class MyApp extends StatelessWidget {
  const MyApp({super.key});

  // This widget is the root of your application.
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      title: 'Flutter Demo',
      theme: ThemeData(
        // This is the theme of your application.
        //
        // Try running your application with "flutter run". You'll see the
        // application has a blue toolbar. Then, without quitting the app, try
        // changing the primarySwatch below to Colors.green and then invoke
        // "hot reload" (press "r" in the console where you ran "flutter run",
        // or simply save your changes to "hot reload" in a Flutter IDE).
        // Notice that the counter didn't reset back to zero; the application
        // is not restarted.
        primarySwatch: Colors.blue,
      ),
      home: const MyHomePage(title: 'Flutter Demo Home Page'),
    );
  }
}

class MyHomePage extends StatefulWidget {
  const MyHomePage({super.key, required this.title});

  // This widget is the home page of your application. It is stateful, meaning
  // that it has a State object (defined below) that contains fields that affect
  // how it looks.
  //
  // This class is the configuration for the state. It holds the values (in this
  // case the title) provided by the parent (in this case the App widget) and
  // used by the build method of the State. Fields in a Widget subclass are
  // always marked "final".

  final String title;

  @override
  State<MyHomePage> createState() => _MyHomePageState();
}

class _MyHomePageState extends State<MyHomePage> {
  int _counter = 0;

  void _incrementCounter() {
    setState(() {
      // This call to setState tells the Flutter framework that something has
      // changed in this State, which causes it to rerun the build method below
      // so that the display can reflect the updated values. If we changed
      // _counter without calling setState(), then the build method would not be
      // called again, and so nothing would appear to happen.
      _counter++;
    });
  }

  @override
  void initState() {
    super.initState();
    initialization();
  }

  void initialization() async {
    // This is where you can initialize the resources needed by your app while
    // the splash screen is displayed. Remove the following example because
    // delaying the user experience is a bad design practice!
    // ignore_for_file: avoid_print
    print('ready in 3...');
    await Future.delayed(const Duration(seconds: 1));
    print('ready in 2...');
    await Future.delayed(const Duration(seconds: 1));
    print('ready in 1...');
    await Future.delayed(const Duration(seconds: 1));
    print('go!');
    FlutterNativeSplash.remove();
  }

  @override
  Widget build(BuildContext context) {
    // This method is rerun every time setState is called, for instance as done
    // by the _incrementCounter method above.
    //
    // The Flutter framework has been optimized to make rerunning build methods
    // fast, so that you can just rebuild anything that needs updating rather
    // than having to individually change instances of widgets.
    return Scaffold(
      appBar: AppBar(
        // Here we take the value from the MyHomePage object that was created by
        // the App.build method, and use it to set our appbar title.
        title: Text(widget.title),
      ),
      body: Center(
        // Center is a layout widget. It takes a single child and positions it
        // in the middle of the parent.
        child: Column(
          // Column is also a layout widget. It takes a list of children and
          // arranges them vertically. By default, it sizes itself to fit its
          // children horizontally, and tries to be as tall as its parent.
          //
          // Invoke "debug painting" (press "p" in the console, choose the
          // "Toggle Debug Paint" action from the Flutter Inspector in Android
          // Studio, or the "Toggle Debug Paint" command in Visual Studio Code)
          // to see the wireframe for each widget.
          //
          // Column has various properties to control how it sizes itself and
          // how it positions its children. Here we use mainAxisAlignment to
          // center the children vertically; the main axis here is the vertical
          // axis because Columns are vertical (the cross axis would be
          // horizontal).
          mainAxisAlignment: MainAxisAlignment.center,
          children: <Widget>[
            const Text(
              'You have pushed the button this many times:',
            ),
            Text(
              '$_counter',
              style: Theme.of(context).textTheme.headlineMedium,
            ),
          ],
        ),
      ),
      floatingActionButton: FloatingActionButton(
        onPressed: _incrementCounter,
        tooltip: 'Increment',
        child: const Icon(Icons.add),
      ), // This trailing comma makes auto-formatting nicer for build methods.
    );
  }
}
Author: jonbhanson
Download Link: Download The Source Code
Official Website: https://github.com/jonbhanson/flutter_native_splash
License: MIT license
Perl script converts PDF files to Gerber format
Pdf2Gerb generates Gerber 274X photoplotting and Excellon drill files from PDFs of a PCB. Up to three PDFs are used: the top copper layer, the bottom copper layer (for 2-sided PCBs), and an optional silk screen layer. The PDFs can be created directly from any PDF drawing software, or a PDF print driver can be used to capture the Print output if the drawing software does not directly support output to PDF.
The general workflow is as follows: design the PCB in your drawing or CAD software, print or save each layer to PDF, run Pdf2Gerb to convert the PDFs into Gerber and Excellon files, inspect the output in a Gerber viewer, and then submit the checked files to your PCB manufacturer.
Please note that Pdf2Gerb does NOT perform DRC (Design Rule Checks), as these will vary according to individual PCB manufacturer conventions and capabilities. Also note that Pdf2Gerb is not perfect, so the output files must always be checked before submitting them. As of version 1.6, Pdf2Gerb supports most PCB elements, such as round and square pads, round holes, traces, SMD pads, ground planes, no-fill areas, and panelization. However, because it interprets the graphical output of a Print function, there are limitations in what it can recognize (or there may be bugs).
See docs/Pdf2Gerb.pdf for install/setup, config, usage, and other info.
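A typical conversion run simply passes the layer PDFs to the script. The exact usage is documented in docs/Pdf2Gerb.pdf; the file names and argument order below are illustrative assumptions, not authoritative:

perl pdf2gerb.pl top-copper.pdf bottom-copper.pdf silkscreen.pdf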
#Pdf2Gerb config settings:
#Put this file in same folder/directory as pdf2gerb.pl itself (global settings),
#or copy to another folder/directory with PDFs if you want PCB-specific settings.
#There is only one user of this file, so we don't need a custom package or namespace.
#NOTE: all constants defined in here will be added to main namespace.
#package pdf2gerb_cfg;
use strict; #trap undef vars (easier debug)
use warnings; #other useful info (easier debug)
##############################################################################################
#configurable settings:
#change values here instead of in main pdf2gerb.pl file
use constant WANT_COLORS => ($^O !~ m/Win/); #ANSI colors don't work on Windows? must be set before the first DebugPrint() call
#just a little warning; set realistic expectations:
#DebugPrint("${\(CYAN)}Pdf2Gerb.pl ${\(VERSION)}, $^O O/S\n${\(YELLOW)}${\(BOLD)}${\(ITALIC)}This is EXPERIMENTAL software. \nGerber files MAY CONTAIN ERRORS. Please CHECK them before fabrication!${\(RESET)}", 0); #if WANT_DEBUG
use constant METRIC => FALSE; #set to TRUE for metric units (only affects final numbers in output files, not internal arithmetic)
use constant APERTURE_LIMIT => 0; #34; #max #apertures to use; generate warnings if too many apertures are used (0 to not check)
use constant DRILL_FMT => '2.4'; #'2.3'; #'2.4' is the default for PCB fab; change to '2.3' for CNC
use constant WANT_DEBUG => 0; #10; #level of debug wanted; higher == more, lower == less, 0 == none
use constant GERBER_DEBUG => 0; #level of debug to include in Gerber file; DON'T USE FOR FABRICATION
use constant WANT_STREAMS => FALSE; #TRUE; #save decompressed streams to files (for debug)
use constant WANT_ALLINPUT => FALSE; #TRUE; #save entire input stream (for debug ONLY)
#DebugPrint(sprintf("${\(CYAN)}DEBUG: stdout %d, gerber %d, want streams? %d, all input? %d, O/S: $^O, Perl: $]${\(RESET)}\n", WANT_DEBUG, GERBER_DEBUG, WANT_STREAMS, WANT_ALLINPUT), 1);
#DebugPrint(sprintf("max int = %d, min int = %d\n", MAXINT, MININT), 1);
#define standard trace and pad sizes to reduce scaling or PDF rendering errors:
#This avoids weird aperture settings and replaces them with more standardized values.
#(I'm not sure how photoplotters handle strange sizes).
#Fewer choices here gives more accurate mapping in the final Gerber files.
#units are in inches
use constant TOOL_SIZES => #add more as desired
(
#round or square pads (> 0) and drills (< 0):
.010, -.001, #tiny pads for SMD; dummy drill size (too small for practical use, but needed so StandardTool will use this entry)
.031, -.014, #used for vias
.041, -.020, #smallest non-filled plated hole
.051, -.025,
.056, -.029, #useful for IC pins
.070, -.033,
.075, -.040, #heavier leads
# .090, -.043, #NOTE: 600 dpi is not high enough resolution to reliably distinguish between .043" and .046", so choose 1 of the 2 here
.100, -.046,
.115, -.052,
.130, -.061,
.140, -.067,
.150, -.079,
.175, -.088,
.190, -.093,
.200, -.100,
.220, -.110,
.160, -.125, #useful for mounting holes
#some additional pad sizes without holes (repeat a previous hole size if you just want the pad size):
.090, -.040, #want a .090 pad option, but use dummy hole size
.065, -.040, #.065 x .065 rect pad
.035, -.040, #.035 x .065 rect pad
#traces:
.001, #too thin for real traces; use only for board outlines
.006, #minimum real trace width; mainly used for text
.008, #mainly used for mid-sized text, not traces
.010, #minimum recommended trace width for low-current signals
.012,
.015, #moderate low-voltage current
.020, #heavier trace for power, ground (even if a lighter one is adequate)
.025,
.030, #heavy-current traces; be careful with these ones!
.040,
.050,
.060,
.080,
.100,
.120,
);
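#To make the mapping idea above concrete, here is a small illustrative helper
#(NOT part of pdf2gerb itself; the name and approach are hypothetical) that
#snaps a parsed pad diameter to the nearest standard pad entry:
use List::Util qw(reduce);
sub nearest_pad
{
    my ($size) = @_; #parsed pad diameter, in inches
    my @pads = grep { $_ > 0 } (TOOL_SIZES); #pads are the positive entries
    return reduce { abs($a - $size) < abs($b - $size) ? $a : $b } @pads;
}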
#Areas larger than the values below will be filled with parallel lines:
#This cuts down on the number of aperture sizes used.
#Set to 0 to always use an aperture or drill, regardless of size.
use constant { MAX_APERTURE => max((TOOL_SIZES)) + .004, MAX_DRILL => -min((TOOL_SIZES)) + .004 }; #max aperture and drill sizes (plus a little tolerance)
#DebugPrint(sprintf("using %d standard tool sizes: %s, max aper %.3f, max drill %.3f\n", scalar((TOOL_SIZES)), join(", ", (TOOL_SIZES)), MAX_APERTURE, MAX_DRILL), 1);
#NOTE: Compare the PDF to the original CAD file to check the accuracy of the PDF rendering and parsing!
#for example, the CAD software I used generated the following circles for holes:
#CAD hole size: parsed PDF diameter: error:
# .014 .016 +.002
# .020 .02267 +.00267
# .025 .026 +.001
# .029 .03167 +.00267
# .033 .036 +.003
# .040 .04267 +.00267
#This was usually ~ .002" - .003" too big compared to the hole as displayed in the CAD software.
#To compensate for PDF rendering errors (either during CAD Print function or PDF parsing logic), adjust the values below as needed.
#units are pixels; for example, a value of 2.4 at 600 dpi = .004 inch, and 2 at 600 dpi ~= .0033 inch
use constant
{
HOLE_ADJUST => -0.004 * 600, #-2.6, #holes seemed to be slightly oversized (by .002" - .004"), so shrink them a little
RNDPAD_ADJUST => -0.003 * 600, #-2, #-2.4, #round pads seemed to be slightly oversized, so shrink them a little
SQRPAD_ADJUST => +0.001 * 600, #+.5, #square pads are sometimes too small by .00067, so bump them up a little
RECTPAD_ADJUST => 0, #(pixels) rectangular pads seem to be okay? (not tested much)
TRACE_ADJUST => 0, #(pixels) traces seemed to be okay?
REDUCE_TOLERANCE => .001, #(inches) allow this much variation when reducing circles and rects
};
#Also, my CAD's Print function or the PDF print driver I used was a little off for circles, so define some additional adjustment values here:
#Values are added to X/Y coordinates; units are pixels; for example, a value of 1 at 600 dpi would be ~= .002 inch
use constant
{
CIRCLE_ADJUST_MINX => 0,
CIRCLE_ADJUST_MINY => -0.001 * 600, #-1, #circles were a little too high, so nudge them a little lower
CIRCLE_ADJUST_MAXX => +0.001 * 600, #+1, #circles were a little too far to the left, so nudge them a little to the right
CIRCLE_ADJUST_MAXY => 0,
SUBST_CIRCLE_CLIPRECT => FALSE, #generate circle and substitute for clip rects (to compensate for the way some CAD software draws circles)
WANT_CLIPRECT => TRUE, #FALSE, #AI doesn't need clip rect at all? should be on normally?
RECT_COMPLETION => FALSE, #TRUE, #fill in 4th side of rect when 3 sides found
};
#allow .012 clearance around pads for solder mask:
#This value effectively adjusts pad sizes in the TOOL_SIZES list above (only for solder mask layers).
use constant SOLDER_MARGIN => +.012; #units are inches
#line join/cap styles:
use constant
{
CAP_NONE => 0, #butt (none); line is exact length
CAP_ROUND => 1, #round cap/join; line overhangs by a semi-circle at either end
CAP_SQUARE => 2, #square cap/join; line overhangs by a half square on either end
CAP_OVERRIDE => FALSE, #cap style overrides drawing logic
};
#number of elements in each shape type:
use constant
{
RECT_SHAPELEN => 6, #x0, y0, x1, y1, count, "rect" (start, end corners)
LINE_SHAPELEN => 6, #x0, y0, x1, y1, count, "line" (line seg)
CURVE_SHAPELEN => 10, #xstart, ystart, x0, y0, x1, y1, xend, yend, count, "curve" (bezier 2 points)
CIRCLE_SHAPELEN => 5, #x, y, radius, count, "circle" (center + radius)
};
#const my %SHAPELEN =
#Readonly my %SHAPELEN =>
our %SHAPELEN =
(
rect => RECT_SHAPELEN,
line => LINE_SHAPELEN,
curve => CURVE_SHAPELEN,
circle => CIRCLE_SHAPELEN,
);
#panelization:
#This will repeat the entire body the number of times indicated along the X or Y axes (files grow accordingly).
#Display elements that overhang PCB boundary can be squashed or left as-is (typically text or other silk screen markings).
#Set "overhangs" TRUE to allow overhangs, FALSE to truncate them.
#xpad and ypad allow margins to be added around outer edge of panelized PCB.
use constant PANELIZE => {'x' => 1, 'y' => 1, 'xpad' => 0, 'ypad' => 0, 'overhangs' => TRUE}; #number of times to repeat in X and Y directions
# Set this to 1 if you need TurboCAD support.
#$turboCAD = FALSE; #is this still needed as an option?
#CIRCAD pad generation uses an appropriate aperture, then moves it (stroke) "a little" - we use this to find pads and distinguish them from PCB holes.
use constant PAD_STROKE => 0.3; #0.0005 * 600; #units are pixels
#convert very short traces to pads or holes:
use constant TRACE_MINLEN => .001; #units are inches
#use constant ALWAYS_XY => TRUE; #FALSE; #force XY even if X or Y doesn't change; NOTE: needs to be TRUE for all pads to show in FlatCAM and ViewPlot
use constant REMOVE_POLARITY => FALSE; #TRUE; #set to remove subtractive (negative) polarity; NOTE: must be FALSE for ground planes
#PDF uses "points", each point = 1/72 inch
#combined with a PDF scale factor of .12, this gives 600 dpi resolution (72 points per inch / .12 points per pixel = 600 dpi)
use constant INCHES_PER_POINT => 1/72; #0.0138888889; #multiply point-size by this to get inches
# The precision used when computing a bezier curve. Higher numbers are more precise but slower (and generate larger files).
#$bezierPrecision = 100;
use constant BEZIER_PRECISION => 36; #100; #use const; reduced for faster rendering (mainly used for silk screen and thermal pads)
# Ground planes and silk screen or larger copper rectangles or circles are filled line-by-line using this resolution.
use constant FILL_WIDTH => .01; #fill at most 0.01 inch at a time
# The max number of characters to read into memory
use constant MAX_BYTES => 10 * M; #bumped up to 10 MB, use const
use constant DUP_DRILL1 => TRUE; #FALSE; #kludge: ViewPlot doesn't load drill files that are too small so duplicate first tool
my $runtime = time(); #Time::HiRes::gettimeofday(); #measure my execution time
print STDERR "Loaded config settings from '${\(__FILE__)}'.\n";
1; #last value must be truthful to indicate successful load
#############################################################################################
#junk/experiment:
#use Package::Constants;
#use Exporter qw(import); #https://perldoc.perl.org/Exporter.html
#my $caller = "pdf2gerb::";
#sub cfg
#{
# my $proto = shift;
# my $class = ref($proto) || $proto;
# my $settings =
# {
# $WANT_DEBUG => 990, #10; #level of debug wanted; higher == more, lower == less, 0 == none
# };
# bless($settings, $class);
# return $settings;
#}
#use constant HELLO => "hi there2"; #"main::HELLO" => "hi there";
#use constant GOODBYE => 14; #"main::GOODBYE" => 12;
#print STDERR "read cfg file\n";
#our @EXPORT_OK = Package::Constants->list(__PACKAGE__); #https://www.perlmonks.org/?node_id=1072691; NOTE: "_OK" skips short/common names
#print STDERR scalar(@EXPORT_OK) . " consts exported:\n";
#foreach(@EXPORT_OK) { print STDERR "$_\n"; }
#my $val = main::thing("xyz");
#print STDERR "caller gave me $val\n";
#foreach my $arg (@ARGV) { print STDERR "arg $arg\n"; }
Author: swannman
Source Code: https://github.com/swannman/pdf2gerb
License: GPL-3.0 license
Computer Vision is a wide deep learning field with enormous applications, and Image Generation is one of its most curious applications. Image Generation itself encompasses a great collection of tasks, a few of which can even outperform humans. Most image generation tasks are common to videos, too, since a video is a sequence of images.
A few popular Image Generation tasks are image synthesis, image super-resolution, and deepfake generation.
One deep learning generative model can perform one or more of these tasks with a few configuration changes. Some famous image generative models are the original versions and the numerous variants of the Variational Autoencoder (VAE) and Generative Adversarial Networks (GANs).
This article discusses the concepts behind image generation and the code implementation of a Variational Autoencoder with a practical example using TensorFlow Keras. TensorFlow is one of the top preferred frameworks for deep learning, and Keras is a high-level API built on top of TensorFlow, meant exclusively for deep learning.
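To give a flavor of what such an implementation involves, here is a minimal sketch of the heart of a VAE in TensorFlow Keras, the reparameterization trick that samples a latent vector in a differentiable way (illustrative only, not the article's full implementation; the layer name Sampling is an arbitrary choice):

import tensorflow as tf
from tensorflow import keras

# Reparameterization trick: z = mean + exp(0.5 * log_var) * epsilon, where
# epsilon is standard Gaussian noise, keeping the sampling step differentiable.
class Sampling(keras.layers.Layer):
    def call(self, inputs):
        z_mean, z_log_var = inputs
        epsilon = tf.random.normal(shape=tf.shape(z_mean))
        return z_mean + tf.exp(0.5 * z_log_var) * epsilon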
The following articles may fulfil the prerequisites by giving an understanding of deep learning and computer vision.
Keras and TensorFlow are two very popular deep learning frameworks, widely used by deep learning practitioners. Both frameworks have large community support, and both capture a major fraction of deep learning production.
Which framework is better for us, then?
This blog focuses on Keras vs. TensorFlow. There are differences between Keras and TensorFlow that will help you choose between the two, and we will give you better insights into both frameworks.
Keras is a high-level API built on top of a backend engine, which may be TensorFlow, Theano, or CNTK. It makes it easy to build neural networks without worrying about the backend implementation of tensors and optimization methods.
Fast prototyping allows for more experiments: using Keras, developers can convert their algorithms into results in less time. It provides an abstraction over lower-level computations.
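As a small sketch of that fast prototyping (the input shape, layer sizes, and 10-class output below are illustrative choices, not from this post), a working classifier can be defined and compiled in a few lines:

from tensorflow import keras

# A tiny image classifier: define, compile, and it is ready to train.
model = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28)),
    keras.layers.Dense(128, activation="relu"),
    keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()  # the backend engine handles tensors and optimization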
TensorFlow is a tool designed by Google for the deep learning developer community. The aim of TensorFlow was to make deep learning applications accessible to everyone. It is an open-source library available on GitHub, and it is one of the most popular libraries for experimenting with deep learning. The popularity of TensorFlow comes from the ease of building and deploying neural network models.
Its major area of focus is numerical computation. It was built with computational processing power in mind; therefore, TensorFlow applications can run on almost any kind of computer.
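For a sense of that numerical focus (the values here are arbitrary examples), the same few lines of tensor arithmetic run unchanged on CPU, GPU, or TPU:

import tensorflow as tf

# Basic tensor operations; TensorFlow places them on whatever device is available.
a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.constant([[5.0, 6.0], [7.0, 8.0]])
print(tf.matmul(a, b))       # matrix multiplication
print(tf.reduce_sum(a * b))  # element-wise multiply, then sum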