In this Keras tutorial, we learn how to use image augmentation for deep learning with Keras. Data preparation is required when working with neural networks and deep learning models. Increasingly, data augmentation is also required on more complex object recognition tasks.
In this post, you will discover how to use data preparation and data augmentation with your image datasets when developing and evaluating deep learning models in Python with Keras.
After reading this post, you will know:
Kick-start your project with my new book Deep Learning With Python, including step-by-step tutorials and the Python source code files for all examples.
Let’s get started.
Like the rest of Keras, the image augmentation API is simple and powerful.
Keras provides the ImageDataGenerator class that defines the configuration for image data preparation and augmentation. This includes capabilities such as:
An augmented image generator can be created as follows:
from tensorflow.keras.preprocessing.image import ImageDataGenerator
datagen = ImageDataGenerator()
Rather than performing the operations on your entire image dataset in memory, the API is designed to be iterated by the deep learning model fitting process, creating augmented image data for you just in time. This reduces your memory overhead but adds some additional time cost during model training.
After you have created and configured your ImageDataGenerator, you must fit it on your data. This will calculate any statistics required to actually perform the transforms to your image data. You can do this by calling the fit() function on the data generator and passing it your training dataset.
datagen.fit(train)
The data generator itself is, in fact, an iterator, returning batches of image samples when requested. You can configure the batch size and prepare the data generator and get batches of images by calling the flow() function.
X_batch, y_batch = next(datagen.flow(train, train, batch_size=32))
Finally, you can make use of the data generator. Instead of calling the fit() function on your model, you must call the fit_generator() function and pass in the data generator and the desired length of an epoch as well as the total number of epochs on which to train.
model.fit_generator(datagen.flow(train, train, batch_size=32), samples_per_epoch=len(train), epochs=100)
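Note that fit_generator() and the samples_per_epoch argument belong to an older Keras API; in recent versions of TensorFlow Keras, fit_generator() is deprecated and you pass the generator directly to fit(). Below is a minimal sketch of the equivalent modern call, assuming a compiled model and NumPy arrays X_train and y_train, which are not defined in the snippet above:
from tensorflow.keras.preprocessing.image import ImageDataGenerator
# assumed to exist: a compiled Keras `model` and NumPy arrays X_train, y_train
datagen = ImageDataGenerator()
datagen.fit(X_train)  # only needed for transforms that require dataset statistics
model.fit(datagen.flow(X_train, y_train, batch_size=32),
          steps_per_epoch=len(X_train) // 32,
          epochs=100)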
You can learn more about the Keras image data generator API in the Keras documentation.
Now that you know how the image augmentation API in Keras works, let’s look at some examples.
We will use the MNIST handwritten digit recognition task in these examples. To begin with, let’s take a look at the first nine images in the training dataset.
# Plot images
from tensorflow.keras.datasets import mnist
import matplotlib.pyplot as plt
# load data
(X_train, y_train), (X_test, y_test) = mnist.load_data()
# create a grid of 3x3 images
fig, ax = plt.subplots(3, 3, sharex=True, sharey=True, figsize=(4,4))
for i in range(3):
    for j in range(3):
        ax[i][j].imshow(X_train[i*3+j], cmap=plt.get_cmap("gray"))
# show the plot
plt.show()
Running this example provides the following image that you can use as a point of comparison with the image preparation and augmentation in the examples below.
Example MNIST images
It is also possible to standardize pixel values across the entire dataset. This is called feature standardization and mirrors the type of standardization often performed for each column in a tabular dataset.
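To make the analogy concrete, here is a small NumPy sketch (an illustration added here, not from the original post) contrasting column-wise standardization of a tabular array with per-pixel standardization of an image array; the array names are arbitrary:
import numpy as np
# tabular data: standardize each column (feature) across rows (samples)
table = np.random.rand(100, 5)
table_std = (table - table.mean(axis=0)) / table.std(axis=0)
# image data: standardize each pixel position across the sample dimension
images = np.random.rand(100, 28, 28, 1)
images_std = (images - images.mean(axis=0)) / images.std(axis=0)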
You can perform feature standardization by setting the featurewise_center
and featurewise_std_normalization
arguments to True on the ImageDataGenerator
class. These are set to False by default. However, recent versions of Keras have a bug in the feature standardization such that the mean and standard deviation are calculated across all pixels. If you use the fit()
function from the ImageDataGenerator
class, you will see an image similar to the one above:
# Standardize images across the dataset, mean=0, stdev=1
from tensorflow.keras.datasets import mnist
from tensorflow.keras.preprocessing.image import ImageDataGenerator
import matplotlib.pyplot as plt
# load data
(X_train, y_train), (X_test, y_test) = mnist.load_data()
# reshape to be [samples][width][height][channels]
X_train = X_train.reshape((X_train.shape[0], 28, 28, 1))
X_test = X_test.reshape((X_test.shape[0], 28, 28, 1))
# convert from int to float
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
# define data preparation
datagen = ImageDataGenerator(featurewise_center=True, featurewise_std_normalization=True)
# fit parameters from data
datagen.fit(X_train)
# configure batch size and retrieve one batch of images
for X_batch, y_batch in datagen.flow(X_train, y_train, batch_size=9, shuffle=False):
    print(X_batch.min(), X_batch.mean(), X_batch.max())
    # create a grid of 3x3 images
    fig, ax = plt.subplots(3, 3, sharex=True, sharey=True, figsize=(4,4))
    for i in range(3):
        for j in range(3):
            ax[i][j].imshow(X_batch[i*3+j].reshape(28,28), cmap=plt.get_cmap("gray"))
    # show the plot
    plt.show()
    break
For example, the minimum, mean, and maximum values from the batch printed above are:
-0.42407447 -0.04093817 2.8215446
And the image displayed is as follows:
Image from feature-wise standardization
The workaround is to compute the feature standardization manually. Each pixel should have a separate mean and standard deviation, computed across the different samples but independently of the other pixels in the same sample. You just need to replace the fit()
function with your own computation:
# Standardize images across the dataset, every pixel has mean=0, stdev=1
from tensorflow.keras.datasets import mnist
from tensorflow.keras.preprocessing.image import ImageDataGenerator
import matplotlib.pyplot as plt
# load data
(X_train, y_train), (X_test, y_test) = mnist.load_data()
# reshape to be [samples][width][height][channels]
X_train = X_train.reshape((X_train.shape[0], 28, 28, 1))
X_test = X_test.reshape((X_test.shape[0], 28, 28, 1))
# convert from int to float
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
# define data preparation
datagen = ImageDataGenerator(featurewise_center=True, featurewise_std_normalization=True)
# fit parameters from data
datagen.mean = X_train.mean(axis=0)
datagen.std = X_train.std(axis=0)
# configure batch size and retrieve one batch of images
for X_batch, y_batch in datagen.flow(X_train, y_train, batch_size=9, shuffle=False):
    print(X_batch.min(), X_batch.mean(), X_batch.max())
    # create a grid of 3x3 images
    fig, ax = plt.subplots(3, 3, sharex=True, sharey=True, figsize=(4,4))
    for i in range(3):
        for j in range(3):
            ax[i][j].imshow(X_batch[i*3+j].reshape(28,28), cmap=plt.get_cmap("gray"))
    # show the plot
    plt.show()
    break
The minimum, mean, and maximum as printed now have a wider range:
-1.2742625 -0.028436039 17.46127
Running this example, you can see that the effect is different, seemingly darkening and lightening different digits.
A whitening transform of an image is a linear algebraic operation that reduces the redundancy in the matrix of pixel images.
Less redundancy in the image is intended to better highlight the structures and features in the image to the learning algorithm.
Typically, image whitening is performed using the Principal Component Analysis (PCA) technique. More recently, an alternative called ZCA (learn more in Appendix A of this tech report) shows better results: the transformed images keep all of the original dimensions, and, unlike PCA, they still look like their originals. Precisely, whitening converts each image into a white noise vector, i.e., each element in the vector has zero mean and unit standard deviation and is statistically independent of the others.
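For intuition, here is a minimal NumPy sketch of the underlying ZCA computation on flattened images (an added illustration under simplifying assumptions, not the Keras implementation); eps is a small constant for numerical stability:
import numpy as np
def zca_whiten(X, eps=1e-5):
    # X: one flattened image per row
    X = X - X.mean(axis=0)                          # zero-center each pixel
    cov = np.cov(X, rowvar=False)                   # pixel covariance matrix
    U, S, _ = np.linalg.svd(cov)                    # eigendecomposition via SVD
    W = U @ np.diag(1.0 / np.sqrt(S + eps)) @ U.T   # ZCA whitening matrix
    return X @ W
# demo on random data standing in for flattened 28x28 images
demo = np.random.rand(100, 28 * 28)
whitened = zca_whiten(demo)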
You can perform a ZCA whitening transform by setting the zca_whitening
argument to True. But due to the same issue as feature standardization, you must first zero-center your input data separately:
# ZCA Whitening
from tensorflow.keras.datasets import mnist
from tensorflow.keras.preprocessing.image import ImageDataGenerator
import matplotlib.pyplot as plt
# load data
(X_train, y_train), (X_test, y_test) = mnist.load_data()
# reshape to be [samples][width][height][channels]
X_train = X_train.reshape((X_train.shape[0], 28, 28, 1))
X_test = X_test.reshape((X_test.shape[0], 28, 28, 1))
# convert from int to float
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
# define data preparation
datagen = ImageDataGenerator(featurewise_center=True, featurewise_std_normalization=True, zca_whitening=True)
# fit parameters from data
X_mean = X_train.mean(axis=0)
datagen.fit(X_train - X_mean)
# configure batch size and retrieve one batch of images
for X_batch, y_batch in datagen.flow(X_train - X_mean, y_train, batch_size=9, shuffle=False):
    print(X_batch.min(), X_batch.mean(), X_batch.max())
    # create a grid of 3x3 images
    fig, ax = plt.subplots(3, 3, sharex=True, sharey=True, figsize=(4,4))
    for i in range(3):
        for j in range(3):
            ax[i][j].imshow(X_batch[i*3+j].reshape(28,28), cmap=plt.get_cmap("gray"))
    # show the plot
    plt.show()
    break
Running the example, you can see the same general structure in the images and how the outline of each digit has been highlighted.
ZCA whitening MNIST images
Sometimes, images in your sample data may have varying rotations in the scene.
You can train your model to better handle rotations of images by artificially and randomly rotating images from your dataset during training.
The example below creates random rotations of the MNIST digits up to 90 degrees by setting the rotation_range argument.
# Random Rotations
from tensorflow.keras.datasets import mnist
from tensorflow.keras.preprocessing.image import ImageDataGenerator
import matplotlib.pyplot as plt
# load data
(X_train, y_train), (X_test, y_test) = mnist.load_data()
# reshape to be [samples][width][height][channels]
X_train = X_train.reshape((X_train.shape[0], 28, 28, 1))
X_test = X_test.reshape((X_test.shape[0], 28, 28, 1))
# convert from int to float
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
# define data preparation
datagen = ImageDataGenerator(rotation_range=90)
# configure batch size and retrieve one batch of images
for X_batch, y_batch in datagen.flow(X_train, y_train, batch_size=9, shuffle=False):
    # create a grid of 3x3 images
    fig, ax = plt.subplots(3, 3, sharex=True, sharey=True, figsize=(4,4))
    for i in range(3):
        for j in range(3):
            ax[i][j].imshow(X_batch[i*3+j].reshape(28,28), cmap=plt.get_cmap("gray"))
    # show the plot
    plt.show()
    break
Running the example, you can see that images have been rotated left and right up to a limit of 90 degrees. This is not helpful on this problem because the MNIST digits have a normalized orientation, but this transform might be of help when learning from photographs where the objects may have different orientations.
Random rotations of MNIST images
Objects in your images may not be centered in the frame. They may be off-center in a variety of different ways.
You can train your deep learning network to expect and correctly handle off-center objects by artificially creating shifted versions of your training data. Keras supports separate horizontal and vertical random shifting of training data by the width_shift_range
and height_shift_range
arguments.
# Random Shifts
from tensorflow.keras.datasets import mnist
from tensorflow.keras.preprocessing.image import ImageDataGenerator
import matplotlib.pyplot as plt
# load data
(X_train, y_train), (X_test, y_test) = mnist.load_data()
# reshape to be [samples][width][height][channels]
X_train = X_train.reshape((X_train.shape[0], 28, 28, 1))
X_test = X_test.reshape((X_test.shape[0], 28, 28, 1))
# convert from int to float
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
# define data preparation
shift = 0.2
datagen = ImageDataGenerator(width_shift_range=shift, height_shift_range=shift)
# configure batch size and retrieve one batch of images
for X_batch, y_batch in datagen.flow(X_train, y_train, batch_size=9, shuffle=False):
    # create a grid of 3x3 images
    fig, ax = plt.subplots(3, 3, sharex=True, sharey=True, figsize=(4,4))
    for i in range(3):
        for j in range(3):
            ax[i][j].imshow(X_batch[i*3+j].reshape(28,28), cmap=plt.get_cmap("gray"))
    # show the plot
    plt.show()
    break
Running this example creates shifted versions of the digits. Again, this is not required for MNIST as the handwritten digits are already centered, but you can see how this might be useful on more complex problem domains.
Random shifted MNIST images
Another augmentation to your image data that can improve performance on large and complex problems is to create random flips of images in your training data.
Keras supports random flipping along both the vertical and horizontal axes using the vertical_flip
and horizontal_flip
arguments.
# Random Flips
from tensorflow.keras.datasets import mnist
from tensorflow.keras.preprocessing.image import ImageDataGenerator
import matplotlib.pyplot as plt
# load data
(X_train, y_train), (X_test, y_test) = mnist.load_data()
# reshape to be [samples][width][height][channels]
X_train = X_train.reshape((X_train.shape[0], 28, 28, 1))
X_test = X_test.reshape((X_test.shape[0], 28, 28, 1))
# convert from int to float
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
# define data preparation
datagen = ImageDataGenerator(horizontal_flip=True, vertical_flip=True)
# configure batch size and retrieve one batch of images
for X_batch, y_batch in datagen.flow(X_train, y_train, batch_size=9, shuffle=False):
    # create a grid of 3x3 images
    fig, ax = plt.subplots(3, 3, sharex=True, sharey=True, figsize=(4,4))
    for i in range(3):
        for j in range(3):
            ax[i][j].imshow(X_batch[i*3+j].reshape(28,28), cmap=plt.get_cmap("gray"))
    # show the plot
    plt.show()
    break
Running this example, you can see flipped digits. Flipping digits is not useful as they will always have the correct left and right orientation, but this may be useful for problems with photographs of objects in a scene that can have a varied orientation.
Randomly flipped MNIST images
Saving Augmented Images to File
The data preparation and augmentation are performed just in time by Keras.
This is efficient in terms of memory, but you may require the exact images used during training. For example, perhaps you would like to use them with a different software package later or only generate them once and use them on multiple different deep learning models or configurations.
Keras allows you to save the images generated during training. The directory, filename prefix, and image file type can be specified to the flow()
function before training. Then, during training, the generated images will be written to the file.
The example below demonstrates this and writes nine images to an “images
” subdirectory with the prefix “aug
” and the file type of PNG.
# Save augmented images to file
from tensorflow.keras.datasets import mnist
from tensorflow.keras.preprocessing.image import ImageDataGenerator
import matplotlib.pyplot as plt
# load data
(X_train, y_train), (X_test, y_test) = mnist.load_data()
# reshape to be [samples][width][height][channels]
X_train = X_train.reshape((X_train.shape[0], 28, 28, 1))
X_test = X_test.reshape((X_test.shape[0], 28, 28, 1))
# convert from int to float
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
# define data preparation
datagen = ImageDataGenerator(horizontal_flip=True, vertical_flip=True)
# configure batch size and retrieve one batch of images
for X_batch, y_batch in datagen.flow(X_train, y_train, batch_size=9, shuffle=False,
                                     save_to_dir='images', save_prefix='aug', save_format='png'):
    # create a grid of 3x3 images
    fig, ax = plt.subplots(3, 3, sharex=True, sharey=True, figsize=(4,4))
    for i in range(3):
        for j in range(3):
            ax[i][j].imshow(X_batch[i*3+j].reshape(28,28), cmap=plt.get_cmap("gray"))
    # show the plot
    plt.show()
    break
Running the example, you can see that images are only written when they are generated.
Augmented MNIST images saved to file
Image data is unique in that you can review the data and transformed copies of the data and quickly get an idea of how the model may perceive it.
Below are some tips for getting the most from image data preparation and augmentation for deep learning.
In this post, you discovered image data preparation and augmentation.
You discovered a range of techniques you can use easily in Python with Keras for deep learning models. You learned about:
Do you have any questions about image data augmentation or this post? Ask your questions in the comments, and I will do my best to answer.
Original article sourced at: https://machinelearningmastery.com
This repository is a fork of SimpleMDE, made by Sparksuite. Go to the dedicated section for more information.
A drop-in JavaScript text area replacement for writing beautiful and understandable Markdown. EasyMDE allows users who may be less experienced with Markdown to use familiar toolbar buttons and shortcuts.
In addition, the syntax is rendered while editing to clearly show the expected result. Headings are larger, emphasized words are italicized, links are underlined, etc.
EasyMDE also features both built-in auto saving and spell checking. The editor is entirely customizable, from theming to toolbar buttons and JavaScript hooks.
Via npm:
npm install easymde
Via the UNPKG CDN:
<link rel="stylesheet" href="https://unpkg.com/easymde/dist/easymde.min.css">
<script src="https://unpkg.com/easymde/dist/easymde.min.js"></script>
Or jsDelivr:
<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/easymde/dist/easymde.min.css">
<script src="https://cdn.jsdelivr.net/npm/easymde/dist/easymde.min.js"></script>
After installing and/or importing the module, you can load EasyMDE onto the first textarea
element on the web page:
<textarea></textarea>
<script>
const easyMDE = new EasyMDE();
</script>
Alternatively, you can select a specific textarea
, via JavaScript:
<textarea id="my-text-area"></textarea>
<script>
const easyMDE = new EasyMDE({element: document.getElementById('my-text-area')});
</script>
Use easyMDE.value()
to get the content of the editor:
<script>
easyMDE.value();
</script>
Use easyMDE.value(val)
to set the content of the editor:
<script>
easyMDE.value('New input for **EasyMDE**');
</script>
true
, force downloads Font Awesome (used for icons). If set to false
, prevents downloading. Defaults to undefined
, which will intelligently check whether Font Awesome has already been included, then download accordingly.true
, focuses the editor automatically. Defaults to false
.true
, saves the text automatically. Defaults to false
.10000
(10 seconds).autosave.delay
or 10000
(10 seconds).locale: en-US, format: hour:minute
.{ delay: 300 }
, it will check every 300 ms if the editor is visible and if positive, call CodeMirror's refresh()
.**
or __
. Defaults to **
.```
or ~~~
. Defaults to ```
.*
or _
. Defaults to *
.*
, -
or +
. Defaults to *
.textarea
element to use. Defaults to the first textarea
element on the page.true
, force text changes made in EasyMDE to be immediately stored in original text area. Defaults to false
.false
, indent using spaces instead of tabs. Defaults to true
.false
by default, preview for images will appear only for images on separate lines.
as argument and returns a string that serves as the src
attribute of the <img>
tag in the preview. Enables dynamic previewing of images in the frontend without having to upload them to a server, allows copy-pasting of images to the editor with preview.["[", "](http://)"]
.true
, enables line numbers in the editor.false
, disable line wrapping. Defaults to true
."500px"
. Defaults to "300px"
.minHeight
option will be ignored. Should be a string containing a valid CSS value like "500px"
. Defaults to undefined
.true
when the editor is currently going into full screen mode, or false
.true
, will render headers without a space after the #
. Defaults to false
.false
, will not process GFM strikethrough syntax. Defaults to true
.true
, let underscores be a delimiter for separating words. Defaults to false
.false
, will replace CSS classes returned by the default Markdown mode. Otherwise the classes returned by the custom mode will be combined with the classes returned by the default mode. Defaults to true
."editor-preview"
.true
, a JS alert window appears asking for the link or image URL. Defaults to false
.URL of the image:
.URL for the link:
.true
, enables the image upload functionality, which can be triggered by drag and drop, copy-paste and through the browse-file window (opened when the user click on the upload-image icon). Defaults to false
.1024 * 1024 * 2
(2 MB).image/png, image/jpeg
.imageMaxSize
, imageAccept
, imageUploadEndpoint
and imageCSRFToken
ineffective.onSuccess
and onError
callback functions as parameters. onSuccess(imageUrl: string)
and onError(errorMessage: string)
{"data": {"filePath": "<filePath>"}}
where filePath is the path of the image (absolute if imagePathAbsolute
is set to true, relative if otherwise);{"error": "<errorCode>"}
, where errorCode can be noFileGiven
(HTTP 400 Bad Request), typeNotAllowed
(HTTP 415 Unsupported Media Type), fileTooLarge
(HTTP 413 Payload Too Large) or importError
(see errorMessages below). If errorCode is not one of the errorMessages, it is alerted unchanged to the user. This allows for server-side error messages. No default value.true
, will treat imageUrl
from imageUploadFunction
and filePath returned from imageUploadEndpoint
as an absolute rather than relative path, i.e. not prepend window.location.origin
to it.imageCSRFToken
has value, defaults to csrfmiddlewaretoken
.true
, passing CSRF token via header. Defaults to false
, which passes the CSRF token through the request body.#image_name#
, #image_size#
and #image_max_size#
will be replaced by their respective values, which can be used for customization or internationalization:uploadImage
is set to true
. Defaults to Attach files by drag and dropping or pasting from clipboard.
.Drop image to upload it.
.Uploading images #images_names#
.Uploading #file_name#: #progress#%
.Uploaded #image_name#
.B, KB, MB
(example: 218 KB
). You can use B,KB,MB
instead if you prefer without whitespaces (218KB
).errorCallback
option, where #image_name#
, #image_size#
and #image_max_size#
will be replaced by their respective values, which can be used for customization or internationalization:You must select a file.
.imageAccept
list, or the server returned this error code. Defaults to This image type is not allowed.
.imageMaxSize
, or if the server returned this error code. Defaults to Image #image_name# is too big (#image_size#).\nMaximum file size is #image_max_size#.
.Something went wrong when uploading the image #image_name#.
.(errorMessage) => alert(errorMessage)
.true
, will highlight using highlight.js. Defaults to false
. To use this feature you must include highlight.js on your page or pass in using the hljs
option. For example, include the script and the CSS files like:<script src="https://cdn.jsdelivr.net/highlight.js/latest/highlight.min.js"></script>
<link rel="stylesheet" href="https://cdn.jsdelivr.net/highlight.js/latest/styles/github.min.css">
window.hljs
), you can provide an instance here. Defaults to undefined
.renderingConfig
options will take precedence.false
, disable parsing GitHub Flavored Markdown (GFM) single line breaks. Defaults to true
.false
, disable the spell checker. Defaults to true
. Optionally pass a CodeMirrorSpellChecker-compliant function.textarea
or contenteditable
. Defaults to textarea
for desktop and contenteditable
for mobile. contenteditable
option is necessary to enable nativeSpellcheck.false
, disable native spell checker. Defaults to true
.false
, allows side-by-side editing without going into fullscreen. Defaults to true
.false
, hide the status bar. Defaults to the array of built-in status bar items.false
, remove the CodeMirror-selectedtext
class from selected lines. Defaults to true
.false
, disable syncing scroll in side by side mode. Defaults to true
.2
.easymde
.false
, hide the toolbar. Defaults to the array of icons.false
, disable toolbar button tips. Defaults to true
.rtl
or ltr
. Changes text direction to support right-to-left languages. Defaults to ltr
.Most options demonstrate the non-default behavior:
const editor = new EasyMDE({
autofocus: true,
autosave: {
enabled: true,
uniqueId: "MyUniqueID",
delay: 1000,
submit_delay: 5000,
timeFormat: {
locale: 'en-US',
format: {
year: 'numeric',
month: 'long',
day: '2-digit',
hour: '2-digit',
minute: '2-digit',
},
},
text: "Autosaved: "
},
blockStyles: {
bold: "__",
italic: "_",
},
unorderedListStyle: "-",
element: document.getElementById("MyID"),
forceSync: true,
hideIcons: ["guide", "heading"],
indentWithTabs: false,
initialValue: "Hello world!",
insertTexts: {
horizontalRule: ["", "\n\n-----\n\n"],
image: [""],
link: ["[", "](https://)"],
table: ["", "\n\n| Column 1 | Column 2 | Column 3 |\n| -------- | -------- | -------- |\n| Text | Text | Text |\n\n"],
},
lineWrapping: false,
minHeight: "500px",
parsingConfig: {
allowAtxHeaderWithoutSpace: true,
strikethrough: false,
underscoresBreakWords: true,
},
placeholder: "Type here...",
previewClass: "my-custom-styling",
previewClass: ["my-custom-styling", "more-custom-styling"],
previewRender: (plainText) => customMarkdownParser(plainText), // Returns HTML from a custom parser
previewRender: (plainText, preview) => { // Async method
setTimeout(() => {
preview.innerHTML = customMarkdownParser(plainText);
}, 250);
return "Loading...";
},
promptURLs: true,
promptTexts: {
image: "Custom prompt for URL:",
link: "Custom prompt for URL:",
},
renderingConfig: {
singleLineBreaks: false,
codeSyntaxHighlighting: true,
sanitizerFunction: (renderedHTML) => {
// Using DOMPurify and only allowing <b> tags
return DOMPurify.sanitize(renderedHTML, {ALLOWED_TAGS: ['b']})
},
},
shortcuts: {
drawTable: "Cmd-Alt-T"
},
showIcons: ["code", "table"],
spellChecker: false,
status: false,
status: ["autosave", "lines", "words", "cursor"], // Optional usage
status: ["autosave", "lines", "words", "cursor", {
className: "keystrokes",
defaultValue: (el) => {
el.setAttribute('data-keystrokes', 0);
},
onUpdate: (el) => {
const keystrokes = Number(el.getAttribute('data-keystrokes')) + 1;
el.innerHTML = `${keystrokes} Keystrokes`;
el.setAttribute('data-keystrokes', keystrokes);
},
}], // Another optional usage, with a custom status bar item that counts keystrokes
styleSelectedText: false,
sideBySideFullscreen: false,
syncSideBySidePreviewScroll: false,
tabSize: 4,
toolbar: false,
toolbarTips: false,
});
Below are the built-in toolbar icons (only some of which are enabled by default), which can be reorganized however you like. "Name" is the name of the icon, referenced in the JavaScript. "Action" is either a function or a URL to open. "Class" is the class given to the icon. "Tooltip" is the small tooltip that appears via the title=""
attribute. Note that shortcut hints are added automatically and reflect the specified action if it has a key bind assigned to it (i.e. with the value of action
set to bold
and that of tooltip
set to Bold
, the final text the user will see would be "Bold (Ctrl-B)").
Additionally, you can add a separator between any icons by adding "|"
to the toolbar array.
Name | Action | Tooltip | Class |
---|---|---|---|
bold | toggleBold | Bold | fa fa-bold |
italic | toggleItalic | Italic | fa fa-italic |
strikethrough | toggleStrikethrough | Strikethrough | fa fa-strikethrough |
heading | toggleHeadingSmaller | Heading | fa fa-header |
heading-smaller | toggleHeadingSmaller | Smaller Heading | fa fa-header |
heading-bigger | toggleHeadingBigger | Bigger Heading | fa fa-lg fa-header |
heading-1 | toggleHeading1 | Big Heading | fa fa-header header-1 |
heading-2 | toggleHeading2 | Medium Heading | fa fa-header header-2 |
heading-3 | toggleHeading3 | Small Heading | fa fa-header header-3 |
code | toggleCodeBlock | Code | fa fa-code |
quote | toggleBlockquote | Quote | fa fa-quote-left |
unordered-list | toggleUnorderedList | Generic List | fa fa-list-ul |
ordered-list | toggleOrderedList | Numbered List | fa fa-list-ol |
clean-block | cleanBlock | Clean block | fa fa-eraser |
link | drawLink | Create Link | fa fa-link |
image | drawImage | Insert Image | fa fa-picture-o |
table | drawTable | Insert Table | fa fa-table |
horizontal-rule | drawHorizontalRule | Insert Horizontal Line | fa fa-minus |
preview | togglePreview | Toggle Preview | fa fa-eye no-disable |
side-by-side | toggleSideBySide | Toggle Side by Side | fa fa-columns no-disable no-mobile |
fullscreen | toggleFullScreen | Toggle Fullscreen | fa fa-arrows-alt no-disable no-mobile |
guide | This link | Markdown Guide | fa fa-question-circle |
undo | undo | Undo | fa fa-undo |
redo | redo | Redo | fa fa-redo |
Customize the toolbar using the toolbar
option.
Only the order of existing buttons:
const easyMDE = new EasyMDE({
toolbar: ["bold", "italic", "heading", "|", "quote"]
});
All information and/or add your own icons
const easyMDE = new EasyMDE({
toolbar: [
{
name: "bold",
action: EasyMDE.toggleBold,
className: "fa fa-bold",
title: "Bold",
},
"italics", // shortcut to pre-made button
{
name: "custom",
action: (editor) => {
// Add your own code
},
className: "fa fa-star",
title: "Custom Button",
attributes: { // for custom attributes
id: "custom-id",
"data-value": "custom value" // HTML5 data-* attributes need to be enclosed in quotation marks ("") because of the dash (-) in its name.
}
},
"|" // Separator
// [, ...]
]
});
Put some buttons on dropdown menu
const easyMDE = new EasyMDE({
toolbar: [{
name: "heading",
action: EasyMDE.toggleHeadingSmaller,
className: "fa fa-header",
title: "Headers",
},
"|",
{
name: "others",
className: "fa fa-blind",
title: "others buttons",
children: [
{
name: "image",
action: EasyMDE.drawImage,
className: "fa fa-picture-o",
title: "Image",
},
{
name: "quote",
action: EasyMDE.toggleBlockquote,
className: "fa fa-percent",
title: "Quote",
},
{
name: "link",
action: EasyMDE.drawLink,
className: "fa fa-link",
title: "Link",
}
]
},
// [, ...]
]
});
EasyMDE comes with an array of predefined keyboard shortcuts, but they can be altered with a configuration option. The list of default ones is as follows:
Shortcut (Windows / Linux) | Shortcut (macOS) | Action |
---|---|---|
Ctrl-' | Cmd-' | "toggleBlockquote" |
Ctrl-B | Cmd-B | "toggleBold" |
Ctrl-E | Cmd-E | "cleanBlock" |
Ctrl-H | Cmd-H | "toggleHeadingSmaller" |
Ctrl-I | Cmd-I | "toggleItalic" |
Ctrl-K | Cmd-K | "drawLink" |
Ctrl-L | Cmd-L | "toggleUnorderedList" |
Ctrl-P | Cmd-P | "togglePreview" |
Ctrl-Alt-C | Cmd-Alt-C | "toggleCodeBlock" |
Ctrl-Alt-I | Cmd-Alt-I | "drawImage" |
Ctrl-Alt-L | Cmd-Alt-L | "toggleOrderedList" |
Shift-Ctrl-H | Shift-Cmd-H | "toggleHeadingBigger" |
F9 | F9 | "toggleSideBySide" |
F11 | F11 | "toggleFullScreen" |
Here is how you can change a few, while leaving others untouched:
const editor = new EasyMDE({
shortcuts: {
"toggleOrderedList": "Ctrl-Alt-K", // alter the shortcut for toggleOrderedList
"toggleCodeBlock": null, // unbind Ctrl-Alt-C
"drawTable": "Cmd-Alt-T", // bind Cmd-Alt-T to drawTable action, which doesn't come with a default shortcut
}
});
Shortcuts are automatically converted between platforms. If you define a shortcut as "Cmd-B", on PC that shortcut will be changed to "Ctrl-B". Conversely, a shortcut defined as "Ctrl-B" will become "Cmd-B" for Mac users.
The list of actions that can be bound is the same as the list of built-in actions available for toolbar buttons.
You can catch the following list of events: https://codemirror.net/doc/manual.html#events
const easyMDE = new EasyMDE();
easyMDE.codemirror.on("change", () => {
console.log(easyMDE.value());
});
You can revert to the initial text area by calling the toTextArea
method. Note that this clears up the autosave (if enabled) associated with it. The text area will retain any text from the destroyed EasyMDE instance.
let easyMDE = new EasyMDE();
// ...
easyMDE.toTextArea();
easyMDE = null;
If you need to remove registered event listeners (when the editor is not needed anymore), call easyMDE.cleanup()
.
The following self-explanatory methods may be of use while developing with EasyMDE.
const easyMDE = new EasyMDE();
easyMDE.isPreviewActive(); // returns boolean
easyMDE.isSideBySideActive(); // returns boolean
easyMDE.isFullscreenActive(); // returns boolean
easyMDE.clearAutosavedValue(); // no returned value
EasyMDE is a continuation of SimpleMDE.
SimpleMDE began as an improvement of lepture's Editor project, but has now taken on an identity of its own. It is bundled with CodeMirror and depends on Font Awesome.
CodeMirror is the backbone of the project and parses much of the Markdown syntax as it's being written. This allows us to add styles to the Markdown that's being written. Additionally, a toolbar and status bar have been added to the top and bottom, respectively. Previews are rendered by Marked using GitHub Flavored Markdown (GFM).
I originally made this fork to implement FontAwesome 5 compatibility into SimpleMDE. When that was done, I submitted a pull request, which has not been accepted yet. This, and the project being inactive since May 2017, prompted me to make more changes and try to put new life into the project.
Changes include:
https:// by default
My intention is to continue development on this project, improving it and keeping it alive.
You may want to edit this library to adapt its behavior to your needs. This can be done in some quick steps:
gulp
command, which will generate files: dist/easymde.min.css
and dist/easymde.min.js;
Want to contribute to EasyMDE? Thank you! We have a contribution guide just for you!
Author: Ionaru
Source Code: https://github.com/Ionaru/easy-markdown-editor
License: MIT license
When your app is opened, there is a brief time while the native app loads Flutter. By default, during this time, the native app displays a white splash screen. This package automatically generates iOS, Android, and Web-native code for customizing this native splash screen background color and splash image. Supports dark mode, full screen, and platform-specific options.
[BETA] Support for flavors is in beta. Currently only Android and iOS are supported. See instructions below.
You can now keep the splash screen up while your app initializes! No need for a secondary splash screen anymore. Just use the preserve
and remove
methods together to remove the splash screen after your initialization is complete. See details below.
Would you prefer a video tutorial instead? Check out Johannes Milke's tutorial.
First, add flutter_native_splash
as a dependency in your pubspec.yaml file.
dependencies:
flutter_native_splash: ^2.2.19
Don't forget to flutter pub get
.
Customize the following settings and add to your project's pubspec.yaml
file or place in a new file in your root project folder named flutter_native_splash.yaml
.
flutter_native_splash:
# This package generates native code to customize Flutter's default white native splash screen
# with background color and splash image.
# Customize the parameters below, and run the following command in the terminal:
# flutter pub run flutter_native_splash:create
# To restore Flutter's default white splash screen, run the following command in the terminal:
# flutter pub run flutter_native_splash:remove
# color or background_image is the only required parameter. Use color to set the background
# of your splash screen to a solid color. Use background_image to set the background of your
# splash screen to a png image. This is useful for gradients. The image will be stretched to the
# size of the app. Only one parameter can be used, color and background_image cannot both be set.
color: "#42a5f5"
#background_image: "assets/background.png"
# Optional parameters are listed below. To enable a parameter, uncomment the line by removing
# the leading # character.
# The image parameter allows you to specify an image used in the splash screen. It must be a
# png file and should be sized for 4x pixel density.
#image: assets/splash.png
# The branding property allows you to specify an image used as branding in the splash screen.
# It must be a png file. It is supported for Android, iOS and the Web. For Android 12,
# see the Android 12 section below.
#branding: assets/dart.png
# To position the branding image at the bottom of the screen you can use bottom, bottomRight,
# and bottomLeft. The default value is bottom if not specified or if an invalid value is given.
#branding_mode: bottom
# The color_dark, background_image_dark, image_dark, branding_dark are parameters that set the background
# and image when the device is in dark mode. If they are not specified, the app will use the
# parameters from above. If the image_dark parameter is specified, color_dark or
# background_image_dark must be specified. color_dark and background_image_dark cannot both be
# set.
#color_dark: "#042a49"
#background_image_dark: "assets/dark-background.png"
#image_dark: assets/splash-invert.png
#branding_dark: assets/dart_dark.png
# Android 12 handles the splash screen differently than previous versions. Please visit
# https://developer.android.com/guide/topics/ui/splash-screen
# The following are Android 12 specific parameters.
android_12:
# The image parameter sets the splash screen icon image. If this parameter is not specified,
# the app's launcher icon will be used instead.
# Please note that the splash screen will be clipped to a circle on the center of the screen.
# App icon with an icon background: This should be 960×960 pixels, and fit within a circle
# 640 pixels in diameter.
# App icon without an icon background: This should be 1152×1152 pixels, and fit within a circle
# 768 pixels in diameter.
#image: assets/android12splash.png
# Splash screen background color.
#color: "#42a5f5"
# App icon background color.
#icon_background_color: "#111111"
# The branding property allows you to specify an image used as branding in the splash screen.
#branding: assets/dart.png
# The image_dark, color_dark, icon_background_color_dark, and branding_dark set values that
# apply when the device is in dark mode. If they are not specified, the app will use the
# parameters from above.
#image_dark: assets/android12splash-invert.png
#color_dark: "#042a49"
#icon_background_color_dark: "#eeeeee"
# The android, ios and web parameters can be used to disable generating a splash screen on a given
# platform.
#android: false
#ios: false
#web: false
# Platform specific images can be specified with the following parameters, which will override
# the respective parameter. You may specify all, selected, or none of these parameters:
#color_android: "#42a5f5"
#color_dark_android: "#042a49"
#color_ios: "#42a5f5"
#color_dark_ios: "#042a49"
#color_web: "#42a5f5"
#color_dark_web: "#042a49"
#image_android: assets/splash-android.png
#image_dark_android: assets/splash-invert-android.png
#image_ios: assets/splash-ios.png
#image_dark_ios: assets/splash-invert-ios.png
#image_web: assets/splash-web.png
#image_dark_web: assets/splash-invert-web.png
#background_image_android: "assets/background-android.png"
#background_image_dark_android: "assets/dark-background-android.png"
#background_image_ios: "assets/background-ios.png"
#background_image_dark_ios: "assets/dark-background-ios.png"
#background_image_web: "assets/background-web.png"
#background_image_dark_web: "assets/dark-background-web.png"
#branding_android: assets/brand-android.png
#branding_dark_android: assets/dart_dark-android.png
#branding_ios: assets/brand-ios.png
#branding_dark_ios: assets/dart_dark-ios.png
# The position of the splash image can be set with android_gravity, ios_content_mode, and
# web_image_mode parameters. All default to center.
#
# android_gravity can be one of the following Android Gravity (see
# https://developer.android.com/reference/android/view/Gravity): bottom, center,
# center_horizontal, center_vertical, clip_horizontal, clip_vertical, end, fill, fill_horizontal,
# fill_vertical, left, right, start, or top.
#android_gravity: center
#
# ios_content_mode can be one of the following iOS UIView.ContentMode (see
# https://developer.apple.com/documentation/uikit/uiview/contentmode): scaleToFill,
# scaleAspectFit, scaleAspectFill, center, top, bottom, left, right, topLeft, topRight,
# bottomLeft, or bottomRight.
#ios_content_mode: center
#
# web_image_mode can be one of the following modes: center, contain, stretch, and cover.
#web_image_mode: center
# The screen orientation can be set in Android with the android_screen_orientation parameter.
# Valid parameters can be found here:
# https://developer.android.com/guide/topics/manifest/activity-element#screen
#android_screen_orientation: sensorLandscape
# To hide the notification bar, use the fullscreen parameter. Has no effect in web since web
# has no notification bar. Defaults to false.
# NOTE: Unlike Android, iOS will not automatically show the notification bar when the app loads.
# To show the notification bar, add the following code to your Flutter app:
# WidgetsFlutterBinding.ensureInitialized();
# SystemChrome.setEnabledSystemUIOverlays([SystemUiOverlay.bottom, SystemUiOverlay.top]);
#fullscreen: true
# If you have changed the name(s) of your info.plist file(s), you can specify the filename(s)
# with the info_plist_files parameter. Remove only the # characters in the three lines below,
# do not remove any spaces:
#info_plist_files:
# - 'ios/Runner/Info-Debug.plist'
# - 'ios/Runner/Info-Release.plist'
After adding your settings, run the following command in the terminal:
flutter pub run flutter_native_splash:create
When the package finishes running, your splash screen is ready.
To specify the YAML file location just add --path with the command in the terminal:
flutter pub run flutter_native_splash:create --path=path/to/my/file.yaml
By default, the splash screen will be removed when Flutter has drawn the first frame. If you would like the splash screen to remain while your app initializes, you can use the preserve()
and remove()
methods together. Pass the preserve()
method the value returned from WidgetsFlutterBinding.ensureInitialized()
to keep the splash on screen. Later, when your app has initialized, make a call to remove()
to remove the splash screen.
import 'package:flutter_native_splash/flutter_native_splash.dart';
void main() {
WidgetsBinding widgetsBinding = WidgetsFlutterBinding.ensureInitialized();
FlutterNativeSplash.preserve(widgetsBinding: widgetsBinding);
runApp(const MyApp());
}
// whenever your initialization is completed, remove the splash screen:
FlutterNativeSplash.remove();
NOTE: If you do not need to use the preserve()
and remove()
methods, you can place the flutter_native_splash
dependency in the dev_dependencies
section of pubspec.yaml
.
If you find this package useful, you can support it for free by giving it a thumbs up at the top of this page. Here's another option to support the package:
Android 12 has a new method of adding splash screens, which consists of a window background, icon, and the icon background. Note that a background image is not supported.
Be aware of the following considerations regarding these elements:
1: The icon is set with the image
parameter. By default, the launcher icon is used.
2: The icon_background_color
is optional, and is useful if you need more contrast between the icon and the window background.
3: One-third of the foreground is masked.
4: The window background is set with the color
parameter and consists of a single opaque color.
PLEASE NOTE: The splash screen may not appear when you launch the app from Android Studio on API 31. However, it should appear when you launch by clicking on the launch icon in Android. This seems to be resolved in API 32+.
PLEASE NOTE: There are a number of reports that non-Google launchers do not display the launch image correctly. If the launch image does not display correctly, please try the Google launcher to confirm that this package is working.
PLEASE NOTE: The splash screen does not appear when you launch the app from a notification. Apparently this is the intended behavior on Android 12: core-splashscreen Icon not shown when cold launched from notification.
If you have a project setup that contains multiple flavors or environments and you created more than one flavor, this would be a feature for you.
Instead of maintaining multiple files and copy/pasting images, you can now, using this tool, create different splash screens for different environments.
In order to use the new feature and generate the desired splash images for your app, a couple of changes are required.
If you want to generate just one flavor and one file, you would use either option as described in Step 1. But in order to set up the flavors, you will then be required to move all your setup values to flutter_native_splash yaml
files, one per flavor, each distinguished by a suffix.
Let's assume for the rest of the setup that you have 3 different flavors, Production
, Acceptance
, Development
.
The first thing you will need to do is to create a different setup file for all 3 flavors with a suffix, like so:
flutter_native_splash-production.yaml
flutter_native_splash-acceptance.yaml
flutter_native_splash-development.yaml
You would setup those 3 files the same way as you would the one, but with different assets depending on which environment you would be generating. For example (Note: these are just examples, you can use whatever setup you need for your project that is already supported by the package):
# flutter_native_splash-development.yaml
flutter_native_splash:
color: "#ffffff"
image: assets/logo-development.png
branding: assets/branding-development.png
color_dark: "#121212"
image_dark: assets/logo-development.png
branding_dark: assets/branding-development.png
android_12:
image: assets/logo-development.png
icon_background_color: "#ffffff"
image_dark: assets/logo-development.png
icon_background_color_dark: "#121212"
web: false
# flutter_native_splash-acceptance.yaml
flutter_native_splash:
color: "#ffffff"
image: assets/logo-acceptance.png
branding: assets/branding-acceptance.png
color_dark: "#121212"
image_dark: assets/logo-acceptance.png
branding_dark: assets/branding-acceptance.png
android_12:
image: assets/logo-acceptance.png
icon_background_color: "#ffffff"
image_dark: assets/logo-acceptance.png
icon_background_color_dark: "#121212"
web: false
# flutter_native_splash-production.yaml
flutter_native_splash:
color: "#ffffff"
image: assets/logo-production.png
branding: assets/branding-production.png
color_dark: "#121212"
image_dark: assets/logo-production.png
branding_dark: assets/branding-production.png
android_12:
image: assets/logo-production.png
icon_background_color: "#ffffff"
image_dark: assets/logo-production.png
icon_background_color_dark: "#121212"
web: false
Great, now comes the fun part: running the new command!
The new command is:
# If you have a flavor called production, you would do this:
flutter pub run flutter_native_splash:create --flavor production
# For a flavor named staging, you would provide its name like so:
flutter pub run flutter_native_splash:create --flavor staging
# And if you have a local version for devs, you could do this:
flutter pub run flutter_native_splash:create --flavor development
You're done! No, really, Android doesn't need any additional setup.
Note: If it didn't work, please make sure that your flavors are named the same as your config files, otherwise the setup will not work.
iOS is a bit tricky, so hang tight, it might look scary but most of the steps are just a single click, explained as much as possible to lower the possibility of mistakes.
When you run the new command, you will need to open Xcode and follow the steps below:
Assumption
schemes
setup; production, acceptance and development.Preparation
{project root}/ios/Runner/Base.lproj
Xcode
Xcode still doesn't know how to use them, so we need to specify for all the current flavors (schemes) which file to use and to use that value inside the Info.plist file.
LAUNCH_SCREEN_STORYBOARD
$(LAUNCH_SCREEN_STORYBOARD)
Congrats, you have finished your setup for multiple flavors.
This message is not related to this package but is related to a change in how Flutter handles splash screens in Flutter 2.5. It is caused by having the following code in your android/app/src/main/AndroidManifest.xml
, which was included by default in previous versions of Flutter:
<meta-data
android:name="io.flutter.embedding.android.SplashScreenDrawable"
android:resource="@drawable/launch_background"
/>
The solution is to remove the above code. Note that this will also remove the fade effect between the native splash screen and your app.
Not at this time. PRs are always welcome!
This attribute is only found in Android 12, so if you are getting this error, it means your project is not fully set up for Android 12. Did you update your app's build configuration?
This is caused by an iOS splash caching bug, which can be solved by uninstalling your app, powering off your device, powering it back on, and then reinstalling.
removeAfter
method.No. This package creates a splash screen that is displayed before Flutter is loaded. Because of this, when the splash screen loads, internal app settings are not available to the splash screen. Unfortunately, this means that it is impossible to control light/dark settings of the splash from app settings.
Notes
If the splash screen was not updated correctly on iOS or if you experience a white screen before the splash screen, run flutter clean
and recompile your app. If that does not solve the problem, delete your app, power down the device, power up the device, install and launch the app as per this StackOverflow thread.
This package modifies launch_background.xml
and styles.xml
files on Android, LaunchScreen.storyboard
and Info.plist
on iOS, and index.html
on Web. If you have modified these files manually, this plugin may not work properly. Please open an issue if you find any bugs.
mdpi
, hdpi
, xhdpi
, xxhdpi
and xxxhdpi
drawables.<item>
tag containing a <bitmap>
for your splash image drawable will be added in launch_background.xml
colors.xml
and referenced in launch_background.xml
.styles.xml
.drawable-night
, values-night
, etc. resource folders.@3x
and @2x
images.LaunchScreen.storyboard
.Info.plist
.web/splash
folder will be created for splash screen images and CSS files.1x
, 2x
, 3x
, and 4x
sizes and placed in web/splash/img
.web/index.html
, as well as the HTML for the splash pictures.This package was originally created by Henrique Arthur and it is currently maintained by Jon Hanson.
If you encounter any problems, feel free to open an issue. If you feel the library is missing a feature, please raise a ticket. Pull requests are also welcome.
Run this command:
With Flutter:
$ flutter pub add flutter_native_splash
This will add a line like this to your package's pubspec.yaml (and run an implicit flutter pub get
):
dependencies:
flutter_native_splash: ^2.2.19
Alternatively, your editor might support flutter pub get
. Check the docs for your editor to learn more.
Now in your Dart code, you can use:
import 'package:flutter_native_splash/flutter_native_splash.dart';
import 'package:flutter/material.dart';
import 'package:flutter_native_splash/flutter_native_splash.dart';
void main() {
WidgetsBinding widgetsBinding = WidgetsFlutterBinding.ensureInitialized();
FlutterNativeSplash.preserve(widgetsBinding: widgetsBinding);
runApp(const MyApp());
}
class MyApp extends StatelessWidget {
const MyApp({super.key});
// This widget is the root of your application.
@override
Widget build(BuildContext context) {
return MaterialApp(
title: 'Flutter Demo',
theme: ThemeData(
// This is the theme of your application.
//
// Try running your application with "flutter run". You'll see the
// application has a blue toolbar. Then, without quitting the app, try
// changing the primarySwatch below to Colors.green and then invoke
// "hot reload" (press "r" in the console where you ran "flutter run",
// or simply save your changes to "hot reload" in a Flutter IDE).
// Notice that the counter didn't reset back to zero; the application
// is not restarted.
primarySwatch: Colors.blue,
),
home: const MyHomePage(title: 'Flutter Demo Home Page'),
);
}
}
class MyHomePage extends StatefulWidget {
const MyHomePage({super.key, required this.title});
// This widget is the home page of your application. It is stateful, meaning
// that it has a State object (defined below) that contains fields that affect
// how it looks.
// This class is the configuration for the state. It holds the values (in this
// case the title) provided by the parent (in this case the App widget) and
// used by the build method of the State. Fields in a Widget subclass are
// always marked "final".
final String title;
@override
State<MyHomePage> createState() => _MyHomePageState();
}
class _MyHomePageState extends State<MyHomePage> {
int _counter = 0;
void _incrementCounter() {
setState(() {
// This call to setState tells the Flutter framework that something has
// changed in this State, which causes it to rerun the build method below
// so that the display can reflect the updated values. If we changed
// _counter without calling setState(), then the build method would not be
// called again, and so nothing would appear to happen.
_counter++;
});
}
@override
void initState() {
super.initState();
initialization();
}
void initialization() async {
// This is where you can initialize the resources needed by your app while
// the splash screen is displayed. Remove the following example because
// delaying the user experience is a bad design practice!
// ignore_for_file: avoid_print
print('ready in 3...');
await Future.delayed(const Duration(seconds: 1));
print('ready in 2...');
await Future.delayed(const Duration(seconds: 1));
print('ready in 1...');
await Future.delayed(const Duration(seconds: 1));
print('go!');
FlutterNativeSplash.remove();
}
@override
Widget build(BuildContext context) {
// This method is rerun every time setState is called, for instance as done
// by the _incrementCounter method above.
//
// The Flutter framework has been optimized to make rerunning build methods
// fast, so that you can just rebuild anything that needs updating rather
// than having to individually change instances of widgets.
return Scaffold(
appBar: AppBar(
// Here we take the value from the MyHomePage object that was created by
// the App.build method, and use it to set our appbar title.
title: Text(widget.title),
),
body: Center(
// Center is a layout widget. It takes a single child and positions it
// in the middle of the parent.
child: Column(
// Column is also a layout widget. It takes a list of children and
// arranges them vertically. By default, it sizes itself to fit its
// children horizontally, and tries to be as tall as its parent.
//
// Invoke "debug painting" (press "p" in the console, choose the
// "Toggle Debug Paint" action from the Flutter Inspector in Android
// Studio, or the "Toggle Debug Paint" command in Visual Studio Code)
// to see the wireframe for each widget.
//
// Column has various properties to control how it sizes itself and
// how it positions its children. Here we use mainAxisAlignment to
// center the children vertically; the main axis here is the vertical
// axis because Columns are vertical (the cross axis would be
// horizontal).
mainAxisAlignment: MainAxisAlignment.center,
children: <Widget>[
const Text(
'You have pushed the button this many times:',
),
Text(
'$_counter',
style: Theme.of(context).textTheme.headlineMedium,
),
],
),
),
floatingActionButton: FloatingActionButton(
onPressed: _incrementCounter,
tooltip: 'Increment',
child: const Icon(Icons.add),
), // This trailing comma makes auto-formatting nicer for build methods.
);
}
}
Author: jonbhanson
Download Link: Download The Source Code
Official Website: https://github.com/jonbhanson/flutter_native_splash
License: MIT license
Perl script converts PDF files to Gerber format
Pdf2Gerb generates Gerber 274X photoplotting and Excellon drill files from PDFs of a PCB. Up to three PDFs are used: the top copper layer, the bottom copper layer (for 2-sided PCBs), and an optional silk screen layer. The PDFs can be created directly from any PDF drawing software, or a PDF print driver can be used to capture the Print output if the drawing software does not directly support output to PDF.
The general workflow is as follows:
Please note that Pdf2Gerb does NOT perform DRC (Design Rule Checks), as these will vary according to individual PCB manufacturer conventions and capabilities. Also note that Pdf2Gerb is not perfect, so the output files must always be checked before submitting them. As of version 1.6, Pdf2Gerb supports most PCB elements, such as round and square pads, round holes, traces, SMD pads, ground planes, no-fill areas, and panelization. However, because it interprets the graphical output of a Print function, there are limitations in what it can recognize (or there may be bugs).
See docs/Pdf2Gerb.pdf for install/setup, config, usage, and other info.
#Pdf2Gerb config settings:
#Put this file in same folder/directory as pdf2gerb.pl itself (global settings),
#or copy to another folder/directory with PDFs if you want PCB-specific settings.
#There is only one user of this file, so we don't need a custom package or namespace.
#NOTE: all constants defined in here will be added to main namespace.
#package pdf2gerb_cfg;
use strict; #trap undef vars (easier debug)
use warnings; #other useful info (easier debug)
##############################################################################################
#configurable settings:
#change values here instead of in the main pdf2gerb.pl file
use constant WANT_COLORS => ($^O !~ m/Win/); #ANSI colors don't work on Windows; must be set before the first DebugPrint() call
#just a little warning; set realistic expectations:
#DebugPrint("${\(CYAN)}Pdf2Gerb.pl ${\(VERSION)}, $^O O/S\n${\(YELLOW)}${\(BOLD)}${\(ITALIC)}This is EXPERIMENTAL software. \nGerber files MAY CONTAIN ERRORS. Please CHECK them before fabrication!${\(RESET)}", 0); #if WANT_DEBUG
use constant METRIC => FALSE; #set to TRUE for metric units (only affects the final numbers in the output files, not internal arithmetic)
use constant APERTURE_LIMIT => 0; #34; #max #apertures to use; generate warnings if too many apertures are used (0 to not check)
use constant DRILL_FMT => '2.4'; #'2.3'; #'2.4' is the default for PCB fab; change to '2.3' for CNC
use constant WANT_DEBUG => 0; #10; #level of debug wanted; higher == more, lower == less, 0 == none
use constant GERBER_DEBUG => 0; #level of debug to include in Gerber file; DON'T USE FOR FABRICATION
use constant WANT_STREAMS => FALSE; #TRUE; #save decompressed streams to files (for debug)
use constant WANT_ALLINPUT => FALSE; #TRUE; #save entire input stream (for debug ONLY)
#DebugPrint(sprintf("${\(CYAN)}DEBUG: stdout %d, gerber %d, want streams? %d, all input? %d, O/S: $^O, Perl: $]${\(RESET)}\n", WANT_DEBUG, GERBER_DEBUG, WANT_STREAMS, WANT_ALLINPUT), 1);
#DebugPrint(sprintf("max int = %d, min int = %d\n", MAXINT, MININT), 1);
#define standard trace and pad sizes to reduce scaling or PDF rendering errors:
#This avoids weird aperture settings and replaces them with more standardized values.
#(I'm not sure how photoplotters handle strange sizes).
#Fewer choices here gives more accurate mapping in the final Gerber files.
#units are in inches
use constant TOOL_SIZES => #add more as desired
(
#round or square pads (> 0) and drills (< 0):
.010, -.001, #tiny pads for SMD; dummy drill size (too small for practical use, but needed so StandardTool will use this entry)
.031, -.014, #used for vias
.041, -.020, #smallest non-filled plated hole
.051, -.025,
.056, -.029, #useful for IC pins
.070, -.033,
.075, -.040, #heavier leads
# .090, -.043, #NOTE: 600 dpi is not high enough resolution to reliably distinguish between .043" and .046", so choose 1 of the 2 here
.100, -.046,
.115, -.052,
.130, -.061,
.140, -.067,
.150, -.079,
.175, -.088,
.190, -.093,
.200, -.100,
.220, -.110,
.160, -.125, #useful for mounting holes
#some additional pad sizes without holes (repeat a previous hole size if you just want the pad size):
.090, -.040, #want a .090 pad option, but use dummy hole size
.065, -.040, #.065 x .065 rect pad
.035, -.040, #.035 x .065 rect pad
#traces:
.001, #too thin for real traces; use only for board outlines
.006, #minimum real trace width; mainly used for text
.008, #mainly used for mid-sized text, not traces
.010, #minimum recommended trace width for low-current signals
.012,
.015, #moderate low-voltage current
.020, #heavier trace for power, ground (even if a lighter one is adequate)
.025,
.030, #heavy-current traces; be careful with these ones!
.040,
.050,
.060,
.080,
.100,
.120,
);
#Areas larger than the values below will be filled with parallel lines:
#This cuts down on the number of aperture sizes used.
#Set to 0 to always use an aperture or drill, regardless of size.
use constant { MAX_APERTURE => max((TOOL_SIZES)) + .004, MAX_DRILL => -min((TOOL_SIZES)) + .004 }; #max aperture and drill sizes (plus a little tolerance)
#DebugPrint(sprintf("using %d standard tool sizes: %s, max aper %.3f, max drill %.3f\n", scalar((TOOL_SIZES)), join(", ", (TOOL_SIZES)), MAX_APERTURE, MAX_DRILL), 1);
#NOTE: Compare the PDF to the original CAD file to check the accuracy of the PDF rendering and parsing!
#for example, the CAD software I used generated the following circles for holes:
#CAD hole size: parsed PDF diameter: error:
# .014 .016 +.002
# .020 .02267 +.00267
# .025 .026 +.001
# .029 .03167 +.00267
# .033 .036 +.003
# .040 .04267 +.00267
#This was usually ~ .002" - .003" too big compared to the hole as displayed in the CAD software.
#To compensate for PDF rendering errors (either during CAD Print function or PDF parsing logic), adjust the values below as needed.
#units are pixels; for example, a value of 2.4 at 600 dpi = .004 inch, 2 at 600 dpi = .0033"
use constant
{
HOLE_ADJUST => -0.004 * 600, #-2.6, #holes seemed to be slightly oversized (by .002" - .004"), so shrink them a little
RNDPAD_ADJUST => -0.003 * 600, #-2, #-2.4, #round pads seemed to be slightly oversized, so shrink them a little
SQRPAD_ADJUST => +0.001 * 600, #+.5, #square pads are sometimes too small by .00067, so bump them up a little
RECTPAD_ADJUST => 0, #(pixels) rectangular pads seem to be okay? (not tested much)
TRACE_ADJUST => 0, #(pixels) traces seemed to be okay?
REDUCE_TOLERANCE => .001, #(inches) allow this much variation when reducing circles and rects
};
#Also, my CAD's Print function or the PDF print driver I used was a little off for circles, so define some additional adjustment values here:
#Values are added to X/Y coordinates; units are pixels; for example, a value of 1 at 600 dpi would be ~= .002 inch
use constant
{
CIRCLE_ADJUST_MINX => 0,
CIRCLE_ADJUST_MINY => -0.001 * 600, #-1, #circles were a little too high, so nudge them a little lower
CIRCLE_ADJUST_MAXX => +0.001 * 600, #+1, #circles were a little too far to the left, so nudge them a little to the right
CIRCLE_ADJUST_MAXY => 0,
SUBST_CIRCLE_CLIPRECT => FALSE, #generate circle and substitute for clip rects (to compensate for the way some CAD software draws circles)
WANT_CLIPRECT => TRUE, #FALSE, #AI doesn't need clip rect at all? should be on normally?
RECT_COMPLETION => FALSE, #TRUE, #fill in 4th side of rect when 3 sides found
};
#allow .012 clearance around pads for solder mask:
#This value effectively adjusts pad sizes in the TOOL_SIZES list above (only for solder mask layers).
use constant SOLDER_MARGIN => +.012; #units are inches
#line join/cap styles:
use constant
{
CAP_NONE => 0, #butt (none); line is exact length
CAP_ROUND => 1, #round cap/join; line overhangs by a semi-circle at either end
CAP_SQUARE => 2, #square cap/join; line overhangs by a half square on either end
CAP_OVERRIDE => FALSE, #cap style overrides drawing logic
};
#number of elements in each shape type:
use constant
{
RECT_SHAPELEN => 6, #x0, y0, x1, y1, count, "rect" (start, end corners)
LINE_SHAPELEN => 6, #x0, y0, x1, y1, count, "line" (line seg)
CURVE_SHAPELEN => 10, #xstart, ystart, x0, y0, x1, y1, xend, yend, count, "curve" (bezier 2 points)
CIRCLE_SHAPELEN => 5, #x, y, radius, count, "circle" (center + radius)
};
#const my %SHAPELEN =
#Readonly my %SHAPELEN =>
our %SHAPELEN =
(
rect => RECT_SHAPELEN,
line => LINE_SHAPELEN,
curve => CURVE_SHAPELEN,
circle => CIRCLE_SHAPELEN,
);
#panelization:
#This will repeat the entire body the number of times indicated along the X or Y axes (files grow accordingly).
#Display elements that overhang PCB boundary can be squashed or left as-is (typically text or other silk screen markings).
#Set "overhangs" TRUE to allow overhangs, FALSE to truncate them.
#xpad and ypad allow margins to be added around outer edge of panelized PCB.
use constant PANELIZE => {'x' => 1, 'y' => 1, 'xpad' => 0, 'ypad' => 0, 'overhangs' => TRUE}; #number of times to repeat in X and Y directions
# Set this to 1 if you need TurboCAD support.
#$turboCAD = FALSE; #is this still needed as an option?
#CIRCAD pad generation uses an appropriate aperture, then moves it (stroke) "a little" - we use this to find pads and distinguish them from PCB holes.
use constant PAD_STROKE => 0.3; #0.0005 * 600; #units are pixels
#convert very short traces to pads or holes:
use constant TRACE_MINLEN => .001; #units are inches
#use constant ALWAYS_XY => TRUE; #FALSE; #force XY even if X or Y doesn't change; NOTE: needs to be TRUE for all pads to show in FlatCAM and ViewPlot
use constant REMOVE_POLARITY => FALSE; #TRUE; #set to remove subtractive (negative) polarity; NOTE: must be FALSE for ground planes
#PDF uses "points", each point = 1/72 inch
#combined with a PDF scale factor of .12, this gives 600 dpi resolution (1/72 inch * .12 = 1/600 inch per unit, i.e. 600 dpi)
use constant INCHES_PER_POINT => 1/72; #0.0138888889; #multiply point-size by this to get inches
# The precision used when computing a bezier curve. Higher numbers are more precise but slower (and generate larger files).
#$bezierPrecision = 100;
use constant BEZIER_PRECISION => 36; #100; #use const; reduced for faster rendering (mainly used for silk screen and thermal pads)
# Ground planes and silk screen or larger copper rectangles or circles are filled line-by-line using this resolution.
use constant FILL_WIDTH => .01; #fill at most 0.01 inch at a time
# The max number of characters to read into memory
use constant MAX_BYTES => 10 * M; #bumped up to 10 MB, use const
use constant DUP_DRILL1 => TRUE; #FALSE; #kludge: ViewPlot doesn't load drill files that are too small so duplicate first tool
my $runtime = time(); #Time::HiRes::gettimeofday(); #measure my execution time
print STDERR "Loaded config settings from '${\(__FILE__)}'.\n";
1; #last value must be truthful to indicate successful load
#############################################################################################
#junk/experiment:
#use Package::Constants;
#use Exporter qw(import); #https://perldoc.perl.org/Exporter.html
#my $caller = "pdf2gerb::";
#sub cfg
#{
# my $proto = shift;
# my $class = ref($proto) || $proto;
# my $settings =
# {
# $WANT_DEBUG => 990, #10; #level of debug wanted; higher == more, lower == less, 0 == none
# };
# bless($settings, $class);
# return $settings;
#}
#use constant HELLO => "hi there2"; #"main::HELLO" => "hi there";
#use constant GOODBYE => 14; #"main::GOODBYE" => 12;
#print STDERR "read cfg file\n";
#our @EXPORT_OK = Package::Constants->list(__PACKAGE__); #https://www.perlmonks.org/?node_id=1072691; NOTE: "_OK" skips short/common names
#print STDERR scalar(@EXPORT_OK) . " consts exported:\n";
#foreach(@EXPORT_OK) { print STDERR "$_\n"; }
#my $val = main::thing("xyz");
#print STDERR "caller gave me $val\n";
#foreach my $arg (@ARGV) { print STDERR "arg $arg\n"; }
Author: swannman
Source Code: https://github.com/swannman/pdf2gerb
License: GPL-3.0 license
1618317562
View more: https://www.inexture.com/services/deep-learning-development/
At Inexture, we take a strategic approach to every project we are associated with. We offer a robust set of AI, ML, and DL consulting services. Our team of data scientists and developers works meticulously on each project and adds a personalized touch, keeping clients informed about everything related to their project so that full transparency is maintained. Leverage our services for end-to-end support on your next AI project.
#deep learning development #deep learning framework #deep learning expert #deep learning ai #deep learning services
1599215220
A major challenge with deep learning models is that they need a lot of data for training. The two most common problems when training deep learning models are overfitting and underfitting. Data augmentation helps address these problems: it is a regularization technique that makes slight modifications to existing images and uses them to generate additional training data.
In this article, we will demonstrate why data augmentation is considered a regularization technique, how to apply it to a model, and whether it is a preprocessing or a post-processing technique. All of these questions are answered in the demonstration below.
Topics that we will demonstrate in this article:
Regularization is a technique used to reduce overfitting in a model. When working with deep learning models, learning the training data too closely is bad for making predictions on unseen data. If we get good results on the training data but poor results on unseen data (test data, validation data), the model is overfitting. To counter this, data augmentation applies a few transformations to the data, such as flipping, cropping, and adding noise.
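To make this concrete, below is a minimal sketch (not taken from this article) of augmentation acting as a regularizer during training: a small illustrative CNN is trained on MNIST while an ImageDataGenerator applies random rotations, shifts, and zooms to every batch. The model architecture and parameter values are assumptions chosen purely for demonstration.
# Sketch: on-the-fly augmentation as a regularizer (illustrative model and settings)
from tensorflow.keras.datasets import mnist
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense
from tensorflow.keras.preprocessing.image import ImageDataGenerator
# load the data and reshape it to (samples, height, width, channels)
(X_train, y_train), (X_test, y_test) = mnist.load_data()
X_train = X_train.reshape((-1, 28, 28, 1)).astype("float32") / 255.0
X_test = X_test.reshape((-1, 28, 28, 1)).astype("float32") / 255.0
# random transformations applied to each batch as it is requested
datagen = ImageDataGenerator(rotation_range=10, width_shift_range=0.1, height_shift_range=0.1, zoom_range=0.1)
# a tiny CNN, just enough to show the training loop
model = Sequential([
    Conv2D(16, (3, 3), activation="relu", input_shape=(28, 28, 1)),
    MaxPooling2D((2, 2)),
    Flatten(),
    Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# the generator yields freshly augmented batches, so the model rarely sees the
# exact same image twice -- this is the regularizing effect described above
model.fit(datagen.flow(X_train, y_train, batch_size=64), epochs=5, validation_data=(X_test, y_test))
Note that recent versions of Keras accept the generator directly in model.fit().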
As you know, deep learning models are data hungry; if we lack data, we can generate more by applying augmentation transformations to the images we already have. Data augmentation is a preprocessing technique because we only work on the data used to train the model. With this technique, we generate new instances of images by cropping, flipping, zooming, and shearing an original image. So whenever the training image dataset is too small, augmentation can create thousands of additional images to train the model properly.
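As a second sketch (again an assumed setup rather than code from this article), the snippet below uses the same API to write augmented copies of images to disk so the enlarged dataset can be inspected or reused later. The augmented/ output folder, file prefix, and batch counts are hypothetical choices.
# Sketch: saving augmented images to disk to enlarge a small dataset
import os
from tensorflow.keras.datasets import mnist
from tensorflow.keras.preprocessing.image import ImageDataGenerator
(X_train, y_train), _ = mnist.load_data()
X_train = X_train.reshape((-1, 28, 28, 1)).astype("float32")
# zoom, shear, and shifts roughly match the transformations mentioned above;
# horizontal flips are left off because they would change the identity of a digit
datagen = ImageDataGenerator(zoom_range=0.2, shear_range=0.2, width_shift_range=0.1, height_shift_range=0.1)
os.makedirs("augmented", exist_ok=True)  # hypothetical output folder
flow = datagen.flow(X_train, y_train, batch_size=9,
                    save_to_dir="augmented", save_prefix="aug", save_format="png")
# each call to next() produces one augmented batch and saves its images as PNG files
for _ in range(5):  # 5 batches of 9 images = 45 new files
    next(flow)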
#developers corner #computer vision #data augmentation #deep learning #image augmentation #image data augmentation #image processing #overfitting