Nina Diana

Using Keras & Tensorflow with AMD GPU

AMD is developing a new HPC platform called ROCm. Its ambition is to create a common, open-source environment capable of interfacing with both Nvidia (via CUDA) and AMD GPUs.

This tutorial will explain how to set up a neural network environment using AMD GPUs, in either single- or multi-GPU configurations.

On the software side, we will run Tensorflow v1.12.0 as a backend to Keras, on top of the ROCm kernel, using Docker.


Installing and deploying ROCm requires particular hardware/software configurations.

Hardware requirements

The official documentation (ROCm v2.1) suggests the following hardware solutions.

Supported CPUs

Current CPUs which support PCIe Gen3 + PCIe Atomics are:

  • AMD Ryzen CPUs
  • The CPUs in AMD Ryzen APUs
  • AMD Ryzen Threadripper CPUs
  • AMD EPYC CPUs
  • Intel Xeon E7 v3 or newer CPUs
  • Intel Xeon E5 v3 or newer CPUs
  • Intel Xeon E3 v3 or newer CPUs
  • Intel Core i7 v4 (i7-4xxx), Core i5 v4 (i5-4xxx), Core i3 v4 (i3-4xxx) or newer CPUs (i.e. the Haswell family or newer)
  • Some Ivy Bridge-E systems

Supported GPUs

ROCm officially supports AMD GPUs that use the following chips:

  • GFX8 GPUs
      • “Fiji” chips, such as on the AMD Radeon R9 Fury X and Radeon Instinct MI8
      • “Polaris 10” chips, such as on the AMD Radeon RX 480/580 and Radeon Instinct MI6
      • “Polaris 11” chips, such as on the AMD Radeon RX 470/570 and Radeon Pro WX 4100
      • “Polaris 12” chips, such as on the AMD Radeon RX 550 and Radeon RX 540
  • GFX9 GPUs
      • “Vega 10” chips, such as on the AMD Radeon RX Vega 64 and Radeon Instinct MI25
      • “Vega 7nm” chips (Radeon Instinct MI50, Radeon VII)

Software requirements

On the software side, the current version of ROCm (v2.1) is supported only on Linux-based systems.

The ROCm 2.1.x platform supports the following operating systems:

  • Ubuntu 16.04.x and 18.04.x (version 16.04.3 and newer, or kernels 4.13 and newer)
  • CentOS 7.4, 7.5, and 7.6 (using devtoolset-7 runtime support)
  • RHEL 7.4, 7.5, and 7.6 (using devtoolset-7 runtime support)

Testing setup

The following hardware/software configuration has been used by the author to test and validate the environment:

HARDWARE

  • CPU: Intel Xeon E5-2630L
  • RAM: 2 x 8 GB
  • Motherboard: MSI X99A Krait Edition
  • GPU: 2 x RX480 8GB + 1 x RX580 4GB
  • SSD: Samsung 850 Evo (256 GB)
  • HDD: WDC 1TB

SOFTWARE

  • OS: Ubuntu 18.04 LTS

ROCm installation

To get everything working properly, it is recommended to start the installation process from a freshly installed operating system. The following steps refer to Ubuntu 18.04 LTS; for other operating systems, please refer to the official documentation.

The first step is to install the ROCm kernel and its dependencies:

Update your system

Open a new terminal (CTRL + ALT + T) and run:

sudo apt update
sudo apt dist-upgrade
sudo apt install libnuma-dev
sudo reboot

Add the ROCm apt repository

To download and install the ROCm stack, the related repositories must first be added:

wget -qO - http://repo.radeon.com/rocm/apt/debian/rocm.gpg.key | sudo apt-key add -
echo 'deb [arch=amd64] http://repo.radeon.com/rocm/apt/debian/ xenial main' | sudo tee /etc/apt/sources.list.d/rocm.list

Install ROCm

It is now required to update the apt repository list and install the rocm-dkms meta-package:

sudo apt update
sudo apt install rocm-dkms

Set permissions

The official documentation suggests using a video group to grant the current user access to GPU resources.

First, check the groups in your system by issuing:

groups

Then add yourself to the video group:

sudo usermod -a -G video $LOGNAME

You may want to ensure that any future users you add to your system are put into the “video” group by default. To do that, you can run the following commands:

echo 'ADD_EXTRA_GROUPS=1' | sudo tee -a /etc/adduser.conf
echo 'EXTRA_GROUPS=video' | sudo tee -a /etc/adduser.conf

Then reboot the system:

reboot

Testing ROCm stack

It is now suggested to test the ROCm installation by issuing the following commands.

Open a new terminal (CTRL + ALT + T) and issue:

/opt/rocm/bin/rocminfo

The output should list the HSA agents detected on your system (CPUs and supported GPUs).

Then double-check issuing:

/opt/rocm/opencl/bin/x86_64/clinfo

The output should enumerate the available OpenCL platforms and devices.

Finally, the official documentation suggests adding the ROCm binaries to PATH:

echo 'export PATH=$PATH:/opt/rocm/bin:/opt/rocm/profiler/bin:/opt/rocm/opencl/bin/x86_64' | sudo tee -a /etc/profile.d/rocm.sh
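
The new PATH entries take effect at the next login. To apply them to the current shell session immediately, you can source the file:

source /etc/profile.d/rocm.sh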

Congratulations! ROCm is now properly installed on your system, and the command:

rocm-smi

should display your hardware information and stats:

[Image: rocm-smi command output]

Tensorflow Docker

The fastest and most reliable way to get the ROCm + Tensorflow backend working is to use the Docker image provided by AMD developers.

Install Docker CE

First, Docker must be installed. To do that, please follow the official instructions for Ubuntu systems:

Get Docker Engine

Tip: to avoid typing sudo docker <command> instead of docker <command>, it’s useful to grant Docker access to non-root users: see Manage Docker as a non-root user.
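
In short, the post-install steps described in that guide boil down to creating a docker group and adding your user to it (log out and back in for the change to take effect):

sudo groupadd docker
sudo usermod -aG docker $USER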

Pull ROCm Tensorflow image

It’s now time to pull the Tensorflow Docker image provided by AMD developers.

Open a new terminal (CTRL + ALT + T) and issue:

docker pull rocm/tensorflow

After a few minutes, the image will be downloaded to your system, ready to use.

Create a persistent space

Because of the ephemeral nature of Docker containers, once a Docker session is closed, all modifications and stored files are deleted with the container.

For this reason, it is useful to create a persistent space on the physical drive for storing files and Jupyter notebooks. The simplest method is to create a folder to mount into the Docker container. To do that, issue the command:

mkdir /home/$LOGNAME/tf_docker_share

This command will create a folder named tf_docker_share, useful for storing and reviewing data created within the Docker container.

Starting Docker

Now, run the image in a new container session with the following command:

docker run -i -t \
--network=host \
--device=/dev/kfd \
--device=/dev/dri \
--group-add video \
--cap-add=SYS_PTRACE \
--security-opt seccomp=unconfined \
--workdir=/tf_docker_share \
-v $HOME/tf_docker_share:/tf_docker_share rocm/tensorflow:latest /bin/bash

The container starts with /tf_docker_share as its working directory; the --device flags expose the ROCm kernel driver (/dev/kfd) and the DRI devices to the container, while -v bind-mounts the shared folder. You should see something similar to:


It means that you are now operating inside the Tensorflow-ROCm virtual system.

Installing Jupyter

Jupyter is a very useful tool for developing, debugging, and testing neural networks. Unfortunately, it is not installed by default on the Tensorflow-ROCm Docker image published by the ROCm team, so it must be installed manually.

To do that, from the Tensorflow-ROCm virtual system prompt:

1. Issue the following command:

pip3 install jupyter

It will install the Jupyter package into the virtual system. Leave this terminal open.

2. Open a new terminal (CTRL + ALT + T) and find the CONTAINER ID by issuing:

docker ps

A table, similar to the following should appear:

[Image: docker ps output; the Container ID is in the left column]

The first column holds the Container ID of the running container. Copy it, as it is needed in the next step.

3. It’s time to commit, permanently writing the modifications into a new image. From the same terminal, execute:

docker commit <container-id> rocm/tensorflow:<tag>

where the tag value is an arbitrary name, for example personal.
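
For example, assuming docker ps reported the (hypothetical) container ID d1e2f3a4b5c6, the command would be:

docker commit d1e2f3a4b5c6 rocm/tensorflow:personal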

4. To double-check that the image has been generated correctly, issue from the same terminal:

docker images

which should generate a table similar to the following:


It’s important to note that we will refer to this newly generated image for the rest of the tutorial.

The new docker run command to use will look like:

docker run -i -t \
--network=host \
--device=/dev/kfd \
--device=/dev/dri \
--group-add video \
--cap-add=SYS_PTRACE \
--security-opt seccomp=unconfined \
--workdir=/tf_docker_share \
-v $HOME/tf_docker_share:/tf_docker_share rocm/tensorflow:<tag> /bin/bash

where, once again, the tag value is arbitrary, for example personal.

Entering Jupyter notebook environment

We can finally enter the Jupyter environment. Inside it, we will create our first neural network, using Tensorflow v1.12 as the backend and Keras as the frontend.

Cleaning

First, close all previously running Docker containers.

  1. Check the currently running containers:
docker ps

2. Stop all running Docker containers (see the shortcut after this list):

docker container stop <container-id1> <container-id2> ... <container-idn>

3. Close all open terminals.
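
If you simply want to stop every running container at once, a common shell shortcut (assuming there are no other containers you wish to keep running) is:

docker container stop $(docker ps -q)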

Executing Jupyter

Let’s open a new terminal (CTRL + ALT + T):

  1. Run a new Docker container (the personal tag is used here):
docker run -i -t \
--network=host \
--device=/dev/kfd \
--device=/dev/dri \
--group-add video \
--cap-add=SYS_PTRACE \
--security-opt seccomp=unconfined \
--workdir=/tf_docker_share \
-v $HOME/tf_docker_share:/tf_docker_share rocm/tensorflow:personal /bin/bash

You should be logged into the Tensorflow-ROCm Docker container prompt.

[Image: logged into the Docker container prompt]

2. Execute the Jupyter notebook:

jupyter notebook --allow-root --port=8889

A new browser window should appear, similar to the following:

[Image: Jupyter root directory]

If the new tab does not appear automatically in the browser, go back to the terminal where the jupyter notebook command was executed. At the bottom, there is a link to follow (CTRL + left-click on it); a new tab in your browser will then redirect you to the Jupyter root directory.

[Image: typical jupyter notebook output; the link to follow is at the bottom]

Train a neural network with Keras

In this last section of the tutorial, we will train a simple neural network on the MNIST dataset, starting with a fully connected network.

Fully connected neural network

Let’s create a new notebook by selecting Python3 from the upper-right menu in the Jupyter root directory.

[Image: the upper-right menu in the Jupyter explorer]

A new Jupyter notebook should pop up in a new browser tab. Rename it to fc_network by clicking Untitled in the upper-left corner of the window.

[Image: notebook renaming]

Let’s check the Tensorflow backend. In the first cell, insert:

import tensorflow as tf; print(tf.__version__)

then press SHIFT + ENTER to execute. The output should look like:

[Image: output showing Tensorflow v1.12.0]

We are using Tensorflow v1.12.0.
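
As an optional check (not part of the original walkthrough), you can also confirm that TensorFlow sees the GPUs:

from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())

Each ROCm GPU should appear as a GPU device in the printed list.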

Let’s import some useful functions for later use:

from tensorflow.keras.datasets import mnist
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout
from tensorflow.keras.optimizers import RMSprop
from tensorflow.keras.utils import to_categorical

Let’s set the batch size, the number of epochs, and the number of classes.

batch_size = 128
num_classes = 10
epochs = 10

We will now download and preprocess the inputs, loading them into system memory.

# the data, split between train and test sets
(x_train, y_train), (x_test, y_test) = mnist.load_data()
# flatten each 28x28 image into a 784-element vector
x_train = x_train.reshape(60000, 784)
x_test = x_test.reshape(10000, 784)
# scale pixel intensities from [0, 255] to [0, 1]
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
# convert class vectors to binary class matrices (one-hot encoding)
y_train = to_categorical(y_train, num_classes)
y_test = to_categorical(y_test, num_classes)

It’s time to define the neural network architecture:

model = Sequential()
model.add(Dense(512, activation='relu', input_shape=(784,)))
model.add(Dropout(0.2))
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(num_classes, activation='softmax'))

We will use a very simple fully connected network with two hidden layers of 512 neurons each. A 20% dropout probability is also applied after each hidden layer, in order to prevent overfitting.

Let’s print a summary of the network architecture:

model.summary()

[Image: model.summary() output showing the network architecture]

Despite the simplicity of the problem, we have a considerable number of parameters to train (almost 700,000), which also means considerable computational cost. Convolutional neural networks address this issue by reducing computational complexity.
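
For reference, the count follows directly from the layer sizes: (784×512 + 512) + (512×512 + 512) + (512×10 + 10) = 401,920 + 262,656 + 5,130 = 669,706 trainable parameters.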

Now, compile the model:

model.compile(loss='categorical_crossentropy',
              optimizer=RMSprop(),
              metrics=['accuracy'])

and start training:

history = model.fit(x_train, y_train,
                    batch_size=batch_size,
                    epochs=epochs,
                    verbose=1,
                    validation_data=(x_test, y_test))

[Image: training process output]

The neural network has been trained on a single RX 480 at a respectable 47 µs/step. For comparison, an Nvidia Tesla K80 reaches 43 µs/step but is roughly 10x more expensive.
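
Once training is finished, a standard follow-up step (not shown in the original run) is to evaluate the model on the held-out test set:

score = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])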

Multi-GPU Training

As an additional step, if your system has multiple GPUs, it is possible to leverage Keras’ capabilities to reduce training time by splitting each batch among the GPUs.

To do that, it is first required to specify the number of GPUs to use for training by declaring an environment variable (put the following command in a single cell and execute it):

!export HIP_VISIBLE_DEVICES=0,1,...

The numbers from 0 onward define which GPUs are used for training. In case you want to disable GPU acceleration, simply issue:

!export HIP_VISIBLE_DEVICES=-1
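
Note that !export runs in a temporary subshell, so the variable does not necessarily persist into the notebook kernel. A more reliable alternative, sketched here under the assumption that it is executed before TensorFlow initializes any devices, is to set the variable from Python:

import os
os.environ['HIP_VISIBLE_DEVICES'] = '0,1'  # expose only the first two GPUs to the ROCm runtime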

It is also necessary to wrap the model with the multi_gpu_model function.

As an example, if you have 3 GPUs, the previous code would be adapted roughly as sketched below.
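
The following is a minimal sketch (assuming 3 visible GPUs and the model, data, and hyperparameters defined earlier) using the multi_gpu_model utility available in Tensorflow v1.12:

from tensorflow.keras.utils import multi_gpu_model

# replicate the model across 3 GPUs; each batch of 128 samples
# is split into sub-batches, one per GPU
parallel_model = multi_gpu_model(model, gpus=3)
parallel_model.compile(loss='categorical_crossentropy',
                       optimizer=RMSprop(),
                       metrics=['accuracy'])

history = parallel_model.fit(x_train, y_train,
                             batch_size=batch_size,
                             epochs=epochs,
                             verbose=1,
                             validation_data=(x_test, y_test))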

Conclusions

That concludes this tutorial. The next step will be to test a convolutional neural network on the MNIST dataset, comparing performance in both single- and multi-GPU configurations.

What’s relevant here is that AMD GPUs perform quite well under computational load, at a fraction of the price. The GPU market is changing rapidly, and ROCm has given researchers, engineers, and startups powerful open-source tools to adopt, lowering the upfront cost of hardware equipment.

#tensorflow #keras
