Michio JP

JavaScript for Machine Learning using TensorFlow

Although Python and R have relatively easy learning curves, web developers are often happiest doing everything within their comfort zone of JavaScript. Following the trend that Node.js started, of applying JavaScript to every field, I decided to explore the concepts of machine learning using JS. Python became popular because of the abundance of packages available, but the JS community is not far behind. In this article, we will build a simple regression model in a beginner-friendly way.

What you will build

You will make a webpage that uses TensorFlow.js to train a model in the browser. Given “AvgAreaNumberofRooms” for a house, the model will learn to predict the “price” of the house.

To do this you will:

  • Load the data and prepare it for training.
  • Define the architecture of the model.
  • Train the model and monitor its performance as it trains.
  • Evaluate the trained model by making some predictions.

Step 1: Let us start with the basics.

Create an HTML page and include the JavaScript. Copy the following code into an HTML file called index.html.

<!DOCTYPE html>
<html>
<head>
  <title>TensorFlow.js Tutorial</title>
  <!-- Import TensorFlow.js -->
  <script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs@1.0.0/dist/tf.min.js"></script>
  <!-- Import tfjs-vis -->
  <script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs-vis@1.0.2/dist/tfjs-vis.umd.min.js"></script>
  <!-- Import the main script file -->
  <script src="script.js"></script>
</head>
<body>
</body>
</html>

Create the JavaScript file for the code

In the same folder as the HTML file above, create a file called script.js and put the following code in it.

console.log('Hello TensorFlow');

Test it out

Now that you’ve got the HTML and JavaScript files created, test them out. Open up the index.html file in your browser and open up the devtools console.

If everything is working, there should be two global variables created and available in the devtools console:

  • tf is a reference to the TensorFlow.js library
  • tfvis is a reference to the tfjs-vis library

You should see a message that says Hello TensorFlow. If so, you are ready to move on to the next step.

An output like this is expected.
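
If you prefer to check from code, a quick sanity check (a minimal sketch) you can run in script.js or in the console:

console.log(typeof tf);     // should print "object"
console.log(typeof tfvis);  // should print "object"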

Step 2: Load the data, format the data and visualize the input data.

We will load the “house” dataset, which we fetch as JSON from GitHub (the URL appears in the code below). It contains many different features for a given house. For this tutorial, we only want data about the average area number of rooms and the price of each house.

Add the following code to your script.js file.

async function getData() {
  const houseDataReq = await fetch('https://raw.githubusercontent.com/meetnandu05/ml1/master/house.json');  
  const houseData = await houseDataReq.json();  
  const cleaned = houseData.map(house => ({
    price: house.Price,
    rooms: house.AvgAreaNumberofRooms,
  }))
  .filter(house => (house.price != null && house.rooms != null));

  return cleaned;
}

This will also remove any entries that do not have either price or number of rooms defined. Let’s also plot this data in a scatterplot to see what it looks like.

Add the following code to the bottom of your script.js file.

async function run() {
  // Load and plot the original input data that we are going to train on.
  const data = await getData();
  const values = data.map(d => ({
    x: d.rooms,
    y: d.price,
  }));
  tfvis.render.scatterplot(
    {name: 'No. of rooms vs Price'},
    {values}, 
    {
      xLabel: 'No. of rooms',
      yLabel: 'Price',
      height: 300
    }
  );
  // More code will be added below
}
document.addEventListener('DOMContentLoaded', run);

When you refresh the page, you should see a panel on the left-hand side of the page with a scatterplot of the data. It should look something like this.

This is what the scatter plot will look like.

Generally, when working with data it is a good idea to find ways to take a look at your data and clean it if necessary. Visualizing the data can give us a sense of whether there is any structure to the data that the model can learn.

We can see from the plot above that there is a positive correlation between no. of rooms and price, i.e. as no. of rooms increases, prices of houses generally increase.

Step 3: Build the model to be trained.

Here, we will write the code to build our machine learning model. The model will be trained based on this code, so this is an important step. Machine learning models take inputs and produce outputs. In the case of TensorFlow.js, we do this by building a neural network.

Add the following function to your script.js file to define the model.

function createModel() {
  // Create a sequential model
  const model = tf.sequential(); 

  // Add a single hidden layer
  model.add(tf.layers.dense({inputShape: [1], units: 1, useBias: true}));

  // Add an output layer
  model.add(tf.layers.dense({units: 1, useBias: true}));
  return model;
}

This is one of the simplest models we can define in TensorFlow.js. Let us break down each line a bit.

Instantiate the model

const model = tf.sequential();

This instantiates a tf.Model object. This model is sequential because its inputs flow straight down to its output. Other kinds of models can have branches, or even multiple inputs and outputs, but in many cases your models will be sequential.
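
For contrast, here is a minimal sketch of a non-sequential model built with the functional-style API (tf.input / tf.model). We do not need this for the tutorial; it just illustrates what “sequential” rules out:

// A graph-style model: inputs and outputs are wired explicitly.
const input = tf.input({shape: [1]});
const output = tf.layers.dense({units: 1}).apply(input);
const graphModel = tf.model({inputs: input, outputs: output});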

Add layers

model.add(tf.layers.dense({inputShape: [1], units: 1, useBias: true}));

This adds a hidden layer to our network. As this is the first layer of the network, we need to define our inputShape. The inputShape is [1] because we have 1 number as our input (the number of rooms of a given house).

units sets how big the weight matrix will be in the layer. By setting it to 1 here we are saying there will be 1 weight for each of the input features of the data.

model.add(tf.layers.dense({units: 1, useBias: true}));

The code above creates our output layer. We set units to 1 because we want to output 1 number.
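
A quick way to verify these sizes, assuming the createModel function above, is to inspect the layer weights; a dense layer’s kernel has shape [inputSize, units]:

const m = createModel();
console.log(m.layers[0].getWeights()[0].shape); // [1, 1] — hidden layer kernel
console.log(m.layers[1].getWeights()[0].shape); // [1, 1] — output layer kernel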

Create an instance

Add the following code to the run function we defined earlier.

// Create the model
const model = createModel();  
tfvis.show.modelSummary({name: 'Model Summary'}, model);

This will create an instance of the model and show a summary of the layers on the webpage.

Step 4: Prepare the data for training

To get the performance benefits of TensorFlow.js that make training machine learning models practical, we need to convert our data to tensors.

Add the following code to your script.js file

function convertToTensor(data) {

  // tf.tidy runs this callback and then disposes any intermediate tensors
  // created inside it (except those we return), preventing memory leaks.
  return tf.tidy(() => {
    // Step 1\. Shuffle the data    
    tf.util.shuffle(data);
    // Step 2\. Convert data to Tensor
    const inputs = data.map(d => d.rooms)
    const labels = data.map(d => d.price);
    const inputTensor = tf.tensor2d(inputs, [inputs.length, 1]);
    const labelTensor = tf.tensor2d(labels, [labels.length, 1]);
    //Step 3\. Normalize the data to the range 0 - 1 using min-max scaling
    const inputMax = inputTensor.max();
    const inputMin = inputTensor.min();  
    const labelMax = labelTensor.max();
    const labelMin = labelTensor.min();
    const normalizedInputs = inputTensor.sub(inputMin).div(inputMax.sub(inputMin));
    const normalizedLabels = labelTensor.sub(labelMin).div(labelMax.sub(labelMin));
    return {
      inputs: normalizedInputs,
      labels: normalizedLabels,
      // Return the min/max bounds so we can use them later.
      inputMax,
      inputMin,
      labelMax,
      labelMin,
    }
  });  
}

Let’s break down what’s going on here.

Shuffle the data

// Step 1\. Shuffle the data    
tf.util.shuffle(data);

During training, the dataset is divided into smaller subsets called batches, which are fed to the model one at a time. Shuffling matters because the model should not see the data in the same fixed order on every pass; if it does, it can pick up patterns that depend on that ordering rather than on the data itself, and it will generalize poorly. Shuffling helps by giving each batch a representative variety of data.
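
Note that tf.util.shuffle shuffles the array in place using a Fisher–Yates shuffle; a tiny sketch:

const sample = [1, 2, 3, 4, 5];
tf.util.shuffle(sample);
console.log(sample); // e.g. [3, 1, 5, 2, 4] — order varies on each run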

Convert to tensors

// Step 2\. Convert data to Tensor
const inputs = data.map(d => d.rooms)
const labels = data.map(d => d.price);

const inputTensor = tf.tensor2d(inputs, [inputs.length, 1]);
const labelTensor = tf.tensor2d(labels, [labels.length, 1]);

Here we make two arrays, one for our input examples (the number-of-rooms entries) and another for the true output values (known as labels in machine learning; in our case, the price of each house). We then convert each array into a 2D tensor.

Normalize the data

//Step 3\. Normalize the data to the range 0 - 1 using min-max scaling
const inputMax = inputTensor.max();
const inputMin = inputTensor.min();  
const labelMax = labelTensor.max();
const labelMin = labelTensor.min();
const normalizedInputs = inputTensor.sub(inputMin).div(inputMax.sub(inputMin));
const normalizedLabels = labelTensor.sub(labelMin).div(labelMax.sub(labelMin));

Next, we normalize the data into the numerical range 0-1 using min-max scaling. Normalization is important because the internals of many machine learning models you will build with TensorFlow.js are designed to work with numbers that are not too big. Common ranges to normalize data to include 0 to 1 or -1 to 1.
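
As a small worked example of the same min-max scaling (the values here are made up for illustration):

const t = tf.tensor2d([2, 4, 6, 10], [4, 1]);
const tMin = t.min();  // 2
const tMax = t.max();  // 10
t.sub(tMin).div(tMax.sub(tMin)).print(); // [[0], [0.25], [0.5], [1]]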

Return the data and the normalization bounds

return {
  inputs: normalizedInputs,
  labels: normalizedLabels,
  // Return the min/max bounds so we can use them later.
  inputMax,
  inputMin,
  labelMax,
  labelMin,
}

We want to keep the values we used for normalization during training so that we can un-normalize the outputs to get them back into our original scale and to allow us to normalize future input data the same way.
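
Un-normalizing is just the inverse arithmetic, x = xNorm * (max - min) + min; a minimal sketch with the same illustrative values:

const min = tf.scalar(2);
const max = tf.scalar(10);
const xNorm = tf.tensor2d([0, 0.25, 0.5, 1], [4, 1]);
xNorm.mul(max.sub(min)).add(min).print(); // [[2], [4], [6], [10]]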

Step 5: Train the model

With our model instance created and our data represented as tensors we have everything in place to start the training process.

Copy the following function into your script.js file.

async function trainModel(model, inputs, labels) {
  // Prepare the model for training.  
  model.compile({
    optimizer: tf.train.adam(),
    loss: tf.losses.meanSquaredError,
    metrics: ['mse'],
  });

  const batchSize = 28;
  const epochs = 50;

  return await model.fit(inputs, labels, {
    batchSize,
    epochs,
    shuffle: true,
    callbacks: tfvis.show.fitCallbacks(
      { name: 'Training Performance' },
      ['loss', 'mse'], 
      { height: 200, callbacks: ['onEpochEnd'] }
    )
  });
}

Let’s break this down.

Prepare for training

// Prepare the model for training.  
model.compile({
  optimizer: tf.train.adam(),
  loss: tf.losses.meanSquaredError,
  metrics: ['mse'],
});

We have to ‘compile’ the model before we train it. To do so, we have to specify a few very important things:

  • optimizer: This is the algorithm that is going to govern the updates to the model as it sees examples. There are many optimizers available in TensorFlow.js. Here we have picked the adam optimizer as it is quite effective in practice and requires no configuration.
  • loss: this is a function that will tell the model how well it is doing on learning each of the batches (data subsets) that it is shown. Here we use meanSquaredError to compare the predictions made by the model with the true values.
  • metrics: This is an array of metrics we want to calculate at the end of each epoch. Often we want to track error on the whole training set so that we can monitor how well we are doing. Here we use mse, which is shorthand for meanSquaredError. This is the same function we use for the loss and is a common choice for regression tasks.

const batchSize = 28;
const epochs = 50;

Next, we pick a batchSize and a number of epochs:

  • batchSize refers to the size of the data subsets that the model will see on each iteration of training. Common batch sizes tend to be in the range 32-512. There isn’t really an ideal batch size for all problems and it is beyond the scope of this tutorial to describe the mathematical motivations for various batch sizes.
  • epochs refers to the number of times the model is going to look at the entire dataset that you provide it. Here we will take 50 iterations through the dataset.
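
Together, batchSize and epochs determine how many weight updates the optimizer performs; a back-of-the-envelope sketch (the dataset size here is hypothetical):

const numExamples = 5000;                        // hypothetical dataset size
const updatesPerEpoch = Math.ceil(numExamples / 28);
console.log(updatesPerEpoch);        // 179 weight updates per epoch
console.log(updatesPerEpoch * 50);   // 8950 updates over 50 epochs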

Start the train loop

return model.fit(inputs, labels, {
  batchSize,
  epochs,
  shuffle: true,
  callbacks: tfvis.show.fitCallbacks(
    { name: 'Training Performance' },
    ['loss', 'mse'], 
    { 
      height: 200, 
      callbacks: ['onEpochEnd']
    }
  )
});

model.fit is the function we call to start the training loop. It is an asynchronous function so we return the promise it gives us so that the caller can determine when training is complete.

To monitor training progress we pass some callbacks to model.fit. We use tfvis.show.fitCallbacks to generate functions that plot charts for the ‘loss’ and ‘mse’ metric we specified earlier.
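
tfjs-vis is optional here; model.fit also accepts plain callback functions, so a sketch of console-only monitoring (swapped in for the fitCallbacks call inside trainModel) would be:

return await model.fit(inputs, labels, {
  batchSize: 28,
  epochs: 50,
  shuffle: true,
  callbacks: {
    onEpochEnd: (epoch, logs) => {
      console.log(`Epoch ${epoch + 1}: loss = ${logs.loss.toFixed(5)}`);
    }
  }
});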

Put it all together

Now we have to call the functions we have defined from our run function.

Add the following code to the bottom of your run function.

// Convert the data to a form we can use for training.
const tensorData = convertToTensor(data);
const {inputs, labels} = tensorData;

// Train the model  
await trainModel(model, inputs, labels);
console.log('Done Training');

When you refresh the page, after a few seconds you should see the following graphs updating.

These charts are created by the callbacks we set up earlier. They display the loss (on the most recent batch) and mse (on the whole dataset) at the end of each epoch.

When training a model we want to see the loss go down. In this case, because our metric is a measure of error, we want to see it go down as well.

Step 6: Make Predictions

Now that our model is trained, we want to make some predictions. Let’s evaluate the model by seeing what it predicts for a uniform range of room counts, from low to high.

Add the following function to your script.js file

function testModel(model, inputData, normalizationData) {
  const {inputMax, inputMin, labelMin, labelMax} = normalizationData;  

  // Generate predictions for a uniform range of numbers between 0 and 1;
  // We un-normalize the data by doing the inverse of the min-max scaling 
  // that we did earlier.
  const [xs, preds] = tf.tidy(() => {

    const xs = tf.linspace(0, 1, 100);      
    const preds = model.predict(xs.reshape([100, 1]));      

    const unNormXs = xs
      .mul(inputMax.sub(inputMin))
      .add(inputMin);

    const unNormPreds = preds
      .mul(labelMax.sub(labelMin))
      .add(labelMin);

    // Un-normalize the data
    return [unNormXs.dataSync(), unNormPreds.dataSync()];
  });

  const predictedPoints = Array.from(xs).map((val, i) => {
    return {x: val, y: preds[i]}
  });

  const originalPoints = inputData.map(d => ({
    x: d.rooms, y: d.price,
  }));

  tfvis.render.scatterplot(
    {name: 'Model Predictions vs Original Data'}, 
    {values: [originalPoints, predictedPoints], series: ['original', 'predicted']}, 
    {
      xLabel: 'No. of rooms',
      yLabel: 'Price',
      height: 300
    }
  );
}

A few things to notice in the function above.

const xs = tf.linspace(0, 1, 100);      
const preds = model.predict(xs.reshape([100, 1]));

We generate 100 new ‘examples’ to feed to the model. model.predict is how we feed those examples into the model. Note that they need to have the same shape ([num_examples, num_features_per_example]) as when we did training.
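
The same shape rule applies to a single input, so a one-off prediction (a sketch, using a normalized input value) looks like this:

const single = tf.tensor2d([[0.5]]); // shape [1, 1]: one example, one feature
const out = model.predict(single);
out.print();                         // one predicted (normalized) price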

// Un-normalize the data
const unNormXs = xs
  .mul(inputMax.sub(inputMin))
  .add(inputMin);

const unNormPreds = preds
  .mul(labelMax.sub(labelMin))
  .add(labelMin);

To get the data back to our original range (rather than 0–1) we use the values we calculated while normalizing, but just invert the operations.

return [unNormXs.dataSync(), unNormPreds.dataSync()];

.dataSync() is a method we can use to get a typedarray of the values stored in a tensor. This allows us to process those values in regular JavaScript. It is the synchronous counterpart of the .data() method; the asynchronous .data() is generally preferred because it does not block the main thread.
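
For comparison, the asynchronous variant (which must be awaited inside an async function) looks like this sketch:

const t = tf.tensor1d([1, 2, 3]);
const values = await t.data(); // resolves to a Float32Array without blocking the UI
console.log(values);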

Finally, we use tfjs-vis to plot the original data and the predictions from the model.

Add the following code to your run function.

testModel(model, data, tensorData);

Refresh the page and you should see something like the following once the model finishes training.

Congratulations! You have just trained a simple machine learning model using TensorFlow.js! Here is the GitHub repository for reference.

Conclusion

I started doing this because the concept of machine learning intrigued me, and I wanted to see whether it could be applied in front-end development. I was very happy to learn that the TensorFlow.js library could help me achieve my objective. This is just the beginning of machine learning in front-end development. There is a lot more that can be done, and much has already been done by the TensorFlow.js team. Thanks for reading!

#javascript #machinelearning #tensorflow
